
Scientific and Technical Journal of Information Technologies, Mechanics and Optics


Modelling of basic Indonesian Sign Language translator based on Raspberry Pi technology

https://doi.org/10.17586/2226-1494-2022-22-3-574-584

Abstract

Deaf people have hearing loss ranging from mild to profound and have difficulty processing linguistic information, with or without hearing aids. Deaf people who do not use hearing aids rely on sign language in everyday conversation. Hearing people, in turn, find it difficult to communicate with the deaf, because doing so requires knowing sign language. Indonesia has two sign languages: the Indonesian Sign Language System (SIBI) and Indonesian Sign Language (BISINDO). A model has been developed to assist communication between deaf and hearing people. The model was built using the one-handed SIBI method and refined with one-handed and two-handed BISINDO. Its main function is the recognition of basic letters, words, sentences, and digits using a Raspberry Pi single-board computer and a camera that detects sign language movements. The captured images are translated into text on a monitor screen by a dedicated program. The method combines image processing and machine learning, implemented in the Python programming language with a convolutional neural network. The device prototype warns the user to repeat the sign if translation fails and deletes a translation that does not match the database. Further research is required to make the prototype flexible enough to read dynamic movements and facial expressions and to translate words not included in the existing database. The database also needs to be extended beyond the SIBI sign language, for example to BISINDO or to sign languages of other regions and countries.
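The pipeline described above (camera capture on a Raspberry Pi, CNN classification in Python, text output with a warning to repeat the sign when translation fails) can be illustrated by a minimal sketch. The sketch below is not the authors' implementation: it assumes OpenCV and TensorFlow/Keras are installed on the Raspberry Pi, and the model file sibi_cnn.h5, the 64x64 grayscale input size, and the A-Z label list are hypothetical placeholders.

# Illustrative sketch only (not the authors' code): read frames from a camera,
# classify each frame with a pre-trained CNN, and print the recognized SIBI letter.
# The model file "sibi_cnn.h5", the 64x64 grayscale input, and the A-Z label list
# are assumptions made for this example.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]   # one class per static letter (assumed)
model = load_model("sibi_cnn.h5")                           # hypothetical pre-trained CNN

cap = cv2.VideoCapture(0)                                   # Raspberry Pi camera or USB webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Pre-processing: crop a fixed region of interest, convert to grayscale,
        # resize to the network input size, and normalize pixel values to [0, 1].
        roi = cv2.cvtColor(frame[100:300, 100:300], cv2.COLOR_BGR2GRAY)
        roi = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
        probs = model.predict(roi.reshape(1, 64, 64, 1), verbose=0)[0]
        idx = int(np.argmax(probs))
        # Accept the letter only when the CNN is confident; otherwise ask the signer
        # to repeat the gesture, as the prototype described in the abstract does.
        if probs[idx] > 0.8:
            print("Recognized:", LABELS[idx])
        else:
            print("Low confidence, please repeat the sign")
        cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
        cv2.imshow("SIBI translator sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):               # press q to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()

On real hardware a structure like this would typically use a converted TensorFlow Lite model instead of the full Keras runtime, which is heavy for a Raspberry Pi; this choice is an assumption, not a detail stated in the abstract.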

About the authors

U. Fadlilah
Universitas Muhammadiyah Surakarta; Universiti Tun Hussein Onn Malaysia
Indonesia

Fadlilah Umi — Master's degree, Lecturer

Surakarta, 57169

Batu Pahat, Johor, 86400

R.A.R. Prasetyo
Universitas Muhammadiyah Surakarta
Indonesia

Prasetyo Raden Adrian Rafli — Bachelor's degree, graduate

Surakarta, 57169

A.K. Mahamad
Universiti Tun Hussein Onn Malaysia
Malaysia

Mahamad Abd Kadir — D.Sc., Associate Professor

Batu Pahat, Johor, 86400

B. Handaga
Universitas Muhammadiyah Surakarta
Indonesia

Handaga Bana — PhD, Lecturer

Surakarta, 57169

Sh. Saon
Universiti Tun Hussein Onn Malaysia
Malaysia

Saon Sharifah — Master's degree, Senior Lecturer

Batu Pahat, Johor, 86400

E. Sudarmilah
Universitas Muhammadiyah Surakarta
Indonesia

Sudarmilah Endah — D.Sc., Researcher, Senior Lecturer

Surakarta, 57169






For citation:


Fadlilah U., Prasetyo R., Mahamad A., Handaga B., Saon Sh., Sudarmilah E. Modelling of basic Indonesian Sign Language translator based on Raspberry Pi technology. Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 2022;22(3):574-584. https://doi.org/10.17586/2226-1494-2022-22-3-574-584



Content is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2226-1494 (Print)
ISSN 2500-0373 (Online)