I am currently a student researcher at Maastro Clinic, working on medical imaging and applying quantitative deep learning methods in radiology and radiotherapy. I am pursuing my MSc in Artificial Intelligence at the Department of Data Science and Knowledge Engineering, Maastricht University, where my focus is on Deep Learning, Computer Vision, and Advanced Machine Learning.
Prior to this, I worked at BlinkIN @ Etrix Technologies as a Machine Learning Research and Development Engineer, focused on implementing Object Detection, Action Classification, and Scene Recognition on a real-time WebRTC video platform.
I’ve also worked with various clients in a freelancing capacity, providing vision-based solutions for emotion recognition, object recognition, facial feature extraction, face recognition, and other vision modules.
My undergraduate degree was in Electronics and Communications at the Manipal Institute of Technology. Through my coursework, I developed an interest in Neural Networks, HCI, and Artificial Intelligence, and subsequently worked on two publications in assistive technology.
@article{mehta2019sign,
title = {Automated 3D sign language caption generation for video},
author = {Nayan Mehta and Suraj Pai and Sanjay Singh},
doi = {10.1007/s10209-019-00668-9},
issn = {1615-5297},
year = {2019},
date = {2019-07-22},
journal = {Universal Access in the Information Society},
pages = {1-14},
abstract = {Efforts to make online media accessible to a regional audience have picked up pace in recent years with multilingual captioning and keyboards. However, techniques to extend this access to people with hearing loss are limited. Further, owing to a lack of structure in the education of the hearing impaired and to regional differences, the issue of standardization of Indian Sign Language (ISL) has been left unaddressed, forcing educators to rely on the local language to support the ISL structure, thereby creating an array of correlations for each object and hindering the language-building skills of a student. This paper presents a technology that can be used to leverage online resources and make them accessible to the hearing-impaired community in their primary mode of communication. Our tool presents an avenue for the early development of language learning and communication skills essential for the education of children with profound hearing loss. With the proposed technology, we aim to provide a standardized teaching and learning medium to a classroom setting that can utilize and promote ISL. Our proposed system aims to reduce the burden on teachers by acting as a valuable teaching aid. The system allows for easy translation of any online video and correlation with ISL captioning using a 3D cartoonish avatar, aimed at reinforcing classroom concepts during the critical period. First, the video is converted to text via subtitles and speech processing methods. The generated text is understood through NLP algorithms and then mapped to avatar captions, which are rendered to form a cohesive video alongside the original content. We validated our results through a 6-month study and a subsequent 2-month study, where we recorded a 37% and a 70% increase, respectively, in the performance of students taught using sign-captioned videos over students taught with English-captioned videos. We also recorded a 73.08% increase in vocabulary acquisition through sign-aided videos.},
pubstate = {published},
tppubtype = {article}
}
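For readers curious how such a system fits together, below is a minimal illustrative sketch in Python of the pipeline the abstract describes: timed subtitle text is mapped to ISL glosses, which are then timed as events for an avatar renderer. Every name here, including the toy word-to-gloss dictionary, is a hypothetical placeholder rather than the paper's actual implementation; the real NLP mapping and 3D avatar rendering are considerably more involved.

# Sketch of the three-stage pipeline described in the abstract:
# (1) timed text from subtitles/ASR, (2) text -> ISL glosses via NLP,
# (3) timed gloss events handed to a 3D avatar renderer.
# All names below are hypothetical placeholders, not the paper's code.

from dataclasses import dataclass


@dataclass
class TimedText:
    """One subtitle segment aligned to a span of the source video."""
    text: str
    start_s: float  # segment start time in seconds
    end_s: float    # segment end time in seconds


# Stage 2 stand-in: a toy word-to-gloss dictionary. The paper uses NLP
# algorithms to produce ISL-ordered glosses, which are far richer than this.
WORD_TO_GLOSS = {"the": None, "cat": "CAT", "sat": "SIT", "on": "ON", "mat": "MAT"}


def text_to_glosses(text: str) -> list[str]:
    """Map English words to ISL glosses, dropping words with no sign."""
    glosses = []
    for word in text.lower().split():
        gloss = WORD_TO_GLOSS.get(word.strip(".,!?"))
        if gloss is not None:
            glosses.append(gloss)
    return glosses


def caption_video(segments: list[TimedText]) -> list[tuple[str, float, float]]:
    """Turn timed subtitle segments into timed gloss events for the avatar,
    spreading each segment's glosses evenly across its time span."""
    events = []
    for seg in segments:
        glosses = text_to_glosses(seg.text)
        if not glosses:
            continue
        step = (seg.end_s - seg.start_s) / len(glosses)
        for i, gloss in enumerate(glosses):
            events.append((gloss, seg.start_s + i * step, seg.start_s + (i + 1) * step))
    return events


if __name__ == "__main__":
    segments = [TimedText("The cat sat on the mat.", 0.0, 3.0)]
    for gloss, start, end in caption_video(segments):
        print(f"{start:4.1f}s-{end:4.1f}s  {gloss}")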