
7COM1084 Specialism Research Report

Assignment Brief: (Word Count: 1400)

In order to conduct independent research in computer science you must fully understand the research area. You must have a grasp of the current open questions in this area, as well as any common techniques used to solve problems.

In this assignment you must provide an overview of your MSc Research specialism (AI / Networking / Cyber Security / Software Engineering / Data Science). Students without a specialism may choose any one of these specialisms.

You must discuss the research question presented within the relevant specialist lecture for this specialism, propose some research approaches to investigate this question and identify further work you might undertake which builds on this. You must also explore your personal strengths in this area.

Answer:

RESEARCH SPECIALISM: DATA SCIENCE

Introduction

In the contemporary era, data science holds a crucial position in technological advancement. Data science can be defined as the use of scientific methodologies, algorithms, processes, and systems to extract streamlined, processed information from noisy data and apply that information across different domains (Xueqi et al., 2020). The specialist paper considered here is “Real-Time Hand Movement Trajectory Tracking for Enhancing Dementia Screening in Ageing Deaf Signers of British Sign Language” by Liang et al., (2019). The article investigates how ageing deaf individuals who use British Sign Language (BSL) could be screened for dementia in its early stages through real-time hand movement trajectory tracking based on machine learning approaches. The following report assesses the research question of the specialist paper, situates it within existing work on similar topics, and proposes research approaches to investigate it.

Open Research Question

The research question presented in the specialist paper is:

  • “Is the signing space envelope (e.g., trajectories of gestures, facial expressions, body language) correlated with MCI (Mild Cognitive Impairment) and, in particular, early stages of Dementia?”

Sign languages allow deaf individuals both to convey information to others and to process and understand what others communicate. British Sign Language (BSL), which is fundamentally different from American Sign Language (ASL), is used by deaf individuals and by interpreters in the UK and relies on specific movements of the face, hands, and body (Downes and Patrick, 2021). The problem associated with this subject is that deaf individuals receive unequal access to diagnosis and care for conditions such as acquired neurological impairments, partly because healthcare and social-care staff often lack appropriate sign language skills. This frequently leads to poor outcomes and missed diagnoses, while also increasing the cost of care for the deaf individual (Farooq et al., 2021).

This question is scientifically interesting because BSL signing involves specific movements of the face, hands, and body that follow characteristic patterns and intervals (Huang et al., 2018). These patterns and intervals can be analysed through machine learning approaches for real-time hand movement trajectory tracking (Rastgoo et al., 2021). The question also addresses a real-world problem: data science and machine learning could be applied to screen deaf people for early-stage dementia by analysing their real-time hand movement trajectories, without requiring any verbal input from the patients. The approach could also be extended to other domains of medical assessment and diagnosis where little or no verbal input is available.
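To make the idea of trajectory tracking concrete, the following is a minimal Python sketch that extracts per-frame wrist positions from a video using the open-source MediaPipe Hands library. This is an illustrative assumption for demonstration only; it is not the tracking pipeline used by Liang et al., (2019).

```python
# Minimal sketch: extracting a hand-movement trajectory from video frames.
# Assumes the open-source MediaPipe Hands library; this is NOT necessarily
# the tracking approach used by Liang et al. (2019).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_wrist_trajectory(video_path):
    """Return a list of (frame_index, x, y) wrist positions in normalised image coordinates."""
    trajectory = []
    cap = cv2.VideoCapture(video_path)
    with mp_hands.Hands(static_image_mode=False,
                        max_num_hands=2,
                        min_detection_confidence=0.5) as hands:
        frame_idx = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    wrist = hand.landmark[mp_hands.HandLandmark.WRIST]
                    trajectory.append((frame_idx, wrist.x, wrist.y))
            frame_idx += 1
    cap.release()
    return trajectory
```

A per-signer sequence of such coordinates is the kind of raw signal from which signing-space features (extent, speed, depth) could then be derived.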

Existing and Related Work

From the research works of Bragg et al., (2019), it has been found that sign language recognition, generation, and translation is an interdisciplinary field, spanning Deaf culture, computer vision and graphics, human-computer interaction, linguistics, and natural language processing. However, several challenges, most of them associated with temporal aspects such as video and real-time movement, have hindered progress while also opening opportunities for innovative data science. The research works of Huang et al., (2018) note that sign language recognition (SLR) falls into two categories, isolated SLR and continuous SLR, where continuous SLR is conventionally reduced to the isolated case through temporal segmentation. However, temporal segmentation is non-trivial and propagates errors into subsequent steps, which limits the amount of usable data. To mitigate this issue, a Hierarchical Attention Network with Latent Space (LS-HAN) was evaluated on two datasets; it eliminates temporal segmentation by bridging the semantic gap between video and sentence, leading to accurate results. However, this leaves open the research gap of how the signing space envelope is correlated with MCI and, in particular, early stages of Dementia.

In contrast, from the research works of Ibrahim et al., (2018), it has been found that a Sign Language Recognition System (SLRS) is a form of interaction in which a deaf individual’s signs are transformed into text or into the vocal form of the spoken language. The system consists of four stages: hand segmentation, hand tracking, feature extraction, and classification. A dataset of 30 isolated words commonly used in the school life of deaf children was used for evaluation. The experimental results showed a 97% recognition rate in signer-independent mode, and the system was able to differentiate between similar gestures. However, this study cannot answer how the signing space envelope is correlated with MCI and, in particular, early stages of Dementia, since it focuses on recognising sign language gestures and translating them into spoken language.
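To make the four-stage structure concrete, the following Python sketch outlines such a pipeline using OpenCV and scikit-learn. The skin-colour segmentation, centroid tracking, simple motion features, and SVM classifier are illustrative assumptions, not the specific methods of Ibrahim et al., (2018).

```python
# Illustrative four-stage SLRS pipeline (segmentation, tracking, feature
# extraction, classification). The specific techniques below are assumptions
# for demonstration, not those of Ibrahim et al. (2018).
import cv2
import numpy as np
from sklearn.svm import SVC

def segment_hand(frame_bgr):
    """Stage 1: crude skin-colour segmentation in YCrCb space."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def track_hand(mask):
    """Stage 2: track the hand as the centroid of the largest contour."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def extract_features(centroids):
    """Stage 3: simple motion features from the centroid trajectory."""
    pts = np.array([p for p in centroids if p is not None])
    if len(pts) < 2:
        return np.zeros(4)
    deltas = np.diff(pts, axis=0)
    return np.array([pts[:, 0].std(), pts[:, 1].std(),
                     np.abs(deltas[:, 0]).mean(), np.abs(deltas[:, 1]).mean()])

# Stage 4: classification of the isolated signs with any standard classifier.
classifier = SVC(kernel="rbf")
# classifier.fit(training_features, training_labels)  # labels are the 30 word classes
```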

From the research works of Liao et al., (2019), it has been found that sign language recognition methods struggle to recognise complex signs and to train on longer video sequences. To mitigate these problems, a dynamic SLRS based on a deep 3D residual ConvNet combined with bi-directional LSTM networks can be used. This approach models the spatiotemporal aspects of sign language gestures in a systematic manner and recognises complex signs with 89.8% accuracy. However, this study is also unable to answer how the signing space envelope is correlated with MCI and, in particular, early stages of Dementia. This is because deaf patients suffering from Dementia have their actions and thoughts disrupted and therefore struggle to produce well-formed gestures, which in turn degrades the performance of sign recognition techniques applied to their signing.
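The following PyTorch sketch illustrates the general idea of feeding 3D convolutional features into a bi-directional LSTM for clip-level sign classification. The layer sizes and structure are simplified assumptions and do not reproduce the deep 3D residual ConvNet published by Liao et al., (2019).

```python
# Illustrative sketch of combining a 3D convolutional feature extractor with a
# bi-directional LSTM for dynamic sign recognition, in the spirit of
# Liao et al. (2019). Layer sizes are assumptions, not the published network.
import torch
import torch.nn as nn

class Conv3DBiLSTM(nn.Module):
    def __init__(self, num_classes, hidden_size=256):
        super().__init__()
        # Shallow 3D-CNN stand-in for the deep 3D residual ConvNet.
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep the temporal dimension
        )
        self.bilstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, clips):
        # clips: (batch, channels, frames, height, width)
        feats = self.cnn3d(clips)                  # (batch, 64, frames, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1)      # (batch, 64, frames)
        feats = feats.permute(0, 2, 1)             # (batch, frames, 64)
        out, _ = self.bilstm(feats)                # (batch, frames, 2*hidden)
        return self.classifier(out[:, -1, :])      # classify from the last step

model = Conv3DBiLSTM(num_classes=100)
logits = model(torch.randn(2, 3, 16, 112, 112))   # two 16-frame RGB clips
```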

Research Approach

The research methods used in the specialist paper to investigate this question are quantitative, combining experimental, analytical, build, and model approaches. The available resources consist of BSL users, recorded videos of BSL users, and existing open-source code such as Convolutional Neural Network implementations for gesture recognition. One approach that could build on this research to answer related open questions is applying the model and system to recognise complex gestures with high precision and translate them accurately (Liang et al., 2019).

The experimental design consists of a controlled experiment in which appropriate algorithmic approaches, together with software for extracting trajectories from videos and images, are selected. Artificial Neural Network (ANN) based prediction models, namely VGG16 and ResNet-50, were used alongside an algorithmic approach to predict MCI versus non-MCI cases based on two factors: the signing space envelope (signing trajectories, speed, and depth) and the complementary facial expressions of the deaf person. The prediction model is trained and tested, and its outputs are validated against cognitive screening results (Liang et al., 2019).
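As an illustration of this kind of ANN-based prediction, the following is a minimal Keras sketch of a binary MCI versus non-MCI classifier built on a pretrained ResNet-50 backbone. The input representation, added layers, and training settings are assumptions for demonstration and do not reproduce the configuration reported by Liang et al., (2019).

```python
# Minimal sketch of a binary MCI vs. non-MCI classifier built on a pretrained
# ResNet-50 backbone. Input representation, layer sizes and training settings
# are assumptions, not the published configuration of Liang et al. (2019).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_mci_classifier(input_shape=(224, 224, 3)):
    # Pretrained ImageNet backbone; the original top layers are replaced.
    backbone = ResNet50(weights="imagenet", include_top=False, input_shape=input_shape)
    backbone.trainable = False  # could be fine-tuned later

    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # MCI vs. non-MCI
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mci_classifier()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
# Inputs would be image encodings of signing-space features (e.g. trajectory
# plots); predictions would then be validated against cognitive screening results.
```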

An advantage of this research is that it can recognise and translate complex signing with high accuracy. In addition, the research can be compared with existing algorithmic models for sign language recognition, which can help to further develop the field of deep learning. Moreover, it helps to establish whether the signing space envelope is correlated with MCI, which could in turn support the assessment of other neurological conditions, such as Parkinson’s disease and stroke, in both deaf and hearing people.

Personal Investment

My interest in this research question stems from the fact that establishing a correlation between the signing space envelope and MCI can advance the fields of data science and machine learning while integrating them effectively into healthcare (Moradi et al., 2015). In addition, my strengths in data science knowledge, problem-solving skills, and prior experience will help me carry out this research and contribute further improvements in the realm of machine learning.

References

Bragg, D., Koller, O., Bellard, M., Berke, L., Boudreault, P., Braffort, A., Caselli, N., Huenerfauth, M., Kacorri, H., Verhoef, T. and Vogler, C., 2019, October. Sign language recognition, generation, and translation: An interdisciplinary perspective. In The 21st international ACM SIGACCESS conference on computers and accessibility (pp. 16-31). https://dl.acm.org/doi/pdf/10.1145/3308561.3353774

Downes, E.J. and Patrick, P., 2021. Language Rights of Sign Language Peoples in the United Kingdom and United States: The Disability Paradigm versus the Protected Minority Language Paradigm. https://www.researchgate.net/profile/Emily_Downes3/publication/352793963_Language_Rights_of_Sign_Language_Peoples_in_the_United_Kingdom_and_United_States_The_Disability_Paradigm_versus_the_Protected_Minority_Language_Paradigm/links/60d9e6f392851ca9449097da/Language-Rights-of-Sign-Language-Peoples-in-the-United-Kingdom-and-United-States-The-Disability-Paradigm-versus-the-Protected-Minority-Language-Paradigm.pdf

Farooq, U., Rahim, M.S.M., Sabir, N., Hussain, A. and Abid, A., 2021. Advances in machine translation for sign language: approaches, limitations, and challenges. Neural Computing and Applications, pp.1-43. https://www.researchgate.net/profile/Nabeel-Khan-34/publication/351919265_Advances_in_machine_translation_for_sign_language_approaches_limitations_and_challenges/links/60b034af299bf13438efe990/Advances-in-machine-translation-for-sign-language-approaches-limitations-and-challenges.pdf

Huang, J., Zhou, W., Li, H. and Li, W., 2018. Attention-based 3D-CNNs for large-vocabulary sign language recognition. IEEE Transactions on Circuits and Systems for Video Technology, 29(9), pp.2822-2832. https://ieeexplore.ieee.org/abstract/document/8466903/

Huang, J., Zhou, W., Zhang, Q., Li, H. and Li, W., 2018, April. Video-based sign language recognition without temporal segmentation. In Thirty-Second AAAI Conference on Artificial Intelligence. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPDFInterstitial/17137/15938

Ibrahim, N.B., Selim, M.M. and Zayed, H.H., 2018. An automatic Arabic sign language recognition system (ArSLRS). Journal of King Saud University-Computer and Information Sciences, 30(4), pp.470-477. https://www.sciencedirect.com/science/article/pii/S1319157817301775

Liang, X., Kapetanios, E., Woll, B. and Angelopoulou, A., 2019, August. Real time hand movement trajectory tracking for enhancing dementia screening in ageing deaf signers of British sign language. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction (pp. 377-394). Springer, Cham. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7025874/

Liao, Y., Xiong, P., Min, W., Min, W. and Lu, J., 2019. Dynamic sign language recognition based on video sequence with BLSTM-3D residual networks. IEEE Access, 7, pp.38044-38054. https://ieeexplore.ieee.org/iel7/6287639/6514899/08667292.pdf

Moradi, E., Pepe, A., Gaser, C., Huttunen, H., Tohka, J. and Alzheimer’s Disease Neuroimaging Initiative, 2015. Machine learning framework for early MRI-based Alzheimer’s conversion prediction in MCI subjects. NeuroImage, 104, pp.398-412. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5957071/

Rastgoo, R., Kiani, K. and Escalera, S., 2021. Sign language recognition: A deep survey. Expert Systems with Applications, 164, p.113794. https://www.sciencedirect.com/science/article/pii/S095741742030614X

Xueqi, C., Hong, M., Wei, Z., Wan Sang, B.W., Huawei, S. and Guojie, L., 2020. Data Science and Computing Intelligence: Concept, Paradigm, and Opportunities. Bulletin of Chinese Academy of Sciences (Chinese Version), 35(12), pp.1470-1481. https://bulletinofcas.researchcommons.org/journal/vol35/iss12/6/
