Handwriting recognition (HWR) is the ability of computers to recognize handwritten text on any medium, such as paper, photographs, and documents. It is also often referred to as Handwritten Text Recognition (HTR). Optical Character Recognition (OCR) is the electronic conversion of handwritten or typed text, whether from scanned documents or photographs, into machine-readable form. HWR is an active research area within OCR that many researchers and tech giants are trying to address. According to market research, the global OCR handwriting recognition market was estimated at USD 1,039.3 million in 2016, with an expected CAGR of 15.7% from 2017 to 2025. The HWR market is expected to grow along with it and to show tremendous improvements over current accuracy levels.
Handwriting recognition has a large number of industrial use cases, from healthcare and pharmaceuticals to banking and insurance. HWR technologies can be employed for various reasons – reducing labor costs, saving the time spent manually digitizing handwritten records, enhancing the customer experience, and so on. HWR may also lead to the automation of various labor-intensive processes.
Varied handwriting styles, poor image lighting, and the separation of characters in cursive handwriting make it difficult for handwriting recognition to achieve good accuracy. However, ongoing research that deploys state-of-the-art deep learning architectures has made considerable strides in improving accuracy. These architectures prove far superior to earlier machine learning approaches, where the features used to train the model were defined by humans.
Current Status and Future Predictions
According to a market research report published by Credence Research, “Handwriting Recognition (HWR) Market (By Type – Online and Offline; By Application – Automotive, Education and Literature, Enterprise and Field Services, Healthcare and Others) – Growth, Future Prospects, Competitive Analysis and Forecast 2017 – 2025”, the global handwriting recognition (HWR) market was valued at US$ 1,039.3 Mn in 2016 and is expected to grow at a CAGR of 15.7% from 2017 to 2025.
HWR technologies can be classified into two types – online and offline. Online methods extract machine-readable text from strokes made on touch screens, while offline methods extract machine-readable text from paper, journals, and other physical media.
Fig. Global Handwriting Recognition Market Revenue by type
The major players in this market face high competition. The key to surviving is a stronger effort towards service enhancement, so as to address changing regulations and economic conditions in the global market, along with continually improving the accuracy of HWR software. The key players in the handwriting recognition (HWR) system market include MyScript, Nuance Communications, Inc., SELVAS AI, Inc., Hanwang Technology Co., Ltd., Paragon Software Group, PhatWare Corporation, SinoVoice (Beijing Jietong Huasheng Technology Co. Ltd.), and Sciometrics, LLC.
A large number of apps for mobile, tablet, and web platforms are also available for online methods. The table below summarizes some of the most popular handwriting recognition apps.
Table 1. Software for handwriting recognition
Major cloud services like AWS (Amazon Textract), Google Cloud (Google Vision) and Azure (Microsoft Azure Vision) also provide APIs for handwriting recognition.
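For a quick sense of what these APIs look like in practice, here is a minimal sketch that sends an image to the Google Cloud Vision API and prints the recognized text. It assumes the google-cloud-vision Python package is installed and that service-account credentials are configured; the file name is a placeholder.

```python
# Minimal sketch: handwriting/dense-text OCR with the Google Cloud Vision API.
# Assumes GOOGLE_APPLICATION_CREDENTIALS points to a valid service-account key.
from google.cloud import vision

def read_handwriting(path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    # document_text_detection is the variant tuned for dense and handwritten text
    response = client.document_text_detection(image=image)
    return response.full_text_annotation.text

if __name__ == "__main__":
    print(read_handwriting("note.jpg"))  # "note.jpg" is a placeholder path
```

Amazon Textract (via boto3) and Azure's vision services expose comparable calls, so the surrounding application logic stays much the same whichever provider is chosen.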
Handwriting Recognition Use Cases
We have all heard jokes about doctors’ unrecognizable handwriting and have faced this issue first hand. Jokes apart, a July 2006 report from the National Academies of Science’s Institute of Medicine (IOM) found that doctors’ illegible handwriting kills more than 7,000 people annually. Using HWR technology could easily save people’s time, money, and lives. A lot of healthcare institutes and hospitals are implementing data strategies to combat the loss of life arising from illegible script. These include Electronic Health Records (EHR), which had been adopted by 71% of physicians according to a 2013 survey.
These technologies also help mitigate issues such as dependence on paper, regulatory violations, compliance issues, forgeries, and fraud. HWR can further be used to digitize patient entry forms and handwritten records. Digitization also gives access to huge amounts of data that can be analyzed to gain meaningful insights.
There is an ongoing effort to introduce children and educators to computing devices early on. Google for Educators is one such campaign, which encourages schools to embrace technology. With the increasing use of tablets in schools and colleges, HWR technology is bound to grow in the education sector. It is a powerful tool that can enhance students’ learning experience. For example, a student can take handwritten notes on an iPad, which aids understanding, and those notes can then be converted to computer-readable text using handwriting recognition techniques. These technologies have been built into various note-taking apps, apps that handle complex mathematical problems, and even music apps. Mathematical apps, for instance, can convert handwritten questions and equations into neat computer-readable digits and text, which can then be used to compute the desired answers. Other uses include turning scrawled diagrams into digital documents, composing music, and even streamlining the process of adding references to research papers through highlight and search capabilities. Thus, handwriting recognition technologies can benefit educators and students at all levels, be it a kindergartner or a senior-year college student.
Fig. Application of handwriting recognition in Banking Applications
Currently, cheques deposited at banks are analyzed manually and then entered into a computer. Bank employees also have to manually verify the signature and date on each cheque. This takes time and manual effort, and it delays the crediting of the beneficiary’s account. Handwriting recognition technologies can be used to read these cheques and other bank documents, such as forms and demand drafts, at a much faster pace.
A large number of historical books and journals have been digitized to make them accessible to the entire world. However, most of these efforts are restricted to photos or scans of the books. This effort becomes even more useful if the text in these historical books can be parsed, queried, and indexed by web crawlers. Handwriting recognition plays a key role in bringing medieval and 20th-century documents, postcards, research studies, and the like back to life.
There has been increasing demand from consumers for OCR and related technologies, which has fueled their adoption in tablets and smartphones. The demand has surged owing to the growing number of devices to capitalize on. OCR technologies, including handwriting recognition, have been around for a decade; however, the widespread reach of touchscreen mobile devices has increased their adoption in day-to-day life. HWR technologies might provide a viable alternative to digital keyboards in the near future. Whether it is within mobile applications, leveraged in your smart watch, included as an alternative to your smartphone keyboard, or integrated within the dashboard of your car, HWR technology is gaining momentum.
Handwritten text recognition can be divided into two categories – online methods and offline methods.
1. Online methods – Online methods involve touch-screen devices, light pens, styluses, etc. They have access to information such as stroke order and direction, and they do not face the issue of noisy backgrounds, so text can be recognized with fairly high accuracy compared to offline methods.
2. Offline methods – Offline methods deal with text written on non-digital media such as paper. Information such as stroke order and direction is not available, unlike with online methods, and the input may also suffer from noisy backgrounds.
Ongoing efforts are concentrated more on offline handwriting recognition, which still has a long way to go before it achieves respectable accuracy. Offline methods also do not require extra devices such as light pens or styluses.
Early attempts at handwriting recognition used machine learning algorithms such as Hidden Markov Models (HMM) and Support Vector Machines (SVM). These techniques took pre-processed text as input and relied on feature extraction to identify key elements such as loops, aspect ratio, and inflection points. The generated features were then fed into machine learning algorithms such as HMMs and SVMs. Such methods had limited accuracy because the features had to be identified by humans, and they do not scale well since hand-crafted features cannot cover different languages.
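As a rough illustration of this classical pipeline, the sketch below extracts hand-designed features (HOG descriptors standing in for loops, aspect ratios, and inflection points) from scikit-learn's small handwritten-digit dataset and classifies them with an SVM. The dataset and feature choice are illustrative assumptions, not the exact features used in early HWR systems.

```python
# Classical pipeline sketch: hand-designed features + SVM classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from skimage.feature import hog

# Load 8x8 handwritten digit images (a small stand-in for a real HWR dataset)
digits = load_digits()
images, labels = digits.images, digits.target

# Hand-designed features: HOG descriptors stand in for loop/aspect-ratio features
features = np.array(
    [hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1)) for img in images]
)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```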
Deep learning addresses these issues by combining state-of-the-art building blocks such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Connectionist Temporal Classification (CTC).
Convolutional Neural Networks are used for feature extraction from input images. The convolutional layers perform three main operations – applying a convolution (filter kernel) to the input image, a non-linear ReLU activation, and finally a pooling operation that downsamples the CNN output. A Recurrent Neural Network (RNN) is a class of artificial neural networks in which connections between nodes form a directed graph along a temporal sequence. An LSTM implementation of the RNN is used because it propagates information over long sequences, which improves the trained model. Finally, a Connectionist Temporal Classification (CTC) layer handles the sequence labelling: during training, the CTC is given the RNN output matrix and the ground-truth text and computes the loss value; during inference, it decodes the final text.
Fig. Overview of Neural Network Architecture for handwriting recognition
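Below is a minimal sketch of this CNN-RNN-CTC architecture in Keras. The input size (32x128 grayscale line images), the 80-character alphabet, and the layer widths are illustrative assumptions; a real system would tune these and train with the CTC loss on a dataset such as IAM.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CHARS = 80  # size of the character set (an assumption)

inputs = layers.Input(shape=(32, 128, 1), name="image")  # 32x128 grayscale text line

# CNN: convolution + ReLU + pooling extract a feature map from the image
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D((2, 2))(x)                         # -> (16, 64, 32)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D((2, 2))(x)                         # -> (8, 32, 64)

# Treat each horizontal position along the line as one timestep for the RNN
x = layers.Permute((2, 1, 3))(x)                           # -> (32, 8, 64)
x = layers.Reshape((32, 8 * 64))(x)                        # -> (32, 512)

# RNN: a bidirectional LSTM propagates context along the text line
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)

# Per-timestep character scores; the extra class is the CTC "blank" symbol
outputs = layers.Dense(NUM_CHARS + 1, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.summary()

# Training would minimize the CTC loss, e.g. tf.keras.backend.ctc_batch_cost(
# labels, predictions, input_lengths, label_lengths), and decoding would use
# tf.keras.backend.ctc_decode on the softmax output.
```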
One of the main struggles of HWR technologies is accuracy. Many HWR systems cannot handle the wide variety of handwriting styles, making them unreliable in real-life scenarios. They also require huge datasets and a lot of computing power to train the models, so implementing HWR at an industrial scale can be quite expensive. In addition, a model’s recognition capability depends heavily on the data used to train it.
The future of HWR looks hopeful, with recent strides in accuracy, mass adoption of mobile devices, and a push for paperless operations. A wide range of software is available for handwriting recognition, at varied accuracy levels and price points.
Some popular handwriting recognition offerings are –
- Amazon Textract
- Microsoft Azure Vision
- Google Cloud Vision API
While selecting handwriting recognition software for your business, you should carefully weigh factors such as character recognition accuracy, word recognition accuracy, computation speed (if results need to be delivered in real time), continuous learning capabilities, user-friendliness of the interface (if humans will be using it), and the price point.
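Two of those factors, character and word recognition accuracy, are commonly reported as character error rate (CER) and word error rate (WER). The sketch below computes both from a plain edit distance; the sample strings are illustrative only.

```python
# Minimal sketch: character error rate (CER) and word error rate (WER).
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two sequences
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    return levenshtein(ref, hyp) / max(len(ref), 1)

print(cer("handwriting recognition", "handwritng recogmition"))  # per-character errors
print(wer("handwriting recognition", "handwritng recogmition"))  # per-word errors
```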
HWR technologies will play a huge role in the upcoming years. So, is your company ready to reap the benefits?
Need help with Handwriting Recognition?
If you want to utilize Handwriting recognition technologies and need help, advice, or developers, feel free to contact us at firstname.lastname@example.org or on our LinkedIn page or visit our company website, www.quantiantech.com
-  https://www.credenceresearch.com/press/global-handwriting-recognition-hwr-market
-  https://www.itbusinessedge.com/mobile/five-industries-benefitting-from-handwriting-recognition-technology/
-  https://towardsdatascience.com/build-a-handwritten-text-recognition-system-using-tensorflow-2326a3487cd5
At the peak of the COVID-19 pandemic, the Indian government launched a chatbot called MyGov Corona Helpdesk (for WhatsApp) to provide constant updates and counter fake news about the novel coronavirus. People could converse with the chatbot on platforms like Facebook Messenger, WhatsApp, and Telegram. About 17 million people used it within 10 days of its launch. This chatbot is an example of a “rule-based chatbot”, which can only handle pre-defined inputs. Conversational bots, which can recognize context in a conversation, are a level up from these rule-based or FAQ chatbots.
MyGov Corona Helpdesk Whatsapp chatbot
What are conversational bots and how do they differ from FAQ chatbots?
Conversational bots are a massive improvement over rule-based chatbots. If you think about how humans converse, context matters: what the user said before, and how, when, and where, should all influence how the conversation goes. Conversational bots powered by AI can understand this context. Understanding the context also means the bot is capable of answering new and unexpected inputs from the user. These conversational bots consider what has been said before, gracefully handle unexpected dialogue turns, drive the conversation when the user drifts from the regular conversation path, and improve over time.
Apollo Hospitals launched an app with a conversational bot called Apollo247, which analyzes a dialogue with the user to tell whether they need to visit a hospital for COVID-19-related symptoms. The bot asks for the user’s gender and age and what ailments they are suffering from, and then advises whether to visit a hospital. However, the app states that the bot’s analysis “should not be taken as a medical advice”. It can also tell whether one should get a scan done or not.
Apollo247 conversational bot
How can conversational bots help your business?
As per a survey published by Statista in 2019, about 78% of enterprises have leveraged conversational bots powered by AI in simple self-service scenarios. 77% of enterprises report using bots to try to resolve a query before passing it on to customer care personnel, and another 70% of companies reportedly use bots to retrieve information, offer recommendations, and answer queries more quickly. According to Statista, conversational AI bots are used most in customer service; the second most common area is Customer Relationship Management (CRM).
Most common areas for conversational bot implementation in organizations worldwide
A user can be dissuaded from using an online service or product if they find the site hard to navigate, cannot get answers to simple queries, or find it too hard to access basic services. These hurdles can be overcome by conversational chatbots, which are fast, intuitive, and convenient. AI chatbots offer a way to increase customer engagement by providing a personalized experience, and they can also capture high-value information from customers.
Listed below are 10 key areas where businesses can take advantage of conversational AI bots:
Chatbots provide customers with a sense of “immediate response”. Customers want their queries answered “now” – typically within five minutes of making contact online. Conversational chatbots enable a kind of response and behavior akin to talking to customer care personnel.
Drive More Revenue
Intelligent chatbots act as guides for customers and take them on a buying journey, which improves sales conversion and revenue. Advanced chatbots can remember “context” and thus cater to customer preferences, providing advice, tips, and help while gently making recommendations that result in upselling of products.
According to reports, the use of virtual assistants cuts the number of queries that need to be handled by human agents by 40%, and often delivers first call resolution (FCR) rates far in excess of live agents. Chatbots also reduce costs by handling more customers at a time.
Maximize Staff Skills
Conversational bots can be used to automate a portion of the calls, emails, SMS, etc. that would otherwise require human involvement. This frees employees to engage in higher-value customer interactions.
Reach New Channels
Chatbots provide a personalized customer experience and try to solve the customer’s query before involving a human. These bots can be deployed simultaneously on various platforms like social media, calls, SMS, etc., which reduces the overhead of deploying a support team on each new channel or network.
Conversational bots can increase brand loyalty as well as customer retention, thanks to fast and frictionless answering of customer queries. The use of conversational AI thus reduces a company’s cost overheads while increasing customer retention.
Customers want service 24/7, 365 days a year. Delivering this kind of support with human agents alone is nearly impossible, but conversational bots can be available all the time. Customers can get their queries resolved anytime, even on holidays.
Customers tend to spend 60% more per purchase and also report purchasing more frequently. As customers start to favor online methods of communication, chatbots provide an opportunity to reignite the customer experience with increased engagement, personalized customer service, and improved customer satisfaction.
Understand the Customer Better
Apart from resolving customer queries quickly, these bots can also provide meaningful insights into the company’s customers. They can be used to spot trends and better interpret customer sentiment, providing invaluable insight that informs product and service development.
Furthermore, this data can be accessed for a single product or for a multitude of products.
Conversational bots can be a key factor in customers choosing your company over your competitors. These bots deliver a frictionless user experience that drives higher brand loyalty and higher customer retention.
Use cases of Conversational Bots
Chatbots are being used across multiple industries to provide a seamless experience to their customers. Some of these use cases are covered here.
Banking, Insurance and Financial Services
These chatbots can guide customers through a variety of financial operations without making it feel like they have to fill in too many forms. The information shared with these chatbots is completely safe. From checking an account, reporting lost cards, or making payments, to renewing a policy or managing a refund, the customer can manage simple tasks autonomously. These chatbots provide immediate support to customers and can also be used to train customer care personnel.
In the automotive space, chatbots can take information from users and recommend cars according to their needs and wants, taking in requirements such as desired features and budget. The interaction feels human-like and is bound to drive conversion rates upwards. Apart from recommending cars, these chatbots can prompt the user to schedule a test drive at the nearest car dealer.
Retail & Ecommerce
Adding conversational bots to your existing retail channels increases customer engagement, as they can answer clients’ queries and requests instantaneously. They also provide updates on shipping details, discounts, etc. in a human-like fashion, and can help customers navigate the website to the pages where they can find the product they are looking for. The data collected by chatbots can be further analyzed by the marketing team to understand customer behavior and devise strategies to increase customer engagement and retention.
Conversational bots can also provide self-help FAQs and knowledge forums where customers can find answers to any technical issues they might be facing. Customers can use these bots to find the best deals, or even change personal information such as their address, just by chatting with the bot. Further, chatbots can come up with personalized plans for customers and assist potential customers in choosing the right product for their needs. This allows customer care employees to focus on complex issues rather than getting involved in trivial ones.
Smart Homes & IoT Devices
These conversational bots enable customers to access the various functions of a smart home via everyday human speech. They can also be used as guides in smart cars, giving directions, setting desired temperatures, and so on.
Chatbots in the healthcare industry can help avoid unnecessary hospital visits. This technology is especially useful during a pandemic, when there is massive stress on hospitals and doctors alike. Further, they can also be used to schedule appointments and scans for patients.
When bots were first launched into the market, there were predictions that they would entirely replace customer care employees. This did not happen, because the current state of bots has its own limitations. One of the biggest limitations of conversational bots is that they fail to understand complex queries. Secondly, bots can only handle situations for which they have been trained; on failing to recognize an incoming message, they generate responses like “Sorry, I could not understand your question”, which can leave the customer frustrated. If the bot is supposed to handle high-volume queries, the cost of building and deploying it can also be high. Finally, a chatbot may make the conversation feel repetitive, which again frustrates customers.
Developers understand these limitations and are therefore careful while developing bots. There are a lot of tools available for building commercial bots, such as Keras, TensorFlow, and PyTorch, and frameworks like Google’s Dialogflow and Rasa can also be used to develop effective conversational bots that handle context (a minimal sketch of the underlying intent-recognition step is shown below). These chatbots can act as the first point of contact and reduce back-office overhead by resolving simple customer queries and requests. A lot of companies have already integrated chatbots into their websites, apps, or other channels to drive customer engagement higher. So, are you ready to step up your customer experience with conversational bots capable of recognizing context?
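For illustration, here is a minimal sketch of the intent-classification step that sits at the heart of such bots, using scikit-learn rather than any particular framework's API. The tiny training set, intent names, canned replies, and confidence threshold are illustrative assumptions.

```python
# Minimal sketch of a bot's NLU step: classify the message into an intent, then reply
# or hand over to a human. Training data and replies are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    ("where is my order", "order_status"),
    ("track my package", "order_status"),
    ("i want my money back", "refund"),
    ("how do i return this item", "refund"),
    ("talk to a human", "handover"),
]
texts, intents = zip(*training_utterances)

nlu = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
nlu.fit(texts, intents)

replies = {
    "order_status": "Let me look up your order.",
    "refund": "I can start a return for you.",
    "handover": "Connecting you to an agent.",
}

def respond(message: str) -> str:
    intent = nlu.predict([message])[0]
    confidence = nlu.predict_proba([message]).max()
    # Assumed threshold: low confidence hands the conversation to a human instead of guessing
    return replies[intent] if confidence > 0.4 else "Connecting you to an agent."

print(respond("can you tell me where my parcel is"))
```

Production frameworks such as Rasa or Dialogflow add dialogue management on top of this step so the bot can track context across turns rather than treating each message in isolation.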
Need help with Conversational Bots?
If you want to build a conversational bot or integrate one into your existing products and need help, advice, or developers, feel free to contact us at email@example.com or on our LinkedIn page, or visit our company website, www.quantiantech.com
References
- https://gadgets.ndtv.com/apps/news/coronavirus-mygov-corona-helpdesk-chatbot-whatsapp-indian-government-total-users-haptik-2204458
- https://yourstory.com/2020/03/apollo-hospitals-launches-24-7-ai-free-app-coronavirus
- https://www.statista.com/statistics/966909/worldwide-conversational-bot-implementation/
Artificial Intelligence in Emotion Recognition
Emotions serve as a source of information about how a person is reacting to a particular scenario. Recognizing emotions can help us take actions to get desired outcomes. Humans use a variety of indicators such as facial expression, voice modulation, speech content, body language, and historical context to gauge the emotions of others.
Emotion recognition using AI is a relatively new field. It refers to identifying human emotions using technology. Generally, this technology works best if it uses multiple modalities to make predictions. To date, most work has been conducted on automating the recognition of facial expressions from video, spoken expressions from audio, written expressions from the text, and physiology as measured by wearable devices.
As per the report by MarketsandMarkets published in February 2020, the global Emotion Detection and Recognition Market is expected to grow from USD 21.6 billion to a staggering USD 56 billion by 2024. Many technology companies, such as Amazon and Microsoft, have already launched emotion detection tools that predict emotions with varying accuracy. So, are you ready to utilize this upcoming technology to your business advantage?
This article takes a look at the use cases of such technologies, gives an overview of how emotion recognition is achieved, and discusses concerns related to the technology.
Emotion Detection and Recognition Market
The ongoing pandemic has led to many day-to-day activities being carried out in online mode. These include online classes, hiring processes, work from home scenarios, etc. This online adoption has led to an increase in the demand for emotion detection and recognition software. The market size for software related to this technology is expected to grow at a Compound Annual Growth Rate (CAGR) of 21.0% till 2024. Factors such as the rising need for socially intelligent artificial agents, increasing demand for speech-based biometric systems to enable multifactor authentication, technological advancements across the globe, and growing need for high operational excellence are expected to work in favor of the market in the near future. Also, the pandemic situation has reinforced the need for such a technology.
Advancements in technologies such as Deep Learning and NLP (Natural Language Processing) have further accelerated the development and adoption of emotion recognition software. Deep learning uses neural networks to classify images into several classes. For instance, neural networks can be applied to a face to detect whether its expression denotes sadness, happiness, shock, anger, etc.
Applications Of Emotion Detection and Recognition Technologies
FER (Facial Expression Recognition) software can be used in focus groups, beta testing for product marketing, and other market research activities to find out how customers feel about certain products. Here, the participants have already consented to the use of FER software on them, so there are no legal ramifications. FER technologies have become quite infamous for collecting data by stealth, but this application of FER does not involve any such malpractice.
Another novel marketing experiment was run back in 2015 by M&C Saatchi, where an advertisement changed based on people’s facial expressions as they passed an AI-powered poster.
The ongoing coronavirus pandemic has shifted most in-person interviews to video call interviews. Emotion detection software can analyze expressions such as fear, shock, happiness, neutrality, etc. during these calls. However, this remains a controversial use case of emotion recognition software, as it has the following caveats –
• The AI models used may carry racial bias; for example, black men are more likely to be classified as having an “angry” facial expression
• It is not legal to use such technologies in the EU and a few other nations
• This usage is likely to be subject to further regulations
Deepfakes are AI-generated fake videos derived from real videos. A deepfake model takes as input a video of a specific individual (the ‘target’) and outputs another video with the target’s face replaced with that of another individual (the ‘source’). The 2020 US election saw a surge in such politically motivated videos. Research conducted by the Computer Vision Foundation in partnership with UC Berkeley, DARPA, and Google used facial expression recognition to detect deepfakes.
Medical Research in Autism
People who have autism often find it difficult to make appropriate facial expressions at the right time; as many as 39 studies have concluded the same. Most autism-affected people usually remain expressionless or produce facial expressions that are difficult to interpret. Machine learning can be applied for the early detection of Autism Spectrum Disorder (ASD), since people diagnosed with this disorder have long-term difficulties in evaluating facial expressions.
A number of machine learning projects and studies have been conducted to help people on the autism spectrum. Stanford University’s Autism Glass project leveraged Haar cascades for face detection in images captured by Google Glass, a face-worn computing device, and then predicted emotions from these images. This project aimed at helping autism-affected people by suggesting appropriate social cues to them. Another project used an app to screen subjects’ facial expressions while watching a movie, to identify how their expressions compared with those of non-autistic people. The project utilized TensorFlow, PyTorch, and AWS (Amazon Web Services).
There are many more applications of emotion detection technologies that can help people on the autism spectrum.
Virtual Learning Environment
A number of studies have used emotion detection technologies to determine how well students understand and perceive what is being taught in an online class.
One such study applies neural networks to classify emotions into six emotional categories. For this, the authors use the Haar cascade method to detect the face in the input image. Using the face as the basis, they extract the eyes and mouth through Sobel edge detection to obtain characteristic values. Neural networks are then used to classify the facial expression into one of the six emotion classes.
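A rough sketch of that preprocessing idea is shown below: OpenCV's bundled Haar cascades locate the face and eyes, and Sobel edge detection turns each region into a fixed-length characteristic vector. The mouth-region heuristic, the feature size, and the image path are assumptions for illustration, not the exact setup used in the study.

```python
# Sketch: Haar cascade face/eye detection + Sobel-based characteristic vectors.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def sobel_characteristics(gray_region: np.ndarray) -> np.ndarray:
    # Sobel gradients in x and y; their magnitude summarizes the edge structure
    gx = cv2.Sobel(gray_region, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_region, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return cv2.resize(magnitude, (16, 16)).flatten()  # fixed-length feature vector

gray = cv2.cvtColor(cv2.imread("student.jpg"), cv2.COLOR_BGR2GRAY)  # placeholder path
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = gray[y:y + h, x:x + w]
    # Eyes come from a second cascade; the mouth is approximated as the lower third of the face
    eyes = eye_cascade.detectMultiScale(face)
    mouth = face[int(0.66 * h):, :]
    features = [sobel_characteristics(face[ey:ey + eh, ex:ex + ew]) for (ex, ey, ew, eh) in eyes]
    features.append(sobel_characteristics(mouth))
    print("feature vectors extracted:", len(features))  # would feed a neural network
```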
How does Emotion Recognition Work?
Emotion Recognition using images
In most emotion recognition software, emotions are classified into one of these seven classes – neutral, happy, sad, surprise, fear, disgust, and anger. The first step of any facial expression classifier is to detect the faces present in an image or video feed.
The next step is to feed the detected faces into the emotion classification model. The classification models usually employ CNNs (Convolutional Neural Networks) trained to detect the various classes of facial expression in the training dataset. Essentially, a CNN applies a series of filters to generate a feature map of the image, which can then be fed into ANNs (Artificial Neural Networks) or any other machine learning algorithm for further classification.
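Below is a minimal sketch of that classification step, assuming a face has already been detected and cropped (for example with the Haar cascade shown earlier) and resized to 48x48 grayscale, as in the FER-2013 dataset. The layer sizes are illustrative; the model is untrained here, so its outputs are meaningless until it is fit on labeled data.

```python
# Sketch: small CNN mapping a 48x48 grayscale face crop to seven emotion scores.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

EMOTIONS = ["neutral", "happy", "sad", "surprise", "fear", "disgust", "anger"]

model = tf.keras.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),   # filters build a feature map of the face
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),       # dense (ANN) head on top of the CNN features
    layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# model.fit(train_faces, train_labels, epochs=10)  # training on a labeled dataset

face = np.random.rand(1, 48, 48, 1)                # stand-in for a detected, resized face
scores = model.predict(face, verbose=0)[0]
print(dict(zip(EMOTIONS, scores.round(3))))        # arbitrary until the model is trained
```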
Detecting emotions in audio clips
In emotion recognition from audio, various prosodic features can be used to capture emotion-specific properties of the speech signal. Features such as pitch, energy, speaking rate, and word duration are fed into suitable machine learning models to detect the likely emotion.
Another method for detecting emotion in audio clips is to compute Mel-frequency cepstral coefficients (MFCCs) from the audio and then apply a CNN to the MFCC representation. This is so far one of the best-known techniques for emotion recognition from audio.
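The sketch below illustrates that idea with librosa and Keras: MFCCs are extracted as a fixed-size "image" of the clip and passed to a small 2-D CNN. The sampling rate, frame count, layer sizes, number of emotion classes, and the file name "clip.wav" are all illustrative assumptions, and the network is untrained.

```python
# Sketch: MFCC features from an audio clip classified by a small 2-D CNN.
import librosa
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

NUM_EMOTIONS = 7

def mfcc_features(path: str, n_mfcc: int = 40, frames: int = 128) -> np.ndarray:
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Pad or trim to a fixed number of frames so every clip has the same shape
    mfcc = librosa.util.fix_length(mfcc, size=frames, axis=1)
    return mfcc[..., np.newaxis]                    # shape (n_mfcc, frames, 1)

model = tf.keras.Sequential([
    layers.Input(shape=(40, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_EMOTIONS, activation="softmax"),
])

scores = model.predict(mfcc_features("clip.wav")[np.newaxis, ...], verbose=0)
print("class scores:", scores[0])                   # untrained, so scores are arbitrary
```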
Putting it to use to analyze video
Emotion recognition from images and from audio can be combined, using more complex mathematical or machine learning models, to produce more accurate results for video.
Limitations of Emotion Recognition Technologies
Emotion recognition shares many challenges with detecting moving objects in video: identifying an object, continuous detection, incomplete or unpredictable actions, etc. It can also suffer from a lack of conversational context, lighting issues for images, and disturbances in the form of noise for audio inputs.
Depending on the datasets used, machine learning models for emotion recognition can carry an inherent bias. Even Google Photos suffered from racial bias, failing to correctly identify dark-skinned people. Emotion recognition often suffers from biases such as disproportionately classifying black men as angry. It is therefore very important to use diverse datasets when building emotion detection and recognition software.
Political and Public Scrutiny
Facial recognition, and systems built on this technology, have often drawn criticism from politicians and the public alike, usually over privacy concerns and the use of data without a person’s knowledge. The European Union (EU) has already banned facial recognition-based software, and more regulations are expected to follow for emotion recognition technologies.
Emotion detection and recognition systems are under constant political scrutiny. In spite of this, the market for these systems is expected to grow at a compound annual growth rate of 21% and to reach revenues of USD 56 billion by 2024. Beyond the outstanding economic projections, the use cases of this technology are compelling. If hurdles like privacy, legal regulation, and racial bias can be overcome, this technology can be integrated into various products to enhance the user experience.
Need help with Emotion recognition and detection software?
If you want to build emotion recognition and detection software and need help, advice, or developers, feel free to contact us at firstname.lastname@example.org or on our LinkedIn page, or visit our company website, www.quantiantech.com
References
- “Emotion Detection and Recognition Market,” MarketsandMarkets. [Online]. Available: https://www.marketsandmarkets.com/Market-Reports/emotion-detection-recognition-market-23376176.html. [Accessed: 02-Mar-2021]
- Sackett Catalogue of Bias Collaboration, E. A. Spencer, K. Mahtani. “Hawthorne bias.” Catalogue of Bias, 2017.
- Li, Y., & Lyu, S. (2018). Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656.
- Trevisan, D. A., Hoskyn, M., & Birmingham, E. (2018). Facial expression production in autism: A meta-analysis. Autism Research, 11(12), 1586-1601.
- Google Glass may help kids with autism – Stanford Children’s Health. [Online]. Available: https://www.stanfordchildrens.org/en/service/brain-and-behavior/google-glass. [Accessed: 02-Mar-2021]
- WIRED Insider, “Researchers Are Using Machine Learning to Screen for Autism in Children,” Wired, 23-Oct-2019. [Online]. Available: https://www.wired.com/brandlab/2019/05/researchers-using-machine-learning-screen-autism-children/#:~:text=Studying%20ASD%20at%20an%20Unprecedented,children%20in%20a%20single%20study. [Accessed: 02-Mar-2021]
- Yang, D., Alsadoon, A., Prasad, P. C., Singh, A. K., & Elchouemi, A. (2018). An emotion recognition model based on facial recognition in virtual learning environment. Procedia Computer Science, 125, 2-10.
- Štruc, V., Dobrišek, S., Žibert, J., Mihelič, F., & Pavešić, N. (2009, September). Combining audio and video for detection of spontaneous emotions. In European Workshop on Biometrics and Identity Management (pp. 114-121). Springer, Berlin, Heidelberg.
- R. Chu, “Speech Emotion Recognition with Convolution Neural Network,” Medium, 01-Jun-2019. [Online]. Available: https://towardsdatascience.com/speech-emotion-recognition-with-convolution-neural-network-1e6bb7130ce3. [Accessed: 02-Mar-2021]