CS297 Proposal
Visual and Lingual Emotional Recognition using Deep Learning Techniques
Akshay Kajale akshay.kajale@sjsu.edu
Advisor: Dr. Chris Pollett
Description
In this era of technology and Artificial Intelligence, there is a growing demand for natural human-computer interaction.
Humans express their emotions in different ways (facial expressions, speech, etc.).
The aim of this project is to detect a person's emotions from facial expressions and
linguistic features by applying computer vision and natural language processing techniques. We will develop
a hybrid neural network model that identifies a person's emotion based on facial expression and
the pitch of the voice while speaking.
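To make the facial-expression half of the model concrete, the following is a minimal illustrative sketch, not the project's actual architecture: a tiny two-layer network with randomly initialized weights that maps a face image to one of seven emotion labels. The 48x48 grayscale input size and the seven-class label set are assumptions borrowed from common facial-expression datasets such as FER2013; the real deliverable would use a trained deep convolutional network.

```python
import numpy as np

# Assumed label set (FER2013-style); the final model may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

rng = np.random.default_rng(0)
W1 = rng.standard_normal((48 * 48, 64)) * 0.01        # input -> hidden weights
b1 = np.zeros(64)
W2 = rng.standard_normal((64, len(EMOTIONS))) * 0.01  # hidden -> class scores
b2 = np.zeros(len(EMOTIONS))

def predict_emotion(image: np.ndarray) -> str:
    """Forward pass: flatten, ReLU hidden layer, softmax over emotion classes."""
    x = image.reshape(-1) / 255.0      # normalize pixel intensities to [0, 1]
    h = np.maximum(0, x @ W1 + b1)     # ReLU hidden layer
    scores = h @ W2 + b2
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()               # softmax probabilities
    return EMOTIONS[int(np.argmax(probs))]

face = rng.integers(0, 256, size=(48, 48))  # stand-in for a real face crop
print(predict_emotion(face))
```

With trained weights in place of the random ones, the same forward pass would produce meaningful predictions; the voice-pitch branch would be fused with this output in the hybrid model.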
Schedule
Week 1:
Sept 23 - Sept 29 | Finalize topics, discuss deliverables and draft proposal |
Week 2:
Sept 29 - October 6 | Finalize the Dataset and continue working on Deliverable 1 |
Week 3:
October 6 - October 13 | Deliverable 1 Due (Neural Network) |
Week 4:
October 13 - October 20 | Learn Unity to create dataset |
Week 5:
October 20 - October 27 | Decide on the labelled dataset for training the model and continue working on Deliverable 2 |
Week 6:
October 27 - November 4 | Deliverable 2 Due (Video Dataset) |
Week 7:
November 4 - November 11 | Mobile application development; continue working on Deliverable 3 |
Week 8:
November 11 - November 18 | Deliverable 3 Due (Mobile Application) |
Week 9:
November 18 - November 24 | Deploy the model on the mobile application and fine-tune its parameters |
Week 10:
November 24 - December 2 | Deliverable 4 Due (Deployment of Model) |
Week 11:
Dec 2 - Dec 9 | Start CS 297 Report |
Week 12:
Dec 10 - Dec 14 | Review the report; Deliverable 5 Due (CS297 Report) |
Deliverables
The full project will be done when CS298 is completed. The following will be done by the end of CS297:
1. Implement a neural network that detects emotion from facial expressions with good accuracy.
2. Generate a video dataset containing a person displaying different facial expressions.
3. Develop a mobile application that can access both cameras simultaneously in a split-screen view.
4. Deploy the model on the mobile application for initial testing.
5. CS297 Report Due.
References
[1] L. Zhang, Y. Yang, W. Li, S. Dang and M. Zhu, "Research of Facial Expression Recognition Based on Deep Learning," 2018 IEEE 9th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 2018, pp. 1-4, doi: 10.1109/ICSESS.2018.8663777.
[2] Mao Xu, Wei Cheng, Qian Zhao, Li Ma and Fang Xu, "Facial expression recognition based on transfer learning from deep convolutional networks," 2015 11th International Conference on Natural Computation (ICNC), Zhangjiajie, 2015, pp. 702-708, doi: 10.1109/ICNC.2015.7378076.
[3] A. Fathallah, L. Abdi and A. Douik, "Facial Expression Recognition via Deep Learning," 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, 2017, pp. 745-750, doi: 10.1109/AICCSA.2017.124.