Deliverable 2

Visual and Lingual Emotional Recognition using Deep Learning Techniques

Unity provides many open-source assets to experiment with. I used an open-source anime character provided by Unity to create the synthetic dataset that will be used for the initial testing of my model. I created animations for the seven emotions the anime character can express. For this deliverable, the scope of the dataset is limited to 51 videos of 20 seconds each. The scene created in Unity is captured from different angles while the character expresses random emotions, which improves the variety of the dataset. In the scene, the humanoid expresses random expressions, and the camera angle changes every 20 seconds. This dataset will be used for the initial training and testing of the model in my project; the model will learn distinct facial features to predict the emotion.
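To make the scene setup concrete, the sketch below shows one way the camera angle could be switched every 20 seconds in a Unity C# script. This is a minimal sketch under my own assumptions: the class name, field names, and the orbit geometry are illustrative, not the project's actual code.

```csharp
// CameraAngleSwitcher.cs -- minimal sketch (illustrative names and values).
// Moves the camera to a new random angle around the character every
// switchInterval seconds while the character keeps animating.
using System.Collections;
using UnityEngine;

public class CameraAngleSwitcher : MonoBehaviour
{
    public Transform character;        // the humanoid the camera orbits
    public float switchInterval = 20f; // seconds between camera angle changes
    public float distance = 3f;        // camera distance from the character

    void Start()
    {
        StartCoroutine(SwitchAngle());
    }

    IEnumerator SwitchAngle()
    {
        while (true)
        {
            // Pick a random horizontal angle around the character.
            float angle = Random.Range(0f, 360f) * Mathf.Deg2Rad;
            Vector3 offset = new Vector3(Mathf.Sin(angle), 0.5f, Mathf.Cos(angle)) * distance;
            transform.position = character.position + offset;
            transform.LookAt(character); // keep the face in frame

            yield return new WaitForSeconds(switchInterval);
        }
    }
}
```

Attached to the scene camera with the character assigned in the Inspector, a script along these lines would capture the same expressions from a fresh viewpoint every 20 seconds, matching how the dataset videos are described above.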
Introduction to Unity

My Experience with Unity
Unity Character

Facial Expressions of Model
Animator Controller

This Animator Controller is used to design the flow of the expressions. The controller is linked with a C# script that selects random expressions (a minimal sketch of such a script is given under Source Code below).

Dataset

This dataset contains a total of 51 videos. Each video is 20 seconds long and is taken from a different camera angle to cover the widest possible range of expressions.

Source Code
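As a rough illustration of how the script driving the Animator Controller could work, the sketch below fires a random trigger at fixed intervals. The seven trigger names and the timing value are assumptions made for illustration; the actual script may differ.

```csharp
// RandomExpression.cs -- minimal sketch (trigger names and timing assumed).
// Fires a random Animator trigger so the Animator Controller transitions
// into the matching expression animation.
using System.Collections;
using UnityEngine;

public class RandomExpression : MonoBehaviour
{
    // One Animator Controller trigger per emotion state (assumed names).
    private static readonly string[] Emotions =
        { "Happy", "Sad", "Angry", "Surprised", "Fearful", "Disgusted", "Neutral" };

    public float expressionDuration = 2f; // seconds each expression is held
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
        StartCoroutine(PlayRandomExpressions());
    }

    IEnumerator PlayRandomExpressions()
    {
        while (true)
        {
            // Pick a random emotion and let the Animator Controller
            // handle the transition into that expression's animation.
            string emotion = Emotions[Random.Range(0, Emotions.Length)];
            animator.SetTrigger(emotion);
            yield return new WaitForSeconds(expressionDuration);
        }
    }
}
```

Using triggers keeps the transition logic inside the Animator Controller graph itself, so the script only decides which emotion fires next, which matches the controller/script split described above.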