Lei Zhang
Chris Pollett (Presenting)
May 2021
| Model | ACD |
|---|---|
| TGAN | 0.305 |
| MoCoGAN | 0.201 |
| Our Model | 0.167 |
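To make the metric in the table concrete: Average Content Distance (ACD), as used in the MoCoGAN paper [4], scores how consistently a generated video preserves its content (e.g. a face identity) across frames, with lower values being better. A minimal sketch, assuming per-frame identity feature vectors have already been extracted (in practice from a face-recognition network such as OpenFace) and that ACD is taken as the mean pairwise L2 distance between them:

```python
import numpy as np

def average_content_distance(features):
    """ACD for one video: mean pairwise L2 distance between
    per-frame feature vectors. Lower values mean the content
    stays more consistent across frames."""
    T = len(features)
    dists = [np.linalg.norm(features[i] - features[j])
             for i in range(T) for j in range(i + 1, T)]
    return float(np.mean(dists))

# Toy example: 4 frames with 8-dim features; small per-frame
# noise stands in for minor identity drift during generation.
rng = np.random.default_rng(0)
base = rng.normal(size=8)
frames = np.stack([base + 0.01 * rng.normal(size=8) for _ in range(4)])
print(average_content_distance(frames))
```

The reported numbers would then be this score averaged over many generated videos; the feature extractor and the exact averaging are assumptions here, not taken from the slides.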
[1] M. Saito, E. Matsumoto, and S. Saito, "Temporal generative adversarial nets with singular value clipping," Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2830--2839.
[2] R. Abdal, Y. Qin, and P. Wonka, "Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?," Proceedings of the IEEE International Conference on Computer Vision. 2019.
[3] T. Karras, et al., "Progressive growing of GANs for improved quality, stability, and variation," International Conference on Learning Representations (ICLR), 2018.
[4] S. Tulyakov, et al., "MoCoGAN: Decomposing Motion and Content for Video Generation," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 1526--1535, doi: 10.1109/CVPR.2018.00165.
[5] T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4396--4405, doi: 10.1109/CVPR.2019.00453.
[6] N. Aifanti, C. Papachristou, and A. Delopoulos, "The MUG facial expression database," 11th International Workshop on Image Analysis for Multimedia Interactive Services WIAMIS 10. IEEE, 2010.
[7] T. Karras, et al., "Analyzing and improving the image quality of StyleGAN," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 8107--8116.
[8] S. Ji, et al., "3D convolutional neural networks for human action recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221--231, 2013.