Chris Pollett > Students >
Lei


Project Blog

Week 8- Mar 24, 2020:

What was done:

  1. Generated 11 different emotion directions from a labeled dataset
  2. Generated videos of random human faces showing emotions by moving through the latent space
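A common way to obtain such emotion directions from labeled latent codes is the difference of class means. The sketch below illustrates the general technique only; the names, sizes, and random data are illustrative, not the project's actual code.

```python
import numpy as np

# Toy sketch: estimate an "emotion direction" in a GAN latent space as the
# difference of class means between labeled latent codes.
rng = np.random.default_rng(0)
latent_dim = 512

# Hypothetical labeled latents: codes whose generated faces were tagged
# "happy" vs "neutral".
happy_latents = rng.normal(0.5, 1.0, size=(100, latent_dim))
neutral_latents = rng.normal(0.0, 1.0, size=(100, latent_dim))

# The direction is the normalized difference of class means.
direction = happy_latents.mean(axis=0) - neutral_latents.mean(axis=0)
direction /= np.linalg.norm(direction)

# Moving a latent code along the direction should strengthen the attribute.
z = rng.normal(size=latent_dim)
z_happier = z + 3.0 * direction   # strength 3.0 chosen arbitrarily
```

Repeating this per label gives one direction per emotion; blending directions with different strengths produces the emotion videos.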

What to do:

  1. Add more emotions
  2. Try to improve the generated videos: 1) less hair movement 2) no face identity changes
  3. Train on videos and predict the emotions in a sequence

Week 7- Mar 17, 2020:

What was done:

  1. Used the StyleGAN2 latent space
  2. Improved the video pose transfer to the StyleGAN2 latent space
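One plausible improvement when transferring a video into a latent space (an assumption on my part, not necessarily the method used here) is temporally smoothing the per-frame latent codes to reduce frame-to-frame jitter:

```python
import numpy as np

# Sketch (assumed technique): after projecting each video frame into the
# StyleGAN2 latent space, smooth the per-frame codes with a moving average
# so the decoded video does not flicker.
def smooth_latents(latents: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoothing over the time axis of (T, latent_dim) codes."""
    kernel = np.ones(window) / window
    # Convolve each latent dimension independently; 'same' keeps all T frames.
    return np.stack(
        [np.convolve(latents[:, d], kernel, mode="same")
         for d in range(latents.shape[1])],
        axis=1,
    )

rng = np.random.default_rng(0)
trajectory = rng.normal(size=(16, 8))   # 16 frames, toy 8-dim latents
smoothed = smooth_latents(trajectory)
```

The smoothed trajectory is then decoded frame by frame with the generator.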

What to do:

  1. Generate different emotions in the latent space

Week 6- Mar 10, 2020:

What was done:

  1. Video pose transfer
  2. Predicted noise vectors from images directly into the latent space

What to do:

  1. Video generation in the latent space
  2. Optimize the video pose transfer

Week 5- Mar 3, 2020:

What was done:

  1. Used an LSTM to predict latent vectors
  2. Generated an interpolated video directly from the latent space
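An interpolation video comes from linearly blending two latent codes and decoding every intermediate code as one frame. A minimal sketch of that blending step (the 512-dim size and frame count are illustrative):

```python
import numpy as np

# Sketch: produce the latent codes for an interpolation video by linearly
# blending a start code and an end code; a generator would decode each one.
def interpolate_latents(z_start, z_end, num_frames=16):
    """Return num_frames latents linearly spaced between z_start and z_end."""
    alphas = np.linspace(0.0, 1.0, num_frames)[:, None]
    return (1.0 - alphas) * z_start + alphas * z_end

rng = np.random.default_rng(1)
z0, z1 = rng.normal(size=512), rng.normal(size=512)
frames_latents = interpolate_latents(z0, z1)   # (16, 512)
```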

What to do:

  1. Take a video of different face poses and transfer the style to a random face
  2. Find a longer video for LSTM prediction training

Week 4- Feb 25, 2020:

What was done:

  1. Trained Progressive Growing of GANs on the UCF101 dataset.
  2. Tried to recover human face images from a pre-trained latent space (Flickr-Faces-HQ dataset).
  3. Explored the trained latent space
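Recovering an image from a pre-trained latent space is usually done by optimization: start from a random code and gradient-descend on the pixel error between the generated output and the target. The toy below replaces the real generator with a fixed linear map so the whole loop runs in numpy; it shows the shape of the technique, not the project's actual code.

```python
import numpy as np

# Toy sketch of optimization-based latent recovery: find the code z whose
# generated output best matches a target image. A real run would
# backpropagate through the GAN generator; here the "generator" is linear.
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 16))      # stand-in generator: z (16,) -> image (64,)
z_true = rng.normal(size=16)
target = A @ z_true                # the "photo" we want to invert

z = np.zeros(16)
lr = 1e-3
for _ in range(2000):
    residual = A @ z - target      # pixel-space error
    z -= lr * 2.0 * A.T @ residual # gradient of ||A z - target||^2

recovery_error = np.linalg.norm(A @ z - target)
```

With a real generator the loss would typically mix pixel and perceptual terms, but the descent loop is the same.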

What to do:

  1. Apply latent-space search of a pre-trained GAN to videos

Week 3- Feb 18, 2020:

What was done:

  1. Implemented a 2D + 1D GAN to generate videos.
  2. Used an LSTM to generate 16 noise vectors, and used them to generate 16 frames.
  3. Could not directly train through a pre-trained generator due to a backpropagation problem.
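The LSTM-for-noise-vectors idea can be sketched as follows: unroll a recurrent cell for 16 steps and treat each hidden state as the noise vector for one frame. This minimal numpy LSTM uses random, untrained weights purely to make the data flow concrete; all sizes are illustrative.

```python
import numpy as np

# Minimal numpy LSTM cell, used only to show unrolling a recurrence for
# 16 steps so each step emits one noise vector (which a frame generator
# would then decode). Weights are random, not trained.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates are stacked [input, forget, cell, output]."""
    n = h.shape[0]
    gates = W @ x + U @ h + b
    i = sigmoid(gates[:n])
    f = sigmoid(gates[n:2 * n])
    g = np.tanh(gates[2 * n:3 * n])
    o = sigmoid(gates[3 * n:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
dim = 100                                    # noise-vector size (illustrative)
W = rng.normal(0, 0.1, size=(4 * dim, dim))
U = rng.normal(0, 0.1, size=(4 * dim, dim))
b = np.zeros(4 * dim)

h = c = np.zeros(dim)
x = rng.normal(size=dim)                     # initial noise seed
noise_vectors = []
for _ in range(16):                          # one noise vector per video frame
    h, c = lstm_step(x, h, c, W, U, b)
    noise_vectors.append(h)
    x = h                                    # feed the output back in
noise_vectors = np.stack(noise_vectors)      # (16, dim)
```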

What to do:

  1. Find a way to use a pre-trained GAN latent space
  2. Create a training dataset from the recovered latent space, i.e. image-to-noise-vector mappings
  3. Train a GAN to find the closest mappings from given frames to noise vectors
  4. Recover a synthesized video from the noise vectors

Week 2- Feb 11, 2020:

What was done:

  1. Implemented paper 2. Created a short video by searching a pre-trained GAN latent space.
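The simplest form of "searching" a pre-trained latent space is sampling many candidate codes, decoding each, and keeping the one closest to the target frame. A toy sketch with a stand-in linear generator (sizes and sample count are illustrative):

```python
import numpy as np

# Sketch of latent-space search: sample candidate codes, decode each with a
# (stand-in) generator, keep the code whose output is nearest the target.
rng = np.random.default_rng(0)
G = rng.normal(size=(64, 32))              # stand-in generator weights

target_frame = G @ rng.normal(size=32)     # frame we want to reproduce

candidates = rng.normal(size=(5000, 32))   # random latent samples
outputs = candidates @ G.T                 # decode all candidates: (5000, 64)
dists = np.linalg.norm(outputs - target_frame, axis=1)
best_z = candidates[np.argmin(dists)]      # best-matching latent code
```

Repeating the search per frame, or refining `best_z` with gradient descent, yields a latent trajectory that decodes into a short video.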

What to do:

  1. Write a GAN model to generate a latent vector that maps to its original image. Write a model to predict the next frames.

Week 1- Feb 4, 2020:

What was done:

  1. Finished the CS298 proposal.

What to do:

  1. Implement paper 2, and search GAN latent space.

Week 15- Dec 3, 2019:

What was done:

  1. Finished the second version of CS297 report.

What to do:

  1. Finish the final changes to the CS297 report and submit it to Canvas. Fix W3C HTML errors.

Week 14- Dec 3, 2019:

What was done:

  1. Finished draft version of CS297 report.

What to do:

  1. Make some changes to the draft report.

Week 13- Nov 26, 2019:

What was done:

  1. Finished deliverable 4. Created a few slides for video generation idea.

What to do:

  1. Draft version of CS297 report.

Week 12- Nov 19, 2019:

What was done:

  1. Used an LSTM layer to simplify the 3D GAN to 2D.

What to do:

  1. Finish deliverable 4. Create a few slides for video generation idea.

Week 11- Nov 12, 2019:

What was done:

  1. Read paper 8 and tried to improve video quality with a separate temporal layer. Tried to generate a video with 8 frames, which didn't work.

What to do:

  1. Refine the video generation model.

Week 11- Nov 05, 2019:

What was done:

  1. Further improved the quality of generated videos with pix2pix techniques.

What to do:

  1. Build a hypothesis: use a tiered 2D GAN plus a 1D temporal layer to generate fake videos instead of a 3D GAN.
  2. Keep investigating the pix2pix-to-video method
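The tiered idea factorizes a 3D operation into a per-frame 2D stage followed by a cheap 1D stage across time. The sketch below uses simple averaging filters in place of learned convolutions, just to make the tensor shapes of that factorization concrete; all sizes are illustrative.

```python
import numpy as np

# Shape sketch of a tiered 2D + 1D pipeline: the 2D stage handles each frame
# on its own, then a 1D stage mixes information across time.
def spatial_stage(video):
    """Per-frame 2x2 average pooling: (T, H, W) -> (T, H//2, W//2)."""
    T, H, W = video.shape
    return video.reshape(T, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def temporal_stage(video, window=3):
    """1D average over time at every pixel; shape is preserved."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode="same"), 0, video)

clip = np.random.default_rng(0).normal(size=(16, 64, 64))   # 16 frames
x = spatial_stage(clip)    # (16, 32, 32): spatial work, no time mixing
x = temporal_stage(x)      # (16, 32, 32): time mixing, no spatial work
```

The appeal is cost: a k x k x k 3D filter needs k^3 taps per output, while the factored version needs k^2 + k.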

Week 10- Oct 29, 2019:

What was done:

  1. Improved the video quality with labeled images and the pix2pix framework. Able to generate moving objects, but they are not clear.

What to do:

  1. Try to further improve video quality with the pix2pix framework. Use one-frame-to-many-frames mappings when training.

Week 9- Oct 22, 2019:

What was done:

  1. The 3D GAN does not seem to work well for generating fake videos; I got only a static background after training the 3D video GAN for a day.

What to do:

  1. Try to explore labeling techniques to reduce the complexity of the images

Week 8- Oct 15, 2019:

What was done:

  1. Created a model to generate videos

What to do:

  1. Improve the quality of generated videos

Week 6- Oct 1, 2019:

What was done:

  1. Slides for 3D Convolutional Neural Networks for Human Action Recognition

What to do:

  1. Finish deliverable 2

Week 5- Sep 24, 2019:

What was done:

  1. Improved deliverable 1
  2. Slides for Generating Videos with Scene Dynamics

What to do:

  1. Read paper: 3D Convolutional Neural Networks for Human Action Recognition and write slides

Week 4- Sep 17, 2019:

What was done:

  1. Finished deliverable 1
  2. Modified the project proposal, added bio info, and added blog entries

What to do:

  1. Make improvements to the GAN training dataset
  2. Upload deliverable 1
  3. Read paper: Generating Videos with Scene Dynamics and write slides

Week 3- Sep 10, 2019:

What was done:

  1. Wrote slides for paper 7
  2. Read book: GANs in Action

What to do:

  1. Write a GAN program to generate fake Chinese numerals
  2. Modify the project proposal, add bio info, and add blog entries
  3. Check paper Eigenheads for Reconstruction

Week 2- Sep 3, 2019:

What was done:

  1. Read the paper: Generative Adversarial Nets
  2. Wrote the project proposal

What to do:

  1. Write a GAN program to generate fake Chinese numerals
  2. Write slides for paper 7

Week 1- Aug 27, 2019:

What was done:

  1. Discussed projects.

What to do:

  1. Write project proposal