Pearce can do Monday the 13th, 9-10 AM, or in the late afternoon around 3 or 4 PM.
Pearce can do Thursday the 16th in the morning or late afternoon, just not between 12 and 3 PM.
Teng Moh can do Monday the 13th from 10 AM-3 PM and 4-6 PM.
Be sure to fill out the Table of Contents and List of Figures.
In every section, connect back to what I did. Note the connection at the start of the section.
Review the page breaks too. Do not leave orphaned figure captions. Add a page break before a section header that falls at the bottom of a page.
Sign up for the graduate commencement events.
Remove the chart titles, but add back the axis titles.
Change "Fourth Wall" to "Fourth Detection Layer"
Make the requested changes to the report. Send it to committee members.
Finish the report.
Get the form.
Continue with the Introduction section. Outline what the rest of the report will cover.
Cover convolutional networks in the Background section first. Also, mention the topics of discussion up front and define them. Then go into detail about their effectiveness for this use case.
Tie things back to my project's use case in every section.
Demo of time series plotting.
Near-perfect accuracy on the CARPK dataset when downsampling by a factor of 2.
Time series for parking lot capacity. Use PKLot or CARPK.
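As a sketch of what the time-series demo might look like: a minimal matplotlib plot of daily car counts, assuming the counts have already been produced by the detector. The dates and counts below are placeholders, not real PKLot/CARPK results.

```python
import matplotlib.pyplot as plt
from datetime import date

# Placeholder daily car counts; in practice these come from the detector.
days = [date(2018, 11, d) for d in range(1, 8)]
cars = [42, 51, 48, 60, 55, 23, 19]

plt.plot(days, cars, marker="o")
plt.xlabel("Date")              # axis titles only; no chart title
plt.ylabel("Cars detected")
plt.gcf().autofmt_xdate()       # slant the date labels so they fit
plt.savefig("occupancy.png")
```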
Add a fourth detection layer to detect small objects.
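For reference, YOLO v3 predicts at three scales (strides 32, 16, and 8); the fourth-detection-layer idea adds a prediction head on a finer, stride-4 feature map so small, distant cars cover more grid cells. Below is only a conceptual PyTorch sketch of that idea, not the actual Darknet cfg change; every layer size and channel count here is invented.

```python
import torch
import torch.nn as nn

class FourScaleDetector(nn.Module):
    """Toy four-scale detector: YOLO-style heads at strides 4, 8, 16, 32.
    The stride-4 head is the hypothetical "fourth detection layer"
    meant to help with small objects."""

    def __init__(self, num_outputs=3 * (5 + 1)):  # 3 anchors x (box + obj + 1 class)
        super().__init__()

        def down(cin, cout):  # conv block that halves spatial resolution
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.LeakyReLU(0.1),
            )

        self.s2, self.s4 = down(3, 16), down(16, 32)   # to stride 2, then 4
        self.s8, self.s16, self.s32 = down(32, 64), down(64, 128), down(128, 256)
        # one 1x1 detection head per scale; the stride-4 head is the extra one
        self.heads = nn.ModuleList(
            nn.Conv2d(c, num_outputs, 1) for c in (32, 64, 128, 256)
        )

    def forward(self, x):
        f4 = self.s4(self.s2(x))   # stride-4 features: the fine-grained scale
        f8 = self.s8(f4)
        f16 = self.s16(f8)
        f32 = self.s32(f16)
        return [head(f) for head, f in zip(self.heads, (f4, f8, f16, f32))]

preds = FourScaleDetector()(torch.zeros(1, 3, 416, 416))
print([tuple(p.shape) for p in preds])  # 104x104, 52x52, 26x26, 13x13 grids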
Improved accuracy on PKLot compared to the v2 model.
Need to iterate on it further: try downsampling the inputs, or add a fourth detection layer.
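One simple reading of "downsampling the inputs": halve each image's resolution with Pillow before feeding it in, matching the factor-of-2 result noted above. The file names are examples.

```python
from PIL import Image

# Halve an input image's resolution before training/detection.
img = Image.open("lot.jpg")                        # example input path
img.resize((img.width // 2, img.height // 2)).save("lot_half.jpg")
```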
Completed work on YOLO v3 and got it working on the datasets. However, the training settings were incorrect, so I am trying again.
Counts the cars in a list of images and prints out statistics (e.g., max, min, mean).
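A rough sketch of that statistics step, assuming per-image car counts are already available from the detector; counts_per_image below is a placeholder list, not real output.

```python
from statistics import mean

# Placeholder per-image car counts; in practice, len(detections) per image.
counts_per_image = [34, 41, 38, 52, 47]

print("images:", len(counts_per_image))
print("max:   ", max(counts_per_image))
print("min:   ", min(counts_per_image))
print("mean:  ", round(mean(counts_per_image), 1))
```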
The PKLot dataset initially proved promising because its fixed camera angle on the parking lot makes it easy to track throughput and deltas in occupancy. However, the dataset has a serious flaw: it does not annotate all the cars in a given image. As a result, the detection accuracy is extremely poor.
Hsieh et al. took a subset of the PKLot dataset and properly annotated all the cars, referring to this subset as the PUCPR dataset. Unfortunately, the subset is quite small. Training and detecting with this properly annotated subset yields lower accuracy than the CARPK drone perspective.
Browsed the CARPK dataset and obtained some images of the same parking lot area. However, it is difficult to get fixed-camera shots like those in the PKLot and PUCPR datasets, since CARPK images come from drones.
The detector now takes a list of images and outputs predictions to a folder. A simple bash script could feed in a set of images from day to day; adding logic to track the parking lot occupancy each day is now trivial.
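A minimal sketch of that day-to-day driver in Python, assuming a detect() entry point that returns the predicted boxes for one image; detect(), the folder layout, and the log file name are all placeholders.

```python
import csv
from datetime import date
from pathlib import Path

def detect(image_path):
    """Stand-in for the real YOLO detection call; returns predicted boxes."""
    raise NotImplementedError

def run_day(image_dir="images/today", log_file="occupancy_log.csv"):
    # Count detections in every image taken today, then log the peak count.
    counts = [len(detect(p)) for p in sorted(Path(image_dir).glob("*.jpg"))]
    peak = max(counts) if counts else 0   # peak occupancy seen that day
    with open(log_file, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), peak])
```

Run once per day (e.g., from cron or the bash script mentioned above) to accumulate the occupancy log.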
Yet another cars-in-a-parking-lot dataset. Like PKLot, it uses a fixed camera position. It differs from PKLot and PUCPR in that the camera is closer, so the cars appear much larger in each frame.
Data compilation and verification have not been done yet. If it is comprehensively annotated, then use it to train and detect.
Get training working on the PKLot dataset. Start with the YOLO v2 net.
Hoping to get the new PKLot dataset training done. This dataset has the same camera location.
Otherwise, browse the CARPK dataset for images of the same area of the parking lot.
Summarized the training data, annotation conversion, and other related information in some slides. CARPK is great; COWC is not so great.
Track how many cars are detected on a given day. On subsequent image inputs, output the count and compare it to previous days.
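A sketch of that comparison step, reading the same hypothetical occupancy_log.csv from the driver sketch above and printing day-over-day deltas.

```python
import csv

# Read the daily log of (date, count) rows and report day-over-day changes.
with open("occupancy_log.csv") as f:
    rows = [(day, int(count)) for day, count in csv.reader(f)]

for (prev_day, prev), (day, count) in zip(rows, rows[1:]):
    print(f"{day}: {count} cars ({count - prev:+d} vs. {prev_day})")
```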
Conversion of the CARPK dataset to YOLO format is complete; it runs through the CNN during training without runtime errors. Drawing bounding boxes from the YOLO-formatted annotations also works: boxes drawn with Pillow match those of the original CARPK annotation format.
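A sketch of what the conversion and the Pillow check could look like, assuming CARPK annotation lines of the form "xmin ymin xmax ymax class" in pixel coordinates; the paths and helper names are illustrative.

```python
from PIL import Image, ImageDraw

def carpk_to_yolo(ann_path, img_path, out_path):
    """Rewrite 'xmin ymin xmax ymax class' pixel boxes as normalized
    YOLO lines: 'class x_center y_center width height'."""
    w, h = Image.open(img_path).size
    with open(ann_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            x1, y1, x2, y2, cls = map(float, line.split())
            cx, cy = (x1 + x2) / 2 / w, (y1 + y2) / 2 / h
            bw, bh = (x2 - x1) / w, (y2 - y1) / h
            fout.write(f"{int(cls)} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}\n")

def draw_yolo_boxes(img_path, yolo_path, out_path):
    """Redraw the converted boxes with Pillow to eyeball the conversion."""
    img = Image.open(img_path)
    w, h = img.size
    draw = ImageDraw.Draw(img)
    for line in open(yolo_path):
        _, cx, cy, bw, bh = map(float, line.split())
        draw.rectangle([(cx - bw / 2) * w, (cy - bh / 2) * h,
                        (cx + bw / 2) * w, (cy + bh / 2) * h], outline="red")
    img.save(out_path)
```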
Summarize the training data, annotation conversion, and other related information in some slides.
Initial attempts to train and detect cars using the CARPK dataset have failed. Investigation is ongoing.
Previous efforts to train on the VOC dataset have worked, so the PyTorch code seems sound. Note that VOC differs from CARPK in the number of objects per image.
Data annotation conversion for the CARPK dataset is complete. It is now convertible from its default format to the one YOLO takes.
Verify things are accessible.
Refer to previous people's work for proper formatting.
Include 297 deliverable summaries. Review what I created in 297.
Flesh out the discussion some more for each 297 deliverable.
Add a Next Steps section.
See the format section for more details.
Get training to work.
The Summary Report draft is due Dec 4th. It should have the following.
Also revise the proposal; look for the separate template and form. Also find two other professors for the committee.
Also fill out the GAPE form.
Getting YOLO v3 working (WIP).
Look into reformatting the data annotations into VOC or YOLO input format.
Talking to a contact about the data.
Getting YOLO v3 working (WIP).
Updates to the deliverables.
Swapping deliverables because it is not clear what the annotated training data input should look like.
COCO Dataset Review
Same dataset used in training YOLO.
Build some code to multiply the dataset via augmentation. Look into transforming the annotated data.
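One way to multiply the dataset is geometric augmentation. Below is a sketch of a horizontal flip that keeps YOLO-format annotations consistent: only the normalized x-center changes (x becomes 1 - x). The function and file names are made up.

```python
from PIL import Image

def flip_horizontal(img_path, label_path, out_img, out_label):
    """Mirror an image and its YOLO labels; only x_center changes."""
    Image.open(img_path).transpose(Image.FLIP_LEFT_RIGHT).save(out_img)
    with open(label_path) as fin, open(out_label, "w") as fout:
        for line in fin:
            cls, cx, cy, bw, bh = line.split()
            fout.write(f"{cls} {1 - float(cx):.6f} {cy} {bw} {bh}\n")
```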
OpenCV demo + slides
Also highlighted the power of Docker to containerize applications. Potential to Dockerize Yioop for local execution.
Reminisced about the PowerBook G4
Missed the meeting due to an intense work situation
Presented YOLO v3
Prepare OpenCV demo + slides for next meeting
Some datasets and examples
Mark when deliverables are due
Reading YOLO v3
Showed the professor some fun PyTorch demos
Add when deliverables are due
Name specific papers for the summary
Slides on how to set up PyTorch and demo it.
Update description
Deliverables:
I have a machine to run deep learning training on (not my MacBook Air)