AMLI Student Detection, Tracking & Counting

National Action Council for Minorities in Engineering (NACME) Google Applied Machine Learning Intensive (AMLI) at the University of Arkansas

Developed by:

Description

The goal of this project is to detect, track, and count the students in a classroom. To accomplish this, we used object tracking implemented with YOLOv4, Deep SORT, and TensorFlow. YOLOv4 is a state-of-the-art algorithm that uses deep convolutional neural networks to perform object detection. YOLO divides the input image into an SxS grid of cells, and each cell is responsible for detecting and localizing the objects whose centers fall inside it. We then take the bounding boxes predicted by YOLOv4 and feed these detections into Deep SORT (Simple Online and Realtime Tracking with a Deep Association Metric) to build an accurate object tracker that keeps a persistent ID for each person across frames.
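Conceptually, each frame's detections are handed off to the tracker, which associates them with existing tracks so the same person keeps one ID across frames. The sketch below illustrates that handoff in Python using the public torch.hub YOLOv5 interface for detection; classroom.mp4 and the DeepSortTracker/update() calls are hypothetical placeholders for the Deep SORT code actually used by track.py, not this repo's API.

# Minimal detect -> track sketch (illustrative only).
# Detection uses the public torch.hub YOLOv5 API; DeepSortTracker and its
# update() call are hypothetical stand-ins for the Deep SORT tracker in track.py.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # downloads pretrained weights
model.classes = [0]  # keep only COCO class 0 (person), i.e. students

cap = cv2.VideoCapture("classroom.mp4")  # hypothetical input video
# tracker = DeepSortTracker()            # placeholder for the real Deep SORT tracker

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[:, :, ::-1])     # convert BGR (OpenCV) to RGB for YOLOv5
    detections = results.xyxy[0].tolist()  # [x1, y1, x2, y2, conf, class] per box
    # tracks = tracker.update(detections, frame)  # Deep SORT assigns persistent IDs

cap.release()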

Demo of tracker on humans

Demo Colab

Open In Colab

Tutorials

Usage instructions

  1. Clone the repository recursively:

git clone --recurse-submodules https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet.git

If you already cloned and forgot to use --recurse-submodules, you can run git submodule update --init.

  2. Make sure that you fulfill all the requirements: Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7 (a quick sanity check is sketched after this list). To install them, run:

pip install -r requirements.txt

  3. Make sure you replace the old track.py file with this track.py after cloning the repository above.
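As a quick sanity check for step 2, the short snippet below (an optional sketch, not part of this repository) verifies the Python and torch versions before you run the tracker.

# Optional sanity check for the requirements in step 2 (not part of this repo).
import sys
import torch

assert sys.version_info >= (3, 8), "Python 3.8 or later is required"
print("torch", torch.__version__)                    # should be >= 1.7
print("CUDA available:", torch.cuda.is_available())  # a GPU is strongly recommended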

Tracking sources

Tracking can be run on most common video and image sources:

$ python track.py --source 0  # webcam
                           img.jpg  # image
                           vid.mp4  # video
                           path/  # directory
                           path/*.jpg  # glob
                           'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                           'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream

Select object detection and ReID model

Yolov5

There is a clear trade-off between model inference speed and accuracy. To meet your inference speed/accuracy needs, you can select any YOLOv5 family model; the weights are downloaded automatically:

$ python track.py --source 0 --yolo-weights yolov5n.pt --img 640
                                            yolov5s.pt
                                            yolov5m.pt
                                            yolov5l.pt 
                                            yolov5x.pt --img 1280
                                            ...

StrongSORT

The above applies to StrongSORT models as well. Choose a ReID model that suits your needs from this ReID model zoo:

$ python track.py --source 0 --strong-sort-weights osnet_x0_25_market1501.pt
                                                   osnet_x0_5_market1501.pt
                                                   osnet_x0_75_msmt17.pt
                                                   osnet_x1_0_msmt17.pt
                                                   ...

Filter tracked classes

By default the tracker tracks all MS COCO classes.

If you only want to track persons, we recommend these weights for increased performance:

python track.py --source 0 --yolo-weights yolov5/weights/crowdhuman_yolov5m.pt --classes 0  # track persons only

MOT compliant results

Results can be saved in MOT-compliant format to your experiment folder runs/track/<yolo_model>_<deep_sort_model>/ by running:

python track.py --source ... --save-txt
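Because the project's goal is counting students, the saved results can be post-processed to count unique track IDs. The sketch below assumes the standard MOT text layout (frame, id, left, top, width, height, ...) and a hypothetical output path; adjust both to match what track.py actually writes. Note that ID switches inflate this number, so treat it as an upper bound rather than an exact head count.

# Rough sketch: estimate the number of students by counting unique track IDs
# in the saved results. Assumes one detection per line in MOT text format
# (frame, id, left, top, width, height, ...); the path below is hypothetical.
from pathlib import Path

results_file = Path("runs/track/exp/tracks/vid.txt")  # hypothetical output location

track_ids = set()
for line in results_file.read_text().splitlines():
    fields = line.replace(",", " ").split()  # handle comma- or space-separated files
    if len(fields) >= 2:
        track_ids.add(int(float(fields[1])))  # the second field is the track ID

print(f"Estimated number of students seen: {len(track_ids)}")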
