Head Tracking Documentation

This is the documentation for a Python script used for tracking head-mounted LEDs. Built primarily on the OpenCV library, the script takes in multiple .mpg video files and outputs a single annotated .mp4 video file as well as a .csv file containing the locations of the tracked points in each frame.
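
The exact logic lives in trackHead.py; the sketch below is only a rough illustration of that input/output flow, and the output file name and frame rate it uses are placeholders, not the script's actual values.

    import glob
    import cv2

    # Hypothetical sketch: read every .mpg file in the current directory and
    # append its frames to a single .mp4 output.
    writer = None
    for path in sorted(glob.glob("*.mpg")):
        cap = cv2.VideoCapture(path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if writer is None:
                h, w = frame.shape[:2]
                fourcc = cv2.VideoWriter_fourcc(*"mp4v")
                # "tracked_output.mp4" and 30 fps are assumed placeholders.
                writer = cv2.VideoWriter("tracked_output.mp4", fourcc, 30.0, (w, h))
            # ... LED detection and annotation would happen here ...
            writer.write(frame)
        cap.release()
    if writer is not None:
        writer.release()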

Installation and Setup

Install the Python 3.7 version of Anaconda

Download Here

Create a new virtual environment

Open a terminal window and enter the following:

conda create -n trackingEnv python=3.8 numpy pandas

This will create a Python 3.8 environment called “trackingEnv” with the numpy and pandas libraries installed.

Switch into your new environment

conda activate trackingEnv

(trackingEnv) should now appear at the start of the command prompt.

Install OpenCV 4.2 in your environment

conda install -c conda-forge opencv=4.2.0
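
To confirm the install worked, you can print the OpenCV version from within the environment (a quick sanity check, not part of the tracking workflow). It should print 4.2.0:

python -c "import cv2; print(cv2.__version__)"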

Processing Video Files

Prepare video processing directory

Place trackHead.py into a folder containing the timestamps.csv file and the .mpg files you want processed

Execute tracking script from the command line

  1. Open a terminal window
  2. Switch into your trackingEnv if you are not already in it
  3. cd into your video processing directory
  4. Run the trackHead.py script by entering the following:
python trackHead.py

Resulting Output

Annotated footage

All of the separate .mpg files are combined into a single .mp4 in which each frame contains three panels stitched together.

_images/mp4_output.png

The left panel shows the original video footage. The center panel shows the pixels that remain after filtering for red and blue. The right panel places circular marks at the centers of the filtered pixel clusters.
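
The exact color thresholds and marker sizes are defined inside trackHead.py; the sketch below only illustrates the general approach, and its HSV ranges, marker radius, and function name are assumptions.

    import cv2
    import numpy as np

    def mark_led_centers(frame):
        """Filter a BGR frame for red and blue pixels, then mark each cluster center."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        # Assumed HSV ranges; the script's actual thresholds may differ.
        blue_mask = cv2.inRange(hsv, (100, 150, 50), (130, 255, 255))
        red_mask = (cv2.inRange(hsv, (0, 150, 50), (10, 255, 255)) |
                    cv2.inRange(hsv, (170, 150, 50), (180, 255, 255)))
        mask = blue_mask | red_mask

        # Center panel: keep only the filtered pixels.
        filtered = cv2.bitwise_and(frame, frame, mask=mask)

        # Right panel: draw a circle at the centroid of each filtered cluster.
        marked = frame.copy()
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centers = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] == 0:
                continue
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            centers.append((cx, cy))
            cv2.circle(marked, (cx, cy), 5, (0, 255, 0), 2)

        # Stitch the three panels side by side, matching the layout of the output video.
        return np.hstack([frame, filtered, marked]), centers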

Tracked coordinates data

A .csv file that combines the timestamps.csv data with the x,y coordinates of the tracked LEDs.

_images/csv.png
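
The actual column layout is determined by trackHead.py; the sketch below only illustrates how per-frame timestamps and tracked coordinates could be joined, and the coordinate column names and output file name are assumptions.

    import pandas as pd

    # Hypothetical sketch: join the timestamp rows with per-frame LED coordinates.
    timestamps = pd.read_csv("timestamps.csv")

    # In practice 'coords' would be filled in frame by frame during tracking;
    # the real column names are defined inside trackHead.py.
    coords = pd.DataFrame(columns=["red_x", "red_y", "blue_x", "blue_y"])

    combined = pd.concat([timestamps.reset_index(drop=True), coords], axis=1)
    combined.to_csv("tracked_output.csv", index=False)  # output file name is a placeholder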