A simple tool that blurs selected faces in video input in real time.
To run the project locally, install Conda or Mamba and follow these steps:
```shell
# Create a conda env for this project and install its dependencies
conda env create --name face-blur-rt --file=face-blur-rt.yml
# Activate the env
conda activate face-blur-rt
# Run the demo
python main.py
```
We use computer vision and object detection to blur faces effectively, in real time, in both live video input and pre-recorded videos, with clear applications to privacy protection.
While face blurring is an existing practice, manual workflows are time-consuming and error-prone. Our solution provides a highly customizable architecture built on efficient techniques and pre-existing face recognition libraries.
The system uses a modular pipeline in which classes such as FaceDetector, FaceRecognizer, and Blurrer are subclassed for specific algorithm implementations. This keeps each stage hot-swappable and maintainable.
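The subclassing scheme can be sketched as follows. This is a minimal illustration of the pattern, not the project's actual code: the method names, signatures, and the stub implementations are assumptions.

```python
from abc import ABC, abstractmethod

# Abstract stages of the pipeline; concrete algorithms subclass these.
class FaceDetector(ABC):
    @abstractmethod
    def detect(self, frame):
        """Return a list of face bounding boxes (x, y, w, h)."""

class FaceRecognizer(ABC):
    @abstractmethod
    def recognize(self, frame, box):
        """Return True if the face in `box` should be blurred."""

class Blurrer(ABC):
    @abstractmethod
    def blur(self, frame, box):
        """Return `frame` with the region in `box` obscured."""

# Toy stand-ins showing how one implementation is swapped in.
class StubDetector(FaceDetector):
    def detect(self, frame):
        return [(0, 0, 2, 2)]  # pretend one face sits in the top-left corner

class BlurEverything(FaceRecognizer):
    def recognize(self, frame, box):
        return True

class BlackoutBlurrer(Blurrer):
    def blur(self, frame, box):
        x, y, w, h = box
        for r in range(y, y + h):
            for c in range(x, x + w):
                frame[r][c] = 0  # crude "blur": black out the region
        return frame

def process_frame(frame, detector, recognizer, blurrer):
    # The driver only talks to the abstract interfaces, so any stage
    # can be hot-swapped without touching this function.
    for box in detector.detect(frame):
        if recognizer.recognize(frame, box):
            frame = blurrer.blur(frame, box)
    return frame

frame = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
out = process_frame(frame, StubDetector(), BlurEverything(), BlackoutBlurrer())
print(out)  # → [[0, 0, 1], [0, 0, 1], [1, 1, 1]]
```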
Pipeline components:
- Face detection: YuNet, SCRFD
- Tracking: SORT
- Face recognition: SFace

An unexpected finding was that deep pretrained face detection models (such as YuNet) often ran faster than trackers (such as SORT). It is therefore often more efficient to run the FaceDetector on every frame than to lean heavily on the tracker, which yields a higher frame rate.
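The scheduling trade-off behind that finding can be sketched by comparing the two strategies. Everything here is a hypothetical stand-in (the detector, the tracker, and the `redetect_every` parameter are not this project's API); it only illustrates why stale tracker boxes lag a moving face between re-detections.

```python
# Hypothetical detector: reports one box that follows the face's x position.
def detect(frame):
    return [(frame["face_x"], 0, 1, 1)]

# Hypothetical tracker that simply reuses the last detections it was given;
# real trackers like SORT predict motion, but still drift between detections.
class NaiveTracker:
    def __init__(self):
        self.boxes = []

    def update(self, detections=None):
        if detections is not None:
            self.boxes = detections
        return self.boxes

def detect_every_frame(frames):
    # Strategy (a): run the detector on every frame. With a fast detector
    # such as YuNet, this can be both faster and more accurate.
    return [detect(f) for f in frames]

def detect_then_track(frames, redetect_every=3):
    # Strategy (b): detect occasionally, track in between.
    tracker, out = NaiveTracker(), []
    for i, f in enumerate(frames):
        dets = detect(f) if i % redetect_every == 0 else None
        out.append(tracker.update(dets))
    return out

frames = [{"face_x": i} for i in range(5)]
print(detect_every_frame(frames))  # boxes follow the face exactly
print(detect_then_track(frames))   # boxes lag between re-detections
```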
We benchmarked our model against TinaFace on a 6-minute clip, downsampled from 60 fps to 30 fps.

Our model runs faster than real time for all tasks except Gaussian blur. The drop in IoU between 10 and 25 seconds in the benchmarks is due to a large number of faces in the frame.
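One common lower-cost alternative to a full Gaussian blur is pixelation, which averages each k-by-k block once instead of applying a kernel around every pixel. Whether this project uses pixelation is not stated here; the sketch below (grayscale, pure Python, hypothetical `pixelate` helper) is only meant to illustrate the cost difference.

```python
def pixelate(region, k):
    """Replace each k x k block of a grayscale image with its mean value."""
    h, w = len(region), len(region[0])
    out = [row[:] for row in region]
    # Each pixel is read once and written once, regardless of blur strength;
    # a Gaussian blur instead does per-pixel work proportional to kernel size.
    for by in range(0, h - h % k, k):
        for bx in range(0, w - w % k, k):
            avg = sum(
                region[y][x]
                for y in range(by, by + k)
                for x in range(bx, bx + k)
            ) // (k * k)
            for y in range(by, by + k):
                for x in range(bx, bx + k):
                    out[y][x] = avg
    return out

face = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [255, 255, 0, 0],
    [255, 255, 0, 0],
]
print(pixelate(face, 4))  # → [[127, 127, 127, 127]] * 4
```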
Our initial goals were met: we built an effective face-blurring model that detects and blurs faces on both live input and recorded videos. The recognition model produces results with relatively high accuracy.
Due to time constraints, we were unable to build an advanced user interface for manual face selection. Additionally, performance may degrade under rapid movement or severe occlusion.