Real-Time Face Blur

A modular tool for privacy-preserving, real-time face redaction on video streams.

Ani Aggarwal, Vance Degen, Varun Unnithan, Monish Napa, Rohit Kommuru

Live Demo

Real-Time Face Blur takes webcam or video input and automatically finds and blurs faces in each frame while keeping selected, known faces visible.

  • Supports both live camera streams and pre-recorded videos.
  • Runs comfortably faster than real time on a laptop CPU.
  • Configurable blur shape, blur strength, and performance presets.
  • Recognition system lets you whitelist faces that should stay unblurred.

Installation & Quickstart

The project is packaged as a simple Python tool using Conda for dependency management. From the repository root:

  1. Install conda or mamba.
  2. Create the environment:
    conda env create --name face-blur-rt --file face-blur-rt.yml
  3. Activate the environment:
    conda activate face-blur-rt
  4. Run the demo:
    python main.py

You can point the demo to a webcam or to a video file; configuration is handled through simple command-line flags and config files.
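
For instance (the flag names below are hypothetical, shown for illustration only; consult the project's documentation for the actual options):

    python main.py                     # default: live webcam input
    python main.py --input clip.mp4    # hypothetical flag: pre-recorded video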

Initial Goal

The goal of this project is to build a practical, privacy-preserving face blurring system that works in real time on everyday hardware.

Typical use cases include IRL streaming (e.g., Twitch), redacting security footage, and any setting where video must be monitored live without exposing bystanders' identities.

Implementation & Project Architecture

RealTimeFaceBlurrerByFrame orchestrates the entire pipeline, taking in video frames and passing them through detection, tracking, recognition, and blurring. Each stage is implemented as its own class and can be swapped out without changing the rest of the system.

The pipeline is designed to be modular: add a new detector, tracker, or blur method by subclassing the corresponding abstract base class.
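
As an illustration of this plug-in pattern (the class and method names here are invented for the sketch, not the repository's actual API), a new blur method could be added roughly like this:

    from abc import ABC, abstractmethod

    import cv2

    class Blurrer(ABC):
        """Hypothetical base class; the real project defines its own ABCs."""

        @abstractmethod
        def blur(self, frame, boxes):
            """Blur each (x, y, w, h) box in the frame and return the frame."""

    class GaussianBlurrer(Blurrer):
        """Example plug-in: Gaussian blur with configurable strength."""

        def __init__(self, strength=31):
            # cv2.GaussianBlur requires an odd kernel size.
            self.ksize = strength if strength % 2 else strength + 1

        def blur(self, frame, boxes):
            for x, y, w, h in boxes:
                roi = frame[y:y + h, x:x + w]
                frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                    roi, (self.ksize, self.ksize), 0)
            return frame

In this pattern, switching from Gaussian blur to, say, pixelation means constructing the pipeline with a different Blurrer subclass; nothing else changes.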

High-level pipeline: detection → tracking → recognition → blurring.

Key Components

Detection & Tracking

A face detector proposes bounding boxes for each frame, and a tracker associates those boxes over time so the blur stays locked onto moving faces; this pairing is what keeps the system stable and faster than real time on crowded scenes.
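
One common way to hit real-time speeds on CPU is to run the (expensive) detector only every few frames and reuse or track the boxes in between. A minimal sketch of that pattern, using OpenCV's bundled Haar cascade as a stand-in detector (the project's actual detector, tracker, and re-detection interval may differ):

    import cv2

    DETECT_EVERY = 5  # assumed interval: re-detect once per 5 frames
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # webcam; pass a file path for recorded video
    boxes, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % DETECT_EVERY == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            boxes = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5)
        for x, y, w, h in boxes:
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        cv2.imshow("face-blur", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
        frame_idx += 1
    cap.release()
    cv2.destroyAllWindows()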

Recognition

Recognition compares each detected face against a user-supplied whitelist of known faces; whitelisted faces are left visible while every other face is blurred.
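
Conceptually, whitelisting reduces to comparing an embedding of each detected face against stored embeddings of the known faces and skipping the blur when the similarity clears a threshold. A generic sketch (the embedding source, threshold value, and function names are assumptions, not the project's API):

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two embedding vectors."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def should_stay_visible(face_embedding, whitelist, threshold=0.6):
        """True if the face matches any whitelisted identity."""
        return any(cosine_similarity(face_embedding, known) >= threshold
                   for known in whitelist)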

Core Libraries

All dependencies are pinned in face-blur-rt.yml and installed through Conda, as described in Installation & Quickstart above.

Benchmarking & Performance

We benchmark our system on a challenging 6-minute IRL Twitch stream clip with crowds, occlusions, and rapid motion. TinaFace serves as a strong GPU baseline; our system runs entirely on CPU.

While TinaFace achieves strong detection metrics, it fails to run reliably in this real-time, many-face setting. Our detector + tracker combination stays stable and faster than real time.

IoU / miss-rate comparison against TinaFace on a crowded IRL stream.
Frame-rate comparison: our CPU-only implementation runs comfortably faster than real time.
Excess boxes: high-recall tracking sometimes produces extra boxes, but keeps faces safely covered.

The excess-boxes analysis shows how our tracker deliberately errs on the side of drawing more boxes than strictly necessary. This high-recall configuration means we almost never miss a face, even if it leads to a small number of false-positive boxes on background regions. For a privacy-focused application like face blurring, this trade-off is desirable: it is better to blur a few extra patches than to expose a single face.
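
For reference, the IoU (intersection-over-union) metric in these benchmarks measures how well a predicted box overlaps a ground-truth box; a standard implementation for (x1, y1, x2, y2) boxes:

    def iou(box_a, box_b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # Intersection rectangle; zero area if the boxes do not overlap.
        iw = max(0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0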

Discussion & Future Work

The system meets its original goal: real-time face blurring on live and recorded video with configurable speed/accuracy trade-offs and a simple interface for whitelisting faces.

Some current limitations and next steps: