Face blur sounds simple until the user moves, resizes, toggles, and exports several blur regions on one image.
The implementation behind tools such as Blur Face and Face Pixelator gets much cleaner when you stop thinking of blur as a global effect and start thinking in terms of source-aligned patches.
Step 1: Detect faces, then expand them
Face detection usually gives you a bounding box that is too tight for production use. Hair, space around the sides of the face, and imperfect detections all need extra room.
In this codebase, each detected face is expanded with configurable padding before it becomes an editable region.
```ts
// Padding grows with the larger face dimension.
const inset = Math.round(
  Math.max(detection.width, detection.height) * (paddingPercent / 100)
);
```

Then the region is clamped back into the image bounds and promoted to a minimum useful size.
That means the user starts from a plausible blur patch instead of a detector box that clips too close to the face.
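Here is a minimal sketch of that expansion step, assuming a plain { left, top, width, height } region shape; the function name, signature, and minimum size are illustrative, not the project's actual API.

```ts
interface Region {
  left: number;
  top: number;
  width: number;
  height: number;
}

// Hypothetical helper: grow the detector box, then clamp it to the image.
function expandRegion(
  detection: Region,
  paddingPercent: number,
  imageWidth: number,
  imageHeight: number,
  minSize = 24 // illustrative minimum useful size, in source pixels
): Region {
  const inset = Math.round(
    Math.max(detection.width, detection.height) * (paddingPercent / 100)
  );

  // Expand on all sides, then clamp back into the image bounds.
  const left = Math.max(0, detection.left - inset);
  const top = Math.max(0, detection.top - inset);
  const right = Math.min(imageWidth, detection.left + detection.width + inset);
  const bottom = Math.min(imageHeight, detection.top + detection.height + inset);

  // Promote tiny boxes to the minimum size without leaving the image.
  return {
    left,
    top,
    width: Math.min(imageWidth - left, Math.max(minSize, right - left)),
    height: Math.min(imageHeight - top, Math.max(minSize, bottom - top)),
  };
}
```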
Step 2: Create one blurred source image
Instead of blurring each patch separately, the editor first builds a blurred version of the whole source image.
```ts
// One expensive blur pass over the entire source image.
ctx.filter = `blur(${blurStrength}px)`;
ctx.drawImage(image, 0, 0, canvas.width, canvas.height);
ctx.filter = "none";
```

That blurred image becomes the source for all face patches. It is a good tradeoff: the expensive blur pass happens once per blur-strength setting, not once per patch interaction.
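As a sketch, that pass can be wrapped in a standalone function; the per-strength cache below is an assumption based on the "once per blur-strength setting" behavior, not a confirmed detail of the codebase.

```ts
// Assumed cache: one blurred canvas per blur-strength value.
const blurredSourceCache = new Map<number, HTMLCanvasElement>();

function getBlurredSource(
  image: HTMLImageElement,
  blurStrength: number
): HTMLCanvasElement {
  const cached = blurredSourceCache.get(blurStrength);
  if (cached) return cached;

  const canvas = document.createElement("canvas");
  canvas.width = image.naturalWidth;
  canvas.height = image.naturalHeight;

  const ctx = canvas.getContext("2d")!;
  ctx.filter = `blur(${blurStrength}px)`;
  ctx.drawImage(image, 0, 0, canvas.width, canvas.height);
  ctx.filter = "none";

  blurredSourceCache.set(blurStrength, canvas);
  return canvas;
}
```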
Step 3: Represent each face as a cropped image patch
Each detected face is stored as a FabricImage patch whose cropX and cropY point at the matching area inside the blurred source image.
```ts
const patch = new FabricImage(this.blurredSourceElement!, {
  // left/top are treated as the patch center, which assumes a center origin.
  originX: "center",
  originY: "center",
  left: region.left + region.width / 2,
  top: region.top + region.height / 2,
  width: region.width,
  height: region.height,
  // The crop window exposes only this face's area of the blurred source.
  cropX: region.left,
  cropY: region.top,
});
```

That is the key architectural move. The editor is not drawing blur procedurally inside every patch. It is reusing a pre-blurred source image and exposing only the relevant cropped window.
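For context, here is one way those patches might be wired up for per-face toggling, reusing the Region shape from the earlier sketch; the Map, the id field, and createFacePatch are illustrative stand-ins, not the project's actual bookkeeping.

```ts
import { Canvas, FabricImage } from "fabric";

// Illustrative bookkeeping: one patch per detected face, keyed by region id.
const patches = new Map<string, FabricImage>();

function addFacePatches(
  canvas: Canvas,
  regions: Array<Region & { id: string }>,
  createFacePatch: (region: Region) => FabricImage // the constructor call above
): void {
  for (const region of regions) {
    const patch = createFacePatch(region);
    patches.set(region.id, patch);
    canvas.add(patch);
  }
}

// Toggling a face is just flipping the patch's visibility.
function toggleFace(canvas: Canvas, id: string, enabled: boolean): void {
  const patch = patches.get(id);
  if (!patch) return;
  patch.visible = enabled;
  canvas.requestRenderAll();
}
```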
Step 4: Keep geometry and crop source in sync
The patch cannot just move visually; its crop window has to move with it. Otherwise the rectangle moves while the blurred content inside it keeps showing the wrong part of the image.
That is why the implementation normalizes geometry through a helper that updates both position and crop source together:
```ts
patch.set({
  left: geometry.left + geometry.width / 2,
  top: geometry.top + geometry.height / 2,
  width: geometry.width,
  height: geometry.height,
  // Keep the crop window locked to the same image-space rectangle.
  cropX: geometry.left,
  cropY: geometry.top,
  // Bake any user scaling back into width/height.
  scaleX: 1,
  scaleY: 1,
});
```

For move and resize events, the editor reads the effective dimensions from width * scaleX and height * scaleY, clamps the region to the image bounds, then resets the scale back into normalized geometry.
That prevents the patch from drifting or stretching out of sync with its blur source.
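A sketch of that normalization on Fabric's object:modified event; clampToImage and syncPatchGeometry are hypothetical names standing in for the editor's bounds logic and the set() call shown above.

```ts
import { Canvas, FabricImage } from "fabric";

function wireGeometrySync(
  canvas: Canvas,
  clampToImage: (r: Region) => Region, // assumed bounds helper
  syncPatchGeometry: (patch: FabricImage, r: Region) => void // patch.set(...) above
): void {
  canvas.on("object:modified", (e) => {
    const patch = e.target as FabricImage;

    // Effective on-screen size after the user drags a scale handle.
    const width = patch.width * patch.scaleX;
    const height = patch.height * patch.scaleY;

    // With a center origin, left/top are the patch center in image space.
    const geometry = clampToImage({
      left: patch.left - width / 2,
      top: patch.top - height / 2,
      width,
      height,
    });

    // Re-apply as normalized geometry: scale returns to 1 and the crop follows.
    syncPatchGeometry(patch, geometry);
    canvas.requestRenderAll();
  });
}
```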
Step 5: Rebuild export from the source image
Export uses the same pattern as the rest of the browser-first image editors in this project: create a clean StaticCanvas at original size, add the untouched source image, then replay the visible face patches on top.
```ts
const exportCanvas = new StaticCanvas(util.createCanvasElement(), {
  width,
  height,
});
```

Each visible patch is reconstructed with its current left, top, width, height, cropX, and cropY values. That way the exported file reflects the exact reviewed patch layout, not whatever the on-screen viewport happened to be showing.
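A sketch of the full replay under the same assumptions as the earlier snippets; the function shape and the multiplier: 1 export option are illustrative.

```ts
import { FabricImage, StaticCanvas, util } from "fabric";

function exportWithPatches(
  source: HTMLImageElement,
  blurredSource: HTMLCanvasElement,
  visiblePatches: FabricImage[]
): string {
  // Clean canvas at the original image size, independent of the viewport.
  const exportCanvas = new StaticCanvas(util.createCanvasElement(), {
    width: source.naturalWidth,
    height: source.naturalHeight,
  });

  // Untouched original as the base layer.
  exportCanvas.add(new FabricImage(source, { left: 0, top: 0 }));

  // Replay each enabled patch with its current geometry and crop window.
  for (const patch of visiblePatches) {
    exportCanvas.add(
      new FabricImage(blurredSource, {
        originX: "center",
        originY: "center",
        left: patch.left,
        top: patch.top,
        width: patch.width,
        height: patch.height,
        cropX: patch.cropX,
        cropY: patch.cropY,
      })
    );
  }

  exportCanvas.renderAll();
  return exportCanvas.toDataURL({ format: "png", multiplier: 1 });
}
```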
Why this architecture holds up
This patch-based model solves several problems at once:
- blur strength can change without rebuilding every interaction object
- each face stays independently movable and toggleable
- export can replay only the enabled patches
- geometry stays aligned to original pixels
Most importantly, the editor keeps one clear contract: a face blur region is an editable crop window into a precomputed blur source, not a loose visual overlay with no source relationship.
That is what keeps the effect stable when the user keeps editing right up until export.
