Automatic detection is useful. Invisible automatic modification is where privacy tools start to lose trust.
That is why tools such as Smart Text Redactor, Document Redactor, and QR Code Redactor work better when detections become editable overlays instead of instant bitmap edits.
The implementation question is not only "how do we detect something sensitive?" It is also "what object should exist in the editor after detection succeeds?"
Step 1: Normalize detector output first
OCR, barcode APIs, license-plate heuristics, and signature detection all start from different inputs. The editor should not care about those differences once a candidate exists.
The practical boundary is a normalized region format:
```typescript
{ left, top, width, height }
```
Once every detector returns that shape, the rest of the editor can stay consistent.
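As a minimal sketch of that boundary, every detector can be reduced to the same shape through a small adapter. The `RedactRegion` interface and the corner-point input shape below are illustrative assumptions, not the article's actual types:

```typescript
// Assumed normalized region shape shared by every detector.
interface RedactRegion {
  left: number;
  top: number;
  width: number;
  height: number;
}

// Example adapter: a QR library that reports corner points
// (a hypothetical shape) is reduced to an axis-aligned bounding box.
function regionFromCorners(corners: { x: number; y: number }[]): RedactRegion {
  const xs = corners.map((c) => c.x);
  const ys = corners.map((c) => c.y);
  const left = Math.min(...xs);
  const top = Math.min(...ys);
  return {
    left,
    top,
    width: Math.max(...xs) - left,
    height: Math.max(...ys) - top,
  };
}
```

OCR word boxes, plate rectangles, and signature regions each get their own adapter, but nothing downstream of the adapter needs to know which detector produced a region.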
Step 2: Insert editor objects, not one-shot mutations
In this codebase, auto-generated redaction suggestions are inserted as ordinary editor objects. A text detector can call addRedact, while blur or pixelation detectors can call addEffectRegion.
That keeps automatic suggestions under the same interaction model as manual edits:
- moveable
- resizable
- deletable
- visible before export
This matters because privacy tooling is full of false positives, partial matches, and edge cases. A detection result should be inspectable, not silently final.
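The routing step can be sketched as follows. The `EditorLike` interface and `applyDetections` helper are assumptions for illustration; only the `addRedact` and `addEffectRegion` entry points come from the article:

```typescript
interface RedactRegion { left: number; top: number; width: number; height: number; }

// Hypothetical slice of the editor surface: the two insertion points
// named above, nothing else.
interface EditorLike {
  addRedact(region: RedactRegion, options?: { autoGenerated?: string; select?: boolean }): void;
  addEffectRegion(region: RedactRegion, effect: "blur" | "pixelate"): void;
}

// Route detector candidates into ordinary editor objects instead of
// mutating pixels: text hits become solid redacts, face hits become
// blur regions. Both remain moveable, resizable, and deletable.
function applyDetections(
  editor: EditorLike,
  textRegions: RedactRegion[],
  faceRegions: RedactRegion[]
): void {
  textRegions.forEach((r) => editor.addRedact(r, { autoGenerated: "ocr", select: false }));
  faceRegions.forEach((r) => editor.addEffectRegion(r, "blur"));
}
```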
Step 3: Tag auto-generated objects by source
The useful implementation detail here is tagging objects with a source marker.
```typescript
redact.data = {
  objectType: "shape",
  filled: true,
  autoGenerated: options?.autoGenerated,
};
```

That makes it possible to distinguish:
- manual regions added by the user
- OCR-generated regions
- QR-generated regions
- license-plate or signature suggestions
Without that tag, every re-detect operation becomes destructive because the editor no longer knows what it is allowed to replace.
Step 4: Replace only what the detector owns
Once objects carry a source tag, a detector rerun can clear only its own previous suggestions.
```typescript
replaceAutoRedacts(regions: RedactRegion[], sourceTag: string) {
  this.clearAutoGenerated(sourceTag);
  regions.forEach((region) =>
    this.addRedact(region, {
      autoGenerated: sourceTag,
      select: false,
    })
  );
}
```

That means a second OCR pass can replace only OCR suggestions while preserving the boxes the user drew manually. This is one of the most important interaction details in a real privacy editor.
Users should not lose good manual cleanup just because they re-ran detection.
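The ownership rule itself reduces to a single filter over the object list. This is a minimal sketch, assuming objects carry the `data.autoGenerated` tag from Step 3; the `TaggedObject` type and `clearAutoGenerated` as a standalone function are illustrative:

```typescript
// Assumed minimal shape: any editor object, possibly tagged with the
// detector that produced it.
interface TaggedObject {
  data?: { autoGenerated?: string };
}

// Remove only the requesting detector's previous suggestions.
// Manual objects (no tag) and other detectors' suggestions survive.
function clearAutoGenerated(objects: TaggedObject[], sourceTag: string): TaggedObject[] {
  return objects.filter((o) => o.data?.autoGenerated !== sourceTag);
}
```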
Step 5: Export is still a reconstruction step
The overlay architecture does not stop at editing time. Export still needs to rebuild the final result from the source image plus the current object set.
In the markup editor, the export path creates a fresh StaticCanvas, adds the original image, then clones or reconstructs every overlay object onto it.
That design preserves the review step all the way until save:
- detector suggests
- user edits
- export reflects the current reviewed state
Nothing gets baked in too early.
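A library-agnostic sketch of that export path, assuming the real editor builds a fresh StaticCanvas: here a plain draw-operation list stands in for the canvas, and the `OverlayObject` and `DrawOp` types are illustrative assumptions:

```typescript
interface OverlayObject {
  kind: "redact" | "blur";
  left: number;
  top: number;
  width: number;
  height: number;
}

type DrawOp = { op: "drawImage" } | { op: "drawOverlay"; object: OverlayObject };

// Export is reconstruction: always start from the untouched source
// image, then replay every overlay in its current, user-reviewed state.
// Nothing a detector proposed earlier is baked into the pixels.
function buildExportOps(objects: OverlayObject[]): DrawOp[] {
  const ops: DrawOp[] = [{ op: "drawImage" }];
  objects.forEach((object) => ops.push({ op: "drawOverlay", object }));
  return ops;
}
```

Because the draw list is rebuilt from the live object set on every export, deleting a false positive in the editor is automatically reflected in the saved file.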
Why this is better than invisible auto-apply
An invisible one-click redaction pipeline sounds convenient, but it collapses three separate concerns into one:
- detection confidence
- user intent
- final export
Keeping detections as overlays untangles those concerns. The detector only proposes. The user decides. The export path serializes the current reviewed state.
For privacy products, that is the safer and more honest architecture.
The practical rule
If an automatic detector can be wrong, it should create editable objects, not an irreversible export.
That rule scales surprisingly well:
- OCR detections
- QR and barcode detections
- signature detections
- license-plate detections
Different detectors can stay very different internally while sharing one reviewable output model. That is usually the point where an auto-detection feature stops feeling like a demo and starts feeling like a trustworthy editor.
