THE MACHINE WATCHES. YOU WATCH IT WATCHING.
You are about to enter a feedback loop. This interface maps your face as a 468-landmark mesh and uses the Facial Action Coding System (FACS) to decompose your micro-expressions into Action Units, reading you while you read your own data in real time. Neural Synchrony begins when the wetware (your 43 facial muscles) and the software (this machine) achieve resonance. The question is not whether you can fool it. The question is: what does it reveal about you?
Edge AI. Local processing only.
No biometrics stored or transmitted.
SENSITIVITY MATRIX
RESEARCH_PROTOCOL // FULL_METHODOLOGY
The human face is a high-bandwidth data transmission surface. This engine measures Resonance (did the biology react?) rather than Exposure (did the pixel load?).
All processing occurs locally in your browser. No video or images are transmitted to any server. MediaPipe FaceMesh runs on-device. Camera access requires explicit user consent via the browser's permission dialog.
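A minimal sketch of that on-device pipeline, assuming the `@mediapipe/face_mesh` and `@mediapipe/camera_utils` packages; element IDs, option values, and the `drawWireframe` renderer are illustrative placeholders:

```ts
import { FaceMesh, Results } from "@mediapipe/face_mesh";
import { Camera } from "@mediapipe/camera_utils";

// Illustrative element ID; adjust to the host page.
const video = document.getElementById("input-video") as HTMLVideoElement;

const faceMesh = new FaceMesh({
  // Model files are fetched once; inference then runs entirely in-browser.
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`,
});

faceMesh.setOptions({
  maxNumFaces: 1,
  refineLandmarks: true, // adds iris landmarks for finer eye-aperture reads
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5,
});

// Landmarks never leave this callback; nothing is posted to a server.
faceMesh.onResults((results: Results) => {
  const landmarks = results.multiFaceLandmarks?.[0];
  if (landmarks) drawWireframe(landmarks);
});

// Hypothetical stub: the real app draws the wireframe overlay here.
function drawWireframe(lms: Array<{ x: number; y: number; z: number }>): void {
  /* render to a local <canvas>; nothing leaves the page */
}

// start() triggers the browser's camera permission dialog.
const camera = new Camera(video, {
  onFrame: async () => { await faceMesh.send({ image: video }); },
  width: 640,
  height: 480,
});
camera.start();
```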
Position face within frame. Neutral lighting preferred. System loads neural network for 468-landmark mesh detection. First frames establish baseline geometry.
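A sketch of that baseline step, under the assumption that a roughly neutral face is held for the first second or so. The inner eye corners (FaceMesh indices 133 and 362, per the commonly cited mesh map; verify against the canonical diagram) give a scale reference that later metrics can be normalized against:

```ts
interface Landmark { x: number; y: number; z: number }

const CALIBRATION_FRAMES = 30; // assumption: ~1 s at 30 fps
const interOcular: number[] = [];
let baselineScale: number | null = null;

// Inner eye corners are rigid relative to expressions, so their
// distance works as a stable per-face scale unit.
function eyeSpan(lms: Landmark[]): number {
  return Math.hypot(lms[133].x - lms[362].x, lms[133].y - lms[362].y);
}

// Returns true once enough frames have been averaged into a baseline.
function calibrate(lms: Landmark[]): boolean {
  if (baselineScale !== null) return true;
  interOcular.push(eyeSpan(lms));
  if (interOcular.length >= CALIBRATION_FRAMES) {
    // Mean over the window becomes the neutral-geometry scale unit.
    baselineScale = interOcular.reduce((a, b) => a + b) / interOcular.length;
  }
  return baselineScale !== null;
}
```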
30fps analysis extracts: lip corner positions (smile vector), eye aperture (attention), brow distance (confusion/focus). Raw landmark data rendered as wireframe overlay.
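A per-frame extraction sketch for those three signals. The landmark indices (61/291 mouth corners, 159/145 and 386/374 lids, 105/334 brows) follow the commonly cited FaceMesh map and are assumptions to verify; dividing by the calibrated inter-ocular scale keeps the metrics invariant to how close the face sits to the camera:

```ts
interface Landmark { x: number; y: number; z: number }
interface Metrics { smile: number; aperture: number; browGap: number }

function dist(a: Landmark, b: Landmark): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function extract(lms: Landmark[], scale: number): Metrics {
  const mouthWidth = dist(lms[61], lms[291]);   // lip corner to lip corner
  const leftEye   = dist(lms[159], lms[145]);   // upper lid to lower lid
  const rightEye  = dist(lms[386], lms[374]);
  const leftBrow  = dist(lms[105], lms[159]);   // brow to upper lid
  const rightBrow = dist(lms[334], lms[386]);
  return {
    smile:    mouthWidth / scale,                 // widens as corners pull out
    aperture: (leftEye + rightEye) / (2 * scale), // drops during blinks / low attention
    browGap:  (leftBrow + rightBrow) / (2 * scale), // shrinks when brows lower (AU4)
  };
}
```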
Ekman's Facial Action Coding System maps muscle movements to Action Units (AUs); a thresholding sketch follows the list:
- JOY: AU6 + AU12 (cheek raise + lip corner pull; AU6 is the Duchenne marker for a genuine smile)
- SADNESS: AU1 + AU15 (inner brow raise + lip corner depress)
- FEAR: AU5 + AU20 (upper lid raise + lip stretch)
- ANGER: AU4 + AU7 (brow lower + lid tighten)
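One hedged way those AU pairs might be thresholded from the normalized metrics above; real FACS coding is far more nuanced, and both the proxy signals and the threshold here are illustrative placeholders, not Ekman's method:

```ts
type Emotion = "JOY" | "SADNESS" | "FEAR" | "ANGER" | "NEUTRAL";

// Hypothetical per-AU proxy intensities derived from landmark geometry,
// expressed as deviation from the calibrated neutral baseline.
interface AuSignals {
  cheekRaise: number;     // AU6 proxy
  lipPull: number;        // AU12 proxy
  innerBrowRaise: number; // AU1 proxy
  lipDepress: number;     // AU15 proxy
  lidRaise: number;       // AU5 proxy
  lipStretch: number;     // AU20 proxy
  browLower: number;      // AU4 proxy
  lidTighten: number;     // AU7 proxy
}

// Illustrative threshold; a real system would tune these per user.
const T = 0.15;

function classify(au: AuSignals): Emotion {
  if (au.cheekRaise > T && au.lipPull > T) return "JOY";            // AU6 + AU12
  if (au.innerBrowRaise > T && au.lipDepress > T) return "SADNESS"; // AU1 + AU15
  if (au.lidRaise > T && au.lipStretch > T) return "FEAR";          // AU5 + AU20
  if (au.browLower > T && au.lidTighten > T) return "ANGER";        // AU4 + AU7
  return "NEUTRAL";
}
```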
Session data is ephemeral. Page reload clears all computed metrics. No cookies, no storage, no analytics. The machine forgets the moment you leave.
This technology enables a shift from impression-based to emotion-based ad measurement:
- Pre-roll attention verification
- Creative A/B testing via emotional response
- Engagement scoring beyond click-through
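One hypothetical shape for such an engagement score, folding the attention and valence signals above into a single per-session number; the weights are invented placeholders for illustration, not a validated metric:

```ts
interface FrameSignal { attention: number; valence: number } // both in [0, 1]

// Time-weighted mean: sustained mid-roll emotion counts more than a
// reflexive spike at frame one. Weights are illustrative assumptions.
function engagementScore(frames: FrameSignal[]): number {
  if (frames.length === 0) return 0;
  let total = 0;
  let weightSum = 0;
  frames.forEach((f, i) => {
    const w = 1 + i / frames.length; // later frames weigh up to 2x
    total += w * (0.6 * f.attention + 0.4 * f.valence);
    weightSum += w;
  });
  return total / weightSum; // normalized to [0, 1]
}
```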