1. Get started
- Click Start and allow camera access.
- Hold a neutral, relaxed face for 3 seconds while the red-bordered overlay runs. This captures your resting baseline.
- After that you'll see the live readout panel attached to your face. Cognitive states (tired, focused, etc.) take about a minute to warm up — they need a one-minute window per the literature (PERCLOS, blink rate). Both windows are sketched below.
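Conceptually, both are simple rolling statistics. The sketch below is an assumed implementation, not the app's actual code: a 3-second baseline average, and a PERCLOS-style one-minute buffer whose length is exactly why the cognitive states need time to warm up. Names and thresholds are illustrative.

```typescript
// Average the blendshape frames collected during the 3-second capture.
function captureBaseline(frames: number[][]): number[] {
  const dims = frames[0]?.length ?? 0;
  const mean = new Array(dims).fill(0);
  for (const f of frames) {
    for (let i = 0; i < dims; i++) mean[i] += f[i] / frames.length;
  }
  return mean; // subtracted from every later frame
}

// PERCLOS-style rolling window: fraction of the last 60 s in which
// eyelid closure exceeded a threshold. Threshold value is assumed.
const WINDOW_MS = 60_000;
const CLOSED_THRESHOLD = 0.8; // assumed eyeBlink blendshape level

const samples: Array<{ t: number; closed: boolean }> = [];

function perclos(eyeBlink: number, now: number): number {
  samples.push({ t: now, closed: eyeBlink >= CLOSED_THRESHOLD });
  // Drop samples older than the window; until it fills, the estimate is noisy.
  while (samples.length && samples[0].t < now - WINDOW_MS) samples.shift();
  const closed = samples.reduce((n, s) => n + (s.closed ? 1 : 0), 0);
  return samples.length ? closed / samples.length : 0;
}
```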
2. What you'll see
- Top 3 emotions — calibrated probabilities from HSEmotion (AffectNet 8-class). Bar opacity scales with confidence.
- Compound label — when the top two emotions stay close in probability for a sustained stretch, you'll see e.g. "bittersweet" or "angrily disgusted" (Du, Tao & Martinez 2014). The trigger logic is sketched after this list.
- v / a — Russell circumplex valence (negative ↔ positive) and arousal (low ↔ high). The 2D inset on the bottom-left shows the last ~2 seconds. One possible mapping is sketched below.
- Intensity — overall facial activity above your resting baseline, independent of emotion classification (sketched below, together with Active muscles).
- Sparkline — top-1 confidence over the last ~6 s. Flat = stable, jagged = the model is changing its mind.
- States — engaged / focused / tired / bored / stressed / confused / calm. Heuristics grounded in published FACS literature (Stern 1984, Wierwille 1994, D'Mello & Graesser 2010, Whitehill 2014, Russell 1980).
- Active muscles — the top 3 ARKit blendshapes after baseline subtraction. Same data the AI sees.
- Personal pick — appears once you've calibrated 2+ emotions. Cyan if it agrees with the model, amber if it disagrees.
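For illustration, here is one way the compound-label trigger could work. The margin and hold time below are invented thresholds, and a real implementation would map qualifying pairs to named compounds like "bittersweet" rather than concatenating labels.

```typescript
type Scored = { label: string; p: number };

const MARGIN = 0.15;  // assumed: top-2 probability gap that counts as "close"
const HOLD_MS = 1500; // assumed: how long the same pair must persist

let pairSince: { key: string; t: number } | null = null;

function compoundLabel(ranked: Scored[], now: number): string | null {
  const [a, b] = ranked; // ranked descending by probability
  if (!a || !b || a.p - b.p > MARGIN) { pairSince = null; return null; }
  const key = [a.label, b.label].sort().join('+');
  if (!pairSince || pairSince.key !== key) pairSince = { key, t: now };
  // Only surface the compound once the same pair has been close long enough.
  return now - pairSince.t >= HOLD_MS ? `${a.label} + ${b.label}` : null;
}
```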
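The v / a readout can be understood as a projection of the 8-class probabilities onto Russell's circumplex: a probability-weighted sum of per-emotion valence/arousal anchors. This is a common technique, not necessarily this app's; the anchor coordinates below are illustrative.

```typescript
// [valence, arousal] anchors for the AffectNet 8 classes; values are
// illustrative, not calibrated.
const ANCHORS: Record<string, [number, number]> = {
  neutral: [0, 0], happiness: [0.8, 0.5], sadness: [-0.7, -0.4],
  surprise: [0.2, 0.7], fear: [-0.6, 0.6], disgust: [-0.7, 0.3],
  anger: [-0.6, 0.7], contempt: [-0.5, 0.2],
};

function toCircumplex(probs: Record<string, number>): [number, number] {
  let v = 0, a = 0;
  for (const [label, p] of Object.entries(probs)) {
    const [av, aa] = ANCHORS[label] ?? [0, 0];
    v += p * av;
    a += p * aa;
  }
  return [v, a]; // valence and arousal, each roughly in -1 … +1
}
```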
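Intensity and Active muscles both fall out of baseline subtraction. A minimal sketch, assuming blendshapes arrive as arrays and as a name-to-value map (function and variable names are mine):

```typescript
// Overall activity: mean absolute deviation from the resting baseline.
function intensity(current: number[], baseline: number[]): number {
  const n = Math.min(current.length, baseline.length);
  let sum = 0;
  for (let i = 0; i < n; i++) sum += Math.abs(current[i] - baseline[i]);
  return n ? sum / n : 0; // 0 = at rest; larger = more facial movement
}

// Top-k blendshapes after subtracting the baseline value for each name.
function topBlendshapes(
  current: Record<string, number>,
  baseline: Record<string, number>,
  k = 3,
): Array<[string, number]> {
  return Object.entries(current)
    .map(([name, v]): [string, number] => [name, v - (baseline[name] ?? 0)])
    .sort((a, b) => b[1] - a[1])
    .slice(0, k);
}
```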
3. Ask the AI
- Why? — the AI summarizes what's happening over the slider window (drag the slider to set 2–30 s).
- Summarize session — the AI summarizes the entire session since you last calibrated.
- Type a follow-up in the chat box and hit Ask. Conversation history is preserved, so follow-ups have context.
- All AI calls go through a privacy-preserving proxy — numerical features only, no frames. A sketch of such a call follows this list.
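To make the "numbers only" claim concrete, an Ask round trip might look like the sketch below. The endpoint name and payload keys are hypothetical; the point is that the request body carries only text and numeric features.

```typescript
// Hypothetical round trip through the proxy; '/api/ask' is an invented path.
async function ask(
  question: string,
  features: Record<string, number>,
): Promise<string> {
  const res = await fetch('/api/ask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question, features }), // text and numbers; no frames
  });
  const data = await res.json();
  return data.answer as string; // assumed response shape
}
```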
4. Record & export
- Record — captures a clip of session data (max 10 minutes). After stopping, click Discuss recording to have the AI analyze the whole clip.
- Export — downloads the session timeline as a JSON file (timestamps, emotions, V/A, intensity, top blendshapes). No frames; just numbers. Useful for research or self-review. A sketch of the shape follows below.
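Based on the fields listed above, one exported timeline entry could be typed roughly as follows; the actual key names in the JSON may differ.

```typescript
// Hypothetical shape of the exported JSON, inferred from the field list.
interface TimelineEntry {
  t: number;                               // timestamp (ms since session start)
  emotions: Record<string, number>;        // calibrated probabilities
  valence: number;
  arousal: number;
  intensity: number;                       // activity above baseline
  topBlendshapes: Array<[string, number]>; // name, baseline-subtracted value
}

type SessionExport = TimelineEntry[];      // what the downloaded file contains
```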
5. Personal calibration (advanced)
Open the "Personal emotion calibration" section. For each emotion, click its button and hold that expression for 3 seconds. Once you've calibrated two or more emotions, a personal classifier runs alongside the model and surfaces in the panel (sketched below). Templates stay in this browser tab unless you opt in to persistence.
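A nearest-template matcher is the simplest classifier consistent with this description. The sketch below is assumed, using squared Euclidean distance over baseline-subtracted blendshape vectors, and includes the 2+ emotion requirement.

```typescript
// One stored template per calibrated emotion: the averaged,
// baseline-subtracted blendshape vector captured during the 3-second hold.
type Template = { label: string; vec: number[] };

function personalPick(live: number[], templates: Template[]): string | null {
  if (templates.length < 2) return null; // needs 2+ calibrated emotions
  let best: string | null = null;
  let bestDist = Infinity;
  for (const { label, vec } of templates) {
    let d = 0;
    for (let i = 0; i < vec.length; i++) d += (live[i] - vec[i]) ** 2;
    if (d < bestDist) { bestDist = d; best = label; }
  }
  return best; // compared against the model's top-1 for the cyan/amber cue
}
```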
6. Tips
- Good lighting, face centered. Side-lighting is fine; backlit is hard.
- Recalibrate if you change posture or lighting, or move closer to or farther from the camera.
- Yawning, talking, eating, and long blinks confound the emotion classifier — the AI is told to look for those artifacts before reaching for an emotional narrative.
Privacy
Webcam frames stay on this device. Only numerical features
(blendshapes, calibrated probabilities, valence / arousal)
are sent to the AI, and only when you click Why? / Ask /
Summarize. Personal templates and saved recordings live only
in this browser tab; the storage opt-in is off by default.