
Renderer Data Model

Understand the typed input model, path semantics, and visible runtime behavior behind the first maintained interactions.

Reader outcome

After this page, you should be able to explain what a Vizij runtime input is, how it relates to typed paths and values, and how visible behavior on a face maps to a more stable runtime meaning underneath.

What you need

  • completed Hello Face Quickstart,
  • read Paths and Standard Controls,
  • seen at least one runtime app show loading, ready, and error states.

Proof state

  • you can explain why runtime inputs are more stable than any one UI,
  • you can distinguish staged inputs from rendered outputs,
  • you can describe how standard controls, poses, visemes, and animation-related values differ semantically.

Module Notes

This page is for readers who already understand the basic path vocabulary from Paths and Standard Controls and now need the next layer of runtime reasoning.

The artifact here is not a single app screen. It is the runtime contract that multiple Vizij surfaces rely on: typed paths, typed values, staged runtime inputs, blackboard reads and writes, and output paths that can later be observed or rendered.


The Core Semantic Chain

Visible face behavior is usually the end of a longer chain:

  1. an app or hook decides to set an input,
  2. the input is staged under a path,
  3. graphs and animations read and write values against the shared runtime state,
  4. the renderer receives the resulting animatable values,
  5. the user sees a face change.

That is why the same visual behavior can show up in multiple apps. The UI is not the whole story. The runtime meaning is more stable than the surface.

What A Runtime Input Actually Is

A runtime input is a value written with an explicit meaning.

The important parts are:

  1. the path, which says what the input refers to,
  2. the value, which says what data is being written,
  3. the shape or expected kind of data, which keeps the runtime deterministic.

In the architecture primer, this is the TypedPath plus Value plus Shape contract.

That contract matters because Vizij does not want a control system that only works inside one UI. Inputs have to survive movement across apps, bundles, runtime controllers, and deployment surfaces.
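The contract above can be sketched in types. This is a minimal illustration of the path-plus-value-plus-shape idea, not Vizij's actual API; the type names, shape tags, and `matchesShape` helper are all assumptions made for this sketch.

```typescript
// Illustrative sketch of the TypedPath + Value + Shape contract.
// All names here are hypothetical, not Vizij's real types.

type TypedPath = string; // e.g. "rig/{face}/standard/left_eye/pos/x"

type Shape = "scalar" | "vec2" | "vec3" | "bool";

// A discriminated union keeps each value tagged with its shape.
type Value =
  | { shape: "scalar"; data: number }
  | { shape: "vec2"; data: [number, number] }
  | { shape: "vec3"; data: [number, number, number] }
  | { shape: "bool"; data: boolean };

interface RuntimeInput {
  path: TypedPath; // what the input refers to
  value: Value;    // what data is being written
}

// A shape check keeps the runtime deterministic: a write whose value
// does not match the expected shape can be rejected before evaluation.
function matchesShape(value: Value, expected: Shape): boolean {
  return value.shape === expected;
}
```

The point of the discriminated union is that a consumer can reject a mismatched write up front instead of producing undefined behavior mid-frame.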

Staging, Reading, and Writing

The runtime is designed so inputs are staged first and then consumed during the runtime step.

At a high level:

  1. the app stages a value,
  2. the orchestrator advances a frame,
  3. graphs and animations evaluate against the current state,
  4. output values are merged deterministically,
  5. the renderer applies the resulting values.

This is different from directly mutating a scene or attaching view-only state to a slider. Vizij is intentionally closer to a runtime control pipeline.
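The stage-then-step pattern can be sketched as a toy orchestrator. The class, method names, and plain `number` values below are simplifications invented for this example; the real runtime evaluates graphs and animations between staging and merging.

```typescript
// Minimal sketch of the stage-then-step pattern.
// Class and method names are illustrative, not the Vizij orchestrator API.

class ToyOrchestrator {
  private staged = new Map<string, number>();
  private state = new Map<string, number>();

  // 1. The app stages a value; nothing visible changes yet.
  stage(path: string, value: number): void {
    this.staged.set(path, value);
  }

  // 2-4. Advancing a frame consumes staged inputs, lets evaluation
  // run against shared state, and returns merged outputs.
  step(): Map<string, number> {
    for (const [path, value] of this.staged) {
      // In the real runtime, graphs and animations would read and
      // write here before outputs are merged deterministically.
      this.state.set(path, value);
    }
    this.staged.clear();
    return new Map(this.state); // snapshot for the renderer to apply
  }
}
```

The key behavior to notice: a staged write is invisible until `step()` runs, which is exactly what separates this model from directly mutating a scene.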

Common Semantic Categories

| Category | Meaning | Encountered In |
| --- | --- | --- |
| Standard | Reusable rig channels (Gaze) | Hello Face / useMouseGaze |
| Poses | Authored expressions | Hello Face / usePoseHotkeys |
| Speech-timed poses | Speech-driven pose weights that still use canonical pose paths | Agent Face / STT |
| Animations | Time-based motion | Authoring / Player |
| Outputs | Derived runtime values | Player / Diagnostics |

The most useful categories for guidebook readers are:

  1. standard controls, which expose reusable channels such as gaze or shared rig movement,
  2. pose weights, which expose named authored states,
  3. speech-timed pose driving, which still stages pose weights even when authoring groups those poses for different blend behavior,
  4. animation controls, which drive authored time-based motion,
  5. renderer outputs, which can be observed for UI, diagnostics, or logging.

These categories can all show up as paths, but they do not mean the same thing.

Choose The Semantic Family First


Use this chooser before you start wiring a control or debugging a write:

| If you are trying to… | Likely semantic family | Why | Best next page |
| --- | --- | --- | --- |
| steer a reusable eye, brow, jaw, or similar channel | standard control | you are driving a reusable rig-facing channel directly | Paths and Standard Controls |
| blend a named facial state such as smile | pose weight | you are changing the strength of an authored expression | Poses |
| play or inspect motion over time | animation control | the main question is clip transport, timing, or playback | Animations |
| drive speech-shaped mouth motion | speech-timed pose weight | the speech layer still writes a canonical pose-weight path, while pose groups continue to define how subsets of poses blend | Animation, Integration, and Deployment Reference |
| inspect what the runtime resolved after evaluation | output path | you are validating the result of orchestration, not only the staged input | Orchestration and Diagnostics |
| expose operator control through a deployment endpoint | deployment slot over runtime input | the client sees a deployment slot name, but the runtime still resolves it into the same typed input semantics underneath | Operator and Deployment Model |

Why Semantics Matter More Than UI Labels

Two controls can look similar in a UI and still represent different semantics.

For example:

  1. a slider that writes rig/{face}/standard/left_eye/pos/x is steering a reusable rig channel,
  2. a slider that writes rig/{face}/poses/smile.weight is blending a named authored state,
  3. a speech-driven system may still write rig/{face}/poses/{poseId}.weight without the user touching a control at all.

The visible widget is just a surface. The runtime meaning is the important teaching target.

Pose groups matter here because they determine how different subsets of poses blend together. They are not part of the runtime input path syntax.
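One way to internalize the distinction is a small path classifier. This is a teaching sketch only: it keys off the `standard` and `poses` segments shown in the examples above, while the `animations` and `outputs` segment names are assumptions, not confirmed runtime grammar.

```typescript
// Hypothetical classifier: infers the semantic family from path segments.
// "standard" and "poses" follow this page's examples; "animations" and
// "outputs" are assumed segment names for illustration.

type Family =
  | "standard-control"
  | "pose-weight"
  | "animation"
  | "output"
  | "unknown";

function classifyPath(path: string): Family {
  const segments = path.split("/");
  if (segments.includes("standard")) return "standard-control";
  if (segments.includes("poses")) return "pose-weight";
  if (segments.includes("animations")) return "animation"; // assumed
  if (segments.includes("outputs")) return "output";       // assumed
  return "unknown";
}
```

Note that both UI sliders and a speech-driven system would classify identically when they write the same pose-weight path, which is the whole point: semantics live in the path, not the widget.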

Where Readers See These Semantics In Practice

You can already see the semantic layers in the maintained apps:

  1. tutorial-fullscreen-face stages gaze and pose inputs through tutorial hooks,
  2. tutorial-agent-face layers conversation-driven viseme and expression behavior on top of the same runtime concepts,
  3. demo-vizij-player exposes loading state, controllers, and output paths in a more application-like shell,
  4. vizij-standalone maps externally driven values into deployment-facing control slots.

Those apps are different surfaces, but the runtime language underneath them is continuous.

That continuity is the main reason the guidebook keeps separating runtime meaning from surface labels:

  1. a deployment client may discover and write slot names such as standard/vizij/left_eye/pos/x,
  2. the standalone bridge then resolves those names into runtime paths such as rig/{faceId}/standard/vizij/left_eye/pos/x,
  3. the underlying semantic family is still “standard control” even though the operator surface does not expose the full runtime path directly.
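The slot-to-path resolution in steps 1 and 2 can be sketched as a single mapping function. The function name and the exact concatenation rule are assumptions for illustration; the real standalone bridge may resolve slots differently.

```typescript
// Illustrative sketch of resolving a deployment slot name into a runtime
// path, following the example above. Not the actual bridge API.

function resolveSlot(faceId: string, slot: string): string {
  // "standard/vizij/left_eye/pos/x"
  //   -> "rig/{faceId}/standard/vizij/left_eye/pos/x"
  return `rig/${faceId}/${slot}`;
}
```

The operator surface only ever sees the short slot name; the prefix with the face identifier is what re-enters the same typed input semantics underneath.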

A Useful Mental Model

If a reader gets lost, return to this question:

What value is being written, to which path, by which part of the runtime, and why?

That question usually clarifies whether the reader is dealing with:

  1. a user input,
  2. an authored pose or animation,
  3. a derived controller output,
  4. a deployment-facing control signal.
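The four-part question maps naturally onto a structured record, which can be handy when logging writes during debugging. The interface and `source` labels below are this guide's invention, not a Vizij API.

```typescript
// Illustrative record of the debugging question:
// what value, to which path, by which part of the runtime, and why?

interface WriteRecord {
  value: number; // what value is being written
  path: string;  // to which path
  source:        // by which part of the runtime
    | "user-input"
    | "authored-pose-or-animation"
    | "controller-output"
    | "deployment-signal";
  reason: string; // and why
}

function describeWrite(w: WriteRecord): string {
  return `${w.value} -> ${w.path} (from ${w.source}: ${w.reason})`;
}
```

Tagging every suspicious write this way usually reveals quickly which of the four sources is responsible for an unexpected face change.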

Current Useful Diagram

Semantic Entry Points

Different surfaces can begin the same semantic write:

flowchart LR
    classDef source fill:#eef4ff,stroke:#4e79a7,stroke-width:1.5px
    classDef runtime fill:#eef7ee,stroke:#2f855a,stroke-width:1.5px

    hook["App hook or UI event"]
    speech["Speech or agent behavior"]
    operator["Deployment client or discovered slot"]
    authored["Animation or procedural controller"]
    staged["Staged input\npath + value + shape"]
    orch["Orchestrator step"]

    hook --> staged
    speech --> staged
    operator --> staged
    authored --> orch
    staged --> orch

    class hook,speech,operator,authored source
    class staged,orch runtime

Runtime Resolution And Observation

After entry, the runtime resolves those writes through the same shared chain:

flowchart TB
    classDef runtime fill:#eef7ee,stroke:#2f855a,stroke-width:1.5px
    classDef observe fill:#fdf1f3,stroke:#c05670,stroke-width:1.5px

    orch["Orchestrator step"]
    board["Blackboard\nshared typed state"]
    merged["Merged output writes"]
    render["Renderer applies resolved values"]
    diag["Diagnostics\noutputPaths, frame data, target snapshots"]
    face["Visible face behavior"]

    orch --> board --> merged
    merged --> render --> face
    merged --> diag
    board --> diag

    class orch,board,merged runtime
    class render,diag,face observe

The screenshot for this section (not yet included) would show the “ready” state, where the face is active and responding to standard control writes.

Until a dedicated visual for this page lands, the architecture primer is the best current textual reference.

  1. If you want to see how these runtime ideas become an application shell, continue to Loading, Playback, and Embedding.
  2. If you want to stay on the fundamentals path inside Control, keep Paths and Standard Controls nearby as the simpler vocabulary bridge.
  3. If you need to reason about which surface owns each category, continue to Control Surfaces and Configuration Layers.