
Use this page if you already have a browser app and want the recommended existing-app integration path for @elata-biosciences/rppg-web. If you want the scaffold path instead, use Quickstart; if you only need the mental model first, use rPPG In A Browser App.

Camera rPPG is the primary browser integration for many Elata products, and no headset is required. The core idea is simple: let createRppgSession() own the browser runtime, video processing loop, and diagnostics while your app owns the UI.

This is the usual next tutorial after Build Your First Elata App when you choose the existing-app branch for camera rPPG. If you are extending the scaffold instead, stay on rppg-demo and use rPPG In A Browser App plus rppg-web.

What You Will Build

You will:
  1. install @elata-biosciences/rppg-web
  2. request camera access
  3. attach the stream to a video element
  4. start createRppgSession()
  5. read metrics and diagnostics
  6. stop the session during cleanup

Step 1: Install The Package

pnpm add @elata-biosciences/rppg-web

Step 2: Prepare A Video Element

Your app needs a video element that can receive the camera stream.
index.html (or your component markup)
<video id="camera" autoplay playsinline muted></video>
The important parts are:
  • autoplay so playback can start once the stream is attached
  • playsinline for mobile browser behavior
  • muted so browser autoplay policies do not block playback

Step 3: Acquire Camera Access

camera setup
const videoEl = document.getElementById("camera") as HTMLVideoElement;

const stream = await navigator.mediaDevices.getUserMedia({
  video: { facingMode: "user" },
  audio: false,
});

videoEl.srcObject = stream;
await videoEl.play();
At this point your browser app should already be showing the camera preview.
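If getUserMedia is unavailable (for example in an insecure context or a very old browser), the call above throws before any permission prompt appears. A minimal feature-detection sketch, where hasCameraSupport and CameraCapableNavigator are illustrative names of our own, not part of rppg-web:

```typescript
// Structural subset of the browser `navigator` object that this check
// needs; the real `navigator` satisfies it in camera-capable browsers.
interface CameraCapableNavigator {
  mediaDevices?: { getUserMedia?: unknown };
}

// Returns true only when navigator.mediaDevices.getUserMedia exists,
// i.e. when the getUserMedia call in Step 3 can be attempted at all.
function hasCameraSupport(nav: CameraCapableNavigator): boolean {
  return typeof nav.mediaDevices?.getUserMedia === "function";
}

// In the browser you would call: hasCameraSupport(navigator)
```

Running this check first lets you show a clear "camera not supported" message instead of an opaque exception.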

Step 4: Start createRppgSession()

rPPG session
import { createRppgSession } from "@elata-biosciences/rppg-web";

const session = await createRppgSession({
  video: videoEl,
  sampleRate: 30,
  backend: "auto",
  faceMesh: "off",
  onDiagnostics: (diagnostics) => {
    console.log("status", diagnostics.state.status);
    console.log("frames", diagnostics.framesSeen);
    console.log("samples", diagnostics.totalSamplesReceived);
    console.log("issues", diagnostics.issues);
  },
  onError: (error) => {
    console.error(error.code, error.message);
  },
});
This is the recommended starting point for most browser apps. It handles:
  • packaged WASM backend init
  • frame capture
  • ROI/session orchestration
  • diagnostics emission
  • cleanup support

Step 5: Read Metrics In Your UI

const metrics = session.getMetrics();
console.log(metrics);
In a real app you would poll or subscribe through your own UI state layer and show the values that matter to your product.
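One minimal way to do that polling is a small loop that reads getMetrics() on an interval and hands each snapshot to your state layer. This is a sketch of our own, not a package export; the MetricsSource interface below assumes only the getMetrics() call shown above:

```typescript
// Structural type covering the one call this helper needs; the session
// object returned by createRppgSession() satisfies it.
interface MetricsSource<M> {
  getMetrics(): M;
}

// Poll metrics on a fixed interval and push each snapshot to the UI.
// Returns a function that stops the polling (call it during cleanup).
function pollMetrics<M>(
  source: MetricsSource<M>,
  onMetrics: (metrics: M) => void,
  intervalMs = 1000,
): () => void {
  const id = setInterval(() => onMetrics(source.getMetrics()), intervalMs);
  return () => clearInterval(id);
}
```

In a React app you would start this inside the same effect that creates the session and call the returned stop function in the effect's cleanup.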

Step 6: Clean Up Correctly

When the component unmounts or the route or page is about to leave, stop the session and release the camera stream. Run these in order (same session and stream as above):
await session.stop();
for (const track of stream.getTracks()) {
  track.stop();
}
This matters more than it looks. It keeps later sessions from inheriting stale camera or runtime state.
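The ordering above (session first, then tracks) can be wrapped in one helper so every exit path runs the same teardown. A sketch under the same assumptions as the snippet above; stopRppg and the two interfaces are our own names, not package exports:

```typescript
// Structural types for the two things we tear down; the real session and
// MediaStream objects satisfy these shapes.
interface StoppableSession {
  stop(): Promise<void>;
}
interface TrackStream {
  getTracks(): Array<{ stop(): void }>;
}

// Stop the rPPG session first, then release every camera track, matching
// the order recommended above.
async function stopRppg(session: StoppableSession, stream: TrackStream): Promise<void> {
  await session.stop();
  for (const track of stream.getTracks()) {
    track.stop();
  }
}
```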

Full example (single paste)

Use this when you want one file to drop into a Vite + React app (for example replace the contents of src/App.tsx). It includes camera setup, session start, and cleanup on unmount.
App.tsx
import { useEffect, useRef, useState } from "react";
import { createRppgSession, type RppgSession } from "@elata-biosciences/rppg-web";

export default function App() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const sessionRef = useRef<RppgSession | null>(null);
  const streamRef = useRef<MediaStream | null>(null);
  const [line, setLine] = useState("Starting…");

  useEffect(() => {
    let cancelled = false;

    async function run() {
      const video = videoRef.current;
      if (!video) return;

      let stream: MediaStream;
      try {
        stream = await navigator.mediaDevices.getUserMedia({
          video: { facingMode: "user" },
          audio: false,
        });
      } catch {
        setLine("Camera permission denied.");
        return;
      }

      if (cancelled) {
        stream.getTracks().forEach((t) => t.stop());
        return;
      }

      streamRef.current = stream;
      video.srcObject = stream;
      await video.play().catch(() => undefined);

      const sampleRate = stream.getVideoTracks()[0]?.getSettings().frameRate ?? 30;

      try {
        const session = await createRppgSession({
          video,
          sampleRate,
          backend: "auto",
          faceMesh: "off",
          onDiagnostics: (d) => {
            setLine(`status=${d.state.status} backend=${d.backendMode}`);
          },
          onError: (e) => setLine(`${e.code}: ${e.message}`),
        });

        if (cancelled) {
          await session.dispose();
          return;
        }

        sessionRef.current = session;
      } catch (e) {
        setLine(e instanceof Error ? e.message : "Session failed");
      }
    }

    void run();

    return () => {
      cancelled = true;
      void sessionRef.current?.dispose();
      sessionRef.current = null;
      streamRef.current?.getTracks().forEach((t) => t.stop());
      streamRef.current = null;
    };
  }, []);

  return (
    <main style={{ padding: "1.5rem", maxWidth: 720 }}>
      <p>{line}</p>
      <video ref={videoRef} autoPlay playsInline muted style={{ width: "100%", borderRadius: 12 }} />
    </main>
  );
}
This example uses a React ref on <video> instead of getElementById("camera") from the steps above.

What To Do With Diagnostics

The quickest useful app behavior is:
  1. show whether the session is running, degraded, or failed
  2. display human-readable guidance when issues appear
  3. block publishing or scoring until your app has enough stable samples
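The gate in step 3 can be a pure function over the diagnostics fields shown earlier (state.status, totalSamplesReceived, issues). A sketch: the field names come from the onDiagnostics payload above, while readyToScore and the sample threshold are illustrative assumptions of ours:

```typescript
// The subset of diagnostics fields this gate reads; matches the payload
// logged in the onDiagnostics callback earlier in this tutorial.
interface DiagnosticsSnapshot {
  state: { status: string };
  totalSamplesReceived: number;
  issues: unknown[];
}

// Block publishing/scoring until the session is running, issue-free, and
// has accumulated at least `minSamples` samples (threshold is illustrative).
function readyToScore(d: DiagnosticsSnapshot, minSamples = 150): boolean {
  return (
    d.state.status === "running" &&
    d.issues.length === 0 &&
    d.totalSamplesReceived >= minSamples
  );
}
```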
If you want a higher-level app-facing state layer later, look at:
  • createManagedRppgSession()
  • createRppgAppAdapter()
  • createRppgAppMonitor()
But start with plain createRppgSession() first.

Common Problems

  • session.getDiagnostics().backendMode is unavailable: your app is likely not loading the packaged WASM assets correctly
  • Camera access fails: check browser permissions and getUserMedia support
  • The session reaches terminal failed: recreate the session instead of trying to keep using the same poisoned processor
  • You are trying to debug lower-level generated bindings first: start with createRppgSession() unless you are intentionally debugging the SDK itself

Next

  • rppg-web Reference: package API and exports
  • rPPG In A Browser: integration overview
  • Build Your First App: best first-time setup path