rPPG stands for remote photoplethysmography: estimating pulse-related changes from camera video, usually from a face region, without requiring a wearable sensor. In browser apps, developers typically use rPPG to turn a live camera stream into heart-rate-style metrics, diagnostics, and wellness-oriented feedback. Concrete app examples include:
  • deception or bluffing games that react to pulse changes during key moments
  • stress or arousal feedback in training and social experiences
  • breathing and relaxation flows that show physiological response over time
  • biofeedback-oriented health or wellness apps that want camera-based pulse signals without extra hardware

Start With The Fastest Path

If you want a working reference app before integrating manually, scaffold the rPPG demo:
npm create @elata-biosciences/elata-demo my-app
cd my-app
pnpm install
pnpm run dev
Use this guide when you want to add browser-side rPPG processing to an existing app.

Install

pnpm add @elata-biosciences/rppg-web
# or
npm install @elata-biosciences/rppg-web

Recommended Entry Point

@elata-biosciences/rppg-web provides createRppgSession() as the recommended browser integration API. Start there unless you intentionally need lower-level sample ingestion or custom runtime orchestration.

How Apps Usually Use rPPG

  1. Ask for camera access and attach the stream to a video element.
  2. Start createRppgSession() on that video.
  3. Read metrics and diagnostics from the session.
  4. Show pulse-related feedback, readiness UI, or monitoring outputs in the app.
That last step can be as simple as showing a pulse estimate, or as game-like as changing dialogue, pressure, or scoring when the user’s physiology changes.
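As a sketch of that last step, a game might compare each reading against a per-user baseline and bucket the result. The function name and thresholds below are illustrative assumptions, not values from the SDK:

```typescript
// Hypothetical helper: classify a BPM reading against a per-user baseline.
// The 8 / 20 BPM thresholds are illustrative, not tuned values from the SDK.
type PulseFeedback = "calm" | "elevated" | "spiking";

function classifyPulse(bpm: number, baselineBpm: number): PulseFeedback {
  const delta = bpm - baselineBpm;
  if (delta >= 20) return "spiking";
  if (delta >= 8) return "elevated";
  return "calm";
}

// A bluffing game might raise pressure when the player's pulse spikes:
console.log(classifyPulse(95, 72)); // baseline 72, reading 95 -> "spiking"
```

Whatever thresholds you pick, derive the baseline from the warmup period rather than hard-coding a population average.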

Minimal Camera → BPM Loop

The shortest path from camera to BPM readings:
import { createRppgSession } from "@elata-biosciences/rppg-web";

// 1. Acquire camera and attach to a video element
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const video = document.createElement("video");
video.srcObject = stream;
await video.play();

// 2. Start an rPPG session
const session = await createRppgSession({
  video,
  backend: "auto",
  faceMesh: "off",
});

// 3. Poll for BPM
const interval = setInterval(() => {
  // Check backendMode first — if "unavailable", WASM didn't load and BPM
  // will always be null (looks identical to the normal warmup period).
  if (session.backendMode === "unavailable") {
    console.warn("WASM backend not loaded — check that pkg/ assets are served at /pkg/");
    return;
  }
  const metrics = session.getMetrics();
  if (metrics?.bpm != null) {
    console.log("BPM:", metrics.bpm.toFixed(1));
  }
}, 1000);

// 4. Cleanup
// clearInterval(interval);
// await session.stop();
Expect a ~10 second warmup window before the first BPM estimate appears. faceMesh: "off" uses the full video frame as the ROI, which is good enough when the face fills most of the frame; switch to faceMesh: "mediapipe" for a face-crop ROI when the user might move around. If you need a single boolean for UI gating, use createRppgAppAdapter().canPublish instead of polling getMetrics() directly — it handles the backend check, confidence threshold, and warmup window in one place.
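To make the gating explicit, here is a rough sketch of the kind of checks canPublish bundles. The confidence field, the 0.5 floor, and the warmup constant are assumptions for illustration, not the adapter's actual internals — prefer createRppgAppAdapter() in real code:

```typescript
// Illustrative gate combining the three checks described above. The
// `confidence` field name, 0.5 threshold, and 10 s warmup constant are
// assumptions, not the adapter's actual internals.
interface MetricsLike {
  bpm: number | null;
  confidence?: number;
}

function canShowBpm(
  backendMode: string,
  metrics: MetricsLike | null,
  warmupElapsedMs: number,
): boolean {
  if (backendMode === "unavailable") return false; // WASM never loaded
  if (warmupElapsedMs < 10_000) return false;      // still inside ~10 s warmup
  if (metrics?.bpm == null) return false;          // no estimate yet
  return (metrics.confidence ?? 1) >= 0.5;         // assumed confidence floor
}
```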

Minimal Integration

import { createRppgSession } from "@elata-biosciences/rppg-web";

const session = await createRppgSession({
  video: videoEl,
  sampleRate: 30,
  backend: "auto",
  faceMesh: "off",
  onDiagnostics: (diagnostics) => {
    console.log(diagnostics.state.status, diagnostics.faceTrackingMode);
    console.log(diagnostics.framesSeen, diagnostics.totalSamplesReceived);
    console.log(diagnostics.issues, diagnostics.processorFailure);
  },
  onError: (error) => {
    console.error(error.code, error.message);
  },
});

console.log(session.getMetrics());

Typical Flow

  1. Acquire a camera stream and attach it to a video element.
  2. Call createRppgSession({ video, backend: "auto" }).
  3. Read metrics from session.getMetrics().
  4. Surface diagnostics through onDiagnostics or session.getDiagnostics().
  5. Stop the session during cleanup with await session.stop().

Managed Restart Flow

If your app wants the SDK to own restart timing after terminal processor failures, use createManagedRppgSession():
import { createManagedRppgSession } from "@elata-biosciences/rppg-web";

const managed = await createManagedRppgSession({
  video: videoEl,
  faceMesh: "off",
  maxRetries: 3,
  retryDelayMs: 1500,
  onStateChange: (state) => {
    console.log(state.status, state.retryCount, state.lastError?.code);
  },
});
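To clarify how maxRetries and retryDelayMs style options typically interact, here is a generic retry loop. This is not the SDK's internal implementation, just the usual semantics such options describe:

```typescript
// Generic retry loop illustrating typical maxRetries / retryDelayMs semantics:
// one initial attempt plus up to maxRetries retries, with a fixed delay
// between attempts. Not the SDK's actual internals.
async function withRetries<T>(
  start: () => Promise<T>,
  maxRetries: number,
  retryDelayMs: number,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await start();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
      }
    }
  }
  throw lastError;
}
```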

Public Diagnostics And Debug Data

Useful public APIs for app-side diagnostics:
  • session.getDiagnostics()
  • session.getTraceSnapshot()
  • computeTraceWaveformDebug()
  • normalizeRppgError()
  • createRppgAppAdapter()
  • createRppgAppMonitor()
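For a debug overlay, you might reduce a diagnostics snapshot to one status line. The shape below mirrors the fields read in the onDiagnostics example earlier; the SDK's exact TypeScript types may differ:

```typescript
// Sketch: turn a diagnostics snapshot into a short status line for a debug UI.
// The shape mirrors the fields used in the onDiagnostics example; the SDK's
// real types may differ.
interface DiagnosticsLike {
  state: { status: string };
  framesSeen: number;
  issues: string[];
  processorFailure: string | null;
}

function summarizeDiagnostics(d: DiagnosticsLike): string {
  if (d.processorFailure) return `failed: ${d.processorFailure}`;
  if (d.issues.length > 0) return `${d.state.status} (${d.issues.join(", ")})`;
  return `${d.state.status}, ${d.framesSeen} frames`;
}
```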

Vite Config

WASM asset placement

The default session loader fetches WASM files from /pkg/rppg_wasm.js and /pkg/rppg_wasm_bg.wasm at runtime. In a Vite app, place those files under public/pkg/ so they are served at that path:
your-app/
  public/
    pkg/
      rppg_wasm.js
      rppg_wasm_bg.wasm
The built assets live in packages/rppg-web/pkg/ after running pnpm --dir packages/rppg-web run build:wasm, or in node_modules/@elata-biosciences/rppg-web/pkg/ after an npm install. Copy or symlink that directory into your public/ folder. Alternatively, use the import-based options below to let Vite manage the asset URLs instead of serving them from public/.
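If you prefer copying over symlinking, a small Node script can do it as a predev or postinstall step. This is a sketch, not part of the SDK; adjust the source path to wherever the pkg/ directory lives in your setup:

```typescript
// Sketch of a copy step for the two WASM assets described above. Run it from
// a postinstall or predev script; adjust srcPkgDir to your setup.
import { cpSync, existsSync, mkdirSync } from "node:fs";
import { join } from "node:path";

function copyWasmAssets(srcPkgDir: string, publicDir: string): void {
  const dest = join(publicDir, "pkg");
  mkdirSync(dest, { recursive: true });
  for (const file of ["rppg_wasm.js", "rppg_wasm_bg.wasm"]) {
    const src = join(srcPkgDir, file);
    if (!existsSync(src)) throw new Error(`missing asset: ${src}`);
    cpSync(src, join(dest, file));
  }
}

// Example invocation (uncomment and adjust for your project):
// copyWasmAssets("node_modules/@elata-biosciences/rppg-web/pkg", "public");
```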

Dynamic import restriction

Vite 7 blocks import(url) for files served from /public. Two approaches work:

Option A — vite-plugin-wasm (recommended for new projects)
npm install -D vite-plugin-wasm vite-plugin-top-level-await
import { defineConfig } from "vite";
import wasm from "vite-plugin-wasm";
import topLevelAwait from "vite-plugin-top-level-await";

export default defineConfig({
  plugins: [wasm(), topLevelAwait()],
});
import * as rppgWasm from "@elata-biosciences/rppg-web/pkg/rppg_wasm.js";
import { createRppgSession } from "@elata-biosciences/rppg-web";

const session = await createRppgSession({
  video: videoEl,
  wasmImporter: () => Promise.resolve(rppgWasm),
});
Option B — explicit URL imports (no extra plugins)
import rppgWasmJsUrl from "@elata-biosciences/rppg-web/pkg/rppg_wasm.js?url";
import rppgWasmBinaryUrl from "@elata-biosciences/rppg-web/pkg/rppg_wasm_bg.wasm?url";
import { createRppgSession } from "@elata-biosciences/rppg-web";

const session = await createRppgSession({
  video: videoEl,
  wasmJsUrl: rppgWasmJsUrl,
  wasmBinaryUrl: rppgWasmBinaryUrl,
});
Option B works because Vite resolves ?url imports to fingerprinted asset URLs at build time, bypassing the public directory restriction entirely.

Common Gotchas

  • BPM is always null but status is running — this looks like warmup but can mean WASM didn't load. Check session.backendMode: if it is "unavailable", your app is not serving the packaged pkg/ assets at /pkg/. Fix the asset placement (see the Vite Config section above) and the metrics will flow.
  • If session.state.status is failed, recreate the session instead of continuing to use the same poisoned processor.
  • If camera access fails, verify that the page can call getUserMedia.
  • If you are just evaluating the SDK, the scaffolded demo is much faster than wiring the whole camera pipeline yourself.
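For the camera-access gotcha, getUserMedia rejects with standard DOMException names (these names come from the MediaDevices spec, not from this SDK). A small mapper can turn them into user-facing guidance:

```typescript
// Map common getUserMedia failure names to user-facing guidance. These
// DOMException names are defined by the MediaDevices.getUserMedia spec.
function describeCameraError(errorName: string): string {
  switch (errorName) {
    case "NotAllowedError":
      return "Camera permission was denied. Enable it in browser settings.";
    case "NotFoundError":
      return "No camera was found on this device.";
    case "NotReadableError":
      return "The camera is in use by another application.";
    default:
      return "Camera access failed. Check that the page is served over HTTPS.";
  }
}
```

Use it in the catch branch around getUserMedia, e.g. `catch (err) { showBanner(describeCameraError((err as DOMException).name)); }`, where showBanner is whatever your app's UI provides.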

Next Steps