Federated learning is on the Elata roadmap for future model-improvement flows. Today, the Elata SDK focuses on on-device and in-browser processing paths. As federated capabilities are introduced, the goal is to let apps contribute to global model quality improvements without sending raw biosignal data to a central server.

What Federated Learning Means Here

In an Elata federated setup, app instances would train or adapt model weights locally, then send only constrained update artifacts for aggregation. The intended direction is:
  1. process and train locally in the user environment
  2. send model updates instead of raw EEG or camera frames
  3. aggregate updates across many participants
  4. return improved shared model versions to clients
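The client side of steps 1 and 2 can be sketched as below. None of these types or functions exist in the Elata SDK today; the toy "training" step is a stand-in that only illustrates the intended data flow, where raw signal stays local and only a weight delta is produced for transmission.

```typescript
// Hypothetical types -- not part of any current Elata package.
type Weights = number[];

// A model update is the delta between locally adapted weights and the
// shared baseline, so raw EEG or camera samples never leave the device.
interface ModelUpdate {
  delta: Weights;
  sampleCount: number; // how many local examples produced this update
}

// Step 1: adapt the shared weights locally (toy stand-in for real training).
function trainLocally(baseline: Weights, localSignal: number[]): Weights {
  const lr = 0.01;
  // Toy gradient step: nudge each weight toward the local signal mean.
  const mean = localSignal.reduce((a, b) => a + b, 0) / localSignal.length;
  return baseline.map((w) => w + lr * (mean - w));
}

// Step 2: derive the update artifact that would be sent for aggregation.
function makeUpdate(baseline: Weights, adapted: Weights, n: number): ModelUpdate {
  return { delta: adapted.map((w, i) => w - baseline[i]), sampleCount: n };
}

const baseline: Weights = [0.5, -0.2];
const localSignal = [0.1, 0.3, 0.2]; // stays on-device
const adapted = trainLocally(baseline, localSignal);
const update = makeUpdate(baseline, adapted, localSignal.length);
// Only `update` would be transmitted, never `localSignal`.
```

The key property is that `ModelUpdate` carries no raw samples, only a weight delta and a count, which is what steps 3 and 4 consume.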

Why It Is On The Roadmap

Federated learning is a strong fit for biosignal products because it can improve model quality across device types and usage contexts while reducing privacy exposure. Planned benefits include:
  • better cross-user and cross-device robustness over time
  • faster model iteration without requiring centralized raw-data collection
  • clearer privacy posture for sensitive physiological signals

How It Preserves Privacy

Federated learning helps preserve privacy by changing what leaves the device:
  • raw biosignal inputs stay local by default
  • shared artifacts are model updates, not full raw signal streams
  • aggregation combines many updates before model rollout

Federated learning is not a complete privacy guarantee by itself. In production systems, it is typically paired with controls such as secure transport, authentication, update validation, and additional privacy techniques.
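The aggregation step in systems like this commonly follows a federated-averaging pattern: the server combines many client deltas into a sample-count-weighted mean before rolling out the next model version. A minimal sketch, with all names hypothetical and no relation to any current Elata API:

```typescript
// Hypothetical server-side shape of one client's contribution.
interface ClientUpdate {
  delta: number[];     // weight delta computed on-device
  sampleCount: number; // local examples behind the delta
}

// Sample-count-weighted average of client deltas (federated-averaging style).
// Individual contributions are blended, so no single client's update
// determines the rolled-out model on its own.
function aggregate(updates: ClientUpdate[]): number[] {
  const total = updates.reduce((sum, u) => sum + u.sampleCount, 0);
  const merged = new Array(updates[0].delta.length).fill(0);
  for (const u of updates) {
    const weight = u.sampleCount / total;
    u.delta.forEach((d, i) => { merged[i] += weight * d; });
  }
  return merged;
}

// Applying the merged delta yields the next shared model version.
const baseline = [1.0, 2.0];
const merged = aggregate([
  { delta: [0.2, -0.2], sampleCount: 30 },
  { delta: [-0.1, 0.1], sampleCount: 10 },
]);
const nextModel = baseline.map((w, i) => w + merged[i]);
```

Note that averaging alone is not a privacy mechanism; in practice it sits behind the transport, authentication, and update-validation controls mentioned above.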

Current Status

Federated learning is a roadmap direction, not the default production integration path today. For current integrations, use the existing package entrypoints:
  • EEG: @elata-biosciences/eeg-web and @elata-biosciences/eeg-web-ble
  • rPPG: @elata-biosciences/rppg-web

Next Steps