WellAlly康心伴
Development

The Tech Behind a Smart Alarm: From Sensor Data to Sleep Cycles in React Native

Explore the algorithm behind smart alarms that wake you during light sleep. This deep dive covers processing accelerometer data, feature extraction, and on-device ML in React Native to estimate sleep stages.

2025-12-20
10 min read

We've all been there: the alarm shrieks, and you feel like you've been jolted from the deepest slumber imaginable. That groggy, disoriented feeling, known as sleep inertia, can cast a shadow over your entire morning. What if your alarm was smarter? What if it could wait for the perfect moment to wake you, when you're naturally in a lighter phase of sleep?

This is the promise of "smart alarms." In this deep dive, we'll explore the technology that makes them possible. We'll build a conceptual algorithm to analyze accelerometer data from a phone to estimate sleep cycles. This is a fascinating intersection of mobile development, signal processing, and data science.

We will walk through the process of capturing motion data, cleaning it up, extracting features that hint at your sleep stage, and finally, using a simple model to make an educated guess about whether you're in light or deep sleep. All of this will be framed within a React Native context, highlighting the unique challenges of on-device implementation.

Prerequisites:

  • A solid understanding of React Native and JavaScript.
  • A physical device for testing (simulators don't have accelerometers).
  • Basic familiarity with concepts of data processing.

Why this matters to developers: This project is a great way to venture into the world of sensor data, background processing, and on-device AI. The skills you'll learn are applicable to a wide range of applications, from fitness trackers to IoT devices.

Understanding the Problem

Human sleep isn't a monolithic state. It's a journey through several cycles, each lasting about 90 minutes. These cycles are broadly divided into:

  1. Light Sleep (N1, N2): This is the stage you're in for about half the night. Your muscles relax, and your heart rate and breathing slow down. You can be awakened relatively easily during this phase.
  2. Deep Sleep (N3): This is the most restorative stage, crucial for physical recovery. It's very difficult to wake up from deep sleep, and doing so is what often leads to that groggy feeling.
  3. REM (Rapid Eye Movement) Sleep: This is when most dreaming occurs. Your brain is highly active, but your muscles are temporarily paralyzed (a state called atonia).

The key insight for a smart alarm is this: our body movement is a strong indicator of our sleep stage. During deep sleep, we are almost completely still. In light sleep, we tend to move around more. By placing a phone on the bed, its accelerometer can pick up on these subtle movements and vibrations. Our goal is to translate this raw motion data into a sleep stage estimation.

Setting Up

Before we start coding, let's set up our environment. Make sure you have a React Native project ready to go.

Required Libraries:

  • react-native-sensors: For accessing the accelerometer data.
  • @tensorflow/tfjs and @tensorflow/tfjs-react-native: For running our machine learning model on the device.

Installation:

```bash
npm install react-native-sensors
npm install @tensorflow/tfjs @tensorflow/tfjs-react-native
```

For iOS, you'll also need to run pod install:

```bash
cd ios && pod install && cd ..
```

Version Compatibility:

  • react-native-sensors: v7.3.6 or later
  • @tensorflow/tfjs: v4.2.0 or later

Make sure to follow the peer dependency instructions for @tensorflow/tfjs-react-native.

Step 1: Collecting Accelerometer Data

First, we need to get a stream of data from the phone's accelerometer. react-native-sensors makes this straightforward. The accelerometer provides data on three axes: x, y, and z. For our purpose, we are interested in the overall magnitude of movement, regardless of direction.

What we're doing

We'll subscribe to the accelerometer and calculate the vector magnitude of the acceleration to get a single value representing the amount of movement.

Implementation

```javascript
// src/services/accelerometerService.js
import { accelerometer } from 'react-native-sensors';
import { map, filter } from 'rxjs/operators';

const MOVEMENT_THRESHOLD = 0.1; // m/s²; adjust this based on testing

export const startMovementDetection = (onMovement) => {
  const subscription = accelerometer
    .pipe(
      // Magnitude of the acceleration vector, minus gravity (~9.8 m/s²).
      // This is a simplification that works while the phone is roughly
      // stationary, as it is on a mattress.
      map(({ x, y, z }) => Math.sqrt(x ** 2 + y ** 2 + z ** 2) - 9.8),
      // Filter out minor sensor noise
      filter(magnitude => magnitude > MOVEMENT_THRESHOLD)
    )
    .subscribe(
      magnitude => onMovement(magnitude),
      error => {
        console.warn('The accelerometer is not available', error);
      }
    );

  return subscription;
};
```

How it works

We subscribe to the accelerometer data stream. For each data point, we calculate the magnitude of the acceleration vector. We subtract the force of gravity (approximately 9.8 m/s²) to focus on the user's movements. We then filter out very small values that are likely just sensor noise. The onMovement callback will be invoked whenever a significant movement is detected.
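As a quick sanity check, the same magnitude arithmetic can be run in plain JavaScript. The `movementMagnitude` helper is defined here purely for illustration; the 9.8 m/s² subtraction is the same simplification used above:

```javascript
// Net movement = vector magnitude of acceleration minus gravity.
const movementMagnitude = ({ x, y, z }) =>
  Math.sqrt(x ** 2 + y ** 2 + z ** 2) - 9.8;

// Phone lying flat and perfectly still: gravity acts on the z axis only,
// so the net movement is approximately zero.
const atRest = movementMagnitude({ x: 0, y: 0, z: 9.8 });

// A small nudge adds acceleration on top of gravity, producing a small
// positive value that survives the noise filter.
const nudged = movementMagnitude({ x: 0.5, y: 0.2, z: 9.8 });

console.log(atRest); // ≈ 0
console.log(nudged); // a small positive value
```

This also shows why the threshold matters: a resting phone hovers around zero, and only genuine movements push the value above `MOVEMENT_THRESHOLD`.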

Common pitfalls

  • Forgetting to unsubscribe: Sensor subscriptions can drain the battery. Always make sure to unsubscribe when your component unmounts.
  • Simulator issues: The accelerometer will not work on a simulator. You must test this on a real device.

Step 2: Signal Processing - Making Sense of the Noise

The raw data from the accelerometer is noisy and comes in at a high frequency. To analyze it for sleep patterns, we need to process it first. Two key techniques are filtering and epoching.

What we're doing

  1. Filtering: We'll apply a low-pass filter to smooth out the data and remove high-frequency noise that isn't related to body movements.
  2. Epoching: We'll group the data into fixed-time windows, or "epochs" (e.g., 30 seconds). Sleep analysis is typically done over these short periods, not on a moment-by-moment basis.

Implementation

Let's create a simplified low-pass filter and a function to handle epoching.

```javascript
// src/utils/signalProcessing.js

// A simple exponential low-pass filter implementation
export class LowPassFilter {
  constructor(alpha) {
    this.alpha = alpha;
    this.lastValue = null;
  }

  filter(value) {
    if (this.lastValue === null) {
      this.lastValue = value;
      return value;
    }
    const filteredValue = this.alpha * value + (1 - this.alpha) * this.lastValue;
    this.lastValue = filteredValue;
    return filteredValue;
  }
}

// Group timestamped readings into fixed-duration windows ("epochs")
export const createEpochs = (data, epochDuration = 30000) => {
  const epochs = [];
  let currentEpoch = [];
  let epochStartTime = data.length > 0 ? data[0].timestamp : 0;

  data.forEach(point => {
    if (point.timestamp - epochStartTime < epochDuration) {
      currentEpoch.push(point.magnitude);
    } else {
      epochs.push(currentEpoch);
      currentEpoch = [point.magnitude];
      epochStartTime = point.timestamp;
    }
  });

  if (currentEpoch.length > 0) {
    epochs.push(currentEpoch);
  }

  return epochs;
};
```

How it works

The LowPassFilter gives more weight to previous readings, effectively smoothing out sudden spikes. The createEpochs function iterates through our collected data points and groups them into arrays, each representing a 30-second window.
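To see the smoothing in action, here is the filter fed a brief spike. The class is repeated inline so the snippet runs standalone, and `alpha = 0.5` is just an illustrative choice (lower values smooth more aggressively):

```javascript
// Same exponential smoothing as the LowPassFilter above, repeated inline.
class LowPassFilter {
  constructor(alpha) {
    this.alpha = alpha;
    this.lastValue = null;
  }
  filter(value) {
    if (this.lastValue === null) {
      this.lastValue = value;
      return value;
    }
    const filtered = this.alpha * value + (1 - this.alpha) * this.lastValue;
    this.lastValue = filtered;
    return filtered;
  }
}

const lpf = new LowPassFilter(0.5);
// A single-sample spike in an otherwise quiet signal...
const smoothed = [0, 1, 0, 0].map(v => lpf.filter(v));
console.log(smoothed); // [0, 0.5, 0.25, 0.125] — the spike is damped and spread out
```

The spike's energy decays geometrically instead of appearing as an abrupt jump, which is exactly the behavior we want before epoching.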

Step 3: Feature Extraction - Describing the Movement

Now that we have clean, epoch-based data, we need to describe the movement within each epoch in a way that a machine learning model can understand. This is done through feature extraction. We'll calculate a set of statistical measures for each epoch.

What we're doing

For each 30-second epoch of movement data, we'll calculate features like the average movement, the standard deviation (how varied the movement is), and the number of distinct movements.

Implementation

```javascript
// src/utils/featureExtraction.js

const calculateMean = (arr) => arr.reduce((a, b) => a + b, 0) / arr.length || 0;

const calculateStdDev = (arr) => {
  const mean = calculateMean(arr);
  const variance = arr.reduce((a, b) => a + (b - mean) ** 2, 0) / arr.length || 0;
  return Math.sqrt(variance);
};

const countZeroCrossings = (arr) => {
  const mean = calculateMean(arr);
  let count = 0;
  for (let i = 1; i < arr.length; i++) {
    if ((arr[i - 1] - mean) * (arr[i] - mean) < 0) {
      count++;
    }
  }
  return count;
};

export const extractFeatures = (epoch) => {
  if (epoch.length === 0) {
    return {
      mean: 0,
      stdDev: 0,
      zeroCrossings: 0,
      max: 0,
    };
  }

  return {
    mean: calculateMean(epoch),
    stdDev: calculateStdDev(epoch),
    zeroCrossings: countZeroCrossings(epoch),
    max: Math.max(...epoch),
  };
};
```

How it works

  • mean: The average movement intensity. Higher values suggest more activity.
  • stdDev: The variability of movement. A high standard deviation might indicate restless tossing and turning.
  • zeroCrossings: How many times the signal crosses the mean. This can be a proxy for the number of distinct movements.
  • max: The peak movement intensity in the epoch.

These features give us a rich, numerical summary of the user's movement in each 30-second window.
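A quick way to build intuition for these features is to compare a still epoch with a restless one. The helpers are repeated inline so the snippet runs standalone; the sample magnitudes are made up for illustration:

```javascript
// Inline copies of the mean and standard deviation helpers above.
const mean = (arr) => arr.reduce((a, b) => a + b, 0) / arr.length || 0;
const stdDev = (arr) => {
  const m = mean(arr);
  return Math.sqrt(arr.reduce((a, b) => a + (b - m) ** 2, 0) / arr.length || 0);
};

// A nearly motionless epoch (deep sleep) vs. a restless one (light sleep
// or awake). Values are illustrative movement magnitudes.
const still = [0.01, 0.02, 0.01, 0.02];
const restless = [0.1, 0.9, 0.2, 1.1];

console.log(mean(still) < mean(restless));     // true: more overall activity
console.log(stdDev(still) < stdDev(restless)); // true: more tossing and turning
```

Both features separate the two epochs cleanly, which is what makes even a simple classifier on top of them workable.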

Step 4: On-Device Sleep Stage Classification with TensorFlow.js

This is where the "smart" part comes in. We'll use a simple, pre-trained machine learning model to classify each epoch into a sleep stage based on the features we extracted. For a real-world app, you'd train this model on a large dataset of accelerometer data that has been labeled with actual sleep stages from a sleep study. For our example, we'll simulate loading and using such a model.

What we're doing

We will load a mock TensorFlow.js model and create a function that takes our features as input and returns a predicted sleep stage.

Implementation

First, let's create a mock model for demonstration purposes. In a real app, you would load a model.json file.

```javascript
// src/services/sleepModelService.js
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-react-native';

let model = null;

// This is a mock model for demonstration.
// In a real application, you would load a pre-trained model.
const createMockModel = () => {
  const mockModel = tf.sequential();
  mockModel.add(tf.layers.dense({ units: 1, inputShape: [4] }));
  // Compile the model for execution
  mockModel.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });
  return mockModel;
};

export const loadModel = async () => {
  if (model) return;
  try {
    await tf.ready();
    // In a real app: model = await tf.loadLayersModel('path/to/your/model.json');
    model = createMockModel();
    console.log('Mock model loaded successfully');
  } catch (error) {
    console.error('Failed to load the model', error);
  }
};

export const predictSleepStage = (features) => {
  if (!model) {
    console.warn('Model is not loaded yet.');
    return 'unknown';
  }

  // With a real model, you would build an input tensor from the features,
  // call model.predict(), and read out class probabilities — remembering
  // to dispose() of tensors afterwards to avoid memory leaks:
  //
  //   const input = tf.tensor2d([[features.mean, features.stdDev,
  //                               features.zeroCrossings, features.max]]);
  //   const prediction = model.predict(input);
  //   ...
  //   input.dispose(); prediction.dispose();
  //
  // Since our model is a mock, we use a simple heuristic instead.
  const movementScore = features.mean + features.stdDev;
  if (movementScore < 0.1) return 'deep';
  if (movementScore < 0.8) return 'light';
  return 'awake';
};
```

How it works

  1. loadModel: We ensure TensorFlow.js is ready and then load our model. Here, we create a simple sequential model for demonstration.
  2. predictSleepStage: This function takes the extracted features and, with a real model, would convert them into a tensor (TensorFlow's standard data format) and feed them through model.predict(). Since our model is a mock, we fall back to a simple heuristic: low movement suggests deep sleep, moderate movement suggests light sleep, and high movement suggests the user is awake.
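The heuristic's decision boundaries can be exercised directly. `classify` below is a standalone copy of the mock's logic; the 0.1 and 0.8 thresholds mirror the code above and, in a production app, would come from training data rather than being hand-picked:

```javascript
// Standalone copy of the mock predictSleepStage heuristic.
const classify = ({ mean, stdDev }) => {
  const movementScore = mean + stdDev;
  if (movementScore < 0.1) return 'deep';  // almost no movement
  if (movementScore < 0.8) return 'light'; // some restlessness
  return 'awake';                          // sustained movement
};

console.log(classify({ mean: 0.02, stdDev: 0.01 })); // 'deep'
console.log(classify({ mean: 0.3, stdDev: 0.2 }));   // 'light'
console.log(classify({ mean: 0.7, stdDev: 0.4 }));   // 'awake'
```

Note that the boundaries are soft in reality: a real classifier outputs probabilities per stage, and you would typically smooth predictions across neighboring epochs rather than trusting a single 30-second window.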

Putting It All Together

Now, let's create a React Native component that uses all these pieces to create our smart alarm logic.

```javascript
// App.js
import React, { useState, useEffect, useRef } from 'react';
import { View, Text, Button, StyleSheet } from 'react-native';
import { startMovementDetection } from './src/services/accelerometerService';
import { LowPassFilter, createEpochs } from './src/utils/signalProcessing';
import { extractFeatures } from './src/utils/featureExtraction';
import { loadModel, predictSleepStage } from './src/services/sleepModelService';

const EPOCH_DURATION = 30000; // 30 seconds

const App = () => {
  const [currentSleepStage, setCurrentSleepStage] = useState('unknown');
  const [isMonitoring, setIsMonitoring] = useState(false);
  // A ref, not state: sensor readings arrive many times per second, and
  // storing them in state would re-render the component and reset the
  // analysis interval on every reading.
  const movementData = useRef([]);

  useEffect(() => {
    loadModel();
  }, []);

  useEffect(() => {
    let subscription;
    if (isMonitoring) {
      const filter = new LowPassFilter(0.5);
      subscription = startMovementDetection(magnitude => {
        const filtered = filter.filter(magnitude);
        movementData.current.push({ magnitude: filtered, timestamp: Date.now() });
      });
    }

    return () => {
      if (subscription) {
        subscription.unsubscribe();
      }
    };
  }, [isMonitoring]);

  useEffect(() => {
    if (!isMonitoring) return;

    const interval = setInterval(() => {
      if (movementData.current.length === 0) return;

      const epochs = createEpochs(movementData.current);
      const lastEpoch = epochs[epochs.length - 1] || [];
      const features = extractFeatures(lastEpoch);
      const stage = predictSleepStage(features);
      setCurrentSleepStage(stage);

      // Simple alarm logic. Example: wake up between 6:30 and 7:00 AM.
      const now = new Date();
      if (now.getHours() === 6 && now.getMinutes() >= 30) {
        if (stage === 'light') {
          console.log("WAKE UP! It's the perfect time.");
          setIsMonitoring(false);
          // Trigger the actual alarm sound here
        }
      }

      // Clear processed data to save memory
      movementData.current = [];
    }, EPOCH_DURATION);

    return () => clearInterval(interval);
  }, [isMonitoring]);

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Smart Alarm</Text>
      <Text style={styles.status}>
        Monitoring: {isMonitoring ? 'On' : 'Off'}
      </Text>
      <Text style={styles.stage}>
        Current Sleep Stage: {currentSleepStage}
      </Text>
      <Button
        title={isMonitoring ? 'Stop Monitoring' : 'Start Monitoring'}
        onPress={() => setIsMonitoring(!isMonitoring)}
      />
    </View>
  );
};

// ... styles
```

Challenges and Considerations

Building a production-ready smart alarm involves more than just this simple algorithm. Here are some key challenges:

Performance and Battery Life

Continuously processing accelerometer data is resource-intensive. Running this in the JavaScript thread can make the UI unresponsive and will drain the battery quickly.

  • Background Processing: For a real app, you would need to run this logic in the background. Libraries like react-native-background-fetch can schedule periodic tasks, but for continuous sensor reading, you might need to write a native module for both Android and iOS to handle this more efficiently.
  • Throttling: You don't need to sample the accelerometer at its highest frequency. A rate of 10-20 Hz is often sufficient for this kind of analysis.
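react-native-sensors provides setUpdateIntervalForType for exactly this. A minimal sketch (the 100 ms interval, i.e. 10 Hz, is an illustrative choice):

```javascript
import { setUpdateIntervalForType, SensorTypes } from 'react-native-sensors';

// Sample the accelerometer at 10 Hz (every 100 ms) instead of the device
// default. This is plenty for coarse movement detection and significantly
// reduces battery drain during an all-night session.
setUpdateIntervalForType(SensorTypes.accelerometer, 100);
```

Call this once before subscribing, e.g. at the top of startMovementDetection.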

Accuracy of On-Device Models

The accuracy of this system depends heavily on:

  • Phone Placement: The phone must be placed on the mattress to detect movements accurately.
  • Sharing a Bed: If you share a bed, the accelerometer will pick up your partner's movements, corrupting the data.
  • The Model: Our heuristic is very basic. A real model would need to be trained on a diverse dataset from many individuals and sleep conditions to be reliable.

Security and Privacy

Sensor data is sensitive. While on-device processing keeps the data private, if you ever plan to upload this data to a server for analysis, you must be transparent with your users and ensure the data is anonymized and securely handled.

Conclusion

We've journeyed from raw, noisy accelerometer data to a smart decision about the best time to wake someone up. We've seen how to collect sensor data in React Native, apply signal processing techniques to clean it up, extract meaningful features, and use a machine learning model to make predictions.

While our implementation is a simplified proof-of-concept, it lays the groundwork for building sophisticated, context-aware mobile applications. The true power lies in the combination of sensor data and on-device intelligence.

Next steps for readers:

  • Try to implement a more sophisticated filter.
  • Experiment with different features to see if you can improve the classification.
  • Look into training a simple model with TensorFlow and exporting it for use in your React Native app.


Article Tags

react-native · data-science · mobile · healthtech

WellAlly's core development team, comprised of healthcare professionals, software engineers, and UX designers committed to revolutionizing digital health management.

Expertise

Healthcare Technology · Software Development · User Experience · AI & Machine Learning


© 2024 康心伴 WellAlly · Professional Health Management