500+ Sounds Detected in Real-Time on Constrained Devices
Designed and deployed an always-on audio detection system from edge devices to cloud dashboard — built for healthcare environments where every sound matters.
Client Type
Techstars Startup
Timeline
6 Months
Role
Lead AI Engineer
A healthcare-focused startup needed to detect and classify hundreds of sounds in real-time, on hardware that couldn't rely on the cloud for inference.
The client was building an ambient monitoring platform for hospitals and elder care facilities. The use cases ranged from noise pollution monitoring in hospitals to behavioral analytics in retirement homes — detecting events like falls, equipment alarms, or daily activity patterns.
The system had to run always-on, classify 500+ distinct sounds, operate on constrained edge devices with limited compute, and feed events to a centralized dashboard for staff review. No existing off-the-shelf solution could handle this combination of scale, accuracy, and hardware constraints.
500+
Sound classes detected in real-time
90th
Percentile detection accuracy
6 mo
From zero to deployed in healthcare
Audio Classification Model
Designed and trained a lightweight model capable of classifying 500+ distinct sounds while running within the compute budget of constrained edge hardware. Optimized for always-on, real-time inference without cloud dependency.
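The case study stays at this level of detail, so here is a minimal sketch of what such a model might look like, assuming a PyTorch log-mel CNN built from depthwise-separable convolutions to stay within an edge compute budget. The architecture, the 16 kHz one-second window, and every layer size are illustrative assumptions, not the production model.

```python
import torch
import torch.nn as nn
import torchaudio

class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise conv: far fewer parameters and MACs than a
    standard 3x3 conv, which is what keeps inference cheap on edge CPUs."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch, bias=False),
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class EdgeSoundClassifier(nn.Module):
    """Small log-mel CNN sized for always-on, on-device inference."""
    def __init__(self, num_classes=500, sample_rate=16000):
        super().__init__()
        self.frontend = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=1024, hop_length=320, n_mels=64
        )
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            DepthwiseSeparableConv(32, 64),
            nn.MaxPool2d(2),
            DepthwiseSeparableConv(64, 128),
            nn.MaxPool2d(2),
            DepthwiseSeparableConv(128, 128),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, waveform):
        # waveform: (batch, samples) of raw audio at sample_rate
        mels = self.to_db(self.frontend(waveform)).unsqueeze(1)
        features = self.backbone(mels).flatten(1)
        return self.head(features)

# One-second window at 16 kHz; in production this would run on a sliding buffer.
model = EdgeSoundClassifier().eval()
with torch.no_grad():
    scores = model(torch.randn(1, 16000))
print(scores.shape)  # torch.Size([1, 500])
```

The point of the sketch is the trade-off, not the specific layers: keeping the feature extractor small and the spectrogram front end on-device is what makes 500-class classification feasible without a cloud round trip.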
Edge-to-Cloud Infrastructure
Architected the complete AWS infrastructure from scratch — edge devices running local inference, event streaming to the cloud, and a centralized dashboard for monitoring, flagging, and analysis. Designed from diagram to delivery.
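For illustration, a minimal sketch of the edge-to-cloud event path, assuming detections are streamed through Amazon Kinesis via boto3. The stream name, device ID, and event schema are hypothetical; the project's actual AWS service choices are not spelled out in this case study.

```python
import json
import time
import uuid
import boto3

# Hypothetical names for illustration only.
kinesis = boto3.client("kinesis", region_name="us-east-1")
STREAM_NAME = "sound-detection-events"
DEVICE_ID = "ward-3-sensor-07"

def publish_detection(label: str, confidence: float) -> None:
    """Send one on-device detection into the cloud pipeline feeding the dashboard."""
    event = {
        "event_id": str(uuid.uuid4()),
        "device_id": DEVICE_ID,
        "label": label,
        "confidence": round(confidence, 3),
        "timestamp": time.time(),
    }
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=DEVICE_ID,  # keeps a device's events ordered within a shard
    )

# Only detections above a confidence threshold leave the device;
# raw audio stays local on the edge hardware.
publish_detection("equipment_alarm", 0.94)
```

The design choice this illustrates is that only small, structured events cross the network, which keeps bandwidth low and avoids shipping audio out of the facility.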
Active Learning Loop
Built a human-in-the-loop feedback system where staff could listen to flagged detections and vote on accuracy. These labels fed directly back into the training pipeline, continuously improving model performance with real-world data.
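A minimal sketch of the label-collection side of such a loop, with a hypothetical feedback schema and a file-based log standing in for whatever storage the real pipeline used:

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class Feedback:
    clip_id: str               # reference to the flagged audio clip
    predicted_label: str       # what the model said
    staff_verdict: str         # "correct" or "wrong"
    corrected_label: str | None = None  # staff-supplied label when wrong

FEEDBACK_LOG = Path("feedback.jsonl")

def record_vote(feedback: Feedback) -> None:
    """Append a staff vote to the feedback log (one JSON object per line)."""
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(asdict(feedback)) + "\n")

def build_retraining_set() -> list[dict]:
    """Turn accumulated votes into (clip, label) pairs for the next training run."""
    examples = []
    for line in FEEDBACK_LOG.read_text().splitlines():
        fb = json.loads(line)
        if fb["staff_verdict"] == "correct":
            examples.append({"clip_id": fb["clip_id"], "label": fb["predicted_label"]})
        elif fb["corrected_label"]:
            examples.append({"clip_id": fb["clip_id"], "label": fb["corrected_label"]})
    return examples

record_vote(Feedback("clip-0413", "door_slam", "wrong", "dropped_object"))
print(build_retraining_set())
```

Confirmed predictions reinforce the existing label and corrections become new training examples, which is what lets accuracy keep improving on real-world data after deployment.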
Healthcare Deployment
Deployed initially in a hospital environment for noise pollution detection, then expanded into retirement homes for behavioral analytics, tracking patterns like falls, bathroom usage, and activity levels to support elderly care.
BEFORE
- No ambient sound monitoring in care environments
- Staff relied on manual observation and reactive response
- Falls and critical events went undetected between check-ins
- No data on environmental noise patterns affecting patient recovery
AFTER
- 500+ sounds classified in real-time on constrained edge devices
- Detection accuracy in the 90th percentile with continuous improvement via active learning
- Full AWS infrastructure delivering real-time event dashboards to care staff
- SXSW Innovation Finalist recognition for the platform
Have a similar challenge?
I help teams design, build, and ship AI systems that work in production. Let's talk about your problem.