
Building Computer Vision That Can't Get It Wrong

What it’s like to do serious computer vision work at Inspiren, where the stakes are real, the feedback loop is tight, and the problems are genuinely hard.

By Wassim El Ahmar, Ph.D., P.Eng., Staff Machine Learning Engineer

Wassim El Ahmar is a machine learning and computer vision expert with over ten years of experience across academia and industry. At Inspiren, he builds and optimizes the on-device models behind Inspiren AUGi’s real-time detection of care events, including falls, bed exits, and chair exits, while keeping resident privacy fully intact.

It's 3 AM in a senior living community. A resident shifts to the edge of her bed and starts to stand. Before her feet touch the floor, the care team has been alerted and a staff member is at her bedside in time to steady her arm.

The fall that could have happened didn't.

That moment, and the thousands like it across the communities we serve, is why we build the way we build, and it's why the engineering problems we work on are harder than they look from the outside.

The problem that doesn't keep business hours

Falls happen at 3 AM. Changes in a resident’s condition are often gradual and invisible to a care team stretched thin across a building. The gap between a resident needing help and a staff member knowing about it is exactly where faster, more reliable awareness changes outcomes.

Inspiren was built to close that gap. Across every space a resident moves through (apartments, common areas, bathrooms), our ecosystem gives care teams continuous awareness without compromising privacy or dignity.

In apartments and common spaces, AUGi is an AI-powered sensor that identifies care events (falls, bed exits, and chair exits), using privacy-protected imagery to alert staff when a resident may need assistance. In bathrooms, where standard devices can't go, Sense is a camera-free sensor that detects falls and tracks care patterns. Across hallways and common areas, integrated eCall puts a direct line to care within reach at all times, while Staff Beacons round out the picture by tracking care delivery and response times.

All of it runs directly on the device, with no data sent to an external server and no cloud dependency for core detection. All of it has to work reliably, at all hours, in real apartments.

Three constraints that shape everything

There’s no shortage of computer vision work in the world. What makes this setting distinct, and what makes it more interesting, is the constraint structure we operate inside.

Privacy isn't a constraint we design around; it's the foundation. Our system identifies care events while protecting resident privacy end to end. Care teams see a privacy-protected visual overlay that provides just enough context to respond, without exposing unnecessary detail about the resident. Every architectural decision starts from that premise.
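This article doesn't detail how that overlay is produced, but as a rough sketch of the idea: assume a hypothetical on-device pose model that emits 2D keypoints, and render only a stick figure on a blank canvas, so posture is visible while the raw frame never leaves the device. Everything below (the edge list, the canvas size, the keypoints) is illustrative, not a description of our production pipeline.

```python
import numpy as np

# Hypothetical skeleton: index pairs into a keypoint array from an
# on-device pose model (the model itself is out of scope here).
EDGES = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 5)]  # head/torso/limbs, illustrative

def draw_segment(canvas: np.ndarray, p0, p1) -> None:
    """Rasterize one limb segment onto the canvas."""
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    xs = np.linspace(p0[0], p1[0], n).round().astype(int)
    ys = np.linspace(p0[1], p1[1], n).round().astype(int)
    ok = (ys >= 0) & (ys < canvas.shape[0]) & (xs >= 0) & (xs < canvas.shape[1])
    canvas[ys[ok], xs[ok]] = 255

def privacy_overlay(keypoints: np.ndarray, shape=(480, 640)) -> np.ndarray:
    """Render posture only: staff see a stick figure, never the raw image."""
    canvas = np.zeros(shape, dtype=np.uint8)
    for i, j in EDGES:
        draw_segment(canvas, keypoints[i], keypoints[j])
    return canvas

# Example: six (x, y) keypoints in pixel coordinates.
kp = np.array([[320, 80], [320, 160], [280, 240], [360, 240], [260, 330], [380, 330]])
overlay = privacy_overlay(kp)
print(overlay.shape)  # (480, 640): a blank frame carrying only the pose
```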

Accuracy is measured in real-world outcomes. If we miss an event, a resident may not get timely support. If alerts trigger too often, care teams begin to tune them out. Accuracy isn't an abstract metric; it directly shapes how reliably care teams can respond and how much they trust the system day to day.

These are dynamic, real-world environments. Our models run on edge hardware in real apartments, where low light, tight corners, mobility aids, and highly variable movement patterns are the norm. That means the system has to stay reliable despite constant variation in how people live and move.

What it's like to build on the Engineering team at Inspiren

The engineering team's mission maps to Inspiren's at its core: build the first unified ecosystem where AI-powered awareness and clinical expertise work as one, so that no resident's needs go unmet, and no care team is left guessing. That work spans real-time event detection, edge inference, privacy-first hardware, and the data pipelines and software that tie it all together. The scope is broad, and the stakes are tangible.

We’re a small team, which keeps the distance between an experiment and a production decision short. Your work is immediately visible, your judgment carries weight, and decisions get made close to the problem by the people working on it directly.

What makes the environment unusual is who you build with. Our Clinical Success team, RNs and physical therapists with direct senior living experience, is embedded in the feedback loop, not brought in at the end. Engineers regularly review cases with clinicians to understand why a model was wrong or why an alert didn’t make sense in context. Instead of working from a static labeled dataset, we work from real clinical interpretation.

That tight loop directly shapes how we improve the system. Not every event that looks like a fall is one: a prescribed floor exercise or a slow transfer into a wheelchair can look similar to an actual fall. When the system gets it wrong, clinicians help explain what really happened, and that becomes a training signal. Over time, those reviewed cases sharpen the model, not just for generic fall detection, but for the full range of how real residents actually move. A model that performs well on benchmarks but misclassifies a mobility-impaired resident at 3 AM isn’t a model we deploy.
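As a rough illustration of that loop, with a hypothetical schema rather than our real one, the highest-value additions to the next training set are the cases where the model and the clinician disagree:

```python
from dataclasses import dataclass

@dataclass
class ReviewedCase:
    clip_id: str
    model_prediction: str   # what the model alerted on, e.g. "fall"
    clinician_label: str    # what actually happened, per clinical review
    notes: str              # clinical context explaining the event

def hard_examples(cases: list[ReviewedCase]) -> list[ReviewedCase]:
    """Disagreements between model and clinician are the training
    signal: they capture exactly the movement patterns the model
    misreads, like slow transfers that resemble falls."""
    return [c for c in cases if c.model_prediction != c.clinician_label]

reviewed = [
    ReviewedCase("clip-001", "fall", "fall", "confirmed; alert was correct"),
    ReviewedCase("clip-002", "fall", "floor_exercise", "prescribed PT routine"),
    ReviewedCase("clip-003", "fall", "slow_transfer", "wheelchair transfer"),
]
print([c.clip_id for c in hard_examples(reviewed)])  # ['clip-002', 'clip-003']
```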

How we build and ship

Every architecture decision starts from the same constraint: inference runs on the device, in the apartment, with no protected health information leaving the room. Model size, latency, and power consumption are hard ceilings. We use lightweight backbone architectures and aggressive quantization to stay within them.
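As a minimal sketch of what quantization buys under those ceilings (an illustration of the technique, not our production pipeline), symmetric per-tensor int8 weight quantization cuts weight storage 4x versus float32 at a small accuracy cost:

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# A toy 3x3 conv kernel bank: 64 output channels, 3 input channels.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
q, s = quantize_int8(w)
err = float(np.abs(dequantize(q, s) - w).max())
print(f"bytes {w.nbytes} -> {q.nbytes}, max abs error {err:.4f}")
```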

Standard metrics matter, but they aren't sufficient here. What matters is how often a real fall gets caught, and how often care teams are called for nothing. A missed fall is categorically worse than a false alert, and our thresholds reflect that explicitly. We track performance by subpopulation: mobility-impaired residents, low-light conditions, and obstructed views. Aggregate metrics hide the failure modes that cost the most. Model selection happens against evaluation sets built specifically around those hard conditions, with Clinical Success team event reviews feeding continuously back in.
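A toy version of that evaluation, with hypothetical cost weights and synthetic data standing in for real event logs, looks something like this:

```python
import numpy as np

def alert_stats(scores, labels, threshold):
    """Recall on true falls, and false-alert rate on everything else."""
    preds = scores >= threshold
    falls = labels == 1
    recall = preds[falls].mean() if falls.any() else 0.0
    false_alerts = preds[~falls].mean() if (~falls).any() else 0.0
    return recall, false_alerts

def pick_threshold(scores, labels, miss_cost=20.0, false_alert_cost=1.0):
    """A missed fall is categorically worse than a false alert, so the
    cost function weights misses far more heavily (weights illustrative)."""
    candidates = np.linspace(0.05, 0.95, 19)
    costs = [miss_cost * (1 - r) + false_alert_cost * f
             for r, f in (alert_stats(scores, labels, t) for t in candidates)]
    return candidates[int(np.argmin(costs))]

# Synthetic stand-in data; real evaluation runs on reviewed event logs.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 2000)
scores = np.clip(labels * 0.5 + rng.normal(0.3, 0.2, 2000), 0, 1)
groups = rng.choice(["mobility_impaired", "low_light", "obstructed_view"], 2000)

t = pick_threshold(scores, labels)
for g in np.unique(groups):  # per-subpopulation: aggregates hide these
    m = groups == g
    recall, far = alert_stats(scores[m], labels[m], t)
    print(f"{g}: recall={recall:.2f}, false-alert rate={far:.2f}")
```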

A model that clears evaluation enters staged rollout, not a fleet-wide push. New versions go live across a subset of communities first, where we monitor real outcomes: missed events, alert volume, Clinical Success team review results. That's where we catch the regressions evaluation sets miss.
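In spirit, the promotion gate looks something like the sketch below; the fields and thresholds are hypothetical, chosen to show the asymmetry rather than our actual criteria:

```python
from dataclasses import dataclass

@dataclass
class StageMetrics:
    missed_events: int              # confirmed falls the model did not alert on
    total_events: int               # confirmed falls during the stage
    alerts_per_resident_day: float  # proxy for alert fatigue

def promote(candidate: StageMetrics, baseline: StageMetrics,
            max_alert_increase: float = 0.10) -> bool:
    """Promote a staged model fleet-wide only if the missed-event rate
    does not regress and alert volume stays within tolerance."""
    cand_miss = candidate.missed_events / max(candidate.total_events, 1)
    base_miss = baseline.missed_events / max(baseline.total_events, 1)
    if cand_miss > base_miss:
        return False  # any regression on missed falls blocks rollout
    ceiling = baseline.alerts_per_resident_day * (1 + max_alert_increase)
    return candidate.alerts_per_resident_day <= ceiling

baseline = StageMetrics(missed_events=2, total_events=100, alerts_per_resident_day=1.4)
candidate = StageMetrics(missed_events=1, total_events=90, alerts_per_resident_day=1.5)
print(promote(candidate, baseline))  # True: fewer misses, alert volume within 10%
```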

The hard problems aren't behind us. Keeping pose estimation reliable across the full spectrum of resident mobility (significant gait impairment, assistive devices, atypical movement patterns) is active work. So is scaling multi-sensor correlation as deployment density grows and edge cases compound across environments. These are research-grade problems running inside a production system, with real outcomes attached to every improvement.

If this sounds like your kind of work

We ship into an environment that affects people's daily lives. That's not a reason to move slowly, but a forcing function to build better systems.

This work rewards a specific kind of engineer. Not someone who needs a clean problem statement and a single objective function, but someone who finds it more interesting when the constraints are in tension: privacy requirements that shape architecture decisions, clinical accuracy thresholds that can't be traded against each other, edge hardware that makes every model size choice a real negotiation. If you want to grow as an engineer, there are few better environments than one where your architectural decisions have direct clinical consequences and a clinician will tell you exactly where you got it wrong.

The distance from your work to real-world impact is short. Each model runs in real apartments, its outputs are reviewed by real clinicians, and the results feed back within weeks. Every improvement is a fall caught earlier, a care team that trusts the system more, a resident safer in their own home.

If that's the kind of work you want to do, we're hiring.

Apply here