Hey guys and gals!
We're shooting for a final product which we hope will save bicyclists' lives, and help make bike commuting possible again for many.
After all the computer vision and machine learning work I've done over the past year, it struck me (no pun intended) that there had to be a way to leverage technology to help solve one of the nastiest technology problems of our generation:
A small accident caused by texting while driving, what would be a mere 'fender bender' between two cars, often ends up killing a bicyclist on the other end, or severely injuring them and degrading their quality of life.
This happened to too many of my friends and colleagues last year. Even my business partner on this project has been hit once, and his girlfriend twice.
These accidents, which would have been 'minor' had they been between two cars, resulted in shattered hips, broken femurs, and shattered discs, followed by protracted legal battles with the insurance companies to pay for any medical bills. And one of the co-contributors on this effort lost a friend who was killed by a distracted driver.
So how do we do something about it?
Leverage computer vision and machine learning to automatically detect when a driver is on a collision course with you, and pack that functionality into a rear-facing bike light. So the final product is a dual-camera, rear-facing bike light which uses object detection and depth mapping to track the cars behind you, knowing their x, y, and z position relative to you (and, importantly, the x, y, and z of their edges).
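For anyone curious how two cameras give you that z value: stereo depth comes from the classic disparity relation, depth = focal length × baseline / disparity. Here's a minimal sketch of that math (the focal length, baseline, and disparity numbers are made up for illustration, not our actual camera parameters):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Classic stereo relation: depth = focal * baseline / disparity.
    Zero disparity (no measurable offset) maps to 'infinitely far'."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)

# Hypothetical numbers: 800 px focal length, 7.5 cm camera baseline.
# A car feature showing 12 px of disparity works out to 5 m away.
print(disparity_to_depth(12, 800, 0.075))  # 5.0
```

The farther the car, the smaller the disparity, which is why depth precision falls off with distance and why the system tracks edges rather than a single point.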
What does it do with this information?
It's normally blinking/patterning like a normal bike light, and if it detects danger, it does one of two actions, and sometimes both:
1. Starts flashing an ultra-bright light.
2. Honks a car horn.
So case '1' occurs when a car is on your trajectory, but has plenty of time/distance to respond. An example is rounding a corner, where their arc intersects with you, but they're still at a distance. The system will start flashing, to make sure that they are aware of you.
And if the flashing ultra-bright lights don't work, then the system will sound the car horn with enough time/distance left (based on relative speed) for the driver to respond. The car horn is key here, as it's one of the few 'highly sensory compatible' inputs for a driving situation. What does that mean? It nets the fastest possible average response/reaction time.
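The two-stage escalation above boils down to time-to-collision: distance divided by closing speed. Here's a toy sketch of that decision logic (the 6-second and 2.5-second thresholds are placeholder guesses for illustration, not our tuned values):

```python
def alert_action(distance_m, closing_speed_mps,
                 flash_ttc_s=6.0, horn_ttc_s=2.5):
    """Pick an alert level from time-to-collision (TTC).
    Thresholds are illustrative placeholders only."""
    if closing_speed_mps <= 0:          # car not closing on us: no danger
        return "normal_blink"
    ttc = distance_m / closing_speed_mps
    if ttc <= horn_ttc_s:               # too close to rely on the light alone
        return "flash_and_horn"
    if ttc <= flash_ttc_s:              # on our trajectory, still at distance
        return "flash"
    return "normal_blink"

print(alert_action(50, 5))   # TTC = 10 s
print(alert_action(25, 5))   # TTC = 5 s
print(alert_action(10, 5))   # TTC = 2 s
```

The real system works from full 3D trajectories and object edges rather than a single scalar distance, but the escalation principle is the same: flash first, horn only when reaction time is about to run out.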
So with all the background out of the way, we're happy to say that last weekend we actually proved this is possible, using slow/inefficient hardware (with a poor data path):
What does this show? It shows that you can differentiate between a near-miss and a glancing impact.
What does it not show? Me getting run over... heh. That's why we ran at the car instead of the car running at us. And believe it or not, this is harder to track. So it was great it worked well.
Long story short, the idea works!
This only runs at 3 FPS, so you can see in one instance it's a little delayed. The custom board we'll be making fixes this, offloading the workload from the Pi to the Movidius X and bringing the frame rate above 30 FPS. (And note we intentionally disabled the horn, as it's just WAY too loud inside.)
You can see below what the system sees. It's running depth calculation, point-cloud projection (to get x,y,z for every pixel), and object detection. It's then combining the object detection and point cloud projection to know the trajectory of salient objects, and the edges of those objects.
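The "x, y, z for every pixel" step is a standard pinhole-camera back-projection: once the depth map gives you z at a pixel, the camera intrinsics recover x and y. A minimal sketch (the intrinsics and bounding-box pixel coordinates below are invented for illustration):

```python
import numpy as np

def pixel_to_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into camera-frame
    x, y, z using the pinhole model (fx, fy focal lengths in px;
    cx, cy principal point in px)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical intrinsics for a 1280x720 sensor:
fx = fy = 800.0
cx, cy = 640.0, 360.0

# A detected car's bounding-box edges, both at 8 m depth:
left_edge  = pixel_to_xyz(400, 360, 8.0, fx, fy, cx, cy)
right_edge = pixel_to_xyz(900, 360, 8.0, fx, fy, cx, cy)
print(left_edge, right_edge)  # x = -2.4 m and x = +2.6 m
```

Applying this to the pixels inside each detection's bounding box is what lets the system track not just "a car at 8 m" but where its edges are, which is exactly what you need to tell a near-miss from a glancing impact.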
Here's me initially testing it, figuring out what thresholds should be, etc.
And this is the car under test:
We used that for two reasons:
- It was snowing outside, and it was the only one that was parked in such a way indoors that I could run at it.
- It proves that the system handles corner cases pretty well. It's a 'not very typical' car. ;-)
And here's the hardware used:
And the next big thing is to make a custom board which integrates everything together and solves a major data-path issue, the main limiter of performance when using the NCS with a smaller Linux system like the Pi.
We've realized that a ton of other people are hitting the same data-path issue with the Pi, so we're actually planning on selling our intermediate board, a carrier for the Myriad X and the Raspberry Pi Compute Module.
So the AiPi will probably go to CrowdSupply, and if all goes well we'll take Commute Guardian to Kickstarter.
Oh and here're some initial renderings/ideas for the product and user interface:
The 'Haptic' part above is haptic feedback through the seat post, warning the rider in the 'warning' state (before the 'DANGER' state) that they're at some risk, even if they're not using the app. (Trivia: fighter pilots get haptic feedback to notify them when they're at risk; it's another great 'highly sensory compatible' input, meaning it evokes a fast reaction time.) That way the rider can take action to get out of risk, and also knows that a horn may sound soon, should their action and that of the driver not be sufficient to remove them from the impending danger.
And it can record video dynamically (e.g. based on risk), storing salient events and the risk to the rider along with location data, which can then be used to (1) assess liability (back to my friend who had a protracted legal fight over his medical bills), (2) inform city planning on where cyclists are most at risk, and (3) inform fellow cyclists on what the safest routes are.
Over time it will be able to accumulate this safest-route information because, through the computer vision tracking, it can assess in real time how much risk a rider is in, and that can be aggregated across all riders (who opt in) using the device.