Uber and Lyft drivers, especially in urban areas, are familiar with a certain pattern of behavior: passengers recognize the correct vehicle based on app-supplied information, then pause and lean toward the window to introduce themselves. People want assurance that they are getting into the right car, and drivers likewise want to be sure they are picking up the right passengers; this quick exchange puts both at ease. But the simplicity of the routine belies just how complex facial recognition and information exchange actually are.
This complexity is giving the automotive industry pause as self-driving cars become more prominent. These cars will eventually follow a fleet model similar to that of Uber and Lyft, and an autonomous fleet raises a number of small but important questions: How will a driverless car recognize its passenger? How will it know precisely where to stop? How will it signal its arrival to confused passengers? A smartphone signal is not precise enough on a crowded street where other people are also waiting for their rides, and it does nothing to replace that element of human interaction.
Facial recognition technology has been improving rapidly. Apple's new iPhone X uses it, as ‘Face ID’, to unlock the phone and authorize payments. These advancements, when paired with over-the-air (OTA) software updates, can help identify passengers. It may seem a small step beside the other technological hurdles that OEMs must overcome, but it will help ensure that customers are fully on board with a self-driving fleet model.
Improvements in Facial Recognition Usher in a New World
At a recent outdoor music fest in Russia, participants were asked to submit a selfie to a databank. Cameras placed around the park then took pictures of the revelers, snapping candid shots throughout the night. Festival-goers did not even need to punch in any information to receive their photos – thanks to facial-recognition software, snapshots were sent instantly to their phones.
It was a remarkable display of how far facial recognition has come. The software recognized 73% of the festival-goers, identifying people in all poses, in the middle of crowds, while making different faces, and even when they were partly obscured or in motion.
Last year, Facebook announced that its recognition technology could identify people in photographs 83% of the time — even if the subject’s face was blocked — using identifiers such as posture and hairdo. And despite its slightly lower success rate, the music festival was actually a step beyond this, as cameras were able to identify people as they moved through real-world situations (as opposed to identifying faces in still photographs).
These developments stand to be extremely valuable to the automotive industry.
What Facial Recognition Can Do for Self-Driving Cars
Autonomous vehicles are already equipped with cameras that, along with sensors, allow them to navigate through busy roads. These cameras are becoming more and more sophisticated and are increasingly able to identify humans, which is crucial in terms of safety.
As this technology progresses, it will have a number of other practical applications. For instance, it could also be used to help prevent hijacked rides. Vehicle doors could remain locked until the right passenger shows up and is identified by the camera. If the car fails to recognize the passenger (which might happen occasionally — no technology is flawless), the passenger could simply enter a code on their app to unlock the vehicle.
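The locked-until-recognized flow described above can be sketched in a few lines. This is purely illustrative (the function name, match threshold, and fallback-code scheme are all assumptions, not any carmaker's actual API): the doors unlock only on a confident camera match, with a one-time app code as the fallback when recognition fails.

```python
from typing import Optional

def authorize_entry(camera_match_score: float,
                    fallback_code: Optional[str],
                    expected_code: str,
                    threshold: float = 0.9) -> bool:
    """Unlock the doors only on a confident face match or a valid app code."""
    if camera_match_score >= threshold:
        return True                          # face recognized: unlock
    if fallback_code is not None:
        return fallback_code == expected_code  # one-time code from the app
    return False                             # neither: stay locked

# Face recognized with high confidence: doors unlock.
print(authorize_entry(0.95, None, "481516"))       # True
# Recognition failed, but the rider enters the correct app code.
print(authorize_entry(0.42, "481516", "481516"))   # True
# Neither: the car stays locked.
print(authorize_entry(0.42, None, "481516"))       # False
```

In practice the match score would come from the vehicle's exterior camera pipeline and the code would be a short-lived token issued to the rider's app, but the decision logic reduces to this shape.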
Facial recognition can also be used to help gauge human emotions, adding a human element to autonomous vehicles that could reassure hesitant passengers. Uber, for instance, is already using cameras in an attempt to measure passengers' reactions in its Pittsburgh self-driving cabs. If a car can tell that a passenger is nervous or uncomfortable, it can adjust its driving, much as a human driver might notice that the person in the backseat wants them to, say, slow down a bit. These small interactions, which humans are very good at and which govern so much of what we do, must also be learned by cars.
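One minimal way to think about that adaptation is a lookup from a detected emotion to a driving-style adjustment. The sketch below is an assumption-laden illustration, not Uber's or anyone's real system; the emotion labels and parameter values are invented for clarity.

```python
# Map a detected passenger emotion to driving parameters, the way a
# human driver might ease off for a nervous rider. Values are illustrative.
EMOTION_TO_ADJUSTMENT = {
    "nervous":       {"speed_factor": 0.85, "following_gap_s": 3.0},
    "uncomfortable": {"speed_factor": 0.90, "following_gap_s": 2.5},
    "calm":          {"speed_factor": 1.00, "following_gap_s": 2.0},
}

def adjust_driving(detected_emotion: str) -> dict:
    """Return driving parameters for the emotion; default to calm driving."""
    return EMOTION_TO_ADJUSTMENT.get(detected_emotion,
                                     EMOTION_TO_ADJUSTMENT["calm"])

print(adjust_driving("nervous"))   # reduced speed, larger following gap
```

A production system would feed continuous emotion estimates into the planner rather than a discrete table, but the principle — behavioral cues in, driving-style parameters out — is the same.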
For OEMs hoping to make self-driving cars a reality, such emotional and social obstacles might be steeper than technological ones, but the payoff will be worth it. Cars that can read human behavioral cues and adapt to them will make people feel more comfortable about giving up control when seated inside, and more willing to embrace a vehicle's autonomous nature.
Improving Safety and Security
While facial recognition will enhance car safety for passengers, the interaction between the passenger’s smartphone app and the car itself will still be vulnerable to cyber-attacks. This means that sophisticated cybersecurity is necessary for keeping the connected car safe. As security patches for threats are generated, sending them over-the-air will ensure that every car on the road is immediately protected. The same smart OTA technology will also be needed to keep vehicles up-to-date as facial recognition technology continues to improve.
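One basic safeguard any such OTA pipeline needs is integrity checking: the vehicle should refuse to install a downloaded patch whose digest does not match the one published by the update server. The sketch below shows that single check with Python's standard `hashlib`; real automotive OTA systems layer on cryptographic signatures and much more, so treat this as a minimal illustration.

```python
import hashlib

def verify_patch(patch_bytes: bytes, expected_sha256: str) -> bool:
    """Accept the update only if its SHA-256 digest matches the expected one."""
    return hashlib.sha256(patch_bytes).hexdigest() == expected_sha256

# The server publishes the digest alongside the patch.
patch = b"security-fix-v2"
published_digest = hashlib.sha256(patch).hexdigest()

print(verify_patch(patch, published_digest))            # True: install
print(verify_patch(b"tampered-bytes", published_digest))  # False: reject
```

A digest alone only proves the download was not corrupted; proving the patch came from the manufacturer additionally requires a signature over that digest with a key the vehicle trusts.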
The entire self-driving project depends on two things: making cars safe, and making humans comfortable with the idea of a fully autonomous vehicle. Facial recognition software adds a human element to self-driving cars, and OTA software updates keep that software current and protected with the latest cybersecurity. Together, the two technologies offer a way to meet both needs.
As the auto industry is reshaped by technological and economic currents, OEMs and Tier-1 manufacturers will need to partner with technology specialists to thrive in the era of the software-defined car. Movimento's expertise is rooted in our background as an automotive company, which has allowed us to create the technological platform that underpins the future of the software-driven and self-driven car. Connect with us today to learn more about how we can work together.