The more advanced the technology, the sharper the double-edged sword.
Face recognition has already found its way into many scenes of daily life, and the arrival of the iPhone X spread it into every corner of it. Some say that in today’s connected era the car, like the mobile phone, is slowly becoming a “mobile electronic device”: the functions we use on our phones can, in fact, be applied to the car.
So we started thinking about what this technology could do inside the car.
What is face recognition?
Face recognition, like fingerprint, voice, and iris recognition, is a kind of biometric identification technology.
It mainly consists of four steps: face detection, face image preprocessing, feature extraction, and matching/recognition. Each of these steps can be implemented in many different ways.
The data collected, the functions achievable, and the accuracy vary greatly between methods.
What we are discussing here is face recognition built on cameras for hardware, with software based on computer vision, deep learning (convolutional neural networks), and cloud services.
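The four-step pipeline can be sketched in code. This is a deliberately simplified illustration: the detection crop, histogram “features,” and similarity threshold below are all stand-ins I have invented for clarity, whereas a production system would use trained detectors and CNN embeddings.

```python
import math

def detect_face(image):
    # Step 1: face detection. Real systems use CNN or Haar-cascade
    # detectors; this stand-in just crops the central region of a
    # 2-D list of pixel intensities.
    h, w = len(image), len(image[0])
    return [row[w // 4: 3 * w // 4] for row in image[h // 4: 3 * h // 4]]

def preprocess(face):
    # Step 2: image processing - normalize brightness/contrast so
    # that matching is not dominated by lighting conditions.
    pixels = [p for row in face for p in row]
    mean = sum(pixels) / len(pixels)
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels)) or 1.0
    return [[(p - mean) / std for p in row] for row in face]

def extract_features(face):
    # Step 3: feature extraction. A production system would use a
    # trained CNN embedding; this toy version uses an intensity
    # histogram, L2-normalized so cosine similarity is a dot product.
    hist = [0] * 16
    for row in face:
        for p in row:
            bucket = min(15, max(0, int((p + 3) / 6 * 16)))
            hist[bucket] += 1
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

def match(features, gallery, threshold=0.9):
    # Step 4: matching - compare against enrolled templates and
    # return the best identity above the threshold, else None.
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = sum(a * b for a, b in zip(features, template))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

Swapping the toy histogram for a learned embedding is exactly where the accuracy differences between products discussed above come from.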
In addition, different products weight hardware and software differently: some focus on the perception side, using the camera to capture high-quality source material, while others focus on software-level analysis and computation. Different hardware-software combinations yield different levels of capability. Some products only reach attendance-recording or access-control grade; others reach payment grade, or even national-security and military grade.
Some face recognition systems can be cracked with just a few three-dimensional virtual models synthesized from social-media photos, while others cannot be fooled even by 3D-printed models.
Simply put, face recognition uses a set of sensors to collect data, then analyzes it to support a wide range of applications: who you are, how you feel, what your vital signs look like, what you do in the car, when, how often, and why.
Now that we roughly know what it can do, let’s talk about face recognition inside the car.
Verifying identity & ensuring security
Most obviously, face recognition is a process of proving that “you are you.” With this technology, people will in the future be able to enter the car via face recognition instead of a key.
For example, Byton has installed a sensor on the B-pillar so that the car can be unlocked by scanning the driver’s face.
Further out, once in-car payment is realized, face recognition could serve as a payment method, enabling stop-free highway tolls, contactless refueling, and so on. This is not to say that face recognition beats current unlocking methods, or that face payment is safer than today’s scan-to-pay; the key is still user preference.
Different security methods are mutually redundant options the user can choose between. From the standpoint of identity verification and safety, face recognition has an even bigger role to play in shared mobility.
At present, Didi drivers must pass a face recognition check every day to prove that the person driving matches the registered driver, protecting passengers. So in the future, whether for a ride-hailing service like Didi or a self-drive shared car, face recognition can serve as a redundant layer of security.
Beyond acting as a key, face recognition can also serve as a powerful ID inside the car. OEMs now have plenty of in-car systems to play with.
Each OEM has its own in-car system, such as BYD’s DiLink, Geely’s GKUI, and GM’s eConnect, and some OEMs have chosen to work with new connected-car software providers, as with Ford’s Chinese models and SAIC Roewe, which carry the Banma (Zebra) system. As more and more functions extend beyond the car itself, the car badly needs a powerful ID to string all these applications together.
Mobile phone numbers, WeChat accounts, Alipay, and other accounts are all options, but repeatedly entering usernames and passwords is a hassle. By contrast, if the face can act as the “central port” connecting all functional applications, some of these problems disappear.
A face is harder to replace than other credentials, and face recognition is more convenient to use.
At present, Chery’s EXEED (Xingtu) is equipped with the relevant technology and can perform “user identification” and “face payment”; the model will debut at this year’s Shanghai Auto Show.
In addition, face recognition in the car can monitor the driver’s vital signs: if the vehicle detects that the driver’s mood or health is abnormal, it can respond autonomously. An infrared camera, for example, can sense the user’s body temperature and thus his physical condition. Of course, body temperature is just one data point; the other kinds of perception data are described in more detail below.
There is a regulatory backdrop: from 2020, vehicles seeking a five-star Euro NCAP safety rating need driver monitoring capability. Moreover, passenger cars on sale today already achieve L2 autonomous driving, and as they evolve toward L3, a driver monitoring function becomes mandatory.
Because the biggest difference between L2 and L3 is the need to “identify who bears responsibility,” in-car driver monitoring must continuously keep the driver ready to take over the vehicle.
But driver monitoring is not a novelty: years ago many German cars (Mercedes-Benz, BMW, Audi, Volkswagen, among others) carried a similar function called “fatigue monitoring.”
The difference between “fatigue monitoring” and “driver monitoring” is easiest to grasp through the implementation path: one is indirect monitoring, the other direct. Fatigue monitoring is usually built from an infrared sensor plus an integrated ECU.
The infrared sensor monitors the driver’s condition, such as whether his eyes are focused on the road and whether he shows signs of nodding off, while the ECU monitors the vehicle itself, such as the steering wheel’s angular speed, accelerator-pedal use, how long the driver has been driving continuously, and whether the air conditioning and multimedia are in use.
By digitizing the monitored data and comparing it against calibration values for a standard driving state, the system judges the driver’s condition. This counts only as an auxiliary, preventive function, since it has no way to control the vehicle. And because the algorithm and its thresholds are fixed while users’ driving habits vary widely, the experience differs from user to user: the system cannot learn a user’s preferences on its own.
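The indirect, fixed-calibration approach amounts to comparing a handful of signals against preset limits. A minimal sketch, in which every threshold and signal name is a made-up placeholder (real calibration values are proprietary to each OEM):

```python
# Illustrative fatigue check in the indirect (ECU-based) style:
# compare monitored signals against fixed calibration values.
# All signal names and limits are invented placeholders.
CALIBRATION = {
    "steering_rate_deg_s": 120.0,  # jerky steering corrections
    "driving_hours": 4.0,          # continuous time at the wheel
    "eyes_off_road_s": 2.0,        # longest recent glance away
}

def fatigue_alerts(sample):
    """Return the list of signals exceeding their calibration value."""
    return [key for key, limit in CALIBRATION.items()
            if sample.get(key, 0.0) > limit]

def is_fatigued(sample, min_alerts=2):
    # The fixed rule: warn when enough signals deviate at once.
    # Because the thresholds never adapt, drivers with unusual
    # habits trigger it differently - the limitation noted above.
    return len(fatigue_alerts(sample)) >= min_alerts
```

The rigidity is visible in the code: `CALIBRATION` never changes, which is precisely why every user experiences the feature differently.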
Driver monitoring, by contrast, senses the driver directly, collecting signals such as brain waves, heartbeat, skin conductance, body temperature, and breathing.
Clearly, “driver monitoring” collects more multidimensional data than fatigue monitoring and enables more functionality, but it is also correspondingly harder to implement: the sensors collect more data, more technologies are involved, and the algorithms processing that data are more complex. Most importantly, it is not just a “preventive function”; it must control the vehicle autonomously, slowing down and waking the driver when he cannot take over in time.
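That escalation from warning to intervention can be sketched as a small policy function. The stages and time limits below are illustrative assumptions of mine, not drawn from any real DMS standard:

```python
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    VISUAL_WARNING = auto()  # preventive: dashboard prompt
    AUDIO_ALARM = auto()     # try to wake the driver
    REDUCE_SPEED = auto()    # system intervenes in the vehicle itself

def escalate(driver_attentive: bool, seconds_unresponsive: float) -> Action:
    """Hypothetical DMS escalation policy: warn first, then try to
    wake the driver, then slow the vehicle if the takeover request
    keeps being ignored. The thresholds are invented for illustration."""
    if driver_attentive:
        return Action.NONE
    if seconds_unresponsive < 3:
        return Action.VISUAL_WARNING
    if seconds_unresponsive < 8:
        return Action.AUDIO_ALARM
    return Action.REDUCE_SPEED
```

The last branch is what separates driver monitoring from the older fatigue monitoring: the system acts on the vehicle rather than merely warning.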
The system should also actively adapt to the driver over time. From a product standpoint, carrying this function means weighing its cost, whether users will accept a feature that may disturb them, and whether users mind disclosing such private data.
Of course, we also mentioned “driver behavior analysis” in the subtitle.
This is not hard to understand: once the final step of driver monitoring is in place, over time the car builds up a profile of the user’s driving habits and style.
At that point, fusing the driver’s habit data with the vehicle’s own ADAS functions and DMS data helps user and vehicle adapt to each other (which matters for vehicles with L3-class ADAS). Extending a little further, this data could also be fused with maps: if a user enjoys road trips or mountain driving, the map could proactively mark recommended ranges for autonomous driving.
Perfecting the user portrait & enhancing multi-modal interaction
In fact, the driver behavior analysis discussed in the last part is already perfecting the user portrait.
And when that data is fused with more sensors and with the vehicle itself, we can depict a more three-dimensional user. For example, entering the car through face recognition establishes an account that belongs to you, and through repeated adaptation between you and the vehicle, the car slowly learns how you like it configured under different conditions and vital signs.
This can include seat position, air-conditioning temperature and mode, ambient lighting, the fragrance system, and multimedia recommendations. Such personalized data can be stored in an account and, where possible, carried across different vehicles, although vehicle compatibility remains a problem.
That is a long way off, but it could plausibly be realized first in shared cars.
In addition, face recognition and gesture recognition share similar principles; supplemented by voice, richer multi-modal interaction could take in-car human-machine interaction to a new level.
At that point, when your driving habits, settings preferences, the environment outside the car, the car’s own driving capability, your physical state, in-car hardware controls, and multimedia recommendations are all fused together, user and car will grow ever more in tune. The car becomes not only a third living space but a personal assistant connected to your mind.
Impact on OEMs and insurers
While the function brings users convenience, its other edge is data privacy, which weighs even more heavily in Western markets, so DMS is a double-edged sword for OEMs.
For example: how many people are willing to trust this new function? How many are willing to trade privacy for convenience? How much are they willing to pay for that convenience? Who does the resulting data really belong to, and who has the right to access it...
Can this data give OEMs new inspiration? Will the OEM’s identity change because of it? Can OEMs make good use of it, and how, and for what...
In insurance, DMS is a very important application because it helps insurers customize UBI (usage-based insurance, priced on driving behavior). The principle: by analyzing a user’s long-term driving habits, the insurer can predict that user’s driving risk, price each user’s policy accurately, and reduce the cost of commercial auto-insurance claims.
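The pricing idea can be sketched as behavior statistics mapped to a risk score that scales a base premium. The features, weights, and base premium below are purely illustrative; real insurers fit such models from claims data:

```python
BASE_PREMIUM = 1000.0  # illustrative yearly premium, arbitrary units

def risk_score(hard_brakes_per_100km, night_driving_ratio, avg_overspeed_kmh):
    """Toy UBI risk score from long-term driving-behavior statistics.
    The feature choice and weights are invented for illustration."""
    score = (0.05 * hard_brakes_per_100km
             + 0.50 * night_driving_ratio
             + 0.02 * avg_overspeed_kmh)
    return min(score, 1.0)  # cap the score so the surcharge is bounded

def ubi_premium(hard_brakes_per_100km, night_driving_ratio, avg_overspeed_kmh):
    # Price each driver individually around the base premium:
    # cheaper for low-risk drivers, dearer for high-risk ones.
    score = risk_score(hard_brakes_per_100km, night_driving_ratio,
                       avg_overspeed_kmh)
    return BASE_PREMIUM * (0.5 + score)
```

A driver with no hard braking, no night driving, and no speeding pays half the base; the riskiest capped profile pays 1.5x, which is the individualized pricing the paragraph describes.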
Reflection & summary
First, let’s return to “face recognition” itself and think about the technology’s potential impact.
Once face recognition is popularized, your identity will constantly be “proven without your awareness,” and your life will always be watched from a “God’s-eye view.”
In an interview with USA Today, Alvaro Bedoya, executive director of the Center on Privacy & Technology at Georgetown Law and a face recognition expert, said: “You can delete cookies, you can change browser settings, you can leave your phone at home, but you can’t delete your face, and you can’t forget it at home.”
Face recognition therefore differs from other identity tools: it is a biometric authentication method in which, as long as the face is present, the identity is present.
Then there is privacy. The private data that face recognition generates in the car is different from the addresses and phone numbers we expose on shopping sites like Taobao: it is continuous, voluminous, trend-revealing data, with a huge pool of comparable samples behind it waiting to be analyzed.
We can’t underestimate the power behind big data.
A recent Stanford study showed that, on a dataset extracted from Tinder, a face-analysis method could predict “a person’s sexual orientation” with up to 81% accuracy. Still think it’s just looking at your face? It’s looking into your heart.
That is just personal privacy. What if it rises to the national level? Or to military security?
What would our world look like if sufficiently precise face recognition were combined with weapons?
...... Take a breath; no need to panic just yet.
After all, it is not easy to build face recognition with a high recognition rate, a low miss rate, and a low false-detection rate, especially at automotive grade. For example: can the camera capture compliant material in real time under extreme conditions (backlight, overexposure, dim light)? Are the software algorithms fast and accurate enough for a comfortable user experience? Is the cloud’s comparison sample base large enough? How do you balance functionality against cost? Will users fundamentally accept such a feature?
These problems require suppliers and OEMs to think through together. To be fair, technology itself is neither right nor wrong; it simply shows different robustness in different scenarios. A technology’s security can never be fully quantified, and solving security problems is an endless game against attackers.