Article

See, Think, and Act: Core Technologies for Autonomous Driving System

2019-09-06

Researchers who created a self-driving car called 'M.Billy' talked about the present and the future of self-driving.

We met with Hyundai Mobis researchers and M.Billy, the test vehicle built for developing self-driving cars.

We met with the Hyundai Mobis M.Billy development team, responsible for developing some of the core components of the Autonomous Driving system, to talk about the technologies necessary to enable a car to drive itself.

‘Hyundai Mobis M.Billy’, the core technology behind Autonomous Driving

M.Billy is a platform for testing the sensors and hardware for Level 3 and Level 4 Autonomous Driving systems. It is designed for flexibility and ease of assembly. Sensors on the platform record details about every situation the vehicle encounters and are able to share this information. It is a vehicle specifically designed for the development of a safe, high-level Autonomous Driving system.

(clockwise from the left) Kim Hak-koo, chief researcher of Hyundai Mobis’ ADAS control design team, Park Jin-young, chief researcher of ADAS control design team, Lee Seung-beom, a researcher of ADAS sensor design team, and Kim Je-seok, a senior researcher of ADAS control design team.

What kind of technologies are necessary to enable cars to drive themselves?
Humans use their eyes to check their surroundings as they walk around, and an autonomous system works in much the same way: it checks the surroundings, computes what to do, and then moves. An autonomous system therefore requires ‘Positioning’, ‘Recognition’, ‘Judgment’ and ‘Control’ skills. ‘Positioning’ technology tells the system the location of the car on the map. Our systems use GPS, an already familiar technology. The system also relies on high-precision sensors such as cameras, radar, and LiDAR for ‘Recognition’. ‘Judgment’ technology is what analyzes all of the input from the sensors and decides where to move, whether to stop, what to do with traffic light input, and so on. When a judgment is made using positioning and recognition, the system activates the drivetrain and steering, which is the ‘Control’ part of the system.
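To make the flow concrete, here is a minimal Python sketch of one pass through such a see-think-act loop. The function names (localize, perceive, decide, actuate), the data they exchange, and the 15-meter braking threshold are hypothetical illustrations, not Hyundai Mobis' actual software.

```python
# A minimal, hypothetical sketch of one cycle of an autonomous driving loop.
# Function and class names are illustrative only, not Hyundai Mobis' real API.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # position on the map (m)
    y: float
    heading: float  # radians

def localize(gps_fix, hd_map):
    """Positioning: estimate where the car is on the high-definition map."""
    return Pose(x=gps_fix[0], y=gps_fix[1], heading=0.0)

def perceive(camera_frame, radar_targets, lidar_points):
    """Recognition: turn raw sensor data into a list of detected objects."""
    return [{"type": "pedestrian", "distance_m": 12.0, "closing_speed_mps": 1.5}]

def decide(pose, objects):
    """Judgment: pick an action based on the current pose and detected objects."""
    if any(o["distance_m"] < 15.0 for o in objects):
        return {"throttle": 0.0, "brake": 0.6, "steer": 0.0}
    return {"throttle": 0.3, "brake": 0.0, "steer": 0.0}

def actuate(command):
    """Control: send the chosen command to the drivetrain, brakes, and steering."""
    print(f"apply command: {command}")

# One iteration of the see-think-act cycle.
pose = localize(gps_fix=(127.1, 37.5), hd_map=None)
objects = perceive(camera_frame=None, radar_targets=None, lidar_points=None)
actuate(decide(pose, objects))
```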


Both positioning and recognition technologies collect information about the surroundings, so how are they different from one another?
Both technologies collect some of the information necessary for an autonomous system, but there are fundamental differences between the two. Positioning uses a GPS signal and a high-definition map to accurately assess the exact location of the vehicle on the map. Unfortunately, GPS has many limitations. For example, the margin of error is especially large in urban areas populated with tall buildings. The sensors installed on the front of a vehicle compensate through ‘Recognition’. The sensors detect obstacles, road lanes, boundaries between the road and the sidewalk, and other road markings. Using such ‘Recognition’ to overcome the limitations of GPS technology is called ‘Map Matching’. The sensors also provide input such as ‘the red light is on’, ‘there is a speed bump ahead’ or ‘the vehicle ahead is slowing down’ to the system, just as human eyes send messages to the brain.
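As a loose illustration of the idea behind map matching, the sketch below blends a noisy GPS lateral position with the lane position the camera reports against the HD map. The fixed weights and the simple weighted average are assumptions for illustration; a real system would use a proper filter.

```python
# Hypothetical sketch of lateral map matching: a camera measurement of the
# offset to the lane center is used to correct a noisy GPS position estimate.
# Numbers and the simple weighted blend are illustrative, not a production filter.

def map_match(gps_lateral_m, camera_lane_offset_m, lane_center_lateral_m,
              gps_weight=0.2, camera_weight=0.8):
    """Blend the GPS estimate with the camera-derived lane position.

    gps_lateral_m:          lateral position from GPS (meters, map frame)
    camera_lane_offset_m:   measured offset of the car from the lane center
    lane_center_lateral_m:  lateral position of the lane center in the HD map
    """
    camera_lateral_m = lane_center_lateral_m + camera_lane_offset_m
    return gps_weight * gps_lateral_m + camera_weight * camera_lateral_m

# GPS says the car is 1.8 m from the map origin, but the camera sees it only
# 0.2 m left of a lane whose center the HD map places at 1.1 m.
print(map_match(gps_lateral_m=1.8, camera_lane_offset_m=-0.2,
                lane_center_lateral_m=1.1))  # -> 1.08, pulled toward the lane
```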


How does each sensor recognize its surroundings?
First, the camera converts the image acquired through its optical system and processes it into data. It recognizes the shape of objects in the direction of travel, such as lanes, vehicles, pedestrians, speed signs, traffic lights, and speed bumps, and estimates the distance to them. Unlike a camera, radar uses electromagnetic waves to detect objects, giving limited information about them. Yet it can still detect the distance, speed, and angle of objects on a dark night or in bad weather. Unlike radar, LiDAR uses a pulsed laser, a type of light, to scan the surrounding environment. Images are then created based on the information obtained from the light reflected back to the LiDAR. It is able to detect blind spots that cannot be recognized using a camera or radar. Together, camera, radar, and LiDAR compensate for each other, providing a comprehensive picture of the vehicle's surroundings, which makes Autonomous Driving possible.
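Radar and LiDAR both measure range in essentially the same way: from the round-trip time of a reflected signal travelling at the speed of light. A minimal sketch of that calculation follows, with an assumed 200-nanosecond example pulse.

```python
# Minimal sketch of how a time-of-flight sensor (radar or LiDAR) turns the
# round-trip travel time of a reflected pulse into a distance.

SPEED_OF_LIGHT_MPS = 299_792_458  # both radio waves and laser light travel at c

def range_from_round_trip(round_trip_time_s):
    """Distance to the reflecting object: the pulse travels out and back."""
    return SPEED_OF_LIGHT_MPS * round_trip_time_s / 2

# A pulse that returns after 200 nanoseconds came from an object about 30 m away.
print(f"{range_from_round_trip(200e-9):.1f} m")  # ~30.0 m
```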


Are there any technical limitations?
There are some limitations to this new technology. First of all, improving the performance of the sensors is time consuming and costly. The price of LiDAR, which is the core of recognition technology, is very high. While more sensors mean better performance, they also lead to increased costs, and there are limits on the space available on a car to attach them. Furthermore, there are things that radar and LiDAR simply cannot detect. This makes ‘sensor fusion’ technology, which combines the different sensors so that they compensate for each other, very important. Advancement in sensor fusion technology is expected to help overcome the technical weaknesses of the individual sensors, and it has been something we have focused our efforts on. We have already developed cameras and radar, and we know how to make them work together.
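As a rough sketch of what sensor fusion means in practice, the example below pairs camera detections (which carry an object class but weak range information) with radar detections (which carry accurate range and relative speed) by matching their bearings. The nearest-bearing association and all the numbers are illustrative assumptions, not a description of Hyundai Mobis' fusion algorithm.

```python
# Illustrative sensor-fusion sketch: camera detections carry an object class,
# radar detections carry accurate range and relative speed. Detections that
# point in roughly the same direction are merged into a single fused track.

camera_detections = [
    {"bearing_deg": -2.0, "label": "pedestrian"},
    {"bearing_deg": 10.0, "label": "vehicle"},
]
radar_detections = [
    {"bearing_deg": -1.5, "range_m": 18.0, "rel_speed_mps": -1.2},
    {"bearing_deg": 9.0,  "range_m": 45.0, "rel_speed_mps": -8.0},
]

def fuse(camera_dets, radar_dets, max_bearing_gap_deg=3.0):
    fused = []
    for cam in camera_dets:
        # Find the radar return whose bearing is closest to the camera detection.
        best = min(radar_dets, key=lambda r: abs(r["bearing_deg"] - cam["bearing_deg"]))
        if abs(best["bearing_deg"] - cam["bearing_deg"]) <= max_bearing_gap_deg:
            fused.append({"label": cam["label"],
                          "range_m": best["range_m"],
                          "rel_speed_mps": best["rel_speed_mps"]})
    return fused

print(fuse(camera_detections, radar_detections))
```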


How is the collected information utilized for Autonomous Driving?
All the information gathered from the sensors is sent to an Autonomous Driving ECU. The ECU consists of two processors: the first is capable of high-performance calculations, and the second calculates safety risks in real time. The ECU processes the collected information to predict what will happen next and generates the safest route forward. If the system detects pedestrians or obstacles in front of the vehicle, it makes a judgment, using pre-programmed logic, about how best to handle the situation while avoiding risk. The judgment logic is designed using probabilistic calculations to cope with all possible situations. The chief development aim is to find ways to minimize risk.
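A heavily simplified sketch of this kind of risk-minimizing judgment: each candidate maneuver is assigned an estimated collision probability, and the logic selects the lowest-risk one. The maneuver names and probabilities below are made-up inputs used only to illustrate the selection step.

```python
# Hypothetical sketch of risk-minimizing judgment: each candidate maneuver is
# assigned an estimated collision probability, and the ECU-style logic simply
# selects the lowest-risk option. The probabilities below are made-up inputs.

candidate_maneuvers = {
    "keep_lane_and_brake": 0.02,   # estimated collision probability
    "swerve_left":         0.15,
    "swerve_right":        0.40,
    "maintain_speed":      0.85,
}

def choose_safest(maneuvers):
    """Return the maneuver with the smallest estimated collision probability."""
    return min(maneuvers, key=maneuvers.get)

print(choose_safest(candidate_maneuvers))  # -> keep_lane_and_brake
```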

M.Billy, named after ‘Mobility’, is a test vehicle created to develop a self-driving system

How can the system handle situations that require ethical judgment during self-driving?
How should an autonomous vehicle react when a pedestrian suddenly jumps in front of a car carrying a passenger? This is one of the key challenges that Autonomous Driving technology must address to move beyond Level 4. At present, social consensus is needed to make such a judgment. However, further advancement in technology may eliminate such a dilemma. A person's field of view is narrow, which makes it difficult to quickly react to objects approaching from the side. However, radar can detect 360° of a vehicle's surroundings, including what a camera cannot see. An Autonomous Driving system can therefore warn the driver of a potential risk even before it is visually detected. What we have to do is increase the performance of the sensors and make the judgment logic more robust, so that Autonomous Driving cars do not even have to make difficult ethical decisions.
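As a small illustration of such an early warning, the sketch below computes a time-to-collision (range divided by closing speed) for each radar track around the vehicle and flags any object that would reach the car within an assumed 2-second threshold. The track data and the threshold are hypothetical.

```python
# Illustrative early-warning sketch: for every radar track around the vehicle,
# compute time-to-collision and warn if any object will reach the car sooner
# than a chosen threshold. Track data and the 2-second threshold are assumed.

def time_to_collision(range_m, closing_speed_mps):
    """Seconds until impact if both objects keep their current speed."""
    if closing_speed_mps <= 0:        # not approaching
        return float("inf")
    return range_m / closing_speed_mps

radar_tracks = [
    {"bearing_deg": 95.0, "range_m": 8.0,  "closing_speed_mps": 6.0},  # from the side
    {"bearing_deg": 0.0,  "range_m": 60.0, "closing_speed_mps": 3.0},  # far ahead
]

for track in radar_tracks:
    ttc = time_to_collision(track["range_m"], track["closing_speed_mps"])
    if ttc < 2.0:
        print(f"warning: object at {track['bearing_deg']} deg may be reached in {ttc:.1f} s")
```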


What are the strengths of HMG in Autonomous Driving technology development?
Autonomous Driving has been a buzzword around the world for many years, not only among car makers, but also among IT companies such as Google and Intel's Mobileye, as well as auto parts makers such as Aptiv. While competitors are developing some of the positioning, recognition, judgment and control technologies, HMG is capable of developing all of the technologies necessary for Autonomous Driving. HMG also has two car manufacturing companies in the group, and develops steering, suspension, and brake systems for the chassis, as well as components, IT, and maps. This makes the fast development of Autonomous Driving technology possible.