AUTONOMOUS VEHICLE

Atharva Gosavi
11 min read · Jan 20, 2022

--

An autonomous car is a vehicle capable of fulfilling the main transportation functions of a traditional car without human input. Forbes magazine suggested it as one of the five most disruptive innovations of 2016. Autonomous vehicles have enormous potential to free up travel time for more productive use.

LEVELS OF DRIVING AUTOMATION

How do autonomous cars work?

Autonomous cars rely on sensors, actuators, complex algorithms, machine learning systems, and powerful processors to execute software. Autonomous cars create and maintain a map of their surroundings based on a variety of sensors situated in different parts of the vehicle. Sophisticated software then processes all this sensory input, plots a path, and sends instructions to the car’s actuators, which control acceleration, braking, and steering. Hard-coded rules, obstacle avoidance algorithms, predictive modeling, and object recognition help the software follow traffic rules and navigate obstacles.
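The sense-plan-act loop described above can be sketched in a few lines. This is a deliberately simplified illustration, assuming a toy obstacle format and made-up thresholds, not how a production driving stack is structured.

```python
# Hedged sketch of a single sense -> plan -> act control step. The sensor
# map format, bearing window, and braking distance are all illustrative
# assumptions, not values from any real vehicle.

def control_step(sensor_map, target_speed, current_speed):
    """sensor_map: list of (distance_m, bearing_deg) obstacle detections."""
    # Perception/planning: find the nearest obstacle roughly ahead of us.
    ahead = [d for d, bearing in sensor_map if abs(bearing) < 30]
    nearest = min(ahead) if ahead else float("inf")
    # Actuation decisions: brake hard for close obstacles, otherwise
    # apply throttle proportional to the speed deficit.
    if nearest < 10:
        return {"throttle": 0.0, "brake": 1.0, "steer": 0.0}
    throttle = max(0.0, min(1.0, (target_speed - current_speed) / target_speed))
    return {"throttle": throttle, "brake": 0.0, "steer": 0.0}
```

A real stack separates these stages into perception, prediction, planning, and control modules, each far more sophisticated than this loop.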

SOFTWARE AND ALGORITHMS

Various software techniques and algorithms are used in autonomous cars. One such technique is SLAM, which stands for Simultaneous Localization and Mapping. SLAM addresses the question of whether an autonomous vehicle can start in an unknown location in an unknown environment, incrementally build a map of that environment, and simultaneously use this map to compute its absolute location.
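To make the SLAM idea concrete, here is a toy one-dimensional sketch: a robot with odometry repeatedly ranges a single landmark, and each measurement corrects both the robot's pose estimate and the landmark's mapped position. The blend gain and problem setup are illustrative assumptions, not a real SLAM implementation.

```python
# Toy 1D SLAM sketch (illustrative only). The robot starts at an unknown
# origin, moves by odometry steps, and ranges one landmark; the same
# measurement updates pose and map together, which is the core SLAM idea.

def slam_1d(odometry, ranges, gain=0.5):
    """odometry: commanded step lengths; ranges: measured distances to the
    landmark, one before moving and one after each step; gain: assumed
    blend factor for the correction."""
    x = 0.0                       # robot position (the start defines zero)
    landmark = x + ranges[0]      # initialise the map from the first range
    estimates = []
    for step, r in zip(odometry, ranges[1:]):
        x += step                             # predict pose from odometry
        innovation = r - (landmark - x)       # measured vs expected range
        # Split the correction between pose and map: one measurement
        # simultaneously refines where we are and where the landmark is.
        x -= gain * innovation / 2
        landmark += gain * innovation / 2
        estimates.append((x, landmark))
    return estimates
```

Real SLAM systems (EKF-SLAM, graph SLAM, and the like) do the same thing in higher dimensions with principled uncertainty handling rather than a fixed gain.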

How Machine Learning Can Be Used in Autonomous Vehicles

Although autonomous vehicles are principally only in the prototyping and testing stages, ML is already being applied to several aspects of the technology used in advanced driver-assistance systems (ADAS). And it looks set to play a part in future developments, too.

Detection and Classification of Objects

Machine learning is being deployed for the higher levels of driver assistance, such as the perception and understanding of the world around the vehicle. This chiefly involves the use of camera-based systems to detect and classify objects, but there are also developments in LiDAR and radar as well.

One of the biggest issues for autonomous driving is that objects are wrongly classified. The data gathered by the vehicle’s different sensors is collected and then interpreted by the vehicle’s system. But with just a few pixels of difference in an image produced by a camera system, a vehicle might incorrectly perceive a stop sign as something more innocuous, like a speed limit sign. If the system similarly mistook a pedestrian for a lamp post, then it would not anticipate that it might move.

Through improved and more generalized training of the ML models, the systems can improve perception and identify objects with greater accuracy. Training the system — by giving it more varied inputs on the key parameters on which it makes its decisions — helps to better validate the data and ensure that what it’s being trained on is representative of true distribution in real life. In this way, there isn’t a heavy dependence on a single parameter, or a key set of particulars, which might otherwise make a system draw a certain conclusion.

If a system is given data that’s 90% about red cars, then there’s a risk that it will come to identify all red objects as being red cars. This “overfitting” in one area can skew the data and therefore skew the output; thus, varied training is vital.
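One common countermeasure to this kind of skew is to weight the loss by inverse class frequency, so rare classes count for more during training. A minimal sketch, assuming plain string labels and the standard "balanced" weighting heuristic:

```python
# Hedged sketch: inverse-frequency class weights so a training set that is
# 90% "red car" does not dominate the loss. The formula mirrors the common
# "balanced" heuristic: total / (num_classes * class_count).

from collections import Counter

def balanced_class_weights(labels):
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    # Rare classes get proportionally larger weights, so the model is not
    # rewarded for collapsing onto the majority class.
    return {cls: total / (n_classes * c) for cls, c in counts.items()}
```

With 90 "red_car" labels and 10 "stop_sign" labels, the stop sign class gets nine times the weight of the red car class, counteracting the imbalance.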

Driver Monitoring

Neural networks can recognize patterns, so they can be used within vehicles to monitor the driver. For example, facial recognition can be employed to identify the driver and verify if he or she has certain rights, e.g., permission to start the car, which could help prevent unauthorized use and theft.

Taking this further, the system could utilize occupancy detection to help optimize the experience for others in the car. This might mean automatically adjusting the air conditioning to correspond to the number and location of the passengers.

In the short term, vehicles will need a degree of supervision and attention from someone designated as the “driver.” It’s here that recognition of facial expressions will be key to enhancing safety. Systems can be used to learn and detect signs of fatigue or insufficient attention, and warn the occupants, perhaps even going so far as to slow or stop the vehicle.
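Such a fatigue detector can be sketched as a simple rule over an eye-openness signal: if the eye aspect ratio (a standard landmark-based openness measure) stays below a threshold for several consecutive frames, raise a warning. The threshold and frame count below are illustrative assumptions, not calibrated values.

```python
# Illustrative drowsiness-alarm sketch. ear_series is a per-frame eye
# aspect ratio computed upstream from facial landmarks; the threshold and
# consecutive-frame count are assumed example values, not calibrated ones.

def detect_fatigue(ear_series, threshold=0.2, consecutive=3):
    closed = 0
    for ear in ear_series:
        if ear < threshold:
            closed += 1
            if closed >= consecutive:
                return True   # sustained eye closure -> likely fatigue
        else:
            closed = 0        # eyes reopened; reset the counter
    return False
```

Normal blinks dip below the threshold for only a frame or two, so requiring sustained closure is what separates blinking from dozing off.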

Driver Replacement

If we take full autonomy as the ultimate aim of autonomous vehicles, then automatic systems will need to replace drivers — supplanting all human input entirely.

Here, machine learning’s role would be to take data input from a raft of sensors, so that the ADAS could accurately and safely make sense of the world around the vehicle. The system could then fully control the vehicle’s speed and direction, while handling object detection, perception, tracking, and prediction.

However, security is key here. Running on autopilot will require extremely effective — and guaranteed — ways of monitoring if the driver is paying attention or can intervene if there’s a problem.

Vision

Deep-learning frameworks such as Caffe and Google’s TensorFlow are used to train and run neural networks. They can be combined with image processing to learn about objects and classify them, so that the vehicle can readily react to the environment around it. This may be for lane detection, where the system determines the steering angles required to avoid objects or stay within a highway lane, and therefore accurately predicts the path ahead.
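As a concrete (and heavily simplified) illustration of the lane-keeping step, once lane detection gives the lateral offset of a point ahead on the centreline, a pure-pursuit-style rule converts that offset into a steering angle. The wheelbase and lookahead values are assumed examples:

```python
# Hedged geometric sketch: pure-pursuit steering toward a point on the
# detected lane centreline. Wheelbase and lookahead distance are assumed
# example values, not parameters of any particular vehicle.

import math

def steering_angle(lateral_offset, lookahead=10.0, wheelbase=2.7):
    """Aim at a point `lookahead` metres ahead on the lane centreline,
    offset laterally by `lateral_offset` metres; return a steering angle
    in radians (positive = steer toward positive offset)."""
    alpha = math.atan2(lateral_offset, lookahead)   # heading error to goal
    # Standard pure-pursuit relation between heading error and wheel angle.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)
```

A zero offset yields zero steering, and the sign of the angle follows the sign of the offset, which is the behaviour a lane-keeping controller needs.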

Neural networks can also be used to classify objects. With ML, they can be taught the particular shapes of different objects. For example, they’re able to distinguish between cars, pedestrians, cyclists, lamp posts, and animals.

Imaging can also be used to estimate the proximity of an object, along with its speed and direction of travel. For maneuvering around obstacles, the autonomous vehicle could use ML to calculate the free space around a vehicle, for instance, and then safely navigate around it or change lanes to overtake it.

Sensor Fusion

Each sensor modality has its own strengths and weaknesses. For example, with the visual input from cameras, you get good texture and color recognition. But cameras are susceptible to conditions that might weaken the line of sight and visual acuity, much like the human eye. So, fog, rain, snow, and the lighting conditions or the variation of lighting can all diminish perception and, therefore, detection, segmentation, and prediction by the vehicle’s system.

Whereas cameras are passive, radar and LiDAR are both active sensors and are more accurate than cameras at measuring distance.

Machine learning can be applied individually to the output from each of the sensor modalities to better classify objects, detect distance and movement, and predict the actions of other road users. Thus, it’s able to take camera output and draw conclusions about what the camera is seeing. With radar, signals and point clouds are used to create better clustering, giving a more accurate 3D picture of objects. Similarly, with high-resolution LiDAR, ML can be applied to the LiDAR data to classify objects.

But fusing the sensor outputs is an even stronger option. Camera, radar, and LiDAR can combine to provide 360-degree sensing around a vehicle. By combining all of the outputs from the different sensors, we get a more complete picture of what’s going on outside the vehicle. And ML can be used here as an additional processing step on that fused output from all of these sensors.

For example, an initial classification might be made with camera images. Then, it could be fused with LiDAR output to ascertain distance and augment what the vehicle sees or validate what the camera is classifying. After fusing these two data outputs, varied ML algorithms can be run on the fused data. From this, the system can make additional conclusions or take further inferences that assist with detection, segmentation, tracking, and prediction.
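At its simplest, combining two range estimates of the same object can be done by inverse-variance weighting, where the less noisy sensor dominates. The variances below are illustrative; a real stack would use calibrated, condition-dependent noise models:

```python
# Minimal sensor-fusion sketch: fuse a camera range estimate with a LiDAR
# range estimate by inverse-variance weighting, the standard way to combine
# two noisy measurements of one quantity. Variances here are assumptions.

def fuse_ranges(camera_range, camera_var, lidar_range, lidar_var):
    # Each sensor's weight is the reciprocal of its noise variance, so the
    # more trustworthy sensor (LiDAR, for distance) dominates the estimate.
    w_cam = 1.0 / camera_var
    w_lidar = 1.0 / lidar_var
    fused = (w_cam * camera_range + w_lidar * lidar_range) / (w_cam + w_lidar)
    fused_var = 1.0 / (w_cam + w_lidar)    # fused estimate is less noisy
    return fused, fused_var
```

Note that the fused variance is always smaller than either input variance, which is the formal sense in which fusion gives a "more complete picture" than any single sensor.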

Vehicle Powertrains

Vehicle powertrains typically generate a time series of data points. Machine learning can be applied to this data to improve motor control and battery management.

With ML, a vehicle isn’t limited to boundary conditions that are factory-set and permanently fixed. Instead, the system can adapt over time to the aging of the vehicle and respond to changes as they happen. ML allows for boundary conditions to be adjusted as the vehicle system ages, as the powertrain changes, and as the vehicle is gradually broken in. With flexibility of boundary conditions, the vehicle is able to achieve more optimal operation.

The system can adjust over time, changing its operating parameters. Or, if the system has sufficient computing capacity, it could adapt in real time to the changing environment. The system can learn to detect anomalies and provide timely notification that maintenance is required, or give warnings of imminent motor-control failure.

Safety and Security in Autonomous Vehicles

Undoubtedly, the most important consideration with autonomous vehicles is that they’re propelled safely and don’t cause road traffic accidents. This involves the functional safety of the vehicle’s system and its devices, as well as ensuring the inherent security of the network and systems that power it.

Functional Safety and Device Reliability

Machine learning has a part to play in ensuring that a vehicle remains in good operating order by avoiding system failures that might cause accidents.

ML can be applied to the data captured by on-board devices. Data on variables such as motor temperature, battery charge, oil pressure, and coolant levels is delivered to the system, where it’s analyzed and produces a picture of the motor’s performance and overall health of the vehicle. Indicators showing a potential fault can then alert the system — and its owner — that the vehicle should be repaired or proactively maintained.

Similarly, ML can be applied to data derived from the devices in a vehicle, ensuring that their failure does not cause an accident. Devices such as the sensor systems — cameras, LiDAR, and radar — need to be optimally maintained; otherwise, a safe journey couldn’t be assured.

Security

Adding computer systems and networking capabilities to vehicles brings automotive cybersecurity into sharper focus. ML can be used here, though, to enhance security. In particular, it can be employed to detect attacks and anomalies, and then overcome them.

One threat to an individual car is that a malicious attacker might access its system or use its data. ML models need to detect these sorts of attacks and anomalies so that the vehicle, its passengers, and the roads are kept safe.

Detecting Attacks and Anomalies

It’s possible that the autonomous classification system within a vehicle could be maliciously attacked. Such an offensive attack may deliberately make the vehicle misinterpret an object and classify it incorrectly. This sort of attack would need to be detected and overcome.

An offensive attack could impose the wrong classification on a vehicle, as in the case of a stop sign being perceived as a speed-limit sign. ML can be used to detect these kinds of adversarial attacks and manufacturers are beginning to develop defensive approaches to circumvent them.

It’s by delivering robust systems around the ML model that such attacks can be defended. Once again, training is important here. The aim is to create a more generalized way for the ADAS to make its decision. Employing training to avoid overfitting avoids a heavy dependence on one key particular — or a set of them. So, because the system has a greater breadth of knowledge, the input that’s been maliciously manipulated will not cause it to wrongly change the outcome or the perception.

Hacking, Data, and Privacy Concerns

Averting hacks on the connected networks that vehicles run on is paramount. In a best-case scenario, multiple hacked vehicles could come to a halt and cause gridlock. But at worst, an attack may result in serious collisions, injuries, and deaths.

More than 25 hacks have been published since 2015. In the largest incident to date, a hackable software vulnerability caused Chrysler to recall 1.4 million vehicles in 2015. The vulnerability meant a hacker could assume control of the car, including the transmission, steering, and the brakes.

There’s also a potential market for car-generated data. Data can be obtained on the occupants of a vehicle, their location, and movements. It’s estimated that car-generated data could become a $750 billion market by 2030.4 While this data is, of course, of interest to genuine parties, like the vendors and auto-parts manufacturers, such valuable data also attracts hackers.

Developing systems that better maintain the cybersecurity of cars is therefore vital. A modern car can contain as many as 150 electronic control units (ECUs), which require around 200 million lines of software code to run them. With such a complex system comes greater susceptibility and vulnerability to hacking.

With an estimated 470 million connected vehicles on the road by 2025 in Europe, the U.S., and China alone, the wireless interfaces they employ need to be secure to prevent scalable hacking attacks. Those supplying the computer systems that power autonomous vehicles must ensure that their systems are secure and uncompromisable.

The Benefits of Using ML for Object Detection and Classification

While ML may not be inherently more accurate than conventional vision-based systems, over time ML algorithms can achieve greater degrees of accuracy. Other systems eventually plateau at a certain level, as they can’t achieve any greater accuracy. But with ML, as more and more rigorous training is applied, and as the model is gradually augmented and improved, it’s possible to achieve ever greater accuracy.

Machine learning is also both more adaptable and scalable than vision systems. Because the ML system creates its own rules and evolves based on training, rather than engineer input, it can be scaled up and applied to other scenarios. Effectively, the system adapts to new locations or landscapes by applying its already-learned knowledge.

The ease with which ML platforms can identify trends is also a plus. They can quickly process large volumes of data and readily spot trends and patterns that might not be so apparent to a human looking over the same information. Algorithms used in autonomous vehicles need to apply this same sort of data review over and over. Thus, it’s an advantage to have a system that can do it quickly and with a high degree of effectiveness.

ML algorithms can adapt and evolve without human input. The system is able to identify and classify new objects and adapt the vehicle’s response to them, even dynamically, without any human intervention or correction. Again, broad and deep training is required so that the system directs the vehicle to respond appropriately, but this is a relatively simple process.

Using an ML approach avoids reliance on deterministic behavior. That is to say, it’s impossible to always supply the same values in the same way: not all cars are identical, yet they’re still cars, and any autonomous system needs to identify cars as cars despite their differences. It needs to produce entirely predictable results despite the inconsistency in its inputs. An autonomous vehicle needs to be able to work in the real world, where there are variances, uncertainty, and novelties.
