Automotive Technology Explained Part 2 – Self-driving Cars

There is little doubt that driverless cars will feature strongly in the automotive technology universe in the near future. We have all seen driverless cars racing around a track, or completing a predefined route through a city without running into anything, or worse, killing innocent pedestrians. We have no doubt thought that this was the coolest thing ever, and we have certainly wondered just how these cars are controlled.

In this article we will explain what driverless cars are, and how they work. The issues facing the technology are many and varied, and while it is beyond the scope of this article to explain the finer points, we can explain the basic operating principles of driverless car control systems. So if you have ever wondered just how driverless cars do what they do, read on.

What are self-driving cars?

In simple terms, driverless cars are cars that can literally drive themselves without any human input or intervention. This ability should, however, not be confused with driver assistance systems such as autonomous braking or steering systems, which have only limited autonomy. While systems like these are great features to have on any vehicle, they only act or intervene at the last possible moment, say, when a driver does not respond to a warning or alert.

Moreover, most driver assist systems can be deactivated or cancelled by a control input from the driver. The control system of a true self-driving car, on the other hand, assumes full control of the vehicle right from the get-go, and unless control is interrupted or assumed by a human operator, or from a remote location, it will remain in full control of all functions until the trip is completed.

So how do self-driving cars work exactly?

One common misconception about driverless cars is that these vehicles are controlled by mammoth software programs that contain millions of specific instructions in the form of algorithms that are designed to cope with every possible driving situation. While algorithms are useful for the purposes of making a driverless car obey traffic rules such as having to stop at stop signs and red traffic lights, they are not suited for situations that arise unexpectedly.

The problem with algorithms is that they are limited in their functionality. Of course, some algorithms are incredibly complex, but at a fundamental level they are nothing more than IF-THEN rules. For instance, a control algorithm might go something like this: "IF an object rolls into the road, THEN slow down, and check for children or pedestrians that might be following the object into the road."
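
As a minimal sketch, such a rule could be written in a few lines of Python; the input flag and the action names here are hypothetical placeholders, not any real autonomous-driving API:

```python
# One hand-written IF-THEN rule, sketched in plain Python.
# The input flag and action names are invented for illustration.

def handle_rolling_object(object_in_road: bool) -> list[str]:
    """Return the actions this single IF-THEN rule would trigger."""
    actions = []
    if object_in_road:                           # IF an object rolls into the road...
        actions.append("slow down")              # THEN reduce speed,
        actions.append("scan for pedestrians")   # and check for anyone following it.
    return actions

print(handle_rolling_object(True))   # ['slow down', 'scan for pedestrians']
```

A rule-based controller would need one such rule for every conceivable event, which is exactly why this approach does not scale.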

Because there are so many different objects that might roll into the road, and even more things that can happen as a result, simple IF-THEN rules are clearly not sufficient to control a vehicle without a human driver who can instantly assess the situation and take appropriate evasive action. Given the large number of variables and permutations of variables, it is impossible to develop software that can recognize each threat, judge the level of danger it represents, and then initiate practical measures to avoid or eliminate it. So instead of converting the theory of driving into thousands of targeted algorithms, developers and programmers have found a way to make a car "learn" from its mistakes: by borrowing some principles from the field of artificial intelligence.

Artificial intelligence and driverless cars

At the heart of the problem is the fact that it is impossible to build a control system that can recognize every type of ball in existence, so programmers are instead working on different ways in which a driverless car can recognize and interpret the data from its sensors. For instance, the system does not have to be able to differentiate between a soccer ball and a baseball; in fact, it does not even have to recognize the object as a ball at all.

All it has to do is differentiate between classes of objects, say, the difference between a ball and another vehicle, or between a bicycle and a dog. To do this, programmers feed the system thousands of images of all manner of objects the vehicle can expect to encounter on the road. Each image also contains "notes" on the type of object it represents, and this forms the basis of the vehicle's learning curve.

So when the vehicle "sees" an object, it will guess at its exact nature. Since it has never seen that particular object before, it compares the object to the images in its database, and the initial guess at what the object is will almost certainly be wrong. In this manner the system keeps refining the basic algorithm to increase its accuracy. For instance, since both bicycles and motorcycles have two wheels, the system might see them both as the same thing, which could result in inappropriate reactions in efforts to avoid the object.
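
A toy sketch of this guess-compare-refine loop, assuming invented feature vectors (wheel count and rough size) standing in for the annotated image database:

```python
# Toy nearest-neighbour "classifier": guess the class of a new object from
# annotated examples, and store corrections to refine future guesses.
# The features (wheel count, rough size in metres) are invented.

training_data = [
    ((2, 1.0), "bicycle"),
    ((2, 1.2), "motorcycle"),
    ((4, 4.5), "car"),
    ((0, 0.2), "ball"),
]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(features):
    """Guess the label of the nearest known example."""
    return min(training_data, key=lambda pair: distance(pair[0], features))[1]

def learn(features, correct_label):
    """When a guess is wrong, store the corrected example."""
    training_data.append((features, correct_label))

print(classify((2, 1.1)))        # first guess: 'bicycle' (wrong)
learn((2, 1.1), "motorcycle")    # the correction refines the database
print(classify((2, 1.1)))        # now: 'motorcycle'
```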

How artificial intelligence “learns”…

To avoid this, programmers have made provision for the system to use information from other sensors to correctly identify objects. For example, some of the notes on an image of a bicycle might record the fact that bicycles generally do not travel at 80 miles per hour, and never on highways. In this way, the system will keep refining its observations until it can correctly identify a bicycle. It will also learn to correctly identify a motorcycle, since the vehicle's internal maps will tell it that it is on a highway, where motorcycles routinely travel at 80 miles per hour.
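
A sketch of that kind of cross-check, using the speed and map facts mentioned above (the function, its inputs, and the threshold are illustrative, not a real system):

```python
# Use context from other sensors to disambiguate two-wheelers: bicycles do
# not travel at 80 mph, and never on highways. The speed cutoff is invented.

def refine_two_wheeler_guess(speed_mph: float, on_highway: bool) -> str:
    if speed_mph > 40 or on_highway:
        return "motorcycle"
    return "bicycle"

print(refine_two_wheeler_guess(80, on_highway=True))    # motorcycle
print(refine_two_wheeler_guess(12, on_highway=False))   # bicycle
```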

Returning to the scenario with the ball: it makes no difference if the system cannot tell a baseball from a soccer ball. All it has to do is identify the ball as an unidentified object that represents a risk, and since the system is set to be 100% sensitive toward pedestrians and children by default, the ball may as well be a skateboard. The vehicle recognizes the risk rather than the specific object, and depending on its location, speed, and other operating conditions, the system will initiate appropriate action to avoid the object by braking or slowing down.
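
In code, that risk-first logic might look something like the sketch below; the distances, speeds, and safety margin are invented for illustration:

```python
# React to the risk, not the object's identity: a ball, a skateboard, or
# anything else unidentified in the road triggers the same evasive logic.

def respond_to_unidentified_object(distance_m: float, speed_mph: float) -> str:
    stopping_margin_m = speed_mph * 0.5   # crude, invented safety margin
    if distance_m < stopping_margin_m:
        return "brake hard"
    return "slow down and monitor"

print(respond_to_unidentified_object(distance_m=15, speed_mph=40))  # brake hard
print(respond_to_unidentified_object(distance_m=60, speed_mph=40))  # slow down and monitor
```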

All of the above is of course a gross oversimplification, but it serves to illustrate the principles of artificial intelligence, which can also be used to perform evaluations and to initiate appropriate reactions and responses to almost any situation.

However, artificial intelligence has not yet reached the point where a vehicle can autonomously calculate the correct action(s) with which to counter all risks under all conditions. Instead, programmers have developed software that tells a vehicle how to react under most conditions. To do this, they have pre-loaded the control system with thousands of traffic situations, along with the specific actions required to avoid an accident in each of those hypothetical situations.

So when a driverless car encounters a threat or a potentially dangerous situation, it uses the observations from its sensors as parameters with which to find the correct set of actions to initiate in order to avoid it. To an observer watching the vehicle perform evasive manoeuvres it might not always be clear just why the vehicle acted in a specific manner, but that is because no explicit "rules of behaviour or action" were available to the vehicle.

The vehicle was forced to act based on one or more of the thousands of choices available to it: it had no option other than to choose the one it deemed most effective, and the "strange" behaviour seen by the observer is likely the result of none of the available choices being 100% applicable to the risk or situation as the vehicle observed it.
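
A sketch of that lookup, with traffic situations reduced to two-number feature vectors purely for illustration:

```python
# Find the stored situation nearest to what the sensors observe, and use
# its pre-loaded action. The situations and feature values are invented.

situations = {
    (0.9, 0.1): "brake and steer left",    # obstacle ahead-right
    (0.1, 0.9): "brake and steer right",   # obstacle ahead-left
    (0.5, 0.5): "brake hard, hold lane",   # obstacle dead ahead
}

def best_action(observed):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(situations, key=lambda s: dist(s, observed))
    return situations[nearest]

# No stored situation matches exactly, but the vehicle must still pick one,
# which can look "strange" to an outside observer.
print(best_action((0.7, 0.4)))   # brake hard, hold lane
```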

One more thing…

One very unusual result of artificial intelligence as it pertains to self-driving cars is the fact that current systems use randomly generated numbers (or values) in the process of making decisions. In practice, this means that a self-driving vehicle may react differently in some situations even when the circumstances are identical.

This is not the result of defective programming; it is the direct result of it being impossible to reduce human thought processes to hard-and-fast rules that can be captured in a piece of computer software. The only way to have self-driving cars approach the way human drivers think is to have them learn from experience.
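
As a sketch of how a random value can enter the process: when several candidate actions score almost equally well, one can be picked at random, so identical circumstances may produce different reactions (the actions and scores are invented):

```python
import random

def choose_action(scored_actions, tolerance=0.05):
    """Pick randomly among actions scoring within `tolerance` of the best."""
    best = max(score for _, score in scored_actions)
    candidates = [a for a, score in scored_actions if best - score <= tolerance]
    return random.choice(candidates)

options = [("brake", 0.91), ("swerve left", 0.89), ("swerve right", 0.70)]
print(choose_action(options))   # may print 'brake' or 'swerve left'
```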

To this end, Google has to date operated driverless cars (with human co-drivers in place) for more than 1.5 million miles on public roads to capture millions of traffic situations in efforts to expand the capabilities of future generations of driverless cars. All of these situations will eventually be incorporated into the control systems of driverless cars with the aim of making them respond to threats and dangerous situations in a consistent manner where the circumstances are the same.

Author:
This article is provided by the staff writers of AutoBodyShop.org – the largest Auto Body Shop Directory in the US, with over 200,000 verified listings. At our Auto Body Shop Blog you will find more articles from our "Automotive Technology Explained" series, and many others about car maintenance and road safety.
