The World of Technology News and Products

Artificial intelligence machines

Although AI is capable of replacing humans in certain tasks and scenarios, some of its uses could prove dangerous for society.

Mike Hinchey of IFIP suggests autonomous vehicles as an example. They have a dual function: to operate independently and to collect data about their environment.

The robot must not cause, or through inaction allow, harm to a human being. On one side, the vehicle has control functions such as acceleration, deceleration, steering and braking. On the other, it is equipped with information systems that take into account the behaviour of nearby drivers and of any vehicle it is following.

Most autonomous driving systems work by reacting to conditions in a predetermined way: if the car ahead slows down, the car behind slows too. If the driver in the adjacent lane keeps accelerating instead of braking, the vehicle will attempt to change lanes, a decision based on the data it has collected about the vehicle in front. But if changing lanes will inevitably cause an accident, autonomous vehicles face an unresolved moral quandary. One way to approach this is to have a human script the possible scenarios in advance; the main caveat is that those scenarios would have to be chosen so that none of them leads to an accident.
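The predetermined reactions described above can be sketched as a simple rule table. This is an illustrative assumption, not a real driving stack: the condition names and action strings are hypothetical, and a real system would work with continuous sensor data rather than booleans.

```python
def choose_action(lead_is_slowing: bool,
                  adjacent_is_accelerating: bool,
                  lane_change_is_safe: bool) -> str:
    """Pick a predetermined response to observed traffic conditions.

    Hypothetical sketch of the reactive logic described in the text:
    follow the lead vehicle's behaviour, and only change lanes when the
    adjacent driver keeps accelerating AND the manoeuvre is judged safe.
    """
    if not lead_is_slowing:
        return "maintain_speed"          # nothing ahead requires a reaction
    if adjacent_is_accelerating and lane_change_is_safe:
        return "change_lane"             # adjacent driver won't brake: move over
    return "brake"                       # default, including the unresolved
                                         # case where a lane change would crash

print(choose_action(True, True, False))  # → brake
```

Note that the hard case from the text, where braking and changing lanes both carry risk, simply falls through to `"brake"` here; the rule table has no answer for the moral quandary, which is exactly the point being made.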

The truth is, as long as the unpredictable human factor remains on the road, autonomous driving technology will never be fully practical. It would help if we could all agree that vehicles should be autonomous, instead of leaving humans responsible for this mess.

Artificial Intelligence, as the name suggests, embodies many ‘artificial’ qualities, like the ones discussed in Mike Hinchey’s article. Hinchey notes that AI is tested in situations where its decisions are not predetermined, drawing instead on knowledge of the conditions it has encountered before. But how unpredictable can such a decision ultimately be? What limits should we place on artificial intelligence? When should an intelligent system be stopped and the response left to human judgement?

As the example above shows, if a car has a clear mandate to “save human life”, it can be shut down to avoid accidents. Even then, a command to attempt no driving manoeuvres at all could itself cost human lives if, for example, the scenario takes place on a motorway at rush hour.

According to Hinchey, machines need to be able to adapt. For them to do this, we must clearly define everything that is allowed and everything that is prohibited, rather than specifying in advance each action they can take.
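One minimal way to sketch that idea, assuming entirely hypothetical rule names and state fields, is to define only prohibitions and let the system choose freely among whatever actions remain:

```python
# Hypothetical sketch: rather than enumerating every permitted action,
# define what is prohibited and filter the candidates through those rules.
CANDIDATE_ACTIONS = ["accelerate", "maintain_speed", "brake", "change_lane"]

PROHIBITIONS = {
    # action -> predicate over the (assumed) vehicle state that forbids it
    "accelerate": lambda s: s["distance_to_lead_m"] < 20,   # no tailgating
    "change_lane": lambda s: not s["adjacent_lane_clear"],  # no side collision
}

def allowed_actions(state: dict) -> list[str]:
    """Return every candidate action that violates no prohibition."""
    return [a for a in CANDIDATE_ACTIONS
            if not (a in PROHIBITIONS and PROHIBITIONS[a](state))]

state = {"distance_to_lead_m": 12, "adjacent_lane_clear": True}
print(allowed_actions(state))  # "accelerate" is filtered out at 12 m
```

The machine is then free to adapt, choosing any action from the returned list, while the human-defined boundaries stay fixed. This mirrors the point in the text: we constrain the system without scripting its every move.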

You can learn more about Mike Hinchey and his work here: https://www.itu.int/en/ITU-T/AI/Pages/hinchey.aspx


Author: PC-GR