How R180 Can Teach AI To Work With, Not Against, Humans


Artificial Intelligence (AI) is quickly becoming a staple of modern society. From generating a whole range of digital content to applications in education and medicine to powering self-driving vehicles, AI is here to stay, and it will change and improve rapidly in the months and years ahead.

As AI becomes a more powerful tool, how can we ensure that we don’t end up with runaway AI, unbounded by human ethics and values? The answer lies in using a tool like REVALUATE180 to train AI systems to recognize human values and incorporate them into their day-to-day functioning.

The Concerns of Runaway AI

People are justified in their concerns about an unbridled AI future where machines take over human functions and run amok, leaving chaos and destruction in their wake. 

There’s a very real and very tragic example of this from Uber’s self-driving vehicle program. In 2018, a pedestrian pushing a bicycle across the road, outside of a crosswalk, was struck and killed by one of the program’s autonomous test vehicles.

It was later revealed that the car was not programmed to recognize the concept of jaywalking. According to Forbes, the system was “trained to rigidly segment objects in the road into a number of categories – such as other cars, trucks, cyclists, and pedestrians. A human being pushing a bicycle did not fit any of those categories and did not behave in a way that would be expected of any of them.”

Just like humans who find themselves stuck in patterns of over-categorization, today’s AI products can only learn what their human programmers choose to teach them, and that teaching is easily shaped by biases or blind spots, such as overlooking the concept of jaywalking.
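The failure mode described above can be sketched in a few lines of code. This is a hypothetical toy illustration, not Uber’s actual software: a classifier limited to a fixed set of categories has no sensible answer for anything outside them.

```python
# Toy illustration (hypothetical, not real perception code): a system
# that can only assign fixed categories fails on out-of-category objects.
KNOWN_CATEGORIES = {"car", "truck", "cyclist", "pedestrian"}

def classify(observation: str) -> str:
    """Return the matching category, or 'unknown' if nothing fits."""
    return observation if observation in KNOWN_CATEGORIES else "unknown"

print(classify("cyclist"))                  # a clean match: "cyclist"
print(classify("pedestrian with bicycle"))  # falls through the cracks: "unknown"
```

The point is not the code itself but the design choice it encodes: every category the system can act on must be anticipated by a human, so anything the programmers didn’t foresee becomes a blind spot.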

The Forbes author points to another example of runaway AI, a thought experiment from Oxford philosopher Nick Bostrom. In this scenario, an AI is assigned to make paperclips and, in its quest for resources to fulfill that mandate, eventually destroys human life in order to keep making paperclips. It does so simply because it is unaware of the human values that would forbid killing people to produce a comparatively worthless product.
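Bostrom’s thought experiment boils down to what an agent’s objective does and does not count. The hypothetical sketch below contrasts a naive objective that scores only paperclip output with one that adds a hard constraint encoding a human value; the plan names and fields are invented for illustration.

```python
# Toy sketch of Bostrom's paperclip maximizer (hypothetical code).
plans = [
    {"name": "run the factory",       "paperclips": 1_000,     "harms_humans": False},
    {"name": "consume all resources", "paperclips": 1_000_000, "harms_humans": True},
]

def naive_score(plan):
    # Counts only paperclips -- human welfare never enters the objective.
    return plan["paperclips"]

def value_aligned_score(plan):
    # Hard constraint encoding a human value: any plan that harms
    # people is ruled out, no matter how many paperclips it yields.
    return float("-inf") if plan["harms_humans"] else plan["paperclips"]

best_naive = max(plans, key=naive_score)
best_aligned = max(plans, key=value_aligned_score)
print(best_naive["name"])    # "consume all resources"
print(best_aligned["name"])  # "run the factory"
```

The naive objective happily selects the catastrophic plan because nothing in its score says otherwise, which is exactly the gap a values framework is meant to close.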

There is also evidence that AI systems, even when designed with the best intentions, can inadvertently reinforce societal biases and exacerbate inequalities. Amazon’s facial-recognition software, Rekognition, has shown bias against people of color, falsely matching minorities to criminal mugshots at higher rates in independent tests.

How R180 Can Be a Guardrail for AI Systems 

R180’s work with human values presents an opportunity to guide AI products to make decisions within a framework of values.

Machine programmers can use the tenets of R180 to teach AI products to be aware of human values, recognize them, and apply them to the tasks they’re assigned.

When humans undergo the R180 experience, they learn to overcome inherent biases and the over-categorization of people and/or ideas, which, in turn, can make AI systems more equitable, fair, and just.

Central to the R180 approach is training AI systems to be capable of observing and learning from human behavior. This allows R180 to assist AI in recognizing complex human behaviors and translating them into actionable AI decisions that reflect our values.

Human values are multifaceted. R180 is designed to appreciate this complexity, navigate these "hard choices," and assist AI in mirroring the subtlety of human decision-making.

Moreover, R180 focuses on ensuring that AI serves as an enhancement, not a replacement, for human decision-making. While AI can help us with data analysis and pattern recognition, it should not be permitted to make decisions that inherently require human judgment. By clearly defining boundaries for AI, R180 helps us avoid a world where we abdicate our decision-making capabilities to machines.

Featured image courtesy of Andy Kelly via Unsplash
