
Four keys to understanding the ethics of artificial intelligence

In 2018, Elaine Herzberg was killed in what is believed to be the first pedestrian fatality caused by an autonomous vehicle. The incident caught the world's attention and raised questions about whether we can trust artificial intelligence (AI) with something as important as our lives. But what does that mean for organizations looking to harness this technology?

Many industry experts around the world are calling for what is known as "responsible AI", and recent studies show that organizations are increasingly concerned about the implications of AI. According to a Capgemini Research Institute study, 51 percent of global executives consider it important to ensure that AI systems are ethical and transparent.

AI, especially machine learning, works by ingesting data inputs, learning from them and, from there, inferring conclusions in order to make predictions. This raises the question of how we judge whether an AI system's output or conclusion is safe, and whether or not it will succumb to bias or cause harm. This is the crux of the ethics of AI.
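The learn-then-infer loop described above can be sketched in miniature. This is an illustrative example, not from the article: a model "learns" two parameters from training data by ordinary least squares, then infers a prediction for an unseen input.

```python
# Machine learning in miniature: learn parameters from data inputs,
# then infer a conclusion (a prediction) for new input.

def fit_line(xs, ys):
    """Learn slope and intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    """Infer a prediction from the learned parameters."""
    slope, intercept = model
    return slope * x + intercept

# The system learns from these inputs...
model = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
# ...and infers a conclusion for an input it has never seen.
print(predict(model, 5))  # 10.0
```

The ethical question the article raises sits exactly here: whether that inferred output can be trusted depends entirely on the data the model was fed.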

"Determining whether an outcome is ethically acceptable can lead to reasonable disagreement. For example, during the COVID-19 pandemic, doctors, politicians and the public may disagree on the ethics of healthcare decisions, such as prioritizing ventilators for younger patients over the elderly. If humans can doubt which is the right decision, how can an AI improve on it?", comments Jorge Martínez, regional director of OpenText in Spain and Portugal. If we focus on the business environment, where AI is used to automate processes or improve the customer experience, ethics may seem somewhat less important. But for every organization, the primary purpose of AI should be to provide information that improves decision-making. Being able to trust and depend on that information is essential.

The key points of the ethics of AI

There are many ethical questions about the social impact of AI, affecting all kinds of sectors and fields, such as the use of artificial intelligence in healthcare, autonomous cars or the benefits of this technology in the supply chain. Narrowing the discussion to the ethics of AI solutions created within a business environment, OpenText emphasizes four key points: prejudice or bias, responsibility and explicability, transparency and, finally, certainty in the data.

Prejudice or bias

The area of AI ethics that has perhaps received the most attention is bias: when biased data models, or the biases of developers themselves, inadvertently infiltrate the artificial intelligence system. This is hardly surprising, considering that 188 different cognitive biases have been catalogued. Whether it is an unconscious bias on the part of the system's creator or a bias built into the data model the system uses, the results are likely to be unfair, discriminatory or simply wrong.
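One common way to surface this kind of unfairness is to audit a system's outcomes across groups. The sketch below is a hypothetical example, not an OpenText method: it applies a simple demographic-parity check, where a large gap in positive-outcome rates between groups flags possible bias. The group names and the tolerance threshold are illustrative assumptions.

```python
# Hypothetical bias audit: compare a system's positive-outcome rate
# across groups defined by a protected attribute.

def positive_rate(decisions):
    """Fraction of decisions that were positive (1 = approved)."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# 1 = approved, 0 = rejected, split by a protected attribute.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # 25% approved
}

gap = parity_gap(outcomes)
print(f"parity gap: {gap:.2f}")  # 0.50
if gap > 0.2:  # illustrative tolerance, not a standard
    print("warning: outcomes differ sharply across groups")
```

A gap this size does not prove discrimination by itself, but it is exactly the kind of signal that should trigger a human review of the model and its training data.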

Responsibility and explicability

The concepts of responsibility and explicability are well understood in everyday life: whoever is responsible for something should be able to explain why it happened the way it did. The same is true in the world of AI. It is essential that whatever action the technology takes can be fully explained and audited; it has to be held accountable.

Transparency

To be accountable, an artificial intelligence system must be transparent. Yet many AI solutions take a "black box" approach that offers no visibility into the underlying algorithms. The new generation of AI solutions built on open source, by contrast, allows organizations to integrate their own algorithms and benchmark their quality against their own data.
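What a transparent model buys you, concretely, is that any individual decision can be decomposed and audited. As a sketch under stated assumptions (the feature names and weights are invented for illustration, and a linear scoring model stands in for whatever the real system uses), each prediction below can be broken into per-feature contributions that an auditor can inspect, something a black box cannot offer.

```python
# With a transparent (here, linear) model, every prediction can be
# decomposed into per-feature contributions for auditing.

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def explain(applicant):
    """Return each feature's contribution to the score, plus the total."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return contributions, sum(contributions.values())

contribs, score = explain({"income": 4.0, "debt": 2.0, "years_employed": 3.0})
for feature, value in contribs.items():
    print(f"{feature}: {value:+.2f}")   # e.g. debt: -1.60
print(f"score: {score:.2f}")            # score: 1.30
```

The design point is not the linear model itself but the property it illustrates: the explanation is exact and reproducible, so the decision can be defended, or corrected, after the fact.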

Certainty in the data

A key point in building AI systems is how they handle the data, especially personal data, used to train their models. Machine learning and deep learning require huge data sets to learn and improve: the more data, the better the results over time. However, privacy regulation such as the GDPR imposes new levels of responsibility on organizations for how they capture, store, use, share and report the personal data they hold. Organizations must know how and why their data is being processed, and the risks involved.
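One common step toward that kind of accountability is pseudonymising direct identifiers before records enter a training set. The sketch below is a minimal illustration, not a compliance recipe: the salt value, record layout and field names are assumptions made for the example.

```python
import hashlib

# Assumption for the example: the salt is stored separately from the data,
# so tokens cannot be re-identified from the data set alone.
SALT = b"stored-separately-and-rotated"

def pseudonymise(record):
    """Replace the direct identifier with a salted hash; keep the features."""
    out = dict(record)
    token = hashlib.sha256(SALT + record["email"].encode()).hexdigest()[:16]
    out["email"] = token
    return out

clean = pseudonymise({"email": "ana@example.com", "age": 34, "score": 0.7})
print(clean["age"])          # features survive for model training
print(clean["email"])        # identifier replaced by an opaque token
```

Pseudonymised data is still personal data under the GDPR, so this reduces risk rather than removing the obligation; it is one layer among several.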

Even if an organization has a team of experienced data scientists, many of these ethical challenges will remain relatively new to them, especially since AI is a rapidly evolving technology. A good practice for avoiding bias and staying transparent and accountable, according to OpenText, is to establish a management team that oversees the use of AI throughout the company, and to develop an ethical framework describing what AI is supposed to do, how it should be built and used, and what the expected results are.
