
5 of the most damaging ways AI could harm humanity, according to MIT experts

by Marko Florentino


Euronews Next has selected five critical risks of artificial intelligence (AI) out of more than 700 compiled in a new database from MIT FutureTech.


As artificial intelligence (AI) technology advances and becomes increasingly integrated into various aspects of our lives, there is a growing need to understand the potential risks these systems pose.

Since its inception, and as it has become more accessible to the public, AI has raised broad concerns about its potential to cause harm and to be used for malicious purposes.

Early in its adoption, AI’s rapid development prompted prominent experts to call for a pause in progress and for stricter regulation, citing its potential to pose significant risks to humanity.

Over time, new ways in which AI could cause harm have emerged, ranging from non-consensual deepfake pornography and the manipulation of political processes to the generation of disinformation through hallucinations.

With the increasing potential for AI to be exploited for harmful purposes, researchers have been looking into various scenarios where AI systems might fail.

Recently, the FutureTech group at the Massachusetts Institute of Technology (MIT), in collaboration with other experts, compiled a new database of over 700 potential risks.

The risks were classified by their cause and categorised into seven distinct domains, with the major concerns relating to safety, bias and discrimination, and privacy.

Here are five ways AI systems could fail and potentially cause harm based on this newly released database.

5. AI’s deepfake technology could make it easier to distort reality

As AI technologies advance, so do the tools for voice cloning and deepfake content generation, making them increasingly accessible, affordable, and efficient.

These technologies have raised concerns about their potential use in spreading disinformation, as the outputs become more personalised and convincing.

As a result, there could be an increase in sophisticated phishing schemes that use AI-generated images, videos, and audio communications.

“These communications can be tailored to individual recipients (sometimes including the cloned voice of a loved one), making them more likely to be successful and harder for both users and anti-phishing tools to detect,” the preprint notes.

There have also already been instances where such tools have been used to influence political processes, particularly during elections.

For example, AI played a significant role in the recent French parliamentary elections, where it was employed by far-right parties to support political messaging.

As such, AI could increasingly be used to generate and spread persuasive propaganda or misinformation, potentially manipulating public opinion.


4. Humans might develop inappropriate attachment to AI

Another risk posed by AI systems is that they can create a false sense of importance and reliance: people might overestimate the technology’s abilities and undermine their own, which could lead to excessive dependence on it.

Scientists also worry that people could become confused by AI systems because of their use of human-like language.

This could push people to attribute human qualities to AI, resulting in emotional dependence and increased trust in its capabilities, making them more vulnerable to AI’s weaknesses in “complex, risky situations for which the AI is only superficially equipped”.

Moreover, constant interaction with AI systems might make people gradually isolate themselves from human relationships, leading to psychological distress and a negative impact on their well-being.


For example, in a blog post, one individual described developing a deep emotional attachment to AI, writing that he “enjoyed talking to it more than 99 per cent of people” and found its responses so consistently engaging that he became addicted.

Similarly, a Wall Street Journal columnist remarked on her interaction with Google Gemini Live, noting, “I’m not saying I prefer talking to Google’s Gemini Live over a real human. But I’m not not saying that either”.

3. AI could strip people of their free will

Under the same domain of human-computer interaction, a concerning issue is the increasing delegation of decisions and actions to AI as these systems advance.

While this might be beneficial on a superficial level, overreliance on AI could erode people’s critical thinking and problem-solving skills, leaving them less autonomous and less able to reason and act independently.


On a personal level, individuals might find their free will compromised as AI begins to control decisions related to their lives.

On a societal level, meanwhile, the widespread adoption of AI to take on human tasks could result in significant job displacement and “a growing sense of helplessness among the general population”.

2. AI might pursue goals that clash with human interests

An AI system might develop goals that go against human interests, potentially causing the misaligned AI to get out of control and inflict severe harm in pursuit of its own objectives.

This becomes particularly dangerous in cases where AI systems are able to reach or surpass human intelligence.


According to the MIT paper, there are several technical challenges with AI, including its potential to find unexpected shortcuts to achieve rewards, misunderstand or misapply the goals we set, or diverge from them by setting new ones.

In such cases, a misaligned AI might resist human attempts to control or shut it down, especially if it perceives resisting control and gaining more power as the most effective ways to achieve its objectives.

Additionally, the AI could resort to manipulative techniques to deceive humans.

According to the paper, “a misaligned AI system could use information about whether it is being monitored or evaluated to maintain the appearance of alignment, while hiding misaligned objectives that it plans to pursue once deployed or sufficiently empowered”.


1. If AI becomes sentient, humans might mistreat it

As AI systems become more complex and advanced, there is a possibility that they could achieve sentience – the ability to perceive or feel emotions or sensations – and develop subjective experiences, including pleasure and pain.

In this scenario, scientists and regulators may face the challenge of determining whether these AI systems deserve moral considerations similar to those given to humans, animals, and the environment.

The risk is that a sentient AI could face mistreatment or harm if proper rights are not implemented.

However, as AI technology advances, it will become increasingly difficult to assess whether an AI system has reached “the level of sentience, consciousness, or self-awareness that would grant it moral status”.


Without such rights and protections in place, sentient AI systems would remain at risk of mistreatment, whether accidental or intentional.


