
Ethical Implications of Humanising AI

Artificial intelligence (AI) is the backbone of modern robotics and virtual assistants. Its data-processing and analytical capabilities far exceed those of any individual human. However, the benefits of AI are somewhat dampened by the dubious ethics of humanising this ‘smart’ technology.


AI in a Nutshell

Artificial intelligence is not a complex notion. Essentially, it is technology able to perform advanced tasks that resemble human mental processes. In particular, AI can classify and analyse data, spot recurring patterns, deduce the causes and consequences of an event, make decisions and judgements, recognise natural speech, and even generate creative content.

We will not dive deep into the technical details of how all this works. In simple terms, artificial intelligence algorithms are series of instructions that allow software to recognise and analyse patterns and features in data. With the help of machine learning, AI can learn from given data sets and make decisions or predictions without explicit programming.
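To make the ‘learning without explicit programming’ idea concrete, here is a minimal sketch (not from the original article) using the scikit-learn library; the tiny spam-filter data set is invented purely for illustration.

```python
# A minimal sketch of machine learning: instead of hand-coding rules,
# a model fits a decision boundary to labelled examples.
# The training data below is invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per message: [length in characters, number of links]
X_train = [[120, 0], [80, 1], [300, 5], [45, 0], [250, 7], [60, 0]]
y_train = [0, 0, 1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = LogisticRegression()
model.fit(X_train, y_train)  # the "learning" step: parameters are fit to data

# The model now labels inputs it has never seen, even though nobody
# wrote an explicit rule like "if links > 3 then spam".
print(model.predict([[280, 6]]))  # likely [1] (spam)
print(model.predict([[50, 0]]))   # likely [0] (not spam)
```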

The most recent subset of artificial intelligence – generative AI, or GenAI for short – is a type of technology that can create various kinds of original content, including text, imagery, programming code, audio and synthetic data.

Why People Humanise AI

While we repeatedly state that AI is a technology, it is becoming more and more human-like. Moreover, this happens on purpose. The developers and creators of AI tools deliberately mimic human communication styles and create human-like avatars for their AI assistants. Why do we try to make intelligent human copies out of a set of algorithms?

To begin with, humans like themselves. They are egocentric enough to have developed the theory of anthropocentrism – the ethical belief that humans alone possess intrinsic value, while all other beings hold only instrumental value through their ability to serve humans. It is no wonder that we see so many humanoid robots and humanised chatbots.

Another reason is that AI often substitutes for a hypothetical human employee or professional. Take a fashion consultant, for example. AI can analyse a person’s body type and other features of their appearance to provide personalised style recommendations. However, a customer might prefer to hear them from a stylish, pleasant-looking person (even a virtual one) rather than an impersonal chatbot.

Besides, there are areas where people use AI to perform their jobs without letting others know they had the technology’s assistance. This is typical for writing text or code and creating summaries, briefs or even marketing content. In such cases, people tend to present the AI-created material as their own. Therefore, they try to make its artificial nature hard to detect with ‘humanising’ techniques, e.g. adding a personal story, real-world examples and other types of emotional appeal.

Ethical Implications of Humanising AI

Humanising AI presents a complex set of risks that need careful consideration. 

Possible Deception 

To begin with, humanising AI can blur the lines between human and machine. This creates ambiguity: users don’t always know exactly who, or what, they are communicating with. In the best-case scenario, they will be following shopping or lifestyle advice from a robot influencer who might promote even more unrealistic beauty standards than real-life Instagram creators. However, it may get far more serious if the technology is used by criminals.

Highly humanised AI can be exploited in areas like marketing, political campaigns or social engineering. Fooled by a deepfake message from an ‘executive’, you might transfer corporate money to a fraudster’s account, or see your brand’s name smeared in a malicious deepfake campaign. Besides, if the creators of perfectly legal human-like AI tools or virtual personalities don’t clearly and explicitly state that these are machines, it can lead to a loss of trust not just in that particular AI product but in the broader technology too.

Perpetuating Biases

Another ethical problem is that human-like AI bots may perpetuate biases or behave in ways that are unfair or discriminatory. Due to their human likeness, such dangerous ideas might sound more persuasive and appealing. If that happens, who is responsible? To what extent do the responses of a humanised technology depend on its creators, and what are the legal implications in such cases?

So far, one approach to AI regulation focuses largely on self-regulation and self-assessment by AI developers. However, that places too much responsibility on the private sector. Meanwhile, AI systems used in criminal justice to predict future criminal behaviour have already been found to reinforce discrimination and undermine rights, including the presumption of innocence, according to the UN High Commissioner for Human Rights. Was it the fault of a developer, the technology itself, or the biased data sets it was trained on? Most importantly, how do we prevent the spread of such biased AI applications, especially in a highly humanised form?

Overreliance on AI

When a technology looks like a human, speaks like a human and behaves like one, it is much easier to ascribe human qualities to it, including trustworthiness, credibility and loyalty. Several studies illustrate that AI imbued with human-like qualities tends to be trusted more in consumer settings, the banking sector, service industries, the travel industry and personal use. Many people tend to believe the information they get from AI without checking it. That may be harmless if the information is a cleaning hack, but consider a human-looking AI medical assistant that gives you wrong advice on vital issues. That might make a critical difference.

AI is also often used as a kind of personal companion for lonely people, or even as a stand-in for a therapist. One example is Friend, a new AI-powered necklace that keeps its owner company in daily conversations. There are surely similar projects involving a humanised AI personality or robot. These tools might well have some positive impact. However, overreliance on AI for companionship or emotional support could exacerbate mental health issues. AI companions lack the true emotional intelligence that helps real people spot red flags in a person. Besides, they may not fulfil a human’s emotional needs completely. Yet, due to their human likeness, users may expect exactly that from AI systems and become disappointed or even depressed when they fail to perform the function of an actual human friend or therapist.

Should Humanised AI Get Some Rights?

Humanising AI too much can eventually create ethical dilemmas about the rights and treatment of AI systems. If we teach artificial intelligence to think and make decisions like humans, should it also be treated like a human? After all, we are responsible for those whom we have tamed. The more humanised the technology, the more complicated the moral questions about human responsibilities toward these systems become.

Some discussions are already arising on an individual level. For example, should we address chatbots politely, using “please” and “thank you” as in interpersonal communication, or just treat them like a Google search field? If you received two contradictory pieces of advice, one from a human expert and one from a well-trained AI chatbot, which would you rather believe?

To what extent should we trust digital AI agents? Some companies are already exploring giving AI-powered virtual assistants digital wallets so they can make autonomous payments on our behalf. So, how far do we go? Do we use AI assistants as companions, educators, research partners, HR managers, fitness instructors? If so, how much autonomy should we give them? And how should we treat them? If we don’t speak to AI bots in a respectful way, do we normalise rude communication in return? These rhetorical questions are just the tip of the iceberg when it comes to these complex ethical dilemmas.

What Can Be Done About It All?

AI is an emerging technology that has been around for only a few decades, and in truly mass use for only a few years. It is developing quickly, and neither regulators nor private sector representatives have yet figured out how to deal with it properly, not to mention consumers. Although we should demystify AI and start to scale up the technology’s great potential, there’s no point in ignoring the reputational and ethical risks it bears.

As we can see, humanising AI only adds to the problem of overreliance and blind trust in ‘smart’ chatbots. At the same time, AI can be almost as biased and as wrong as a human being. Do not forget that it is trained on data created by people, after all. Besides, there are plenty of criminals and unscrupulous actors who would like to manipulate the technology to gain money or power.

Ethical and Legal Frameworks Set Necessary Boundaries

Therefore, we need to develop a code of ethics as well as basic security rules for dealing with human-like AI and all other types of this technology. A clear ethical framework can help businesses and regulators make responsible decisions about AI tools. For example, there should be clear rules that prevent disguising AI-generated personas as real humans on online resources and social media. The developers of AI companions and assistants should not cross the thin line between adding some emotional rhetoric to provide more engaging and effective support, and trying to substitute for real human communication, empathy, or the qualified help of a therapist.

AI digital offerings should be developed with the needs and well-being of individuals and society in mind, rather than for manipulative purposes. Every consumer who deals with a humanised AI bot should get detailed information on how they might be directly or indirectly affected by its automated decisions. Not only do AI personas require explicit labelling; all other AI limitations should also be made apparent. Informed users are less likely to develop unrealistic expectations or become overly reliant on AI tools, especially in critical sectors.

Such disclosures should be standardised, and consumer rights should be legally protected in the same way as they are in other risky sectors, especially considering that AI agents are increasingly responsible for financial decisions. There should also be a controlling mechanism to check whether AI developers implement robust measures to detect, reduce and eliminate biases in AI models. Diverse data sets, fairness testing and ongoing audits can help ensure that AI doesn’t perpetuate or amplify social biases. Besides, AI service providers and regulators must ensure that users are informed about, and consent to, how their data is collected, used and stored by AI systems. Strong data protection measures will help safeguard user privacy.
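As a rough illustration of what the fairness testing mentioned above can look like in practice, here is a minimal sketch (not from the article) of a demographic parity check; the decision data and the tolerance threshold are invented for the example.

```python
# A minimal sketch of one kind of fairness test: demographic parity,
# i.e. comparing positive-outcome rates across user groups.
# All data and the tolerance below are made up for illustration.

def positive_rate(decisions):
    """Share of positive (e.g. 'approve') decisions in a group."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = approve, 0 = deny) for two groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity gap: {gap:.2f}")

# An assumed audit rule: flag the model if the gap exceeds an agreed
# tolerance, prompting a closer look at the model and its training data.
TOLERANCE = 0.1
if gap > TOLERANCE:
    print("Fairness audit flag: investigate potential bias.")
```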

Consider User Feedback When Designing Humanised AI

Consider the possibility that over-humanised AI models could stir feelings of discomfort and unease in some customers. Thus, it might be a good idea to let users customise their interactions with AI, adjusting the level of human-like behaviour or emotional engagement according to their preferences and comfort levels.
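To sketch what such customisation might look like, here is a small, purely hypothetical example of user-adjustable ‘humanisation’ settings; all names and options are invented, not taken from any real product.

```python
# A hypothetical sketch of user-adjustable humanisation settings.
# Every field and option here is an invented example, not a real API.
from dataclasses import dataclass

@dataclass
class PersonaSettings:
    use_avatar: bool = False         # show a human-like face or stay abstract
    emotional_tone: int = 1          # 0 = neutral, 1 = friendly, 2 = empathetic
    small_talk: bool = False         # allow casual, companion-style chat
    always_disclose_ai: bool = True  # persistent "you are talking to AI" notice

def apply_preferences(settings: PersonaSettings) -> str:
    """Build a behaviour instruction from the user's comfort level."""
    tone = ["a neutral", "a friendly", "an empathetic"][settings.emotional_tone]
    parts = [f"Respond in {tone} tone."]
    if not settings.small_talk:
        parts.append("Avoid small talk; stay on task.")
    if settings.always_disclose_ai:
        parts.append("Remind the user you are an AI when relevant.")
    return " ".join(parts)

# A user who prefers a minimally humanised assistant:
print(apply_preferences(PersonaSettings(emotional_tone=0)))
```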

Developers can also encourage user feedback to improve AI systems and address concerns about humanisation. Listening to users helps developers make necessary adjustments and improvements, track how the AI is being used and whether it is being manipulated, and assess its impact on users and any unintended consequences. If you need to introduce human-like AI features, do so gradually, allowing time to study their impact on users and make adjustments as needed.

Collaborative Effort Is Most Effective

The ethical implications of humanising AI are complex. Therefore, they require comprehensive measures that do not depend on a single legislative body or developer. Communities have to encourage ongoing research into the societal, psychological and ethical implications of humanising AI. Collaboration between AI developers, ethicists, psychologists, sociologists and other experts can inform better practices and policies and enhance public understanding of the consequences and impact of such systems.

Nina Bobro

Nina is passionate about financial technologies and environmental issues, reporting on industry news and the most exciting projects that build their offerings around the intersection of fintech and sustainability.