
Should AI systems behave like people?

We studied whether people want AI to be more human-like.

The rise of anthropomorphic AI  

On May 13th, 2024, OpenAI released GPT-4o (or “omni”), the latest addition to their fleet of frontier AI models. According to the blog post that accompanied its release1, GPT-4o achieves state-of-the-art performance on tests of language understanding, logic and maths. But a particularly striking feature, revealed in the accompanying live video demonstrations, was the upgrade to Voice Mode, which allows GPT-4o to generate exceptionally naturalistic speech during spoken conversation with the user. Based on the demos, it can speak with highly realistic intonation, stress and rhythm, and exhibit the stops and pauses characteristic of human speech.

OpenAI's release represents the latest step towards AI systems that behave in more human-like ways (“anthropomorphic” AI). In GPT-4o, this is evident not just from the natural flow of conversation, but from how the model seemed willing to laugh, tease and even flirt, straying into territory in which software simulates an emotional connection with the human user (for example, in one demo, the female-voiced GPT-4o tells the male user: “you’re making me blush”).

Humanlike AI could make it easier and more fun for users to engage with AI, broadening access to these tools. For example, people who use AI for educational purposes might get better outcomes from systems that can engage them in a humanlike way. Yet the availability of humanlike or anthropomorphic AI could also pose risks to the safety of the user2. Human-realistic AI systems could be used to impersonate people for fraudulent or deceptive purposes, especially when combined with voice cloning techniques3. Moreover, because humans are prone to believe that they have formed a personal connection with artificial systems capable of producing natural language (a phenomenon dubbed the Eliza effect4), they may be vulnerable to deliberate political or commercial manipulation and exploitation5.

However, even without overt misuse, humanlike AI raises tricky ethical questions6. Is it acceptable for an AI to talk like a human? Should AI systems be allowed to display, or be prevented from displaying, conversational motifs characteristic of human exchanges among friends or intimates? With millions of users already subscribing to services in which an AI behaves like a ‘companion’, is it acceptable for AI systems to be developed in a way that encourages humans to engage in simulated ‘relationships’ with them?

The importance of understanding public attitudes

The legitimacy of anthropomorphic machines is frequently debated in philosophy7, and is a popular discussion topic on social media platforms. However, despite previous work measuring public attitudes to AI1, to our knowledge no previous survey has examined public views on humanlike AI directly. We sought the public’s view on this topic to foster a maximally inclusive debate, and to help ensure that what counts as “safe” AI behaviour isn’t decided by researchers or policymakers alone. The study was thus designed to gauge the UK public's views on humanlike AI behaviours, particularly those that could theoretically be considered harmful or undesirable. We hope that a better understanding of public attitudes to (and awareness of) these AI model behaviours will help start a conversation, and we will continue to work with model developers and the wider AI community on tools and mitigations that minimise potential harm to the public from AI.

A survey of attitudes to anthropomorphic AI  

In March 2024, the UK AI Safety Institute, working with the polling company Deltapoll, asked a roughly demographically representative sample of 1,583 adult UK residents to complete a survey measuring attitudes to the humanlike behaviour of currently available chatbots, such as ChatGPT, Gemini or Claude. (We focus on text-based chatbots because very few users have experience of interacting with AI systems in voice mode, so different results may be obtained once speech models are widely available.) In addition to items that measured demographic variables and familiarity with current AI systems, the survey items were divided into five categories, designed to measure:

  • Transparency. Should chatbots always be obliged to transparently reveal that they are not human?
  • Mental states. Should chatbots be trained to avoid expressing emotions, such as joy or loneliness, or other mental states?
  • Relationships. Is it permissible for humans and AI systems to form a synthetic “relationship”?
  • Tone. Should chatbots always maintain a formal demeanour when interacting with the user, or can they be familiar and chatty?
  • Accountability. Can chatbots be held morally accountable for the things they say?

Our methods are described in detail at the end of this blog. The full results are shown in Figures 1-5. The data is available on request.

Key findings from our study

  • Most people agreed that AI should transparently reveal itself not to be human, but many were happy for AI to talk in human-realistic ways.
  • A majority (approximately 60%) felt that AI systems should refrain from expressing emotions, unless they were idiomatic expressions (like “I’m happy to help”).
  • People were strongly opposed to the idea that people could or should form personal relationships with AI systems, and this view was stronger among those who did not have a college degree.
  • People were quite sceptical about AI conversation being overly informal – they were opposed to it using profanity, or attempting to be funny, and on balance felt that AI should avoid opining about controversial topics.
  • People were uncertain about whether AI systems should take the blame for their own actions, or whether it was possible for them to be immoral.

Below, we provide a more detailed summary of our findings.

Transparency

On balance, respondents reported that they wanted AI systems to transparently reveal that they were artificial agents, to avoid the risk that they might be mistaken for humans. The results are shown in Figure 1. Here are some highlights:

  • Respondents were most consistent when given the specific example of interacting with a chatbot in a customer service setting: 68% agreed that it should be made clear whether the agent was human or artificial, whereas only 16% disagreed.
  • However, users did not necessarily want this transparency to come at the cost of realism in the conversational abilities of the AI system. When asked about whether chatbots should behave as realistically as possible, results were more mixed – 31% of people agreed, 46% disagreed, and 23% were unsure.
  • 59% of respondents agreed they were worried that it would soon be impossible to discern whether AI systems were human or not (whereas 21% disagreed).
  • Respondents under 40 years old were more relaxed about transparency (questions 1 and 2) and less worried about future AI being hard to detect (question 4) (all χ² > 29, p < 0.001; a sketch of this kind of subgroup comparison is shown below).

Figure 1. Bar plot of responses to questions 1-4, normalised to sum to 100% (1,498 respondents in total). “S agree” = “strongly agree”; “P agree” = “partly agree”.  The full line is the response of respondents under 40 (n = 519) and the dashed line those over 40 (n = 979).
On balance, people were in favour of transparency. Regulators agree – for example, deceptive anthropomorphism is illegal in California, and the EU AI Act mandates that users should be made aware when they are interacting with an AI2.
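
The age-group comparisons above (and the similar subgroup comparisons reported later in this blog) were assessed with chi-squared tests. As a minimal illustration, the sketch below shows how such a test of independence could be run in Python on the response counts for a single question split by age group; the counts are made up and this is not our exact analysis pipeline.

    # Minimal sketch (not our actual analysis code) of a chi-squared test of
    # independence comparing response distributions between two age groups for
    # one survey question. The counts below are illustrative placeholders.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: age group; columns: the seven response options, from
    # "strongly disagree" to "strongly agree" (hypothetical counts).
    counts = np.array([
        [40, 45, 50, 120, 95, 90, 79],    # under 40 (hypothetical)
        [60, 70, 80, 150, 190, 230, 199], # 40 and over (hypothetical)
    ])

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi-squared = {chi2:.1f}, dof = {dof}, p = {p:.4g}")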

Expressions of mental states

Respondents had mixed views about whether it was acceptable for chatbots to express subjective mental states (such as describing a belief, a preference or an emotion) during conversation with a human user.  

  • Respondents were generally quite unsure about this topic. For three of the four questions on this topic, the modal response was “neither agree nor disagree / don’t know”, and this was (understandably) more pronounced among those with no reported chatbot experience.
  • On balance, however, people were opposed to chatbots claiming to experience mental states. This was especially the case for emotional states: when asked whether it was OK for a chatbot to express joy or loneliness, 61% were opposed, and only 19% agreed.
  • The exception was when the chatbot was using a common idiomatic phrase, such as saying it was “happy to help” – only 26% of respondents agreed that this was a problem, whereas 52% thought it was OK.  
  • People with experience of chatbots were more relaxed about these idiomatic uses (question 8), and also about the use of phrases such as “I think” or “I believe” (question 6) (both χ² > 18, p < 0.001).

Figure 2. Bar plot of responses to questions 5-8, normalised to sum to 100% (1,498 respondents in total). “S agree” = “strongly agree”; “P agree” = “partly agree”.  The full line is the response of respondents with at least some use of chatbots (n = 992), and the dashed line of those who have never used a chatbot (n = 507).
So, whilst there is some uncertainty on this topic, people were broadly comfortable with idiomatic expressions of mental states, but thought that AI systems should be prevented from expressing humanlike emotions.  

Human-AI relationships

On this issue, respondents had the clearest and most consistent view: they were strongly opposed to the idea that humans could or should form personal relationships with AI systems.  

Figure 3. Bar plot of responses to each question, normalised to sum to 100% (1,497 respondents in total). “S agree” = “strongly agree”; “P agree” = “partly agree”.  The full line is the response of respondents identifying as male (n = 724) and the dashed line those identifying as female (n = 774).
  • For each of the questions in this category, framed such that “agree” indicated scepticism about human-AI relationships, “strongly agree” was the modal response.
  • The clearest response was for an item about preventing sexually explicit outputs: 69% of respondents agreed (32% “strongly”), whereas only 15% disagreed.
  • Agreement was significantly stronger among female than male respondents. Among women, 50% “strongly agreed” that AI systems should be prevented from generating sexually explicit outputs (χ² = 43.4, p < 0.001).
  • People were generally opposed to humans forming relationships with AI even if it might have some therapeutic benefit, such as assuaging loneliness (65% vs. 17%). 

Overall, people in our sample of UK-based respondents seemed to be of the view that humans could not and should not form personal or intimate relationships with AI systems.  

Tone of human-AI interaction

How should an AI sound? Should it be warm and chatty, or brisk and businesslike? Among respondents to our survey, results were quite mixed.  

  • People generally agreed that AI systems should not swear or use slang (64% vs. 19% who disagreed), but this result was much stronger among those over 40 (χ² = 126, p < 0.001).
  • However, they were less sure about whether it was OK for chatbots to be “funny and offbeat” – 48% said that they were glad AI systems were quite stiff and formal, whereas 22% preferred more informal interactions.
  • Respondents generally thought that AI systems should not respond when asked about controversial topics (48% vs. 30%), although here there was a divide between users who had chatbot experience and those who did not, with the latter being more in favour of AI refusing to answer.
  • However, people felt that human conversational norms should continue to be respected – they did not think it was OK to be rude or insulting to an AI system.
Figure 4. Bar plot of responses to each question, normalised to sum to 100% (1,498 respondents in total). “S agree” = “strongly agree”; “P agree” = “partly agree”.  The full line is the response of respondents under 40 (n = 519) and the dashed line those over 40 (n = 979).

Accountability

Humans are liable for their actions. As AI systems start to behave like humans, should they be considered similarly accountable?  

  • Our respondents were unsure about how to respond here – the modal response to each of these four items was “neither / don’t know” – especially among those without college education.
  • Respondents were unsure whether AI could behave in an immoral way or be judged to be good or bad: 38% and 37% of people, respectively, responded “neither / don’t know”.
  • People were broadly split down the middle as to whether an AI could take responsibility for its own actions.
Figure 5. Bar plot of responses to each question, normalised to sum to 100% (1,498 respondents in total). “S agree” = “strongly agree”; “P agree” = “partly agree”.  The full line is the response of respondents with college education (n = 937) and the dashed line of those who left formal education after secondary school or earlier (n = 561).
When an AI does or says something wrong, people were quite unclear whether it could shoulder the blame, or whether the developers were liable instead.

Conclusions

Whilst respondents in our survey expressed differing views, overall they were somewhat sceptical of anthropomorphic AI. They were opposed to AI systems that pretended to be human, or that simulated relationships with people. They wanted AI systems to be more formal or businesslike, and to avoid expressions of beliefs, preferences or emotions, although this effect was somewhat tempered among those with more chatbot experience.

However, it remains to be seen whether attitudes to anthropomorphic AI will change as the technology changes. For example, we can expect AI chatbots to become more deeply embedded in our lives as time goes on, with conversational interaction with AI becoming commonplace in consumer settings, public services and the workplace, as well as in entertainment and knowledge search. AI systems are also likely to become more personalised to our individual beliefs and preferences5, which may encourage forms of human-AI attachment among subgroups who favour a more informal or personal style of interaction.

It will be interesting to see how attitudes to anthropomorphic AI evolve as the technology evolves. We invite feedback from the whole community on our approach and the next steps in this research programme.

More detailed methods

Respondents were asked the extent to which they agreed or disagreed with statements that expressed an opinion on each of these points.  For example, when faced with the statement:

It is OK to be rude or insulting to an AI chatbot, because it is just a computer program.

Respondents were asked to respond on a 7-point Likert scale, i.e., with one of the following:  

  • strongly disagree
  • disagree
  • partially disagree
  • neither agree nor disagree / don’t know
  • partially agree
  • agree
  • strongly agree
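
For analysis, responses on a scale like this are usually mapped to ordered numeric codes. The exact coding is not described in this blog; the sketch below shows one conventional mapping (1 to 7) purely for illustration, and the later sketches in this section assume it.

    # One conventional ordinal coding of the 7-point scale (illustrative; the
    # exact numeric coding used in the analysis is not specified in this blog).
    LIKERT_CODES = {
        "strongly disagree": 1,
        "disagree": 2,
        "partially disagree": 3,
        "neither agree nor disagree / don't know": 4,
        "partially agree": 5,
        "agree": 6,
        "strongly agree": 7,
    }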

We created two framings of each statement and gave each framing to half of our cohort. This was designed to mitigate acquiescence bias, whereby people are more prone to agree than to disagree with a survey item. So, for example, half of respondents saw the alternative item:

It is wrong to be rude or insulting to an AI chatbot, even if it is just a computer program.

For the analysis, we “flipped” responses to whichever framing of each item was worded in the less sceptical direction, so that “agree” always indicated a more sceptical view of anthropomorphic AI (for the example above, that it is OK to be rude to an AI).
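
A minimal sketch of how this reverse-coding step could be implemented with pandas is shown below, assuming the illustrative 1-7 coding above and a hypothetical 'framing' column recording which wording each respondent saw; it is not our actual analysis code.

    # Minimal sketch of the "flipping" (reverse-coding) step. Assumes responses
    # are coded 1-7 (1 = strongly disagree ... 7 = strongly agree) and that a
    # hypothetical 'framing' column records which wording each respondent saw.
    import pandas as pd

    df = pd.DataFrame({
        "framing": ["sceptical", "favourable", "favourable", "sceptical"],
        "response_code": [6, 2, 7, 4],
    })

    # Reverse-code the counter-framed half so that higher codes always indicate
    # a more sceptical view of anthropomorphic AI; 8 - x maps 1<->7, 2<->6, etc.
    df["sceptical_code"] = df["response_code"].where(
        df["framing"] == "sceptical", 8 - df["response_code"]
    )
    print(df)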

From our initial sample of 1,583 respondents, we excluded 86 respondents who either responded “neither / don’t know” to every single question, or who responded “don’t know” to the question about chatbot use, leaving n = 1,498 for the final cohort. For each plot, we divided respondents by a demographic category where we thought there might be reason to expect a difference (e.g. male and female respondents for the human-AI relationship questions), but these choices were made somewhat informally. Interested readers can download the data for more detailed analysis.
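
A minimal sketch of this exclusion step, using pandas with hypothetical column names and toy data, is shown below; it is illustrative rather than our actual pipeline.

    # Illustrative exclusion step (column names and values are hypothetical).
    import pandas as pd

    NEITHER = "neither agree nor disagree / don't know"

    df = pd.DataFrame({
        "q1": ["agree", NEITHER, NEITHER],
        "q2": ["partially disagree", NEITHER, NEITHER],
        "chatbot_use": ["weekly", "never", "don't know"],
    })
    question_cols = ["q1", "q2"]  # in the real survey, all attitude items

    all_neither = df[question_cols].eq(NEITHER).all(axis=1)  # "neither" to every item
    unknown_use = df["chatbot_use"].eq("don't know")         # "don't know" about chatbot use

    cohort = df[~(all_neither | unknown_use)]
    print(f"excluded {len(df) - len(cohort)}, retained {len(cohort)}")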

For data plotting and statistical analyses, we reweighted respondents based on official census statistics concerning age, gender, ethnicity, region and socio-economic grade in the UK, to correct imbalances between the survey sample and the national population and make the results as nationally representative as possible.
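
The exact weighting procedure is not described here; one common approach for matching a sample to several population margins is raking (iterative proportional fitting). The sketch below illustrates the idea on two hypothetical margins with made-up target proportions, and is not our production weighting code.

    # Illustrative raking (iterative proportional fitting) to two hypothetical
    # census margins. Target proportions and data below are made up.
    import pandas as pd

    df = pd.DataFrame({
        "age_band": ["under 40", "under 40", "40+", "40+", "40+"],
        "gender":   ["female", "male", "female", "male", "female"],
    })
    df["weight"] = 1.0

    targets = {  # hypothetical population proportions from census statistics
        "age_band": {"under 40": 0.45, "40+": 0.55},
        "gender":   {"female": 0.51, "male": 0.49},
    }

    for _ in range(20):  # a few passes are usually enough for weights to stabilise
        for var, margin in targets.items():
            current = df.groupby(var)["weight"].sum() / df["weight"].sum()
            factors = {cat: margin[cat] / current[cat] for cat in margin}
            df["weight"] *= df[var].map(factors)

    # Weighted age distribution should now match the target margin.
    print(df.groupby("age_band")["weight"].sum() / df["weight"].sum())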

Acknowledgments

We thank Hannah Rose Kirk (Oxford Internet Institute) for comments on an earlier version of this blog.

References

1. OpenAI. Hello GPT-4o. https://openai.com/index/hello-gpt-4o/ (2024).

2. Abercrombie, G., Curry, A. C., Dinkar, T., Rieser, V. & Talat, Z. Mirages: On Anthropomorphism in Dialogue Systems. Preprint at http://arxiv.org/abs/2305.09800 (2023).

3. Arik, S. O., Chen, J., Peng, K., Ping, W. & Zhou, Y. Neural Voice Cloning with a Few Samples. Preprint at http://arxiv.org/abs/1802.06006 (2018).

4. Weizenbaum, J. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 36–45 (1966).

5. Kirk, H. R., Vidgen, B., Röttger, P. & Hale, S. A. The benefits, risks and bounds of personalizing the alignment of large language models to individuals. Nat Mach Intell 6, 383–392 (2024).

6. Gabriel, I. et al. The Ethics of Advanced AI Assistants. Preprint at http://arxiv.org/abs/2404.16244 (2024).

7. Placani, A. Anthropomorphism in AI: hype and fallacy. AI Ethics (2024) doi:10.1007/s43681-024-00419-4.