For the bot-creation software, see ChatBot. For bots on Internet Relay Chat, see IRC bot.
A chatbot (originally chatterbot)[1] is a software application or web interface that is designed to mimic human conversation through text or voice interactions.[2][3][4] Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
Because chatbots work by predicting responses rather than understanding the meaning of what they produce, they can generate coherent-sounding but inaccurate or fabricated content, referred to as 'hallucinations'. When humans use and apply chatbot content contaminated with hallucinations, the result has been termed 'botshit'.[10] Given the increasing adoption and use of chatbots for generating content, there are concerns that this technology will significantly reduce the cost for humans to generate, spread and consume botshit.[11]
Background
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[12] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge to the extent that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise:
In artificial intelligence, machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained, its magic crumbles away; it stands revealed as a mere collection of procedures. The observer says to himself "I could have written that". With that thought, he moves the program in question from the shelf marked "intelligent", to that reserved for curios. The object of this paper is to cause just such a re-evaluation of the program about to be "explained". Few programs ever needed it more.[13]
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of the corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[13] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
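For illustration, the core of this keyword-and-template approach can be sketched in a few lines of Python. The rules below are invented and far simpler than Weizenbaum's original script, which used ranked keywords and more elaborate reassembly rules; the sketch only shows the general mechanism of matching a clue word and returning a pre-prepared response.

```python
import random
import re

# Invented, minimal rules illustrating ELIZA-style keyword matching;
# the real ELIZA used ranked keywords and richer reassembly rules.
RULES = [
    (r"mother|father|sister|brother|family", ["TELL ME MORE ABOUT YOUR FAMILY."]),
    (r"I am (.+)", ["WHY DO YOU SAY YOU ARE {0}?", "HOW LONG HAVE YOU BEEN {0}?"]),
    (r"I feel (.+)", ["WHY DO YOU FEEL {0}?"]),
]
DEFAULTS = ["PLEASE GO ON.", "I SEE.", "WHAT DOES THAT SUGGEST TO YOU?"]

def respond(user_input: str) -> str:
    """Return a canned response for the first rule whose keyword pattern matches."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            template = random.choice(templates)
            # Reassemble any captured fragment of the input into the response.
            return template.format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I am worried about my mother"))  # TELL ME MORE ABOUT YOUR FAMILY.
```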
Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods".
Development
Among the most notable early chatbots are ELIZA (1966) and PARRY (1972).[14][15][16][17] More recent notable programs include A.L.I.C.E., Jabberwacky and D.U.D.E (Agence Nationale de la Recherche and CNRS 2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include other functional features, such as games and web searching abilities. In 1984, a book called The Policeman's Beard is Half Constructed was published, allegedly written by the chatbot Racter (though the program as released would not have been capable of doing so).[18]
One pertinent field of AI research is natural-language processing. Usually, weak AI fields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses a markup language called AIML,[3] which is specific to its function as a conversational agent, and has since been adopted by various other developers of so-called Alicebots. Nevertheless, A.L.I.C.E. is still based purely on pattern-matching techniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities.
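As an illustration of the AIML approach, a category pairs an input pattern (with '*' wildcards) with a response template. The toy example below, written in Python around an invented two-category script, mimics only the most basic matching behaviour; real AIML interpreters additionally support recursion (srai), conversational context and many other tags.

```python
import re
import xml.etree.ElementTree as ET

# Invented, minimal AIML-like script for illustration only.
AIML = """
<aiml>
  <category>
    <pattern>WHAT IS YOUR NAME</pattern>
    <template>My name is Alice.</template>
  </category>
  <category>
    <pattern>I LIKE *</pattern>
    <template>Why do you like <star/>?</template>
  </category>
</aiml>
"""

def build_rules(aiml_text):
    """Compile each category's pattern into a regex paired with its template text."""
    rules = []
    for cat in ET.fromstring(aiml_text).iter("category"):
        pattern = cat.findtext("pattern").strip()
        template = cat.find("template")
        # Render the template, turning <star/> into a wildcard placeholder.
        text = (template.text or "") + "".join(
            "{0}" + (child.tail or "") if child.tag == "star" else (child.tail or "")
            for child in template
        )
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        rules.append((re.compile(regex, re.IGNORECASE), text.strip()))
    return rules

def reply(rules, user_input):
    for regex, template in rules:
        match = regex.match(user_input.strip())
        if match:
            return template.format(*match.groups())
    return "I do not understand."

rules = build_rules(AIML)
print(reply(rules, "I like chess"))  # Why do you like chess?
```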
Jabberwacky learns new responses and context based on real-time user interactions, rather than being driven by a static database. Some more recent chatbots also combine real-time learning with evolutionary algorithms that optimize their ability to communicate based on each conversation held. Still, there is currently no general-purpose conversational artificial intelligence, and some software developers focus on the practical aspect, information retrieval.
Chatbot competitions focus on the Turing test or more specific goals. Two such annual contests are the Loebner Prize and The Chatterbox Challenge (the latter has been offline since 2015; however, materials can still be found in web archives).[22]
Chatbots may use artificial neural networks as a language model. For example, generative pre-trained transformers (GPT), which use the transformer architecture, have become a common way to build sophisticated chatbots. The "pre-training" in the name refers to the initial training process on a large text corpus, which provides a solid foundation for the model to perform well on downstream tasks with limited amounts of task-specific data. An example of a GPT chatbot is ChatGPT.[23] Despite criticism of its accuracy and its tendency to "hallucinate", that is, to confidently output false information and even cite non-existent sources, ChatGPT has gained attention for its detailed responses and historical knowledge. Another example is BioGPT, developed by Microsoft, which focuses on answering biomedical questions.[24][25] In November 2023, Amazon announced a new chatbot, called Q, for people to use at work.[26]
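As an illustrative sketch (not a description of ChatGPT's internals), the snippet below loads a small, publicly available pre-trained transformer (GPT-2) through the Hugging Face transformers library and generates a reply one token at a time; production chatbots add instruction tuning, safety filtering and dialogue management on top of this basic mechanism.

```python
# Sketch only: generate a continuation with a small pre-trained transformer (GPT-2).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "User: What is a chatbot?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# The model predicts one token at a time; generate() repeats this prediction
# until max_new_tokens is reached or an end-of-sequence token is produced.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,        # sample from the predicted distribution
    top_p=0.9,             # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```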
Many companies' chatbots run on messaging apps or simply via SMS. They are used for B2C customer service, sales and marketing.[30]
In 2016, Facebook Messenger allowed developers to place chatbots on their platform. There were 30,000 bots created for Messenger in the first six months, rising to 100,000 by September 2017.[31]
Since September 2017, this has also been available as part of a pilot program on WhatsApp. The airlines KLM and Aeroméxico both announced their participation in the testing;[32][33][34][35] both airlines had previously launched customer services on the Facebook Messenger platform.
The bots usually appear as one of the user's contacts, but can sometimes act as participants in a group chat.
Many banks, insurers, media companies, e-commerce companies, airlines, hotel chains, retailers, health care providers, government entities, and restaurant chains have used chatbots to answer simple questions, increase customer engagement,[36] for promotion, and to offer additional ways to order from them.[37] Chatbots are also used in market research to collect short survey responses.[38]
A 2017 study showed 4% of companies used chatbots.[39] According to a 2016 study, 80% of businesses said they intended to have one by 2020.[40]
As part of company apps and websites
Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines which debuted in 2008[41] or Expedia's virtual customer service agent which launched in 2011.[41][42] The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.[43][44]
Chatbot sequences
Chatbot sequences are used by marketers to script sequences of messages, very similar to an autoresponder sequence. Such sequences can be triggered by user opt-in or the use of keywords within user interactions. After a trigger occurs, a sequence of messages is delivered until the next anticipated user response. Each user response is used in the decision tree to help the chatbot navigate the response sequences and deliver the correct response message.
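A hypothetical sketch of such a keyword-triggered sequence, modelled as a small decision tree, is shown below; the node names, trigger keywords and messages are invented for illustration and do not correspond to any particular marketing platform.

```python
# Invented decision tree for a marketing message sequence (illustration only).
SEQUENCE = {
    "start": {
        "message": "Hi! Are you interested in our newsletter or a product demo?",
        "branches": {"newsletter": "ask_email", "demo": "ask_company"},
    },
    "ask_email": {
        "message": "Great! What email address should we send the newsletter to?",
        "branches": {},
    },
    "ask_company": {
        "message": "Sure! Which company are you with?",
        "branches": {},
    },
}

def next_node(current: str, user_reply: str) -> str:
    """Pick the next node whose trigger keyword appears in the user's reply."""
    for keyword, target in SEQUENCE[current]["branches"].items():
        if keyword in user_reply.lower():
            return target
    return current  # no keyword matched: repeat the current step

node = "start"
print(SEQUENCE[node]["message"])
node = next_node(node, "I'd love a product demo")
print(SEQUENCE[node]["message"])  # asks which company you are with
```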
Company internal platforms
Other companies explore ways they can use chatbots internally, for example for customer support, human resources, or even in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to automate certain simple yet time-consuming processes when requesting sick leave.[45] Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën are now using automated online assistants instead of call centres with humans to provide a first point of contact. A SaaS chatbot business ecosystem has been steadily growing since the F8 Conference at which Facebook's Mark Zuckerberg announced that Messenger would allow chatbots into the app.[46] In large organizations, such as hospitals and aviation companies, IT architects are designing reference architectures for intelligent chatbots that are used to unlock and share knowledge and experience within the organization more efficiently, and to significantly reduce errors in answers from expert service desks.[47] These intelligent chatbots make use of many kinds of artificial intelligence, such as image moderation, natural-language understanding (NLU), natural-language generation (NLG), machine learning and deep learning.
Customer service
Chatbots have great potential to serve as an alternate source for customer service.[48] Many high-tech banking organizations are looking to integrate automated AI-based solutions such as chatbots into their customer service in order to provide faster and cheaper assistance to their clients who are becoming increasingly comfortable with technology. In particular, chatbots can efficiently conduct a dialogue, usually replacing other communication tools such as email, phone, or SMS. In banking, their major application is related to quick customer service answering common requests, as well as transactional support.
Deep learning techniques can be incorporated into chatbot applications to allow them to map conversations between users and customer service agents, especially on social media.[49] Research has shown that methods incorporating deep learning can learn writing styles from one brand and transfer them to another, promoting the brand's image on social media platforms.[49] Chatbots can create new ways for brands and users to interact, which can help improve the brand's performance and allow users to gain "social, information, and economic benefits".[49]
Several studies report significant reductions in the cost of customer service, expected to lead to billions of dollars of economic savings over the next ten years.[50] In 2019, Gartner predicted that by 2021, 15% of all customer service interactions globally would be handled completely by AI.[51] A 2019 study by Juniper Research estimated that retail sales resulting from chatbot-based interactions would reach $112 billion by 2023.[52]
Since 2016, when Facebook allowed businesses to deliver automated customer support, e-commerce guidance, content, and interactive experiences through chatbots, a large variety of chatbots has been developed for the Facebook Messenger platform.[53]
In 2016, Russia-based Tochka Bank launched the world's first Facebook bot for a range of financial services, including the possibility of making payments.[54]
In July 2016, Barclays Africa also launched a Facebook chatbot, making it the first bank to do so in Africa.[55]
Société Générale, France's third-largest bank by total assets,[56] launched its chatbot SoBot in March 2018. While 80% of SoBot's users expressed their satisfaction after having tested it, Société Générale deputy director Bertrand Cozzarolo stated that it would never replace the expertise provided by a human advisor.[57]
The advantages of using chatbots for customer interactions in banking include cost reduction, financial advice, and 24/7 support.[58][59]
Chatbots are also appearing in the healthcare industry.[60][61] A study suggested that physicians in the United States believed that chatbots would be most beneficial for scheduling doctor appointments, locating health clinics, or providing medication information.[62]
The GPT chatbot ChatGPT is able to answer user queries related to health promotion and disease prevention, such as screening and vaccination.[63] WhatsApp has teamed up with the World Health Organization (WHO) to make a chatbot service that answers users' questions on COVID-19.[64]
In 2020, the Indian government launched a chatbot called MyGov Corona Helpdesk,[65] which worked through WhatsApp and helped people access information about the coronavirus (COVID-19) pandemic.[66][67]
Certain patient groups are still reluctant to use chatbots. A mixed-methods study showed that people are still hesitant to use chatbots for their healthcare due to poor understanding of the technological complexity, the lack of empathy, and concerns about cyber-security.[68] The analysis showed that while 6% had heard of a health chatbot and 3% had experience of using it, 67% perceived themselves as likely to use one within 12 months. The majority of participants would use a health chatbot for seeking general health information (78%), booking a medical appointment (78%), and looking for local health services (80%). However, a health chatbot was perceived as less suitable for seeking results of medical tests and seeking specialist advice such as sexual health.
The analysis of attitudinal variables showed that most participants reported their preference for discussing their health with doctors (73%) and having access to reliable and accurate health information (93%). While 80% were curious about new technologies that could improve their health, 66% reported only seeking a doctor when experiencing a health problem and 65% thought that a chatbot was a good idea. 30% reported dislike about talking to computers, 41% felt it would be strange to discuss health matters with a chatbot and about half were unsure if they could trust the advice given by a chatbot. Therefore, perceived trustworthiness, individual attitudes towards bots, and dislike for talking to computers are the main barriers to health chatbots.[63]
In New Zealand, the chatbot SAM – short for Semantic Analysis Machine[69] (made by Nick Gerritsen of Touchtech[70]) – has been developed. It is designed to share its political thoughts on topics such as climate change, healthcare and education. It talks to people through Facebook Messenger.[71][72][73][74]
In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the Danish parliamentary election,[75] and was built by the artist collective Computer Lars.[76] Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate.[77] This chatbot engaged in critical discussions on politics with users from around the world.[78]
In India, the Maharashtra state government has launched a chatbot for its Aaple Sarkar platform,[79] which provides conversational access to information regarding the public services it manages.[80][81]
Government
Chatbots have been used at different levels of government, including local, national and regional contexts. They are used to provide services relating to citizenship and immigration, court administration, financial aid, and migrants' rights inquiries. For example, EMMA answers more than 500,000 inquiries per month regarding citizenship and immigration services in the US.[82]
Toys
Chatbots have also been incorporated into devices not primarily meant for computing, such as toys.[83]
Hello Barbie is an Internet-connected version of the doll that uses a chatbot provided by the company ToyTalk,[84] which previously used the chatbot for a range of smartphone-based characters for children.[85] These characters' behaviors are constrained by a set of rules that in effect emulate a particular character and produce a storyline.[86]
The My Friend Cayla doll was marketed as a line of 18-inch (46 cm) dolls which use speech recognition technology in conjunction with an Android or iOS mobile app to recognize the child's speech and hold a conversation. Like the Hello Barbie doll, it attracted controversy due to vulnerabilities in the doll's Bluetooth stack and its use of data collected from the child's speech.
IBM's Watson computer has been used as the basis for chatbot-based educational toys for companies such as CogniToys,[83] intended to interact with children for educational purposes.[87]
Malicious use
Malicious chatbots are frequently used to fill chat rooms with spam and advertisements by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They were commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.[88]
Tay, an AI chatbot designed to learn from previous interactions, caused major controversy due to it being targeted by internet trolls on Twitter. Soon after its launch, the bot was exploited, and with its "repeat after me" capability, it started releasing racist, sexist, and otherwise controversial responses to Twitter users.[89] This suggests that although the bot learned effectively from experience, adequate protections were not put in place to prevent misuse.[90]
If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message would be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during an election. With enough chatbots, it might even be possible to achieve artificial social proof.[91][92]
Data security
Data security is one of the major concerns of chatbot technologies. Security threats and system vulnerabilities are weaknesses that are often exploited by malicious users. Storage of user data and past communications, which is highly valuable for the training and development of chatbots, can also give rise to security threats.[93] Chatbots operating on third-party networks may be subject to various security issues if owners of the third-party applications have policies regarding user data that differ from those of the chatbot.[93] Security threats can be reduced or prevented by incorporating protective mechanisms: user authentication, end-to-end encryption of chats, and self-destructing messages are some effective ways to resist potential security threats.[93]
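As one illustration of such a mechanism, time-limited ("self-destructing") messages can be approximated with authenticated encryption and an expiry check. The sketch below uses the Fernet recipe from the third-party Python cryptography package purely as an example; it is not a description of how any particular chatbot platform implements these protections.

```python
# Illustration only: encrypt a chat message and refuse to decrypt it after a
# time-to-live has elapsed, approximating a "self-destructing" message.
import time
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()      # in practice, keys are managed per user or session
fernet = Fernet(key)

token = fernet.encrypt(b"My account number is 12345678")

# Decryption succeeds while the token is younger than the TTL (in seconds)...
print(fernet.decrypt(token, ttl=60))

# ...and fails once the TTL has passed.
time.sleep(2)
try:
    fernet.decrypt(token, ttl=1)
except InvalidToken:
    print("Message expired and can no longer be read.")
```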
Limitations of chatbots
The creation and implementation of chatbots is still a developing area, heavily related to artificial intelligence and machine learning, so the provided solutions, while possessing obvious advantages, have some important limitations in terms of functionalities and use cases. However, this is changing over time.
As the input/output database is fixed and limited, chatbots can fail when dealing with a query that is not stored in their database.[59]
A chatbot's efficiency depends heavily on language processing and is limited by irregularities such as accents and mistakes.
Chatbots are unable to deal with multiple questions at the same time and so conversation opportunities are limited.[94]
Chatbots require a large amount of conversational data to train. Generative models, which are based on deep learning algorithms to generate new responses word by word based on user input, are usually trained on a large dataset of natural-language phrases.[3]
Chatbots have difficulty managing non-linear conversations that must go back and forth on a topic with a user.[95]
As usually happens with technology-led changes in existing services, some consumers, more often than not from older generations, are uncomfortable with chatbots due to their limited understanding, making it obvious that their requests are being dealt with by machines.[94]
Chatbots sometimes provide plausible-sounding but incorrect or nonsensical answers. They can make up names, dates, and historical events, and even get simple math problems wrong.[96]
Chatbots are increasingly present in businesses and are often used to automate tasks that do not require skill-based talents. With customer service taking place via messaging apps as well as phone calls, there are growing numbers of use-cases where chatbot deployment gives organizations a clear return on investment. Call center workers may be particularly at risk from AI-driven chatbots.[100]
Chatbot developers create, debug, and maintain applications that automate customer services or other communication processes. Their duties include reviewing and simplifying code when needed. They may also help companies implement bots in their operations.
A study by Forrester (June 2017) predicted that 25% of all jobs would be impacted by AI technologies by 2019.[101]
Prompt engineering, the task of designing and refining prompts (inputs) that lead to desired AI-generated responses, has gained significant demand and popularity in recent years with the advent of sophisticated models, notably OpenAI's GPT series.
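For illustration, prompt refinement often amounts to adding explicit instructions, role descriptions and format constraints to the model input. The hypothetical sketch below compares a vague prompt with a refined one using a small local text-generation model (GPT-2 via the Hugging Face transformers library); the prompts are invented, and larger instruction-tuned models follow such instructions far more reliably than GPT-2 does.

```python
# Illustration of prompt refinement with a local text-generation pipeline (GPT-2).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

vague_prompt = "Write about chatbots."
refined_prompt = (
    "You are a customer-support assistant for an airline.\n"
    "In two short sentences, explain to a passenger what the support chatbot "
    "can help with, in a friendly tone.\nAnswer:"
)

# The same model receives both prompts; only the instructions differ.
for prompt in (vague_prompt, refined_prompt):
    result = generator(prompt, max_new_tokens=50, do_sample=True, top_p=0.9)
    print(result[0]["generated_text"], "\n---")
```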
^ Beaver, Laurie (July 2016). "The Chatbots Explainer". Business Insider. BI Intelligence. Archived from the original on 3 May 2019. Retrieved 4 November 2019.
^ Senadheera, Sajani; et al. (2024). "Understanding Chatbot Adoption in Local Governments: A Review and Framework". Journal of Urban Technology. doi:10.1080/10630732.2023.2297665.
^ Epstein, Robert (October 2007). "From Russia With Love: How I got fooled (and somewhat humiliated) by a computer" (PDF). Scientific American: Mind. pp. 16–17. Archived (PDF) from the original on 19 October 2010. Retrieved 9 December 2007.
Psychologist Robert Epstein reports how he was initially fooled by a chatterbot posing as an attractive girl in a personal ad he answered on a dating website. In the ad, the girl portrayed herself as being in Southern California and then soon revealed, in poor English, that she was actually in Russia. He became suspicious after a couple of months of email exchanges, sent her an email test of gibberish, and she still replied in general terms. The dating website is not named.
^ Bird, Jordan J.; Ekart, Aniko; Faria, Diego R. (June 2018). "Learning from Interaction: An Intelligent Networked-Based Human-Bot and Bot-Bot Chatbot System". Advances in Computational Intelligence Systems. Advances in Intelligent Systems and Computing. Vol. 840 (1st ed.). Nottingham, UK: Springer. pp. 179–190. doi:10.1007/978-3-319-97982-3_15. ISBN 978-3-319-97982-3. S2CID 52069140.
Gertner, Jon. (2023) "Wikipedia's Moment of Truth: Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?" New York Times Magazine (18 July 2023) online
Vincent, James, "Horny Robot Baby Voice: James Vincent on AI chatbots", London Review of Books, vol. 46, no. 19 (10 October 2024), pp. 29–32. "[AI chatbot] programs are made possible by new technologies but rely on the timeless human tendency to anthropomorphise." (p. 29.)