== Limitations ==
Traditional chatbots often lacked understanding of user requests, leading to clunky, repetitive conversations. Their pre-programmed responses would frequently fail to satisfy unexpected user queries, causing frustration. These chatbots were particularly unhelpful for users who lacked a clear understanding of their problem or of the service they needed.<ref>{{Cite news |last=Navaluri |first=Vijay |date=2024-04-09 |title=Chatbots are dead: How generative AI & automation is transforming the way we interact with technology |url=https://economictimes.indiatimes.com/small-biz/security-tech/technology/chatbots-are-dead-how-generative-ai-automation-is-transforming-the-way-we-interact-with-technology/articleshow/109153190.cms |access-date=2025-05-25 |work=The Economic Times |issn=0013-0389}}</ref>

Chatbots based on [[Large language model|large language models]] are much more versatile, but they require a large amount of conversational data to train. These models generate new responses word by word based on user input and are usually trained on a large dataset of natural-language phrases (a simplified sketch of this word-by-word generation appears below).<ref name="Caldarini-20223" /> They sometimes provide plausible-sounding but incorrect or nonsensical answers, referred to as "[[Hallucination (artificial intelligence)|hallucinations]]"; for example, they can invent names, dates, or historical events.<ref>{{Cite journal |last=Stover |first=Dawn |date=2023-09-03 |title=Will AI make us crazy? |url=https://www.tandfonline.com/doi/full/10.1080/00963402.2023.2245247 |journal=Bulletin of the Atomic Scientists |language=en |volume=79 |issue=5 |pages=299–303 |bibcode=2023BuAtS..79e.299S |doi=10.1080/00963402.2023.2245247 |issn=0096-3402 |url-access=subscription}}</ref> When humans use and apply chatbot content contaminated with hallucinations, the result has been termed "botshit".<ref>{{Cite journal |last1=Hannigan |first1=Timothy R. |last2=McCarthy |first2=Ian P. |last3=Spicer |first3=André |date=2024-03-20 |title=Beware of botshit: How to manage the epistemic risks of generative chatbots |url=https://www.sciencedirect.com/science/article/pii/S0007681324000272 |journal=Business Horizons |volume=67 |issue=5 |pages=471–486 |doi=10.1016/j.bushor.2024.03.001 |issn=0007-6813 |url-access=subscription}}</ref>

Given the increasing adoption of chatbots for generating content, there are concerns that the technology will significantly reduce the cost of producing [[misinformation]].<ref>{{Cite news |date=2023-01-06 |title=Transcript: Ezra Klein Interviews Gary Marcus |url=https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-klein-interviews-gary-marcus.html |access-date=2024-04-21 |work=The New York Times |language=en-US |issn=0362-4331}}</ref>
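The following is a minimal, illustrative sketch of autoregressive (word-by-word) generation. It is not the implementation of any particular chatbot: the toy bigram table <code>NEXT_WORD</code> and the <code>generate</code> function are hypothetical stand-ins, and a real large language model conditions each step on the entire preceding text using learned probabilities over subword tokens rather than a small hand-written table.

<syntaxhighlight lang="python">
import random

# Toy "language model": maps the previous word to candidate next words with
# weights. A real LLM learns such conditional distributions from a large
# corpus and conditions on the whole context, not just the last word.
NEXT_WORD = {
    "<start>": [("hello", 0.6), ("hi", 0.4)],
    "hello":   [("there", 0.5), ("world", 0.3), ("<end>", 0.2)],
    "hi":      [("there", 0.7), ("<end>", 0.3)],
    "there":   [("<end>", 1.0)],
    "world":   [("<end>", 1.0)],
}

def generate(max_words: int = 10) -> str:
    """Sample a response one word at a time (autoregressive decoding)."""
    context, output = "<start>", []
    for _ in range(max_words):
        words, weights = zip(*NEXT_WORD[context])
        nxt = random.choices(words, weights=weights, k=1)[0]
        if nxt == "<end>":          # model chose to stop
            break
        output.append(nxt)
        context = nxt               # condition the next step on this word
    return " ".join(output)

print(generate())  # e.g. "hello there"
</syntaxhighlight>

Because each word is sampled from a probability distribution rather than retrieved from a script, the same prompt can yield different, fluent-sounding outputs; the same mechanism also explains how a model can produce plausible but false statements ("hallucinations") when its learned distribution favors them.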