===Creative writing and computational humor=== Creative language generation by NLG has been hypothesized since the field's origins. A recent pioneer in the area is Philip Parker, who has developed an arsenal of algorithms capable of automatically generating textbooks, crossword puzzles, poems and books on topics ranging from bookbinding to cataracts.<ref>{{Cite web |date=2013-02-11 |title=How To Author Over 1 Million Books |url=https://www.huffpost.com/entry/philip-parker-books_n_2648820 |access-date=2022-06-03 |website=HuffPost |language=en}}</ref> The advent of large pretrained transformer-based language models such as GPT-3 has also enabled breakthroughs, with such models demonstrating recognizable ability on creative-writing tasks.<ref>{{Cite web |title=Exploring GPT-3: A New Breakthrough in Language Generation |url=https://www.kdnuggets.com/exploring-gpt-3-a-new-breakthrough-in-language-generation.html/ |access-date=2022-06-03 |website=KDnuggets |language=en-US}}</ref> A related area of NLG application is computational humor production. JAPE (Joke Analysis and Production Engine) is one of the earliest large automated humor production systems; it uses a hand-coded, template-based approach to create punning riddles for children. HAHAcronym creates humorous reinterpretations of a given acronym, as well as proposing new fitting acronyms from a set of keywords.<ref name=":1">{{Cite journal |last=Winters |first=Thomas |date=2021-04-30 |title=Computers Learning Humor Is No Joke |journal=Harvard Data Science Review |url=https://hdsr.mitpress.mit.edu/pub/wi9yky5c/release/3 |language=en |volume=3 |issue=2 |doi=10.1162/99608f92.f13a2337|s2cid=235589737 |doi-access=free }}</ref> Despite this progress, many challenges remain in producing automated creative and humorous content that rivals human output.
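The template-based approach described above can be illustrated with a minimal sketch: a fixed question template is instantiated with entries from a small hand-coded lexicon that pairs concepts with a punning answer. This is not JAPE's actual schema system or lexicon; the template, entry structure, and jokes here are invented for illustration only.

```python
# A minimal sketch of template-based riddle generation in the spirit of
# JAPE-style systems. Both the template and the lexicon entries below are
# hypothetical examples, not data from any real humor system.

TEMPLATE = "What do you get when you cross {a} with {b}? {pun}!"

# Hand-coded lexicon: each entry supplies two concepts and a punning
# blend associated with them.
LEXICON = [
    {"a": "a sheep", "b": "a kangaroo", "pun": "A woolly jumper"},
    {"a": "a snowman", "b": "a vampire", "pun": "Frostbite"},
]

def generate_riddles():
    """Fill the fixed template with every lexicon entry."""
    return [TEMPLATE.format(**entry) for entry in LEXICON]

for riddle in generate_riddles():
    print(riddle)
```

The output quality of such a system is bounded by the coverage and craft of the hand-coded lexicon, which is one reason the field moved toward learned models such as the BERT- and GPT-2-based generators discussed below.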
In an experiment on generating satirical headlines, outputs of the researchers' best BERT-based model were perceived as funny 9.4% of the time (compared with 38.4% for real headlines from [[The Onion]]), and a GPT-2 model fine-tuned on satirical headlines achieved 6.9%.<ref>{{Cite journal |last1=Horvitz |first1=Zachary |last2=Do |first2=Nam |last3=Littman |first3=Michael L. |date=July 2020 |title=Context-Driven Satirical News Generation |url=https://aclanthology.org/2020.figlang-1.5 |journal=Proceedings of the Second Workshop on Figurative Language Processing |location=Online |publisher=Association for Computational Linguistics |pages=40–50 |doi=10.18653/v1/2020.figlang-1.5|s2cid=220330989 |doi-access=free }}</ref> It has been pointed out that two main issues with humor-generation systems are the lack of annotated data sets and the lack of formal evaluation methods,<ref name=":1" /> problems that may apply to other forms of creative content generation as well. Some have argued that, relative to other applications, creative aspects of language production have received little attention within NLG. NLG researchers stand to benefit from insights into what constitutes creative language production, as well as from structural features of narrative that have the potential to improve NLG output even in data-to-text systems.<ref name=":0" />