==Progress==
Question answering systems have been extended in recent years to encompass additional domains of knowledge.<ref>{{Cite journal | doi=10.1162/089120105774321055|title = Book Review ''New'' Directions in Question Answering Mark T. Maybury (editor) (MITRE Corporation) Menlo Park, CA: AAAI Press and Cambridge, MA: The MIT Press, 2004, xi+336 pp; paperbound, ISBN 0-262-63304-3, $40.00, £25.95| journal=Computational Linguistics| volume=31| issue=3| pages=413–417|year = 2005|last1 = Paşca|first1 = Marius|s2cid = 12705839|doi-access=free}}</ref> For example, systems have been developed to automatically answer temporal and geospatial questions, questions of definition and terminology, biographical questions, multilingual questions, and questions about the content of audio, images,<ref name="visual question answering"/> and video.<ref>{{cite arXiv | eprint=1511.04670 | last1=Zhu | first1=Linchao | last2=Xu | first2=Zhongwen | last3=Yang | first3=Yi | last4=Hauptmann | first4=Alexander G. | title=Uncovering Temporal Context for Video Question and Answering | year=2015 | class=cs.CV }}</ref>

Current question answering research topics include:
* interactivity: asking users clarifying questions about their queries and refining answers in a dialogue<ref>Quarteroni, Silvia, and Suresh Manandhar. "[https://www.researchgate.net/publication/231992433_Designing_an_interactive_open-domain_question_answering_system Designing an interactive open-domain question answering system]." Natural Language Engineering 15.1 (2009): 73–95.</ref>
* answer reuse or caching<ref>Light, Marc, et al. "[https://www.aaai.org/Papers/Symposia/Spring/2003/SS-03-07/SS03-07-016.pdf Reuse in Question Answering: A Preliminary Study]." New Directions in Question Answering. 2003.</ref>
* [[semantic parsing]]<ref>Yih, Wen-tau, Xiaodong He, and Christopher Meek. "[https://www.aclweb.org/anthology/P14-2105 Semantic parsing for single-relation question answering]." Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2014.</ref>
* answer presentation: generating natural-language answer sentences rather than returning bare extracted phrases<ref>Perera, R., Nand, P. and Naeem, A. 2017. [https://link.springer.com/article/10.1007/s13748-017-0113-9 Utilizing typed dependency subtree patterns for answer sentence generation in question answering systems.]</ref>
* [[knowledge representation and reasoning|knowledge representation]] and semantic [[entailment (linguistics)|entailment]]<ref>de Salvo Braz, Rodrigo, et al. "[https://www.aaai.org/Papers/AAAI/2005/AAAI05-165.pdf An inference model for semantic entailment in natural language]." Machine Learning Challenges Workshop. Springer, Berlin, Heidelberg, 2005.</ref>
* social media analysis with question answering systems
* [[sentiment analysis]]<ref>{{cite web |url=http://totalgood.com/bitcrawl/ |title=BitCrawl by Hobson Lane |access-date=2012-05-29 |url-status=bot: unknown |archive-url=https://web.archive.org/web/20121027153311/http://totalgood.com/bitcrawl/ |archive-date=October 27, 2012 }}</ref>
* utilization of thematic roles<ref>Perera, R. and Perera, U. 2012. [http://rivinduperera.com/publications/qacd_coling2012.html Towards a thematic role based target identification model for question answering.] {{Webarchive|url=https://web.archive.org/web/20160304111643/http://rivinduperera.com/publications/qacd_coling2012.html |date=2016-03-04 }}</ref>
* [[image captioning]] for visual question answering<ref name="visual question answering">Anderson, Peter, et al. "[http://openaccess.thecvf.com/content_cvpr_2018/papers/Anderson_Bottom-Up_and_Top-Down_CVPR_2018_paper.pdf Bottom-up and top-down attention for image captioning and visual question answering]." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.</ref>
* [[Embodied agent|embodied]] question answering<ref>Das, Abhishek, et al. "[https://openaccess.thecvf.com/content_cvpr_2018/papers/Das_Embodied_Question_Answering_CVPR_2018_paper.pdf Embodied question answering]." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.</ref>

In 2011, [[Watson (computer)|Watson]], a question answering computer system developed by [[IBM]], competed in two exhibition matches of ''[[Jeopardy!]]'' against [[Brad Rutter]] and [[Ken Jennings]], winning by a significant margin.<ref>{{Cite news | url=https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?_r=0 | title=On 'Jeopardy!' Watson Win is All but Trivial| newspaper=The New York Times| date=2011-02-16| last1=Markoff| first1=John}}</ref>

[[Facebook Research]] made their [[DrQA]] system<ref>{{Cite web | url=https://research.fb.com/downloads/drqa/ | title=DrQA}}</ref> available under an [[open source license]]. This system uses [[Wikipedia]] as its knowledge source: it retrieves relevant articles and then extracts answers from them.<ref name=":2">{{cite arXiv| eprint=1704.00051| last1=Chen| first1=Danqi| title=Reading Wikipedia to Answer Open-Domain Questions| last2=Fisch| first2=Adam| last3=Weston| first3=Jason| last4=Bordes| first4=Antoine| class=cs.CL| year=2017}}</ref> The [[open source]] framework Haystack by [[deepset]] combines open-domain question answering with generative question answering and supports domain adaptation, i.e. fine-tuning the [[Language model|language models]] it builds on with domain-specific data for particular industry use cases.<ref>{{cite book |last1=Tunstall |first1=Lewis |title=Natural Language Processing with Transformers: Building Language Applications with Hugging Face |date=5 July 2022 |publisher=O'Reilly UK Ltd. |isbn=978-1098136796 |page=Chapter 7 |edition=2nd |url=https://www.oreilly.com/library/view/natural-language-processing/9781098136789/ }}</ref><ref>{{cite web |title=Haystack documentation |url=https://docs.haystack.deepset.ai/docs/intro |publisher=deepset |access-date=4 November 2022}}</ref>

[[Large language model|Large language models]] (LLMs) such as [[GPT-4]] and [[Gemini (language model)|Gemini]] power many successful QA systems, enabling more sophisticated understanding and generation of text. When coupled with [[Multimodal learning|multimodal]] QA systems, which can process and understand information from modalities such as text, images, and audio, LLMs significantly improve the capabilities of QA systems.
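The retrieve-then-read pattern of open-domain systems such as DrQA — fetch the passages most relevant to a question from a knowledge source, then extract an answer span from them — can be illustrated with a toy sketch. The passages, question, and scoring below are invented for illustration; real systems use a full TF-IDF or dense-vector retriever over millions of documents and a neural reading-comprehension model as the reader.

```python
# Toy sketch of the retriever-reader pattern used in open-domain QA.
# All data and scoring choices here are illustrative, not DrQA's actual code.
from collections import Counter
import math
import re


def tokenize(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())


def retrieve(question, passages):
    """Retriever: return the passage with the highest IDF-weighted
    term overlap with the question (a miniature TF-IDF retriever)."""
    n = len(passages)
    df = Counter()  # document frequency of each term
    for p in passages:
        df.update(set(tokenize(p)))
    q_terms = set(tokenize(question))

    def score(p):
        shared = q_terms & set(tokenize(p))
        # Rare shared terms (low document frequency) count for more.
        return sum(math.log(1 + n / df[t]) for t in shared)

    return max(passages, key=score)


def read(question, passage):
    """Reader: crude stand-in for a neural span extractor — return the
    sentence of the passage with the largest question-term overlap."""
    q_terms = set(tokenize(question))
    sentences = re.split(r"(?<=[.!?])\s+", passage)
    return max(sentences, key=lambda s: len(q_terms & set(tokenize(s))))


# Tiny invented "knowledge source" standing in for Wikipedia.
passages = [
    "IBM developed Watson, a question answering computer system "
    "that competed on a quiz show in 2011.",
    "The open-source DrQA system uses Wikipedia as its knowledge source.",
    "Haystack, by deepset, is an open-source framework for question answering.",
]
question = "Which company developed the Watson question answering system?"
answer = read(question, retrieve(question, passages))
print(answer)  # the sentence naming IBM and Watson
```

The two-stage design is the point: retrieval narrows millions of documents down to a few candidates cheaply, so the expensive reader only has to inspect a handful of passages.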