== Limitations ==
Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.<ref>{{Cite news|url=https://www.bloomberg.com/news/articles/2016-11-10/why-machine-learning-models-often-fail-to-learn-quicktake-q-a|title=Why Machine Learning Models Often Fail to Learn: QuickTake Q&A|date=10 November 2016|work=Bloomberg.com|access-date=10 April 2017|archive-url=https://web.archive.org/web/20170320225010/https://www.bloomberg.com/news/articles/2016-11-10/why-machine-learning-models-often-fail-to-learn-quicktake-q-a|archive-date=20 March 2017}}</ref><ref>{{Cite news|url=https://hbr.org/2017/04/the-first-wave-of-corporate-ai-is-doomed-to-fail|title=The First Wave of Corporate AI Is Doomed to Fail|date=18 April 2017|work=Harvard Business Review|access-date=20 August 2018|archive-date=21 August 2018|archive-url=https://web.archive.org/web/20180821032004/https://hbr.org/2017/04/the-first-wave-of-corporate-ai-is-doomed-to-fail|url-status=live}}</ref><ref>{{Cite news|url=https://venturebeat.com/2016/09/17/why-the-a-i-euphoria-is-doomed-to-fail/|title=Why the A.I. euphoria is doomed to fail|date=18 September 2016|work=VentureBeat|access-date=20 August 2018|language=en-US|archive-date=19 August 2018|archive-url=https://web.archive.org/web/20180819124138/https://venturebeat.com/2016/09/17/why-the-a-i-euphoria-is-doomed-to-fail/|url-status=live}}</ref> Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.<ref>{{Cite web|url=https://www.kdnuggets.com/2018/07/why-machine-learning-project-fail.html|title=9 Reasons why your machine learning project will fail|website=www.kdnuggets.com|language=en-US|access-date=20 August 2018|archive-date=21 August 2018|archive-url=https://web.archive.org/web/20180821031802/https://www.kdnuggets.com/2018/07/why-machine-learning-project-fail.html|url-status=live}}</ref>

The "[[Black box|black box theory]]" poses yet another significant challenge.
Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted from the data.<ref name="Babuta-2018">{{Cite report |url=https://www.jstor.org/stable/resrep37375.8 |title=Transparency and Intelligibility |last1=Babuta |first1=Alexander |last2=Oswald |first2=Marion |date=2018 |publisher=Royal United Services Institute (RUSI) |pages=17–22 |last3=Rinik |first3=Christine |access-date=9 December 2023 |archive-date=9 December 2023 |archive-url=https://web.archive.org/web/20231209002929/https://www.jstor.org/stable/resrep37375.8 |url-status=live }}</ref> The House of Lords Select Committee claimed that an "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes.<ref name="Babuta-2018" /> In 2018, a self-driving car from [[Uber]] failed to detect a pedestrian, who was killed after a collision.<ref>{{Cite news|url=https://www.economist.com/the-economist-explains/2018/05/29/why-ubers-self-driving-car-killed-a-pedestrian|title=Why Uber's self-driving car killed a pedestrian|newspaper=The Economist|access-date=20 August 2018|language=en|archive-date=21 August 2018|archive-url=https://web.archive.org/web/20180821031818/https://www.economist.com/the-economist-explains/2018/05/29/why-ubers-self-driving-car-killed-a-pedestrian|url-status=live}}</ref> Attempts to use machine learning in healthcare with the [[Watson (computer)|IBM Watson]] system failed to deliver even after years of work and billions of dollars invested.<ref>{{Cite news|url=https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/|title=IBM's Watson recommended 'unsafe and incorrect' cancer treatments – STAT|date=25 July 2018|work=STAT|access-date=21 August 2018|language=en-US|archive-date=21 August 2018|archive-url=https://web.archive.org/web/20180821062616/https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/|url-status=live}}</ref><ref>{{Cite news|url=https://www.wsj.com/articles/ibm-bet-billions-that-watson-could-improve-cancer-treatment-it-hasnt-worked-1533961147|title=IBM Has a Watson Dilemma|last1=Hernandez|first1=Daniela|date=11 August 2018|work=[[The Wall Street Journal]]|access-date=21 August 2018|last2=Greenwald|first2=Ted|language=en-US|issn=0099-9660|archive-date=21 August 2018|archive-url=https://web.archive.org/web/20180821031906/https://www.wsj.com/articles/ibm-bet-billions-that-watson-could-improve-cancer-treatment-it-hasnt-worked-1533961147|url-status=live}}</ref> Microsoft's [[Bing Chat]] chatbot has been reported to produce hostile and offensive responses toward its users.<ref>{{Cite web |last=Allyn |first=Bobby |date=27 February 2023 |title=How Microsoft's experiment in artificial intelligence tech backfired |url=https://www.npr.org/2023/02/27/1159630243/how-microsofts-experiment-in-artificial-intelligence-tech-backfired |access-date=8 December 2023 |website=National Public Radio |archive-date=8 December 2023 |archive-url=https://web.archive.org/web/20231208234056/https://www.npr.org/2023/02/27/1159630243/how-microsofts-experiment-in-artificial-intelligence-tech-backfired |url-status=live }}</ref> Machine learning has been used as a strategy to update the evidence related to a systematic review, in response to the increased reviewer burden associated with the growth of the biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves.<ref>{{Cite journal|last1=Reddy|first1=Shivani M.|last2=Patel|first2=Sheila|last3=Weyrich|first3=Meghan|last4=Fenton|first4=Joshua|last5=Viswanathan|first5=Meera|date=2020|title=Comparison of a traditional systematic review approach with review-of-reviews and semi-automation as strategies to update the evidence|url= |journal=Systematic Reviews|language=en|volume=9|issue=1|pages=243|doi=10.1186/s13643-020-01450-2|issn=2046-4053|pmc=7574591|pmid=33076975 |doi-access=free }}</ref>

=== Explainability ===
{{Main|Explainable artificial intelligence}}
Explainable AI (XAI), also known as interpretable AI or explainable machine learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI.<ref>{{cite journal |last1=Rudin |first1=Cynthia |title=Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead |journal=Nature Machine Intelligence |date=2019 |volume=1 |issue=5 |pages=206–215 |doi=10.1038/s42256-019-0048-x |pmid=35603010 |pmc=9122117 }}</ref> It contrasts with the "black box" concept in machine learning, where even a model's designers cannot explain why an AI arrived at a specific decision.<ref>{{cite journal |last1=Hu |first1=Tongxi |last2=Zhang |first2=Xuesong |last3=Bohrer |first3=Gil |last4=Liu |first4=Yanlan |last5=Zhou |first5=Yuyu |last6=Martin |first6=Jay |last7=LI |first7=Yang |last8=Zhao |first8=Kaiguang |title=Crop yield prediction via explainable AI and interpretable machine learning: Dangers of black box models for evaluating climate change impacts on crop yield|journal=Agricultural and Forest Meteorology |date=2023 |volume=336 |page=109458 |doi=10.1016/j.agrformet.2023.109458 |s2cid=258552400 |doi-access=free }}</ref> By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation.
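As a purely illustrative sketch of this contrast (not drawn from the cited sources), the example below decomposes the decision of a hypothetical linear scoring model into per-feature contributions that a human can audit; a black-box model offers no comparable account of how its output was produced. The feature names, weights, and threshold are invented for the illustration.

<syntaxhighlight lang="python">
# Hypothetical interpretable model: a linear scorer whose decision can be
# broken down into per-feature contributions (all names and weights invented).
features = {"income": 0.42, "debt_ratio": 0.88, "years_employed": 3.0}
weights = {"income": 1.5, "debt_ratio": -2.0, "years_employed": 0.3}
bias = -0.5

# Each feature's contribution to the score is visible, so the decision is auditable.
contributions = {name: weights[name] * value for name, value in features.items()}
score = bias + sum(contributions.values())
decision = "approve" if score > 0 else "deny"

print(f"decision: {decision} (score = {score:+.2f})")
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>15}: {c:+.2f}")
</syntaxhighlight>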
=== Overfitting ===
{{Main|Overfitting}}
[[File: Overfitted Data.png|thumb|The blue line could be an example of overfitting a linear function due to random noise.]]
Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalising the theory in accordance with how complex the theory is.{{sfn|Domingos|2015|loc=Chapter 6, Chapter 7}}
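A minimal sketch of this reward-versus-penalty trade-off (not taken from the cited sources), using ridge-style regularisation: the fitting objective rewards low error on the training data while penalising large polynomial coefficients as a stand-in for model complexity. The data, polynomial degree, and penalty strengths are invented for the illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Hypothetical noisy samples of an underlying sine curve."""
    x = rng.uniform(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

x_train, y_train = make_data(20)
x_test, y_test = make_data(200)

DEGREE = 9  # deliberately over-complex model for only 20 training points

def design(x):
    return np.vander(x, DEGREE + 1)  # polynomial feature matrix

def fit(penalty):
    # Minimise ||Xw - y||^2 + penalty * ||w||^2 (closed-form ridge regression):
    # reward fit to the data, penalise coefficient size as a proxy for complexity.
    X = design(x_train)
    return np.linalg.solve(X.T @ X + penalty * np.eye(DEGREE + 1), X.T @ y_train)

def mse(w, x, y):
    return float(np.mean((design(x) @ w - y) ** 2))

for penalty in (1e-9, 1e-2):  # nearly unpenalised vs. moderately penalised
    w = fit(penalty)
    print(f"penalty={penalty:g}: train MSE={mse(w, x_train, y_train):.3f}, "
          f"test MSE={mse(w, x_test, y_test):.3f}")
</syntaxhighlight>

On such toy data the nearly unpenalised polynomial typically fits the training points more closely but generalises worse to held-out data, while the penalised fit trades a little training accuracy for better generalisation.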
=== Other limitations and vulnerabilities ===
<!-- WHAT IS THE TECHNICAL TERM THAT DESCRIBES THIS LIMITATION? -->
Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.{{sfn|Domingos|2015|p=286}} A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements based on the spatial relationships between components of the picture; instead, they learn relationships between pixels that humans are oblivious to but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.<ref>{{cite news|title=Single pixel change fools AI programs|url=https://www.bbc.com/news/technology-41845878|access-date=12 March 2018|work=BBC News|date=3 November 2017|archive-date=22 March 2018|archive-url=https://web.archive.org/web/20180322011306/http://www.bbc.com/news/technology-41845878|url-status=live}}</ref><ref>{{cite news|title=AI Has a Hallucination Problem That's Proving Tough to Fix|url=https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/|access-date=12 March 2018|magazine=WIRED|date=2018|archive-date=12 March 2018|archive-url=https://web.archive.org/web/20180312024533/https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/|url-status=live}}</ref>

<!-- WHAT IS THE TECHNICAL TERM THAT DESCRIBES THIS LIMITATION? -->
Adversarial vulnerabilities can also arise in nonlinear systems or from non-pattern perturbations. For some systems, it is possible to change the output by changing only a single adversarially chosen pixel.<ref name=TD_1>{{cite arXiv | title=Towards deep learning models resistant to adversarial attacks| author1=Madry, A.| author2=Makelov, A.| author3=Schmidt, L.| author4=Tsipras, D.| author5=Vladu, A. | date=4 September 2019| class=stat.ML| eprint=1706.06083}}</ref> Machine learning models are often vulnerable to manipulation or evasion via [[adversarial machine learning]].<ref>{{cite web |title=Adversarial Machine Learning – CLTC UC Berkeley Center for Long-Term Cybersecurity |url=https://cltc.berkeley.edu/aml/ |website=CLTC |access-date=25 May 2022 |archive-date=17 May 2022 |archive-url=https://web.archive.org/web/20220517045352/https://cltc.berkeley.edu/aml/ |url-status=live }}</ref>

Researchers have demonstrated how [[Backdoor (computing)|backdoors]] can be placed undetectably into machine learning classifiers (e.g., models that sort posts into categories such as "spam" and "not spam") that are often developed or trained by third parties. A party with such a backdoor can change the classification of any input, including in cases for which a form of [[algorithmic transparency|data/software transparency]] is provided, possibly including [[white-box testing|white-box access]].<ref>{{cite news |title=Machine-learning models vulnerable to undetectable backdoors |url=https://www.theregister.com/2022/04/21/machine_learning_models_backdoors/ |access-date=13 May 2022 |work=[[The Register]] |language=en |archive-date=13 May 2022 |archive-url=https://web.archive.org/web/20220513171215/https://www.theregister.com/2022/04/21/machine_learning_models_backdoors/ |url-status=live }}</ref><ref>{{cite news |title=Undetectable Backdoors Plantable In Any Machine-Learning Algorithm |url=https://spectrum.ieee.org/machine-learningbackdoor |access-date=13 May 2022 |work=[[IEEE Spectrum]] |date=10 May 2022 |language=en |archive-date=11 May 2022 |archive-url=https://web.archive.org/web/20220511152052/https://spectrum.ieee.org/machine-learningbackdoor |url-status=live }}</ref><ref>{{Cite arXiv|last1=Goldwasser |first1=Shafi |last2=Kim |first2=Michael P. |last3=Vaikuntanathan |first3=Vinod |last4=Zamir |first4=Or |title=Planting Undetectable Backdoors in Machine Learning Models |date=14 April 2022|class=cs.LG |eprint=2204.06974 }}</ref>
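A minimal sketch (not taken from the cited papers) of a gradient-sign perturbation against a toy logistic "image" classifier: the weights, input, and step size below are invented stand-ins for a trained model, and the point is only to show how a small, targeted change to the input pixels can push a model's output across its decision boundary.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N_PIXELS = 64                          # a flattened 8x8 "image"

w = rng.normal(size=N_PIXELS)          # stand-in for trained classifier weights
x = rng.uniform(0.0, 1.0, N_PIXELS)    # a legitimate input image

def predict(image):
    """Probability that the image belongs to class 1 (simple logistic model)."""
    return 1.0 / (1.0 + np.exp(-(w @ image)))

p = predict(x)
label = 1 if p >= 0.5 else 0           # the class the model currently assigns

# Gradient of the logistic loss for that label with respect to the *input*:
# stepping along its sign increases the loss, pushing the prediction away from
# the assigned class (the idea behind fast-gradient-sign perturbations).
grad_wrt_input = (p - label) * w
epsilon = 0.1                          # small change applied to every pixel
x_adv = np.clip(x + epsilon * np.sign(grad_wrt_input), 0.0, 1.0)

print(f"original prediction:  {predict(x):.3f} (class {label})")
print(f"perturbed prediction: {predict(x_adv):.3f}")
# A modest epsilon is often enough to push the prediction across the 0.5
# boundary in this toy setup, even though no pixel changes by more than 0.1.
</syntaxhighlight>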