=== Other limitations and vulnerabilities ===
<!-- WHAT IS THE TECHNICAL TERM THAT DESCRIBES THIS LIMITATION? -->
Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.{{sfn|Domingos|2015|p=286}} A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements from the spatial relationships between components of the picture; instead, they learn relationships between pixels that humans are oblivious to but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.<ref>{{cite news|title=Single pixel change fools AI programs|url=https://www.bbc.com/news/technology-41845878|access-date=12 March 2018|work=BBC News|date=3 November 2017|archive-date=22 March 2018|archive-url=https://web.archive.org/web/20180322011306/http://www.bbc.com/news/technology-41845878|url-status=live}}</ref><ref>{{cite news|title=AI Has a Hallucination Problem That's Proving Tough to Fix|url=https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/|access-date=12 March 2018|magazine=WIRED|date=2018|archive-date=12 March 2018|archive-url=https://web.archive.org/web/20180312024533/https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/|url-status=live}}</ref>

<!-- WHAT IS THE TECHNICAL TERM THAT DESCRIBES THIS LIMITATION? -->
Adversarial vulnerabilities can also arise in nonlinear systems or result from non-pattern perturbations. For some systems, it is possible to change the output by changing only a single, adversarially chosen pixel.<ref name=TD_1>{{cite arXiv | title=Towards deep learning models resistant to adversarial attacks| author1=Madry, A.| author2=Makelov, A.| author3=Schmidt, L.| author4=Tsipras, D.| author5=Vladu, A. | date=4 September 2019| class=stat.ML| eprint=1706.06083}}</ref> Machine learning models are often vulnerable to manipulation or evasion via [[adversarial machine learning]].<ref>{{cite web |title=Adversarial Machine Learning – CLTC UC Berkeley Center for Long-Term Cybersecurity |url=https://cltc.berkeley.edu/aml/ |website=CLTC |access-date=25 May 2022 |archive-date=17 May 2022 |archive-url=https://web.archive.org/web/20220517045352/https://cltc.berkeley.edu/aml/ |url-status=live }}</ref> Researchers have demonstrated how [[Backdoor (computing)|backdoors]] can be placed undetectably into classification models (for example, models that sort posts into the categories "spam" and "not spam"), which are often developed or trained by third parties. A party with knowledge of such a backdoor can change the classification of any input, even in cases where a form of [[algorithmic transparency|data/software transparency]] is provided, possibly including [[white-box testing|white-box access]].<ref>{{cite news |title=Machine-learning models vulnerable to undetectable backdoors |url=https://www.theregister.com/2022/04/21/machine_learning_models_backdoors/ |access-date=13 May 2022 |work=[[The Register]] |language=en |archive-date=13 May 2022 |archive-url=https://web.archive.org/web/20220513171215/https://www.theregister.com/2022/04/21/machine_learning_models_backdoors/ |url-status=live }}</ref><ref>{{cite news |title=Undetectable Backdoors Plantable In Any Machine-Learning Algorithm |url=https://spectrum.ieee.org/machine-learningbackdoor |access-date=13 May 2022 |work=[[IEEE Spectrum]] |date=10 May 2022 |language=en |archive-date=11 May 2022 |archive-url=https://web.archive.org/web/20220511152052/https://spectrum.ieee.org/machine-learningbackdoor |url-status=live }}</ref><ref>{{Cite arXiv|last1=Goldwasser |first1=Shafi |last2=Kim |first2=Michael P. |last3=Vaikuntanathan |first3=Vinod |last4=Zamir |first4=Or |title=Planting Undetectable Backdoors in Machine Learning Models |date=14 April 2022|class=cs.LG |eprint=2204.06974 }}</ref>
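As a minimal illustration of the idea of adversarial perturbations described above, the sketch below nudges every pixel of an image a small step in the direction that increases the classifier's loss (the fast gradient sign method). It assumes a differentiable [[PyTorch]] classifier named <code>model</code> that returns logits and an input image scaled to [0, 1]; it is a sketch of the general technique only, not the specific single-pixel or backdoor attacks described in the cited sources.

<syntaxhighlight lang="python">
# Illustrative sketch: fast gradient sign method (FGSM) for crafting an adversarial image.
# Assumes `model` is a differentiable PyTorch classifier returning logits.
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, label, epsilon=0.01):
    """Return a perturbed copy of `image` that the model is more likely to misclassify.

    `image` is a float tensor in [0, 1]; `epsilon` bounds the per-pixel change.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss with respect to the true label
    loss.backward()                               # gradient of the loss w.r.t. the pixels
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range
</syntaxhighlight>

Because the perturbation is bounded by <code>epsilon</code>, the adversarial image can remain visually indistinguishable from the original to a human while changing the model's prediction.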