=== Frameworks ===
Artificial intelligence projects can be guided by ethical considerations during the design, development, and implementation of an AI system. An AI framework such as the Care and Act Framework, developed by the [[Alan Turing Institute]] and based on the SUM values, outlines four main ethical dimensions, defined as follows:<ref>{{Cite web |author=Alan Turing Institute |date=2019 |title=Understanding artificial intelligence ethics and safety |url=https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf |access-date=5 October 2024 |archive-date=11 September 2024 |archive-url=https://web.archive.org/web/20240911131935/https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf |url-status=live }}</ref><ref>{{Cite web |author=Alan Turing Institute |date=2023 |title=AI Ethics and Governance in Practice |url=https://www.turing.ac.uk/sites/default/files/2023-12/aieg-ati-ai-ethics-an-intro_1.pdf |access-date=5 October 2024 |archive-date=11 September 2024 |archive-url=https://web.archive.org/web/20240911125504/https://www.turing.ac.uk/sites/default/files/2023-12/aieg-ati-ai-ethics-an-intro_1.pdf |url-status=live }}</ref>
* '''Respect''' the dignity of individual people
* '''Connect''' with other people sincerely, openly, and inclusively
* '''Care''' for the wellbeing of everyone
* '''Protect''' social values, justice, and the public interest

Other developments in ethical frameworks include those decided upon during the [[Asilomar Conference on Beneficial AI|Asilomar Conference]], the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;<ref>{{Cite journal |last1=Floridi |first1=Luciano |last2=Cowls |first2=Josh |date=2019-06-23 |title=A Unified Framework of Five Principles for AI in Society |url=https://hdsr.mitpress.mit.edu/pub/l0jsh9d1 |journal=Harvard Data Science Review |volume=1 |issue=1 |doi=10.1162/99608f92.8cd550d1 |s2cid=198775713 |doi-access=free |archive-date=7 August 2019 |access-date=5 December 2023 |archive-url=https://archive.today/20190807202909/https://hdsr.mitpress.mit.edu/pub/l0jsh9d1 |url-status=live }}</ref> however, these principles are not without criticism, especially with regard to the people chosen to contribute to these frameworks.<ref>{{Cite journal |last1=Buruk |first1=Banu |last2=Ekmekci |first2=Perihan Elif |last3=Arda |first3=Berna |date=2020-09-01 |title=A critical perspective on guidelines for responsible and trustworthy artificial intelligence |url=https://doi.org/10.1007/s11019-020-09948-1 |journal=Medicine, Health Care and Philosophy |volume=23 |issue=3 |pages=387–399 |doi=10.1007/s11019-020-09948-1 |issn=1572-8633 |pmid=32236794 |s2cid=214766800 |access-date=5 October 2024 |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005170206/https://link.springer.com/article/10.1007/s11019-020-09948-1 |url-status=live }}</ref>

Promoting the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development, and implementation, as well as collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.<ref>{{Cite journal |last1=Kamila |first1=Manoj Kumar |last2=Jasrotia |first2=Sahil Singh |date=2023-01-01 |title=Ethical issues in the development of artificial intelligence: recognizing the risks |url=https://doi.org/10.1108/IJOES-05-2023-0107 |journal=International Journal of Ethics and Systems |pages=45–63 |volume=41 |issue=ahead-of-print |doi=10.1108/IJOES-05-2023-0107 |issn=2514-9369 |s2cid=259614124 |access-date=5 October 2024 |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005170207/https://www.emerald.com/insight/content/doi/10.1108/IJOES-05-2023-0107/full/html |url-status=live }}</ref>

In 2024, the [[AI Safety Institute (United Kingdom)|UK AI Safety Institute]] released 'Inspect', a testing toolset for AI safety evaluations. Released under an MIT open-source licence and freely available on GitHub, it can be extended with third-party packages and used to evaluate AI models in a range of areas, including core knowledge, ability to reason, and autonomous capabilities.<ref>{{Cite web |date=10 May 2024 |title=AI Safety Institute releases new AI safety evaluations platform |url=https://www.gov.uk/government/news/ai-safety-institute-releases-new-ai-safety-evaluations-platform |access-date=14 May 2024 |publisher=UK Government |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005170207/https://www.gov.uk/government/news/ai-safety-institute-releases-new-ai-safety-evaluations-platform |url-status=live }}</ref>