=== Regulation ===
{{Main|Regulation of artificial intelligence|Regulation of algorithms|AI safety}}
[[File:Vice President Harris at the group photo of the 2023 AI Safety Summit.jpg|upright=1.2|thumb|alt=AI Safety Summit|The first global [[AI Safety Summit]] was held in the United Kingdom in November 2023 with a declaration calling for international cooperation.]]
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.<ref>Regulation of AI to mitigate risks: {{Harvtxt|Berryhill|Heang|Clogher|McBride|2019}}, {{Harvtxt|Barfield|Pagallo|2018}}, {{Harvtxt|Iphofen|Kritikos|2019}}, {{Harvtxt|Wirtz|Weyerer|Geyer|2018}}, {{Harvtxt|Buiten|2019}}</ref> The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.{{Sfnp|Law Library of Congress (U.S.). Global Legal Research Directorate|2019}} According to the AI Index at [[Stanford]], the annual number of AI-related laws passed in the 127 surveyed countries jumped from one in 2016 to 37 in 2022.{{Sfnp|Vincent|2023}}{{Sfnp|Stanford University|2023}} Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.{{Sfnp|UNESCO|2021}} Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the U.S., and Vietnam. Others, including Bangladesh, Malaysia and Tunisia, were in the process of elaborating their own AI strategies.{{Sfnp|UNESCO|2021}}

The [[Global Partnership on Artificial Intelligence]] was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.{{Sfnp|UNESCO|2021}} [[Henry Kissinger]], [[Eric Schmidt]], and [[Daniel Huttenlocher]] published a joint statement in November 2021 calling for a government commission to regulate AI.{{Sfnp|Kissinger|2021}} In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may arrive in less than 10 years.{{Sfnp|Altman|Brockman|Sutskever|2023}} In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics.<ref>{{Cite web |last=VOA News |date=October 25, 2023 |title=UN Announces Advisory Body on Artificial Intelligence |url=https://www.voanews.com/a/un-announces-advisory-body-on-artificial-intelligence-/7328732.html |access-date=5 October 2024 |archive-date=18 September 2024 |archive-url=https://web.archive.org/web/20240918071530/https://www.voanews.com/a/un-announces-advisory-body-on-artificial-intelligence-/7328732.html |url-status=live }}</ref> In 2024, the [[Council of Europe]] created the first international legally binding treaty on AI, called the "[[Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law]]". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.<ref>{{Cite web |date=5 September 2024 |title=Council of Europe opens first ever global treaty on AI for signature |url=https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature |access-date=2024-09-17 |website=Council of Europe |archive-date=17 September 2024 |archive-url=https://web.archive.org/web/20240917001330/https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature |url-status=live }}</ref>

In a 2022 [[Ipsos]] survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".{{Sfnp|Vincent|2023}} A 2023 [[Reuters]]/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.{{Sfnp|Edwards|2023}} In a 2023 [[Fox News]] poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".{{Sfnp|Kasperowicz|2023}}{{Sfnp|Fox News|2023}}

In November 2023, the first global [[AI Safety Summit]] was held at [[Bletchley Park]] in the UK to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.<ref>{{Cite news |last=Milmo |first=Dan |date=3 November 2023 |title=Hope or Horror? The great AI debate dividing its pioneers |work=[[The Guardian Weekly]] |pages=10–12}}</ref> 28 countries, including the United States, China, and the European Union, issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.<ref>{{Cite web |date=1 November 2023 |title=The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023 |url=https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 |archive-url=https://web.archive.org/web/20231101123904/https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 |archive-date=1 November 2023 |access-date=2 November 2023 |website=GOV.UK}}</ref><ref>{{Cite press release |title=Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration |url=https://www.gov.uk/government/news/countries-agree-to-safe-and-responsible-development-of-frontier-ai-in-landmark-bletchley-declaration |access-date=1 November 2023 |url-status=live |archive-url=https://web.archive.org/web/20231101115016/https://www.gov.uk/government/news/countries-agree-to-safe-and-responsible-development-of-frontier-ai-in-landmark-bletchley-declaration |archive-date=1 November 2023 |website=GOV.UK}}</ref> In May 2024, at the [[AI Seoul Summit]], 16 global AI tech companies agreed to safety commitments on the development of AI.<ref>{{Cite web |date=21 May 2024 |title=Second global AI summit secures safety commitments from companies |url=https://www.reuters.com/technology/global-ai-summit-seoul-aims-forge-new-regulatory-agreements-2024-05-21 |access-date=23 May 2024 |publisher=Reuters}}</ref><ref>{{Cite web |date=21 May 2024 |title=Frontier AI Safety Commitments, AI Seoul Summit 2024 |url=https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024 |archive-url=https://web.archive.org/web/20240523201611/https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024 |archive-date=23 May 2024 |access-date=23 May 2024 |publisher=gov.uk}}</ref>