Template:Short description Template:Redirect-distinguish Template:Infobox organization

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

History

In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI).<ref name=FLI2015>Template:Cite news</ref><ref name=NYER/><ref name="Waters"/> Yudkowsky, however, grew concerned that future AI systems could become superintelligent and pose risks to humanity,<ref name=FLI2015/> and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks, which scientists in the field largely ignored at the time.<ref name=NYER>Template:Cite magazine</ref>

Starting in 2006, the Institute organized the Singularity Summit to discuss the future of AI including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel. The San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism".<ref>Template:Cite news</ref><ref>Template:Cite news</ref> In 2011, its offices were four apartments in downtown Berkeley.<ref>Template:Cite news</ref> In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University,<ref>Template:Cite news</ref> and in the following month took the name "Machine Intelligence Research Institute".<ref>Template:Cite news</ref>

In 2014 and 2015, public and scientific interest in the risks of AI grew, increasing donations to fund research at MIRI and similar organizations.<ref name="Waters"/><ref name="life">Template:Cite book</ref>Template:Rp

In 2019, Open Philanthropy recommended a general-support grant of approximately $2.1 million over two years to MIRI.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In April 2020, Open Philanthropy supplemented this with a $7.7 million grant over two years.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>

In 2021, Vitalik Buterin donated several million dollars worth of Ethereum to MIRI.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>

Research and approach

File:Nate Soares giving a talk at Google.gk.jpg|Nate Soares presenting an overview of the AI alignment problem at Google in 2016

MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, primarily addresses how to design friendly AI, covering both the initial design of AI systems and the creation of mechanisms to ensure that evolving AI systems remain friendly.<ref name="Waters">Template:Cite news</ref><ref name="atlantic">Template:Cite news</ref><ref name="aima">Template:Cite book</ref>

MIRI researchers advocate early safety work as a precautionary measure.<ref name="Ozy">Template:Cite news</ref> However, they are skeptical of the views of singularity advocates such as Ray Kurzweil that superintelligence is "just around the corner".<ref name="atlantic"/> MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware.<ref>Template:Cite news</ref>

MIRI aligns itself with the principles and objectives of the effective altruism movement.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>

Works by MIRI staff

  • {{#invoke:citation/CS1|citation |CitationClass=web }}

See also

References

Template:Reflist

Further reading

External links

Template:Existential risk from artificial intelligence Template:Effective altruism Template:LessWrong