==== Magnetoencephalography and fMRI ====
{{Main|Magnetoencephalography|Functional magnetic resonance imaging}}
[[File:Visual stimulus reconstruction using fMRI.png|thumb|ATR Labs' reconstruction of human vision using [[functional magnetic resonance imaging|fMRI]] (top row: original image; bottom row: reconstruction from mean of combined readings)]]
[[Magnetoencephalography]] (MEG) and [[functional magnetic resonance imaging]] (fMRI) have both been used as non-invasive BCIs.<ref>Ranganatha Sitaram, Andrea Caria, Ralf Veit, Tilman Gaber, Giuseppina Rota, Andrea Kuebler and Niels Birbaumer (2007) "[https://archive.today/20120731202844/http://mts.hindawi.com/utils/GetFile.aspx?msid=25487&vnum=2&ftype=manuscript FMRI Brain–Computer Interface: A Tool for Neuroscientific Research and Treatment]"</ref> In a widely reported experiment, fMRI allowed two users to play [[Pong]] in real time by altering their [[haemodynamic response]], or brain blood flow, through [[biofeedback]].<ref>{{cite journal|doi=10.1038/news040823-18|title=Mental ping-pong could aid paraplegics|journal=News@nature|date=27 August 2004 |last=Peplow |first=Mark }}</ref> Real-time fMRI measurements of haemodynamic responses have also been used to control robot arms, with a seven-second delay between thought and movement.<ref>{{cite web | url = http://techon.nikkeibp.co.jp/english/NEWS_EN/20060525/117493/ | title = To operate robot only with brain, ATR and Honda develop BMI base technology | work = Tech-on | date = 26 May 2006 | access-date = 22 September 2006 | archive-date = 23 June 2017 | archive-url = https://web.archive.org/web/20170623060519/http://techon.nikkeibp.co.jp/english/NEWS_EN/20060525/117493/ | url-status = dead }}</ref> In 2008, research at the Advanced Telecommunications Research (ATR) [[Computational Neuroscience]] Laboratories in [[Kyoto]], Japan, allowed researchers to reconstruct images from brain signals at a [[Display resolution|resolution]] of 10×10 [[pixels]].<ref>{{cite journal | vauthors = Miyawaki Y, Uchida H, Yamashita O, Sato MA, Morito Y, Tanabe HC, Sadato N, Kamitani Y | display-authors = 6 | title = Visual image reconstruction from human brain activity using a combination of multiscale local image decoders | journal = Neuron | volume = 60 | issue = 5 | pages = 915–929 | date = December 2008 | pmid = 19081384 | doi = 10.1016/j.neuron.2008.11.004 | s2cid = 17327816 | doi-access = free }}</ref> A 2011 study reported second-by-second reconstruction of videos watched by the study's subjects from fMRI data.<ref>{{cite journal |vauthors=Nishimoto S, Vu AT, Naselaris T, Benjamini Y, Yu B, Gallant JL |date=October 2011 |title=Reconstructing visual experiences from brain activity evoked by natural movies |journal=Current Biology |volume=21 |issue=19 |pages=1641–1646 |doi=10.1016/j.cub.2011.08.031 |pmc=3326357 |pmid=21945275}}</ref> This was achieved by fitting a statistical model relating videos to brain activity. The model was then used to search a database of 18 million seconds of random [[YouTube]] videos for the 100 one-second segments that best matched the brain activity recorded while the subjects watched a video. These 100 one-second extracts were then combined into a single mash-up that resembled the watched video.<ref>{{cite magazine | url = http://blogs.scientificamerican.com/observations/2011/09/22/breakthrough-could-enable-others-to-watch-your-dreams-and-memories-video/ | title = Breakthrough Could Enable Others to Watch Your Dreams and Memories | last = Yam | first = Philip | date = 22 September 2011 | magazine = Scientific American | access-date = 25 September 2011}}</ref><ref>{{cite web | url = https://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011 | title = Reconstructing visual experiences from brain activity evoked by natural movies (Project page) | publisher = The Gallant Lab at [[UC Berkeley]] | access-date = 25 September 2011 | url-status = dead | archive-url = https://web.archive.org/web/20110925024037/https://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011 | archive-date = 2011-09-25}}</ref><ref>{{cite web | url = http://newscenter.berkeley.edu/2011/09/22/brain-movies/ | title = Scientists use brain imaging to reveal the movies in our mind | last = Anwar | first = Yasmin | date = 22 September 2011 | publisher = [[UC Berkeley]] News Center | access-date = 25 September 2011}}</ref>
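The sketch below illustrates the general look-up-and-average idea described above; it is not the study's actual code. It assumes a simple linear encoding model and synthetic data, and the function names (<code>predict_activity</code>, <code>reconstruct</code>) are hypothetical: each candidate clip is scored by how well its predicted brain activity correlates with the recorded activity, and the best 100 clips are averaged frame by frame into the mash-up.
<syntaxhighlight lang="python">
import numpy as np

def predict_activity(clip_features, weights):
    # Hypothetical forward (encoding) model: predicts a voxel-activity vector
    # from a clip's visual features via a simple linear mapping.
    return clip_features @ weights

def reconstruct(recorded_activity, library_clips, library_features, weights, top_k=100):
    # Score every candidate clip by the correlation between its predicted
    # activity and the activity actually recorded from the subject.
    scores = []
    for features in library_features:
        predicted = predict_activity(features, weights)
        scores.append(np.corrcoef(predicted, recorded_activity)[0, 1])
    # Keep the top_k best-matching clips (100 in the study described above).
    best = np.argsort(scores)[-top_k:]
    # Average the selected clips frame by frame to produce the blurry mash-up.
    return np.mean([library_clips[i] for i in best], axis=0)

# Toy usage with random data standing in for real fMRI recordings and a clip library.
rng = np.random.default_rng(0)
n_clips, n_features, n_voxels = 500, 64, 200
clip_shape = (30, 32, 32)  # 30 frames of 32x32 pixels per one-second clip
library_features = rng.normal(size=(n_clips, n_features))
library_clips = rng.random(size=(n_clips,) + clip_shape)
weights = rng.normal(size=(n_features, n_voxels))
recorded = predict_activity(library_features[42], weights) + rng.normal(size=n_voxels)

mashup = reconstruct(recorded, library_clips, library_features, weights, top_k=100)
print(mashup.shape)  # (30, 32, 32)
</syntaxhighlight>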