==Projects==
Several projects aiming to create androids that look, and to a certain degree speak or act, like a human being have been launched or are underway.

===Japan===
{{Expand section|1=more recent examples and additional citations|section=1|date=October 2018}}
[[File:Repliee Q2 face.jpg|thumb|Repliee Q2, a Japanese android]]
[[Japanese robotics]] has led the field since the 1970s.<ref>{{cite book|url=https://books.google.com/books?id=tQqVCgAAQBAJ&pg=PA66|title=Robotics and Mechatronics: Proceedings of the 4th IFToMM International Symposium on Robotics and Mechatronics|first1=Saïd|last1=Zeghloul|first2=Med Amine|last2=Laribi|first3=Jean-Pierre|last3=Gazeau|author-link3=Jean-Pierre Gazeau|date=21 September 2015|publisher=Springer|isbn=9783319223681}}</ref> [[Waseda University]] initiated the WABOT project in 1967 and in 1972 completed the WABOT-1, the first android: a full-scale humanoid intelligent robot.<ref>{{cite web|url=http://www.humanoid.waseda.ac.jp/booklet/kato_2-j.html|title=Humanoid History -WABOT-|website=www.humanoid.waseda.ac.jp}}</ref><ref name="androidworld.com">{{cite web|url=http://www.androidworld.com/prod06.htm|title=Historical Android Projects|work=androidworld.com|access-date=6 May 2017|archive-date=25 November 2005|archive-url=https://web.archive.org/web/20051125164748/http://www.androidworld.com/prod06.htm|url-status=dead}}</ref> Its limb control system allowed it to walk with its lower limbs and to grip and transport objects with its hands, using [[tactile sensor]]s. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears, and its conversation system allowed it to communicate with a person in Japanese through an artificial mouth.<ref name="androidworld.com"/><ref>[https://archive.org/details/robotsfromscienc0000ichb ''Robots: From Science Fiction to Technological Revolution''], page 130</ref><ref>{{cite book|url=https://books.google.com/books?id=NgLLBQAAQBAJ&pg=SA3-PA1|title=Handbook of Digital Human Modeling: Research for Applied Ergonomics and Human Factors Engineering|first=Vincent G.|last=Duffy|date=19 April 2016|publisher=CRC Press|isbn=9781420063523}}</ref>

WABOT-2, revealed in 1984, incorporated a number of improvements: with ten fingers and two feet it could read a musical score, play the organ, and accompany a person.<ref>{{cite web |url=http://www.uc3m.es/uc3m/dpto/IN/dpin04/2historygroupwabo2.html |title=2history |access-date=31 August 2007 |archive-url=https://web.archive.org/web/20071012203052/http://uc3m.es/uc3m/dpto/IN/dpin04/2historygroupwabo2.html |archive-date=12 October 2007}}</ref> In 1986, [[Honda]] began its humanoid research and development program to create humanoid robots capable of interacting successfully with humans.<ref>{{cite web |url=http://world.honda.com/ASIMO/P3/ |title=P3 |publisher=Honda Worldwide |access-date=1 September 2007}}</ref>

The Intelligent Robotics Lab, directed by [[Hiroshi Ishiguro]] at [[Osaka University]], and the Kokoro company demonstrated the [[Actroid]] at [[Expo 2005]] in [[Aichi Prefecture]], Japan, and released the [[Telenoid R1]] in 2010. In 2006, Kokoro developed the ''DER 2'' android. The human-body portion of DER2 is 165 cm tall and has 47 points of articulation. DER2 can not only change its expression but also move its hands and feet and twist its body. Its actuators use an "air servosystem" that Kokoro developed; because the actuators are controlled precisely with air pressure through a servosystem, the movements are fluid and produce very little noise. By using smaller cylinders, DER2 achieved a slimmer and better-proportioned body than the previous model, with thinner arms and a wider repertoire of expressions. Once programmed, it can choreograph its motions and gestures with its voice.

The Intelligent Mechatronics Lab, directed by Hiroshi Kobayashi at the [[Tokyo University of Science]], has developed an android head called ''Saya'', which was exhibited at Robodex 2002 in [[Yokohama]], Japan. Saya now works as a guide at the Tokyo University of Science. Several other humanoid research and development initiatives are underway around the world and may broaden the range of realized technology in the near future.

[[Waseda University]] and [[NTT docomo]]'s manufacturers have created the shape-shifting robot ''WD-2'', which is capable of changing its face. The creators first determine the positions of the points needed to express the outline, eyes, nose, and other features of a particular person's face; the robot then reproduces that face by moving its points to those positions. The first version of the robot was developed in 2003, and major improvements to the design followed a year later. The robot features an elastic mask modeled on an average head dummy and a driving system based on 3-DOF units: it changes its facial features by actuating specific facial points on the mask, with each point possessing three [[degrees of freedom (mechanics)|degrees of freedom]]. It has 17 facial points, for a total of 56 degrees of freedom. The mask is fabricated from Septom, a highly elastic material, with bits of steel wool mixed in for added strength. Each facial point is moved by a shaft behind the mask, driven by a DC motor through a simple pulley and a slide screw. The researchers can also modify the shape of the mask to match actual human faces: to "copy" a face, a [[3D scanner]] determines the locations of an individual's 17 facial points, which are then driven into position using a laptop and 56 motor control boards. The robot can even display an individual's hairstyle and skin color if a photo of their face is projected onto the mask.
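The scan-and-drive process described above can be sketched in a few lines of code. The following Python example is purely illustrative and is not the WD-2's actual control software; the point names, motor step resolution, and printed commands are assumptions standing in for the real laptop-to-motor-board interface.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative sketch of the WD-2 "face copying" idea: a 3D scan supplies
# target positions for 17 facial control points, and each point is driven
# along three axes (3 DOF). All constants and names here are assumptions.

NUM_POINTS = 17          # facial control points on the mask
DOF_PER_POINT = 3        # x, y, z displacement per point
MM_PER_STEP = 0.05       # assumed motor resolution in millimetres per step

@dataclass
class FacialPoint:
    name: str
    neutral: Tuple[float, float, float]   # resting position on the average mask (mm)
    target: Tuple[float, float, float]    # scanned position on the person's face (mm)

def point_to_motor_steps(p: FacialPoint) -> List[int]:
    """Convert the desired displacement of one facial point into
    per-axis motor step counts (one slide-screw axis per DOF)."""
    return [round((t - n) / MM_PER_STEP) for n, t in zip(p.neutral, p.target)]

def drive_mask(points: List[FacialPoint]) -> None:
    """Emit a step command for every axis of every point.
    A real system would talk to motor control boards; here we only print."""
    assert len(points) == NUM_POINTS
    for p in points:
        for axis, n_steps in zip("xyz", point_to_motor_steps(p)):
            print(f"{p.name}.{axis}: move {n_steps} steps")

if __name__ == "__main__":
    # Demo data; a real scan would provide measured targets for all 17 points.
    demo = [FacialPoint(f"point_{i:02d}", (0.0, 0.0, 0.0), (1.5, -0.4, 0.8))
            for i in range(NUM_POINTS)]
    drive_mask(demo)
</syntaxhighlight>

In the real system each of the 56 axes corresponds to a DC motor behind the mask, so the printed commands would instead be dispatched to the motor control boards.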
===Singapore===
Prof Nadia Thalmann, a Nanyang Technological University scientist, directed efforts of the Institute for Media Innovation and the School of Computer Engineering to develop the social robot Nadine. Nadine is powered by software similar to Apple's [[Siri]] or Microsoft's [[Cortana (software)|Cortana]]. Nadine may become a personal assistant in offices and homes in the future, or a companion for the young and the elderly. Assoc Prof Gerald Seet from the School of Mechanical & Aerospace Engineering and the BeingThere Centre led a three-year research and development effort in [[tele-presence robotics]], creating EDGAR. A remote user can control EDGAR, with the user's face and expressions displayed on the robot's face in real time while the robot mimics the user's upper-body movements.<ref name="singapore_NTU">{{cite web|url=http://media.ntu.edu.sg/NewsReleases/Pages/newsdetail.aspx?news=fde9bfb6-ee3f-45f0-8c7b-f08bc1a9a179|title=NTU scientists unveil social and telepresence robots|access-date=31 December 2015|archive-date=3 September 2019|archive-url=https://web.archive.org/web/20190903131525/http://media.ntu.edu.sg/NewsReleases/Pages/newsdetail.aspx?news=fde9bfb6-ee3f-45f0-8c7b-f08bc1a9a179|url-status=dead}}</ref>
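A telepresence robot of this kind can be summarized as a simple capture-and-mirror loop. The sketch below is a generic illustration rather than NTU's actual EDGAR software; the function names, joint names, update rate, and placeholder input/output are assumptions.

<syntaxhighlight lang="python">
import time
from dataclasses import dataclass
from typing import List, Tuple

# Generic telepresence loop: capture the operator's face image and
# upper-body joint angles each frame, then mirror them on the robot.
# Capture and output are stubbed out; all names are illustrative.

@dataclass
class OperatorFrame:
    face_image: bytes                      # encoded video frame of the operator's face
    joint_angles: List[Tuple[str, float]]  # e.g. ("left_shoulder_pitch", 0.42) in radians

def capture_operator() -> OperatorFrame:
    """Placeholder for the operator-side camera and pose tracker."""
    return OperatorFrame(face_image=b"\x00" * 16,
                         joint_angles=[("left_shoulder_pitch", 0.42),
                                       ("right_elbow", -0.10)])

def show_face(face_image: bytes) -> None:
    """Placeholder for the display mounted as the robot's face."""
    print(f"displaying {len(face_image)}-byte face frame")

def mirror_pose(joint_angles: List[Tuple[str, float]]) -> None:
    """Placeholder for commanding the robot's upper-body actuators."""
    for joint, angle in joint_angles:
        print(f"set {joint} -> {angle:+.2f} rad")

def telepresence_loop(frames: int = 3, rate_hz: float = 30.0) -> None:
    """Capture the operator and update the robot once per frame."""
    for _ in range(frames):
        frame = capture_operator()
        show_face(frame.face_image)
        mirror_pose(frame.joint_angles)
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    telepresence_loop()
</syntaxhighlight>

In a deployed system the placeholder functions would be replaced by the operator-side camera and pose tracker and by the robot's display and actuator interfaces.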
<ref name="singapore_NTU">{{cite web|url=http://media.ntu.edu.sg/NewsReleases/Pages/newsdetail.aspx?news=fde9bfb6-ee3f-45f0-8c7b-f08bc1a9a179|title=NTU scientists unveil social and telepresence robots|access-date=31 December 2015|archive-date=3 September 2019|archive-url=https://web.archive.org/web/20190903131525/http://media.ntu.edu.sg/NewsReleases/Pages/newsdetail.aspx?news=fde9bfb6-ee3f-45f0-8c7b-f08bc1a9a179|url-status=dead}}</ref> ===South Korea=== [[File:Ever-2.jpg|thumb|200px|[[EveR-2]], the first android that can sing]] [[KITECH]] researched and developed [[EveR-1]], an android interpersonal communications model capable of emulating human emotional expression via facial "musculature" and capable of rudimentary conversation, having a vocabulary of around 400 words. She is {{nowrap|160 cm}} tall and weighs {{nowrap|50 kg}}, matching the average figure of a Korean woman in her twenties. EveR-1's name derives from the [[Eve|Biblical Eve]], plus the letter ''r'' for ''robot''. EveR-1's advanced computing processing power enables [[speech recognition]] and vocal synthesis, at the same time processing [[lip synchronization]] and visual recognition by 90-degree micro-[[charge-coupled device|CCD]] cameras with [[facial recognition system|face recognition technology]]. An independent microchip inside her artificial brain handles gesture expression, body coordination, and emotion expression. Her whole body is made of highly advanced synthetic jelly silicon and with 60 artificial joints in her face, neck, and lower body; she is able to demonstrate realistic facial expressions and sing while simultaneously dancing. In South Korea, the [[Ministry of Information and Communication (South Korea)|Ministry of Information and Communication]] had an ambitious plan to put a robot in every household by 2020.<ref>{{cite web|url=http://news.nationalgeographic.com/news/2006/09/060906-robots.html |archive-url=https://web.archive.org/web/20061114091438/http://news.nationalgeographic.com/news/2006/09/060906-robots.html |url-status=dead |archive-date=14 November 2006 |title=A Robot in Every Home by 2020, South Korea Says |publisher=News.nationalgeographic.com |date=28 October 2010 |access-date=22 November 2011}}</ref> Several robot cities have been planned for the country: the first will be built in 2016 at a cost of 500 billion won (US$440 million), of which 50 billion is direct government investment.<ref>{{cite web|url=https://www.engadget.com/2007/08/27/south-korea-set-to-build-robot-land/ |title=South Korea set to build "Robot Land" |date=27 August 2007 |publisher=Engadget |access-date=22 November 2011}}</ref> The new robot city will feature research and development centers for manufacturers and part suppliers, as well as exhibition halls and a stadium for robot competitions. 
The country's planned Robotics Ethics Charter would establish ground rules and laws for human interaction with robots, setting standards for robotics users and manufacturers, as well as guidelines on ethical standards to be programmed into robots, to prevent abuse of robots by humans and vice versa.<ref>{{cite web|url=http://news.nationalgeographic.com/news/2007/03/070316-robot-ethics.html |archive-url=https://web.archive.org/web/20070319193834/http://news.nationalgeographic.com/news/2007/03/070316-robot-ethics.html |url-status=dead |archive-date=19 March 2007 |title=Robot Code of Ethics to Prevent Android Abuse, Protect Humans |publisher=News.nationalgeographic.com |date=28 October 2010 |access-date=22 November 2011}}</ref>

===United States===
[[Walt Disney]] and a staff of [[Imagineers]] created [[Great Moments with Mr. Lincoln]], which debuted at the [[1964 New York World's Fair]].<ref name="illinois_pavillion">{{cite web|url=http://www.nywf64.com/illinois02.shtml|title=Pavilions & Attractions – Illinois – Page Two|access-date=23 March 2011}}</ref>

Dr. William Barry, an education futurist and former visiting professor of philosophy and ethical reasoning at the [[United States Military Academy]] at West Point, created an AI android character named "Maria Bot". The android was named after the fictional robot Maria in the 1927 film ''[[Metropolis (1927 film)|Metropolis]]'', as a well-behaved distant relative. Maria Bot is the first AI android teaching assistant at the university level.<ref>{{Cite web|url=http://www.edsurge.com/news/2020-03-09-the-education-of-an-android-teacher|title=The Education of an Android Teacher – EdSurge News|date=9 March 2020}}</ref><ref>{{Cite web|url=http://www.ndnu.edu/media-center/first-android-teaching-assistant|title=First Android Teaching Assistant at NDNU {{!}} Media Center|access-date=15 March 2020|archive-date=28 July 2020|archive-url=https://web.archive.org/web/20200728055628/http://www.ndnu.edu/media-center/first-android-teaching-assistant/|url-status=dead}}</ref> Maria Bot appeared alongside Barry as a keynote speaker at a TEDx talk in Everett, Washington, in February 2020.<ref>{{Cite web |url=http://www.tedxeverett.com/william-barry |title=William Barry {{!}} tedxeverettcom |access-date=15 March 2020 |archive-date=28 July 2020 |archive-url=https://web.archive.org/web/20200728005101/https://www.tedxeverett.com/william-barry |url-status=dead}}</ref> Resembling a human from the shoulders up, Maria Bot is a virtual-being android with complex facial expressions and head movement that engages in conversation about a variety of subjects. She uses AI to process and synthesize information and to make her own decisions about how to talk and engage, collecting data through conversations, direct data inputs such as books or articles, and internet sources. Maria Bot was built by an international high-tech company for Barry to help improve education quality and eliminate education poverty, and is designed to create new ways for students to engage with and discuss the ethical issues raised by the increasing presence of robots and artificial intelligence.
Barry also uses Maria Bot to demonstrate that programming a robot with a life-affirming, ethical framework makes it more likely to help humans do the same.<ref>{{Cite web|url=https://meshconference.com/speakers/maria-bot/|title=Maria Bot}}</ref> Maria Bot is an ambassador robot for good and ethical AI technology.<ref>{{Cite web|url=https://dxjournal.co/2020/02/mesh-conference-announces-ai-robot-as-keynote-speaker/|title=Mesh conference announces AI robot as keynote speaker|date=24 February 2020}}</ref>

[[David Hanson (robotics designer)|Hanson Robotics, Inc.]], of Texas, and [[KAIST]] produced an android portrait of [[Albert Einstein]], using Hanson's facial android technology mounted on KAIST's life-size walking bipedal robot body. This Einstein android, also called "[[Albert Hubo]]", represents the first full-body walking android in history.<ref>{{cite web|url=http://www.hansonrobotics.wordpress.com|title=(no title)|website=www.hansonrobotics.wordpress.com}}</ref> Hanson Robotics, the FedEx Institute of Technology,<ref>{{cite web|url=http://www.fedex.memphis.edu|title=FIT – FedEx Institute of Technology – The University of Memphis|website=www.fedex.memphis.edu}}</ref> and the University of Texas at Arlington also developed an android portrait of science-fiction author [[Philip K. Dick]] (creator of ''[[Do Androids Dream of Electric Sheep?]]'', the basis for the film ''[[Blade Runner]]''), with full conversational capabilities that incorporated thousands of pages of the author's works.<ref>{{cite web|url=http://www.pkdandroid.org/about.htm|title=about " PKD Android|website=www.pkdandroid.org|access-date=7 August 2009|archive-date=14 August 2009|archive-url=https://web.archive.org/web/20090814175623/http://www.pkdandroid.org/about.htm|url-status=dead}}</ref> In 2005, the PKD android won a first-place [[artificial intelligence]] award from [[Association for the Advancement of Artificial Intelligence|AAAI]].

===China===
On April 19, 2025, 21 humanoid robots ran alongside 12,000 human runners in a half-marathon in Beijing. Although almost every robot fell or overheated, and the robots were continuously controlled by human handlers accompanying them, six reached the finish line. Two of them, Tiangong Ultra by Chinese robotics company UBTech and N2 by Chinese company Noetix Robotics, which took first and second place respectively among the robots, stood out for their consistent (albeit slow) pace.<ref>{{cite news |last1=Yang |first1=Zeyi |title=Stumbling and Overheating, Most Humanoid Robots Fail to Finish Half-Marathon in Beijing |url=https://www.wired.com/story/beijing-half-marathon-humanoid-robots/ |access-date=22 April 2025 |publisher=Wired |date=April 19, 2025}}</ref>