===Embedded machine learning===
Embedded machine learning is a sub-field of machine learning where models are deployed on [[embedded systems]] with limited computing resources, such as [[wearable computer]]s, [[edge device]]s and [[microcontrollers]].<ref>{{Cite book|last1=Fafoutis|first1=Xenofon|last2=Marchegiani|first2=Letizia|last3=Elsts|first3=Atis|last4=Pope|first4=James|last5=Piechocki|first5=Robert|last6=Craddock|first6=Ian|title=2018 IEEE 4th World Forum on Internet of Things (WF-IoT) |chapter=Extending the battery lifetime of wearable sensors with embedded machine learning |date=7 May 2018|chapter-url=https://ieeexplore.ieee.org/document/8355116|pages=269–274|doi=10.1109/WF-IoT.2018.8355116|hdl=1983/b8fdb58b-7114-45c6-82e4-4ab239c1327f|isbn=978-1-4673-9944-9|s2cid=19192912|url=https://research-information.bris.ac.uk/en/publications/b8fdb58b-7114-45c6-82e4-4ab239c1327f |access-date=17 January 2022|archive-date=18 January 2022|archive-url=https://web.archive.org/web/20220118182543/https://ieeexplore.ieee.org/abstract/document/8355116?casa_token=LCpUeGLS1e8AAAAA:2OjuJfNwZBnV2pgDxfnEAC-jbrETv_BpTcX35_aFqN6IULFxu1xbYbVSRpD-zMd4GCUMELyG|url-status=live}}</ref><ref>{{Cite web|date=2 June 2021|title=A Beginner's Guide To Machine learning For Embedded Systems|url=https://analyticsindiamag.com/a-beginners-guide-to-machine-learning-for-embedded-systems/|access-date=17 January 2022|website=Analytics India Magazine|language=en-US|archive-date=18 January 2022|archive-url=https://web.archive.org/web/20220118182754/https://analyticsindiamag.com/a-beginners-guide-to-machine-learning-for-embedded-systems/|url-status=live}}</ref><ref>{{Cite web|last=Synced|date=12 January 2022|title=Google, Purdue & Harvard U's Open-Source Framework for TinyML Achieves up to 75x Speedups on FPGAs {{!}} Synced|url=https://syncedreview.com/2022/01/12/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-183/|access-date=17 January 2022|website=syncedreview.com|language=en-US|archive-date=18 January 2022|archive-url=https://web.archive.org/web/20220118182404/https://syncedreview.com/2022/01/12/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-183/|url-status=live}}</ref><ref>{{Cite journal | last1 = AlSelek | first1 = Mohammad | last2 = Alcaraz-Calero | first2 = Jose M. | last3 = Wang | first3 = Qi | year = 2024 | title = Dynamic AI-IoT: Enabling Updatable AI Models in Ultralow-Power 5G IoT Devices | journal = IEEE Internet of Things Journal | volume = 11 | issue = 8 | pages = 14192–14205 | doi = 10.1109/JIOT.2023.3340858 | url = https://research-portal.uws.ac.uk/en/publications/c8edfe21-77d0-4c3e-a8bc-d384faf605a0 }}</ref> Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets.
Embedded machine learning can be achieved through various techniques, such as [[hardware acceleration]],<ref>{{Cite book|last1=Giri|first1=Davide|last2=Chiu|first2=Kuan-Lin|last3=Di Guglielmo|first3=Giuseppe|last4=Mantovani|first4=Paolo|last5=Carloni|first5=Luca P.|title=2020 Design, Automation & Test in Europe Conference & Exhibition (DATE) |chapter=ESP4ML: Platform-Based Design of Systems-on-Chip for Embedded Machine Learning |date=15 June 2020|chapter-url=https://ieeexplore.ieee.org/document/9116317|pages=1049–1054|doi=10.23919/DATE48585.2020.9116317|arxiv=2004.03640|isbn=978-3-9819263-4-7|s2cid=210928161|access-date=17 January 2022|archive-date=18 January 2022|archive-url=https://web.archive.org/web/20220118182342/https://ieeexplore.ieee.org/abstract/document/9116317?casa_token=5I_Tmgrrbu4AAAAA:v7pDHPEWlRuo2Vk3pU06194PO0-W21UOdyZqADrZxrRdPBZDMLwQrjJSAHUhHtzJmLu_VdgW|url-status=live}}</ref><ref>{{Cite web|last1=Louis|first1=Marcia Sahaya|last2=Azad|first2=Zahra|last3=Delshadtehrani|first3=Leila|last4=Gupta|first4=Suyog|last5=Warden|first5=Pete|last6=Reddi|first6=Vijay Janapa|last7=Joshi|first7=Ajay|date=2019|title=Towards Deep Learning using TensorFlow Lite on RISC-V|url=https://edge.seas.harvard.edu/publications/towards-deep-learning-using-tensorflow-lite-risc-v|access-date=17 January 2022|website=[[Harvard University]]|archive-date=17 January 2022|archive-url=https://web.archive.org/web/20220117031909/https://edge.seas.harvard.edu/publications/towards-deep-learning-using-tensorflow-lite-risc-v|url-status=live}}</ref> [[approximate computing]],<ref>{{Cite book|last1=Ibrahim|first1=Ali|last2=Osta|first2=Mario|last3=Alameh|first3=Mohamad|last4=Saleh|first4=Moustafa|last5=Chible|first5=Hussein|last6=Valle|first6=Maurizio|title=2018 25th IEEE International Conference on Electronics, Circuits and Systems (ICECS) |chapter=Approximate Computing Methods for Embedded Machine Learning |date=21 January 2019|chapter-url=https://ieeexplore.ieee.org/document/8617877|pages=845–848|doi=10.1109/ICECS.2018.8617877|isbn=978-1-5386-9562-3|s2cid=58670712|access-date=17 January 2022|archive-date=17 January 2022|archive-url=https://web.archive.org/web/20220117031855/https://ieeexplore.ieee.org/abstract/document/8617877?casa_token=arUW5Oy-tzwAAAAA:I9x6edlfskM6kGNFUN9zAFrjEBv_8kYTz7ERTxtXu9jAqdrYCcDbbwjBdgwXvb6QAH_-0VJJ|url-status=live}}</ref> and model optimisation.<ref>{{Cite web|title=dblp: TensorFlow Eager: A Multi-Stage, Python-Embedded DSL for Machine Learning.|url=https://dblp.org/rec/journals/corr/abs-1903-01855.html|access-date=17 January 2022|website=dblp.org|language=en|archive-date=18 January 2022|archive-url=https://web.archive.org/web/20220118182335/https://dblp.org/rec/journals/corr/abs-1903-01855.html|url-status=live}}</ref><ref>{{Cite journal|last1=Branco|first1=Sérgio|last2=Ferreira|first2=André G.|last3=Cabral|first3=Jorge|date=5 November 2019|title=Machine Learning in Resource-Scarce Embedded Systems, FPGAs, and End-Devices: A Survey|journal=Electronics|volume=8|issue=11|pages=1289|doi=10.3390/electronics8111289|issn=2079-9292|doi-access=free|hdl=1822/62521|hdl-access=free}}</ref> Common optimisation techniques include [[Pruning (artificial neural network)|pruning]], [[Quantization (Embedded Machine Learning)|quantisation]], [[knowledge distillation]], low-rank factorisation, network architecture search, and parameter sharing.
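The following is a minimal illustrative sketch (not drawn from the cited sources) of post-training weight quantisation, one of the optimisation techniques listed above. It uses plain NumPy and hypothetical function names to show how 32-bit floating-point weights can be mapped to 8-bit integers, and back, to reduce a model's memory footprint on a resource-constrained device.

<syntaxhighlight lang="python">
# Illustrative sketch of uniform affine (asymmetric) 8-bit quantisation.
# Function names are hypothetical; real deployments typically rely on a
# framework's converter rather than hand-rolled code.
import numpy as np

def quantize_int8(weights):
    """Map a float32 tensor to int8 values plus a scale and zero point."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0
    if scale == 0.0:              # constant tensor: avoid division by zero
        scale = 1.0
    zero_point = int(round(-128 - w_min / scale))
    q = np.round(weights / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 weights from the quantised form."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(weights)
error = np.abs(weights - dequantize(q, scale, zp)).max()
print("max absolute reconstruction error:", error)
</syntaxhighlight>

Storing the int8 values together with a per-tensor scale and zero point reduces weight storage roughly fourfold compared with float32, at the cost of a small, bounded reconstruction error (at most half a quantisation step per weight).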