== Limitations ==
The standard Q-learning algorithm (using a <math>Q</math> table) applies only to discrete action and state spaces. [[Discretization]] of these values leads to inefficient learning, largely due to the [[curse of dimensionality]]. However, there are adaptations of Q-learning that attempt to solve this problem, such as Wire-fitted Neural Network Q-Learning.<ref>{{Cite web|last1=Gaskett|first1=Chris|last2=Wettergreen|first2=David|last3=Zelinsky|first3=Alexander|date=1999|title=Q-Learning in Continuous State and Action Spaces|url=http://users.cecs.anu.edu.au/~rsl/rsl_papers/99ai.kambara.pdf}}</ref>
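To make the table dependence concrete, the following is a minimal sketch of the tabular update in Python. The environment dimensions (<code>n_states</code>, <code>n_actions</code>) and the hyperparameters (<code>alpha</code>, <code>gamma</code>) are illustrative assumptions, not values from any particular source:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sizes: a toy environment with 16 discrete states and 4 discrete actions.
n_states, n_actions = 16, 4
alpha, gamma = 0.1, 0.99  # assumed learning rate and discount factor

# The Q table holds one entry per (state, action) pair, so its size grows
# multiplicatively with the resolution used to discretize each state variable.
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s', a')."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
</syntaxhighlight>

Because <code>s</code> and <code>a</code> must be integer indices into the table, continuous quantities have to be binned first, and refining the bins of each of <math>d</math> state variables multiplies the table size by a factor per variable (e.g. halving every bin width multiplies it by <math>2^d</math>). This exponential growth is the curse of dimensionality referred to above.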