== Human-like behaviors and crowd AI ==
[[File:Crowd simulation, Covent Garden.jpg|thumb|A crowd simulation of [[Covent Garden square]], London, showing a crowd of pedestrian agents reacting to a street performer]]
{{Main|Swarm intelligence}}
To simulate more aspects of human activities in a crowd, more is needed than path and motion planning. Complex social interactions, smart object manipulation, and hybrid models are challenges in this area. Simulated crowd behavior is inspired by the flow of real-world crowds. Behavioral patterns, movement speeds and densities, and anomalies are [[crowd analysis|analyzed]] across many environments and building types. Individuals are [[crowd tracking|tracked]] and their movements are documented so that algorithms can be derived and implemented into crowd simulations.

Individual entities in a crowd are also called [[Software agent|agents]]. In order for a crowd to behave realistically, each agent should act autonomously, that is, be capable of acting independently of the other agents. This idea is referred to as an ''agent-based model''. Moreover, it is usually desired that the agents act with some degree of intelligence (i.e., the agents should not perform actions that would cause them to harm themselves). For agents to make intelligent and realistic decisions, they should act in accordance with their surrounding environment, react to its changes, and react to the other agents. [[Demetri Terzopoulos|Terzopoulos]] and his students have pioneered agent-based models of pedestrians, an approach referred to as multi-human simulation to distinguish it from conventional crowd simulation.<ref name=AP/><ref name="Yu">{{cite conference |doi=10.2312/SCA/SCA07/119-128 |date=2007 |last1=Yu |first1=Qinxin |last2=Terzopoulos |first2=Demetri |title=A Decision Network Framework for the Behavioral Animation of Virtual Humans |conference=Eurographics/Siggraph Symposium on Computer Animation |isbn=978-3-905673-44-9 }}</ref><ref name=Huang/>

=== Rule-based AI ===
[[File:MaslowsHierarchyOfNeeds.svg|alt=Maslow's Hierarchy of Needs|thumb|Maslow's Hierarchy of Needs]]
In rule-based AI, virtual agents follow scripts: "if this happens, do that". This is a good approach to take if agents with different roles are required, such as a main character and several background characters. This type of AI is usually implemented with a hierarchy, such as [[Maslow's hierarchy of needs]], where the lower a need lies in the hierarchy, the stronger it is.{{fact|date=February 2025}}

For example, consider a student walking to class who encounters an explosion and runs away. Initially, the first four levels of the student's needs are met, and the student acts according to the need for self-actualization. When the explosion happens, the student's safety is threatened, which is a much stronger need, so the student acts according to that need instead.{{fact|date=February 2025}} This approach is scalable and can be applied to crowds with a large number of agents. Rule-based AI, however, does have some drawbacks. Most notably, the behavior of the agents can become very predictable, which may cause a crowd to behave unrealistically.{{fact|date=February 2025}} A minimal sketch of this idea is shown below.
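The following Python sketch illustrates one possible way to encode such a need hierarchy; the need names, rule table, and ''choose_action'' function are illustrative assumptions rather than part of any particular crowd-simulation system. Rules are ordered from the strongest (lowest) need upward, and the first rule whose condition holds decides the action.

<syntaxhighlight lang="python">
# Illustrative rule-based agent (hypothetical names, not from a specific library).
# Needs are checked from the bottom of the hierarchy upward; the first unmet
# (i.e. strongest) need that applies decides the action.

NEED_RULES = [
    # (need, condition on the agent's perceived state, action to take)
    ("physiological",      lambda s: s["energy"] < 0.2,  "seek_food"),
    ("safety",             lambda s: s["threat_nearby"], "flee"),
    ("belonging",          lambda s: s["isolated"],      "join_group"),
    ("self_actualization", lambda s: True,               "walk_to_class"),
]

def choose_action(perceived_state):
    """Return the action for the strongest need whose condition holds."""
    for need, condition, action in NEED_RULES:
        if condition(perceived_state):
            return action
    return "idle"

# The student walking to class: only self-actualization applies.
print(choose_action({"energy": 0.9, "threat_nearby": False, "isolated": False}))  # walk_to_class
# An explosion threatens safety, a stronger need, so the agent flees.
print(choose_action({"energy": 0.9, "threat_nearby": True, "isolated": False}))   # flee
</syntaxhighlight>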
=== Learning AI ===
In learning AI, virtual characters behave in ways that have been tested to help them achieve their goals. Agents experiment with their environment, or a sample environment similar to their real one.{{fact|date=February 2025}} Agents perform a variety of actions and learn from their mistakes. Each agent alters its behavior in response to rewards and punishments it receives from the environment. Over time, each agent develops behaviors that are consistently more likely to earn high rewards.{{fact|date=February 2025}} If this approach is used with a large number of possible behaviors and a complex environment, agents will act in a realistic and unpredictable fashion.{{fact|date=February 2025}}

==== Algorithms ====
A wide variety of machine learning algorithms can be applied to crowd simulations.{{fact|date=February 2025}}

Q-learning is an algorithm within machine learning's subfield known as reinforcement learning. In a basic overview of the algorithm, each action is assigned a Q value and each agent is given the directive to always perform the action with the highest Q value. In this case, learning applies to the way in which Q values are assigned, which is entirely reward based. When an agent encounters a state, s, and an action, a, the algorithm estimates the total reward value that the agent would receive for performing that state–action pair. After calculating this data, it is stored in the agent's knowledge and the agent proceeds to act from there.{{fact|date=February 2025}}

The agent constantly alters its behavior depending on the best Q value available to it, and as it explores more and more of the environment, it eventually learns the most optimal state–action pairs to perform in almost every situation. The following function outlines the bulk of the algorithm:

:''Q(s, a) ← r + max<sub>a′</sub> Q(s′, a′)''

Given a state s and action a, r and s′ are the reward and state after performing (s, a), and a′ ranges over all the actions.<ref>{{cite journal |last1=Torrey |first1=Lisa |title=Crowd Simulation Via Multi-Agent Reinforcement Learning |journal=Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment |date=10 October 2010 |volume=6 |issue=1 |pages=89–94 |doi=10.1609/aiide.v6i1.12390 }}</ref>
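The Python sketch below shows one common tabular form of this update for a single agent, with a learning rate ''α'' and discount factor ''γ'' added, as is usual in practice. The ''env'' object (with ''reset'' and ''step'' methods in the style of common reinforcement-learning interfaces), the function name, and the default parameter values are assumptions for illustration, not part of the cited work.

<syntaxhighlight lang="python">
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning sketch. `env` is assumed to provide
    reset() -> state and step(action) -> (next_state, reward, done)."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: usually take the action with the highest Q value,
            # occasionally explore a random one.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # Update toward r + gamma * max_a' Q(s', a'), as in the formula above.
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

            state = next_state
    return Q
</syntaxhighlight>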