The fields of artificial intelligence and robotics are built on a foundational duality: the separation of the “mind” (the decision-making intelligence) from the “body” (the system that acts in the world). To achieve true autonomy, a system must have both: a “body” to interact with its environment, which is the Autonomous System, and a “mind” to direct its actions, which is the Intelligent Agent. This report will first define these core components before exploring the advanced research that is fusing them together.
The term “Autonomous System” (AS) is a source of significant confusion, as it holds distinct meanings in at least three separate, high-level fields.1
In Mathematics: An autonomous system is a system of ordinary differential equations, often expressed as dx/dt = f(x), in which the laws governing the system’s evolution do not explicitly depend on the independent variable, such as time.1 This is distinguished from a non-autonomous system, dx/dt = f(x, t), where time is a direct factor.2 In essence, a mathematical AS’s future state depends only on its current state.
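The distinction can be seen in a few lines of numerical integration; the forward-Euler scheme and function names here are illustrative:

```python
# Forward-Euler integration of an autonomous system dx/dt = f(x):
# the update rule depends only on the current state x, never on t.

def euler_autonomous(f, x0, dt, steps):
    """Integrate dx/dt = f(x) with the forward Euler method."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + dt * f(x)  # f sees only the state, not the time
        trajectory.append(x)
    return trajectory

# Example: exponential decay dx/dt = -x (autonomous).
decay = euler_autonomous(lambda x: -x, x0=1.0, dt=0.1, steps=10)

# A non-autonomous system dx/dt = f(x, t) would need the time as an
# extra argument in the update: x = x + dt * f(x, t); t = t + dt.
```

Because the right-hand side never reads the clock, shifting the start time of an autonomous system merely shifts the whole trajectory, which is exactly the time-independence the definition captures.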
In Internet Networking: An autonomous system refers to a collection of IP networks and routers under the control of a single administrative entity.1 This is a core concept for global internet routing (Border Gateway Protocol), where each AS (e.g., a large ISP or tech company) manages its own routing policies.
In AI and Robotics: An autonomous system refers to an autonomous robot or entity that can perform desired tasks in unstructured environments “without continuous human guidance”.1
While these definitions appear unrelated, the unifying theme is “independence” or “self-governance.” The mathematical system is independent of time, the internet system is administratively independent from other networks, and the robotic system is operationally independent from a human controller. For the remainder of this report, “Autonomous System” will refer exclusively to the AI and robotics definition: a physical or software system, such as a robot, that is designed to act on its own.4
The “mind” that drives an autonomous system is known as an Intelligent Agent (IA). In artificial intelligence, an IA is defined as any entity that perceives its environment, takes actions autonomously to achieve specific goals, and may improve its performance by learning.5
This concept can be simplified to a continuous loop: the agent perceives its environment, decides on an action, acts, and then repeats.
This concept is so fundamental that the entire field of Artificial Intelligence is often defined as “the study and creation of these rational agents”.5 Intelligent agents exist on a vast spectrum of complexity, from a simple thermostat (which senses temperature and acts to maintain a goal) to a self-driving car (which senses a complex 3D environment and acts to navigate safely to a destination).6
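The thermostat end of the spectrum can be sketched as a minimal perceive-and-act mapping (the goal temperature, tolerance, and function names are illustrative):

```python
# A thermostat as a minimal intelligent agent: it perceives the
# temperature and acts to keep it near a goal value.

def thermostat_agent(percept_temp, goal=20.0, tolerance=0.5):
    """Map a percept (temperature) to an action (heater command)."""
    if percept_temp < goal - tolerance:
        return "heat_on"
    elif percept_temp > goal + tolerance:
        return "heat_off"
    return "hold"

# The perceive-act loop, unrolled over three successive readings.
actions = [thermostat_agent(t) for t in (15.0, 20.2, 25.0)]
```

Everything on the spectrum above, up to and including a self-driving car, is structurally the same loop; only the richness of the percepts and the sophistication of the decision logic grow.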
An agent’s behavior is not random; it is guided by an objective function (also called a goal function), which “encapsulates their goals”.5 This function serves as a “measure of success,” and a Rational Agent is defined as one that strives to achieve the best possible outcome as defined by this measure.5
This objective function takes different forms depending on the field of study (for example, a reward function in reinforcement learning, a fitness function in evolutionary computation, or a utility function in economics), but the concept remains the same: a quantitative measure the agent tries to optimize.
It is critical to separate an agent’s “Why” (its goal) from its “How” (its methodology). The objective for a vacuum robot is its “Why”: “clean the entire floor.” The methodology it uses is its “How.” This “How” can be a simple reflex (bouncing off walls) or a highly complex, model-based approach (building a digital map of the room). The goal is the same, but the intelligence of the method is different.
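As a toy illustration of separating the "Why" from the "How", the same objective function can score two very different methodologies (the tile counts and scenario are invented for the sketch):

```python
# The "Why": one objective function, a measure of success, applied
# to a vacuum robot. Here: the fraction of floor tiles cleaned.

def objective(cleaned_tiles, total_tiles):
    """Measure of success: 1.0 means the whole floor is clean."""
    return len(cleaned_tiles) / total_tiles

# Two different "Hows" are scored by the same "Why": a reflex bounce
# that happened to reach 3 tiles, and a map-building sweep that
# systematically covered all 10.
reflex_score = objective({0, 1, 2}, total_tiles=10)
mapped_score = objective(set(range(10)), total_tiles=10)
```

The goal never changed; only the intelligence of the method did, and the objective function lets us say precisely how much better one method is.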
The “How” of an agent is determined by its internal architecture. These architectures represent a ladder of increasing intelligence, allowing agents to move from simple reactions to complex, goal-oriented reasoning.5
| Agent Class | Decision-Making Logic (The “How”) | Internal State? |
|---|---|---|
| Simple Reflex Agents | Decisions are based on a direct mapping from situation to action (e.g., “IF obstacle, THEN turn”). | No |
| Model-Based Reflex Agents | Maintains an internal “model” or representation of the world. It makes decisions based on this model and its current perception. | Yes (Partial) |
| Goal-Based Agents | Decisions are based on achieving an explicit goal state (e.g., “Find the charging dock”). | Yes |
| Utility-Based Agents | Chooses the action that maximizes its “utility” (the best outcome), especially when goals conflict (e.g., “Find the fastest and safest path”). | Yes |
| Learning Agents | Can improve its own performance over time by gathering feedback from a “critic” to adjust its decision-making logic. | Yes (Dynamic) |
| Belief-Desire-Intention (BDI) Agents | A logic-based agent that manipulates internal data structures representing its “Beliefs,” “Desires,” and “Intentions” to deliberate on plans. | Yes (Complex) |
Source: Synthesized from classifications in 5
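The utility-based row of the table can be made concrete with a small sketch; the weights and path scores are invented for illustration:

```python
# A utility-based agent: when goals conflict (fast vs. safe), it
# chooses the action that maximizes a combined utility score.

def utility(path, speed_weight=0.5, safety_weight=0.5):
    """Collapse conflicting goals into one number to maximize."""
    return speed_weight * path["speed"] + safety_weight * path["safety"]

paths = [
    {"name": "highway",   "speed": 0.9, "safety": 0.4},
    {"name": "back_road", "speed": 0.5, "safety": 0.9},
]

# The agent picks the path with the highest utility, not merely the
# first path that satisfies a binary goal.
best = max(paths, key=utility)
```

A goal-based agent could only ask "does this path reach the destination?"; the utility function lets the agent trade speed against safety when both matter.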
As we move up this spectrum, we arrive at agents that are not just reactive, but deliberative. This forms the basis of the most advanced research in agent-based AI.
The term “Agentic AI” represents the pinnacle of this spectrum. It is not a fundamentally different concept from an Intelligent Agent, but rather a specialized class of IA defined by its high degree of autonomy and its focus on complex, multi-step problem-solving.
Agentic AI refers to a class of artificial intelligence that focuses on autonomous systems capable of pursuing “complex goals” with “limited or no human intervention”.6 The key capability of an agentic system is that it does not merely perform a single, pre-defined task. Instead, it can “independently analyze challenges, develop strategies and execute” complex, multi-step plans to achieve a high-level goal.10 This requires “sophisticated reasoning and iterative planning”10 that mimics “human decision-making”.11 To accomplish this, an agentic system must integrate and orchestrate various AI techniques, such as natural language processing (NLP), machine learning (ML), and computer vision.8
The most effective way to understand Agentic AI is to contrast it with what it is not. Its foil is Robotic Process Automation (RPA), a common business technology.
This represents a fundamental shift from automating a procedure to automating an outcome. Consider the task of paying an invoice. An RPA bot is given a procedure: “IF you receive an invoice, THEN open it, extract the number from line 10, and paste it into the payment system.” If a new invoice format arrives where the total is on line 11, the RPA bot will fail. Its “fixed logic”8 has broken. An Agentic AI is given a goal: “Pay all invoices.” It receives the new invoice. Using NLP and ML8, it analyzes the document, reasons that the word “Total” is next to the number on line 11, and adapts its strategy. It formulates a new plan to complete its goal, even in the face of an unexpected change. This is the core of agentic behavior.
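The invoice scenario can be sketched in a few lines; the parsing logic and line layout are toy assumptions, not a real RPA tool or NLP pipeline:

```python
# Procedure vs. outcome: an RPA-style bot reads a fixed line number,
# while a goal-driven extractor searches for the meaning ("Total")
# and so adapts when the layout changes.

def rpa_extract_total(invoice_lines):
    """Fixed procedure: assumes the total is always on line 10."""
    return invoice_lines[9]  # silently wrong if the format shifts

def agentic_extract_total(invoice_lines):
    """Goal-driven: find the line labelled 'Total', wherever it is."""
    for line in invoice_lines:
        if "total" in line.lower():
            return line.split()[-1]  # the amount after the label
    return None  # no plan applies; escalate rather than guess

# A new invoice format arrives with the total on line 11.
new_format = ["Invoice #42"] + ["..."] * 9 + ["Total: 199.00"]
```

Running both on `new_format`, the fixed-procedure bot returns the filler from line 10 while the goal-driven extractor still finds `199.00`: the procedure broke, the outcome did not.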
To achieve this level of flexible reasoning, an agent needs a “mental” framework. The Belief-Desire-Intention (BDI) model is one of the “best known and best studied models” for this purpose.12 It was directly inspired by a philosophical model of human practical reasoning, developed by Michael Bratman.13 The core purpose of the BDI software model is to allow an agent to “balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it)”.14 Crucially, it provides a mechanism for “separating the activity of selecting a plan… from the execution” of that plan.14 This separation is what prevents the agent from being paralyzed by indecision, allowing it to commit to a course of action.
The BDI model breaks down an agent’s “thought process” into distinct, interacting components.14
| Component | Role (What it is) | Simple Explanation (Analogy to a “Mind”) |
|---|---|---|
| Beliefs | Informational State: The agent’s current knowledge of the world.14 | “What the agent knows (or thinks it knows) about the world. Its database of facts.” |
| Desires | Motivational State: All objectives or situations the agent could pursue.14 | “What the agent wants. All possible goals, which can be inconsistent (e.g., ‘go to party’ and ‘stay home’).” |
| Intentions | Deliberative State: The desires the agent has committed to achieving.14 | “What the agent has chosen to do. A ‘desire’ that has been adopted for active pursuit.” |
| Plans | Action Sequences: The step-by-step “recipes” the agent can execute.14 | “The ‘how-to’ guides the agent uses to achieve its ‘Intentions’.” |
Source: Synthesized from 14
The “Intention” component is the magic ingredient that elevates an agent from being merely reactive (like a thermostat6) to being truly agentic. It provides stability and focus. A simple agent’s “Desire” (e.g., “set temperature to 70”) is fixed and directly triggers an action. It does not deliberate. A BDI agent, however, deliberates.
Once the agent has deliberated among its competing Desires and committed to one, that Desire becomes an Intention: a stable, committed goal.14 The agent does not re-evaluate all of its desires every second. Instead, it focuses on finding and executing the Plan that achieves its intention. This ability to deliberate, commit, and then focus on execution is what allows an agent to pursue the complex, long-term goals8 that define agentic AI.
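A minimal sketch of this deliberate-then-commit cycle, using the inconsistent "go to party" / "stay home" desires from the table (the preconditions and plan steps are invented for illustration):

```python
# A toy BDI deliberation step: desires may conflict, so the agent
# adopts one as its intention and then executes only that plan.

def deliberate(beliefs, desires, plans):
    """Adopt as the intention the first desire whose plan applies."""
    for desire in desires:
        if plans[desire]["applicable"](beliefs):
            return desire
    return None

beliefs = {"raining": True}
desires = ["go_to_party", "stay_home"]  # mutually inconsistent
plans = {
    "go_to_party": {"applicable": lambda b: not b["raining"],
                    "steps": ["get_dressed", "travel"]},
    "stay_home":   {"applicable": lambda b: True,
                    "steps": ["make_tea", "watch_film"]},
}

intention = deliberate(beliefs, desires, plans)
# Execution now focuses on plans[intention]["steps"] without
# re-evaluating every desire at each step.
```

The separation is visible in the code: `deliberate` runs once to choose, and the plan steps then run to completion unless the agent's beliefs change enough to force reconsideration.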
This section connects the abstract “mind” (the Agentic AI) to the physical “body” (the Autonomous System or Robot). It explores how these agents are “embodied” in the physical world and how they interact with each other to form “swarms.”
An autonomous robot is the physical manifestation of an intelligent agent. The concept that formally links them is the “Embodied Agent”.5 An embodied agent is an intelligent agent that “interacts with the environment through a physical body within that environment”.5 Mobile robots are a prime example of a physically embodied agent.4 In this relationship, the robot’s hardware is the “body” that senses and acts, while the intelligent agent is the “mind” that directs it.
For a physical robot to be truly autonomous, its agent “mind” must manage four distinct capabilities.4
An embodied agent’s “Beliefs” (from the BDI model) are thus built from two distinct channels: the outside world (exteroception) and its own body (proprioception). True autonomy requires the agent to reason over both simultaneously. It must be able to decide, “My Intention is to go to the kitchen, but my Belief is that my battery is at 10% (proprioception) and there is a wall in the way (exteroception). Therefore, I must generate a new plan: go to the charger first, avoiding the wall.”
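That reasoning over both belief channels can be sketched as follows (the thresholds and field names are illustrative):

```python
# Fusing the two belief channels: exteroception (the wall outside)
# and proprioception (the battery inside) both feed replanning.

def choose_goal(exteroception, proprioception, intention="kitchen"):
    """Replan: a low internal battery overrides the current intention."""
    if proprioception["battery"] < 0.2:
        goal = "charger"  # self-preservation preempts the task
    else:
        goal = intention
    # Route around obstacles reported by the external sensors.
    detour = "avoid_wall" if exteroception["wall_ahead"] else "direct"
    return goal, detour

# 10% battery plus a wall ahead: go to the charger, avoiding the wall.
goal, route = choose_goal({"wall_ahead": True}, {"battery": 0.10})
```

Neither sensor channel alone produces the right plan; only their combination yields "charger first, around the wall".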
These embodied agents are already deployed across numerous sectors.4
Autonomy is not limited to a single agent. A Multi-Agent System (MAS) is a computerized system composed of “multiple interacting intelligent agents”.16 The core purpose of a MAS is to solve problems that are “too difficult or impossible for a single agent or a monolithic system” to handle.16 The agents within a MAS can be software programs16, robots16, or even “combined human-agent teams”.16
These systems are defined by three key characteristics16: autonomy (each agent is at least partially independent and self-directed), local views (no single agent has a full global view of the system), and decentralization (no agent is designated as the controlling authority).
The power of a MAS comes from the concept of emergence. A MAS can “manifest self-organisation” and “complex behaviors even when the individual strategies of all their agents are simple”.16 The classic example is a flock of birds16: no single bird is “smart” or knows the flock’s shape. Each bird follows simple rules (e.g., “stay close to neighbors, don’t crash”). But from these simple micro-level rules, the complex, intelligent, and fluid macro-level behavior of the flock “emerges.” The “intelligence” of the system is not in any single agent but in their interactions.
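Emergence can be demonstrated with a deliberately simplified, one-dimensional flocking rule (here each agent nudges toward the group mean; a real boids model uses only nearby neighbors and adds separation and alignment rules):

```python
# Emergence from a simple local rule: each "bird" only nudges toward
# the average position of the others, yet the group coheres.

def step(positions, rate=0.5):
    """One tick: every agent moves partway toward the mean position."""
    mean = sum(positions) / len(positions)
    return [p + rate * (mean - p) for p in positions]

flock = [0.0, 4.0, 10.0]
spread_before = max(flock) - min(flock)
for _ in range(10):
    flock = step(flock)
spread_after = max(flock) - min(flock)  # the flock has coalesced
```

No agent stores the flock's shape or issues commands; the tight cluster "emerges" purely from each agent's simple rule, which is the micro-to-macro jump the flock-of-birds example describes.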
The research makes a subtle but critical distinction between two types of multi-agent studies, which is vital for understanding their purpose.16
| Feature | Multi-Agent System (MAS) | Agent-Based Model (ABM) |
|---|---|---|
| Primary Goal | To solve specific practical or engineering problems (e.g., logistics, disaster response). | To search for insight into collective behavior (e.g., social structures, flocking). |
| Agent Intelligence | Agents are typically “intelligent” (e.g., BDI agents). | Agents do not need to be “intelligent”; they can obey simple rules. |
| Field of Use | “Engineering and technology” | “Science” (e.g., sociology, biology) |
Source: Synthesized from 16
In short, MAS is prescriptive: it’s about building a system to do something. ABM is descriptive: it’s about building a simulation to understand something. A real-world example that combines these approaches is Waymo’s “Carcraft” simulation environment.16
While the capabilities of agentic and autonomous systems are advancing rapidly, the research also highlights profound real-world challenges, risks, and ethical dilemmas that define the frontier of the field.
The primary barrier to widespread, safe autonomy is brittleness. Autonomous robots are “highly vulnerable to unexpected changes in real-world environments”.4 This is not just a problem of handling large, obvious obstacles. The research states that “Even minor variations like a sudden beam of sunlight disrupting vision systems… can cause entire systems to fail”.4 This brittleness exists because robotics is an “inherently systems problem,” where a failure in any single module (perception, planning, or actuation) can “compromise the whole robot”.4 This vulnerability is a symptom of the “open-world” challenge. Agents are often trained on “datasets captured under controlled conditions” and “struggle to generalize” to the messy, dynamic real world.4 They fail when encountering “unknown objects, occlusions,” and “rapidly changing environments”.4
The “beam of sunlight” example4 reveals that the biggest challenge is not just the agent’s “mind” (planning) but its “senses” (perception). The perception module, when blinded by the sun, delivers a false Belief to the agent (e.g., “The road ahead is clear”). The agent’s planning module might be perfect, but it is now acting on corrupted information, leading to a catastrophic failure. This “reality gap”4 between simulation and the real world means the research frontier is focused not just on making smarter agents, but more robust agents that can engage in “self-supervised, lifelong learning”4 to adapt to the open world.
As agents become more autonomous, they introduce new and scalable risks.
Security and Privacy:
Research into new “agentic web” products (like Microsoft’s NLWeb8) has already raised specific alarms.8
This risk is scalable precisely because of the agent’s autonomy. A human using a web browser is predictable. An autonomous agent, given a goal like “Find me the cheapest flight,” might autonomously share the user’s entire profile (their “Beliefs” and “Desires”) with dozens of servers simultaneously to achieve its objective, using its own “non-standard protocols” that bypass traditional firewalls.
Ethical Dilemmas:
The most profound ethical questions concern “Lethal Autonomous Weapons Systems” (LAWS).4 This is not a theoretical, futuristic concern. It is a present-day reality, embodied by systems like the SGR-A1 autonomous sentry gun.4 This “highly classified”4 system, with its integrated “surveillance, tracking, firing, and voice recognition” capabilities, moves the “human-in-the-loop” to “human-on-the-loop,” or potentially, out of the loop entirely. These developments are the subject of ongoing discussions at the United Nations, highlighting the “societal and economic impacts”4 as autonomy becomes more pervasive.
The future of autonomous systems is being defined by the fusion of classic agent architectures with the power of modern Large Language Models (LLMs). Research into “language model-based multi-agent systems” has been identified as a “new area of research” and a “new paradigm” for application development.16
This fusion represents the report’s culminating point: the future of Agentic AI appears to be an LLM-based BDI model.
In this new paradigm, the LLM supplies the flexible, generative reasoning (interpreting high-level goals and drafting candidate plans), while the classic agent architecture supplies the stable structure: beliefs to ground the agent, committed intentions to focus it, and executable plans to act on.
This fusion of the stable, logical BDI framework with the generative and flexible reasoning power of LLMs is what bridges the gap from simple logic to the “human-like decision-making”11 that defines the agentic revolution. As research solves for “fault-tolerance”16 and “multi-robot coordination”4, this new model points toward a future of increasingly capable and integrated autonomous systems.
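A speculative sketch of such an LLM-based BDI loop, with the language model stubbed out by a canned `call_llm` function (hypothetical; no real vendor API or prompt format is assumed):

```python
# Sketch of an LLM-based BDI cycle: the BDI structure supplies the
# stable beliefs and the committed intention; the (stubbed) LLM
# supplies the flexible, generative planning.

def call_llm(prompt):
    """Hypothetical LLM call, stubbed with canned plans for the sketch."""
    canned = {
        "pay all invoices": ["find_invoices", "extract_totals", "submit_payments"],
    }
    for goal, plan in canned.items():
        if goal in prompt.lower():
            return plan
    return ["ask_user_for_clarification"]  # no confident plan: defer

def llm_bdi_cycle(beliefs, goal):
    """Commit to the goal as an intention, then let the LLM draft a plan."""
    intention = goal  # deliberation: the goal is adopted and held stable
    prompt = f"Beliefs: {beliefs}. Produce a plan to: {intention}"
    plan = call_llm(prompt)  # generative planning replaces a fixed plan library
    return intention, plan

intention, plan = llm_bdi_cycle({"inbox": 3}, "Pay all invoices")
```

The design point is the division of labor: the intention stays fixed and auditable in ordinary data structures, while only the plan-generation step is delegated to the model's open-ended reasoning.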