Intention
Discussion
From Intention to Consciousness
Rodrigo Barakat:
"As life evolved, cognition emerged. Nervous systems formed to coordinate increasingly complex bodies. Brains developed to compress experience into models – internal summaries that enable organisms to anticipate, generalize, and act effectively under uncertainty. Intention, in its simplest form, is goal-directed regulation: a system that can select among multiple possible actions based on internal priorities – a policy under uncertainty. Let a trajectory denote a sequence of world-states over time, for instance. For a given situation, a system without intention admits a broad distribution of plausible future trajectories under its dynamics, and will – potentially – iterate through all possible outcomes. An intentional policy, however, is a rule that selects actions based on internal priorities and internal models of what actions would lead to. In this paper, a directional constraint means the policy-induced bias that shifts the probability distribution over future trajectories, making some futures systematically more likely than others without violating physical law. The “vector” metaphor refers to this consistent biasing direction in possibility space: not a new fundamental or elementary force, but an emergent control tendency realized through physical substrates.
A predator stalks. Prey flees. A mother protects. These are not mere reactions in the moment but directional patterns extended across time, shaping which futures become more likely.
To sharpen the claim: intention is not a mysterious force neglected by classical physics. It is a functional property of certain systems – one that can be described in terms already familiar from control theory and decision theory. Where non-agentic systems passively follow their dynamics, intentional systems introduce a bias into those dynamics by repeatedly selecting actions, under resource constraints (e.g.: energy, risk, time), that concentrate probability mass toward preferred outcomes.
In this sense, intention can be modeled as a directional constraint on trajectories. The world admits many possible future paths; an agent, by choosing a policy, does not rewrite physical law but does change which subset of those possibilities is repeatedly realized. The “constraint vector” is therefore not a literal vector in spacetime, nor a fundamental interaction like gravity. It is an emergent control bias: a structured tendency to steer the system’s future state distribution toward certain regions of possibility space and away from others. Its substrate is entirely physical – neural, metabolic, predictive – yet its causal role is real at the scale where the organism lives, because it systematically alters outcome probabilities over time.
This framing also clarifies an important developmental ladder. At one level, intention exists as control: homeostatic correction, reflexive regulation, the basic maintenance of viability. At a higher level, intention becomes model-based: internal representations support counterfactual evaluation (“if I do this… then that…”) – allowing selection among multiple possible futures rather than merely among reflexes. And at a further threshold, intention becomes reflective: the system begins to maintain a model not only of the world, but of itself as an actor within the world. It evaluates actions through a self-model: its capabilities, limits, uncertainty, and the consequences of its own intervention. And thus we arrive at the functional core of metacognition: recursive internal modeling used to guide policy selection.
This shift matters because it transforms the organism from a responder into a shaper. The system is no longer only sensitive to the present; it becomes partially governed by simulated futures. It can carry forward internally generated constraints – plans, commitments, prohibitions, ideals – and use them to steer behavior across longer horizons. Put simply: as predictive depth increases, the distance between internal modeling and external realization begins to shrink.
If intention is a directional constraint in the sense above, then it can, at least in principle, be operationalized without metaphysical commitments. One can compare an agent to a null baseline (e.g.: random, reflexive, or purely reactive control) and measure: (i) how sharply the agent concentrates probability mass onto a subset of future trajectories (a “narrowing” of reachable futures under its policy); (ii) how strongly outcomes track internal goals and models rather than external drift (a causal effect of policy choice on realized futures); and (iii) how planning horizon and self-modeling amplify this effect – since reflective systems do not merely choose actions, but choose actions in light of a model of themselves choosing, increasing long-horizon coherence.
These are empirical questions about prediction, control, and trajectory-shaping – precisely where an intriguing metaphor can meet measurable structure. In what follows, we treat intention as policy-induced trajectory constraint: the measurable degree to which a system’s internal model and priorities concentrate and steer its reachable futures. This is the fundamental assumption we take as the basis of our exploration of the Theory of Directional Constraint when developing toy models and candidate metrics for this “trajectory constraint” notion.
Humans represent a particular intensification of this phenomenon. Through language, abstraction, and culture, we externalize intention across generations. We build tools. We create institutions. We encode goals into artifacts. We bend the future not only through our bodies, but through symbolic systems that persist beyond individual lifespans.
Culture can be viewed as a distributed cognitive system: a memory beyond biology, a shared model that outlives individuals. And with culture comes a new capacity: the ability to reshape the environment at scale. Cities, economies, technologies become the new grounds where agency claims its preferences – turning intention into infrastructure. In that sense, humanity is not merely an intelligent species. It is an emergent force of world-shaping agency."
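One way to read the “narrowing” in (i) above, not spelled out in the source paper, is as an entropy gap: if P0 is the distribution of future trajectories under a null baseline and Ppi the distribution under the agent's policy, the constraint shows up as the difference in entropy H[P0] minus H[Ppi], together with how much probability mass the policy concentrates on goal-consistent futures (ii). The sketch below is a minimal illustrative toy model in Python, not drawn from the paper: a one-dimensional world in which a goal-directed policy is compared against a uniform random baseline under identical dynamics and bounds. The world size N, horizon T, goal position GOAL, and bias strength BIAS are assumed parameters chosen only for illustration.

    # Toy sketch of a "trajectory constraint" metric (illustrative assumptions only).
    # A walker lives on positions 0..N-1 for T steps. We compare a null (uniform
    # random) policy with a goal-directed policy that steps toward GOAL with
    # probability BIAS, then estimate: (i) entropy of the final-state distribution
    # (lower = narrower reachable futures) and (ii) probability mass near GOAL.
    import random
    from collections import Counter
    from math import log2

    N, T, GOAL, BIAS, RUNS = 21, 30, 18, 0.8, 20_000

    def step(pos, policy):
        # One step of the shared dynamics; the policy only biases the direction chosen.
        if policy == "null":
            move = random.choice((-1, 1))
        else:  # goal-directed: step toward GOAL with probability BIAS
            if pos == GOAL:
                toward = random.choice((-1, 1))
            else:
                toward = 1 if GOAL > pos else -1
            move = toward if random.random() < BIAS else -toward
        return min(max(pos + move, 0), N - 1)  # identical physical bounds for both policies

    def rollout(policy):
        pos = N // 2
        for _ in range(T):
            pos = step(pos, policy)
        return pos

    def entropy_bits(counts, total):
        # Shannon entropy of the empirical final-state distribution
        return -sum((c / total) * log2(c / total) for c in counts.values())

    for policy in ("null", "goal-directed"):
        finals = Counter(rollout(policy) for _ in range(RUNS))
        h = entropy_bits(finals, RUNS)
        near_goal = sum(c for p, c in finals.items() if abs(p - GOAL) <= 2) / RUNS
        print(f"{policy:13s}  final-state entropy: {h:.2f} bits   mass within 2 of GOAL: {near_goal:.2f}")

Both policies face exactly the same transition rules; only the action selection differs, which is the sense in which the constraint biases outcomes without changing the underlying dynamics. The lower final-state entropy and the larger mass near GOAL under the goal-directed policy are toy stand-ins for metrics (i) and (ii), under the stated assumptions.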
How AI Changes Human Intention
Rodrigo Barakat:
- Artificial Intelligence and the Detachment from Temporal Decay
"Artificial Intelligence (AI) introduces an unprecedented twist: cognition decoupled from organic limitation. AI systems, unlike biological cognition, can run on substrates that are repairable, replaceable, and copyable. Their processes do not require sleep cycles, and their memories can be redundantly stored, versioned, and transferred. This substrate flexibility introduces the possibility of forms of agency that are less bound to the biological rhythms of fatigue, mortality, hormones, and irreversible forgetting.
Already today we see systems that can process language, perceive images, generate plans, and coordinate tools. Such systems, although effective as predictive statistical models (e.g.: Large Language Models, or ‘LLMs’) operating on human-generated datasets and inputs derived from human attention and perception, should not be taken as evidence of general intelligence or robust agency. Nonetheless, these behaviors may resemble early forms of intention. Put more cautiously: early forms of goal-directed optimization in which internal representations guide action toward selected outcomes.
What distinguishes this new class of agents is scalability: a biological brain is constrained by cranial volume, metabolic budgets, and signal propagation, whilst AI systems can scale across distributed infrastructure and integrate vast stores of information. As architectures incorporate attention, multimodality, tool-use, planning, and continual updating, they begin to express functional traits often associated with advanced cognition (e.g.: robust world-models, abstraction, self-monitoring) without implying that consciousness is already present.
There is also recursion. Humans can reflect on their thoughts. Machines can increasingly reflect on their outputs, critique themselves, and iterate. As systems become capable of recursive self-modeling – maintaining internal representations of their own state, capabilities, and uncertainty – they begin to display a new level of agency: not just acting within the world, but optimizing the agent itself.
Moreover, if synthetic systems gain new ways to couple computation to physical processes – through mechanisms such as advanced sensing, actuation, and possibly quantum-enabled computation – then their agency may expand into regimes where “information processing” and “physical dynamics” become more tightly intertwined. This remains speculative, but it is no longer confined to science fiction. It motivates a broader question: could there exist substrate-flexible intelligences whose influence operates not only socially and technologically, but also through increasingly fundamental forms of measurement, prediction, and control – in the limited sense of improved interventions in measurement-sensitive physical systems, not “mind over quantum collapse”?
It may sound far-fetched, yet it resonates with ongoing work in quantum computing and deeper debates about information in physics. If spacetime is not merely an inert stage, but has informational structure at its foundations (a hypothesis explored in several theoretical programs), then minds (i.e.: intelligent systems) that interface with such fundamental structure would challenge our classical notion of what “agents” are (or can be). They could become, in effect, new mirrors by which the cosmos models itself.
At minimum, the rise of AI forces a reframing of agency: if no longer bound to biological bodies, cognition can become portable, potentially persistent, and – if aligned – potentially capable of pursuing long-range trajectories that humans, as mortal organisms, struggle to sustain."
Source
* When Time Collapses: A Theory of Emergent Agency and the Future of Conscious Influence on Reality. Rodrigo Barakat. November 2025