What Do I Mean When I Use the Word "Agency"?
I attempt to define agency and get sidetracked by AI ethics.
A couple of months back, I got absorbed into a long debate at a rationality meetup. We were discussing a paper about whether LLMs have agency. I had joined the group a week earlier to discuss a paper about consciousness and whether that was the correct metric to use for AI rights and moral consideration. At what point would we define a being as sentient and therefore deserving of moral consideration? I argued that perhaps a sense of agency was more important than consciousness. After all, babies, dogs, and the elderly are all conscious, but we often choose things on their behalf. The situations we put them into (no matter how well meaning) reflect what we think is best for them, not an agentic choice they have made for themselves. In the same vein, we might at some point define an LLM as conscious - although there is a lot of disagreement about exactly where that point lies - but continue to make choices for it based on our own preferences.
If we instead use agency as a metric - and by agency I mean what the thing strives to achieve for itself - then we have a much clearer marker for when moral consideration is due. This feels closer to some of our established norms. Once a child starts trying to do things for themselves or makes requests, provided they have attentive caregivers, they are often assisted with their goals or even left to make choices for themselves, albeit within safe parameters. As we age, our autonomy tends to decrease and we become reliant on others to achieve our goals. Sometimes, if we are sick enough, we sign away our right to make decisions and these choices fall to others. We appear to lose agency.
Surely we could apply this to LLMs too? After posing this question I was pointed to several interesting posts* on LessWrong about the matter. The following week we ended up discussing this post. We quickly got lost in a long debate defining agency versus agents. Semantic debates would prove to be an ongoing theme within the group during the months I attended, much to my amusement.** Some people felt that a thermostat has agency due to its ability to respond to a stimulus - it regulates temperature based on the temperature it detects. Others found this idea preposterous. I get where this line of reasoning comes from, but it doesn't align with my definition of agency. I intuit that there is something of embodied desire in agency, followed by the choice to take action. It is not something as simple as an if/else branch. I think this debate gets a little misdirected when considering the idea of LLMs as agents - which seems like a fairly obvious outcome, and one I have no confusion or disagreement about. I am more interested in the cutoff point for moral consideration based on agency. I think that while we can design and create agents, they might not necessarily have the "creaturely" attribute of agency. This characteristic stems from biological needs, embodiment, and subjective, first-person phenomenological experience. Actions propel an individual toward a different state of being. I choose to swim when I am uncomfortably warm; a dog scratches the door because it wants to be let out; my grandfather walked around naked for reasons unknown to me. These everyday acts of agency are grounded in embodiment and stimuli. My aging grandfather's actions seemed irrational to me at times, yet I still respected his capacity for agency. Action is taken whether or not we understand the motives of the agents. This creates some internal conflict for me when thinking about AI. It is difficult to determine when an entity genuinely has desires.
Is agency something that is granted or taken? What arguments have been made for non-embodied agency?
The definitions that I've arrived at look something like this:
- agency: having both an embodied sense of desire as well as the volition to act on that desire in order to elicit change in one's environment.
- agent: an entity that can respond to stimuli and take action.
- agentic: the quality of having agency; the capacity to desire and pursue.
*
- https://www.lesswrong.com/posts/rmfjo4Wmtgq8qa2B7/think-carefully-before-calling-rl-policies-agents
- https://www.lesswrong.com/posts/tqs4eEJapFYSkLGfR/the-agency-overhang
- https://www.lesswrong.com/posts/AWoZBzxdm4DoGgiSj/ability-to-solve-long-horizon-tasks-correlates-with-wanting
- https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as
- https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version
- https://www.lesswrong.com/posts/GGn8MBiY8Xz6NdNdH/the-power-of-reinforcement
- https://www.lesswrong.com/posts/oJwJzeZ6ar2Hr7KAX/subagents-akrasia-and-coherence-in-humans
**
Interacting with people from a variety of backgrounds reveals a fascinating divide, as each field defines terms slightly differently. Perhaps this is where the ongoing issue of the humanities and sciences not understanding each other comes from. Claude summarises this struggle quite nicely:
"Agency refers to the capacity of an entity to act in and upon their environment, making choices and exercising their will. It encompasses several key aspects:
1. Autonomy - The ability to make independent decisions
2. Intentionality - Acting with purpose and deliberate goals
3. Self-awareness - Understanding oneself as an actor with choices
4. Causality - Being able to cause effects in the world
5. Moral responsibility - Being accountable for one's actions
In philosophy, agency is often discussed in relation to free will, consciousness, and moral responsibility. In sociology, it refers to the capacity of individuals to act independently despite social structures.
In AI and cognitive science contexts, agency is sometimes defined as the ability to perceive an environment, process that information, make decisions, and act to affect outcomes according to goals or values.
The concept varies across disciplines, but generally centers on this capacity for intentional action and choice-making."
Thanks to Boyd Kane for giving feedback on my first draft.