"Agency and Intentions in Artificial Intelligence" (AIAI) builds on the success of our workshop series "Agency and Intentions in Language" (AIL), which brings together scholars in theoretical linguistics, philosophy, and psychology who are interested in questions related to agency, intentions, reasoning about actions, and causation. AIAI aims to extend this interdisciplinary theoretical discussion of the fundamental principles underlying human-human interaction to human-machine interaction, broadly construed.
Talk of "artificial intelligence" is everywhere. From medical diagnosis to relationship chatbots, AI technology is rapidly improving at the diverse tasks it can perform, offering genuine benefits to human social life along with novel risks. With so much at stake, it is surprising that we have so little basic theoretical understanding of AI systems as unique agents that encode, or can be interpreted as encoding, intentional actions in communication with humans. The goal of this conference is to start a sober conversation about AI systems as agent-like collaborators.
In particular, we are interested in understanding whether (and how) the conceptual baggage that Large Language Models (LLMs) come with is similar to (or different from) the conceptual fundamentals that underlie human linguistic competence. LLMs are often used by humans in unique forms of request-making, collaboration, and problem-solving, the same range of tasks that has shaped the evolution of our faculty of language. We are now in an era where these two language-related systems interact with each other.
We want to bring together theoretical linguists, philosophers, cognitive scientists, and computer scientists in a rich, multi-faceted discussion of conceptual representations of agency and intentions in LLMs and their connection to related representations grounded in human linguistic competence. The conference is interdisciplinary in nature. Rather than viewing the complexity of these topics as an obstacle to productive conversation, we see it as an opportunity to bring thinkers from diverse backgrounds together to share tools, methods, theories, and perspectives on how to make sense of agency in non-human computational systems and its interaction with human agency. We do not expect all presenters at the conference to share the same methodological assumptions or research backgrounds, nor do we expect such congruence among our attendees. This allows all participants to benefit from seeing questions of agency and AI from new standpoints. It also encourages speakers and attendees to present their ideas and questions in clear and accessible ways so that, say, a linguist can effectively communicate their work to philosophers and cognitive scientists.