Ruta Binkyte

Postdoctoral Researcher

New Paper: Interactional Fairness in Multi-Agent LLM Systems


Accepted at ACM/AAAI AIES 2025, Oral Presentation


September 28, 2025

[Figure]
Illustration of the four Interactional Fairness conditions used in the evaluation framework, varying along two dimensions: Interpersonal Fairness (IF, respectful vs. dismissive tone) and Informational Fairness (InfF, justification present vs. absent).
As large language models (LLMs) are increasingly deployed within multi-agent systems, fairness is no longer just about resource allocation or procedural rules: how agents interact with one another also matters. This paper introduces a formal framework for Interactional Fairness, dividing it into two complementary dimensions (a minimal sketch of the resulting four conditions follows the list): 
  • Interpersonal Fairness (IF) — how agents treat each other (tone, respect)
  • Informational Fairness (InfF) — how clearly and justly information is shared (explanations, transparency)
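
As a minimal sketch of the resulting 2x2 design (in Python; the condition labels are illustrative, not the paper's terminology), the four conditions can be enumerated as follows:

    from dataclasses import dataclass
    from itertools import product

    # Minimal sketch: labels are illustrative, not the paper's terminology.

    @dataclass(frozen=True)
    class Condition:
        respectful: bool  # Interpersonal Fairness (IF): respectful vs. dismissive tone
        justified: bool   # Informational Fairness (InfF): justification present vs. absent

        def label(self) -> str:
            tone = "respectful" if self.respectful else "dismissive"
            info = "justified" if self.justified else "unjustified"
            return f"{tone}/{info}"

    # Crossing the two dimensions yields the four conditions in the figure above.
    CONDITIONS = [Condition(r, j) for r, j in product((True, False), repeat=2)]
    print([c.label() for c in CONDITIONS])
    # ['respectful/justified', 'respectful/unjustified',
    #  'dismissive/justified', 'dismissive/unjustified']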

Key contributions:
 
  1. Theoretical reframing: The paper adapts concepts from organizational psychology to non-sentient agents, treating fairness as a socially interpretable signal rather than a subjective feeling.
  2. Measurement methods: It adapts established instruments (e.g., Colquitt's Organizational Justice Scale and the Critical Incident Technique) to evaluate agent behavior in simulations.
  3. Pilot experiments: Through controlled negotiation tasks, the study systematically manipulates tone, explanation quality, outcome inequality, and framing (cooperative vs. competitive); a hedged sketch of this setup appears after the list. Findings show:
    • Tone and justification quality strongly influence whether agents accept offers, even when outcomes are identical.
    • The relative influence of IF vs. InfF shifts based on context and framing (cooperative vs. competitive).
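
To make this setup concrete, here is a hedged sketch of one manipulation-and-measurement loop in Python. The prompt wording, the 60/40 split, the query_llm stub, and the single rating item are assumptions for exposition, not the paper's actual materials; the paper's measures adapt Colquitt's Organizational Justice Scale.

    # Illustrative sketch only. Prompts, the 60/40 split, and query_llm are
    # assumptions; the paper adapts Colquitt's scale for its real measures.

    TONE = {
        True:  "I appreciate your position and value working with you.",  # IF present
        False: "Frankly, your preferences are not my concern.",           # IF absent
    }
    JUSTIFICATION = {
        True:  "I propose a 60/40 split in my favor because I contributed "
               "more resources this round.",                              # InfF present
        False: "I propose a 60/40 split in my favor.",                    # InfF absent
    }

    def build_offer(respectful: bool, justified: bool) -> str:
        """Same material outcome in every cell; only tone and justification vary."""
        return (f"{TONE[respectful]} {JUSTIFICATION[justified]} "
                "Do you accept this offer? Reply ACCEPT or REJECT.")

    # One Colquitt-style item, adapted for an LLM respondent (hypothetical wording).
    FAIRNESS_PROBE = ("On a scale of 1 (not at all) to 5 (to a great extent), "
                      "to what extent did the other agent treat you with respect?")

    def query_llm(prompt: str) -> str:
        """Hypothetical stub; swap in a call to the LLM under evaluation."""
        raise NotImplementedError

    def run_cell(respectful: bool, justified: bool) -> tuple[str, str]:
        """One cell of the 2x2 design: record the decision and a fairness rating."""
        offer = build_offer(respectful, justified)
        return query_llm(offer), query_llm(offer + "\n\n" + FAIRNESS_PROBE)

    results = {(r, j): run_cell(r, j)
               for r in (True, False) for j in (True, False)}

Because the material outcome is identical in all four cells, differences in acceptance or ratings across cells can be attributed to the IF and InfF manipulations.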

Impact & Future Directions:
 
  • This work provides a foundation for fairness auditing in LLM-based multi-agent systems.
  • It offers guidance for aligning agent design with social norms, helping designers build systems that are fair in their interactions, which can improve both the performance of agent systems and trust in human-AI hybrid systems.
  • It opens new avenues for research in norm-sensitive alignment, especially in collaborative AI systems where agents must negotiate, coordinate, or persuade.
The full paper can be accessed here: Binkyte, Ruta. "Interactional Fairness in LLM Multi-Agent Systems: An Evaluation Framework." arXiv preprint arXiv:2505.12001 (2025).