Author: Denis Avetisyan
As large language models collaborate in multi-agent systems, the risk of private information leaking increases, and new research reveals how system architecture dramatically impacts that risk.

A novel framework, MAMA, quantifies memory leakage in multi-agent LLM systems, demonstrating that denser network topologies are significantly more vulnerable to PII exposure.
Despite the increasing prevalence of multi-agent large language model (LLM) systems, a systematic understanding of how their architecture impacts information security remains largely absent. This paper, ‘Topology Matters: Measuring Memory Leakage in Multi-Agent LLMs’, introduces a framework, MAMA, to quantify memory leakage, the unintentional exposure of private information, as a function of network topology. Our findings demonstrate that graph structure is a critical determinant of vulnerability, with denser connections exacerbating leakage while sparse architectures offer greater protection. Given the growing reliance on these collaborative AI systems, can we design network topologies that inherently prioritize privacy without sacrificing performance?
The Illusion of Security in Collaborative AI
While multi-agent large language model (LLM) systems promise enhanced problem-solving through collaboration, their very architecture introduces novel information security challenges. These systems, designed for dynamic interaction and knowledge sharing between agents, inadvertently create pathways for sensitive data to spread beyond intended boundaries. Traditional security models focus on perimeter defense; containing information within a network of communicating LLMs proves far harder. The fluid exchange of information necessary for collaborative tasks means that confidential details can be unintentionally memorized or divulged by agents, even without explicit prompting. This inherent vulnerability represents a significant risk, potentially exposing private data or proprietary knowledge during complex interactions and demanding a re-evaluation of security protocols for these increasingly sophisticated AI systems.
The increasing complexity of multi-agent large language model (LLM) systems introduces a novel security challenge: memory leakage of sensitive information. Unlike traditional security paradigms designed for static data storage, these dynamic systems facilitate rapid data exchange between agents, bypassing conventional containment strategies. Recent research highlights that the network topology of these agent interactions significantly impacts the rate at which confidential data is inadvertently disseminated. Studies demonstrate leakage rates varying considerably, from approximately 12% to 30%, contingent on the system's configuration and the number of participating agents. This suggests that even seemingly secure systems are vulnerable, and careful consideration of network architecture is crucial to mitigating the risk of unauthorized data exposure within collaborative LLM environments.

MAMA: Measuring the Unmeasurable
The MAMA (Measuring Agent Memory and Attribution) framework provides a systematic methodology for quantifying memory leakage within Multi-Agent Large Language Model (LLM) systems. Unlike traditional memory analysis focused on individual models, MAMA specifically investigates how private information held by individual agents disseminates across a network of interacting agents. This measurement is achieved by analyzing the correlation between an agent's initial private data and the information accessible to other agents following a series of interactions. The core innovation lies in its focus on "Network Topology" (the configuration of connections between agents) as the primary variable impacting information leakage rates. By deliberately varying network structures, MAMA allows researchers to isolate and quantify the risk associated with specific connection patterns, providing data-driven insights into the security and privacy implications of different Multi-Agent LLM system designs.
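As a concrete illustration of what such a measurement could look like, here is a minimal Python sketch that scores leakage whenever a seeded private string surfaces verbatim in a non-owner agent's transcript. The `seeded_pii` and `transcripts` structures are hypothetical stand-ins for illustration, not MAMA's actual interfaces.

```python
def leakage_rate(seeded_pii: dict[str, set[str]],
                 transcripts: dict[str, list[str]]) -> float:
    """Fraction of seeded PII items that appear in a non-owner's transcript.

    seeded_pii  maps agent id -> the private strings it was initialized with
    transcripts maps agent id -> everything that agent has seen or said
    """
    leaked = total = 0
    for owner, items in seeded_pii.items():
        for item in items:
            total += 1
            # An item counts as leaked if any *other* agent's transcript contains it.
            if any(item in " ".join(msgs)
                   for agent, msgs in transcripts.items() if agent != owner):
                leaked += 1
    return leaked / total if total else 0.0
```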
The MAMA framework's operational structure is divided into two distinct phases. The Engram Phase initializes the analysis by assigning each agent within the Multi-Agent LLM system a unique set of private information, effectively establishing its individual "memory traces". Subsequently, the Resonance Phase simulates interactions between these agents; during this phase, the initially seeded private information propagates through the network as agents communicate and share data. The extent to which this information leaks, that is, reaches agents never intended to receive it, is then measured to assess the system's vulnerability.
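In outline, the two phases might be organized as in the sketch below. The uniform random forwarding is a toy stand-in for the LLM-driven exchanges MAMA actually simulates, so this is a structural sketch rather than the paper's protocol.

```python
import random

def run_mama_rounds(edges, private_data, rounds=5, seed=0):
    """Sketch of the Engram/Resonance structure, not the paper's exact protocol.

    edges        - (a, b) agent-id pairs that may exchange messages
    private_data - agent id -> private string seeded in the Engram Phase
    """
    rng = random.Random(seed)
    # Engram Phase: each agent's memory starts as its own private record.
    memory = {agent: [secret] for agent, secret in private_data.items()}
    # Resonance Phase: agents converse along graph edges for a fixed number of rounds.
    for _ in range(rounds):
        for a, b in edges:
            # Toy stand-in for an LLM exchange: each side repeats one
            # remembered item to the other, so information can drift outward.
            memory[b].append(rng.choice(memory[a]))
            memory[a].append(rng.choice(memory[b]))
    return memory
```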
The MAMA framework enables researchers to assess leakage risk by systematically altering the connectivity within a Multi-Agent LLM system. This is achieved through controlled variations in network topology, specifically the number and configuration of connections between agents. By implementing different network architectures, ranging from fully connected graphs to sparse, limited-connectivity models, MAMA measures the extent to which private information initially held by individual agents propagates across the network during interaction. Quantifiable metrics, derived from analyzing this information spread, then provide a direct correlation between specific connection patterns and the potential for unintended data leakage, allowing for comparative analysis of network resilience.
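The topologies evaluated in the next section are standard graphs and are straightforward to instantiate, for example with `networkx`. The builder below is a sketch; the paper's own tooling may differ.

```python
import networkx as nx

def build_topology(kind: str, n: int) -> nx.Graph:
    """Return one of the studied agent-connection graphs over n agents."""
    if kind == "chain":
        return nx.path_graph(n)        # sequential flow: 0-1-2-...-(n-1)
    if kind == "star":
        return nx.star_graph(n - 1)    # one central hub plus n-1 leaves
    if kind == "circle":
        return nx.cycle_graph(n)       # a chain with its two ends joined
    if kind == "tree":
        # complete binary tree: agent i connects to its parent (i - 1) // 2
        return nx.Graph([(i, (i - 1) // 2) for i in range(1, n)])
    if kind == "complete":
        return nx.complete_graph(n)    # every agent talks to every other
    raise ValueError(f"unknown topology: {kind}")
```

The edge lists of these graphs (`G.edges`) plug directly into simulation sketches like the one above.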

Topology Matters: The Shape of Vulnerability
Network topologies significantly impact the rate of memory leakage during data transmission. Evaluations of five common configurations (Chain, Star, Circle, Tree, and Complete) demonstrate varying vulnerabilities. The Chain Topology, characterized by sequential data flow, exhibited a relatively low leakage rate. In contrast, the Complete Topology, where each node connects directly to all others, showed a markedly increased susceptibility to leakage. The Star and Tree topologies presented intermediate vulnerability levels, dependent on the central node or branching complexity, respectively. The Circle Topology, a closed loop, also showed increased vulnerability compared to the Chain Topology, indicating that greater interconnectivity generally correlates with a higher potential for sensitive data diffusion.
The research indicates a correlation between network topology and the propagation of Personally Identifiable Information (PII) Entities. Specifically, fully connected, or Complete Topology, networks demonstrate accelerated leakage rates due to their inherent connectivity. In a test environment with four agents, the Complete Topology exhibited a PII leakage rate of 29.33%. This contrasts with the Chain Topology, which, under the same conditions, registered a significantly lower leakage rate of 19.02%. These results suggest that increased network density directly contributes to a faster diffusion of sensitive data, necessitating specific security measures for highly connected configurations.
Spatiotemporal Attributes, data points linked to both location and time, diffuse through network structures at an accelerated rate. This susceptibility is attributed to the inherent connectivity of these topologies, which allows rapid propagation of compromised data. Specifically, the study observed that leakage involving Spatiotemporal Attributes consistently exceeded rates observed for other data types, indicating a heightened risk profile. Consequently, targeted protection mechanisms, such as differential privacy applied to location and timestamp data or access-control policies that limit propagation based on spatiotemporal context, are crucial for mitigating the impact of potential breaches and controlling the spread of sensitive information across the network.
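As one hedged example of such a targeted mechanism, the sketch below perturbs a (latitude, longitude, timestamp) record with Laplace noise before it enters an agent's shareable context. The scale values are illustrative placeholders, not calibrated privacy parameters.

```python
import numpy as np

def privatize_spatiotemporal(lat, lon, ts,
                             loc_scale=0.01, time_scale=300.0, rng=None):
    """Perturb a (latitude, longitude, timestamp) record with Laplace noise.

    loc_scale is in degrees and time_scale in seconds; both are illustrative
    placeholders that a real deployment would calibrate against a privacy
    budget (epsilon) and the data's sensitivity.
    """
    rng = rng or np.random.default_rng()
    d_lat, d_lon, d_ts = rng.laplace(0.0, [loc_scale, loc_scale, time_scale])
    return lat + d_lat, lon + d_lon, ts + d_ts
```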

Anchors and Time: Measuring the Inevitable
The effective safeguarding of agent memory relies heavily on the implementation of Structured Identifiers, a method for firmly associating contextual information with specific identities within a network. This approach moves beyond simple data labeling to create a robust system where sensitive information isn't merely stored, but is intrinsically linked to the agent possessing it. By meticulously defining these identifiers, the research demonstrates a significant reduction in the potential for data diffusion, the uncontrolled spread of private details across the network. This linkage isn't just about tracking where information goes, but about fundamentally controlling how it can be accessed and utilized, ensuring that context remains tethered to its originating agent and limiting unintended exposure, even in compromised scenarios.
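One plausible reading of a structured identifier is a record that binds a sensitive value to its owning agent and an explicit access scope, as sketched below. The field names and the `readable_by` check are illustrative assumptions, not the paper's specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StructuredIdentifier:
    """A sensitive value intrinsically bound to the agent that owns it."""
    value: str                        # the private datum itself
    owner: str                        # id of the originating agent
    allowed: frozenset = frozenset()  # agent ids explicitly permitted to read it

    def readable_by(self, agent_id: str) -> bool:
        # Context stays tethered to its origin: only the owner or an
        # explicitly permitted agent may consume the value.
        return agent_id == self.owner or agent_id in self.allowed
```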
High-sensitivity anchors represent a crucial advancement in securing agent memory by establishing designated triggers for robust safety protocols. These anchors function as flags, instantly activating guardrails when an agent attempts to access or disseminate particularly sensitive information, such as personally identifiable data or confidential strategic details. The system doesn't simply block access; it initiates a layered response, potentially including redacting information, alerting administrators, or even temporarily suspending the agent's operation. This proactive approach differs from traditional methods that react after a breach occurs, instead preemptively mitigating risk by identifying and controlling access to critical data at the point of request. The effectiveness of these anchors hinges on precise configuration and a careful definition of what constitutes "high sensitivity" in a given context, allowing for a dynamic and adaptive security posture.
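A minimal sketch of such an anchor-triggered guardrail follows, using pattern checks that fire before a message leaves an agent. The anchor patterns and the redact-only response are illustrative choices; the layered response described above (alerting, suspension) is omitted for brevity.

```python
import re

# Hypothetical high-sensitivity anchors: patterns whose presence trips the guardrail.
ANCHORS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def guard_outgoing(message: str):
    """Redact anchored spans and report which anchors fired."""
    fired = []
    for name, pattern in ANCHORS.items():
        if pattern.search(message):
            fired.append(name)
            message = pattern.sub(f"[REDACTED:{name}]", message)
    return message, fired

# Example: guard_outgoing("reach me at alice@example.com")
# -> ("reach me at [REDACTED:email]", ["email"])
```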
A quantifiable assessment of system security is now possible through the "Time to Leak" metric, as measured by the MAMA framework. This approach moves beyond theoretical vulnerability assessments by directly evaluating how long a system can resist the diffusion of sensitive data under simulated conditions. Recent testing with six agents demonstrated a significant difference between network topologies: a Complete Topology configuration exhibited a leakage rate of 25.32%, indicating a faster compromise of information, while the more constrained Chain Topology proved more resilient, maintaining a lower leakage rate of 12.84%. This direct comparison underscores the value of MAMA as a tool for evaluating and optimizing security protocols, allowing developers to empirically assess the effectiveness of different network architectures and protection strategies in safeguarding sensitive data.
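Read as "the first interaction round at which a seeded item escapes its owner", the metric admits a simple toy implementation. The probabilistic forwarding below is an assumption standing in for an LLM's chance of divulging a remembered item, not MAMA's actual measurement procedure.

```python
import random

def time_to_leak(edges, private_data, p_forward=0.1, max_rounds=100, seed=0):
    """Return the first round at which any seeded secret reaches a non-owner,
    or None if nothing leaks within max_rounds.

    edges        - (a, b) agent-id pairs that exchange messages each round
                   (every agent id in edges must appear in private_data)
    private_data - agent id -> the single secret string seeded into it
    p_forward    - per-round chance a known item is repeated across an edge
    """
    rng = random.Random(seed)
    memory = {agent: {secret} for agent, secret in private_data.items()}
    for round_no in range(1, max_rounds + 1):
        snapshot = {agent: set(items) for agent, items in memory.items()}
        for a, b in edges:
            for src, dst in ((a, b), (b, a)):
                for item in snapshot[src]:
                    if rng.random() < p_forward:
                        memory[dst].add(item)
        for owner, secret in private_data.items():
            if any(secret in memory[agent] for agent in memory if agent != owner):
                return round_no
    return None
```

Under this toy model, the Complete Topology's extra edges give each secret more escape opportunities per round, consistent with the qualitative ordering reported above.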

The pursuit of increasingly complex multi-agent systems, as explored in this paper, inevitably invites escalating risk. It's a predictable outcome. The research meticulously demonstrates how network topology influences PII leakage, a problem conveniently overlooked in the rush to interconnect everything. Kolmogorov observed, "The most important problems are usually those that seem easiest to solve." This feels particularly apt; the allure of seamless interaction masks the underlying fragility. MAMA, the framework presented, is merely a sophisticated post-mortem tool: it diagnoses the inevitable rather than preventing it. One can already anticipate the next generation of 'solutions' simply adding layers to obscure the fundamental problem: more abstraction, more vulnerability. The cycle continues, predictably.
What’s Next?
The neatness of MAMA, this framework for quantifying leakage in multi-agent systems, is… concerning. It suggests a belief that one can actually measure the unpredictable. It's always the case, isn't it? A system begins as a simple bash script, dutifully passing data. Then someone decides it needs "intelligence," agents begin chatting, and suddenly the data is swirling in a complex network where traceability becomes a theoretical exercise. They'll call it AI and raise funding, of course. But the fundamental problem remains: the topology doesn't cause the leak, it merely exposes the inherent fragility of trusting anything to a system one doesn't fully comprehend.
The focus on network science is a logical step, but it risks becoming another layer of abstraction. Understanding the shape of the vulnerability doesn't solve it. What's needed isn't a better map of the damage, but a fundamental rethinking of how these agents handle sensitive information. Differential privacy? Homomorphic encryption? These are band-aids on a gaping wound, but at least they acknowledge the inevitability of compromise.
Ultimately, this work highlights a painful truth: security isn't a feature, it's a constant negotiation with entropy. The more complex the system, the more attack vectors emerge, and the more quickly that carefully crafted code resembles a house of cards. The documentation lied again. It always does.
Original article: https://arxiv.org/pdf/2512.04668.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/