One challenge keeps surfacing in AI orchestration: context management. As Philipp Schmid explains, we’re witnessing a fundamental shift from prompt engineering to context engineering. It’s not just about writing better prompts anymore—it’s about managing everything the model sees: conversation history, memory, RAG retrievals, available tools, and structured outputs.
The problem gets worse as context windows grow. Anthropic’s engineering team describes this as “context rot”: model accuracy actually degrades as you add more tokens. This happens because of the transformer architecture itself: every token must attend to every other token, creating a quadratic attention burden. Most agent failures aren’t model failures; they’re context failures.
This becomes critical when moving from pilot to production. As Ali Pourshahid, Solace’s Chief Engineering Officer, points out, stale data and batch-processing approaches fundamentally break AI agents in real-world environments. Traditional methods of passing massive datasets to large language models simply don’t scale.
But here’s where context engineering meets a bigger challenge: it’s not just about what information reaches the LLM. It’s about where data lives, how it’s processed, and when to involve the model at all. This is context management—the complete architecture of data flow in multi-agent systems.
Solace Agent Mesh tackles this through an event-driven approach that keeps data local, passes only schemas and metadata to LLMs, and uses intelligent query generation for analysis. Instead of sending everything to the model, agents store data locally in in-memory databases and only ask the LLM to generate queries—not process raw information.
The result: minimal token usage, faster processing, higher accuracy, and built-in security. As organizations deploy AI agents across the enterprise, this shift from context engineering to complete context management isn’t just optimization—it’s becoming essential for practical, cost-effective implementations.
The Context Management Crisis in Multi-Agent Systems
Before diving into solutions, let’s understand the problem. Traditional AI architectures often struggle with:
- Token Bloat: Sending entire datasets to LLMs consumes valuable tokens, rapidly driving up costs
- Context Pollution: Irrelevant data clutters the context window, reducing the quality of AI responses
- Processing Inefficiency: LLMs spending computational resources analyzing raw data rather than focusing on insights
- Accuracy Degradation: As context windows fill with unnecessary information, the precision of AI outputs suffers
- Latency Issues: Transmitting large datasets back and forth creates significant delays in response times
- Hallucinations: LLMs can produce outputs that sound plausible but are factually incorrect
- Inaccurate Mathematical Calculations: LLMs are statistical in nature; they generate text by predicting the most likely sequence of words based on training patterns, not by executing formal algorithms, so arithmetic is inherently unreliable
These challenges become exponentially more complex in multi-agent architectures where multiple AI agents need to collaborate, share information, manage session memory across interactions, and maintain coherent context across distributed systems.
The Solace Agent Mesh Approach: Intelligent Data Management
Solace Agent Mesh introduces a paradigm shift in how AI agents interact with data. Instead of the traditional “send everything to the LLM” approach, it implements an intelligent data management layer that fundamentally changes the game.
The Architecture of Efficiency
At its core, Solace Agent Mesh employs a sophisticated orchestration pattern that treats data as a first-class citizen in the AI workflow. Here’s how it revolutionizes context management:
- Local Data Storage: When an agent retrieves data from a source, it stores it locally rather than passing all the data back to the LLM for further analysis
- In-Memory Database Creation: The system instantiates an in-memory SQL database with the retrieved data
- Metadata-Based Intelligence: Only the schema, file type, and a summarized overview of the data are passed to the LLM, dramatically reducing token usage
- Query Generation: The LLM crafts optimized SQL queries based on the schema and user intent
- Local Execution: Queries run against the local database, with results processed by built-in visualization tools
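The steps above can be sketched with Python’s standard-library `sqlite3` module. This is a minimal illustration of the pattern, not Solace Agent Mesh’s actual implementation: the table name, columns, and metadata payload shape are all assumptions for the example. The key point is that only the schema metadata would ever reach the LLM; the rows stay local.

```python
import sqlite3

def load_rows(conn, rows):
    """Store retrieved data locally in an in-memory table (illustrative schema)."""
    conn.execute(
        "CREATE TABLE sales (product_id TEXT, region TEXT, sale_date TEXT, "
        "quantity INTEGER, revenue REAL)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?, ?)", rows)

def schema_metadata(conn, table):
    """Build the small payload sent to the LLM: schema plus a row count, no rows."""
    cols = [(c[1], c[2]) for c in conn.execute(f"PRAGMA table_info({table})")]
    count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    return {"table": table, "columns": cols, "row_count": count}

conn = sqlite3.connect(":memory:")
load_rows(conn, [("p1", "NA", "2024-01-15", 3, 29.97),
                 ("p2", "EU", "2024-02-02", 1, 9.99)])
meta = schema_metadata(conn, "sales")

# The LLM sees only `meta` and responds with a SQL string,
# which is then executed locally against the in-memory database:
llm_generated_sql = ("SELECT strftime('%Y-%m', sale_date) AS month, "
                     "SUM(revenue) FROM sales GROUP BY month")
results = conn.execute(llm_generated_sql).fetchall()
```

In a real deployment the query string would come back from the model rather than being hard-coded, but the division of labor is the same: the model reasons over metadata, and the data layer does the heavy lifting.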
Real-World Use Case: Monthly Sales Analysis
Let’s walk through a concrete example to illustrate this approach. Imagine you need to analyze and visualize product sales month over month. The dataset could contain millions of rows, each with fields like productId, customerId, region, date, quantity, and revenue, so the raw text representation quickly explodes in size.
In the traditional approach, the workflow is costly and inefficient. You would need to retrieve the entire dataset from the database, often millions of rows, and send it to the LLM. This alone could account for millions of tokens – increasing the cost and inefficiency of the entire process.
From there, the model would be asked to analyze the data, identify trends, recommend visualization strategies, and generate charts. The result is massive token consumption, slow processing, potential hallucinations, and potential inaccuracies due to the LLM’s limited context window and limited mathematical capabilities.
With Solace Agent Mesh, the process looks entirely different. The orchestrator dispatches sales agents to gather the necessary data, which is then stored locally as CSV artifacts. An in-memory SQL database is instantiated on the fly, and instead of sending all the raw data, only the schema, such as table structures, column names, and data types, is passed to the LLM. Using that schema, the model generates an optimized SQL query for the month-over-month analysis, which is executed locally against the in-memory database. The results are then rendered immediately with built-in visualization tools. This approach delivers minimal token usage, faster processing, higher accuracy, and instant visualization.
The Sales Order agent retrieves sales orders from Salesforce, creates a CSV file of the orders, instantiates a database, and informs the orchestrator of the data schema and relevant metadata so the LLM can be asked to construct a SQL query to analyze the data.
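To make the month-over-month step concrete, here is the kind of query the LLM might generate from the schema alone, executed locally against the in-memory database. The table layout and figures are hypothetical; this requires an SQLite build with window-function support (3.25+), which modern Python installations bundle.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sale_date TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [
    ("2024-01-10", 100.0), ("2024-01-20", 50.0),
    ("2024-02-05", 180.0), ("2024-03-01", 90.0),
])

# Monthly totals with percentage change versus the prior month via LAG().
# Only this SQL text would cross the LLM boundary, never the rows.
sql = """
SELECT month, total,
       ROUND(100.0 * (total - LAG(total) OVER (ORDER BY month))
             / LAG(total) OVER (ORDER BY month), 1) AS pct_change
FROM (SELECT strftime('%Y-%m', sale_date) AS month, SUM(revenue) AS total
      FROM sales GROUP BY month)
ORDER BY month
"""
rows = conn.execute(sql).fetchall()
```

Whether the dataset holds four rows or four million, the payload sent to the model stays the same size, which is exactly the property the architecture is after.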
Extending the Pattern: Additional Use Cases
Customer Sentiment Analysis Across Multiple Channels
Organizations often need to analyze customer feedback drawn from social media, support tickets, and product reviews. Traditionally, pushing millions of text entries through an LLM would be both slow and prohibitively expensive. With Solace Agent Mesh, however, agents collect the data from various channels, and natural language processing happens locally for initial categorization. Structured sentiment scores are stored in an in-memory database, and the LLM only receives the schema in order to generate queries for higher-level trend analysis. The result is the ability to surface complex cross-channel sentiment patterns without overwhelming the model or burning through tokens.
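A toy version of this local-categorization step might look as follows. The keyword scorer here is a deliberately crude stand-in for a real local NLP pipeline, and the channel names and schema are invented for illustration; the point is that raw feedback text never leaves the local store, and only aggregate structure is exposed for LLM-driven querying.

```python
import sqlite3

# Hypothetical stand-in for a real local sentiment model.
POSITIVE = {"great", "love", "fast"}
NEGATIVE = {"slow", "broken", "bad"}

def score(text):
    """Naive keyword-based sentiment score computed entirely locally."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

feedback = [
    ("twitter", "love the new release, great work"),
    ("support", "checkout is broken and slow"),
    ("reviews", "fast shipping, great product"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sentiment (channel TEXT, score INTEGER)")
conn.executemany("INSERT INTO sentiment VALUES (?, ?)",
                 [(ch, score(txt)) for ch, txt in feedback])

# Only the sentiment table's schema reaches the LLM; a trend query comes back:
sql = "SELECT channel, AVG(score) FROM sentiment GROUP BY channel ORDER BY channel"
trends = conn.execute(sql).fetchall()
```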
Real-Time Supply Chain Optimization
Managing supply chain operations across warehouses and transportation routes requires ingesting high-volume, real-time data. Traditional approaches that attempt to pipe live streams into an LLM quickly run into context limitations. By contrast, Solace Agent Mesh enables event-driven agents to capture logistics data and process it locally, flagging anomalies and bottlenecks in real time. Time-series data is stored in optimized in-memory structures, and the LLM is tasked only with generating optimization strategies based on summarized patterns. Execution happens through the agents themselves, reducing reliance on constant LLM interaction while keeping performance responsive.
Financial Risk Assessment and Compliance
Financial services teams face the dual challenge of handling sensitive information and processing large transaction volumes. Running all of this through an LLM is both risky and expensive. With Solace Agent Mesh, sensitive data remains securely stored within the organization, while local pattern matching and anomaly detection identify potential fraud. Only statistical summaries and schemas are shared with the LLM, which generates targeted queries for compliance checks and trend identification. Audit trails are maintained locally, ensuring regulatory requirements are met without exposing sensitive information.
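The "statistical summaries only" idea can be sketched in a few lines. The z-score threshold, field names, and sample amounts below are illustrative assumptions, not a fraud-detection recipe; the takeaway is that the raw transactions stay local and only a small summary dictionary would be shared with the LLM.

```python
import statistics

# Raw transaction amounts never leave local storage.
transactions = [120.0, 95.0, 110.0, 105.0, 4000.0, 98.0]

def summarize(amounts, z_threshold=2.0):
    """Compute the summary that would be shared with the LLM (illustrative fields)."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    # Flag amounts more than z_threshold standard deviations from the mean.
    anomalies = [a for a in amounts if abs(a - mean) > z_threshold * stdev]
    return {"count": len(amounts), "mean": round(mean, 2),
            "stdev": round(stdev, 2), "anomaly_count": len(anomalies)}

summary = summarize(transactions)
```

The LLM can reason about the summary ("one anomalous transaction out of six, far above the mean") and generate targeted compliance queries, without any individual transaction ever appearing in its context window.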
Healthcare Patient Journey Mapping
In healthcare, mapping patient interactions across multiple touchpoints is critical for improving care delivery, but HIPAA compliance and data volume constraints make traditional LLM-driven workflows impractical. Solace Agent Mesh solves this by ensuring protected health information remains in secure local storage. Anonymized schemas are shared with the LLM, which generates intelligent queries to identify patient journey patterns. Results are visualized through built-in tools, offering actionable insights while maintaining strict privacy standards. Real-time updates allow providers to continuously optimize care delivery without compromising patient confidentiality.
The Future of Context Management
As AI systems become more sophisticated and data volumes continue to explode, intelligent context management will become not just an optimization, but a necessity. Solace Agent Mesh’s approach represents a fundamental shift in how we think about AI-data interactions.
The framework’s ability to:
- Maintain data locality
- Reduce token consumption
- Improve processing speed
- Enhance accuracy
- Ensure security and compliance
…positions it at the forefront of the next generation of AI orchestration platforms.
Conclusion
As AI agents are increasingly deployed across the enterprise, smart context management becomes harder to get right. Solace Agent Mesh tackles this challenge head-on by reimagining how AI agents interact with data: by keeping datasets local, passing only schemas and necessary metadata to LLMs, and leveraging intelligent query generation, organizations can build more efficient, accurate, and scalable AI systems.
Whether you’re analyzing sales trends, optimizing supply chains, ensuring financial compliance, or improving healthcare delivery, the intelligent data management capabilities of Solace Agent Mesh offer a path to AI implementations that are not just powerful, but practical, cost-effective, and accurate.
As we move into an era where AI agents become integral to business operations, the ability to manage context intelligently will separate successful implementations from costly failures. Solace Agent Mesh provides the foundation for this success, enabling organizations to harness the full power of AI while maintaining control over their data and their costs.
The future of multi-agent AI architectures isn’t about sending more data to LLMs; it’s about sending the right information at the right time. And with Solace Agent Mesh, that future is already here.
Thomas Kunnumpurath is the vice president of systems engineering for Americas at Solace, where he leads a field team to implement Solace Platform across a wide variety of industries such as finance, retail, and manufacturing.
Prior to joining Solace, Thomas spent over a decade of his career leading engineering teams responsible for building out large scale globally distributed systems for real time trading systems and credit card systems at various banks.
Thomas enjoys coding, blogging about tech, speaking at conferences, and being invited to talk on podcasts. You can follow him on Twitter at @TKTheTechie, on GitHub at @TKTheTechie, and on his blog at TKTheTechie.io.
