With the recent announcement of Extreme Platform ONE, we are excited to share how AI serves as a cornerstone of its architecture and how Extreme AI Expert has become the central interface for users to engage with the platform. Before diving into the agentic nature of Extreme AI Expert and its transformative impact on workflows, I’d like to reflect on the lessons learned over the past months as we brought this vision to life.
As of fall 2023, we had already developed a functioning Retrieval-Augmented Generation (RAG) prototype, which now serves as an agent within Platform ONE’s broader architecture. While much of the industry buzz centers on large language models (LLMs) themselves, we’ve found that the real differentiator lies elsewhere: in the design of supporting documentation and data infrastructure.
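For readers less familiar with the pattern, the core RAG loop can be sketched in a few lines. This is a deliberately minimal illustration, not our production prototype: keyword overlap stands in for embedding-based vector search, and the prompt-building step stands in for the call to an LLM.

```python
import re

# Minimal RAG sketch: retrieve relevant documents, then ground the
# model's answer by packing them into the prompt.

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for
    semantic similarity) and return the top k."""
    q_words = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q_words & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved context so the model answers from it, not from memory."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Fabric Engine supports automatic network segmentation.",
    "The cafeteria opens at 8am.",
    "Network segmentation isolates traffic between tenants.",
]
query = "How does network segmentation work?"
prompt = build_prompt(query, retrieve(query, docs))
```

In a real deployment the retrieval step runs against a vector index and the prompt is sent to an LLM; the shape of the loop, however, is exactly this.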
Here are some insights that have shaped our approach and revealed critical factors for building effective AI systems:
One of our most impactful discoveries has been the critical role of documentation in enhancing AI performance. Surprisingly, success isn’t solely dependent on the sophistication of the AI model itself but also on the clarity and structure of the input data. Writing for AI requires a shift in mindset: short, clear, and factual sentences consistently outperform polished, complex narratives. This approach enables AI to process information more efficiently and with fewer errors, improving interpretive accuracy while also lowering processing costs.
It’s equally important to acknowledge AI’s strengths and limitations. While it excels at synthesizing and presenting readable content, it struggles with verbose or ambiguous input. As a result, we’ve adopted documentation practices that prioritize precision and utility over stylistic elegance. By focusing on clear, actionable input, we’ve created a structured foundation that allows AI to perform at its best. Writers, too, must embrace this shift, pivoting from rhetorical flourishes to creating content that is optimized for AI interpretation.
Metadata has emerged as a silent hero in the pursuit of AI excellence, offering critical improvements in quality and accuracy. Whether dealing with structured data (e.g., API calls or database queries) or unstructured formats (e.g., documents or media), metadata provides the essential context that AI needs to perform effectively. For structured data, metadata acts as a bridge, connecting raw data points to their business relevance. This requires a rigorous process of creation, validation, and ongoing refinement to ensure alignment with changing organizational needs.
In unstructured data scenarios, the value of metadata lies in its ability to enhance retrieval rather than generation. While semantic search plays an important role, it is rarely sufficient on its own; metadata is what ensures AI systems retrieve the most relevant, context-specific information. By investing in robust metadata frameworks, we’ve significantly enhanced the AI Expert’s capacity to deliver nuanced, actionable recommendations. Metadata, in many ways, is the unsung foundation of successful AI ecosystems.
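To make the retrieval point concrete, here is a small sketch of metadata pre-filtering: chunks carry metadata, and a filter narrows the candidate pool before any semantic ranking runs, so only context-relevant chunks are ever scored. The field names (`product`, `doc_type`) are illustrative, not our production schema.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A retrievable unit of documentation plus its descriptive metadata."""
    text: str
    metadata: dict = field(default_factory=dict)

def filter_by_metadata(chunks: list[Chunk], **required) -> list[Chunk]:
    """Keep only chunks whose metadata matches every required key/value pair."""
    return [c for c in chunks
            if all(c.metadata.get(k) == v for k, v in required.items())]

corpus = [
    Chunk("Enable VLAN tagging on the access port.",
          {"product": "switch", "doc_type": "how-to"}),
    Chunk("Release notes: firmware 9.2 fixes PoE issues.",
          {"product": "switch", "doc_type": "release-notes"}),
    Chunk("Set AP radio channels to avoid interference.",
          {"product": "access-point", "doc_type": "how-to"}),
]

# Metadata narrows the pool first; semantic search would then rank
# only these candidates instead of the whole corpus.
candidates = filter_by_metadata(corpus, product="switch", doc_type="how-to")
```

The design choice matters: filtering before ranking prevents a semantically similar but contextually wrong chunk (say, release notes for a different product) from ever reaching the prompt.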
As an advocate of the data mesh concept, I’ve observed that process, culture, architecture, and ownership are pivotal in driving both data quality and AI performance. Data mesh represents a paradigm shift from centralized systems to a decentralized approach, empowering teams to treat data as a product and fostering accountability and better outcomes.
We’ve found that successful AI implementations aren’t purely technical; they depend on organizational alignment and a commitment to a data-first culture. Teams that embrace the data mesh mindset are better positioned to create and maintain high-quality datasets, which fuel superior AI performance. This results in an agile, scalable, and resilient data infrastructure capable of supporting cutting-edge AI applications.
The growing emphasis on AI safety and guardrails is well-justified, as trust remains a cornerstone of user adoption. However, our experience shows that accuracy is equally critical, especially in high-stakes scenarios where decisions rely on precise, actionable insights.
In such cases, fact-checking techniques and explainable AI methodologies are indispensable, offering the transparency needed to build user trust. Ultimately, safety and accuracy work hand in hand to ensure user confidence and create a robust foundation for widespread adoption.
As shown in Figure 1, agentic workflows have emerged as a game-changer in enhancing accuracy. These workflows enable AI to iteratively query, verify, and refine its responses, significantly improving its ability to handle complex scenarios. Research from Andrew Ng and his team in early 2024 underscored the value of agentic approaches in creating more resilient and reliable AI systems.
Figure 1 - Agentic Workflow Accuracy - Source: Deeplearning.AI
A particularly noteworthy advancement is the implementation of reflection agents, which drive significant improvements in accuracy when compared to traditional methods. This innovation underscores the critical role of agentic workflows in advancing AI performance and is a key factor to consider moving forward. By integrating these workflows, Extreme AI Expert has taken a significant step toward achieving next-level precision.
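The shape of a reflection loop is worth spelling out. In the sketch below, `draft()`, `critique()`, and `revise()` are placeholders for LLM calls; the critic checks a draft against known facts and the loop revises until the critic is satisfied or a retry budget runs out. This is an illustration of the pattern, not the Extreme AI Expert implementation.

```python
# Reflection-agent sketch: draft -> critique -> revise, repeated.

def draft(question: str) -> str:
    """Placeholder for the first LLM generation pass."""
    return f"Draft answer to: {question}"

def critique(answer: str, facts: list[str]) -> list[str]:
    """Placeholder critic: report required facts missing from the answer.
    An empty list means the answer passes review."""
    return [f for f in facts if f not in answer]

def revise(answer: str, issues: list[str]) -> str:
    """Placeholder for the revision pass that addresses the critique."""
    return answer + " " + " ".join(issues)

def reflect(question: str, facts: list[str], max_rounds: int = 3) -> str:
    answer = draft(question)
    for _ in range(max_rounds):
        issues = critique(answer, facts)
        if not issues:          # critic is satisfied; stop iterating
            return answer
        answer = revise(answer, issues)
    return answer               # budget exhausted; return best effort

final = reflect("What ports does the service use?", ["port 443", "port 8443"])
```

The retry budget is the key engineering control: it bounds cost and latency while still capturing most of the accuracy gain from iteration.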
As detailed in my previous blog, Extreme AI Expert has evolved far beyond a simple conversational interface. Users can now leverage AI to either build resources—such as dashboards, reports, widgets, and AI-driven insights—or to directly control the user interface through our AI canvas. As a side note, OpenAI has just released a similar concept as ChatGPT Canvas. But how does the process of generating insights work with Extreme AI Expert today? Figure 2 conceptually illustrates how multiple task-specific agents collaborate to execute the user’s intended job.
Figure 2 - Extreme AI Expert Multi-Agent Message Flow
Here is a breakdown of the flow: the system begins with a dispatcher agent that routes the user request to the appropriate next agent. For example, an operational data request is sent to the structured data retrieval agent rather than the RAG knowledge agent, which handles requests involving unstructured data and documentation. Once the request reaches the correct agent, the system selects the appropriate tools from the catalog to identify relevant data sources and endpoints. The data is then retrieved and processed by the data frame handler agent before being passed to the UX formatting agent, which determines the optimal way to present the final output to the user based on the original request.
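The dispatcher stage described above can be sketched as a classify-then-route step. The keyword classifier below stands in for an LLM-based intent classifier, and the agent callables stand in for the real structured data retrieval and RAG knowledge agents; names loosely mirror Figure 2 but are otherwise hypothetical.

```python
# Dispatcher sketch: classify the request, then route it to a
# task-specific agent.

def classify(request: str) -> str:
    """Crude intent classifier: operational keywords imply structured
    data; everything else goes to the RAG knowledge agent."""
    operational = ("status", "cpu", "throughput", "clients", "utilization")
    if any(word in request.lower() for word in operational):
        return "structured_data_retrieval"
    return "rag_knowledge"

# Stand-ins for the downstream agents in the message flow.
AGENTS = {
    "structured_data_retrieval": lambda r: f"[SQL/API query for: {r}]",
    "rag_knowledge": lambda r: f"[document retrieval for: {r}]",
}

def dispatch(request: str) -> str:
    agent = AGENTS[classify(request)]
    # The result would flow on to the data frame handler and
    # UX formatting agents before reaching the user.
    return agent(request)
```

Keeping the routing decision in one small, testable component is what makes it practical to add new task-specific agents later without touching the others.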
As shown in Figure 3, this functionality extends beyond single smart widgets to include the creation of full reports and interactive, hyper-personalized dashboards within our AI canvas. By working collaboratively with AI Expert, users can generate tailored outputs that provide highly personalized insights and experiences, ensuring relevance and alignment with their specific needs.
Figure 3 - Extreme AI Expert Interactive Experience in AI canvas
There’s much more on the horizon. 2025 is set to be the definitive year of AI agents, multi-agent frameworks, agentic workflows, and autonomous systems. While 2024 was all about “Artificial Intelligence (AI),” I expect that 2025 will be the year that “Agents” steal the spotlight.
The potential of agents is undeniable—they have the power to transform IT organizations and entire enterprises, while unlocking new business opportunities for the entire ecosystem of vendors, partners, consulting, and services organizations. The growing evidence points to an exciting future, and I look forward to exploring these developments in detail. In line with these predictions, next year’s blog series will focus on agents and their impact. Stay tuned for more!