10 October 2025 | 12 min read

Webinar replay

Originally broadcast: Thursday 13 November 2025.

What you’ll discover in 45 minutes:

  • Why economic uncertainty is pushing intelligence teams toward sovereign AI strategies
  • How AI maturity has evolved from basic automation to agentic systems that plan, sequence and execute steps
  • What Physical AI means for competitive intelligence and technology monitoring
  • Practical frameworks for prompts, context management, RAG systems and AI evaluation
  • How to measure AI performance against human analyst benchmarks

Watch enterprise intelligence teams adapt to the biggest shift in a decade

The market intelligence function is changing faster than most teams can keep up. Economic uncertainty sits at historic highs, AI capabilities are advancing from generative to agentic, and the gap between what executives expect and what teams can deliver keeps widening. In this session, Warren Fauvel explores how AI market intelligence trends are reshaping what’s possible in 2026, from enterprise intelligence automation to Physical AI adoption and the practical realities of building trust in AI-assisted research.

This isn’t a webinar about what AI might do someday. It’s about what leading intelligence teams are doing right now to stay ahead. Warren breaks down how to design workflows around human and machine strengths, build prompt toolboxes that deliver consistent results, manage context and memory in large language models, implement RAG systems that improve accuracy, and measure AI performance with proper evaluation protocols. If your intelligence function needs to do more with less whilst maintaining quality and trust, this session shows you how.

Warren Fauvel, Chief Growth Officer at AMPLYFI, has spent a decade helping organisations such as Google and the NHS translate emerging technologies into practical tools. In this 45-minute session, he breaks down the forces reshaping AI market intelligence work and shows how prepared teams can turn disruption into competitive advantage.

Enter your email below to watch the full recording

Why watch this webinar

Market intelligence teams face a fundamental question in 2026: how do you deliver faster, more comprehensive insight without sacrificing accuracy or trust? The answer isn’t simply “use more AI.” The teams pulling ahead are those who understand how agentic AI differs from earlier automation, how to structure prompts and context for consistent results, how RAG systems actually work under the hood, and how to measure AI performance against human benchmarks.

This webinar gives you the practical knowledge to implement enterprise intelligence automation that your executives will trust. Warren doesn’t just explain concepts. He shows evaluation data from testing four leading AI systems against human analysts, walks through the architecture of effective RAG systems, demonstrates why context management matters for large language models, and reveals why the “best possible” AI answer differs significantly from the “average” answer your team will actually get.

If you’re responsible for market intelligence, competitive intelligence, strategy research or innovation tracking at a large organisation, this session delivers the frameworks you need to adapt your function for 2026 and beyond.

Key insights revealed

Economic uncertainty is driving sovereign intelligence strategies

Global economic uncertainty has reached levels not seen since the financial crisis. Warren explains why this volatility is pushing nations and organisations toward sovereign strategies, what “slowbalisation” means for global trade dynamics, and how intelligence teams need to adapt their sourcing and analysis approaches when traditional supply chains and partnerships can no longer be assumed stable.

AI maturity has moved beyond the hype into practical deployment

Enterprise automation has shifted from traditional RPA to LLM-driven and agentic systems far faster than most teams anticipated. Warren walks through where we are on the maturity curve for software engineering, biology and mathematics based on Epoch.ai research, why the hype is flattening as capabilities continue to grow, and what this means for intelligence teams building AI into their workflows right now rather than waiting for some future breakthrough.

Physical AI is creating entirely new intelligence sources

From Waymo’s robotaxis to 1X-NEO’s humanoid co-pilot robots to Festo’s AI-driven autonomous manufacturing systems, Physical AI is generating new types of data and use cases. Warren explores what the “next wave” means for intelligence teams responsible for tracking emerging technologies, competitive movements and market disruptions before they become obvious.

The best possible AI answer is not the average answer

AMPLYFI tested four leading AI systems (GPT-5, Claude Sonnet 4.5, Gemini 2.5 Pro, Microsoft 365 Copilot) against a human analyst using 150,000+ words from Commerzbank annual reports and 10 complex questions. The results reveal critical insights about precision, recall, quality and consistency. Warren walks through the evaluation protocol and explains why measuring AI performance properly is now a core intelligence function. The data shows one system scoring 3.11 overall whilst another scored just 1.73; Warren explains why this variance matters more than any vendor’s “best case” demonstration.

Frequently Asked Questions

What is agentic AI in market intelligence?

Agentic AI refers to artificial intelligence systems that operate with defined goals, access to specific tools, and contextual understanding of their environment. Unlike earlier AI that simply responded to prompts, agentic AI can break down complex tasks, decide which tools to use, gather information from multiple sources, and synthesise results without constant human direction. In market intelligence, this means AI systems that can conduct multi-step research processes, validate findings and produce analysis with minimal supervision whilst maintaining accuracy and trust.
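The pattern described above can be sketched in a few lines: a defined goal, access to specific tools, and a controller that sequences steps without per-step human direction. This is a minimal illustration, not a real research stack; both tool functions are hypothetical placeholders for what would be API and LLM calls in practice.

```python
# Minimal sketch of an agentic loop: goal in, tools chosen and chained,
# synthesised result out, with no human direction between steps.

def search_filings(query):
    # Placeholder tool: a real agent would call a filings or news API here.
    return [f"filing excerpt mentioning '{query}'"]

def synthesise(snippets):
    # Placeholder tool: a real agent would have an LLM synthesise these.
    return "Summary: " + " | ".join(snippets)

TOOLS = {"search": search_filings, "synthesise": synthesise}

def run_agent(goal):
    """Gather information with one tool, then synthesise with another,
    carrying context between the steps automatically."""
    snippets = TOOLS["search"](goal)
    return TOOLS["synthesise"](snippets)

result = run_agent("competitor capital expenditure")
```

A production agent would also decide the plan itself and validate its own findings; the fixed two-step chain here only shows the shape of the loop.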

How are enterprise intelligence teams using AI automation in 2026?

Leading enterprise intelligence teams are using AI automation across several specific functions: automating competitor monitoring through scheduled scanning of regulatory filings, news and financial reports; generating first-draft analysis of market trends using RAG systems tuned to proprietary databases; creating customised intelligence briefings for different stakeholder groups; validating findings through fact-checking agents before publication; and measuring their own AI system performance through structured evaluation protocols. The teams succeeding are those treating AI as a co-pilot rather than a replacement, designing workflows that combine human judgement with machine scale.

What is RAG and why does it matter for AI market intelligence?

RAG (Retrieval Augmented Generation) is a technique that improves AI accuracy by retrieving relevant information from specific sources before generating an answer. Instead of relying only on what an AI model learned during training, RAG systems search through your documents, databases or approved sources, retrieve the most relevant chunks of information, and use those chunks as context for generating responses. This matters for market intelligence because it grounds AI outputs in your organisation’s proprietary research, ensures answers cite specific sources, reduces hallucinations and allows you to control which information sources the AI can access.
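The retrieve-then-generate flow above can be illustrated with a toy example: score document chunks against the question, keep the top matches, and place them in the prompt as cited grounding context. Real systems use embedding search rather than the deliberately simplified term-overlap scoring here, and the documents are invented for illustration.

```python
import re

def tokens(text):
    # Lowercase word tokens, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(chunks, question, k=2):
    # Rank chunks by term overlap with the question; keep the top k.
    return sorted(chunks, key=lambda c: len(tokens(c) & tokens(question)),
                  reverse=True)[:k]

def build_prompt(chunks, question):
    # Number each retrieved chunk so the answer can cite it by source.
    context = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (f"Answer using only the sources below, citing them by number.\n"
            f"Sources:\n{context}\n\nQuestion: {question}")

docs = [
    "Revenue grew 12% in the retail segment.",
    "The board approved a new share buyback programme.",
    "Retail segment margins were pressured by energy costs.",
]
question = "What happened in the retail segment?"
top = retrieve(docs, question)
prompt = build_prompt(top, question)
```

Note how the irrelevant buyback chunk never reaches the prompt: controlling which sources the model sees is exactly the access-control property described above.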

How do you measure AI performance in intelligence work?

Measuring AI performance in intelligence work requires structured evaluation protocols comparing AI outputs against human analyst benchmarks. The process involves selecting representative content (such as annual reports, market research or competitor filings), creating a set of questions that require analysis rather than simple fact retrieval, running the same questions through AI systems multiple times to test consistency, and having human experts evaluate the results on precision (correctness), recall (completeness), quality (clarity) and consistency (variance across runs). Without measurement, teams cannot know which AI tools to trust, cannot improve their implementations and cannot demonstrate ROI to executives.
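The protocol above can be sketched as a small scoring harness: repeat each question several times, have experts score each run on the evaluation dimensions, then report the mean and the run-to-run spread. The scores below are invented for illustration, not AMPLYFI's actual evaluation data.

```python
from statistics import mean, pstdev

def evaluate(runs):
    """runs: expert scores per repeat of the same question.
    Returns (overall mean score, run-to-run spread)."""
    dims = ["precision", "recall", "quality"]
    per_dim = {d: mean(r[d] for r in runs) for d in dims}
    overall = mean(per_dim.values())
    # Consistency: how much each repeat's average score varies.
    run_scores = [mean(r[d] for d in dims) for r in runs]
    return overall, pstdev(run_scores)

# Three repeats of one question, scored 0-4 by a human evaluator.
runs = [
    {"precision": 3.0, "recall": 2.5, "quality": 3.5},
    {"precision": 3.0, "recall": 3.0, "quality": 3.0},
    {"precision": 2.5, "recall": 2.5, "quality": 3.0},
]
overall, spread = evaluate(runs)
```

The spread figure is the point: a system with a strong mean but a wide spread will sometimes hand your team a weak answer, which is why consistency is scored alongside precision, recall and quality.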

What's the difference between generative AI and agentic AI?

Generative AI creates content in response to prompts but requires humans to direct each step, provide all necessary context and validate outputs. Agentic AI goes further by operating with goals, using tools to gather information, maintaining context across multiple steps and making decisions about how to break down and complete complex tasks. For intelligence teams, this distinction is practical: generative AI helps you write faster summaries of research you’ve already done, whilst agentic AI can conduct portions of the research process itself, deciding which sources to check, which information to prioritise and how to structure findings.

Why is context management important for large language models?

Context management matters because large language models have finite context windows (modern LLMs offer anywhere from hundreds of thousands of tokens to several million) and exhibit non-uniform attention: information at the beginning and end of the context receives the strongest weighting, whilst material in the middle gets reduced attention. For intelligence teams, this means that simply uploading dozens of documents and asking questions won’t produce consistent results. Effective context management involves using project documents to maintain portable context, creating structured memory through summaries of past interactions, and being deliberate about what information appears where in the context window.
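The placement principle can be sketched as a simple context assembler: keep the instructions at the start and restate the task at the end (where attention is strongest), and fill the middle with the most relevant material until the budget runs out. Token counting is approximated by word count here, and the budget and inputs are illustrative.

```python
def assemble_context(instructions, snippets, question, budget=60):
    """Instructions first, question restated last, relevant snippets
    between; snippets are assumed pre-sorted by relevance."""
    parts = [instructions]
    used = len(instructions.split()) + len(question.split())
    for snippet in snippets:
        cost = len(snippet.split())
        if used + cost > budget:
            break  # drop the least relevant material from the middle
        parts.append(snippet)
        used += cost
    parts.append(question)  # restate the task at the end of the window
    return "\n".join(parts)

ctx = assemble_context(
    "You are a market analyst.",
    ["Key finding about margins.", "Less relevant background detail " * 10],
    "What drove margin change?",
    budget=20,
)
```

With a tight budget, the low-relevance background is dropped entirely rather than buried mid-context where the model would largely ignore it anyway.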

What is Physical AI and why should intelligence teams track it?

Physical AI refers to artificial intelligence systems deployed in robots, autonomous vehicles and physical automation that interact with the real world rather than just processing digital information. Examples include Waymo’s self-driving taxis, 1X-NEO’s humanoid co-pilot robots and Festo’s autonomous manufacturing systems. Intelligence teams should track Physical AI because it represents both a significant technology trend (competitive monitoring of which industries and companies are adopting autonomous systems) and a new source of operational data (what these systems learn about efficiency, safety, quality and market conditions through real-world deployment).

How can intelligence teams build trust in AI-assisted research?

Building trust in AI-assisted research requires five elements: measuring performance through structured evaluation protocols, implementing RAG systems that cite specific sources for every claim, designing workflows where AI handles scale whilst humans handle judgement, creating audit trails that show how conclusions were reached, and being transparent with stakeholders about which portions of analysis were AI-assisted versus human-generated. Trust comes from demonstrable accuracy, not from vendor promises. Teams that measure their AI performance, iterate based on results and maintain human oversight of strategic judgements build credibility with executives and internal customers.

What skills do market intelligence analysts need for working with AI?

Market intelligence analysts working effectively with AI need prompt engineering skills (writing clear, structured instructions that produce consistent results), context management skills (understanding how to provide relevant information to LLMs), critical evaluation skills (validating AI outputs rather than accepting them uncritically), workflow design skills (knowing where AI adds value versus where human expertise remains essential), and measurement skills (testing AI performance systematically and iterating based on results). The role is shifting from pure research execution to research orchestration, where analysts spend more time directing AI systems, validating outputs and focusing human effort on strategic interpretation.

Should we build custom AI agents or use off-the-shelf tools?

The decision depends on your competitive strategy. Off-the-shelf tools offer convenience for commodity tasks, but they plateau quickly when depth, control or traceability matter, and because every other intelligence team has access to the same tools with the same capabilities, they provide no competitive differentiation. Custom AI agent development (tuned prompts, specialised RAG implementations, proprietary evaluation protocols) creates barriers that competitors cannot easily replicate. Teams should use off-the-shelf tools for commodity tasks where speed matters more than advantage, and invest in custom development for capabilities that directly impact competitive positioning or strategic decision-making.

What is slowbalisation and how does it affect market intelligence?

Slowbalisation refers to the reversal of globalisation trends, where nations and organisations prioritise sovereign capabilities, domestic supply chains and regional partnerships over globally distributed operations. This trend accelerated after 2020 and is being reinforced by trade policy shifts, economic uncertainty and geopolitical tensions. For market intelligence teams, slowbalisation means tracking not just what competitors are doing, but understanding supply chain vulnerabilities, regulatory fragmentation across jurisdictions, reshoring trends and the strategic implications of deglobalised markets. Intelligence requirements become more complex because you cannot assume stable, transparent global markets.

How do leading intelligence teams use prompt engineering?

Leading intelligence teams treat prompt engineering as a core discipline rather than ad-hoc experimentation. They build prompt toolboxes with three categories: meta prompts (prompts that create other prompts for specific tasks), thought partner prompts (prompts that critique and improve analysis before stakeholder review), and research agent prompts (prompts that gather information from defined sources using structured processes). These prompts are documented, tested for consistency, refined based on output quality and shared across teams. The goal is creating reusable, reliable tools that deliver consistent results rather than requiring every analyst to reinvent effective prompting techniques.
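A prompt toolbox in the three categories named above can be as simple as a dictionary of named, parameterised templates. The templates below are illustrative inventions, not AMPLYFI's actual prompts; the point is that each prompt is a documented, reusable asset rather than ad-hoc text.

```python
# A minimal prompt toolbox: one named template per category, filled in
# with task-specific fields rather than rewritten from scratch each time.
PROMPT_TOOLBOX = {
    "meta": (
        "Write a research prompt for the task: {task}. "
        "Specify sources, output format and success criteria."
    ),
    "thought_partner": (
        "Critique the draft analysis below before stakeholder review. "
        "Flag unsupported claims and missing counter-evidence.\n\n{draft}"
    ),
    "research_agent": (
        "Gather evidence on {topic} from these sources only: {sources}. "
        "Cite each finding."
    ),
}

def build_prompt(category, **fields):
    # Fill a named template; raises KeyError if a field is missing,
    # which keeps the templates honest about their required inputs.
    return PROMPT_TOOLBOX[category].format(**fields)

p = build_prompt("research_agent", topic="EV battery supply",
                 sources="annual reports; regulatory filings")
```

Because the templates are shared data rather than individual habits, they can be version-controlled, tested for output consistency and refined centrally, which is the discipline the answer above describes.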

Essential viewing for

This webinar delivers practical intelligence for senior professionals at large global organisations:

  • Heads of Market Intelligence and Competitive Intelligence
  • Strategy and Research Directors
  • Chief Knowledge Officers and Chief Data Officers
  • Innovation and Insights Leaders
  • Senior Intelligence Analysts and Managers
  • Business Development Directors and Growth Teams
  • Product Strategy Leaders tracking technology shifts
  • Sales and Marketing Decision-Makers requiring competitive insight

Designed specifically for intelligence professionals responsible for delivering faster, more comprehensive insight whilst maintaining accuracy and trust in an era of AI automation and economic uncertainty.
