
Building Master of Ceremonies: Why the tech is ready but the complexity isn't.

How cloud giants and integration partners could deliver the 12-agent Master of Ceremonies platform.

Developing the Platform by the Author.

Pre-reads

Remember those 12 AI agent personas I described - the Neo operators, the Robin Williams energy, the Spock-level intelligence - working together like a sophisticated orchestra?

What would it take to build the platform? 

Platform Design - My View 

Each AI agent has different workload patterns - your Sprint Planning Agent works overtime in the run-up to planning, while the Daily Standup Agent barely runs on weekends (hopefully). With a microservices-based architecture, each agent can scale independently, and you can update one without breaking the others.
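To make that concrete, here is a minimal sketch of what "one microservice per agent" could look like - assuming FastAPI for the service layer (my assumption, not a prescription). Each agent ships as its own deployable, so the platform can scale them on their own schedules.

```python
# sprint_planning_agent/main.py - one microservice per agent (illustrative layout)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Sprint Planning Agent")

class PrepRequest(BaseModel):
    team_id: str
    sprint_number: int

@app.post("/prepare")
def prepare_ceremony(req: PrepRequest) -> dict:
    # A real agent would pull backlog data, velocity history and past decisions here.
    # The stub just makes the service boundary visible.
    return {
        "agent": "sprint-planning",
        "team": req.team_id,
        "sprint": req.sprint_number,
        "status": "draft agenda generated",
    }

# Because each agent is its own container, Kubernetes (or a serverless platform)
# can scale the Sprint Planning Agent up before planning day while the
# Daily Standup Agent idles over the weekend.
```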

Software Architecture Comparison via NapkinAI.


AI Agent Platform Comparisons via NapkinAI.

Google’s Agentspace already provides search and generative answer connectors to JIRA, Confluence, Asana, Teams, Slack, Outlook, GitHub and others.  

Check out the following short video.

The Hard Problems 

1. Memory

Your agents need to remember that Sprint 4 decision when planning Sprint 5. 

Vector databases could store these data points while preserving their semantic relationships - and that is what enables the greater intelligence. When your Retrospective Agent suggests an action item, it should know you tried something similar three sprints ago and it didn't work.
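Here is a rough sketch of that lookup pattern. The embedding function is a deterministic stand-in so the snippet runs on its own; a real build would call an embedding model and store the vectors in something like Pinecone or Weaviate, and the stored decisions below are made up.

```python
# Sketch of the "did we try this before?" lookup a Retrospective Agent needs.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy, deterministic embedding so the example runs without an API key.
    # It demonstrates the retrieval mechanics, not real semantic similarity.
    seed = int.from_bytes(hashlib.sha256(text.lower().encode()).digest()[:8], "big")
    vec = np.random.default_rng(seed).standard_normal(dim)
    return vec / np.linalg.norm(vec)

# Past retrospective decisions, stored with their outcomes as metadata.
memory = [
    {"text": "Rotate standup facilitator weekly", "sprint": 2, "outcome": "didn't stick"},
    {"text": "Add a mid-sprint backlog refinement session", "sprint": 4, "outcome": "kept"},
]
for item in memory:
    item["vector"] = embed(item["text"])

def similar_decisions(proposal: str, top_k: int = 3):
    # Cosine similarity against stored vectors - the same query a vector DB answers at scale.
    query = embed(proposal)
    ranked = sorted(memory, key=lambda m: float(query @ m["vector"]), reverse=True)
    return [(m["text"], m["sprint"], m["outcome"]) for m in ranked[:top_k]]

print(similar_decisions("Rotate who runs the daily standup"))
```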

2. Integration 

A single JIRA ticket includes:

  • User story with acceptance criteria and definition of done

  • Linear sub-tasks

  • GitHub PR with code changes

  • Links to Slack conversations & Teams discussions 

  • Miro design links

The platform must correlate all of this in real time, whilst respecting each tool's API limits and authentication patterns.

Real-time vs Batch Processing: Could some of this happen as batch processing rather than real-time? 

Yes - for historical context and pattern recognition, batch processing overnight could handle the heavy lifting - analysing sprint patterns, team velocity trends, and cross-project dependencies. 

But for active ceremony prep, you need real-time updates. When someone updates a JIRA ticket 10 minutes before standup, your agents need to know about it!
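A sketch of that real-time path is below: a webhook endpoint receives a JIRA issue event and fans it out to the agents that care about it. The subscription mapping and the notify step are illustrative - in production the event would land on a queue rather than a print statement.

```python
# Minimal webhook fan-out: JIRA pushes an event, interested agents get notified.
from fastapi import FastAPI, Request

app = FastAPI()

# Which agents care about which event types (illustrative mapping).
SUBSCRIPTIONS = {
    "jira:issue_updated": ["daily-standup", "sprint-planning"],
    "jira:issue_created": ["backlog-refinement"],
}

async def notify_agent(agent: str, payload: dict) -> None:
    # In production this would publish to Kafka/Kinesis or call the agent's own API.
    issue_key = payload.get("issue", {}).get("key", "unknown")
    print(f"queueing {issue_key} for {agent}")

@app.post("/webhooks/jira")
async def jira_webhook(request: Request) -> dict:
    payload = await request.json()
    event = payload.get("webhookEvent", "")
    for agent in SUBSCRIPTIONS.get(event, []):
        await notify_agent(agent, payload)
    return {"status": "accepted"}
```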

3. Enterprise Constraints 

  • Data sovereignty: Customer data can't leave specific regions

  • Access control: AI insights must respect existing permission models, i.e. what my org allows me to see and not see

  • Audit trails: Every recommendation needs compliance traceability 

  • Rate limits: JIRA uses cost-based rate limiting that varies by request complexity, GitHub allows 5,000 requests per hour for authenticated users, and Slack applies tiered limits per workspace

4. API Rate Limits 

Managing API rate limits across multiple teams requires intelligent caching, request batching, and sophisticated retry mechanisms.

The platform - “Master of Ceremonies” - coordinates 12 agents across multiple tools and teams, so the API calls add up fast:

  • Each agent checking for updates every few minutes

  • Multiplied by all your integrated tools

  • Multiplied by every team

That's thousands of API calls per hour - and if you hit rate limits, your agents are suddenly working from stale data. A sketch of a simple caching-and-backoff wrapper follows below.
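The TTL, retry count and header handling here are illustrative rather than tuned values, and the wrapper assumes the Python requests library - each tool documents its own limits and headers.

```python
# Sketch of the caching + retry wrapper every connector ends up needing.
import time
import requests

_cache: dict[str, tuple[float, dict]] = {}
CACHE_TTL = 120  # seconds - ceremony-prep reads can usually tolerate slightly stale data

def cached_get(url: str, headers: dict, max_retries: int = 5) -> dict:
    now = time.time()
    if url in _cache and now - _cache[url][0] < CACHE_TTL:
        return _cache[url][1]              # serve from cache, spend no API budget

    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url, headers=headers, timeout=10)
        if resp.status_code == 429:        # rate limited: honour Retry-After if present
            time.sleep(float(resp.headers.get("Retry-After", delay)))
            delay *= 2                     # exponential backoff between attempts
            continue
        resp.raise_for_status()
        data = resp.json()
        _cache[url] = (time.time(), data)
        return data
    raise RuntimeError(f"gave up on {url} after {max_retries} rate-limited attempts")
```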

5. EU AI Act - Four Levels of Risk 

The EU AI Act takes a risk-based approach, categorising AI systems into four levels:

  1. Prohibited (Unacceptable Risk): Social scoring, emotion recognition in workplaces - these are outright banned.

  2. High-Risk: AI making decisions that control critical infrastructure, or performing remote biometric identification.

  3. Limited Risk: Chatbots and deepfakes requiring transparency - users must know they're interacting with AI.

  4. Minimal Risk: Most AI systems including productivity tools, spam filters, AI games. No specific obligations.

Where Master of Ceremonies Sits

Master of Ceremonies fits into Minimal Risk.

Why? It performs "narrow procedural tasks" - ceremony preparation rather than employment decisions. It's preparatory automation, not performance evaluation. 

GDPR Still Applies

Being minimal risk doesn't mean no rules. Standard GDPR compliance remains essential:

  • Data minimisation: Only ceremony-relevant information - meeting times, task status, participant lists

  • Purpose limitation: Sprint planning data can't suddenly become performance review material

  • Individual rights: Team members can request access, correct or delete their data

  • Lawful basis: Legitimate business interests - workplace efficiency

Practical Considerations

Role-based access controls ensure Team A's architectural decisions don't leak to Team B. Audit trails track every AI recommendation alongside human decisions. When someone leaves, their data vanishes from all 12 agents.
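As a sketch, those two guardrails - team-scoped visibility and an audit record for every recommendation - could look something like this. The structures are illustrative, not a proposed schema.

```python
# Illustrative guardrails: role-based filtering plus an audit entry per recommendation.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    team_id: str
    agent: str
    text: str

AUDIT_LOG: list[dict] = []

def visible_recommendations(recs: list[Recommendation], requester_team: str) -> list[Recommendation]:
    # Team A's architectural decisions never reach Team B's view.
    return [r for r in recs if r.team_id == requester_team]

def record_decision(rec: Recommendation, decided_by: str, accepted: bool) -> None:
    # Every AI recommendation is logged alongside the human decision that followed it.
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": rec.agent,
        "team": rec.team_id,
        "recommendation": rec.text,
        "decided_by": decided_by,
        "accepted": accepted,
    })
```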

What’s In Our Favour:

  • Cloud giants offer AI agent platforms: Microsoft's Azure AI Foundry and Google's Vertex AI Agent Builder

  • Vector databases like Pinecone and Weaviate provide production-ready semantic search with millisecond latency

  • Webhook infrastructure enables real-time data processing from modern development tools

  • Serverless scalability through AWS Lambda, Azure Functions, and Google Cloud Functions

What’s Genuinely Hard:

  • Cross-platform identity resolution: How do you know john.smith@company.com (JIRA) is the same person as @john.smith (Slack)? See the sketch after this list.

  • Enterprise change management: 42% of C-suite executives report AI adoption creating internal tensions - “tearing their company apart”.

  • Skills shortage: Finding teams who understand both enterprise software AND AI orchestration
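On identity resolution, here is a toy sketch of the matching problem. Real deployments would lean on the corporate directory (SCIM, Azure AD, Okta) rather than string matching, and the accounts below are made up.

```python
# Toy identity resolution: link accounts across tools by normalising shared attributes.
def normalise(value: str) -> str:
    return value.lower().replace("@", "").replace(".", " ").strip()

jira_users = [{"email": "john.smith@company.com", "display": "John Smith"}]
slack_users = [{"handle": "@john.smith", "real_name": "John Smith"}]

def resolve(jira: list[dict], slack: list[dict]) -> list[dict]:
    linked = []
    for j in jira:
        j_name = normalise(j["email"].split("@")[0])
        for s in slack:
            # Match on handle or display name - crude, which is exactly why this is hard.
            if normalise(s["handle"]) == j_name or normalise(s["real_name"]) == normalise(j["display"]):
                linked.append({"email": j["email"], "slack": s["handle"]})
                break
    return linked

print(resolve(jira_users, slack_users))  # [{'email': 'john.smith@company.com', 'slack': '@john.smith'}]
```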

The Minimum Viable Technical Stack 

Technical Stack Components via NapkinAI.

Vector Database: Pinecone starting at $500/month for Enterprise usage

Event Processing: Kafka or AWS Kinesis handling, for example, 500-1000 daily events per team

Why so many events? A 12-person agile team in active development generates roughly the following per day:

  • 50-80 JIRA updates (story updates, comments)

  • 200-400 Slack messages 

  • GitHub activity from 5 developers (commits, PR reviews, merges)

  • 10-15 calendar events and meeting updates

Plus Confluence edits and Miro board changes. Your team's mileage will vary - but 500-1000 events per day feels about right.

Agent Runtime: Kubernetes or serverless functions with proper cost controls

Integration Layer: Enterprise API gateway managing rate limits across 8+ platforms (e.g. JIRA, Confluence, Teams, Slack)

AI Orchestration: LangChain or Semantic Kernel coordinating agents
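To show what the orchestration layer actually does, here is a plain-Python sketch of routing a ceremony through its agents, without committing to LangChain or Semantic Kernel specifics. The pipelines and agent functions are illustrative.

```python
# Plain-Python sketch of the orchestration layer's job: given a ceremony,
# decide which agents run, in what order, and what context they share.
from typing import Callable

AgentFn = Callable[[dict], dict]

def standup_agent(ctx: dict) -> dict:
    return {"agenda": f"yesterday / today / blockers for {ctx['team']}"}

def sprint_planning_agent(ctx: dict) -> dict:
    return {"candidate_stories": ["MC-101", "MC-102"]}  # placeholder backlog picks

PIPELINES: dict[str, list[AgentFn]] = {
    "daily_standup": [standup_agent],
    "sprint_planning": [standup_agent, sprint_planning_agent],
}

def run_ceremony(ceremony: str, context: dict) -> dict:
    outputs: dict = {}
    for agent in PIPELINES[ceremony]:
        # Each agent sees the original context plus everything produced so far.
        outputs[agent.__name__] = agent(context | outputs)
    return outputs

print(run_ceremony("sprint_planning", {"team": "payments"}))
```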

In Summary 

Well, this seemed like a great idea two posts ago! 

However, the tech foundation is solid. The regulatory framework is sensible. The integration complexity is massive but not insurmountable. Master of Ceremonies can be built.

The real question - Is automating standup worth the engineering effort?  

Probably not, but data, metrics and intelligence from retrospectives, backlog refinement, sprint velocity, and system & integration issues - yes.

Combining that quantitative data with team qualitative insights to manage / predict future projects - absolutely. 
