
The AI Control Tower Field Guide

Eclipse AI


12 May 2026 · 10 min read

AI Control Tower · ServiceNow · Agentic AI · AI Governance · Now Assist

ServiceNow built the air traffic control system for enterprise AI.

Most teams are flying blind through it.

In one page

At Knowledge 2026, ServiceNow stopped selling software. They started selling air traffic control for enterprise AI.

AI Control Tower (AICT) is that control tower: a centralised command centre for governing every AI agent, model, dataset and non-human identity across your enterprise - including those running outside ServiceNow.

It went generally available at Knowledge 2025 and was redefined at Knowledge 2026 into a five-pillar governance layer extending across AWS, Azure, GCP, SAP, Oracle and Workday through 30+ new connectors. Bill McDermott now describes ServiceNow itself as “the AI Control Tower for business reinvention.” This is no longer a feature on the side. It is the corporate identity.

If you are licensing Now Assist Pro+ or Enterprise+, AICT is bundled free. You already own it. The question is whether you are using it deliberately or letting it run in the background.

Who this is for

ServiceNow customers licensed at Now Assist Pro+ or Enterprise+ - particularly teams standing up their first AI agents, or working through DORA, FCA, PRA or ECB evidence cycles.

What most teams get wrong

They treat AICT as a dashboard. It is not. It is the connective tissue between your AI estate and the services your business runs on. Every agent acts on something - a Business Service, a Service Offering, an Application Service. If you cannot name the service each agent touches, you cannot govern it, you cannot measure its blast radius, and you cannot evidence its behaviour to a regulator.

Service-anchored agent governance is the difference between an AI estate you can defend and one that becomes someone else’s incident.

What this guide covers

  • What AICT actually is, and what shipped when
  • The five pillars in plain English
  • What is free with Now Assist Pro+ and what is not
  • Three things to do before you deploy your first agent
  • The 10-question diagnostic we use in week one
  • Where to start

What AICT actually is

AI Control Tower is a centralised command centre for AI agents, models, datasets and non-human identities - across ServiceNow and third-party systems. It is the supervisor that prevents AI sprawl: every agent registered, every action observable, every policy enforceable, every cost attributable.

Launch timeline

  • Yokohama release (Q1 2025) introduced AI Agent Orchestrator as the precursor capability.
  • Knowledge 2025 (6 May 2025) - AI Control Tower generally available, with AI Agent Fabric in early adopter.
  • Knowledge 2026 (5 May 2026) - major expansion: five-pillar model, 30+ enterprise connectors, deeper observability via the Traceloop acquisition, identity governance via the Veza integration, five new NIST and EU AI Act-aligned policy frameworks.
  • Action Fabric (announced K2026) exposes ServiceNow workflows to any external AI agent via MCP - meaning AICT now governs agents that originate outside the platform.

Why this matters now

By ServiceNow’s own disclosure at Investor Day 2026, they internally manage 1,600+ AI assets and tracked roughly $500M in cumulative AI value during 2025. Analyst projections suggest most enterprises will reach 50+ production agents within twelve months. The window for getting the governance layer right while the estate is still small is closing.

The cost of waiting: every month you defer service-anchored governance, the agent inventory grows, the data lineage gets harder to reconstruct, and the audit reverse-engineering bill compounds.

The five pillars in plain English

ServiceNow has standardised on five capabilities. Each maps to a question an enterprise needs to be able to answer about its AI estate.

  • Discover - "Where are all our AI agents?" Inventory across ServiceNow plus 30+ third-party systems (AWS, Azure, GCP, SAP, Oracle, Workday and more).
  • Observe - "What are they doing in real time?" Continuous monitoring, runtime telemetry and alerts. Deep LLM behaviour observability via the Traceloop acquisition.
  • Govern - "Are they following the rules?" Policy templates aligned to NIST AI RMF and the EU AI Act, plus an AI Agent kill switch - out-of-permission agents can be shut down in real time.
  • Secure - "Who is allowed to do what?" Identity and access governance for non-human identities across hyperscaler AI environments, via the Veza integration.
  • Measure - "Are we getting value?" Cost dashboards, ROI tracking and per-agent usage benchmarks.

Source: ServiceNow Knowledge 2026 announcements, May 2026.

What you get free with Now Assist Pro+

AICT is bundled into the Now Assist Pro+ and Enterprise+ SKUs. If you are already licensed at those tiers, the following capabilities are inside the platform you already own:

Included with the licence

  • AI Agent inventory and discovery, including non-ServiceNow agents via the 30+ connectors.
  • The five-pillar governance workspace.
  • Pre-configured policy templates aligned to NIST AI RMF and the EU AI Act.
  • AI Agent Fabric for agent-to-agent communication.
  • Cost tracking and ROI dashboards.
  • Five new compliance frameworks added at Knowledge 2026.

Not included - where partners add value

  • Sector-specific accelerators (financial services resilience, HR agent libraries, healthcare evidence packs).
  • Custom agent orchestration outside the standard AI Agent Studio.
  • Independent attestation and audit-grade evidence packs (DORA, FCA, PRA, ECB cycles).
  • Service-anchored mapping of agents to the CSDM model - connecting the AI inventory to the Business Services, Service Offerings and Application Services each agent touches.
  • Implementation, change management and adoption services.

Before the diagnostic

The 10-question diagnostic below is the in-depth version. Before any team gets to question one, we ask them to do three things. If they can produce evidence of all three, they are deploying. If they cannot, they are piloting.

Three things to do before deploying your first agent

1. Map the agent to a service

Business Service - Service Offering - Application Service. Pick one. Document the relationship in the agent’s configuration. Get the service owner to sign it off before go-live.

Why this matters: every governance question downstream - blast radius, audit evidence, ROI attribution - is answered against the service. If you cannot point to one, you have no anchor.
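As an illustration of what "document the relationship" can mean in practice, here is a minimal sketch of a registration record with a pre-deployment check. The record shape, field names and validation rules are hypothetical examples of the discipline described above, not the ServiceNow or CSDM data model.

```python
from dataclasses import dataclass

# Illustrative service anchor types, mirroring the three CSDM levels
# named in the text. Names are our own, not platform identifiers.
VALID_SERVICE_TYPES = {"business_service", "service_offering", "application_service"}

@dataclass
class AgentRegistration:
    agent_name: str
    service_name: str             # the service the agent acts on
    service_type: str             # one of VALID_SERVICE_TYPES
    service_owner_signoff: bool   # owner approved before go-live

def validate_registration(reg: AgentRegistration) -> list[str]:
    """Return blocking issues; an empty list means cleared to deploy."""
    issues = []
    if not reg.service_name:
        issues.append("agent is not anchored to any service")
    if reg.service_type not in VALID_SERVICE_TYPES:
        issues.append(f"unknown service type: {reg.service_type!r}")
    if not reg.service_owner_signoff:
        issues.append("service owner has not signed off")
    return issues

reg = AgentRegistration("ticket-triage-agent", "IT Service Desk",
                        "service_offering", service_owner_signoff=True)
print(validate_registration(reg))  # [] -> cleared to deploy
```

The point of the check is that a missing anchor or missing sign-off blocks deployment mechanically, rather than relying on someone remembering to ask.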

2. Pre-classify the blast radius

Is the service the agent touches Critical or Important under your operational resilience framework? The governance bar moves materially if the answer is yes. Classify before you build, not after.

Why this matters: regulators do not accept retrospective classification. If the agent touches an Important Business Service and you did not declare it before deployment, the evidence pack is suspect.

3. Test the kill switch in dev

Before go-live, not after. Document who pulls the trigger, what the trigger is, and what the responder does. Test the runbook once. Then test it again with a different responder.

Why this matters: AICT gives you a kill switch. Whether you have the operational discipline to use it is a different question - and one you do not want to answer for the first time during a live incident.
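The runbook discipline above can be sketched as data plus a decision rule you can dry-run in dev. Everything here - the trigger names, the thresholds, the responder field - is a hypothetical illustration of "named triggers and a named responder", not AICT configuration.

```python
# Hypothetical kill-switch runbook: named triggers, a named responder,
# and a decision rule you can exercise before go-live.
RUNBOOK = {
    "agent": "ticket-triage-agent",
    "triggers": {
        "error_rate_above": 0.05,         # >5% of actions failing
        "out_of_permission_action": True, # any denied action attempt
    },
    "responder": "j.smith@example.com",   # who pulls the trigger
}

def should_kill(error_rate: float, denied_action_seen: bool) -> bool:
    """Dry-runnable rule: does current telemetry trip the kill switch?"""
    t = RUNBOOK["triggers"]
    return error_rate > t["error_rate_above"] or (
        t["out_of_permission_action"] and denied_action_seen
    )

# Dry-run in dev: healthy telemetry should not trip the switch...
assert not should_kill(error_rate=0.01, denied_action_seen=False)
# ...but a single out-of-permission action should.
assert should_kill(error_rate=0.01, denied_action_seen=True)
```

Testing the rule twice with different responders, as the text suggests, is then a matter of re-running the same dry-run with the `responder` field changed and a different person walking the steps.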

If you cannot produce all three, you are not deploying - you are piloting.

Ten questions to answer before deploying your first agent

This is the diagnostic Eclipse AI runs at the start of every AICT engagement. If you cannot answer all ten with confidence, you are not ready to deploy - you are ready to design.

1. What problem does this agent solve, and in whose P&L?

If you cannot name the function leader whose budget benefits, you do not have a use case - you have a proof of concept.

What good looks like: the agent has a named business sponsor and a single measurable target (e.g. “reduce L1 ticket resolution cost for the IT service desk owner by 30 percent”).

2. Which ServiceNow service does it act on?

Every agent acts through the platform on a Business Service, Service Offering or Application Service. If you cannot name it, you cannot govern it, classify its blast radius, or evidence its behaviour to a regulator.

What good looks like: every agent has a documented Application Service in its configuration, with an upstream link to the Service Offering and Business Service it supports.

3. What is the agent’s blast radius if it goes wrong?

Does it touch a Critical or Important Business Service under DORA? The governance bar moves materially if the answer is yes.

What good looks like: blast radius is pre-classified before deployment, and the classification is reviewed by the service owner and risk function.

4. Who owns the agent - and who is accountable for its decisions?

Treat agents like junior employees. Every one needs a named manager. “The AI team” is not an owner.

What good looks like: a single named individual is recorded as the agent owner, with a documented escalation path to the service owner.

5. What data does it read, and what is it permitted to write?

Default to least privilege. A recurring failure pattern: incidents trace back to over-broad permissions granted during pilot and never tightened before go-live.

What good looks like: the agent has an explicit allow-list of tables and actions, signed off by the data owner before go-live.
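An explicit allow-list with deny-by-default semantics is simple to express. This is a generic sketch of the principle, with made-up table and action names; it is not how permissions are configured in the platform.

```python
# Hypothetical least-privilege gate: deny by default, permit only the
# (table, action) pairs the data owner explicitly signed off.
ALLOW_LIST = {
    ("incident", "read"),
    ("incident", "update"),
    ("kb_knowledge", "read"),
}

def is_permitted(table: str, action: str) -> bool:
    """Anything not on the allow-list is denied - no wildcard grants."""
    return (table, action) in ALLOW_LIST

assert is_permitted("incident", "update")
assert not is_permitted("incident", "delete")  # never granted
assert not is_permitted("sys_user", "read")    # out of scope by default
```

The failure pattern described above - over-broad pilot permissions never tightened - is exactly what a positive allow-list prevents: there is no "everything" grant to forget to revoke.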

6. How will you measure success - and how will you measure failure?

Deflection rate, MTTR reduction and cost-to-serve are necessary. They are not sufficient. You also need leading indicators of silent drift.

What good looks like: three success metrics plus two leading failure indicators, instrumented in AICT from day one and reviewed monthly.
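A leading failure indicator for silent drift can be as simple as comparing recent readings against an agreed baseline. The metric, window and tolerance below are illustrative assumptions, not AICT instrumentation.

```python
# Hypothetical drift check: flag when the average of recent deflection-rate
# readings falls more than `tolerance` (relative) below the agreed baseline.
def drift_flag(baseline: float, recent: list[float], tolerance: float = 0.10) -> bool:
    """True when recent performance has sagged enough to warrant review."""
    avg = sum(recent) / len(recent)
    return avg < baseline * (1 - tolerance)

assert not drift_flag(0.40, [0.39, 0.41, 0.40])  # holding steady
assert drift_flag(0.40, [0.34, 0.33, 0.35])      # sustained drop -> review
```

The monthly review then has something concrete to act on: not "does the dashboard look fine", but "has this indicator fired since the last review".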

7. Where does its decision trail get logged?

This is your audit evidence. AICT captures it natively, but the retention depth and review cadence are configuration choices - and the right answer depends on which regulator is asking.

What good looks like: complete prompt, output and downstream action audit trail retained for the full regulatory window, with sampled review built into the operations runbook.

8. What does “shut it down” look like?

AICT gives you a kill switch. The question is whether you have decided who pulls it, and when.

What good looks like: a documented kill-switch runbook with named triggers and a named responder, tested at least once before go-live.

9. How will you retire it?

Most enterprises have not thought about this. Agents accrete. Decommissioning is the discipline that prevents an unmanageable estate by year three.

What good looks like: every agent has an explicit sunset date and decommissioning criteria, reviewed quarterly against actual usage.

10. What does an external auditor need to see?

If you cannot answer this for your FCA, PRA, ECB or DORA evidence cycle, redesign the agent before you deploy it - not after.

What good looks like: a pre-built evidence pack template, mapped to the relevant regulatory framework, populated automatically from AICT.

Where to start

The first conversation we have with a new client is short - typically thirty minutes - and it answers three questions: where is your AICT inventory today, which services do your agents already touch, and what would your DORA, FCA, PRA or ECB evidence pack look like if it had to be produced this quarter.

That conversation is free, non-disruptive, and ends with one of three things: a clear answer that you are further along than you thought, a defined readiness diagnostic engagement (two to three weeks), or a polite handshake if it is not the right time. We have no interest in selling work that does not need doing.

What this looks like in practice

Last quarter we worked with a European retail bank deploying their first wave of AI agents. In week one, before any deployment work, we mapped each agent to the CSDM service it would act on. Three of them were touching Important Business Services that had not been declared in their DORA register. That five-day exercise saved them roughly a quarter's worth of audit remediation effort and three difficult conversations with their regulator. That is the shape of our first engagement.

Eclipse AI sits at the bleeding edge of enterprise AI. Our team brings decades of combined ServiceNow, agentic AI and regulated-industry experience - principal architects, platform engineers and subject-matter experts who have built and governed AI estates inside some of the world’s most demanding enterprises.

Like ServiceNow itself, we run as an AI-first business. Every operational workflow - research, analysis, client deliverable production - is automated through our own internal AI estate. We do not just implement the methodology we sell; we run on it.

Eclipse AI implements AI Control Tower, Now Assist and AI Agents end-to-end for regulated enterprises, with one distinguishing methodology: every agent we deploy is mapped to the CSDM service it acts on, so AI risk reads as service risk, not as an orphaned event in a governance dashboard.

Get in touch for a thirty-minute briefing. We will use the time however is most useful to you - pressure-test a roadmap, walk through the diagnostic, or just have a frank conversation about where AICT fits in your estate.

Frequently Asked Questions

What is ServiceNow AI Control Tower?

AI Control Tower (AICT) is a centralised command centre for governing every AI agent, model, dataset and non-human identity across your enterprise - including those running outside ServiceNow. It went generally available at Knowledge 2025 and was significantly expanded at Knowledge 2026 into a five-pillar governance layer with 30+ enterprise connectors across AWS, Azure, GCP, SAP, Oracle and Workday.

Is AI Control Tower included in Now Assist Pro+?

Yes. If you are licensed at Now Assist Pro+ or Enterprise+, AICT is bundled free. The five-pillar governance workspace, AI Agent inventory, pre-configured policy templates aligned to NIST AI RMF and the EU AI Act, AI Agent Fabric and cost dashboards are all included in the licence.

What are the five pillars of AI Control Tower?

The five pillars are: Discover (inventory of all AI agents across ServiceNow and third-party systems), Observe (real-time monitoring and runtime telemetry), Govern (policy enforcement aligned to NIST AI RMF and the EU AI Act), Secure (identity and access governance for non-human identities), and Measure (cost dashboards and ROI tracking per agent).

How do I know if I am ready to deploy an AI agent?

Before deploying, you need to satisfy three conditions: map the agent to a specific Business Service, Service Offering or Application Service; pre-classify the blast radius against your operational resilience framework; and test the kill switch in your development environment with a documented runbook. If you cannot evidence all three, you are piloting - not deploying.

Can AICT govern AI agents we've deployed outside ServiceNow?

Yes. AICT includes AI Agent inventory and discovery across 30+ third-party systems including AWS, Azure, GCP, SAP, Oracle and Workday. Action Fabric, announced at Knowledge 2026, also exposes ServiceNow workflows to any external AI agent via MCP - meaning AICT can govern agents that originate entirely outside the ServiceNow platform.

How does AICT relate to CSDM?

CSDM is the foundation that makes AICT governance meaningful. Every AI agent acts on something - a Business Service, a Service Offering, or an Application Service. By mapping each agent to its CSDM service, you can classify blast radius, attribute ROI, and produce audit evidence against the service it touches. Without CSDM alignment, AICT gives you an agent inventory; with it, you get governance you can defend to a regulator.
