OpenAI published Introducing workspace agents in ChatGPT on April 22, 2026, bringing an agent builder, a shared agent library, Slack actions, and scheduled runs into the ChatGPT enterprise workflow story. Then on May 7, 2026, the ChatGPT Enterprise & Edu Release Notes added a more important detail: workspace agents gained Enterprise Key Management support, plus admin-added skills, files, and custom MCP servers.
If you only saw the first announcement, it was easy to read this as "stronger custom GPTs." If you read both together, the picture changes. It starts to look less like a novelty agent feature and more like a shared agent layer that enterprises could actually pilot.
The real change is not chat. It is shared execution
Many teams already use ChatGPT for drafting, summarizing, and planning. Many also use Zapier or Make to connect forms, triggers, and apps. Both are useful, but they mostly live either in personal usage or in explicit low-code automations.
Workspace agents are trying to fill a different gap: a shared layer where teams can publish repeatable, cross-tool, language-heavy agents that other teammates can actually run.
OpenAI's product page highlights a few pieces: shared agents, connectors for tools like Google Drive, SharePoint, GitHub, HubSpot, Notion, Snowflake, and Slack, scheduled runs, and a workspace-level agent library. The May 7 enterprise update then adds the governance vocabulary that matters much more to teams: EKM, RBAC, analytics, version history, and admin-managed skills, files, and custom MCP servers.

That is why this is worth covering for site readers. The story is not "ChatGPT can now do more actions." The story is that OpenAI is starting to assemble a governed shared agent layer.
Why the EKM update changes the buying signal
Most AI launches sell speed first. Enterprise pilots fail or stall for a different reason: governance. Can the company control what the agent sees, what it connects to, how it runs, and who can publish it?
That is why, from a practical adoption perspective, the May 7 release-note update matters more than the original April launch. EKM will not excite every individual user, but it changes how security-conscious teams judge pilot readiness. Once that lands, the product is no longer just about agent capability. It starts to answer how enterprise teams might actually manage it.
That changes the evaluation questions:
- Can admins constrain the skills, files, and servers an agent can use?
- Can a team publish and reuse a shared agent instead of relying on personal prompts?
- Can scheduled runs, connectors, and library access be managed centrally?
- Can the workspace start with low-risk internal workflows rather than jumping into sensitive core workflows?
Where this differs from Zapier and Make
Readers comparing this update with Zapier or Make should not think in terms of direct replacement. The better way to frame it is that those tools sit in different layers.
Zapier and Make are explicit workflow orchestrators. They are strongest when the job is mostly triggers, structured fields, app connections, branching logic, and stable automation paths. Workspace agents are more interesting when the work includes research, summarization, drafting, classification, and language-heavy coordination across tools.
In practice, the split looks like this:
- ChatGPT workspace agents handle interpretation, synthesis, drafting, and some cross-tool agent actions.
- Zapier and Make handle stable triggers, structured transformations, and downstream orchestration.
That means the real opportunity is not choosing one over the other. It is deciding which parts of a workflow belong in a shared language-capable agent and which parts should stay inside deterministic automation infrastructure.
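One way to make that split concrete is a toy triage function: given a workflow step, decide whether it belongs in a language-capable agent or in deterministic automation. This is a sketch under assumptions; the keyword lists and the `route_step` helper are illustrative, not anything from OpenAI's, Zapier's, or Make's products.

```python
# Toy sketch: route workflow steps to an "agent" layer or an "automation" layer.
# The keyword sets are illustrative assumptions, not product behavior.

AGENT_WORK = {"summarize", "draft", "classify", "research", "synthesize"}
AUTOMATION_WORK = {"trigger", "transform", "route", "notify", "sync"}

def route_step(description: str) -> str:
    """Return 'agent' for language-heavy steps, 'automation' for deterministic ones."""
    words = set(description.lower().split())
    if words & AGENT_WORK:
        return "agent"
    if words & AUTOMATION_WORK:
        return "automation"
    return "review"  # unclear steps get a human decision

pipeline = [
    "trigger on new HubSpot lead",
    "summarize recent account activity",
    "draft a first-touch email",
    "sync the result back to HubSpot",
]
print([route_step(step) for step in pipeline])
# → ['automation', 'agent', 'agent', 'automation']
```

The point of the exercise is not the keywords; it is forcing the team to label each step before deciding which tool owns it.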
Which teams should pilot this now
The best first pilots are not broad "AI transformation" projects. They are narrow, repeatable, reviewable workflows such as:
- weekly internal status summaries
- early lead triage suggestions
- customer feedback clustering
- release-note drafting from multiple internal sources
- project-specific research and update synthesis
These are good pilot candidates because they have real repetition, real coordination value, and low enough risk to keep a human in review.

The wrong first pilots are high-risk or irreversible flows: direct production writes, customer promises, pricing commitments, destructive actions, or sensitive system changes without approval gates.
What site readers should do next
If you are already comparing ChatGPT with other general AI assistants, this update should change your framing. The question is no longer only whether ChatGPT is a good daily assistant. The better question is whether your team has one or two shared, language-heavy workflows that are mature enough to become governed agent assets.
A practical next step is small:
- Pick one repetitive and reviewable workflow.
- Define the inputs, run cadence, output format, and approval points.
- Decide which parts belong inside the workspace agent and which parts still belong in Zapier or Make.
- Pilot in one small team and study failure cases before you scale.
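The four steps above can be sketched as a simple pilot definition that a team fills in before building anything. The field names, the `AgentPilot` class, and the example workflow are assumptions for illustration, not a ChatGPT or OpenAI configuration format.

```python
from dataclasses import dataclass, field

# Hypothetical pilot definition: the fields mirror the checklist above
# (inputs, run cadence, output format, approval points). Not an OpenAI schema.

@dataclass
class AgentPilot:
    name: str
    inputs: list[str]
    cadence: str
    output_format: str
    approval_points: list[str]
    stays_in_automation: list[str] = field(default_factory=list)

    def is_reviewable(self) -> bool:
        # A pilot with no human approval point is not a safe first pilot.
        return len(self.approval_points) > 0

pilot = AgentPilot(
    name="weekly-status-summary",
    inputs=["project Slack channel", "Notion status pages"],
    cadence="every Friday 09:00",
    output_format="one-page summary with open risks",
    approval_points=["team lead review before posting"],
    stays_in_automation=["posting the approved summary to Slack"],
)
print(pilot.is_reviewable())
# → True
```

Writing the definition down first also makes the Zapier/Make boundary explicit: anything listed under `stays_in_automation` never enters the agent at all.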
This topic is worth publishing because the combination of workspace agents, EKM, and admin controls is the clearest signal yet that ChatGPT is trying to become more than a personal assistant surface. For teams building shared AI workflows, that is a meaningful product shift.
References:
- OpenAI: Introducing workspace agents in ChatGPT
- OpenAI Help: ChatGPT Enterprise & Edu Release Notes


