How we use Closot AI Agent across our teams

AI
Nikita Rani

When we shipped Closot AI Agent internally, we expected engineering to adopt it first. Instead, our support team was using it within hours — summarizing ticket threads, drafting responses, and pulling context from internal docs without switching tabs. Three months in, the agent has become the connective tissue between our teams, our chat channels, and our workspace. Here is an honest look at the workflows that stuck, the ones that didn't, and what we learned about embedding autonomous AI into daily work.

Support: Instant context, faster resolution

Before the agent, support engineers would read through threads of 30-40 messages before responding. The cognitive overhead was enormous — context was scattered across tickets on our sprint boards, wiki pages with verification dates from weeks prior, and chat threads that had long since scrolled offscreen. Now they ask the agent for a summary and get the issue, what has been tried, and the customer's expectation in three sentences. Response drafts are typically 80% ready — engineers review, adjust tone, and send.

The agent also cross-references incoming tickets against our linked databases of known issues, automatically attaching related bug reports and feature requests. When it detects a duplicate, it merges the tickets and notifies both reporters. Over the last quarter, this saved our support team roughly 14 hours per week previously spent on manual deduplication.
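Closot's actual duplicate detector is not public, but the idea can be sketched with a simple token-overlap check. Everything below — the Ticket class, the Jaccard threshold, the function names — is illustrative, not Closot's API.

```python
# Hypothetical sketch: flag likely duplicate tickets via title-token overlap.
from dataclasses import dataclass

@dataclass
class Ticket:
    id: str
    title: str

def _tokens(text: str) -> set[str]:
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    """Similarity of two titles as shared-token ratio (0.0 to 1.0)."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_duplicates(new: Ticket, known: list[Ticket], threshold: float = 0.6) -> list[Ticket]:
    """Return known tickets whose titles overlap heavily with the new one."""
    return [t for t in known if jaccard(new.title, t.title) >= threshold]
```

A production system would use embeddings rather than raw token overlap, but the shape of the workflow — compare incoming ticket against the known-issues database, merge above a threshold — is the same.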

One workflow we did not expect: the agent now monitors our public-facing shared channels and automatically creates Closot tickets from customer messages that match certain intent patterns. A customer types "this is broken" in chat, and by the time our support engineer checks their board, there is already a ticket with priority P2, the right labels applied, and a link back to the original chat conversation.
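The intent-matching step can be approximated with a small pattern table. The patterns, priorities, and labels below are hypothetical stand-ins for whatever the agent actually uses.

```python
import re

# Illustrative intent patterns; the real agent's matcher is not public.
INTENT_PATTERNS = [
    (re.compile(r"\b(broken|doesn'?t work|crash(es|ed)?)\b", re.I), "P2", "bug"),
    (re.compile(r"\b(feature request|would be nice|can you add)\b", re.I), "P3", "feature-request"),
]

def ticket_from_message(message: str, permalink: str):
    """Return a ticket dict if the message matches a known intent, else None."""
    for pattern, priority, label in INTENT_PATTERNS:
        if pattern.search(message):
            return {
                "title": message[:80],
                "priority": priority,
                "labels": [label],
                "source": permalink,  # link back to the original chat conversation
            }
    return None
```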

Average ticket resolution time dropped from 38 minutes to 16 minutes with the agent — a 58% reduction across 2,400 tickets.

Product: From feedback to roadmap

Product managers use the agent to process customer feedback at scale. They ask it to categorize the last month's feature requests, identify patterns, and draft user stories. The most impactful workflow has been automated triage — when a request arrives through Closot Requests, the agent labels it, estimates priority based on customer tier, and routes it to the right team before a human touches it.

But the deeper integration happens on our boards. Our product team runs a kanban view for their roadmap, with columns for Discovery, Speccing, Building, Shipping, and Measuring. The agent watches this board and, each Friday afternoon, generates a roadmap digest: what moved columns, what has been sitting in Discovery for more than two weeks, and which items have the most linked customer requests. It posts this digest both to the product teamspace in Closot and to the #product-updates chat channel, so stakeholders never need to ask "what's shipping next?"
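The digest rules above — what moved, what has sat in Discovery too long, what has the most linked requests — reduce to a few filters over the board. This sketch uses made-up field names; the real board schema is Closot-internal.

```python
from datetime import date, timedelta

# Hypothetical board items; fields mirror the three digest rules described above.
items = [
    {"title": "SSO support", "column": "Discovery", "entered": date(2024, 5, 1),
     "moved_this_week": False, "linked_requests": 12},
    {"title": "Bulk export", "column": "Building", "entered": date(2024, 5, 28),
     "moved_this_week": True, "linked_requests": 4},
]

def roadmap_digest(items: list[dict], today: date) -> dict:
    moved = [i["title"] for i in items if i["moved_this_week"]]
    stale = [i["title"] for i in items
             if i["column"] == "Discovery" and today - i["entered"] > timedelta(weeks=2)]
    top = max(items, key=lambda i: i["linked_requests"])["title"]
    return {"moved": moved, "stale_discovery": stale, "most_requested": top}
```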

Product managers also pull from the Closot Marketplace frequently. When spinning up a new initiative, they browse the marketplace for a template — competitive analysis, product brief, launch checklist — and the agent customizes it using context from the workspace: inserting the right team members, linking to existing databases, and pre-filling known constraints. A template that used to take 45 minutes to adapt is ready in under three.

Engineering: Specs and sprint prep

Engineers use the agent primarily for generating technical spec templates and preparing sprint reviews. Our engineering team runs two-week cycles with sprint planning every other Monday. Before each cycle, the agent compiles completed work, carry-overs, and blockers into a clean summary by scanning our sprint boards. It calculates velocity trends, flags tickets that have been in progress for more than five days, and highlights any ticket with a "blocked" label that hasn't been discussed in standup.
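The two flagging rules — in progress more than five days, blocked but not discussed in standup — are straightforward predicates over the sprint board. Field names here are assumptions for illustration.

```python
from datetime import date

def sprint_flags(tickets: list[dict], today: date) -> dict:
    """Flag long-running and silently blocked tickets (illustrative field names)."""
    stuck = [t["id"] for t in tickets
             if t["status"] == "in_progress" and (today - t["started"]).days > 5]
    silent_blocked = [t["id"] for t in tickets
                      if "blocked" in t["labels"] and not t["discussed_in_standup"]]
    return {"stuck": stuck, "silent_blocked": silent_blocked}
```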

One unexpected pattern: engineers started asking the agent to review their page structures, turning it into a lightweight architecture review tool. An engineer writes a technical spec in Closot, and the agent checks it against our internal engineering wiki — are the right services mentioned? Does the approach conflict with any ADRs (Architecture Decision Records) that have been verified in the last 90 days? It pulls these connections automatically through our linked databases.

Sprint summaries are where the agent truly shines. At the end of each cycle, it auto-generates a retrospective document: tasks completed versus planned, velocity compared to the last three sprints, a breakdown by label (bug fix, feature, infrastructure), and a list of carry-overs with reasons. Our engineering managers used to spend two hours compiling this information manually. Now it is ready when they open their laptop Monday morning.

The agent in chat: Bridging conversations and workspace

Our messaging integration turned out to be the agent's most-used surface area. The Closot Agent lives in your messaging app as a bot that any team member can mention. Type "@Closot create a ticket for the auth regression we discussed", and the agent creates a ticket on the appropriate sprint board with the right priority, links the chat thread as context, and assigns it based on the team's on-call rotation. No one has to leave their messaging app to capture work.
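Parsing a mention like the one above can be sketched as a regex over the message text. The command grammar, thread URL, and on-call lookup here are hypothetical.

```python
import re

# Illustrative command pattern for bot mentions; not Closot's real grammar.
MENTION = re.compile(r"^@Closot\s+create a ticket for (?P<summary>.+)$", re.I)

def parse_mention(text: str, thread_url: str, on_call: str):
    """Turn a chat mention into a ticket payload, or None if it doesn't match."""
    m = MENTION.match(text.strip())
    if not m:
        return None
    return {
        "summary": m.group("summary"),
        "context": thread_url,   # chat thread linked as ticket context
        "assignee": on_call,     # assigned from the team's on-call rotation
    }
```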

Beyond ticket creation, the agent answers workspace questions directly in chat. "What's the status of Project Aurora?" pulls real-time data from the project's board and dashboard. "When was our onboarding wiki last verified?" checks the verification date and responds in the thread. This has reduced the number of people who need direct access to Closot for casual queries — they just ask in chat.

Managing calendar and meeting notes

We recently connected the agent to our calendar views and meeting notes. Before every meeting with more than three participants, the agent creates a meeting notes page from a Marketplace template, pre-populates the agenda from the calendar event description, and links any relevant docs or tickets that have been mentioned in the meeting's chat channel in the last 48 hours.

After the meeting, participants get an AI-generated summary: decisions made, action items assigned (automatically turned into tickets on the appropriate board), and open questions flagged for follow-up. The agent posts this summary to the relevant teamspace and the meeting's chat channel. We measured a 34% reduction in "what did we decide in that meeting?" chat messages since enabling this workflow.

Hours saved per team per week with Closot AI Agent: Support 17h, Product 13h, Engineering 10h, Design 8h, Marketing 6h.

What we learned

The teams that got the most value were not doing the most complex tasks. They identified repetitive, context-heavy work and let the agent handle the first pass. The human always stays in the loop — but the starting point is dramatically better.

Three principles emerged. First, meet people where they work — the messaging integration matters more than the in-app experience for many workflows, because that is where conversations happen. Second, templates accelerate everything — the Closot Marketplace gives the agent a library of proven structures to work from, rather than generating from scratch. Third, connections are the value — the agent is most powerful when it links a chat conversation to a ticket to a sprint to a dashboard to a meeting note. The graph of connections is what makes the workspace intelligent.

We are still early. The agent handles roughly 30% of what we think it could. But even at 30%, it has changed how every team at Closot operates — and the teams that embraced it first are the ones shipping the fastest.
