The Pivot Chronicles • Part 11

The Brief: Why We Stopped Building Tools and Started Writing Intelligence

How a simple attendance digest revealed what operators actually need

Alistair Nicol
February 23, 2026
8 min read

If you've been following this series, you know the pattern by now. We build something. It's clever. It doesn't stick. We learn something painful. We try again.

Parts 1 through 10 cover predictive retention analytics, AI video analysis, engagement surveys, micro-checks delivered via SMS, shift-based delivery optimization, and passwordless authentication. Each one solved a real problem. None of them became a product anyone would pay for.

This post is about what finally clicked.

The Micro-Check Postmortem

Our last real bet was micro-checks: three targeted questions delivered to managers via SMS every morning, driven by review signals and team data. The idea was that if you made the daily action small enough and smart enough, managers would actually do it.

They didn't.

We built shift-based delivery so checks arrived when managers clocked in, not at arbitrary times. We eliminated passwords so it was one tap to start. We integrated with 7shifts so we knew who was on shift and when. We did everything we could to reduce friction.

Completion rates climbed for a few weeks. Then they fell. Then they flatlined. Managers stopped responding, and the operators we were working with stopped asking them to.

The uncomfortable truth was simple: we were adding workflow to people who were already overwhelmed. It didn't matter how small the workflow was. Store managers run on adrenaline, muscle memory, and whatever crisis is in front of them right now. A daily SMS, no matter how well-timed, is still one more thing. And one more thing always loses to the thing that's on fire.

The Observation That Changed Everything

While micro-checks were dying, we had a daily attendance digest running in the background. It pulled 7shifts data — late arrivals, no-shows, shift coverage — and sent a summary to the operator each morning. Not the manager. The operator.

It wasn't sophisticated. It wasn't AI-powered. It was basically a formatted email with numbers in it.

And it was the only thing our pilot customer consistently read.

When I asked why, the answer was obvious in retrospect:

"Because it tells me what's happening at my stores without me having to ask anyone."

That sentence contains the entire product.

What Operators Actually Need

We'd spent two years trying to change behavior at the store level. Every product we built assumed that the path to better operations ran through the manager. Get the manager to check three things. Get the manager to respond to a pulse survey. Get the manager to watch for the issues that reviews surfaced.

But operators don't have a manager behavior problem. They have a visibility problem. They can't be at every location every day, and the information they get from the locations they're not at is filtered, delayed, or missing entirely.

GMs don't self-report problems. That's not because they're dishonest. It's because by the time something feels like a "problem" worth escalating, it's already been a problem for weeks. And the things that are problems — the slow attendance decline, the shift where three people called out, the new pattern of service speed complaints — don't feel like problems in the moment. They feel like bad days.

The operator needs someone to connect those bad days across weeks and across data sources and say: this location is trending in the wrong direction, and here's why.

The Weekly Brief

So we stopped building tools for managers and started building intelligence for operators.

The product now is a weekly AI briefing, delivered every Monday morning, for every location. It synthesizes three data sources: scheduling and attendance from 7shifts, guest reviews from Google, and (when available) POS data from systems like Revel and Toast.

Each location gets a severity rating: Urgent, Watch, or Stable. The brief covers what happened that week in plain language, connects patterns across data sources, compares the location to its own history and to the rest of the fleet, and recommends specific action.
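To make the triage concrete, here's a rough sketch of how a week's signals could map to a severity rating. This is illustrative Python with made-up thresholds and field names, not our actual pipeline; the real logic also weighs each location against its own history and the rest of the fleet.

```python
from dataclasses import dataclass

@dataclass
class WeeklySignals:
    """One location's signals for the week (field names are illustrative)."""
    on_time_rate: float             # fraction of shifts started on time
    on_time_rate_30d_delta: float   # change vs. 30 days ago (e.g. -0.14)
    no_shows: int                   # no-shows this week
    new_complaint_themes: int       # review themes not seen before this week

def severity(s: WeeklySignals) -> str:
    """Map a week's signals to Urgent / Watch / Stable (toy thresholds)."""
    # A staffing decline that's already surfacing in guest reviews is the
    # cross-source pattern the brief exists to catch.
    if s.on_time_rate_30d_delta <= -0.10 and s.new_complaint_themes > 0:
        return "Urgent"
    if s.on_time_rate < 0.80 or s.no_shows >= 3 or s.new_complaint_themes > 0:
        return "Watch"
    return "Stable"

# The example from the recommendation below: on-time rate down 14% over
# 30 days, plus new service-speed complaints this week.
print(severity(WeeklySignals(0.72, -0.14, 2, 1)))  # → Urgent
```

The point of the sketch is the shape, not the thresholds: severity comes from connecting signals across sources, not from any one number crossing a line.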

Here's what a real recommendation looks like:

"Staffing reliability is eroding the guest experience at this location. On-time rate has dropped 14% over the past 30 days and new service speed complaints appeared in reviews this week for the first time. This needs a direct conversation with your GM this week, not monitoring."

That's not a dashboard. That's not a checklist. That's someone telling you what to do and why.

What Makes This Different from a Report

A report gives you a snapshot. If that's all we were building — a weekly email summarizing numbers from other systems — we'd be a feature, not a product. Toast or 7shifts could build that tomorrow.

The difference is what happens over time.

Every brief includes week-over-week comparisons. If attendance dropped 8% this week, the brief also tells you it's down 14% over 30 days. It tells you whether that trend is unique to this location or showing up across the fleet.

When you read a brief and take action — talk to the GM, adjust staffing, start a hiring push — you can log what you did. One tap. Future briefs reference that action: "Three weeks ago you noted two BOH staff gave notice and hiring was in progress. Attendance has since recovered to 78%, up from 64% at the time of your note."

The system remembers what happened, what you did about it, and whether it worked. The longer you use it, the more context each brief carries. That's not a weekly email. That's an operations partner that builds institutional knowledge over time.
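The mechanic is simple enough to sketch: store each logged action next to the metric it responded to, and have later briefs compare then vs. now. The structure and field names below are illustrative, not our actual schema.

```python
from datetime import date

# Each logged action captures the note and the metric value at that moment,
# so a future brief can reference both. (Illustrative structure only.)
action_log: list[dict] = []

def log_action(location: str, note: str, metric: str, value_at_note: float):
    action_log.append({
        "date": date.today().isoformat(),
        "location": location,
        "note": note,
        "metric": metric,
        "value_at_note": value_at_note,
    })

def followups(location: str, current_metrics: dict) -> list[str]:
    """Lines a future brief could include, referencing past actions."""
    lines = []
    for a in action_log:
        if a["location"] != location:
            continue
        now = current_metrics.get(a["metric"])
        if now is not None:
            lines.append(
                f'You previously noted: "{a["note"]}". '
                f'{a["metric"]} is now {now:.0%}, '
                f'up from {a["value_at_note"]:.0%} at the time of your note.'
            )
    return lines

log_action("Store 12", "Two BOH staff gave notice; hiring in progress",
           "attendance", 0.64)
for line in followups("Store 12", {"attendance": 0.78}):
    print(line)
```

Everything compounding in the product reduces to this loop: record the action with its context, then surface the delta when the next brief goes out.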

What We Stopped Doing

We stopped trying to change manager behavior. We stopped building store-level workflow. We stopped pretending that a daily SMS habit would compound into operational excellence.

We also stopped hiding behind abstractions. Our previous strategy documents talked about "operational drift detection" and "multi-signal triangulation" and "Operational Stability Index." Operators don't use those words. They say "I need to know which stores are struggling" and "I wish I found out about this three weeks ago." We build for the second language now.

What I Got Wrong

I got the unit of value wrong for two years. I thought the value was in the action: getting a manager to do three specific things based on signals from reviews and team data. It turns out the value is in the knowing: telling the operator what's happening at their stores so they can decide what to do.

That's a fundamentally different product. One requires adoption at the store level. The other requires a working email address.

I also got the buyer wrong. The manager was never going to be our user. Managers are stretched thin, underpaid, and living inside the problems we were trying to surface. The operator sitting above them — the person who can't be at every location every day but needs to know which ones need them — that's the buyer and the user.

Where We Are Now

We have one pilot customer with real data flowing. The briefs are getting better every week. We're adding POS integration to sharpen the signal. We're reaching out to a small group of operators to start additional pilots.

I won't pretend we've found product-market fit. We haven't. We have a hypothesis that's better informed than any of the previous six, and we have a product that our pilot customer actually reads every week. That's more than we've had before.

The thing I'm most confident about is what we're not building. We're not building another dashboard. We're not building a task management system. We're not requiring anyone at the store level to change their behavior. We're taking data that already exists and turning it into intelligence that an operator can act on in 60 seconds.

If that's not enough, we'll learn that too. But for the first time in this journey, the product is built around something an operator told us they value, not something we assumed they should.

If You Run Multiple Locations

If any of this resonates — if you're spending your weeks driving between stores, relying on GMs to tell you what's happening, and hearing about problems from bad reviews instead of catching them first — we're running 30-day pilots with a small number of operators.

We connect your data sources, generate weekly briefs for every location, and you tell us whether the intelligence is worth paying for. No commitment until you've seen it work.

Reach out at hello@getpeakops.com or start a pilot.

This is Part 11 of The Pivot Chronicles, an ongoing series about building PeakOps. Previous parts cover predictive analytics, AI video analysis, engagement surveys, micro-checks, and the many things we got wrong along the way. Start from Part 1.
