This is a submission for the Notion MCP Challenge
I don't use Notion on a daily basis, but I still love this app. Blue-neon colors!
Thanks a lot! I'm so obsessed with that color that I refuse to use any other.
It's also my favorite, next to turquoise. Actually, the more I look at this page, the more I want to try Notion. Maybe it'll sort out my inner chaos a bit.
Try it, it'll make your life easier. And it's not even hard to get used to - it's so intuitive, structured, and, most importantly, customizable.
The flight control tower framing is apt. The part I find most interesting is the audit layer - scanning for problems you didn't know you had. Most Notion workspaces accumulate invisible debt: orphaned pages, broken relations, properties nobody uses. A tool that surfaces that automatically rather than waiting for the next spring clean is solving a real pain. How does it handle false positives when surfacing "problems" - some things that look like orphans are actually intentional?
Thanks for the great feedback. Right now, when it sees empty or orphaned pages or dead links, it just assumes something's wrong, but it doesn't make any changes without the owner's approval, because it's not fully automated - and that's by design.
Actually, detecting false positives could be a really good feature in the future. It's going to be way more complex, but really interesting at the same time.
The human-in-the-loop for approvals is the right call at this stage. False positive detection would be a genuinely tricky problem - the agent would need some model of "intended state" to know when an empty page is a bug vs. a placeholder. Would be interesting to see how you approach that when you get there.
I'll probably create a separate article and maybe even a dedicated video when I get to that. It's a major feature and definitely deserves its own spotlight.
That makes sense. False positive detection deserves its own deep-dive, not a footnote. Looking forward to that article when it comes.
George, you magnificent brain wizard! As a full-stack dev whose daily routine involves wrestling code into submission and occasionally questioning the meaning of semicolons, I gotta say, the pristine architecture and sheer logic behind NoteRunway is a sight for sore eyes. Notion is powerful, yes, but organizing it usually feels like trying to fold a fitted sheet in a wind tunnel. This 'elite crew' of independent tools isn't just sensible; it's a stroke of genius. Now go forth and absolutely crush that MCP challenge! May your databases be ever clean and your caffeine supply endless.
Oh wow! That's overwhelmingly positive feedback, sir! Thank you so much!
Now please check out my website zlvox.com and share it with your friends or audience :)
The crew framing is the right mental model. Seven specialists with narrow roles beats one generalist trying to do everything. The real architecture decision was knowing where to use direct SDK for speed and MCP for safety, that's the kind of hybrid most people miss when they reach for "one tool to rule them all."
Absolutely! One thing that I've learned during my career is that if one tool does everything, that might not be the most effective tool in the "drawer". Usually the same applies to people in general.
Awesome submission! I really like the '7 tools' framing; it makes the complex workspace management feel much more approachable. The Dependency Graph is a nice touch too; visualizing those orphaned pages is often the first step to a clean workspace. Great work on the Notion MCP integration!
I appreciate the great feedback! This gives me more motivation to make these 7 tools even more polished and precise in the future, and to add a few more features.
The orphaned page detection is the sleeper feature. Most Notion workspaces accumulate dead pages faster than anyone cleans them up - not because people are lazy, but because there's no cost signal. Visualizing the dependency graph and seeing which pages have zero inbound links makes the cleanup obvious. This only works when you have structured access to the workspace graph, which is what MCP gives you.
In the future I might add modifications directly from the graph. It has a lot of potential.
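The zero-inbound-link heuristic discussed above can be sketched in a few lines. This is a minimal illustration, not NoteRunway's actual implementation: the `find_orphans` helper and the sample link map are made up, and in practice the link data would come from the Notion API. It also happens to show the false-positive problem from earlier in the thread, since a root page with no inbound links gets flagged too.

```python
def find_orphans(links: dict[str, list[str]]) -> list[str]:
    """Return page ids with zero inbound links (candidate orphans)."""
    inbound = {page: 0 for page in links}
    for targets in links.values():
        for target in targets:
            if target in inbound:
                inbound[target] += 1
    return sorted(page for page, count in inbound.items() if count == 0)

# Hypothetical workspace: page id -> outbound links.
workspace = {
    "home":      ["projects", "notes"],
    "projects":  ["notes"],
    "notes":     [],
    "old-draft": [],  # nothing links here -> flagged as an orphan
}

print(find_orphans(workspace))  # ['home', 'old-draft']
```

Note that `home` is flagged despite being an intentional root page, which is exactly the kind of false positive the human-in-the-loop approval step catches.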
The Sensitive Data Finder is the feature I didn't know I needed until I read this. The two-phase approach (regex first, then an AI deep scan for natural-language secrets) is the right order. Regex catches the structured patterns fast and cheap. AI catches the "the password is hunter2" style entries that no regex would ever find. Running them sequentially instead of in parallel keeps the cost predictable too.
The hybrid SDK + MCP split is also worth calling out separately. Direct SDK for bulk reads, MCP for all writes: that's not an obvious call to make, and most people would just pick one and stick with it. Using MCP as a safety layer specifically for destructive operations is a genuinely smart architectural boundary.
One question: with the Sensitive Data Finder, does the AI deep scan phase see the full page content, or just the sections that were flagged as suspicious in Phase 1? Sending only the flagged content to the LLM would be cheaper and would also limit how much raw workspace data the model sees. Curious whether that was a deliberate design call.
Thank you! Your analysis is 100% spot on and I'm glad you found it useful.
Regarding the question: the AI does its own scan, and it scans the whole workspace. That's why it's recommended to avoid it on huge workspaces with thousands of pages; it can drain AI credits way too fast. But for small or medium-sized workspaces, it works like a charm. In the future I might add support for selectively scanning pages with AI - sometimes you just know specific pages don't contain any sensitive data and there's no point in wasting AI credits on those.
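The cheap regex pass that Phase 1 of this flow relies on could look roughly like the sketch below. The pattern set and the `phase1_scan` name are illustrative assumptions, not the tool's real rules; the point is that structured secrets match cheaply, while conversational ones slip through to the AI phase.

```python
import re

# Hypothetical Phase 1 patterns for structured secrets.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"\bapi[_-]?key\s*[:=]\s*\S{16,}", re.I),
}

def phase1_scan(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found by the cheap regex pass."""
    hits = []
    for name, pattern in PATTERNS.items():
        hits += [(name, m.group(0)) for m in pattern.finditer(text)]
    return hits

page = "deploy key: AKIAABCDEFGHIJKLMNOP\nremember: the password is hunter2"
print(phase1_scan(page))
# Only the structured AWS-style key is caught; "the password is hunter2"
# is invisible to regex and would need the AI deep-scan phase.
```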
The 'crew of specialists' framing is a nice mental model for multi-agent systems: each agent has a narrow, well-defined role rather than one generalist trying to do everything.
The interesting architecture question is how you handle coordination: when two 'crew members' have conflicting outputs, who arbitrates? In database contexts we've found that having a clear single source of truth (the DB) as the arbiter works well, but in creative/planning workflows it's murkier.
This is a great question. At the moment, each tool is designed to run independently, with its own clearly defined responsibility. That means each one makes decisions based on its own criteria.
The only exception is the semantic ask. However, any destructive action, whether it's an addition, deletion, or modification, requires explicit user approval before being executed. Because of that, conflicts don't really arise in practice.
Also, the process isn't asynchronous; a user approves one change at a time.
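The sequential human-in-the-loop gate described here can be sketched as below. The names (`Proposal`, `review_queue`) are hypothetical; in the real tool the approval callback would be an interactive prompt and `execute` would go through the MCP write path.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    description: str
    execute: Callable[[], None]

def review_queue(proposals: list[Proposal],
                 approve: Callable[[Proposal], bool]) -> int:
    """Apply proposals one at a time; return how many were executed."""
    applied = 0
    for p in proposals:
        if approve(p):  # blocks on the user: one change, one decision
            p.execute()
            applied += 1
    return applied

deleted = []
queue = [
    Proposal("Delete orphaned page 'old-draft'",
             lambda: deleted.append("old-draft")),
    Proposal("Delete empty page 'placeholder'",
             lambda: deleted.append("placeholder")),
]
# Stand-in for the interactive prompt: approve only the first change.
print(review_queue(queue, lambda p: "old-draft" in p.description))  # prints 1
```

Because approvals are synchronous and applied one at a time, two tools can never race each other on the same page, which is why conflicts don't arise in practice.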
The Sensitive Data Finder is something every Notion power user quietly needs: that "I'll move this API key later" moment never comes.
Everybody does it, nobody admits it.
I don't even use Notion, but this looks clean.
The "crew of tools" idea + the security angle is actually smart. Feels like something power users would genuinely need, not just another AI wrapper.
Thanks a lot! Tried my best to make it as clean and intuitive as possible. I'm glad it worked!
Did you develop this using any vibe coding tools? Either way, the UI looks stunning.
I don't vibe code; I use AI as an assistant. But as a back-end engineer, I wouldn't be able to make the UI this good without AI's help.