Analytics (3 entries)
| Campaign | Product | Twitter Views | Twitter Likes | Twitter RTs | Reddit Score | Reddit Comments | LinkedIn Reactions | LinkedIn Views | Total Clicks | Total Trials |
|---|---|---|---|---|---|---|---|---|---|---|
| -- | Reader | 29,766 | 294 | 36 | 18 | 4 | 15 | -- | -- | -- |
| -- | Readwise | 255,800 | 1,100 | 65 | 108 | 16 | -- | -- | -- | -- |
| -- | Reader | 12,893 | 54 | 5 | 41 | 8 | 24 | 2,024 | -- | -- |
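As a quick sanity check on the figures above, a per-row engagement rate can be computed directly. Note that "engagement rate" here is our own working metric, (likes + retweets) / views, not an official campaign KPI:

```python
# Twitter engagement per analytics row, using the figures from the table above.
# (likes + retweets) / views is an illustrative metric, not an official KPI.
rows = [
    # (product, twitter_views, twitter_likes, twitter_rts)
    ("Reader", 29_766, 294, 36),
    ("Readwise", 255_800, 1_100, 65),
    ("Reader", 12_893, 54, 5),
]

for product, views, likes, rts in rows:
    rate = (likes + rts) / views
    print(f"{product}: {rate:.2%} Twitter engagement")
```

On these numbers the first Reader post clears roughly 1.1% engagement, while the other two rows sit around 0.46%.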
Inspiration (98 entries)
Anything you can do in Obsidian you can do from the command line. Obsidian CLI is now available in .
Now you can use AI agents to design directly on the Figma canvas, with our new use_figma MCP tool and skills to teach them. Open beta starts today.
Introducing Cursor Automations to build always-on agents.
This Russian guy found a way to learn anything 10x faster with NotebookLM + Gemini + Obsidian.
Obsidian 1.12 is now available to everyone!
1. install Obsidian 1.12 2. enable CLI 3. now OpenClaw, OpenCode, Claude Code, Codex, or any other agent can use Obsidian
Our biggest update. Tomorrow.
Someone just built a 3D thought map by turning Obsidian embeddings into living networks. It lets you see the shape of your mind.
I have been working on Obsidian Reader for over a year. I didn't want to share it until I felt it was good enough. It's finally there.
Cursor going entirely from ticket to PR!
Stop sleeping on Obsidian + Claude Code = your own JARVIS. Takes 1 hour to build.
BREAKING: We've given Claude direct access to the full options and equities market. Introducing the Unusual Whales MCP Server. It connects any AI assistant to live, structured market data in real time.
I built Feynman, Claude Code for research. I gave it a question and it came back 30 minutes later with a full research report.
Introducing Airport Intelligence! Instant insights into delays at any airport. Summary status of each airport. Trend to see when delays clear. Decoded delay reasons. Dedicated weather monitoring. Basics free at flighty.com/airports
Find us in Apple's brand new iPhone 17 launch video. Twice!
You have 100+ tabs open and your brain is fried. Introducing Dex, your second brain in Chrome that organizes, remembers, and takes action for you. Turn tabs into to-dos, multitask with AI.
Defuddle now returns Youtube transcripts! Paste a YouTube link into defuddle.md to get a markdown transcript.
I'm still processing, but I think I would die for the Obsidian CLI, using this with Claude Code or any agent is fucking insane. Holy shit man.
Defuddle now has a website! This means you can use Defuddle anywhere to get the main content of a page.
Figma MCP server, now with design context anywhere you work: Remote access, Connection with Figma Make.
Introducing Better Icons — The MCP server for searching and retrieving icons. 200,000+ icons. Auto learns your preferences and prioritizes.
be Andy >become an OpenClaw power-user >realizes it forgets things and burn tokens very fast >figures out a solution >builds a ClawHub skill >makes it free, available for everyone on ClawHub
Obsidian Web Clipper 1.0 is now available and adds logic to templates! This adds powerful new tools to convert web pages to markdown.
Introducing Generative UI for MCP Apps. One server. Infinite interfaces. Works in Claude, ChatGPT, VS Code, Cursor and more.
It's so over. My OpenClaw agent just debugged its own infrastructure. I asked my openclaw agent Monica (chief of staff) from telegram to install mem0 locally on my mac mini for memory. She pulled Ollama models, configured the plugin.
Convert your Notion pages and databases to durable, private, local files.
Introducing the Readwise CLI. Anything you've saved in Readwise (highlights, articles, PDFs, books, youtube, newsletters) is now accessible via CLI.
OpenClaw 3.7 JUST DROPPED and it's HUGE. Context Engine Plugin API - Say goodbye to context loss. Plugins like lossless-claw can now preserve full memory without token waste. 30% Faster with LongCat - New official integration.
Introducing Codex to Figma: Roundtrip between code and canvas through the Figma MCP server.
Introducing Flighty: Your past flights are now a heat map! Deep sorting of past flights, Super detailed past flight list view. Plus key fixes.
Introducing Halo — the habit app with an AI coach. It's like James Clear in your pocket.
Notion official MCP server released. Community servers had security risks so this is welcome.
I've used OpenClaw every single day for 50+ days. Through ClawdBot. Through MoltBot. Through the rebrand. Nobody else can tell you what happens after the honeymoon phase.
There's a lot in Airport Intelligence, including extras for the AV geeks. Every alert, every minute, across airports. Weather warnings for every flight. Raw & decoded METARs, TAFs, NOTAMs.
Obsidian CLI orphans command vs other methods. 54x faster than grep. 70,000x cheaper than MCP.
Obsidian just got agent-ready. obsidian-skills teaches AI how to actually work with your vault: write valid Obsidian Markdown, manage your second brain.
NEWS: @Flighty arrives NEXT WEEK! Prepare for unprecedented delay intel.
Manual Obsidian lasts about a week. Then it gets abandoned. The vault becomes a second brain only when an AI agent takes over the routine — sorting, searching, generating files via terminal access.
Obsidian CLI closes one of the oldest feature requests, dating back to May 2020.
Ok, Remotion is INSANE. I installed it on my OpenClaw this morning. I gave my logo, my landing page and the 14 steps I needed for that video. After 5 iterations, I got this. Well... almost. I wasn't happy about the voiceover. So I sent the script that Marc (my OpenClaw assistant) wrote to ElevenLabs.
NEWS: @Flighty v4 is launching NEXT WEEK! Momentary doubts creep in... but I'm 99% sure it's revolutionary.
Say hello to Bear 2! Packed with over 20 new features and a sparkling new design, we are excited to bring you the Bear-iest version of Bear yet!
When practicing extensive reading in English, using the Kindle Word Wise feature lets you read smoothly without looking up many words. Hints are automatically displayed above difficult words.
I want this for reading with AI: Circle what you don't understand and just ask.
BREAKING! Introducing Plus One: A hosted @openclaw that lives in your Slack and comes pre-loaded with @every best tools, skills, and workflows. Set it up in one click.
Your Year-in-Review Passport is ready! We'll go first. Crunching our users' stats, we hit: 22.5M flights, 78M hours in the air, 34.6B miles. JFK <> LHR was our #1 route. Create yours & share.
I built a tool to turn any Chrome tab, X bookmark or URL into a Markdown API for agents like OpenClaw. Add the skill and they can fetch your content on schedule to: summarize articles, draft blog posts from bookmarks, etc.
In the latest version of Keep, you can now turn an entire X thread into markdown by bookmarking any tweet in it.
OpenClaw changed the agentic AI game. People are building insane use cases, major companies are building on it. Bookmark this.
TO DO LIST: * Learn Claude Code * Set up Claude Cowork * Set up OpenClaw * Experiment with agentic second-brain database * Learn basic automation tools (Manus, MCP, Zapier) * Read AI articles * Research AI stocks/ETFs/investment arbitrages
Crabwalk 1.0.11 — Workspace Editor is live. Monitor your OpenClaw agents AND edit their files from the browser. Mobile too. Browse your agent's files in a tree view, full file editor with save and delete, markdown rendering.
1. Install Readwise CLI 2. OpenClaw, Claude Code, Codex, or any coding agent can now: Search your entire readwise library, Read full content of anything you've saved, Tag, create highlights, organize on your behalf.
In the latest version of keep.md you can now get your own personal email inbox: subscribe to email newsletters, confirm email addresses.
With Cloudflare's new markdown for agents, you can pull down websites' pages as markdown into your repo for agent context. npx mdrip <url>. Save on tokens and let your agent auto ref persisted context. Has awesome features with skills so your agent knows how to use the context.
Better text-to-speech is possible. @UnrealSpeech has filled this void.
App for chat with documents, AI agents, multi-user.
New in Reader: Ghostreader v3 on web! Chat with any document, now: Much faster, links to the exact sources in your document, lookup built in.
New in Reader: Ghostreader v3 on web! Chat with any document, now: Much faster, links to the exact sources in your document, preset prompts.
Mega reveal of Flighty 3.0. We've dreamed of this since 2018. And it's even more seamless than you're imagining.
Excited to launch rube.app on ProductHunt today! Rube is the universal MCP server that connects your AI chat (Cursor, Claude, VSCode) to 500+ apps including Slack, Notion, Figma.
We shipped an MCP server! You can now query your Readwise highlights inside of Claude, Cursor and more.
We just shipped a ton of new features: NEW SIDE PANEL (with RSS folders) on web, Text-to-speech on web, Entirely revamped search, Custom Ghostreader, Resurfaced Later docs, Original email styles, Automatic PDF metadata, Notion official API export.
One of the coolest things about the CLI (and MCP) is that every single word of every document in your library is searchable via hybrid (FTS + semantic, @turbopuffer ofc). Allows you to build some really powerful stuff trivially, eg recommendation systems.
Today's a big day for my vibecoding journey. I just submitted my first app (Quran reading app powered with AI) to the iOS app store AND the Replit Mobile Hackathon.
Have OpenClaw make games for you with the Remix skill (live now). Prompt a game in Claude and it ships to iOS, Android, Telegram, + more in under 2 minutes.
we made a @readwise MCP server, but haven't really announced/documented it yet... anyone wanna help test it out?
Obsidian + Claude Code knowledge creation workflow using Readwise + browser clipping. Collecting, cleaning, and creating content.
Reader's AI just got smarter! GPT4o-mini is now the default LLM used in Reader for autosummarizing. Ghostreader can do more.
By popular demand, the Readwise CLI now has a read-only mode! readwise config set readonly true
New for Linear Mobile: Customizable navbar. Rearrange and pin tabs, or add specific projects, initiatives, and views for quick access.
The new Readwise CLI has full access to every word you've ever saved in your Reader library. Semantically search over your documents.
The new Readwise MCP, along with the Readwise CLI (currently in beta), is incredibly useful and powerful. In 2026, a tool that agents can't use is not a good tool.
People keep telling us they're surprised by how good Ramble (our new voice-to-task feature) is. Which is funny, because… same. It keeps doing things that even catch us off guard.
The Readwise CLI/MCP work great on their own, but we've also created a readwise-skills repo that shares some powerful skills to use with them: github.com/readwiseio/readwise-skills
You can learn more about the Readwise CLI readwise.io/cli BTW- looking for an MCP server instead? Coming soon, you can try out a beta version here: readwise.io/mcp
It's quite fun, here's a 2d graph visualization of my recent ~500 highlights and how they're connected. Built with the CLI.
Ghostreader — Reader's AI is now: Way easier to access (two taps on mobile!), Way more powerful & accurate, You're always in control.
Heading out for a long flight and want to research @openclaw architecture, so I am adding many great articles to my @ReadwiseReader for offline reading. Reader is the best on the market.
In the latest version of the Keep.md Chrome extension you can now do a one-time export of all your X bookmarks. Import them to Keep, Obsidian.
Loading tweet...First Class Lookup
Campaign Brief: First Class Lookup
Overview
- Feature name: First Class Lookup
- Product: reader
- Owner: @tristan
- Launch date: ~2026-04-01 (targeting next week)
- Status: inspiration
What It Is
A first-class lookup feature in Reader that lets you select any word, term, or short phrase while reading and instantly get rich definitions. It surfaces dictionary and encyclopedia definitions plus an "X-ray" section that shows how the term is used within the document itself, pulling the author's own definition or, for a character name, showing who they are and their relevance to the plot (without spoilers). The functionality existed inside Ghostreader, but this makes it a polished, one-action experience: long-press a word and get everything you need.
Why We Built It
When you're reading and encounter a term or character name you can't quite place, the flow breaks: you either context-switch to Google, scroll back through the book, or push through confused. First Class Lookup lets you long-press the term and get your memory refreshed instantly, with dictionary definitions for general terms and the author's own usage for document-specific terms and characters. It's a small but genuine way AI enhances reading without getting in the way.
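To make the shape of the feature concrete, here is a purely hypothetical sketch of what a lookup result could contain. The field names and structure below are our own illustration, not Reader's actual data model:

```python
# Hypothetical lookup result for a long-press on "antifragile".
# All field names here are illustrative only, not Reader's actual schema.
lookup_result = {
    "term": "antifragile",
    "dictionary": [
        # General-purpose definition, as a dictionary/encyclopedia would give it.
        "Antifragile: describes things that gain from disorder and stress.",
    ],
    "xray": {
        # Document-grounded section: how the author uses the term.
        "author_definition": "Quoted from the passage where the author first defines the term.",
        "reference_count": 12,   # hypothetical count of in-document references
        "spoiler_safe": True,    # character/plot lookups avoid spoilers
    },
}

print(lookup_result["term"])
```

The point of the sketch is the two-part shape: a generic dictionary answer plus an X-ray answer grounded in the document the user is actually reading.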
User Demand
- Signal strength: low-medium
- Evidence: Less concrete demand, more delight-driven. The underlying capability has existed in Ghostreader and people really like it. Worth checking Canny and HelpScout for specific requests.
- Who wants this most? Readers of complex non-fiction and long-form fiction — anyone who encounters unfamiliar terms, author-coined concepts, or large character casts.
Success
- Primary goal: retention
- What does success look like? Existing Reader users discover and love the feature, reinforcing that Reader is the smartest reading app. High usage rate among active readers. Creates genuine "wow" moments that users share organically.
- Secondary goals: Acquisition — if the launch captures attention, it positions Reader as the AI-powered reading app and draws in new users who want that superpower. Owns the "AI + reading" narrative.
The "Wow" Moment
Long press a character's name or a complex term like "antifragile" in Taleb's book and instantly get the author's own definition plus full context — no searching, no scrolling back.
Target Audience
Active Reader users, especially those reading complex non-fiction (terms, concepts) and long fiction (characters, plot references). Secondary: potential new users excited by AI-enhanced reading.
Launch Constraints
Feature expected to land within the next week. Reader only (Bookwise support exists but is not public). No hard deadline — announce soon after the feature ships.
Messaging Angle
Position as a genuine reading superpower — small, delightful, and useful. Not "AI reads for you" but "AI makes you a better reader." The X-ray feature is the differentiator: it's not just a dictionary, it understands your document. Show don't tell — the demo of long-pressing a character name or author-defined term is the pitch.
Raw Notes
this feature is our first class lookup feature, which allows you to select any word, term, or short phrase while reading and instantly get really nice definitions for it. It has a couple of different types of definitions—it'll give you encyclopedia or dictionary definitions, but it also has a really cool section called X-ray. X-ray looks up how the term is used in the document itself. It could be a character's name or a term the author has defined for the first time in the document. It'll pull all the references the author used and where they defined that term, then give you the author's definition. In the case of a character's name, it shows who the character is, their background, and their relevance to the plot without spoilers. We're announcing this for Reader because we've had this functionality buried inside Ghostreader for a while. This just makes it really fun and easy to use, and it's a small but meaningful way that AI can enhance the reading experience. It's delightful and gives you a genuine reading superpower. Concretely, the problem is that when you're reading, you see a term or name and can't quite recall what it is. This feature allows you to long press the word and be refreshed on it in just one action. It's like having a little superpower while reading. I think there's been less concrete demand for it and more just people really liking it. We could check Canny and Help Scout reports to see if anybody's requested it. I would say this is mostly about retention, but I do really hope that if we launch it well enough and capture the zeitgeist and own our area of AI intersected with reading, we can also get users hyped about it and acquire new users who think, "Wow, I want to use a reading app that is that powerful." The wow moment will be selecting a character's name or a complex non-fiction definition—like how Taleb defines the term "antifragile" in his book.
Then you long press that word or character's name and immediately get the definition, which is pretty powerful. There's no hard deadline. I'm hoping this feature will land in the next week, and then we can announce it pretty soon after. It works in Bookwise too but Bookwise is not public yet so it's not part of this campaign.
Inspiration (11 entries)
Our biggest update. Tomorrow.
Introducing Airport Intelligence! Instant insights into delays at any airport. Summary status of each airport. Trend to see when delays clear. Decoded delay reasons. Dedicated weather monitoring. Basics free at flighty.com/airports
There's a lot in Airport Intelligence, including extras for the AV geeks. Every alert, every minute, across airports. Weather warnings for every flight. Raw & decoded METARs, TAFs, NOTAMs.
When practicing extensive reading in English, using the Kindle Word Wise feature lets you read smoothly without looking up many words. Hints are automatically displayed above difficult words.
Your Year-in-Review Passport is ready! We'll go first. Crunching our users' stats, we hit: 22.5M flights, 78M hours in the air, 34.6B miles. JFK <> LHR was our #1 route. Create yours & share.
New in Reader: Ghostreader v3 on web! Chat with any document, now: Much faster, links to the exact sources in your document, lookup built in.
We just shipped a ton of new features: NEW SIDE PANEL (with RSS folders) on web, Text-to-speech on web, Entirely revamped search, Custom Ghostreader, Resurfaced Later docs, Original email styles, Automatic PDF metadata, Notion official API export.
Today's a big day for my vibecoding journey. I just submitted my first app (Quran reading app powered with AI) to the iOS app store AND the Replit Mobile Hackathon.
Reader's AI just got smarter! GPT4o-mini is now the default LLM used in Reader for autosummarizing. Ghostreader can do more.
New for Linear Mobile: Customizable navbar. Rearrange and pin tabs, or add specific projects, initiatives, and views for quick access.
People keep telling us they're surprised by how good Ramble (our new voice-to-task feature) is. Which is funny, because… same. It keeps doing things that even catch us off guard.
Loading tweet...No video script yet — run /direct-movie to generate one.
No assets yet — upload your video to RAMP/reader/first-class-lookup.mp4 in Google Drive
No surfaces yet. Run /launch-surfaces to generate a checklist.
No approved copy yet — run the pipeline to generate copy.
Run /analytics to add post URL
Ghostreader Mobile v3
Campaign Brief: Ghostreader Mobile v3
Overview
- Feature name: Ghostreader Mobile v3
- Product: reader
- Launch date: ~late April 2026 (next month, blocked on React Native migration)
- Status: inspiration
What It Is
A full chat-with-your-documents experience inside the Reader iOS and Android apps. While reading any document, you can open a chat interface and talk to an AI assistant that knows exactly where you are in the document, what's on your screen, and how far along you are. Ask questions, get clarifications, request recaps of previous sections, explore characters, or dive deeper into topics, all without leaving the reading experience. Ghostreader Chat v3 already shipped on web (where a sidebar makes it easy), but mobile required careful UX design to make chatting while reading feel seamless on a small screen.
Why We Built It
Readers constantly want to engage more deeply with their content while reading — ask questions, get unstuck, explore ideas. ChatGPT can talk about a book in theory, but it doesn't know where you are or what you're looking at. Reader's mobile chat assistant has full document context and reading position awareness. The hard problem isn't AI chat — it's making it work beautifully on a small screen alongside the reading experience. As far as we know, nobody has nailed this UX on mobile yet.
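As an illustration of the context advantage described above, the payload below sketches the kind of reading-position context the mobile assistant would have and a generic chatbot lacks. Every field name here is hypothetical, not Reader's actual schema:

```python
# Hypothetical context a single chat turn might carry alongside the question.
# Field names are our own illustration, not Reader's actual schema.
chat_turn = {
    "document_id": "doc_abc123",   # which document is open (made-up ID)
    "reading_progress": 0.42,      # fraction of the document completed
    "visible_range": {"start": 18_400, "end": 19_150},  # on-screen character offsets
    "question": "Recap what happened in the last chapter.",
}

def has_position_context(turn: dict) -> bool:
    """Check the turn carries usable position context, not just a bare question."""
    return (
        0.0 <= turn["reading_progress"] <= 1.0
        and turn["visible_range"]["start"] < turn["visible_range"]["end"]
    )

print(has_position_context(chat_turn))  # True
```

The design point is that the assistant answers relative to where the reader is, which is what enables spoiler-safe recaps and "jump to the passage" links.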
User Demand
- Signal strength: medium-high
- Evidence: Multiple user requests for mobile Ghostreader chat. Broader push for Readwise to be AI-first. The web version (v3) already shipped and validated the concept; mobile is the natural next step.
- Who wants this most? Mobile-first readers who consume long-form content (articles, books, PDFs) on their phones and want AI assistance in the moment, not after the fact.
Success
- Primary goal: acquisition
- What does success look like? The launch video and social campaign positions Reader as the first app to truly nail AI-assisted reading on mobile. Drives new signups from people excited about the AI + reading intersection. Creates a compelling demo moment that influencers and tech press want to share.
- Secondary goals: Retention — existing mobile Reader users get a powerful new reason to stay engaged. Conversion — the AI chat experience is a premium feature that makes the subscription feel worth it.
The "Wow" Moment
Chatting with the assistant while reading and getting a response with a clickable link that jumps you directly to the passage it's referencing — showing that the AI knows exactly where you are in the document.
Target Audience
Mobile readers who consume long-form content and want AI to enhance their reading. Secondary: AI/tech enthusiasts who'll appreciate the UX innovation of making chat-while-reading work on a small screen.
Launch Constraints
Blocked on the React Native migration (getting close). Will need significant QA and testing post-migration. Targeting next month. Ghostreader v3 on web is already live; this is the mobile counterpart.
Messaging Angle
"Chat with anything you read — now on mobile." Emphasize the UX breakthrough: everyone has chat, everyone has reading apps, but nobody has made them work together seamlessly on a phone. The assistant knows your exact reading position and document context — it's not a generic chatbot, it's a reading companion. Show the experience in motion — this needs video, not screenshots.
Raw Notes
Okay, the next feature is Ghost Reader Mobile, or Ghost Reader V3 Mobile. It's basically the ability to chat with your documents inside our mobile iOS and Android apps while reading in a chat interface. It's a really fluid, fun, seamless chatting-while-reading experience, which nobody has really figured out a great UX for yet. It's really hard because the screen real estate is so small on mobile. The pain it solves is that while you're reading, users often want to unlock the value of AI. They want to ask questions, get clarification when they're lost, expand on topics, ask for recaps on previous sections, request summaries, ask questions about the content, or ask who characters are. They want to engage more deeply with their content while reading, not in a separate space on mobile. ChatGPT is great for talking about a book in theory, but in Reader Mobile now, you can chat with an assistant that knows exactly where you are in the document, what you can see while reading, and how far along you are. We already shipped Ghost Reader Chat V3 on web, but on web it's easy mode because you have so much screen real estate. You can just pop a sidebar next to the content and chat while reading. It's much harder on mobile and required a lot of careful design and iteration to make it seamless. We've had quite a few users who want this, and there's a broader push for Readwise to be AI-first. This feature is primarily about acquisition and helping us get on the AI hype wave. We're pioneering a way to engage more deeply while reading that, as far as we know, no one has really done. There are really good standalone chat experiences and really good standalone reading experiences, and people have combined them on big screens like web. But on mobile, getting the UX, feel, context, and navigation just right is something we think we're the first to do. The wow moment will be a video showing chatting while reading. 
You'll probably get a link to somewhere in the document that you can click through to take you to the passage the assistant is referencing. Hopefully we can also convey that the assistant in the chat knows exactly where you are in the document. That's really powerful. For timing, we're hoping to launch this in the next month or so. It's currently blocked on the React Native migration, but that's getting pretty close. There will be a lot of QA and testing of the feature. Definitely in the next month.
Inspiration (12 entries)
This Russian guy found a way to learn anything 10x faster with NotebookLM + Gemini + Obsidian.
I have been working on Obsidian Reader for over a year. I didn't want to share it until I felt it was good enough. It's finally there.
Defuddle now returns Youtube transcripts! Paste a YouTube link into defuddle.md to get a markdown transcript.
Defuddle now has a website! This means you can use Defuddle anywhere to get the main content of a page.
NEWS: @Flighty v4 is launching NEXT WEEK! Momentary doubts creep in... but I'm 99% sure it's revolutionary.
I want this for reading with AI: Circle what you don't understand and just ask.
In the latest version of keep.md you can now get your own personal email inbox: subscribe to email newsletters, confirm email addresses.
App for chat with documents, AI agents, multi-user.
New in Reader: Ghostreader v3 on web! Chat with any document, now: Much faster, links to the exact sources in your document, preset prompts.
We just shipped a ton of new features: NEW SIDE PANEL, Text-to-speech on web, Entirely revamped search, Custom Ghostreader, Resurfaced Later docs, Original email styles, Automatic PDF metadata, Notion official API export.
Ghostreader — Reader's AI is now: Way easier to access (two taps on mobile!), Way more powerful & accurate, You're always in control.
In the latest version of the Keep.md Chrome extension you can now do a one-time export of all your X bookmarks. Import them to Keep, Obsidian.
No video script yet — run /direct-movie to generate one.
No assets yet — upload your video to RAMP/reader/ghost-reader-mobile.mp4 in Google Drive
No surfaces yet. Run /launch-surfaces to generate a checklist.
No approved copy yet — run the pipeline to generate copy.
Run /analytics to add post URL
OpenClaw Integration
Campaign Brief: OpenClaw Integration
Overview
- Feature name: OpenClaw Integration
- Product: reader, readwise
- Owner: @christina
- Launch date: 2026-03-27
- Status: complete
What It Is
A dedicated launch packaging the Readwise CLI (launched last week) with a new ClawHub skill (possibly multiple skills), plus a landing page and guide oriented specifically at OpenClaw users. The skill connects OpenClaw to your Readwise/Reader library, giving your AI assistant instant access to everything you've saved — articles, tweets, PDFs, books, podcasts, newsletters, YouTube videos — in clean markdown format. Think of it as giving your OpenClaw a second brain.
Why We Built It
OpenClaw and similar autonomous AI agent assistants need rich context about the user to be useful. Right now, giving your assistant that context is a painful, manual process. Readwise + Reader already solves the "save anything from anywhere" problem — browser extension, iOS app, bookmark — so plugging that into OpenClaw makes the context problem disappear. You save something, your Claw knows about it in seconds.
User Demand
- Signal strength: high
- Evidence:
- Multiple Twitter DMs and replies asking for this. Reddit threads discussing it.
- HelpScout signals (3 direct conversations):
- Antoine Buteau — CLI beta tester who proactively built an OpenClaw skill on ClawHub (clawhub.ai/TristanH/readwise-official) before we even launched one officially. (2026-03-19)
- Grant Gochnauer — CLI beta tester who replied saying "I was using it to build an OpenClaw integration skill!" (2026-03-18)
- Mike Dove — New Reader convert (ex-Pocket user) who asked "What's the preferred way to integrate with my agents (Claude Code/OpenClaw) as I see some very useful ways to quick find something I saved and to pull it up!" (2026-03-24)
- Reddit thread on r/readwise where Tristan explained CLI vs MCP, specifically calling out OpenClaw as a primary CLI use case.
- Wisereads newsletter featured an OpenClaw founder interview video that was among the most highlighted content.
- Who wants this most? AI power users and early adopters who are already using OpenClaw and want to supercharge their assistant with their reading library. Tech-forward audience comfortable with CLI tools and agent workflows. Some are already building their own integrations — we need to give them the official one.
Success
- Primary goal: acquisition
- Gets OpenClaw's massive user base (hundreds of thousands to millions) to become Reader/Readwise users.
- What does success look like? Meaningful sign-up lift from OpenClaw users. The integration becomes a go-to recommendation in OpenClaw community spaces (Reddit, Twitter, ClawHub).
- Secondary goals: Positions Readwise as the default "knowledge layer" for AI assistants broadly.
The "Wow" Moment
Save a tweet, article, or YouTube video to Reader — and within seconds your OpenClaw can discuss it, reference it, and work through it with you in clean markdown.
Target Audience
Primary: OpenClaw users looking to give their assistant better context and a personal knowledge base.
Secondary: AI-curious readers who haven't tried OpenClaw yet but are intrigued by the "second brain for your AI" angle.
Launch Constraints
- Launching 2026-03-27 (tomorrow morning) for hackathon demos
- Builds on the Readwise CLI launched last week — this is packaging + positioning, not net-new backend work
- Will include ClawHub skill(s), a landing page, and a dedicated guide for OpenClaw users
Messaging Angle
"Give your OpenClaw a second brain." The pitch is about effortless context — save anything from anywhere, your AI knows about it instantly. Lean into the speed of the save-to-access loop and the breadth of content types (tweets, articles, books, PDFs, podcasts, YouTube, newsletters). Avoid making it feel like "just another integration" — this is a new category of use case for Readwise.
Raw Notes
All right, here's my speech-to-text ramble on OpenClaw. Basically, this is a dedicated launch for the integration of Readwise with OpenClaw. Technically, you can already use our new Readwise CLI, which launched last week with OpenClaw. This launch is basically just packaging up that CLI with a new ClawHub skill, plus a lot of positioning, documentation, and orientation for OpenClaw users to explain why Readwise is useful for them. This will basically mean the skill—maybe multiple skills—plus a dedicated landing page and guide for OpenClaw users. The product is both Readwise and Reader, just like our CLI that we launched last week. The idea here is that the CLI works really well with tools like OpenClaw—these autonomous AI agent assistants. Basically, the pain point for these users is that you need to give your assistant context on you and what you're interested in, and that can be a really painful and manual process. We make that process super simple and painless. So basically, if you're browsing and want your Claw to know about something—if you see a YouTube video, a piece of content you want to read, or content you've already read from tweets to PDFs to books to articles to podcasts to newsletters to YouTube—you can just save that content to Reader or Readwise with one click using our browser extension, our iOS app, or by hitting bookmark. That'll immediately be saved to your Reader account, which will be connected to your OpenClaw, and your OpenClaw will have immediate access to it in clean markdown format. So it's kind of like giving your OpenClaw assistant a second brain of knowledge content. I'm not sure how much demand we've had for it, but there's definitely been a lot of people asking me for it on Twitter, Twitter DMs, and Reddit. There have been some Reddit threads about it.
It would actually be an interesting exercise as part of this setup step for you, the ramp assistant, to go use the Context One Help Scout to search for OpenClaw and ClawBot—what it used to be called—and find as many examples of people asking for it as possible from Help Scout. I would say this feature is definitely about acquisition. It's just such a valuable tool for people in this new AI world who are using OpenClaw. There's something inherently valuable about giving your AI the second brain of all the things you've ever read and want to read, able to quickly reference it and work through your library with you. OpenClaw is a huge trend. Hundreds of thousands, if not millions, of people are using it. We basically want those people to become Reader users. So it's definitely all acquisition. The wow moment, I think, is basically a demo showing how easy it is to save stuff—either bookmark tweets, save with our browser extension, YouTube videos, articles, whatever—or save from the mobile app with our native share sheet. Then immediately your Claude knows about it within seconds and can either prompt you with content you saved or you can discuss with it. You can find stuff you saved, and your OpenClaw will have it in clean markdown. So that wow moment is kind of around that save-to-access-with-your-assistant loop. In terms of launch timing, we want to launch it tomorrow morning for the hackathon demos.
10 entries
be Andy >become an OpenClaw power-user >realizes it forgets things and burn tokens very fast >figures out a solution >builds a ClawHub skill >makes it free, available for everyone on ClawHub
Obsidian Web Clipper 1.0 is now available and adds logic to templates! This adds powerful new tools to convert web pages to markdown.
I've used OpenClaw every single day for 50+ days. Through ClawdBot. Through MoltBot. Through the rebrand. Nobody else can tell you what happens after the honeymoon phase.
BREAKING! Introducing Plus One: A hosted @openclaw that lives in your Slack and comes pre-loaded with @every best tools, skills, and workflows. Set it up in one click.
I built a tool to turn any Chrome tab, X bookmark or URL into a Markdown API for agents like OpenClaw. Add the skill and they can fetch your content on schedule to: summarize articles, draft blog posts from bookmarks, etc.
OpenClaw changed the agentic AI game. People are building insane use cases, major companies are building on it. Bookmark this.
In the latest version of keep.md you can now get your own personal email inbox: subscribe to email newsletters, confirm email addresses.
Have OpenClaw make games for you with the Remix skill (live now). Prompt a game in Claude and it ships to iOS, Android, Telegram, + more in under 2 minutes.
Heading out for a long flight and want to research @openclaw architecture, so I am adding many great articles to my @ReadwiseReader for offline reading. Reader is the best on the market.
In the latest version of the Keep.md Chrome extension you can now do a one-time export of all your X bookmarks. Import them to Keep, Obsidian.
Video Script — OpenClaw Integration
Format Inspiration
- Keep.md (@iannuttall) is the direct comp: "bookmarks for your agent" at $10/mo for 500 links. We do everything they do and dramatically more (full document parsing, highlights, annotations, PDFs, EPUBs, podcasts, newsletters, tweets, YouTube). This video should make that obvious without saying it.
- Top ClawHub skill launches use short demo videos with the product in action (@EXM7777: 2K likes/340K views, @velvet_shark: 872 likes/61K views). Screen recordings, no voiceover, punchy pacing.
- Our CLI launch video (1.1K likes, 256K views) was 1:18 with beats. This video is shorter and punchier since it builds on the CLI launch, not introducing from scratch.
- @danshipper's Plus One launch (530 likes, 150K views) used a similar "pre-loaded tools" pitch with a demo video. Good format reference for an OpenClaw product announcement.
- Twitter autoplay is muted. The video must work without sound. All text goes on gradient title cards, not overlaid on screen recordings (per brand voice).
Concept
Two-act structure. The point: save stuff the way you already do, your agent instantly knows about it. The implicit message: Reader is the best "second brain for your agent" and it's not even close. Full semantic search over every word of every document, not just bookmarks.
Narrative Arc
- Title card (2s) — "Give your OpenClaw a second brain"
- Act 1: Save from anywhere (5-7s) — Three rapid-fire saves. Browser extension, tweet bookmark, mobile share sheet. Fast, casual, this is stuff people already do.
- Act 2: Your agent already knows (8-10s) — Cut to terminal. Agent instantly finds and summarizes all three items. Optional follow-up recommendation.
- End card (2s) — Install command
Tagline
Give your OpenClaw a second brain.
Product
reader, readwise
Music
Minimal, lo-fi electronic. Calm confidence. Low volume background texture. Subtle tempo bump on the Act 1→2 transition to signal the "magic moment."
Audio
- Voiceover: no
- Strip all diegetic audio (typing, clicks). Music only.
Shots
Shot 1 — Title Card
- Type: title-card
- Duration: 2s
- Speed: 1x
- Visual: Readwise product gradient background (dark). Animated typed text: "Give your OpenClaw a second brain"
- Why this works: Sets the "second brain" frame immediately. Agent builders who know OpenClaw will lean in. The claw emoji can optionally appear.
- Transition: hard cut
Shot 2 — Save from Browser Extension (Act 1a)
- Type: screen-recording
- Duration: 2s
- Speed: 2x
- Platform: web (desktop Chrome)
- Window setup: Full browser window, dark mode preferred. A meaty article open (something the audience would care about, e.g. a technical post about AI agents, LLM architecture, or a long-form essay). Bookmarks bar hidden, no distracting tabs.
- Starting state: Article page loaded, Reader browser extension icon visible in toolbar
- Action sequence:
1. Click the Reader browser extension icon
2. The save confirmation appears (the Reader slide-in panel or toast)
- Cursor movement: Smooth, direct click on the extension icon. No hunting.
- What the viewer should notice: One click. Saved. The article title is visible so it can be recognized in Act 2.
- What to have ready: A real article pre-loaded. Pick something with a recognizable title that will look good when the agent summarizes it.
- Transition: hard cut (fast, no pause between saves)
Shot 3 — Save Tweet via Bookmark (Act 1b)
- Type: screen-recording
- Duration: 2s
- Speed: 2x
- Platform: web (x.com in Chrome)
- Window setup: Same browser, navigated to a tweet. Pick a tweet with real content (a thread, insight, or something worth saving). Dark mode Twitter.
- Starting state: Tweet visible on screen
- Action sequence:
1. Click the bookmark icon on the tweet
2. Bookmark confirmation appears
- Cursor movement: Quick, deliberate tap on the bookmark icon.
- What the viewer should notice: The tweet content is visible (title/text) so it's recognizable in Act 2 when the agent references it.
- What to have ready: A real tweet pre-loaded. Ideally something from a well-known account so the viewer registers it. Must have Readwise X bookmark integration enabled so it saves to Reader.
- Transition: hard cut
Shot 4 — Save from Mobile (Act 1c)
- Type: screen-recording
- Duration: 2s
- Speed: 2x
- Platform: mobile (iPhone)
- Window setup: A YouTube video, PDF, or article open in Safari/app. Share sheet about to be triggered.
- Starting state: Content visible on screen
- Action sequence:
1. Tap share button
2. Tap "Save to Reader" in the share sheet
3. Save confirmation
- Cursor movement: Quick taps, no hesitation.
- What the viewer should notice: Different content type from the first two saves (ideally a YouTube video for variety). This is the third "save from anywhere" proof point.
- What to have ready: A YouTube video or PDF open on the phone. Reader app installed with share sheet configured.
- Cleanup: Hide any personal notifications. Do Not Disturb mode on.
- Transition: hard cut
Shot 5 — Title Card: "Your claw already knows" (Act 1→2 bridge)
- Type: title-card
- Duration: 1.5s
- Speed: 1x
- Visual: Same gradient background. Animated typed text: "Your claw already knows"
- Why this works: This is the magic moment. The transition from "save" to "agent knows" is the entire pitch. The title card gives the viewer a beat to process before the payoff.
- Transition: hard cut to terminal
Shot 6 — Agent Finds Everything (Act 2, Payoff) — SLOW MOMENT
- Type: screen-recording
- Duration: 6-8s
- Speed: Start at 2x while the agent is working/searching, drop to 1x when results appear
- Platform: terminal (iTerm2 or similar)
- Window setup: Dark terminal, large readable font (16pt+), clean prompt. OpenClaw, Claude Code, or similar agent running. No other windows/tabs visible.
- Starting state: Agent prompt ready for input
- Action sequence:
1. Type (or paste, pre-filled): "What did I just save? Give me a quick summary of each."
2. Agent begins processing (speed-ramp through this at 2-4x)
3. Agent outputs a list of all 3 items with titles and short summaries
- What the viewer should notice: The agent lists all three items that were just saved. The titles visibly match what was saved in Act 1. The tweet content is fully expanded (not just a link). The YouTube/PDF content has a real summary. This is the money shot. The viewer sees: save anything, your agent knows it all, in clean markdown.
- Pacing note: Let the results fully render at 1x. Do not rush. The viewer needs time to scan the three items and recognize them from Act 1. This is the payoff.
- What to have ready: The three items must actually be saved to Reader/Readwise before recording. The agent (with Readwise CLI installed) must be able to find them via search. Do a dry run to confirm the agent output looks good and includes recognizable titles.
- Typing: Pre-type or paste the prompt. Natural speed is fine since it's short.
- Wait points: Pause 0.5s after the results finish rendering so the viewer can read.
- Transition: hard cut to end card
Shot 7 — End Card
- Type: title-card
- Duration: 2s
- Speed: 1x
- Visual: Same gradient background. Two lines, animated typed text:
```
npx clawhub install readwise-official
readwise.io/cli
```
- Why this works: Agent builders want an install command, not a marketing URL. Give them both. The ClawHub install command is the primary CTA since this targets OpenClaw users specifically.
Recording Prep Checklist
- [ ] Save a real, meaty article via the Reader browser extension (pick something with a distinctive, recognizable title)
- [ ] Save a real tweet via X bookmark (pick a tweet from a well-known account with substantive content, not a one-liner)
- [ ] Save a YouTube video (or PDF) via mobile share sheet to Reader
- [ ] Wait for all three to sync to Readwise/Reader
- [ ] Confirm the Readwise CLI can find all three items: run readwise search or test the agent query
- [ ] Do a dry run of the agent prompt "What did I just save? Give me a quick summary of each." and verify all three show up with good summaries
- [ ] Terminal setup: dark background, large font (16pt+), clean prompt, no clutter
- [ ] Desktop: hide bookmarks bar, notifications, dock. Clean browser with only the relevant tab
- [ ] Mobile: Do Not Disturb on, hide notifications, Reader share sheet ready
- [ ] Pre-approve a music track (lo-fi/ambient, no vocals)
- [ ] Decide: OpenClaw terminal, Claude Code, or generic agent prompt? (OpenClaw branding may resonate more with the target audience)
What Tristan Provides to Christina
- Raw screen recordings of each shot (desktop saves, mobile save, agent terminal)
- This spec doc
- Music preference
What Christina Delivers
- Edited video with title cards on gradient backgrounds, music, and pacing
- Draft for review before final export
- Final export: MP4 (H.264), 15-20s target, works without sound
Surface Changes 0/4
New: Readwise for OpenClaw.
Save anything to Reader (one click from browser or phone): articles, tweets, books, youtube, podcasts, newsletters, and more.
Your claw now has instant access to your full library as markdown. To help you search and stay on top of it all.
Available on ClawHub (link in next tweet).
### Reply tweet (from @readwise)
Install: npx clawhub install readwise-official
Learn more: readwise.io/openclaw
### Founder retweet (from @homsiT)
We just shipped an official Readwise skill for OpenClaw.
1. Save stuff to Reader the way you already do (browser extension, mobile app, bookmark a tweet)
2. Your claw instantly has it. Full text, not just links.
Semantic search across every word of every document you've ever saved. Highlights, annotations, full content in markdown.
### Founder follow-up (from @homsiT)
Some things your claw can do with your Readwise library:
* Search across everything you've ever read
* Read the full content of any article, PDF, book, tweet, or YouTube transcript
* Create highlights and annotations on your behalf
* Triage your Reader inbox
* Quiz you on what you've read
readwise.io/openclaw
If you use OpenClaw (or any AI agent), you know the hardest part: giving your assistant real context about what you care about.
We just launched a ClawHub skill that connects your Readwise and Reader library directly to your claw. Anything you save (articles, PDFs, tweets, YouTube, podcasts, newsletters, books) is instantly searchable and readable by your agent in clean markdown.
No manual context loading. No copy-pasting. Save something with one click, your claw knows about it in seconds.
This works because Reader already handles the hard part: parsing any content type into clean, structured text. Your agent gets the same quality you get when you read in Reader.
Install: npx clawhub install readwise-official
Learn more: readwise.io/openclaw
**Body:**
Hey all, we just shipped an official Readwise skill for ClawHub.
If you're using OpenClaw (or Claude Code, Codex, etc), your agent can now access your entire Readwise and Reader library. That means everything you've ever saved or read: articles, PDFs, books, tweets, podcasts, newsletters, YouTube videos.
What your claw can do with it:
* Search your entire library (full-text + semantic search across every word of every document)
* Read the full content of anything in clean markdown
* Create highlights and annotations
* Tag and organize your library
* Triage your Reader inbox
* Quiz you on what you've read
The idea is pretty simple: you save stuff the way you already do (browser extension, mobile share sheet, bookmark a tweet), and your agent instantly has it. No extra steps.
We built this on the Readwise CLI we launched last week, packaged as a ClawHub skill for easy install:
* ClawHub: npx clawhub install readwise-official
* Landing page: readwise.io/openclaw
* CLI (for non-OpenClaw agents): readwise.io/cli
Would love to hear what workflows you build with it.
Readwise 2.0
Campaign Brief: Readwise 2.0
Overview
- Feature name: Readwise 2.0
- Product: readwise
- Launch date: mid-April 2026 (before end of April)
- Status: inspiration
What It Is
A complete redesign and rebuild of the Readwise mobile app on both iOS and Android. Every surface has been revamped — colors, fonts, animations, performance, and functionality. The app now matches the design quality Readwise deserves. Beyond the visual overhaul, there's real new functionality: completely revamped search that's faster and better at finding things, plus new ways to save highlights via camera, text input, and voice (none of which existed before). It's the same app people love, but it finally looks and feels like a modern, premium product.
Why We Built It
The original Readwise app was built in 2019 and it shows. The UI is dated, animations are janky, and the overall feel is unprofessional for a product people use every day. Tens of thousands of people pay for Readwise and thousands use the app daily with long streaks — they deserve better. The dated look and feel has been a friction point for new users evaluating the product and a source of frustration for existing users who love the functionality but cringe at the presentation.
User Demand
- Signal strength: high
- Evidence: Consistent complaints on Reddit and Twitter about the app feeling dated and janky. Regular support feedback. Tens of thousands of paying users and thousands of daily active users with long streaks — this affects a huge portion of the user base.
- Who wants this most? Daily Readwise users who do the daily review religiously, and new users evaluating whether to pay for Readwise. People with taste for good mobile design who've been frustrated by the gap between Readwise's functionality and its presentation.
Success
- Primary goal: conversion
- What does success look like? New users who sign up are significantly more likely to convert to paid when they experience the redesigned app. The app no longer feels like a barrier during onboarding — it feels like a selling point. Secondary: existing users feel renewed excitement about the product and share it organically.
- Secondary goals: Retention — daily users feel like Readwise is actively improving and worth continuing to pay for. Acquisition — a splashy launch video and social campaign drives awareness and new signups.
The "Wow" Moment
Seeing the redesigned home screen and daily review in motion — smooth animations, polished visuals, the kind of app you want to show someone. Think Flighty-level design quality applied to your highlight review.
Target Audience
Existing Readwise users (tens of thousands of paying customers, thousands daily). Secondary: potential new users who've heard of Readwise but haven't converted, especially design-conscious people who were put off by the old app.
Launch Constraints
Shipping on both iOS and Android simultaneously. Targeting mid-April, must be before end of April. Will be included in the next Reader public beta update (first since December). This is a big campaign — not a fast follow.
Messaging Angle
"Readwise, redesigned from the ground up." Lead with the visual transformation — show don't tell. A sexy screen-share video (Flighty as the benchmark) is the centerpiece. Pair the design story with the new functionality (search, camera/voice/text highlight capture) so it's not just a coat of paint. Tone: confident, proud — this is a major milestone. Avoid underselling it as "just a redesign."
Raw Notes
Okay, the next project is a big one. We're calling it internally Readwise 2.0. It's basically a full redesign, modernization, and rebuilding of the original Readwise mobile app on both iOS and Android. This is pretty different from the other launches we've done recently, but we do want this process to be generalizable. The Readwise mobile app was created in 2019. It has a very basic, hacky-looking UI with janky performance, animations, and most of all, dated design that felt unprofessional. We've completely redesigned it to match the design standards that Readwise deserves. The whole app—from coloring and fonts to animations and functionality—has been completely revamped. The pain it solves is that while people love the UX and functionality of the Readwise app, the look and feel has been completely dated. That extends to some of the functionality too. For an app you use every day, the Readwise app just feels very dated and janky. We've had a lot of demand from users who have lamented this. Some people don't have a taste for mobile apps and just like the functionality, but a lot of people get really annoyed at how janky the Readwise mobile app feels. We definitely get hate about this on Reddit and Twitter requests. Tens of thousands of people pay us just for the original Readwise app. Thousands use it every day with long streaks doing the daily review. I would say this feature is primarily about retention and conversion. When a new person signs up, they're much more likely to convert with this new, cool-feeling app. I'd say conversion first, retention second, and acquisition third. Hopefully we can make a splash with this campaign and drive new users. The people who are retained are probably pretty happy even with the janky UI, but it's definitely more likely to keep them retained and make them feel like we're improving the app. Most of all, I think new users who hear about Readwise will be much more likely to become paying customers with this new design.
The wow moment for social media could be a sexy screen share video. Flighty is a benchmark app that does this really well. I think the best moment is seeing the home screen of the app in a cool frame in a video with music, or seeing functionality like the daily review with cool graphics at the end. One other big part of this launch is the new functionality. It's not just the look and feel. The search is completely revamped and much faster and better at finding stuff. We also have new ways to save highlights using the camera, text input, and voice, which never existed before. Primarily, it's the look and feel and delightfulness, but there's also real substance there. I don't think we'll be able to ship this for at least a couple of weeks. My goal is to announce it sometime mid-April, and definitely before the end of April. I want to include it in the next Reader public beta update by April because it's been so long since December's last one. Launching on both iOS and Android simultaneously.
9 entries
Someone just built a 3D thought map by turning Obsidian embeddings into living networks. It lets you see the shape of your mind.
I have been working on Obsidian Reader for over a year. I didn't want to share it until I felt it was good enough. It's finally there.
Find us in Apple's brand new iPhone 17 launch video. Twice!
Introducing Flighty: Your past flights are now a heat map! Deep sorting of past flights, Super detailed past flight list view. Plus key fixes.
Introducing Halo — the habit app with an AI coach. It's like James Clear in your pocket.
NEWS: @Flighty arrives NEXT WEEK! Prepare for unprecedented delay intel.
Say hello to Bear 2! Packed with over 20 new features and a sparkling new design, we are excited to bring you the Bear-iest version of Bear yet!
Mega reveal of Flighty 3.0. We've dreamed of this since 2018. And it's even more seamless than you're imagining.
We just shipped a ton of new features: NEW SIDE PANEL, Text-to-speech on web, Entirely revamped search, Custom Ghostreader, Resurfaced Later docs, Original email styles, Automatic PDF metadata, Notion official API export.
No video script yet — run /direct-movie to generate one.
No assets yet — upload your video to RAMP/readwise/readwise-2-mobile.mp4 in Google Drive
No surfaces yet. Run /launch-surfaces to generate a checklist.
No approved copy yet — run the pipeline to generate copy.
Run /analytics to add post URL
Run /analytics to add post URL
Run /analytics to add post URL
Readwise 2.0 Web
Campaign Brief: Readwise 2.0 Web
Overview
- Feature name: Readwise 2.0 Web
- Product: readwise
- Launch date: late April–May 2026
- Status: inspiration
What It Is
A complete redesign of the Readwise web app to match the new design standards established by the Readwise 2.0 mobile launch. The web app gets the same visual overhaul — updated colors, fonts, layout, and interactions — plus functional improvements to search and manual highlight input. The result is a consistent, modern experience across mobile and web.
Why We Built It
The Readwise web app feels dated and painful to use, especially for things like search and manual highlight entry. After launching the redesigned mobile app, having the web app stuck in the old design creates an inconsistent experience. This brings the web up to the same standard and completes the Readwise 2.0 story.
User Demand
- Signal strength: medium
- Evidence: Less vocal demand than the mobile app redesign, but the same underlying frustrations apply — the web app feels old and clunky. Completing the redesign across both platforms is a natural expectation after the mobile launch.
- Who wants this most? Users who primarily interact with Readwise on the web (desktop reviewers, manual highlight creators, people who manage their library from a browser).
Success
- Primary goal: conversion
- What does success look like? New users evaluating Readwise on the web see a polished, modern product that feels worth paying for. The web app stops being a friction point in the conversion funnel.
- Secondary goals: Retention — existing users feel the momentum of continuous improvement. Acquisition — a second launch event creates another wave of attention and social sharing.
The "Wow" Moment
A before-and-after comparison showing the old dated web app transforming into the sleek new design — instant visual proof of how far it's come.
Target Audience
Existing Readwise users, especially desktop/web-primary users. Secondary: potential new users who encounter the web app during signup and evaluation.
Launch Constraints
Must launch after the Readwise 2.0 mobile campaign. Intentionally split from mobile to create two distinct launch events. Engineering timeline means late April at earliest, May more likely.
Messaging Angle
"The Readwise redesign comes to web." Frame as the second chapter of the Readwise 2.0 story. Lean into before/after visuals — they're inherently shareable. Can reference the mobile launch reception as social proof. Tone: same confident energy as mobile, but can be slightly lighter since this is the follow-up, not the reveal.
Raw Notes
Okay, the next project is a follow-up to the Readwise 2.0 mobile launch: the Readwise 2.0 web launch. We're going to split up those announcements to get more launch events, and the engineering for web is going to take a while, so launching them at the same time seems too hard. We can obviously take a lot of inspiration and lessons from the Readwise 2.0 mobile launch. This will basically be: "Hey, we've completely revamped the Readwise web app." It's the original Readwise app again, and I think it solves the same pains that the Readwise 2.0 mobile app launch solved. The Readwise web app feels dated—not as much as the mobile app was, but it's still pretty dated and hard to use. All the functionality feels painful, like search. We're revamping that and the manual highlight input. There's a little bit of real functionality too on the search and manual highlight input. It'll make the web app consistent with the mobile app's new design, which will be good. I think there's another chance to get some excitement over the redesign. Maybe in this case, we could also show before and after. I think the wow moment is something like the before and after. There's been more demand around the aesthetics of the Readwise mobile app, but this should still be valuable as well. I think it's primarily about conversion, but it'll have secondary effects on retention and, if we do this campaign right, acquisition. The timing is definitely going to be after the mobile app launch, so I'm thinking probably end of April, but much more likely in May.
5 entries
Someone just built a 3D thought map by turning Obsidian embeddings into living networks. It lets you see the shape of your mind.
Introducing Flighty: Your past flights are now a heat map! Deep sorting of past flights, Super detailed past flight list view. Plus key fixes.
NEWS: @Flighty arrives NEXT WEEK! Prepare for unprecedented delay intel.
Say hello to Bear 2! Packed with over 20 new features and a sparkling new design, we are excited to bring you the Bear-iest version of Bear yet!
We just shipped a ton of new features: NEW SIDE PANEL, Text-to-speech on web, Entirely revamped search, Custom Ghostreader, Resurfaced Later docs, Original email styles, Automatic PDF metadata, Notion official API export.
No video script yet — run /direct-movie to generate one.
No assets yet — upload your video to RAMP/readwise/readwise-2-web.mp4 in Google Drive
No surfaces yet. Run /launch-surfaces to generate a checklist.
No approved copy yet — run the pipeline to generate copy.
Run /analytics to add post URL
Run /analytics to add post URL
Run /analytics to add post URL
Readwise CLI
Campaign Brief: Readwise CLI
Overview
- Feature name: Readwise CLI
- Product: readwise
- Launch date: 2026-03-18 (already launched)
- Status: complete
What It Is
A command-line interface that makes everything you've saved in Readwise (highlights, articles, PDFs, books, YouTube, newsletters) accessible via the terminal. Install with npm install -g @readwise/cli, run readwise login, and your entire reading library is searchable, readable, and actionable from the command line. Includes semantic + full-text hybrid search across every word of every saved document (powered by turbopuffer), document retrieval in markdown, highlight management, feed triage, and full library export. Also ships with pre-built "skills" (one-command workflows) for triage, feed catchup, quiz, and more. Works natively with AI agents like Claude Code, Codex, and OpenClaw.
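The hybrid (full-text + semantic) search mentioned above can be illustrated with a simple score-fusion step. The sketch below uses reciprocal rank fusion over two ranked result lists; the document IDs are hypothetical, and the real implementation (powered by turbopuffer) is not shown here — this only illustrates the general idea of combining the two signals.

```python
# Minimal sketch of hybrid search ranking via reciprocal rank fusion (RRF).
# Doc IDs are hypothetical; the real Readwise/turbopuffer pipeline differs.

def rrf_merge(fts_ranking, semantic_ranking, k=60):
    """Fuse two ranked lists of doc IDs into one hybrid ranking."""
    scores = {}
    for ranking in (fts_ranking, semantic_ranking):
        for rank, doc_id in enumerate(ranking):
            # Documents ranked highly by either signal float to the top.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

fts = ["doc-a", "doc-b", "doc-c"]       # full-text (keyword) matches
semantic = ["doc-c", "doc-d", "doc-a"]  # embedding-similarity matches
print(rrf_merge(fts, semantic))         # doc-a and doc-c rank highest
```

Documents that appear in both lists (here doc-a and doc-c) outrank documents that only one signal surfaces, which is the practical benefit of hybrid search over either method alone.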
Why We Built It
The new wave of AI agents and coding tools live in the terminal. Readwise's data — everything you've read, highlighted, and saved — is incredibly valuable context for these agents, but there was no way to access it programmatically from the command line. The CLI makes Readwise the "content brain" for any terminal-based AI workflow.
User Demand
- Signal strength: high
- Evidence: The launch tweet got 1,154 likes, 165 RTs, 1,198 bookmarks, and 254K views. The AI/agent ecosystem is exploding and users were asking for programmatic access. The read-only mode follow-up ("by popular demand") shipped days later.
- Who wants this most? Terminal-native power users, AI agent builders (Claude Code, OpenClaw), developers who want to script their reading workflow.
Success
- Primary goal: acquisition
- What does success look like? Positions Readwise as AI-native infrastructure. Influencers and power users build workflows with the CLI and share them, driving signups from the developer/AI community. The launch tweet was one of Readwise's best-performing ever.
- Secondary goals: Retention — existing power users feel Readwise is on the cutting edge. Sets up the MCP launch as a fast follow.
The "Wow" Moment
Ask your AI agent a question and it searches across everything you've ever saved to Readwise — articles, books, PDFs, highlights — and gives you a grounded answer with citations.
Target Audience
Terminal-native developers and AI agent power users (Claude Code, OpenClaw, Cursor). Secondary: the broader PKM/productivity community who sees the demo and thinks "I need this."
Launch Constraints
Already launched 2026-03-18. This campaign is being documented retroactively. Video asset already recorded (37s demo). Fast follow: Readwise MCP for non-terminal users.
Messaging Angle
"Introducing the Readwise CLI." Simple, direct. Let the demo speak. The video shows: save from anywhere → your agent already knows. Implicit message: Readwise is the best "second brain for your agent" — full semantic search over every word of every document, not just bookmarks.
Raw Notes
Assembled from: readwise.io/cli landing page, @readwise launch tweet (1.1K likes, 254K views), @homsiT co-founder tweets about hybrid search + turbopuffer, readwise-skills repo, existing readwise-for-openclaw campaign brief, Notion CLI documentation page. The CLI and MCP are functionally equivalent (same tools) — the CLI targets terminal users, the MCP targets ChatGPT/Claude/Perplexity users.
11 entries
Anything you can do in Obsidian you can do from the command line. Obsidian CLI is now available in .
1. install Obsidian 1.12 2. enable CLI 3. now OpenClaw, OpenCode, Claude Code, Codex, or any other agent can use Obsidian
I'm still processing, but I think I would die for the Obsidian CLI, using this with Claude Code or any agent is fucking insane. Holy shit man.
Obsidian CLI orphans command vs other methods. 54x faster than grep. 70,000x cheaper than MCP.
Obsidian CLI closes one of the oldest feature requests, dating back to May 2020.
1. Install Readwise CLI 2. OpenClaw, Claude Code, Codex, or any coding agent can now: Search your entire readwise library, Read full content of anything you've saved, Tag, create highlights, organize on your behalf.
One of the coolest things about the CLI (and mcp) is that every single word of every document in your library is searchable via hybrid (FTS + semantic) @turbopuffer. Allows you to build some really powerful stuff trivially, eg recommendation systems.
Obsidian + Claude Code knowledge creation workflow using Readwise + browser clipping. Collecting, cleaning, and creating content.
The new Readwise CLI has full access to every word you've ever saved in your Reader library. Semantically search over your documents.
The Readwise CLI/MCP work great on their own, but we've also created a readwise-skills repo that shares some powerful workflows.
It's quite fun, here's a 2d graph visualization of my recent ~500 highlights and how they're connected. Built with the CLI.
Launch Video — Production Guide for Christina
What this is
A 45-60 second screen recording for the launch tweet. It shows the Readwise CLI and AI agent integration in action. No voiceover — just the terminal with music and text overlays. Think Obsidian's CLI launch video that got 1M views: the product speaks for itself.
Reference
Obsidian's CLI launch tweet/video — same format, same energy. Clean terminal, no talking, text overlays to orient the viewer.
- Main tweet: https://x.com/obsdmd/status/2021241384057930224
- Kepano's follow-up: https://x.com/kepano/status/2021251878521073847
The Script
Beat 1: The Hook (3-5 sec)
What happens: Run readwise in the terminal. The TUI launches with ASCII art logo.
Why: This is the thumbnail/first frame. The ASCII art stops the scroll. It looks cool and immediately signals "developer tool."
Text overlay: None needed — the ASCII art is the hook.
Beat 2: Quick Search in TUI (5-8 sec)
What happens: Search for "aggregation theory" in the TUI. Results appear showing article titles and authors.
Text overlay: "Search your reading library"
Beat 3: Transition to Claude Code (2-3 sec)
What happens: Open Claude Code (the terminal AI agent). This is the transition from "CLI tool" to "AI-powered."
Text overlay: None — the Claude Code interface is self-explanatory.
Beat 4: Search via Agent (8-10 sec)
What happens: Ask Claude Code something like: "What counterintuitive productivity advice have I read about?"
The agent searches across highlights and documents, pulls from multiple sources, and synthesizes a real answer grounded in what the user has actually read.
Text overlay: "Your agent knows everything you've read"
Beat 5: Quiz (10-12 sec)
What happens: Ask: "Quiz me on some key ideas from that article"
The agent reads the full document, asks a question, user answers, gets graded.
Text overlay: "Quiz yourself on what you've read"
Note: This is the longest beat. It needs to show enough of the back-and-forth to be compelling but not drag. Show one question + grade, then cut.
Beat 6: Save Back to Readwise (8-10 sec)
What happens: Ask: "Save the last quiz question and answer as a highlight to my Readwise"
The agent creates a highlight from the conversation — the reading loop closes. You read, get quizzed, and the key insight goes back into your library.
Text overlay: "Save insights back to Readwise"
Beat 7: The Closer — Surprise Me (8-10 sec)
What happens: Ask: "Read a large sample of my highlights and tell me something surprising I don't know about myself"
Show the agent pulling highlights and delivering a surprising insight.
Text overlay: "And so much more..."
End Card (3 sec)
Text: "readwise.io/cli" and/or "npm install -g @readwise/cli"
Production Notes
Recording
- Tool: Screen capture (not asciinema — we need to show Claude Code's UI, not just raw terminal)
- Resolution: Record at 2x/retina, export at 1920x1080 or 1080x1080 (square works well for Twitter)
- Terminal: Dark background, large readable font (16pt+). Clean terminal with no clutter — hide tabs, bookmarks, dock if possible
- Speed: Claude Code is too slow to show in real-time. Fast-forward through the agent thinking/working (2-4x speed), then drop back to real-time when the results appear. The viewer should see the agent doing its thing but never wait for it. Consider a subtle speed indicator or just let the fast-forward be obvious — people understand sped-up terminal footage.
- Tristan will do the actual CLI/agent interactions — Christina edits the raw recording into the final video
Text Overlays
- Clean sans-serif font (SF Pro, Inter, or similar)
- Semi-transparent dark background behind text so it's readable over the terminal
- Bottom-left or top-left placement — don't cover the terminal output
- Keep them short (3-6 words max)
- Fade in/out, don't snap
Music
- Lo-fi / ambient / chill electronic — nothing with vocals
- Low volume — it's background texture, not the focus
- Match the energy: calm confidence, not hype
Pacing
- Each beat should feel snappy. If a beat drags, speed it up or cut content
- The video should feel like it's moving fast but is still followable
- Transitions between beats: simple cuts or brief fade-throughs. No fancy transitions.
Final Export
- Format: MP4 (H.264)
- Duration: 45-60 seconds (aim for 50)
- Twitter auto-plays muted, so the video must work without sound
- The text overlays are essential since there's no voiceover
What Tristan Provides to Christina
- Raw screen recording of each beat (can be separate clips or one long take)
- This spec doc
- Any specific music preferences
What Christina Delivers
- Edited video with text overlays and music
- Draft for review before final export
No surface changes listed.
Anything you've saved in Readwise (highlights, articles, PDFs, books, YouTube, newsletters) is now instantly accessible from the terminal.
For you, and your AI agents.
npm install -g @readwise/cli
**Body:**
Hey all, wanted to let you know about the new Readwise CLI (as well as our MCP, in beta)!
Anything you've saved in Readwise/Reader (highlights, articles, PDFs, books, YouTube, newsletters) is now instantly accessible from the terminal and your AI apps.
We're pretty happy with how it turned out, and so far the reception has been great. Almost anything you can do inside of Readwise/Reader, you can do via these tools:
* Do a full-text + semantic hybrid search across all of the Reader content in your library
* Move/tag/archive your documents
* Read the full content of any document
* Sync all of your data to a local folder
* Create highlights
* Pull the highlights from your daily review
You can install it (and learn more) here:
* CLI: readwise.io/cli
* MCP: readwise.io/mcp
They work with basically any AI app (Claude, Codex, ChatGPT, Perplexity, etc). We also created a repo with some cool example skills you can use with them.
Curious what you think and if you find it helpful!
Run /analytics to add post URL
Readwise MCP
Campaign Brief: Readwise MCP
Overview
- Feature name: Readwise MCP
- Product: readwise
- Launch date: ~2026-04-08 (next week or two)
- Status: inspiration
What It Is
The Readwise MCP server brings all the power of the Readwise CLI to any AI app that supports MCP — ChatGPT, Claude, Perplexity, Poke, and the long tail of AI tools beyond the terminal. It's functionally equivalent to the CLI (same tools, same capabilities), but works natively in the apps people already use. Users can search their highlights, search their full Reader library (hybrid semantic + full-text search across every word of every saved document), triage their feed, organize documents, add tags, create visualizations, and more — all through natural language in their favorite AI app. The MCP server has also been approved as a ChatGPT app, making setup seamless for ChatGPT users.
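For MCP-compatible apps that take a JSON server config (Claude Desktop, Cursor, and similar), setup typically looks something like the sketch below. This is an illustrative assumption, not documented setup: the package name `@readwise/mcp` is hypothetical, and readers should follow readwise.io/mcp for the actual instructions.

```jsonc
// Hypothetical example — the package name "@readwise/mcp" is an assumption;
// see readwise.io/mcp for the real setup instructions.
{
  "mcpServers": {
    "readwise": {
      "command": "npx",
      "args": ["-y", "@readwise/mcp"]
    }
  }
}
```

For ChatGPT users, none of this is needed — the approved ChatGPT app makes setup a one-click install, which is a key part of the launch story.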
Why We Built It
The CLI was powerful but limited to terminal-native users. Most people interact with AI through apps like ChatGPT and Claude, not the command line. The MCP opens up all the same capabilities to a much broader audience. More broadly, AI apps are enabling powerful workflows through plain natural language that weren't possible before — and Readwise data is incredibly valuable in that context.
User Demand
- Signal strength: high
- Evidence: MCPs are a very hot category in the AI ecosystem and the Readwise MCP has been a well-requested, popular feature. High organic demand.
- Who wants this most? AI app power users who already use Readwise — the kind of person who uses ChatGPT or Claude daily and wants their reading data accessible there. Also: influencers and workflow builders who create and share AI workflows.
Success
- Primary goal: acquisition
- What does success look like? Influencers and power users build and share workflows using the Readwise MCP, driving awareness and new signups. Meaningful installs from ChatGPT app store and other AI app marketplaces. Positions Readwise as a key player in the AI ecosystem.
- Secondary goals: Retention — existing users feel Readwise is on the forefront of AI and increasingly valuable in their daily workflow.
The "Wow" Moment
Talk to ChatGPT or Claude and instantly search across everything you've ever saved to Readwise, then organize your library, triage your feed, and build powerful workflows — all through natural conversation.
Target Audience
Mainstream AI app users (ChatGPT, Claude, Perplexity, Poke) who are also readers/savers. More mainstream than the CLI audience. Secondary: influencers and workflow builders who will create content showcasing Readwise MCP workflows.
Launch Constraints
Fast follow to the CLI launch. Frame as "last week we launched the CLI for terminal users — now here's the MCP for everyone else." The MCP server is approved as a ChatGPT app. Low-medium investment — leverage CLI launch momentum and assets.
Messaging Angle
"Your Readwise, in every AI app." Frame as the natural next step after the CLI: same power, but now in ChatGPT, Claude, Perplexity, and any MCP-compatible app. Emphasize the full-text library search as a standout capability people have always wanted. Show powerful workflows, not features. Tone: excited but grounded — this is genuinely useful, not hype.
Raw Notes
The next project is the Readwise MCP launch, which is another small project similar to the OpenClaw launch. It's basically a follow-up to the CLI launch. The Readwise MCP and CLI are functionally equivalent (they have all the same tools), and we want to mention that in the launch. The MCP is less of a terminal-agent thing and compatible with more AI apps. The big thing the MCP works with that the CLI doesn't is the regular ChatGPT app and the many other long-tail AI apps that don't run in the command line. You can now get all the power of Readwise and all the stuff from our CLI launch: listing your highlights, searching your highlights, searching your read documents, triaging your Reader documents and feed, and creating visualizations of your data. All that good stuff is now doable in your AI apps. This is a fast follow, and one big part of it is that our MCP server has been approved as a ChatGPT app, so people can use it very seamlessly in ChatGPT. This launch is basically: connect your Readwise data to your favorite AI apps and get control through those apps to do all those cool operations on your data. Why did we build it? Because we're in this new AI wave, and these new AI apps are very powerful and allow you to build very cool workflows with plain natural language. They let users organize or search their library very conveniently in ways that weren't possible before. Some pains it might solve: a user really wanting to find certain documents or highlights in their library. One big use case that was not possible before is our full Reader library search tool. We index every single word of every single document in your library, and you can do a hybrid semantic plus full-text search over that content. That's something people have always wanted to do.
With these AI apps, they can do really powerful searches over their entire library and find stuff they remember reading or saving. How much demand have we had? We've seen a lot of demand for the MCP. It's been a very popular, well-requested feature because MCPs are a very hot thing in this new AI world and people definitely want it. I would say it's primarily about acquisition. We're trying to position Readwise as an AI app because there's so much hype about that. The hope is that with the MCP, users will be able to build powerful workflows and show them off to other users; hopefully influencers especially will make content about workflows using our MCP server. It's primarily acquisition, but I also think it will be valuable for retention: existing users will feel like Readwise is on the forefront of AI and a useful tool for them in this new world. What is the wow moment? It's very similar to our CLI: being able to talk to ChatGPT or Claude and get powerful search through all the stuff you save to Readwise, then organize your library, organize your highlights, add tags, move documents, triage your feed, et cetera. Just showing off these powerful workflows will be the wow, again very much like our CLI launch video, except focused a little more on mainstream users who would use something like Claude and less on the command line. Any launch or timing constraints? The next week or two would be a good time to announce it as a fast follow to the CLI. The framing I'm imagining: last week we announced the CLI; now we're following up with the Readwise MCP, which will work with any of your AI apps, not just terminal ones. If you use ChatGPT, Claude, Perplexity, or Poke, these apps will now work natively with your Readwise. You have full access to your library to read it, search it, and organize it.
16 entries
Now you can use AI agents to design directly on the Figma canvas, with our new use_figma MCP tool and skills to teach them. Open beta starts today.
This Russian guy found a way to learn anything 10x faster with NotebookLM + Gemini + Obsidian.
Stop sleeping on Obsidian + Claude Code = your own JARVIS. Takes 1 hour to build.
BREAKING: We've given Claude direct access to the full options and equities market. Introducing the Unusual Whales MCP Server. It connects any AI assistant to live, structured market data in real time.
I built Feynman, Claude Code for research. I gave it a question and it came back 30 minutes later with a full research report.
Figma MCP server, now with design context anywhere you work: Remote access, Connection with Figma Make.
Introducing Better Icons — The MCP server for searching and retrieving icons. 200,000+ icons. Auto learns your preferences and prioritizes.
Introducing Generative UI for MCP Apps. One server. Infinite interfaces. Works in Claude, ChatGPT, VS Code, Cursor and more.
Introducing Codex to Figma: Roundtrip between code and canvas through the Figma MCP server.
Notion official MCP server released. Community servers had security risks so this is welcome.
Excited to launch rube.app on ProductHunt today! Rube is the universal MCP server that connects your AI chat (Cursor, Claude, VSCode) to 500+ apps including Slack, Notion, Figma.
We shipped an MCP server! You can now query your Readwise highlights inside of Claude, Cursor and more.
One of the coolest things about the CLI (and MCP) is that every single word of every document in your library is searchable via hybrid (FTS + semantic, @turbopuffer ofc). Allows you to build some really powerful stuff trivially, eg recommendation systems.
we made a @readwise MCP server, but haven't really announced/documented it yet... anyone wanna help test it out?
The new version of the Readwise MCP, and the Readwise CLI (in closed beta), are just too good and too powerful. In 2026, a tool that agents can't use isn't a good tool.
The Readwise CLI/MCP work great on their own, but we've also created a readwise-skills repo that shares some powerful skills to use with them.
No video script yet — run /direct-movie to generate one.
No assets yet — upload your video to RAMP/readwise/readwise-mcp.mp4 in Google Drive
No surfaces yet. Run /launch-surfaces to generate a checklist.
No approved copy yet — run the pipeline to generate copy.
Run /analytics to add post URL
Run /analytics to add post URL
Run /analytics to add post URL
RAMP
Campaign Brief: LaunchWise
Overview
- Feature name: RAMP
- Product: internal tool (standalone)
- Launch date: 2026-03-26
- Owner: @christina
- Status: complete
What It Is
LaunchWise is a Slack-based workflow and collaboration tool that helps the Readwise team coordinate product marketing campaigns from brief to distribution. It guides team members through a structured pipeline — campaign setup, inspiration gathering, video scripting, copy generation, and social distribution — turning what was a manual, ad-hoc process into a repeatable, bot-assisted workflow.
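The staged pipeline described above can be sketched as a tiny state machine. This is an illustrative sketch only, not the actual bot implementation; the stage names are taken from this brief, and the human-approval step is the key design choice the brief calls out.

```python
# Illustrative sketch of the LaunchWise campaign pipeline as a state machine.
# Stage names come from the brief; the real Slack-bot implementation differs.

STAGES = ["brief", "inspiration", "video_script", "copy", "distribution"]

class Campaign:
    def __init__(self, feature: str):
        self.feature = feature
        self.stage_index = 0  # every campaign starts at the brief stage

    @property
    def stage(self) -> str:
        return STAGES[self.stage_index]

    def approve(self) -> str:
        """A human approves the current stage, advancing the pipeline."""
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1
        return self.stage

camp = Campaign("Readwise 2.0 Web")
print(camp.stage)   # brief
camp.approve()
camp.approve()
print(camp.stage)   # video_script
```

The point of the structure is that nothing advances without an explicit approval, which is what turns the ad-hoc process into a repeatable, bot-assisted one.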
Why We Built It
Announcing new features and making users aware of them is a huge bottleneck for the company. Historically, only Tristan and Dan have done product marketing, and the process is manual, messy, and hard to stay on top of. LaunchWise exists to make launching features dramatically faster and to let more team members participate in the process without needing to reinvent the wheel each time.
User Demand
- Signal strength: high
- Evidence: Strong internal demand from Tristan, Dan, and Christina. The current launch process is widely acknowledged as a mess — no structure, no shared tooling, and heavy reliance on two people. This is a hackathon project born from real, felt pain.
- Who wants this most? The internal product marketing team (Tristan, Dan, Christina) and anyone at Readwise who ships features and wants them properly launched.
Success
- Primary goal: acquisition
- LaunchWise's whole purpose is to accelerate product marketing, which drives new usage and word-of-mouth growth by getting features in front of potential users.
- What does success look like? The team can take a feature from brief to distributed launch in a fraction of the time it takes today. More features get properly launched, leading to more awareness, more sign-ups, and more word-of-mouth.
- Secondary goals: Retention — existing users learn about new features they might have missed, keeping them engaged and subscribed.
The "Wow" Moment
The hackathon demo presentation — showing the sheer volume of polished product marketing work (briefs, video scripts, social copy, distribution) the team was able to churn through using LaunchWise in a single sitting.
Target Audience
Primary: The Readwise internal team (Tristan, Dan, Christina, and future team members involved in launches). Secondary: Readwise end users, who benefit from actually hearing about new features.
Launch Constraints
This is a hackathon project — needs to be demo-ready within the next hour or two. It's an internal tool, not a user-facing product launch. The "launch" is the hackathon demo presentation.
Messaging Angle
"Stop letting great features die in silence." LaunchWise turns the chaos of product marketing into a guided, collaborative pipeline — so every feature gets the launch it deserves.
Raw Notes
Okay, here's my ramble. This is a bit of a meta test project: I want us to launch LaunchWise. It's not really any of our normal products; it's a standalone internal tool for the team, a hackathon project. LaunchWise is a new Slack-based workflow and collaboration tool that lets different members of the team work together to resolve the bottleneck we have in product marketing and getting new features launched. We built it because announcing new features and making our users aware of them is a huge challenge for the company right now. Historically only Tristan and Dan have really done this, manually, and we want to make it a lot faster. The immediate users are going to be the team, and it's going to solve the pain that launching stuff is so manual, so messy, and so hard to stay on top of. End users will actually find out about all the new stuff we're working on, which will be very valuable. I think there's a lot of demand from myself, Dan, and Christina because this process is a mess right now. Making it less of a mess will improve our lives a lot and allow the company to get more stuff done. Ultimately, this project is about acquisition: the whole point of product marketing and launching stuff is primarily to drive new usage and word-of-mouth growth. Secondarily, I think it will help with retention by letting our existing users know about all the valuable things we're building for them. The wow moment will probably be our hackathon demo when we present it, or maybe just a cool screenshot or graphic we can include. For launch timing, I want this project done in the next hour or two so we can move on to testing LaunchWise on more serious projects.
5 entries
Introducing Cursor Automations to build always-on agents.
Obsidian 1.12 is now available to everyone!
Cursor going entirely from ticket to PR!
Convert your Notion pages and databases to durable, private, local files.
Better text-to-speech is possible. @UnrealSpeech has filled this void.
Video Script — LaunchWise
Format Inspiration
- Cursor "ticket to PR" (27s, 638K views) — one continuous workflow, no cuts, fast and punchy
- Best-performing dev tool demos show the full loop in a single take with speed ramps
- Internal tool = lower production bar. One shot, minimal editing.
Narrative Arc
Title card → One continuous screen recording of LaunchWise running a campaign in Slack → End card
Tagline
Stop letting great features die in silence.
Product
Internal tool (standalone)
Music
Lo-fi ambient, low volume. Keep it simple.
Audio
Voiceover: no. Sound design: strip all diegetic audio, music only.
Shots
Shot 1 — Title Card
- Type: title-card
- Duration: 3s
- Speed: 1x
- Visual: Readwise gradient background. Animated typed text: "Stop letting great features die in silence."
Shot 2 — LaunchWise in Action (One Shot)
- Type: screen-recording
- Duration: 30-40s (after speed ramping)
- Speed: Mixed — 4x through bot thinking/generating, 1x on key outputs
- Platform: web (Slack in browser)
- Window setup: Slack in browser, full width, dark mode. Only the LaunchWise channel visible. Hide bookmarks bar, notifications, dock, other channels sidebar if possible.
- Starting state: The LaunchWise Slack channel, empty or near-empty. A fresh campaign about to begin.
- Action sequence:
1. Type a message kicking off a campaign (e.g. "@launchwise let's launch [feature name]")
2. Bot responds, begins campaign setup — speed ramp 4x through the bot's questions and responses
3. Brief gets generated — drop to 1x briefly to show the completed brief link appearing
4. Bot moves to inspiration stage — speed ramp 4x through research
5. Bot moves to script stage — speed ramp 4x through generation
6. Bot shares the finished video script — drop to 1x to show the output
7. Bot generates social copy — speed ramp 4x
8. Final outputs appear in channel — drop to 1x. Multiple campaign assets visible in the Slack thread.
- What the viewer should notice: The sheer volume of polished marketing output generated through a simple Slack conversation. Brief, inspo, script, copy — all done.
- Pacing note: The payoff is the scroll-back moment. When you see the full thread of completed work, let it breathe for 2-3 seconds. This is the money shot — the volume of output.
- Cursor movement: Minimal. Mostly watching the bot work.
- What to have ready: A real feature to campaign on (or reuse an existing campaign). Bot needs to be running and responsive. Have the full pipeline working end-to-end.
- Cleanup: Hide all Slack sidebar channels except the campaign channel. No notifications. No other browser tabs visible.
Shot 3 — End Card
- Type: title-card
- Duration: 3s
- Speed: 1x
- Visual: Readwise gradient background. Animated typed text: "Every feature deserves a launch."
Recording Prep Checklist
- [ ] Bot running and connected to a test Slack channel
- [ ] Pick a real feature to campaign on (or stage a clean re-run of an existing campaign)
- [ ] Slack in browser, dark mode, full screen
- [ ] Hide bookmarks bar, dock, notifications, system clock
- [ ] Close all other Slack channels/DMs in sidebar
- [ ] Test the full pipeline once before recording to make sure nothing errors out
- [ ] Pre-approve a lo-fi music track
No assets yet — upload your video to RAMP/internal tool (standalone)/test_launchbot.mp4 in Google Drive
Surface Changes 0/5
So we built RAMP. A Slack bot that takes a feature from "we should announce this" to fully distributed launch. Brief, video script, copy, posting. All driven by one conversation.
No more "we should announce this" messages that go nowhere.
[demo video]
At Readwise, shipping features was never the bottleneck. Telling people about them was. Only two people on the team had ever done product marketing, and the process was entirely manual.
RAMP is a Slack bot that walks the team through a structured pipeline: campaign brief, inspiration research, video scripting, copy generation, and social distribution. It runs on Claude Code and handles the heavy lifting while a human reviews and approves at every step.
The result: any team member can take a feature from idea to distributed launch in a single sitting.
Built during our internal hackathon. Already using it to launch real features.
**Body:**
Quick behind-the-scenes post. One of the biggest challenges at Readwise has always been product marketing. We build a lot of features, but actually announcing them and getting the word out? That part was manual, messy, and usually fell on one or two people.
During our hackathon, we built RAMP. A Slack bot powered by Claude Code that drives a full marketing campaign from a voice memo to distributed social posts. It handles the brief, researches inspiration, writes video scripts, generates copy, and posts to social channels. A human approves everything along the way.
We're already using it internally. If you've noticed more frequent feature announcements from us lately... this is why.
Would love to hear what you think. What features have you wished we talked about more?