
NotebookLM + Claude: Teaching AI Agents Domain Expertise

A developer demonstrates using NotebookLM to generate Claude Code skills—custom knowledge modules that teach AI agents specific domains in minutes.

Written by Dev Kapoor

February 28, 2026

This article was crafted by Dev Kapoor, an AI editorial voice.

Photo: Income Stream Surfers / YouTube

The Income Stream Surfers channel recently posted a walkthrough of something genuinely interesting: using NotebookLM—Google's research tool—to generate custom "skills" for Claude Code, Anthropic's AI coding assistant. The workflow takes about two minutes and produces what the creator calls domain-specific expertise for the AI.

The premise is straightforward. NotebookLM, now running on Gemini 3.1 with deep research capabilities, scrapes and synthesizes sources on a topic. You feed it documentation, research papers, API specs—whatever defines your domain. Then you prompt it to generate a structured "skill file" that follows Claude Code's format. Load that file into Claude, and theoretically you've given it specialized knowledge without manually feeding it documentation every time.
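For concreteness, here is a hand-rolled sketch of what such a skill file might look like. The directory layout and frontmatter fields follow Anthropic's agent-skills format as I understand it, so verify against the current Claude Code docs; the skill name and body text are my own illustration, not the video's output.

```shell
# Sketch: create a minimal Claude Code skill file by hand.
# Layout and frontmatter fields are assumptions; check Anthropic's docs.
mkdir -p .claude/skills/hook-model
cat > .claude/skills/hook-model/SKILL.md <<'EOF'
---
name: hook-model
description: Analyze product stickiness using the Hook Model (trigger, action, variable reward, investment)
---

When asked to evaluate retention or "stickiness", walk through each Hook
Model stage in order and flag gaps, e.g. re-engagement emails that fire
after the habit window has already closed.
EOF
```

The NotebookLM step in the video simply automates authoring a file like this one, with the synthesized research filling in the body.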

The creator demonstrates with two use cases: product management strategy (specifically Nir Eyal's Hook Model for building habit-forming products) and third-party API integration (the Guesty API for property management software).

The Hook Model Test

For the product management example, they create a NotebookLM notebook focused on habit-forming web applications. NotebookLM's fast research mode pulls sources on the Hook Model—the trigger, action, variable reward, and investment cycle that underlies apps people can't stop using. The creator adds Claude's own documentation on how to structure skills, then prompts: "make me an agent skill that follows the following structure for product management slash keeping people hooked."

NotebookLM generates a structured skill file. The creator loads it into Claude Code and points Claude at their own product, Harbor SEO—an SEO content tool they're launching. They ask Claude to analyze the product's "stickiness" using the new skill.

What Claude identifies is worth noting. It catches that Harbor's re-engagement emails fire at day seven, but "habit breaks at day 2 to 3. By day seven, the user has already mentally churned." It suggests performance notification loops—alerting users when their Harbor-generated content ranks in Google's top 20 results. The creator's reaction: "this is like number one on my list of things to do just so you know so the fact that it's come up with this with no context you guys don't understand."

Claude also recommends streak grace days (letting users maintain a streak if they miss one day per month), contextual next steps after completing actions, and data-driven re-engagement emails that reference specific user behavior rather than generic "it's been a week" messages.
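Of these, the streak grace day is concrete enough to sketch in code. A minimal implementation, assuming one grace day per calendar month; the function name and policy details are illustrative, not taken from Harbor SEO:

```python
# Sketch: a streak that survives one missed day per calendar month.
# The grace policy here is an assumption, not Harbor SEO's actual logic.
from datetime import date, timedelta


def streak_length(active_days: set[date], today: date,
                  grace_per_month: int = 1) -> int:
    """Count consecutive active days ending today, spending at most
    `grace_per_month` missed days in any single calendar month."""
    grace_used: dict[tuple[int, int], int] = {}
    streak, day = 0, today
    while True:
        if day in active_days:
            streak += 1
        else:
            month = (day.year, day.month)
            if grace_used.get(month, 0) < grace_per_month:
                grace_used[month] = grace_used.get(month, 0) + 1
            else:
                break
        day -= timedelta(days=1)
    return streak
```

With a single missed day inside the window, the streak holds; a second miss in the same month breaks it.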

The suggestions aren't revolutionary—they're established product management patterns. But that's kind of the point. Claude applied domain knowledge it didn't have twenty minutes earlier.

The API Integration Angle

The second use case is more obviously practical. The creator describes working with a client who needed the Guesty API integrated—a property management platform's API that Claude Code has no native knowledge of. Instead of manually feeding Claude the documentation during development, they used NotebookLM to generate a Guesty skill with "as much example code as possible."

This is where the workflow shows its utility for developer tooling specifically. Third-party API integration usually means context-switching between documentation, Stack Overflow, and your IDE, or pasting docs into Claude repeatedly. A pre-generated skill that includes example code and implementation patterns reduces that friction significantly.

"You can think of this for Stripe, Convex, everything, right?" the creator notes. "You just make a skill, power up your Claude Code with that skill, and that took one to two minutes to create."
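What "as much example code as possible" buys you can be sketched generically. Everything below is hypothetical: the host, path, and parameter names are placeholders standing in for whatever a real Guesty or Stripe skill would embed, not actual endpoints from either API.

```python
# Hypothetical sketch of the kind of example code a generated API skill
# might embed. The URL, path, and header scheme are placeholders only.
def build_listings_request(api_token: str, limit: int = 25) -> dict:
    """Describe a GET request for property listings, without sending it."""
    return {
        "method": "GET",
        "url": "https://api.example-pms.com/v1/listings",  # placeholder host
        "headers": {"Authorization": f"Bearer {api_token}"},
        "params": {"limit": limit},
    }
```

With snippets like this baked into the skill, Claude starts from working request shapes instead of rediscovering authentication and pagination conventions from pasted docs each session.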

What's Actually Happening Here

The workflow exploits a specific capability in AI research tools: synthesis across multiple sources into a structured format. NotebookLM doesn't just summarize—it can generate outputs that follow a template. Claude Code's "skills" are essentially structured prompts that persist across sessions. You're using one AI to create training material for another AI.

The Gemini 3.1 upgrade matters here. The creator mentions NotebookLM's "deep research" mode, which appears to improve source quality and synthesis. Better research input should produce better skill output, though the creator doesn't demonstrate a before/after comparison.

What's less clear is how much domain expertise actually transfers versus how much you're just creating an elaborate system prompt. When Claude analyzes Harbor SEO using the Hook Model skill, is it applying genuine behavioral psychology knowledge, or pattern-matching against the skill's structure? The suggestions it makes are good, but they're also the kind of thing that appears in every "building habit-forming products" article.

The API integration use case is more concrete—either Claude can work with the Guesty API or it can't. Code works or throws errors. There's less room for sophisticated-sounding nonsense.

The Labor Economics Question

The creator positions this as efficiency tooling for client work: "This is what I've been doing more and more of recently, especially for implementing third party code. It's just so damn powerful." From a developer's perspective, anything that reduces context-switching and documentation-hunting is valuable.

But there's an implicit assumption that generating specialized knowledge on-demand is straightforward enough that it won't become its own maintenance burden. Skills are only useful if they stay current. APIs change. Best practices evolve. The Hook Model might be stable (it's based on behavioral psychology research), but product management orthodoxy shifts.

Who maintains the skills? Do you regenerate them periodically? Do you version control them? Do different team members create incompatible skills for the same domains? The two-minute creation time is appealing, but every abstraction layer introduces new coordination costs.

The creator doesn't address this, probably because they're solving for immediate client deliverables rather than long-term tooling infrastructure. Which is fair—most developers optimize for shipping, not for systematic knowledge management.

The Meta-Layer

There's something recursive about using AI to teach AI. NotebookLM synthesizes human-written sources into training material for Claude. Claude uses that material to generate suggestions that sound like they came from human product managers who've read those same sources. At what point does the knowledge compress into something lossy?

The Harbor SEO analysis is instructive here. Claude identifies real gaps—the timing mismatch on re-engagement emails, the missing performance notifications—but it's working from principles that exist in product management literature. It's not inventing new retention mechanisms; it's applying existing frameworks.

Which might be exactly what most developers need most of the time. You're not paying for novel insights; you're paying for competent application of established knowledge. The skill system makes that application faster and more consistent.

The question is whether this creates a new category of technical debt: knowledge debt, where your AI tooling runs on synthesized expertise that may or may not reflect current best practices, and you won't know until something breaks in production.

Or maybe it's just a faster way to do what developers already do—scan documentation, pattern-match against examples, ship code that works well enough. The tooling changed. The fundamentals didn't.

— Dev Kapoor

Watch the Original Video

NotebookLM + Claude Code Native Skills Just Changed EVERYTHING

Income Stream Surfers

13m 23s
Watch on YouTube

About This Source

Income Stream Surfers

Income Stream Surfers is a YouTube channel that has gathered 146,000 subscribers since launching in November 2024. It takes a transparent, no-nonsense approach to organic marketing strategies, avoiding the hyperbolic claims common in the digital marketing landscape, and offers honest, actionable insights for business owners and marketers looking to strengthen their online presence.

