Use when analyzing YouTube videos, extracting insights from tutorials, researching video content, or learning from talks and presentations
Repository: Light-Brands/lawless-ai
ai-coding-config
plugins/core/skills/youtube-transcript-analyzer/SKILL.md
January 21, 2026
```bash
npx add-skill https://github.com/Light-Brands/lawless-ai/blob/main/plugins/core/skills/youtube-transcript-analyzer/SKILL.md -a claude-code --skill youtube-transcript-analyzer
```

Installation paths:
.claude/skills/youtube-transcript-analyzer/

<objective>
Download and analyze YouTube video transcripts to extract insights, understand concepts, and relate content to your work. Uses yt-dlp for reliable transcript extraction with intelligent chunking for long-form content.
</objective>

<when-to-use>
Use when you need to understand how a YouTube video/tutorial relates to your current project, research technical concepts explained in video format, extract key insights from talks or presentations, compare video content with your codebase or approach, or learn from video demonstrations without watching the entire video.
</when-to-use>

<prerequisites>
Ensure yt-dlp is installed:

```bash
# Install via pip
pip install yt-dlp

# Or via Homebrew (macOS)
brew install yt-dlp

# Verify installation
yt-dlp --version
```
</prerequisites>

<transcript-extraction>
**Set up a temporary directory** - IMPORTANT: Always create and use a temporary directory for downloaded files to avoid cluttering the repository:

```bash
# Create temporary directory for this analysis
ANALYSIS_DIR=$(mktemp -d)
echo "Using temporary directory: $ANALYSIS_DIR"
```

**Download the transcript** - Use yt-dlp to extract subtitles/transcripts into the temporary directory:

```bash
# Download transcript only (no video)
yt-dlp --skip-download --write-auto-sub --sub-format vtt --output "$ANALYSIS_DIR/transcript.%(ext)s" URL

# Or get manually created subtitles if available (higher quality)
yt-dlp --skip-download --write-sub --sub-lang en --sub-format vtt --output "$ANALYSIS_DIR/transcript.%(ext)s" URL

# Get video metadata for context
yt-dlp --skip-download --print-json URL > "$ANALYSIS_DIR/metadata.json"
```

**Handle long transcripts** - For transcripts exceeding 8,000 tokens (roughly 6,000 words, or 45+ minutes of video):

1. Split into logical chunks based on timestamp or topic breaks
2. Generate a summary for each chunk focusing on key concepts
3. Create an overall synthesis connecting themes to the user's question
4. Reference specific timestamps for detailed sections

For shor
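The timestamp-based chunking in step 1 can be sketched in Python. This is a minimal illustration, not part of the skill itself: the function name `chunk_vtt` and the word budget are assumptions, and since yt-dlp's auto-generated subtitles carry inline cue tags (e.g. `<c>word</c>`), the sketch strips them with a regex:

```python
import re

# Matches the start timestamp of a WebVTT cue timing line, e.g.
# "00:01:23.456 --> 00:01:25.000"
CUE_RE = re.compile(r"^(\d{2}:\d{2}:\d{2}\.\d{3}) -->")

def chunk_vtt(vtt_text: str, words_per_chunk: int = 1500):
    """Split a VTT transcript into (start_timestamp, text) chunks of
    roughly words_per_chunk words each."""
    chunks = []
    current, start, count = [], None, 0
    timestamp = "00:00:00.000"
    for line in vtt_text.splitlines():
        m = CUE_RE.match(line)
        if m:
            timestamp = m.group(1)  # remember the most recent cue start
            continue
        line = re.sub(r"<[^>]+>", "", line).strip()  # drop inline cue tags
        if not line or line.startswith(("WEBVTT", "Kind:", "Language:", "NOTE")):
            continue  # skip header and metadata lines
        if start is None:
            start = timestamp  # first cue feeding this chunk
        current.append(line)
        count += len(line.split())
        if count >= words_per_chunk:
            chunks.append((start, " ".join(current)))
            current, start, count = [], None, 0
    if current:
        chunks.append((start, " ".join(current)))
    return chunks
```

Because each chunk keeps its starting timestamp, the per-chunk summaries (step 2) and the final synthesis can cite exact positions in the video (step 4).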