ffmpeg-modal-containers

verified

Complete Modal.com FFmpeg deployment system for serverless video processing. PROACTIVELY activate for: (1) Modal.com FFmpeg container setup, (2) GPU-accelerated video encoding on Modal (NVIDIA, NVENC), (3) Parallel video processing with Modal map/starmap, (4) Volume mounting for large video files, (5) CPU vs GPU container cost optimization, (6) apt_install/pip_install for FFmpeg, (7) Python subprocess FFmpeg patterns, (8) Batch video transcoding at scale, (9) Modal pricing for video workloads, (10) Audio/video processing with Whisper. Provides: Image configuration examples, GPU container patterns, parallel processing code, volume usage, cost comparisons, production-ready FFmpeg deployments. Ensures: Efficient, scalable video processing on Modal serverless infrastructure.

Marketplace

claude-plugin-marketplace

JosiahSiegel/claude-plugin-marketplace

Plugin

ffmpeg-master

Repository

JosiahSiegel/claude-plugin-marketplace
7 stars

plugins/ffmpeg-master/skills/ffmpeg-modal-containers/SKILL.md

Last Verified

January 20, 2026

Install Skill

npx add-skill https://github.com/JosiahSiegel/claude-plugin-marketplace/blob/main/plugins/ffmpeg-master/skills/ffmpeg-modal-containers/SKILL.md -a claude-code --skill ffmpeg-modal-containers

Installation paths:

Claude
.claude/skills/ffmpeg-modal-containers/

Instructions

## Quick Reference

| Container Type | Image Setup | GPU | Use Case |
|---------------|-------------|-----|----------|
| CPU (debian_slim) | `.apt_install("ffmpeg")` | No | Batch processing, I/O-bound tasks |
| GPU (debian_slim) | `.apt_install("ffmpeg").pip_install("torch")` | Yes | ML inference, not NVENC |
| GPU (CUDA image) | `from_registry("nvidia/cuda:...")` | Yes | Full CUDA toolkit, NVENC possible |
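
The three image flavors in the table map onto a few lines of Modal's image-builder API. A minimal sketch (assuming a recent `modal` SDK; the CUDA image tag and Python version are illustrative, not prescribed by this skill):

```python
import modal

# CPU container: Debian slim plus FFmpeg from apt
cpu_image = modal.Image.debian_slim().apt_install("ffmpeg")

# GPU container for ML inference: same base plus torch
gpu_image = modal.Image.debian_slim().apt_install("ffmpeg").pip_install("torch")

# Full CUDA toolkit base, for when you need nvcc or a custom NVENC-enabled FFmpeg build
cuda_image = (
    modal.Image.from_registry("nvidia/cuda:12.4.0-devel-ubuntu22.04", add_python="3.11")
    .apt_install("ffmpeg")
)
```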

| GPU Type | Price/Hour | NVENC | Best For |
|----------|-----------|-------|----------|
| T4 | ~$0.59 | Yes (Turing) | Inference + encoding |
| A10G | ~$1.10 | Yes (Ampere) | 4K encoding, ML |
| L40S | ~$1.95 | Yes (Ada) | Heavy ML + video |
| H100 | ~$4.25 | Yes (Hopper) | Training, overkill for video |
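
The GPU type is selected with the `gpu=` argument on the function decorator, using the same names as the table. A hedged sketch (the app name is illustrative) that also checks whether the container's apt-provided FFmpeg build actually exposes NVENC encoders, rather than assuming it does:

```python
import subprocess
import modal

app = modal.App("gpu-nvenc-check")  # illustrative app name
image = modal.Image.debian_slim().apt_install("ffmpeg")

@app.function(image=image, gpu="T4")  # swap for "A10G", "L40S", or "H100" per the table above
def list_nvenc_encoders() -> str:
    # List encoders whose name contains "nvenc"; an empty result means this FFmpeg
    # build (or the container's driver setup) cannot do hardware NVENC encoding.
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"], capture_output=True, text=True
    ).stdout
    return "\n".join(line for line in out.splitlines() if "nvenc" in line)
```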

## When to Use This Skill

Use for **serverless video processing**:
- Batch transcoding that needs to scale to hundreds of containers
- Parallel video processing with Modal's map/starmap
- GPU-accelerated encoding (with limitations on NVENC)
- Cost-effective burst processing (pay only for execution time)
- Integration with ML models (Whisper, video analysis)

**Key decision**: Modal excels at parallel CPU workloads and ML inference on GPU. For pure hardware NVENC encoding, verify GPU capabilities first.
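
The parallel pattern referred to above is Modal's `Function.map`: define one transcode function and fan a list of inputs across containers. A minimal sketch with placeholder URLs and a temporary output path; a real pipeline would persist results to a `modal.Volume` or object storage:

```python
import os
import subprocess
import modal

app = modal.App("ffmpeg-batch")  # illustrative app name
image = modal.Image.debian_slim().apt_install("ffmpeg")

@app.function(image=image, cpu=2, timeout=600)
def transcode(url: str) -> str:
    # FFmpeg reads the source over HTTPS, scales to 720p, and encodes with libx264.
    subprocess.run(
        ["ffmpeg", "-y", "-i", url, "-vf", "scale=-2:720",
         "-c:v", "libx264", "-preset", "fast", "/tmp/out.mp4"],
        check=True,
    )
    return f"{url}: {os.path.getsize('/tmp/out.mp4')} bytes"

@app.local_entrypoint()
def main():
    urls = [f"https://example.com/video_{i}.mp4" for i in range(100)]  # placeholder inputs
    # .map() fans the inputs out across containers and yields results as they complete
    for result in transcode.map(urls):
        print(result)
```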

---

# FFmpeg on Modal.com (2025)

Complete guide to running FFmpeg on Modal's serverless Python platform with CPU and GPU containers.

## Overview

Modal is a serverless platform for running Python code in the cloud with:
- **Sub-second cold starts** - Containers spin up in milliseconds
- **Elastic GPU capacity** - Access T4, A10G, L40S, H100 GPUs
- **Parallel processing** - Scale to thousands of containers instantly
- **Pay-per-use** - Billed by CPU cycle, not idle time

### Modal vs Traditional Cloud

| Feature | Modal | Traditional VMs |
|---------|-------|-----------------|
| Cold start | <1 second | Minutes |
| Scaling | Automatic to 1000s | Manual setup |
| Billing | Per execution | Per hour |
| GPU access | `gpu="any"` decorator | Complex provisioning |
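
The `gpu="any"` decorator mentioned in the table is a one-line request; Modal handles provisioning and teardown. A small sketch under the same assumptions as above (illustrative app name, invoked with `modal run <file>.py`):

```python
import subprocess
import modal

app = modal.App("gpu-any-demo")  # illustrative app name
image = modal.Image.debian_slim().apt_install("ffmpeg")

@app.function(image=image, gpu="any")  # take whichever GPU type is available
def show_gpu() -> str:
    # nvidia-smi -L prints the attached card, e.g. "GPU 0: Tesla T4 ..."
    return subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout

@app.local_entrypoint()
def main():
    print(show_gpu.remote())  # runs in the remote container, billed only for execution time
```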
