Create, configure, and deploy Hugging Face Spaces for showcasing ML models. Supports Gradio, Streamlit, and Docker SDKs with templates for common use cases like chat interfaces, image generation, and model comparisons.
View on GitHub: `skills/hugging-face-space-deployer/SKILL.md`
February 1, 2026
```
npx add-skill https://github.com/GhostScientist/skills/blob/main/skills/hugging-face-space-deployer/SKILL.md -a claude-code --skill hugging-face-space-deployer
```

Installation paths:
- `.claude/skills/hugging-face-space-deployer/`

# Hugging Face Space Deployer

A skill for AI engineers to create, configure, and deploy interactive ML demos on Hugging Face Spaces.

## CRITICAL: Pre-Deployment Checklist

**Before writing ANY code, gather this information about the model:**

### 1. Check Model Type (LoRA Adapter vs Full Model)

**Use the HF MCP tool to inspect the model files:**

```
hf-skills - Hub Repo Details (repo_ids: ["username/model"], repo_type: "model")
```

**Look for these indicators:**

| Files Present | Model Type | Action Required |
|---------------|------------|-----------------|
| `model.safetensors` or `pytorch_model.bin` | Full model | Load directly with `AutoModelForCausalLM` |
| `adapter_model.safetensors` + `adapter_config.json` | LoRA/PEFT adapter | Must load the base model first, then apply the adapter with `peft` |
| Only config files, no weights | Broken/incomplete | Ask the user to verify |

**If `adapter_config.json` exists, check its `base_model_name_or_path` to identify the base model.**

### 2. Check Inference API Availability

Visit the model page on the HF Hub and look for the "Inference Providers" widget on the right side.

**Indicators that a model HAS the Inference API:**

- Inference widget visible on the model page
- Model from a known provider: `meta-llama`, `mistralai`, `HuggingFaceH4`, `google`, `stabilityai`, `Qwen`
- High download count (>10,000) with a standard architecture

**Indicators that a model DOES NOT have the Inference API:**

- Personal namespace (e.g., `GhostScientist/my-model`)
- LoRA/PEFT adapter (adapters never have a direct Inference API)
- Missing `pipeline_tag` in the model metadata
- No inference widget on the model page

### 3. Check Model Metadata

- Ensure `pipeline_tag` is set (e.g., `text-generation`)
- Add the `conversational` tag for chat models

### 4. Determine Hardware Needs

| Model Size | Recommended Hardware |
|------------|---------------------|
| < 3B parameters | ZeroGPU (free) or CPU |
| 3B - 7B parameters | ZeroGPU or T4 |
| > 7B parameters | A10G or A100 |

### 5. Ask User If
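The file-based check in step 1 of the checklist can be sketched as a small helper. This is illustrative only: `classify_model_repo` and its return labels are invented names, not part of the skill or of any Hugging Face library.

```python
def classify_model_repo(files):
    """Classify a model repo from its file list, mirroring the step-1 table."""
    names = set(files)
    # Full weights: single-file checkpoints, or sharded safetensors pieces.
    has_full = any(f in names for f in ("model.safetensors", "pytorch_model.bin")) \
        or any(f.startswith("model-") and f.endswith(".safetensors") for f in names)
    # LoRA/PEFT adapters ship adapter weights plus an adapter_config.json.
    has_adapter = "adapter_config.json" in names and "adapter_model.safetensors" in names
    if has_adapter:
        return "lora_adapter"   # load the base model first, then apply with peft
    if has_full:
        return "full_model"     # load directly with AutoModelForCausalLM
    return "incomplete"         # config only, no weights: ask the user to verify
```

In practice the file list could come from `huggingface_hub.HfApi().list_repo_files(repo_id)` rather than the MCP tool, if you are scripting the check.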
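When step 1 identifies a LoRA adapter, the base-model lookup and two-stage load might look like the sketch below. It assumes `transformers` and `peft` are installed; the repo id and function names are hypothetical, and the actual load requires network access.

```python
import json

def resolve_base_model(adapter_config: dict) -> str:
    """Read the base model id from a parsed adapter_config.json (step 1)."""
    base = adapter_config.get("base_model_name_or_path")
    if not base:
        raise ValueError("adapter_config.json has no base_model_name_or_path")
    return base

def load_adapter_model(adapter_id: str, config_path: str = "adapter_config.json"):
    """Sketch: load the base model, then apply the LoRA adapter on top."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    with open(config_path) as f:
        base_id = resolve_base_model(json.load(f))

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(model, adapter_id)  # apply the adapter weights
    return tokenizer, model
```

This is why adapters never have a direct Inference API: serving one requires composing two repos at load time.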
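The indicator lists in step 2 can be folded into a rough heuristic. This is my own composition of the bullets above, not an official API; repo metadata such as `pipeline_tag` and download counts could come from `huggingface_hub.HfApi().model_info(repo_id)`.

```python
def likely_has_inference_api(repo_id, is_adapter, pipeline_tag, downloads):
    """Heuristic from step 2: adapters and untagged models are out;
    known-provider namespaces or popular models are likely in."""
    known_providers = {"meta-llama", "mistralai", "HuggingFaceH4",
                       "google", "stabilityai", "Qwen"}
    if is_adapter or pipeline_tag is None:
        return False  # adapters never have a direct Inference API; tag is required
    namespace = repo_id.split("/")[0]
    return namespace in known_providers or downloads > 10_000
```

Treat the result as a hint to verify on the model page, not a guarantee: the definitive signal is the "Inference Providers" widget itself.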
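The hardware table in step 4 reduces to a one-line mapping; `recommend_hardware` is an illustrative name and the thresholds are taken directly from the table.

```python
def recommend_hardware(params_billion: float) -> str:
    """Map a parameter count (in billions) to the step-4 hardware tiers."""
    if params_billion < 3:
        return "ZeroGPU (free) or CPU"
    if params_billion <= 7:
        return "ZeroGPU or T4"
    return "A10G or A100"
```

For example, a 1.1B model would land on the free tier, while a 13B model needs an A10G or A100.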