Google Cloud Platform configuration templates for BigQuery ML and Vertex AI training with authentication setup, GPU/TPU configs, and cost estimation tools. Use when setting up GCP ML training, configuring BigQuery ML models, deploying Vertex AI training jobs, estimating GCP costs, configuring cloud authentication, selecting GPUs/TPUs for training, or when user mentions BigQuery ML, Vertex AI, GCP training, cloud ML setup, TPU training, or Google Cloud costs.
February 1, 2026
```
npx add-skill https://github.com/vanman2024/ai-dev-marketplace/blob/main/plugins/ml-training/skills/google-cloud-configs/SKILL.md -a claude-code --skill google-cloud-configs
```

Installation paths:
`.claude/skills/google-cloud-configs/`

Use when:

- Setting up BigQuery ML for SQL-based machine learning
- Configuring Vertex AI custom training jobs
- Setting up GCP authentication for ML workflows
- Selecting appropriate GPU/TPU configurations
- Estimating costs for GCP ML training
- Deploying models to Vertex AI endpoints
- Configuring distributed training on GCP
- Optimizing cost vs. performance for cloud ML

## Platform Overview

### BigQuery ML

**What it is**: SQL-based machine learning directly in BigQuery

**Best for**:
- Quick ML prototypes using existing data warehouse data
- Classification, regression, and forecasting on structured data
- Users familiar with SQL but not Python/ML frameworks
- Large-scale batch predictions

**Available Models**:
- Linear/Logistic Regression
- XGBoost (BOOSTED_TREE)
- Deep Neural Networks (DNN)
- AutoML Tables
- TensorFlow/PyTorch imported models

**Pricing**:
- Based on data processed (same as BigQuery queries)
- $5 per TB processed for analysis
- AutoML: $19.32/hour for training

### Vertex AI Training

**What it is**: Fully managed ML training platform

**Best for**:
- Custom PyTorch/TensorFlow training
- Large-scale distributed training
- GPU/TPU-accelerated workloads
- Production ML pipelines

**Available Compute**:
- **CPUs**: n1-standard, n1-highmem, n1-highcpu
- **GPUs**: NVIDIA T4, P4, V100, P100, A100, L4
- **TPUs**: v2, v3, v4, v5e (8 cores to 512 cores)

**Pricing**:
- CPU: $0.05-0.30/hour depending on machine type
- GPU T4: $0.35/hour
- GPU A100: $3.67/hour (40GB) or $4.95/hour (80GB)
- TPU v3: $8.00/hour (8 cores)
- TPU v4: $11.00/hour (8 cores)

## GPU/TPU Selection Guide

### GPU Selection (Vertex AI)

**T4 (16GB VRAM)**:
- Use case: Inference, light training, small models
- Cost: $0.35/hour
- Good for: BERT-base, small CNNs, inference serving

**V100 (16GB VRAM)**:
- Use case: Mid-size training, mixed-precision training
- Cost: $2.48/hour
- Good for: ResNet training, medium transformers

**A100 (40GB/80GB VRAM)**:
- Use case:
Large model training, distributed training
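The pricing figures above can be turned into a quick back-of-the-envelope estimator. The sketch below is illustrative, not an official tool: the function names and rate table are made up for this example, and the hourly rates are the ones quoted in this guide (on-demand pricing; actual rates vary by region and change over time).

```python
# Rough cost estimator using the rates quoted in this guide.
# Rates are on-demand figures; check current GCP pricing before relying on them.

GPU_HOURLY_USD = {
    "T4": 0.35,
    "V100": 2.48,
    "A100-40GB": 3.67,
    "A100-80GB": 4.95,
}

TPU_HOURLY_USD = {
    "v3-8": 8.00,   # TPU v3, 8 cores
    "v4-8": 11.00,  # TPU v4, 8 cores
}

BIGQUERY_USD_PER_TB = 5.00          # on-demand analysis pricing
AUTOML_TRAINING_USD_PER_HOUR = 19.32


def vertex_training_cost(accelerator: str, hours: float, count: int = 1) -> float:
    """Estimate accelerator cost for a Vertex AI custom training job."""
    rate = {**GPU_HOURLY_USD, **TPU_HOURLY_USD}[accelerator]
    return round(rate * hours * count, 2)


def bigquery_ml_cost(tb_processed: float) -> float:
    """Estimate BigQuery ML query cost for a statement scanning tb_processed TB."""
    return round(tb_processed * BIGQUERY_USD_PER_TB, 2)


# Example: 12 hours on one A100 40GB vs. a CREATE MODEL statement
# that scans 2.5 TB of warehouse data.
print(vertex_training_cost("A100-40GB", 12))  # 44.04
print(bigquery_ml_cost(2.5))                  # 12.5
```

Note that Vertex AI also bills for the host machine (e.g. n1-standard CPUs at $0.05-0.30/hour), so the accelerator figure is a lower bound on total job cost.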