Master high-performance rendering for large datasets with Datashader. Use this skill when working with datasets of 100M+ points, optimizing visualization performance, or implementing efficient rendering strategies with rasterization and colormapping techniques.
# Advanced Rendering Skill
## Overview
Master high-performance rendering of large datasets with Datashader and related optimization techniques. This skill covers handling 100M+ point datasets, performance tuning, and efficient visualization strategies.
## Dependencies
- datashader >= 0.15.0
- colorcet >= 3.1.0
- holoviews >= 1.18.0
- pandas >= 1.0.0
- numpy >= 1.15.0
## Core Capabilities
### 1. Datashader Fundamentals
Datashader is designed for rasterizing large datasets:
```python
import datashader as ds
import datashader.transfer_functions as tf
import pandas as pd

# Load a large dataset (Datashader can handle 100M+ points)
df = pd.read_csv('large_dataset.csv')  # millions or billions of rows

# Create a Datashader canvas at the desired output resolution
canvas = ds.Canvas(plot_width=800, plot_height=600)

# Aggregate points into a fixed-size grid (counts per pixel by default)
agg = canvas.points(df, 'x', 'y')

# Convert the aggregate to a shaded image
img = tf.shade(agg)
```
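For quick experimentation without a multi-gigabyte CSV, the same pipeline can be run end to end on synthetic data. The sketch below is illustrative: the 10M-point random DataFrame, the column names, and the output filename are assumptions, not part of the skill's dataset.
```python
import numpy as np
import pandas as pd
import datashader as ds
import datashader.transfer_functions as tf

# Synthetic stand-in for a large dataset: 10M normally distributed points
n = 10_000_000
df = pd.DataFrame({
    'x': np.random.standard_normal(n),
    'y': np.random.standard_normal(n),
})

# Aggregate to an 800x600 grid, then shade on a log scale so both
# sparse and dense regions stay visible
canvas = ds.Canvas(plot_width=800, plot_height=600)
agg = canvas.points(df, 'x', 'y')
img = tf.shade(agg, how='log')
img.to_pil().save('points.png')  # export the rendered raster
```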
### 2. Efficient Point Rendering
```python
import datashader as ds
import holoviews as hv
from holoviews.operation.datashader import datashade, aggregate, shade, rasterize

hv.extension('bokeh')

# Quick datashading with HoloViews (df has 'x' and 'y' columns as above)
scatter = hv.Scatter(df, 'x', 'y')
shaded = datashade(scatter)

# Separate aggregation and shading steps for finer control
agg = aggregate(scatter, width=800, height=600)
colored = shade(agg, cmap='viridis')

# Control rasterization explicitly
rasterized = rasterize(
    scatter,
    aggregator=ds.count(),
    pixel_ratio=2,
)
```
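When zoomed out, isolated points rendered by `datashade` can shrink to single, hard-to-see pixels. The `dynspread` operation from the same module grows isolated pixels up to a maximum size until a density threshold is reached. A minimal sketch, assuming `df` holds 'x' and 'y' columns as above:
```python
import holoviews as hv
from holoviews.operation.datashader import datashade, dynspread

hv.extension('bokeh')

points = hv.Points(df, ['x', 'y'])

# datashade returns a DynamicMap that re-aggregates on zoom and pan;
# dynspread enlarges isolated pixels up to max_px until the threshold is met
shaded = datashade(points)
spread = dynspread(shaded, max_px=4, threshold=0.5)
```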
### 3. Color Mapping and Aggregation
```python
import datashader as ds
import datashader.transfer_functions as tf
from colorcet import fire

canvas = ds.Canvas(plot_width=800, plot_height=600)

# Count aggregation (density heatmap)
agg = canvas.points(df, 'x', 'y', agg=ds.count())

# Weighted (sum) aggregation over a value column
agg = canvas.points(df, 'x', 'y', agg=ds.sum('value'))

# Mean aggregation over a value column
agg = canvas.points(df, 'x', 'y', agg=ds.mean('value'))

# Custom colormapping with a colorcet palette, plus pixel spreading
shaded = tf.shade(agg, cmap=fire)
shaded_with_spread = tf.spread(shaded, px=2)
```
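Categorical columns can also be aggregated per category and colored with an explicit color key via `ds.count_cat` and the `color_key` argument of `tf.shade`. A short sketch; the column name 'category' and the example colors are illustrative assumptions:
```python
import datashader as ds
import datashader.transfer_functions as tf

# count_cat requires a pandas categorical dtype
df['category'] = df['category'].astype('category')

canvas = ds.Canvas(plot_width=800, plot_height=600)
agg = canvas.points(df, 'x', 'y', agg=ds.count_cat('category'))

# One color per category value (keys must match the categories in the data)
color_key = {'a': '#e41a1c', 'b': '#377eb8', 'c': '#4daf4a'}
img = tf.shade(agg, color_key=color_key)
```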
### 4. Image Compositing
```python
# Combine multiple datasets
canvas = ds.Canvas(x_range=