security-patterns (verified)

Security Pattern Detector - Real-time detection of dangerous coding patterns during file edits. Based on Anthropic's official security-guidance hook. Activates proactively when writing code. Warns about command injection, XSS, unsafe deserialization, and dynamic code execution.

Marketplace: specweave (anton-abyzov/specweave)
Plugin: sw (development)
Repository: anton-abyzov/specweave (27 stars)
Source file: plugins/specweave/skills/security-patterns/SKILL.md
Last verified: January 25, 2026

Install with the add-skill CLI:

```bash
npx add-skill https://github.com/anton-abyzov/specweave/blob/main/plugins/specweave/skills/security-patterns/SKILL.md -a claude-code --skill security-patterns
```

Installation path (Claude): .claude/skills/security-patterns/

Instructions

# Security Pattern Detector Skill

## Overview

This skill provides real-time security pattern detection based on Anthropic's official security-guidance plugin. It identifies potentially dangerous coding patterns BEFORE they're committed.
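
To make the detection concrete, the sketch below shows one way such a scanner can work: regular expressions applied to file content before an edit lands. This is an illustrative simplification, not Anthropic's actual rule set; the pattern list and function name are hypothetical.

```python
import re

# Illustrative subset of rules; the real skill's patterns are more thorough.
DANGEROUS_PATTERNS = [
    (re.compile(r"\beval\s*\("), "dynamic code execution via eval()"),
    (re.compile(r"\bos\.system\s*\("), "shell command built from a string"),
    (re.compile(r"\bpickle\.loads?\s*\("), "unsafe deserialization via pickle"),
    (re.compile(r"\.innerHTML\s*="), "DOM-based XSS via innerHTML assignment"),
]

def scan_for_dangerous_patterns(content: str) -> list[str]:
    """Return one warning per dangerous pattern found, with line numbers."""
    warnings = []
    for lineno, line in enumerate(content.splitlines(), start=1):
        for pattern, message in DANGEROUS_PATTERNS:
            if pattern.search(line):
                warnings.append(f"line {lineno}: {message}")
    return warnings
```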

## Detection Categories

### 1. Command Injection Risks

**GitHub Actions Workflow Injection**
```yaml
# DANGEROUS - User input directly in run command
run: echo "${{ github.event.issue.title }}"

# SAFE - Use environment variable
env:
  TITLE: ${{ github.event.issue.title }}
run: echo "$TITLE"
```

**Node.js Child Process Execution**
```typescript
// DANGEROUS - Shell command with user input
exec(`ls ${userInput}`);
spawn('sh', ['-c', userInput]);

// SAFE - Array arguments, no shell
execFile('ls', [sanitizedPath]);
spawn('ls', [sanitizedPath], { shell: false });
```

**Python OS Commands**
```python
# DANGEROUS
os.system(f"grep {user_input} file.txt")
subprocess.call(user_input, shell=True)

# SAFE
subprocess.run(['grep', sanitized_input, 'file.txt'], shell=False)
```
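
The examples above sidestep the shell entirely, which is the preferred fix. When a shell is genuinely required (for instance, to build a pipeline), a common mitigation in Python is to quote every user-controlled token with shlex.quote. A minimal sketch, with hypothetical names:

```python
import shlex
import subprocess

def search_and_sort(user_input: str, path: str) -> str:
    # shlex.quote() turns each user-controlled token into a single,
    # literal shell word, so metacharacters like ; or $() are inert.
    cmd = f"grep {shlex.quote(user_input)} {shlex.quote(path)} | sort"
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout
```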

### 2. Dynamic Code Execution

**JavaScript eval-like Patterns**
```typescript
// DANGEROUS - All of these execute arbitrary code
eval(userInput);
new Function(userInput)();
setTimeout(userInput, 1000);  // When string passed
setInterval(userInput, 1000); // When string passed

// SAFE - Use parsed data, not code
const config = JSON.parse(configString);
```
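
The JSON.parse fix covers data; when user input selects behavior, the usual refactor is an explicit allow-list dispatch table rather than evaluating the input. A sketch of the same principle in Python (names hypothetical):

```python
import operator

# Allow-list: only these named operations can ever execute.
OPERATIONS = {
    "add": operator.add,
    "sub": operator.sub,
    "mul": operator.mul,
}

def apply_operation(name: str, a: float, b: float) -> float:
    # Look the operation up instead of eval(f"{a} {name} {b}").
    op = OPERATIONS.get(name)
    if op is None:
        raise ValueError(f"unsupported operation: {name!r}")
    return op(a, b)
```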

### 3. DOM-based XSS Risks

**React dangerouslySetInnerHTML**
```tsx
// DANGEROUS - Renders arbitrary HTML
<div dangerouslySetInnerHTML={{ __html: userContent }} />

// SAFE - Use proper sanitization
import DOMPurify from 'dompurify';
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(userContent) }} />
```

**Direct DOM Manipulation**
```typescript
// DANGEROUS
element.innerHTML = userInput;
document.write(userInput);

// SAFE
element.textContent = userInput;
element.innerText = userInput;
```

### 4. Unsafe Deserialization

**Python Pickle**
```python
# DANGEROUS - Pickle can execute arbitrary code
import pickle
data = pickle.loads(untrusted_bytes)

# SAFE - Use a data-only format instead
import json
data = json.loads(untrusted_text)
```
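
If pickle truly cannot be avoided, Python's documentation describes a restricted-unpickler pattern: subclass pickle.Unpickler and override find_class to control which globals may be loaded. A minimal sketch that rejects all classes:

```python
import io
import pickle

class NoClassUnpickler(pickle.Unpickler):
    """Refuse to resolve any class, so only primitive data can load."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

def restricted_loads(data: bytes):
    return NoClassUnpickler(io.BytesIO(data)).load()
```

Even so, treating pickle input as trusted-only remains the safer default.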

Validation Details

Checks: Front Matter, Required Fields, Valid Name Format, Valid Description, Has Sections, Allowed Tools. Instruction length: 4579 chars.