Helps write Python code using the everyrow SDK for AI-powered data processing: transforming, deduping, merging, ranking, and screening dataframes with natural language instructions.
Repository: https://github.com/futuresearch/everyrow-sdk

Install the skill:

```bash
npx add-skill https://github.com/futuresearch/everyrow-sdk/blob/main/skills/everyrow-sdk/SKILL.md -a claude-code --skill everyrow-sdk
```

Installation path: `.claude/skills/everyrow-sdk/`

# everyrow SDK
The everyrow SDK provides intelligent data processing utilities powered by AI agents. Use this skill when writing Python code that needs to:
- Rank/score rows based on qualitative criteria
- Deduplicate data using semantic understanding
- Merge tables using AI-powered matching
- Screen/filter rows based on research-intensive criteria
- Run AI agents over dataframe rows
## Installation
```bash
pip install everyrow
```
## Configuration
Before writing any everyrow code, check if `EVERYROW_API_KEY` is set. If not, prompt the user:
> everyrow requires an API key. Do you have one?
> - If yes, paste it here
> - If no, get one at https://everyrow.io/api-key and paste it back
Once the user provides the key, set it:
```bash
export EVERYROW_API_KEY=<their_key>
```
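If you prefer to verify this from code before making any calls, a minimal sketch using only the standard library looks like the following. It assumes the SDK reads the key from the `EVERYROW_API_KEY` environment variable, as the export step above implies; the `ensure_api_key` helper is illustrative, not part of the SDK.

```python
import os

def ensure_api_key() -> str:
    # Hypothetical helper: confirm the key is present before calling everyrow.
    # Assumes the SDK picks up EVERYROW_API_KEY from the environment.
    key = os.environ.get("EVERYROW_API_KEY")
    if not key:
        raise RuntimeError(
            "EVERYROW_API_KEY is not set. Get a key at https://everyrow.io/api-key "
            "and export it before running everyrow code."
        )
    return key
```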
## Results
All operations return a result object. The data is available as a pandas DataFrame in `result.data`:
```python
result = await rank(...)
print(result.data.head()) # pandas DataFrame
```
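Because `result.data` is a plain pandas DataFrame, ordinary pandas operations apply. For example, to persist a result (the filename here is only an illustration):

```python
# result.data is a regular pandas DataFrame, so standard pandas I/O works.
result.data.to_csv("everyrow_result.csv", index=False)
```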
## Operations
For quick one-off operations, sessions are created automatically.
### rank - Score and rank rows
Score rows based on criteria you can't put in a database field:
```python
from everyrow.ops import rank
result = await rank(
    task="Score by likelihood to need data integration solutions",
    input=leads_dataframe,
    field_name="integration_need_score",
)
print(result.data.head())
```
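The scores land in the column named by `field_name`, so you can work with them using ordinary pandas. A short sketch, assuming the example above, to see the highest-scoring leads first:

```python
# Sort by the score column created by rank(); the column name comes from field_name.
top_leads = result.data.sort_values("integration_need_score", ascending=False)
print(top_leads.head(10))
```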
### dedupe - Deduplicate data
Remove duplicates using AI-powered semantic matching. The AI understands that "AbbVie Inc", "Abbvie", and "AbbVie Pharmaceutical" are the same company:
```python
from everyrow.ops import dedupe
result = await dedupe(
    input=crm_data,
    equivalence_relation="Two entries are duplicates if they represent the same legal entity",
)
print(result.data.head())
```
Results include `equivalence_class_id` (groups duplicates), `equivalence_class_name` (human-readable cluster name), and `selected` (the canonical record in each cluster).
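To keep only the canonical record from each cluster, you can filter on `selected` with plain pandas. A minimal sketch, assuming `selected` is a boolean flag on each row as described above:

```python
# Keep one canonical row per duplicate cluster.
# Assumes `selected` is a boolean column marking the canonical record.
canonical = result.data[result.data["selected"]]
print(canonical.head())
```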
### merge - Merge tables with AI