v2.0 · Open-source · MIT
Compress LLM tokens by 40–70%
Type preservation, value dictionaries, and 7+ integrations. One import change. Zero model-side changes.
- 40–70% token savings
- 98.4% LLM accuracy
- 1-line code change
- 7+ integrations
Before (JSON):

```json
[
  {
    "employeeId": "EMP-001",
    "firstName": "Alice",
    "department": "Engineering",
    "salary": 125000
  },
  {
    "employeeId": "EMP-002",
    "firstName": "Bob",
    "department": "Marketing",
    "salary": 95000
  },
  {
    "employeeId": "EMP-003",
    "firstName": "Carol",
    "department": "Engineering",
    "salary": 130000
  }
]
```

After (SLIM):

```
@keys{employeeId:_0,firstName:_1,department:_2}
[3]{department,employeeId,firstName,salary}:
Engineering,EMP-001,Alice,125000
Marketing,EMP-002,Bob,95000
Engineering,EMP-003,Carol,130000
```

Live Playground
Paste JSON or pick a sample to see SLIM compression in real time. Toggle @types, @dict, and deterministic mode to try the v2.0 features.
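The tabular encoding shown in the example above (a uniform array of objects collapses to one header plus CSV-like rows) can be sketched roughly as follows. This is a hypothetical illustration, not the actual SLIM encoder: `slimEncode` is an assumed name, and the sketch skips the `@keys` dictionary, `@types` annotations, and escaping.

```typescript
// Minimal sketch of SLIM-style tabular encoding (illustrative only, not the
// real @tokenslim encoder). Uniform object arrays become a single header
// "[count]{sorted,column,names}:" followed by one comma-joined row per item.
type Row = Record<string, string | number>;

function slimEncode(rows: Row[]): string {
  const cols = Object.keys(rows[0]).sort(); // deterministic column order
  const header = `[${rows.length}]{${cols.join(",")}}:`;
  const body = rows.map((r) => cols.map((c) => String(r[c])).join(","));
  return [header, ...body].join("\n");
}

const employees: Row[] = [
  { employeeId: "EMP-001", firstName: "Alice", department: "Engineering", salary: 125000 },
  { employeeId: "EMP-002", firstName: "Bob", department: "Marketing", salary: 95000 },
];

console.log(slimEncode(employees));
// [2]{department,employeeId,firstName,salary}:
// Engineering,EMP-001,Alice,125000
// Marketing,EMP-002,Bob,95000
```

The key names are emitted once instead of being repeated per object, which is where most of the savings on flat arrays comes from.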
Cost Calculator
Estimate how much you'll save per month with TokenSlim.
Token savings (adjustable): 10% – 70%

- Monthly cost (before): $125.00
- Monthly cost (after): $75.00
- Monthly savings: $50.00
- Yearly savings: $600.00
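The figures above follow from simple arithmetic. A minimal sketch, assuming the savings rate applies linearly to your monthly token spend ($125/month at a 40% rate reproduces the numbers shown; `estimate` is an illustrative helper, not part of the TokenSlim API):

```typescript
// Sketch of the cost-calculator math: savings are assumed to scale linearly
// with the compression rate applied to the monthly token spend.
function estimate(monthlyCostUSD: number, savingsRate: number) {
  const monthlySavings = monthlyCostUSD * savingsRate;
  return {
    monthlySavings,
    yearlySavings: monthlySavings * 12,
    monthlyAfter: monthlyCostUSD - monthlySavings,
  };
}

console.log(estimate(125, 0.4));
// → { monthlySavings: 50, yearlySavings: 600, monthlyAfter: 75 }
```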
Benchmark Suite
SLIM vs JSON vs TOON vs YAML vs CSV across real-world datasets.
| Dataset | Shape | JSON (baseline) | SLIM | TOON | YAML | CSV |
|---|---|---|---|---|---|---|
| Medium User List (50) | 8 cols × 50 rows | 2065 tok | 1083 tok (54%) | 1081 tok (54%) | 2015 tok (15%) | 1029 tok (56%) |
| Nested Config | 3 levels deep | 176 tok | 207 tok (0%) | 194 tok (4%) | 206 tok (0%) | N/A |
| API Response (50 orders) | Mixed structure | 3658 tok | 1645 tok (61%) | 1784 tok (57%) | 3881 tok (7%) | N/A |
| Log Entries (40) | 7 cols × 40 rows | 1696 tok | 813 tok (58%) | 853 tok (56%) | 1646 tok (15%) | 812 tok (58%) |
| Product Catalog (30) | 7 cols × 30 rows | 889 tok | 392 tok (61%) | 400 tok (61%) | 881 tok (13%) | 383 tok (62%) |
Token counts are estimates. CSV lacks lossless round-trip for nested data. SLIM provides the best balance of savings and round-trip fidelity.
Run locally:
```
npx @tokenslim/benchmark
```

Use `--json` for CI, and `--format` / `--dataset` for filtering.

Integrate in one minute
Change one import. Keep your existing code.
```
npm install @tokenslim/openai openai
```

Before:

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: JSON.stringify(data) },
  ],
});
```

After:

```typescript
import { TokenSlimOpenAI as OpenAI } from '@tokenslim/openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: JSON.stringify(data) },
  ],
});
```

That's it. All SDK features, types, and error handling work identically.