Install and start using TONL in 30 seconds
npm install tonl
import { TONLDocument } from 'tonl';
// Create from JSON
const doc = TONLDocument.fromJSON({
users: [{ name: 'Alice', age: 30 }]
});
// Query
const result = doc.query('users[*].name');
// Modify
doc.set('users[0].age', 31);
// Save
await doc.save('data.tonl');
encodeTONL(data)
Maximum token savings (38-50%), no type hints
encodeTONL(data, { includeTypes: true })
Schema validation enabled (~32% savings)
encodeTONL(data, { delimiter: '|' })
Use pipe, tab, or semicolon delimiters
encodeSmart(data)
Auto-selects best delimiter and options
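All of the options above produce the same tabular layout for uniform object arrays: a `name[count]{fields}:` header followed by delimited data rows. A minimal self-contained sketch of that layout (`encodeTable` is a hypothetical illustration written for this doc, not the tonl package's API):

```typescript
// Sketch of the tabular layout TONL uses for uniform object arrays.
// `encodeTable` is a hypothetical helper for illustration only.
type Row = Record<string, string | number | boolean | null>;

function encodeTable(name: string, rows: Row[], delimiter = ','): string {
  const fields = Object.keys(rows[0]);                     // shared column set
  const header = `${name}[${rows.length}]{${fields.join(',')}}:`;
  const body = rows.map(r =>
    fields.map(f => String(r[f])).join(delimiter)
  );
  return [header, ...body.map(line => '  ' + line)].join('\n');
}

const out = encodeTable('users', [
  { id: 1, name: 'Alice', role: 'admin' },
  { id: 2, name: 'Bob', role: 'user' },
]);
// out:
// users[2]{id,name,role}:
//   1,Alice,admin
//   2,Bob,user
```

The token savings come from emitting the field names once in the header instead of repeating them per record, which is why the savings grow with row count.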
Need more?
Check out the full getting started guide on GitHub.
Complete TONLDocument API and core functions
Create TONL document from JavaScript object
const doc = TONLDocument.fromJSON({ users: [...] })
Parse TONL text into document
const doc = TONLDocument.fromTONL(tonlText)
Load TONL file from disk
const doc = await TONLDocument.load('data.tonl')
JSONPath-like queries with filters
doc.query('users[?(@.role == "admin")]')
Get single value at path
doc.get('users[0].name') // "Alice"
Check if path exists
doc.has('users[0].email') // true/false
Update value at path
doc.set('users[0].age', 31)
Remove field or array element
doc.delete('user.tempField')
Append to array
doc.push('users', newUser)
Deep merge objects
doc.merge('config', updates)
Export as TONL string
const tonlText = doc.toTONL()
Export as JavaScript object
const obj = doc.toJSON()
Atomic file save with backup
await doc.save('output.tonl')
JSONPath-like query expressions
==, !=, >, <, >=, <=
&&, ||, !
contains, startsWith, endsWith
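A filter such as `users[?(@.role == "admin")]` boils down to applying one of these operators to a field of each element. A self-contained sketch of that evaluation step (`matchFilter` is a hypothetical helper, not tonl's actual query engine):

```typescript
// Sketch: evaluating a `[?(@.field op value)]` filter against array items.
// `matchFilter` is a hypothetical illustration, not the tonl package's API.
type Item = Record<string, unknown>;

function matchFilter(item: Item, field: string, op: string, value: unknown): boolean {
  const v = item[field];
  switch (op) {
    case '==': return v === value;
    case '!=': return v !== value;
    case '>':  return (v as number) > (value as number);
    case '<':  return (v as number) < (value as number);
    case '>=': return (v as number) >= (value as number);
    case '<=': return (v as number) <= (value as number);
    case 'contains':   return String(v).includes(String(value));
    case 'startsWith': return String(v).startsWith(String(value));
    case 'endsWith':   return String(v).endsWith(String(value));
    default: return false;
  }
}

const users = [
  { name: 'Alice', role: 'admin' },
  { name: 'Bob', role: 'user' },
];
// Equivalent of doc.query('users[?(@.role == "admin")]')
const admins = users.filter(u => matchFilter(u, 'role', '==', 'admin'));
// admins → [{ name: 'Alice', role: 'admin' }]
```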
Command-line interface for TONL
tonl encode data.json --smart
tonl decode data.tonl
tonl query users.tonl 'users[*]'
tonl get data.tonl "user.name"
tonl validate --schema schema.tonl
tonl stats data.json
System prompt for teaching LLMs to read TONL data
Copy this into your LLM system prompt when sending TONL-formatted data:
The following data is in TONL format. Parse it as follows:
• Lines with [count]{fields}: are array headers, data rows follow
• Lines with {fields}: are object headers, field: value pairs follow
• Indentation (2 spaces) indicates nesting levels
• Default delimiter is comma unless a #delimiter header specifies otherwise
• Type hints may appear: field:type (e.g., id:u32, name:str)
  → Ignore the :type part, just parse the values
• Value types: unquoted numbers/booleans, quoted strings, null
Examples:

Without types:
users[2]{id,name,role}:
  1,Alice,admin
  2,Bob,user

With types:
users[2]{id:u32,name:str,role:str}:
  1,Alice,admin
  2,Bob,user

Both parse the same - just read the data values. Each example represents: {"users": [{"id":1,"name":"Alice","role":"admin"}, {"id":2,"name":"Bob","role":"user"}]}
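The parsing rules above are mechanical enough to sketch in code. The following is a hypothetical, illustration-only decoder for a single flat array block; the real tonl decoder additionally handles nesting, alternate delimiters, and quoted strings:

```typescript
// Sketch decoder for one flat TONL array block, following the rules above.
// Hypothetical illustration only, not the tonl package's decoder.
function parseArrayBlock(text: string): Record<string, Array<Record<string, unknown>>> {
  const [header, ...rows] = text.trim().split('\n');
  const m = header.match(/^(\w+)\[(\d+)\]\{([^}]*)\}:$/);
  if (!m) throw new Error('not an array header');
  const [, name, , fieldSpec] = m;
  // Strip optional :type hints, e.g. "id:u32" -> "id"
  const fields = fieldSpec.split(',').map(f => f.split(':')[0]);
  const items = rows.map(row => {
    const values = row.trim().split(',');
    const obj: Record<string, unknown> = {};
    fields.forEach((f, i) => {
      const raw = values[i];
      // Unquoted numbers become numbers; everything else stays a string
      obj[f] = /^-?\d+(\.\d+)?$/.test(raw) ? Number(raw) : raw;
    });
    return obj;
  });
  return { [name]: items };
}

const parsed = parseArrayBlock(
  'users[2]{id:u32,name:str,role:str}:\n  1,Alice,admin\n  2,Bob,user'
);
// parsed → { users: [{ id: 1, name: 'Alice', role: 'admin' },
//                    { id: 2, name: 'Bob', role: 'user' }] }
```

Note how the type hints are simply discarded, exactly as the prompt instructs the LLM to do.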
Define and validate data structures with TONL schemas
Schemas let you define data structure, types, and constraints. TONL validates data against schemas and can auto-generate TypeScript types.
@schema v1
@strict true
User: obj
id: u32 required
username: str required min:3 max:20
email: str required pattern:email
age: u32? min:13 max:150
roles: list<str> required
users: list<User> required min:1
import { parseSchema, validateTONL } from 'tonl/schema';
// Load and parse schema
const schema = parseSchema(schemaContent);
// Validate data
const result = validateTONL(data, schema);
if (!result.valid) {
result.errors.forEach(err => {
console.error(`${err.field}: ${err.message}`);
});
}
str, u32, i32, u64, i64, f32, f64, bool
obj, list, list<T>, optional (field?)
min, max, pattern, email, url
min, max, positive, negative, integer
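Constraint checking of this kind can be sketched in a few lines. `checkField` below is a hypothetical illustration of the `required`/`min`/`max` semantics, assuming min/max mean string length for strings and value bounds for numbers; use tonl's `parseSchema`/`validateTONL` for real validation:

```typescript
// Sketch of schema constraint checking (required, min, max).
// `checkField` is a hypothetical helper, not the tonl package's validator.
interface FieldRule {
  required?: boolean;
  min?: number;   // min length for strings, min value for numbers
  max?: number;
}

function checkField(name: string, value: unknown, rule: FieldRule): string[] {
  const errors: string[] = [];
  if (value == null) {
    if (rule.required) errors.push(`${name}: required`);
    return errors;  // optional fields (field?) may be absent
  }
  const size = typeof value === 'string' ? value.length : (value as number);
  if (rule.min !== undefined && size < rule.min) errors.push(`${name}: below min ${rule.min}`);
  if (rule.max !== undefined && size > rule.max) errors.push(`${name}: above max ${rule.max}`);
  return errors;
}

// username: str required min:3 max:20
const errs = checkField('username', 'al', { required: true, min: 3, max: 20 });
// errs → ['username: below min 3']
```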
Process multi-GB files with constant memory usage
Stream processing lets you work with files larger than available RAM. TONL's line-based format is well suited to streaming: process records one at a time while keeping memory usage under 100MB.
import { createEncodeStream } from 'tonl/stream';
import { createReadStream, createWriteStream } from 'fs';
// Stream encode large JSON files
createReadStream('huge.json')
.pipe(createEncodeStream({ smart: true }))
.pipe(createWriteStream('huge.tonl'));
import { streamQuery } from 'tonl/stream';
// Query huge files efficiently
await streamQuery(
'large-dataset.tonl',
'users[?(@.active)]',
(chunk) => {
// Process each matching chunk
console.log(chunk);
}
);
// Memory stays constant ~10MB
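What makes constant memory possible is the line-based framing: a streaming decoder only ever buffers the current partial line, regardless of file size. A self-contained sketch of that chunk-to-line buffering (not the library's actual code):

```typescript
// Sketch: turn an arbitrary sequence of text chunks into complete lines,
// buffering only the current partial line (O(1) memory in the file size).
function* splitLines(chunks: Iterable<string>): Generator<string> {
  let buf = '';
  for (const chunk of chunks) {
    buf += chunk;
    let idx: number;
    while ((idx = buf.indexOf('\n')) !== -1) {
      yield buf.slice(0, idx);      // emit each complete line immediately
      buf = buf.slice(idx + 1);
    }
  }
  if (buf.length > 0) yield buf;    // trailing line without a newline
}

// Chunk boundaries can fall anywhere, even mid-record:
const lines = [...splitLines(['users[2]{id', ',name}:\n  1,Al', 'ice\n  2,Bob\n'])];
// lines → ['users[2]{id,name}:', '  1,Alice', '  2,Bob']
```

A streaming decoder applies the record parser to each emitted line, so peak memory tracks the longest line rather than the whole file.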
Build TONL libraries in any language
95KB of implementation documentation with algorithms, pseudo-code, and test requirements.