# API Documentation

Everything you need to query your CSV-backed APIs.
## Authentication

Include your API key as a Bearer token in the `Authorization` header, or pass it as a query parameter.

### Key scopes

Every key has a scope and a visible prefix:

- `pk_…` — **Public**. Read-only. Safe to embed in frontends and static sites. Available on every plan, including Free.
- `sk_…` — **Private**. Read & write. Treat it like a password; never expose it client-side. Required for the create / update / delete / bulk endpoints. Paid plans only — the Free plan is read-only.

### Header (recommended)

```bash
curl -H "Authorization: Bearer {api_key}" \
  "https://csv-api.com/api/v1/datasets/{public_id}/records"
```

### Query parameter

```bash
curl "https://csv-api.com/api/v1/datasets/{public_id}/records?api_key={api_key}"
```
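For non-curl clients, the same two auth styles can be sketched in plain Python. This is a minimal illustration using only the standard library; the key value is a placeholder, and the URL template mirrors the examples above.

```python
from urllib.parse import urlencode

API_KEY = "pk_example123"  # placeholder key, not a real credential
BASE = "https://csv-api.com/api/v1/datasets/{public_id}/records"

# Option 1 (recommended): Bearer token in the Authorization header.
headers = {"Authorization": f"Bearer {API_KEY}"}

# Option 2: api_key as a query parameter.
url_with_key = BASE + "?" + urlencode({"api_key": API_KEY})

print(headers["Authorization"])
print(url_with_key)
```

Prefer the header form wherever you control the HTTP client, since query strings tend to end up in access logs.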
## Endpoints
| Method | Path | Description |
|---|---|---|
| GET | /api/v1/datasets/:public_id/records | List records with filtering, sorting, and pagination |
| GET | /api/v1/datasets/:public_id/records/:id | Get a single record by ID |
| POST | /api/v1/datasets/:public_id/records | Create a record (sk_ key) |
| PATCH | /api/v1/datasets/:public_id/records/:id | Update a record (sk_ key) |
| DELETE | /api/v1/datasets/:public_id/records/:id | Delete a record (sk_ key) |
| POST | /api/v1/datasets/:public_id/records/bulk | Bulk insert / upsert up to 1,000 rows (sk_ key) |
| GET | /api/v1/datasets/:public_id/records/aggregate | Count, sum, avg, min, max with optional group_by |
All requests require authentication via an `Authorization: Bearer <api_key>` header or an `?api_key=<api_key>` query parameter.
## Reading Records

List rows or fetch a single row by `id`. Both endpoints accept either a `pk_` or `sk_` key.

### List records

```bash
curl -H "Authorization: Bearer {api_key}" \
  "https://csv-api.com/api/v1/datasets/{public_id}/records?per_page=25"
```

```json
{
  "data": [
    { "id": 1, "name": "Ada", "city": "Portland" },
    { "id": 2, "name": "Grace", "city": "Seattle" }
  ],
  "meta": { "total": 2, "page": 1, "per_page": 25, "total_pages": 1 }
}
```

### Get a single record

```bash
curl -H "Authorization: Bearer {api_key}" \
  "https://csv-api.com/api/v1/datasets/{public_id}/records/42"
```

```json
{ "data": { "id": 42, "name": "Ada", "city": "Portland" } }
```
## Filtering

Filter records using query parameters.
| Parameter | Description | Example |
|---|---|---|
| filter[col]=val | Exact match | ?filter[city]=Portland |
| filter[col][gt]=val | Greater than | ?filter[age][gt]=21 |
| filter[col][gte]=val | Greater than or equal | ?filter[price][gte]=10 |
| filter[col][lt]=val | Less than | ?filter[score][lt]=100 |
| filter[col][lte]=val | Less than or equal | ?filter[weight][lte]=50 |
| filter[col][like]=val | Case-insensitive contains | ?filter[name][like]=john |
| filter[col][wildcard]=val | Wildcard match (use * as wildcard) | ?filter[name][wildcard]=Al* |
| filter[col][ne]=val | Not equal | ?filter[status][ne]=inactive |
### Runnable curl example

```bash
curl -H "Authorization: Bearer {api_key}" \
  "https://csv-api.com/api/v1/datasets/{public_id}/records?filter%5Bcity%5D=Portland&filter%5Bage%5D%5Bgte%5D=21"
```

Note: `[` and `]` are encoded as `%5B` / `%5D` so curl doesn't interpret them as glob ranges. (Alternatively, pass `-g` to disable curl's URL globbing.)
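If you build the query string programmatically, the percent-encoding is handled for you. A minimal sketch using Python's standard library, with the same filters as the curl example above:

```python
from urllib.parse import urlencode

# The bracketed operator syntax goes in the parameter *name*;
# urlencode percent-encodes [ and ] automatically.
params = {
    "filter[city]": "Portland",    # exact match
    "filter[age][gte]": "21",      # greater than or equal
}
query = urlencode(params)
print(query)
```

The result matches the pre-encoded string in the curl example, so no manual `%5B`/`%5D` escaping (or `-g` flag) is needed.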
## Sorting

Sort results by one or more columns. Prefix a column with `-` for descending order; separate multiple columns with commas.

- Ascending: `?sort=name`
- Descending: `?sort=-created_date`
- Multiple columns: `?sort=city,-age`
## Pagination

Results are paginated. Default: 25 per page; maximum: 100.

```
?page=2&per_page=50
```

The response includes a `meta` object:

```json
{
  "data": [...],
  "meta": {
    "total": 1000,
    "page": 2,
    "per_page": 50,
    "total_pages": 20
  }
}
```
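To collect every row of a large dataset, a client can follow `meta.total_pages`. A hedged sketch: `fetch_page` here is a stand-in for a real HTTP call and simply must return the documented `{"data": [...], "meta": {...}}` shape.

```python
def fetch_all(fetch_page, per_page=100):
    """Collect every record by walking pages until meta.total_pages.

    fetch_page(page, per_page) is a placeholder for the actual HTTP
    request; 100 is the documented per_page maximum.
    """
    records = []
    page = 1
    while True:
        body = fetch_page(page, per_page)
        records.extend(body["data"])
        if page >= body["meta"]["total_pages"]:
            return records
        page += 1

# Demonstrate with a fake 250-row dataset (no network involved).
def fake_fetch(page, per_page):
    total = 250
    start = (page - 1) * per_page
    rows = [{"id": i} for i in range(start, min(start + per_page, total))]
    return {"data": rows,
            "meta": {"total": total, "page": page, "per_page": per_page,
                     "total_pages": -(-total // per_page)}}  # ceil division

all_rows = fetch_all(fake_fetch)
print(len(all_rows))  # 250
```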
## Field Selection

Return only specific columns to reduce payload size.

```
?fields=name,email,age
```
## Rate Limits & Plan Limits
| Plan | Requests/hr | Shared Pages |
|---|---|---|
| Free | 100 | 1 |
| Starter | 1,000 | 5 |
| Pro | 5,000 | 50 |
| Scale | 25,000 | Unlimited |
When the rate limit is exceeded, the API returns `429 Too Many Requests`.
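Clients should back off and retry when they see a 429. A minimal sketch of exponential backoff; `send` is a stand-in for the actual HTTP call, and the retry counts and delays are illustrative, not prescribed by the API.

```python
import time

def with_backoff(send, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry a request on 429, doubling the delay after each attempt.

    send() stands in for the real HTTP call and returns a status code.
    """
    for attempt in range(max_retries + 1):
        status = send()
        if status != 429:
            return status
        if attempt < max_retries:
            sleep(base_delay * (2 ** attempt))
    return 429

# Simulate two rate-limited responses followed by success,
# capturing the delays instead of actually sleeping.
responses = iter([429, 429, 200])
delays = []
status = with_backoff(lambda: next(responses), sleep=delays.append)
print(status, delays)  # 200 [1.0, 2.0]
```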
## Writing Records

Create, update, and delete individual rows over the API. Write endpoints require a private (`sk_`) key — using a public key returns `403 Forbidden`.

Send columns by their display names (the same names you see in the dataset table). Unknown columns are silently ignored. Type validation runs on the database side — invalid values return `422`.
### Create

```bash
curl -X POST -H "Authorization: Bearer {api_key}" \
  -H "Content-Type: application/json" \
  -d '{"record": {"name": "Ada", "city": "Portland"}}' \
  "https://csv-api.com/api/v1/datasets/{public_id}/records"
```

### Update

```bash
curl -X PATCH -H "Authorization: Bearer {api_key}" \
  -H "Content-Type: application/json" \
  -d '{"record": {"city": "Seattle"}}' \
  "https://csv-api.com/api/v1/datasets/{public_id}/records/42"
```

### Delete

```bash
curl -X DELETE -H "Authorization: Bearer {api_key}" \
  "https://csv-api.com/api/v1/datasets/{public_id}/records/42"
```
## Bulk Writes & Upserts

Write up to 1,000 rows per request. Choose how to handle conflicts on the columns you designate as keys — useful when you want idempotent imports without re-uploading the whole file.
| on_conflict | Behavior |
|---|---|
| error | Default. Fail on duplicate keys. |
| ignore | Skip rows that conflict on key_columns. |
| update | Update existing rows that match key_columns; insert the rest. |
```bash
curl -X POST -H "Authorization: Bearer {api_key}" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {"email": "[email protected]", "name": "Ada"},
      {"email": "[email protected]", "name": "Grace"}
    ],
    "on_conflict": "update",
    "key_columns": ["email"]
  }' \
  "https://csv-api.com/api/v1/datasets/{public_id}/records/bulk"
```

Response: `{ "data": { "inserted": 1, "updated": 1 } }`
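Because the endpoint caps each request at 1,000 rows, larger imports have to be split client-side. A minimal sketch of chunking a row list into request bodies; the function name is illustrative, not part of any SDK.

```python
import json

MAX_BULK_ROWS = 1000  # documented per-request limit

def bulk_payloads(records, key_columns, on_conflict="update"):
    """Yield JSON bodies for the bulk endpoint, 1,000 rows at a time."""
    for start in range(0, len(records), MAX_BULK_ROWS):
        yield json.dumps({
            "records": records[start:start + MAX_BULK_ROWS],
            "on_conflict": on_conflict,
            "key_columns": key_columns,
        })

# 2,500 rows split into three requests: 1000 + 1000 + 500.
rows = [{"email": f"user{i}@example.com"} for i in range(2500)]
payloads = list(bulk_payloads(rows, ["email"]))
print(len(payloads))  # 3
```

With `on_conflict: "update"` and a stable key column, re-running the same import is idempotent, so a failed run can simply be retried from the start.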
## Aggregation

Compute counts, sums, averages, and more without pulling rows down. Combine with the same `filter[…]` syntax used by the list endpoint.
| Parameter | Description |
|---|---|
| metric | Comma-separated list. Up to 5 metrics. Use count, sum:col, avg:col, min:col, max:col. |
| group_by | Optional. Up to 3 columns, comma-separated. |
| filter[col][op] | Same operators as the list endpoint. |
```bash
curl -H "Authorization: Bearer {api_key}" \
  "https://csv-api.com/api/v1/datasets/{public_id}/records/aggregate?metric=count,avg:price&group_by=city"
```
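A small helper can assemble the aggregate query string and enforce the documented caps (5 metrics, 3 `group_by` columns). A sketch under those assumptions; for brevity the `filters` argument only covers exact-match filters, though the endpoint accepts the full operator syntax.

```python
from urllib.parse import urlencode

def aggregate_query(metrics, group_by=None, filters=None):
    """Build the query string for /records/aggregate.

    metrics: e.g. ["count", "avg:price"]; group_by: column names;
    filters: {column: value} exact-match filters only (illustrative).
    """
    if len(metrics) > 5:
        raise ValueError("at most 5 metrics per request")
    params = {"metric": ",".join(metrics)}
    if group_by:
        if len(group_by) > 3:
            raise ValueError("at most 3 group_by columns")
        params["group_by"] = ",".join(group_by)
    for column, value in (filters or {}).items():
        params[f"filter[{column}]"] = value
    return urlencode(params)

q = aggregate_query(["count", "avg:price"], group_by=["city"],
                    filters={"status": "active"})
print(q)
```

Note that `urlencode` percent-encodes the commas and colons inside `metric`; the API's query parser should decode them back, just as it decodes the `%5B`/`%5D` brackets in filter names.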
## Append & Upsert Imports

Add new rows to an existing dataset by uploading another CSV or Excel file — no need to re-upload everything from scratch. Available on paid plans from the dataset detail page.

- Append — every row in the upload is added as a new row.
- Upsert — pick one or more `key_columns`; rows that match an existing key are updated, the rest are inserted.
Imports run as background jobs and complete asynchronously.
## Shared Pages

Create public, read-only pages for any dataset. Each shared page gets a unique URL that anyone can access without an API key.

You can configure static filters on shared pages to show a specific view of your data. For example, only show rows where `status = active` or `city = Portland`. Static filters are always applied before any visitor search or sort.
Shared page limits vary by plan: Free (1), Starter (5), Pro (50), Scale (unlimited). Configure shared pages and their filters from the dataset detail page.