API Features · April 8, 2026

csv-api Goes Read/Write: Writes, Bulk Upserts, and Aggregation

A major release: create, update, and delete records over the API, bulk upsert up to 1,000 rows at a time, and run server-side aggregations.

csv-api is now read/write

Up until today, csv-api was great at one thing: turning a spreadsheet into a fast, queryable, read-only API. That's enough for a lot of use cases — static sites, dashboards, prototypes — but it left a clear gap. If you wanted to keep the data in sync with another system, you had to re-upload the file every time.

This release closes that gap. You can now create, update, and delete records over the API, push up to 1,000 rows per request with idempotent upsert semantics, and run server-side aggregations without dragging rows down to the client. csv-api is still as easy to start with as ever — just upload a file — but now it's a credible sync target for the systems that already own your data.

For background on how API key authentication works in csv-api, see Securing Your API: Authentication and API Key Management.

Public and private API keys

Once you have write endpoints, "every key can do anything" stops being a sensible default. So API keys now have scopes, and the prefix tells you which is which at a glance:

  • pk_… public keys are read-only. Safe to drop into a frontend, a static site, or a public repo. They can list, fetch, and aggregate — nothing else.
  • sk_… private keys can read and write. Treat them like a password — only use them server-side.

Plan note: the free plan stays read-only — you can keep creating pk_ keys and querying your data forever, no card required. The write API, bulk upserts, and append/upsert imports all live behind paid plans, where private sk_ keys are unlocked.

Existing keys were migrated to the public scope, so nothing breaks for free accounts. When you're ready to start writing, upgrade and head to your account page to create a new private key.
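Since the prefix encodes the scope, client code can sanity-check a key before using it. A minimal sketch (the helper name is ours, not part of csv-api):

```python
# Hypothetical helper: infer a csv-api key's scope from its prefix.
def key_scope(api_key: str) -> str:
    if api_key.startswith("pk_"):
        return "read-only"   # safe for frontends and public repos
    if api_key.startswith("sk_"):
        return "read-write"  # server-side only, treat like a password
    raise ValueError("not a csv-api key")
```

A guard like this in your deploy pipeline can catch an `sk_` key accidentally headed for frontend config.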

Writing single records

Three new endpoints — POST, PATCH, and DELETE — let you mutate one row at a time. Send columns by their display name, the same names you see in the dashboard.

Create — returns 201 Created

curl -X POST -H "Authorization: Bearer sk_..." \
     -H "Content-Type: application/json" \
     -d '{"record": {"name": "Ada", "city": "Portland"}}' \
     "https://csv-api.com/api/v1/datasets/YOUR_ID/records"
{ "data": { "id": 42, "name": "Ada", "city": "Portland" } }

Update — returns 200 OK

curl -X PATCH -H "Authorization: Bearer sk_..." \
     -H "Content-Type: application/json" \
     -d '{"record": {"city": "Seattle"}}' \
     "https://csv-api.com/api/v1/datasets/YOUR_ID/records/42"
{ "data": { "id": 42, "name": "Ada", "city": "Seattle" } }

Delete — returns 204 No Content

curl -X DELETE -H "Authorization: Bearer sk_..." \
     "https://csv-api.com/api/v1/datasets/YOUR_ID/records/42"

Use a public key on any of these and you'll get a clear 403 with an explanation. Send a column that doesn't exist and it's silently ignored. Send a value that doesn't fit the column type and you get a 422.
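The status codes above are the whole contract, so a client can branch on them directly. A sketch of how a caller might interpret them (the function is illustrative, not part of any SDK):

```python
# Hypothetical sketch: map the write endpoints' status codes,
# as described above, to outcomes a sync job can act on.
def interpret_write_response(status: int) -> str:
    outcomes = {
        201: "created",   # POST succeeded
        200: "updated",   # PATCH succeeded
        204: "deleted",   # DELETE succeeded
        403: "key lacks write scope (use an sk_ key)",
        422: "value does not fit the column type",
    }
    return outcomes.get(status, "unexpected status")
```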

Bulk writes and upserts

One row at a time is fine for forms and dashboards, but if you're syncing data from somewhere else, you want to push a batch and let csv-api figure out which rows are new and which already exist.

The new /records/bulk endpoint takes up to 1,000 rows per request and gives you three conflict strategies:

  • error — the safe default. Fail loudly on duplicate keys.
  • ignore — skip rows whose key_columns already match an existing row. Great for "make sure these rows exist" sync jobs.
  • update — the upsert. Existing rows that match are updated; the rest are inserted. The first time you upsert on a column, csv-api lazily creates the unique index for you.

curl -X POST -H "Authorization: Bearer sk_..." \
     -H "Content-Type: application/json" \
     -d '{
           "records": [
             {"email": "[email protected]", "name": "Ada"},
             {"email": "[email protected]", "name": "Grace"}
           ],
           "on_conflict": "update",
           "key_columns": ["email"]
         }' \
     "https://csv-api.com/api/v1/datasets/YOUR_ID/records/bulk"
{ "data": { "inserted": 1, "updated": 1, "skipped_due_to_limit": 0 } }

The response tells you exactly what happened. If your batch would push the dataset past your plan's row cap, the overflow rows are reported in skipped_due_to_limit rather than failing the whole request. Run the same request again and you'll see inserted: 0 — proof your sync is idempotent.
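If your source of truth has more than 1,000 rows, split it client-side and send one bulk request per chunk. A minimal batching sketch (the helper is ours; only the 1,000-row cap comes from the API):

```python
from typing import Iterator

BULK_LIMIT = 1000  # documented per-request cap on /records/bulk

def batches(records: list[dict], size: int = BULK_LIMIT) -> Iterator[list[dict]]:
    """Yield successive chunks no larger than the bulk endpoint's cap."""
    for start in range(0, len(records), size):
        yield records[start:start + size]
```

POST each chunk with `"on_conflict": "update"` and your `key_columns`; because upserts are idempotent, a crashed sync can simply rerun from the start.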

Append & upsert imports from the dashboard

Not everyone wants to drive csv-api from a script. The dataset detail page now has an Import panel where you can drop another CSV or Excel file on top of an existing dataset. Pick append to add every row, or upsert with one or more key columns to merge into the existing data.

Imports run as background jobs, so large files don't block your browser. The job respects your plan's row cap — if the upload would push the dataset past the limit, the overflow rows are skipped rather than failing the whole import.

For more on uploading Excel files specifically, see Excel Multi-Sheet Support.

Server-side aggregation

A common pattern with the read API was "fetch everything, then count it on the client". That works at small scale, but for anything bigger you really want the database to do the math. The new /records/aggregate endpoint does exactly that.

Pick up to 5 metrics from count, sum, avg, min, max, optionally group by up to 3 columns, and reuse the same filter[…] syntax you already know from the list endpoint.

curl -H "Authorization: Bearer pk_..." \
     "https://csv-api.com/api/v1/datasets/YOUR_ID/records/aggregate?metric=count,avg:price&group_by=city"
{
  "data": [
    { "city": "Portland", "count": 14, "avg_price": 22.5 },
    { "city": "Seattle",  "count":  9, "avg_price": 30.1 }
  ],
  "meta": { "metrics": ["count", "avg_price"], "group_by": ["city"], "row_count": 2 }
}

You get back one row per group with the metric values inline — perfect for charting, KPIs, or report generation. And because aggregation reads from the same query path as the list endpoint, all your existing filters carry over. For an introduction to filter syntax, see Mastering Filters.

Putting it together

None of these features are interesting on their own — they're interesting together. A typical workflow with this release looks like:

  1. Upload your initial data as a CSV or Excel file. You get an instant API and a public pk_ key.
  2. Use the public key to read from a static site or a frontend. Use the aggregation endpoint to power KPIs without dragging rows down to the browser.
  3. Mint a private sk_ key for your backend job, and bulk-upsert from your source of truth on a schedule.
  4. Share a filtered, branded shared page with stakeholders who shouldn't need an API key at all.

csv-api still does the easy thing well — upload a file, get an API. But now there's a clear path from "weekend prototype" to "real backend for a small product", without having to migrate off the platform when you outgrow read-only.

All of these endpoints are documented in the API docs, with copy-pasteable curl examples. Go give them a try.
