
Bulk Upserts

Insert or update up to 1,000 rows in a single request.

Send up to 1,000 rows in one HTTP call. Pick a key column, choose how to handle conflicts, and let csv-api insert new rows and update existing ones in a single round trip.

What it does

POST /api/v1/datasets/:public_id/records/bulk accepts an array of records plus an on_conflict mode (error, ignore, or update) and an optional key_columns list. csv-api groups the rows, opens a transaction, and emits an INSERT … ON CONFLICT statement that's safe to retry. The response tells you how many rows were inserted vs. updated, and how many were skipped because of your plan's row cap. The first time you upsert against a key column, csv-api creates a unique index in the background — subsequent calls hit that index, so conflict checks stay fast no matter how large the dataset grows.
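The shape of the generated statement can be pictured with a small sketch. This is illustrative only — the real quoting, parameter binding, and plan-limit bookkeeping happen server-side, and build_upsert_sql is a hypothetical name, not part of the csv-api product:

```python
def build_upsert_sql(table, columns, key_columns, on_conflict):
    """Sketch the INSERT ... ON CONFLICT statement a bulk call maps to.

    Illustrative only: assumes trusted identifiers; a real server
    would quote identifiers and bind values safely.
    """
    cols = ", ".join(columns)
    placeholders = ", ".join(f"%({c})s" for c in columns)
    sql = f"INSERT INTO {table} ({cols}) VALUES ({placeholders})"
    keys = ", ".join(key_columns)
    if on_conflict == "ignore":
        sql += f" ON CONFLICT ({keys}) DO NOTHING"
    elif on_conflict == "update":
        # Sparse-row safe: a NULL in the incoming row keeps the old value.
        updates = ", ".join(
            f"{c} = COALESCE(EXCLUDED.{c}, {table}.{c})"
            for c in columns if c not in key_columns
        )
        sql += f" ON CONFLICT ({keys}) DO UPDATE SET {updates}"
    return sql  # on_conflict == "error": plain INSERT, duplicates raise
```

For example, build_upsert_sql("users", ["email", "city"], ["email"], "update") yields a single statement Postgres can execute once per row inside the transaction.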

How it works

  1. POST your batch

    Send a JSON body with a records array. Each record is a JSON object whose keys match your dataset's column names.

  2. Choose a conflict mode

    Pass on_conflict=error to fail on duplicates, ignore to skip them, or update to upsert. For ignore/update, also pass key_columns naming the unique key.

  3. Get back inserted/updated counts

    The response includes inserted, updated, and skipped_due_to_limit counts so you know exactly what happened in the batch.

See it in action

bash
curl -X POST "https://csv-api.com/api/v1/datasets/d_a8f3bc91/records/bulk" \
  -H "Authorization: Bearer sk_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "on_conflict": "update",
    "key_columns": ["email"],
    "records": [
      { "email": "[email protected]", "city": "Portland", "plan": "pro" },
      { "email": "[email protected]",   "city": "Seattle",  "plan": "starter" },
      { "email": "[email protected]", "city": "Denver",   "plan": "free" }
    ]
  }'

# → 201 Created
{ "data": { "inserted": 1, "updated": 2, "skipped_due_to_limit": 0 } }

Why it matters

  • Idempotent imports

    Re-run the same upsert job after a network failure and you'll end up with the same row count. No duplicates, no half-finished states.

  • Sparse-row safe

    Send a partial row in update mode and csv-api preserves the columns you didn't supply with COALESCE(EXCLUDED.col, table.col).

  • Atomic and bookkeeping-aware

    Your row_count is updated inside the same transaction as the write. A crash mid-request can't leak rows past your plan limit.
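The sparse-row behavior follows directly from the COALESCE pattern: a column the batch omits (or sends as null) resolves to the table's current value. A minimal sketch of that merge rule, with an illustrative function name:

```python
def coalesce_merge(existing, incoming):
    """Mirror COALESCE(EXCLUDED.col, table.col): only non-null
    incoming values overwrite; everything else is preserved."""
    merged = dict(existing)
    for col, val in incoming.items():
        if val is not None:
            merged[col] = val
    return merged
```

So upserting the partial row {"email": "[email protected]", "plan": "team"} against an existing row leaves that row's city untouched while updating the plan.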

The problem it solves

Without bulk endpoints, every row write is a separate HTTP round trip. A 500-row import becomes 500 sequential POSTs, ten seconds of latency, and a partial-failure problem if any one fails. Bulk Upserts let you ship a batch in a single request that either succeeds completely or fails atomically.
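For imports larger than the 1,000-row cap, the usual client-side pattern is to slice the records into request-sized batches and send each as one bulk call. A small sketch (the function name is illustrative, not part of any SDK):

```python
def chunk(records, size=1000):
    """Split a large import into batches of at most `size` rows,
    matching the endpoint's per-request cap."""
    for i in range(0, len(records), size):
        yield records[i:i + size]
```

A 2,500-row import becomes three requests (1,000 + 1,000 + 500) instead of 2,500 sequential POSTs, and each batch still succeeds or fails atomically on its own.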

Common use cases

  • Nightly sync jobs from another database into csv-api

  • Importing webhook events from Stripe, Segment, or your own systems

  • Migrating from a legacy spreadsheet without re-uploading the whole file every time

  • Backfills and corrections after a one-off data fix

Try Bulk Upserts for yourself

Create a free csv-api account, upload a file, and see your API live in under a minute.
