FireConvert

Convert JSON to CSV — flat arrays, nested objects, and the flattening strategy

JSON is a tree. CSV is a grid. The conversion between them is easy when your JSON is an array of flat objects — the array is rows, the object keys are columns, done. The moment the objects have nested objects, arrays-inside-arrays, or mixed shapes per element, you hit a genuine design decision: flatten with dot-notation, put the nested JSON in a single cell, or explode one row per array element. Pick wrong and you lose data or produce a CSV your spreadsheet can't parse. Here's the honest version for developers and data people.

The short version

  1. Paste or drop your JSON on the converter. Array of objects, object with an array property, or NDJSON — we detect the shape.
  2. Pick a flatten strategy: Dot-notation (nested fields become user.address.city columns), JSON-in-cell (nested values stay as JSON strings), or Explode arrays (one row per element of a nested array).
  3. Pick a null policy: empty string, literal null, or the string N/A.
  4. Pick delimiter + encoding. Comma with UTF-8 (no BOM) is the developer default; UTF-8 with BOM if it's headed to Excel.
  5. Download the CSV. Headers auto-derived from the union of keys; type-preserving (numbers as numbers, booleans as true/false).

That's the happy path. The rest of this post is for when the JSON has mixed-shape objects, deeply nested structures, arrays of primitives vs arrays of objects, or when you're deciding between CSV and JSONL.

The four shapes of JSON you'll actually see

Real JSON-to-CSV inputs fall into four categories, and the right strategy differs per category:

JSON shape → right flatten strategy:

  JSON shape                      CSV strategy
  Flat array-of-objects           trivial
  Nested objects (fixed depth)    dot-notation
  Arrays of primitives            join or explode
  Arrays of objects (1:N)         explode rows
  Mixed-shape polymorphic         JSONL instead

If your JSON is polymorphic (different fields per object), CSV is the wrong target — use JSONL.
Pick the strategy that matches your input shape; not every JSON belongs in a CSV.

1. Flat array of objects (the easy case)

[{ "id": 1, "name": "Ada", "city": "London" }, ...]

Every object has the same keys, no nesting. Output:

id,name,city
1,Ada,London
2,Grace,New York

Done. Headers = union of keys (sorted or in first-object order — we let you choose). This is the bread-and-butter of JSON-to-CSV and maps 1:1.
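The happy path can be sketched in a few lines of standard-library Python — headers as the union of keys in first-object order, one CSV row per object (a minimal sketch, not our implementation):

```python
import csv
import io

rows = [
    {"id": 1, "name": "Ada", "city": "London"},
    {"id": 2, "name": "Grace", "city": "New York"},
]

# Headers = union of keys, preserving first-object order
headers = list(dict.fromkeys(key for row in rows for key in row))

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=headers)
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```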

2. Nested objects, fixed depth

[{ "id": 1, "user": { "name": "Ada", "city": "London" } }]

Dot-notation flatten is the standard answer: nested keys become dotted column names.

id,user.name,user.city
1,Ada,London

This round-trips cleanly back to JSON if you need it, and every column has a single scalar value. It's our default.
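Under the hood, dot-notation flattening is a short recursion — the `flatten` helper below is an illustrative sketch, not our actual implementation:

```python
def flatten(obj, prefix=""):
    """Recursively flatten nested dicts into dot-notation keys."""
    out = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))  # recurse: nested keys get dotted names
        else:
            out[path] = value  # scalar: one column per leaf
    return out

row = flatten({"id": 1, "user": {"name": "Ada", "city": "London"}})
```

Each leaf value lands in exactly one column, which is why the result round-trips: split the column names on dots and you can rebuild the tree.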

3. Arrays of primitives (tags, labels, ids)

{ "id": 1, "tags": ["admin", "beta", "staff"] }

Two reasonable answers:

  • Join into one cell — tags becomes admin|beta|staff. Pick a separator that doesn't appear in your data (| is a safe default; semicolon works if comma is the CSV delimiter).
  • Explode into rows — one row per tag, the rest of the fields duplicated. Good for pivoting; bad if the outer row is large (it multiplies).

4. Arrays of objects (1:N — the genuine denormalization)

[{ "id": 1, "orders": [{"sku":"A"}, {"sku":"B"}] }]

This is the hard one. Two honest paths:

  • Explode — one row per order, parent fields repeated. You get two rows with id=1, different orders.sku. Good for relational analysis in a pivot tool; bad if you wanted a single row per customer.
  • JSON-in-cell — orders becomes a single cell containing [{"sku":"A"},{"sku":"B"}] as a quoted string. One row per customer preserved; the orders cell is opaque to the spreadsheet but round-trips perfectly back to JSON.

Rule of thumb: explode for pivot-table analysis, JSON-in-cell for API-response archival.
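The two paths, sketched side by side on a made-up customer record (illustrative, not our implementation):

```python
import json

customer = {"id": 1, "orders": [{"sku": "A"}, {"sku": "B"}]}

# Explode: one row per order, parent fields repeated
exploded = [
    {"id": customer["id"], "orders.sku": order["sku"]}
    for order in customer["orders"]
]

# JSON-in-cell: one row per customer, the array serialized as a JSON string
json_in_cell = {
    "id": customer["id"],
    "orders": json.dumps(customer["orders"], separators=(",", ":")),
}
```

The JSON-in-cell value survives CSV quoting and parses back with `json.loads`, which is what makes it the archival option.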

Type preservation — numbers, booleans, nulls

JSON has types. CSV doesn't. The converter has to write each value as text, and the downstream consumer has to re-infer types. Three rules to keep the round-trip clean:

  • Numbers — write as they appear. 42 stays 42, 3.14 stays 3.14. Do not quote numbers; quoted strings survive as strings downstream.
  • Booleans — write as true / false (lowercase JSON literals). Excel does not auto-convert these to TRUE/FALSE cells; they stay as strings. If you need Excel's boolean cell type, route through XLSX instead.
  • Null — three policies, pick one: empty string (most common), the literal null, or the string N/A. Empty string is the safest default because it lets the consumer distinguish between "no value" and "the string null"; but if you're going back to JSON later, literal null is the only round-trip-safe option.
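The three rules can be sketched as a single cell serializer — `to_cell` and the policy names here are illustrative, not our API:

```python
def to_cell(value, null_policy="empty"):
    """Serialize one JSON scalar for a CSV cell, keeping types readable."""
    if value is None:
        # Three null policies: empty string, literal null, or N/A
        return {"empty": "", "null": "null", "na": "N/A"}[null_policy]
    if value is True:
        return "true"   # lowercase JSON literal, not Excel's TRUE
    if value is False:
        return "false"
    return str(value)   # numbers pass through unquoted: 42 stays 42
```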

If type preservation matters more than CSV compatibility — genuinely typed numbers, booleans, nested structures — the right answer is usually not CSV at all; see the next section.

When CSV is the wrong target — JSONL / NDJSON

"I have JSON and I need it in a flat file format" does not always mean CSV. JSON Lines (JSONL / NDJSON) is a format where each line is an independent JSON object:

{"id":1,"name":"Ada","tags":["admin","beta"]}
{"id":2,"name":"Grace","tags":["eng"]}

JSONL keeps the tree structure intact, preserves types, and is streamable (read one line, parse one object). Every modern data pipeline — DuckDB, BigQuery, Snowflake, ClickHouse, pandas — reads JSONL natively. It's the right answer when:

  • Your JSON has deep or irregular nesting that CSV would mangle
  • Your rows have polymorphic schemas (different keys per row)
  • The consumer is a data pipeline, not a spreadsheet
  • Types (nulls, nested objects, arrays) have to survive

CSV is the right answer when: the consumer is Excel / Numbers / Google Sheets, when the data is genuinely tabular, or when a human will read it directly. If your next step is opening it in a spreadsheet, CSV. If your next step is a database load or a data-science notebook, JSONL.
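The streamability claim is easy to see in code: a JSONL reader is one `json.loads` per line, with no whole-file parse (sketch on an in-memory example):

```python
import io
import json

ndjson = io.StringIO(
    '{"id":1,"name":"Ada","tags":["admin","beta"]}\n'
    '{"id":2,"name":"Grace","tags":["eng"]}\n'
)

# Streamable: parse one object per line; types and nesting survive intact
records = [json.loads(line) for line in ndjson if line.strip()]
```

Swap the `StringIO` for `open("data.ndjson")` and the same loop handles a multi-GB file with constant memory.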

Developer workflows — API response → spreadsheet

The #1 real use case for JSON-to-CSV is an API response someone needs to hand to a non-developer. Three common shapes:

REST paginated list response

{
  "results": [ {...}, {...}, {...} ],
  "next": "...",
  "count": 1234
}

Our converter lets you specify a root path ($.results, JSONPath-lite) to focus the flattening on just the array. The pagination metadata is dropped; only the rows survive.
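In plain Python terms, that root path is a single dictionary lookup — `response` here is a made-up example payload:

```python
response = {
    "results": [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}],
    "next": "https://api.example.com/users?page=2",
    "count": 1234,
}

# Root path $.results: keep only the row array, drop pagination metadata
rows = response["results"]
```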

Elasticsearch / OpenSearch response

{ "hits": { "hits": [ { "_source": {...} }, ... ] } }

Root path $.hits.hits, then usually $.hits.hits[*]._source via the "unwrap" option. You get one row per document with the _source fields as columns.
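The same unwrap, sketched in Python against a made-up search response:

```python
response = {
    "hits": {
        "hits": [
            {"_index": "users", "_source": {"id": 1, "name": "Ada"}},
            {"_index": "users", "_source": {"id": 2, "name": "Grace"}},
        ]
    }
}

# $.hits.hits[*]._source: one row per document, _source fields as columns
rows = [hit["_source"] for hit in response["hits"]["hits"]]
```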

GraphQL response

{ "data": { "users": [ {...}, ... ] } }

Root path $.data.users. GraphQL responses are usually well-typed and nest cleanly, so dot-notation flatten is the right strategy.

How our tool compares (honestly)

JSON-to-CSV is a developer-favorite commodity — every engineer has written it at least once in a jq pipeline or a Python script. What differs is nested-structure handling, type preservation, large-file streaming, and not uploading the data. Honest scoresheet:

  • FireConvert — Free. Wins: in-browser, no upload; three flatten strategies (dot / JSON-in-cell / explode) visible in preview; root-path picker for API responses; null policy; NDJSON input supported; type-preserving output. Loses: JSONPath-lite (not full JSONPath); very deep nesting (8+ levels) produces verbose column names; no streaming for >500 MB inputs yet.
  • jq + shell — Free. Wins: scriptable, infinitely flexible; handles arbitrary JSON shapes via custom filters; UNIX-pipeline friendly. Loses: learning curve on jq syntax; you have to hand-write the flatten per schema; no preview; doubles as a puzzle for teammates.
  • Python pandas (json_normalize) — Free. Wins: native dot-notation flatten; pairs cleanly with to_csv / to_excel; handles array explode via record_path. Loses: requires a pandas install; overkill for a one-off; silent dtype inference can change your types (the same problem from our CSV to XLSX post).
  • ConvertCSV.com / json-csv.com — Free with ads. Wins: simple paste-and-go UI; handles flat array-of-objects well. Loses: uploads your data to their server (privacy); limited nested-object handling; no root-path picker; ad-heavy UI; small file caps.
  • Excel "Get Data → From JSON" — Free if you own Excel. Wins: built-in Power Query flatten; handles nested objects through the UI; final output is a live XLSX. Loses: Power Query learning curve; Mac Excel's JSON import is weaker than Windows; doesn't produce a CSV — it produces an Excel table (fine if that's what you wanted).

Honest summary: if you're in a terminal, jq + scripting is the most flexible. If you're in a notebook, pandas.json_normalize is canonical. For everything else — especially when you need a visual preview of how your nested JSON will flatten, when privacy matters, or when the destination is genuinely a CSV and not an XLSX — our JSON to CSV converter is the short path with sensible defaults and zero install.

Where our tool works well — and where it doesn't

Works well

  • Array-of-objects from REST / GraphQL / Elasticsearch responses
  • Nested structures up to ~6 levels deep via dot-notation
  • Mixed-null fields (per-column null policy)
  • Root-path selection to skip pagination metadata
  • NDJSON / JSONL input (each line a row)
  • Privacy-sensitive data (stays in your browser; no upload)
  • Chains cleanly into CSV to XLSX when the final destination is a spreadsheet

Doesn't work (well) yet

  • Huge JSON files (>500 MB) — in-browser JSON parsers are memory-bound. For multi-GB JSON, use jq streaming mode or a pipelined Node script.
  • Truly polymorphic data (every object has different keys) — CSV is the wrong format for this. Use JSONL.
  • JSON-to-Excel-with-formatting — we produce CSV; for styled XLSX, chain into our CSV-to-XLSX step.
  • Custom per-field transforms (compute a column, filter rows) — we don't do data transforms; for that you want jq or pandas.

Tips for the best result

  • Start with the preview. We show the first 20 flattened rows with the chosen strategy before you download. If the column names look wrong or the nesting flattens into the wrong shape, change the strategy in-place.
  • Use the root-path field for API responses. $.results or $.data.users gets you to the array you actually want without the pagination metadata cluttering the output.
  • For nested arrays, decide: explode or JSON-in-cell. Explode if you'll analyze the inner elements (one row per order). JSON-in-cell if you want one row per parent and the array is opaque reference data.
  • Pick UTF-8 with BOM if Excel-on-Windows is the target. Same rule as our XLSX-to-CSV post: the BOM fixes accented-character mojibake on double-click.
  • If you're going to a spreadsheet anyway, chain to XLSX. CSV to XLSX lets you declare column types (Text for ID fields, Date for date strings) — important if the downstream user will double-click rather than go through Data → From Text.
  • If your JSON is genuinely tree-shaped, consider JSONL instead. CSV is a tabular destination; forcing a tree into a table loses information no matter which strategy you pick.

Common questions

My JSON has deeply nested objects. Will dot-notation work?

Up to about six levels deep, yes — dot-notation produces column names like user.address.geo.coordinates.lat, which is verbose but unambiguous and round-trips cleanly back to JSON. Past that depth, consider JSONL instead; you're fighting the format.

My JSON is an object, not an array. What's the row unit?

If the top-level is a single object, we treat it as one row. If the object has an array property (like results: [...]), set the root path to that property and the array becomes the rows. We auto-detect common wrappers (results, data, items, hits) and default to them when present.

My objects have different keys. What happens?

We take the union of all keys as the header row. Objects missing a key get an empty cell (or whatever null policy you chose) in that column. If the keys diverge wildly (five different shapes), CSV is the wrong format — the file becomes a sparse mess. Consider JSONL.

Numbers and booleans — do they stay typed in the CSV?

As much as CSV allows. We write numbers unquoted (42, not "42") and booleans as true / false. Most parsers that re-read the CSV will infer Number on the number column and String on the boolean column (few CSV parsers have a boolean type). If preservation matters, go to XLSX via our CSV-to-XLSX step and declare the types explicitly.

What about JSONL / NDJSON input?

Supported. Drop a file where each line is a JSON object and we treat it as an array of objects — same flatten strategies apply. JSONL input is usually cleaner because it forces a flatter shape to begin with.

Do my files get uploaded to your server?

No. The JSON parser, the flattener, and the CSV writer all run in your browser tab. Your data never leaves your machine — which matters for API responses that contain user data, emails, tokens, or other sensitive material. Paid tiers raise the size caps but use the same browser-side path.

Is this tool really free?

Yes. No watermark, no daily cap, no sign-up on the free tier. Paid tiers lift the per-file size limit and add an API endpoint for scripted workflows.

Ready?

JSON to CSV →. Paste the JSON, pick the flatten strategy, download the CSV. Free, in your browser, no sign-up, no watermark. Heading to Excel after? Chain to CSV to XLSX. Need the reverse direction? CSV → JSON lives on our data converters page.