Top Related Projects
- jq: Command-line JSON processor
- fx: Terminal JSON viewer & processor
- jj: JSON Stream Editor (command line utility)
- jid: json incremental digger
- yq: a portable command-line YAML, JSON, XML, CSV, TOML and properties processor
- dasel: Select, put and delete data from JSON, TOML, YAML, XML and CSV files with a single tool. Supports conversion between formats and can be used as a Go package.
Quick Overview
Gron is a command-line tool that transforms JSON into discrete assignments to make it easier to grep for specific data. It breaks down complex JSON structures into simple, greppable lines, allowing users to quickly search and manipulate JSON data using standard Unix tools.
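As a sketch of the core idea, the flattening step can be approximated in a few lines of Python. This is a simplified illustration only; gron itself is written in Go and handles streaming, sorting, and escaping more carefully:

```python
import json

def gron(value, path="json"):
    """Recursively flatten a JSON value into discrete assignment lines."""
    lines = []
    if isinstance(value, dict):
        lines.append(f"{path} = {{}};")
        for key in sorted(value):
            # Bare identifiers use dot notation; anything else is bracketed.
            sub = f"{path}.{key}" if key.isidentifier() else f"{path}[{json.dumps(key)}]"
            lines += gron(value[key], sub)
    elif isinstance(value, list):
        lines.append(f"{path} = [];")
        for i, item in enumerate(value):
            lines += gron(item, f"{path}[{i}]")
    else:
        lines.append(f"{path} = {json.dumps(value)};")
    return lines

print("\n".join(gron({"name": "Tom", "likes": ["code", "cheese"]})))
```

Each line carries the full path from the root, which is what makes the output greppable: matching a single line tells you exactly where the value lives.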
Pros
- Simplifies searching and filtering of complex JSON structures
- Integrates well with existing Unix tools like grep, sed, and awk
- Supports reverse transformation, allowing modified output to be converted back to JSON
- Lightweight and easy to install
Cons
- Limited to JSON format; not suitable for other data formats
- May produce verbose output for large JSON files
- Requires learning a new syntax for complex queries
- Not a full-featured JSON manipulation tool; best used in combination with other tools
Getting Started
To install gron on macOS using Homebrew:
brew install gron
For other platforms, you can download the binary from the GitHub releases page or use go install:
go install github.com/tomnomnom/gron@latest
Basic usage:
gron example.json | grep "name" | gron --ungron
This command will convert the JSON file to gron format, search for lines containing "name", and then convert the results back to JSON.
Competitor Comparisons
Command-line JSON processor
Pros of jq
- More powerful and flexible for complex JSON transformations
- Supports a wide range of operations, including filtering, mapping, and reducing
- Has a rich query language for advanced data manipulation
Cons of jq
- Steeper learning curve due to its complex syntax and features
- Can be overkill for simple JSON parsing tasks
- May require more typing for basic operations
Code Comparison
gron example:
gron example.json
json = {};
json.name = "John Doe";
json.age = 30;
json.city = "New York";
jq example:
jq '.' example.json
{
"name": "John Doe",
"age": 30,
"city": "New York"
}
Summary
gron excels at making JSON greppable and easy to search, while jq is more powerful for complex JSON transformations. gron has a simpler syntax and is easier to learn, but jq offers more advanced features for data manipulation. Choose gron for quick JSON inspection and searching, and jq for more complex JSON processing tasks.
Terminal JSON viewer & processor
Pros of fx
- More feature-rich, offering interactive mode and advanced querying capabilities
- Supports multiple input formats (JSON, YAML, TOML) and output formats
- Provides a more powerful JavaScript-based query language
Cons of fx
- Steeper learning curve due to more complex syntax and features
- Larger codebase and dependencies, potentially slower for simple tasks
- May be overkill for basic JSON processing needs
Code comparison
fx:
fx '.[0].name' < data.json
fx 'x => x.filter(i => i.age > 30)' < data.json
gron:
gron data.json | grep name
gron data.json | grep 'age =' | awk '$NF > 30'
Summary
fx is a more powerful and versatile tool for working with structured data, offering advanced querying and transformation capabilities. It's ideal for complex data manipulation tasks and interactive exploration. gron, on the other hand, excels in simplicity and ease of use, making it perfect for quick JSON flattening and basic filtering using standard Unix tools. The choice between the two depends on the complexity of the task at hand and the user's familiarity with JavaScript-based querying versus Unix command-line tools.
JSON Stream Editor (command line utility)
Pros of jj
- Supports both JSON and JSONL formats
- Offers more advanced querying capabilities, including filtering and transformations
- Provides a rich set of built-in functions for data manipulation
Cons of jj
- Steeper learning curve due to more complex syntax
- Less focused on the specific task of flattening JSON for grepping
Code Comparison
gron example:
echo '{"a": 1, "b": {"c": 2}}' | gron
json = {};
json.a = 1;
json.b = {};
json.b.c = 2;
jj example:
echo '{"a": 1, "b": {"c": 2}}' | jj -r '.'
{
"a": 1,
"b": {
"c": 2
}
}
Summary
gron is designed specifically for flattening JSON into greppable format, making it simpler for basic tasks. jj, on the other hand, offers more powerful querying and manipulation capabilities, but with a more complex syntax. gron excels in simplicity and ease of use for its specific purpose, while jj provides greater flexibility and functionality for working with JSON data.
json incremental digger
Pros of jid
- Interactive JSON exploration with real-time filtering
- Supports incremental search and navigation through complex JSON structures
- Allows for easy extraction of specific data points from large JSON files
Cons of jid
- Requires user interaction, less suitable for automated scripts
- Limited to interactive exploration, while gron's flat output plugs into any text-processing tool
- May be slower for processing very large files compared to gron's static output
Code Comparison
jid example:
echo '{"foo": {"bar": "baz"}}' | jid
gron example:
echo '{"foo": {"bar": "baz"}}' | gron
Key Differences
- jid provides an interactive interface for exploring JSON data
- gron flattens JSON into assignable JavaScript-like statements
- jid is better for ad-hoc exploration, while gron excels in scripting and automation
Use Cases
- Use jid for interactive debugging and exploring unfamiliar JSON structures
- Choose gron for creating reusable scripts or processing JSON in pipelines
Both tools offer unique approaches to JSON manipulation, with jid focusing on interactivity and gron on simplification and automation. The choice between them depends on the specific task and workflow requirements.
yq is a portable command-line YAML, JSON, XML, CSV, TOML and properties processor
Pros of yq
- Supports multiple data formats (YAML, JSON, XML, CSV, TSV)
- More powerful querying and manipulation capabilities
- Actively maintained with frequent updates
Cons of yq
- Larger binary size and more dependencies
- Steeper learning curve due to more complex syntax
- Slower for simple tasks compared to gron
Code Comparison
gron:
gron example.json | grep name
yq:
yq '.name' example.json
Both tools can be used to extract specific fields from JSON data, but yq offers more advanced querying capabilities:
gron:
gron example.json | grep user.age | sed 's/^json.//' | sed 's/ = /: /'
yq:
yq '.user.age' example.json
While gron flattens JSON into assignable expressions, yq provides a more concise syntax for querying nested structures. gron excels in simplicity and ease of use for basic tasks, while yq offers more powerful features for complex data manipulation across multiple formats.
Select, put and delete data from JSON, TOML, YAML, XML and CSV files with a single tool. Supports conversion between formats and can be used as a Go package.
Pros of dasel
- Supports multiple data formats (JSON, YAML, TOML, XML) vs. gron's JSON-only focus
- Allows both reading and writing data, while gron is primarily for flattening/viewing
- Offers a more versatile query syntax for complex data manipulations
Cons of dasel
- May have a steeper learning curve due to its more complex functionality
- Potentially slower for simple JSON flattening tasks compared to gron's specialized approach
- Less focused on the specific use case of making JSON greppable
Code Comparison
gron:
gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | grep "commit.author"
dasel:
curl -s "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | dasel -r json ".[] | select(.commit.author)"
Both tools aim to simplify working with structured data, but they approach the task differently. gron focuses on flattening JSON for easy grepping, while dasel offers a more comprehensive solution for querying and manipulating various data formats. The choice between them depends on the specific use case and the complexity of the data processing required.
README
gron
Make JSON greppable!
gron transforms JSON into discrete assignments to make it easier to grep
for what you want and see the absolute 'path' to it.
It eases the exploration of APIs that return large blobs of JSON but have terrible documentation.
ⶠgron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | fgrep "commit.author" json[0].commit.author = {}; json[0].commit.author.date = "2016-07-02T10:51:21Z"; json[0].commit.author.email = "mail@tomnomnom.com"; json[0].commit.author.name = "Tom Hudson";
gron can work backwards too, enabling you to turn your filtered data back into JSON:
ⶠgron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | fgrep "commit.author" | gron --ungron [ { "commit": { "author": { "date": "2016-07-02T10:51:21Z", "email": "mail@tomnomnom.com", "name": "Tom Hudson" } } } ]
Disclaimer: the GitHub API has fantastic documentation, but it makes for a good example.
Installation
gron has no runtime dependencies. You can just download a binary for Linux, Mac, Windows or FreeBSD and run it.
Put the binary in your $PATH (e.g. in /usr/local/bin) to make it easy to use:
ⶠtar xzf gron-linux-amd64-0.1.5.tgz
ⶠsudo mv gron /usr/local/bin/
If you're a Mac user you can also install gron via brew:
▶ brew install gron
Or if you're a Go user you can use go install:
▶ go install github.com/tomnomnom/gron@latest
It's recommended that you alias ungron or norg (or both!) to gron --ungron. Put something like this in your shell profile (e.g. in ~/.bashrc):
alias norg="gron --ungron"
alias ungron="gron --ungron"
Or you could create a shell script in your $PATH named ungron or norg to affect all users:
#!/usr/bin/env bash
gron --ungron "$@"
Usage
Get JSON from a file:
ⶠgron testdata/two.json
json = {};
json.contact = {};
json.contact.email = "mail@tomnomnom.com";
json.contact.twitter = "@TomNomNom";
json.github = "https://github.com/tomnomnom/";
json.likes = [];
json.likes[0] = "code";
json.likes[1] = "cheese";
json.likes[2] = "meat";
json.name = "Tom";
From a URL:
ⶠgron http://headers.jsontest.com/
json = {};
json.Host = "headers.jsontest.com";
json["User-Agent"] = "gron/0.1";
json["X-Cloud-Trace-Context"] = "6917a823919477919dbc1523584ba25d/11970839830843610056";
Or from stdin:
▶ curl -s http://headers.jsontest.com/ | gron
json = {};
json.Accept = "*/*";
json.Host = "headers.jsontest.com";
json["User-Agent"] = "curl/7.43.0";
json["X-Cloud-Trace-Context"] = "c70f7bf26661c67d0b9f2cde6f295319/13941186890243645147";
Grep for something and easily see the path to it:
ⶠgron testdata/two.json | grep twitter
json.contact.twitter = "@TomNomNom";
gron makes diffing JSON easy too:
ⶠdiff <(gron two.json) <(gron two-b.json)
3c3
< json.contact.email = "mail@tomnomnom.com";
---
> json.contact.email = "contact@tomnomnom.com";
The output of gron is valid JavaScript:
▶ gron testdata/two.json > tmp.js
▶ echo "console.log(json);" >> tmp.js
▶ nodejs tmp.js
{ contact: { email: 'mail@tomnomnom.com', twitter: '@TomNomNom' },
github: 'https://github.com/tomnomnom/',
likes: [ 'code', 'cheese', 'meat' ],
name: 'Tom' }
It's also possible to obtain the gron output as a JSON stream via the --json switch:
▶ curl -s http://headers.jsontest.com/ | gron --json
[[],{}]
[["Accept"],"*/*"]
[["Host"],"headers.jsontest.com"]
[["User-Agent"],"curl/7.43.0"]
[["X-Cloud-Trace-Context"],"c70f7bf26661c67d0b9f2cde6f295319/13941186890243645147"]
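Conceptually, each stream line is just a (path, value) pair. A rough Python sketch of this representation follows; it assumes sorted keys (gron's default) and is an illustration, not gron's actual encoder:

```python
import json

def gron_json(value, path=()):
    """Yield [path, value] pairs in the style of gron --json."""
    if isinstance(value, dict):
        yield [list(path), {}]  # container announced first, with an empty value
        for key in sorted(value):
            yield from gron_json(value[key], path + (key,))
    elif isinstance(value, list):
        yield [list(path), []]
        for i, item in enumerate(value):
            yield from gron_json(item, path + (i,))
    else:
        yield [list(path), value]

for pair in gron_json({"Host": "headers.jsontest.com"}):
    print(json.dumps(pair, separators=(",", ":")))
```

Because each pair is itself valid JSON, this form sidesteps quoting and escaping issues that can arise when post-processing the JavaScript-style output with text tools.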
ungronning
gron can also turn its output back into JSON:
ⶠgron testdata/two.json | gron -u
{
"contact": {
"email": "mail@tomnomnom.com",
"twitter": "@TomNomNom"
},
"github": "https://github.com/tomnomnom/",
"likes": [
"code",
"cheese",
"meat"
],
"name": "Tom"
}
This means you can use gron with grep and other tools to modify JSON:
▶ gron testdata/two.json | grep likes | gron --ungron
{
"likes": [
"code",
"cheese",
"meat"
]
}
or
ⶠgron --json testdata/two.json | grep likes | gron --json --ungron
{
"likes": [
"code",
"cheese",
"meat"
]
}
To preserve array keys, arrays are padded with null when values are missing:
▶ gron testdata/two.json | grep likes | grep -v cheese
json.likes = [];
json.likes[0] = "code";
json.likes[2] = "meat";
ⶠgron testdata/two.json | grep likes | grep -v cheese | gron --ungron
{
"likes": [
"code",
null,
"meat"
]
}
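That padding behaviour falls naturally out of rebuilding the tree from paths. Here is a hypothetical Python sketch of the idea (the helper name `assign` is invented for illustration and is not part of gron): assigning to index 2 of a one-element list forces a null at index 1.

```python
import json

def assign(root, path, value):
    """Set value at path inside root, padding lists with None (null) for gaps."""
    node = root
    for i, key in enumerate(path):
        last = i == len(path) - 1
        if isinstance(node, list):
            while len(node) <= key:
                node.append(None)  # missing indices become null
            if last:
                node[key] = value
            else:
                if node[key] is None:
                    node[key] = [] if isinstance(path[i + 1], int) else {}
                node = node[key]
        else:
            if last:
                node[key] = value
            else:
                # Next path element decides whether the child is a list or object.
                node = node.setdefault(key, [] if isinstance(path[i + 1], int) else {})
    return root

doc = {}
assign(doc, ["likes", 0], "code")
assign(doc, ["likes", 2], "meat")  # index 1 was grepped away
print(json.dumps(doc))
```

Without the padding, dropping a line from the middle of an array would silently shift every later element down by one, changing their positions in the reconstructed JSON.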
If you get creative you can do some pretty neat tricks with gron, and then ungron the output back into JSON.
Get Help
ⶠgron --help
Transform JSON (from a file, URL, or stdin) into discrete assignments to make it greppable
Usage:
gron [OPTIONS] [FILE|URL|-]
Options:
-u, --ungron Reverse the operation (turn assignments back into JSON)
-v, --values Print just the values of provided assignments
-c, --colorize Colorize output (default on tty)
-m, --monochrome Monochrome (don't colorize output)
-s, --stream Treat each line of input as a separate JSON object
-k, --insecure Disable certificate validation
-j, --json Represent gron data as JSON stream
--no-sort Don't sort output (faster)
--version Print version information
Exit Codes:
0 OK
1 Failed to open file
2 Failed to read input
3 Failed to form statements
4 Failed to fetch URL
5 Failed to parse statements
6 Failed to encode JSON
Examples:
gron /tmp/apiresponse.json
gron http://jsonplaceholder.typicode.com/users/1
curl -s http://jsonplaceholder.typicode.com/users/1 | gron
gron http://jsonplaceholder.typicode.com/users/1 | grep company | gron --ungron
FAQ
Wasn't this written in PHP before?
Yes it was! The original version is preserved here for posterity.
Why the change to Go?
Mostly to remove PHP as a dependency. There's a lot of people who work with JSON who don't have PHP installed.
Why shouldn't I just use jq?
jq is awesome, and a lot more powerful than gron, but with that power comes complexity. gron aims to make it easier to use the tools you already know, like grep and sed.
gron's primary purpose is to make it easy to find the path to a value in a deeply nested JSON blob when you don't already know the structure; much of jq's power is unlocked only once you know that structure.