- Table output: `--format table` or `-t` renders results as aligned terminal tables with headers and separators
- YAML output: `--format yaml` renders results as YAML (uses pyyaml if available, built-in fallback otherwise)
- `--explain` mode: shows the parsed query breakdown (fields, conditions, grouping) and the generated jq filter
- `--time` flag: prints parse/execute/format timing to stderr
- Auto-detect stdin: `curl ... | jonq "select name"` works without needing `-` as the source
- CASE/WHEN/THEN/ELSE/END: conditional expressions — `select case when age > 30 then "senior" else "junior" end as level`
- COALESCE: null fallback with nested function support — `select coalesce(nickname, name) as display`
- `||` operator: SQL-standard string concatenation — `select first || ' ' || last as full_name`
- IS NULL / IS NOT NULL: null checks in conditions — `select * if email is not null`
- Type casting: `int()`, `float()`, `str()`/`string()`, `type()`
- Date/time functions: `todate()`, `fromdate()`, `date()`, `timestamp()` with null-safe execution
- Utility functions: `keys()`, `values()`, `trim()`, `ltrim()`, `rtrim()`, `tojson()`, `fromjson()`, `reverse()`, `sort()`, `unique()`, `flatten()`, `not_null()`, `to_entries()`, `from_entries()`
- Rich REPL: readline with persistent history (`~/.jonq_history`) and tab completion for field names and keywords
- `--follow` mode: stream NDJSON from stdin line by line — `tail -f app.log | jonq --follow "select level, msg if level = 'error'"`
- Shell completions: `jonq --completions bash|zsh|fish` generates completion scripts
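The COALESCE and null-safe behaviors above follow standard SQL semantics, which can be sketched in plain Python (an illustration of the semantics only, not jonq's implementation; both helper names are made up):

```python
def coalesce(*values):
    """Return the first non-None value, or None if all are None (SQL COALESCE semantics)."""
    return next((v for v in values if v is not None), None)

def null_safe(fn):
    """Wrap a function so a None input yields None instead of raising."""
    def wrapper(value):
        return None if value is None else fn(value)
    return wrapper

# A null-safe cast: int(None) would raise TypeError; the wrapped version returns None.
safe_int = null_safe(int)
```

The same guard pattern would explain why `todate()` and friends can advertise "null-safe execution": missing fields propagate as null instead of crashing the filter.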
- Version bump to 0.3.0
- Unified all function-name-to-jq-builtin mappings into a single `_FUNC_MAP` constant (eliminated 3 duplicate copies)
- Expression-only queries now properly map over array elements instead of wrapping the whole input
- Path-explorer schema preview now shows nested JSON paths like `orders[].price` instead of only shallow field listings
- Reuse bounded sync and async jq worker pools for repeated identical filters
- Main streaming execution path now chunks root-array JSON in memory instead of splitting to temp files
- Repositioned README/package metadata around JSON exploration and extraction
- Null-sensitive functions (`todate`, `tonumber`) are now guarded against null input instead of crashing
- Schema validator recognizes CASE expressions, the `||` operator, and `coalesce` — no longer rejects them as unknown fields
- Sync jq runtime errors that only write to `stderr` no longer hang waiting on `stdout`
- Async jq workers are cleaned up with their event loop, preventing leftover subprocess transport warnings
- `--follow` mode skips empty results instead of printing `[]` for non-matching lines
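The stderr-hang fix above reflects a classic subprocess pitfall: blocking on a child's stdout while it writes only to stderr can deadlock once pipe buffers fill. A minimal sketch of the safe pattern (illustrative only, not jonq's code):

```python
import subprocess
import sys

def run_filter(cmd, stdin_data):
    """Run a subprocess, draining stdout and stderr together.

    communicate() reads both pipes concurrently, so a child that emits
    only an error on stderr cannot deadlock the parent.
    """
    proc = subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    out, err = proc.communicate(stdin_data)
    if proc.returncode != 0:
        raise ValueError(err.strip() or "subprocess failed")
    return out

# Example child that writes only to stderr and exits non-zero.
child = [sys.executable, "-c",
         "import sys; sys.stderr.write('bad filter'); sys.exit(1)"]
```

Raising `ValueError` here also mirrors the executor's documented error type for jq failures.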
- DISTINCT: `select distinct city` returns unique rows
- COUNT DISTINCT: `select count(distinct city) as n`
- Standalone LIMIT: `select * limit 10` (independent of sort)
- IN operator: `select * if city in ('New York', 'Chicago')`
- NOT operator: `select * if not age > 30`
- LIKE operator: `select * if name like 'Al%'` (supports the `%` wildcard for startswith, endswith, contains)
- String functions: `upper()`, `lower()`, `length()`
- Math functions: `round()`, `abs()`, `ceil()`, `floor()`
- Schema preview: run `jonq data.json` with no query to inspect fields and types
- Interactive REPL: `jonq -i data.json` for interactive querying
- Watch mode: `--watch` flag to re-run the query on file change
- URL fetch: `jonq https://api.example.com/data "select id"`
- Multi-file glob: `jonq 'logs/*.json' "select *"`
- Auto-detect NDJSON: no flag needed for newline-delimited JSON
- Fuzzy field suggestions: typo correction with Levenshtein distance
- Colorized output: syntax-highlighted JSON when outputting to a terminal
- `python -m jonq`: added a `__main__.py` entry point
- Constants module: centralized all magic numbers and ANSI codes
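The LIKE entry above names three wildcard placements mapping to startswith, endswith, and contains. A hedged Python sketch of that mapping (the function name is invented; this is not jonq's implementation):

```python
def like_match(value, pattern):
    """Match a SQL LIKE pattern restricted to a leading and/or trailing '%'.

    'Al%'  -> startswith, '%son' -> endswith, '%an%' -> contains,
    and a pattern with no '%' is an exact match.
    """
    starts = pattern.startswith("%")
    ends = pattern.endswith("%")
    core = pattern.strip("%")
    if starts and ends:
        return core in value
    if ends:
        return value.startswith(core)
    if starts:
        return value.endswith(core)
    return value == core
```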
- Dynamic HAVING field mappings instead of hardcoded alias dict
- Removed inline jq comments (`# sum`) from generated filters
- `from` clause now correctly iterates arrays (`.path[]` instead of `.path`)
- Schema validator recognizes `sort`, `distinct`, `as`, and arithmetic operators
- Schema validator skips validation when a `from` clause is used
- Logging level changed from INFO to WARNING (no more filter spam)
- Version bump to 0.2.0
- Replaced all star imports with explicit imports
- Replaced all hardcoded ANSI codes with a `_Colors` class
- Replaced all magic numbers with named constants
- Removed dead code: `handle_error`, `extract_value_from_quotes`, `_AGG_FUNCTIONS`, `parse_condition` (in `jq_filter.py`), and the `use_fast` parameter
- Executor raises `ValueError` instead of `RuntimeError` for jq errors
- Updated all Sphinx docs, README, and SYNTAX.md
- NDJSON input mode: `--ndjson` (supports `-` for stdin). Lines are wrapped into a JSON array for querying. Currently incompatible with `--stream`; combining the two prints a friendly message instead of failing
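Wrapping NDJSON lines into a single array, as described above, can be sketched like this (an illustration of the idea, not jonq's code):

```python
import json

def ndjson_to_array(text):
    """Parse newline-delimited JSON into one list, skipping blank lines."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]
```

Once wrapped, the same array-oriented query path used for ordinary JSON files applies unchanged.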
- Properly await `aclose()`/`close()` to eliminate `RuntimeWarning: coroutine ... was never awaited` and related warnings
- Schema validation now allows nested paths (e.g., `items[].price`) by validating only the top-level head of the path
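Validating only the head of a nested path, as described above, might look like the following (a hypothetical sketch; the helper name is invented):

```python
import re

def head_is_valid(path, known_fields):
    """Check only the first segment of a dotted path against the schema.

    'items[].price' validates just 'items'; deeper segments are trusted.
    """
    head = re.split(r"[.\[]", path, maxsplit=1)[0]
    return head in known_fields
```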
- Better handling of comparisons (`==`, `!=`, `>=`, `<=`, `>`, `<`), numbers (ints/floats), and `contains`
- CSV pipeline: converts JSON to CSV before applying `--limit`
- Pretty printing applies only to JSON output
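Turning a list of JSON objects into CSV rows with a row limit, as in the pipeline above, can be sketched as follows (illustrative only; jonq's actual conversion may differ):

```python
import csv
import io

def to_csv(records, limit=None):
    """Render a list of flat dicts as CSV text, keeping at most `limit` data rows."""
    fieldnames = sorted({k for r in records for k in r})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for row in records[:limit]:
        writer.writerow(row)
    return buf.getvalue()
```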
- Concurrent chunk processing for 2-5x performance improvement on large files
- Prevents freezing on large datasets
- New `run_jq_async()` and `run_jq_streaming_async()` functions for concurrent processing
- Streaming mode now processes chunks in parallel
- Tool remains responsive during large file processing
- Added `aiofiles` dependency for async file operations
- Internal async architecture for streaming operations
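Parallel chunk processing of the kind described above can be sketched with asyncio (a generic illustration with a bounded concurrency limit; none of these names come from jonq):

```python
import asyncio

async def process_chunks(chunks, worker, max_concurrency=4):
    """Run `worker` over every chunk concurrently, bounded by a semaphore."""
    sem = asyncio.Semaphore(max_concurrency)

    async def run_one(chunk):
        async with sem:
            return await worker(chunk)

    # gather() preserves input order even though chunks may finish out of order.
    return await asyncio.gather(*(run_one(c) for c in chunks))
```

Bounding in-flight chunks is what keeps memory flat on large files, consistent with the "remains responsive during large file processing" claim above.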
- Prototype implementation
- Initial public release
- SQL-like query syntax for JSON files
- Support for field selection, filtering, sorting, and limiting results
- Support for aggregation functions (sum, avg, count, max, min)
- Support for nested JSON traversal with dot notation
- Support for array indexing with [n] syntax
- Support for complex boolean expressions with AND, OR, and parentheses
- Support for grouping and aggregation with GROUP BY
- Support for handling special characters in field names
- Comprehensive test suite
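Nested traversal with dot notation and `[n]` indexing, as listed above, can be illustrated with a small resolver (a hypothetical sketch, not jonq's parser; it ignores the special-character handling the release also supports):

```python
import re

def get_path(data, path):
    """Resolve a dotted path with optional [n] array indexing, e.g. 'orders[0].price'."""
    for part in path.split("."):
        m = re.match(r"(\w+)(?:\[(\d+)\])?$", part)
        if not m:
            raise ValueError(f"bad path segment: {part}")
        data = data[m.group(1)]
        if m.group(2) is not None:
            data = data[int(m.group(2))]
    return data
```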