Hacker News | hitradostava's comments

Patrick, the problems you describe (speed, cost, cross-border friction) already have solutions. SEPA Instant, FedNow, PIX, and providers like Wise move money in seconds, at negligible cost, inside regulated systems. Tempo doesn’t solve payments; it sidesteps oversight.

By shifting flows onto a private stablecoin ledger, Stripe isn’t fixing inefficiency; it’s making it easier to route money in ways regulators and tax authorities can’t easily monitor. That’s not innovation, it’s the oldest trick in the crypto playbook: pretend you’re improving payments, when what you’re really selling is a way around the rules.


did you write your comment?


That's not what OpenAI is claiming. They are claiming that there are two new flagship models and a router that routes between them.

"GPT‑5 is a unified system with a smart, efficient model that answers most questions, a deeper reasoning model (GPT‑5 thinking) for harder problems, and a real‑time router that quickly decides which to use"


Planning was OK for me: much slower than Sonnet, but comparable. But some of the code it produces is just terrible. Maybe the routing layer sends some code-generation tasks to a much smaller model - but then I don't get why it's so slow!

The only thing that seems better to me is the parallel tool calling.


I agree, I just don't understand how the team at Cursor can say this:

“GPT-5 is the smartest coding model we've used. Our team has found GPT-5 to be remarkably intelligent, easy to steer, and even to have a personality we haven’t seen in any other model. It not only catches tricky, deeply-hidden bugs but can also run long, multi-turn background agents to see complex tasks through to the finish—the kinds of problems that used to leave other models stuck. It’s become our daily driver for everything from scoping and planning PRs to completing end-to-end builds.”

The cynic in me thinks that Cursor had to give positive PR in order to secure better pricing...


Had Sonnet 4 not been able to?


No, it kept going in circles... spent like 3 weeks trying to fix it. Got access to GPT-5 yesterday and all major bugs are resolved.


Interesting. I tried it to fix some failing unit tests, but it made the problem worse. Sonnet was able to fix the failing unit tests and the new problems introduced by GPT-5. I used Claude Code for Sonnet and Cursor Agent for GPT-5. Maybe Cursor Agent is just bad?


I don't know, I use roocode.


Sure.


Amazing project. The question I have is: why Rust? Is the compiled WASM significantly faster than JS?


Yes, the compiled WASM is significantly faster. Easily by an order of magnitude. I might be completely wrong about this, but I _think_ that if the brilliant folks at Microsoft Research in the Calc Intelligence group had waited a few years, they might have used WASM instead of TypeScript (https://www.microsoft.com/en-us/garage/wall-of-fame/calc-ts-...)

As for Rust, it could have been C or Zig. I just needed a language that compiles cleanly to WASM.

There is another reason though. IronCalc runs on bare metal, not only on the web, and needs bindings to languages like Python, R or Julia. I can't easily get that today with TypeScript.
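To show that the WASM build isn't tied to the browser, here's roughly what driving a compiled module from Python looks like with the wasmtime package - the file name and the exported function are made-up placeholders for illustration, not IronCalc's actual binding layer:

    from wasmtime import Store, Module, Instance

    store = Store()
    # "engine.wasm" and the "add" export are hypothetical placeholders.
    module = Module.from_file(store.engine, "engine.wasm")
    instance = Instance(store, module, [])
    add = instance.exports(store)["add"]
    print(add(store, 1, 2))  # calls into the compiled code, no browser involved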


There might be some tricks from Row Zero that you can borrow

https://news.ycombinator.com/item?id=39551064

https://news.ycombinator.com/item?id=41512270


In a couple of years' time, I don't see why AI-based tooling couldn't write Redis. Would you get a complete Redis produced from a single prompt? Of course not. But if extreme speed is what you want to optimize for, then the tooling needs to be given the right feedback loop to optimize for it.

I think the question to ask is: what do I do as a software engineer that couldn't be done by an AI-based tool in a few years' time? The answer is scary, but exciting.


I agree with you, and it's confusing to me. I do think there is a lot of emotion at play here, rather than cold rationality.

Using LLM-based tools effectively requires a change in workflow that a lot of people aren't ready to try. Everyone can share their anecdote of how an LLM has produced stupid or buggy code, but there is way too much focus on where we are now, rather than the direction of travel.

I think existing models are already sufficient; it's just that we need to improve the feedback loop. A lot of the corrections / direction I give to LLM-produced code could 100% be done by a better LLM agent. In the next year I can imagine tooling that:

- lets me interact fully via voice

- has a separate "architecture" agent that ensures any produced code is in line with the patterns in a particular repo

- automatically feeds compile and runtime errors back in and fixes them

- offers a refactoring workflow mode, where the aim is to first get tests written, then get the code working, and then get the code efficient, clean, and in line with repo patterns

I'm excited by this direction of travel, but I do think it will fundamentally change software engineering in a way that is scary.
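A minimal sketch of the compile-error part of that loop, just to make it concrete - `generate_fix` here is a stand-in for whatever LLM call or agent you use, not a real API:

    def generate_fix(source: str, error: str) -> str:
        """Hypothetical stand-in for an LLM call that returns revised source."""
        raise NotImplementedError

    source = open("module.py").read()
    for _ in range(5):                             # cap the retries
        try:
            compile(source, "module.py", "exec")   # surface syntax errors
            break                                  # compiles cleanly, stop looping
        except SyntaxError as err:
            # feed the error text straight back to the model and try again
            source = generate_fix(source, str(err))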


> Using LLM-based tools effectively requires a change in workflow that a lot of people aren't ready to try

This is a REALLY good summary of it, I think. If you lose your patience with people, you'll lose your patience with AI tooling, because AI interaction is fundamentally so similar to interacting with other people.


Exactly, and LLM-based tools can be very frustrating right now - but if you view the tooling as a very fast junior developer with very broad but shallow knowledge, then you can develop a workflow which, for many (but not all) tasks, is much, much faster than writing code by hand.


I'm continually surprised by the amount of negativity that accompanies these sorts of statements. The direction of travel is very clear - LLM-based systems will be writing more and more code at all companies.

I don't think this is a bad thing - if this can be accompanied by an increase in software quality, which is possible. Right now it's very hit and miss, and everyone has examples of LLMs producing buggy or ridiculous code. But once the tooling improves to:

1. align produced code better to existing patterns and architecture

2. fix the feedback loop - with TDD, other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc.

Then we will definitely start seeing more and more code produced by LLMs. Don't look at the state of the art now, look at the direction of travel.


> if this can be accompanied by an increase in software quality

That’s a huge “if”, and by your own admission not what’s happening now.

> other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc.

What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.

> Then we will definitely start seeing more and more code produced by LLMs.

We’re already there. And there’s a lot of bad code being pumped out. Which will in turn be fed back to the LLMs.

> Don't look at the state of the art now, look at the direction of travel.

That’s what leads to the eternal “in five years” which eventually sinks everyone’s trust.


> What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.

Humans are machines which make errors. Somehow, we got to the moon. The suggestion that errors just mindlessly compound and that there is no way around it is what's stupid.


> Humans are machines

Even if we accept the premise (seeing humans as machines is literally dehumanising and a favourite argument of those who exploit them), not all machines are created equal. Would you use a bicycle to file your taxes?

> Somehow, we got to the moon

Quite hand wavey. We didn’t get to the Moon by reading a bunch of text from the era then probabilistically joining word fragments, passing that around the same funnel a bunch of times, then blindly doing what came out, that’s for sure.

> The suggestion that errors just mindlessly compound and that there is no way around it

Is one that you made up, as that was not my argument.


LLMs are a lot better at a lot of things than a lot of humans.

We got to the moon using a large number of systems to a) avoid errors where possible and b) build in redundancies. Even an LLM knows this and knew what the statement meant:

https://chatgpt.com/share/6722e04f-0230-8002-8345-5d2eba2e7d...

Putting "corrected" in quotes and saying "death spiral" implies error compounding.

https://chatgpt.com/share/6722e19c-7f44-8002-8614-a560620b37...

These LLMs seem so smart.


> LLMs are a lot better at a lot of things than a lot of humans.

Sure, I'm a really poor painter; Midjourney is better than me. Are they better than a human trained for that task, on that task? That's the real question.

And I reckon the answer is currently no.


The real question is whether they can do a good enough job quickly and cheaply to be valuable. I.e., quick and cheap at some level of quality is often "better". Many people are using them in the real world because they can do in 1 minute what might take them hours. I personally save a couple of hours a day using ChatGPT.


Ah, well then, if the LLM said so then it’s surely right. Because as we all know, LLMs are never ever wrong and they can read minds over the internet. If it says something about a human, then surely you can trust it.

You’ve just proven my point. My issue with LLMs is precisely people turning off their brains and blindly taking them at face value, even arduously defending the answers in the face of contrary evidence.

If you’re basing your arguments on those answers then we don’t need to have this conversation. I have access to LLMs like everyone else, I don’t need to come to HN to speak with a robot.


You didn't read the responses from an LLM. You've turned your brain off. You probably think self-driving cars are also a nonsense idea. Can't work. Too complex. Humans are geniuses without equal. AI is all snake oil. None of it works.


You missed the mark entirely. But it does reveal how you latch on to an idea about someone and don’t let it go, completely letting it cloud your judgement and arguments. You are not engaging with the conversation at hand, you’re attacking a straw man you have constructed in your head.

Of course self-driving cars aren’t a nonsense idea. The execution and continued missed promises suck, but that doesn’t affect the idea. Claiming “humans are geniuses without equal” would be pretty dumb too, and is again something you’re making up. And something doesn’t have to be “all snake oil” to deserve specific criticism.

The world has nuance, learn to see it. It’s not all black and white and I’m not your enemy.


Nope, hit the mark.

Actually understand LLMs in detail and you'll see it isn't some huge waste of time and energy to have LLMs correct outputs from LLMs.

Or, don't, and continue making silly, snarky comments about how stupid some sensible thing is, in a field you don't understand.


> These LLMs seem so smart.

Yes, they do *seem* smart. My experience with a wide variety of LLM-based tools is that they are the industrialization of the Dunning-Kruger effect.


It's more likely the opposite. Humans rationalize their errors out the wazoo. LLMs are showing us we really aren't very smart at all.


Humans are obviously machines. If not, what are humans then? Fairies?

Now once you've recognized that, you're better equipped for the task at hand - which is augmenting and ultimately automating away every task that humans-as-machines perform, by building an equivalent or better machine that performs said tasks at a fraction of the cost!

People who want to exploit humans are the ones that oppose automation.

There's still a long way to go, but now we've finally reached a point where some tasks that were very elusive to automation are starting to show great promise of being automated, or at least greatly augmented.


Profoundly spiritual take. Why is that the task at hand?

The conceit that humans are machines carries with it such powerful ideology: humans are for something, we are some kind of utility, not just things in themselves, like birds and rocks. How is it anything other than an affirmation of metaphysical/theological purpose particular to humans? Why is it like that? This must be coming from a religious context, right?

I, at least, cannot see how you could believe this while sustaining a rational, scientific mind about nature, cosmology, etc. Which is fine! We can all believe things, just know you can't have your cake and eat it too. Namely, if anybody should believe in fairies around here, it should probably be you!


> Why is that the task at hand?

Because it's boring stuff, and most of us would prefer to be playing golf/tennis/hanging out with friends/painting/etc. If you look at the history of humanity, we've been automating the boring stuff since the start. We don't automate the stuff we like.


Where's the spiritual part?

Recognizing that humans, just like birds, are self-replicating biological machines is the most level-headed way of looking at it.

It is consistent with observations and there are no (apparent) contradictions.

The spiritual beliefs are the ones with the fairies, binding of the soul, made of special substrate, beyond reason and understanding.

If you have a desire to improve the human condition (not everyone does), then the task at hand naturally arises - eliminate forced labour, aging, disease, suffering, death, etc.

This all naturally leads to automation and transhumanism.


> Humans are obviously machines. If not, what are humans then? Fairies?

If humans are machines, then so are fairies.


The difference is that when we humans learn from our errors, we learn how to make them less often.

LLMs get their errors fed back into them and become more confident that their wrong code is right.

I'm not saying that's completely unsolvable, but that does seem to be how it works today.


That isn't the way they work today. LLMs can easily find errors in outputs they themselves just produced.

Start adding different prompts, different models and you get all kinds of ways to catch errors. Just like humans.


I don’t think LLMs can easily find errors in their output.

There was a recent meme about asking LLMs to draw a wineglass full to the brim with wine.

Most really struggle with that instruction. No matter how much you ask them to correct themselves they can’t.

I’m sure they’ll get better with more input but what it reveals is that right now they definitely do not understand their own output.

I’ve seen no evidence that they are better with code than they are with images.

For instance, if the time to complete only scales with the length of the output in tokens and not the complexity of its contents, then it's probably safe to assume it's not being comprehended.


> LLMs can easily find errors in outputs they themselves just produced.

No. LLMs can be told that there was an error and produce an alternative answer.

In fact LLMs can be told there was an error when there wasn't one and produce an alternative answer.



https://chatgpt.com/share/672331d2-676c-8002-b8b3-10fc4c8d88...

In my experience, if you confuse an LLM by deviating from the "expected", then all the shims of logic seem to disappear, and it goes into hallucination mode.


Try asking this question to a bunch of adults.


Tbf, that was exactly my point. An adult might use 'inference' and 'reasoning' to ask for clarification, or go with an internal logic of their choosing.

ChatGPT here went with lexicographical order in Python for some reason, and then proceeded to make false statements from false observations, while also defying its own internal logic.

    "six" > "ten" is true because "six" comes after "ten" alphabetically.
No.

    "ten" > "seven" is false because "ten" comes before "seven" alphabetically.
No.

From what I understand of LLMs (which - I admit - is not very much), logical reasoning isn't a property of LLMs, unlike information retrieval. I'm sure this problem can be solved at some point, but a good solution would need development of many more kinds of inference and logic engines than there are today.
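For reference, Python's actual lexicographic comparison gives the opposite of both claims:

    print("six" > "ten")    # False - 's' sorts before 't'
    print("ten" > "seven")  # True  - 't' sorts after 's'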


Do you believe that the LLM understands what it is saying and is applying the logic that you interpret from its response, or do you think it's simply repeating similar patterns of words it's seen associated with the question you presented it?


If you take the time to build an (S?)LM yourself, you'll realize it's neither of these. "Understands" is an ill-defined term, as is "applying logic".

But an LLM is not "simply" doing anything. It's extremely complex and sophisticated. Once you go from tokens into high-dimensional embeddings... it seems these models (with enough training) figure out how all the concepts go together. I'd suggest reading the word2vec paper first, then thinking about how attention works. You'll come to the conclusion that these things are likely to be able to beat humans at almost everything.


You said humans are machines that make errors and that LLMs can easily find errors in output they themselves produce.

Are you sure you wanted to say that? Or is it the other way around?


Yes. Just like humans. It's called "checking your work" and we teach it to children. It's effective.


> LLMs can easily find errors in outputs they themselves just produced.

Really? That must be a very recent development, because so far this has been a reason for not using them at scale. And no one is.

Do you have a source?


Lots of companies are using them at scale.


To err is human. To err at scale is AI.


I fear that we'll see a lot of humans err at scale next Tuesday. Global warming is another example of human error at scale.


>next Tuesday.

USA (s)election, I guess.


To err at scale isn't unique to AI. We don't say "no software, it can err at scale".


CEOs embracing the marginal gains of LLMs by dumping billions into it are certainly great examples of humans erring at scale.


yep, nano mega.


It is by will alone that I set my mind in motion.

It is by the juice of Sapho that thoughts acquire speed, the lips become stained, the stains become a warning...


err, "hallucinate" is the euphemism you're looking for. ;)


I don't like the use of "hallucinate". It implies that LLMs have some kind of model of reality and sometimes get confused. They don't have any kind of model of anything, they cannot "hallucinate", they can only output wrong results.


>They don't have any kind of model of anything, they cannot "hallucinate", they can only output wrong results.

it's even more fundamental than that.

even if they had any model, they would not be able to think.

thinking requires consciousness. only humans and some animals have it. maybe plants too.

machines? no way, jose.


yeah, i get you. it was a joke, though.

that "hallucinate" term is a marketing gimmick to make it seem to the gullible that this "AI" (i.e. LLMs) can actually think, which is flat out BS.

as many others have said here on hn, those who stand to benefit a lot from this are the ones promoting this bullcrap idea (that they (LLMs) are intelligent).

greater fool theory.

picks and shovels.

etc.

In detective or murder novels, the cliche is "look for the woman".

https://en.m.wikipedia.org/wiki/Cherchez_la_femme

in this case, "follow the money" is the translation, i.e. who really benefits (the investors and founders, the few), as opposed to who is grandly proclaimed to be the beneficiary (us, the many).


s/grand/grandiose/g

from a search for grand vs grandiose:

When it comes to bigness, there's grand and then there's grandiose. Both words can be used to describe something impressive in size, scope, or effect, but while grand may lend its noun a bit of dignity (i.e., “we had a grand time”), grandiose often implies a whiff of pretension.

https://www.merriam-webster.com/dictionary/grandiose


> Humans are machines which make errors.

Indeed, and one of the most interesting errors some human machines are making is hallucinating false analogies.


It wasn't an analogy.


Machines are intelligently designed for a purpose. Humans are born and grow up, have social lives, a moral status and are conscious, and are ultimately the product of a long line of mindless evolution that has no goals. Biology is not design. It's way messier.


Exactly my thought. Humans can correct humans. Machines can correct, or at least point to failures in the product of, machines.


I don't see how this is sustainable. We have essentially eaten the seed corn. These current LLMs have been trained on an enormous corpus of mostly human-generated technical knowledge, from sources which we already know are being polluted by AI-generated slop. We also have preliminary research into how poorly these models do when trained on data generated by other LLMs. Sure, they can coast off of that initial training set for maybe 5 or more years, but where will the next giant set of unpolluted training data come from? I just don't see it, unless we get something better than LLMs that is closer to AGI, or an entire industry is created to explicitly produce curated training data to be fed to future models.


These tools also require the developer class that they are intended to replace to continue doing what they currently do (creating the knowledge source to train the AI on). It's not like the AIs are going to be creating the accessible knowledge bases to train AIs on, especially for new language extensions/libraries/etc. This is a one and f'd development. It will give a one-time gain, and then companies will be shocked when it falls apart and there are no developers trained up (because they all had to switch careers) to replace them. Unless Google's expectation is that all languages/development/libraries will just be static going forward.


One of my concerns is that AI may actually slow innovation in software development (tooling, languages, protocols, frameworks and libraries), because the opportunity cost of adopting them will increase, if AI remains unable to be taught new knowledge quickly.


It also bugs me that these tools will reduce the incentive to write better frameworks and language features if all the horrible boilerplate is just written by an LLM for us rather than finding ways to design systems which don't need it.

The idea that our current languages might be as far as we get is absolutely demoralising. I don't want a tool to help me write pointless boilerplate in a bad language, I want a better language.


This is my main concern. What's the point of other tools when none of the LLMs have been trained on it and you need to deliver yesterday?

It's an insanely conservative tool


You already see this if you use a language outside of Python, JS or SQL.


that is solved via larger contexts


It’s not, unless contexts get as large as comparable training materials. And you’d have to compile adequate materials. Clearly, just adding some documentation about $tool will not have the same effect as adding all the gigabytes of internet discussion and open source code regarding $tool that the model would otherwise have been trained on. This is similar to handing someone documentation and immediately asking questions about the tool, compared to asking someone who had years of experience with the tool.

Lastly, it’s also a huge waste of energy to feed the same information over and over again for each query.


- context of millions of tokens is frontier

- context over training is like someone referencing docs vs vaguely recalling from decayed memory

- context caching


You’re assuming that everything can be easily known from documentation. That’s far from the truth. A lot of what LLMs produce is informed by having been trained on large amounts of source code and large amounts of discussions where people have shared their knowledge from experience, which you can’t get from the documentation.


Yea, I'm thinking along the same lines.

The companies valuing the expensive talent currently working at Google will be the winners.

Google and others are betting big right now, but I feel the winners might be those who watch how it unfolds first.


The LLM codegen at Google isn't unsupervised. It's integrated into the IDE as both autocomplete and a prompt-based assistant, so you get a lot of feedback from a) what suggestions the human accepts and b) how they fix the suggestion when it's not perfect. So future iterations of the model won't be trained on LLM output, but on a mixture of human-written code and human-corrected LLM output.

As a dev, I like it. It speeds up writing easy but tedious code. It's just a slightly smarter version of the refactoring tools already common in IDEs...


What about (c) the human doesn't realize the LLM-generated code is flawed, and accepts it?


I mean what happens when a human doesn't realize the human generated code is wrong and accepts the PR and it becomes part of the corpus of 'safe' code?


Presumably someone will notice the bug in both of these scenarios at some point and it will no longer be treated as safe.


Do you ask a junior to review your code or someone experienced in the codebase?


Maybe most of the code in the future will be very different from what we're used to. For instance, AI image processing/computer vision algorithms are being adopted very quickly, given the best ones are now mostly transformer networks.


My main gripe with this form of code generation is that it is primarily used to generate "leaf" code. Code that will not be further adjusted or refactored into the right abstractions.

It is now very easy to sprinkle in regexes to validate user input, like email addresses, on every controller instead of using a central lib/utility for that.

In the hands of a skilled engineer it is a good tool. But for the rest it mainly serves to output more garbage at a higher rate.
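To make the contrast concrete, a sketch of the central-utility version (the helper name and regex are just illustrative, not from any particular codebase):

    import re

    # One shared helper: tighten or fix the rule once and every controller benefits,
    # instead of a slightly different regex pasted into each one.
    _EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def is_valid_email(value: str) -> bool:
        return bool(_EMAIL_RE.fullmatch(value))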


>It is now very easy to sprinkle in regexes to validate user input, like email addresses, on every controller instead of using a central lib/utility for that.

Some people are touting this as a major feature. "I don't have to pull in some dependency for a minor function - I can just have AI write that simple function for me." I, personally, don't see this as a net positive.


Yes, I have heard similar arguments before. It could be an argument for including the functionality in the standard lib for the language. There can be a long debate about dependencies, and then there is still the benefit of being able to vendor and prune them.

The way it is now just leads to bloat and cruft.


> The direction of travel is very clear

And if we get 9 women we can produce a baby in a single month.

There's no guarantee such progression will continue. Indeed, there's much more evidence it is coming to a halt.


It might also be an example of 80/20 - we're just entering the 20% of features that take 80% of the time & effort.

It might be possible, but will shareholders/investors foot the bill for the 80% that still has to be paid?


It's not even been 2 years, and you think things are coming to a halt?


Yes. The models require training data and they have already been fed the internet.

More and more of the content generated since is LLM-generated and useless as training data.

The models get worse, not better, by being fed their own output, and right now they are out of training data.

This is why Reddit just went profitable: AI companies buy its text to train their models because it is at least somewhat human-written.

Of course, even Reddit is crawling with LLM-generated text, so yes, it is coming to a halt.


Data is not the only factor. Architecture improvements, data filtering etc. matter too.


I know for a fact they are, because the rate _and_ quality of improvement are diminishing exponentially. I keep a close eye on this field as part of my job.


> Don't look at the state of the art now, look at the direction of travel.

That's what people are doing. The direction of travel over the most recent few (6-12) months is mostly flat.

The direction of travel when first introduced was a very steep line going from bottom-left to top-right.

We are not there anymore.


> I'm continually surprised by the amount of negativity

Maybe I'm just old, but to me, LLMs feel like magic. A decade ago, anyone predicting their future capabilities would have been laughed at.


Magic Makes Money - the more magical something seems, the more people are willing to pay for that something.

The discussion here seems to bear this out: the CEO claims AI is magical; here the truth becomes that it's just an auto-complete engine.


Nah, you just were not up to speed with the current research. Which is completely normal. Now marketing departments are on the job.


Transformers were proposed in 2017. A decade ago none of this was predictable.


The emacs psychologist was there from before :D

And so were a lot of Markov chain-based chatbots. Also Doretta, the Microsoft AI/search engine chatbot.

Were they as good? No. Is this an iteration of those? Absolutely.


Kurzweil would disagree)


That's the hype, isn't it? The direction of travel hasn't been proven to be more than surface level yet.


Because there seems to be a fundamental misunderstanding producing a lot of nonsense.

Of course LLMs are a fantastic tool to improve productivity, but current LLMs cannot produce anything novel. They can only reproduce what they have seen.


But they assist developers and collect novel coding experience from their projects all the time. Each application of an LLM creates feedback on the AI code - the human might leave it as is, slightly change it, or reject it.


> LLM-based systems will be writing more and more code at all companies.

At Google, today, for sure.

I do believe we still are not across the road on this one.

> if this can be accompanied by an increase in software quality, which is possible. Right now it's very hit and miss

So, is it really a smart move for Google to enforce this today, before quality has increased? Or does this set them off on a path to losing market share because their software quality will deteriorate further over the next couple of years?

From the outside it just seems Google and others have no choice: they must walk this path or lose market valuation.


> I'm continually surprised by the amount of negativity that accompanies these sorts of statements.

I'm excited about the possibilities and I still recoil at the refined marketer prose.


I'm not really seeing this direction of travel. I hear a lot of claims, but they are always in the third person. I don't know or work with any engineers who rely heavily on these tools for productivity. I don't even see any convincing videos on YouTube. Just show me one engineer sitting down with these tools for a couple of hours and writing a feature that would normally take a couple of days. I'll believe it when I see it.


Well, I rely on it a lot, but not in the IDE; I copy/paste my code and prompts between the IDE and the LLM. By now I have a library of prompts in each project that I can tweak and just reuse. It makes me 25% to 50% faster. Does this mean every project is done in 50-75% of the time? No, the actual completion time is maybe 10% faster, but I do get a lot more time to spend thinking about the overall design instead of writing boilerplate and reading reference documents.

Why no YouTube videos though? Well, most dev YouTubers are actual devs who cultivate an image of "I'm faster than an LLM, I never re-read library references, I memorise them on first read" and so on. If they then show you a video of how they forgot the syntax for this or that Maven plugin config and how an LLM fills it in in 10s instead of a 5-minute Google search, that makes them look less capable on their own. Why would they do that?


Why don’t you read reference documents? The thing with bite-sized information is that it never gives you a coherent global view of the space. It’s like exploring a territory by crawling instead of using a map.


Can you give me an example of one of these useful prompts? I'd love to try it out.


you said it, bro.


I think at least part of the negativity is due to the tech bros hyping AI just like they hyped crypto.


Looks interesting. We solved this problem with Kinesis Firehose, S3 and Athena. Pricing is cheap, you can run arbitrary SQL queries, and there is zero infrastructure to maintain.


Storing small events in S3 can explode costs quickly.

At 1M events/day that's $7.5/day. Decent

At 15M, $75/day

Cost for 150 million S3 PUT requests per day of 25KB each would be $750/day, assuming no extra data transfer charges.

With ClickHouse you won't get charged per read/write.
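A rough sanity check on that last figure, assuming S3 Standard PUT pricing of about $0.005 per 1,000 requests (varies a little by region):

    puts_per_day = 150_000_000
    put_cost = puts_per_day / 1_000 * 0.005   # $0.005 per 1,000 PUT requests
    print(f"${put_cost:,.0f}/day")            # -> $750/day, before storage and transfer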


Kinesis supports buffering - up to 900 seconds or 128MB. So you are way out on your cost estimates. Over time, queries can start costing more due to S3 requests, but regular Spark runs to combine small files solve that.
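Back-of-the-envelope for why buffering changes the picture, using the same 150M x 25KB/day workload and assuming full 128MB flushes (Firehose ingestion itself is billed separately, per GB):

    events_per_day = 150_000_000
    event_kb = 25
    total_mb = events_per_day * event_kb / 1024   # ~3.7M MB/day ingested
    objects_per_day = total_mb / 128              # ~29k PUTs/day instead of 150M
    put_cost = objects_per_day / 1_000 * 0.005
    print(round(objects_per_day), f"${put_cost:.2f}/day")   # ~28,600 objects, ~$0.14/day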


I haven't even got to Kinesis or bandwidth or storage.

Even if you compress N objects through Spark etc., your starting point would still be the large number of writes. So that doesn't change. The costs would be even larger considering the additional medium-sized PUTs that double the storage, plus potentially N deletes. I have also heard that Athena, Presto, etc. charge based on rows read.

