
Nothing has opened my eyes more to how much mainstream media distorts reality to fit its narrative than the genocide Israel is committing in Gaza.

You can sit with literal video of an incident, and then see media headlines tell a completely different story than what actually happened.

Social media in our generation has been a weird amplifier of both misinformation as well as truth from the ground that contradicts misinformation in the media.

The range of topics I trust the media to report on has narrowed greatly, down to ones that are completely apolitical, which is sad (they’ve always been biased, but at least I felt you could tell that they were biased and read through it).


With investments of this size (similar to Anthropic's recent round), do they actually get the full €1.7B deposited into their bank account? Or does it work in some other way?


It works whatever way is agreed upon between them and the investors. For such large amounts it’s unlikely to be pure cash (there’s likely some amount of services somewhere in there), and they won’t be calling for all that cash at once.

The cash that is guaranteed is sent as soon as the investee needs it (they make what is called a capital call). Early-stage startups and investments usually do a single capital call for the full amount, but larger amounts are often committed over a period of time; this also helps the investors schedule their own cash flow: for example, if I have 500m this year and 500m next year, I can invest 1b in you, given the right schedule.
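
To make the scheduling arithmetic concrete, here's a toy sketch in TypeScript (all figures and dates are hypothetical, not from any real deal):

    // Toy model of a committed investment drawn down via capital calls.
    // Real terms are whatever the term sheet says; this only shows the arithmetic.
    type CapitalCall = { date: string; amount: number }; // amounts in millions

    const committed = 1_000; // 1b committed up front

    const schedule: CapitalCall[] = [
      { date: "2025-06-01", amount: 500 }, // funded from this year's cash
      { date: "2026-06-01", amount: 500 }, // funded from next year's cash
    ];

    const drawn = schedule.reduce((sum, call) => sum + call.amount, 0);
    console.log(drawn === committed); // true: fully drawn, but never 1b in cash at once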


Anthropic has much more funding than that. The most recent round was at $13B and the one before was at $3.5B. Now imagine that GPT received $40B in one round!


GPT is not a company


Neither is OpenAI, but here we are.


OpenAI, Inc. is a company, and it owns other companies including OpenAI Holdings, LLC and OpenAI Global, LLC. https://en.wikipedia.org/wiki/OpenAI


>In May 2025, the nonprofit renounced plans to cede control of OpenAI after outside pressure.


Regardless of non-profit shenanigans, OpenAI is an entity. GPT is a type of LLM, which is not specific to OpenAI; other companies use it as well.


The nonprofit is OpenAI, Inc., a company: https://opencorporates.com/companies/us_de/5902936. Look at how many times the word "company" is used in the Wikipedia article.


my bad, but you know what I meant


I'm also wondering this. It also doesn't seem to be a coincidence that ASML is an integral part of the semiconductor value chain.


> Ah, the penny drops. The idea that you can’t run a traditional server and must rely on serverless vendor if you’re “serious”

That's not at all how you should read this. Later on they give an example of exactly the kind of problem you'll run into once you start needing to horizontally scale your Next.js servers (e.g. as pods in k8s, which is not serverless):

> The issue of stale data is trickier than it seems. For example, as each node has its own cache, if you use revalidatePath in your server action or route handler code, that code would run on just one of your nodes that happens to process that action/route, and only purge the cache for that node.

Seeing as a Node.js server running Next.js to serve SSR or ISR (otherwise you'd just serve static files, which I personally prefer) is not known for great performance, you will quickly run into the need to scale your application once you hit any meaningful amount of traffic.

You can then try to keep scaling vertically to avoid the horizontal pains, but even that has limits, seeing as Node.js is single-threaded and the templating work of stringing together HTML can simply take too long (that is, compute will always block; only I/O can be yielded).

The common solution for this in Python, Ruby, and JS/Node.js is to run more instances of your program. They could still be on the same machine, but voila! You are now in horizontal-scaling land and will run into the cache issues mentioned above.
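
To illustrate, a minimal sketch of that pattern using Node's built-in cluster module (nothing Next.js-specific here; availableParallelism assumes Node 18.14+):

    // Minimal sketch: one worker process per CPU core via Node's cluster module.
    import cluster from "node:cluster";
    import http from "node:http";
    import { availableParallelism } from "node:os";

    if (cluster.isPrimary) {
      // Fork one worker per core; each is a separate process with its own memory,
      // and therefore its own in-memory cache -- exactly the per-node situation
      // where revalidatePath only purges the node that happened to handle it.
      for (let i = 0; i < availableParallelism(); i++) cluster.fork();
    } else {
      http
        .createServer((req, res) => res.end(`served by pid ${process.pid}\n`))
        .listen(3000); // workers share the port; connections are spread across them
    }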

There was not really anything in the article that should have led you to believe this was a "serverless only" issue, so I think the bashing of Netlify here is quite unwarranted.


> (e.g. as pods in k8s, which is not serverless):

> There was not really anything in the article that should have led you to believe this was a "serverless only" issue, so I think the bashing of Netlify here is quite unwarranted.

It's not a serverless-only issue, because you can use an external cache like Redis[1]. You can scale to hundreds of instances with an external Redis cache and you'll be fine. The problem is that you can't operate at Netlify's scale with a simple implementation like that. Netlify can't afford to run a Redis instance for every Next.js application without significantly cutting into their margins (not just from compute cost; running and managing millions of Redis instances at scale won't work).

Clearly Vercel has their own in-house cache service that they have priced into their model. Netlify could run a Redis instance per application, though more realistically it needs its own implementation of a multi-tenant caching service that is secure, scalable, cost-effective, and fits their operational model. They are not willing to invest in that.

[1] https://github.com/vercel/next.js/tree/canary/examples/cache...
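
For the curious, a rough sketch of what wiring that up can look like, following the shape of Next.js's custom cacheHandler API as used in the linked example (ioredis, the "next:" key prefix, and REDIS_URL are my assumptions; details vary by Next.js version):

    // next.config.js
    module.exports = {
      cacheHandler: require.resolve("./cache-handler.js"),
      cacheMaxMemorySize: 0, // disable the default per-node in-memory cache
    };

    // cache-handler.js -- all nodes read/write the same Redis, so a purge or
    // update on one node is immediately visible to every other node.
    const Redis = require("ioredis");
    const redis = new Redis(process.env.REDIS_URL);

    module.exports = class CacheHandler {
      async get(key) {
        const entry = await redis.get(`next:${key}`);
        return entry ? JSON.parse(entry) : null;
      }

      async set(key, data, ctx) {
        await redis.set(
          `next:${key}`,
          JSON.stringify({ value: data, lastModified: Date.now(), tags: ctx?.tags ?? [] })
        );
      }

      async revalidateTag(tags) {
        // Sketch only: a real handler would index keys by tag and delete matches here.
      }
    };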


Interesting and definitely something platforms must take into consideration.

Now back to the post: implementing a custom cache is not something Netlify is strongly complaining about. They are mostly asking for some documentation and reasonably stable APIs. Other frameworks seem to provide that.


> Netlify could run a Redis instance per application, though more realistically it needs its own implementation of a multi-tenant caching service that is secure, scalable, cost-effective, and fits their operational model. They are not willing to invest in that.

But they have done that, as they say in the post.

Disclosure: used to work at Netlify, now work at Astro


Hmm, beyond a bug they had in Bun between versions 1.0.8 and 1.1.20[0], Bun has otherwise worked perfectly fine for me.

You have to make a few adjustments, which you can see here: https://github.com/codetalkio/bun-issue-cdk-repro?tab=readme...

- Change app/cdk.json to use bun instead of ts-node (see the snippet below)

- Remove package-lock.json + existing node_modules and run bun install

- You can now use bun run cdk as normal

[0]: https://github.com/codetalkio/bun-issue-cdk-repro
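
The cdk.json change from the first point looks roughly like this (bin/app.ts is a placeholder for whatever entry file your app uses; the default generated by cdk init points at ts-node instead):

    {
      "app": "bun run bin/app.ts"
    }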


If you're not a Medium member, I've included a link at the start of the post where you can read it for free :)


Imagine this happening on a longer sail where help might be much further away; that’s kinda scary.

I definitely wouldn’t be surprised if this ends with people preparing for these attacks in the future, if it keeps occurring, and I’m afraid that won’t end well for the orcas.


I feel I should bring up that in the EU there exist almost two worlds when it comes to GDPR: Germany, and the rest of the countries.

I’ve made software for the childcare industry, where the data concerns are greater than most other industries.

Nobody had any problem with AWS, or really any non-EU vendor, as long as they lived up to the GDPR requirements and could provide the usual agreements.

Only in Germany would you run into requirements to host either in Germany (at worst) or at least within the EU (at best). Additionally, there are a lot of German-specific laws on top that simply don’t exist in the other EU countries, and the general population is also much more concerned about data privacy and residency than in any other EU country.

It was a world of difference, and honestly enough for me that I would not enter the German market again if it meant needing to comply with any additional effort than the rest of the EU market.

A bit more of a rant: the hosting solutions in Germany are also quite atrocious once you get to a certain scale. Lack of proper managed services, tons of instability, insane maintenance policies, poor security support (e.g. no 2FA for many). Once you’ve gotten used to how AWS/GCP/Azure handle things, it’s hard to go back to that world.

Edit: Almost as a response to my last point, AWS is setting up a dedicated EU sovereign cloud: https://aws.amazon.com/blogs/aws/in-the-works-aws-european-s...


That EU Sovereign Cloud won't help at all. The basic facts remain the same: Amazon is a US company, and the US government can force Amazon to hand over data using a secret FISA order. They can force Amazon to add a backdoor to get the data if they have to.

The only way out is to not be a US company.


> I feel I should bring up that in the EU there exist almost two worlds when it comes to GDPR: Germany, and the rest of the countries.

Well, Germany isn't the country that made Google Analytics illegal. Other countries do care.

> Nobody had any problem with AWS, or really any non-EU vendor, as long as they lived up to the GDPR requirements and could provide the usual agreements.

I was in charge of the tech for a massive man-in-the-middle company in Germany, where we integrated with lots of companies to provide data for other companies. No one had an issue with AWS, because they were all using it. It's consumers who care and consumers who will file reports, and it's companies that will pay the fine.


I make daily/weekly/monthly goals, and structure them in whatever app I use, e.g. Linear, Todoist, or Notion.

- Monthly goals are very high level and few (e.g. “Make PoC for this”, “Redesign and relaunch blog”)

- Weekly goals are more tangible and limited (e.g. “Settle on approach for calling Rust from Swift code”, or “Finish design and styling of posts”)

- Daily are very concrete (e.g. “Set up UniFFI pipeline to generate Swift bindings” or “Implement new theme across blog pages”)

Sometimes things come up that I only discover during implementation, and then I typically shift a daily goal to the next day.

This has worked well so far for giving me focus; I pick the daily goals based on the weekly focus, from the list of open tasks/issues I have across my various projects.

I set up each thing I’m working on as a Project in e.g. Linear, and immediately add a priority when I add things, which allows me to easily keep an overview of many smaller or larger projects I might have going on or want to do in the future.

While I do like paper, for me that’s only for ephemeral things. I prefer to keep everything digital, which lets me easily add stuff from my phone on the go when I get an idea while out-and-about. I also write much faster on a keyboard, and I use the various tasks as a dumping ground for info while I’m working through or researching something.


I've been playing Diablo 4 and Diablo II: Resurrected on CrossOver ever since Whisky stopped working with Battle.net. It's been working great, tbh!

