
WTF is up with Luxembourg on that graph?

It is a tax haven with one of the highest GDP per person in the world, so why is it, by orders of magnitude, the biggest recipient of EU largesse per person??!


Lots of people who work in Luxembourg don't live there, so anything "per capita" is a bit misleading.

Additionally a lot of the EU's institutions are based there or have offices there, some of which might count as investments as well.

Lastly, everything there is really expensive. So you need to invest a larger amount to achieve the same thing as elsewhere.


These are reasons why it might not be the largest provider of funds per capita, not why it would be by orders of magnitude the biggest recipient.

I have been to Luxembourg and to Hungary, Bulgaria & Greece - the otherwise obvious contenders for "poorest" in the EU - and Luxembourg should not be in the picture.


If it gets funds for restoring one railway bridge or something of that sort, the fact that the population is tiny makes the per capita investment look huge - just the usual tiny-country effects.

A bunch of foreign companies also incorporate their EU subsidiaries there (presumably due to some tax benefit). I imagine that distorts their GDP quite badly as well.

I presume this is because of the EU institutions there and that expenditure to maintain those institutions counts towards receipts (and this effect is then exaggerated due to Luxembourg's small population). Certainly no one in the EU is under any illusion that Luxembourg is poor, much less vastly poorer than the next poorest EU country.

Notoriously difficult to portray correctly in EU money-shuffling statistics. Some money not actually granted to the grand duchy is still filed under "beneficiary country: Luxembourg" because some program or institution is headquartered there. And it is essentially impossible to compare apples to apples between what happens in the actual EU budget and what happens in Kirchberg, home to the EIB.

Small population plus lots of EU institutions.

They're chimps that are on the other side of the Congo River (and neither species of genus Pan can swim).

They're super close to chimps (and definitely much closer to them than we are), rather than "a very different species".


Right?

There are lots of reasons this stuff happens, but one of them is definitely that some kids aren't acting out for school reasons but for attention from their parents.


Way back when I first read Dune, this seemed like such a weird, niche ban. I don't think I had a lot of respect for it.

Now, like all good SciFi, it seems fairly prescient...


Agree with your take, very similar to what I heard.

However, we were strongly told that for early-stage startups, some (California) VCs would only bother looking at companies incorporated in California or Delaware.


> LLMs aren't perfect rule following machines is the fundamental problem here

I kind of get what you're saying, but let us not pretend that SW engineers are perfect rule followers either.

Having a framework to work within, whether you are an LLM or a human, can be helpful.


If someone regularly ignored critical instructions even though they were written down and had been told to follow them, that person would be fired.

People are excused all the time because they excel in other areas. It's about their value as a whole, and that's where we are with LLMs. They aren't perfect, but they do plenty we can't, which means they are worth using.

I essentially do this.

Super simple. (Although I use rewrites at my DNS layer for the whole local LAN, but whatever.)

It also solves issues my password manager has with multiple services on one host but with different ports, by putting each on its own 2nd-level domain.
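
To illustrate (a sketch only - the hostnames, IPs and ports here are hypothetical, and I'm assuming something like dnsmasq for the DNS rewrite and Caddy as the reverse proxy, not necessarily what the setup above uses):

    # dnsmasq: resolve every *.home.example.lan name to the proxy box
    address=/home.example.lan/192.168.1.10

    # Caddyfile: one hostname per service, so the password manager sees
    # distinct domains rather than one host with many ports
    jellyfin.home.example.lan {
        reverse_proxy localhost:8096
    }
    grafana.home.example.lan {
        reverse_proxy localhost:3000
    }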


FWIW, Zerodium shut down in 2025.

Or at least went dark...


Just went dark.

So this was my first thought on reading the article.

I don't know if it is just an Australian thing, but certainly my friend group would all just say "petrichor" for this scent.

The Australians who coined the term specifically credited Indian perfumers for their matti ka attar; they had collected and distilled it for centuries before (western?) science investigated.

Like lots of scents, the fresh and the preserved versions are different. Petrichor has a sharp ozone smell that does not persist when preserved; it ends up with an earthier smell afterwards.


Most of the good major models are already very capable of changing their writing style.

Just give them the right writing prompt: "You are a writer for the Economist, you need to write in the house style, following the house style rules, writing for print, with no emoji..." etc etc.

The large models have already ingested plenty of New Yorker, NYT, The Times, FT, The Economist etc articles; you just need to get them away from their system-prompt quirks.


I think that should be true, but it doesn't hold up in practice.

I work with a good editor from a respected political outlet. I've tried hard to get current models to match his style: filling the context with previous stories, classic style guides and endless references to Strunk & White. The LLM always ends up writing something filtered through tropes, so I inevitably have to edit quite heavily before my editor takes another pass.

It feels like LLMs have a layperson's view of writing and editing. They believe it's about tweaking sentence structure or switching in a synonym, rather than thinking hard about what you want to say, and what is worth saying.

I also don't think LLMs' writing capabilities have improved much over the last year or so, whereas coding has come on leaps and bounds. Given that good writing is a matter of taste which is beyond the direct expertise of most AI researchers (unlike coding), I doubt they'll improve much in the near future.


You're ignoring what I said. They work better when you turn it into a two-step process. Step 1: create a template. Step 2: execute the template.

>The large models have already ingested plenty of New Yorker, NYT, The Times, FT, The Economist etc articles

And that ends up diluting them. Going back and doing another pass on only a subset would give them a stronger voice. At some threshold, ingesting more information averages it out - a regression to the mean - instead of adding information. It's a giant table of word associations; it can regress.
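
A minimal sketch of that two-step flow (the model name and prompts are illustrative, and it assumes the OpenAI Python client with OPENAI_API_KEY set; any chat API would do):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        # one chat-completion call; the model choice is illustrative
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Step 1: have the model produce a structural template in the target voice.
    template = ask(
        "Write a skeleton for a 600-word Economist-style leader on central "
        "bank independence: the sections, the argument each one makes, and "
        "the register to use. No prose yet."
    )

    # Step 2: execute the template, so the prose pass fills in a structure
    # fixed up front instead of drifting back toward the mean.
    article = ask("Follow this template exactly, filling in the prose:\n\n" + template)
    print(article)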


