Hacker News | wholesomepotato's comments

What's amazing about it if the debt increased more in absolute terms? :D If I max out my credit cards, I can at least show spending (GDP) growth matching what I borrowed...

How is it not obvious to everyone that this is the end game? Increasing rates will rapidly increase the debt, which increases inflation, which pushes rates up further...


Anyone who's lived long enough in the USA would laugh at end-game questions. Inflation? 3.2% is absolutely peanuts compared to what it was some 3-4 decades ago. Unemployment? Are you kidding? Pandemics? Did you compare what it is today with what it was 2 years ago? COVID is only the 4th-leading cause of death - which is a lot, but very far from mortal danger - and the country is even inching up in life expectancy. All the fundamentals are pretty much positive, and only debt/GDP is even vaguely comparable to the worst numbers in US history.

Now, what about alternatives? If you don't like US numbers, can you find anyplace better on Earth? I'll wait. Meanwhile, there are forward-looking questions and topics - the USA is about to jump in 2, count 'em, two very consequential areas, AI and space exploration, within this decade. Are you sure your better bets are against the USA? Did you take the history lessons into account, or is it business as usual?


> Did you take the history lessons into account, or is it business as usual?

Did we take the same history lessons? Lots of what led to the fall of the Roman Empire has been happening in the USA over the past few decades. And then we have China literally opium-warring us with fentanyl and killing tens of thousands of US citizens each year. But yeah, count on the cool shiny AI and space exploration to save the US.


I'm sorry, but most of your comment has nothing to do with what I said and seems like random rambling. Except:

> 3.2% is absolutely peanuts compared to what it was some 3-4 decades ago.

But 3-4 decades ago, governments, all institutions, and the economy as a whole were not nearly so deep in debt. https://fred.stlouisfed.org/series/GFDEGDQ188S . And this chart doesn't include all the "unfunded liabilities".

You can hike rates to curb the inflation only if it doesn't bankrupt everything and everyone.

And 3-4 decades ago boomers globally still had most of their productive years ahead of them, without the "baggage" of tons of kids, and globalization had plenty of room left to increase productivity.

I don't want to waste time talking about nonsense like AI and "space exploration". The financial system is screwed. That's all I'm saying.


> You can hike rates to curb the inflation only if it doesn't bankrupt everything and everyone.

They will inevitably inflate away much of the debt. But, that hurts dollar savers more than debtors.

Some cash rich companies today are earning more on their deposits than they're paying on their bonds from a couple years ago.


You can't inflate away government debt, not meaningfully, because it's almost always going to be at a rate higher than inflation by construction.


Yes, but also - they lie about the inflation and force the public to hold the bonds anyway (aka. financial repression, yield curve control).


Exactly. YCC is already happening in Japan; it's inevitable here too. The only alternative is outright default, which never happens when the debt is denominated in a currency the debtor can print.


Exxon is a private company, so some basic level of competency can be assumed. The Fed is basically a bunch of incompetent academic buffoons.


I thought the Fed was private.


What products/services do they produce if they're private?

They're in this sweet spot where they're neither held accountable by the voter, nor by the consumer.

They can be wrong over and over and over again and nothing is going to happen. A private company at least has to not lose more money than it can bribe politicians to give it.


Private/public doesn’t have anything to do with selling a product, but with who owns the institution.

> neither held accountable by the voter, nor by the consumer

Don’t know if you meant to do this, but you’re implying that voter != consumer. Citizens United strikes again!

> A private company at least has to not lose more money than it can bribe politicians

1 dollar 1 vote!


I would use XOR 0x30303030lu, then OR with the value shifted left 12 bits, take bits 24..12, and look it up in a 4K pre-computed lookup array.


How are you packing the number into 12 bits? Multiple shift+and+or operations? If so, I'd expect that to generally be slower than a multiply+shift.


It's one XOR, SHL, OR, SHR, AND; on some archs the shifts might come free with another instruction. I'd expect it to be faster.

    a = v ^ 0x30303030lu     // normalize to digit values: 0xXX0a0b0c
    b = (a << 12) | a        // combine into 0xXXbacb0c
    idx = (b >> 12) & 0xfff  // get bac
    res = lookup[idx]


That's kinda neat. Actually, if you're doing that, you may as well reduce it to a single shift+or:

    a = ((a >> 12) | a) & 0xfff

You could also skip the xor with 0x303030 by adjusting the lookup table accordingly.
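For concreteness, here's a minimal Python sketch of this shift+or variant, keeping the xor. The 3-digit big-endian packing, the table layout, and the names are my assumptions, and input is assumed to be valid ASCII digits:

```python
# Sketch of the lookup-table parse with a single shift+or.
# Assumption: the 3 ASCII digits "abc" are packed as 0x000a0b0c
# (most significant digit in the highest used byte); input is valid.

def build_table():
    # 4096-entry table indexed by the nibble pattern (b, a, c)
    table = [0] * 4096
    for a in range(10):
        for b in range(10):
            for c in range(10):
                table[(b << 8) | (a << 4) | c] = a * 100 + b * 10 + c
    return table

TABLE = build_table()

def parse3_lut(s):
    v = (ord(s[0]) << 16) | (ord(s[1]) << 8) | ord(s[2])
    x = v ^ 0x303030               # strip the ASCII '0' bias per byte
    idx = ((x >> 12) | x) & 0xfff  # nibbles collapse to (b, a, c)
    return TABLE[idx]

print(parse3_lut("123"))  # -> 123
```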

Unfortunately, you'd still need to factor in the length argument somehow. That is, if given "23" with length=1, it should parse to 2, not 23. You could address this with a variable shift, but at that point, I can't see it being any better than a multiply+shift, even assuming the lookup table is fully cached.

The other major issue is validation, which the lookup table doesn't help much with.


Might be wrong, but this shortcut corrupts the lower bits with garbage from the higher byte.

The lookup table can detect some, but not all errors, so yeah, it relies on valid input.


I can't see how it's any different. If you have 0x0x0y0z, a shift+or gives 0x..yxz in the low bits, so the result is the same as what you have, just with fewer operations.


It's unclear what is in the highest byte, so I assumed not 0x000x0y0z but 0xab0x0y0z, where ab is unknown (in the previous comment I used XX for this). If the highest byte is known, then sure, even better.


The '& 0xfff' eliminates the highest byte, so it doesn't matter what it is.

Your code doesn't handle the 'length' parameter, so the problem isn't the highest byte, it's bytes beyond 'length'.


I think you're right. Even better. Did you have time to bench it, etc.?

I see what you mean about length. I just skimmed over the text originally, as I don't have time for rather lame problems like this. I'd just add 3 bits of length as part of the index, job done. A 12KB lookup table instead of 4KB, assuming 0 is not a valid value (negate to avoid needing 0b11).


Adding the length means another shift+or operation at minimum. I already think this is slower than the technique presented in the article, and this would make it worse.

It's an interesting idea, but I don't see it being practical, even if the size of the table wasn't an issue.


Instruction-level parallelism will make the extra shift free. Other than that, it needs to be benched and might depend on the CPU/arch. I don't care enough to bench and optimize further.


It's most definitely not free. It'd consume fetch bandwidth, decode/rename/scheduler slots, an execution port, etc.

The comparison here is:

    ((v ^ 0x303030) * 0x640a0100) >> (len << 3)

against:

    table[(((v >> 12) | v) & 0xfff) | (len << 12)]

The former is 4 ops, the latter is 6 ops, so throughput-wise the former wins. Latency-wise it also wins, considering that L1 cache lookups are generally 3-5 cycles, whilst an integer multiply is typically 3-4.


This needs to be said: every single thing Poettering touched was a huge improvement. A great hero of Linux userspace architecture.


> In an infinite world of abundance ...

:D


Snap back to reality, ope there goes gravity


Features of modern CPUs don't really prevent them from real-time usage, afaik. As long as something is bounded and can be reasoned about, it can be used to build a real-time system. You can always assume no cache hits and the like, maximum load, etc., and as long as you can put a bound on the time it will take, you're good to go.


Exactly. "Real-time" is a misnomer; it should be called "bounded-time". As long as the bound is deterministic, known in advance, and guaranteed, it's "real-time". For it to be useful, it also must be under some application-specific duration.

The bounds are usually in CPU cycles, so a faster CPU can sometimes be used even if it takes more cycles. CPUs capable of running Linux usually have higher latency (in cycles) than microcontrollers, but as long as that can be kept under the (wall clock) duration limits with bounded-time it's fine. There will still be cases where the worst-case latency to fetch from DRAM in an RT-Linux system will be higher than a slower MCU fetching from internal SRAM, so RT-Linux won't take over all these systems.


So the things that might prevent you are:

1. Suppliers have not given you sufficient information for you to be able to prove an upper bound on the time taken. (That must happen a lot.)

2. The system is so complicated that you are not totally confident of the correctness of your proof of the upper bound.

3. The only upper bound that you can prove with reasonable confidence is so amazingly bad that you'd be better off with cheaper, simpler hardware.

4. There really isn't a worst case. There might, for example, be a situation equivalent to "roll the dice until you don't get snake eyes". In networking, for example, after a collision both parties sometimes retry after a random delay, so the situation is resolved eventually with probability one, but there's no actual upper bound. A complex CPU and memory system might have something like that. Perhaps you'd be happy with "the probability of this operation taking more than 2000 clock cycles is less than 10^-13", but perhaps not.
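The dice example in point 4 can be made concrete with a toy model (the collision probability and attempt counts here are made up for illustration):

```python
# Toy model of randomized retry after collisions: each attempt
# independently collides with probability p, so success within any fixed
# number of attempts is never guaranteed -- the bound is only probabilistic.

def prob_exceeds(n, p):
    # P(more than n attempts are needed) = p ** n: positive for every
    # finite n, so there is no hard worst case, only a shrinking tail.
    return p ** n

# With a 1/16 collision chance, needing more than 20 attempts has
# probability 16**-20 (~1e-24): tiny, but never exactly zero.
```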


You're probably thinking of bus arbiters in 4), which are generally fast but have no bounded settling time.


System management mode is one example of a feature on modern CPUs that prevents real-time usage https://wiki.linuxfoundation.org/realtime/documentation/howt...


mlock your memory, test with cache-miss and cache-invalidation scenarios, and use no heap allocation; that will help, but it's a bit hard.



Does anyone use paged memory in hard realtime systems?


Public discourse was and should always be in the private domain. Anything else is totalitarianism by definition.


Corporations censor speech and any content on their platforms that they deem undesirable or threatening to their profits. That’s totalitarianism by definition, and yet private.

Public discourse squarely belongs to the public, and hosting it must be perceived and intended to be a kind of public service; otherwise we can just go back to the good old days of gathering and debating in public squares.


Totalitarianism would require the government to do the censoring.

You can't assume the government wouldn't censor based on the exact same criteria.

An entity with control will exercise it.

When it's a private entity there's a theoretical other place you can go. When it's public (government) there is fundamentally no other place you can go.


Government isn’t the only entity capable of public service. Non-profit organizations are a thing. And that is precisely why the comment I replied to has problematic notions of “public” and “private”. NPOs are private entities, but do not behave the same way as corporations.


"Public Discourse" being "Private"? Public is in the name, though. So that statement seems... well... weird?

"Public" and "Private" and "Government" mean different things at different times.

In this case I think you mean Public Discourse is not controlled by the Government. And that's where a lot of confusion is coming in.


The negative outcomes are hard to argue with given Meta's scale. Having viable and vibrant journalistic outlets (that Meta had a hand in killing) and just talking to people, unmediated by a platform were both healthier.


There’s a difference between private domain and privately regulated.


TIL public school board hearings are totalitarian.


They do increase shareholder value, just only in the short term. The problem with publicly traded companies is not so much wanting to increase the value, but how short the time horizon is, when most owners don't have any understanding of the business other than just a handful of numbers every quarter.


If you want to really understand modern money, check out the book "Broken Money".


Given that almost nobody here is going to go hunt up the book and read it in order to find out why you think it's relevant, would you give us a TL;DR?


The book goes over the history of human transactions from hunter-gatherers to the modern day and how currency, or a ‘ledger’, has played a role in those transactions as civilization has progressed.

The topics of inflation and currency debasement over hundreds of years are covered in detail and are a common theme throughout the entire book.

Overall I’d recommend the book; Lyn is a great writer. The end is definitely Bitcoin-heavy, but I still enjoyed it (I do not own any cryptocurrency).


Lyn's research is top notch too. Keep in mind they were a bitcoin skeptic for a long time.


What about my code?


Do you make a living off your code's likeness? That is, is there a wholesomepotato device, like Duff's or Not John Carmack's fast inverse square root, that's immediately recognizable as having been created by you, and you get paid for its appearance on late night tv shows and in movies?


Copilot's whole job is to make code look like it functions... arguably, that's entirely what these systems do.

The AI isn't doing fundamentals of programming any more than it's putting on makeup to look like an actress.

If you go out and find a woman who's shaped like her, get a makeup artist and a costume designer, and put together a simulacrum... that's also illegal, right?

Anyway, it's pretty difficult to judge one as being ethically dubious and the other as hand-wavy concern trolling.


We are entitled to it.

The system cannot be compromised by a developer retracting their code from the machine. A developer cannot be allowed authority over the machine. Code is powerful, unlike ScarJo's likeness.

Knowledge workers are permanently forced to lose control of their own labour.

It's how it is.


You can use celebrities in your code as long as they're open source.


If your code contributed to any AI project, it sure must suck, because I rarely get code from any AI that runs on the first try. If it was verbatim - which is the only valid claim - shouldn't it work every time? How do you define the "likeness" of a codebase, or someone duplicating it stylistically, and how would that accusation be defended in court?


Yeah, that too depending on what you mean. Just don't get carried away claiming things like "They saw I have a head and now the people in the generated ads have heads too" or "my code showed how to use parts of the Box2D library and their code does too" are the same type of issue as "they used my public data to impersonate me" or "my code is being stolen". Maybe you want to argue the latter things are still problematic but no amount of pointing it out in situations like this achieves that.

