
I've actually found the opposite: it's easier to conceptually understand the continuous FT, then analyze the DTFT, DFT, and Fourier series as special cases of applying a {periodic summation, discrete sampling} operator before the FT.
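To make the {periodic summation, discrete sampling} duality concrete, here's a minimal numpy sketch (my own illustration of the standard DTFT/DFT identity, not something from the thread): sampling the DTFT of a length-L signal at N evenly spaced frequencies equals the N-point DFT of that signal's length-N periodic summation, i.e. sampling in frequency corresponds to periodic summation in time.

    import numpy as np

    rng = np.random.default_rng(0)
    L, N = 12, 4                  # signal length and number of frequency samples
    x = rng.standard_normal(L)
    n = np.arange(L)

    # DTFT of x evaluated at the N frequencies w_k = 2*pi*k/N
    dtft_samples = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

    # Periodic summation of x with period N (time-domain aliasing), then its N-point DFT
    x_folded = x.reshape(-1, N).sum(axis=0)   # assumes L is a multiple of N
    assert np.allclose(dtft_samples, np.fft.fft(x_folded))

The same pattern (apply a sampling or summation operator, then take the appropriate transform) is how the DFT, DTFT, and Fourier series fall out of the continuous FT.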

This makes sense if you want to reject the modern web, but using lynx or w3m would work as well. But if you generally want to champion free software and put the "personal" in PC, then I think you necessarily need to familiarize yourself with modern computing or else you can't really have a good opinion on it.

For instance, if you refuse to play around with LLMs out of a dogmatic belief that they're not "truly" open (note: I don't know what his true opinions are), then you risk completely missing the boat and can't meaningfully shape the space of modern discourse.


No, you don't know what the reasons are. You're assuming he just wants to avoid graphical interfaces. That might not be the reason. In fact, I suspect it has to do with privacy, where lynx won't help you.

I assume it is more about structure and time. If you start browsing, you wait for pages to load and then probably go one page further, and then to the next. In batch mode you have a designated time window to go through mail, read what is there, and avoid jumping into some rat's nest of never-ending paths.

In addition, you get the privacy benefits (website operators don't know where you are), you're blocked from "non-free JavaScript programs", and you only deal with textual content; everything else doesn't come through.


What is the privacy leak vector when using lynx? It doesn't run JS, so I'm not sure how running wget on another server is better than lynx over ssh or mosh.

I don't know, we're both speculating. I'm just advising against "oh he could just as well do X" - you don't know.

I think Michael Levin's work on bioelectricity fits, for basically introducing a new paradigm with which to answer the question

>why is this cell doing things that maximised the comparative fitness of its genes in a wildly different environment, features of which I am not even aware of let alone able to manipulate, and not what I want it to do


"From Our Family to Yours"

>It used to rightfully be something we looked forward to

Science fiction has always been mixed. In Star Trek the cool technology and AGI-like computer are accompanied by a post-scarcity society where fundamental needs are taken care of. There are countless other stories where technology and AI are used as a tool to enrich some at the expense of others.

>We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it

I don't strongly hold one opinion or the other, but I think the fundamental root of people's backlash is that it jeopardizes their livelihood. Not in some abstract "now the beauty and humanity of art is lost" sort of way, but much more concretely: because of LLM adoption (or at least the hype), they are out of a job and cannot make money, which hurts their quality of life far more than access to LLMs improves it. Then those people see the "easy money" pouring into this bubble, and it would be hard not to get demoralized. You can claim that people just need to find a different job, but that ignores the reality that over the past century the skill floor has basically risen and the ladder has been pulled up; and perhaps even worse, trying to reach for that higher bar still leaves one "treading water" without any commensurate growth in earnings.


> In Star Trek the cool technology and AGI-like computer are accompanied by a post-scarcity society where fundamental needs are taken care of.

The Star Trek computer doesn't even attempt to show AGI, and Commander Data is the exception, not the rule. Star Trek has largely been anti-AGI for its entire run, for a variety of reasons - dehumanization, going unsafe/unstable, etc.


I think you're confusing AGI with ASI or sentience? The Enterprise's computer clearly meets the definition of AGI, in that it can basically do any task the humans require of it (limited only by data, which humans need to go out and gather). Especially consider that it also runs the holodeck.

Unlike modern LLMs, it also correctly handles uncertainty, stating when there is insufficient information. However, they seem to have made a deliberate effort to restrict/limit the extent of its use for planning and command (no "long-running agentic tasks" in modern parlance), requiring human input/intervention in the loop. This is likely because, as you mentioned, there is a theme of "losing humanity when you entrust too much to the machine".


This is almost surely in violation of a bunch of corporate policies... The whole point is that they don't want you having access to anything once access is cut.

> bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper.

This is basically a Japanese futon. The only cons I can think of are the one the other commenter noted, about mold buildup in more humid climates, and that mattresses are usually built assuming a bit of "flex" from the frame + box spring, so a mattress on a bare floor might feel slightly firmer than you'd expect.


I wonder if semi-reliable RAM could be made to work for training. After all, gradient descent already works in a stochastic environment, so maybe the noise from a few flipped bits doesn't matter too much.

Also, it depends on the nature of the error. If only a small memory range is affected, you could patch the kernel to avoid it.

No need for patching: you can disable specific ranges on Linux using the memmap kernel parameter. It's often used for exactly that purpose.
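For example (a sketch: the size and base address below are made-up placeholders, and the "$" typically needs escaping if you set this through GRUB's config), marking a flaky 64K region starting at 0x12340000 as reserved looks like:

    memmap=64K$0x12340000

The kernel then treats that range as reserved and never hands it to the allocator, so the bad cells are simply never used.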

You can analyze this in various ways. At the "next token predictor" level of abstraction, LLMs learn to predict structure ("hallucinations" are just mimicking the style/structure but not the content), so at the structural level a conversation of mistake/correction/mistake/correction is likely to be followed by another mistake.

At the "personality space" level of abstraction, via RLHF the LLM learns to play the role of an assistant. However as seen by things such as "jailbreaks", the character the LLM plays adapts to the context, and in a long enough conversation the last several turns dominate the character (this is seen in "crescendo" style jailbreaks, and also partly explains LLM sycophancy as the LLM is stuck in a feedback loop with the user). From this perspective, a conversation with mistake/correction/mistake/correction is a signal that the assistant is pretty "dumb", and it will dutifully fulfill that expectation. In a way it's the opposite of the "you are a world-class expert in coding" prompt hacks.

Yet another way to think about it is that at the lowest, attention-score level, all the extra junk in the context is stuff that still has to be attended to, and when most of it is incorrect it's likely to "poison" the context and skew the logits in a bad direction.
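As a toy illustration of that dilution (just softmax arithmetic on made-up scores, not a claim about any particular model): the more moderately scored junk tokens sit in the context, the less attention mass is left for the one token that actually matters.

    import numpy as np

    def softmax(s):
        e = np.exp(s - s.max())
        return e / e.sum()

    # one "relevant" token scored 3.0, plus increasingly many junk tokens scored 1.0
    for n_junk in (0, 4, 16, 64):
        w = softmax(np.array([3.0] + [1.0] * n_junk))
        print(n_junk, round(float(w[0]), 3))   # attention mass on the relevant token shrinks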


I can believe this; DeepSeek V3.2 shows that you can get close to "GPT-5" performance with a GPT-4-level base model just with sufficient post-training.

DeepSeek scores Gold at the IMO and IOI while GPT-5 scores Bronze. OpenAI now has to catch up to China.

...in a single benchmark.

No, many benchmarks; I just mentioned those two because they were being bragged about by OpenAI and Google when their internal models achieved gold.
