Hacker News | dekhn's comments

Yes, for me it's the scene with the knights riding through the forest with Carmina Burana playing (this was one of my introductions to recorded music- my parents had that record and I spent a lot of time listening to it later).

I think some people call this effect "frisson" and to me it's not dissimilar from the sensation I get from ASMR.


I had a similar experience in my genetics class in grad school- the professor explained that children of musicians were more likely to have perfect pitch, hence it was a genetic trait. Some folks suggested that perhaps it was possible that children of musicians were subjected to lots of labelled and unlabelled training data (musical notes) making it "environment" rather than genetic.

no, alphafold is basically just a static structure predictor. folding@home explicitly models the folding process (the journey, not just the destination).

I used to think this way, but after about 30 years in the field (including ML, MD, sequence alignment, homology modelling, drug discovery, and more) I've concluded grandiose visions like this have little to no chance of actually making any sort of impactful difference, either academically or industrially.

Don't get me wrong- I think many of the analogies are accurate, but thinking about it that way doesn't help you design an experiment that answers any clear question. If you want to pursue the idea of building better tools for interrogating life, have at it- make sure you read all the stuff George Church has published about DNA recorders and similar systems.

I don't know why you would want to focus on Raman spectroscopy. That's a useful analytic tool, but it's certainly not going to move the needle on any interesting biological processes. You would have a lot more luck using optical microscopy- it's one of the most mature and powerful readouts, highly flexible, and can be used with sequencing.

I can't really see a case for funding here. You need a well-defined, biomedically based argument and a technology that people think will actually work. You will get 3-5 years of runway and then have to meet some important milestone.


Optical microscopy, by contrast, interacts with and can even destroy cells (e.g. in the infrared), and it can't resolve small molecules below the ~400 nm diffraction limit.

Raman (combination) scattering lets you identify molecules without interacting with them.

It's the keystone for making it possible to 'see' a live cell and its molecules in motion, without disrupting it, and to build a full virtual 3D model of a live human cell.

-- If the technology gets created, the problem is solved. By investing here I mean funding the investigation of current and future possible technologies, and building them.

It's not about selling something to others; it's about enabling humanity to fight disease, cancer, and aging, and about the fact that you can be part of it, proudly calling yourself an investor who helped bring humanity into a new era.


I find often that conversations between lawyers and engineers are just two very different minded people talking past each other. I'm an engineer, and once I spent more time understanding lawyers, what they do, and how they do it, my ability to get them to do something increased tremendously. It's like programming in an extremely quirky programming language running on a very broken system that requires a ton of money to stay up.

Could you post on HN on that? Would be worth reading.

And are you only talking about cybersecurity disclosure, liability, patent applications... And the scenario when you're both working for the same party, or opposing parties?


I'm talking about any situation where a principled person who is technically correct gets a threatening letter from a lawyer instead of a thank you.

If you read enough lawyer messages (they show up on HN all the time) you will see they follow a pattern of looking tough and adopting an increasingly threatening posture. But often, the laws they cite aren't applicable and wouldn't hold up in court or public opinion.


> they follow a pattern of looking tough, and increasingly threatening posture. But often, the laws they cite aren't applicable, and wouldn't hold up in court

And it takes years to prove that and be judged not guilty, or, if found guilty (as OP likely would be for dumping the database), to show that the punishment should be nil given the demonstrated good faith, even if a law was technically violated.

Wouldn't you say the threats are to be taken seriously in cases like OP's?



I'm curious to hear your take on the situation in the article.

Based on your experience, do you think there are specific ways the author could have communicated differently to elicit a better response from the lawyers?


It would take a bit of time to re-read the entire chain and come up with highly specific ways. The way I read the exchange, the lawyer basically wants the programmer to shut up and not disclose the vulnerability, and is using threatening legal language. While the programmer sees themself as a responsible person doing the company a favor in a principled way.

Some things I can see. I think the way the programmer worded this sounds adversarial; I wouldn't have written it that way, but ultimately, there is nothing wrong with it: "I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure."

When the lawyer sent the NDA with extra steps: the programmer could have chosen to hire a lawyer at this point to get advice. Or they could ignore this entirely (with the risk that the lawyer may sue him?), or proceed to negotiate terms, which the programmer did (offering a different document to sign).

IIUC, at that point, the lawyer went away and it's likely they will never contact this guy again, unless he discloses their name publicly and trashes their security, at which point the lawyer might sue for defamation, etc.

Anyway, my take is that as soon as the programmer got a lawyer email reply (instead of the "CTO thanking him for responsible disclosure"), he should have talked to his own lawyer for advice. When I have situations similar to this, I use the lawyer as a sounding board. I ask questions like "What is the lawyer trying to get me to do here?", "Why are they threatening me instead of thanking me?", and "What would happen if I respond in this way?"

Depending on what I learned from my lawyer, I can take a number of actions. For example, completely ignoring the company lawyer might be a good course of action. The company doesn't want to bring somebody to court and then have everybody read in a newspaper that the company had shitty security. Or writing a carefully worded threatening letter- "if you sue me, I'll countersue, and in discovery, you will look bad and lose". Or- and this is one of my favorite tricks- rewriting the document to what I wanted, signing that, and sending it back to them. Again, for all of those, I'd talk to a lawyer and listen to their perspective carefully.


> which the programmer did (offering a different document to sign).

> IIUC, at that point, the lawyer went away

The article says that the organization refused the counter-offer and doubled down instead

> he should have talked to his own lawyer for advice

Costing how much? Next I'll need a lawyer for telling the supermarket that their alarm system code was being watched by someone from the bushes

It's not bad legal advice and I won't discourage anyone from talking to a lawyer, but it makes things way more costly than they need be. There's a thousand cases like this already online to be found if you want to know how to handle this type of response

Sounds very usa-esque (or perhaps unusually wealthy) to retain a lawyer as "sounding board"


This is the undergraduate curriculum. I've talked with folks who went through various programs (biology, chemistry, math, and physics) and I think they are demanding courses, and the question sets and tests were extremely hard. The students are also quite competitive with each other.

But I've also learned that many well-known advanced CS programs are taking significant shortcuts with their CS education.

I honestly can't imagine compressing an undergraduate curriculum (even with electives omitted) into a shorter time without significant compromises to the learning process.


I stopped when it started showing propaganda from the CCP (at least it was clearly labelled as such). https://misinforeview.hks.harvard.edu/article/chinese-state-... https://www.reddit.com/r/China/comments/1i67ja9/whats_going_...

It was already slop before that.


> I stopped when it started showing propaganda from the CCP

What did you do when it showed you propaganda from other countries?


Well, I'm in the US, I already know how to recognize US propaganda and ignore it :)

I don't think I was shown anything that was clearly labelled as "state-sponsored-media" from any other country and I don't think I saw anything that was propaganda, but not labelled as such, although I typically scrolled past the obvious ads and AI slop so I might have missed something.


Yeah, it definitely never labelled state sponsored media from the UK or Canada for me. Living in Paraguay, it still doesn't label it as a state sponsored propaganda. I'm not sure why propaganda from Eastern / Mainland Taiwan gets so much attention, the legitimate government there certainly does not sponsor it.

When my spouse worked in the area of determining "the value of an individual" (economically, not morally), it was computed as present value lifetime earnings: the cumulative income of the individual, converted back to its current value (using some sort of inflation model). IIRC, the PVLE averaged out to about $1-10M.

You shouldn't be down voted. Regardless of the moral or technical issues involved, there are established formulas used to calculate damages in wrongful death civil suits. Your range is generally correct although certain factors can push it higher. (Punitive damages are a separate issue.)

There are not "established formulas" or, to the extent that they are, the coefficients and exponents are not determined. The parties always argue about the discount rates and whatnot.

Sure, no argument there, I was just referring to research like this: https://escholarship.org/uc/item/82d0550k

"""Results. At a discount rate of 3 percent, males and females aged 20-24 have the highest PVLE — $1,517,045 and $1,085,188 respectively. Lifetime earnings for men are higher than for women. Higher discount rates yield lower values at all ages."""
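For a rough sense of how the discounting in a PVLE calculation works, here's a sketch in Python. The flat $60k earnings profile and 40-year working life are invented for illustration; real studies use age- and sex-specific earnings tables.

```python
# Back-of-the-envelope present value of lifetime earnings (PVLE).
# Each future year's earnings are discounted back to today and summed.

def pvle(annual_earnings, discount_rate):
    """Sum of future earnings, each discounted to present value."""
    return sum(
        earnings / (1 + discount_rate) ** year
        for year, earnings in enumerate(annual_earnings)
    )

# Hypothetical: a flat $60k/year over a 40-year working life.
earnings = [60_000] * 40

print(f"undiscounted: ${pvle(earnings, 0.00):,.0f}")  # $2,400,000
print(f"at 3%:        ${pvle(earnings, 0.03):,.0f}")  # roughly $1.4M
print(f"at 7%:        ${pvle(earnings, 0.07):,.0f}")  # lower still
```

Higher discount rates shrink the present value of distant earnings, which is exactly why the parties argue over the rate.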


I generally don't complain about being downvoted, but it is always puzzling when I post a neutral fact without any judgement.

If I read the article it says autopilot, not FSD.

> If I read the article it says autopilot, not FSD.

What's the difference? And does it matter?

Both are misleadingly named, per the OP:

> In December 2025, a California judge ruled that Tesla’s use of “Autopilot” in its marketing was misleading and violated state law, calling “Full Self-Driving” a name that is “actually, unambiguously false.”

> Just this week, Tesla avoided a 30-day California sales suspension only by agreeing to drop the “Autopilot” branding entirely. Tesla has since discontinued Autopilot as a standalone product in the U.S. and Canada.

> This lends weight to one of the main arguments used in lawsuits since the landmark case: Tesla has been misleading customers into thinking that its driver assist features (Autopilot and FSD) are more capable than they are – leading drivers to pay less attention.


Autopilot is similar to cruise control that is aware of other cars, and lane keeping. I would fully expect the sort of accident that happened to happen (drop phone, stop controlling vehicle, it continues through an intersection).

FSD has much more sophisticated features, explicitly handling traffic stops and lights. I would not expect the sort of accident to happen with FSD.

The fact that Tesla misleads consumers is a different issue from Autopilot and FSD being different.


> Autopilot is similar to cruise control that is aware of other cars, and lane keeping.

Thanks for explaining why labeling it "Autopilot" is misleading and deceptive.


Is anyone actually being deceived? When you’re buying a Tesla, they definitely carefully explain these options to you.

This is not even funny anymore. You reap what you sow.

> FSD has much more sophisticated features, explicitly handling traffic stops and lights. I would not expect the sort of accident to happen with FSD.

FSD at one point had settings for whether it could roll through stop signs, or how much it could exceed the speed limit by. I've watched it interpret a railroad crossing as a weirdly malfunctioning red light with a convoy of intermittent trucks rolling by. It took the clearly delineated lanes of a roundabout as mere suggestions and has tried to barrel through them in a straight line.

I'd love to know where your confidence stems from.


My confidence comes only from what I hear people doing with the system. I have zero experience with it and consider most of the PR from Tesla to be junk.

"would not expect" is the way a cautious person demonstrates a lack of confidence.


I remember having this argument with a friend.

My argument was that the idea that the name Autopilot is misleading comes not from Tesla naming it wrong, it comes from what most people think "Autopilots" on an aircraft do. (And that is probably good enough to argue in court, that it doesn't matter what's factually correct, it matters what people understand based on their knowledge)

Autopilot on a Tesla historically did two things - traffic aware cruise control (keeps a gap from the car in front of you) and stays in its lane. If you tell it to, it can suggest and change lanes. In some cases, it'll also take an exit ramp. (which was called Navigate on Autopilot)

Autopilots on planes roughly also do the same. They keep speed and heading, and will also change heading to follow a GPS flight plan. Pilots still take off and land the plane. (Like Tesla drivers still get you on the highway and off).

Full Self Driving (to which they've now added the word "Supervised", probably because of the court cases, though it was always quite obviously supervised: you had to keep nudging the steering wheel to prove you were alert, same as with Autopilot, btw) is a different AI model that even stops at traffic lights, navigates parking lots, everything. That's the true "summon my car from LA to NY" dream, at least.

So to answer your question, "What's the difference" – it's huge. And I think they've covered that in earlier court cases.

But one could argue that maybe they should've restricted it to only highways maybe? (fewer traffic lights, no intersections), but I don't know the details of each recent crash.


Autopilots do a lot more than that because flying an aircraft safely is a lot more complicated than turning a steering wheel left and right and accelerating or braking.

Tesla’s Autopilot being unable to swap from one road to another makes it way less capable than decades-old civilian autopilots, which will get you to any arbitrary location as long as you have fuel. Calling the current FSD "Autopilot" would be overstating its capabilities, but reasonably fitting.


>"Autopilots do a lot more than that because flying an aircraft safely is a lot more complicated than turning a steering wheel left and right and accelerating or braking."

Can you elaborate? My knowledge is limited, but with very real airplane autopilots in little Cessnas and Pipers, they are in fact far simpler than cars - just a control feedback loop that maintains altitude and heading, that's it. You can crash into the ground, a mountain, or other traffic quite cheerfully. I would not be surprised to find that adaptive cruise control in cars is a far more complex system than a basic aircraft "autopilot".
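For what it's worth, that kind of simple control feedback loop fits in a few lines. This is a hypothetical proportional heading hold; the gain and headings are invented for illustration, not taken from any real unit:

```python
# Minimal sketch of a proportional heading-hold loop: on each step, steer
# toward the commanded heading by a fixed fraction of the current error.

def heading_hold_step(heading, target, gain=0.2):
    # Wrap the error into [-180, 180) so we always turn the short way.
    error = (target - heading + 180) % 360 - 180
    return heading + gain * error

heading, target = 90.0, 120.0  # degrees
for _ in range(30):
    heading = heading_hold_step(heading, target)
print(f"{heading:.1f}")  # converges toward 120.0
```

Real autopilots add integral and derivative terms, rate limits, and servo dynamics, but the core idea is just this feedback loop.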


Doesn’t basic airplane autopilot just maintain flight level, speed, and heading? What are some other things it can do?

Recovering from upsets is the big thing. Maintaining flight level, speed, and heading while upside down isn’t acceptable.

Levels of safety are another consideration; car autopilots don’t use multiple levels of redundancy on everything because cars can stop without falling out of the sky.


That's still massively simpler than making a self-driving car.

It's trivially easy to fly a plane in straight level flight, to the extent that you don't actually need any automation at all to do it. You simply trim the aircraft to fly in the attitude you want and over a reasonable timescale it will do just that.


> It's trivially easy to fly a plane in straight level flight, to the extent that you don't actually need any automation at all to do it. You simply trim the aircraft to fly in the attitude

That seemingly shifts the difficulty from the autopilot to the airframe. But that’s not actually good enough, it doesn’t keep an aircraft flying when it’s missing a large chunk of wing for example. https://taskandpurpose.com/tech-tactics/1983-negev-mid-air-c...

Instead, you’re talking about the happy path, and if we accept the happy path as enough, there are weekend-project equivalents of self-driving cars built with minimal effort; however, being production-worthy is about more than being occasionally useful.

Autopilot is difficult because you need to do several things well or people will definitely die. Self-driving cars are far more forgiving of occasional mistakes, but again it’s the "or people die" bits that make it difficult. Tesla isn’t actually ahead of the game; they are just willing to take more risks with their customers' and the general public’s lives.


> Self driving cars are far more forgiving of occasional mistakes

I would say not, no.

It's almost impossible to crash a plane. There's nothing to hit except the ground, and you stay away from that unless you really really mean to get close.

It's very easy to crash a car, and if you do that most of the time you'll kill people outside the car, often quite a lot of them.

There are no production aircraft fitted with autopilots that can correct for breaking a wing off.


Autopilots have contributed to a significant number of crashes and that’s with a very safety conscious industry.

In a hypothetical Tesla-style "let's take more risk" approach, buggy autopilots can surprisingly quickly get into a situation at cruising altitude that isn’t recoverable before hitting the ground. Asking "what is the worst possible thing an autopilot could do in this situation?" is eye-opening here.

> There are no production aircraft fitted with autopilots that can correct for breaking a wing off.

That was a production aircraft still in service. https://simpleflying.com/how-many-f-15-eagles-are-still-in-s...

Granted that specific case depends on the aircraft being a lifting body etc so obviously doesn’t extend to commercial aviation. But my point was lack of aerodynamic stability on its own isn’t enough that giving up is ok.


> Autopilots have contributed to a significant number of crashes and that’s with a very safety conscious industry.

"Contributed to", in the sense that the pilots decided to just blindly trust the autopilot and let it make a developing situation worse rather than, oh I don't know, maybe FLYING THE DAMN PLANE.

> buggy autopilots can surprisingly quickly get into a situation at cruising altitude which isn’t recoverable before hitting the ground

If you allow the autopilot to fly the plane into the ground, yes. If you're paying attention you ought to be able to recover just about anything, if most of the plane is still working. The vast majority of incidents where aircraft have departed controlled flight and crashed are because the pilots lost sight of the important thing - FLYING THE DAMN PLANE.

> But my point was lack of aerodynamic stability on its own isn’t enough that giving up is ok.

It's got nothing to do with aerodynamic stability. If you adjust the steering and suspension in a car correctly, it'll drive in a perfectly straight line with no user input for a surprisingly long way. With modern electronic power steering and throttle-by-wire systems it's actually surprisingly easy to turn an off-the-shelf car (even something cheap, secondhand, and quite old like a 2010s Vauxhall Corsa) into a simple line-following robot like we used to build at uni in the 80s and 90s in robotics class. Sure, you need a disused aerodrome to play with it, but it'll work.

There is the far greater problem that self-driving cars have to cope with a far more rapidly changing environment than an aircraft. A self-flying plane would be far easier to get right than a self-driving car.

A human driver can't just react, painfully slowly, in the way that current "self-driving" cars do, they have to anticipate and be "reacting" before the problem even begins to start. You do it yourself, even if you don't realise it. You hang back from that car because you know they're going to - there, right across two lanes, not so much as a glance in their mirror, what did I tell you? - they're going to do something boneheaded. That car's just pulled in, the passenger in the back is about to open their door right into your - nicely done, you moved out to the line and missed them by 50cm at least.

Self-driving cars can't do that, and probably never will. Self-flying aircraft won't need to do that.

And an autopilot is a surprisingly simple device that responds in simple and predictable ways to sensor inputs.


> "Contributed to", in the sense that the pilots decided to just blindly trust the autopilot and let it make a developing situation worse rather than, oh I don't know, maybe FLYING THE DAMN PLANE.

Excuses don’t save lives. You can’t trust pilots or drivers to always make the correct decision instantly. Any system designed in such a manner will get people killed.

> If you allow the autopilot to fly the plane into the ground, yes.

Things can be unrecoverable a full minute before impact. There are some seriously harrowing NTSB reports, and that’s just what’s already happened; the possible failure modes are practically endless.


> Excuses don’t save lives. You can’t trust pilots or drivers to always make the correct decision instantly. Any system designed in such a manner will get people killed.

Okay, so what's your answer? Stick yet another computer in to go wrong and fly the plane into the ground when it gets the wrong idea about a situation? Add yet more sensors to the car to prevent the driver steering away from an obstacle because it thinks they're not using their indicators yet?

> Things can be unrecoverable a full minute before impact.

Can you find an example of one that isn't down to gross mechanical failure, or just plain Operator Idiocy?


> Okay, so what's your answer? Stick yet another computer in to go wrong and fly the plane into the ground when it gets the wrong idea about a situation? Add yet more sensors to the car to prevent the driver steering away from an obstacle because it thinks they're not using their indicators yet?

I’m not condemning the airline industry here, the safety conscious approach has done a good job over time especially in terms of redundancy. A major area of improvement is the way autopilots are communicating with pilots, but that’s a hard process.

The car industry isn’t doing nearly as well in terms of redundancy etc so there’s many obvious areas of improvement through solid engineering without changing anything fundamental. That said, communication is again lacking.

> Can you find an example of one that isn't down to gross mechanical failure, or just plain Operator Idiocy?

Operator Idiocy isn’t some clearly defined line; an aircraft with a moderate fuel leak can look like idiocy after the fact, but it’s an easy mistake to make. That’s exactly the kind of thing autopilots could catch, not just from fuel sensors but from how the flight characteristics change as the aircraft gets lighter, yet aircraft have happily flown into trouble over the ocean.


Airplane "autoland" goes back a ways:

https://en.wikipedia.org/wiki/Autoland


Evolution has led to optimization and efficiency many times. It rarely trends toward maximization or the largest possible efficiency, since those conflict with "good enough". Protein structure and function is a common example.

> It rarely trends to maximization or the largest possible efficiency, since those conflict with "good enough".

Sometimes things get trapped in a local minimum. Particularly when a seemingly inconsequential detail at a much, much earlier stage becomes a dependency of lots of downstream stuff, but then it turns out that this just so happens to conflict with a better option in the here and now.

More commonly, the "perfect" solution is extremely brittle while the (supposedly) "good enough" solution is incredibly robust to all sorts of environmentally inflicted bullshit. In other words, most of the time evolution is practical while the humans criticizing the outcome are ignorant idealists.


I would go so far as to say that the vast majority of the time, systems that evolved are robust, not brittle, and you're right, this compromise "works better" or is "good enough to reproduce more than my relatives". And other times something gets caught in a local minimum- but other bits around it optimize anyway (I think the "backwards" human eye might be an example of that- https://en.wikipedia.org/wiki/Evolution_of_the_eye#Placement and see also https://en.wikipedia.org/wiki/Evolutionary_baggage).

Anyway, the example I was thinking of is here: https://en.wikipedia.org/wiki/Diffusion-limited_enzyme where some enzymes have evolved to reach extremely close to the maximum rate of catalysis limited by diffusion rates (and some enzymes have clever tricks to get around that).
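To make "extremely close to the maximum rate" concrete, here's a quick comparison of catalytic efficiency (kcat/Km) against the diffusion limit. The rate constants are ballpark textbook values from memory, so treat the exact numbers as illustrative:

```python
# Compare enzymes' catalytic efficiency kcat/Km to the diffusion-limited
# encounter rate (~1e8-1e9 M^-1 s^-1). Values are rough textbook figures.

DIFFUSION_LIMIT = 1e9  # M^-1 s^-1

enzymes = {
    # name: (kcat in s^-1, Km in M) -- approximate
    "triosephosphate isomerase": (4.3e3, 1.8e-5),  # near-"perfect" enzyme
    "carbonic anhydrase":        (1.0e6, 1.2e-2),
    "chymotrypsin":              (1.0e2, 1.5e-3),  # far from the limit
}

for name, (kcat, km) in enzymes.items():
    eff = kcat / km
    print(f"{name}: kcat/Km ~ {eff:.1e} M^-1 s^-1 "
          f"({eff / DIFFUSION_LIMIT:.0%} of the diffusion limit)")
```

The point is the spread: a "perfect" enzyme sits within an order of magnitude of the physical ceiling, while a perfectly serviceable one like chymotrypsin is thousands of times below it.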

