
You can write all you want about an AI bubble, but when I can make changes to a large code base from the command line with agentic AI, changes that would otherwise take over a year, and get them done in a week or two by one competent software engineer, that's simply a dramatic evolution of software engineering.

> you can write all you want about an AI bubble

Bubble =/= useless

Bubble means over-hyped


Clippy was a Windows-based, closed-source program that didn't have access to the internet. For people who didn't grow up on Windows, what you're writing makes no sense. We were using Linux, and before that *BSD or Solaris/SVR4/SunOS 4/AIX/HP-UX/Ultrix/VAX/VMS, etc.

I loved the FAQ question "What happens if Cloudflare is down?" Well, the short answer is that it takes down 75% of the internet with it. You mean like three days ago, for the whole day? Well, there is always going outside and bowling with your friends for a few hours until it comes back online; the internet resumes function at that point.

True true, maybe I should update it to “go outside and touch some grass.”

This is where change management really shines: in a change management environment this would have been prevented by a backout procedure, and it would never have been rolled out to production before going through QA, with peer review happening before that. I don't know if they lack change management, but it's definitely something to think about.


I think the failure is in data rather than code, which is where testing falls short; in a way you need more stringent, more safeguarded code. It's like this: everyone sends you 64 KB posts because that's all your proxy layer lets in, and at some point someone checked that sending 128 KB gave an error before reaching your app. Then the proxy layer changes, someone sends 128 KB, and your app crashes because it asserted against anything over 64 KB. Actually tracking issues with erroneous or overflowing data isn't so much code testing as fuzz testing, brute-force testing, etc., which I think people should do. But that means we need strong test networks, and those test networks may need to be more internet-like to reflect real issues, so the whole testing infrastructure itself becomes difficult to get right. Since they have their own tunneling system and so on, they could potentially segregate some of their servers and build a test system with better error diagnosis.

To my mind, though, if they had better error propagation back, something that really identified what was happening and where, that would be a lot better in general; sure, start doing that on a test network. This is something I've been thinking about in general: I made a simple RPC system for sending real-time Rust tracing logs back from multiple end servers (it lets you use the normal tracing framework with a thin RPC layer), but that's mostly for granular debugging. I've never quite understood why systems like systemd-journald aren't more network-centric when they're going to be big, complex kitchen-sink approaches anyway; apparently there's D-Bus support, but I want something in between debug-level logging and warning/info. Even 1/20 of full log volume is too much, but if we could watch things like large files creeping toward their limits as systems run, and see whether an issue is localized or common, it would help us build more resilient systems. Something may already exist along these lines, but I never came across anything in a reasonably passive way; I mean, there are debugging tools like dtrace that have been around for ages.
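FWIW, the thin layer I mean can be sketched with tracing-subscriber's Layer trait. This is illustrative only, not my actual implementation; the collector address is made up and the Debug rendering is a stand-in for real serialization:

    // Cargo deps assumed: tracing 0.1, tracing-subscriber 0.3
    use std::{io::Write, net::TcpStream, sync::Mutex};
    use tracing_subscriber::{layer::Context, prelude::*, Layer};

    struct TcpForwardLayer {
        conn: Mutex<TcpStream>,
    }

    impl<S: tracing::Subscriber> Layer<S> for TcpForwardLayer {
        fn on_event(&self, event: &tracing::Event<'_>, _ctx: Context<'_, S>) {
            // Crude line-oriented rendering; a real system would serialize
            // the event fields properly and buffer/batch the writes.
            if let Ok(mut conn) = self.conn.lock() {
                let _ = writeln!(conn, "{event:?}");
            }
        }
    }

    fn main() -> std::io::Result<()> {
        // Hypothetical central collector; in practice you'd reconnect on failure.
        let stream = TcpStream::connect("logs.example.internal:9000")?;
        tracing_subscriber::registry()
            .with(TcpForwardLayer { conn: Mutex::new(stream) })
            .init();

        // Normal tracing macros now also stream over the wire.
        tracing::warn!(file = "wal.log", used_pct = 95, "file nearing size limit");
        Ok(())
    }

The point is that application code keeps using the ordinary tracing macros; only the subscriber side knows the logs leave the box.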


It is a bit tough to do all that in five minutes.


What's the difference between this and a reverse SSH tunnel, for example making a local port on your laptop accessible to a public-facing internet server (or even just to localhost on that same server), or using sshuttle to access your local network from a remote server? It also doesn't sound like "zero trust" if you're proxying everything through some third-party company when you know nothing about what they're doing with the actual data you're sending across the wire.
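For concreteness, the kind of tunnels I mean (stock OpenSSH and sshuttle; the hosts, ports, and subnet are hypothetical):

    # Reverse tunnel: expose the laptop's localhost:3000 as port 8080 on a
    # public VPS (the server needs GatewayPorts enabled for non-localhost
    # clients to reach it).
    ssh -R 8080:localhost:3000 user@vps.example.com

    # sshuttle: transparently route traffic for a subnet on the far side of
    # the SSH connection through the session.
    sshuttle -r user@vps.example.com 192.168.0.0/24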


Zero trust is a marketing term used by them; surprisingly, it also has nothing to do with end-to-end encryption.


Ollama and other projects already make this possible.
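e.g., running a model locally is a one-liner these days (the model name is just an example):

    # pulls the model on first use, then drops into an interactive prompt
    ollama run llama3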


The only thing the article fails to mention is the use of more than one transducer to focus multiple ultrasound beams on a single intersection point in the body, combining the heating power of all the beams there.


There was a startup in Shanghai in the early 2000s. Their device used multiple transducers; the probe was at least 40 cm in diameter. They ran trials on uterine fibroids, among other diseases. One of the difficulties was that, while it looks good in theory, the path ultrasound travels through the body is more complicated than, say, X-rays or gamma rays. They expected a fine focal zone, but sometimes the focal zone was much larger than expected. This new wave of ultrasound equipment may have discovered better ways to control the sound beam.


It seems like the intersection point can be smaller than a grain of rice and moved in 0.1 mm steps in three dimensions [0]

[0] https://youtu.be/3Bwq2YxD9eU


This is amazing! That HIFU device 20 years ago used a phased array to steer the beam; I don't know the size of the transducer. One of the tests I heard of was on a pig leg. The damage was bigger than expected, possibly in the range of a few centimeters, probably because the leg has skin, subcutaneous fat, muscle, and bone, all with different acoustic characteristics.
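For intuition, the focusing rule a phased array uses is plain geometry: fire the farthest element first so every wavefront arrives at the focal point at the same instant. A toy sketch, where all of the geometry numbers are made-up illustration values:

    // Per-element delays t_i = (r_max - r_i) / c for a linear array
    // focusing at a point; c is the speed of sound in soft tissue.
    fn main() {
        let c = 1540.0_f64; // m/s, soft tissue
        let pitch = 0.3e-3; // element spacing in meters (assumed)
        let n = 64; // number of elements (assumed)
        let (fx, fz) = (0.0_f64, 0.05_f64); // focus 5 cm deep, on axis

        // Distance from each element (centered on the x axis) to the focus.
        let dists: Vec<f64> = (0..n)
            .map(|i| {
                let x = (i as f64 - (n as f64 - 1.0) / 2.0) * pitch;
                ((x - fx).powi(2) + fz * fz).sqrt()
            })
            .collect();

        // Delay the nearer elements so all arrivals coincide at the focus.
        let r_max = dists.iter().cloned().fold(f64::MIN, f64::max);
        for (i, r) in dists.iter().enumerate() {
            let delay_ns = (r_max - r) / c * 1e9;
            println!("element {i:2}: delay {delay_ns:7.1} ns");
        }
    }

Tissue boundaries break exactly this assumption: the rule only holds if c is uniform along every path, which skin, fat, muscle, and bone are not.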


most people use redis on localhost (i hope)
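i.e. something like this in redis.conf (a sketch; recent Redis ships these as defaults anyway):

    # listen on loopback only and refuse unauthenticated remote clients
    bind 127.0.0.1 -::1
    protected-mode yes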


52,874 are connected to the internet according to Shodan: https://www.shodan.io/search?query=redis+product%3A%22Redis+... Not affiliated with them.


I'd imagine the recent uptick in using services like Upstash may make it harder for people to know whether they are vulnerable. Is this mitigated by disabling Lua script execution?


Upstash wouldn't be vulnerable: Upstash doesn't run upstream Redis; it's a protocol-compatible proprietary implementation.


I would guess it is.

Also:

> Exploitation of this vulnerability requires an attacker to first gain authenticated access to your Redis instance.
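If disabling Lua is the goal on upstream Redis, the usual lever is the ACL scripting category; a sketch, where the user name and password are placeholders:

    # an app user that can do everything except run scripts
    # (EVAL/EVALSHA/SCRIPT/FUNCTION all fall under @scripting)
    user app on >app-password ~* &* +@all -@scripting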


It used to be possible to execute Redis commands against localhost from the web browser using DNS rebinding, but I think Redis did something to the protocol to fix this (IIRC it now drops connections that send HTTP-looking commands like POST or Host:, as cross-protocol scripting protection). Also, this is only really relevant for developers.


The fact is that there are over 700 quintillion planets in the universe, over 40 billion of them estimated to be habitable... I think our current understanding of physics limits our ability to reach any of them.


Not only are there at least a hundred other open-source equivalent apps

