Hacker News | chme's comments

Threema is still vendor lock-in.


Wasn’t Threema sold this week or something?

https://comitiscapital.com/news/comitis-capital-announces-th...

Not sure what / if that changes anything, but presumably it will… sometime…


Element X is in some cases still a downgrade from Element. For instance, there doesn't seem to be a way to create local key backups anymore. Also, calls between Element and Element X are incompatible, which means both apps need to be installed in order to receive calls from all contacts.

Still, I love Matrix and hope that these issues will be resolved in time.


Manually importing/exporting key backups is on the to-do list (albeit towards the bottom). Supporting legacy calls is not (unless someone contributes it); the intention is to converge ASAP on native Matrix group calls.


I had to deal with the Intel Quark SoC X1000 on a Galileo board years ago, where the LOCK instruction prefix caused segfaults. Since the SoC is single-threaded, the LOCK prefix could simply be patched out of the resulting binaries until the compiler/build system was fixed.

https://en.wikipedia.org/wiki/Intel_Quark#Segfault_bug
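A sketch of what that patch-out could look like (illustrative only, not the actual Galileo toolchain fix: the offsets would have to come from a disassembler, since a raw byte scan would also match 0xF0 bytes inside immediates or data):

```python
# The x86 LOCK prefix is the single byte 0xF0. On a single-threaded core
# like the Quark X1000 it can be dropped; replacing it with a one-byte
# NOP (0x90) keeps all instruction offsets and lengths unchanged.
LOCK, NOP = 0xF0, 0x90

def patch_lock_prefixes(code: bytes, lock_offsets) -> bytes:
    """Return a copy of `code` with the LOCK prefix at each given offset
    replaced by NOP. Offsets must point at actual LOCK prefixes, e.g. as
    reported by a disassembler."""
    buf = bytearray(code)
    for off in lock_offsets:
        if buf[off] != LOCK:
            raise ValueError(f"no LOCK prefix at offset {off:#x}")
        buf[off] = NOP
    return bytes(buf)

# `lock xadd [ebx], eax` (F0 0F C1 03) becomes `nop; xadd [ebx], eax`
patched = patch_lock_prefixes(b"\xf0\x0f\xc1\x03", [0])
```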


Well... Maybe just have a BIOS on your system that fetches a markdown file and pushes it to an LLM to generate a new and exciting operating system for you on every boot.

Wouldn't that be nice?


TBH, I doubt that this will happen...

It is much easier to use LLMs to generate code, have a developer validate that code, fix it if necessary, and check it into the repo, than to have every user send prompts to LLMs in order to get the code they can actually execute.

All while hoping it doesn't break their system and does what they wanted.

Also... that just doesn't scale. How much power would we need if everyday computing started with a BIOS sending prompts to LLMs in order to generate an operating system it can use?

Even if it is just about installing stuff... We have CI runners that constantly install software, often on every build. How would they scale if they needed LLMs to generate install instructions every time?


Maybe that is the reason for this approach: it shifts the responsibility for errors from the person writing the code to the person executing it.

Pretty brilliant in a way.


Even getting the "theory" down on paper in prose will be lossy.

And natural languages are open to interpretation, and a lot of context will remain unmentioned, while programming languages, together with their tested environment, contain the whole context.

Instrumenting LLMs will also mean doing a lot of prompt engineering, which on the one hand might make the instructions clearer (for the human reader as well), but on the other will likely not transfer much of the theory behind why each decision was made. Instead, it will likely produce copy-and-paste guides that don't require much understanding of why something is done.


I agree that it will be lossy because all writing is lossy.


So... What you are saying is that we don't need 'install.md', because a developer can just use an LLM to generate an 'install.sh', validate it, and put it into the repo?

Good idea. That seems sensible.

Bonus: the LLM is only used once, not every time anyone wants to install some software. With some risk of having to regenerate, because the output was nonsensical.
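That "used once" property could be enforced mechanically. A minimal sketch (file names are just the ones from this thread; the hash-stamp scheme is one assumed way to gate the expensive, nondeterministic LLM call so it only reruns when the spec actually changes):

```python
import hashlib
from pathlib import Path

def needs_regeneration(spec: Path, stamp: Path) -> bool:
    """True if the spec (e.g. install.md) changed since install.sh was last
    generated from it. A stamp file records the spec's SHA-256, so the LLM
    call runs once per spec revision instead of once per install.
    Records the new hash as a side effect when regeneration is needed."""
    current = hashlib.sha256(spec.read_bytes()).hexdigest()
    if not stamp.exists() or stamp.read_text() != current:
        stamp.write_text(current)
        return True
    return False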


> What you are saying is that we don't need 'install.md'

I think the point was that install.md is a good way to generate an install.sh.

> validate that, and put it into the repo

The problem being discussed is that the user of the script needs to validate it. It's great if it's validated by the author, but that's already the situation we're in.


> The problem being discussed is that the user of the script needs to validate it. It's great if it's validated by the author, but that's already the situation we're in.

The user is free to use an LLM to 'validate' the `install.sh` file, just by asking it whether the script does anything 'bad'. That should be about as successful as the LLM generating the script from a description. Maybe even more successful.


I still don't understand why we need either of them. If I am installing something, it would take me more time to write this install.md or install.sh than to just go to the correct website, copy the command, read its contents, run it, and open the help.


TBH, I have never read prose that couldn't in some way be misinterpreted or misunderstood, because much of it is context-sensitive.

That is why we have programming languages: coupled with a specific interpreter/compiler, they are pretty clear about what they do. If someone misunderstands some specific code segment, they can just test their assumptions easily.

You cannot do that with written prose alone; you would need to ask the writer of that prose to clarify.

And with programming languages, the context is contained and clearly stated, otherwise the code couldn't be executed. Even undefined behavior is part of that, as long as you use the same interpreter/compiler.

Also humans often just read something wrong, or skip important parts. That is why we have computers.

Now, I wouldn't trust an LLM to execute prose any better than I trust a random human to read some how-to guide and follow it.

The whole idea that we now add more documentation to our source code projects so that a dumb AI can make sense of it is interesting... Maybe generally useful for humans as well... But I would target humans instead, not LLMs. If the LLMs find it useful as well, great. But I wouldn't try to 'optimize' my instructions just so that every LLM doesn't fall flat on its face. That seems like a futile effort.


It really depends on the order of priorities. If the overall goal is to allow digital archeologists to make sense of some file they found, it would be prudent to give them some instructions on how it is decoded.

I just hope that people will not just execute that code in an unconfined environment.
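A minimal confinement sketch (assumed setup, not a real security boundary: a throwaway working directory and a stripped environment limit accidents, but a malicious script needs a container or VM):

```python
import subprocess
import tempfile

def run_confined(script_text: str) -> subprocess.CompletedProcess:
    """Run a shell script in a throwaway working directory with a minimal
    environment and a timeout. This contains sloppy scripts (stray files,
    env-dependent behavior), NOT hostile ones; use a container or VM for
    actual isolation."""
    with tempfile.TemporaryDirectory() as workdir:
        return subprocess.run(
            ["sh", "-c", script_text],
            cwd=workdir,                    # writes land in a temp dir
            env={"PATH": "/usr/bin:/bin"},  # no inherited secrets/vars
            capture_output=True,
            text=True,
            timeout=60,                     # don't hang forever
        )

result = run_confined("echo hello from a confined shell")
```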


Hope is not an adequate security best practice. ;)

