The whole point of having an open platform from boot is that you don't have to trust it. You run your own code from first power-on.
Is it possible that it's backdoored, with a secret opcode or management engine? Probably, but that applies to everything, as it's not practical to analyze what's inside the chip (unless you're decapping it and all that).
I don't know what secure environments you're talking about; if it's an air-gapped system, then you should be secure even when whatever's inside 'tries to get out'.
Korean- and Western-made equipment is guaranteed to have such things. CNC machines in Russia stopped working. Even NVIDIA GPUs have a backdoor, according to China, and NVIDIA had to settle the matter behind the scenes with the Chinese government. At this point, your phone is 100% backdoorable by Western governments. The only thing protecting you is that you're a non-threat, too small to be worth bothering with.
I’m not trying to make any claims about consciousness. For us, the practical question is: does the interaction feel supportive and useful, while staying transparent that it’s a model. The rest is philosophy, and I’m happy to read more perspectives.
Give it access to a terminal and see what it does, unprompted. Does it explore? Does it develop interests? Does it change when exposed to new information?
We’re not giving it unconstrained tool access. In-product, actions are either not available or gated behind explicit user intent and strict allowlists. The interesting part for us is the real-time conversational loop and memory personalization, not autonomous exploration.
I'm working on a game for the PocketStation (essentially a Dreamcast VMU, but for the PlayStation). It has the same CPU architecture as the GBA, but there are some unfortunate circumstances that require me to modify the LLVM that Rust uses. Forces me to learn, I guess.
Just run the code that provisions the infrastructure? Sandboxing is the least of your problems. You would need to fully mock out all function executions and their results to have any hope of properly executing the code, let alone governing what's happening, without affecting a live environment. And even then, there would be ways to fool this kind of introspection, as I mentioned. In an enterprise environment where this kind of governance is mandatory, that's not acceptable.
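To make the problem concrete, here's a minimal Python sketch of that mock-everything approach, using `unittest.mock`. The `provision` function and the `create_vpc`/`create_instance` calls are hypothetical stand-ins for real provisioning code, not any actual SDK:

```python
from unittest import mock

def provision(cloud):
    # Hypothetical infrastructure code we want to introspect
    # without touching a live environment.
    vpc = cloud.create_vpc(cidr="10.0.0.0/16")
    cloud.create_instance(vpc_id=vpc["id"], size="m5.large")

fake_cloud = mock.MagicMock()
# Every intermediate result has to be fabricated by hand, or any
# code that depends on it breaks:
fake_cloud.create_vpc.return_value = {"id": "vpc-fake"}

provision(fake_cloud)

# We can now enumerate the intended actions...
for call in fake_cloud.mock_calls:
    print(call)
# ...but only for calls that go through the mocked client. Code that
# imports its own SDK, shells out, or inspects the objects it was
# handed can detect or route around this entirely.
```

Even in this toy case, the mock only works because we hand-wrote a plausible return value for `create_vpc`; a real provider client has hundreds of such calls, which is the scaling problem the comment is pointing at.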
In any case, regardless of whatever clever method you try, even if you're successful, it's not as straightforward, easily understood, and extensible as an OPA policy. Say you succeed in governing Rust code. OK, but now I have developers writing Python and Java and TypeScript. What now? Develop a new, customized solution for each one? No thanks.