These are SOTA models, not open-source 7B-parameter ones. They've put a lot of effort into preventing prompt injections during agentic reinforcement learning.
It's probably not literally prompted to do that. It has access to a desktop and GitHub, and the blog posts are published through GitHub. It autonomously switches back and forth between different parts of the platform and reads and writes comments in the PR thread because that seems sensible.
The scale is there. I'm scraping, cleaning, and compressing dozens of sources into token-efficient form every single hour. The lack of money for embedding everything was a temporary problem.
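For flavor, here's a minimal sketch of what a "clean and make token-efficient" step can look like. The function names, the regex-based cleaning, and the hash dedup are illustrative assumptions, not my actual pipeline:

```python
import hashlib
import re

def clean(text):
    """Strip markup noise and collapse whitespace (hypothetical cleaning step)."""
    text = re.sub(r"<[^>]+>", " ", text)       # drop HTML-ish tags
    return re.sub(r"\s+", " ", text).strip()   # collapse runs of whitespace

def dedupe(chunks):
    """Drop exact-duplicate chunks to avoid paying tokens for them twice."""
    seen, out = set(), []
    for chunk in chunks:
        digest = hashlib.sha256(chunk.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            out.append(chunk)
    return out
```

The real savings come from dedup and boilerplate stripping; embedding only the surviving chunks is what made the cost manageable.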
HarmonyOS NEXT (Huawei) is independent from Android, so that leaves three options, even if Harmony is mostly China-focused. They have tons of users there.
I can't wait for Italo and Trenitalia to enter the German market; it can only get better. Long-distance service is already at rock bottom. Short-distance service with foreign train companies is bearable (Westbahn, Transdev, RXX...).
I just refactored the rendering and resampling approach. It took me a few tries to figure out how to remove the banding masks from the layers, but with more stacked layers and a bit of GPT-fu to figure out the API, it sort of works now (updated the GIF).
Keep in mind that this is not Gaussian splat rendering but just a hacked approximation; on my NVIDIA machine it looks way smoother.
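If it helps to picture the hack: the stacked layers are just alpha-blended back to front, and more layers means finer alpha steps, which is why the banding softens. A toy NumPy sketch (the compositing math is standard "over" blending; everything else here is my guess at the approach, not the actual renderer):

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front 'over' compositing of RGBA layers.

    Each layer is an (H, W, 4) float array in [0, 1], ordered back to front.
    More (thinner) layers -> smaller per-layer alpha -> less visible banding.
    """
    out = np.zeros_like(layers[0][..., :3])
    for layer in layers:
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)  # standard 'over' operator
    return out
```

A proper splat renderer would sort and blend per-Gaussian contributions per pixel; this just fakes the accumulation with fixed layers, which is why it's only an approximation.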