I don't know that this is a first model release. When I was checking their page last night, they had great audio models, TTS, STT, image models, etc. I'm skeptical that a team ships all of that in a first release; possible, but unlikely. With that said, the evals look amazing, and the audio samples I got to play with are amazing. I hope everything about them is legit; we need more sovereign models.
Japan giving a security guarantee to Taiwan would be major news!
In reality no such thing happened and one YouTube video of a handful of protestors doesn’t make it so.
What she did say is that a Chinese attack on Taiwan _could_ clearly become an existential threat to Japan. Note the key word: _could_.
Which… of course it could!
Japan hosts multiple US military bases. If it developed into an armed conflict between the US and China, then it's exceedingly likely that Japan would be attacked. Think Chinese missiles aimed at Yokosuka, just south of Tokyo.
Not only that but Japan and China have multiple territorial disputes. It’s not hard to imagine China deciding to go all in and settle those as well.
If this were just about semiconductors, this would be a reasonable take, but I doubt semiconductors are anything more than a minor footnote in China's strategic calculus vis-à-vis Taiwan.
Reunification with Taiwan has been a major policy goal of the CCP since the civil war and is one of Xi’s explicit policy goals. He just reaffirmed this commitment as part of his New Year’s speech.
Historically China has lacked force projection capability. However it has had a multi-decade modernisation and military build-up which has drastically changed this situation.
Further we’ve seen significant tightening of CCP control over society and in particular the military in Xi’s term.
A straightforward analysis of these events, in line with Xi's public statements and past Chinese actions, is that the groundwork is being laid for an encirclement of Taiwan, followed by China taking it over, by force if necessary.
Looking through this guy's GitHub, he seems to have a lot of small "demo" apps, so I'm not surprised he gets a lot of value out of LLM tools.
Modern LLMs are amazing for writing small self-contained tools/apps and adding isolated features to larger codebases, especially when the problem can be solved by composing existing open source libraries.
Where they fall flat is their lack of long-term memory and their inability to learn from mistakes and build up insider knowledge/experience over time.
The other area where they fall flat is that they seem to rush to achieve their immediate goal and tick functional boxes without considering wider issues such as security, performance, and maintainability. I suspect this is an artefact of the reinforcement learning process: it's relatively easy to assess whether a functional outcome has been achieved, while assessing secondary outcomes (is this code secure, bug-free, maintainable, and performant?) is much harder.
I somewhat disagree. Sure, if the prompt is "build a fully functional application that does X from scratch", then of course you are going to get a crap end product, because of what you said and didn't say.
As a developer you would take that and break it down to a design and smaller tasks that can show incremental progress and give yourself a chance to build feature Foo, assess the situation and refactor or move forward with feature Bar.
Working with an LLM to build a full-featured application is no different. You need to design the system and break down the work into smaller tasks for it to consume and build. Both it and you can verify the completed work and keep track of things to improve, and not repeat, as it moves forward with new tasks.
Keeping guard rails like linters, static analysis, and code coverage fully in place further helps ensure that what is produced is better-quality code.
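As a sketch of what such a guard-rail gate could look like, here is a minimal Python function that runs a list of checks over generated source and only accepts it if every check passes. The specific checks (a `py_compile` syntax check standing in for a linter) and the function name are my own illustration, not anything from the thread; a real setup would swap in ruff, mypy, the test suite, coverage thresholds, etc.

```python
# Hypothetical sketch: gate each LLM-generated change behind automated checks.
# py_compile is used here as a stand-in for a real linter/static-analysis pass.
import os
import subprocess
import sys
import tempfile

def run_guard_rails(source: str) -> bool:
    """Return True only if the generated source passes every check."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "generated.py")
        with open(path, "w") as f:
            f.write(source)
        checks = [
            # Syntax/compile check; extend with linters, type checkers, tests.
            [sys.executable, "-m", "py_compile", path],
        ]
        for cmd in checks:
            result = subprocess.run(cmd, capture_output=True)
            if result.returncode != 0:
                return False
    return True

print(run_guard_rails("def ok():\n    return 1\n"))  # True
print(run_guard_rails("def broken(:\n"))             # False
```

The point is just that the gate is mechanical: the LLM's output never moves forward until the same checks you'd apply to a human's PR have passed.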
At some point, are you babysitting the LLM so much that you could just write it by hand? Maybe, but I generally think not. While I can get deeply focused and write lots of code, LLMs can still generate code and accompanying documentation, fix static analysis issues, and write/run the unit tests without taking breaks or getting distracted. And for some series of tasks, they can run in parallel in separate worktrees, further reducing the aggregate time to completion.
I don't expect a developer to build something complete without working on it incrementally with feedback; it's not much different with an LLM if you want meaningful results.
> That feels like cargo-culting the toolchain instead of asking the uncomfortable question: why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
This feels like a very unfair take to me. uv didn't happen in isolation, and it wasn't the first alternative to pip. It's built on a lot of hard work by the community to put the standards in place, through the PEP process, that make it possible.
The problem the OP is pointing out is that some programmers are incompetent and do string concatenation anyway, a mistake which, if anything, is even easier to make in Python thanks to string interpolation.
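To make the hazard concrete, here is a small sketch using Python's built-in sqlite3 module (the table and input are made up for illustration). An f-string splices attacker-controlled text directly into the SQL, while a `?` placeholder passes it out-of-band so it can only ever be treated as data:

```python
# Contrast: string interpolation vs. a parameterized query (sqlite3, stdlib).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Unsafe: the f-string turns the payload into live SQL, so the
# OR clause matches every row in the table.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the ? placeholder binds the value as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- the injection matched the whole table
print(safe)    # [] -- no user is literally named "alice' OR '1'='1"
```

The interpolated version is a single character's convenience away from the safe one, which is exactly why it keeps getting written.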