(Please focus on his discussion of PE licensure, not on what developers should call themselves)
He says some software can be designed in our usual way: informal, fast, iterative, democratized, agile. But more and more software needs to be designed like a PE designing a public bridge: carefully, to stand strong for 100 years.
This is a fun little set of commands to run live-captioning on my MacBook! Nothing serious.
A tutorial and demo video are in the link.
I saw a post on HN that introduced me to Whisper.cpp. I've been excited to see so many AI tools start to support my M3 MacBook/Metal/whatever so I can put my 96GB of memory to use.
I don’t have the timing windows tuned perfectly. Feel free to play with it!
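For anyone who wants the general shape of the setup without clicking through: OP's exact commands are at the link, but a typical whisper.cpp live-captioning setup looks roughly like this (binary name, paths, and flags vary by whisper.cpp version, so treat this as a sketch):

```shell
# Install SDL2 (required by whisper.cpp's streaming example) and build it
brew install sdl2
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
cmake -B build -DWHISPER_SDL2=ON
cmake --build build -j --config Release

# Fetch a small English model
./models/download-ggml-model.sh base.en

# Live-caption from the default microphone:
# --step is how often (ms) to emit new text, --length is the audio window (ms)
./build/bin/whisper-stream -m models/ggml-base.en.bin -t 8 --step 500 --length 5000
```

The `--step`/`--length` values here are the knobs OP mentions not having tuned perfectly; smaller steps give snappier captions at the cost of more re-transcription.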
Thanks so much for this!
I updated LM Studio, and it picked up the mlx-lm update required.
After a small tweak to tool-calling in the prompt, it works great with Zed!
Could you describe the tweak you did, and possibly the general setup you have with Zed working with LM Studio? Do you use a custom system prompt? What context size do you use? Temperature? Thanks!
Also, I had to go into LM Studio and increase the max context size for each model I wanted to use in Zed. Otherwise it gives a parsing error on the response. I set it to the max allowable value.
I start LM Studio, start the LM Studio server, then go to Zed's AI config and tell it to connect to LM Studio. I put it in Agent mode, and it seems to work!
I don't know much about temperature, and I didn't use any other system prompt.
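For anyone reproducing this, here's a rough sketch of the moving parts. It assumes LM Studio's default OpenAI-compatible server on localhost:1234; the port and menu locations are defaults/assumptions, not OP's confirmed setup:

```shell
# 1. In LM Studio: load a model, raise its max context length in the model
#    settings (to avoid the response-parsing errors mentioned above), then
#    start the local server. By default it listens on localhost:1234.

# 2. Sanity-check the OpenAI-compatible endpoint before touching Zed:
curl http://localhost:1234/v1/models

# 3. In Zed: open the Agent panel's settings, pick LM Studio as the provider
#    (pointing at the same URL if it isn't auto-detected), select the model,
#    and switch the panel to Agent mode.
```

If step 2 returns a model list, Zed should be able to talk to the same endpoint.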
OP here! Going into it, I definitely agreed, and thought that easily fact-checkable claims would be in the minority.
But as I worked, I found that many of his claims were of the form "this paper says X". So checking a claim was often as simple as asking: does the paper he's citing say what he says it does?
You can see them here: https://fact-check.brady.fyi/documents/3f744445-0703-4baf-89...
OP here! Thanks for calling out this important point.
As I fact-checked each claim, I was surprised at how many of the checks were "does the paper he's citing say what he says it does?"
You can see them here: https://fact-check.brady.fyi/documents/3f744445-0703-4baf-89...
Yeah. And that's really important: if someone makes a correct claim by accident (say, they misread a paper that incorrectly claims X and took it to be claiming not-X), we shouldn't consider that evidence that they are trustworthy or honest, just lucky.
But then you have cases where someone correctly cites a source that they know to be incorrect (or at least plausibly should know is incorrect). This is commonly done when flawed studies are funded specifically so they can be cited. That's arguably even more egregious lying, yet it would pass a consistency-based "fact check".
Likewise, the factual claim ("eight out of ten doctors surveyed recommend smoking brand-x") can be true while the implication is false.
In short, I'm not claiming such checks can't catch liars (they can), just that passing them doesn't mean the speaker was telling the truth, or that what they said or implied was correct.
OP here -- thanks for your reply!
You're exactly right! I included the NYT/PolitiFact graph at the top as an example of that problem.
In the second half of the post, I propose what I think could work a little better (sampling comparable speeches and fact-checking the entire text).