The boring way is also a feature for users. Knowing a tool will still work the same in three years matters more than most product comparisons acknowledge.
This is what Graeber called "box-ticking" in Bullshit Jobs: work that exists so the org can prove it's doing something, not because the doing matters. The leaderboard isn't measuring productivity; it's producing proof of AI adoption. Once an exec says "we're an AI-first company," the rest of the org needs to show that's actually happening. Token counts are the easiest thing to put on a dashboard, regardless of whether anyone got anything done.
Rejections are usually conditional on the world at the time: a constraint, a dependency, a workaround that exists today. When those conditions change, the rejection is stale, but the log still reads "we tried this and it failed." How do you think about surfacing stale entries for revisit? Is it on the agent to spot them on its own, or is there a manual deprecation step?
There isn't a manual deprecation step, because the agent has no context outside what the human gives it. Deprecation happens when conflicting information is given ("you want to do this but this note says you tried it before and it failed, what do you want to do?").
At that point, either the human decides to go for it, the new decision is noted, and the old one is superseded/removed, or the human says "wow I'm sure glad I'm using gnosis" and everything is left as-is.
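A minimal sketch of that flow, for concreteness. The names and structure here are my own invention, not gnosis's actual internals: rejections are logged with their reason, a new decision on the same topic surfaces any still-active rejection for the human to rule on, and going ahead anyway marks the old entry superseded.

```python
from dataclasses import dataclass


@dataclass
class Rejection:
    topic: str       # what was tried
    reason: str      # why it failed at the time (the world-state condition)
    superseded: bool = False


class DecisionLog:
    def __init__(self):
        self.rejections: list[Rejection] = []

    def record_rejection(self, topic: str, reason: str) -> None:
        self.rejections.append(Rejection(topic, reason))

    def check_conflict(self, topic: str) -> list[Rejection]:
        # Surface any still-active rejection so the human can decide:
        # "you want to do this but this note says you tried it before and it failed"
        return [r for r in self.rejections if r.topic == topic and not r.superseded]

    def supersede(self, topic: str) -> None:
        # The human chose to go ahead anyway; retire the stale rejection.
        for r in self.rejections:
            if r.topic == topic:
                r.superseded = True


log = DecisionLog()
log.record_rejection("use-sqlite-fts", "FTS5 missing in our pinned version")

# Later, the human proposes the same thing again:
conflicts = log.check_conflict("use-sqlite-fts")
if conflicts:
    # agent asks: "you tried this before and it failed — proceed anyway?"
    log.supersede("use-sqlite-fts")  # human says yes; old decision retired
```

The point is that deprecation is never a background job; it only ever happens at the moment of conflict, with the human in the loop.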
Hi! Yes, the premium voices are Kokoro. I’m only exposing the English voices right now because the rest of the pipeline around them is English-first and custom, especially pronunciation/G2P, QA, and timestamp awareness. I’d like to expand that over time, but I don’t want to overpromise multilingual support before the surrounding stack is ready. So I'm taking it one language at a time based on demand and feedback.
AI summaries are currently generated remotely, not locally; they use gpt-4o-mini. TTS and OCR are on-device, and summarization is the cloud-backed feature.
Have you seen the same chain pattern outside finance yet? I wonder whether investment scams are just the most conspicuous because the payout per convert is high, or whether they're seeded most widely on YouTube specifically.
I saw something like this for a book. It was under an Instagram reel where the person was describing ways to improve your self-esteem. In the comments section someone mentioned a book that worked for them, and it had a few replies saying how it worked for them too. I searched for the book: it was a very new book from an unknown author, with zero reviews everywhere.
This is the kind of porting work I always hope for when I see a CUDA-only release. Have you thought about publishing the gather-scatter sparse 3D convolution and SDPA attention swaps as a standalone toolkit or writeup? A lot of folks running models locally on Apple Silicon hit the same wall with flash_attn, nvdiffrast, and custom sparse kernels and end up redoing the same work.
Hi! Each app goes through an automated agent that verifies it can be cleanly installed and run, with malware scanning. We research each one across forums, Reddit, GitHub issues, and other sources to put together a comprehensive description with links to dev pages and repositories. There's also a built-in recommendation system that brings up similar apps.
Since verification and scanning are token- and time-consuming, not every app has been fully processed yet. It's an ongoing effort!