Hacker News | ipsum2's comments

I don't get why writers do this. It makes sense for fiction, but why a factual, non-fiction article?

You said factual. But what is factual for you and me may not be for someone else. There are a lot of recollections in the article where sama remembers one version, or doesn't remember at all, and the other party remembers something else. Combine that with the nature of the article and the legal issues, considering the egos and sums involved. On top of all that, the New Yorker is known for fact checking that is exhaustive to the point of paranoia.

I am just speculating but if @ronanfarrow is still checking the discussion here, it would be amazing to hear the actual reasons.


It keeps it shorter.

Parakeet is significantly more accurate and faster than Whisper if it supports your language.

Are you running Parakeet with VoiceInk[0]?

[0]: https://github.com/beingpax/VoiceInk



I am, and it's been working great for a long time now.

Right, and if you're on MacOS you can use it for free with Hex: https://github.com/kitlangton/Hex

Or write your own custom one with the library that backs it: https://github.com/FluidInference/FluidAudio

I did that so that I could record my own inputs and finetune parakeet to make it accurate enough to skip post-processing.


There's a fork of FluidAudio that supports the recent Cohere model: https://github.com/altic-dev/FluidAudio/tree/B/cohere-coreml...

It's used by this dictation app: https://github.com/altic-dev/FluidVoice/


Parakeet supports Japanese now, but I can't find a version ported to Apple Silicon yet.

I have been using Parakeet with MacWhisper's hold-to-talk on a MacBook Neo and it's been awesome.

And indeed, Ghost Pepper supports Parakeet v3.

How good is the guitar to MIDI detection? Can it recognize chords?

Unfortunately not right now; it's in the works. Polyphonic guitar-to-MIDI is a problem I have yet to understand and attempt to solve in this app. Jam Origin's MIDI Guitar is good at that; I still need to get there.

Would be useful to have a video demoing it, for people who don't have a guitar or MIDI instrument handy.

You can use your computer keyboard as well. Set up the IAC buses, use the keyboard, and you should be sorted.

I just started reading Asimov Press. It has a strange name; I thought they were a sci-fi publishing company at first.

It had a unique blend of popular science writing that was sorely missing from the internet. Alas I hardly knew thee.


Isaac Asimov wrote a huge number of popular science books as well. They just have a shorter shelf life.

I enjoyed his “science for the layman” books a lot more than his sci-fi stuff.

He was really good at explaining very complex material in a simple, approachable manner.


My mother bought me "The Planet that Wasn't" when I was a kid - excellent book that I re-read many times:

https://en.wikipedia.org/wiki/The_Planet_That_Wasn%27t


This is why I read HN's comments.

I get MANY recommendations here for books/movies/shows/topics etc. that I knew nothing about.

Just ordered this book for my 10-year-old grandson.


Maybe I should have mentioned that was about 50 years ago - still a good book though!

I suspect much of it is still relevant.

Well I'm pretty sure that Vulcan still isn't there!

The question is, does this have anything to do with Asimov the writer? Was he involved at the start, or did he endorse it somehow?

Judging by the .press domain it's too new for that.


Judging by the fact that you DNRTFA, we can't help you.

Actually, I did look them up and found that Asimov was never involved with them or their parent company.

A bit dishonest, don't you think?


Everyone is surprised at the $300k/year figure, but that seems on the low end. My previous workplace spent tens of millions a year on GPU continuous integration tests.

The $300K/year figure is surprising because it was for something that didn't need to exist (RPC calls).

Also not free nor open source, it just calls to fal.


The prior version used Modal to host the sam audio model (https://github.com/sambarrowclough/clearaudio/pull/2). fal made things simpler and faster.


This seems like pure misinformation. The code lines that are actually changed:

      hint: {
        opencode: "recommended",
      - anthropic: "API key",
        openai: "ChatGPT Plus/Pro or API key",
      }[x.id],

They're removing the ability to use OpenCode via an Anthropic API key.


This is what most people in the comments are missing. They are removing the ability to even use Anthropic API keys, not just your Max subscription.


This is not true. API keys are supported; only "Claude Code" is being dropped.

That code is just a CLI hint for which LLM they recommend using. So they stop recommending Anthropic, rightfully so.


Is this what the legal request demanded, or is this just something that OpenCode is doing out of spite? Seems unclear. To me, the meat of this change is that they're removing support for `opencode-anthropic-auth` and the prompt text that allows OpenCode to mimic Claude Code behavior. They have been skirting the intent of the original C&D for a while now with these auth plugins and prompt text.


Using your API key in third-party harnesses has always been allowed. They just don't like the subsidized subscription plan being used outside of first-party harnesses. So this seems to be out of spite.


It is what the legal demands require. They requested removal of all Anthropic (trademark?) mentions.


Anthropic's issue was always them spoofing OpenCode as Claude Code, piggybacking on the subscription plan.

Banning them from using the pay-per-token API key would be bad business.


I believe parent is talking about a separate topic, not about this change.


Hyperparam tuning that has better intuition and can incorporate architecture changes automatically. It won't invent something completely new though.


Hm, that's fair. It does feel like there's low hanging fruit in combining "old school" methods for conducting a hyperparameter sweep efficiently _with_ the higher level architecture edit ability of Autoresearch.

Probably would cut the number of runs down significantly (as far as I can tell, it does a grid search once it decides to mess with a knob or section of the architecture).
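For concreteness, the "old school" sweep being contrasted here can be sketched as a plain grid search. The parameter names and scoring function below are hypothetical stand-ins for whatever knobs the agent decides to touch:

```python
import itertools

# Hypothetical search space: each key is a knob, each list its candidate values.
grid = {
    "lr": [1e-3, 1e-2, 1e-1],
    "hidden": [64, 128],
}

def evaluate(params):
    # Stand-in for a real training run; returns a score to maximize.
    # This toy score peaks at lr=1e-2, hidden=128.
    return -abs(params["lr"] - 1e-2) - abs(params["hidden"] - 128) / 1000

def grid_search(grid, evaluate):
    """Exhaustively try every combination in the grid, keeping the best."""
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The cost is the product of all list lengths (here 3 × 2 = 6 runs), which is exactly why pruning the grid with higher-level intuition about which knobs matter would cut the run count.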


> It won't invent something completely new though.

I don't necessarily disagree, but am wondering whether you have any particular reason/intuition driving you to claim this. I have seen AI agents be quite creative in other tasks; do you think there's a particular reason why we shouldn't see creativity in architecture research, given enough time and resources?


A cluster is 2 nodes? That's technically true, but not very exciting.

