Hacker News | Nedomas's comments

Two weeks ago I started heavily using Codex (I have 20+ years of dev experience).

At first I was very enthusiastic and thought Codex was helping me multiplex myself. But you actually spend so much time explaining the most obvious things to Codex, and it gets them wrong in some nuanced way so often, that in the end you spend more time doing things via Codex than by hand.

So I also dialed back my Codex usage and went back to doing many more things by hand, because it's just so much faster and much more predictable time-wise.


Same experience. These "background agents" are powered by models that aren't yet capable enough to handle large, tangled, or legacy codebases without human guidance, so the background part ends up being functionally useless in my experience.


We've built a version of this on steroids: not only a registry, but also one-click MCP hosting. Would love your eyeballs if you're into MCP: https://supermachine.ai


Welcome to the MCP hosting space, guys. We've also been hosting MCPs since early Feb at https://supermachine.ai and investing a lot into open source (see https://github.com/supercorp-ai/supergateway and many others).

Guess there’s gonna be more competition haha (but don’t worry, I think our approaches are a bit different).


Why is it so expensive? $100 a month.


The interesting question is: what are your respective end games, and do they compete? By this I mean the way Facebook at first seemed to be just a face book for college campuses.


There's an open-source package that lets you defer providing credentials to an MCP server until runtime, via an MCP tool call: https://github.com/supercorp-ai/superargs

For hosted MCPs: https://supermachine.ai
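The general pattern (a minimal Python sketch with hypothetical names, not the actual superargs API) is to expose a tool that accepts credentials after the server is already running, and hold them for subsequent calls:

```python
class DeferredCredentialsServer:
    """Toy stand-in for an MCP server whose credentials arrive at runtime
    via a tool call, rather than being baked in at startup."""

    def __init__(self):
        self._api_key = None

    def set_credentials(self, api_key: str) -> str:
        # Exposed as a tool, so the client (or the model) can supply
        # secrets after the server has booted.
        self._api_key = api_key
        return "credentials stored"

    def call_upstream(self, query: str) -> str:
        if self._api_key is None:
            raise RuntimeError("credentials not provided yet; call set_credentials first")
        # Placeholder for the real upstream request using the stored key.
        return f"queried upstream for {query!r}"


server = DeferredCredentialsServer()
server.set_credentials("sk-example")  # supplied at runtime, not at boot
print(server.call_upstream("weather"))
```

The upside is that the server binary/config never has to contain secrets; the downside is every privileged tool call needs the "not yet configured" error path shown above.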


Met the guys in SF; the concept is great. Can we really build o1-type models with this? Congratulations on the launch.


Met with the Humiris MoAI team in SF; they are onto something cool.


I do, and I built an Assistants API compat layer for Groq and Anthropic: https://github.com/supercorp-ai/supercompat I'd argue that Assistants API DX > manual completions API DX.


Aye, but your FinOps will be complaining even with simple use.


Using the Assistants API in prod used to suck because it would send the full conversation on each message. But last month they added an option to send truncated history, so thankfully it's no longer $2 a pop. Also, Grok, Haiku, and Mistral are cheap.
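For reference, the option looks roughly like this. This is a hedged sketch of the request payload assuming the Assistants API v2 `truncation_strategy` parameter, with placeholder IDs; check the current OpenAI docs before relying on it:

```python
# Sketch of an Assistants API v2 run payload that caps the history
# sent to the model, instead of replaying the full conversation.
# All IDs are placeholders, not real resources.
run_payload = {
    "assistant_id": "asst_placeholder",
    "truncation_strategy": {
        "type": "last_messages",   # only send the N most recent messages
        "last_messages": 10,
    },
}

# With the official client these would be keyword arguments, e.g.:
# client.beta.threads.runs.create(thread_id="thread_placeholder", **run_payload)
print(run_payload["truncation_strategy"]["type"])
```

Capping at the last N messages is what turns per-message cost from growing with conversation length into a roughly constant amount.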


Are you using Assistants API v2 with streaming?


Yeah, I do both in prod and in the lib. In the lib I even ported Anthropic's streaming API to be OpenAI-compatible. Will write the docs over the coming days if there's interest.
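The core of such a port is remapping stream events from one shape to the other. A minimal sketch (event shapes simplified from both APIs; this is illustrative, not the supercompat implementation):

```python
def anthropic_to_openai_chunks(events, model="claude-3"):
    """Remap simplified Anthropic stream events into OpenAI-style
    chat-completion chunks, so downstream code handles one shape only."""
    for event in events:
        if event.get("type") == "content_block_delta":
            yield {
                "object": "chat.completion.chunk",
                "model": model,
                "choices": [{
                    "index": 0,
                    "delta": {"content": event["delta"]["text"]},
                    "finish_reason": None,
                }],
            }
        elif event.get("type") == "message_stop":
            # Terminal event maps to an empty delta with finish_reason set.
            yield {
                "object": "chat.completion.chunk",
                "model": model,
                "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}],
            }


events = [
    {"type": "content_block_delta", "delta": {"text": "Hel"}},
    {"type": "content_block_delta", "delta": {"text": "lo"}},
    {"type": "message_stop"},
]
text = "".join(
    c["choices"][0]["delta"].get("content", "")
    for c in anthropic_to_openai_chunks(events)
)
print(text)  # Hello
```

A real adapter would also map tool-use blocks, usage stats, and error events, but the translation layer stays this shape: one generator consuming provider events and yielding the target schema.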


Hi HN! I started working with more text models in PyTorch, experimented with LSTMs, and found most of the tutorials terribly outdated. So I figured out what the most modern best practices are (until the next version of torchtext arrives) and wanted to share. Does anybody use LSTMs at work, or is everyone on BERTs?
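For readers who haven't touched LSTMs recently, the current idiom the outdated tutorials tend to miss is packing variable-length sequences so the LSTM skips padding. A minimal sketch (my own toy example, not taken from the linked material):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence


class LSTMClassifier(nn.Module):
    """Minimal text classifier: embed tokens, run an LSTM over packed
    (variable-length) sequences, classify from the final hidden state."""

    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids, lengths):
        embedded = self.embedding(token_ids)
        # Pack so the LSTM never processes the padding positions.
        packed = pack_padded_sequence(
            embedded, lengths.cpu(), batch_first=True, enforce_sorted=False
        )
        _, (hidden, _) = self.lstm(packed)
        return self.fc(hidden[-1])  # (batch, num_classes)


# Toy batch: two padded sequences with true lengths 5 and 3.
tokens = torch.randint(1, 1000, (2, 5))
tokens[1, 3:] = 0  # padding
lengths = torch.tensor([5, 3])
logits = LSTMClassifier()(tokens, lengths)
print(logits.shape)  # torch.Size([2, 2])
```

`enforce_sorted=False` is the piece many pre-1.x tutorials lack: you no longer have to sort the batch by length yourself.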


Thanks for the feedback. I'll add it. What kind of brand is it? I couldn't quite understand your message.



YOU NAZI! Thanks — fixed it, and I'm feeling a bit ashamed of it :)

