
It's still not available in the API despite them announcing the availability.

They even linked to their Image Playground, where it's also not available.

I updated my local playground to support it, and I'm just handling the 404 on the model gracefully.

https://github.com/alasano/gpt-image-1-playground
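
For what it's worth, the graceful handling is nothing fancy. Roughly something like this (a simplified TypeScript sketch, not the actual playground code; falling back to gpt-image-1 is just one way to handle it, and the model name is the one discussed in this thread):

  import OpenAI from "openai";

  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

  // Try the new model first; if the API doesn't know about it yet
  // (404 on the model), fall back instead of surfacing a raw error.
  async function generateImage(prompt: string) {
    try {
      return await client.images.generate({ model: "gpt-image-1-mini", prompt });
    } catch (err) {
      if (err instanceof OpenAI.APIError && err.status === 404) {
        console.warn("gpt-image-1-mini not available yet; falling back to gpt-image-1");
        return client.images.generate({ model: "gpt-image-1", prompt });
      }
      throw err;
    }
  }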


Yeah I just tried it and got a 500 server error with no details as to why:

  POST "https://api.openai.com/v1/responses": 500 Internal Server Error {
    "message": "An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_******************* in your message.",
    "type": "server_error",
    "param": null,
    "code": "server_error"
  }
Interestingly, if you request a nonexistent model name (here, 'blah'), you get an error listing the supported values:

  POST "https://api.openai.com/v1/responses": 400 Bad Request {
    "message": "Invalid value: 'blah'. Supported values are: 'gpt-image-1' and 'gpt-image-1-mini'.",
    "type": "invalid_request_error",
    "param": "tools[0].model",
    "code": "invalid_value"
  }
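
For reference, the "tools[0].model" param path suggests the request shape is roughly this (a sketch; the outer model and prompt are placeholders I made up, only the tool's model field comes from the errors above):

  // Sketch of the kind of request that produces the errors above,
  // inferred from the "tools[0].model" param path.
  const res = await fetch("https://api.openai.com/v1/responses", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4.1", // placeholder outer model
      input: "Generate a small test image",
      tools: [{ type: "image_generation", model: "gpt-image-1-mini" }],
    }),
  });
  console.log(res.status, await res.json());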


My Enterprise account got an email 1.5 hours ago saying it's available in the API, but my other accounts haven't gotten any email yet.


It's a staggered rollout but I am not seeing it on the backend either.


> staggered rollout

It's too bad no OpenAI Engineers (or Marketers?) know that term exists. /s

I do not understand why it's so hard for them to just tell the truth. So many announcements that say "Available today for Plus/Pro/etc" really mean "sometime this week at best, maybe multiple weeks". I'm not asking for them to roll out faster, just to communicate better.


I created this local Sora 2 Playground if you want to play around with the new sora-2 and sora-2-pro models.

It supports all params available in the API, lets you queue multiple videos for generation, remix videos, poll for progress, view costs, and more.

It's based on my previous gpt-image-1 playground :) https://github.com/Alasano/gpt-image-1-playground
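If you'd rather skip the UI, the core of it is just create-then-poll, something like this (a rough sketch; the endpoint path and field names are from memory, not from the repo, so double-check them against the API reference):

  // Minimal sketch of the queue-then-poll flow the playground wraps in a UI.
  const headers = {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  };

  async function createAndWait(prompt: string, model = "sora-2") {
    // Queue the generation job.
    const created = await fetch("https://api.openai.com/v1/videos", {
      method: "POST",
      headers,
      body: JSON.stringify({ model, prompt }),
    }).then((r) => r.json());

    // Poll until the job finishes (or fails).
    let video = created;
    while (video.status === "queued" || video.status === "in_progress") {
      await new Promise((resolve) => setTimeout(resolve, 5000));
      video = await fetch(`https://api.openai.com/v1/videos/${created.id}`, { headers })
        .then((r) => r.json());
    }
    return video;
  }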

Enjoy!


I find that it consistently breaks around the exact range you specified, in the sense that reliability falls off a cliff, even though I've used it successfully close to the 1M token limit.

At 500k+ I will define a task and it will suddenly panic and go back to a previous task that we just fully completed.


The Context7 MCP server helps with this, but I agree.


Interesting that you're migrating assistants and threads to the Responses API; I presumed you were killing them off.

I started my MVP product with assistants and migrated to responses pretty easily. I handle a few more things myself, but other than that it hasn't really been difficult.
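
The gist of the switch, in case it helps anyone else migrating (a simplified sketch; the model name is a placeholder): instead of creating a thread and appending messages, each turn is a responses.create call chained with previous_response_id, and storing that id is now on you.

  import OpenAI from "openai";

  const client = new OpenAI();

  // One conversational turn. Chaining via previous_response_id replaces
  // the thread object; persist the returned id yourself between turns.
  async function chatTurn(userInput: string, previousResponseId?: string) {
    const response = await client.responses.create({
      model: "gpt-4.1", // placeholder
      input: userInput,
      previous_response_id: previousResponseId,
    });
    return { id: response.id, text: response.output_text };
  }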


I like hotels but I love renting someplace unique and pretending that I live in that city or place for a week or two.

I understand your point if all you're looking for is somewhere to sleep that's clean and comfy.


I can't find any reference to Cline/Roo charging anything on top of API pricing.

Not sure how they'd do it considering you bring your own API keys. Can you link me to a resource?


GP didn't say Cline/Roo charged anything on top.


The comparison table on the Kilo site says "OpenRouter without 5% markup" and only puts a checkmark next to Kilo.


Yes - with our built-in provider, we offer all the models that OpenRouter provides, but without OpenRouter's 5% markup. We provide them at cost (the AI provider's cost).


Roo/Cline doesn't offer OpenRouter as a product, markup or not.


You can most definitely use OpenRouter with Roo and Cline. The OpenRouter leaderboards are dominated by these two apps.


But they don't OFFER OpenRouter as a paid product... You cannot give Roo/Cline dollars and get OpenRouter API access.


Even with proper tool call descriptions, I've had quite a few occasions where the LLM didn't know how to use the tool.

The tools provided by the MCP server were definitely in context, and there were only two or three servers with a small number of tools enabled.

It feels too model-dependent at the moment; this was Gemini 2.5 Pro, which is normally state of the art but seems to have lots of quirks around tool use.

Agreed, and hopefully models will be trained to be better at using MCP.
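
For reference, by "proper descriptions" I mean tool definitions roughly like this (an illustrative sketch with a made-up tool, using the MCP TypeScript SDK), and the model still fumbled the calls:

  import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  import { z } from "zod";

  const server = new McpServer({ name: "example-docs", version: "1.0.0" });

  // A tool with an explicit description and per-parameter hints.
  server.tool(
    "search_docs",
    "Search the project documentation. Call this before answering questions about the API.",
    { query: z.string().describe("Plain-language search query, e.g. 'how to configure retries'") },
    async ({ query }) => ({
      content: [{ type: "text", text: `Results for: ${query}` }],
    })
  );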


Right, my workflow to get even a basic prompt working consistently rarely involves fewer than like 10 cycles of [run it 10 times -> update the prompt extensively to knock out problems in the first step].

And then every time I try to add something new to the prompt, all the prompting for previously existing behavior often needs to be updated as well to account for the new stuff, even if it's in a totally separate 'branch' of the prompt flow/logic.

I'd anticipate that each individual MCP I wanted to add would require a similar process to ensure reliability.
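
Concretely, the inner loop is just brute force, something like this (a toy sketch; the model name and the DONE check are placeholders for whatever the real task needs):

  import OpenAI from "openai";

  const client = new OpenAI();

  // Placeholder check: real ones are task-specific (format, required fields, etc.)
  function checkOutput(text: string): boolean {
    return text.includes("DONE");
  }

  // Run the prompt N times, collect failures, then go edit the prompt and repeat.
  async function evalPrompt(prompt: string, runs = 10) {
    const failures: string[] = [];
    for (let i = 0; i < runs; i++) {
      const res = await client.responses.create({ model: "gpt-4.1", input: prompt });
      if (!checkOutput(res.output_text)) failures.push(res.output_text);
    }
    console.log(`${failures.length}/${runs} runs failed`);
    return failures;
  }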


We don't know whether pushing towards AGI is marching towards a dystopia.

If it's winner takes all for the first company/nation to have AGI (presuming we can control it), then slowing down progress of any kind with regulation is a risk.

I don't think there's a good enough analogy to be made, like your nuclear power/weapons example.

The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.


As with nuclear weapons, there is a non-negligible probability of wiping out the human race. The companies developing AI have not solved the alignment problem, and OpenAI even dismantled what programs it had on it. They are not going to invest in it unless forced to.

We should not be racing ahead because China is, but investing energy in alignment research and international agreements.


> We don't know whether pushing towards AGI is marching towards a dystopia.

We do know that. By literally looking at China.

> The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.

AGI aligned with whom?


To be fair, taken to the extreme and dumbed down, I don't think it's bad to think about it like that, hype spewing aside.

The comment you replied to is sarcastic, but a magic box that does everything is pretty much where things end up, given enough time.

