nknj's comments | Hacker News

you can use local mcp servers with the agents sdk: https://openai.github.io/openai-agents-python/mcp/

responses api is a hosted thing, so it made the most sense for it to directly connect to other hosted services (like remote mcp servers).
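To make the shape of that concrete, here is a minimal sketch of a Responses API request body that attaches a remote MCP server as a tool. The `server_label` and `server_url` values are placeholders, not real endpoints:

```python
# Sketch of a Responses API request body that attaches a remote MCP server.
# The server_label / server_url values are placeholders for illustration.
request_body = {
    "model": "gpt-4.1",
    "input": "What can this MCP server do?",
    "tools": [
        {
            "type": "mcp",                      # hosted MCP tool type
            "server_label": "example_server",   # any label you choose
            "server_url": "https://example.com/mcp",
            "require_approval": "never",        # skip per-call approval prompts
        }
    ],
}

print(request_body["tools"][0]["type"])
```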


one additional difference between chat and responses is the number of model turns a single api call can make. chat completions is a single-turn api primitive -- it can talk to the model just once per call. responses can make multiple model turns and tool calls within a single api call.
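The difference can be sketched with a stubbed-out version of the loop a Chat Completions app has to run itself. `call_model` and `run_tool` below are hypothetical stand-ins, not real SDK calls:

```python
# With Chat Completions, the app owns the tool loop: each model turn is a
# separate API call, and tool results are appended to the messages by hand.
# call_model and run_tool are hypothetical stubs for illustration only.

def call_model(messages):
    # Stub: pretend the model asks for one tool call, then finishes.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "file_search", "args": {"query": "memories"}}}
    return {"content": "done"}

def run_tool(tool_call):
    return {"role": "tool", "content": f"results for {tool_call['args']['query']}"}

messages = [{"role": "user", "content": "look up my memories"}]
while True:
    reply = call_model(messages)      # one API call per model turn
    if "tool_call" not in reply:
        break
    messages.append(run_tool(reply["tool_call"]))

# With Responses, this whole loop can instead run server-side
# inside a single API call.
print(reply["content"])
```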

for example, you can give the responses api access to 3 tools: a vector store with some user memories (file_search), the shopify mcp server, and code_interpreter. you can then ask it to look up some user memories, find relevant items in the shopify mcp store, and then download them into a csv file. all of this can be done in a single api call that involves multiple model turns and tool calls.
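A sketch of what that single call's request body might look like, with all three tools wired up. The vector store ID and MCP URL are placeholders; the `type` strings follow the documented Responses tool schema:

```python
# Sketch of one Responses API call combining file_search, a remote MCP
# server, and code_interpreter. IDs and URLs are placeholders.
request_body = {
    "model": "gpt-4.1",
    "input": (
        "Look up my saved preferences, find matching items in the store, "
        "and write them to a CSV file."
    ),
    "tools": [
        {"type": "file_search", "vector_store_ids": ["vs_user_memories"]},
        {
            "type": "mcp",
            "server_label": "shopify",
            "server_url": "https://example-shop.myshopify.com/api/mcp",
        },
        {"type": "code_interpreter", "container": {"type": "auto"}},
    ],
}

tool_types = [t["type"] for t in request_body["tools"]]
print(tool_types)
```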

p.s. - you can also use responses statelessly by setting store=false.
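With `store=false`, nothing is persisted server-side, so the app replays prior turns itself via `input` instead of chaining on a stored response. A minimal sketch of two stateless turns (message contents are made up):

```python
# Sketch of stateless Responses usage: store=False means no server-side
# state, so the app threads conversation history back in by hand.
first_turn = {
    "model": "gpt-4.1",
    "input": [{"role": "user", "content": "My name is Ada."}],
    "store": False,  # nothing is persisted server-side
}

# On the next turn, replay the history yourself:
second_turn = {
    "model": "gpt-4.1",
    "input": first_turn["input"] + [
        {"role": "assistant", "content": "Nice to meet you, Ada."},
        {"role": "user", "content": "What's my name?"},
    ],
    "store": False,
}

print(len(second_turn["input"]))
```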


What are my options for using a custom tool? Does it come down to function calling (single turn) versus MCP (multi-turn via Responses), or is there something else?

Why would anyone want to use Responses statelessly? Just trying to understand.


i think the original intent of responses api was also to unify the realtime experiences into responses - is that accurate?


we expect responses and realtime to be our 2 core api primitives long term -- responses for turn-by-turn interactions, and realtime for use cases that need low-latency bidirectional streams between apps and models.


thank you for the correction!


I hear you and really appreciate the patience here.

We're almost ready to share a migration guide. Today, we closed the gap between Assistants and Responses by launching Code Interpreter and support for multiple vector stores in File Search.

We still need to add support for Assistants and Threads objects to Responses before we can give devs a simple migration path. Working on this actively and hope to have all of this out in the coming weeks.


On the announcement page they are saying that "...introducing updates to the file search tool that allow developers to perform searches across multiple vector stores...". On the docs, I still find this limitation: "At the moment, you can search in only one vector store at a time, so you can include only one vector store ID when calling the file search tool."

Does anybody know how searching multiple vector stores is implemented? The obvious approach would be to allow something like:

  "vector_store_ids": ["<vector_store_id1>", "<vector_store_id2>", ...]


sorry about the error in the docs. we're removing that callout.

`"vector_store_ids": ["<vector_store_id1>", "<vector_store_id2>"]` is exactly right. only 2 vector stores are supported at the moment.
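Putting that into a full tool entry, here is a sketch of a `file_search` configuration searching across two stores (the IDs are placeholders; per the comment above, two is the current maximum):

```python
# Sketch of a file_search tool entry spanning two vector stores.
# The IDs are placeholders; two stores is the stated current limit.
MAX_VECTOR_STORES = 2

file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123", "vs_def456"],
}

assert len(file_search_tool["vector_store_ids"]) <= MAX_VECTOR_STORES
print(file_search_tool["vector_store_ids"])
```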


2 feels quite arbitrary and honestly not that much of an improvement. Any plans to up that limit?


Interesting that you're migrating assistants and threads to the responses API, I presumed you were killing them off.

I started my MVP product with assistants and migrated to responses pretty easily. I handle a few more things myself but other than that it's not really been difficult.


there's no rush to do this - in the coming weeks, we will add support for:

- assistant-like and thread-like objects to the responses api

- async responses

- code interpreter in responses

once we do this, we'll share a migration guide that allows you to move over without any loss of features or data. we'll also give you a full 12 months to do your migration. feel free to reach out at nikunj[at]openai.com if you have any questions about any of this, and thank you so much for building on the assistants api beta! I think you'll really like responses api too!


If Responses is replacing Assistants, is there a quickstart template available, similar to the one you had for Assistants?

https://github.com/openai/openai-assistants-quickstart


Proud to have Stainless as a partner at OpenAI. All our SDKs are generated by them. Alex, Robert, Philip and team are extremely thoughtful about SDK design and push us to improve our API + other products while they're at it. Stainless is bringing a new standard of SDKs to our industry and this is a great thing for all developers.


