Not quite. My understanding is that OpenAI's embeddings APIs return only a single vector per document, rather than the per-token sequence of hidden states that a GPT-type LLM produces internally while generating a response.
Imagine getting generated text from a GPT LLM that comes with a deep embedding of each generated token's "contextual meaning":
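Here's a minimal sketch of what that could look like. It uses the Hugging Face transformers library with gpt2 as a stand-in open model (my choices for illustration; OpenAI's API doesn't expose this), generating a continuation and then pulling the final-layer hidden state for every token:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is just a stand-in; any causal LM from the hub would work here.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Embeddings capture"  # arbitrary example prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate a short continuation (greedy decoding for reproducibility).
generated = model.generate(input_ids, max_new_tokens=10, do_sample=False)

# One forward pass over the full sequence, asking for all hidden states.
with torch.no_grad():
    out = model(generated, output_hidden_states=True)

# out.hidden_states[-1] is the final transformer layer: one contextual
# vector per token, i.e. the "deep embedding" of each token in context.
last_layer = out.hidden_states[-1][0]  # shape: (seq_len, hidden_dim)

for token_id, vec in zip(generated[0], last_layer):
    print(repr(tokenizer.decode(token_id)), vec.shape)
```

One could also capture these vectors during generation itself (generate() accepts `return_dict_in_generate=True, output_hidden_states=True`), but the extra forward pass above is the simpler way to get a contextual embedding for every token, including the last one generated.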