Prompt chaining
This workflow uses a technique called "prompt chaining". First, an AI model generates a concise summary of the original text. Then, using that summary, a second model creates a set of multiple-choice questions.
```mermaid
graph TD
%%{init: {'theme': 'mc','layout': 'elk'}}%%
OpenAIModel-2zqbh[<div><img alt="logo" src="/_astro/openAI.BhmuxEs3.svg" style="height: 20px !important;width: 20px !important"/></div>OpenAI]
style OpenAIModel-2zqbh stroke:#a170ff
OpenAIModel-ponsr[<div><img alt="logo" src="/_astro/openAI.BhmuxEs3.svg" style="height: 20px !important;width: 20px !important"/></div>OpenAI]
style OpenAIModel-ponsr stroke:#a170ff
ChatOutput-p1d55[<div><img alt="logo" src="/_astro/messages-square.BaSDmT6g.svg" style="height: 20px !important;width: 20px !important"/></div>Chat Output]
style ChatOutput-p1d55 stroke:#a170ff
Prompt-vfbvv[<div><img alt="logo" src="/_astro/square-terminal.BMOXc-nZ.svg" style="height: 20px !important;width: 20px !important"/></div>Generate Summary]
style Prompt-vfbvv stroke:#a170ff
ChatInput-3g4zy[<div><img alt="logo" src="/_astro/messages-square.BaSDmT6g.svg" style="height: 20px !important;width: 20px !important"/></div>Chat Input]
style ChatInput-3g4zy stroke:#a170ff
Prompt-g1mz2[<div><img alt="logo" src="/_astro/square-terminal.BMOXc-nZ.svg" style="height: 20px !important;width: 20px !important"/></div>Prompt]
style Prompt-g1mz2 stroke:#a170ff
OpenAIModel-ponsr -.- ChatOutput-p1d55
linkStyle 0 stroke:#a170ff
Prompt-vfbvv -.- OpenAIModel-2zqbh
linkStyle 1 stroke:#a170ff
ChatInput-3g4zy -.- Prompt-vfbvv
linkStyle 2 stroke:#a170ff
Prompt-g1mz2 -.- OpenAIModel-ponsr
linkStyle 3 stroke:#a170ff
OpenAIModel-2zqbh -.- Prompt-g1mz2
linkStyle 4 stroke:#a170ff
```
Prompt Chaining Workflow Documentation
🧩 Overview
The workflow implements a two‑stage prompt chaining process. First, the user’s text is summarized by an OpenAI model, then the summary is fed to a second model that produces a set of multiple‑choice questions. The resulting questions are displayed in the chat interface, giving users an instant, self‑contained learning exercise derived from the original content.
⚙️ Main Features
- Accepts free‑form text from the chat interface.
- Generates a concise summary of the user’s input.
- Creates multiple‑choice questions based on the summary.
- Uses two separate OpenAI model calls to keep responsibilities distinct.
- Streams the final output back to the chat without intermediate storage.
🔄 Workflow Steps
| Component Name | Role in the Workflow | Key Inputs | Key Outputs |
|---|---|---|---|
| Chat Input | Receives the user’s message from the playground. | User message text, session ID, conversation ID | Message object containing the user’s input |
| Prompt (Generate Summary) | Builds the prompt that instructs the model to produce a summary. | Text from Chat Input | Prompt message to be sent to the first model |
| OpenAI (Summarization) | Generates a concise summary of the input text. | Prompt message from the previous step | Text message containing the summary |
| Prompt (Generate Questions) | Constructs the prompt that asks for multiple‑choice questions based on the summary. | Summary text from the summarization step | Prompt message for the second model |
| OpenAI (Question Generation) | Produces the set of multiple‑choice questions. | Prompt message from the previous step | Text message containing the questions |
| Chat Output | Displays the generated questions in the playground chat. | Text from the second OpenAI model, conversation ID, session ID | Rendered chat message visible to the user |
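The steps above can be sketched in Python. This is a minimal illustration of the chaining pattern, not the workflow's actual implementation; the prompt wording and the `ask` callback are assumptions — in the real flow, `ask` corresponds to a call to the gpt-4o-mini model:

```python
from typing import Callable


def build_summary_prompt(text: str) -> str:
    # Mirrors the "Prompt (Generate Summary)" component; wording is illustrative.
    return f"Generate a concise summary of the following text:\n\n{text}"


def build_question_prompt(summary: str) -> str:
    # Mirrors the "Prompt (Generate Questions)" component; wording is illustrative.
    return f"Create multiple-choice questions based on this summary:\n\n{summary}"


def prompt_chain(user_text: str, ask: Callable[[str], str]) -> str:
    """Run the two-stage chain.

    `ask` sends one prompt to a model and returns its text reply
    (in the real workflow, an OpenAI chat completion call).
    """
    summary = ask(build_summary_prompt(user_text))   # first model call
    return ask(build_question_prompt(summary))       # second model call


# Stand-in model for demonstration only; replace with a real API call.
def echo_model(prompt: str) -> str:
    return f"[reply to: {prompt.splitlines()[0]}]"


questions = prompt_chain("Photosynthesis converts light into chemical energy.", echo_model)
```

The key design point is that the second prompt receives only the summary, not the original text, which keeps each model call focused on a single responsibility.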
🧠 Notes
- The workflow relies on an active OpenAI API key; the key is supplied to both model components.
- Each OpenAI call uses the gpt-4o-mini model with a low temperature (0.1) to keep responses consistent.
- The summary generation step feeds directly into the question generation step; no intermediate persistence is required.
- Token limits and rate‑limiting are governed by the OpenAI configuration; exceeding limits will result in a runtime error.
- The labels attached to the nodes provide human‑readable descriptions but are not part of the data flow.
- Even at a low temperature, the chain is not strictly deterministic for the same user input; supplying a seed parameter to both model calls improves reproducibility if needed.
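Because exceeding rate limits surfaces as a runtime error, callers embedding this workflow may want to wrap each model call in a retry with backoff. A generic sketch (the attempt count, delays, and retryable exception type are assumptions, not part of the workflow; in practice `retryable` would include the SDK's rate-limit exception):

```python
import time


def with_retries(call, max_attempts: int = 3, base_delay: float = 1.0,
                 retryable: tuple = (RuntimeError,)):
    """Invoke `call()` and retry with exponential backoff on a retryable error."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

A chained workflow multiplies the chance of hitting a rate limit, since each user message triggers two model calls back to back.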