Prompt Chaining
This flow uses a “prompt chaining” technique: first, one model generates a concise summary of the original text; then, using that summary, a second model creates a set of multiple-choice questions.
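The core idea can be sketched in a few lines of Python. Here `call_llm` is a hypothetical stand-in for whatever performs the actual model request (e.g. an OpenAI API call); only the chaining logic is shown.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (e.g. the OpenAI API)."""
    raise NotImplementedError

def prompt_chain(original_text: str, llm=call_llm) -> tuple[str, str]:
    # Step 1: ask the model for a concise summary of the original text.
    summary = llm(f"Summarise the following text concisely:\n\n{original_text}")
    # Step 2: feed that summary into a second prompt asking for questions.
    questions = llm(
        "Create a set of multiple-choice questions based on this summary:\n\n"
        f"{summary}"
    )
    return summary, questions
```

The key property of the chain is that the second prompt is built from the first model's output, not from the original text.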
```mermaid
graph TD
%%{init: {'theme': 'mc','layout': 'elk'}}%%
OpenAIModel-2zqbh[OpenAI]
style OpenAIModel-2zqbh stroke:#a170ff
ChatOutput-xropn[Muestra el Resumen]
style ChatOutput-xropn stroke:#a170ff
OpenAIModel-ponsr[OpenAI]
style OpenAIModel-ponsr stroke:#a170ff
ChatOutput-p1d55[Chat Output]
style ChatOutput-p1d55 stroke:#a170ff
Prompt-vfbvv[Generar Resumen]
style Prompt-vfbvv stroke:#a170ff
TextOutput-buyib[Salida]
style TextOutput-buyib stroke:#a170ff
Prompt-2fe60[Creador de Preguntas]
style Prompt-2fe60 stroke:#a170ff
TextOutput-fplll[Text Output]
style TextOutput-fplll stroke:#a170ff
ChatInput-3g4zy[Chat Input]
style ChatInput-3g4zy stroke:#a170ff
OpenAIModel-2zqbh -.- ChatOutput-xropn
linkStyle 0 stroke:#a170ff
OpenAIModel-ponsr -.- ChatOutput-p1d55
linkStyle 1 stroke:#a170ff
Prompt-vfbvv -.- TextOutput-buyib
linkStyle 2 stroke:#a170ff
Prompt-vfbvv -.- OpenAIModel-2zqbh
linkStyle 3 stroke:#a170ff
TextOutput-buyib -.- Prompt-2fe60
linkStyle 4 stroke:#a170ff
Prompt-2fe60 -.- TextOutput-fplll
linkStyle 5 stroke:#a170ff
Prompt-2fe60 -.- OpenAIModel-ponsr
linkStyle 6 stroke:#a170ff
ChatInput-3g4zy -.- Prompt-vfbvv
linkStyle 7 stroke:#a170ff
```
🧩 Overview
The workflow implements a prompt-chaining pipeline that transforms user-submitted text into a concise summary and, subsequently, into a set of multiple-choice questions based on that summary.
By automating both summarisation and question generation, it enables rapid content analysis and quiz creation without manual intervention.
⚙️ Main Features
- The user supplies text through a chat interface, which is captured by the Chat Input component.
- A Generar Resumen Prompt dynamically builds a summarisation prompt that is forwarded to an OpenAI model.
- The OpenAI model produces a brief summary of the original text.
- The summary is rendered by a Text Output component and also sent to a Chat Output for conversational display.
- A Creador de Preguntas Prompt receives the summary and constructs a prompt for question creation.
- Another OpenAI model generates multiple‑choice questions from the summary.
- The resulting questions are shown by a second Text Output component and a corresponding Chat Output.
🔄 Workflow Steps
| Component Name | Role in the Workflow | Key Inputs | Key Outputs |
|---|---|---|---|
| Chat Input | Captures the user’s initial message. | Text entered by the user. | Chat message with user content. |
| Generar Resumen Prompt | Builds a prompt for summarisation. | User text. | Prompt message for the summarisation model. |
| OpenAI | Generates a concise summary. | Prompt message. | Summary text. |
| Text Output (Summary) | Displays the summary as plain text. | Summary text. | Text message containing the summary. |
| Creador de Preguntas Prompt | Builds a prompt for question generation. | Summary text. | Prompt message for the question‑generation model. |
| OpenAI | Generates multiple‑choice questions. | Prompt message. | Text containing the questions. |
| Text Output (Questions) | Displays the generated questions as plain text. | Questions text. | Text message containing the questions. |
| Chat Output (Summary) | Shows the summary within the chat interface. | Summary text. | Chat message with the summary. |
| Chat Output (Questions) | Shows the questions within the chat interface. | Questions text. | Chat message with the questions. |
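The table above can be sketched as plain functions. Component names mirror the workflow; `run_model` is a hypothetical wrapper around the OpenAI chat call, and both prompt templates are illustrative stand-ins for the ones defined in the Generar Resumen and Creador de Preguntas components.

```python
SUMMARY_TEMPLATE = "Summarise the following text concisely:\n\n{text}"
QUESTION_TEMPLATE = (
    "Create multiple-choice questions based on this summary:\n\n{summary}"
)

def run_model(prompt: str) -> str:
    """Hypothetical wrapper around an OpenAI chat-completion request."""
    raise NotImplementedError

def run_workflow(user_text: str, model=run_model) -> dict:
    # Generar Resumen: build the summarisation prompt from the user text.
    summary_prompt = SUMMARY_TEMPLATE.format(text=user_text)
    summary = model(summary_prompt)        # first OpenAI component
    # Creador de Preguntas: build the question prompt from the summary.
    question_prompt = QUESTION_TEMPLATE.format(summary=summary)
    questions = model(question_prompt)     # second OpenAI component
    # Text Output / Chat Output components simply surface both results.
    return {"summary": summary, "questions": questions}
```

Editing the two template strings is the equivalent of modifying the Prompt components to change summarisation style or question complexity.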
🧠 Notes
- The workflow relies on OpenAI API access; valid credentials and the appropriate model name (e.g., gpt-4o-mini) must be configured.
- Prompt templates are defined in the Generar Resumen and Creador de Preguntas components and can be modified to change summarisation style or question complexity.
- Both Text Output and Chat Output components receive the same content; this duplication provides flexibility in how the results are presented to the user.
- The system assumes that each model call completes successfully; in practice, error handling should be added to manage API limits or timeouts.
- Because the prompts are statically defined, the workflow does not support dynamic prompt modification at runtime unless the prompt templates are edited before deployment.
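As the notes point out, each model call is assumed to succeed. A minimal retry wrapper with exponential backoff, illustrative only (the parameter names and the exception types to catch depend on the client library in use), might look like:

```python
import time

def call_with_retry(fn, prompt: str, attempts: int = 3,
                    backoff: float = 2.0) -> str:
    """Retry a model call with exponential backoff on failure.

    `fn` is whatever function performs the actual API request; in a real
    deployment you would catch the client library's specific rate-limit
    and timeout exceptions rather than bare Exception.
    """
    for attempt in range(attempts):
        try:
            return fn(prompt)
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(backoff * (2 ** attempt))  # e.g. 2s, 4s, 8s, ...
```

Wrapping both OpenAI components this way guards the chain against transient API limits or timeouts.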