Reasoning Prompts
This flow generates a final answer together with its justification. It first creates a logical justification and then integrates it into the answer before sending the response.
```mermaid
%%{init: {'theme': 'mc', 'layout': 'elk'}}%%
graph TD
    Prompt-vq8my[Prompt to generate the justification]
    style Prompt-vq8my stroke:#a170ff
    OpenAIModel-d3dpa[Generates the justification]
    style OpenAIModel-d3dpa stroke:#a170ff
    Prompt-tma9i[Prompt]
    style Prompt-tma9i stroke:#a170ff
    OpenAIModel-1tbks[Generates the answer]
    style OpenAIModel-1tbks stroke:#a170ff
    ChatInput-6u0w8[Chat Input]
    style ChatInput-6u0w8 stroke:#a170ff
    CombineText-hw2cy[Combines the justification with the answer]
    style CombineText-hw2cy stroke:#a170ff
    ChatOutput-k57y4[Chat Output]
    style ChatOutput-k57y4 stroke:#a170ff
    Prompt-vq8my -.- OpenAIModel-d3dpa
    linkStyle 0 stroke:#a170ff
    OpenAIModel-d3dpa -.- Prompt-tma9i
    linkStyle 1 stroke:#a170ff
    Prompt-tma9i -.- OpenAIModel-1tbks
    linkStyle 2 stroke:#a170ff
    ChatInput-6u0w8 -.- Prompt-vq8my
    linkStyle 3 stroke:#a170ff
    ChatInput-6u0w8 -.- Prompt-tma9i
    linkStyle 4 stroke:#a170ff
    OpenAIModel-1tbks -.- CombineText-hw2cy
    linkStyle 5 stroke:#a170ff
    CombineText-hw2cy -.- ChatOutput-k57y4
    linkStyle 6 stroke:#a170ff
    OpenAIModel-d3dpa -.- CombineText-hw2cy
    linkStyle 7 stroke:#a170ff
```
🧩 Overview
This workflow automatically produces a fully‑justified answer to a user’s question.
It first generates a rationale, then integrates that rationale into a final response before delivering the output back to the chat interface.
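Outside the visual editor, the chained calls in the diagram above can be approximated in a few lines of Python. This is a minimal sketch, not the flow's actual implementation: the prompt wording and the `ask` and `generate_justified_answer` helpers are assumptions; only the two-call structure and the `gpt-4o-mini` model name come from this page.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # default engine named in the notes below

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def generate_justified_answer(question: str) -> str:
    # Step 1: the first model produces a rationale for the question.
    rationale = ask(f"Provide a concise logical justification for how to answer:\n{question}")
    # Step 2: the second model answers, conditioned on that rationale.
    answer = ask(f"Question: {question}\nRationale: {rationale}\nGive the final answer.")
    # Step 3: Combine Text joins both parts with a blank line.
    return f"{rationale}\n\n{answer}"

print(generate_justified_answer("Why is the sky blue?"))
```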
⚙️ Main Features
- Generates a logical justification for each user query using an LLM.
- Combines the justification with the final answer into a single coherent message.
- Uses two separate OpenAI models: one for rationale creation and one for answer generation.
- Supports dynamic prompt templates that inject user questions and generated rationales (see the sketch after this list).
- Outputs the final answer directly to the chat session for immediate display.
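Concretely, the dynamic-template feature amounts to string substitution at runtime. The sketch below assumes Python `str.format` placeholders; the template wording is invented for illustration, while the `{question}` and `{rationale}` placeholder names come from the notes at the end of this page.

```python
# Hypothetical templates mirroring the two Prompt components.
RATIONALE_TEMPLATE = "Give a concise logical justification for answering:\n{question}"
ANSWER_TEMPLATE = (
    "Question: {question}\n"
    "Rationale: {rationale}\n"
    "Using the rationale above, write the final answer."
)

# At runtime each placeholder is filled before the prompt reaches the model.
rationale_prompt = RATIONALE_TEMPLATE.format(question="Why is the sky blue?")
answer_prompt = ANSWER_TEMPLATE.format(
    question="Why is the sky blue?",
    rationale="Sunlight scatters off air molecules; shorter wavelengths scatter more.",
)
```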
🔄 Workflow Steps
| Component Name | Role in the Workflow | Key Inputs | Key Outputs |
|---|---|---|---|
| Chat Input | Receives the user’s question and session metadata. | User message, conversation ID, session ID | Structured chat message |
| Prompt – Rationale | Constructs a prompt asking the model to produce a rationale. | Question from chat message | Prompt text |
| OpenAI Model – Rationale | Generates a rationale text from the prompt. | Prompt text | Rationale text |
| Prompt – Final Answer | Builds a prompt that includes the question and the generated rationale to ask for the final answer. | Question, Rationale text | Prompt text |
| OpenAI Model – Final Answer | Produces the final answer text. | Prompt text | Answer text |
| Combine Text | Concatenates the rationale and answer using a newline delimiter. | Rationale text, Answer text | Combined justification‑answer text |
| Chat Output | Sends the combined text back to the chat session as the bot’s reply. | Combined text, conversation ID, session ID | Chat message displayed to the user |
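The last two rows of the table can be pictured as plain functions passing data along. This is a sketch under assumptions: the `ChatMessage` type and the `combine_text` and `reply` helpers are hypothetical; only the field names (session and conversation identifiers) and the `\n\n` delimiter come from this page.

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    text: str
    session_id: str
    conversation_id: str

def combine_text(rationale: str, answer: str, delimiter: str = "\n\n") -> str:
    """Mirror of the Combine Text component: join the two parts."""
    return delimiter.join([rationale, answer])

def reply(incoming: ChatMessage, rationale: str, answer: str) -> ChatMessage:
    # Chat Output echoes the session identifiers so the client can
    # route the reply to the right conversation.
    return ChatMessage(
        text=combine_text(rationale, answer),
        session_id=incoming.session_id,
        conversation_id=incoming.conversation_id,
    )
```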
🧠 Notes
- The first OpenAI model is prompted to produce a concise rationale; the second focuses on crafting the final answer.
- Both models default to the same `gpt-4o-mini` engine, but either one can be swapped out by changing its model name field.
- Prompt templates are dynamic; placeholders (`{question}`, `{rationale}`) are replaced automatically at runtime.
- The delimiter for combining text is two newline characters (`\n\n`) by default, ensuring clear separation between rationale and answer.
- Session and conversation identifiers maintain state across multiple turns, allowing context‑aware replies.
- Error handling for model failures is not explicitly modeled; in production, the workflow should capture and surface failures to the user or a monitoring system.
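The production hardening mentioned in the last note could look like the wrapper below: a minimal sketch assuming the OpenAI Python SDK. The `ask_with_retry` helper and its backoff policy are illustrative, not part of the flow.

```python
import time
from openai import OpenAI, OpenAIError

client = OpenAI()

def ask_with_retry(prompt: str, model: str = "gpt-4o-mini", attempts: int = 3) -> str:
    """Retry transient model failures, then surface the error."""
    for attempt in range(1, attempts + 1):
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except OpenAIError:
            if attempt == attempts:
                # Re-raise so the failure reaches the user or a monitoring system.
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```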