Reasoning Prompts Workflow Documentation
Overview
This workflow uses a two-stage reasoning process to answer questions. First, a rationale for answering the question is generated. Then, using this rationale and the original question, a final answer is produced. The workflow is designed to be flexible and utilizes large language models (LLMs) for text generation.
Components Overview
The workflow consists of the following components:
- Chat Input: Provides user input as a question.
- Prompt (Stage 1): Creates a prompt template requesting a rationale for a given question.
- OpenAI Model (Stage 1): Generates a rationale for the input question using an OpenAI LLM.
- Prompt (Stage 2): Creates a prompt template for generating a final answer, given a question and its rationale.
- OpenAI Model (Stage 2): Generates the final answer based on the question and the rationale from Stage 1, using an OpenAI LLM.
- Combine Text: Combines the output from the two OpenAI models for final display.
- Chat Output: Displays the final answer.
Detailed Component Descriptions
Chat Input
- Description: Receives user input, typically a question, as a text message.
- Input Parameters: User-provided text (question).
- Output Parameters: A message object containing the user's input question.
- Key Configurations/Conditions: None.
Prompt (Stage 1)
- Description: Constructs a prompt template to guide the first OpenAI model towards generating a rationale.
- Input Parameters: The question from the Chat Input.
- Output Parameters: A prompt message containing the question and a template for generating the rationale.
- Key Configurations/Conditions: The prompt template includes a placeholder {question} for dynamic insertion of the user's question.
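As a minimal sketch of how the Stage 1 template is filled: only the {question} placeholder is specified by the workflow, so the surrounding wording and function name below are illustrative assumptions.

```python
# Hypothetical Stage 1 template; only the {question} placeholder is
# given by the workflow -- the surrounding wording is illustrative.
STAGE1_TEMPLATE = (
    "Before answering, write out a step-by-step rationale for the "
    "following question.\n\nQuestion: {question}\n\nRationale:"
)

def build_stage1_prompt(question: str) -> str:
    """Insert the user's question into the Stage 1 template."""
    return STAGE1_TEMPLATE.format(question=question)

print(build_stage1_prompt("Why is the sky blue?"))
```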
OpenAI Model (Stage 1)
- Description: An OpenAI Large Language Model that generates a rationale for the question provided in the prompt.
- Input Parameters: The prompt message from Prompt (Stage 1). Model parameters (e.g., temperature, max tokens, model name).
- Output Parameters: A text message containing the generated rationale.
- Key Configurations/Conditions: Requires an OpenAI API key. The model_name parameter specifies the LLM to be used; other parameters, such as temperature and max tokens, control model behavior.
Prompt (Stage 2)
- Description: Creates a prompt template to guide the second OpenAI model to generate a final answer using the question and the generated rationale.
- Input Parameters: The question from the Chat Input and the rationale from OpenAI Model (Stage 1).
- Output Parameters: A prompt message containing the question, rationale, and a template for generating the final answer.
- Key Configurations/Conditions: The prompt template includes placeholders {question} and {rationale} for dynamic insertion of the respective inputs.
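The Stage 2 template works the same way but substitutes two values. As before, only the {question} and {rationale} placeholders come from the workflow; the template text itself is an assumption.

```python
# Hypothetical Stage 2 template; only the {question} and {rationale}
# placeholders are specified by the workflow.
STAGE2_TEMPLATE = (
    "Question: {question}\n\n"
    "Rationale: {rationale}\n\n"
    "Using the rationale above, give a concise final answer:"
)

def build_stage2_prompt(question: str, rationale: str) -> str:
    """Insert the question and the Stage 1 rationale into the template."""
    return STAGE2_TEMPLATE.format(question=question, rationale=rationale)
```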
OpenAI Model (Stage 2)
- Description: An OpenAI LLM that generates the final answer based on the question and the rationale.
- Input Parameters: The prompt message from Prompt (Stage 2). Model parameters (e.g., temperature, max tokens, model name).
- Output Parameters: A text message containing the generated final answer.
- Key Configurations/Conditions: Requires an OpenAI API key. The model_name parameter specifies the LLM to be used; other parameters, such as temperature and max tokens, control model behavior.
Combine Text
- Description: Combines the rationale and final answer texts into a single output for display.
- Input Parameters: The rationale from OpenAI Model (Stage 1) and the final answer from OpenAI Model (Stage 2). A delimiter string.
- Output Parameters: A single text message containing the concatenated rationale and answer.
- Key Configurations/Conditions: The delimiter parameter controls how the two texts are joined.
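The combination step amounts to a delimited string join. A sketch (the default delimiter value here is an assumption, not taken from the workflow):

```python
def combine_text(rationale: str, answer: str,
                 delimiter: str = "\n\n---\n\n") -> str:
    """Join the rationale and final answer with a configurable delimiter."""
    return delimiter.join([rationale, answer])

combined = combine_text("Step-by-step reasoning...", "Final answer.")
print(combined)
```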
Chat Output
- Description: Displays the final combined text to the user.
- Input Parameters: The combined text from Combine Text.
- Output Parameters: None.
- Key Configurations/Conditions: None.
Workflow Execution
- The user provides a question via the Chat Input.
- The question is passed to the Prompt (Stage 1), which generates a prompt requesting a rationale.
- OpenAI Model (Stage 1) uses the prompt to produce a rationale.
- The original question and the generated rationale are passed to Prompt (Stage 2) to create a prompt for the final answer.
- OpenAI Model (Stage 2) produces the final answer.
- Combine Text merges the rationale and the final answer.
- Chat Output displays the combined output to the user.
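The steps above can be sketched end to end as follows. The llm_call stub stands in for the two OpenAI Model components (a real implementation would call the OpenAI API with the configured model_name, temperature, and max tokens); all template text and function names are illustrative assumptions.

```python
from typing import Callable

def run_workflow(question: str, llm_call: Callable[[str], str],
                 delimiter: str = "\n\n---\n\n") -> str:
    # Stage 1: prompt the model for a rationale.
    stage1_prompt = f"Provide a rationale for answering:\n{question}"
    rationale = llm_call(stage1_prompt)
    # Stage 2: prompt for the final answer, conditioned on the rationale.
    stage2_prompt = (f"Question: {question}\nRationale: {rationale}\n"
                     "Give the final answer:")
    answer = llm_call(stage2_prompt)
    # Combine Text: join rationale and answer for display.
    return delimiter.join([rationale, answer])

# Stub LLM for demonstration; swap in a real OpenAI client call here.
echo_llm = lambda prompt: f"[model output for: {prompt[:30]}...]"
print(run_workflow("Why is the sky blue?", echo_llm))
```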
Additional Notes
Successful execution requires a valid OpenAI API key and appropriate model parameters. The model name chosen in the OpenAI Model components influences the quality and style of the generated text, and experimentation with parameters such as temperature may be necessary to achieve optimal results. Error handling (not explicitly shown in the JSON) should be implemented to gracefully manage API errors or unexpected input.