AI Workflow: Article Summarization and Question Generation
Overview
This workflow takes a text input, summarizes it with a large language model (LLM), and then generates multiple-choice questions based on that summary. Both the summary and the generated questions are presented as plain text output and in a conversational chat format.
Components Overview
The workflow utilizes the following components:
- Text Input: Provides the initial text input for processing.
- Prompt: Creates prompts for the LLMs.
- OpenAI Model: An LLM that generates text based on provided prompts.
- Text Output: Displays text output in a formatted way.
- Chat Output: Presents output in a conversational chat interface.
Detailed Component Descriptions
Text Input
- Name: Text Input
- Description: Accepts a text string as input.
- Input Parameters: None (the text is entered directly in the component rather than received from another component)
- Output Parameters: Text (Message) – The input text string.
- Key Configurations/Conditions: None
Prompt (x2)
- Name: Prompt
- Description: Constructs prompts for the LLMs by dynamically inserting input text into a predefined template.
- Input Parameters: Document (Message/Text) – Text to be included in the prompt. For the second Prompt, Summary (Message/Text) is used instead.
- Output Parameters: Prompt Message (Message) - The constructed prompt string.
- Key Configurations/Conditions: The prompt templates are predefined and contain placeholders for dynamic input.
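The workflow does not expose the literal template text, so the wording below is only illustrative. As a minimal sketch, the two Prompt components behave like string templates with a single placeholder each (document for the first, summary for the second):

```python
# Illustrative templates only; the actual prompt wording in the workflow is not shown.
SUMMARY_TEMPLATE = (
    "Summarize the following article in a few concise paragraphs:\n\n"
    "{document}"
)
QUESTION_TEMPLATE = (
    "Write multiple-choice questions (four options each, with the correct "
    "answer marked) based on this summary:\n\n"
    "{summary}"
)

def build_summary_prompt(document: str) -> str:
    # First Prompt component: insert the article text into the template.
    return SUMMARY_TEMPLATE.format(document=document)

def build_question_prompt(summary: str) -> str:
    # Second Prompt component: insert the generated summary into the template.
    return QUESTION_TEMPLATE.format(summary=summary)
```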
OpenAI Model (x2)
- Name: OpenAI Model
- Description: An LLM that processes prompts and generates text outputs.
- Input Parameters: Input (Message) – The prompt string.
- Output Parameters: Text (Message) – The generated text.
- Key Configurations/Conditions: model_name specifies the LLM used (e.g., gpt-4o-mini). Other parameters such as max_tokens and temperature can be adjusted to fine-tune the model's behavior.
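A minimal sketch of what each OpenAI Model component does, assuming it wraps OpenAI's Chat Completions API via the official openai Python package; the parameter values shown for model_name, max_tokens, and temperature are illustrative defaults, not the workflow's actual settings:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_text(prompt: str,
                  model_name: str = "gpt-4o-mini",
                  max_tokens: int = 512,
                  temperature: float = 0.3) -> str:
    """Send a prompt to the model and return the generated text."""
    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
        temperature=temperature,
    )
    return response.choices[0].message.content
```

A lower temperature keeps the summary closer to the source text, while a larger max_tokens budget leaves room for longer question sets.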
Text Output (x2)
- Name: Text Output
- Description: Displays the generated text to the user.
- Input Parameters: Text (Message) – The text to be displayed.
- Output Parameters: None.
- Key Configurations/Conditions: None
Chat Output (x2)
- Name: Chat Output
- Description: Displays the generated text in a chat-like format.
- Input Parameters: Text (Message) – The text to be displayed as a chat message. Other parameters include sender_name to identify the sender of the message (e.g., "Summarizer" or "Question Generator").
- Output Parameters: None.
- Key Configurations/Conditions: sender_name is used to attribute the message to either the summarizer or the question generator.
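As a rough sketch of how sender attribution might be modeled (ChatMessage is a hypothetical helper for illustration, not the component's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    text: str
    sender_name: str  # e.g. "Summarizer" or "Question Generator"

def display_chat(message: ChatMessage) -> None:
    # Render the message with its sender label, as the chat interface would.
    print(f"[{message.sender_name}] {message.text}")
```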
Workflow Execution
- The workflow begins with Text Input, providing the initial article text.
- This text is passed to the first Prompt component, which formats it for summarization.
- The formatted prompt is sent to the first OpenAI Model, generating a summary.
- The summary is then displayed via Text Output and Chat Output.
- The summary is also passed to the second Prompt, creating a prompt to generate multiple-choice questions.
- This second prompt goes to the second OpenAI Model for question generation.
- Finally, the generated questions are displayed using Text Output and Chat Output.
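Putting these steps together, a plain-Python sketch of the end-to-end flow might look like the following. It reuses the illustrative helpers sketched in the component descriptions above; the real workflow wires the components together visually rather than in code.

```python
def run_workflow(article_text: str) -> None:
    # Steps 1-3: build the summarization prompt and generate the summary.
    summary_prompt = build_summary_prompt(article_text)
    summary = generate_text(summary_prompt)

    # Step 4: show the summary as text and as a chat message.
    print(summary)
    display_chat(ChatMessage(text=summary, sender_name="Summarizer"))

    # Steps 5-6: build the question prompt from the summary and generate questions.
    question_prompt = build_question_prompt(summary)
    questions = generate_text(question_prompt)

    # Step 7: show the questions as text and as a chat message.
    print(questions)
    display_chat(ChatMessage(text=questions, sender_name="Question Generator"))
```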
Additional Notes
The workflow relies on the availability of an OpenAI API key and a working internet connection. The performance and quality of the generated summary and questions depend heavily on the selected LLM, its parameters, and the quality of the input text. Adjusting parameters like max_tokens and temperature might be necessary to optimize the results.
%%{init: {'theme': 'mc','layout': 'elk'}}%%
graph TD
TextInput-tSi6t[Text Input]
TextInput-tSi6t@{ shape: rounded}
style TextInput-tSi6t stroke:#a170ff
Prompt-dKU6x[Prompt]
Prompt-dKU6x@{ shape: rounded}
style Prompt-dKU6x stroke:#a170ff
OpenAIModel-Fv7Az[OpenAI]
OpenAIModel-Fv7Az@{ shape: rounded}
style OpenAIModel-Fv7Az stroke:#a170ff
TextOutput-IulHd[Text Output]
TextOutput-IulHd@{ shape: rounded}
style TextOutput-IulHd stroke:#a170ff
ChatOutput-H5PCr[Chat Output]
ChatOutput-H5PCr@{ shape: rounded}
style ChatOutput-H5PCr stroke:#a170ff
Prompt-cXKsy[Prompt]
Prompt-cXKsy@{ shape: rounded}
style Prompt-cXKsy stroke:#a170ff
OpenAIModel-6htE5[OpenAI]
OpenAIModel-6htE5@{ shape: rounded}
style OpenAIModel-6htE5 stroke:#a170ff
TextOutput-9XsZJ[Text Output]
TextOutput-9XsZJ@{ shape: rounded}
style TextOutput-9XsZJ stroke:#a170ff
ChatOutput-JiGBJ[Chat Output]
ChatOutput-JiGBJ@{ shape: rounded}
style ChatOutput-JiGBJ stroke:#a170ff
TextInput-tSi6t -.- Prompt-dKU6x
linkStyle 0 stroke:#a170ff
Prompt-dKU6x -.- OpenAIModel-Fv7Az
linkStyle 1 stroke:#a170ff
Prompt-dKU6x -.- TextOutput-IulHd
linkStyle 2 stroke:#a170ff
OpenAIModel-Fv7Az -.- ChatOutput-H5PCr
linkStyle 3 stroke:#a170ff
OpenAIModel-Fv7Az -.- Prompt-cXKsy
linkStyle 4 stroke:#a170ff
Prompt-cXKsy -.- OpenAIModel-6htE5
linkStyle 5 stroke:#a170ff
Prompt-cXKsy -.- TextOutput-9XsZJ
linkStyle 6 stroke:#a170ff
OpenAIModel-6htE5 -.- ChatOutput-JiGBJ
linkStyle 7 stroke:#a170ff