[Agent] Define Prompt Template
The Agent define prompt template operation is essential for using specific prompt templates with your LLMs. It allows you to define and compose AI functions in plain text, enabling you to create natural language prompts, generate responses, extract information, invoke other prompts, or perform any other text-based task.
Input Fields
Module Configuration
This refers to the Amazon Bedrock Configuration set up in the Getting Started section.
General Operation Fields
- Template: Contains the prompt template for the operation.
- Instructions: Provides instructions for the LLM and outlines the goals of the task.
- Dataset: Specifies the dataset to be evaluated by the LLM using the provided template and instructions.
Additional Properties
- ModelName: The name of the LLM. You can select any model from the supported LLM Providers.
- Region: The AWS region.
- Temperature: A value between 0 and 1 that regulates the creativity of the LLM's responses. Use a lower temperature for more deterministic responses, and a higher temperature for more creative or varied responses to the same prompt.
- Top K: The number of most-likely candidates that the model considers for the next token.
- Top P: The percentage of most-likely candidates that the model considers for the next token.
- MaxToken: The maximum number of tokens to consume for output generation.
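The properties above correspond to inference parameters sent in the model request body. As a rough sketch (the field names below follow Anthropic Claude's text-completion schema on Amazon Bedrock, which the XML example's `anthropic.claude-instant-v1` model uses; other providers use different names, and this helper is illustrative, not part of the connector):

```python
import json

def build_claude_body(prompt, temperature=0.5, top_k=250, top_p=0.9, max_tokens=300):
    """Assemble an Anthropic Claude text-completion request body for Bedrock.

    Default values here are illustrative assumptions, not connector defaults.
    """
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "temperature": temperature,          # 0..1; lower = more deterministic
        "top_k": top_k,                      # number of candidate next tokens
        "top_p": top_p,                      # probability mass of candidates
        "max_tokens_to_sample": max_tokens,  # cap on generated output tokens
    })

body = build_claude_body("Classify this customer feedback.", temperature=0.2)
```

A body like this could then be passed to the Bedrock Runtime `InvokeModel` API along with the model ID and region.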
XML Configuration
Below is the XML configuration for this operation:
<mac-bedrock:agent-define-prompt-template
    doc:name="Agent define prompt template"
    doc:id="01796c3a-aec6-46ad-ac28-feb34bf258a2"
    config-ref="AWS"
    template="#[payload.template]"
    instructions="#[payload.instruction]"
    dataset="#[payload.dataset]"
    modelName="anthropic.claude-instant-v1"/>
Output Field
This operation responds with a JSON payload.
Example Output
{
"completion": " {\n \"type\": \"positive\",\n \"answer\": \"Thank you for the positive feedback about the training last week. We are glad to hear that you found the training to be amazing and that the trainer was friendly. Have a nice day!\"\n}",
"stop": "\n\nHuman:",
"stop_reason": "stop_sequence",
"type": "completion"
}
- completion – The resulting completion up to and excluding the stop sequences.
- stop_reason – The reason why the model stopped generating the response.
- stop – If you specify the stop_sequences inference parameter, stop contains the stop sequence that signalled the model to stop generating text. For example, "\n\nHuman:" in the example output above.
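Because the completion field in the example output itself contains JSON text (as instructed by the template), callers typically parse the response twice. A minimal sketch using the example payload above (the inner type/answer structure depends entirely on your template's instructions):

```python
import json

# Response shape taken from the Example Output above.
response = {
    "completion": " {\n \"type\": \"positive\",\n \"answer\": \"Thank you for the positive feedback about the training last week. We are glad to hear that you found the training to be amazing and that the trainer was friendly. Have a nice day!\"\n}",
    "stop": "\n\nHuman:",
    "stop_reason": "stop_sequence",
    "type": "completion",
}

# The completion string is itself JSON, so parse it a second time.
inner = json.loads(response["completion"])
print(inner["type"])  # → positive
```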
Example Use Cases
Prompt templates can be applied in various scenarios, such as:
- Customer Service Agents: Enhance customer service by providing case summaries, case classifications, summarizing large datasets, and more.
- Sales Operation Agents: Aid sales teams in writing sales emails, summarizing cases for specific accounts, assessing the probability of closing deals, and more.
- Marketing Agents: Assist marketing teams in generating product descriptions, creating newsletters, planning social media campaigns, and more.
[Agent] Chat
The Agent chat operation is essential for chatting with prepared Amazon Bedrock Agents. These agents are built with a defined purpose and have tools (Action Groups) defined to perform API calls that fulfill the user's needs.
Input Fields
Module Configuration
This refers to the Amazon Bedrock Configuration set up in the Getting Started section.
General Operation Fields
- AgentId: The agent Id (e.g. WFK8E3DFKD).
- AgentAliasId: The agent alias Id (e.g. TSTALIASID).
- Prompt: Contains the user's prompt for the agent.
Additional Properties
- ModelName: The name of the LLM. You can select any model from the supported LLM Providers.
- Region: The AWS region.
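Under the hood, these fields correspond to the parameters of the Bedrock Agent Runtime InvokeAgent API. As a hedged sketch (the helper name and the session-id default are illustrative assumptions, not connector internals):

```python
import uuid

def build_invoke_agent_params(agent_id, agent_alias_id, prompt, session_id=None):
    """Assemble the parameters the connector conceptually passes to the
    Bedrock Agent Runtime InvokeAgent API.

    A sessionId groups the turns of one conversation; here we generate a
    fresh one per call when none is supplied (an illustrative choice).
    """
    return {
        "agentId": agent_id,             # e.g. WFK8E3DFKD
        "agentAliasId": agent_alias_id,  # e.g. TSTALIASID
        "sessionId": session_id or str(uuid.uuid4()),
        "inputText": prompt,
    }

params = build_invoke_agent_params("WFK8E3DFKD", "TSTALIASID",
                                   "What is the status of my order?")
```

With boto3 this maps to `boto3.client("bedrock-agent-runtime").invoke_agent(**params)`, which streams back the agent's answer.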
XML Configuration
Below is the XML configuration for this operation:
<mac-bedrock:agent-chat
    doc:name="Agent chat"
    doc:id="04728422-15cd-4008-95de-adf18486e24a"
    config-ref="AWS"
    agentId="#[payload.id]"
    prompt="#[payload.question]"
    agentAliasId="#[payload.aliasId]"/>
Output Field
This operation responds with a JSON payload.
Example Output
This output has been converted to JSON.
- result – The result of the prompt query as a String.