MAC AWS Bedrock
Supported Operations
Chat

[Chat] Answer prompt

The Chat answer prompt operation sends a simple prompt request to the configured LLM. It takes a plain text prompt as input and returns a plain text answer.

Input Fields

Module Configuration

This refers to the MAC AWS Bedrock Configuration set up in the Getting Started section.
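
For context, the referenced global configuration might look like the sketch below. The attribute names here are assumptions for illustration only; refer to the Getting Started section for the actual configuration parameters.

<!-- Attribute names are assumptions; see Getting Started for the real parameters -->
<mac-bedrock:config 
  name="AWS" 
  awsAccessKeyId="${aws.accessKey}" 
  awsSecretAccessKey="${aws.secretKey}" 
  awsRegion="us-east-1"
/>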

General Operation Fields

  • Prompt: The plain text prompt sent to the model for this operation.

Additional Properties

  • ModelName: The name of the LLM. You can select any model from the supported LLM Providers.
  • Region: The AWS region where the model is invoked.
  • Temperature: Controls the randomness of the generated output. Lower values produce more focused, deterministic responses; higher values produce more varied ones.
  • Top K: The number of most-likely candidates that the model considers for the next token.
  • Top P: The percentage of most-likely candidates that the model considers for the next token.
  • MaxToken: The maximum number of tokens to generate in the output.

XML Configuration

Below is the XML configuration for this operation:

<mac-bedrock:chat-answer-prompt 
  doc:name="Chat answer prompt" 
  doc:id="01796c3a-aec6-46ad-ac28-feb34bf258a2" 
  config-ref="AWS" 
  prompt="#[payload.prompt]" 
  modelName="anthropic.claude-instant-v1"
/>
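
For orientation, here is a minimal sketch of a flow that exposes this operation over HTTP and sets the additional properties listed above. The HTTP listener configuration name and the tuning attribute names (temperature, topK, topP, maxTokenCount) are assumptions for illustration; verify the exact names against the operation's property panel in your connector version.

<flow name="chat-answer-prompt-flow">
  <http:listener doc:name="Listener" config-ref="HTTP_Listener_config" path="/chat"/>
  <!-- temperature, topK, topP and maxTokenCount are assumed attribute names -->
  <mac-bedrock:chat-answer-prompt 
    config-ref="AWS" 
    prompt="#[payload.prompt]" 
    modelName="anthropic.claude-instant-v1" 
    temperature="0.7" 
    topK="250" 
    topP="0.9" 
    maxTokenCount="512"
  />
</flow>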

Output Field

This operation responds with a JSON payload.

Example Output

This output has been converted to JSON.

{
    "inputTextTokenCount": 6,
    "results": [
        {
            "tokenCount": 69,
            "outputText": "?\nBern is the capital of Switzerland.\n\nBern is the capital of the Swiss Confederation. The municipality is located at the confluence of the Aare River into the river of the same name, and is the eighth-most populous city in Switzerland, with a population of around 134,200. ",
            "completionReason": "FINISH"
        }
    ]
}
  • inputTextTokenCount: The number of tokens used to process the input.
  • results:
    • tokenCount: The number of tokens used to generate the output.
    • outputText: The response from the LLM to the prompt sent.
    • completionReason: The reason the response finished being generated. The following reasons are possible:
      • FINISHED – The response was fully generated.
      • LENGTH – The response was truncated because of the response length you set.
      • STOP_CRITERIA_MET – The response was truncated because the stop criteria were reached.
      • RAG_QUERY_WHEN_RAG_DISABLED – The feature is disabled and cannot complete the query.
      • CONTENT_FILTERED – The contents were filtered or removed by the content filter applied.
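
As an illustration, the answer text and a rough token total can be extracted from this payload with a Transform Message step along the following lines; the field names are taken from the example output above.

<ee:transform doc:name="Extract answer">
  <ee:message>
    <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
  answer: payload.results[0].outputText,
  totalTokens: payload.inputTextTokenCount + payload.results[0].tokenCount
}]]></ee:set-payload>
  </ee:message>
</ee:transform>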

[Chat] Answer prompt memory

The Chat answer prompt memory operation answers a prompt while keeping the conversation history, which is useful for multi-user chat scenarios where each user's conversation must be remembered separately.

Input Fields

Module Configuration

This refers to the MAC AWS Bedrock Configuration set up in the Getting Started section.

General Operation Fields

  • Data: Contains the prompt for the operation.
  • Memory Name: The name of the conversation. For multi-user support, enter the unique user ID.
  • Memory Path: The path to the database file used to store the conversation history. You can also use a DataWeave expression for this field, e.g., mule.home ++ "/apps/" ++ app.name ++ "/chat-memory.db".
  • Max Messages: The maximum number of messages to remember for the conversation defined in Memory Name.

Additional Properties

  • ModelName: The name of the LLM. You can select any model from the supported LLM Providers.
  • Region: The AWS region where the model is invoked.
  • Temperature: Controls the randomness of the generated output. Lower values produce more focused, deterministic responses; higher values produce more varied ones.
  • Top K: The number of most-likely candidates that the model considers for the next token.
  • Top P: The percentage of most-likely candidates that the model considers for the next token.
  • MaxToken: The maximum number of tokens to generate in the output.

XML Configuration

Below is the XML configuration for this operation:

<mac-bedrock:chat-answer-prompt-memory 
  doc:name="Chat answer prompt memory" 
  doc:id="975945a8-b077-401f-a5d2-41d6b75393fb" 
  config-ref="AWS" 
  prompt="#[payload.prompt]" 
  memoryPath='#[mule.home ++ "/apps/" ++ app.name ++ "/chat-memory.db"]' 
  memoryName="Amir" 
  keepLastMessages="10" 
  modelName="amazon.titan-text-premier-v1:0"
/>
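
Since Memory Name scopes the conversation history, a multi-user chat can derive it from the incoming request. Below is a minimal sketch assuming a hypothetical userId query parameter and an HTTP listener configuration named HTTP_Listener_config; each user then gets a separate history in the same database file.

<flow name="chat-answer-prompt-memory-flow">
  <http:listener doc:name="Listener" config-ref="HTTP_Listener_config" path="/chat/memory"/>
  <!-- userId is a hypothetical query parameter used to key each user's history -->
  <mac-bedrock:chat-answer-prompt-memory 
    config-ref="AWS" 
    prompt="#[payload.prompt]" 
    memoryPath='#[mule.home ++ "/apps/" ++ app.name ++ "/chat-memory.db"]' 
    memoryName="#[attributes.queryParams.userId]" 
    keepLastMessages="10" 
    modelName="amazon.titan-text-premier-v1:0"
  />
</flow>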

Output Field

This operation responds with a JSON payload.

Example Output

This output has been converted to JSON.

{
    "inputTextTokenCount": 6,
    "results": [
        {
            "tokenCount": 69,
            "outputText": "?\nBern is the capital of Switzerland.\n\nBern is the capital of the Swiss Confederation. The municipality is located at the confluence of the Aare River into the river of the same name, and is the eighth-most populous city in Switzerland, with a population of around 134,200. ",
            "completionReason": "FINISH"
        }
    ]
}
  • inputTextTokenCount: The number of tokens used to process the input.
  • results:
    • tokenCount: The number of tokens used to generate the output.
    • outputText: The response from the LLM to the prompt sent.
    • completionReason: The reason the response finished being generated. The following reasons are possible:
      • FINISHED – The response was fully generated.
      • LENGTH – The response was truncated because of the response length you set.
      • STOP_CRITERIA_MET – The response was truncated because the stop criteria were reached.
      • RAG_QUERY_WHEN_RAG_DISABLED – The feature is disabled and cannot complete the query.
      • CONTENT_FILTERED – The contents were filtered or removed by the content filter applied.