Getting Started

System Requirements

Before you start, ensure you have the following prerequisites:

  • Java Development Kit (JDK) 8 or later (the build steps below cover JDK 8, 11, 17, 21, and 22)
  • Apache Maven
  • MuleSoft Anypoint Studio
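To confirm the prerequisites are installed and available on your PATH, you can run:

java -version
mvn -version

Both commands print the installed version; use the build command below that matches your JDK version.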

Download the MAC Inference Connector

Clone the MAC Inference Connector repository from GitHub:

git clone https://github.com/MuleSoft-AI-Chain-Project/mac-inference.git
cd mac-inference

Install the Connector with Java 8

mvn clean install -DskipTests

Install the Connector with Java 11, 17, 21, or 22

Step 1: Set the required JVM options for Maven

export MAVEN_OPTS="--add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.util.regex=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.xml/javax.xml.namespace=ALL-UNNAMED"
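Before running the build, you can verify that the options are set in your current shell:

echo $MAVEN_OPTS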

Step 2: Run the build command that matches your Java version

 
For Java 11
mvn clean install -Dmaven.test.skip=true -DskipTests -Djdeps.multiRelease=11

For Java 17
mvn clean install -Dmaven.test.skip=true -DskipTests -Djdeps.multiRelease=17

For Java 21
mvn clean install -Dmaven.test.skip=true -DskipTests -Djdeps.multiRelease=21

For Java 22
mvn clean install -Dmaven.test.skip=true -DskipTests -Djdeps.multiRelease=22

Add the Connector to Your Project

Add the following dependency to your pom.xml file:

pom.xml
<dependency>
    <groupId>com.mulesoft.connectors</groupId>
    <artifactId>mac-inference-chain</artifactId>
    <version>{version}</version>
    <classifier>mule-plugin</classifier>
</dependency>
💡 The MAC Project connectors are updated frequently, and the version changes regularly. Make sure to replace {version} with the latest release from our GitHub repository.
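If you have already cloned the repository, one way to check the most recent release tag locally (assuming releases are tagged in Git) is:

git -C mac-inference fetch --tags
git -C mac-inference tag --sort=-v:refname | head -n 1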

Configuration

The MAC Inference connector is straightforward to configure. In your MuleSoft project, go to Global Elements and create a new configuration. Under Connector Configuration you will find the MAC Inference configuration; select it and press OK.


Inference Support

MAC Inference supports the following inference offerings:

  • GitHub Models
  • Groq
  • Hugging Face
  • Ollama
  • Open Router
  • Portkey

Select the Inference type of your choice from the Inference Type dropdown field.


API Key

Provide the API key for the selected inference provider. Also check the Inference Parameters tab for additional properties specific to that provider.

Model Name

After you choose the LLM provider, the available and supported models are listed in the Model Name dropdown.

Temperature, Top P and Max Token

Temperature is a number between 0 and 2, with a default value of 0.7. It controls the randomness of the output: higher values produce more varied output, while values closer to 0 make the output more deterministic. Top P specifies the cumulative probability threshold that candidate tokens must reach. Max Token defines the maximum number of tokens the LLM can use when generating a response, which helps control usage and costs when working with LLMs.
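For illustration, the resulting global element might look roughly like the sketch below in the configuration XML. The namespace prefix (ms-inference), the element name, and the attribute names are assumptions based on the fields described above, not the connector's exact schema; Anypoint Studio generates the correct XML when you create the configuration through Global Elements.

<!-- Hypothetical sketch only: element and attribute names are illustrative, not the
     connector's actual schema. ${inference.apiKey} is an assumed configuration
     property holding your provider API key; the model name is just an example. -->
<ms-inference:inference-config name="MAC_Inference_Config"
    inferenceType="GROQ"
    apiKey="${inference.apiKey}"
    modelName="llama3-8b-8192"
    temperature="0.7"
    topP="0.95"
    maxTokens="500"/>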