Getting Started
Use the Connector in Your Project
Application Requirements
Java Development Kit (JDK)
Applications using the MuleSoft Inference Connector can run on Java Development Kit (JDK) 8, 11, or 17.
MuleSoft Runtime
The application using the MuleSoft Inference Connector requires Mule Runtime 4.3 or later.
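Each Mule application declares the minimum runtime it needs in its mule-artifact.json descriptor. A minimal sketch for this requirement (your project's descriptor will contain additional fields):

{
  "minMuleVersion": "4.3.0"
}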
Option 1: Maven Central Repository
Edit File pom.xml
The MAC Project connectors are updated frequently, so the version changes regularly. Make sure to replace {version} with the latest release published on Maven Central.
Copy and paste the following Maven dependency into your Mule application's pom.xml file.
<dependency>
    <groupId>io.github.mulesoft-ai-chain-project</groupId>
    <artifactId>mule4-inference-connector</artifactId>
    <version>{version}</version>
    <classifier>mule-plugin</classifier>
</dependency>
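If you would rather not repeat the version literal across your pom, a common Maven pattern is to keep it in a property; the property name inference.connector.version below is an illustrative choice, not something the project mandates.

<properties>
    <!-- Replace with the latest release from Maven Central -->
    <inference.connector.version>{version}</inference.connector.version>
</properties>

<dependency>
    <groupId>io.github.mulesoft-ai-chain-project</groupId>
    <artifactId>mule4-inference-connector</artifactId>
    <version>${inference.connector.version}</version>
    <classifier>mule-plugin</classifier>
</dependency>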
Option 2: Local Maven Repository
Build Requirements
Before you start, ensure you have the following prerequisites:
- Java Development Kit (JDK) 8, 11, or 17
- Apache Maven
- MuleSoft Anypoint Studio
Download the MuleSoft Inference Connector
Clone the MuleSoft Inference Connector repository from GitHub:
git clone https://github.com/MuleSoft-AI-Chain-Project/mule-inference-connector.git
cd mule-inference-connector
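Cloning gives you the current state of the default branch. If you want to build a specific release instead, list the repository's tags and check one out; {tag} below is a placeholder.

git tag             # list the available release tags
git checkout {tag}  # switch to the release you want to build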
Install the Connector with Java 8
mvn clean install -DskipTests -Dgpg.skip
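A successful build installs the connector into your local Maven repository. Assuming the locally built artifact uses the com.mulesoft.connectors groupId shown in the dependency later in this section, you can verify the installation by listing the artifact directory:

ls ~/.m2/repository/com/mulesoft/connectors/mule4-inference-connector/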
Install the Connector with Java 11 and Later (17, 21, 22, etc.)
Step 1
export MAVEN_OPTS="--add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.util.regex=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.xml/javax.xml.namespace=ALL-UNNAMED"
Step 2
For Java 11
mvn clean install -Dmaven.test.skip=true -DskipTests -Dgpg.skip -Djdeps.multiRelease=11
For Java 17
mvn clean install -Dmaven.test.skip=true -DskipTests -Dgpg.skip -Djdeps.multiRelease=17
For Java 21
mvn clean install -Dmaven.test.skip=true -DskipTests -Dgpg.skip -Djdeps.multiRelease=21
For Java 22
mvn clean install -Dmaven.test.skip=true -DskipTests -Dgpg.skip -Djdeps.multiRelease=22
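Maven compiles with whichever JDK JAVA_HOME points to, so make sure it matches the -Djdeps.multiRelease value you pass. A sketch for macOS/Linux; the JDK path is a placeholder for your installation:

# Point Maven at the JDK you intend to build with (path is a placeholder)
export JAVA_HOME=/path/to/jdk-17
export PATH="$JAVA_HOME/bin:$PATH"
mvn -version   # confirm Maven now reports the expected Java version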
Edit File pom.xml
The MAC Project connectors are updated frequently, so the version changes regularly. Make sure to replace {version} with the latest release from our GitHub repository.
Add the following dependency to your pom.xml file:
<dependency>
    <groupId>com.mulesoft.connectors</groupId>
    <artifactId>mule4-inference-connector</artifactId>
    <version>{version}</version>
    <classifier>mule-plugin</classifier>
</dependency>
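After adding the dependency, a quick package build confirms that Maven resolves the locally installed connector:

mvn clean package -DskipTests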
Connector Configuration
The MuleSoft Inference Connector is straightforward to configure. Go to Global Elements in your MuleSoft project and create a new configuration. Under Connector Configuration you will find the MuleSoft Inference configuration; select it and press OK.
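Anypoint Studio writes the resulting global element into your Mule configuration XML. The sketch below is illustrative only: the namespace, element, and attribute names are assumptions, and Studio generates the real ones for you when you save the configuration.

<!-- Illustrative sketch: element and attribute names are assumptions;
     Anypoint Studio generates the actual global element. -->
<inference:llm-config name="Inference_Config"
    inferenceType="GROQ"
    apiKey="${inference.api.key}"
    modelName="llama3-8b-8192"/>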
Inference Support
The MuleSoft Inference Connector supports the following inference offerings:
- GitHub Models
- Hugging Face
- Ollama
- Groq AI
- Portkey
- OpenRouter
- Cerebras
- NVIDIA
- Together.ai
- Fireworks
- DeepInfra
Select the Inference type of your choice from the Inference Type dropdown field.
API Key
Provide the API key for the inference provider. Also check the Inference Parameters tab for additional, provider-specific properties.
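Rather than hardcoding the key, you can keep it in a properties file and reference it through a placeholder; the file and property names below are your choice, not fixed by the connector.

<!-- src/main/resources/config.properties contains a line such as:
     inference.api.key=YOUR_KEY -->
<configuration-properties file="config.properties"/>
<!-- Reference the key as ${inference.api.key} in the connector configuration -->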
Model Name
After choosing the LLM provider, the Model Name dropdown lists the available and supported models.
Temperature, Top P, and Max Tokens
Temperature is a number between 0 and 2, with a default value of 0.7. It controls the randomness of the output: higher values produce more random output, while values closer to 0 make the output more deterministic. Top P specifies the cumulative probability threshold that candidate tokens must reach; for example, with Top P set to 0.9 the model samples only from the smallest set of tokens whose combined probability is at least 0.9. Max Tokens defines the maximum number of tokens the LLM may generate in a response, which helps control usage and costs when engaging with LLMs.
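These parameters typically sit on the same global configuration element as the API key and model name. As before, the attribute names below are assumptions mirroring the parameter names above; check the configuration Studio generates.

<!-- Hypothetical attribute names mirroring the parameters above -->
<inference:llm-config name="Inference_Config"
    inferenceType="GROQ"
    apiKey="${inference.api.key}"
    modelName="llama3-8b-8192"
    temperature="0.7"
    topP="1.0"
    maxTokens="500"/>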