MuleSoft Inference Connector Overview
The MuleSoft Inference Connector provides access to inference offerings for large language models (LLMs), such as Groq, Hugging Face, and GitHub Models.
What is the MuleSoft Inference Connector?
The MuleSoft Inference Connector is designed to help developers easily build and manage AI-driven agents within the MuleSoft Anypoint Platform. It provides operations that interface directly with the APIs of the supported inference providers.

Key Features
The MuleSoft Inference Connector simplifies AI integration into MuleSoft applications with:
- Seamless Interaction with Hosted LLMs: Integrate and use large language models (LLMs) for text generation, analysis, and other complex natural-language tasks.
- Embeddings and Search: Combine with MAC Vectors to handle tasks like text similarity, document search, and clustering, all within MuleSoft applications.
- Optimized Performance: Designed for high efficiency and performance in enterprise-grade MuleSoft applications, ensuring smooth handling of AI operations.
- Comprehensive AI Tools and Services: Access AI-driven features such as Retrieval-Augmented Generation (RAG) for document retrieval and dynamic tool integration (function calling) for delegating tasks to external functions.
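The chat operations these features build on map to provider chat-completion APIs. As a minimal sketch, this is the OpenAI-style request body that most of the providers listed below accept; the model name is a placeholder, not a connector default:

```python
# Sketch of an OpenAI-style chat-completion request body, the wire
# format most listed providers accept. "model" is a placeholder;
# swap in a model your chosen provider actually serves.
def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
        "max_tokens": 500,
    }

payload = build_chat_request("Summarize this order confirmation email.")
```

The connector's chat operations assemble an equivalent payload from the operation's configuration fields, so you normally never build this JSON by hand.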
Supported Inference Providers
The MuleSoft Inference Connector supports the following inference providers:
- AI21Labs
- Anthropic
- Azure AI Foundry
- Azure OpenAI
- Cerebras
- Cohere
- Databricks
- DeepInfra
- DeepSeek
- Docker Models
- Fireworks
- GitHub Models
- Google Vertex AI
- Groq AI
- Hugging Face
- IBM Watson
- LLM API
- Mistral
- NVIDIA
- Ollama
- OpenAI
- OpenAI Compatible Endpoints
- OpenRouter
- Perplexity
- Portkey
- Together.ai
- XAI
- Xinference
- ZHIPU AI
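Many of these providers expose OpenAI-compatible endpoints, which is what lets a single connector target so many of them. A sketch, using base URLs as commonly documented by the providers themselves (verify each against the provider's own documentation before use):

```python
# Commonly documented OpenAI-compatible base URLs (verify with each
# provider's documentation; they may change).
PROVIDER_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "groq": "https://api.groq.com/openai/v1",
    "ollama": "http://localhost:11434/v1",  # local Ollama server default
}

def chat_endpoint(provider: str) -> str:
    """Return the chat-completions URL for a given provider key."""
    return f"{PROVIDER_BASE_URLS[provider]}/chat/completions"
```

This is also the mechanism behind the "OpenAI Compatible Endpoints" entry: any server that speaks this wire format can be plugged in by supplying its base URL.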
Supported Moderation Providers
- Mistral
- OpenAI
Supported Vision Model Providers
- Anthropic
- GitHub Models
- Google Vertex AI
- Groq AI
- Hugging Face
- Mistral
- OpenAI
- OpenAI Compatible Endpoints
- OpenRouter
- Portkey
- XAI
Supported Image Model Providers
- Hugging Face
- OpenAI
- Stability AI
- XAI
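For the image model providers, the connector's Image Generation operation corresponds to an image-generation request such as the following OpenAI-style sketch; the model name is a placeholder and field names follow the OpenAI Images API format:

```python
# Sketch of an OpenAI-style image-generation request body.
# "model" is a placeholder; use a model your provider serves.
def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    return {
        "model": "dall-e-3",  # placeholder model name
        "prompt": prompt,
        "n": 1,               # number of images to generate
        "size": size,
    }
```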
HTTPS Security
The MuleSoft Inference Connector supports TLS for Mule apps.
Requirements
- The connector requires Java 17.
- The connector must be compiled with Java 17.
- Mule runtimes running on Java 17 are supported.
Table of Supported Operations by Inference Provider
Not all operations are supported by every inference provider; the table below gives a detailed view.
Name | GitHub Models | Groq AI | Hugging Face | Ollama | OpenAI | Azure OpenAI | XAI | OpenRouter | Portkey | Perplexity | Cerebras | NVIDIA | Together.ai | Fireworks | DeepInfra | Mistral | Anthropic | AI21Labs | Cohere | Xinference | Azure AI Model Inference | Google Vertex AI (Gemini) | GPT4ALL | LM Studio | Docker Models | DeepSeek | ZHIPU AI | Databricks | IBM Watson | LLM API |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Chat answer prompt | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Chat completion | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Agent define prompt template | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Tools native template | *✅ | *✅ | *✅ | *✅ | *✅ | ✅ | *✅ | *✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | *✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Read Image by (URL or Base64) | ✅ | ✅ | ✅ | Base64 only | ✅ | 🔜 | ✅ | ✅ | ✅ | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | ✅ | ✅ | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 | 🔜 |
Image Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
Toxicity detection by text | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
\* Depends on the model used.
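The "Tools native template" row refers to function calling. As a sketch, providers that support it expect an OpenAI-style tool definition like the one below; the function name and parameters here are illustrative, not part of the connector:

```python
# Sketch of an OpenAI-style function-calling tool definition.
# The function name and its parameters are hypothetical examples.
ORDER_LOOKUP_TOOL = {
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical function name
        "description": "Look up the status of a customer order by ID.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "Order identifier",
                },
            },
            "required": ["order_id"],
        },
    },
}

def build_tool_request(prompt: str) -> dict:
    """Chat request letting the model decide whether to call the tool."""
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "tools": [ORDER_LOOKUP_TOOL],
        "tool_choice": "auto",
    }
```

The asterisked cells in the table reflect that even on a supporting provider, only certain models honor the `tools` field.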
Additional Integrations
The MuleSoft Inference Connector integrates seamlessly with other MAC Project AI connectors and the MuleSoft ecosystem, offering enhanced functionality.