Using Hosted Models and External Services
deepset Cloud is integrated with multiple model and service providers. Learn how to use them in your pipelines.
Need a custom connection?
To use a third-party provider or service that’s not listed in deepset Cloud Connections, you can create a custom component to connect to it. For details, see Custom Components.
Accessing Connections
All deepset Cloud connections are listed on the Connections page, which you can access by clicking your credentials and choosing Connections.
Supported Providers
Currently, on the Connections page you can set up connections with the following model providers:
- Amazon Bedrock and SageMaker
- Azure OpenAI
- Cohere
- Google AI
- Hugging Face (only needed for private models hosted there)
- Nvidia
- OpenAI
- SearchAPI
- Voyage AI
Supported connections to data processing services and data sources are:
- Azure Document Intelligence
- DeepL
- Snowflake
- unstructured.io
Adding Connections to Your Pipeline
1. Add the provider's API key on the Connections page.
2. Add a component that supports the integration to your pipeline.
Find the integration you want to use and follow the detailed steps below:
- Use Amazon Bedrock and SageMaker Models
- Use Azure Document Intelligence
- Use Azure OpenAI Models
- Use Cohere Models
- Use DeepL Translation Services
- Use Google Gemini Models
- Use Google Search API
- Use Hugging Face Models
- Use NVIDIA Models
- Use OpenAI Models
- Use Snowflake Database
- Use Unstructured to Process Documents
- Use Voyage AI Models
Integrations
There are also other model providers and frameworks you can use in your pipelines, even though they're not listed on the Connections page. To use these integrations, add a secret on the Secrets page and then pass the secret name as the API key required by the provider. For details, see Add Secrets to Connect to Third Party Providers.
Currently, deepset Cloud supports the following integrations:
- Fastembed: models for generating embeddings and ranking documents, available through the following components:
- Google Vertex AI, available through the following Generators:
- Jina AI embedding and ranking models available through:
- Embedders:
- Rankers:
- Llama.cpp library for efficient inference of large language models, available through:
- Generators:
- Mistral AI models, available through an API with pay-as-you-go access to the latest Mistral models. You can use them through:
- Embedders:
- Generators:
- Ollama models available through:
- Embedders:
- Generators:
deepset Cloud also supports the following tracing and evaluation integrations:
- Langfuse available through LangfuseConnector
- Ragas available through RagasEvaluator