Using Large Language Models (LLMs)
LLMs show remarkable capabilities in understanding and generating human-like text. Have a look at how you can use them in your deepset Cloud pipelines.
LLMs in Your Pipelines
You can easily integrate LLMs in your deepset Cloud pipelines using the versatile PromptNode. PromptNode works well for retrieval-augmented generation (RAG) question answering and other tasks, such as text classification and summarization. It performs the specific task you define in the prompt you pass to it.
All RAG and Chat pipelines have streaming enabled by default.
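For orientation, here's a minimal sketch of such a RAG pipeline in code, using the open source Haystack v1 library that deepset Cloud pipelines build on. The model name, the API key placeholder, the example documents, and the `deepset/question-answering` template are illustrative assumptions, not a fixed recipe:

```python
# A minimal sketch of a RAG question answering pipeline built around PromptNode,
# using the open source Haystack v1 library. Model name, API key placeholder,
# example documents, and the prompt template are illustrative assumptions.
from haystack import Document, Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, PromptNode

# Index a couple of example documents for the retriever to search.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([
    Document(content="deepset Cloud lets you build, deploy, and monitor NLP pipelines."),
    Document(content="PromptNode connects a pipeline to a large language model."),
])

retriever = BM25Retriever(document_store=document_store, top_k=3)

# PromptNode performs the task described in its prompt template. Here it
# answers the query based on the documents the retriever passes to it.
prompt_node = PromptNode(
    model_name_or_path="gpt-4",
    api_key="YOUR_OPENAI_API_KEY",  # placeholder
    default_prompt_template="deepset/question-answering",
)

pipeline = Pipeline()
pipeline.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipeline.add_node(component=prompt_node, name="PromptNode", inputs=["Retriever"])

result = pipeline.run(query="What does PromptNode do?")
print(result["results"][0])  # the generated answer
```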
Creating an LLM App
You can start with an out-of-the-box template and then experiment with models and prompts:
- Create a pipeline from a template and choose Retrieval Augmented Generation Question Answering Llama2-13b. This template uses PromptNode with the free Llama 2 model by Meta. By changing the prompt and the model, you can adapt it to NLP tasks beyond generative question answering.
- Experiment with your pipeline settings (see the sketch after this list):
  - Try changing the model. PromptNode works with various models, including GPT-4 and models by Cohere and Anthropic. For a full list, see PromptNode.
  - Experiment with different prompts. PromptNode comes with a set of ready-made prompts for you to try out. Simply pass the prompt name in the `default_prompt_template` parameter. For a full list of prompts, see Prompt Templates.
  - Engineer your prompts in Prompt Studio, a sandbox environment in deepset Cloud, where you can refine and test them.
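As a rough illustration of these settings, here's how you might swap the model and the prompt on PromptNode, again assuming the Haystack v1 API. The model names, API key placeholders, and the `deepset/summarization` template are illustrative; check PromptNode and Prompt Templates for the options deepset Cloud supports:

```python
# A rough sketch of swapping models and prompts on PromptNode, assuming the
# Haystack v1 API. Model names, API key placeholders, and template names are
# illustrative assumptions, not an exhaustive or guaranteed list.
from haystack.nodes import PromptNode, PromptTemplate

# Ready-made prompt: pass its name in the default_prompt_template parameter.
summarizer = PromptNode(
    model_name_or_path="gpt-4",
    api_key="YOUR_OPENAI_API_KEY",  # placeholder
    default_prompt_template="deepset/summarization",
)

# Custom prompt: define your own template for a different NLP task, such as
# text classification.
sentiment_template = PromptTemplate(
    prompt="Classify the sentiment of the text as positive or negative.\n"
           "Text: {documents}\nSentiment:",
)
classifier = PromptNode(
    model_name_or_path="claude-2",  # e.g., an Anthropic model
    api_key="YOUR_ANTHROPIC_API_KEY",  # placeholder
    default_prompt_template=sentiment_template,
)
```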
Learn More
Here's a collection of resources you may want to explore to learn more about LLMs and generative AI in deepset Cloud.
About LLMs
Generative AI in Practice
- Use Case: Generative AI Systems
- Tutorial: Building a Summarization System with a Large Language Model
- Tutorial: Building a Robust RAG Question Answering System
- PromptNode
- Using Hosted LLMs in Your Pipelines
Prompt Engineering