PineconeEmbeddingRetriever
Retrieve documents from the PineconeDocumentStore based on their dense embeddings.
Basic Information
- Type: haystack_integrations.components.retrievers.pinecone.embedding_retriever.PineconeEmbeddingRetriever
- Components it can connect with:
  - Text Embedders: PineconeEmbeddingRetriever receives the query embedding from a text embedder such as SentenceTransformersTextEmbedder or OpenAITextEmbedder.
  - PromptBuilder: PineconeEmbeddingRetriever can send retrieved documents to PromptBuilder to be used in a prompt.
  - Rankers: PineconeEmbeddingRetriever can send retrieved documents to a Ranker to reorder them by relevance.
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| query_embedding | List[float] | | Embedding of the query. |
| filters | Optional[Dict[str, Any]] | None | Filters applied to the retrieved Documents. The way runtime filters are applied depends on the filter_policy chosen at retriever initialization. See init method docstring for more details. |
| top_k | Optional[int] | None | Maximum number of Documents to return. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| documents | List[Document] | List of documents similar to the query embedding. |
Overview
PineconeEmbeddingRetriever is an embedding-based retriever compatible with the PineconeDocumentStore. It compares the query and document embeddings and fetches the documents most relevant to the query based on vector similarity.
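Conceptually, embedding retrieval ranks documents by vector similarity to the query embedding. The following minimal sketch illustrates cosine-similarity ranking in plain Python; it is for intuition only, since Pinecone performs this search server-side at scale:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def retrieve(query_embedding: list[float],
             docs: list[tuple[str, list[float]]],
             top_k: int = 10) -> list[str]:
    # Rank (content, embedding) pairs by similarity to the query, best first.
    ranked = sorted(docs,
                    key=lambda d: cosine_similarity(query_embedding, d[1]),
                    reverse=True)
    return [content for content, _ in ranked[:top_k]]

docs = [("about cats", [1.0, 0.1]),
        ("about dogs", [0.1, 1.0]),
        ("about pets", [0.7, 0.7])]
print(retrieve([1.0, 0.0], docs, top_k=2))  # → ['about cats', 'about pets']
```

Pinecone uses the distance metric configured on the index (cosine in the examples on this page) rather than this toy implementation.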
Pinecone is a managed vector database service that enables fast and scalable similarity search. It's designed for production workloads with features like automatic scaling, high availability, and real-time updates.
When using PineconeEmbeddingRetriever in your pipeline, make sure it has the query and document embeddings available. Add a document embedder to your indexing pipeline and a text embedder to your query pipeline to create these embeddings.
In addition to the query_embedding, the retriever accepts other optional parameters, including top_k (the maximum number of documents to retrieve) and filters to narrow down the search space.
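Runtime filters use Haystack's filter syntax: a dictionary of conditions on document metadata combined with a logical operator. A sketch, where the metadata fields meta.type and meta.date are hypothetical examples:

```python
# Haystack-style metadata filter: conditions joined by a logical operator.
# The fields (meta.type, meta.date) are illustrative; use your own metadata keys.
filters = {
    "operator": "AND",
    "conditions": [
        {"field": "meta.type", "operator": "==", "value": "article"},
        {"field": "meta.date", "operator": ">=", "value": "2024-01-01"},
    ],
}

# The retriever would then be invoked along these lines:
# result = retriever.run(query_embedding=embedding, filters=filters, top_k=5)
print(len(filters["conditions"]))  # → 2
```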
Some relevant parameters that impact embedding retrieval must be defined when the PineconeDocumentStore is initialized: these include the dimension of the embeddings and the distance metric to use.
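Because the dimension is fixed when the store is initialized, an embedding whose length does not match it cannot be indexed or queried. A small illustrative check (384 is the output dimension of sentence-transformers/all-MiniLM-L6-v2, matching the examples on this page; the helper is hypothetical):

```python
STORE_DIMENSION = 384  # must match the embedder's output size (all-MiniLM-L6-v2 -> 384)

def check_dimension(embedding: list[float],
                    dimension: int = STORE_DIMENSION) -> list[float]:
    # Reject embeddings whose length differs from the store's configured dimension.
    if len(embedding) != dimension:
        raise ValueError(f"expected a {dimension}-dim embedding, got {len(embedding)}")
    return embedding

check_dimension([0.0] * 384)  # passes silently
```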
Authorization
You need a Pinecone API key to use this component. Create a secret with the key PINECONE_API_KEY in your workspace secrets. For detailed instructions, see Add Secrets.
Usage Example
Using the Component in a Pipeline
This is an example of a semantic search pipeline where PineconeEmbeddingRetriever receives the query embedding from a text embedder and retrieves matching documents.
```yaml
components:
  text_embedder:
    type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
    init_parameters:
      model: sentence-transformers/all-MiniLM-L6-v2
      device:
      token:
      prefix: ''
      suffix: ''
      batch_size: 32
      progress_bar: true
      normalize_embeddings: false
      trust_remote_code: false
  PineconeEmbeddingRetriever:
    type: haystack_integrations.components.retrievers.pinecone.embedding_retriever.PineconeEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.pinecone.document_store.PineconeDocumentStore
        init_parameters:
          api_key:
            type: env_var
            env_vars:
              - PINECONE_API_KEY
            strict: true
          index: my-index
          namespace: my-namespace
          dimension: 384
          metric: cosine
          spec:
      filters:
      top_k: 10
      filter_policy: replace
connections:
  - sender: text_embedder.embedding
    receiver: PineconeEmbeddingRetriever.query_embedding
max_runs_per_component: 100
metadata: {}
inputs:
  query:
    - text_embedder.text
  filters:
    - PineconeEmbeddingRetriever.filters
outputs:
  documents: PineconeEmbeddingRetriever.documents
```
Using in a RAG Pipeline
This example shows a RAG pipeline that uses PineconeEmbeddingRetriever to find relevant documents, then passes them to a generator to answer a question.
```yaml
components:
  text_embedder:
    type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
    init_parameters:
      model: sentence-transformers/all-MiniLM-L6-v2
      device:
      token:
      prefix: ''
      suffix: ''
      batch_size: 32
      progress_bar: true
      normalize_embeddings: false
      trust_remote_code: false
  retriever:
    type: haystack_integrations.components.retrievers.pinecone.embedding_retriever.PineconeEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.pinecone.document_store.PineconeDocumentStore
        init_parameters:
          api_key:
            type: env_var
            env_vars:
              - PINECONE_API_KEY
            strict: true
          index: my-index
          namespace: my-namespace
          dimension: 384
          metric: cosine
          spec:
      filters:
      top_k: 10
      filter_policy: replace
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      required_variables: "*"
      template: |-
        Given the following documents, answer the question.
        Documents:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}
        Question: {{ question }}
        Answer:
  generator:
    type: haystack.components.generators.openai.OpenAIGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - OPENAI_API_KEY
        strict: true
      model: gpt-4o-mini
      generation_kwargs:
  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm
connections:
  - sender: text_embedder.embedding
    receiver: retriever.query_embedding
  - sender: retriever.documents
    receiver: prompt_builder.documents
  - sender: prompt_builder.prompt
    receiver: generator.prompt
  - sender: generator.replies
    receiver: answer_builder.replies
  - sender: retriever.documents
    receiver: answer_builder.documents
  - sender: prompt_builder.prompt
    receiver: answer_builder.prompt
max_runs_per_component: 100
metadata: {}
inputs:
  query:
    - text_embedder.text
    - prompt_builder.question
    - answer_builder.query
  filters:
    - retriever.filters
outputs:
  documents: retriever.documents
  answers: answer_builder.answers
```
Parameters
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| document_store | PineconeDocumentStore | | The Pinecone Document Store. |
| filters | Optional[Dict[str, Any]] | None | Filters applied to the retrieved Documents. |
| top_k | int | 10 | Maximum number of Documents to return. |
| filter_policy | Union[str, FilterPolicy] | FilterPolicy.REPLACE | Policy to determine how filters are applied. |
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| query_embedding | List[float] | | Embedding of the query. |
| filters | Optional[Dict[str, Any]] | None | Filters applied to the retrieved Documents. The way runtime filters are applied depends on the filter_policy chosen at retriever initialization. See init method docstring for more details. |
| top_k | Optional[int] | None | Maximum number of Documents to return. |