PineconeEmbeddingRetriever
Retrieves documents from a PineconeDocumentStore using vector similarity search on dense embeddings.
Key Features
- Embedding-based retrieval from Pinecone's managed vector database.
- Configurable number of results with top_k.
- Filter support to narrow down the search space.
- Namespace support for organizing documents within an index.
- Compatible with all text embedders that produce dense vectors.
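Conceptually, embedding retrieval ranks stored vectors by their similarity to the query vector and returns the top k matches. The pure-Python sketch below illustrates the idea with cosine similarity; it is only a teaching aid, not how Pinecone implements search (a managed vector database uses approximate nearest-neighbor indexes at scale):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query: list[float], docs: dict[str, list[float]], top_k: int = 2) -> list[str]:
    """Return the ids of the top_k documents most similar to the query embedding."""
    scored = sorted(docs.items(), key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy 2-dimensional "embeddings" for illustration; real embedders produce
# hundreds of dimensions (e.g. 384 for all-MiniLM-L6-v2).
docs = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.7, 0.7]}
print(retrieve([1.0, 0.1], docs, top_k=2))  # "a" is closest, then "c"
```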
Configuration
You need a Pinecone API key to use this component. Create a secret with the key PINECONE_API_KEY in your workspace. For detailed instructions, see Add Secrets.
- Drag the PineconeEmbeddingRetriever component onto the canvas from the Component Library.
- Click the component to open the configuration panel.
- On the General tab, configure the document_store with your Pinecone index name, namespace, embedding dimensions, and distance metric.
- Go to the Advanced tab to configure top_k, filters, and filter policy.
Connections
PineconeEmbeddingRetriever accepts a query embedding as input. It outputs a list of documents ranked by similarity.
Connect a text embedder (like SentenceTransformersTextEmbedder or OpenAITextEmbedder) to its query_embedding input. Connect its documents output to a PromptBuilder, Ranker, or DeepsetAnswerBuilder.
Make sure to also add a document embedder to your indexing pipeline so that documents stored in Pinecone have embeddings.
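A common pitfall here: the dimension configured on the document store must match the embedder's output size (for example, sentence-transformers/all-MiniLM-L6-v2 produces 384-dimensional vectors), and the same model should be used for indexing and querying. A minimal sanity check, sketched with illustrative values:

```python
def check_embedding_dimension(store_dimension: int, embedding: list[float]) -> None:
    """Raise if a query or document embedding does not match the index dimension."""
    if len(embedding) != store_dimension:
        raise ValueError(
            f"Embedding has {len(embedding)} dimensions, but the index expects "
            f"{store_dimension}. Use the same embedding model for indexing and querying."
        )

# Passes silently: a 384-dimensional embedding against a 384-dimensional index.
check_embedding_dimension(384, [0.0] * 384)
```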
Usage Example
This is an example of a semantic search pipeline where PineconeEmbeddingRetriever receives the query embedding from a text embedder and retrieves matching documents.
```yaml
components:
  text_embedder:
    type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
    init_parameters:
      model: sentence-transformers/all-MiniLM-L6-v2
      device:
      token:
      prefix: ''
      suffix: ''
      batch_size: 32
      progress_bar: true
      normalize_embeddings: false
      trust_remote_code: false
  PineconeEmbeddingRetriever:
    type: haystack_integrations.components.retrievers.pinecone.embedding_retriever.PineconeEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.pinecone.document_store.PineconeDocumentStore
        init_parameters:
          api_key:
            type: env_var
            env_vars:
              - PINECONE_API_KEY
            strict: true
          index: my-index
          namespace: my-namespace
          dimension: 384
          metric: cosine
          spec:
      filters:
      top_k: 10
      filter_policy: replace
connections:
  - sender: text_embedder.embedding
    receiver: PineconeEmbeddingRetriever.query_embedding
max_runs_per_component: 100
metadata: {}
inputs:
  query:
    - text_embedder.text
  filters:
    - PineconeEmbeddingRetriever.filters
outputs:
  documents: PineconeEmbeddingRetriever.documents
```
This example shows a RAG pipeline that uses PineconeEmbeddingRetriever to find relevant documents, then passes them to a generator to answer a question.
```yaml
components:
  text_embedder:
    type: haystack.components.embedders.sentence_transformers_text_embedder.SentenceTransformersTextEmbedder
    init_parameters:
      model: sentence-transformers/all-MiniLM-L6-v2
      device:
      token:
      prefix: ''
      suffix: ''
      batch_size: 32
      progress_bar: true
      normalize_embeddings: false
      trust_remote_code: false
  retriever:
    type: haystack_integrations.components.retrievers.pinecone.embedding_retriever.PineconeEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.pinecone.document_store.PineconeDocumentStore
        init_parameters:
          api_key:
            type: env_var
            env_vars:
              - PINECONE_API_KEY
            strict: true
          index: my-index
          namespace: my-namespace
          dimension: 384
          metric: cosine
          spec:
      filters:
      top_k: 10
      filter_policy: replace
  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      required_variables: "*"
      template: |-
        Given the following documents, answer the question.
        Documents:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}
        Question: {{ question }}
        Answer:
  generator:
    type: haystack.components.generators.openai.OpenAIGenerator
    init_parameters:
      api_key:
        type: env_var
        env_vars:
          - OPENAI_API_KEY
        strict: true
      model: gpt-4o-mini
      generation_kwargs:
  answer_builder:
    type: deepset_cloud_custom_nodes.augmenters.deepset_answer_builder.DeepsetAnswerBuilder
    init_parameters:
      reference_pattern: acm
connections:
  - sender: text_embedder.embedding
    receiver: retriever.query_embedding
  - sender: retriever.documents
    receiver: prompt_builder.documents
  - sender: prompt_builder.prompt
    receiver: generator.prompt
  - sender: generator.replies
    receiver: answer_builder.replies
  - sender: retriever.documents
    receiver: answer_builder.documents
  - sender: prompt_builder.prompt
    receiver: answer_builder.prompt
max_runs_per_component: 100
metadata: {}
inputs:
  query:
    - text_embedder.text
    - prompt_builder.question
    - answer_builder.query
  filters:
    - retriever.filters
outputs:
  documents: retriever.documents
  answers: answer_builder.answers
```
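Both pipelines expose a filters input, so you can narrow the search space at query time. Haystack 2.x uses a dict-based filter syntax with comparison operators and logical operators; the metadata field names below are illustrative placeholders — replace them with whatever metadata your indexed documents actually carry:

```python
# Illustrative runtime filters in Haystack's dict-based filter syntax.
# "meta.category" and "meta.year" are hypothetical metadata fields.
filters = {
    "operator": "AND",
    "conditions": [
        {"field": "meta.category", "operator": "==", "value": "news"},
        {"field": "meta.year", "operator": ">=", "value": 2023},
    ],
}
```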
Parameters
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| query_embedding | List[float] | | Embedding of the query. |
| filters | Optional[Dict[str, Any]] | None | Filters applied to the retrieved Documents. The way runtime filters are applied depends on the filter_policy chosen at retriever initialization. See init method docstring for more details. |
| top_k | Optional[int] | None | Maximum number of Documents to return. |
Outputs
| Parameter | Type | Description |
|---|---|---|
| documents | List[Document] | List of documents similar to the query embedding. |
Init Parameters
These are the parameters you can configure in Pipeline Builder:
| Parameter | Type | Default | Description |
|---|---|---|---|
| document_store | PineconeDocumentStore | | The Pinecone Document Store. |
| filters | Optional[Dict[str, Any]] | None | Filters applied to the retrieved Documents. |
| top_k | int | 10 | Maximum number of Documents to return. |
| filter_policy | Union[str, FilterPolicy] | FilterPolicy.REPLACE | Policy to determine how filters are applied. |
Run Method Parameters
These are the parameters you can configure for the component's run() method. This means you can pass these parameters at query time through the API, in Playground, or when running a job. For details, see Modify Pipeline Parameters at Query Time.
| Parameter | Type | Default | Description |
|---|---|---|---|
| query_embedding | List[float] | | Embedding of the query. |
| filters | Optional[Dict[str, Any]] | None | Filters applied to the retrieved Documents. The way runtime filters are applied depends on the filter_policy chosen at retriever initialization. See init method docstring for more details. |
| top_k | Optional[int] | None | Maximum number of Documents to return. |