Build AI Agents with LangChain + Needle
Use Needle as a powerful RAG tool in your LangChain applications. The integration provides components for document management and semantic search that plug directly into LangChain's extensive ecosystem.
Key Features
- Document Loading: Easy document ingestion with `NeedleLoader`
- Semantic Search: Advanced retrieval capabilities with `NeedleRetriever`
- RAG Pipeline Integration: Seamless integration with LangChain's RAG components
- Flexible Configuration: Customizable retrieval parameters and chain components
Installation
Install the LangChain community package that includes Needle integration:
```bash
pip install langchain-community
```
API Keys
To use the Needle LangChain integration, you'll need to set up your environment variables:
- `NEEDLE_API_KEY` - Get your API key from Developer settings
- `OPENAI_API_KEY` - Required if using OpenAI in your pipeline
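For example, you can set these variables programmatically before initializing any components. The sketch below uses placeholder key values; in practice you would typically export the variables in your shell or load them from a secrets manager.

```python
import os

# Placeholder values for illustration only; use your real keys from the
# Needle and OpenAI dashboards, ideally loaded from your environment.
os.environ["NEEDLE_API_KEY"] = "your-needle-api-key"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"  # only if your pipeline uses OpenAI
```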
Components
NeedleLoader
The `NeedleLoader` component enables you to easily add documents to your Needle collections:
```python
import os

from langchain_community.document_loaders.needle import NeedleLoader

# Initialize the loader
document_loader = NeedleLoader(
    needle_api_key=os.getenv("NEEDLE_API_KEY"),
    collection_id="your-collection-id",
)

# Add files to the collection
files = {
    "document.pdf": "https://example.com/document.pdf",
}

document_loader.add_files(files=files)
```
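`NeedleLoader` follows LangChain's standard document-loader interface, so once your files have been indexed you can also pull the collection's contents back as `Document` objects with `load()`. A minimal sketch, assuming the files added above have finished processing in Needle:

```python
# Fetch the collection's contents as LangChain Document objects.
docs = document_loader.load()

for doc in docs:
    # Each Document carries the text plus metadata from Needle.
    print(doc.metadata, doc.page_content[:100])
```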
NeedleRetriever
The `NeedleRetriever` component enables semantic search over your documents using Needle's advanced retrieval capabilities:
```python
import os

from langchain_community.retrievers.needle import NeedleRetriever

retriever = NeedleRetriever(
    needle_api_key=os.getenv("NEEDLE_API_KEY"),
    collection_id="your-collection-id",
)
```
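Like any LangChain retriever, `NeedleRetriever` can be queried directly before wiring it into a chain. The query string below is just an example:

```python
# Run a semantic search against the collection and inspect the top results.
results = retriever.invoke("What does the document say about pricing?")

for doc in results:
    print(doc.page_content[:200])
```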
Example RAG Pipeline
Here's how to build a complete RAG pipeline using Needle components with LangChain:
```python
import os

from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.retrievers.needle import NeedleRetriever
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Initialize the language model
llm = ChatOpenAI(temperature=0)

# Initialize the Needle retriever
retriever = NeedleRetriever(
    needle_api_key=os.getenv("NEEDLE_API_KEY"),
    collection_id="your-collection-id",
)

# Define the system prompt
system_prompt = """
You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know, say so concisely.\n\n{context}
"""

# Create the prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("human", "{input}"),
])

# Create the question-answering chain
question_answer_chain = create_stuff_documents_chain(llm, prompt)

# Create the RAG chain
rag_chain = create_retrieval_chain(retriever, question_answer_chain)

# Run the chain
response = rag_chain.invoke({
    "input": "What is the main topic of the document?"
})

print(response["answer"])
```
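In addition to the generated answer, the output of `create_retrieval_chain` includes the retrieved documents under the `context` key, which is useful for debugging or showing sources. A short sketch, assuming the default output keys:

```python
# Inspect which documents were retrieved to produce the answer.
for doc in response["context"]:
    # "source" is a common metadata field; the exact keys depend on your collection.
    print(doc.metadata.get("source", "unknown source"))
```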
Support
If you have questions or need assistance:
- Join our Discord community
- Check our API Reference
- Visit the LangChain documentation