LangChain-MongoDB is a dedicated package that provides “long-term memory” capabilities for LLM applications: a vector store, conversation history, and semantic caching.
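To illustrate the semantic-caching idea, here is a minimal, hedged sketch: an in-memory toy stand-in (not the LangChain-MongoDB API) where a lookup hits when a stored prompt’s embedding is close enough to the new prompt’s embedding by cosine similarity. The `SemanticCache` class, its `threshold`, and the raw embedding lists are all illustrative assumptions; a real deployment would store embeddings in MongoDB and use an embedding model to produce them.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class SemanticCache:
    """Toy in-memory semantic cache: unlike an exact-match cache,
    a lookup succeeds when any stored prompt embedding is within
    a similarity threshold of the incoming prompt embedding."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached_response) pairs

    def set(self, embedding, response):
        self.entries.append((embedding, response))

    def get(self, embedding):
        best_response, best_score = None, 0.0
        for stored, response in self.entries:
            score = cosine_similarity(stored, embedding)
            if score > best_score:
                best_response, best_score = response, score
        return best_response if best_score >= self.threshold else None
```

The point of the threshold is that two differently worded prompts with nearly identical meaning (and hence nearby embeddings) can reuse one cached LLM response, saving a model call.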
MongoDB Atlas Vector Search integrates with LlamaIndex to provide “long-term memory” to LLMs, as well as a store for document chunks.
Vector embeddings generated by OpenAI can be stored in MongoDB Atlas Vector Search to build high-performance generative AI applications.
Hugging Face provides access to many open-source models for generating vector embeddings, which can then be stored in Atlas Vector Search.
Vector embeddings generated by Cohere can be stored in MongoDB Atlas Vector Search to build high-performance generative AI applications.
Semantic Kernel is an SDK that simplifies building LLM applications in programming languages like C# and Python. Atlas Vector Search integrates with it to provide “memory” for LLM applications.
Knowledge Bases for Amazon Bedrock is a fully managed capability for implementing the entire RAG workflow, from ingestion to retrieval. Atlas Vector Search integrates natively and securely.
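The ingestion-to-retrieval workflow mentioned above can be sketched in a few lines of stdlib-only Python. This is a conceptual illustration, not Bedrock or Atlas code: the `embed` function is a hypothetical bag-of-characters stand-in for a real embedding model, and chunking is plain fixed-width slicing. A managed knowledge base handles these steps (plus storage, security, and scaling) for you.

```python
import math

def embed(text):
    # Hypothetical stand-in for a real embedding model: a tiny
    # bag-of-characters vector, just enough to make retrieval run.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def ingest(documents, chunk_size=100):
    """Ingestion: split each document into chunks and index
    each chunk alongside its embedding."""
    index = []
    for doc in documents:
        for i in range(0, len(doc), chunk_size):
            chunk = doc[i:i + chunk_size]
            index.append((embed(chunk), chunk))
    return index

def retrieve(index, question, k=2):
    """Retrieval: return the k chunks most similar to the question."""
    q = embed(question)
    scored = sorted(index, key=lambda entry: cosine(entry[0], q), reverse=True)
    return [chunk for _, chunk in scored[:k]]

def build_prompt(question, context_chunks):
    """Augmentation: splice retrieved chunks into the LLM prompt."""
    context = "\n".join(context_chunks)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

In a real RAG pipeline the same three stages remain, but `embed` calls an embedding model, `ingest` writes to a vector store such as Atlas Vector Search, and `retrieve` issues a vector search query instead of scanning an in-memory list.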
Start with the multi-cloud database service built for resilience, scale, and the highest levels of data privacy and security.
Bring your data to life instantly. Create, share, and embed visualizations for real-time insights and business intelligence.
Analyze rich data easily across Atlas and AWS S3. Combine, transform, and enrich data from multiple sources without complex integrations.