CyborgDB Service
Self-deployed REST API service. Deploy CyborgDB as a standalone microservice that provides REST API access to confidential vector search. The service runs on your infrastructure and scales independently from your applications. Learn more about CyborgDB Service here.
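To give a sense of the interaction model, the sketch below sends a similarity query to a running service over HTTP. The base URL, endpoint path, header name, and payload fields are illustrative assumptions, not the service's actual API; consult the CyborgDB Service API reference for the real routes, authentication scheme, and request schema.

```python
# Minimal sketch of querying a CyborgDB Service instance over REST.
# NOTE: the base URL, endpoint path, header name, and payload fields below are
# illustrative assumptions -- check the CyborgDB Service API reference for the
# actual routes, authentication scheme, and request schema.
import requests

BASE_URL = "http://localhost:8000"        # assumed local deployment
API_KEY = "your-api-key"                  # placeholder credential

response = requests.post(
    f"{BASE_URL}/v1/indexes/docs/query",  # hypothetical endpoint
    headers={"X-API-Key": API_KEY},
    json={
        "query_vector": [0.12, 0.34, 0.56, 0.78],  # embedding of the query text
        "top_k": 5,                                # number of nearest neighbors
    },
    timeout=10,
)
response.raise_for_status()
for hit in response.json().get("results", []):
    print(hit)
```

Because the interface is plain HTTP, any language or framework that can make REST calls can use the service; the client SDKs below simply wrap this interface in idiomatic APIs.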
Key Benefits
- Independent Scaling - Scale vector operations separately from your main application and handle high query loads without impacting your core services.
- Self-Optimization - The service automatically optimizes index performance, manages memory efficiently, and adapts to query patterns over time.
- Multi-Language Access - One service deployment supports multiple applications and programming languages through the REST API and client SDKs.
- Operational Simplicity - Centralized deployment, monitoring, and maintenance; update vector search capabilities without touching application code.
Deployment Options
Docker Deployment
Containerized service. Deploy as a Docker container for easy orchestration and scaling.
Python Service
Direct Python installation. Install and run the service with pip for a lightweight deployment.
Client SDKs
Python SDK
Full-featured Python client with async support and type hints (a usage sketch follows this list of SDKs)
JavaScript/TypeScript SDK
Modern JS/TS client with Promise-based API and TypeScript definitions
Go SDK
High-performance Go client with idiomatic API and concurrency support
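As an illustration of the client-side experience, the sketch below uses the Python SDK pattern of connecting to a service deployment, creating an encrypted index, and querying it. The package name (`cyborgdb`), the `Client` constructor arguments, and the method names are assumptions for illustration only; the actual API surface is defined in the Python SDK reference.

```python
# Sketch of typical client SDK usage against a CyborgDB Service deployment.
# NOTE: the package name, Client constructor, and method names shown here are
# assumptions for illustration -- refer to the Python SDK reference for the
# real API surface.
import secrets

import cyborgdb  # hypothetical package name

client = cyborgdb.Client(
    base_url="http://localhost:8000",  # assumed service endpoint
    api_key="your-api-key",            # placeholder credential
)

index_key = secrets.token_bytes(32)    # client-held key for the encrypted index
index = client.create_index("docs", index_key)

index.upsert([
    {"id": "doc-1", "vector": [0.1, 0.2, 0.3, 0.4]},
    {"id": "doc-2", "vector": [0.5, 0.6, 0.7, 0.8]},
])

results = index.query(query_vector=[0.1, 0.2, 0.3, 0.4], top_k=2)
print(results)
```

The JavaScript/TypeScript and Go SDKs follow the same flow against the same service deployment, which is what makes a single service usable across teams and languages.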
CyborgDB Embedded
Direct library integration. Embed CyborgDB directly into your applications using the Python or C++ libraries. This approach provides maximum control and performance by eliminating network overhead. Learn more about CyborgDB Embedded here.
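To show what direct integration looks like, the sketch below creates and queries an encrypted index entirely in-process, with no service to deploy and no network hop. The package name (`cyborgdb_core`), the constructor arguments, and the method names are assumptions for illustration; see the CyborgDB Embedded reference for the actual API.

```python
# Sketch of embedding CyborgDB directly in a Python application: no service
# process and no network hop -- everything runs inside the application itself.
# NOTE: the package name, constructor arguments, and method names below are
# assumptions for illustration; see the CyborgDB Embedded docs for the real API.
import secrets

import cyborgdb_core as cyborgdb  # hypothetical embedded package name

client = cyborgdb.Client(
    index_location="memory",   # assumed backing-store setting for this sketch
    config_location="memory",
)

index_key = secrets.token_bytes(32)  # key material never leaves the application
index = client.create_index("docs", index_key)

index.upsert([
    {"id": "doc-1", "vector": [0.1, 0.2, 0.3, 0.4]},
    {"id": "doc-2", "vector": [0.5, 0.6, 0.7, 0.8]},
])

results = index.query(query_vector=[0.1, 0.2, 0.3, 0.4], top_k=2)
print(results)
```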
Key Benefits
- Maximum Performance - Direct memory access and zero network latency; ideal for latency-sensitive applications requiring sub-millisecond response times.
- Deep Integration - Customize every aspect of vector operations; well suited to specialized workflows and performance-tuning requirements.
- Complete Control - Full ownership of the vector search stack, with no external dependencies or service management overhead.
- Advanced Customization - Access to low-level APIs for custom index configurations, memory management, and algorithm tuning (see the sketch after this list).
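As an example of the kind of low-level control this refers to, the sketch below passes an explicit index configuration when creating an index. The `IndexIVFFlat` class name, its parameters, and the `index_config` keyword are assumptions for illustration; the real configuration classes and tuning knobs are listed in the embedded API reference.

```python
# Sketch of customizing index construction in the embedded library.
# NOTE: the configuration class name, its parameters, and the index_config
# keyword are assumptions for illustration -- consult the CyborgDB Embedded
# reference for the actual options.
import secrets

import cyborgdb_core as cyborgdb  # hypothetical embedded package name

# Hypothetical IVF-style index configuration: vector dimensionality, number of
# coarse partitions (lists), and the distance metric to use.
index_config = cyborgdb.IndexIVFFlat(
    dimension=768,
    n_lists=1024,
    metric="euclidean",
)

client = cyborgdb.Client(index_location="memory", config_location="memory")
index = client.create_index(
    "docs-tuned",
    secrets.token_bytes(32),      # client-held index key
    index_config=index_config,    # assumed keyword for passing the config
)
```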
Deployment Options
Python Embedded
Direct Python integration. Embed confidential vector search directly in Python applications.
C++ Embedded
Native C++ integration. High-performance native integration for C++ applications.
Choosing the Right Model
Choose the CyborgDB Service model when your requirements include:
- Multi-application architecture - Multiple services need vector search capabilities
- Team scalability - Different teams use different programming languages
- Operational simplicity - You want centralized vector search management
- Independent scaling - Vector workloads need to scale separately from applications
- Microservice patterns - You prefer service-oriented architecture

If your priorities are instead maximum performance, deep integration, and complete control within a single application, CyborgDB Embedded is the better fit.