Build with confidence, knowing your AI is accurate and grounded in fast, relevant data.
Spice is an open-source SQL query and AI-inference engine, built in Rust, for developers.
SQL API to query structured and unstructured data across databases, data warehouses, and data lakes.
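As a minimal sketch of what querying that API can look like (assuming a runtime listening on its default HTTP port 8090 and a hypothetical dataset named taxi_trips defined in your spicepod; adjust both to your deployment):

```python
import requests

# Assumption: the Spice runtime's HTTP endpoint is on localhost:8090.
SPICE_HTTP = "http://localhost:8090"

sql = """
SELECT pickup_date, COUNT(*) AS trips
FROM taxi_trips            -- hypothetical dataset defined in spicepod.yaml
GROUP BY pickup_date
ORDER BY trips DESC
LIMIT 10
"""

# POST the SQL text to the runtime's /v1/sql endpoint and read JSON rows back.
resp = requests.post(
    f"{SPICE_HTTP}/v1/sql",
    data=sql,
    headers={"Content-Type": "text/plain"},
)
resp.raise_for_status()
for row in resp.json():
    print(row)
```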
OpenAI-compatible API for local and hosted inference, search, memory, evals, and observability.
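Because the API is OpenAI-compatible, the standard OpenAI client can be pointed at the runtime. A sketch, assuming the same local runtime address as above and a model alias ("assistant") that you would define in your spicepod:

```python
from openai import OpenAI

# Assumptions: the runtime's OpenAI-compatible endpoint is served under /v1
# on the HTTP port, and "assistant" is a model alias from your spicepod.yaml.
client = OpenAI(base_url="http://localhost:8090/v1", api_key="not-needed-locally")

completion = client.chat.completions.create(
    model="assistant",  # hypothetical model name; use one defined in your spicepod
    messages=[{"role": "user", "content": "Summarize yesterday's busiest taxi routes."}],
)
print(completion.choices[0].message.content)
```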
Materialize data and content in DuckDB, SQLite, and PostgreSQL; in-memory or on disk. Results caching included.
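A rough way to see materialization and the results cache at work is to time repeated runs of the same query against an accelerated dataset. This sketch reuses the hypothetical runtime address and taxi_trips dataset from above and assumes acceleration and caching are enabled in your spicepod:

```python
import time
import requests

SPICE_HTTP = "http://localhost:8090"          # assumed runtime address
sql = "SELECT COUNT(*) FROM taxi_trips"       # hypothetical accelerated dataset

def timed_query(query: str) -> float:
    """Run one SQL query over HTTP and return the elapsed time in seconds."""
    start = time.perf_counter()
    r = requests.post(
        f"{SPICE_HTTP}/v1/sql",
        data=query,
        headers={"Content-Type": "text/plain"},
    )
    r.raise_for_status()
    return time.perf_counter() - start

# The first run is served from the local materialization; later runs can
# additionally be answered from the results cache when it is enabled.
for i in range(3):
    print(f"run {i + 1}: {timed_query(sql) * 1000:.1f} ms")
```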
Self-hostable binary or Docker image, platform-agnostic, and Apache 2.0 licensed. Built on industry-standard technologies including Apache DataFusion and Apache Arrow.
See how Spice has been deployed in production architectures.
Slow, 15-second queries across 100B+ rows.
Poor user experience from slow page loads.
Unnecessary Databricks workspace expense.
Spice powers data apps and AI agents with federated SQL, vector search, LLM memory, real-time data acceleration, observability, and integration across modern and legacy systems.
Build data-grounded AI apps and agents with local or hosted models, LLM memory, evals, and observability.
Ensure AI is grounded in data with high-performance search and text-to-SQL over a semantic knowledge layer (see the search sketch below).
Co-locate working sets of data in Arrow, SQLite, and DuckDB with applications for fast, sub-second queries.
Use SQL to query across databases, data warehouses, and data lakes with advanced federation.
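For the search side, here is a sketch of calling the runtime's search endpoint over HTTP. The request fields and the dataset name are assumptions for illustration; check the docs for the exact request schema, and configure embeddings on the dataset first:

```python
import requests

SPICE_HTTP = "http://localhost:8090"  # assumed runtime address

# Hypothetical request body: the "text", "from", and "limit" fields are
# assumptions for illustration, as is the support_tickets dataset name.
payload = {
    "text": "customers reporting failed payments",
    "from": ["support_tickets"],
    "limit": 3,
}

resp = requests.post(f"{SPICE_HTTP}/v1/search", json=payload)
resp.raise_for_status()

# Print the raw response rather than assuming its exact shape.
print(resp.json())
```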
“We have been looking for a way to accelerate queries from our Databricks workspaces. Spice was the perfect solution, as it was super simple to set up and it was easy to define and query accelerated datasets without a lot of overhead.”
Chief Data Officer at Barracuda