Power your mission with efficient data management and retrieval without high cloud costs.
Features
Request dozens of telemetry channels without performance degradation.
Tune storage costs by deciding what telemetry to retain and for how long.
Connect other tools to Sift’s backend to train your AI/ML models or feed your existing workflows.
Send Sift primitives, as well as strings, enums, and bitfields.
Sift Observability Platform
Missions are complex. Reviewing machine data should be simple.
Ingestion
Scale both the number and complexity of your machines without worrying about ingestion rates.
Storage
Query a backend that's purpose-built for long-duration, high-cardinality searches. Add granular retention policies to save only the data you need.
Visualization
Explore, share, and collaborate with your team — no Python scripts or SQL queries needed. Unify all your data sources in a single window.
Review
Automatically review simulation, testbed, and operational data. Ingrain your expertise in rules that flag anomalies in real time.
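The kind of rule described here can be sketched as a predicate attached to a telemetry channel. This is an illustrative sketch only, not Sift's actual rule API; the rule names, channel names, and thresholds below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    channel: str
    predicate: Callable[[float], bool]  # returns True when a value is anomalous

def evaluate(rules: list[Rule], sample: tuple[str, float]) -> list[str]:
    """Return the names of all rules flagged by one incoming telemetry sample."""
    channel, value = sample
    return [r.name for r in rules if r.channel == channel and r.predicate(value)]

# Hypothetical rules encoding domain expertise as simple thresholds.
rules = [
    Rule("overtemp", "battery_temp_c", lambda v: v > 60.0),
    Rule("undervolt", "bus_voltage_v", lambda v: v < 24.0),
]

evaluate(rules, ("battery_temp_c", 72.5))  # → ["overtemp"]
```

Because each sample is checked as it arrives, the same rules can run against live operational data or be replayed over recorded simulation and testbed runs.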
Reporting
Generate certification reports for stakeholders in one click. Provide precise access to your data with role-based access controls.
FAQs
How is data stored?
The data catalog is stored in PostgreSQL and contains metadata such as runs, assets, and channels. Data is persisted to Parquet in batches every few minutes, and real-time data is written to the cache for immediate availability. Parquet files are compacted to their optimal size on a recurring basis and as data reaches the end of its retention period.
What data types do you support?
We currently support 10 data types: standard ints, floats, and bools, as well as strings, enums, and bitfields.
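One way to picture a fixed set of channel data types is as an enumeration. The specific names and widths below are assumptions for illustration; only the broad categories (ints, floats, bools, strings, enums, bitfields) come from this page.

```python
from enum import Enum, auto

class ChannelDataType(Enum):
    """Hypothetical set of channel data types — names/widths are assumed."""
    BOOL = auto()
    INT32 = auto()
    INT64 = auto()
    UINT32 = auto()
    UINT64 = auto()
    FLOAT = auto()
    DOUBLE = auto()
    STRING = auto()
    ENUM = auto()
    BITFIELD = auto()

len(ChannelDataType)  # → 10
```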
Where can Sift be deployed?
Our goal is to meet customers where they are. If you need deployment in a particular cloud environment or an on-premises deployment, that’s something we can support.