HopsFS

An open-source, next-generation, scale-out metadata file system designed for high availability and high performance in large-scale distributed storage, with tiered storage across an object store, NVMe disks, and in-memory files.

Real-Time Machine Learning with Low Latency and High Availability

RonDB offers higher availability and handles larger data sets than Redis, positioning it to be the fastest key-value store available.
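
As an illustration, RonDB is derived from MySQL NDB Cluster, so a standard MySQL client can use it as a key-value store by issuing primary-key reads. The sketch below is hedged: the host, credentials, table, and schema are hypothetical placeholders, not a real deployment or a RonDB-specific API.

```python
# Hypothetical sketch: RonDB speaks the MySQL protocol, so a plain MySQL
# connector can issue primary-key reads that resolve to fast key lookups.
# Host, credentials, database, and table below are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="rondb.example.com",      # placeholder MySQL server node of a RonDB cluster
    user="app",
    password="secret",
    database="kvstore",
)
cur = conn.cursor()

# A primary-key lookup on an in-memory NDB table behaves like a key-value GET.
cur.execute("SELECT value FROM features WHERE id = %s", (42,))
row = cur.fetchone()
print(row[0] if row else "miss")

cur.close()
conn.close()
```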

Scale smarter, train faster, and innovate effortlessly

Hopsworks supports easy hyperparameter optimization (both synchronous and asynchronous search) and distributed training using PySpark, TensorFlow, and GPUs.
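
A minimal sketch of what a parallel hyperparameter search can look like when driven by PySpark, with each Spark task training one small TensorFlow model. This is an illustrative pattern under stated assumptions, not the Hopsworks experiment API; the model, data, and search space are placeholders.

```python
# Illustrative parallel grid search on Spark; not the Hopsworks experiment API.
from pyspark.sql import SparkSession
import itertools

def train(params):
    """Train a tiny model for one hyperparameter combination and return its score."""
    import numpy as np
    import tensorflow as tf

    lr, units = params
    # Placeholder synthetic data standing in for a real training set.
    x = np.random.rand(256, 8).astype("float32")
    y = (x.sum(axis=1) > 4).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(x, y, epochs=3, verbose=0)
    return {"lr": lr, "units": units, "accuracy": float(hist.history["accuracy"][-1])}

spark = SparkSession.builder.appName("grid-search-sketch").getOrCreate()
space = list(itertools.product([1e-3, 1e-2], [16, 32]))   # grid over (learning rate, units)

# Each Spark task trains one combination; results are gathered on the driver.
results = spark.sparkContext.parallelize(space, len(space)).map(train).collect()
print(max(results, key=lambda r: r["accuracy"]))
spark.stop()
```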

Scalable AI solutions for unlocking the potential of massive datasets

How ExtremeEarth Brings Large-scale AI to the Earth Observation Community with Hopsworks, the Data-intensive AI Platform

100x Faster than AWS S3

What if you could build a distributed file system on top of S3, with an HDFS API that gives you POSIX goodness and improved performance? (A client-side sketch follows the feature list below.)
NVMe cache for S3: HopsFS provides a globally aware, write-through cache on NVMe disks for data stored on S3.
Increase GPU Utilization: Deep learning training pipelines often bottleneck on disk I/O to S3. With 100x faster file metadata operations and an NVMe cache, training pipelines on GPUs can run at full speed.
Cloud-Native Powerhouse: HopsFS runs cloud-natively with seamless high availability across availability zones.
Record-Breaking Performance: Achieves 3.4X the read throughput of S3 (EMRFS) in the DFSIO Benchmark, peer-reviewed at USENIX FAST and ACM Middleware.
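
Because HopsFS exposes an HDFS-compatible API, standard HDFS clients can read and write data that is ultimately backed by S3. The sketch below uses pyarrow's HadoopFileSystem; it assumes a local libhdfs installation, and the namenode address and project path are placeholders.

```python
# Hedged sketch: talk to HopsFS through a standard HDFS client.
# Requires libhdfs (HADOOP_HOME) on the client; host, port, and paths are placeholders.
import pyarrow.fs as pafs

fs = pafs.HadoopFileSystem("hopsfs-namenode.example.com", port=8020)

# Write a small file; HopsFS persists blocks to S3 behind its NVMe write-through cache.
with fs.open_output_stream("/Projects/demo/data/hello.txt") as out:
    out.write(b"hello from the HDFS API\n")

# Metadata operations (stat, listing) hit HopsFS's scale-out metadata layer.
info = fs.get_file_info("/Projects/demo/data/hello.txt")
print(info.path, info.size)

with fs.open_input_stream("/Projects/demo/data/hello.txt") as src:
    print(src.read().decode())
```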

Hopsworks - Real-time AI Lakehouse

Enhanced MLOps with Hopsworks Feature Store
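
For illustration, features can be registered from Python with the hsfs client library. This is a minimal sketch under stated assumptions: the host, project, API key, and feature group schema are placeholders, and exact method names may vary between hsfs versions.

```python
# Minimal sketch of writing features to the Hopsworks Feature Store via hsfs.
# Host, project, API key, and schema are placeholders.
import hsfs
import pandas as pd

connection = hsfs.connection(
    host="my-instance.hopsworks.ai",   # placeholder Hopsworks endpoint
    project="demo",
    api_key_value="<API_KEY>",
)
fs = connection.get_feature_store()

# Engineered features to serve both offline training and online inference.
df = pd.DataFrame({"customer_id": [1, 2], "avg_basket": [34.5, 12.0]})

fg = fs.get_or_create_feature_group(
    name="customer_features",
    version=1,
    primary_key=["customer_id"],
    online_enabled=True,               # make features available for low-latency lookups
)
fg.insert(df)
```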

Contact us to learn more about how Hopsworks can help your organization develop and deploy reliable AI systems.