Hopsworks is an operational AI platform for managing data for AI, with a feature store and vector database, and it also supports scalable model training on GPUs and scale-out model serving on GPUs (with KServe). As such, Hopsworks can be, and is being, used across the LLM life cycle, from supervised fine-tuning to serving with RAG. Hopsworks even addresses the problem of slow model saving and loading through HopsFS, which provides NVMe-speed access to huge volumes of data stored in object storage.
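Setting Hopsworks' own APIs aside, the RAG serving pattern mentioned above can be sketched generically: embed the user's query, retrieve the most similar private-data chunks from a vector index, and build the LLM prompt from them. A minimal, library-free sketch with toy hand-written embeddings (the function names and corpus here are illustrative, not the Hopsworks API):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_embedding, corpus, k=2):
    # Rank document chunks by similarity to the query embedding,
    # as a vector database would do at much larger scale.
    ranked = sorted(
        corpus,
        key=lambda doc: cosine_similarity(query_embedding, doc["embedding"]),
        reverse=True,
    )
    return ranked[:k]

# Toy private-data chunks with made-up 3-d embeddings; real systems
# use vectors produced by an embedding model.
corpus = [
    {"text": "Quarterly revenue report", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Employee onboarding guide", "embedding": [0.0, 0.8, 0.2]},
    {"text": "Sales forecast for Q3", "embedding": [0.8, 0.2, 0.1]},
]

query = [1.0, 0.1, 0.0]  # stand-in embedding for a query about revenue
context = retrieve(query, corpus, k=2)
prompt = "Answer using only this context:\n" + "\n".join(d["text"] for d in context)
```

The retrieved chunks are then prepended to the prompt sent to the foundation model, which is what keeps its answers grounded in your private data.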
In this webinar, we will walk through the LLM life cycle on Hopsworks and show you how to build LLM applications on your private data with Hopsworks and open-source foundation models.