
Modular AI Systems with Feature, Training, Inference Pipelines on Hopsworks - an LLMOps Tutorial

May 8, 2024
6:30 pm EDT
Microsoft, New York

Learn about a unified architecture for Batch, Real-Time, and LLM AI Systems based around three independent ML pipelines.

In this tutorial, we will introduce a unified architecture for Batch, Real-Time, and LLM AI Systems based around three independent ML pipelines:

  • A feature pipeline to create feature data,
  • A training pipeline to train your model, and
  • An inference pipeline that uses your trained model to make predictions on new data.
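
To make the separation concrete, here is a minimal sketch of what the three pipelines can look like using the Hopsworks Python client. The names ("demo_features", "demo_view", "demo_model"), the column names (id, amount, label), and the toy scikit-learn model are illustrative assumptions, not the tutorial's actual code.

```python
# Illustrative FTI skeleton against Hopsworks; names, columns, and the toy model are assumptions.
import hopsworks
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression

project = hopsworks.login()              # authenticate to the Hopsworks project
fs = project.get_feature_store()

def feature_pipeline(raw: pd.DataFrame) -> None:
    """Feature pipeline: compute feature data and write it to a feature group."""
    fg = fs.get_or_create_feature_group(
        name="demo_features", version=1, primary_key=["id"],
        online_enabled=True, description="Illustrative feature group")
    fg.insert(raw)

def training_pipeline() -> None:
    """Training pipeline: read features, train a model, and register the artifact."""
    fg = fs.get_feature_group("demo_features", version=1)
    fv = fs.get_or_create_feature_view(
        name="demo_view", version=1, query=fg.select_all())
    df, _ = fv.training_data()
    model = LogisticRegression().fit(df[["amount"]], df["label"])
    joblib.dump(model, "model.joblib")
    mr = project.get_model_registry()
    mr.python.create_model(name="demo_model").save("model.joblib")

def inference_pipeline(keys: list[dict]):
    """Inference pipeline: look up fresh feature values online and predict."""
    fv = fs.get_feature_view("demo_view", version=1)
    rows = fv.get_feature_vectors(entry=keys, return_type="pandas")
    model = joblib.load("model.joblib")  # in practice, download it from the registry
    return model.predict(rows[["amount"]])
```

The point of the split is that each pipeline can run on its own schedule and infrastructure, communicating only through the feature store and the model registry.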

We will use this FTI architecture to walk through building an LLM system that uses RAG and function calling to access structured data and a model.
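
As a loose illustration of that flow, and not the tutorial's code, the sketch below combines retrieval-augmented context, a single function-calling tool that returns structured data, and a stubbed LLM client; get_city_features(), call_llm(), and the tool schema are all hypothetical placeholders.

```python
# Loose sketch of RAG + function calling over structured data; the tool, its schema,
# and the stubbed call_llm() are hypothetical placeholders, not the tutorial's code.
import json

def get_city_features(city: str) -> dict:
    """Hypothetical tool: fetch structured features for a city, e.g. from a
    Hopsworks feature view via get_feature_vector(entry={"city": city})."""
    return {"city": city, "pm2_5": 12.3}

TOOL_SCHEMA = {
    "name": "get_city_features",
    "description": "Return current air-quality features for a city",
    "parameters": {"city": "string"},
}

def call_llm(prompt: str, tools: list | None = None) -> dict:
    """Stand-in for a real LLM client; a deployed model would decide here
    whether to answer directly or request a tool call."""
    if tools:  # pretend the model always requests the tool when one is offered
        return {"tool_call": {"name": "get_city_features",
                              "arguments": json.dumps({"city": "New York"})}}
    return {"text": "Stubbed answer grounded in the context above."}

def answer(question: str, retrieved_docs: list[str]) -> str:
    # 1. RAG: put the retrieved documents into the prompt as context.
    prompt = "Context:\n" + "\n".join(retrieved_docs) + f"\n\nQuestion: {question}"
    # 2. Function calling: let the model request the structured data it needs.
    reply = call_llm(prompt, tools=[TOOL_SCHEMA])
    if "tool_call" in reply:
        args = json.loads(reply["tool_call"]["arguments"])
        result = get_city_features(**args)
        # 3. Feed the tool result back so the final answer is grounded in it.
        reply = call_llm(f"{prompt}\n\nTool result: {json.dumps(result)}")
    return reply["text"]

print(answer("How is the air in New York today?", ["PM2.5 is measured in µg/m³."]))
```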

Register here


