As GenAI and LLMs become an integral part of many businesses' products and services, Hopsworks aims to support even more LLM/GenAI use cases. To do so, we build custom GenAI solutions that leverage fine-tuning, RAG, and function calling for LLMs.
With our GenAI solutions we want to help organizations create custom LLM-powered products and services using fine-tuning and RAG by enabling:
- Personalized, Real-Time LLMs on Cloud or On-Premises
Fine-tune on your own organization's data and give your company's LLMs real-time context about your users and their requests.
- From Foundation Models to AI-Enabled Apps
Build your organization's bespoke, personalized, real-time-enabled LLM applications.
LLM Makerspace 🪐
🛸 Build Your Own pdf.ai: Using both RAG and Fine-Tuning
Build an AI system that searches your private PDF files and cites the page and paragraph where it finds each answer.
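To make the idea concrete, here is a minimal retrieval sketch that keeps page and paragraph references alongside each chunk, so answers can cite their source. It is not the pdf.ai recipe from the session: the model name, naive paragraph split, and top-k search are illustrative assumptions, and the retrieved chunks would then be passed as context to a (fine-tuned) LLM.

```python
# Minimal sketch: retrieval with page/paragraph citations over a local PDF.
# Assumes pypdf, sentence-transformers, and numpy are installed.
import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

def index_pdf(path: str):
    """Split each page into paragraphs (naive split) and embed them with their location."""
    chunks = []
    for page_no, page in enumerate(PdfReader(path).pages, start=1):
        paragraphs = [p.strip() for p in (page.extract_text() or "").split("\n\n") if p.strip()]
        for para_no, text in enumerate(paragraphs, start=1):
            chunks.append({"page": page_no, "paragraph": para_no, "text": text})
    vectors = model.encode([c["text"] for c in chunks], normalize_embeddings=True)
    return chunks, np.asarray(vectors)

def search(query: str, chunks, vectors, k: int = 3):
    """Return the top-k paragraphs with their page/paragraph references."""
    q = model.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(vectors @ q)[::-1][:k]
    return [chunks[i] for i in top]

# chunks, vectors = index_pdf("report.pdf")
# for hit in search("What was Q4 revenue?", chunks, vectors):
#     print(f'p.{hit["page"]} ¶{hit["paragraph"]}: {hit["text"][:80]}')
```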
💫 Unlocking the Power of Function Calling with LLMs
We looked into extending RAG for LLMs with the ability to query structured data and call APIs using function calling.
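A rough sketch of the pattern, assuming an OpenAI-style tool schema: you describe the functions the model may call, and a small dispatcher routes the model's structured call to real Python code. The `get_account_balance` function and its parameters are hypothetical examples, not part of the session.

```python
# Minimal sketch of function calling: a tool schema plus a dispatcher that
# routes the model's structured call to real Python code.
import json

def get_account_balance(customer_id: str) -> dict:
    # Hypothetical structured-data lookup (e.g. a feature store or SQL query).
    return {"customer_id": customer_id, "balance": 1234.56, "currency": "EUR"}

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_account_balance",
        "description": "Look up the current account balance for a customer.",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}]

REGISTRY = {"get_account_balance": get_account_balance}

def dispatch(tool_call: dict) -> str:
    """Execute the function the LLM asked for and return a JSON result
    that can be fed back to the model as a tool message."""
    fn = REGISTRY[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return json.dumps(fn(**args))

# Pass TOOLS to your chat-completion request; when the model replies with a call
# like {"name": "get_account_balance", "arguments": '{"customer_id": "42"}'},
# feed dispatch(...)'s output back so the model can compose the final answer.
```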
Upcoming Session: Building a Cheque Fraud Detection & Explanation AI System using a Fine-Tuned LLM
We will build an LLM that not only detects whether a scanned cheque image is fraudulent, but also writes an explanation of why the cheque was flagged as fraud. Make sure to tune in on May 2nd!
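As a taste of what fine-tuning for this kind of task can look like, here is an illustrative sketch of a supervised training record: cheque fields extracted upstream (e.g. by OCR) go into the prompt, and the target is a verdict plus a written explanation. The field names and JSONL layout are assumptions, not the session's actual format.

```python
# Sketch: build one JSONL training record for fine-tuning a fraud-explanation LLM.
import json

def make_training_record(cheque_fields: dict, is_fraud: bool, explanation: str) -> str:
    prompt = (
        "Assess the following cheque for fraud and explain your reasoning.\n"
        f"Payee: {cheque_fields['payee']}\n"
        f"Amount (words): {cheque_fields['amount_words']}\n"
        f"Amount (digits): {cheque_fields['amount_digits']}\n"
        f"Date: {cheque_fields['date']}\n"
        f"Signature present: {cheque_fields['signature_present']}"
    )
    completion = json.dumps({"fraud": is_fraud, "explanation": explanation})
    return json.dumps({"prompt": prompt, "completion": completion})

record = make_training_record(
    {"payee": "Acme Ltd", "amount_words": "one hundred dollars",
     "amount_digits": "1000.00", "date": "2024-04-30", "signature_present": True},
    is_fraud=True,
    explanation="The written amount does not match the numeric amount.",
)
print(record)  # one JSONL line ready for a supervised fine-tuning run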
Explore Hopsworks for GenAI ✨
GenAI with Vector Similarity Search
Our latest product update introduced support for vector similarity search.
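For readers new to the idea, here is a small illustrative example of vector similarity search using FAISS. This is not the Hopsworks API, just the underlying concept: store item embeddings in an index and retrieve the nearest neighbours of a query embedding; the dimensionality and random vectors are placeholders.

```python
# Sketch of vector similarity search with FAISS (cosine similarity via
# normalized inner product). Assumes faiss and numpy are installed.
import faiss
import numpy as np

dim = 384                                   # embedding dimensionality (assumed)
embeddings = np.random.rand(1000, dim).astype("float32")
faiss.normalize_L2(embeddings)              # normalize so inner product == cosine

index = faiss.IndexFlatIP(dim)              # exact inner-product index
index.add(embeddings)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)        # top-5 most similar items
print(ids[0], scores[0])
```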
On-Premises LLMs with RAG and Fine-Tuning
Explore the 3 programs you need to write to productionize a fine-tuned RAG LLM.
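As a rough orientation, the three programs follow the feature/training/inference pipeline split that Hopsworks advocates for ML systems. The skeleton below is a placeholder sketch of that split applied to a fine-tuned RAG LLM; the exact steps in the linked article may differ.

```python
# Skeleton of the three-pipeline split (feature, training, inference) applied
# to a fine-tuned RAG LLM. Bodies are placeholders, not the article's code.

def feature_pipeline():
    """Chunk and embed source documents, then write them to a vector index."""
    ...

def training_pipeline():
    """Fine-tune the base LLM on instruction/response pairs and register the model."""
    ...

def inference_pipeline(query: str) -> str:
    """Embed the query, retrieve relevant chunks, and prompt the fine-tuned LLM."""
    ...

if __name__ == "__main__":
    feature_pipeline()
    training_pipeline()
    print(inference_pipeline("What does our refund policy say?"))
```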