How’s your autumn starting? We’ve been quite busy over the summer! We just released multiple features to help all Valohai users manage compute resources and optimize machine learning pipelines.
Going forward, we have a lot more to share with you! We’ve been working on multiple exciting projects, ranging from AI governance to a new key feature in Valohai, success stories, and industry benchmarks.
If you already know that you don’t want to miss out on these updates, you can subscribe to our newsletter.
But if you’re unsure, here’s more about these updates to win you over:
AI governance
One of our upcoming stories is all about AI governance and forthcoming AI regulations. We’ll keep it straight to the point to help organizations with data science and machine learning teams better navigate this regulatory landscape.
You can expect a breakdown of the EU AI Act and key developments “across the pond” in North America, along with a list of concrete tips for complying with these regulations.
Valohai’s Model Hub
One of our most anticipated moments this autumn is the release of a major addition to the Valohai MLOps platform. In short, this addition will serve as a centralized solution that is purpose-built to track and manage ML models throughout their entire lifecycle.
Valohai’s Model Hub will expand far beyond the capabilities of the Model Registry. It’ll be packed with advanced new features and an intuitive user interface. Its primary goals are to improve collaboration, speed up model iteration, and help with regulatory compliance.
Success stories
We’re looking forward to sharing a couple of new success stories from the forefront of machine learning in production.
In particular, one of these stories is from a product company that operates in a highly regulated space with sensitive data. Despite these challenges, its team succeeded in increasing experimentation and deployment speed while adhering to security standards and decreasing reliance on DevOps for access to data and compute.
As a nice perk, we’ll show you exactly how they built one of their production pipelines to schedule retraining and deployment of customer models on a weekly basis.
One more thing
Ok. We promised only three stories. But we can’t miss this opportunity to hint at one of our larger benchmarking projects.
We’re extremely excited about it, but it’ll be a while before you get to read it, as we can’t publish until the newest hardware “hits the shelves”.
We’ll reveal the topic later this autumn. But for now, we’ll leave you with two hints:
- GPU accelerators
- Fine-tuning LLMs
If you don’t want to miss out on any of these updates, here’s a friendly nudge to subscribe to our newsletter: