Blog

September 11, 2024 Alexander Rozhkov
3 things to look forward to in MLOps (or maybe 4)

Don’t miss out on Valohai’s upcoming updates on AI governance and the EU AI Act, examples of machine learning pipelines in production, new features, and GPU benchmarks. Subscribe to our newsletter.

September 04, 2024 Tarek Oraby
Stop waiting for your training data to download (again)

Valohai’s new experimental feature selects compute instances based on where the data has been cached already, helping you reduce data transfer overhead and increase model iteration speed.

August 28, 2024 Toni Perämäki
Solve the GPU shortage and control cloud costs: Valohai’s partnership with OVHcloud

Our new partnership enables you to seamlessly access OVHcloud’s scalable and secure environments from the Valohai MLOps platform without changing your preferred ML workflows.

August 20, 2024 Tarek Oraby
Save time and avoid recomputation with Pipeline Step Caching

Valohai’s latest feature helps you avoid unnecessary costs by reusing the results of matching pipeline steps from previous executions. This feature is already available to all Valohai users!

July 10, 2024 Tarek Oraby
New Features for Optimizing MLOps Efficiency and Resource Utilization

We’ve built significant enhancements into our platform to help data science teams accelerate time-to-market and optimize operational costs. These enhancements tackle model iteration speed, efficient resource utilization, and dataset management.

July 01, 2024 Alexander Rozhkov
Stop paying for the compute resources that you’re not using anymore

Our new feature monitors CPU, GPU, and memory usage and alerts you when your machines operate below 50% capacity. This allows you to optimize resource usage and reduce costs.

May 22, 2024 Tarek Oraby
Track and Manage the Lifecycle of ML Models with Valohai’s Model Registry

Valohai’s Model Registry is a centralized hub for managing model lifecycle from development to production. Think of it as a single source of truth for model versions and lineage.

May 15, 2024 Tarek Oraby
Introducing Kubernetes Support for Streamlined Machine Learning Workflows

We designed our new Kubernetes support so that data science teams can effortlessly manage and scale their workflows on top of Kubernetes and enhance their overall machine learning operations.

April 02, 2024 Tarek Oraby
Introducing Slurm Support: Scale Your ML Workflows with Ease

We're excited to announce that Valohai now supports Slurm, an open-source workload manager used in HPC environments. Valohai users can now scale their ML workflows with Slurm-based clusters with unprecedented ease and efficiency.

March 01, 2024 Alexander Rozhkov
Taking GenAI and LLMs from POCs to Production

LLMs and other generative models are making ripples everywhere, from established enterprises to innovative startups and beyond. But what did successful adoption look like in 2023? And what can we expect in 2024?

November 21, 2023 Henrik Skogström
Easiest way to fine-tune Mistral 7B

We’ve built a template for fine-tuning Mistral 7B on Valohai. Mistral 7B offers an excellent combination of size and performance, and by fine-tuning it with a technique called LoRA, we can be very cost-efficient.

November 06, 2023 Henrik Skogström
Dive into Valohai with our new serverless trial

We’re thrilled to announce our new free trial for all aspiring ML pioneers! With the new free trial, we’ve made it easy to kickstart your journey with our handpicked templates.

August 23, 2023 Henrik Skogström
Why closed-source LLMs are not suited for production

ChatGPT continues to capture public attention, and many are looking to incorporate similar functionality into their products. But is this a safe route for production-grade applications?

May 29, 2023 Henrik Skogström
Enjoy Hugging Face's model library with Valohai's templates

We've built a set of Hugging Face templates that make it super simple to use the latest and greatest in open-source ML. These templates are available through the Valohai Ecosystem.

March 27, 2023 Viktoriya Kuzina
How to Ensure Traceability and Eliminate Data Inconsistency

The key takeaways from a presentation by Andres Hernandez, Principal Data Scientist at KONUX, about how their team streamlines operations using the Valohai datasets feature.
