Tarek Oraby
Head of Product at Valohai.
November 06, 2024 Tarek Oraby
Valohai's Audit Log: Traceability built for AI governance

Introducing an out-of-the-box solution that gives all Valohai users automatic, immutable, and secure audit logs, ensuring traceability for navigating compliance requirements, debugging issues, and improving accountability within teams.

September 18, 2024 Tarek Oraby
Simplify and automate the machine learning model lifecycle

We’ve built the Model Hub to help you streamline and automate model lifecycle management. Leverage Valohai for lineage tracking, performance comparison, workflow automation, access control, regulatory compliance, and more.

September 04, 2024 Tarek Oraby
Stop waiting for your training data to download (again)

Valohai’s new experimental feature selects compute instances based on where the data has been cached already, helping you reduce data transfer overhead and increase model iteration speed.

August 20, 2024 Tarek Oraby
Save time and avoid recomputation with Pipeline Step Caching

Valohai’s latest feature helps you avoid unnecessary costs by reusing the results of matching pipeline steps from previous executions. This feature is already available to all Valohai users!

July 10, 2024 Tarek Oraby
New Features for Optimizing MLOps Efficiency and Resource Utilization

We’ve built significant enhancements into our platform to help data science teams accelerate time-to-market and optimize operational costs. These enhancements target model iteration speed, efficient resource utilization, and dataset management.

May 22, 2024 Tarek Oraby
Track and Manage the Lifecycle of ML Models with Valohai’s Model Registry

Valohai’s Model Registry is a centralized hub for managing the model lifecycle from development to production. Think of it as a single source of truth for model versions and lineage.

May 15, 2024 Tarek Oraby
Introducing Kubernetes Support for Streamlined Machine Learning Workflows

We designed our new Kubernetes support so that data science teams can effortlessly manage and scale their workflows on top of Kubernetes and enhance their overall machine learning operations.

April 02, 2024 Tarek Oraby
Introducing Slurm Support: Scale Your ML Workflows with Ease

We're excited to announce that Valohai now supports Slurm, an open-source workload manager used in HPC environments. Valohai users can now scale their ML workflows with Slurm-based clusters with unprecedented ease and efficiency.
