My time at Krai was incredibly rewarding. Here’s a breakdown of what I worked on and learned:

Why benchmarking matters

Benchmarking is essential for both research and industry. For researchers, it's how you demonstrate that a new idea delivers a measurable improvement. For industry, better performance translates directly into efficiency gains and cost savings.

At Krai, we used MLPerf to help chip vendors and AI hardware companies showcase their systems. What made our work unique was the focus on end-to-end performance, bringing together both software and hardware for true co-design.

Many benchmarks focus on software or hardware in isolation, but real gains come from treating the system as a whole. As we like to say: if MLPerf is the Olympics of ML systems, we're the Olympic coaches.

ML systems are judged mainly on two things: accuracy and speed. We measured both inference performance (how quickly a trained model makes predictions, in terms of latency and throughput) and training performance (how quickly a model can be trained to reach a target accuracy).
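To make the inference side concrete, here's a minimal sketch of a benchmarking loop in Python. It's purely illustrative, not our actual harness: predict is a placeholder workload, and the warm-up count and percentile choice are common conventions rather than anything Krai-specific.

    import statistics
    import time

    import numpy as np

    # Hypothetical stand-in for a real model's forward pass; a real
    # benchmark would call into an actual inference engine here.
    WEIGHTS = np.random.rand(512, 512).astype(np.float32)

    def predict(batch: np.ndarray) -> np.ndarray:
        return batch @ WEIGHTS

    def benchmark_inference(batch_size=32, warmup=10, runs=100):
        batch = np.random.rand(batch_size, 512).astype(np.float32)
        # Warm-up iterations let caches and allocators settle before timing.
        for _ in range(warmup):
            predict(batch)
        latencies = []
        for _ in range(runs):
            start = time.perf_counter()
            predict(batch)
            latencies.append(time.perf_counter() - start)
        mean_s = statistics.mean(latencies)
        p99_s = sorted(latencies)[int(0.99 * len(latencies)) - 1]
        print(f"mean latency: {mean_s * 1e3:.3f} ms")
        print(f"p99 latency:  {p99_s * 1e3:.3f} ms")
        print(f"throughput:   {batch_size / mean_s:.1f} samples/s")

    if __name__ == "__main__":
        benchmark_inference()

Reporting a tail percentile alongside the mean matters because production serving is usually judged on worst-case latency, not just the average.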

How we did it

Our team built and maintained an open-source tool called AXS, a modular framework for ML benchmarking. It allows easy swapping of models, hardware, and software, making it a great platform for testing different system setups and exploring software-hardware co-design.
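As a rough illustration of what "modular" means here (a hypothetical sketch, not AXS's actual API), the core idea is a registry that lets models, backends, and devices be combined by name:

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class BenchmarkConfig:
        model: str    # e.g. "resnet50"
        backend: str  # e.g. "onnxruntime", "tensorrt"
        device: str   # e.g. "cpu", "gpu:0"

    # Runners keyed by backend name: supporting a new stack is one
    # registration, not a change to the harness itself.
    RUNNERS: Dict[str, Callable[[BenchmarkConfig], float]] = {}

    def register(backend: str):
        def wrap(fn: Callable[[BenchmarkConfig], float]):
            RUNNERS[backend] = fn
            return fn
        return wrap

    @register("dummy")
    def run_dummy(cfg: BenchmarkConfig) -> float:
        # Placeholder: a real runner would load cfg.model onto
        # cfg.device and return a measured metric (e.g. samples/s).
        return 0.0

    def run(cfg: BenchmarkConfig) -> float:
        return RUNNERS[cfg.backend](cfg)

    print(run(BenchmarkConfig(model="resnet50", backend="dummy", device="cpu")))

This shape of design is what makes co-design experiments cheap: hold the model fixed, sweep backends and devices, and compare.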

Working in a startup

Working closely with others on a shared codebase meant writing clean, maintainable code and good documentation. I set up and upgraded the continuous integration (CI) system for AXS, and helped improve our testing and development practices.

Managing projects

At one point, I was managing four projects at the same time. I used JIRA and other tools to keep things on track, while mentoring junior engineers and onboarding new team members. I also created tailored starter projects to match each person’s background and help them settle in quickly.

I’m truly grateful for everything I learned and the support I received at Krai. It’s an inspiring team full of exceptionally talented people. Follow their latest work here!