
Malizen

A new way to quickly evaluate the performance of new ML models?

Updated: Aug 22, 2023

When developing new Machine Learning algorithms in cybersecurity, it is often difficult to find data to train models. There is also a risk that the learning process itself is biased and does not perform correctly on real data.


This research paper presents a new interface that allows users to quickly evaluate the performance of new ML models while maintaining confidence that the data underlying the evaluation is sound. It also lets users immediately observe the root cause of an observed problem. This article introduces the tool and some real-world examples that have proven useful for model evaluation.


Being able to evaluate our Machine Learning models without worrying about the underlying datasets: we love it!



