When developing new Machine Learning algorithms in cybersecurity, it is often difficult to find data to train models. There is also a risk that the learning process itself is biased and that the resulting model does not perform correctly on real data.
This research paper presents a new interface that makes it possible to quickly evaluate the performance of new ML models while maintaining confidence that the data underlying the evaluation is sound. It also allows users to immediately identify the root cause of an observed problem. This article introduces the tool and some real-world examples in which it has proven useful for model evaluation.
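As a purely illustrative sketch (not the interface described in the paper), the Python snippet below shows the general idea of pairing model evaluation with lightweight dataset sanity checks, so that a surprising score can be traced back to a data issue rather than blamed on the model. The `basic_data_checks` helper and the toy data are hypothetical and only stand in for a real cybersecurity dataset.

```python
# Illustrative sketch: run cheap sanity checks on the evaluation data
# before scoring a model, so a suspicious metric can be investigated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

def basic_data_checks(X, y):
    """Hypothetical checks: missing values, constant features, label balance."""
    issues = []
    if np.isnan(X).any():
        issues.append("missing values in features")
    if (X.std(axis=0) == 0).any():
        issues.append("constant (uninformative) feature columns")
    counts = np.bincount(y)
    if counts.min() / counts.max() < 0.05:
        issues.append("severe class imbalance")
    return issues

# Toy data standing in for a real dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

issues = basic_data_checks(X, y)
if issues:
    print("Data issues to investigate before trusting the evaluation:", issues)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The design intent is simply that data validation and model scoring happen in the same workflow, so an unexpected result points directly to its likely root cause.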
Being able to evaluate our Machine Learning models without second-guessing the underlying datasets is something we love.