Cristina Cornelio, Michele Donini, et al.
AAMAS 2022
In many machine learning scenarios, searching for the classifier that best fits a particular dataset can be very costly in terms of time and resources, and it can require deep knowledge of the specific domain. We propose a new technique that does not require profound domain expertise and avoids the commonly used strategy of hyper-parameter tuning and model selection. Our method is a novel ensemble technique that uses voting rules over a set of randomly generated classifiers. Given a new input sample, we interpret the output of each classifier as a ranking over the set of possible classes. We then aggregate these rankings using a voting rule, which treats them as preferences over the classes. We show that our approach obtains results that compare well with the state of the art, providing both a theoretical analysis and an empirical evaluation on several datasets.
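The abstract describes the aggregation step only at a high level, so a minimal sketch may help. The snippet below trains a handful of randomly configured scikit-learn classifiers, reads each one's predicted class probabilities as a ranking over the classes, and combines the rankings with Borda count. The classifier families, hyper-parameter ranges, ensemble size, and the choice of Borda count as the voting rule are illustrative assumptions, not the paper's exact setup.

```python
# Sketch (assumptions noted above): random ensemble + rank aggregation by Borda count.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
n_classes = len(np.unique(y))

# Randomly generated ensemble: each member gets a random family and hyper-parameters.
ensemble = []
for _ in range(20):
    if rng.random() < 0.5:
        clf = DecisionTreeClassifier(max_depth=int(rng.integers(1, 10)))
    else:
        clf = KNeighborsClassifier(n_neighbors=int(rng.integers(1, 15)))
    ensemble.append(clf.fit(X_tr, y_tr))

def borda_predict(sample):
    """Each classifier ranks the classes by predicted probability;
    Borda count aggregates the rankings into a single winning class."""
    scores = np.zeros(n_classes)
    for clf in ensemble:
        proba = clf.predict_proba(sample.reshape(1, -1))[0]
        # Classes ordered from least to most probable; the i-th gets i Borda points.
        for points, cls in enumerate(clf.classes_[np.argsort(proba)]):
            scores[cls] += points
    return int(np.argmax(scores))

preds = np.array([borda_predict(x) for x in X_te])
print("accuracy:", (preds == y_te).mean())
```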
Umberto Grandi, Andrea Loreggia, et al.
ISAIM 2014
Andrea Loreggia, Nicholas Mattei, et al.
AAAI-SS 2018
Vishal Pallagani, Bharath Muppasani, et al.
IJCAI 2023