Vishal Pallagani, Bharath Chandra Muppasani, et al.
ICAPS 2024
In many machine learning scenarios, searching for the classifier that best fits a particular dataset can be very costly in terms of time and resources, and it may require deep knowledge of the specific domain. We propose a new technique that does not require profound domain expertise and avoids the commonly used strategy of hyper-parameter tuning and model selection. Our method is an ensemble technique that applies voting rules over a set of randomly generated classifiers. Given a new input sample, we interpret the output of each classifier as a ranking over the set of possible classes. We then aggregate these rankings with a voting rule, treating them as preferences over the classes. We show that our approach achieves results competitive with the state of the art, providing both a theoretical analysis and an empirical evaluation on several datasets.
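A minimal illustrative sketch of the idea described in the abstract, not the paper's implementation: an ensemble of randomly configured classifiers whose per-sample class rankings are aggregated with a Borda-count voting rule. The use of scikit-learn decision trees, the hyper-parameter ranges, and helper names such as `borda_vote` are assumptions made for this example only.

```python
# Sketch: random classifiers + voting-rule aggregation of class rankings.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def random_classifier():
    # Sample hyper-parameters at random instead of tuning them (assumed ranges).
    return DecisionTreeClassifier(
        max_depth=int(rng.integers(1, 10)),
        min_samples_leaf=int(rng.integers(1, 5)),
        random_state=int(rng.integers(0, 10_000)),
    )

def borda_vote(rankings, n_classes):
    # Each ranking lists classes from most to least preferred;
    # the class at position i earns (n_classes - 1 - i) Borda points.
    scores = np.zeros(n_classes)
    for ranking in rankings:
        for pos, cls in enumerate(ranking):
            scores[cls] += n_classes - 1 - pos
    return int(np.argmax(scores))

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
n_classes = len(np.unique(y))

# Train an ensemble of randomly generated classifiers (no model selection).
ensemble = [random_classifier().fit(X_tr, y_tr) for _ in range(25)]

preds = []
for x in X_te:
    # Interpret each classifier's class probabilities as a preference ranking.
    rankings = [np.argsort(-clf.predict_proba(x.reshape(1, -1))[0])
                for clf in ensemble]
    preds.append(borda_vote(rankings, n_classes))

print("ensemble accuracy:", np.mean(np.array(preds) == y_te))
```

Borda count is just one possible voting rule here; plurality over top-1 predictions or other preference-aggregation rules could be substituted in `borda_vote` without changing the overall scheme.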
Cristina Cornelio, Michele Donini, et al.
AAMAS 2020
M. Bergamaschi Ganapini, M. Campbell, et al.
AAAI-FS 2022
Michele Donini, Andrea Loreggia, et al.
RiCeRcA 2018