Abstract—Ensembles of classifiers have proven to be among the best methods for creating highly accurate prediction models. In this paper we combine the random covering method, which introduces additional diversity when inducing rules with the covering algorithm, with the random subspace selection method, which has been used successfully by, for example, the random forest algorithm. We compare three covering-based methods with the random forest algorithm: the first uses random subspace selection and random covering; the second uses bagging and random subspace selection; and the third uses bagging, random subspace selection, and random covering. The results show that all three covering algorithms perform better than the random forest algorithm, and that the covering algorithm using random subspace selection and random covering performs best of all. The differences are not significant according to adjusted p-values, but they are for the unadjusted p-values, indicating that the novel method introduced in this paper warrants further attention.
Index Terms—Ensemble of classifiers, random covering, random subspace selection, diversity.
Tony Lindgren is with the Department of Computer and Systems Sciences at Stockholm University, Nodhuset, Borgarfjordsgatan 12, S-164 07 Kista, Sweden (e-mail: firstname.lastname@example.org).
Cite: Tony Lindgren, "Random Rule Sets – Combining Random Covering with the Random Subspace Method," International Journal of Machine Learning and Computing vol. 8, no. 1, pp. 8-13, 2018.
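The combination the abstract describes can be illustrated with a minimal sketch: each base rule set is induced on a random subspace of the features, and the covering loop chooses a condition at random among the top-scoring candidates (random covering) instead of always taking the single best. All names, the scoring function, and the parameters below are illustrative assumptions, not the paper's actual implementation.

```python
import random

def induce_rule_set(examples, labels, features, top_k=2, rng=random):
    """Separate-and-conquer covering restricted to a feature subspace.

    Rules are (feature_index, value) tests on categorical data; the
    +pos - neg score is a stand-in for whatever rule-quality measure
    the real covering algorithm uses.
    """
    rules = []
    uncovered = [i for i, y in enumerate(labels) if y == 1]
    while uncovered:
        # Score each candidate condition (feature == value) by the
        # uncovered positives it covers minus the negatives it covers.
        candidates = []
        for f in features:
            for v in {examples[i][f] for i in uncovered}:
                pos = sum(1 for i in uncovered if examples[i][f] == v)
                neg = sum(1 for i, y in enumerate(labels)
                          if y == 0 and examples[i][f] == v)
                candidates.append((pos - neg, f, v))
        candidates.sort(reverse=True)
        # Random covering: pick among the top_k best conditions rather
        # than greedily committing to the single best one.
        _, f, v = rng.choice(candidates[:top_k])
        rules.append((f, v))
        uncovered = [i for i in uncovered if examples[i][f] != v]
    return rules

def random_rule_ensemble(examples, labels, n_features,
                         n_rule_sets=5, subspace_size=2, seed=0):
    """Random subspace selection: each rule set sees a random
    subset of the features, as in the random forest algorithm."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_rule_sets):
        subspace = rng.sample(range(n_features), subspace_size)
        ensemble.append(induce_rule_set(examples, labels, subspace, rng=rng))
    return ensemble

def predict(ensemble, example):
    """Majority vote: a rule set votes positive if any rule fires."""
    votes = sum(1 for rules in ensemble
                if any(example[f] == v for f, v in rules))
    return 1 if votes > len(ensemble) / 2 else 0
```

Because each covering run continues until every positive training example is covered, every rule set in the ensemble fires on every training positive; diversity comes from the randomized condition choice and the differing feature subspaces, mirroring the two sources of randomness the abstract combines.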