Abstract—The goal of multi-label feature selection is to find a feature subset that is dependent on multiple labels while keeping the number of selected features as small as possible. To select a compact feature subset, a feature selection approach should consider the dependency among features during the multi-label feature selection process. However, multi-label feature selection methods that consider feature dependency are time-consuming because evaluating the dependency among features incurs additional computational cost. In this paper, we propose a fast multi-label feature selection method that considers feature dependency. The proposed method circumvents the prohibitive computations originating from the calculation of feature dependency by using a low-rank approximation. Empirical results on several multi-label datasets demonstrate that the proposed method outperforms recent multi-label feature selection methods in terms of execution time.
Index Terms—feature dependency, multi-label feature selection, mutual information, quadratic programming.
The authors are with the Department of Computer Science and Engineering, Chung-Ang University, Seoul, Republic of Korea (e-mail: email@example.com, firstname.lastname@example.org, email@example.com).
Cite: Hyunki Lim, Jaesung Lee, and Dae-Won Kim, "Low-Rank Approximation for Multi-label Feature Selection," International Journal of Machine Learning and Computing, vol. 6, no. 1, pp. 42-46, 2016.
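The speed-up idea summarized in the abstract can be sketched as follows. This is an illustrative example, not the paper's actual formulation: the names `Q`, `B`, `d`, and `k` are assumptions. Suppose `Q` is a symmetric d-by-d matrix of pairwise feature dependencies (e.g., mutual information between features). Evaluating the quadratic form x^T Q x inside a quadratic-programming objective costs O(d^2) per evaluation; a rank-k factorization Q ≈ B B^T reduces this to O(dk).

```python
import numpy as np

# Toy symmetric dependency matrix of rank k (stands in for a pairwise
# mutual-information matrix among d features; purely illustrative).
rng = np.random.default_rng(0)
d, k = 200, 10
A = rng.standard_normal((d, k))
Q = A @ A.T

# Truncated eigendecomposition: keep the k largest eigenpairs.
w, V = np.linalg.eigh(Q)                           # eigenvalues ascending
B = V[:, -k:] * np.sqrt(np.maximum(w[-k:], 0.0))   # Q ≈ B @ B.T, B is d x k

# Compare the exact quadratic form with the low-rank evaluation.
x = rng.random(d)
exact = x @ Q @ x                  # O(d^2) per evaluation
fast = np.sum((B.T @ x) ** 2)      # O(dk) using the factor B
rel_err = abs(exact - fast) / exact
```

Because the toy `Q` has rank at most k, the rank-k factorization reproduces the quadratic form up to floating-point error; for a real dependency matrix the error would instead depend on how quickly its spectrum decays.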