Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time-consuming to obtain because they require substantial human labeling effort, whereas unlabeled data are usually easy to collect. Semisupervised learning addresses this problem by combining large quantities of unlabeled data with the labeled data to build better learning algorithms. In this paper, we formulate the semisupervised learning problem with the manifold regularization approach, establishing a regularization framework that balances a trade-off between loss and penalty. We investigate different implementations of the loss function and identify the methods with the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that fits the entire path of solutions for every value of the hyperparameter. After preprocessing, its computational complexity is quadratic only in the number of labeled examples rather than in the total number of labeled and unlabeled examples.
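For context, a minimal sketch of the standard manifold regularization objective of the kind the abstract describes; the choice of loss V, the RKHS norm penalty, and the graph Laplacian term follow the usual textbook formulation and are stated here as assumptions rather than the paper's exact notation:

\[
\min_{f \in \mathcal{H}_K} \; \frac{1}{l} \sum_{i=1}^{l} V\bigl(x_i, y_i, f(x_i)\bigr) \;+\; \gamma_A \lVert f \rVert_K^2 \;+\; \frac{\gamma_I}{(l+u)^2}\, \mathbf{f}^{\top} L\, \mathbf{f}
\]

Here the first term is the empirical loss on the l labeled examples, the second term penalizes complexity in the reproducing kernel Hilbert space, and the third term uses the graph Laplacian L built over all l + u labeled and unlabeled points. The hyperparameters \(\gamma_A\) and \(\gamma_I\) set the loss-versus-penalty trade-off, and tracing the solution as such a hyperparameter varies is what the regularization path algorithm in the abstract refers to.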