TY - GEN
T1 - Multi-view transfer learning with a large margin approach
AU - Zhang, Dan
AU - He, Jingrui
AU - Liu, Yan
AU - Si, Luo
AU - Lawrence, Richard D.
PY - 2011
Y1 - 2011
N2 - Transfer learning has been proposed to address the problem of scarcity of labeled data in the target domain by leveraging the data from the source domain. In many real-world applications, data is often represented from different perspectives, which correspond to multiple views. For example, a web page can be described by its contents and its associated links. However, most existing transfer learning methods fail to capture the multi-view nature, and might not be best suited for such applications. To better leverage both the labeled data from the source domain and the features from different views, this paper proposes a general framework: Multi-View Transfer Learning with a Large Margin Approach (MVTL-LM). On one hand, labeled data from the source domain is effectively utilized to construct a large margin classifier; on the other hand, the data from both domains is employed to impose consistencies among multiple views. As an instantiation of this framework, we propose an efficient optimization method, which is guaranteed to converge to ε precision in O(1/ε) steps. Furthermore, we analyze its error bound, which improves over existing results of related methods. An extensive set of experiments is conducted to demonstrate the advantages of our proposed method over state-of-the-art techniques.
KW - Algorithms
KW - Experimentation
KW - Performance
UR - http://www.scopus.com/inward/record.url?scp=80052660541&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80052660541&partnerID=8YFLogxK
U2 - 10.1145/2020408.2020593
DO - 10.1145/2020408.2020593
M3 - Conference contribution
AN - SCOPUS:80052660541
SN - 9781450308137
T3 - Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
SP - 1208
EP - 1216
BT - Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD'11
PB - Association for Computing Machinery
T2 - 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2011
Y2 - 21 August 2011 through 24 August 2011
ER -