Liu Yang
  • Home
  • Research Interests
  • Publications
  • Projects
  • Photos

Biography

  • Associate Professor
  • yangliuyl@tju.edu.cn
  • Office: 55-B521, Peiyang Campus, Jinnan District, Tianjin, China, 300350
  • Tianjin University
  • School of Computer Science and Technology
  • Lab of Machine Learning and Data Mining
  • Liu Yang received her Ph.D. degree from the School of Computer and Information Technology, Beijing Jiaotong University, in 2016. She was a visiting scholar at Pai Chai University in Korea in 2007 and at Hong Kong Baptist University in 2013. Her main research interests are the theories and methods of transfer learning, multi-view learning, and multi-label learning in machine learning.



  • Awards

  • Best Student Paper Award

  • Liu Yang, Liping Jing, and Jian Yu. Common latent space identification for heterogeneous co-transfer clustering, in the 2015 International Conference on Intelligence Science and Big Data Engineering (ISCIDE), 2015, 395-406.



  • Projects

  • 1. Research on heterogeneous transfer learning for semi-paired image and text data, National Natural Science Foundation of China (Youth Program), 2018.01-2020.12.
  • 2. Theory and Technique on uncertainty modeling for large-scale machine learning with big data, National Natural Science Foundation of China (Key Program), 2018.01-2022.12.
  • 3. Research on NMF key problems for high-dimensional data mining, National Natural Science Foundation of China (General Program), 2014.01-2017.12.
  • 4. Research on key problems of image semantic understanding with text information, Ph.D. Programs Foundation of the Ministry of Education, 2013.01-2015.12.



  • Enrollment Requirements

  • Honest and responsible.
  • Studious and patient.
  • Good mathematics, programming, and English reading and writing skills.

  • Research Interests

  • Transfer learning
  •   In data mining applications, the lack of labeled data often prevents supervised learning algorithms from building accurate classification models. Transfer learning was developed to deal with this label-scarcity problem: it aims to improve learning performance by transferring knowledge from one or more source domains to a target domain. For example, image classification can be modeled as a target learning task with only a few labeled training images. Fortunately, it is often possible to collect texts related to the images, such as image annotations or surrounding documents, so that knowledge from the text data (the source domain) can be transferred to classify images in the target domain.

    (a) Homogeneous transfer learning
    (b) Heterogeneous transfer learning
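As a toy illustration of the heterogeneous setting (a minimal sketch on synthetic data, not any specific algorithm from the publications below), a text-image co-occurrence matrix can be factorized so that both feature spaces project into a common latent space, where labels from the text domain transfer to images:

```python
import numpy as np

# Synthetic text-image co-occurrence matrix: 6 text features (rows)
# by 4 image features (columns), with two "topic" blocks.
C = np.array([
    [5., 4., 0., 0.],
    [4., 5., 0., 0.],
    [5., 5., 1., 0.],
    [0., 0., 5., 4.],
    [0., 1., 4., 5.],
    [0., 0., 5., 5.],
])

# SVD of the co-occurrence matrix: U projects text features and
# V projects image features into a shared k-dimensional latent space.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = 2
text_proj = U[:, :k]        # (6, k): text space -> latent space
image_proj = Vt[:k, :].T    # (4, k): image space -> latent space

# Two labeled text documents (word counts over the 6 text features).
docs = np.array([
    [3., 2., 1., 0., 0., 0.],   # topic 0
    [0., 0., 0., 2., 3., 1.],   # topic 1
])
doc_labels = np.array([0, 1])
doc_latent = docs @ text_proj

# An unlabeled image (histogram over the 4 image features): transfer
# the text labels through the common latent space by similarity.
image = np.array([0., 1., 3., 2.])
img_latent = image @ image_proj
sims = doc_latent @ img_latent
predicted = int(doc_labels[np.argmax(sims)])
```

The SVD projection here is only a stand-in: the actual co-transfer methods learn the common latent space with dedicated objectives, but the pipeline — link the two feature spaces through co-occurrence data, then move labels across — is the same idea.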

  • Multi-view learning
  •   In real-world applications, examples are often described by several feature sets or “views”, either because of their innate properties or because they are collected from different sources. For instance, in multimedia content understanding, a multimedia segment can be described simultaneously by the video signal from a camera and the audio signal from a recording device. The different views usually contain complementary information, and multi-view learning can exploit this information to learn representations that are more expressive than those obtained by single-view methods.
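A minimal sketch of why complementary views help (synthetic data and plain nearest-centroid classifiers, not a specific multi-view algorithm): each view alone leaves some samples ambiguous, while the combined representation separates all of them.

```python
import numpy as np

# Four samples, two views each. In view 1 the second sample of each
# class sits at the ambiguous point (1, 1); view 2 is the mirror image,
# so each view alone is informative for only half of the samples.
X_v1 = np.array([[2., 0.], [1., 1.], [0., 2.], [1., 1.]])
X_v2 = np.array([[1., 1.], [2., 0.], [1., 1.], [0., 2.]])
y = np.array([0, 0, 1, 1])

def nearest_centroid_acc(X, y):
    """Resubstitution accuracy of a two-class nearest-centroid rule."""
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    d = np.stack([np.linalg.norm(X - c0, axis=1),
                  np.linalg.norm(X - c1, axis=1)])
    return float((np.argmin(d, axis=0) == y).mean())

acc_v1 = nearest_centroid_acc(X_v1, y)                  # view 1 alone
acc_v2 = nearest_centroid_acc(X_v2, y)                  # view 2 alone
acc_multi = nearest_centroid_acc(np.hstack([X_v1, X_v2]), y)  # both views
```

Concatenation is the crudest way to combine views; multi-view methods instead learn a joint representation, but the point the sketch makes — the views carry complementary information — is the motivation either way.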


  • Multi-label learning
  •   The explosive growth of online content such as images and videos has made building classification systems a very challenging problem. Such systems are often required to assign multiple labels to a single instance: an image may be annotated with many semantic tags in image classification, and one article may cover several topics in text mining. Most conventional classification techniques, which assume that an object belongs to exactly one class, fail in this scenario. Therefore, methods capable of multi-label learning are becoming increasingly important.
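The simplest baseline for this setting is binary relevance: train one independent classifier per label, so an instance can receive any subset of labels. The sketch below (toy word-count data and per-label nearest-centroid decisions, an illustration rather than a method from the publications below) shows the per-label decisions in action.

```python
import numpy as np

# Toy documents: word counts over 4 features, two topic labels.
X = np.array([
    [3., 2., 0., 0.],
    [2., 3., 0., 1.],
    [0., 0., 3., 2.],
    [1., 0., 2., 3.],
    [2., 2., 3., 3.],
])
Y = np.array([
    [1, 0],
    [1, 0],
    [0, 1],
    [0, 1],
    [1, 1],   # a single instance can carry several labels at once
])

def predict(x):
    """Binary relevance: one independent nearest-centroid test per label."""
    labels = []
    for j in range(Y.shape[1]):
        pos = X[Y[:, j] == 1].mean(axis=0)   # centroid of label-j positives
        neg = X[Y[:, j] == 0].mean(axis=0)   # centroid of label-j negatives
        labels.append(int(np.linalg.norm(x - pos) < np.linalg.norm(x - neg)))
    return labels

both = predict(np.array([2., 2., 2., 2.]))     # words from both topics
only_first = predict(np.array([3., 2., 1., 0.]))
only_second = predict(np.array([0., 1., 3., 2.]))
```

Binary relevance ignores label correlations; much of the multi-label literature (including the low-rank mapping work in the publications below) is about exploiting exactly those correlations.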


  • Deep learning
  •   Deep learning, also known as deep structured or hierarchical learning, is part of a broader family of machine learning methods based on learning data representations. Deep learning architectures have been applied in fields including computer vision, speech recognition, and natural language processing, where they have produced results comparable to, and in some cases superior to, those of human experts.
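As a minimal illustration of learned hierarchical representations (a self-contained sketch in plain NumPy, not tied to any work listed here), the network below learns the XOR function, which no single linear layer can represent: the hidden layer learns intermediate features that make the output layer's decision linear.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a linear model cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Two layers: the first learns a hidden representation, the second
# combines those learned features into the prediction.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden representation
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, out

losses = []
lr = 0.5
for _ in range(2000):
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation of the mean-squared-error gradient.
    d_out = 2.0 * (out - y) / len(X) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
```

Modern deep architectures stack many more such layers and use better optimizers, but the principle — compose simple learned transformations into increasingly abstract representations — is the same.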

  • Publications

  • 1. Liu Yang, Liping Jing, Jian Yu, and Michael K. Ng. Learning transferred weights from co-occurrence data for heterogeneous transfer learning, IEEE Transactions on Neural Networks and Learning Systems, 27(11): 2187-2200, 2016.
  • 2. Liu Yang, Liping Jing, and Michael K. Ng. Robust and non-negative collective matrix factorization for text-to-image transfer learning, IEEE Transactions on Image Processing, 24(12): 4701-4714, 2015.
  • 3. Liu Yang, Liping Jing, and Jian Yu. Common latent space identification for heterogeneous co-transfer clustering, Neurocomputing, 269: 29-39, 2017.
  • 4. Liu Yang, Liping Jing, Michael K. Ng, and Jian Yu. A discriminative and sparse topic model for image classification and annotation, Image and Vision Computing, 51: 22-35, 2016.
  • 5. Liu Yang, Jian Yu, Ye Liu, and Dechuan Zhan. Research progress on cognitive-oriented multi-source data learning theory and algorithm, Journal of Software, 28(11): 2971-2991, 2017.
  • 6. Liu Yang, Liping Jing, and Jian Yu. A heterogeneous transductive transfer learning algorithm, Journal of Software, 26(11): 2762-2780, 2015.
  • 7. Liu Yang, Jian Yu, and Liping Jing. An adaptive large margin nearest neighbor classification algorithm, Journal of Computer Research and Development, 50(11): 2269-2277, 2013.
  • 8. Liu Yang, Liping Jing, and Jian Yu. Common latent space identification for heterogeneous co-transfer clustering, in the 2015 International Conference on Intelligence Science and Big Data Engineering (ISCIDE), 2015, 395-406.
  • 9. Liu Yang, Liping Jing, and Jian Yu. Heterogeneous co-transfer spectral clustering, in the 9th International Conference on Rough Sets and Knowledge Technology (RSKT), 2014, 352-363.
  • 10. Liping Jing, Liu Yang, Jian Yu, and Michael K. Ng. Semi-supervised low-rank mapping learning for multi-label classification, in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, 1483-1491.
  • 11. Liping Jing, Peng Wang, and Liu Yang. Sparse probabilistic matrix factorization by Laplace distribution for collaborative filtering, in Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), 2015, 1771-1777.
  • 12. Liping Jing, Chenyang Shen, Liu Yang, Jian Yu, and Michael K. Ng. Multi-label classification by semi-supervised singular value decomposition, IEEE Transactions on Image Processing, 26(10): 4612-4625, 2017.
  • Photos