Wednesday, May 6, 2020
The Abstract Latent Factor (LF) Models - 1591 Words
Abstract—Latent factor (LF) models have proven to be accurate and efficient in extracting hidden knowledge from high-dimensional and sparse (HiDS) matrices. However, most LF models fail to fulfill the non-negativity constraints that reflect the non-negative nature of industrial data. Moreover, existing non-negative LF models for HiDS matrices suffer from slow convergence, leading to considerable time cost. An alternating direction method-based non-negative latent factor (ANLF) model decomposes a non-negative optimization process into small sub-tasks. It updates each LF non-negatively based on the latest state of those trained before, thereby achieving fast convergence while maintaining high prediction accuracy and scalability. This paper…

Originating from matrix factorization (MF) techniques, the principle of LF models is to build a low-rank approximation to a target matrix. They first map the entities corresponding to the columns and rows of this target matrix into the same low-dimensional LF space.

*This research is supported in part by the Pioneer Hundred Talents Program of Chinese Academy of Sciences, in part by the International Joint Project funded jointly by the Royal Society of the UK and the National Natural Science Foundation of China under Grant 61611130209, in part by the Young Scientist Foundation of Chongqing under Grant No. cstc2014kjrc-qnrc40005, and in part by the National Natural Science Foundation of China under Grant 61370150, Grant 61433014, and Grant 61402198. X. Luo is with the Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China, and also with the Shenzhen Engineering Laboratory for Mobile Internet Application Middleware Technology of Shenzhen University, Shenzhen 518060, China (e-mail: luoxin21@cigit.ac.cn). S. Li is with the Department of Computing, Hong Kong Polytechnic University, Hong Kong, HK 999077, China (e-mail: shuaili@polyu.edu.hk).
A series of loss functions is then built on the matrix's known entry set and the desired LFs [3-5, 8-13]. Since the known data occupy only a tiny fraction of the whole entry set, focusing on them rather than on the entire matrix leads to high
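The process described above — mapping rows and columns into a shared low-dimensional LF space, minimizing a loss defined only on the known entries, and keeping every factor non-negative — can be sketched in a toy form. This is not the paper's ANLF algorithm (which uses an alternating direction method); it is a minimal stochastic-gradient sketch with a non-negativity projection, and all names (`train_nlf`, the learning rate, epoch count) are illustrative assumptions:

```python
import numpy as np

def train_nlf(entries, m, n, f=2, lr=0.01, epochs=2000, seed=0):
    """Toy non-negative latent factor sketch: approximate a sparse m x n
    matrix given only its known entries as (row, col, value) triples.
    Gradient steps use the known-entry loss only, and each factor is
    projected back onto the non-negative orthant after every update."""
    rng = np.random.default_rng(seed)
    P = rng.random((m, f))  # row-entity latent factors
    Q = rng.random((n, f))  # column-entity latent factors
    for _ in range(epochs):
        for i, j, r in entries:
            e = r - P[i] @ Q[j]              # error on one known entry
            P[i] += lr * e * Q[j]            # gradient step, row factor
            Q[j] += lr * e * P[i]            # gradient step, column factor
            np.maximum(P[i], 0.0, out=P[i])  # non-negativity projection
            np.maximum(Q[j], 0.0, out=Q[j])
    return P, Q

# Usage: five known entries of a hypothetical 3 x 3 non-negative matrix.
entries = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 2.0)]
P, Q = train_nlf(entries, m=3, n=3)
for i, j, r in entries:
    print(i, j, r, float(P[i] @ Q[j]))  # reconstruction near each known value
```

Note how the loss touches only the listed triples, never the unknown cells — this is the "focus on known data rather than the entire matrix" property the text describes, and it is what keeps the cost per epoch proportional to the number of known entries rather than to m × n.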