Kronecker CP Decomposition with Fast Multiplication for Compressing RNNs

Bibliographic Details
Title: Kronecker CP Decomposition with Fast Multiplication for Compressing RNNs
Authors: Lei Deng, Tianyi Yan, Hengnu Chen, Guangshe Zhao, Guoqi Li, Bijiao Wu, Man Yao, Dingheng Wang
Publication Information: arXiv, 2020.
Publication Year: 2020
Subject Terms: FOS: Computer and information sciences, Artificial neural network, Computer Networks and Communications, Computer science, Computation, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, TIMIT, Computer Science Applications, symbols.namesake, Recurrent neural network, Artificial Intelligence, Tensor (intrinsic definition), Kronecker delta, symbols, Multiplication, Tensor, Algorithm, Software, Block (data storage)
Description: Recurrent neural networks (RNNs) are powerful for tasks over sequential data, such as natural language processing and video recognition. However, because modern RNNs, including long short-term memory (LSTM) and gated recurrent unit (GRU) networks, have complex topologies and high space/computation complexity, compressing them has become a popular and promising topic in recent years. Among the many compression methods, tensor decomposition, e.g., tensor train (TT), block term (BT), tensor ring (TR), and hierarchical Tucker (HT), is especially attractive since it can achieve very high compression ratios. Nevertheless, none of these tensor decomposition formats provides both space and computation efficiency. In this paper, we compress RNNs based on a novel Kronecker CANDECOMP/PARAFAC (KCP) decomposition, which is derived from Kronecker tensor (KT) decomposition, by proposing two fast algorithms for multiplication between the input and the tensor-decomposed weight. Experiments on the UCF11, Youtube Celebrities Face, and UCF50 datasets verify that the proposed KCP-RNNs achieve accuracy comparable to networks in other tensor-decomposed formats, and that a compression ratio of up to 278,219x can be obtained with low-rank KCP. More importantly, KCP-RNNs are efficient in both space and computation complexity compared with other tensor-decomposed networks under similar ranks. Moreover, we find that KCP has the best potential for parallel computing to accelerate the calculations in neural networks.
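To illustrate the general idea behind the abstract (not the paper's exact KCP format or its two fast algorithms, which differ in detail), here is a minimal NumPy sketch: a weight matrix is approximated as a low-rank sum of Kronecker products, W = Σ_r A_r ⊗ B_r, so only the small factors are stored, and the matrix-vector product is computed without ever materializing W by using the standard identity (A ⊗ B)·x = (A · X · Bᵀ).ravel() with X a reshape of x. All shapes and the rank R below are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes (assumptions): W is (m*p) x (n*q), Kronecker rank R.
rng = np.random.default_rng(0)
R, m, n, p, q = 2, 4, 3, 5, 6
A = rng.standard_normal((R, m, n))  # small factors A_r: m x n
B = rng.standard_normal((R, p, q))  # small factors B_r: p x q
x = rng.standard_normal(n * q)      # input vector

# Naive: materialize the full weight W = sum_r kron(A_r, B_r), then multiply.
W = sum(np.kron(A[r], B[r]) for r in range(R))
y_naive = W @ x

# Fast: never form W. With row-major reshape X = x.reshape(n, q),
# kron(A_r, B_r) @ x == (A_r @ X @ B_r.T).ravel(), which costs
# O(mnq + mqp) per term instead of O(mpnq).
X = x.reshape(n, q)
y_fast = sum((A[r] @ X @ B[r].T).ravel() for r in range(R))

assert np.allclose(y_naive, y_fast)

# Storage: R*(m*n + p*q) factor entries vs m*p*n*q full entries.
print(R * (m * n + p * q), "vs", m * p * n * q)
```

At larger, realistic layer sizes the same identity is what makes Kronecker-structured layers cheap in both storage and compute, which is the trade-off the abstract highlights against TT/BT/TR/HT formats.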
Comment: Accepted by TNNLS
DOI: 10.48550/arxiv.2008.09342
Access URL: https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d8057e61836f013eb34884c63fe6c6f4
Rights: OPEN
Accession Number: edsair.doi.dedup.....d8057e61836f013eb34884c63fe6c6f4
Database: OpenAIRE