
Parameters compressing in deep learning

Type:
Journal article
Authors:
He, Shiming;Li, Zhuozhou;Tang, Yangning;Liao, Zhuofan;Li, Feng;...
Corresponding author:
Lim, Se-Jung
Author affiliations:
[He, Shiming; Tang, Yangning; Liao, Zhuofan; Li, Zhuozhou; Li, Feng] Changsha Univ Sci & Technol, Sch Comp & Commun Engn, Hunan Prov Key Lab Intelligent Proc Big Data Tran, Changsha 410114, Peoples R China.
[Lim, Se-Jung] Honam Univ, Liberal Arts & Convergence Studies, Gwangju 62399, South Korea.
Corresponding institution:
[Lim, Se-Jung] Honam Univ, Liberal Arts & Convergence Studies, Gwangju 62399, South Korea.
Language:
English
Keywords:
Deep neural network;Matrix decomposition;Parameters compressing;Tensor decomposition
Journal:
Computers, Materials & Continua
ISSN:
1546-2218
Year:
2020
Volume:
62
Issue:
1
Pages:
321-336
Funding:
This work was supported by the National Natural Science Foundation of China (Nos. 61802030, 61572184), the Science and Technology Projects of Hunan Province (No. 2016JC2075), and the International Cooperative Project for "Double First-Class", CSUST (No. 2018IC24).
Institutional attribution:
This university is listed as the first institution.
Department:
School of Computer and Communication Engineering
Abstract:
With the popularity of deep learning tools in image decomposition and natural language processing, how to store the large number of parameters required by deep learning algorithms has become an urgent problem. These parameters are numerous and can run into the millions. A feasible direction is to compress the parameter matrix with sparse representation techniques, reducing both the parameter count and the storage pressure. Such methods include matrix decomposition and tensor decomposition…
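The matrix-decomposition idea mentioned in the abstract can be illustrated with a minimal sketch (not code from the paper): approximate a dense layer's weight matrix W (m × n) by a rank-r factorization W ≈ U V, so that storage drops from m·n to r·(m + n) values. All matrix sizes and the rank below are hypothetical choices for the demonstration.

```python
import numpy as np

# Hypothetical demo: build a weight matrix that is approximately rank-r,
# then recover a compact factorization via truncated SVD.
rng = np.random.default_rng(0)
m, n, r = 256, 512, 16
W = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low-rank signal
W += 0.01 * rng.standard_normal((m, n))                        # small noise

# Truncated SVD: keep the top-r singular triplets.
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]   # m x r factor (singular values folded in)
V = Vt[:r, :]               # r x n factor

original_params = m * n             # parameters stored without compression
compressed_params = r * (m + n)     # parameters stored as two factors
rel_error = np.linalg.norm(W - U @ V) / np.linalg.norm(W)
print(original_params, compressed_params, round(float(rel_error), 4))
```

With these sizes the factorized form stores 12,288 values instead of 131,072, roughly a 10× reduction, while the reconstruction error stays small because W is close to rank r. Tensor-decomposition methods extend the same principle to higher-order weight tensors.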
