
Defending Federated Learning System from Poisoning Attacks via Efficient Unlearning

Publication type:
Journal article
Authors:
Cai, Long;Gu, Ke;Lei, Jiaqi
Corresponding author:
Gu, K
Author affiliations:
[Gu, Ke; Lei, Jiaqi; Cai, Long] Changsha Univ Sci & Technol, Sch Comp & Commun Engn, Changsha 410114, Peoples R China.
Corresponding institution:
[Gu, K] Changsha Univ Sci & Technol, Sch Comp & Commun Engn, Changsha 410114, Peoples R China.
Language:
English
Keywords:
Federated learning;malicious client detection;model recovery;machine unlearning
Journal:
Computers, Materials & Continua
ISSN:
1546-2218
Year:
2025
Volume:
83
Issue:
1
Pages:
239-258
Funding:
National Social Science Foundation of China [20BTQ058]; Natural Science Foundation of Hunan Province [2023JJ50033]
Institutional role:
This university is the first and corresponding institution
Affiliated department:
School of Computer and Communication Engineering
Abstract:
Federated learning (FL) over large-scale neural networks has gained wide recognition for its effectiveness in distributed training. Nonetheless, the open system architecture inherent to federated learning raises concerns about its vulnerability to attacks. Poisoning attacks have become a major threat to federated learning because of their stealth and destructive power: by altering the local model during routine training, an attacker can easily contaminate the global model. Traditional detection and aggregation solutions miti...
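To illustrate the contamination mechanism the abstract describes, below is a minimal sketch (not the paper's method) of how a single malicious client can skew plain federated averaging (FedAvg). The update shapes, client counts, and scaling factor are hypothetical, chosen only to make the effect visible.

import numpy as np

def fedavg(updates, weights=None):
    # Plain federated averaging of client model updates.
    if weights is None:
        weights = np.ones(len(updates)) / len(updates)
    return np.average(np.stack(updates), axis=0, weights=weights)

rng = np.random.default_rng(0)

# Nine honest clients send small, similar updates.
honest = [rng.normal(0.0, 0.01, size=4) for _ in range(9)]

# One attacker shifts its local update to drag the global model
# off course (a simple model-poisoning pattern; illustrative only).
poisoned = rng.normal(0.0, 0.01, size=4) + 5.0

clean_global = fedavg(honest)
attacked_global = fedavg(honest + [poisoned])

print("clean aggregate:   ", np.round(clean_global, 3))
print("poisoned aggregate:", np.round(attacked_global, 3))

Running this shows the poisoned aggregate pulled far from the honest consensus by one client out of ten, which is why detection of malicious clients and recovery of the global model (e.g., via machine unlearning, as in this paper's keywords) matter for FL systems.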
