Keywords:
Mobile Edge Computing (MEC);Unmanned aerial vehicles (UAVs);Stochastic geometry;Successful uplink communication probability (SUCP)
Abstract:
With the increase of computation-intensive and delay-sensitive applications, mobile edge computing (MEC) technology has sprung up. It effectively satisfies the needs of user equipment (UE) for real-time computing resources by placing servers at the edge of the network. However, traditional MEC infrastructures are constrained by their fixed locations and cannot meet emergency mobility needs. Unmanned aerial vehicles (UAVs) offer an effective solution with low cost, high mobility, and flexible deployment capabilities. In this paper, we propose a region-centric UAV-assisted MEC model, where the whole network space is divided into a set of hexagonal cells of equal area, and all UAVs in the same cell jointly process UE offloading data. All UAVs are assumed to be equipped with independent MEC servers, and both UAVs and terrestrial UEs follow independent homogeneous Poisson point process (PPP) distributions. Using the stochastic geometry analysis framework, we derive the successful uplink communication probability (SUCP) for users. Finally, we compare the simulation results with the theoretical values to verify the model's accuracy and evaluate the influence of key parameters on the network performance.
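The SUCP derived analytically in the abstract can also be checked numerically. The following is a minimal Monte Carlo sketch under assumed parameters (densities, altitude, path-loss exponent, activity probability, and SINR threshold are all illustrative, and a simple nearest-UAV association stands in for the paper's hexagonal-cell joint processing): UEs and UAVs are drawn from independent homogeneous PPPs, and the fraction of trials in which the typical UE's uplink SINR exceeds a threshold estimates the SUCP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's values).
LAMBDA_UAV = 1e-5     # UAV density (per m^2)
LAMBDA_UE = 5e-5      # UE density (per m^2)
P_ACTIVE = 0.05       # fraction of UEs transmitting on the same resource (assumption)
RADIUS = 2000.0       # simulation disk radius (m)
ALTITUDE = 100.0      # UAV altitude (m)
ALPHA = 3.0           # path-loss exponent
P_TX = 0.1            # UE transmit power (W)
NOISE = 1e-12         # noise power (W)
THETA = 1.0           # SINR decoding threshold

def draw_ppp(density, radius):
    """Draw a homogeneous PPP on a disk of the given radius."""
    n = rng.poisson(density * np.pi * radius ** 2)
    r = radius * np.sqrt(rng.random(n))
    phi = 2 * np.pi * rng.random(n)
    return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)

def sucp_estimate(trials=1000):
    """Fraction of trials in which the typical UE's uplink SINR exceeds THETA."""
    success = 0
    for _ in range(trials):
        uavs = draw_ppp(LAMBDA_UAV, RADIUS)
        ues = draw_ppp(LAMBDA_UE, RADIUS)
        if len(uavs) == 0:
            continue
        # Typical UE at the origin transmits to its nearest UAV (simplified association).
        serving = uavs[np.argmin(np.linalg.norm(uavs, axis=1))]
        d_serving = np.hypot(np.linalg.norm(serving), ALTITUDE)
        signal = P_TX * d_serving ** (-ALPHA)
        # Concurrently active UEs interfere at the serving UAV.
        active = ues[rng.random(len(ues)) < P_ACTIVE]
        d_intf = np.hypot(np.linalg.norm(active - serving, axis=1), ALTITUDE)
        interference = np.sum(P_TX * d_intf ** (-ALPHA))
        success += signal / (interference + NOISE) > THETA
    return success / trials

print("Estimated SUCP:", sucp_estimate())
```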
Keywords:
Servers;Throughput;Scheduling;Admission control;Internet of Things;Scheduling algorithms;Resource management;Optimization;Delays;Time factors;Adaptive time slice;admission control;edge computing;round-robin (RR);task scheduling
Abstract:
The rise of multiaccess edge computing (MEC) speeds up mobile user services and resolves service delays caused by long-distance transmission to cloud servers. However, in task-intensive scenarios, edge server processing limitations lead to buffer congestion, increasing latency and reducing Quality of Service (QoS). Furthermore, the challenges of edge server task processing are increased by the varying deadline requirements of different tasks, the time variability of task arrivals, and the real-time fluctuations of the network. In this work, we propose an adaptive slicing-based task admission scheduling strategy (ASTA) to address these issues. ASTA consists of an adaptive time slice adjustment algorithm (ASTA-I) and a task admission scheduling algorithm (ASTA-II). ASTA-I dynamically adjusts time slices based on real-time network conditions and task flow. ASTA-II first adjusts task priorities dynamically by considering factors such as data volume, deadlines, network conditions, and buffer locations. After that, ASTA-II formulates different scheduling strategies based on changes in task priorities. These strategies are formulated to improve the throughput efficiency of edge servers and enhance the average response speed of tasks. Simulation results show that, compared with the existing O2A and OTDS in different scenarios, the proposed ASTA can reduce the average number of waiting requests in the edge server buffer by 19.53%–57.73% and 20.42%–50.26%, and accelerate the average response speed of tasks by about 39.76% and 32.41%.
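ASTA-I and ASTA-II are only named at a high level in the abstract. The sketch below illustrates the two generic ingredients they rest on — a round-robin time slice that shrinks as the buffer fills, and a slack-based task priority that is re-evaluated after each slice — using simple rules and numbers that are assumptions, not the authors' algorithm.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float
    name: str = field(compare=False)
    remaining: float = field(compare=False)   # remaining service demand (ms)
    deadline: float = field(compare=False)    # absolute deadline (ms)

def adaptive_slice(base_slice, backlog, capacity):
    """Shrink the round-robin time slice as the buffer fills (illustrative rule)."""
    load = min(backlog / capacity, 1.0)
    return base_slice * (1.0 - 0.5 * load)    # up to 50% smaller under heavy load

def schedule(tasks, base_slice=10.0, capacity=50):
    """Priority-ordered round robin: smaller deadline slack => served earlier."""
    clock = 0.0
    heap = []
    for t in tasks:
        t.priority = t.deadline - t.remaining          # slack as a simple priority proxy
        heapq.heappush(heap, t)
    finished, late = [], []
    while heap:
        slice_ms = adaptive_slice(base_slice, len(heap), capacity)
        t = heapq.heappop(heap)
        run = min(slice_ms, t.remaining)
        clock += run
        t.remaining -= run
        if t.remaining <= 1e-9:
            (finished if clock <= t.deadline else late).append((t.name, clock))
        else:
            t.priority = t.deadline - clock - t.remaining  # re-evaluate slack
            heapq.heappush(heap, t)
    return finished, late

tasks = [Task(0, "a", 25, 80), Task(0, "b", 10, 30), Task(0, "c", 40, 200)]
print(schedule(tasks))
```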
Abstract:
For device-to-device (D2D) communications in the Internet of Things (IoT), when the direct links between terminals are unavailable owing to obstacles or severe fading, deploying intelligent reflecting surfaces (IRSs) is a promising solution to reconfigure channel environments for enhancing signal coverage and system capacity. In this paper, to improve system spectrum and energy efficiency, a novel full-duplex (FD) D2D communication model with dual IRSs is presented, where two IRSs are deployed close to the two FD transceivers to assist the exchange of information between them. Given the budget of total transmit power, maximizing the achievable sum-rate of such an IRS-assisted FD two-way system is formulated to optimize the precoding at the two transceivers and the phase shifts at the two IRSs. For such a coupled non-convex problem, we decouple it into two subproblems, which can be solved in an alternating manner with low complexity. Simulation results are presented to validate the superior performance of the proposed D2D communication model compared to the existing models and similar optimization schemes.
Journal:
IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22(2): 997-1010 ISSN: 1545-5971
Corresponding author:
Li, ZT
Author affiliations:
[Huang, Mingfeng; Liu, Anfeng] Cent South Univ, Sch Elect Informat, Changsha 410017, Peoples R China.;[Huang, Mingfeng] Changsha Univ Sci & Technol, Sch Comp & Commun Engn, Changsha 410000, Peoples R China.;[Li, Zhetao] Jinan Univ, Coll Informat Sci & Technol, Natl & Local Joint Engn Res Ctr Network Secur Dete, Guangdong Provincial Key Lab Data Secur & Privacy P, Guangzhou 510632, Peoples R China.;[Zhang, Xinglin] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China.;[Yang, Zhemin] Fudan Univ, Sch Comp Sci, Shanghai 201203, Peoples R China.
Corresponding institution:
[Li, ZT] Jinan Univ, Coll Informat Sci & Technol, Natl & Local Joint Engn Res Ctr Network Secur Dete, Guangdong Provincial Key Lab Data Secur & Privacy P, Guangzhou 510632, Peoples R China.
Keywords:
Data models;Computational modeling;Accuracy;Predictive models;Cloud computing;Bayes methods;Analytical models;Internet of Things;trust mechanism;data security;sequence extraction;evaluation accuracy
Abstract:
As a collaborative and open network, billions of devices are free to join the IoT-based data collection network for data perception and transmission. Along with this trend, more and more malicious attackers enter the network; they steal or tamper with data and hinder data exchange and communication. To address these issues, we propose a Proactive Trust Evaluation System (PTES) for secure data collection by evaluating the trust of mobile data collectors. Specifically, PTES guarantees evaluation accuracy from trust evidence acquisition, trust evidence storage, and trust value calculation. First, PTES obtains trust evidence based on active detection by drones, feedback from interacted objects, and recommendations from trusted third parties. Then, this trust evidence is stored according to interaction time by adopting a sliding window mechanism. After that, credible, untrustworthy, and uncertain evidence sequences are extracted from the storage space and assigned positive, negative, and tendentious trust values, respectively. Consequently, the final normalized trust is obtained by combining the three trust values. Finally, extensive experiments conducted on a real-world dataset demonstrate that PTES is superior to benchmark methods in terms of detection accuracy and profit.
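The sliding-window storage and the combination of positive, negative, and tendentious values can be pictured with a small sketch. The class below is illustrative only: the window length, weights, and normalization are assumptions, not the PTES parameters.

```python
from collections import deque

class SlidingTrust:
    """Illustrative sliding-window trust store: the three evidence classes named
    in the abstract (credible, untrustworthy, uncertain) are mapped to positive,
    negative and tendentious scores and combined into one normalized value."""

    def __init__(self, window=20, w_pos=1.0, w_neg=1.5, w_unc=0.3):
        self.window = deque(maxlen=window)     # keeps only the most recent evidence
        self.w_pos, self.w_neg, self.w_unc = w_pos, w_neg, w_unc

    def add_evidence(self, label):
        """label in {'credible', 'untrustworthy', 'uncertain'}, ordered by interaction time."""
        self.window.append(label)

    def trust(self):
        pos = sum(1 for e in self.window if e == "credible")
        neg = sum(1 for e in self.window if e == "untrustworthy")
        unc = sum(1 for e in self.window if e == "uncertain")
        score = self.w_pos * pos - self.w_neg * neg + self.w_unc * 0.5 * unc
        max_score = self.w_pos * len(self.window) or 1.0
        return max(0.0, min(1.0, 0.5 + score / (2 * max_score)))   # normalize to [0, 1]

s = SlidingTrust()
for e in ["credible", "credible", "uncertain", "untrustworthy", "credible"]:
    s.add_evidence(e)
print("normalized trust:", round(s.trust(), 3))
```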
Abstract:
In the context of transportation cyber-physical systems (T-CPS), backdoor attacks leveraging traffic images have emerged as a significant security threat. As T-CPS increasingly relies on visual information, such as real-time images captured by traffic cameras, for tasks like traffic sign recognition and autonomous driving, the risk of image-based backdoor attacks has grown substantially. Although various detection-based defense techniques have shown some success in identifying backdoored models, they often fail to fully eliminate backdoor effects, leaving residual security risks. To address this challenge, we propose a Frequency-Domain Hybrid Distillation (FDHD) method for backdoor defense, which effectively weakens the association between backdoor triggers and target labels by combining distillation mechanisms in both the frequency and pixel domains. Furthermore, we design a loss function that integrates feature reconstruction with adaptive alignment, enhancing the student network's ability to mimic the teacher network and thereby bolstering the backdoor defense capability. Extensive experiments with FDHD on multiple benchmark datasets against five of the latest attacks demonstrate that our proposed defense method effectively reduces backdoor threats while maintaining high accuracy in predicting clean samples. This approach protects against image-based backdoor attacks in T-CPS and lays the foundation for enhancing future traffic safety.
Author affiliations:
School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China; Key Laboratory of Cyberspace Situation Awareness of Henan Province, Information Engineering University, Zhengzhou, 450001, China; [Lingyun Xiang; Hang Fu; Chunfang Yang] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China; Key Laboratory of Cyberspace Situation Awareness of Henan Province, Information Engineering University, Zhengzhou, 450001, China
Abstract:
In the Visual Place Recognition (VPR) task, existing research has leveraged large-scale pre-trained models to improve the performance of place recognition. However, when there are significant environmental differences between query images and reference images, a large number of ineffective local features will interfere with the extraction of key landmark features, leading to the retrieval of visually similar but geographically different images. To address this perceptual aliasing problem caused by environmental condition changes, we propose a novel Visual Place Recognition method with Cross-Environment Robust Feature Enhancement (CerfeVPR). This method uses a GAN to generate similar images of the original images under different environmental conditions, thereby enhancing the learning of robust features of the original images. This enables the global descriptor to effectively ignore appearance changes caused by environmental factors such as seasons and lighting, showing better place recognition accuracy than other methods. Meanwhile, we introduce a large-kernel convolution adapter to fine-tune the pre-trained model, obtaining a better image feature representation for subsequent robust feature learning. Then, we process the information of different local regions in the general features through a 3-layer pyramid scene parsing network and fuse it with a tag that retains global information to construct a multi-dimensional image feature representation. Based on this, we use the fused features of similar images to drive the robust feature learning of the original images and complete the feature matching between query images and retrieved images. Experiments on multiple commonly used datasets show that our method exhibits excellent performance. On average, CerfeVPR achieves the highest results, with all Recall@N values exceeding 90%. In particular, on the highly challenging Nordland dataset, the R@1 metric is improved by 4.6%, significantly outperforming other methods, which fully verifies the superiority of CerfeVPR in visual place recognition under complex environments.
Keywords:
Human pose reconstruction; multimodal data fusion; point cloud data processing; ultrawideband (UWB) radar
Abstract:
This article proposes a multimodal human pose reconstruction method based on 3-D ultrawideband (UWB) radar images and point clouds, aiming to improve the accuracy of human pose estimation through the fusion of radar images and point cloud data. First, a UWB 3-D imaging radar system is designed, which synchronously collects radar echo signals and optical images, constructing a multimodal dataset covering various common actions and different human characteristics. Radar data processing includes azimuth-range 2-D imaging, target locking, local 3-D imaging, discrete sampling, and maximum projection to generate point cloud data and projection images. Optical image processing uses mature methods to reconstruct 3-D poses as pose labels for point clouds and projection images. To achieve multimodal data fusion, the UWB FusionPose network is designed, comprising an image feature extraction network, a point cloud feature extraction network, and a pose reconstruction network. The image feature extraction network is based on the ResNet-18 framework, while the point cloud feature extraction network adopts a pyramid structure. After feature fusion, a multilayer perceptron (MLP) is used to predict human pose information. Additionally, this article explores the impact of fusion parameters on network performance and verifies the effectiveness of the multimodal network through ablation experiments. Experimental results show that this method effectively utilizes radar point cloud data and projection image data to accurately reconstruct the 3-D pose of human targets. This research not only provides a new human pose reconstruction technique but also offers valuable references for the future development of radar imaging technology and multimodal data fusion methods.
Abstract:
With the rapid development of Deepfake technology, social security is facing great challenges. Although numerous Deepfake detection algorithms based on traditional CNN frameworks perform well on specific datasets, they still suffer from overfitting due to an over-reliance on localized artifact information. This limitation leads to degraded detection performance across diverse datasets. To address this issue, this study proposes a dual-branch fusion network called LGDF-Net. LGDF-Net uses a dual-branch structure to process the local artifact features and global texture features generated by Deepfake separately, preserving their unique characteristics. Specifically, the local compression branch utilizes a specially designed local compression module (LCM) that allows the network to focus more accurately on key regions of localized artifacts in Deepfake faces. The global expansion branch enhances the analysis of the global facial context through a global expansion module (GEM), which captures image context information and subtle texture features more comprehensively. Additionally, the proposed multi-scale feature extraction module (MSFE) delves into image features at various scales, enriching the extraction of detailed information. Finally, the multi-level feature fusion strategy (MLFF) improves the integration of local and global features through multiple layers, enabling the network to learn the intrinsic connections between these two types of features. A series of experimental validations demonstrate that the proposed scheme outperforms many existing detection networks in terms of accuracy and generalization ability.
Abstract:
The rapid development of the Internet has led to the widespread dissemination of manipulated facial images, significantly impacting people's daily lives. With the continuous advancement of Deepfake technology, the generated counterfeit facial images have become increasingly challenging to distinguish. There is an urgent need for a more robust and convincing detection method. Current detection methods mainly operate in the spatial domain and transform the spatial domain into other domains for analysis. With the emergence of transformers, some researchers have also combined traditional convolutional networks with transformers for detection. This paper explores the artifacts left by Deepfakes in various domains and, based on this exploration, proposes a detection method that utilizes the steganalysis rich model to extract high-frequency noise to complement spatial features. We design two main modules to fully leverage the interaction between these two aspects on top of traditional convolutional neural networks. The first is the multi-scale mixed feature attention module, which introduces artifacts from high-frequency noise into spatial textures, thereby enhancing the model's learning of spatial texture features. The second is the multi-scale channel attention module, which reduces the impact of background noise by weighting the features. Our proposed method was experimentally evaluated on mainstream datasets, and extensive experimental results demonstrate the effectiveness of our approach in detecting Deepfake forged faces, outperforming the majority of existing methods.
Journal:
CCF Transactions on High Performance Computing, 2025: 1-10 ISSN: 2524-4922
Corresponding author:
Xiaotian Li
Author affiliations:
[Xiaoyong Tang; Xiaotian Li; Ronghui Cao] College of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China
Corresponding institution:
[Xiaotian Li] College of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China
Abstract:
Kubernetes is a well-known distributed system for managing containers. It is essential to elect a leader among the replicas to maintain data consistency and coordinate tasks when deploying certain stateful services in a cluster. There are already many leader election algorithms used in distributed systems, but the cost of implementing these algorithms in a Kubernetes cluster is prohibitively high. The existing leader election algorithms in Kubernetes do not take the state of the nodes into account when distributing the leader role, resulting in unbalanced utilization of the cluster and hindering overall cluster performance. This paper proposes an online, resource-aware leader election algorithm to address the aforementioned issues. The algorithm dynamically retrieves the status of cluster nodes to influence the distribution of leaders, ensuring a more balanced allocation of leadership across nodes. This approach helps optimize cluster performance and load balancing. Through experimental comparisons, the algorithm achieves a minimum improvement of 82% in load balancing effectiveness compared to the default and existing improved leader election algorithms, using the coefficient of variation to validate the results.
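The two measurable ideas in the abstract — choosing a leader from node status and judging balance with the coefficient of variation — can be sketched as follows. The scoring weights and node fields are assumptions for illustration, not the paper's algorithm or the Kubernetes lease API.

```python
import statistics

def elect_leader(nodes):
    """Resource-aware election sketch (not the paper's algorithm): pick the node
    with the lowest combined CPU/memory utilization among ready candidates."""
    healthy = [n for n in nodes if n["ready"]]
    return min(healthy, key=lambda n: 0.5 * n["cpu"] + 0.5 * n["mem"])["name"]

def coefficient_of_variation(loads):
    """CV = std / mean, the balance metric the abstract uses to validate results."""
    mean = statistics.fmean(loads)
    return statistics.pstdev(loads) / mean if mean else 0.0

nodes = [
    {"name": "node-a", "cpu": 0.81, "mem": 0.74, "ready": True},
    {"name": "node-b", "cpu": 0.35, "mem": 0.42, "ready": True},
    {"name": "node-c", "cpu": 0.55, "mem": 0.60, "ready": False},
]
print("leader:", elect_leader(nodes))
print("CV of CPU load:", round(coefficient_of_variation([n["cpu"] for n in nodes]), 3))
```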
Abstract:
To meet the stringent requirements of industrial applications, modern Ethernet datacenter networks are widely deployed with remote direct memory access (RDMA) technology and the priority-based flow control (PFC) scheme to provide low-latency and high-throughput transmission. However, the existing end-to-end congestion control cannot handle transient congestion in a timely manner due to its round-trip-time (RTT) level control loop, inevitably resulting in PFC triggering. In this article, we propose a sub-RTT congestion control mechanism called SRCC to alleviate bursty congestion in time. Specifically, SRCC identifies the congested flows accurately, notifies congestion directly from the hotspot to the corresponding source within a sub-RTT control loop, and adjusts the sending rate to avoid PFC's head-of-line blocking. Compared to the state-of-the-art end-to-end transmission protocols, the evaluation results show that SRCC effectively reduces the average flow completion time (FCT) by up to 61%, 52%, 40%, and 24% over datacenter quantized congestion notification (DCQCN), Swift, high precision congestion control (HPCC), and photonic congestion notification (PCN), respectively.
Journal:
Journal of Lightwave Technology, 2025: 1-13 ISSN: 0733-8724
Author affiliations:
[Qiuyan Yao; Hui Yang] State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing, China;[Bowen Bao] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China
Abstract:
Space division multiplexing elastic optical networks (SDM-EONs) are a promising solution to enhance the transmission capacity of optical networks. However, physical layer impairments (PLIs) degrade the service's quality of transmission (QoT). Additionally, service-differentiated spectrum demand leads to spectrum fragmentation, which is further aggravated by strict PLI constraints. Therefore, we propose a routing, core and spectrum allocation algorithm based on dynamic quantitative impairments for low fragmentation (QISEN-LowSF) in SDM-EONs. This algorithm quantitatively analyzes multiple impairments and designs restrictive conditions to mitigate fragmentation. Specifically, we design a path selection mechanism that takes transmission distance and the integration degree of path resources as the cost metrics. Then, we abstract multiple impairment effects in each core and design a core selection mechanism whose cost combines the impairment estimation and resource integration of each core. When spectrum is allocated to a service, we design a spectrum allocation strategy that reduces fragmentation by combining quantified impairments with low-fragmentation constraints. We also optimize the service impairment estimation model in the QoT calculation phase. The results show that QISEN-LowSF performs better than the comparison algorithms under different traffic loads. Overall, the QISEN-LowSF algorithm has superior dynamic flexibility and can reduce blocking probability and resource fragmentation. Furthermore, it can effectively handle various traffic loads.
Corresponding institution:
[Feng, CC] Natl Univ Def Technol, Coll Comp Sci, Changsha 410073, Peoples R China.
Keywords:
Pricing;Resource management;Servers;Costs;Internet of Things;Optimization;Games;Computational offloading;multiple-access edge computing (MEC);offloading decision;pricing;resources allocation;Stackelberg game
Abstract:
Multiaccess edge computing (MEC) is extensively utilized within the Internet of Things (IoT), wherein end-users pay for services to meet the latency demands of their respective tasks. The price is impacted not solely by the quantity of data offloaded by the user but also by the leased computing and communication resources. Nevertheless, prevailing pricing strategies seldom account for the personalized resource requisites during user offloading. In this article, we present an adaptive pricing-oriented approach for joint task offloading and resource allocation that considers hybrid resources and comprises two key components. First, we propose a differential pricing framework for communication and computation resources, where the unit price is influenced by the proportion of resources rented by users. Subsequently, we design a two-stage Stackelberg game model: 1) employing convex optimization theory to mitigate problem intricacies and 2) employing gradient descent to ascertain the potentially optimal price, thus achieving a balance between minimizing user expenses and maximizing server profitability. Simulation outcomes demonstrate that our approach reduces user costs by 23.3% and enhances average server revenue by 65.6% compared to a flat pricing model with a high user request rate (five user-initiated requests per 100 ms). It also maintains server occupancy within 60% to 80%, thereby alleviating user queuing and refining user Quality of Experience (QoE).
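The two-stage leader-follower structure can be illustrated with a toy model. The benefit, cost, and demand parameters below are assumptions (the paper's utility functions, convex-optimization step, and gradient-descent pricing are not reproduced); the sketch only shows a server posting a price and a user best-responding with an offloading fraction.

```python
import numpy as np

# Illustrative two-stage Stackelberg sketch: the server (leader) posts a unit
# price p; the user (follower) best-responds with an offloading fraction x that
# trades a logarithmic benefit against the payment.  All constants are assumed.
A = 8.0      # user's benefit coefficient
D = 4.0      # task data volume
C = 0.5      # server's unit serving cost

def follower_best_response(p):
    """Maximize A*log(1 + x*D) - p*x*D over x in [0, 1]; closed form from the FOC."""
    x = (A / p - 1.0) / D
    return float(np.clip(x, 0.0, 1.0))

def leader_profit(p):
    x = follower_best_response(p)
    return (p - C) * x * D

# Leader: search the price that maximizes profit given the follower's reaction.
prices = np.linspace(0.6, 8.0, 500)
profits = np.array([leader_profit(p) for p in prices])
p_star = prices[profits.argmax()]
print(f"price {p_star:.2f}, offload fraction {follower_best_response(p_star):.2f}, "
      f"profit {profits.max():.2f}")
```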
Abstract:
Large-scale neural network-based federated learning (FL) has gained public recognition for its effective capabilities in distributed training. Nonetheless, the open system architecture inherent to federated learning systems raises concerns regarding their vulnerability to potential attacks. Poisoning attacks have become a major menace to federated learning on account of their concealed nature and potent destructive force. By altering the local model during routine machine learning training, attackers can easily contaminate the global model. Traditional detection and aggregation solutions mitigate certain threats, but they are still insufficient to completely eliminate the influence generated by attackers. Therefore, federated unlearning that can remove unreliable models while maintaining the accuracy of the global model has become a solution. Unfortunately, some existing federated unlearning approaches are difficult to apply to large neural network models because of their high computational expense. Hence, we propose SlideFU, an efficient anti-poisoning-attack federated unlearning framework. The primary concept of SlideFU is to employ a sliding window to construct the training process, where all operations are confined within the window. We design a malicious detection scheme based on principal component analysis (PCA), which calculates the trust factors between compressed models in a low-cost way to eliminate unreliable models. After confirming that the global model is under attack, the system activates the federated unlearning process and calibrates the gradients based on the update direction of the calibration gradients. Experiments on two public datasets demonstrate that our scheme can recover a robust model with extremely high efficiency.
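The PCA-based screening step can be pictured with a small numerical sketch. The synthetic updates, the number of components, and the distance-based trust factor below are illustrative assumptions, not SlideFU's actual trust computation.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Illustrative sketch of PCA-based screening of client updates: flattened updates
# are compressed with PCA, and clients far from the compressed consensus are flagged.
n_clients, dim = 20, 1000
updates = rng.normal(0, 0.1, size=(n_clients, dim))
updates[0] += 2.0          # one poisoned client with a shifted update (synthetic)

pca = PCA(n_components=5)
compressed = pca.fit_transform(updates)          # low-cost representation
centroid = compressed.mean(axis=0)
dist = np.linalg.norm(compressed - centroid, axis=1)

# Trust factor: inverse distance to the consensus, normalized to [0, 1].
trust = 1.0 / (1.0 + dist)
trust /= trust.max()
suspects = np.where(trust < trust.mean() - 2 * trust.std())[0]
print("flagged clients:", suspects.tolist())
```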
Abstract:
Images captured under improper exposure conditions lose their brightness information and texture details. Therefore, the enhancement of low-light images has received widespread attention. In recent years, most methods have been based on deep convolutional neural networks that enhance low-light images in the spatial domain, which tends to introduce a huge number of parameters, thus limiting their practical applicability. In this paper, we propose a Fourier-based two-stage low-light image enhancement method via mutual learning (FT-LLIE), which sequentially enhances the amplitude and phase components. Specifically, we design the amplitude enhancement module (AEM) and phase enhancement module (PEM). In these two enhancement stages, we design the amplitude enhancement block (AEB) and phase enhancement block (PEB) based on the Fast Fourier Transform (FFT) to process the amplitude component and the phase component, respectively. In AEB and PEB, we design a spatial unit (SU) and a frequency unit (FU) to process spatial- and frequency-domain information, and adopt a mutual learning strategy so that the local features extracted from the spatial domain and the global features extracted from the frequency domain can learn from each other to obtain complementary information for enhancing the image. Extensive experiments show that our network requires only a small number of parameters to effectively enhance image details, outperforming existing low-light image enhancement algorithms in both qualitative and quantitative results.
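The amplitude/phase decomposition the two stages operate on can be shown in a few lines. The sketch below is not the FT-LLIE network; it only applies a fixed, assumed amplitude gain in the Fourier domain to show why scaling the amplitude while preserving the phase brightens an image without destroying its structure.

```python
import numpy as np

def fourier_enhance(img, amp_gain=1.8):
    """Scale the Fourier amplitude while keeping the phase (illustrative stage 1)."""
    f = np.fft.fft2(img, axes=(0, 1))
    amplitude, phase = np.abs(f), np.angle(f)
    enhanced = amplitude * amp_gain * np.exp(1j * phase)   # amplitude only
    out = np.real(np.fft.ifft2(enhanced, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)

# Toy low-light image: values concentrated near zero.
rng = np.random.default_rng(0)
low_light = rng.random((64, 64)) * 0.2
bright = fourier_enhance(low_light)
print("mean before/after:", low_light.mean().round(3), bright.mean().round(3))
```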
Authors:
Abdulmajeed Abdullah Mohammed Mokbel;Fei Yu;Yumba Musoya Gracia;Bohong Tan;Hairong Lin;...
Journal:
Complex System Modeling and Simulation, 2025, 5(1): 34-45 ISSN: 2096-9929
Corresponding author:
Yu, F
Author affiliations:
[Abdulmajeed Abdullah Mohammed Mokbel; Fei Yu; Yumba Musoya Gracia; Bohong Tan] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China;[Hairong Lin] School of Electronic Information, Central South University, Changsha, China;[Herbert Ho-Ching Iu] School of Electrical, Electronic and Computer Engineering, University of Western Australia, Perth, Australia
Corresponding institution:
[Yu, F] Changsha Univ Sci & Technol, Sch Comp & Commun Engn, Changsha 410114, Peoples R China.
Abstract:
This paper proposes a novel 5D hyperchaotic memristive system based on the Sprott-C system configuration, which greatly increases the complexity of the system for use in secure communication and signal processing. A critical aspect of this work is the introduction of a flux-controlled memristor that shapes the system's chaotic behavior and dynamic responses. To this end, detailed mathematical modeling and numerical simulations of the stability of the system's equilibria, its bifurcations, and its hyperchaotic dynamics were conducted, revealing a wide variety of behaviors with great potential in cryptographic applications and secure data transmission. The flexibility and efficiency of the real-time operating environment were then demonstrated by implementing the system on a field-programmable gate array (FPGA) hardware platform. A prototype that confirms the theoretical framework was presented, providing new insights for chaotic systems with practical significance. Finally, we conducted National Institute of Standards and Technology (NIST) testing on the proposed 5D hyperchaotic memristive system, and the results showed that the system has good randomness.
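For readers unfamiliar with the base configuration, the underlying three-dimensional Sprott-C system can be integrated in a few lines. The sketch below covers only that baseline (dx/dt = yz, dy/dt = x − y, dz/dt = 1 − x²) with an assumed initial condition; the paper's 5D memristive extension and its flux-controlled memristor model are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sprott_c(t, s):
    """Classic Sprott-C system, the configuration the 5D extension builds on."""
    x, y, z = s
    return [y * z, x - y, 1.0 - x * x]

sol = solve_ivp(sprott_c, (0.0, 200.0), [0.1, 0.0, 0.0], max_step=0.01)
x, y, z = sol.y
print("trajectory bounds:",
      (x.min().round(2), x.max().round(2)),
      (z.min().round(2), z.max().round(2)))
```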
Abstract:
In vehicular edge computing (VEC), most tasks have strict real-time and energy requirements, but the mobility of vehicles and the difficulty of intelligent computing make it hard to meet these requirements. Since most VEC tasks can be decomposed into finer-grained subtasks, the repetition of tasks can be reduced by exploiting the dependencies between subtasks, thereby improving task completion rates. In this work, we explore the dependencies of subtasks in different applications and design a two-stage multihop clustering de-duplication offloading (MCDO) mechanism. First, MCDO uses a multihop two-layer clustering (MTLC) algorithm to divide clusters based on similarities between different tasks. Based on this, MCDO further designs a de-duplication logical hierarchical offloading (DLHO) scheme. DLHO forms a directed acyclic graph (DAG) of de-duplicated subtasks in each cluster and offloads these subtasks in a logical hierarchical manner. Simulation results show that, compared to the existing approaches PC5-GO, FedEdge, and MD-TSDQN, MCDO can achieve a minimum improvement of 15.1% in terms of latency and 20.8% in terms of energy consumption.
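The two ideas named in the abstract — de-duplicating subtasks shared across applications and offloading the survivors level by level along the DAG — can be sketched with the standard-library topological sorter. The application DAGs and subtask names below are hypothetical; this is not the MCDO mechanism itself.

```python
from graphlib import TopologicalSorter

# Hypothetical application DAGs: each maps a subtask to its dependencies.
app_dags = {
    "nav":   {"detect": set(), "track": {"detect"}, "plan": {"track"}},
    "alert": {"detect": set(), "classify": {"detect"}},        # "detect" is shared
}

merged = {}                       # de-duplicated DAG: subtask -> union of dependencies
for dag in app_dags.values():
    for task, deps in dag.items():
        merged.setdefault(task, set()).update(deps)

ts = TopologicalSorter(merged)
ts.prepare()
level = 0
while ts.is_active():
    ready = list(ts.get_ready())          # subtasks whose dependencies are all finished
    print(f"offload level {level}: {sorted(ready)}")
    ts.done(*ready)
    level += 1
```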
Abstract:
Despite the evident advantages of UNet variants in medical image segmentation, these methods still exhibit limitations in the extraction of foreground, background, and boundary features. Based on feature guidance, we propose a new network (FG-UNet). Specifically, adjacent high-level and low-level features are used to gradually guide the network to perceive lesion features. To accommodate lesion features of different scales, the multi-order gated aggregation (MGA) block is designed based on multi-order feature interactions. Furthermore, a novel feature-guided context-aware (FGCA) block is devised to enhance the capability of FG-UNet to segment lesions by fusing boundary-enhancing features, object-enhancing features, and uncertain areas. Finally, a bi-dimensional interaction attention (BIA) block is designed to enable the network to highlight crucial features effectively. To appraise the effectiveness of FG-UNet, experiments were conducted on the Kvasir-SEG, ISIC2018, and COVID-19 datasets. The experimental results illustrate that FG-UNet achieves a DSC score of 92.70% on the Kvasir-SEG dataset, which is 1.15% higher than that of the latest SCUNet++, 4.70% higher than that of ACC-UNet, and 5.17% higher than that of UNet.
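The DSC metric the results are reported in is simple to state. The snippet below computes the Dice similarity coefficient on a pair of toy binary masks; the masks are hypothetical and unrelated to the paper's datasets.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC): 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: a predicted lesion overlapping a ground-truth lesion.
gt = np.zeros((8, 8), dtype=np.uint8); gt[2:6, 2:6] = 1
pr = np.zeros((8, 8), dtype=np.uint8); pr[3:7, 3:7] = 1
print("DSC:", round(float(dice_score(pr, gt)), 3))
```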
Author affiliations:
[Dong Cui] Research Institute of Petroleum Exploration & Development, Beijing, China;[Hao Cao; Yuxu Peng] School of Computer and Communication Engineering, Changsha University of Science & Technology, Changsha, China;[Qingzhen Ma] National Supercomputing Center in Tianjin, Tianjin, China;[Chunye Gong] College of Computing, National University of Defense Technology, Changsha, China;[Taihui Yin] China Astronaut Research and Training Center, Beijing, China
Conference:
2025 2nd International Conference on Electronic Engineering and Information Systems (EEISS)
Conference date:
23 May 2025
Conference location:
Nanjing, China
Proceedings:
2025 2nd International Conference on Electronic Engineering and Information Systems (EEISS)
Keywords:
fast sweeping method;OpenMP;parallel computing;SIMD
Abstract:
With advancements in computational capabilities and algorithmic sophistication, 3D seismic simulations have become pivotal in geophysics, earthquake engineering, and disaster prevention efforts. However, the application of fast sweeping methods in 3D seismic simulations presents significant challenges due to strong data dependencies, which hinder the parallelism in solving equations within the programmed functions. This limitation often precludes practical engineering applications. To overcome these constraints, we have developed an innovative approach that transcends traditional 2D methodologies by extending these methods to three dimensions. We introduce a novel 3D diagonal-direction chunking algorithm that utilizes Manhattan distance, accompanied by a foundational integral optimization algorithm. Experimental evaluations indicate that our proposed method enhances performance, achieving an optimization acceleration ratio of 1.47 on a single core. When scaled to a 56-core single node, the parallel efficiency reaches 42%. In practical engineering computations, the use of 56 OpenMP threads yields an acceleration by a factor of 23, demonstrating substantial improvements over existing algorithms.
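The Manhattan-distance chunking idea behind the parallelization can be shown on a toy grid. The sketch below only groups a small 3D grid into anti-diagonal planes of constant i + j + k, the sets of points that a sweep can update independently (and that the paper processes with OpenMP threads); the seismic solver itself is not reproduced.

```python
from collections import defaultdict

# Group grid points by Manhattan distance i + j + k: points in one plane have no
# mutual upwind dependency for a given sweep direction, so each plane can be
# processed in parallel (OpenMP in the paper's implementation).
NX, NY, NZ = 4, 4, 4
planes = defaultdict(list)
for i in range(NX):
    for j in range(NY):
        for k in range(NZ):
            planes[i + j + k].append((i, j, k))

for d in sorted(planes):                      # sweep planes in increasing distance
    chunk = planes[d]
    # In the parallel implementation, the loop over `chunk` is the parallel region.
    print(f"distance {d}: {len(chunk)} independent grid points")
```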
Abstract:
Images captured in the wild often suffer from issues such as under-exposure, over-exposure, or sometimes a combination of both. These images tend to lose details and texture due to uneven exposure. The majority of image enhancement methods currently focus on correcting either under-exposure or over-exposure, but there are only a few methods available that can effectively handle these two problems simultaneously. In order to address these issues, a novel partition-based exposure correction method is proposed. Firstly, our method calculates the illumination map to generate a partition mask that divides the original image into under-exposed and over-exposed areas. Then, we propose a Transformer-based parameter estimation module to estimate the dual gamma values for partition-based exposure correction. Finally, we introduce a dual-branch fusion module to merge the original image with the exposure-corrected image to obtain the final result. It is worth noting that the illumination map plays a guiding role in both the dual gamma model parameters estimation and the dual-branch fusion. Extensive experiments demonstrate that the proposed method consistently achieves superior performance over state-of-the-art (SOTA) methods on 9 datasets with paired or unpaired samples. Our codes are available at https://github.com/csust7zhangjm/ExposureCorrectionWMS .
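The partition-plus-dual-gamma idea can be illustrated without the learned components. In the sketch below the illumination map is simply the per-pixel channel maximum, and the two gamma values and thresholds are fixed assumptions, standing in for the Transformer-based parameter estimation and the dual-branch fusion described above.

```python
import numpy as np

def dual_gamma_correct(img, low_thr=0.3, high_thr=0.7,
                       gamma_under=0.6, gamma_over=1.6):
    """Partition by an illumination map, then apply a separate gamma per region."""
    illumination = img.max(axis=-1, keepdims=True)        # per-pixel max over RGB
    under = (illumination < low_thr).astype(float)
    over = (illumination > high_thr).astype(float)
    normal = 1.0 - np.maximum(under, over)
    corrected = (under * np.power(img, gamma_under)       # brighten dark regions
                 + over * np.power(img, gamma_over)       # compress bright regions
                 + normal * img)                          # leave well-exposed pixels
    return np.clip(corrected, 0.0, 1.0)

rng = np.random.default_rng(0)
mixed = np.concatenate([rng.random((32, 32, 3)) * 0.2,                 # under-exposed half
                        0.8 + rng.random((32, 32, 3)) * 0.2], axis=0)  # over-exposed half
print("means before/after:", mixed.mean().round(3), dual_gamma_correct(mixed).mean().round(3))
```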