Abstract:
To meet the stringent requirements of industrial applications, modern Ethernet datacenter networks are widely deployed with remote direct memory access (RDMA) technology and the priority-based flow control (PFC) scheme to provide low-latency, high-throughput transmission. However, existing end-to-end congestion control cannot handle transient congestion in a timely manner due to its round-trip-time (RTT) level control loop, inevitably resulting in PFC triggering. In this article, we propose a sub-RTT congestion control mechanism called SRCC to alleviate bursty congestion promptly. Specifically, SRCC identifies congested flows accurately, notifies congestion directly from the hotspot to the corresponding source within a sub-RTT control loop, and adjusts the sending rate to avoid PFC's head-of-line blocking. Compared to state-of-the-art end-to-end transmission protocols, the evaluation results show that SRCC effectively reduces the average flow completion time (FCT) by up to 61%, 52%, 40%, and 24% over datacenter quantized congestion notification (DCQCN), Swift, high precision congestion control (HPCC), and photonic congestion notification (PCN), respectively.
Journal:
IEEE Internet of Things Journal, 2025, 12(11): 16067-16078
Author affiliations:
[Zhuofan Liao; Yanpu Tang; Xiaoyong Tang] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China; [Jiawei Huang] School of Information Science and Engineering, Central South University, Changsha, China
Abstract:
The rise of multiaccess edge computing (MEC) speeds up mobile user services and resolves service delays caused by long-distance transmission to cloud servers. However, in task-intensive scenarios, edge server processing limitations lead to buffer congestion, increasing latency and reducing Quality of Service (QoS). Furthermore, the challenges of edge server task processing are increased by the varying deadline requirements of different tasks, the time variability of task arrivals, and real-time fluctuations of the network. In this work, we propose an adaptive slicing-based task admission scheduling strategy (ASTA) to address these issues. ASTA consists of an adaptive time slice adjustment algorithm (ASTA-I) and a task admission scheduling algorithm (ASTA-II). ASTA-I dynamically adjusts time slices based on real-time network conditions and task flow. ASTA-II first adjusts task priorities dynamically by considering factors such as data volume, deadlines, network conditions, and buffer locations. After that, ASTA-II formulates different scheduling strategies based on changes in task priorities. These strategies are designed to improve the throughput efficiency of edge servers and enhance the average response speed of tasks. Simulation results show that, compared with the existing O2A and OTDS schemes in different scenarios, the proposed ASTA reduces the average number of waiting requests in the edge server buffer by 19.53%–57.73% and 20.42%–50.26%, respectively, and accelerates the average response speed of tasks by about 39.76% and 32.41%, respectively.
Abstract:
Images captured under improper exposure conditions lose their brightness information and texture details. Therefore, the enhancement of low-light images has received widespread attention. In recent years, most methods have been based on deep convolutional neural networks that enhance low-light images in the spatial domain, which tends to introduce a huge number of parameters, thus limiting their practical applicability. In this paper, we propose a Fourier-based two-stage low-light image enhancement method via mutual learning (FT-LLIE), which sequentially enhances the amplitude and phase components. Specifically, we design the amplitude enhancement module (AEM) and phase enhancement module (PEM). In these two enhancement stages, we design the amplitude enhancement block (AEB) and phase enhancement block (PEB) based on the Fast Fourier Transform (FFT) to deal with the amplitude component and the phase component, respectively. In AEB and PEB, we design a spatial unit (SU) and a frequency unit (FU) to process spatial- and frequency-domain information, and adopt a mutual learning strategy so that the local features extracted from the spatial domain and the global features extracted from the frequency domain can learn from each other to obtain complementary information for enhancing the image. Extensive experiments show that our network requires only a small number of parameters to effectively enhance image details, outperforming existing low-light image enhancement algorithms in both qualitative and quantitative results.
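The amplitude/phase split that FT-LLIE's two stages operate on can be illustrated with NumPy's FFT. This is a minimal sketch, not the paper's network: the function names are ours, and the round trip merely shows that the two components jointly determine the image, so each can be enhanced in its own stage and then recombined.

```python
import numpy as np

def split_amplitude_phase(img):
    """Decompose one image channel into Fourier amplitude and phase."""
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def recombine(amplitude, phase):
    """Rebuild the spatial image from (possibly enhanced) components."""
    spec = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(spec))

# Round trip: splitting and recombining recovers the original image.
img = np.random.rand(8, 8)
amp, pha = split_amplitude_phase(img)
restored = recombine(amp, pha)
assert np.allclose(restored, img)
```

In an enhancement pipeline, `amp` would be modified by the AEM stage and `pha` by the PEM stage before recombination.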
Abstract:
For device-to-device (D2D) communications in the Internet of Things (IoT), when direct links between terminals are unavailable owing to obstacles or severe fading, deploying intelligent reflecting surfaces (IRSs) is a promising solution to reconfigure channel environments, enhancing signal coverage and system capacity. In this paper, to improve system spectrum and energy efficiency, a novel full-duplex (FD) D2D communication model with dual IRSs is presented, where two IRSs are deployed close to two FD transceivers to assist the exchange of information between them. Given a total transmit power budget, maximizing the achievable sum-rate of such an IRS-assisted FD two-way system is formulated to optimize the precoding at the two transceivers and the phase shifts at the two IRSs. We decouple this coupled non-convex problem into two subproblems, which can be solved in an alternating manner with low complexity. Simulation results validate the superior performance of the proposed D2D communication model compared to existing models and similar optimization schemes.
Abstract:
In the context of transportation cyber-physical systems (T-CPS), backdoor attacks leveraging traffic images have emerged as a significant security threat. As T-CPS increasingly relies on visual information, such as real-time images captured by traffic cameras, for tasks like traffic sign recognition and autonomous driving, the risk of image-based backdoor attacks has grown substantially. Although various detection-based defense techniques have shown some success in identifying backdoored models, they often fail to fully eliminate backdoor effects, leaving residual security risks. To address this challenge, we propose a Frequency-Domain Hybrid Distillation (FDHD) method for backdoor defense, which effectively weakens the association between backdoor triggers and target labels by combining distillation mechanisms in both the frequency and pixel domains. Furthermore, we design a loss function that integrates feature reconstruction with adaptive alignment, enhancing the student network's ability to mimic the teacher network and thereby bolstering the backdoor defense capability. Extensive experiments on multiple benchmark datasets against five recent attacks demonstrate that the proposed defense method effectively reduces backdoor threats while maintaining high accuracy in predicting clean samples. This approach will protect against image-based backdoor attacks in T-CPS and lay the foundation for enhancing future traffic safety.
Abstract:
In vehicular edge computing (VEC), most tasks have stringent real-time and energy requirements, but the mobility of vehicles and the difficulty of intelligent computing make these requirements hard to meet. Since most VEC tasks can be decomposed into smaller granularity, exploiting the dependencies between small subtasks can reduce repeated computation and thereby improve task completion rates. In this work, we explore the dependencies of subtasks in different applications and design a two-stage multihop clustering de-duplication offloading (MCDO) mechanism. First, MCDO uses a multihop two-layer clustering (MTLC) algorithm to divide clusters based on similarities between different tasks. On this basis, MCDO further designs a de-duplication logical hierarchical offloading (DLHO) algorithm. DLHO forms a directed acyclic graph (DAG) of de-duplicated subtasks in each cluster and offloads these subtasks in a logical hierarchical manner. Simulation results show that, compared to the existing approaches PC5-GO, FedEdge, and MD-TSDQN, MCDO achieves a minimum improvement of 15.1% in latency and 20.8% in energy consumption.
Abstract:
Large-scale neural network-based federated learning (FL) has gained public recognition for its effective capabilities in distributed training. Nonetheless, the open system architecture inherent to federated learning raises concerns about its vulnerability to attacks. Poisoning attacks have become a major menace to federated learning on account of their concealment and destructive force. By altering the local model during routine machine learning training, attackers can easily contaminate the global model. Traditional detection and aggregation solutions mitigate certain threats, but they are still insufficient to completely eliminate the influence of attackers. Therefore, federated unlearning, which can remove unreliable models while maintaining the accuracy of the global model, has become a solution. Unfortunately, some existing federated unlearning approaches are difficult to apply to large neural network models because of their high computational expense. Hence, we propose SlideFU, an efficient anti-poisoning-attack federated unlearning framework. The primary concept of SlideFU is to employ a sliding window to structure the training process, where all operations are confined within the window. We design a malicious-model detection scheme based on principal component analysis (PCA), which calculates trust factors between compressed models in a low-cost way to eliminate unreliable models. After confirming that the global model is under attack, the system activates the federated unlearning process and calibrates the gradients based on the update direction. Experiments on two public datasets demonstrate that our scheme can recover a robust model with extremely high efficiency.
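The idea of computing low-cost trust factors between PCA-compressed models can be sketched as follows. This is our own illustrative construction, not SlideFU's exact algorithm: it assumes flattened client updates as rows of a matrix and uses mean cosine similarity in the PCA subspace as the trust score.

```python
import numpy as np

def pca_compress(updates, k=2):
    """Project flattened client updates onto their top-k principal components."""
    X = updates - updates.mean(axis=0)
    # SVD of the centered matrix: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

def trust_factors(updates, k=2):
    """Mean cosine similarity of each compressed update to all the others."""
    Z = pca_compress(updates, k)
    norms = np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12
    C = (Z / norms) @ (Z / norms).T
    n = len(Z)
    return (C.sum(axis=1) - 1.0) / (n - 1)  # exclude self-similarity

# Nine similar "honest" updates plus one outlier: the outlier scores lowest.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(9, 50))
outlier = rng.normal(5.0, 0.1, size=(1, 50))
t = trust_factors(np.vstack([honest, outlier]))
assert np.argmin(t) == 9
```

The compression step is what keeps the cost low: pairwise similarities are computed over k-dimensional vectors rather than full model parameters.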
Authors:
Abdulmajeed Abdullah Mohammed Mokbel;Fei Yu;Yumba Musoya Gracia;Bohong Tan;Hairong Lin;...
Journal:
Complex System Modeling and Simulation, 2025, 5(1): 34-45. ISSN: 2096-9929
Author affiliations:
[Abdulmajeed Abdullah Mohammed Mokbel; Fei Yu; Yumba Musoya Gracia; Bohong Tan] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China; [Hairong Lin] School of Electronic Information, Central South University, Changsha, China; [Herbert Ho-Ching Iu] School of Electrical, Electronic and Computer Engineering, University of Western Australia, Perth, Australia
Abstract:
This paper proposes a novel 5D hyperchaotic memristive system based on the Sprott-C system configuration, which greatly increases the complexity of the system for use in secure communication and signal processing. A critical aspect of this work is the introduction of a flux-controlled memristor that shapes the chaotic behavior and dynamic responses of the system. To this end, detailed mathematical modeling and numerical simulations of the stability of the system's equilibria, its bifurcations, and its hyperchaotic dynamics were conducted, revealing a wide variety of behaviors with great potential for cryptographic applications and secure data transmission. The flexibility and efficiency of a real-time operating environment were then demonstrated by implementing the system on a field-programmable gate array (FPGA) hardware platform. A prototype confirming the theoretical framework was presented, providing new insights into chaotic systems of practical significance. Finally, we conducted National Institute of Standards and Technology (NIST) testing on the proposed 5D hyperchaotic memristive system, and the results showed that the system has good randomness.
Corresponding institution:
[Feng, CC] Natl Univ Def Technol, Coll Comp Sci, Changsha 410073, Peoples R China.
Keywords:
Pricing;Resource management;Servers;Costs;Internet of Things;Optimization;Games;Computational offloading;multiple-access edge computing (MEC);offloading decision;pricing;resources allocation;Stackelberg game
Abstract:
Multiaccess edge computing (MEC) is extensively utilized within the Internet of Things (IoT), wherein end-users pay for services to meet the latency demands of their tasks. The price is affected not only by the quantity of data offloaded by the user but also by the leased computing and communication resources. Nevertheless, prevailing pricing strategies seldom account for personalized resource requirements during user offloading. In this article, we present an adaptive pricing-oriented approach for joint task offloading and resource allocation over hybrid resources, comprising two key components. First, we propose a differential pricing framework for communication and computation resources, where the unit price is influenced by the proportion of resources rented by users. Subsequently, we design a two-stage Stackelberg game model: 1) employing convex optimization theory to reduce problem complexity and 2) employing gradient descent to ascertain the potentially optimal price, thus balancing the minimization of user expenses against the maximization of server profitability. Simulation outcomes demonstrate that our approach cuts user costs by 23.3% and enhances average server revenue by 65.6% compared to a flat pricing model under a high user request rate (five user-initiated requests per 100 ms), while maintaining server occupancy within 60% to 80%, thereby alleviating user queuing and improving user Quality of Experience (QoE).
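The two-stage Stackelberg structure described above can be sketched with a toy model. Everything below is an illustrative assumption rather than the paper's formulation: the server (leader) posts a unit price `p`, a user (follower) best-responds with an offloading amount maximizing the assumed utility `A*ln(1+x) - p*x`, and the leader searches the price that maximizes its profit.

```python
import numpy as np

A, C = 4.0, 0.25  # assumed user valuation and server unit cost

def best_response(p):
    """Stage 2: the follower's optimal offloading amount for price p."""
    # Maximizing A*ln(1+x) - p*x over x >= 0 gives x* = A/p - 1 (clipped).
    return max(0.0, A / p - 1.0)

def leader_profit(p):
    """Stage 1 objective: (price - cost) times the induced demand."""
    return (p - C) * best_response(p)

# Stage 2 is solved in closed form above; stage 1 searches a price grid
# (a stand-in for the gradient descent step in the abstract).
prices = np.linspace(0.3, 3.9, 3601)
p_star = prices[np.argmax([leader_profit(p) for p in prices])]

# For this toy model the optimum is p* = sqrt(A*C) = 1.0.
assert abs(p_star - np.sqrt(A * C)) < 1e-2
```

Setting the derivative of `(p - C)*(A/p - 1)` to zero gives `p* = sqrt(A*C)`, which the grid search recovers; in the paper's setting the follower problem is instead solved via convex optimization over joint offloading and resource variables.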
Keywords:
Human pose reconstruction; multimodal data fusion; point cloud data processing; ultrawideband (UWB) radar
Abstract:
This article proposes a multimodal human pose reconstruction method based on 3-D ultrawideband (UWB) radar images and point clouds, aiming to improve the accuracy of human pose estimation through the fusion of radar images and point cloud data. First, a UWB 3-D imaging radar system is designed, which synchronously collects radar echo signals and optical images, constructing a multimodal dataset covering various common actions and different human characteristics. Radar data processing includes azimuth-range 2-D imaging, target locking, local 3-D imaging, discrete sampling, and maximum projection to generate point cloud data and projection images. Optical image processing uses mature methods to reconstruct 3-D poses as pose labels for point clouds and projection images. To achieve multimodal data fusion, the UWB FusionPose network is designed, comprising an image feature extraction network, a point cloud feature extraction network, and a pose reconstruction network. The image feature extraction network is based on the ResNet-18 framework, while the point cloud feature extraction network adopts a pyramid structure. After feature fusion, a multilayer perceptron (MLP) is used to predict human pose information. Additionally, this article explores the impact of fusion parameters on network performance and verifies the effectiveness of the multimodal network through ablation experiments. Experimental results show that this method effectively utilizes radar point cloud data and projection image data to accurately reconstruct the 3-D pose of human targets. This research not only provides a new human pose reconstruction technique but also offers valuable references for the future development of radar imaging technology and multimodal data fusion methods.
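The "maximum projection" step in the radar pipeline above can be sketched with NumPy: a 3-D intensity volume is collapsed into 2-D images by taking the maximum along one axis. The volume layout and axis choices here are our assumptions, not the paper's exact processing chain.

```python
import numpy as np

# A small 3-D intensity volume with a single strong scatterer.
volume = np.zeros((4, 5, 6))
volume[2, 3, 1] = 7.0

# Maximum projection along two different axes yields two 2-D views.
front_view = volume.max(axis=0)  # shape (5, 6)
top_view = volume.max(axis=1)    # shape (4, 6)

# The scatterer survives projection in both views.
assert front_view.shape == (5, 6) and front_view[3, 1] == 7.0
assert top_view[2, 1] == 7.0
```

Images produced this way would feed the projection-image branch, while discrete sampling of the same volume would feed the point cloud branch.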
Abstract:
Despite the evident advantages of UNet variants in medical image segmentation, these methods still exhibit limitations in extracting foreground, background, and boundary features. Based on feature guidance, we propose a new network (FG-UNet). Specifically, adjacent high-level and low-level features are used to gradually guide the network to perceive lesion features. To accommodate lesion features of different scales, a multi-order gated aggregation (MGA) block is designed based on multi-order feature interactions. Furthermore, a novel feature-guided context-aware (FGCA) block is devised to enhance FG-UNet's ability to segment lesions by fusing boundary-enhancing features, object-enhancing features, and uncertain areas. Finally, a bi-dimensional interaction attention (BIA) block is designed to enable the network to highlight crucial features effectively. To appraise the effectiveness of FG-UNet, experiments were conducted on the Kvasir-seg, ISIC2018, and COVID-19 datasets. The experimental results show that FG-UNet achieves a DSC score of 92.70% on the Kvasir-seg dataset, which is 1.15% higher than the latest SCUNet++, 4.70% higher than ACC-UNet, and 5.17% higher than UNet.
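The DSC score quoted above is the standard Dice similarity coefficient between a predicted and a ground-truth mask; a minimal NumPy version (the `eps` guard for empty masks is our addition):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# |A∩B| = 2, |A| = 3, |B| = 3, so DSC = 2*2/(3+3) ≈ 0.667
assert abs(dice_score(a, b) - 2 / 3) < 1e-6
```

A DSC of 92.70% means this quantity, averaged over the test set, equals 0.9270.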
Journal:
IEEE Transactions on Circuits and Systems for Video Technology, 2025: 1-1. ISSN: 1051-8215
Author affiliations:
[Min Long] School of Electronics and Communication Engineering, Guangzhou University, Guangzhou, Guangdong, China; [Zhenyu Liu] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, China; [Fei Peng] School of Artificial Intelligence, Guangzhou University, Guangzhou, Guangdong, China; [Le-Bing Zhang] School of Computer and Artificial Intelligence, Huaihua University, Huaihua, China
Abstract:
With the rapid development of Deepfake technology, social security is facing great challenges. Although numerous Deepfake detection algorithms based on traditional CNN frameworks perform well on specific datasets, they still suffer from overfitting due to an over-reliance on localized artifact information. This limitation leads to degraded detection performance across diverse datasets. To address this issue, this study proposes a dual-branch fusion network called LGDF-Net. LGDF-Net uses a dual-branch structure to process the local artifact features and global texture features generated by Deepfake separately, preserving their unique characteristics. Specifically, the local compression branch utilizes a specially designed local compression module (LCM) that allows the network to focus more accurately on key regions of localized artifacts in Deepfake faces. The global expansion branch enhances the analysis of the global facial context through a global expansion module (GEM), which captures image context information and subtle texture features more comprehensively. Additionally, the proposed multi-scale feature extraction module (MSFE) delves into image features at various scales, enriching the extraction of detailed information. Finally, the multi-level feature fusion strategy (MLFF) improves the integration of local and global features through multiple layers, enabling the network to learn the intrinsic connections between these two types of features. A series of experimental validations demonstrate that the proposed scheme outperforms many existing detection networks in terms of accuracy and generalization ability.
Abstract:
The rapid development of the Internet has led to the widespread dissemination of manipulated facial images, significantly impacting people's daily lives. With the continuous advancement of Deepfake technology, generated counterfeit facial images have become increasingly challenging to distinguish, and there is an urgent need for a more robust and convincing detection method. Current detection methods mainly operate in the spatial domain or transform the spatial domain into other domains for analysis. With the emergence of transformers, some researchers have also combined traditional convolutional networks with transformers for detection. This paper explores the artifacts left by Deepfakes in various domains and, based on this exploration, proposes a detection method that utilizes the steganalysis rich model to extract high-frequency noise to complement spatial features. We design two main modules to fully leverage the interaction between these two aspects on top of traditional convolutional neural networks. The first is the multi-scale mixed feature attention module, which introduces artifacts from high-frequency noise into spatial textures, thereby enhancing the model's learning of spatial texture features. The second is the multi-scale channel attention module, which reduces the impact of background noise by weighting the features. Our proposed method was experimentally evaluated on mainstream datasets, and extensive experimental results demonstrate the effectiveness of our approach in detecting Deepfake-forged faces, outperforming the majority of existing methods.
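High-frequency noise extraction of the kind the steganalysis rich model performs amounts to high-pass filtering that suppresses smooth image content. The 3x3 Laplacian-like kernel below is an illustrative stand-in for the SRM filter bank, not the paper's exact filters:

```python
import numpy as np

# A simple high-pass residual kernel (coefficients sum to zero).
KERNEL = np.array([[-1,  2, -1],
                   [ 2, -4,  2],
                   [-1,  2, -1]], dtype=float) / 4.0

def noise_residual(img):
    """Filter out smooth content, keeping high-frequency noise/artifacts."""
    p = np.pad(img, 1, mode="symmetric")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for di in range(3):
        for dj in range(3):
            out += KERNEL[di, dj] * p[di:di + h, dj:dj + w]
    return out

# A perfectly flat image has no high-frequency content: residual is zero.
flat = np.full((16, 16), 0.5)
assert np.allclose(noise_residual(flat), 0.0)
```

In a detector, this residual map would be fed to the noise branch alongside the raw RGB input to the spatial branch.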
Journal:
IEEE Transactions on Dependable and Secure Computing, 2025, 22(2): 997-1010. ISSN: 1545-5971
Corresponding author:
Li, ZT
Author affiliations:
[Huang, Mingfeng; Liu, Anfeng] Cent South Univ, Sch Elect Informat, Changsha 410017, Peoples R China.; [Huang, Mingfeng] Changsha Univ Sci & Technol, Sch Comp & Commun Engn, Changsha 410000, Peoples R China.; [Li, Zhetao] Jinan Univ, Coll Informat Sci & Technol, Natl & Local Joint Engn Res Ctr Network Secur Dete, Guangdong Provincial Key Lab Data Secur & Privacy P, Guangzhou 510632, Peoples R China.; [Zhang, Xinglin] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China.; [Yang, Zhemin] Fudan Univ, Sch Comp Sci, Shanghai 201203, Peoples R China.
Corresponding institution:
[Li, ZT] Jinan Univ, Coll Informat Sci & Technol, Natl & Local Joint Engn Res Ctr Network Secur Dete, Guangdong Provincial Key Lab Data Secur & Privacy P, Guangzhou 510632, Peoples R China.
Keywords:
Data models;Computational modeling;Accuracy;Predictive models;Cloud computing;Bayes methods;Analytical models;Internet of Things;trust mechanism;data security;sequence extraction;evaluation accuracy
Abstract:
As a collaborative and open network, billions of devices are free to join an IoT-based data collection network for data perception and transmission. Along with this trend, more and more malicious attackers enter the network; they steal or tamper with data and hinder data exchange and communication. To address these issues, we propose a Proactive Trust Evaluation System (PTES) for secure data collection that evaluates the trust of mobile data collectors. Specifically, PTES guarantees evaluation accuracy across trust evidence acquisition, trust evidence storage, and trust value calculation. First, PTES obtains trust evidence based on active detection by drones, feedback from interacted objects, and recommendations from trusted third parties. Then, this trust evidence is stored according to interaction time using a sliding window mechanism. After that, credible, untrustworthy, and uncertain evidence sequences are extracted from the storage space and assigned positive, negative, and tendentious trust values, respectively. Consequently, the final normalized trust is obtained by combining the three trust values. Finally, extensive experiments conducted on a real-world dataset demonstrate that PTES is superior to benchmark methods in terms of detection accuracy and profit.
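The sliding-window evidence store and three-part trust combination can be sketched as follows. This is a minimal illustration of the described structure, not PTES itself: the window size, the weights, and the small "tendentious" contribution for uncertain evidence are all assumed values.

```python
from collections import deque

WINDOW = 5  # assumed window size: oldest evidence is evicted automatically

class TrustRecord:
    def __init__(self):
        self.window = deque(maxlen=WINDOW)

    def add(self, verdict):
        """verdict is one of "credible", "untrustworthy", "uncertain"."""
        self.window.append(verdict)

    def trust(self):
        n = len(self.window) or 1
        pos = sum(v == "credible" for v in self.window) / n
        neg = sum(v == "untrustworthy" for v in self.window) / n
        unc = sum(v == "uncertain" for v in self.window) / n
        # Positive evidence raises trust, negative lowers it, and uncertain
        # evidence contributes a small tendency; clamp into [0, 1].
        return max(0.0, min(1.0, 0.5 + 0.5 * pos - 0.5 * neg + 0.1 * (unc - 0.5)))

rec = TrustRecord()
for v in ["credible"] * 4 + ["untrustworthy"]:
    rec.add(v)
assert 0.5 < rec.trust() <= 1.0  # mostly positive evidence: above neutral
```

The `deque(maxlen=...)` gives the sliding-window behavior for free: once the window is full, each new piece of evidence displaces the oldest one.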
Journal:
Expert Systems with Applications, 2025: 127815. ISSN: 0957-4174
Corresponding author:
Bo Yin
Author affiliations:
[Bo Yin; Binyao Xu] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China; School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China; [Yihu Liu] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China; School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
Corresponding institution:
[Bo Yin] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
Introduction (excerpt):
In recent years, with the rise of web 3.0 (Garcia et al.; Hendler, 2009) applications such as the metaverse and digital collections, blockchain as the underlying technology has gained extensive attention from researchers. Blockchain is a distributed database in which nodes do not trust each other. It ensures data security through the use of a hash chain, a consensus mechanism, and other technologies. The hash chain prevents data tampering, while the consensus mechanism ensures that all nodes store identical copies. Additionally, blockchain is characterized by decentralization and traceability. Due to these features, blockchain technology has found wide applications, particularly in fields like finance (Treleaven, Brown, & Yang, 2017) and healthcare (Revathi and Manikandan; Wang et al.), where decentralized networks store increasing amounts of data. Consequently, the issue of data querying has become a focus of research. In a blockchain system, there are two types of nodes: full nodes and light nodes (as shown in Fig. 1). Full nodes store all data from the blockchain, including block headers and bodies, and they verify transactions and validate blocks. This results in increased storage demands on them. As of April 2024, the amount of data in the full nodes of Bitcoin is 564.04 GB, and the amount of data in the full nodes of Ethereum is 994 GB. On the other hand, light nodes only store block headers, which significantly reduces their storage pressure, since the size of a block header is only 80 bytes. When users join the blockchain as a full node, they can easily query the data stored on the chain. However, this brings huge storage and computation pressure to the user. To alleviate this, users can join the blockchain as light nodes. In this case, a query request needs to be sent to a service provider (SP) that runs a full node. Since nodes do not trust each other, the query results returned by the full node may be tampered with, or the complete query results may not be returned.
Therefore, the user needs to verify the soundness and completeness of the query results. To address this issue, the miner constructs an authenticated data structure (ADS) for the data in each block during the block-packing process. The root hash of the ADS is stored in the block header. The full node generates a verification object (VO) along with the results by querying the ADS. Both the VO and the query results are then returned to the user. The user verifies the integrity of the query results through the VO and the root digest obtained from the block header. Fig. 1 shows an example where the user sends a query request Q to the SP; the SP queries the ADS and returns the VO and query result R to the user. In this paper, we investigate verifiable similarity queries within the context of blockchain networks. Our focus is on two specific types of queries: similarity range queries and similarity top-k queries. In practical scenarios, users often seek to retrieve data objects that exhibit similarity to a given set of keywords of interest. For example, a user may initiate a similarity range query such as "Querying data objects with similarity to {male, USA, California, programming enthusiasts, enjoy traveling} greater than 0.6". Meanwhile, a similarity top-k query could involve a request for "Obtaining 5 data objects with the highest similarity to {female, USA, New York, yoga enthusiasts, enjoying reading}". In the context of such similarity queries, it becomes imperative for the blockchain system to calculate the similarity between the user-provided keyword set and each data object stored within the blockchain. The efficient execution of similarity queries, while upholding the soundness and completeness of the query outcomes, holds significant importance for both the broader adoption of blockchain technology and the enhancement of user experience. Two examples of similarity queries are as follows.
Example 1. Consider a platform for accessing literature that utilizes blockchain technology to ensure copyright protection and track the origin of the literature. On this platform, papers are published and can be retrieved and cited by other researchers. To help researchers find relevant literature quickly, a keyword query such as Qs = "{machine learning, smart grid, blockchain}, number=50" can be used to return the 50 most similar literature items.

Example 2. A blockchain-based smart healthcare system stores patients' medical records and health insurance information. When seeking advice on medical treatment, patients may want to know the treatment options received by other patients with similar symptoms. For this, a patient can issue a query such as Qs = "{diabetes, hypertension, elderly patients}, threshold=0.7" to find the treatment plans of patients with similarity greater than 0.7; the threshold of 0.7 can be adjusted to the user's needs.

The similarity query can also be utilized in e-commerce-oriented blockchain platforms.
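Both example queries reduce to set similarity between keyword sets, typically measured by Jaccard similarity. A minimal sketch of the two query types as naive linear scans (the function names are ours, not the paper's API):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def similarity_range_query(objects, query, threshold):
    """All object ids whose keyword set exceeds the similarity threshold."""
    return {oid for oid, kws in objects.items() if jaccard(kws, query) > threshold}

def similarity_topk_query(objects, query, k):
    """The k object ids most similar to the query keyword set."""
    ranked = sorted(objects, key=lambda oid: jaccard(objects[oid], query),
                    reverse=True)
    return ranked[:k]
```

The naive scan touches every object; the indexing structures discussed next exist precisely to prune this scan while keeping the results verifiable.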
For example, the non-fungible token (NFT) market is a popular blockchain application that can recommend digital collections based on user interest tags, such as profile pictures and games. In other e-commerce platforms, users can discover products with similar styles through similarity queries based on their purchase history or preferences. For the above examples, an intuitive solution is to construct a Merkle hash tree (MHT) over the data objects in the block. However, the MHT does not support efficient data queries because it lacks a search key. Consequently, many existing query schemes combine the MHT with various database indexes, such as B+-trees, R-trees, and prefix trees, to improve search efficiency; the keywords associated with the data objects stored in the leaf nodes serve as the search key. Despite this, the approach still suffers from low query efficiency, as it cannot directly identify the leaf nodes containing the relevant query results. During query processing at a node, every subset of the search key must be examined, and even if a subset that meets the query conditions is found, it may not correspond to an actual set residing in a descendant node. This limitation hinders effective node pruning during the query process. Therefore, our goal is to reduce the query time of the ADS; the challenge is to improve query efficiency. Existing research on blockchain verifiable queries focuses on range queries (Wang, Xu, et al.; Xu et al.; Zhang et al.), keyword queries (Xu et al.; Zhang et al.), and existence queries (Dai et al., 2020); no existing work provides a solution for verifiable similarity queries. In this paper, we propose a scheme, VSQ, to support verifiable similarity queries in blockchain databases. We focus on similarity range queries and similarity top-k queries, aiming to enable efficient query processing while supporting verification of the soundness and completeness of the query results.
Specifically, we first propose a baseline solution that uses the MHT as the base ADS and adds pointers to the leaf nodes to realize the query. Next, we improve query efficiency and reduce storage overhead by using minhash signatures instead of the original data in the MHT, an upgraded solution named Baseline+. Finally, we propose VSQ, which further improves query efficiency by introducing locality-sensitive hashing (LSH) and using a Merkle bucket tree (MBT) as the base ADS; storing the data in the MBT makes the query results verifiable. The main contributions of this paper are as follows:
• To the best of our knowledge, this is the first work that addresses verifiable similarity queries in blockchain. We propose a verifiable query scheme based on the Merkle bucket tree, aiming to avoid traversing all the sets and to reduce query time.
• We propose the VSQ scheme for verifiable similarity queries, which combines minhash, LSH, and the Merkle bucket tree to improve query efficiency. Each leaf node of the Merkle bucket tree represents a hash bucket, which enables verification of the soundness and completeness of the query results.
• We present the similarity query algorithm and the query result verification algorithm, and analyze the performance and security of the scheme in detail.
• We conducted extensive experiments on the proposed method. On the synthetic dataset, for similarity range queries the query time and VO size of VSQ are reduced by two orders of magnitude compared to the Baseline; for similarity top-k queries, the query time is reduced by two orders of magnitude and the VO size by three orders of magnitude. On the real datasets, the query time and VO size for both queries are reduced by two orders of magnitude compared to the Baseline.
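To make the minhash/LSH idea concrete, here is a toy sketch of the two building blocks, assuming Jaccard similarity over keyword sets. The salted-hash construction and banding parameters are illustrative, not the paper's exact design:

```python
import hashlib

def _slot_hash(salt: int, kw: str) -> int:
    """Deterministic 32-bit hash of a keyword under one simulated permutation."""
    digest = hashlib.md5(f"{salt}:{kw}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def minhash_signature(keyword_set, num_perm=16):
    """Minimum hash per simulated permutation; the fraction of equal slots
    between two signatures estimates the Jaccard similarity of the sets."""
    return tuple(min(_slot_hash(salt, kw) for kw in keyword_set)
                 for salt in range(num_perm))

def estimate_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def lsh_keys(sig, bands=4, rows=4):
    """Split the signature into bands; sets sharing any band key become
    lookup candidates, so similar sets land in the same hash bucket."""
    return {(b, sig[b * rows:(b + 1) * rows]) for b in range(bands)}
```

Similar sets agree on many signature slots and therefore tend to share at least one band key, so a bucket lookup replaces a scan over all sets; hanging each hash bucket off a leaf of the Merkle bucket tree is what lets the pruned search remain verifiable.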
This paper is organized as follows. Section 2 introduces related work on blockchain technology and verifiable queries. Section 3 introduces the preliminary knowledge and the problem definition. Section 4 provides an in-depth discussion of the ADS based on the Merkle bucket tree and its design principles. Section 5 presents the experimental results, demonstrating the effectiveness and superior performance of the proposed method. Finally, Section 6 concludes the paper and explores future research directions.

We compared the proposed VSQ with the Baseline scheme and the Baseline+ scheme. Synthetic datasets.
To generate a set of 5000 keywords, we utilized Python's nltk library, and we then randomly assigned keywords from this set to each data object. The dataset size ranges over [128, 16384]. The dimensions of the keyword set are 1500 and 3500. The default value is indicated in bold. Real datasets. We also used three real-world
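The synthetic setup described above can be sketched as follows. The vocabulary here is a placeholder list rather than the nltk-derived one, and the per-object keyword-set size is an assumed parameter:

```python
import random

def make_synthetic_dataset(vocab, num_objects=128, set_size=10, seed=0):
    """Randomly assign a keyword set drawn from the vocabulary to each
    data object, mirroring the synthetic-dataset construction."""
    rng = random.Random(seed)
    return {f"obj{i}": set(rng.sample(vocab, set_size))
            for i in range(num_objects)}

vocab = [f"kw{i}" for i in range(5000)]   # stands in for the nltk keyword set
dataset = make_synthetic_dataset(vocab, num_objects=128)
assert len(dataset) == 128
assert all(len(kws) == 10 and kws <= set(vocab) for kws in dataset.values())
```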
Abstract:
Blockchain serves as the basis for secure and distributed applications like decentralized finance (DeFi). Light nodes can help reduce the storage and computation burden of full nodes in the blockchain by storing only block headers instead of full blocks. A resource-constrained user runs a light node and sends the query request to the full node to find the answer. However, this approach raises security concerns, as query results from the full node may be altered or incomplete. In this paper, we explore verifiable similarity queries, commonly used in text mining, within the context of blockchain networks. The main challenge lies in designing an authenticated data structure (ADS) that enables efficient query processing and verification of query result soundness and completeness. Previous research on blockchain verifiable queries has focused on range queries, keyword queries, and existence queries, which are not applicable to verifiable similarity queries. Our proposed solutions include a baseline approach that uses the Merkle hash tree as the foundational ADS and adds leaf node pointers for query realization. Additionally, we introduce an enhanced solution, Baseline+, which improves query efficiency and reduces storage overhead by leveraging minhash. Finally, we present a verifiable similarity query scheme (VSQ) based on the Merkle bucket tree, integrating minhash and locality-sensitive hashing to enhance query and verification efficiency. We conducted an extensive experimental evaluation. The experimental results show that the query time and the verification object size of VSQ are reduced by two orders of magnitude compared to the Baseline approach.
Corresponding institutions:
[Luo, HM] Jiangxi Prov Key Lab Adv Elect Mat & Devices No 20, Nanchang 330022, Peoples R China; [Zhao, JJ] Changsha Univ Sci & Technol, Sch Comp & Commun Engn, Changsha 410114, Peoples R China.
Keywords:
Fiber optical sensor;Fiber Bragg grating;Microfiber;Temperature and strain measurement;PDMS
Abstract:
This paper proposes a sensor for simultaneously measuring temperature and strain using a microfiber Bragg grating (MFBG), half of which is coated with the polymer polydimethylsiloxane (PDMS) and encapsulated inside a pair of U-shaped glass grooves, while the other half is bare. Due to the different thermo-optic coefficients, elasto-optic coefficients, and cross-sectional dimensions, the two reflection bands from the encapsulated part of the MFBG and the bare MFBG section respond differently to temperature and strain. The temperature sensitivity differs significantly between the two peaks, and the reflection wavelengths shift in opposite directions, enabling effective detection of temperature variations. The strain is almost entirely concentrated on the bare MFBG (BMFBG), as the cross-sectional area of the encapsulated MFBG (EMFBG) is much larger than that of the BMFBG. The experimental results show that temperature sensitivities of −31.92 pm/°C and 10.31 pm/°C and strain sensitivities of ~0 pm/με and 6.24 pm/με are achieved, respectively. The sensor has the advantages of high sensitivity and a simple structure and can measure strain and temperature simultaneously.
Journal:
Journal of Information Security and Applications,2025年92:104082 ISSN:2214-2126
Corresponding author:
Longfei Huang
Author affiliations:
[Zhuoqun Xia; Longfei Huang; Jingjing Tan; Yongbin Yu] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China;[Wei Hao] School of Traffic and Transportation Engineering, Changsha University of Science and Technology, Changsha 410114, China;[Kejun Long] Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle-Infrastructure Systems, Changsha 410114, China
Corresponding institution:
[Longfei Huang] S;School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
Abstract:
The Controller Area Network (CAN) bus plays an essential role in Connected Autonomous Vehicles (CAVs), yet its inherent design limitations regarding data protection make it susceptible to malicious intrusions. Contemporary research in intrusion detection predominantly employs Long Short-Term Memory (LSTM) models to analyze CAN IDs as time-series data. However, the high computational complexity of LSTM models makes them unsuitable for resource-constrained in-vehicle networks. To address this problem, a lightweight IDS combining image encoding and an Efficient Channel Attention (ECA) network is proposed. Specifically, three temporal image encoding techniques (Gramian Angular Sum Fields, Markov Transition Fields, and Recurrence Plots) are employed to transform CAN ID time-series data into single-channel images, which are then superimposed into three-channel images. A lightweight three-layer convolutional neural network integrated with an ECA module dynamically adjusts channel weights for image classification. Evaluated on real in-vehicle datasets, the method achieves classification accuracies of 99.83%, 99.98%, and 98.75% across three test scenarios with a 5.5 ms average inference time, demonstrating robust detection capability and computational efficiency.
Keywords:
Super-resolution;Dual-branch feature interaction attention;Adaptive large kernel enhancement;Overlapping cross-attention modules
Abstract:
Recently, deep convolutional neural networks (CNNs) have achieved excellent performance on image super-resolution (SR). However, the majority of deep CNN-based super-resolution models struggle to capture global context due to their limited receptive fields and do not fully utilize intermediate features, which limits their performance and applicability. To address this issue, we propose an image super-resolution reconstruction network (DBFA) based on a dual-branch feature interaction attention mechanism, aimed at capturing global context and multi-scale local features. DBFA uses a Transformer built on the dual-branch attention mechanism as the basic module to model global long-range dependencies and enhance channel and spatial feature interactions within each block. Additionally, to fully utilize the features at each level, we design an adaptive large kernel enhancement module (ALKE) with a scalable receptive field to differentially refine the extracted features, and capture multi-scale local features across levels by embedding skip connections. Meanwhile, feature interaction between adjacent windows is enhanced by introducing overlapping cross-attention modules (OCM). Extensive experimental results indicate that the proposed DBFA method significantly improves visual quality and image fidelity compared with other representative SR methods. Code is available at:
https://github.com/wjxcsust2024/DBFA
Abstract:
Due to their biological interpretability, memristors are widely used to simulate synapses between artificial neural networks. As a type of neural network whose dynamic behavior can be explained, the coupling of resonant tunneling diode-based cellular neural networks (RTD-CNNs) with memristors has rarely been reported in the literature. Therefore, this paper designs a coupled RTD-CNN model with memristors (RTD-MCNN), investigating and analyzing the dynamic behavior of the RTD-MCNN. Based on this model, a simple encryption scheme for the protection of digital images in police forensic applications is proposed. The results show that the RTD-MCNN can have two positive Lyapunov exponents, and its output is influenced by the initial values, exhibiting multistability. Furthermore, a set of amplitudes in its output sequence is affected by the internal parameters of the memristor, leading to nonlinear variations. Undoubtedly, the rich dynamic behaviors described above make the RTD-MCNN highly suitable for the design of chaos-based encryption schemes in the field of privacy protection. Encryption tests and security analyses validate the effectiveness of this scheme.
Abstract:
Images captured in the wild often suffer from under-exposure, over-exposure, or sometimes a combination of both, and such images tend to lose details and texture due to uneven exposure. The majority of image enhancement methods currently focus on correcting either under-exposure or over-exposure, but only a few can effectively handle both problems simultaneously. To address these issues, a novel partition-based exposure correction method is proposed. First, our method calculates the illumination map to generate a partition mask that divides the original image into under-exposed and over-exposed areas. Then, we propose a Transformer-based parameter estimation module to estimate the dual gamma values for partition-based exposure correction. Finally, we introduce a dual-branch fusion module to merge the original image with the exposure-corrected image to obtain the final result. Notably, the illumination map plays a guiding role in both the dual gamma model parameter estimation and the dual-branch fusion. Extensive experiments demonstrate that the proposed method consistently achieves superior performance over state-of-the-art (SOTA) methods on 9 datasets with paired or unpaired samples. Our code is available at https://github.com/csust7zhangjm/ExposureCorrectionWMS .