Classifying large-scale networks into several categories and distinguishing them according to their fine structures is of great importance to several real-life applications. However, most studies on complex networks focus on the properties of a single network and seldom on the classification, clustering, and comparison of different networks, in which a network must be treated as a whole. Conventional methods can hardly be applied to networks directly due to the non-Euclidean nature of the data. In this paper, we propose a novel framework, the Complex Network Classifier (CNC), which integrates network embedding and a convolutional neural network to tackle the problem of network classification. By training the classifier on synthetic complex network data, we show that CNC can not only classify networks with high accuracy and robustness but also extract the features of the networks automatically. We also compare CNC with baseline methods on benchmark datasets, showing that our method performs well on large-scale networks.
Inspired by real biological neural models, Spiking Neural Networks (SNNs) process information with discrete spikes and show great potential for building low-power neural network systems. This paper proposes a hardware implementation of an SNN based on a Field-Programmable Gate Array (FPGA). It features a hybrid updating algorithm, which combines the advantages of existing algorithms to simplify hardware design and improve performance. The proposed design supports up to 16 384 neurons and 16.8 million synapses but requires minimal hardware resources and achieves a very low power consumption of 0.477 W. A test platform is built based on the proposed design using a Xilinx FPGA evaluation board, upon which we deploy a classification task on the MNIST dataset. The evaluation results show an accuracy of 97.06% and a frame rate of 161 frames per second.
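The abstract does not detail the neuron model; as an illustration of the per-timestep update such an SNN accelerator typically performs, here is a minimal Leaky Integrate-and-Fire (LIF) sketch in Python. The leak factor, threshold, and reset value are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of a time-stepped Leaky Integrate-and-Fire (LIF) update,
# the kind of neuron dynamics an SNN accelerator typically implements.
# leak, threshold, and v_reset are illustrative assumptions; the paper's
# hybrid updating algorithm is not specified in the abstract.

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0, v_reset=0.0):
    """Advance one neuron by one time step.

    v         : current membrane potential
    spikes_in : list of 0/1 input spikes
    weights   : synaptic weight per input
    Returns (new_potential, output_spike).
    """
    # Integrate weighted input spikes, then apply the leak.
    current = sum(w * s for w, s in zip(weights, spikes_in))
    v = leak * v + current
    if v >= threshold:          # fire and reset
        return v_reset, 1
    return v, 0
```

In hardware, the same update is typically done in fixed-point arithmetic, one neuron per clock cycle or in parallel banks.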
Botnets based on the Domain Generation Algorithm (DGA) mechanism pose great challenges to the main current detection methods because of their strong concealment and robustness. However, the complexity of the DGA family and the imbalance of samples continue to impede research on DGA detection. In existing work, the sample size of each DGA family is regarded as the most important determinant of the resampling proportion; thus, differences in the characteristics of various samples are ignored, and the optimal resampling effect is not achieved. In this paper, a Long Short-Term Memory-based Property and Quantity Dependent Optimization (LSTM.PQDO) method is proposed. This method takes advantage of LSTM to automatically mine the comprehensive features of DGA domain names. Based on a comprehensive consideration of the number and characteristics of the original samples, it iterates the resampling proportion, heuristically searching around the initial solution in the right direction; thus, dynamic optimization of the resampling proportion is realized. The experimental results show that the LSTM.PQDO method achieves better performance than existing models in overcoming the difficulties of unbalanced datasets; moreover, it can serve as a reference for sample resampling tasks in similar scenarios.
Static compaction methods aim at finding unnecessary test patterns to reduce the size of the test set as a post-process of test generation. Techniques based on partial maximum satisfiability are often used to tackle hard problems in various domains, including artificial intelligence, computational biology, data mining, and machine learning. We observe that some of the test patterns generated by a commercial Automatic Test Pattern Generation (ATPG) tool are redundant, and that the relationship between test patterns and faults, as significant information, can effectively guide the test pattern reduction process. Considering that a test pattern can detect one or more faults, we map the problem of static test compaction to a partial maximum satisfiability problem. Experiments on the ISCAS89, ISCAS85, and ITC99 benchmarks show that this approach can reduce the initial test set size generated by TetraMAX18 while maintaining fault coverage.
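The mapping from static compaction to partial maximum satisfiability can be sketched on a toy instance: hard constraints require every fault to remain covered by some kept pattern, and soft constraints prefer dropping each pattern. The pattern and fault names below are hypothetical, and a brute-force search stands in for a real MaxSAT solver.

```python
from itertools import combinations

# Toy sketch of static compaction as partial MaxSAT: hard clauses demand
# every fault stays covered by a kept pattern, soft clauses prefer
# dropping patterns. A real flow would emit WCNF for a MaxSAT solver;
# here a smallest-subset-first brute-force search stands in.

def compact(patterns):
    """patterns: dict pattern_name -> set of detected faults.
    Returns a minimum subset of patterns covering all faults."""
    all_faults = set().union(*patterns.values())
    names = list(patterns)
    for k in range(1, len(names) + 1):          # smallest subsets first
        for subset in combinations(names, k):
            covered = set().union(*(patterns[n] for n in subset))
            if covered == all_faults:           # hard clauses satisfied
                return set(subset)              # max soft clauses satisfied
    return set(names)
```

Maximizing the number of satisfied soft clauses (dropped patterns) is exactly minimizing the kept test set while the hard coverage clauses preserve fault coverage.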
User-generated social media messages usually contain considerable multimodal content. Such messages are usually short and lack explicit sentiment words. However, we can understand the sentiment associated with such messages by analyzing their context, which is essential for improving sentiment analysis performance. Unfortunately, the majority of existing studies consider the impact of contextual information based on a single data model. In this study, we propose a novel model for context-aware user sentiment analysis. This model involves the semantic correlation of different modalities and the effects of tweet context information. Based on experimental results obtained using a Twitter dataset, our approach outperforms existing methods in analyzing user sentiment.
Facing the challenges of next-generation exascale computing, the National University of Defense Technology has developed a prototype system to explore opportunities, solutions, and limits toward the next-generation Tianhe system. This paper briefly introduces the prototype system, which is deployed at the National Supercomputer Center in Tianjin and has a theoretical peak performance of 3.15 Pflops. The system comprises 512 compute nodes, each with three proprietary CPUs called Matrix-2000+. The system memory is 98.3 TB, and the storage totals 1.4 PB.
The microservices architecture has been proposed to overcome the drawbacks of the traditional monolithic architecture. Scalability is one of the most attractive features of microservices: scaling in the microservices architecture requires the scaling of specified services only, rather than the entire application. Scaling a service can be achieved by deploying it multiple times on different physical machines. However, problems with load balancing may arise. Most existing load-balancing solutions for microservices focus on individual tasks and ignore the dependencies between these tasks. In this paper, we propose TCLBM, a task chain-based load balancing algorithm for microservices. When an Application Programming Interface (API) request is received, TCLBM chooses target service instances for all tasks of this API call and achieves load balancing by evaluating the system resource usage of each instance. TCLBM reduces the API response time by reducing data transmissions between physical machines. We use three heuristic algorithms, namely Particle Swarm Optimization (PSO), Simulated Annealing (SA), and Genetic Algorithm (GA), to implement TCLBM, and comparison results reveal that GA performs best. Our findings show that TCLBM achieves load balancing among service instances and reduces API response times by up to 10% compared with existing methods.
Proteins drive virtually all cellular-level processes. The proteins that are critical to cell proliferation and survival are defined as essential. These essential proteins are implicated in key metabolic and regulatory networks and are important in the context of rational drug design efforts. The computational identification of essential proteins benefits from the proliferation of publicly available protein interaction datasets. Scientists have developed several algorithms that use these interaction datasets to predict essential proteins. However, a comprehensive web platform that facilitates the analysis and prediction of essential proteins has been missing. In this study, we design, implement, and release NetEPD: a network-based essential protein discovery platform. This resource integrates data on Protein-Protein Interaction (PPI) networks, gene expression, subcellular localization, and a native set of essential proteins. It also computes a variety of node centrality measures, evaluates the predictions of essential proteins, and visualizes PPI networks. This comprehensive platform supports four activities: the collection of datasets, computation of centrality measures, evaluation, and visualization. The results produced by NetEPD are visualized on its website, sent to a user-provided email address, and made available for download in a parsable format. The platform is freely available at http://bioinformatics.csu.edu.cn/netepd.
Traditional steganography is the practice of embedding a secret message into an image by modifying the information in the spatial or frequency domain of the cover image. Although this method has a large embedding capacity, it inevitably leaves traces of rewriting that can eventually be discovered by the adversary. Steganography by Cover Synthesis (SCS) instead attempts to construct a natural stego image without modifying a cover image, and can thus evade detection by a steganographic analyzer. Due to the difficulty of constructing natural stego images, however, the development of SCS has been limited. In this paper, a novel generative SCS method based on a Generative Adversarial Network (GAN) for image steganography is proposed. In our method, we design a GAN model called Synthetic Semantics Stego Generative Adversarial Network (SSS-GAN) to generate stego images from secret messages. By establishing a mapping relationship between secret messages and semantic category information, category labels can generate pseudo-real images via the generative model. The receiver then recognizes the labels via a classifier network to restore the concealed information. We trained the model on the MNIST, CIFAR-10, and CIFAR-100 image datasets. Experiments show the feasibility of this method, and its security, capacity, and robustness are analyzed.
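The message-to-label mapping that such a scheme relies on can be sketched as follows: with an N-class generator, a secret message is expressed as base-N digits, each digit selecting the class label of one synthesized image, and the receiver's classifier recovers the digits. The GAN itself is omitted; this coding step is an assumption about the general approach, not code from the paper.

```python
# Sketch of message <-> class-label coding for cover synthesis.
# With an N-class generator (e.g., 10 CIFAR-10 categories), each
# synthesized image conveys one base-N digit of the secret message.

def message_to_labels(message: int, n_classes: int, n_images: int):
    """Encode an integer message as a sequence of class labels
    (least-significant digit first)."""
    labels = []
    for _ in range(n_images):
        labels.append(message % n_classes)
        message //= n_classes
    return labels

def labels_to_message(labels, n_classes: int) -> int:
    """Decode the class labels reported by the receiver's classifier."""
    message = 0
    for label in reversed(labels):
        message = message * n_classes + label
    return message
```

Capacity per image is log2(N) bits, which is why SCS trades embedding capacity for the absence of modification traces.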
Apache Spark provides a well-known MapReduce computing framework, aiming to process big data analytics quickly in a data-parallel manner. With this platform, large input data are divided into data partitions, and each data partition is processed by multiple computation tasks concurrently. Outputs of these computation tasks are transferred among multiple computers via the network. However, such a distributed computing framework suffers from system overheads, inevitably caused by communication and disk I/O operations. System overheads take up a large proportion of the Job Completion Time (JCT). We observed that allocating excessive computational resources incurs considerable system overheads, prolonging the JCT. The over-allocation of individual jobs not only prolongs their own JCTs but also likely makes other jobs suffer from under-allocation; thus, the average JCT is suboptimal as well. To address this problem, we propose a prediction model to estimate how the JCT of a single Spark job changes with its resource allocation. With the support of this prediction method, we designed a heuristic algorithm to balance the resource allocation of multiple Spark jobs, aiming to minimize the average JCT in multiple-job cases. We implemented the prediction model and resource allocation method in ReB, a Resource-Balancer based on Apache Spark. Experimental results showed that ReB significantly outperformed the traditional max-min fairness and shortest-job-optimal methods, decreasing the average JCT by around 10%-30% compared with the existing solutions.
E-commerce has dramatically reduced the limitations of space and time on economic activities, giving individuals access to a huge number of consumers. In this paper, we propose an optimal company size decision model containing management costs as a means of investigating the evolution of company size in e-commerce. Given that production decisions are made based on accessible market capacity, we explain how a company enters the market, and we draw an evolutionary path of the optimal company size. The results show that in the early expansion stage of accessible market capacity, a firm's optimal size keeps increasing; after reaching a peak, the change in a firm's optimal size depends on its cost management. When the accessible market capacity reaches a threshold, the firm will exit the market and may cease to exist. Finally, we construct a simulation framework based on complex adaptive systems to validate our proposed model. A simulation experiment confirms our model and reveals the dynamic co-evolution process of individual producers and firms.
In recent years, e-sports has developed rapidly, and the industry has produced large amounts of standardized data that are easy to obtain. Owing to these characteristics, data mining and deep learning methods can be used to guide players and develop appropriate strategies to win games. As one of the world’s most famous e-sports events, Dota2 has a large audience base and a good game system. Victory in a game is often associated with the hero lineup, and players are often unable to pick the best lineup to compete. To solve this problem, in this paper we present an improved bidirectional Long Short-Term Memory (LSTM) neural network model for Dota2 lineup recommendation. The model uses the Continuous Bag Of Words (CBOW) model from Word2vec to generate hero vectors. The CBOW model can predict the context of a word in a sentence. Accordingly, a word is mapped to a hero, a sentence to a lineup, and a word vector to a hero vector. The model then recommends the last hero according to the four heroes already selected, thereby solving the lineup recommendation problem.
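The word-to-hero analogy can be sketched as follows: each five-hero lineup is treated as a "sentence", yielding CBOW-style (context, target) training pairs, and a simple co-occurrence count stands in for the paper's bidirectional LSTM recommender. The hero names and lineups below are hypothetical.

```python
from collections import Counter

# Sketch of the word-to-hero analogy: a lineup is a "sentence", and each
# hero is predicted from the other four (CBOW-style context -> target
# pairs). The recommender below scores candidates by co-occurrence with
# the picked heroes, a stand-in for the paper's bidirectional LSTM.

def cbow_pairs(lineup):
    """Yield one (context, target) pair per hero in the lineup."""
    pairs = []
    for i, target in enumerate(lineup):
        context = lineup[:i] + lineup[i + 1:]   # the other four heroes
        pairs.append((context, target))
    return pairs

def recommend(lineups, first_four, candidates):
    """Recommend the fifth hero by co-occurrence with the picked four."""
    scores = Counter()
    for lineup in lineups:
        for hero in lineup:
            if hero in candidates:
                scores[hero] += sum(h in lineup for h in first_four)
    return max(candidates, key=lambda h: scores[h])
```

Feeding pairs like these to a CBOW model produces the hero vectors; the count-based recommender only illustrates the prediction task itself.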
Wireless sensor technology plays an important role in the military, medical, and commercial fields nowadays. The Wireless Body Area Network (WBAN) is a special application of the wireless sensor network to human health monitoring, through which patients can know their physical condition in real time and respond to emergencies in time. Data reliability, guaranteed by the trustworthiness of nodes in the WBAN, is a prerequisite for the effective treatment of patients. Therefore, authenticating the sensor nodes and the sink nodes in the WBAN is necessary. This paper proposes a lightweight Physical Unclonable Function (PUF)-based and cloud-assisted authentication mechanism for multi-hop body area networks, which, compared with the single-hop star network, enhances adaptability to human motion and the integrity of data transmission. This authentication mechanism can significantly reduce the storage overhead and resource loss in the data transmission process.
The development of autonomous driving has brought with it requirements for intelligence, safety, and stability. One example of this is the need to construct effective forms of interactive cognition between pedestrians and vehicles in dynamic, complex, and uncertain environments. Pedestrian action detection is a form of interactive cognition that is fundamental to the success of autonomous driving technologies. Specifically, vehicles need to detect pedestrians, recognize their limb movements, and understand the meaning of their actions before making appropriate decisions in response. In this survey, we present a detailed description of the architecture for pedestrian action recognition in autonomous driving, and compare the existing mainstream pedestrian action recognition techniques. We also introduce several datasets commonly used in pedestrian motion recognition. Finally, we present several suggestions for future research directions.
Research into the impact of road accidents on drivers is essential to effective post-crash interventions. However, due to limited data and resources, the current research focus is mainly on those who have suffered severe injuries. In this paper, we propose a novel approach to examining the impact that being involved in a crash has on drivers by using traffic surveillance data. In traffic video surveillance systems, the locations of vehicles at different moments in time are captured, and their headway, which is an important indicator of driving behavior, can be calculated from this information. We found a sudden increase in headway when drivers return to the road after being involved in a crash, but the headway returns to its pre-crash level over time. We further analyzed the duration of this decay using a Cox proportional hazards regression model, which revealed many significant factors (related to the driver, vehicle, and nature of the accident) behind the survival time of the increased headway. Our approach is able to reveal the impact of a crash on drivers in a convenient and economical way. It can enhance the understanding of the impact of a crash on drivers, and help to devise more effective re-education programs and other interventions to encourage drivers who are involved in crashes to drive more safely in the future.
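The Cox proportional hazards model used for the decay analysis has the standard form, where $h_0(t)$ is the baseline hazard and $x_1, \dots, x_p$ collect the driver, vehicle, and accident covariates:

```latex
h(t \mid x) = h_0(t)\,\exp\!\left(\beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p\right)
```

A covariate with $\beta_j > 0$ raises the hazard and thus shortens the expected survival time of the increased headway.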
Learning With Errors (LWE) is one of the Nondeterministic Polynomial (NP)-hard problems applied in cryptographic primitives against quantum attacks. However, the security and efficiency of LWE-based schemes are closely affected by the error sampling algorithms. The existing pseudo-random sampling methods potentially have security leaks that can fundamentally influence the security levels of previous cryptographic primitives. Given that these primitives are proved semantically secure, directly deducing the influence of sampling-algorithm leaks may be difficult. Thus, we attempt to use an attack model based on an automatic learning system to identify and evaluate the practical security level of a cryptographic primitive that is proved semantically secure in indistinguishability security models. In this paper, we first analyze the existing major sampling algorithms in terms of their security and efficiency. Then, concentrating on the Indistinguishability under Chosen-Plaintext Attack (IND-CPA) security model, we realize the new attack model based on the automatic learning system. The experimental data demonstrate that the sampling algorithms play a key role in LWE-based schemes, significantly disturbing the attack advantages and thus potentially compromising security considerably. Moreover, our attack model is achievable with acceptable time and memory costs.
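As a sketch of the error sampling step under discussion, here is toy LWE sample generation with a centered binomial sampler, one common constant-time alternative to Gaussian sampling; the parameters are illustrative and far smaller than any real scheme's.

```python
import random

# Toy LWE sample generation with a centered binomial error sampler.
# Real schemes use much larger n and q and cryptographic randomness;
# this only illustrates the b = <a, s> + e (mod q) structure.

def centered_binomial(eta, rng):
    """Sample e in [-eta, eta] as a sum of coin-flip differences."""
    return sum(rng.randint(0, 1) - rng.randint(0, 1) for _ in range(eta))

def lwe_sample(s, q, eta, rng):
    """Return one LWE pair (a, b) with b = <a, s> + e (mod q)."""
    n = len(s)
    a = [rng.randrange(q) for _ in range(n)]
    e = centered_binomial(eta, rng)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b
```

The security concern in the abstract is precisely the distribution of `e`: a biased or leaky sampler makes the pairs statistically distinguishable from uniform.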
The Multi-Key Fully Homomorphic Encryption (MKFHE) based on the NTRU cryptosystem is an important candidate for post-quantum cryptography due to its simple scheme form, high efficiency, and fewer ciphertexts and keys. In 2012, López-Alt et al. proposed the first NTRU-type MKFHE scheme, the LTV12 scheme, using the key-switching and modulus-reduction techniques, whose security relies on two assumptions: the Ring Learning With Errors (RLWE) assumption and the Decisional Small Polynomial Ratio (DSPR) assumption. However, the LTV12 and subsequent NTRU-type schemes are restricted to the family of power-of-2 cyclotomic rings, which may be vulnerable to subfield attacks. Moreover, the key-switching technique of the LTV12 scheme requires a circular application of evaluation keys, which causes rapid growth of the error and thus limits the circuit depth. In this paper, an NTRU-type MKFHE scheme over prime cyclotomic rings without key-switching is proposed, which has the potential to resist subfield attacks and decreases the error exponentially during the homomorphic evaluation process. First, based on the RLWE and DSPR assumptions over prime cyclotomic rings, a detailed analysis of the factors affecting the error during the homomorphic evaluations in the LTV12 scheme is provided. Next, a Low Bit Discarded & Dimension Expansion of Ciphertexts (LBD&DEC) technique is proposed, and the inherent homomorphic multiplication decryption structure of NTRU is exploited, which together eliminate the key-switching operation of the LTV12 scheme. Finally, a leveled NTRU-type MKFHE scheme is developed using the LBD&DEC and modulus-reduction techniques. The analysis shows that, compared with the LTV12 scheme, the proposed scheme can decrease the magnitude of the error exponentially and minimize the dimension of ciphertexts.
Network texts have become important carriers of cybersecurity information on the Internet. These texts include the latest security events such as vulnerability exploitations, attack discoveries, advanced persistent threats, and so on. Extracting cybersecurity entities from these unstructured texts is a critical and fundamental task in many cybersecurity applications. However, most Named Entity Recognition (NER) models are suitable only for general fields, and there has been little research focusing on cybersecurity entity extraction in the security domain. To this end, in this paper, we propose a novel cybersecurity entity identification model based on Bidirectional Long Short-Term Memory with Conditional Random Fields (Bi-LSTM with CRF) to extract security-related concepts and entities from unstructured text. This model, which we have named XBiLSTM-CRF, consists of a word-embedding layer, a bidirectional LSTM layer, and a CRF layer, and concatenates X input with bidirectional LSTM output. Via extensive experiments on an open-source dataset containing an office security bulletin, security blogs, and the Common Vulnerabilities and Exposures list, we demonstrate that XBiLSTM-CRF achieves better cybersecurity entity extraction than state-of-the-art models.
The cartoon animation industry has developed into a huge industrial chain with a large potential market involving games, digital entertainment, and other industries. However, due to the coarse-grained classification of cartoon materials, cartoon animators can hardly find relevant materials during the process of creation. The polar emotions of cartoon materials are an important reference for creators, as they can help them easily obtain the pictures they need. Some methods for obtaining the emotions of cartoon pictures have been proposed, but most of these focus on expression recognition, and other emotion recognition methods are not ideal for cartoon materials. We propose a deep learning-based method to classify the polar emotions of cartoon pictures in the "Moe" drawing style. According to the expression features of cartoon characters in this drawing style, we recognize the facial expressions of cartoon characters and extract the scene and facial features of the cartoon images. Then, we correct the emotions obtained by expression recognition according to the scene features. Finally, we obtain the polar emotion of the corresponding picture. We designed a dataset and performed verification tests on it, achieving 81.9% experimental accuracy. The experimental results prove that our method is competitive.
The hypersonic vehicle model is characterized by strong coupling, nonlinearity, and acute changes of aerodynamic parameters, which are challenging for control system design. This study investigates a novel compound control scheme that combines the advantages of the Fractional-Order Proportional-Integral-Derivative (FOPID) controller and Linear Active Disturbance Rejection Control (LADRC) for reentry flight control of hypersonic vehicles with actuator faults. First, given that the controller has adjustable parameters, the frequency-domain analysis-method-based parameter tuning strategy is utilized for the FOPID controller and LADRC method (FOLADRC). Then, the influences of the actuator model on the anti-disturbance capability and parameter tuning of the FOLADRC-based closed-loop control system are analyzed. Finally, the simulation results indicate that the proposed FOLADRC approach has satisfactory performance in terms of rapidity, accuracy, and robustness under the normal operating condition and actuator fault condition.
Social Influence Maximization Problems (SIMPs) deal with selecting k seeds in a given Online Social Network (OSN) to maximize the number of eventually-influenced users. This is done by using these seeds based on a given set of influence probabilities among neighbors in the OSN. Although the SIMP has been proved to be NP-hard, it is both submodular (reflecting a natural diminishing return) and monotone (the number of influenced users increases through propagation), which makes the problem suitable for approximation solutions. However, several special SIMPs cannot be modeled as submodular or monotone functions. In this paper, we look at several conditions under which non-submodular or non-monotone functions can be handled or approximated. One is a profit-maximization SIMP where seed selection cost is included in the overall utility function, breaking the monotone property. The other is a crowd-influence SIMP where crowd influence exists in addition to individual influence, breaking the submodular property. We then review several new techniques and notions, including double-greedy algorithms and the supermodular degree, that can be used to address special SIMPs. Our main results show that for a specific SIMP model, special network structures of OSNs can help reduce the time complexity of the SIMP.
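The double-greedy technique mentioned above can be sketched for unconstrained maximization of a non-negative (possibly non-monotone) submodular function; this is the deterministic variant, which guarantees a 1/3-approximation. The set function used below is a toy graph cut, not an influence model from the paper.

```python
# Deterministic double-greedy (Buchbinder et al.) for unconstrained
# maximization of a non-negative submodular function f. Scans the ground
# set once, growing X from the empty set and shrinking Y from the full
# set; X == Y at termination.

def double_greedy(ground, f):
    """Return a set approximating max f(S); f maps frozenset -> value."""
    X, Y = set(), set(ground)
    for i in ground:
        gain_add = f(frozenset(X | {i})) - f(frozenset(X))
        gain_del = f(frozenset(Y - {i})) - f(frozenset(Y))
        if gain_add >= gain_del:
            X.add(i)          # keeping i helps at least as much
        else:
            Y.remove(i)       # dropping i helps more
    return X
```

The randomized variant (choosing probabilistically between the two branches) improves the guarantee to 1/2, which is tight for this problem.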
The rapid development of wearable computing technologies has led to an increased involvement of wearable devices in people's daily lives. The main power sources of wearable devices are batteries, so researchers must ensure high performance while reducing power consumption and improving battery life. The purpose of this study is to analyze the new features of the Energy-Aware Scheduler (EAS) in the Android 7.1.2 operating system and its shortcomings in wearable application scenarios. We then propose an optimization scheme of the EAS scheduler for wearable applications, named the Wearable-Application-optimized Energy-Aware Scheduler (WAEAS). This scheme improves the accuracy of task workload prediction, the energy efficiency of central processing unit core selection, and load balancing. The experimental results presented in this paper verify the effectiveness of WAEAS.
Hardening reliability-critical gates in a circuit is an important step toward improving circuit reliability at low cost. However, accurately locating the reliability-critical gates is a key prerequisite for the efficient implementation of the hardening operation. In this paper, a probability-based calculation method for locating the reliability-critical gates in a circuit is described. The proposed method is based on the generation of input vectors and the sampling of reliability-critical gates using uniform non-Bernoulli sequences, and the criticality of gate reliability is measured in combination with the structural information of the circuit itself. Both the accuracy and the efficiency of the proposed method are illustrated by various simulations on benchmark circuits. The results show that the proposed method performs efficiently in terms of locating accuracy and algorithm runtime.
Most behavior models for Web applications focus on the sequencing of events, without regard for changes in parameters or elements and the relationship between the trigger conditions of events and Web pages. As a result, these models are not sufficient to effectively represent the dynamic behavior of Web 2.0 applications. Therefore, in this paper, to appropriately describe the dynamic behavior of the client side of Web applications, we define a novel Client-side Behavior Model (CBM) for Web applications and present a user behavior trace-based modeling method to automatically generate and optimize CBMs. To verify the effectiveness of our method, we conduct a series of experiments on six Web applications with three types of user behavior traces. The experimental results show that our modeling method can construct CBMs automatically and effectively, and that the resulting CBMs represent the dynamic behavior of Web applications more precisely.
Achieving faster performance without increasing power and energy consumption for computing systems is an outstanding challenge. This paper develops a novel resource allocation scheme for memory-bound applications running on High-Performance Computing (HPC) clusters, aiming to improve application performance without breaching peak power constraints and total energy consumption. Our scheme estimates how the number of processor cores and the CPU frequency setting affect application performance. It then uses this estimate to provide additional compute nodes to memory-bound applications if it is profitable to do so. We implement and apply our algorithm to 12 representative benchmarks from the NAS parallel benchmark and HPC Challenge (HPCC) benchmark suites and evaluate it on a representative HPC cluster. Experimental results show that our approach can effectively mitigate memory contention to improve application performance without significantly increasing the peak power and overall energy consumption. Our approach achieves an average performance improvement of 12.69% over the default resource allocation strategy while using 7.06% less total power, which translates into 17.77% energy savings.
Global Positioning System (GPS) trajectory data can be used to infer transportation modes at certain times and locations. Such data have important applications in many transportation research fields, for instance, to detect the movement mode of travelers, calculate traffic flow in an area, and predict the traffic flow at a certain time in the future. In this paper, we propose a novel method to infer transportation modes from GPS trajectory data and Geographic Information System (GIS) information. This method is based on feature extraction and machine learning classification algorithms. While using GIS information to improve inference accuracy, we ensure that the algorithm remains simple and easy to use on mobile devices. Applied to the GeoLife GPS trajectory dataset, our method achieves 91.1% accuracy in inferring transportation modes such as walking, bike, bus, car, and subway with the random forest classification algorithm. The GIS features in our method improve the overall accuracy by 2.5% while raising the recall of the bus and subway categories by 3.4% and 18.5%, respectively. We believe that many algorithms for detecting transportation modes from GPS trajectory data that do not utilize GIS information can improve their inference accuracy by using our GIS features, at a slight cost in data storage and computing resources.
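The feature-extraction step can be sketched in Python: raw (timestamp, latitude, longitude) fixes are turned into per-trip motion features such as mean/max speed and mean acceleration magnitude, which a classifier like random forest can consume. The abstract does not specify the exact feature set, so the features below are assumptions.

```python
import math

# Sketch of turning a raw GPS trajectory of (timestamp_s, lat, lon)
# fixes into per-trip motion features for a mode classifier. The chosen
# features (mean/max speed, mean acceleration) are illustrative.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def trip_features(points):
    """points: list of (t, lat, lon) sorted by time."""
    speeds = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(points, points[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(haversine_m(la0, lo0, la1, lo1) / dt)
    accels = [abs(v1 - v0) for v0, v1 in zip(speeds, speeds[1:])]
    return {
        "mean_speed": sum(speeds) / len(speeds),
        "max_speed": max(speeds),
        "mean_accel": sum(accels) / len(accels) if accels else 0.0,
    }
```

GIS features (e.g., distance to the nearest bus stop or subway entrance) would be appended to this feature vector before classification.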
Docker, as a mainstream container solution, adopts the Copy-on-Write (CoW) mechanism in its storage drivers. This mechanism satisfies the need of different containers to share the same image. However, when a single container performs operations such as modification of an image file, a duplicate is created in the upper read-write layer, which contributes to the runtime overhead. When the accessed image file is fairly large, this additional overhead becomes non-negligible. Here, we present Dynamic Prefetching Strategy Optimization (DPSO), which optimizes the CoW mechanism for Docker containers on the basis of a dynamic prefetching strategy. At the beginning of the container life cycle, DPSO pre-copies up the image files that are most likely to be copied up later, eliminating the overhead of performing this operation during application runtime. The experimental results show that DPSO achieves an average prefetch accuracy of greater than 78% in complex scenarios and can effectively eliminate the overhead caused by the CoW mechanism.
Virtualization is the most important technology in the unified resource layer of cloud computing systems. Static placement and dynamic management are two types of Virtual Machine (VM) management methods. VM dynamic management is based on the structure of the initial VM placement, and this initial structure affects the efficiency of VM dynamic management. When a VM fails, cloud applications deployed on the faulty VM will crash if fault tolerance is not considered. In this study, a model of initial VM fault-tolerant placement for star-topology data centers of cloud systems is built on the basis of multiple factors, including the service-level agreement violation rate, resource remaining rate, power consumption rate, failure rate, and fault tolerance cost. A heuristic ant colony algorithm is then proposed to solve the model: the service-providing VMs are placed by the ant colony algorithm, and the redundant VMs are placed by conventional heuristic algorithms. The experimental results obtained from simulation, real-cluster, and fault-injection experiments show that the proposed method achieves a better VM fault-tolerant placement solution than the traditional first-fit or best-fit-descending methods.
Hardware Trojans (HTs) have drawn increasing attention in both academia and industry because of their significant potential threat. In this paper, we propose HTDet, a novel HT detection method using information entropy-based clustering. To maintain high concealment, HTs are usually inserted in regions with low controllability and low observability, so Trojan logic exhibits extremely few transitions during simulation. This implies that regions with low transitions provide abundant and important information for HT detection. HTDet applies information-theoretic techniques and a density-based clustering algorithm called Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to detect all suspicious Trojan logic in the circuit under detection. DBSCAN is an unsupervised learning algorithm, which improves the applicability of HTDet. In addition, we develop a heuristic test pattern generation method using mutual information to increase the transitions of suspicious Trojan logic. Experiments on circuit benchmarks demonstrate the effectiveness of HTDet.
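The clustering step can be sketched as follows: each net is described by its simulation transition count, and a minimal one-dimensional DBSCAN groups ordinary nets into dense clusters while low-transition (suspicious) nets fall out as noise. The feature construction and parameters are illustrative assumptions, not the paper's.

```python
# Minimal 1-D DBSCAN sketch: points are per-net transition counts from
# simulation; dense groups of ordinary nets form clusters, and isolated
# low-transition nets are labeled -1 (noise, i.e., suspicious).

def dbscan_1d(xs, eps, min_pts):
    """Return labels: cluster id per point, -1 for noise."""
    labels = [None] * len(xs)
    cid = 0

    def neighbors(i):
        return [j for j, x in enumerate(xs) if abs(x - xs[i]) <= eps]

    for i in range(len(xs)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # noise: suspicious net
            continue
        labels[i] = cid                    # new core point
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid            # noise becomes border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nj = neighbors(j)
            if len(nj) >= min_pts:         # expand from core points only
                queue.extend(nj)
        cid += 1
    return labels
```

Being unsupervised, this needs no golden reference of Trojan-free transition profiles, which is what the abstract means by improved applicability.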