• Issue 9, 2022 Table of Contents
    • Cloud Task Scheduling System on RISC-V Heterogeneous Cluster Based on Kubernetes

      2022, 31(9):3-14. DOI: 10.15888/j.cnki.csa.008844

      Abstract (1162) HTML (1221) PDF 2.55 M (1718) Comment (0) Favorites

      Abstract:Having attracted wide attention in the field of cloud computing, the cluster container orchestration platform Kubernetes is widely used in service scenarios such as automatic deployment and release of containerized application services, elastic scaling, rolling updates and rollbacks, and fault detection and self-repair. The fifth-generation reduced instruction set computer architecture (RISC-V) is simple, modular, extensible, and open source, and it has attracted extensive attention from both academia and industry. Bridging the Kubernetes and RISC-V ecosystems, this study adds support for scheduling tasks across heterogeneous instruction set architectures (ISAs) to the Kubernetes scheduler. A quantitative analysis of the computing tasks targeting the RISC-V ISA in a production environment reveals that the existing Kubernetes scheduler cannot schedule RISC-V computing tasks and, in particular, fails to exploit the extended instruction sets defined by RISC-V developers to provide high-performance and reliable services. To solve these problems, this study proposes the ISAMatch model, which comprehensively considers instruction set affinity, the number of nodes with the same ISA, and node resource utilization to achieve optimal task allocation. Based on the existing cluster scheduler, this study improves its handling of tasks targeting multiple instruction set architectures. 
      Compared with the default scheduler, whose accuracy is 62% when scheduling RISC-V base instruction set tasks, 41% when scheduling RISC-V extended instruction set tasks, and 67% when scheduling RISC-V tasks carrying a “RISC-V” node-matching label, the ISAMatch model achieves a task scheduling accuracy of 100% when resource constraints are not considered.
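The paper does not give the ISAMatch scoring formula, so the sketch below only illustrates the three factors the abstract names (ISA affinity, same-ISA node count, node resource utilization); the weights, field names, and affinity rules are our assumptions.

```python
# Hypothetical sketch of an ISAMatch-style node score. All weights and the
# affinity rules are illustrative assumptions, not the paper's model.
def isa_match_score(task_isa, node, isa_node_counts, w=(0.5, 0.3, 0.2)):
    """Score a node for a task that needs a given ISA (e.g. 'rv64gc')."""
    # 1. ISA affinity: exact match > base-ISA compatible > no match.
    if node["isa"] == task_isa:
        affinity = 1.0
    elif node["isa"].startswith(task_isa.split("_")[0]):
        affinity = 0.5
    else:
        return 0.0                      # node cannot run the task at all
    # 2. Keep scarce ISAs free: prefer ISAs backed by more nodes.
    scarcity = 1.0 / isa_node_counts[node["isa"]]
    # 3. Prefer lightly loaded nodes.
    free = 1.0 - node["cpu_utilization"]
    return w[0] * affinity + w[1] * scarcity + w[2] * free

nodes = [
    {"name": "n1", "isa": "rv64gc", "cpu_utilization": 0.7},
    {"name": "n2", "isa": "rv64gc_v", "cpu_utilization": 0.2},
    {"name": "n3", "isa": "x86_64", "cpu_utilization": 0.1},
]
counts = {"rv64gc": 1, "rv64gc_v": 1, "x86_64": 1}
best = max(nodes, key=lambda n: isa_match_score("rv64gc", n, counts))
```

Here the exact-ISA node wins despite its higher load, while the x86-64 node is excluded outright, mirroring the abstract's claim that ISA matching dominates the placement decision.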

    • Optimization of Dynamic Jump Handling in QEMU Based on Address Space Identifier

      2022, 31(9):15-23. DOI: 10.15888/j.cnki.csa.008843

      Abstract (641) HTML (975) PDF 2.40 M (1105) Comment (0) Favorites

      Abstract:The continuous evolution of hardware and software demands higher execution performance from instruction set architecture emulators such as QEMU. This study analyzes the limitations of QEMU's existing dynamic jump handling mechanism when the emulated architecture supports virtual memory and proposes an optimized scheme based on address space identifiers that suits common virtual memory systems. The scheme is implemented for the RISC-V frontend of QEMU mainline version 6.2.0. Evaluation results show that the address-space-identifier-based dynamic jump scheme achieves an average performance improvement of 12% over native QEMU.
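As a rough illustration of the idea behind the optimization (not QEMU code), keying a translated-block cache by (address space identifier, virtual PC) lets translations for different address spaces coexist, so switching address spaces no longer forces retranslation:

```python
# Toy model of an ASID-aware translated-block cache. Names and structure
# are illustrative; QEMU's real TB cache is far more involved.
class TBCache:
    def __init__(self):
        self.cache = {}                 # (asid, pc) -> translated block
        self.translations = 0

    def lookup(self, asid, pc):
        key = (asid, pc)
        if key not in self.cache:
            self.translations += 1      # stand-in for a slow retranslation
            self.cache[key] = f"tb[{asid}:{hex(pc)}]"
        return self.cache[key]

cache = TBCache()
cache.lookup(asid=1, pc=0x1000)   # process A: translate once
cache.lookup(asid=2, pc=0x1000)   # process B: same VA, different code
cache.lookup(asid=1, pc=0x1000)   # back to A: cache hit, no flush needed
```

Without the ASID in the key, the two processes' identical virtual PCs would collide and each context switch would cost a retranslation.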

    • RISC-V Load Instruction Optimization Based on LLD

      2022, 31(9):24-30. DOI: 10.15888/j.cnki.csa.008841

      Abstract (729) HTML (1772) PDF 1.44 M (1417) Comment (0) Favorites

      Abstract:As a typical reduced instruction set computer (RISC) architecture, RISC-V also exhibits some of RISC's drawbacks, one of which is large program size. Compared with a complex instruction set computer (CISC), a RISC generally needs more instructions to implement complex operations, which results in larger program binaries. Meanwhile, the RAM and ROM in embedded devices are generally small, so the binary size of a program matters greatly in embedded scenarios. In view of this, the Zce sub-extension of RISC-V defines a series of instructions to reduce program size as much as possible; specifically, instructions represented by LWGP reduce the number of instructions used for loads and stores. This study analyzes how the LWGP instructions reduce code size, implements them in the LLD linker, evaluates their efficiency in reducing program binary size by comparing program sizes before and after LWGP is applied, and puts forward recommendations for improvement.
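A back-of-envelope model of the size saving: a load of a global that is within reach of the global pointer needs one 4-byte gp-relative instruction instead of a two-instruction address-materialization pair. The reach value and offsets below are illustrative assumptions, not figures from the Zce specification:

```python
# Rough size model of an LWGP-style gp-relative load relaxation.
# GP_REACH is an assumed immediate range, purely for illustration.
GP_REACH = 512 * 1024

def load_size(symbol_offset_from_gp, lwgp_enabled=True):
    """Bytes of code needed to load one global variable."""
    if lwgp_enabled and abs(symbol_offset_from_gp) < GP_REACH:
        return 4                        # single gp-relative load
    return 8                            # lui + lw pair materialising the address

offsets = [100, -2000, 40_000, 900_000]     # hypothetical globals
before = sum(load_size(o, lwgp_enabled=False) for o in offsets)
after = sum(load_size(o) for o in offsets)
saving = before - after                 # bytes removed by the relaxation
```

The linker performs this substitution at link time, once final symbol addresses (and thus gp-relative offsets) are known, which is why the paper implements it in LLD rather than in the compiler.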

    • Implementation of RISC-V Debugger Protocol Stack Based on Lightweight Remote Procedure Call

      2022, 31(9):31-38. DOI: 10.15888/j.cnki.csa.008842

      Abstract (613) HTML (1191) PDF 1.70 M (1454) Comment (0) Favorites

      Abstract:In recent years, as the RISC-V architecture has spread rapidly in industry thanks to its open-source, concise, and modular design, numerous processor IP cores and systems on chip (SoCs) based on RISC-V have emerged in the market. Debuggers are an important tool in developing RISC-V software, but existing ones suffer from low performance, high deployment cost, and difficult secondary development, and they struggle to meet the needs of RTL design and verification, software development and debugging, and mass-production programming of RISC-V-based chips. To solve these problems, this study proposes Morpheus, a new, open-source, and modularized RISC-V debugger protocol stack based on lightweight remote procedure calls. Experiments and analysis show that this debugger protocol stack effectively reduces deployment cost and the difficulty of secondary development while improving debugging performance.

    • Design of Cross-platform Memory Safety Test Suite

      2022, 31(9):39-49. DOI: 10.15888/j.cnki.csa.008840

      Abstract (740) HTML (730) PDF 1.38 M (1123) Comment (0) Favorites

      Abstract:Memory safety is critical but frequently violated. Numerous defense countermeasures have been proposed, but few of them can be applied in a production environment due to unbearable performance overhead. Recently, as open-source architectures such as RISC-V have emerged, interest in hardware extensions for memory safety has revived, and the performance overhead of hardware-enhanced defense techniques has become affordable. To support the systematic design of memory safety extensions, this study proposes a comprehensive and portable test framework for measuring the memory safety of a processor. The study also delivers an open-source initial test suite with 160 test cases covering spatial and temporal memory safety, access control, and pointer and control flow integrity. Furthermore, the test suite has been applied on several platforms with x86-64 or RISC-V64 processors.

    • Intelligent Teaching Platform Based on Microcontroller with RISC-V Processor Core

      2022, 31(9):50-56. DOI: 10.15888/j.cnki.csa.008839

      Abstract (770) HTML (1147) PDF 2.74 M (1300) Comment (0) Favorites

      Abstract:This study describes the practice of teaching embedded technology with the AHL-CH32V307 hardware system, which is based on the CH32V307 microcontroller with a RISC-V processor core from Nanjing Qinheng Microelectronics. Firstly, the study briefly introduces the knowledge system of embedded systems, lowers the high technical threshold of embedded development, and realizes an agile development ecosystem for embedded artificial intelligence. Then, the embedded development hardware is presented and tested. Through the intuitive experience of compiling, downloading, and running their first embedded program in a multi-functional embedded integrated development environment, students can start their journey of learning embedded systems. For the hardware in the development kit, the study describes the basic principles, circuit connections, and programming practice of common controlled units in embedded systems, such as color lights and infrared sensors, as well as the tree structure of the project. Furthermore, a simple and practical embedded object recognition system based on image recognition is designed with the CH32V307 microcontroller, which can serve as a quick-start system for artificial intelligence. The teaching cases in this study are applicable to embedded systems teaching or technical training in colleges and universities, and they can provide a reference for engineers developing embedded systems.

    • Survey
    • Survey on Log Anomaly Detection Based on Machine Learning

      2022, 31(9):57-69. DOI: 10.15888/j.cnki.csa.008661

      Abstract (828) HTML (3249) PDF 1.48 M (3003) Comment (0) Favorites

      Abstract:Log anomaly detection is a typical core application scenario of artificial intelligence for IT operations (AIOps) in today's data centers. With the rapid development and gradual maturity of machine learning, applying machine learning to log anomaly detection has become a research hot spot. Firstly, this study introduces the general procedure of log anomaly detection and points out the technical classifications and typical methods in each step. Secondly, it discusses the classifications and characteristics of machine learning techniques applied to log analysis tasks and probes into their technical difficulties in terms of log instability, noise interference, computation and storage requirements, and algorithm portability. Thirdly, related research in the field is summarized, and its technical characteristics are compared and analyzed. Finally, the study discusses future research directions for log anomaly detection from three aspects: log semantic representation, online model updating, and algorithm parallelism and versatility.

    • Medical Named Entity Recognition Based on Deep Learning

      2022, 31(9):70-81. DOI: 10.15888/j.cnki.csa.008708

      Abstract (644) HTML (3907) PDF 1.83 M (1724) Comment (0) Favorites

      Abstract:Medical named entity recognition refers to the extraction of key information from massive unstructured medical data, which lays a foundation for medical research and the popularization of smart medical systems. Deep learning uses deep nonlinear neural network structures to learn complex and abstract features, yielding more essential representations of the data, and deep learning models can significantly improve the effect of medical named entity recognition. First, this study introduces the particular difficulties and traditional methods of medical named entity recognition. Then, it summarizes deep-learning-based models and popular model improvements, including improved feature vectors and ways of dealing with difficulties such as scarce data and complex named entities. Finally, the study provides an outlook on future research directions through a comprehensive discussion.

    • High-performance Hash Table Indexing System Supporting Paging Memory

      2022, 31(9):82-90. DOI: 10.15888/j.cnki.csa.008664

      Abstract (616) HTML (941) PDF 1.83 M (973) Comment (0) Favorites

      Abstract:Hash tables are well known for their access efficiency and O(1) time complexity. As a data structure enabling efficient access to large-scale data, hash tables have been widely adopted by big data applications, for example, in various workloads and scenarios of the high-performance computing (HPC) and database domains. As the hardware performance of graphics processing units (GPUs) improves continuously, parallel hash table optimization for high-performance GPUs has attracted many researchers. According to our survey, most current methods for GPU-based hash table optimization rely on the large-scale thread scheduling and high memory bandwidth of GPUs to improve the concurrent processing of hash table transactions and fast key-value access. However, existing research on GPU hash table structures generally ignores the effective management of GPU resources, and no methods are available for making full use of GPU thread and memory resources. Moreover, owing to the limited GPU memory size, the space for storing hash table data is bounded, so larger hash tables cannot be handled. Thus, technical challenges remain in the scalability and performance optimization of GPU hash table designs. This study proposes Starfish, a hash table technology for GPUs that can process massive concurrent transactions. Starfish includes a novel “swap layer” technique based on asynchronous GPU streams, which supports dynamic hash tables beyond the GPU memory while guaranteeing high indexing performance. 
      To reduce the overhead of hash transaction conflicts caused by large-scale GPU thread access, we also design a class of compact data structures and a pageable memory allocation method, which provide not only the high performance of static hashing but also the high scalability of dynamic hashing. Our experimental evaluation shows that Starfish significantly outperforms other GPU-based hash table technologies, including cudpp-Hash and SlabHash.
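The “swap layer” idea can be caricatured on the CPU as a two-tier table: a bounded fast tier (standing in for GPU memory) spills entries to a slower tier and promotes them back on access. Class and method names are ours, and none of this reflects Starfish's actual GPU implementation:

```python
# Illustrative two-tier hash table mimicking a swap layer: the fast tier
# has a hard capacity, and overflow entries live in a slow tier.
class SwapHashTable:
    def __init__(self, fast_capacity):
        self.fast, self.slow = {}, {}
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        if key in self.slow:
            del self.slow[key]
        if len(self.fast) >= self.fast_capacity and key not in self.fast:
            # Evict a resident entry to the slow tier to make room.
            old_key, old_val = self.fast.popitem()
            self.slow[old_key] = old_val
        self.fast[key] = value

    def get(self, key):
        if key in self.fast:
            return self.fast[key]
        if key in self.slow:            # promote on access
            self.put(key, self.slow[key])
            return self.fast[key]
        return None

table = SwapHashTable(fast_capacity=2)
for i, k in enumerate("abc"):           # third insert exceeds the fast tier
    table.put(k, i)
```

The table keeps accepting inserts past the fast tier's capacity, which is the scalability property the abstract attributes to the swap layer; Starfish additionally overlaps the tier transfers with computation via asynchronous GPU streams.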

    • 3D Power Pipeline System Based on Digital Twin

      2022, 31(9):91-98. DOI: 10.15888/j.cnki.csa.008649

      Abstract (629) HTML (998) PDF 3.00 M (1162) Comment (0) Favorites

      Abstract:At present, power plants' supervision and management of the service state of power pipelines remains static and manual. On the one hand, information control is incomplete and untimely, which does not meet the requirements of safe production. On the other hand, traditional digital software is cumbersome to operate, limited in function, and pays little attention to maintenance information. In view of these problems, a 3D power pipeline system based on a digital twin five-dimensional model is proposed. Firstly, sketch modeling and parametric design are used for 3D pipeline reconstruction to quickly build a virtual entity and intuitively display the real scene of the system. Then, smart management technology is utilized to monitor operation data, predict the security state of pipelines, and integrate physical and virtual information. Finally, a prototype with a concise interface and practical functions is implemented. After deployment in actual power plants, the system shows an initial improvement in the efficiency of service and security management of power pipelines.

    • Proactive Diagnostic System for Persistent Packet Loss in Cloud Networks Based on Hyper-converged Infrastructure

      2022, 31(9):99-113. DOI: 10.15888/j.cnki.csa.008662

      Abstract (734) HTML (1468) PDF 3.45 M (1222) Comment (0) Favorites

      Abstract:Moving business to the cloud has been a recent trend, and COVID-19 has accelerated it. However, not all business is suitable for public clouds. For the sake of data privacy, many users, especially government users, prefer to build their own private or hybrid clouds in the post-COVID-19 world, and hyper-converged infrastructure (HCI) is a convenient way to achieve this goal. In HCI, computing, storage, and networking are all virtualized, which leads to higher resource utilization and easier deployment. Network elements are no longer present as tangible hardware in HCI but are implemented in software. To achieve better data forwarding performance under virtualization, many innovative technologies have arisen, among which DPDK has been widely studied and applied; with DPDK, developers can customize various network forwarding applications. Virtualization and DPDK greatly improve resource utilization and network forwarding performance, reducing the difficulty and cost of building data centers or private clouds for enterprises and institutions of various scales. However, this high level of virtualization also poses great challenges to network operation and maintenance owing to the loss of physical network entities. When a virtual network suffers a failure (e.g., packet loss), traditional diagnosis tools designed for hardware network equipment cannot locate and analyze the cause, resulting in a much longer mean time to repair (MTTR) and greater business loss. Even worse, the virtual network appears as a black box to network operators, which makes it vulnerable. To solve these problems, this study proposes Flowprobe, a proactive diagnostic system for persistent packet loss in HCI-based clouds, which detects and locates the causes of persistent packet loss in DPDK-based user-space virtual networks. 
      With this system, users gain a comprehensive view of how a packet traverses the virtual network, the actions performed on it, the positions where packets are lost, and the causes of the loss. Thorough evaluation proves that the system can handle 576 packet loss scenarios in virtual networks with good performance: the degradation of data forwarding performance does not exceed 1% when the system is running. The system has been deployed in HCI production environments for about three years and has helped solve many problems in virtual networks.

    • Optimization of Deep Learning Workload on Kubernetes Cluster

      2022, 31(9):114-126. DOI: 10.15888/j.cnki.csa.008672

      Abstract (535) HTML (1093) PDF 2.64 M (1290) Comment (0) Favorites

      Abstract:Owing to the rapid development of artificial intelligence (AI) technologies and the efficient deployment of AI applications on cloud-native platforms, an increasing number of developers and internet companies deploy AI applications on Kubernetes clusters. However, Kubernetes is not designed chiefly for deep learning, which, as a special field, requires customized optimization. For the scenario of deploying deep learning workloads on Kubernetes clusters of a certain scale, this study designs and implements a series of optimization schemes covering the data processing, graphics processing unit (GPU) computation, and distributed training that deep learning requires. These schemes reduce the difficulty of deploying AI workloads on large-scale cloud-native platforms and greatly improve operational efficiency, and practice verifies their significant benefit for AI applications.

    • Knowledge Reasoning Combining TuckER Embedding and Reinforcement Learning

      2022, 31(9):127-135. DOI: 10.15888/j.cnki.csa.008681

      Abstract (586) HTML (1193) PDF 1.40 M (1230) Comment (0) Favorites

      Abstract:Knowledge reasoning is an important method for knowledge graph completion, which aims to infer unknown facts or relations from the existing knowledge in the graph. As path information between entity pairs is not fully considered by most reasoning methods, their reasoning is inefficient and poorly interpretable. To solve this problem, this study proposes TuckER embedding with reinforcement learning (TuckRL), a knowledge reasoning method that combines TuckER embedding and reinforcement learning (RL). First, entities and relations are mapped to a low-dimensional vector space through TuckER embedding, and path reasoning is modeled as policy-guided RL in the knowledge graph environment. Then, an action pruning mechanism is introduced to reduce the interference of invalid actions during path walking, and an LSTM serves as the memory component preserving the agent's historical action trajectory. In this way, the agent can select valid actions more accurately and complete knowledge reasoning by interacting with the knowledge graph. Experiments on three mainstream large-scale datasets indicate that TuckRL outperforms most existing methods, demonstrating the effectiveness of combining embedding and RL for knowledge reasoning.
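The TuckER side of TuckRL scores a triple by contracting a shared core tensor with the subject, relation, and object embeddings. Below is a minimal sketch with illustrative dimensions and random parameters (a real model learns these by training, and the dimensions here are not the paper's):

```python
import numpy as np

# Minimal TuckER scoring sketch: score(s, r, o) = W x1 e_s x2 w_r x3 e_o,
# squashed through a sigmoid to give a plausibility for the triple.
rng = np.random.default_rng(0)
d_e, d_r = 4, 3                       # entity / relation embedding sizes
W = rng.normal(size=(d_e, d_r, d_e))  # shared core tensor
e_s = rng.normal(size=d_e)            # subject entity embedding
w_r = rng.normal(size=d_r)            # relation embedding
e_o = rng.normal(size=d_e)            # object entity embedding

score = np.einsum("irj,i,r,j->", W, e_s, w_r, e_o)
prob = 1.0 / (1.0 + np.exp(-score))   # sigmoid -> plausibility of the triple
```

Because the core tensor W is shared across all relations, knowledge is transferred between relations through it; in TuckRL these scores then inform the RL agent's walk over candidate paths.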

    • Intrusion Detection in Smart Grid Based on Unsupervised Learning

      2022, 31(9):136-144. DOI: 10.15888/j.cnki.csa.008657

      Abstract (628) HTML (1185) PDF 1.59 M (1034) Comment (0) Favorites

      Abstract:The smart grid (SG) is a technological evolution of the traditional grid that introduces information and communication technology (ICT) services. Although the use of ICT has advantages, it poses serious security challenges. In this study, we propose a security architecture for smart grid intrusion detection and a novel intrusion detection system (IDS) based on unsupervised learning. We design a block-training architecture, which not only reduces the computing burden on the data center but also learns the characteristics of local traffic. We also propose a variational autoencoder based on recursive feature elimination with cross-validation (RFECV-VAE), which combines RFECV for feature selection with a VAE for anomaly detection and can detect large-scale, high-dimensional data with high accuracy. Finally, we choose the deep autoencoder (DAE), deep autoencoding Gaussian mixture model (DAGMM), one-class support vector machine (OCSVM), isolation forest (IF), and VAE as comparison algorithms and use accuracy, ROC_AUC, F1_score, and training duration as performance metrics. The experimental results show that RFECV-VAE outperforms the comparison algorithms.

    • Electronic Bill of Lading System Based on Blockchain

      2022, 31(9):145-151. DOI: 10.15888/j.cnki.csa.008650

      Abstract (537) HTML (1458) PDF 1.28 M (1280) Comment (0) Favorites

      Abstract:As an important part of international logistics, the bill of lading can improve the efficiency of freight and capital circulation when applied to railway freight transportation. However, owing to low security and insufficient credibility, the current electronic bill of lading system cannot guarantee the rights of participants. Drawing on the decentralization, traceability, and programmability of blockchain, this study defines a framework for an electronic bill of lading system. Based on Hyperledger Fabric, the operation logic of the electronic bill of lading, such as issuance, audit, circulation, and pledge, is realized with multi-agency, multi-role participation in blockchain networks. Transaction information is stored on the chain and can be shared and queried in real time. Overall, this work effectively solves the problems of property rights confirmation and goods tracing.

    • Semantic Enhanced Multi-strategy Policy Term Extraction System

      2022, 31(9):152-158. DOI: 10.15888/j.cnki.csa.008693

      Abstract (481) HTML (605) PDF 7.32 M (963) Comment (0) Favorites

      Abstract:Policy terms are characterized by timeliness, low frequency, sparsity, and compound phrases. Because traditional term extraction methods struggle to meet these demands, we design and implement a semantically enhanced multi-strategy system for policy term extraction. The system models the features of policy texts along two dimensions: frequent item mining and semantic similarity. Feature seed words are selected by integrating multiple frequent pattern mining strategies, and low-frequency, sparse policy terms are recalled through pre-trained language models and enhanced semantic matching. Transforming a cold start without a thesaurus into a hot start with one, the system achieves semi-automatic extraction of policy terms. It can improve the effect of policy text analysis and provide technical support for building smart government service platforms.

    • Interactive Control Based on Gesture Recognition and End User Evaluation

      2022, 31(9):159-166. DOI: 10.15888/j.cnki.csa.008718

      Abstract (415) HTML (623) PDF 9.29 M (812) Comment (0) Favorites

      Abstract:Gesture recognition based on RGB images is widely used in human-computer interaction because of its low equipment requirements and convenient data collection. In RGB-image gesture recognition and interaction, on the one hand, illumination variation during collection lowers the efficiency of skin-color-based gesture segmentation; on the other hand, the interactive gestures as understood by users differ from those designed by designers, leading to poor user experience. This study systematically addresses these two problems. Firstly, users' cognition is linked with interactive gesture design principles to establish a gesture consensus set. Secondly, the gesture image is color-balanced, and an elliptical skin color model is used to segment the gesture area. Then, the binarized gesture images are fed into a MobileNet-V2 lightweight convolutional neural network to compute the gesture recognition rate. Combining end users' subjective evaluation of gestures with gesture recognition technology makes it possible to systematically design gestures for interactive tasks, reduce users' cognitive deviation in actual interaction, and improve the usability and efficiency of interactive systems.
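An elliptical skin color model of the kind mentioned classifies a pixel by whether its (Cr, Cb) chrominance falls inside an ellipse. The sketch below uses commonly cited ellipse parameters from the literature, not the paper's own fit:

```python
import math

# Elliptical skin-colour test in the CrCb plane. The centre, semi-axes,
# and rotation are commonly cited literature values, used here only to
# illustrate the technique.
CX, CY = 155.6, 113.4        # ellipse centre in (Cr, Cb)
A, B = 25.39, 14.03          # semi-axes
THETA = 2.53                 # rotation of the ellipse, in radians

def is_skin(cr, cb):
    # Rotate the point into the ellipse's frame, then apply the ellipse test.
    x = math.cos(THETA) * (cr - CX) + math.sin(THETA) * (cb - CY)
    y = -math.sin(THETA) * (cr - CX) + math.cos(THETA) * (cb - CY)
    return (x / A) ** 2 + (y / B) ** 2 <= 1.0
```

In practice each pixel of the color-balanced image is converted to YCrCb, tested this way, and the resulting binary mask is what the abstract feeds to MobileNet-V2. Working in CrCb discards luminance, which is what gives the model some robustness to illumination changes.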

    • Transplantation and Application of COMO Component Technology Based on RISC-V Platform and openEuler System

      2022, 31(9):167-172. DOI: 10.15888/j.cnki.csa.008699

      Abstract (543) HTML (669) PDF 1.06 M (1249) Comment (0) Favorites

      Abstract:With the development of cloud computing and Internet of Things (IoT) technology, the edge computing mode has begun to emerge. On the basis of the open-source instruction set architecture RISC-V and the openEuler operating system (OS), an open, flexible, evolving, and architecture-inclusive software ecosystem has gradually formed, providing a good software and hardware platform for building edge computing applications. However, the infrastructure for edge computing, such as software development environments, frameworks, and toolchains, is not yet complete. The component model (COMO) is a component technology that enables reuse of C++ software and assets, and combining it with RISC-V and openEuler benefits the development of the RISC-V and openEuler ecosystems under a new software development architecture. Therefore, this study presents a method for running COMO programs and porting the development environment on a software and hardware platform based on RISC-V and the openEuler OS, and experiments verify the compatibility and feasibility of COMO with RISC-V and openEuler. In addition, a simple example introduces the application of COMO's ServiceManager framework in edge computing. This work is a useful exploration of a component-based development mode that provides XaaS services for edge computing applications oriented toward cloud computing and the IoT.

    • Landmark Localization Algorithm for Medical Images Based on Topological Constraints and Feature Augmentation

      2022, 31(9):173-182. DOI: 10.15888/j.cnki.csa.008700

      Abstract (557) HTML (986) PDF 2.63 M (1069) Comment (0) Favorites

      Abstract:Existing landmark localization algorithms for medical images cannot make good use of the inherent characteristics of medical images and perceive their subtle features poorly. Therefore, this study proposes a landmark localization algorithm for medical images based on topological constraints and feature augmentation. It uses the invariant topological structure among landmarks to improve localization accuracy, and multi-resolution attention mechanisms and multi-branch dilated convolution modules are introduced into the network to extract augmented features. The network can thus pay more attention to important features and improve the perception of context features without increasing the amount of computation or the number of parameters. Experiments on public datasets demonstrate that the proposed method outperforms current mainstream algorithms on every metric and achieves higher accuracy.

    • Secure Swarm Attestation and Recovery Scheme for IoT Devices

      2022, 31(9):183-191. DOI: 10.15888/j.cnki.csa.008671

      Abstract (419) HTML (553) PDF 1.80 M (895) Comment (0) Favorites

      Abstract:Owing to the lack of security mechanisms in Internet of Things (IoT) devices, the IoT environment faces serious security challenges. Remote attestation can identify the authenticity and integrity of devices and establish trust in IoT devices remotely. Swarm attestation is an extension of remote attestation applicable to swarms composed of large numbers of devices; compared with traditional remote attestation, it relieves the verifier and improves verification efficiency. At present, swarm attestation is mainly used for static networks, and there is no efficient recovery mechanism for compromised devices. To solve these problems, this study proposes a secure swarm attestation and recovery scheme based on a reputation mechanism and Merkle trees. Firstly, we use the reputation mechanism to achieve a many-to-one attestation scheme, which effectively avoids single points of failure, allows attestation to be triggered from a device, and suits semi-dynamic networks. Secondly, we introduce Merkle trees for measurement, which can quickly and accurately identify code blocks compromised by malicious software and efficiently recover them. Finally, a security analysis and performance evaluation of the scheme are presented. The results show that the proposed swarm attestation improves security with acceptable performance overhead.
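Measuring code blocks with a Merkle tree lets a verifier localize a compromised block by descending only into subtrees whose hashes differ, instead of rehashing everything. A minimal sketch (helper names and the block granularity are our assumptions, not the paper's):

```python
import hashlib

# Merkle tree over code blocks, plus mismatch localisation by descending
# only into differing subtrees. Block count must be a power of two here.
def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle(blocks):
    """Return all tree levels, leaves first."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def find_bad_blocks(ref_levels, cur_levels):
    """Indices of leaves whose hashes differ from the reference tree."""
    bad, frontier = [], [0]              # start at the single root node
    for depth in range(len(ref_levels) - 1, -1, -1):
        ref, cur = ref_levels[depth], cur_levels[depth]
        frontier = [i for i in frontier if ref[i] != cur[i]]
        if depth == 0:
            bad = frontier               # reached the leaves
        else:                            # expand mismatched nodes to children
            frontier = [c for i in frontier for c in (2 * i, 2 * i + 1)]
    return bad

good = [b"blk0", b"blk1", b"blk2", b"blk3"]
tampered = [b"blk0", b"EVIL", b"blk2", b"blk3"]
bad = find_bad_blocks(merkle(good), merkle(tampered))
```

Only the differing leaf is reported, so recovery can re-flash just that block, which is the efficiency the abstract claims for the Merkle-tree measurement.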

    • Fuzzy Test Improvement Based on Clustering and New Coverage Information

      2022, 31(9):192-200. DOI: 10.15888/j.cnki.csa.008679

      Abstract (492) HTML (683) PDF 1.67 M (940) Comment (0) Favorites

      Abstract:Fuzzing plays a huge role in discovering software security vulnerabilities and improving software security. This study addresses the low efficiency of fuzzing mutation strategies and the shortcomings of seed scoring strategies, proposing a mutation optimization strategy based on clustering and an energy allocation strategy based on new coverage information. The first improvement extracts the positions of effective combined mutations from the new coverage generated by non-deterministic mutations, uses clustering algorithms to further determine the positions of effective mutations, and applies fine-grained deterministic mutations at those positions in the mutation stage. The second improvement targets the seed scoring strategy: the new coverage information generated by a seed and the branch transfer information from static analysis serve as important indicators of the seed's score. We compare the improved fuzzing tool AgileFuzz with existing tools such as AFL 2.52b, AFLFast, and EcoFuzz in multiple experiments on open-source programs such as binutils and libxml2. The results show that AgileFuzz achieves more branch coverage in the same amount of time, and five unknown vulnerabilities in fontforge, harfbuzz, and other open-source software were discovered during testing.

    • Inductive Text Classification Based on Gated Graph Attention Network

      2022, 31(9):201-209. DOI: 10.15888/j.cnki.csa.008703

      Abstract (453) HTML (1338) PDF 1.50 M (1066) Comment (0) Favorites

      Abstract:To effectively integrate complex features in text and extract different contextual information, this study proposes an inductive text classification method based on a gated graph attention network (TextIGAT). This method constructs a graph structure for each document in the corpus and takes all the words as nodes in the graph to preserve the complete text sequence. One-way connected document-level nodes are designed in the text graph, so that word nodes can interact with global information, and different contextual connection word nodes are merged to introduce more text information in a single text graph. Then, the representations of word nodes are updated utilizing a graph attention network (GAT) and a gated recurrent unit (GRU), and the sequential representation of nodes is enhanced by a bi-directional gated recurrent unit (Bi-GRU) according to the text sequence retained in the graph. TextIGAT can flexibly integrate information from text, which thus allows inductive learning on text with new words and relations. Extensive experiments on four benchmark datasets (MR, Ohsumed, R8, and R52) and detailed analysis prove the effectiveness of our proposed method on text classification.

    • Reliable Kalman Filter Used in Processing of Tunnel Construction Data

      2022, 31(9):210-216. DOI: 10.15888/j.cnki.csa.008761

      Abstract (380) HTML (792) PDF 1.29 M (820) Comment (0) Favorites

      Abstract:In view of the outliers and noise in tunnel structure safety monitoring data, which seriously affect subsequent analysis, a reliable Kalman filter algorithm with adaptive tracking for data denoising is proposed. Firstly, the least squares method with a sliding window is used to compensate the outliers. Secondly, the algorithm inherits Kalman's idea of step-by-step recursion and dynamically estimates the noise, which effectively solves the problem that the traditional Kalman filter cannot model accurately in the face of outliers and nonlinear systems. Finally, the monitoring data of a subway under construction in Beijing are used for numerical verification, and the results show that the proposed algorithm greatly improves accuracy compared with classical algorithms.
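      The paper's window size and rejection threshold are not given in the abstract; the following pure-Python sketch (parameters illustrative) shows the kind of sliding-window least-squares compensation it describes: fit a line to the previous window, and replace the current point with the extrapolated value when its residual is abnormally large.

```python
def lstsq_line(xs, ys):
    """Ordinary least-squares fit of a line y = k*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    return k, b

def compensate_outliers(series, window=5, thresh=3.0):
    """Replace points deviating more than thresh*sigma from the
    line fitted over the preceding window with the fitted value."""
    out = list(series)
    for i in range(window, len(out)):
        xs = list(range(window))
        ys = out[i - window:i]
        k, b = lstsq_line(xs, ys)
        pred = k * window + b  # extrapolate the fitted line to the current point
        resid = [abs(y - (k * x + b)) for x, y in zip(xs, ys)]
        sigma = max((sum(r * r for r in resid) / window) ** 0.5, 1e-6)
        if abs(out[i] - pred) > thresh * sigma:
            out[i] = pred  # compensate the outlier
    return out
```

Corrected values feed back into later windows, so a single spike does not contaminate subsequent fits.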

    • Fusion Expansion-dual Feature Extraction Applied to Few-shot Learning

      2022, 31(9):217-225. DOI: 10.15888/j.cnki.csa.008654

      Abstract (548) HTML (1235) PDF 1.44 M (1006) Comment (0) Favorites

      Abstract:The goal of few-shot image classification is to identify categories from a very small number of labeled samples. Two of the key issues are the scarcity of labeled data and unseen categories (the training and test categories are disjoint). In response, we propose a new few-shot classification model: the fusion expansion-dual feature extraction model. First, we introduce a fusion expansion (FE) mechanism, which uses the variation patterns between different samples of the same category among the seen categories to expand the support set, thereby increasing the number of support set samples and making the extracted features more robust. Second, we propose a dual feature extraction (DF) mechanism. A large amount of base-class data is first used to train two different feature extractors, a local feature extractor and a global feature extractor, which extract more comprehensive sample features. The local and global features are then compared to highlight the features that have the greatest impact on classification, thereby improving classification accuracy. Our model achieves good results on the Mini-ImageNet and Tiered-ImageNet datasets.

    • Environmental Perception Algorithm for Multi-task Autonomous Driving Based on YOLOv5

      2022, 31(9):226-232. DOI: 10.15888/j.cnki.csa.008698

      Abstract (866) HTML (4098) PDF 1.31 M (1758) Comment (0) Favorites

      Abstract:Autonomous driving is a popular field of deep learning research. As one of the most important modules in autonomous driving, environmental perception includes object detection, lane detection, and drivable area segmentation, which is extremely challenging and has far-reaching significance. Traditional deep learning algorithms usually solve only one detection task in environmental perception and cannot meet the need of autonomous driving to perceive multiple environmental factors simultaneously. In this study, YOLOv5 is used as the backbone network and object detection branch, combined with the real-time semantic segmentation network ENet for lane detection and drivable area segmentation, thereby achieving a multi-task environmental perception algorithm for autonomous driving. Moreover, α-IoU is employed in the loss calculation to improve regression accuracy and robustness to noise. Experiments show that on the BDD100K dataset, the proposed algorithm outperforms existing multi-task deep learning networks and reaches a speed of 76.3 FPS on a GTX1080Ti.
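      The α-IoU loss mentioned above generalizes the IoU loss by a power parameter: with α > 1, high-overlap (accurate) boxes are up-weighted, which is the source of the improved regression accuracy and noise robustness. A minimal sketch for axis-aligned boxes (α = 3 is a common choice, not necessarily the paper's):

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2); boxes assumed valid."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union

def alpha_iou_loss(box_a, box_b, alpha=3.0):
    """alpha-IoU loss: 1 - IoU**alpha; alpha = 1 recovers the plain IoU loss."""
    return 1.0 - iou(box_a, box_b) ** alpha
```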

    • Automatic Recognition of Chinese Compound Sentence Relation Based on BERT-FHAN Model and Sentence Features

      2022, 31(9):233-240. DOI: 10.15888/j.cnki.csa.008715

      Abstract (363) HTML (951) PDF 1.74 M (808) Comment (0) Favorites

      Abstract:The relations of compound sentences refer to the logical-semantic relations between clauses. Compound sentence relation recognition is therefore the identification of semantic relations between clauses, which is a difficult issue in natural language processing (NLP). Taking marked compound sentences as the research object, this study proposes a BERT-FHAN model. In this model, the BERT model is employed to obtain word vectors, and the HAN model is used to integrate the ontology knowledge of relational words, as well as part-of-speech features, syntactic dependency relations, and semantic dependency relations. The proposed model is verified by experiments, and the results indicate that the highest macro-average F1 value and accuracy of the BERT-FHAN model are 95.47% and 96.97%, respectively, which demonstrates the effectiveness of the method.

    • Secure OLSR Routing Protocol Based on Two-layer Fuzzy Logic Trust Model

      2022, 31(9):241-249. DOI: 10.15888/j.cnki.csa.008684

      Abstract (414) HTML (814) PDF 1.72 M (917) Comment (0) Favorites

      Abstract:Due to the open and decentralized characteristics of mobile ad hoc networks, traditional optimized link state routing (OLSR) trust models cannot clearly quantify the trust indexes of nodes and ignore the network environments of nodes. To solve these problems, this study proposes a two-layer fuzzy logic trust model based on environment-adaptive decision-making and builds the EFT-OLSR protocol on top of the OLSR protocol. The model is divided into a parameter extraction module, a two-layer fuzzy reasoning module, and a decision-making module. To start with, the node residual energy (P) is selected to prevent implicit selfish attacks; then, the improved two-layer fuzzy logic structure is used to limit the complexity of computing node trust indexes; finally, the trust threshold in the routing protocol is dynamically adjusted according to the network environment. Experiments show that the EFT-OLSR protocol is superior to the existing FT-OLSR trust model in packet delivery rate (PDR), average end-to-end delay, and packet loss rate.

    • Image Classification of River Water Surface Pollution Based on Grouped Convolution and Dual Attention Mechanism

      2022, 31(9):250-256. DOI: 10.15888/j.cnki.csa.008688

      Abstract (479) HTML (967) PDF 1.79 M (898) Comment (0) Favorites

      Abstract:Water surface pollutants are the main pollutants endangering river resources. Timely detection and treatment of water surface pollutants can effectively protect the river environment and water resources and further advance pollution reduction, carbon reduction, and the carbon sink capacity of the ecosystem. With the widespread promotion of intelligent systems, traditional monitoring and processing methods for water surface pollutants can no longer meet current needs. To address water surface pollution in the Liaohe River basin, this study applies computer vision technology to the classification of water surface pollution and proposes a classification module based on grouped convolution and the dual attention (GCDA) mechanism for images of water surface pollution. Specifically, a simplified dual attention mechanism is introduced into the network on the basis of grouped convolution, which uses fewer parameters to enhance the network's ability to extract image features and further improves the classification results. Images from five river monitoring cameras in the Liaohe River basin (the hot spring intake of Chengshui Station, the confluence of the Wangyinghe and Xihe Rivers, the Gaotaizi Section, the Jinyuan Sewage Outlet, and the overflow port of the Qingyuan Sewage Treatment Plant) are preprocessed by capturing frames at fixed positions. On this basis, a dataset of river water surface pollutants is established, with images categorized as polluted and unpolluted. Experiments indicate that compared with the original network and networks that add spatial and channel attention mechanisms separately, the network with the GCDA module achieves better performance on this dataset in the binary classification of water surface pollutant images.

    • Zero-watermarking Algorithm Constructed by Attention Mechanism and Autoencoder

      2022, 31(9):257-264. DOI: 10.15888/j.cnki.csa.008668

      Abstract (490) HTML (1200) PDF 4.74 M (1137) Comment (0) Favorites

      Abstract:Zero-watermarking technology is an effective means of protecting image copyright. However, most existing zero-watermarking algorithms use traditional mathematical theories to extract features manually, and research on zero-watermarking that extracts image features with neural networks remains limited, even though neural networks have achieved favorable results in image feature extraction. A deep attention mechanism and autoencoder (AMAE) model is proposed for constructing zero-watermarks by making full use of a convolutional autoencoder and the attention mechanism. Specifically, an attention-based convolutional neural network is used to construct and train an autoencoder. Subsequently, the global features of the image are constructed from the features output by the trained encoder. Finally, the obtained feature image is binarized to acquire a binary feature matrix, which is XORed with the watermark image to obtain a zero-watermark that is then registered in the intellectual property database. Once the zero-watermark is registered, the original image is under the protection of watermarking technology. During training, the idea of adversarial training is drawn on to train the model with noise, which improves its robustness. The experimental results show that the normalized correlation (NC) values between the extracted watermark image and the original watermark exceed 0.9 under rotation, noise, filtering, and other attacks, which proves the effectiveness and superiority of the proposed algorithm.
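      The XOR construction is the part of the pipeline that can be sketched independently of the neural feature extractor. Assuming (hypothetically) a mean-threshold binarization of the feature matrix, the zero-watermark is the XOR of the binary features and the watermark bits, and the same XOR recovers the watermark at verification time:

```python
def binarize(features, threshold=None):
    """Binarize a feature matrix; by default, threshold at its mean."""
    flat = [v for row in features for v in row]
    t = sum(flat) / len(flat) if threshold is None else threshold
    return [[1 if v >= t else 0 for v in row] for row in features]

def xor_mat(a, b):
    """Element-wise XOR of two equal-shape binary matrices."""
    return [[x ^ y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Construction: zero_wm = binarize(features) XOR watermark (registered; the
# original image itself is never modified, hence "zero"-watermarking).
# Extraction:  watermark = binarize(features of suspect image) XOR zero_wm.
```

Because XOR is its own inverse, the watermark is recovered exactly whenever the binary features are reproduced; robustness to attacks rests entirely on the stability of the extracted features.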

    • Insufficient SLP in GCC

      2022, 31(9):265-271. DOI: 10.15888/j.cnki.csa.008686

      Abstract (472) HTML (734) PDF 1.06 M (983) Comment (0) Favorites

      Abstract:As vector lengths increase, SIMD extensions can exploit greater data-level parallelism, but the parallelism threshold required of programs also rises. For current auto-vectorizing compilers, if enough data-level parallelism cannot be found in the scalar code to completely fill the vector registers during the analysis stage, the compiler does not enter the vector code transformation stage, and vectorization fails. Longer vectors therefore cause some programs with insufficient parallelism to lose the opportunity for vectorization, resulting in performance degradation. To make full use of SIMD components, this study introduces ISLP, a basic-block-oriented vectorization method for insufficient parallelism. Based on the GCC compiler, the design and implementation of ISLP are described in detail from three aspects: parallelism detection, code generation, and the cost model. Experiments on a standard test set show that this method can effectively vectorize programs with insufficient superword-level parallelism and improve program execution efficiency. The average speedup of the selected test cases after vectorization reaches 1.14, and the performance is 11.8% higher than that of the conventional SLP method.

    • Electric Power Named Entity Recognition Based on Topic Prompt

      2022, 31(9):272-279. DOI: 10.15888/j.cnki.csa.008750

      Abstract (456) HTML (930) PDF 2.39 M (1025) Comment (0) Favorites

      Abstract:Traditional named entity recognition methods can achieve favorable results owing to sufficient supervision data. As far as named entity recognition from electric power texts is concerned, however, the dependence on professional knowledge often makes it difficult to obtain sufficient supervision data, which is also known as a few-shot scenario. In addition, electric power named entity recognition is more challenging than general open domain tasks due to the accuracy requirements of the electric power industry and the more categories of entities in this industry. To overcome these challenges, this study proposes a named entity recognition method based on topic prompts. This method regards each entity category as a topic and uses the topic model to obtain topic words related to the category from the training corpus. Then, it fills in the template and constructs prompt sentences by enumerating entity spans, entity categories, and topic terms. Finally, the generative pre-trained language model is used to rank the prompt sentences and ultimately identify the entity and the corresponding category label. The experimental results show that on the dataset of Chinese electric power named entities to be recognized, the proposed method achieves better results than those offered by several traditional named entity recognition methods.

    • Incremental Text Clustering Algorithm for Hot Topic Detection

      2022, 31(9):280-286. DOI: 10.15888/j.cnki.csa.008677

      Abstract (511) HTML (1394) PDF 1.26 M (949) Comment (0) Favorites

      Abstract:As the traditional Single-Pass clustering algorithm is highly sensitive to the input sequence of data and has low accuracy, an incremental text clustering algorithm (SP-HTD) is proposed, which takes subtopics as granularity and considers the dynamics, timeliness, and contextual semantic features of news texts. Firstly, by parsing the LDA2Vec topic model, this study jointly trains the document vectors and the word vectors to obtain the context vectors and thus fully mines the semantic features and importance relationship of the text. Then, on the basis of the Single-Pass algorithm, sub-topics are classified according to the extracted hot topic feature words, and the time threshold is set to confirm the timeliness of the cluster center. The mined semantic features and tasks are combined to dynamically update the cluster center. Finally, with the assistance of the time characteristics, the centroid vectors of the topics are updated to improve the accuracy of text similarity calculation. The results reveal that the F value of the proposed method can reach up to 89.3%, and on the premise of ensuring the clustering accuracy, the proposed method has a significantly lower undetected rate and false detection rate compared with those of the traditional algorithm, and thus it can effectively improve the accuracy of topic detection.
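      The Single-Pass baseline that SP-HTD extends admits a compact sketch. This is the classic algorithm only (cosine similarity against cluster centroids, one pass over the stream), not the paper's sub-topic, time-threshold, or centroid-update refinements; the similarity threshold is illustrative:

```python
def cosine(u, v):
    """Cosine similarity of two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def single_pass(docs, threshold=0.8):
    """Assign each document vector to its most similar cluster, or start a
    new cluster when no centroid exceeds the threshold; centroids are
    running means of their members."""
    centroids, members = [], []
    for idx, vec in enumerate(docs):
        if centroids:
            sims = [cosine(vec, c) for c in centroids]
            best = max(range(len(sims)), key=sims.__getitem__)
            if sims[best] >= threshold:
                members[best].append(idx)
                n = len(members[best])
                centroids[best] = [(c * (n - 1) + x) / n
                                   for c, x in zip(centroids[best], vec)]
                continue
        centroids.append(list(vec))  # seed a new cluster
        members.append([idx])
    return members
```

The order sensitivity criticized in the abstract is visible here: the first document of each cluster fixes the initial centroid, which is precisely what the proposed semantic and temporal refinements aim to mitigate.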

    • Multi-objective Support Vector Machine and Its Application in Small Sample Fault Diagnosis

      2022, 31(9):287-293. DOI: 10.15888/j.cnki.csa.008716

      Abstract (398) HTML (526) PDF 1.59 M (918) Comment (0) Favorites

      Abstract:Support vector machines are theoretically simple and highly practical and are thus widely used in fault diagnosis. In analyzing the influence of support vector machine parameters on classification results, it is found that inappropriate parameter selection often leads to poor classification. The adverse effects of manual selection can be avoided by heuristic optimization. However, taking the equivalent interval distance alone as the optimization goal is prone to over-learning. Taking the equivalent interval distance, the number of support vectors, and the misclassification rate as simultaneous optimization objectives, this study proposes a multi-objective support vector machine method based on particle swarm optimization. Timed-restart and dynamic learning factor strategies are used to improve the global optimization ability of the algorithm, thereby reducing the overall structural risk. The proposed method is applied to the fault diagnosis of a complex diesel engine with strong correlation and coupling among multiple faults. The experimental results show that this method can effectively diagnose abnormal-noise faults of diesel engines in the case of small samples and incomplete or uncertain symptoms, and the comprehensive optimal solution obtained by screening better matches expectations.

    • User Energy Classification Based on Auto-learned Edge Weights Graph Convolution Network

      2022, 31(9):294-299. DOI: 10.15888/j.cnki.csa.008694

      Abstract (473) HTML (677) PDF 1.07 M (968) Comment (0) Favorites

      Abstract:User classification is an important method for energy consumption analysis, and the wide application of smart meters provides abundant data for user classification. To improve the accuracy of user classification and the extraction of energy consumption features, this study proposes a graph convolutional network (GCN) with self-learned edge weights for user classification. It converts the original energy consumption data into a graph through a special initialization layer with attention mechanisms and extracts energy consumption features from the generated graph. The network then outputs the user classes according to the features learned from the graph. Comparative experiments on a real energy consumption dataset show that the feature extraction of the proposed method is more intuitive and clear, and its classification performance is better than that of existing methods.
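      The edge-weight learning itself is model-specific, but the standard GCN propagation step it builds on can be sketched. This is the common Kipf-Welling style rule H' = ReLU(D^(-1/2)(A+I)D^(-1/2) · H · W), shown here in pure Python for tiny graphs; the paper's initialization layer and attention are not reproduced:

```python
def gcn_layer(adj, feats, weight):
    """One GCN propagation step with self-loops and symmetric degree
    normalization, followed by a ReLU non-linearity."""
    n = len(adj)
    # Add self-loops: A + I
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # Symmetric normalization: D^(-1/2) (A+I) D^(-1/2)
    a_norm = [[a[i][j] / (deg[i] * deg[j]) ** 0.5 for j in range(n)]
              for i in range(n)]

    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
                 for j in range(len(y[0]))] for i in range(len(x))]

    h = matmul(matmul(a_norm, feats), weight)
    return [[max(0.0, v) for v in row] for row in h]
```

Self-learning the edge weights amounts to making the entries of `adj` trainable instead of fixed, which is the departure point of the proposed method.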

    • Single-view 3D Reconstruction Based on Deep Learning

      2022, 31(9):300-305. DOI: 10.15888/j.cnki.csa.008685

      Abstract (631) HTML (2771) PDF 1.40 M (1267) Comment (0) Favorites

      Abstract:Single-view 3D reconstruction is a challenging problem in computer vision. To improve the accuracy of the 3D model reconstructed by the existing 3D reconstruction algorithm, this study extracts both global and local features of the image. On this basis, the signed distance function (SDF) is used to describe the reconstructed 3D objects. In this way, high-quality 3D shapes are generated, and the model has higher accuracy and enhanced generalization capability, which enables the deep model to reconstruct other types of objects with high quality. Experiments demonstrate that compared with the most advanced reconstruction algorithm at present, the proposed deep network and the method for representing 3D shapes have better performance in the effects of reconstructed 3D models and the generalization of new objects.
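      A signed distance function represents a shape implicitly: it maps a 3D point to its distance from the surface, negative inside and positive outside, with the surface as the zero level set. A minimal example for a sphere (the simplest analytic SDF, not the paper's learned one):

```python
def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface, positive outside."""
    d = sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5
    return d - radius
```

A learned single-view reconstruction model replaces this analytic function with a network that predicts the signed distance for any query point, so arbitrary-resolution surfaces can be extracted from the zero level set.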

    • Short-term Traffic Flow Prediction Based on DWT-GCN

      2022, 31(9):306-312. DOI: 10.15888/j.cnki.csa.008682

      Abstract (527) HTML (1780) PDF 1.68 M (1195) Comment (0) Favorites

      Abstract:Traffic prediction is one of the research focuses in intelligent transportation. This study proposes a short-term traffic flow prediction model based on the discrete wavelet transform (DWT) and graph convolutional networks (GCNs) to deeply explore the temporal and spatial features of traffic flow sequences and improve prediction accuracy. Firstly, the original traffic sequences are decomposed into detail and approximation components by the DWT to reduce the non-stationarity of traffic flow data. Secondly, the adjacency matrix of the GCN model is optimized by introducing a distance factor term to extract the spatial features of road networks. Finally, each group of components decomposed by the DWT is used separately as the input of the GCN model for prediction, and the prediction results of all groups are reconstructed to obtain the final prediction. The model is tested on the Caltrans PeMS dataset, and the results reveal that compared with the ARIMA, WNN, and GCN models, the proposed model reduces the mean absolute error (MAE) and mean absolute percentage error (MAPE) by 57% and 59%, respectively, which proves it to be an effective method for predicting short-term traffic flow.
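      The decompose-predict-reconstruct step can be illustrated with the simplest wavelet. The abstract does not name the wavelet family, so this sketch uses a one-level Haar DWT (pure Python; libraries like PyWavelets would normally be used): the approximation component carries the smooth trend, the detail component the fluctuations, and the inverse transform reconstructs the series exactly.

```python
def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    s = list(signal)
    if len(s) % 2:
        s.append(s[-1])  # pad odd-length input by repeating the last sample
    approx = [(s[i] + s[i + 1]) / 2 ** 0.5 for i in range(0, len(s), 2)]
    detail = [(s[i] - s[i + 1]) / 2 ** 0.5 for i in range(0, len(s), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar DWT: reconstructs the original samples."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / 2 ** 0.5)
        out.append((a - d) / 2 ** 0.5)
    return out
```

In the model described above, each component would be predicted by its own GCN and the per-component predictions combined through the inverse transform.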

    • Quality Analysis Technology for Face Images in Surveillance Videos

      2022, 31(9):313-318. DOI: 10.15888/j.cnki.csa.008696

      Abstract (435) HTML (958) PDF 1.50 M (807) Comment (0) Favorites

      Abstract:The research on quality analysis technology for face images in surveillance scenes is of great significance. Low-quality images collected from surveillance videos often feature blurred faces, improper head angles, and occlusion by other objects, and feeding them into a recognition system lowers its identification accuracy. To solve these problems, this work studies through experiments two important factors that affect image quality in surveillance scenes, namely, the face angle and image clarity. On this basis, a clustering-based quality analysis algorithm for face images is designed, and a method for scoring the quality of face images is proposed. Experiments prove that the technology can effectively filter out the low-quality images collected in surveillance videos and improve the accuracy of the face recognition system.

    • Fast Data Extraction for Numerical Weather Prediction Based on Decision Analysis

      2022, 31(9):319-323. DOI: 10.15888/j.cnki.csa.008683

      Abstract (410) HTML (534) PDF 1.37 M (938) Comment (0) Favorites

      Abstract:Traditional data extraction methods are usually inefficient. To address this problem, taking the massive data generated by semi-structured numerical weather prediction (NWP) products as the research object, we first design an exact-position-addressing algorithm with multi-processing to achieve accurate positioning of data blocks. Then, an extraction algorithm is designed to extract data in a spatial range on demand, namely, according to attribute dimensions and the latitude and longitude of the data. On the basis of these two algorithms, multi-process data reading under unified whole-process control is achieved. For testing, the time consumption of a single data plane is taken as the main assessment index, and 1, 4, 8, and 16 processes are employed for data processing. The test results reveal that processing with 16 processes is faster than with a single process, with the time consumption reduced from 257 ms to 37 ms. This method can effectively improve the efficiency of data extraction from semi-structured NWP products and has been put into use in decision analysis for urban governance.
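      Exact position addressing works because regular-grid NWP fields have a fixed record layout: a (lat, lon) point maps arithmetically to a byte offset, so a reader can `seek` directly to it instead of scanning the file. The abstract gives no file format, so the grid parameters and record layout below are hypothetical:

```python
def record_offset(lat, lon, *, lat0, lon0, dlat, dlon, n_lon,
                  header_bytes, value_bytes=4):
    """Map a (lat, lon) point to the byte offset of its value in a
    regular-grid binary file stored row-major, north to south."""
    i = round((lat0 - lat) / dlat)  # row index (grid stored north to south)
    j = round((lon - lon0) / dlon)  # column index, west to east
    return header_bytes + (i * n_lon + j) * value_bytes

# A reader would then do, e.g.:
#   with open(path, 'rb') as f:
#       f.seek(record_offset(lat, lon, **grid))
#       value, = struct.unpack('<f', f.read(4))
# and independent lat/lon sub-ranges can be handed to separate processes.
```

Because every offset is computed independently, the spatial range to extract can be partitioned across worker processes with no coordination beyond the final merge, which is what makes the 16-process speedup possible.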

    • Application of Improved Crow Search Algorithm in Air Transport Capacity Scheduling of Mail Distribution Center

      2022, 31(9):324-332. DOI: 10.15888/j.cnki.csa.008710

      Abstract (453) HTML (731) PDF 1.68 M (1017) Comment (0) Favorites

      Abstract:The scheduling of air transport capacity by a mail distribution center involves fixed capacity and alternative capacity. This study establishes an optimization model to minimize transportation costs under the premise of sufficient air transport capacity and proposes an improved crow search algorithm. First, depending on the mathematical model of the problem, the penalty function method is employed to convert some constraints into penalty terms which then form a fitness function together with the objective function. Second, the Logistic chaotic map is utilized to improve the diversity of the initial population. In addition, considering the characteristics of the problem, the study proposes a location update strategy based on the individual optimal follow mechanism and the sine-cosine algorithm. Finally, a cross-mutation mechanism is introduced to enrich population diversity in the search process. The effectiveness and superiority of the algorithm are proved by a large number of cases.

    • Deep Convolution Sticking Prediction Based on Joint Modeling of Multi-factor Long Time Series Information

      2022, 31(9):333-341. DOI: 10.15888/j.cnki.csa.008669

      Abstract (439) HTML (621) PDF 2.32 M (820) Comment (0) Favorites

      Abstract:To make full use of the long time-series information of multiple monitoring factors obtained by a drilling monitoring platform and achieve accurate prediction of sticking accidents in offshore oil drilling, this study proposes a deep convolutional sticking prediction method based on the joint modeling of multi-factor long time-series information (CNN-MFT). It uses the self-attention mechanism and a CNN to jointly model the time series of multiple monitoring factors, considering both the current value of each factor and its historical time series to achieve accurate sticking prediction. Verification and comparison are conducted on actual monitoring data from an offshore drilling platform. Compared with eight commonly used sticking prediction methods, such as those based on random forest (RF) and support vector machine (SVM), the proposed CNN-MFT achieves the best sticking accident prediction accuracy under different training sample proportions (e.g., 50% and 70%) and is also more stable. This method provides key algorithmic support for offshore oil accident prediction applications.

    • Joint Optimization of Subcontractor Options and Single-machine Batch Scheduling

      2022, 31(9):342-351. DOI: 10.15888/j.cnki.csa.008719

      Abstract (343) HTML (518) PDF 1.71 M (868) Comment (0) Favorites

      Abstract:Outsourcing with multiple subcontractors is a major operational management challenge for today’s manufacturing firms. The joint decision-making between outsourcing options and in-house scheduling is crucial to the cost reduction and efficiency increase of these firms. To jointly optimize single-machine batch scheduling with multiple subcontractors available for job outsourcing, this study constructs a 0-1 integer programming model, the objective of which is to minimize the sum of total outsourcing cost and total in-house batch processing cost under the premise that both the total outsourcing cost and the latest leading time for outsourcing jobs are subject to upper limits. An improved genetic algorithm and a greedy algorithm are also designed for joint optimization. The study takes the joint decision-making scenario of outsourcing and batch scheduling in a ceramic enterprise as an example and compares the solution performance of the two algorithms. The improved genetic algorithm shows its comparative advantages in terms of solution quality and efficiency. The results of a sensitivity experiment show that the latest leading time for outsourcing jobs has a significant impact on the total operating cost, while the upper limit of the total outsourcing cost does not significantly influence the total operating cost.
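      The cost trade-off at the heart of the model can be illustrated with a drastically simplified greedy sketch. This is not the paper's 0-1 program (which handles multiple subcontractors, lead-time limits, and batch sequencing); it assumes a single outsourcing cost per job, a fixed in-house batch capacity and batch cost, and an outsourcing budget, all hypothetical:

```python
import math

def plan_outsourcing(out_costs, batch_cap, batch_cost, budget):
    """Consider outsourcing the k cheapest-to-outsource jobs for each k
    (a greedy ordering), keeping the rest in-house in batches of size
    batch_cap at batch_cost per batch, subject to the outsourcing budget.
    Returns (minimum total cost, number of jobs outsourced)."""
    costs = sorted(out_costs)  # cheapest-to-outsource jobs first
    best = (math.inf, 0)
    spent = 0
    for k in range(len(costs) + 1):
        if k:
            spent += costs[k - 1]
            if spent > budget:  # outsourcing budget exhausted
                break
        in_house = math.ceil((len(costs) - k) / batch_cap) * batch_cost
        total = spent + in_house
        if total < best[0]:
            best = (total, k)
    return best
```

Even this toy version shows the non-trivial interaction: outsourcing one more job only pays off when it eliminates an entire in-house batch, which is why the paper resorts to integer programming and metaheuristics for the full problem.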

    • Extractive Machine Reading Comprehension Model with Explicitly Fused Lexical and Syntactic Features

      2022, 31(9):352-359. DOI: 10.15888/j.cnki.csa.008717

      Abstract (371) HTML (1244) PDF 1.70 M (1116) Comment (0) Favorites

      Abstract:Language models obtained by pre-training unstructured text alone can provide excellent contextual representation features for each word, but cannot explicitly provide lexical and syntactic features, which are often the basis for understanding overall semantics. In this study, we investigate the impact of lexical and syntactic features on the reading comprehension ability of pre-trained models by introducing them explicitly. First, we utilize part of speech tagging and named entity recognition to provide lexical features and dependency parsing to provide syntactic features. These features are integrated with the contextual representation from the pre-trained model output. Then, we design an adaptive feature fusion method based on the attention mechanism to fuse different types of features. Experiments on the extractive machine reading comprehension dataset CMRC2018 show that our approach helps the model achieve 0.37% and 1.56% improvement in F1 and EM scores, respectively, by using explicitly introduced lexical and syntactic features at a very low computational cost.

    • Object-oriented Building Extraction Based on Feature Optimization

      2022, 31(9):360-367. DOI: 10.15888/j.cnki.csa.008712

      Abstract (377) HTML (820) PDF 3.50 M (795) Comment (0) Favorites

      Abstract:Compared with pixel-based building extraction methods, object-oriented methods can reduce the phenomena of “the same spectrum for different objects” and “different spectra for the same object” and improve extraction accuracy. To address the curse of feature dimensionality due to numerous features of remote sensing images, this study proposes an object-oriented feature optimization method for building extraction. First of all, minimum error automatic threshold segmentation is combined with multi-scale segmentation to optimize the segmentation technology. Then, features are selected by the Relief algorithm and fast correlation-based filter (FCBF) algorithm to construct the optimal feature subset. Finally, buildings are extracted by the random forest method, and building boundaries are optimized by the minimum bounding rectangle method. The results show that the importance of features varies greatly. An overall accuracy of 0.93 is achieved by building extraction based on the optimal feature subset, and the Kappa coefficient is 0.91, which is significantly higher than the extraction results of the original feature set and the optimized feature set.

    • Left Ventricular Myocardium Segmentation Method of Cardiac Cine-MRI Based on Optical Flow and Semantic Feature Fusion

      2022, 31(9):368-375. DOI: 10.15888/j.cnki.csa.008697

      Abstract (518) HTML (1226) PDF 2.26 M (1073) Comment (0) Favorites

      Abstract:In magnetic resonance imaging (MRI) of living hearts, the edges of the left ventricular endocardium and myocardium are blurred due to movement, which results in inaccurate segmentation. To address this problem, we propose a left ventricular myocardium segmentation model OSFNet of 4D cardiac Cine-MRI based on the optical flow field and semantic feature fusion. The model includes the optical flow field calculation and semantic segmentation network, where the motion features calculated by the optical flow field are fused with the semantic features of images to achieve the optimal segmentation effect through network learning. The model employs the encoder-decoder architecture, and the proposed multi-receptive field module with average pooling is used to extract multi-scale semantic features and reduce feature losses. The decoder uses the multi-path up-sampling method and skip connections to ensure that semantic features are effectively restored. Then, the open dataset ACDC is applied to train and test the model, and the proposed model is compared with DenseNet and U-Net by the experiments of the left ventricular endocardium segmentation and the left ventricular endocardium and myocardium segmentation. Experimental results indicate that OSFNet achieves the best performance in several indicators such as Dice and HD.

    • Prediction of Venous Thrombosis Based on Fusion Model of DeepFM and XGBoost

      2022, 31(9):376-381. DOI: 10.15888/j.cnki.csa.008691

      Abstract (510) HTML (1093) PDF 1.14 M (870) Comment (0) Favorites

      Abstract:The peripherally inserted central catheter (PICC) technology is widely used in medium- and long-term venous treatment, but it can cause various complications and adverse reactions, such as PICC-related thrombosis. The continuous development of machine learning and deep neural networks provides a solution for the assisted diagnosis of PICC-related thrombosis based on clinical medical data. In this study, a fusion model of DeepFM and XGBoost is constructed to predict the risk of PICC-related thrombosis, which can perform feature fusion on sparse data and reduce over-fitting. Experiments reveal that the fusion model can effectively extract the feature importance of PICC-related thrombosis, predict the probability of disease, help clinicians identify high-risk factors for PICC-related thrombosis, and enable timely intervention to prevent thrombosis.
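One common way to fuse two models' outputs is to blend their predicted probabilities; the sketch below is a hedged illustration of that idea with made-up weights and probabilities, and the paper's actual DeepFM/XGBoost fusion scheme may differ:

```python
# Late-fusion sketch: blend two risk probabilities into one score.
# The weight w and the input probabilities are illustrative assumptions.

def fuse(p_deepfm, p_xgb, w=0.5):
    """Weighted average of two predicted thrombosis probabilities."""
    return w * p_deepfm + (1 - w) * p_xgb

# Suppose DeepFM predicts 0.8 and XGBoost 0.6, with DeepFM trusted more.
risk = fuse(0.8, 0.6, w=0.7)  # 0.7*0.8 + 0.3*0.6 = 0.74
```

In practice the blending weight would be chosen on a validation split rather than fixed by hand.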

    • Indoor Monocular Depth Estimation by Fusing 2D LiDAR

      2022, 31(9):382-388. DOI: 10.15888/j.cnki.csa.008690

      Abstract (515) HTML (798) PDF 2.08 M (1115) Comment (0) Favorites

      Abstract:The depth information of a scene is very important in indoor monocular vision navigation tasks. However, monocular depth estimation is an ill-posed problem with low accuracy. At present, 2D LiDAR is widely used in indoor navigation tasks and is inexpensive. Therefore, we propose an indoor monocular depth estimation algorithm that fuses 2D LiDAR to improve the accuracy of depth estimation. Specifically, 2D LiDAR feature extraction is added to the encoder-decoder structure, and skip connections are used to preserve more detailed information for monocular depth estimation. Additionally, a channel attention mechanism is presented to fuse 2D LiDAR features and RGB image features. The algorithm is verified on the public dataset NYUDv2, and a depth dataset with 2D LiDAR data is established for the algorithm's application scenarios. Experiments indicate that the proposed algorithm outperforms state-of-the-art monocular depth estimation on both the public and the self-made datasets.
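The channel attention idea can be illustrated with a squeeze-and-excitation style gate in plain Python; the shapes, gating function, and toy values below are assumptions for illustration, not the paper's exact design:

```python
import math

# Squeeze-and-excitation sketch: each channel is "squeezed" to its global
# average, a sigmoid turns that into a gate in (0, 1), and the gate rescales
# the channel. Informative channels are kept; weak ones are suppressed.

def channel_attention(channels):
    """Gate each channel (a flat list of activations) by sigmoid(mean)."""
    gates = [1.0 / (1.0 + math.exp(-sum(c) / len(c))) for c in channels]
    return [[g * v for v in c] for g, c in zip(gates, channels)]

# Two RGB-derived channels followed by one LiDAR-derived channel (toy values).
fused_input = [[0.2, 0.4], [1.0, 3.0], [-2.0, -2.0]]
out = channel_attention(fused_input)
```

In a real network the gate would come from a small learned MLP over the pooled vector rather than a fixed sigmoid of the mean, but the rescaling mechanism is the same.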

    • Application of Clustering in Data Characteristic Analysis of Wind Profiler Radar

      2022, 31(9):389-395. DOI: 10.15888/j.cnki.csa.008759

      Abstract (422) HTML (585) PDF 1.22 M (811) Comment (0) Favorites

      Abstract:Understanding the general characteristics of wind profiler radar data under different weather conditions is of great significance for improving the quality of weather forecast services. Using clustering technology from data mining and taking the hourly observation data obtained by the wind profiler radar in Jinghai, Tianjin Municipality as the research object, this study builds a cluster analysis model for the characteristics of wind profiler radar data. On this basis, the distinct characteristics of the maximum detection height and maximum vertical velocity of the Jinghai wind profiler radar are mined under sunny, cloudy, pre-precipitation, precipitation, and post-precipitation conditions. This study provides a new reference for weather forecast services and offers a new idea for the characteristic analysis of wind profiler radar data.
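The clustering step can be sketched with a one-dimensional k-means on toy values standing in for hourly maximum detection heights; k, the data, and the resulting groups below are illustrative assumptions, not the paper's actual radar data:

```python
# Plain k-means on 1-D values: alternate assigning each value to its nearest
# center and moving each center to the mean of its assigned values.

def kmeans_1d(values, centers, iters=10):
    """Return converged centers and the final clusters."""
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Empty clusters keep their previous center.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

heights = [3.1, 3.3, 3.0, 8.9, 9.2, 9.0]   # toy "max detection heights" (km)
centers, groups = kmeans_1d(heights, [2.0, 10.0])
```

The two recovered centers would then be inspected against the weather labels (sunny, precipitation, and so on) to characterize each regime.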

    • Simulation on Emergency Evacuation in Multi-storey Buildings

      2022, 31(9):396-402. DOI: 10.15888/j.cnki.csa.008695

      Abstract (498) HTML (670) PDF 1.33 M (943) Comment (0) Favorites

      Abstract:This work studies emergency evacuation in multi-storey buildings. On the basis of an improved path planning algorithm, multi-agent technology is applied to the communication between models. Robots with a configurable number and initial positions sense the surrounding environment and conduct intelligent search and rescue for people trapped indoors in a disaster; real-time on-site data are then collected for decision analysis. Specifically, the robots sense changes in field conditions in real time, guide the evacuation, and transmit real-time data to rescuers for further rescue measures. The results reveal that the three-dimensional simulation technology can effectively reduce casualties during evacuation and provide a reference for formulating the optimal rescue plan, which offers practical guidance.
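A basic path planning step of the kind the paper builds on can be sketched as breadth-first search on a toy floor grid; the layout and obstacle positions are illustrative, not the paper's building model:

```python
from collections import deque

# BFS finds the fewest-step route on an unweighted grid, which makes it a
# simple baseline planner for an evacuation route around blocked cells.

def shortest_path(grid, start, exit_):
    """Return the step count of the shortest walkable path, or -1 if none."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), steps = queue.popleft()
        if (r, c) == exit_:
            return steps
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), steps + 1))
    return -1

floor = [[0, 0, 0],
         [1, 1, 0],   # 1 = blocked by debris
         [0, 0, 0]]
steps = shortest_path(floor, (0, 0), (2, 0))
```

An "improved" planner would typically replace BFS with A* or a dynamic variant that replans as the robots report changing field conditions.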

    • Short Text Classification Model of GM-FastText Multi-channel Word Vector

      2022, 31(9):403-408. DOI: 10.15888/j.cnki.csa.008648

      Abstract (476) HTML (750) PDF 1.05 M (912) Comment (0) Favorites

      Abstract:To tackle the problems in short text classification, such as the difficult extraction of sparse text features and out-of-vocabulary (OOV) words caused by non-standard wording, this study proposes a short text classification model, GM-FastText, based on FastText multi-channel embedded word vectors and a GRU-MLP hybrid network architecture (GM) built from a gated recurrent unit (GRU) and a multi-layer perceptron (MLP). The model uses FastText to generate different embedded word vectors in N-gram mode and feeds them into the GRU and MLP layers to obtain short text features. After the GRU extracts text features and the MLP layer performs hybrid extraction of the features from different channels, the features are finally mapped to each class. The experimental results show that compared with TextCNN and TextRNN, the GM-FastText model improves the F1 score by 0.021 and 0.023 and accuracy by 1.96 and 2.08 percentage points, respectively. Moreover, compared with FastText, FastText-CNN, and FastText-RNN, GM-FastText improves the F1 score by 0.006, 0.014, and 0.016 and accuracy by 0.42, 1.06, and 1.41 percentage points, respectively. In short, with the FastText multi-channel word vectors and the GM hybrid network, the multi-channel word vectors provide better word vector expression in short text classification, and the GM network structure performs better for multi-parameter feature extraction.
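FastText mitigates OOV words through character n-grams shared between seen and unseen words; the sketch below shows the n-gram split using FastText's `<`/`>` boundary-marker convention, on illustrative example words:

```python
# An OOV word still gets a vector in FastText because it is represented as
# the sum of its character n-gram vectors, many of which it shares with
# in-vocabulary words.

def char_ngrams(word, n=3):
    """Split '<word>' into overlapping character n-grams."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

grams = char_ngrams("cat")                        # ['<ca', 'cat', 'at>']
shared = set(char_ngrams("cats")) & set(grams)    # subwords an OOV form reuses
```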

    • Simulation Model of Multi-rotor UAV Based on Unity Physics Engine

      2022, 31(9):409-415. DOI: 10.15888/j.cnki.csa.008701

      Abstract (695) HTML (2339) PDF 2.26 M (1334) Comment (0) Favorites

      Abstract:This study constructs a mathematical model of the unmanned aerial vehicle (UAV) operation mode for simulation analysis by using the Unity physics engine and taking a quad-rotor UAV as the research object. Specifically, the force state of the real object is simulated by applying the force model of the UAV directly at the corresponding positions of the motors, which eliminates the need for rigid-body mathematical modeling and simplifies the simulation modeling process. Additionally, the power system and control system of the UAV are modeled by analyzing the UAV motion principle, with the control system using a cascaded PID control algorithm for attitude control. Finally, the stability and validity of the UAV model are verified by flight experiments, and the model meets the simulation requirements of quad-rotor UAVs.
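The cascaded PID idea, where an outer attitude loop produces the setpoint for an inner angular-rate loop, can be sketched as follows; the gains, measurements, and time step are toy assumptions, not the paper's tuned controller:

```python
# Cascade control sketch: outer loop (angle error -> desired rate) feeds the
# inner loop (rate error -> motor correction). Values are illustrative.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

angle_loop = PID(kp=2.0, ki=0.0, kd=0.0)   # outer: attitude
rate_loop = PID(kp=1.5, ki=0.2, kd=0.0)    # inner: angular rate

# Outer loop turns the attitude error into a desired angular rate...
rate_cmd = angle_loop.step(setpoint=10.0, measured=4.0, dt=0.01)
# ...which the inner loop tracks to produce a motor correction.
motor_cmd = rate_loop.step(setpoint=rate_cmd, measured=0.0, dt=0.01)
```

The cascade structure works because the inner rate loop reacts much faster than the outer attitude loop, so the outer loop can treat the rate as directly commandable.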

Contact
  • Computer Systems & Applications (《计算机系统应用》)
  • Founded in 1992
  • Sponsor: Institute of Software, Chinese Academy of Sciences
  • Postal code: 100190
  • Phone: 010-62661041
  • Email: csa (a) iscas.ac.cn
  • Website: http://www.c-s-a.org.cn
  • ISSN 1003-3254
  • CN 11-2854/TP
  • Domestic price: CNY 50
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-3
Address: 4# South Fourth Street, Zhongguancun, Haidian, Beijing; Postal code: 100190
Phone: 010-62661041 Email: csa (a) iscas.ac.cn
Technical Support:Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063