• Current Issue
  • Online First
  • Archive
  • Click Rank
  • Most Downloaded
  • Review Articles
    2024,33(5):1-14, DOI: 10.15888/j.cnki.csa.009517
    [Abstract] (183) [HTML] (146) [PDF 3.16 M] (325)
    Abstract:
This study proposes an algorithm named DPCP-CROSS-JOIN for fast co-spatiotemporal relationship join queries of large-scale trajectory data in environments with insufficient cluster computing resources. The proposed algorithm discretizes continuous trajectory data by segmenting and cross-coding the temporal fields of trajectory data and applying spatial grid coding, and it then stores the data in two-level partitions keyed by date and grid region code. It achieves 3-level indexing and 4-level acceleration for spatiotemporal join queries through cross “equivalent” join queries. As a result, the time complexity of the co-spatiotemporal relationship join queries among n×n object pairs is reduced from O(n²) to O(nlogn). The algorithm improves the efficiency of join queries by up to 30.66 times when Hive and TEZ are used on a Hadoop cluster for join queries of large-scale trajectory data. It uses time-slice and grid coding as the join condition, thereby bypassing the real-time evaluation of complex expressions during the join. Moreover, replacing complex-expression joins with “equivalent” joins improves the parallelism of MapReduce tasks and raises the utilization of cluster storage and computing resources. Similar tasks on larger trajectory datasets that are almost impossible to accomplish with general optimization methods can still be completed by the proposed algorithm within a few minutes. The experimental results suggest that the proposed algorithm is efficient and stable and is especially suitable for co-spatiotemporal relationship join queries of large-scale trajectory data under insufficient computing resources. It can also serve as an atomic algorithm for searching accompanying spatiotemporal trajectories and determining the intimacy of relationships among objects, with wide applications in fields such as national security, social order maintenance, crime prevention, and urban and rural planning.
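As a hedged illustration of the core idea (not the paper's actual coding scheme), the Python sketch below turns a spatiotemporal co-location join into an equi-join on discretized (time-slice, grid-cell) keys; pandas stands in for Hive, and the column names, slice length, and cell size are assumptions:

```python
# Minimal sketch: a spatiotemporal "co-location" join expressed as an
# equi-join on discretized (time-slice, grid-cell) keys instead of a
# per-pair distance-and-time-window predicate.
import pandas as pd

TIME_SLICE = "15min"   # temporal discretization step (illustrative)
CELL_DEG = 0.001       # spatial grid cell size in degrees (illustrative)

def add_keys(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["t_slice"] = df["ts"].dt.floor(TIME_SLICE)       # time-slice code
    df["gx"] = (df["lon"] // CELL_DEG).astype(int)      # grid x code
    df["gy"] = (df["lat"] // CELL_DEG).astype(int)      # grid y code
    return df

def co_location_join(a: pd.DataFrame, b: pd.DataFrame) -> pd.DataFrame:
    """Equi-join on discretized keys; a real system would also probe
    neighboring cells to catch pairs straddling a cell boundary."""
    a, b = add_keys(a), add_keys(b)
    return a.merge(b, on=["t_slice", "gx", "gy"], suffixes=("_a", "_b"))
```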
    2024,33(5):15-27, DOI: 10.15888/j.cnki.csa.009512
    [Abstract] (119) [HTML] (72) [PDF 4.77 M] (297)
    Abstract:
Scenes in high-resolution aerial images often fall into many highly similar categories. Classic deep-learning classification methods suffer from low operational efficiency because of the redundant floating-point operations generated during feature extraction. FasterNet improves operational efficiency through partial convolution but weakens feature extraction ability and hence the classification accuracy of the model. To address these problems, this study proposes a hybrid-structure classification method integrating FasterNet and the attention mechanism. Specifically, the “cross-shaped convolution module” is used to partially extract scene features and thereby improve the operational efficiency of the model. Then, a dual-branch attention mechanism that integrates coordinate attention and channel attention enables the model to extract features better. Finally, a residual connection is made between the “cross-shaped convolution module” and the dual-branch attention module so that more task-related features can be obtained from network training, reducing operational costs and improving operational efficiency in addition to classification accuracy. The experimental results show that compared with existing deep-learning classification models, the proposed method has a short inference time and high accuracy. It has 19M parameters, and its average inference time for one image is 7.1 ms. Its classification accuracy on the public datasets NWPU-RESISC45, EuroSAT, VArcGIS (10%), and VArcGIS (20%) is 96.12%, 98.64%, 95.42%, and 97.87%, respectively, which is 2.06%, 0.77%, 1.34%, and 0.65% higher than that of the FasterNet model.
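For readers unfamiliar with FasterNet's partial convolution, the sketch below shows the idea in PyTorch: only a fraction of the channels are convolved and the rest pass through untouched, cutting redundant FLOPs. The 1/4 ratio follows the FasterNet paper; the surrounding module shape is an assumption, not this paper's exact design:

```python
# Sketch of FasterNet-style partial convolution (PConv).
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    def __init__(self, channels: int, ratio: float = 0.25):
        super().__init__()
        self.conv_ch = int(channels * ratio)   # channels that get convolved
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        head, tail = x[:, :self.conv_ch], x[:, self.conv_ch:]
        return torch.cat([self.conv(head), tail], dim=1)

# x = torch.randn(1, 64, 32, 32); y = PartialConv(64)(x)  # same shape out
```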
    2024,33(5):28-36, DOI: 10.15888/j.cnki.csa.009504
    Abstract:
The statistical inference of network data has become a hot topic in statistical research in recent years. The independence assumption among samples in traditional models often fails to meet the analytical demands of modern network-linked data. This work studies the individual effect of each node in network-linked data and, based on the idea of fusion penalties, shrinks the individual effects of associated nodes toward each other. Knockoff variables imitate the structure of the original variables to construct covariates that are independent of the response. With the help of knockoff variables, this study proposes a general variable selection framework for network-linked data (NLKF). The study proves that NLKF controls the false discovery rate (FDR) at the target level and has higher statistical power than Lasso variable selection. When the covariance of the original data is unknown, using the estimated covariance matrix still retains good statistical properties. Finally, an application in financial engineering is given, combining 200 factor samples of more than 4,000 stocks in the A-share market with the network relationships constructed from Shenyin Wanguo’s first-level industry classification.
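As a hedged sketch of the knockoff machinery NLKF builds on, the code below implements the standard knockoff+ selection threshold for FDR control at level q; the knockoff statistics W (e.g., Lasso coefficient differences between original and knockoff variables) are assumed to be computed elsewhere, and this is not the NLKF implementation itself:

```python
# Knockoff+ selection step: choose the smallest threshold t whose
# estimated false discovery proportion is at most q, then select the
# variables with W_j >= t (large positive W_j favors the real variable).
import numpy as np

def knockoff_select(W: np.ndarray, q: float = 0.1) -> np.ndarray:
    ts = np.sort(np.abs(W[W != 0]))
    for t in ts:
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.where(W >= t)[0]   # indices of selected variables
    return np.array([], dtype=int)       # nothing passes: select none
```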
    2024,33(5):37-46, DOI: 10.15888/j.cnki.csa.009482
    [Abstract] (133) [HTML] (88) [PDF 2.77 M] (295)
    Abstract:
Optical coherence tomography (OCT) is a new non-contact, high-resolution ophthalmic imaging method that has become an important reference for doctors in the clinical diagnosis of ophthalmic diseases. As early detection and clinical diagnosis of retinopathy are crucial, it is necessary to change the time-consuming and laborious status quo of manual disease classification. To this end, this study proposes a multi-classification recognition method for retinal OCT images based on an improved MobileNetV2 neural network. The method uses feature fusion technology to process images and introduces an attention mechanism to improve the network model, greatly improving the classification accuracy of OCT images. Compared with the original algorithm, the classification effect is significantly improved: the accuracy, recall, precision, and F1 score of the proposed model reach 98.3%, 98.44%, 98.94%, and 98.69%, respectively, exceeding the accuracy of manual classification. In actual diagnosis, such methods not only speed up the diagnostic process, reduce the burden on doctors, and improve diagnostic quality, but also provide a new direction for ophthalmic medical research.
    2024,33(5):47-56, DOI: 10.15888/j.cnki.csa.009490
    [Abstract] (113) [HTML] (96) [PDF 2.88 M] (377)
    Abstract:
    A new method for short-term power load forecasting is proposed to address issues such as complex and non-stationary load data, as well as large prediction errors. Firstly, this study utilizes the maximum information coefficient (MIC) to analyze the correlation of feature variables and selects relevant variables related to power load sequences. At the same time, as the variational mode decomposition (VMD) method is susceptible to subjective factors, the study employs the rime optimization algorithm (RIME) to optimize VMD and decompose the original power load sequence. Then, the long and short-term time series network (LSTNet) is improved as the prediction model by replacing the recursive LSTM layer with BiLSTM and incorporating the convolutional block attention mechanism (CBAM). Comparative experiments and ablation experiments demonstrate that RIME-VMD reduces the root mean square error (RMSE) of the LSTM, GRU, and LSTNet models by more than 20%, significantly improving the prediction accuracy of the models, and can be adapted to different prediction models. Compared with LSTM, GRU, and LSTNet, the proposed BLSTNet-CBAM model reduces the RMSE by 35.54%, 6.78%, and 1.46% respectively, improving the accuracy of short-term power load forecasting.
    2024,33(5):57-66, DOI: 10.15888/j.cnki.csa.009483
    [Abstract] (119) [HTML] (76) [PDF 1.72 M] (319)
    Abstract:
To address the challenge of data sparsity in session recommendation systems, this study introduces a self-supervised graph convolution session recommendation model based on the attention mechanism (ATSGCN). The model constructs the session sequence into three distinct views: a hypergraph view, an item view, and a session view, capturing the high-order and low-order connection relationships within sessions. The hypergraph view employs hypergraph convolutional networks to capture higher-order pairwise relationships among items within a session, while the item view and session view employ graph convolutional networks and attention mechanisms, respectively, to capture lower-order connection details in local session data at the item and session levels. Finally, self-supervised learning is adopted to maximize the mutual information between the session representations learned by the two encoders, effectively improving recommendation performance. Comparative experiments on the Nowplaying and Diginetica public datasets demonstrate the superior performance of the proposed model over the baseline models.
    Available online:  May 09, 2024 , DOI: 10.15888/j.cnki.csa.009531
    Abstract:
Advancements in synthetic aperture radar (SAR) technology have enabled large-scale observations and high-resolution imaging. Consequently, SAR images now contain numerous small objects with weak features, including aircraft, vehicles, tanks, and ships, which are high-value civilian and military assets. However, accurately detecting these objects poses a significant challenge due to their small size, dense distribution, and variable morphology. Deep learning technology has ushered in a new era of progress in SAR object detection. Researchers have made substantial strides by fine-tuning and optimizing deep learning networks to address the imaging characteristics and detection challenges associated with weak SAR objects. This study provides a comprehensive review of deep learning-based methodologies for weak object detection in SAR images. The primary focus is on datasets and methods, providing a thorough analysis of the principal challenges encountered in SAR weak object detection. This study also summarizes the characteristics and application scenarios of recent detection methods and collates publicly available datasets and common performance evaluation metrics. In conclusion, this study provides an overview of the current application status of SAR weak object detection and offers insights into future development trends.
    Available online:  May 09, 2024 , DOI: 10.15888/j.cnki.csa.009563
    Abstract:
The low signal-to-noise ratio of underground microseismic signals reduces signal picking accuracy. At present, signal denoising algorithms based on wavelet thresholding encounter problems such as poor generalization and difficulty in setting thresholds when facing signals with low signal-to-noise ratios. To address this issue, this study investigates a fully supervised learning method based on the complex wavelet transform for microseismic waveform denoising. The proposed method combines the complex wavelet transform with a convolutional autoencoder, designing an encoder-decoder with multiple convolution and deconvolution operations to perform the denoising. To verify the effectiveness of this method, a dataset, Earthquake2023, is constructed from Stanford’s earthquake dataset for training and testing, on which the method shows good fitting performance and training results. A seismic phase picking method is also designed based on the denoised signals and achieves high picking accuracy. Multiple sets of comparative experiments show that the denoising method improves the peak signal-to-noise ratio of the signal by 16 dB and reduces the root mean square error by 24%. Moreover, the first-arrival picking error for P-waves and S-waves is reduced by 0.3 ms compared to STA/LTA.
    Available online:  May 07, 2024 , DOI: 10.15888/j.cnki.csa.009548
    Abstract:
Time-sensitive networking technologies are widely used in industrial automation. Scheduling methods for business flows in this field fall mainly into static and dynamic scheduling. Static scheduling computes all business flows at once, which saves link and time resources to the greatest extent but suffers from long computation time and a lack of flexibility in handling new business flows. Dynamic scheduling computes new business flows incrementally, with short computation time but suboptimal resource allocation, resulting in time-slot fragmentation. A global flow reconfiguration mechanism can periodically replan all business flows in the network to optimize the allocation of link and time resources. However, this mechanism only applies to small networks with few business flows; as the number of business flows grows, computation time increases sharply and delays subsequent business flows. This study designs a batch reconfiguration algorithm based on an existing dynamic scheduling algorithm. The algorithm introduces a new evaluation indicator, network throughput, and regularly reconfigures a subset of business flows to optimize network resource allocation while meeting the second-level response time requirement of dynamic scheduling. In addition, the algorithm specifies selection criteria for reconfigured business flows and optimizes the flow path selection criteria and the calculation of transmission start times. Simulation experiments on the original and improved algorithms with the batch reconfiguration mechanism show that the improved algorithm can run in large networks with thousands of business flows and improves network throughput by 16.5% and the number of successfully scheduled flows by 5.5% while keeping the algorithm's computation time at the second level.
    Available online:  May 07, 2024 , DOI: 10.15888/j.cnki.csa.009501
    Abstract:
Retinal vessel segmentation is a common task in medical image segmentation. Retinal vessel images are characterized by small and numerous segmentation targets. Previous networks could effectively extract coarse vessels during segmentation but easily overlooked small vessels, whose extraction affects network performance to some extent and even the diagnostic results. Therefore, to extract more continuous fine vessels while ensuring accurate extraction of coarse vessels, this study uses a symmetric encoder-decoder network as the base network and a new convolution module, DR-Conv, to prevent overfitting while improving the network's learning capability. Regarding the information loss caused by max-pooling layers, the study instead uses the discrete wavelet transform for image decomposition and the inverse discrete wavelet transform for image reconstruction, and it utilizes a mixed loss function to combine the characteristics of different loss functions and compensate for the limited optimization ability of a single loss function. The network's performance is evaluated on three public retinal vessel datasets and compared with the latest networks, showing better performance.
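A minimal sketch of the pooling replacement described above, using PyWavelets: dwt2 halves resolution while retaining the detail sub-bands that max-pooling discards, and idwt2 reconstructs the input; the Haar wavelet is an illustrative choice:

```python
# DWT as a lossless alternative to max-pooling for downsampling.
import numpy as np
import pywt

img = np.random.rand(64, 64).astype(np.float32)

# LL is the half-resolution approximation; (LH, HL, HH) keep the detail
# coefficients that pooling would throw away.
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")

# Inverse transform reconstructs the input exactly (up to float error).
recon = pywt.idwt2((LL, (LH, HL, HH)), "haar")
assert np.allclose(img, recon, atol=1e-5)
```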
    Available online:  May 07, 2024 , DOI: 10.15888/j.cnki.csa.009522
    Abstract:
    Reservoir lithology classification is the foundation of geological research. Although data-driven machine learning models can effectively identify reservoir lithology, the special nature of well logging data as sequential data makes it difficult for the model to effectively extract the spatial correlation of the data, resulting in limitations in reservoir identification. To address this issue, this study proposes a bidirectional long short-term memory extreme gradient boosting (BiLSTM-XGBoost, BiXGB) model for predicting reservoir lithology by combining bidirectional long short-term memory (BiLSTM) and extreme gradient boosting decision tree (XGBoost). By integrating BiLSTM into the traditional XGBoost, the model significantly enhances the feature extraction capability for well logging data. The BiXGB model utilizes BiLSTM to extract features from well logging data, which are then input into the XGBoost classification model for training and prediction. The BiXGB model achieves an overall prediction accuracy of 91% when applied to a reservoir lithology dataset. To further validate its accuracy and stability, the model is tested on the publicly available UCI Occupancy dataset, achieving an overall prediction accuracy of 93%. Compared to other machine learning models, the BiXGB model accurately classifies sequential data, improving the accuracy of reservoir lithology identification and meeting the practical needs of oil and gas exploration. This provides a new approach for reservoir lithology identification.
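As a hedged sketch of the BiXGB pipeline (shapes, hyperparameters, and dummy labels are illustrative assumptions, and the encoder here is untrained for brevity), the code below encodes well-logging sequences with a BiLSTM and classifies the resulting features with XGBoost:

```python
# BiLSTM feature extractor feeding a gradient-boosted tree classifier.
import torch
import torch.nn as nn
import numpy as np
from xgboost import XGBClassifier

class BiLSTMEncoder(nn.Module):
    def __init__(self, n_logs: int = 8, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_logs, hidden, batch_first=True,
                            bidirectional=True)

    def forward(self, x):              # x: (batch, depth_steps, n_logs)
        out, _ = self.lstm(x)
        return out[:, -1, :]           # last step, both directions

encoder = BiLSTMEncoder()
x = torch.randn(100, 20, 8)            # 100 samples, 20 depths, 8 logs
y = np.random.randint(0, 4, size=100)  # 4 lithology classes (dummy)

with torch.no_grad():
    feats = encoder(x).numpy()         # (100, 64) feature vectors

clf = XGBClassifier(n_estimators=50).fit(feats, y)
print(clf.predict(feats[:5]))
```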
    Available online:  May 07, 2024 , DOI: 10.15888/j.cnki.csa.009524
    Abstract:
Flowmeter measurements in crude oil gathering and transmission pipeline networks show large deviations, and manual correction in simulation software is cumbersome and poorly adaptive. To solve these problems, this study proposes an adaptive spatio-temporal graph convolutional neural network method to realize simulation calculation of the production of crude oil gathering and transmission pipeline networks. The pipeline network topology is constructed with electric submersible pump wells as nodes and oil pipelines as edges. The study utilizes a graph convolutional neural network to extract spatial information on well distribution and a temporal convolutional neural network to capture the time-series characteristics of the production data, so as to produce accurate production simulation results. Experimental validation is carried out on the crude oil gathering and transmission pipeline network system of an oil field. The results show that the proposed method accurately calculates the production of each pump well in the pipeline network system. Compared with other baseline network models, the error indexes are reduced: the mean absolute error drops to 0.87, the mean absolute percentage error to 4.45%, and the mean square error to 0.84, which proves the validity and accuracy of the proposed method.
    Available online:  May 07, 2024 , DOI: 10.15888/j.cnki.csa.009530
    Abstract:
    The hesitant fuzzy C-means (HFCM) clustering algorithm has addressed the uncertainty between different pixel blocks in an image to some extent. However, as its objective function does not contain any local information, it is very sensitive to noise and cannot achieve good segmentation accuracy when the noise is large. This study proposes an image segmentation method based on improved HFCM (IHFCM) to address the above issues. Firstly, the completion method of hesitant fuzzy elements is given, and then a similarity measure between hesitant fuzzy elements is defined. Using the defined similarity measure, the study constructs a novel fuzzy factor and fuses it into the objective function of HFCM. The new fuzzy factor considers not only spatial information in the local window but also the similarity between pixels, balancing the impact of noise while preserving image details. Finally, experimental results on synthesized images, BSDS500 dataset images, and natural images show that the proposed IHFCM algorithm has good robustness to noise and improves segmentation accuracy.
    Available online:  May 07, 2024 , DOI: 10.15888/j.cnki.csa.009537
    Abstract:
Joint entity and relation extraction aims to extract entity-relation triples from text and is one of the most important steps in building a knowledge graph. Existing approaches face issues such as weak information expression, poor generalization ability, entity overlap, and relation redundancy. To address these issues, a joint entity and relation extraction model named RGPNRE is proposed. The RoBERTa pre-trained model is used as the encoder to enhance the model’s information expression capability. Adversarial training is introduced during training to improve generalization. A global pointer addresses entity overlap, and relation prediction excludes impossible relations, reducing relation redundancy. Entity and relation extraction experiments on the schema-based Chinese medical information extraction dataset CMeIE show that the final model achieves a 2% improvement in F1 score over the baseline model, with a 10% increase in cases of entity pair overlap and a 1% increase in cases of single entity overlap, indicating that the model can more accurately extract entity-relation triples and thereby assist knowledge graph construction. In comparison experiments on sentences containing 1–5 triples, the model’s F1 score increased by about 2 percentage points on sentences with 4 triples and by about 1 percentage point on complex sentences with 5 or more triples, indicating that the model can effectively handle complex sentence scenarios.
    Available online:  April 30, 2024 , DOI: 10.15888/j.cnki.csa.009551
    Abstract:
    This study aims to address the issues of current span-based aspect sentiment triplet extraction models, which ignore part-of-speech and syntactic knowledge and encounter conflicts in triplets. A semantic and syntactic enhanced span-based aspect sentiment triplet extraction model named SSES-SPAN is proposed. Firstly, part-of-speech and syntactic dependency knowledge is introduced into the feature encoder to enable the model to more accurately distinguish aspect and opinion terms in the text and gain a deeper understanding of their relationships. Specifically, for part-of-speech information, a weighted sum approach is employed to fuse part-of-speech contextual representation with sentence contextual representation to obtain semantic enhanced representation, aiding in the precise extraction of aspect and opinion terms. For syntactic dependency information, attention-guided graph convolution networks are used to capture syntactic dependency features and obtain syntactic dependency enhanced representation to handle complex relationships between aspect and opinion terms. Furthermore, considering the lack of a mutual exclusivity guarantee in span-level inputs, an inference strategy is employed to eliminate conflicting triplets. Extensive experiments on benchmark datasets demonstrate that the proposed model outperforms state-of-the-art methods in terms of effectiveness and robustness.
    Available online:  April 30, 2024 , DOI: 10.15888/j.cnki.csa.009559
    Abstract:
    In recent years, the rapidly evolving quantum computing has become the focus of attention. However, quantum hardware suffers from scarcity and noise, which makes the study of quantum algorithms and the verification of quantum chips rely on quantum simulators running on classical computers. In this study, the main simulation methods used by different quantum simulators are discussed, and various optimizations of mainstream full-amplitude state vector simulators and tensor network-based quantum simulators are explored. Finally, the current status and future directions of quantum simulators are summarized.
    Available online:  April 30, 2024 , DOI: 10.15888/j.cnki.csa.009546
    Abstract:
A multi-temporal remote sensing image change detection method based on binary change detection with image transformation (BIT) is proposed to address seasonal and radiometric variations (color discrepancies) between remote sensing images acquired at different times over the same geographic area. The method incorporates remote sensing images from multiple past time points and combines the results of pairwise change detection between the current image and each past image to obtain a stable change detection outcome. This mitigates false alarms caused by seasonal and radiometric variations, thereby enhancing the accuracy of change detection. Multiple past images are utilized to eliminate the influence of non-target building changes, and the pixel difference values of change points are introduced as a regularization term in the loss function, further improving the robustness and reliability of change detection. This study presents a three-temporal example (three images from different time points) and conducts experiments on a remote sensing image dataset of building changes. The experimental results demonstrate that the multi-temporal BIT method outperforms change detection methods that only consider two temporal images.
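A minimal sketch of the multi-temporal fusion idea, under the assumption that a pairwise change detector already exists: only pixels flagged as changed in every (current, past) pair are kept, suppressing seasonal false alarms that appear in just one pair:

```python
# Consensus fusion of pairwise change maps.
import numpy as np

def fuse_change_maps(pairwise_maps: list) -> np.ndarray:
    """pairwise_maps: binary HxW arrays, one per (current, past_i) pair."""
    stacked = np.stack(pairwise_maps, axis=0)
    return np.logical_and.reduce(stacked, axis=0)  # consensus change mask

# maps = [detect(current, past) for past in past_images]  # detector assumed
# stable = fuse_change_maps(maps)
```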
    Available online:  April 30, 2024 , DOI: 10.15888/j.cnki.csa.009549
    Abstract:
In the process planning stage of parts, the generated process schemes depend strongly on the process knowledge selected and applied by designers. However, because the process knowledge selected by designers often deviates from the actual manufacturing logic, the mismatch between generated process schemes and actual processes has become a major concern in parts manufacturing. This study proposes a data- and knowledge-driven decision method for machining feature processes to solve this problem. In this method, an MLP deep learning algorithm based on an attention mechanism mines process knowledge from structured process data and correlates machining features with feature process labels. After data processing, the method is applied to train a neural network model. After validation, the model takes the feature process data of parts as input and outputs distributions over the corresponding feature process labels, providing decision support for generating the process schemes of parts.
    Available online:  April 30, 2024 , DOI: 10.15888/j.cnki.csa.009550
    Abstract:
    In a distributed storage system based on a three-replica strategy, when a hard disk on the storage node fails, a common processing method is to wait for the system’s preset time. If the faulty disk doesn’t recover within the specified timeout, the recovery of the replicas on the faulty hard disk will begin. The issue with this handling approach is that when there is a faulty replica within the three-replica group, another disk failure in the same group will result in the system being unable to continue providing services and recover automatically. This study introduces an improved Raft consensus algorithm based on log replicas, namely log replica based Raft (LR-Raft). Log replicas do not have a complete state machine, allowing them to quickly join the cluster and participate in voting and consensus, thereby enhancing system availability in the presence of a faulty disk. It can address the problem of unavailability and data loss in the cluster caused by the failure of two replicas in a three-replica setup in a short period. The experimental results indicate that with the introduction of log replicas into the replica group, LR-Raft significantly reduces read and write latency and substantially improves throughput compared to the original Raft across various workloads.
    Available online:  April 30, 2024 , DOI: 10.15888/j.cnki.csa.009539
    Abstract:
To address the problems of excessive table size and insufficient acquisition of table features in existing table-based event relation extraction methods, this study proposes TF-ChineseERE, a Chinese event causality extraction method that combines RoBERTa and Bi-FLASH-SRU. The method transforms the text into labeled tables through a table-filling strategy that exploits the labeled relationships in the text. The RoBERTa pre-trained model and the bidirectional built-in flash attention simple recurrent unit (Bi-FLASH-SRU) are used to obtain subject-object event features. A table feature recurrent learning module then mines global features, and table decoding finally yields event causality triples. Experiments on two public datasets in the financial domain show that the proposed method achieves F1 values of 59.2% and 62.5%, respectively, with faster training of the Bi-FLASH-SRU model and fewer filled tables, proving the method's effectiveness.
    Available online:  April 30, 2024 , DOI: 10.15888/j.cnki.csa.009529
    Abstract:
    Traditional fire detection methods are mostly based on object detection techniques, which suffer from difficulties in acquiring fire samples and high manual annotation costs. To address this issue, this study proposes an unsupervised fire detection model based on contrastive learning and synthetic pseudo anomalies. A cross-input contrastive learning module is proposed for achieving unsupervised image feature learning. Then, a memory prototype that learns the feature distribution of normal scene images to discriminate fire scenes through feature reconstruction is introduced. Moreover, a method for synthesizing pseudo anomaly fire scenes and an anomaly feature discrimination loss based on Euclidean distance are proposed, making the model more targeted toward fire scenes. Experimental results demonstrate that the proposed method achieves an image-level AUC of 89.86% and 89.56% on the publicly available Fire-Flame-Dataset and Fire-Detection-Image-Dataset, respectively, surpassing mainstream image anomaly detection algorithms such as PatchCore, PANDA, and Mean-Shift.
    Available online:  April 30, 2024 , DOI: 10.15888/j.cnki.csa.009520
    Abstract:
Visual navigation uses visual information in the environment as the navigation basis, and one of its key tasks is object detection. Traditional object detection methods require a large number of annotations and focus only on the image itself, failing to exploit the data similarity inherent in visual navigation tasks. To solve this problem, this paper proposes a self-supervised training task based on historical image information. In this method, images of the same location at multiple moments are aggregated, the foreground and background are distinguished by information entropy, and the enhanced images are fed into the simple siamese (SimSiam) self-supervised paradigm for training. In addition, the multi-layer perceptron (MLP) networks in the projection and prediction layers of SimSiam are upgraded to a convolutional attention module and a convolution module, and the loss function is improved to combine losses among multi-dimensional vectors, thereby extracting multi-dimensional features from the images. Finally, the model pre-trained with this self-supervised paradigm is used to train models for downstream tasks. Experiments show that the proposed method effectively improves the precision of downstream classification and detection tasks on the processed nuScenes dataset, reaching a Top5 precision of 66.95% on downstream classification and a mean average precision (mAP) of 40.02% on downstream detection.
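For reference, the sketch below shows the standard SimSiam objective the paradigm relies on: maximize cosine similarity between one view's prediction and the stop-gradient projection of the other, symmetrically; the encoder, projector, and predictor are assumed to be defined elsewhere:

```python
# Standard SimSiam negative-cosine loss with stop-gradient.
import torch.nn.functional as F

def simsiam_loss(p1, z1, p2, z2):
    """p*: predictor outputs, z*: projector outputs for two augmented views."""
    def D(p, z):
        # stop-gradient on z prevents representational collapse
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * D(p1, z2) + 0.5 * D(p2, z1)
```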
    Available online:  April 28, 2024 , DOI: 10.15888/j.cnki.csa.009541
    Abstract:
Current classification of acute lymphoblastic leukemia (ALL) faces cluttered background information and subtle inter-class differences. Since it is difficult to select key features and reduce background noise in blood sample images, traditional methods struggle to capture important yet subtle features and to effectively classify the various blood cell types, which affects the accuracy and reliability of the results. This study proposes a classification model based on ResNeXt50 that uses image enhancement to reduce background noise. The model improves an atrous (dilated) pyramid feature extraction method to enhance the perception of multiple scales and context information, and an added, improved SA attention mechanism lets the model better focus on and learn the information that most affects the outcome. The model is tested on the public Blood Cells Cancer dataset of Taleqani Hospital in Tehran, Iran, reaching an accuracy of 98.39% and a precision of 98.33%. The results show that the model not only has clinical significance and practical value but also provides a new idea for the auxiliary diagnosis of ALL.
    Available online:  April 28, 2024 , DOI: 10.15888/j.cnki.csa.009542
    Abstract:
For fusing map data from different sources, entity alignment is a key step whose purpose is to determine equivalent entity pairs between different knowledge graphs. Most existing entity alignment methods are based on graph embedding and align by considering the structure and attribute information of the graph, but they do not handle the interaction between the two well and ignore the use of relation and multi-order neighbor information. To solve these problems, this study proposes an entity alignment method based on a fused structural and attribute attention mechanism model (FSAAM). The model first divides the graph data characteristics into attribute and structural channels. It uses an attribute attention mechanism to learn attribute information, adds relation information to the learning of structural information, and uses a graph attention mechanism to aggregate neighbor features that benefit alignment. A Transformer encoder is introduced to better correlate information between entities, and a Highway network is utilized to reduce the impact of noise that may be learned. Finally, the model applies an LS-SVM network to the similarity matrices of the learned structural and attribute channels to obtain an integrated similarity matrix for entity alignment. The proposed model is verified on three sub-datasets of the public dataset DBP15K. Experimental results show that compared with the best results among the baseline models, its Hits@1 increases by 2.7%, 4.3%, and 1.7%, respectively, and Hits@10 and MRR also improve, indicating that the model can effectively improve entity alignment accuracy.
    Available online:  April 28, 2024 , DOI: 10.15888/j.cnki.csa.009543
    Abstract:
Efficient recognition of farmed animals is the basis for all kinds of precision breeding on animal husbandry farms, so a corresponding recognition system is essential. The system designed in this study uses a UAV live-broadcast linkage method for sample collection and cruise recognition, uploading video to the data center in real time and mitigating the problems of small targets and occlusion that arise with ordinary UAV shooting. On this basis, the study selects the YOLOv7 algorithm model to recognize animal behavior and quantity, then optimizes and lightweights it to enhance recognition accuracy and reduce system load. Finally, the recognition data is output to a standard interface for convenient use by various precision breeding programs. The system not only adapts to the scene needs of the farm but also ensures efficient operation; it can provide unified data support for diverse precision breeding on the farm and reduce the costs of repeated design and decentralized management.
    Available online:  April 28, 2024 , DOI: 10.15888/j.cnki.csa.009544
    Abstract:
Optimizing traffic signal control strategies can improve the efficiency of vehicular traffic and alleviate congestion. To overcome the difficulty that value-function-based deep reinforcement learning algorithms have in efficiently optimizing signal control strategies at single intersections, this study develops a sample-optimization-based method called modified proximal policy optimization (MPPO). The approach improves the quality of sample selection by maximally exploiting the agent objective function of the traditional PPO algorithm, and it uses a multi-dimensional traffic state vector as the model's observation input, enabling it to promptly track and utilize dynamic changes in road traffic conditions. The accuracy and effectiveness of the MPPO model are verified against value-function reinforcement learning control methods using the urban traffic microsimulation software SUMO. Simulation experiments show that this approach resembles real traffic scenarios more closely than value-function reinforcement learning control methods: it significantly accelerates the convergence of cumulative vehicle waiting time, noticeably reduces the average vehicle queue length and waiting time, and effectively improves traffic throughput at the intersection.
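As a hedged sketch of the PPO-style surrogate that MPPO modifies (this is the standard clipped objective, not the paper's exact variant):

```python
# Standard PPO clipped surrogate loss; log-probs and advantage
# estimates are assumed to come from collected rollouts.
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps: float = 0.2):
    ratio = torch.exp(logp_new - logp_old)          # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()    # maximize surrogate
```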
    Available online:  April 19, 2024 , DOI: 10.15888/j.cnki.csa.009525
    Abstract:
To address the inadequacy of existing remote sensing image super-resolution reconstruction models in long-term feature similarity and multi-scale feature relevance, this study proposes a novel remote sensing image super-resolution reconstruction algorithm based on a cross-scale hybrid attention mechanism. Initially, the study introduces a global layer attention (GLA) mechanism and employs layer-wise attention to weight and merge global features across different levels, thereby modeling the long-range dependency between low-resolution and high-resolution image features. Concurrently, it designs a cross-scale local attention (CSLA) mechanism to identify and integrate local information patches in multi-scale low-resolution feature maps that correspond with high-resolution images, enhancing the model's ability to restore image details. Finally, the study proposes a local information-aware loss function to guide the image reconstruction process, further improving the visual quality and detail preservation of the reconstructed images. Experiments on the UC-Merced dataset demonstrate that the proposed method outperforms most mainstream methods in terms of average PSNR/SSIM across three magnification factors and exhibits superior quality and detail preservation in visual results.
    Available online:  April 19, 2024 , DOI: 10.15888/j.cnki.csa.009526
    Abstract:
The distribution of grayscale values in calligraphic document images varies significantly under poor lighting, lowering contrast in dark areas and degrading the morphological texture features of the strokes. Traditional methods typically focus on local statistics such as mean, variance, and entropy while giving little consideration to morphological texture, making them insensitive to features in low-contrast areas. To address these issues, this study proposes a binarization method for degraded calligraphic documents based on a clustering segmentation-based side-window filter (CS-SWF). The method first uses multi-dimensional SWF to describe pixel blocks with similar morphological features. Then, with multiple correction rules, it uses downsampling to extract low-dimensional information and correct feature regions. Finally, the clustered blocks in the feature map are classified to obtain the binarization result. The proposed method is compared with existing methods using F-measure (FM), peak signal-to-noise ratio (PSNR), and distance reciprocal distortion (DRD) as indicators. Experimental results on a self-constructed dataset of 100 handwritten degraded document images show that the proposed binarization method is more stable in low-contrast dark regions and outperforms the comparison algorithms in accuracy and robustness.
    Available online:  April 19, 2024 , DOI: 10.15888/j.cnki.csa.009527
    Abstract:
To prevent and reduce wildland-urban interface (WUI) fires, this study mines the key causal factors of WUI fires and clarifies the mechanisms linking them. First, the study extracts causal factors from WUI fire accident cases using the proposed mining technique and obtains association rules between causal factors with the Apriori algorithm. It then uses complex network theory to construct a WUI fire causal factor network, calculates the network's topological parameters, and analyzes the characteristics of the causal network. Finally, the study introduces a risk index for WUI fire causal chains, mines high-risk edges, and proposes chain-breaking measures. The results show that the WUI fire causal factor network has a small-world property, and high temperature, strong wind, and drought strongly influence other causal factors. Burning waste, plant fire, emergency response speed, arson, and strong wind play important roles in the conversion between causal factors and should be prioritized for control. The highest-risk edge in the network is burning waste → plant fire; this risk chain can be cut off by enacting regulations such as prohibiting unauthorized waste burning, achieving prevention and active control of WUI fires.
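A minimal sketch of the Apriori step with mlxtend, using an invented one-hot table of causal factors per fire case and illustrative thresholds; the paper's actual factor set and parameters are not reproduced here:

```python
# Mining association rules between causal factors with Apriori.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Rows: fire cases; columns: presence of a causal factor in that case.
cases = pd.DataFrame({
    "strong_wind":   [1, 1, 0, 1, 1],
    "drought":       [1, 0, 1, 1, 0],
    "burning_waste": [1, 1, 1, 0, 1],
    "plant_fire":    [1, 1, 1, 0, 1],
}).astype(bool)

freq = apriori(cases, min_support=0.4, use_colnames=True)
rules = association_rules(freq, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```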
    Available online:  April 19, 2024 , DOI: 10.15888/j.cnki.csa.009528
    Abstract:
Clinical diagnoses can be facilitated through multi-organ medical image segmentation. This study proposes a multi-level feature interaction Transformer model to address the weak global feature extraction capability of CNNs, the weak local feature extraction capability of Transformers, and the quadratic computational complexity of Transformers in multi-organ medical image segmentation. The proposed model employs a CNN to extract local features, which are then transformed into global features through a Swin Transformer. Multi-level local and global features are generated through down-sampling, and the local and global features at each level interact with and enhance each other. After the enhancement at each level, the features are cross-fused by multi-level feature fusion modules. The fused features then pass through up-sampling and segmentation heads to produce segmentation masks. The proposed model is evaluated on the Synapse and ACDC datasets, achieving an average dice similarity coefficient (DSC) of 80.16% and an average 95th percentile Hausdorff distance (HD95) of 19.20 mm. These results outperform representative models such as LGNet and RFE-UNet. The proposed model is effective for multi-organ medical image segmentation.
    Available online:  April 19, 2024 , DOI: 10.15888/j.cnki.csa.009507
    Abstract:
Mixed-sample data augmentation methods focus only on the model's forward representation of the category an image belongs to while ignoring the reverse determination of whether the image does not belong to a specific category. To address this one-sided description of image categories, which limits model performance, this study proposes an image data augmentation method with reverse target interference. To prevent overfitting of the network model, the method first modifies the original image to increase the diversity of backgrounds and targets. Secondly, the idea of reverse learning is adopted so that the network model correctly identifies the category of the original image while fully learning that the attributes of the filled-in content do not belong to that category, increasing the model's confidence in identifying the original image's category. Finally, to verify the method's effectiveness, the study conducts extensive experiments with different network models on five datasets, including CIFAR-10 and CIFAR-100. Experimental results show that compared with other state-of-the-art data augmentation methods, the proposed method significantly enhances the model's learning effect and generalization ability in complex settings.
    Available online:  April 19, 2024 , DOI: 10.15888/j.cnki.csa.009509
    Abstract:
Accurate segmentation of colon polyps is important for removing abnormal tissue and reducing the risk of polyps progressing to colon cancer. Current colon polyp segmentation models suffer from high misjudgment rates and low segmentation accuracy. To achieve accurate segmentation of polyp images, this study proposes a colon polyp segmentation model (MGW-Net) combining multi-scale gated convolution and window attention. Firstly, it designs an improved multi-scale gate convolution module (MGCM) to replace the U-Net convolution blocks and fully extract colon polyp image information. Secondly, to reduce information loss at the skip connections and make full use of information at the bottom of the network, the study builds a multi-information fusion enhancement module (MFEM) that combines improved dilated convolution with hybrid enhanced residual window attention to optimize feature fusion at the skip connections. Experimental results on the CVC-ClinicDB and Kvasir-SEG datasets show that MGW-Net achieves Dice similarity coefficients of 93.8% and 92.7% and mean intersection over union of 89.4% and 87.9%, respectively. Experimental results on the CVC-ColonDB, CVC-300, and ETIS datasets show that MGW-Net has strong generalization performance, verifying that MGW-Net can effectively improve the accuracy and robustness of colon polyp segmentation.
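For reference, the two reported metrics can be computed for binary masks as below (a minimal NumPy sketch assuming binary arrays; the eps guards against empty masks):

```python
# Dice similarity coefficient and intersection-over-union for binary masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)   # average over images to get mIoU
```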
    Available online:  April 19, 2024 , DOI: 10.15888/j.cnki.csa.009514
    Abstract:
In the anti-external-force damage inspection of transmission lines, lightweight object detection algorithms deployed at the edge suffer from insufficient detection accuracy and slow inference. To solve these problems, this study proposes Fast-YOLOv5, an anti-external-force damage detection algorithm for the power grid that incorporates a sparse convolution network (SCN) with global context enhancement. Based on the YOLOv5 algorithm, the FasterNet+ network is designed as a new feature extraction network, which maintains detection accuracy while improving inference speed and reducing computational complexity. In the bottleneck layer, an ECAFN module with efficient channel attention is designed, which improves detection by adaptively calibrating channel-wise feature responses, efficiently capturing cross-channel interactions, and further reducing the number of parameters and the computation. A context-enhanced sparse convolution network replaces the model's detection layer to strengthen foreground focus features and improve prediction by capturing global context information. The experimental results show that compared with the original model, the accuracy of the improved model increases by 1.9% and the detection speed doubles, reaching 56.2 FPS. The number of parameters and the computation are reduced by 50% and 53%, respectively, better meeting the requirements for efficient detection on transmission lines.
    Available online:  April 19, 2024 , DOI: 10.15888/j.cnki.csa.009533
    Abstract:
Videos captured in low-illumination environments often suffer from low contrast, high noise, and unclear details, which seriously affect computer vision tasks such as object detection and segmentation. Most existing low-light video enhancement methods are built on convolutional neural networks; since convolution cannot fully exploit long-range dependencies between pixels, the generated videos often exhibit loss of detail and color distortion in some regions. To address these problems, this study proposes a Siamese low-light video enhancement network coupling local and global features. The model obtains local features of video frames through a deformable-convolution-based local feature extraction module and designs a lightweight self-attention module to capture global features. Finally, a feature fusion module fuses the extracted local and global features, guiding the model to generate enhanced videos with more realistic colors and details. Experimental results show that the proposed method effectively improves the brightness of low-light videos, generates videos with richer colors and details, and outperforms methods proposed in recent years on evaluation metrics such as peak signal-to-noise ratio and structural similarity.
  • Full-Text Download Ranking (Overall | Annual | By Issue)
    Abstract Click Ranking (Overall | Annual | By Issue)

    2000,9(2):38-41, DOI:
    [Abstract] (12524) [HTML] (0) [PDF ] (20127)
    Abstract:
This paper discusses in detail how VRML technology can be combined with other data access technologies to achieve real-time interaction with databases, and it briefly describes the syntax and technical requirements of the relevant technical specifications. The techniques used are safe and reliable, perform well in practical applications, and make systems easy to port.
    1993,2(8):41-42, DOI:
    [Abstract] (9331) [HTML] (0) [PDF ] (29842)
    Abstract:
This paper presents the author's experience in recent years of using the utility software NU to remove viruses from floppy disk boot sectors and hard disk master boot records and to repair disks with damaged boot sectors; practice has proven the approach simple and effective.
    1995,4(5):2-5, DOI:
    [Abstract] (9076) [HTML] (0) [PDF ] (12089)
    Abstract:
This paper briefly introduces the definition, overview, and significance of the customs EDI automated clearance system, and it provides a practice-oriented analysis of the legal issues involved in the business operation model of this EDI application system, the adoption of the EDIFACT international standard, network and software technology issues, and project management issues.
    2016,25(8):1-7, DOI: 10.15888/j.cnki.csa.005283
    [Abstract] (8482) [HTML] () [PDF 1167952] (35850)
    Abstract:
Since 2006, deep neural networks have achieved great success in big data processing and artificial intelligence fields such as image/speech recognition and autonomous driving, in which unsupervised learning methods, serving as pre-training methods for deep neural networks, have played a very important role. This paper introduces and analyzes unsupervised learning methods in deep learning, mainly summarizing two commonly used classes: deterministic autoencoder methods and probabilistic methods such as contrastive divergence based on restricted Boltzmann machines. It describes the applications of these two classes of methods in deep learning systems and finally summarizes the problems and challenges facing unsupervised learning, with an outlook on future work.
    2008,17(5):122-126, DOI:
    [Abstract] (7559) [HTML] (0) [PDF ] (45947)
    Abstract:
With the rapid development of the Internet, network resources have become increasingly abundant, and extracting information from the network has become crucial; in particular, Deep Web information retrieval, which accounts for 80% of network resources, is a difficult problem deserving close attention. To better study Deep Web crawler technology, this paper gives a comprehensive and detailed introduction to Deep Web crawlers. It first describes the definition and research goals of Deep Web crawlers, then reviews and analyzes recent domestic and international research progress, and on this basis looks ahead to research trends, laying a foundation for further research.
    2011,20(11):80-85, DOI:
    [Abstract] (7494) [HTML] () [PDF 863160] (40306)
    Abstract:
Based on a study of current mainstream video transcoding solutions, a distributed transcoding system is proposed. The system uses HDFS (Hadoop Distributed File System) for video storage and performs distributed transcoding with the MapReduce paradigm and FFmpeg. The segmentation strategy for distributed video storage and the influence of segment size on access time are discussed in detail, and metadata formats for video storage and conversion are defined. A distributed transcoding scheme based on the MapReduce programming framework is proposed, in which the Mapper side performs transcoding and the Reducer side merges the video. Experimental data show how transcoding time varies with video segment size and the number of transcoding machines.
    1999,8(7):43-46, DOI:
    [Abstract] (7099) [HTML] (0) [PDF ] (21766)
    Abstract:
Representing a large color space with fewer colors has long been a research topic. This paper discusses halftoning and dithering techniques in detail, extends them to the practical true-color space, and gives implementation algorithms.
    2007,16(9):22-25, DOI:
    [Abstract] (6364) [HTML] (0) [PDF ] (4850)
    Abstract:
Considering the actual security state of a legacy logistics system, this paper analyzes the shortcomings of object-oriented programming in handling crosscutting concerns versus core concerns, points out the advantages of the aspect-oriented programming approach to separating concerns, analyzes AspectJ as a concrete implementation of aspect-oriented programming, and proposes a method for IC-card security evolution of the legacy logistics system based on AspectJ.
    2012,21(3):260-264, DOI:
    [Abstract] (6308) [HTML] () [PDF 336300] (42839)
    Abstract:
The core issues of open platforms are user authentication and authorization. OAuth is currently the internationally adopted authorization approach; its distinguishing feature is that users can grant a third-party application access to their protected resources without entering their username and password into that application. The latest version is OAuth 2.0, whose authentication and authorization flows are simpler and more secure. This paper studies the working principles of OAuth 2.0, analyzes the workflow for refreshing access tokens, and presents a server-side design scheme for OAuth 2.0 along with a concrete application example.
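As a hedged illustration of the refresh-token exchange analyzed above (RFC 6749, Section 6), in Python; the endpoint URL and client credentials are placeholders, not details from the paper:

```python
# Exchange a refresh token for a new access token.
import requests

resp = requests.post(
    "https://auth.example.com/oauth/token",   # placeholder token endpoint
    data={
        "grant_type": "refresh_token",
        "refresh_token": "<stored-refresh-token>",
        "client_id": "<client-id>",
        "client_secret": "<client-secret>",
    },
)
tokens = resp.json()  # typically: access_token, expires_in, refresh_token
```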
    2011,20(7):184-187,120, DOI:
    [Abstract] (6117) [HTML] () [PDF 731903] (30944)
    Abstract:
To meet the practical requirements of smart homes and environmental monitoring, a long-range wireless sensor node was designed. The system uses the second-generation system-on-chip CC2530, which integrates an RF transceiver and a controller, as the core module, with an external CC2591 RF front-end power amplifier module. The software is based on the ZigBee 2006 protocol stack, implementing application-layer functions on top of the ZStack general-purpose modules. The paper describes building a wireless data acquisition network based on the ZigBee protocol and presents hardware schematics and software flowcharts for the sensor nodes and the coordinator node. Experiments show that the nodes perform well and communicate reliably, with a significantly longer communication range than TI's first-generation products.
    2004,13(10):7-9, DOI:
    [Abstract] (5865) [HTML] (0) [PDF ] (9772)
    Abstract:
This paper introduces the composition of a vehicle monitoring system and studies the hardware and software design of the mobile unit using the Rockwell GPS OEM board and the WISMO QUIK Q2406B module, as well as the design of the GIS software at the monitoring center. It focuses on how the Q2406B module, with embedded TCP/IP protocol processing, accesses the Internet via AT commands and exchanges TCP data with the monitoring center.
    2008,17(1):113-116, DOI:
    [Abstract] (5783) [HTML] (0) [PDF ] (47664)
    Abstract:
Sorting is an important operation in computer programming. This paper discusses an improvement of the quicksort algorithm in C, namely an implementation that combines quicksort with straight insertion sort. When implementing large-scale internal sorting in C programs, the goal is a simple, effective, and fast algorithm. The paper elaborates the process of improving quicksort, from basic performance characteristics to algorithmic improvements, arriving at the best improved algorithm through repeated analysis and experiments.
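A Python rendering of the hybrid the paper describes in C: quicksort that hands small partitions to insertion sort; the cutoff of 16 is an illustrative assumption:

```python
# Hybrid quicksort: small partitions go to insertion sort.
CUTOFF = 16

def insertion_sort(a, lo, hi):
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def partition(a, lo, hi):               # Lomuto partition on a[hi]
    pivot, i = a[hi], lo - 1
    for j in range(lo, hi):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo < CUTOFF:            # small range: insertion sort wins
            insertion_sort(a, lo, hi)
            return
        p = partition(a, lo, hi)
        # Recurse on the smaller side to bound stack depth.
        if p - lo < hi - p:
            hybrid_quicksort(a, lo, p - 1); lo = p + 1
        else:
            hybrid_quicksort(a, p + 1, hi); hi = p - 1
```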
    2008,17(8):87-89, DOI:
    [Abstract] (5716) [HTML] (0) [PDF ] (39641)
    Abstract:
With the wide application of object-oriented software development techniques and the demand for test automation, model-based software testing has gradually been recognized and accepted by software developers and testers. Model-based testing is one of the main testing methods in the coding stage; it features high testing efficiency and good effectiveness at uncovering logically complex faults, though false positives, missed detections, and fault mechanisms require further study. This paper analyzes and classifies the main testing models, gives a preliminary analysis of parameters such as fault density, and finally proposes a model-based software testing process.
    2008,17(8):2-5, DOI:
    [Abstract] (5633) [HTML] (0) [PDF ] (30505)
    Abstract:
This paper presents the design and implementation of a single sign-on system for an enterprise information portal. The system implements single sign-on based on the Java EE architecture, combining credential encryption and Web Services to provide unified authentication and access control for portal users. The paper elaborates the overall architecture, design ideas, working principles, and concrete implementation of the system, which has already been successfully deployed in the information portal platforms of the radio and television industry in several provinces and cities.
    2004,13(8):58-59, DOI:
    [Abstract] (5566) [HTML] (0) [PDF ] (26200)
    Abstract:
This paper introduces several methods in Visual C++ 6.0 for moving focus between multiple text boxes in a dialog via the Enter key and proposes an improved method.
    2019,28(6):1-12, DOI: 10.15888/j.cnki.csa.006915
    [Abstract] (5563) [HTML] (16635) [PDF 672566] (14748)
    Abstract:
A knowledge graph is a knowledge base that represents, in the form of a graph, the concepts and entities of the objective world and the relationships between them; it is one of the foundational technologies for intelligent services such as semantic search, intelligent question answering, and decision support. At present, the connotation of knowledge graphs is still not clear enough, and because of incomplete documentation, the usage and reuse rates of existing knowledge graphs are low. This paper therefore gives a definition of knowledge graphs and distinguishes them from related concepts such as ontologies: an ontology is the schema layer and logical foundation of a knowledge graph, while a knowledge graph is an instantiation of an ontology, so ontology research results can serve as the basis of knowledge graph research and promote its faster development and wider application. The paper lists and analyzes the major existing general-purpose and domain-specific knowledge graphs at home and abroad, along with their construction, storage, and retrieval methods, so as to improve their usage and reuse rates. Finally, future research directions for knowledge graphs are pointed out.
    2009,18(3):164-167, DOI:
    [Abstract] (5505) [HTML] (0) [PDF ] (27253)
    Abstract:
This paper introduces a simple method, based on DWGDirectX, for displaying and manipulating DWG files and adding simple entities without depending on the AutoCAD platform, and it analyzes and implements the method.
    2009,18(5):182-185, DOI:
    [Abstract] (5489) [HTML] (0) [PDF ] (31550)
    Abstract:
DICOM is the international standard for medical image storage and transmission, and DCMTK is a free, open-source development toolkit for the DICOM standard. Parsing the DICOM file format and solving the display of DICOM medical images are fundamental to medical image processing and significant for research on medical imaging technology. This paper interprets the DICOM file format, introduces the principle of window-level (windowing) processing, and implements medical image display and windowing functions using VC++ and DCMTK.
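A minimal sketch of window-level (windowing) processing in NumPy, using a common simplified linear form; the paper's DCMTK-based implementation may differ in details:

```python
# Map raw values in [center - width/2, center + width/2] linearly to
# the display range [0, 255]; values outside the window are clipped.
import numpy as np

def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    lo = center - width / 2.0
    scaled = (pixels - lo) / width * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# e.g., a typical soft-tissue CT window: apply_window(hu, center=40, width=400)
```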
    2010,19(10):42-46, DOI:
    [Abstract] (5420) [HTML] () [PDF 1301305] (20679)
    Abstract:
Considering the system requirements of a component-assembly-based virtual laboratory, this paper analyzes the business processing model of a workflow-driven dynamic virtual laboratory, introduces an integration model of a lightweight J2EE framework (SSH) with workflow systems (Shark and JaWE), and proposes a design and implementation method for a workflow-driven dynamic virtual laboratory under the lightweight J2EE framework. It presents the implementation mechanism of virtual experiment projects, the management of data flow and control flow, and the dynamic assembly of experiment processes; finally, an application example demonstrates the effectiveness of the method.
  • Full-Text Download Ranking (Overall | Annual | By Issue)
    Abstract Click Ranking (Overall | Annual | By Issue)

    2007,16(10):48-51, DOI:
    [Abstract] (4677) [HTML] (0) [PDF 0.00 Byte] (86291)
    Abstract:
This paper studies the HDF data format and its function library. Focusing on raster images, it discusses in detail how to read and process raster data using VC++.NET and VC#.NET and then display the image from the resulting pixel matrix by plotting points. The work was carried out in the context of the National Meteorological Center's development of MICAPS 3.0 (Meteorological Information Comprehensive Analysis and Processing System).
    2002,11(12):67-68, DOI:
    [Abstract] (3811) [HTML] (0) [PDF 0.00 Byte] (57613)
    Abstract:
This paper introduces methods for developing real-time data acquisition under the non-real-time operating system Windows 2000 using Visual C++ 6.0. The data acquisition card used is Advantech's PCL-818L. Using the API functions in the PCL-818L DLLs, three methods for high-speed real-time data acquisition are presented along with their advantages and disadvantages.
