2019, 28(6):1-12. DOI: 10.15888/j.cnki.csa.006915
Abstract:A knowledge graph is a knowledge base that represents objective concepts/entities and their relationships in the form of a graph, and is one of the fundamental technologies for intelligent services such as semantic retrieval, intelligent question answering, and decision support. Currently, the connotation of the knowledge graph is not clear enough, and the usage/reuse rate of existing knowledge graphs is relatively low due to a lack of documentation. This paper clarifies the concept of the knowledge graph by differentiating it from related concepts such as the ontology: the ontology is the schema layer and the logical basis of a knowledge graph, while the knowledge graph is the instantiation of an ontology. Research results on ontologies can therefore serve as the foundation of knowledge graph research and promote its development and application. Existing generic/domain knowledge graphs are briefly documented and analyzed in terms of building, storage, and retrieval methods, and future research directions are pointed out.
2019, 28(6):13-21. DOI: 10.15888/j.cnki.csa.006921
Abstract:The mobile phone 3D animation automatic generation system aims to produce an animation that matches an input text message and send it to the recipient. Expressions in animation are important for expressing emotional themes and enhancing animation effects. This study focuses on the automatic generation of character expressions in the 3D animation system, covering both qualitative planning and quantitative calculation. The qualitative planning part uses semantic Web technology to build an expression ontology library based on the facial action coding system and the emotion wheel model, and establishes appropriate axioms; knowledge reasoning is then carried out on the key information of the short message to obtain a qualitative description of the expression. The quantitative part converts the qualitative description into concrete animation data and smooths the transitions between expressions, so that rich and diverse expression animations are generated. Excluding abnormal cases caused by manual operation among the 270 experimental samples, the success rate of expression planning is 71.2% and the diverse expression generation rate is 83.76%. Experiments show that the proposed method generates expression animations effectively.
2019, 28(6):22-28. DOI: 10.15888/j.cnki.csa.006899
Abstract:Cognitive wireless networks are vulnerable to message tampering, forgery, eavesdropping, and denial-of-service attacks due to the openness of wireless channels and the broadcast nature of wireless transmission. To resist these attacks, researchers have proposed many physical layer authentication technologies. Compared with traditional cryptographic authentication mechanisms, physical layer authentication is faster and more efficient, so it is well suited to the continuous, real-time authentication of resource-constrained terminals in cognitive wireless networks. However, existing physical layer authentication technology cannot achieve initial authentication, and packet loss often occurs during authentication, resulting in long authentication delay and low authentication efficiency. This study combines traditional cryptographic authentication with physical layer authentication and proposes a lightweight cross-layer authentication scheme. The scheme uses cryptographic technology only for the initial authentication, while all subsequent authentications use fast and efficient physical layer authentication, which improves authentication efficiency. The scheme adopts an improved normalized statistic that makes the threshold calculation simpler, effectively reduces computational complexity, and shortens the user's authentication waiting delay. In addition, a Hash-chain-based authentication method ensures that continuous authentication can still be achieved in the case of packet loss. Performance analysis shows that, compared with existing schemes, the proposed scheme has clear advantages in authentication efficiency.
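The Hash-chain idea for tolerating packet loss can be sketched as follows. This is an illustrative Python sketch, not the paper's actual protocol; the function names, the SHA-256 choice, and the gap limit are assumptions. The sender releases chain values in reverse generation order, so after k lost packets the verifier simply hashes a freshly received value up to k extra times until it reproduces the last verified value.

```python
import hashlib

def make_hash_chain(seed: bytes, length: int) -> list:
    """Build a hash chain h[0] = H(seed), h[i] = H(h[i-1])."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(token: bytes, last_verified: bytes, max_gap: int) -> int:
    """Return the number of hash steps linking token to last_verified,
    or -1 if the link is not found within max_gap steps. Tokens are
    released in reverse chain order, so re-hashing a fresh token
    k <= max_gap times must reproduce the last verified value even
    when k-1 intermediate packets were lost."""
    h = token
    for k in range(1, max_gap + 1):
        h = hashlib.sha256(h).digest()
        if h == last_verified:
            return k
    return -1
```

For example, with `chain = make_hash_chain(b"seed", 5)` and `chain[4]` as the verifier's anchor, receiving `chain[1]` after `chain[3]` and `chain[2]` were lost still verifies, because hashing it three times reaches the anchor.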
2019, 28(6):29-37. DOI: 10.15888/j.cnki.csa.006938
Abstract:In view of the nonlinearity and non-stationarity of gas load data, this paper presents a combined forecasting model based on improved WT-LMD and GRU neural networks. Firstly, the model decomposes the gas load data with an improved LMD algorithm, which replaces the traditional sliding-average method with piecewise Newton interpolation to obtain the local mean function and envelope estimation function, mitigating the over-smoothing problem of traditional LMD. The resulting PF components are then processed with wavelet threshold denoising to obtain effective component data. Finally, a GRU neural network predicts each component separately, and the final load forecast is obtained by summing the component predictions. Simulation results show that the proposed method is more accurate than a single GRU neural network and a GRU network combined with the traditional LMD algorithm.
2019, 28(6):38-44. DOI: 10.15888/j.cnki.csa.006948
Abstract:The naive Bayes classifier can be applied to lithology identification. The Gaussian distribution is often used to fit the probability distribution of continuous attributes, but it is not effective for complex logging data. To solve this problem, a Gaussian mixture probability density estimation based on the EM algorithm is proposed. Logging data of lower Paleozoic gas wells in block 41-33 of the Sudong field are selected as training samples, and data of wells 44-45 as test samples. The experiment first uses the EM-based Gaussian mixture model to estimate the probability density of the logging variables, then applies it in the naive Bayes classifier for lithology identification, and finally compares it with the fit of a single Gaussian distribution function. The results reveal that the Gaussian mixture model fits better and that the performance of the naive Bayes classifier for lithology identification can be improved in this way.
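The EM fit described above can be sketched for a one-dimensional, two-component mixture. This is a minimal illustrative sketch, not the paper's implementation: the component count, initialization at the data extremes, fixed iteration count, and the synthetic data are all assumptions.

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture.
    Means are initialized at the data extremes; returns (weights, means, variances)."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    n = len(data)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var

# Synthetic stand-in for a bimodal logging variable
random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(10, 1) for _ in range(200)]
w, mu, var = em_gmm_1d(data)
```

The fitted density (a weighted sum of the two components) can then replace the single Gaussian in the naive Bayes likelihood term.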
2019, 28(6):45-52. DOI: 10.15888/j.cnki.csa.006956
Abstract:The State Grid plays an important role in the development of the national economy, and choosing high-quality suppliers is a prerequisite for providing quality services. Many experts and scholars have contributed to supplier management, but existing approaches tend to address the problem one-dimensionally and rely too heavily on subjective factors. Based on the application and research status of supplier management in the project, a user-portrait-based supplier management scheme is proposed, which uses user portraits to assist with supplier credit scoring, supplier product selection, and assessing the feasibility of long-term cooperation with suppliers. The article elaborates on the labeling system and content of the portrait, and focuses on the model label generation method based on natural language processing. Combined with a supplier product selection case, the application of user portraits in decision support is briefly described. The results show that the portrait-based method can improve the accuracy of supplier selection and simplify the business process.
2019, 28(6):53-61. DOI: 10.15888/j.cnki.csa.006953
Abstract:The mobile phone 3D animation automatic generation system produces and sends a 3D animation to the SMS receiver according to the SMS content, through semantic analysis, information extraction, and plot planning. Scene materials are among the most intuitive elements of the animation, so this work studies the automatic planning of scene materials in the system from the perspectives of qualitative planning and quantitative calculation. The qualitative planning stage uses semantic Web technology to describe the various attributes of materials through an ontology database and associates them with relevant themes and templates to obtain a qualitative description sentence about the model material. The quantitative calculation stage maps the qualitative planning results into concrete animation scenes and changes the materials of the scene models. In a test on 314 open short messages, 86.9% of the messages successfully completed material planning, among which 71.43% changed the material of the original scene in the animation scene library. Experiments show that this method can effectively carry out animation material planning.
2019, 28(6):62-68. DOI: 10.15888/j.cnki.csa.006932
Abstract:In this study, convolutional neural networks are used to segment the scene and detect parking spaces in the parking lots of expressway service areas. Firstly, the study expands the parking lot dataset of the expressway service area and uses a convolutional neural network to segment the parking lot and detect vehicles; the two tasks share the weights of the feature extraction network to achieve joint training and a lightweight network model. Furthermore, the network enhances the recognition of small targets by extracting vehicle texture features and applying pyramid feature fusion. Finally, the system uses prior knowledge of the parking spaces in the expressway service area to calculate parking space occupancy information in real time. Practical application shows that the method achieves an accuracy of 94% for parking space detection and a detection speed of 25 frames per second in complex scenes. It has strong generalization ability and is suitable for parking lot detection.
2019, 28(6):69-75. DOI: 10.15888/j.cnki.csa.006936
Abstract:The instrument and equipment sharing management platform of the Chinese Academy of Sciences effectively solves the problems of closed management, difficult sharing, and low utilization of instruments and equipment among scientific research units. Through the system, users can learn the usage and sharing situation of various instruments, and the platform provides a sound decision-making basis for the scientific and efficient management work of competent departments at all levels. However, when the instrument application information table reaches the million-record scale, query performance declines quickly because of join queries. The current solution is sharding, typically with a Hash-based partitioning algorithm, but because the shard key is a meaningless ID, this approach is ineffective here. Since users and instruments exhibit a certain degree of geographical aggregation, this study adopts a sharding strategy based on the K-means algorithm. The results show that it improves query performance by at least 70%.
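The geography-aware sharding idea can be sketched as follows: cluster user/instrument coordinates with k-means and map each cluster to one shard, so records that are queried together land on the same partition. This is an illustrative sketch under assumed data; the coordinates, the farthest-point initialization (used here to keep the demo deterministic), and the cluster-to-shard mapping are not from the paper.

```python
def kmeans(points, k, iters=20):
    """Plain k-means over (lat, lon) pairs with farthest-point initialization.
    Returns (centroids, labels); each label can be mapped to one database shard."""
    centroids = [points[0]]
    while len(centroids) < k:  # pick the point farthest from existing centroids
        centroids.append(max(points, key=lambda p: min(
            (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids)))
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):  # assign to nearest centroid
            labels[i] = min(range(k),
                            key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)
        for c in range(k):              # recompute centroids
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# Hypothetical coordinates forming two geographic clusters of users/instruments
group_a = [(39.90 + i * 0.01, 116.40 + i * 0.01) for i in range(10)]
group_b = [(31.20 + i * 0.01, 121.50 + i * 0.01) for i in range(10)]
centroids, labels = kmeans(group_a + group_b, 2)
```

A query scoped to one region then touches only the shard whose cluster covers that region, avoiding cross-shard joins.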
2019, 28(6):76-81. DOI: 10.15888/j.cnki.csa.006929
Abstract:To solve the problem of networking and centralized management of massive ZigBee nodes, this study proposes a management scheme for a multi-ZigBee network structure based on an Internet server. The system divides the massive nodes into multiple ZigBee networks with no coupling between them, which removes the distance limit of wireless transmission. In addition to the terminal nodes and the ZigBee coordinator, each independent ZigBee network adds a controller with Internet access to form an intranet structure. The ZigBee nodes use the Z-Stack protocol to realize self-organizing network functions, and the controller mainly handles communication between nodes and servers. A central server is proposed as the core of the external network structure; it maintains the state information of all nodes in a central database and provides Web services to clients. A client's query or control operations on any node are completed through the central server acting as an agent. Tests show that the system is reliable and expandable, and has certain reference value.
2019, 28(6):82-88. DOI: 10.15888/j.cnki.csa.006853
Abstract:Vehicle object detection and tracking is a key step for real-time monitoring and acquisition of traffic parameters in expressway video monitoring systems. A vehicle tracking method combining trajectory temporal information with the KCF algorithm is proposed to realize high-precision continuous tracking. Firstly, a dataset is established, and vehicle classification and detection suitable for the highway scenario are realized using the deep-learning-based SSD algorithm. Then, objects are matched to trajectories based on trajectory temporal information, and the KCF algorithm is applied to predict the positions of missing objects, so as to realize vehicle trajectory tracking. Experimental results show that this tracking method has high precision and can adapt to many different scenarios, and thus has high application value.
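The detection-to-trajectory matching step can be sketched as follows. This is an illustrative sketch, not the paper's method: KCF operates on image patches, so a constant-velocity extrapolation stands in here for the KCF-predicted position of a missed detection, and the greedy nearest-distance matching and the 50-pixel gate are assumptions.

```python
def match_and_update(trajectories, detections, max_dist=50.0):
    """Greedily match detections (cx, cy) to trajectories by distance to the
    last known position. Unmatched trajectories are extended with a
    constant-velocity prediction (a stand-in for the KCF-predicted position);
    unmatched detections start new trajectories."""
    unmatched = list(range(len(detections)))
    for traj in trajectories:
        last = traj[-1]
        best, best_d = None, max_dist
        for i in unmatched:
            d = ((detections[i][0] - last[0]) ** 2 +
                 (detections[i][1] - last[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            traj.append(detections[best])
            unmatched.remove(best)
        elif len(traj) >= 2:  # missed detection: extrapolate the track
            vx = traj[-1][0] - traj[-2][0]
            vy = traj[-1][1] - traj[-2][1]
            traj.append((traj[-1][0] + vx, traj[-1][1] + vy))
    for i in unmatched:
        trajectories.append([detections[i]])
    return trajectories
```

Calling this once per frame keeps every vehicle's trajectory continuous even when the detector drops a frame.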
2019, 28(6):89-94. DOI: 10.15888/j.cnki.csa.006930
Abstract:The operating sound of large equipment such as transformers is distinctive and stable, but it is easily interfered with by various environmental sounds. To solve this problem, using sound signal processing, feature extraction, pattern matching, and other techniques, this study proposes an equipment sound fault monitoring scheme that is resistant to multiple environmental sound disturbances. First, the normal and faulty sounds of transformers in various ambient sounds are collected and preprocessed. Then, MFCC features are extracted and their dimensionality is reduced. Next, the normal working sound features of the transformer are trained with the OPTICS algorithm to obtain a standard set with multiple clusters. Last, the standard set is matched against test samples containing faulty sounds; if a mismatch turns out to be a false positive on manual inspection, the sample is added as a new cluster. The experimental results show that the proposed method can not only identify the samples well, but also optimize the standard set through the standard set enhancement module when new normal sounds appear, thus improving recognition accuracy and reducing the false alarm rate.
2019, 28(6):95-99. DOI: 10.15888/j.cnki.csa.006963
Abstract:With the rapid development of the economy, social security has become quite important, while the criminal methods used by criminals have become more complicated and high-tech. Traditional security measures are mostly based on electrical sensing or human surveillance and can hardly guarantee people's property and personal safety effectively. Therefore, this study proposes a fiber perimeter system based on a dual Mach-Zehnder (M-Z) structure, which uses the characteristics of the fiber to recognize the pattern of an interference signal, and then uses the special structure of the perimeter system to respond to the interference signal efficiently, raise an alarm, and even perform accurate positioning. Signal processing techniques increase the signal-to-noise ratio of the acquired signal, achieving a good recognition rate and high positioning accuracy.
2019, 28(6):100-104. DOI: 10.15888/j.cnki.csa.006918
Abstract:With the gradual promotion and application of domestic processors and domestic operating systems, more and more developers are developing multi-threaded programs on domestic platforms. The lack of visual concurrent performance analysis tools in Qt Creator, the tool commonly used on domestic platforms, makes it extremely difficult to diagnose performance problems caused by multi-thread synchronization/mutual exclusion and resource competition. This study designs a concurrent performance analysis scheme for Qt Creator that monitors concurrent events in real time, collects concurrent performance data while the program runs, analyzes concurrency performance bottlenecks and deadlock causes, and displays multi-view data in plug-in form. Experiments show that the concurrent performance analyzer can conveniently and quickly assist users in developing multi-threaded concurrent programs and improve software development efficiency.
2019, 28(6):105-109. DOI: 10.15888/j.cnki.csa.006926
Abstract:With the rapid development of information technology, the Internet of Things (IoT) has been widely used in all walks of life. One of its main applications is the collection and transmission of information from hardware devices, but serious data security problems arise in the process of data transmission, so this study proposes a hybrid communication encryption method. From the perspective of IoT devices, the study introduces the wireless communication technologies of the IoT and the CoAP protocol, adopts NB-IoT technology in consideration of the resource constraints of IoT devices, and implements the method in a smart gas system. The experimental results show that the proposed method is feasible.
2019, 28(6):110-117. DOI: 10.15888/j.cnki.csa.006934
Abstract:Considering the diversity and instability of colorimetric sensor arrays, and the low efficiency or illumination sensitivity of existing array image segmentation algorithms, this study proposes an image segmentation algorithm based on fuzzy C-means (FCM) clustering. Firstly, the algorithm divides the image into a grid using the row and column projections of the I component in the HSI color space, and solves the initialization of the FCM clustering conditions by combining the smoothed histogram information of local array-point images. Secondly, to improve the accuracy of array-point segmentation, the objective function introduces the H and I components with different weight coefficients, thereby incorporating color information. Segmentation tests show that the proposed algorithm achieves an overall optimal segmentation precision of 96.54% over all array-point images and can effectively and accurately extract the targets from colorimetric sensor array images.
2019, 28(6):118-124. DOI: 10.15888/j.cnki.csa.006944
Abstract:Relation classification is an important subtask in the field of Natural Language Processing (NLP), which provides technical support for the construction of knowledge graphs, question answering systems, and information retrieval. Compared with traditional relation classification methods, deep learning models with attention have achieved better performance on various relation classification tasks. However, most previous models use single-layer attention, which yields a single representation of the features. Therefore, on the basis of existing work, this study introduces multi-head attention, which enables the model to obtain information about the sentence from different representation subspaces and improves the model's feature expression ability. In addition to the existing word embeddings and position embeddings as network input, we introduce dependency parsing features and a relative core-predicate dependency feature into the model. The dependency parsing features include the dependency relation value and the position of the dependency parent node of the current word. Experimental results on the SemEval-2010 relation classification task show that the proposed method outperforms most existing methods.
2019, 28(6):125-129. DOI: 10.15888/j.cnki.csa.006788
Abstract:To effectively improve image encryption effect and security, an improved logistic-map image encryption algorithm is designed. Firstly, on the basis of the cubic map and the logistic map, a new two-dimensional discrete map is proposed to overcome the problems of a narrow chaotic interval and few parameters. The image is scrambled by the improved logistic map, the scrambled image is processed by a bitwise exclusive-OR (XOR) operation between adjacent pixels, and the final cipher-text image is obtained by a crossover operation. The simulation results show that the algorithm is simple and easy to implement, has good security, strong anti-attack ability, and high efficiency.
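The chaotic-keystream building block of such schemes can be sketched as follows. This sketch shows only the plain one-dimensional logistic map driving an XOR stage; the paper's improved two-dimensional map, scrambling, adjacent-pixel XOR, and crossover operation are not reproduced, and the key values x0 and mu here are arbitrary assumptions.

```python
def logistic_keystream(x0, mu, n):
    """Iterate the logistic map x <- mu * x * (1 - x) and quantize each
    state to a byte, producing a chaotic keystream of length n."""
    x, out = x0, []
    for _ in range(n):
        x = mu * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_cipher(pixels, x0=0.3456, mu=3.99):
    """Encrypt or decrypt a flat byte-valued pixel sequence by XOR with the
    chaotic keystream; XOR is its own inverse, so the same call decrypts."""
    ks = logistic_keystream(x0, mu, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]
```

Because the keystream depends sensitively on (x0, mu), even a tiny key change produces a completely different cipher image.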
2019, 28(6):130-134. DOI: 10.15888/j.cnki.csa.006849
Abstract:Cut set analysis of fault trees is a common technique for analyzing accidents. However, cut-set-based techniques can only determine the occurrence of accidents from combinations of basic events, and cannot analyze the intermediate events in the process of accident evolution. In view of the accident mechanisms described in accident analysis reports, a fault-tree-oriented precise classification method for accident reports is proposed by combining text classification with fault tree analysis technology. It realizes precise classification of accident reports along the accident evolution path and automatically correlates the information in reports with the structural evolution information of the fault tree, enabling precise analysis of accident causality evolution based on expert experience.
2019, 28(6):135-140. DOI: 10.15888/j.cnki.csa.006888
Abstract:The naive Bayes algorithm is based on the feature-independence assumption, and the traditional TF-IDF weighting algorithm only considers the distribution of features in the whole training set while ignoring the relationship between features and categories or documents, so the weights it assigns cannot represent the discriminative power of features. To solve these problems, this study proposes a feature-weighted naive Bayes classification algorithm based on two-dimensional information gain, which considers the information gain of a feature in two dimensions: with respect to categories and with respect to documents. Compared with the traditional feature-weighted naive Bayes algorithm, the proposed algorithm improves precision, recall, and F1 value by about 6%.
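The category dimension of the information-gain weight can be sketched as follows; the document dimension is computed analogously with documents in place of classes. This is an illustrative sketch, not the paper's exact weighting formula, and the toy corpus is invented.

```python
import math

def entropy(counts):
    """Shannon entropy of a list of class counts, in bits."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def information_gain(docs, labels, term):
    """IG of a term with respect to class labels:
    H(C) - H(C | term present/absent)."""
    classes = sorted(set(labels))
    h_c = entropy([labels.count(c) for c in classes])
    present = [l for d, l in zip(docs, labels) if term in d]
    absent = [l for d, l in zip(docs, labels) if term not in d]
    h_cond = 0.0
    for part in (present, absent):
        if part:
            h_cond += len(part) / len(labels) * entropy(
                [part.count(c) for c in classes])
    return h_c - h_cond
```

A term that perfectly separates the classes gets the maximum gain, while a term absent from the corpus gets zero; such gains can then scale each feature's contribution in the naive Bayes score.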
2019, 28(6):141-147. DOI: 10.15888/j.cnki.csa.006900
Abstract:With the development of the network, public data shows a trend of explosive growth, and data types become more and more complex. These network data combine with each other to form complex network data structures that express information. In this scenario, it is increasingly difficult to fully express data information through a single type of data (picture, text, voice, etc.). So that network information containing multiple types of data can be classified better, this study proposes a new public opinion classification model in which neural networks learn the features of each data type separately and classify the fused features. In the experiments, LSTM and CNN neural networks are used to extract text and image features respectively, and the two features are fused for classification. The experimental results show that classification over fused multi-type features performs better and improves the accuracy of data information classification.
2019, 28(6):148-152. DOI: 10.15888/j.cnki.csa.006935
Abstract:Aiming at the non-repeatability, long period, high labor cost, and high technical requirements of artillery launching experiments, this work studies virtual simulation technology for artillery exterior ballistics. According to the characteristics of the artillery firing process, it analyzes the point-mass exterior trajectory equation set and establishes a mathematical model of the exterior ballistic motion of the projectile. Combining virtual reality technology with the Unity3D development engine, 3ds Max is used to build solid models of the artillery and projectile, and C# is used as the development language to calculate and control the ballistic motion of the projectile, realizing a simulation of the artillery launching process. Comparison of the simulation data with the firing table shows that the simulation not only visually presents the flight state of the launch and projectile, but also keeps the deviation within an acceptable range, which facilitates research on artillery exterior ballistics.
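The point-mass trajectory integration at the core of such a simulation can be sketched as follows (in Python rather than the paper's C#). This is an illustrative sketch: the lumped drag model, step size, and flat-ground impact condition are assumptions, not the paper's equation set.

```python
import math

def simulate_trajectory(v0, angle_deg, dt=0.01, drag_k=0.0):
    """Point-mass exterior ballistics by semi-implicit Euler integration.
    drag_k is a lumped per-unit-mass drag coefficient (force ~ -k * v * |v|);
    with drag_k = 0 this reduces to the vacuum parabola. Returns the range
    at ground impact (y <= 0)."""
    g = 9.81
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while True:
        speed = math.hypot(vx, vy)
        vx += -drag_k * vx * speed * dt           # drag decelerates horizontally
        vy += (-g - drag_k * vy * speed) * dt     # gravity plus drag vertically
        x += vx * dt
        y += vy * dt
        if y <= 0:
            return x
```

With drag disabled, the computed range can be checked against the closed-form vacuum range v0^2 * sin(2*theta) / g, which mirrors how the paper validates the simulation against the firing table.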
2019, 28(6):153-158. DOI: 10.15888/j.cnki.csa.006941
Abstract:The monitoring environment of offshore oil platforms is complex: the monitoring angles of the oil production platforms differ, the marine environment is changeable, and camera pictures are blurred in weather such as fog and rain, all of which increases the difficulty of object detection. To address this, an object detection algorithm based on Convolutional Neural Networks (CNN) for complicated scenarios (ODCS) is proposed to detect specific objects in the image. The method fuses feature map predictions of different resolutions to naturally handle objects of various sizes, eliminates the feature re-sampling phase, and encapsulates all computation in a single network, which makes it easy to train and straightforward to integrate into systems that need a detection component. The experimental results show that, compared with traditional methods, the detection accuracy and recall rate of this method are significantly improved, and the detection efficiency meets the requirements of real-time applications.
2019, 28(6):159-164. DOI: 10.15888/j.cnki.csa.006950
Abstract:The traditional collaborative filtering recommendation algorithm does not fully consider the impact of user attributes and item classification on similarity calculation, and suffers from data sparsity and low recommendation accuracy. This study proposes a collaborative filtering recommendation algorithm based on user attribute clustering and item partitioning, paying particular attention to the similarity calculation that strongly affects recommendation accuracy. Firstly, users are clustered by identity attributes with a clustering algorithm, and the items are classified. Category similarity is added into the similarity calculation, and, considering the number of jointly rated items, a comprehensive similarity is computed with a weighting coefficient. Finally, combined with the average similarity, the nearest neighbors are selected by a threshold method. The experimental results show that the proposed algorithm can effectively improve recommendation accuracy and provide more accurate items for users.
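One way to realize the "number of jointly rated items" weighting mentioned above is to shrink the raw similarity by the co-rating count, so two users who share only a handful of ratings are not treated as highly similar. This is an illustrative sketch, not the paper's formula; the shrinkage form n/(n+gamma) and the gamma value are assumptions.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity over the items rated by both users.
    a and b map item id -> rating."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = math.sqrt(sum(a[i] ** 2 for i in common)) * \
          math.sqrt(sum(b[i] ** 2 for i in common))
    return num / den

def weighted_sim(a, b, gamma=5):
    """Shrink the raw similarity by the co-rated item count: with few
    shared ratings the factor n/(n+gamma) stays small, damping
    spuriously high similarities."""
    n_common = len(set(a) & set(b))
    return (n_common / (n_common + gamma)) * cosine_sim(a, b)
```

A user sharing five identical ratings thus ends up more similar than one sharing a single rating, even though both raw cosine values equal 1.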
2019, 28(6):165-171. DOI: 10.15888/j.cnki.csa.006919
Abstract:The quality of video captured in low lighting is generally poor: many frames have low contrast, blurry edges, and low brightness, which complicates subsequent processing. To solve these problems, an improved low-lighting video enhancement algorithm based on the dark channel prior is presented. Firstly, the input image is inverted and then dehazed; the atmospheric light is estimated from the maximum of the dark channel of the input image. Meanwhile, the transmission map t is calculated and refined with a fast guided filter, which helps preserve edges and suppress noise. Finally, the image is inverted again. The results show that the proposed algorithm can enhance the contrast of low-lighting images, improve brightness, and highlight edge details.
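The invert-dehaze-invert pipeline can be sketched on a tiny list-of-tuples "image" as follows. This is a minimal illustrative sketch: the 3x3 dark-channel window, the crude atmospheric-light estimate, and the omega/t0 values are assumptions, and the guided-filter refinement from the paper is omitted.

```python
def dark_channel(img, patch=1):
    """Minimum over RGB channels and a (2*patch+1)^2 neighborhood.
    img is a list of rows of (r, g, b) tuples in [0, 255]."""
    h, w = len(img), len(img[0])
    dc = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-patch, patch + 1):
                for dx in range(-patch, patch + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(min(img[yy][xx]))
            dc[y][x] = min(vals)
    return dc

def enhance_low_light(img, omega=0.8, t0=0.1):
    """Invert, dehaze with the dark channel prior, invert back."""
    inv = [[tuple(255 - c for c in px) for px in row] for row in img]
    dc = dark_channel(inv)
    a = max(max(max(row) for row in dc), 1)  # crude atmospheric light
    out = []
    for y, row in enumerate(inv):
        out_row = []
        for x, px in enumerate(row):
            t = max(1 - omega * dc[y][x] / a, t0)  # transmission estimate
            dehazed = tuple(int((c - a) / t + a) for c in px)
            out_row.append(tuple(255 - min(max(c, 0), 255) for c in dehazed))
        out.append(out_row)
    return out
```

Pixels that are locally brighter than their surroundings in the inverted image (i.e. darker details in the original) get boosted, while uniform dark background is left essentially unchanged.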
2019, 28(6):172-177. DOI: 10.15888/j.cnki.csa.006945
Abstract:With the rapid growth of the number of registered locomotives in the CMD system, the large amount of accumulated Beidou positioning data provides the necessary foundation for generating track electronic maps with higher precision. However, the Beidou data sent by locomotives contain certain deviations, causing positioning drift or isolated outlier points. In this study, a Kalman filtering prediction equation is constructed to eliminate outliers in the Beidou data set, and the processed longitude and latitude data are further fitted with segmented curves to generate an electronic map of the train track. Case analysis shows that generating an electronic map from Beidou positioning data with the Kalman filtering algorithm and segmented curve fitting can accurately approximate the real track alignment.
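The outlier-elimination step can be sketched with a scalar Kalman filter over one coordinate channel. This is an illustrative sketch, not the paper's filter: the process/measurement noise values and the 3-sigma innovation gate are assumptions, and real Beidou tracks would filter latitude and longitude jointly.

```python
def kalman_filter(measurements, q=1e-3, r=0.5, gate=3.0):
    """Scalar Kalman filter with innovation gating: a measurement whose
    innovation exceeds gate * sqrt(innovation variance) is treated as an
    isolated outlier and replaced by the prediction."""
    x, p = measurements[0], 1.0
    out = [x]
    for z in measurements[1:]:
        p = p + q                     # predict (static-position model)
        s = p + r                     # innovation variance
        if abs(z - x) > gate * s ** 0.5:
            z = x                     # gate: discard outlier, keep prediction
        k = p / s                     # Kalman gain
        x = x + k * (z - x)           # update state
        p = (1 - k) * p               # update covariance
        out.append(x)
    return out
```

A single wild GPS fix in an otherwise smooth sequence is rejected by the gate, while genuine gradual movement still passes through and updates the estimate.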
2019, 28(6):178-182. DOI: 10.15888/j.cnki.csa.006962
Abstract:With the continuous development of virtual reality technology, the requirement for the authenticity of virtual scenes is getting higher and higher. However, complex terrain and large numbers of vegetation and buildings leave a great deal of data to be rendered, so rendering speed becomes a bottleneck of virtual reality technology. Existing research does not improve the rendering speed of Unreal Engine very well, and problems such as model clipping ("breakthrough") and poor culling of invisible models can also appear. In this paper, a two-level culling algorithm with parallel game and rendering threads is presented. Firstly, the game thread and rendering thread are parallelized in Unreal Engine to improve rendering speed. Then, a fade-in-and-out level-of-detail algorithm performs the first level of culling. Finally, a fade culling algorithm performs the second level to improve the culling effect. Experiments show that the rendering speed of this method is improved by forty percent compared with serial threads, and its frame rate is improved by fifty-five percent compared with the traditional single-level culling algorithm.
2019, 28(6):183-188. DOI: 10.15888/j.cnki.csa.006960
Abstract:Software and hardware testing of laptops is a very important part of production before mass manufacturing. Because the laptop is a highly integrated product, the repeatability and complexity of the test process are major features of its testing. For this feature, an automated test task model is designed and a corresponding allocation algorithm is implemented. First, according to the test requirements, the corresponding test tasks are integrated into a data table composed of multiple test conditions; then, tasks are assigned automatically according to the state of each laptop. The tester only needs to complete the corresponding test task according to the prompts, without the tedious process of classifying tasks. Actual tests on multiple machines verify the practicality of the allocation algorithm, and test efficiency is obviously improved.
2019, 28(6):189-197. DOI: 10.15888/j.cnki.csa.006985
Abstract:On the Spark computing platform, data skew often causes some nodes to bear greater network traffic and computing pressure, which imposes a huge burden on the cluster's CPU, memory, disk, and network, affecting the computing performance of the entire cluster. Through research on the Spark Shuffle design and algorithm implementation, and a deep analysis of the essential causes of data skew in large-scale distributed environments, this study proposes a method to avoid data skew in the shuffle process through the broadcast mechanism, analyzes the distribution logic of broadcast variables, and gives the algorithm implementation and a performance analysis of the method. A Broadcast Join experiment verifies the performance advantage of the method.
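The broadcast-join idea can be shown in plain Python: the small table is materialized as a hash map on every worker, so the big table is joined map-side and never shuffled by key, which is what removes the skewed partitions. This sketch simulates a single worker with a loop; in Spark the equivalent would use a broadcast variable, and none of the names below come from the paper.

```python
def broadcast_join(big, small_broadcast):
    """Map-side hash join: build a lookup dict from the broadcast small
    table, then stream the big table through it. No shuffle of the big
    table by key means no single partition receives a hot key's rows."""
    lookup = {}
    for key, value in small_broadcast:
        lookup.setdefault(key, []).append(value)
    joined = []
    for key, value in big:          # each 'worker' scans only its slice
        for sv in lookup.get(key, []):
            joined.append((key, value, sv))
    return joined
```

Even if one key dominates the big table, its rows stay spread across whatever partitions already hold them, instead of being shuffled onto one reducer.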
2019, 28(6):198-202. DOI: 10.15888/j.cnki.csa.006954
Abstract:The C4.5 algorithm is a classical algorithm for generating decision trees. Although it has strong noise tolerance, its classification accuracy decreases obviously when the missing rate of attribute values is high, and it must scan the data set many times when constructing the decision tree. This paper presents an improved classification algorithm that addresses the frequent sorting of data sets and frequent logarithm calls. A method based on the naive Bayes theorem is used to handle missing attribute values and improve classification accuracy, and by optimizing and simplifying the calculation formula, the improved formula replaces the original logarithmic operation with the four basic arithmetic operations, thus reducing the running time of constructing the decision tree. To verify the performance of the algorithm, five data sets from the UCI repository are tested. The experimental results show that the improved algorithm greatly improves running efficiency.
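For reference, the baseline C4.5 split criterion that the paper speeds up can be sketched as follows; the four-operation approximation of the logarithm itself is not reproduced here. The toy weather-style data is invented for illustration.

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((labels.count(v) / n) * math.log2(labels.count(v) / n)
                for v in set(labels))

def gain_ratio(rows, labels, attr):
    """C4.5 split criterion: information gain divided by split information.
    rows is a list of dicts mapping attribute name -> value."""
    n = len(rows)
    values = set(r[attr] for r in rows)
    cond, split_info = 0.0, 0.0
    for v in values:
        subset = [l for r, l in zip(rows, labels) if r[attr] == v]
        p = len(subset) / n
        cond += p * entropy(subset)          # conditional entropy H(C|attr)
        split_info -= p * math.log2(p)       # penalizes many-valued splits
    gain = entropy(labels) - cond
    return gain / split_info if split_info else 0.0
```

An attribute that cleanly separates the classes gets a high ratio, while a constant attribute gets zero; every `log2` call here is exactly the cost the paper's arithmetic approximation targets.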
2019, 28(6):203-208. DOI: 10.15888/j.cnki.csa.006928
Abstract:In order to provide more effective employment guidance in colleges and universities and to train students in a more targeted manner, this study collects information on graduates and their employment situations, constructs a classification prediction algorithm based on HMIGW feature selection and XGBoost, and applies it to graduate employment forecasting. Considering the mixed discrete-continuous nature of the student information data, the study proposes an HMIGW feature selection algorithm suited to employment prediction. The method first measures the relevance of each feature of the student data, and then applies a forward-increase, backward-recursive-deletion strategy for feature selection. Finally, an XGBoost prediction model is trained on the selected optimal feature subset and used for prediction. Comparison with other algorithms shows that the adopted method performs better on evaluation indexes such as accuracy and time, and has a positive effect on the employment guidance of graduates.
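The forward-increase / backward-delete search can be sketched independently of the HMIGW score, which the abstract does not define; a caller-supplied `score(subset)` stands in for it, and the feature names and toy score below are invented.

```python
# Greedy forward-add then backward-delete feature selection (search
# strategy only; the HMIGW relevance measure for mixed discrete-continuous
# data is not reproduced here).

def select_features(features, score):
    selected = []
    # Forward pass: repeatedly add the feature that most improves the score.
    improved = True
    while improved:
        improved = False
        best_f, best_s = None, score(selected)
        for f in features:
            if f in selected:
                continue
            s = score(selected + [f])
            if s > best_s:
                best_f, best_s = f, s
        if best_f is not None:
            selected.append(best_f)
            improved = True
    # Backward pass: drop any feature whose removal does not hurt the score.
    for f in list(selected):
        reduced = [x for x in selected if x != f]
        if score(reduced) >= score(selected):
            selected = reduced
    return selected

def toy_score(subset):
    # Rewards two "useful" features, slightly penalizes subset size.
    return len(set(subset) & {"gpa", "major"}) - 0.1 * len(subset)

print(select_features(["gpa", "age", "major", "hobby"], toy_score))
# ['gpa', 'major']
```

In the paper's pipeline the surviving subset would then be fed to XGBoost for training and prediction.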
2019, 28(6):209-212. DOI: 10.15888/j.cnki.csa.006912
Abstract:Differential Evolution (DE) is an evolutionary computation technique that has attracted much attention and found wide application for its simple concept, easy implementation, and quick convergence. To reduce the overhead and problem-dependent parameter tuning of classical DE and to enhance its precision, an Improved DE (IDE) algorithm is proposed that uses a dynamic mutation operator to adjust the step size according to the search space as evolution proceeds. Experiments on well-known benchmark functions in MATLAB show that the improved approach outperforms existing algorithms and that dynamic mutation is an effective improvement.
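A minimal sketch of the idea, with one important caveat: the paper's exact adaptation rule is not given, so scaling the mutation factor by the population's current spread is an assumed stand-in, and the test function and parameters are invented.

```python
import random

# DE/rand/1/bin with a mutation factor F that shrinks as the population
# contracts, echoing the idea of adapting step size to the current search
# range. The specific formula for F below is an assumption.

def ide(fitness, dim, bounds, pop_size=20, gens=100, cr=0.9, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        # Dynamic mutation: F proportional to the population's spread.
        spread = max(max(ind) for ind in pop) - min(min(ind) for ind in pop)
        f = 0.5 * spread / (hi - lo) + 0.1
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [
                a[d] + f * (b[d] - c[d]) if rng.random() < cr else pop[i][d]
                for d in range(dim)
            ]
            if fitness(trial) < fitness(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=fitness)

sphere = lambda x: sum(v * v for v in x)
best = ide(sphere, dim=3, bounds=(-5.0, 5.0))
print(sphere(best))   # a small value near the optimum at the origin
```

Large steps early on favor exploration; as the population converges, the shrinking F refines the solution, which is the precision gain the paper attributes to dynamic mutation.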
2019, 28(6):213-220. DOI: 10.15888/j.cnki.csa.006916
Abstract:This paper aims to minimize the makespan of parallel batch processing machines with non-identical job sizes. All jobs are grouped into batches within the capacity constraint of the machines and then scheduled on the machines. First, a mixed integer programming model is formulated for this problem and a lower bound is proposed. Then, an FF-LPT rule is presented to form batches and assign them to machines, and an Estimation of Distribution Algorithm (EDA) with four different update mechanisms is proposed. The performance of the proposed algorithm is evaluated by comparison with a Simulated Annealing (SA) algorithm and a Genetic Algorithm (GA). The experimental results demonstrate the effectiveness of the proposed algorithm.
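The batching half of the FF-LPT rule can be sketched directly: sort jobs by size in decreasing order (the LPT-style ordering) and pack each into the first batch that still has capacity (First-Fit). The job sizes and capacity below are toy values; the MIP model and the EDA are not reproduced.

```python
# FF-LPT batch formation sketch: First-Fit packing over a size-descending
# job order, respecting the machine capacity.

def ff_lpt_batches(job_sizes, capacity):
    batches = []   # each batch is a list of job sizes
    for size in sorted(job_sizes, reverse=True):
        for batch in batches:
            if sum(batch) + size <= capacity:
                batch.append(size)   # first batch with room wins
                break
        else:
            batches.append([size])   # no batch fits: open a new one
    return batches

jobs = [4, 7, 2, 5, 3, 6]
print(ff_lpt_batches(jobs, capacity=10))
# [[7, 3], [6, 4], [5, 2]]
```

The resulting batches would then be assigned to the parallel machines, the step the EDA with its four update mechanisms searches over.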
2019, 28(6):221-227. DOI: 10.15888/j.cnki.csa.006943
Abstract:Simulating vehicle body posture while an armored vehicle is running is a key technology in driving training simulation systems. To simulate armored vehicle driving over different terrain, this paper presents a virtual-reality-based method for armored vehicle motion simulation. First, the structure and shape of the armored vehicle and the real terrain are modeled, and the driving scenario is displayed simultaneously from first- and third-person perspectives. Second, a dynamic model of vehicle driving is established to solve for the body attitude data over different topography. Finally, the whole scene and vehicle posture are rendered dynamically through the Unity3D engine. Experimental results show that this method can accurately simulate the vehicle body posture under various topographic conditions and realistically reproduce the running state of the vehicle.
2019, 28(6):228-234. DOI: 10.15888/j.cnki.csa.006937
Abstract:The field of railway detection and monitoring generates massive image data, and image scene classification is of great value for subsequent analysis and management. This study proposes a visual scene classification model that combines Deep Convolutional Neural Networks (DCNN) and Gradient-weighted Class Activation Mapping (Grad-CAM). The DCNN extracts features from the railway scene classification image dataset by transfer learning, while Grad-CAM improves the interpretability of the classification model by computing category-weighted heatmaps and activation scores. In the experiments, the effects of different DCNN structures on railway image scene classification performance are compared, and a visual interpretation of the scene classification model is realized. Based on the visualization, an optimization process is further proposed that improves the model's classification ability by reducing internal bias in the dataset, verifying the effectiveness of deep learning for image scene classification tasks.
2019, 28(6):235-242. DOI: 10.15888/j.cnki.csa.006946
Abstract:In order to better evaluate the clustering quality of unsupervised clustering algorithms and to solve the failure of clustering evaluation when cluster centers overlap, this study analyzes commonly used clustering evaluation indexes and proposes a new internal evaluation index: the product of the minimum squared distance between adjacent boundary points and the number of samples in the cluster is taken as the separation degree of the whole sample set, balancing inter-cluster separation against intra-cluster compactness. A new density calculation method is also proposed, which treats objects with a larger average distance ratio over the sample set as high-density points and uses the maximum-product method to select relatively dispersed, high-density data objects as initial cluster centers, thus making the initial centers of the K-medoids algorithm more representative and the algorithm more stable. On this basis, a cluster quality evaluation model is designed with the newly proposed internal index. Experimental results on the UCI and KDD CUP 99 data sets show that the new model can effectively cluster and reasonably evaluate samples without prior knowledge, and can give the optimal number or range of clusters.
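The initial-center selection can be sketched under two assumptions, since the abstract does not define its formulas exactly: density is taken here as the inverse of a point's mean distance to all others, and each further center maximizes the product of its density and its distance to the nearest already-chosen center. The toy points form two tight clusters.

```python
# Density-and-distance initial-center selection for K-medoids (formulas
# are plausible stand-ins for the paper's "average distance ratio" and
# maximum-product method, which are not given in the abstract).

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def initial_centers(points, k):
    n = len(points)
    # Density: inverse of the mean distance to every other point.
    density = [1.0 / (sum(dist(p, q) for q in points) / (n - 1)) for p in points]
    centers = [max(range(n), key=lambda i: density[i])]   # densest point first
    while len(centers) < k:
        def product(i):
            # Favor dense points far from every chosen center.
            return density[i] * min(dist(points[i], points[c]) for c in centers)
        centers.append(max((i for i in range(n) if i not in centers), key=product))
    return [points[i] for i in centers]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(initial_centers(pts, 2))   # one center from each tight cluster
```

Because the second factor penalizes closeness to existing centers, overlapping or adjacent initial centers, the failure mode the paper targets, are avoided.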
2019, 28(6):243-246. DOI: 10.15888/j.cnki.csa.006920
Abstract:Linux Virtual Server (LVS) is one of the solutions for improving the resource utilization of a cloud platform. However, because the weights in the LVS load balancing algorithm are set unscientifically and tasks cannot be balanced in real time when connection requests are allocated, server load in the cloud environment becomes unbalanced, which reduces the system's ability to provide external services. To address these problems, this study combines the simulated annealing algorithm with the weighted least-connection algorithm to offer an improved balancing strategy with an optimal load factor. Experiments show that the optimal load factor strategy makes the node load in the cluster more balanced, thereby improving the high availability of the cloud platform.
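The baseline the paper improves on is easy to state concretely: weighted least-connection routes each new request to the server with the smallest ratio of active connections to weight. The server names and numbers below are invented; the simulated-annealing tuning of the optimal load factor is not reproduced.

```python
# Weighted least-connection (WLC) scheduling sketch: pick the server
# minimizing active_connections / weight.

def pick_server(servers):
    # servers: {name: (active_connections, weight)}
    return min(servers, key=lambda s: servers[s][0] / servers[s][1])

servers = {"web1": (12, 4), "web2": (5, 1), "web3": (6, 3)}
print(pick_server(servers))   # web3: 6/3 = 2.0 is the smallest ratio
```

The paper's contribution is to stop treating the weights as static: the annealing search adjusts them toward an optimal load factor so the ratios keep reflecting real node load.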
2019, 28(6):247-253. DOI: 10.15888/j.cnki.csa.006906
Abstract:Urban short-term traffic flow forecasting helps people choose optimal travel routes and improve travel efficiency, which is increasingly necessary as traffic congestion grows more serious. Accurate short-term prediction is difficult because many factors, such as weather, can influence short-term traffic flow. To improve the accuracy of short-term traffic flow prediction, this study proposes a hybrid model based on the Adaptive Neuro-Fuzzy Inference System (ANFIS), which combines a periodicity knowledge model with an ANFIS model driven by residual data. To verify its performance, the hybrid model is compared with a Back-Propagation Neural Network (BPNN) model and a plain ANFIS model. The experimental results show that the hybrid model has better applicability and accuracy in traffic flow prediction.
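The hybrid structure, a periodicity baseline plus a residual correction, can be sketched with toy numbers; a damped-persistence rule stands in for the ANFIS residual model, which is not reproduced here.

```python
# Hybrid-prediction sketch: periodic baseline (mean flow of the same time
# slot on past days) plus a correction learned from residuals. The
# residual rule below is a stand-in for ANFIS.

history = [            # traffic flow per time slot, one list per day (toy data)
    [100, 180, 150],
    [110, 190, 140],
    [105, 185, 145],
]

def periodic_baseline(slot):
    # Periodicity knowledge: average of the same slot across past days.
    return sum(day[slot] for day in history) / len(history)

def residual_model(recent_residual):
    # Stand-in for the ANFIS component: damped persistence of the residual.
    return 0.5 * recent_residual

last_observed = [108, 192, 148]
predictions = [
    periodic_baseline(s) + residual_model(last_observed[s] - periodic_baseline(s))
    for s in range(3)
]
print(predictions)
# [106.5, 188.5, 146.5]
```

The split of labor is the point: the baseline captures the daily rhythm, while the residual model absorbs irregular influences such as weather.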
2019, 28(6):254-259. DOI: 10.15888/j.cnki.csa.006753
Abstract:With the development of the social economy, the amount of data keeps increasing. Mining valuable information from this huge amount of data by predicting the future from the latent laws of historical data has become an important part of the data mining field. This work studies the MLP, BP, and MLBP models, conducts a comparative error analysis of them, and then applies the optimal model to stock forecasting. It uses the Python-based Tushare financial data interface to fetch daily stock trading data, applies the three models to analyze and process the trading data while continuously adjusting some of the parameters, compares the prediction results of each model by their MSE, and finally obtains an optimal prediction.
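The model-selection step reduces to comparing mean squared errors; the prices and per-model predictions below are invented toy data, not the paper's results.

```python
# Selecting the best of several forecasters by MSE, as done for the MLP,
# BP, and MLBP models in the study (toy numbers for illustration).

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

actual = [10.0, 10.5, 11.0, 10.8]
predictions = {
    "MLP":  [10.2, 10.4, 10.9, 11.0],
    "BP":   [9.5, 10.0, 11.5, 10.2],
    "MLBP": [10.1, 10.5, 11.0, 10.7],
}
best = min(predictions, key=lambda m: mse(actual, predictions[m]))
print(best)   # MLBP: lowest MSE on this toy data
```

Whichever model minimizes MSE on held-out trading days is the one carried forward for the final stock forecast.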
2019, 28(6):260-267. DOI: 10.15888/j.cnki.csa.006995
Abstract:The hazardous chemicals industry is a high-risk industry in which explosion, fire, leakage, and poisoning accidents occur frequently. The traditional causality-based accident chain analysis method is limited by the technical foundations and assumptions of traditional safety engineering and cannot cope with today's complex systems. Based on accident causation theory, this study analyzes the main factors in the formation of hazardous chemical accidents, constructs an accident state vector that comprehensively describes the factors leading to such accidents, and uses the state vector to analyze and forecast them. A high-dimensional vector defines the accident state so that as many relevant factors as possible are considered. Using a support vector machine learning algorithm, an accident prediction model is built from the accident state vectors. Tests on hazardous chemical accident samples show that the method can distinguish accident states accurately and efficiently, and has positive significance for the accident prediction of hazardous chemicals.