• Current Issue
    2020,29(11):1-10, DOI: 10.15888/j.cnki.csa.007593
    Abstract:
    So far, most research on LoRa technology has addressed IoT networks serving a single application, and the resulting low utilization of configurable parameters leaves room for further optimization of network performance. To adapt to the growing transmission requirements of heterogeneous, multi-type services, optimizing LoRa network performance is increasingly essential. To address this issue, a dynamic parameter adaptive configuration strategy based on a simulated annealing genetic algorithm is proposed, which increases the number of end devices and the data throughput a single-gateway LoRa network can support while limiting energy consumption. Simulation results based on LoRaSim reveal that the proposed method outperforms ADR by 25.6%. Simulating a single-gateway LoRa network of over 1000 devices shows that when the packet generation rate is 1/100 s, the proposed strategy keeps the PDR above 90%. The method adapts to the data transmission needs of multiple heterogeneous applications and effectively improves data throughput while ensuring the PDR of each application.
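    The abstract names a simulated annealing genetic algorithm without implementation detail. A minimal sketch of the simulated-annealing half of such a search over per-device LoRa parameters follows; the parameter ranges and the fitness function are illustrative assumptions, and a real run would score candidates with a LoRaSim-style simulator instead.

        import math, random

        SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]
        BANDWIDTHS_KHZ = [125, 250, 500]
        TX_POWERS_DBM = [2, 5, 8, 11, 14]

        def fitness(config):
            # Hypothetical objective: reward throughput, penalize energy.
            # A real implementation would query a network simulator here.
            throughput = sum(bw / sf for sf, bw, _ in config)
            energy = sum(p for _, _, p in config)
            return throughput - 0.01 * energy

        def mutate(config):
            # Re-draw the parameters of one randomly chosen end device.
            new = list(config)
            i = random.randrange(len(new))
            new[i] = (random.choice(SPREADING_FACTORS),
                      random.choice(BANDWIDTHS_KHZ),
                      random.choice(TX_POWERS_DBM))
            return new

        def anneal(n_devices=100, t0=10.0, cooling=0.995, steps=5000):
            config = [(12, 125, 14)] * n_devices    # conservative start
            best, t = config, t0
            for _ in range(steps):
                cand = mutate(config)
                delta = fitness(cand) - fitness(config)
                # Metropolis acceptance: always keep improvements, sometimes
                # keep worse configurations while the temperature is high.
                if delta > 0 or random.random() < math.exp(delta / t):
                    config = cand
                if fitness(config) > fitness(best):
                    best = config
                t *= cooling
            return best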
    2020,29(11):11-20, DOI: 10.15888/j.cnki.csa.007461
    Abstract:
    Detecting malicious URLs is important for defending against cyber attacks. Since supervised learning requires a large number of labeled samples, this study trains malicious URL detection models with a semi-supervised learning method, reducing the cost of labeling data. We propose an improved algorithm based on traditional co-training. Two kinds of classifiers are trained, one on expert-knowledge features and one on Doc2Vec-preprocessed data; unlabeled samples on which the two classifiers agree with high confidence are screened, pseudo-labeled, and fed back into training. The experimental results show that with only 0.67% of the data labeled, the method trains two different types of classifiers with detection precisions of 99.42% and 95.23%, comparable to supervised learning and better than self-training and standard co-training.
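    As a rough illustration of the screening step described above, here is a minimal co-training loop; the classifier types, the confidence threshold, and the two feature views are assumptions standing in for the paper's expert-knowledge and Doc2Vec representations.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier

        def co_train(Xa, Xb, y, labeled_idx, threshold=0.95, rounds=10):
            # Xa/Xb: two views of the same URLs (e.g. expert features and
            # Doc2Vec vectors); y holds 0/1 labels, valid on labeled_idx only.
            clf_a = LogisticRegression(max_iter=1000)
            clf_b = RandomForestClassifier()
            labeled = set(labeled_idx)
            for _ in range(rounds):
                idx = np.fromiter(labeled, dtype=int)
                clf_a.fit(Xa[idx], y[idx])
                clf_b.fit(Xb[idx], y[idx])
                rest = np.array([i for i in range(len(y)) if i not in labeled])
                if rest.size == 0:
                    break
                pa = clf_a.predict_proba(Xa[rest])
                pb = clf_b.predict_proba(Xb[rest])
                la, lb = pa.argmax(1), pb.argmax(1)
                # Keep only samples where both classifiers agree and are confident.
                mask = (la == lb) & (pa.max(1) > threshold) & (pb.max(1) > threshold)
                if not mask.any():
                    break
                new = rest[mask]
                y[new] = la[mask]            # accept pseudo-labels (0/1 classes)
                labeled.update(new.tolist())
            return clf_a, clf_b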
    2020,29(11):21-28, DOI: 10.15888/j.cnki.csa.007668
    Abstract:
    To fill the gap in research on predicting density-limit disruptions on EAST, 972 density-limit disruptive pulses were selected from EAST discharges between 2014 and 2019 as the data set, and 13 diagnostic signals were chosen as features. A Multi-Layer Perceptron (MLP) and a Long Short-Term Memory (LSTM) network were used as models, with disruption risk as the output, to build the predictors. The experimental results show that for density-limit disruptive pulses, across different alarming times, the successful prediction rate of the LSTM (around 95%) is higher than that of the MLP (85%), while for non-disruptive pulses the false alarm rate is around 8% for both models. The clear improvement of the LSTM over the MLP shows the feasibility of building an EAST density-limit disruption prediction system with neural networks and of improving the response of disruption avoidance and mitigation systems.
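    A minimal sketch of the LSTM predictor shape implied by the abstract, with 13 diagnostic signals in and a scalar disruption risk out (the window length and hidden size are assumptions):

        import torch
        import torch.nn as nn

        class DisruptionLSTM(nn.Module):
            def __init__(self, n_signals=13, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(n_signals, hidden, batch_first=True)
                self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

            def forward(self, x):
                # x: (batch, time_steps, 13 diagnostic signals)
                out, _ = self.lstm(x)
                # Read the risk from the final time step of each pulse window.
                return self.head(out[:, -1, :]).squeeze(-1)

        model = DisruptionLSTM()
        risk = model(torch.randn(8, 100, 13))   # 8 pulses, 100 time steps each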
    2020,29(11):29-39, DOI: 10.15888/j.cnki.csa.007705
    Abstract:
    Polarimetric Synthetic Aperture Radar (PolSAR) is a type of microwave imaging radar that is largely unaffected by weather, light, and cloud cover, giving it all-day, all-weather imaging capability. PolSAR images have therefore become one of the main data sources for land classification based on remote sensing imagery. From the perspective of technical methods, this paper reviews recent methods and applications of land classification based on PolSAR images, introduces the technical methods and their experimental results, and analyzes the development trend of PolSAR-based land classification.
    2020,29(11):40-46, DOI: 10.15888/j.cnki.csa.007666
    Abstract:
    With the rapid development of modern technology, the data center has become the IT infrastructure of the information society, storing and managing large amounts of key data. At present, data center management relies largely on experienced operation and maintenance personnel who use computers to monitor equipment-room device metrics and inspect equipment repeatedly, which is time-consuming and tedious. Deep learning and artificial intelligence are attracting growing attention and have seen many successful applications in the Internet and industrial fields. This study designs a Gated Recurrent Unit (GRU) based deep learning framework that automatically diagnoses equipment failures in cloud data center equipment rooms and uses temporal information to predict future states from past operating-status data. The series data are split into fixed time windows and fed to a bidirectional GRU layer, which lets the network learn the time dependencies among data points. An attention layer and an embedding layer are added after the GRU output to help the network learn features more useful for the prediction task and to further reduce dimensionality. Finally, a multi-layer perceptron classifies the data. Experimental results on real data sets show that the proposed GRU-based framework detects cloud data center faults more accurately than LSTM, SVM, and KNN.
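    A sketch of the described architecture (bidirectional GRU over fixed time windows, attention, an embedding layer for dimension reduction, then an MLP classifier) follows; the feature count and layer sizes are assumptions.

        import torch
        import torch.nn as nn

        class GRUFaultDetector(nn.Module):
            def __init__(self, n_features=16, hidden=64, n_classes=2):
                super().__init__()
                self.gru = nn.GRU(n_features, hidden, batch_first=True,
                                  bidirectional=True)
                self.attn = nn.Linear(2 * hidden, 1)     # per-step attention scores
                self.embed = nn.Linear(2 * hidden, 32)   # dimension reduction
                self.mlp = nn.Sequential(nn.Linear(32, 32), nn.ReLU(),
                                         nn.Linear(32, n_classes))

            def forward(self, x):
                # x: (batch, window, n_features), one fixed-size time window each
                h, _ = self.gru(x)                       # (batch, window, 2*hidden)
                w = torch.softmax(self.attn(h), dim=1)   # attention over time steps
                ctx = (w * h).sum(dim=1)                 # weighted window summary
                return self.mlp(self.embed(ctx))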
    2020,29(11):47-56, DOI: 10.15888/j.cnki.csa.007663
    Abstract:
    The big data industry has risen to the level of a national strategy, and establishing big data laboratories and experimental curricula is necessary for training big data professionals. This paper organizes the big data knowledge system; analyzes the training objectives and career orientations of the majors "data science and big data technology" and "big data technology and application"; clarifies the key knowledge big data students should master and the professional skills to be cultivated; introduces the mainstream big data ecosystem; selects the most widely applicable big data architecture; proposes plans for building big data laboratories in single-machine, virtualized single-machine, shared big data cluster, and cloud computing environments; and designs a big data experimental curriculum with experiment projects.
  • Abstract Click Ranking

    2000,9(2):38-41, DOI:
    [Abstract] (11050) [PDF] (7498)
    Abstract:
    This paper discusses in detail how to combine VRML with other data-access technologies to achieve real-time interaction with a database, and briefly explains the syntactic structure and technical requirements of the relevant specifications. The techniques used are safe and reliable, have performed well in practical applications, and make the system easy to port.
    1993,2(8):41-42, DOI:
    [Abstract] (7694) [PDF] (7708)
    Abstract:
    This paper describes the author's recent experience using the utility software NU to remove viruses from floppy disk boot sectors and hard disk master boot records and to repair disks with damaged boot sectors; practice has shown the approach to be simple and effective.
    1995,4(5):2-5, DOI:
    [Abstract] (7304) [PDF] (5103)
    Abstract:
    This paper briefly introduces the definition, status, and significance of the customs EDI automated clearance system, and analyzes, with reference to practice, the legal issues raised by the business operation model under this EDI application, the adoption of the EDIFACT international standard, network and software technology issues, and project management issues.
    2011,20(11):80-85, DOI:
    [Abstract] (6308) [PDF] (13408)
    Abstract:
    Building on a study of current mainstream video transcoding schemes, this paper proposes a distributed transcoding system. The system stores video on HDFS (Hadoop Distributed File System) and performs distributed transcoding with the MapReduce paradigm and FFMPEG. It discusses in detail the segmentation strategy for distributed video storage and the effect of segment size on access time, and defines metadata formats for video storage and conversion. A distributed transcoding scheme based on the MapReduce programming framework is proposed, in which the Mapper side transcodes segments and the Reducer side merges the video. Experimental data show how transcoding time varies with segment size and with the number of transcoding machines.
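    The mapper/reducer split described above can be sketched as follows. This is a standalone Python illustration of the idea (transcode segments in the map step, concatenate them in the reduce step) rather than the paper's Hadoop job, and the codec settings are assumptions.

        import os, subprocess, tempfile

        def map_transcode(segment_path):
            # Mapper: transcode one stored video segment with FFMPEG.
            out = segment_path + '.transcoded.mp4'
            subprocess.run(['ffmpeg', '-y', '-i', segment_path,
                            '-c:v', 'libx264', '-b:v', '1M', out], check=True)
            return out

        def reduce_merge(segment_paths, merged_path):
            # Reducer: concatenate transcoded segments with ffmpeg's concat
            # demuxer; assumes segment names sort in playback order.
            with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
                for p in sorted(segment_paths):
                    f.write("file '%s'\n" % os.path.abspath(p))
                list_file = f.name
            subprocess.run(['ffmpeg', '-y', '-f', 'concat', '-safe', '0',
                            '-i', list_file, '-c', 'copy', merged_path], check=True)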
    2008,17(5):122-126, DOI:
    [Abstract] (5830) [PDF] (18179)
    Abstract:
    With the rapid development of the Internet, network resources are becoming ever richer, and extracting information from the web is increasingly important; in particular, Deep Web information retrieval, covering roughly 80% of web resources, is a difficult problem deserving close attention. To support research on Deep Web crawler technology, this paper gives a comprehensive and detailed introduction to Deep Web crawlers. It first states their definition and research goals, then reviews and analyzes recent domestic and international research progress, and on that basis looks ahead to research trends, laying a foundation for further work.
    2016,25(8):1-7, DOI: 10.15888/j.cnki.csa.005283
    [Abstract] (5394) [PDF] (14487)
    Abstract:
    Since 2006, deep neural networks have achieved great success in big data processing and artificial intelligence, including image/speech recognition and autonomous driving, and unsupervised learning methods, as the pre-training stage of deep neural networks, have played a very important role in that success. This paper introduces and analyzes the unsupervised learning methods used in deep learning, summarizing two commonly used families: deterministic autoencoder methods and probabilistic methods such as contrastive divergence learning for Restricted Boltzmann Machines. It describes how these two families are applied in deep learning systems, and concludes with the open problems and challenges facing unsupervised learning.
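    Of the two families the survey summarizes, the deterministic autoencoder is the easier to illustrate; a minimal reconstruction-trained autoencoder (layer sizes illustrative) looks like this:

        import torch
        import torch.nn as nn

        class AutoEncoder(nn.Module):
            def __init__(self, n_in=784, n_hidden=64):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
                self.decoder = nn.Linear(n_hidden, n_in)

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = AutoEncoder()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x = torch.rand(32, 784)                       # a dummy input batch
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)    # reconstruction objective
        loss.backward()
        opt.step()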
    1999,8(7):43-46, DOI:
    [Abstract] (5353) [PDF] (6997)
    Abstract:
    Representing a large color space with relatively few colors has long been a research topic. This paper discusses halftoning and dithering techniques in detail, extends them to the practical true-color space, and presents algorithms for implementing them.
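    Ordered dithering, one of the dithering techniques discussed, reduces to thresholding against a tiled matrix; a minimal grayscale sketch follows (extending it to true color applies the same idea per channel):

        import numpy as np

        # 4x4 Bayer threshold matrix, normalized to [0, 1).
        BAYER4 = np.array([[ 0,  8,  2, 10],
                           [12,  4, 14,  6],
                           [ 3, 11,  1,  9],
                           [15,  7, 13,  5]]) / 16.0

        def ordered_dither(gray):
            # gray: 2-D float array in [0, 1]; returns a binary image.
            h, w = gray.shape
            thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
            return (gray > thresh).astype(np.uint8)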
    2004,13(10):7-9, DOI:
    [Abstract] (4618) [PDF] (4320)
    Abstract:
    This paper introduces the composition of a vehicle monitoring system, covering the hardware and software design of the mobile unit around a Rockwell GPS OEM board and a WISMO QUIK Q2406B module, as well as the design of the GIS software at the monitoring center. It focuses on how the Q2406B module, with embedded TCP/IP protocol handling, connects to the Internet via AT commands and exchanges TCP data with the monitoring center.
    2012,21(3):260-264, DOI:
    [Abstract] (4612) [PDF] (15634)
    Abstract:
    The core problems of an open platform are user authentication and authorization. OAuth is the prevailing international authorization scheme; its distinguishing feature is that a third-party application can request access to a user's protected resources without the user entering a username and password into that application. The latest version is OAuth 2.0, whose authentication and authorization flows are simpler and more secure. This paper studies how OAuth 2.0 works, analyzes the workflow for refreshing access tokens, and presents a server-side design for OAuth 2.0 together with a concrete application example.
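    The refresh workflow the paper analyzes follows the standard OAuth 2.0 token endpoint exchange (RFC 6749, section 6). A minimal client-side sketch, with a placeholder endpoint and credentials:

        import requests

        resp = requests.post(
            'https://auth.example.com/oauth/token',   # hypothetical endpoint
            data={
                'grant_type': 'refresh_token',
                'refresh_token': 'STORED_REFRESH_TOKEN',
                'client_id': 'MY_CLIENT_ID',
                'client_secret': 'MY_CLIENT_SECRET',
            },
        )
        tokens = resp.json()
        access_token = tokens['access_token']         # new short-lived token
        refresh_token = tokens.get('refresh_token')   # the server may rotate it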
    2011,20(7):184-187,120, DOI:
    [Abstract] (4555) [PDF] (16562)
    Abstract:
    To meet the practical requirements of smart homes, environmental monitoring, and similar applications, a long-range wireless sensor node is designed. The node is built around the CC2530, a second-generation system-on-chip integrating an RF transceiver and a controller, with an external CC2591 RF front-end power amplifier; the software is based on the ZigBee2006 protocol stack, with the application-layer functions implemented on top of the generic ZStack modules. The paper describes building a wireless data acquisition network on the ZigBee protocol and gives the hardware schematics and software flowcharts of the sensor and coordinator nodes. Experiments show the node performs well and communicates reliably, with a communication range clearly greater than TI's first-generation products.
    2008,17(8):2-5, DOI:
    [Abstract] (4553) [PDF] (9473)
    Abstract:
    This paper presents the design and implementation of a single sign-on system for an enterprise information portal. Built on the Java EE architecture and combining credential encryption with Web Services, the system provides unified authentication and access control for portal users. The paper details the system's overall structure, design ideas, working principles, and implementation; the system has already been deployed successfully in broadcasting-industry information portals in several provinces and municipalities.
    2008,17(8):87-89, DOI:
    [Abstract] (4543) [PDF] (17622)
    Abstract:
    With the wide adoption of object-oriented software development and the demand for test automation, model-based software testing has gradually won acceptance from developers and testers. It is one of the main testing methods at the coding stage, offering high testing efficiency and good results at exposing faults with complex logic, although false positives, missed faults, and fault mechanisms still need further study. This paper analyzes and classifies the main testing models, gives a preliminary analysis of parameters such as fault density, and finally proposes a model-based software testing process.
    2008,17(1):113-116, DOI:
    [Abstract] (4469) [PDF] (23228)
    Abstract:
    Sorting is an important operation in computer programming. This paper discusses an improvement of the quicksort algorithm in C that combines quicksort with direct insertion sort. When implementing large-scale internal sorting in C programs, the goal is a simple, effective, and fast algorithm. The paper traces the improvement of quicksort from its basic performance characteristics through successive algorithmic refinements, and through repeated analysis and experiment arrives at the best improved algorithm.
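    The improvement the paper describes, switching from quicksort to direct insertion sort on small subranges, can be sketched as follows; the paper works in C, and the cutoff value here is an illustrative assumption.

        def insertion_sort(a, lo, hi):
            # Efficient on the small, nearly sorted ranges quicksort leaves.
            for i in range(lo + 1, hi + 1):
                key, j = a[i], i - 1
                while j >= lo and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key

        def hybrid_quicksort(a, lo=0, hi=None, cutoff=16):
            if hi is None:
                hi = len(a) - 1
            if hi - lo + 1 <= cutoff:        # small range: switch algorithms
                insertion_sort(a, lo, hi)
                return
            pivot = a[(lo + hi) // 2]
            i, j = lo, hi
            while i <= j:                    # Hoare-style partition
                while a[i] < pivot: i += 1
                while a[j] > pivot: j -= 1
                if i <= j:
                    a[i], a[j] = a[j], a[i]
                    i, j = i + 1, j - 1
            hybrid_quicksort(a, lo, j, cutoff)
            hybrid_quicksort(a, i, hi, cutoff)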
    2010,19(10):42-46, DOI:
    Abstract:
    Taking into account the system requirements of a virtual laboratory based on component assembly, this paper analyzes the business-processing model of a workflow-driven dynamic virtual laboratory, introduces a model for integrating a lightweight J2EE framework (SSH) with a workflow system (Shark and JaWE), and proposes a design and implementation method for such a laboratory under the lightweight J2EE framework. It presents the implementation mechanism for virtual experiment projects, the management of data flow and control flow, and the dynamic assembly of experiment workflows, and finally demonstrates the method's effectiveness with an application example.
    2004,13(8):58-59, DOI:
    [Abstract] (4390) [PDF] (7055)
    Abstract:
    This paper introduces several ways of moving focus among multiple text boxes in a Visual C++ 6.0 dialog with the Enter key, and proposes an improved method.
    2009,18(5):182-185, DOI:
    [Abstract] (4301) [PDF] (12189)
    Abstract:
    DICOM is the international standard for storing and transmitting medical images, and DCMTK is a free, open-source toolkit implementing the DICOM standard. Parsing the DICOM file format and displaying DICOM medical images are fundamental to medical image processing and of real significance to medical imaging research. This paper interprets the DICOM file format, introduces the principle of window-level (windowing) processing, and implements medical image display and windowing with VC++ and DCMTK.
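    The window-level principle the paper implements with DCMTK is a linear mapping from a chosen value window onto the display range; a minimal sketch (the CT preset in the comment is a common example, not taken from the paper):

        import numpy as np

        def apply_window(pixels, center, width, bits_out=8):
            # Map raw values inside [center - width/2, center + width/2]
            # linearly onto the display range, clamping everything outside.
            lo, hi = center - width / 2.0, center + width / 2.0
            out_max = 2 ** bits_out - 1
            scaled = (pixels.astype(np.float64) - lo) / (hi - lo) * out_max
            return np.clip(scaled, 0, out_max).astype(np.uint8)

        # e.g. a typical CT lung window: apply_window(ct_slice, -600, 1500)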
    2003,12(1):62-65, DOI:
    [Abstract] (4300) [PDF] (6184)
    Abstract:
    This paper introduces a method that converts a DTD into an ER diagram, describes the ER diagram as a transformation standard using an XML application, and then converts XML documents into a relational model according to that standard.
    2009,18(3):164-167, DOI:
    [Abstract] (4292) [PDF] (14590)
    Abstract:
    This paper introduces a method, based on DWGDirectX, for displaying and manipulating DWG files and adding simple entities without depending on the AutoCAD platform, and analyzes and implements the method.
    2009,18(3):96-98, DOI:
    [Abstract] (4233) [PDF] (6742)
    Abstract:
    Combining J2EE, the Java-based enterprise computing solution, with JAAS, the Java-based authentication and authorization solution, provides a good basis for building secure distributed applications over the Internet. In designing and implementing a science and technology management platform, the authors used J2EE and JAAS to build a well-secured online distributed application that integrates project management, expert information management, and online project application, and it has performed well in practice.
  • Full-Text Download Ranking

    2007,16(10):48-51, DOI:
    [Abstract] (3469) [PDF] (73485)
    Abstract:
    This paper studies the HDF data format and its function library, taking raster images as the main example: it details how to read and process raster data with VC++.net and VC#.net and then display the image from the resulting pixel matrix by plotting pixels. The work was carried out in the context of the National Meteorological Center's development of Micaps3.0, a comprehensive meteorological information analysis and processing system.
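    As a rough modern analogue of the read-then-plot pixel-matrix workflow described (the paper performs the equivalent steps with the HDF library from VC++.net/VC#.net), assuming an HDF5 file and a hypothetical dataset path:

        import h5py
        import numpy as np

        # File name and dataset path are placeholders, not from the paper.
        with h5py.File('satellite_scene.h5', 'r') as f:
            raster = f['/image/raster'][...]   # read into a numpy pixel matrix

        # Stretch to 8-bit for display, as in the plot-by-pixel approach.
        lo, hi = float(raster.min()), float(raster.max())
        gray = ((raster - lo) / max(hi - lo, 1.0) * 255).astype(np.uint8)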
    2002,11(12):67-68, DOI:
    [Abstract] (2198) [PDF] (29850)
    Abstract:
    This paper introduces methods for real-time data acquisition with Visual C++ 6.0 under Windows 2000, a non-real-time operating system, using Advantech's PCL-818L data acquisition card. Building on the API functions in the PCL-818L DLLs, it presents three ways to achieve high-speed real-time acquisition and weighs their advantages and disadvantages.
    2001,10(11):8-9, DOI:
    [Abstract] (2919) [PDF] (24808)
    Abstract:
    This paper analyzes the problems and needs of electronic commerce and, taking the current state of e-commerce in Zhejiang Province as its background, proposes some development countermeasures for reference by today's e-commerce enterprises.
    2008,17(1):113-116, DOI:
    [Abstract] (4469) [PDF] (23228)
    Abstract:
    Sorting is an important operation in computer programming. This paper discusses an improvement of the quicksort algorithm in C that combines quicksort with direct insertion sort. When implementing large-scale internal sorting in C programs, the goal is a simple, effective, and fast algorithm. The paper traces the improvement of quicksort from its basic performance characteristics through successive algorithmic refinements, and through repeated analysis and experiment arrives at the best improved algorithm.
