Abstract: Fuzzing is highly effective at detecting vulnerabilities in real-world programs. In recent years, researchers have paid considerable attention to fuzzing improvement techniques, and a large number of optimized fuzzers have been proposed. These fuzzers usually combine multiple improvement techniques to achieve better performance; however, a systematic evaluation of individual fuzzing improvement techniques has yet to be conducted. In this study, we establish an evaluation system for such techniques based on four metrics and use it to evaluate individual fuzzing improvement algorithms integrated into recently proposed advanced fuzzers. We conduct multiple groups of experiments to evaluate the effectiveness of each individual technique within each category of improvement techniques, and we analyze the experimental data comprehensively against the actual algorithm designs and code implementations. We hope this evaluation of individual fuzzing improvement techniques will help researchers develop more effective fuzzers in the future.