Bibliography
Static analysis tools help to detect common programming errors but generate a large number of false positives. Moreover, when applied to evolving software systems, around 95% of the alarms generated on a version are repeated, i.e., they were also generated on the previous version. Version-aware static analysis techniques (VSATs) have been proposed to suppress repeated alarms that are not impacted by the code changes between the two versions. The alarms reported by VSATs after this suppression, called delta alarms, still constitute 63% of the tool-generated alarms. We observe that delta alarms can be further postprocessed using their corresponding code changes: the code changes due to which VSATs identify them as delta alarms. However, none of the existing VSATs or alarm postprocessing techniques postprocesses delta alarms using these corresponding code changes. Based on this observation, we use the code changes to classify delta alarms into six classes with different priorities assigned to them. The priorities are assigned based on the type of code changes and their likelihood of actually impacting the delta alarms. The resulting ranking of alarms, obtained by prioritizing the classes, can help suppress lower-ranked alarms when resources to inspect all the tool-generated alarms are limited. We performed an empirical evaluation using 9789 alarms generated on 59 versions of seven open-source C applications. The results indicate that the proposed classification and ranking of delta alarms help to identify, on average, 53% of delta alarms as more likely to be false positives than the others.
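To illustrate the idea, here is a minimal sketch of class-based ranking of delta alarms in Python. The change-type taxonomy and priority values below are hypothetical placeholders of my own, not the six classes or priorities from the paper.

    from dataclasses import dataclass

    # Hypothetical priorities (lower = inspect earlier); the paper's actual
    # six classes and their priorities are not reproduced here.
    CHANGE_TYPE_PRIORITY = {
        "data_flow_change": 1,     # the change alters values flowing into the alarm
        "control_flow_change": 2,  # the change alters reachability of the alarm
        "declaration_change": 3,   # type/declaration edits near the alarm
        "unrelated_change": 4,     # the change is unlikely to impact the alarm
    }

    @dataclass
    class DeltaAlarm:
        alarm_id: str
        file: str
        line: int
        change_type: str  # type of the code change that made this a delta alarm

    def rank_delta_alarms(alarms):
        """Order delta alarms so those whose corresponding code change is more
        likely to actually impact them are inspected first."""
        return sorted(alarms, key=lambda a: CHANGE_TYPE_PRIORITY.get(a.change_type, 99))

    alarms = [
        DeltaAlarm("A1", "parser.c", 120, "unrelated_change"),
        DeltaAlarm("A2", "lexer.c", 88, "data_flow_change"),
    ]
    for a in rank_delta_alarms(alarms):
        print(a.alarm_id, a.change_type)  # A2 first, A1 last

Under limited inspection budgets, alarms at the tail of such a ranking are the candidates for suppression.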
Code-graph-based software defect prediction methods have become a research focus in the software defect prediction (SDP) field. Among them, the Code Property Graph is used as a data representation for code defects because of its ability to characterize the structural features and dependencies of defective code. However, owing to the coarse granularity of the Code Property Graph, redundant information unrelated to defects is often attached to the characterization of software defects. How to locate software defects at a finer granularity within the Code Property Graph therefore remains an open problem. Static analysis is a technique for identifying software defects using predefined defect rules, and there are many proven static analysis tools in industry. In this paper, we propose a method for locating specific types of defects in the Code Property Graph based on the results of a static analysis tool. Experiments show that this location method can effectively predict the location of specific defect types in real software programs.
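As a rough sketch of the location step, the snippet below matches a static analysis warning's file and line against node attributes of a code property graph, assuming the graph is available as a networkx DiGraph whose nodes carry "file" and "line" attributes; a real CPG schema (e.g., Joern's) is considerably richer than this toy.

    import networkx as nx

    def locate_defect_nodes(cpg, warning):
        """Return CPG nodes matching a warning's location, narrowing the
        defect to a finer granularity than the whole graph."""
        return [
            node for node, attrs in cpg.nodes(data=True)
            if attrs.get("file") == warning["file"]
            and attrs.get("line") == warning["line"]
        ]

    # Toy graph: a local buffer flowing into a strcpy call.
    cpg = nx.DiGraph()
    cpg.add_node("decl:buf", file="util.c", line=40, kind="LOCAL")
    cpg.add_node("call:strcpy", file="util.c", line=42, kind="CALL")
    cpg.add_edge("decl:buf", "call:strcpy", label="REACHES")

    warning = {"file": "util.c", "line": 42, "rule": "BUFFER_OVERFLOW"}
    print(locate_defect_nodes(cpg, warning))  # ['call:strcpy']

The matched nodes (and their graph neighborhoods) then serve as the defect-relevant subgraph instead of the whole program's graph.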
Static analyzers have become increasingly popular both as developer tools and as subjects of empirical studies. Whereas static analysis tools exist for disparate programming languages, the bulk of the empirical research has focused on the popular Java programming language. In this paper, we investigate to what extent some known results about using static analyzers for Java change when considering C#, another popular object-oriented language. To this end, we combine two replications of previous Java studies. First, we study which static analysis tools are most widely used among C# developers, and which warnings are most commonly reported by these tools on open-source C# projects. Second, we develop and empirically evaluate EagleRepair, a technique to automatically fix code in response to static analysis warnings; this is a replication of our previous work for Java [20]. Our replication indicates, among other things, that 1) static code analysis is fairly popular among C# developers too; 2) ReSharper is the most widely used static analyzer for C#; 3) several static analysis rules are commonly violated in both Java and C# projects; and 4) automatically generating fixes to static code analysis warnings with good precision is feasible in C#. The EagleRepair tool developed for this research is available as open source.
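The following Python sketch shows the general shape of template-based fix generation, using a single hypothetical rule expressed as a textual rewrite; a real fixer such as EagleRepair manipulates program syntax rather than raw text, so this is only an illustration of the idea.

    import re

    # Hypothetical fix template: rule ID -> (pattern, replacement).
    FIX_TEMPLATES = {
        "string-empty-check": (re.compile(r'(\w+)\s*==\s*""'),
                               r"string.IsNullOrEmpty(\1)"),
    }

    def apply_fix(source_lines, warning):
        """Rewrite the flagged line if a fix template exists for the rule."""
        pattern, repl = FIX_TEMPLATES.get(warning["rule"], (None, None))
        if pattern is None:
            return source_lines  # no automated fix for this rule
        fixed = list(source_lines)
        idx = warning["line"] - 1
        fixed[idx] = pattern.sub(repl, fixed[idx])
        return fixed

    code = ['if (name == "") {', '    return;', '}']
    print(apply_fix(code, {"rule": "string-empty-check", "line": 1})[0])
    # if (string.IsNullOrEmpty(name)) {

In practice the precision claim hinges on fixes being behavior-preserving, which is why syntax-tree-level transformations are preferred over textual rewrites like this one.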
The complexity and scale of modern software programs often lead to overlooked programming errors and security vulnerabilities. Developers often rely on automated tools, such as static analysis tools, to look for bugs and vulnerabilities. Static analysis tools are widely used because they can understand nontrivial program behaviors, scale to millions of lines of code, and detect subtle bugs. However, they are known to generate an excess of false alarms, which hinders their adoption: it is counterproductive for developers to go through a long list of reported issues only to find a few true positives. One proposed way to suppress false positives is to use machine learning to identify them. However, training machine learning models requires good-quality labeled datasets. For this purpose, we developed D2A [3], a differential-analysis-based approach that uses the commit history of a code repository to create a labeled dataset of Infer [2] static analysis output.
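A minimal sketch of the differential-labeling idea follows, assuming each Infer issue can be keyed by a stable fingerprint (here, bug type plus enclosing procedure, an assumption of this sketch); D2A's actual pipeline selects bug-fixing commits and matches issues across versions far more carefully.

    def fingerprint(issue):
        # Hypothetical stable key for matching issues across versions.
        return (issue["bug_type"], issue["procedure"])

    def label_issues(before_issues, after_issues):
        """Issues reported before a bug-fixing commit but gone after it are
        labeled likely true positives (1); issues persisting across the fix
        are labeled likely false positives (0)."""
        after_keys = {fingerprint(i) for i in after_issues}
        return [
            {**i, "label": 0 if fingerprint(i) in after_keys else 1}
            for i in before_issues
        ]

    before = [
        {"bug_type": "NULL_DEREFERENCE", "procedure": "parse_header"},
        {"bug_type": "BUFFER_OVERRUN", "procedure": "copy_name"},
    ]
    after = [{"bug_type": "BUFFER_OVERRUN", "procedure": "copy_name"}]
    for issue in label_issues(before, after):
        print(issue["bug_type"], issue["label"])
    # NULL_DEREFERENCE 1
    # BUFFER_OVERRUN 0

The intuition is that a bug-fixing commit tends to make true alarms disappear, while false alarms survive the fix, yielding labels without manual triage.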
Objective measures are ubiquitous in the formulation, design, and implementation of deep space missions. Tour durations, flyby altitudes, propellant budgets, power consumption, and other metrics are essential to developing and managing NASA missions. But beyond the simple metrics of cost and workforce, it has been difficult to identify objective, quantitative measures that assist in evaluating choices made during the formulation or implementation phases in terms of their impact on flight operations. As part of the development of the Europa Clipper mission system, a set of operations metrics has been defined, along with the necessary design information and software tooling to calculate them. We have applied these methods and metrics to help assess the impact on the flight team of the six options for the Clipper tour currently being vetted for selection in the fall of 2021. To generate these metrics, the Clipper MOS team first designed the set of essential processes by which flight operations will be conducted, using a standard approach and template to identify (among other aspects) timelines for each process, along with their time constraints (e.g., uplinks for sequence execution). Each of the resulting 50 processes is documented in a common format and concurred on by stakeholders. Process timelines were converted into generic schedules and workforce-loaded using COTS scheduling software, based on the inputs of the process authors and domain experts. Custom code was written to create an operations schedule for a specific portion of Clipper's prime mission, with instances of a given process scheduled based on specific timing rules (e.g., process X starts once per week on Thursdays) or relative to mission events (e.g., the sequence generation process begins on a Monday, at least three weeks before each Europa closest approach). Over a 5-month period, and for each of the six Clipper candidate tours, the result was a 20,000+ line, workforce-loaded schedule that documents all of the process-driven work effort at the level of individual roles, along with a significant portion of the level-of-effort work. Post-processing code calculated the absolute and relative number of work hours during a nominal 5-day / 40-hour work week, the work effort during 2nd and 3rd shifts, and 1st-shift work on weekends. The resulting schedules and shift tables were used to generate objective measures that relate to both human factors and operational risk, and they showed that Clipper tours which utilize 6:1 resonant (21.25-day) orbits instead of 4:1 resonant (14.17-day) orbits during the first dozen or so Europa flybys are advantageous to flight operations. A similar approach can be extended to assist missions in more objective assessments of a number of mission issues and trades, including tour selection and spacecraft design for operability.
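As a rough sketch of the shift-accounting step, the snippet below buckets scheduled work hours into a nominal weekday first shift, weekend first shift, and 2nd/3rd shift; the shift boundaries (08:00-17:00) are assumptions of this sketch, not the Clipper team's actual definitions.

    from collections import Counter
    from datetime import datetime, timedelta

    def classify_hour(ts):
        """Bucket one work hour by shift, assuming 1st shift = 08:00-17:00."""
        first_shift = 8 <= ts.hour < 17
        weekday = ts.weekday() < 5  # Mon-Fri
        if first_shift and weekday:
            return "weekday_1st_shift"
        if first_shift:
            return "weekend_1st_shift"
        return "2nd_3rd_shift"

    def shift_table(work_hours):
        return Counter(classify_hour(ts) for ts in work_hours)

    # Example: a task running 30 consecutive hours from Friday 09:00.
    start = datetime(2021, 10, 1, 9)  # 2021-10-01 is a Friday
    hours = [start + timedelta(hours=h) for h in range(30)]
    print(shift_table(hours))
    # Counter({'2nd_3rd_shift': 15, 'weekday_1st_shift': 8, 'weekend_1st_shift': 7})

Aggregating such tables across a full candidate-tour schedule is what lets off-shift and weekend work serve as a proxy for human-factors load and operational risk.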