Biblio

Filters: Keyword is static code analysis
2019-02-22
Querel, Louis-Philippe, Rigby, Peter C..  2018.  WarningsGuru: Integrating Statistical Bug Models with Static Analysis to Provide Timely and Specific Bug Warnings. Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. :892–895.

The detection of bugs in software systems has been divided into two research areas: static code analysis and statistical modeling of historical data. Static analysis indicates precise problems on line numbers but has the disadvantage of suggesting many warnings, which are often false positives. In contrast, statistical models use the history of the system to suggest which files or commits are likely to contain bugs. These coarse-grained predictions do not indicate to the developer the precise reasons for the bug prediction. We combine static analysis with statistical bug models to limit the number of warnings and provide specific warning information at the line level. Previous research was able to process only a limited number of releases; our tool, WarningsGuru, can analyze all commits in a source code repository, and we have currently processed thousands of commits and warnings. Since we process every commit, we present developers with more precise information about when a warning is introduced, allowing us to show recent warnings that are introduced in statistically risky commits. Results from two OSS projects show that CommitGuru's statistical model flags 25% and 29% of all commits as risky. When we combine this with static analysis in WarningsGuru, the number of risky commits with warnings is 20% for both projects, and the number of commits with new warnings is only 3% and 6%. We can drastically reduce the number of commits and warnings developers have to examine. The tool, source code, and demo are available at https://github.com/louisq/warningsguru.
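
As an illustration of the filtering step the abstract describes, the sketch below intersects commits flagged as risky by a statistical model with the warnings those commits newly introduce. It is a minimal stand-in, not WarningsGuru itself; the commit hashes, warning records, and risk flags are hypothetical.

```python
# Sketch (not WarningsGuru itself): intersect commits flagged as risky by a
# statistical model with warnings newly introduced by each commit, so that
# developers only review line-level warnings in statistically risky commits.
# The data structures below are hypothetical stand-ins.

risky_commits = {"a1b2c3", "d4e5f6"}          # flagged by a CommitGuru-style model

# commit -> new warnings introduced by that commit (absent from the parent commit)
warnings_by_commit = {
    "a1b2c3": [("src/io.c", 42, "possible null dereference")],
    "99ffee": [("src/db.c", 10, "unused variable")],
    "d4e5f6": [],
}

def timely_specific_warnings(risky, warnings):
    """Keep only warnings introduced by commits the statistical model marked risky."""
    report = []
    for commit, new_warnings in warnings.items():
        if commit in risky:
            for path, line, message in new_warnings:
                report.append((commit, path, line, message))
    return report

for commit, path, line, message in timely_specific_warnings(risky_commits, warnings_by_commit):
    print(f"{commit} {path}:{line} {message}")
```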

Ludwig, Jeremy, Xu, Steven, Webber, Frederick.  2018.  Static Software Metrics for Reliability and Maintainability. Proceedings of the 2018 International Conference on Technical Debt. :53–54.

This paper identifies a small, essential set of static software code metrics linked to the software product quality characteristics of reliability and maintainability and to the most commonly identified sources of technical debt. An open-source plug-in is created for the Understand code analysis tool that calculates and visualizes these metrics. The plug-in was developed as a first step in an ongoing project aimed at applying case-based reasoning to the issue of software product quality.
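
To make the idea of a small static metric set concrete, here is a minimal sketch that computes a few widely used metrics (logical lines of code, an approximate cyclomatic complexity, and comment ratio) with Python's standard ast module. It is not the Understand plug-in from the paper, and the metric set shown is illustrative rather than the authors' selection.

```python
# Sketch: computing a few static metrics commonly associated with reliability and
# maintainability. This is an illustration with Python's standard "ast" module,
# not the Understand plug-in described in the paper.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def simple_metrics(source: str) -> dict:
    tree = ast.parse(source)
    lines = source.splitlines()
    code_lines = [l for l in lines if l.strip() and not l.strip().startswith("#")]
    comment_lines = [l for l in lines if l.strip().startswith("#")]
    # Approximate cyclomatic complexity: 1 + number of decision points.
    complexity = 1 + sum(isinstance(n, DECISION_NODES) for n in ast.walk(tree))
    return {
        "loc": len(code_lines),
        "comment_ratio": len(comment_lines) / max(len(lines), 1),
        "approx_cyclomatic_complexity": complexity,
    }

example = """
def read(path):
    # open and parse a file
    with open(path) as fh:
        for line in fh:
            if line.strip():
                yield line
"""
print(simple_metrics(example))
```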

Ferenc, Rudolf, Tóth, Zoltán, Ladányi, Gergely, Siket, István, Gyimóthy, Tibor.  2018.  A Public Unified Bug Dataset for Java. Proceedings of the 14th International Conference on Predictive Models and Data Analytics in Software Engineering. :12–21.

Background: Bug datasets have been created and used by many researchers to build bug prediction models. Aims: In this work we collected existing public bug datasets and unified their contents. Method: We considered 5 public datasets which adhered to all of our criteria. We also downloaded the corresponding source code for each system in the datasets and performed source code analysis on them to obtain a common set of source code metrics. This way we produced a unified bug dataset at class and file level that is suitable for further research (e.g. to be used in building new bug prediction models). Furthermore, we compared the metric definitions and values of the different bug datasets. Results: We found that (i) the same metric abbreviation can have different definitions, or metrics calculated in the same way can have different names, (ii) in some cases different tools give different values even if the metric definitions coincide, because (iii) one tool works on source code while the other calculates metrics on bytecode, or (iv) in several cases the downloaded source code contained additional files, which influenced the related metric values significantly. Conclusions: Despite these imprecisions, we think that having a common metric set can help in building better bug prediction models and deducing more general conclusions. We made the unified dataset publicly available for everyone. By using a public dataset as input for different bug prediction related investigations, researchers can make their studies reproducible and thus able to be validated and verified.
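
The central unification problem the abstract mentions, that the same metric appears under different names and definitions across datasets, can be sketched as a simple schema-mapping step. The dataset names, column names, and rows below are hypothetical placeholders, not the five datasets the authors actually merged.

```python
# Sketch of the unification step: different datasets use different names for the same
# metric, so map each dataset's columns onto a common metric vocabulary before merging.
# Dataset names, column names, and rows are hypothetical examples.

COMMON_SCHEMA = {
    # common name: {dataset name: column name in that dataset}
    "LOC": {"dataset_a": "LinesOfCode", "dataset_b": "loc"},
    "WMC": {"dataset_a": "WeightedMethods", "dataset_b": "wmc"},
    "bug": {"dataset_a": "bugs", "dataset_b": "is_buggy"},
}

def unify(dataset_name: str, rows: list[dict]) -> list[dict]:
    """Rename each row's metric columns to the common names."""
    unified = []
    for row in rows:
        out = {"dataset": dataset_name, "class": row["class"]}
        for common, aliases in COMMON_SCHEMA.items():
            out[common] = row.get(aliases[dataset_name])
        unified.append(out)
    return unified

rows_a = [{"class": "org.example.Foo", "LinesOfCode": 120, "WeightedMethods": 14, "bugs": 1}]
rows_b = [{"class": "org.example.Bar", "loc": 75, "wmc": 6, "is_buggy": 0}]
print(unify("dataset_a", rows_a) + unify("dataset_b", rows_b))
```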

Gharibi, Gharib, Tripathi, Rashmi, Lee, Yugyung.  2018.  Code2Graph: Automatic Generation of Static Call Graphs for Python Source Code. Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. :880–883.

A static call graph is an essential prerequisite for most interprocedural analyses and software comprehension tools. However, there is a lack of software tools that can automatically analyze Python source code and construct its static call graph. In this paper, we introduce a prototype Python tool, named code2graph, which automates the tasks of (1) analyzing the Python source code and extracting its structure, (2) constructing static call graphs from the source code, and (3) generating a similarity matrix of all possible execution paths in the system. Our goal is twofold: first, assist the developers in understanding the overall structure of the system; second, provide a stepping stone for further research that can utilize the tool in software searching and similarity detection applications. For example, clustering the execution paths into a logical workflow of the system could be applied to automate specific software tasks. Code2graph has been successfully used to generate static call graphs and similarity matrices of the paths for three popular open-source Deep Learning projects (TensorFlow, Keras, PyTorch). A tool demo is available at https://youtu.be/ecctePpcAKU.
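
The core extraction step of such a tool can be sketched with Python's standard ast module: walk the syntax tree, remember which function definition encloses each call site, and emit the resulting edges. This is a minimal illustration, not code2graph itself, which additionally builds similarity matrices over execution paths.

```python
# Sketch of building a static call graph for Python source: record which function
# definitions contain which call sites and return the edge list.
import ast
from collections import defaultdict

def static_call_graph(source: str) -> dict:
    tree = ast.parse(source)
    graph = defaultdict(set)

    class Visitor(ast.NodeVisitor):
        def __init__(self):
            self.current = "<module>"

        def visit_FunctionDef(self, node):
            previous, self.current = self.current, node.name
            self.generic_visit(node)
            self.current = previous

        def visit_Call(self, node):
            if isinstance(node.func, ast.Name):          # foo(...)
                graph[self.current].add(node.func.id)
            elif isinstance(node.func, ast.Attribute):   # obj.foo(...)
                graph[self.current].add(node.func.attr)
            self.generic_visit(node)

    Visitor().visit(tree)
    return dict(graph)

example = """
def load(path):
    return parse(open(path).read())

def main():
    data = load("model.cfg")
    train(data)
"""
print(static_call_graph(example))
```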

2019-01-21
Kronjee, Jorrit, Hommersom, Arjen, Vranken, Harald.  2018.  Discovering Software Vulnerabilities Using Data-flow Analysis and Machine Learning. Proceedings of the 13th International Conference on Availability, Reliability and Security. :6:1–6:10.

We present a novel method for static analysis in which we combine data-flow analysis with machine learning to detect SQL injection (SQLi) and Cross-Site Scripting (XSS) vulnerabilities in PHP applications. We assembled a dataset from the National Vulnerability Database and the SAMATE project, containing vulnerable PHP code samples and their patched versions in which the vulnerability is solved. We extracted features from the code samples by applying data-flow analysis techniques, including reaching definitions analysis, taint analysis, and reaching constants analysis. We used these features in machine learning to train various probabilistic classifiers. To demonstrate the effectiveness of our approach, we built a tool called WIRECAML, and compared our tool to other tools for vulnerability detection in PHP code. Our tool performed best for detecting both SQLi and XSS vulnerabilities. We also tried our approach on a number of open-source software applications, and found a previously unknown vulnerability in a photo-sharing web application.
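
The overall pipeline shape, data-flow-derived features feeding a probabilistic classifier, can be sketched as follows. The feature vectors and labels are toy placeholders rather than the reaching-definitions and taint features (or the NVD/SAMATE samples) used by the authors, and the sketch assumes scikit-learn is installed.

```python
# Sketch of the pipeline shape: represent each code sample by data-flow-derived
# features and train a probabilistic classifier on labelled vulnerable/patched samples.
# Feature vectors and labels are toy placeholders, not the paper's actual features.
from sklearn.naive_bayes import GaussianNB

# Hypothetical features per sample:
# [tainted inputs reaching a SQL sink, sanitizer calls on that path, echoes of tainted data]
X_train = [
    [2, 0, 1],   # vulnerable
    [1, 1, 0],   # patched
    [3, 0, 2],   # vulnerable
    [0, 2, 0],   # patched
]
y_train = [1, 0, 1, 0]  # 1 = vulnerable, 0 = not vulnerable

classifier = GaussianNB().fit(X_train, y_train)

candidate = [[2, 0, 0]]  # features extracted from a new PHP sample
print("P(vulnerable) =", classifier.predict_proba(candidate)[0][1])
```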

2018-06-07
Reynolds, Z. P., Jayanth, A. B., Koc, U., Porter, A. A., Raje, R. R., Hill, J. H..  2017.  Identifying and Documenting False Positive Patterns Generated by Static Code Analysis Tools. 2017 IEEE/ACM 4th International Workshop on Software Engineering Research and Industrial Practice (SER&IP). :55–61.

This paper presents our results from identifying and documenting false positives generated by static code analysis tools. By false positives, we mean cases where a static code analysis tool generates a warning message, but the warning message is not really an error. The goal of our study is to understand the different kinds of false positives generated so we can (1) automatically determine if an error message is indeed a true positive, and (2) reduce the number of false positives developers and testers must triage. We have used two open-source tools and one commercial tool in our study. The results of our study have led to 14 core false positive patterns, some of which we have confirmed with static code analysis tool developers.

Novikov, A. S., Ivutin, A. N., Troshina, A. G., Vasiliev, S. N..  2017.  The approach to finding errors in program code based on static analysis methodology. 2017 6th Mediterranean Conference on Embedded Computing (MECO). :1–4.

The article considers an approach to static analysis of program code and the general principles of static analyzer operation. The authors identify the most important syntactic and semantic information in programs that can be used to find errors in the source code. A general methodology for the development of diagnostic rules is proposed, which will improve the efficiency of static code analyzers.

Obster, M., Kowalewski, S..  2017.  A live static code analysis architecture for PLC software. 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). :1–4.

Static code analysis is a convenient technique to support the development of software. Without a prior test setup, information about later runtime behavior can be inferred and errors in the code can be found before using a regular compiler. Solutions for applying static code analysis to PLC software following the IEC 61131-3 standard already exist, but using these separate tools usually creates a gap in the development process. In this paper we introduce an architecture for using static analysis directly in a development environment and giving instant feedback to developers while they are still editing the PLC software.
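
The "live" feedback loop argued for here can be sketched as a simple polling watcher that re-runs a lightweight check whenever the edited file changes. The placeholder check and the file name are hypothetical; the paper targets IEC 61131-3 PLC code inside a development environment rather than a generic Python script.

```python
# Sketch of a live-analysis loop: re-run a lightweight static check whenever the file
# being edited changes, instead of waiting for a separate analysis run.
import os
import time

def placeholder_check(text: str) -> list[str]:
    """Toy check: flag TODO markers and suspiciously long lines."""
    findings = []
    for number, line in enumerate(text.splitlines(), start=1):
        if "TODO" in line:
            findings.append(f"line {number}: unresolved TODO")
        if len(line) > 120:
            findings.append(f"line {number}: overly long line")
    return findings

def watch(path: str, interval: float = 0.5, iterations: int = 20):
    last_mtime = None
    for _ in range(iterations):                 # bounded loop so the sketch terminates
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:                 # file was saved by the editor
            last_mtime = mtime
            with open(path, encoding="utf-8") as fh:
                for finding in placeholder_check(fh.read()):
                    print(finding)
        time.sleep(interval)

# watch("program.st")   # e.g. a structured-text source file being edited
```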

Kübler, Florian, Müller, Patrick, Hermann, Ben.  2017.  SootKeeper: Runtime Reusability for Modular Static Analysis. Proceedings of the 6th ACM SIGPLAN International Workshop on State Of the Art in Program Analysis. :19–24.

In order to achieve higher reusability and testability, static analyses are increasingly being built as modular pipelines of analysis components. However, to build, debug, test, and evaluate these components, the complete pipeline has to be executed every time. This process recomputes intermediate results which have already been computed in a previous run but are lost because the preceding process ended and removed them from memory. We propose to leverage runtime reusability for static analysis pipelines and introduce SootKeeper, a framework to modularize static analyses into OSGi (Open Service Gateway initiative) bundles, which takes care of the automatic caching of intermediate results. Little to no change to the original analysis is necessary to use SootKeeper, while speeding up the execution of code-build-debug cycles or evaluation pipelines significantly.
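
The caching idea can be sketched independently of OSGi: key each pipeline stage's output by a fingerprint of its input and reuse it on the next run. The parse and analyse stages below are hypothetical stand-ins, not SootKeeper's actual bundles.

```python
# Sketch: cache the intermediate results of each pipeline stage so that repeated
# code-build-debug runs do not recompute them. SootKeeper does this with OSGi bundles
# on the JVM; this is just a hash-keyed in-memory cache around hypothetical stages.
import hashlib

class CachingPipeline:
    def __init__(self, *stages):
        self.stages = stages
        self.cache = {}   # (stage name, input fingerprint) -> stage output

    def run(self, program_text: str):
        value = program_text
        for stage in self.stages:
            key = (stage.__name__, hashlib.sha256(repr(value).encode()).hexdigest())
            if key not in self.cache:                 # only recompute on a cache miss
                self.cache[key] = stage(value)
            value = self.cache[key]
        return value

def parse(text):        # hypothetical stage: tokenise the input
    return text.split()

def analyse(tokens):    # hypothetical stage: count token occurrences
    counts = {}
    for token in tokens:
        counts[token] = counts.get(token, 0) + 1
    return counts

pipeline = CachingPipeline(parse, analyse)
print(pipeline.run("x = x + y"))
print(pipeline.run("x = x + y"))   # second run is served from the cache
```
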
Chistyakov, Alexander, Pripadchev, Artem, Radchenko, Irina.  2017.  On Development of a Framework for Massive Source Code Analysis Using Static Code Analyzers. Proceedings of the 13th Central & Eastern European Software Engineering Conference in Russia. :20:1–20:3.

The authors describe the architecture and implementation of an automated source code analysis system which uses pluggable static code analyzers. The paper presents a module for gathering and analyzing source code massively and in a detailed manner. The authors also compare existing static code analyzers for the Python programming language. A common format for storing the results of code analysis for subsequent processing is introduced. The authors also discuss methods for the statistical processing and visualization of raw analysis data.
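
A minimal sketch of the pluggable-analyzer idea follows: several analyzers run over the same file and emit findings in one common record format that can be stored and processed statistically later. The two toy checkers are stand-ins, not wrappers for real analyzers such as those compared in the paper.

```python
# Sketch: run several pluggable analyzers over the same file and normalise their
# findings into one common record format for later storage and statistics.
import json

def toy_tab_checker(path, text):
    return [{"tool": "tab-checker", "file": path, "line": i, "message": "tab character"}
            for i, line in enumerate(text.splitlines(), 1) if "\t" in line]

def toy_print_checker(path, text):
    return [{"tool": "print-checker", "file": path, "line": i, "message": "debug print"}
            for i, line in enumerate(text.splitlines(), 1) if "print(" in line]

ANALYZERS = [toy_tab_checker, toy_print_checker]   # plug in more analyzers here

def analyse_file(path, text):
    findings = []
    for analyzer in ANALYZERS:
        findings.extend(analyzer(path, text))      # every analyzer emits the same schema
    return findings

sample = 'def f():\n\tprint("debug")\n\treturn 1\n'
print(json.dumps(analyse_file("sample.py", sample), indent=2))
```
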
Tymchuk, Yuriy, Ghafari, Mohammad, Nierstrasz, Oscar.  2017.  Renraku: The One Static Analysis Model to Rule Them All. Proceedings of the 12th Edition of the International Workshop on Smalltalk Technologies. :13:1–13:10.

Most static analyzers are monolithic applications that define their own ways to analyze source code and present the results. Therefore, aggregating multiple static analyzers into a single tool or integrating a new analyzer into existing tools requires a significant amount of effort. Over the last few years, we cultivated Renraku, a static analysis model that acts as a mediator between the static analyzers and the tools that present the reports. When used by both analysis and tool developers, this single quality model can reduce the cost of both introducing a new type of analysis to existing tools and creating a tool that relies on existing analyzers.
von Hof, Vincent, Fögen, Konrad, Kuchen, Herbert.  2017.  Detecting Spring Configurations Errors. Proceedings of the Symposium on Applied Computing. :1505–1512.

Dependency injection frameworks such as the Spring framework rely on dynamic language features of Java. Errors arising from the improper usage of these features bypass the compile-time checks of the Java compiler. This paper discusses the application of static code analysis as a means to restore compile-time checking for Spring-related configuration errors. First, possible errors in the configuration of Spring are identified and classified. Attributed grammars are applied in order to formally detect the errors and a prototypical compiler extension is implemented based on Java's pluggable annotation processing API.
Koc, Ugur, Saadatpanah, Parsa, Foster, Jeffrey S., Porter, Adam A..  2017.  Learning a Classifier for False Positive Error Reports Emitted by Static Code Analysis Tools. Proceedings of the 1st ACM SIGPLAN International Workshop on Machine Learning and Programming Languages. :35–42.

The large scale and high complexity of modern software systems make perfectly precise static code analysis (SCA) infeasible. Therefore, SCA tools often over-approximate so as not to miss any real problems. This, however, comes at the expense of raising false alarms, which, in practice, reduces the usability of these tools. To partially address this problem, we propose a novel learning process whose goal is to discover program structures that cause a given SCA tool to emit false error reports, and then to use this information to predict whether a new error report is likely to be a false positive as well. To do this, we first preprocess code to isolate the locations that are related to the error report. Then, we apply machine learning techniques to the preprocessed code to discover correlations and to learn a classifier. We evaluated this approach in an initial case study of a widely-used SCA tool for Java. Our results showed that for our dataset we could accurately classify a large majority of false positive error reports. Moreover, we identified some common coding patterns that led to false positive errors. We believe that SCA developers may be able to redesign their methods to address these patterns and reduce false positive error reports.
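
The learning step can be sketched as follows: reduce each report to the code around the flagged location, turn it into simple lexical features, and fit a classifier on reports already labelled as true or false positives. The snippets and labels below are toy placeholders (the paper preprocesses Java code for a real SCA tool), and the sketch assumes scikit-learn is installed.

```python
# Sketch: turn the code around each error report into simple lexical features and train
# a classifier on reports already labelled as true or false positives. Snippets and
# labels are toy placeholders, not the paper's preprocessed Java programs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "if (x != null) { x.close(); }",              # report here was a false positive
    "x = null; x.close();",                       # report here was a true positive
    "try { x.close(); } catch (Exception e) {}",  # false positive
    "stream.read(buf); stream.read(buf);",        # true positive
]
labels = [0, 1, 0, 1]   # 1 = true positive, 0 = false positive

model = make_pipeline(CountVectorizer(token_pattern=r"[A-Za-z_]+|\S"), LogisticRegression())
model.fit(snippets, labels)

new_report = ["if (conn != null) { conn.close(); }"]
print("predicted false positive" if model.predict(new_report)[0] == 0 else "predicted true positive")
```
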
2017-05-17
Wang, Timothy E., Garoche, Pierre-Loïc, Roux, Pierre, Jobredeaux, Romain, Féron, Éric.  2016.  Formal Analysis of Robustness at Model and Code Level. Proceedings of the 19th International Conference on Hybrid Systems: Computation and Control. :125–134.

Robustness analyses play a major role in the synthesis and analysis of controllers. For control systems, robustness is a measure of the maximum tolerable model inaccuracies or perturbations that do not destabilize the system. Analyzing the robustness of a closed-loop system can be performed with multiple approaches: gain and phase margin computation for single-input single-output (SISO) linear systems, mu analysis, IQC computations, etc. However, none of these techniques consider the actual code in their analyses. The approach presented here relies on an invariant computation on the discrete system dynamics. Using semi-definite programming (SDP) solvers, a Lyapunov-based function is synthesized that captures the vector margins of the closed-loop linear system considered. This numerical invariant, expressed over the state variables of the system, is compatible with code analysis and enables its validation on the code artifact. This automatic analysis extends verification techniques focused on controller implementation, addressing validation of robustness at both model and code level. It has been implemented in a tool analyzing discrete SISO systems and generating over-approximations of phase and gain margins. The analysis will be integrated into our toolchain for the autocoding and formal analysis of Simulink and Lustre models.
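
The kind of invariant involved can be illustrated with a toy discrete system: for stable closed-loop dynamics x[k+1] = A x[k], a quadratic Lyapunov function V(x) = x' P x with A' P A - P = -Q certifies that V decreases at every step of the code-level update. The matrix A below is a made-up example, and the sketch uses SciPy's Lyapunov solver rather than the SDP-based margin synthesis described in the paper.

```python
# Sketch of a quadratic invariant for discrete closed-loop dynamics x[k+1] = A x[k]:
# solve A' P A - P = -Q for P and check that V(x) = x' P x decreases along each step,
# which is the kind of property a code-level check can then verify on the implementation.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])          # toy stable closed-loop dynamics
Q = np.eye(2)

# solve_discrete_lyapunov(A.T, Q) returns P such that A' P A - P + Q = 0.
P = solve_discrete_lyapunov(A.T, Q)

def V(x):
    return float(x @ P @ x)

x = np.array([1.0, -2.0])
for k in range(5):
    x_next = A @ x
    assert V(x_next) < V(x)          # the invariant decreases along the trajectory
    x = x_next
print("V decreases along the trajectory; P =\n", P)
```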

Maier, Petra R., Kleeberger, Veit, Mueller-Gritschneder, Daniel, Schlichtmann, Ulf.  2016.  Fault Injection at Host-compiled Level with Static Fault Set Reduction for SoC Firmware Robustness Testing. Proceedings of the Eleventh IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis. :18:1–18:10.

Decreasing hardware reliability makes robust firmware imperative for safety-critical applications. Hence, ensuring correct handling of errors in peripherals is a key objective during firmware design. To adequately support robustness considerations of firmware designers during implementation, an efficient qualitative fault injection method is required. This paper presents a high-speed fault injection technique based on host-compiled firmware simulation that is suitable to analyze the impact of transient faults on firmware behavior. Additionally, fault set reduction by static code analysis avoids unnecessary injection of masked and equivalent faults. Application of the proposed fault injection technique on an industrial safety-relevant automotive system-on-chip (SoC) firmware demonstrates at least three orders of magnitude speedup compared to instruction set level. In addition, a fault set reduction by 78% is achieved. While significantly reducing the required fault injection time, the presented techniques provide as accurate feedback to the designer as existing state-of-the-art approaches.
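
The static fault-set reduction can be sketched with a liveness-style argument: a transient fault injected into a register is masked if that register is overwritten before it is next read, so such injections can be skipped without losing information. The toy straight-line "firmware" below is a hypothetical stand-in for the host-compiled SoC simulation used in the paper.

```python
# Sketch of static fault-set reduction: skip injections whose faulty value is
# overwritten before it is ever read (masked faults). The "firmware" is a toy
# straight-line instruction list: (destination register, registers read).

firmware = [
    ("r1", []),            # r1 = load(...)
    ("r2", ["r1"]),        # r2 = f(r1)
    ("r1", []),            # r1 = load(...)   <- overwrites r1 without reading it first
    ("r3", ["r1", "r2"]),  # r3 = g(r1, r2)
]

def unmasked_faults(program):
    """Return (injection point, register) pairs whose faulty value can reach a later read."""
    faults = []
    for point in range(len(program)):
        for register in {dst for dst, _ in program[:point + 1]}:
            for dst, reads in program[point + 1:]:
                if register in reads:      # fault value is consumed: keep the injection
                    faults.append((point, register))
                    break
                if dst == register:        # overwritten first: fault is masked, skip it
                    break
    return faults

full_set = [(p, r) for p in range(len(firmware)) for r in {d for d, _ in firmware[:p + 1]}]
reduced = unmasked_faults(firmware)
print(f"full fault set: {len(full_set)}, after static reduction: {len(reduced)}")
```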

Ng, Nicholas, Yoshida, Nobuko.  2016.  Static Deadlock Detection for Concurrent Go by Global Session Graph Synthesis. Proceedings of the 25th International Conference on Compiler Construction. :174–184.

Go is a programming language developed at Google, with channel-based concurrent features based on CSP. Go can detect global communication deadlocks at runtime when all threads of execution are blocked, but deadlocks in other paths of execution could be undetected. We present a new static analyser for concurrent Go code to find potential communication errors such as communication mismatch and deadlocks at compile time. Our tool extracts the communication operations as session types, which are then converted into Communicating Finite State Machines (CFSMs). Finally, we apply a recent theoretical result on choreography synthesis to generate a global graph representing the overall communication pattern of a concurrent program. If the synthesis is successful, then the program is free from communication errors. We have implemented the technique in a tool, and applied it to analyse common Go concurrency patterns and an open source application with over 700 lines of code.
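
The flavour of the final check can be sketched by hand: once each goroutine's channel behaviour is modelled as a communicating finite state machine, exploring the synchronous product reveals reachable states where no send/receive pair can fire even though not every machine has finished. The two machines below are written by hand as a toy mismatch example; the tool described in the paper extracts them from Go source via session types and applies choreography synthesis rather than brute-force exploration.

```python
# Sketch: detect communication deadlocks by exploring the synchronous product of two
# hand-written communicating finite state machines. A transition is
# (direction, channel, next state), with "!" meaning send and "?" meaning receive.
from itertools import product

MACHINES = {
    "main":   {0: [("!", "work", 1)], 1: [("?", "done", 2)], 2: []},
    "worker": {0: [("?", "work", 1)], 1: [("?", "extra", 2)], 2: [("!", "done", 3)], 3: []},
}
FINAL = {"main": 2, "worker": 3}

def deadlocks():
    names = list(MACHINES)
    start = tuple(0 for _ in names)
    seen, stack, found = {start}, [start], []
    while stack:
        state = stack.pop()
        moves = []
        for i, j in product(range(len(names)), repeat=2):
            if i == j:
                continue
            for (d1, ch1, n1) in MACHINES[names[i]][state[i]]:
                for (d2, ch2, n2) in MACHINES[names[j]][state[j]]:
                    if d1 == "!" and d2 == "?" and ch1 == ch2:   # matching send/receive
                        nxt = list(state)
                        nxt[i], nxt[j] = n1, n2
                        moves.append(tuple(nxt))
        if not moves and any(state[k] != FINAL[names[k]] for k in range(len(names))):
            found.append(dict(zip(names, state)))                # stuck, but not finished
        for nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return found

# The worker waits for an "extra" message that main never sends, so the product gets stuck.
print("potential deadlocks:", deadlocks())
```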

Ostberg, Jan-Peter, Wagner, Stefan, Weilemann, Erica.  2016.  Does Personality Influence the Usage of Static Analysis Tools?: An Explorative Experiment. Proceedings of the 9th International Workshop on Cooperative and Human Aspects of Software Engineering. :75–81.

There are many techniques to improve software quality. One is using automatic static analysis tools. We have observed, however, that despite the low-cost help they offer, these tools are underused and often discourage beginners. There is evidence that personality traits influence the perceived usability of software. Thus, to support beginners better, we need to understand how the workflow of people with different prevalent personality traits using these tools varies. For this purpose, we observed users' solution strategies and correlated them with their prevalent personality traits in an exploratory study with student participants within a controlled experiment. We gathered data by screen capturing and chat protocols as well as a Big Five personality traits test. We found strong correlations between particular personality traits and different strategies for removing the findings of static code analysis, as well as between personality and tool utilization. Based on that, we offer take-away improvement suggestions. Our results imply that developers should be aware of these solution strategies and use this information to build tools that are more appealing to people with different prevalent personality traits.

Nicolay, Jens, Spruyt, Valentijn, De Roover, Coen.  2016.  Static Detection of User-specified Security Vulnerabilities in Client-side JavaScript. Proceedings of the 2016 ACM Workshop on Programming Languages and Analysis for Security. :3–13.

Program defects tend to surface late in the development of programs, and they are hard to detect. Security vulnerabilities are particularly important defects to detect. They may cause sensitive information to be leaked or the system on which the program is executed to be compromised. Existing approaches that use static analysis to detect security vulnerabilities in source code are often limited to a predetermined set of encoded security vulnerabilities. Although these approaches support a decent number of vulnerabilities by default, they cannot be configured for detecting vulnerabilities that are specific to the application domain of the analyzed program. In this paper we present JS-QL, a framework for detecting user-specified security vulnerabilities in JavaScript applications statically. The framework makes use of an internal domain-specific query language hosted by JavaScript. JS-QL queries are based on regular path expressions, enabling users to express queries over a flow graph in a declarative way. The flow graph represents the run-time behavior of a program and is computed by a static analysis. We evaluate JS-QL by expressing 9 security vulnerabilities supported by existing work and comparing the resulting specifications. We conclude that the combination of static analysis and regular path expressions lends itself well to the detection of user-specified security vulnerabilities.
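
The regular-path-expression idea can be sketched as a search over a small flow graph for paths of the shape source, then any number of non-sanitizing nodes, then sink. The graph and node labels are hypothetical; JS-QL itself is a query DSL embedded in JavaScript over a flow graph produced by static analysis.

```python
# Sketch of a JS-QL style query: in a flow graph computed by static analysis, report
# paths matching "source . (not sanitize)* . sink", i.e. tainted data reaching a sink
# without passing a sanitizer. The graph and labels are hypothetical.

# node -> (label, successors); the toy graph is acyclic, so no visited-set is needed
FLOW_GRAPH = {
    "n1": ("source",   ["n2", "n3"]),
    "n2": ("sanitize", ["n4"]),
    "n3": ("assign",   ["n4"]),
    "n4": ("sink",     []),
}

def unsanitized_paths(graph):
    """Depth-first search for source -> ... -> sink paths avoiding sanitizer nodes."""
    results = []

    def walk(node, path):
        label, successors = graph[node]
        if label == "sanitize":
            return                           # the path is neutralised: stop exploring it
        path = path + [node]
        if label == "sink":
            results.append(path)
        for successor in successors:
            walk(successor, path)

    for node, (label, _) in graph.items():
        if label == "source":
            walk(node, [])
    return results

print(unsanitized_paths(FLOW_GRAPH))
```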

Su, Fang-Hsiang, Bell, Jonathan, Harvey, Kenneth, Sethumadhavan, Simha, Kaiser, Gail, Jebara, Tony.  2016.  Code Relatives: Detecting Similarly Behaving Software. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :702–714.

Detecting “similar code” is useful for many software engineering tasks. Current tools can help detect code with statically similar syntactic and/or semantic features (code clones) and with dynamically similar functional input/output (simions). Unfortunately, some code fragments that behave similarly at the finer granularity of their execution traces may be ignored. In this paper, we propose the term “code relatives” to refer to code with similar execution behavior. We define code relatives and then present DyCLINK, our approach to detecting code relatives within and across codebases. DyCLINK records instruction-level traces from sample executions, organizes the traces into instruction-level dynamic dependence graphs, and employs our specialized subgraph matching algorithm to efficiently compare the executions of candidate code relatives. In our experiments, DyCLINK analyzed 422+ million prospective subgraph matches in only 43 minutes. We compared DyCLINK to one static code clone detector from the community and to our implementation of a dynamic simion detector. The results show that DyCLINK effectively detects code relatives with a reasonable analysis time.

Smith, Justin.  2016.  Identifying Successful Strategies for Resolving Static Analysis Notifications. Proceedings of the 38th International Conference on Software Engineering Companion. :662–664.

Although static analysis tools detect potential code defects early in the development process, they do not fully support developers in resolving those defects. To accurately and efficiently resolve defects, developers must orchestrate several complex tasks, such as determining whether the defect is a false positive and updating the source code without introducing new defects. Without good defect resolution strategies developers may resolve defects erroneously or inefficiently. In this work, I perform a preliminary analysis of the successful and unsuccessful strategies developers use to resolve defects. Based on the successful strategies identified, I then outline a tool to support developers throughout the defect resolution process.

Legunsen, Owolabi, Hariri, Farah, Shi, August, Lu, Yafeng, Zhang, Lingming, Marinov, Darko.  2016.  An Extensive Study of Static Regression Test Selection in Modern Software Evolution. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :583–594.

Regression test selection (RTS) aims to reduce regression testing time by only re-running the tests affected by code changes. Prior research on RTS can be broadly split into dynamic and static techniques. A recently developed dynamic RTS technique called Ekstazi is gaining some adoption in practice, and its evaluation shows that selecting tests at a coarser, class-level granularity provides better results than selecting tests at a finer, method-level granularity. As dynamic RTS is gaining adoption, it is timely to also evaluate static RTS techniques, some of which were proposed over three decades ago but not extensively evaluated on modern software projects. This paper presents the first extensive study that evaluates the performance benefits of static RTS techniques and their safety; a technique is safe if it selects to run all tests that may be affected by code changes. We implemented two static RTS techniques, one class-level and one method-level, and compare several variants of these techniques. We also compare these static RTS techniques against Ekstazi, a state-of-the-art, class-level, dynamic RTS technique. The experimental results on 985 revisions of 22 open-source projects show that the class-level static RTS technique is comparable to Ekstazi, with similar performance benefits, but at the risk of being unsafe sometimes. In contrast, the method-level static RTS technique performs rather poorly.
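
Class-level static RTS can be sketched in a few lines: build a class dependency graph, take the transitive closure of each test class's dependencies, and select the tests whose closure touches a changed class. The dependency graph and change sets below are hypothetical; the studied techniques (and Ekstazi, which is dynamic) derive this information from real projects.

```python
# Sketch of class-level static regression test selection: re-run only the tests whose
# transitive class dependencies contain a changed class. Graph and changes are hypothetical.

DEPENDS_ON = {
    "FooTest": ["Foo"],
    "BarTest": ["Bar"],
    "Foo":     ["Util"],
    "Bar":     ["Foo", "Util"],
    "Util":    [],
}

def transitive_closure(cls, graph):
    closure, stack = set(), [cls]
    while stack:
        current = stack.pop()
        for dep in graph.get(current, []):
            if dep not in closure:
                closure.add(dep)
                stack.append(dep)
    return closure

def select_tests(changed, graph):
    tests = [name for name in graph if name.endswith("Test")]
    return [t for t in tests if transitive_closure(t, graph) & set(changed)]

print(select_tests({"Util"}, DEPENDS_ON))   # both tests reach Util transitively
print(select_tests({"Bar"},  DEPENDS_ON))   # only BarTest depends on Bar
```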

Tymburibá, Mateus, Moreira, Rubens E. A., Quintão Pereira, Fernando Magno.  2016.  Inference of Peak Density of Indirect Branches to Detect ROP Attacks. Proceedings of the 2016 International Symposium on Code Generation and Optimization. :150–159.

A program subject to a Return-Oriented Programming (ROP) attack usually presents an execution trace with a high frequency of indirect branches. From this observation, several researchers have proposed to monitor the density of these instructions to detect ROP attacks. These techniques use universal thresholds: the density of indirect branches that characterizes an attack is the same for every application. This paper shows that universal thresholds are easy to circumvent. As an alternative, we introduce an inter-procedural semi-context-sensitive static code analysis that estimates the maximum density of indirect branches possible for a program. This analysis determines detection thresholds for each application, thus making it more difficult for attackers to compromise programs via ROP. We have used an implementation of our technique in LLVM to find specific thresholds for the programs in SPEC CPU2006. By comparing these thresholds against actual execution traces of corresponding programs, we demonstrate the accuracy of our approach. Furthermore, our algorithm is practical: it finds an approximate solution to a theoretically undecidable problem, and handles programs with up to 700 thousand assembly instructions in 25 minutes.
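
The monitoring side of this line of work can be sketched as a sliding-window density check over an execution trace. The traces and the threshold below are toy values; the paper's contribution is the static analysis that derives a sound per-program threshold, which is not reproduced here.

```python
# Sketch: compute the peak density of indirect branches over a sliding window of an
# execution trace and raise an alarm when it exceeds a per-program threshold.
# Traces and threshold are toy values.

def peak_indirect_branch_density(trace, window=16):
    """trace is a sequence of 1 (indirect branch) / 0 (other instruction) flags."""
    peak = 0.0
    for start in range(0, max(len(trace) - window + 1, 1)):
        chunk = trace[start:start + window]
        peak = max(peak, sum(chunk) / len(chunk))
    return peak

normal_trace = [0] * 60 + [1] + [0] * 60    # occasional indirect call
rop_like_trace = [0, 1] * 30                # gadget chaining: indirect branch every other step
THRESHOLD = 0.30                            # hypothetical per-program bound

for name, trace in [("normal", normal_trace), ("rop-like", rop_like_trace)]:
    density = peak_indirect_branch_density(trace)
    verdict = "ALARM" if density > THRESHOLD else "ok"
    print(f"{name}: peak density {density:.2f} -> {verdict}")
```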

Brown, Fraser, Nötzli, Andres, Engler, Dawson.  2016.  How to Build Static Checking Systems Using Orders of Magnitude Less Code. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :143–157.

Modern static bug finding tools are complex. They typically consist of hundreds of thousands of lines of code, and most of them are wedded to one language (or even one compiler). This complexity makes the systems hard to understand, hard to debug, and hard to retarget to new languages, thereby dramatically limiting their scope. This paper reduces checking system complexity by addressing a fundamental assumption, the assumption that checkers must depend on a full-blown language specification and compiler front end. Instead, our program checkers are based on drastically incomplete language grammars ("micro-grammars") that describe only portions of a language relevant to a checker. As a result, our implementation is tiny: roughly 2,500 lines of code, about two orders of magnitude smaller than a typical system. We hope that this dramatic increase in simplicity will allow people to use more checkers on more systems in more languages. We implement our approach in μchex, a language-agnostic framework for writing static bug checkers. We use it to build micro-grammar based checkers for six languages (C, the C preprocessor, C++, Java, JavaScript, and Dart) and find over 700 errors in real-world projects.
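
The micro-grammar idea can be sketched with a checker that recognises only the fragment of the language it needs, here the condition of an if statement in C-like code, and ignores everything else. The toy check flags a lone = where == was probably intended; it is an illustration, not the μchex framework.

```python
# Sketch of a micro-grammar style checker: instead of a full parser, recognise only the
# tiny language fragment the check needs (the condition of an `if`) and skip the rest.
import re

IF_CONDITION = re.compile(r"\bif\s*\(([^()]*)\)")          # micro-grammar: just `if (...)`
SINGLE_ASSIGN = re.compile(r"(?<![=!<>])=(?!=)")           # a lone `=`, not ==, !=, <=, >=

def check(source: str):
    findings = []
    for number, line in enumerate(source.splitlines(), start=1):
        for match in IF_CONDITION.finditer(line):
            if SINGLE_ASSIGN.search(match.group(1)):
                findings.append(f"line {number}: assignment inside if-condition")
    return findings

c_like = """
int main(void) {
    int x = read_value();
    if (x = 0) {          /* probably meant x == 0 */
        handle_zero();
    }
    if (x == 1) { handle_one(); }
    return 0;
}
"""
for finding in check(c_like):
    print(finding)
```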

Kumar, Snehasish, Srinivasan, Vijayalakshmi, Sharifian, Amirali, Sumner, Nick, Shriraman, Arrvindh.  2016.  Peruse and Profit: Estimating the Accelerability of Loops. Proceedings of the 2016 International Conference on Supercomputing. :21:1–21:13.

There exists a multitude of execution models available today for a developer to target. The choices vary from general purpose processors to fixed-function hardware accelerators, with a large number of variations in between. There is a growing demand to assess the potential benefits of porting or rewriting an application to a target architecture in order to fully exploit the performance and/or energy efficiency offered by such targets. However, as a first step of this process, it is necessary to determine whether the application has characteristics suitable for acceleration. In this paper, we present Peruse, a tool to characterize the features of loops in an application and to help the programmer understand the amenability of loops for acceleration. We consider a diverse set of features ranging from loop characteristics (e.g., loop exit points) and operation mixes (e.g., control vs data operations) to wider code region characteristics (e.g., idempotency, vectorizability). Peruse is language, architecture, and input independent and uses the intermediate representation of compilers to do the characterization. Using static analyses makes Peruse scalable and enables analysis of large applications to identify and extract interesting loops suitable for acceleration. We show analysis results for unmodified applications from the SPEC CPU benchmark suite, Polybench, and HPC workloads. For an end-user it is more desirable to get an estimate of the potential speedup due to acceleration. We use the workload characterization results of Peruse as features and develop a machine-learning based model to predict the potential speedup of a loop when off-loaded to a fixed-function hardware accelerator. We use the model to predict the speedup of loops selected by Peruse and achieve an accuracy of 79%.

2017-03-07
Masood, A., Java, J..  2015.  Static analysis for web service security - Tools & techniques for a secure development life cycle. 2015 IEEE International Symposium on Technologies for Homeland Security (HST). :1–6.

In this ubiquitous IoT (Internet of Things) era, web services have become a vital part of today's critical national and public sector infrastructure. With the industry-wide adoption of service-oriented architecture (SOA), web services have become an integral component of the enterprise software ecosystem, resulting in new security challenges. Web services are strategic components used by a wide variety of organizations for information exchange on the internet scale. The public deployment of mission-critical APIs opens up the possibility of software bugs being maliciously exploited. Therefore, vulnerability identification in web services through static as well as dynamic analysis is a thriving and interesting area of research in academia, national security and industry. Using the OWASP (Open Web Application Security Project) web services guidelines, this paper discusses the challenges of existing standards, and reviews new techniques and tools to improve services security by detecting vulnerabilities. Recent vulnerabilities like Shellshock and Heartbleed have shifted the focus of risk assessment to the application layer, which for the majority of organizations means public-facing web services and web/mobile applications. RESTful services have now become the new normal for service development; therefore, SOAP-centric standards such as XML Encryption, XML Signature, WS-Security, and WS-SecureConversation are not nearly as relevant. In this paper we provide an overview of the OWASP top 10 vulnerabilities for web services, and discuss the potential static code analysis techniques to discover these vulnerabilities. The paper reviews the security issues targeting web services, software/program verification and the security development lifecycle.