Biblio

Found 232 results

Filters: Keyword is Testing
2018-02-02
Khari, M., Vaishali, Kumar, M. 2016. Analysis of software security testing using metaheuristic search technique. 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom). :2147–2152.

Metaheuristic search is a more advanced approach than traditional heuristic search. Selecting one option among several alternatives is easy; what is hard is guaranteeing that the selection is cost-effective. Metaheuristic search addresses this problem with the help of a fitness function, a crucial measure that decides which solution is optimal to choose from the available set of test sets. This paper discusses hill climbing, simulated annealing, tabu search, genetic algorithms, and particle swarm optimization in detail, explaining each with the help of its algorithm. Combining metaheuristic search techniques with security testing methods would yield searching techniques that are both more effective and more secure. This paper primarily focuses on the metaheuristic search techniques.
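
The role of the fitness function can be made concrete with a minimal hill-climbing sketch in Python; the coverage data, the coverage-minus-cost fitness, and all names below are invented for illustration and are not the paper's implementation:

    import random

    def fitness(test_set, coverage):
        """Illustrative fitness: branches covered minus a small cost per test."""
        covered = set().union(*(coverage[t] for t in test_set)) if test_set else set()
        return len(covered) - 0.1 * len(test_set)

    def hill_climb(tests, coverage, iterations=1000):
        """Flip one test in or out of the current set; keep the change if fitness improves."""
        current = set(random.sample(tests, k=len(tests) // 2))
        best = fitness(current, coverage)
        for _ in range(iterations):
            candidate = set(current)
            candidate.symmetric_difference_update({random.choice(tests)})
            f = fitness(candidate, coverage)
            if f > best:
                current, best = candidate, f
        return current, best

    # Toy data: each test covers a set of branch ids.
    coverage = {"t1": {1, 2}, "t2": {2, 3}, "t3": {4}, "t4": {1, 4, 5}}
    print(hill_climb(list(coverage), coverage))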

Chen, L., May, J. 2017. Theoretical Feasibility of Statistical Assurance of Programmable Systems Based on Simulation Tests. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :630–631.

This paper presents a new model to support empirical failure probability estimation for a software-intensive system. The new element of the approach is that it combines the results of testing on a simulated hardware platform with results from testing on the real platform. This approach addresses a serious practical limitation of a technique known as statistical testing. This limitation will be called the test time expansion problem (or simply the 'time problem'): the amount of testing required to demonstrate useful levels of reliability over a time period T is many orders of magnitude greater than T. The time problem arises whether the aim is to demonstrate ultra-high reliability levels for protection systems, or to demonstrate any (desirable) reliability levels for continuous-operation ('high demand') systems. Specifically, the theoretical feasibility of a platform simulation approach is considered since, if this is not proven, questions of practical implementation are moot. Subject to the assumptions made in the paper, theoretical feasibility is demonstrated.
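
The scale of the 'time problem' follows from the standard statistical-testing bound: demonstrating a failure probability below p with confidence C from failure-free tests needs roughly n >= ln(1-C)/ln(1-p) independent runs. A back-of-the-envelope sketch (the bound is the textbook one; applying it here is our illustration, not the paper's model):

    import math

    def tests_needed(p, confidence):
        """Failure-free runs needed to claim failure probability < p at the given confidence."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - p))

    # Claiming 1e-4 failures per demand at 99% confidence already needs ~46,000 runs;
    # each extra order of magnitude in reliability multiplies the test effort by ~10.
    print(tests_needed(1e-4, 0.99))  # 46050
    print(tests_needed(1e-7, 0.99))  # ~46 million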

Whitmore, J., Tobin, W. 2017. Improving Attention to Security in Software Design with Analytics and Cognitive Techniques. 2017 IEEE Cybersecurity Development (SecDev). :16–21.

There is a widening chasm between the ease of creating software and the difficulty of "building security in". This paper reviews the approach, the findings, and recent experiments from a seven-year effort to enable consistency across a large, diverse development organization and software portfolio via policies, guidance, automated tools, and services. Experience shows that developing secure software is an elusive goal for most. It requires every team to know and apply a wide range of security knowledge in the context of what software is being built, how the software will be used, and the projected threats in the environment where the software will operate. The drive for better secure-development outcomes and increased developer productivity led to experiments to augment developer knowledge and eventually realize the goal of "building the right security in".

Amir, K. C., Goulart, A., Kantola, R. 2016. Keyword-driven security test automation of Customer Edge Switching (CES) architecture. 2016 8th International Workshop on Resilient Networks Design and Modeling (RNDM). :216–223.

Customer Edge Switching (CES) is an experimental Internet architecture that provides reliable and resilient multi-domain communications. It provides resilience against security threats because domains negotiate inbound and outbound policies before admitting new traffic. As CES and its signalling protocols are being prototyped, there is a need for independent testing of the CES architecture. Hence, our research goal is to develop an automated test framework that CES protocol designers and early adopters can use to improve the architecture. The test framework includes security, functional, and performance tests. In this paper, we present this automated security test framework, built using the Robot Framework and STRIDE analysis. By evaluating sample test scenarios, we show that the Robot Framework and our CES test suite have prompted productive discussions about this new architecture, in addition to serving as clear, easy-to-read documentation. Our research also confirms that test automation can be useful for improving new protocol architectures and validating their implementation.
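
Keyword-driven testing, the style the Robot Framework supports, maps human-readable test steps onto executable actions. A minimal Python sketch of the dispatch idea (the keywords and CES-flavoured actions are invented for illustration, not taken from the paper's test suite):

    # A test case is a list of (keyword, argument) steps; the framework dispatches
    # each step to the function registered under that keyword.
    def connect(host):
        print(f"connecting to {host}")

    def send_policy(policy):
        print(f"negotiating policy: {policy}")

    def expect_admitted(expected):
        print(f"checking that traffic admission == {expected}")

    KEYWORDS = {
        "Connect": connect,
        "Send Policy": send_policy,
        "Expect Admitted": expect_admitted,
    }

    test_case = [
        ("Connect", "ces-gateway.example"),
        ("Send Policy", "inbound: allow tcp/443"),
        ("Expect Admitted", True),
    ]

    for keyword, arg in test_case:
        KEYWORDS[keyword](arg)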

Tramèr, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J. P., Humbert, M., Juels, A., Lin, H. 2017. FairTest: Discovering Unwarranted Associations in Data-Driven Applications. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :401–416.

In a world where traditional notions of privacy are increasingly challenged by the myriad companies that collect and analyze our data, it is important that decision-making entities are held accountable for unfair treatments arising from irresponsible data usage. Unfortunately, a lack of appropriate methodologies and tools means that even identifying unfair or discriminatory effects can be a challenge in practice. We introduce the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities. We instantiate the UA framework in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment. It enables scalable and statistically rigorous investigation of associations between application outcomes (such as prices or premiums) and sensitive user attributes (such as race or gender). Furthermore, FairTest provides debugging capabilities that let programmers rule out potential confounders for observed unfair effects. We report on use of FairTest to investigate and in some cases address disparate impact, offensive labeling, and uneven rates of algorithmic error in four data-driven applications. As examples, our results reveal subtle biases against older populations in the distribution of error in a predictive health application and offensive racial labeling in an image tagger.
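
At its core, this kind of analysis tests for statistical association between an application outcome and a protected attribute. A minimal sketch using a chi-squared independence test on a contingency table (the data and significance threshold are illustrative; FairTest's actual investigative primitives and fairness metrics are richer):

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: protected attribute (e.g., age group); columns: outcome (error / no error).
    table = np.array([[30, 970],    # younger users
                      [80, 920]])   # older users

    chi2, p_value, dof, expected = chi2_contingency(table)
    if p_value < 0.05:
        print(f"possible unwarranted association (p = {p_value:.4f}); inspect subgroups")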

Rotella, P., Chulani, S. 2017. Predicting Release Reliability. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :39–46.

Customers need to know how reliable a new release is, and whether or not the new release has substantially different reliability, either better or worse, than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy; we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines the 'maturity assessment' component (i.e., estimating reliability prior to release to the field) and the 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us both to predict reliability and to compare reliability results for recent releases of a product.

2018-01-23
Wang, B., Song, W., Lou, W., Hou, Y. T. 2017. Privacy-preserving pattern matching over encrypted genetic data in cloud computing. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.

Personalized medicine performs diagnoses and treatments according to the DNA information of the patients. This new paradigm will change the health care model in the future. A doctor will perform DNA sequence matching instead of regular clinical laboratory tests to diagnose and medicate diseases. Additionally, with the help of affordable personal genomics services such as 23andMe, personalized medicine will be applied to a large population. Cloud computing will be the perfect computing model, as the volume of DNA data and the computation over it are often immense. However, due to its sensitivity, the DNA data should be encrypted before being outsourced to the cloud. In this paper, we start from a practical system model of personalized medicine and present a solution for the secure DNA sequence matching problem in cloud computing. Compared with existing solutions, our scheme protects the DNA data privacy as well as the search pattern, providing a better privacy guarantee. We have proved that our scheme is secure under a well-defined cryptographic assumption, i.e., the sub-group decision assumption over a bilinear group. Unlike the existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real-world problems is practical and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem; it applies to general privacy-preserving pattern matching problems, which are widespread in the real world.

2018-01-16
Buriro, A., Akhtar, Z., Crispo, B., Gupta, S. 2017. Mobile biometrics: Towards a comprehensive evaluation methodology. 2017 International Carnahan Conference on Security Technology (ICCST). :1–6.

Smartphones have become the pervasive personal computing platform. Recent years have thus witnessed exponential growth in research and development of secure and usable authentication schemes for smartphones. Several explicit (e.g., PIN-based) and/or implicit (e.g., biometrics-based) authentication methods have been designed and published in the literature. In fact, some of them have been embedded in commercial mobile products as well. However, the published studies report only the brighter side of the proposed scheme(s), e.g., the higher accuracy attained by the proposed mechanism, while other associated operational issues, such as computational overhead, robustness to different environmental conditions/attacks, and usability, are intentionally or unintentionally ignored. More specifically, most publicly available frameworks do not discuss or explore any evaluation criteria, usability measures, or environment-related measures beyond accuracy under zero-effort attacks. Thus, their baseline operations usually give a false sense of progress. This paper therefore presents guidelines for researchers designing, implementing, and evaluating smartphone user authentication methods, for a positive impact on future technological developments.

Richardson, D. P., Lin, A. C., Pecarina, J. M. 2017. Hosting distributed databases on internet of things-scale devices. 2017 IEEE Conference on Dependable and Secure Computing. :352–357.

The Internet of Things (IoT) era envisions billions of interconnected devices capable of providing new interactions between the physical and digital worlds, offering a new range of content and services. At the fundamental level, IoT nodes are physical devices that exist in the real world, consisting of networking, sensor, and processing components. Some application examples include mobile and pervasive computing and sensor nets, which require distributed device deployments that feed information into databases for exploitation. While the data can be centralized, there are advantages, such as system resiliency and security, to adopting a decentralized architecture that pushes the computation and storage to the network edge and onto IoT devices. However, these devices tend to be much more limited in computation power than traditional racked servers. This research explores running the Cassandra distributed database on IoT-representative device specifications. Experiments conducted on both virtual machines and Raspberry Pis to simulate IoT devices examined latency issues with network compression, processing workloads, and various memory and node configurations in laboratory settings. We demonstrate that distributed databases are feasible on Raspberry Pis as IoT-representative devices and present findings that may help in application design.
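
For a flavour of the latency experiments described, a minimal sketch using the DataStax Python driver for Cassandra; the hostnames, keyspace, and table schema are assumptions made up for the example, not the paper's setup:

    import time
    from cassandra.cluster import Cluster  # pip install cassandra-driver

    # Contact points would be the Raspberry Pi nodes of the test cluster.
    cluster = Cluster(["pi-node-1.local", "pi-node-2.local"])
    session = cluster.connect("sensors")  # assumed keyspace

    start = time.perf_counter()
    session.execute(
        "INSERT INTO readings (device_id, ts, value) VALUES (%s, toTimestamp(now()), %s)",
        ("dev-42", 21.5),
    )
    print(f"write latency: {(time.perf_counter() - start) * 1000:.1f} ms")
    cluster.shutdown()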

Zeitz, K., Cantrell, M., Marchany, R., Tront, J. 2017. Designing a Micro-moving Target IPv6 Defense for the Internet of Things. 2017 IEEE/ACM Second International Conference on Internet-of-Things Design and Implementation (IoTDI). :179–184.

As the use of low-power, low-resource embedded devices continues to increase dramatically with the introduction of new Internet of Things (IoT) devices, security techniques compatible with these devices are necessary. This research advances knowledge in the area of cyber security for the IoT by exploring a moving target defense that limits the time attackers have to conduct reconnaissance on embedded systems, while considering the challenges IoT devices present, such as resource and performance constraints. We introduce the design of and optimizations for a Micro-Moving Target IPv6 Defense, including a description of the modes of operation, the needed protocols, and the use of lightweight hash algorithms. We also detail testing and validation possibilities, including a Cooja simulation configuration, and describe directions for further enhancing and validating the security technique through large-scale simulations and hardware testing, followed by information on other future considerations.
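
The moving-target idea rests on deriving the host's IPv6 interface identifier from a keyed hash recomputed every time slot, so that both endpoints can follow the address while outsiders cannot. A minimal sketch (the key handling, interval length, and prefix are illustrative assumptions, not the paper's specification):

    import hashlib, hmac, time, ipaddress

    def rotating_iid(shared_key: bytes, interval_s: int = 10) -> bytes:
        """Derive a 64-bit interface identifier for the current time slot."""
        slot = int(time.time()) // interval_s
        mac = hmac.new(shared_key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
        return mac[:8]  # lightweight: only 64 bits of the hash are kept

    prefix = int(ipaddress.ip_address("2001:db8::"))  # documentation prefix, example only
    iid = int.from_bytes(rotating_iid(b"pre-shared-key"), "big")
    print(ipaddress.ip_address(prefix | iid))  # address valid only for this time slot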

2018-01-10
Meltsov, V. Y., Lesnikov, V. A., Dolzhenkova, M. L. 2017. Intelligent system of knowledge control with the natural language user interface. 2017 International Conference "Quality Management, Transport and Information Security, Information Technologies" (IT QM IS). :671–675.

The paper considers the possibility and necessity of using, in modern control and training systems with a natural language interface, methods and mechanisms characteristic of knowledge processing systems. This symbiosis assumes the introduction of specialized inference machines into the testing systems. For the effective operation of such an intelligent interpreter, it is necessary to "translate" the user's answers into one of the known forms of knowledge representation, for example, into the expressions (rules) of first-order predicate calculus. A lexical processor, performing morphological, syntactic, and semantic analysis, solves this task. To simplify further work with the rules, the Skolem transformation is used, which eliminates quantifiers and presents the semantic structures in the form of sequents (clauses, disjuncts). The basic principles of operation of the inference machine, the main component of the developed intellectual subsystem, are described. To improve the performance of the machine, one of the fastest methods was chosen: a parallel method of deductive inference based on the division of clauses. The parallelism inherent in the method, and the use of a dataflow architecture, allow parallel computations in the inference machine to be implemented without additional effort from the programmer. All this reduces the time for comparing the sequents stored in the knowledge base several-fold compared with traditional inference mechanisms that implement variants of the resolution principle. Formulas and features of the technique for numerically scoring the user's answers are given. In general, developing the human-computer dialogue capabilities of test systems through a specialized knowledge processing module will increase the intelligence of such systems, allow the semantics of sentences to be considered directly, determine more accurately the relevance of the user's response to the reference knowledge and, ultimately, dispel the skepticism of many managers toward machine testing systems.
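
The resolution principle the abstract refers to can be illustrated in miniature: two clauses containing complementary literals resolve into a new clause. A toy propositional sketch (the clause encoding and example are invented; the paper's machine works on first-order sequents with dataflow parallelism):

    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        """Resolve two clauses on the first complementary literal pair found."""
        for lit in c1:
            if negate(lit) in c2:
                return (c1 - {lit}) | (c2 - {negate(lit)})
        return None

    # Toy knowledge base: "if the answer is correct, award points" and "the answer is correct".
    kb_rule = frozenset({"~answer_correct", "award_points"})
    kb_fact = frozenset({"answer_correct"})
    print(resolve(kb_rule, kb_fact))  # frozenset({'award_points'})
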
2017-12-28
Sultana, K. Z. 2017. Towards a software vulnerability prediction model using traceable code patterns and software metrics. 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). :1022–1025.

Software security is an important aspect of ensuring software quality. The goal of this study is to help developers evaluate software security using traceable patterns and software metrics during development. The concept of traceable patterns is similar to design patterns but they can be automatically recognized and extracted from source code. If these patterns can better predict vulnerable code compared to traditional software metrics, they can be used in developing a vulnerability prediction model to classify code as vulnerable or not. By analyzing and comparing the performance of traceable patterns with metrics, we propose a vulnerability prediction model. This study explores the performance of some code patterns in vulnerability prediction and compares them with traditional software metrics. We use the findings to build an effective vulnerability prediction model. We evaluate security vulnerabilities reported for Apache Tomcat, Apache CXF and three stand-alone Java web applications. We use machine learning and statistical techniques for predicting vulnerabilities using traceable patterns and metrics as features. We found that patterns have a lower false negative rate and higher recall in detecting vulnerable code than the traditional software metrics.
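
Comparing feature sets in this way comes down to training a classifier on each set and comparing recall and false-negative rate. A generic scikit-learn sketch (the features and labels are random placeholders, not the paper's Tomcat/CXF data):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    X = np.random.rand(500, 12)       # placeholder: pattern or metric features per file
    y = np.random.randint(0, 2, 500)  # placeholder: 1 = vulnerability later reported

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    recall = recall_score(y_te, model.predict(X_te))
    print(f"recall = {recall:.2f}, false negative rate = {1 - recall:.2f}")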

2017-12-20
Weedon, M., Tsaptsinos, D., Denholm-Price, J. 2017. Random forest explorations for URL classification. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–4.

Phishing is a major concern on the Internet today, and many users are falling victim to criminals' deceitful tactics. Blacklisting is still the most common defence users have against such phishing websites, but it is failing to cope with their increasing number. In recent years, researchers have devised modern ways of detecting such websites using machine learning. One such method is to build machine-learnt models of URL features to classify whether URLs are phishing. However, opinions vary on the best approach to features and algorithms. In this paper, the objective is to evaluate the performance of the Random Forest algorithm using a lexical-only dataset. The performance is benchmarked against other machine learning algorithms and against results reported in the literature. Initial results from experiments indicate that the Random Forest algorithm performs best, yielding 86.9% accuracy.
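
A minimal sketch of lexical-only URL classification with a Random Forest; the feature set is a common lexical baseline (length, digits, separators) and the tiny training set is invented, not the paper's dataset:

    from sklearn.ensemble import RandomForestClassifier

    def lexical_features(url):
        return [len(url), sum(c.isdigit() for c in url),
                url.count("."), url.count("-"), url.count("@"),
                int(url.startswith("https"))]

    urls = ["https://example.com/login",
            "http://192.0.2.7/paypa1-secure-update",
            "https://en.wikipedia.org/wiki/Phishing",
            "http://free-gift.example-bank.top/verify"]
    labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = phishing

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit([lexical_features(u) for u in urls], labels)
    print(clf.predict([lexical_features("http://secure-l0gin.example.top/account")]))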

Sudhodanan, A., Carbone, R., Compagna, L., Dolgin, N., Armando, A., Morelli, U. 2017. Large-Scale Analysis & Detection of Authentication Cross-Site Request Forgeries. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :350–365.

Cross-Site Request Forgery (CSRF) attacks are one of the critical threats to web applications. In this paper, we focus on CSRF attacks targeting web sites' authentication and identity management functionalities. We refer to them collectively as Authentication CSRF (Auth-CSRF for short). We started by collecting several Auth-CSRF attacks reported in the literature, then analyzed their underlying strategies and identified 7 security testing strategies that can help a manual tester uncover vulnerabilities enabling Auth-CSRF. To check the effectiveness of our testing strategies and to estimate the incidence of Auth-CSRF, we conducted an experimental analysis covering 300 web sites belonging to 3 different rank ranges of the Alexa global top 1500. The results of our experiments are alarming: of the 300 web sites we considered, 133 qualified for conducting our experiments, and 90 of these suffered from at least one vulnerability enabling Auth-CSRF (i.e., 68%). We further generalized our testing strategies, enhanced them with the knowledge acquired during our experiments, and implemented them as an extension (namely CSRF-checker) to the open-source penetration testing tool OWASP ZAP. With the help of CSRF-checker, we tested 132 additional web sites (again from the Alexa global top 1500) and identified 95 vulnerable ones (i.e., 72%). Our findings include serious vulnerabilities among the web sites of Microsoft, Google, eBay, and others. Finally, we responsibly disclosed our findings to the affected vendors.
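
The simplest of such testing strategies can be approximated in a few lines: fetch an authentication form and check whether it carries an unpredictable anti-CSRF token. A rough sketch (the target URL and field-name heuristic are invented; the paper's CSRF-checker implements far richer strategies inside OWASP ZAP):

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://target.example/login")  # hypothetical site under test
    soup = BeautifulSoup(resp.text, "html.parser")

    for form in soup.find_all("form"):
        hidden = [i.get("name", "") for i in form.find_all("input", type="hidden")]
        has_token = any("csrf" in n.lower() or "token" in n.lower() for n in hidden)
        status = "present" if has_token else "MISSING"
        print(f"form action={form.get('action')}: anti-CSRF token {status}")
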
Mohammadi, M., Chu, B., Lipford, H. R. 2017. Detecting Cross-Site Scripting Vulnerabilities through Automated Unit Testing. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :364–373.

The best practice to prevent Cross Site Scripting (XSS) attacks is to apply encoders to sanitize untrusted data. To balance security and functionality, encoders should be applied to match the web page context, such as HTML body, JavaScript, and style sheets. A common programming error is the use of a wrong encoder to sanitize untrusted data, leaving the application vulnerable. We present a security unit testing approach to detect XSS vulnerabilities caused by improper encoding of untrusted data. Unit tests for the XSS vulnerability are automatically constructed out of each web page and then evaluated by a unit test execution framework. A grammar-based attack generator is used to automatically generate test inputs. We evaluate our approach on a large open source medical records application, demonstrating that we can detect many 0-day XSS vulnerabilities with very low false positives, and that the grammar-based attack generator has better test coverage than industry best practices.
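
The failure mode targeted here, right encoder but wrong context, is easy to demonstrate: an HTML-body encoder does not make data safe inside a JavaScript string. A minimal unit-test sketch (the vulnerable renderer and attack string are invented; a failing test signals a detected vulnerability):

    import html
    import unittest

    def render_js(untrusted):
        # Bug under test: an HTML-body encoder (escapes only & < >) used in a JS string context.
        return f"<script>var name = '{html.escape(untrusted, quote=False)}';</script>"

    class XssUnitTest(unittest.TestCase):
        def test_js_context_breakout(self):
            attack = "'; alert(1); //"  # a grammar-generated attack string for this context
            page = render_js(attack)
            # The attack contains no & < or >, so the wrong encoder passes it through verbatim.
            self.assertNotIn(attack, page, "untrusted data broke out of its JS string context")

    if __name__ == "__main__":
        unittest.main()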

2017-12-12
Chow, J., Li, X., Mountrouidou, X. 2017. Raising flags: Detecting covert storage channels using relative entropy. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :25–30.

This paper focuses on one type of Covert Storage Channel (CSC) that uses the 6-bit TCP flag header in TCP/IP network packets to transmit secret messages between accomplices. We use relative entropy to characterize the irregularity of network flows in comparison to normal traffic. A normal profile is created from the frequency distribution of TCP flags in regular traffic packets. In detection, the TCP flag frequency distribution of network traffic is computed for each unique IP pair. To evaluate the accuracy and efficiency of the proposed method, this study uses real regular traffic datasets as well as CSC messages encoded under assumptions of both clear text, composed of a list of keywords common in Unix systems, and encrypted text. Moreover, smart accomplices may use only those TCP flags that ever appear in normal traffic; even then, the relative entropy can reveal the dissimilarity of a different frequency distribution from the normal profile. We have also used different data processing methods in detection: one method summarizes all the packets for a pair of IP addresses into one flow, and the other uses a sliding window over such a flow to generate multiple frames of packets. The experimental results, displayed as Receiver Operating Characteristic (ROC) curves, show that the method is promising for differentiating normal and CSC traffic packet streams. Furthermore, the delay in raising an alert is analyzed for CSC messages to show the method's efficiency.
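
Relative entropy here is the Kullback-Leibler divergence D(Q||P) = sum over flags f of Q(f) log2(Q(f)/P(f)) between an observed flag distribution Q and the normal profile P. A minimal sketch (the flag frequencies are invented, and a real detector would smooth zero counts and tune the alert threshold from ROC analysis):

    import math

    def relative_entropy(q, p):
        """D(Q || P) over TCP flag types; assumes p is smoothed so p[f] > 0."""
        return sum(q[f] * math.log2(q[f] / p[f]) for f in q if q[f] > 0)

    normal = {"SYN": 0.30, "SYN-ACK": 0.28, "ACK": 0.35, "FIN": 0.05, "RST": 0.01, "PSH": 0.01}
    flow   = {"SYN": 0.05, "SYN-ACK": 0.05, "ACK": 0.10, "FIN": 0.40, "RST": 0.20, "PSH": 0.20}

    score = relative_entropy(flow, normal)
    print(f"D(flow || normal) = {score:.2f} bits" + (" -> alert" if score > 1.0 else ""))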

2017-12-04
Alejandre, F. V., Cortés, N. C., Anaya, E. A. 2017. Feature selection to detect botnets using machine learning algorithms. 2017 International Conference on Electronics, Communications and Computers (CONIELECOMP). :1–7.

In this paper, a novel method for feature selection to detect botnets in their Command and Control (C&C) phase is presented. A major problem is that researchers have proposed features based on their expertise, but there is no method to evaluate these features, and some may yield a lower detection rate than others. To this aim, we find the feature set, based on connections of botnets in their C&C phase, that maximizes the detection rate of these botnets. A Genetic Algorithm (GA) was used to select the set of features that gives the highest detection rate, and the machine learning algorithm C4.5 was used to classify connections as belonging to a botnet or not. The datasets used in this paper were extracted from the ISOT and ISCX repositories. Tests were done to find the best parameters for the GA and the C4.5 algorithm. We also performed experiments to obtain the best set of features for each botnet analyzed (specific) and for each type of botnet (general). The results, shown at the end of the paper, achieve a considerable reduction in features and a higher detection rate than the related work.
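
In a GA formulation like this, each chromosome is a bitmask over candidate features and its fitness is the detection rate of a classifier trained on the selected subset. A compact sketch (scikit-learn's entropy-based decision tree stands in for C4.5; the data and GA parameters are placeholders, not the paper's setup):

    import random
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X = np.random.rand(300, 20)          # placeholder C&C flow features (ISOT/ISCX-style)
    y = np.random.randint(0, 2, 300)     # placeholder labels: 1 = botnet connection

    def fitness(mask):
        """Detection-rate proxy: CV accuracy of a C4.5-style tree on the selected features."""
        cols = [i for i, bit in enumerate(mask) if bit]
        if not cols:
            return 0.0
        tree = DecisionTreeClassifier(criterion="entropy")  # entropy splits, as in C4.5
        return cross_val_score(tree, X[:, cols], y, cv=3).mean()

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for _ in range(10):                                  # a few GA generations
        population.sort(key=fitness, reverse=True)
        parents, children = population[:10], []
        while len(children) < 20:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 20)                # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(20)] ^= 1             # point mutation
            children.append(child)
        population = parents + children

    population.sort(key=fitness, reverse=True)
    print("selected features:", [i for i, bit in enumerate(population[0]) if bit])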

2017-11-27
Mohammadi, M., Chu, B., Lipford, H. R., Murphy-Hill, E. 2016. Automatic Web Security Unit Testing: XSS Vulnerability Detection. 2016 IEEE/ACM 11th International Workshop in Automation of Software Test (AST). :78–84.

Integrating security testing into the workflow of software developers not only saves the resources of separate security testing but also reduces the cost of fixing security vulnerabilities by detecting them early in the development cycle. We present an automatic testing approach to detect a common type of Cross Site Scripting (XSS) vulnerability caused by improper encoding of untrusted data. We automatically extract the encoding functions a web application uses to sanitize untrusted inputs and then evaluate their effectiveness by automatically generating XSS attack strings. Our evaluations show that this technique can detect 0-day XSS vulnerabilities that static analysis tools cannot find. We also show that our approach can efficiently cover a common type of XSS vulnerability. The approach can be generalized to test input validation against other types of injection, such as command-line injection.
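
A flavour of grammar-based attack generation: expand a small grammar of context-breakout fragments into candidate attack strings and keep those an extracted encoder passes through unchanged. The grammar below is a toy, far smaller than the paper's generator:

    import html
    import itertools

    # Toy attack grammar: payload := breakout + trigger + cleanup
    breakouts = ["'", '"', "</script>", "javascript:"]
    triggers = ["<script>alert(1)</script>", "onerror=alert(1)", "alert(1)"]
    cleanups = ["", "//", "<!--"]

    def attack_strings():
        for b, t, c in itertools.product(breakouts, triggers, cleanups):
            yield b + t + c

    # Encoder under test: HTML-body escaping. Payloads it passes through unchanged
    # (e.g. javascript:alert(1)) would survive in a URL context.
    survivors = [a for a in attack_strings() if html.escape(a) == a]
    print(f"{len(survivors)} attack strings pass the encoder unchanged:", survivors)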

2017-11-20
Thongthua, A., Ngamsuriyaroj, S. 2016. Assessment of Hypervisor Vulnerabilities. 2016 International Conference on Cloud Computing Research and Innovations (ICCCRI). :71–77.

Hypervisors are the main components for managing virtual machines on cloud computing systems. The security of hypervisors is therefore crucial, as the whole system can be compromised when just one vulnerability is exploited. In this paper, we assess the vulnerabilities of the widely used hypervisors VMware ESXi, Citrix XenServer, and KVM using the NIST SP 800-115 security testing framework. We perform real experiments to assess the vulnerabilities of these hypervisors using security testing tools. The results are evaluated using weakness information from CWE and vulnerability information from CVE. We also compute severity scores using CVSS information. All vulnerabilities found in the three hypervisors are compared in terms of weaknesses, severity scores, and impact. The experimental results show that ESXi and XenServer have common weaknesses and vulnerabilities, whereas KVM has fewer vulnerabilities. In addition, we discovered a new vulnerability, an HTTP response splitting flaw, in the ESXi Web interface.
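
Severity scoring with CVSS reduces each vulnerability to arithmetic over its metric values. A sketch of the CVSS v3.1 base-score formula for the scope-unchanged case (the paper used the CVSS data of its time; full CVSS also covers changed scope and precise rounding rules):

    import math

    def base_score(av, ac, pr, ui, c, i, a):
        """CVSS v3.1 base score, scope-unchanged case only."""
        iss = 1 - (1 - c) * (1 - i) * (1 - a)
        impact = 6.42 * iss
        exploitability = 8.22 * av * ac * pr * ui
        if impact <= 0:
            return 0.0
        return math.ceil(min(impact + exploitability, 10) * 10) / 10  # round up to 0.1

    # AV:N/AC:L/PR:N/UI:N/C:H/I:H/A:H -> 9.8 (e.g., a remotely exploitable hypervisor flaw)
    print(base_score(av=0.85, ac=0.77, pr=0.85, ui=0.85, c=0.56, i=0.56, a=0.56))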

2017-09-15
Multari, Nicholas J., Singhal, Anoop, Manz, David O. 2016. SafeConfig'16: Testing and Evaluation for Active and Resilient Cyber Systems. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1871–1872.

The premise of this year's SafeConfig Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective of this workshop is the exploration and discussion of scientifically sound testing regimens that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates at the very least modification, and potentially wholesale new developments, to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation, and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable, and meaningful to both researchers and practitioners.

2017-09-06
Theisen, C., Williams, L., Oliver, K., Murphy-Hill, E. 2016. Software Security Education at Scale. 2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C). :346–355.

Massively Open Online Courses (MOOCs) provide a unique opportunity to reach out to students who would not normally be reached, by alleviating the need to be physically present in the classroom. However, teaching software security coursework outside of a classroom setting can be challenging. What are the challenges when converting security material from an on-campus course to the MOOC format? The goal of this research is to assist educators in constructing software security coursework by providing a comparison of classroom courses and MOOCs. In this work, we compare demographic information, student motivations, and student results from an on-campus software security course and a MOOC version of the same course. We found that the two populations of students differed, with the MOOC reaching a more diverse set of students than the on-campus course. We found that students in the on-campus course had higher quiz scores, on average, than students in the MOOC. Finally, we document our experience running the courses and what we would do differently to assist future educators constructing similar MOOCs.

2017-09-05
Ghanim, Yasser. 2016. Toward a Specialized Quality Management Maturity Assessment Model. Proceedings of the 2nd Africa and Middle East Conference on Software Engineering. :1–8.

Software quality assessment models are either too broad, such as CMMI-DEV and SPICE, which cover the full software development life cycle (SDLC), or too narrow, such as TMMI and TPI, which focus on testing. Quality management, as a main concern within the software industry, is broader than the concept of testing. The V-Model sets a broader view with the concepts of verification and validation. Quality Assurance (QA) is another broader term that includes the quality of processes. Configuration audits add more scope. In parallel, there are some less visible dimensions of quality not often addressed in traditional models, such as the business alignment of QA efforts. This paper compares the commonly accepted models related to software quality management and proposes a model that fills an empty space in this area. The paper provides some analysis of the concepts of maturity and capability levels and proposes some adaptations for quality management assessment.

2017-06-05
Prechelt, Lutz, Schmeisky, Holger, Zieris, Franz.  2016.  Quality Experience: A Grounded Theory of Successful Agile Projects Without Dedicated Testers. Proceedings of the 38th International Conference on Software Engineering. :1017–1027.

Context: While successful conventional software development regularly employs separate testing staff, there are successful agile teams with as well as without separate testers. Question: How does successful agile development work without separate testers? What are advantages and disadvantages? Method: A case study, based on Grounded Theory evaluation of interviews and direct observation of three agile teams; one having separate testers, two without. All teams perform long-term development of parts of e-business web portals. Results: Teams without testers use a quality experience work mode centered around a tight field-use feedback loop, driven by a feeling of responsibility, supported by test automation, resulting in frequent deployments. Conclusion: In the given domain, hand-overs to separate testers appear to hamper the feedback loop more than they contribute to quality, so working without testers is preferred. However, Quality Experience is achievable only with modular architectures and in suitable domains.

2017-04-20
Shinde, P. S., Ardhapurkar, S. B. 2016. Cyber security analysis using vulnerability assessment and penetration testing. 2016 World Conference on Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave). :1–5.

In the last twenty years, the use of internet applications and web hacking activity have grown rapidly. Organizations face significant challenges in securing their web applications against rising cyber threats, as compromising on protection is not an acceptable option. Vulnerability Assessment and Penetration Testing (VAPT) techniques help them seek out security loopholes. These loopholes may be utilized by attackers to launch attacks on technical assets. It is thus necessary to ascertain these vulnerabilities and install security patches. VAPT helps organizations determine whether their security arrangements are working properly. This paper aims to give an overview of the various techniques used in vulnerability assessment and penetration testing (VAPT). It also focuses on cyber security awareness and its importance at various levels of an organization, so that organizations adopt the required up-to-date security measures to stay protected from various cyber-attacks.

2017-03-08
Huang, J., Hou, D., Schuckers, S., Hou, Z. 2015. Effect of data size on performance of free-text keystroke authentication. IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015). :1–7.

Free-text keystroke authentication has been demonstrated to be a promising behavioral biometric. But unlike physiological traits such as fingerprints, in free-text keystroke authentication there is no natural way to identify what makes a sample. It remains an open problem how much keystroke data is necessary to achieve acceptable authentication performance. Using public datasets and two existing algorithms, we conduct two experiments to investigate the effect of the reference profile size and test sample size on False Alarm Rate (FAR) and Imposter Pass Rate (IPR). We find that (1) larger reference profiles will drive down both IPR and FAR values, provided that the test samples are large enough, and (2) larger test samples have no obvious effect on IPR, regardless of the reference profile size. We discuss the practical implications of our findings.
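
The experimental setup can be sketched generically: build a reference profile from R keystroke timings, score test samples of S timings against it, and sweep R and S while recording FAR and IPR. Everything below (the Gaussian timing data, the mean-deviation score, the threshold) is an illustrative stand-in for the paper's datasets and algorithms:

    import numpy as np

    rng = np.random.default_rng(0)
    genuine = rng.normal(120, 15, 5000)    # placeholder digraph latencies (ms), genuine user
    imposter = rng.normal(150, 25, 5000)   # placeholder latencies, imposter

    def error_rates(profile_size, sample_size, threshold=20, trials=500):
        far = ipr = 0
        profile_mean = genuine[:profile_size].mean()      # reference profile
        for _ in range(trials):
            g = rng.choice(genuine, sample_size).mean()
            i = rng.choice(imposter, sample_size).mean()
            far += abs(g - profile_mean) > threshold      # genuine rejected: false alarm
            ipr += abs(i - profile_mean) <= threshold     # imposter accepted: pass
        return far / trials, ipr / trials

    for r, s in [(100, 50), (1000, 50), (1000, 500)]:
        print(f"profile={r}, sample={s}: FAR, IPR = {error_rates(r, s)}")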