Bibliography
Payment Service Providers (PSPs) enable application developers to effortlessly integrate complex payment processing code using software development toolkits (SDKs). While providing SDKs reduces the risk of application developers introducing payment vulnerabilities, vulnerabilities in the SDKs themselves can impact thousands of applications. In this work, we propose a static analysis tool for assessing PSP SDKs against OWASP’s MASVS industry standard for mobile application security. A key challenge for this work was adapting both the MASVS and program analysis tools, which are designed for whole applications, to the study of a single SDK. Our preliminary findings show that a number of payment processing libraries fail to meet MASVS security requirements, with evidence of persisting sensitive data insecurely, using outdated cryptography, and improperly configuring TLS. As such, our investigation demonstrates the value of applying security analysis at SDK granularity to prevent widespread deployment of vulnerable code.
Payment Service Providers (PSPs) provide software development toolkits (SDKs) for integrating complex payment processing code into applications. Security weaknesses in payment SDKs can impact thousands of applications. In this work, we propose AARDroid for statically assessing payment SDKs against OWASP’s MASVS industry standard for mobile application security. In creating AARDroid, we adapted application-level requirements and program analysis tools for SDK-specific analysis, tailoring dataflow analysis for SDKs using domain-specific ontologies to infer the security semantics of application programming interfaces (APIs). We apply AARDroid to 50 payment SDKs and discover security weaknesses including saving unencrypted credit card information to files, use of insecure cryptographic primitives, insecure input methods for credit card information, and insecure use of WebViews. These results demonstrate the value of applying security analysis at the SDK granularity to prevent the widespread deployment of insecure code.
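To make the ontology idea concrete, here is a minimal Python sketch of how a domain-specific ontology might map Android API signatures to payment-relevant security semantics for a dataflow analyzer. The API entries and labels are illustrative assumptions, not AARDroid's actual rule set.

    # Illustrative ontology: Android API signatures mapped to payment-
    # relevant security semantics for a dataflow analyzer. The entries and
    # labels are assumed examples, not AARDroid's actual rule set.
    ONTOLOGY = {
        # Sources: APIs that can yield sensitive payment data.
        "android.widget.EditText#getText": {"role": "source", "label": "card_input"},
        # Sinks: APIs where tainted data escaping indicates a weakness.
        "java.io.FileOutputStream#write": {"role": "sink", "label": "file_storage"},
        "android.util.Log#d": {"role": "sink", "label": "logging"},
    }

    def classify_flow(source_api, sink_api):
        # Report a dataflow that connects a sensitive source to a risky sink.
        src, snk = ONTOLOGY.get(source_api), ONTOLOGY.get(sink_api)
        if src and snk and src["role"] == "source" and snk["role"] == "sink":
            return "{} -> {}".format(src["label"], snk["label"])
        return None

    # Card input written to a file would be flagged:
    print(classify_flow("android.widget.EditText#getText",
                        "java.io.FileOutputStream#write"))  # card_input -> file_storage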
Zigbee network security relies on symmetric cryptography based on a pre-shared secret. In the current Zigbee protocol, the network coordinator creates a network key while establishing a network. The coordinator then shares the network key securely, encrypted under the pre-shared secret, with devices joining the network to ensure the security of future communications among devices through the network key. The pre-shared secret therefore needs to be installed in millions of devices or more prior to deployment, and thus will inevitably be leaked, enabling attackers to compromise the confidentiality and integrity of the network. To improve the security of Zigbee networks, we propose a new certificate-less Zigbee joining protocol that leverages low-cost public-key primitives. The new protocol has two components. The first integrates Elliptic Curve Diffie-Hellman key exchange into the existing association request/response messages, using the resulting key both for link-to-link communication and for encryption of the network key to enhance the privacy of user devices. The second improves the security of the installation code, a new joining method introduced in Zigbee 3.0 for enhanced security, by using public key encryption. We analyze the security of our proposed protocol using the formal verification methods provided by ProVerif, and evaluate the efficiency and effectiveness of our solution with a prototype built on an open-source software and hardware stack. The new protocol does not introduce extra messages, and its overhead is as low as 3.8% on average for the join procedure.
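As a rough illustration of the first component, the sketch below (Python, using the cryptography package) shows an ephemeral ECDH exchange whose derived key wraps the network key. The curve, KDF, and AEAD choices are assumptions for illustration and may differ from the paper's exact parameters.

    # Sketch of the proposed join flow: an ephemeral ECDH exchange
    # piggybacked on the association request/response, with the derived
    # key wrapping the network key. Curve, KDF, and AEAD below are
    # illustrative assumptions. Requires the 'cryptography' package.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_link_key(shared_secret):
        # Derive a 128-bit link key (Zigbee uses AES-128) from the ECDH secret.
        return HKDF(algorithm=hashes.SHA256(), length=16,
                    salt=None, info=b"zigbee-join").derive(shared_secret)

    # Each side generates an ephemeral key pair; the public keys travel
    # inside the existing association request/response messages.
    joiner_priv = ec.generate_private_key(ec.SECP256R1())
    coord_priv = ec.generate_private_key(ec.SECP256R1())

    # Both sides compute the same shared secret, hence the same link key.
    coord_key = derive_link_key(coord_priv.exchange(ec.ECDH(), joiner_priv.public_key()))
    joiner_key = derive_link_key(joiner_priv.exchange(ec.ECDH(), coord_priv.public_key()))

    # The coordinator wraps the network key under the derived link key
    # instead of a pre-shared secret baked into millions of devices.
    network_key, nonce = os.urandom(16), os.urandom(12)
    wrapped = AESGCM(coord_key).encrypt(nonce, network_key, None)

    # The joiner unwraps it with its independently derived copy of the key.
    assert AESGCM(joiner_key).decrypt(nonce, wrapped, None) == network_key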
Modern smartphone platforms implement permission-based models to protect access to sensitive data and system resources. However, apps can circumvent the permission model and gain access to protected data without user consent by using both covert and side channels. Side channels present in the implementation of the permission system allow apps to access protected data and system resources without permission, whereas covert channels enable communication between two colluding apps so that one app can share its permission-protected data with another app lacking those permissions. Both pose threats to user privacy.
In this work, we make use of our infrastructure that runs hundreds of thousands of apps in an instrumented environment. This testing environment includes mechanisms to monitor apps' runtime behaviour and network traffic. We look for evidence of side and covert channels being used in practice by searching for sensitive data sent over the network that the sending app lacked permission to access. We then reverse engineer the apps and third-party libraries responsible for this behaviour to determine how the unauthorized access occurred. We also use software fingerprinting methods to measure the static prevalence of the techniques we discover among other apps in our corpus.
Using this testing environment and method, we uncovered a number of side and covert channels in active use by hundreds of popular apps and third-party SDKs to obtain unauthorized access to both unique identifiers and geolocation data. We have responsibly disclosed our findings to Google and received a bug bounty for our work.
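The following Python sketch illustrates the detection idea: flag outbound payloads containing a permission-protected identifier the app could not have read legitimately, checking common encodings because leaked values are often obfuscated. The identifiers and transforms are hypothetical examples, not the instrumentation actually used.

    # Hypothetical detection sketch: flag outbound payloads that contain
    # a permission-protected identifier, in plaintext or in common
    # obfuscated forms. Identifier values here are invented examples.
    import base64
    import hashlib

    def transforms(value):
        # Yield the identifier in plaintext and common obfuscated forms.
        raw = value.encode()
        yield raw
        yield base64.b64encode(raw)
        for algo in (hashlib.md5, hashlib.sha1, hashlib.sha256):
            yield algo(raw).hexdigest().encode()

    def find_leaks(payload, identifiers):
        # Names of monitored identifiers appearing in a captured payload.
        return [name for name, value in identifiers.items()
                if any(form in payload for form in transforms(value))]

    # Identifiers monitored for one test device (hypothetical values).
    DEVICE_IDS = {"imei": "356938035643809", "mac": "02:00:00:aa:bb:cc"}
    print(find_leaks(b"id=356938035643809&os=10", DEVICE_IDS))  # ['imei']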
Many cybersecurity breaches occur due to users not following security regulations, chief among them regulations pertaining to what might be termed hygiene---including applying software patches to operating systems, updating software applications, and maintaining strong passwords.
We capture cybersecurity expectations on users as norms. We empirically investigate the effectiveness of sanctioning mechanisms in promoting compliance with those norms, as well as the detrimental effect of sanctions on users' ability to complete their work. We do so by developing a game that emulates the decision making of workers in a research lab.
We find that, relative to group sanctions, individual sanctions are more effective in achieving compliance and less detrimental to users' ability to complete their work.
Our findings have implications for workforce training in cybersecurity.
Extended abstract
This work takes a novel approach to classifying the behavior of devices by exploiting the single-purpose nature of IoT devices and analyzing the complexity and variance of their network traffic. We develop a formalized measurement of complexity for IoT devices, and use this measurement to precisely tune an anomaly detection algorithm for each device. We postulate that IoT devices with low complexity yield high confidence in their behavioral model and have a correspondingly more precise decision boundary on their predicted behavior. Conversely, complex general-purpose devices have lower confidence and a more generalized decision boundary. We show that there is a positive correlation between our complexity measure and the number of outliers found by an anomaly detection algorithm. By tuning this decision boundary based on device complexity, we are able to build a behavioral framework for each device that reduces false-positive outliers. Finally, we propose an architecture that uses this tuned behavioral model to rank each flow on the network and calculate a trust-score ranking of all traffic to and from a device, allowing the network to autonomously make access control decisions on a per-flow basis.
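A minimal sketch of the tuning idea, under stated assumptions: Shannon entropy over a device's flow destinations stands in for the formal complexity measure, and it scales the contamination parameter of scikit-learn's IsolationForest. The entropy-to-boundary mapping below is illustrative, not the paper's formalization.

    # Sketch (Python, numpy + scikit-learn): entropy of a device's flow
    # destinations stands in for the formal complexity measure and scales
    # the anomaly detector's decision boundary. The mapping from entropy
    # to contamination is an illustrative assumption.
    from collections import Counter
    import numpy as np
    from sklearn.ensemble import IsolationForest

    def complexity(flow_destinations):
        # Shannon entropy (bits) of the destination distribution.
        counts = np.array(list(Counter(flow_destinations).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def tuned_detector(flow_features, flow_destinations):
        # Single-purpose (low-entropy) devices get a strict boundary that
        # flags small deviations; complex devices get a looser one to
        # reduce false-positive outliers.
        h = complexity(flow_destinations)
        contamination = float(np.clip(0.1 / (1.0 + h), 0.01, 0.1))
        return IsolationForest(contamination=contamination).fit(flow_features)

    # Hypothetical per-flow features: bytes transferred, duration, port.
    X = np.array([[1200, 0.5, 443], [1180, 0.4, 443], [90000, 12.0, 8080]])
    model = tuned_detector(X, ["hub.example.com", "hub.example.com", "cdn.example.net"])
    print(model.predict(X))  # -1 marks flows outside the tuned boundary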
In the era of an ever-growing number of smart devices, fraudulent practices through phishing websites have become an increasingly severe threat to modern computers and internet security. These websites are designed to steal users' personal information and spread over the internet without the user's knowledge. They give users a false impression of authenticity by mirroring real, trusted web pages, which leads to the loss of important user credentials. Detecting such fraudulent websites is therefore essential. In this paper, various classifiers are considered, and ensemble classifiers are found to predict with the highest efficiency. The underlying question was whether a combined classifier model performs better than a single classifier model, yielding better efficiency and accuracy. For the experiments, three meta classifiers, namely AdaBoostM1, Stacking, and Bagging, are compared. We find that meta classifiers built by combining simple classifiers outperform the simple classifiers alone.
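The comparison can be sketched with scikit-learn's counterparts of the three meta classifiers. A synthetic dataset stands in for the phishing-website feature set actually used, so the numbers printed are not the paper's results.

    # Sketch (Python, scikit-learn): the three meta classifiers versus a
    # single base classifier, scored by cross-validation on a synthetic
    # stand-in for the phishing feature set.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                                  StackingClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

    models = {
        "single tree": DecisionTreeClassifier(random_state=0),
        "AdaBoostM1": AdaBoostClassifier(random_state=0),
        "Bagging": BaggingClassifier(random_state=0),
        "Stacking": StackingClassifier(
            estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                        ("lr", LogisticRegression(max_iter=1000))],
            final_estimator=LogisticRegression(max_iter=1000)),
    }

    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print("{}: {:.3f}".format(name, acc))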
Modern large-scale technical systems often undergo iterative changes to their behaviour while their quality must remain validated, which is difficult to achieve fully with traditional testing. Regression verification is a powerful tool for the formal correctness analysis of software-driven systems. By proving that a new revision of the software behaves like the original version, some of the trust that the old software and system earned during validation processes or operational history can be inherited by the new revision. This trust inheritance through formal analysis relies on a number of implicit assumptions that are not self-evident yet easy to miss; a misunderstood regression verification process can therefore induce a false sense of safety. This paper points out hidden, implicit assumptions of regression verification in the context of cyber-physical systems and makes them explicit using practical examples. Making the trust inheritance analysis explicit clarifies for engineers the extent of the trust that regression verification provides and consequently helps them apply this formal technique to system validation.
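A toy example conveys the core step of regression verification: asking a solver whether any input makes the revised code disagree with the original. The Z3 query below is a miniature under stated assumptions, and it also illustrates the paper's point that such a proof covers only the modelled interface.

    # Toy equivalence query (Python, 'z3-solver' package): prove the
    # revised function behaves like the original for every 32-bit input.
    # Real regression verification handles far richer behaviour; note the
    # implicit assumption that this interface is all that matters.
    from z3 import BitVec, Solver, unsat

    x = BitVec("x", 32)
    old_version = x * 2   # original implementation
    new_version = x << 1  # revised implementation

    # Ask for any input on which the two revisions disagree; 'unsat'
    # means they are observationally equivalent on this interface.
    s = Solver()
    s.add(old_version != new_version)
    if s.check() == unsat:
        print("revision equivalent to original")
    else:
        print("behavioural difference at x =", s.model()[x])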
Older adults (65+) are becoming primary users of emerging smart systems, especially in health care. However, these technologies are often not designed for older users and can pose serious privacy and security concerns due to their novelty, complexity, and propensity to collect and communicate vast amounts of sensitive information. Efforts to address such concerns must build on an in-depth understanding of older adults' perceptions and preferences about data privacy and security for these technologies, and must account for variance in physical and cognitive abilities. In semi-structured interviews with 46 older adults, we identified a range of complex privacy and security attitudes and needs specific to this population, along with common threat models, misconceptions, and mitigation strategies. Our work adds depth to current models of how older adults' limited technical knowledge, experience, and age-related declines in ability amplify vulnerability to certain risks; we found that health, living situation, and finances play a notable role as well. We also found that older adults often experience usability issues or technical uncertainties in mitigating those risks -- and that managing privacy and security concerns frequently consists of limiting or avoiding technology use. We recommend educational approaches and usable technical protections that build on seniors' preferences.
Intelligent voice assistants (IVAs) and other voice-enabled devices already form an integral component of the Internet of Things and will continue to grow in popularity. As their capabilities evolve, they will move beyond relying on the wake-words today’s IVAs use, engaging instead in continuous listening. Though potentially useful, the continuous recording and analysis of speech can pose a serious threat to individuals’ privacy. Ideally, users would be able to limit or control the types of information such devices have access to. But existing technical approaches are insufficient for enforcing any such restrictions. To begin formulating a solution, we develop a systematic methodology for studying continuous-listening applications and survey architectural approaches to designing a system that enhances privacy while preserving the benefits of always-listening assistants.
According to the theory of contextual integrity (CI), privacy norms prescribe information flows with reference to five parameters — sender, recipient, subject, information type, and transmission principle. Because privacy is grasped contextually (e.g., health, education, civic life, etc.), the values of these parameters range over contextually meaningful ontologies — of information types (or topics) and actors (subjects, senders, and recipients), in contextually defined capacities. As an alternative to predominant approaches to privacy, which were ineffective against novel information practices enabled by IT, CI was able both to pinpoint sources of disruption and provide grounds for either accepting or rejecting them. Mounting challenges from a burgeoning array of networked, sensor-enabled devices (IoT) and data-ravenous machine learning systems, similar in form though magnified in scope, call for renewed attention to theory. This Article introduces the metaphor of a data (food) chain to capture the nature of these challenges. With motion up the chain, where data of higher order is inferred from lower-order data, the crucial question is whether privacy norms governing lower-order data are sufficient for the inferred higher-order data. While CI has a response to this question, a greater challenge comes from data primitives, such as digital impulses of mouse clicks, motion detectors, and bare GPS coordinates, because they appear to have no meaning. Absent a semantics, they escape CI’s privacy norms entirely.
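The five-parameter flow description lends itself to a simple data structure. The Python sketch below encodes a flow and checks it against one contextual norm; the health-context norm is an invented illustration, not an example from the Article.

    # Minimal encoding of a CI flow and one invented contextual norm.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        sender: str
        recipient: str
        subject: str
        info_type: str
        transmission_principle: str

    def conforms(flow):
        # Invented health-context norm: a physician may share a patient's
        # diagnosis with a specialist, with the patient's consent.
        return (flow.sender == "physician" and flow.recipient == "specialist"
                and flow.info_type == "diagnosis"
                and flow.transmission_principle == "with consent")

    ok = Flow("physician", "specialist", "patient", "diagnosis", "with consent")
    bad = Flow("physician", "advertiser", "patient", "diagnosis", "for profit")
    print(conforms(ok), conforms(bad))  # True False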