Baden, Mathis, Ferreira Torres, Christof, Fiz Pontiveros, Beltran Borja, State, Radu.
2019.
Whispering Botnet Command and Control Instructions. 2019 Crypto Valley Conference on Blockchain Technology (CVCBT). :77—81.
Botnets are responsible for many large-scale attacks happening on the Internet. Their weak point, which is usually targeted to take down a botnet, is the command and control infrastructure: the foundation for the diffusion of the botmaster's instructions. Hence, botmasters employ stealthy communication methods to remain hidden and retain control of the botnet. Recent research has shown that blockchains can be leveraged for under-the-radar communication with bots; however, these methods incur fees for transaction broadcasting. This paper discusses the use of a novel technology, Whisper, for command and control instruction dissemination. Whisper allows a botmaster to control bots at virtually zero cost, while providing a peer-to-peer communication infrastructure, as well as privacy and encryption as part of its dark communication strategy. It is therefore well suited for bidirectional botnet command and control operations, and for creating a botnet that is very difficult to take down.
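A minimal sketch of the Whisper publish/subscribe primitive the paper builds on, assuming a local Geth node started with --shh and web3.py v5 (whose geth.shh namespace has since been deprecated); the key, topic, and payload values are illustrative:

```python
# Sketch: fee-less messaging over Whisper (shh) via web3.py v5's geth.shh
# namespace (deprecated in later versions). All values are illustrative.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# Symmetric key assumed to be shared out-of-band between the two parties.
key_id = w3.geth.shh.new_sym_key()
topic = "0x07678231"  # 4-byte topic both sides agree on

# Sender side: post an encrypted message; no transaction fee is paid.
w3.geth.shh.post({
    "symKeyID": key_id,
    "payload": Web3.toHex(text="ping"),
    "topic": topic,
    "ttl": 60,          # seconds the network keeps relaying the envelope
    "powTime": 2,       # max seconds to spend on proof-of-work
    "powTarget": 2.5,   # minimum PoW the network will relay
})

# Receiver side: poll a filter for envelopes matching the shared key and topic.
msg_filter = w3.geth.shh.new_message_filter({"symKeyID": key_id, "topics": [topic]})
for msg in w3.geth.shh.get_filter_messages(msg_filter):
    print(Web3.toText(msg["payload"]))
```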
Prokofiev, Anton O., Smirnova, Yulia S..
2019.
Counteraction against Internet of Things Botnets in Private Networks. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :301—305.
This article focuses on problems related to the detection and prevention of botnet threats in private Internet of Things (IoT) networks. Current data about IoT botnet activity on the Internet are provided in the paper. Results of an analysis of widespread botnets, as well as key characteristics of botnet behavior and activity on IoT devices, are also provided. Features of private IoT networks are determined. The paper describes the architectural features as well as the functioning principles of software systems for botnet prevention in private networks. Recommendations for the process of interaction between such a system and a user are suggested. Suggestions for future development of the approach are formulated.
Spradling, Matthew, Allison, Mark, Tsogbadrakh, Tsenguun, Strong, Jay.
2019.
Toward Limiting Social Botnet Effectiveness while Detection Is Performed: A Probabilistic Approach. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :1388—1391.
The prevalence of social botnets has increased public distrust of social media networks. Current methods exist for detecting bot activity on Twitter, Reddit, Facebook, and other social media platforms. Most of these detection methods rely upon observing user behavior for a period of time. Unfortunately, the behavior observation period allows time for a botnet to successfully propagate one or many posts before removal. In this paper, we model the post propagation patterns of normal users and social botnets. We prove that a botnet may exploit deterministic propagation actions to elevate a post even with a small botnet population. We propose a probabilistic model which can limit the impact of social media botnets until they can be detected and removed. While our approach maintains expected results for non-coordinated activity, coordinated botnets will be detected before propagation with high probability.
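The abstract does not spell out the probabilistic mechanism, so the toy Monte Carlo below only illustrates the general idea under the assumption that the platform counts each propagation action with probability p, so a small botnet can no longer deterministically cross an elevation threshold; all numbers are illustrative:

```python
# Toy Monte Carlo: a platform that counts each upvote only with probability p
# (a hypothetical instance of the probabilistic limiting idea) versus one
# that counts every vote deterministically.
import random

def trending(votes, threshold, p=1.0):
    """Return True if enough votes are counted to elevate the post."""
    counted = sum(1 for _ in range(votes) if random.random() < p)
    return counted >= threshold

BOTNET, THRESHOLD, TRIALS = 60, 50, 10_000
det = sum(trending(BOTNET, THRESHOLD, p=1.0) for _ in range(TRIALS))
prob = sum(trending(BOTNET, THRESHOLD, p=0.7) for _ in range(TRIALS))
print(f"deterministic counting: {det / TRIALS:.1%} of posts elevated")   # always
print(f"probabilistic counting: {prob / TRIALS:.1%} of posts elevated")  # rarely
```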
Shukla, Ankur, Katt, Basel, Nweke, Livinus Obiora.
2019.
Vulnerability Discovery Modelling With Vulnerability Severity. 2019 IEEE Conference on Information and Communication Technology. :1—6.
Web browsers are primary targets of attacks because of their extensive use and the fact that they interact with sensitive data. Vulnerabilities present in a web browser can pose serious risks to millions of users. Thus, it is pertinent to address these vulnerabilities to provide adequate protection for personally identifiable information. Past research has shown that few vulnerability discovery models (VDMs) characterize the vulnerability discovery process. In these models, severity, which is one of the most crucial properties, has not been considered. Vulnerabilities can be categorized into different levels based on their severity, and the discovery process of each kind of vulnerability differs from the others. Hence, it is essential to incorporate the severity of the vulnerabilities when modelling the vulnerability discovery process. This paper proposes a model to quantitatively assess the vulnerabilities present in software while taking their severity into consideration. The proposed model can be applied to approximate the number of vulnerabilities, the vulnerability discovery rate, the future occurrence of vulnerabilities, risk analysis, etc. Vulnerability data obtained from one of the major web browsers (Google Chrome) is used to examine the goodness-of-fit and predictive capability of the proposed model. Experimental results show that the proposed model can estimate the required information better than the existing VDMs.
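The abstract does not give the model's functional form; purely for illustration, the sketch below fits the classic Alhazmi-Malaiya logistic (AML) VDM, Ω(t) = B / (BCe^(-ABt) + 1), to synthetic cumulative counts for a single severity class, which is one simple way severity could be incorporated (fit each class separately):

```python
# Illustrative only: fit the AML logistic VDM to one severity class's
# cumulative vulnerability counts. Data here are synthetic, not Chrome's.
import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    """Cumulative vulnerabilities at time t under the AML logistic model."""
    return B / (B * C * np.exp(-A * B * t) + 1)

months = np.arange(1, 25, dtype=float)
rng = np.random.default_rng(0)
# Hypothetical high-severity counts generated from the model plus noise.
counts_high = aml(months, A=0.004, B=120, C=0.3) + rng.normal(0, 2, months.size)

(A, B, C), _ = curve_fit(aml, months, counts_high, p0=(0.01, 100, 0.5), maxfev=10_000)
print(f"estimated total high-severity vulnerabilities B ≈ {B:.0f}")
```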
Eskandarian, Saba, Cogan, Jonathan, Birnbaum, Sawyer, Brandon, Peh Chang Wei, Franke, Dillon, Fraser, Forest, Garcia, Gaspar, Gong, Eric, Nguyen, Hung T., Sethi, Taresh K., et al.
2019.
Fidelius: Protecting User Secrets from Compromised Browsers. 2019 IEEE Symposium on Security and Privacy (SP). :264—280.
Users regularly enter sensitive data, such as passwords, credit card numbers, or tax information, into the browser window. While modern browsers provide powerful client-side privacy measures to protect this data, none of these defenses prevent a browser compromised by malware from stealing it. In this work, we present Fidelius, a new architecture that uses trusted hardware enclaves integrated into the browser to enable protection of user secrets during web browsing sessions, even if the entire underlying browser and OS are fully controlled by a malicious attacker. Fidelius solves many challenges involved in providing protection for browsers in a fully malicious environment, offering support for integrity and privacy for form data, JavaScript execution, XMLHttpRequests, and protected web storage, while minimizing the TCB. Moreover, interactions between the enclave and the browser, the keyboard, and the display all require new protocols, each with their own security considerations. Finally, Fidelius takes into account UI considerations to ensure a consistent and simple interface for both developers and users. As part of this project, we develop the first open source system that provides a trusted path from input and output peripherals to a hardware enclave with no reliance on additional hypervisor security assumptions. These components may be of independent interest and useful to future projects. We implement and evaluate Fidelius to measure its performance overhead, finding that Fidelius imposes acceptable overhead on page load and user interaction for secured pages and has no impact on pages and page components that do not use its enhanced security features.
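As a toy model of the trusted-path idea (not Fidelius' actual enclave code), the sketch below seals a form field inside an "enclave" holding a session key the browser process never sees, so a compromised browser can only relay ciphertext; the local key generation stands in for remote attestation and key exchange with the server:

```python
# Toy model of the trusted-path idea: the "enclave" holds a session key
# negotiated with the remote server, so the browser only relays ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyEnclave:
    def __init__(self, session_key: bytes):
        self._aead = AESGCM(session_key)  # key never leaves the enclave

    def seal_form_field(self, name: str, keystrokes: str) -> bytes:
        nonce = os.urandom(12)
        # Bind the field name as associated data so a malicious browser
        # cannot swap ciphertexts between form fields.
        return nonce + self._aead.encrypt(nonce, keystrokes.encode(), name.encode())

session_key = AESGCM.generate_key(bit_length=128)  # stand-in for attested key exchange
enclave = ToyEnclave(session_key)
blob = enclave.seal_form_field("card_number", "4111111111111111")
print(blob.hex())  # all a compromised browser ever sees
```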
Azakami, Tomoka, Shibata, Chihiro, Uda, Ryuya, Kinoshita, Toshiyuki.
2019.
Creation of Adversarial Examples with Keeping High Visual Performance. 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT). :52—56.
The accuracy of image classification by convolutional neural networks (CNNs) now exceeds the ability of human beings and contributes to various fields. However, the improvement of image recognition technology deals a great blow to image-based security systems such as CAPTCHA. In particular, since character-string CAPTCHAs already add distortion and noise so that they cannot be read by computers, lowered human readability becomes a problem. Adversarial examples are a technique for producing images that intentionally cause a machine learning classifier to make wrong predictions. The best feature of this technique is that when human beings compare the original image with the adversarial example, they cannot perceive any difference in appearance. However, adversarial examples created with the conventional fast gradient sign method (FGSM) cannot reliably cause misclassification in strongly nonlinear networks such as CNNs. Osadchy et al. researched applying adversarial examples to CAPTCHAs and attempted to make a CNN misclassify them; however, they could not make the CNN misclassify character images. In this research, we propose a method that applies FGSM to character-string CAPTCHAs and makes a CNN misclassify them.
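For reference, a minimal FGSM sketch in PyTorch; `model`, `image`, and `label` are placeholders for a trained classifier and a CAPTCHA character sample, and the epsilon value is illustrative:

```python
# Minimal FGSM: perturb an image in the direction of the sign of the loss
# gradient, x_adv = x + eps * sign(grad_x J(theta, x, y)).
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image` via the fast gradient sign method."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```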
Sain, Mangal, Kim, Ki-Hwan, Kang, Young-Jin, Lee, Hoon Jae.
2019.
An Improved Two Factor User Authentication Framework Based on CAPTCHA and Visual Secret Sharing. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :171—175.
To prevent unauthorized access by adversaries, a strong authentication scheme is a vital security requirement in client-server inter-networking systems. These schemes must verify the legitimacy of users in real-time environments and establish a dynamic session key for subsequent communication. Recently, T. H. Chen and J. C. Huang proposed a two-factor authentication framework, claiming that the scheme is secure against most of the existing attacks. However, we show that Chen and Huang's scheme has critical weaknesses in real-time environments: it is prone to man-in-the-middle and information leakage attacks. Furthermore, the scheme does not provide two essential security services, namely user anonymity and session key establishment. In this paper, we present an enhanced user-participating authentication scheme which overcomes all the weaknesses of Chen et al.'s scheme and provides most of the essential security features.
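The paper's exact construction is not reproduced here; the sketch below shows the standard 2-out-of-2 visual secret sharing primitive that such frameworks combine with CAPTCHAs, where stacking the two shares (pixelwise OR) reveals a binary secret image:

```python
# 2-out-of-2 visual secret sharing for a binary image: each secret pixel
# expands to a 1x2 subpixel pair; white pixels get identical pairs in both
# shares, black pixels get complementary pairs, so the stack is fully black
# exactly where the secret is black. Either share alone is pure noise.
import numpy as np

rng = np.random.default_rng()

def make_shares(secret: np.ndarray):
    """secret: 2-D array of 0/1. Returns two (h, 2w) shares."""
    h, w = secret.shape
    choice = rng.integers(0, 2, size=(h, w))   # random pattern per pixel
    patterns = np.array([[0, 1], [1, 0]])      # the two subpixel pairs
    s1 = patterns[choice].reshape(h, 2 * w)
    s2 = patterns[choice ^ secret].reshape(h, 2 * w)
    return s1, s2

secret = rng.integers(0, 2, size=(4, 4))
s1, s2 = make_shares(secret)
stacked = s1 | s2  # overlaying transparencies = pixelwise OR
assert ((stacked.reshape(4, 4, 2).sum(axis=2) == 2) == (secret == 1)).all()
```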
Kim, Donghoon, Sample, Luke.
2019.
Search Prevention with Captcha Against Web Indexing: A Proof of Concept. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :219—224.
A website appears in search results based on web indexing conducted by a search engine bot (e.g., a web crawler). Some webpages do not want to be found easily because they include sensitive information. There are several methods to prevent web crawlers from indexing webpages in search engine databases. However, such webpages can still be indexed by malicious web crawlers. In this study, we explore a paradoxical new use of captchas for search prevention: captchas are used to prevent web crawlers from indexing by converting sensitive words to captchas. We have implemented a web-based captcha conversion tool based on our search prevention algorithm. We also describe a proof of concept with a web-based chat application modified to utilize our algorithm. We conducted an experiment to evaluate our idea on the Google search engine with two versions of webpages, one containing plain text and another containing sensitive words converted to captchas. The experimental results show that the sensitive words on the captcha version of the webpages cannot be found by Google's search engine, while those on the plain text versions can.
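A hedged sketch of the word-to-captcha substitution idea (function and file names are illustrative, not the authors' tool): each sensitive word is rendered as an image so a crawler indexing the HTML never sees the plain text; a real deployment would also add captcha-style distortion and noise:

```python
# Render a sensitive word as an image and substitute it into the HTML,
# leaving no machine-readable trace of the word for an indexer.
from PIL import Image, ImageDraw

def word_to_captcha(word: str, path: str) -> str:
    img = Image.new("RGB", (12 * len(word) + 10, 28), "white")
    ImageDraw.Draw(img).text((5, 5), word, fill="black")
    img.save(path)
    return f'<img src="{path}" alt="">'  # empty alt: nothing for the indexer

html = "Meeting password is {}".format(word_to_captcha("hunter2", "w0.png"))
print(html)  # the sensitive word now exists only as pixels
```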
Shekhar, Heemany, Moh, Melody, Moh, Teng-Sheng.
2019.
Exploring Adversaries to Defend Audio CAPTCHA. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1155—1161.
CAPTCHA is a web-based authentication method used by websites to distinguish between humans (valid users) and bots (attackers). Audio captcha is an accessible captcha meant for visually impaired users, such as color-blind, blind, or near-sighted users. Firstly, this paper analyzes how secure current audio captchas are against attacks using machine learning (ML) and deep learning (DL) models. Each audio captcha is made up of five, seven, or ten random digits [0-9] spoken one after the other, with varying background noise throughout the length of the audio. If the ML or DL model is able to correctly identify all spoken digits in their correct order of occurrence in a single audio captcha, we consider that captcha broken and the attack successful. Throughout the paper, accuracy refers to the attack model's success at breaking audio captchas; the higher the attack accuracy, the less secure the audio captchas are. In our baseline experiments, we found that attack models could break audio captchas with no or medium background noise, for any number of spoken digits, with nearly 99% to 100% accuracy, whereas audio captchas with high background noise were relatively more secure, with an attack accuracy of 85%. Secondly, we propose that the concepts of adversarial example algorithms can be used to create a new kind of audio captcha that is more resilient to attacks. We found that even after retraining the models on the new adversarial audio data, the attack accuracy remained as low as 25% to 36%. Lastly, we explore the benefits of creating adversarial audio captchas through different algorithms such as the Basic Iterative Method (BIM) and DeepFool. We found that as long as the attacker has fewer than 45% of the samples from each kind of adversarial audio dataset, the defense will be successful at preventing attacks.
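A sketch of the kind of per-digit attack pipeline evaluated here, assuming naive equal-length segmentation and a pre-trained classifier `clf` (both hypothetical): MFCC features are extracted per segment, and each segment is classified as a digit:

```python
# Per-digit audio captcha attack sketch: split the clip into as many segments
# as there are spoken digits, extract MFCC features, and classify each one.
import librosa
import numpy as np

def digit_features(path: str, n_segments: int) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    segments = np.array_split(y, n_segments)       # naive equal-length split
    feats = [librosa.feature.mfcc(y=s, sr=sr, n_mfcc=13).mean(axis=1)
             for s in segments]                    # one 13-dim vector per digit
    return np.stack(feats)

# X = digit_features("captcha.wav", n_segments=5)
# digits = clf.predict(X)  # attack succeeds only if every digit is correct, in order
```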
Shu, Yujin, Xu, Yongjin.
2019.
End-to-End Captcha Recognition Using Deep CNN-RNN Network. 2019 IEEE 3rd Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). :54—58.
With the development of the Internet, captcha technology has been widely used. Captcha technology is used to distinguish between humans and machines; the name stands for Completely Automated Public Turing test to tell Computers and Humans Apart. In this paper, an end-to-end deep CNN-RNN network model is constructed by studying captcha recognition technology, which realizes the recognition of 4-character text captchas. The CNN-RNN model first constructs a deep residual convolutional neural network based on the residual network structure to accurately extract the features of the input captcha picture. Then, through the constructed variant RNN network, that is, a two-layer GRU network, the deep internal features of the captcha are extracted, and finally the 4-character captcha is output as a sequence. The experimental results show that the end-to-end deep CNN-RNN network model performs well on different captcha datasets, achieving 99% accuracy. An experiment on a small dataset with only 4,000 training samples also shows an accuracy of 72.9% and a certain degree of generalization ability.
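A hedged Keras sketch of the architecture described (layer sizes are illustrative, and the paper's residual CNN is simplified here to plain convolutions): CNN features are reshaped into a width-wise sequence, passed through a two-layer GRU, and decoded by four per-character softmax heads:

```python
# Sketch of a CNN-RNN captcha recognizer: convolutional feature extractor,
# two GRU layers over the width dimension, and one softmax head per character.
from tensorflow.keras import layers, models

N_CLASSES = 36  # digits + letters, assumed alphabet

inp = layers.Input(shape=(60, 160, 1))                 # grayscale captcha image
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D(2)(x)                          # -> (15, 40, 64)
x = layers.Permute((2, 1, 3))(x)                       # width first: (40, 15, 64)
x = layers.Reshape((40, 15 * 64))(x)                   # width steps as a sequence
x = layers.GRU(128, return_sequences=True)(x)          # two-layer GRU, as described
x = layers.GRU(128)(x)
out = [layers.Dense(N_CLASSES, activation="softmax", name=f"char{i}")(x)
       for i in range(4)]                              # one head per character

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```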