Biblio

Filters: Keyword is bots
2023-04-14
Umar, Mohammad, Ayyub, Shaheen.  2022.  Intrinsic Decision based Situation Reaction CAPTCHA for Better Turing Test. 2022 International Conference on Industry 4.0 Technology (I4Tech). :1–6.
Modern web security must guard against fraudulent activity. Attackers build programs that interact with web pages automatically to breach data or flood web servers with junk entries until they hang. CAPTCHA, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart, is a defense that identifies bots and denies machine-based programs access. CAPTCHAs have evolved through several forms, including distorted text, picture recognition, math solving, and game-based challenges. Game-based Turing tests are popular today, but they can be cracked in several ways because the games do not require genuine reasoning, so an intrinsic CAPTCHA is needed. The proposed system is based on an Intrinsic Decision based Situation Reaction Challenge and better separates humans from bots through intrinsic problems: humans are well equipped to deal with real-life situations, whereas machines are poor at understanding a situation or how to solve it. The system therefore poses simple situational challenges that are easy for a human, who needs only common sense and a few seconds, but nearly impossible for a bot.
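The paper does not publish an implementation, but the core mechanic can be illustrated with a minimal sketch: serve a common-sense situational question and accept only a sensible answer given within a short time window. The question bank, time limit, and class names below are illustrative assumptions, not the authors' system.

```python
# Minimal sketch of a situation-reaction challenge (illustrative only).
import random
import time

CHALLENGES = [
    # (situation prompt, set of accepted common-sense answers)
    ("It starts raining while you are walking outside. What do you open?",
     {"umbrella", "an umbrella"}),
    ("The room is dark and you want to read. What do you switch on?",
     {"light", "the light", "a lamp", "lamp"}),
]

class SituationCaptcha:
    def __init__(self, time_limit_s: float = 20.0):
        self.time_limit_s = time_limit_s
        self.prompt, self._answers = random.choice(CHALLENGES)
        self._issued_at = time.monotonic()

    def verify(self, response: str) -> bool:
        """Accept only a sensible answer given within the time limit."""
        in_time = time.monotonic() - self._issued_at <= self.time_limit_s
        return in_time and response.strip().lower() in self._answers
```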
2020-09-11
Shekhar, Heemany, Moh, Melody, Moh, Teng-Sheng.  2019.  Exploring Adversaries to Defend Audio CAPTCHA. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1155–1161.
CAPTCHA is a web-based authentication method used by websites to distinguish humans (valid users) from bots (attackers). Audio CAPTCHA is an accessible variant intended for visually impaired users, such as those who are color-blind, blind, or near-sighted. First, this paper analyzes how secure current audio CAPTCHAs are against attacks using machine learning (ML) and deep learning (DL) models. Each audio CAPTCHA consists of five, seven, or ten random digits [0-9] spoken one after another, with varying background noise throughout the length of the audio. If an ML or DL model correctly identifies all spoken digits, in the correct order of occurrence, for a single audio CAPTCHA, we consider that CAPTCHA broken and the attack successful. Throughout the paper, accuracy refers to the attack model's success at breaking audio CAPTCHAs; the higher the attack accuracy, the less secure the audio CAPTCHAs are. In our baseline experiments, attack models broke audio CAPTCHAs with no or medium background noise, for any number of spoken digits, with nearly 99% to 100% accuracy, whereas audio CAPTCHAs with high background noise were relatively more secure, with an attack accuracy of 85%. Second, we propose that adversarial example algorithms can be used to create a new kind of audio CAPTCHA that is more resilient to attacks: even after retraining the models on the new adversarial audio data, attack accuracy remained as low as 25% to 36%. Finally, we explore the benefits of creating adversarial audio CAPTCHAs with different algorithms such as the Basic Iterative Method (BIM) and DeepFool, and find that as long as the attacker obtains less than 45% of the samples from each kind of adversarial audio dataset, the defense succeeds at preventing attacks.
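As background for the defense the abstract describes, the Basic Iterative Method can be sketched as repeated signed-gradient steps on the waveform, projected back into a small L-infinity ball so the audio still sounds like the original digits. The sketch below assumes a PyTorch digit classifier and illustrative step sizes; it is not the authors' implementation.

```python
# Minimal BIM sketch for an audio waveform, assuming a PyTorch classifier
# that maps a batched waveform to digit logits (illustrative assumption).
import torch
import torch.nn as nn

def bim_audio(model: nn.Module, waveform: torch.Tensor, label: torch.Tensor,
              eps: float = 0.01, alpha: float = 0.001, steps: int = 20) -> torch.Tensor:
    """Return a perturbed waveform within an L-infinity ball of radius eps."""
    adv = waveform.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                      # one signed-gradient step
            adv = waveform + (adv - waveform).clamp(-eps, eps)   # project back into the eps-ball
            adv = adv.clamp(-1.0, 1.0)                           # keep a valid waveform
    return adv.detach()
```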
2020-09-04
Khan, Aasher, Rehman, Suriya, Khan, Muhammad U.S, Ali, Mazhar.  2019.  Synonym-based Attack to Confuse Machine Learning Classifiers Using Black-box Setting. 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1–7.
Twitter, the most popular content sharing platform, has given rise to automated accounts called "bots", and a majority of the users on Twitter are bots. Various machine learning (ML) algorithms are designed to detect bots, yet these ML-based models remain subject to vulnerabilities of their own. This paper exploits the vulnerabilities of ML algorithms through a black-box attack: an adversarial text sequence causes deep learning (DL) classifiers for bot detection to misclassify. The literature shows that ML models are vulnerable to attacks. The aim of this paper is to compromise the accuracy of ML-based bot detection algorithms by replacing original words in tweets with their synonyms. Our results show a 7.2% decrease in accuracy on bot tweets, so that bot tweets are classified as legitimate tweets.
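The attack the abstract describes, replacing words with synonyms and checking whether a black-box detector's label flips, can be sketched as follows. The WordNet synonym source, replacement probability, and `predict` callable are assumptions for illustration, not the paper's setup.

```python
# Minimal synonym-replacement sketch against a black-box bot detector
# (illustrative assumptions: WordNet via NLTK, random word selection).
import random
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

def synonyms(word: str) -> list[str]:
    """Collect single-word WordNet synonyms that differ from the original word."""
    cands = {
        lemma.name().replace("_", " ")
        for syn in wordnet.synsets(word)
        for lemma in syn.lemmas()
    }
    return [c for c in cands if c.lower() != word.lower() and " " not in c]

def perturb_tweet(tweet: str, replace_prob: float = 0.3, seed: int = 0) -> str:
    """Randomly swap words for synonyms to probe a black-box classifier."""
    rng = random.Random(seed)
    out = []
    for word in tweet.split():
        cands = synonyms(word)
        out.append(rng.choice(cands) if cands and rng.random() < replace_prob else word)
    return " ".join(out)

# Example probe, assuming a black-box detector `predict(text) -> "bot" | "human"`:
# original = "follow this link to win free followers now"
# if predict(original) == "bot" and predict(perturb_tweet(original)) == "human":
#     print("adversarial example found")
```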
2020-02-10
Carneiro, Lucas R., Delgado, Carla A.D.M., da Silva, João C.P..  2019.  Social Analysis of Game Agents: How Trust and Reputation can Improve Player Experience. 2019 8th Brazilian Conference on Intelligent Systems (BRACIS). :485–490.
Video games normally use Artificial Intelligence techniques to improve Non-Player Character (NPC) behavior and create a more realistic experience for players. However, purely rational behavior generally ignores social interactions between players and bots. A new framework for NPCs was therefore proposed that adds a social bias, mixing the default strategy of finding the best possible plays to win with an analysis that decides whether other players should be categorized as allies or foes. Trust and reputation models are used together to implement this demeanor. In this paper we discuss an implementation of this framework inside the game Settlers of Catan, for which new NPC agents were created. We also analyze the results obtained from simulations among agents and players to conclude how the use of trust and reputation in NPCs can create a better gaming experience.
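As an illustration of how trust (direct experience) and reputation (reports from other agents) might be combined to label players as allies or foes, a minimal sketch follows; the weighted-average update rules and thresholds are assumptions, not the framework's actual models.

```python
# Minimal sketch of an NPC social model combining trust and reputation
# (update rules and weights are illustrative assumptions).
from dataclasses import dataclass, field

@dataclass
class SocialModel:
    trust: dict[str, float] = field(default_factory=dict)       # player -> [0, 1]
    reputation: dict[str, float] = field(default_factory=dict)  # player -> [0, 1]
    learning_rate: float = 0.2
    ally_threshold: float = 0.6

    def record_interaction(self, player: str, cooperative: bool) -> None:
        """Move direct trust toward 1.0 after cooperation, toward 0.0 after betrayal."""
        current = self.trust.get(player, 0.5)
        target = 1.0 if cooperative else 0.0
        self.trust[player] = current + self.learning_rate * (target - current)

    def record_report(self, player: str, reported_score: float) -> None:
        """Blend a reputation report from another agent into the stored value."""
        current = self.reputation.get(player, 0.5)
        self.reputation[player] = 0.5 * current + 0.5 * reported_score

    def classify(self, player: str) -> str:
        """Weigh trust and reputation to decide ally versus foe."""
        score = 0.7 * self.trust.get(player, 0.5) + 0.3 * self.reputation.get(player, 0.5)
        return "ally" if score >= self.ally_threshold else "foe"
```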
2019-04-01
Usuzaki, S., Aburada, K., Yamaba, H., Katayama, T., Mukunoki, M., Park, M., Okazaki, N..  2018.  Interactive Video CAPTCHA for Better Resistance to Automated Attack. 2018 Eleventh International Conference on Mobile Computing and Ubiquitous Network (ICMU). :1–2.
A "Completely Automated Public Turing Test to Tell Computers and Humans Apart" (CAPTCHA) is widely used by online services to prevent bots from automatically obtaining large numbers of accounts. Interactive video CAPTCHAs that attempt to detect relay attacks by exploiting the delay introduced by communication relays have been proposed; however, these approaches remain insufficiently resistant to bots. We propose a CAPTCHA that combines resistance to automated and relay attacks. In our CAPTCHA, the user recognizes a moving object (the target) among a number of randomly appearing decoy objects and tracks it with the mouse cursor; the user passes the test by tracking the target for a certain time. Since the target moves quickly, the relay delay makes it difficult for a remote solver to break the CAPTCHA during a relay attack, and it is also difficult for a bot to track the target using image processing because the target looks the same as the decoys. We evaluated our CAPTCHA's resistance to relay and automated attacks. Our results show that, if the CAPTCHA's parameters are set to suitable values, a relay attack cannot be mounted economically and the false acceptance rate for bots can be reduced to 0.01% without affecting the human success rate.
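The pass/fail rule the abstract implies, that the cursor must stay close to the moving target for long enough, can be sketched as a simple server-side check. The sampling rate, radius, and duration below are illustrative assumptions rather than the parameters the authors evaluated.

```python
# Minimal sketch of a tracking check: the cursor must stay within `radius`
# pixels of the target for a sustained streak of samples (values illustrative).
from math import hypot

def passes_tracking_test(cursor_path, target_path, radius=30.0,
                         required_samples=150):
    """cursor_path and target_path are equally sampled lists of (x, y) points.

    Returns True if the cursor stayed within `radius` of the target for at
    least `required_samples` consecutive samples (e.g. 5 s at 30 Hz).
    """
    streak = 0
    for (cx, cy), (tx, ty) in zip(cursor_path, target_path):
        if hypot(cx - tx, cy - ty) <= radius:
            streak += 1
            if streak >= required_samples:
                return True
        else:
            streak = 0
    return False
```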
2017-12-20
Kumar, S. A., Kumar, N. R., Prakash, S., Sangeetha, K..  2017.  Gamification of internet security by next generation CAPTCHAs. 2017 International Conference on Computer Communication and Informatics (ICCCI). :1–5.

CAPTCHA is a type of challenge-response test used to ensure that a response is generated by a human and not by a computerized robot. CAPTCHAs are getting harder because the latest advanced pattern recognition and machine learning algorithms can solve the simpler ones, yet some hardening procedures make CAPTCHAs too difficult for humans to recognize. This paper addresses the problem with a next-generation, human-friendly mini-game CAPTCHA and quantifies the usability of CAPTCHAs.

2017-09-05
Thakar, Bhavik, Parekh, Chandresh.  2016.  Advance Persistent Threat: Botnet. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :143:1–143:6.

The growth of the internet era and the move of corporate dealings and communication online have introduced crucial security challenges in cyberspace. Statistics from recent large-scale attacks define a new class of threat to the online world, the advanced persistent threat (APT), capable of impacting the national security and economic stability of any country. Among APTs, the botnet is one of the most well-articulated and stealthy attacks used to perform cybercrime. Botnet owners and their criminal organizations continuously develop innovative ways to infect new targets, draw them into their networks, and exploit them. A botnet is a collection of compromised computers (bots) infected by automated software robots that interact to accomplish distributed tasks, running without human intervention, for illegal purposes. Botnets are mostly malicious in nature and allow cybercriminals to control infected machines remotely without the victims' knowledge. They use various techniques, communication protocols, and topologies in different stages of their lifecycle, and they can upgrade their methods at any time. Botnets are global in nature, and their goal is to steal or destroy valuable information from organizations as well as individuals. In this paper we present a survey of real-world botnets (APTs).