Bibliography
Every day, huge amounts of unstructured text are generated, mostly in the form of essays, research papers, patents, scholarly articles, and book chapters. Many plagiarism detection tools are being developed to reduce the stealing and plagiarizing of Intellectual Property (IP). Current tools mainly use string matching algorithms to detect text copied from another source. The drawback of such tools is their inability to detect plagiarism when the structure of a sentence is changed; replacement of keywords with their synonyms also goes undetected. This paper proposes a new method to detect such plagiarism using semantic knowledge graphs. The method uses Named Entity Recognition as well as semantic similarity between sentences to detect possible cases of plagiarism. Doubtful cases are visualized using semantic knowledge graphs for a thorough analysis of authenticity. Rules for active and passive voice have also been considered in the proposed methodology.
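The entry above does not name an implementation, so the following is only a minimal sketch of the NER-plus-similarity idea, assuming spaCy for entity extraction and a sentence-transformers model for semantic similarity; the model names and the 0.8 threshold are illustrative assumptions, not the paper's configuration.

import spacy
from sentence_transformers import SentenceTransformer, util

# NER and embedding models are illustrative choices, not the paper's.
nlp = spacy.load("en_core_web_sm")
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def looks_plagiarized(candidate: str, source: str, threshold: float = 0.8):
    """Flag a sentence pair whose meaning matches even when wording differs."""
    # Shared named entities suggest the sentences describe the same facts.
    shared = ({e.text.lower() for e in nlp(candidate).ents}
              & {e.text.lower() for e in nlp(source).ents})
    # Embedding similarity survives synonym swaps and voice changes,
    # the two cases string matching misses.
    score = util.cos_sim(encoder.encode(candidate), encoder.encode(source)).item()
    return score >= threshold and bool(shared), score

flagged, score = looks_plagiarized(
    "The committee was chaired by Dr. Rao in 2019.",
    "Dr. Rao chaired the committee in 2019.",
)
print(flagged, round(score, 3))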
Reconnaissance can be the longest phase of an attack, sometimes taking weeks or months. The black hat first makes use of passive information gathering techniques. Once the attacker has gathered sufficient information, they start scanning perimeter and internal network devices in search of open ports and related services. In this paper we measure the traffic generated and the time taken to complete specific tasks during active scanning with the nmap tool in the reconnaissance phase, and we propose strategies for dealing with large volumes of hosts while conserving both network traffic and task time.
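The entry does not list the exact nmap invocations measured; the sketch below is an assumed illustration of the traffic-versus-time trade-off, timing a default TCP SYN scan against a trimmed one (no DNS resolution, 100 most common ports, aggressive timing) over a hypothetical lab subnet. Both runs require root privileges for -sS.

import subprocess
import time

TARGET = "192.168.1.0/24"  # hypothetical lab subnet, not from the paper

def timed_scan(extra_args):
    """Run nmap with the given arguments and return elapsed wall time."""
    start = time.monotonic()
    subprocess.run(["nmap", *extra_args, TARGET], check=True)
    return time.monotonic() - start

# Baseline: default 1000-port SYN scan with DNS resolution.
baseline = timed_scan(["-sS"])

# Trimmed: skip DNS (-n), probe only the 100 most common ports,
# and use aggressive timing (-T4) to cut traffic and duration.
trimmed = timed_scan(["-sS", "-n", "--top-ports", "100", "-T4"])

print(f"baseline: {baseline:.1f}s, trimmed: {trimmed:.1f}s")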
Vehicular communication systems increase traffic efficiency and safety by allowing vehicles to share safety-related information and location-based services. Pseudonym schemes are the standard solutions providing driver/vehicle anonymity, whilst enforcing vehicle accountability in case of liability issues. State-of-the-art PKI-based pseudonym schemes present scalability issues, notably due to the centralized architecture of certificate-based solutions. The first Direct Anonymous Attestation (DAA)-based pseudonym scheme was introduced at VNC 2017, providing a decentralized approach to the pseudonym generation and update phases. The DAA-based construction leverages the properties of trusted computing, allowing vehicles to autonomously generate their own pseudonyms by using a (resource constrained) Trusted Hardware Module or Component (TC). This proposition, however, requires the TC to delegate part of the (heavy) pseudonym generation computations to the (more powerful) vehicle's On-Board Unit (OBU), introducing security and privacy issues in case the OBU becomes compromised. In this paper, we introduce a novel pseudonym scheme based on a variant of DAA, namely a pre-DAA-based pseudonym scheme. All secure computations in the pre-DAA pseudonym lifecycle are executed by the secure element, thus creating a secure enclave for pseudonym generation, update, and revocation. We instantiate vehicle-to-everything (V2X) with our pre-DAA solution, thus ensuring user anonymity and user-controlled traceability within the vehicular network. In addition, the pre-DAA-based construction transfers accountability from the vehicle to the user, thus complying with the many-to-many driver/vehicle relation. We demonstrate the efficiency of our solution with a prototype implementation on a standard Javacard (acting as a TC), showing that messages can be anonymously signed and verified in less than 50 ms.
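Pre-DAA signatures have no off-the-shelf library implementation, so the following sketch only mirrors the kind of per-message latency measurement the prototype reports, with ECDSA over P-256 (from the Python cryptography package) standing in for the pre-DAA scheme; the curve and payload are assumptions.

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# ECDSA stands in for pre-DAA, which no common library implements.
key = ec.generate_private_key(ec.SECP256R1())  # curve is an assumption
message = b"V2X safety beacon"                 # placeholder payload

start = time.monotonic()
signature = key.sign(message, ec.ECDSA(hashes.SHA256()))
sign_ms = (time.monotonic() - start) * 1000

start = time.monotonic()
key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
verify_ms = (time.monotonic() - start) * 1000

print(f"sign: {sign_ms:.2f} ms, verify: {verify_ms:.2f} ms")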
To ensure the accountability of a cloud environment, security policies may be provided as a set of properties to be enforced by cloud providers. However, due to the sheer size of clouds, it can be challenging to provide timely responses to all the requests coming from cloud users at runtime. In this paper, we design and implement a middleware, PERMON, as a pluggable interface to OpenStack for intercepting and verifying the legitimacy of user requests at runtime, leveraging our previous work on proactive security verification to improve efficiency. We describe the detailed implementation of the middleware and demonstrate its usefulness through a use case.
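PERMON's internals are not given in the entry; since OpenStack services are composed of WSGI middleware pipelines, the sketch below shows an assumed, minimal interceptor of that general shape. The policy check is a hypothetical placeholder, not PERMON's verification logic.

class RequestVerifier:
    """Minimal WSGI filter that vets requests before the wrapped service."""

    def __init__(self, app):
        self.app = app  # the wrapped OpenStack WSGI application

    def __call__(self, environ, start_response):
        method = environ.get("REQUEST_METHOD", "")
        path = environ.get("PATH_INFO", "")
        if not self._is_legitimate(method, path):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request rejected by security policy"]
        return self.app(environ, start_response)

    def _is_legitimate(self, method, path):
        # Hypothetical placeholder policy: block deletes on a sensitive prefix.
        return not (method == "DELETE" and path.startswith("/v2.1/servers"))


def filter_factory(global_conf, **local_conf):
    """paste.deploy-style factory so the filter can join a service pipeline."""
    def _factory(app):
        return RequestVerifier(app)
    return _factory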
This paper focuses on a performance comparison of intrusion detection between the DBN and SPELM algorithms. Researchers have used the SPELM algorithm to perform experiments in the areas of face recognition, pedestrian detection, and network intrusion detection in cyber security. The proposed State Preserving Extreme Learning Machine (SPELM) algorithm is used as a machine learning classifier and its performance is compared with the Deep Belief Network (DBN) algorithm on the NSL-KDD dataset. The NSL-KDD dataset has about four lakh (400,000) records, of which 40% were used for training and 60% for testing when calculating the performance of both algorithms. The experiments compared the accuracy, precision, recall, and computational time of the existing DBN algorithm with the proposed SPELM algorithm. The findings show better performance by SPELM: an accuracy of 93.20% against 52.8% for DBN, a precision of 69.492 against 66.836 for DBN, and a computational time of 90.8 seconds against 102 seconds for DBN.
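Neither SPELM nor DBN ships as a standard Python package, so the sketch below only shows how such a comparison could be scored, with a scikit-learn RandomForest as a stand-in classifier, synthetic data in place of NSL-KDD, and the entry's 40/60 train/test split.

import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NSL-KDD (which has 41 features per record).
X, y = make_classification(n_samples=4000, n_features=41,
                           n_informative=10, random_state=0)

# 40% training / 60% testing, matching the split described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.4, test_size=0.6, random_state=0)

clf = RandomForestClassifier(random_state=0)  # stand-in for SPELM/DBN
start = time.monotonic()
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
elapsed = time.monotonic() - start

print(f"accuracy:  {accuracy_score(y_test, pred):.4f}")
print(f"precision: {precision_score(y_test, pred):.4f}")
print(f"recall:    {recall_score(y_test, pred):.4f}")
print(f"time:      {elapsed:.1f}s")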
Software has become an essential component of modern life, but when software vulnerabilities threaten the security of users, new ways of analyzing software for security must be explored. Using the National Institute of Standards and Technology's Juliet Java Suite, containing thousands of examples of defective Java methods for a variety of vulnerabilities, a prototype tool was developed implementing an array of Long Short-Term Memory (LSTM) recurrent neural networks to detect vulnerabilities within source code. The tool employs various data preparation methods to be independent of coding style and to automate the process of extracting methods, labeling data, and partitioning the dataset. The result is a prototype command-line utility that generates an n-dimensional vulnerability prediction vector. The experimental evaluation using 44,495 test cases indicates that the tool can achieve an accuracy higher than 90% for 24 out of 29 different types of CWE vulnerabilities.
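The entry gives the model family and output shape but not the architecture; the sketch below is an assumed minimal Keras model of that shape, taking tokenized Java methods in and emitting a 29-dimensional per-CWE score vector (sigmoid, one independent score per vulnerability type). Vocabulary size and layer widths are placeholders.

import tensorflow as tf

VOCAB_SIZE = 5000  # placeholder token vocabulary size
SEQ_LEN = 200      # placeholder maximum tokens per method
NUM_CWES = 29      # vulnerability types evaluated in the entry above

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.LSTM(128),
    # One independent score per CWE: the n-dimensional prediction vector.
    tf.keras.layers.Dense(NUM_CWES, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["binary_accuracy"])
model.summary()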
Multicast distribution employs a many-to-many model, making it a more efficient way of delivering data than traditional one-to-one unicast distribution, which can benefit many applications such as media streaming. However, the lack of security features in its nature makes multicast technology much less popular in an open environment such as the Internet. Internet Service Providers (ISPs) take advantage of IP multicast's highly efficient data delivery to provide Internet Protocol Television (IPTV) to their users, but without full control over their networks, ISPs cannot collect revenue for the services they provide. The Secure Internet Group Management Protocol (SIGMP), an extension of the Internet Group Management Protocol (IGMP), and the Group Security Association Management Protocol (GSAM) have been proposed to enforce receiver access control at the network level of IP multicast. In this paper, we analyze the operational details and issues of both SIGMP and GSAM, and we also examine the performance of both protocols.
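SIGMP is not implemented in standard network stacks, so for orientation the sketch below shows the plain IGMP group join that SIGMP extends with receiver access control, issued from Python through the standard IP_ADD_MEMBERSHIP socket option; the group address and port are assumptions.

import socket
import struct

GROUP = "239.1.2.3"  # administratively scoped multicast group (assumed)
PORT = 5004          # assumed media port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group makes the kernel send an IGMP Membership Report;
# SIGMP's contribution is gating this step with access control.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)
print(f"received {len(data)} bytes from {addr}")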
The aim of this paper is to explore the performance of two well-known wave energy converters (WECs), namely the Floating Buoy Point Absorber (FBPA) and the Oscillating Surge (OS), in onshore and offshore locations. To achieve clean energy targets by reducing greenhouse gas emissions, the integration of renewable energy resources is continuously increasing all around the world. In addition to widespread renewable energy sources such as wind and solar photovoltaic (PV), wave energy extracted from the ocean is becoming more tangible day by day. A number of WEC devices are reported in the literature. However, further investigations are still needed to better understand the behaviors of the FBPA WEC and the OS WEC under irregular wave conditions in onshore and offshore locations. Note that, being surrounded by the Bay of Bengal, Bangladesh has huge scope for utilizing wave power. To this end, the FBPA WEC and the OS WEC are simulated using the typical onshore and offshore wave heights and wave periods of the coastal area of Bangladesh. Afterwards, the performances of the aforementioned two WECs are compared by analyzing their power output.
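The entry does not state its power model; as a worked illustration, the sketch below evaluates the standard deep-water wave energy flux P = rho * g^2 * Hs^2 * Te / (64 * pi), in watts per metre of wave crest, for assumed onshore and offshore wave parameters; the values are placeholders, not the paper's measurements for the Bay of Bengal.

import math

RHO = 1025.0  # seawater density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wave_power_kw_per_m(hs, te):
    """Significant wave height hs (m) and energy period te (s) -> kW/m."""
    return RHO * G**2 * hs**2 * te / (64 * math.pi) / 1000

# Placeholder wave parameters for the two siting scenarios.
for site, hs, te in [("onshore", 0.8, 4.5), ("offshore", 1.5, 6.0)]:
    print(f"{site}: {wave_power_kw_per_m(hs, te):.2f} kW/m")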
Phishing attacks have reached record volumes in recent years. Simultaneously, modern phishing websites are growing in sophistication by employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously-unseen .com domains) each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and timeliness of native blacklisting in major web browsers to gauge the effectiveness of protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure, which allows some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks (including those based on geolocation, device type, or JavaScript) were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has also become more consistent between desktop and mobile platforms, but work remains to be done by anti-phishing entities to ensure users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.
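The six filter implementations are not reproduced in the entry; the sketch below is an assumed, deliberately harmless mock of one cloaking filter of the kind tested, returning benign content to requests whose User-Agent resembles a crawler and a placeholder test page otherwise, to show why blacklist crawlers can miss what victims see.

from flask import Flask, request

app = Flask(__name__)

CRAWLER_MARKERS = ("bot", "crawl", "spider")  # assumed crawler signatures

@app.route("/")
def landing():
    ua = request.headers.get("User-Agent", "").lower()
    if any(marker in ua for marker in CRAWLER_MARKERS):
        # Security crawlers are shown benign content, hiding the test page.
        return "Welcome to an ordinary page."
    # Other visitors see the (placeholder) test page instead.
    return "Simulated test page used to measure blacklist response."

if __name__ == "__main__":
    app.run(port=8080)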
Threats to cyberspace are increasing. This paper tries to identify how extreme cybersecurity incidents occur, based on the scenario of a targeted attack through emails. Knowing how extreme cybersecurity incidents occur helps in identifying the key points at which they can be prevented. A model based on a systems thinking approach to understanding how communication influences entities and how tiny initiating events scale up into extreme events provides a condensed picture of cyberspace and its surrounding threats. By taking into consideration the cyberspace layers and the characteristics of cyberspace identified by this model, it predicts the most suitable risk mitigations.
As a cyber attack that leverages social engineering and other sophisticated techniques to steal sensitive information from users, the phishing attack has long been a critical threat to cyber security. Although researchers have proposed many countermeasures, phishing criminals eventually figure out circumventions, since such countermeasures require substantial manual feature engineering and cannot detect newly emerging phishing attacks well enough; this makes an efficient and effective phishing detection method an urgent need. In this work, we propose a novel phishing website detection approach that examines the Uniform Resource Locator (URL) of a website, which proves to be an effective and efficient detection approach. Specifically, our novel capsule-based neural network mainly includes several parallel branches wherein one convolutional layer extracts shallow features from URLs and the subsequent two capsule layers generate accurate feature representations of URLs from the shallow features and discriminate the legitimacy of URLs. The final output of our approach is obtained by averaging the outputs of all branches. Extensive experiments on a validated dataset collected from the Internet demonstrate that our approach can achieve competitive performance against other state-of-the-art detection methods while maintaining a tolerable time overhead.
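Keras has no built-in capsule layers, so the hedged sketch below approximates only the entry's branch-and-average structure: character-level URL input, one convolutional layer per branch, a dense stand-in where the two capsule layers would sit, and averaged branch outputs. All sizes are placeholders.

import tensorflow as tf

CHAR_VOCAB = 100  # printable URL characters (placeholder)
URL_LEN = 200     # padded URL length (placeholder)
N_BRANCHES = 3    # number of parallel branches (placeholder)

inputs = tf.keras.layers.Input(shape=(URL_LEN,))
embed = tf.keras.layers.Embedding(CHAR_VOCAB, 32)(inputs)

branch_outputs = []
for _ in range(N_BRANCHES):
    x = tf.keras.layers.Conv1D(64, 5, activation="relu")(embed)  # shallow features
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    x = tf.keras.layers.Dense(32, activation="relu")(x)  # capsule-layer stand-in
    branch_outputs.append(tf.keras.layers.Dense(1, activation="sigmoid")(x))

# Final legitimacy score: the average of all branch outputs.
outputs = tf.keras.layers.Average()(branch_outputs)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()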