Biblio
Enterprises around the globe have been searching for a way to securely empower Android™ devices for work, but have shied away from the Android platform due to ongoing fragmentation and security concerns. A wide range of vulnerabilities has been reported in Android smartphones since the Android Lollipop release. Smartphones can be hacked by installing a malicious application, visiting an infected website, receiving a crafted MMS, interacting with plug-ins, certificate forging, checksum collisions, inter-process communication (IPC) abuse, and more. To highlight this issue, a manual analysis of Android vulnerabilities is performed using data available in the National Vulnerability Database (NVD) and the Android Vulnerabilities website. This paper covers the vulnerabilities that put dual-persona support at risk in Android 5 and above, up to December 2017. In our security threat analysis, we have identified a comprehensive list of Android vulnerabilities, vulnerable Android versions, manufacturers, and information regarding complete and partial patches released. To date, no published research systematically presents all the vulnerabilities and a vulnerability assessment for the dual-persona feature of Android smartphones. The data provided in this paper will open avenues for future research and support a better Android security model for dual persona.
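As an illustration of the data-collection step behind such an analysis, the following sketch pulls Android-related CVE records from the public NVD REST API (v2.0). The endpoint and response structure follow the documented NVD API; the paging values and the simple keyword filter are illustrative assumptions, not the paper's actual methodology.

```python
# Sketch: fetch Android-related CVE entries from the NVD REST API v2.0.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_android_cves(start_index=0, results_per_page=50):
    """Fetch one page of CVE entries whose descriptions mention Android."""
    params = {
        "keywordSearch": "android",
        "startIndex": start_index,
        "resultsPerPage": results_per_page,
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

for item in fetch_android_cves():
    cve = item["cve"]
    description = cve["descriptions"][0]["value"]
    print(cve["id"], description[:80])
```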
Nowadays, there is a flood of harmful content such as naked body photos and child pornography, which is desensitizing people. In addition, drugs are distributed through unknown dark channels. In particular, most of these transactions are made through the Deep Web. "Deep Web refers to an encrypted network that is not detected on search engines like Google. Users must use Tor to visit sites on the dark web" [4]. In other words, the Dark Web relies on Tor's encryption client; users can therefore visit sites on the Dark Web without knowing who operates them. In this paper, we propose a key idea based on the current status of such crimes, and a crime-information visualization system for the Deep Web has been developed. The status of the Deep Web is analyzed and the data is visualized using Java. It is expected that the program will enable more efficient management and monitoring of crime on hidden parts of the Web such as the Deep Web and torrent networks.
An increasing number of Internet-scale applications, such as video streaming, incur huge amounts of wide-area traffic. Such traffic, carried over the unreliable Internet without bandwidth guarantees, suffers unpredictable network performance. This outcome is unappealing to application providers. Fortunately, Internet giants like Google and Microsoft are increasingly deploying private wide area networks (WANs) to connect their global datacenters. These high-speed private WANs are reliable and can provide predictable network performance. In this paper, we propose a new type of service, inter-datacenter network as a service (iDaaS), where traditional application providers can reserve bandwidth from those Internet giants to guarantee their wide-area traffic. Specifically, we design a bandwidth trading market among multiple iDaaS providers and application providers, and concentrate on the essential bandwidth pricing problem. The challenging issue involved is that the bandwidth price of each iDaaS provider is influenced not only by other iDaaS providers but also by the application providers. To address this issue, we characterize the interaction between iDaaS providers and application providers using a Stackelberg game model, and analyze the existence and uniqueness of the equilibrium. We further present an efficient bandwidth pricing algorithm that blends the advantages of a geometrical Nash bargaining solution and the demand segmentation method. For comparison, we present two bandwidth reservation algorithms, in which each iDaaS provider's bandwidth is reserved in a weighted fair manner and a max-min fair manner, respectively. Finally, we conduct comprehensive trace-driven experiments. The evaluation results show that our proposed algorithms not only ensure the revenue of iDaaS providers, but also provide bandwidth guarantees for application providers at a lower unit bandwidth price.
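As a minimal illustration of one of the two baseline reservation schemes, the sketch below implements a standard progressive-filling max-min fair allocation. The capacity and demand figures are made-up examples; the paper's actual algorithms, including the Stackelberg-based pricing, are not reproduced here.

```python
# Sketch: max-min fair bandwidth reservation via progressive filling.
def max_min_fair(capacity, demands):
    """Allocate `capacity` among `demands` (Mbps) in a max-min fair way."""
    allocation = {app: 0.0 for app in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining and cap > 0:
        share = cap / len(remaining)              # equal share per unsatisfied app
        satisfied = [a for a, d in remaining.items() if d <= share]
        if not satisfied:
            for a in remaining:                   # no demand fits: split evenly
                allocation[a] += share
            break
        for a in satisfied:                       # grant full demand, recycle rest
            allocation[a] += remaining[a]
            cap -= remaining.pop(a)
    return allocation

print(max_min_fair(100, {"video": 80, "backup": 30, "web": 10}))
# -> {'video': 60.0, 'backup': 30.0, 'web': 10.0}
```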
Attacks on cloud-computing services are becoming more prevalent, with recent victims including Tesla, Aviva Insurance and SIM-card manufacturer Gemalto [1]. The risk posed to organisations by malicious insiders is becoming more widely recognised, and consequently many are now investing in hardware, software and new processes to try to detect these attacks. As with all attack vectors, there will always be those which are not known about and those which are known about but remain exceptionally difficult to detect, particularly in a timely manner. We believe that insider attacks are of particular concern in a cloud-computing environment, and that cloud-service providers should enhance their ability to detect them by means of indirect detection. We propose a combined attack-tree and kill-chain based method for identifying multiple indirect detection measures. Specifically, the use of attack trees enables us to encapsulate all detection opportunities for insider attacks in cloud-service environments. Overlaying the attack tree on top of a kill chain in turn facilitates indirect detection opportunities higher up the tree, as well as allowing the provider to determine how far an attack has progressed once suspicious activity is detected. We demonstrate the method by considering a specific type of insider attack, namely attempting to capture virtual machines in transit within a cloud cluster via a network tap; however, the process discussed here applies equally to all cloud paradigms.
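To make the idea concrete, here is a small sketch of an attack tree whose nodes are annotated with kill-chain stages and candidate indirect detection measures. The node names, stages, and detections are hypothetical examples, not the tree from the paper.

```python
# Sketch: an attack tree overlaid on kill-chain stages, each node carrying
# its indirect detection opportunities.
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    goal: str
    stage: str                                        # kill-chain stage
    detections: list = field(default_factory=list)    # indirect detection measures
    children: list = field(default_factory=list)

def detection_plan(node, depth=0):
    """Walk the tree, listing detection opportunities per kill-chain stage."""
    measures = ", ".join(node.detections) or "none"
    print("  " * depth + f"{node.goal} [{node.stage}] detect via: {measures}")
    for child in node.children:
        detection_plan(child, depth + 1)

root = AttackNode(
    goal="Capture VM in transit via network tap",
    stage="actions-on-objectives",
    detections=["unexpected link-level latency"],
    children=[
        AttackNode("Gain physical access to cluster switch", "delivery",
                   detections=["badge logs", "CCTV review"]),
        AttackNode("Identify VM migration schedule", "reconnaissance",
                   detections=["unusual queries to migration API"]),
    ],
)
detection_plan(root)
```

Detecting activity at a leaf then tells the provider both what to watch for higher up the tree and how far along the kill chain the attacker already is.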
Every day, huge amounts of unstructured text are generated. Most of this data is in the form of essays, research papers, patents, scholastic articles, book chapters, etc. Many plagiarism-detection tools are being developed to reduce the stealing and plagiarizing of Intellectual Property (IP). Current tools mainly use string-matching algorithms to detect copying of text from another source. The drawback of such tools is their inability to detect plagiarism when the structure of the sentence is changed; replacement of keywords with their synonyms also goes undetected. This paper proposes a new method to detect such plagiarism using semantic knowledge graphs. The method uses Named Entity Recognition as well as semantic similarity between sentences to detect possible cases of plagiarism. The doubtful cases are visualized using semantic knowledge graphs for thorough analysis of authenticity. Rules for active and passive voice have also been considered in the proposed methodology.
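A minimal sketch of the semantic-similarity step is shown below, using sentence embeddings to flag a passive-voice rewrite that string matching would miss. The model name and the 0.8 threshold are assumptions; the NER and knowledge-graph stages of the proposed method are omitted.

```python
# Sketch: flag sentence pairs as suspicious when their embeddings are close,
# even though the wording and voice differ.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

source  = "The committee approved the new policy after a long debate."
suspect = "After a long debate, the new policy was approved by the committee."

emb = model.encode([source, suspect], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()

if score > 0.8:   # threshold chosen for illustration only
    print(f"Possible plagiarism (cosine similarity = {score:.2f})")
```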
Most Web sites rely on resources hosted by third parties such as CDNs. Third parties may be compromised or coerced into misbehaving, e.g. delivering a malicious script or stylesheet. Unexpected changes to resources hosted by third parties can be detected with the Subresource Integrity (SRI) mechanism. The focus of SRI is on scripts and stylesheets; Web fonts cannot be secured with that mechanism under all circumstances. The first contribution of this paper is to evaluate the potential for attacks using malicious fonts. With an instrumented browser we find that (1) more than 95% of the top 50,000 Web sites of the Tranco top list rely on resources hosted by third parties, and that (2) only a small fraction employs SRI. Moreover, we find that more than 60% of the sites in our sample use fonts hosted by third parties, most of which are served by Google. The second contribution of the paper is a proof of concept of a malicious font, as well as a tool for automatically generating such a font, which targets security-conscious users who are used to verifying cryptographic fingerprints. Software vendors publish such fingerprints along with their software packages to allow users to verify their integrity. Due to incomplete SRI support for Web fonts, a third party could force a browser to load our malicious font. The font targets a particular cryptographic fingerprint and renders it as a different, desired fingerprint. This allows attackers to fool users into believing that they are downloading a genuine software package when they are actually downloading a maliciously modified version. Finally, we propose countermeasures that could be deployed to protect the integrity of Web fonts.
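For context, an SRI integrity value is simply the base64-encoded cryptographic digest of the resource, prefixed with the hash algorithm. The sketch below computes one; the file name is a placeholder.

```python
# Sketch: compute a Subresource Integrity (SRI) value for a local resource.
import base64
import hashlib

def sri_hash(path, algo="sha384"):
    """Return an SRI integrity value such as 'sha384-...' for a file."""
    with open(path, "rb") as f:
        digest = hashlib.new(algo, f.read()).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# The value is then placed in the integrity attribute, e.g.
#   <script src="lib.js" integrity="sha384-..." crossorigin="anonymous">
print(sri_hash("font.woff2"))
```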
A disaster is an unexpected event in a system's lifetime, which can be caused by nature or even by human error. Disaster recovery of information technology is an area of information security concerned with protecting data against undesirable events. It involves a set of procedures and tools for returning an organization to a state of normality after the occurrence of a disastrous event. Organizations therefore need to have a good disaster recovery plan in place. There are many strategies for traditional disaster recovery as well as for cloud-based disaster recovery. This paper focuses on using cloud-based disaster recovery strategies instead of traditional techniques, since cloud-based disaster recovery has proved its efficiency in restoring the continuity of services faster and at a lower cost than traditional approaches. The paper introduces a proposed model for virtual private disaster recovery on the cloud based on two metrics: the recovery time objective (RTO) and the recovery point objective (RPO). The proposed model has been evaluated by experts in the field of information technology, and the results show that the model addresses security and business continuity concerns, as well as enabling faster recovery from a disaster that could face an organization. The paper also highlights cloud computing services and illustrates the main benefits of cloud-based disaster recovery.
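The two metrics can be illustrated with a trivial check: the recovery time objective (RTO) bounds how long the service may stay down, and the recovery point objective (RPO) bounds how much data may be lost since the last backup. The targets and timestamps below are made-up examples.

```python
# Sketch: checking an actual recovery against RTO and RPO targets.
from datetime import datetime, timedelta

RTO = timedelta(hours=4)       # maximum tolerable downtime
RPO = timedelta(minutes=30)    # maximum tolerable data loss

disaster     = datetime(2024, 5, 1, 10, 0)
last_backup  = datetime(2024, 5, 1, 9, 45)
service_back = datetime(2024, 5, 1, 13, 30)

downtime  = service_back - disaster    # actual recovery time (3 h 30 min)
data_loss = disaster - last_backup     # actual recovery point (15 min)

print("RTO met:", downtime <= RTO)     # True
print("RPO met:", data_loss <= RPO)    # True
```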
The Web ecosystem has been evolving over the past years, and new Internet protocols, namely HTTP/2 over TLS/TCP and QUIC/UDP, are now used to deliver Web content. Similarly, CDNs (Content Delivery Networks) are deployed worldwide, caching content close to end-users to optimize web browsing quality. We present in this paper an analysis of the influence of Internet protocols and CDNs on the Top 10,000 Alexa websites, based on a 12-month measurement campaign (from April 2018 to April 2019) performed via our tool Web View [1]. Part of our measurements are made public on a monitoring website, showing the results for the Top 50 Alexa websites plus a few specific websites and 8 French websites suggested by the French agency in charge of regulating telecommunications. Our analysis of this long-term measurement campaign allows us to better understand the delivery of public websites. For instance, it shows that even if some argue that QUIC optimizes quality, this is not observed in real life, since QUIC is not yet widely deployed. Our method for analyzing CDN delivery in Web browsing allows us to evaluate its influence, which is significant since CDN usage can decrease web pages' loading time, on average by 43.1% with HTTP/2 and 38.5% with QUIC, when the same home page is requested a second time.
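The sketch below gives a rough idea of the repeat-request comparison described above, fetching the same page twice over HTTP/2 and reporting the load-time reduction. It uses the httpx library (pip install 'httpx[http2]'); the URL is a placeholder, and unlike the Web View tool it times only the HTML document rather than a full page load.

```python
# Sketch: compare first (cold) and second (likely CDN-cached) fetch times.
import time
import httpx

URL = "https://example.com/"

with httpx.Client(http2=True) as client:
    timings = []
    for attempt in ("first", "second"):
        start = time.perf_counter()
        resp = client.get(URL)
        timings.append(time.perf_counter() - start)
        print(f"{attempt} request: {resp.http_version}, {timings[-1] * 1000:.1f} ms")

reduction = (timings[0] - timings[1]) / timings[0] * 100
print(f"Load-time reduction on the repeat request: {reduction:.1f}%")
```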