Biblio
The present age of digital information has produced a heterogeneous online environment in which it is a formidable task for an ordinary user to locate the required online resources in a timely manner. Recommender systems were introduced to alleviate this information overload. However, the majority of recommendation algorithms focus on the accuracy of recommendations, neglecting other important aspects of a good recommendation such as diversity and serendipity. This results in low coverage, and long-tail items are often left out of the recommendations. In this paper, we present and explore a recommendation technique that ensures that diversity, accuracy and serendipity are all factored into the recommendations. The proposed algorithm performed comparatively well against other algorithms in the literature.
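The abstract does not define its evaluation measures; as a hedged illustration, two commonly used formulations of the aspects it names (diversity within a recommendation list, and catalog coverage including long-tail items) can be sketched as follows. The function names and the pairwise-dissimilarity formulation are assumptions, not taken from the paper.

```python
def intra_list_diversity(rec_list, dissim):
    """Average pairwise dissimilarity of the items in one recommendation list."""
    pairs = [(a, b) for i, a in enumerate(rec_list) for b in rec_list[i + 1:]]
    if not pairs:
        return 0.0
    return sum(dissim(a, b) for a, b in pairs) / len(pairs)

def catalog_coverage(all_rec_lists, catalog_size):
    """Fraction of the item catalog that appears in any user's recommendations."""
    recommended = set()
    for rec in all_rec_lists:
        recommended.update(rec)
    return len(recommended) / catalog_size
```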
This paper proposes a new DNA cryptographic technique, based on dynamic DNA encoding and an asymmetric cryptosystem, to increase the level of secrecy of data. The key idea is to split the plaintext into fixed-size chunks, to encrypt each chunk using an asymmetric cryptosystem, and finally to merge the ciphertexts of the chunks using dynamic DNA encoding. To generate the chunks, the characters of the plaintext are transformed into their equivalent ASCII values and split into finite groups. Each chunk is then encrypted with the asymmetric cryptosystem, and the ciphertext is transformed into its equivalent binary value, which is converted into DNA bases. Finally, to merge the chunks, a sufficient number of random strings is generated; the required number of random strings is determined through dynamic DNA encoding, generated using the Fibonacci series. The combined use of finite chunks, an asymmetric cryptosystem, random strings and dynamic DNA encoding increases the level of security of the data. To evaluate the encryption-decryption time requirements, an empirical analysis is performed employing the RSA, ElGamal and Paillier cryptosystems. The proposed technique is suitable for general-purpose cryptographic use.
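Two of the building blocks described above (binary-to-DNA conversion and the Fibonacci series that drives the dynamic encoding) can be sketched as follows. The 2-bit-to-base table is a conventional mapping assumed for illustration; the paper's actual encoding tables are not given in the abstract.

```python
# Assumed conventional mapping of 2-bit groups to DNA bases.
BASE_MAP = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}

def text_to_dna(plaintext):
    """ASCII values -> 8-bit binary -> DNA bases, two bits per base."""
    bits = ''.join(f'{ord(ch):08b}' for ch in plaintext)
    return ''.join(BASE_MAP[bits[i:i + 2]] for i in range(0, len(bits), 2))

def fibonacci(n):
    """First n Fibonacci numbers, used here to settle counts of random strings."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]
```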
Top-level domains play an important role in the domain name system, so close attention should be paid to their security. In this paper, we found many configuration anomalies of top-level domains by analyzing their resource records, which we obtained from the root name servers and from the authoritative servers of the top-level domains. By comparing these resource records, we observed anomalies such as: 8 servers shared by more than one hundred top-level domains; TTL or SERIAL fields of resource records that were inconsistent across the NS servers of the same top-level domain; and authoritative servers of top-level domains that were unreachable. These anomalies may affect the availability of top-level domains. We hope they draw top-level domain administrators' attention to the security of their domains.
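The inconsistency check described above reduces to comparing a field (e.g. the SOA SERIAL or a TTL) across the answers returned by each NS server of the same TLD; the fetching itself is out of scope here. A minimal sketch of that comparison, with an assumed dictionary shape:

```python
def inconsistent_field(records_by_server):
    """records_by_server: {server_name: field_value} for one TLD and one field.
    Returns True if any two authoritative servers disagree on the value."""
    return len(set(records_by_server.values())) > 1
```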
As a vital component of a variety of cyber attacks, malicious domains make their detection a hot topic in cyber security. Several recent techniques identify malicious domains through analysis of DNS data, because DNS data carries global information that attackers cannot easily manipulate. Attackers constantly recycle resources: they frequently change domain-IP resolutions and create new domains to avoid detection. As a result, multiple malicious domains are hosted on the same IPs and multiple IPs simultaneously host the same malicious domains, creating intrinsic associations among them. Labeled domains, traced back from the query history of all domains, can therefore be used to verify and uncover these associations. Graphs are a natural representation for such relationships, and many high-performance graph algorithms exist. The detection problem can thus be transformed into the graph mining task of inferring node reputation scores using improvements of the belief propagation algorithm: the higher a node's reputation score, the higher its inferred probability of being malicious. For demonstration, this paper proposes a malicious domain detection technique and evaluates it on a real-world dataset collected from DNS servers, from which a DNS graph is built. The proposed technique achieves high performance: an accuracy rate over 98.3%, with precision and recall rates of 99.1% and 98.6%, respectively. In particular, from a small set of labeled domains (legitimate and malicious), the technique can discover a large set of potentially malicious domains. The results indicate that the method is highly effective in detecting malicious domains.
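The idea of propagating scores from a few labeled domains across the domain-IP graph can be sketched with a much simpler scheme than the paper's belief propagation: unlabeled nodes repeatedly take the mean score of their neighbors while labeled seeds stay fixed. This averaging scheme only approximates the paper's method; the node names below are illustrative.

```python
def propagate(edges, seeds, iterations=10):
    """edges: (domain, ip) pairs; seeds: node -> score in [0, 1], 1 = known
    malicious. Unlabeled nodes start at 0.5 and converge toward their
    neighborhood's average score."""
    neighbors = {}
    for d, ip in edges:
        neighbors.setdefault(d, []).append(ip)
        neighbors.setdefault(ip, []).append(d)
    scores = {n: seeds.get(n, 0.5) for n in neighbors}
    for _ in range(iterations):
        new = {}
        for n, nbrs in neighbors.items():
            if n in seeds:                 # labeled nodes stay fixed
                new[n] = seeds[n]
            else:
                new[n] = sum(scores[m] for m in nbrs) / len(nbrs)
        scores = new
    return scores
```

With one seeded malicious domain, both the IP hosting it and the other domain on that IP drift toward a high (malicious) score, which is exactly the guilt-by-association effect the paper exploits.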
Speech recognition (SR) systems such as Siri or Google Now have become an increasingly popular human-computer interaction method, and have turned various systems into voice controllable systems (VCS). Prior work on attacking VCS shows that hidden voice commands that are incomprehensible to people can control the systems. Hidden voice commands, though "hidden", are nonetheless audible. In this work, we design a totally inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve inaudibility. By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and, more importantly, interpreted by the speech recognition systems. We validated DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to switch the phone to airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions, and suggest re-designing voice controllable systems to be resilient to inaudible voice command attacks.
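The core signal-processing step, amplitude-modulating a baseband voice signal onto an ultrasonic carrier so the microphone's nonlinearity demodulates it back, can be sketched as follows. The sample rate, carrier frequency and modulation depth are illustrative values, not taken from the paper.

```python
import math

def am_modulate(baseband, sample_rate=192_000, carrier_hz=25_000, depth=1.0):
    """Amplitude-modulate baseband samples (in [-1, 1]) onto an inaudible
    carrier above 20 kHz: output = (1 + depth*s) * cos(2*pi*fc*t)."""
    return [
        (1.0 + depth * s) * math.cos(2 * math.pi * carrier_hz * n / sample_rate)
        for n, s in enumerate(baseband)
    ]
```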
Objective: To evaluate the effectiveness of domain highlighting in helping users identify whether webpages are legitimate or spurious.
Background: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which website they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar.
Method: Two phishing detection experiments were conducted. Experiment 1 was run online: Participants judged the legitimacy of webpages in two phases. In phase one, participants were to judge the legitimacy based on any information on the webpage, whereas in phase two they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations.
Results: Participants differentiated the legitimate and fraudulent webpages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the domain highlighting.
Conclusion: Failure to detect many fraudulent webpages even when the domain was highlighted implies that users lacked knowledge of webpage security cues or how to use those cues.
Application: Potential applications include development of phishing-prevention training incorporating domain highlighting with other methods to help users identify phishing webpages.
Acoustic emanations of computer keyboards represent a serious privacy issue. As demonstrated in prior work, physical properties of keystroke sounds might reveal what a user is typing. However, previous attacks assumed relatively strong adversary models that are not very practical in many real-world settings. Such strong models assume: (i) the adversary's physical proximity to the victim, (ii) precise profiling of the victim's typing style and keyboard, and/or (iii) a significant amount of the victim's typed information (and its corresponding sounds) being available to the adversary. This paper presents and explores a new keyboard acoustic eavesdropping attack that involves Voice-over-IP (VoIP), called Skype & Type (S&T), while avoiding prior strong adversary assumptions. This work is motivated by the simple observation that people often engage in secondary activities (including typing) while participating in VoIP calls. As expected, VoIP software acquires and faithfully transmits all sounds, including emanations of pressed keystrokes, which can include passwords and other sensitive information. We show that one very popular VoIP software (Skype) conveys enough audio information to reconstruct the victim's input: keystrokes typed on the remote keyboard. Our results demonstrate that, given some knowledge of the victim's typing style and keyboard model, the attacker attains a top-5 accuracy of 91.7% in guessing a random key pressed by the victim. Furthermore, we demonstrate that S&T is robust to various VoIP issues (e.g., Internet bandwidth fluctuations and the presence of voice over keystrokes), thus confirming the feasibility of this attack. Finally, we show that S&T applies to other popular VoIP software, such as Google Hangouts.
Malware has become sophisticated, and organizations don't have a Plan B when standard lines of defense fail. These failures have devastating consequences for organizations, such as sensitive information being exfiltrated. A promising avenue for improving the effectiveness of behavioral-based malware detectors is to combine fast (usually not highly accurate) traditional machine learning (ML) detectors with high-accuracy, but time-consuming, deep learning (DL) models. The main idea is to place software receiving borderline classifications from traditional ML methods in an environment where uncertainty is added while the software is analyzed by the time-consuming DL models. The goal of the uncertainty is to rate-limit the actions of potential malware during deep analysis. In this paper, we describe Chameleon, a Linux-based framework that implements this uncertain environment. Chameleon offers two environments for its OS processes: standard, for software identified as benign by traditional ML detectors, and uncertain, for software that received borderline classifications from those detectors. The uncertain environment obstructs software execution through random perturbations applied probabilistically to selected system calls. We evaluated Chameleon with 113 applications from common benchmarks and 100 malware samples for Linux. Our results show that at a 10% threshold, intrusive and non-intrusive strategies caused approximately 65% of the malware to fail to accomplish its tasks, while approximately 30% of the analyzed benign software met with various levels of disruption (crashed or hampered). We also found that I/O-bound software was three times more affected by uncertainty than CPU-bound software.
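The "uncertain environment" mechanism, probabilistic perturbation of selected system calls, can be sketched as follows. The set of perturbed calls, the strategy names, and the decision function are illustrative assumptions; the paper's actual strategies and interception mechanism are not reproduced here.

```python
import random

# Assumed illustrative selection of system calls subject to perturbation.
PERTURBED_CALLS = {"read", "write", "connect"}

def maybe_perturb(syscall, threshold=0.10, rng=random.random):
    """Decide the fate of one system call in the uncertain environment:
    'pass' leaves it untouched; 'delay' and 'fail' are example perturbations,
    applied with the configured probability (e.g. the paper's 10% threshold)."""
    if syscall not in PERTURBED_CALLS or rng() >= threshold:
        return "pass"
    return random.choice(["delay", "fail"])
```

Injecting `rng` makes the probabilistic decision testable; in a real interposition layer this function would sit behind a syscall hook.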
Binary embedding is an effective approach for nearest neighbor (NN) search, as binary codes are storage efficient and fast to compute. It aims to convert real-valued signatures into binary codes while preserving the similarity of the original data. However, it greatly decreases the discriminability of the original signatures due to the huge loss of information. In this paper, we propose a novel method, double-bit quantization and weighting (DBQW), to solve this problem by mapping each dimension to a double-bit binary code and assigning different weights according to their spatial relationship. The proposed method is applicable to a wide variety of embedding techniques, such as SH, PCA-ITQ and PCA-RR. Experimental comparisons on two datasets show that DBQW for NN search can achieve remarkable improvements in query accuracy compared to the original binary embedding methods.
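The double-bit idea, using two thresholds per dimension so each dimension yields a 2-bit code rather than a single sign bit, can be sketched as follows. The threshold values and code assignment are illustrative assumptions; the paper's actual quantization boundaries and weighting scheme are not given in the abstract.

```python
def double_bit_quantize(vector, low=-0.5, high=0.5):
    """Map each real-valued dimension to a 2-bit code using two thresholds,
    retaining more structure than single-bit sign quantization."""
    codes = []
    for x in vector:
        if x < low:
            codes.append('00')
        elif x < high:
            codes.append('01')
        else:
            codes.append('11')
    return ''.join(codes)
```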
This work applies side channel analysis to hardware implementations of two CAESAR candidates, Keyak and Ascon. Both algorithms are cryptographic sponges with an iterated permutation. The algorithms share an s-box, so attacks on the non-linear step of the permutation are similar. This work presents the first results of a DPA attack on Keyak using traces generated by an FPGA. A new attack is crafted for a larger sensitive variable to reduce the number of traces required. It also presents and applies the first CPA attack on Ascon. Using a toy-sized threshold implementation of Ascon, we aim to give insight into the ordering of the steps of a permutation.
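The core CPA step, correlating hypothetical leakage with measured traces to rank key guesses, can be sketched as follows. The s-box, the Hamming-weight leakage model, and the 4-bit key space are placeholders for illustration, not the paper's Ascon or Keyak targets.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def best_key_guess(inputs, traces, sbox, key_space=range(16)):
    """Rank key candidates by how strongly their predicted Hamming-weight
    leakage correlates with the observed trace samples."""
    def hw(v):
        return bin(v).count('1')
    return max(
        key_space,
        key=lambda k: abs(pearson([hw(sbox[x ^ k]) for x in inputs], traces)),
    )
```

With noise-free synthetic traces built from a known key, the correct key attains the maximum correlation, which is the sanity check usually run before attacking real traces.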
Educational games have a potentially significant role to play in the increasing efforts to expand access to computer science education. Computational thinking is an area of particular interest, including the development of problem-solving strategies like divide and conquer. Existing games designed to teach computational thinking generally consist either of open-ended exploration with little direct guidance, or of a linear series of puzzles with much direct guidance but little exploration. Educational research indicates that the most effective approach may be a hybrid of these two structures. We present Dragon Architect, an educational computational thinking game, and use it as context for a discussion of key open problems in the design of games to teach computational thinking. These problems include how to directly teach computational thinking strategies, how to achieve a balance between exploration and direct guidance, and how to incorporate engaging social features. We also discuss several important design challenges we have encountered during the design of Dragon Architect. We contend that the problems we describe are relevant to anyone making educational games or systems that need to teach complex concepts and skills.
This paper presents the preliminary framework proposed by the authors for drivers of Smart Governance. The research question of this study is: What are the drivers for Smart Governance to achieve evidence-based policy-making? The framework suggests that in order to create a smart governance model, data governance and collaborative governance are the main drivers. These pillars are supported by a legal framework, normative factors, principles and values, methods, data assets and human resources, and IT infrastructure. These aspects will guide a real-time evaluation process at all levels of the policy cycle, toward the implementation of evidence-based policies.
Current software platforms for service composition are based on orchestration, choreography or hierarchical orchestration. However, such approaches support only partial compositionality, thereby increasing the complexity of SOA development. In this paper, we propose DX-MAN, a platform that supports total compositionality. We describe the main concepts of DX-MAN with the help of a case study based on the popular MusicCorp.
This work introduces concepts and algorithms, along with a case study validating them, to enhance event detection, pattern recognition and anomaly identification in real-life video surveillance. The motivation for the work lies in the observation that human behavioral patterns generally evolve and adapt continuously with time, rather than being static. First, limitations of existing work with respect to this phenomenon are identified. Accordingly, the notion and algorithms of Dynamic Clustering are introduced to overcome these drawbacks. Correspondingly, we propose maintaining two separate sets of data in parallel, namely the Normal Plane and the Anomaly Plane, to achieve continuous learning. The practicability of the proposed algorithms in a real-life scenario is demonstrated through a case study. The analysis presented in this work shows that a more comprehensive analysis, closely following human perception, can be accomplished by incorporating the proposed notions and algorithms into a video surveillance system.
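The dual-plane idea, routing observations that fit the learned normal clusters to one store and outliers to another, can be sketched in one dimension as follows. The distance rule, the radius threshold, and the scalar observations are illustrative assumptions; the paper's Dynamic Clustering algorithms are not reproduced here.

```python
def route_observation(obs, normal_centroids, anomaly_plane, radius=1.0):
    """Assign an observation to the nearest normal-plane centroid if it lies
    within the radius; otherwise store it on the anomaly plane for separate,
    continuous analysis."""
    best = min(normal_centroids, key=lambda c: abs(c - obs), default=None)
    if best is not None and abs(best - obs) <= radius:
        return "normal"
    anomaly_plane.append(obs)
    return "anomaly"
```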