Biblio
Proofs of Data Possession/Retrievability (PoDP/PoR) schemes are essential to cloud storage services, since they increase clients' confidence in the integrity and availability of their data. The majority of PoDP/PoR schemes are constructed from homomorphic linear authentication (HLA) schemes, which reduce the communication cost between the client and the server. In this paper, a new subclass of authentication codes, named ε-authentication codes, is proposed, and a modular construction of HLA schemes from ε-authentication codes is presented. We prove that the security notions of HLA schemes are closely related to the size of the authenticator/tag space and the success probability of impersonation attacks (with non-zero source states) of the underlying ε-authentication codes. We show that most HLA schemes used in PoDP/PoR schemes are instantiations of our modular construction from certain ε-authentication codes. Following this line, an algebraic-curves-based ε-authentication code yields a new HLA scheme.
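To make the homomorphic property concrete, here is a minimal sketch of a generic linear homomorphic tag over a prime field, in the spirit of HLA schemes: the tag of any linear combination of blocks can be checked from the aggregate of the blocks' tags. This is an illustrative toy (the key structure and field are assumptions), not the paper's ε-authentication-code construction.

```python
import secrets

# Toy linear homomorphic tag over Z_p; the field and key layout are
# illustrative, not the paper's construction.
P = 2**61 - 1  # a Mersenne prime

def keygen(n):
    # alpha is a secret scalar; r_i are per-block one-time masks.
    return secrets.randbelow(P), [secrets.randbelow(P) for _ in range(n)]

def tag(key, blocks):
    alpha, r = key
    return [(alpha * m + r[i]) % P for i, m in enumerate(blocks)]

def aggregate(tags, coeffs):
    # The server returns one aggregated tag for the challenged combination.
    return sum(c * t for c, t in zip(coeffs, tags)) % P

def verify(key, coeffs, mu, sigma):
    # Check sigma == alpha * mu + sum(c_i * r_i) for the claimed
    # aggregated block mu = sum(c_i * m_i) mod P.
    alpha, r = key
    return sigma == (alpha * mu + sum(c * r[i] for i, c in enumerate(coeffs))) % P
```

Because tags are linear in the blocks, one aggregated (block, tag) pair suffices per challenge, which is the communication saving the abstract refers to.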
Many books and papers describe how to do data science. While those texts are useful, it can also be important to reflect on anti-patterns, i.e., common classes of errors seen when large communities of researchers and commercial software engineers use, and misuse, data mining tools. This technical briefing presents those errors and shows how to avoid them.
We ask whether it is possible to anonymously communicate a large amount of data using only public (non-anonymous) communication together with a small anonymous channel. We think this is a central question in the theory of anonymous communication, and to the best of our knowledge this is the first formal study in this direction. Towards this goal, we introduce the novel concept of anonymous steganography: think of a leaker Lea who wants to leak a large document to Joe the journalist. Using anonymous steganography, Lea can embed this document in innocent-looking communication on some popular website (such as cat videos on YouTube or funny memes on 9GAG). Then Lea provides Joe with a short decoding key dk which, when applied to the entire website, recovers the document while hiding the identity of Lea among the large number of users of the website. Our contributions include: introducing and formally defining anonymous steganography; a construction showing that anonymous steganography is possible (which uses recent results in circuit obfuscation); and a lower bound on the number of bits which are needed to bootstrap anonymous communication.
iOS is a well-known operating system with a strong reputation for security. However, many attacks on iOS have recently been published, such as the "Masque Attack", "Null Dereference", and the "Italy Hacking Team's RCS". Therefore, iOS can no longer be assumed to be secure and safe. In addition, many security researchers find iOS hard to analyze because its closed source makes debugging difficult. We therefore propose a new security testing method for iOS. First, we fuzz iOS's web browser, MobileSafari, which can render HTML, PDF, mp4, and other formats. We test malformed HTML and PDF inputs using our fuzzing method. We hope that our research can contribute to the security and safety of iOS.
Human mobility is one of the key topics to be considered in the networks of the future, both by industrial and research communities that are already focused on multidisciplinary applications and user-centric systems. While the rapid proliferation of networks and high-tech miniature sensors makes this vision possible, the ever-growing complexity of the metrics and parameters governing such systems raises serious issues in terms of privacy, security, and computing capability. In this demonstration, we show a new system able to estimate a user's mobility profile based on anonymized and lightweight smartphone data. This system is composed of (1) a web analytics platform, able to analyze multimodal sensing traces and improve our understanding of complex mobility patterns, and (2) a smartphone application, able to show a user's profile generated locally in the form of a spider graph. The application uses anonymized and privacy-friendly data and methods, obtained by combining Wi-Fi traces, activity detection, and graph theory, made available independently of any personal information. A video showing the different interfaces to be presented is available online.
Nowadays, ATMs are used for money transactions, providing users with round-the-clock (24*7) financial services. The bank provides the user with a debit or credit card along with a PIN number known only to the bank and the user. Sometimes a user's card is stolen, and the thief can then access confidential information such as the credit card number, cardholder name, expiry date, and CVV number, with which he or she can complete fraudulent transactions. In this paper, we introduce biometric encryption based on the eye retina to enhance security over wireless and unreliable networks such as the Internet. In this method, the user can authorize a third person to make transactions with the debit or credit card on his or her behalf. In the proposed method, the third person performs the financial transaction by providing his or her eye retina for authorization and identification purposes.
Identifying threats contained within encrypted network traffic poses a unique set of challenges. It is important to monitor this traffic for threats and malware, but to do so in a way that maintains the integrity of the encryption. Because pattern matching cannot operate on encrypted data, previous approaches have leveraged observable metadata gathered from the flow, e.g., the flow's packet lengths and inter-arrival times. In this work, we extend the current state of the art by considering a data omnia approach. To this end, we develop supervised machine learning models that take advantage of a unique and diverse set of network flow data features. These data features include TLS handshake metadata, DNS contextual flows linked to the encrypted flow, and the HTTP headers of HTTP contextual flows from the same source IP address within a 5-minute window. We begin by exhibiting the differences between malicious and benign traffic's use of TLS, DNS, and HTTP on millions of unique flows. This study is used to design the feature sets that have the most discriminatory power. We then show that incorporating this contextual information into a supervised learning system significantly increases performance at a 0.00% false discovery rate for the problem of classifying encrypted, malicious flows. We further validate our false positive rate on an independent, real-world dataset.
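The kind of feature assembly described above can be sketched as follows. The record layout and field names (cipher_suites, ttl, and so on) are illustrative assumptions, not the paper's exact schema; the point is that only handshake and contextual metadata are used, never decrypted payloads.

```python
# Sketch of building a feature vector from an encrypted flow's TLS
# handshake plus its DNS and HTTP contextual flows. All field names
# are assumed for illustration.
def flow_features(tls, dns, http_flows):
    return {
        # TLS handshake metadata is visible even though payloads are encrypted.
        "tls_num_cipher_suites": len(tls.get("cipher_suites", [])),
        "tls_num_extensions": len(tls.get("extensions", [])),
        # DNS context: the lookup that produced the flow's destination address.
        "dns_name_len": len(dns.get("name", "")),
        "dns_ttl": dns.get("ttl", 0),
        "dns_num_ips": len(dns.get("ips", [])),
        # HTTP context: headers of plaintext flows from the same source
        # IP address within the 5-minute window.
        "http_num_flows": len(http_flows),
        "http_has_user_agent": int(any("User-Agent" in h for h in http_flows)),
    }
```

A supervised classifier would then be trained on vectors like these, labelled malicious or benign.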
An image contains a lot of visual as well as hidden information, and both must be secured during transmission. With this motivation, a scheme is proposed based on encryption in the tetrolet domain. For encryption, an iterative Arnold transform is used in the proposed methodology. Images are highly textured, and these textures carry the authenticity of the image; the decryption process is therefore designed so that edges and textures are recovered as completely as possible. The proposed method has been tested on standard images, and the results obtained are significant. A comparison with standard existing methods is also performed to measure the effectiveness of the proposed method.
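One plausible reading of the iterative Arnold transform step is the classic cat-map pixel scrambling below, shown for a square image as nested lists; this is a minimal sketch, not the paper's exact parameterization.

```python
# Iterative Arnold (cat map) scrambling for an N x N image.
def arnold(img, iterations=1):
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                # (x, y) -> (x + y, x + 2y) mod N permutes pixel positions.
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def arnold_inverse(img, iterations=1):
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                # Inverse map: (x, y) -> (2x - y, -x + y) mod N.
                out[(2 * x - y) % n][(-x + y) % n] = img[x][y]
        img = out
    return img
```

The map is a determinant-1 linear transform modulo N, so it is a bijection on pixel positions and exactly invertible, which is what makes lossless decryption of the scrambled image possible.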
Mobile app developers today have a hard decision to make: to independently develop native apps for different operating systems or to develop an app that is cross-platform compatible. The availability of different tools and approaches to support cross-platform app development only makes the decision harder. In this study, we used user reviews of apps to empirically understand the relationship (if any) between the approach used in the development of an app and its perceived quality. We used Natural Language Processing (NLP) models to classify 787,228 user reviews of the Android version and iOS version of 50 apps as complaints in one of four quality concerns: performance, usability, security, and reliability. We found that hybrid apps (on both Android and iOS platforms) tend to be more prone to user complaints than interpreted/generated apps. In a study of Facebook, an app that underwent a change in development approach from hybrid to native, we found that change in the development approach was accompanied by a reduction in user complaints about performance and reliability.
The prevalent integration of highly intermittent renewable distributed energy resources (DER) into microgrids necessitates the deployment of a microgrid controller. In the absence of the main electric grid setting the network voltage and frequency, microgrid power and energy management becomes more challenging, accentuating the need for a centralized microgrid controller that, through communication links, ensures smooth operation of the autonomous system. This extensive reliance on information and communication technologies (ICT) creates potential access points and vulnerabilities that may be exploited by cyber-attackers. This paper first presents a typical microgrid configuration operating in islanded mode; the microgrid elements and the primary and secondary control functions for power, energy and load management are defined. The information transferred from the central controller to coordinate and dispatch the DERs is provided along with the deployable communication technologies and protocols. The vulnerabilities arising in such microgrids, along with the cyber-attacks exploiting them, are described. The impact of these attacks on the microgrid controller functions is shown to depend on the characteristics, location and target of the cyber-attack, as well as the microgrid configuration and control. A real-time hardware-in-the-loop (HIL) testing platform, which emulates a microgrid featuring renewable DERs, an energy storage system (ESS), a diesel generator and controllable loads, was used as the case study in order to demonstrate the impact of various cyber-attacks.
We introduce a simplified island model with behavior similar to that of λ (1+1) islands optimizing the Maze fitness function, and investigate the effects of the migration topology on the ability of the simplified island model to track the optimum of a dynamic fitness function. More specifically, we prove that there exist choices of model parameters for which using a unidirectional ring as the migration topology allows the model to track the oscillating optimum through n Maze-like phases with high probability, while using a complete graph as the migration topology results in the island model losing track of the optimum with overwhelming probability. Additionally, we prove that if migration occurs only rarely, denser migration topologies may be advantageous. This illustrates that while a less-dense migration topology may be useful when optimizing dynamic functions with oscillating behavior, and requires less problem-specific knowledge to determine when migration may be allowed to occur, care must be taken to ensure that a sufficient amount of migration occurs during the optimization process.
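The two migration topologies being compared can be sketched as directed graphs over λ islands, with a crude best-takes-over migration step; the replacement rule below is an illustrative stand-in, not the paper's exact model.

```python
# The two migration topologies compared in the analysis, over lam islands.
def ring(lam):
    # Unidirectional ring: island i sends migrants only to (i + 1) mod lam.
    return {i: [(i + 1) % lam] for i in range(lam)}

def complete(lam):
    # Complete graph: every island sends migrants to every other island.
    return {i: [j for j in range(lam) if j != i] for i in range(lam)}

def migrate(fitness, topology):
    # One migration step: each island keeps the best fitness among itself
    # and its incoming migrants (a simplified replacement rule).
    incoming = {i: [] for i in topology}
    for src, dsts in topology.items():
        for dst in dsts:
            incoming[dst].append(fitness[src])
    return [max([fitness[i]] + incoming[i]) for i in topology]
```

The contrast the abstract describes is visible even here: under the complete graph one good individual takes over every island in a single step, homogenizing the population, while the ring propagates it only one island per step.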
The Physical Web is a project announced by Google's Chrome team that essentially provides a framework to discover "smart" physical objects (e.g., vending machines, classrooms, conference rooms, cafeterias, etc.) and interact with specific, contextual content without having to resort to downloading a specific app. A common app, such as the open source and freely available Physical Web app on the Google Play Store or the BKON Browser on the Apple App Store, can access nearby beacons. A current work-in-progress at the University of Maui College is developing a campus-wide prototype of beacon technology using the Eddystone-URL and EID protocols from various beacon vendors.
As the Smart Grid becomes highly interconnected, the power protection, control, and monitoring functions of the grid increasingly rely on the communications infrastructure, which has seen rapid growth. At the same time, concerns regarding cyber threats have attracted significant attention to the security of power systems. A properly designed security attack against the power grid can cause catastrophic damage to equipment and create large-scale power outages. The smart grid consists of critical IEDs, which are considered high-priority targets for malicious security attacks. For this reason, it is very important to design IEDs with cyber security in mind from the beginning, starting with the selection of hardware and operating systems, so that all facets of security are addressed and the product is robust and can withstand attacks. The subject of cyber security is vast and covers many aspects. This paper focuses mainly on one of these aspects, namely IED firmware system testing from the security point of view. The paper discusses practical aspects of IED security testing and introduces the reader to types of vulnerability exploitations on the IED communication stack and SCADA applications, practical aspects of security testing, the importance of early vulnerability detection, and ways in which security testing helps towards compliance with regulatory standards such as NERC-CIP. Finally, based on the results from the simulated attacks, the paper discusses the importance of good security practices in design and coding, so that the potential to introduce vulnerabilities is kept to a minimum. Designing with security in mind also includes adequate policies for the software development process. Critical software development milestones must be established, such as design and test documentation review, code review, and unit, integration, and system testing.
Machine-code slicing is an important primitive for building binary analysis and rewriting tools, such as taint trackers, fault localizers, and partial evaluators. However, it is not easy to create a machine-code slicer that exhibits a high level of precision. Moreover, the problem of creating such a tool is compounded by the fact that a small amount of local imprecision can be amplified via cascade effects. Most instructions in instruction sets such as Intel's IA-32 and ARM are multi-assignments: they have several inputs and several outputs (registers, flags, and memory locations). This aspect of the instruction set introduces a granularity issue during slicing: there are often instructions at which we would like the slice to include only a subset of the instruction's semantics, whereas the slice is forced to include the entire instruction. Consequently, the slice computed by state-of-the-art tools is very imprecise, often including essentially the entire program. This paper presents an algorithm to slice machine code more accurately. To counter the granularity issue, our algorithm performs slicing at the microcode level, instead of the instruction level, and obtains a more precise microcode slice. To reconstitute a machine-code program from a microcode slice, our algorithm uses machine-code synthesis. Our experiments on IA-32 binaries of FreeBSD utilities show that, in comparison to slices computed by a state-of-the-art tool, our algorithm reduces the size of backward slices by 33%, and forward slices by 70%.
Recurrent neural networks (RNNs) were recently proposed for the session-based recommendation task. The models showed promising improvements over traditional recommendation approaches. In this work, we further study RNN-based models for session-based recommendations. We propose the application of two techniques to improve model performance, namely, data augmentation, and a method to account for shifts in the input data distribution. We also empirically study the use of generalised distillation, and a novel alternative model that directly predicts item embeddings. Experiments on the RecSys Challenge 2015 dataset demonstrate relative improvements of 12.8% and 14.8% over previously reported results on the Recall@20 and Mean Reciprocal Rank@20 metrics respectively.
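The data augmentation technique can be sketched as prefix expansion: every prefix of a session becomes a training example whose target is the next item. This is one common reading of sequence augmentation for session-based recommenders, shown as a minimal sketch rather than the paper's exact preprocessing.

```python
# Prefix-based data augmentation for session-based recommendation:
# session [a, b, c, d] yields ([a], b), ([a, b], c), ([a, b, c], d).
def augment_sessions(sessions, min_len=1):
    examples = []
    for session in sessions:
        for t in range(min_len, len(session)):
            # Input is the prefix up to time t; target is the next item.
            examples.append((session[:t], session[t]))
    return examples
```

A session of length n thus contributes n - 1 training examples instead of one, which is where the additional training signal comes from.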
Within a few years, Cloud computing has emerged as the most promising IT business model. Thanks to its various technical and financial advantages, Cloud computing continues to win over new users from scientific and industrial sectors every day. To satisfy the various users' requirements, Cloud providers must maximize the performance of their IT resources to ensure the best service at the lowest cost. Performance optimization in the Cloud can be pursued at different levels and aspects. In the present paper, we propose to introduce a fuzzy logic process into the scheduling strategy for public Clouds in order to improve the response time, processing time, and total cost. Indeed, fuzzy logic has proven its ability to solve optimization problems in several fields such as data mining, image processing, and networking.
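A fuzzy-logic scheduling step can be sketched as below: crisp inputs (here, a normalized resource load and a deadline urgency, both assumed names) are fuzzified through triangular membership functions, combined by two illustrative rules, and defuzzified into a priority. The membership shapes and rule base are assumptions for illustration, not the paper's design.

```python
# Tiny fuzzy-inference sketch for scheduling priority.
def tri(x, a, b, c):
    # Triangular membership function rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def priority(load, urgency):
    # Rule 1: low load AND high urgency -> high priority (output 1.0).
    r1 = min(tri(load, -0.5, 0.0, 0.6), tri(urgency, 0.4, 1.0, 1.5))
    # Rule 2: high load AND low urgency -> low priority (output 0.0).
    r2 = min(tri(load, 0.4, 1.0, 1.5), tri(urgency, -0.5, 0.0, 0.6))
    if r1 + r2 == 0:
        return 0.5  # no rule fires: neutral priority
    # Weighted-average (Sugeno-style) defuzzification.
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2)
```

A scheduler would compute this priority per task and dispatch the highest-priority task first, trading exact optimization for cheap, robust rules.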
In multi-server environments, remote user authentication is an extremely important issue because it provides authorization while users access their data and services. Moreover, remote user authentication schemes for multi-server environments resolve the problem of users needing to manage different identities and passwords. For this reason, many user authentication schemes for multi-server environments have been proposed in recent years. In 2015, Lu et al. improved Mishra et al.'s scheme, and claimed that their scheme is a more secure and practical remote user authentication scheme for multi-server environments. However, we found that Lu et al.'s scheme is actually insecure and incorrect. In this paper, we demonstrate that their scheme is vulnerable to outsider attacks and user forgery attacks. We then propose a new biometrics- and smart card-based authentication scheme. Finally, we show that our proposed scheme is more secure and supports the desired security properties.
Digital information security is the field of information technology which deals with the identification and protection of information. Threat identification is the most challenging phase of any Intrusion Detection System (IDS). Threat detection is the most critical step because the rest of the IDS pipeline depends solely on "what is identified". With this in view, a multilayered framework is discussed which handles the underlying features for the identification of various attacks (DoS, R2L, U2R, Probe). The experiments validate that using an SVM with a genetic approach is efficient.
This paper proposes a method of distinguishing stock market states, classifying them based on price variations of securities, and using an evolutionary algorithm to improve the quality of classification. The data represent buy/sell order queues obtained from a rebuilt order book, given as price-volume pairs. In order to put more emphasis on certain features before the classifier is used, we apply a weighting scheme, further optimized by an evolutionary algorithm.
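The evolutionary weighting idea can be sketched as a simple (1+1)-style loop that perturbs per-feature weights and keeps a child if a wrapped classifier's accuracy does not degrade. The nearest-neighbour classifier and mutation operator here are illustrative stand-ins for whatever classifier the weighting precedes.

```python
import random

def accuracy(weights, data):
    # Leave-one-out weighted nearest-neighbour accuracy; data is a list
    # of (feature_vector, label) pairs.
    correct = 0
    for i, (x, label) in enumerate(data):
        best = min(
            (j for j in range(len(data)) if j != i),
            key=lambda j: sum(w * (a - b) ** 2
                              for w, a, b in zip(weights, x, data[j][0])),
        )
        correct += data[best][1] == label
    return correct / len(data)

def evolve_weights(data, dim, generations=200, seed=0):
    rng = random.Random(seed)
    weights = [1.0] * dim
    best_acc = accuracy(weights, data)
    for _ in range(generations):
        # (1+1)-style mutation: perturb one weight, keep if not worse.
        child = list(weights)
        k = rng.randrange(dim)
        child[k] = max(0.0, child[k] + rng.gauss(0, 0.3))
        acc = accuracy(child, data)
        if acc >= best_acc:
            weights, best_acc = child, acc
    return weights, best_acc
```

Because a child replaces the parent only when accuracy is at least as good, the evolved weights can never score worse than the uniform starting weights on the training data.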
This paper proposes a forensic method for identifying whether an image was previously JPEG-compressed, and also proposes an improved anti-forensics method to enhance the quality of the noise-added image. Stamm and Liu's anti-forensics method disables the detection capabilities of various forensic methods proposed in the literature for identifying compressed images. However, it also degrades the quality of the image. First, we analyze the anti-forensics method and then use the decimal histogram of the coefficients to distinguish never-compressed images from previously compressed ones, even when the compressed image has been processed anti-forensically. After analyzing the noise distribution in the anti-forensically processed (AF) image, we propose a method to remove the Gaussian noise caused by image dithering, which in turn enhances the image quality. The paper is organized as follows: Section I is the introduction, covering previous literature. Section II briefly describes the anti-forensic method proposed by Stamm et al. In Section III, we present the proposed forensic approach, and Section IV comprises the improved anti-forensic approach. Section V covers details of the experimentation, followed by the conclusion.
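The decimal-histogram idea can be sketched as follows: JPEG quantization and rounding pull DCT coefficients toward integer values, so histogramming the fractional (decimal) parts of the coefficients separates previously compressed images (mass concentrated near 0) from never-compressed ones (mass spread out). The binning below is a minimal sketch of that statistic, not the paper's exact detector.

```python
# Histogram of the fractional (decimal) parts of a list of DCT
# coefficients; previously JPEG-compressed images concentrate mass
# in the bins nearest the integers.
def decimal_histogram(coefficients, bins=10):
    counts = [0] * bins
    for c in coefficients:
        frac = c % 1.0  # fractional part in [0, 1), also for negatives
        counts[min(int(frac * bins), bins - 1)] += 1
    return counts
```

A detector would compare this histogram's shape (e.g., the fraction of mass in the bins adjacent to 0) against a threshold calibrated on known never-compressed images.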
The Software Assurance Metrics and Tool Evaluation (SAMATE) project at the National Institute of Standards and Technology (NIST) has created the Software Assurance Reference Dataset (SARD) to provide researchers and software security assurance tool developers with a set of known security flaws. As part of an empirical evaluation of a runtime monitoring framework, two test suites were executed and monitored, revealing deficiencies which led to a collaboration with the NIST SAMATE team to provide replacements. Test Suites 45 and 46 are analyzed, discussed, and updated to improve accuracy, consistency, preciseness, and automation. Empirical results show metrics such as recall, precision, and F-Measure are all impacted by invalid base assumptions regarding the test suites.
A honeypot is a deception tool for enticing attackers to make efforts to compromise the electronic information systems of an organization. A honeypot can serve as an advanced security surveillance tool for minimizing the risks of attacks on information technology systems and networks. Honeypots are useful for providing valuable insights into potential system security loopholes. The current research investigated the effectiveness of using the centralized system management technology Puppet together with virtual machines in implementing automated honeypots for intrusion detection, correction, and prevention. A centralized logging system was used to collect the source address, country, and timestamp of intrusions by attackers. The unique contributions of this research include: a demonstration of how open source technologies can be used to dynamically add or modify hacking incidences in a high-interaction honeynet system; a presentation of strategies for making honeypots more attractive, so that hackers spend more time on them and provide more evidence of hacking; and an exhibition of algorithms for system and network intrusion prevention.
Modern information extraction pipelines are typically constructed by (1) loading textual data from a database into a special-purpose application, (2) applying a myriad of text-analytics functions to the text, which produce a structured relational table, and (3) storing this table in a database. Obviously, this approach can lead to laborious development processes, complex and tangled programs, and inefficient control flows. Towards solving these deficiencies, we embark on an effort to lay the foundations of a new generation of text-centric database management systems. Concretely, we extend the relational model by incorporating into it the theory of document spanners which provides the means and methods for the model to engage the Information Extraction (IE) tasks. This extended model, called Spannerlog, provides a novel declarative method for defining and manipulating textual data, which makes possible the automation of the typical work method described above. In addition to formally defining Spannerlog and illustrating its usefulness for IE tasks, we also report on initial results concerning its expressive power.
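The document-spanner idea underlying Spannerlog can be illustrated with a minimal regex-formula spanner: a regular expression with named capture variables maps a document to a relation whose attributes are spans (start/end offsets plus the covered substring). The pattern in the test is an illustrative example, not Spannerlog syntax.

```python
import re

# Minimal regex-formula document spanner: each match contributes one
# tuple; each named capture variable contributes a (start, end, text)
# span attribute, mirroring the spans-as-attributes model.
def spanner(pattern, doc):
    rows = []
    for m in re.finditer(pattern, doc):
        rows.append({var: (m.start(var), m.end(var), m.group(var))
                     for var in m.re.groupindex})
    return rows
```

Spannerlog then goes further by letting such relations be joined, filtered, and recursed over declaratively, instead of being wired together in imperative pipeline code.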