Biblio
Programming languages have long incorporated type safety, increasing their level of abstraction and thus aiding programmers. Type safety eliminates whole classes of security-sensitive bugs, replacing the tedious and error-prone search for such bugs in each application with verifying the correctness of the type system. Despite their benefits, these protections often end at the process boundary, that is, type safety holds within a program but usually not to the file system or communication with other programs. Existing operating system approaches to bridge this gap require the use of a single programming language or common language runtime. We describe the deep integration of type safety in Ethos, a clean-slate operating system which requires that all program input and output satisfy a recognizer before applications are permitted to further process it. Ethos types are multilingual and runtime-agnostic, and each has an automatically generated unique type identifier. Ethos bridges the type-safety gap between programs by (1) providing a convenient mechanism for specifying the types each program may produce or consume, (2) ensuring that each type has a single, distributed-system-wide recognizer implementation, and (3) inescapably enforcing these type constraints.
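To make the recognizer idea concrete, here is a minimal sketch assuming a hypothetical 8-byte "Point" type encoded as two big-endian int32s; Ethos actually generates recognizers from its own multilingual type notation, so the names and encoding below are purely illustrative.

```python
# Hypothetical recognizer for an 8-byte "Point" type: input must parse
# as two big-endian int32s before application logic may touch it.
import struct

def recognize_point(payload: bytes):
    """Return (x, y) if the payload satisfies the type; raise otherwise."""
    if len(payload) != 8:
        raise ValueError("ill-typed input rejected before processing")
    return struct.unpack(">ii", payload)

# Only data that satisfies the recognizer reaches the application.
x, y = recognize_point(struct.pack(">ii", 3, 4))
```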
We are living in the age of Big Data, which brings with it the challenge of grasping the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On the one hand, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other hand, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and interdependencies. Based on this, we define focal issues that serve as future research directions on the track toward our vision of Next Generation Online Trust within the FORSEC project.
Secure data communication is an essential issue in message transmission over networks, and cryptography provides a way to make messages secure for confidential transfer. Cryptography is the process of transforming the sender's message into a secret format, called ciphertext, whose meaning only the intended receiver can understand. Various cryptographic and DNA-based encoding algorithms have been proposed to create secret messages for communication, but the proposed DNA-based encryption algorithms are not secure enough compared with today's security requirements. In this paper, we propose an encryption technique that enhances message security. The proposed algorithm uses a new method of DNA-based encryption with a strong 256-bit key. Alongside this large key, other encoding tools are used as keys in the message-encoding process, such as a random series of DNA bases and a modified DNA base coding. Moreover, a new round-key selection method is given to provide better security for the message. The ciphertext contains extra bits of information, resembling DNA strands, which provide enhanced security against intruder attacks.
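As an illustration of the DNA-encoding building block, here is a minimal sketch assuming the common 2-bit binary-to-nucleotide mapping (A=00, C=01, G=10, T=11); the paper's modified DNA base coding and its 256-bit keying are not reproduced here.

```python
# Minimal sketch of DNA-based encoding: map each byte of a message to
# four nucleotides using a 2-bit-per-base table. The table below is one
# common choice, not necessarily the paper's modified coding.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode raw bytes as a DNA string, two bits per base."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(dna: str) -> bytes:
    """Decode a DNA string back to bytes."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    msg = b"secret"
    strand = bytes_to_dna(msg)
    assert dna_to_bytes(strand) == msg
    print(strand)  # "CTATCGCCCGATCTAGCGCCCTCA"
```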
During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We also describe a production framework for collecting and reporting technical security metrics which is based on novel open-source big data technologies.
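A minimal sketch of deriving one technical security metric from a log, assuming a Snort-style fast-alert line format; real log formats vary by IDS and configuration, and the framework in the paper operates at far larger scale.

```python
# Count IDS alarms per (day, source IP): one simple technical security
# metric. The Snort-style fast-alert format below is an assumption.
import re
from collections import Counter

ALERT_RE = re.compile(
    r"^(\d{2}/\d{2})-\S+\s+\[\*\*\].*\[\*\*\].*?(\d+\.\d+\.\d+\.\d+)")

def alarms_per_source(lines):
    counts = Counter()
    for line in lines:
        m = ALERT_RE.match(line)
        if m:
            counts[m.groups()] += 1  # key: (day, source IP)
    return counts

sample = ("01/02-03:14:15.926535  [**] [1:2100498:7] "
          "GPL ATTACK_RESPONSE id check returned root [**] "
          "[Priority: 2] {TCP} 192.168.1.5:80 -> 10.0.0.1:1234")
print(alarms_per_source([sample]))  # Counter({('01/02', '192.168.1.5'): 1})
```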
Cloud computing is widely deployed to handle challenges such as big data processing and storage. Due to the outsourcing and sharing features of cloud computing, security is one of the main concerns that deter end users from shifting their businesses to the cloud. Many cryptographic techniques have been proposed to alleviate data security issues in cloud computing, but most of these works focus on solving a specific security problem such as data sharing, comparison, or searching. At the same time, little effort has been devoted to program security and to formalizing the security requirements in the context of cloud computing. We propose a formal definition of the security of cloud computing which captures the essence of the security requirements of both data and programs. Analysis of some existing technologies under the proposed definition shows its effectiveness. We also give a simple look-up table based solution for secure cloud computing which satisfies the given definition. Because FPGAs use look-up tables as their main computation component, they are a suitable hardware platform for the proposed secure cloud computing scheme. We therefore use FPGAs to implement the proposed solution for the k-means clustering algorithm, which shows the effectiveness of the proposed solution.
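To illustrate why look-up tables suffice as a computation component, here is a minimal sketch in which a function over a small domain is tabulated once and then evaluated by plain indexing, the software analogue of an FPGA LUT; the squared-distance example is our own, not the paper's scheme.

```python
# Any function over a small finite domain can be tabulated once and
# then evaluated with plain indexing, with no arithmetic at run time.
def build_lut(f, domain):
    """Tabulate f over a finite domain."""
    return {x: f(x) for x in domain}

# Example: squared distance of 4-bit values to a k-means centroid c.
c = 9
lut = build_lut(lambda x: (x - c) ** 2, range(16))
assert lut[12] == 9  # (12 - 9)^2
```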
In the era of big data, many users and companies have started to move their data to cloud storage to simplify data management and reduce data maintenance costs. However, security and privacy issues become major concerns because third-party cloud service providers are not always trustworthy. Although data contents can be protected by encryption, the access patterns, which contain important information, are still exposed to clouds or malicious attackers. In this paper, we apply the ORAM algorithm to enable privacy-preserving access to big data deployed in distributed file systems built upon hundreds or thousands of servers in a single cloud site or multiple geo-distributed sites. Since the ORAM algorithm would lead to a serious access-load imbalance among storage servers, we study a data placement problem to achieve a load-balanced storage system with improved availability and responsiveness. Due to the NP-hardness of this problem, we propose a low-complexity algorithm that can handle the large problem sizes associated with big data. Extensive simulations show that our proposed algorithm finds results close to the optimal solution and significantly outperforms a random data placement algorithm.
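As a rough illustration of load-balanced data placement, here is a greedy longest-processing-time-style heuristic that assigns each block to the least-loaded server; it is a generic sketch, not the paper's specific low-complexity algorithm.

```python
# Greedy placement: always give the next (heaviest) block to the server
# with the lowest accumulated expected access load.
import heapq

def place_blocks(block_loads, num_servers):
    """Return a block -> server assignment balancing expected load."""
    heap = [(0.0, s) for s in range(num_servers)]  # (load, server)
    heapq.heapify(heap)
    placement = {}
    # Placing heavy blocks first tightens the balance bound.
    for block, load in sorted(block_loads.items(), key=lambda kv: -kv[1]):
        total, server = heapq.heappop(heap)
        placement[block] = server
        heapq.heappush(heap, (total + load, server))
    return placement

placement = place_blocks({"b1": 0.5, "b2": 0.3, "b3": 0.3, "b4": 0.1}, 2)
```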
In this paper we propose a framework for automating feedback control to balance hard-to-predict wind power variations. The power imbalance is a result of non-zero-mean error around the wind power forecast. Our proposed framework aims to achieve frequency stabilization and regulation through one control action. A case study for a real-world system on Flores island in Portugal is provided: using battery-based storage on the island, we illustrate the proposed control framework.
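A minimal sketch of the single control action, assuming a simple proportional law that commands battery power to offset the wind forecast error; the gain and power limit below are illustrative placeholders, not values identified for the Flores system.

```python
# Proportional control of a battery to cancel the wind power imbalance.
def battery_command(forecast_mw, actual_mw, k_p=1.0, p_max_mw=2.0):
    """Positive output discharges the battery; negative charges it."""
    imbalance = forecast_mw - actual_mw   # wind shortfall > 0
    command = k_p * imbalance
    return max(-p_max_mw, min(p_max_mw, command))

# Wind under-delivers by 0.5 MW: the battery discharges 0.5 MW.
assert battery_command(5.0, 4.5) == 0.5
```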
This paper presents an intelligent data-acquisition system for dam monitoring and diagnosis. The system is built around the RS485 communication standard and uses its own communication protocol [2]. Its aim is to monitor all signal levels on the communication bus and to detect out-of-action data loggers. The diagnostic test extracts the following functional parameters: the supply voltage, and the absolute value and common-mode value of the differential signals used in data transmission (denoted “A” and “B”). By analyzing this acquired information, it is possible to find short circuits or open circuits across the communication bus. The measurement and signal-processing functions for fault detection are implemented in the system's central processing unit. The next testing step, finding the out-of-action data loggers, is performed by attempting to communicate with every data logger in the network. The lack of any response from a data logger is interpreted as an error, and using the code of the data logger's microcontroller, its exact position inside the dam infrastructure can be determined. The novelty of this procedure is that it completely automates the diagnosis, which until now was performed visually by checking every data logger.
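A hedged sketch of the logger liveness check using pyserial, assuming a one-byte address/echo exchange; the paper uses its own RS485 protocol [2], so the port name, baud rate, and framing here are placeholders.

```python
# Poll every data-logger address on the RS485 bus and treat a read
# timeout (or bad echo) as "out of action".
import serial  # pyserial

def find_dead_loggers(addresses, port="/dev/ttyUSB0", baud=9600):
    dead = []
    with serial.Serial(port, baud, timeout=0.5) as bus:
        for addr in addresses:
            bus.write(bytes([addr]))          # request a response
            if bus.read(1) != bytes([addr]):  # timeout or wrong echo
                dead.append(addr)
    return dead
```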
When a user accesses a resource, the accounting process on the server side keeps track of resource usage so that the user can be charged. In cloud computing, a user may use more than one service provider and may need two independent service providers to work together. In this user-centric context, the user is the owner of the information and has the right to authorize a third-party application to access the protected resource on the user's behalf. Therefore, the user also needs to monitor the resource usage of the authorizations granted to third-party applications. However, existing accounting protocols were designed to monitor only how the user consumes resources from the service provider. This paper proposes a user-centric accounting model called AccAuth, which adds an accounting layer to the OAuth protocol. A prototype was implemented and the proposed model was evaluated against the standard requirements; AccAuth passed them all.
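To illustrate the accounting layer's core idea, here is a minimal sketch that records each protected-resource call against the access token that made it; the storage and field names are illustrative, and AccAuth's actual design is considerably richer.

```python
# Record protected-resource calls per OAuth access token so the
# resource owner can audit third-party usage of a grant.
import time
from collections import defaultdict

usage_log = defaultdict(list)  # token -> list of (timestamp, resource)

def record_access(token: str, resource: str):
    usage_log[token].append((time.time(), resource))

def usage_report(token: str):
    """What the owner sees when auditing a third-party app's grant."""
    return {"calls": len(usage_log[token]),
            "resources": sorted({r for _, r in usage_log[token]})}
```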
Visual cryptography is a way to encrypt a secret image into several meaningless share images, such that no information can be obtained unless all of the shares are collected; stacking the share images retrieves the secret image. Because the share images are meaningless to their owners, they are difficult to manage. Tagged visual cryptography is a technique for printing a pattern onto the meaningless share images, after which users can easily manage their own shares according to the printed pattern. Access control is another popular topic, allowing a user or a group to see only their own authorizations. In this paper, a self-authentication mechanism with lossless construction ability for an image secret sharing scheme is proposed. The experiments provide positive data showing the feasibility of the proposed scheme.
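For background, a minimal sketch of classic (2,2) visual cryptography on one row of a binary image, where each secret pixel expands to two subpixels per share and stacking recovers the secret; the tagging and self-authentication extensions of the paper are not shown.

```python
# (2,2) visual cryptography for one row of a binary image: 1 = black.
import random

PATTERNS = [(0, 1), (1, 0)]  # the two subpixel patterns

def make_shares(secret_row):
    s1, s2 = [], []
    for pixel in secret_row:
        p = random.choice(PATTERNS)
        q = p if pixel == 0 else tuple(1 - b for b in p)  # complement if black
        s1.extend(p)
        s2.extend(q)
    return s1, s2

def stack(s1, s2):
    """Physically overlaying transparencies is a bitwise OR of black."""
    return [a | b for a, b in zip(s1, s2)]

share1, share2 = make_shares([1, 0, 1])
# Black pixels stack to two black subpixels; white ones keep one white.
print(stack(share1, share2))
```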
In cloud computing environments, the user authentication scheme is an important security tool because it provides authentication, authorization, and accounting for cloud users. Many user authentication schemes for cloud computing have therefore been proposed in recent years. However, we find that most of the previous authentication schemes have security problems, and some cannot be practically implemented in cloud computing. To solve these problems, we propose a new ID-based user authentication scheme for cloud computing. Compared with related works, the proposed scheme has a higher security level and lower computation costs, and it can be easily applied to cloud computing environments. The proposed scheme is therefore more efficient and practical than the related works.
The Internet of Things (IoT) is extending the Internet into our physical world and making it present everywhere. This evolution also raises challenges in issues such as privacy and security. For that reason, this work focuses on the integration and lightweight adaptation of existing authentication protocols that can also offer authorization and access control functionalities. In particular, this work focuses on the Extensible Authentication Protocol (EAP), a widely used protocol for access control in wireless (IEEE 802.11) and wired (IEEE 802.3) local area networks. This work presents an integration of EAP frames into IEEE 802.15.4 frames, demonstrating that the EAP protocol and some of its mechanisms are feasible for constrained devices, such as those populating IoT networks.
Single sign-on (SSO) is an identity management technique that provides users the ability to use multiple Web services with one set of credentials. However, when the authentication server is down or unavailable, users cannot access Web services, even if the services are operating normally; enabling continuous use is therefore important in single sign-on. In this paper, we present a security framework to overcome credential problems when accessing multiple web applications. We explain the system's functionality with respect to authorization and authentication, and we consider these methods from the viewpoints of continuity, security, and efficiency, which together make the framework highly secure.
Efficient authentication, authorization, and accounting (AAA) management mechanisms will be key for the widespread adoption of SDN experimentation facilities beyond the confines of academic labs. In particular, we are interested in a robust AAA infrastructure to identify experimenters, police their actions based on the associated roles, facilitate secure resource sharing, and provide for detailed accountability. Currently, however, said facilities are forced to employ a patchy AAA infrastructure which lacks several of the aforementioned features. This paper proposes a certificate-based AAA architecture for SDN experimental facilities, which is by design both secure and flexible. As this work is implementation-driven and aims for a short deployment cycle in current facilities, we also outline a credible migration path which we are currently pursuing actively.
Monitoring is an important issue in cloud environments because it ensures that acquired cloud slices meet the user's expectations. However, these environments are multi-tenant and dynamic, requiring automation techniques to offload cloud administrators. In a previous work, we proposed FlexACMS: a framework to automate monitoring configuration related to cloud slices using multiple monitoring solutions. In this work, we enhanced FlexACMS to allow dynamic and automatic attribution of monitoring configuration tasks to servers without administrator intervention, which was not available in the previous version. FlexACMS also considers the monitoring server load when attributing configuration tasks, which allows load balancing between monitoring servers. The evaluation showed that the enhancements reduced FlexACMS response time by up to 60% compared to the previous version, and the scalability evaluation of the enhanced version demonstrated the feasibility of our approach in large-scale cloud environments.
Software components are software units designed to interact with other independently developed software components. These components are assembled by third parties into software applications. The success of the final software application largely depends upon the selection of appropriate, easy-to-fit components according to the customer's needs. It is a primary requirement to evaluate the quality of components before using them in the final software application system. Not all quality characteristics are of the same significance for a particular software application in a specific domain. Therefore, it is necessary to identify only those characteristics/sub-characteristics which have higher importance over the others. The Analytic Network Process (ANP) is used to solve decision problems in which the attributes of decision parameters form dependency networks. The objective of this paper is to propose an ANP-based model to prioritize the characteristics/sub-characteristics of quality and to estimate a numeric value of software quality.
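A minimal sketch of the priority computation shared by AHP/ANP: characteristic weights are taken from the principal eigenvector of a pairwise-comparison matrix. The 3x3 matrix below is illustrative; full ANP additionally embeds such blocks in a dependency supermatrix.

```python
# Priorities of three quality characteristics from a pairwise matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # Saaty-scale comparisons (illustrative)

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                   # normalized priority vector
print(w.round(3))                 # roughly [0.648, 0.230, 0.122]
```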
The growth of the Internet has made IPv4 addresses a scarce resource. Due to slow IPv6 deployment, IANA-level IPv4 address exhaustion was reached before the world could transition to an IPv6-only Internet. The continuing need for IPv4 reachability can only be supported by IPv4 address sharing. This paper reviews ISP-level address sharing mechanisms, which allow Internet service providers to connect multiple customers who share a single IPv4 address. Some mechanisms come with severe and unanticipated consequences, and all of them come with tradeoffs. We propose a novel classification, which we apply to existing mechanisms such as NAT444 and DS-Lite and to proposals such as 4rd, MAP, etc. Our tradeoff analysis reveals insights into many problems, including abuse attribution, performance degradation, address and port usage efficiency, direct inter-customer communication, and availability.
Billions of dollars of services and goods are sold through email marketing. Subject lines have a strong influence on email open rates, as consumers often decide whether to open an email based on its subject. Traditionally, email subject lines are compiled based on the best assessment of human editors. We propose a method to help editors by predicting subject-line open rates, learning from past subject lines. The method derives different types of features from subject lines based on keywords, the performance of past subject lines, and syntax. Furthermore, we evaluate the contribution of individual subject-line keywords to overall open rates using an iterative method, namely Attribution Scoring, and use this for improved predictions. A random forest based model is trained to combine these features and predict performance. We use a dataset of more than a hundred thousand different subject lines with many billions of impressions to train and test the method. The proposed method shows significant improvement in prediction accuracy over the baselines for both new and already used subject lines.
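A minimal sketch of the prediction stage, assuming a handful of toy keyword, history, and syntax features fed to scikit-learn's random forest regressor; the paper's Attribution Scoring features and real training data are not reproduced.

```python
# Combine simple keyword, history, and syntax features with a random
# forest to predict subject-line open rates. All data is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def features(subject: str, past_rate: float) -> list:
    words = subject.split()
    return [len(words),                       # syntax: length
            sum(w.isupper() for w in words),  # syntax: shouting words
            int("free" in subject.lower()),   # keyword indicator
            past_rate]                        # performance of similar lines

X = np.array([features("Free shipping this weekend", 0.12),
              features("Your order has shipped", 0.30),
              features("LAST CHANCE sale ends tonight", 0.08)])
y = np.array([0.11, 0.29, 0.09])  # observed open rates (illustrative)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
rate = model.predict([features("Weekend deals inside", 0.15)])[0]
```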
By representing large corpora with concise and meaningful elements, topic-based generative models aim to reduce dimensionality and understand the content of documents. These techniques originally analyzed only the words in documents, but their extensions now accommodate metadata such as authorship information, which has proved useful for textual modeling. The importance of learning authorship lies in extracting author interests and assigning authors to anonymous texts. The Author-Topic (AT) model, an unsupervised learning technique, successfully exploits authorship information to model both documents and author interests using topic representations. However, the AT model makes the simplifying assumption that each author contributes equally to multiple-author documents. To overcome this limitation, we assume that authors make different degrees of contribution to a document, modeled with a Dirichlet distribution. This transforms the unsupervised AT model into the Supervised Author-Topic (SAT) model, which brings the novelty of authorship prediction on anonymous texts. The SAT model outperforms the AT model in identifying authors of documents written by either single or multiple authors, with a better Receiver Operating Characteristic (ROC) curve and a significantly higher Area Under the Curve (AUC). The SAT model not only achieves performance competitive with state-of-the-art techniques, e.g., random forests, but also maintains the characteristics of unsupervised models for information discovery, i.e., the word distributions of topics, author interests, and author contributions.
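A minimal sketch of the SAT model's key assumption: per-document author contributions are drawn from a Dirichlet distribution instead of the AT model's uniform split. The hyperparameters below are illustrative.

```python
# Sample per-document author contributions from a Dirichlet prior and
# attribute a word to an author in proportion to those contributions.
import numpy as np

rng = np.random.default_rng(0)

authors = ["alice", "bob", "carol"]
alpha = np.array([2.0, 1.0, 1.0])   # prior favoring the first author
contrib = rng.dirichlet(alpha)      # sums to 1, e.g. [0.55, 0.25, 0.20]

# The AT model would give every coauthor 1/3; here a word's author is
# sampled according to the learned contributions.
word_author = rng.choice(authors, p=contrib)
```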
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest (i.e., the study area) to reduce the computational cost associated with transient stability studies. This paper presents a method of deriving the reduced dynamic model of the external area based on dynamic response measurements. The method consists of three steps, namely dynamic-feature extraction, attribution, and reconstruction (DEAR). First, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature-attribution step as those matching the extracted dynamic features with the highest similarity, forming a suboptimal “basis” of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method yields better reduction ratios and smaller response errors than traditional coherency-based reduction methods.
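A minimal sketch of the feature-extraction step, assuming synthetic post-disturbance responses: measured generator signals are stacked as columns and a truncated SVD yields the dominant dynamic features used in the later attribution and reconstruction steps.

```python
# SVD-based extraction of dominant dynamic features from measured
# generator responses. The signals below are synthetic stand-ins.
import numpy as np

t = np.linspace(0, 10, 500)
# Rotor-angle-like responses of 4 generators (two coherent pairs).
X = np.column_stack([np.sin(2 * t) * np.exp(-0.1 * t),
                     np.sin(2 * t + 0.1) * np.exp(-0.1 * t),
                     np.sin(5 * t) * np.exp(-0.3 * t),
                     np.sin(5 * t + 0.1) * np.exp(-0.3 * t)])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s / s[0] > 0.05))   # keep only dominant features
features = U[:, :r]                # basis for the reconstruction step
print(r, s.round(2))
```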
User authentication is an important security mechanism that allows mobile users to be granted access to a roaming service offered by the foreign agent, with assistance from the home agent, in mobile networks. While security-related issues have been well studied, how to preserve user privacy in this type of protocol remains an open problem. In this paper, we revisit the privacy-preserving two-factor authentication scheme presented by Li et al. at WCNC 2013. We show that, despite being armed with a formal security proof, this scheme actually cannot achieve the claimed feature of user anonymity and is insecure against offline password guessing attacks; thus, it is not recommended for practical applications. We then show how to fix these identified drawbacks and suggest an enhanced scheme with better security and reasonable efficiency. Further, we conjecture that, under the non-tamper-resistance assumption on the smart cards, symmetric-key techniques alone are intrinsically insufficient to attain user anonymity.
The development of data communications has made the exchange of information via mobile devices easier, and security in this exchange is very important. One of the weaknesses of steganography is the limited capacity of data that can be embedded; with compression, the size of the data can be reduced. In this paper, we design an Android application that implements LSB steganography and TEA cryptography to secure a text message. The size of the text message is further reduced by lossless compression using the LZW method. The advantage of this approach is that it provides double security and room for more message data, so it is expected to be a good way to exchange information. The system achieves an average compression ratio of 67.42%. The modified TEA algorithm yields an average avalanche effect of 53.8%, the average PSNR of the stego image is 70.44 dB, and the average MOS value is 4.8.
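A minimal sketch of the LSB embedding step only, assuming Pillow for image I/O; the LZW compression and modified TEA encryption stages that would precede it are not shown.

```python
# Write the (already compressed and encrypted) payload bits into the
# least significant bit of each pixel byte of an RGB cover image.
from PIL import Image

def embed_lsb(cover_path, payload: bytes, out_path):
    img = Image.open(cover_path).convert("RGB")
    pixels = bytearray(img.tobytes())
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for cover image")
    for i, bit in enumerate(bits):
        pixels[i] = (pixels[i] & 0xFE) | bit  # overwrite the LSB only
    Image.frombytes("RGB", img.size, bytes(pixels)).save(out_path)
```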
The innovations in communication and computing technologies are changing the way we carry out the tasks of our daily lives. These revolutionary and disruptive technologies are available to users in various hardware form factors like smartphones, embedded appliances, and configurable or customizable add-on devices. One such technology is Bluetooth [1], which enables users to communicate and exchange various kinds of information, such as messages, audio, streaming music, and file transfers, in a Personal Area Network (PAN). Though it enables users to carry out these tasks without much effort or infrastructure, it inherently brings security and privacy concerns that need to be addressed at different levels. In this paper, we present an application-layer framework which provides strong mutual authentication of applications, data confidentiality, and data integrity independent of the underlying operating system. It can use the services of different Cryptographic Service Providers (CSPs) on different operating systems and in different programming languages. The framework has been successfully implemented and tested on the Android operating system on one end (using the Java language) and MS-Windows 7 on the other (using ANSI C), to prove its reliability and compatibility across operating systems, programming languages, and CSPs. The framework also satisfies the three essential requirements of security, i.e., confidentiality, integrity, and availability, per the NIST Guide to Bluetooth Security, and enables developers to adapt it for different kinds of Bluetooth-based applications.
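To illustrate application-layer mutual authentication independent of the transport, here is a minimal HMAC challenge-response sketch over a pre-shared key; the paper's framework is CSP-based and richer, so this is an analogy rather than its actual protocol.

```python
# Mutual authentication by challenge-response: each side proves
# possession of the pre-shared key without revealing it.
import hashlib, hmac, os

def challenge() -> bytes:
    return os.urandom(16)

def respond(key: bytes, nonce: bytes) -> bytes:
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(respond(key, nonce), tag)

key = os.urandom(32)
n_a, n_b = challenge(), challenge()
assert verify(key, n_a, respond(key, n_a))  # B proves itself to A
assert verify(key, n_b, respond(key, n_b))  # A proves itself to B
```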
Effective Personalized Mobile Search Using KNN implements an architecture to improve the effectiveness of personalization over large data sets while maintaining data security. User preferences are gathered through clickthrough data, which is sent to the server in encrypted form and classified into content concepts and location concepts. To improve classification and minimize processing time, the K-Nearest Neighbor (KNN) algorithm is used. The identified location and content preferences are merged to provide effective preferences to the user, and the system uses four entropies to balance the weight between content concepts and location concepts. The system implements a client-server architecture: the client collects user queries and maintains them in files for future reference, while the server carries out tasks such as training, re-ranking of the search results, and concept extraction. User preference privacy is ensured through privacy parameters as well as encryption techniques. Experiments carried out on an Android-based mobile device show that the system gives significantly improved results over the previous algorithm for large data sets while maintaining security.
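A minimal sketch of the KNN classification step, assuming toy two-dimensional features (a content-term score and a location-term score) standing in for the system's real concept extraction.

```python
# Label clickthrough entries as content or location concepts with KNN.
from sklearn.neighbors import KNeighborsClassifier

X = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7]]  # toy features
y = ["content", "content", "location", "location"]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[0.7, 0.2]]))  # -> ['content']
```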
In this paper, we present an open cloud DRM service provider to protect digital content copyright. The proposed architecture enables service providers to use an on-the-fly DRM technique with digital signatures and symmetric-key encryption. Unlike other similar works, our system does not keep the encrypted digital content but lets the content creators do so in their own cloud storage. Moreover, the keys used for symmetric encryption are managed in an extremely secure way by means of a key fission engine and a key fusion engine, whose ideas are drawn from work on secure network coding and secret sharing. Although the use of secret sharing and secure network coding for the storage of digital content has been proposed in other works, this paper is the first to employ those ideas only for key management while letting the content be stored in the owner's cloud storage. In addition, we implement an Android SDK for e-Book readers compatible with our proposed open cloud DRM service provider. The experimental results demonstrate that our proposal is feasible for the real e-Book market, especially for individual businesses.
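A minimal sketch of the key fission/fusion idea using the simplest n-of-n XOR secret splitting: no single share reveals the content key, and fusion recombines the shares on demand; the paper's engines build on secret sharing and secure network coding rather than plain XOR.

```python
# Split a symmetric content key into n XOR shares and recombine them.
import os

def fission(key: bytes, n: int):
    """Split key into n shares; all n are needed to recover it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def fusion(shares):
    """XOR all shares together to recover the original key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

k = os.urandom(32)
assert fusion(fission(k, 3)) == k
```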