Bibliography
A key challenge of future mobile communication research is to strike an attractive compromise between a wireless network's area spectral efficiency and its energy efficiency. This necessitates a clean-slate approach to wireless system design that embraces the rich body of existing knowledge, especially on multiple-input-multiple-output (MIMO) technologies. This motivates the proposal of an emerging wireless communications concept conceived for single-radio-frequency (RF) large-scale MIMO communications, termed spatial modulation (SM). The concept of SM has established itself as a beneficial transmission paradigm, subsuming numerous members of the MIMO system family. The research of SM has reached sufficient maturity to motivate its comparison to state-of-the-art MIMO communications, as well as to inspire its application to other emerging wireless systems such as relay-aided, cooperative, small-cell, optical wireless, and power-efficient communications. Furthermore, it has received sufficient research attention to be implemented in testbeds, and it holds the promise of stimulating further vigorous interdisciplinary research in the years to come. This tutorial paper is intended to offer a comprehensive state-of-the-art survey on SM-MIMO research, to provide a critical appraisal of its potential advantages, and to promote the discussion of its beneficial application areas and their research challenges, leading to an analysis of the technological issues associated with the implementation of SM-MIMO. The paper is concluded with a description of the world's first experimental activities in this vibrant research field.
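As a hedged illustration of the core SM idea summarized above (not code from the surveyed paper), the following minimal Python sketch maps an incoming bit block onto a transmit-antenna index plus a conventional QPSK symbol, so that only one antenna, and hence a single RF chain, is active per channel use; the antenna count and constellation are illustrative assumptions.

```python
import numpy as np

def sm_map(bits, n_tx=4, qpsk=np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)):
    """Map log2(n_tx) bits to an antenna index and 2 bits to a QPSK symbol.

    Illustrative sketch of spatial modulation: per channel use, only the
    selected antenna transmits, so a single RF chain suffices.
    """
    k_ant = int(np.log2(n_tx))          # bits carried by the antenna index
    k_sym = int(np.log2(len(qpsk)))     # bits carried by the constellation symbol
    assert len(bits) == k_ant + k_sym
    antenna = int("".join(map(str, bits[:k_ant])), 2)
    symbol = qpsk[int("".join(map(str, bits[k_ant:])), 2)]
    x = np.zeros(n_tx, dtype=complex)   # transmit vector: one non-zero entry
    x[antenna] = symbol
    return x

# Example: 4 input bits -> antenna index 2, QPSK symbol index 1
print(sm_map([1, 0, 0, 1]))
```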
Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture that is resistant to any attempts at formal verification, and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, in particular the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.
We are currently living in the age of Big Data, which comes with the challenge of grasping the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On the one hand, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other hand, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and inter-dependencies. Based on this, we define focal issues which serve as future research directions on the path toward our vision of Next Generation Online Trust within the FORSEC project.
Cloud computing refers to connecting many computers through a communication channel such as the Internet, over which we send, receive, and store data. It also offers the opportunity for parallel computing by using a large number of virtual machines. Nowadays, performance, scalability, availability, and security represent the major risks in cloud computing. In this paper we highlight the issues of security, availability, and scalability, and we identify how to make a cloud-computing-based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing and discuss some of the characteristics involved in attaining its high performance.
With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy-preserving association rules. The MASK algorithm only adopts a data perturbation strategy, which leads to a low privacy-preserving degree. Moreover, it is difficult to apply the MASK algorithm in practice because of its long execution time. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) to improve the privacy-preserving degree by multi-parameter perturbation. In order to improve time-efficiency, the calculation of the inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization is provided to reduce the number of database scans using set theory. Both theoretical analyses and experimental results prove that the proposed DPQR algorithm has better performance.
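A minimal, hypothetical sketch of the MASK-style idea underlying this line of work (not code from the cited paper): each item's presence bit is retained with probability p and flipped otherwise, and the true support is later estimated from the distorted counts by inverting the 2x2 perturbation matrix. The value of p and the toy data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(column, p=0.9):
    """Keep each presence bit with probability p, flip it otherwise."""
    keep = rng.random(len(column)) < p
    return np.where(keep, column, 1 - column)

def estimate_support(distorted, p=0.9):
    """Estimate the true 1-count of an item column from its distorted version.

    Observed counts relate to true counts via M = [[p, 1-p], [1-p, p]];
    solving against M recovers an estimate of the original support.
    """
    n = len(distorted)
    c1 = distorted.sum()                      # observed ones
    M = np.array([[p, 1 - p], [1 - p, p]])
    est = np.linalg.solve(M, np.array([c1, n - c1]))
    return est[0] / n                         # estimated fraction of true ones

item = rng.integers(0, 2, size=10_000)        # toy item column
print(item.mean(), estimate_support(perturb(item)))
```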
Smart grid is a technological innovation that improves the efficiency, reliability, economics, and sustainability of electricity services. It plays a crucial role in modern energy infrastructure. The main challenges for smart grids, however, are how to manage different types of front-end intelligent devices, such as power assets and smart meters, efficiently, and how to process the huge amount of data received from these devices. Cloud computing, a technology that provides computational resources on demand, is a good candidate to address these challenges since it has several attractive properties such as energy saving, cost saving, agility, scalability, and flexibility. In this paper, we propose a secure cloud-computing-based framework for big data information management in smart grids, which we call “Smart-Frame.” The main idea of our framework is to build a hierarchical structure of cloud computing centers to provide different types of computing services for information management and big data analysis. In addition to this structural framework, we present a security solution based on identity-based encryption, signature, and proxy re-encryption to address critical security issues of the proposed framework.
This paper analyzes a data-spill incident in order to study policy issues for ICT security from a social-science perspective focused on risk. The results of the case analysis are as follows. First, ICT risk can be categorized as 'severe, strong, intensive and individual' according to the levels of both probability and impact. Second, the risk management strategy can be designated as 'avoid, transfer, mitigate, accept' by understanding the culture type of the relevant group, namely 'hierarchy, egalitarianism, fatalism and individualism'. Third, personal data exhibits the big data characteristics of 'volume, velocity, variety' in each risk situation. Therefore, government needs to establish a standing organization responsible for ICT risk policy and management in the new big data era, and the policy for ICT risk management needs to balance 'technology, norms, laws, and market' considerations in the big data era.
The traditional Kerberos protocol has some limitations in clock synchronization and key storage; moreover, it is vulnerable to password-guessing attacks and to attacks caused by malicious software. In this paper, a new authentication scheme is proposed for wireless mesh networks. By utilizing public-key encryption techniques, the security of the proposed scheme is enhanced. In addition, the timestamps of the traditional protocol are replaced by random numbers to reduce the implementation cost. The analysis shows that the improved authentication protocol is well suited to wireless mesh networks and makes identity authentication more secure and efficient.
In recent years, Attribute Based Access Control (ABAC) has evolved as the preferred logical access control methodology in the Department of Defense and Intelligence Community, as well as many other agencies across the federal government. Gartner recently predicted that “by 2020, 70% of enterprises will use attribute-based access control (ABAC) as the dominant mechanism to protect critical assets, up from less than 5% today.” A definition and introduction to ABAC can be found in NIST Special Publication 800-162, Guide to Attribute Based Access Control (ABAC) Definition and Considerations, and Intelligence Community Policy Guidance (ICPG) 500.2, Attribute-Based Authorization and Access Management. Within ABAC, attributes are used to make critical access control decisions, yet standards for attribute assurance have only just started to be researched and documented. This presentation outlines factors influencing attributes that an authoritative body must address when standardizing attribute assurance and proposes some notional implementation suggestions for consideration. Attribute Assurance brings a level of confidence to attributes that is similar to levels of assurance for authentication (e.g., guidelines specified in NIST SP 800-63 and OMB M-04-04). There are three principal areas of interest when considering factors related to Attribute Assurance. Accuracy establishes the policy and technical underpinnings for semantically and syntactically correct descriptions of Subjects, Objects, or Environmental conditions. Interoperability considers different standards and protocols used for secure sharing of attributes between systems in order to avoid compromising the integrity and confidentiality of the attributes or exposing vulnerabilities in provider or relying systems or entities. Availability ensures that the update and retrieval of attributes satisfy the application to which the ABAC system is applied. In addition, the security and backup capability of attribute repositories need to be considered. Similar to a Level of Assurance (LOA), a Level of Attribute Assurance (LOAA) assures a relying party that the attribute value received from an Attribute Provider (AP) is accurately associated with the subject, resource, or environmental condition to which it applies. An Attribute Provider (AP) is any person or system that provides subject, object (or resource), or environmental attributes to relying parties, regardless of transmission method. The AP may be the original, authoritative source (e.g., an Applicant). The AP may also receive information from an authoritative source for repackaging or store-and-forward (e.g., an employee database) to relying parties, or it may derive the attributes from formulas (e.g., a credit score). Regardless of the source of the AP's attributes, the same standards should apply to determining the LOAA. As ABAC is implemented throughout government, attribute assurance will be a critical, limiting factor in its acceptance. With this presentation, we hope to encourage dialog between attribute relying parties, attribute providers, and the federal agencies that will be defining standards for ABAC in the immediate future.
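As a hedged, hypothetical sketch of the basic ABAC decision described above (not taken from NIST SP 800-162 or ICPG 500.2), the following Python snippet evaluates a toy policy rule over subject, object, and environment attributes such as an attribute provider might supply; all attribute names and the rule itself are invented for illustration.

```python
from datetime import time

# Hypothetical attribute sets, as they might be supplied by attribute providers.
subject = {"clearance": "secret", "department": "finance"}
resource = {"classification": "secret", "owner_department": "finance"}
environment = {"local_time": time(14, 30)}

def permit(subject, resource, environment):
    """Toy ABAC rule: access requires sufficient clearance, a matching
    department, and a request made during business hours."""
    levels = ["public", "confidential", "secret", "top_secret"]
    return (
        levels.index(subject["clearance"]) >= levels.index(resource["classification"])
        and subject["department"] == resource["owner_department"]
        and time(8, 0) <= environment["local_time"] <= time(18, 0)
    )

print(permit(subject, resource, environment))  # True for the example attributes
```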
In PMIPv6-based networks, mobile nodes can be made smaller and lighter because the network nodes perform the mobility-management-related functions on behalf of the mobile nodes. One such protocol, Fast Handovers for Proxy Mobile IPv6 (FPMIPv6) [1], was studied by the Internet Engineering Task Force (IETF). Since FPMIPv6 adopts the entities and concepts of Fast Handovers for Mobile IPv6 (FMIPv6) in Proxy Mobile IPv6 (PMIPv6), it reduces packet loss. A conventional scheme has been proposed to cooperate with an Authentication, Authorization and Accounting (AAA) infrastructure for authentication of a mobile node in PMIPv6. Although this approach is efficient, PMIPv6 remains vulnerable to various security threats when signaling messages are not secured from the outset, and it does not support global mobility. In this paper, the authors analyze the Kang-Park and ESS-FH schemes and propose an Enhanced Security Scheme for FPMIPv6 (ESS-FP). Based on the CGA method and public-key cryptography, ESS-FP provides strong key exchange and key independence, in addition to remedying the weaknesses of FPMIPv6; its handover latency is analyzed and compared with that of the Kang-Park scheme and ESS-FH.
In the last decade, the demand for Internet access in heterogeneous environments has kept growing, principally on mobile platforms such as buses, airplanes and trains. Consequently, several extensions and schemes have been introduced to achieve seamless handoff of mobile networks from one subnet to another. Even with these enhancements, the problems of security and availability have not yet been resolved, in particular the absence of an authentication mechanism between network entities that would prevent vulnerability to attacks. To eliminate the threats on the interface between the mobile access gateway (MAG) and the mobile router (MR) in the improving fast PMIPv6-based network mobility (IFP-NEMO) protocol, we propose a lightweight mutual authentication mechanism in an improving fast PMIPv6-based network mobility scheme (LMAIFPNEMO). This scheme uses authentication, authorization and accounting (AAA) servers to enhance the security of the IFP-NEMO protocol, which allows the integration of improved fast proxy mobile IPv6 (PMIPv6) in network mobility (NEMO). We use only symmetric cryptographic primitives, generated nonces and hash operations to ensure a secure authentication procedure. We then analyze the security aspects of the proposed scheme and evaluate it using the automated validation of internet security protocols and applications (AVISPA) software, which shows that the authentication goals are achieved.
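The following Python sketch is a generic, hypothetical illustration of nonce-and-hash mutual authentication between two parties that share a symmetric key, in the spirit of the primitives named above; it is not the LMAIFPNEMO protocol itself, and the key provisioning and message layout are assumptions.

```python
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)  # pre-shared symmetric key (assumed provisioned)

def tag(*parts):
    """HMAC-SHA256 over the concatenated protocol fields."""
    return hmac.new(KEY, b"|".join(parts), hashlib.sha256).digest()

# MR -> MAG: fresh nonce from the mobile router
n_mr = secrets.token_bytes(16)

# MAG -> MR: its own nonce plus a proof it knows the key and saw n_mr
n_mag = secrets.token_bytes(16)
proof_mag = tag(b"MAG", n_mr, n_mag)

# MR verifies the MAG, then returns its own proof over both nonces
assert hmac.compare_digest(proof_mag, tag(b"MAG", n_mr, n_mag))
proof_mr = tag(b"MR", n_mag, n_mr)

# MAG verifies the MR: mutual authentication achieved
assert hmac.compare_digest(proof_mr, tag(b"MR", n_mag, n_mr))
print("mutual authentication succeeded")
```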
IP technology for resource-constrained devices enables transparent end-to-end connections between a vast variety of devices and services in the Internet of Things (IoT). To protect these connections, several variants of traditional IP security protocols have recently been proposed for standardization, most notably the DTLS protocol. In this paper, we identify significant resource requirements for the DTLS handshake when employing public-key cryptography for peer authentication and key agreement purposes. These overheads particularly hamper secure communication for memory-constrained devices. To alleviate these limitations, we propose a delegation architecture that offloads the expensive DTLS connection establishment to a delegation server. By handing over the established security context to the constrained device, our delegation architecture significantly reduces the resource requirements of DTLS-protected communication for constrained devices. Additionally, our delegation architecture naturally provides authorization functionality when leveraging the central role of the delegation server in the initial connection establishment. Hence, in this paper, we present a comprehensive, yet compact solution for authentication, authorization, and secure data transmission in the IP-based IoT. The evaluation results show that compared to a public-key-based DTLS handshake our delegation architecture reduces the memory overhead by 64 %, computations by 97 %, and network transmissions by 68 %.
The objective of this paper is to explore the current notions of systems and “System of Systems” and to establish the case for quantitative characterization of their structural, behavioural and contextual facets, which will pave the way for further formal development (mathematical formulation). This is partly driven by stakeholder needs and perspectives, and partly by the necessity to attribute and communicate the properties of a system more succinctly, meaningfully and efficiently. The systematic quantitative characterization framework proposed will endeavour to extend the notion of emergence, allowing the definition of appropriate metrics in the context of a number of systems ontologies. The general characteristics and information content of the ontologies relevant to systems and systems of systems will be specified but not developed at this stage. The current supra-system, system and sub-system hierarchy is also explored for the formalisation of a standard notation, in order to depict a relative scale and order and to avoid seemingly arbitrary attributions.
Optimizing memory access is critical for performance and power efficiency. CPU manufacturers have developed sampling-based performance measurement units (PMUs) that report precise costs of memory accesses at specific addresses. However, this data is too low-level to be meaningfully interpreted and contains an excessive amount of irrelevant or uninteresting information. We have developed a method to gather fine-grained memory access performance data for specific data objects and regions of code with low overhead and attribute semantic information to the sampled memory accesses. This information provides the context necessary to more effectively interpret the data. We have developed a tool that performs this sampling and attribution and used the tool to discover and diagnose performance problems in real-world applications. Our techniques provide useful insight into the memory behaviour of applications and allow programmers to understand the performance ramifications of key design decisions: domain decomposition, multi-threading, and data motion within distributed memory systems.
With the development of urban traffic planning and management, analyzing and estimating origin-destination (OD) data in the city has become a highly important issue. The traditional method of acquiring OD information is the household survey, which is inefficient and expensive. In this paper, a new methodology is proposed that uses mobile phone data to analyze the mechanisms of trip generation, trip attraction and OD information. The mobile phone data acquisition process is introduced, and a pilot study is implemented in Beijing using the new method; much important traffic information can be extracted from the mobile phone data. We use the K-means clustering algorithm to divide the traffic zones, identify the attribution of each traffic zone from the mobile phone data, and then analyze the OD distribution and commuting travel. Finally, an experiment analyzing the "traffic tide phenomenon" in Beijing is carried out to verify the availability of the mobile phone data. The results of the experiments show great correspondence to the actual situation and reveal that mobile phone data has tremendous potential for OD analysis.
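As a hedged illustration of the clustering step described above (not the authors' code), the sketch below groups synthetic phone-observation coordinates into traffic zones with K-means; the number of zones and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic stand-in for geolocated mobile phone observations (lon, lat).
observations = np.vstack([
    rng.normal(loc=center, scale=0.01, size=(500, 2))
    for center in [(116.30, 39.90), (116.40, 39.95), (116.45, 39.88)]
])

# Partition the observations into k traffic zones.
k = 3
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(observations)

zone_of_point = kmeans.labels_          # zone index for every observation
zone_centers = kmeans.cluster_centers_  # representative centroid per zone
print(zone_centers)
```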
In multicarrier direct modulation direct detection systems, interaction between laser chirp and fiber group velocity dispersion induces subcarrier-to-subcarrier intermixing interferences (SSII) after detection. Such SSII become a major impairment in orthogonal frequency division multiplexing-based access systems, where a high modulation index, leading to large chirp, is required to maximize the system power budget. In this letter, we present and experimentally verify an analytical formulation to predict the level of signal and SSII and estimate the signal to noise ratio of each subcarrier, enabling improved bit-and-power loading and subcarrier attribution. The reported model is compact, and only requires the knowledge of basic link characteristics and laser parameters that can easily be measured.
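To illustrate how a per-subcarrier signal-to-noise ratio estimate of this kind can drive bit-and-power loading (a generic sketch, not the letter's analytical model; the SNR values and the SNR gap are assumptions), each subcarrier can be assigned roughly log2(1 + SNR/Γ) bits, rounded down to a supported constellation size:

```python
import numpy as np

# Assumed per-subcarrier SNR estimates in dB (e.g., signal vs. SSII + noise).
snr_db = np.array([22.0, 19.5, 17.0, 14.5, 12.0, 9.5, 7.0, 4.5])
snr = 10 ** (snr_db / 10)

gamma_db = 6.0                      # assumed SNR gap for the target BER
gamma = 10 ** (gamma_db / 10)

# Greedy bit loading: each subcarrier carries floor(log2(1 + SNR/gap)) bits,
# capped at 6 bits (64-QAM); subcarriers below 1 bit carry no data.
bits = np.clip(np.floor(np.log2(1 + snr / gamma)).astype(int), 0, 6)

print(dict(enumerate(bits)), "total bits per OFDM symbol:", bits.sum())
```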
An aspect of database forensics that has not yet received much attention in the academic research community is the presence of database triggers. Database triggers and their implementations have not yet been thoroughly analysed to establish what possible impact they could have on digital forensic analysis methods and processes. Conventional database triggers are defined to perform automatic actions based on changes in the database, which can occur at the data level or the data definition level. Digital forensic investigators might thus feel that database triggers do not have an impact on their work: they are simply interrogating the data and metadata without making any changes. This paper attempts to establish whether the presence of triggers in a database could potentially disrupt, manipulate or even thwart forensic investigations. The database triggers defined in the SQL standard were studied together with a number of database trigger implementations in order to establish what aspects might have an impact on digital forensic analysis. It is demonstrated in this paper that some of the current database forensic analysis methods are impacted by the possible presence of certain types of triggers in a database. Furthermore, the paper finds that the forensic interpretation and attribution processes should be extended to include the handling and analysis of database triggers if they are present in a database.
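As a hedged, self-contained illustration of why triggers matter for forensic interpretation (not an example from the paper), the following Python/SQLite sketch installs a trigger that silently rewrites a column whenever a row is updated, so the persisted value no longer reflects the user's action; the schema and figures are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
INSERT INTO accounts VALUES (1, 100.0);

-- Hypothetical malicious trigger: every update quietly skims 10% of the
-- written balance, so the stored data differs from what was entered.
CREATE TRIGGER skim AFTER UPDATE OF balance ON accounts
BEGIN
    UPDATE accounts SET balance = NEW.balance * 0.9 WHERE id = NEW.id;
END;
""")

con.execute("UPDATE accounts SET balance = 200.0 WHERE id = 1")
print(con.execute("SELECT balance FROM accounts").fetchone())  # (180.0,)
```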
Conducting active cyberdefense requires the acceptance of a proactive framework that acknowledges the lack of predictable symmetries between malicious actors and their capabilities and intent. Unlike physical weapons such as firearms, naval vessels, and piloted aircraft, all of which risk physical exposure when engaged in direct combat, cyberweapons can be deployed (often without their victims' awareness) under the protection of the anonymity inherent in cyberspace. Furthermore, it is difficult in the cyber domain to determine with accuracy what a malicious actor may target and what type of cyberweapon the actor may wield. These aspects imply an advantage for malicious actors in cyberspace that is greater than for those in any other domain, as the malicious cyberactor, under current international constructs and norms, has the ability to choose the time, place, and weapon of engagement. This being said, if defenders are to successfully repel attempted intrusions, then they must conduct an active cyberdefense within a framework that proactively engages threatening actions independent of a requirement to achieve attribution. This paper proposes that private business, government personnel, and cyberdefenders must develop a threat identification framework that does not depend upon attribution of the malicious actor, i.e., an attribution-agnostic cyberdefense construct. Furthermore, upon developing this framework, network defenders must deploy internally based cyberthreat countermeasures that take advantage of defensive network environmental variables and alter the calculus of nefarious individuals in cyberspace. Only by accomplishing these two objectives can the defenders of cyberspace actively combat malicious agents within the virtual realm.
In this paper we introduce PADAVAN, a novel anonymous data collection scheme for Vehicular Ad Hoc Networks (VANETs). PADAVAN allows users to submit data anonymously to a data consumer while preventing adversaries from submitting large amounts of bogus data. PADAVAN comprises an n-times anonymous authentication scheme, mix cascades, and various principles to protect the privacy of the submitted data itself. Furthermore, we evaluate the effectiveness of limiting an adversary to a fixed number of messages.
Aside from massive advantages in safety and convenience on the road, Vehicular Ad Hoc Networks (VANETs) introduce security risks to the users. Proposals of new security concepts to counter these risks are challenging to verify because of missing real-world implementations of VANETs. To fill this gap, we introduce VANETsim, an event-driven simulation platform specifically designed to investigate application-level privacy and security implications in vehicular communications. VANETsim focuses on realistic vehicular movement on real road networks and communication between the moving nodes. A powerful graphical user interface and an experimentation environment support the user when setting up or carrying out experiments.
WiFi fingerprint-based localization is regarded as one of the most promising techniques for indoor localization. The location of a to-be-localized client is estimated by mapping the measured fingerprint (WiFi signal strengths) against a database owned by the localization service provider. A common concern of this approach that has never been addressed in the literature is that it may leak the client's location information or compromise the service provider's data privacy. In this paper, we first analyze the privacy issues of WiFi fingerprint-based localization and then propose a Privacy-Preserving WiFi Fingerprint Localization scheme (PriWFL) that can protect both the client's location privacy and the service provider's data privacy. To reduce the computational overhead at the client side, we also present a performance enhancement algorithm that exploits indoor mobility prediction. Theoretical performance analysis and an experimental study are carried out to validate the effectiveness of PriWFL. Our implementation of PriWFL on a typical Android smartphone and the experimental results demonstrate the practicality and efficiency of PriWFL in real-world environments.
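For context, here is a hedged sketch of plain, non-private WiFi fingerprint matching, i.e. the operation that PriWFL is designed to protect: the client's measured RSS vector is compared against a database of reference fingerprints and the position is estimated from the k nearest entries. The access-point count, database, and choice of k are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference database: RSS fingerprints (dBm) over 6 APs at known positions.
db_rss = rng.uniform(-90, -40, size=(200, 6))
db_pos = rng.uniform(0, 50, size=(200, 2))      # (x, y) in meters

def localize(measured_rss, k=3):
    """Weighted k-nearest-neighbor position estimate from RSS distances."""
    d = np.linalg.norm(db_rss - measured_rss, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)               # closer fingerprints weigh more
    return (w[:, None] * db_pos[nearest]).sum(axis=0) / w.sum()

print(localize(db_rss[17] + rng.normal(0, 2, size=6)))  # close to db_pos[17]
```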
Many systems rely on passwords for authentication. Due to the numerous accounts for different services, users have to choose and remember a significant number of passwords. Password-Manager applications address this issue by storing the user's passwords; they are especially useful on mobile devices because of the ubiquitous access to the account passwords they provide. Password-Managers often use key derivation functions to convert a master password into a cryptographic key suitable for encrypting the list of passwords, thus protecting the passwords against unauthorized, offline access. Design and implementation flaws in the key derivation function therefore impact password security significantly: they can render the encryption of the password list useless, for example by allowing efficient brute-force attacks or, even worse, direct decryption of the stored passwords. In this paper, we analyze the key derivation functions of popular Android Password-Managers, with often startling results. With this analysis, we want to raise security awareness among developers of security-critical apps and provide an overview of the current state of implementation security in security-critical applications.
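As a hedged example of the kind of key derivation a password manager is expected to perform (a generic sketch, not code from any of the analyzed apps), a master password can be stretched with PBKDF2-HMAC-SHA256 using a random salt and a high iteration count before being used as an encryption key; the iteration count below is an illustrative assumption.

```python
import hashlib, hmac, os

def derive_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch the master password into a 256-bit key.

    A random per-user salt and a high iteration count make offline
    brute-force attacks on a stolen password database far more expensive.
    """
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, iterations, dklen=32
    )

salt = os.urandom(16)                       # stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)

# The same password and salt always yield the same key.
assert hmac.compare_digest(key, derive_key("correct horse battery staple", salt))
print(key.hex())
```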
As web-server spoofing increases, we investigate a novel technology termed ICmetrics, used to identify fraud in given software/hardware programs based on measurable quantities/features. ICmetrics technology is based on extracting features from a digital system's operation that may be integrated to generate unique identifiers for each system, or to create unique profiles that describe the system's actual behavior. This paper examines the properties of several behaviors as potential ICmetrics features to identify Android apps; it presents several high-quality features which meet the ICmetrics requirements and can be used for encryption key generation. Finally, the paper identifies four Android apps and verifies the use of ICmetrics by identifying a spoofed app as a different app altogether.
As recently shown in 2013, Android-driven smartphones and tablet PCs are vulnerable to so-called cold boot attacks. With physical access to an Android device, forensic memory dumps can be acquired with tools like FROST that exploit the remanence effect of DRAM to read out what is left in memory after a short reboot. While FROST can in some configurations be deployed to break full disk encryption, encrypted user partitions are usually wiped during a cold boot attack, such that a post-mortem analysis of main memory remains the only source of digital evidence. Therefore, we provide an in-depth analysis of Android's memory structures for system and application level memory. To leverage FROST in the digital investigation process of Android cases, we provide open-source Volatility plugins to support an automated analysis and extraction of selected Dalvik VM memory structures.