Biblio
We survey the state of the art on the Internet of Things (IoT) from a wireless communications point of view, as a result of the European FP7 project BUTLER, which focuses on pervasiveness, context-awareness and security for the IoT. In particular, we describe the efforts to develop so-called (wireless) enabling technologies, aimed at circumventing the many challenges involved in extending the current set of domains ("verticals") of IoT applications towards a "horizontal" (i.e. integrated) vision of the IoT. We start by illustrating current research efforts in machine-to-machine (M2M) communications, which are mainly focused on vertical domains, and we discuss some of them in detail, then depict the horizontal vision necessary for the future intelligent daily routine ("Smart Life"). We then describe the technical features of the most relevant heterogeneous communications technologies on which the IoT relies, in light of the ongoing M2M service layer standardization. Finally, we identify and present the key aspects, within three major cross-vertical categories, under which M2M technologies can function as enablers for the horizontal vision of the IoT.
String matching algorithms have broad applications in many areas of computer science, including operating systems, information retrieval, editors, Internet search engines, security applications and biological applications. Two important factors used to evaluate the performance of sequential string matching algorithms are the number of attempts and the total number of character comparisons during the matching process. This research integrates the good properties of three single string matching algorithms, Quick-Search, Zhu-Takaoka and Horspool, to produce a hybrid string matching algorithm called the Maximum-Shift algorithm. Three datasets are used to test the proposed algorithm: DNA, protein sequences and English text. The hybrid Maximum-Shift algorithm shows efficient results compared with four string matching algorithms, Quick-Search, Horspool, Smith and Berry-Ravindran, in terms of the number of attempts and the total number of character comparisons.
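As a rough illustration of the kind of shift heuristic such hybrids combine, the sketch below implements the classic Horspool bad-character rule, one of the three ingredients named above; it is a generic textbook version in Python, not the authors' Maximum-Shift code. A hybrid in this spirit would take, at each attempt, the largest of the shifts proposed by its constituent rules.

```python
# Illustrative sketch (not the Maximum-Shift implementation): the Horspool
# bad-character shift table and search loop.
def horspool_shift_table(pattern):
    m = len(pattern)
    # Default shift is the full pattern length; characters occurring in the
    # pattern (except its last position) get smaller shifts.
    table = {}
    for i in range(m - 1):
        table[pattern[i]] = m - 1 - i
    return table

def horspool_search(text, pattern):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    shift = horspool_shift_table(pattern)
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:        # attempt: compare the window
            return i
        # Shift by the bad-character rule applied to the rightmost window char.
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool_search("ACGTACGTGA", "ACGTGA"))  # -> 4
```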
This paper presents a human-model-based feature extraction method for a video surveillance retrieval system. The proposed method extracts, from a normalized scene, object features such as height, speed, and representative color using a simple human model based on multiple ellipses. Experimental results show that the proposed system can effectively track the moving routes of people, such as a missing child, an absconder, or a suspect, after an event.
A web service is a web-based application accessed over the Internet. Common web-based applications are deployed using web browsers and web servers. However, the security of web services remains a major concern, since it was not widely studied and integrated into the design stage of the web service standards; security mechanisms are add-on modules rather than well-defined solutions within the standards. Consequently, various web service security solutions have been defined to protect interactions over a network. Remote attestation is an authentication technique proposed by the Trusted Computing Group (TCG) that enables the verification of the trusted environment of platforms and assures that the reported information is accurate. To incorporate this method into the web services framework and thereby guarantee the trustworthiness and security of web-based applications, a new framework called TrustWeb is proposed. The TrustWeb framework integrates remote attestation into the SSL/TLS protocol to provide integrity information about the endpoint platforms involved. The framework enhances the TLS protocol with a mutual attestation mechanism, which helps address the weaknesses of transferring sensitive computations and offers a practical way to solve the remote trust issue in the client-server environment. In this paper, we describe the work of designing and building a framework prototype in which the attestation mechanism is integrated into the Mozilla Firefox browser and the Apache web server. We also present the framework solution and show its improvement in efficiency.
Social networking sites (SNSs), with their large number of users and large information base, seem to be the perfect breeding ground for exploiting the vulnerabilities of people, who are considered the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as "social engineering." Fraudulent and deceptive people use social engineering traps and tactics through SNSs to trick users into obeying them, accepting threats, and falling victim to various crimes such as phishing, sexual abuse, financial abuse, identity theft, and physical crime. Although organizations, researchers, and practitioners recognize the serious risks of social engineering, there is a severe lack of understanding and control of such threats. This may be partly due to the complexity of human behaviors in approaching, accepting, and failing to recognize social engineering tricks. This research aims to investigate the impact of source characteristics on users' susceptibility to social engineering victimization in SNSs, particularly Facebook. Using the grounded theory method, we develop a model that explains which source characteristics influence Facebook users to judge the attacker as credible, and how.
Commercial Wireless Sensor Networks (WSNs) can be accessed through sensor web portals. However, the associated security implications and threats to 1) users/subscribers, 2) investors and 3) third-party operators of sensor web portals have not been examined in their entirety; rather, contemporary work handles them in parts. In this paper, we discuss different kinds of security attacks and vulnerabilities at different layers affecting users, investors including Wireless Sensor Network Service Providers (WSNSPs) and the WSN itself, in relation to the two well-known security documents of the Department of Homeland Security (DHS) and the Department of Defense (DoD), which remain the standard security documents to date. Further, we propose a comprehensive cross-layer security solution, in light of the guidelines given in the aforementioned documents, that is minimalist in implementation and achieves the purported security goals.
The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for error in both input and output data. We study the local convergence of the GD-TLS algorithm and find bounds on its step size that ensure its stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with the simulation experiments.
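For readers unfamiliar with this algorithm family, the following minimal sketch shows one common way to write a stochastic-gradient update for the instantaneous TLS cost e_k^2 / (1 + ||w||^2), which penalizes noise in both input and output; the update rule, step size and toy data are our own illustration, not the exact algorithm, notation or parameters analyzed in the paper.

```python
# Minimal GD-TLS-style sketch (our own formulation, for illustration only).
import numpy as np

def gd_tls(X, d, mu=0.01):
    w = np.zeros(X.shape[1])
    for x_k, d_k in zip(X, d):
        e_k = d_k - w @ x_k
        denom = 1.0 + w @ w
        # Stochastic-gradient step on the instantaneous cost e^2 / (1 + ||w||^2).
        w += (mu / denom) * (e_k * x_k + (e_k ** 2 / denom) * w)
    return w

# Toy example: noisy input and output around a true weight vector.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X_clean = rng.standard_normal((5000, 3))
d = X_clean @ w_true + 0.05 * rng.standard_normal(5000)
X = X_clean + 0.05 * rng.standard_normal(X_clean.shape)
print(gd_tls(X, d, mu=0.02))  # should approach w_true
```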
Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross-Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process, leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, several factors limit the applicability of static string analysis techniques in general: 1) the undecidability of static string analysis requires the use of approximations, leading to false positives, 2) static string analysis tools do not handle all string operations, 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not analyzable statically, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
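To make the test-generation step concrete, the sketch below shows one way to derive transition-coverage test inputs from a toy vulnerability-signature automaton; the automaton, function names and coverage procedure are illustrative assumptions, not the paper's algorithms.

```python
# Hedged illustration: generate test inputs from a signature automaton so
# that every transition is exercised at least once (transition coverage).
from collections import deque

def shortest_prefixes(dfa, start):
    """BFS: shortest input string reaching each state."""
    prefix = {start: ""}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for sym, t in dfa.get(s, {}).items():
            if t not in prefix:
                prefix[t] = prefix[s] + sym
                queue.append(t)
    return prefix

def transition_coverage_tests(dfa, start):
    prefix = shortest_prefixes(dfa, start)
    tests = set()
    for s, edges in dfa.items():
        for sym in edges:
            if s in prefix:
                tests.add(prefix[s] + sym)   # reach s, then fire the edge
    return sorted(tests)

# Toy signature: strings containing "<s" (a crude stand-in for an XSS payload).
dfa = {0: {"<": 1, "a": 0}, 1: {"s": 2, "a": 0}, 2: {}}
print(transition_coverage_tests(dfa, 0))
```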
This paper proposes a new cross-layer packet scheduling scheme for multimedia traffic in a satellite Long Term Evolution (LTE) network that adopts MIMO technology. The satellite LTE air interface will provide global coverage and hence complement its terrestrial counterpart in the provision of mobile services (especially multimedia services) to users across the globe. A dynamic packet scheduling scheme is very important for effective utilization of the limited available resources in satellite LTE networks without compromising the Quality of Service (QoS) demands of multimedia traffic; hence, the need for an effective packet scheduling algorithm cannot be overemphasized. This paper proposes a new scheduling algorithm, termed the Cross-layer Based Queue-Aware (CBQA) Scheduler, that provides a good trade-off among QoS, fairness and throughput. The newly proposed scheduler is compared to existing ones through simulations using various performance indices. A land mobile dual-polarized GEO satellite system is considered for this work.
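As a rough illustration of what a cross-layer, queue-aware decision rule can look like, the sketch below allocates resource blocks to the user maximizing the product of achievable rate and head-of-line delay; this generic metric and the toy numbers are our own assumptions, not the actual CBQA scheduler.

```python
# Hedged sketch of a generic queue-aware cross-layer scheduling rule.
def schedule(resource_blocks, users):
    """users: dict name -> {'rate': bits/s per RB, 'hol_delay': seconds}."""
    allocation = {}
    for rb in range(resource_blocks):
        # Balance physical-layer throughput against queue-state urgency.
        best = max(users, key=lambda u: users[u]["rate"] * users[u]["hol_delay"])
        allocation[rb] = best
        users[best]["hol_delay"] *= 0.5   # served user becomes less urgent
    return allocation

users = {
    "video_user": {"rate": 2.0e6, "hol_delay": 0.08},
    "voip_user":  {"rate": 0.5e6, "hol_delay": 0.30},
    "web_user":   {"rate": 3.0e6, "hol_delay": 0.01},
}
print(schedule(4, users))
```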
One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. During the last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is accompanied by systematic security regression testing. Beginning with an analysis of their current degree of exposure to XSS, we extend the empirical study to a decade of the most popular web browser versions. We use XSS attack vectors as unit test cases and propose a new method, supported by a tool, to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered.
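The idea of treating XSS attack vectors as unit test cases can be sketched as follows; the vectors, the sanitize() stand-in and the assertions are illustrative only (the paper exercises browsers with such vectors rather than a sanitizer).

```python
# Sketch: XSS attack vectors reused as regression test cases.
import html
import unittest

def sanitize(untrusted):
    # Placeholder component under test: HTML-escapes the input.
    return html.escape(untrusted, quote=True)

XSS_VECTORS = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    "\"><svg onload=alert(1)>",
]

class XssRegressionTest(unittest.TestCase):
    def test_vectors_are_neutralized(self):
        for vector in XSS_VECTORS:
            out = sanitize(vector)
            # A neutralized vector must not retain raw tag delimiters.
            self.assertNotIn("<", out)
            self.assertNotIn(">", out)

if __name__ == "__main__":
    unittest.main()
```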
Cloud computing is an application and a set of services delivered through the Internet. It is an emerging technology for shared infrastructure, but it lacks adequate access rights and security mechanisms. Given these security gaps for cloud users, our system focuses on the security provided through a token management system. Cloud computing is based on virtual shared servers over the Internet that provide infrastructure, software, platform and security as services, and security plays an important role in any cloud service. Here, security is provided through three types of services: mutual authentication, directory services, and token granting for resources. The existing token issuing mechanism does not scale to large data sets and also increases memory overhead between the client and the server. Hence, our proposed work focuses on providing tokens to users in a way that addresses the problems of scalability and memory overhead. The proposed token management framework monitors the entire operation of the cloud and thereby manages the entire cloud infrastructure. Our model falls under the new category of cloud model known as "Security as a Service". This paper provides the security framework as an architectural model to verify user authorization and the correctness of stored data, thereby giving the data owner a guarantee for the resources stored in the cloud. The framework also describes how tokens are stored in a secured manner, and it facilitates the search and usage of tokens for auditing purposes and supervision of users.
Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. Mutation analysis is widely considered to be a powerful approach to testing, and hence is often used to evaluate other test criteria in terms of mutation score, which is the fraction of mutants that are killed by a test set. But mutation analysis is also known to produce large numbers of redundant mutants, and these mutants can inflate the mutation score. While mutation approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is, at present, no way to characterize the contribution of, for example, a particular approach to reduced mutation with respect to any theoretical minimal set of mutants. This paper's contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result of the paper shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method on a widely used benchmark.
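A minimal sketch of the underlying idea, under our own assumptions about the kill-matrix representation: a mutant is redundant with respect to a test suite if some retained mutant is killed only by tests that also kill it, so killing the retained mutant necessarily kills the redundant one. The data and names below are illustrative, not the paper's algorithm.

```python
# Hedged sketch of subsumption-based mutant set minimization w.r.t. a test suite.
def minimize_mutants(kill_sets):
    """kill_sets: dict mutant -> frozenset of tests that kill it.
    Returns a reduced list: killing these mutants kills every killable mutant."""
    killable = {m: k for m, k in kill_sets.items() if k}
    minimal, seen = [], set()
    for m, k in sorted(killable.items(), key=lambda item: len(item[1])):
        if k in seen:
            continue                      # duplicate: exactly the same killing tests
        # m is redundant if some kept mutant is killed by a subset of m's
        # killing tests (any test killing that mutant also kills m).
        if any(kill_sets[kept] <= k for kept in minimal):
            continue
        minimal.append(m)
        seen.add(k)
    return minimal

kills = {
    "m1": frozenset({"t1"}),
    "m2": frozenset({"t1", "t2"}),   # redundant: killing m1 kills m2
    "m3": frozenset({"t3"}),
    "m4": frozenset(),               # unkilled/equivalent, ignored
}
print(minimize_mutants(kills))       # -> ['m1', 'm3']
```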
The concept of smart cities envisions services that provide distraction-free support for citizens. To realize this vision, the services must adapt to the citizens' situations, behaviors and intents at runtime. This requires services to gather and process the context of their users. Mobile devices provide a promising basis for determining context in an automated manner on a large scale. However, despite the wide availability of versatile programmable mobile platforms such as Android and iOS, there are only a few examples of smart city applications. One reason for this is that existing software platforms primarily focus on low-level resource management, which requires application developers to repeatedly tackle many challenging tasks. Examples include efficient data acquisition, secure and privacy-preserving data distribution, as well as interoperable data integration. In this paper, we describe the GAMBAS middleware, which aims to simplify the development of smart city applications. To do this, GAMBAS introduces a Java-based runtime system with an associated software development kit (SDK). To clarify how the runtime system and the SDK can be used for application development, we describe two simple applications that highlight different middleware functions.
With the rise of the underground Internet economy, automated malicious programs, popularly known as malware, have become a major threat to computers and information systems connected to the Internet. Properties such as self-healing, self-hiding and the ability to deceive security devices make such software hard to detect and mitigate. Therefore, the detection and mitigation of such malicious software is a major challenge for researchers and security professionals. The conventional systems for the detection and mitigation of such threats are mostly signature-based. A major drawback of such systems is their inability to detect malware samples for which no signature exists in their signature database; such malware is known as zero-day malware. Moreover, more and more malware writers use obfuscation techniques such as polymorphism, metamorphism, packing and encryption to avoid detection by antivirus software. Therefore, traditional signature-based detection is neither effective nor efficient for detecting zero-day malware. Hence, to improve the effectiveness and efficiency of malware detection, we use a classification method based on structural information and behavioral specifications. In this paper, we use both static and dynamic analysis approaches. In static analysis, we extract the features of an executable file and then classify it. In dynamic analysis, we capture the traces of executable files using NtTrace within a controlled environment. Experimental results indicate that our proposed algorithm is effective in extracting the malicious behavior of executables, and it can also be used to detect malware variants.
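The general flavor of trace-based classification can be sketched as follows, assuming scikit-learn is available; the API-call traces, labels and model choice are toy assumptions for illustration and do not reproduce the authors' feature set or classifier.

```python
# Hedged sketch: turn API-call traces into n-gram features and train a classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

# Toy traces: whitespace-separated API call names per sample (illustrative only).
traces = [
    "NtOpenFile NtReadFile NtClose",                        # benign-like
    "NtCreateFile NtWriteFile NtClose",                     # benign-like
    "NtOpenProcess NtWriteVirtualMemory NtCreateThreadEx",  # injection-like
    "NtOpenProcess NtMapViewOfSection NtResumeThread",      # injection-like
]
labels = [0, 0, 1, 1]

vectorizer = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")
X = vectorizer.fit_transform(traces)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

new_trace = ["NtOpenProcess NtWriteVirtualMemory NtResumeThread"]
print(clf.predict(vectorizer.transform(new_trace)))  # expected: [1]
```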
Modern cyber systems and their integration with the infrastructure have a clear effect on productivity and quality of life. Their involvement in our daily life elevates the need for means to ensure their resilience against attacks and failure. One major threat is software monoculture: recent research has demonstrated the danger of software monoculture and proposed diversity to reduce the attack surface. In this paper, we propose ChameleonSoft, a multidimensional software diversity approach that, in effect, induces spatiotemporal software behavior encryption and a moving target defense. ChameleonSoft introduces a loosely coupled, online programmable software-execution foundation separating logic, state and physical resources. The elastic construction of the foundation enables ChameleonSoft to define running software as a set of behaviorally-mutated, functionally-equivalent code variants. ChameleonSoft intelligently shuffles these variants at runtime while changing their physical location, inducing enough untraceable confusion and diffusion to encrypt the execution behavior of the running software. ChameleonSoft is also equipped with an autonomic failure recovery mechanism for enhanced resilience. In order to test the applicability of the proposed approach, we present a prototype of the ChameleonSoft Behavior Encryption (CBE) and recovery mechanisms. Further, using analysis and simulation, we study the performance and security aspects of the proposed system. This study aims to assess the provisioned level of security by measuring the avalanche effect percentage and the induced confusion and diffusion levels to evaluate the strength of the CBE mechanism. Further, we compute the computational cost of security provisioning and of enhancing system resilience.
Recent advances in adaptive filter theory and the hardware for signal acquisition have led to the realization that purely linear algorithms are often not adequate in these domains. Nonlinearities in the input space have become apparent with today's real-world problems. Algorithms that process the data must keep pace with the advances in signal acquisition. Recently, kernel adaptive (online) filtering algorithms have been proposed that make no assumptions regarding the linearity of the input space. Additionally, advances in wavelet data compression and dimension reduction have also led to new algorithms that are appropriate for producing a hybrid nonlinear filtering framework. In this paper, we utilize a combination of wavelet dimension reduction and kernel adaptive filtering. We derive algorithms in which the dimension of the data is reduced by a wavelet transform. We follow this with kernel adaptive filtering algorithms on the reduced-dimension data to find the appropriate model parameters, demonstrating improved minimization of the mean-squared error (MSE). Another important feature of our methods is that the wavelet filter is also chosen based on the data, on the fly. In particular, it is shown that by using a few optimal wavelet coefficients from the constructed wavelet filter, for both the training and testing data sets, as the input to the kernel adaptive filter, convergence to the near-optimal learning curve (MSE) results. We demonstrate these algorithms on simulated data and on a real data set from food processing.
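A minimal sketch of the two-stage idea, under our own simplifications: a fixed one-level Haar approximation stands in for the data-driven wavelet dimension reduction, followed by kernel LMS with a Gaussian kernel on the reduced inputs. The filter choice, step size and data are illustrative and do not reproduce the paper's adaptive wavelet selection.

```python
# Hedged sketch: wavelet-style dimension reduction followed by kernel LMS.
import numpy as np

def haar_reduce(x):
    """One-level Haar approximation: keep pairwise averages (half the length)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0)

def klms(X, d, eta=0.5, sigma=1.0):
    """Kernel LMS with a Gaussian kernel; returns the a priori errors."""
    centers, alphas, errors = [], [], []
    for x_k, d_k in zip(X, d):
        if centers:
            k_vec = np.exp(-np.sum((np.array(centers) - x_k) ** 2, axis=1)
                           / (2 * sigma ** 2))
            y_k = float(np.dot(alphas, k_vec))
        else:
            y_k = 0.0
        e_k = d_k - y_k
        centers.append(x_k)
        alphas.append(eta * e_k)
        errors.append(e_k)
    return np.array(errors)

rng = np.random.default_rng(1)
X_full = rng.standard_normal((300, 8))
X_red = np.array([haar_reduce(x) for x in X_full])      # 8 -> 4 dimensions
d = np.sin(X_red.sum(axis=1)) + 0.01 * rng.standard_normal(300)
mse = klms(X_red, d) ** 2
print(mse[:10].mean(), mse[-10:].mean())  # the error typically decreases over time
```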
Wireless Sensor Networks (WSNs) are used in many applications in military, environmental, and health-related areas. These applications often include the monitoring of sensitive information such as enemy movement on the battlefield or the location of personnel in a building, so security is important in WSNs. However, WSNs suffer from many constraints, including low computation capability, small memory, limited energy resources, susceptibility to physical capture, and the use of insecure wireless communication channels. These constraints make security in WSNs a challenge. In this paper, we explore security issues in WSNs. First, the constraints, security requirements and attacks with their corresponding countermeasures in WSNs are explained. Individual sensor nodes are subject to compromise: an adversary can inject false reports into the network via compromised nodes, and can also create a gray hole through compromised nodes. If these two kinds of attacks occur simultaneously in a network, some of the existing methods fail to defend against them. The Ad hoc On-Demand Distance Vector (AODV) scheme is used for detecting the gray-hole attack, and Statistical En-Route Filtering is used for detecting false reports. To increase the security level, the Elliptic Curve Cryptography (ECC) algorithm is used. Simulation results obtained so far show reduced energy consumption and, to some extent, greater network security.
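To illustrate the ECC ingredient mentioned above, the sketch below performs a Diffie-Hellman style key agreement on a tiny textbook curve; the curve parameters and keys are deliberately insecure teaching values and are unrelated to what a real WSN deployment would use.

```python
# Toy elliptic-curve Diffie-Hellman sketch (insecure teaching parameters):
# y^2 = x^3 + A*x + B over GF(P), with textbook point addition.
P, A, B = 11, 1, 6
O = None  # point at infinity

def add(p1, p2):
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, point):
    result = O
    while k:                      # double-and-add scalar multiplication
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result

G = (2, 7)                        # a point on the toy curve
alice_secret, bob_secret = 4, 7
alice_pub, bob_pub = mul(alice_secret, G), mul(bob_secret, G)
# Both sides derive the same shared point without revealing their secrets.
assert mul(alice_secret, bob_pub) == mul(bob_secret, alice_pub)
print("shared point:", mul(alice_secret, bob_pub))
```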
Communities vary from country to country. There are civil societies and rural communities, which also differ in terms of geography, climate and economy. This shows that the use of social networks varies from region to region depending on the demographics of the communities. In this paper, we examine the most important problems of social networks, as well as the risks arising from the human element. We raise the problems that social networks pose in the transformation of societies affected by the global economy. Social networking integration needs to strengthen social ties, which leads to the existence of these problems. For this reason, we focus on Internet security risks over social networks, study risk management, and then look at resolving the various problems that arise from the use of social networks.
As multi-tenant authorization and federated identity management systems for cloud computing mature, the provisioning of services using this paradigm allows maximum efficiency for businesses that require access control. However, regarding scalability support, mainly horizontal scalability, some characteristics of approaches based on central authentication protocols are problematic. The objective of this work is to address these issues by providing an adapted sticky-session mechanism for a Shibboleth architecture using CAS. This alternative, compared with the recommended shared-memory approach, shows improved efficiency and less overall infrastructure complexity.
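The sticky-session idea itself can be sketched in a few lines; the hash-based routing, backend names and session identifiers below are our own illustration, not the paper's Shibboleth/CAS implementation.

```python
# Hedged sketch of sticky sessions: requests carrying the same session
# identifier are always routed to the same backend node, so per-session
# authentication state never has to be shared across nodes.
import hashlib

BACKENDS = ["idp-node-1", "idp-node-2", "idp-node-3"]  # hypothetical node names

def pick_backend(session_id, backends=BACKENDS):
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# The same session always maps to the same node; different sessions spread out.
print(pick_backend("JSESSIONID=abc123"))
print(pick_backend("JSESSIONID=abc123"))  # identical to the line above
print(pick_backend("JSESSIONID=zzz999"))
```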
Near Field Communication (NFC)-based mobile phone services offer a lifeline to the under-appreciated multiapplication smart card initiative. The initiative could effectively replace heavy wallets full of smart cards for mundane tasks. However, the issue of the deployment model still lingers on. Possible approaches include, but are not restricted to, the User Centric Smart card Ownership Model (UCOM), the GlobalPlatform Consumer Centric Model, and the Trusted Service Manager (TSM). In addition, the multiapplication smart card architecture can be a GlobalPlatform Trusted Execution Environment (TEE) and/or a User Centric Tamper-Resistant Device (UCTD), which provide cross-device security and privacy preservation platforms to their users. In the multiapplication smart card environment, there might not be a prior off-card trusted relationship between a smart card and an application provider. Therefore, as a possible solution to overcome the absence of prior trusted relationships, this paper proposes the concept of a Trusted Platform Module (TPM) for smart cards (embedded devices) that can act as a point of reference for establishing the necessary trust between the device and an application provider, and among applications.
This paper presents the verification and model checking of the Trivial File Transfer Protocol (TFTP). Model checking is a technique for software verification that can detect concurrency defects within appropriate constraints by performing an exhaustive state-space search on a software design or implementation, alerting the implementing organization to potential design deficiencies that are otherwise difficult to discover. TFTP is implemented on top of the Internet User Datagram Protocol (UDP) or any other datagram protocol. We aim to create a design model of the TFTP protocol, extended with a window size, in Promela, to simulate it and validate specified properties. The verification has been done using the model checking tool SPIN, which accepts design specifications written in the verification language PROMELA. The results show that TFTP is free of livelocks.
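The flavor of explicit-state exploration that tools like SPIN perform can be conveyed in a few lines of Python; the toy windowed-transfer model below and its deadlock check are our own simplification, not the paper's Promela specification of TFTP.

```python
# Minimal sketch of explicit-state search: enumerate every reachable state of
# a toy windowed transfer and assert that no non-final state is a deadlock.
from collections import deque

BLOCKS, WINDOW = 3, 2

def successors(state):
    sent, acked = state                      # counters of DATA sent / ACKed
    succ = []
    if sent < BLOCKS and sent - acked < WINDOW:
        succ.append((sent + 1, acked))       # sender transmits the next block
    if acked < sent:
        succ.append((sent, acked + 1))       # receiver acknowledges a block
    return succ

start, final = (0, 0), (BLOCKS, BLOCKS)
seen, queue = {start}, deque([start])
while queue:
    state = queue.popleft()
    nxt = successors(state)
    assert nxt or state == final, f"deadlock in state {state}"
    for s in nxt:
        if s not in seen:
            seen.add(s)
            queue.append(s)
print(f"{len(seen)} reachable states, no deadlock found")
```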
In this paper we present WiMesh, a software tool we developed during the last ten years of research conducted in the field of multi-radio wireless mesh networks. WiMesh serves two main purposes: (i) to run different algorithms for the assignment of channels, transmission rate and power to the available network radios; (ii) to automatically set up and run ns-3 simulations based on the network configuration returned by such algorithms. WiMesh consists of three libraries and three corresponding utilities that make it easy to conduct experiments. All these utilities accept as input an XML configuration file where a number of options can be specified. WiMesh is freely available to the research community, with the purpose of easing the development of new algorithms and the verification of their performance.
Cloud computing refers to connecting a large number of computers through a communication channel such as the Internet. Through cloud computing we send, receive and store data on the Internet. Cloud computing also provides an opportunity for parallel computing by using a large number of virtual machines. Nowadays, performance, scalability, availability and security represent the major risks in cloud computing. In this paper we highlight the issues of security, availability and scalability, and we identify how to make cloud-based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing and discuss some of the characteristics involved in achieving high performance.