Biblio
It is expected that clean-slate network designs will be implemented for wide-area network applications. Multi-tenancy in OpenFlow networks is an effective method of supporting a clean-slate network design, because sharing the substrate network improves cost-effectiveness. To guarantee the programmability of OpenFlow for tenants, complete virtualization of the flow space (i.e., the header values of data packets) is necessary. Wide-area substrate networks typically have multiple administrators, so flow space virtualization must work across multiple administrative domains. In existing techniques, a third party is solely responsible for managing the mapping of header values on behalf of substrate network administrators and tenants, despite the severity of a third-party failure. In this paper, we propose AutoVFlow, a mechanism that allows flow space virtualization in wide-area networks without the need for a third party. Substrate network administrators implement flow space virtualization autonomously: each is responsible for virtualizing the flow space of the switches in its own substrate network. Using a prototype of AutoVFlow, we measured the virtualization overhead, and the results show that it is negligible.
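A minimal sketch of the per-domain header-value mapping idea described above, assuming a simple dictionary-based mapper with hypothetical names (AutoVFlow's actual mapping scheme and OpenFlow message handling are not reproduced here):

```python
# Minimal sketch of per-domain flow-space virtualization (hypothetical names).
# Each substrate administrator keeps its own mapping between tenant-visible
# header values and substrate header values, with no third-party coordinator.

class DomainVirtualizer:
    def __init__(self, domain_id):
        self.domain_id = domain_id
        self.to_substrate = {}   # (tenant_id, virtual_header) -> substrate_header
        self.to_virtual = {}     # substrate_header -> (tenant_id, virtual_header)
        self._next = 0

    def map_out(self, tenant_id, virtual_header):
        """Rewrite a tenant header value to a domain-unique substrate value."""
        key = (tenant_id, virtual_header)
        if key not in self.to_substrate:
            substrate = (self.domain_id, self._next)
            self._next += 1
            self.to_substrate[key] = substrate
            self.to_virtual[substrate] = key
        return self.to_substrate[key]

    def map_in(self, substrate_header):
        """Restore the tenant-visible header value at the domain edge."""
        return self.to_virtual[substrate_header]

# Two administrators virtualize autonomously; no shared mapping authority.
d1, d2 = DomainVirtualizer("ISP-A"), DomainVirtualizer("ISP-B")
s = d1.map_out("tenant-1", ("10.0.0.1", "10.0.0.2"))
print(d1.map_in(s))  # -> ('tenant-1', ('10.0.0.1', '10.0.0.2'))
```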
Cloud computing refers to connecting many computers through a communication channel such as the Internet. Through cloud computing we send, receive, and store data on the Internet. Cloud computing also offers parallel computing by using a large number of virtual machines. Nowadays, performance, scalability, availability, and security represent the major risks in cloud computing. In this paper we highlight the issues of security, availability, and scalability, and we identify how to make a cloud-based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing and discuss some of the characteristics involved in achieving high performance.
Multiple Inductive Loop Detectors are advanced inductive loop sensors that can measure traffic flow parameters even in conditions where the traffic is heterogeneous and does not conform to lanes. The sensor consists of many inductive loops in series, with each loop having a parallel capacitor across it. These inductive and capacitive elements may undergo open- or short-circuit faults during operation, leading to erroneous interpretation of the data acquired from the loops. Conventional methods for fault diagnosis in inductive loop detectors consume time and effort, as they require experienced technicians and involve extracting the loops from the saw-cut slots on the road. This also means that traffic flow parameters cannot be measured until the sensor system becomes functional again, and the repair activities disturb traffic flow. This paper presents a method for automating fault diagnosis in series-connected Multiple Inductive Loop Detectors, based on an impulse test. The system diagnoses open/short faults associated with the inductive and capacitive elements of the sensor structure and displays the fault status conveniently. Since both the fault location and the fault type can be precisely identified, the repair actions are localised, resulting in significant savings in repair time and cost. An embedded system was developed to realize this scheme and tested on a loop prototype.
Automated server parameter tuning is crucial to the performance and availability of Internet applications hosted in cloud environments. It is challenging due to the high dynamics and burstiness of workloads, multi-tier service architectures, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing the effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing the average response time of multi-tier applications. Reinforcement learning is a trial-and-error decision-making process that determines the direction of parameter tuning rather than the quantitative values needed for agile tuning; it relies on a predefined adjustment value for each tuning action. However, it is nontrivial or even infeasible to find an optimal value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of fast online learning and the self-adaptiveness of neural networks and fuzzy control. Due to its model independence, it is robust to highly dynamic and bursty workloads, and it is agile in server parameter tuning due to its quantitative control outputs. We implemented the new approach on a testbed of a virtualized data center hosting the RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach both in improving effective system throughput and in minimizing average response time.
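The key contrast drawn above is between fixed-step tuning actions and quantitative control outputs. The following is a minimal sketch, not the paper's neural fuzzy controller, of how a small fuzzy-style rule base can turn a throughput error into a quantitative adjustment of a hypothetical server parameter:

```python
# Minimal sketch (not the paper's controller): a tiny fuzzy-style rule base that
# maps the throughput error and its change to a *quantitative* adjustment of a
# server parameter (e.g., a connection limit), instead of a fixed RL step size.

def membership(x, lo, hi):
    """Triangular membership centered between lo and hi, clipped to [0, 1]."""
    mid = (lo + hi) / 2.0
    half = (hi - lo) / 2.0
    return max(0.0, 1.0 - abs(x - mid) / half) if half > 0 else 0.0

def fuzzy_adjustment(error, delta_error, max_step=50):
    # error: (target - measured) effective throughput, normalized to [-1, 1]
    rules = [
        (membership(error, 0.3, 1.7), +1.0),    # large positive error  -> increase a lot
        (membership(error, -0.3, 0.9), +0.4),   # moderate positive     -> increase a little
        (membership(error, -0.6, 0.6), 0.0),    # near zero             -> hold
        (membership(error, -0.9, 0.3), -0.4),   # moderate negative     -> decrease a little
        (membership(error, -1.7, -0.3), -1.0),  # large negative        -> decrease a lot
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    # Damp the output when the error is already shrinking quickly.
    damping = 1.0 - min(1.0, abs(delta_error))
    return int(max_step * (num / den) * damping)

print(fuzzy_adjustment(error=0.8, delta_error=0.1))   # large quantitative increase
print(fuzzy_adjustment(error=0.05, delta_error=0.0))  # close to no change
```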
Intrusion response is a new generation of technology based on the idea of active defence, and it is highly significant for the protection of network security. However, existing automatic intrusion response systems find it difficult to judge the real "danger" of an invasion or attack. In this study, an immune-inspired adaptive automated intrusion response system model, named AIAIM, is presented. With descriptions of self, non-self, memory detectors, mature detectors, and immature detectors of network transactions, real-time danger evaluation equations for hosts and the network are built up. Automated response policies are then applied or adjusted according to the real-time danger and attack intensity, which not only solves the problem that current automated response models cannot detect true intrusions or attack actions, but also greatly reduces response times and response costs. Theoretical analysis and experimental results show that AIAIM provides a positive and active network security method, which helps to overcome the limitations of traditional passive network security systems.
When a user accesses a resource, the accounting process at the server side keeps track of resource usage in order to charge the user. In cloud computing, a user may use more than one service provider and may need two independent service providers to work together. In this user-centric context, the user is the owner of the information and has the right to authorize a third-party application to access the protected resource on the user's behalf. Therefore, the user also needs to monitor the usage of the resources he has granted to third-party applications. However, existing accounting protocols were designed to monitor how the user consumes resources from the service provider, not how third parties do. This paper proposes a user-centric accounting model called AccAuth, which adds an accounting layer to the OAuth protocol. A prototype was implemented and the proposed model was evaluated against the standard requirements. The results show that AccAuth passed all the requirements.
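The following is a minimal sketch of what a user-centric accounting layer on top of OAuth might record; the class and method names are hypothetical and do not reproduce the AccAuth protocol itself:

```python
# Sketch of a user-centric accounting layer over OAuth (hypothetical names):
# the resource server logs every access per delegated token so the resource
# owner can audit how third-party applications use the granted resources.

import time
from collections import defaultdict

class AccountingLayer:
    def __init__(self):
        self.usage = defaultdict(list)   # access_token -> list of usage records
        self.grants = {}                 # access_token -> (owner, client_app, scope)

    def register_grant(self, token, owner, client_app, scope):
        self.grants[token] = (owner, client_app, scope)

    def record_access(self, token, resource, amount):
        """Called by the resource server whenever a delegated token is used."""
        owner, client_app, scope = self.grants[token]
        self.usage[token].append({
            "time": time.time(), "client": client_app,
            "resource": resource, "amount": amount, "scope": scope,
        })

    def report_for_owner(self, owner):
        """Return all third-party usage of resources owned by this user."""
        return [rec for tok, recs in self.usage.items()
                for rec in recs if self.grants[tok][0] == owner]

acct = AccountingLayer()
acct.register_grant("tok-123", owner="alice", client_app="photo-printer", scope="photos.read")
acct.record_access("tok-123", resource="/photos/42.jpg", amount=1)
print(acct.report_for_owner("alice"))
```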
Software components are software units designed to interact with other independently developed software components, and they are assembled by third parties into software applications. The success of the final application largely depends upon the selection of appropriate, easy-to-fit components according to customer needs. It is therefore a primary requirement to evaluate the quality of components before using them in the final application. Not all quality characteristics are of the same significance for a particular application in a specific domain, so it is necessary to identify those characteristics and sub-characteristics that have higher importance than the others. The Analytic Network Process (ANP) is used to solve decision problems in which the attributes of the decision parameters form dependency networks. The objective of this paper is to propose an ANP-based model to prioritize the characteristics and sub-characteristics of quality and to estimate a numeric value for software quality.
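As a hedged illustration of the ANP machinery mentioned above, the sketch below computes priorities from a small, invented column-stochastic supermatrix by raising it to powers until it converges (the limit-supermatrix step); the weights are illustrative only, not the paper's model:

```python
# Minimal sketch of the ANP limit-supermatrix step (illustrative numbers only):
# dependencies among quality sub-characteristics are collected in a
# column-stochastic supermatrix, which is raised to powers until it converges;
# the limit columns give the relative priority of each sub-characteristic.

import numpy as np

# Hypothetical dependency weights among three sub-characteristics
# (e.g., reliability, usability, maintainability); columns sum to 1.
W = np.array([
    [0.0, 0.6, 0.3],
    [0.5, 0.0, 0.7],
    [0.5, 0.4, 0.0],
])

def limit_supermatrix(W, tol=1e-9, max_iter=1000):
    M = W.copy()
    for _ in range(max_iter):
        nxt = M @ W
        if np.max(np.abs(nxt - M)) < tol:
            return nxt
        M = nxt
    return M

priorities = limit_supermatrix(W)[:, 0]
print(np.round(priorities, 3))  # relative importance of each sub-characteristic
```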
In multicarrier direct modulation direct detection systems, interaction between laser chirp and fiber group velocity dispersion induces subcarrier-to-subcarrier intermixing interferences (SSII) after detection. Such SSII become a major impairment in orthogonal frequency division multiplexing-based access systems, where a high modulation index, leading to large chirp, is required to maximize the system power budget. In this letter, we present and experimentally verify an analytical formulation to predict the level of signal and SSII and estimate the signal to noise ratio of each subcarrier, enabling improved bit-and-power loading and subcarrier attribution. The reported model is compact, and only requires the knowledge of basic link characteristics and laser parameters that can easily be measured.
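The paper's analytical SSII formulation is not reproduced here, but the per-subcarrier signal-to-noise ratio used for bit-and-power loading can be written generically as:

```latex
% Generic per-subcarrier SNR used for bit-and-power loading; the paper's
% analytical expressions for the SSII power P_{\mathrm{SSII},k} are not reproduced here.
\mathrm{SNR}_k = \frac{P_{\mathrm{sig},k}}{P_{\mathrm{SSII},k} + P_{\mathrm{noise},k}},
\qquad k = 1,\dots,N_{\mathrm{sc}}
```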
The Variable Precision Rough Set (VPRS) model is one of the most important extensions of the classical Rough Set (RS) theory. It employs a majority inclusion relation in order to make the classical RS model more fault tolerant, thereby improving the generalization of the model. This paper can be viewed as an extension of previous investigations on the attribute reduction problem in the VPRS model. We illustrate with examples that previously proposed reduct definitions may spoil the hidden classification ability of a knowledge system by ignoring certain essential attributes in some circumstances. Consequently, by proposing a new β-consistent notion, we analyze the relationship between the structure of a Decision Table (DT) and the different definitions of reduct in the VPRS model. We then give a new notion of β-complement reduct that avoids the defects of the reduct notions defined in the previous literature. We also supply a method to obtain the β-complement reduct using a decision table splitting algorithm, and finally demonstrate the feasibility of our approach on sample instances.
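For context, the standard VPRS notions of majority inclusion that underlie the β-reduct discussion (Ziarko's relative misclassification degree and the β-approximations) can be stated as:

```latex
% Standard VPRS background (Ziarko): relative misclassification degree and the
% \beta-approximations of a concept X with respect to equivalence classes [x]_R.
c(Y, X) =
  \begin{cases}
    1 - \dfrac{|Y \cap X|}{|Y|}, & |Y| > 0,\\[4pt]
    0, & |Y| = 0,
  \end{cases}
\qquad 0 \le \beta < 0.5

\underline{R}_{\beta}(X) = \bigcup \{\, [x]_R : c([x]_R, X) \le \beta \,\}, \qquad
\overline{R}_{\beta}(X)  = \bigcup \{\, [x]_R : c([x]_R, X) < 1 - \beta \,\}
```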
The existence of mixed pixels is a major problem in remote-sensing image classification. Although the soft classification and spectral unmixing techniques can obtain an abundance of different classes in a pixel to solve the mixed pixel problem, the subpixel spatial attribution of the pixel will still be unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixel or linear subpixel, leading to incomplete and inaccurate results. To improve the subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely, feature detection agents, subpixel mapping agents and decision agents, are designed to solve the subpixel mapping problem. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed subpixel mapping algorithm in comparison with the hard classification method and other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.
This paper presents a framework to identify the authors of Thai online messages. The identification is based on 53 writing attributes and the selected algorithms are support vector machine (SVM) and C4.5 decision tree. Experimental results indicate that the overall accuracies achieved by the SVM and the C4.5 were 79% and 75%, respectively. This difference was not statistically significant (at 95% confidence interval). As for the performance of identifying individual authors, in some cases the SVM was clearly better than the C4.5. But there were also other cases where both of them could not distinguish one author from another.
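A hedged, self-contained sketch of this kind of comparison (not the paper's data or feature set) using scikit-learn, with a CART decision tree standing in for C4.5:

```python
# Illustrative sketch of the comparison (synthetic data, not the paper's setup):
# an SVM and a decision tree trained on fixed-length vectors of writing-style
# attributes. scikit-learn's DecisionTreeClassifier is CART, used as a C4.5 stand-in.

import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 53))        # 53 writing attributes per message (synthetic)
y = rng.integers(0, 10, size=200)     # 10 hypothetical authors

svm = SVC(kernel="rbf", C=1.0)
tree = DecisionTreeClassifier(max_depth=10)

print("SVM accuracy :", cross_val_score(svm, X, y, cv=5).mean())
print("Tree accuracy:", cross_val_score(tree, X, y, cv=5).mean())
```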
The availability of sophisticated source attribution techniques raises new concerns about the privacy and anonymity of photographers, activists, and human rights defenders who need to stay anonymous while spreading their images and videos. Recently, the use of seam carving, a content-aware resizing method, has been proposed to anonymize the source camera of images against the well-known photo-response nonuniformity (PRNU) based source attribution technique. In this paper, we analyze the seam-carving-based source camera anonymization method by determining the limits of its performance under two adversarial models. Our analysis shows that the effectiveness of the deanonymization attacks depends on various factors, including the parameters of the seam-carving method, the strength of the PRNU noise pattern of the camera, and the adversary's ability to identify uncarved image blocks in a seam-carved image. Our results show that, in the general case, there should not be many uncarved blocks larger than 50×50 pixels for successful anonymization of the source camera.
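A simplified sketch of the PRNU-based source attribution test that seam carving aims to defeat, using a mean filter as the denoiser and plain normalized correlation as the detector (real systems use stronger denoisers and statistics such as peak-to-correlation energy):

```python
# Minimal sketch of PRNU-based source attribution (simplified): extract a noise
# residual and correlate it with the camera fingerprint. Seam carving defeats
# this test by desynchronizing pixel positions between residual and fingerprint.

import numpy as np
from scipy.ndimage import uniform_filter

def noise_residual(img, size=3):
    """Approximate the PRNU-bearing residual as image minus a smoothed version."""
    return img - uniform_filter(img, size=size)

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(1)
fingerprint = rng.normal(0, 0.02, size=(256, 256))       # camera PRNU (synthetic)
scene = uniform_filter(rng.normal(0.5, 0.2, (256, 256)), 8)
photo = scene * (1.0 + fingerprint)                       # multiplicative PRNU model

score_match = ncc(noise_residual(photo), fingerprint)
score_other = ncc(noise_residual(photo), rng.normal(0, 0.02, (256, 256)))
print(score_match, score_other)  # the matching camera yields the larger correlation
```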
An electric vehicle is an automobile powered by electrical energy stored in batteries. Due to frequent recharging, vehicles need to be connected to the recharging infrastructure while they are parked. This may disclose drivers' private information, such as their location, which they may want to keep secret. In this paper, we propose a scheme to enhance driver privacy using an anonymous credential technique and a Trusted Platform Module (TPM). We use anonymous credentials to achieve vehicle anonymity, so that drivers can recharge their vehicles anonymously and unlinkably. We add attributes to the credential, such as the battery type of the vehicle, since different batteries may be priced differently. We use the TPM to avoid the need for a blacklist, so that the company offering the recharging service (the Energy Provider Company, EPC) does not need to perform double-spending detection.
Cognitive radio (CR) networks are becoming an increasingly important part of the wireless networking landscape due to the ever-increasing scarcity of spectrum resources throughout the world, and CR media are becoming a popular wireless communication medium for disaster recovery communication networks. Although the operational aspects of CR are being explored vigorously, its security aspects have gained less attention from the research community. Existing research on CR networks mainly focuses on spectrum sensing and allocation, energy efficiency, high throughput, end-to-end delay, and other aspects of the network technology; very little focuses on security, and almost none on secure anonymous communication in CR networks (CRNs). In this article we focus on secure anonymous communication in CR ad hoc networks (CRANs). We propose a secure anonymous routing scheme for CRANs based on pairing-based cryptography, which provides source node, destination node, and location anonymity. Furthermore, the proposed scheme protects against different attacks that are feasible on CRANs.
Mobile ad hoc networks have the features of an open medium, dynamic topology, cooperative algorithms, and a lack of centralized monitoring. Because of these features, mobile ad hoc networks are much more vulnerable to security attacks than wired networks. Various routing protocols have been developed to cope with the limitations imposed by ad hoc networks, but none of these routing schemes provide complete unlinkability and unobservability. In this paper we survey anonymous routing and secure communication in mobile ad hoc networks. Different routing protocols are analyzed based on public/private key pairs and cryptosystems; among them, USOR can protect user privacy well against both inside and outside attackers. It combines a group signature scheme and an ID-based encryption scheme, both of which are run during the route discovery process. We implement USOR on ns-2 and compare its performance with AODV.
Wireless sensor networks have a wide range of applications, including environmental monitoring and data gathering in hostile environments. This kind of network is easily exposed to various external and internal attacks because of its open nature. The sink node is a receiving and collection point that gathers data from the sensor nodes in the network, forming a bridge between the sensors and the user. A complete sensor network can be made useless if the sink node is attacked, so preserving the location privacy of sink nodes is very important for ensuring continuous usage. An approach for securing the location privacy of the sink node is proposed in this paper. The proposed scheme modifies the traditional Blast technique by adding a shortest-path algorithm and an efficient clustering mechanism to the network, and aims to minimize energy consumption and packet delay.
Although there has been much research on the leakage of sensitive data in Android applications, most existing work focuses on detecting malware or adware that intentionally collects user data; there is not much research on analyzing app vulnerabilities that may cause privacy leakage. In this paper, we present a vulnerability analysis method that combines taint analysis and cryptography misuse detection. The four steps of this method are decompilation, taint analysis, API call recording, and cryptography misuse analysis; all steps except taint analysis can be performed with existing tools. We develop a prototype tool, PW Exam, to analyze how passwords are handled and whether an app is vulnerable to password leakage. Our experiment shows that a third of the tested apps are vulnerable to leaking users' passwords.
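A minimal sketch of the cryptography-misuse step on decompiled sources, with hypothetical rules; the paper's actual checker and rule set are not reproduced:

```python
# Minimal sketch of cryptography-misuse detection over decompiled Java sources
# (hypothetical rules): flag patterns commonly treated as misuse, e.g. ECB mode,
# constant IVs, or hard-coded key material passed to Cipher/KeySpec calls.

import re

MISUSE_RULES = [
    (r'Cipher\.getInstance\("AES/ECB', "ECB mode used for symmetric encryption"),
    (r'Cipher\.getInstance\("AES"\)', "default transformation (ECB) used"),
    (r'new\s+IvParameterSpec\(\s*".*"\.getBytes', "constant IV"),
    (r'new\s+SecretKeySpec\(\s*".*"\.getBytes', "hard-coded key material"),
]

def scan_source(java_source):
    findings = []
    for lineno, line in enumerate(java_source.splitlines(), 1):
        for pattern, message in MISUSE_RULES:
            if re.search(pattern, line):
                findings.append((lineno, message, line.strip()))
    return findings

sample = '''
SecretKeySpec key = new SecretKeySpec("password123".getBytes(), "AES");
Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
'''
for finding in scan_source(sample):
    print(finding)
```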
WiFi fingerprint-based localization is regarded as one of the most promising techniques for indoor localization. The location of a to-be-localized client is estimated by mapping the measured fingerprint (WiFi signal strengths) against a database owned by the localization service provider. A common concern with this approach, which has never been addressed in the literature, is that it may leak the client's location information or compromise the service provider's data privacy. In this paper, we first analyze the privacy issues of WiFi fingerprint-based localization and then propose a Privacy-Preserving WiFi Fingerprint Localization scheme (PriWFL) that can protect both the client's location privacy and the service provider's data privacy. To reduce the computational overhead at the client side, we also present a performance enhancement algorithm that exploits indoor mobility prediction. Theoretical performance analysis and an experimental study are carried out to validate the effectiveness of PriWFL. Our implementation of PriWFL on a typical Android smartphone and experimental results demonstrate the practicality and efficiency of PriWFL in real-world environments.
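For context, a sketch of the underlying (non-private) fingerprint matching that such a scheme protects: plain weighted k-nearest neighbours over RSS vectors on synthetic data. PriWFL's privacy-preserving computation of this mapping is not reproduced here:

```python
# Sketch of plain WiFi fingerprint matching: weighted k-nearest neighbours over
# received-signal-strength (RSS) vectors. A privacy-preserving scheme computes
# this without revealing the client's measurement or the provider's database.

import numpy as np

def knn_localize(measured_rss, db_rss, db_locations, k=3):
    """Estimate position as the distance-weighted mean of the k closest fingerprints."""
    d = np.linalg.norm(db_rss - measured_rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (w[:, None] * db_locations[idx]).sum(axis=0) / w.sum()

# Synthetic database: RSS from 4 access points at 5 reference points.
db_rss = np.array([[-40, -60, -70, -80],
                   [-55, -45, -65, -75],
                   [-70, -60, -45, -60],
                   [-80, -70, -55, -45],
                   [-60, -55, -60, -55]], dtype=float)
db_locations = np.array([[0, 0], [5, 0], [5, 5], [0, 5], [2.5, 2.5]], dtype=float)

print(knn_localize(np.array([-52.0, -47.0, -63.0, -74.0]), db_rss, db_locations))
```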
Mobile security is as critical as the PIN on an ATM card or the lock on a front door: beyond the phone itself, the information inside needs safeguarding. Android has attracted the most attention from malicious code writers due to its popularity. The flexibility to freely download apps and content has fueled the explosive growth of smartphones and mobile applications, but it has also introduced a new risk factor: malware can mimic popular applications and transfer contacts, photos, and documents to unknown destination servers, and there is no way to disable the application stores on mobile operating systems. Although smartphones are fundamentally open devices, this also means they can quite easily be hacked. Enterprises now provide business applications on these devices, so confidential business information resides on employee-owned devices. Once an employee quits, wiping the mobile operating system is not an optimal solution, as it deletes both business and personal data. Here we propose the H-Secure application for mobile security, with which users can store confidential data and files in encrypted form. The encrypted file and the encryption key are stored on a web server so that an unauthorized person cannot access the data. If the user loses the mobile device, they can log in to the web server and delete the file and key to prevent further decryption.
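A minimal sketch of the described store-encrypted-and-keep-the-key-remote idea, using the Python `cryptography` package's Fernet and in-memory dictionaries as stand-ins for the web server (this is an illustration of the concept, not the H-Secure implementation):

```python
# Sketch of the H-Secure idea as described (hypothetical storage layer):
# data is encrypted, and both the ciphertext and the key live on a remote
# server, so deleting the server-side entries revokes access after device loss.

from cryptography.fernet import Fernet

server_files, server_keys = {}, {}   # stand-ins for the web server's storage

def store_confidential(user, name, data: bytes):
    key = Fernet.generate_key()
    server_keys[(user, name)] = key
    server_files[(user, name)] = Fernet(key).encrypt(data)

def read_confidential(user, name) -> bytes:
    key = server_keys[(user, name)]          # raises KeyError once revoked
    return Fernet(key).decrypt(server_files[(user, name)])

def revoke(user, name):
    """What the user does after losing the phone: delete the file and the key."""
    server_keys.pop((user, name), None)
    server_files.pop((user, name), None)

store_confidential("alice", "contract.pdf", b"confidential business data")
print(read_confidential("alice", "contract.pdf"))
revoke("alice", "contract.pdf")   # further decryption is now impossible
```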
The high usability of smartphones and tablets is embraced by consumers as well as the corporate and public sector. However, especially in the non-consumer area, security plays a decisive role in the platform-selection process. All of the current companies within the mobile device sector have added a wide range of security features to their initially consumer-oriented devices (Apple, Google, Microsoft), or have treated security as a core feature from the beginning (RIM, now BlackBerry). One of the key security features for protecting data on the device or in device backups is encryption, which is available in the majority of current devices. However, even under the assumption that these systems are implemented correctly, there is a wide range of parameters, specific use cases, and weaknesses that need to be considered when deploying mobile devices in security-critical environments. As the second part in a series of papers (the first part was on iOS), this work analyzes the deployment of the Android platform and the usage of its encryption systems within a security-critical context. For this purpose, Android's different encryption systems are assessed and their susceptibility to different attacks is analyzed in detail. Based on these results, a workflow is presented that supports deployment of the Android platform and usage of its encryption systems within security-critical application scenarios.
Highly accurate indoor localization of smartphones is critical for enabling novel location-based features for users and businesses. In this paper, we first conduct an empirical investigation of the suitability of WiFi localization for this purpose. We find that although reasonable accuracy can be achieved, significant errors (e.g., 6-8 m) always exist. The root cause is the existence of distinct locations with similar signatures, which is a fundamental limit of pure WiFi-based methods. Inspired by the high density of smartphones in public spaces, we propose a peer-assisted localization approach to eliminate such large errors. It obtains accurate acoustic ranging estimates among peer phones and then maps their locations jointly against the WiFi signature map, subject to the ranging constraints. We devise techniques for fast acoustic ranging among multiple phones and build a prototype. Experiments show that the approach can reduce the maximum and 80th-percentile errors to as small as 2 m and 1 m, in no more time than the original WiFi scanning, with negligible impact on battery lifetime.
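An illustrative sketch (not the paper's algorithm) of the joint mapping step: refining rough WiFi position estimates of several phones by least-squares fitting to measured peer-to-peer acoustic ranges, with a weak pull toward the WiFi estimates:

```python
# Sketch of jointly refining rough WiFi fixes of multiple phones using
# peer-to-peer acoustic ranging constraints (illustrative data and weights).

import numpy as np
from scipy.optimize import least_squares

wifi_est = np.array([[0.0, 0.0], [4.0, 0.5], [2.0, 3.5]])   # rough WiFi fixes (m)
pairs = [(0, 1), (0, 2), (1, 2)]
acoustic_d = np.array([3.0, 3.6, 3.4])                       # measured peer ranges (m)

def residuals(flat):
    p = flat.reshape(-1, 2)
    range_res = [np.linalg.norm(p[i] - p[j]) - d for (i, j), d in zip(pairs, acoustic_d)]
    anchor_res = 0.3 * (p - wifi_est).ravel()   # weak pull toward the WiFi estimates
    return np.concatenate([range_res, anchor_res])

refined = least_squares(residuals, wifi_est.ravel()).x.reshape(-1, 2)
print(np.round(refined, 2))   # positions consistent with both WiFi and acoustic ranges
```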
This paper proposes a high-performance audio fingerprint extraction method for identifying TV commercial advertisements. In the proposed method, salient audio peak-pair fingerprints based on the constant Q transform (CQT) are hashed and stored, so that they can be compared to one another efficiently. Experimental results confirm that the proposed method is robust under different noise conditions and improves the accuracy of the audio fingerprinting system in real noisy environments.
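A minimal sketch of peak-pair hashing of the kind described, assuming peaks have already been picked from a CQT magnitude spectrogram; the CQT computation and the paper's exact hash layout are omitted:

```python
# Minimal sketch of peak-pair hashing (Shazam-style): each anchor peak is paired
# with a few following peaks, and (anchor bin, target bin, time delta) is packed
# into a single integer hash that can be stored and compared efficiently.

def peak_pair_hashes(peaks, fan_out=5, max_dt=64):
    """peaks: list of (frame_index, cqt_bin) tuples sorted by frame_index."""
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            dt = t2 - t1
            if 0 < dt <= max_dt:
                hashes.append(((f1 & 0xFF) << 16 | (f2 & 0xFF) << 8 | dt, t1))
    return hashes

peaks = [(0, 21), (3, 45), (7, 21), (12, 60), (15, 33)]
for h, t in peak_pair_hashes(peaks):
    print(hex(h), "anchored at frame", t)
```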
Suppose that you are at a music festival checking out an artist, and you would like to quickly know about the song that is being played (e.g., title, lyrics, album, etc.). If you have a smartphone, you could record a sample of the live performance and compare it against a database of existing recordings from the artist. Services such as Shazam or SoundHound will not work here, as this is not the typical framework for audio fingerprinting or query-by-humming systems: a live performance is neither identical to its studio version (e.g., variations in instrumentation, key, tempo, etc.) nor is it a hummed or sung melody. We propose an audio fingerprinting system that can deal with live version identification by using image processing techniques. Compact fingerprints are derived using a log-frequency spectrogram and an adaptive thresholding method, and template matching is performed using the Hamming similarity and the Hough transform.
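A sketch of the matching stage only, simplified to a Hamming-similarity search over time offsets; the paper additionally uses the Hough transform to handle tempo differences, which this sketch ignores:

```python
# Sketch of binary-fingerprint matching with Hamming similarity: slide the query
# fingerprint over a reference and keep the best score (synthetic fingerprints).

import numpy as np

def hamming_similarity(a, b):
    """Fraction of equal bits between two binary arrays of the same shape."""
    return float(np.mean(a == b))

def best_offset_score(query, reference):
    """Slide the query fingerprint over the reference and keep the best score."""
    n, m = len(query), len(reference)
    best = 0.0
    for off in range(m - n + 1):
        best = max(best, hamming_similarity(query, reference[off:off + n]))
    return best

rng = np.random.default_rng(0)
reference = rng.integers(0, 2, size=(500, 32))        # studio-version fingerprint
query = reference[120:220].copy()
query ^= (rng.random(query.shape) < 0.15)              # 15% bit noise (live version)
print(best_offset_score(query, reference))             # high score near offset 120
```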
Conventional photoacoustic microscopy (PAM) involves detection of optically induced thermo-elastic waves using ultrasound transducers. This approach requires acoustic coupling, and the spatial resolution is limited by the focusing properties of the transducer. We present an all-optical PAM approach that detects the photoacoustically induced surface displacements using an adaptive, two-wave mixing interferometer. The interferometer consisted of a 532-nm CW laser and a 5×5×5 mm³ Bismuth Silicon Oxide photorefractive crystal (PRC). The laser beam was expanded to 3 mm and split into two paths: a reference beam that passed directly through the PRC and a signal beam that was focused at the surface through a 100×, infinity-corrected objective and returned to the PRC. The PRC matched the wave front of the reference beam to that of the signal beam for optimal interference. The interference of the two beams produced optical-intensity modulations that were correlated with surface displacements. A GHz-bandwidth photoreceiver, a low-noise 20-dB amplifier, and a 12-bit digitizer were employed for time-resolved detection of the surface-displacement signals. In combination with a 5-ns, 532-nm pump laser, the interferometric probe was employed for imaging ink patterns, such as a fingerprint, on a glass slide. The signal beam was focused at a reflective cover slip that was separated from the fingerprint by 5 mm of acoustic-coupling gel. A 3×5 mm² area of the coverslip was raster scanned with 100-μm steps, and the surface-displacement signals at each location were averaged 20 times. Image reconstruction based on time reversal of the PA-induced displacement signals produced the photoacoustic image of the ink patterns. The reconstructed image of the fingerprint was consistent with its photograph, demonstrating the ability of our system to resolve micron-scale features at a depth of 5 mm.
Acoustic microscopy is characterized by a relatively long scanning time, which is required for the motion of the transducer over the entire scanning area. This time may be reduced by using a multi-channel acoustical system with several identical transducers arranged as an array and mounted on a mechanical scanner, so that each transducer scans only a fraction of the total area. The resulting image is formed as a combination of all acquired partial data sets. The mechanical instability of the scanner, as well as differences in the parameters of the individual transducers, causes a misalignment of the image fractions. This distortion may be partially compensated for by introducing constant or dynamic signal leveling and data shift procedures; however, reducing the random instability component requires more advanced algorithms, including auto-adjustment of processing parameters. The described procedure was implemented in a prototype of an ultrasonic fingerprint reading system. The specialized cylindrical scanner provides a helical spiral lens trajectory, which eliminates repeated acceleration, reduces vibration, and allows constant data flow at the maximal rate. It is equipped with an array of four spherically focused 50 MHz acoustic lenses operating in pulse-echo mode. Each transducer is connected to a separate channel including a pulser, receiver, and digitizer. The output 3D data volume contains interlaced B-scans coming from each channel. Data processing then includes predetermined procedures of constant layer shift to compensate for the transducer displacement, as well as phase shift and amplitude leveling to compensate for variation in transducer characteristics. Analysis of the statistical parameters of individual scans allows adaptive elimination of the axial misalignment and mechanical vibrations. A further 2D correlation of overlapping partial C-scans realizes an interpolative adjustment that substantially improves the output image. Implementation of this adaptive algorithm in the data processing sequence allows us to significantly reduce misreading due to hardware noise and finger motion during scanning. The system provides a high-quality acoustic image of the fingerprint with different levels of information: the fingerprint pattern, sweat pore locations, and internal dermis structures. These additional features can effectively facilitate fingerprint-based identification. The developed principles and algorithm implementations improve the quality, stability, and reliability of acoustical data obtained with a mechanical scanner accommodating several transducers. The general principles developed in this work can be applied to other configurations of advanced ultrasonic systems designed for various biomedical and NDE applications, and the data processing algorithm, developed for a specific biometric task, can be adapted to compensate for the mechanical imperfections of other devices.
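As a hedged illustration of the 2D-correlation adjustment step mentioned above (not the system's actual algorithm), the following estimates the offset between two overlapping partial C-scans by cross-correlation:

```python
# Sketch of estimating the misalignment between two overlapping partial C-scans
# by 2-D cross-correlation, so fragments from different transducers can be
# stitched consistently (synthetic data; the real system uses further steps).

import numpy as np
from scipy.signal import fftconvolve

def estimate_shift(ref, moving):
    """Return the (row, col) shift to apply to `moving` so it aligns with `ref`."""
    ref0 = ref - ref.mean()
    mov0 = moving - moving.mean()
    corr = fftconvolve(ref0, mov0[::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak[0] - (moving.shape[0] - 1), peak[1] - (moving.shape[1] - 1)

rng = np.random.default_rng(2)
scan = rng.normal(size=(80, 80))
shifted = np.roll(np.roll(scan, 3, axis=0), -2, axis=1)   # simulated misalignment
print(estimate_shift(scan, shifted))                       # (-3, 2): shift that realigns it
```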

