Biblio

Found 2859 results

Filters: First Letter Of Last Name is H
2017-05-17
Huang, Jheng-Jia, Juang, Wen-Shenq, Fan, Chun-I, Tseng, Yi-Fan, Kikuchi, Hiroaki.  2016.  Lightweight Authentication Scheme with Dynamic Group Members in IoT Environments. Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services. :88–93.

In IoT environments, a user may have many devices that connect to each other and share data, and these devices typically lack powerful computation and storage capabilities. Many studies have focused on lightweight authentication between the cloud server and the client in this environment, using the cloud server to help sensors or proxies complete the authentication. On the client side, however, how to create a group session key without relying on the cloud is a key open issue in IoT environments. One of the most popular application networks in IoT environments is the wireless body area network (WBAN), where a proxy usually needs to control and monitor the user's health data transmitted from the sensors. In this situation, group authentication and group session key generation are needed. In this paper, to provide efficient and robust group authentication and group session key generation on the client side of IoT environments, we propose a lightweight authentication scheme with dynamic group members. Our proposed scheme satisfies several properties, including flexible generation of shared group keys, dynamic participation, active revocation, low communication and computation cost, and no time synchronization problem. Our scheme also achieves the security requirements of mutual authentication and group session key agreement, and it resists well-known attacks.
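
To make the listed key-management properties (flexible group key generation, dynamic participation, active revocation) a little more concrete, here is a toy key-derivation sketch. The member names, nonces, and hash-based derivation are illustrative assumptions only; this is not the scheme proposed in the paper.

```python
import hashlib
import os

def derive_group_key(group_secret: bytes, members: dict) -> bytes:
    """Toy group-session-key derivation: hash a long-term group secret together
    with the sorted (member_id, nonce) contributions of the *current* members.
    Adding or revoking a member changes the input set, so the key is refreshed
    automatically -- dynamic participation and revocation in spirit only."""
    h = hashlib.sha256()
    h.update(group_secret)
    for member_id in sorted(members):
        h.update(member_id.encode())
        h.update(members[member_id])        # per-member fresh nonce
    return h.digest()

# Example: a proxy and two body sensors agree on a session key.
group_secret = os.urandom(32)               # pre-shared during registration
members = {"proxy": os.urandom(16), "ecg": os.urandom(16), "spo2": os.urandom(16)}
k1 = derive_group_key(group_secret, members)

# Revoking a sensor yields a fresh, unrelated key.
del members["spo2"]
k2 = derive_group_key(group_secret, members)
assert k1 != k2
```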

Ke, Yu-Ming, Chen, Chih-Wei, Hsiao, Hsu-Chun, Perrig, Adrian, Sekar, Vyas.  2016.  CICADAS: Congesting the Internet with Coordinated and Decentralized Pulsating Attacks. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :699–710.

This study stems from the premise that we need to break away from the "reactive" cycle of developing defenses against new DDoS attacks (e.g., amplification) by proactively investigating the potential for new types of DDoS attacks. Our specific focus is on pulsating attacks, a particularly debilitating type that has been hypothesized in the literature. In a pulsating attack, bots coordinate to generate intermittent pulses at target links to significantly reduce the throughput of TCP connections traversing the target. With pulsating attacks, attackers can cause significantly greater damage to legitimate users than with traditional link-flooding attacks. To date, however, pulsating attacks have been either deemed ineffective or easily defendable for two reasons: (1) they require a central coordinator and can thus be tracked; and (2) they require tight synchronization of pulses, which is difficult even in normal non-congestion scenarios. This paper argues that the perceived drawbacks of pulsating attacks are, in fact, not fundamental. We develop a practical pulsating attack called CICADAS using two key ideas: (1) congestion as an implicit signal for decentralized implementation, and (2) a Kalman-filter-based approach to achieve tight synchronization. We validate CICADAS using simulations and wide-area experiments. We also discuss possible countermeasures against this attack.
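
The Kalman-filter idea can be pictured as a scalar estimator that each bot runs locally to predict when the next congestion epoch (and hence the next pulse) should occur. The state model, noise variances, and observation source below are illustrative assumptions, not the CICADAS design.

```python
import random

class PeriodEstimator:
    """Scalar Kalman filter over a congestion period T (illustrative only).
    State: estimated period t_hat with variance p.  Each observed interval
    between congestion events is treated as a noisy measurement of T."""
    def __init__(self, t0=1.0, p0=1.0, process_var=1e-4, meas_var=0.05):
        self.t_hat, self.p = t0, p0
        self.q, self.r = process_var, meas_var

    def update(self, observed_interval):
        self.p += self.q                                     # predict
        k = self.p / (self.p + self.r)                       # Kalman gain
        self.t_hat += k * (observed_interval - self.t_hat)   # correct
        self.p *= (1.0 - k)
        return self.t_hat

# Each (simulated) bot observes jittered congestion intervals yet converges
# to the same period estimate, which is what enables loose synchronization.
random.seed(1)
est = PeriodEstimator()
true_period = 2.0
for _ in range(50):
    est.update(true_period + random.gauss(0, 0.2))
print(f"estimated pulse period: {est.t_hat:.3f} s (true: {true_period} s)")
```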

Qiao, Siyi, Hu, Chengchen, Guan, Xiaohong, Zou, Jianhua.  2016.  Taming the Flow Table Overflow in OpenFlow Switch. Proceedings of the 2016 ACM SIGCOMM Conference. :591–592.

SDN has become the wide-area network technology that academia and industry are most concerned with, and the limited table sizes of today's SDN switches have become the most prominent bottleneck in network design and implementation. A TCAM-based flow table provides excellent matching performance but at a high cost, and even so, a fixed-capacity flow table cannot prevent overflow. In this paper, we design FTS (Flow Table Sharing), a mechanism that mitigates the performance degradation caused by overflow. We demonstrate that FTS reduces both the number of control messages and the RTT by two orders of magnitude compared to the current state-of-the-art OpenFlow table-miss handler.

Mell, Peter, Shook, James, Harang, Richard.  2016.  Measuring and Improving the Effectiveness of Defense-in-Depth Postures. Proceedings of the 2nd Annual Industrial Control System Security Workshop. :15–22.

Defense-in-depth is an important security architecture principle that has significant application to industrial control systems (ICS), cloud services, storehouses of sensitive data, and many other areas. We claim that an ideal defense-in-depth posture is 'deep', containing many layers of security, and 'narrow', meaning that the number of node-independent attack paths is minimized. Unfortunately, accurately calculating both depth and width is difficult using standard graph algorithms because of a lack of independence between multiple vulnerability instances (i.e., if an attacker can penetrate a particular vulnerability on one host, then they can likely penetrate the same vulnerability on another host). To address this, we represent known weaknesses and vulnerabilities as a type of colored attack graph. We measure depth and width by solving the shortest color path and minimum color cut problems. We prove both of these problems to be NP-hard, and thus our solution provides a suite of greedy heuristics. We then empirically apply our approach to large randomly generated networks as well as to ICS networks generated from a published ICS attack template. Lastly, we discuss how to use these results to help guide improvements to defense-in-depth postures.
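
One way to picture the "shortest color path" measurement is a Dijkstra-style greedy search whose path cost is the number of distinct vulnerability colors used so far: an exploit that works once can be reused for free on other hosts. The graph encoding and CVE labels below are illustrative assumptions, and this greedy search is a heuristic in the spirit of the paper's suite, not its exact algorithm.

```python
import heapq

def greedy_color_path(graph, colors, src, dst):
    """Greedy heuristic for the shortest color path: prefer paths that
    introduce the fewest distinct vulnerability colors.
    graph:  {node: [neighbor, ...]}; colors: {(u, v): color of that exploit}."""
    best = {src: 0}
    heap = [(0, src, frozenset())]          # (distinct colors used, node, color set)
    while heap:
        cost, node, used = heapq.heappop(heap)
        if node == dst:
            return cost, used
        for nxt in graph.get(node, []):
            new_used = used | {colors[(node, nxt)]}
            if len(new_used) < best.get(nxt, float("inf")):
                best[nxt] = len(new_used)
                heapq.heappush(heap, (len(new_used), nxt, new_used))
    return None

graph = {"internet": ["dmz"], "dmz": ["hmi", "historian"],
         "hmi": ["plc"], "historian": ["plc"]}
colors = {("internet", "dmz"): "CVE-A", ("dmz", "hmi"): "CVE-B",
          ("dmz", "historian"): "CVE-C", ("historian", "plc"): "CVE-C",
          ("hmi", "plc"): "CVE-D"}
print(greedy_color_path(graph, colors, "internet", "plc"))
# cost 2: only two distinct exploit types are needed to reach the PLC
```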

McClurg, Jedidiah, Hojjat, Hossein, Foster, Nate, Černý, Pavol.  2016.  Event-driven Network Programming. Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation. :369–385.

Software-defined networking (SDN) programs must simultaneously describe static forwarding behavior and dynamic updates in response to events. Event-driven updates are critical to get right, but difficult to implement correctly due to the high degree of concurrency in networks. Existing SDN platforms offer weak guarantees that can break application invariants, leading to problems such as dropped packets, degraded performance, security violations, etc. This paper introduces EVENT-DRIVEN CONSISTENT UPDATES that are guaranteed to preserve well-defined behaviors when transitioning between configurations in response to events. We propose NETWORK EVENT STRUCTURES (NESs) to model constraints on updates, such as which events can be enabled simultaneously and causal dependencies between events. We define an extension of the NetKAT language with mutable state, give semantics to stateful programs using NESs, and discuss provably-correct strategies for implementing NESs in SDNs. Finally, we evaluate our approach empirically, demonstrating that it gives well-defined consistency guarantees while avoiding expensive synchronization and packet buffering.

Hsu, Terry Ching-Hsiang, Hoffman, Kevin, Eugster, Patrick, Payer, Mathias.  2016.  Enforcing Least Privilege Memory Views for Multithreaded Applications. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :393–405.

Failing to properly isolate components in the same address space has resulted in a substantial amount of vulnerabilities. Enforcing the least privilege principle for memory accesses can selectively isolate software components to restrict attack surface and prevent unintended cross-component memory corruption. However, the boundaries and interactions between software components are hard to reason about and existing approaches have failed to stop attackers from exploiting vulnerabilities caused by poor isolation. We present the secure memory views (SMV) model: a practical and efficient model for secure and selective memory isolation in monolithic multithreaded applications. SMV is a third generation privilege separation technique that offers explicit access control of memory and allows concurrent threads within the same process to partially share or fully isolate their memory space in a controlled and parallel manner following application requirements. An evaluation of our prototype in the Linux kernel (TCB < 1,800 LOC) shows negligible runtime performance overhead in real-world applications including Cherokee web server (< 0.69%), Apache httpd web server (< 0.93%), and Mozilla Firefox web browser (< 1.89%) with at most 12 LOC changes.

Adams, Michael D., Hollenbeck, Celeste, Might, Matthew.  2016.  On the Complexity and Performance of Parsing with Derivatives. Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation. :224–236.

Current algorithms for context-free parsing inflict a trade-off between ease of understanding, ease of implementation, theoretical complexity, and practical performance. No algorithm achieves all of these properties simultaneously. Might et al. introduced parsing with derivatives, which handles arbitrary context-free grammars while being both easy to understand and simple to implement. Despite much initial enthusiasm and a multitude of independent implementations, its worst-case complexity has never been proven to be better than exponential. In fact, high-level arguments claiming it is fundamentally exponential have been advanced and even accepted as part of the folklore. Performance ended up being sluggish in practice, and this sluggishness was taken as informal evidence of exponentiality. In this paper, we reexamine the performance of parsing with derivatives. We have discovered that it is not exponential but, in fact, cubic. Moreover, simple (though perhaps not obvious) modifications to the implementation by Might et al. lead to an implementation that is not only easy to understand but also highly performant in practice.
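
For intuition, the core derivative operation is easy to state for regular expressions; the full context-free construction adds laziness, memoization, and fixed points, which is where the paper's complexity analysis lives. The tiny matcher below is a sketch of Brzozowski derivatives only, not the authors' implementation.

```python
# Brzozowski derivatives for regular expressions (illustrative sketch only).
# Encoding: None = empty language, "" = epsilon, "a" = literal character,
# ("alt", r, s), ("seq", r, s), ("star", r).

def nullable(r):
    if r is None:          return False
    if r == "":            return True
    if isinstance(r, str): return False
    tag = r[0]
    if tag == "alt":  return nullable(r[1]) or nullable(r[2])
    if tag == "seq":  return nullable(r[1]) and nullable(r[2])
    if tag == "star": return True

def derive(r, c):
    """D_c(r): the language of words w such that c.w is in r."""
    if r is None or r == "":    return None
    if isinstance(r, str):      return "" if r == c else None
    tag = r[0]
    if tag == "alt":
        return ("alt", derive(r[1], c), derive(r[2], c))
    if tag == "seq":
        left = ("seq", derive(r[1], c), r[2])
        return ("alt", left, derive(r[2], c)) if nullable(r[1]) else left
    if tag == "star":
        return ("seq", derive(r[1], c), r)

def matches(r, word):
    for c in word:
        r = derive(r, c)
    return nullable(r)

ab_star = ("star", ("seq", "a", "b"))       # (ab)*
print(matches(ab_star, "ababab"), matches(ab_star, "aba"))  # True False
```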

Canetti, Ran, Holmgren, Justin.  2016.  Fully Succinct Garbled RAM. Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. :169–178.

We construct the first fully succinct garbling scheme for RAM programs, assuming the existence of indistinguishability obfuscation for circuits and one-way functions. That is, the size, space requirements, and runtime of the garbled program are the same as those of the input program, up to poly-logarithmic factors and a polynomial in the security parameter. The scheme can be used to construct indistinguishability obfuscators for RAM programs with comparable efficiency, at the price of requiring sub-exponential security of the underlying primitives. In particular, this opens the door to obfuscated computations that are sublinear in the length of their inputs. The scheme builds on the recent schemes of Koppula-Lewko-Waters and Canetti-Holmgren-Jain-Vaikuntanathan [STOC 15]. A key technical challenge here is how to combine the fixed-prefix technique of KLW, which was developed for deterministic programs, with randomized Oblivious RAM techniques. To overcome that, we develop a method for arguing about the indistinguishability of two obfuscated randomized programs that use correlated randomness. Along the way, we also define and construct garbling schemes that offer only partial protection. These may be of independent interest.

Walter, Charles, Hale, Matthew L., Gamble, Rose F.  2016.  Imposing Security Awareness on Wearables. Proceedings of the 2nd International Workshop on Software Engineering for Smart Cyber-Physical Systems. :29–35.

Bluetooth-reliant devices are increasingly proliferating into various industry and consumer sectors as part of a burgeoning wearable market that adds convenience and awareness to everyday life. Relying primarily on a constantly changing hop pattern to reduce data sniffing during transmission, wearable devices routinely disconnect and reconnect with their base station (typically a cell phone), causing a connection repair each time. These connection repairs allow an adversary to determine which local wearable devices are communicating with which base stations. In addition, data transmitted to a base station as part of a wearable app may be forwarded onward to an awaiting web API even if the base station is in an insecure environment (e.g., on public Wi-Fi). In this paper, we introduce an approach to increase the security and privacy associated with using wearable devices by imposing transmission changes given situational awareness of the base station. These changes are asserted via policy rules based on the sensor information collected and aggregated from the wearable devices by the base station. The rules are housed in an application on the base station that adapts it to a state in which it prevents data from being transmitted by the wearable devices without disconnecting them. The policies can be updated manually or through an over-the-air update, as determined by the user.
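
A minimal illustration of this kind of situational policy might look like the rule table below. The rule names, context attributes, and actions are illustrative assumptions, not the authors' policy language.

```python
# Toy situational-awareness policy for a wearable's base station
# (illustrative only; attribute names and rules are assumptions).

def decide_action(context: dict) -> str:
    """Return what the base station should do with incoming wearable data
    given its current situation.  Rules are evaluated top-down."""
    rules = [
        # (condition over the base-station context, action)
        (lambda c: c["network"] == "public_wifi" and c["data"] == "health",
         "buffer_locally"),            # keep the BLE link up, but do not forward
        (lambda c: c["network"] == "public_wifi",
         "forward_encrypted_only"),
        (lambda c: True, "forward"),   # default: trusted environment
    ]
    for condition, action in rules:
        if condition(context):
            return action

print(decide_action({"network": "public_wifi", "data": "health"}))   # buffer_locally
print(decide_action({"network": "home_wifi", "data": "health"}))     # forward
```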

Albazrqaoe, Wahhab, Huang, Jun, Xing, Guoliang.  2016.  Practical Bluetooth Traffic Sniffing: Systems and Privacy Implications. Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services. :333–345.

With the prevalence of personal Bluetooth devices, potential breaches of user privacy have become an increasing concern. To date, sniffing Bluetooth traffic has been widely considered an extremely intricate task due to Bluetooth's indiscoverable mode, vendor-dependent adaptive hopping behavior, and the interference in the open 2.4 GHz band. In this paper, we present BlueEar, a practical Bluetooth traffic sniffer. BlueEar features a novel dual-radio architecture in which two Bluetooth-compliant radios coordinate with each other to learn the hopping sequence of indiscoverable Bluetooth networks, predict adaptive hopping behavior, and mitigate the impact of RF interference. Experimental results show that BlueEar can consistently maintain a packet capture rate higher than 90% in real-world environments, where the target Bluetooth network exhibits diverse hopping behaviors in the presence of dynamic interference from coexisting Wi-Fi devices. In addition, we discuss the privacy implications of the BlueEar system and present a practical countermeasure that effectively reduces the sniffer's packet capture rate to 20%. The proposed countermeasure can be easily implemented on the Bluetooth master device while requiring no modification to slave devices such as keyboards and headsets.

Snader, Robin, Kravets, Robin, Harris, III, Albert F.  2016.  CryptoCoP: Lightweight, Energy-efficient Encryption and Privacy for Wearable Devices. Proceedings of the 2016 Workshop on Wearable Systems and Applications. :7–12.

As people use and interact with more and more wearables and IoT-enabled devices, their private information is being exposed without any privacy protections. However, the limited capabilities of IoT devices make implementing robust privacy protections challenging. In response, we present CryptoCoP, an energy-efficient, content-agnostic privacy and encryption protocol for IoT devices. Eavesdroppers cannot snoop on data protected by CryptoCoP or track users via their IoT devices. We evaluate CryptoCoP and show that its performance and energy overheads are viable in a wide variety of situations, and that the protocol can be tuned to trade off forward secrecy and energy consumption against the key storage required on the device.
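
The trade-off between forward secrecy and key storage can be pictured with a simple hash ratchet: advancing the key after every message gives forward secrecy at the cost of keeping ratchet state, while a static key avoids that state but loses the property. The sketch below uses a generic AEAD from the pyca/cryptography package and is an illustrative assumption, not the CryptoCoP protocol.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

class RatchetedSender:
    """Illustrative hash-ratchet encryption (not the CryptoCoP protocol):
    each message is encrypted under the current key, then the key is hashed
    forward and the old key discarded, giving forward secrecy."""
    def __init__(self, initial_key: bytes):
        self.key = initial_key                # 32-byte session key

    def encrypt(self, plaintext: bytes) -> tuple:
        nonce = os.urandom(12)
        ct = ChaCha20Poly1305(self.key).encrypt(nonce, plaintext, None)
        self.key = hashlib.sha256(b"ratchet" + self.key).digest()   # advance key
        return nonce, ct

sender = RatchetedSender(os.urandom(32))
print(sender.encrypt(b"heart_rate=72"))
```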

Su, Fang-Hsiang, Bell, Jonathan, Harvey, Kenneth, Sethumadhavan, Simha, Kaiser, Gail, Jebara, Tony.  2016.  Code Relatives: Detecting Similarly Behaving Software. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :702–714.

Detecting “similar code” is useful for many software engineering tasks. Current tools can help detect code with statically similar syntactic and/or semantic features (code clones) and with dynamically similar functional input/output (simions). Unfortunately, some code fragments that behave similarly at the finer granularity of their execution traces may be ignored. In this paper, we propose the term “code relatives” to refer to code with similar execution behavior. We define code relatives and then present DyCLINK, our approach to detecting code relatives within and across codebases. DyCLINK records instruction-level traces from sample executions, organizes the traces into instruction-level dynamic dependence graphs, and employs our specialized subgraph matching algorithm to efficiently compare the executions of candidate code relatives. In our experiments, DyCLINK analyzed 422+ million prospective subgraph matches in only 43 minutes. We compared DyCLINK to one static code clone detector from the community and to our implementation of a dynamic simion detector. The results show that DyCLINK effectively detects code relatives with a reasonable analysis time.

Legunsen, Owolabi, Hariri, Farah, Shi, August, Lu, Yafeng, Zhang, Lingming, Marinov, Darko.  2016.  An Extensive Study of Static Regression Test Selection in Modern Software Evolution. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :583–594.

Regression test selection (RTS) aims to reduce regression testing time by only re-running the tests affected by code changes. Prior research on RTS can be broadly split into dynamic and static techniques. A recently developed dynamic RTS technique called Ekstazi is gaining some adoption in practice, and its evaluation shows that selecting tests at a coarser, class-level granularity provides better results than selecting tests at a finer, method-level granularity. As dynamic RTS is gaining adoption, it is timely to also evaluate static RTS techniques, some of which were proposed over three decades ago but not extensively evaluated on modern software projects. This paper presents the first extensive study that evaluates the performance benefits of static RTS techniques and their safety; a technique is safe if it selects to run all tests that may be affected by code changes. We implemented two static RTS techniques, one class-level and one method-level, and compare several variants of these techniques. We also compare these static RTS techniques against Ekstazi, a state-of-the-art, class-level, dynamic RTS technique. The experimental results on 985 revisions of 22 open-source projects show that the class-level static RTS technique is comparable to Ekstazi, with similar performance benefits, but at the risk of being unsafe sometimes. In contrast, the method-level static RTS technique performs rather poorly.
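
The class-level static selection the study evaluates can be pictured as a reachability query over a class dependency graph: a test is selected if its transitive dependencies touch a changed class. The graph encoding and class names below are an illustrative sketch, not the authors' tooling.

```python
def transitive_deps(cls, deps, seen=None):
    """All classes reachable from `cls` in the class dependency graph."""
    seen = set() if seen is None else seen
    for d in deps.get(cls, []):
        if d not in seen:
            seen.add(d)
            transitive_deps(d, deps, seen)
    return seen

def select_tests(test_classes, deps, changed):
    """Class-level static RTS: keep a test if any changed class is in the
    test's transitive dependency closure (illustrative sketch)."""
    changed = set(changed)
    return [t for t in test_classes
            if changed & (transitive_deps(t, deps) | {t})]

deps = {"OrderTest": ["Order"], "Order": ["Money", "Inventory"],
        "UserTest": ["User"], "User": ["Money"]}
print(select_tests(["OrderTest", "UserTest"], deps, changed=["Inventory"]))
# ['OrderTest'] -- UserTest is skipped because it cannot reach Inventory
```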

Huang, Zhenqi, Wang, Yu, Mitra, Sayan, Dullerud, Geir.  2016.  Controller Synthesis for Linear Dynamical Systems with Adversaries. Proceedings of the Symposium and Bootcamp on the Science of Security. :53–62.

We present a controller synthesis algorithm for a reach-avoid problem in the presence of adversaries. Our model of the adversary abstractly captures typical malicious attacks envisioned on cyber-physical systems, such as sensor spoofing, controller corruption, and actuator intrusion. After formulating the problem in a general setting, we present a sound and complete algorithm for the case with linear dynamics and an adversary with a budget on the total L2-norm of its actions. The algorithm relies on a result from linear control theory that enables us to decompose and compute the reachable states of the system in terms of a symbolic simulation of the adversary-free dynamics and the total uncertainty induced by the adversary. With this decomposition, the synthesis problem eliminates the universal quantifier on the adversary's choices, and the symbolic controller actions can be effectively solved using an SMT solver. The constraints induced by the adversary are computed by solving second-order cone programs. The algorithm is later extended to synthesize state-dependent controllers and to generate attacks for the adversary. We present preliminary experimental results that show the effectiveness of this approach on several example problems.
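
In symbols, the decomposition the abstract relies on can be written roughly as below for linear dynamics x' = Ax + Bu + Ea with adversary input a. The symbols A, B, E, and the budget beta are our notation for illustration, not taken from the paper.

```latex
% Reachable state = adversary-free trajectory + adversary-induced deviation,
% with the adversary's total L2-norm bounded by a budget (notation illustrative).
\[
x(t) \;=\;
\underbrace{e^{At}x_0 \;+\; \int_0^t e^{A(t-s)}\,B\,u(s)\,\mathrm{d}s}_{\text{adversary-free dynamics}}
\;+\;
\underbrace{\int_0^t e^{A(t-s)}\,E\,a(s)\,\mathrm{d}s}_{\text{total uncertainty induced by the adversary}},
\qquad
\|a\|_{L_2} \,\le\, \beta .
\]
```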

2017-05-16
Wan, Mengting, Chen, Xiangyu, Kaplan, Lance, Han, Jiawei, Gao, Jing, Zhao, Bo.  2016.  From Truth Discovery to Trustworthy Opinion Discovery: An Uncertainty-Aware Quantitative Modeling Approach. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1885–1894.

In this era of information explosion, conflicts often arise when information is provided by multiple sources. The traditional truth discovery task aims to identify the truth, i.e., the most trustworthy information, from conflicting sources in different scenarios. In such tasks, the truth is regarded as a fixed value or a set of fixed values. However, in a number of real-world cases, the existence of an objective truth cannot be ensured, and we can only identify single or multiple reliable facts from opinions. Departing from the traditional truth discovery task, we address this uncertainty and introduce the concept of the trustworthy opinion of an entity, treat it as a random variable, and use its distribution to describe consistency or controversy, which is particularly difficult for data that can be numerically measured, i.e., quantitative information. In this study, we focus on quantitative opinions, propose an uncertainty-aware approach called Kernel Density Estimation from Multiple Sources (KDEm) to estimate their probability distribution, and summarize trustworthy information based on this distribution. Experiments indicate that KDEm not only has outstanding performance on the classical numeric truth discovery task, but also shows good performance on multi-modality detection and anomaly detection in the uncertain-opinion setting.
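
A stripped-down version of the kernel-density idea: each source's claimed value contributes a Gaussian bump weighted by that source's reliability, and the mode of the resulting density is taken as the trustworthy opinion. The weights, bandwidth, and example values below are illustrative assumptions, not the KDEm estimator itself.

```python
import math

def weighted_kde(claims, weights, bandwidth=1.0):
    """Return a density estimate f(x) built from per-source Gaussian kernels,
    weighted by source reliability (illustrative, not the KDEm algorithm)."""
    norm = sum(weights)
    def density(x):
        return sum(w * math.exp(-0.5 * ((x - c) / bandwidth) ** 2)
                   for c, w in zip(claims, weights)) / (norm * bandwidth * math.sqrt(2 * math.pi))
    return density

# Three sources claim the population of a city (in thousands); the outlier
# source has low reliability, so the density peaks near the consistent claims.
claims  = [812, 815, 990]
weights = [0.9, 0.8, 0.2]
f = weighted_kde(claims, weights, bandwidth=5.0)
mode = max(range(780, 1021), key=f)
print("trustworthy opinion (mode of weighted density):", mode)
```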

Mokhtar, Maizura, Hunt, Ian, Burns, Stephen, Ross, Dave.  2016.  Optimising a Waste Heat Recovery System Using Multi-Objective Evolutionary Algorithm. Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion. :913–920.

A waste heat recovery system (WHRS) on a process with variable output is an example of an intermittent renewable process. A WHRS recycles waste heat into usable energy; for example, waste heat produced from refrigeration can be used to provide hot water. However, as with most intermittent renewable energy systems, the likelihood of waste heat being available at times of demand is low. For this reason, the WHRS may be coupled with a hot water reservoir (HWR) acting as the energy storage system, which aims to maintain the desired hot water temperature Td (and therefore energy) at times of demand. The coupling of the WHRS and the HWR must be optimised to ensure higher efficiency given the intermittent mismatch between demand and heat availability. The efficiency of a WHRS can be defined in terms of multiple objectives, including minimising the need for back-up energy to achieve Td and minimising the waste heat not captured (when the reservoir volume Vres is too small). This paper investigates the application of a Multi-Objective Evolutionary Algorithm (MOEA) to optimise the parameters of the WHRS, including Vres and the depth of discharge (DoD), that affect the WHRS efficiency. Results show that one of the optimum solutions obtained requires the combination of high Vres, high DoD, a low water feed-in rate, a low-power external back-up heater, and a high excess temperature for the HWR to ensure the efficiency of the WHRS.
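
The two competing objectives can be made concrete with a toy evaluation function and a Pareto filter of candidate (Vres, DoD) settings. The cost model, parameter ranges, and candidate encoding below are illustrative assumptions, not the authors' WHRS model or their MOEA.

```python
import random

def evaluate(v_res, dod, heat, demand):
    """Toy two-objective WHRS model (illustrative assumptions only):
    objective 1 = back-up energy needed to meet demand,
    objective 2 = waste heat spilled because the reservoir was full."""
    stored = backup = spilled = 0.0
    for h, d in zip(heat, demand):
        spilled += max(0.0, stored + h - v_res)
        stored = min(v_res, stored + h)
        usable = min(d, stored * dod)        # only a DoD fraction is usable
        backup += d - usable
        stored -= usable
    return backup, spilled

def pareto_front(candidates, objectives):
    dominated = lambda a, b: all(x >= y for x, y in zip(a, b)) and a != b
    return [c for c, o in zip(candidates, objectives)
            if not any(dominated(o, p) for p in objectives)]

random.seed(0)
heat   = [random.uniform(0, 5) for _ in range(48)]   # intermittent waste heat
demand = [random.uniform(0, 3) for _ in range(48)]
cands  = [(random.uniform(5, 50), random.uniform(0.2, 0.9)) for _ in range(30)]
objs   = [evaluate(v, d, heat, demand) for v, d in cands]
print(pareto_front(cands, objs)[:3])     # a few non-dominated (Vres, DoD) pairs
```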

Chen, Ang, Wu, Yang, Haeberlen, Andreas, Zhou, Wenchao, Loo, Boon Thau.  2016.  The Good, the Bad, and the Differences: Better Network Diagnostics with Differential Provenance. Proceedings of the 2016 ACM SIGCOMM Conference. :115–128.

In this paper, we propose a new approach to diagnosing problems in complex distributed systems. Our approach is based on the insight that many of the trickiest problems are anomalies. For instance, in a network, problems often affect only a small fraction of the traffic (e.g., perhaps a certain subnet), or they only manifest infrequently. Thus, it is quite common for the operator to have “examples” of both working and non-working traffic readily available – perhaps a packet that was misrouted, and a similar packet that was routed correctly. In this case, the cause of the problem is likely to be wherever the two packets were treated differently by the network. We present the design of a debugger that can leverage this information using a novel concept that we call differential provenance. Differential provenance tracks the causal connections between network states and state changes, just like classical provenance, but it can additionally perform root-cause analysis by reasoning about the differences between two provenance trees. We have built a diagnostic tool that is based on differential provenance, and we have used our tool to debug a number of complex, realistic problems in two scenarios: software-defined networks and MapReduce jobs. Our results show that differential provenance can be maintained at relatively low cost, and that it can deliver very precise diagnostic information; in many cases, it can even identify the precise root cause of the problem.
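
The core step of reasoning about the differences between two provenance trees can be sketched as flattening each tree into its set of derivation steps and reporting the steps that appear in only one of them. The tree encoding and rule strings below are illustrative assumptions, not the authors' provenance model.

```python
def collect_steps(tree, steps=None):
    """Flatten a provenance tree (node = (label, [children])) into the set of
    derivation steps it contains."""
    steps = set() if steps is None else steps
    label, children = tree
    steps.add(label)
    for child in children:
        collect_steps(child, steps)
    return steps

def differential_provenance(good, bad):
    """Report derivation steps present in only one of the two trees -- a rough
    stand-in for the root-cause step of differential provenance (sketch only)."""
    g, b = collect_steps(good), collect_steps(bad)
    return {"only_in_working": g - b, "only_in_broken": b - g}

good = ("deliver(pktA, h2)",
        [("rule: dst=10.0.1.0/24 -> fwd(port2)", [("arrive(pktA, s1)", [])])])
bad  = ("drop(pktB)",
        [("rule: dst=10.0.1.0/25 -> drop", [("arrive(pktB, s1)", [])])])
print(differential_provenance(good, bad)["only_in_broken"])
# the mis-installed drop rule shows up in the difference between the two trees
```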

Herschel, Melanie, Hlawatsch, Marcel.  2016.  Provenance: On and Behind the Screens. Proceedings of the 2016 International Conference on Management of Data. :2213–2217.

Collecting and processing provenance, i.e., information describing the production process of some end product, is important in various applications, e.g., to assess quality, to ensure reproducibility, or to reinforce trust in the end product. In the past, different types of provenance meta-data have been proposed, each with a different scope. The first part of the proposed tutorial provides an overview and comparison of these different types of provenance. To put provenance to good use, it is essential to be able to interact with and present provenance data in a user-friendly way. Often, users interested in provenance are not necessarily experts in databases or query languages, as they are typically domain experts of the product and production process for which provenance is collected (biologists, journalists, etc.). Furthermore, in some scenarios, it is difficult to use solely queries for analyzing and exploring provenance data. The second part of this tutorial therefore focuses on enabling users to leverage provenance through adapted visualizations. To this end, we will present some fundamental concepts of visualization before we discuss possible visualizations for provenance.

Yuan, Yali, Kaklamanos, Georgios, Hogrefe, Dieter.  2016.  A Novel Semi-Supervised Adaboost Technique for Network Anomaly Detection. Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :111–114.

With the development of the Internet, network intrusions have become more and more common, and quickly identifying and preventing network attacks is getting increasingly important and difficult. Machine learning techniques have already proven to be robust methods for detecting malicious activities and network threats. Ensemble-based and semi-supervised learning methods are among the areas that receive the most attention in machine learning today, yet relatively little attention has been given to combining them. To overcome this limitation, this paper proposes a novel network anomaly detection method that combines a tri-training approach with Adaboost algorithms. The bootstrap samples of tri-training are replaced by three different Adaboost algorithms to create diversity. We run 30 iterations for every simulation to obtain average results. Simulations indicate that our proposed semi-supervised Adaboost algorithm is reproducible and consistent over different numbers of runs. It outperforms other state-of-the-art learning algorithms, even with only a small portion of labeled data in the training phase. Specifically, it has a very short execution time and a good balance between the detection rate and the false-alarm rate.
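
A compressed view of the tri-training-plus-Adaboost combination: three differently configured AdaBoost classifiers are trained on the labeled data, and each is then retrained with the unlabeled samples on which the other two agree. The hyper-parameters, single-round loop, and synthetic data split below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification

# Three differently configured AdaBoost learners stand in for the three
# "views" of tri-training (configurations are illustrative assumptions).
learners = [
    AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=0),
    AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=1),
    AdaBoostClassifier(n_estimators=200, learning_rate=0.1, random_state=2),
]

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:200], y[:200], X[200:]   # only a small labeled portion

for clf in learners:
    clf.fit(X_lab, y_lab)

# One tri-training round: where the other two learners agree on an unlabeled
# sample, their label is used as a pseudo-label to retrain the third learner.
for i, clf in enumerate(learners):
    p1 = learners[(i + 1) % 3].predict(X_unlab)
    p2 = learners[(i + 2) % 3].predict(X_unlab)
    agree = p1 == p2
    clf.fit(np.vstack([X_lab, X_unlab[agree]]),
            np.concatenate([y_lab, p1[agree]]))
```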

Sänger, Johannes, Hänsch, Norman, Glass, Brian, Benenson, Zinaida, Landwirth, Robert, Sasse, M. Angela.  2016.  Look Before You Leap: Improving the Users' Ability to Detect Fraud in Electronic Marketplaces. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :3870–3882.

Reputation systems in current electronic marketplaces can easily be manipulated by malicious sellers in order to appear more reputable than appropriate. We conducted a controlled experiment with 40 UK and 41 German participants on their ability to detect malicious behavior by means of an eBay-like feedback profile versus a novel interface involving an interactive visualization of reputation data. The results show that participants using the new interface could better detect and understand malicious behavior in three out of four attacks (overall detection accuracy of 77% with the new interface vs. 56% with the old). Moreover, with the new interface, only 7% of the users decided to buy from the malicious seller (the options being to buy from one of the available sellers or to abstain from buying), as opposed to 30% in the old interface condition.

Jang, Min-Hee, Kim, Sang-Wook, Ha, Jiwoon.  2016.  Effectiveness of Reverse Edges and Uncertainty in PIN-TRUST for Trust Prediction. Proceedings of the Sixth International Conference on Emerging Databases: Technologies, Applications, and Theory. :81–85.

Recently, PIN-TRUST, a method to predict future trust relationships between users, was proposed. PIN-TRUST outperforms existing trust prediction methods by exploiting all types of interactions between users as well as their reciprocation. In this paper, we validate whether its consideration of the reciprocation of interactions is really effective for trust prediction. Furthermore, we consider a new concept, the "uncertainty" of untrustworthy users, devised to reflect the difficulty of modeling the activities of untrustworthy users in PIN-TRUST, and we also validate the effectiveness of this uncertainty concept. Through this validation, we show that considering the reciprocation of interactions is indeed effective for trust prediction with PIN-TRUST, and that it is necessary to treat the uncertainty of untrustworthy users in the same way as that of other users.

Yan, Ting-Kun, Xu, Xin-Shun, Guo, Shanqing, Huang, Zi, Wang, Xiao-Lin.  2016.  Supervised Robust Discrete Multimodal Hashing for Cross-Media Retrieval. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :1271–1280.

Recently, multimodal hashing techniques have received considerable attention due to their low storage cost and fast query speed for multimodal data retrieval. Many methods have been proposed; however, there are still some problems that need to be further considered. For example, some of these methods just use a similarity matrix for learning hash functions, which discards some useful information contained in the original data; some of them relax the binary constraints or separate the process of learning hash functions and binary codes into two independent stages to bypass the obstacle of handling the discrete constraints on binary codes for optimization, which may generate large quantization error; and some of them are not robust to noise. All these problems may degrade the performance of a model. To address these problems, in this paper, we propose a novel supervised hashing framework for cross-modal retrieval, i.e., Supervised Robust Discrete Multimodal Hashing (SRDMH). Specifically, SRDMH tries to make the final binary codes preserve the same label information as the original data so that it can leverage more label information to supervise the learning of the binary codes. In addition, it learns the hashing functions and binary codes directly instead of relaxing the binary constraints, so as to avoid the large quantization error problem. Moreover, to make it robust and easy to solve, we further integrate a flexible l2,p loss with nonlinear kernel embedding and an intermediate representation of each instance. Finally, an alternating algorithm is proposed to solve the optimization problem in SRDMH. Extensive experiments are conducted on three benchmark data sets. The results demonstrate that the proposed method (SRDMH) outperforms or is comparable to several state-of-the-art methods for the cross-modal retrieval task.

Su, Jinshu, Chen, Shuhui, Han, Biao, Xu, Chengcheng, Wang, Xin.  2016.  A 60Gbps DPI Prototype Based on Memory-Centric FPGA. Proceedings of the 2016 ACM SIGCOMM Conference. :627–628.

Deep packet inspection (DPI) is widely used in content-aware network applications to detect string features. It is of vital importance to improve the DPI performance due to the ever-increasing link speed. In this demo, we propose a novel DPI architecture with a hierarchy memory structure and parallel matching engines based on memory-centric FPGA. The implemented DPI prototype is able to provide up to 60Gbps full-text string matching throughput and fast rules update speed.

Nirasawa, Shinnosuke, Hara, Masaki, Nakao, Akihiro, Oguchi, Masato, Yamamoto, Shu, Yamaguchi, Saneyasu.  2016.  Network Application Performance Improvement with Deeply Programmable Switch. Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing Networking and Services. :263–267.

Large-scale applications in data centers are composed of computers connected by a network. Traditional network switches cannot be flexibly controlled, so application developers cannot optimize the behavior of network elements to improve application performance. On the other hand, Deeply Programmable Network (DPN) switches can fully analyze packet payloads and be deeply programmed. In this paper, we focus on processing part of an application's functions in network elements to improve application performance based on Deep Packet Inspection (DPI), i.e., analyzing packet payloads, using DPN switches. We take several applications as targets and implement some of their functions in network switches. We then compare performance with and without our method and show that our method can significantly increase application performance.

Kohls, Katharina, Holz, Thorsten, Kolossa, Dorothea, Pöpper, Christina.  2016.  SkypeLine: Robust Hidden Data Transmission for VoIP. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :877–888.

Internet censorship is used in many parts of the world to prohibit free access to online information. Different techniques such as IP address or URL blocking, DNS hijacking, or deep packet inspection are used to block access to specific content on the Internet. In response, several censorship circumvention systems were proposed that attempt to bypass existing filters. Especially systems that hide the communication in different types of cover protocols attracted a lot of attention. However, recent research results suggest that this kind of covert traffic can be easily detected by censors. In this paper, we present SkypeLine, a censorship circumvention system that leverages Direct-Sequence Spread Spectrum (DSSS) based steganography to hide information in Voice-over-IP (VoIP) communication. SkypeLine introduces two novel modulation techniques that hide data by modulating information bits on the voice carrier signal using pseudo-random, orthogonal noise sequences and repeating the spreading operation several times. Our design goals focus on undetectability in presence of a strong adversary and improved data rates. As a result, the hiding is inconspicuous, does not alter the statistical characteristics of the carrier signal, and is robust against alterations of the transmitted packets. We demonstrate the performance of SkypeLine based on two simulation studies that cover the theoretical performance and robustness. Our measurements demonstrate that the data rates achieved with our techniques substantially exceed existing DSSS approaches. Furthermore, we prove the real-world applicability of the presented system with an exemplary prototype for Skype.
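
The DSSS hiding step can be illustrated in a few lines: each information bit is spread by a pseudo-random ±1 chip sequence, scaled down, and added to the carrier, and the receiver recovers it by correlating with the same sequence. The chip length, embedding amplitude, and noise model below are illustrative assumptions, not SkypeLine's modulation.

```python
import numpy as np

rng = np.random.default_rng(7)
SPREAD_LEN = 1024           # chips per hidden bit (illustrative)
ALPHA = 0.01                # embedding amplitude, kept small to stay inconspicuous

chips = rng.choice([-1.0, 1.0], size=SPREAD_LEN)   # shared pseudo-random sequence

def embed(carrier, bits):
    """Add a low-amplitude spread-spectrum signal to the voice carrier."""
    out = carrier.copy()
    for i, b in enumerate(bits):
        seg = slice(i * SPREAD_LEN, (i + 1) * SPREAD_LEN)
        out[seg] += ALPHA * (1 if b else -1) * chips
    return out

def extract(received, n_bits):
    """Despread: correlate each segment with the chip sequence, take the sign."""
    bits = []
    for i in range(n_bits):
        seg = received[i * SPREAD_LEN:(i + 1) * SPREAD_LEN]
        bits.append(int(np.dot(seg, chips) > 0))
    return bits

bits = [1, 0, 1, 1, 0, 0, 1, 0]
carrier = rng.normal(0, 0.1, size=len(bits) * SPREAD_LEN)   # stand-in for voice
stego = embed(carrier, bits)
print(extract(stego, len(bits)) == bits)    # True with this spreading gain
```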