
Filters: Keyword is edge detection
2019-05-01
Chen, Ming-Hung, Ciou, Jyun-Yan, Chung, I-Hsin, Chou, Cheng-Fu.  2018.  FlexProtect: A SDN-Based DDoS Attack Protection Architecture for Multi-Tenant Data Centers. Proceedings of the International Conference on High Performance Computing in Asia-Pacific Region. :202-209.

With the recent advances in software-defined networking (SDN), multi-tenant data centers provide a more efficient and flexible cloud platform to their subscribers. However, as the number, scale, and diversity of distributed denial-of-service (DDoS) attacks have escalated dramatically in recent years, the availability of those platforms is still at risk. We note that state-of-the-art DDoS protection architectures do not fully utilize the potential of SDN and network function virtualization (NFV) to mitigate the impact of attack traffic on the data center network. Therefore, in this paper, we exploit the flexibility of SDN and NFV to propose FlexProtect, a flexible distributed DDoS protection architecture for multi-tenant data centers. In FlexProtect, the detection virtual network functions (VNFs) are placed near the service provider for effective detection, and the defense VNFs are placed near the edge routers to avoid internal bandwidth consumption. Based on the architecture, we then propose FP-SYN, an anti-spoofing SYN flood protection mechanism. The emulation and simulation results with real-world data demonstrate that, compared with the traditional approach, the proposed architecture can reduce the additional routing path length by 46% and save 60% of internal bandwidth consumption. Moreover, the proposed anti-spoofing detection mechanism can achieve 98% accuracy.

Ando, Ruo.  2018.  Automated Reduction of Attack Surface Using Call Graph Enumeration. Proceedings of the 2018 2nd International Conference on Management Engineering, Software Engineering and Service Sciences. :118-121.

There have been many research efforts on detecting vulnerabilities, such as model checking and formal methods. However, according to Rice's theorem, checking whether a program contains vulnerable code by static checking is undecidable in general. In this paper, we propose a method of attack surface reduction using enumeration of the call graph. The proposed system is divided into two steps: enumerating edges E[Function Fi, Function Fi+1] and constructing the call graph by recursive search over [E1, E2, ..., En]. The proposed method enables us to find the set of paths whose leaf node is a vulnerable function VF. The root node RF of the call graph is the part of the program that is exposed to the attacker. Therefore, the call graph [VF, RF] can be eliminated according to the situation in which the program is running. We apply the proposed method to a real program (Xen) and extract the attack surface of CVE-2013-4371. These vulnerabilities are classified into two classes: use-after-free and assertion failure. Numerical results are also shown for searching the attack surface of Xen with different search depths when constructing the call graph.
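As a rough illustration of the enumeration step described above, the sketch below builds a call graph from an edge list and recursively searches for paths from an attacker-reachable root function RF down to a vulnerable function VF, with a bounded search depth. The function names and the toy edge list are invented for illustration; they are not taken from Xen or the paper.

```python
def find_attack_paths(edges, root, vulnerable, max_depth):
    """Return all call paths root -> ... -> vulnerable, up to max_depth nodes."""
    graph = {}
    for caller, callee in edges:
        graph.setdefault(caller, []).append(callee)

    paths = []

    def dfs(node, path):
        if node == vulnerable:
            paths.append(path)
            return
        if len(path) >= max_depth:
            return
        for nxt in graph.get(node, []):
            if nxt not in path:          # avoid cycles in the call graph
                dfs(nxt, path + [nxt])

    dfs(root, [root])
    return paths

# Hypothetical call-graph edges: hypercall_entry plays the role of RF,
# free_page the role of the vulnerable leaf VF.
edges = [("hypercall_entry", "parse_args"),
         ("parse_args", "copy_buf"),
         ("hypercall_entry", "audit_log"),
         ("copy_buf", "free_page")]
print(find_attack_paths(edges, "hypercall_entry", "free_page", max_depth=5))
```

Enumerating these paths yields exactly the attack-surface slice of the program: any function not on a root-to-VF path can be ignored when hardening.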

Yu, Wenchao, Cheng, Wei, Aggarwal, Charu C., Zhang, Kai, Chen, Haifeng, Wang, Wei.  2018.  NetWalk: A Flexible Deep Embedding Approach for Anomaly Detection in Dynamic Networks. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :2672-2681.

Massive and dynamic networks arise in many practical applications such as social media, security, and public health. Given an evolving network, it is crucial to detect structural anomalies, such as vertices and edges whose "behaviors" deviate from the underlying majority of the network, in a real-time fashion. Recently, network embedding has proven to be a powerful tool for learning low-dimensional representations of vertices in networks that can capture and preserve the network structure. However, most existing network embedding approaches are designed for static networks, and thus may not be well suited for a dynamic environment in which the network representation has to be constantly updated. In this paper, we propose a novel approach, NetWalk, for anomaly detection in dynamic networks by learning network representations which can be updated dynamically as the network evolves. We first encode the vertices of the dynamic network into vector representations by clique embedding, which jointly minimizes the pairwise distance of vertex representations of each walk derived from the dynamic networks, with the deep autoencoder reconstruction error serving as a global regularization. The vector representations can be computed with constant space requirements using reservoir sampling. On the basis of the learned low-dimensional vertex representations, a clustering-based technique is employed to incrementally and dynamically detect network anomalies. Compared with existing approaches, NetWalk has several advantages: 1) the network embedding can be updated dynamically, 2) streaming network nodes and edges can be encoded efficiently with constant memory usage, 3) it is flexible enough to be applied to different types of networks, and 4) network anomalies can be detected in real time. Extensive experiments on four real datasets demonstrate the effectiveness of NetWalk.
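The constant-space property mentioned above rests on reservoir sampling, which maintains a uniform sample of k items from a stream of unknown length using O(k) memory. A minimal sketch (the synthetic edge stream and k=64 are illustrative choices, not values from the paper):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from an arbitrarily long stream."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = rng.randint(0, i)        # replace with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Synthetic stream of 10,000 edges; memory stays fixed at k entries.
edge_stream = ((u, u + 1) for u in range(10_000))
sample = reservoir_sample(edge_stream, k=64)
print(len(sample))   # always 64, regardless of stream length
```

This is the standard Algorithm R; NetWalk uses the same idea so that streaming edges can be summarized in fixed space while the embedding is updated.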

Ren, W., Yardley, T., Nahrstedt, K..  2018.  EDMAND: Edge-Based Multi-Level Anomaly Detection for SCADA Networks. 2018 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1-7.

Supervisory Control and Data Acquisition (SCADA) systems play a critical role in the operation of large-scale distributed industrial systems. There are many vulnerabilities in SCADA systems, and inadvertent events or malicious attacks from outside as well as inside could lead to catastrophic consequences. Network-based intrusion detection is a preferred approach to providing security analysis for SCADA systems due to its less intrusive nature. Data in SCADA network traffic can generally be divided into transport, operation, and content levels. Most existing solutions focus on monitoring and event detection at only one or two of these levels, which is not enough to detect and reason about attacks at all three levels. In this paper, we develop EDMAND, a novel edge-based multi-level anomaly detection framework for SCADA networks. EDMAND monitors all three levels of network traffic data and applies appropriate anomaly detection methods based on the distinct characteristics of the data. Alerts are generated, aggregated, and prioritized before being sent back to control centers. A prototype of the framework is built to evaluate its detection ability and time overhead.

Li, P., Liu, Q., Zhao, W., Wang, D., Wang, S..  2018.  Chronic Poisoning against Machine Learning Based IDSs Using Edge Pattern Detection. 2018 IEEE International Conference on Communications (ICC). :1-7.

In the big data era, machine learning is one of the fundamental techniques in intrusion detection systems (IDSs). A poisoning attack, one of the most recognized security threats to machine learning-based IDSs, injects adversarial samples into the training phase, inducing data drift in the training data and a significant performance decrease of the target IDSs over testing data. In this paper, we adopt the Edge Pattern Detection (EPD) algorithm to design a novel poisoning method that attacks several machine learning algorithms used in IDSs. Specifically, we propose a boundary pattern detection algorithm to efficiently generate points that are near abnormal data but are considered normal by current classifiers. Then, we introduce a Batch-EPD Boundary Pattern (BEBP) detection algorithm to overcome the limitation on the number of edge pattern points generated by EPD and to obtain more useful adversarial samples. Based on BEBP, we further present a moderate but effective poisoning method called the chronic poisoning attack. Extensive experiments on synthetic and three real network data sets demonstrate the performance of the proposed poisoning method against several well-known machine learning algorithms and a practical intrusion detection method named FMIFS-LSSVM-IDS.

Lu, X., Wan, X., Xiao, L., Tang, Y., Zhuang, W..  2018.  Learning-Based Rogue Edge Detection in VANETs with Ambient Radio Signals. 2018 IEEE International Conference on Communications (ICC). :1-6.
Edge computing for mobile devices in vehicular ad hoc networks (VANETs) has to address rogue edge attacks, in which a rogue edge node claims to be the serving edge in the vehicle to steal user secrets and help launch other attacks such as man-in-the-middle attacks. Rogue edge detection in VANETs is more challenging than spoofing detection in indoor wireless networks due to the high mobility of onboard units (OBUs) and the large-scale network infrastructure with roadside units (RSUs). In this paper, we propose a physical (PHY)-layer rogue edge detection scheme for VANETs based on the shared ambient radio signals observed along the same moving trace of the mobile device and the serving edge in the same vehicle. In this scheme, the edge node under test has to send the physical properties of the ambient radio signals, including the received signal strength indicator (RSSI) of the ambient signals with the corresponding source media access control (MAC) address during a given time slot. The mobile device can compare the received ambient signal properties with its own record, or apply the RSSI of the received signals, to detect rogue edge attacks, and it determines the test threshold for the detection. We adopt a reinforcement learning technique to enable the mobile device to achieve the optimal detection policy in a dynamic VANET without being aware of the VANET model or the attack model. Simulation results show that the Q-learning based detection scheme can significantly reduce the detection error rate and increase the utility compared with existing schemes.
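To make the reinforcement learning ingredient concrete, here is a toy, stateless Q-learning (bandit-style) loop that learns which detection threshold yields the best simulated utility. The candidate thresholds, the utility function, and all constants are invented for illustration; the actual scheme derives its rewards from RSSI observations in the VANET, not from this synthetic model.

```python
import random

def learn_threshold(episodes=3000, alpha=0.1, eps=0.1, seed=1):
    rng = random.Random(seed)
    thresholds = [0.2, 0.4, 0.6, 0.8]        # hypothetical candidate thresholds
    q = {t: 0.0 for t in thresholds}         # single-state Q-table

    def utility(t):
        # Invented environment: utility peaks at 0.6, with small noise,
        # standing in for the trade-off between misses and false alarms.
        return 1.0 - 2.0 * abs(t - 0.6) + rng.gauss(0.0, 0.05)

    for _ in range(episodes):
        # epsilon-greedy: explore occasionally, otherwise exploit best estimate
        t = rng.choice(thresholds) if rng.random() < eps else max(q, key=q.get)
        # incremental value update toward the observed utility
        q[t] += alpha * (utility(t) - q[t])
    return max(q, key=q.get)

print(learn_threshold())   # settles on the threshold with the highest utility
```

The point of learning the threshold online, as in the paper, is that no model of the VANET or the attacker is needed; the device converges to the best policy from interaction alone.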
Chen, D., Chen, W., Chen, J., Zheng, P., Huang, J..  2018.  Edge Detection and Image Segmentation on Encrypted Image with Homomorphic Encryption and Garbled Circuit. 2018 IEEE International Conference on Multimedia and Expo (ICME). :1-6.

Edge detection is one of the most important topics in image processing. In the scenario of cloud computing, performing edge detection may also need to take privacy protection into account. In this paper, we propose an edge detection and image segmentation scheme on an encrypted image with the Sobel edge detector. We implement Gaussian filtering and the Sobel operator on the image in the encrypted domain using homomorphic properties. By implementing an adaptive threshold decision algorithm in the encrypted domain, we obtain a threshold determined by the image distribution. With the technique of garbled circuits, we perform comparison in the encrypted domain and obtain the edges of the image without decrypting the image in advance. We then propose an image segmentation scheme on the encrypted image based on the detected edges. Our experiments demonstrate the viability and effectiveness of the proposed encrypted-image edge detection and segmentation.
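For reference, this is what the Sobel operator computes on a plaintext image; the paper's contribution is evaluating it under encryption, which is not shown here. The 6x6 step-edge test image is made up, and the Gaussian filtering and garbled-circuit comparison stages are omitted:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude (|gx| + |gy|) of a grayscale image given as lists."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)   # L1 approximation of the gradient
    return out

# Vertical step edge: left half dark (0), right half bright (255).
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
mag = sobel_magnitude(img)
print(mag[3])   # strong response only at the 0 -> 255 transition columns
```

Because the operator is just weighted sums of pixel values, it maps naturally onto additively homomorphic ciphertexts; only the final threshold comparison needs the garbled circuit.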

Rayavel, P., Rathnavel, P., Bharathi, M., Kumar, T. Siva.  2018.  Dynamic Traffic Control System Using Edge Detection Algorithm. 2018 International Conference on Soft-Computing and Network Security (ICSNS). :1-5.

As traffic congestion on the transport network increases, leading to slower speeds, longer trip times, and consequently longer vehicular queues, it is necessary to introduce smart ways to reduce traffic. We are already edging closer to ``smart city-smart travel''. Today, a large number of smartphone applications and connected sat-navs will help get you to your destination in the quickest and easiest manner possible thanks to real-time data and communication from a host of sources. At present, traffic lights with fixed phases are used. An alternative is to use electronic sensors and magnetic coils that detect the congestion frequency and monitor traffic, but these are found to be more expensive. Hence we propose a traffic control system using image processing techniques such as edge detection. Vehicles will be detected using images instead of sensors. Cameras are installed alongside the road and capture an image sequence every 40 seconds. Digital image processing techniques are then applied to analyse the images, and the traffic signal lights are controlled accordingly.
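One plausible way to turn an edge map into a signal decision, sketched under stated assumptions: estimate road occupancy as the ratio of edge pixels in the current frame to those in an empty-road reference frame, then map that density to a green-light duration. The function, the clamping range, and every number below are hypothetical and not taken from the paper:

```python
def green_time(edge_pixel_count, empty_road_count,
               min_green=10, max_green=60):
    """Map edge-pixel density (vs. an empty-road reference) to green seconds."""
    density = edge_pixel_count / max(empty_road_count, 1)
    # Density 1.0 means "looks empty"; clamp the excess into [0, 1].
    extra = min(max(density - 1.0, 0.0), 4.0) / 4.0
    return round(min_green + extra * (max_green - min_green))

print(green_time(edge_pixel_count=900, empty_road_count=300))   # -> 35
```

An empty road yields the minimum green phase, while a frame saturated with vehicle edges yields the maximum, which is the behavior the abstract describes for its adaptive signal.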

2018-09-28
Brandauer, C., Dorfinger, P., Paiva, P. Y. A..  2017.  Towards scalable and adaptable security monitoring. 2017 IEEE 36th International Performance Computing and Communications Conference (IPCCC). :1–6.

Industrial Control Systems were long considered safe due to the use of proprietary technology and physical isolation. This situation has changed dramatically, and such systems are nowadays often prone to severe attacks executed from remote locations. In many cases, intrusions remain undetected for a long time, which allows the adversary to meticulously prepare an attack and maximize its destructiveness. The ability to detect an attack in its early stages thus has a high potential to significantly reduce its impact. To this end, we propose a holistic, multi-layered security monitoring and mitigation framework spanning the physical and cyber domains. The comprehensiveness of the approach demands scalability measures built in by design. In this paper we present how scalability is addressed by an architecture that enforces geographically decentralized data-reduction approaches that can be dynamically adjusted to the currently perceived context. A specific focus is put on a robust and resilient solution for orchestrating dynamic configuration updates. Experimental results based on a prototype implementation show the feasibility of the approach.

Husak, M., Čermák, M..  2017.  A graph-based representation of relations in network security alert sharing platforms. 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM). :891–892.

In this paper, we present a framework for graph-based representation of the relations between sensors and alert types in a security alert sharing platform. Nodes in the graph represent either sensors or alert types, while edges represent various relations between them, such as a common type of reported alerts or duplicated alerts. The graph is automatically updated, stored in a graph database, and visualized. The resulting graph will be used by network administrators and security analysts as a visual guide and situational awareness tool in the complex environment of security alert sharing.

Malloy, Matthew, Barford, Paul, Alp, Enis Ceyhun, Koller, Jonathan, Jewell, Adria.  2017.  Internet Device Graphs. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1913–1921.
Internet device graphs identify relationships between user-centric internet connected devices such as desktops, laptops, smartphones, tablets, gaming consoles, TVs, etc. The ability to create such graphs is compelling for online advertising, content customization, recommendation systems, security, and operations. We begin by describing an algorithm for generating a device graph based on IP-colocation, and then apply the algorithm to a corpus of over 2.5 trillion internet events collected over a period of six weeks in the United States. The resulting graph exhibits immense scale, with greater than 7.3 billion edges (pair-wise relationships) between more than 1.2 billion nodes (devices), accounting for the vast majority of internet connected devices in the US. Next, we apply community detection algorithms to the graph, resulting in a partitioning of internet devices into 100 million small communities representing physical households. We validate this partition with a unique ground truth dataset. We report on the characteristics of the graph and the communities. Lastly, we discuss the important issues of ethics and privacy that must be considered when creating and studying device graphs, and suggest further opportunities for device graph enrichment and application.
Xue, Haoyue, Li, Yuhong, Rahmani, Rahim, Kanter, Theo, Que, Xirong.  2017.  A Mechanism for Mitigating DoS Attack in ICN-based Internet of Things. Proceedings of the 1st International Conference on Internet of Things and Machine Learning. :26:1–26:10.
Information-Centric Networking (ICN) 1 is a significant networking paradigm for the Internet of Things, which is an information-centric network in essence. The ICN paradigm owns inherently some security features, but also brings several new vulnerabilities. The most significant one among them is Interest flooding, which is a new type of Denial of Service (DoS) attack, and has even more serious effects to the whole network in the ICN paradigm than in the traditional IP paradigm. In this paper, we suggest a new mechanism to mitigate Interest flooding attack. The detection of Interest flooding and the corresponding mitigation measures are implemented on the edge routers, which are directly connected with the attackers. By using statistics of Interest satisfaction rate on the incoming interface of some edge routers, malicious name-prefixes or interfaces can be discovered, and then dropped or slowed down accordingly. With the help of the network information, the detected malicious name-prefixes and interfaces can also be distributed to the whole network quickly, and the attack can be mitigated quickly. The simulation results show that the suggested mechanism can reduce the influence of the Interest flooding quickly, and the network performance can recover automatically to the normal state without hurting the legitimate users.
Feibish, Shir Landau, Afek, Yehuda, Bremler-Barr, Anat, Cohen, Edith, Shagam, Michal.  2017.  Mitigating DNS Random Subdomain DDoS Attacks by Distinct Heavy Hitters Sketches. Proceedings of the Fifth ACM/IEEE Workshop on Hot Topics in Web Systems and Technologies. :8:1–8:6.
Random Subdomain DDoS attacks on the Domain Name System (DNS) infrastructure are becoming a popular vector in recent attacks (e.g., the recent Mirai attack on Dyn). In these attacks, many queries are sent for a single or a few victim domains, yet they include highly varying non-existent subdomains generated randomly. Motivated by these attacks, we designed and implemented novel and efficient algorithms for distinct heavy hitters (dHH). A (classic) heavy hitter (HH) in a stream of elements is a key (e.g., the domain of a query) which appears in many elements (e.g., requests). When stream elements consist of ⟨key, subkey⟩ pairs (⟨domain, subdomain⟩), a distinct heavy hitter (dHH) is a key that is paired with a large number of different subkeys. Our algorithms dominate previous designs in both the asymptotic (theoretical) sense and in practicality. Specifically, the new fixed-size algorithms are simple to code and have asymptotically optimal space-accuracy tradeoffs. Based on these algorithms, we build and implement a system for detection and mitigation of Random Subdomain DDoS attacks. We perform an experimental evaluation, demonstrating the effectiveness of our algorithms.
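The distinct-heavy-hitter notion can be illustrated with an exact, brute-force version that stores a set of subkeys per key. The paper's actual contribution is achieving this in fixed space with sketches; the dictionary-of-sets version below, with made-up query data, only demonstrates the definition:

```python
from collections import defaultdict

def distinct_heavy_hitters(pairs, threshold):
    """Keys paired with at least `threshold` DISTINCT subkeys (exact version)."""
    distinct = defaultdict(set)
    for key, subkey in pairs:
        distinct[key].add(subkey)
    return {k for k, subs in distinct.items() if len(subs) >= threshold}

# Synthetic DNS query stream as (domain, subdomain) pairs.
queries = [("victim.com", f"rnd{i}") for i in range(500)]   # random subdomains
queries += [("normal.com", "www")] * 500                     # one repeated subkey
print(distinct_heavy_hitters(queries, threshold=100))   # {'victim.com'}
```

Note how the classic HH notion fails here: normal.com is just as frequent as victim.com, but only victim.com pairs with many distinct subdomains, which is the signature of a random-subdomain attack.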
Ushijima-Mwesigwa, Hayato, Negre, Christian F. A., Mniszewski, Susan M..  2017.  Graph Partitioning Using Quantum Annealing on the D-Wave System. Proceedings of the Second International Workshop on Post Moores Era Supercomputing. :22–29.
Graph partitioning (GP) applications are ubiquitous throughout mathematics, computer science, chemistry, physics, bio-science, machine learning, and complex systems. Post Moore's era supercomputing has provided us with an opportunity to explore new approaches for traditional graph algorithms on quantum computing architectures. In this work, we explore graph partitioning using quantum annealing on the D-Wave 2X machine. Motivated by a recently proposed graph-based electronic structure theory applied to quantum molecular dynamics (QMD) simulations, graph partitioning is used for reducing the calculation of the density matrix into smaller subsystems, rendering the calculation more computationally efficient. Unconstrained graph partitioning as community clustering based on the modularity metric can be naturally mapped into the Hamiltonian of the quantum annealer. On the other hand, when constraints are imposed for partitioning into equal parts and minimizing the number of cut edges between parts, a quadratic unconstrained binary optimization (QUBO) reformulation is required. This reformulation may employ the graph complement to fit the problem in the Chimera graph of the quantum annealer. Partitioning into 2 parts, and into k parts concurrently for arbitrary k, is demonstrated with benchmark graphs, random graphs, and small material system density matrix based graphs. Results for graph partitioning using quantum and hybrid classical-quantum approaches are shown to be comparable to current "state of the art" methods and sometimes better.
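The balanced 2-way partitioning objective that gets mapped to the annealer can be written as the cut size plus a penalty on part-size imbalance. The sketch below evaluates that QUBO-style objective classically by brute force on a tiny invented graph; a D-Wave machine would minimize the same objective by annealing instead of enumeration, and the penalty weight alpha is an illustrative choice:

```python
from itertools import product

def qubo_energy(assign, edges, alpha):
    """Cut edges plus alpha * (imbalance of the two parts) squared."""
    cut = sum(1 for u, v in edges if assign[u] != assign[v])
    imbalance = sum(2 * s - 1 for s in assign) ** 2   # square of the spin sum
    return cut + alpha * imbalance

# Two triangles joined by a single bridge edge: the natural balanced min-cut.
edges = [(0, 1), (1, 2), (0, 2),      # triangle A
         (3, 4), (4, 5), (3, 5),      # triangle B
         (2, 3)]                       # bridge
best = min(product([0, 1], repeat=6),
           key=lambda a: qubo_energy(a, edges, alpha=1.0))
print(best)   # the two triangles land in opposite, equal-sized parts
```

The imbalance penalty is what turns the constrained equal-parts problem into an *unconstrained* quadratic objective, which is the form a quantum annealer's Hamiltonian can represent directly.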
Gu, Yufei, Zhao, Qingchuan, Zhang, Yinqian, Lin, Zhiqiang.  2017.  PT-CFI: Transparent Backward-Edge Control Flow Violation Detection Using Intel Processor Trace. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :173–184.
This paper presents PT-CFI, a new backward-edge control flow violation detection system based on a novel use of a recently introduced hardware feature called Intel Processor Trace (PT). Designed primarily for offline software debugging and performance analysis, PT offers the capability of tracing the entire control flow of a running program. In this paper, we explore the practicality of using PT for security applications, and propose to build a new control flow integrity (CFI) model that enforces a backward-edge CFI policy for native COTS binaries based on the traces from Intel PT. By exploring the intrinsic properties of PT with a system call based synchronization primitive and a deep inspection capability, we have addressed a number of technical challenges such as how to make sure the backward edge CFI policy is both sound and complete, how to make PT enforce our CFI policy, and how to balance the performance overhead. We have implemented PT-CFI and evaluated with a number of programs including SPEC2006 and HTTP daemons. Our experimental results show that PT-CFI can enforce a perfect backward-edge CFI with only small overhead for the protected program.
2018-05-30
Vanhoef, Mathy, Schepers, Domien, Piessens, Frank.  2017.  Discovering Logical Vulnerabilities in the Wi-Fi Handshake Using Model-Based Testing. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :360–371.

We use model-based testing techniques to detect logical vulnerabilities in implementations of the Wi-Fi handshake. This reveals new fingerprinting techniques, multiple downgrade attacks, and Denial of Service (DoS) vulnerabilities. Stations use the Wi-Fi handshake to securely connect with wireless networks. In this handshake, mutually supported capabilities are determined, and fresh pairwise keys are negotiated. As a result, a proper implementation of the Wi-Fi handshake is essential in protecting all subsequent traffic. To detect the presence of erroneous behaviour, we propose a model-based technique that generates a set of representative test cases. These tests cover all states of the Wi-Fi handshake, and explore various edge cases in each state. We then treat the implementation under test as a black box, and execute all generated tests. Determining whether a failed test introduces a security weakness is done manually. We tested 12 implementations using this approach, and discovered irregularities in all of them. Our findings include fingerprinting mechanisms, DoS attacks, and downgrade attacks where an adversary can force usage of the insecure WPA-TKIP cipher. Finally, we explain how one of our downgrade attacks highlights incorrect claims made in the 802.11 standard.

2018-05-02
Brennan, Tegan.  2017.  Path Cost Analysis for Side Channel Detection. Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis. :416–419.

Side-channels have been increasingly demonstrated as a practical threat to the confidentiality of private user information. Being able to statically detect these kinds of vulnerabilities is a key challenge in current computer security research. We introduce a new technique, path-cost analysis (PCA), for the detection of side-channels. Given a cost model for a type of side-channel, path-cost analysis assigns a symbolic cost expression to every node and every back edge of a method's control flow graph that gives an over-approximation of all possible observable values at that node or after traversing that cycle. Queries to a satisfiability solver on the maximum distance between specific pairs of nodes allow us to detect the presence of imbalanced paths through the control flow graph. When combined with taint analysis, we are able to answer the following question: does there exist a pair of paths in the method's control flow graph, differing only on branch conditions influenced by the secret, that differ in observable value by more than some given threshold? In fact, we are able to state specifically which sets of secret-sensitive conditional statements introduce a side-channel detectable given some noise parameter. We extend this approach to an interprocedural analysis, resulting in an over-approximation of the number of true side-channels in the program according to the given cost model. Greater precision can be obtained by combining our method with predicate abstraction or symbolic execution to eliminate a subset of the infeasible paths through the control flow graph. We propose evaluating our method on a set of sizeable Java server-client applications.

2018-02-02
Abura'ed, Nour, Khan, Faisal Shah, Bhaskar, Harish.  2017.  Advances in the Quantum Theoretical Approach to Image Processing Applications. ACM Comput. Surv. 49:75:1–75:49.
In this article, a detailed survey of the quantum approach to image processing is presented. Recently, it has been established that existing quantum algorithms are applicable to image processing tasks, allowing quantum informational models of classical image processing. However, efforts continue in identifying the diversity of its applicability in various image processing domains. Here, in addition to reviewing some of the critical image processing applications that quantum approaches have targeted, such as denoising, edge detection, image storage, retrieval, and compression, this study will also highlight the complexities in transitioning from the classical to the quantum domain. This article establishes theoretical fundamentals, analyzes performance and evaluation, draws key statistical evidence to support claims, and provides recommendations based on published literature, mostly from the period from 2010 to 2015.
Modarresi, A., Gangadhar, S., Sterbenz, J. P. G..  2017.  A framework for improving network resilience using SDN and fog nodes. 2017 9th International Workshop on Resilient Networks Design and Modeling (RNDM). :1–7.

The IoT (Internet of Things) is one of the primary reasons for the massive growth in the number of devices connected to the Internet, leading to an increased volume of traffic in the core network. Fog and edge computing are becoming a solution for handling IoT traffic by moving time-sensitive processing to the edge of the network, while using the conventional cloud for historical analysis and long-term storage. The aim of fog computing is to provide processing, storage, and network communication at the edge of the network, in order to reduce delay and network traffic and to decentralise computing. In this paper, we define a framework that realises fog computing and can be extended to install any service of choice. Our framework utilises fog nodes as an extension of the traditional switch to include processing, networking, and storage. The fog nodes act as local decision-making elements that interface with software-defined networking (SDN) to push updates throughout the network. To test our framework, we develop an IP-spoofing security application and verify its correctness through multiple experiments.

2018-01-23
Baragchizadeh, A., Karnowski, T. P., Bolme, D. S., O’Toole, A. J..  2017.  Evaluation of Automated Identity Masking Method (AIM) in Naturalistic Driving Study (NDS). 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017). :378–385.

Identity masking methods have been developed in recent years for use in multiple applications aimed at protecting privacy. There is only limited work, however, targeted at evaluating the effectiveness of these methods, with only a handful of studies testing identity masking effectiveness for human perceivers. Here, we employed human participants to evaluate identity masking algorithms on video data of drivers, which contains subtle movements of the face and head. We evaluated the effectiveness of the “personalized supervised bilinear regression method for Facial Action Transfer (FAT)” de-identification algorithm. We also evaluated an edge-detection filter as an alternate “fill-in” method for when face tracking failed due to abrupt or fast head motions. Our primary goal was to develop methods for human-based evaluation of the effectiveness of identity masking. To this end, we designed and conducted two experiments to address the effectiveness of masking in preventing recognition and in preserving action perception. 1) How effective is an identity masking algorithm? We conducted a face recognition experiment and employed Signal Detection Theory (SDT) to measure human accuracy and decision bias. The accuracy results show that both masks (the FAT mask and edge detection) are effective, but that neither completely eliminated recognition. However, the decision bias data suggest that both masks altered the participants' response strategy and made them less likely to affirm identity. 2) How effectively does the algorithm preserve actions? We conducted two experiments on facial behavior annotation. Results showed that masking had a negative effect on annotation accuracy for the majority of actions, with differences across action types. Notably, the FAT mask preserved actions better than the edge-detection mask. To our knowledge, this is the first study to evaluate a de-identification method aimed at preserving facial actions employing human evaluators in a laboratory setting.

2017-09-15
Zodik, Gabi.  2016.  Cognitive and Contextual Enterprise Mobile Computing: Invited Keynote Talk. Proceedings of the 9th India Software Engineering Conference. :11–12.

The second wave of change presented by the age of mobility, wearables, and IoT focuses on how organizations and enterprises, from a wide variety of commercial areas and industries, will use and leverage the new technologies available. Businesses and industries that don't change with the times will simply cease to exist. Applications need to be powered by cognitive and contextual technologies to support real-time proactive decisions. These decisions will be based on the mobile context of a specific user or group of users, incorporating location, time of day, current user task, and more. Driven by the huge amounts of data produced by mobile and wearables devices, and influenced by privacy concerns, the next wave in computing will need to exploit data and computing at the edge of the network. Future mobile apps will have to be cognitive to 'understand' user intentions based on all the available interactions and unstructured data. Mobile applications are becoming increasingly ubiquitous, going beyond what end users can easily comprehend. Essentially, for both business-to-client (B2C) and business-to-business (B2B) apps, only about 30% of the development efforts appear in the interface of the mobile app. For example, areas such as the collaborative nature of the software or the shortened development cycle and time-to-market are not apparent to end users. The other 70% of the effort invested is dedicated to integrating the applications with back-office systems and developing those aspects of the application that operate behind the scenes. An important, yet often complex, part of the solution and mobile app takes place far from the public eye, in the back-office environment.
It is there that various aspects of customer relationship management must be addressed: tracking usage data, pushing out messaging as needed, distributing apps to employees within the enterprise, and handling the wide variety of operational and management tasks, often involving the collection and monitoring of data from sensors and wearable devices. All this must be carried out while addressing security concerns that range from verifying user identities, to data protection, to blocking attempted breaches of the organization and the activation of malicious code. Of course, these tasks must be augmented by a systematic approach and vigilant maintenance of user privacy. The first wave of the mobile revolution focused on development platforms, run-time platforms, deployment, activation, and management tools for multi-platform environments, including comprehensive mobile device management (MDM). To realize the full potential of this revolution, we must capitalize on information about the context within which mobile devices are used. With both employees and customers, this context could be a simple piece of information such as the user location or time of use, the hour of the day, or the day of the week. The context could also be represented by more complex data, such as the amount of time used, type of activity performed, or user preferences. Further insight could include the relationship history with the user and the user's behavior as part of that relationship, as well as a long list of variables to be considered in various scenarios. Today, with the new wave of wearables, the definition of context is being further extended to include environmental factors such as temperature, weather, or pollution, as well as personal factors such as heart rate, movement, or even clothing worn. In both B2E and B2C situations, a context-dependent approach, based on the appropriate context for each specific user, offers a superior tool for working with both employees and clients alike.
This mode of operation does not start and end with the individual user. Rather, it takes into account the people surrounding the user, the events taking place nearby, appliances or equipment activated, the user's daily schedule, as well as other, more general information, such as the environment and weather. Developing enterprise-wide, context-dependent, mobile solutions is still a complex challenge. A system of real added-value services must be developed, as well as a comprehensive architecture. These four-tier architectures comprise end-user devices like wearables and smartphones, connected to systems of engagement (SoEs), and systems of record (SoRs). All this is needed to enable data analytics and collection in the context where it is created. The data collected will allow further interaction with employees or customers, analytics, and follow-up actions based on the results of that analysis. We also need to ensure end-to-end (E2E) security across these four tiers, and to keep the data and application contexts in sync. These are just some of the challenges being addressed by IBM Research. As an example, these technologies could be deployed in the retail space, especially in brick-and-mortar stores. Identifying a customer entering a store, detecting her location among the aisles, and cross-referencing that data with the customer's transaction history, could lead to special offers tailor-made for that specific customer or suggestions relevant to her purchasing process. This technology enables real-world implementation of metrics, analytics, and other tools familiar to us from the online realm. We can now measure visits to physical stores in the same way we measure web page hits: analyze time spent in the store, the areas visited by the customer, and the results of those visits. In this way, we can also identify shoppers wandering around the store and understand when they are having trouble finding the product they want to purchase. 
We can also gain insight into the standard traffic patterns of shoppers and how they navigate a store's floors and departments. We might even consider redesigning the store layout to take advantage of this insight to enhance sales. In healthcare, the context can refer to insight extracted from data received from sensors on the patient, from either his mobile device or wearable technology, and information about the patient's environment and location at that moment in time. This data can help determine if any assistance is required. For example, if a patient is discharged from the hospital for continued at-home care, doctors can continue to remotely monitor his condition via a system of sensors and analytic tools that interpret the sensor readings. This approach can also be applied to the area of safety. Scientists at IBM Research are developing a platform that collects and analyzes data from wearable technology to protect the safety of employees working in construction, heavy industry, manufacturing, or out in the field. This solution can serve as a real-time warning system by analyzing information gathered from wearable sensors embedded in personal protective equipment, such as smart safety helmets and protective vests, and in the workers' individual smartphones. These sensors can continuously monitor a worker's pulse rate, movements, body temperature, and hydration level, as well as environmental factors such as noise level, and other parameters. The system can provide immediate alerts to the worker about any dangers in the work environment to prevent possible injury. It can also be used to prevent accidents before they happen or detect accidents once they occur. For example, with sophisticated algorithms, we can detect if a worker falls based on a sudden difference in elevations detected by an accelerometer, and then send an alert to notify her peers and supervisor or call for help. 
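The worker-safety scenario above describes detecting a fall from a sudden change registered by an accelerometer. As a minimal, hypothetical sketch of that idea (the function name, thresholds, and the dip-then-impact heuristic are all illustrative assumptions, not IBM's actual algorithm):

```python
import math

def detect_fall(samples, free_fall_g=0.3, impact_g=2.5, window=5):
    """Flag a likely fall: a near-free-fall dip in total acceleration
    followed shortly by an impact spike.

    `samples` is a list of (ax, ay, az) accelerometer readings in
    units of g; at rest the magnitude is ~1.0 g.
    """
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:  # device briefly in free fall
            # look for an impact spike within the next `window` samples
            if any(mm > impact_g for mm in mags[i + 1:i + 1 + window]):
                return True
    return False
```

A production system would of course fuse this with the other sensor streams mentioned (pulse, temperature, environment) before alerting peers or supervisors.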
Monitoring can also help ensure safety in areas where continuous exposure to heat or dangerous materials must be limited based on regulated time periods. Mobile technologies can also help manage events with massive numbers of participants, such as professional soccer games, music festivals, and even large-scale public demonstrations, by sending alerts concerning long and growing lines or specific high-traffic areas. These technologies can be used to detect accidents typical of large-scale gatherings, send warnings about overcrowding, and alert the event organizers. In the same way, they can alleviate parking problems or guide public transportation operators, all via analysis and predictive analytics. IBM Research - Haifa is currently involved in multiple activities as part of IBM's MobileFirst initiative. Haifa researchers have a special expertise in time- and location-based intelligent applications, including visual maps that display activity contexts and predictive analytics systems for mobile data and users. In another area, IBM researchers in Haifa are developing new cognitive services driven from the unique data available on mobile and wearable devices. Looking to the future, the IBM Research team is further advancing the integration of wearable technology, augmented reality systems, and biometric tools for mobile user identity validation. Managing contextual data and analyzing the interaction between the different kinds of data presents fascinating challenges for the development of next-generation programming. For example, we need to rethink when and where data processing and computations should occur: Is it best to leave them at the user-device level, or perhaps they should be moved to the back-office systems, servers, and/or the cloud infrastructures with which the user device is connected? New-age applications are becoming more and more distributed.
They operate on a wide range of devices, such as wearable technologies, use a variety of sensors, and depend on cloud-based systems. As a result, a new distributed programming paradigm is emerging to meet the needs of these use cases and real-time scenarios. This paradigm needs to deal with massive amounts of devices, sensors, and data in business systems, and must be able to shift computation from the cloud to the edge, based on context, in close to real time. By processing data at the edge of the network, close to where the interactions and processing are happening, we can help reduce latency and offer new opportunities for improved privacy and security. Despite all these interactions, data collection, and the analytic insights based upon them, we cannot forget the issues of privacy. Without a proper and reliable solution that offers more control over what personal data is shared and how it is used, people will refrain from sharing information. Such sharing is necessary for developing and understanding the context in which people are carrying out various actions, and to offer them tools and services to enhance their actions. In the not-so-distant future, we anticipate the appearance of ad-hoc networks for wearable technology systems that will interact with one another to further expand the scope and value of available context-dependent data.

Rodrigues, Bruno, Quintão Pereira, Fernando Magno, Aranha, Diego F..  2016.  Sparse Representation of Implicit Flows with Applications to Side-channel Detection. Proceedings of the 25th International Conference on Compiler Construction. :110–120.

Information flow analyses traditionally use the Program Dependence Graph (PDG) as a supporting data structure. This graph relies on Ferrante et al.'s notion of control dependences to represent implicit flows of information. A limitation of this approach is that it may create O(|I| × |E|) implicit flow edges in the PDG, where I is the set of instructions in a program and E is the set of edges in its control flow graph. This paper shows that it is possible to compute information flow analyses using a different notion of implicit dependence, which yields a number of edges linear in the number of definitions plus uses of variables. Our algorithm computes these dependences in a single traversal of the program's dominance tree. This efficiency is possible due to a key property of programs in Static Single Assignment form: the definition of a variable dominates all its uses. Our algorithm correctly implements Hunt and Sands' system of security types. Contrary to their original formulation, which required O(|I| × |I|) space and time for structured programs, we require only O(|I|). We have used our ideas to build FlowTracker, a tool that uncovers side-channel vulnerabilities in cryptographic algorithms. FlowTracker handles programs with over one million assembly instructions in less than 200 seconds, and creates 24% fewer implicit flow edges than Ferrante et al.'s technique. FlowTracker has detected an issue in a constant-time implementation of Elliptic Curve Cryptography; it has found several time-variant constructions in OpenSSL, one issue in TrueCrypt, and it has validated the isochronous behavior of the NaCl library.
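The efficiency argument above hinges on dominance: in SSA form, a variable's definition dominates all its uses, so dependences can be found in one dominance-tree traversal. As background for readers, here is a minimal sketch of the standard iterative dominator-set computation on a toy CFG (this is textbook background, not the paper's algorithm; the function name and CFG encoding are illustrative):

```python
def dominators(succ, entry):
    """Iteratively compute dominator sets for a CFG.

    `succ` maps each node to its successor list; returns a dict
    mapping each node to the set of nodes that dominate it.
    """
    nodes = set(succ) | {s for ss in succ.values() for s in ss}
    preds = {n: [] for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            preds[s].append(n)
    dom = {n: set(nodes) for n in nodes}  # start from "everything dominates"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            ps = preds[n]
            # a node is dominated by itself plus what dominates all preds
            new = {n} | (set.intersection(*(dom[p] for p in ps)) if ps else set())
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom
```

On a diamond CFG (a branches to b and c, both rejoin at d), this correctly reports that only the entry a dominates the join point d, which is exactly the shape in which Ferrante-style control dependences multiply while the sparse representation stays linear.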

Puttegowda, D., Padma, M. C..  2016.  Human Motion Detection and Recognising Their Actions from the Video Streams. Proceedings of the International Conference on Informatics and Analytics. :12:1–12:5.

In the field of image processing, it is a complex and challenging task to detect human motion in video and recognize actions from video sequences. A novel approach is presented in this paper to detect human motion and recognize the corresponding actions. By tracking a selected object over consecutive frames of a video or image sequence, the different human actions are recognized. Initially, the background motion is subtracted from the input video stream and its binary images are constructed. Using spatiotemporal interest points, the object to be monitored is selected by enclosing the required pixels within a bounding rectangle. The selected foreground pixels within the bounding rectangle are then tracked using an edge tracking algorithm. Features are extracted from the tracked pixels and used to detect human motion. Finally, the different human actions are recognized using a K-Nearest Neighbor classifier. This methodology applies wherever monitoring human actions is required and security is a prime factor, such as shop, city, and airport surveillance. The results obtained are quite significant and are analyzed on datasets such as the KTH and Weizmann datasets, which contain actions like bending, running, walking, skipping, and hand-waving.
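The final classification step above is a standard K-Nearest Neighbor vote over extracted features. A minimal generic sketch of that step (the feature vectors, labels, and function name are illustrative; the paper's actual features come from its edge-tracking stage):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples under Euclidean distance.

    `train` is a list of (feature_vector, label) pairs.
    """
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

For example, a query feature vector falling near the "walking" training samples would be assigned that action label by the vote.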

Wang, Gang, Zhang, Xinyi, Tang, Shiliang, Zheng, Haitao, Zhao, Ben Y..  2016.  Unsupervised Clickstream Clustering for User Behavior Analysis. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :225–236.

Online services are increasingly dependent on user participation. Whether it's online social networks or crowdsourcing services, understanding user behavior is important yet challenging. In this paper, we build an unsupervised system to capture dominating user behaviors from clickstream data (traces of users' click events), and visualize the detected behaviors in an intuitive manner. Our system identifies "clusters" of similar users by partitioning a similarity graph (nodes are users; edges are weighted by clickstream similarity). The partitioning process leverages iterative feature pruning to capture the natural hierarchy within user clusters and produce intuitive features for visualizing and understanding captured user behaviors. For evaluation, we present case studies on two large-scale clickstream traces (142 million events) from real social networks. Our system effectively identifies previously unknown behaviors, e.g., dormant users, hostile chatters. Also, our user study shows people can easily interpret identified behaviors using our visualization tool.
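The core idea above is a similarity graph over users, partitioned into behavioral clusters. As a much-simplified stand-in for the paper's partitioning (here: thresholded pairwise similarity plus connected components; the Jaccard measure, threshold, and names are illustrative assumptions, not the authors' method):

```python
from itertools import combinations

def jaccard(a, b):
    """Similarity of two clickstreams viewed as sets of event types."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_users(clickstreams, threshold=0.5):
    """Group users whose clickstream similarity exceeds `threshold`.

    `clickstreams` maps user -> iterable of click-event types.
    Connected components of the thresholded similarity graph stand
    in for the paper's iterative graph partitioning.
    """
    adj = {u: set() for u in clickstreams}
    for u, v in combinations(clickstreams, 2):
        if jaccard(clickstreams[u], clickstreams[v]) >= threshold:
            adj[u].add(v)
            adj[v].add(u)
    seen, clusters = set(), []
    for u in adj:                      # depth-first component sweep
        if u in seen:
            continue
        stack, comp = [u], set()
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(adj[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters
```

In this toy form, two users sharing most event types (say, active posters) end up in one cluster while a lurker who only browses forms a singleton, loosely mirroring the paper's "dormant users" finding.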

Cheng, Wei, Zhang, Kai, Chen, Haifeng, Jiang, Guofei, Chen, Zhengzhang, Wang, Wei.  2016.  Ranking Causal Anomalies via Temporal and Dynamical Analysis on Vanishing Correlations. Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :805–814.

The modern world has witnessed a dramatic increase in our ability to collect, transmit, and distribute real-time monitoring and surveillance data from large-scale information systems and cyber-physical systems. Detecting system anomalies thus attracts a significant amount of interest in many fields such as security, fault management, and industrial optimization. Recently, the invariant network has been shown to be a powerful way of characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariant network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detecting causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible causal components, which has several limitations: 1) fault propagation in the network is ignored; 2) the root causal anomalies may not always be the nodes with a high percentage of vanishing correlations; 3) temporal patterns of vanishing correlations are not exploited for robust detection. To address these limitations, in this paper we propose a network-diffusion-based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network, and can perform joint inference on both the structural and the time-evolving broken-invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations, and can compensate for unstructured measurement noise in the system. Extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets demonstrate the effectiveness of our approach.
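The abstract's key move is to rank components by diffusing vanishing-correlation evidence over the invariant network rather than by raw vanishing percentages. A highly simplified, restart-style diffusion sketch of that idea (the function, parameters, and update rule are illustrative assumptions, not the paper's inference procedure):

```python
def rank_causal_anomalies(adj, broken, alpha=0.5, iters=50):
    """Rank nodes by diffusing broken-invariance evidence.

    `adj` maps node -> list of neighbors in the invariant network;
    `broken[n]` is the fraction of n's invariant edges that vanished.
    Each round blends a node's own evidence with evidence flowing in
    from neighbors, so a node whose whole neighborhood is broken is
    credited even if its own broken fraction is modest.
    """
    score = dict(broken)
    for _ in range(iters):
        new = {}
        for n, nbrs in adj.items():
            spread = sum(score[m] / max(len(adj[m]), 1) for m in nbrs)
            new[n] = (1 - alpha) * broken[n] + alpha * spread
        score = new
    return sorted(score, key=score.get, reverse=True)
```

On a star-shaped toy network where the hub's correlations to every leaf have vanished, the diffusion concentrates evidence on the hub, matching the intuition that it, not the leaves, is the root causal anomaly.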