Biblio

Found 560 results

Filters: First Letter Of Last Name is I
2017-03-29
Ghosh, Uttam, Dong, Xinshu, Tan, Rui, Kalbarczyk, Zbigniew, Yau, David K.Y., Iyer, Ravishankar K..  2016.  A Simulation Study on Smart Grid Resilience Under Software-Defined Networking Controller Failures. Proceedings of the 2Nd ACM International Workshop on Cyber-Physical System Security. :52–58.

Riding on the success of SDN for enterprise and data center networks, researchers have recently shown much interest in applying SDN to critical infrastructures. A key concern, however, is the vulnerability of the SDN controller as a single point of failure. In this paper, we develop a cyber-physical simulation platform that interconnects Mininet (an SDN emulator), hardware SDN switches, and PowerWorld (a high-fidelity, industry-strength power grid simulator). We report initial experiments on how a number of representative controller faults may impact the delay of smart grid communications. We further evaluate how this delay may affect the performance of the underlying physical system, namely automatic generation control (AGC), a fundamental closed-loop control that regulates the grid frequency to a critical nominal value. Our results show that when the fault-induced delay reaches seconds (e.g., more than four seconds in some of our experiments), degradation of the AGC becomes evident. In particular, the AGC is most vulnerable when it is in a transient following, say, step changes in loading, because the significant state fluctuations exacerbate the effects of using a stale system state in the control.
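
As a hedged aside, the abstract's core claim (stale measurements degrade AGC most during transients) can be illustrated with a toy, self-contained frequency-control loop; this is not the paper's Mininet/PowerWorld platform, and all constants below are invented for illustration.

```python
# Toy one-bus frequency model under integral AGC, where the controller
# sees the frequency deviation `delay_steps` samples late.
def simulate(delay_steps, steps=600, dt=0.1):
    H, D, Ki = 5.0, 1.0, 0.5          # inertia, damping, integral gain (made up)
    f_dev, p_ctrl = 0.0, 0.0          # frequency deviation (Hz), control power
    history = [0.0] * (delay_steps + 1)
    load_step = 0.1                   # step increase in loading at t = 0
    max_dev = 0.0
    for _ in range(steps):
        stale = history[0]            # controller acts on a delayed measurement
        p_ctrl -= Ki * stale * dt     # integral AGC action
        dfdt = (p_ctrl - load_step - D * f_dev) / (2 * H)
        f_dev += dfdt * dt
        history = history[1:] + [f_dev]
        max_dev = max(max_dev, abs(f_dev))
    return max_dev

for d in (0, 10, 40):                 # ~0 s, 1 s, 4 s of measurement delay
    print(f"delay {d * 0.1:.0f}s -> max |df| = {simulate(d):.3f} Hz")
```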

Harshaw, Christopher R., Bridges, Robert A., Iannacone, Michael D., Reed, Joel W., Goodall, John R..  2016.  GraphPrints: Towards a Graph Analytic Method for Network Anomaly Detection. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :15:1–15:4.

This paper introduces GraphPrints, a novel graph-analytic approach for detecting anomalies in network flow data. Building on foundational network-mining techniques, our method represents time slices of traffic as a graph, then counts graphlets, i.e., small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified and, furthermore, individual IPs exhibiting abnormal behavior are singled out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84% at the time-interval level and 0.05% at the IP level, with 100% true positive rates at both.
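
A minimal sketch of the GraphPrints pipeline as the abstract describes it (our code, not the authors'): slice flows into time windows, build a graph per window, count cheap graphlet statistics, and flag windows whose counts deviate from the rest. The flow tuples and the 1.5-sigma rule are illustrative assumptions.

```python
import networkx as nx
import numpy as np

def graphlet_counts(edges):
    """Count two cheap graphlet statistics: wedges (2-paths) and triangles."""
    g = nx.Graph(edges)
    wedges = sum(d * (d - 1) // 2 for _, d in g.degree())
    triangles = sum(nx.triangles(g).values()) // 3
    return np.array([wedges, triangles], dtype=float)

def anomalous_windows(windows, k=1.5):
    """Flag windows whose graphlet-count vector is > k std devs from the mean."""
    counts = np.array([graphlet_counts(w) for w in windows])
    mu, sigma = counts.mean(axis=0), counts.std(axis=0) + 1e-9
    z = np.abs((counts - mu) / sigma)
    return [i for i, row in enumerate(z) if row.max() > k]

# Example: each window is a list of (src_ip, dst_ip) edges from flow logs.
w_normal = [("10.0.0.1", "10.0.0.2"), ("10.0.0.2", "10.0.0.3")]
w_anom = [(f"10.0.0.{i}", f"10.0.0.{j}")
          for i in range(1, 8) for j in range(i + 1, 8)]  # dense scan-like burst
print(anomalous_windows([w_normal, w_normal, w_normal, w_anom]))  # -> [3]
```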

2017-03-20
Im, Jong-Hyuk, Choi, JinChun, Nyang, DaeHun, Lee, Mun-Kyu.  2016.  Privacy-Preserving Palm Print Authentication Using Homomorphic Encryption. :878–881.

Biometric verification systems have a security issue in the storage of biometric data: a user's biometric features cannot be replaced even when a system is compromised. To address this issue, it may be safer to store the biometric data on a reliable remote server instead of in a local device. However, this approach may raise a privacy issue. In this paper, we propose a biometric verification system where the biometric data are stored on a remote server in encrypted form and the similarity of the user input to the registered biometric data is computed in the encrypted domain using homomorphic encryption. We evaluated the performance of the proposed system through an implementation on an Android-based smartphone and an i7-based server.
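
The abstract does not spell out the scheme, but a toy version of encrypted-domain matching can be sketched with an additively homomorphic cryptosystem, here the third-party python-paillier library (`pip install phe`); the Hamming-distance framing and all vectors below are our assumptions, not the authors' protocol.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

template = [1, 0, 1, 1, 0, 0, 1, 0]                       # enrolled binary features
enc_template = [public_key.encrypt(b) for b in template]  # stored only encrypted

def encrypted_hamming(enc_t, probe):
    """Enc(sum_i t_i + p_i - 2*t_i*p_i), computed without decrypting the template."""
    dist = public_key.encrypt(0)
    for enc_b, p in zip(enc_t, probe):
        # additions and scalar multiplications only: Paillier-friendly
        dist = dist + enc_b + p - 2 * p * enc_b
    return dist

probe = [1, 0, 1, 0, 0, 0, 1, 0]            # fresh sample, differs in one bit
enc_dist = encrypted_hamming(enc_template, probe)
print(private_key.decrypt(enc_dist))        # -> 1; only the key holder can open it
```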

2017-03-08
Idrus, S. Z. Syed, Cherrier, E., Rosenberger, C., Mondal, S., Bours, P..  2015.  Keystroke dynamics performance enhancement with soft biometrics. IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015). :1–7.

It is accepted that the way a person types on a keyboard contains timing patterns that can be used to classify him/her; this is known as keystroke dynamics. Keystroke dynamics is a behavioural biometric modality whose performance, however, is worse than that of morphological modalities such as fingerprint, iris or face recognition. To cope with this, we propose to combine keystroke dynamics with soft biometrics. Soft biometrics refers to biometric characteristics that are not sufficient on their own to authenticate a user (e.g. height, gender, skin/eye/hair colour). For keystroke dynamics, three soft categories are considered: gender, age and handedness. We present different methods to combine the results of a classical keystroke dynamics system with such soft criteria. By applying simple sum and multiply rules, our experiments suggest that the combination approach performs better than the classification approach, with a best result of 5.41% equal error rate. The efficiency of our approaches is illustrated on a public database.
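
A hedged sketch of the fusion step named in the abstract, the simple sum and multiply rules over match scores; the score ranges, weights and example values are illustrative assumptions, not the authors' data.

```python
def fuse_sum(keystroke_score, soft_scores, weights=None):
    """Weighted sum rule over normalized [0, 1] match scores."""
    scores = [keystroke_score] + list(soft_scores)
    weights = weights or [1.0] * len(scores)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def fuse_multiply(keystroke_score, soft_scores):
    """Multiply rule: product of normalized match scores."""
    result = keystroke_score
    for s in soft_scores:
        result *= s
    return result

# keystroke match score plus soft-criteria scores (gender, age, handedness)
print(fuse_sum(0.82, [0.9, 0.7, 1.0]))       # -> ~0.855
print(fuse_multiply(0.82, [0.9, 0.7, 1.0]))  # -> ~0.5166
```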

Bottazzi, G., Italiano, G. F..  2015.  Fast Mining of Large-Scale Logs for Botnet Detection: A Field Study. 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing. :1989–1996.

Botnets are considered one of the most dangerous species of network-based attack today because they involve the use of very large coordinated groups of hosts simultaneously. Modern botnet detection methods rest on the behavioral analysis of computer networks, in order to intercept traffic generated by malware for which signatures do not yet exist. Defining a pattern of features to be placed at the basis of behavioral analysis puts the emphasis on the quantity and quality of information to be captured and used to mark data streams as normal or abnormal. The problem is even more evident if we consider extensive computer networks or clouds. With the present paper we intend to show how heuristics applied to large-scale proxy logs, targeting a typical phase of the botnet life cycle, the search for C&C servers through AGDs (Algorithmically Generated Domains), may provide effective and extremely rapid results. The present work introduces some novel paradigms. The first is that some elements of the botnet supply chain could be completed without any interaction with the Internet, mostly in the presence of wide computer networks and/or clouds. The second is that behind a large number of workstations there are usually "human beings", and it is unlikely that their behavior will cause marked changes in the interaction with the Internet within a fairly narrow time frame. Finally, AGDs currently exhibit common lexical features that are detectable quickly and without using any black/white list.
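
The lexical-feature observation about AGDs lends itself to a compact sketch (ours, not the paper's heuristics): flag proxy-log domains whose leading label has unusually high character entropy, with no black/white list involved. The threshold and sample domains are invented.

```python
import math
from collections import Counter

def shannon_entropy(s):
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def flag_agd_candidates(domains, threshold=3.5):
    """Return domains whose leading label is long and has high entropy."""
    flagged = []
    for d in domains:
        label = d.split(".")[0]          # e.g. 'q7xk2m9zpw4r' from 'q7xk2m9zpw4r.net'
        if len(label) >= 8 and shannon_entropy(label) > threshold:
            flagged.append(d)
    return flagged

logs = ["www.example.com", "mail.google.com", "q7xk2m9zpw4r.net", "uj3h8s0dq1lf.biz"]
print(flag_agd_candidates(logs))  # -> ['q7xk2m9zpw4r.net', 'uj3h8s0dq1lf.biz']
```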

Boykov, Y., Isack, H., Olsson, C., Ayed, I. B..  2015.  Volumetric Bias in Segmentation and Reconstruction: Secrets and Solutions. 2015 IEEE International Conference on Computer Vision (ICCV). :1769–1777.

Many standard optimization methods for segmentation and reconstruction compute ML model estimates for appearance or geometry of segments, e.g. Zhu-Yuille [23], Torr [20], Chan-Vese [6], GrabCut [18], Delong et al. [8]. We observe that the standard likelihood term in these formulations corresponds to a generalized probabilistic K-means energy. In learning it is well known that this energy has a strong bias to clusters of equal size [11], which we express as a penalty for KL divergence from a uniform distribution of cardinalities. However, this volumetric bias has been mostly ignored in computer vision. We demonstrate significant artifacts in standard segmentation and reconstruction methods due to this bias. Moreover, we propose binary and multi-label optimization techniques that either (a) remove this bias or (b) replace it by a KL divergence term for any given target volume distribution. Our general ideas apply to continuous or discrete energy formulations in segmentation, stereo, and other reconstruction problems.
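
The abstract's central observation can be written compactly. Below is a hedged reconstruction in our own notation (not copied from the paper) of a generalized probabilistic K-means energy whose likelihood term carries a hidden volumetric penalty: the KL divergence between the segment-volume distribution and a uniform one.

```latex
% S_1..S_K: segments partitioning the domain Omega; theta_k: appearance model
% of segment k; v: vector of normalized segment volumes. The second term is
% the volumetric bias: it vanishes only when all segments have equal size.
E(S,\theta) \;=\;
\underbrace{\sum_{k=1}^{K}\sum_{p\in S_k} -\log \Pr(I_p \mid \theta_k)}_{\text{data fit}}
\;+\; |\Omega| \cdot \mathrm{KL}\big(v \,\|\, u\big),
\qquad v_k=\frac{|S_k|}{|\Omega|},\quad u_k=\frac{1}{K}.
```

Option (b) in the abstract then amounts to replacing the uniform target u with any desired volume distribution.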

2017-03-07
Onireti, Oluwakayode, Qadir, Junaid, Imran, Muhammad Ali, Sathiaseelan, Arjuna.  2016.  Will 5G See Its Blind Side? Evolving 5G for Universal Internet Access. Proceedings of the 2016 Workshop on Global Access to the Internet for All. :1–6.

The Internet has shown itself to be a catalyst for economic growth and social equity, but its potency is thwarted by the fact that the Internet is off limits for the vast majority of human beings. Mobile phones—the fastest growing technology in the world, now reaching around 80% of humanity—can enable universal Internet access if they can resolve coverage problems that have historically plagued previous cellular architectures (2G, 3G, and 4G). These conventional architectures have not been able to sustain universal service provisioning since they depend on having enough users per cell for their economic viability and thus are not well suited to rural areas (which are by definition sparsely populated). The new generation of mobile cellular technology (5G), currently in a formative phase and expected to be finalized around 2020, is aimed at orders-of-magnitude performance enhancement. 5G offers a clean slate to network designers and can be molded into an architecture also amenable to universal Internet provisioning. Keeping in mind the great social benefits of democratizing Internet access and connectivity, we believe that the time is ripe for emphasizing universal Internet provisioning as an important goal on the 5G research agenda. In this paper, we investigate the opportunities and challenges in utilizing 5G for global access to the Internet for all (GAIA). We also identify the major technical issues involved in a 5G-based GAIA solution and set up a future research agenda by defining open research problems.

Chu, Xu, Ilyas, Ihab F., Krishnan, Sanjay, Wang, Jiannan.  2016.  Data Cleaning: Overview and Emerging Challenges. Proceedings of the 2016 International Conference on Management of Data. :2201–2206.

Detecting and repairing dirty data is one of the perennial challenges in data analytics, and failure to do so can result in inaccurate analytics and unreliable decisions. Over the past few years, there has been a surge of interest from both industry and academia on data cleaning problems including new abstractions, interfaces, approaches for scalability, and statistical techniques. To better understand the new advances in the field, we will first present a taxonomy of the data cleaning literature in which we highlight the recent interest in techniques that use constraints, rules, or patterns to detect errors, which we call qualitative data cleaning. We will describe the state-of-the-art techniques and also highlight their limitations with a series of illustrative examples. While traditionally such approaches are distinct from quantitative approaches such as outlier detection, we also discuss recent work that casts such approaches into a statistical estimation framework including: using Machine Learning to improve the efficiency and accuracy of data cleaning and considering the effects of data cleaning on statistical analysis.
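
As a concrete instance of the qualitative cleaning the survey describes, the sketch below detects violations of a functional dependency (zip determines city) in a toy table; the table and the FD are invented examples, not from the paper.

```python
from collections import defaultdict

rows = [
    {"id": 1, "zip": "10001", "city": "New York"},
    {"id": 2, "zip": "10001", "city": "New York"},
    {"id": 3, "zip": "10001", "city": "Boston"},    # violates zip -> city
    {"id": 4, "zip": "60601", "city": "Chicago"},
]

def fd_violations(rows, lhs, rhs):
    """Group rows by the LHS attribute; groups with >1 RHS value violate the FD."""
    groups = defaultdict(set)
    for r in rows:
        groups[r[lhs]].add(r[rhs])
    return {k: v for k, v in groups.items() if len(v) > 1}

print(fd_violations(rows, "zip", "city"))  # -> {'10001': {'New York', 'Boston'}}
```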

Amin, R., Islam, S. K. H., Biswas, G. P., Khan, M. K..  2015.  An efficient remote mutual authentication scheme using smart mobile phone over insecure networks. 2015 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1–7.

To establish a secure connection between a mobile user and a remote server, this paper presents a session key agreement scheme through a remote mutual authentication protocol using mobile application software (MAS). We analyzed the security of our protocol informally, which confirms that the protocol is secure against all the relevant security attacks, including off-line identity-password guessing attacks, user-server impersonation attacks, and insider attacks. In addition, we simulated the proposed protocol with the widely accepted AVISPA tool, which confirms that the protocol is SAFE under the OFMC and CL-AtSe back-ends. Our protocol not only provides strong security against the relevant attacks, but also achieves proper mutual authentication, user anonymity, known-key secrecy and an efficient password change operation. A performance comparison is also carried out, which shows that the protocol is efficient in terms of computation and communication costs.

Isah, H., Neagu, D., Trundle, P..  2015.  Bipartite network model for inferring hidden ties in crime data. 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :994–1001.

Certain crimes are difficult for individuals to commit alone and are instead carefully organised by groups of associates and affiliates loosely connected to each other, with a single individual or a small group coordinating the overall actions. A common starting point in understanding the structural organisation of criminal groups is to identify the criminals and their associates. In many criminal datasets, however, there is no direct connection among the criminals. In this paper, we investigate ties and community structure in crime data in order to understand the operations of both traditional and cyber criminals, as well as to predict the existence of organised criminal networks. Our contributions are twofold: we propose a bipartite network model for inferring hidden ties between actors who initiated an illegal interaction and objects affected by the interaction, and we then validate the method in two case studies on pharmaceutical crime and underground forum data using standard network algorithms for structural and community analysis. The vertex-level metrics and community analysis results obtained indicate the significance of our work in understanding the operations and structure of organised criminal networks which were not immediately obvious in the data. Identifying these groups and mapping their relationships to one another is essential to designing more effective disruption strategies in the future.
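
The modelling step can be made concrete with networkx (a sketch on our own toy data, not the paper's case-study datasets): actors and affected objects form the two vertex sets, and a weighted projection onto the actor set surfaces hidden actor-actor ties through shared objects.

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
actors = ["seller_A", "seller_B", "seller_C"]
objects_ = ["counterfeit_drug_X", "forum_thread_42"]
B.add_nodes_from(actors, bipartite=0)
B.add_nodes_from(objects_, bipartite=1)
# edge: the actor initiated an illegal interaction affecting the object
B.add_edges_from([("seller_A", "counterfeit_drug_X"),
                  ("seller_B", "counterfeit_drug_X"),
                  ("seller_B", "forum_thread_42"),
                  ("seller_C", "forum_thread_42")])

# weighted projection: edge weight = number of shared objects (tie strength)
ties = bipartite.weighted_projected_graph(B, actors)
print(list(ties.edges(data=True)))
# -> [('seller_A', 'seller_B', {'weight': 1}), ('seller_B', 'seller_C', {'weight': 1})]
```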

Farid, Mina, Roatis, Alexandra, Ilyas, Ihab F., Hoffmann, Hella-Franziska, Chu, Xu.  2016.  CLAMS: Bringing Quality to Data Lakes. Proceedings of the 2016 International Conference on Management of Data. :2089–2092.

With the increasing incentive of enterprises to ingest as much data as they can into what is commonly referred to as "data lakes", and with the recent development of multiple technologies to support this "load-first" paradigm, the new environment presents serious data management challenges. Among them, assessing data quality and cleaning large volumes of heterogeneous data sources become essential tasks in unveiling the value of big data. The growing use of unstructured and semi-structured data in large volumes makes current data cleaning tools (primarily designed for relational data) not directly adoptable. We present CLAMS, a system to discover and enforce expressive integrity constraints from large amounts of lake data with very limited schema information (e.g., represented as RDF triples). This demonstration shows how CLAMS is able to discover the constraints and the schemas they are defined on simultaneously. CLAMS also introduces a scale-out solution to efficiently detect errors in the raw data. CLAMS interacts with human experts both to validate the discovered constraints and to suggest data repairs. CLAMS has been deployed in a real large-scale enterprise data lake and was evaluated on a real data set of 1.2 billion triples. It has been able to spot multiple obscure data inconsistencies and errors early in the data processing stack, providing huge value to the enterprise.

Agrawal, Divy, Ba, Lamine, Berti-Equille, Laure, Chawla, Sanjay, Elmagarmid, Ahmed, Hammady, Hossam, Idris, Yasser, Kaoudi, Zoi, Khayyat, Zuhair, Kruse, Sebastian et al..  2016.  Rheem: Enabling Multi-Platform Task Execution. Proceedings of the 2016 International Conference on Management of Data. :2069–2072.

Many emerging applications, from domains such as healthcare and oil & gas, require several data processing systems for complex analytics. This demo paper showcases Rheem, a framework that provides multi-platform task execution for such applications. It features a three-layer data processing abstraction and a new query optimization approach for multi-platform settings. We will demonstrate the strengths of Rheem by using real-world scenarios from three different applications, namely, machine learning, data cleaning, and data fusion.

Igarashi, Takeo, Shono, Naoyuki, Kin, Taichi, Saito, Toki.  2016.  Interactive Volume Segmentation with Threshold Field Painting. Proceedings of the 29th Annual Symposium on User Interface Software and Technology. :403–413.

An interactive method for segmentation and isosurface extraction of medical volume data is proposed. In conventional methods, users decompose a volume into multiple regions iteratively, segment each region using a threshold, and then manually clean the segmentation result by removing clutter in each region. However, this is tedious and requires many mouse operations from different camera views. We propose an alternative approach whereby the user simply applies painting operations to the volume using tools commonly seen in painting systems, such as flood fill and brushes. This significantly reduces the number of mouse and camera control operations. Our technical contribution is in the introduction of the threshold field, which assigns spatially-varying threshold values to individual voxels. This generalizes discrete decomposition of a volume into regions and segmentation using a constant threshold in each region, thereby offering a much more flexible and efficient workflow. This paper describes the details of the user interaction and its implementation. Furthermore, the results of a user study are discussed. The results indicate that the proposed method can be a few times faster than a conventional method.
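
A toy numpy rendering of the paper's central idea, assuming a spherical brush model that the abstract does not specify: keep a per-voxel threshold field, let painting operations modify it locally, and read the segmentation off as volume > field.

```python
import numpy as np

volume = np.random.default_rng(0).random((64, 64, 64))   # stand-in for CT/MRI data
threshold_field = np.full(volume.shape, 0.8)             # initially strict everywhere

def paint_sphere(field, center, radius, new_threshold):
    """Brush stroke: set the threshold inside a spherical region."""
    z, y, x = np.ogrid[:field.shape[0], :field.shape[1], :field.shape[2]]
    mask = ((z - center[0])**2 + (y - center[1])**2 + (x - center[2])**2) <= radius**2
    field[mask] = new_threshold

# the user paints a region of interest with a looser threshold
paint_sphere(threshold_field, center=(32, 32, 32), radius=10, new_threshold=0.4)

segmentation = volume > threshold_field   # spatially-varying isosurface selection
print(segmentation.sum(), "voxels selected")
```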

Yashiro, Hisashi, Terai, Masaaki, Yoshida, Ryuji, Iga, Shin-ichi, Minami, Kazuo, Tomita, Hirofumi.  2016.  Performance Analysis and Optimization of Nonhydrostatic ICosahedral Atmospheric Model (NICAM) on the K Computer and TSUBAME2.5. Proceedings of the Platform for Advanced Scientific Computing Conference. :3:1–3:8.

We summarize the optimization and performance evaluation of the Nonhydrostatic ICosahedral Atmospheric Model (NICAM) on two different types of supercomputers: the K computer and TSUBAME2.5. First, we evaluated and improved several kernels extracted from the model code on the K computer. We did not significantly change the loop and data ordering, relying instead on features of the K computer such as the hardware-aided thread barrier mechanism and the relatively high memory bandwidth, i.e., a 0.5 Byte/FLOP ratio. Loop optimizations and code cleaning for a reduction in memory transfer contributed to a speed-up of the model execution time. The sustained performance of the main loop of NICAM reached 0.87 PFLOPS with 81,920 nodes on the K computer. For GPU-based calculations, we applied OpenACC to the dynamical core of NICAM. The performance and scalability were evaluated using the TSUBAME2.5 supercomputer. We achieved good performance results, which showed efficient use of the memory throughput of the GPU as well as good weak scalability. A dry dynamical core experiment was carried out using 2560 GPUs, achieving 60 TFLOPS of sustained performance.

Inoue, Jun, Kiselyov, Oleg, Kameyama, Yukiyoshi.  2016.  Staging Beyond Terms: Prospects and Challenges. Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation. :103–108.

Staging is a program generation paradigm with a clean, well-investigated semantics which statically ensures that the generated code is always well-typed and well-scoped. Staging is often used for specializing programs to the known properties or parts of data to improve efficiency, but so far it has been limited to generating terms. This short paper describes our ongoing work on extending staging, with its strong safety guarantees, to generation of non-terms, focusing on ML-style modules. The purpose is to map out the promises and challenges, then to pose a question to solicit the community's expertise in evaluating how essential our extensions are for the purpose of applying staging beyond the realm of terms. We demonstrate our extensions' use in specializing functor applications to eliminate their (currently large) overhead in OCaml. We explain the challenges that those extensions bring in and identify a promising line of attack. Unexpectedly, however, it turns out that we can avoid module generation altogether by representing modules, possibly containing abstract types, as polymorphic records. With the help of first-class modules, module specialization reduces to ordinary term specialization, which can be done with conventional staging. The extent to which this hack generalizes is unclear. Thus we have a question to the community: is there a compelling use case for module generation? With these insights and questions, we offer a starting point for a long-term program in the next stage of staging research.

Imajo, Tomoaki, Sumiya, Kazutoshi, Ushiama, Taketoshi.  2016.  An SNS Based on Implicit Beneficial Social Relations in A Regional Community. Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. :47:1–47:7.

In this paper, we propose a novel Social Networking Service (SNS) for a regional community. The purpose of the SNS is to support and encourage people by making them aware of beneficial social relations in the real world. Conventional SNSs can hardly deal with beneficial social relations, because such relations are implicit and dynamic. The proposed SNS is designed to provide positive information for two types of people: those who do community volunteer work, such as cleaning, as contributors, and those who receive benefit from it as beneficiaries. This paper introduces the basic scheme of the SNS for beneficial social relations and evaluates the effectiveness of our scheme based on the results of experimental studies. The results show that users of our SNS tend to consider information about volunteer work valuable if it has been performed in their living area, suggesting that our proposed SNS would work well in a regional community.

Huang, Muhuan, Wu, Di, Yu, Cody Hao, Fang, Zhenman, Interlandi, Matteo, Condie, Tyson, Cong, Jason.  2016.  Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale. Proceedings of the Seventh ACM Symposium on Cloud Computing. :456–469.

With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft's FPGA deployment in its Bing search engine and Intel's $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster.

Subramanya, Supreeth, Mustafa, Zain, Irwin, David, Shenoy, Prashant.  2016.  Beyond Energy-Efficiency: Evaluating Green Datacenter Applications for Energy-Agility. Proceedings of the 7th ACM/SPEC on International Conference on Performance Engineering. :185–196.

Computing researchers have long focused on improving energy-efficiency under the implicit assumption that all energy is created equal. Yet, this assumption is actually incorrect: energy's cost and carbon footprint vary substantially over time. As a result, consuming energy inefficiently when it is cheap and clean may sometimes be preferable to consuming it efficiently when it is expensive and dirty. Green datacenters adapt their energy usage to optimize for such variations, as reflected in changing electricity prices or renewable energy output. Thus, we introduce energy-agility as a new metric to evaluate green datacenter applications. To illustrate fundamental tradeoffs in energy-agile design, we develop GreenSort, a distributed sorting system optimized for energy-agility. GreenSort is representative of the long-running, massively-parallel, data-intensive tasks that are common in datacenters and amenable to delays from power variations. Our results demonstrate the importance of energy-agile design when considering the benefits of using variable power. For example, we show that GreenSort requires 31% more time and energy to complete when power varies based on real-time electricity prices versus when it is constant. Thus, in this case, real-time prices should be at least 31% lower than fixed prices to warrant using them.

Muthusamy, Vinod, Slominski, Aleksander, Ishakian, Vatche, Khalaf, Rania, Reason, Johnathan, Rozsnyai, Szabolcs.  2016.  Lessons Learned Using a Process Mining Approach to Analyze Events from Distributed Applications. Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems. :199–204.

The execution of distributed applications is captured by the events generated by the individual components. However, understanding the behavior of these applications from their event logs can be a complex and error-prone task, compounded by the fact that applications continuously change, rendering any knowledge obsolete. We describe our experiences applying a suite of process-aware analytic tools to a number of real-world scenarios, and distill our lessons learned. For example, we have seen that these tools are used iteratively, where insights gained at one stage inform the configuration decisions made at an earlier stage. As well, we have observed that data onboarding, where the raw data is cleaned and transformed, is the most critical stage in the pipeline and requires the most manual effort and domain knowledge. In particular, missing, inconsistent, and low-resolution event time stamps are recurring problems that require better solutions. The experiences and insights presented here will assist practitioners applying process analytic tools to real scenarios, and reveal to researchers some of the more pressing challenges in this space.

Iyengar, Varsha, Coleman, Grisha, Tinapple, David, Turaga, Pavan.  2016.  Motion, Captured: An Open Repository for Comparative Movement Studies. Proceedings of the 3rd International Symposium on Movement and Computing. :17:1–17:6.

This paper begins to describe a new kind of database, one that explores a diverse range of movement in the field of dance through capture of different bodies and different backgrounds - or what we are terming movement vernaculars. We re-purpose Ivan Illich's concept of 'vernacular work' [11] here to refer to those everyday forms of dance and organized movement that are informal, refractory (resistant to formal analysis), yet are socially reproduced and derived from a commons. The project investigates the notion of vernaculars in movement that is intentional and aesthetic through the development of a computational approach that highlights both similarities and differences, thereby revealing the specificities of each individual mover. This paper presents an example of how this movement database is used as a research tool, and how the fruits of that research can be added back to the database, thus adding a novel layer of annotation and further enriching the collection. Future researchers can then benefit from this layer, further refining and building upon these techniques. The creation of a robust, open source, movement lexicon repository will allow for observation, speculation, and contextualization - along with the provision of clean and complex data sets for new forms of creative expression.

Mohan, Naveen, Torngren, Martin, Izosimov, Viacheslav, Kaznov, Viktor, Roos, Per, Svahn, Johan, Gustavsson, Joakim, Nesic, Damir.  2016.  Challenges in Architecting Fully Automated Driving; with an Emphasis on Heavy Commercial Vehicles. 2016 Workshop on Automotive Systems/Software Architectures (WASA). :2–9.

Fully automated vehicles will require new functionalities for perception, navigation and decision making: an Autonomous Driving Intelligence (ADI). We consider architectural cases for such functionalities and investigate how they integrate with legacy platforms. The cases range from a robot replacing the driver, with entire reuse of existing vehicle platforms, to a clean-slate design. Focusing on Heavy Commercial Vehicles (HCVs), we assess these cases from the perspectives of business, safety, dependability, verification, and realization. The original contributions of this paper are the classification of the architectural cases themselves and the analysis that follows. The analysis reveals that although full reuse of vehicle platforms is appealing, it will require explicitly dealing with the accidental complexity of the legacy platforms, including adding corresponding diagnostics and error handling to the ADI. The current fail-safe design of the platform will also tend to limit availability. Allowing changes to the platforms will enable more optimized designs and fail-operational behaviour, but will require higher initial development cost and specific emphasis on partitioning and control to limit the influence of safety requirements. For all cases, the design and verification of the ADI will pose a grand challenge and relate to the evolution of the regulatory framework including safety standards.

2017-02-27
Ismail, Z., Leneutre, J., Bateman, D., Chen, L..  2015.  A Game-Theoretical Model for Security Risk Management of Interdependent ICT and Electrical Infrastructures. 2015 IEEE 16th International Symposium on High Assurance Systems Engineering. :101–109.

The communication infrastructure is a key element for management and control of the power system in the smart grid. The communication infrastructure, which can include equipment using off-the-shelf vulnerable operating systems, has the potential to increase the attack surface of the power system. The interdependency between the communication and the power system renders the management of the overall security risk a challenging task. In this paper, we address this issue by presenting a mathematical model for identifying and hardening the most critical communication equipment used in the power system. Using non-cooperative game theory, we model interactions between an attacker and a defender. We derive the minimum defense resources required and the optimal strategy of the defender that minimizes the risk on the power system. Finally, we evaluate the correctness and the efficiency of our model via a case study.
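
For intuition only, the sketch below reduces the defender's problem to a pure-strategy minimax over an invented risk matrix; the paper's non-cooperative game over interdependent ICT and power infrastructures is substantially richer than this toy.

```python
# Rows: defender hardening choices. Columns: attacker targets.
# Each entry is the resulting risk to the power system (all numbers invented).
risk = [
    #  attack RTU  attack SCADA  attack router
    [      4,           9,            6     ],  # harden RTU
    [      7,           3,            6     ],  # harden SCADA
    [      8,           8,            2     ],  # harden router
]
defenses = ["harden RTU", "harden SCADA", "harden router"]

# pure-strategy minimax: pick the defense minimizing the worst-case risk
worst_case = [max(row) for row in risk]
best = min(range(len(risk)), key=lambda i: worst_case[i])
print(defenses[best], "-> worst-case risk", worst_case[best])
# -> harden SCADA -> worst-case risk 7
```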

2017-02-23
Mukherjee, I., Ganguly, R..  2015.  Privacy preserving of two sixteen-segmented image using visual cryptography. 2015 IEEE International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN). :417–422.

With the advancement of technology, the world has not only become a better place to live in but has also lost the privacy and security of shared data. Information in any form is never safe from the hands of unauthorized individuals. In this paper we propose an approach by which we can preserve data using visual cryptography. Text displayed on two sixteen-segment displays is broken into two shares that do not reveal any information about the original images. By this process we have obtained satisfactory results in statistical and structural tests.
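
The abstract does not detail the share construction; the sketch below uses the simplest XOR-based (2,2) secret splitting on a binary image as a stand-in (the paper's scheme for sixteen-segment text may differ). Each share alone is uniformly random, and stacking (XOR) recovers the secret.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_shares(secret_bits):
    share1 = rng.integers(0, 2, size=secret_bits.shape, dtype=np.uint8)
    share2 = share1 ^ secret_bits          # share2 alone is also uniform random
    return share1, share2

# a tiny binary pattern standing in for rendered sixteen-segment text
secret = np.array([[0, 1, 1, 1, 1, 1, 1, 0],
                   [0, 1, 0, 0, 0, 0, 1, 0],
                   [0, 1, 0, 0, 0, 0, 1, 0],
                   [0, 1, 1, 1, 1, 1, 1, 0]], dtype=np.uint8)

s1, s2 = make_shares(secret)
assert np.array_equal(s1 ^ s2, secret)     # stacking the shares recovers the secret
print(s1 ^ s2)
```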

2017-02-21
Ilhan, I., Gurbuz, A. C., Arikan, O..  2015.  Sparsity based robust Stretch Processing. 2015 IEEE International Conference on Digital Signal Processing (DSP). :95–99.

Stretch Processing (SP) is a radar signal processing technique that provides high range resolution by processing large-bandwidth signals with lower-rate Analog-to-Digital Converters (ADCs). The range resolution of the large-bandwidth signal is obtained by looking into a limited range window with low-rate ADC samples. The target space in the observed range window is sparse, and Compressive Sensing (CS) is an important tool to further decrease the number of measurements and sparsely reconstruct the target space for sparse scenes with a known basis, which is the Fourier basis in the general application of SP. Although classical CS techniques might be directly applied to SP, reconstruction performance degrades due to off-grid targets. In this paper, the applicability of the compressive sensing framework and its sparse signal recovery techniques to stretch processing is studied considering off-grid cases. For sparsity-based robust SP, the Perturbed Parameter Orthogonal Matching Pursuit (PPOMP) algorithm is proposed. PPOMP is an iterative technique that estimates off-grid target parameters through gradient descent. To compute the error between actual and reconstructed parameters, the Earth Mover's Distance (EMD) is used. The performance of the proposed algorithm is compared with classical CS and SP techniques.
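
For context, a compact implementation of the on-grid recovery backbone (plain Orthogonal Matching Pursuit over a Fourier dictionary) is sketched below; PPOMP's gradient-descent perturbation of off-grid parameters and the EMD evaluation are not reproduced, and the grid sizes are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Greedy OMP: pick the best-correlated atom, re-fit by least squares, repeat."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

n, m = 64, 128                              # measurements, range-grid points
A = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(m)) / m) / np.sqrt(n)
x_true = np.zeros(m, dtype=complex)
x_true[[10, 57]] = [1.0, 0.6]               # two on-grid targets in the range window
y = A @ x_true

x_hat = omp(A, y, sparsity=2)
print(np.nonzero(np.abs(x_hat) > 1e-6)[0])  # -> [10 57]
```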

Aubry, E., Silverston, T., Chrisment, I..  2015.  SRSC: SDN-based routing scheme for CCN. Proceedings of the 2015 1st IEEE Conference on Network Softwarization (NetSoft). :1–5.

Content delivery such as P2P or video streaming generates the main part of Internet traffic, and Content Centric Networking (CCN) appears to be an appropriate architecture to satisfy user needs. However, the lack of a scalable routing scheme is one of the main obstacles that slows down large-scale deployment of CCN at Internet scale. In this paper we propose to use the Software-Defined Networking (SDN) paradigm to decouple the data plane and control plane, and present SRSC, a new routing scheme for CCN. Our solution is a clean-slate approach using only CCN messages and the SDN paradigm. We implemented our solution in the NS-3 simulator and performed simulations of our proposal. SRSC shows better performance than the flooding scheme used by default in CCN: it reduces the number of messages while still improving CCN caching performance.