Biblio

Found 4288 results

Filters: Keyword is security
2017-12-28
Sandberg, H., Teixeira, A. M. H..  2016.  From control system security indices to attack identifiability. 2016 Science of Security for Cyber-Physical Systems Workshop (SOSCYPS). :1–6.

In this paper, we investigate detectability and identifiability of attacks on linear dynamical systems that are subjected to external disturbances. We generalize a concept for a security index, which was previously introduced for static systems. The index exactly quantifies the resources necessary for targeted attacks to be undetectable and unidentifiable in the presence of disturbances. This information is useful for both risk assessment and for the design of anomaly detectors. Finally, we show how techniques from the fault detection literature can be used to decouple disturbances and to identify attacks, under certain sparsity constraints.
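
For context, the static security index that this paper generalizes is commonly posed as a sparsest-stealthy-attack problem; a sketch in assumed notation (measurements y = Cx + a with additive attack a), not taken verbatim from the paper:

```latex
% Sketch of a static security index (notation assumed, not the paper's):
% an additive attack a on measurements y = Cx + a is stealthy when it is
% consistent with some state change, i.e., a = Cc. The index of
% measurement k is the minimum number of measurements an attacker must
% corrupt while targeting k and remaining undetectable:
\alpha_k \;=\; \min_{c}\; \|Cc\|_0
\quad\text{subject to}\quad (Cc)_k \neq 0 .
```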

Amin, S..  2016.  Security games on infrastructure networks. 2016 Science of Security for Cyber-Physical Systems Workshop (SOSCYPS). :1–4.

The theory of robust control models the controller-disturbance interaction as a game in which the disturbance is nonstrategic. To increase the robustness of infrastructure systems, a deliberately malicious (strategic) attacker should also be considered. This has become especially important since many IT systems supporting critical functionalities are vulnerable to exploits by attackers. While the usefulness of game theory methods for modeling cyber-security is well established in the literature, new game-theoretic models of cyber-physical security are needed for deriving useful insights on "optimal" attack plans and defender responses, both in terms of the allocation of resources and the operational strategies of these players. This whitepaper presents some progress and challenges in using game-theoretic models for the security of infrastructure networks. Main insights from the following models are presented: (i) Network security game on flow networks under strategic edge disruptions; (ii) Interdiction problem on distribution networks under node disruptions; (iii) Inspection game to monitor commercial non-technical losses (e.g., energy diversion); and (iv) Interdependent security game of networked control systems under communication failures. These models can be used to analyze the attacker-defender interactions in a class of cyber-physical security scenarios.
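
As a toy illustration of the kind of model surveyed here, the sketch below solves a small zero-sum attacker-defender game over network edges by linear programming; the payoff matrix and scenario are illustrative assumptions, not taken from the whitepaper.

```python
# Minimal sketch: a zero-sum security game solved by linear programming.
import numpy as np
from scipy.optimize import linprog

# payoff[i, j] = flow the defender keeps when she protects edge i
# and the attacker disrupts edge j (toy numbers).
payoff = np.array([[4.0, 1.0, 2.0],
                   [2.0, 3.0, 1.0],
                   [1.0, 2.0, 3.0]])
m, n = payoff.shape

# Variables: x_1..x_m (defender's mixed strategy) and v (game value).
# Maximize v  <=>  minimize -v, subject to v <= x' * payoff[:, j] per column.
c = np.concatenate([np.zeros(m), [-1.0]])
A_ub = np.hstack([-payoff.T, np.ones((n, 1))])   # v - sum_i x_i payoff[i,j] <= 0
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)  # strategy sums to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("defender strategy:", res.x[:m].round(3), "game value:", -res.fun)
```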

2018-01-10
Hamamreh, J. M., Yusuf, M., Baykas, T., Arslan, H..  2016.  Cross MAC/PHY layer security design using ARQ with MRC and adaptive modulation. 2016 IEEE Wireless Communications and Networking Conference. :1–7.

In this work, Automatic-Repeat-Request (ARQ) and Maximal Ratio Combining (MRC) have been jointly exploited to enhance the confidentiality of wireless services requested by a legitimate user (Bob) against an eavesdropper (Eve). The obtained security performance is analyzed using Packet Error Rate (PER), where the exact PER gap between Bob and Eve is determined. PER is proposed as a new practical security metric in cross-layer (Physical/MAC) security design since it reflects the influence of upper-layer mechanisms, and it can be linked with Quality of Service (QoS) requirements for various digital services such as voice and video. Exact PER formulas for both Eve and Bob in i.i.d. Rayleigh fading channels are derived. The simulation and theoretical results show that employing the ARQ mechanism and MRC on a signal-level basis before demodulation can significantly enhance data security for certain services at specific SNRs. However, to increase and ensure the security of a specific service at any SNR, adaptive modulation is proposed to be used along with the aforementioned scheme. Analytical and simulation studies demonstrate orders of magnitude difference in PER performance between eavesdroppers and intended receivers.
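
As a toy numerical illustration of the PER gap (not the paper's exact derivation), the sketch below compares a receiver that combines an ARQ retransmission via MRC against a single-copy eavesdropper, assuming BPSK over i.i.d. Rayleigh fading and independent bit errors:

```python
# Toy sketch: PER gap between Bob (ARQ copy + MRC) and Eve (single copy).
from math import sqrt, comb

def ber_rayleigh_mrc(snr_linear, branches):
    """Average BPSK bit error rate with L-branch MRC in Rayleigh fading
    (standard closed form, e.g., Proakis)."""
    mu = sqrt(snr_linear / (1.0 + snr_linear))
    return ((1 - mu) / 2) ** branches * sum(
        comb(branches - 1 + k, k) * ((1 + mu) / 2) ** k
        for k in range(branches))

def per(ber, packet_bits=1024):
    """Packet error rate assuming independent bit errors."""
    return 1.0 - (1.0 - ber) ** packet_bits

for snr_db in (5, 10, 15):
    snr = 10 ** (snr_db / 10)
    per_bob = per(ber_rayleigh_mrc(snr, branches=2))  # retransmission combined via MRC
    per_eve = per(ber_rayleigh_mrc(snr, branches=1))  # single received copy
    print(f"{snr_db:2d} dB  PER(Bob)={per_bob:.3e}  PER(Eve)={per_eve:.3e}")
```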

2016-07-01
Zielinska, Olga, Welk, Allaire, Mayhorn, Christopher B., Murphy-Hill, Emerson.  2016.  The Persuasive Phish: Examining the Social Psychological Principles Hidden in Phishing Emails. Proceedings of the Symposium and Bootcamp on the Science of Security. :126–126.

Phishing is a social engineering tactic used to trick people into revealing personal information [Zielinska, Tembe, Hong, Ge, Murphy-Hill, & Mayhorn 2014]. As phishing emails continue to infiltrate users' mailboxes, what social engineering techniques are the phishers using to successfully persuade victims into releasing sensitive information?

Cialdini's [2007] six principles of persuasion (authority, social proof, liking/similarity, commitment/consistency, scarcity, and reciprocation) have been linked to elements of phishing emails [Akbar 2014; Ferreira & Lenzini 2015]; however, the findings have been conflicting. Authority and scarcity were found to be the most common persuasion principles in 207 emails obtained from a Netherlands database [Akbar 2014], while liking/similarity was the most common principle in 52 personal emails available in Luxembourg and England [Ferreira et al. 2015]. The purpose of this study was to examine the persuasion principles present in emails available in the United States over a period of five years.

Two reviewers assessed eight hundred eighty-seven phishing emails from Arizona State University, Brown University, and Cornell University for Cialdini's six principles of persuasion. Each email was evaluated using a questionnaire adapted from the Ferreira et al. [2015] study. There was an average agreement of 87% per item between the two raters.

Spearman's Rho correlations were used to compare email characteristics over time. During the five-year period under consideration (2010–2015), the persuasion principles of commitment/consistency and scarcity increased over time, while the principles of reciprocation and social proof decreased over time. Authority and liking/similarity revealed mixed results, with certain characteristics increasing and others decreasing.

The commitment/consistency principle could be seen in the increase of emails referring to elements outside the email to look more reliable, such as Google Docs or Adobe Reader (rs(850) = .12, p = .001), while the scarcity principle could be seen in urgent elements that could encourage users to act quickly and may have had success in eliciting a response from users (rs(850) = .09, p = .01). Reciprocation elements, such as a requested reply, decreased over time (rs(850) = -.12, p = .001). Additionally, the social proof principle, present in emails referring to actions performed by other users, also decreased (rs(850) = -.10, p = .01).

Two persuasion principles exhibited both an increase and a decrease in their presence in emails over time: authority and liking/similarity. These principles could increase phishing success rates if used appropriately, but could also raise suspicions in users and decrease compliance if used incorrectly. Specifically, the source of the email, which corresponds to the authority principle, displayed an increase over time in educational institutions (rs(850) = .21, p < .001), but a decrease in financial institutions (rs(850) = -.18, p < .001). Similarly, the liking/similarity principle revealed an increase over time of logos present in emails (rs(850) = .18, p < .001) and a decrease in service details, such as payment information (rs(850) = -.16, p < .001).
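
The trend statistics quoted above are Spearman rank correlations of email features against time; a minimal sketch of such a test on hypothetical data:

```python
# Minimal sketch: Spearman rank correlation between year and a binary
# email feature (hypothetical data, small n for illustration).
from scipy.stats import spearmanr

years   = [2010, 2010, 2011, 2012, 2013, 2013, 2014, 2014, 2015, 2015]
urgency = [0,    0,    0,    1,    0,    1,    1,    1,    1,    1]  # scarcity cue present?

rho, p = spearmanr(years, urgency)
print(f"rs = {rho:.2f}, p = {p:.3f}")  # positive rs => the cue grows more common over time
```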

The results from this study offer a different perspective regarding phishing. Previous research has focused on the user aspect; however, few studies have examined the phisher's perspective and the social psychological techniques being implemented. Additionally, prior work has yet to examine the success of these social psychology techniques. Results from this study can be used to help predict future trends and inform training programs, as well as machine learning programs used to identify phishing messages.

2017-11-03
Yang, B., Zhang, T..  2016.  A Scalable Meta-Model for Big Data Security Analyses. 2016 IEEE 2nd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). :55–60.

This paper proposes a highly scalable framework that can be applied to detect network anomalies at the per-flow level by constructing a meta-model for a family of machine learning algorithms or statistical data models. The approach is scalable and attainable because the raw data needs to be accessed only once: it is processed, computed, and transformed into a meta-model matrix of much smaller size that can be resident in system RAM. The calculation of the meta-model matrix is achieved through disposable updating operations at the per-row level: once a per-flow record is processed, it is no longer needed for calculating the meta-model matrix. While the proposed framework covers both Gaussian and non-Gaussian data, the focus of this work is on linear regression models. Specifically, a new concept called meta-model sufficient statistics is proposed to analyze a group of models, where exact, not approximate, results are derived. In addition, the proposed framework can quickly discover an optimal statistical or computational model from a family of candidate models without rescanning the raw dataset. This suggests that an extremely efficient and effective theory and method is possible for big data security analysis.
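
A minimal sketch of the one-pass idea for the linear regression case: the Gram matrix X'X and moment vector X'y are sufficient statistics, so each per-flow record can be folded in once and discarded (dimensions and data are illustrative assumptions):

```python
# Sketch: streaming sufficient statistics for exact least squares.
import numpy as np

d = 4                          # features per flow record (assumed)
XtX = np.zeros((d, d))         # compact meta-model matrix, fits in RAM
Xty = np.zeros(d)

def absorb(row, label):
    """Disposable per-row update: after this call the raw row is not needed."""
    global XtX, Xty
    XtX += np.outer(row, row)
    Xty += label * row

# simulated stream of (features, response) pairs, e.g., parsed flow records
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 3.0])
for _ in range(100_000):
    x = rng.normal(size=d)
    absorb(x, true_w @ x + rng.normal(scale=0.1))

# exact least-squares fit recovered from the compact statistics alone
w_hat = np.linalg.solve(XtX, Xty)
print(w_hat.round(2))          # close to true_w, with no rescan of raw data
```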

2017-07-24
Hibshi, Hanan.  2016.  Systematic Analysis of Qualitative Data in Security. Proceedings of the Symposium and Bootcamp on the Science of Security. :52–52.

This tutorial will introduce participants to Grounded Theory, which is a qualitative framework to discover new theory from an empirical analysis of data. This form of analysis is particularly useful when analyzing text, audio or video artifacts that lack structure, but contain rich descriptions. We will frame Grounded Theory in the context of qualitative methods and case studies, which complement quantitative methods, such as controlled experiments and simulations. We will contrast the approaches developed by Glaser and Strauss, and introduce coding theory, the most prominent qualitative method for performing analysis to discover Grounded Theory. Topics include coding frames, first- and second-cycle coding, and saturation. We will use examples from security interview scripts to teach participants: developing a coding frame, coding a source document to discover relationships in the data, developing heuristics to resolve ambiguities between codes, and performing second-cycle coding to discover relationships within categories. Then, participants will learn how to discover theory from coded data. Participants will further learn about inter-rater reliability statistics, including Cohen's and Fleiss' Kappa, Krippendorff's Alpha, and Vanbelle's Index. Finally, we will review how to present Grounded Theory results in publications, including how to describe the methodology, report observations, and describe threats to validity.
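
As a concrete taste of one of the reliability statistics covered, here is a minimal sketch of Cohen's kappa for two raters applying codes to interview excerpts (hypothetical labels; the tutorial's own exercises are not reproduced):

```python
# Minimal sketch: Cohen's kappa for two raters over the same items.
from collections import Counter

rater_a = ["threat", "asset", "threat", "control", "threat", "asset"]
rater_b = ["threat", "asset", "control", "control", "threat", "threat"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# expected chance agreement from each rater's marginal code frequencies
ca, cb = Counter(rater_a), Counter(rater_b)
expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f} expected={expected:.2f} kappa={kappa:.2f}")
```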

2017-05-22
Manzoor, Emaad, Milajerdi, Sadegh M., Akoglu, Leman.  2016.  Fast Memory-efficient Anomaly Detection in Streaming Heterogeneous Graphs. Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1035–1044.

Given a stream of heterogeneous graphs containing different types of nodes and edges, how can we spot anomalous ones in real-time while consuming bounded memory? This problem is motivated by and generalizes from its application in security to host-level advanced persistent threat (APT) detection. We propose StreamSpot, a clustering based anomaly detection approach that addresses challenges in two key fronts: (1) heterogeneity, and (2) streaming nature. We introduce a new similarity function for heterogeneous graphs that compares two graphs based on their relative frequency of local substructures, represented as short strings. This function lends itself to a vector representation of a graph, which is (a) fast to compute, and (b) amenable to a sketched version with bounded size that preserves similarity. StreamSpot exhibits desirable properties that a streaming application requires: it is (i) fully-streaming; processing the stream one edge at a time as it arrives, (ii) memory-efficient; requiring constant space for the sketches and the clustering, (iii) fast; taking constant time to update the graph sketches and the cluster summaries that can process over 100,000 edges per second, and (iv) online; scoring and flagging anomalies in real time. Experiments on datasets containing simulated system-call flow graphs from normal browser activity and various attack scenarios (ground truth) show that StreamSpot is high-performance; achieving above 95% detection accuracy with small delay, as well as competitive time and memory usage.  
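
To make the similarity function concrete, the following simplified sketch represents a typed graph as a frequency vector of short label strings and compares two graphs by cosine similarity; the shingle construction here is a toy stand-in for StreamSpot's traversal-based shingling, not the paper's exact scheme.

```python
# Simplified sketch: shingle-frequency vectors for typed graphs.
from collections import Counter
from math import sqrt

def shingle_vector(edges, k=2):
    """edges: (src_type, edge_type, dst_type) triples; returns counts of
    length-k runs of type labels along the edge records."""
    labels = [t for e in edges for t in e]
    return Counter(tuple(labels[i:i + k]) for i in range(len(labels) - k + 1))

def cosine(u, v):
    dot = sum(u[s] * v[s] for s in u.keys() & v.keys())
    return dot / (sqrt(sum(x * x for x in u.values())) *
                  sqrt(sum(x * x for x in v.values())))

g_normal = [("proc", "open", "file"), ("proc", "read", "file"),
            ("proc", "connect", "sock")]
g_suspect = [("proc", "connect", "sock"), ("proc", "connect", "sock"),
             ("proc", "exec", "proc")]
print(f"similarity = {cosine(shingle_vector(g_normal), shingle_vector(g_suspect)):.2f}")
```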

2017-09-15
Puttegowda, D., Padma, M. C..  2016.  Human Motion Detection and Recognising Their Actions from the Video Streams. Proceedings of the International Conference on Informatics and Analytics. :12:1–12:5.

In the field of image processing, it is more complex and challenging task to detect the Human motion in the video and recognize their actions from the video sequences. A novel approach is presented in this paper to detect the human motion and recognize their actions. By tracking the selected object over consecutive frames of a video or image sequences, the different Human actions are recognized. Initially, the background motion is subtracted from the input video stream and its binary images are constructed. Using spatiotemporal interest points, the object which needs to be monitored is selected by enclosing the required pixels within the bounding rectangle. The selected foreground pixels within the bounding rectangle are then tracked using edge tracking algorithm. The features are extracted and using these features human motion are detected. Finally, the different human actions are recognized using K-Nearest Neighbor classifier. The applications which uses this methodology where monitoring the human actions is required such as shop surveillance, city surveillance, airports surveillance and other important places where security is the prime factor. The results obtained are quite significant and are analyzed on the datasets like KTH and Weizmann dataset, which contains actions like bending, running, walking, skipping, and hand-waving.

Cheng, Wei, Zhang, Kai, Chen, Haifeng, Jiang, Guofei, Chen, Zhengzhang, Wang, Wei.  2016.  Ranking Causal Anomalies via Temporal and Dynamical Analysis on Vanishing Correlations. Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :805–814.

The modern world has witnessed a dramatic increase in our ability to collect, transmit and distribute real-time monitoring and surveillance data from large-scale information systems and cyber-physical systems. Detecting system anomalies thus attracts a significant amount of interest in many fields such as security, fault management, and industrial optimization. Recently, the invariant network has been shown to be a powerful way of characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariant network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detecting causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible causal components, which has several limitations: 1) fault propagation in the network is ignored; 2) the root causal anomalies may not always be the nodes with a high percentage of vanishing correlations; 3) temporal patterns of vanishing correlations are not exploited for robust detection. To address these limitations, in this paper we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network, and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations, and can compensate for unstructured measurement noise in the system. Extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets demonstrate the effectiveness of our approach.
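
To illustrate the propagation intuition, the sketch below smooths raw vanishing-correlation scores over a toy invariant network with a random-walk-with-restart iteration; this is generic diffusion ranking under assumed data, not the paper's exact inference procedure.

```python
# Sketch: diffusion-based ranking of causal anomaly candidates.
import numpy as np

# invariant network over 4 components (symmetric adjacency, toy data)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
broken = np.array([0.2, 0.3, 0.9, 0.4])   # fraction of vanished correlations per node

W = A / A.sum(axis=0, keepdims=True)      # column-normalized transition matrix
score = broken.copy()
for _ in range(50):                        # propagate with restart (c = 0.5)
    score = 0.5 * W @ score + 0.5 * broken
print(score.round(3))                      # node 2 stands out as the likely root cause
```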

2017-11-13
Singh, S. K., Bziuk, W., Jukan, A..  2016.  Balancing Data Security and Blocking Performance with Spectrum Randomization in Optical Networks. 2016 IEEE Global Communications Conference (GLOBECOM). :1–7.

Data randomization or scrambling has been effectively used in various applications to improve data security. In this paper, we use the idea of data randomization to proactively randomize spectrum (re)allocation and thereby improve connections' security. As it is well known that random (re)allocation fragments the spectrum and thus increases blocking in elastic optical networks, we analyze the tradeoff between system performance and security. To this end, in addition to spectrum randomization, we utilize an on-demand defragmentation scheme every time a request is blocked due to spectrum fragmentation. We model the occupancy pattern of an elastic optical link (EOL) using a multi-class continuous-time Markov chain (CTMC) under the random-fit spectrum allocation method. Numerical results show that both blocking and security can be improved at a particular so-called randomization process (RP) arrival rate, while as the RP arrival rate increases further, the connections' security improves at the cost of an increase in overall blocking.
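
The following small simulation sketches the mechanism under discussion: random-fit slot allocation on an elastic optical link, with an on-demand defragmentation (compaction) pass triggered when a request would otherwise block; link size and request widths are illustrative assumptions, and the CTMC analysis is not reproduced.

```python
# Sketch: random-fit allocation with on-demand defragmentation.
import random

SLOTS = 16
link = [None] * SLOTS            # None = free, else connection id

def free_blocks(width):
    """Start indices of all free contiguous blocks of the given width."""
    return [i for i in range(SLOTS - width + 1)
            if all(s is None for s in link[i:i + width])]

def defragment():
    """Compact active connections to one end of the spectrum."""
    active = [s for s in link if s is not None]
    link[:] = active + [None] * (SLOTS - len(active))

def allocate(conn_id, width):
    starts = free_blocks(width)
    if not starts:               # blocked by fragmentation: defrag and retry
        defragment()
        starts = free_blocks(width)
        if not starts:
            return False         # genuinely out of capacity
    start = random.choice(starts)  # random-fit: harder for an observer to predict
    link[start:start + width] = [conn_id] * width
    return True

random.seed(1)
for cid in range(12):
    ok = allocate(cid, random.choice([1, 2, 3]))
    print(f"request {cid}: {'placed' if ok else 'blocked'}  link={link}")
```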

2017-11-27
Settanni, G., Shovgenya, Y., Skopik, F., Graf, R., Wurzenberger, M., Fiedler, R..  2016.  Correlating cyber incident information to establish situational awareness in Critical Infrastructures. 2016 14th Annual Conference on Privacy, Security and Trust (PST). :78–81.

Protecting Critical Infrastructures (CIs) against contemporary cyber attacks has become a crucial as well as complex task. Modern attack campaigns, such as Advanced Persistent Threats (APTs), leverage weaknesses in an organization's business processes and exploit vulnerabilities of several systems to hit their target. Although their life-cycle can last for months, these campaigns typically go undetected until they achieve their goal. They usually aim at exfiltrating data or causing service disruptions, and can also undermine the safety of humans. Novel detection techniques and incident handling approaches are therefore required to effectively protect CIs' networks and to react in a timely manner to this type of threat. Correlating large amounts of data, collected from a multitude of relevant sources, is necessary, and sometimes required by national authorities, to establish cyber situational awareness and to allow suitable countermeasures to be adopted promptly in case of an attack. In this paper we propose three novel methods for security information correlation designed to discover relevant insights and support the establishment of cyber situational awareness.

2017-11-13
Ueta, K., Xue, X., Nakamoto, Y., Murakami, S..  2016.  A Distributed Graph Database for the Data Management of IoT Systems. 2016 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData). :299–304.

The Internet of Things (IoT) has become a popular technology, and various middleware solutions have been proposed and developed for IoT systems. However, there have been few studies on the data management of IoT systems. In this paper, we consider graph database models for the data management of IoT systems, because these models can specify, in a straightforward manner, relationships among entities such as the devices, users, and information that constitute IoT systems. However, applying a graph database to the data management of IoT systems raises issues regarding distribution and security. For the former issue, we propose graph database operations integrated with REST APIs. For the latter, we extend the graph edge properties by adding access protocol permissions, and check permissions using the APIs with authentication. We present the requirements for a use case scenario, in addition to the features of a distributed graph database for IoT data management that solve the aforementioned issues, and implement a prototype of the graph database.
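
A minimal sketch of the access-control idea, assuming an illustrative permission vocabulary (not the paper's implementation): each edge carries protocol permissions, and an authenticated API call succeeds only if a matching edge grants the requested operation.

```python
# Sketch: graph edges carrying access-protocol permission properties.
class Edge:
    def __init__(self, src, dst, label, permissions):
        self.src, self.dst, self.label = src, dst, label
        self.permissions = set(permissions)   # e.g., {"GET", "PUT"}

class GraphStore:
    def __init__(self):
        self.edges = []

    def add_edge(self, src, dst, label, permissions):
        self.edges.append(Edge(src, dst, label, permissions))

    def check(self, user, device, protocol_op):
        """Permit the REST operation only if some edge user->device grants it."""
        return any(e.src == user and e.dst == device and
                   protocol_op in e.permissions
                   for e in self.edges)

g = GraphStore()
g.add_edge("alice", "thermostat-1", "owns", {"GET", "PUT"})
g.add_edge("guest", "thermostat-1", "views", {"GET"})

print(g.check("guest", "thermostat-1", "GET"))   # True
print(g.check("guest", "thermostat-1", "PUT"))   # False: edge lacks the permission
```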

2018-01-10
Zhang, Yuexin, Xiang, Yang, Huang, Xinyi.  2016.  Password-Authenticated Group Key Exchange: A Cross-Layer Design. ACM Trans. Internet Technol.. 16:24:1–24:20.

Two-party password-authenticated key exchange (2PAKE) protocols provide a natural mechanism for secret key establishment in distributed applications, and they have been extensively studied in past decades. However, only a few efforts have been made so far to design password-authenticated group key exchange (GPAKE) protocols. In a 2PAKE or GPAKE protocol, it is assumed that short passwords are preshared among users. This assumption, however, would be impractical in certain applications. Motivated by this observation, this article presents a GPAKE protocol without the password-sharing assumption. To obtain the passwords, wireless devices, such as smart phones, tablets, and laptops, are used to extract short secrets at the physical layer. Using the extracted secrets, users in our protocol can establish a group key at higher layers with low computational cost. Thus, our GPAKE protocol is a cross-layer design. Additionally, our protocol is a compiler; that is, it can transform any provably secure 2PAKE protocol into a GPAKE protocol with only one more round of communication. In addition, the proposed protocol is proved secure in the standard model.

2017-11-03
Biswas, K., Muthukkumarasamy, V..  2016.  Securing Smart Cities Using Blockchain Technology. 2016 IEEE 18th International Conference on High Performance Computing and Communications; IEEE 14th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :1392–1393.

A smart city uses information technology to integrate and manage physical, social, and business infrastructures in order to provide better services to its dwellers while ensuring efficient and optimal utilization of available resources. With the proliferation of technologies such as Internet of Things (IoT), cloud computing, and interconnected networks, smart cities can deliver innovative solutions and more direct interaction and collaboration between citizens and the local government. Despite a number of potential benefits, digital disruption poses many challenges related to information security and privacy. This paper proposes a security framework that integrates the blockchain technology with smart devices to provide a secure communication platform in a smart city.

2021-02-08
Qiao, B., Jin, L., Yang, Y..  2016.  An Adaptive Algorithm for Grey Image Edge Detection Based on Grey Correlation Analysis. 2016 12th International Conference on Computational Intelligence and Security (CIS). :470–474.

In the original algorithm for grey correlation analysis, the detected edges are comparatively rough and the thresholds must be determined in advance. Thus, an adaptive edge detection method based on grey correlation analysis is proposed, in which the basic principle of the original algorithm is used to obtain an automatic threshold adaptively, according to the mean value of the 3×3 area of pixels around the pixel being examined and the properties of human vision. Because the proposed algorithm initially detects a relatively large number of false edges, it is enhanced by processing the eight neighboring pixels around each edge pixel, and the results are merged to obtain the final edge map. The experimental results show that the algorithm produces a more complete edge map with better continuity compared with traditional edge detection algorithms.
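
As a simplified sketch of this flavor of edge detection (not the paper's exact algorithm), the code below scores each pixel by the grey relational grade between its 3×3 neighborhood and the neighborhood mean, then thresholds the grades adaptively at their mean:

```python
# Simplified sketch: grey-correlation edge detection with adaptive threshold.
import numpy as np

def grey_relational_grade(seq, ref, rho=0.5):
    """Mean grey relational coefficient of a sequence against a reference value."""
    delta = np.abs(seq - ref)
    dmin, dmax = delta.min(), delta.max()
    if dmax == 0:                       # perfectly uniform neighborhood
        return 1.0
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean()

def detect_edges(img):
    h, w = img.shape
    grade = np.ones((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nb = img[i - 1:i + 2, j - 1:j + 2].astype(float).ravel()
            grade[i, j] = grey_relational_grade(nb, nb.mean())
    return grade < grade.mean()         # adaptive threshold: below-average grade = edge

img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200                        # vertical step edge
print(detect_edges(img).astype(int))    # edge pixels flagged around column 4
```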

2017-10-03
Henri, Sébastien, Vlachou, Christina, Herzen, Julien, Thiran, Patrick.  2016.  EMPoWER Hybrid Networks: Exploiting Multiple Paths over Wireless and ElectRical Mediums. Proceedings of the 12th International on Conference on Emerging Networking EXperiments and Technologies. :51–65.

Several technologies, such as WiFi, Ethernet and power-line communications (PLC), can be used to build residential and enterprise networks. These technologies often co-exist; most networks use WiFi, and buildings are readily equipped with electrical wires that can offer a capacity up to 1 Gbps with PLC. Yet, current networks do not exploit this rich diversity and often operate far below the available capacity. We design, implement, and evaluate EMPoWER, a system that exploits simultaneously several potentially-interfering mediums. It operates at layer 2.5, between the MAC and IP layers, and combines routing (to find multiple concurrent routes) and congestion control (to efficiently balance traffic across the routes). To optimize resource utilization and robustness, both components exploit the heterogeneous nature of the network. They are fair and efficient, and they operate only within the local area network, without affecting remote Internet hosts. We demonstrate the performance gains of EMPoWER, by simulations and experiments on a 22-node testbed. We show that PLC/WiFi, benefiting from the diversity offered by wireless and electrical mediums, provides significant throughput gains (up to 10x) and improves coverage, compared to multi-channel WiFi.

2017-04-20
Shinde, P. S., Ardhapurkar, S. B..  2016.  Cyber security analysis using vulnerability assessment and penetration testing. 2016 World Conference on Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave). :1–5.

In the last twenty years, the use of internet applications and, with it, web hacking activity have grown rapidly. Organizations face very significant challenges in securing their web applications against rising cyber threats, as compromising on protection is not a reasonable option. Vulnerability Assessment and Penetration Testing (VAPT) techniques help them seek out security loopholes. These security loopholes may be exploited by attackers to launch attacks on technical assets. Thus it is necessary to ascertain these vulnerabilities and install security patches. VAPT helps organizations determine whether their security arrangements are working properly. This paper aims to provide an overview of the various techniques used in vulnerability assessment and penetration testing (VAPT). It also focuses on raising cyber security awareness, and its importance at various levels of an organization, for the adoption of required up-to-date security measures to stay protected from various cyber-attacks.

2022-03-08
Choucri, Nazli, Jackson, Chrisma.  2016.  Perspectives on Cybersecurity: A Collaborative Study. MIT Political Science Network. :1–82.

Almost everyone recognizes the emergence of a new challenge in the cyber domain, namely increased threats to the security of the Internet and its various uses. Seldom does a day go by without dire reports and hair-raising narratives about unauthorized intrusions, access to content, or damage to systems or operations. And, of course, a close correlate is the loss of value. An entire industry has formed around threats to cyber security, prompting technological innovations and operational strategies that promise to prevent damage and destruction. This paper is a collection of chapters entitled 1) "Cybersecurity – Problems, Premises, Perspectives," 2) "An Abbreviated Technical Perspective on Cybersecurity," 3) "The Conceptual Underpinning of Cyber Security Studies," 4) "Cyberspace as the Domain of Content," 5) "The Conceptual Underpinning of Cyber Security Studies," 6) "China’s Perspective on Cyber Security," 7) "Pursuing Deterrence Internationally in Cyberspace," 8) "Is Deterrence Possible in Cyber Warfare?" and 9) "A Theoretical Framework for Analyzing Interactions between Contemporary Transnational Activism and Digital Communication."

2017-09-15
Zodik, Gabi.  2016.  Cognitive and Contextual Enterprise Mobile Computing: Invited Keynote Talk. Proceedings of the 9th India Software Engineering Conference. :11–12.

The second wave of change presented by the age of mobility, wearables, and IoT focuses on how organizations and enterprises, from a wide variety of commercial areas and industries, will use and leverage the new technologies available. Businesses and industries that don't change with the times will simply cease to exist. Applications need to be powered by cognitive and contextual technologies to support real-time proactive decisions. These decisions will be based on the mobile context of a specific user or group of users, incorporating location, time of day, current user task, and more. Driven by the huge amounts of data produced by mobile and wearables devices, and influenced by privacy concerns, the next wave in computing will need to exploit data and computing at the edge of the network. Future mobile apps will have to be cognitive to 'understand' user intentions based on all the available interactions and unstructured data. Mobile applications are becoming increasingly ubiquitous, going beyond what end users can easily comprehend. Essentially, for both business-to-client (B2C) and business-to-business (B2B) apps, only about 30% of the development efforts appear in the interface of the mobile app. For example, areas such as the collaborative nature of the software or the shortened development cycle and time-to-market are not apparent to end users. The other 70% of the effort invested is dedicated to integrating the applications with back-office systems and developing those aspects of the application that operate behind the scenes. An important, yet often complex, part of the solution and mobile app takes place far from the public eye-in the back-office environment. It is there that various aspects of customer relationship management must be addressed: tracking usage data, pushing out messaging as needed, distributing apps to employees within the enterprise, and handling the wide variety of operational and management tasks-often involving the collection and monitoring of data from sensors and wearable devices. All this must be carried out while addressing security concerns that range from verifying user identities, to data protection, to blocking attempted breaches of the organization, and activation of malicious code. Of course, these tasks must be augmented by a systematic approach and vigilant maintenance of user privacy. The first wave of the mobile revolution focused on development platforms, run-time platforms, deployment, activation, and management tools for multi-platform environments, including comprehensive mobile device management (MDM). To realize the full potential of this revolution, we must capitalize on information about the context within which mobile devices are used. With both employees and customers, this context could be a simple piece of information such as the user location or time of use, the hour of the day, or the day of the week. The context could also be represented by more complex data, such as the amount of time used, type of activity performed, or user preferences. Further insight could include the relationship history with the user and the user's behavior as part of that relationship, as well as a long list of variables to be considered in various scenarios. Today, with the new wave of wearables, the definition of context is being further extended to include environmental factors such as temperature, weather, or pollution, as well as personal factors such as heart rate, movement, or even clothing worn. 
In both B2E and B2C situations, a context-dependent approach, based on the appropriate context for each specific user, offers a superior tool for working with both employees and clients alike. This mode of operation does not start and end with the individual user. Rather, it takes into account the people surrounding the user, the events taking place nearby, appliances or equipment activated, the user's daily schedule, as well as other, more general information, such as the environment and weather. Developing enterprise-wide, context-dependent, mobile solutions is still a complex challenge. A system of real added-value services must be developed, as well as a comprehensive architecture. These four-tier architectures comprise end-user devices like wearables and smartphones, connected to systems of engagement (SoEs), and systems of record (SoRs). All this is needed to enable data analytics and collection in the context where it is created. The data collected will allow further interaction with employees or customers, analytics, and follow-up actions based on the results of that analysis. We also need to ensure end-to-end (E2E) security across these four tiers, and to keep the data and application contexts in sync. These are just some of the challenges being addressed by IBM Research. As an example, these technologies could be deployed in the retail space, especially in brick-and-mortar stores. Identifying a customer entering a store, detecting her location among the aisles, and cross-referencing that data with the customer's transaction history, could lead to special offers tailor-made for that specific customer or suggestions relevant to her purchasing process. This technology enables real-world implementation of metrics, analytics, and other tools familiar to us from the online realm. We can now measure visits to physical stores in the same way we measure web page hits: analyze time spent in the store, the areas visited by the customer, and the results of those visits. In this way, we can also identify shoppers wandering around the store and understand when they are having trouble finding the product they want to purchase. We can also gain insight into the standard traffic patterns of shoppers and how they navigate a store's floors and departments. We might even consider redesigning the store layout to take advantage of this insight to enhance sales. In healthcare, the context can refer to insight extracted from data received from sensors on the patient, from either his mobile device or wearable technology, and information about the patient's environment and location at that moment in time. This data can help determine if any assistance is required. For example, if a patient is discharged from the hospital for continued at-home care, doctors can continue to remotely monitor his condition via a system of sensors and analytic tools that interpret the sensor readings. This approach can also be applied to the area of safety. Scientists at IBM Research are developing a platform that collects and analyzes data from wearable technology to protect the safety of employees working in construction, heavy industry, manufacturing, or out in the field. This solution can serve as a real-time warning system by analyzing information gathered from wearable sensors embedded in personal protective equipment, such as smart safety helmets and protective vests, and in the workers' individual smartphones. 
These sensors can continuously monitor a worker's pulse rate, movements, body temperature, and hydration level, as well as environmental factors such as noise level, and other parameters. The system can provide immediate alerts to the worker about any dangers in the work environment to prevent possible injury. It can also be used to prevent accidents before they happen or detect accidents once they occur. For example, with sophisticated algorithms, we can detect if a worker falls based on a sudden difference in elevations detected by an accelerometer, and then send an alert to notify her peers and supervisor or call for help. Monitoring can also help ensure safety in areas where continuous exposure to heat or dangerous materials must be limited based on regulated time periods. Mobile technologies can also help manage events with massive numbers of participants, such as professional soccer games, music festivals, and even large-scale public demonstrations, by sending alerts concerning long and growing lines or specific high-traffic areas. These technologies can be used to detect accidents typical of large-scale gatherings, send warnings about overcrowding, and alert the event organizers. In the same way, they can alleviate parking problems or guide public transportation operators- all via analysis and predictive analytics. IBM Research - Haifa is currently involved in multiple activities as part of IBM's MobileFirst initiative. Haifa researchers have a special expertise in time- and location-based intelligent applications, including visual maps that display activity contexts and predictive analytics systems for mobile data and users. In another area, IBM researchers in Haifa are developing new cognitive services driven from the unique data available on mobile and wearable devices. Looking to the future, the IBM Research team is further advancing the integration of wearable technology, augmented reality systems, and biometric tools for mobile user identity validation. Managing contextual data and analyzing the interaction between the different kinds of data presents fascinating challenges for the development of next-generation programming. For example, we need to rethink when and where data processing and computations should occur: Is it best to leave them at the user-device level, or perhaps they should be moved to the back-office systems, servers, and/or the cloud infrastructures with which the user device is connected? New-age applications are becoming more and more distributed. They operate on a wide range of devices, such as wearable technologies, use a variety of sensors, and depend on cloud-based systems. As a result, a new distributed programming paradigm is emerging to meet the needs of these use-cases and real-time scenarios. This paradigm needs to deal with massive amounts of devices, sensors, and data in business systems, and must be able to shift computation from the cloud to the edge, based on context in close to real-time. By processing data at the edge of the network, close to where the interactions and processing are happening, we can help reduce latency and offer new opportunities for improved privacy and security. Despite all these interactions, data collection, and the analytic insights based upon them-we cannot forget the issues of privacy. Without a proper and reliable solution that offers more control over what personal data is shared and how it is used, people will refrain from sharing information. 
Such sharing is necessary for developing and understanding the context in which people are carrying out various actions, and to offer them tools and services to enhance their actions. In the not-so-distant future, we anticipate the appearance of ad-hoc networks for wearable technology systems that will interact with one another to further expand the scope and value of available context-dependent data.

2017-03-13
Hlyne, C. N. N., Zavarsky, P., Butakov, S..  2016.  SCAP benchmark for Cisco router security configuration compliance. 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST). :270–276.

Information security management is time-consuming and error-prone. Apart from day-to-day operations, organizations need to comply with industrial regulations or government directives. Thus, organizations are looking for security tools to automate security management tasks and daily operations. Security Content Automation Protocol (SCAP) is a suite of specifications that help to automate security management tasks such as vulnerability measurement and policy compliance evaluation. SCAP benchmarks provide detailed guidance on setting the security configuration of network devices, operating systems, and applications. Organizations can use SCAP benchmarks to perform automated configuration compliance assessment on network devices, operating systems, and applications. This paper discusses SCAP benchmark components and the development of a SCAP benchmark for automating Cisco router security configuration compliance.
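
For a feel of what such automated compliance checking amounts to, here is a minimal sketch that evaluates a Cisco IOS running-config against a hand-written rule list; the rules and config text are illustrative assumptions, not actual SCAP/XCCDF content.

```python
# Sketch: benchmark-style presence/absence checks over a router config.
import re

RULES = [
    # (rule title, config-line pattern, whether the line must be present)
    ("service password-encryption enabled", r"^service password-encryption", True),
    ("HTTP server disabled",                r"^ip http server",              False),
    ("SSH version 2 enforced",              r"^ip ssh version 2",            True),
]

running_config = """\
service password-encryption
ip http server
hostname edge-router-1
"""

lines = running_config.splitlines()
for title, pattern, must_be_present in RULES:
    present = any(re.match(pattern, ln) for ln in lines)
    status = "PASS" if present == must_be_present else "FAIL"
    print(f"{status}: {title}")
```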

2017-10-03
Chlebus, Bogdan S., Vaya, Shailesh.  2016.  Distributed Communication in Bare-bones Wireless Networks. Proceedings of the 17th International Conference on Distributed Computing and Networking. :1:1–1:10.

We consider wireless networks in which the effects of interference are determined by the SINR model. We address the question of structuring distributed communication when stations have very limited individual capabilities. In particular, nodes do not know their geographic coordinates, neighborhoods or even the size n of the network, nor can they sense collisions. Each node is equipped only with its unique name from a range {1, ..., N}. We study the following three settings and distributed algorithms for communication problems in each of them. In the uncoordinated-start case, when one node starts an execution and other nodes are awoken by receiving messages from already awoken nodes, we present a randomized broadcast algorithm which wakes up all the nodes in O(n log^2 N) rounds with high probability. In the synchronized-start case, when all the nodes simultaneously start an execution, we give a randomized algorithm that computes a backbone of the network in O(Δ log^7 N) rounds with high probability. Finally, in the partly-coordinated-start case, when a number of nodes start an execution together and other nodes are awoken by receiving messages from the already awoken nodes, we develop an algorithm that creates a backbone network in time O(n log^2 N + Δ log^7 N) with high probability.

2018-05-25
B. Zheng, H. Liang, Q. Zhu, H. Yu, C. W. Lin.  2016.  Next Generation Automotive Architecture Modeling and Exploration for Autonomous Driving. 2016 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :53-58.
2017-12-27
Kharel, R., Raza, U., Ijaz, M., Ekpo, S., Busawon, K..  2016.  Chaotic secure digital communication scheme using auxiliary systems. 2016 10th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP). :1–6.

In this paper, we present a new secure message transmission scheme using hyperchaotic discrete primary and auxiliary chaotic systems. The novelty lies in the use of auxiliary chaotic systems for encryption purposes. We have used the modified Henon hyperchaotic discrete-time system. The use of the auxiliary system allows generating the same keystream on the transmitter and receiver sides, and the initial conditions of the auxiliary systems, combined with other transmitter parameters, serve the role of the key. The use of auxiliary systems means that information about the keystream used in the encryption function is not present in the transmitted signal available to intruders; hence, reconstruction of the keystream is not possible. The encrypted message is added onto the dynamics of the transmitter using the inclusion technique, and the dynamical left-inversion technique is employed to retrieve the unknown message. The simulation results confirm the robustness of the method used, and some comments are made about the key space from the cryptographic viewpoint.
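
As a toy illustration of keystream generation from a hyperchaotic discrete map, the sketch below uses the 3-D Baier-Klein generalization of the Henon map (x' = a - y² - bz, y' = x, z' = y); the paper's modified map, its auxiliary-system construction, and the inclusion/left-inversion steps are not reproduced.

```python
# Toy sketch: chaotic keystream from a 3-D hyperchaotic Henon-type map.
# The initial conditions act as the shared key: identical values at the
# transmitter and receiver regenerate the same keystream.

def keystream(key, length, a=1.76, b=0.1, burn_in=200):
    x, y, z = key                       # shared secret initial conditions
    out = []
    for i in range(burn_in + length):
        x, y, z = a - y * y - b * z, x, y
        if i >= burn_in:                # discard the transient
            out.append(int(abs(x) * 1e6) % 256)  # quantize state to a byte
    return bytes(out)

def xor(data, ks):
    return bytes(d ^ k for d, k in zip(data, ks))

key = (0.1, 0.2, 0.3)
msg = b"attack at dawn"
cipher = xor(msg, keystream(key, len(msg)))
print(cipher.hex())
print(xor(cipher, keystream(key, len(msg))))    # recovers b'attack at dawn'
```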

2017-06-05
Khodaei, Mohammad, Papadimitratos, Panos.  2016.  Evaluating On-demand Pseudonym Acquisition Policies in Vehicular Communication Systems. Proceedings of the First International Workshop on Internet of Vehicles and Vehicles of Internet. :7–12.

Standardization and harmonization efforts have reached a consensus towards using a special-purpose Vehicular Public-Key Infrastructure (VPKI) in upcoming Vehicular Communication (VC) systems. However, there are still several technical challenges with no conclusive answers; one such important yet open challenge is the acquisition of short-term credentials (pseudonyms): how should each vehicle interact with the VPKI, e.g., how frequently and for how long? Should each vehicle itself determine the pseudonym lifetime? Answering these questions is far from trivial. Each choice can affect both user privacy and system performance and possibly, as a result, security. In this paper, we make a novel systematic effort to address this multifaceted question. We craft three generally applicable policies and experimentally evaluate the VPKI system performance, leveraging two large-scale mobility datasets. We consider the most promising, in terms of efficiency, pseudonym acquisition policies; we find that within this class of policies, the most promising policy in terms of privacy protection can be supported with moderate overhead. Moreover, in all cases, this work is the first to provide tangible evidence that the state-of-the-art VPKI can serve sizable areas or domains with modest computing resources.

2017-10-03
Yang, Chen, Stoleru, Radu.  2016.  Hybrid Routing in Wireless Networks with Diverse Connectivity. Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing. :71–80.

Real-world wireless networks usually have diverse connectivity characteristics. Although existing works have identified replication as the key to the successful design of routing protocols for these networks, the questions of when replication should be used, by how much, and how to distribute packet copies are still not satisfactorily answered. In this paper, we investigate the above questions and present the design of the Hybrid Routing Protocol (HRP). We make a key observation that delay correlations can significantly impact performance improvements gained from packet replication. Thus, we propose a novel model to capture the correlations of inter-contact times among a group of nodes. HRP utilizes both direct delay feedback and the proposed model to estimate the replication gain, which is then fed into a novel regret-minimization algorithm to dynamically decide the amount of packet replication under unknown network conditions. We evaluate HRP through extensive simulations. We show that HRP achieves up to 3.5x delivery ratio improvement and up to 50% delay reduction, with comparable and even lower overhead than state-of-the-art routing protocols.
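
To illustrate the decision layer in spirit, the sketch below treats candidate replication factors as bandit arms and selects among them with a simple epsilon-greedy rule under a toy reward model; HRP's actual regret-minimization algorithm and replication-gain estimator are not reproduced.

```python
# Sketch: bandit-style choice of a packet replication factor.
import random

ARMS = [1, 2, 4, 8]                    # candidate numbers of packet copies
counts = {a: 0 for a in ARMS}
values = {a: 0.0 for a in ARMS}        # running mean reward per arm

def reward(copies):
    # toy environment: more copies raise delivery odds but cost overhead
    delivered = random.random() < 1 - 0.5 ** copies
    return (1.0 if delivered else 0.0) - 0.05 * copies

random.seed(7)
for t in range(2000):
    if random.random() < 0.1:          # explore
        arm = random.choice(ARMS)
    else:                              # exploit the best estimate so far
        arm = max(ARMS, key=lambda a: values[a])
    r = reward(arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]

print({a: round(values[a], 3) for a in ARMS})  # learned value per replication level
```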