Biblio

Found 188 results

Filters: Keyword is Computing Theory
2020-07-20
Haque, Md Ariful, Shetty, Sachin, Krishnappa, Bheshaj.  2019.  Modeling Cyber Resilience for Energy Delivery Systems Using Critical System Functionality. 2019 Resilience Week (RWS). 1:33–41.

In this paper, we analyze cyber resilience for energy delivery systems (EDS) using critical system functionality (CSF). Some research focuses on identifying critical cyber components and services to address resiliency for EDS. Analysis based on devices and services alone, excluding system behavior during an adverse event, provides only a partial picture of cyber resilience. To address this gap, we utilize a vulnerability graph representation of EDS to compute system functionality under adverse conditions. We use a network criticality metric to determine CSF. We estimate the criticality metric using the graph Laplacian matrix and the network performance after removing links (i.e., disabling control functions or services). We model the resilience of the EDS using CSF and a system recovery curve. We also provide a comprehensive analysis of cyber resilience by determining the critical devices using the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and AHP (Analytical Hierarchy Process) methods. We present use cases of EDS illustrating how control functions and services in EDS map to the vulnerability graph model. The simulation results show that we can estimate the resilience metric using different types of graphs, which may assist in making informed decisions about EDS resilience.
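
As a rough illustration of the Laplacian-based criticality computation described above (an editor's sketch, not the authors' implementation), the following Python snippet scores link removals using one common definition of network criticality, the effective graph resistance obtained from the Laplacian pseudoinverse; the graph and the removed links are hypothetical:

```python
import networkx as nx
import numpy as np

def network_criticality(G):
    """One common criticality measure: effective graph resistance,
    n * trace of the Moore-Penrose pseudoinverse of the Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return G.number_of_nodes() * np.trace(np.linalg.pinv(L))

# Hypothetical vulnerability graph; higher criticality = less connected.
G = nx.erdos_renyi_graph(12, 0.4, seed=1)
base = network_criticality(G)
for u, v in list(G.edges())[:3]:          # disable a control link/service
    H = G.copy()
    H.remove_edge(u, v)
    if nx.is_connected(H):
        print(f"without ({u},{v}): {network_criticality(H):.2f} (base {base:.2f})")
```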

2020-04-13
Horne, Benjamin D., Gruppi, Mauricio, Adali, Sibel.  2019.  Trustworthy Misinformation Mitigation with Soft Information Nudging. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :245–254.

Research in combating misinformation reports many negative results: facts may not change minds, especially if they come from sources that are not trusted. Individuals can disregard and justify lies told by trusted sources. This problem is made even worse by social recommendation algorithms which help amplify conspiracy theories and information confirming one's own biases due to companies' efforts to optimize for clicks and watch time over individuals' own values and public good. As a result, more nuanced voices and facts are drowned out by a continuous erosion of trust in better information sources. Most misinformation mitigation techniques assume that discrediting, filtering, or demoting low veracity information will help news consumers make better information decisions. However, these negative results indicate that some news consumers, particularly extreme or conspiracy news consumers will not be helped. We argue that, given this background, technology solutions to combating misinformation should not simply seek facts or discredit bad news sources, but instead use more subtle nudges towards better information consumption. Repeated exposure to such nudges can help promote trust in better information sources and also improve societal outcomes in the long run. In this article, we will talk about technological solutions that can help us in developing such an approach, and introduce one such model called Trust Nudging.

2020-03-02
Sultana, Kazi Zakia, Chong, Tai-Yin.  2019.  A Proposed Approach to Build an Automated Software Security Assessment Framework using Mined Patterns and Metrics. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :176–181.

Software security is a major concern of developers who intend to deliver reliable software. Although there is research focusing on vulnerability prediction and discovery, there is still a need for security-specific metrics that measure software security and vulnerability-proneness quantitatively. Existing methods are either based on software metrics (defined on the physical characteristics of code, e.g., complexity or lines of code), which are not security-specific, or on generic patterns known as nano-patterns (Java method-level traceable patterns that characterize a Java method or function). Other methods predict vulnerabilities using text mining approaches or graph algorithms, which perform poorly in cross-project validation and fail to generalize across systems. In this paper, we envision an automated framework that assists developers in assessing the security level of their code and guides them towards developing secure code. To accomplish this goal, we aim to refine and redefine the existing nano-patterns and software metrics to make them more security-centric, so that they can measure the security level of source code (either a file or a function) with higher accuracy. We present our visionary approach through a series of three consecutive studies in which we (1) study the challenges of current software metrics and nano-patterns in vulnerability prediction, (2) redefine and characterize the nano-patterns and software metrics so that they capture security-specific properties of code and measure the security level quantitatively, and finally (3) implement an automated framework that automatically extracts the values of all the patterns and metrics for a given code segment and flags the estimated security level as feedback. We conducted preliminary experiments whose results indicate that our vision can be practically implemented and will have valuable implications for the software security community.
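
To make file- and function-level metric extraction concrete, here is a minimal, hypothetical sketch using Python's ast module to compute lines of code and a crude branch-count complexity proxy; these are conventional metrics, not the security-centric metrics the authors propose:

```python
import ast

def function_metrics(source):
    """Per-function LOC and a crude cyclomatic-complexity proxy (1 + branches)."""
    metrics = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            loc = node.end_lineno - node.lineno + 1
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While,
                                          ast.Try, ast.BoolOp))
                           for n in ast.walk(node))
            metrics[node.name] = {"loc": loc, "complexity": 1 + branches}
    return metrics

print(function_metrics("def f(x):\n    if x > 0:\n        return x\n    return -x\n"))
# {'f': {'loc': 4, 'complexity': 2}}
```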

2020-02-17
Ezick, James, Henretty, Tom, Baskaran, Muthu, Lethin, Richard, Feo, John, Tuan, Tai-Ching, Coley, Christopher, Leonard, Leslie, Agrawal, Rajeev, Parsons, Ben et al..  2019.  Combining Tensor Decompositions and Graph Analytics to Provide Cyber Situational Awareness at HPC Scale. 2019 IEEE High Performance Extreme Computing Conference (HPEC). :1–7.

This paper describes MADHAT (Multidimensional Anomaly Detection fusing HPC, Analytics, and Tensors), an integrated workflow that demonstrates the applicability of HPC resources to the problem of maintaining cyber situational awareness. MADHAT combines two high-performance packages: ENSIGN for large-scale sparse tensor decompositions and HAGGLE for graph analytics. Tensor decompositions isolate coherent patterns of network behavior in ways that common clustering methods based on distance metrics cannot. Parallelized graph analysis then uses directed queries on a representation that combines the elements of identified patterns with other available information (such as additional log fields, domain knowledge, network topology, whitelists and blacklists, prior feedback, and published alerts) to confirm or reject a threat hypothesis, collect context, and raise alerts. MADHAT was developed using the collaborative HPC Architecture for Cyber Situational Awareness (HACSAW) research environment and evaluated on structured network sensor logs collected from Defense Research and Engineering Network (DREN) sites using HPC resources at the U.S. Army Engineer Research and Development Center DoD Supercomputing Resource Center (ERDC DSRC). To date, MADHAT has analyzed logs with over 650 million entries.
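
ENSIGN itself is proprietary, but the core primitive, a CP tensor decomposition that isolates coherent patterns of network behavior, can be sketched. Below is a bare-bones CP alternating-least-squares routine applied to a small hypothetical (source, destination, hour) count tensor; it is an illustrative sketch, not MADHAT's code:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(X, rank, iters=100, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    (I, J, K), R = X.shape, rank
    A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
    for _ in range(iters):
        A = X.reshape(I, -1) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(X, 1, 0).reshape(J, -1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(X, 2, 0).reshape(K, -1) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Hypothetical 8 sources x 8 destinations x 24 hours traffic-count tensor.
rng = np.random.default_rng(1)
X = rng.poisson(1.0, (8, 8, 24)).astype(float)
X[2, 5, 9:12] += 50                      # injected anomalous pattern
A, B, C = cp_als(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(f"relative error: {np.linalg.norm(X - Xhat) / np.linalg.norm(X):.3f}")
```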

2018-09-05
Pasareanu, C..  2017.  Symbolic execution and probabilistic reasoning. 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). :1–1.

Summary form only given. Symbolic execution is a systematic program analysis technique which explores multiple program behaviors all at once by collecting and solving symbolic path conditions over program paths. The technique has been recently extended with probabilistic reasoning. This approach computes the conditions to reach target program events of interest and uses model counting to quantify the fraction of the input domain satisfying these conditions thus computing the probability of event occurrence. This probabilistic information can be used for example to compute the reliability of an aircraft controller under different wind conditions (modeled probabilistically) or to quantify the leakage of sensitive data in a software system, using information theory metrics such as Shannon entropy. In this talk we review recent advances in symbolic execution and probabilistic reasoning and we discuss how they can be used to ensure the safety and security of software systems.
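
The probabilistic step is easy to demonstrate in miniature: over finite input domains, model counting reduces to counting the inputs that satisfy a path condition. A sketch with an invented path condition and domains:

```python
from itertools import product

def path_probability(path_condition, domains):
    """Fraction of the finite input domain satisfying a path condition."""
    assignments = list(product(*domains.values()))
    sat = sum(path_condition(dict(zip(domains, vals))) for vals in assignments)
    return sat / len(assignments)

# Path condition collected along the branch: if x + y > 10 and x < 4: <event>
domains = {"x": range(16), "y": range(16)}
pc = lambda env: env["x"] + env["y"] > 10 and env["x"] < 4
print(path_probability(pc, domains))     # probability of reaching the event
```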

Turnley, J., Wachtel, A., Muñoz-Ramos, K., Hoffman, M., Gauthier, J., Speed, A., Kittinger, R..  2017.  Modeling human-technology interaction as a sociotechnical system of systems. 2017 12th System of Systems Engineering Conference (SoSE). :1–6.

As system of systems (SoS) models become increasingly complex and interconnected, a new approach is needed to capture the effects of humans within the SoS. Many real-life events have shown the detrimental outcomes of failing to account for humans in the loop. This research introduces a novel, cross-disciplinary methodology for modeling humans interacting with technologies to perform tasks within an SoS, specifically within a layered physical security system use case. Metrics and formulations developed for this new way of looking at SoS, termed sociotechnical SoS, allow for quantifying the interplay of effectiveness and efficiency seen in detection theory to measure the ability of a physical security system to detect and respond to threats. This methodology has been applied to a notional representation of a small military Forward Operating Base (FOB) as a proof of concept.
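
The detection-theoretic core, the probability that a layered physical security system detects a threat, can be illustrated with a toy computation under an independence assumption that the paper does not necessarily make; the layer probabilities below are invented:

```python
def layered_detection_prob(layer_probs):
    """P(at least one layer detects), assuming layers detect independently."""
    p_miss = 1.0
    for p in layer_probs:
        p_miss *= 1.0 - p
    return 1.0 - p_miss

# Hypothetical fence sensor, camera operator, roving patrol at a FOB.
print(layered_detection_prob([0.6, 0.5, 0.3]))   # 0.86
```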

Zhang, H., Lou, F., Fu, Y., Tian, Z..  2017.  A Conditional Probability Computation Method for Vulnerability Exploitation Based on CVSS. 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC). :238–241.

Computing the probability of vulnerability exploitation in Bayesian attack graphs (BAGs) is a key process in network security assessment. The conditional probability of vulnerability exploitation can be obtained from the exploitability score of NIST's Common Vulnerability Scoring System (CVSS). However, the method that N. Poolsappasit et al. proposed for computing this conditional probability applies only to CVSS metric version 2.0 and cannot be used with the other two versions. In this paper, we present two methods for computing the conditional probability based on CVSS's other two metric versions, 1.0 and 3.0, respectively. Combined with the method of N. Poolsappasit et al., this completes the CVSS-based conditional probability computation of vulnerability exploitation.
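
As a sketch of the general recipe (normalize a CVSS exploitability subscore into [0,1], then combine probabilities along the attack graph), the code below uses the published CVSS v2 exploitability formula with a common exploitability/10 normalization; treating this as equivalent to the paper's v1.0/v3.0 derivations is an assumption:

```python
def p_exploit_v2(av, ac, au):
    """CVSS v2 exploitability = 20*AV*AC*Au (max 10); normalized to [0,1]."""
    return min(20.0 * av * ac * au / 10.0, 1.0)

def p_node(p_local, parents, mode="or"):
    """Bayesian attack graph node: P(compromise) given parent probabilities."""
    if not parents:
        return p_local
    if mode == "and":                      # all parent conditions required
        reach = 1.0
        for p in parents:
            reach *= p
    else:                                  # "or": any parent suffices
        reach = 1.0
        for p in parents:
            reach *= 1.0 - p
        reach = 1.0 - reach
    return p_local * reach

# AV=Network(1.0), AC=Medium(0.61), Au=None(0.704) -> exploitability ~8.6
p_leaf = p_exploit_v2(1.0, 0.61, 0.704)
print(round(p_leaf, 3), round(p_node(p_leaf, [0.9, 0.5], mode="or"), 3))
```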

Teusner, R., Matthies, C., Giese, P..  2017.  Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :418–425.

In any sufficiently complex software system there are experts, having a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and which particular parts of the system they can provide help with. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only few experts exist and take preventive actions to keep the collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey, comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
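
A toy version of the underlying aggregation, crediting developers with the complexity they add per component over time, might look like the following; the change records and the max(delta, 0) crediting rule are editorial assumptions, not the paper's exact formulation:

```python
from collections import defaultdict

def expertise_scores(changes):
    """Sum complexity added per (developer, component) from change records
    of the form (developer, component, complexity_delta)."""
    scores = defaultdict(float)
    for dev, component, delta in changes:
        scores[(dev, component)] += max(delta, 0.0)   # credit complexity authored
    return scores

changes = [("alice", "auth", 12.0), ("bob", "auth", 3.0),
           ("alice", "billing", 1.0), ("bob", "billing", 9.0)]
for (dev, comp), score in sorted(expertise_scores(changes).items()):
    print(f"{dev:5s} {comp:8s} {score:5.1f}")   # alice leads auth, bob billing
```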

Wang, J., Shi, D., Li, Y., Chen, J., Duan, X..  2017.  Realistic measurement protection schemes against false data injection attacks on state estimators. 2017 IEEE Power Energy Society General Meeting. :1–5.

False data injection attacks (FDIA) on state estimators are an imminent cyber-physical security issue. Fortunately, it has been proved that if a set of measurements is strategically selected and protected, no FDIA will remain undetectable. In this paper, the metric Return on Investment (ROI) is introduced to evaluate the overall returns of alternative measurement protection schemes (MPS). By setting maximum total ROI as the optimization objective, the previously ignored cost-benefit issue is taken into account to derive a realistic MPS for power utilities. The optimization problem is transformed into the Steiner tree problem in graph theory, where a tree-pruning-based algorithm is used to reduce the computational complexity and find a quasi-optimal solution with acceptable approximations. The correctness and efficiency of the algorithm are verified by case studies.
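
The Steiner-tree reduction can be exercised directly; the sketch below uses networkx's standard Steiner-tree approximation (a stand-in for the paper's tree-pruning algorithm) on an invented set of buses, with the terminal set standing in for the measurements that must be protected:

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([          # hypothetical buses; weights = protection cost
    ("G1", "B1", 2), ("B1", "B2", 1), ("B2", "L1", 3),
    ("B1", "B3", 2), ("B3", "L1", 1), ("B3", "M1", 2), ("B2", "M2", 4),
])
terminals = ["G1", "M1", "M2"]       # measurements that must stay protected
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges()), sum(d["weight"] for _, _, d in T.edges(data=True)))
```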

Hossain, M. A., Merrill, H. M., Bodson, M..  2017.  Evaluation of metrics of susceptibility to cascading blackouts. 2017 IEEE Power and Energy Conference at Illinois (PECI). :1–5.

In this paper, we evaluate the usefulness of metrics that assess susceptibility to cascading blackouts. The metrics are computed using a matrix of Line Outage Distribution Factors (LODF, or DFAX matrix). The metrics are compared for several base cases with different load levels of the Western Interconnection (WI). A case corresponding to the September 8, 2011 pre-blackout state is used to compute these metrics and relate them to the origin of the cascading blackout. The correlation between the proposed metrics is determined to check redundancy. The analysis is also used to find vulnerable and critical hot spots in the power system.
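
For readers unfamiliar with LODFs, the sketch below builds a DC-power-flow PTDF matrix for a hypothetical three-bus triangle (unit reactances, bus 0 as slack) and derives the LODF matrix from the standard formula; it is a minimal illustration, not the paper's Western Interconnection model:

```python
import numpy as np

def dc_ptdf(n_bus, lines, slack=0):
    """PTDF[l, b]: flow change on line l per unit injection at bus b
    (withdrawn at the slack), DC power flow with unit susceptances."""
    A = np.zeros((len(lines), n_bus))
    for l, (i, j) in enumerate(lines):
        A[l, i], A[l, j] = 1.0, -1.0
    keep = [b for b in range(n_bus) if b != slack]
    X = np.zeros((n_bus, n_bus))
    Bbus = A.T @ A
    X[np.ix_(keep, keep)] = np.linalg.inv(Bbus[np.ix_(keep, keep)])
    return A @ X

def lodf(ptdf, lines):
    """LODF[l, k]: fraction of pre-outage flow on line k shifted to line l."""
    M = np.zeros((len(lines), len(lines)))
    for k, (i, j) in enumerate(lines):
        denom = 1.0 - (ptdf[k, i] - ptdf[k, j])
        for l in range(len(lines)):
            if l != k:
                M[l, k] = (ptdf[l, i] - ptdf[l, j]) / denom
    return M

lines = [(0, 1), (1, 2), (0, 2)]
print(np.round(lodf(dc_ptdf(3, lines), lines), 3))
```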

Doynikova, E., Kotenko, I..  2017.  Enhancement of probabilistic attack graphs for accurate cyber security monitoring. 2017 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computed, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1–6.

Timely and adequate response to computer security incidents depends on accurate monitoring of the security situation. The paper investigates the task of refining attack models in the form of attack graphs. It considers some challenges of attack graph generation and possible solutions, including: inaccuracies in specifying the pre- and postconditions of attack actions, processing of cycles in graphs so that Bayesian methods can be applied to attack graph analysis, mapping of incidents onto attack graph nodes, and automatic countermeasure selection for nodes under risk. The software prototype that implements the suggested solutions is briefly specified. The influence of the modifications on security monitoring is shown in a case study, and the results of experiments are described.
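
One of the listed challenges, cycles blocking Bayesian analysis, admits a simple heuristic sketch: detect cycles and drop a closing edge until the attack graph is a DAG. The example graph and the choice of which edge to drop are illustrative assumptions, not the prototype's algorithm:

```python
import networkx as nx

def break_cycles(G):
    """Heuristic: remove one edge per detected cycle until the graph is a DAG,
    so standard Bayesian inference can run on the attack graph."""
    H = G.copy()
    while True:
        try:
            cycle = nx.find_cycle(H)
        except nx.NetworkXNoCycle:
            return H
        H.remove_edge(*cycle[-1])          # drop the cycle-closing edge

G = nx.DiGraph([("scan", "exploit"), ("exploit", "root"), ("root", "scan")])
dag = break_cycles(G)
print(nx.is_directed_acyclic_graph(dag), sorted(dag.edges()))
```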

Gai, K., Qiu, M..  2017.  An Optimal Fully Homomorphic Encryption Scheme. 2017 IEEE 3rd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). :101–106.

The expeditious expansion of networking technologies has remarkably driven the usage of distributed computing and services, such as task offloading to the cloud. However, security and privacy concerns are restricting implementations of cloud computing because of threats from both outsiders and insiders. The primary alternative for protecting users' data is developing a Fully Homomorphic Encryption (FHE) scheme, which can cover both data protection and data processing in the cloud. Despite many previous attempts to address this approach, none of the proposed work can simultaneously satisfy two requirements: noise-free accuracy and efficient execution. This paper focuses on the issue of FHE design and proposes a novel FHE scheme, called Optimal Fully Homomorphic Encryption (O-FHE). Our approach utilizes the properties of the Kronecker Product (KP) and designs a mechanism for achieving FHE that considers both accuracy and efficiency. We have assessed our scheme with both theoretical proofs and experimental evaluations, with confirmed and exceptional results.
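
The scheme itself is not reproduced here, but the Kronecker-product identity such a construction can lean on, the mixed-product property (A⊗B)(C⊗D) = (AC)⊗(BD), is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = rng.integers(0, 5, (2, 2)), rng.integers(0, 5, (2, 2))
B, D = rng.integers(0, 5, (3, 3)), rng.integers(0, 5, (3, 3))

lhs = np.kron(A, B) @ np.kron(C, D)      # operate on the Kronecker products
rhs = np.kron(A @ C, B @ D)              # ... or combine the factors first
print(np.array_equal(lhs, rhs))          # True: (A(x)B)(C(x)D) = (AC)(x)(BD)
```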

Mayle, A., Bidoki, N. H., Masnadi, S., Boeloeni, L., Turgut, D..  2017.  Investigating the Value of Privacy within the Internet of Things. GLOBECOM 2017 - 2017 IEEE Global Communications Conference. :1–6.

Many companies within the Internet of Things (IoT) sector rely on the personal data of users to deliver and monetize their services, creating a high demand for personal information. A user can be seen as making a series of transactions, each involving the exchange of personal data for a service. In this paper, we argue that privacy can be described quantitatively, using the game-theoretic concept of value of information (VoI), enabling us to assess whether each exchange is an advantageous one for the user. We introduce PrivacyGate, an extension to the Android operating system built for the purpose of studying the privacy of IoT transactions. An example study, and its initial results, are provided to illustrate its capabilities.
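
In its simplest decision-theoretic form, value of information reduces to comparing the best expected payoff with and without knowledge of the state. A toy sketch with invented states and payoffs, not PrivacyGate's actual model:

```python
def value_of_information(prior, payoff):
    """VoI = E[best action with the state known] - best action under the prior.
    prior: P(state); payoff[action][state]: utility of acting in that state."""
    without = max(sum(prior[s] * payoff[a][s] for s in prior) for a in payoff)
    informed = sum(prior[s] * max(payoff[a][s] for a in payoff) for s in prior)
    return informed - without

prior = {"benign_use": 0.8, "invasive_use": 0.2}
payoff = {"share_data":    {"benign_use": 5, "invasive_use": -20},
          "withhold_data": {"benign_use": 0, "invasive_use": 0}}
print(value_of_information(prior, payoff))   # 4.0: knowing the use is worth 4 utils
```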

Jia, R., Dong, R., Ganesh, P., Sastry, S., Spanos, C..  2017.  Towards a theory of free-lunch privacy in cyber-physical systems. 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :902–910.

Emerging cyber-physical systems (CPS) often require collecting end users' data to support data-informed decision making processes. There has been a long-standing argument as to the tradeoff between privacy and data utility. In this paper, we adopt a multiparametric programming approach to rigorously study conditions under which data utility has to be sacrificed to protect privacy and situations where free-lunch privacy can be achieved, i.e., data can be concealed without hurting the optimality of the decision making underlying the CPS. We formalize the concept of free-lunch privacy, and establish various results on its existence, geometry, as well as efficient computation methods. We propose the free-lunch privacy mechanism, which is a pragmatic mechanism that exploits free-lunch privacy if it exists with the constant guarantee of optimal usage of data. We study the resilience of this mechanism against attacks that attempt to infer the parameter of a user's data generating process. We close the paper with a case study on occupancy-adaptive smart home temperature control to demonstrate the efficacy of the mechanism.

Gaikwad, V. S., Gandle, K. S..  2017.  Ideal complexity cryptosystem with high privacy data service for cloud databases. 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM). :267–270.

Data storage in the cloud should come with high safety and confidentiality. It is the accountability of the cloud service provider to guarantee the availability and security of client data. Various alternatives exist for storage services, but confidentiality and complexity solutions for database-as-a-service are still not satisfactory. The proposed system gives an alternative solution for database-as-a-service that integrates the benefits of different services along with advanced encryption techniques. It makes it possible to apply concurrency to encrypted data. The alternative supports connecting dispersed clients directly, eliminating the intermediate proxy and thereby gaining simplicity. The performance of the proposed system is evaluated on the basis of theoretical analyses.

Li, C., Palanisamy, B., Joshi, J..  2017.  Differentially Private Trajectory Analysis for Points-of-Interest Recommendation. 2017 IEEE International Congress on Big Data (BigData Congress). :49–56.

Ubiquitous deployment of low-cost mobile positioning devices and the widespread use of high-speed wireless networks enable massive collection of large-scale trajectory data of individuals moving on road networks. Trajectory data mining finds numerous applications including understanding users' historical travel preferences and recommending places of interest to new visitors. Privacy-preserving trajectory mining is an important and challenging problem as exposure of sensitive location information in the trajectories can directly invade the location privacy of the users associated with the trajectories. In this paper, we propose a differentially private trajectory analysis algorithm for points-of-interest recommendation to users that aims at maximizing the accuracy of the recommendation results while protecting the privacy of the exposed trajectories with differential privacy guarantees. Our algorithm first transforms the raw trajectory dataset into a bipartite graph with nodes representing the users and the points-of-interest and the edges representing the visits made by the users to the locations, and then extracts the association matrix representing the bipartite graph to inject carefully calibrated noise to meet ε-differential privacy guarantees. A post-processing of the perturbed association matrix is performed to suppress noise prior to performing a Hyperlink-Induced Topic Search (HITS) on the transformed data that generates an ordered list of recommended points-of-interest. Extensive experiments on a real trajectory dataset show that our algorithm is efficient, scalable and demonstrates high recommendation accuracy while meeting the required differential privacy guarantees.
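
The pipeline outlined above (perturb the user-by-POI association matrix with calibrated Laplace noise, post-process, then run HITS) can be sketched compactly; the sensitivity-1 noise calibration and the toy matrix are editorial assumptions, not the paper's calibration:

```python
import numpy as np

def dp_hits_recommend(visits, epsilon, iters=50, seed=0):
    """Laplace-perturb a user x POI visit matrix (assumed sensitivity 1),
    clip negatives (post-processing), rank POIs by HITS authority scores."""
    rng = np.random.default_rng(seed)
    noisy = np.clip(visits + rng.laplace(0.0, 1.0 / epsilon, visits.shape), 0, None)
    authority = np.ones(visits.shape[1])
    for _ in range(iters):
        hubs = noisy @ authority                 # users endorsing POIs
        authority = noisy.T @ hubs               # POIs endorsed by good hubs
        authority /= np.linalg.norm(authority)
    return np.argsort(-authority)                # POIs, most recommended first

visits = np.array([[3, 0, 1, 0],                 # 3 users x 4 points-of-interest
                   [2, 1, 0, 0],
                   [0, 4, 1, 1]], dtype=float)
print(dp_hits_recommend(visits, epsilon=1.0))
```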

Takbiri, N., Houmansadr, A., Goeckel, D. L., Pishro-Nik, H..  2017.  Limits of location privacy under anonymization and obfuscation. 2017 IEEE International Symposium on Information Theory (ISIT). :764–768.

The prevalence of mobile devices and location-based services (LBS) has generated great concerns regarding the LBS users' privacy, which can be compromised by statistical analysis of their movement patterns. A number of algorithms have been proposed to protect the privacy of users in such systems, but the fundamental underpinnings of such protections remain unexplored. Recently, the concept of perfect location privacy was introduced and its achievability was studied for anonymization-based LBS systems, where user identifiers are permuted at regular intervals to prevent identification based on statistical analysis of long time sequences. In this paper, we significantly extend that investigation by incorporating the other major tool commonly employed to obtain location privacy: obfuscation, where user locations are purposely obscured to protect their privacy. Since anonymization and obfuscation reduce user utility in LBS systems, we investigate how location privacy varies with the degree to which each of these two methods is employed. We provide: (1) achievability results for the case where the location of each user is governed by an i.i.d. process; (2) converse results for the i.i.d. case as well as the more general Markov Chain model. We show that, as the number of users in the network grows, the obfuscation-anonymization plane can be divided into two regions: in the first region, all users have perfect location privacy; and, in the second region, no user has location privacy.

Palanisamy, B., Li, C., Krishnamurthy, P..  2017.  Group Differential Privacy-Preserving Disclosure of Multi-level Association Graphs. 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). :2587–2588.

Traditional privacy-preserving data disclosure solutions have focused on protecting the privacy of individual's information with the assumption that all aggregate (statistical) information about individuals is safe for disclosure. Such schemes fail to support group privacy where aggregate information about a group of individuals may also be sensitive and users of the published data may have different levels of access privileges entitled to them. We propose the notion of εg-Group Differential Privacy that protects sensitive information of groups of individuals at various defined privacy levels, enabling data users to obtain the level of access entitled to them. We present a preliminary evaluation of the proposed notion of group privacy through experiments on real association graph data that demonstrate the guarantees on group privacy on the disclosed data.

Li, W., Song, T., Li, Y., Ma, L., Yu, J., Cheng, X..  2017.  A Hierarchical Game Framework for Data Privacy Preservation in Context-Aware IoT Applications. 2017 IEEE Symposium on Privacy-Aware Computing (PAC). :176–177.

Due to the increasing concerns of securing private information, context-aware Internet of Things (IoT) applications are in dire need of supporting data privacy preservation for users. In the past years, game theory has been widely applied to design secure and privacy-preserving protocols for users to counter various attacks, and most of the existing work is based on a two-player game model, i.e., a user/defender-attacker game. In this paper, we consider a more practical scenario which involves three players: a user, an attacker, and a service provider, and such a complicated system renders any two-player model inapplicable. To capture the complex interactions between the service provider, the user, and the attacker, we propose a hierarchical two-layer three-player game framework. Finally, we carry out a comprehensive numerical study to validate our proposed game framework and theoretical analysis.

2018-08-23
Laszka, Aron, Abbas, Waseem, Vorobeychik, Yevgeniy, Koutsoukos, Xenofon.  2017.  Synergic Security for Smart Water Networks: Redundancy, Diversity, and Hardening. Proceedings of the 3rd International Workshop on Cyber-Physical Systems for Smart Water Networks. :21–24.

Smart water networks can provide great benefits to our society in terms of efficiency and sustainability. However, smart capabilities and connectivity also expose these systems to a wide range of cyber attacks, which enable cyber-terrorists and hostile nation states to mount cyber-physical attacks. Cyber-physical attacks against critical infrastructure, such as water treatment and distribution systems, pose a serious threat to public safety and health. Consequently, it is imperative that we improve the resilience of smart water networks. We consider three approaches for improving resilience: redundancy, diversity, and hardening. Even though each one of these "canonical" approaches has been thoroughly studied in prior work, a unified theory on how to combine them in the most efficient way has not yet been established. In this paper, we address this problem by studying the synergy of these approaches in the context of protecting smart water networks from cyber-physical contamination attacks.

Crooks, Natacha, Pu, Youer, Alvisi, Lorenzo, Clement, Allen.  2017.  Seeing is Believing: A Client-Centric Specification of Database Isolation. Proceedings of the ACM Symposium on Principles of Distributed Computing. :73–82.

This paper introduces the first state-based formalization of isolation guarantees. Our approach is premised on a simple observation: applications view storage systems as black-boxes that transition through a series of states, a subset of which are observed by applications. Defining isolation guarantees in terms of these states frees definitions from implementation-specific assumptions. It makes immediately clear what anomalies, if any, applications can expect to observe, thus bridging the gap that exists today between how isolation guarantees are defined and how they are perceived. The clarity that results from definitions based on client-observable states brings forth several benefits. First, it allows us to easily compare the guarantees of distinct, but semantically close, isolation guarantees. We find that several well-known guarantees, previously thought to be distinct, are in fact equivalent, and that many previously incomparable flavors of snapshot isolation can be organized in a clean hierarchy. Second, freeing definitions from implementation-specific artefacts can suggest more efficient implementations of the same isolation guarantee. We show how a client-centric implementation of parallel snapshot isolation can be more resilient to slowdown cascades, a common phenomenon in large-scale datacenters.

Li, Xin.  2017.  Improved Non-malleable Extractors, Non-malleable Codes and Independent Source Extractors. Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. :1144–1156.

In this paper we give improved constructions of several central objects in the literature of randomness extraction and tamper-resilient cryptography. Our main results are: (1) An explicit seeded non-malleable extractor with error ε and seed length d = O(log n) + O(log(1/ε) log log(1/ε)), that supports min-entropy k = Ω(d) and outputs Ω(k) bits. Combined with the protocol by Dodis and Wichs, this gives a two-round privacy amplification protocol with optimal entropy loss in the presence of an active adversary, for all security parameters up to Ω(k/log k), where k is the min-entropy of the shared weak random source. Previously, the best known seeded non-malleable extractors require seed length and min-entropy O(log n) + log(1/ε)·2^{O(√(log log(1/ε)))}, and only give two-round privacy amplification protocols with optimal entropy loss for security parameter up to k/2^{O(√(log k))}. (2) An explicit non-malleable two-source extractor for min-entropy k ≥ (1−γ)n, for some constant γ > 0, that outputs Ω(k) bits with error 2^{−Ω(n/log n)}. We further show that we can efficiently uniformly sample from the pre-image of any output of the extractor. Combined with the connection found by Cheraghchi and Guruswami, this gives a non-malleable code in the two-split-state model with relative rate Ω(1/log n). This exponentially improves previous constructions, all of which only achieve rate n^{−Ω(1)}. (3) Combined with the techniques by Ben-Aroya et al., our non-malleable extractors give a two-source extractor for min-entropy O(log n log log n), which also implies a K-Ramsey graph on N vertices with K = (log N)^{O(log log log N)}. Previously, the best known two-source extractor by Ben-Aroya et al. requires min-entropy log n·2^{O(√(log n))}, which gives a Ramsey graph with K = (log N)^{2^{O(√(log log log N))}}. We further show a way to reduce the problem of constructing seeded non-malleable extractors to the problem of constructing non-malleable independent source extractors. Using the non-malleable 10-source extractor with optimal error by Chattopadhyay and Zuckerman, we give a 10-source extractor for min-entropy O(log n). Previously, the best known extractor for such min-entropy, by Cohen and Schulman, requires O(log log n) sources. Independent of our work, Cohen obtained similar results to (1) and the two-source extractor, except the dependence on ε is log(1/ε)·poly log log(1/ε) and the two-source extractor requires min-entropy log n·poly log log n.

Abbas, W., Laszka, A., Vorobeychik, Y., Koutsoukos, X..  2017.  Improving network connectivity using trusted nodes and edges. 2017 American Control Conference (ACC). :328–333.

Network connectivity is a primary attribute and a characteristic phenomenon of any networked system. A high connectivity is often desired within networks; for instance to increase robustness to failures, and resilience against attacks. A typical approach to increasing network connectivity is to strategically add links; however adding links is not always the most suitable option. In this paper, we propose an alternative approach to improving network connectivity, that is by making a small subset of nodes and edges “trusted,” which means that such nodes and edges remain intact at all times and are insusceptible to failures. We then show that by controlling the number of trusted nodes and edges, any desired level of network connectivity can be obtained. Along with characterizing network connectivity with trusted nodes and edges, we present heuristics to compute a small number of such nodes and edges. Finally, we illustrate our results on various networks.
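
A brute-force rendering of the paper's central quantity, connectivity when some nodes are trusted and thus cannot fail, helps fix the idea; this enumeration is only viable for tiny graphs and is an editor's sketch, not the paper's heuristics:

```python
from itertools import combinations
import networkx as nx

def connectivity_with_trusted(G, trusted):
    """Size of the smallest set of *untrusted* nodes whose removal
    disconnects G; infinity if no such set exists."""
    candidates = [v for v in G if v not in trusted]
    for k in range(len(candidates) + 1):
        for cut in combinations(candidates, k):
            H = G.copy()
            H.remove_nodes_from(cut)
            if H.number_of_nodes() > 1 and not nx.is_connected(H):
                return k
    return float("inf")

G = nx.path_graph(4)                               # 0-1-2-3
print(connectivity_with_trusted(G, set()))         # 1: node 1 (or 2) is a cut vertex
print(connectivity_with_trusted(G, {1, 2}))        # inf: no untrusted cut remains
```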

Mahmood, N. H., Pedersen, K. I., Mogensen, P..  2017.  A centralized inter-cell rank coordination mechanism for 5G systems. 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC). :1951–1956.

Multiple transmit and receive antennas can be used to increase the number of independent streams between a transmitter-receiver pair, or to improve the interference resilience property with the help of linear minimum mean squared error (MMSE) receivers. An interference-aware inter-cell rank coordination framework for the future fifth generation wireless system is proposed in this article. The proposal utilizes results from random matrix theory to estimate the mean signal-to-interference-plus-noise ratio at the MMSE receiver. In addition, a game-theoretic interference pricing measure is introduced as an inter-cell interference management mechanism to balance the spatial multiplexing vs. interference resilience trade-off. Exhaustive Monte Carlo simulation results demonstrating the performance of the proposed algorithm indicate a gain of around 40% over conventional non-interference-aware schemes, and performance within around 6% of the optimum obtained using a brute-force exhaustive search algorithm.
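
The quantity being estimated, the post-MMSE-receiver SINR given an interference-plus-noise covariance R, has the closed form SINR = h^H R^{-1} h, which is easy to sketch; the channels below are synthetic and not the article's system model:

```python
import numpy as np

def mmse_sinr(h, interferers, noise_power):
    """Post-MMSE SINR = h^H (R + sigma^2 I)^{-1} h for desired channel h."""
    R = noise_power * np.eye(h.size, dtype=complex)
    for g in interferers:
        R += np.outer(g, g.conj())               # interference covariance
    return float(np.real(h.conj() @ np.linalg.solve(R, h)))

rng = np.random.default_rng(3)
chan = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
h = chan(4)                                      # 4 receive antennas
sinr = mmse_sinr(h, [chan(4), chan(4)], noise_power=0.1)
print(f"post-MMSE SINR: {10 * np.log10(sinr):.1f} dB")
```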

Randles, Martin, Johnson, Princy, Hussain, Abir.  2017.  Internet of Things Eco-systems: Assured Interactivity of Devices and Data Through Cloud Based Team Work. Proceedings of the Second International Conference on Internet of Things, Data and Cloud Computing. :15:1–15:9.

IoT systems continue to grow in scale and exhibit similarities to complex systems seen in nature and biology: Systems are composed of heterogeneous entities (mobile devices, servers, sensors, data items, databases, etc.) coordinated in a Cloud environment forming a digital eco-system. Properties of such systems include variety, emergent outcome, self-organisation, etc. The scale of IoT systems, and the disparity in the capabilities of the devices on the market, means there needs to be a unifying model to enable a secure and assured interaction among those `things'. The authors propose conceptual designs for an efficient architecture, run-time decision models using assured models for such an interaction in a digital eco-system. This is done using the situation calculus modelling to represent the fundamental requirements for adjustable decentralised feedback control mechanisms necessary for the IoT-ready software systems: It is shown that complex properties and emergent outcomes of the system can be deduced, emanating from the simple distributed interaction models. A case study from the rail industry is used to assess the design and possible implementation.