Biblio

Found 16998 results

2017-09-26
Poller, Andreas, Kocksch, Laura, Kinder-Kurlanda, Katharina, Epp, Felix Anand.  2016.  First-time Security Audits As a Turning Point?: Challenges for Security Practices in an Industry Software Development Team. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :1288–1294.

Software development is often accompanied by security audits such as penetration tests, usually performed on behalf of the software vendor. In penetration tests security experts identify entry points for attacks in a software product. Many development teams undergo such audits for the first time when their product is attacked or faces new security concerns. The audits often serve as an eye-opener for development teams: they realize that security requires much more attention. However, there is a lack of clarity with regard to what lasting benefits developers can reap from penetration tests. We report on a one-year study of a penetration test run at a major software vendor, and describe how a software development team managed to incorporate the test findings. Results suggest that penetration tests improve developers' security awareness, but that long-lasting enhancements of development practices are hampered by a lack of dedicated security stakeholders and by security not being properly reflected in the communicative and collaborative structures of the organization.

Yassine, Abdul-Amir, Najm, Farid N..  2016.  A Fast Layer Elimination Approach for Power Grid Reduction. Proceedings of the 35th International Conference on Computer-Aided Design. :101:1–101:8.

Simulation and verification of the on-die power delivery network (PDN) is one of the key steps in the design of integrated circuits (ICs). With the very large sizes of modern grids, verification of PDNs has become very expensive and a host of techniques for faster simulation and grid model approximation have been proposed. These include topological node elimination, as in TICER, and full-blown numerical model order reduction (MOR) as in PRIMA and related methods. However, both of these traditional approaches suffer from certain drawbacks that make them expensive and limit their scalability to very large grids. In this paper, we propose a novel technique for grid reduction that is a hybrid of both approaches: the method is numerical but also factors in grid topology. It works by eliminating whole internal layers of the grid at a time, while aiming to preserve the dynamic behavior of the resulting reduced grid. Effectively, instead of traditional node-by-node topological elimination we provide a numerical layer-by-layer block-matrix approach that is both fast and accurate. Experimental results show that this technique is capable of handling very large power grids and provides a 4.25x speed-up in transient analysis.
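
The block-matrix idea can be pictured with a small, hedged sketch: eliminating a block of internal nodes from a conductance matrix via a Schur complement, the block-matrix analogue of node-by-node (TICER-style) elimination. This illustrates only the underlying linear algebra, not the paper's algorithm; the matrix and node indices below are invented for the example.

```python
# Toy sketch (not the paper's algorithm): eliminate a block of internal grid
# nodes from a conductance matrix G via a Schur complement, the block-matrix
# analogue of node-by-node elimination.
import numpy as np

def eliminate_block(G, keep, drop):
    """Reduce conductance matrix G to the 'keep' nodes by eliminating 'drop' nodes."""
    Gkk = G[np.ix_(keep, keep)]
    Gkd = G[np.ix_(keep, drop)]
    Gdk = G[np.ix_(drop, keep)]
    Gdd = G[np.ix_(drop, drop)]
    # Schur complement: the behaviour seen at the kept ports is preserved
    # exactly in the DC (purely resistive) case.
    return Gkk - Gkd @ np.linalg.solve(Gdd, Gdk)

# Example: a 3-node resistive chain; eliminate the middle node (index 1).
G = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
G_red = eliminate_block(G, keep=[0, 2], drop=[1])
print(G_red)  # 2x2 reduced conductance matrix seen from nodes 0 and 2
```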

Devriese, Dominique, Patrignani, Marco, Piessens, Frank.  2016.  Fully-abstract Compilation by Approximate Back-translation. Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. :164–177.

A compiler is fully-abstract if the compilation from source language programs to target language programs reflects and preserves behavioural equivalence. Such compilers have important security benefits, as they limit the power of an attacker interacting with the program in the target language to that of an attacker interacting with the program in the source language. Proving compiler full-abstraction is, however, rather complicated. A common proof technique is based on the back-translation of target-level program contexts to behaviourally-equivalent source-level contexts. However, constructing such a back-translation is problematic when the source language is not strong enough to embed an encoding of the target language. For instance, when compiling from the simply-typed λ-calculus (λτ) to the untyped λ-calculus (λu), the lack of recursive types in λτ prevents such a back-translation. We propose a general and elegant solution for this problem. The key insight is that it suffices to construct an approximate back-translation. The approximation is only accurate up to a certain number of steps and conservative beyond that, in the sense that the context generated by the back-translation may diverge when the original would not, but not vice versa. Based on this insight, we describe a general technique for proving compiler full-abstraction and demonstrate it on a compiler from λτ to λu. The proof uses asymmetric cross-language logical relations and makes innovative use of step-indexing to express the relation between a context and its approximate back-translation. We believe this proof technique can scale to challenging settings and enable simpler, more scalable proofs of compiler full-abstraction.

Oliveira, Raquel, Dupuy-Chessa, Sophie, Calvary, Gaëlle, Dadolle, Daniele.  2016.  Using Formal Models to Cross Check an Implementation. Proceedings of the 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems. :126–137.

Interactive systems are developed according to requirements, which may be, for instance, documentation, prototypes, diagrams, etc. The informal nature of system requirements may be a source of problems: it may be the case that a system does not implement the requirements as expected; thus, a way to validate whether an implementation follows the requirements is needed. We propose a novel approach to validating a system using formal models of the system. In this approach, a set of traces generated from the execution of the real interactive system is searched over the state space of the formal model. The scalability of the approach is demonstrated by an application to an industrial system in the nuclear plant domain. The combination of trace analysis and formal methods provides feedback that can bring improvements to both the real interactive system and the formal model.
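
As a rough illustration of the core check, the hedged sketch below replays recorded traces against a model encoded as a labelled transition relation; the encoding and event names are invented for the example and do not reflect the authors' tooling.

```python
# Illustrative sketch of the general idea (not the authors' toolchain):
# check that each trace recorded from the running system can be replayed
# in the formal model, here encoded as a labelled transition relation.
def trace_in_model(trace, initial_states, transitions):
    """transitions: dict mapping (state, event) -> set of successor states."""
    current = set(initial_states)
    for event in trace:
        current = {s2 for s in current for s2 in transitions.get((s, event), ())}
        if not current:
            return False  # the model cannot reproduce this observed step
    return True

# Tiny example model of a two-button dialog.
transitions = {
    ("idle", "open"):    {"shown"},
    ("shown", "ok"):     {"idle"},
    ("shown", "cancel"): {"idle"},
}
print(trace_in_model(["open", "ok"], ["idle"], transitions))     # True
print(trace_in_model(["open", "close"], ["idle"], transitions))  # False
```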

Rothberg, Valentin, Dietrich, Christian, Ziegler, Andreas, Lohmann, Daniel.  2016.  Towards Scalable Configuration Testing in Variable Software. Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences. :156–167.

Testing a software product line such as Linux implies building the source with different configurations. Manual approaches to generate configurations that enable code of interest are doomed to fail due to the large number of variation points distributed over the feature model, the build system and the source code. Research has proposed various approaches to generate covering configurations, but the algorithms show many drawbacks related to run-time, exhaustiveness and the amount of generated configurations. Hence, analyzing an entire Linux source can yield more than 30 thousand configurations, thereby exceeding the limited budget and resources for build testing. In this paper, we present an approach to fill the gap between a systematic generation of configurations and the necessity to fully build software in order to test it. By merging previously generated configurations, we reduce the number of necessary builds and enable global variability-aware testing. We reduce the problem of merging configurations to finding maximum cliques in a graph. We evaluate the approach on the Linux kernel, compare the results to common practices in industry, and show that our implementation scales even when facing graphs with millions of edges.
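
The reduction to clique finding can be sketched in a few lines. The compatibility test below (two configurations merge if they never assign conflicting values to the same option) and the use of networkx's maximal-clique enumeration are simplifying assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of the merging idea (not the paper's implementation):
# configurations are compatible if they never assign conflicting values
# to the same option; compatible sets form cliques that can be merged.
import networkx as nx

configs = [
    {"CONFIG_A": "y", "CONFIG_B": "n"},
    {"CONFIG_A": "y", "CONFIG_C": "y"},
    {"CONFIG_B": "y"},                  # conflicts with the first config
]

def compatible(c1, c2):
    return all(c1[k] == c2[k] for k in c1.keys() & c2.keys())

G = nx.Graph()
G.add_nodes_from(range(len(configs)))
G.add_edges_from((i, j) for i in range(len(configs))
                 for j in range(i + 1, len(configs))
                 if compatible(configs[i], configs[j]))

# Each maximal clique yields one merged configuration, i.e. one build.
for clique in nx.find_cliques(G):
    merged = {}
    for i in clique:
        merged.update(configs[i])
    print(sorted(clique), merged)
```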

Jangra, Ajay, Singh, Niharika, Lakhina, Upasana.  2016.  VIP: Verification and Identification Protective Data Handling Layer Implementation to Achieve MVCC in Cloud Computing. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :124:1–124:6.

In transactional database systems, multiversion concurrency control (MVCC) is maintained to provide secure, fast and efficient access to shared data. Effective coordination must be established between owners and users, as well as developers and system operators, to maintain inter-cloud and intra-cloud communication. Most services and applications offered in the cloud are real-time, which calls for an optimized, compatible service environment between master and slave clusters. The methodology offered in this paper supports replication and triggering methods intended for data consistency and dynamicity: intercommunication between different clusters is processed through middleware, while slave intra-communication is handled by verification and identification protection. The proposed approach incorporates a resistive flow to handle high-impact systems, identifying and verifying multiple processes. Results show that the new scheme reduces the overheads of the different master and slave servers, as they are co-located in clusters, which allows increased horizontal and vertical scalability of resources.
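
For readers unfamiliar with MVCC itself, the hedged toy below shows the basic versioned-read semantics the paper builds on: readers see the latest version committed at or before their snapshot, so reads do not block writes. It is generic MVCC, not the proposed VIP layer.

```python
# Toy illustration of generic MVCC semantics (not the paper's VIP layer):
# writers append new versions; readers see the latest version committed
# at or before their snapshot timestamp, so reads never block writes.
class MVCCStore:
    def __init__(self):
        self.versions = {}  # key -> list of (commit_ts, value)
        self.clock = 0

    def write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock

    def snapshot(self):
        return self.clock

    def read(self, key, snapshot_ts):
        visible = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("x", 1)
snap = store.snapshot()
store.write("x", 2)           # concurrent update after the snapshot
print(store.read("x", snap))  # 1: the reader still sees its snapshot
```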

Zhang, Zhengkui, Nielsen, Brian, Larsen, Kim G..  2016.  Time Optimal Reachability Analysis Using Swarm Verification. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :1634–1640.

Time optimal reachability analysis employs model-checking to compute goal states that can be reached from an initial state with a minimal accumulated time duration. The model-checker may produce a corresponding diagnostic trace, which can be interpreted as a feasible schedule for many scheduling and planning problems, response time optimization, etc. We propose swarm verification to accelerate time optimal reachability using the real-time model-checker Uppaal. In swarm verification, a large number of model checker instances execute in parallel on a computer cluster using different, typically randomized search strategies. We develop four swarm algorithms and evaluate them with four models in terms of scalability and time and memory consumption. Three of these cooperate by exchanging costs of intermediate solutions to prune the search using a branch-and-bound approach. Our results show that swarm algorithms work much faster than sequential algorithms, and especially the two using combinations of random depth-first and breadth-first search show very promising performance.
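
The cooperative branch-and-bound idea can be sketched as follows: several randomized depth-first searches share the best cost found so far and prune branches that already exceed it. The sketch runs the "workers" sequentially over a toy weighted graph; it illustrates only the pruning scheme, not Uppaal or the paper's algorithms.

```python
# Minimal sketch of the cooperative swarm idea (not Uppaal, not the paper's
# algorithms): randomized depth-first searches share the best cost found so
# far and prune any branch that already exceeds it (branch-and-bound).
import random

edges = {  # node -> list of (successor, duration)
    "s": [("a", 2), ("b", 5)],
    "a": [("goal", 7), ("b", 1)],
    "b": [("goal", 3)],
}

best = {"cost": float("inf")}   # shared bound between all swarm workers

def random_dfs(node, cost, rng):
    if cost >= best["cost"]:
        return                  # prune: this branch cannot improve the bound
    if node == "goal":
        best["cost"] = cost
        return
    successors = list(edges.get(node, []))
    rng.shuffle(successors)     # each worker explores in its own random order
    for succ, duration in successors:
        random_dfs(succ, cost + duration, rng)

for seed in range(4):           # four "workers", run sequentially here
    random_dfs("s", 0, random.Random(seed))
print(best["cost"])             # 6, via s -> a -> b -> goal
```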

Ricketts, Daniel, Malecha, Gregory, Lerner, Sorin.  2016.  Modular Deductive Verification of Sampled-data Systems. Proceedings of the 13th International Conference on Embedded Software. :17:1–17:10.

Unsafe behavior of cyber-physical systems can have disastrous consequences, motivating the need for formal verification of these kinds of systems. Deductive verification in a proof assistant such as Coq is a promising technique for this verification because it (1) justifies all verification from first principles, (2) is not limited to classes of systems for which full automation is possible, and (3) provides a platform for proving powerful, higher-order modularity theorems that are crucial for scaling verification to complex systems. In this paper, we demonstrate the practicality, utility, and scalability of this approach by developing in Coq sound and powerful rules for modular construction and verification of sampled-data cyber-physical systems. We evaluate these rules by using them to verify a number of non-trivial controllers enforcing safety properties of a quadcopter, e.g. a geo-fence. We show that our controllers are realistic by running them on a real, flying quadcopter.

Madi, Taous, Majumdar, Suryadipta, Wang, Yushun, Jarraya, Yosr, Pourzandi, Makan, Wang, Lingyu.  2016.  Auditing Security Compliance of the Virtualized Infrastructure in the Cloud: Application to OpenStack. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :195–206.

Cloud service providers typically adopt the multi-tenancy model to optimize resource usage and achieve the promised cost-effectiveness. Sharing resources between different tenants and the underlying complex technology increase the necessity of transparency and accountability. In this regard, auditing security compliance of the provider's infrastructure against standards, regulations and customers' policies takes on an increasing importance in the cloud to boost the trust between the stakeholders. However, virtualization and scalability make compliance verification challenging. In this work, we propose an automated framework that allows auditing the cloud infrastructure from the structural point of view while focusing on virtualization-related security properties and consistency between multiple control layers. Furthermore, to show the feasibility of our approach, we integrate our auditing system into OpenStack, one of the most used cloud infrastructure management systems. To show the scalability and validity of our framework, we present our experimental results on assessing several properties related to auditing inter-layer consistency, virtual machines co-residence, and virtual resources isolation.
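
One kind of structural property such an audit automates can be sketched as a simple cross-check over the virtualization layers, here flagging hosts where VMs of different tenants are co-resident. The data model (VM-to-host and VM-to-tenant maps) is invented for illustration and is not the paper's OpenStack integration.

```python
# Small sketch of one structural check of the kind such audits automate
# (illustrative data model, not the paper's OpenStack integration):
# report physical hosts where VMs of different tenants are co-resident.
from collections import defaultdict

vm_host = {"vm1": "host-a", "vm2": "host-a", "vm3": "host-b"}
vm_tenant = {"vm1": "tenant-1", "vm2": "tenant-2", "vm3": "tenant-1"}

tenants_per_host = defaultdict(set)
for vm, host in vm_host.items():
    tenants_per_host[host].add(vm_tenant[vm])

for host, tenants in tenants_per_host.items():
    if len(tenants) > 1:
        print(f"co-residence on {host}: {sorted(tenants)}")
# -> co-residence on host-a: ['tenant-1', 'tenant-2']
```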

Mechtaev, Sergey, Yi, Jooyong, Roychoudhury, Abhik.  2016.  Angelix: Scalable Multiline Program Patch Synthesis via Symbolic Analysis. Proceedings of the 38th International Conference on Software Engineering. :691–701.

Since debugging is a time-consuming activity, automated program repair tools such as GenProg have garnered interest. A recent study revealed that the majority of GenProg repairs avoid bugs simply by deleting functionality. We found that SPR, a state-of-the-art repair tool proposed in 2015, still deletes functionality in many of its "plausible" repairs. Unlike generate-and-validate systems such as GenProg and SPR, semantic-analysis-based repair techniques synthesize a repair based on semantic information of the program. While such semantics-based repair methods show promise in terms of quality of generated repairs, their scalability has been a concern so far. In this paper, we present Angelix, a novel semantics-based repair method that scales up to programs of a size similar to those handled by search-based repair tools such as GenProg and SPR. This shows that Angelix is more scalable than previously proposed semantics-based repair methods such as SemFix and DirectFix. Furthermore, our repair method can repair multiple buggy locations that are dependent on each other. Such repairs are hard to achieve using SPR and GenProg. In our experiments, Angelix generated repairs from large-scale real-world software such as wireshark and php, and these generated repairs include multi-location repairs. We also report our experience in automatically repairing the well-known Heartbleed vulnerability.

Papadopoulos, Georgios Z., Gallais, Antoine, Schreiner, Guillaume, Noël, Thomas.  2016.  Importance of Repeatable Setups for Reproducible Experimental Results in IoT. Proceedings of the 13th ACM Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, & Ubiquitous Networks. :51–59.

Performance analysis of newly designed solutions is essential for efficient Internet of Things and Wireless Sensor Network (WSN) deployments. Simulation and experimental evaluation practices are vital steps for the development process of protocols and applications for wireless technologies. Nowadays, new solutions can be tested at a very large scale over both simulators and testbeds. In this paper, we first discuss the importance of repeatable experimental setups for reproducible performance evaluation results. To this aim, we present FIT IoT-LAB, a very large-scale experimental testbed consisting of 2769 low-power wireless devices and 127 mobile robots. We then demonstrate, through a number of experiments conducted on the FIT IoT-LAB testbed, how to conduct meaningful experiments under real-world conditions. Finally, we discuss to what extent results obtained from experiments could be considered as scientific, i.e., reproducible by the community.

Lavanya, Natarajan.  2016.  Lightweight Authentication for COAP Based IOT. Proceedings of the 6th International Conference on the Internet of Things. :167–168.

Security of the Constrained Application Protocol (COAP), used instead of HTTP in the Internet of Things (IoT), is achieved using DTLS, which uses the Internet Key Exchange protocol for key exchange and management. In this work a novel key exchange and authentication protocol is proposed. The CLIKEv2 protocol is a certificate-less and lightweight version of the existing protocol. The protocol design is tested with the formal protocol verification tool Scyther, where no named attacks are identified for the proposed protocol. Compared to the existing IKE protocol, the CLIKEv2 protocol reduces computation time and key sizes, and ultimately reduces energy consumption.

Padon, Oded, Immerman, Neil, Shoham, Sharon, Karbyshev, Aleksandr, Sagiv, Mooly.  2016.  Decidability of Inferring Inductive Invariants. Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. :217–231.

Induction is a successful approach for verification of hardware and software systems. A common practice is to model a system using logical formulas, and then use a decision procedure to verify that some logical formula is an inductive safety invariant for the system. A key ingredient in this approach is coming up with the inductive invariant, which is known as invariant inference. This is a major difficulty, and it is often left for humans or addressed by sound but incomplete abstract interpretation. This paper is motivated by the problem of inductive invariants in shape analysis and in distributed protocols. This paper approaches the general problem of inferring first-order inductive invariants by restricting the language L of candidate invariants. Notice that the problem of invariant inference in a restricted language L differs from the safety problem, since a system may be safe and still not have any inductive invariant in L that proves safety. Clearly, if L is finite (and if testing an inductive invariant is decidable), then inferring invariants in L is decidable. This paper presents some interesting cases when inferring inductive invariants in L is decidable even when L is an infinite language of universal formulas. Decidability is obtained by restricting L and defining a suitable well-quasi-order on the state space. We also present some undecidability results that show that our restrictions are necessary. We further present a framework for systematically constructing infinite languages while keeping the invariant inference problem decidable. We illustrate our approach by showing the decidability of inferring invariants for programs manipulating linked-lists, and for distributed protocols.
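
The decidable building block the abstract mentions, testing whether a candidate invariant is inductive, can be sketched with an SMT solver. The toy transition system and the use of the Z3 Python API below are illustrative assumptions, unrelated to the paper's restricted invariant languages.

```python
# Tiny sketch of the decidable building block the abstract relies on:
# checking that a candidate invariant is inductive for a toy transition
# system, using the Z3 Python API (unrelated to the paper's language classes).
from z3 import Ints, Solver, Implies, And, Not, unsat

x, x_next = Ints("x x_next")
init = x == 0
trans = x_next == x + 2      # the counter increases by two each step
candidate = x >= 0           # candidate inductive invariant
candidate_next = x_next >= 0

def valid(formula):
    s = Solver()
    s.add(Not(formula))      # valid iff the negation is unsatisfiable
    return s.check() == unsat

print(valid(Implies(init, candidate)))                        # initiation
print(valid(Implies(And(candidate, trans), candidate_next)))  # consecution
```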

Kwon, Youngjin, Dunn, Alan M., Lee, Michael Z., Hofmann, Owen S., Xu, Yuanzhong, Witchel, Emmett.  2016.  Sego: Pervasive Trusted Metadata for Efficiently Verified Untrusted System Services. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :277–290.

Sego is a hypervisor-based system that gives strong privacy and integrity guarantees to trusted applications, even when the guest operating system is compromised or hostile. Sego verifies operating system services, like the file system, instead of replacing them. By associating trusted metadata with user data across all system devices, Sego verifies system services more efficiently than previous systems, especially services that depend on data contents. We extensively evaluate Sego's performance on real workloads and implement a kernel fault injector to validate Sego's file system-agnostic crash consistency and recovery protocol.

Gleissenthall, Klaus v., Bjørner, Nikolaj, Rybalchenko, Andrey.  2016.  Cardinalities and Universal Quantifiers for Verifying Parameterized Systems. Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation. :599–613.

Parallel and distributed systems rely on intricate protocols to manage shared resources and synchronize, i.e., to manage how many processes are in a particular state. Effective verification of such systems requires universal quantification to reason about parameterized state, and cardinalities to track sets of processes, messages, and failures, in order to adequately capture protocol logic. In this paper we present Tool, an automatic invariant synthesis method that integrates cardinality-based reasoning and universal quantification. The resulting increase of expressiveness allows Tool to verify, for the first time, a representative collection of intricate parameterized protocols.

Jebadurai, N. Immanuel, Gupta, Himanshu.  2016.  Automated Verification in Cryptography System. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :2:1–2:5.

Cryptographic protocols and algorithms are the strength of the digital era in which we are living. Unfortunately, the security of much confidential information and many credentials has been compromised due to ignorance of required security services. As a result, various attacks have been introduced by talented attackers, causing security issues such as financial loss, violations of personal privacy, and security threats to democracy. This research paper provides the secure design and architecture of cryptographic protocols and expedites the authentication of cryptographic systems. Designing and developing a secure cryptographic system is like a game in which the designer or developer tries to maintain the security while the attacker tries to penetrate the security features to perform a successful attack.

Mikami, Kei, Ando, Daisuke, Kaneko, Kunitake, Teraoka, Fumio.  2016.  Verification of a Multi-Domain Authentication and Authorization Infrastructure Yamata-no-Orochi. Proceedings of the 11th International Conference on Future Internet Technologies. :69–75.

Yamata-no-Orochi is an authentication and authorization infrastructure across multiple service domains and provides Internet services with unified authentication and authorization mechanisms. In this paper, Yamata-no-Orochi is incorporated into a video distribution system to verify its general versatility as a multi-domain authentication and authorization infrastructure for Internet services. This paper also reduces the authorization time of Yamata-no-Orochi to fulfill the processing time constraints of the video distribution system. The evaluation results show that all the authentication and authorization processes work correctly and the performance of Yamata-no-Orochi is practical for the video distribution system.

Woos, Doug, Wilcox, James R., Anton, Steve, Tatlock, Zachary, Ernst, Michael D., Anderson, Thomas.  2016.  Planning for Change in a Formal Verification of the Raft Consensus Protocol. Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs. :154–165.

We present the first formal verification of state machine safety for the Raft consensus protocol, a critical component of many distributed systems. We connected our proof to previous work to establish an end-to-end guarantee that our implementation provides linearizable state machine replication. This proof required iteratively discovering and proving 90 system invariants. Our verified implementation is extracted to OCaml and runs on real networks. The primary challenge we faced during the verification process was proof maintenance, since proving one invariant often required strengthening and updating other parts of our proof. To address this challenge, we propose a methodology of planning for change during verification. Our methodology adapts classical information hiding techniques to the context of proof assistants, factors out common invariant-strengthening patterns into custom induction principles, proves higher-order lemmas that show any property proved about a particular component implies analogous properties about related components, and makes proofs robust to change using structural tactics. We also discuss how our methodology may be applied to systems verification more broadly.

Kim, Woobin, Jin, Jungha, Kim, Keecheon.  2016.  A Routing Protocol Method That Sets Up Multi-hops in the Ad-hoc Network. Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. :70:1–70:6.

In infrastructure wireless network technology, communication between users is provided within a certain area supported by access points (APs) or base station communication networks, but in ad-hoc networks, communication between users is provided only through direct connections between nodes. Ad-hoc network technology supports mobility directly through routing algorithms. However, when a connected node is lost owing to the node's movement, the routing protocol transfers this traffic to another node. The routing table in the node that is receiving the traffic detects any changes that occur and manages them. This paper proposes a routing protocol method that sets up multi-hops in the ad-hoc network and verifies its performance, showing that it provides more effective connection persistence than existing methods.

Fournet, Cédric.  2016.  Verified Secure Implementations for the HTTPS Ecosystem: Invited Talk. Proceedings of the 2016 ACM Workshop on Programming Languages and Analysis for Security. :89–89.

The HTTPS ecosystem, including the SSL/TLS protocol, the X.509 public-key infrastructure, and their cryptographic libraries, is the standardized foundation of Internet Security. Despite 20 years of progress and extensions, however, its practical security remains controversial, as witnessed by recent efforts to improve its design and implementations, as well as recent disclosures of attacks against its deployments. The Everest project is a collaboration between Microsoft Research, INRIA, and the community at large that aims at modelling, programming, and verifying the main HTTPS components with strong machine-checked security guarantees, down to core system and cryptographic assumptions. Although HTTPS involves a relatively small amount of code, it requires efficient low-level programming and intricate proofs of functional correctness and security. To this end, we are also improving our verification tools (F*, Dafny, Lean, Z3) and developing new ones. In my talk, I will present our project, review our experience with miTLS, a verified reference implementation of TLS coded in F*, and describe current work towards verified, secure, efficient HTTPS.

Walfield, Neal H., Koch, Werner.  2016.  TOFU for OpenPGP. Proceedings of the 9th European Workshop on System Security. :2:1–2:6.

We present the design and implementation of a trust-on-first-use (TOFU) policy for OpenPGP. When an OpenPGP user verifies a signature, TOFU checks that the signer used the same key as in the past. If not, this is a strong indicator that a key is a forgery and either the message is also a forgery or an active man-in-the-middle attack (MitM) is or was underway. That is, TOFU can proactively detect new attacks if the user had previously verified a message from the signer. And, it can reactively detect an attack if the signer gets a message through. TOFU cannot, however, protect against sustained MitM attacks. Despite this weakness, TOFU's practical security is stronger than the Web of Trust (WoT), OpenPGP's current trust policy, for most users. The problem with the WoT is that it requires too much user support. TOFU is also better than the most popular alternative, an X.509-based PKI, which relies on central servers whose certification processes are often sloppy. In this paper, we outline how TOFU can be integrated into OpenPGP; we address a number of potential attacks against TOFU; and, we show how TOFU can work alongside the WoT. Our implementation demonstrates the practicality of the approach.
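
The bookkeeping behind TOFU can be sketched in a few lines: remember the key first seen for each signer and flag any later mismatch. This is a hedged illustration of the policy, not GnuPG's actual implementation; the identities and fingerprints are invented.

```python
# Minimal sketch of the TOFU bookkeeping described above (not GnuPG's
# actual implementation): remember the key fingerprint first seen for a
# signer and flag any later signature made with a different key.
bindings = {}  # signer identity (e.g. email address) -> key fingerprint

def check_tofu(signer, fingerprint):
    known = bindings.get(signer)
    if known is None:
        bindings[signer] = fingerprint   # trust on first use
        return "new binding recorded"
    if known == fingerprint:
        return "matches previously seen key"
    return "CONFLICT: possible forgery or man-in-the-middle"

print(check_tofu("alice@example.org", "AABB1122"))  # new binding recorded
print(check_tofu("alice@example.org", "AABB1122"))  # matches previously seen key
print(check_tofu("alice@example.org", "FFEE9900"))  # CONFLICT: ...
```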

Cangialosi, Frank, Chung, Taejoong, Choffnes, David, Levin, Dave, Maggs, Bruce M., Mislove, Alan, Wilson, Christo.  2016.  Measurement and Analysis of Private Key Sharing in the HTTPS Ecosystem. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :628–640.

The semantics of online authentication in the web are rather straightforward: if Alice has a certificate binding Bob's name to a public key, and if a remote entity can prove knowledge of Bob's private key, then (barring key compromise) that remote entity must be Bob. However, in reality, many websites, and the majority of the most popular ones, are hosted at least in part by third parties such as Content Delivery Networks (CDNs) or web hosting providers. Put simply: administrators of websites who deal with (extremely) sensitive user data are giving their private keys to third parties. Importantly, this sharing of keys is undetectable by most users, and widely unknown even among researchers. In this paper, we perform a large-scale measurement study of key sharing in today's web. We analyze the prevalence with which websites trust third-party hosting providers with their secret keys, as well as the impact that this trust has on responsible key management practices, such as revocation. Our results reveal that key sharing is extremely common, with a small handful of hosting providers having keys from the majority of the most popular websites. We also find that hosting providers often manage their customers' keys, and that they tend to react more slowly yet more thoroughly to compromised or potentially compromised keys.

2017-09-19
Ben Ujcich, University of Illinois at Urbana-Champaign.  2017.  Securing SDNs with App Provenance.

Presented at the UIUC/R2 Monthly Meeting on September 18, 2017.

Bogdan, Paul, Pande, Partha Pratim, Amrouch, Hussam, Shafique, Muhammad, Henkel, Jörg.  2016.  Power and Thermal Management in Massive Multicore Chips: Theoretical Foundation Meets Architectural Innovation and Resource Allocation. Proceedings of the International Conference on Compilers, Architectures and Synthesis for Embedded Systems. :4:1–4:2.

Continuing progress and integration levels in silicon technologies make possible complete end-user systems consisting of an extremely high number of cores on a single chip targeting either embedded or high-performance computing. However, without new paradigms of energy- and thermally-efficient designs, producing information and communication systems capable of meeting the computing, storage and communication demands of the emerging applications will be unlikely. The broad topic of power and thermal management of massive multicore chips is actively being pursued by a number of researchers worldwide, from a variety of different perspectives, ranging from workload modeling to efficient on-chip network infrastructure design to resource allocation. Successful solutions will likely adopt and encompass elements from all or at least several levels of abstraction. Starting from these ideas, we consider a holistic approach in establishing the Power-Thermal-Performance (PTP) trade-offs of massive multicore processors by considering three inter-related but varying angles, viz., on-chip traffic modeling, novel Networks-on-Chip (NoC) architecture and resource allocation/mapping. On-line workload (mathematical modeling, analysis and prediction) learning is fundamental for endowing the many-core platforms with self-optimizing capabilities [2][3]. This built-in intelligence capability of many-cores calls for monitoring the interactions between the set of running applications and the architectural (core and uncore) components, the online construction of mathematical models for the observed workloads, and determining the best resource allocation decisions given the limited amount of information about user-to-application-to-system dynamics. However, workload modeling is not a trivial task. Centralized approaches for analyzing and mining workloads can easily run into scalability issues with an increasing number of monitored processing elements and uncore (routers and interface queues) components, since they can either lead to significant traffic and energy overhead or require dedicated system infrastructure. In contrast, learning the most compact mathematical representation of the workload can be done in a distributed manner (within the proximity of the observation/sensing) as long as the mathematical techniques are flexible and exploit the mathematical characteristics of the workloads (degree of periodicity, degree of fractal and temporal scaling) [3]. As one can notice, this strategy does not postulate a-priori the mathematical expressions (e.g., a specific order of the autoregressive moving average (ARMA) model). Instead, the periodicity and fractality of the observed computation (e.g., instructions per cycle, last level cache misses, branch prediction successes and failures, TLB access/misses) and communication (request-reply latency, queues utilization, memory queuing delay) metrics dictate the number of coefficients, the linearity or nonlinearity of the dynamical state equations and the noise terms (e.g., Gaussian distributed) [3]. In other words, dedicated minimal logic can be allocated to interact with the local sensor to analyze the incoming workload at run-time, determine the required number of parameters and their values as a function of their characteristics and communicate only the workload model parameters to a hierarchical optimization module (autonomous control architecture).
For instance, capturing the fractal characteristics of the core and uncore workloads led to the development of a more efficient power management strategy [1] than those based on PID or model predictive control. In order to develop a compact and accurate mathematical framework for analyzing and modeling the incoming workload, we describe a general probabilistic approach that models the statistics of the increments in the magnitude of a stochastic process (associated with a specific workload metric) and the intervals of time (inter-event times) between successive changes in the stochastic process [3]. We show that the statistics of these two components of the stochastic process allow us to derive state equations and capture either short-range or long-range memory properties. To test the benefits of this new workload modeling approach, we describe its integration into a multi-fractal optimal control framework for solving the power management problem for a 64-core NoC-based manycore platform, and contrast it with mono-fractal and non-fractal schemes [3]. A scalable, low power, and high-bandwidth on-chip communication infrastructure is essential to sustain the predicted growth in the number of embedded cores in a single die. New interconnection fabrics are key for continued performance improvements and energy reduction of manycore chips, and an efficient and robust NoC architecture is one of the key steps towards achieving that goal. An NoC architecture that incorporates emerging interconnect paradigms will be an enabler for low-power, high-bandwidth manycore chips. Innovative interconnect paradigms based on optical technologies, RF/wireless methods, carbon nanotubes, or 3D integration are promising alternatives that may indeed overcome obstacles that impede continued advances of the manycore paradigm. These innovations will open new opportunities for research in NoC designs with emerging interconnect infrastructures. In this regard, wireless NoC (WiNoC) is a promising direction to design energy-efficient multicore architectures. WiNoC not only helps in improving energy efficiency and performance, it also opens up opportunities for implementing power management strategies. WiNoCs enable implementation of the two most popular power management mechanisms, viz., dynamic voltage and frequency scaling (DVFS) and voltage frequency island (VFI). The wireless links in the WiNoC establish one-hop shortcuts between the distant nodes and facilitate energy savings in data exchange [3]. The wireless shortcuts attract a significant amount of the overall traffic within the network. The amount of traffic detoured is substantial and the low power wireless links enable energy savings. However, the overall energy dissipation within the network is still dominated by the data traversing the wireline links. Hence, by incorporating DVFS on these wireline links we can save more energy. Moreover, by incorporating suitable congestion-aware routing with DVFS, we can avoid thermal hotspots in the system [4]. It should be noted that for large system sizes the hardware overhead in terms of on-chip voltage regulators and synchronizers is much higher for DVFS than for VFI. WiNoC-enabled VFI designs mitigate some of the full-system performance degradation inherent in VFI-partitioned multicore designs, and can even eliminate it entirely for certain applications [5].
The VFI-partitioned designs used in conjunction with a novel NoC architecture like WiNoC can achieve significant energy savings while minimizing the impact on the achievable performance. On-chip power density and temperature trends are continuously increasing due to the high integration density of nano-scale transistors and the failure of Dennard scaling as a result of diminishing voltage scaling. Hence, all computing is temperature-constrained computing, and it is therefore key to employ thermal management techniques that keep chip temperatures within safe limits, meet the constraints on spatial/temporal thermal gradients, and avoid wear-out effects [8]. We introduced the novel concept of Dark Silicon Patterning, i.e. spatio-temporal control of power states of different cores [9]. Sophisticated patterning and thread-to-core mapping decisions are made considering the knowledge of process variations and lateral heat dissipation of power-gated cores in order to enhance the performance of multi-threaded workloads through dynamic core count scaling (DCCS). This is enabled by a lightweight online prediction of the chip's thermal profile for a given patterning candidate. We also present an enhanced temperature-aware resource management technique that, besides active and dark states of cores, also exploits various grey states (i.e., using different voltage-frequency levels) in order to achieve high performance for mixed ILP-TLP workloads under peak temperature constraints. High ILP applications benefit from high V-f and boosting levels, while high TLP applications benefit from a larger number of active cores. As the scaling trends move from multi-core to many-core processors, the centralized solutions become infeasible, thereby requiring distributed techniques. In [6], we proposed an agent-based distributed temperature-aware resource management technique called TAPE. It assigns a so-called agent to each core, a software or hardware entity that acts on behalf of the core. Following the principles of economic theory, these agents negotiate with each other to trade their power budgets in order to fulfil the performance requirements of their tasks, while keeping T_peak ≤ T_critical. In case of thermal violations, task migration or V-f throttling is triggered, and a penalty is applied to the trading process to improve the decision making.

Bor, Martin C., Roedig, Utz, Voigt, Thiemo, Alonso, Juan M..  2016.  Do LoRa Low-Power Wide-Area Networks Scale? Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :59–67.

New Internet of Things (IoT) technologies such as Long Range (LoRa) are emerging that enable power-efficient wireless communication over very long distances. Devices typically communicate directly with a sink node, which removes the need to construct and maintain a complex multi-hop network. Given that a wide area is covered and that all devices communicate directly with a few sink nodes, a large number of nodes have to share the communication medium. For this reason, LoRa provides a range of communication options (centre frequency, spreading factor, bandwidth, coding rates) from which a transmitter can choose. Many combinations of settings are orthogonal and provide simultaneous collision-free communications. Nevertheless, there is a limit regarding the number of transmitters a LoRa system can support. In this paper we investigate the capacity limits of LoRa networks. Using experiments we develop models describing LoRa communication behaviour. We use these models to parameterise a LoRa simulation to study scalability. Our experiments show that a typical smart city deployment can support 120 nodes per 3.8 ha, which is not sufficient for future IoT deployments. LoRa networks can scale quite well, however, if they use dynamic communication parameter selection and/or multiple sinks.
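
The parameter trade-off driving these capacity limits can be made concrete with the commonly cited LoRa airtime formula from Semtech's SX127x documentation: higher spreading factors dramatically increase time on air and thus the chance of collisions. The sketch below implements that formula under stated default assumptions; it is not the paper's simulation model.

```python
# The spreading-factor/bandwidth/coding-rate trade-off behind these capacity
# limits, made concrete with the commonly cited airtime formula from Semtech's
# SX127x documentation (a sketch, not the paper's simulation models).
from math import ceil

def lora_airtime_ms(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                    preamble_symbols=8, explicit_header=True,
                    crc=True, low_dr_optimize=False):
    t_sym = (2 ** sf) / bw_hz * 1000                 # symbol duration in ms
    de = 1 if low_dr_optimize else 0
    ih = 0 if explicit_header else 1
    payload_symbols = 8 + max(
        ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
             / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return t_preamble + payload_symbols * t_sym

# A 20-byte payload stays on the air more than 20x longer at SF12 than at
# SF7, which is why dynamic parameter selection matters for scalability.
print(round(lora_airtime_ms(20, sf=7), 1))
print(round(lora_airtime_ms(20, sf=12, low_dr_optimize=True), 1))
```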