Biblio
We address the problem of stabilizing control for complex queueing systems with known parameters but an unobservable Markovian random environment. In such systems, the controller must assign servers to queues without full information about the servers' states. The control challenge is to devise a policy that matches servers to queues while taking state estimates into account, and the maximal attainable stability region is non-trivial to characterize. To handle these situations, we model the system under a given decision rule and exploit its Quasi-Birth-and-Death (QBD) structure to derive a matrix-analytic expression for the stability bound. We use this formulation to illustrate how the stability region grows as the number of controller belief states increases.
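As a minimal illustration of the matrix-analytic viewpoint, the sketch below checks the standard mean-drift stability condition of a QBD chain; the block matrices and rates are toy placeholders, not the belief-state model studied here.

```python
# Minimal sketch: mean-drift stability test for a QBD chain (toy blocks).
import numpy as np

def qbd_is_stable(A0, A1, A2):
    """True iff pi @ A0 @ 1 < pi @ A2 @ 1, where pi is the stationary
    vector of the phase generator A = A0 + A1 + A2."""
    A = A0 + A1 + A2
    n = A.shape[0]
    # Solve pi @ A = 0 with the normalization pi @ 1 = 1 (least squares).
    M = np.vstack([A.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, b, rcond=None)
    return pi @ A0.sum(axis=1) < pi @ A2.sum(axis=1)

# Toy two-phase example with arbitrary rates.
lam, mu = 0.6, 1.0
A0 = np.diag([lam, lam])                # arrivals: one level up
A2 = np.diag([mu, 0.5 * mu])            # services: one level down
S = np.array([[0.0, 0.1], [0.2, 0.0]])  # phase switches within a level
A1 = S - np.diag((A0 + A2 + S).sum(axis=1))
print(qbd_is_stable(A0, A1, A2))        # True: downward drift dominates
```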
An increasing number of applications in all aspects of society rely on data. Despite the long line of research in data cleaning and repairs, data correctness has been an elusive goal. Errors in the data can be extremely disruptive, and are detrimental to the effectiveness and proper function of data-driven applications. Even when data is cleaned, new errors can be introduced by applications and users who interact with the data. Subsequent valid updates can obscure these errors and propagate them through the dataset causing more discrepancies. Any discovered errors tend to be corrected superficially, on a case-by-case basis, further obscuring the true underlying cause, and making detection of the remaining errors harder. In this demo proposal, we outline the design of QFix, a query-centric framework that derives explanations and repairs for discrepancies in relational data based on potential errors in the queries that operated on the data. This is a marked departure from traditional data-centric techniques that directly fix the data. We then describe how users will use QFix in a demonstration scenario. Participants will be able to select from a number of transactional benchmarks, introduce errors into the queries that are executed, and compare the fixes to the queries proposed by QFix as well as existing alternative algorithms such as decision trees.
With the growing understanding of information security and digital assets, network admission control (NAC) has taken on tremendous importance in IT. In the NAC architecture, admission decisions and resource reservations are made at edge devices rather than at individual routers or resources within the network. The NAC architecture enables resilient resource reservation, maintaining reservations even after failures and intra-domain rerouting. The fate of admission control in IP networks rests on meeting the security and Quality of Service (QoS) demands of real-time multimedia applications via advance resource reservation techniques. To meet these demands in real-time networks, an admission control algorithm decides whether a new traffic flow can be admitted to the network or not. Securely allocating peers for multimedia traffic flows with the required performance is a major challenge in resource reservation schemes. In this paper, we propose a model for VoIP networks that provides security services along with QoS, with admission control decisions made at edge routers. We analyze and argue that measurement-based admission control should be performed at the edge routers, employing on-demand probing in parallel from both edge routers to secure the source and destination nodes, respectively. To achieve security and QoS for a new call, we choose different probe packet sizes for voice and video calls. We also propose an admission control algorithm that selects the admission control threshold in a security-aware manner. All results are obtained from NS2-based simulations that evaluate the network performance of edge-router-based network admission control for VoIP traffic.
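As a hedged sketch of the decision such an algorithm makes, the snippet below admits a new flow only if the measured load plus the flow's peak rate stays under a utilisation threshold; the probe sizes, threshold, and link capacity are illustrative assumptions rather than values from the paper.

```python
# Illustrative measurement-based admission test at an edge router.
PROBE_BYTES = {"voice": 160, "video": 1200}   # assumed probe packet sizes

def admit(measured_load_bps, new_flow_peak_bps, link_capacity_bps,
          utilisation_threshold=0.85):
    """Admit the new flow only if measured load plus its peak rate stays
    below a fraction of the link capacity."""
    return (measured_load_bps + new_flow_peak_bps
            <= utilisation_threshold * link_capacity_bps)

# Example: a 64 kbit/s voice call on a 10 Mbit/s edge link carrying 8 Mbit/s.
print(admit(8_000_000, 64_000, 10_000_000))   # True -> admit
```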
We study the cooperative spectrum leasing process between a primary user (PU) and a secondary user (SU) in a cognitive radio network under the overlay approach and the decode-and-forward (DF) cooperative protocol. Considering the Quality of Service (QoS) provisioning of both users, which participate in a three-phase leasing process, we investigate the maximization of the PU's effective capacity subject to an average energy constraint for the SU under a heuristic power and time allocation mechanism. The proposed scheme builds on basic concepts of convex optimization theory and, as shown by simulations, outperforms a baseline allocation mechanism. Finally, important remarks on the PU's and the SU's performance are drawn for different system parameters.
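For readers unfamiliar with the objective, the sketch below estimates an effective-capacity quantity of the usual form EC(θ) = -(1/θ) ln E[e^{-θR}] by Monte Carlo over Rayleigh-faded rates; the SNR, QoS exponents, and fading model are illustrative assumptions, not the paper's three-phase leasing setup.

```python
# Monte Carlo estimate of effective capacity under Rayleigh fading (toy setup).
import numpy as np

rng = np.random.default_rng(0)

def effective_capacity(theta, snr, n=200_000):
    """EC(theta) for rate R = log2(1 + snr * |h|^2) over random fading blocks."""
    h2 = rng.exponential(1.0, n)            # Rayleigh fading power gains
    rates = np.log2(1.0 + snr * h2)         # bits/s/Hz per block
    return -np.log(np.mean(np.exp(-theta * rates))) / theta

for theta in (0.01, 1.0, 5.0):              # stricter delay-QoS as theta grows
    print(theta, round(effective_capacity(theta, snr=10.0), 3))
```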
We propose a new view on data cleaning: not the data itself but the degrees of uncertainty attributed to the data are dirty. Applying possibility theory, tuples are assigned degrees of possibility with which they occur, and constraints are assigned degrees of certainty that say to which tuples they apply. Classical data cleaning modifies some minimal set of tuples; instead, we marginally reduce their degrees of possibility. This reduction leads to a new qualitative version of the vertex cover problem. Qualitative vertex cover can be mapped to a linear-weighted constraint satisfaction problem; however, off-the-shelf solvers cannot solve it more efficiently than classical vertex cover. Instead, we utilize the degrees of possibility and certainty to develop a dedicated algorithm that is fixed-parameter tractable in the size of the qualitative vertex cover. Experiments show that our algorithm is faster than solvers for the classical vertex cover problem by several orders of magnitude, and performance improves with higher numbers of uncertainty degrees.
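To make the fixed-parameter angle concrete, here is the textbook bounded-search-tree routine for classical vertex cover (FPT in the cover size k) that the qualitative variant builds on; it is only the standard skeleton, not the dedicated algorithm developed in the paper.

```python
# Classical bounded-search-tree vertex cover: branch on an endpoint of an edge.
def vertex_cover(edges, k):
    """Return a cover of size <= k as a set, or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for pick in (u, v):                     # branch on either endpoint
        rest = [(a, b) for (a, b) in edges if pick not in (a, b)]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

print(vertex_cover([(1, 2), (2, 3), (3, 4)], 2))   # a size-2 cover, e.g. {1, 3}
```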
A number of small Open Source projects let independent providers measure different aspects of their quality that would otherwise be hard to see. This paper describes this observation as the pattern Quality Attestation. Quality Attestation belongs to a family of Open Source patterns written by various authors.
In recent years, the silicon-based Physical Unclonable Function (PUF) has evolved into one of the popular hardware security primitives. PUFs are a class of circuits that use the inherent variations of the Integrated Circuit (IC) manufacturing process to create unique and unclonable IDs. PUFs have various security-related applications, such as IC counterfeiting and piracy detection and secure key management. In this paper, we present a novel QUasi-Adiabatic Logic based PUF (QUALPUF) designed using an energy recovery technique. To the best of our knowledge, this is the first work on the hardware design of a PUF using adiabatic logic. The proposed design is energy-efficient compared to recent hardware PUF designs proposed in the literature. Further, we propose a novel bit-extraction method for our PUF that enlarges the space of challenge-response pairs. QUALPUF is evaluated on security metrics including reliability, uniqueness, uniformity, and bit-aliasing. The power and area of QUALPUF are also presented. SPICE simulations show that QUALPUF consumes 0.39 μW of power to generate a response bit.
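For reference, the quality metrics named above have standard definitions; the sketch below computes uniformity, uniqueness (average inter-chip Hamming distance), and bit-aliasing on synthetic response bits rather than on QUALPUF data.

```python
# Standard PUF quality metrics on synthetic response bits (not QUALPUF data).
import numpy as np

def uniformity(responses):
    """Per-chip fraction of 1s in the response (ideal: 0.5)."""
    return responses.mean(axis=1)

def uniqueness(responses):
    """Average pairwise inter-chip Hamming distance, normalized (ideal: 0.5)."""
    k = responses.shape[0]
    dists = [np.mean(responses[i] != responses[j])
             for i in range(k) for j in range(i + 1, k)]
    return float(np.mean(dists))

def bit_aliasing(responses):
    """Per-bit fraction of chips that output 1 (ideal: 0.5 per bit)."""
    return responses.mean(axis=0)

rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(8, 64))        # 8 chips x 64-bit responses
print(round(uniqueness(R), 3), round(uniformity(R).mean(), 3))
```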
We propose a multi-level CSI quantization and key reconciliation scheme for physical-layer security. The noisy wireless channel estimates obtained by the users first run through a transformation prior to the quantization step. This enables the definition of guard bands around the quantization boundaries, tailored for a specific efficiency without compromising the uniformity required at the output of the quantizer. Our construction yields a better trade-off between key disagreement and initial key generation rate compared to other level-crossing quantization methods.
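A minimal sketch of the guard-band idea follows: samples of the (already transformed) channel estimate are mapped to multi-level symbols, and samples falling too close to an interior quantization boundary are dropped. The number of levels and the guard width are illustrative, and the transformation and reconciliation steps are omitted.

```python
# Multi-level quantization with guard bands around interior boundaries.
import numpy as np

def quantize_with_guard(x, levels=4, guard=0.1):
    """Map samples in [0, 1) to one of `levels` symbols; samples within
    `guard` of an interior boundary are discarded (marked as -1)."""
    edges = np.linspace(0, 1, levels + 1)
    sym = np.clip(np.digitize(x, edges) - 1, 0, levels - 1)
    near = np.min(np.abs(x[:, None] - edges[None, 1:-1]), axis=1) < guard
    sym[near] = -1                          # dropped: too close to a boundary
    return sym

x = np.random.default_rng(1).uniform(0, 1, 10)   # stand-in for transformed CSI
print(quantize_with_guard(x))
```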
Context: While successful conventional software development regularly employs separate testing staff, there are successful agile teams both with and without separate testers. Question: How does successful agile development work without separate testers? What are the advantages and disadvantages? Method: A case study based on Grounded Theory evaluation of interviews and direct observation of three agile teams: one with separate testers, two without. All teams perform long-term development of parts of e-business web portals. Results: Teams without testers use a quality experience work mode centered around a tight field-use feedback loop, driven by a feeling of responsibility, supported by test automation, and resulting in frequent deployments. Conclusion: In the given domain, hand-overs to separate testers appear to hamper the feedback loop more than they contribute to quality, so working without testers is preferred. However, Quality Experience is achievable only with modular architectures and in suitable domains.
Quality of service (QoS) has been considered a significant criterion for querying among functionally similar web services. Most research focuses on QoS search under certain (deterministic) data, which may not cover some practical scenarios, and recent approaches for uncertain QoS of web services deal with discrete data domains. In this paper, we address QoS search under continuous probability distributions. We define two kinds of queries under uncertain QoS and develop optimization approaches for specific distributions. Based on these, the search is extended to general cases. Experiments show the feasibility of the proposed methods.
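As a small example of querying under continuous uncertainty, the sketch below ranks services by the probability that a Gaussian-modelled latency stays below a target; the distributions and the threshold are illustrative assumptions, not the paper's query types.

```python
# Rank services by P(latency <= target) under a Gaussian latency model.
from math import erf, sqrt

def prob_below(mean, std, threshold):
    """P(latency <= threshold) for a Normal(mean, std) latency model."""
    return 0.5 * (1 + erf((threshold - mean) / (std * sqrt(2))))

services = {"s1": (120.0, 15.0), "s2": (110.0, 40.0), "s3": (130.0, 5.0)}  # ms
ranked = sorted(services, key=lambda s: prob_below(*services[s], 125.0),
                reverse=True)
print(ranked)   # services ordered by probability of meeting the 125 ms target
```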
The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
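As a toy illustration of the two capabilities, the sketch below scores a surface by summed exposure and cost and flags interactions the mission does not require; the interaction records and weights are invented, not the paper's semantic models.

```python
# Toy attack-surface scoring and minimization check (illustrative data only).
surface = [
    # (interaction, exposure score, performance cost, required for mission?)
    ("ssh-admin",      0.8, 0.1, False),
    ("https-api",      0.4, 0.2, True),
    ("telemetry-feed", 0.3, 0.1, True),
    ("legacy-ftp",     0.9, 0.0, False),
]

def score(surface):
    """Aggregate exposure and performance-cost metrics for one surface."""
    return (sum(e for _, e, _, _ in surface), sum(c for _, _, c, _ in surface))

def removable(surface):
    """Interactions that could be removed without affecting the mission."""
    return [name for name, _, _, needed in surface if not needed]

print(score(surface), removable(surface))  # (2.4, 0.4) ['ssh-admin', 'legacy-ftp']
```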
We study a quantity-flexibility supply contract between a manufacturer and a retailer over two periods. The retailer obtains a low wholesale price within a fixed quantity and can adjust the order quantity at the end of the first period based on updated inventory status, paying a higher per-unit price for incremental units or receiving a per-unit buyback price for returned units. By developing a two-period dynamic programming model, we first obtain the retailer's optimal replenishment strategy when the manufacturer's price scheme is known. We then derive a proper pricing scheme for the manufacturer by assuming that the supply chain is coordinated. Numerical results provide managerial insights by comparing this coordination scheme with a Stackelberg game.
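The second-stage adjustment can be sketched as a small search over order changes, as below; all prices and the discrete uniform demand model are illustrative assumptions, not the contract parameters analysed in the paper.

```python
# Toy second-period adjustment under a quantity-flexibility contract.
import numpy as np

p, w_plus, b, salvage = 10.0, 7.5, 5.0, 1.0    # sell, premium, buyback, salvage
d2_vals = np.arange(0, 41)                     # possible period-2 demand
d2_prob = np.full(41, 1 / 41)                  # uniform demand (assumption)

def period2_profit(stock, delta):
    """Expected period-2 profit for post-adjustment stock, where delta > 0
    buys extra units at w_plus and delta < 0 returns units at price b."""
    adjust_cost = w_plus * max(delta, 0) - b * max(-delta, 0)
    sales = np.minimum(stock, d2_vals)
    leftover = stock - sales
    return float(np.sum(d2_prob * (p * sales + salvage * leftover))) - adjust_cost

def best_adjustment(leftover_units, max_extra=40):
    """Adjustment quantity that maximizes expected period-2 profit."""
    choices = range(-leftover_units, max_extra + 1)
    return max(choices, key=lambda d: period2_profit(leftover_units + d, d))

print(best_adjustment(5))    # optimal adjustment when 5 units remain
```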
Sampling multiband radar signals is an essential issue in multiband/multifunction radar. This paper proposes a multiband quadrature compressive sampling (MQCS) system that performs the sampling at a sub-Landau rate. The MQCS system randomly projects the multiband signal into a compressive multiband one by modulating each subband signal with a low-pass signal, and then samples the compressive multiband signal at the Landau rate to output compressive measurements. The compressive in-phase and quadrature (I/Q) components of each subband are extracted from the compressive measurements and exploited to recover the baseband I/Q components. As the effective bandwidth of the compressive multiband signal is much less than that of the received multiband signal, the sampling rate is far below the Landau rate of the received signal. Simulation results validate that the proposed MQCS system can effectively acquire and reconstruct the baseband I/Q components of multiband signals.
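The snippet below sketches only the final quadrature (I/Q) extraction stage, mixing a bandpass signal down and low-pass filtering to recover I and Q; the compressive random modulation and recovery stages are omitted, and the carrier, rates, and crude moving-average filter are assumptions for illustration.

```python
# Baseband I/Q extraction from a bandpass signal (toy parameters).
import numpy as np

fs, fc = 1_000_000.0, 100_000.0                  # sample rate, carrier
t = np.arange(0, 0.002, 1 / fs)
i_bb = np.cos(2 * np.pi * 1_000 * t)             # baseband I component
q_bb = np.sin(2 * np.pi * 2_000 * t)             # baseband Q component
x = i_bb * np.cos(2 * np.pi * fc * t) - q_bb * np.sin(2 * np.pi * fc * t)

def lowpass(v, taps=101):
    """Crude moving-average low-pass filter."""
    return np.convolve(v, np.ones(taps) / taps, mode="same")

i_hat = 2 * lowpass(x * np.cos(2 * np.pi * fc * t))
q_hat = -2 * lowpass(x * np.sin(2 * np.pi * fc * t))
print(np.max(np.abs(i_hat[200:-200] - i_bb[200:-200])) < 0.1)   # True
```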
In this paper we present a framework for Quality of Information (QoI)-aware networking. QoI quantifies how useful a piece of information is for a given query or application. We present a general QoI model, as well as a specific example instantiation that carries through the rest of the paper. In this model, we focus on the trade-offs between precision and accuracy. As a motivating example, we consider traffic video analysis. We present simple algorithms for deriving various traffic metrics from video, such as vehicle count and average speed, and implement them on both a desktop workstation and a less-capable mobile device. We then show how QoI-awareness enables end devices to make intelligent decisions about how to process queries and form responses, realizing large bandwidth savings.
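A QoI-aware response decision can be as simple as the sketch below: given a query's required accuracy and deadline, the end device picks the cheapest processing option that still meets them. The option table, accuracies, and sizes are invented for illustration.

```python
# Pick the smallest-bandwidth response option meeting accuracy and deadline.
OPTIONS = [
    # (name, achievable accuracy, response size in bytes), ordered by size
    ("on-device vehicle count", 0.85, 200),
    ("compressed clip + server analysis", 0.93, 2_000_000),
    ("raw video upload", 0.98, 50_000_000),
]

def choose_response(required_accuracy, bandwidth_bps, deadline_s):
    """Return the first (smallest) option meeting the accuracy and deadline."""
    for name, acc, size in OPTIONS:
        if acc >= required_accuracy and size * 8 / bandwidth_bps <= deadline_s:
            return name
    return None                              # no feasible option

print(choose_response(0.9, 5_000_000, 10))   # -> compressed clip option
```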
Consider a thin, flexible wire of fixed length that is held at each end by a robotic gripper. Any curve traced by this wire when in static equilibrium is a local solution to a geometric optimal control problem, with boundary conditions that vary with the position and orientation of each gripper. We prove that the set of all local solutions to this problem over all possible boundary conditions is a smooth manifold of finite dimension that can be parameterized by a single chart. We show that this chart makes it easy to implement a sampling-based algorithm for quasi-static manipulation planning. We characterize the performance of such an algorithm with experiments in simulation.
Quasi-steady-state (QSS) large-signal models are often taken for granted in the analysis and design of DC-DC switching converters, particularly under varying operating conditions. In this study, the premise of the QSS model is justified quantitatively for the first time. Based on the QSS, the DC-DC switching converter under varying operating conditions is reduced to a linear time-varying system model. The QSS concept is then applied to the analysis of the frequency-domain properties of DC-DC switching converters using three-dimensional Bode plots, which are in turn utilised to optimise the controller parameters for wide variations of input voltage and load resistance. An experimental prototype of an average-current-mode-controlled boost DC-DC converter is built to verify the analysis and design with both frequency-domain and time-domain measurements.
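To hint at what a "three-dimensional" Bode view contains, the sketch below evaluates the textbook duty-to-output transfer function of an ideal CCM boost converter over a grid of operating points; this is not the paper's QSS model or its average-current-mode loop, and the component values are assumptions.

```python
# Gain of a textbook ideal CCM boost duty-to-output transfer function,
# swept over operating points (input voltage) and frequency.
import numpy as np

L_ind, C_out, V_out = 100e-6, 220e-6, 24.0   # assumed inductor, capacitor, output

def gain_db(f, V_in, R_load):
    """Magnitude (dB) of G_vd(s) at frequency f for one operating point."""
    Dp = V_in / V_out                        # D' = 1 - D for an ideal boost
    s = 2j * np.pi * f
    num = (V_out / Dp) * (1 - s * L_ind / (Dp**2 * R_load))
    den = 1 + s * L_ind / (Dp**2 * R_load) + s**2 * L_ind * C_out / Dp**2
    return 20 * np.log10(np.abs(num / den))

freqs = np.logspace(1, 4, 4)                 # 10 Hz .. 10 kHz
for V_in in (8.0, 12.0, 16.0):               # operating-point sweep
    print(V_in, [round(gain_db(f, V_in, R_load=20.0), 1) for f in freqs])
```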
Resilience in the information sciences is notoriously difficult to define, much less to measure. In mechanical engineering, however, the resilience of a substance is mathematically well-defined as an area under the stress-strain curve. We combine inspiration from the mechanics of materials with axioms from queuing theory in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in linguistic and engineering terms and then translate these definitions to the information sciences. As a general assessment of our approach's fitness, we quantify how resilience may be measured in a simple queuing system. Using a very simple model allows clear application of established theory while remaining flexible enough to apply to many other engineering contexts in information science and cyber security. We test our definitions of resilience via simulation and analysis of networked queuing systems, and conclude with a discussion of the results and recommendations for future work.
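A minimal version of the proposed measurement is sketched below: resilience is taken as the normalized area under a performance curve across a disruption, by analogy with the area under a stress-strain curve. The synthetic throughput trace and disruption window are assumptions, not outputs of the paper's queuing models.

```python
# Resilience as normalized area under a performance curve (synthetic trace).
import numpy as np

t = np.linspace(0, 100, 1001)                # time (s)
nominal = 1.0                                # nominal service level
perf = np.where((t >= 30) & (t < 50),        # disruption from t = 30 s to 50 s
                0.4 + 0.03 * (t - 30),       # gradual recovery during it
                nominal)
perf = np.minimum(perf, nominal)

resilience = float(np.mean(perf / nominal))  # normalized area under the curve
print(round(resilience, 3))                  # 1.0 would mean no performance loss
```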