Biblio
The mainstream approach to protecting the privacy of mobile users in location-based services (LBSs) is to alter (e.g., perturb, hide, and so on) the users’ actual locations in order to reduce exposed sensitive information. In order to be effective, a location-privacy preserving mechanism must consider both the privacy and utility requirements of each user, as well as the user’s overall exposed locations (which contribute to the adversary’s background knowledge). In this article, we propose a methodology that enables the design of optimal user-centric location obfuscation mechanisms respecting each individual user’s service quality requirements, while maximizing the expected error that the optimal adversary incurs in reconstructing the user’s actual trace. A key advantage of a user-centric mechanism is that it does not depend on third-party proxies or anonymizers; thus, it can be directly integrated in the mobile devices that users employ to access LBSs. Our methodology is based on the mutual optimization of user/adversary objectives (maximizing location privacy versus minimizing localization error) formalized as a Stackelberg Bayesian game. This formalization makes our solution robust against any location inference attack, that is, the adversary cannot decrease the user’s privacy by designing a better inference algorithm as long as the obfuscation mechanism is designed according to our privacy games. We develop two linear programs that solve the location privacy game and output the optimal obfuscation strategy and its corresponding optimal inference attack. These linear programs are used to design location privacy–preserving mechanisms that consider the correlation between past, current, and future locations of the user, thus can be tuned to protect different privacy objectives along the user’s location trace. We illustrate the efficacy of the optimal location privacy–preserving mechanisms obtained with our approach against real location traces, showing their performance in protecting users’ different location privacy objectives.
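The adversary-aware obfuscation LP described above can be illustrated concretely. Below is a minimal sketch in Python (with scipy, a toy one-dimensional location set, a uniform prior, and an assumed quality bound, none of which come from the paper): variables are the obfuscation probabilities x(o|r) plus one auxiliary z_o per observable, and the constraints z_o <= sum_r pi(r) x(o|r) d(r_hat, r) for every adversary guess r_hat encode the adversary's best response, so maximizing sum_o z_o yields the user's optimal leader strategy in the Stackelberg game.

```python
# Minimal sketch of the user's side of the Stackelberg location-privacy game
# as a linear program. Locations, prior, and quality bound are toy assumptions.
import numpy as np
from scipy.optimize import linprog

locs = np.array([0.0, 1.0, 2.0])           # toy 1-D location set (assumption)
n = len(locs)
pi = np.full(n, 1.0 / n)                   # prior over true locations
d = np.abs(locs[:, None] - locs[None, :])  # distance matrix d[r_hat, r]
Q = 0.8                                    # tolerated quality loss (assumption)

nx = n * n                                 # x[r][o], flattened row-major
c = np.concatenate([np.zeros(nx), -np.ones(n)])  # maximize sum_o z_o

A_ub, b_ub = [], []
for o in range(n):
    for rh in range(n):                    # adversary best-response constraints
        row = np.zeros(nx + n)
        for r in range(n):
            row[r * n + o] = -pi[r] * d[rh, r]
        row[nx + o] = 1.0                  # z_o <= sum_r pi_r x[r][o] d(rh, r)
        A_ub.append(row); b_ub.append(0.0)
row = np.zeros(nx + n)                     # expected service-quality constraint
for r in range(n):
    for o in range(n):
        row[r * n + o] = pi[r] * d[r, o]
A_ub.append(row); b_ub.append(Q)

A_eq = np.zeros((n, nx + n))               # each x[r][:] is a distribution
for r in range(n):
    A_eq[r, r * n:(r + 1) * n] = 1.0
b_eq = np.ones(n)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (nx + n), method="highs")
print("expected adversary error:", -res.fun)
print("obfuscation matrix x(o|r):\n", res.x[:nx].reshape(n, n).round(3))
```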
Enterprise networks today have highly diverse correctness requirements and relatively common performance objectives. As a result, preferred abstractions for enterprise networks are those which allow matching security and correctness specifications, while transparently managing performance. Existing SDN network management architectures, however, bundle correctness and performance as a single abstraction. We argue that this creates an SDN ecosystem that is unnecessarily hard to build, maintain and evolve. We advocate a separation of the diverse correctness abstractions from generic performance optimization, to enable easier evolution of SDN controllers and platforms. We propose Oreo, a first step towards a common and relatively transparent performance optimization layer for SDN. Oreo performs the optimization by first building a model that describes every flow in the network, and then performing network-wide, multi-objective optimization based on this model without disrupting higher level security and correctness.
Authors: Santhosh Prabhu, Mo Dong, Tong Meng, P. Brighten Godfrey, and Matthew Caesar
In this paper, an optimization model of an automobile supply chain network with risk under fuzzy prices is put forward. The supply chain network is composed of component suppliers, plants, and distribution centers. The total costs of the automobile supply chain consist of variable costs, fixed costs, and transportation costs. The objective of this study is to minimize the risk of the total profit. To handle this model, the paper puts forward an approximation method that transforms the continuous fuzzy problem into a discrete fuzzy problem. The model is solved using CPLEX 12.6. The results show that CPLEX 12.6 solves this model effectively, that the expected value and lower semi-variance of the total profit converge as the number of discretization points increases, and that the structure of the automobile supply chain network remains unchanged as the number of discretization points increases.
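To make the discretization idea concrete, here is a minimal sketch (assuming a triangular fuzzy price and simple membership-weighted statistics, which stand in for the paper's unspecified fuzzy operators) showing the expected value and lower semi-variance stabilizing as the number of discretization points grows:

```python
# Membership-weighted discretization of a triangular fuzzy price (toy values).
# The two statistics converge as the number of discretization points grows.
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return np.where(x <= b, (x - a) / (b - a), (c - x) / (c - b)).clip(0.0)

def discretized_stats(n_points, a=8.0, b=10.0, c=13.0):
    xs = np.linspace(a, c, n_points)
    w = tri_membership(xs, a, b, c)
    w = w / w.sum()                        # normalize weights to a distribution
    ev = float(np.dot(w, xs))              # expected value
    lsv = float(np.dot(w, np.minimum(xs - ev, 0.0) ** 2))  # lower semi-variance
    return ev, lsv

for n in (5, 20, 100, 1000):
    ev, lsv = discretized_stats(n)
    print(f"N={n:5d}  E={ev:.4f}  LSV={lsv:.4f}")
```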
Next-generation cellular networks will provide users with better experiences by densely deploying smaller cells, which results in a more complicated interference environment. For interference coordination, uplink power control is particularly challenging due to the random locations of uplink transmitters and the dense deployment. In this paper, we address the uplink fractional power control (FPC) optimization problem from a network optimization perspective. The relations between FPC parameters and network KPIs (Key Performance Indicators) are investigated. Rather than considering a single KPI as in conventional approaches, a multi-KPI optimization problem is formulated and solved. By relaxing the discrete optimization problem to a continuous one, the gradients of multiple KPIs with respect to the FPC parameters are derived. These gradients enable an efficient search for optimized FPC parameters, which is particularly desirable for dense deployments with a large number of cells. Simulation results show that the proposed scheme greatly outperforms the traditional one in terms of network mean load, call drop and block ratio, and convergence speed.
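As a toy illustration of the relax-then-ascend idea, the sketch below tunes the standard FPC parameters (P0, alpha) in P = min(Pmax, P0 + alpha * PL) by numerical gradient ascent on an assumed multi-KPI objective (mean SINR minus a transmit-power penalty); the KPI model and all constants are illustrative assumptions, not the paper's.

```python
# Toy gradient-based tuning of the fractional power control parameters.
import numpy as np

rng = np.random.default_rng(0)
PL = rng.uniform(70, 120, size=200)        # path losses of 200 toy users (dB)
PMAX, NOISE_DBM = 23.0, -100.0

def objective(theta):
    p0, alpha = theta
    p_tx = np.minimum(PMAX, p0 + alpha * PL)   # FPC rule (dBm)
    p_rx = p_tx - PL                           # received power (dBm)
    # crude network-wide interference estimate (assumption): mean rx power -10 dB
    interf = 10 * np.log10(np.mean(10 ** (p_rx / 10))) - 10.0
    sinr = p_rx - 10 * np.log10(10 ** (interf / 10) + 10 ** (NOISE_DBM / 10))
    return sinr.mean() - 0.5 * max(p_tx.mean() - 15.0, 0.0)  # KPI trade-off

theta = np.array([-80.0, 0.8])             # initial (P0, alpha)
for it in range(200):                      # numerical-gradient ascent
    grad = np.zeros(2)
    for i, h in enumerate((0.1, 0.01)):
        e = np.zeros(2); e[i] = h
        grad[i] = (objective(theta + e) - objective(theta - e)) / (2 * h)
    theta += np.array([0.5, 0.005]) * grad
print("tuned P0=%.1f dBm, alpha=%.2f, objective=%.2f"
      % (theta[0], theta[1], objective(theta)))
```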
We consider a class of robust optimization problems that we call “robust-to-dynamics optimization” (RDO). The input to an RDO problem is twofold: (i) a mathematical program (e.g., an LP, SDP, IP, etc.), and (ii) a dynamical system (e.g., a linear, nonlinear, discrete, or continuous dynamics). The objective is to maximize over the set of initial conditions that forever remain feasible under the dynamics. The focus of this paper is on the case where the optimization problem is a linear program and the dynamics are linear. We establish some structural properties of the feasible set and prove that if the linear system is asymptotically stable, then the RDO problem can be solved in polynomial time. We also outline a semidefinite programming based algorithm for providing upper bounds on robust-to-dynamics linear programs.
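A minimal sketch of a robust-to-dynamics LP in the linear/linear case discussed above: maximize c^T x over initial conditions whose iterates G^k x stay in the polytope {x : Hx <= h}. Since G is asymptotically stable, only finitely many of the stacked constraints H G^k x <= h matter, so truncating at an assumed horizon N gives the answer once N is large enough:

```python
# Robust-to-dynamics LP with stable linear dynamics (all data are toy values).
import numpy as np
from scipy.optimize import linprog

G = np.array([[0.6, 0.3], [-0.2, 0.7]])    # stable dynamics (spectral radius < 1)
H = np.vstack([np.eye(2), -np.eye(2)])     # polytope |x_i| <= 1
h = np.ones(4)
c = np.array([1.0, 1.0])                   # maximize x1 + x2

N = 30                                     # truncation horizon (assumption)
A_ub = np.vstack([H @ np.linalg.matrix_power(G, k) for k in range(N + 1)])
b_ub = np.tile(h, N + 1)
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2,
              method="highs")
print("optimal robust value:", -res.fun, "at x0 =", res.x.round(4))
```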
We consider the problem of robust on-line optimization of a class of continuous-time nonlinear systems by using a discrete-time controller/optimizer, interconnected with the plant in a sampled-data structure. In contrast to classic approaches where the controller is updated after a fixed sufficiently long waiting time has passed, we design an event-based mechanism that triggers the control action only when the rate of change of the output of the plant is sufficiently small. By using this event-based update rule, a significant improvement in the convergence rate of the closed-loop dynamics is achieved. Since the closed-loop system combines discrete-time and continuous-time dynamics, and in order to guarantee robustness and semi-continuous dependence of solutions on parameters and initial conditions, we use the framework of hybrid set-valued dynamical systems to analyze the stability properties of the system. Numerical simulations illustrate the results.
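The event-based update rule can be mimicked in a few lines: a discrete-time optimizer adjusts the plant input only when the sampled output rate has settled below a threshold, rather than after a fixed dwell time. The first-order plant, quadratic steady-state map, step sizes, and threshold below are all toy assumptions:

```python
# Event-triggered sampled-data optimization of a toy plant.
import numpy as np

def plant_step(x, u, dt=0.01):
    """First-order plant x' = -x + u; output y = -(u - 2)^2 + x."""
    return x + dt * (-x + u)

x, u, dt, eps = 0.0, 0.0, 0.01, 1e-3
y_prev, updates = None, 0
for k in range(200_000):
    x = plant_step(x, u, dt)
    y = -(u - 2.0) ** 2 + x                # measured output
    if y_prev is not None and abs(y - y_prev) / dt < eps:
        # event triggered: output rate has settled, take a gradient-like step
        u += 0.1 * (2.0 * (2.0 - u))       # ascend d/du of -(u - 2)^2
        updates += 1
    y_prev = y
print(f"final u = {u:.3f} (optimum 2.0) after {updates} event-triggered updates")
```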
Game theory serves as a powerful tool for distributed optimization in multiagent systems in different applications. In this paper we consider multiagent systems that can be modeled as a potential game whose potential function coincides with a global objective function to be maximized. This approach renders the agents strategic decision makers and turns the corresponding optimization problem into the problem of learning an optimal equilibrium point in the designed game. In contrast to existing work on payoff-based learning, we deal here with systems in which agents have neither memory nor the ability to communicate, and base their decisions only on the currently played action and the experienced payoff. Because of these restrictions, we use the methods of reinforcement learning, stochastic approximation, and learning automata extensively reviewed and analyzed in [3], [9]. These methods allow us to set up agent dynamics that move the game out of inefficient Nash equilibria and lead it close to an optimal one, for both discrete and continuous action sets.
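As a concrete instance of memoryless payoff-based learning, the sketch below runs the classic linear reward-inaction automaton (one of the learning-automata schemes this line of work builds on, though not necessarily the paper's exact dynamics) on a toy two-agent potential game with a common payoff; each agent updates using only its current action and received payoff:

```python
# Linear reward-inaction learning automata in a toy common-payoff potential game.
import numpy as np

rng = np.random.default_rng(1)
# Common payoff = potential phi(a1, a2) on 3x3 discrete actions, scaled to [0, 1].
phi = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 6.0],
                [3.0, 6.0, 9.0]]) / 9.0

p = [np.full(3, 1 / 3), np.full(3, 1 / 3)]  # mixed strategies of the two agents
beta = 0.05                                 # learning rate
for t in range(20_000):
    a = [rng.choice(3, p=pi) for pi in p]
    r = phi[a[0], a[1]]                     # payoff observed by both agents
    for i in range(2):
        e = np.zeros(3); e[a[i]] = 1.0
        p[i] += beta * r * (e - p[i])       # linear reward-inaction update
print("agent strategies:", p[0].round(3), p[1].round(3))
```

With a small step size the joint play concentrates on the potential-maximizing profile (2, 2).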
This paper presents a joint optimization algorithm for coverage and capacity in heterogeneous cellular networks. A joint optimization objective related to capacity loss is proposed, considering both coverage holes and overlap areas based on the power density distribution. The optimization problem is NP-hard because the adjustable parameters mix discrete and continuous variables, so the bacterial foraging (BF) algorithm is improved, using network performance analysis results to find a more effective search direction than random selection. Simulation results show that the optimization objective is feasible and achieves a better effect than the traditional method.
In recent years, binary coding techniques have become increasingly popular because of their high efficiency in handling large-scale computer vision applications. It has been demonstrated that supervised binary coding techniques that leverage supervised information can significantly enhance the coding quality, and hence greatly benefit visual search tasks. Typically, a modern binary coding method seeks to learn a group of coding functions which compress data samples into binary codes. However, few methods have pursued coding functions such that the precision at the top of a ranking list according to Hamming distances of the generated binary codes is optimized. In this paper, we propose a novel supervised binary coding approach, namely Top Rank Supervised Binary Coding (Top-RSBC), which explicitly focuses on optimizing the precision of top positions in a Hamming-distance ranking list towards preserving the supervision information. The core idea is to train disciplined coding functions, by which the mistakes at the top of a Hamming-distance ranking list are penalized more than those at the bottom. To solve for such coding functions, we relax the original discrete optimization objective with a continuous surrogate, and derive a stochastic gradient descent method to optimize the surrogate objective. To further reduce the training time cost, we also design an online learning algorithm to optimize the surrogate objective more efficiently. Empirical studies based upon three benchmark image datasets demonstrate that the proposed binary coding approach achieves superior image search accuracy over state-of-the-art methods.
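The relax-and-SGD idea can be sketched as follows: replace sign(.) with tanh(.) so the codes are differentiable, and weight a dissimilar pair more heavily when its soft Hamming distance is small, i.e., when the mistake would appear near the top of the ranking. The data, supervision, weighting, and loss below are toy assumptions, not the Top-RSBC objective itself:

```python
# Continuous surrogate + SGD for learning binary codes (toy setup).
import numpy as np

rng = np.random.default_rng(0)
d, bits, n = 16, 8, 200
X = rng.normal(size=(n, d))
labels = (X[:, 0] > 0).astype(int)         # toy supervision
W = rng.normal(scale=0.1, size=(bits, d))

def soft_code(W, x):                       # continuous surrogate of sign(Wx)
    return np.tanh(W @ x)

lr = 0.05
for step in range(3000):
    i, j = rng.choice(n, 2, replace=False)
    bi, bj = soft_code(W, X[i]), soft_code(W, X[j])
    dist = 0.5 * (bits - bi @ bj)          # soft Hamming distance
    sim = 1.0 if labels[i] == labels[j] else -1.0
    # top-weighting: dissimilar pairs close in Hamming space get a larger weight
    weight = 1.0 / (1.0 + dist) if sim < 0 else 1.0
    # gradient of loss = weight * sim * dist, via the tanh chain rule
    g_bi = weight * sim * (-0.5) * bj * (1 - bi ** 2)
    g_bj = weight * sim * (-0.5) * bi * (1 - bj ** 2)
    W -= lr * (np.outer(g_bi, X[i]) + np.outer(g_bj, X[j]))

codes = np.sign(np.tanh(X @ W.T))          # final discrete codes
print("learned codes for first 5 samples:\n", codes[:5])
```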
Wireless sensor networks have been widely utilized in many applications such as environmental monitoring and control. An appropriate sensor deployment scheme that achieves maximal coverage is crucial for the effectiveness of a sensor network. In this paper, we study the coverage optimization problem with hopping sensors. Although a similar problem has been investigated for mobile sensors with continuous dynamics, the problem differs for hopping sensors, which have discrete and constrained dynamics. Based on the characteristics of hopping, we derive the dynamics equation of hopping sensors. We then propose an enhanced virtual force algorithm as a deployment scheme to improve the coverage. A combination of attractive and repulsive forces generated by Voronoi neighbor sensors, obstacles, and the centroid of the local Voronoi cell is used to determine the motion paths for hopping sensors. Furthermore, a timer is designed to adjust the movement sequence of sensors, such that unnecessary movements can be reduced. Simulation results show that optimal coverage can be accomplished by hopping sensors in an energy-efficient manner.
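A minimal sketch of a virtual-force deployment step (using the classic distance-threshold attract/repel rule as a stand-in for the paper's Voronoi-based forces, with all gains assumed): each sensor sums the forces from its neighbors and, if the resultant is large enough, hops a fixed quantum in that direction:

```python
# Virtual-force deployment with fixed-length hops (discrete dynamics), toy data.
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(0, 10, size=(20, 2))     # 20 sensors in a 10 x 10 field
HOP = 0.5                                  # fixed hop quantum
D_TH, K_ATT, K_REP, F_MIN = 2.0, 0.3, 1.0, 0.05

for step in range(200):
    new_pos = pos.copy()
    for i in range(len(pos)):
        force = np.zeros(2)
        for j in range(len(pos)):
            if i == j:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff)
            if dist < 1e-9:
                continue
            if dist < D_TH:                # too close: repulsive force
                force -= K_REP * (D_TH - dist) * diff / dist
            elif dist < 2 * D_TH:          # moderately far: attractive force
                force += K_ATT * (dist - D_TH) * diff / dist
        if np.linalg.norm(force) > F_MIN:  # skip hops that would barely move us
            new_pos[i] = pos[i] + HOP * force / np.linalg.norm(force)
    pos = np.clip(new_pos, 0, 10)
print("final sensor positions:\n", pos.round(2))
```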
One of the main issues in the design of modern integrated circuits is power reduction. In digital circuits, power consumption was for decades dominated by dynamic power. In the new nanometer CMOS technologies, however, static power due to leakage current is becoming the main issue in power consumption. As leakage power is related to the number of components, it is becoming mandatory to reduce the number of transistors in any type of design to reduce power consumption. It is therefore important to develop new EDA algorithms and tools that optimize the number of components (transistors), as well as layout design automation tools able to lay out any network of components produced by such an optimization tool. An example is presented of a layout design automation tool that can lay out any network of transistors, using transistors of any size. Another issue for power optimization is the use of tools and algorithms for gate sizing. The designer can manage the sizing of transistors to reduce power consumption without compromising the clock frequency. There are two types of gate sizing: discrete gate sizing and continuous gate sizing. Discrete gate sizing tools are used with a cell library that offers only a few available sizes for each cell. Continuous gate sizing assumes the EDA tool can define any transistor size; in this case, the designer needs a layout design tool able to lay out transistors of any size. The winning tools of the ISPD 2012 and 2013 contests will be presented, and the inclusion of our gate sizing algorithms in an industrial flow used to design state-of-the-art microprocessors will be discussed. Another type of EDA tool that is becoming more and more useful is the visualization tool, which provides an animated visual output of a running EDA tool. Such tools are very useful for showing tool developers how their tools are running, so that developers can use this information to improve the algorithms used in an EDA tool.
The main challenge of ultra-reliable machine-to-machine (M2M) control applications is to meet the stringent timing and reliability requirements of control systems, despite the adverse properties of wireless communication for delay and packet errors, and the limited battery resources of the sensor nodes. Since the transmission delay and energy consumption of a sensor node are determined by the transmission power and rate of that node and of the concurrently transmitting nodes, the transmission schedule should be optimized jointly with the transmission power and rate of the sensor nodes. Previously, it has been shown that in the optimal solution of the joint power control, rate adaptation and scheduling problem, the optimization of power control and rate adaptation for each node subset can be formulated and solved separately and then used in the scheduling algorithm. However, the power control and rate adaptation problem had only been formulated and solved for the continuous-rate transmission model, in which Shannon's capacity formulation for an Additive White Gaussian Noise (AWGN) wireless channel is used to calculate the maximum achievable rate as a function of the Signal-to-Interference-plus-Noise Ratio (SINR). In this paper, we formulate the power control and rate adaptation problem with the objective of minimizing the time required for the concurrent transmission of a set of sensor nodes while satisfying their transmission delay, reliability and energy consumption requirements, based on the more realistic discrete-rate transmission model, in which only a finite set of transmit rates is supported. We propose a polynomial-time algorithm to solve this problem and prove its optimality. We then combine it with the previously proposed scheduling algorithms and demonstrate its close-to-optimal performance via extensive simulations.
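To illustrate the discrete-rate setting, the sketch below pairs an exhaustive search over discrete rate vectors (a brute-force stand-in for the paper's polynomial-time algorithm) with the classic Foschini-Miljanic iteration, which finds the minimal powers meeting the chosen SINR thresholds whenever they are feasible; link gains, the rate table, and the traffic load are toy assumptions:

```python
# Discrete-rate power control for two concurrently transmitting nodes (toy data).
import itertools
import numpy as np

G = np.array([[1.0, 0.1], [0.15, 1.0]])   # link gains G[i][j]: tx j -> rx i
NOISE, PMAX = 0.1, 2.0
RATES = [(1.0, 0.5), (2.0, 1.5), (3.0, 4.0)]  # (bits/slot, SINR threshold)
L = 6.0                                    # bits each node must deliver

def min_powers(thresholds, iters=500):
    """Foschini-Miljanic iteration: minimal feasible powers, or None."""
    p = np.zeros(2)
    for _ in range(iters):
        interf = G @ p - np.diag(G) * p + NOISE
        p = thresholds * interf / np.diag(G)
        if np.any(p > PMAX):               # diverging => rate vector infeasible
            return None
    return p

best = (np.inf, None)
for combo in itertools.product(RATES, repeat=2):
    rates = np.array([r for r, _ in combo])
    th = np.array([t for _, t in combo])
    p = min_powers(th)
    if p is not None:
        t = (L / rates).max()              # completion time: slowest node
        if t < best[0]:
            best = (t, (rates, p.round(3)))
print("min completion time %.2f slots, rates %s, powers %s"
      % (best[0], best[1][0], best[1][1]))
```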
In this paper, we focus on energy management of distributed generators (DGs) and energy storage systems (ESS) in microgrids (MGs), considering uncertainties in renewable energy and load demand. The MG energy management problem is formulated as a two-stage stochastic programming model. The optimization model is then decomposed into a mixed-integer quadratic programming problem by using discrete stochastic scenarios to approximate the continuous random variables. A scenario generation approach based on a time-homogeneous Markov chain model is proposed to generate simulated time series of renewable energy generation and load demand. Finally, the proposed stochastic programming model is tested on a typical LV network and solved with the MATLAB optimization toolbox. The simulation results show that the proposed stochastic programming model obtains more robust scheduling solutions and lower operating cost than deterministic optimization methods.
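A minimal sketch of the scenario generation step: the continuous renewable-output range is binned into discrete states, and a time-homogeneous transition matrix (assumed here; in practice estimated from historical data) is sampled to produce the synthetic time series that become the scenarios of the two-stage program:

```python
# Time-homogeneous Markov-chain scenario generation for renewable output (toy).
import numpy as np

rng = np.random.default_rng(3)
levels = np.array([0.0, 0.5, 1.0])         # binned PV output states (MW)
P = np.array([[0.70, 0.25, 0.05],          # assumed transition matrix
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])

def sample_scenario(horizon=24, s0=0):
    s, path = s0, []
    for t in range(horizon):
        s = rng.choice(3, p=P[s])          # next state from current state only
        path.append(levels[s])
    return np.array(path)

scenarios = np.stack([sample_scenario() for _ in range(100)])
probs = np.full(100, 1 / 100)              # equiprobable scenarios
print("mean PV profile (MW):", scenarios.mean(axis=0).round(2))
```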
As a clean energy source, wind power has been utilized on a massive scale in recent years, significantly reducing the pollutant emissions of generating units. Drawing on the concept of energy saving and emission reduction, this article builds a multi-objective function representing CO2 and SO2 emissions, the coal consumption of units, and the lowest unit commitment cost; proposes an improved NSGA-II (Non-dominated Sorting Genetic Algorithm II) suited to the dynamic characteristics of the problem, taking into account constraints such as minimum up/down times and ramping; optimizes the discrete unit commitment variables and the continuous load distribution quantities with a double-optimization strategy; and introduces the fuzzy satisfaction-maximizing method, nested into each dynamic solution, to reach a decision among the Pareto solutions. Simulation of a 10-unit wind-power-integrated system shows that this method is an effective way to optimize the multi-objective unit commitment model with mixed-integer variables.
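At the core of any NSGA-II variant is the fast non-dominated sort that ranks candidate schedules by Pareto dominance. The sketch below implements that standard ranking step on toy (cost, emissions) data; it is the textbook NSGA-II building block, not the paper's improved algorithm:

```python
# Fast non-dominated sorting (the core ranking step of NSGA-II), toy objectives.
import numpy as np

def non_dominated_sort(F):
    """Return fronts (lists of indices) for objective matrix F (minimized)."""
    n = len(F)
    dominated_by = [[] for _ in range(n)]
    dom_count = np.zeros(n, dtype=int)
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if np.all(F[p] <= F[q]) and np.any(F[p] < F[q]):
                dominated_by[p].append(q)   # p dominates q
            elif np.all(F[q] <= F[p]) and np.any(F[q] < F[p]):
                dom_count[p] += 1           # p is dominated by q
    fronts = [[p for p in range(n) if dom_count[p] == 0]]
    while fronts[-1]:
        nxt = []
        for p in fronts[-1]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
    return fronts[:-1]

rng = np.random.default_rng(4)
F = rng.uniform(size=(12, 2))              # (cost, emissions) for 12 schedules
for rank, front in enumerate(non_dominated_sort(F)):
    print(f"front {rank}: {front}")
```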
Genes, proteins and other metabolites present in the cellular environment exhibit a virtual network that represents the regulatory relationships among its constituents. This network is called a Gene Regulatory Network (GRN). Computational reconstruction of GRNs reveals normal metabolic pathways as well as disease motifs. The availability of microarray gene expression data from normal and diseased tissues makes the job easier for computational biologists. Our reconstruction of GRNs is based on neural modeling. Here we have used discrete and continuous versions of a metaheuristic named the Firefly algorithm for structure and parameter learning of GRNs, respectively. We propose the discrete version for this problem and apply it to explore the discrete search space of GRN structures. To evaluate the performance of the algorithm, we have used a widely used synthetic GRN data set. The algorithm shows an accuracy rate above 50% in recovering the GRN. The accuracy of the Firefly algorithm in structure and parameter optimization of GRNs is promising.
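A minimal sketch of a discrete (binary) firefly step for structure search, under simplifying assumptions: each firefly is a candidate 0/1 adjacency matrix, brightness is a toy fitness that just matches a hidden target network (the paper's fitness would come from fitting expression data), and a dimmer firefly copies bits from a brighter one with a distance-dependent probability plus random mutation:

```python
# Binary firefly search over candidate GRN structures (toy fitness).
import numpy as np

rng = np.random.default_rng(5)
G = 5                                      # number of genes
target = rng.integers(0, 2, size=(G, G))   # hidden "true" structure (toy)

def fitness(A):                            # higher = closer to target
    return -(A != target).sum()

pop = rng.integers(0, 2, size=(12, G, G))  # 12 candidate structures
BETA0, GAMMA, MUT = 1.0, 0.1, 0.02
for it in range(300):
    bright = np.array([fitness(A) for A in pop])
    for i in range(len(pop)):
        for j in range(len(pop)):
            if bright[j] > bright[i]:      # move firefly i toward brighter j
                r2 = (pop[i] != pop[j]).sum()        # Hamming distance proxy
                beta = BETA0 * np.exp(-GAMMA * r2)   # attractiveness
                copy_mask = rng.random((G, G)) < beta
                pop[i][copy_mask] = pop[j][copy_mask]
                flip = rng.random((G, G)) < MUT      # random exploration
                pop[i][flip] ^= 1
best = pop[np.argmax([fitness(A) for A in pop])]
print("best fitness:", fitness(best), "(0 = perfect match)")
```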
The cuttlefish optimization algorithm is a recent metaheuristic for combinatorial optimization, originally applied in the continuous domain, which provides mechanisms for local and global search. This paper presents a new adaptation of this algorithm to the discrete case, solving the famous travelling salesman problem, one of the classic discrete combinatorial optimization problems. This adaptation reformulates the solution-generating equations for the algorithm's different cases. Experimental results of the proposed algorithm on instances from the TSPLIB library, compared with other methods, show the efficiency and quality of this adaptation.
A novel method for computing modal reflectivity at an optical waveguide end-facet is presented. The method is based on the characteristic Green's function (CGF) technique. Using a separability assumption on the structure and the rational function fitting method (RFFM), a closed-form field expression is derived for an optical planar waveguide. The derived uniform expression consists of discrete and continuous spectrum contributions, which represent guided and radiation mode effects, respectively. An optimization problem is then defined to calculate the exact reflection coefficients at the end-facet for all extracted poles obtained from the rational function fitting step. The proposed CGF-RFFM optimization offers superior accuracy in comparison with the previously reported CGF-complex images (CI) technique, owing to the contribution of all field components in the optimization problem. The main advantage of the proposed method lies in its simple implementation as well as its precision for any refractive index contrast. Excellent numerical agreement with rigorous methods is shown in several examples.
We propose a methodology for architecture exploration for Cyber-Physical Systems (CPS) based on an iterative, optimization-based approach, where a discrete architecture selection engine is placed in a loop with a continuous sizing engine. The discrete optimization routine proposes a candidate architecture to the sizing engine. The sizing routine optimizes over the continuous parameters using simulation to evaluate the physical models and to monitor the requirements. To decrease the number of simulations, we show how balance equations and conservation laws can be leveraged to prune the discrete space, thus achieving significant reduction in the overall runtime. We demonstrate the effectiveness of our methodology on an industrial case study, namely an aircraft environmental control system, showing more than one order of magnitude reduction in optimization time.
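The discrete-selection/continuous-sizing loop can be sketched compactly: a discrete engine enumerates candidate architectures (brute force here, standing in for the paper's selection engine), a balance-equation check prunes infeasible candidates before any expensive evaluation, and survivors are passed to a continuous sizing routine; the component data and cost model are toy assumptions:

```python
# Discrete architecture selection in a loop with continuous sizing (toy model).
import itertools
import numpy as np
from scipy.optimize import minimize

DEMAND = 10.0                              # required capacity (assumption)

def prune(arch):
    """Balance-equation pruning: total max capacity must cover demand."""
    return sum(cap for (_, cap) in arch) >= DEMAND

def size(arch):
    """Continuous sizing: choose component loadings minimizing cost."""
    caps = np.array([cap for (_, cap) in arch])
    costs = np.array([c for (c, _) in arch])
    def obj(x):                            # cost plus quadratic inefficiency
        return float(costs @ x + 0.1 * (x ** 2).sum())
    cons = [{"type": "ineq", "fun": lambda x: x @ caps - DEMAND}]
    res = minimize(obj, np.ones(len(arch)), bounds=[(0, 1)] * len(arch),
                   constraints=cons)
    return res.fun if res.success else np.inf

# candidate components: (unit cost, capacity); an architecture is a subset
components = [(1.0, 6.0), (1.5, 8.0), (0.8, 4.0), (2.0, 12.0)]
best = (np.inf, None)
for k in range(1, len(components) + 1):
    for arch in itertools.combinations(components, k):
        if not prune(arch):                # skip infeasible without simulating
            continue
        cost = size(arch)
        if cost < best[0]:
            best = (cost, arch)
print("best architecture:", best[1], "cost %.2f" % best[0])
```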
In practical reflector antenna structures, components of the back-up structure (BUS) are selected from a library of standard, commonly manufactured steel sections. In this case, the design problem of the antenna structure is a discrete optimization problem. In most cases, the discrete design problem is solved by heuristic algorithms, which become computationally expensive as the number of design variables increases. In this paper, a continuous method is used to transform the discrete optimization problem into a continuous one, and a gradient-based technique is utilized to solve it. The proposed method achieves good antenna surface accuracy with all components selected from a standard cross-section list, as demonstrated on a 9 m diameter antenna optimization problem.
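A minimal sketch of the relax-then-select idea under assumed data: optimize continuous cross-sectional areas by gradient descent on a toy compliance-plus-mass objective, then snap each member to the nearest entry of the standard section library:

```python
# Continuous relaxation of a discrete section-selection problem (toy objective).
import numpy as np

LIBRARY = np.array([1.0, 1.5, 2.2, 3.0, 4.5])   # standard section areas (cm^2)

def objective(a):
    """Toy stiffness/mass trade-off: compliance ~ 1/a plus a mass penalty."""
    return np.sum(1.0 / a) + 0.1 * np.sum(a)

a = np.full(8, 2.0)                        # 8 members, continuous start
for _ in range(500):                       # gradient descent on the relaxation
    grad = -1.0 / a ** 2 + 0.1
    a = np.clip(a - 0.5 * grad, LIBRARY.min(), LIBRARY.max())

# map each continuous optimum back to the nearest standard section
discrete = LIBRARY[np.abs(a[:, None] - LIBRARY[None, :]).argmin(axis=1)]
print("continuous optimum:", a.round(3))
print("snapped to library:", discrete)
```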
An Egyptian statue on display at the Manchester Museum mysteriously spins on its axis every day; it is eventually discovered that this is due to anisotropic friction forces, and that the motive power comes from imperceptible mechanical waves caused by visitors' footsteps and nearby traffic. This phenomenon involves microscopic ratchets and is pervasive in the microscopic world: it is basically how muscles contract. It was the source of inspiration to think about everyday objects that move by harvesting external vibration rather than using mechanical traction and steering wheels. We propose here a strategy for displacing objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a very different specified position. We describe and demonstrate two application scenarios, namely assisted transportation of heavy objects with little effort on the part of the human, and self-arranging furniture, useful for instance to clean classrooms or restaurants during vacant hours.
In this paper, we focus on principal-agent problems in continuous time when the participants have ambiguity about the output process, in the framework of g-expectation. The first-best (or risk-sharing) type of contract is studied. The necessary condition for the optimal contract is derived by means of optimal control theory. Finally, we present some examples to clarify our results.
Risk-control optimization has great significance for power system security. Research on power system risk optimization usually considers probabilistic parameter uncertainties. However, probabilistic descriptions of uncertainty are insufficient when sample data are lacking. Non-probabilistic parameter uncertainties should therefore be considered, and they impose a significant influence on the results of optimization. To address this problem, a robust optimization operation method for power system risk control is presented in this paper, treating non-probabilistic parameter uncertainty based on information gap decision theory (IGDT). In this method, loads are modeled as non-probabilistic uncertain parameters, and a robust risk-control operation model is formulated. Solving the model yields the maximum tolerable fluctuation of the pre-specified target, together with the corresponding operating strategy. The proposed model is applied to the IEEE 30-bus system in simulation. The results provide valuable information for the operating department's risk management.
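The IGDT robustness question reduces to a one-dimensional search in simple cases: find the largest uncertainty horizon alpha such that the worst load in [(1-alpha)L0, (1+alpha)L0] still keeps cost within a tolerated budget. The quadratic cost curve and budget factor below are toy assumptions:

```python
# IGDT-style robustness horizon via bisection (toy cost model).
import numpy as np

L0 = 100.0                                 # forecast load (MW)

def cost(load):                            # toy dispatch cost curve
    return 0.02 * load ** 2 + 5.0 * load

BUDGET = 1.15 * cost(L0)                   # tolerate 15% above forecast cost

lo, hi = 0.0, 1.0                          # bisection on the horizon alpha
for _ in range(60):
    alpha = 0.5 * (lo + hi)
    worst = cost((1 + alpha) * L0)         # cost is increasing in load here
    if worst <= BUDGET:
        lo = alpha                         # still within budget: widen horizon
    else:
        hi = alpha
print("robustness horizon alpha = %.4f" % lo)
```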
Selective encryption designates a technique that aims at scrambling a message's content while preserving its syntax. Such an approach allows encryption to be transparent towards middleboxes and/or end-user devices, and to fit easily within existing pipelines. In this paper, we propose to apply this property to a real-time broadcast scenario over an RTP session. The main challenge in this setting is preserving synchronization between encryption and decryption. Our solution is based on the Advanced Encryption Standard in counter mode, modified to fit our auto-synchronization requirement. Setting up the proposed synchronization scheme does not induce any latency and requires no additional bandwidth in the RTP session (no additional information is sent). Moreover, its parallel structure allows decryption to start on any given frame of the video, while leaving a lot of room for further optimization.
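One simple way to obtain the self-synchronization property (a sketch in the spirit of the paper's modified counter mode, not its exact construction) is to derive the CTR counter block deterministically from the RTP timestamp already carried with each frame, so a receiver can decrypt any frame in isolation:

```python
# Self-synchronizing AES-CTR keyed to the in-band RTP timestamp (sketch).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)

def ctr_block(rtp_timestamp: int) -> bytes:
    """16-byte initial counter: timestamp in the high 8 bytes, zero low bytes."""
    return rtp_timestamp.to_bytes(8, "big") + bytes(8)

def process(payload: bytes, rtp_timestamp: int) -> bytes:
    """CTR mode is symmetric: the same call encrypts and decrypts."""
    cipher = Cipher(algorithms.AES(KEY), modes.CTR(ctr_block(rtp_timestamp)))
    enc = cipher.encryptor()
    return enc.update(payload) + enc.finalize()

frame = b"toy video frame payload"
ts = 123456
ct = process(frame, ts)
# A receiver joining mid-stream needs only the frame and its RTP timestamp.
assert process(ct, ts) == frame
print("frame recovered after blind join:", process(ct, ts))
```

No extra signaling is sent: synchronization state is reconstructed entirely from header fields the RTP session already carries, which is the property the scheme above is meant to illustrate.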
Cache coherence protocol bugs can cause multicores to fail. Existing coherence verification approaches incur state explosion at small scales or require considerable human effort. As protocols' complexity and multicores' core counts increase, verification continues to be a challenge. Recently, researchers proposed fractal coherence, which achieves scalable verification by enforcing observational equivalence between sub-systems in the coherence protocol. A larger sub-system is verified implicitly if a smaller sub-system has been verified. Unfortunately, fractal protocols suffer from two fundamental limitations: (1) indirect communication: sub-systems cannot directly communicate, and (2) partially serial invalidations: cores must be invalidated in a specific, serial order. These limitations disallow common performance optimizations used by conventional directory protocols: reply forwarding, where caches communicate directly, and parallel invalidations. Therefore, fractal protocols lack performance scalability while directory protocols lack verification scalability. To enable both performance and verification scalability, we propose Fractal++, which employs a new class of protocol optimizations for verification-constrained architectures: decoupled replies, contention hints, and fully parallel fractal invalidations. The first two optimizations allow reply-forwarding-like performance while the third enables parallel invalidations in fractal protocols. Unlike conventional protocols, Fractal++ preserves observational equivalence and hence is scalably verifiable. In 32-core simulations of single- and four-socket systems, Fractal++ performs nearly as well as a directory protocol while providing scalable verifiability, whereas the best-performing previous fractal protocol performs 8% worse on average (up to 26% worse) with a single socket and 12% worse on average (up to 34% worse) with a longer-latency multi-socket system.