Biblio
Automobiles provide comfort and mobility to their owners. While they make life more convenient, they also pose safety and security challenges. Some modern automobiles are equipped with anti-theft systems and enhanced safety measures to safeguard their drivers, but these mechanisms are at times insufficient because intruders and car thieves use a variety of techniques to defeat them. Drunk drivers also cause accidents on our roads, hence the need to detect an intoxicated driver and render the car incapable of being driven. These issues merit an integrated approach to automobile safety and security. In light of these challenges, an integrated microcontroller-based hardware and software system for the safety and security of automobiles, intended to be retrofitted into an existing vehicle architecture, was designed, developed and deployed. The system's submodules are: (1) two-step ignition for automobiles, namely (a) biometric ignition and (b) alcohol detection with engine control; (2) Global Positioning System (GPS) based vehicle tracking; and (3) multisensor-based fire detection using neuro-fuzzy logic. All submodules were implemented on a single microcontroller, the Arduino Mega 2560, acting as the central control unit and programmed in C++11. The developed system performed well in testing. Given the right conditions, the alcohol detection subsystem operated with 92% efficiency, the biometric ignition subsystem with about 80% efficiency, the fire detection subsystem with 95% efficiency in locations registered with the neuro-fuzzy system, and the vehicle tracking subsystem with 90% efficiency.
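For illustration, a minimal Arduino-style sketch of the two-step ignition gate described above follows: the ignition relay is energised only when the alcohol reading is below a threshold and a fingerprint is matched. The pin numbers, the analog alcohol sensor, the threshold value, and the fingerprintMatched() helper are assumptions made for this sketch, not the authors' actual implementation.

// Hypothetical two-step ignition gate for an Arduino Mega 2560.
// Pin assignments, the threshold, and fingerprintMatched() are placeholders.
const int ALCOHOL_SENSOR_PIN = A0;   // analog output of an alcohol sensor (assumed)
const int IGNITION_RELAY_PIN = 7;    // relay in the ignition circuit (assumed)
const int ALCOHOL_THRESHOLD  = 400;  // raw ADC value; would need calibration

// Stub for the biometric step; a real system would query a fingerprint
// module and compare the scan against enrolled templates.
bool fingerprintMatched() {
  return false;
}

void setup() {
  pinMode(IGNITION_RELAY_PIN, OUTPUT);
  digitalWrite(IGNITION_RELAY_PIN, LOW);    // engine disabled by default
}

void loop() {
  int alcoholLevel = analogRead(ALCOHOL_SENSOR_PIN);
  bool driverSober = (alcoholLevel < ALCOHOL_THRESHOLD);

  if (driverSober && fingerprintMatched()) {
    digitalWrite(IGNITION_RELAY_PIN, HIGH); // both checks passed: allow ignition
  } else {
    digitalWrite(IGNITION_RELAY_PIN, LOW);  // keep the car incapable of being driven
  }
  delay(500);                               // re-check twice per second
}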
In this paper, based on the Hamiltonian, an alternative interpretation of the iterative adaptive dynamic programming (ADP) approach is developed from the perspective of optimization for discrete-time nonlinear dynamic systems. The role of the Hamiltonian in iterative ADP is explained. The resulting Hamiltonian-driven ADP is able to evaluate the performance of arbitrary admissible policies, compare two different admissible policies, and further improve a given admissible policy. The convergence of the Hamiltonian-driven ADP to the optimal policy is proven. Implementation of the Hamiltonian-driven ADP by neural networks is discussed, based on the assumption that each iterative policy and value function can be updated exactly. Finally, a simulation is conducted to verify the effectiveness of the presented Hamiltonian-driven ADP.
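For context, the discrete-time setting that Hamiltonian-driven ADP methods typically build on can be sketched as follows; the notation is the standard one and is assumed here, not quoted from the paper. For a system x_{k+1} = f(x_k, u_k) with stage cost U(x_k, u_k), the Hamiltonian associated with a value function V, the Bellman optimality condition, and the usual policy-iteration updates read

\[
H(x_k, u_k, V) = U(x_k, u_k) + V\bigl(f(x_k, u_k)\bigr) - V(x_k),
\qquad
\min_{u} H(x_k, u, V^{*}) = 0,
\]
\[
V_i(x_k) = U\bigl(x_k, \mu_i(x_k)\bigr) + V_i\bigl(f(x_k, \mu_i(x_k))\bigr),
\qquad
\mu_{i+1}(x_k) = \arg\min_{u} \bigl[\, U(x_k, u) + V_i\bigl(f(x_k, u)\bigr) \bigr],
\]

where \mu_i is the i-th admissible policy and V_i its value function. Evaluating and minimizing the Hamiltonian is what allows such an iteration to assess a given admissible policy, compare two policies, and improve on them.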
Automated server parameter tuning is crucial to the performance and availability of Internet applications hosted in cloud environments. It is challenging due to the high dynamics and burstiness of workloads, multi-tier service architectures, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing the effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing the average response time of multi-tier applications. Reinforcement learning is a trial-and-error decision-making process that determines only the direction of parameter tuning rather than the quantitative values needed for agile tuning; it relies on a predefined adjustment value for each tuning action, and it is nontrivial, or even infeasible, to find an optimal value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the fast online learning and self-adaptiveness of neural networks with fuzzy control. Because it is model independent, it is robust to highly dynamic and bursty workloads, and its quantitative control outputs make it agile in server parameter tuning. We implemented the new approach on a testbed of a virtualized data center hosting the RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach in both improving effective system throughput and minimizing average response time.
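As an illustration of what a quantitative neuro-fuzzy tuning output might look like, the sketch below implements a small zero-order Takagi-Sugeno controller with two inputs (normalized throughput error and its change) and rule consequents adapted online by gradient descent. This structure, and every name and constant in it, is an assumption chosen for illustration and is not the controller described in the paper.

// Hypothetical neuro-fuzzy tuner: two inputs in [-1, 1] (throughput error and
// its change), a 3x3 zero-order Takagi-Sugeno rule base, and rule consequents
// adapted online by gradient descent. All names and constants are placeholders.
#include <array>
#include <cmath>
#include <iostream>

class NeuroFuzzyTuner {
public:
    // Quantitative output: a scaled adjustment to apply to a server parameter.
    double output(double e, double de) {
        std::array<double, 3> me = memberships(e), mde = memberships(de);
        double num = 0.0, den = 0.0;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) {
                double w = me[i] * mde[j];          // rule firing strength
                num += w * consequent[i][j];
                den += w;
            }
        lastE = e; lastDe = de;
        return last = (den > 0.0 ? num / den : 0.0);
    }

    // Online learning step: nudge the fired rules' consequents toward the
    // adjustment that feedback says would have been better.
    void adapt(double betterAdjustment, double rate = 0.1) {
        std::array<double, 3> me = memberships(lastE), mde = memberships(lastDe);
        double den = 0.0;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) den += me[i] * mde[j];
        if (den <= 0.0) return;
        double err = betterAdjustment - last;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                consequent[i][j] += rate * err * (me[i] * mde[j]) / den;
    }

private:
    // Triangular memberships for {negative, zero, positive} on [-1, 1].
    static std::array<double, 3> memberships(double x) {
        if (x > 1.0) x = 1.0;
        if (x < -1.0) x = -1.0;
        std::array<double, 3> m = {{ x < 0.0 ? -x : 0.0,
                                     1.0 - std::fabs(x),
                                     x > 0.0 ? x : 0.0 }};
        return m;
    }

    // Coarse initial rule table (rows: error neg/zero/pos, cols: change in error).
    double consequent[3][3] = {{-1.0, -0.5, 0.0},
                               {-0.5,  0.0, 0.5},
                               { 0.0,  0.5, 1.0}};
    double lastE = 0.0, lastDe = 0.0, last = 0.0;
};

int main() {
    NeuroFuzzyTuner tuner;
    // Example: throughput is 30% below target and the gap is widening slightly.
    double adjustment = tuner.output(-0.3, -0.1);
    std::cout << "scaled parameter adjustment: " << adjustment << "\n";
    // Later feedback indicates a larger change would have been better.
    tuner.adapt(-0.8);
    return 0;
}

In a deployment loop, output() would be called once per control interval with normalized measurements, the returned value scaled onto the tuned parameter's range (for example, a change in a thread-pool or connection-pool size), and adapt() invoked after the effect of the previous adjustment has been observed.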