Biblio
With the development of urban traffic planning and management, analyzing and estimating origin-destination (OD) data in cities has become an important issue. The traditional method of acquiring OD information is the household survey, which is inefficient and expensive. This paper proposes a new methodology that uses mobile phone data to analyze the mechanisms of trip generation, trip attraction, and OD distribution. The acquisition of mobile phone data is introduced, and a pilot study applying the new method is carried out in Beijing, showing that much important traffic information can be extracted from mobile phone data. We use the K-means clustering algorithm to divide the city into traffic zones, and the attributes of each traffic zone are identified from the mobile phone data. The OD distribution and commuting travel are then analyzed. Finally, an experiment analyzing the "traffic tide phenomenon" in Beijing is conducted to verify the usefulness of the mobile phone data. The experimental results correspond closely to the actual situation, revealing that mobile phone data has tremendous potential for OD analysis.
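As a rough illustration of the zoning step, the sketch below clusters phone-derived location points into traffic zones with K-means. The coordinates are synthetic stand-ins for real mobile phone records, and the choice of k and of scikit-learn's KMeans are assumptions for illustration, not the paper's actual pipeline.

    # Minimal sketch of K-means traffic-zone division, assuming mobile phone
    # records have been reduced to (longitude, latitude) points.
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical observation coordinates (lon, lat) roughly spanning Beijing;
    # real input would come from the mobile phone data described in the paper.
    points = np.random.default_rng(0).uniform(
        low=[116.2, 39.8], high=[116.6, 40.1], size=(1000, 2))

    k = 5  # assumed number of traffic zones; tuned in practice
    zones = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)

    # zones.labels_[i] is the traffic zone of point i;
    # zones.cluster_centers_ approximates each zone's centroid.
    print(zones.cluster_centers_)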
In the future Internet of Things, it is envisioned that things will collaborate to serve people. Unfortunately, this vision cannot be realised without relations between things and people. To address this problem, this paper proposes a user-centric identity management (IDM) system that incorporates user identity, device identity, and the relations between them. The proposed IDM system allows device authentication and authorization based on the user identity. A typical compelling use case of the proposed solution is also given.
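The core relation can be pictured with a small sketch: a device is authorized only through the identity of its owner. All names and the simple ownership model here are hypothetical illustrations, not the paper's actual IDM interfaces.

    # Minimal sketch of user-centric device authorization, assuming a simple
    # ownership relation between user and device identities.
    from dataclasses import dataclass, field

    @dataclass
    class UserIdentity:
        user_id: str
        permissions: set = field(default_factory=set)
        devices: set = field(default_factory=set)  # ids of owned devices

    def authorize_device(user: UserIdentity, device_id: str, action: str) -> bool:
        # The device carries no permissions of its own: it must be registered
        # to the user, and the user must hold the requested permission.
        return device_id in user.devices and action in user.permissions

    alice = UserIdentity("alice", {"read_sensor"}, {"thermostat-42"})
    print(authorize_device(alice, "thermostat-42", "read_sensor"))  # True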
During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We also describe a production framework for collecting and reporting technical security metrics which is based on novel open-source technologies for big data.
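To make the log-to-metric step concrete, the sketch below derives one simple technical metric (alarms per day) from IDS alarm log lines. The Snort-like syslog format is an assumption for illustration; it is not the framework the paper describes.

    # Minimal sketch: count IDS alarms per day from syslog-style alert lines.
    import re
    from collections import Counter

    LINE = re.compile(r"^(?P<month>\w{3}) +(?P<day>\d+) \S+ \S+ snort\[\d+\]:")

    def alarms_per_day(lines):
        counts = Counter()
        for line in lines:
            m = LINE.match(line)
            if m:
                counts[(m["month"], m["day"])] += 1
        return counts

    sample = [
        "Jan  5 10:00:01 ids snort[123]: [1:2003:8] alert ...",
        "Jan  5 10:00:09 ids snort[123]: [1:2003:8] alert ...",
        "Jan  6 02:14:44 ids snort[123]: [1:4002:3] alert ...",
    ]
    print(alarms_per_day(sample))  # {('Jan', '5'): 2, ('Jan', '6'): 1}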
Cover time measures the time (or number of steps) required for a mobile agent to visit each node in a network (graph) at least once. A short cover time is important for search or foraging applications that require mobile agents to quickly inspect or monitor nodes in a network, such as providing situational awareness or security. Speed can be achieved if details about the graph are known or if the agent maintains a history of visited nodes; however, these requirements may not be feasible for agents with limited resources, they are difficult to meet in dynamic graph topologies, and they do not easily scale to large networks. This paper introduces a set-based form of heading (directional bias) that allows an agent to more efficiently explore any connected graph, static or dynamic. When deciding the next node to visit, agents are discouraged from visiting nodes that neighbor both their previous and current locations. Modifying a traditional movement method, e.g., a random walk, with this concept encourages an agent to move toward nodes that are less likely to have been previously visited, reducing cover time. Simulation results with grid, scale-free, and minimum-distance graphs demonstrate that heading can consistently reduce cover time as compared to non-heading movement techniques.
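A minimal sketch of the idea, applied to a plain random walk: neighbors of both the previous and current node are avoided with high probability. The fallback behavior and the bias parameter are assumptions; the paper's set-based heading rule may differ in detail.

    # Heading-biased random walk on an undirected graph (adjacency-set form).
    import random

    def heading_walk_step(adj, prev, cur, bias=0.9):
        neighbors = list(adj[cur])
        # Nodes adjacent to BOTH prev and cur are likelier already visited;
        # with probability `bias`, restrict the choice to the other neighbors.
        preferred = [n for n in neighbors if n not in adj.get(prev, set())]
        if preferred and random.random() < bias:
            return random.choice(preferred)
        return random.choice(neighbors)

    def cover_time(adj, start):
        prev, cur, visited, steps = None, start, {start}, 0
        while len(visited) < len(adj):
            prev, cur = cur, heading_walk_step(adj, prev, cur)
            visited.add(cur)
            steps += 1
        return steps

    # 3x3 grid graph on nodes 0..8
    grid = {i: {j for j in (i - 1, i + 1, i - 3, i + 3)
                if 0 <= j < 9
                and not (j == i - 1 and i % 3 == 0)
                and not (j == i + 1 and j % 3 == 0)}
            for i in range(9)}
    print(cover_time(grid, 0))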
Can software reliability models be used to assess software security? One of the issues is that security problems are relatively rare under "normal" operational profiles, while "classical" reliability models may not be suitable for use under attack conditions. We investigated a range of Fedora open-source software security problems to see if some of the basic assumptions behind software reliability growth models hold for the discovery of security problems in non-attack situations. We find that in some cases, under "normal" operational use, the security problem detection process may be described as a Poisson process; in those cases, appropriate classical software reliability growth models can be used to assess the "security reliability" of that software in non-attack situations. Analyzing the security problem discovery rate for Red Hat Fedora, we find that security problems are relatively rare and that their rate of discovery appears to be relatively constant under "normal" (non-attack) conditions. The discovery process often appears to satisfy the Poisson assumption, opening the door to the use of classical reliability models. Using a Yamada S-shaped model fit to Fedora v15, we illustrate that in some cases such models may be effective in predicting the number of remaining security problems, and thus may offer a way of assessing the security "quality" of a software product (although not necessarily its behavior under attack).
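For reference, the Yamada delayed S-shaped model has mean value function m(t) = a(1 - (1 + bt)e^{-bt}), where a is the eventual total number of problems and b the detection rate. The sketch below fits it to a cumulative discovery curve; the data is synthetic, whereas the paper fits actual Fedora v15 discovery data.

    # Fit the Yamada delayed S-shaped growth model to cumulative counts.
    import numpy as np
    from scipy.optimize import curve_fit

    def yamada_s(t, a, b):
        return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

    t = np.arange(1, 25, dtype=float)  # e.g., weeks since release
    m = yamada_s(t, 120.0, 0.25) + np.random.default_rng(1).normal(0, 2, t.size)

    (a_hat, b_hat), _ = curve_fit(yamada_s, t, m, p0=[100.0, 0.1])
    # a_hat estimates the total number of security problems eventually found,
    # so a_hat - m[-1] approximates the number still remaining.
    print(a_hat, b_hat, a_hat - m[-1])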
Cyber-physical systems (CPS) may interact with and manipulate objects in the physical world, and therefore should ideally have formal guarantees about their behavior. Performing static-time proofs of safety invariants, however, may be intractable for systems with distributed physical-world interactions. This is further complicated when realistic communication models are considered, for which there may be no bounds on message delays, or even no guarantee that messages will eventually reach their destination. In this work, we address the challenge of proving safety and progress in distributed CPS communicating over an unreliable communication layer. This is done in two parts. First, we show that system safety can be verified by partially relying upon run-time checks, and that dropping messages when the run-time checks fail maintains safety. Second, we use a notion of compatible action chains to guarantee system progress despite unbounded message delays. We demonstrate the effectiveness of our approach on a multi-agent vehicle flocking system, and show that the overhead of the proposed run-time checks is not prohibitive.
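The "drop on failed run-time check" idea can be pictured with a one-dimensional flocking sketch: a received velocity command is applied only if a conservative check shows the separation invariant still holds under worst-case neighbor motion. The invariant, dynamics, and bounds here are illustrative assumptions, not the paper's actual formal model.

    # Apply a coordinated action only if a conservative run-time check passes.
    MIN_SEP = 5.0   # safety invariant: |own_pos - neighbor_pos| >= MIN_SEP
    DT = 0.1        # control period (s)
    V_MAX = 2.0     # assumed bound on any vehicle's speed

    def safe_to_apply(own_pos, own_vel_cmd, neighbor_pos):
        # Worst case over one period: we move as commanded while the neighbor
        # moves toward us at V_MAX (unbounded delays mean our information
        # about the neighbor may be stale, so we assume the worst).
        worst_gap = abs((own_pos + own_vel_cmd * DT) - neighbor_pos) - V_MAX * DT
        return worst_gap >= MIN_SEP

    def on_message(own_pos, neighbor_pos, vel_cmd):
        if safe_to_apply(own_pos, vel_cmd, neighbor_pos):
            return vel_cmd          # apply the coordinated action
        return 0.0                  # drop the message; staying put stays safe

    print(on_message(0.0, 10.0, 1.5))  # applied
    print(on_message(0.0, 5.2, 1.5))   # dropped: could violate separation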
Poison message failure is a mechanism that has been responsible for large-scale failures in both telecommunications and IP networks. A poison message failure can propagate through the network and destabilize it. We apply machine learning and data mining techniques to network fault management: we use the k-nearest neighbor method to identify the poison message failure, and we also propose a "probabilistic" k-nearest neighbor method which outputs a probability distribution over candidate poison message types. Through extensive simulations, we show that the k-nearest neighbor method is very effective in identifying the responsible message type.
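A minimal sketch of the "probabilistic" k-NN idea: instead of returning a single label, return the distribution of labels among the k nearest training points. The two-dimensional features and data here are hypothetical stand-ins for the per-message-type fault symptoms used in the paper.

    # Probabilistic k-nearest neighbor: label distribution over the k nearest.
    import math
    from collections import Counter

    def knn_distribution(train, query, k=3):
        # train: list of (feature_vector, message_type) pairs
        nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
        votes = Counter(label for _, label in nearest)
        return {label: n / k for label, n in votes.items()}

    train = [((0.9, 0.8), "TYPE_A"), ((0.8, 0.9), "TYPE_A"),
             ((0.2, 0.1), "TYPE_B"), ((0.1, 0.3), "TYPE_B")]
    print(knn_distribution(train, (0.7, 0.7)))
    # e.g. {'TYPE_A': 0.67, 'TYPE_B': 0.33}: TYPE_A is the likely poison type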