Biblio
The k-anonymity approach adopted by k-Same face de-identification methods enables these methods to serve their purpose of privacy protection. However, it also forces every group of k original faces to share the same de-identified face, making it impossible to track individuals in a k-Same de-identified video. To address this issue, this paper presents an approach to creating distinguishable de-identified faces. The new approach provides strong privacy protection while producing de-identified faces that are as distinguishable as the original faces.
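For orientation, a minimal sketch of the k-Same averaging principle the abstract builds on, assuming aligned face images stored as flattened vectors; the function and grouping strategy are illustrative, not the authors' implementation:

```python
import numpy as np

def k_same_pixel(faces, k):
    """Replace each face with the average of its k closest faces (k-Same principle).

    faces: (n, d) array of aligned, flattened face images.
    Every face in a selected group shares the same surrogate, which is
    exactly what makes individuals untrackable in the de-identified video.
    """
    faces = np.asarray(faces, dtype=float)
    deidentified = np.empty_like(faces)
    remaining = list(range(faces.shape[0]))
    while remaining:
        probe = remaining[0]
        # distances from the probe to all faces still to be processed
        dists = np.linalg.norm(faces[remaining] - faces[probe], axis=1)
        closest = [remaining[i] for i in np.argsort(dists)[:k]]
        surrogate = faces[closest].mean(axis=0)
        for idx in closest:
            deidentified[idx] = surrogate
            remaining.remove(idx)
    return deidentified
```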
Most Depth Image Based Rendering (DIBR) techniques produce synthesized images that contain non-uniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels. In this paper, a morphological wavelet peak signal-to-noise ratio measure, MW-PSNR, based on morphological wavelet decomposition is proposed to tackle the evaluation of DIBR synthesized images. It is shown that MW-PSNR achieves much higher correlation with human judgment than state-of-the-art image quality measures in this context.
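For reference, both MW-PSNR and the pyramid-based MP-PSNR below build on the standard PSNR, evaluated per decomposition subband or scale before being aggregated (the aggregation scheme is the papers' contribution and is not reproduced here):

\[
\mathrm{PSNR} = 10\log_{10}\!\frac{MAX^2}{\mathrm{MSE}}, \qquad
\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-\hat I(i,j)\bigr)^{2}.
\]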
Most Depth Image Based Rendering (DIBR) techniques produce synthesized images that contain non-uniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels, and there is inherent congruence between the morphological pyramid decomposition scheme and human visual perception. In this paper, a multi-scale measure, the morphological pyramid peak signal-to-noise ratio (MP-PSNR), based on morphological pyramid decomposition is proposed for the evaluation of DIBR synthesized images. It is shown that MP-PSNR achieves much higher correlation with human judgment than state-of-the-art image quality measures in this context.
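A hedged sketch of the multi-scale idea: build a morphological pyramid from each image, compute PSNR per level, and pool across levels. The grey-scale opening filter and the unweighted mean pooling are placeholders; MP-PSNR's actual morphological operators and pooling follow the paper.

```python
import numpy as np
from scipy.ndimage import grey_opening

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def pyramid(img, levels, size=3):
    """Morphological pyramid: filter with a grey-scale opening, then downsample by 2."""
    out = [img]
    for _ in range(levels - 1):
        img = grey_opening(img, size=(size, size))[::2, ::2]
        out.append(img)
    return out

def multi_scale_psnr(reference, synthesized, levels=4):
    ref_pyr = pyramid(reference, levels)
    syn_pyr = pyramid(synthesized, levels)
    # Unweighted mean over scales; the paper's pooling may differ.
    return float(np.mean([psnr(r, s) for r, s in zip(ref_pyr, syn_pyr)]))
```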
Online Social Networks have emerged as an interesting area for analysis, where users with personalized profiles interact and share information with each other. Beyond analyzing structural characteristics, detecting abnormal and anomalous activities in social networks has become essential. These anomalous activities represent the rare and mischievous activities that take place in the network. The graph structure of social networks has encouraged researchers to use various graph metrics to detect anomalous activities. One such measure is the brokerage value, which proved highly beneficial and detected anomalies with high accuracy. Applying the measure to different datasets further verified that the anomalous behavior detected by the proposed measure compares favorably with the measures already proposed in the Oddball algorithm.
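The abstract does not spell out its exact brokerage formulation; as an illustration of the general idea only, a node's brokerage can be approximated by counting the neighbor pairs it bridges, i.e. pairs of its neighbors with no direct tie between them:

```python
from itertools import combinations
import networkx as nx

def brokerage_score(G, node):
    """Number of neighbor pairs connected only through `node`.

    A high score relative to the node's degree suggests it brokers many
    otherwise-disconnected contacts, which can flag anomalous hubs.
    """
    neighbors = list(G.neighbors(node))
    return sum(1 for u, v in combinations(neighbors, 2) if not G.has_edge(u, v))

# Example: rank nodes of a small benchmark graph by the score.
G = nx.karate_club_graph()
print(sorted(G.nodes, key=lambda n: brokerage_score(G, n), reverse=True)[:5])
```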
We propose a dense continuous-time tracking and mapping method for RGB-D cameras. We parametrize the camera trajectory using continuous B-splines and optimize the trajectory through dense, direct image alignment. Our method also directly models rolling shutter in both RGB and depth images within the optimization, which improves tracking and reconstruction quality for low-cost CMOS sensors. Using a continuous trajectory representation has a number of advantages over a discrete-time representation (e.g., camera poses at the frame interval). With splines, fewer variables need to be optimized than with a discrete representation, since the trajectory can be represented with fewer control points than frames. Splines also naturally impose smoothness constraints on derivatives of the trajectory estimate. Finally, the continuous trajectory representation makes it possible to compensate for rolling shutter effects, since a pose estimate is available at any exposure time of an image. Our approach demonstrates superior tracking and reconstruction quality compared to approaches with discrete-time or global shutter assumptions.
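A minimal sketch of why a spline gives a pose at any exposure time: evaluating a uniform cubic B-spline over translation control points yields a smooth position for an arbitrary timestamp (e.g. the exposure time of each rolling-shutter row). The authors additionally interpolate rotations, typically via a cumulative B-spline formulation on SE(3), which is omitted here; the function name and control-point spacing `dt` are illustrative.

```python
import numpy as np

# Basis matrix of a uniform cubic B-spline (standard form).
B = (1.0 / 6.0) * np.array([
    [-1.0,  3.0, -3.0, 1.0],
    [ 3.0, -6.0,  3.0, 0.0],
    [-3.0,  0.0,  3.0, 0.0],
    [ 1.0,  4.0,  1.0, 0.0],
])

def spline_position(control_points, t, dt):
    """Evaluate a continuous camera position at arbitrary time t.

    control_points: (n, 3) translation control points spaced dt apart.
    Only four control points influence any given spline segment, which is
    why the trajectory needs fewer variables than one pose per frame.
    """
    i = int(np.floor(t / dt))            # index of the active segment
    u = t / dt - i                       # normalized time within the segment
    P = control_points[i:i + 4]          # the 4 relevant control points
    U = np.array([u ** 3, u ** 2, u, 1.0])
    return U @ B @ P
```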
The cuttlefish optimization algorithm is a recent metaheuristic, originally formulated for the continuous domain, that provides mechanisms for local and global search. This paper presents a new adaptation of the algorithm to the discrete case, applied to the famous travelling salesman problem, one of the classic discrete combinatorial optimization problems. The adaptation reformulates the solution-generation equations for the different cases of the algorithm. Experimental results on instances from the TSPLIB library, compared with other methods, show the efficiency and quality of this adaptation.
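The abstract does not give the reformulated equations; purely to illustrate the kind of discrete move such an adaptation must rely on, here is a plain 2-opt neighbourhood step on a tour. This is a generic TSP local-search move, not the cuttlefish operators themselves.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_step(tour, dist):
    """Try one random 2-opt move (reverse a segment); keep it if the tour shortens."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    return candidate if tour_length(candidate, dist) < tour_length(tour, dist) else tour
```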
In practical reflector antenna structures, components of the back-up structure (BUS) are selected from a standard library of commonly manufactured steel sections. In this case, the design of the antenna structure is a discrete optimization problem. In most cases, the discrete design is solved by heuristic algorithms, which become computationally expensive as the number of design variables increases. In this paper, the discrete optimization problem is transformed into a continuous one and a gradient-based technique is used to solve it. The proposed method achieves good antenna surface accuracy with all components selected from a standard cross-section list, as demonstrated on a 9 m diameter antenna optimization problem.
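The paper's continuous-to-discrete transfer is not detailed in the abstract; as a minimal illustration of the final step such a workflow needs, each continuously optimized cross-sectional area can be projected onto the nearest entry of a standard section list. The catalogue values below are hypothetical.

```python
import numpy as np

# Hypothetical catalogue of available cross-sectional areas (cm^2).
STANDARD_SECTIONS = np.array([5.0, 7.5, 10.0, 15.0, 20.0, 30.0, 45.0])

def project_to_catalogue(continuous_areas):
    """Snap each continuous design variable to the closest standard section."""
    idx = np.abs(continuous_areas[:, None] - STANDARD_SECTIONS[None, :]).argmin(axis=1)
    return STANDARD_SECTIONS[idx]

# nearest catalogue entries: 5.0, 15.0, 30.0
print(project_to_catalogue(np.array([6.1, 14.2, 33.0])))
```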
Detecting and repairing dirty data is one of the perennial challenges in data analytics, and failure to do so can result in inaccurate analytics and unreliable decisions. Over the past few years, there has been a surge of interest from both industry and academia in data cleaning problems, including new abstractions, interfaces, approaches to scalability, and statistical techniques. To better understand the new advances in the field, we will first present a taxonomy of the data cleaning literature in which we highlight the recent interest in techniques that use constraints, rules, or patterns to detect errors, which we call qualitative data cleaning. We will describe the state-of-the-art techniques and also highlight their limitations with a series of illustrative examples. While such approaches are traditionally distinct from quantitative approaches such as outlier detection, we also discuss recent work that casts them into a statistical estimation framework, including using machine learning to improve the efficiency and accuracy of data cleaning and considering the effects of data cleaning on statistical analysis.
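As a concrete instance of qualitative (rule-based) error detection, a sketch that flags violations of a functional dependency such as zip → city in a pandas DataFrame; the column names and data are illustrative.

```python
import pandas as pd

def fd_violations(df, lhs, rhs):
    """Return rows violating the functional dependency lhs -> rhs.

    Rows sharing the same lhs value but disagreeing on rhs are evidence
    of dirty data (e.g. the same zip code mapped to two cities).
    """
    counts = df.groupby(lhs)[rhs].nunique()
    bad_keys = counts[counts > 1].index
    return df[df[lhs].isin(bad_keys)]

data = pd.DataFrame({
    "zip":  ["10001", "10001", "94107"],
    "city": ["New York", "Newark", "San Francisco"],
})
print(fd_violations(data, "zip", "city"))   # the two conflicting 10001 rows
```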
Botnets are emerging as the most serious cyber threat among the different forms of malware. Today, botnets facilitate many cybercriminal activities such as DDoS, click fraud, and phishing attacks. The main purpose of a botnet is to inflict massive financial damage. Many large organizations, banks, and social networks have become targets of botmasters, and botnets can also be leased out for cybercriminal activities. Recently, considerable research effort has gone into detecting bots, C&C channels, and botmasters; in turn, botmasters strengthen their activities through increasingly sophisticated techniques. Many botnet detection techniques are based on payload analysis, and most of them are ineffective for encrypted C&C channels. In this paper, we explore different categories of botnets and propose a detection methodology that distinguishes bot hosts from normal hosts by analyzing traffic flow characteristics over time intervals instead of inspecting payloads. This makes it possible to detect botnet activity even when encrypted C&C channels are used.
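A hedged sketch of the flow-based, payload-free feature extraction such a methodology relies on: aggregate per-host statistics over fixed time windows and feed them to a classifier. The specific features (flow count, distinct destinations, total bytes) and the 5-minute window are illustrative choices, not necessarily the paper's.

```python
import pandas as pd

def window_features(flows, window="5min"):
    """Aggregate per-host flow statistics over fixed time windows.

    flows needs columns: timestamp (datetime), src_ip, dst_ip, bytes.
    The resulting features can feed a classifier separating bot-like
    from normal hosts without inspecting payloads.
    """
    flows = flows.set_index("timestamp")
    grouped = flows.groupby([pd.Grouper(freq=window), "src_ip"])
    return grouped.agg(
        flow_count=("dst_ip", "size"),
        distinct_dsts=("dst_ip", "nunique"),
        total_bytes=("bytes", "sum"),
    ).reset_index()
```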
Digital forensics is an area of forensic science that applies the scientific method to crime investigation. The thwarting of forensic evidence is known as anti-forensics, whose aim is ambiguous in the sense that it can serve both good and bad purposes. The aim of this project is to simulate digital crime scenarios and carry out forensic and anti-forensic analysis to enhance security. The project uses several forensic and anti-forensic tools and techniques, and the data analyzed were obtained from the results of the simulation. The results reveal that, although investigating digital crime can be difficult, it can be accomplished with the help of sophisticated forensic and anti-forensic tools.
The smart grid is developing rapidly, and attacks on it can cause massive damage; this state of affairs demands security that is continually nurtured by research and analysis. In this work, cyber security is endowed with resilience against the SYN-flooding-induced Denial of Service attack. The proposed secure web server algorithm, embedded in the LPC1768 processor, ensures that smart grid resources are protected from the attack.
In cyberspace, availability of resources is a key component of cyber security, along with confidentiality and integrity. The Distributed Denial of Service (DDoS) attack has become one of the major threats to the availability of resources in computer networks and remains a challenging problem on the Internet. In this paper, we present a detailed study of DDoS attacks on the Internet, specifically attacks that exploit protocol vulnerabilities in the TCP/IP model, their countermeasures, and various DDoS attack mechanisms. We thoroughly review DDoS defenses and analyze the strengths and weaknesses of the different proposed mechanisms.
Data mining has been used as a technology in various applications in engineering, the sciences, and other fields to analyze system data and to solve problems, and its applications further extend to detecting cyber-attacks. We present a simple, low-effort approach, similar in spirit to data mining, that detects email-based phishing attacks. The approach digs into the HTML content of emails and the web pages they reference. The domains and domain-registration authority details of these links, as well as the script code associated with the web pages, are also analyzed to estimate the probability of a phishing attack.
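As an illustration of one check in this family (not necessarily the authors' exact rules): compare the domain shown in an anchor's visible text against the domain it actually links to, a classic phishing indicator. The parser below is deliberately simplified.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class AnchorCollector(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags in an email body."""
    def __init__(self):
        super().__init__()
        self.links, self._href = [], None
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
    def handle_data(self, data):
        if self._href is not None:
            self.links.append((self._href, data.strip()))
            self._href = None

def suspicious_links(html):
    """Flag anchors whose visible text names a different domain than the href."""
    parser = AnchorCollector()
    parser.feed(html)
    flagged = []
    for href, text in parser.links:
        href_domain = urlparse(href).netloc.lower()
        if href_domain and "." in text and href_domain not in text.lower():
            flagged.append((href, text))
    return flagged

print(suspicious_links('<a href="http://evil.example">www.mybank.com</a>'))
```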
Cybercrime investigation integrates two technologies: a theoretical methodology and practical tools. The first is the theoretical digital forensic methodology that encompasses the steps to investigate a cybercrime. The second is the practical development of digital forensic tools that sequentially and systematically analyze digital devices to extract the evidence needed to prove the crime. This paper explores the development of a digital forensic framework, combines the advantages of twenty-five past forensic models, and derives an algorithm to create a new digital forensic model. The proposed model provides the following advantages: a standardized method for investigation, a model theory that can be converted directly into a tool, a history lookup facility, cost and time minimization, and applicability to any type of digital crime investigation.
The rise of malware attacks and data leakage is putting the Internet at higher risk. Digital forensic examiners responsible for cyber security incidents need to continually update their processes, knowledge, and tools due to changing technology. Such attack activities can be investigated by means of Digital Triage Forensics (DTF) methodologies. DTF is a procedural model for the crime scene investigation of digital forensic applications: it serves as a way of gathering quick intelligence and presents methods for conducting pre/post-blast investigations. A DTF framework for a Windows malware forensic toolkit is further proposed, based on the ISO/IEC 27037:2012 guidelines for specific activities in the handling of digital evidence. The argument is made for the careful use of digital forensic investigations to improve the overall quality of expert examiners. This solution may improve the speed and quality of pre/post-blast investigations. By considering how triage solutions are being implemented in digital investigations, this study presents a critical analysis of malware forensics. The analysis serves as feedback for integrating digital forensic considerations and specifies directions for further standardization efforts.
In this paper, we present Deola, an online system for author entity linking with DBLP. Unlike most existing entity linking systems, which focus on linking entities to Wikipedia and depend largely on features specific to Wikipedia (e.g., Wikipedia articles), Deola links author names appearing in web documents from the computer science domain to their corresponding entities in the DBLP network. This task is helpful for enriching the DBLP network and for understanding domain-specific documents, and it is challenging due to name ambiguity and the limited knowledge available in DBLP. Given a fragment of a domain-specific web document in computer science, Deola returns the mapping entity in DBLP for each author name appearing in the input document.
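To make the linking task concrete, a toy sketch of candidate disambiguation: score each candidate DBLP entity for an ambiguous name by how many of its context terms (co-authors, venues, topics) appear in the input document. The candidate keys and scoring heuristic are hypothetical and much simpler than Deola's actual model.

```python
def link_author(mention, document, candidates):
    """Pick the DBLP candidate whose context best matches the document.

    candidates maps a DBLP key to its context terms.  The heuristic just
    counts how many of those terms occur in the document text.
    """
    doc = document.lower()
    best_key, best_score = None, 0
    for key, context_terms in candidates.items():
        score = sum(term.lower() in doc for term in context_terms)
        if score > best_score:
            best_key, best_score = key, score
    return best_key

candidates = {                       # hypothetical DBLP entries for "Wei Wang"
    "dblp.org/pid/a": ["data mining", "KDD", "Jiawei Han"],
    "dblp.org/pid/b": ["wireless networks", "INFOCOM"],
}
print(link_author("Wei Wang", "A survey on data mining presented at KDD ...", candidates))
```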
In 2013 West Virginia University consolidated a few individually purchased college and individual licenses for Qualtrics survey software into a single campus-wide license that includes all of our colleges and regional campuses, to be implemented as a campus standard and enterprise solution for our campus. Due to some staff reorganizations over the past two years, I and the other Qualtrics brand administrators at WVU are all new to this administrative role. In this paper, I plan to share lessons that I learned while (1) participating in developing and documenting new business processes, (2) transitioning to serve as the main brand administrator, (3) cleaning up user accounts that had not been actively managed for years, and (4) working with the Qualtrics vendor, local group administrators, my IT colleagues, and campus users as we refine a set of best practices for product usage and administration. Although this paper discusses a campus-wide implementation of Qualtrics survey software, I feel that the lessons I learned during this process could be extrapolated to the development of best practices for other products or IT services.
In Pixar's Finding Dory, we are introduced to a new character: Hank the Octopus. This is a very different character from any Pixar has animated before. Our directors demanded both precise control and graceful, clean silhouettes. The reference artwork we were given showed complex curves between arms and body without any disjointed shapes or breaks in form, and video of octopuses in motion reveals an infinitely malleable creature capable of an enormous shape language. This art direction required a small group of TDs to create a control scheme that was sensible, flexible, and offered a new level of control so that animators could bring Hank to life. We had to think deeply about everything from the tips of the fingers to how the tentacles connect to the mouth corners and eye sockets. Each of these issues raised concerns around design, deformation, and finally how the end user can manipulate such complexity effectively.
Walrasian equilibrium prices have a remarkable property: they allow each buyer to purchase a bundle of goods that she finds the most desirable, while guaranteeing that the induced allocation over all buyers will globally maximize social welfare. However, this clean story has two caveats. * First, the prices may induce indifferences. In fact, the minimal equilibrium prices necessarily induce indifferences. Accordingly, buyers may need to coordinate with one another to arrive at a socially optimal outcome—the prices alone are not sufficient to coordinate the market. * Second, although natural procedures converge to Walrasian equilibrium prices on a fixed population, in practice buyers typically observe prices without participating in a price computation process. These prices cannot be perfect Walrasian equilibrium prices, but instead somehow reflect distributional information about the market. To better understand the performance of Walrasian prices when facing these two problems, we give two results. First, we propose a mild genericity condition on valuations under which the minimal Walrasian equilibrium prices induce allocations which result in low over-demand, no matter how the buyers break ties. In fact, under genericity the over-demand of any good can be bounded by 1, which is the best possible at the minimal prices. We demonstrate our results for unit demand valuations and give an extension to matroid based valuations (MBV), conjectured to be equivalent to gross substitute valuations (GS). Second, we use techniques from learning theory to argue that the over-demand and welfare induced by a price vector converge to their expectations uniformly over the class of all price vectors, with respective sample complexity linear and quadratic in the number of goods in the market. These results make no assumption on the form of the valuation functions. These two results imply that under a mild genericity condition, the exact Walrasian equilibrium prices computed in a market are guaranteed to induce both low over-demand and high welfare when used in a new market where agents are sampled independently from the same distribution, whenever the number of agents is larger than the number of commodities in the market.
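For reference, the objects the abstract reasons about can be written in standard notation (a sketch under unit supplies; the paper's exact definitions may differ). Buyer $i$'s demand correspondence at prices $p$ over goods $G$, and the over-demand of good $j$ under a tie-breaking choice $x_i \in D_i(p)$, are

\[
D_i(p) = \arg\max_{S \subseteq G}\Bigl(v_i(S) - \sum_{j \in S} p_j\Bigr), \qquad
\mathrm{OD}_j(p, x) = \bigl|\{\, i : j \in x_i \,\}\bigr| - 1,
\]

and $p$ is a Walrasian equilibrium price vector if some choice of demanded bundles allocates every good at most once, with unallocated goods priced at zero.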
Java bytecode obfuscates the original structure of a Java expression in the source code. A simple expression such as (c1 || c2) or (c1 && c2) may be captured in the bytecode in 4 different ways (as shown in the paper), and correspondingly there are four different ways it may be reconstructed when the bytecode is converted back into Java source code. Further, although gotos are not permitted in Java source code, the bytecode is full of gotos. If you were to blindly convert the bytecode into Java source code, you would replace a goto by a labeled break. A labeled break has the advantage that it only allows you to break out of a block structure and (unlike a setjmp) does not permit you to jump arbitrarily into a block structure. So while the data structures used in the regenerated Java source code are still relatively "clean", arbitrary usage of labeled breaks makes for unreadable code (as we show in the paper). This is a point of concern, since decompilation is generally related to debugging code. Instead of dumping arbitrary labeled breaks, we try to reconstruct the original expression in terms of && and || clauses as well as the ternary operator "?:" (c0 ? c1 : c2). Thus our goal is quite simply to regenerate, without using gotos or labeled breaks, expressions as close to the original as possible (an exact match cannot be guaranteed). In this paper we explain the state of the art in Java decompilers for decoding complex expressions and then present our solution. We have implemented the algorithms described in this paper and report our experience with them.
Modelling network traffic is gaining importance as a way to counter modern security threats of ever-increasing sophistication. It is, however, surprisingly difficult and costly to construct reliable classifiers on top of telemetry data due to the variety and complexity of signals that no human can manage to interpret in full. Obtaining training data with a sufficiently large and varied body of labels can thus be seen as a prohibitive problem. The goal of this work is to detect infected computers by observing their HTTP(S) traffic collected from network sensors, which are typically proxy servers or network firewalls, while relying on only minimal human input in the model training phase. We propose a discriminative model that makes decisions based on all of a computer's traffic observed during a predefined time window (5 minutes in our case). The model is trained on traffic samples collected over equally sized time windows for a large number of computers, where the only labels needed are (human) verdicts about the computer as a whole (presumed infected vs. presumed clean). As part of training, the model itself learns discriminative patterns in traffic targeted at individual servers and constructs the final high-level classifier on top of them. We show that the classifier performs with very high precision and demonstrate that the learned traffic patterns can be interpreted as Indicators of Compromise. We implement the discriminative model as a neural network with a special structure reflecting two stacked multi-instance problems. The main advantages of the proposed configuration include not only improved accuracy and the ability to learn from gross labels, but also automatic learning of the server types (together with their detectors) that are typically visited by infected computers.
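A hedged sketch of a two-level multi-instance architecture in the spirit described above: flow features are pooled into per-server representations, which are pooled again into one vector per computer and classified from a single computer-level label. The layer sizes, max pooling, and `TwoLevelMIL` name are illustrative, not the paper's exact network.

```python
import torch
import torch.nn as nn

class TwoLevelMIL(nn.Module):
    """Two stacked multi-instance levels: flows -> per-server bags -> computer."""
    def __init__(self, flow_dim, hidden=64):
        super().__init__()
        self.flow_net = nn.Sequential(nn.Linear(flow_dim, hidden), nn.ReLU())
        self.server_net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, bags):
        # bags: one (n_flows, flow_dim) tensor per server contacted in the window
        server_embs = [self.flow_net(b).max(dim=0).values for b in bags]
        servers = self.server_net(torch.stack(server_embs))
        computer = servers.max(dim=0).values
        return self.classifier(computer)     # logit: infected vs. clean

model = TwoLevelMIL(flow_dim=10)
logit = model([torch.randn(7, 10), torch.randn(3, 10)])
```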
As the number of mobile devices populating the Internet keeps growing at a tremendous pace, context-aware services have gained a lot of traction thanks to the wide set of potential use cases they can be applied to. Environmental sensing applications, emergency services, and location-aware messaging are just a few examples of applications that are expected to grow in popularity in the next few years. The MobilityFirst future Internet architecture, a clean-slate Internet architecture design, provides the necessary abstractions for creating and managing context-aware services. Starting from these abstractions, we design a context services framework based on three fundamental mechanisms: an easy way to specify context using human-understandable techniques, i.e., names; an architecture-supported management mechanism that makes the service convenient to deploy and efficient to manage; and a native delivery system that reduces the load on network components and the overhead cost of deploying such applications. In this paper, we present an emergency alert system for vehicles assisting first responders that exploits users' location awareness to deliver quick and reliable alert messages to interested vehicles. By deploying a demo of the system on a nationwide testbed, we aim to provide a better understanding of the dynamics involved in our framework.
The power system forms the backbone of modern society, and its security is of paramount importance to a nation's economy. However, the power system is vulnerable to intelligent attacks by attackers who have sufficient knowledge of how the power system is operated, monitored, and controlled. This paper proposes a game-theoretic approach to explore and evaluate strategies for the defender to protect power systems against such intelligent attacks. First, a risk assessment is presented to quantify the physical impacts inflicted by attacks. Based on the results of the risk assessment, the paper represents the interactions between the attacker and the defender by extending the current zero-sum game model to more generalized game models for diverse assumptions concerning the attacker's motivation. The attacker's and defender's equilibrium strategies are obtained by solving these game models. In addition, a numerical illustration is presented to support the theoretical results.
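For the zero-sum baseline the paper starts from, the defender's equilibrium mixed strategy can be computed by linear programming. The payoff matrix below is a toy placeholder; in the paper the payoffs come from the risk assessment.

```python
import numpy as np
from scipy.optimize import linprog

def defender_strategy(payoff):
    """Equilibrium mixed strategy for the row player of a zero-sum game.

    payoff[i, j]: defender's gain when she plays i and the attacker plays j.
    Solve: max v  s.t.  sum_i x_i * payoff[i, j] >= v for all j,
           sum_i x_i = 1,  x >= 0.
    """
    m, n = payoff.shape
    # Variables: x_1..x_m, v.  linprog minimizes, so minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])   # v - x^T A[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]                      # mixed strategy, game value

strategy, value = defender_strategy(np.array([[3.0, -1.0], [-2.0, 4.0]]))
```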
Forming, in a decentralized fashion, an optimal network topology while balancing multiple, possibly conflicting objectives such as cost, high performance, security, and resilience to viruses is a challenging endeavor. In this paper, we take a network-formation-game approach to network design in which each player, for instance an autonomous system in the Internet, aims to collectively minimize the cost of installing links, of protecting against viruses, and of assuring connectivity. In the game, minimizing virus risk as well as connectivity costs results in sparse graphs. We show that the Nash equilibria are trees that, according to the Price of Anarchy (PoA), are close to the global optimum, while the worst-case Nash equilibrium and the global optimum may differ significantly for small infection rates and link installation costs. Moreover, the types of trees, in both the Nash equilibria and the optimal solution, depend on the virus infection rate, which provides new insights into how viruses spread: for a high infection rate τ, the path graph is the worst-case and the star graph the best-case Nash equilibrium. However, for small and intermediate values of τ, trees different from the path and star graphs may be optimal.
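For completeness, the Price of Anarchy referred to above is the standard ratio between the social cost of the worst Nash equilibrium and that of the optimal topology, with $C(\cdot)$ the total cost of a network configuration:

\[
\mathrm{PoA} = \frac{\max_{s \in \mathcal{NE}} C(s)}{\min_{s} C(s)}.
\]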
The main threats to smartphone security are privacy disclosure and malicious charging software that deducts fees abnormally. These threats exploit vulnerabilities in the earlier permission mechanism to attack mobile phones and, moreover, may invoke hardware to spy on user privacy invisibly in the background. Since the existing Android operating system does not let users monitor and audit system resources, a dynamic supervisory mechanism for process behavior based on the Dalvik VM is proposed to solve this problem. The existing Android framework layer and application layer are modified and extended, and dedicated low-level system services are used to realize dynamic supervision of the process behavior of the Dalvik VM. Via this mechanism, each process's use of system resources and the behavior of each app process can be monitored and analyzed in real time. The mechanism reduces security threats at the system level and pinpoints which process is using which system resource. It detects and intercepts behavior before or at the moment it occurs, thereby protecting private information, important data, and sensitive system behavior. Extensive experiments have demonstrated the accuracy, effectiveness, and robustness of our approach.