Biblio
Several efforts are currently active in dealing with scenarios combining fog and cloud computing, a significant proportion of which are devoted to controlling and managing the resulting system. Although many challenging aspects must be considered in the design of an efficient management solution, there is no doubt that, whatever the solution is, the quality delivered to users when executing services and the security guarantees provided to them are two key aspects of the whole design. Unfortunately, these two requirements often conflict, which makes a solution that suitably addresses both a challenging task. In this paper, we propose a decoupled transversal security strategy, referred to as DCF, as a novel architecture-oriented policy for handling the QoS-security trade-off, particularly designed for combined fog-to-cloud systems, and we specifically highlight its impact on the delivered QoS.
A Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt the normal activity of a targeted server, service or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic. DDoS attacks achieve effectiveness by utilizing multiple compromised computer systems as sources of attack traffic. Exploited machines can include computers and other networked resources such as IoT devices. From a high level, a DDoS attack resembles a traffic jam clogging up a highway, preventing regular traffic from arriving at its desired destination.
Every day, DoS/DDoS attacks increase all over the world, and the methods attackers use change continuously. This growth and variety in attacks adversely affect governments, institutions, organizations and corporations: every successful attack costs them money and reputation. This paper presents an introduction to a method that can show what kind of attack occurred and where it originated. Because attack vectors change and vary, this is attempted by applying the DBSCAN clustering algorithm to network traffic.
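A minimal sketch of the clustering idea behind this abstract, not the paper's actual pipeline: flow-level features (the feature names below are illustrative assumptions) are standardized and clustered with DBSCAN, and density outliers (label -1) can be flagged as potential DoS/DDoS traffic.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical per-flow features: [packets/s, bytes/s, distinct source IPs]
flows = np.array([
    [120, 9.6e4, 3],       # normal web traffic
    [110, 8.8e4, 2],
    [50_000, 4.0e7, 900],  # flood-like burst
    [130, 1.0e5, 4],
])

X = StandardScaler().fit_transform(flows)
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(X)

for flow, label in zip(flows, labels):
    tag = "outlier (possible attack)" if label == -1 else f"cluster {label}"
    print(flow, "->", tag)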
A Denial-of-Service attack (DoS attack) is an attack on a network in which an attacker tries to disrupt the availability of network resources by overwhelming the target network with attack packets. A DoS attack is typically launched from a single source, while a Distributed Denial-of-Service attack (DDoS attack), as the name suggests, uses multiple sources to flood the victim's incoming traffic. Typically, such attacks use vulnerabilities of the Domain Name System (DNS) protocol and IP spoofing to disrupt the normal functioning of a service provider or Internet user. Attacks involving DNS, or exploiting vulnerabilities of DNS, are known as DNS-based DDoS attacks. Many of the proposed DNS-based DDoS solutions try to prevent/mitigate such attacks using intelligent non-``network layer'' (typically application layer) protocols. Utilizing the flexibility and programmability of Software Defined Networks (SDN), this proposed doctoral research intends to make the underlying network intelligent enough to prevent DNS-based DDoS attacks.
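An illustrative sketch of one network-level heuristic in this spirit (function and variable names are assumptions, not the thesis's actual SDN design): track outstanding DNS query IDs per client and drop responses that match no prior query, the pattern produced by spoofed reflection/amplification floods.

outstanding = {}  # (client_ip, txid) -> count of unanswered queries

def on_dns_packet(src_ip, dst_ip, txid, is_response):
    """Return True to forward the packet, False to drop it."""
    if not is_response:
        key = (src_ip, txid)
        outstanding[key] = outstanding.get(key, 0) + 1
        return True
    key = (dst_ip, txid)
    if outstanding.get(key, 0) > 0:
        outstanding[key] -= 1
        return True
    return False  # unsolicited response: likely spoofed/reflected

print(on_dns_packet("10.0.0.5", "8.8.8.8", 0x1A2B, is_response=False))  # True
print(on_dns_packet("8.8.8.8", "10.0.0.5", 0x1A2B, is_response=True))   # True
print(on_dns_packet("8.8.8.8", "10.0.0.5", 0x9999, is_response=True))   # False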
Internet of Things (IoT) offers new opportunities for business, technology and science, but it also raises new challenges in terms of security and privacy, mainly because of the inherent characteristics of this environment: IoT devices come from a variety of manufacturers and operators, and these devices suffer from constrained resources in terms of computation, communication and storage. In this paper, we address the problem of trust establishment for IoT and propose a security solution that consists of a secure bootstrap mechanism for device identification as well as a message attestation mechanism for aggregate response validation. To achieve both security requirements, we approach the problem in a confined environment, named SubNets of Things (SNoT), on which various devices depend. In this context, devices are uniquely and securely identified thanks to their environment and their role within it. Additionally, the underlying message authentication technique features signature aggregation and hence generates one compact response on behalf of all devices in the subnet.
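A toy illustration of aggregate message authentication, using XOR-aggregated HMAC tags (in the spirit of Katz-Lindell aggregate MACs) as a stand-in for the paper's signature aggregation scheme: each device authenticates its reading, the subnet returns one compact tag, and a verifier holding the device keys checks it with a single comparison. Keys and messages are made up.

import hmac, hashlib

def tag(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keys = {f"dev{i}": bytes([i]) * 32 for i in range(3)}
msgs = {f"dev{i}": f"reading-{i}".encode() for i in range(3)}

# Aggregator: one compact tag on behalf of all devices in the subnet.
agg = bytes(32)
for dev in keys:
    agg = xor(agg, tag(keys[dev], msgs[dev]))

# Verifier recomputes the aggregate from the claimed messages and keys.
expected = bytes(32)
for dev in keys:
    expected = xor(expected, tag(keys[dev], msgs[dev]))
print(hmac.compare_digest(agg, expected))  # True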
The growing number of devices we interact with requires a convenient yet secure solution for user identification, authorization and authentication. Current approaches are cumbersome, susceptible to eavesdropping and relay attacks, or energy inefficient. In this paper, we propose a body-guided communication mechanism to secure every touch when users interact with a variety of devices and objects. The method is implemented in a hardware token worn on the user's body, for example in the form of a wristband, which interacts with a receiver embedded inside the touched device through a body-guided channel established when the user touches the device. Experiments show low-power (µJ/bit) operation while achieving superior resilience to attacks, with the received signal at the intended receiver through the body channel being at least 20 dB higher than that of an adversary at cm range.
Formal security verification of firmware interacting with hardware in modern Systems-on-Chip (SoCs) is a critical research problem. Such verification faces the following challenges: (1) design complexity and heterogeneity, (2) semantic gaps between software and hardware, (3) concurrency between firmware/hardware and between Intellectual Property blocks (IPs), and (4) expensive bit-precise reasoning. In this paper, we present a co-verification methodology to address these challenges. We model hardware using the Instruction-Level Abstraction (ILA), capturing firmware-visible behavior at the architecture level. This enables integrating hardware behavior with firmware in each IP into a single thread. The co-verification with multiple firmware across IPs is formulated as a multi-threaded program verification problem, for which we leverage software verification techniques. We also propose an optimization using abstraction to prevent expensive bit-precise reasoning. The evaluation of our methodology on an industry SoC Secure Boot design demonstrates its applicability in SoC security verification.
We explore a new security model for secure computation on large datasets. We assume that two servers have been employed to compute on private data that was collected from many users, and, in order to improve the efficiency of their computation, we establish a new tradeoff with privacy. Specifically, instead of claiming that the servers learn nothing about the input values, we claim that what they do learn from the computation preserves the differential privacy of the input. Leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory, while also providing a formal bound on what is learned from the leakage. We then demonstrate that this leakage is useful in a broad class of computations. We show that computations such as histograms, PageRank and matrix factorization, which can be performed in common graph-parallel frameworks such as MapReduce or Pregel, benefit from our relaxation. We implement a protocol for securely executing graph-parallel computations, and evaluate its performance on the three examples mentioned above. We demonstrate marked improvement over prior implementations for these computations.
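A small illustration of the relaxation described above: rather than hiding everything, one of the example computations (a histogram) is released with differential privacy via the standard Laplace mechanism. This is a generic DP sketch, not the paper's two-server protocol.

import numpy as np

def dp_histogram(values, bins, epsilon, rng=np.random.default_rng(0)):
    """Histogram with Laplace noise; each user affects one bin, so the
    L1 sensitivity is 1 and the noise scale is 1/epsilon."""
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return noisy, edges

values = np.array([1, 1, 2, 5, 5, 5, 9])
noisy_counts, edges = dp_histogram(values, bins=3, epsilon=0.5)
print(noisy_counts)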
An important ingredient of a successful recipe for solving machine learning problems is the availability of a suitable dataset. However, such a dataset may have to be extracted from large unstructured and semi-structured data such as programming code, scripts, and text. In this work, we propose a plug-in based, extensible feature extraction framework, which we have prototyped as a tool. The proposed framework is demonstrated by extracting features from two different sources of semi-structured and unstructured data: the semi-structured data comprised web-page and script-based data, whereas the unstructured data was taken from email data for spam filtering. The usefulness of the tool was also assessed in terms of ease of programming.
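A minimal sketch of a plug-in style feature extraction framework of the kind described; the registry, plug-in names, and features below are illustrative assumptions, not the tool's actual API.

from typing import Callable, Dict

EXTRACTORS: Dict[str, Callable[[str], dict]] = {}

def extractor(name: str):
    """Decorator that registers a feature-extraction plug-in."""
    def register(fn: Callable[[str], dict]):
        EXTRACTORS[name] = fn
        return fn
    return register

@extractor("script")
def script_features(text: str) -> dict:
    return {"num_lines": text.count("\n") + 1,
            "has_eval": "eval(" in text}

@extractor("email")
def email_features(text: str) -> dict:
    words = text.lower().split()
    return {"num_words": len(words),
            "spammy_words": sum(w in {"free", "winner"} for w in words)}

def extract_all(text: str) -> dict:
    # Run every registered plug-in and merge the resulting feature dicts.
    features = {}
    for name, fn in EXTRACTORS.items():
        features.update({f"{name}.{k}": v for k, v in fn(text).items()})
    return features

print(extract_all("Congratulations winner\neval(payload)"))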
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed that follow a similar Embedding&MLP paradigm. In these methods, large-scale sparse input features are first mapped into low-dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated and fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. This fixed-length vector becomes a bottleneck, making it difficult for Embedding&MLP methods to capture a user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model, Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies across different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques, mini-batch aware regularization and a data adaptive activation function, which help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets, as well as an Alibaba production dataset with over 2 billion samples, demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system at Alibaba, serving the main traffic.
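A condensed sketch of DIN's local activation idea: the candidate ad's embedding scores each of the user's historical behavior embeddings, so the pooled user representation varies per ad. Dimensions and the scoring MLP below are simplified assumptions, not the production configuration.

import torch
import torch.nn as nn

class LocalActivationUnit(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Scores each (behavior, ad) pair from behavior, ad, and their product.
        self.score = nn.Sequential(nn.Linear(3 * dim, 36), nn.ReLU(),
                                   nn.Linear(36, 1))

    def forward(self, behaviors, ad):
        # behaviors: (B, T, d) history embeddings; ad: (B, d) candidate ad.
        ad_exp = ad.unsqueeze(1).expand_as(behaviors)
        w = self.score(torch.cat([behaviors, ad_exp, behaviors * ad_exp], -1))
        return (w * behaviors).sum(dim=1)  # ad-dependent user interest vector

unit = LocalActivationUnit(dim=8)
user_interest = unit(torch.randn(2, 5, 8), torch.randn(2, 8))
print(user_interest.shape)  # torch.Size([2, 8])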
Traditional image compressed sensing (CS) coding frameworks solve an inverse problem based on measurement coding tools (prediction, quantization, entropy coding, etc.) and an optimization-based image reconstruction method. These CS coding frameworks face the challenge of improving coding efficiency at the encoder, while also suffering from high computational complexity at the decoder. In this paper, we take a step forward and propose a novel deep network based CS coding framework for natural images, which consists of three sub-networks: a sampling sub-network, an offset sub-network and a reconstruction sub-network, responsible for sampling, quantization and reconstruction, respectively. By cooperatively utilizing these sub-networks, the framework can be trained end-to-end with a proposed rate-distortion optimization loss function. The proposed framework not only improves the coding performance, but also dramatically reduces the computational cost of image reconstruction. Experimental results on benchmark datasets demonstrate that the proposed method achieves superior rate-distortion performance against state-of-the-art methods.
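A toy sketch of the three-stage structure described above (sampling, quantization offset, reconstruction) as a single end-to-end torch model. Layer sizes and the rounding-based quantizer are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class TinyCSCodec(nn.Module):
    def __init__(self, n=64, m=16):
        super().__init__()
        self.sample = nn.Linear(n, m, bias=False)   # sampling sub-network
        self.offset = nn.Linear(m, m)               # offset sub-network
        self.recon = nn.Sequential(nn.Linear(m, 128), nn.ReLU(),
                                   nn.Linear(128, n))  # reconstruction

    def forward(self, x):
        y = self.sample(x)
        # Straight-through rounding keeps quantization differentiable.
        yq = y + (torch.round(y) - y).detach()
        return self.recon(self.offset(yq))

model = TinyCSCodec()
x = torch.randn(4, 64)                       # 4 flattened 8x8 image blocks
loss = nn.functional.mse_loss(model(x), x)   # distortion term of an R-D loss
loss.backward()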
With the rapid proliferation of mobile users, spectrum scarcity has become one of the issues that must be addressed. Cognitive radio technology addresses this problem by allowing opportunistic use of spectrum bands. In cognitive radio networks, unlicensed users can use licensed channels without causing harmful interference to licensed users. However, cognitive radio networks can be subject to different security threats that cause severe performance degradation. One of the main attacks on these networks is primary user emulation, in which a malicious node emulates the characteristics of the primary user's signals. In this paper, we propose a technique for detecting this attack based on RSS-based localization with maximum likelihood estimation. The simulation results show that the proposed technique outperforms the RSS-based localization method in detecting the primary user emulation attacker.
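A compact sketch of RSS-based localization with maximum likelihood estimation under a log-distance path-loss model with Gaussian shadowing (the model constants below are illustrative assumptions). With i.i.d. Gaussian noise, the ML position estimate minimizes the squared RSS residuals, here found by grid search.

import numpy as np

P0, N_EXP = -40.0, 3.0  # RSS at 1 m (dBm) and path-loss exponent (assumed)

def rss_model(pos, sensors):
    d = np.linalg.norm(sensors - pos, axis=1)
    return P0 - 10 * N_EXP * np.log10(np.maximum(d, 1e-3))

def ml_locate(rss, sensors, grid_step=0.5, size=20.0):
    # Grid search for the position maximizing the Gaussian likelihood.
    xs = np.arange(0, size, grid_step)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            err = np.sum((rss - rss_model(np.array([x, y]), sensors)) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

sensors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
true_pos = np.array([6.0, 11.0])
rss = rss_model(true_pos, sensors) + np.random.default_rng(1).normal(0, 1, 4)
print(ml_locate(rss, sensors))  # estimate near (6, 11)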
The gap is widening between the processor clock speed of end-system architectures and network throughput capabilities. It is now physically possible to provide single-flow throughput of up to 100 Gbps, and 400 Gbps will soon be possible. Most current research into high-speed data networking focuses on managing expanding network capabilities within datacenter Local Area Networks (LANs) or efficiently multiplexing millions of relatively small flows through a Wide Area Network (WAN). However, datacenter hyper-convergence places high-throughput networking workloads on general-purpose hardware, and distributed High-Performance Computing (HPC) applications require time-sensitive, high-throughput end-to-end flows (also referred to as ``elephant flows'') to occur over WANs. For these applications, the bottleneck is often the end-system and not the intervening network. Since the problem of the end-system bottleneck was uncovered, many techniques have been developed to address this mismatch, with varying degrees of effectiveness. In this survey, we describe the most promising techniques, beginning with network architectures and NIC design, continuing with operating-system and end-system architectures, and concluding with clean-slate protocol design.

