Bibliography
The current trend among IoT users is toward consuming services and data externally, because voluminous processing demands resource-rich machines. Instead of relying on a cloud with poor connectivity or limited bandwidth, IoT users prefer cloudlet-based fog computing. However, the choice of cloudlet depends solely on its trust and reliability. In practice, even though a cloudlet possesses a required trusted platform module (TPM), we argue that the presence of a TPM is not enough to make the cloudlet trustworthy, as the TPM supports only the primitive security of the bootstrap. Besides uncertainty in security, other uncertain network conditions (e.g. network bandwidth, latency and the expected time to complete a service request for cloud-based services) may also prevail for the cloudlets. Therefore, in order to evaluate the trust value of multiple cloudlets under uncertainty, this paper proposes an empirical process for trust evaluation. This is followed by a measure of trust-based reputation of cloudlets through computational intelligence techniques such as fuzzy logic and ant colony optimization (ACO). In the process, fuzzy-logic-based inference and membership evaluation of trust are presented. In addition, ACO and its pheromone communication across different colonies are modeled with multiple cloudlets. Finally, a measure of affinity or popular trust and reputation of the cloudlets is also proposed. The computationally intelligent approaches are investigated in terms of performance in the context of applications spanning multiple cloudlets. Hence, the contribution is directed towards building a trusted cloudlet-based fog platform.
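As an illustration of the two computational-intelligence ingredients this abstract names, the following minimal Python sketch combines triangular fuzzy membership for a cloudlet trust score with an ACO-style pheromone update; the membership breakpoints and the evaporation rate are assumed values, not the paper's.

# Illustrative sketch only, not the paper's implementation.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_trust(score):
    """Map a raw trust score in [0, 1] to fuzzy memberships (assumed breakpoints)."""
    return {
        "low":    tri(score, -0.01, 0.0, 0.5),
        "medium": tri(score, 0.0, 0.5, 1.0),
        "high":   tri(score, 0.5, 1.0, 1.01),
    }

RHO = 0.1  # assumed evaporation rate

def update_pheromone(pheromone, cloudlet, reward):
    """Evaporate all trails, then reinforce the trail of the chosen cloudlet."""
    for c in pheromone:
        pheromone[c] *= (1.0 - RHO)
    pheromone[cloudlet] += reward
    return pheromone

if __name__ == "__main__":
    print(fuzzify_trust(0.72))
    print(update_pheromone({"c1": 1.0, "c2": 1.0}, "c1", 0.5))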
Document integrity and origin for E2E S2S (smart-to-smart) communication in the IoT-cloud have recently received considerable attention because of their importance in real-world applications. Maintaining integrity protects decisions made on the basis of these message/image documents. Authentication and integrity solutions have been developed to detect or prevent any modification in the exchange of documents between E2E S2S endpoints. However, none of the proposed schemes appears to be designed securely enough to prevent known attacks, or to be applicable to smart devices. We propose a robust scheme that protects the integrity of documents for each user's session by integrating HMAC-SHA-256, handwritten-feature extraction using a local binary pattern, and a one-time random pixel sequence based on RC4 that randomly hides authentication codes using least-significant-bit (LSB) embedding. The proposed scheme provides users with a one-time bio-key, robust message anonymity and a disappearing authentication code that does not draw the attention of eavesdroppers. Thus, the scheme improves data integrity for a user's message/image documents, the key-agreement phase, bio-key management and a one-time message/image document code for each user's session. The concept of stego-anonymity is also introduced to provide additional security for the hashed value. Finally, security analysis and experimental results demonstrate the invulnerability and efficiency of the proposed scheme.
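A minimal sketch of the embedding idea described above, not the authors' full scheme: an HMAC-SHA-256 authentication code is hidden in the least-significant bits of pixels visited in a keyed pseudo-random order. For brevity, Python's seeded PRNG stands in for the RC4-based sequence generator used in the paper.

import hmac, hashlib, random

def embed_auth_code(pixels, key, message):
    code = hmac.new(key, message, hashlib.sha256).digest()   # 32-byte MAC
    bits = [(byte >> i) & 1 for byte in code for i in range(8)]
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)          # keyed one-time pixel sequence
    stego = list(pixels)
    for bit, idx in zip(bits, order):
        stego[idx] = (stego[idx] & ~1) | bit   # overwrite the pixel's LSB
    return stego

if __name__ == "__main__":
    cover = list(range(512))                   # toy 1-D "image"
    out = embed_auth_code(cover, b"session-bio-key", b"document bytes")
    print(sum(a != b for a, b in zip(cover, out)), "pixels changed")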
The Internet of Things (IoT) is a revolutionary, expandable network that has brought many advantages, improving the Quality of Life (QoL) of individuals. However, the IoT also carries dangers: hackers can find security gaps in users' IoT devices, which are still not secure enough, and intrude into them for malicious activities. As a result, they can control many connected devices in an IoT network, turning the IoT into a Botnet of Things (BoT). In a botnet, hackers can launch several types of attacks, such as the well-known Distributed Denial of Service (DDoS) and Man in the Middle (MitM) attacks, and/or spread various types of malicious software (malware) to the compromised devices of the IoT network. In this paper, we propose a novel hybrid Artificial Intelligence (AI)-powered honeynet that improves the IoT botnet detection rate with the use of Cloud Computing (CC). This security mechanism uses Machine Learning (ML) techniques such as Logistic Regression (LR) to predict potential botnet existence. It can also be adopted by other conventional security architectures to prevent hackers from creating large botnets for malicious actions.
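To make the LR detection step concrete, here is a hedged sketch using scikit-learn on synthetic traffic features; the feature set and distributions are assumptions for illustration, not the paper's honeynet data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy features: [packets/sec, distinct destination IPs, mean payload size]
benign = rng.normal([20, 5, 600], [5, 2, 100], size=(500, 3))
botnet = rng.normal([200, 80, 90], [40, 20, 30], size=(500, 3))
X = np.vstack([benign, botnet])
y = np.array([0] * 500 + [1] * 500)            # 1 = suspected botnet traffic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("P(botnet) for a bursty flow:", clf.predict_proba([[180, 70, 100]])[0, 1])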
This paper proposes a three-factor authentication (3FA) scheme based on three components. In the first component, a token and a password are generated (this module represents the kernel of the 3FA scheme). In the second component, a pass-code is generated using the token produced in the first phase. We use RSA for encryption and decryption of the generated values (token and pass-code). For the token ID and pass-code, the user uses his or her smartphone. The third component uses a searchable encryption scheme, whose purpose is to retrieve the user's documents from the cloud server based on a keyword and his or her fingerprint. The documents are stored encrypted on an untrusted server (cloud environment), and searchable encryption lets us search for specific information and access those documents over encrypted content. We also present a software simulation of our scheme developed in C# 8.0 and a source-code analysis of the main algorithms.
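A sketch of the token-protection step only (the paper's own simulation is in C# 8.0): RSA-OAEP from the Python cryptography package encrypts a randomly generated token, standing in for the scheme's first component.

import secrets
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

token = secrets.token_bytes(16)                # one-time token for the user
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(token, oaep)
assert private_key.decrypt(ciphertext, oaep) == token
print("token round-tripped through RSA-OAEP:", token.hex())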
Existing cyber security solutions have mostly been developed using knowledge-based models that often cannot detect new families of cyber-attacks. With the boom of Artificial Intelligence (AI), especially Deep Learning (DL) algorithms, those security solutions have been augmented with AI models to discover, trace, mitigate or respond to incidents involving new security events. Such algorithms demand a large number of heterogeneous data sources to train and validate new security systems. This paper presents the description of new datasets, called ToN_IoT, which combine federated data sources collected from telemetry datasets of IoT services, operating system datasets of Windows and Linux, and datasets of network traffic. The paper introduces the testbed and a description of the ToN_IoT datasets for Windows operating systems. The testbed was implemented in three layers: edge, fog and cloud. The edge layer involves IoT and network devices, the fog layer contains virtual machines and gateways, and the cloud layer involves cloud services, such as data analytics, linked to the other two layers. These layers were dynamically managed using Software-Defined Networking (SDN) and Network-Function Virtualization (NFV), via the VMware NSX and vCloud NFV platforms. The Windows datasets were collected from audit traces of memory, processors, networks, processes and hard disks. The datasets can be used to evaluate various AI-based cyber security solutions, including intrusion detection, threat intelligence and hunting, privacy preservation and digital forensics, because they contain a wide range of recent normal and attack features and observations, as well as authentic ground-truth events. The datasets can be publicly accessed from this link [1].
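Assuming the usual shape of such releases (CSV files carrying ground-truth label columns), a hedged loading-and-evaluation sketch might look as follows; the file name and the label/type column names are placeholders to be checked against the actual ToN_IoT distribution.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("Train_Test_Windows10.csv")     # hypothetical file name
# Keep numeric audit-trace features; drop ground-truth columns if present.
X = df.drop(columns=["label", "type"], errors="ignore").select_dtypes("number")
y = df["label"]                                  # assumed: 0 = normal, 1 = attack
scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
print("5-fold accuracy:", scores.mean())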
We consider the problem of protecting cloud services from simultaneous white-box and black-box attacks. Recent research in cryptographic program obfuscation considers the problem of protecting the confidentiality of programs and any secrets in them. In this model, a provable program obfuscation solution makes white-box attacks on the program no more useful than black-box attacks. Motivated by very recent results showing successful black-box attacks on machine learning programs run by cloud servers, we propose and study the approach of augmenting the program obfuscation solution model so as to achieve, in at least some class of application scenarios, program confidentiality in the presence of both white-box and black-box attacks. We propose and formally define encrypted-input program obfuscation, where a key is shared between the entity obfuscating the program and the entity encrypting the program's inputs. We believe this model may be of interest in practical scenarios where cloud programs operate over encrypted data received from associated sensors (e.g., Internet of Things, Smart Grid). Under standard intractability assumptions, we show various results that are not known in the traditional cryptographic program obfuscation model; most notably: Yao's garbled circuit technique implies encrypted-input program obfuscation hiding all gates of an arbitrary polynomial-size circuit; and very efficient encrypted-input program obfuscation exists for range membership programs and a class of machine learning programs (i.e., decision trees). The performance of the latter solutions has only a small constant overhead over the equivalent unobfuscated program.
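To illustrate the garbled circuit building block the abstract relies on, here is a toy, insecure single-gate version of Yao's garbling in Python: each wire gets two random labels, and each truth-table row encrypts the output label under a hash of the two input labels. A real construction would also permute the rows and compose many gates.

import os, hashlib

def H(a, b):
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and_gate():
    # labels[wire] = (label for bit 0, label for bit 1), 16 random bytes each
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "abo"}
    table = []
    for x in (0, 1):
        for y in (0, 1):
            out = labels["o"][x & y] + b"\x00" * 16          # label || zero tag
            table.append(xor(H(labels["a"][x], labels["b"][y]), out))
    return labels, table

def evaluate(table, la, lb):
    for row in table:
        plain = xor(row, H(la, lb))
        if plain[16:] == b"\x00" * 16:                       # tag checks out
            return plain[:16]
    raise ValueError("no row decrypted")

labels, table = garble_and_gate()
out = evaluate(table, labels["a"][1], labels["b"][1])        # inputs 1 AND 1
print("got label for output bit 1:", out == labels["o"][1])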
An attacker's success crucially depends on the reconnaissance phase of Distributed Denial of Service (DDoS) attacks, which is the first step in gathering intelligence. Although several solutions have been proposed against network reconnaissance attacks, they fail to address the needs of legitimate users' requests. Thus, we propose a cloud-based deception framework that aims to confuse the attacker with reconnaissance replies while still serving legitimate users. The deception is based on forwarding the reconnaissance packets to a cloud infrastructure through tunneling and SDN, so that the IP addresses returned to the attacker are not genuine. For handling legitimate requests, we create a reflected virtual topology in the cloud and use SDN to match any changes in the original physical network to the cloud topology. Through experiments on the GENI platform, we show that our framework can provide reconnaissance responses with negligible delays to network clients while also reducing management costs significantly.
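The decision logic of such a framework can be caricatured as follows; the real system installs SDN flow rules and tunnels, while this sketch merely flags sources that probe many distinct ports, with an assumed threshold.

from collections import defaultdict

PORT_THRESHOLD = 15            # assumed: distinct ports before a source is "scanning"
seen_ports = defaultdict(set)  # source IP -> set of probed destination ports

def classify_packet(src_ip, dst_port):
    """Return the forwarding target for this packet: 'network' or 'decoy'."""
    seen_ports[src_ip].add(dst_port)
    return "decoy" if len(seen_ports[src_ip]) > PORT_THRESHOLD else "network"

if __name__ == "__main__":
    for port in range(1, 25):                    # a vertical port scan
        target = classify_packet("203.0.113.9", port)
    print("scanner is now routed to:", target)   # -> decoy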
Cloud forensics investigates crimes committed over cloud infrastructures, such as SLA violations and storage-privacy breaches. Cloud storage forensics is the process of recording the history of the creation of, and operations performed on, a cloud data object, and investigating it. Secure data provenance in the cloud is crucial for data accountability, forensics and privacy. Towards this, we present a cloud-based data provenance framework using blockchain, which traces data record operations and generates provenance data. Initially, we design a Dropbox-like application using AWS S3 storage. The application provides cloud storage for the students and faculty of the university, making the storage and sharing of work and resources efficient. We then design a data provenance mechanism for users' confidential files using the Ethereum blockchain. We also evaluate the proposed system using performance parameters such as query and transaction latency while varying the load and the number of nodes of the blockchain network.
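A hedged sketch of the provenance-recording step using web3.py: a file's hash is anchored on Ethereum through a contract call. The contract address, ABI and the recordOperation function are hypothetical placeholders, not the paper's actual contract, and the call assumes a local development node with an unlocked account.

import hashlib, json
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))   # local dev node
abi = json.loads("""[{"name": "recordOperation", "type": "function",
  "stateMutability": "nonpayable",
  "inputs": [{"name": "docHash", "type": "bytes32"},
             {"name": "op", "type": "string"}], "outputs": []}]""")
contract = w3.eth.contract(address=Web3.to_checksum_address(
    "0x0000000000000000000000000000000000000001"), abi=abi)  # placeholder address

def record_provenance(file_bytes, operation):
    """Hash the stored object and write (hash, operation) to the chain."""
    doc_hash = hashlib.sha256(file_bytes).digest()           # 32-byte digest
    tx = contract.functions.recordOperation(doc_hash, operation).transact(
        {"from": w3.eth.accounts[0]})
    return w3.eth.wait_for_transaction_receipt(tx)

# receipt = record_provenance(b"file contents fetched from S3", "UPLOAD")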
Research on the design of data center infrastructure is increasing, both in academia and industry, due to the rapid development of cloud-based applications such as search engines, social networks and large-scale computing. At large scale, data centers can consist of hundreds to thousands of servers, requiring systems with high performance and low downtime. To meet the network's needs in a dynamic data center, the infrastructure of applications and services keeps growing, so the network topology must be designed to guarantee availability and security. One way to achieve this is to implement a zero trust security model based on micro-segmentation. Zero trust is a security concept based on the principle of "never trust, always verify", in which there are no trusted and untrusted zones in network traffic: all traffic is treated as untrusted. Micro-segmentation is a way to achieve zero trust by dividing a network into smaller logical segments to restrict the traffic between them. In this research, data center network performance under a software-defined-networking-based zero trust security model using micro-segmentation was evaluated on a testbed simulation of Cisco Application Centric Infrastructure, by measuring round-trip time, jitter and packet loss during the experiments. The evaluation results show that micro-segmentation adds an average round-trip time of 4 μs and jitter of 11 μs with no packet loss, so security can be improved without significantly affecting data center network performance.
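For reference, the reported metrics can be computed from RTT samples as in this small sketch (jitter taken as the mean absolute difference between consecutive RTTs, a simplified RFC 3550-style estimate); the sample values below are made up, not the paper's measurements.

def rtt_and_jitter(samples_us):
    """Mean RTT and mean inter-sample jitter, both in microseconds."""
    rtt = sum(samples_us) / len(samples_us)
    diffs = [abs(b - a) for a, b in zip(samples_us, samples_us[1:])]
    jitter = sum(diffs) / len(diffs)
    return rtt, jitter

baseline = [310, 305, 320, 308, 315]          # RTT samples without segmentation
segmented = [314, 309, 325, 311, 320]         # with micro-segmentation enabled
rtt_b, jit_b = rtt_and_jitter(baseline)
rtt_s, jit_s = rtt_and_jitter(segmented)
print(f"added RTT: {rtt_s - rtt_b:.1f} us, jitter delta: {jit_s - jit_b:.1f} us")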