Biblio

Found 2387 results

Filters: Keyword is human factors
2018-01-16
Zhou, Chong, Paffenroth, Randy C..  2017.  Anomaly Detection with Robust Deep Autoencoders. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :665–674.

Deep autoencoders, and other deep neural networks, have demonstrated their effectiveness in discovering non-linear features across many problem domains. However, in many real-world problems, large outliers and pervasive noise are commonplace, and one may not have access to clean training data as required by standard deep denoising autoencoders. Herein, we demonstrate novel extensions to deep autoencoders which not only maintain a deep autoencoder's ability to discover high quality, non-linear features but can also eliminate outliers and noise without access to any clean training data. Our model is inspired by Robust Principal Component Analysis, and we split the input data X into two parts, X = L_D + S, where L_D can be effectively reconstructed by a deep autoencoder and S contains the outliers and noise in the original data X. Since such splitting increases the robustness of standard deep autoencoders, we name our model a "Robust Deep Autoencoder (RDA)". Further, we present generalizations of our results to grouped sparsity norms which allow one to distinguish random anomalies from other types of structured corruptions, such as a collection of features being corrupted across many instances or a collection of instances having more corruptions than their fellows. Such "Group Robust Deep Autoencoders (GRDA)" give rise to novel anomaly detection approaches whose superior performance we demonstrate on a selection of benchmark problems.
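
The X = L_D + S split can be made concrete with a short alternating-minimization sketch (Python/numpy). This is only a toy illustration of the RDA objective, not the authors' implementation: a truncated SVD stands in for the deep autoencoder, and the shrinkage threshold lam is an arbitrary illustrative parameter.

    import numpy as np

    def shrink(M, lam):
        """Soft-thresholding: small residual entries go to zero, so only
        genuine outliers survive in the sparse part S."""
        return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

    def reconstruct(L, rank=5):
        """Stand-in for the deep autoencoder: reconstruct L from its top
        singular vectors (i.e., a linear autoencoder)."""
        U, s, Vt = np.linalg.svd(L, full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

    def robust_split(X, lam=0.1, iters=30, rank=5):
        """Alternately fit L_D and peel residual outliers into S, so that
        X is approximately L_D + S, as in the RDA objective."""
        S = np.zeros_like(X)
        for _ in range(iters):
            L = reconstruct(X - S, rank)  # fit the "autoencoder" part
            S = shrink(X - L, lam)        # capture what it cannot explain
        return L, S

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    X[rng.integers(0, 200, 20), rng.integers(0, 50, 20)] += 10.0  # outliers
    L, S = robust_split(X)
    print("nonzero entries in S:", np.count_nonzero(S))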

Craggs, Barnaby, Rashid, Awais.  2017.  Smart Cyber-physical Systems: Beyond Usable Security to Security Ergonomics by Design. Proceedings of the 3rd International Workshop on Software Engineering for Smart Cyber-Physical Systems. :22–25.

Securing cyber-physical systems is hard. They are complex infrastructures comprising multiple technological artefacts, designers, operators and users. Existing research has established the security challenges in such systems as well as the role of usable security to support humans in effective security decisions and actions. In this paper we focus on smart cyber-physical systems, such as those based on the Internet of Things (IoT). Such smart systems aim to intelligently automate a variety of functions, with the goal of hiding that complexity from the user. Furthermore, the interactions of the user with such systems are more often implicit than explicit, for instance, a pedestrian with wearables walking through a smart city environment will most likely interact with the smart environment implicitly through a variety of inferred preferences based on previously provided or automatically collected data. The key question that we explore is that of empowering software engineers to pragmatically take into account how users make informed security choices about their data and information in such a pervasive environment. We discuss a range of existing frameworks considering the impact of automation on user behaviours and argue for the need of a shift from usability to security ergonomics as a key requirement when designing and implementing security features in smart cyber-physical environments. Of course, the considerations apply more broadly than security but, in this paper, we focus only on security as a key concern.

Curran, Max T., Merrill, Nick, Chuang, John, Gandhi, Swapan.  2017.  One-step, Three-factor Authentication in a Single Earpiece. Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers. :21–24.

Multifactor authentication presents a robust security method, but typically requires multiple steps on the part of the user, resulting in a high cost to usability and limiting adoption. Furthermore, a truly usable system must be unobtrusive and inconspicuous. Here, we present a system that provides all three factors of authentication (knowledge, possession, and inherence) in a single step in the form of an earpiece which implements brain-based authentication via custom-fit, in-ear electroencephalography (EEG). We demonstrate its potential by collecting EEG data using manufactured custom-fit earpieces with embedded electrodes. Across 7 participants, we are able to achieve perfect performance, with a mean 0% false acceptance rate (FAR) and 0% false rejection rate (FRR), using participants' best performing tasks collected in one session by one earpiece with three electrodes. Our results indicate that a single earpiece with embedded electrodes could provide a discreet, convenient, and robust method for secure one-step, three-factor authentication.
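
For reference, the FAR and FRR figures reported above follow the standard definitions over match scores; a minimal sketch of how such rates are computed (Python/numpy, with hypothetical scores and threshold):

    import numpy as np

    def far_frr(genuine_scores, impostor_scores, threshold):
        """FAR: fraction of impostor attempts accepted (score >= threshold).
        FRR: fraction of genuine attempts rejected (score < threshold)."""
        far = np.mean(np.asarray(impostor_scores) >= threshold)
        frr = np.mean(np.asarray(genuine_scores) < threshold)
        return far, frr

    # Toy match scores: genuine EEG matches should score above impostors.
    genuine = [0.91, 0.88, 0.95, 0.90]
    impostor = [0.40, 0.55, 0.35, 0.60]
    print(far_frr(genuine, impostor, threshold=0.75))  # -> (0.0, 0.0)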

Mergendahl, Samuel, Sisodia, Devkishen, Li, Jun, Cam, Hasan.  2017.  Source-End DDoS Defense in IoT Environments. Proceedings of the 2017 Workshop on Internet of Things Security and Privacy. :63–64.

While the Internet of Things (IoT) becomes increasingly popular and pervasive in everyday objects, IoT devices often remain unprotected and can be exploited to launch large-scale distributed denial-of-service (DDoS) attacks. One could attempt to employ traditional DDoS defense solutions, but these solutions are hardly suitable in IoT environments since they seldom consider the resource constraints of IoT devices. This paper presents FR-WARD which defends against DDoS attacks launched from an IoT network. FR-WARD is an adaptation of the classic DDoS defense system D-WARD. While both solutions are situated near the attack sources and drop packets to throttle DDoS traffic, FR-WARD utilizes the fast retransmit mechanism in TCP congestion control to minimize resource penalties on benign IoT devices. Based on our analysis and simulation results, FR-WARD not only effectively throttles DDoS traffic but also minimizes retransmission overhead for benign IoT devices.
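
The core idea of sparing suspected TCP fast retransmits while throttling can be sketched in a few lines. This is a toy illustration under stated assumptions (a retransmit is approximated as a repeated sequence number; the class name, addresses, and drop probability are hypothetical), not the FR-WARD code:

    import random

    random.seed(0)  # deterministic toy run

    class SourceEndThrottle:
        """Toy source-end throttle: when a destination is flagged as under
        attack, drop a fraction of outbound packets, but never drop what
        looks like a TCP fast retransmit (a repeated sequence number), so
        benign flows recovering from loss are not penalized twice."""

        def __init__(self, drop_prob=0.8):
            self.drop_prob = drop_prob
            self.last_seq = {}  # (src, dst) -> last sequence number seen

        def allow(self, src, dst, seq, dst_under_attack):
            key = (src, dst)
            is_retransmit = self.last_seq.get(key) == seq
            self.last_seq[key] = seq
            if not dst_under_attack or is_retransmit:
                return True  # spare retransmissions from throttling
            return random.random() > self.drop_prob

    t = SourceEndThrottle()
    print(t.allow("10.0.0.5", "198.51.100.7", 1000, dst_under_attack=True))
    print(t.allow("10.0.0.5", "198.51.100.7", 1000, dst_under_attack=True))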

Ozmen, Muslum Ozgur, Yavuz, Attila A..  2017.  Low-Cost Standard Public Key Cryptography Services for Wireless IoT Systems. Proceedings of the 2017 Workshop on Internet of Things Security and Privacy. :65–70.

Internet of Things (IoT) is an integral part of application domains such as smart-home and digital healthcare. Various standard public key cryptography techniques (e.g., key exchange, public key encryption, signature) are available to provide fundamental security services for IoTs. However, despite their pervasiveness and well-proven security, they also have been shown to be highly energy costly for embedded devices. Hence, it is a critical task to improve the energy efficiency of standard cryptographic services, while preserving their desirable properties simultaneously. In this paper, we exploit synergies among various cryptographic primitives with algorithmic optimizations to substantially reduce the energy consumption of standard cryptographic techniques on embedded devices. Our contributions are: (i) We harness special precomputation techniques, which have not been considered for some important cryptographic standards, to boost the performance of key exchange, integrated encryption, and hybrid constructions. (ii) We provide self-certification for these techniques to push their performance to the edge. (iii) We implemented our techniques and their counterparts on an 8-bit AVR ATmega 2560 and evaluated their performance. We used the microECC library and made the implementations on the NIST-recommended secp192 curve, due to its standardization. Our experiments confirmed significant improvements on the battery life (up to 7x) while preserving the desirable properties of standard techniques. Moreover, to the best of our knowledge, we provide the first open-source framework including such a set of optimizations on low-end devices.

Zeng, Jing, Yang, Laurence T., Lin, Man, Shao, Zili, Zhu, Dakai.  2017.  System-Level Design Optimization for Security-Critical Cyber-Physical-Social Systems. ACM Trans. Embed. Comput. Syst.. 16:39:1–39:21.

Cyber-physical-social systems (CPSS), an emerging computing paradigm, have attracted intensive attention from the research community and industry. We are facing various challenges in designing secure, reliable, and user-satisfied CPSS. In this article, we consider these design issues as a whole and propose a system-level design optimization framework for CPSS design where energy consumption, security-level, and user satisfaction requirements can be fulfilled while satisfying constraints for system reliability. Specifically, we model the constraints (energy efficiency, security, and reliability) as penalty functions to be incorporated into the corresponding objective functions for the optimization problem. A smart office application is presented to demonstrate the feasibility and effectiveness of our proposed design optimization approach.
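
A penalty-function formulation of this kind can be sketched as follows (Python/scipy); the decision variables, weights, and the reliability constraint below are hypothetical stand-ins for the paper's models, not its actual formulation:

    from scipy.optimize import minimize

    # Decision variables x = (cpu_frequency, security_level), both in [0, 1].
    def objective(x):
        energy = x[0] ** 2                 # energy grows with frequency
        dissatisfaction = (1 - x[1]) ** 2  # users prefer a high security level
        # Reliability constraint r(x) >= 0.7, folded in as a penalty term.
        reliability = 0.5 * x[0] + 0.5 * x[1]
        penalty = 100.0 * max(0.0, 0.7 - reliability) ** 2
        return energy + dissatisfaction + penalty

    res = minimize(objective, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
    print(res.x, res.fun)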

Nguyen, Thanh H., Wright, Mason, Wellman, Michael P., Baveja, Satinder.  2017.  Multi-Stage Attack Graph Security Games: Heuristic Strategies, with Empirical Game-Theoretic Analysis. Proceedings of the 2017 Workshop on Moving Target Defense. :87–97.

We study the problem of allocating limited security countermeasures to protect network data from cyber-attacks, for scenarios modeled by Bayesian attack graphs. We consider multi-stage interactions between a network administrator and cybercriminals, formulated as a security game. This formulation is capable of representing security environments with significant dynamics and uncertainty, and very large strategy spaces. For the game model, we propose parameterized heuristic strategies for both players. Our heuristics exploit the topological structure of the attack graphs and employ different sampling methodologies to overcome the computational complexity in determining players' actions. Given the complexity of the game, we employ a simulation-based methodology, and perform empirical game analysis over an enumerated set of these heuristic strategies. Finally, we conduct experiments based on a variety of game settings to demonstrate the advantages of our heuristics in obtaining effective defense strategies which are robust to the uncertainty of the security environment.

2018-01-10
Wu, Xiaotong, Dou, Wanchun, Ni, Qiang.  2017.  Game Theory Based Privacy Preserving Analysis in Correlated Data Publication. Proceedings of the Australasian Computer Science Week Multiconference. :73:1–73:10.

Privacy preserving on data publication has been an important research field over the past few decades. One of the fundamental challenges in privacy preserving data publication is the trade-off problem between privacy and utility of the single and independent data set. However, recent research works have shown that the advanced privacy mechanism, i.e., differential privacy, is vulnerable when multiple data sets are correlated. In this case, the trade-off problem between privacy and utility evolves into a game problem, in which the payoff of each player depends not only on his privacy parameter, but also on his neighbors' privacy parameters. In this paper, we first present the definition of correlated differential privacy to evaluate the real privacy level of a single data set influenced by the other data sets. Then, we construct a game model of multiple players, each of whom publishes a data set sanitized by differential privacy. Next, we analyze the existence and uniqueness of the pure Nash Equilibrium and demonstrate the sufficient conditions in the game. Finally, we use the notion of the price of anarchy to evaluate the efficiency of the pure Nash Equilibrium.
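
The game structure can be illustrated with a toy best-response iteration in which each player's privacy parameter epsilon affects its neighbours through correlation. The payoff shape below is an illustrative assumption, not the paper's model (Python/scipy):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def payoff(eps_i, eps_others, corr=0.5):
        """Toy payoff: utility grows with one's own epsilon (less noise),
        while the effective privacy loss is amplified by correlated
        leakage from the neighbours' epsilons."""
        utility = np.log(1.0 + eps_i)
        privacy_loss = eps_i * (1.0 + corr * np.mean(eps_others))
        return utility - 0.3 * privacy_loss

    def best_response_dynamics(n=4, rounds=50):
        eps = np.full(n, 1.0)
        for _ in range(rounds):
            for i in range(n):
                others = np.delete(eps, i)
                res = minimize_scalar(lambda e: -payoff(e, others),
                                      bounds=(1e-3, 10.0), method="bounded")
                eps[i] = res.x
        return eps  # a fixed point approximates a pure Nash Equilibrium

    print(best_response_dynamics())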

Ping, Haoyue, Stoyanovich, Julia, Howe, Bill.  2017.  DataSynthesizer: Privacy-Preserving Synthetic Datasets. Proceedings of the 29th International Conference on Scientific and Statistical Database Management. :42:1–42:5.

To facilitate collaboration over sensitive data, we present DataSynthesizer, a tool that takes a sensitive dataset as input and generates a structurally and statistically similar synthetic dataset with strong privacy guarantees. The data owners need not release their data, while potential collaborators can begin developing models and methods with some confidence that their results will work similarly on the real dataset. The distinguishing feature of DataSynthesizer is its usability — the data owner does not have to specify any parameters to start generating and sharing data safely and effectively. DataSynthesizer consists of three high-level modules — DataDescriber, DataGenerator and ModelInspector. The first, DataDescriber, investigates the data types, correlations and distributions of the attributes in the private dataset, and produces a data summary, adding noise to the distributions to preserve privacy. DataGenerator samples from the summary computed by DataDescriber and outputs synthetic data. ModelInspector shows an intuitive description of the data summary that was computed by DataDescriber, allowing the data owner to evaluate the accuracy of the summarization process and adjust any parameters, if desired. We describe DataSynthesizer and illustrate its use in an urban science context, where sharing sensitive, legally encumbered data between agencies and with outside collaborators is reported as the primary obstacle to data-driven governance. The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/DataSynthesizer.
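
The describe-then-generate pipeline can be sketched conceptually as noisy per-attribute distributions plus independent sampling. This illustrates the idea only and does not use DataSynthesizer's actual API; see the repository linked above for the real interface (Python/pandas, toy data):

    import numpy as np
    import pandas as pd

    def describe(df, epsilon=1.0, seed=0):
        """Per-attribute summary: a Laplace-noised value distribution, in
        the spirit of DataDescriber's independent-attribute mode."""
        rng = np.random.default_rng(seed)
        summary = {}
        for col in df.columns:
            counts = df[col].value_counts()
            noisy = (counts + rng.laplace(0.0, 1.0 / epsilon, len(counts))).clip(lower=0)
            summary[col] = (counts.index.to_list(), (noisy / noisy.sum()).to_list())
        return summary

    def generate(summary, n, seed=1):
        """DataGenerator analogue: sample each attribute independently
        from its noisy distribution."""
        rng = np.random.default_rng(seed)
        return pd.DataFrame({col: rng.choice(values, size=n, p=probs)
                             for col, (values, probs) in summary.items()})

    private = pd.DataFrame({"zip": ["98101", "98101", "98105"], "age": [34, 34, 61]})
    print(generate(describe(private), n=5))
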
Deng, Xiyue, Mirkovic, Jelena.  2017.  Commoner Privacy And A Study On Network Traces. Proceedings of the 33rd Annual Computer Security Applications Conference. :566–576.

Differential privacy has emerged as a promising mechanism for privacy-safe data mining. One popular differential privacy mechanism allows researchers to pose queries over a dataset, and adds random noise to all output points to protect privacy. While differential privacy produces useful data in many scenarios, added noise may jeopardize utility for queries posed over small populations or over long-tailed datasets. Gehrke et al. proposed crowd-blending privacy, with random noise added only to those output points where fewer than k individuals (a configurable parameter) contribute to the point in the same manner. This approach has a lower privacy guarantee, but preserves more research utility than differential privacy. We propose an even more liberal privacy goal—commoner privacy—which fuzzes (omits, aggregates or adds noise to) only those output points where an individual's contribution to this point is an outlier. By hiding outliers, our mechanism hides the presence or absence of an individual in a dataset. We propose one mechanism that achieves commoner privacy—interactive k-anonymity. We also discuss query composition and show how we can guarantee privacy via either a pre-sampling step or via query introspection. We implement interactive k-anonymity and query introspection in a system called Patrol for network trace processing. Our evaluation shows that commoner privacy prevents common attacks while preserving orders of magnitude higher research utility than differential privacy, and at least 9-49 times the utility of crowd-blending privacy.
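
The commoner-privacy intuition, fuzzing only output points where an individual's contribution is an outlier, can be sketched with a simple k-contributor threshold. This is a toy sketch: k, the grouping keys, and suppression as the fuzzing action are illustrative assumptions, not the Patrol implementation.

    from collections import Counter

    def commoner_release(records, key, value, k=5):
        """Release aggregate counts, but suppress output points to which
        fewer than k individuals contribute in the same manner: hiding
        outlier contributions hides any one individual's presence."""
        contributors = Counter()
        for rec in records:
            contributors[(rec[key], rec[value])] += 1
        released = {p: n for p, n in contributors.items() if n >= k}
        suppressed = len(contributors) - len(released)  # fuzzed points
        return released, suppressed

    records = [{"port": 80, "bytes": "small"}] * 9 + [{"port": 22, "bytes": "huge"}]
    print(commoner_release(records, "port", "bytes", k=5))
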
Zhang, Jun, Cormode, Graham, Procopiuc, Cecilia M., Srivastava, Divesh, Xiao, Xiaokui.  2017.  PrivBayes: Private Data Release via Bayesian Networks. ACM Trans. Database Syst.. 42:25:1–25:41.

Privacy-preserving data publishing is an important problem that has been the focus of extensive study. The state-of-the-art solution for this problem is differential privacy, which offers a strong degree of privacy protection without making restrictive assumptions about the adversary. Existing techniques using differential privacy, however, cannot effectively handle the publication of high-dimensional data. In particular, when the input dataset contains a large number of attributes, existing methods require injecting a prohibitive amount of noise compared to the signal in the data, which renders the published data next to useless. To address the deficiency of the existing methods, this paper presents PrivBayes, a differentially private method for releasing high-dimensional data. Given a dataset D, PrivBayes first constructs a Bayesian network N, which (i) provides a succinct model of the correlations among the attributes in D and (ii) allows us to approximate the distribution of data in D using a set P of low-dimensional marginals of D. After that, PrivBayes injects noise into each marginal in P to ensure differential privacy and then uses the noisy marginals and the Bayesian network to construct an approximation of the data distribution in D. Finally, PrivBayes samples tuples from the approximate distribution to construct a synthetic dataset, and then releases the synthetic data. Intuitively, PrivBayes circumvents the curse of dimensionality, as it injects noise into the low-dimensional marginals in P instead of the high-dimensional dataset D. Private construction of Bayesian networks turns out to be significantly challenging, and we introduce a novel approach that uses a surrogate function for mutual information to build the model more accurately. We experimentally evaluate PrivBayes on real data and demonstrate that it significantly outperforms existing solutions in terms of accuracy.
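
A minimal sketch of the PrivBayes idea for the special case of a two-attribute chain A -> B: inject Laplace noise into the low-dimensional marginal rather than the full joint, then sample synthetic tuples from it (Python/numpy; the noise scale and data are illustrative, not the paper's calibration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset D with two binary attributes; the network is the chain A -> B.
    D = rng.integers(0, 2, size=(1000, 2))

    # Noisy 2-dimensional marginal P(A, B) instead of the full joint.
    joint = np.zeros((2, 2))
    for a, b in D:
        joint[a, b] += 1
    noisy = np.clip(joint + rng.laplace(0.0, 2.0, joint.shape), 0, None)
    P_ab = noisy / noisy.sum()

    # Sample synthetic tuples from P(A) and P(B | A).
    P_a = P_ab.sum(axis=1)
    synthetic = []
    for _ in range(1000):
        a = rng.choice(2, p=P_a)
        b = rng.choice(2, p=P_ab[a] / P_ab[a].sum())
        synthetic.append((a, b))
    print(synthetic[:5])
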
He, Zaobo, Cai, Zhipeng, Sun, Yunchuan, Li, Yingshu, Cheng, Xiuzhen.  2017.  Customized Privacy Preserving for Inherent Data and Latent Data. Personal Ubiquitous Comput.. 21:43–54.

The huge amount of sensory data collected from mobile devices has offered great potentials to promote more significant services based on user data extracted from sensor readings. However, releasing user data could also seriously threaten user privacy. It is possible to directly collect sensitive information from released user data without user permissions. Furthermore, third party users can also infer sensitive information contained in released data in a latent manner by utilizing data mining techniques. In this paper, we formally define these two types of threats as inherent data privacy and latent data privacy and construct a data-sanitization strategy that can optimize the tradeoff between data utility and the two customized types of privacy. The key novel idea is that the developed strategy can combat powerful third party users who have broad knowledge about users and launch optimal inference attacks. We show that our strategy does not reduce the benefit brought by user data much, while sensitive information can still be protected. To the best of our knowledge, this is the first work that preserves both inherent data privacy and latent data privacy.

Aissaoui, K., idar, H. Ait, Belhadaoui, H., Rifi, M..  2017.  Survey on data remanence in Cloud Computing environment. 2017 International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS). :1–4.

Cloud Computing is a developing IT concept that faces some issues, which are slowing down its evolution and adoption by users across the world. The lack of security has been the main concern. Organizations and entities need to ensure, inter alia, the integrity and confidentiality of their outsourced sensitive data within a cloud provider's servers. Solutions have been examined in order to strengthen security models (strong authentication, encryption and fragmentation before storing, access control policies...). More particularly, data remanence is undoubtedly a major threat. How can we be sure that data are, when requested, truly and appropriately deleted from remote servers? In this paper, we aim to produce a survey of this interesting subject and to address the problem of residual data in a cloud-computing environment, which is characterized by the use of virtual machines instantiated in remote servers owned by a third party.

Vakilinia, I., Tosh, D. K., Sengupta, S..  2017.  3-Way game model for privacy-preserving cybersecurity information exchange framework. MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM). :829–834.

With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape to efficiently defend their resources. To achieve this, organizations must develop a culture of sharing their threat information with others for effectively assessing the associated risks. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, making the decision to share information is a challenging task and requires resolving the trade-off between sharing advantages and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial in stabilizing the system through setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system using a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities by incorporating the organizations' privacy component in the sharing model. Using best response analysis, the simulation results demonstrate the efficiency of our proposed framework.

Jeyaprabha, T. J., Sumathi, G., Nivedha, P..  2017.  Smart and secure data storage using Encrypt-interleaving. 2017 Innovations in Power and Advanced Computing Technologies (i-PACT). :1–6.

In recent years many companies have been shifting towards the cloud to expand their business profit with the least additional cost. Cloud computing is a growing technology which has emerged from the development of grid computing, virtualization and utility computing. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources like networks, servers, storage, applications, and services that can be rapidly provisioned and released with minimal management effort or service provider interaction. There was a huge data loss during the Chennai floods of Dec 2015; if these data had been stored at distributed data centers, great loss could have been prevented. Though such natural calamities are tempting many users to shift towards cloud storage, security threats are inhibiting them from doing so. Many solutions have been proposed for these security issues, but they do not give guaranteed security. By guaranteed security we mean confidentiality, integrity and availability. Some of the existing techniques for providing security are Cryptographic Protocols, Data Sanitization, Predicate Logic, Access Control Mechanisms, Honeypots, Sandboxing, Erasure Coding, RAID (Redundant Arrays of Independent Disks), Homomorphic Encryption and Split-Key Encryption. All these techniques either cannot work alone or add computational and time complexity. An alternate scheme combining encryption and channel coding in one go is proposed to increase the level of security. A hybrid encryption scheme is proposed for use in the interleaver block of a Turbo coder to avoid burst errors. Hybrid encryption avoids sharing the secret key via an unsecured channel. This provides both security and reliability by reducing the error propagation effect with small additional cost and computational overhead. Time complexity can be reduced when encryption and encoding are done as a single process.
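
One way to read "encrypt-interleaving" is that the interleaver permutation inside the channel coder is derived from a secret key, so scrambling and encryption happen in one pass. A minimal sketch under that assumption; deriving the permutation by seeding a PRNG from a hashed key is an illustrative choice, not the paper's hybrid scheme:

    import hashlib
    import random

    def keyed_permutation(n, key):
        """Derive the interleaver permutation from a secret key, so the
        interleaving step of the channel coder doubles as encryption."""
        seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
        perm = list(range(n))
        random.Random(seed).shuffle(perm)
        return perm

    def interleave(bits, key):
        perm = keyed_permutation(len(bits), key)
        return [bits[p] for p in perm]

    def deinterleave(bits, key):
        perm = keyed_permutation(len(bits), key)
        out = [0] * len(bits)
        for i, p in enumerate(perm):
            out[p] = bits[i]
        return out

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    scrambled = interleave(data, b"shared-secret")
    assert deinterleave(scrambled, b"shared-secret") == data
    print(scrambled)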

Zaman, A. N. K., Obimbo, C., Dara, R. A..  2017.  An improved differential privacy algorithm to protect re-identification of data. 2017 IEEE Canada International Humanitarian Technology Conference (IHTC). :133–138.

In the present time, there has been a huge increase in large data repositories held by corporations, governments, and healthcare organizations. These repositories provide opportunities to design and improve decision-making systems by mining trends and patterns from the data set (which can provide credible information) to improve customer service (e.g., in healthcare). As a result, while data sharing is essential, it is an obligation to maintain the privacy of the data donors, as data custodians have legal and ethical responsibilities to secure confidentiality. This research proposes a 2-layer privacy preserving (2-LPP) data sanitization algorithm that satisfies ε-differential privacy for publishing sanitized data. The proposed algorithm also reduces the re-identification risk of the sanitized data. The proposed algorithm has been implemented and tested with two different data sets. Compared to other existing works, the results obtained from the proposed algorithm show promising performance.

Holdsworth, J., Apeh, E..  2017.  An Effective Immersive Cyber Security Awareness Learning Platform for Businesses in the Hospitality Sector. 2017 IEEE 25th International Requirements Engineering Conference Workshops (REW). :111–117.

The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this, and the last two years have seen a huge increase in cyber-attacks within the sector. Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology. One way that the hospitality industry has tried to solve the awareness issue is through its current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum-waged, short-term job roles. This paper presents a structured approach for eliciting industry requirements for developing and implementing an immersive Cyber Security Awareness learning platform. It used a series of over 40 interviews and threat analysis of the hospitality industry to identify the requirements for designing and implementing a cyber security program which encourages engagement through a cycle of reward and recognition. In particular, the need for the use of gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring the impact of employees' progress through the learning management system whilst monitoring the levels of engagement and positive impact the training is having on the business.

2017-12-28
Vizarreta, P., Heegaard, P., Helvik, B., Kellerer, W., Machuca, C. M..  2017.  Characterization of failure dynamics in SDN controllers. 2017 9th International Workshop on Resilient Networks Design and Modeling (RNDM). :1–7.

With Software Defined Networking (SDN) the control plane logic of forwarding devices, switches and routers, is extracted and moved to an entity called the SDN controller, which acts as a broker between the network applications and the physical network infrastructure. Failures of the SDN controller inhibit the network's ability to respond to new application requests and react to events coming from the physical network. Despite the huge impact that a controller has on the network performance as a whole, a comprehensive study on its failure dynamics is still missing in the literature. The goal of this paper is to analyse, model and evaluate the impact that different controller failure modes have on its availability. A model in the formalism of Stochastic Activity Networks (SAN) is proposed and applied to a case study of a hypothetical controller based on commercial controller implementations. In the case study we show how the proposed model can be used to estimate the controller's steady-state availability, quantify the impact of different failure modes on controller outages, as well as the effects of software ageing and the impact of software reliability growth on the transient behaviour.
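
In the simplest case, the steady-state availability that such a SAN model estimates reduces to MTTF/(MTTF + MTTR) per failure mode. A back-of-the-envelope sketch with hypothetical failure and repair times for several controller failure modes (the mode names and numbers are illustrative, not from the paper):

    # Hypothetical controller failure modes: (MTTF hours, MTTR hours).
    failure_modes = {
        "software crash":  (1000.0, 0.1),
        "hardware fault":  (8000.0, 4.0),
        "software ageing": (2000.0, 0.5),
    }

    # Treat the modes as independent; the controller is up only when no
    # mode is in its failed state.
    availability = 1.0
    for mode, (mttf, mttr) in failure_modes.items():
        a = mttf / (mttf + mttr)  # steady-state availability per mode
        print(f"{mode}: {a:.6f}")
        availability *= a
    print(f"overall steady-state availability: {availability:.6f}")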

Mailloux, L. O., Sargeant, B. N., Hodson, D. D., Grimaila, M. R..  2017.  System-level considerations for modeling space-based quantum key distribution architectures. 2017 Annual IEEE International Systems Conference (SysCon). :1–6.

Quantum Key Distribution (QKD) is a revolutionary technology which leverages the laws of quantum mechanics to distribute cryptographic keying material between two parties with theoretically unconditional security. Terrestrial QKD systems are limited to distances of <200 km in both optical fiber and line-of-sight free-space configurations due to severe losses during single photon propagation and the curvature of the Earth. Thus, the feasibility of fielding a low Earth orbit (LEO) QKD satellite to overcome this limitation is being explored. Moreover, in August 2016, the Chinese Academy of Sciences successfully launched the world's first QKD satellite. However, many of the practical engineering performance and security tradeoffs associated with space-based QKD are not well understood for global secure key distribution. This paper presents several system-level considerations for modeling and studying space-based QKD architectures and systems. More specifically, this paper explores the behaviors and requirements that researchers must examine to develop a model for studying the effectiveness of QKD between LEO satellites and ground stations.
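
The <200 km limit follows from channel loss that compounds exponentially with fiber distance, while a satellite downlink pays a roughly fixed loss. A back-of-the-envelope sketch; the loss figures are typical textbook values, not the paper's:

    def fiber_transmittance(distance_km, alpha_db_per_km=0.2):
        """Single-photon survival probability in fiber: loss in dB grows
        linearly with distance (0.2 dB/km is typical at 1550 nm), so the
        transmittance falls off exponentially."""
        return 10 ** (-alpha_db_per_km * distance_km / 10)

    for d in (50, 100, 200, 400):
        print(f"{d:4d} km fiber: transmittance {fiber_transmittance(d):.2e}")

    # A LEO downlink pays a mostly fixed loss (diffraction plus atmosphere),
    # on the order of 30-40 dB, which is why a satellite can beat the fiber
    # distance limit even though it is much farther away.
    leo_loss_db = 35.0
    print(f"LEO downlink (~35 dB): transmittance {10 ** (-leo_loss_db / 10):.2e}")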

Suebsombut, P., Sekhari, A., Sureepong, P., Ueasangkomsate, P., Bouras, A..  2017.  The using of bibliometric analysis to classify trends and future directions on "smart farm". 2017 International Conference on Digital Arts, Media and Technology (ICDAMT). :136–141.

Climate change has affected the cultivation in all countries with extreme drought, flooding, higher temperature, and changes in the season thus leaving behind the uncontrolled production. Consequently, the smart farm has become part of the crucial trend that is needed for application in certain farm areas. The aims of smart farm are to control and to enhance food production and productivity, and to increase farmers' profits. The advantages in applying smart farm will improve the quality of production, supporting the farm workers, and better utilization of resources. This study aims to explore the research trends and identify research clusters on smart farm using bibliometric analysis that has supported farming to improve the quality of farm production. The bibliometric analysis is the method to explore the relationship of the articles from a co-citation network of the articles and then science mapping is used to identify clusters in the relationship. This study examines the selected research articles in the smart farm field. The area of research in smart farm is categorized into two clusters that are soil carbon emission from farming activity, food security and farm management by using a VOSviewer tool with keywords related to research articles on smart farm, agriculture, supply chain, knowledge management, traceability, and product lifecycle management from Web of Science (WOS) and Scopus online database. The major cluster of smart farm research is the soil carbon emission from farming activity which impacts on climate change that affects food production and productivity. The contribution is to identify the trends on smart farm to develop research in the future by means of bibliometric analysis.

Liu, H., Ditzler, G..  2017.  A fast information-theoretic approximation of joint mutual information feature selection. 2017 International Joint Conference on Neural Networks (IJCNN). :4610–4617.

Feature selection is an important step in data analysis to address the curse of dimensionality. Such dimensionality reduction techniques are particularly important when a classification is required and the model scales in polynomial time with the size of the feature set (e.g., some applications include genomics, life sciences, cyber-security, etc.). Feature selection is the process of finding the minimum subset of features that allows for the maximum predictive power. Many of the state-of-the-art information-theoretic feature selection approaches use a greedy forward search; however, there are concerns with the search in regards to its efficiency and optimality. A unified framework was recently presented for information-theoretic feature selection that tied together many of the works of the past twenty years. The work showed that joint mutual information maximization (JMI) is generally the best option; however, the complexity of greedy search for JMI scales quadratically and it is infeasible on high-dimensional datasets. In this contribution, we propose a fast approximation of JMI based on information theory. Our approach takes advantage of decomposing the calculations within JMI to speed up a typical greedy search. We benchmarked the proposed approach against JMI on several UCI datasets, and we demonstrate that the proposed approach returns feature sets that are highly consistent with JMI, while decreasing the run time required to perform feature selection.
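
The greedy forward search that JMI-style criteria rely on can be sketched with a plain relevance-minus-redundancy (mRMR-style) score standing in for the joint mutual information term; this illustrates the search loop, not the paper's approximation (Python/scikit-learn, toy data):

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def greedy_mi_selection(X, y, n_features):
        """Greedy forward search: at each step add the feature whose
        relevance to the label, discounted by redundancy with the
        already-selected features, is highest."""
        selected, remaining = [], list(range(X.shape[1]))
        while remaining and len(selected) < n_features:
            def score(j):
                relevance = mutual_info_score(X[:, j], y)
                redundancy = sum(mutual_info_score(X[:, j], X[:, s])
                                 for s in selected)
                return relevance - redundancy / max(len(selected), 1)
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return selected

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(500, 10))
    y = (X[:, 2] + X[:, 7]) % 3  # the label depends on features 2 and 7
    print(greedy_mi_selection(X, y, n_features=3))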

Manoja, I., Sk, N. S., Rani, D. R..  2017.  Prevention of DDoS attacks in cloud environment. 2017 International Conference on Big Data Analytics and Computational Intelligence (ICBDAC). :235–239.

Cloud computing is emerging as a foundational technology for the future; building on the ideas of utility computing, it is widely claimed to represent a wholly new paradigm for viewing and accessing computational resources. Because of security concerns, many customers hesitate to relocate their sensitive data to the cloud, despite great interest in cloud-based computing. Security is a serious obstacle, since many firms present an alluring target for intruders, and these concerns will continue to slow the advancement of distributed computing if not addressed. Hence, this work turns to honeypots. Distributed Denial of Service (DDoS) is an attack that threatens the availability of cloud services, and it is fundamental to investigate the most important features of DDoS defence procedures. This paper presents specific techniques that have been applied against DDoS attacks; these approaches are outlined, along with the technologies used against particular kinds of malfunctioning within the cloud.

Shafee, S., Rajaei, B..  2017.  A secure steganography algorithm using compressive sensing based on HVS feature. 2017 Seventh International Conference on Emerging Security Technologies (EST). :74–78.

Steganography is the science of hiding information in order to send secret messages using a carrier object known as the stego object. Steganographic technology is based on three principles: security, robustness and capacity. In this paper, we present a method that hides a digital image using compressive sensing technology to increase the security of the stego image, based on human visual system features. The results show that our proposed method provides higher security in comparison with the other presented methods. The Bit Correction Rate between the original secret message and the extracted message is used to show the accuracy of this method.

Kabiri, M. N., Wannous, M..  2017.  An Experimental Evaluation of a Cloud-Based Virtual Computer Laboratory Using Openstack. 2017 6th IIAI International Congress on Advanced Applied Informatics (IIAI-AAI). :667–672.

In previous work, we proposed a solution to facilitate access to computer science related courses and learning materials using cloud computing and mobile technologies. The solution was positively evaluated by the participants, but most of them indicated that it lacks support for laboratory activities. It is well known that many computer science subjects (e.g. Computer Networks, Information Security, Systems Administration, etc.) require a suitable and flexible environment where students can access a set of computers and network devices to successfully complete their hands-on activities. To achieve this criterion, we created a cloud-based virtual laboratory based on the OpenStack cloud platform to facilitate access to virtual machines both locally and remotely. Cloud-based virtual labs bring a lot of advantages, such as increased manageability, scalability, high availability and flexibility, to name a few. This arrangement has been tested in a case-study exercise with a group of students as part of Computer Networks and System Administration courses at Kabul Polytechnic University in Afghanistan. To measure success, we introduced a level test to be completed by participants before and after the experiment. As a result, the learners achieved an average of 17.1% higher scores in the post level test after completing the practical exercises. Lastly, we distributed a questionnaire after the experiment and students provided positive feedback on the effectiveness and usefulness of the proposed solution.

Tane, E., Fujigaki, Y..  2017.  Cross-Disciplinary Survey on "Data Science" Field Development: Historical Analysis from 1600s-2000s. 2017 Portland International Conference on Management of Engineering and Technology (PICMET). :1–10.

For the last several decades, the rapid development of information technology and computer performance has accelerated the generation, transportation and accumulation of digital data, which came to be called "Big Data". In this context, researchers and companies are eager to utilize the data to create new values or manage a wide range of issues, and much focus is being placed on "Data Science" to extract useful information (knowledge) from digital data. Data Science has developed from several independent fields such as Mathematics/Operations Research, Computer Science, Data Engineering, Visualization and Statistics since the 1800s. In addition, Artificial Intelligence has converged on this stream in recent years. On the other hand, national projects have been established to utilize data for society, amid concerns surrounding security and privacy. In this paper, through detailed analysis of the history of this field, the processes of development and integration among related fields are discussed, as well as comparative aspects between Japan and the United States. This paper also includes a brief discussion of future directions.