Biblio

Found 12044 results

Filters: Keyword is Resiliency
2017-12-12
Hosseini, Fateme S., Fotouhi, Pouya, Yang, Chengmo, Gao, Guang R..  2017.  Leveraging Compiler Optimizations to Reduce Runtime Fault Recovery Overhead. Proceedings of the 54th Annual Design Automation Conference 2017. :20:1–20:6.

Smaller feature sizes, lower supply voltages, and faster clock rates have made modern computer systems more susceptible to faults. Although previous fault tolerance techniques usually target a relatively low fault rate and consider error recovery less critical, with the advent of higher fault rates, recovery overhead is no longer negligible. In this paper, we propose a scheme that leverages and revises a set of compiler optimizations to design, for each application hotspot, a smart recovery plan that identifies the minimal set of instructions to be re-executed in different fault scenarios. Such fault scenario and recovery plan information is efficiently delivered to the processor for runtime fault recovery. The proposed optimizations are implemented in LLVM and GEM5. The results show that the proposed scheme reduces runtime recovery overhead by 72%.
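
The core of such a recovery plan is essentially a backward slice over the faulted value: only the instructions whose results the corrupted value depends on need to be re-executed. The sketch below illustrates that idea on a toy instruction list; it is a conceptual illustration, not the paper's LLVM/GEM5 implementation, and the `recovery_slice` helper and the instruction encoding are hypothetical.

```python
# Illustrative sketch (not the paper's LLVM implementation): given a basic
# block as a list of (index, dest, sources) instructions and a register
# corrupted by a fault, compute the minimal set of instructions to re-execute
# by walking def-use chains backwards from the corrupted value.

def recovery_slice(block, corrupted):
    """block: list of (index, dest, srcs); corrupted: register name."""
    # Map each register to the index of the instruction that last defined it.
    last_def = {}
    for idx, dest, srcs in block:
        last_def[dest] = idx

    needed, worklist = set(), [corrupted]
    while worklist:
        reg = worklist.pop()
        if reg not in last_def:
            continue            # live-in value, assumed intact elsewhere
        idx = last_def[reg]
        if idx in needed:
            continue
        needed.add(idx)
        _, _, srcs = block[idx]
        worklist.extend(srcs)   # the defining instruction's inputs must be valid too
    return sorted(needed)

block = [
    (0, "r1", ["a"]),
    (1, "r2", ["r1", "b"]),
    (2, "r3", ["c"]),
    (3, "r4", ["r2", "r3"]),
]
print(recovery_slice(block, "r4"))   # -> [0, 1, 2, 3]
print(recovery_slice(block, "r2"))   # -> [0, 1]
```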

Abdi, Fardin, Tabish, Rohan, Rungger, Matthias, Zamani, Majid, Caccamo, Marco.  2017.  Application and System-level Software Fault Tolerance Through Full System Restarts. Proceedings of the 8th International Conference on Cyber-Physical Systems. :197–206.

Due to growing performance requirements, embedded systems are increasingly complex. Meanwhile, they are also expected to be reliable. Guaranteeing reliability on complex systems is very challenging. Consequently, there is a substantial need for designs that enable the use of unverified components, such as a real-time operating system (RTOS), without requiring their correctness to guarantee safety. In this work, we propose a novel approach to design a controller that enables the system to restart and remain safe during and after the restart. Complementing this controller with a switching logic allows the system to use a complex, unverified controller to drive the system as long as it does not jeopardize safety. Such a design also tolerates faults that occur in the underlying software layers, such as the RTOS and middleware, and recovers from them through system-level restarts that reinitialize the software (middleware, RTOS, and applications) from read-only storage. Our approach is implementable using one commercial off-the-shelf (COTS) processing unit. To demonstrate the efficacy of our solution, we fully implement a controller for a 3-degree-of-freedom (3DOF) helicopter. We test the system by injecting various types of faults into the applications and RTOS and verify that the system remains safe.
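
A minimal sketch of the switching idea follows, assuming a simple state-envelope check: the unverified controller is used only while the plant state remains inside a conservatively verified safe region, and otherwise control falls back to the safety controller while a full restart is scheduled. The function names, the envelope bound, and the stand-in controllers are hypothetical, not the authors' implementation.

```python
# Minimal sketch of the switching logic (not the authors' design): use the
# complex, unverified controller only while the plant state remains inside a
# conservatively verified safety envelope; otherwise hand control to the
# simple safety controller and schedule a full system restart.

def in_safety_envelope(state, margin=0.2):
    # Hypothetical envelope check: every state variable stays within bounds
    # that still leave enough margin for the safety controller to recover.
    return all(abs(x) <= 1.0 - margin for x in state)

def select_command(state, complex_ctrl, safety_ctrl, restart_hook):
    if in_safety_envelope(state):
        return complex_ctrl(state)   # high performance, unverified
    restart_hook()                   # reload RTOS/middleware/app from read-only image
    return safety_ctrl(state)        # verified controller keeps the plant safe meanwhile

# Toy usage with stand-in controllers.
cmd = select_command(
    state=[0.4, -0.9],
    complex_ctrl=lambda s: [-2.0 * x for x in s],
    safety_ctrl=lambda s: [-0.5 * x for x in s],
    restart_hook=lambda: print("restart scheduled"),
)
print(cmd)
```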

Bos, Jeroen van den.  2017.  Sustainable Automated Data Recovery: A Research Roadmap. Proceedings of the 1st ACM SIGSOFT International Workshop on Software Engineering and Digital Forensics. :6–9.

Digital devices contain increasingly more data and applications. This means more data to handle and a larger amount of different types of traces to recover and consider in digital forensic investigations. Both present a challenge to data recovery approaches, requiring higher performance and increased flexibility. In order to progress to a long-term sustainable approach to automated data recovery, this paper proposes a partitioning into manual, custom, formalized and self-improving approaches. These approaches are described along with research directions to consider: building universal abstractions, selecting appropriate techniques and developing user-friendly tools.

Gilbert, Anna C., Li, Yi, Porat, Ely, Strauss, Martin J..  2017.  For-All Sparse Recovery in Near-Optimal Time. ACM Trans. Algorithms. 13:32:1–32:26.

An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N; an m-by-N measurement matrix Φ; and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ‖x̂ − x‖_1 ≤ (1 + ε)‖x − x_k‖_1. We consider the “for all” model, in which a single matrix Φ, possibly “constructed” non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm by Porat and Strauss [2012] uses O(ε^{-3} k log(N/k)) measurements and runs in time O(k^{1−α} N^α) for any constant α > 0. In this article, we improve the number of measurements to O(ε^{-2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ε)), with a modest restriction that k ≤ N^{1−α} and ε ≤ (log k / log N)^γ for any constants α, β, γ > 0. When k ≤ log^c N for some c > 0, the runtime is reduced to O(k poly(log N, 1/ε)). With no restrictions on ε, we have an approximation recovery system with m = O((k/ε) log(N/k) ((log N / log k)^γ + 1/ε)) measurements. The overall architecture of this algorithm is similar to that of Porat and Strauss [2012] in that we repeatedly use a weak recovery system (with varying parameters) to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or two unbalanced expanders for a deterministic algorithm). The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages. The idea is to encode the signal position index i by associating it with a unique message m_i, which will be encoded to a longer message m'_i (in contrast to Porat and Strauss [2012], in which the encoding is simply the identity). Portions of the message m'_i correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions. The decoding or recovery algorithm consists of recovering the portions of the longer messages m'_i and then decoding to the original messages m_i, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to the list recovery introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_i} are independent of the hashing, which enables us to obtain a better result.
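
For readability, the for-all guarantee and the headline bounds stated in the abstract can be restated in clean notation; this is only a typeset restatement of what the abstract already says.

```latex
% The for-all sparse recovery guarantee and bounds from the abstract,
% restated in display math.
\[
  \widehat{x} = R(\Phi x), \qquad
  \|\widehat{x} - x\|_1 \le (1+\varepsilon)\,\|x - x_k\|_1
  \quad \text{for all signals } x,
\]
\[
  m = O\!\left(\varepsilon^{-2} k \log(N/k)\right) \text{ measurements}, \qquad
  \text{runtime } O\!\left(k^{1+\beta}\,\mathrm{poly}(\log N, 1/\varepsilon)\right),
\]
\[
  \text{subject to } k \le N^{1-\alpha}, \quad
  \varepsilon \le \left(\tfrac{\log k}{\log N}\right)^{\gamma}
  \quad \text{for constants } \alpha, \beta, \gamma > 0 .
\]
```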

Wu, Yingjun, Guo, Wentian, Chan, Chee-Yong, Tan, Kian-Lee.  2017.  Fast Failure Recovery for Main-Memory DBMSs on Multicores. Proceedings of the 2017 ACM International Conference on Management of Data. :267–281.

Main-memory database management systems (DBMS) can achieve excellent performance when processing massive volumes of on-line transactions on modern multi-core machines. But existing durability schemes, namely tuple-level and transaction-level logging-and-recovery mechanisms, either degrade the performance of transaction processing or slow down the process of failure recovery. In this paper, we show that, by exploiting application semantics, it is possible to achieve speedy failure recovery without introducing any costly logging overhead to the execution of concurrent transactions. We propose PACMAN, a parallel database recovery mechanism that is specifically designed for lightweight, coarse-grained transaction-level logging. PACMAN leverages a combination of static and dynamic analyses to parallelize the log recovery: at compile time, PACMAN decomposes stored procedures by carefully analyzing dependencies within and across programs; at recovery time, PACMAN exploits the availability of the runtime parameter values to attain an execution schedule with a high degree of parallelism. As such, recovery performance is remarkably increased. We evaluated PACMAN in a fully-fledged main-memory DBMS running on a 40-core machine. Compared to several state-of-the-art database recovery mechanisms, PACMAN can significantly reduce recovery time without compromising the efficiency of transaction processing.
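
To illustrate the dependency-driven parallelism in its simplest form, the sketch below groups transaction-level log records that touch disjoint key sets and replays independent groups concurrently. This is a much cruder, dynamic-only stand-in for PACMAN's static-plus-dynamic analysis, and all names in it are hypothetical.

```python
# Rough sketch of the recovery-parallelization idea (not PACMAN itself):
# transaction-level log records that touch disjoint key sets have no
# dependencies, so their groups can be replayed in parallel; records that
# share keys stay in one group and are replayed in log order.

from concurrent.futures import ThreadPoolExecutor


def group_independent(log):
    """log: list of (txn_id, keys) in commit order. Union-find over keys."""
    parent = {}

    def find(k):
        root = k
        while parent.get(root, root) != root:
            root = parent[root]
        return root

    def union(a, b):
        parent[find(a)] = find(b)

    for _, keys in log:
        keys = list(keys)
        for k in keys[1:]:
            union(keys[0], k)

    groups = {}
    for txn_id, keys in log:
        root = find(next(iter(keys)))
        groups.setdefault(root, []).append(txn_id)
    return list(groups.values())


def replay(group):
    return f"replayed {group} in order"


log = [(1, {"a"}), (2, {"b"}), (3, {"a", "c"}), (4, {"d"})]
with ThreadPoolExecutor() as pool:
    print(list(pool.map(replay, group_independent(log))))
# e.g. groups [[1, 3], [2], [4]] replayed concurrently
```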

Huang, Jian, Xu, Jun, Xing, Xinyu, Liu, Peng, Qureshi, Moinuddin K..  2017.  FlashGuard: Leveraging Intrinsic Flash Properties to Defend Against Encryption Ransomware. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2231–2244.

Encryption ransomware is malicious software that stealthily encrypts user files and demands a ransom to provide access to these files. Several prior studies have developed systems to detect ransomware by monitoring the activities that typically occur during a ransomware attack. Unfortunately, by the time the ransomware is detected, some files have already been encrypted and the user is still required to pay a ransom to access those files. Furthermore, ransomware variants can obtain kernel privilege, which allows them to terminate software-based defense systems, such as anti-virus. While periodic backups have been explored as a means to mitigate ransomware, such backups incur storage overheads and are still vulnerable as ransomware can obtain kernel privilege to stop or destroy backups. Ideally, we would like to defend against ransomware without relying on software-based solutions and without incurring the storage overheads of backups. To that end, this paper proposes FlashGuard, a ransomware-tolerant Solid State Drive (SSD) with a firmware-level recovery system that allows quick and effective recovery from encryption ransomware without relying on explicit backups. FlashGuard leverages the observation that the existing SSD already performs out-of-place writes in order to mitigate the long erase latency of flash memories. Therefore, when a page is updated or deleted, the older copy of that page is still present in the SSD. FlashGuard slightly modifies the garbage collection mechanism of the SSD to retain the copies of the data encrypted by ransomware and ensure effective data recovery. Our experiments with 1,447 manually labeled ransomware samples show that FlashGuard can efficiently restore files encrypted by ransomware. In addition, we demonstrate that FlashGuard has a negligible impact on the performance and lifetime of the SSD.
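
The retention decision can be pictured with a toy heuristic: when an out-of-place write replaces a low-entropy page with a near-random one, the stale physical copy is kept instead of being reclaimed by garbage collection. The sketch below is purely conceptual Python (the real mechanism lives in SSD firmware), and the entropy thresholds and function names are invented for illustration.

```python
# Conceptual sketch only (FlashGuard itself is implemented in SSD firmware):
# keep the stale copy that out-of-place writes naturally leave behind whenever
# the overwrite looks like ransomware-style bulk encryption, so the pre-attack
# data stays recoverable without an explicit backup.

import math
import os
from collections import Counter


def entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())


def should_retain(old_page: bytes, new_page: bytes) -> bool:
    # Heuristic stand-in for the retention decision: a low-entropy page
    # suddenly rewritten with near-random bytes is suspicious.
    return entropy(old_page) < 6.0 and entropy(new_page) > 7.5


def garbage_collect(stale_pages):
    """stale_pages: list of (old_page, new_page). Return pages to keep."""
    return [old for old, new in stale_pages if should_retain(old, new)]


plain = b"hello world " * 200         # low-entropy user data
cipher = os.urandom(len(plain))       # looks like an encrypted overwrite
print(len(garbage_collect([(plain, cipher), (plain, plain)])))   # -> 1
```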

Jun, Jaeyung, Choi, Kyu Hyun, Kim, Hokwon, Yu, Sang Ho, Kim, Seon Wook, Han, Youngsun.  2017.  Recovering from Biased Distribution of Faulty Cells in Memory by Reorganizing Replacement Regions Through Universal Hashing. ACM Trans. Des. Autom. Electron. Syst.. 23:16:1–16:21.

Recently, scaling down dynamic random access memory (DRAM) has become more of a challenge, with more faults than before and a significant degradation in yield. To improve the yield in DRAM, a redundancy repair technique with intra-subarray replacement has been extensively employed to replace faulty elements (i.e., rows or columns with defective cells) with spare elements in each subarray. Unfortunately, such a technique cannot efficiently handle a biased distribution of faulty cells because each subarray has a fixed number of spare elements. In this article, we propose a novel redundancy repair technique that uses a hashing method to solve this problem. Our hashing technique reorganizes replacement regions by changing the way in which their replacement information is referred to, so that faulty cells become evenly distributed across the regions. We also propose a fast repair algorithm to find the best hash function among all possible candidates. Although our approach requires little hardware overhead, it significantly improves the yield when compared with conventional redundancy techniques. In particular, the results of our experiment show that our technique reduces the number of required spare elements by about 57% and 55% for a yield of 99% at a BER of 1e-6 and 5e-7, respectively.
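
One way to picture the reorganization is with a textbook universal hash family h(x) = ((a·x + b) mod p) mod m that maps faulty row addresses to replacement regions, searching candidate (a, b) pairs for one that spreads the faults below each region's spare budget. The sketch below uses that family as a stand-in; the parameters, the brute-force candidate search, and all names are hypothetical rather than the paper's hardware design or repair algorithm.

```python
# Sketch of the reorganization idea: hash row addresses into replacement
# regions with a universal hash and look for parameters under which a biased
# fault pattern no longer overflows any single region's spares.

from collections import Counter

P = 131071                 # a prime larger than any row address in this example
REGIONS = 8                # number of replacement regions
SPARES_PER_REGION = 3      # spare rows available in each region


def h(a, b, x):
    return ((a * x + b) % P) % REGIONS


def worst_load(a, b, faulty_rows):
    return max(Counter(h(a, b, x) for x in faulty_rows).values())


def find_hash(faulty_rows, max_a=500, max_b=4):
    # Brute-force search over a small candidate set stands in for the paper's
    # fast repair algorithm.
    best = None
    for a in range(1, max_a):
        for b in range(max_b):
            load = worst_load(a, b, faulty_rows)
            if best is None or load < best[0]:
                best = (load, a, b)
            if best[0] <= SPARES_PER_REGION:
                return best
    return best


# A biased fault pattern: most faulty rows cluster in one address range.
faulty = [4096 + i for i in range(10)] + [70000, 81920]
load, a, b = find_hash(faulty)
print(f"worst per-region load {load} with a={a}, b={b}; "
      f"repairable: {load <= SPARES_PER_REGION}")
```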

Taing, Nguonly, Springer, Thomas, Cardozo, Nicolás, Schill, Alexander.  2017.  A Rollback Mechanism to Recover from Software Failures in Role-based Adaptive Software Systems. Companion to the First International Conference on the Art, Science and Engineering of Programming. :11:1–11:6.

Context-dependent applications are relatively complex due to their multiple variations caused by context activation, especially in the presence of unanticipated adaptation. Testing these systems is challenging, as it is hard to reproduce the same execution environments; as a result, software failures caused by bugs are no exception. This paper presents a rollback mechanism to recover from software failures as part of a role-based runtime with support for unanticipated adaptation. The mechanism performs checkpoints before each adaptation and employs specialized sensors to detect bugs resulting from recent configuration changes. When the runtime detects a bug, it assumes that the bug belongs to the latest configuration. The runtime rolls back to the most recent checkpoint to recover and subsequently notifies the developer to fix the bug and re-apply the change through unanticipated adaptation. We prototype the concept as part of our role-based runtime engine LyRT and demonstrate the applicability of the rollback recovery mechanism for unanticipated adaptation in erroneous situations.
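
A stripped-down sketch of the checkpoint-and-rollback loop is shown below, assuming a dictionary-valued configuration and a boolean failure sensor; LyRT's actual role-based runtime is far richer, and the class and sensor here are hypothetical.

```python
# Illustrative sketch of the rollback idea (not LyRT's implementation):
# snapshot the configuration before every adaptation; if a failure sensor
# fires afterwards, blame the latest adaptation, roll back to the snapshot,
# and report the offending change to the developer.

import copy


class AdaptiveRuntime:
    def __init__(self, config):
        self.config = config
        self.checkpoints = []

    def adapt(self, change, failure_sensor):
        self.checkpoints.append(copy.deepcopy(self.config))   # checkpoint first
        self.config.update(change)                             # apply adaptation
        if failure_sensor(self.config):                        # bug attributed to it
            self.config = self.checkpoints.pop()               # roll back
            print(f"adaptation {change} rolled back; fix and re-apply it")
            return False
        return True


rt = AdaptiveRuntime({"controller": "baseline"})
rt.adapt({"controller": "experimental"},
         failure_sensor=lambda cfg: cfg["controller"] == "experimental")
print(rt.config)   # -> {'controller': 'baseline'}
```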

Soska, Kyle, Gates, Chris, Roundy, Kevin A., Christin, Nicolas.  2017.  Automatic Application Identification from Billions of Files. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :2021–2030.

Understanding how to group a set of binary files into the piece of software they belong to is highly desirable for software profiling, malware detection, or enterprise audits, among many other applications. Unfortunately, it is also extremely challenging: there is absolutely no uniformity in the ways different applications rely on different files, in how binaries are signed, or in the versioning schemes used across different pieces of software. In this paper, we show that, by combining information gleaned from a large number of endpoints (millions of computers), we can accomplish large-scale application identification automatically and reliably. Our approach relies on collecting metadata on billions of files every day, summarizing it into much smaller "sketches", and performing approximate k-nearest neighbor clustering on non-metric space representations derived from these sketches. We design and implement our proposed system using Apache Spark, show that it can process billions of files in a matter of hours, and thus could be used for daily processing. We further show that our system successfully identifies which files belong to which application with very high precision and adequate recall.
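
The sketch-and-cluster idea can be illustrated with MinHash signatures over file-name sets: related applications yield similar signatures, so approximate nearest-neighbor search over the compact sketches can group files without comparing full metadata. The example below is a toy stand-in for the paper's Spark pipeline; the file names and sketch size are made up.

```python
# Toy stand-in for the paper's pipeline: summarize each file set into a small
# MinHash sketch, then estimate Jaccard similarity between sketches to group
# files that likely belong to the same application.

import hashlib


def minhash_sketch(items, num_hashes=64):
    sketch = []
    for i in range(num_hashes):
        sketch.append(min(
            int(hashlib.sha1(f"{i}:{item}".encode()).hexdigest(), 16)
            for item in items))
    return sketch


def estimated_jaccard(s1, s2):
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)


app_a = {"firefox.exe", "xul.dll", "nss3.dll", "omni.ja"}
app_a_v2 = {"firefox.exe", "xul.dll", "nss3.dll", "updater.exe"}
app_b = {"winword.exe", "mso.dll", "oart.dll"}

sa, sa2, sb = (minhash_sketch(s) for s in (app_a, app_a_v2, app_b))
print(estimated_jaccard(sa, sa2))   # high: likely the same application
print(estimated_jaccard(sa, sb))    # near zero: different applications
```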

Sürer, Özge.  2017.  Improving Similarity Measures Using Ontological Data. Proceedings of the Eleventh ACM Conference on Recommender Systems. :416–420.

The representation of structural data is important to capture the pattern between features. Interrelations between variables provide information beyond the standard variables. In this study, we show how ontology information may be used in recommender systems to increase the efficiency of predictions. We propose two alternative similarity measures that incorporate the structural data representation. Experiments show that our ontology-based approach delivers improved classification accuracy as the dimension increases.
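
As one hypothetical example of folding ontological structure into a similarity measure (not the two measures proposed in the paper), plain feature overlap can be blended with overlap among the concepts inherited from an ontology hierarchy:

```python
# One possible way to exploit ontology structure in a similarity measure:
# extend plain feature overlap with the concepts shared along the items'
# paths to the ontology root.

def ancestors(concept, parent):
    seen = []
    while concept in parent:
        concept = parent[concept]
        seen.append(concept)
    return set(seen)


def ontology_similarity(feats1, feats2, parent, alpha=0.5):
    # Jaccard over raw features ...
    direct = len(feats1 & feats2) / len(feats1 | feats2)
    # ... blended with Jaccard over inherited ontology concepts.
    anc1 = set().union(*(ancestors(f, parent) for f in feats1))
    anc2 = set().union(*(ancestors(f, parent) for f in feats2))
    inherited = len(anc1 & anc2) / max(len(anc1 | anc2), 1)
    return alpha * direct + (1 - alpha) * inherited


parent = {"jazz": "music", "rock": "music", "music": "art", "sculpture": "art"}
print(ontology_similarity({"jazz"}, {"rock"}, parent))       # 0.5: shared ancestors
print(ontology_similarity({"jazz"}, {"sculpture"}, parent))  # 0.25: only 'art' shared
```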

Shaabani, Nuhad, Meinel, Christoph.  2017.  Incremental Discovery of Inclusion Dependencies. Proceedings of the 29th International Conference on Scientific and Statistical Database Management. :2:1–2:12.

Inclusion dependencies form one of the most fundamental classes of integrity constraints. Their importance in classical data management is reinforced by modern applications such as data profiling, data cleaning, entity resolution, and schema matching. Their discovery in an unknown dataset is at the core of any data analysis effort. Therefore, several research approaches have focused on their efficient discovery in a given, static dataset. However, none of these approaches are appropriate for applications on dynamic datasets, such as transactional datasets, scientific applications, and social networks. In these cases, discovery techniques should be able to efficiently update the inclusion dependencies after an update to the dataset, without reprocessing the entire dataset. We present the first approach for incrementally updating unary inclusion dependencies. In particular, our approach is based on the concept of attribute clustering, from which the unary inclusion dependencies are efficiently derivable. We incrementally update the clusters after each update of the dataset. Updating the clusters does not need to access the dataset because of special data structures designed to efficiently support the updating process. We perform an exhaustive analysis of our approach by applying it to large datasets with several hundred attributes and more than 116,200,000 tuples. The results show that the incremental discovery significantly reduces the runtime needed by the static discovery. This reduction in the runtime is up to 99.9996% for both inserts and deletes.
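
The property being maintained is simple: a unary inclusion dependency A ⊆ B holds exactly when every value appearing in column A also appears in column B. The naive value-set index below shows why single-tuple inserts can be handled incrementally; it ignores the paper's attribute clustering and special data structures, and all names are illustrative.

```python
# Naive illustration of the underlying property (the paper's attribute
# clustering makes this incremental without value-set comparisons): a unary
# inclusion dependency A <= B holds exactly when every value of column A also
# occurs in column B, so per-column indexes can be updated per inserted tuple.

class UnaryINDIndex:
    def __init__(self, columns):
        self.values = {c: set() for c in columns}

    def insert(self, column, value):
        self.values[column].add(value)        # incremental update, no rescan

    def inds(self):
        cols = list(self.values)
        return [(a, b) for a in cols for b in cols
                if a != b and self.values[a] <= self.values[b]]


idx = UnaryINDIndex(["orders.customer_id", "customers.id"])
for v in (1, 2, 3):
    idx.insert("customers.id", v)
for v in (1, 3):
    idx.insert("orders.customer_id", v)
print(idx.inds())   # -> [('orders.customer_id', 'customers.id')]

idx.insert("orders.customer_id", 99)   # a dangling reference breaks the IND
print(idx.inds())   # -> []
```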

Zhou, G., Huang, J. X..  2017.  Modeling and Learning Distributed Word Representation with Metadata for Question Retrieval. IEEE Transactions on Knowledge and Data Engineering. 29:1226–1239.

Community question answering (cQA) has become an important issue due to the popularity of cQA archives on the Web. This paper focuses on addressing the lexical gap problem in question retrieval. Question retrieval in cQA archives aims to find the existing questions that are semantically equivalent or relevant to the queried questions. However, the lexical gap problem brings a new challenge for question retrieval in cQA. In this paper, we propose to model and learn distributed word representations with metadata of category information within cQA pages for question retrieval, using two novel category-powered models. One is a basic category-powered model called MB-NET, and the other is an enhanced category-powered model called ME-NET, which can better learn the distributed word representations and alleviate the lexical gap problem. To deal with the variable size of word representation vectors, we employ the Fisher kernel framework to transform them into fixed-length vectors. Experimental results on large-scale English and Chinese cQA data sets show that our proposed approaches can significantly outperform state-of-the-art retrieval models for question retrieval in cQA. Moreover, we further evaluate our approaches in large-scale automatic evaluation experiments. The evaluation results show that promising and significant performance improvements can be achieved.

Saundry, A..  2017.  Institutional Repository Digital Object Metadata Enhancement and Re-Architecting. 2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL). :1–3.

We present work undertaken at our institutional repository to enhance metadata and re-organize digital objects according to new information architecture, in an effort to minimize administrative object management and processing, and improve object discovery and use. This work was partly motivated by the launch of a new discovery platform at our institution, which aggregates metadata and full text from our four open access repositories into a cohesive, consistent, and enhanced searching and browsing experience. The platform provides digital object identifier (DOI) assignment, metadata access via various formats, and an open metadata and full text application program interface (API) for researchers, amongst other features. Functionality of these platform features relies heavily on accurate object representation and metadata. This work facilitates and improves the discovery and engagement of the diverse digital objects available from our institution, so they can be used and analyzed in new, flexible, and innovative ways by a myriad of communities and disciplines.

Kollenda, B., Göktaş, E., Blazytko, T., Koppe, P., Gawlik, R., Konoth, R. K., Giuffrida, C., Bos, H., Holz, T..  2017.  Towards Automated Discovery of Crash-Resistant Primitives in Binary Executables. 2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :189–200.

Many modern defenses rely on address space layout randomization (ASLR) to efficiently hide security-sensitive metadata in the address space. Absent implementation flaws, an attacker can only bypass such defenses by repeatedly probing the address space for mapped (security-sensitive) regions, incurring a noisy application crash on any wrong guess. Recent work shows that modern applications contain idioms that allow the construction of crash-resistant code primitives, allowing an attacker to efficiently probe the address space without causing any visible crash. In this paper, we classify different crash-resistant primitives and show that this problem is much more prominent than previously assumed. More specifically, we show that rather than relying on labor-intensive source code inspection to find a few "hidden" application-specific primitives, an attacker can find such primitives semi-automatically, on many classes of real-world programs, at the binary level. To support our claims, we develop methods to locate such primitives in real-world binaries. We successfully identified 29 new potential primitives and constructed proof-of-concept exploits for four of them.

Ktob, A., Li, Z..  2017.  The Arabic Knowledge Graph: Opportunities and Challenges. 2017 IEEE 11th International Conference on Semantic Computing (ICSC). :48–52.

The Semantic Web has brought forth the idea of computing with knowledge, thereby attributing the ability to reason to machines. Knowledge Graphs represent a major advancement in the construction of the Web of Data, where machines are context-aware when answering users' queries. The English Knowledge Graph was a milestone realized by Google in 2012. Even though it is a useful source of information for English users and applications, it does not offer much for Arabic users and applications. In this paper, we investigated the different challenges and opportunities that arise throughout the life-cycle of constructing the Arabic Knowledge Graph (AKG), while following some best practices and techniques. Additionally, this work suggests some potential solutions to these challenges. The proprietary nature of data creates a major obstacle to harvesting it. Moreover, when Arabic data is openly available, it is generally in an unstructured form that requires further processing. The complexity of the Arabic language itself creates a further problem for any automatic or semi-automatic extraction process. Therefore, the usage of NLP techniques is a feasible solution. Some preliminary results are presented later in this paper. The AKG has very promising outcomes for the Semantic Web in general and the Arabic community in particular. The goal of the Arabic Knowledge Graph is mainly the integration of the different isolated datasets available on the Web. Later, it can be used in both the academic sector (by providing a large dataset for many different research fields and enhancing discovery) and the commercial sector (by improving search engines, providing metadata, and interlinking businesses).

Nadgowda, S., Duri, S., Isci, C., Mann, V..  2017.  Columbus: Filesystem Tree Introspection for Software Discovery. 2017 IEEE International Conference on Cloud Engineering (IC2E). :67–74.

Software discovery is a key management function to ensure that systems are free of vulnerabilities, comply with licensing requirements, and support advanced search for systems containing given software. Today, software is predominantly discovered through querying package management tools, or using rules that check for file metadata or contents. These approaches are inadequate, as not all software is installed through package managers, and agile development practices lead to frequent deployment of software. Other approaches to software discovery use machine learning methods that require a training phase, or require maintaining knowledge bases. Columbus uses knowledge of the software packaging practices that have evolved over time, and uses the information embedded in the file system impression created by a software package to discover it. Columbus is able to discover software in 92% of all official Docker images. Further, Columbus can be used in problem diagnosis and drift detection situations to compare two different systems, or to determine the evolution of a system over time.
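
The flavor of filesystem-tree introspection can be conveyed with a toy matcher that scores each known package by how much of its characteristic file layout is present in an image's file tree. Columbus derives its signatures from packaging-practice knowledge rather than a hand-written table; the signature map, threshold, and paths below are invented for illustration.

```python
# Toy version of filesystem-tree introspection (not Columbus itself): score
# each known package by the fraction of its characteristic file layout that
# appears in the image's file tree.

SIGNATURES = {
    "nginx":      {"etc/nginx/nginx.conf", "usr/sbin/nginx"},
    "postgresql": {"usr/lib/postgresql", "etc/postgresql"},
    "python3":    {"usr/bin/python3", "usr/lib/python3"},
}


def discover(file_tree, threshold=0.5):
    found = {}
    for package, paths in SIGNATURES.items():
        hits = sum(any(f.startswith(p) for f in file_tree) for p in paths)
        score = hits / len(paths)
        if score >= threshold:
            found[package] = score
    return found


image_files = [
    "etc/nginx/nginx.conf",
    "usr/sbin/nginx",
    "usr/bin/python3",
    "usr/lib/python3/dist-packages/flask/__init__.py",
    "var/log/nginx/access.log",
]
print(discover(image_files))   # -> {'nginx': 1.0, 'python3': 1.0}
```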

Kimmig, A., Memory, A., Miller, R. J., Getoor, L..  2017.  A Collective, Probabilistic Approach to Schema Mapping. 2017 IEEE 33rd International Conference on Data Engineering (ICDE). :921–932.

We propose a probabilistic approach to the problem of schema mapping. Our approach is declarative, scalable, and extensible. It builds upon recent results in both schema mapping and probabilistic reasoning and contributes novel techniques in both fields. We introduce the problem of mapping selection, that is, choosing the best mapping from a space of potential mappings, given both metadata constraints and a data example. As selection has to reason holistically about the inputs and the dependencies between the chosen mappings, we define a new schema mapping optimization problem which captures interactions between mappings. We then introduce Collective Mapping Discovery (CMD), our solution to this problem using state-of-the-art probabilistic reasoning techniques, which allows for inconsistencies and incompleteness. Using hundreds of realistic integration scenarios, we demonstrate that the accuracy of CMD is more than 33% above that of metadata-only approaches already for small data examples, and that CMD routinely finds perfect mappings even if a quarter of the data is inconsistent.

Diaz, J. S. B., Medeiros, C. B..  2017.  WorkflowHunt: Combining Keyword and Semantic Search in Scientific Workflow Repositories. 2017 IEEE 13th International Conference on e-Science (e-Science). :138–147.

Scientific datasets and the experiments that analyze them are growing in size and complexity, and scientists are facing difficulties in sharing such resources. Some initiatives have emerged to try to solve this problem. One of them involves the use of scientific workflows to represent and enact experiment execution. There is an increasing number of workflows that are potentially relevant for more than one scientific domain. However, it is hard to find workflows suitable for reuse given an experiment. Creating a workflow takes time and resources, and their reuse helps scientists to build new workflows faster and in a more reliable way. Search mechanisms in workflow repositories should provide different options for workflow discovery, but it is difficult for generic repositories to provide multiple mechanisms. This paper presents WorkflowHunt, a hybrid architecture for workflow search and discovery in generic repositories, which combines keyword and semantic search to allow finding relevant workflows using different search methods. We validated our architecture by creating a prototype that uses real workflows and metadata from myExperiment, and compared search results obtained via WorkflowHunt with those from myExperiment's search interface.
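
A highly simplified picture of hybrid retrieval is given below: workflows are scored by literal keyword hits plus hits on semantically related terms, and the two signals are merged into one ranking. The term-expansion table, weights, and toy corpus are hypothetical; WorkflowHunt's semantic side relies on real ontologies and richer workflow metadata.

```python
# Simplified sketch of hybrid keyword + semantic retrieval (not WorkflowHunt's
# implementation): score workflows by direct keyword hits plus hits on
# semantically related terms, then merge the two signals into one rank.

RELATED = {  # hypothetical term expansion, e.g. drawn from a domain ontology
    "alignment": {"blast", "clustal", "sequence comparison"},
    "phylogeny": {"tree inference", "raxml"},
}

WORKFLOWS = {
    "wf1": "multiple sequence comparison with clustal followed by curation",
    "wf2": "raxml tree inference from aligned proteins",
    "wf3": "image segmentation pipeline for microscopy data",
}


def hybrid_search(query, keyword_weight=1.0, semantic_weight=0.5):
    terms = query.lower().split()
    scores = {}
    for wf_id, description in WORKFLOWS.items():
        keyword_hits = sum(t in description for t in terms)
        semantic_hits = sum(r in description
                            for t in terms for r in RELATED.get(t, ()))
        score = keyword_weight * keyword_hits + semantic_weight * semantic_hits
        if score > 0:
            scores[wf_id] = score
    return sorted(scores.items(), key=lambda kv: -kv[1])


print(hybrid_search("alignment"))   # finds wf1 via semantics despite no literal hit
```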

2017-12-04
Won, J., Singla, A., Bertino, E..  2017.  CertificateLess Cryptography-Based Rule Management Protocol for Advanced Mission Delivery Networks. 2017 IEEE 37th International Conference on Distributed Computing Systems Workshops (ICDCSW). :7–12.

Assured Mission Delivery Network (AMDN) is a collaborative network to support data-intensive scientific collaborations in a multi-cloud environment. Each scientific collaboration group, called a mission, specifies a set of rules to handle computing and network resources. Security is an integral part of the AMDN design since the rules must be set by authorized users and the data generated by each mission may be privacy-sensitive. In this paper, we propose a CertificateLess cryptography-based Rule-management Protocol (CL-RP) for AMDN, which supports authenticated rule registrations and updates with non-repudiation. We evaluate CL-RP through test-bed experiments and compare it with other standard protocols.

Sattar, N. S., Adnan, M. A., Kali, M. B..  2017.  Secured aerial photography using Homomorphic Encryption. 2017 International Conference on Networking, Systems and Security (NSysS). :107–114.

Aerial photography is fast becoming essential in scientific research that requires multi-agent systems, and we propose a secure system based on the well-known NTRU public-key cryptosystem, which is somewhat homomorphic in nature. We process aerial photographs captured by multiple agents. The agents encrypt the images and upload them to an untrusted cloud server. Cloud computing is a buzzword in the modern era, and the public cloud is used by people everywhere for its shared, on-demand nature. Cloud environments face many security and privacy issues that need to be solved. This paper focuses on using the cloud in such a way that no data or computation can be breached by the cloud server itself, since the server may act dishonestly in various ways. The cloud server computes on the encrypted data without knowing the contents of the images. After concatenation, the encrypted result is delivered to the concerned authority, where it is decrypted while retaining its originality. We set up our experiment on an Amazon EC2 cloud server, where several instances acted as the agents and one instance acted as the server. We varied several parameters so that we could minimize encryption time. After experimentation, we produced the desired result within feasible time while sustaining image quality. This work ensures data security in the public cloud, which was our main concern.

Thayananthan, V., Abdulkader, O., Jambi, K., Bamahdi, A. M..  2017.  Analysis of Cybersecurity Based on Li-Fi in Green Data Storage Environments. 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing (CSCloud). :327–332.

Industrial networking has many issues depending on the type of industry, including data storage, data centers, and cloud computing. Green data storage improves the scientific, commercial, and industrial profile of networking. Future industries are looking for cybersecurity solutions with low-cost resources, and energy supply is the main problem in industrial networking. To address these problems, green data storage is the priority, because data centers and cloud computing deal with data storage. In this analysis, we use a solar energy source and different light rays; our methodology includes a prism and Li-Fi techniques. In this approach, light rays are sent through the prism, which allows us to transmit data at different frequencies. This approach provides green energy and maximum protection within the data center. As a result, this analysis illustrates that cloud services within the green data center in industrial networking will achieve better protection with low-cost energy. Finally, we conclude that Li-Fi enhances the use of green energy and protection, both of which are advantages for current and future industrial networking.

Al-Shomrani, A., Fathy, F., Jambi, K..  2017.  Policy enforcement for big data security. 2017 2nd International Conference on Anti-Cyber Crimes (ICACC). :70–74.

Security and privacy of big data become challenging as data grows and becomes more accessible to more and more clients. Large-scale data storage is becoming a necessity for healthcare, business segments, government departments, scientific endeavors, and individuals. Our research focuses on privacy and security and on how we can make sure that big data is secured. Managing security policy for big data is a challenge that our framework handles. Privacy policy needs to be integrated, flexible, context-aware, and customizable. We build a framework that receives data from the customer, analyzes the received data, extracts the privacy policy, and then identifies the sensitive data. In this paper we present the techniques for the privacy policy that will be created and used in our framework.

Rodrigues, P., Sreedharan, S., Basha, S. A., Mahesh, P. S..  2017.  Security threat identification using energy points. 2017 2nd International Conference on Anti-Cyber Crimes (ICACC). :52–54.

This research paper identifies security issues, especially energy-based security attacks, and enhances the security of the system. It is essential to consider the security of the system to be developed in the initial phases of the Software Development Life Cycle (SDLC), as billions of dollars are lost owing to security flaws in software caused by improper or absent security processes. Security breaches in software systems are extremely common. The scientific literature proposes many solutions to overcome security issues, but all of these security mechanisms are reactive in nature. In this paper, a new security solution is proposed that is proactive in nature, especially for energy-based denial-of-service attacks, which have been frequent in the recent past. The proposed solution is based on the energy consumed by the system, expressed as energy points.

Athinaiou, M..  2017.  Cyber security risk management for health-based critical infrastructures. 2017 11th International Conference on Research Challenges in Information Science (RCIS). :402–407.

This brief paper reports on an early-stage, ongoing PhD project in the field of cyber-physical security in health care critical infrastructures. The research overall aims to develop a methodology that will increase the ability of health critical infrastructures to recover securely. This ambitious, perhaps reckless, attempt, still at an early stage, tries in this paper to answer why cyber-physical security for health care infrastructures is important and of scientific interest. An initial PhD project methodology and expected outcomes are also discussed. The report concludes with emerging challenges and possible future directions.

Mayer, N., Feltus, C..  2017.  Evaluation of the risk and security overlay of archimate to model information system security risks. 2017 IEEE 21st International Enterprise Distributed Object Computing Workshop (EDOCW). :106–116.

We evaluated the support offered by the RSO (Risk and Security Overlay of ArchiMate) for graphically representing our EAM-ISSRM (Enterprise Architecture Management - Information System Security Risk Management) integrated model. The evaluation of the RSO visual notation was done at two different levels: completeness with regard to the EAM-ISSRM integrated model (Section III) and cognitive effectiveness, relying on the nine principles established by D. Moody ["The 'Physics' of Notations: Toward a Scientific Basis for Constructing Visual Notations in Software Engineering," IEEE Trans. Softw. Eng., vol. 35, no. 6, pp. 756-779, Nov. 2009] (Section IV). Regarding completeness, the coverage of the EAM-ISSRM integrated model by the RSO is complete apart from 'Event'. As discussed in Section III, this lack is negligible, and we can consider the RSO an appropriate notation to support the EAM-ISSRM integrated model from a completeness point of view. Regarding cognitive effectiveness, many gaps have been identified with regard to the nine principles established by Moody. Although no quantitative analysis has been performed to objectify this conclusion, the RSO cannot reasonably be considered an appropriate notation from a cognitive effectiveness point of view, and there is room to propose a better notation in this respect. This paper is focused on assessing the RSO without suggesting improvements based on the conclusions drawn. As a consequence, our objective for future work is to propose a more cognitively effective visual notation for the EAM-ISSRM integrated model. The approach currently considered is to operationalize Moody's principles into concrete metrics and requirements, taking into account the needs and profile of the target group of our notation (information security risk managers) through persona development and a user experience map. With such an approach, we will be able to make decisions on the necessary trade-offs in our visual syntax while taking a specific context into account. We also aim at validating our proposal(s) with the help of tools and approaches from cognitive psychology research applied to the HCI domain (e.g., eye tracking, heuristic evaluation, user experience evaluation, ...).