Biblio

Found 2859 results

Filters: First Letter Of Last Name is H
2017-03-07
Hu, Zhiyong, Baynard, C. W., Hu, Hongda, Fazio, M.  2015.  GIS mapping and spatial analysis of cybersecurity attacks on a Florida university. 2015 23rd International Conference on Geoinformatics. :1–5.

As the centers of knowledge, discovery, and intellectual exploration, US universities provide appealing cybersecurity targets. Cyberattack origin patterns and relationships are not evident until data is visualized in maps and tested with statistical models. The current cybersecurity threat detection software utilized by University of North Florida's IT department records large amounts of attacks and attempted intrusions by the minute. This paper presents GIS mapping and spatial analysis of cybersecurity attacks on UNF. First, locations of cyberattack origins were detected by geographic Internet Protocol (GEO-IP) software. Second, GIS was used to map the cyberattack origin locations. Third, we used advanced spatial statistical analysis functions (exploratory spatial data analysis and spatial point pattern analysis) and R software to explore cyberattack patterns. The spatial perspective we promote is novel because there are few studies employing location analytics and spatial statistics in cyber-attack detection and prevention research.
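As a rough illustration of the kind of spatial point pattern analysis the paper performs in R, the sketch below computes a Clark-Evans nearest-neighbour ratio over a handful of invented attack-origin coordinates; the real study used GeoIP-resolved locations and a properly defined study region, so this is only a toy stand-in:

```python
import math

# Hypothetical attack-origin coordinates (degrees), e.g. from a GeoIP lookup.
points = [(30.33, -81.66), (30.35, -81.60), (40.71, -74.01),
          (51.51, -0.13), (30.30, -81.70), (39.90, 116.40)]

def nearest_neighbor_index(pts):
    """Clark-Evans ratio: observed mean nearest-neighbour distance divided
    by the expectation under complete spatial randomness.  Values well
    below 1 suggest clustering of attack origins; values above 1, dispersion."""
    n = len(pts)
    nn = []
    for i, (x1, y1) in enumerate(pts):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(pts) if j != i)
        nn.append(d)
    observed = sum(nn) / n
    # Bounding-box area as a crude study-region estimate.
    xs, ys = zip(*pts)
    area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    expected = 0.5 * math.sqrt(area / n)
    return observed / expected

print(round(nearest_neighbor_index(points), 3))
```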

Tunc, C., Hariri, S., Montero, F. D. L. P., Fargo, F., Satam, P.  2015.  CLaaS: Cybersecurity Lab as a Service – Design, Analysis, and Evaluation. 2015 International Conference on Cloud and Autonomic Computing. :224–227.

The explosive growth of IT infrastructures, cloud systems, and the Internet of Things (IoT) has resulted in complex systems that are extremely difficult to secure and protect against cyberattacks, which are growing exponentially in both complexity and number. Overcoming these cybersecurity challenges requires environments that support the development of innovative cybersecurity algorithms and the evaluation of experiments. In this paper, we present the design, analysis, and evaluation of the Cybersecurity Lab as a Service (CLaaS), which offers virtual cybersecurity experiments as a cloud service that can be accessed from anywhere and from any device (desktop, laptop, tablet, smart mobile device, etc.) with Internet connectivity. We exploit cloud computing systems and virtualization technologies to provide isolated, virtual cybersecurity experiments for exploiting vulnerabilities, launching cyberattacks, hardening cyber resources and services, and more. We also present a performance and effectiveness evaluation of CLaaS experiments used by students.

Allawi, M. A. A., Hadi, A., Awajan, A.  2015.  MLDED: Multi-layer Data Exfiltration Detection System. 2015 Fourth International Conference on Cyber Security, Cyber Warfare, and Digital Forensic (CyberSec). :107–112.

With the growing sophistication of crimeware services, computer and network security has become a crucial issue. Detecting sensitive data exfiltration is a principal component of any information protection strategy. In this research, a Multi-Level Data Exfiltration Detection (MLDED) system that can handle different types of insider data leakage threats, with staircase difficulty levels and their implications for the organizational environment, has been proposed, implemented, and tested. The proposed system detects exfiltration of data outside an organization's information system, where the main goal is to use the detection results for digital forensic purposes. The MLDED system consists of three major levels: hashing, keyword extraction, and labeling. However, it considers only certain types of documents, such as plain ASCII text and PDF files. In response to the challenging issue of identifying insider threats, a forensic-readiness data exfiltration system is designed that is capable of detecting and identifying sensitive information leaks. The results show that the proposed system has an overall detection accuracy of 98.93%.
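The three detection levels the abstract names (hashing, keyword extraction, labeling) can be sketched as a layered filter on outbound documents. The hashes, keywords, and decision labels below are invented for illustration and are not the system's actual rules:

```python
import hashlib

# Hypothetical sensitive-document fingerprints and keywords.
SENSITIVE_HASHES = {hashlib.sha256(b"Q3 merger plan - confidential").hexdigest()}
SENSITIVE_KEYWORDS = {"confidential", "merger", "payroll"}

def classify_outbound(text: str) -> str:
    """Layered check on an outbound plain-text document."""
    # Level 1: exact-content match via cryptographic hash.
    if hashlib.sha256(text.encode()).hexdigest() in SENSITIVE_HASHES:
        return "BLOCK: known sensitive document"
    # Level 2: keyword extraction over a normalized token stream.
    tokens = {t.strip(".,;:!?").lower() for t in text.split()}
    hits = tokens & SENSITIVE_KEYWORDS
    if hits:
        # Level 3: label the event for forensic readiness.
        return f"FLAG: keywords {sorted(hits)}"
    return "ALLOW"

print(classify_outbound("Q3 merger plan - confidential"))
print(classify_outbound("lunch menu for friday"))
```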

Mir, I. E., Kim, D. S., Haqiq, A.  2015.  Security modeling and analysis of a self-cleansing intrusion tolerance technique. 2015 11th International Conference on Information Assurance and Security (IAS). :111–117.

Since security is increasingly the principal concern in the conception and implementation of software systems, it is very important that security mechanisms are designed to protect computer systems against cyber attacks. Intrusion tolerance systems play a crucial role in maintaining service continuity and enhancing security compared with traditional security mechanisms. In this paper, we propose combining preventive maintenance with an existing intrusion tolerance system to improve system security. We use a semi-Markov process to model the system behavior. We quantitatively analyze system security using measures such as system availability, mean time to security failure (MTTSF), and cost. A numerical analysis is presented to show the feasibility of the proposed approach.
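The paper models the system with a semi-Markov process; as a much simpler stand-in, the sketch below estimates the steady-state availability of a three-state discrete-time Markov chain (healthy / compromised / cleansing) whose transition probabilities are made up for illustration:

```python
# Toy discrete-time Markov model of a self-cleansing intrusion-tolerant server.
# States: 0 = healthy, 1 = compromised, 2 = cleansing (probabilities invented).
P = [
    [0.95, 0.04, 0.01],  # healthy: attack succeeds 4%, scheduled cleanse 1%
    [0.00, 0.70, 0.30],  # compromised: detected and sent to cleansing 30%
    [0.60, 0.00, 0.40],  # cleansing: restored to healthy 60%
]

def steady_state(P, iters=10_000):
    """Approximate the stationary distribution by power iteration."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

pi = steady_state(P)
availability = pi[0]  # long-run fraction of time the service is healthy
print(f"availability = {availability:.3f}")
```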

Namazifard, A., Amiri, B., Tousi, A., Aminilari, M., Hozhabri, A. A.  2015.  Literature review of different contention of E-commerce security and the purview of cyber law factors. 2015 9th International Conference on e-Commerce in Developing Countries: With focus on e-Business (ECDC). :1–14.

Today, with the widespread use of information technology (IT), e-commerce security and its related legislation are critical issues in information technology and court law. There is a consensus that security matters are a significant foundation of e-commerce, electronic consumers, and firms' privacy. While e-commerce networks need a policy for security and privacy, they should also be built on a simple, consumer-friendly infrastructure, so it is necessary to review the theoretical models for revision. In this review, we examine a number of prior articles that cover e-commerce security and the scope of legislation at the individual level, assessing five criteria: whether the articles provide an effective strategy for security-protection challenges in e-commerce and for e-consumers, and whether legal provisions clearly remedy precedents or still need to mature. This paper analyzes the prior discussion of e-commerce security and existing legislation on cyber-crime activity in e-commerce. The article also offers recommendations for subsequent research, indicating that strengthening the security factors of e-commerce can help fill the vacuum in its legislation.

Madaio, Michael, Chen, Shang-Tse, Haimson, Oliver L., Zhang, Wenwen, Cheng, Xiang, Hinds-Aldrich, Matthew, Chau, Duen Horng, Dilkina, Bistra.  2016.  Firebird: Predicting Fire Risk and Prioritizing Fire Inspections in Atlanta. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :185–194.

The Atlanta Fire Rescue Department (AFRD), like many municipal fire departments, actively works to reduce fire risk by inspecting commercial properties for potential hazards and fire code violations. However, AFRD's fire inspection practices relied on tradition and intuition, with no existing data-driven process for prioritizing fire inspections or identifying new properties requiring inspection. In collaboration with AFRD, we developed the Firebird framework to help municipal fire departments identify and prioritize commercial property fire inspections, using machine learning, geocoding, and information visualization. Firebird computes fire risk scores for over 5,000 buildings in the city, with true positive rates of up to 71% in predicting fires. It has identified 6,096 new potential commercial properties to inspect, based on AFRD's criteria for inspection. Furthermore, through an interactive map, Firebird integrates and visualizes fire incidents, property information and risk scores to help AFRD make informed decisions about fire inspections. Firebird has already begun to make positive impact at both local and national levels. It is improving AFRD's inspection processes and Atlanta residents' safety, and was highlighted by National Fire Protection Association (NFPA) as a best practice for using data to inform fire inspections.
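The 71% true positive rate quoted above is a threshold-dependent quantity. A minimal sketch of how such a rate falls out of a ranked risk-score list follows; the scores and fire outcomes are invented:

```python
# Toy illustration of thresholding risk scores into an inspection list and
# measuring the true positive rate.
buildings = [  # (risk_score, had_fire)
    (0.91, True), (0.85, False), (0.77, True), (0.60, False),
    (0.45, True), (0.30, False), (0.12, False), (0.05, False),
]

def true_positive_rate(scored, threshold):
    """Share of actual fires captured by inspecting buildings scoring
    at or above the threshold."""
    fires = [fire for _, fire in scored if fire]
    caught = [fire for score, fire in scored if fire and score >= threshold]
    return len(caught) / len(fires)

print(true_positive_rate(buildings, 0.5))
```

Lowering the threshold widens the inspection list and raises the true positive rate, at the cost of more inspections of buildings that never catch fire.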

Lau, Billy Pik Lik, Chaturvedi, Tanmay, Ng, Benny Kai Kiat, Li, Kai, Hasala, Marakkalage S., Yuen, Chau.  2016.  Spatial and Temporal Analysis of Urban Space Utilization with Renewable Wireless Sensor Network. Proceedings of the 3rd IEEE/ACM International Conference on Big Data Computing, Applications and Technologies. :133–142.

Space utilization is an important element for a smart city in determining how well public spaces are being used. Such information can also provide valuable feedback to urban developers on the factors that impact space utilization. The spatial and temporal information on space utilization can be studied and further analyzed to generate insights about a particular space. In our research context, these elements become part of big data and the Internet of Things (IoT), eliminating the need for on-site investigation. However, there are a number of challenges for large-scale deployment, e.g., hardware cost, computation capability, communication bandwidth, scalability, data fragmentation, and resident privacy. In this paper, we design and prototype a Renewable Wireless Sensor Network (RWSN) that addresses the aforementioned challenges. Finally, analysis results based on the initial data collected are presented.

Ren, Xiang, El-Kishky, Ahmed, Ji, Heng, Han, Jiawei.  2016.  Automatic Entity Recognition and Typing in Massive Text Data. Proceedings of the 2016 International Conference on Management of Data. :2235–2239.

In today's computerized and information-based society, individuals are constantly presented with vast amounts of text data, ranging from news articles, scientific publications, product reviews, to a wide range of textual information from social media. To extract value from these large, multi-domain pools of text, it is of great importance to gain an understanding of entities and their relationships. In this tutorial, we introduce data-driven methods to recognize typed entities of interest in massive, domain-specific text corpora. These methods can automatically identify token spans as entity mentions in documents and label their fine-grained types (e.g., people, product and food) in a scalable way. Since these methods do not rely on annotated data, predefined typing schema or hand-crafted features, they can be quickly adapted to a new domain, genre and language. We demonstrate on real datasets including various genres (e.g., news articles, discussion forum posts, and tweets), domains (general vs. bio-medical domains) and languages (e.g., English, Chinese, Arabic, and even low-resource languages like Hausa and Yoruba) how these typed entities aid in knowledge discovery and management.

Farid, Mina, Roatis, Alexandra, Ilyas, Ihab F., Hoffmann, Hella-Franziska, Chu, Xu.  2016.  CLAMS: Bringing Quality to Data Lakes. Proceedings of the 2016 International Conference on Management of Data. :2089–2092.

With the increasing incentive of enterprises to ingest as much data as they can in what is commonly referred to as "data lakes", and with the recent development of multiple technologies to support this "load-first" paradigm, the new environment presents serious data management challenges. Among them, assessing data quality and cleaning large volumes of heterogeneous data sources become essential tasks in unveiling the value of big data. The coveted use of unstructured and semi-structured data in large volumes makes current data cleaning tools (primarily designed for relational data) not directly applicable. We present CLAMS, a system to discover and enforce expressive integrity constraints from large amounts of lake data with very limited schema information (e.g., represented as RDF triples). This demonstration shows how CLAMS is able to discover the constraints and the schemas they are defined on simultaneously. CLAMS also introduces a scale-out solution to efficiently detect errors in the raw data. CLAMS interacts with human experts both to validate the discovered constraints and to suggest data repairs. CLAMS has been deployed in a real large-scale enterprise data lake and was evaluated on a real data set of 1.2 billion triples. It has been able to spot multiple obscure data inconsistencies and errors early in the data processing stack, providing huge value to the enterprise.
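Enforcing an integrity constraint over schema-poor RDF-style triples, the core idea behind CLAMS, can be sketched in a few lines. The triples and the single constraint below (a functional property: each subject has at most one value for a given predicate) are invented for illustration, not CLAMS's actual constraint language:

```python
# RDF-style (subject, predicate, object) triples; the third is inconsistent.
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Paris",  "capitalOf", "France"),
    ("Paris",  "capitalOf", "Germany"),
]

def functional_violations(triples, predicate):
    """Return subjects that carry more than one object for `predicate`,
    i.e. violations of a functional-property constraint."""
    seen = {}
    for s, p, o in triples:
        if p == predicate:
            seen.setdefault(s, set()).add(o)
    return {s: objs for s, objs in seen.items() if len(objs) > 1}

print(functional_violations(triples, "capitalOf"))
```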

Lappalainen, Tuomas, Virtanen, Lasse, Häkkilä, Jonna.  2016.  Experiences with Wellness Ring and Bracelet Form Factor. Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia. :351–353.

This paper explores experiences with ring and bracelet activity tracker form factors. During the first week of a 2-week field study, participants (n=6) wore non-functional mock-ups of ring and bracelet wellness trackers and provided feedback on their experiences. During the second week, participants used a commercial wellness tracking ring, which collected physical exercise and sleep data and visualized it in a mobile application. Our salient findings, based on 196 user diary entries, suggest that the ring form factor is considered beautiful, aesthetic, and contributing to the wearer's image. However, the bracelet form factor is more practical for an active lifestyle and preferred in situations where the hands perform tasks that require gripping objects, such as sports activities, cleaning the car, cooking, and washing dishes. Users strongly identified the ring form factor as jewellery that is intended to be seen, whereas bracelets were considered hidden and inconspicuous elements of the user's ensemble.

Heindorf, Stefan, Potthast, Martin, Stein, Benno, Engels, Gregor.  2016.  Vandalism Detection in Wikidata. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :327–336.

Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation. Its knowledge is increasingly used within Wikipedia itself and in various other kinds of information systems, imposing high demands on its integrity. Wikidata can be edited by anyone and, unfortunately, it frequently gets vandalized, exposing all information systems that use it to the risk of spreading vandalized and falsified information. In this paper, we present a new machine learning-based approach to detect vandalism in Wikidata. We propose a set of 47 features that exploit both content and context information, and we report on 4 classifiers of increasing effectiveness tailored to this learning task. Our approach is evaluated on the recently published Wikidata Vandalism Corpus WDVC-2015, where it achieves an area under the receiver operating characteristic curve (ROC-AUC) of 0.991. It significantly outperforms the state of the art represented by the rule-based Wikidata Abuse Filter (0.865 ROC-AUC) and a prototypical vandalism detector recently introduced by Wikimedia within the Objective Revision Evaluation Service (0.859 ROC-AUC).
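The ROC-AUC figures above can be computed directly from classifier scores via the rank-statistic (Mann-Whitney) formulation: the probability that a randomly chosen positive outscores a randomly chosen negative. The scores and labels below are invented:

```python
def roc_auc(scores, labels):
    """Probability that a random vandalism edit (label 1) outranks a
    random benign edit (label 0); ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.95, 0.90, 0.70, 0.40, 0.20, 0.10]   # classifier outputs
labels = [1,    1,    0,    1,    0,    0]      # 1 = vandalism
print(round(roc_auc(scores, labels), 3))
```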

Petrić, Jean, Bowes, David, Hall, Tracy, Christianson, Bruce, Baddoo, Nathan.  2016.  The Jinx on the NASA Software Defect Data Sets. Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering. :13:1–13:5.

Background: The NASA datasets have previously been used extensively in studies of software defects. In 2013 Shepperd et al. presented an essential set of rules for removing erroneous data from the NASA datasets making this data more reliable to use. Objective: We have now found additional rules necessary for removing problematic data which were not identified by Shepperd et al. Results: In this paper, we demonstrate the level of erroneous data still present even after cleaning using Shepperd et al.'s rules and apply our new rules to remove this erroneous data. Conclusion: Even after systematic data cleaning of the NASA MDP datasets, we found new erroneous data. Data quality should always be explicitly considered by researchers before use.
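A rule-based cleaning pass of the kind the paper describes can be sketched as follows. The records, metric names, and the two rules shown are illustrative only and are not Shepperd et al.'s published rule set:

```python
# Each record is one software module with static metrics and a defect count.
records = [
    {"loc_total": 100, "loc_comments": 10, "defects": 2},
    {"loc_total": 0,   "loc_comments": 0,  "defects": 0},   # implausible
    {"loc_total": 50,  "loc_comments": 80, "defects": 1},   # inconsistent
]

RULES = [
    ("empty module",     lambda r: r["loc_total"] == 0),
    ("comments > total", lambda r: r["loc_comments"] > r["loc_total"]),
]

def clean(rows):
    """Partition rows into kept records and dropped (record, violations) pairs."""
    kept, dropped = [], []
    for r in rows:
        violated = [name for name, bad in RULES if bad(r)]
        if violated:
            dropped.append((r, violated))
        else:
            kept.append(r)
    return kept, dropped

kept, dropped = clean(records)
print(len(kept), "kept,", len(dropped), "dropped")
```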

Agrawal, Divy, Ba, Lamine, Berti-Equille, Laure, Chawla, Sanjay, Elmagarmid, Ahmed, Hammady, Hossam, Idris, Yasser, Kaoudi, Zoi, Khayyat, Zuhair, Kruse, Sebastian, et al.  2016.  Rheem: Enabling Multi-Platform Task Execution. Proceedings of the 2016 International Conference on Management of Data. :2069–2072.

Many emerging applications, from domains such as healthcare and oil & gas, require several data processing systems for complex analytics. This demo paper showcases Rheem, a framework that provides multi-platform task execution for such applications. It features a three-layer data processing abstraction and a new query optimization approach for multi-platform settings. We will demonstrate the strengths of Rheem by using real-world scenarios from three different applications, namely machine learning, data cleaning, and data fusion.

Lin, Xiaofeng, Chen, Yu, Li, Xiaodong, Mao, Junjie, He, Jiaquan, Xu, Wei, Shi, Yuanchun.  2016.  Scalable Kernel TCP Design and Implementation for Short-Lived Connections. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :339–352.

With the rapid growth of network bandwidth, increases in CPU cores on a single machine, and application API models demanding more short-lived connections, a scalable TCP stack is performance-critical. Although many clean-slate designs have been proposed, production environments still call for a bottom-up parallel TCP stack design that is backward-compatible with existing applications. We present Fastsocket, a BSD socket-compatible and scalable kernel socket design, which achieves table-level connection partition in the TCP stack and guarantees connection locality for both passive and active connections. The Fastsocket architecture is a ground-up partition design, from NIC interrupts all the way up to applications, which naturally eliminates various lock contentions in the entire stack. Moreover, Fastsocket maintains the full functionality of the kernel TCP stack and a BSD-socket-compatible API, so applications need no modifications. Our evaluations show that Fastsocket achieves a speedup of 20.4x on a 24-core machine under a workload of short-lived connections, outperforming the state-of-the-art Linux kernel TCP implementations. When scaling up to 24 CPU cores, Fastsocket increases the throughput of Nginx and HAProxy by 267% and 621% respectively compared with the base Linux kernel. We also demonstrate that Fastsocket can achieve scalability and preserve the BSD socket API at the same time. Fastsocket is already deployed in the production environment of Sina Weibo, serving 50 million daily active users and billions of requests per day.

West, Ruth, Kajihara, Meghan, Parola, Max, Hays, Kathryn, Hillard, Luke, Carlew, Anne, Deutsch, Jeremey, Lane, Brandon, Holloway, Michelle, John, Brendan, et al.  2016.  Eliciting Tacit Expertise in 3D Volume Segmentation. Proceedings of the 9th International Symposium on Visual Information Communication and Interaction. :59–66.

The output of 3D volume segmentation is crucial to a wide range of endeavors. Producing accurate segmentations often proves to be both inefficient and challenging, in part due to limited imaging data quality (contrast and resolution), and because of ambiguity in the data that can only be resolved with higher-level knowledge of the structure and the context wherein it resides. Automatic and semi-automatic approaches are improving, but in many cases still fail or require substantial manual clean-up or intervention. Expert manual segmentation and review is therefore still the gold standard for many applications. Unfortunately, existing tools (both custom-made and commercial) are often designed around the underlying algorithm, not the best method for expressing higher-level intention. Our goal is to analyze manual (or semi-automatic) segmentation to gain a better understanding of both low-level perceptual tasks and actions and high-level decision making. This can be used to produce segmentation tools that are more accurate, efficient, and easier to use. Questioning or observation alone is insufficient to capture this information, so we utilize a hybrid capture protocol that blends observation, surveys, and eye tracking. We then developed and validated data coding schemes capable of discerning low-level actions and overall task structures.

Talbot, Jeremie, Piretti, Mark, Singleton, Kevin, Hessler, Mark.  2016.  Designing an Interaction with an Octopus. ACM SIGGRAPH 2016 Talks. :43:1–43:2.

In Pixar's Finding Dory, we are introduced to a new character: Hank the Octopus. This is a very different character from any Pixar has been asked to animate before. Our directors demanded both precise control and graceful, clean silhouettes. The reference artwork we were given showed complex curves between arms and body without any disjointed shapes or breaks in form. Video of octopuses in motion reveals an infinitely malleable creature capable of an enormous shape language. This art direction required a small group of TDs to create a control scheme that was sensible and flexible, with a new level of control, so that animators could bring Hank to life. We had to think deeply about everything from the tips of the fingers all the way through how the tentacles connect to the mouth corners and eye sockets. Each of these issues raised concerns around design, deformation, and finally how the end user could manipulate such complexity effectively.

Hsu, Justin, Morgenstern, Jamie, Rogers, Ryan, Roth, Aaron, Vohra, Rakesh.  2016.  Do Prices Coordinate Markets? Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :440–453.

Walrasian equilibrium prices have a remarkable property: they allow each buyer to purchase a bundle of goods that she finds the most desirable, while guaranteeing that the induced allocation over all buyers will globally maximize social welfare. However, this clean story has two caveats. * First, the prices may induce indifferences. In fact, the minimal equilibrium prices necessarily induce indifferences. Accordingly, buyers may need to coordinate with one another to arrive at a socially optimal outcome—the prices alone are not sufficient to coordinate the market. * Second, although natural procedures converge to Walrasian equilibrium prices on a fixed population, in practice buyers typically observe prices without participating in a price computation process. These prices cannot be perfect Walrasian equilibrium prices, but instead somehow reflect distributional information about the market. To better understand the performance of Walrasian prices when facing these two problems, we give two results. First, we propose a mild genericity condition on valuations under which the minimal Walrasian equilibrium prices induce allocations which result in low over-demand, no matter how the buyers break ties. In fact, under genericity the over-demand of any good can be bounded by 1, which is the best possible at the minimal prices. We demonstrate our results for unit demand valuations and give an extension to matroid based valuations (MBV), conjectured to be equivalent to gross substitute valuations (GS). Second, we use techniques from learning theory to argue that the over-demand and welfare induced by a price vector converge to their expectations uniformly over the class of all price vectors, with respective sample complexity linear and quadratic in the number of goods in the market. These results make no assumption on the form of the valuation functions. 
These two results imply that under a mild genericity condition, the exact Walrasian equilibrium prices computed in a market are guaranteed to induce both low over-demand and high welfare when used in a new market where agents are sampled independently from the same distribution, whenever the number of agents is larger than the number of commodities in the market.
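The first caveat, indifference at minimal prices causing over-demand under unlucky tie-breaking, can be made concrete with a toy unit-demand market; the valuations, prices, and tie-breaking rules below are invented:

```python
def demand(valuations, prices, tie_break):
    """Each unit-demand buyer picks a utility-maximizing single good (or
    nothing); `tie_break` resolves indifference per buyer."""
    chosen = []
    for b, vals in enumerate(valuations):
        utils = {g: v - prices[g] for g, v in vals.items()}
        best = max(utils.values())
        if best < 0:
            chosen.append(None)
            continue
        candidates = [g for g, u in utils.items() if u == best]
        if best == 0:
            candidates.append(None)  # buying nothing is equally good
        chosen.append(tie_break(b, candidates))
    return chosen

# Both buyers value the single unit of good "a" at 5; the minimal
# market-clearing price is 5, leaving both buyers indifferent.
vals = [{"a": 5.0}, {"a": 5.0}]
greedy = demand(vals, {"a": 5.0}, lambda b, cs: cs[0])        # both grab "a"
cautious = demand(vals, {"a": 5.0}, lambda b, cs: None)       # both abstain
print(greedy.count("a"), "vs", cautious.count("a"))
```

With greedy tie-breaking the good is over-demanded by one unit; with cautious tie-breaking it goes unsold. The prices alone do not coordinate the market, which is exactly the paper's point.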

Hsu, Kai-Cheng, Lin, Kate Ching-Ju, Wei, Hung-Yu.  2016.  Full-duplex Delay-and-forward Relaying. Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing. :221–230.

A full-duplex radio can transmit and receive simultaneously and, hence, is a natural fit for realizing an in-band relay system. Most existing full-duplex relay designs, however, simply forward an amplified version of the received signal without decoding it, thereby also amplifying the noise at the relay and offsetting the throughput gains of full-duplex relaying. To overcome this issue, we explore an alternative: demodulate-and-forward. This paper presents the design and implementation of DelayForward (DF), a practical system that fully extracts the relay gains of a full-duplex demodulate-and-forward mechanism. DF allows a relay to remove its noise from the signal it receives via demodulation and forward the clean signal to the destination with a small delay. While such a delay-and-forward mechanism avoids forwarding the relay's noise, the half-duplex destination now receives the combination of the direct signal from the source and the delayed signal from the relay. Unlike previous theoretical work, which mainly focuses on deriving the capacity of demodulate-and-forward relaying, we observe that such combined signals have a structure similar to a convolutional code and, hence, propose a novel Viterbi-type decoder to recover data from those combined signals in practice. Another challenge is that the performance of a full-duplex relay is inherently bounded by the minimum of the relay's SNR and the destination's SNR. To break this limitation, we further develop a power allocation scheme to optimize the capacity of DF. We have built a prototype of DF using USRP software radios. Experimental results show that our power-adaptive DF delivers a throughput gain of 1.25×, on average, over the state-of-the-art full-duplex relay design. The gain is as high as 2.03× for the more challenged clients.

Huang, Muhuan, Wu, Di, Yu, Cody Hao, Fang, Zhenman, Interlandi, Matteo, Condie, Tyson, Cong, Jason.  2016.  Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale. Proceedings of the Seventh ACM Symposium on Cloud Computing. :456–469.

With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft's FPGA deployment in its Bing search engine and Intel's $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster.

Backes, Michael, Bugiel, Sven, Huang, Jie, Schranz, Oliver.  2016.  POSTER: The ART of App Compartmentalization. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1811–1813.

On Android, advertising libraries are commonly integrated with their host apps. Since the host and advertising components share the application's sandbox, advertisement code inherits all permissions and can access host resources with no further approval needed. Motivated by the privacy risks of advertisement libraries as already shown in the literature, this poster introduces an Android Runtime (ART) based app compartmentalization mechanism to achieve separation between trusted app code and untrusted library code without system modification and application rewriting. With our approach, advertising libraries will be isolated from the host app and the original app will be partitioned into two sub-apps that run independently, with the host app's resources and permissions being protected by Android's app sandboxing mechanism. ARTist [1], a compiler-based Android app instrumentation framework, is utilized here to recreate the communication channels between host and advertisement library. The result is a robust toolchain on device which provides a clean separation of developer-written app code and third-party advertisement code, allowing for finer-grained access control policies and information flow control without OS customization and application rebuilding.

2017-02-27
Li, X., He, Z., Zhang, S.  2015.  Robust optimization of risk for power system based on information gap decision theory. 2015 5th International Conference on Electric Utility Deregulation and Restructuring and Power Technologies (DRPT). :200–204.

Risk-control optimization has great significance for the security of power systems. Usually, probabilistic uncertainties of parameters are considered in research on power system risk optimization. However, a probabilistic description of uncertainty is insufficient when sample data are lacking. Non-probabilistic uncertainties of parameters should then be considered, as they impose a significant influence on the optimization results. To solve this problem, a robust optimization method for power system risk control is presented in this paper, considering non-probabilistic parameter uncertainty based on information gap decision theory (IGDT). In the method, loads are modeled as non-probabilistic uncertain parameters, and a robust risk-control optimization model is formulated. By solving the model, the maximum tolerable fluctuation of the pre-specified target can be obtained, along with the corresponding operating strategy. The proposed model is applied to the IEEE 30-bus system by simulation. The results provide valuable information for operating departments in risk management.
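IGDT's core computation, finding the largest uncertainty horizon whose worst case still meets a performance budget, can be sketched numerically. The linear cost model, load value, and tolerance below are invented and stand in for the paper's actual power system model:

```python
def cost(load):
    """Toy operating-cost model: fixed cost plus a linear load term."""
    return 10.0 + 2.0 * load

def robustness_horizon(base_load, beta, step=1e-4):
    """Largest alpha such that the worst-case cost at load (1+alpha)*base_load
    stays within (1+beta) times the nominal cost (IGDT robustness function)."""
    budget = (1.0 + beta) * cost(base_load)
    alpha = 0.0
    while cost((1.0 + alpha + step) * base_load) <= budget:
        alpha += step
    return alpha

# With a 10% cost tolerance, how much load fluctuation can be absorbed?
print(round(robustness_horizon(base_load=100.0, beta=0.1), 3))
```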

Abd, S. K., Salih, R. T., Al-Haddad, S. A. R., Hashim, F., Abdullah, A. B. H., Yussof, S.  2015.  Cloud computing security risks with authorization access for secure Multi-Tenancy based on AAAS protocol. TENCON 2015 - 2015 IEEE Region 10 Conference. :1–5.

Many cloud security complexities arise from the open system architecture of the cloud. One of these complexities is the multi-tenancy security issue. This paper discusses and addresses the most common public cloud security complexities, focusing on multi-tenancy. Multi-tenancy is one of the most important security challenges faced by public cloud service providers. Therefore, this paper presents a secure multi-tenancy architecture using an authorization model based on the AAAS protocol. By utilizing cloud infrastructure, our suggested authorization system can provide access control to various cloud information and services. Each business can offer several cloud services, and these services can cooperate with other services belonging to the same organization or to a different one. Moreover, these cooperation agreements are supported by our suggested system.

Mulcahy, J. J., Huang, S..  2015.  An autonomic approach to extend the business value of a legacy order fulfillment system. 2015 Annual IEEE Systems Conference (SysCon) Proceedings. :595–600.

In the modern retailing industry, many enterprise resource planning (ERP) systems are considered legacy software systems that have become too expensive to replace and too costly to re-engineer. Countering the need to maintain and extend the business value of these systems is the need to do so in the simplest, cheapest, and least risky manner available. There are a number of approaches used by software engineers to mitigate the negative impact of evolving a legacy system, including leveraging service-oriented architecture to automate manual tasks previously performed by humans. A relatively recent approach in software engineering focuses upon implementing self-managing attributes, or “autonomic” behavior, in software applications and systems of applications in order to reduce or eliminate the need for human monitoring and intervention. Entire systems can be autonomic, or they can be hybrid systems that implement one or more autonomic components to communicate with external systems. In this paper, we describe a commercial development project in which a legacy multi-channel commerce enterprise resource planning system was extended with a service-oriented architecture and an autonomic control-loop design to communicate with an external third-party security screening provider. The goal was to reduce the cost of the human labor necessary to screen an ever-increasing volume of orders and to reduce the potential for human error in the screening process. The solution automated what was previously an inefficient, incomplete, and potentially error-prone manual process by inserting a new autonomic software component into the existing order fulfillment workflow.
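The autonomic control loop the abstract describes can be sketched as a monitor-analyze-plan-execute cycle over the order stream. The order fields, the threshold rule, and the `screen` stub standing in for the external provider are all hypothetical:

```python
# Sketch of an autonomic control loop for order screening: orders are
# observed, analyzed against an external screening service, and routed
# without human intervention; only exceptions are held for review.

def screen(order):
    """Hypothetical stand-in for the third-party security screening call."""
    return "hold" if order["amount"] > 1000 else "clear"

def autonomic_screening(orders):
    released, held = [], []
    for order in orders:                 # Monitor: observe incoming orders
        verdict = screen(order)          # Analyze: consult the screening provider
        if verdict == "clear":           # Plan/Execute: release automatically
            released.append(order["id"])
        else:
            held.append(order["id"])     # exceptions routed to human review
    return released, held

orders = [{"id": 1, "amount": 250}, {"id": 2, "amount": 5000}]
print(autonomic_screening(orders))       # -> ([1], [2])
```

The value of inserting such a component into an existing workflow is that the legacy fulfillment system only sees released orders, while the screening decision itself is delegated and auditable.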

Huda, S., Sudarsono, A., Harsono, T..  2015.  Secure data exchange using authenticated Ciphertext-Policy Attributed-Based Encryption. 2015 International Electronics Symposium (IES). :134–139.

Sharing files over a public network with only certain intended recipients often results in shared folders or files leaking and being read by others who are not authorized. Securing data is one of the most challenging issues in data-sharing systems. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a reliable asymmetric encryption mechanism that deals with secure data and is used for data encryption. A ciphertext need not be encrypted to one particular user; a recipient is able to decrypt if and only if the attribute set of his private key matches the policy specified in the ciphertext. In this paper, we propose a secure data exchange using CP-ABE with an authentication feature. The data is attribute-based encrypted to satisfy confidentiality and simultaneously authenticated to satisfy data authentication.
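CP-ABE itself rests on pairing-based cryptography, which is beyond a short sketch; what can be illustrated is the access rule the abstract states: decryption succeeds if and only if the key's attribute set satisfies the ciphertext's policy. The nested AND/OR tuple representation of the policy below is an assumption for illustration:

```python
# Illustration of the CP-ABE access rule only (no actual cryptography):
# a recipient can decrypt iff its attribute set satisfies the policy tree
# embedded in the ciphertext.

def satisfies(attributes, policy):
    """Evaluate a policy tree (nested AND/OR tuples over attribute strings)
    against a set of attributes held by a private key."""
    if isinstance(policy, str):          # leaf: a single required attribute
        return policy in attributes
    op, *children = policy
    results = [satisfies(attributes, child) for child in children]
    return all(results) if op == "AND" else any(results)

policy = ("AND", "staff", ("OR", "finance", "audit"))
print(satisfies({"staff", "audit"}, policy))    # True  -> decryption succeeds
print(satisfies({"staff"}, policy))             # False -> ciphertext stays sealed
```

In a real CP-ABE scheme this check is enforced mathematically by the key and ciphertext structure, not by an evaluator that could be bypassed; the sketch only shows the policy semantics.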

Trajanovski, S., Kuipers, F. A., Hayel, Y., Altman, E., Mieghem, P. Van.  2015.  Designing virus-resistant networks: A game-formation approach. 2015 54th IEEE Conference on Decision and Control (CDC). :294–299.

Forming, in a decentralized fashion, an optimal network topology while balancing multiple, possibly conflicting objectives like cost, high performance, security and resiliency to viruses is a challenging endeavor. In this paper, we take a game-formation approach to network design where each player, for instance an autonomous system in the Internet, aims to collectively minimize the cost of installing links, of protecting against viruses, and of assuring connectivity. In the game, minimizing virus risk as well as connectivity costs results in sparse graphs. We show that the Nash Equilibria are trees that, according to the Price of Anarchy (PoA), are close to the global optimum, while the worst-case Nash Equilibrium and the global optimum may significantly differ for small infection rate and link installation cost. Moreover, the types of trees, in both the Nash Equilibria and the optimal solution, depend on the virus infection rate, which provides new insights into how viruses spread: for high infection rate τ, the path graph is the worst- and the star graph is the best-case Nash Equilibrium. However, for small and intermediate values of τ, trees different from the path and star graphs may be optimal.
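A standard way to compare how well candidate tree topologies resist virus spread is the spectral radius of the adjacency matrix, whose inverse sets the epidemic threshold in SIS-type models. The sketch below compares the two extreme trees the abstract mentions; it is a topology comparison under that standard criterion, not a reproduction of the paper's game-theoretic analysis:

```python
# Compare the spectral radius (largest adjacency eigenvalue) of the path
# and star graphs on n nodes; a larger spectral radius means a lower
# epidemic threshold, i.e. easier virus spread in SIS-type models.

import numpy as np

def spectral_radius(adj):
    # eigvalsh: eigenvalues of a real symmetric matrix (always real)
    return float(max(abs(np.linalg.eigvalsh(adj))))

def path_graph(n):
    a = np.zeros((n, n))
    for i in range(n - 1):
        a[i, i + 1] = a[i + 1, i] = 1.0
    return a

def star_graph(n):
    a = np.zeros((n, n))
    a[0, 1:] = a[1:, 0] = 1.0            # node 0 is the hub
    return a

n = 10
print(round(spectral_radius(path_graph(n)), 3))  # 2*cos(pi/(n+1)) ~ 1.919
print(round(spectral_radius(star_graph(n)), 3))  # sqrt(n-1) = 3.0
```

That the star has the larger spectral radius yet can still be the best-case equilibrium for high infection rates is exactly the kind of counter-intuitive outcome the paper's cost trade-off (links, protection, connectivity) produces.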