Biblio

Found 2320 results

Filters: First Letter Of Last Name is P
2017-03-07
Zarras, Apostolis, Kohls, Katharina, Dürmuth, Markus, Pöpper, Christina.  2016.  Neuralyzer: Flexible Expiration Times for the Revocation of Online Data. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :14–25.

Once data is released to the Internet, there is little hope of successfully deleting it, as it may have been duplicated, reposted, and archived in multiple places. This poses a significant threat to users' privacy and their right to permanently erase their own data. One approach to limiting the privacy implications is to assign a lifetime value to the published data and ensure that the data is no longer accessible after this point in time. However, such an approach suffers from the inability to predict the right time for the data to vanish. Consequently, the author of the data can only estimate the correct time, which unfortunately can cause the premature or belated deletion of data. This paper tackles the problem of prefixed lifetimes in data deletion from a different angle and argues that alternative approaches are a desideratum for research. In our approach, we consider different criteria for when data should be deleted, such as keeping data available as long as there is sufficient interest in it, or deleting it early in cases of excessive access. To assist the self-destruction of data, we propose a protocol and develop a prototype, called Neuralyzer, which leverages the caching mechanisms of the Domain Name System (DNS) to ensure the successful deletion of data. Our experimental results demonstrate that our approach can completely delete published data while at the same time achieving flexible expiration times varying from a few days to several months, depending on the users' interest.
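
The interest-driven expiration logic lends itself to a compact illustration. The sketch below is a hypothetical simulation of the two deletion criteria named in the abstract (lapsed interest and excessive access), not the paper's DNS-based protocol; the ExpiringObject class and the TTL and rate values are invented for the example.

```python
# Hypothetical sketch of Neuralyzer-style interest-based expiration (not the
# paper's DNS protocol): each access refreshes a time-to-live, mimicking DNS
# cache refreshes, while an access-rate cap forces early deletion.
from dataclasses import dataclass

@dataclass
class ExpiringObject:
    ttl: float            # seconds of allowed inactivity (illustrative value)
    max_rate: float       # average accesses/second considered "excessive"
    last_access: float = 0.0
    access_count: int = 0
    deleted: bool = False

    def access(self, now: float) -> bool:
        """Register an access at time `now`; return False once the object is gone."""
        if self.deleted:
            return False
        # Expire if interest lapsed: no access within one TTL window.
        if now - self.last_access > self.ttl and self.access_count > 0:
            self.deleted = True
            return False
        self.access_count += 1
        # Untimely deletion on excessive access (e.g., a scraping burst).
        if self.access_count / max(now, 1e-9) > self.max_rate:
            self.deleted = True
            return False
        self.last_access = now
        return True

obj = ExpiringObject(ttl=86400.0, max_rate=1.0)   # 1-day TTL, <=1 access/s on average
for t in (10, 5000, 90000, 200000):               # seconds since publication
    print(t, obj.access(float(t)))                # last access comes after interest lapsed -> deleted
```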

Baba, Asif Iqbal, Jaeger, Manfred, Lu, Hua, Pedersen, Torben Bach, Ku, Wei-Shinn, Xie, Xike.  2016.  Learning-Based Cleansing for Indoor RFID Data. Proceedings of the 2016 International Conference on Management of Data. :925–936.

RFID is widely used for object tracking in indoor environments, e.g., airport baggage tracking. Analyzing RFID data offers insight into the underlying tracking systems as well as the associated business processes. However, the inherent uncertainty in RFID data, including noise (cross readings) and incompleteness (missing readings), poses challenges to high-level RFID data querying and analysis. In this paper, we address these challenges by proposing a learning-based data cleansing approach that, unlike existing approaches, requires no detailed prior knowledge about the spatio-temporal properties of the indoor space and the RFID reader deployment. Requiring only minimal information about the RFID deployment, the approach learns relevant knowledge from raw RFID data and uses it to cleanse the data. In particular, we model raw RFID readings as time series that are sparse because the indoor space is only partly covered by a limited number of RFID readers. We propose the Indoor RFID Multi-variate Hidden Markov Model (IR-MHMM) to capture the uncertainties of indoor RFID data as well as the correlation of moving object locations and object RFID readings. We propose three state space design methods for IR-MHMM that enable parameter learning from raw RFID time series. We use only raw, uncleansed RFID data to learn the model parameters, requiring no special labeled data or ground truth. The resulting IR-MHMM-based RFID data cleansing approach is able to recover missing readings and reduce cross readings with high effectiveness and efficiency, as demonstrated by extensive experimental studies with both synthetic and real data. Given enough indoor RFID data for learning, the proposed approach achieves a data cleansing accuracy comparable to or even better than state-of-the-art techniques requiring very detailed prior knowledge, making our solution superior in terms of both effectiveness and employability.
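
To make the HMM-based cleansing idea concrete, here is a minimal Viterbi decoder over a toy indoor topology; it recovers a most likely location sequence from sparse readings by treating missing readings as uninformative. The states, transition and emission probabilities, and reader layout are illustrative stand-ins, not the paper's IR-MHMM.

```python
# Minimal Viterbi sketch (toy, not the paper's IR-MHMM): recover a most likely
# location sequence from sparse RFID readings; None marks a missing reading.
import numpy as np

states = ["room_A", "corridor", "room_B"]          # hypothetical indoor locations
T = np.array([[0.7, 0.3, 0.0],                     # transition: adjacency of locations
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
E = np.array([[0.9, 0.1, 0.0],                     # emission: reader r_i fires near state i;
              [0.1, 0.8, 0.1],                     # off-diagonal mass models cross readings
              [0.0, 0.1, 0.9]])
pi = np.array([1.0, 0.0, 0.0])

def obs_ll(r):
    # A missing reading (None) contributes no evidence; reader r uses column r of E.
    return np.zeros(len(states)) if r is None else np.log(E[:, r] + 1e-12)

def viterbi(readings):
    logd = np.log(pi + 1e-12) + obs_ll(readings[0])
    back = []
    for r in readings[1:]:
        trans = logd[:, None] + np.log(T + 1e-12)  # trans[i, j]: best score ending i -> j
        back.append(trans.argmax(axis=0))
        logd = trans.max(axis=0) + obs_ll(r)
    path = [int(logd.argmax())]
    for bp in reversed(back):                      # backtrack the best predecessors
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, None, None, 1, None, 2]))        # fills the gaps between sparse readings
```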

Heindorf, Stefan, Potthast, Martin, Stein, Benno, Engels, Gregor.  2016.  Vandalism Detection in Wikidata. Proceedings of the 25th ACM International Conference on Information and Knowledge Management. :327–336.

Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation. Its knowledge is increasingly used within Wikipedia itself and various other kinds of information systems, imposing high demands on its integrity. Wikidata can be edited by anyone and, unfortunately, it frequently gets vandalized, exposing all information systems using it to the risk of spreading vandalized and falsified information. In this paper, we present a new machine learning-based approach to detect vandalism in Wikidata. We propose a set of 47 features that exploit both content and context information, and we report on 4 classifiers of increasing effectiveness tailored to this learning task. Our approach is evaluated on the recently published Wikidata Vandalism Corpus WDVC-2015, where it achieves an area under the receiver operating characteristic curve (ROC-AUC) of 0.991. It significantly outperforms the state of the art represented by the rule-based Wikidata Abuse Filter (0.865 ROC-AUC) and a prototypical vandalism detector recently introduced by Wikimedia within the Objective Revision Evaluation Service (0.859 ROC-AUC).
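
As a rough illustration of the content-plus-context feature approach and the ROC-AUC evaluation, the following sketch trains a generic classifier on synthetic edit features; the four features, the model choice, and the labels are invented stand-ins for the paper's 47 features and tailored classifiers.

```python
# Illustrative content+context vandalism classifier (not the paper's features
# or models), scored by ROC-AUC, the metric reported in the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 2, n),        # content: edit inserts a profanity-like token?
    rng.integers(0, 500, n),      # content: size of the change in bytes
    rng.integers(0, 2, n),        # context: anonymous (not logged-in) editor?
    rng.integers(0, 10000, n),    # context: editor's previous edit count
])
# Synthetic labels for the sketch: anonymous + large edits are likelier vandalism.
y = (((X[:, 2] == 1) & (X[:, 1] > 250)) | (X[:, 0] == 1)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(Xtr, ytr)
print("ROC-AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```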

Summers, Cameron, Tronel, Greg, Cramer, Jason, Vartakavi, Aneesh, Popp, Phillip.  2016.  GNMID14: A Collection of 110 Million Global Music Identification Matches. Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. :693–696.

A new dataset is presented, composed of music identification matches from Gracenote, a leading global music metadata company. Matches from January 1, 2014 to December 31, 2014 have been curated and made available as a public dataset called Gracenote Music Identification 2014, or GNMID14, at the following address: https://developer.gracenote.com/mid2014. This collection is the first significant music identification dataset and one of the largest music-related datasets available, containing more than 110M matches in 224 countries for 3M unique tracks and 509K unique artists. It features geotemporal information (i.e., country and match date) as well as genre and mood metadata. In this paper, we characterize the dataset and demonstrate its utility for Information Retrieval (IR) research.

Kiran, Indra, Guha, Tanaya, Pandey, Gaurav.  2016.  Blind Image Quality Assessment Using Subspace Alignment. Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing. :91:1–91:6.

This paper addresses the problem of estimating the quality of an image as it would be perceived by a human. A well-accepted approach to assessing the perceptual quality of an image is to quantify its loss of structural information. We propose a blind image quality assessment method that aims at quantifying structural information loss in a given (possibly distorted) image by comparing its structures with those extracted from a database of clean images. We first construct a subspace from the clean natural images using (i) principal component analysis (PCA), and (ii) overcomplete dictionary learning with a sparsity constraint. While PCA provides mathematical convenience, an overcomplete dictionary is known to capture the perceptually important structures resembling the simple cells in the primary visual cortex. The subspace learned from the clean images is called the source subspace. Similarly, a subspace, called the target subspace, is learned from the distorted image. In order to quantify the structural information loss, we use a subspace alignment technique which transforms the target subspace into the source by optimizing over a transformation matrix. This transformation matrix is subsequently used to measure the global and local (patch-based) quality scores of the distorted image. The quality scores obtained by the proposed method are shown to correlate well with the subjective scores obtained from human annotators. Our method achieves competitive results when evaluated on three benchmark databases.
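
The core computation is small enough to sketch: build PCA bases for clean (source) and distorted (target) patches, compute the closed-form alignment matrix M = Xs^T Xt, and derive a score from the residual misalignment. The patch data, subspace size, and scoring rule below are illustrative stand-ins, not the paper's exact method (which also uses a learned overcomplete dictionary and patch-level scores).

```python
# Sketch of the subspace-alignment idea, simplified from the paper: PCA bases
# from clean patches (source) and distorted patches (target), plus the optimal
# aligning transform and a crude quality score from the residual error.
import numpy as np

def pca_basis(patches, k):
    # patches: (n, d) rows of vectorized image patches; returns a (d, k) basis.
    X = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T

rng = np.random.default_rng(1)
clean = rng.normal(size=(500, 64))                           # stand-in for clean-image patches
distorted = clean + rng.normal(scale=0.8, size=clean.shape)  # stand-in for a distortion

Xs, Xt = pca_basis(clean, k=8), pca_basis(distorted, k=8)
M = Xs.T @ Xt                                  # closed-form optimal alignment matrix
residual = np.linalg.norm(Xs @ M - Xt, "fro")  # how far the subspaces remain apart
print("alignment residual (higher ~ more structural loss):", round(residual, 3))
```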

Petrić, Jean, Bowes, David, Hall, Tracy, Christianson, Bruce, Baddoo, Nathan.  2016.  The Jinx on the NASA Software Defect Data Sets. Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering. :13:1–13:5.

Background: The NASA datasets have previously been used extensively in studies of software defects. In 2013, Shepperd et al. presented an essential set of rules for removing erroneous data from the NASA datasets, making this data more reliable to use. Objective: We have now found additional rules necessary for removing problematic data that were not identified by Shepperd et al. Results: In this paper, we demonstrate the level of erroneous data still present even after cleaning using Shepperd et al.'s rules and apply our new rules to remove this erroneous data. Conclusion: Even after systematic data cleaning of the NASA MDP datasets, we found new erroneous data. Data quality should always be explicitly considered by researchers before use.
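
The flavor of rule-based cleaning is easy to show. The checks below are illustrative integrity rules in the spirit of Shepperd et al.'s (invented for this sketch, not the authors' new rule set), applied to a toy frame with NASA-MDP-style column names.

```python
# Illustrative data-hygiene checks: drop rows whose metric values are mutually
# inconsistent or physically impossible. Rules and data are invented examples.
import pandas as pd

df = pd.DataFrame({
    "LOC_TOTAL":       [100, 50, 0, 80],
    "LOC_EXECUTABLE":  [ 80, 60, 0, 40],   # row 1: executable > total -> inconsistent
    "HALSTEAD_LENGTH": [200, 90, 0, 10],   # row 2: all-zero module -> implausible
    "defective":       [  1,  0, 0, 1],
})

rules = (
    (df["LOC_EXECUTABLE"] <= df["LOC_TOTAL"])                 # executable bounded by total
    & (df[["LOC_TOTAL", "HALSTEAD_LENGTH"]].sum(axis=1) > 0)  # non-degenerate module
)
cleaned = df[rules].reset_index(drop=True)
print(f"kept {len(cleaned)} of {len(df)} rows")
```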

Agrawal, Divy, Ba, Lamine, Berti-Equille, Laure, Chawla, Sanjay, Elmagarmid, Ahmed, Hammady, Hossam, Idris, Yasser, Kaoudi, Zoi, Khayyat, Zuhair, Kruse, Sebastian et al..  2016.  Rheem: Enabling Multi-Platform Task Execution. Proceedings of the 2016 International Conference on Management of Data. :2069–2072.

Many emerging applications, from domains such as healthcare and oil & gas, require several data processing systems for complex analytics. This demo paper showcases Rheem, a framework that provides multi-platform task execution for such applications. It features a three-layer data processing abstraction and a new query optimization approach for multi-platform settings. We will demonstrate the strengths of Rheem by using real-world scenarios from three different applications, namely, machine learning, data cleaning, and data fusion.

West, Ruth, Kajihara, Meghan, Parola, Max, Hays, Kathryn, Hillard, Luke, Carlew, Anne, Deutsch, Jeremey, Lane, Brandon, Holloway, Michelle, John, Brendan et al..  2016.  Eliciting Tacit Expertise in 3D Volume Segmentation. Proceedings of the 9th International Symposium on Visual Information Communication and Interaction. :59–66.

The output of 3D volume segmentation is crucial to a wide range of endeavors. Producing accurate segmentations often proves to be both inefficient and challenging, in part due to limited imaging data quality (contrast and resolution), and in part because of ambiguity in the data that can only be resolved with higher-level knowledge of the structure and the context wherein it resides. Automatic and semi-automatic approaches are improving, but in many cases still fail or require substantial manual clean-up or intervention. Expert manual segmentation and review is therefore still the gold standard for many applications. Unfortunately, existing tools (both custom-made and commercial) are often designed based on the underlying algorithm, not the best method for expressing higher-level intention. Our goal is to analyze manual (or semi-automatic) segmentation to gain a better understanding of both low-level perceptual tasks and actions and high-level decision making. This can be used to produce segmentation tools that are more accurate, efficient, and easier to use. Questioning or observation alone is insufficient to capture this information, so we utilize a hybrid capture protocol that blends observation, surveys, and eye tracking. We then developed and validated data coding schemes capable of discerning low-level actions and overall task structures.

Talbot, Jeremie, Piretti, Mark, Singleton, Kevin, Hessler, Mark.  2016.  Designing an Interaction with an Octopus. ACM SIGGRAPH 2016 Talks. :43:1–43:2.

In Pixar's Finding Dory, we are introduced to a new character: Hank the Octopus. This is a very different character from any Pixar has been asked to animate before. Our directors demanded both precise control and graceful, clean silhouettes. The reference artwork we were given showed complex curves between arms and body without any disjointed shapes or breaks in form. Video of octopuses in motion reveals an infinitely malleable creature capable of an enormous shape language. This art direction required a small group of TDs to create a control scheme that was sensible and flexible, with a new level of control, in order for animators to bring Hank to life. We had to think deeply about everything from the tips of the arms all the way through to how the tentacles connect to the mouth corners and eye sockets. Each of these issues raised concerns around design, deformation, and finally how the end user can manipulate such complexity effectively.

Pevny, Tomas, Somol, Petr.  2016.  Discriminative Models for Multi-instance Problems with Tree Structure. Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security. :83–91.

Modelling network traffic is gaining importance as a counter to modern security threats of ever-increasing sophistication. It is, though, surprisingly difficult and costly to construct reliable classifiers on top of telemetry data due to the variety and complexity of signals that no human can manage to interpret in full. Obtaining training data with a sufficiently large and variable body of labels can thus be seen as a prohibitive problem. The goal of this work is to detect infected computers by observing their HTTP(S) traffic collected from network sensors, which are typically proxy servers or network firewalls, while relying on only minimal human input in the model training phase. We propose a discriminative model that makes decisions based on all of a computer's traffic observed during a predefined time window (5 minutes in our case). The model is trained on traffic samples collected over equally-sized time windows for a large number of computers, where the only labels needed are (human) verdicts about the computer as a whole (presumed infected vs. presumed clean). As part of training, the model itself learns discriminative patterns in traffic targeted to individual servers and constructs the final high-level classifier on top of them. We show the classifier to perform with very high precision, and demonstrate that the learned traffic patterns can be interpreted as Indicators of Compromise. We implement the discriminative model as a neural network with a special structure reflecting two stacked multi-instance problems. The main advantages of the proposed configuration include not only improved accuracy and the ability to learn from gross labels, but also automatic learning of server types (together with their detectors) that are typically visited by infected computers.
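
A hedged sketch of the stacked multi-instance structure may help: flows are embedded and pooled per server, server embeddings are pooled per computer, and a single computer-level label drives training. Layer sizes, pooling operators, and feature dimensions below are invented; the paper's exact network differs.

```python
# Illustrative two-level multi-instance network: flows -> per-server embeddings
# -> per-computer embedding -> infected-vs-clean logit, supervised only by the
# gross computer-level label.
import torch
import torch.nn as nn

class StackedMIL(nn.Module):
    def __init__(self, flow_dim=16, hidden=32):
        super().__init__()
        self.flow_net = nn.Sequential(nn.Linear(flow_dim, hidden), nn.ReLU())
        self.server_net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)

    def forward(self, window):
        # window: list of servers, each a (n_flows, flow_dim) tensor of HTTP(S) flows.
        server_embs = [self.flow_net(flows).max(dim=0).values for flows in window]
        computer_emb = self.server_net(torch.stack(server_embs)).mean(dim=0)
        return self.head(computer_emb)              # computer-level logit

model = StackedMIL()
window = [torch.randn(5, 16), torch.randn(3, 16)]   # one 5-minute window, two servers
loss = nn.BCEWithLogitsLoss()(model(window), torch.ones(1))
loss.backward()                                     # trains from the gross label alone
print(float(loss))
```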

Liang, Jiongqian, Parthasarathy, Srinivasan.  2016.  Robust Contextual Outlier Detection: Where Context Meets Sparsity. Proceedings of the 25th ACM International Conference on Information and Knowledge Management. :2167–2172.

Outlier detection is a fundamental data science task with applications ranging from data cleaning to network security. Recently, a new class of outlier detection algorithms has emerged, called contextual outlier detection, and has shown improved performance when studying anomalous behavior in a specific context. However, as we point out in this article, such approaches have limited applicability in situations where the context is sparse (i.e., lacking a suitable frame of reference). Moreover, approaches developed to date do not scale to large datasets. To address these problems, here we propose a novel and robust alternative to the state of the art, called RObust Contextual Outlier Detection (ROCOD). We utilize local and global behavioral models based on the relevant contexts, which are then integrated in a natural and robust fashion. We run ROCOD on both synthetic and real-world datasets and demonstrate that it outperforms other competitive baselines on the axes of efficacy and efficiency. We also drill down and perform a fine-grained analysis to shed light on the rationale for the performance gains of ROCOD and reveal its effectiveness when handling objects with sparse contexts.
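
The local-plus-global idea can be sketched compactly: estimate expected behavior both from contextual neighbors (local) and from a regression over contextual attributes (global), blend the two, and score objects by their deviation. The data, blend weight, and estimators below are illustrative, not the exact ROCOD procedure.

```python
# Simplified local+global contextual outlier scoring (an illustration of the
# idea, not the authors' exact estimator).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
context = rng.normal(size=(300, 3))                 # contextual attributes (e.g., age, region)
behavior = context @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.2, size=300)
behavior[7] += 8.0                                  # plant one contextual outlier

glob = LinearRegression().fit(context, behavior)    # global: context -> behavior regression
nbrs = NearestNeighbors(n_neighbors=11).fit(context)
_, idx = nbrs.kneighbors(context)
local = behavior[idx[:, 1:]].mean(axis=1)           # local: neighbors, excluding the point itself

alpha = 0.5                                         # blend weight; raise when context is dense
expected = alpha * local + (1 - alpha) * glob.predict(context)
scores = np.abs(behavior - expected)
print("top outlier index:", int(scores.argmax()))   # should recover index 7
```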

Pinsenschaum, Richard, Neff, Flaithri.  2016.  Evaluating Gesture Characteristics When Using a Bluetooth Handheld Music Controller. Proceedings of the Audio Mostly 2016. :209–214.

This paper describes a study that investigates tilt-gesture depth on a Bluetooth handheld music controller for activating and deactivating music loops. A Max patch was programmed to receive, handle, and store incoming data from a Wii Remote's 3-axis ADXL330 accelerometer. Each loop corresponded to the front, back, left, or right tilt-gesture direction, with each gesture motion triggering a loop 'On' or 'Off' depending on its playback status. The study comprised 40 undergraduate students interacting with the prototype controller for a duration of 5 minutes per person. Each participant performed three full cycles beginning with the front gesture direction and moving clockwise, corresponding to a total of 24 trigger motions per participant. Raw data associated with tilt-gesture motion depth was scaled, analyzed, and graphed. Results show significant differences among the gesture directions in terms of tilt-gesture depth, as well as issues with noise for left/right gesture motion due to dependency on Roll and Yaw values. Front and Left tilt-gesture depths displayed significantly higher threshold levels compared to the Back and Right axes. Front and Left tilt-gesture thresholds therefore allow the device to easily differentiate between intentional sample triggering and general device handling, while this is more difficult for the Back and Left directions. Future work will include finding an alternative method for evaluating intentional tilt-gesture triggering on the Back and Left axes, as well as utilizing two 2-axis accelerometers to garner clean data from the Left and Right axes.

Parshakova, Tetiana, Cho, Minjoo, Cassinelli, Alvaro, Saakes, Daniel.  2016.  Ratchair: Furniture Learns to Move Itself with Vibration. ACM SIGGRAPH 2016 Emerging Technologies. :19:1–19:2.

An Egyptian statue on display at the Manchester Museum mysteriously spins on its axis every day; it was eventually discovered that this is due to anisotropic friction forces, and that the motile power comes from imperceptible mechanical waves caused by visitors' footsteps and nearby traffic. This phenomenon involves microscopic ratchets and is pervasive in the microscopic world; it is basically how muscles contract. It inspired us to think about everyday objects that move by harvesting external vibration rather than using mechanical traction and steering wheels. We propose here a strategy for displacing objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a very different specified position. We describe and demonstrate two application scenarios, namely assisted transportation of heavy objects with little effort on the part of the human, and self-arranging furniture, useful for instance for cleaning classrooms or restaurants during vacant hours.

Thibodeau, David, Cave, Andrew, Pientka, Brigitte.  2016.  Indexed Codata Types. Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming. :351–363.

Indexed data types allow us to specify and verify many interesting invariants about finite data in a general purpose programming language. In this paper we investigate the dual idea: indexed codata types, which allow us to describe data-dependencies about infinite data structures. Unlike finite data, which is defined by constructors, we define infinite data by observations. Dual to pattern matching on indexed data, which may refine the type indices, we define copattern matching on indexed codata, where type indices guard the observations we can make. Our key technical contributions are three-fold: first, we extend Levy's call-by-push-value language with support for indexed (co)data and deep (co)pattern matching; second, we provide a clean foundation for dependent (co)pattern matching using equality constraints; third, we describe a small-step semantics using a continuation-based abstract machine, define coverage for indexed (co)patterns, and prove type safety. This is an important step towards building a foundation where (co)data type definitions and dependent types can coexist.

Purohit, Suchit S., Bothale, Vinod M., Gandhi, Savita R..  2016.  Towards M-gov in Solid Waste Management Sector Using RFID Integrated Technologies. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :61:1–61:4.

Due to the explosive increase in teledensity and the penetration of mobile networks in urban as well as rural areas, m-governance in India is growing from infancy to a more mature shape. Various steps have been taken by the Indian government to offer citizen services through the mobile platform, hence offering a smooth transition from web-based e-gov services to more pervasive mobile-based services. Municipalities and municipal corporations in India already provide m-gov services such as property and professional tax transactions, birth and death registration, marriage registration, and dues of taxes and charges through SMS alerts or via call centers. To the best of our knowledge, no municipality offers mobile-based services in the solid waste management sector. This paper proposes an m-gov service implemented as an Android mobile application for the SWM department, AMC, Ahmadabad. The application operates on real-time data collected from a fully automated solid waste collection process, integrated using RFID, GPS, GIS, and GPRS, proposed in the preceding work by the authors. The mobile application allows citizens to interactively view the status of the cleaning process in their area, file complaints in the case of failure, and follow up on the status of their complaints, which can be handled by SWM officials using the same application. The application also allows SWM officials to observe and analyze the real-time status of the collection process and to generate reports.

Bell, Eamonn, Pugin, Laurent.  2016.  Approaches to Handwritten Conductor Annotation Extraction in Musical Scores. Proceedings of the 3rd International Workshop on Digital Libraries for Musicology. :33–36.

Conductor copies of musical scores are typically rich in handwritten annotations. Ongoing archival efforts to digitize orchestral conductors' scores have made scanned copies of hundreds of these annotated scores available in digital formats. The extraction of handwritten annotations from digitized printed documents is a difficult task for computer vision, with most approaches focusing on the extraction of handwritten text. However, conductors' annotation practices provide us with at least two affordances that make the task more tractable in the musical domain. First, many conductors opt to mark their scores using colored pencils, which contrast with the black and white print of sheet music. Consequently, we show promising results when using color separation techniques alone to recover handwritten annotations from conductors' scores. Second, we compare annotated scores to unannotated copies and use a printed sheet music comparison tool to recover handwritten annotations as additions to the clean copy. We then investigate the use of both of these techniques in a combined method, which improves the results of the color separation technique. These techniques are demonstrated using a sample of orchestral scores annotated by professional conductors of the New York Philharmonic. Handwritten annotation extraction in musical scores has applications to the systematic investigation of score annotation practices by performers, to annotator attribution, and to the interactive presentation of annotated scores, which we briefly discuss.
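
The color-separation affordance reduces to a small computation: printed notation is nearly achromatic (R ≈ G ≈ B), so thresholding the per-pixel channel spread isolates colored-pencil marks. The synthetic "scan" and the threshold below are illustrative.

```python
# Hedged sketch of the color-separation step: gray/black/white print has near-zero
# channel spread, while colored-pencil annotations have high chroma.
import numpy as np

page = np.full((4, 4, 3), 245, dtype=np.uint8)   # white paper
page[1, 1] = (20, 20, 20)                        # black printed note head
page[2, 2] = (200, 40, 40)                       # red pencil marking
page[3, 0] = (40, 60, 190)                       # blue pencil marking

rgb = page.astype(int)
chroma = rgb.max(axis=2) - rgb.min(axis=2)       # 0 for achromatic pixels
annotation_mask = chroma > 60                    # keep only strongly colored pixels
print(np.argwhere(annotation_mask))              # -> [[2 2] [3 0]]
```

The complementary diff-against-clean-copy technique would subtract an aligned unannotated scan before masking, so even graphite marks survive; combining both is what the paper reports as the strongest method.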

Zhang, Xiang, Gong, Lirui, Xun, Yunbo, Piao, Xuewei, Lei, Kai.  2016.  Centaur: An evolutionary design of hybrid NDN/IP transport architecture for streaming application. :1–7.

Named Data Networking (NDN), a clean-slate data-oriented Internet architecture aimed at replacing IP, brings many potential benefits for content distribution. Real deployment of NDN is crucial to verify this new architecture and promote academic research, but work in this field is at an early stage. Due to the fundamental difference in design paradigm between NDN and IP, deploying NDN as an IP overlay causes high overhead and inefficient transmission, typically in streaming applications. Aiming at efficient NDN streaming distribution, this paper proposes a transitional hybrid NDN/IP architecture dubbed Centaur, which embodies both NDN's smartness and scalability and IP's transmission efficiency and deployment feasibility. In Centaur, the upper NDN module acts as the smart head while the lower IP module functions as the powerful feet. The head is intelligent in content retrieval and self-control, while the IP feet can transport large amounts of media data faster than if NDN were overlaid directly on IP. To evaluate the performance of our proposal, we implement a real streaming prototype in ndnSIM and compare it with both NDN-Hippo and P2P under various experiment scenarios. The results show that Centaur achieves better load balance with lower overhead, close to the performance that ideal NDN can achieve. All of this validates that our proposal is a promising choice for the incremental and compatible deployment of NDN.

Chiti, Francesco, Di Giacomo, Dario, Fantacci, Romano, Pierucci, Laura, Carlini, Camillo.  2016.  Optimized Narrow-Band M2M Systems for Massive Cellular IoT Communications. :1–6.

Simple connectivity and data requirements, together with long battery lifetime, are the main issues for machine-to-machine (M2M) communications. 3GPP focuses on three main licensed standardizations based on Long Term Evolution (LTE), GSM, and clean-slate technologies. This paper considers the last of these and proposes a modified slotted-Aloha method to increase the capability of supporting a massive number of low-throughput devices. The proposed method increases the access rate of users belonging to each class considered in the clean-slate standard and, consequently, the total throughput offered by the system. To derive the mean access rate per class, we use a Markov chain approach; simulation results are provided for scenarios with different data rates and also in terms of average cell delay.
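
A toy simulation conveys the per-class access-rate question (as a stand-in for the paper's Markov chain derivation): each device class transmits in a slot with its own probability, and a slot succeeds only when exactly one device transmits. Class sizes and probabilities are invented.

```python
# Toy multi-class slotted-Aloha simulation: per-class success rates under
# collision (more than one transmitter) and idle (none) slots.
import numpy as np

rng = np.random.default_rng(3)
n_devices = {"low_rate": 500, "mid_rate": 100, "high_rate": 10}   # hypothetical classes
p_tx = {"low_rate": 0.001, "mid_rate": 0.005, "high_rate": 0.02}  # per-slot tx probability

slots, success = 100_000, {c: 0 for c in n_devices}
for _ in range(slots):
    tx = {c: rng.binomial(n_devices[c], p_tx[c]) for c in n_devices}
    if sum(tx.values()) == 1:                       # exactly one transmitter: no collision
        success[next(c for c in tx if tx[c] == 1)] += 1

for c in n_devices:
    print(c, "success rate per slot:", success[c] / slots)
```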

2017-02-27
Santini, R., Foglietta, C., Panzieri, S..  2015.  A graph-based evidence theory for assessing risk. 2015 18th International Conference on Information Fusion (Fusion). :1467–1474.

The increasing exploitation of the Internet leads to new uncertainties, due to interdependencies and links between the cyber and physical layers. As an example, the integration between telecommunication and physical processes that happens when the power grid is managed and controlled yields epistemic uncertainty. Managing this uncertainty is possible using specific frameworks, usually coming from fuzzy theory, such as Evidence Theory. This approach is attractive due to its flexibility in managing uncertainty by means of simple rule-based systems with data coming from heterogeneous sources. In this paper, Evidence Theory is applied in order to evaluate risk. The authors propose a frame of discernment with a specific relationship among the elements, based on a graph representation. This relationship leads to a smaller power set (called the Reduced Power Set) that can be used in place of the classical power set when the most common combination rules, such as Dempster's or Smets', are applied. The paper demonstrates how the use of the Reduced Power Set yields more efficient algorithms for combining evidence and supports the application of Evidence Theory to assessing risk.
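
For reference, the combination step the Reduced Power Set accelerates is Dempster's rule, shown below over a toy two-element frame of discernment; the masses are illustrative, and the sketch uses the classical power set rather than the paper's reduced one.

```python
# Standard Dempster's rule of combination over frozenset hypotheses: pairwise
# mass products land on the intersection; conflict mass is renormalized away.
def dempster(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass; returns fused masses."""
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {h: w / (1.0 - conflict) for h, w in fused.items()}

# Frame of discernment for a risk level: {low, high}; two sources disagree mildly.
low, high = frozenset({"low"}), frozenset({"high"})
either = low | high
source1 = {low: 0.6, either: 0.4}
source2 = {high: 0.3, either: 0.7}
print(dempster(source1, source2))   # masses on low, high, and the full frame
```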

Saravanan, S., Sabari, A., Geetha, M., Priyanka, Q..  2015.  Code based community network for identifying low risk community. 2015 IEEE 9th International Conference on Intelligent Systems and Control (ISCO). :1–6.

The modern approach to boulevard (road) networks centers on efficiency in safe routing, and safe routing must pass through low-risk cities. Routing troubles are a perennial problem confronting people day in and day out. The common goal of everyone using a boulevard seems to be reaching the desired point in the fastest manner, which involves balancing multiple expected and unexpected influencing factors such as time, distance, security, and cost. Travelling is an almost inherent aspect of everyone's daily routine. With the gigantic and complex road network of a modern city or country, finding a low-risk community through which to traverse the distance is not easy. This paper uses code-based community detection on the boulevard network and a fuzzy technique for identifying low-risk communities.

Rontidis, G., Panaousis, E., Laszka, A., Dagiuklas, T., Malacaria, P., Alpcan, T..  2015.  A game-theoretic approach for minimizing security risks in the Internet-of-Things. 2015 IEEE International Conference on Communication Workshop (ICCW). :2639–2644.

In the Internet-of-Things (IoT), users might share part of their data with different IoT prosumers, which offer applications or services. Within this open environment, the existence of an adversary introduces security risks. These can be related, for instance, to the theft of user data, and they vary depending on the security controls that each IoT prosumer has put in place. To minimize such risks, users might seek an “optimal” set of prosumers. However, assuming the adversary has the same information as the users about the existing security measures, he can then deduce which prosumers will be preferable (e.g., those with the highest security levels) and attack them more intensively. This paper proposes a decision-support approach that minimizes security risks in the above scenario. We propose a non-cooperative, two-player game entitled the Prosumers Selection Game (PSG). The Nash Equilibria of PSG determine subsets of prosumers that optimize users' payoffs. We refer to any game solution as the Nash Prosumers Selection (NPS), which is a vector of probabilities over subsets of prosumers. We show that when using NPS, a user faces the least expected damage. Additionally, we show that according to NPS every prosumer, even the least secure one, is selected with some non-zero probability. We have also performed simulations comparing NPS against two different heuristic selection algorithms; NPS proves to be approximately 38% more effective in terms of security-risk mitigation.
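
To make the equilibrium computation concrete, here is a zero-sum simplification of the user-versus-adversary selection setting, solved for the user's maximin mixed strategy by linear programming; PSG itself is more general, and the damage matrix below is invented.

```python
# Illustrative maximin mixed strategy for a zero-sum selection game: the user
# mixes over candidate prosumer subsets (rows), the adversary picks an attack
# target (columns), and entries are the user's expected damage (minimized).
import numpy as np
from scipy.optimize import linprog

D = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 2.0],
              [2.0, 2.0, 3.0]])

n = D.shape[0]
# Variables: x (mixing over rows) and v (game value). Minimize v subject to
# D^T x <= v for every column, sum(x) = 1, x >= 0.
c = np.zeros(n + 1)
c[-1] = 1.0
A_ub = np.hstack([D.T, -np.ones((D.shape[1], 1))])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(D.shape[1]),
              A_eq=np.hstack([np.ones((1, n)), np.zeros((1, 1))]),
              b_eq=[1.0], bounds=[(0, None)] * n + [(None, None)])
x, v = res.x[:n], res.x[-1]
print("mixed strategy over subsets:", x.round(3), "worst-case damage:", round(v, 3))
```

Note that the optimal strategy typically mixes over several subsets, which echoes the paper's finding that even less secure prosumers are selected with non-zero probability.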

M, Supriya, Sangeeta, K., Patra, G. K..  2015.  Comparison of AHP based and Fuzzy based mechanisms for ranking Cloud Computing services. 2015 International Conference on Computer, Control, Informatics and its Applications (IC3INA). :175–180.

Cloud Computing has emerged as a paradigm for delivering on-demand resources, providing customers with access to infrastructure and applications as per their requirements on a subscription basis. An exponential increase in the number of cloud services in the past few years gives customers more options to choose from. To assist customers in selecting the most trustworthy cloud provider, a unified trust evaluation framework is needed. Trust helps estimate the competency of a resource provider in completing a task, thus enabling users to select the best resources in a heterogeneous cloud infrastructure. Trust estimates obtained using the AHP process exhibit a deviation for parameters that are not in direct proportion to the contributing attributes. Such deviation can be removed using the Fuzzy AHP model. In this paper, a Fuzzy AHP based hierarchical trust model is proposed to rate service providers and their various plans for Infrastructure as a Service.
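
The classical AHP step underlying the model is compact: derive criterion weights from a reciprocal pairwise-comparison matrix via its principal eigenvector, with a consistency index on the judgments. The criteria and judgments below are hypothetical, and the paper's Fuzzy AHP extends this step with fuzzy comparison values.

```python
# Minimal classical-AHP step: priority (trust-weight) vector from a Saaty-scale
# reciprocal comparison matrix, plus the consistency index of the judgments.
import numpy as np

# Hypothetical pairwise comparisons among three criteria,
# e.g., availability vs. security vs. cost.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                       # principal (Perron) eigenvalue
w = np.abs(vecs[:, k].real)
w /= w.sum()                                   # normalized priority vector
CI = (vals.real[k] - len(A)) / (len(A) - 1)    # consistency index (0 = perfectly consistent)
print("weights:", w.round(3), "CI:", round(CI, 3))
```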

Mohsen, R., Pinto, A. M..  2015.  Algorithmic information theory for obfuscation security. 2015 12th International Joint Conference on e-Business and Telecommunications (ICETE). 04:76–87.

The main problem in designing effective code obfuscation is to guarantee security. State-of-the-art obfuscation techniques rely on an unproven concept of security and are therefore not regarded as provably secure. In this paper, we undertake a theoretical investigation of code obfuscation security based on Kolmogorov complexity and algorithmic mutual information. We introduce a new definition of code obfuscation that requires the algorithmic mutual information between a code and its obfuscated version to be minimal, allowing a controlled amount of information to be leaked to an adversary. We argue that our definition avoids the impossibility results of Barak et al. and is more advantageous than the indistinguishability-based definition of obfuscation in the sense that it is more intuitive, and is algorithmic rather than probabilistic.
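
For readers unfamiliar with the quantity involved, algorithmic mutual information is standardly defined from prefix Kolmogorov complexity K (up to additive logarithmic terms); a hedged reading of the abstract's requirement, with epsilon as the permitted leakage, then looks as follows. The exact formulation in the paper may differ.

```latex
% Algorithmic mutual information between a program P and its obfuscation O(P)
% (standard definition, up to additive logarithmic terms):
I(P : O(P)) = K(P) + K(O(P)) - K(P, O(P))
% Hedged reading of the paper's requirement, with \epsilon the permitted leakage:
I(P : O(P)) \le \epsilon
```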

2017-02-23
Fisk, G., Ardi, C., Pickett, N., Heidemann, J., Fisk, M., Papadopoulos, C..  2015.  Privacy Principles for Sharing Cyber Security Data. 2015 IEEE Security and Privacy Workshops. :193–197.

Sharing cyber security data across organizational boundaries brings both privacy risks in the exposure of personal information and data, and organizational risk in disclosing internal information. These risks occur as information leaks in network traffic or logs, and also in queries made across organizations. They are also complicated by the trade-offs in privacy preservation and utility present in anonymization to manage disclosure. In this paper, we define three principles that guide sharing security information across organizations: Least Disclosure, Qualitative Evaluation, and Forward Progress. We then discuss engineering approaches that apply these principles to a distributed security system. Application of these principles can reduce the risk of data exposure and help manage trust requirements for data sharing, helping to meet our goal of balancing privacy, organizational risk, and the ability to better respond to security with shared information.

Gutte, V. S., Deshpande, P..  2015.  Cost and Communication Efficient Auditing over Public Cloud. 2015 International Conference on Computational Intelligence and Communication Networks (CICN). :807–810.

Cloud computing is now a large and essential environment for data storage, and preserving the privacy of the stored data is a major concern. Cloud data security is the most important issue for clients using services provided by different providers, and major security concerns and conflicts can arise between the client and the service provider. To resolve these issues, a third-party auditor is used to provide assurance about the data in the environment. Cloud storage systems still face many fundamental challenges, among which storage space and security are generally the top concerns. To address these security issues, we have proposed a third-party authentication system for the cloud that supports not only simplified data storage but also secure data acquisition. Finally, we have performed both security and performance analyses. The results show that the proposed scheme significantly increases efficiency in maintaining highly secure data storage and acquisition. The proposed method also helps minimize cost and increases communication efficiency in the cloud environment.
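
As a generic illustration of what a third-party audit round can look like (a simple challenge-response sketch, not the scheme proposed in the paper), the snippet below has an auditor verify randomly challenged blocks against keyed tags; the key handling, block layout, and tag construction are all invented for the example.

```python
# Hedged sketch of a challenge-response integrity audit: the owner precomputes
# keyed per-block tags; the auditor challenges random block indices and checks
# the server's response without storing the data itself.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)                       # owner's tag key (shared with auditor here)
blocks = [f"block-{i}".encode() for i in range(100)]
tags = [hmac.new(key, bytes([i]) + b, hashlib.sha256).digest()
        for i, b in enumerate(blocks)]              # owner precomputes per-block tags

def server_respond(challenge):
    # The cloud returns the challenged blocks (an honest server in this sketch).
    return [(i, blocks[i]) for i in challenge]

def audit(response):
    # The auditor recomputes and compares tags in constant time.
    return all(hmac.compare_digest(
                   tags[i], hmac.new(key, bytes([i]) + b, hashlib.sha256).digest())
               for i, b in response)

challenge = [secrets.randbelow(len(blocks)) for _ in range(10)]
print("audit passed:", audit(server_respond(challenge)))
```

Real auditing schemes avoid giving the auditor the tag key or the raw blocks (e.g., via homomorphic authenticators); the sketch only conveys the challenge-response shape of the protocol.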