Biblio

Found 721 results

Filters: Keyword is Computational modeling
2017-02-23
Y. Cao, J. Yang.  2015.  "Towards Making Systems Forget with Machine Unlearning". 2015 IEEE Symposium on Security and Privacy. :463-480.

Today's systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data, including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming the learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations – asymptotically faster than retraining from scratch. Our approach is general because the summation form derives from statistical query learning, in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
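
The summation idea can be made concrete with a minimal sketch: a toy counting-based naive Bayes classifier (chosen purely for illustration; it is not one of the authors' evaluated systems) that depends on the training data only through a few sums, so forgetting one sample is a small update to those sums rather than a retrain. All class and function names below are hypothetical.

```python
# Minimal sketch of unlearning via a "summation form": the model state consists
# only of counts (sums over the data), so unlearning subtracts one sample's
# contribution from those sums. Illustrative only, not the paper's implementation.
from collections import defaultdict

class SummationNB:
    def __init__(self):
        self.class_counts = defaultdict(int)                        # sum of 1[y == c]
        self.feat_counts = defaultdict(lambda: defaultdict(int))    # sum of 1[x_j == v and y == c]

    def learn(self, x, y):
        self.class_counts[y] += 1
        for j, v in enumerate(x):
            self.feat_counts[y][(j, v)] += 1

    def unlearn(self, x, y):
        # Forgetting = updating a handful of summations, not retraining from scratch.
        self.class_counts[y] -= 1
        for j, v in enumerate(x):
            self.feat_counts[y][(j, v)] -= 1

    def predict(self, x):
        def score(c):
            n_c = self.class_counts[c]
            s = n_c                                                  # class prior up to a constant
            for j, v in enumerate(x):
                s *= (self.feat_counts[c][(j, v)] + 1) / (n_c + 2)   # Laplace smoothing
            return s
        return max(self.class_counts, key=score)

nb = SummationNB()
nb.learn(("tall", "heavy"), "adult")
nb.learn(("short", "light"), "child")
nb.unlearn(("tall", "heavy"), "adult")   # the sample and its effect on the model are gone
```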

H. M. Ruan, M. H. Tsai, Y. N. Huang, Y. H. Liao, C. L. Lei.  2015.  "Discovery of De-identification Policies Considering Re-identification Risks and Information Loss". 2015 10th Asia Joint Conference on Information Security. :69-76.

In data analysis, it is always a difficult task to strike a balance between the privacy and the applicability of the data. Due to the demand for individual privacy, data are more or less obscured before being released or outsourced to avoid possible privacy leakage; this process is called de-identification. When discussing a de-identification policy, the two most important aspects are the re-identification risk and the information loss. In this paper, we introduce a novel policy searching method to efficiently find proper de-identification policies according to an acceptable re-identification risk while retaining the information residing in the data. With the UCI Machine Learning Repository as our real-world dataset, the re-identification risk can therefore reflect the true risk of the de-identified data under the de-identification policies. Moreover, using the proposed algorithm, one can efficiently acquire policies with higher information entropy.
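
The two quantities being traded off can be illustrated with a small sketch: for a candidate policy that generalizes quasi-identifiers, estimate re-identification risk from the resulting equivalence-class sizes and measure retained information with Shannon entropy. The records, policy functions, and threshold are made-up assumptions; the paper's actual search algorithm is not reproduced here.

```python
# Minimal sketch: risk = worst-case probability of singling out a record under a
# generalization policy; information retained = entropy of the released classes.
from collections import Counter
from math import log2

records = [("1985", "10001"), ("1985", "10002"), ("1990", "10001"), ("1990", "10003")]

def apply_policy(rec, policy):
    # policy maps an attribute index to a generalization function
    return tuple(policy.get(i, lambda v: v)(v) for i, v in enumerate(rec))

def reid_risk(released):
    sizes = Counter(released)
    return max(1.0 / n for n in sizes.values())    # smallest equivalence class dominates the risk

def entropy(released):
    sizes, total = Counter(released), len(released)
    return -sum((n / total) * log2(n / total) for n in sizes.values())

policy = {0: lambda v: v[:3] + "*",     # generalize birth year to a decade
          1: lambda v: v[:3] + "**"}    # generalize ZIP code to a prefix
released = [apply_policy(r, policy) for r in records]
print(reid_risk(released), entropy(released))       # keep the policy if risk is acceptably low
```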

A. Soliman, L. Bahri, B. Carminati, E. Ferrari, S. Girdzijauskas.  2015.  "DIVa: Decentralized identity validation for social networks". 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :383-391.

Online Social Networks exploit a lightweight process to identify their users so as to facilitate their fast adoption. However, such convenience comes at the price of exposing legitimate users to various threats created by fake accounts. Therefore, there is a crucial need to empower users with tools that help them assign a level of trust to whomever they interact with. To cope with this issue, in this paper we introduce a novel model, DIVa, that leverages mining techniques to find correlations among user profile attributes. These correlations are discovered not from the user population as a whole, but from individual communities, where the correlations are more pronounced. DIVa exploits a decentralized learning approach and ensures privacy preservation, as each node in the OSN independently processes its local data and is required to know only its direct neighbors. Extensive experiments using real-world OSN datasets show that DIVa is able to extract fine-grained community-aware correlations among profile attributes with average improvements of up to 50% over the global approach.

2017-02-14
X. Feng, Z. Zheng, P. Hu, D. Cansever, P. Mohapatra.  2015.  "Stealthy attacks meets insider threats: A three-player game model". MILCOM 2015 - 2015 IEEE Military Communications Conference. :25-30.

Advanced persistent threat (APT) is becoming a major threat to cyber security. As APT attacks are often launched by well funded entities that are persistent and stealthy in achieving their goals, they are highly challenging to combat in a cost-effective way. The situation becomes even worse when a sophisticated attacker is further assisted by an insider with privileged access to the inside information. Although stealthy attacks and insider threats have been considered separately in previous works, the coupling of the two is not well understood. As both types of threats are incentive driven, game theory provides a proper tool to understand the fundamental tradeoffs involved. In this paper, we propose the first three-player attacker-defender-insider game to model the strategic interactions among the three parties. Our game extends the two-player FlipIt game model for stealthy takeover by introducing an insider that can trade information to the attacker for a profit. We characterize the subgame perfect equilibria of the game with the defender as the leader and the attacker and the insider as the followers, under two different information trading processes. We make various observations and discuss approaches for achieving more efficient defense in the face of both APT and insider threats.

C. H. Hsieh, C. M. Lai, C. H. Mao, T. C. Kao, K. C. Lee.  2015.  "AD2: Anomaly detection on active directory log data for insider threat monitoring". 2015 International Carnahan Conference on Security Technology (ICCST). :287-292.

In cyber security monitoring, what you see is not necessarily what you can believe. Owing to various camouflage tricks, such as packing or virtual private networks (VPNs), detecting advanced persistent threats (APTs) with only signature-based malware detection systems becomes more and more intractable. On the other hand, by carefully modeling users' subsequent behaviors in their daily routines, the probability of one account generating certain operations can be estimated and used in anomaly detection. To the best of our knowledge, this project is the first to propose a behavioral analytic framework dedicated to analyzing Active Directory domain service logs and monitoring potential insider threats. Experiments on a real dataset not only show that the proposed idea explores a new feasible direction for cyber security monitoring, but also give a guideline on how to deploy this framework in various environments.
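
The behavioral estimation described here can be sketched very simply: learn, per account, how often each Active Directory operation occurs in the historical logs, then score new operations by how unlikely they are for that account. The event IDs, field names, and smoothing are assumptions for illustration; this is not the AD2 framework itself.

```python
# Minimal sketch: per-account operation frequencies from past logs, with a
# smoothed probability used as an anomaly score for new events.
from collections import Counter, defaultdict

history = [("alice", "4624"), ("alice", "4624"), ("alice", "4768"),
           ("bob", "4624"), ("bob", "4672")]           # (account, event id) pairs

per_account = defaultdict(Counter)
for account, event in history:
    per_account[account][event] += 1

def anomaly_score(account, event, smoothing=1.0):
    counts = per_account[account]
    total = sum(counts.values()) + smoothing * (len(counts) + 1)
    p = (counts[event] + smoothing) / total
    return 1.0 - p                                      # higher = more unusual for this account

print(anomaly_score("alice", "4672"))                    # a privileged logon alice never performs
```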

2015-05-06
Nower, N., Yasuo Tan, Lim, A.O..  2014.  Efficient Temporal and Spatial Data Recovery Scheme for Stochastic and Incomplete Feedback Data of Cyber-physical Systems. Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on. :192-197.

Feedback loss can severely degrade overall system performance; in addition, it can affect the control and computation of Cyber-Physical Systems (CPS). CPS hold enormous potential for a wide range of emerging applications, including those with stochastic and time-critical traffic patterns. Stochastic data are random by nature, which makes it a great challenge to maintain real-time control whenever data are lost. In this paper, we propose a data recovery scheme, called the Efficient Temporal and Spatial Data Recovery (ETSDR) scheme, for stochastic incomplete feedback in CPS. In this scheme, we identify the temporal model based on the traffic patterns and consider the spatial effect of the nearest neighbor. Numerical results reveal that the proposed ETSDR outperforms both the weighted prediction (WP) and the exponentially weighted moving average (EWMA) algorithms, regardless of the percentage of missing data, in terms of the root mean square error, the mean absolute error, and the integral of absolute error.
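
The combination of a temporal model with a spatial nearest-neighbor term can be sketched as follows. Here a simple EWMA stands in for the paper's traffic-pattern-based temporal model, and the blending weight is an arbitrary illustrative choice, not a value from the paper.

```python
# Minimal sketch: recover a missing feedback sample by blending a temporal
# estimate from the sensor's own history with the nearest neighbor's reading.
def ewma(history, alpha=0.3):
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

def recover(own_history, neighbor_value, spatial_weight=0.4):
    temporal_est = ewma(own_history)
    if neighbor_value is None:            # neighbor also missing: fall back to temporal only
        return temporal_est
    return (1 - spatial_weight) * temporal_est + spatial_weight * neighbor_value

print(recover([20.1, 20.4, 20.3, 20.8], neighbor_value=21.0))
```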

Castro Marquez, C.I., Strum, M., Wang Jiang Chau.  2014.  A unified sequential equivalence checking approach to verify high-level functionality and protocol specification implementations in RTL designs. Test Workshop - LATW, 2014 15th Latin American. :1-6.

Formal techniques provide exhaustive design verification, but computational margins have an important negative impact on their efficiency. Sequential equivalence checking is an effective approach, but traditionally it has only been applied between circuit descriptions with a one-to-one correspondence of states. Applying it between RTL descriptions and high-level reference models requires removing signals, variables, and states exclusive to the RTL description so as to comply with the state-correspondence restriction. In this paper, we extend a previous formal methodology for RTL verification with high-level models to also check the signals and protocol implemented in the RTL design. This protocol implementation is compared formally to a description captured from the specification. Thus, we can thoroughly prove the sequential behavior of a design under verification.

Huaqun Wang.  2015.  Identity-Based Distributed Provable Data Possession in Multicloud Storage. Services Computing, IEEE Transactions on. 8:328-340.

Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify whether their outsourced data are kept intact without downloading the whole data. In some application scenarios, clients have to store their data on multicloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. From these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multicloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification, and public verification.

Xiaoyong Li, Huadong Ma, Feng Zhou, Xiaolin Gui.  2015.  Service Operator-Aware Trust Scheme for Resource Matchmaking across Multiple Clouds. Parallel and Distributed Systems, IEEE Transactions on. 26:1419-1429.

This paper proposes a service operator-aware trust scheme (SOTS) for resource matchmaking across multiple clouds. By analyzing the built-in relationships between the users, the broker, and the service resources, this paper proposes a middleware framework of trust management that can effectively reduce user burden and improve system dependability. Based on multidimensional resource service operators, we model the problem of trust evaluation as a process of multi-attribute decision-making and develop an adaptive trust evaluation approach based on information entropy theory. This adaptive approach can overcome the limitations of traditional trust schemes, in which the trusted operators are weighted manually or subjectively. As a result, using SOTS, the broker can efficiently and accurately prepare the most trusted resources in advance, and thus provide more dependable resources to users. Our experiments yield interesting and meaningful observations that can facilitate the effective utilization of SOTS in a large-scale multi-cloud environment.
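
The entropy-based weighting mentioned above follows a standard pattern that can be sketched briefly: attributes whose values vary more across candidate resources (lower normalized entropy) receive larger weights, so no manual weighting is needed. The sample matrix and the interpretation of its columns are made up for illustration; this is not the SOTS middleware itself.

```python
# Minimal sketch of entropy-weighted multi-attribute trust scoring.
from math import log

# rows = candidate resources, columns = normalized service operators (e.g. availability, latency)
matrix = [[0.9, 0.7, 0.8],
          [0.6, 0.9, 0.5],
          [0.8, 0.8, 0.9]]

def entropy_weights(matrix):
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col_sum = sum(row[j] for row in matrix)
        p = [row[j] / col_sum for row in matrix]
        e = -sum(x * log(x) for x in p if x > 0) / log(m)   # normalized entropy in [0, 1]
        raw.append(1 - e)                                    # more dispersion -> more weight
    total = sum(raw)
    return [w / total for w in raw]

w = entropy_weights(matrix)
trust = [sum(wj * row[j] for j, wj in enumerate(w)) for row in matrix]
print(w, trust)   # the broker can rank resources by this aggregate trust score
```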

Nitti, M., Girau, R., Atzori, L..  2014.  Trustworthiness Management in the Social Internet of Things. Knowledge and Data Engineering, IEEE Transactions on. 26:1253-1266.

The integration of social networking concepts into the Internet of Things has led to the Social Internet of Things (SIoT) paradigm, according to which objects are capable of establishing social relationships in an autonomous way with respect to their owners, with the benefit of improving network scalability in information/service discovery. Within this scenario, we focus on the problem of understanding how the information provided by members of the social IoT has to be processed so as to build a reliable system on the basis of the behavior of the objects. We define two models for trustworthiness management, starting from the solutions proposed for P2P and social networks. In the subjective model, each node computes the trustworthiness of its friends on the basis of its own experience and of the opinions of the friends it has in common with the potential service providers. In the objective model, the information about each node is distributed and stored using a distributed hash table structure, so that any node can make use of the same information. Simulations show how the proposed models can effectively isolate almost all malicious nodes in the network at the expense of an increase in network traffic for feedback exchange.
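
The subjective model can be illustrated with a small sketch that blends a node's own transaction experience with the opinions of common friends. The weighting factor, data layout, and neutral prior are illustrative assumptions, not the paper's formulas.

```python
# Minimal sketch of a subjective trust computation in the spirit of the model above.
def subjective_trust(own_experience, common_friend_opinions, alpha=0.6):
    # own_experience: fraction of past transactions with the provider that succeeded
    # common_friend_opinions: trust values in [0, 1] reported by friends in common
    if common_friend_opinions:
        indirect = sum(common_friend_opinions) / len(common_friend_opinions)
    else:
        indirect = 0.5                    # neutral prior when no common friends exist
    return alpha * own_experience + (1 - alpha) * indirect

print(subjective_trust(0.9, [0.8, 0.4, 0.7]))
```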

Pajic, M., Weimer, J., Bezzo, N., Tabuada, P., Sokolsky, O., Insup Lee, Pappas, G.J..  2014.  Robustness of attack-resilient state estimators. Cyber-Physical Systems (ICCPS), 2014 ACM/IEEE International Conference on. :163-174.

The interaction between information technology and the physical world makes Cyber-Physical Systems (CPS) vulnerable to malicious attacks beyond standard cyber attacks. This has motivated the need for attack-resilient state estimation. Yet, the existing state estimators are based on the unrealistic assumption that the exact system model is known. Consequently, in this work we present a method for state estimation in the presence of attacks, for systems with noise and modeling errors. When the estimated states are used by a state-based feedback controller, we show that the attacker cannot destabilize the system by exploiting the difference between the model used for state estimation and the real physical dynamics of the system. Furthermore, we describe how implementation issues such as jitter, latency, and synchronization errors can be mapped into parameters of the state estimation procedure that describe modeling errors, and we provide a bound on the state estimation error caused by modeling errors. This enables mapping control performance requirements into real-time (i.e., timing-related) specifications imposed on the underlying platform. Finally, we illustrate and experimentally evaluate this approach on an unmanned ground vehicle case study.
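
The resilience idea can be illustrated with a very simple stand-in estimator: with redundant sensors of a scalar state, measurements whose residual from a robust reference exceeds a bound (covering noise plus modeling error) are treated as potentially attacked and discarded. This is only an illustrative sketch, not the estimator developed in the paper.

```python
# Minimal sketch: residual-based rejection of attacked sensors before estimation.
from statistics import median

def resilient_estimate(measurements, noise_and_model_bound):
    ref = median(measurements)                        # robust reference value
    kept = [m for m in measurements if abs(m - ref) <= noise_and_model_bound]
    return sum(kept) / len(kept)

# one sensor is spoofed far from the true state (about 5.0)
print(resilient_estimate([5.1, 4.9, 5.0, 12.7], noise_and_model_bound=0.5))
```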

Ben Bouazza, N., Lemoudden, M., El Ouahidi, B..  2014.  Surveing the challenges and requirements for identity in the cloud. Security Days (JNS4), Proceedings of the 4th Edition of National. :1-5.

Cloud technologies are increasingly important for IT departments, allowing them to concentrate on strategy as opposed to maintaining data centers; one of the biggest advantages of the cloud is the ability to share computing resources between multiple providers, especially in hybrid clouds, in order to overcome infrastructure limitations. User identity federation is considered the second major risk in the cloud, and since business organizations use multiple cloud service providers, IT departments face a range of constraints. Multiple attempts to solve this problem have been suggested, such as federated identity, which has a number of advantages despite suffering from challenges that are common to new technologies. The following paper tackles federated identity, its components, advantages, and disadvantages, and then proposes a number of useful scenarios for managing identity in hybrid cloud infrastructures.

Arora, D., Verigin, A., Godkin, T., Neville, S.W..  2014.  Statistical Assessment of Sybil-Placement Strategies within DHT-Structured Peer-to-Peer Botnets. Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on. :821-828.

Botnets are a well-recognized global cyber-security threat, as they enable attack communities to command large collections of compromised computers (bots) on demand. Peer-to-peer (P2P) distributed hash tables (DHTs) have become particularly attractive botnet command and control (C&C) solutions due to the high level of resiliency gained via the diffuse random graph overlays they produce. The injection of Sybils, computers pretending to be valid bots, remains a key defensive strategy against DHT-structured P2P botnets. This research uses packet-level network simulations to explore the relative merits of random, informed, and partially informed Sybil placement strategies. It is shown that random placement performs nearly as effectively as the tested more informed strategies, which require higher levels of inter-defender co-ordination. Moreover, it is shown that aspects of DHT-structured P2P botnets behave as statistically nonergodic processes when viewed from the perspective of stochastic processes. This suggests that although optimal Sybil placement strategies appear to exist, they would need careful tuning to each specific P2P botnet instance.

Schaefer, J..  2014.  A semantic self-management approach for service platforms. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-4.

Future personal living environments feature an increasing number of convenience-, health-, and security-related applications provided by distributed services, which not only support users but also require tasks such as installation, configuration, and continuous administration. These tasks are becoming tiresome, complex, and error-prone. One way to escape this situation is to enable service platforms to configure and manage themselves. The approach presented here extends services with semantic descriptions to enable platform-independent autonomous service-level management using model-driven architecture and autonomic computing concepts. It has been implemented as an OSGi-based semantic autonomic manager, whose concept, prototypical implementation, and evaluation are presented.

2015-05-05
Zadeh, B.Q., Handschuh, S..  2014.  Random Manhattan Indexing. Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on. :203-208.

Vector space models (VSMs) are mathematically well-defined frameworks that have been widely used in text processing. In these models, high-dimensional, often sparse vectors represent text units. In an application, the similarity of vectors -- and hence the text units that they represent -- is computed by a distance formula. The high dimensionality of vectors, however, is a barrier to the performance of methods that employ VSMs. Consequently, a dimensionality reduction technique is employed to alleviate this problem. This paper introduces a new method, called Random Manhattan Indexing (RMI), for the construction of L1 normed VSMs at reduced dimensionality. RMI combines the construction of a VSM and dimension reduction into an incremental, and thus scalable, procedure. In order to attain its goal, RMI employs the sparse Cauchy random projections.
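
The construction described above can be sketched in a few lines: each vocabulary element gets a sparse random index vector whose non-zero entries are Cauchy-distributed (appropriate for L1 estimation), and a text unit's vector is built incrementally by adding the index vectors of the elements it contains. The dimensionality, sparsity, and use of a plain L1 distance are illustrative choices, not the parameters or estimator from the paper.

```python
# Minimal sketch of sparse Cauchy random indexing for an L1-normed vector space.
import random

DIM, NONZEROS = 100, 4
index_vectors = {}

def index_vector(term):
    if term not in index_vectors:
        vec = [0.0] * DIM
        for pos in random.sample(range(DIM), NONZEROS):
            vec[pos] = random.gauss(0, 1) / random.gauss(0, 1)   # standard Cauchy variate
        index_vectors[term] = vec
    return index_vectors[term]

def embed(tokens):
    doc = [0.0] * DIM
    for t in tokens:
        doc = [d + v for d, v in zip(doc, index_vector(t))]      # incremental construction
    return doc

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

d1 = embed("the cat sat on the mat".split())
d2 = embed("the dog sat on the mat".split())
print(l1(d1, d2))
```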

Veugen, T., de Haan, R., Cramer, R., Muller, F..  2015.  A Framework for Secure Computations With Two Non-Colluding Servers and Multiple Clients, Applied to Recommendations. Information Forensics and Security, IEEE Transactions on. 10:445-457.

We provide a generic framework that, with the help of a preprocessing phase that is independent of the inputs of the users, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. Our approach is shown to be provably secure in an adversarial model where one of the servers may arbitrarily deviate from the protocol specification, as well as employ an arbitrary number of dummy users. We use these techniques to implement a secure recommender system based on collaborative filtering that becomes more secure, and significantly more efficient than previously known implementations of such systems, when the preprocessing efforts are excluded. We suggest different alternatives for preprocessing, and discuss their merits and demerits.

McDaniel, P., Rivera, B., Swami, A..  2014.  Toward a Science of Secure Environments. Security Privacy, IEEE. 12:68-70.

The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and they conclude by considering how environmental security can be cast as a continuous optimization problem.

Hong, J.B., Dong Seong Kim.  2014.  Scalable Security Models for Assessing Effectiveness of Moving Target Defenses. Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on. :515-526.

Moving Target Defense (MTD) changes the attack surface of a system to confuse intruders and thwart attacks. Various MTD techniques have been developed to enhance the security of a networked system, but the effectiveness of these techniques is not well assessed. Security models (e.g., Attack Graphs (AGs)) provide formal methods of assessing security, but modeling MTD techniques in security models has not been studied. In this paper, we incorporate MTD techniques into security modeling and analysis using a scalable security model, namely Hierarchical Attack Representation Models (HARMs), to assess the effectiveness of the MTD techniques. In addition, we use importance measures (IMs) for scalable security analysis and for deploying the MTD techniques in an effective manner. A performance comparison between the HARM and the AG is given. Also, we compare the performance of using the IMs with that of the exhaustive search method in simulations.

Carroll, T.E., Crouse, M., Fulp, E.W., Berenhaut, K.S..  2014.  Analysis of network address shuffling as a moving target defense. Communications (ICC), 2014 IEEE International Conference on. :701-706.

Address shuffling is a type of moving target defense that prevents an attacker from reliably contacting a system by periodically remapping network addresses. Although limited testing has demonstrated it to be effective, little research has been conducted to examine the theoretical limits of address shuffling. As a result, it is difficult to understand how effective shuffling is and under what circumstances it is a viable moving target defense. This paper introduces probabilistic models that can provide insight into the performance of address shuffling. These models quantify the probability of attacker success in terms of network size, quantity of addresses scanned, quantity of vulnerable systems, and the frequency of shuffling. Theoretical analysis shows that shuffling is an acceptable defense if there is a small population of vulnerable systems within a large network address space; however, shuffling has a cost for legitimate users. These results are also shown empirically using simulation and actual traffic traces.
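
A probability model in this spirit can be sketched directly: an attacker scans k addresses per shuffle interval out of an address space of size N containing v vulnerable hosts, and shuffling remaps addresses between intervals so scanning progress does not carry over. The formula and parameter values below are a simple illustration, not the paper's exact model.

```python
# Minimal sketch: attacker success probability under periodic address shuffling.
from math import comb

def p_success(N, v, k, intervals):
    # within one interval the attacker scans k distinct addresses and hits none
    p_miss_one_interval = comb(N - v, k) / comb(N, k)
    return 1 - p_miss_one_interval ** intervals          # shuffling resets progress each interval

# a small vulnerable population in a large shuffled space stays hard to find
print(p_success(N=65536, v=5, k=1000, intervals=10))
```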

Kumar, A., Reddy, K..  2014.  Constructing secure web applications with proper data validations. Recent Advances and Innovations in Engineering (ICRAIE), 2014. :1-5.

With the advent of the World Wide Web, information sharing through the internet has increased drastically, so web application security is today's most significant battlefield between attackers and the resources of web services, and it is likely to remain so for the foreseeable future. Recent attacks have shown that major attacks on web applications can be carried out even when the system has strong network-level security. Poor input validation mechanisms used in web applications lead to the deployment of vulnerable web applications, which are easy to exploit at later stages. Critical web application vulnerabilities such as Cross-Site Scripting (XSS) and injections (SQL, PHP, LDAP, SSL, XML, command, and code) occur because of weak validation at the base level, and they can be enough to modify the system in an unauthorized way or to exploit it. In this paper we present these issues in data validation strategies, so as to avoid the deployment of vulnerable web applications.
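
The validation practices argued for here can be illustrated with a short sketch: allow-list validation of untrusted input, plus a parameterized query instead of string concatenation so the database never interprets input as SQL. The table schema, field names, and regular expression are made up for illustration.

```python
# Minimal sketch of proper data validation: allow-list input checks and
# parameterized queries instead of string concatenation.
import re
import sqlite3

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")       # allow-list: only safe characters

def lookup_user(db, username):
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")             # reject instead of trying to sanitize
    cur = db.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cur.fetchone()                                # placeholder binding, not concatenation

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.org')")
print(lookup_user(db, "alice"))
try:
    lookup_user(db, "alice' OR '1'='1")                  # classic injection payload
except ValueError:
    print("rejected by allow-list validation")
```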

Buja, G., Bin Abd Jalil, K., Bt Hj Mohd Ali, F., Rahman, T.F.A..  2014.  Detection model for SQL injection attack: An approach for preventing a web application from the SQL injection attack. Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on. :60-64.

Over the past 20 years, the use of the web in daily life has increased steadily and has now become the norm. As the use of the web increases, so does the use of web applications. Apparently, most web applications that exist today have some vulnerability that could be exploited by an unauthorized person. Some well-known web application vulnerabilities are Structured Query Language (SQL) injection, Cross-Site Scripting (XSS), and Cross-Site Request Forgery (CSRF). By exploiting these web application vulnerabilities, a system cracker can gain information about users and damage the reputation of the respective organization. Usually the developers of web applications do not realize that their web applications have vulnerabilities; they only realize it when there is an attack or manipulation of their code by someone. This is normal, as a web application contains thousands of lines of code, so it is not easy to detect whether there are loopholes. Nowadays, as hacking tools and hacking tutorials are easier to obtain, many new hackers are emerging. Even though SQL injection is very easy to protect against, there are still large numbers of systems on the internet that are vulnerable to this type of attack, because a few subtle conditions can go undetected. Therefore, in this paper we propose a detection model for detecting and recognizing the web vulnerability SQL injection based on defined and identified criteria. In addition, the proposed detection model is able to generate a report regarding the vulnerability level of the web application. As a consequence, the proposed detection model should be able to decrease the possibility of SQL injection attacks being launched against the web application.
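
A criteria-based detection idea like the one described can be sketched as a small scanner: request parameters are checked against a handful of SQL injection signatures and a summary report of the suspected vulnerability level is produced. The signatures, scoring thresholds, and report format are illustrative assumptions, not the paper's defined criteria.

```python
# Minimal sketch: signature-based SQL injection detection with a vulnerability-level report.
import re

CRITERIA = [
    (r"(?i)\bunion\b.+\bselect\b", "UNION-based injection"),
    (r"(?i)\bor\b\s+'?\d+'?\s*=\s*'?\d+'?", "tautology (OR 1=1)"),
    (r"--|#|/\*", "comment sequence used to truncate the query"),
    (r"(?i);\s*(drop|insert|update|delete)\b", "stacked query"),
]

def scan_parameter(value):
    return [label for pattern, label in CRITERIA if re.search(pattern, value)]

def report(request_params):
    findings = {name: scan_parameter(val) for name, val in request_params.items()}
    hits = sum(len(v) for v in findings.values())
    level = "high" if hits >= 2 else "medium" if hits == 1 else "low"
    return {"vulnerability_level": level, "findings": findings}

print(report({"id": "1 OR 1=1 --", "name": "alice"}))
```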

Rieke, R., Repp, J., Zhdanova, M., Eichler, J..  2014.  Monitoring Security Compliance of Critical Processes. Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on. :552-560.

Enforcing security in process-aware information systems at runtime requires the monitoring of systems' operation using process information. Analysis of this information with respect to security and compliance aspects is growing in complexity with the increase in functionality, connectivity, and dynamics of process evolution. To tackle this complexity, the application of models is becoming standard practice. Considering today's frequent changes to processes, model-based support for security and compliance analysis is not only needed in pre-operational phases but also at runtime. This paper presents an approach to support evaluation of the security status of processes at runtime. The approach is based on operational formal models derived from process specifications and security policies comprising technical, organizational, regulatory and cross-layer aspects. A process behavior model is synchronized by events from the running process and utilizes prediction of expected close-future states to find possible security violations and allow early decisions on countermeasures. The applicability of the approach is exemplified by a misuse case scenario from a hydroelectric power plant.

2015-05-04
Fatemi Moghaddam, F., Varnosfaderani, S.D., Mobedi, S., Ghavam, I., Khaleghparast, R..  2014.  GD2SA: Geo detection and digital signature authorization for secure accessing to cloud computing environments. Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on. :39-42.

Cloud computing is a new paradigm and an emerging technology for hosting and delivering resources over a network such as the internet by using concepts of virtualization, processing power, and storage. However, many challenging issues remain unresolved in cloud-based environments and decrease the reliability and efficiency experienced by service providers and users. User authentication is one of the most challenging of these issues, and accordingly this paper proposes an efficient user authentication model that involves two defined phases, registration and access. Geo Detection and Digital Signature Authorization (GD2SA) is a user authentication tool for provisional access permission in cloud computing environments. The main aim of GD2SA is to compare the location of an unregistered device with the location of the user by using devices belonging to the user (e.g., a smartphone). In addition, this authentication algorithm uses the digital signature of the account owner to verify the identity of the applicant. This model is evaluated in this paper according to three main parameters: efficiency, scalability, and security. Overall, the theoretical analysis of the proposed model showed that it can increase the efficiency and reliability of cloud computing as an emerging technology.

Sah, S.K., Shakya, S., Dhungana, H..  2014.  A security management for Cloud based applications and services with Diameter-AAA. Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on. :6-11.

Cloud computing offers various services and web-based applications over the internet. With the tremendous growth in the development of cloud-based services, security is the main challenge and today's principal concern for cloud service providers. This paper describes the management of security issues based on Diameter AAA mechanisms for the authentication, authorization, and accounting (AAA) demanded by cloud service providers. This paper focuses on the integration of Diameter AAA into the cloud system architecture.

Liu, J.N.K., Yanxing Hu, You, J.J., Yulin He.  2014.  An advancing investigation on reduct and consistency for decision tables in Variable Precision Rough Set models. Fuzzy Systems (FUZZ-IEEE), 2014 IEEE International Conference on. :1496-1503.

The Variable Precision Rough Set (VPRS) model is one of the most important extensions of classical Rough Set (RS) theory. It employs a majority inclusion relation mechanism in order to make the classical RS model more fault tolerant, and therefore the generalization of the model is improved. This paper can be viewed as an extension of previous investigations on the attribute reduction problem in the VPRS model. In our investigation, we illustrate with examples that the previously proposed reduct definitions may spoil the hidden classification ability of a knowledge system by ignoring certain essential attributes in some circumstances. Consequently, by proposing a new β-consistency notion, we analyze the relationship between the structure of a Decision Table (DT) and different definitions of reduct in the VPRS model. We then give a new notion of β-complement reduct that can avoid the defects of the reduct notions defined in previous literature. We also supply a method to obtain the β-complement reduct using a decision-table splitting algorithm, and finally demonstrate the feasibility of our approach with sample instances.
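
The majority-inclusion mechanism that VPRS builds on can be sketched briefly: an equivalence class of the condition attributes is placed in the β-lower approximation of a decision class when at least a fraction β of its objects carry that decision, which is what makes the model tolerant of a few inconsistent objects. The toy decision table and β value below are illustrative; the paper's β-complement reduct algorithm is not reproduced here.

```python
# Minimal sketch of the VPRS beta-lower approximation (majority inclusion).
from collections import defaultdict

# each row: (condition attribute values, decision)
table = [(("sunny", "high"), "no"),
         (("sunny", "high"), "no"),
         (("sunny", "high"), "yes"),     # one inconsistent object in this class
         (("rain",  "low"),  "yes")]

def beta_lower_approximation(table, decision, beta=0.6):
    classes = defaultdict(list)
    for cond, dec in table:
        classes[cond].append(dec)
    lower = []
    for cond, decisions in classes.items():
        inclusion = decisions.count(decision) / len(decisions)   # majority inclusion degree
        if inclusion >= beta:
            lower.append(cond)
    return lower

print(beta_lower_approximation(table, "no", beta=0.6))   # [('sunny', 'high')] despite the noise
```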