Biblio

Found 2371 results

Filters: First Letter Of Last Name is G
2019-12-05
Sahu, Abhijeet, Goulart, Ana.  2019.  Implementation of a C-UNB Module for NS-3 and Validation for DLMS-COSEM Application Layer Protocol. 2019 IEEE ComSoc International Communications Quality and Reliability Workshop (CQR). :1-6.

The number of sensors and embedded devices in an urban area can be on the order of thousands. New low-power wide area (LPWA) wireless network technologies have been proposed to support this large number of asynchronous, low-bandwidth devices. Among them, Cooperative UltraNarrowband (C-UNB) is a clean-slate cellular network technology that connects these devices to a remote site or data collection server. C-UNB employs small-bandwidth channels and a lightweight random access protocol. In this paper, a new application is investigated: the use of C-UNB wireless networks to support the Advanced Metering Infrastructure (AMI), in order to facilitate communication between smart meters and utilities. To this end, we adapted a mathematical model for C-UNB and implemented a network simulation module in NS-3 to represent C-UNB's physical and medium access control layers. For the application layer, we implemented the DLMS-COSEM protocol (Device Language Message Specification - Companion Specification for Energy Metering). Details of the simulation module are presented, and we conclude that it supports the results of the mathematical model.

Guang, Xuan, Yeung, Raymond W..  2019.  Local-Encoding-Preserving Secure Network Coding for Fixed Dimension. 2019 IEEE International Symposium on Information Theory (ISIT). :201-205.

In the paradigm of network coding, information-theoretic security is considered in the presence of wiretappers, who can access one arbitrary edge subset up to a certain size, referred to as the security level. Secure network coding is applied to prevent the leakage of the source information to the wiretappers. In this paper, we consider the problem of secure network coding for flexible pairs of information rate and security level with any fixed dimension (equal to the sum of rate and security level). We present a novel approach for designing a secure linear network code (SLNC) such that the same SLNC can be applied for all the rate and security-level pairs with the fixed dimension. We further develop a polynomial-time algorithm for efficient implementation and prove that there is no penalty on the required field size for the existence of SLNCs in terms of the best known lower bound by Guang and Yeung. Finally, by applying our approach as a crucial building block, we can construct a family of SLNCs that not only can be applied to all possible pairs of rate and security level but also share a common local encoding kernel at each intermediate node in the network.
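
To make the wiretap idea concrete, here is a minimal Python sketch of the classic rate-1, security-level-1 case (dimension 2): the source symbol is mixed with a uniformly random key so that a wiretapper observing any single edge learns nothing. This is only an illustration of the underlying principle under assumed parameters (field GF(257), two edge-disjoint paths); it is not the paper's local-encoding-preserving construction.

```python
import secrets

# Minimal illustration of secure network coding with rate 1 and security
# level 1 (dimension 2) over GF(257): mix the message with a uniform random
# key so that either transmitted symbol alone is uniformly distributed.
# The field size, topology, and coding scheme are illustrative assumptions.
P = 257  # small prime field, chosen only for the example

def encode(message: int) -> tuple:
    """Send two symbols on two edge-disjoint paths: (key, message + key)."""
    key = secrets.randbelow(P)           # fresh uniform key per message
    return key, (message + key) % P      # a single wiretapped edge reveals nothing

def decode(x1: int, x2: int) -> int:
    """The sink observes both symbols and recovers the message."""
    return (x2 - x1) % P

x1, x2 = encode(42)
assert decode(x1, x2) == 42
```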

Gu, Yonggen, Hou, Dingding, Wu, Xiaohong.  2018.  A Cloud Storage Resource Transaction Mechanism Based on Smart Contract. Proceedings of the 8th International Conference on Communication and Network Security. :134-138.

Since security and fault tolerance are two important metrics of data storage, they bring both opportunities and challenges for distributed data storage and transaction. The traditional transaction system for storage resources, which generally runs in a centralized mode, suffers from high cost, vendor lock-in, single-point-of-failure risk, DDoS attacks, and information security threats. Therefore, this paper proposes a distributed transaction method for cloud storage based on smart contracts. First, to guarantee fault tolerance and decrease the storage cost of erasure coding, a VCG-based auction mechanism is proposed for storage transactions, and we deploy and implement the proposed mechanism by designing a corresponding smart contract. In particular, we address the problem of how to implement a VCG-like mechanism in a blockchain environment. Based on a private Ethereum chain, we simulate the proposed storage transaction method. The results show that the proposed transaction model can realize competitive trading of storage resources and ensure the safe and economic operation of resource trading.
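
As a rough, off-chain illustration of the VCG idea referred to in the abstract, the sketch below runs a reverse auction that picks the k cheapest storage providers and pays each winner the (k+1)-th lowest bid, which is the VCG payment in this simple unit-supply setting. The provider names and prices are made up, and no smart-contract code is shown; the on-chain deployment described in the paper is not reproduced here.

```python
# Hedged sketch of a VCG-style reverse auction for selecting k storage
# providers (e.g., holders of erasure-coded shards) at lowest total cost.
# Runs off-chain in plain Python; the paper's Ethereum smart contract is
# not reproduced. Provider names and asking prices are illustrative.
def vcg_storage_auction(bids: dict, k: int):
    """bids maps provider -> asking price for storing one shard."""
    ranked = sorted(bids, key=bids.get)               # cheapest first
    winners = ranked[:k]
    # The VCG payment to each winner in this unit-supply setting is the bid
    # of the first excluded provider (the (k+1)-th lowest), which makes
    # truthful bidding a dominant strategy.
    threshold = bids[ranked[k]] if len(ranked) > k else float("inf")
    payments = {w: threshold for w in winners}
    return winners, payments

winners, payments = vcg_storage_auction({"A": 3.0, "B": 5.0, "C": 4.0, "D": 9.0}, k=2)
print(winners, payments)   # ['A', 'C'], each paid 5.0 (B's losing bid)
```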

2019-12-02
Abate, Carmine, Blanco, Roberto, Garg, Deepak, Hritcu, Catalin, Patrignani, Marco, Thibault, Jérémy.  2019.  Journey Beyond Full Abstraction: Exploring Robust Property Preservation for Secure Compilation. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :256–25615.
Good programming languages provide helpful abstractions for writing secure code, but the security properties of the source language are generally not preserved when compiling a program and linking it with adversarial code in a low-level target language (e.g., a library or a legacy application). Linked target code that is compromised or malicious may, for instance, read and write the compiled program's data and code, jump to arbitrary memory locations, or smash the stack, blatantly violating any source-level abstraction. By contrast, a fully abstract compilation chain protects source-level abstractions all the way down, ensuring that linked adversarial target code cannot observe more about the compiled program than what some linked source code could about the source program. However, while research in this area has so far focused on preserving observational equivalence, as needed for achieving full abstraction, there is a much larger space of security properties one can choose to preserve against linked adversarial code. And the precise class of security properties one chooses crucially impacts not only the supported security goals and the strength of the attacker model, but also the kind of protections a secure compilation chain has to introduce. We are the first to thoroughly explore a large space of formal secure compilation criteria based on robust property preservation, i.e., the preservation of properties satisfied against arbitrary adversarial contexts. We study robustly preserving various classes of trace properties such as safety, of hyperproperties such as noninterference, and of relational hyperproperties such as trace equivalence. This leads to many new secure compilation criteria, some of which are easier to practically achieve and prove than full abstraction, and some of which provide strictly stronger security guarantees. For each of the studied criteria we propose an equivalent “property-free” characterization that clarifies which proof techniques apply. For relational properties and hyperproperties, which relate the behaviors of multiple programs, our formal definitions of the property classes themselves are novel. We order our criteria by their relative strength and show several collapses and separation results. Finally, we adapt existing proof techniques to show that even the strongest of our secure compilation criteria, the robust preservation of all relational hyperproperties, is achievable for a simple translation from a statically typed to a dynamically typed language.
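
One way to picture the "property-free" characterizations the abstract mentions is the robust preservation of trace properties, sketched below in LaTeX using notation common in this line of work; the exact symbols and side conditions in the paper may differ.

```latex
% Hedged sketch: Robust Trace-Property Preservation (RTP) and an equivalent
% ``property-free'' characterization (RTC); notation is assumed, not quoted.
\[
\mathrm{RTP}:\quad
\forall \pi.\ \forall P.\
\big(\forall C_S, t.\ C_S[P] \rightsquigarrow t \Rightarrow t \in \pi\big)
\;\Rightarrow\;
\big(\forall C_T, t.\ C_T[P{\downarrow}] \rightsquigarrow t \Rightarrow t \in \pi\big)
\]
\[
\mathrm{RTC}:\quad
\forall P.\ \forall C_T.\ \forall t.\
C_T[P{\downarrow}] \rightsquigarrow t
\;\Rightarrow\;
\exists C_S.\ C_S[P] \rightsquigarrow t
\]
```

The property-free form removes the quantification over properties: to prove it, one exhibits, for every trace produced by a target-level context, some source-level context that produces the same trace.
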
Kelly, Daniel M., Wellons, Christopher C., Coffman, Joel, Gearhart, Andrew S..  2019.  Automatically Validating the Effectiveness of Software Diversity Schemes. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks – Supplemental Volume (DSN-S). :1–2.
Software diversity promises to invert the current balance of power in cybersecurity by preventing exploit reuse. Nevertheless, the comparative evaluation of diversity techniques has received scant attention. In ongoing work, we use the DARPA Cyber Grand Challenge (CGC) environment to assess the effectiveness of diversifying compilers in mitigating exploits. Our approach provides a quantitative comparison of diversity strategies and demonstrates wide variation in their effectiveness.
2019-11-27
Gao, Yang, Li, Borui, Wang, Wei, Xu, Wenyao, Zhou, Chi, Jin, Zhanpeng.  2018.  Watching and Safeguarding Your 3D Printer: Online Process Monitoring Against Cyber-Physical Attacks. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.. 2:108:1–108:27.

The increasing adoption of 3D printing in many safety- and mission-critical applications exposes 3D printers to a variety of cyber attacks that may result in catastrophic consequences if the printing process is compromised. For example, the mechanical properties (e.g., physical strength, thermal resistance, dimensional stability) of 3D printed objects could be significantly affected and degraded if a simple printing setting is maliciously changed. To address this challenge, this study proposes a model-free real-time online process monitoring approach that is capable of detecting and defending against cyber-physical attacks on the firmware of 3D printers. Specifically, we explore the potential attacks and consequences of four key printing attributes (including infill path, printing speed, layer thickness, and fan speed) and then formulate the attack models. Based on the intrinsic relation between the printing attributes and the physical observations, our defense model is established by systematically analyzing the multi-faceted, real-time measurements collected from the accelerometer, magnetometer and camera. The Kalman filter and Canny filter are used to map and estimate the three aforementioned critical toolpath parameters that might affect the printing quality. Mel-frequency Cepstrum Coefficients are used to extract features for fan speed estimation. Experimental results show that, for a complex 3D printed design, our method can achieve 4% Hausdorff distance compared with the model dimension for infill path estimation, 6.07% Mean Absolute Percentage Error (MAPE) for speed estimation, 9.57% MAPE for layer thickness estimation, and 96.8% accuracy for fan speed identification. Our study demonstrates that this new approach can effectively defend against cyber-physical attacks on 3D printers and the 3D printing process.
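
The fan-speed identification step lends itself to a small illustration: extract MFCC features from short acoustic clips and classify them. The sketch below uses librosa and a k-NN classifier on synthetic tones; the sampling rate, clip length, feature count, and classifier are assumptions and not the authors' pipeline.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

# Hedged sketch of MFCC-based fan-speed identification. The synthetic tones,
# sampling rate, number of coefficients, and the k-NN classifier are
# illustrative assumptions, not the paper's configuration.
SR = 22050  # assumed sampling rate (Hz)

def mfcc_features(clip: np.ndarray, n_mfcc: int = 13) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=clip, sr=SR, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)             # average over time: one vector per clip

# Synthetic stand-in clips: tones at two "fan speeds" plus noise.
t = np.arange(SR) / SR
clips = [np.sin(2 * np.pi * f * t) + 0.1 * np.random.randn(SR)
         for f in (120.0, 120.0, 240.0, 240.0)]
labels = ["nominal", "nominal", "tampered", "tampered"]

X = np.stack([mfcc_features(c) for c in clips])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict([mfcc_features(np.sin(2 * np.pi * 240.0 * t))]))  # -> ['tampered']
```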

2019-11-26
Hassanpour, Reza, Dogdu, Erdogan, Choupani, Roya, Goker, Onur, Nazli, Nazli.  2018.  Phishing E-Mail Detection by Using Deep Learning Algorithms. Proceedings of the ACMSE 2018 Conference. :45:1-45:1.

Phishing e-mails are considered spam e-mails that aim to collect sensitive personal information about users via the network. Since the main purpose of this behavior is mostly to harm users financially, it is vital to detect these phishing or spam e-mails immediately to prevent unauthorized access to users' vital information. To detect phishing e-mails, a quick and robust classification method is important. Considering the billions of e-mails on the Internet, this classification process must be done in a limited time to analyze the results. In this work, we present some early results on the classification of spam e-mail using deep learning and machine learning methods. We utilize word2vec to represent e-mails instead of using popular keyword-based or other rule-based methods. Vector representations are then fed into a neural network to create a learning model. We have tested our method on an open dataset and found accuracy levels over 96% with the deep learning classification methods in comparison to standard machine learning algorithms.
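
A minimal sketch of the described pipeline, representing each e-mail by averaged word2vec vectors and feeding them to a small neural network, is shown below; the toy corpus, embedding size, and network architecture are assumptions, not the authors' setup.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

# Hedged sketch: e-mails as averaged word2vec vectors, classified by a small
# neural network. Corpus, vector size, and architecture are illustrative.
emails = [("verify your account password immediately", 1),   # phishing
          ("click here to claim your bank prize", 1),
          ("meeting agenda attached for tomorrow", 0),        # legitimate
          ("project status report and next steps", 0)]
tokens = [text.split() for text, _ in emails]
labels = [y for _, y in emails]

w2v = Word2Vec(sentences=tokens, vector_size=32, min_count=1, epochs=50)

def email_vector(words):
    # Average the vectors of known words to get a fixed-size e-mail vector.
    vecs = [w2v.wv[w] for w in words if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([email_vector(t) for t in tokens])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict([email_vector("claim your prize password".split())]))
```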

Scheitle, Quirin, Gasser, Oliver, Nolte, Theodor, Amann, Johanna, Brent, Lexi, Carle, Georg, Holz, Ralph, Schmidt, Thomas C., Wählisch, Matthias.  2018.  The Rise of Certificate Transparency and Its Implications on the Internet Ecosystem. Proceedings of the Internet Measurement Conference 2018. :343-349.

In this paper, we analyze the evolution of Certificate Transparency (CT) over time and explore the implications of exposing certificate DNS names from the perspective of security and privacy. We find that certificates in CT logs have seen exponential growth. Website support for CT has also constantly increased, with now 33% of established connections supporting CT. With the increasing deployment of CT, there are also concerns of information leakage due to all certificates being visible in CT logs. To understand this threat, we introduce a CT honeypot and show that data from CT logs is being used to identify targets for scanning campaigns only minutes after certificate issuance. We present and evaluate a methodology to learn and validate new subdomains from the vast number of domains extracted from CT logged certificates.
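
One building block implied here, extracting DNS names from certificates so new (sub)domains can be learned, can be sketched with the cryptography library as below; fetching and Merkle-leaf parsing of actual CT log entries (RFC 6962 get-entries) is more involved and omitted, and the file name is a placeholder.

```python
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

# Hedged sketch: pull DNS names out of an X.509 certificate (e.g., one
# obtained from a CT log) so new subdomains can be learned. CT log fetching
# and leaf parsing are not shown.
def dns_names(pem_bytes: bytes) -> list:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        san = cert.extensions.get_extension_for_oid(
            ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
    except x509.ExtensionNotFound:
        return []
    return san.get_values_for_type(x509.DNSName)

# Usage with a PEM certificate already on disk ("example.pem" is a placeholder):
# with open("example.pem", "rb") as f:
#     print(dns_names(f.read()))
```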

Baykara, Muhammet, Gürel, Zahit Ziya.  2018.  Detection of Phishing Attacks. 2018 6th International Symposium on Digital Forensic and Security (ISDFS). :1-5.

Phishing is a form of cybercrime where an attacker impersonates a real person or institution, posing as an official person or entity through e-mail or other communication media. In this type of cyber attack, the attacker sends malicious links or attachments through phishing e-mails that can perform various functions, including capturing the login credentials or account information of the victim. These e-mails harm victims through monetary loss and identity theft. In this study, a software tool called "Anti Phishing Simulator" was developed, giving information about the phishing detection problem and how to detect phishing e-mails. With this software, phishing and spam mails are detected by examining mail contents. Spam words added to the database are classified with a Bayesian algorithm.
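
The Bayesian classification of mail contents can be illustrated in a few lines of scikit-learn, shown below on a toy corpus; this is a generic naive Bayes sketch, not the Anti Phishing Simulator implementation or its word database.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hedged sketch of Bayesian spam/phishing classification over mail contents.
# The toy corpus and pipeline are illustrative assumptions only.
mails = ["urgent verify your password now",
         "your account is suspended click the link",
         "lunch on friday with the team",
         "quarterly report attached for review"]
labels = ["phishing", "phishing", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(mails, labels)
print(model.predict(["please verify your suspended account"]))  # -> ['phishing']
```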

Schmidt, Mark, Pfeiffer, Tom, Grill, Christin, Huber, Robert, Jirauschek, Christian.  2019.  Coexistence of Intensity Pattern Types in Broadband Fourier Domain Mode Locked (FDML) Lasers. 2019 Conference on Lasers and Electro-Optics Europe European Quantum Electronics Conference (CLEO/Europe-EQEC). :1-1.

Fourier domain mode locked (FDML) lasers, in which the sweep period of the swept bandpass filter is synchronized with the roundtrip time of the optical field, are broadband and rapidly tunable fiber ring laser systems, which offer rich dynamics. A detailed understanding is important from a fundamental point of view, and also required in order to improve current FDML lasers, which have not yet reached their coherence limit. Here, we study the formation of localized patterns in the intensity trace of FDML laser systems based on a master equation approach [1] derived from the nonlinear Schrödinger equation for polarization-maintaining setups, which shows excellent agreement with experimental data. A variety of localized patterns and chaotic or bistable operation modes were previously discovered in [2–4] by investigating primarily quasi-static regimes within a narrow sweep bandwidth, where a delay differential equation model was used. In particular, the formation of so-called holes, which are characterized by a dip in the intensity trace and a rapid phase jump, is described. Such holes have tentatively been associated with Nozaki-Bekki holes, which are solutions to the complex Ginzburg-Landau equation. In Fig. 1 (b) to (d), small sections of a numerical solution of our master equation are presented for a partially dispersion-compensated polarization-maintaining FDML laser setup. Within our approach, we are able to study the full sweep dynamics over a broad sweep range of more than 100 nm. This allows us to identify different co-existing intensity patterns within a single sweep. In general, high-frequency distortions in the intensity trace of FDML lasers [5] are mainly due to synchronization mismatches caused by the fiber dispersion or a detuning of the roundtrip time of the optical field from the sweep period of the swept bandpass filter. These timing errors lead to rich and complex dynamics over many roundtrips and are a major source of noise, greatly affecting imaging and sensing applications. For example, the imaging quality in optical coherence tomography, where FDML lasers are superior sources, is significantly reduced [5].

Shukla, Anjali, Rakshit, Arnab, Konar, Amit, Ghosh, Lidia, Nagar, Atulya K..  2018.  Decoding of Mind-Generated Pattern Locks for Security Checking Using Type-2 Fuzzy Classifier. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :1976-1981.

Brain Computer Interface (BCI) aims at providing a better quality of life to people suffering from neuromuscular disability. This paper establishes a BCI paradigm to provide a biometric security option, used for locking and unlocking personal computers or mobile phones. Although it is primarily meant for people with neurological disorders, its application can safely be extended to the use of healthy people. The proposed scheme decodes the electroencephalogram signals generated by the brains of subjects when they are engaged in selecting a sequence of dots in a (6×6) 2-dimensional array, representing a pattern lock. The subject, while selecting the right dot in a row, would yield a P300 signal, which is decoded later by the brain-computer interface system to understand the subject's intention. If the right dots in all 6 rows are correctly selected, the subject would yield P300 signals six times, which on being decoded by a BCI system would allow the subject to access the system. Because of intra-subjective variation in the amplitude and wave-shape of the P300 signal, a type-2 fuzzy classifier has been employed to classify the presence/absence of the P300 signal in the desired window. A comparison of the performance of the proposed classifier with others is also included. The functionality of the proposed system has been validated using the training instances generated for 30 subjects. Experimental results confirm that the classification accuracy for the present scheme is above 90% irrespective of the subject.

2019-11-25
Miao, Mao-ke, Gao, Chao, Liang, Hao-dong, Li, Xiao-Feng.  2018.  Shannon Limit of Coding in Wireless Communication. Proceedings of the 2nd International Conference on Telecommunications and Communication Engineering. :275–279.
A limiting SNR (signal-to-noise ratio) of -1.6 dB has been derived for the continuous AWGN channel from Shannon's theorem. In this paper we study the channel capacity and the limiting SNR by series representation, unlike the iteration method, for the BI-AWGN channel (discrete binary input, continuous output) and the Rayleigh channel under BPSK modulation. The limiting SNR is obtained with the help of the Cesàro series summation rule. We find that the BI-AWGN channel and the Rayleigh channel with CSI (channel state information) have the same limiting SNR as R → 0 and P → 0. For the Rayleigh channel without CSI, we solve the problem approximately by a series expansion as R → 0, and the resulting capacity expression is simpler than the two others derived before.
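
The quoted -1.6 dB figure is the ultimate Shannon limit on Eb/N0: letting the bandwidth grow without bound in C = B log2(1 + P/(N0 B)) gives C → P/(N0 ln 2), so Eb/N0 = P/(C N0) ≥ ln 2. The short computation below checks the decibel value.

```python
import math

# Worked check of the ultimate Shannon limit on Eb/N0 quoted in the abstract:
# as bandwidth B -> infinity, C = B*log2(1 + P/(N0*B)) -> P/(N0*ln 2),
# hence Eb/N0 = P/(C*N0) >= ln 2.
eb_n0_min = math.log(2)                    # linear scale
print(10 * math.log10(eb_n0_min))          # ~ -1.59 dB, rounded to "-1.6 dB"
```
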
Guo, Tao, Yeung, Raymond W..  2018.  The Explicit Coding Rate Region of Symmetric Multilevel Diversity Coding. 2018 Information Theory and Applications Workshop (ITA). :1–9.
It is well known that superposition coding, namely separately encoding the independent sources, is optimal for symmetric multilevel diversity coding (SMDC) (Yeung-Zhang 1999). However, the characterization of the coding rate region therein involves uncountably many linear inequalities and the constant term (i.e., the lower bound) in each inequality is given in terms of the solution of a linear optimization problem. Thus this implicit characterization of the coding rate region does not enable the determination of the achievability of a given rate tuple. In this paper, we first obtain closed-form expressions of these uncountably many inequalities. Then we identify a finite subset of inequalities that is sufficient for characterizing the coding rate region. This gives an explicit characterization of the coding rate region. We further show by the symmetry of the problem that only a much smaller subset of this finite set of inequalities needs to be verified in determining the achievability of a given rate tuple. Yet, the cardinality of this smaller set grows at least exponentially fast with L.
2019-11-19
Sun, Yunhe, Yang, Dongsheng, Meng, Lei, Gao, Xiaoting, Hu, Bo.  2018.  Universal Framework for Vulnerability Assessment of Power Grid Based on Complex Networks. 2018 Chinese Control And Decision Conference (CCDC). :136-141.

Traditionally, power grid vulnerability assessment methods separate the study of node vulnerability from edge vulnerability, so the evaluation results are not accurate. A unified framework for vulnerability assessment of the power grid is still required. Thus, this paper proposes a universal method for vulnerability assessment of the power grid by establishing a complex network model with uniform weights for nodes and edges. The concept of a virtual edge is introduced into the distinctly weighted complex network model of the power system, and the selection functions for edge weight and virtual edge weight are constructed based on electrical and physical parameters. In addition, in order to reflect the electrical characteristics of power grids more accurately, a weighted betweenness evaluation index with transmission efficiency is defined. Finally, the method has been demonstrated on the IEEE 39-bus system, and the results prove the effectiveness of the proposed method.
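
The weighted-betweenness index at the heart of such a method can be illustrated on a toy graph with networkx, as below; the edge weights are arbitrary stand-ins for the electrical parameters, and the paper's virtual-edge and transmission-efficiency construction is not reproduced.

```python
import networkx as nx

# Hedged sketch: rank buses and lines of a toy grid by weighted betweenness.
# Edge weights are placeholders for electrical distance; the paper's virtual
# edges and transmission-efficiency weighting are not reproduced.
G = nx.Graph()
G.add_weighted_edges_from([
    ("bus1", "bus2", 0.2), ("bus2", "bus3", 0.1), ("bus3", "bus4", 0.3),
    ("bus2", "bus4", 0.4), ("bus1", "bus5", 0.5), ("bus5", "bus4", 0.2),
])
node_bc = nx.betweenness_centrality(G, weight="weight")        # bus criticality
edge_bc = nx.edge_betweenness_centrality(G, weight="weight")   # line criticality
print(sorted(node_bc.items(), key=lambda kv: -kv[1]))
```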

Filvà, Daniel Amo, García-Peñalvo, Francisco José, Forment, Marc Alier, Escudero, David Fonseca, Casañ, Maria José.  2018.  Privacy and Identity Management in Learning Analytics Processes with Blockchain. Proceedings of the Sixth International Conference on Technological Ecosystems for Enhancing Multiculturality. :997-1003.

The collection of students' sensitive data raises adverse reactions against Learning Analytics that decrease confidence in its adoption. The laws and policies that surround the use of educational data are not enough to ensure the privacy, security, validity, integrity and reliability of students' data. This problem has been detected through a literature review and can be solved if a technological layer of automated checking rules is added on top of these policies. The aim of this thesis is to research an emerging technology, blockchain, to preserve the identity of students and secure their data. In a first stage, a systematic literature review will be conducted in order to set the context of the research. Afterwards, and through the scientific method, we will develop a blockchain-based solution to automate rules and constraints, with the aim of giving students governance of their data and ensuring data privacy and security.

2019-11-12
Ferenc, Rudolf, Hegedűs, Péter, Gyimesi, Péter, Antal, Gábor, Bán, Dénes, Gyimóthy, Tibor.  2019.  Challenging Machine Learning Algorithms in Predicting Vulnerable JavaScript Functions. 2019 IEEE/ACM 7th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE). :8-14.

The rapid rise of cyber-crime activities and the growing number of devices threatened by them place software security issues in the spotlight. As around 90% of all attacks exploit known types of security issues, finding vulnerable components and applying existing mitigation techniques is a viable practical approach for fighting against cyber-crime. In this paper, we investigate how the state-of-the-art machine learning techniques, including a popular deep learning algorithm, perform in predicting functions with possible security vulnerabilities in JavaScript programs. We applied 8 machine learning algorithms to build prediction models using a new dataset constructed for this research from the vulnerability information in public databases of the Node Security Project and the Snyk platform, and code fixing patches from GitHub. We used static source code metrics as predictors and an extensive grid-search algorithm to find the best performing models. We also examined the effect of various re-sampling strategies to handle the imbalanced nature of the dataset. The best performing algorithm was KNN, which created a model for the prediction of vulnerable functions with an F-measure of 0.76 (0.91 precision and 0.66 recall). Moreover, deep learning, tree and forest based classifiers, and SVM were competitive with F-measures over 0.70. Although the F-measures did not vary significantly with the re-sampling strategies, the distribution of precision and recall did change. No re-sampling seemed to produce models preferring high precision, while re-sampling strategies balanced the IR measures.
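
The experimental setup translates naturally into a short scikit-learn sketch: grid-search a k-NN model and score it with the F-measure. The synthetic, imbalanced dataset and parameter grid below are assumptions; the paper's JavaScript metric set and re-sampling strategies are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score

# Hedged sketch of a grid-searched k-NN vulnerability predictor over static
# code metrics; the data here are synthetic and imbalanced (~10% positives).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

grid = GridSearchCV(KNeighborsClassifier(),
                    {"n_neighbors": [1, 3, 5, 9], "weights": ["uniform", "distance"]},
                    scoring="f1", cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_, f1_score(y_te, grid.predict(X_te)))
```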

Wei, Shengjun, Zhong, Hao, Shan, Chun, Ye, Lin, Du, Xiaojiang, Guizani, Mohsen.  2018.  Vulnerability Prediction Based on Weighted Software Network for Secure Software Building. 2018 IEEE Global Communications Conference (GLOBECOM). :1-6.

To build secure communications software, Vulnerability Prediction Models (VPMs) are used to predict vulnerable software modules in the software system before software security testing. At present, many software security metrics have been proposed to design a VPM. In this paper, we predict vulnerable classes in a software system by establishing the system's weighted software network. The metrics are obtained from the nodes' attributes in the weighted software network. We design and implement a crawler tool to collect all public security vulnerabilities in Mozilla Firefox. Based on these data, the prediction model is trained and tested. The results show that the VPM based on the weighted software network performs well in accuracy, precision, and recall. Compared to other studies, the prediction performance is greatly improved in precision and recall.

2019-11-04
Abani, Noor, Braun, Torsten, Gerla, Mario.  2018.  Betweenness Centrality and Cache Privacy in Information-Centric Networks. Proceedings of the 5th ACM Conference on Information-Centric Networking. :106-116.

In-network caching is a feature shared by all proposed Information Centric Networking (ICN) architectures as it is critical to achieving a more efficient retrieval of content. However, the default "cache everything everywhere" universal caching scheme has caused the emergence of several privacy threats. Timing attacks are one such privacy breach where attackers can probe caches and use timing analysis of data retrievals to identify if content was retrieved from the data source or from the cache, the latter case inferring that this content was requested recently. We have previously proposed a betweenness centrality based caching strategy to mitigate such attacks by increasing user anonymity. We demonstrated its efficacy in a transit-stub topology. In this paper, we further investigate the effect of betweenness centrality based caching on cache privacy and user anonymity in more general synthetic and real world Internet topologies. It was also shown that an attacker with access to multiple compromised routers can locate and track a mobile user by carrying out multiple timing analysis attacks from various parts of the network. We extend our privacy evaluation to a scenario with mobile users and show that a betweenness centrality based caching policy provides a mobile user with path privacy by increasing an attacker's difficulty in locating a moving user or identifying his/her route.
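
The placement rule behind a betweenness-centrality-based caching strategy can be sketched in a few lines: along each delivery path, cache the content only at the on-path router with the highest betweenness. The topology, request, and cache handling below are toy assumptions, and the privacy analysis itself is not reproduced.

```python
import networkx as nx

# Hedged sketch of betweenness-centrality-based cache placement in an ICN:
# cache only at the highest-betweenness node on the delivery path.
G = nx.barabasi_albert_graph(30, 2, seed=1)     # stand-in topology
bc = nx.betweenness_centrality(G)
caches = {v: set() for v in G.nodes}

def deliver(source: int, consumer: int, content: str) -> None:
    path = nx.shortest_path(G, source, consumer)
    best = max(path, key=bc.get)                # most central on-path router
    caches[best].add(content)

deliver(source=0, consumer=17, content="/videos/clip1")
print([v for v, c in caches.items() if c])      # node(s) now caching the content
```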

Gunawan, Dedi, Mambo, Masahiro.  2018.  Set-valued Data Anonymization Maintaining Data Utility and Data Property. Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication. :88:1–88:8.

Set-valued database publication has been attracting much attention due to its benefit for various applications like recommendation systems and marketing analysis. However, publishing the original database directly is risky, since an unauthorized party may violate individual privacy by associating and analyzing relations between individuals and sets of items in the published database, which is known as an identity linkage attack. Generally, an attack is performed based on the attacker's background knowledge obtained by prior investigation, and such adversary knowledge should be taken into account in the data anonymization. Various data anonymization schemes have been proposed to prevent the identity linkage attack. However, in existing data anonymization schemes, either data utility or data properties are significantly reduced after excessive database modification, and consequently data recipients come to distrust the released database. In this paper, we propose a new data anonymization scheme, called sibling suppression, which causes minimal data utility loss and maintains data properties like database size and the number of records. The scheme uses multiple sets of adversary knowledge, and items in a category of adversary knowledge are replaced by other items in the category. Several experiments with a real dataset show that our method can preserve data utility with minimal loss and maintain data properties identical to those of the original database.

2019-10-30
Ghose, Nirnimesh, Lazos, Loukas, Li, Ming.  2018.  Secure Device Bootstrapping Without Secrets Resistant to Signal Manipulation Attacks. 2018 IEEE Symposium on Security and Privacy (SP). :819-835.
In this paper, we address the fundamental problem of securely bootstrapping a group of wireless devices to a hub, when none of the devices share prior associations (secrets) with the hub or between them. This scenario aligns with the secure deployment of body area networks, IoT, medical devices, industrial automation sensors, autonomous vehicles, and others. We develop VERSE, a physical-layer group message integrity verification primitive that effectively detects advanced wireless signal manipulations that can be used to launch man-in-the-middle (MitM) attacks over wireless. Without using shared secrets to establish authenticated channels, such attacks are notoriously difficult to thwart and can undermine the authentication and key establishment processes. VERSE exploits the existence of multiple devices to verify the integrity of the messages exchanged within the group. We then use VERSE to build a bootstrapping protocol, which securely introduces new devices to the network. Compared to the state-of-the-art, VERSE achieves in-band message integrity verification during secure pairing using only the RF modality without relying on out-of-band channels or extensive human involvement. It guarantees security even when the adversary is capable of fully controlling the wireless channel by annihilating and injecting wireless signals. We study the limits of such advanced wireless attacks and prove that the introduction of multiple legitimate devices can be leveraged to increase the security of the pairing process. We validate our claims via theoretical analysis and extensive experimentations on the USRP platform. We further discuss various implementation aspects such as the effect of time synchronization between devices and the effects of multipath and interference. Note that the elimination of shared secrets, default passwords, and public key infrastructures effectively addresses the related key management challenges when these are considered at scale.
2019-10-28
Withers, Alex, Bockelman, Brian, Weitzel, Derek, Brown, Duncan, Gaynor, Jeff, Basney, Jim, Tannenbaum, Todd, Miller, Zach.  2018.  SciTokens: Capability-Based Secure Access to Remote Scientific Data. Proceedings of the Practice and Experience on Advanced Research Computing. :24:1–24:8.
The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
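
The capability-token idea can be illustrated with a JWT check: verify the token and test whether its scope authorizes a requested read. The sketch below uses PyJWT with a symmetric demo key; the claim names, issuer URL, and scope-matching rule are assumptions, and real deployments use the issuer's published asymmetric keys.

```python
import jwt  # PyJWT

# Hedged sketch of capability-style authorization with a bearer JWT, in the
# spirit of SciTokens: verify signature and issuer, then check the scope.
# The HS256 demo key, issuer URL, and claim layout are illustrative only.
SECRET = "demo-key"
token = jwt.encode({"iss": "https://demo.scitokens.org",
                    "scope": "read:/ligo/frames"}, SECRET, algorithm="HS256")

def authorized(tok: str, needed: str) -> bool:
    claims = jwt.decode(tok, SECRET, algorithms=["HS256"],
                        issuer="https://demo.scitokens.org")
    granted = claims.get("scope", "").split()
    return any(needed == s or needed.startswith(s + "/") for s in granted)

print(authorized(token, "read:/ligo/frames/O3"))   # True: within the granted prefix
```
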
Ocaña, Kary, Galheigo, Marcelo, Osthoff, Carla, Gadelha, Luiz, Gomes, Antônio Tadeu A., De Oliveira, Daniel, Porto, Fabio, Vasconcelos, Ana Tereza.  2019.  Towards a Science Gateway for Bioinformatics: Experiences in the Brazilian System of High Performance Computing. 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :638–647.

Science gateways open up the possibility of reproducible science, as they integrate reusable techniques, data and workflow management systems, security mechanisms, and high performance computing (HPC). We introduce BioinfoPortal, a science gateway that integrates a suite of different bioinformatics applications using HPC and data management resources provided by the Brazilian National HPC System (SINAPAD). BioinfoPortal follows the Software as a Service (SaaS) model and the web server is freely available for academic use. The goal of this paper is to describe the science gateway and its usage, addressing the challenges of designing a multiuser computational platform for parallel/distributed executions of large-scale bioinformatics applications using the Brazilian HPC resources. We also present a study of the performance and scalability of some bioinformatics applications executed in the HPC environments, and perform machine learning analyses to predict HPC allocation/usage features that allow the bioinformatics applications to perform better via BioinfoPortal.

2019-10-23
Karmaker Santu, Shubhra Kanti, Bindschadler, Vincent, Zhai, ChengXiang, Gunter, Carl A..  2018.  NRF: A Naive Re-Identification Framework. Proceedings of the 2018 Workshop on Privacy in the Electronic Society. :121-132.

The promise of big data relies on the release and aggregation of data sets. When these data sets contain sensitive information about individuals, it has been scalable and convenient to protect the privacy of these individuals by de-identification. However, studies show that the combination of de-identified data sets with other data sets risks re-identification of some records. Some studies have shown how to measure this risk in specific contexts where certain types of public data sets (such as voter rolls) are assumed to be available to attackers. To the extent that it can be accomplished, such analyses enable the threat of compromises to be balanced against the benefits of sharing data. For example, a study that might save lives by enabling medical research may be justified in light of a sufficiently low probability of compromise from sharing de-identified data. In this paper, we introduce a general probabilistic re-identification framework that can be instantiated in specific contexts to estimate the probability of compromises based on explicit assumptions. We further propose a baseline of such assumptions that enables a first-cut estimate of risk for practical case studies. We refer to the framework with these assumptions as the Naive Re-identification Framework (NRF). As a case study, we show how we can apply NRF to analyze and quantify the risk of re-identification arising from releasing de-identified medical data in the context of publicly-available social media data. The results of this case study show that NRF can be used to obtain meaningful quantification of the re-identification risk, compare the risk of different social media, and assess risks of combinations of various demographic attributes and medical conditions that individuals may voluntarily disclose on social media.
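
The baseline intuition can be sketched very simply: if an attacker links a de-identified record to public profiles via disclosed attributes, a first-cut estimate of the re-identification probability is one over the number of matching profiles. The toy data below illustrate only this baseline idea, not NRF's actual probabilistic model.

```python
# Hedged sketch of a naive baseline for re-identification risk: probability of
# a correct match is 1 / (number of public profiles sharing the disclosed
# quasi-identifiers). Toy data; not the NRF model itself.
public_profiles = [
    {"age": 34, "city": "Springfield", "condition_posted": "asthma"},
    {"age": 34, "city": "Springfield", "condition_posted": "asthma"},
    {"age": 34, "city": "Shelbyville", "condition_posted": "asthma"},
]

def naive_reid_probability(quasi_identifiers: dict) -> float:
    matches = sum(all(p.get(k) == v for k, v in quasi_identifiers.items())
                  for p in public_profiles)
    return 0.0 if matches == 0 else 1.0 / matches

print(naive_reid_probability({"age": 34, "city": "Springfield"}))  # 0.5
```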

2019-10-22
Khelf, Roumaissa, Ghoualmi-Zine, Nacira.  2018.  IPsec/Firewall Security Policy Analysis: A Survey. 2018 International Conference on Signal, Image, Vision and their Applications (SIVA). :1–7.
As reliance on technology increases, computer networks are growing bigger and larger, and so are threats and attacks. Network security has therefore become a major concern during the last decade. Network security requires a combination of hardware devices and software applications. Namely, firewalls and IPsec gateways are two technologies that provide network security protection and rely on security policies, which are maintained to ensure traffic control and network safety. Nevertheless, security policy misconfigurations and inconsistencies between the policies' rules produce errors and conflicts, which are often very hard to detect and consequently cause security holes and compromise the entire system's functionality. In this paper, we review the related approaches which have been proposed for security policy management, along with surveying the literature for conflict detection and resolution techniques. This work highlights the advantages and limitations of the proposed solutions for security policy verification in IPsec and firewalls, and gives an overall comparison and classification of the existing approaches.
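
One of the classic anomaly types such surveys cover, rule shadowing, is easy to sketch: a later rule is shadowed when an earlier rule with a different action already covers all of its traffic. The rule model below (source prefix, destination prefix, action) is deliberately simplified and the rules are made up.

```python
from ipaddress import ip_network

# Hedged sketch of shadowing detection between firewall rules. The rule model
# (name, source prefix, destination prefix, action) is a simplification.
rules = [
    ("r1", "10.0.0.0/8",     "0.0.0.0/0", "deny"),
    ("r2", "10.1.2.0/24",    "0.0.0.0/0", "permit"),   # shadowed by r1
    ("r3", "192.168.0.0/16", "0.0.0.0/0", "permit"),
]

def covers(earlier, later) -> bool:
    return (ip_network(later[1]).subnet_of(ip_network(earlier[1]))
            and ip_network(later[2]).subnet_of(ip_network(earlier[2])))

def shadowed(rule_list):
    for i, later in enumerate(rule_list):
        for earlier in rule_list[:i]:
            if covers(earlier, later) and earlier[3] != later[3]:
                yield later[0], earlier[0]

print(list(shadowed(rules)))   # [('r2', 'r1')]
```
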
2019-10-15
Liang, Danwei, An, Jian, Cheng, Jindong, Yang, He, Gui, Ruowei.  2018.  The Quality Control in Crowdsensing Based on Twice Consensuses of Blockchain. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers. :630–635.
In most crowdsensing systems, the quality of the collected data varies and is difficult to evaluate, while the existing crowdsensing quality control methods are mostly based on a central platform, which is not completely trusted in reality and results in fraud and other problems. To solve these problems, a novel crowdsensing quality control model is proposed in this paper. First, the idea of blockchain is introduced into this model. A credit-based verifier selection mechanism and twice consensuses are proposed to realize the non-repudiation and non-tampering of information in crowdsensing. Then, quality grading evaluation (QGE) is put forward, in which the method of truth discovery and the idea of fuzzy theory are combined to evaluate the quality of sensing data, and a garbled circuit is used to ensure that the evaluation criteria cannot be leaked. Finally, experiments show that our model is feasible in terms of time and effective in quality evaluation.
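
The truth-discovery step inside the quality grading can be sketched as a small iterative loop: estimate each task's "true" value as a weighted average of the reports, then re-weight each worker by how far its reports are from the estimates. The data below are synthetic, and the blockchain consensus, verifier selection, fuzzy grading, and garbled-circuit components of the model are not reproduced.

```python
import numpy as np

# Hedged sketch of iterative truth discovery for sensing-data quality:
# (1) weighted average per task, (2) re-weight workers by their error.
# Synthetic reports; not the paper's full QGE model.
reports = np.array([[20.1, 35.2, 50.0],    # worker 0's readings for 3 tasks
                    [20.3, 35.0, 49.8],    # worker 1
                    [25.0, 30.0, 60.0]])   # worker 2 (low quality)
weights = np.ones(len(reports)) / len(reports)

for _ in range(10):
    truths = weights @ reports / weights.sum()           # per-task estimates
    errors = np.linalg.norm(reports - truths, axis=1) + 1e-9
    weights = -np.log(errors / errors.sum())             # smaller error -> larger weight
    weights /= weights.sum()

print(np.round(truths, 2), np.round(weights, 2))         # worker 2 gets the lowest weight
```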