Biblio
Confidentiality Breach Through Acoustic Side-Channel in Cyber-Physical Additive Manufacturing Systems. ACM Trans. Cyber-Phys. Syst. 2:3:1–3:25.
2017. In cyber-physical systems, due to the tight integration of the computational, communication, and physical components, most of the information in the cyber domain manifests in terms of physical actions (such as motion, temperature change, etc.). This makes the system prone to physical-to-cyber domain attacks that affect confidentiality. Physical actions are governed by energy flows, which may be observed. Some of these observable energy flows unintentionally leak information about the cyber domain and hence are known as side-channels. Side-channels such as acoustic, thermal, and power allow attackers to acquire information without actually exploiting vulnerabilities in the algorithms implemented in the system. As a case study, we take a cyber-physical additive manufacturing system (a fused deposition modeling-based three-dimensional (3D) printer) to demonstrate how the acoustic side-channel can be used to breach the confidentiality of the system. In 3D printers, the geometry, process, and machine information constitute the intellectual property, which is stored in the cyber domain (G-code). We have designed an attack model that consists of digital signal processing, machine-learning algorithms, and context-based post-processing to steal the intellectual property in the form of geometry details by reconstructing the G-code and thus the test objects. We have successfully reconstructed various test objects with an average axis prediction accuracy of 86% and an average length prediction error of 11.11%.
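The pipeline summarized here combines signal processing and machine learning. As a minimal hedged sketch of that general idea (not the authors' exact design), one might segment printer audio into frames, extract spectral features, and classify the axis of motion; the window length, feature choice, and classifier below are illustrative assumptions.

```python
# Hedged sketch: classify 3D-printer axis motion from audio frames.
# Window length, feature choice, and classifier are illustrative assumptions,
# not the paper's exact pipeline.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

def frame_features(audio, fs, frame_len=0.1):
    """Split audio into fixed-length frames and use the mean power spectrum
    of each frame as its feature vector."""
    samples = int(frame_len * fs)
    feats = []
    for start in range(0, len(audio) - samples, samples):
        frame = audio[start:start + samples]
        _, _, sxx = spectrogram(frame, fs=fs, nperseg=256)
        feats.append(sxx.mean(axis=1))  # average power per frequency bin
    return np.array(feats)

# Training uses audio recorded while printing known G-code (labels = axis moved):
# clf = RandomForestClassifier(n_estimators=200)
# clf.fit(frame_features(train_audio, fs=44100), train_axis_labels)
# predicted_axes = clf.predict(frame_features(victim_audio, fs=44100))
```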
On the Content Security Policy Violations Due to the Same-Origin Policy. Proceedings of the 26th International Conference on World Wide Web. :877–886.
2017. Modern browsers implement different security policies such as the Content Security Policy (CSP), a mechanism designed to mitigate popular web vulnerabilities, and the Same Origin Policy (SOP), a mechanism that governs interactions between the resources of web pages. In this work, we describe how CSP may be violated due to the SOP when a page contains an embedded iframe from the same origin. We analyse 1 million pages from the 10,000 top Alexa sites and report that at least 31.1% of current CSP-enabled pages are potentially vulnerable to CSP violations. Further considering real-world situations where those pages are involved in same-origin nested browsing contexts, we found that in at least 23.5% of the cases, CSP violations are possible. During our study, we also identified a divergence among browser implementations in the enforcement of CSP in srcdoc sandboxed iframes, which actually reveals a problem in the CSP implementation of Gecko-based browsers. To ameliorate the problematic conflicts of the security mechanisms, we discuss measures to avoid CSP violations.
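To illustrate the configuration the paper warns about, the hedged Flask sketch below (routes and policy values are invented for illustration, not taken from the paper) serves a CSP-protected page that embeds a same-origin iframe whose response carries no CSP; because the SOP lets the two same-origin documents script each other, content running in the unprotected iframe is not constrained by the parent's policy.

```python
# Hedged sketch of the vulnerable setup: a CSP-protected parent page embedding
# a same-origin iframe served without CSP. Routes and policy are illustrative.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def parent():
    resp = make_response('<h1>Protected page</h1><iframe src="/widget"></iframe>')
    # The parent restricts scripts to its own origin.
    resp.headers["Content-Security-Policy"] = "script-src 'self'"
    return resp

@app.route("/widget")
def widget():
    # Same origin as the parent, but no CSP header: scripts injected here run
    # freely, and the SOP lets this frame reach back into the parent document.
    return make_response("<div>widget content</div>")

if __name__ == "__main__":
    app.run()
```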
Continuous Biometric Verification for Non-Repudiation of Remote Services. Proceedings of the 12th International Conference on Availability, Reliability and Security. :4:1–4:10.
2017. As our society massively relies on ICT, security services are becoming essential to protect the users and entities involved. Amongst such services, non-repudiation provides evidence of actions, protects against their denial, and helps solve disputes between parties. For example, it prevents denial of past behaviors, such as having sent or received messages. Notably, if the information flow is continuous, evidence should be produced for the entirety of the flow and not only at specific points. Further, non-repudiation should be guaranteed by mechanisms that do not reduce the usability of the system or application. To meet these challenges, in this paper, we propose two solutions for non-repudiation of remote services based on multi-biometric continuous authentication. We present an application scenario that discusses how users and service providers are protected with such solutions. We also discuss the technological readiness of biometrics for non-repudiation services: the outcome is that, under specific assumptions, it is actually ready.
A contribution to cyber-security of networked control systems: An event-based control approach. 2017 3rd International Conference on Event-Based Control, Communication and Signal Processing (EBCCSP). :1–7.
2017. In the present paper, a networked control system under both cyber and physical attacks is considered. An adapted formulation of the problem under physical attacks, data deception, and false data injection attacks is used for controller synthesis. Based on classical fault tolerant detection (FTD) tools, a residual generator for attack/fault detection based on observers is proposed. An event-triggered and Bilinear Matrix Inequality (BMI) implementation is proposed in order to achieve a novel and better security strategy. The purpose of this implementation is to reduce (limit) the total number of transmissions to only those instances when the networked control system (NCS) needs attention. It is important to note that the main contribution of this paper is to establish an adequate event-triggered and BMI-based methodology so that the particular structure of the mixed attacked/faulty system can be re-formulated within the classical FTD paradigm. Experimental results are given to illustrate the efficiency of the developed approach on a pilot three-tank system. The plant model is presented and the proposed control design is applied to the system.
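The residual generator mentioned here follows the standard observer-based FTD pattern. As a generic illustration (not the paper's exact synthesis, which adds event-triggering and BMI conditions), a Luenberger-type observer produces a residual that stays near zero in the attack-free case:

```latex
\begin{aligned}
\dot{\hat{x}}(t) &= A\hat{x}(t) + Bu(t) + L\bigl(y(t) - C\hat{x}(t)\bigr), \\
r(t) &= y(t) - C\hat{x}(t),
\end{aligned}
```

where $L$ is the observer gain; a fault or injected measurement data drives $r(t)$ away from zero, and an alarm is raised when $\|r(t)\|$ exceeds a chosen threshold.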
Control-Flow Hijacking: Are We Making Progress? Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :4–4.
2017. Memory corruption errors in C/C++ programs remain the most common source of security vulnerabilities in today's systems. Over the last 10+ years the security community developed several defenses [4]. Data Execution Prevention (DEP) protects against code injection – eradicating this attack vector. Yet, control-flow hijacking and code reuse remain challenging despite wide deployment of Address Space Layout Randomization (ASLR) and stack canaries. These defenses are probabilistic and rely on information hiding. The deployed defenses complicate attacks, yet control-flow hijack attacks (redirecting execution to a location that would not be reached in a benign execution) are still prevalent. Attacks reuse existing gadgets (short sequences of code), often leveraging information disclosures to learn the location of the desired gadgets. Strong defense mechanisms have not yet been widely deployed due to (i) the time it takes to roll out a security mechanism, (ii) incompatibility with specific features, and (iii) performance overhead. In the meantime, only a set of low-overhead but incomplete mitigations has been deployed in practice. Control-Flow Integrity (CFI) [1,2] and Code-Pointer Integrity (CPI) [3] are two promising upcoming defense mechanisms, protecting against control-flow hijacking. CFI guarantees that the runtime control flow follows the statically determined control-flow graph. An attacker may reuse any of the valid transitions at any control-flow transfer. We compare a broad range of CFI mechanisms using a unified nomenclature based on (i) a qualitative discussion of the conceptual security guarantees, (ii) a quantitative security evaluation, and (iii) an empirical evaluation of their performance in the same test environment. For each mechanism, we evaluate (i) protected types of control-flow transfers, (ii) the precision of the protection for forward and backward edges. For open-source compiler-based implementations, we additionally evaluate (iii) the generated equivalence classes and target sets, and (iv) the runtime performance. CPI on the other hand is a dynamic property that enforces selective memory safety through bounds checks for code pointers by separating code pointers from regular data.
Control-Flow Integrity: Precision, Security, and Performance. ACM Comput. Surv. 50:16:1–16:33.
2017. Memory corruption errors in C/C++ programs remain the most common source of security vulnerabilities in today’s systems. Control-flow hijacking attacks exploit memory corruption vulnerabilities to divert program execution away from the intended control flow. Researchers have spent more than a decade studying and refining defenses based on Control-Flow Integrity (CFI); this technique is now integrated into several production compilers. However, so far, no study has systematically compared the various proposed CFI mechanisms nor is there any protocol on how to compare such mechanisms. We compare a broad range of CFI mechanisms using a unified nomenclature based on (i) a qualitative discussion of the conceptual security guarantees, (ii) a quantitative security evaluation, and (iii) an empirical evaluation of their performance in the same test environment. For each mechanism, we evaluate (i) protected types of control-flow transfers and (ii) precision of the protection for forward and backward edges. For open-source, compiler-based implementations, we also evaluate (iii) generated equivalence classes and target sets and (iv) runtime performance.
CoRAL: Confined Recovery in Distributed Asynchronous Graph Processing. Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems. :223–236.
2017. Existing distributed asynchronous graph processing systems employ checkpointing to capture globally consistent snapshots and roll back all machines to the most recent checkpoint to recover from machine failures. In this paper we argue that recovery in distributed asynchronous graph processing does not require the entire execution state to be rolled back to a globally consistent state, due to the relaxed asynchronous execution semantics. We define the properties required in the recovered state for it to be usable for correct asynchronous processing and develop CoRAL, a lightweight checkpointing and recovery algorithm. First, this algorithm carries out confined recovery that only rolls back the graph execution states of the failed machines to effect recovery. Second, it relies upon lightweight checkpoints that capture locally consistent snapshots with a reduced peak network bandwidth requirement. Our experiments using real-world graphs show that our technique recovers from failures and finishes processing 1.5x to 3.2x faster compared to the traditional asynchronous checkpointing and recovery mechanism when failures impact 1 to 6 machines of a 16-machine cluster. Moreover, capturing locally consistent snapshots significantly reduces the intermittent high peak bandwidth usage required to save the snapshots – the average reduction in 99th percentile bandwidth ranges from 22% to 51% while 1 to 6 snapshot replicas are being maintained.
Coresets for Kernel Regression. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :645–654.
2017. Kernel regression is an essential and ubiquitous tool for non-parametric data analysis, particularly popular for time series and spatial data. However, the central operation, which is performed many times, evaluating a kernel on the data set, takes linear time. This is impractical for modern large data sets. In this paper we describe coresets for kernel regression: compressed data sets which can be used as a proxy for the original data and have provably bounded worst-case error. The size of the coresets is independent of the raw number of data points; rather, it depends only on the error guarantee and, in some cases, the size of the domain and the amount of smoothing. We evaluate our methods on very large time series and spatial data, and demonstrate that they incur negligible error, can be constructed extremely efficiently, and allow for great computational gains.
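For context, the kernel regression operation a coreset approximates is the Nadaraya-Watson estimator; a coreset replaces the full data set with a small weighted subset so that the estimate is preserved up to a bounded error. The notation below is a generic formulation, not necessarily the paper's exact guarantee:

```latex
\hat{f}(x) \;=\; \frac{\sum_{i=1}^{n} K(x, x_i)\, y_i}{\sum_{i=1}^{n} K(x, x_i)}
\qquad\text{vs.}\qquad
\hat{f}_S(x) \;=\; \frac{\sum_{(x_j, y_j) \in S} w_j\, K(x, x_j)\, y_j}{\sum_{(x_j, y_j) \in S} w_j\, K(x, x_j)},
```

with $|S| \ll n$ chosen so that $|\hat{f}_S(x) - \hat{f}(x)|$ stays within the error guarantee for all query points $x$; evaluation then costs $O(|S|)$ instead of $O(n)$.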
Corrective Model-Predictive Control in Large Electric Power Systems. IEEE Transactions on Power Systems.
2017. Enhanced control capabilities are required to coordinate the response of increasingly diverse controllable resources, including FACTS devices, energy storage, demand response and fast-acting generation. Model-predictive control (MPC) has shown great promise for accommodating these devices in a corrective control framework that exploits the thermal overload capability of transmission lines and limits detrimental effects of contingencies. This work expands upon earlier implementations by incorporating voltage magnitudes and reactive power into the system model utilized by MPC. These improvements provide a more accurate prediction of system behavior and enable more effective control decisions. Performance of this enhanced MPC strategy is demonstrated using a model of the Californian power system containing 4259 buses. Sparsity in modelling and control actions must be exploited for implementation on large networks. A method is developed for identifying the set of controls that is most effective for a given contingency. The proposed MPC corrective control algorithm fits naturally within energy management systems where it can provide feedback control or act as a guide for system operators by identifying beneficial control actions across a wide range of devices.
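The corrective MPC idea can be summarized by a generic receding-horizon problem; the simplified illustration below is not the paper's full model (which incorporates voltage magnitudes and reactive power):

```latex
\min_{u_0,\dots,u_{N-1}} \;\sum_{k=0}^{N-1} \|u_k\|^2
\quad \text{s.t.} \quad
x_{k+1} = f(x_k, u_k), \qquad
\bigl|P^{\mathrm{line}}_k\bigr| \le \bar{P}_k,
```

where the limits $\bar{P}_k$ may exceed the steady-state line rating for small $k$, encoding the short-term thermal overload capability; only the first control action $u_0$ is applied before the problem is re-solved with updated measurements.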
Crawling and cluster hidden web using crawler framework and fuzzy-KNN. 2017 5th International Conference on Cyber and IT Service Management (CITSM). :1–7.
2017. Today almost everyone uses the internet for daily activities, whether social, academic, work-related, or business-related. But only a few of us are aware that we generally access only a small part of the overall internet. The internet, or the world wide web, is divided into several levels, such as the surface web, the deep web, and the dark web. Accessing the deep or dark web is a dangerous thing. This research is conducted on web content and deep web content. For a faster and safer search, this research uses a crawler framework. From the search process, various kinds of data are obtained and stored in a database. A classification process is then applied to the database to determine the level of each website. The classification is done using the fuzzy-KNN method, which classifies the results of the crawler framework contained in the database. The crawler framework generates data in the form of URL addresses, page info, and other attributes. The crawled data are compared with predefined sample data. The fuzzy-KNN classification yields the web level of each record based on the values of the words specified in the sample data. From the experiments conducted on several test data sets, we found about 20% surface web, 7.5% Bergie web, 20% deep web, 22.5% charter web, and 30% dark web. The research was only done on some test data, so more data should be added in order to obtain better results. Better crawler frameworks can speed up crawling, especially at certain web levels, because not all crawler frameworks can work at a particular web level; the Tor browser can be used, but the crawler framework sometimes cannot work there.
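The fuzzy-KNN step described here assigns each crawled page a degree of membership in every web-level class rather than a single hard label. Below is a minimal sketch of the classic fuzzy k-nearest-neighbor rule; feature extraction from the crawled URL and page info is assumed to have happened already, and the class encoding is an illustrative assumption.

```python
# Hedged sketch of fuzzy-KNN membership assignment for crawled-page features.
# Feature extraction from URLs/page info is assumed to be done elsewhere.
import numpy as np

def fuzzy_knn(train_x, train_y, query, k=5, m=2, n_classes=5):
    """Return a membership degree in each class for one query vector."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nn = np.argsort(dists)[:k]
    # Inverse-distance weights (classic fuzzy-KNN weighting with fuzzifier m).
    w = 1.0 / np.maximum(dists[nn], 1e-12) ** (2.0 / (m - 1))
    memberships = np.zeros(n_classes)
    for idx, weight in zip(nn, w):
        memberships[train_y[idx]] += weight
    return memberships / memberships.sum()

# Example: 5 web levels (surface, Bergie, deep, charter, dark) encoded as 0..4.
# level_scores = fuzzy_knn(sample_features, sample_labels, crawled_page_features)
# predicted_level = level_scores.argmax()
```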
Cross-Level Monte Carlo Framework for System Vulnerability Evaluation Against Fault Attack. Proceedings of the 54th Annual Design Automation Conference 2017. :17:1–17:6.
2017. Fault attacks have become a serious threat to system security and need to be evaluated at the design stage. Existing methods usually ignore the intrinsic uncertainty in the attack process and suffer from low scalability. In this paper, we develop a general framework to evaluate system vulnerability against fault attack. A holistic model for fault injection is incorporated to capture the probabilistic nature of the attack process. Based on the probabilistic model, a security metric named the System Security Factor (SSF) is defined to measure the system vulnerability. In the framework, a Monte Carlo method is leveraged to enable a feasible evaluation of SSF for different systems, security policies, and attack techniques. We enhance the framework with a novel system pre-characterization procedure, based on which an importance sampling strategy is proposed. Experimental results on a commercial processor demonstrate that, compared to random sampling, a 2500X speedup is achieved with the proposed sampling strategy. Meanwhile, 3% of the registers are identified as contributing to more than 95% of the SSF. By hardening these registers, a 6.5X security improvement can be achieved with less than 2% area overhead.
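The speedup reported here comes from importance sampling over injected faults. A generic sketch of that estimator follows; the fault space, biasing weights, and success predicate are placeholders, not the paper's pre-characterization procedure.

```python
# Hedged sketch of importance-sampling estimation of a security factor
# (probability that an injected fault leads to a successful attack).
# The fault space, biased proposal, and success check are placeholders.
import random

def estimate_ssf(registers, weights, attack_succeeds, n_samples=10000):
    """Sample fault locations from a biased proposal q (proportional to
    `weights`) and reweight each outcome by p(x)/q(x) to keep the estimate
    unbiased, where p is the uniform fault distribution."""
    total_w = sum(weights)
    p_uniform = 1.0 / len(registers)
    est = 0.0
    for _ in range(n_samples):
        # Draw a register with probability proportional to its weight.
        r = random.choices(range(len(registers)), weights=weights, k=1)[0]
        q = weights[r] / total_w
        if attack_succeeds(registers[r]):
            est += p_uniform / q
    return est / n_samples
```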
Cross-site Scripting Attacks on Android Hybrid Applications. Proceedings of the 2017 International Conference on Cryptography, Security and Privacy. :56–61.
2017. Hybrid mobile applications are coded in both standard web languages and a native language. The inclusion of web technologies means that hybrid applications introduce more security risks than traditional web applications, since they offer more possible channels for injecting malicious code to gain much more powerful privileges. In this paper, cross-site scripting attacks specific to Android hybrid apps developed with the PhoneGap framework are investigated. We find that the XSS vulnerability in hybrid apps makes it possible for attackers to bypass the access control policies of WebView and WebKit and run malicious code in the victim's WebView. With the PhoneGap plugins, the malicious code can steal the user's private information and destroy the user's file system, which is more damaging than cookie stealing.
Cryptographic key management methods for mission-critical wireless networks. 2017 7th IEEE International Conference on Electronics Information and Emergency Communication (ICEIEC). :33–36.
2017. When a large-scale disaster strikes, it demands efficient communication and coordination among first responders to save lives and other community resources. Normally, traditional communication infrastructures such as landline phones or cellular networks are damaged and do not provide adequate communication services to first responders for exchanging emergency-related information. Wireless mesh networks are a promising alternative in such situations. The security requirements for emergency response communications include privacy, data integrity, authentication, access control, and availability. To build a secure communication system, usually the first attempt is to employ cryptographic keys. In mission-critical wireless mesh networks, a mesh router needs to maintain secure data communication with its neighboring mesh routers. Effective designs for fast pairwise key generation and rekeying for mesh routers are critical for emergency response and are essential to protect unicast traffic. In this paper, we present a security-enhanced session key generation and rekeying protocol, EHPFS (enhanced 4-way handshake with PFS support). It eliminates the DoS attack problem of the 4-way handshake in 802.11s. EHPFS provides additional support for perfect forward secrecy (PFS): even if a Pairwise Master Key (PMK) is exposed, the session key (PTK) will not be compromised. The performance and security analysis show that EHPFS is efficient.
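The perfect-forward-secrecy property claimed for EHPFS generally comes from mixing an ephemeral key agreement into the session-key derivation, so that exposing the long-term PMK alone does not reveal past PTKs. The sketch below illustrates that generic idea with X25519 and HKDF; it is not the EHPFS handshake itself, and the key labels are illustrative.

```python
# Hedged sketch of the generic PFS idea (ephemeral Diffie-Hellman mixed with a
# long-term master key), not the EHPFS handshake itself.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

pmk = b"\x00" * 32  # placeholder long-term Pairwise Master Key

# Each router generates a fresh ephemeral key pair per session.
a_eph = X25519PrivateKey.generate()
b_eph = X25519PrivateKey.generate()

# Both sides compute the same ephemeral shared secret.
shared = a_eph.exchange(b_eph.public_key())

# The session key (PTK) depends on both the PMK and the ephemeral secret, so a
# later PMK compromise does not reveal past session keys.
ptk = HKDF(algorithm=hashes.SHA256(), length=48, salt=None,
           info=b"ptk-derivation").derive(pmk + shared)
```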
Customized Privacy Preserving for Inherent Data and Latent Data. Personal Ubiquitous Comput. 21:43–54.
2017. The huge amount of sensory data collected from mobile devices offers great potential to provide more significant services based on user data extracted from sensor readings. However, releasing user data could also seriously threaten user privacy. It is possible to directly collect sensitive information from released user data without user permission. Furthermore, third-party users can also infer sensitive information contained in released data in a latent manner by utilizing data mining techniques. In this paper, we formally define these two types of threats as inherent data privacy and latent data privacy, and construct a data-sanitization strategy that can optimize the tradeoff between data utility and customized protection of both types of privacy. The key novel idea is that the developed strategy can combat powerful third-party users who have broad knowledge about users and launch optimal inference attacks. We show that our strategy does not much reduce the benefit brought by user data, while sensitive information can still be protected. To the best of our knowledge, this is the first work that preserves both inherent data privacy and latent data privacy.
Cyber Defense As a Complex Adaptive System: A Model-based Approach to Strategic Policy Design. Proceedings of the 2017 International Conference of The Computational Social Science Society of the Americas. :17:1–17:1.
2017. In a world of ever-increasing systems interdependence, effective cybersecurity policy design seems to be one of the most critically understudied elements of our national security strategy. Enterprise cyber technologies are often implemented without much regard to the interactions that occur between humans and the new technology. Furthermore, the interactions that occur between individuals can often have an impact on the newly employed technology as well. Without a rigorous, evidence-based approach to ground an employment strategy and elucidate the emergent organizational needs that will come with the fielding of new cyber capabilities, one is left to speculate on the impact that novel technologies will have on the aggregate functioning of the enterprise. In this paper, we will explore a scenario in which a hypothetical government agency applies a complexity science perspective, supported by agent-based modeling, to more fully understand the impacts of strategic policy decisions. We present a model to explore the socio-technical dynamics of these systems, discuss lessons using this platform, and suggest further research and development.
Cyber Threat Analysis Framework for the Wind Energy Based Power System. Proceedings of the 2017 Workshop on Cyber-Physical Systems Security and PrivaCy. :81–92.
2017. Wind energy is one of the major sources of renewable energy. Countries around the world are increasingly deploying large wind farms that can generate a significant amount of clean energy. A wind farm consists of many turbines, often spread across a large geographical area. Modern wind turbines are equipped with meteorological sensors. The wind farm control center monitors the turbine sensors and adjusts the power generation parameters for optimal power production. The turbine sensors are prone to cyberattacks, and with the growth of large wind farms and their share of power generation, it is crucial to analyze such potential cyber threats. In this paper, we present a formal framework to verify the impact of false data injection attacks on the wind farm meteorological sensor measurements. The framework formulates this verification as a maximization problem in which the adversary's goal is to maximize the wind farm's power production loss with its limited attack capability. Moreover, the adversary wants to remain stealthy to the wind farm's bad data detection mechanism while launching its cyberattack on the turbine sensors. We evaluate the proposed framework for its threat analysis capability as well as its scalability by executing experiments on synthetic test cases.
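At a high level, the verification problem described here can be written as a constrained maximization; the generic form below illustrates that framing and is not the paper's exact constraint set:

```latex
\max_{a} \;\; P_{\mathrm{loss}}\bigl(z + a\bigr)
\quad \text{s.t.} \quad
\|a\|_0 \le k, \qquad
\mathrm{BDD}(z + a) = \text{pass},
```

where $z$ are the true meteorological sensor measurements, $a$ is the injected false data limited to $k$ compromised sensors, and the bad-data-detection constraint keeps the attack stealthy.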
Cybersecurity Innovation in Government: A Case Study of U.S. Pentagon's Vulnerability Reward Program. Proceedings of the 18th Annual International Conference on Digital Government Research. :64–73.
2017. The U.S. federal government and its agencies face increasingly sophisticated and persistent cyber threats and cyberattacks from black hat hackers who breach cybersecurity for malicious purposes or for personal gain. With the rise of malicious attacks that have caused untold financial damage and substantial reputational damage, private-sector high-tech firms such as Google, Microsoft, and Yahoo have adopted an innovative practice known as the vulnerability reward program (VRP), or bug bounty program, which crowdsources software bug detection from the cybersecurity community. In alignment with the 2016 U.S. Cybersecurity National Action Plan, the Department of Defense adopted a pilot VRP in 2016. This paper examines the Pentagon's VRP and explores how it may fit with the national cybersecurity policy and the need for new and enhanced cybersecurity capability development. Our case study results show the feasibility of government adoption and implementation of the innovative concept of a VRP to enhance the government's cybersecurity posture.
On Cybersecurity Loafing and Cybercomplacency. SIGMIS Database. 48:8–10.
2017. As we begin to publish more articles in the area of cybersecurity, a case in point being the fine set of security papers presented in this particular issue as well as the upcoming special issue on Advances in Behavioral Cybersecurity Research which is currently in the review phase, it comes to mind that there is an emerging rubric of interest to the research community involved in security. That rubric concerns itself with the increasingly odd and inexplicable degree of comfort that computer users appear to have while operating in an increasingly threat-rich online environment. In my own work, I have noticed over time that users are blissfully unconcerned about malware threats (Poston et al., 2005; Stafford, 2005; Stafford and Poston, 2010; Stafford and Urbaczewski, 2004). This often takes the avenue of "it can't happen to me," or, "that's just not likely," but the fact is, since I first started noticing this odd nonchalance it seems like it is only getting worse, generally speaking. Mind you, a computer user who has been exploited and suffered harm from it will be vigilant to the end of his or her days, but for those who have scraped by, "no worries," is the order of the day, it seems to me. This is problematic because the exploits that are abroad in the online world these days are a whole order of magnitude more harmful than those that were around when I first started studying the matter a decade ago. I would not have commented on the matter, having long since chalked it up to the oddities of civilian computing, so to speak, but an odd pattern I encountered when engaging in a research study with trained corporate users brought the matter back to the fore recently. I have been collecting neurocognitive data on user response to security threats, and while my primary interest was to see if skin conductance or pupillary dilation varied during exposure to computer threat scenarios, I noticed an odd pattern that commanded my attention and actually derailed my study for a while as I dug in to examine it.
A dark web story in-depth research and study conducted on the dark web based on forensic computing and security in Malaysia. 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI). :3049–3055.
2017. The following is research conducted on the Dark Web to study and identify the ins and outs of the dark web: what the dark web is all about, the various methods available to access the dark web, and more. The researchers have also included the steps and precautions taken before the dark web was opened. Apart from that, the findings and the website links/URLs are also included, along with a description of the sites. The primary usage of the dark web and some of the researchers' experiences have been further documented in this research paper.
Data poisoning attacks on factorization-based collaborative filtering. Neural Information Processing Systems (NIPS 2016).
2017. Recommendation and collaborative filtering systems are important in modern information and e-commerce applications. As these systems are becoming increasingly popular in the industry, their outputs could affect business decision making, introducing incentives for an adversarial party to compromise the availability or integrity of such systems. We introduce a data poisoning attack on collaborative filtering systems. We demonstrate how a powerful attacker with full knowledge of the learner can generate malicious data so as to maximize his/her malicious objectives, while at the same time mimicking normal user behavior to avoid being detected. While the complete knowledge assumption seems extreme, it enables a robust assessment of the vulnerability of collaborative filtering schemes to highly motivated attacks. We present efficient solutions for two popular factorization-based collaborative filtering algorithms: the alternating minimization formulation and the nuclear norm minimization method. Finally, we test the effectiveness of our proposed algorithms on real-world data and discuss potential defensive strategies.
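The attack described here is naturally expressed as a bilevel optimization, with the attacker choosing malicious ratings that the learner then factorizes. A generic formulation (the utility function and regularizer below are placeholders, not the paper's exact objective) is:

```latex
\max_{\tilde{M}} \;\; R\bigl(\hat{U}, \hat{V}\bigr)
\quad \text{s.t.} \quad
(\hat{U}, \hat{V}) \in \arg\min_{U, V}\;
\bigl\| P_{\Omega}\bigl([M; \tilde{M}] - UV^{\top}\bigr) \bigr\|_F^2
+ \lambda \bigl(\|U\|_F^2 + \|V\|_F^2\bigr),
```

where $M$ are the genuine ratings, $\tilde{M}$ the ratings contributed by fake users (constrained to resemble normal behavior), $P_{\Omega}$ keeps only observed entries, and $R$ is the attacker's utility, e.g. the induced error on targeted items.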
A Data-driven Process Recommender Framework. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :2111–2120.
2017. We present an approach for improving the performance of complex knowledge-based processes by providing data-driven step-by-step recommendations. Our framework uses the associations between similar historic process performances and contextual information to determine the prototypical way of enacting the process. We introduce a novel similarity metric for grouping traces into clusters that incorporates temporal information about activity performance and handles concurrent activities. Our data-driven recommender system selects the appropriate prototype performance of the process based on user-provided context attributes. Our approach for determining the prototypes discovers the commonly performed activities and their temporal relationships. We tested our system on data from three real-world medical processes and achieved recommendation accuracy up to an F1 score of 0.77 (compared to an F1 score of 0.37 using ZeroR) with 63.2% of recommended enactments being within the first five neighbors of the actual historic enactments in a set of 87 cases. Our framework works as an interactive visual analytic tool for process mining. This work shows the feasibility of a data-driven decision support system for complex knowledge-based processes.
DataShield: Configurable Data Confidentiality and Integrity. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :193–204.
2017. Applications written in C/C++ are prone to memory corruption, which allows attackers to extract secrets or gain control of the system. With the rise of strong control-flow hijacking defenses, non-control data attacks have become the dominant threat. As vulnerabilities like HeartBleed have shown, such attacks are equally devastating. Data Confidentiality and Integrity (DCI) is a low-overhead non-control-data protection mechanism for systems software. DCI augments the C/C++ programming languages with annotations, allowing the programmer to protect selected data types. The DCI compiler and runtime system prevent illegal reads (confidentiality) and writes (integrity) to instances of these types. The programmer selects types that contain security critical information such as passwords, cryptographic keys, or identification tokens. Protecting only this critical data greatly reduces performance overhead relative to complete memory safety. Our prototype implementation of DCI, DataShield, shows the applicability and efficiency of our approach. For SPEC CPU2006, the performance overhead is at most 16.34%. For our case studies, we instrumented mbedTLS, astar, and libquantum to show that our annotation approach is practical. The overhead of our SSL/TLS server is 35.7% with critical data structures protected at all times. Our security evaluation shows DataShield mitigates a recently discovered vulnerability in mbedTLS.
DataSynthesizer: Privacy-Preserving Synthetic Datasets. Proceedings of the 29th International Conference on Scientific and Statistical Database Management. :42:1–42:5.
2017. To facilitate collaboration over sensitive data, we present DataSynthesizer, a tool that takes a sensitive dataset as input and generates a structurally and statistically similar synthetic dataset with strong privacy guarantees. The data owners need not release their data, while potential collaborators can begin developing models and methods with some confidence that their results will work similarly on the real dataset. The distinguishing feature of DataSynthesizer is its usability — the data owner does not have to specify any parameters to start generating and sharing data safely and effectively. DataSynthesizer consists of three high-level modules — DataDescriber, DataGenerator and ModelInspector. The first, DataDescriber, investigates the data types, correlations and distributions of the attributes in the private dataset, and produces a data summary, adding noise to the distributions to preserve privacy. DataGenerator samples from the summary computed by DataDescriber and outputs synthetic data. ModelInspector shows an intuitive description of the data summary that was computed by DataDescriber, allowing the data owner to evaluate the accuracy of the summarization process and adjust any parameters, if desired. We describe DataSynthesizer and illustrate its use in an urban science context, where sharing sensitive, legally encumbered data between agencies and with outside collaborators is reported as the primary obstacle to data-driven governance. The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/DataSynthesizer.
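The privacy mechanism summarized here boils down to publishing noise-perturbed attribute distributions and sampling synthetic records from them. The sketch below shows that independent-attribute idea with Laplace noise as a generic illustration; it is not DataSynthesizer's actual API, and it ignores the tool's correlated-attribute mode.

```python
# Hedged sketch of the independent-attribute idea: per-column noisy histograms
# plus sampling. Generic illustration only, not DataSynthesizer's interface.
# Assumes categorical/discrete attributes.
import numpy as np
import pandas as pd

def noisy_histogram(col, epsilon=1.0):
    counts = col.value_counts()
    noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=len(counts))
    noisy = noisy.clip(lower=0)
    return noisy / noisy.sum()  # probability over observed categories

def synthesize(df, n_rows, epsilon=1.0):
    """Sample each attribute independently from its noise-perturbed histogram."""
    out = {}
    for name in df.columns:
        probs = noisy_histogram(df[name], epsilon)
        out[name] = np.random.choice(probs.index, size=n_rows, p=probs.values)
    return pd.DataFrame(out)

# synthetic = synthesize(sensitive_df, n_rows=len(sensitive_df))
```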
Deals with Integrating of Security Specifications During Software Design Phase Using MDA Approach. Proceedings of the Second International Conference on Internet of Things, Data and Cloud Computing. :196:1–196:7.
2017. There are many recent propositions treating Model Driven Architecture (MDA) approaches to perform and automate code generation from design models. To the best of our knowledge and research, most of these propositions have focused only on the functional aspect, allowing code generation without simultaneously considering the non-functional aspect needed to generate secure object-oriented software based on the MDA approach. In this context, we add further details to integrate the required security policies in the form of secure models. The system specification models are enhanced with security requirements at different abstraction levels through a set of model transformations. Enriching functional models with security constraints allows us to incorporate the security needs and to automate the generation of secure applications, together with their security infrastructure, using the MDA approach. We modify the MDA processes and the UML meta-model to better represent the security policies of an organization, updating existing software engineering processes to take the non-functional aspect into account along with the functional aspect. This work presents a new methodology based on the MDA approach and existing security technologies for integrating the proposed security requirements, which are obtained from security experts, during system design. Within this context, we have focused on essential elements of security, such as data encryption, message integrity, and access control, in order to express the importance of merging both the functional and non-functional aspects. We have chosen these properties to practically illustrate how to generate secure applications including their security policies. The source code is then obtained automatically from Platform Specific Models (PSM) by applying a set of model transformations and using a code generator designed for this mission. In addition, other security-related properties, such as availability, traceability, non-repudiation, and scalability, can be injected during the whole development process by following the same methodology; these properties will be treated in future work.
Deep Asymmetric Pairwise Hashing. Proceedings of the 2017 ACM on Multimedia Conference. :1522–1530.
2017. Recently, deep neural network-based hashing methods have greatly improved multimedia retrieval performance by simultaneously learning feature representations and binary hash functions. Inspired by the latest advance in the asymmetric hashing scheme, in this work, we propose a novel Deep Asymmetric Pairwise Hashing approach (DAPH) for supervised hashing. The core idea is that two deep convolutional models are jointly trained such that their output codes for a pair of images can well reveal the similarity indicated by their semantic labels. A pairwise loss is elaborately designed to preserve the pairwise similarities between images as well as incorporating the independence and balance hash code learning criteria. By taking advantage of the flexibility of asymmetric hash functions, we devise an efficient alternating algorithm to optimize the asymmetric deep hash functions and high-quality binary code jointly. Experiments on three image benchmarks show that DAPH achieves the state-of-the-art performance on large-scale image retrieval.
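In generic form, a pairwise objective of this kind pushes the inner product of the two networks' hash codes toward the semantic similarity label while adding independence and balance regularizers; the simplified version below is an illustration, not the paper's exact loss:

```latex
\min_{F, G}\;
\sum_{(i,j)} \Bigl( s_{ij} - \tfrac{1}{k}\, b_i^{\top} \tilde{b}_j \Bigr)^2
+ \alpha \,\Omega_{\mathrm{indep}}(B)
+ \beta \,\Omega_{\mathrm{balance}}(B),
\qquad
b_i = \operatorname{sgn}\bigl(F(x_i)\bigr),\;
\tilde{b}_j = \operatorname{sgn}\bigl(G(x_j)\bigr),
```

where $s_{ij} \in \{-1, +1\}$ encodes semantic similarity, $k$ is the code length, and $F$ and $G$ are the two asymmetric deep networks trained jointly.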