Biblio

Found 1163 results

Filters: First Letter Of Title is R
2019-09-26
Berrueta, Eduardo, Morato, Daniel, Magana, Eduardo, Izal, Mikel.  2018.  Ransomware Encrypted Your Files but You Restored Them from Network Traffic. 2018 2nd Cyber Security in Networking Conference (CSNet). :1-7.

In a scenario where user files are stored in a network shared volume, a single computer infected by ransomware can encrypt the whole set of shared files, with a large impact on user productivity. On the other hand, medium and large companies maintain hardware or software probes that monitor the traffic in critical network links, in order to evaluate service performance, detect security breaches, account for network or service usage, etc. In this paper we suggest using the monitoring capabilities of one of these tools to keep a trace of the traffic between the users and the file server. Once the ransomware is detected, the lost files can be recovered from the traffic trace, including any user modifications made after the last periodic backup snapshot. The paper explains the problems faced by the monitoring tool, which is neither the client nor the server of the file-sharing operations, and describes the data structures needed to process the actions of users who may be working simultaneously on the same file. A proof-of-concept software implementation successfully recovered the files encrypted by 18 different ransomware families.
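
As a concrete illustration of the recovery idea, the sketch below replays captured write operations to rebuild a file's last pre-encryption contents. It assumes the capture has already been parsed into per-file (offset, data) write records (e.g., from SMB2 WRITE requests); the parsing itself is not shown and the function name is hypothetical.

```python
def reconstruct_file(writes):
    """Replay (offset, bytes) write records in capture order.

    Later writes overwrite earlier ones at overlapping offsets, mirroring
    how the file server would have applied them.
    """
    content = bytearray()
    for offset, data in writes:
        end = offset + len(data)
        if end > len(content):
            content.extend(b"\x00" * (end - len(content)))  # grow sparsely
        content[offset:end] = data
    return bytes(content)

# Toy trace: two writes, the second partially overwriting the first.
trace = [(0, b"hello world"), (6, b"there")]
assert reconstruct_file(trace) == b"hello there"
```

A real tool must additionally handle interleaved writes from multiple clients to the same file, which is what the paper's data structures address.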

2020-10-26
Mutalemwa, Lilian C., Shin, Seokjoo.  2018.  Realizing Source Location Privacy in Wireless Sensor Networks Through Agent Node Routing. 2018 International Conference on Information and Communication Technology Convergence (ICTC). :1283–1285.
Wireless Sensor Networks (WSNs) are used in sensitive applications such as asset monitoring. Due to the sensitivity of information in these applications, it is important to ensure that the flow of data between sensor nodes is secure and does not expose any information about the source node or the monitored assets. This paper proposes a scheme to preserve source location privacy based on random routing techniques. To achieve high privacy, the proposed scheme randomly sends packets to the sink node through tactically positioned agent nodes. The positions of the agent nodes are designed to guarantee that successive packets are routed through highly random and perplexing routing paths compared to other routing schemes. Simulation results demonstrate that the proposed scheme provides a longer safety period and higher privacy against both patient and cautious adversaries.
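
The routing idea can be sketched in a few lines. In the toy simulation below, node coordinates, the greedy grid routing, and the agent placement are all invented for illustration; it is not the authors' protocol, but it shows how forwarding each packet through a randomly chosen agent node yields a different path per packet.

```python
import random

def greedy_hops(src, dst):
    """Hop-by-hop greedy geographic routing on an integer grid."""
    path, (x, y) = [src], src
    while (x, y) != dst:
        x += (dst[0] > x) - (dst[0] < x)
        y += (dst[1] > y) - (dst[1] < y)
        path.append((x, y))
    return path

source, sink = (0, 0), (20, 20)
agents = [(5, 18), (18, 4), (2, 10), (15, 15)]  # tactically placed agents

for pkt in range(3):
    agent = random.choice(agents)  # new random agent per packet
    route = greedy_hops(source, agent)[:-1] + greedy_hops(agent, sink)
    print(f"packet {pkt}: via agent {agent}, {len(route) - 1} hops")
```

Because each packet detours through a different agent, an adversary tracing traffic backward hop by hop sees no stable route converging on the source.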
2020-09-28
Zhang, Xueru, Khalili, Mohammad Mahdi, Liu, Mingyan.  2018.  Recycled ADMM: Improve Privacy and Accuracy with Less Computation in Distributed Algorithms. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :959–965.
The alternating direction method of multipliers (ADMM) is a powerful method for solving decentralized convex optimization problems. In distributed settings, each node performs computation with its local data, and the local results are exchanged among neighboring nodes in an iterative fashion. During this iterative process, leakage of data privacy arises and can accumulate significantly over many iterations, making it difficult to balance the privacy-utility tradeoff. In this study we propose Recycled ADMM (R-ADMM), where a linear approximation is applied at every even iteration, whose solution is calculated directly from the results of the preceding odd iteration. It turns out that under such a scheme, half of the updates incur no privacy loss and require much less computation than conventional ADMM. We obtain a sufficient condition for the convergence of R-ADMM and provide a privacy analysis based on objective perturbation.
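
For background, the sketch below shows the conventional global-consensus ADMM loop on a toy decentralized averaging problem; R-ADMM's contribution (replacing every even iteration's local solve with a cheap linear approximation built from the previous odd iteration) is only indicated in a comment, not implemented.

```python
import numpy as np

rho = 1.0
a = np.array([1.0, 4.0, 7.0, 10.0])   # each node's private datum
n = len(a)
x = np.zeros(n)                        # local primal variables
u = np.zeros(n)                        # scaled dual variables
z = 0.0                                # global consensus variable

for k in range(50):
    # Local step: x_i = argmin (1/2)(x - a_i)^2 + (rho/2)(x - z + u_i)^2.
    # In R-ADMM, every even k would reuse the odd iteration's results
    # via a linear approximation instead of re-solving this subproblem.
    x = (a + rho * (z - u)) / (1.0 + rho)
    z = np.mean(x + u)                 # aggregation (consensus) step
    u = u + x - z                      # dual update

print(z, np.mean(a))                   # z converges to the average, 5.5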
2019-02-14
Chen, B., Lu, Z., Zhou, H..  2018.  Reliability Assessment of Distribution Network Considering Cyber Attacks. 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2). :1-6.

With the rapid development of the smart grid, a large number of intelligent sensors and meters have been introduced into distribution networks, which will inevitably increase the integration of physical and cyber networks and bring potential security threats to system operation. In this paper, the functions of the information system in a distribution network are described for cyber attacks appearing at the intelligent electronic devices (IEDs) or at the distribution main station. The effects on the distribution network under normal operating conditions and during the fault recovery process are analyzed, and a reliability assessment model of the distribution network considering cyber attacks is constructed. Finally, the IEEE 33-bus distribution system is taken as a test system to present the evaluation process based on the proposed model.

2019-02-08
Zou, Z., Wang, D., Yang, H., Hou, Y., Yang, Y., Xu, W..  2018.  Research on Risk Assessment Technology of Industrial Control System Based on Attack Graph. 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :2420-2423.

In order to evaluate network security risks and implement effective defenses in industrial control systems, a risk assessment method for industrial control systems based on attack graphs is proposed. The concept of network security elements is used to translate network attacks into network-state migration problems and to build an attack graph model of the industrial control network. To address the subjectivity of evaluations based on expert experience, an atomic attack probability assignment method and the CVSS evaluation system are introduced to evaluate the security status of the industrial control system. Finally, a case study is performed on the centralized control system of a thermal power plant. The experimental results show that the method can comprehensively analyze the potential safety hazards in an industrial control system and provide a basis for security management personnel to take effective defense measures.
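
A toy sketch of the propagation step follows; the topology, the CVSS scores, and the naive score-to-probability mapping are all invented for illustration, not taken from the paper.

```python
# Derive an atomic attack probability from each exploit's CVSS base score,
# then propagate along attack-graph paths, taking the product of step
# probabilities and reporting the most likely path to the target.

cvss = {                      # edge -> CVSS base score (0-10), illustrative
    ("internet", "hmi"): 9.8,
    ("hmi", "plc"): 7.5,
    ("internet", "historian"): 6.1,
    ("historian", "plc"): 8.2,
}
graph = {}
for (src, dst), score in cvss.items():
    graph.setdefault(src, []).append((dst, score / 10.0))  # naive mapping

def most_likely_path(node, target, prob=1.0, path=()):
    path = path + (node,)
    if node == target:
        return prob, path
    best = (0.0, path)
    for nxt, p in graph.get(node, []):
        if nxt not in path:                 # avoid cycles
            best = max(best, most_likely_path(nxt, target, prob * p, path))
    return best

prob, path = most_likely_path("internet", "plc")
print(f"most likely attack path {' -> '.join(path)}, probability {prob:.3f}")
```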

2020-11-02
Xiong, Wenjie, Shan, Chun, Sun, Zhaoliang, Meng, Qinglei.  2018.  Real-time Processing and Storage of Multimedia Data with Content Delivery Network in Vehicle Monitoring System. 2018 6th International Conference on Wireless Networks and Mobile Communications (WINCOM). :1—4.

With the rapid development of the Internet of Vehicles, the huge volume of multimedia data is becoming a hidden problem for the Internet of Things. It is therefore necessary to process and store these data in real time as a form of big data curation. In this paper, a method for real-time processing and storage based on a content delivery network (CDN) in a vehicle monitoring system is proposed. The MPEG-DASH standard is used to process the multimedia data by dividing them into MPD files and media segments. A real-time vehicle monitoring system based on the introduced method is designed and implemented.

2019-02-08
Mertoguno, S., Craven, R., Koller, D., Mickelson, M..  2018.  Reducing Attack Surface via Executable Transformation. 2018 IEEE Cybersecurity Development (SecDev). :138-138.

Modern software development and deployment practices encourage complexity and bloat while unintentionally sacrificing efficiency and security. A major driver of this is the overwhelming emphasis on programmers' productivity. The constant demands to speed up development while reducing costs have forced a series of individual decisions and approaches throughout software engineering history that have led to this point. The current state of the practice in the field is a patchwork of architectures and frameworks, packed full of features in order to appeal to the greatest number of people, cover obscure use cases, maximize code reuse, and minimize developer effort. The Office of Naval Research (ONR) Total Platform Cyber Protection (TPCP) program seeks to de-bloat software binaries late in the life-cycle, with little or no access to the source code or the development process.

2020-10-16
Liu, Liping, Piao, Chunhui, Jiang, Xuehong, Zheng, Lijuan.  2018.  Research on Governmental Data Sharing Based on Local Differential Privacy Approach. 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE). :39—45.

With the construction and implementation of the government information resources sharing mechanism, the protection of citizens' privacy has become a vital issue for government departments and the public. This paper discusses the risk of citizens' privacy disclosure related to data sharing among government departments, and analyzes the current major privacy protection models for data sharing. Aiming at the issues of low efficiency and low reliability in existing e-government applications, a statistical data sharing framework among governmental departments based on local differential privacy and blockchain is established, and its applicability and advantages are illustrated through example analysis. The characteristics of the private blockchain enhance the security, credibility and responsiveness of information sharing between departments. Local differential privacy provides better usability and security for sharing statistics. It not only keeps statistics available, but also protects the privacy of citizens.
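
For context, the sketch below shows randomized response, the classic local-differential-privacy primitive for one-bit statistics; it illustrates the general mechanism class the paper relies on, not necessarily the paper's exact perturbation.

```python
import math, random

def perturb(bit, epsilon):
    """Report the true bit with probability e^eps / (1 + e^eps)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p else 1 - bit

def estimate(reports, epsilon):
    """Debias the aggregated noisy reports into a population proportion."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

random.seed(0)
truth = [1] * 3000 + [0] * 7000           # true proportion: 0.30
noisy = [perturb(b, epsilon=1.0) for b in truth]
print(estimate(noisy, epsilon=1.0))       # close to 0.30
```

Each citizen's bit is flipped locally before it ever leaves their record, so the collecting department never sees raw values, yet the aggregate statistic remains usable.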

2019-08-05
Ma, S., Zeng, S., Guo, J..  2018.  Research on Trust Degree Model of Fault Alarms Based on Neural Network. 2018 12th International Conference on Reliability, Maintainability, and Safety (ICRMS). :73-77.

False alarms and misses are the two general kinds of alarm errors, and both can decrease an operator's trust in the alarm system. Specifically, there are two different forms of trust in such systems, represented in this research by two kinds of responses to alarms: one is compliance and the other is reliance. Besides false alarms and misses, the two responses are differentially affected by properties of the alarm system, situational factors, and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies were identified through database searches using the keyword "compliance and reliance", without restriction on year of publication, up to December 2017. Six relevant studies and fifty-two sets of key data were obtained as the data base of this research. Furthermore, a neural network is adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The result will be of great significance for further study of the influence of human decision making on the overall fault detection rate and false alarm rate of the human-machine system.
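
The modeling idea can be illustrated with a small feedforward network; the architecture, the synthetic data, and the qualitative effect encoded in the targets below are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

# Inputs: [false_alarm_rate, miss_rate]; outputs: [compliance, reliance].
# Synthetic targets encode the known qualitative effects: false alarms
# erode compliance, misses erode reliance.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))
Y = np.c_[1 - 0.8 * X[:, 0], 1 - 0.8 * X[:, 1]]

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)  # hidden layer
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)  # output layer

lr = 0.1
for _ in range(2000):                              # plain gradient descent
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    G = 2 * (P - Y) / len(X)                       # d(MSE)/dP
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)                 # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

probe = np.array([[0.5, 0.1]])                     # many false alarms, few misses
print(np.tanh(probe @ W1 + b1) @ W2 + b2)          # compliance < reliance
```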

2020-10-16
Tungela, Nomawethu, Mutudi, Maria, Iyamu, Tiko.  2018.  The Roles of E-Government in Healthcare from the Perspective of Structuration Theory. 2018 Open Innovations Conference (OI). :332—338.

The e-government concept and healthcare have usually been studied separately. Even when and where e-government and healthcare systems were combined in a study, the roles of e-government in healthcare were not examined. As a result, the complementarity of the systems poses potential challenges. The interpretive approach was applied in this study. Existing materials in the areas of healthcare and e-government were used as data from a qualitative-method viewpoint. The dimension of change from the perspective of structuration theory was employed to guide the data analysis. From the analysis, six factors were found to be the main roles of e-government in the implementation and application of e-health in the delivery of healthcare services. An understanding of the roles of e-government promotes complementarity, which enhances healthcare service delivery to the community.

2020-07-20
Guelton, Serge, Guinet, Adrien, Brunet, Pierrick, Martinez, Juan Manuel, Dagnat, Fabien, Szlifierski, Nicolas.  2018.  [Research Paper] Combining Obfuscation and Optimizations in the Real World. 2018 IEEE 18th International Working Conference on Source Code Analysis and Manipulation (SCAM). :24–33.
Code obfuscation is the de facto standard for protecting intellectual property when delivering code in an unmanaged environment. It relies on additive layers of code tangling techniques, white-box encryption calls, and platform- or tool-specific countermeasures to make it harder for a reverse engineer to access critical pieces of data or to understand core algorithms. The literature provides plenty of different obfuscation techniques that can be used at compile time to transform data or control flow in order to provide some kind of protection against different reverse engineering scenarios. Scheduling code transformations to optimize a given metric is known as the pass scheduling problem, a problem known to be NP-hard but solved in practice using hard-coded sequences that are generally satisfactory. Adding code obfuscation to the problem introduces two new dimensions. First, as a code obfuscator needs to find a balance between obfuscation and performance, pass scheduling becomes a multi-criteria optimization problem. Second, obfuscation passes transform their inputs in unconventional ways, which means some pass combinations may not be desirable or even valid. This paper highlights several issues met when blindly chaining different kinds of obfuscation and optimization passes, emphasizing the need for a formal model to combine them. It proposes a non-intrusive formalism that leverages sequential pass management techniques. The model is validated on real-world scenarios gathered during the development of an industrial-strength obfuscator on top of the LLVM compiler infrastructure.
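
The pass-scheduling tension can be made concrete with a toy search over valid pass sequences; the pass names, conflict constraints, and scores below are invented, and the paper's actual formalism is richer.

```python
from itertools import permutations

passes = ["flatten_cfg", "opaque_predicates", "inline", "dead_code_elim"]
forbidden_before = {  # (a, b): running a before b is invalid
    ("opaque_predicates", "dead_code_elim"),  # DCE would strip the predicates
    ("flatten_cfg", "inline"),                # inlining undoes the flattening
}
obf = {"flatten_cfg": 5, "opaque_predicates": 4, "inline": 0, "dead_code_elim": 0}
cost = {"flatten_cfg": 3, "opaque_predicates": 2, "inline": -1, "dead_code_elim": -2}

def valid(seq):
    return not any(a in seq and b in seq and seq.index(a) < seq.index(b)
                   for a, b in forbidden_before)

def score(seq):  # multi-criteria: reward obfuscation, penalize runtime cost
    return sum(obf[p] for p in seq) - 0.5 * sum(cost[p] for p in seq)

candidates = [s for r in range(1, len(passes) + 1)
              for s in permutations(passes, r) if valid(s)]
best = max(candidates, key=score)
print("best valid schedule:", " -> ".join(best))
```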
2018-08-23
Zhang, Kai, Liu, Chuanren, Zhang, Jie, Xiong, Hui, Xing, Eric, Ye, Jieping.  2017.  Randomization or Condensation? Linear-Cost Matrix Sketching via Cascaded Compression Sampling. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :615–623.
Matrix sketching aims to find compact representations of a matrix while simultaneously preserving most of its properties, and is a fundamental building block in modern scientific computing. Randomized algorithms represent the state of the art and have attracted huge interest from the fields of machine learning, data mining, and theoretical computer science. However, they still require the use of the entire input matrix to produce the desired factorizations, which can be a major computational and memory bottleneck in truly large problems. In this paper, we uncover an interesting theoretical connection between matrix low-rank decomposition and lossy signal compression, based on which a cascaded compression sampling framework is devised to approximate an m-by-n matrix in only O(m+n) time and space. Indeed, the proposed method accesses only a small number of matrix rows and columns, which significantly improves the memory footprint. Meanwhile, by sequentially teaming two rounds of approximation procedures and upgrading the sampling strategy from a uniform probability to more sophisticated, encoding-oriented sampling, significant algorithmic boosting is achieved to uncover more granular structures in the data. Empirical results on a wide spectrum of real-world, large-scale matrices show that, while taking only linear time and space, the accuracy of our method rivals that of state-of-the-art randomized algorithms consuming a quadratic, O(mn), amount of resources.
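
The flavor of linear-cost sketching can be seen in the classic pseudo-skeleton (CUR) baseline below, which reads only sampled rows and columns of the matrix; this is the simple uniform-sampling baseline, not the paper's cascaded compression sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 500, 400, 10
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))   # rank-k test matrix

c = r = 30                                              # samples >> rank k
cols = rng.choice(n, c, replace=False)
rows = rng.choice(m, r, replace=False)

C = A[:, cols]                # m x c  (only c column reads)
R = A[rows, :]                # r x n  (only r row reads)
W = A[np.ix_(rows, cols)]     # r x c  intersection block
A_hat = C @ np.linalg.pinv(W) @ R

err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(f"relative Frobenius error: {err:.2e}")           # ~0 for exactly rank-k A
```

Only O((m+n)c) entries of A are ever touched, which is the memory-footprint advantage the abstract refers to; the paper's encoding-oriented sampling improves on the uniform choice of rows and columns used here.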
2018-09-05
Wang, J., Shi, D., Li, Y., Chen, J., Duan, X..  2017.  Realistic measurement protection schemes against false data injection attacks on state estimators. 2017 IEEE Power & Energy Society General Meeting. :1–5.
False data injection attacks (FDIA) on state estimators are an imminent kind of cyber-physical security issue. Fortunately, it has been proved that if a set of measurements is strategically selected and protected, no FDIA will remain undetectable. In this paper, the metric Return on Investment (ROI) is introduced to evaluate the overall returns of alternative measurement protection schemes (MPS). By setting maximum total ROI as the optimization objective, the previously ignored cost-benefit issue is taken into account to derive a realistic MPS for power utilities. The optimization problem is transformed into the Steiner tree problem in graph theory, where a tree-pruning-based algorithm is used to reduce the computational complexity and find a quasi-optimal solution with acceptable approximations. The correctness and efficiency of the algorithm are verified by case studies.
2018-06-07
Wang, Ying, Li, Huawei, Li, Xiaowei.  2017.  Real-Time Meets Approximate Computing: An Elastic CNN Inference Accelerator with Adaptive Trade-off Between QoS and QoR. Proceedings of the 54th Annual Design Automation Conference 2017. :33:1–33:6.
Due to the recent progress in deep learning and neural acceleration architectures, specialized deep neural network or convolutional neural network (CNN) accelerators are expected to provide an energy-efficient solution for real-time vision/speech processing, recognition, and a wide spectrum of approximate computing applications. In addition to their wide applicability, we also found that the fascinating features of deterministic performance and high energy-efficiency make such deep learning (DL) accelerators ideal candidates as application-processor IPs in embedded SoCs concerned with real-time processing. However, unlike traditional accelerator designs, DL accelerators introduce a new design trade-off between real-time processing (QoS) and computation approximation (QoR) into embedded systems. This work proposes an elastic CNN acceleration architecture that automatically adapts to hard QoS constraints by exploiting the error-resilience of typical approximate computing workloads. For the first time, the proposed design, including network tuning-and-mapping software and reconfigurable accelerator hardware, aims to reconcile the design constraints of QoS and Quality of Result (QoR), which are respectively the key concerns in real-time and approximate computing. Experiments show that the proposed architecture enables the embedded system to work flexibly in an expanded operating space, significantly enhances its real-time ability, and maximizes the energy-efficiency of the system within the user-specified QoS-QoR constraint through self-reconfiguration.
Tymchuk, Yuriy, Ghafari, Mohammad, Nierstrasz, Oscar.  2017.  Renraku: The One Static Analysis Model to Rule Them All. Proceedings of the 12th Edition of the International Workshop on Smalltalk Technologies. :13:1–13:10.
Most static analyzers are monolithic applications that define their own ways to analyze source code and present the results. Therefore, aggregating multiple static analyzers into a single tool or integrating a new analyzer into existing tools requires a significant amount of effort. Over the last few years, we cultivated Renraku — a static analysis model that acts as a mediator between the static analyzers and the tools that present the reports. When used by both analysis and tool developers, this single quality model can reduce the cost both of introducing a new type of analysis to existing tools and of creating a tool that relies on existing analyzers.
2020-07-20
Marakis, Evangelos, van Harten, Wouter, Uppu, Ravitej, Vos, Willem L., Pinkse, Pepijn W. H..  2017.  Reproducibility of artificial multiple scattering media. 2017 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC). :1–1.
Summary form only given. Authentication of people or objects using physical keys is insecure against secret duplication. Physical unclonable functions (PUFs) are special physical keys that are assumed to be unclonable due to the large number of degrees of freedom in their manufacturing [1]. Opaque scattering media, such as white paint and teeth, comprise millions of nanoparticles in a random arrangement. Under coherent illumination, the multiple scattering from these nanoparticles gives rise to complex interference resulting in a speckle pattern. The speckle pattern is seemingly random but highly sensitive to the exact position and size of the nanoparticles in the given piece of opaque scattering medium [2], thereby realizing an ideal optical PUF. These optical PUFs have enabled applications such as quantum-secure authentication (QSA) and communication [3, 4].
2018-11-19
Xiaohe, Cao, Liuping, Feng, Peng, Cao, Jianhua, Hu, Jianle, Zhu.  2017.  Research on Anti-Counterfeiting Technology of Print Image Based on the Metameric Properties. Proceedings of the 2017 2nd International Conference on Communication and Information Systems. :284–289.
With high-precision scanners, copiers, and other equipment, a copied image can achieve a very realistic effect compared with the original, which poses a certain threat to the copyright of the original. In view of this, a design method for metameric security images with anti-counterfeiting and anti-copy functions is presented in this paper. Metameric security images are designed and printed using the theory of metameric color and the spectral characteristics of four-color inks. The anti-counterfeiting function is realized through differences in the K-ink content within the CMYK proportions. In the metameric security images, the trademark image is displayed in CMYK color and is visible under sunlight, while the anti-counterfeiting image is printed in monochrome K ink and is visible under infrared light. The experimental results show that when the metameric security images are observed with an infrared detection device under an infrared light source, the hidden information is clearly revealed, realizing the anti-counterfeiting function. The method can be applied to trademark image security in various industries.
2018-01-23
Joo, Moon-Ho, Yoon, Sang-Pil, Kim, Sahng-Yoon, Kwon, Hun-Yeong.  2017.  Research on Distribution of Responsibility for De-Identification Policy of Personal Information. Proceedings of the 18th Annual International Conference on Digital Government Research. :74–83.
With the coming of the age of big data, efforts to institutionalize the de-identification of personal information, protecting privacy while at the same time allowing the use of personal information, have been actively carried out, and many countries are already at the stage of implementing and establishing de-identification policies quite actively. But even with such efforts to protect and use personal information at the same time, the danger posed by re-identification based on de-identified information is real enough to warrant serious consideration of a mechanism to manage such risks, as well as a mechanism for distributing the responsibilities and liabilities that follow from these risks in the event of accidents and incidents involving the invasion of privacy. So far, most countries implementing de-identification policies are focusing on defining what de-identification is and the exemption requirements that allow free use of de-identified personal information; in fact, there seems to be a lack of discussion and consideration of how to distribute the responsibility for the risks and liabilities involved in the process of de-identification of personal information. This study takes a look at various de-identification policies worldwide and contemplates these policies from the perspective of risk-liability theory. Also, the constituencies of the de-identification policies are identified in order to analyze the roles and responsibilities of each of these constituencies, thereby providing the theoretical basis on which to initiate discussions on the distribution of the burdens and responsibilities arising from de-identification policies.
2018-03-05
Shu, F., Li, M., Chen, S., Wang, X., Li, F..  2017.  Research on Network Security Protection System Based on Dynamic Modeling. 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :1602–1605.
A dynamic modeling method for network security vulnerabilities is proposed, composed of the design of a safety evaluation model, a risk model for intrusion events, and a vulnerability risk model. By identifying vulnerability values through dynamic forms, the model can improve the integration among the vulnerability scanning system, the intrusion prevention system, and the security configuration verification system. Based on this model, the network protection system most suitable for users can be formed, and its protection capability improved.
2018-05-16
Yang, Fan, Chien, Andrew A., Gunawi, Haryadi S..  2017.  Resilient Cloud in Dynamic Resource Environments. Proceedings of the 2017 Symposium on Cloud Computing. :627–627.
Traditional cloud stacks are designed to tolerate random, small-scale failures, and can successfully deliver highly-available cloud services and interactive services to end users. However, they fail to survive large-scale disruptions caused by major power outages, cyber-attacks, or region/zone failures. Such changes trigger cascading failures and significant service outages. We propose to understand the reasons for these failures, and to create reliable data services that can efficiently and robustly tolerate such large-scale resource changes. We believe cloud services will need to survive frequent, large dynamic resource changes in the future to be highly available. (1) Significant new challenges to cloud reliability are emerging, including cyber-attacks, power/network outages, and so on. For example, human error disrupted the Amazon S3 service on 02/28/17 [2]. Recently, hackers have even been attacking electric utilities, which may lead to more outages [3, 6]. (2) Increased attention to resource cost optimization will increase usage dynamism, as with Amazon Spot Instances [1]. (3) Availability-focused cloud applications will increasingly practice continuous testing to ensure they have no hidden source of catastrophic failure. For example, the Netflix Simian Army can simulate the outages of individual servers, and even of an entire AWS region [4]. (4) Cloud applications with dynamic flexibility will reap numerous benefits, such as flexible deployments, managing cost arbitrage and reliability arbitrage across cloud providers and datacenters, etc. Using Apache Cassandra [5] as the model system, we characterize its failure behavior under dynamic datacenter-scale resource changes. Each datacenter is volatile and randomly shut down with a given duty factor. We simulate a read-only workload on a quorum-based system deployed across multiple datacenters, varying (1) system scale, (2) the fraction of volatile datacenters, and (3) the duty factor of the volatile datacenters. We explore the space of various configurations, including replication factors and consistency levels, and measure the service availability (% of succeeded requests) and replication overhead (number of total replicas). Our results show that, in a volatile resource environment, the current replication and quorum protocols in Cassandra-like systems cannot achieve high availability and consistency with low replication overhead. Our contributions include: (1) a detailed characterization of failures under dynamic datacenter-scale resource changes, showing that the existing protocols in quorum-based systems cannot achieve high availability and consistency with low replication cost; (2) a study of the best achievable availability of data services in dynamic datacenter-scale resource environments.
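
The availability experiment can be approximated with a short Monte Carlo simulation; the parameters below are invented, and the quorum logic is simplified to majority reads over one replica per datacenter.

```python
import random

def availability(n_dc, volatile_frac, duty, replicas, trials=100_000):
    """Fraction of quorum reads that succeed when volatile datacenters
    are each independently up with probability equal to their duty factor."""
    n_volatile = int(n_dc * volatile_frac)
    quorum = replicas // 2 + 1
    ok = 0
    for _ in range(trials):
        up = [True] * (n_dc - n_volatile) + \
             [random.random() < duty for _ in range(n_volatile)]
        placement = random.sample(range(n_dc), replicas)  # one replica per DC
        if sum(up[dc] for dc in placement) >= quorum:
            ok += 1
    return ok / trials

random.seed(42)
for rf in (3, 5, 7):
    a = availability(n_dc=10, volatile_frac=0.5, duty=0.8, replicas=rf)
    print(f"replication factor {rf}: availability {a:.4f}")
```

Even this toy model exhibits the trade-off the paper measures: raising the replication factor buys availability only at a growing replication overhead.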
2018-09-12
Park, Sangdon, Weimer, James, Lee, Insup.  2017.  Resilient Linear Classification: An Approach to Deal with Attacks on Training Data. Proceedings of the 8th International Conference on Cyber-Physical Systems. :155–164.
Data-driven techniques are used in cyber-physical systems (CPS) for controlling autonomous vehicles, handling demand responses for energy management, and modeling human physiology for medical devices. These data-driven techniques extract models from training data, where their performance is often analyzed with respect to random errors in the training data. However, if the training data is maliciously altered by attackers, the effect of these attacks on the learning algorithms underpinning data-driven CPS has yet to be considered. In this paper, we analyze the resilience of classification algorithms to training data attacks. Specifically, a generic metric is proposed that is tailored to measure the resilience of classification algorithms with respect to worst-case tampering of the training data. Using the metric, we show that traditional linear classification algorithms are resilient only under restricted conditions. To overcome these limitations, we propose a linear classification algorithm with a majority constraint and prove that it is strictly more resilient than the traditional algorithms. Evaluations on both synthetic data and a real-world retrospective arrhythmia medical case study show that the traditional algorithms are vulnerable to tampered training data, whereas the proposed algorithm is more resilient (as measured by worst-case tampering).
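
A toy version of the threat model follows, using an ordinary logistic-regression classifier and an invented label-flipping heuristic; it is not the paper's resilience metric or its majority-constrained algorithm, only a demonstration that tampered training labels degrade clean-data accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
X = np.r_[rng.normal(-1.5, 1, (n, 2)), rng.normal(1.5, 1, (n, 2))]
y = np.r_[np.zeros(n), np.ones(n)].astype(int)
Xte = np.r_[rng.normal(-1.5, 1, (200, 2)), rng.normal(1.5, 1, (200, 2))]
yte = np.r_[np.zeros(200), np.ones(200)].astype(int)

for frac in (0.0, 0.1, 0.2, 0.3):
    y_adv = y.copy()
    # Cheap heuristic attack: flip the labels of the training points
    # deepest inside class 0, dragging the learned boundary into it.
    idx = np.argsort(X @ np.ones(2))[: int(frac * len(y))]
    y_adv[idx] = 1 - y_adv[idx]
    clf = LogisticRegression().fit(X, y_adv)
    print(f"tampered {frac:.0%}: clean test accuracy {clf.score(Xte, yte):.3f}")
```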
2017-12-20
Ren, H., Jiang, F., Wang, H..  2017.  Resource allocation based on clustering algorithm for hybrid device-to-device networks. 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP). :1–6.
In order to improve the spectrum utilization rate of Device-to-Device (D2D) communication, we study the hybrid resource allocation problem, which allows both the resource-reuse and resource-dedicated modes to work simultaneously. Meanwhile, multiple D2D devices are permitted to share uplink cellular resources with designated cellular user equipment (CUE). Taking into account the transmission requirements of different users, an optimized resource allocation problem is formulated, which is NP-hard. A heuristic greedy throughput maximization (HGTM) approach based on a clustering algorithm is then proposed to solve the problem. Numerical results demonstrate that the proposed HGTM outperforms existing algorithms in sum throughput, CUE SINR performance, and the number of accessed D2D devices.
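
A generic greedy sketch of the allocation step follows; the throughput values are invented, and this is not the authors' HGTM clustering algorithm itself, just the flavor of greedy throughput maximization.

```python
import random

# Candidate (D2D pair, cellular channel) matches are sorted by achievable
# throughput, then assigned greedily subject to each channel hosting at
# most one D2D pair and each pair reusing at most one channel.
random.seed(7)
n_d2d, n_cue = 6, 4
rate = {(d, c): random.uniform(1.0, 10.0)      # Mbps if pair d reuses channel c
        for d in range(n_d2d) for c in range(n_cue)}

assigned_d, assigned_c, total = set(), set(), 0.0
for (d, c), r in sorted(rate.items(), key=lambda kv: -kv[1]):
    if d not in assigned_d and c not in assigned_c:
        assigned_d.add(d); assigned_c.add(c); total += r
        print(f"D2D pair {d} reuses CUE channel {c}: {r:.2f} Mbps")
print(f"sum throughput: {total:.2f} Mbps")
```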
2018-12-10
Maas, Martin, Asanović, Krste, Kubiatowicz, John.  2017.  Return of the Runtimes: Rethinking the Language Runtime System for the Cloud 3.0 Era. Proceedings of the 16th Workshop on Hot Topics in Operating Systems. :138–143.
The public cloud is moving to a Platform-as-a-Service model where services such as data management, machine learning or image classification are provided by the cloud operator while applications are written in high-level languages and leverage these services. Managed languages such as Java, Python or Scala are widely used in this setting. However, while these languages can increase productivity, they are often associated with problems such as unpredictable garbage collection pauses or warm-up overheads. We argue that the reason for these problems is that current language runtime systems were not initially designed for the cloud setting. To address this, we propose seven tenets for designing future language runtime systems for cloud data centers. We then outline the design of a general substrate for building such runtime systems, based on these seven tenets.
2017-12-20
Rogowski, R., Morton, M., Li, F., Monrose, F., Snow, K. Z., Polychronakis, M..  2017.  Revisiting Browser Security in the Modern Era: New Data-Only Attacks and Defenses. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :366–381.
The continuous discovery of exploitable vulnerabilities in popular applications (e.g., web browsers and document viewers), along with their heightening protections against control flow hijacking, has opened the door to an often neglected attack strategy, namely, data-only attacks. In this paper, we demonstrate the practicality of the threat posed by data-only attacks that harness the power of memory disclosure vulnerabilities. To do so, we introduce memory cartography, a technique that simplifies the construction of data-only attacks in a reliable manner. Specifically, we show how an adversary can use a provided memory mapping primitive to navigate through process memory at runtime, and safely reach security-critical data that can then be modified at will. We demonstrate this capability by using our cross-platform memory cartography framework implementation to construct data-only exploits against Internet Explorer and Chrome. The outcome of these exploits ranges from simple HTTP cookie leakage, to the alteration of the same origin policy for targeted domains, which enables the cross-origin execution of arbitrary script code. The ease with which we can undermine the security of modern browsers stems from the fact that although isolation policies (such as the same origin policy) are enforced at the script level, these policies are not well reflected in the underlying sandbox process models used for compartmentalization. This gap exists because the complex demands of today's web functionality make the goal of enforcing the same origin policy through process isolation a difficult one to realize in practice, especially when backward compatibility is a priority (e.g., for support of cross-origin IFRAMEs). While fixing the underlying problems likely requires a major refactoring of the security architecture of modern browsers (in the long term), we explore several defenses, including global variable randomization, that can limit the power of the attacks presented herein.