Biblio

Filters: Keyword is parallel architectures
2023-07-18
Lin, Decong, Cao, Hongbo, Tian, Chunzi, Sun, Yongqi.  2022.  The Fast Paillier Decryption with Montgomery Modular Multiplication Based on OpenMP. 2022 IEEE 13th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP). :1–6.
With the increasing awareness of privacy protection and data security, people's concerns over the confidentiality of sensitive data still limit the application of distributed artificial intelligence. In fact, a new encryption form, called homomorphic encryption (HE), has achieved a balance between security and operability. In particular, one of the HE schemes, named Paillier, has been adopted to protect data privacy in distributed artificial intelligence. However, the massive amount of modular multiplication in Paillier greatly affects the speed of encryption and decryption. In this paper, we propose a fast CRT-Paillier scheme to accelerate its decryption process. We first introduce the Montgomery algorithm to CRT-Paillier to improve the modular exponentiation, and then compute the modular exponentiations in parallel using OpenMP. The experimental results show that our proposed scheme greatly increases decryption speed while preserving the same security level. In particular, when the key length is 4096 bits, decryption is about 148 times faster than with plain CRT-Paillier.
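To make the decryption pipeline concrete, here is a minimal sketch of CRT-Paillier decryption with the two modular exponentiations run in parallel through OpenMP sections. It assumes GMP for big-integer arithmetic (whose mpz_powm already applies Montgomery-style reduction internally, standing in for the paper's hand-written Montgomery multiplier), and all parameter names (hp, hq, pinv_q, etc.) are illustrative precomputed values, not the authors' code. The two CRT halves are independent, which is what makes the OpenMP split safe.

```cuda
// Sketch only: CRT-Paillier decryption with parallel exponentiations.
// Host-only build: g++ -fopenmp crt_paillier.cpp -lgmp
#include <gmp.h>
#include <omp.h>

// Paillier's L function over modulus s: L(x) = (x - 1) / s (exact division).
static void paillier_L(mpz_t out, const mpz_t x, const mpz_t s) {
    mpz_sub_ui(out, x, 1);
    mpz_divexact(out, out, s);
}

// m = Dec(c), with precomputed hp = L(g^(p-1) mod p^2)^(-1) mod p,
// hq analogously, p2 = p^2, q2 = q^2, pinv_q = p^(-1) mod q.
void decrypt_crt(mpz_t m, const mpz_t c, const mpz_t p, const mpz_t q,
                 const mpz_t hp, const mpz_t hq, const mpz_t p2,
                 const mpz_t q2, const mpz_t pinv_q) {
    mpz_t mp, mq, ep, eq, t;
    mpz_inits(mp, mq, ep, eq, t, NULL);
    #pragma omp parallel sections
    {
        #pragma omp section
        {   // m_p = L(c^(p-1) mod p^2) * hp mod p
            mpz_sub_ui(ep, p, 1);
            mpz_powm(mp, c, ep, p2);
            paillier_L(mp, mp, p);
            mpz_mul(mp, mp, hp);  mpz_mod(mp, mp, p);
        }
        #pragma omp section
        {   // m_q = L(c^(q-1) mod q^2) * hq mod q
            mpz_sub_ui(eq, q, 1);
            mpz_powm(mq, c, eq, q2);
            paillier_L(mq, mq, q);
            mpz_mul(mq, mq, hq);  mpz_mod(mq, mq, q);
        }
    }
    // Garner recombination: m = m_p + p * ((m_q - m_p) * p^(-1) mod q)
    mpz_sub(t, mq, mp);
    mpz_mul(t, t, pinv_q);  mpz_mod(t, t, q);
    mpz_mul(t, t, p);
    mpz_add(m, mp, t);
    mpz_clears(mp, mq, ep, eq, t, NULL);
}
```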
2023-04-27
Spliet, Roy, Mullins, Robert D..  2022.  Sim-D: A SIMD Accelerator for Hard Real-Time Systems. IEEE Transactions on Computers. 71:851–865.
Emerging safety-critical systems require high-performance data-parallel architectures and, problematically, ones that can guarantee tight and safe worst-case execution times. Given the complexity of existing architectures like GPUs, it is unlikely that sufficiently accurate models and algorithms for timing analysis will emerge in the foreseeable future. This motivates our work on Sim-D, a clean-slate approach to designing a real-time data-parallel architecture. Sim-D enforces a predictable execution model by isolating compute and access resources in hardware. The DRAM controller uninterruptedly transfers tiles of data, requested by entire work-groups. This permits work-groups to be executed as a sequence of deterministic access and compute phases, scheduling phases from up to two work-groups in parallel. Evaluation using a cycle-accurate timing model shows that Sim-D can achieve performance on par with an embedded-grade NVIDIA TK1 GPU under two conditions: applications refrain from using indirect DRAM transfers into large buffers, and Sim-D's scratchpads provide sufficient bandwidth. Sim-D's design facilitates derivation of safe WCET bounds that are tight within 12.7 percent on average, at an additional average performance penalty of approximately 9.2 percent caused by scheduling restrictions on phases.
2021-03-15
Chowdhuryy, M. H. Islam, Liu, H., Yao, F..  2020.  BranchSpec: Information Leakage Attacks Exploiting Speculative Branch Instruction Executions. 2020 IEEE 38th International Conference on Computer Design (ICCD). :529–536.
Recent studies on attacks exploiting processor hardware vulnerabilities have raised significant concern for information security. In particular, transient execution attacks such as Spectre augment microarchitectural side channels with speculative executions that lead to exfiltration of secret data not intended to be accessed. Many prior works have demonstrated the manipulation of branch predictors for triggering speculative executions, and thereafter leaking sensitive information through processor microarchitectural components. In this paper, we present a new class of microarchitectural attack, called BranchSpec, that performs information leakage by exploiting state changes of branch predictors on the speculative path. Our key observation is that branch instructions executed on the speculative path alter the branch pattern history, and these changes are not restored even after the speculatively executed branches are eventually squashed. Unfortunately, this enables adversaries to harness branch predictors as the transmitting medium in transient execution attacks. More importantly, compared to existing speculative attacks (e.g., Spectre), BranchSpec can take advantage of much simpler code patterns in the victim's code base, making the impact of such exploitation potentially even more severe. To demonstrate this security vulnerability, we have implemented two variants of BranchSpec attacks: a side channel where a malicious spy process infers cross-boundary secrets via the victim's speculatively executed nested branches, and a covert channel that communicates secrets by intentionally perturbing the branch pattern history structure via speculative branch executions. Our evaluation on Intel Skylake- and Coffee Lake-based processors reveals that these information leakage attacks are highly accurate and successful. To the best of our knowledge, this is the first work to reveal the information leakage threat due to speculative state updates in the branch predictor. Our studies further broaden the attack surface of processor microarchitectures and highlight the need for branch prediction mechanisms that are secure under transient execution.
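As a concrete illustration of the code pattern the abstract alludes to, the hypothetical victim gadget below (our own sketch, not code from the paper) contains a secret-dependent nested branch that executes only on a mispredicted path; its direction nevertheless updates the branch pattern history, and that update survives the squash.

```cuda
// Illustrative BranchSpec-style victim gadget (hypothetical, for exposition).
// The outer bounds check is attacker-trained to predict "taken"; for an
// out-of-bounds x, the body runs only speculatively.
#include <stdint.h>

static volatile int sink;  // keeps the nested branch body from being elided

void victim(const uint8_t *array, uint64_t x, uint64_t array_len) {
    if (x < array_len) {              // mispredicted taken for malicious x
        uint8_t secret = array[x];    // speculative out-of-bounds load
        if (secret & 1)               // nested branch: its direction is written
            sink++;                   // into the pattern history table and is
    }                                 // NOT rolled back on squash, leaking 1 bit
}
```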
2020-06-26
Raviraja Holla, M., Suma, D..  2019.  Memory Efficient High-Performance Rotational Image Encryption. 2019 International Conference on Communication and Electronics Systems (ICCES). :60–64.

Image encryption is an essential part of visual cryptography. Existing traditional sequential encryption techniques are infeasible for real-time applications. High-performance reformulations of such methods have grown steadily over the last decade and have demonstrated better performance than their sequential counterparts. A rotational encryption scheme encrypts images in such a way that decryption is possible with the rotated encrypted images. A parallel rotational encryption technique makes use of a high-performance device but under-exploits the optimizations such a device offers. We propose a rotational image encryption technique that makes use of the memory coalescing provided by the Compute Unified Device Architecture (CUDA). The proposed scheme achieves improved global memory utilization and increased efficiency.
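The kernel below is a minimal sketch of what a coalesced rotational step can look like in CUDA; the rotation angle, the XOR keystream, and all names are our assumptions rather than the paper's actual scheme. Adjacent threads in a warp read adjacent pixels, so loads collapse into full memory transactions, and the 180-degree writes land on consecutive (reversed) addresses within the same transaction segments.

```cuda
// Illustrative rotate-and-mask encryption step with coalesced global memory
// access. Launch with a 2-D grid covering a w-by-h 8-bit image; `key` is a
// toy 256-byte keystream standing in for the real cipher.
__global__ void rotate_encrypt(const unsigned char *in, unsigned char *out,
                               const unsigned char *key, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // contiguous along a row
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    int src = y * w + x;                      // coalesced read
    int dst = (h - 1 - y) * w + (w - 1 - x);  // 180-degree rotation
    out[dst] = in[src] ^ key[src & 255];      // mask with repeating keystream
}
```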

2020-05-22
Despotovski, Filip, Gusev, Marjan, Zdraveski, Vladimir.  2018.  Parallel Implementation of K-Nearest-Neighbors for Face Recognition. 2018 26th Telecommunications Forum (TELFOR). :1–4.
Face recognition is a fast-expanding field of research. Countless classification algorithms have found use in face recognition, with more still being developed in the search for better performance and accuracy. For high-dimensional data such as images, the K-Nearest-Neighbours classifier is a tempting choice. However, it is very computationally intensive, as it has to perform calculations on all items in the stored dataset for each classification it makes. Fortunately, there is a way to speed up the process by performing some of the calculations in parallel. We propose a parallel CUDA implementation of the KNN classifier and compare it to a serial implementation to demonstrate its performance superiority.
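The embarrassingly parallel step in KNN is the distance computation: each stored sample's distance to the query can be computed by its own thread, as in the sketch below (names and the flat row-major layout are assumptions, not the authors' code). The K smallest distances are then selected and voted on, typically on the host or with a parallel selection.

```cuda
// One thread per training sample: squared Euclidean distance to the query.
// `train` holds n_samples rows of `dim` floats in row-major order.
__global__ void knn_distances(const float *train, const float *query,
                              float *dist, int n_samples, int dim) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_samples) return;
    float acc = 0.0f;
    for (int d = 0; d < dim; ++d) {
        float diff = train[i * dim + d] - query[d];
        acc += diff * diff;           // squared distance avoids a sqrt
    }
    dist[i] = acc;  // host then picks the K smallest and takes a majority vote
}
```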
2020-03-30
Souza, Renan, Azevedo, Leonardo, Lourenço, Vítor, Soares, Elton, Thiago, Raphael, Brandão, Rafael, Civitarese, Daniel, Brazil, Emilio, Moreno, Marcio, Valduriez, Patrick et al..  2019.  Provenance Data in the Machine Learning Lifecycle in Computational Science and Engineering. 2019 IEEE/ACM Workflows in Support of Large-Scale Science (WORKS). :1–10.
Machine Learning (ML) has become essential in several industries. In Computational Science and Engineering (CSE), the complexity of the ML lifecycle comes from the large variety of data, scientists' expertise, tools, and workflows. If data are not tracked properly during the lifecycle, it becomes infeasible to recreate an ML model from scratch or to explain to stakeholders how it was created. The main limitation of provenance tracking solutions is that they cannot cope with provenance capture and integration of domain and ML data processed in the multiple workflows in the lifecycle while keeping the provenance capture overhead low. To handle this problem, in this paper we contribute a detailed characterization of provenance data in the ML lifecycle in CSE; a new provenance data representation, called PROV-ML, built on top of W3C PROV and ML Schema; and extensions to a system that tracks provenance from multiple workflows to address the characteristics of ML and CSE, and to allow for provenance queries with a standard vocabulary. We show a practical use in a real case in the O&G industry, along with its evaluation using 239,616 CUDA cores in parallel.
Kim, Sejin, Oh, Jisun, Kim, Yoonhee.  2019.  Data Provenance for Experiment Management of Scientific Applications on GPU. 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS). :1–4.
Graphics Processing Units (GPUs) are increasingly used for a wide range of applications in order to exploit highly parallel computation. Because memory virtualization in GPU nodes does not efficiently accommodate the diverse memory usage patterns of these applications, successful execution depends on exclusive and limited use of physical memory in GPU environments. It is therefore important to predict changes in the pattern of GPU memory usage during the runtime execution of an application. Data provenance, extracted from application characteristics, GPU runtime environments, inputs, and execution patterns gathered through runtime monitoring, is defined to support application management: setting the runtime configuration, predicting an experimental result, and sharing resources with co-located applications. In this paper, we define the data provenance of an application on GPUs and manage the data by profiling the execution of CUDA scientific applications. Data provenance management helps to predict the execution patterns of other, similar experiments and to plan efficient resource configurations.
2019-12-30
Razaque, Abdul, Jinrui, Wang, Zancheng, Wang, Hani, Qassim Bani, Khaskheli, Murad Ali, Bhutto, Waseem Ahmed.  2018.  Integration of CPU and GPU to Accelerate RSA Modular Exponentiation Operation. 2018 IEEE Long Island Systems, Applications and Technology Conference (LISAT). :1–6.

Nowadays, data security is becoming more and more important, as people store a great deal of personal information on their phones, and stored information requires security and privacy. Encryption algorithms have become the main means of maintaining data security, so algorithmic complexity and encryption efficiency have become the main measures of whether an encryption algorithm is safe. With the development of hardware, we now have many tools to improve such algorithms. The modular exponentiation in the RSA algorithm can be divided into several parts mathematically. In this paper, we introduce an approach that divides the encryption process and offloads part of the computation to the graphics processing unit (GPU). By using the GPU's capacity for parallel computing, the core of RSA can be accelerated using the central processing unit (CPU) and GPU together. Compute Unified Device Architecture (CUDA) is a platform that combines CPU and GPU to enable GPU parallel programming, and it is the tool we use to perform experiments on accelerating the RSA algorithm. This paper also builds up a mathematical model to help understand the mechanism of the RSA encryption algorithm.
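One standard way to "divide modular exponentiation into several parts", in the spirit of the abstract, is the binary split m^e mod n = product over the set bits i of e of m^(2^i) mod n: the repeated squarings are sequential, but the final product is a reduction a GPU can evaluate in parallel. The host-side sketch below shows the decomposition with toy 64-bit operands; it is our illustration, not the paper's implementation, and real RSA needs multi-precision arithmetic.

```cuda
// Binary decomposition of modular exponentiation (toy 64-bit operands).
#include <cstdint>
#include <vector>

using u128 = unsigned __int128;  // GCC/Clang extension, wide enough for a*b

static uint64_t modmul(uint64_t a, uint64_t b, uint64_t n) {
    return (uint64_t)((u128)a * b % n);
}

uint64_t modexp_split(uint64_t m, uint64_t e, uint64_t n) {
    // Collect m^(2^i) mod n for every set bit i of e (sequential squarings).
    std::vector<uint64_t> factors;
    for (uint64_t base = m % n; e; e >>= 1, base = modmul(base, base, n))
        if (e & 1) factors.push_back(base);
    // The product of the factors is independent of their order, so on a GPU
    // this loop becomes a parallel tree reduction over the factor array.
    uint64_t acc = 1;
    for (uint64_t f : factors) acc = modmul(acc, f, n);
    return acc;
}
```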

2018-02-21
Conti, F., Schilling, R., Schiavone, P. D., Pullini, A., Rossi, D., Gürkaynak, F. K., Muehlberghuber, M., Gautschi, M., Loi, I., Haugou, G. et al..  2017.  An IoT Endpoint System-on-Chip for Secure and Energy-Efficient Near-Sensor Analytics. IEEE Transactions on Circuits and Systems I: Regular Papers. 64:2481–2494.

Near-sensor data analytics is a promising direction for internet-of-things endpoints, as it minimizes energy spent on communication and reduces network load, but it also poses security concerns, as valuable data are stored or sent over the network at various stages of the analytics pipeline. Using encryption to protect sensitive data at the boundary of the on-chip analytics engine is a way to address data security issues. To cope with the combined workload of analytics and encryption in a tight power envelope, we propose Fulmine, a system-on-chip (SoC) based on a tightly-coupled multi-core cluster augmented with specialized blocks for compute-intensive data processing and encryption functions, supporting software programmability for regular computing tasks. The Fulmine SoC, fabricated in 65-nm technology, consumes less than 20mW on average at 0.8V, achieving an efficiency of up to 70pJ/B in encryption, 50pJ/px in convolution, or up to 25MIPS/mW in software. As a strong argument for real-life flexible application of our platform, we show experimental results for three secure analytics use cases: secure autonomous aerial surveillance with a state-of-the-art deep convolutional neural network (CNN) consuming 3.16pJ per equivalent reduced instruction set computer operation, local CNN-based face detection with secured remote recognition in 5.74pJ/op, and seizure detection with encrypted data collection from electroencephalogram within 12.7pJ/op.

2017-02-14
A. T. Erozan, A. S. Aydoğdu, B. Örs.  2015.  Application specific processor design for DCT based applications. 2015 23rd Signal Processing and Communications Applications Conference (SIU). :2157–2160.

The Discrete Cosine Transform (DCT) is used in JPEG compression, image encryption, image watermarking, and channel estimation. In this paper, an Application Specific Processor (ASP) for DCT-based applications is designed and implemented on a Field Programmable Gate Array (FPGA). One-dimensional DCT and IDCT hardware modules with a fully parallel architecture have been implemented and connected to a MicroBlaze soft-core processor. To show a basic application of the ASP, a DCT-based image watermarking example is studied on this system.
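For reference, the one-dimensional DCT-II that such hardware computes is X_k = a_k * sum over n = 0..N-1 of x_n * cos(pi * (2n+1) * k / (2N)); each output coefficient is independent of the others, which is exactly what makes a fully parallel lane-per-coefficient architecture natural. The sketch below is a plain software reference for N = 8 (the JPEG block size), not the paper's hardware.

```cuda
// Reference 1-D DCT-II for an 8-point block (software model of the
// transform the ASP accelerates; each k could map to one hardware lane).
#include <cmath>

void dct1d(const double in[8], double out[8]) {
    const int N = 8;
    const double PI = std::acos(-1.0);
    for (int k = 0; k < N; ++k) {
        double sum = 0.0;
        for (int n = 0; n < N; ++n)
            sum += in[n] * std::cos(PI * (2 * n + 1) * k / (2.0 * N));
        // Orthonormal scaling: a_0 = sqrt(1/N), a_k = sqrt(2/N) for k > 0.
        double alpha = (k == 0) ? std::sqrt(1.0 / N) : std::sqrt(2.0 / N);
        out[k] = alpha * sum;
    }
}
```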