Biblio

Found 19480 results

'
'Ammar, Muhammad Amirul, Purnamasari, Rita, Budiman, Gelar.  2022.  Compressive Sampling on Weather Radar Application via Discrete Cosine Transform (DCT). 2022 IEEE Symposium on Future Telecommunication Technologies (SOFTT). :83–89.
A weather radar is expected to provide valid information about weather conditions in real time. To achieve this, a weather radar collects a large number of data samples, producing a large volume of data; the radar equipment must therefore provide large-capacity bandwidth for transmission and storage media. Compression techniques applied at the time of data acquisition can reduce this data burden. Compressive Sampling (CS) is a new data acquisition method that allows the sampling and compression processes to be carried out simultaneously to speed up computing time, reduce the bandwidth required on the transmission medium, and save storage media. There are three stages in the CS method: sparsity transformation using the Discrete Cosine Transform (DCT) algorithm, sampling using a measurement matrix, and reconstruction using the Orthogonal Matching Pursuit (OMP) algorithm. The sparsity transformation converts the representation of the radar signal into a sparse form, sampling extracts the important information from the radar signal, and reconstruction recovers the radar signal. The data used in this study are real IDRA beat-signal data. Based on the CS simulations performed, the best PSNR and RMSE values are obtained with a CR of 2, while the shortest computation time is obtained with a CR of 32. CS simulation of one sector via DCT with a CR of 2 produces a PSNR of 20.838 dB and an RMSE of 0.091; with a CR of 32, the computation time is 10.574 seconds.
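
As a rough illustration of the three-stage pipeline described above (DCT sparsity transform, random measurement matrix, OMP reconstruction), the following Python sketch runs the same steps on a synthetic signal. The block length, sparsity level, and test signal are illustrative assumptions, not the IDRA data or the paper's parameters.

import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N = 256                         # block length (illustrative, not the IDRA frame size)
CR = 2                          # compression ratio, e.g. 2x as in the paper
M = N // CR                     # number of compressive measurements

# Synthetic test signal built to be sparse in the DCT domain (stand-in for a beat signal).
s_true = np.zeros(N)
s_true[[5, 17, 40]] = [1.0, 0.6, 0.3]
x = idct(s_true, norm="ortho")

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # measurement matrix
y = Phi @ x                                      # sampling and compression in one step

# Dictionary relating DCT coefficients to the measurements: A = Phi * (inverse-DCT basis).
A = Phi @ idct(np.eye(N), norm="ortho", axis=0)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
omp.fit(A, y)                                    # reconstruction of the sparse coefficients
x_hat = idct(omp.coef_, norm="ortho")            # back to the signal domain

rmse = np.sqrt(np.mean((x - x_hat) ** 2))
psnr = 20 * np.log10(np.max(np.abs(x)) / rmse)
print(f"RMSE={rmse:.3f}  PSNR={psnr:.1f} dB")

Varying CR (and hence M) in this sketch illustrates the trade-off reported above between reconstruction quality (PSNR/RMSE) and the amount of data that has to be handled.
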
A
A, Meharaj Begum, Arock, Michael.  2021.  Efficient Detection Of SQL Injection Attack(SQLIA) Using Pattern-based Neural Network Model. 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). :343–347.
Web application vulnerability is one of the major causes of cyber attacks. Cyber criminals exploit these vulnerabilities to inject malicious commands into unsanitized user input in order to bypass database authentication, using attack techniques such as cross-site scripting (XSS), phishing, Structured Query Language Injection Attack (SQLIA), and malware. Although many research works have been conducted to resolve the above-mentioned attacks, only a few challenges with respect to SQLIA have been resolved. Ensuring security against the complete set of malicious payloads is extremely complicated and demanding, and requires appropriate classification of legitimate and injected SQL commands. Existing approaches deal with a limited set of signatures, keywords, and symbols of SQL queries to identify injected queries. This work focuses on extracting SQL injection patterns with the help of existing parsing and tagging techniques. Pattern-based tags are trained and modeled using a multi-layer perceptron, which classifies queries with an accuracy of 94.4%, better than the existing approaches.
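
A minimal sketch of the pattern/tag idea, assuming a toy tag scheme and a tiny hand-made query set (the paper's actual parsing, tagging, and training data are not reproduced here): queries are reduced to coarse token tags, tag n-grams become features, and a multi-layer perceptron separates legitimate from injected queries.

import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

def tag_query(q):
    # Reduce a raw query to a coarse tag sequence (illustrative tag set).
    tags = []
    for tok in re.findall(r"[A-Za-z_]+|\d+|'[^']*'|--|;|=|\(|\)", q):
        if tok.upper() in {"SELECT", "UNION", "OR", "AND", "DROP", "INSERT", "WHERE", "FROM"}:
            tags.append("KW_" + tok.upper())
        elif tok.startswith("'") or tok.isdigit():
            tags.append("VAL")
        else:
            tags.append("SYM_" + tok)
    return " ".join(tags)

queries = [
    "SELECT name FROM users WHERE id = 4",
    "SELECT name FROM users WHERE id = '7'",
    "SELECT name FROM users WHERE id = '' OR '1' = '1'",
    "SELECT name FROM users WHERE id = 1; DROP TABLE users; --",
]
labels = [0, 0, 1, 1]                       # 0 = legitimate, 1 = injected

vec = CountVectorizer(ngram_range=(1, 2))   # tag uni-/bi-grams as features
X = vec.fit_transform([tag_query(q) for q in queries])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, labels)
test = "SELECT name FROM users WHERE id = '' OR 'a' = 'a'"
print(clf.predict(vec.transform([tag_query(test)])))
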
A, Sujan Reddy, Rudra, Bhawana.  2021.  Evaluation of Recurrent Neural Networks for Detecting Injections in API Requests. 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC). :0936–0941.
Application programming interfaces (APIs) are a vital part of every online business. APIs are responsible for transferring data across systems within a company or to users through web or mobile applications. Security is a concern for any public-facing application. The objective of this study is to analyze incoming requests to a target API and flag any malicious activity. This paper proposes a solution using sequence models to identify whether an API request contains SQL, XML, JSON, or other types of malicious injections. We also propose a novel heuristic procedure that minimizes the number of false positives, i.e., valid API requests that are misclassified as malicious by the model.
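
The following is an illustrative sketch, not the authors' model, of a character-level recurrent classifier for API request payloads; the layer sizes, the tiny example requests, and the use of a fixed score threshold are assumptions.

import numpy as np
import tensorflow as tf

requests = [
    '{"user": "alice", "page": 2}',
    '{"user": "bob", "page": 3}',
    '{"user": "\' OR 1=1 --", "page": 1}',
    '{"user": "<script>alert(1)</script>", "page": 1}',
]
labels = np.array([0, 0, 1, 1], dtype="float32")    # 1 = contains an injection

# Character-level integer encoding with 0 reserved for padding.
vocab = sorted({c for r in requests for c in r})
char_to_id = {c: i + 1 for i, c in enumerate(vocab)}
max_len = max(len(r) for r in requests)
X = np.zeros((len(requests), max_len), dtype="int32")
for i, r in enumerate(requests):
    X[i, :len(r)] = [char_to_id[c] for c in r]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 16, mask_zero=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=30, verbose=0)

# A score above a chosen threshold (e.g. 0.5) flags the request as a likely injection.
print(model.predict(X, verbose=0).ravel())
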
A. A. Zewail, A. Yener.  2015.  "The two-hop interference untrusted-relay channel with confidential messages". 2015 IEEE Information Theory Workshop - Fall (ITW). :322-326.

This paper considers the two-user interference relay channel where each source wishes to communicate to its destination a message that is confidential from the other destination. Furthermore, the relay, which enables communication in the absence of direct links, is untrusted; the messages from both sources must therefore be kept secret from the relay as well. We provide an achievable secure rate region for this network. The achievability scheme utilizes structured codes for message transmission, cooperative jamming, and scaled compute-and-forward. In particular, the sources use nested lattice codes and stochastic encoding, while the destinations jam using lattice points. The relay decodes two integer combinations of the received lattice points and forwards them, using Gaussian codewords, to both destinations. The achievability technique provides the insight that the untrusted relay node can be utilized as an encryption block in a two-hop interference relay channel with confidential messages.

A. Akinbi, E. Pereira.  2015.  "Mapping Security Requirements to Identify Critical Security Areas of Focus in PaaS Cloud Models". 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing. :789-794.

Information Technology experts cite security and privacy concerns as the major challenges in the adoption of cloud computing. On Platform-as-a-Service (PaaS) clouds, customers are faced with the challenges of selecting service providers and evaluating security implementations based on their security needs and requirements. This study aims to give cloud customers the ability to quantify their security requirements in order to identify the critical areas in PaaS cloud architectures where security provisions offered by CSPs can be assessed. Using an adaptive security mapping matrix, the study takes a quantitative approach and presents numeric findings that identify critical areas within the PaaS environment where security can be evaluated and security controls assessed against these requirements. The matrix can be adapted across different types of PaaS cloud models based on the individual security requirements and service level objectives identified by PaaS cloud customers.

A. Ayoub, B. Chang, O. Sokolsky, I. Lee.  2013.  Assessing the Overall Sufficiency of Safety Arguments. Proceedings of the 21st Safety-critical Systems Symposium (SSS'13).
A. Ayoub, B. Kim, I. Lee, O. Sokolsky.  2012.  A Systematic Approach to Justifying Sufficient Confidence in Software Safety Arguments. International Conference on Computer Safety, Reliability and Security (SAFECOMP 2012).
A. Ayoub, B. Kim, I. Lee, O. Sokolsky.  2012.  A Safety Case Pattern for Model-Based Development Approach. Proceedings of the 4th NASA Formal Methods Symposium. :223–243.
A. Bekan, M. Mohorcic, J. Cinkelj, C. Fortuna.  2015.  "An Architecture for Fully Reconfigurable Plug-and-Play Wireless Sensor Network Testbed". 2015 IEEE Global Communications Conference (GLOBECOM). :1-7.

In this paper we propose an architecture for a fully reconfigurable, plug-and-play wireless sensor network testbed. The proposed architecture supports reconfiguration and easy experimentation and testing of standard protocol stacks (i.e. uIPv4 and uIPv6) as well as non-standardized clean-slate protocol stacks (e.g. configured using RIME). The parameters of the protocol stacks can be remotely reconfigured through an easy-to-use RESTful API, and clean-slate protocol stacks can be fully reconfigured at run time. The architecture enables easy set-up of the network - plug - by using a protocol that automatically sets up a multi-hop network (i.e. the RPL protocol), and it enables reconfiguration and experimentation - play - through simple RESTful interaction with each node individually. The reference implementation of the architecture uses a dual-stack Contiki OS with the ProtoStack tool for dynamic composition of services.
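
A hypothetical sketch of the "play" interaction: reading and reconfiguring one node's protocol-stack parameters over a RESTful API. The node address, endpoint paths, and parameter names below are invented for illustration; the paper does not specify its exact resource layout.

import requests

NODE = "http://testbed-node-12.local"          # assumed node address

# Read the currently composed protocol stack.
current = requests.get(f"{NODE}/stack", timeout=5).json()
print("running stack:", current)

# Push a new parameter set for the clean-slate (RIME-style) stack at run time.
new_params = {"mac": "csma", "channel": 26, "tx_power_dbm": 0}
resp = requests.put(f"{NODE}/stack/params", json=new_params, timeout=5)
resp.raise_for_status()
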

A. Cherukuri, E. Mallada, S. Low, J. Cortes.  2016.  The role of strong convexity-concavity in the convergence and robustness of the saddle-point dynamics. 54th Annual Allerton Conf. on Communication, Control, and Computing (Allerton). :504-510.
A. Chouhan, S. Singh.  2015.  "Real time secure end to end communication over GSM network". 2015 International Conference on Energy Systems and Applications. :663-668.

The GSM network is the most widely used communication network for mobile phones in the world. However, the security of voice communication is the main issue in the GSM network. This paper proposes a technique for secure end-to-end communication over the GSM network. The voice signal is encrypted in real time using digital techniques and transmitted over the GSM network; at the receiver end, the same decoding algorithm is used to extract the original speech signal. The GSM speech transcoding process severely distorts an encrypted signal that does not possess the characteristics of a speech signal, so it is not possible to use standard modem techniques over the GSM speech channel. The user may choose an appropriate algorithm and hardware platform as per requirement.

A. Dirafzoon, N. Lokare, E. Lobaton.  2016.  Action Classification from Motion Capture Data using Topological Data Analysis. IEEE Global Conf. on Signal and Information Processing (GlobalSIP).
A. Dutta, R. K. Mangang.  2015.  "Analog to information converter based on random demodulation". 2015 International Conference on Electronic Design, Computer Networks Automated Verification (EDCAV). :105-109.

With the increase in signal bandwidth, conventional analog-to-digital converters (ADCs), operating on the basis of the Shannon/Nyquist theorem, are forced to work at very high rates, leading to low dynamic range and high power consumption. This paper describes an analog-to-information converter based on compressive sensing techniques. The high sampling rate, which is the main drawback of conventional ADCs, is successfully reduced to 4 times lower than the conventional rate. The system also offers the advantage of low power dissipation.
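
A minimal sketch of a random-demodulation front end of this kind: the input is mixed with a pseudorandom +/-1 chipping sequence and then integrated and dumped at one quarter of the Nyquist rate, matching the 4x rate reduction mentioned above. The signal and block parameters are illustrative assumptions, and sparse recovery of x from y (e.g. with OMP) is left to any standard compressive-sensing solver.

import numpy as np

rng = np.random.default_rng(1)
N = 512                       # Nyquist-rate samples in one block
R = 4                         # rate reduction factor (4x lower, as in the paper)
M = N // R                    # sub-Nyquist output samples

t = np.arange(N)
x = np.sin(2 * np.pi * 37 * t / N)          # sparse multitone test input

pn = rng.choice([-1.0, 1.0], size=N)        # pseudorandom chipping sequence
mixed = x * pn                              # random demodulation (mixing)
y = mixed.reshape(M, R).sum(axis=1)         # integrate-and-dump, low-rate samples

# Equivalently y = Phi @ x, where row i of Phi holds the chips i*R..(i+1)*R-1;
# this Phi is what a sparse-recovery solver would use to reconstruct x from y.
Phi = np.zeros((M, N))
for i in range(M):
    Phi[i, i * R:(i + 1) * R] = pn[i * R:(i + 1) * R]
assert np.allclose(y, Phi @ x)
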

A. Endert.  2014.  Semantic Interaction for Visual Analytics: Toward Coupling Cognition and Computation. IEEE Computer Graphics and Applications. 34:8-15.

Alex Endert's dissertation "Semantic Interaction for Visual Analytics: Inferring Analytical Reasoning for Model Steering" described semantic interaction, a user interaction methodology for visual analytics (VA). It showed that user interaction embodies users' analytic process and can thus be mapped to model-steering functionality for "human-in-the-loop" system design. The dissertation contributed a framework (or pipeline) that describes such a process, a prototype VA system to test semantic interaction, and a user evaluation to demonstrate semantic interaction's impact on the analytic process. This research is influencing current VA research and has implications for future VA research.

A. K. M. A., J. C. D..  2015.  "Execution Time Measurement of Virtual Machine Volatile Artifacts Analyzers". 2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS). :314-319.

Due to the rapid evolution of virtualization environments, Virtual Machines (VMs) are a target point for attackers seeking privileged access to the virtual infrastructure. Advanced Persistent Threats (APTs) such as malware, rootkits, and spyware are potent enough to bypass the existing defense mechanisms designed for VMs. To address this issue, Virtual Machine Introspection (VMI) emerged as a promising approach that monitors the run state of the VM externally from the hypervisor. However, the limitation of VMI lies with the semantic gap. An open-source tool called LibVMI addresses the semantic gap. A Memory Forensic Analysis (MFA) tool such as Volatility can also be used to address the semantic gap, but it needs a captured memory dump (RAM) as input. Memory dump acquisition time and analysis time are highly crucial if an Intrusion Detection System (IDS) depends on the data supplied by the MFA or VMI tool. In this work, the live virtual machine RAM dump acquisition time of LibVMI is measured. In addition, the time Volatility takes to analyze the captured memory dump is measured and compared with another memory analyzer, Rekall. It is observed through experimental results that Rekall takes more execution time than Volatility for most of the plugins. Further, Volatility and Rekall are compared with LibVMI. It is noticed that examining the volatile data through LibVMI is faster, as it eliminates the memory dump acquisition time.
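
A rough sketch of the timing comparison described above: run the same plugin over one captured memory dump with Volatility and with Rekall and measure the wall-clock execution time. The dump path, plugin name, and exact command-line forms are assumptions and may need adjusting to the installed tool versions.

import subprocess
import time

DUMP = "vm_ram.dump"                                  # captured VM memory image (assumed path)
commands = {
    "volatility": ["vol.py", "-f", DUMP, "pslist"],   # assumed Volatility-style invocation
    "rekall":     ["rekall", "-f", DUMP, "pslist"],   # assumed Rekall-style invocation
}

for name, cmd in commands.items():
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    print(f"{name}: {time.perf_counter() - start:.2f} s for plugin 'pslist'")
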

A. Motamedi, M. Najafi, N. Erami.  2015.  "Parallel secure turbo code for security enhancement in physical layer". 2015 Signal Processing and Intelligent Systems Conference (SPIS). :179-184.

Turbo codes have been one of the important subjects in coding theory since 1993. This code has a low Bit Error Rate (BER), but decoding complexity and delay are big challenges. On the other hand, considering the complexity and delay of separate blocks for coding and encryption, combining these processes guarantees both the security and reliability of the communication system. In this paper a secure decoding algorithm running in parallel on General-Purpose Graphics Processing Units (GPGPU) is proposed. This is the first prototype of a fast and parallel Joint Channel-Security Coding (JCSC) system. Despite the encryption process, this algorithm maintains the desired BER and increases decoding speed. We considered several techniques for parallelism: (1) distributing the decoding load of a code word across multiple cores, (2) decoding several code words simultaneously, (3) using protection techniques to prevent performance degradation. We also propose two kinds of optimizations to increase the decoding speed: (1) improved memory access, (2) the use of new GPU properties such as concurrent kernel execution and advanced atomics to compensate for buffering latency.

A. Oprea, Z. Li, T. F. Yen, S. H. Chin, S. Alrwais.  2015.  "Detection of Early-Stage Enterprise Infection by Mining Large-Scale Log Data". 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks. :45-56.

Recent years have seen the rise of sophisticated attacks, including advanced persistent threats (APTs), which pose severe risks to organizations and governments. Additionally, new malware strains appear at a higher rate than ever before. Since many of these malware strains evade existing security products, traditional defenses deployed by enterprises today often fail at detecting infections at an early stage. We address the problem of detecting early-stage APT infection by proposing a new framework based on belief propagation, inspired by graph theory. We demonstrate that our techniques perform well on two large datasets. We achieve high accuracy on two months of DNS logs released by Los Alamos National Lab (LANL), which include APT infection attacks simulated by LANL domain experts. We also apply our algorithms to 38TB of web proxy logs collected at the border of a large enterprise and identify hundreds of malicious domains overlooked by state-of-the-art security products.
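
A toy sketch in the spirit of the graph-based approach above (a simplified score propagation, not the paper's belief-propagation algorithm): hosts and the external domains they contact form a bipartite graph, a few domains are seeded as known-bad, and suspicion propagates back and forth so that rarely seen domains contacted by suspicious hosts surface near the top. The log entries, damping factor, and iteration count are invented for illustration.

from collections import defaultdict

# host -> set of contacted domains (e.g. parsed from DNS or proxy logs)
contacts = {
    "host1": {"update.vendor.com", "evil-c2.biz"},
    "host2": {"update.vendor.com", "news.example.org"},
    "host3": {"evil-c2.biz", "rare-dropper.net"},
}
seed_bad = {"evil-c2.biz": 1.0}             # externally known-malicious domains

domain_hosts = defaultdict(set)
for h, ds in contacts.items():
    for d in ds:
        domain_hosts[d].add(h)

host_score = {h: 0.0 for h in contacts}
dom_score = {d: seed_bad.get(d, 0.0) for d in domain_hosts}

for _ in range(5):                          # a few propagation rounds
    for h in host_score:                    # hosts inherit suspicion from their domains
        host_score[h] = max(dom_score[d] for d in contacts[h])
    for d in dom_score:                     # unlabeled domains pick up damped host suspicion
        if d not in seed_bad:
            dom_score[d] = 0.5 * max(host_score[h] for h in domain_hosts[d])

print(sorted(dom_score.items(), key=lambda kv: -kv[1]))
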

A. P. Vinod, Y. Tang, M. M. K. Oishi, K. Sycara, C. Lebiere, M. Lewis.  2016.  Validation of cognitive models for collaborative hybrid systems with discrete human input. IEEE/RSJ International Conference on Intelligent Robots and Systems. :3339–3346.