Biblio

Filters: Keyword is Vectors
2018-01-16
Arita, S., Kozaki, S..  2017.  A Homomorphic Signature Scheme for Quadratic Polynomials. 2017 IEEE International Conference on Smart Computing (SMARTCOMP). :1–6.

Homomorphic signatures can certify that a result was indeed computed with a given function on a data set by an untrusted third party, such as a cloud server, provided the input data are stored with their signatures beforehand. Boneh and Freeman (EUROCRYPT 2011) proposed a homomorphic signature scheme for polynomial functions of any degree; however, its security is not based on the standard short integer solution (SIS) problem. In this paper, we present a homomorphic signature scheme for quadratic polynomial functions whose security is based on the standard SIS problem. Our scheme constructs the signatures of multiplications as tensor products of the original signature vectors of the input data, so that homomorphism holds. Moreover, the security of our scheme reduces to the hardness of SIS problems with respect to two moduli, one of which is a power of the other. We show the reduction by constructing, from any forger of our scheme, solvers of the SIS problem with respect to either modulus.
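
The multiplication gadget rests on a standard property of tensor (Kronecker) products: the inner product of two Kronecker products factors into the product of the individual inner products. A minimal numpy sketch of that identity, with small illustrative integer vectors standing in for the scheme's short lattice vectors (this is only the algebra, not the lattice-based construction itself):

```python
import numpy as np

# Illustrative stand-ins for signature vectors (u, v) and data vectors (a, b).
u = np.array([1, 2, 3])
a = np.array([2, 0, 1])
v = np.array([4, 1])
b = np.array([1, 3])

# Identity the multiplicative homomorphism relies on:
#   <u (x) v, a (x) b> = <u, a> * <v, b>
lhs = np.dot(np.kron(u, v), np.kron(a, b))   # inner product of tensor products
rhs = np.dot(u, a) * np.dot(v, b)            # product of inner products

print(int(lhs), int(rhs))  # 35 35
```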

2017-12-12
Zhou, G., Huang, J. X..  2017.  Modeling and Learning Distributed Word Representation with Metadata for Question Retrieval. IEEE Transactions on Knowledge and Data Engineering. 29:1226–1239.

Community question answering (cQA) has become an important issue due to the popularity of cQA archives on the Web. This paper focuses on addressing the lexical gap problem in question retrieval. Question retrieval in cQA archives aims to find existing questions that are semantically equivalent or relevant to the queried questions. However, the lexical gap problem poses a new challenge for question retrieval in cQA. In this paper, we propose to model and learn distributed word representations with metadata of category information within cQA pages for question retrieval, using two novel category-powered models. One is a basic category-powered model called MB-NET, and the other is an enhanced category-powered model called ME-NET, which can better learn the distributed word representations and alleviate the lexical gap problem. To deal with the variable size of word representation vectors, we employ the Fisher kernel framework to transform them into fixed-length vectors. Experimental results on large-scale English and Chinese cQA data sets show that our proposed approaches can significantly outperform state-of-the-art retrieval models for question retrieval in cQA. Moreover, large-scale automatic evaluation experiments show that promising and significant performance improvements can be achieved.

2017-11-20
You, L., Li, Y., Wang, Y., Zhang, J., Yang, Y..  2016.  A deep learning-based RNNs model for automatic security audit of short messages. 2016 16th International Symposium on Communications and Information Technologies (ISCIT). :225–229.

Traditional text classification methods usually follow this process: first, a sentence is treated as a bag of words (BOW) and transformed into a sentence feature vector, which is then classified by methods such as maximum entropy (ME), naive Bayes (NB), or support vector machines (SVM). However, these methods often fall short for text classification, chiefly because the semantic relations between words are very important for text categorization and the traditional representations cannot capture them. Sentiment classification, as a special case of text classification, is binary classification (positive or negative). Inspired by sentiment analysis, we use a novel deep learning-based recurrent neural network (RNN) model for automatic security audit of short messages from prisons, which classifies short messages as secure or insecure. In this paper, the features of short messages are extracted by word2vec, which captures word order information, and each sentence is mapped to a feature vector. In particular, words with similar meanings are mapped to nearby positions in the vector space and then classified by RNNs, whose network structure makes them well suited to processing sequence data. We preprocess the short messages, extract typical features from existing secure and insecure short messages via word2vec, and classify the messages with RNNs, which accept a fixed-size vector as input and produce a fixed-size vector as output. The experimental results show that the RNN model achieves an average accuracy of 92.7%, which is higher than that of the SVM.
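
As a rough sketch of the pipeline the paper describes (word vectors fed through a recurrent network toward a binary decision), here is a minimal Elman-style RNN forward pass in numpy. The tiny hand-made word vectors stand in for word2vec output, and all weights are random and untrained; everything here is illustrative, not the paper's trained model:

```python
import numpy as np

# Toy "word2vec" table: in the paper these vectors are learned by word2vec;
# here they are fixed by hand purely for illustration.
word_vecs = {
    "transfer": np.array([0.9, 0.1]),
    "money":    np.array([0.8, 0.2]),
    "hello":    np.array([0.1, 0.9]),
    "friend":   np.array([0.2, 0.8]),
}

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(3, 2)) * 0.5   # input -> hidden
W_hh = rng.normal(size=(3, 3)) * 0.5   # hidden -> hidden (the recurrence)
W_hy = rng.normal(size=(1, 3)) * 0.5   # hidden -> output score

def rnn_score(message):
    """Run the words of a message through the RNN; return a scalar score."""
    h = np.zeros(3)
    for word in message.split():
        x = word_vecs[word]
        h = np.tanh(W_xh @ x + W_hh @ h)   # Elman recurrence
    return (W_hy @ h).item()               # untrained output layer

score = rnn_score("transfer money")
print(round(score, 4))
```

A trained classifier would threshold this score (or a softmax over two outputs) to label the message secure or insecure.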

2017-03-08
Lee, K., Kolsch, M..  2015.  Shot Boundary Detection with Graph Theory Using Keypoint Features and Color Histograms. 2015 IEEE Winter Conference on Applications of Computer Vision. :1177–1184.

The TRECVID report of 2010 [14] evaluated video shot boundary detectors as achieving "excellent performance on [hard] cuts and gradual transitions." Unfortunately, in re-evaluating the state of the art of shot boundary detection, we found that these detectors need improvement: the characteristics of consumer-produced videos have changed significantly since the introduction of mobile devices such as smartphones, tablets, and cameras intended for outdoor activities, and video editing software has evolved rapidly. In this paper, we evaluate the best-known approach on a contemporary, publicly accessible corpus and present a method that achieves better performance, particularly on soft transitions. Our method combines color histograms with keypoint feature matching to extract comprehensive frame information. Two similarity metrics, one for individual frames and one for sets of frames, are defined based on graph cuts. These metrics are formed into temporal feature vectors on which an SVM is trained to perform the final segmentation. The evaluation on said "modern" corpus of relatively short videos yields a performance of 92% recall (at 89% precision) overall, compared to 69% (91%) for the best-known method.
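
One half of the frame signature, the color-histogram component, can be sketched as histogram intersection between frames (the keypoint-matching half and the graph-cut similarity metrics are not shown; bin count and the random test frames are illustrative choices, not the paper's):

```python
import numpy as np

def frame_similarity(f1, f2, bins=16):
    """Histogram-intersection similarity between two grayscale frames in [0, 1]."""
    h1, _ = np.histogram(f1, bins=bins, range=(0, 1))
    h2, _ = np.histogram(f2, bins=bins, range=(0, 1))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())   # 1.0 = identical distributions

rng = np.random.default_rng(6)
frame_a = rng.uniform(size=(48, 64))
frame_b = np.clip(frame_a + 0.02 * rng.standard_normal((48, 64)), 0, 1)  # same shot
frame_c = rng.beta(5, 2, size=(48, 64))   # different shot: different intensity profile

sim_ab = frame_similarity(frame_a, frame_b)
sim_ac = frame_similarity(frame_a, frame_c)
print(sim_ab > sim_ac)  # True: within-shot frames are far more similar
```

A shot boundary detector thresholds drops in such similarity over time; the paper instead feeds temporal vectors of these values to an SVM.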

2015-05-06
Kaur, R., Singh, M..  2014.  A Survey on Zero-Day Polymorphic Worm Detection Techniques. Communications Surveys Tutorials, IEEE. 16:1520-1549.

Zero-day polymorphic worms pose a serious threat to Internet security. With their ability to propagate rapidly, these worms increasingly threaten Internet hosts and services. Not only can they exploit unknown vulnerabilities, they can also change their own representations on each new infection or encrypt their payloads with a different key per infection. The many variations in the signatures of the same worm make fingerprinting very difficult. Therefore, signature-based defenses and traditional security layers miss these stealthy and persistent threats. This paper provides a detailed survey of research efforts on the detection of modern zero-day malware in the form of zero-day polymorphic worms.

Chieh-Hao Chang, Jung-Chun Kao, Fu-Wen Chen, Shih Hsun Cheng.  2014.  Many-to-all priority-based network-coding broadcast in wireless multihop networks. Wireless Telecommunications Symposium (WTS), 2014. :1-6.

This paper addresses the minimum transmission broadcast (MTB) problem for the many-to-all scenario in wireless multihop networks and presents a network-coding broadcast protocol with priority-based deadlock prevention. Our main contributions are as follows. First, we relate the many-to-all-with-network-coding MTB problem to a maximum out-degree problem; the solution of the latter serves as a lower bound on the number of transmissions. Second, we propose a distributed network-coding broadcast protocol, which constructs efficient broadcast trees and directs nodes to transmit packets in a network-coding manner. In addition, we present a priority-based deadlock prevention mechanism to avoid deadlocks. Simulation results confirm that, compared with existing protocols in the literature and the performance bound we present, our network-coding broadcast protocol performs very well in terms of the number of transmissions.
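
The transmission savings from network coding come from a classic trick, sketched below in its simplest two-neighbor form (illustrative only; the paper's protocol builds broadcast trees and codes over many packets): when node A holds p1 and is missing p2 while node B holds p2 and is missing p1, a relay broadcasting the single coded packet p1 XOR p2 serves both at once.

```python
p1 = b"\x01\x02\x03\x04"
p2 = b"\x0a\x0b\x0c\x0d"

def xor(a, b):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

coded = xor(p1, p2)          # one broadcast transmission instead of two
assert xor(coded, p2) == p1  # node B recovers its missing packet p1
assert xor(coded, p1) == p2  # node A recovers its missing packet p2
print(coded.hex())  # 0b090f09
```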

Sanandaji, B.M., Bitar, E., Poolla, K., Vincent, T.L..  2014.  An abrupt change detection heuristic with applications to cyber data attacks on power systems. American Control Conference (ACC), 2014. :5056-5061.

We present an analysis of a heuristic for abrupt change detection of systems with bounded state variations. The proposed analysis is based on the Singular Value Decomposition (SVD) of a history matrix built from system observations. We show that monitoring the largest singular value of the history matrix can be used as a heuristic for detecting abrupt changes in the system outputs. We provide sufficient detectability conditions for the proposed heuristic. As an application, we consider detecting malicious cyber data attacks on power systems and test our proposed heuristic on the IEEE 39-bus testbed.
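
The core of the heuristic is easy to prototype: slide past outputs into a history matrix and watch its largest singular value, which jumps when the output statistics change abruptly. A minimal numpy sketch (window length, noise level, and the detection factor are illustrative choices, not values from the paper):

```python
import numpy as np

def history_matrix(y, rows):
    """History matrix whose columns are lagged windows of the output signal."""
    cols = len(y) - rows + 1
    return np.column_stack([y[i:i + rows] for i in range(cols)])

rng = np.random.default_rng(1)
quiet = 0.1 * rng.standard_normal(40)   # bounded, small state variations
attacked = quiet.copy()
attacked[20:] += 5.0                    # abrupt change injected at t = 20

s_quiet = np.linalg.svd(history_matrix(quiet, 8), compute_uv=False)[0]
s_attack = np.linalg.svd(history_matrix(attacked, 8), compute_uv=False)[0]

print(s_attack > 5 * s_quiet)  # True: the jump inflates the top singular value
```

A monitor would flag a change whenever the top singular value exceeds a calibrated threshold over the quiet baseline.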

Pajic, M., Weimer, J., Bezzo, N., Tabuada, P., Sokolsky, O., Insup Lee, Pappas, G.J..  2014.  Robustness of attack-resilient state estimators. Cyber-Physical Systems (ICCPS), 2014 ACM/IEEE International Conference on. :163-174.

The interaction between information technology and the physical world makes cyber-physical systems (CPS) vulnerable to malicious attacks beyond standard cyber attacks, which has motivated the need for attack-resilient state estimation. Yet existing state estimators rely on the unrealistic assumption that the exact system model is known. Consequently, in this work we present a method for state estimation in the presence of attacks for systems with noise and modeling errors. When the estimated states are used by a state-based feedback controller, we show that the attacker cannot destabilize the system by exploiting the difference between the model used for state estimation and the real physical dynamics of the system. Furthermore, we describe how implementation issues such as jitter, latency, and synchronization errors can be mapped into the parameters of the state estimation procedure that describe modeling errors, and we provide a bound on the state estimation error caused by modeling errors. This enables mapping control performance requirements into real-time (i.e., timing-related) specifications imposed on the underlying platform. Finally, we illustrate and experimentally evaluate this approach on an unmanned ground vehicle case study.

Butt, M.I.A..  2014.  BIOS integrity an advanced persistent threat. Information Assurance and Cyber Security (CIACS), 2014 Conference on. :47-50.

The Basic Input Output System (BIOS) is the most important component of a computer system by virtue of its role: it holds the code that is executed at startup. It is considered the trusted computing base, and its integrity is extremely important for the smooth functioning of the system. On the other hand, the BIOS of modern computer systems (servers, laptops, desktops, network devices, and other embedded systems) can easily be upgraded using a flash or capsule mechanism, which can add new vulnerabilities through malicious code, accidental incidents, or deliberate attack. The recent attack on an Iranian nuclear facility (Stuxnet) [1:2] is an example of an advanced persistent attack. This attack vector adds a new dimension to the information security (IS) spectrum, which needs to be guarded by a holistic approach employed at the enterprise level. Malicious BIOS upgrades can also cause denial of service, theft of information, or the addition of new backdoors, which attackers can exploit to cause business loss, passive eavesdropping, or total destruction of the system without the user's knowledge. To address this challenge, a capability for verifying BIOS integrity needs to be developed, and due diligence must be observed for proactive resolution of the issue. This paper explains BIOS integrity threats and presents a prevention strategy for effective and proactive resolution.

Shaohua Tang, Lingling Xu, Niu Liu, Xinyi Huang, Jintai Ding, Zhiming Yang.  2014.  Provably Secure Group Key Management Approach Based upon Hyper-Sphere. Parallel and Distributed Systems, IEEE Transactions on. 25:3253-3263.

Secure group communication systems have become increasingly important for many emerging network applications, and an efficient and robust group key management approach is indispensable to such systems. Motivated by the theory of hyper-spheres, this paper presents a new group key management approach with a group controller (GC). In our design, a hyper-sphere is constructed for a group, and each member in the group corresponds to a point on the hyper-sphere, called the member's private point. The GC computes the central point of the hyper-sphere, whose "distance" from each member's private point is, intuitively, identical. The central point is published so that each member can compute a common group key using a function that takes the member's private point and the central point of the hyper-sphere as input. This approach is provably secure under the pseudo-random function (PRF) assumption. Compared with similar schemes, by both theoretical analysis and experiments, our scheme (1) significantly reduces the memory and computation load of each group member; (2) can efficiently handle massive membership changes with only two re-keying messages, namely the central point of the hyper-sphere and a random number; and (3) is efficient and very scalable for large groups.
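
The geometric intuition can be pictured in plain Euclidean space: every member's private point lies on a common hyper-sphere, so each member can recompute the same radius from the published center. This numpy sketch is only that picture; the actual scheme works over a finite field with a PRF, and all dimensions and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
dim, members, radius = 4, 5, 3.0

center = rng.normal(size=dim)                 # published by the GC
# Private points: random directions scaled onto the sphere of the chosen radius.
dirs = rng.normal(size=(members, dim))
points = center + radius * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

# Each member derives the shared value from its own point plus the center.
keys = [np.linalg.norm(p - center) for p in points]
print(round(keys[0], 6))  # 3.0 -- every member recovers the same value
```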

Zhongming Jin, Cheng Li, Yue Lin, Deng Cai.  2014.  Density Sensitive Hashing. Cybernetics, IEEE Transactions on. 44:1362-1371.

Nearest neighbor search is a fundamental problem in research fields such as machine learning, data mining, and pattern recognition. Recently, hashing-based approaches, for example locality sensitive hashing (LSH), have proved effective for scalable high-dimensional nearest neighbor search. Many hashing algorithms have their theoretical roots in random projection, but because these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH), which can be regarded as an extension of LSH. By exploiting the geometric structure of the data, DSH avoids purely random projection selection and uses the projective functions that best agree with the distribution of the data. Extensive experimental results on real-world data sets show that the proposed method outperforms state-of-the-art hashing approaches.
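
The random-projection family that DSH refines is only a few lines of numpy: each hash bit is the sign of a random projection, so nearby points agree on most bits while distant points disagree. This sketch shows plain LSH; the data-dependent projection selection that makes DSH "density sensitive" is not shown, and the code length and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((16, 5))   # 16 random hyperplanes in R^5

def lsh_code(x):
    """Binary code: one bit per random hyperplane (sign of the projection)."""
    return (W @ x > 0).astype(int)

x = rng.standard_normal(5)
near = x + 0.01 * rng.standard_normal(5)   # small perturbation of x
far = -x                                   # antipodal point

agree_near = int(np.sum(lsh_code(x) == lsh_code(near)))
agree_far = int(np.sum(lsh_code(x) == lsh_code(far)))
print(agree_far)  # 0: the antipodal point flips every projection sign
```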

Plesca, C., Morogan, L..  2014.  Efficient and robust perceptual hashing using log-polar image representation. Communications (COMM), 2014 10th International Conference on. :1-6.

Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These hashes find extensive applications in content authentication, image indexing for database search, and watermarking. Modern robust hashing algorithms consist of feature extraction, a randomization stage to introduce non-invertibility, followed by quantization and binary encoding to produce a binary hash. This paper describes a novel algorithm for generating an image hash based on Log-Polar transform features. The Log-Polar transform is part of the Fourier-Mellin transformation, often used in image recognition and registration due to its invariance to geometric operations. First, we show that the proposed perceptual hash is resistant to content-preserving operations such as compression, noise addition, moderate geometric transformations, and filtering. Second, we illustrate the discriminative capability of our hash, which rapidly distinguishes between two perceptually different images. Third, we study the security of our method for image authentication purposes. Finally, we show that the proposed hashing method provides both excellent security and robustness.

Chi Sing Chum, Changha Jun, Xiaowen Zhang.  2014.  Implementation of randomize-then-combine constructed hash function. Wireless and Optical Communication Conference (WOCC), 2014 23rd. :1-6.

Hash functions such as the SHA (secure hash algorithm) and MD (message digest) families, which are built upon the Merkle-Damgard construction, suffer many attacks due to the iterative nature of their block-by-block message processing. Chum and Zhang [4] proposed a new hash function construction that brings the randomize-then-combine technique, previously used in incremental hash functions, to iterative hash functions. In this paper, we implement this hash construction in three ways, distinguished by their corresponding padding methods, and conduct experiments in a parallel multi-threaded programming setting. The results show that the speed of the proposed hash function is no worse than that of SHA-1.
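
The randomize-then-combine pattern itself is compact: hash each message block together with its index (the "randomize" step), then combine the per-block digests with a group operation such as XOR (the "combine" step), which also makes the blocks independently processable in parallel. A stdlib sketch of the pattern only; the block size, zero-padding, and the XOR combine are simplifications, not the paper's three padding variants or its exact construction:

```python
import hashlib

BLOCK = 16  # bytes per block (illustrative choice)

def randomize_then_combine(message: bytes) -> bytes:
    # Zero-pad to a whole number of blocks (the paper compares padding methods).
    if len(message) % BLOCK:
        message += b"\x00" * (BLOCK - len(message) % BLOCK)
    acc = bytes(32)
    for i in range(0, len(message), BLOCK):
        # "Randomize": hash each block together with its index so that
        # reordering blocks changes the digest.
        h = hashlib.sha256(i.to_bytes(8, "big") + message[i:i + BLOCK]).digest()
        # "Combine": XOR the per-block digests together.
        acc = bytes(a ^ b for a, b in zip(acc, h))
    return acc

d1 = randomize_then_combine(b"hello world, incremental hashing")
d2 = randomize_then_combine(b"hello world, incremental hashing")
print(d1 == d2, len(d1))
```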

Kafai, M., Eshghi, K., Bhanu, B..  2014.  Discrete Cosine Transform Locality-Sensitive Hashes for Face Retrieval. Multimedia, IEEE Transactions on. 16:1090-1103.

Descriptors such as local binary patterns perform well for face recognition, but searching large databases using such descriptors has been problematic due to the cost of linear search and the inadequate performance of existing indexing methods. We present Discrete Cosine Transform (DCT) hashing for creating index structures for face descriptors. Hashes play the role of keywords: an index is created and queried to find the images most similar to the query image. Common hash suppression is used to improve retrieval efficiency and accuracy. Results are shown on a combination of six publicly available face databases (LFW, FERET, FEI, BioID, Multi-PIE, and RaFD). DCT hashing achieves significantly better retrieval accuracy and is more efficient than other popular state-of-the-art hash algorithms.
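
The DCT side of such a hash can be sketched in the style of the well-known pHash: take the low-frequency block of the 2D DCT and keep only the sign pattern around the median, so small perturbations leave most bits unchanged. This is a generic illustration, not the paper's descriptor pipeline or its locality-sensitive index; block size and the random test images are assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def dct_hash(img, keep=8):
    """Hash = sign pattern of the low-frequency 2D-DCT block vs. its median."""
    C = dct_matrix(img.shape[0])
    coeffs = C @ img @ C.T                       # 2D DCT
    low = coeffs[:keep, :keep].flatten()[1:]     # low frequencies, DC dropped
    return (low > np.median(low)).astype(int)

rng = np.random.default_rng(2)
face = rng.uniform(size=(32, 32))
noisy = face + 0.01 * rng.standard_normal((32, 32))   # near-duplicate
other = rng.uniform(size=(32, 32))                    # unrelated image

h1, h2, h3 = dct_hash(face), dct_hash(noisy), dct_hash(other)
print(int(np.sum(h1 != h2)), int(np.sum(h1 != h3)))   # small vs. large distance
```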

Ghosh, S..  2014.  On the implementation of McEliece with CCA2 indeterminacy by SHA-3. Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. :2804-2807.

This paper deals with the design and implementation of the post-quantum public-key algorithm McEliece. Seamless incorporation of a new error generator and a new SHA-3 module provides higher indeterminacy and more randomization of the original McEliece algorithm, achieving the CCA2 security standard. Due to the lightweight and high-speed implementation of the SHA-3 module, the proposed 128-bit secure McEliece architecture provides 6% higher performance in only 0.78 times the area of the best known existing design.

Nemoianu, I.-D., Greco, C., Cagnazzo, M., Pesquet-Popescu, B..  2014.  On a Hashing-Based Enhancement of Source Separation Algorithms Over Finite Fields With Network Coding Perspectives. Multimedia, IEEE Transactions on. 16:2011-2024.

Blind Source Separation (BSS) deals with the recovery of source signals from a set of observed mixtures when little or no knowledge of the mixing process is available. BSS finds application in network coding, where relaying linear combinations of packets maximizes throughput and increases loss immunity. By relieving the nodes of the need to send the combination coefficients, the overhead cost is largely reduced. However, the scaling ambiguity of the technique and the quasi-uniformity of compressed media sources make it unfit, in its present state, for multimedia transmission. To open new practical applications for BSS in multimedia transmission, we recently proposed using a non-linear encoding to increase the discriminating power of classical entropy-based separation methods. Here, we propose to append to each source a non-linear message digest, which incurs a smaller overhead than per-symbol encoding and can be more easily tuned. Our results prove that our algorithm provides high decoding rates for different media types such as image, audio, and video when the transmitted messages are smaller than 1.5 kilobytes, which is typically the case in a realistic transmission scenario.

Rathmair, M., Schupfer, F., Krieg, C..  2014.  Applied formal methods for hardware Trojan detection. Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. :169-172.

This paper addresses the potential danger of using integrated circuits that contain malicious hardware modifications hidden in the silicon structure. A so-called hardware Trojan may be added at several stages of the chip development process. This work concentrates on formal hardware Trojan detection during the design phase and highlights applied verification techniques. Selected methods are discussed and combined to increase an introduced "Trojan Assurance Level".

Kitsos, Paris, Voyiatzis, Artemios G..  2014.  Towards a hardware Trojan detection methodology. Embedded Computing (MECO), 2014 3rd Mediterranean Conference on. :18-23.

Malicious hardware is a realistic threat. Malicious functionality can be inserted into a device as deep as the hardware design flow, long before the silicon product is manufactured. Towards developing a hardware Trojan horse detection methodology, we analyze the capabilities and limitations of existing techniques, framing a testing strategy for efficiently uncovering hardware Trojan horses in mass-produced integrated circuits.

Djouadi, S.M., Melin, A.M., Ferragut, E.M., Laska, J.A., Jin Dong.  2014.  Finite energy and bounded attacks on control system sensor signals. American Control Conference (ACC), 2014. :1716-1722.

Control system networks are increasingly being connected to enterprise-level networks. These connections leave critical industrial control systems vulnerable to cyber-attacks. Most of the effort in protecting these cyber-physical systems (CPS) from attacks has gone into securing the networks using information security techniques. Effort has also been applied to increasing the protection and reliability of the control system against random hardware and software failures. However, the inability of information security techniques to protect against all intrusions means that the control system must be resilient to various signal attacks, for which new analysis methods need to be developed. In this paper, sensor signal attacks are analyzed for observer-based controlled systems. The threat surface for sensor signal attacks is subdivided into denial of service, finite energy, and bounded attacks. In particular, the error signals between the states of attack-free systems and systems subject to these attacks are quantified. Optimal sensor and actuator signal attacks for finite and infinite horizon linear quadratic (LQ) control, in terms of maximizing the corresponding cost functions, are computed. The closed-loop systems under optimal signal attacks are provided. Finally, an illustrative numerical example using a power generation network is provided, together with distributed LQ controllers.

Tuia, D., Munoz-Mari, J., Rojo-Alvarez, J.L., Martinez-Ramon, M., Camps-Valls, G..  2014.  Explicit Recursive and Adaptive Filtering in Reproducing Kernel Hilbert Spaces. Neural Networks and Learning Systems, IEEE Transactions on. 25:1413-1419.

This brief presents a methodology to develop recursive filters in reproducing kernel Hilbert spaces. Unlike previous approaches that exploit the kernel trick on filtered and then mapped samples, we explicitly define the model recursivity in the Hilbert space. For that, we exploit some properties of functional analysis and recursive computation of dot products without the need of preimaging or a training dataset. We illustrate the feasibility of the methodology in the particular case of the γ-filter, which is an infinite impulse response filter with controlled stability and memory depth. Different algorithmic formulations emerge from the signal model. Experiments in chaotic and electroencephalographic time series prediction, complex nonlinear system identification, and adaptive antenna array processing demonstrate the potential of the approach for scenarios where recursivity and nonlinearity have to be readily combined.

Arablouei, R., Werner, S., Dogancay, K..  2014.  Analysis of the Gradient-Descent Total Least-Squares Adaptive Filtering Algorithm. Signal Processing, IEEE Transactions on. 62:1256-1264.

The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for error in both input and output data. We study the local convergence of the GD-TLS algorithm and find bounds on its step size that ensure stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with simulation experiments.

Bhotto, M.Z.A., Antoniou, A..  2014.  Affine-Projection-Like Adaptive-Filtering Algorithms Using Gradient-Based Step Size. Circuits and Systems I: Regular Papers, IEEE Transactions on. 61:2048-2056.

A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration is reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancellation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out, and its accuracy is verified by simulation results in a system-identification application.

Soleimani, M.T., Kahvand, M..  2014.  Defending packet dropping attacks based on dynamic trust model in wireless ad hoc networks. Mediterranean Electrotechnical Conference (MELECON), 2014 17th IEEE. :362-366.

Rapid advances in wireless ad hoc networks have led to their increasing use in real-life applications. Since wireless ad hoc networks have no centralized infrastructure or management, they are vulnerable to several security threats. Malicious packet dropping is a serious attack against these networks, in which an adversary node drops all or part of the received packets instead of forwarding them to the next hop along the path. A dangerous variant of this attack is the black hole: after absorbing network traffic, the malicious node drops all received packets, forming a denial of service (DoS) attack. In this paper, a dynamic trust model to defend the network against this attack is proposed. In this approach, a node initially trusts all immediate neighbors; using feedback on neighbors' behaviors, the node updates the corresponding trust values. Simulation results in NS-2 show that the attack is detected successfully with low false positive probability.
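
A feedback-driven trust update of this kind can be sketched as an exponentially weighted average: each packet a neighbor forwards nudges its trust up, each observed drop pulls it down, and a neighbor falling below a threshold is flagged as a suspected black hole. The weight and threshold below are illustrative assumptions, not the paper's parameters:

```python
def update_trust(trust, forwarded, alpha=0.2):
    """EWMA trust update: feedback is 1.0 for a forwarded packet, 0.0 for a drop."""
    feedback = 1.0 if forwarded else 0.0
    return (1 - alpha) * trust + alpha * feedback

THRESHOLD = 0.4   # illustrative flagging threshold

trust = 1.0                               # neighbors start fully trusted
for fwd in [True, True] + [False] * 8:    # neighbor turns into a black hole
    trust = update_trust(trust, fwd)

print(trust < THRESHOLD)  # True: repeated drops push trust below the threshold
```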

Barani, F..  2014.  A hybrid approach for dynamic intrusion detection in ad hoc networks using genetic algorithm and artificial immune system. Intelligent Systems (ICIS), 2014 Iranian Conference on. :1-6.

A mobile ad hoc network (MANET) is a self-created and self-organized network of wireless mobile nodes. Due to the special characteristics of these networks, security is difficult to achieve, and applying current intrusion detection techniques developed for fixed networks is not sufficient for MANETs. In this paper, we propose an approach based on a genetic algorithm (GA) and an artificial immune system (AIS), called GAAIS, for dynamic intrusion detection in AODV-based MANETs. GAAIS can adapt itself to network topology changes using two updating methods: partial and total. Each normal feature vector extracted from network traffic is represented by a hypersphere with a fixed radius. A set of spherical detectors is generated using the NicheMGA algorithm to cover the non-self space, and these detectors are used to detect anomalies in network traffic. The performance of GAAIS is evaluated for detecting several types of routing attacks simulated in the NS-2 simulator, such as flooding, black hole, neighbor, rushing, and wormhole attacks. Experimental results show that GAAIS is more efficient than similar approaches.
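
The AIS component is negative selection with spherical detectors: generate random candidate centers, discard any whose fixed-radius sphere covers a normal (self) sample, and flag traffic that falls inside a surviving detector. A minimal numpy sketch with rejection sampling in place of the NicheMGA optimization; the feature dimension, radius, and sample counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
RADIUS = 0.15

# "Self" set: normal traffic feature vectors, clustered in one corner.
self_set = rng.uniform(0.0, 0.4, size=(200, 2))

# Negative selection: keep only candidate detectors covering no self sample.
candidates = rng.uniform(0.0, 1.0, size=(500, 2))
detectors = np.array([c for c in candidates
                      if np.min(np.linalg.norm(self_set - c, axis=1)) > RADIUS])

def is_anomalous(x):
    """Anomalous if x falls inside any surviving spherical detector."""
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= RADIUS))

print(is_anomalous(np.array([0.9, 0.9])),   # far from the self region
      is_anomalous(np.array([0.2, 0.2])))   # inside the self region
```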

2015-05-05
Zadeh, B.Q., Handschuh, S..  2014.  Random Manhattan Indexing. Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on. :203-208.

Vector space models (VSMs) are mathematically well-defined frameworks that have been widely used in text processing. In these models, high-dimensional, often sparse vectors represent text units. In an application, the similarity of vectors, and hence of the text units they represent, is computed by a distance formula. The high dimensionality of the vectors, however, is a barrier to the performance of methods that employ VSMs, so a dimensionality reduction technique is employed to alleviate this problem. This paper introduces a new method, called Random Manhattan Indexing (RMI), for the construction of L1-normed VSMs at reduced dimensionality. RMI combines the construction of a VSM and dimension reduction into an incremental, and thus scalable, procedure. To attain its goal, RMI employs sparse Cauchy random projections.
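
The key fact behind Cauchy random projections is the 1-stability of the Cauchy distribution: each coordinate of R(x - y) is Cauchy-distributed with scale ||x - y||_1, so the median of the absolute coordinate differences estimates the L1 distance after projection. A dense-Cauchy numpy sketch of that estimator (RMI itself uses a sparse, incremental variant, and all dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
d, k = 1000, 400   # original and reduced dimensionality

# Dense Cauchy projection matrix (RMI sparsifies this for scalability).
R = rng.standard_cauchy(size=(k, d))

x = rng.uniform(0, 1, d)   # two high-dimensional text-unit vectors
y = rng.uniform(0, 1, d)

u, v = R @ x, R @ y        # reduced-dimension representations

true_l1 = np.sum(np.abs(x - y))
est_l1 = np.median(np.abs(u - v))   # median absolute difference estimates L1

print(round(float(true_l1), 1), round(float(est_l1), 1))
```

The sample median, rather than a mean, is essential here because Cauchy variables have no finite mean.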