Publications of Interest
The Publications of Interest section contains bibliographic citations, abstracts (when available), and links for specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies cover recent scholarly research that has been presented or published within the past year. Some entries update work presented in previous years; others introduce new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#:14-3730)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Actuator Security
At the October Quarterly meeting of the Lablets at the University of Maryland, discussion about resiliency and composability identified the need to build secure sensors and actuators. The works cited here address the problems of actuator security and were presented or published in 2014.
Communications Security
This collection of citations covers a range of issues in communications security from a theoretic or scientific level. Included are topics such as secure MSR regenerating codes, OS fingerprinting, jamming resilient codes, and irreducible pentanomials. These works were presented or published in 2014.
Sasidharan, B.; Kumar, P.V.; Shah, N.B.; Rashmi, K.V.; Ramachandran, K., "Optimality of the Product-Matrix Construction For Secure MSR Regenerating Codes," Communications, Control and Signal Processing (ISCCSP), 2014 6th International Symposium on, pp. 10-14, 21-23 May 2014. doi: 10.1109/ISCCSP.2014.6877804 In this paper, we consider the security of exact-repair regenerating codes operating at the minimum-storage-regenerating (MSR) point. The security requirement (introduced in Shah et al.) is that no information about the stored data file must be leaked in the presence of an eavesdropper who has access to the contents of ℓ1 nodes as well as all the repair traffic entering a second disjoint set of ℓ2 nodes. We derive an upper bound on the size of a data file that can be securely stored, which holds whenever ℓ2 ≤ d − k + 1. This upper bound proves the optimality of the product-matrix-based construction of secure MSR regenerating codes by Shah et al.
Keywords: encoding; matrix algebra; MSR point; data file; eavesdropper; exact repair regenerating code security; minimum storage regenerating point; product matrix; product matrix construction; repair traffic; secure MSR regenerating codes; Bandwidth; Data collection; Entropy; Maintenance engineering; Random variables; Security; Upper bound; MSR codes; Secure regenerating codes; product-matrix construction; regenerating codes (ID#: 15-3600)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877804&isnumber=6877795
Gu, Y.; Fu, Y.; Prakash, A.; Lin, Z.; Yin, H., "Multi-Aspect, Robust, and Memory Exclusive Guest OS Fingerprinting," Cloud Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 11 July 2014. doi: 10.1109/TCC.2014.2338305 Precise fingerprinting of an operating system (OS) is critical to many security and forensics applications in the cloud, such as virtual machine (VM) introspection, penetration testing, guest OS administration, kernel dump analysis, and memory forensics. The existing OS fingerprinting techniques primarily inspect network packets or CPU states, and they all fall short in precision and usability. As the physical memory of a VM always exists in all these applications, in this article, we present OS-SOMMELIER+, a multi-aspect, memory exclusive approach for precise and robust guest OS fingerprinting in the cloud. It works as follows: given a physical memory dump of a guest OS, OS-SOMMELIER+ first uses a code hash based approach from the kernel code aspect to determine the guest OS version. If the code hash approach fails, OS-SOMMELIER+ then uses a kernel data signature based approach from the kernel data aspect to determine the version. We have implemented a prototype system, and tested it with a number of Linux kernels. Our evaluation results show that the code hash approach is faster but can only fingerprint the known kernels, and the data signature approach complements the code hash approach and can fingerprint even unknown kernels.
Keywords: (not provided) (ID#: 15-3601)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853383&isnumber=6562694
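The code-hash aspect described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the names (`KNOWN_KERNELS`, `register_kernel`) are ours, and the real OS-SOMMELIER+ must first extract and normalize kernel code pages from a physical memory dump before hashing.

```python
import hashlib

# Hypothetical signature database: SHA-256 of (normalized) kernel code -> OS version.
KNOWN_KERNELS = {}

def register_kernel(version, code_bytes):
    """Record the hash of a known kernel's code region."""
    KNOWN_KERNELS[hashlib.sha256(code_bytes).hexdigest()] = version

def fingerprint(code_bytes):
    """Return the matching OS version, or None for an unknown kernel
    (where the paper's data-signature approach would take over)."""
    return KNOWN_KERNELS.get(hashlib.sha256(code_bytes).hexdigest())

register_kernel("linux-3.13", b"\x55\x48\x89\xe5" * 64)
```

An exact hash match is fast but, as the evaluation notes, only identifies kernels already in the database; unknown kernels fall through to the data-signature path.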
Liu, Yuanyuan; Cheng, Jianping; Zhang, Li; Xing, Yuxiang; Chen, Zhiqiang; Zheng, Peng, "A Low-Cost Dual Energy CT System With Sparse Data," Tsinghua Science and Technology, vol. 19, no. 2, pp. 184-194, April 2014. doi: 10.1109/TST.2014.6787372 Dual Energy CT (DECT) has recently gained significant research interest owing to its ability to discriminate materials, and hence is widely applied in the field of nuclear safety and security inspection. With the current technological developments, DECT can be typically realized by using two sets of detectors, one for detecting lower energy X-rays and another for detecting higher energy X-rays. This makes the imaging system expensive, limiting its practical implementation. In 2009, our group performed a preliminary study on a new low-cost system design, using only a complete data set for the lower energy level and a sparse data set for the higher energy level. This could significantly reduce the cost of the system, as it contains a much smaller number of detector elements. The reconstruction method is the key point of this system. In the present study, we further validated this system and proposed a robust method involving three main steps: (1) iteratively estimate the missing data with TV constraints; (2) use the reconstruction from the complete lower-energy CT data set to form an initial estimate of the projection data for the higher energy level; (3) use ordered views to accelerate the computation. Numerical simulations with different numbers of detector elements have also been examined. The results obtained in this study demonstrate that 1 + 14% CT data is sufficient to provide a rather good reconstruction of both the effective atomic number and electron density distributions of the scanned object, instead of two complete CT data sets.
Keywords: Computed tomography; Detectors; Energy states; Image reconstruction; Reconstruction algorithms; TV; X-rays; ART-TV; X-ray imaging; dual energy CT system; material discrimination; reconstruction; sparse samples (ID#: 15-3602)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787372&isnumber=6787360
Yao, H.; Silva, D.; Jaggi, S.; Langberg, M., "Network Codes Resilient to Jamming and Eavesdropping," Networking, IEEE/ACM Transactions on, vol. PP, no. 99, pp. 1-1, 3 Feb. 2014. doi: 10.1109/TNET.2013.2294254 We consider the problem of communicating information over a network secretly and reliably in the presence of a hidden adversary who can eavesdrop and inject malicious errors. We provide polynomial-time distributed network codes that are information-theoretically rate-optimal for this scenario, improving on the rates achievable in prior work by Ngai et al. Our main contribution shows that as long as the sum of the number of links the adversary can jam (denoted by Z_O) and the number of links he can eavesdrop on (denoted by Z_I) is less than the network capacity (denoted by C) (i.e., Z_O + Z_I < C), our codes can communicate (with vanishingly small error probability) a single bit correctly and without leaking any information to the adversary. We then use this scheme as a module to design codes that allow communication at the source rate of C − Z_O when there are no security requirements, and codes that allow communication at the source rate of C − Z_O − Z_I while keeping the communicated message provably secret from the adversary. Interior nodes are oblivious to the presence of adversaries and perform random linear network coding; only the source and destination need to be tweaked. We also prove that the rate-region obtained is information-theoretically optimal. In proving our results, we correct an error in prior work by a subset of the authors in this paper.
Keywords: Error probability; Jamming; Network coding; Robustness; Transforms; Vectors; Achievable rates; adversary; error control; network coding; secrecy (ID#: 15-3603)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730968&isnumber=4359146
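The rate region claimed in the abstract is easy to state in code; the helper below (our naming, purely illustrative) returns the two achievable source rates when the condition Z_O + Z_I < C holds.

```python
def achievable_rates(C, Z_O, Z_I):
    """Rates from the paper's main result: C - Z_O without secrecy
    requirements, C - Z_O - Z_I with provable secrecy, valid only
    in the regime Z_O + Z_I < C."""
    if Z_O + Z_I >= C:
        return None  # outside the regime covered by the result
    return {"reliable": C - Z_O, "reliable_and_secret": C - Z_O - Z_I}
```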
Yang, J.-S.; Chang, J.-M.; Pai, K.-J.; Chan, H.-C., "Parallel Construction of Independent Spanning Trees on Enhanced Hypercubes," Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 5 Nov 2014. doi: 10.1109/TPDS.2014.2367498 The use of multiple independent spanning trees (ISTs) for data broadcasting in networks provides a number of advantages, including the increase of fault-tolerance, bandwidth and security. Thus, the designs of multiple ISTs on several classes of networks have been widely investigated. In this paper, we give an algorithm to construct ISTs on enhanced hypercubes Q_{n,k}, which contain folded hypercubes as a subclass. Moreover, we show that these ISTs are near optimal for heights and path lengths. Let D(Q_{n,k}) denote the diameter of Q_{n,k}. If n − k is odd or n − k ∈ {2, n}, we show that all the heights of ISTs are equal to D(Q_{n,k}) + 1, and thus are optimal. Otherwise, we show that each path from a node to the root in a spanning tree has length at most D(Q_{n,k}) + 2. In particular, no more than 2.15% of nodes have the maximum path length. As a by-product, we improve the upper bound of the wide diameter (respectively, fault diameter) of Q_{n,k} from these path lengths.
Keywords: Broadcasting; Educational institutions; Electronic mail; Fault tolerance; Fault tolerant systems; Hypercubes; Vegetation; enhanced hypercubes; fault diameter; folded hypercubes; independent spanning trees; interconnection networks; wide diameter (ID#: 15-3604)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6948321&isnumber=4359390
Ben Othman, S.; Trad, A.; Youssef, H., "Security Architecture For At-Home Medical Care Using Wireless Sensor Network," Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, pp. 304-309, 4-8 Aug. 2014. doi: 10.1109/IWCMC.2014.6906374 Distributed wireless sensor network technologies have become one of the major research areas in healthcare industries due to rapid maturity in improving the quality of life. Medical Wireless Sensor Network (MWSN) via continuous monitoring of vital health parameters over a long period of time can enable physicians to make more accurate diagnosis and provide better treatment. The MWSNs provide the options for flexibilities and cost saving to patients and healthcare industries. Medical data sensors on patients produce an increasingly large volume of increasingly diverse real-time data. The transmission of this data through hospital wireless networks becomes a crucial problem, because the health information of an individual is highly sensitive. It must be kept private and secure. In this paper, we propose a security model to protect the transfer of medical data in hospitals using MWSNs. We propose Compressed Sensing + Encryption as a strategy to achieve low-energy secure data transmission in sensor networks.
Keywords: body sensor networks; compressed sensing; cryptography; health care; hospitals; patient monitoring; MWSN; at-home medical care;compressed sensing-encryption; distributed wireless sensor network technologies; healthcare industries; hospital wireless networks; low-energy secure data transmission; medical data sensors; medical wireless sensor network; security architecture; vital health parameter continuous monitoring; Encryption; Medical services; Sensors; Servers; Wireless sensor networks; Data Transmission; Encryption; Medical Wireless Sensor Network; Security (ID#: 15-3605)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906374&isnumber=6906315
Xi Xiong; Haining Fan, "GF(2^n) Bit-Parallel Squarer Using Generalised Polynomial Basis For New Class Of Irreducible Pentanomials," Electronics Letters, vol. 50, no. 9, pp. 655-657, April 24 2014. doi: 10.1049/el.2014.0006 Explicit formulae and complexities of bit-parallel GF(2^n) squarers for a new class of irreducible pentanomials x^n + x^(n-1) + x^k + x + 1, where n is odd and 1 < k < (n − 1)/2, are presented. The squarer is based on the generalised polynomial basis of GF(2^n). Its gate delay matches the best results, whereas its XOR gate complexity is n + 1, which is only about two thirds of the current best results.
Keywords: logic gates; polynomials; GF(2^n) bit-parallel squarer; XOR gate; gate delay; generalised polynomial basis; irreducible pentanomial (ID#: 15-3606)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6809279&isnumber=6809270
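For background, squaring in GF(2^n) is GF(2)-linear: it moves bit i of the operand to bit 2i, and the doubled-degree result is then reduced modulo the irreducible polynomial. The generic (unoptimized) word-level sketch below illustrates this; the test reduces modulo x^8 + x^4 + x^3 + x + 1, a well-known irreducible pentanomial that is not in the paper's special class.

```python
def gf2_square(a, f, n):
    """Square element a of GF(2^n) = GF(2)[x]/(f), with a and f given
    as integer bitmasks (bit i of a is the coefficient of x^i)."""
    # Squaring over GF(2) spreads the bits: bit i of a moves to bit 2i.
    s = 0
    for i in range(n):
        if (a >> i) & 1:
            s |= 1 << (2 * i)
    # Reduce modulo the degree-n irreducible polynomial f.
    for i in range(2 * n - 2, n - 1, -1):
        if (s >> i) & 1:
            s ^= f << (i - n)
    return s
```

The paper's contribution is a closed-form bit-parallel circuit for its pentanomial class whose XOR count is n + 1; the loop above is merely the naive software equivalent of that computation.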
Mo, Y.; Sinopoli, B., "Secure Estimation in the Presence of Integrity Attacks," Automatic Control, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 21 August 2014. doi: 10.1109/TAC.2014.2350231 We consider the estimation of a scalar state based on m measurements that can be potentially manipulated by an adversary. The attacker is assumed to have full knowledge about the true value of the state to be estimated and about the value of all the measurements. However, the attacker has limited resources and can only manipulate up to l of the m measurements. The problem is formulated as a minimax optimization, where one seeks to construct an optimal estimator that minimizes the "worst-case" expected cost against all possible manipulations by the attacker. We show that if the attacker can manipulate at least half the measurements (l ≥ m/2), then the optimal worst-case estimator should ignore all measurements and be based solely on the a-priori information. We provide the explicit form of the optimal estimator when the attacker can manipulate less than half the measurements (l < m/2), which is based on m − 2l local estimators. We further prove that such an estimator can be reduced into simpler forms for two special cases, i.e., either the estimator is symmetric and monotone or m = 2l + 1. Finally we apply the proposed methodology in the case of Gaussian measurements.
Keywords: Cost function; Security; Sensors; State estimation; Vectors (ID#: 15-3607)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881627&isnumber=4601496
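The structure of this result can be illustrated with a simple robust combiner. The sketch below is not the paper's optimal estimator; it only mirrors the two regimes: fall back to the prior when l ≥ m/2, and otherwise discard the l largest and l smallest measurements (a trimmed mean over a window of m − 2l values, echoing the m − 2l local estimators the paper analyzes).

```python
def robust_estimate(measurements, l, prior):
    """Illustrative robust combiner: if l >= m/2 sensors may be
    compromised, ignore all measurements and return the prior;
    otherwise trim the l smallest and l largest values and average
    the remaining m - 2l measurements."""
    m = len(measurements)
    if 2 * l >= m:
        return prior
    trimmed = sorted(measurements)[l:m - l]
    return sum(trimmed) / len(trimmed)
```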
Yan-Xiao Liu, "Efficient t-Cheater Identifiable (k, n) Secret-Sharing Scheme for t ⩽ [((k - 2)/2)]," Information Security, IET, vol. 8, no. 1, pp. 37-41, Jan. 2014. doi: 10.1049/iet-ifs.2012.0322 In Eurocrypt 2011, Obana proposed a (k, n) secret-sharing scheme that can identify up to [((k - 2)/2)] cheaters. The number of cheaters that this scheme can identify meets its upper bound. When the number of cheaters t satisfies t ≤ [((k - 1)/3)], this scheme is extremely efficient since the size of share |Vi| can be written as |Vi| = |S|/ε, which almost meets its lower bound, where |S| denotes the size of secret and ε denotes the successful cheating probability; when the number of cheaters t is close to [((k - 2)/2)], the size of share is upper bounded by |Vi| = (n·(t + 1)·2^(3t-1)|S|)/ε. A new (k, n) secret-sharing scheme capable of identifying [((k - 2)/2)] cheaters is presented in this study. Considering the general case that k shareholders are involved in secret reconstruction, the size of share of the proposed scheme is |Vi| = (2^(k-1)|S|)/ε, which is independent of the parameters t and n. On the other hand, the size of share in Obana's scheme can be rewritten as |Vi| = (n·(t + 1)·2^(k-1)|S|)/ε under the same condition. With respect to the size of share, the proposed scheme is more efficient than the previous one when the number of cheaters t is close to [((k - 2)/2)].
Keywords: probability; public key cryptography; (k, n) secret-sharing scheme; Obana's scheme; cheating probability; k shareholders; secret reconstruction (ID#: 15-3608)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687156&isnumber=6687150
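For readers new to the area, the underlying (k, n) threshold scheme, without any cheater identification, is Shamir's: a random degree-(k − 1) polynomial with the secret as constant term, evaluated at n distinct points. A minimal sketch over a prime field (the cheater-identifiable schemes discussed above add redundancy on top of this basic construction):

```python
import random

P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(secret, k, n):
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

Plain Shamir sharing is information-theoretically hiding but offers no way to tell which shareholders submitted forged shares; that is exactly the gap the t-cheater-identifiable schemes close, at the cost of the larger share sizes analyzed in the paper.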
Ta-Yuan Liu; Mukherjee, P.; Ulukus, S.; Shih-Chun Lin; Hong, Y.-W.P., "Secure DoF of MIMO Rayleigh block fading wiretap channels with No CSI anywhere," Communications (ICC), 2014 IEEE International Conference on, pp. 1959-1964, 10-14 June 2014. doi: 10.1109/ICC.2014.6883610 We consider the block Rayleigh fading multiple-input multiple-output (MIMO) wiretap channel with no prior channel state information (CSI) available at any of the terminals. The channel gains remain constant in a coherence time of T symbols, and then change to another independent realization. The transmitter, the legitimate receiver and the eavesdropper have nt, nr and ne antennas, respectively. We determine the exact secure degrees of freedom (s.d.o.f.) of this system when T ≥ 2 min(nt, nr). We show that, in this case, the s.d.o.f. is exactly (min(nt, nr) - ne)+(T - min(nt, nr))/T. The first term can be interpreted as the eavesdropper with ne antennas taking away ne antennas from both the transmitter and the legitimate receiver. The second term can be interpreted as a fraction of s.d.o.f. being lost due to the lack of CSI at the legitimate receiver. In particular, the fraction loss, min(nt, nr)/T, can be interpreted as the fraction of channel uses dedicated to training the legitimate receiver for it to learn its own CSI. We prove that this s.d.o.f. can be achieved by employing a constant norm channel input, which can be viewed as a generalization of discrete signalling to multiple dimensions.
Keywords: MIMO communication; Rayleigh channels; radio receivers; radio transmitters; receiving antennas; telecommunication security; transmitting antennas; CSI; MIMO Rayleigh block fading wiretap channels secure DoF; antennas; channel state information; degrees of freedom; discrete signalling; multiple input multiple output; receiver; s.d.o.f; transmitter; Coherence; Fading; MIMO; Receivers; Transmitting antennas; Upper bound (ID#: 15-3609)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883610&isnumber=6883277
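The s.d.o.f. expression in the abstract is a one-liner; the helper below (our naming) evaluates (min(nt, nr) − ne)⁺ · (T − min(nt, nr))/T in the regime T ≥ 2·min(nt, nr) for which the paper proves it.

```python
def sdof(nt, nr, ne, T):
    """Secure degrees of freedom (min(nt,nr) - ne)^+ * (T - min(nt,nr)) / T,
    valid when the coherence time satisfies T >= 2*min(nt, nr)."""
    n_min = min(nt, nr)
    assert T >= 2 * n_min, "formula shown only for T >= 2*min(nt, nr)"
    return max(n_min - ne, 0) * (T - n_min) / T
```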
Yanbing Liu; Qingyun Liu; Ping Liu; Jianlong Tan; Li Guo, "A Factor-Searching-Based Multiple String Matching Algorithm For Intrusion Detection," Communications (ICC), 2014 IEEE International Conference on, pp. 653-658, 10-14 June 2014. doi: 10.1109/ICC.2014.6883393 Multiple string matching plays a fundamental role in network intrusion detection systems. Automata-based multiple string matching algorithms like AC, SBDM and SBOM are widely used in practice, but the huge memory usage of automata prevents them from being applied to a large-scale pattern set. Meanwhile, poor cache locality of huge automata degrades the matching speed of algorithms. Here we propose a space-efficient multiple string matching algorithm BVM, which makes use of bit-vector and succinct hash table to replace the automata used in factor-searching-based algorithms. Space complexity of the proposed algorithm is O(rm^2 + Σ_{p∈P} |p|), that is more space-efficient than the classic automata-based algorithms. Experiments on datasets including Snort, ClamAV, URL blacklist and synthetic rules show that the proposed algorithm significantly reduces memory usage and still runs at a fast matching speed. Above all, BVM costs less than 0.75% of the memory usage of AC, and is capable of matching millions of patterns efficiently.
Keywords: automata theory; security of data; string matching; AC; ClamAV; SBDM; SBOM; Snort; URL blacklist; automata-based multiple string matching algorithms; bit-vector; factor-searching-based algorithms; factor-searching-based multiple string matching algorithm; huge memory usage; matching speed; network intrusion detection systems; space complexity; space-efficient multiple string matching algorithm BVM; succinct hash table; synthetic rules; Arrays; Automata; Intrusion detection; Pattern matching; Time complexity; automata; intrusion detection; multiple string matching; space-efficient (ID#: 15-3610)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883393&isnumber=6883277
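As a taste of the bit-vector techniques this line of work builds on, the sketch below is the classic Shift-And matcher for a single pattern (an illustration only, not the paper's multi-pattern BVM algorithm): the pattern is encoded as per-character bitmasks, and one shift-and-mask per text character advances all partial matches in parallel.

```python
def shift_and_search(text, pattern):
    """Shift-And bit-parallel single-pattern matcher.
    Returns the start positions of all matches."""
    m = len(pattern)
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)  # bit i set if pattern[i] == c
    state, accept, hits = 0, 1 << (m - 1), []
    for pos, c in enumerate(text):
        # Extend every partial match by one character, start a new one at bit 0.
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            hits.append(pos - m + 1)
    return hits
```

One machine word handles patterns up to the word size; scaling this idea to millions of patterns without an automaton is where the paper's succinct hash table comes in.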
Baofeng Wu; Qingfang Jin; Zhuojun Liu; Dongdai Lin, "Constructing Boolean Functions With Potentially Optimal Algebraic Immunity Based On Additive Decompositions Of Finite Fields (Extended Abstract)," Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 1361-1365, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6875055 We propose a general approach to construct cryptographically significant Boolean functions of (r + 1)m variables based on the additive decomposition F_(2^rm) × F_(2^m) of the finite field F_(2^((r+1)m)), where r ≥ 1 is odd and m ≥ 3. A class of unbalanced functions is constructed first via this approach, which coincides with a variant of the unbalanced class of generalized Tu-Deng functions in the case r = 1. Functions belonging to this class have high algebraic degree, but their algebraic immunity does not exceed m, which is impossible to be optimal when r > 1. By modifying these unbalanced functions, we obtain a class of balanced functions which have optimal algebraic degree and high nonlinearity (shown by a lower bound we prove). These functions have optimal algebraic immunity provided a combinatorial conjecture on binary strings which generalizes the Tu-Deng conjecture is true. Computer investigations show that, at least for small values of number of variables, functions from this class also behave well against fast algebraic attacks.
Keywords: Boolean functions; combinatorial mathematics; cryptography; additive decomposition; algebraic immunity; binary strings; combinatorial conjecture; cryptographic significant Boolean functions; fast algebraic attacks; finite field; generalized Tu-Deng functions; optimal algebraic degree; unbalanced functions; Additives; Boolean functions; Cryptography; Electronic mail; FAA; Information theory; Transforms (ID#: 15-3611)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875055&isnumber=6874773
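Nonlinearity, one of the criteria the paper bounds, is computable for small functions via the Walsh-Hadamard transform: NL(f) = 2^(n−1) − max|W_f|/2. A small self-contained sketch:

```python
def walsh_spectrum(truth_table):
    """In-place fast Walsh-Hadamard transform of a Boolean function
    given as a truth table of 2^n values in {0, 1}."""
    f = [1 - 2 * b for b in truth_table]  # map 0/1 -> +1/-1
    n, h = len(f), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = f[j], f[j + h]
                f[j], f[j + h] = x + y, x - y
        h *= 2
    return f

def nonlinearity(truth_table):
    """NL(f) = 2^(n-1) - max|W_f| / 2."""
    return len(truth_table) // 2 - max(abs(w) for w in walsh_spectrum(truth_table)) // 2
```

For example, any affine function has nonlinearity 0, while x0·x1 XOR x2 attains the maximum value 2 for three variables.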
Discrete and Continuous Optimization
Discrete and continuous optimization are two mathematical approaches to problem solving. The research works cited here are primarily focused on continuous optimization. They appeared in 2014 between January and October.
Distributed Denial of Service Attacks (DDoS Attacks)
Distributed Denial of Service attacks continue to be among the most prolific forms of attack against information systems. According to the NSFOCUS DDoS Report for 2014 (ID#:14-1643) (available at: http://en.nsfocus.com/2014/SecurityReport_0320/165.html), DDoS attacks occur at a rate of 28 per hour. Research into methods of prevention, detection, response, and mitigation is also substantial, as the articles presented here show.
Host-based Intrusion Detection
The research presented here on host-based intrusion detection systems addresses semantic approaches, power grid substation protection, an architecture for modular mobile IDS, and a hypervisor-based system. All works cited are from 2014.
Creech, G.; Jiankun Hu, "A Semantic Approach to Host-Based Intrusion Detection Systems Using Contiguous and Discontiguous System Call Patterns," Computers, IEEE Transactions on, vol. 63, no. 4, pp. 807-819, April 2014. doi: 10.1109/TC.2013.13 Host-based anomaly intrusion detection system design is very challenging due to the notoriously high false alarm rate. This paper introduces a new host-based anomaly intrusion detection methodology using discontiguous system call patterns, in an attempt to increase detection rates whilst reducing false alarm rates. The key concept is to apply a semantic structure to kernel level system calls in order to reflect intrinsic activities hidden in high-level programming languages, which can help understand program anomaly behaviour. Excellent results were demonstrated using a variety of decision engines, evaluating the KDD98 and UNM data sets, and a new, modern data set. The ADFA Linux data set was created as part of this research using a modern operating system and contemporary hacking methods, and is now publicly available. Furthermore, the new semantic method possesses an inherent resilience to mimicry attacks, and demonstrated a high level of portability between different operating system versions.
Keywords: high level languages; operating systems (computers); security of data; KDD98 data sets; UNM data sets; contemporary hacking methods; contiguous system call patterns; discontiguous system call patterns; false alarm rates; high-level programming languages; host-based anomaly intrusion detection system design; modern operating system; program anomaly behaviour; semantic structure; Clocks; Complexity theory; Computer architecture; Cryptography; Gaussian processes; Logic gates; Registers; ADFA-LD; Intrusion detection; anomaly detection; computer security; host-based IDS; system calls (ID#: 15-3612)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6419701&isnumber=6774900
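A much simpler ancestor of this line of work is STIDE-style n-gram matching over system-call traces: record every n-gram seen during normal operation, then score a new trace by its fraction of unseen n-grams. The sketch below illustrates that baseline only, not the paper's semantic contiguous/discontiguous patterns.

```python
def ngrams(trace, n=3):
    """All length-n windows of a system-call trace, as a set."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def train(normal_traces, n=3):
    """Collect the n-grams observed across normal-behavior traces."""
    model = set()
    for t in normal_traces:
        model |= ngrams(t, n)
    return model

def anomaly_score(trace, model, n=3):
    """Fraction of the trace's n-grams never seen in training;
    scores near 1.0 suggest anomalous (possibly intrusive) behavior."""
    grams = ngrams(trace, n)
    return len(grams - model) / max(len(grams), 1)
```

The high false-alarm rate the abstract mentions is a known weakness of such purely syntactic models, which is what motivates attaching semantics to the call sequences.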
Al-Jarrah, O.; Arafat, A., "Network Intrusion Detection System Using Attack Behavior Classification," Information and Communication Systems (ICICS), 2014 5th International Conference on, pp. 1-6, 1-3 April 2014. doi: 10.1109/IACS.2014.6841978 Intrusion Detection Systems (IDS) have become a necessity in computer security systems because of the increase in unauthorized accesses and attacks. Intrusion Detection is a major component in computer security systems that can be classified as Host-based Intrusion Detection System (HIDS), which protects a certain host or system and Network-based Intrusion detection system (NIDS), which protects a network of hosts and systems. This paper addresses Probes attacks or reconnaissance attacks, which try to collect any possible relevant information in the network. Network probe attacks have two types: Host Sweep and Port Scan attacks. Host Sweep attacks determine the hosts that exist in the network, while port scan attacks determine the available services that exist in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a TDNN neural network structure. The proposed system consists of five modules: packet capture engine, preprocessor, pattern recognition, classification, and monitoring and alert module. We have tested the system in a real environment where it has shown good capability in detecting attacks. In addition, the system has been tested using DARPA 1998 dataset with 100% recognition rate. In fact, our system can recognize attacks in a constant time.
Keywords: computer network security; neural nets; pattern classification; HIDS; NIDS; TDNN neural network structure; alert module; attack behavior classification; computer security systems; host sweep attacks; host-based intrusion detection system; network intrusion detection system; network probe attacks; packet capture engine; pattern classification; pattern recognition; port scan attacks; preprocessor; reconnaissance attacks; unauthorized accesses; IP networks; Intrusion detection; Neural networks; Pattern recognition; Ports (Computers); Probes; Protocols; Host sweep; Intrusion Detection Systems; Network probe attack; Port scan; TDNN neural network (ID#: 15-3613)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841978&isnumber=6841931
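The port-scan half of the problem admits a very simple baseline: flag any source that probes many distinct destination ports on a single host. The sketch below is that threshold rule (our own illustration, far simpler than the paper's TDNN classifier, which also captures the temporal behavior of the probes).

```python
from collections import defaultdict

def detect_port_scans(packets, port_threshold=10):
    """Given (src, dst, dport) tuples, return the set of source
    addresses that touched at least port_threshold distinct ports
    on some destination host."""
    ports_seen = defaultdict(set)
    for src, dst, dport in packets:
        ports_seen[(src, dst)].add(dport)
    return {src for (src, dst), ports in ports_seen.items()
            if len(ports) >= port_threshold}
```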
Junho Hong; Chen-Ching Liu; Govindarasu, M., "Integrated Anomaly Detection for Cyber Security of the Substations," Smart Grid, IEEE Transactions on, vol. 5, no. 4, pp. 1643-1653, July 2014. doi: 10.1109/TSG.2013.2294473 Cyber intrusions to substations of a power grid are a source of vulnerability since most substations are unmanned and with limited protection of the physical security. In the worst case, simultaneous intrusions into multiple substations can lead to severe cascading events, causing catastrophic power outages. In this paper, an integrated Anomaly Detection System (ADS) is proposed which contains host- and network-based anomaly detection systems for the substations, and simultaneous anomaly detection for multiple substations. Potential scenarios of simultaneous intrusions into the substations have been simulated using a substation automation testbed. The host-based anomaly detection considers temporal anomalies in the substation facilities, e.g., user-interfaces, Intelligent Electronic Devices (IEDs) and circuit breakers. The malicious behaviors of substation automation based on multicast messages, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Measured Value (SMV), are incorporated in the proposed network-based anomaly detection. The proposed simultaneous intrusion detection method is able to identify the same type of attacks at multiple substations and their locations. The result is a new integrated tool for detection and mitigation of cyber intrusions at a single substation or multiple substations of a power grid.
Keywords: computer network security; power engineering computing; power grids; power system reliability; substation automation; ADS; GOOSE; IED; SMV; catastrophic power outages; circuit breakers; cyber intrusions; generic object oriented substation event; host-based anomaly detection systems; integrated anomaly detection system; intelligent electronic devices; malicious behaviors; multicast messages; network-based anomaly detection systems; physical security; power grid; sampled measured value; severe cascading events; simultaneous anomaly detection; simultaneous intrusion detection method; substation automation testbed; substation facilities; substations; temporal anomalies; user-interfaces; Circuit breakers; Computer security; Intrusion detection; Power grids; Substation automation; Anomaly detection; GOOSE anomaly detection; SMV anomaly detection and intrusion detection; cyber security of substations (ID#: 15-3614)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786500&isnumber=6839066
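As a toy illustration of the simultaneous intrusion detection idea above (flagging the same type of attack occurring at several substations at nearly the same time), the following Python sketch correlates switching events across substations. The event records, window length, and substation-count threshold are invented for the example, not taken from the paper.

```python
from collections import defaultdict

# Hypothetical event records: (timestamp_s, substation, device, action)
events = [
    (10.0, "SUB-A", "CB-1", "open"),
    (10.4, "SUB-B", "CB-3", "open"),
    (10.9, "SUB-C", "CB-2", "open"),
    (55.0, "SUB-A", "CB-1", "close"),
]

def simultaneous_anomaly(events, window_s=2.0, min_substations=3):
    """Flag windows in which the same switching action occurs at several
    substations almost at once, a crude temporal rule in the spirit of the
    paper's simultaneous anomaly detection."""
    alerts = []
    by_action = defaultdict(list)
    for t, sub, dev, action in sorted(events):
        by_action[action].append((t, sub))
    for action, hits in by_action.items():
        for t0, _ in hits:
            subs = {s for t, s in hits if t0 <= t <= t0 + window_s}
            if len(subs) >= min_substations:
                alerts.append((action, t0, sorted(subs)))
                break
    return alerts

print(simultaneous_anomaly(events))
```

Here three breakers opening within two seconds at three different substations triggers one alert, while the isolated close event does not.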
Nikolai, J.; Yong Wang, "Hypervisor-Based Cloud Intrusion Detection System," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp. 989, 993, 3-6 Feb 2014. doi: 10.1109/ICCNC.2014.6785472 Shared resources are an essential part of cloud computing. Virtualization and multi-tenancy provide a number of advantages for increasing resource utilization and for providing on-demand elasticity. However, these cloud features also raise many security concerns related to cloud computing resources. In this paper, we propose an architecture and approach for leveraging the virtualization technology at the core of cloud computing to perform intrusion detection security using hypervisor performance metrics. Through the use of virtual machine performance metrics gathered from hypervisors, such as packets transmitted/received, block device read/write requests, and CPU utilization, we demonstrate and verify that suspicious activities can be profiled without detailed knowledge of the operating system running within the virtual machines. The proposed hypervisor-based cloud intrusion detection system does not require additional software installed in the virtual machines, has many advantages compared to host-based and network-based intrusion detection systems, and can complement these traditional approaches to intrusion detection.
Keywords: cloud computing; computer network security; software architecture; software metrics; virtual machines; virtualisation; CPU utilization; block device read requests; block device write requests; cloud computing resources; cloud features; hypervisor performance metrics; hypervisor-based cloud intrusion detection system; intrusion detection security; multitenancy; operating system; packet transmission; received packets; shared resource utilization; virtual machine performance metrics; virtualization; virtualization technology; Cloud computing; Computer crime; Intrusion detection; Measurement; Virtual machine monitors; Virtual machining; Cloud Computing; hypervisor; intrusion detection (ID#: 15-3615)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785472&isnumber=6785290
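The paper profiles suspicious activity from metrics the hypervisor can observe without touching the guest OS. A minimal way to sketch that idea is a per-metric z-score against a per-VM baseline; the metric names, numbers, and threshold below are assumptions for illustration, not the paper's detector.

```python
import statistics

def zscore_alerts(baseline, sample, threshold=3.0):
    """Flag hypervisor-visible metrics whose current value deviates from a
    per-VM baseline by more than `threshold` standard deviations."""
    alerts = {}
    for metric, history in baseline.items():
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        z = (sample[metric] - mu) / sigma
        if abs(z) > threshold:
            alerts[metric] = round(z, 2)
    return alerts

baseline = {
    "packets_tx": [120, 130, 125, 128, 122],
    "block_writes": [40, 42, 41, 39, 43],
    "cpu_pct": [12, 14, 13, 12, 15],
}
# An outbound traffic spike shows up without any guest-OS instrumentation:
print(zscore_alerts(baseline, {"packets_tx": 900, "block_writes": 41, "cpu_pct": 13}))
```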
Salman, A.; Elhajj, I.H.; Chehab, A.; Kayssi, A., "DAIDS: An Architecture for Modular Mobile IDS," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.328, 333, 13-16 May 2014. doi: 10.1109/WAINA.2014.54 The popularity of mobile devices and the enormous number of third-party mobile applications in the market have naturally led to several vulnerabilities being identified and abused. This is coupled with the immaturity of intrusion detection system (IDS) technology targeting mobile devices. In this paper we propose a modular host-based IDS framework for mobile devices that uses behavior analysis to profile applications on the Android platform. Anomaly detection can then be used to categorize malicious behavior and alert users. The proposed system accommodates different detection algorithms, and is being tested at a major telecom operator in North America. This paper highlights the architecture, findings, and lessons learned.
Keywords: Android (operating system); mobile computing; mobile radio; security of data; Android platform; DAIDS; North America; anomaly detection; behavior analysis; detection algorithms; intrusion detection system; malicious behavior; mobile devices; modular mobile IDS; profile applications; telecom operator; third party mobile applications; Androids; Databases; Detectors; Humanoid robots; Intrusion detection; Malware; Monitoring; behavior profiling; dynamic analysis; intrusion detection (ID#: 15-3616)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844659&isnumber=6844560
Can, O., "Mobile Agent Based Intrusion Detection System," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1363, 1366, 23-25 April 2014. doi: 10.1109/SIU.2014.6830491 An intrusion detection system (IDS) inspects all inbound and outbound network activity and identifies suspicious patterns that may indicate a network or system attack from someone attempting to break into or compromise a system. In a network-based system (NIDS), the individual packets flowing through a network are analyzed. In a host-based system, the IDS examines the activity on each individual computer or host. IDS techniques are divided into two categories: misuse detection and anomaly detection. In recent years, mobile agent based technology has been used for distributed systems owing to its characteristics of mobility and autonomy. In this work we aim to combine an IDS with the mobile agent concept to obtain a more scalable, effective, and knowledgeable system.
Keywords: mobile agents; security of data; NIDS; anomaly detection; host-based system; misuse detection; mobile agent based intrusion detection system; network activity; network-based system; suspicious patterns identification; Computers; Conferences; Informatics; Internet; Intrusion detection; Mobile agents; Signal processing; cyber attack; intrusion detection; mobile agent (ID#: 15-3617)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830491&isnumber=6830164
Sridhar, S.; Govindarasu, M., "Model-Based Attack Detection and Mitigation for Automatic Generation Control," Smart Grid, IEEE Transactions on, vol. 5, no. 2, pp. 580, 591, March 2014. doi: 10.1109/TSG.2014.2298195 Cyber systems play a critical role in improving the efficiency and reliability of power system operation and ensuring the system remains within safe operating margins. An adversary can inflict severe damage to the underlying physical system by compromising the control and monitoring applications facilitated by the cyber layer. Protection of critical assets from electronic threats has traditionally been done through conventional cyber security measures that involve host-based and network-based security technologies. However, it has been recognized that highly skilled attacks can bypass these security mechanisms to disrupt the smooth operation of control systems. There is a growing need for cyber-attack-resilient control techniques that look beyond traditional cyber defense mechanisms to detect highly skilled attacks. In this paper, we make the following contributions. We first demonstrate the impact of data integrity attacks on Automatic Generation Control (AGC) on power system frequency and electricity market operation. We propose a general framework to the application of attack resilient control to power systems as a composition of smart attack detection and mitigation. Finally, we develop a model-based anomaly detection and attack mitigation algorithm for AGC. We evaluate the detection capability of the proposed anomaly detection algorithm through simulation studies. Our results show that the algorithm is capable of detecting scaling and ramp attacks with low false positive and negative rates. The proposed model-based mitigation algorithm is also efficient in maintaining system frequency within acceptable limits during the attack period.
Keywords: data integrity; frequency control; power system control; power system reliability; power system stability; security of data; AGC; attack mitigation algorithm; attack resilient control; automatic generation control; critical assets protection; cyber layer; cyber security measures; cyber systems; cyber-attack-resilient control techniques; data integrity attacks; electricity market operation; electronic threats; host-based security technologies; model-based anomaly detection algorithm; model-based mitigation algorithm; network-based security technologies; physical system; power system frequency; power system operation reliability; ramp attacks; scaling attacks; smart attack detection; smart attack mitigation; Automatic generation control; Electricity supply industry; Frequency measurement; Generators; Power measurement; Power system stability; Anomaly detection; automatic generation control; intrusion detection systems; kernel density estimation; supervisory control and data acquisition (ID#: 15-3618)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740883&isnumber=6740878
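The scaling and ramp attacks mentioned above can be illustrated with a minimal residual check: compare telemetered frequency against a model prediction and flag samples whose residual exceeds a threshold. The signals, attack magnitudes, and threshold below are invented; the paper's detector (built on kernel density estimation over AGC dynamics) is considerably more sophisticated.

```python
def detect_attack(predicted, measured, threshold=0.5):
    """Return indices where the measurement deviates from the model
    prediction by more than `threshold` (a crude residual detector)."""
    residuals = [abs(m - p) for p, m in zip(predicted, measured)]
    return [i for i, r in enumerate(residuals) if r > threshold]

clean = [60.0, 60.0, 59.9, 60.1, 60.0, 60.0]        # model-predicted frequency (Hz)
scaled = [v * 1.02 for v in clean]                   # scaling attack: x1.02
ramped = [v + 0.3 * i for i, v in enumerate(clean)]  # ramp attack: +0.3 Hz/step

print(detect_attack(clean, scaled))  # every sample drifts by roughly 1.2 Hz
print(detect_attack(clean, ramped))  # the ramp only exceeds the threshold later
```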
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
![]() |
Human Trust |
Human behavior is complex and that complexity creates a tremendous problem for cybersecurity. The works cited here address a range of human trust issues related to behaviors, deception, enticement, sentiment and other factors difficult to isolate and quantify. All appeared in 2014.
Sousa, S.; Dias, P.; Lamas, D., "A Model for Human-Computer Trust: A Key Contribution For Leveraging Trustful Interactions," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp.1, 6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876935 This article addresses trust in computer systems as a social phenomenon, which depends on the type of relationship that is established through the computer, or with other individuals. It starts by theoretically contextualizing trust, and then situates trust in the field of computer science. It then describes the proposed model, which builds on what one perceives to be trustworthy and is influenced by a number of factors such as the history of participation and users' perceptions. It ends by situating the proposed model as a key contribution for leveraging trustful interactions and proposes its use as a complement to foster users' trust needs in what concerns Human-Computer Interaction or Computer-mediated Interactions.
Keywords: computer mediated communication; human computer interaction; computer science; computer systems; computer-mediated interactions; human-computer interaction; human-computer trust model; participation history; social phenomenon; trustful interaction leveraging; user perceptions; user trust needs; Collaboration; Computational modeling; Computers; Context; Correlation; Educational institutions; Psychology; Collaboration; Engagement; Human-computer trust; Interaction design; Participation (ID#: 15-3619)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876935&isnumber=6876860
Kounelis, I.; Baldini, G.; Neisse, R.; Steri, G.; Tallacchini, M.; Guimaraes Pereira, A., "Building Trust in the Human-Internet of Things Relationship," Technology and Society Magazine, IEEE, vol. 33, no. 4, pp.73, 80, winter 2014. doi: 10.1109/MTS.2014.2364020 The concept of the Internet of Things (IoT) was initially proposed by Kevin Ashton in 1998 [1], where it was linked to RFID technology. More recently, the initial idea has been extended to support pervasive connectivity and the integration of the digital and physical worlds [2], encompassing virtual and physical objects, including people and places.
Keywords: Internet of things; Privacy; Security; Senior citizens; Smart homes; Trust management (ID#: 15-3620)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6969184&isnumber=6969174
Fei Hao; Geyong Min; Man Lin; Changqing Luo; Yang, L.T., "MobiFuzzyTrust: An Efficient Fuzzy Trust Inference Mechanism in Mobile Social Networks," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.11, pp.2944, 2955, Nov. 2014. doi: 10.1109/TPDS.2013.309 Mobile social networks (MSNs) facilitate connections between mobile users and allow them to find other potential users who have similar interests through mobile devices, communicate with them, and benefit from their information. As MSNs are distributed public virtual social spaces, the available information may not be trustworthy to all. Therefore, mobile users are often at risk since they may not have any prior knowledge about others who are socially connected. To address this problem, trust inference plays a critical role for establishing social links between mobile users in MSNs. Taking into account the nonsemantical representation of trust between users of the existing trust models in social networks, this paper proposes a new fuzzy inference mechanism, namely MobiFuzzyTrust, for inferring trust semantically from one mobile user to another that may not be directly connected in the trust graph of MSNs. First, a mobile context including an intersection of prestige of users, location, time, and social context is constructed. Second, a mobile context aware trust model is devised to evaluate the trust value between two mobile users efficiently. Finally, the fuzzy linguistic technique is used to express the trust between two mobile users and enhance the human's understanding of trust. Real-world mobile dataset is adopted to evaluate the performance of the MobiFuzzyTrust inference mechanism. The experimental results demonstrate that MobiFuzzyTrust can efficiently infer trust with a high precision.
Keywords: fuzzy reasoning; fuzzy set theory; graph theory; mobile computing; security of data; social networking (online); trusted computing; MSN; MobiFuzzyTrust inference mechanism; distributed public virtual social spaces; fuzzy linguistic technique; fuzzy trust inference mechanism; mobile context aware trust model; mobile devices; mobile social networks; mobile users; nonsemantical trust representation; real-world mobile dataset; social links; trust graph; trust models; trust value evaluation; Computational modeling; Context; Context modeling; Mobile communication; Mobile handsets; Pragmatics; Social network services; Mobile social networks; fuzzy inference; linguistic terms; mobile context; trust (ID#: 15-3621)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6684155&isnumber=6919360
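The fuzzy linguistic step above, expressing a numeric trust value as a human-readable term, can be sketched with triangular membership functions. The term set and its boundaries below are assumptions chosen for the example; the paper derives its linguistic terms from a mobile context model.

```python
def membership(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative linguistic terms over a trust value in [0, 1]
TERMS = {
    "distrust":     (-0.01, 0.0, 0.4),
    "low trust":    (0.1, 0.35, 0.6),
    "medium trust": (0.4, 0.6, 0.8),
    "high trust":   (0.6, 1.0, 1.01),
}

def linguistic_trust(x):
    """Map a numeric trust value to the linguistic term it fits best."""
    degrees = {term: membership(x, *abc) for term, abc in TERMS.items()}
    return max(degrees, key=degrees.get)

print(linguistic_trust(0.72))
```

A value such as 0.72 sits between terms; the maximum membership decides which label the human sees.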
Dondio, P.; Longo, L., "Computing Trust as a Form of Presumptive Reasoning," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on , vol.2, no., pp.274,281, 11-14 Aug. 2014. doi: 10.1109/WI-IAT.2014.108 This study describes and evaluates a novel trust model for a range of collaborative applications. The model assumes that humans routinely choose to trust their peers by relying on few recurrent presumptions, which are domain independent and which form a recognisable trust expertise. We refer to these presumptions as trust schemes, a specialised version of Walton's argumentation schemes. Evidence is provided about the efficacy of trust schemes using a detailed experiment on an online community of 80,000 members. Results show how proposed trust schemes are more effective in trust computation when they are combined together and when their plausibility in the selected context is considered.
Keywords: trusted computing; Walton argumentation schemes; presumptive reasoning; trust computing; trust expertise; trust model; trust schemes; Cognition; Communities; Computational modeling; Context; Fuzzy logic; Measurement; Standards; fuzzy logics; online communities; trust (ID#: 15-3622)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927635&isnumber=6927590
Frauenstein, E.D.; von Solms, R., "Combatting Phishing: A Holistic Human Approach," Information Security for South Africa (ISSA), 2014, pp.1,10, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950508 Phishing continues to remain a lucrative market for cyber criminals, mostly because of the vulnerable human element. Through emails and spoofed websites, phishers exploit almost any opportunity, using major events, considerable financial awards, fake warnings and the trusted reputation of established organizations as a basis to gain their victims' trust. For many years, humans have often been referred to as the `weakest link' towards protecting information. To gain their victims' trust, phishers continue to use sophisticated-looking emails and spoofed websites to trick them, and rely on their victims' lack of knowledge, lax security behavior and organizations' inadequate security measures towards protecting themselves and their clients. As such, phishing security controls and vulnerabilities can arguably be classified into three main elements, namely human factors (H), organizational aspects (O) and technological controls (T). All three of these elements have the common feature of human involvement and as such, security gaps are inevitable. Each element also functions as both security control and security vulnerability. A holistic framework towards combatting phishing is required whereby the human feature in all three of these elements is enhanced by means of a security education, training and awareness programme. This paper discusses the educational factors required to form part of a holistic framework, addressing the HOT elements as well as the relationships between these elements towards combatting phishing. The development of this framework uses the principles of design science to ensure that it is developed with rigor. Furthermore, this paper reports on the verification of the framework.
Keywords: computer crime; computer science education; human factors; organisational aspects; unsolicited e-mail; HOT elements; e-mails; awareness programme; cyber criminals; design science principles; educational factors; fake warnings; financial awards; holistic human approach; human factors; lax security behavior; organizational aspects; phishing security controls; security education; security gaps; security training; security vulnerability; spoofed-Web sites; technological controls; trusted reputation; ISO; Lead; Security; Training; COBIT; agency theory; human factors; organizational aspects; phishing; security education training and awareness; social engineering; technological controls; technology acceptance model (ID#: 15-3623)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950508&isnumber=6950479
Ing-Ray Chen; Jia Guo, "Dynamic Hierarchical Trust Management of Mobile Groups and Its Application to Misbehaving Node Detection," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp.49,56, 13-16 May 2014. doi: 10.1109/AINA.2014.13 In military operation or emergency response situations, very frequently a commander will need to assemble and dynamically manage Community of Interest (COI) mobile groups to achieve a critical mission assigned despite failure, disconnection or compromise of COI members. We combine the designs of COI hierarchical management for scalability and reconfigurability with COI dynamic trust management for survivability and intrusion tolerance to compose a scalable, reconfigurable, and survivable COI management protocol for managing COI mission-oriented mobile groups in heterogeneous mobile environments. A COI mobile group in this environment would consist of heterogeneous mobile entities such as communication-device-carried personnel/robots and aerial or ground vehicles operated by humans exhibiting not only quality of service (QoS) characters, e.g., competence and cooperativeness, but also social behaviors, e.g., connectivity, intimacy and honesty. A COI commander or a subtask leader must measure trust with both social and QoS cognition depending on mission task characteristics and/or trustee properties to ensure successful mission execution. In this paper, we present a dynamic hierarchical trust management protocol that can learn from past experiences and adapt to changing environment conditions, e.g., increasing misbehaving node population, evolving hostility and node density, etc. to enhance agility and maximize application performance. 
With trust-based misbehaving node detection as an application, we demonstrate how our proposed COI trust management protocol is resilient to node failure, disconnection and capture events, and can help maximize application performance in terms of minimizing false negatives and positives in the presence of mobile nodes exhibiting vastly distinct QoS and social behaviors.
Keywords: emergency services; military communication; mobile computing; protocols; quality of service; telecommunication security; trusted computing; COI dynamic hierarchical trust management protocol; COI mission-oriented mobile group management; aerial vehicles; agility enhancement; application performance maximization; communication-device-carried personnel; community-of-interest mobile groups; competence; connectivity; cooperativeness; emergency response situations; ground vehicles; heterogeneous mobile entities; heterogeneous mobile environments; honesty; intimacy; intrusion tolerance; military operation; misbehaving node population; node density; quality-of-service characters; robots; social behaviors; survivable COI management protocol; trust measurement; trust-based misbehaving node detection; Equations; Mathematical model; Mobile communication; Mobile computing; Peer-to-peer computing; Protocols; Quality of service; Trust management; adaptability; community of interest; intrusion detection; performance analysis; scalability (ID#: 15-3624)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838647&isnumber=6838626
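The paper's central idea of measuring trust with both QoS cognition (competence, cooperativeness) and social cognition (connectivity, intimacy, honesty) can be sketched as a weighted composition. The component names come from the abstract, but the averaging scheme and weight below are assumptions for illustration, not the paper's calibrated protocol.

```python
def peer_trust(qos, social, w=0.6):
    """Combine QoS trust and social trust into one score in [0, 1].
    `w` weights QoS versus social evidence; a commander could shift it
    toward QoS for mission-critical tasks."""
    q = sum(qos.values()) / len(qos)
    s = sum(social.values()) / len(social)
    return w * q + (1 - w) * s

score = peer_trust(
    qos={"competence": 0.9, "cooperativeness": 0.8},
    social={"connectivity": 0.7, "intimacy": 0.6, "honesty": 0.9},
)
print(round(score, 3))
```

A node whose composite score drops below a mission-dependent threshold would then be flagged as potentially misbehaving.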
Athanasiou, G.; Fengou, M.-A.; Beis, A.; Lymberopoulos, D., "A Novel Trust Evaluation Method For Ubiquitous Healthcare Based On Cloud Computational Theory," Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE, pp.4503, 4506, 26-30 Aug. 2014. doi: 10.1109/EMBC.2014.6944624 The notion of trust is considered to be the cornerstone of the patient-psychiatrist relationship. Thus, a trustful background is a fundamental requirement for the provision of effective Ubiquitous Healthcare (UH) services. In this paper, the issue of trust evaluation of UH providers when they register in the UH environment is addressed. For that purpose a novel trust evaluation method is proposed, based on cloud theory, exploiting User Profile attributes. This theory mimics human thinking regarding trust evaluation and captures the fuzziness and randomness of this uncertain reasoning. Two case studies are investigated through simulation in MATLAB software, in order to verify the effectiveness of this novel method.
Keywords: cloud computing; health care; trusted computing; ubiquitous computing; uncertainty handling; MATLAB software; UH environment; cloud computational theory; cloud theory; trust evaluation method; ubiquitous healthcare; uncertain reasoning; user profile attributes; Conferences; Generators; MATLAB; MIMICs; Medical services; Pragmatics; TV (ID#: 15-3625)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6944624&isnumber=6943513
Howser, G.; McMillin, B., "A Modal Model of Stuxnet Attacks on Cyber-physical Systems: A Matter of Trust," Software Security and Reliability, 2014 Eighth International Conference on, pp.225,234, June 30 2014-July 2 2014. doi: 10.1109/SERE.2014.36 Multiple Security Domains Nondeducibility, MSDND, yields results even when the attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it is able to analyze the event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, MSDND can point out attacks designed to be missed in other security models. This work examines information flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can be used to help secure the system by modifications to break MSDND and leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and uses the operator's trust to remain undetected. In fact, trust in the CPS is key to the success of the attack.
Keywords: security of data; trusted computing; CPS; MSDND; Stuxnet attacks; belief manipulation; cyber physical system; cyber security; cyber-physical systems; electronic monitors; event system analysis; human operators; implicit trust; information flow disruption attacks; modal frames; modal model; multiple security domains nondeducibility; security models; trust state manipulation; Analytical models; Bismuth; Cognition; Cost accounting; Monitoring; Security; Software; Stuxnet; cyber-physical systems; doxastic logic; information flow security; nondeducibility; security models (ID#: 15-3626)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895433&isnumber=6895396
Godwin, J.L.; Matthews, P., "Rapid Labelling Of SCADA Data To Extract Transparent Rules Using RIPPER," Reliability and Maintainability Symposium (RAMS), 2014 Annual, pp.1,7, 27-30 Jan. 2014. doi: 10.1109/RAMS.2014.6798456 This paper addresses a robust methodology for developing a statistically sound, robust prognostic condition index and encapsulating this index as a series of highly accurate, transparent, human-readable rules. These rules can be used to further understand degradation phenomena and also provide transparency and trust for any underlying prognostic technique employed. A case study is presented on a wind turbine gearbox, utilising historical supervisory control and data acquisition (SCADA) data in conjunction with a physics of failure model. Training is performed without failure data, with the technique accurately identifying gearbox degradation and providing prognostic signatures up to 5 months before catastrophic failure occurred. A robust derivation of the Mahalanobis distance is employed to perform outlier analysis in the bivariate domain, enabling the rapid labelling of historical SCADA data on independent wind turbines. Following this, the RIPPER rule learner was utilised to extract transparent, human-readable rules from the labelled data. A mean classification accuracy of 95.98% of the autonomously derived condition was achieved on three independent test sets, with a mean kappa statistic of 93.96% reported. In total, 12 rules were extracted, with an independent domain expert providing critical analysis, two thirds of the rules were deemed to be intuitive in modelling fundamental degradation behaviour of the wind turbine gearbox.
Keywords: SCADA systems; condition monitoring; failure analysis; gears; knowledge based systems; maintenance engineering; mechanical engineering computing; wind turbines; Mahalanobis distance; RIPPER rule learner; SCADA data rapid labelling; catastrophic failure; failure model; mean kappa statistic; robust prognostic condition index; supervisory control and data acquisition; wind turbine gearbox degradation; Accuracy; Gears; Indexes; Inspection; Maintenance engineering; Robustness; Wind turbines; Condition index; Data mining; prognosis; rule extraction; wind turbine SCADA data (ID#: 15-3627)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798456&isnumber=6798433
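The bivariate outlier step above rests on the Mahalanobis distance, which accounts for correlation between the two SCADA channels. The sketch below computes it with an explicit 2x2 covariance inverse; the sample data are invented, and the paper actually uses a robust derivation of the distance rather than this plain sample estimate.

```python
import math

def mahalanobis_2d(point, data):
    """Mahalanobis distance of a bivariate point from a reference sample."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / n
    syy = sum((y - my) ** 2 for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    det = sxx * syy - sxy ** 2  # determinant of the 2x2 covariance matrix
    dx, dy = point[0] - mx, point[1] - my
    # d^2 = [dx dy] * inv(S) * [dx dy]^T, expanded for a 2x2 covariance S
    d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return math.sqrt(d2)

# e.g. hypothetical (rotor speed, gearbox oil temperature) pairs, healthy operation:
healthy = [(10, 50), (11, 52), (12, 53), (11, 51), (10, 49), (12, 52)]
print(mahalanobis_2d((11, 51.5), healthy))  # near the healthy cloud
print(mahalanobis_2d((11, 70), healthy))    # far outside it
```

Points whose distance exceeds a chosen cut-off would be labelled degraded, producing the labelled dataset a rule learner such as RIPPER can then consume.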
Yinping Yang; Falcao, H.; Delicado, N.; Ortony, A., "Reducing Mistrust in Agent-Human Negotiations," Intelligent Systems, IEEE, vol. 29, no.2, pp.36,43, Mar.-Apr. 2014. doi: 10.1109/MIS.2013.106 Face-to-face negotiations always benefit if the interacting individuals trust each other. But trust is also important in online interactions, even for humans interacting with a computational agent. In this article, the authors describe a behavioral experiment to determine whether, by volunteering information that it need not disclose, a software agent in a multi-issue negotiation can alleviate mistrust in human counterparts who differ in their propensities to mistrust others. Results indicated that when cynical, mistrusting humans negotiated with an agent that proactively communicated its issue priority and invited reciprocation, there were significantly more agreements and better utilities than when the agent didn't volunteer such information. Furthermore, when the agent volunteered its issue priority, the outcomes for mistrusting individuals were as good as those for trusting individuals, for whom the volunteering of issue priority conferred no advantage. These findings provide insights for designing more effective, socially intelligent agents in online negotiation settings.
Keywords: multi-agent systems; software agents; trusted computing; agent-human negotiations; computational agent; face-to-face negotiation; multiissue negotiation; online interaction; online negotiation setting; socially intelligent agent; software agent; trusting individual; Context; Economics; Educational institutions; Instruments; Intelligent systems; Joints; Software agents; agent-human negotiation; intelligent systems; online negotiation; socially intelligent agents (ID#: 15-3628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6636309&isnumber=6832878
Goldman, A.D.; Uluagac, A.S.; Copeland, J.A., "Cryptographically-Curated File System (CCFS): Secure, Inter-Operable, And Easily Implementable Information-Centric Networking," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, pp.142, 149, 8-11 Sept. 2014. doi: 10.1109/LCN.2014.6925766 Cryptographically-Curated File System (CCFS) proposed in this work supports the adoption of Information-Centric Networking. CCFS utilizes content names that span trust boundaries, verify integrity, tolerate disruption, authenticate content, and provide non-repudiation. Irrespective of the ability to reach an authoritative host, CCFS provides secure access by binding a chain of trust into the content name itself. Curators cryptographically bind content to a name, which is a path through a series of objects that map human meaningful names to cryptographically strong content identifiers. CCFS serves as a network layer for storage systems unifying currently disparate storage technologies. The power of CCFS derives from file hashes and public keys used as a name with which to retrieve content and as a method of verifying that content. We present results from our prototype implementation. Our results show that the overhead associated with CCFS is not negligible, but also is not prohibitive.
Keywords: information networks; public key cryptography; storage management; CCFS; content authentication; cryptographically strong content identifiers; cryptographically-curated file system; file hashes; information-centric networking; integrity verification; network layer; public keys; storage systems; storage technologies; trust boundaries; File systems; Google; IP networks; Prototypes; Public key; Servers; Content Centric Networking (CCN); Cryptographically Curated File System (CCFS); Delay Tolerant Networking (DTN); Information Centric Networks (ICN); Inter-operable Heterogeneous Storage; Name Orientated Networking (NON); Self Certifying File Systems (ID#: 15-3629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925766&isnumber=6925725
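The core CCFS idea of using a file hash as the content's name, so integrity can be verified regardless of which host served the bytes, can be illustrated in a few lines. The in-memory store and function names below are invented for the example; CCFS itself also layers curated, human-meaningful name paths and public-key signing on top.

```python
import hashlib

store = {}  # stand-in for any untrusted storage or network cache

def publish(content: bytes) -> str:
    """Store content under its SHA-256 digest; the hash *is* the name."""
    name = hashlib.sha256(content).hexdigest()
    store[name] = content
    return name

def retrieve(name: str) -> bytes:
    """Fetch by name and verify the bytes actually match that name."""
    content = store[name]
    if hashlib.sha256(content).hexdigest() != name:
        raise ValueError("content does not match its cryptographic name")
    return content

name = publish(b"hello, information-centric world")
assert retrieve(name) == b"hello, information-centric world"

# Tampering with the stored object is detected at retrieval time:
store[name] = b"tampered"
try:
    retrieve(name)
except ValueError as e:
    print("rejected:", e)
```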
Ormrod, David, "The Coordination of Cyber and Kinetic Deception for Operational Effect: Attacking the C4ISR Interface," Military Communications Conference (MILCOM), 2014 IEEE, pp.117, 122, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.26 Modern military forces are enabled by networked command and control systems, which provide an important interface between the cyber environment, electronic sensors and decision makers. However these systems are vulnerable to cyber attack. A successful cyber attack could compromise data within the system, leading to incorrect information being utilized for decisions with potentially catastrophic results on the battlefield. Degrading the utility of a system or the trust a decision maker has in their virtual display may not be the most effective means of employing offensive cyber effects. The coordination of cyber and kinetic effects is proposed as the optimal strategy for neutralizing an adversary's C4ISR advantage. However, such an approach is an opportunity cost and resource intensive. The adversary's cyber dependence can be leveraged as a means of gaining tactical and operational advantage in combat, if a military force is sufficiently trained and prepared to attack the entire information network. This paper proposes a research approach intended to broaden the understanding of the relationship between command and control systems and the human decision maker, as an interface for both cyber and kinetic deception activity.
Keywords: Aircraft; Command and control systems; Decision making; Force; Kinetic theory; Sensors; Synchronization; Command and control; combat; cyber attack; deception; risk management; trust (ID#: 15-3630)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956747&isnumber=6956719
Srivastava, M., "In Sensors We Trust -- A Realistic Possibility?," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp.1,1, 26-28 May 2014. doi: 10.1109/DCOSS.2014.65 Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we, and the services we use, increasingly depend directly and indirectly on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society builds increasingly complex human-cyber-physical systems with increased reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers.
For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who are deceptively extracting private information, manipulating beliefs and subverting decisions. While completely solving these challenges would require a new science of resilient, secure and trustworthy networked sensing and decision systems that would combine hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.
Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#: 15-3631)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129
Sen, Shayak; Guha, Saikat; Datta, Anupam; Rajamani, Sriram K.; Tsai, Janice; Wing, Jeannette M., "Bootstrapping Privacy Compliance in Big Data Systems," Security and Privacy (SP), 2014 IEEE Symposium on, pp.327, 342, 18-21 May 2014. doi: 10.1109/SP.2014.28 With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and therefore are resource intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for Map-Reduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to data types in Legalease, in essence, annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, checks compliance daily of millions of lines of ever-changing source code written by several thousand developers.
Keywords: Advertising; Big data; Data privacy; IP networks; Lattices; Privacy; Semantics; big data; bing; compliance; information flow; policy; privacy; program analysis (ID#: 15-3632)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956573&isnumber=6956545
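The Legalease/Grok split described in the abstract above — a policy language restricting how data types may be used, plus a data inventory that annotates program dataflows with those types — can be caricatured in a few lines. Everything below (the policy entries, column names, and purposes) is invented for illustration; the real system infers annotations for millions of lines of code rather than taking them as literals.

```python
# Hypothetical sketch: privacy-policy compliance reduced to information-flow
# checking, in the spirit of the Legalease/Grok design. All names are made up.

# "Legalease"-style policy: which data types may not flow to which purposes.
POLICY = {
    "IPAddress": {"deny": {"Advertising"}},   # IP addresses must not feed ads
    "SearchQuery": {"deny": set()},           # no restriction
}

# "Grok"-style inventory: each dataflow edge maps a schema element
# (annotated with a data type) to the purpose of the consuming job.
DATAFLOWS = [
    {"column": "client_ip", "datatype": "IPAddress", "purpose": "AbuseDetection"},
    {"column": "client_ip", "datatype": "IPAddress", "purpose": "Advertising"},
    {"column": "query_text", "datatype": "SearchQuery", "purpose": "Advertising"},
]

def check_compliance(policy, dataflows):
    """Return the dataflow edges that violate the policy."""
    violations = []
    for flow in dataflows:
        denied = policy.get(flow["datatype"], {}).get("deny", set())
        if flow["purpose"] in denied:
            violations.append(flow)
    return violations

violations = check_compliance(POLICY, DATAFLOWS)
```

Run over this toy inventory, only the `client_ip -> Advertising` edge is flagged, which is the sense in which "compliance checking is reduced to information flow analysis."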
Shila, D.M.; Venugopal, V., "Design, implementation and security analysis of Hardware Trojan Threats in FPGA," Communications (ICC), 2014 IEEE International Conference on, pp.719, 724, 10-14 June 2014. doi: 10.1109/ICC.2014.6883404 Hardware Trojan Threats (HTTs) are stealthy components embedded inside integrated circuits (ICs) with an intention to attack and cripple the IC similar to viruses infecting the human body. Previous efforts have focused essentially on systems being compromised using HTTs and the effectiveness of physical parameters including power consumption, timing variation and utilization for detecting HTTs. We propose a novel metric for hardware Trojan detection coined as HTT detectability metric (HDM) that uses a weighted combination of normalized physical parameters. HTTs are identified by comparing the HDM with an optimal detection threshold; if the monitored HDM exceeds the estimated optimal detection threshold, the IC will be tagged as malicious. As opposed to existing efforts, this work investigates a system model from a designer perspective in increasing the security of the device and an adversary model from an attacker perspective exposing and exploiting the vulnerabilities in the device. Using existing Trojan implementations and Trojan taxonomy as a baseline, seven HTTs were designed and implemented on an FPGA testbed; these Trojans pose a variety of threats, ranging from sensitive information leakage and denial of service to defeating the Root of Trust (RoT). Security analysis on the implemented Trojans showed that existing detection techniques based on physical characteristics such as power consumption, timing variation or utilization alone do not necessarily capture the existence of HTTs, and only a maximum of 57% of the designed HTTs were detected. On the other hand, 86% of the implemented Trojans were detected with HDM.
We further carry out analytical studies to determine the optimal detection threshold that minimizes the summation of false alarm and missed detection probabilities.
Keywords: field programmable gate arrays; integrated logic circuits; invasive software; FPGA testbed; HDM; HTT detectability metric; HTT detection; ICs; RoT; Trojan taxonomy; denial of service; hardware Trojan detection technique; hardware Trojan threats; integrated circuits; missed detection probability; normalized physical parameters; optimal detection threshold; power consumption; root of trust; security analysis; sensitive information leak; summation of false alarm; timing variation; Encryption; Field programmable gate arrays; Hardware; Power demand; Timing; Trojan horses; Design; Hardware Trojans; Resiliency; Security (ID#: 15-3633)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883404&isnumber=6883277
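As a rough sketch of the detectability-metric idea in the abstract above: combine the normalized deviations of several measured physical parameters into one weighted score and compare it to a threshold. The parameter names, weights, and the particular normalization below are assumptions for illustration, not the authors' exact HDM formulation.

```python
# Illustrative weighted detectability metric in the spirit of the paper's HDM.
# Weights, parameters, and normalization are invented for this sketch.

def hdm(measured, baseline, weights):
    """Weighted sum of normalized deviations of physical parameters.

    measured/baseline: dicts of raw readings (e.g. power in mW, path delay
    in ns, LUT utilization); weights: relative importance of each parameter.
    """
    score = 0.0
    for name, w in weights.items():
        # Normalize each parameter as relative deviation from the golden IC.
        deviation = abs(measured[name] - baseline[name]) / baseline[name]
        score += w * deviation
    return score

baseline = {"power_mw": 120.0, "delay_ns": 4.0, "luts": 2000}
weights  = {"power_mw": 0.5, "delay_ns": 0.3, "luts": 0.2}

suspect = {"power_mw": 132.0, "delay_ns": 4.4, "luts": 2100}  # possibly infected
threshold = 0.05  # in the paper, estimated to balance false alarms vs. misses

flagged = hdm(suspect, baseline, weights) > threshold
```

The point of the weighted combination is visible even in this toy: a Trojan that perturbs each single parameter only modestly can still push the combined score past the threshold.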
Dickerson, J.P.; Kagan, V.; Subrahmanian, V.S., "Using Sentiment To Detect Bots On Twitter: Are Humans More Opinionated Than Bots?," Advances in Social Networks Analysis and Mining (ASONAM), 2014 IEEE/ACM International Conference on, pp.620, 627, 17-20 Aug. 2014. doi: 10.1109/ASONAM.2014.6921650 In many Twitter applications, developers collect only a limited sample of tweets and a local portion of the Twitter network. Given such Twitter applications with limited data, how can we classify Twitter users as either bots or humans? We develop a collection of network-, linguistic-, and application-oriented variables that could be used as possible features, and identify specific features that distinguish well between humans and bots. In particular, by analyzing a large dataset relating to the 2014 Indian election, we show that a number of sentiment-related factors are key to the identification of bots, significantly increasing the Area under the ROC Curve (AUROC). The same method may be used for other applications as well.
Keywords: social networking (online); trusted computing; AUROC; Indian election; Twitter applications; Twitter network; area under the ROC curve; bot detection; sentiment-related factors; Conferences; Nominations and elections; Principal component analysis; Semantics; Syntactics; Twitter (ID#: 15-3635)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921650&isnumber=6921526
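The evaluation metric reported above, AUROC, has a simple rank interpretation worth keeping in mind when reading such results: it is the probability that a randomly chosen bot receives a higher classifier score than a randomly chosen human (ties counting half). The scores and labels below are made up for illustration; the computation itself is standard.

```python
# AUROC computed directly from its pairwise-ranking definition.

def auroc(scores, labels):
    """labels: 1 = bot, 0 = human; scores: classifier's bot score per user."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of (bot, human) pairs where the bot outranks the human.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]   # hypothetical bot scores
labels = [1,   1,   0,   1,   0,   0]     # ground truth
score = auroc(scores, labels)
```

An AUROC of 0.5 is chance; the paper's claim is that adding sentiment features moves this number significantly upward for their election dataset.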
Cailleux, L.; Bouabdallah, A.; Bonnin, J.-M., "A Confident Email System Based On A New Correspondence Model," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.489, 492, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779010 Despite all the current controversies, the email service remains a success. The ease of use of its various features contributed to its widespread adoption. In general, the email system provides for all its users the same set of features controlled by a single monolithic policy. Such solutions are efficient but limited because they grant no place for the concept of usage, which denotes a user's intention of communication: private, professional, administrative, official, military. The ability to efficiently send emails from mobile devices creates new interesting opportunities. We argue that the context (location, time, device, operating system, access network...) of the email sender appears as a new dimension we have to take into account to complete the picture. Context is clearly orthogonal to usage because a same usage may require different features depending on the context. It is clear that there is no global policy meeting requirements of all possible usages and contexts. To address this problem, we propose to define a correspondence model which, for a given usage and context, allows a correspondence type encapsulating the exact set of required features to be derived. With this model, it becomes possible to define an advanced email system which may cope with multiple policies instead of a single monolithic one. By allowing a user to select the exact policy coping with her needs, we argue that our approach reduces risk-taking, allowing the email system to slide from a trusted one to a confident one.
Keywords: electronic mail; human factors; security of data; trusted computing; confident email system; correspondence model; email sender context; email service; email system policy; mobile devices; trusted email system; Context; Context modeling; Electronic mail; Internet; Postal services; Protocols; Security; Email; confidence; correspondence; email security; policy; security; trust (ID#: 15-3636)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779010&isnumber=6778899
Biedermann, S.; Ruppenthal, T.; Katzenbeisser, S., "Data-Centric Phishing Detection Based On Transparent Virtualization Technologies," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.215,223, 23-24 July 2014. doi: 10.1109/PST.2014.6890942 We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of its own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter and scale a color-based fingerprint of web pages which are processed by a browser from the VM's memory. By analyzing the human perceptual similarity between the fingerprints, the architecture can reveal and mitigate phishing attacks which are based on redirection to spoofed web pages and it can also detect “Man-in-the-Browser” (MitB) attacks. To the best of our knowledge, the architecture is the first anti-phishing solution leveraging virtualization technologies. We explain details about the design and the implementation and we show results of an evaluation with real-world data.
Keywords: Web sites; cloud computing; computer crime; online front-ends; virtual machines; virtualisation; MitB attack; VM introspection; VMI; antiphishing solution; cloud; color-based fingerprint extraction; color-based fingerprint filtering; color-based fingerprint scaling; component isolation; data-centric phishing detection; human perceptual similarity; man-in-the-browser attack; phishing attacks; spoofed Web pages; transparent virtualization technologies; virtual machines; Browsers; Computer architecture; Data mining; Detectors; Image color analysis; Malware; Web pages (ID#: 15-3637)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890942&isnumber=6890911
Bian Yang; Huiguang Chu; Guoqiang Li; Petrovic, S.; Busch, C., "Cloud Password Manager Using Privacy-Preserved Biometrics," Cloud Engineering (IC2E), 2014 IEEE International Conference on, pp.505, 509, 11-14 March 2014. doi: 10.1109/IC2E.2014.91 Using one password for all web services is not secure because the leakage of the password compromises all the web service accounts, while using independent passwords for different web services is inconvenient for the identity claimant to memorize. A password manager is used to address this security-convenience dilemma by storing and retrieving multiple existing passwords using one master password. On the other hand, a password manager liberates the human brain by enabling people to generate strong passwords without worrying about memorizing them. While a password manager provides a convenient and secure way to manage multiple passwords, it centralizes the password storage and shifts the risk of password leakage from distributed service providers to a software or token authenticated by a single master password. To address concerns about this single-master-password security, biometrics could be used as a second factor for authentication by verifying the ownership of the master password. However, biometrics-based authentication raises greater privacy concerns than a non-biometric password manager. In this paper we propose a cloud password manager scheme exploiting privacy enhanced biometrics, which achieves both security and convenience in a privacy-enhanced way. The proposed password manager scheme relies on a cloud service to synchronize all local password manager clients in an encrypted form, which is efficient to deploy the updates and secure against untrusted cloud service providers.
Keywords: Web services; authorisation; biometrics (access control); cloud computing; data privacy; trusted computing; Web service account; biometrics based authentication; cloud password manager; distributed service providers; local password manager client synchronization; master password based security; nonbiometric password manager; password leakage risk; password storage; privacy enhanced biometrics; privacy-preserved biometrics; token authentication; untrusted cloud service providers; Authentication; Biometrics (access control); Cryptography; Privacy; Synchronization; Web services; biometrics; cloud; password manager; privacy preservation; security (ID#: 15-3638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903519&isnumber=6903436
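The single-master-password idea above — one secret unlocks a synchronized, encrypted vault, so the vault must resist offline guessing and tampering — can be sketched with standard primitives. Real password managers use an authenticated cipher (e.g. AES-GCM); to stay standard-library-only, this hedged sketch shows only the key derivation (PBKDF2) and integrity check (HMAC), not the encryption step, and all names are illustrative rather than the paper's scheme.

```python
# Sketch: derive a vault key from a master password and protect the
# synchronized vault blob with a MAC so tampering or a wrong key is detected.
import hashlib
import hmac
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    # Deliberately slow hash: offline guessing of the master password is costly.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

def seal(vault_bytes: bytes, key: bytes) -> bytes:
    # Append a 32-byte MAC; a tampered (or wrong-key) vault is rejected on open.
    return vault_bytes + hmac.new(key, vault_bytes, "sha256").digest()

def open_vault(blob: bytes, key: bytes) -> bytes:
    vault, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, vault, "sha256").digest()):
        raise ValueError("wrong master password or corrupted vault")
    return vault

salt = os.urandom(16)
key = derive_vault_key("correct horse battery staple", salt)
blob = seal(b'{"example.com": "s3cret"}', key)  # what the cloud would store
```

The cloud service only ever sees `salt` and `blob`, which is the sense in which synchronization can remain "secure against untrusted cloud service providers"; the paper's contribution is adding a privacy-preserved biometric factor on top of this.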
Skopik, F.; Settanni, G.; Fiedler, R.; Friedberg, I., "Semi-Synthetic Data Set Generation For Security Software Evaluation," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.156, 163, 23-24 July 2014. doi: 10.1109/PST.2014.6890935 Threats to modern ICT systems are rapidly changing these days. Organizations are no longer mainly concerned about virus infestation, but increasingly need to deal with targeted attacks. These attacks are specifically designed to stay below the radar of standard ICT security systems. As a consequence, vendors have begun to ship self-learning intrusion detection systems with sophisticated heuristic detection engines. While these approaches promise to ease the serious security situation, one of the main challenges is the proper evaluation of such systems under realistic conditions during development and before roll-out. Especially the wide variety of configuration settings makes it hard to find the optimal setup for a specific infrastructure. However, extensive testing in a live environment is not only cumbersome but usually also impacts daily business. In this paper, we therefore introduce an evaluation setup that consists of virtual components, which imitate real systems and human user interactions as closely as possible to produce system events, network flows and logging data of complex ICT service environments. This data is a key prerequisite for the evaluation of modern intrusion detection and prevention systems. With these generated data sets, a system's detection performance can be accurately rated and tuned for very specific settings.
Keywords: data handling; security of data; ICT security systems; ICT systems; heuristic detection engines; information and communication technology systems; intrusion detection and prevention systems; security software evaluation; self-learning intrusion detection systems; semisynthetic data set generation; virus infestation; Complexity theory; Data models; Databases; Intrusion detection; Testing; Virtual machining; anomaly detection evaluation; scalable system behavior model; synthetic data set generation (ID#: 15-3639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890935&isnumber=6890911
Montague, E.; Jie Xu; Chiou, E., "Shared Experiences of Technology and Trust: An Experimental Study of Physiological Compliance Between Active and Passive Users in Technology-Mediated Collaborative Encounters," Human-Machine Systems, IEEE Transactions on, vol. 44, no. 5, pp.614, 624, Oct. 2014. doi: 10.1109/THMS.2014.2325859 The aim of this study is to examine the utility of physiological compliance (PC) to understand shared experience in a multiuser technological environment involving active and passive users. Common ground is critical for effective collaboration and important for multiuser technological systems that include passive users since this kind of user typically does not have control over the technology being used. An experiment was conducted with 48 participants who worked in two-person groups in a multitask environment under varied task and technology conditions. Indicators of PC were measured from participants' cardiovascular and electrodermal activities. The relationship between these PC indicators and collaboration outcomes, such as performance and subjective perception of the system, was explored. Results indicate that PC is related to group performance after controlling for task/technology conditions. PC is also correlated with shared perceptions of trust in technology among group members. PC is a useful tool for monitoring group processes and, thus, can be valuable for the design of collaborative systems. This study has implications for understanding effective collaboration.
Keywords: groupware; multiprogramming; physiology; trusted computing; multitask environment; multiuser technological environment; physiological compliance; shared experiences; technology-mediated collaborative encounters; trust; Atmospheric measurements; Biomedical monitoring; Joints; Monitoring; Optical wavelength conversion; Particle measurements; Reliability; Group performance; multiagent systems; passive user; physiological compliance (PC); trust in technology (ID#: 15-3640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837486&isnumber=6898062
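Physiological compliance (PC) indicators like those above are typically built from the co-variation of two people's signal time series. A common minimal indicator is the Pearson correlation of the two series; the sketch below uses that, with invented heart-rate samples, and is not the authors' exact PC measure.

```python
# Pearson correlation between two physiological time series as a toy
# physiological-compliance indicator. Sample values are hypothetical.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical heart-rate samples for an active and a passive user
# collaborating on the same task.
active  = [72, 74, 78, 81, 79, 76]
passive = [68, 70, 73, 77, 75, 72]
r = pearson(active, passive)  # values near +1.0 suggest high compliance
```

The study's finding is then a second-order claim: that such compliance values correlate with group performance and with shared trust in the technology, after controlling for task and technology conditions.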
Algarni, A.; Yue Xu; Chan, T., "Social Engineering in Social Networking Sites: The Art of Impersonation," Services Computing (SCC), 2014 IEEE International Conference on, pp.797,804, June 27 2014-July 2 2014. doi: 10.1109/SCC.2014.108 Social networking sites (SNSs), with their large number of users and large information base, seem to be the perfect breeding ground for exploiting the vulnerabilities of people, who are considered the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as "social engineering." Fraudulent and deceptive people use social engineering traps and tactics through SNSs to trick users into obeying them, accepting threats, and falling victim to various crimes such as phishing, sexual abuse, financial abuse, identity theft, and physical crime. Although organizations, researchers, and practitioners recognize the serious risks of social engineering, there is a severe lack of understanding and control of such threats. This may be partly due to the complexity of human behaviors in approaching, accepting, and failing to recognize social engineering tricks. This research aims to investigate the impact of source characteristics on users' susceptibility to social engineering victimization in SNSs, particularly Facebook. Using grounded theory method, we develop a model that explains what and how source characteristics influence Facebook users to judge the attacker as credible.
Keywords: computer crime; fraud; social aspects of automation; social networking (online); Facebook; SNS; attacker; deceptive people; financial abuse; fraudulent people; grounded theory method; human behaviors complexity; identity theft; impersonation; large information base; phishing; physical crime; security; sexual abuse; social engineering traps; social engineering victimization; social engineering tactics; social networking sites; threats; user susceptibility; Encoding; Facebook; Interviews; Organizations; Receivers; Security; impersonation; information security management; social engineering; social networking sites; source credibility; trust management (ID#: 15-3641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6930610&isnumber=6930500
Riveiro, M.; Lebram, M.; Warston, H., "On Visualizing Threat Evaluation Configuration Processes: A Design Proposal," Information Fusion (FUSION), 2014 17th International Conference on, pp.1,8, 7-10 July 2014 Threat evaluation is concerned with estimating the intent, capability and opportunity of detected objects in relation to our own assets in an area of interest. To infer whether a target is threatening, and to what degree, is far from a trivial task. Expert operators normally have different support systems at their disposal that analyze the incoming data and provide recommendations for actions. Since the ultimate responsibility lies with the operators, it is crucial that they trust and know how to configure and use these systems, as well as have a good understanding of their inner workings, strengths and limitations. To limit the negative effects of inadequate cooperation between the operators and their support systems, this paper presents a design proposal that aims at making the threat evaluation process more transparent. We focus on the initialization, configuration and preparation phases of the threat evaluation process, supporting the user in the analysis of the behavior of the system considering the relevant parameters involved in the threat estimations. To do so, we follow a known design process model and we implement our suggestions in a proof-of-concept prototype that we evaluate with military expert system designers.
Keywords: estimation theory; expert systems; military computing; design process model; design proposal; expert operators; military expert system designer; proof-of-concept prototype; relevant parameter; threat estimation; threat evaluation configuration process; Data models; Estimation; Guidelines; Human computer interaction; Proposals; Prototypes; Weapons; decision-making; design; high-level information fusion; threat evaluation; transparency; visualization (ID#: 15-3642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916152&isnumber=6915967
El Masri, A.; Wechsler, H.; Likarish, P.; Kang, B.B., "Identifying Users With Application-Specific Command Streams," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.232,238, 23-24 July 2014. doi: 10.1109/PST.2014.6890944 This paper proposes and describes an active authentication model based on user profiles built from user-issued commands when interacting with a GUI-based application. Previous behavioral models derived from user-issued commands were limited to analyzing the user's interaction with the *Nix (Linux or Unix) command shell program. Human-computer interaction (HCI) research has explored the idea of building user profiles based on their behavioral patterns when interacting with such graphical interfaces. It did so by analyzing the user's keystroke and/or mouse dynamics. However, none had explored the idea of creating profiles by capturing users' usage characteristics when interacting with a specific application, beyond how a user strikes the keyboard or moves the mouse across the screen. We obtain and utilize a dataset of user command streams collected from working with Microsoft (MS) Word to serve as a test bed. User profiles are first built using MS Word commands and identification takes place using machine learning algorithms. Best performance in terms of both accuracy and Area under the Curve (AUC) for the Receiver Operating Characteristic (ROC) curve is reported using Random Forests (RF) and AdaBoost with random forests.
Keywords: biometrics (access control); human computer interaction; learning (artificial intelligence); message authentication; sensitivity analysis; AUC; AdaBoost; GUI-based application; MS Word commands; Microsoft; RF; ROC curve; active authentication model; application-specific command streams; area under the curve; human-computer interaction; machine learning algorithms; random forests; receiver operating characteristic; user command streams; user identification; user profiles; user-issued commands; Authentication; Biometrics (access control); Classification algorithms; Hidden Markov models; Keyboards; Mice; Radio frequency; Active Authentication; Behavioral biometrics; Intrusion Detection; Machine Learning (ID#: 15-3643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890944&isnumber=6890911
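The feature representation behind the approach above — a per-user profile built from application command streams — can be illustrated with a much simpler matcher than the paper's. The sketch below builds command-frequency profiles and identifies a session by cosine similarity to the nearest enrolled profile; the Random Forest / AdaBoost classifiers the authors actually use are replaced here by nearest-profile matching, and the MS Word command names are invented.

```python
# Toy user identification from application command streams:
# frequency profiles + cosine similarity (a stand-in for the paper's
# Random Forest / AdaBoost classifiers).
import math
from collections import Counter

def profile(commands):
    """Normalized command-frequency vector for one user or session."""
    counts = Counter(commands)
    total = sum(counts.values())
    return {cmd: c / total for cmd, c in counts.items()}

def cosine(p, q):
    dot = sum(p[k] * q.get(k, 0.0) for k in p)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

def identify(session_commands, enrolled):
    """Return the enrolled user whose profile best matches the session."""
    sp = profile(session_commands)
    return max(enrolled, key=lambda user: cosine(sp, enrolled[user]))

enrolled = {
    "alice": profile(["Bold", "Save", "Paste", "Bold", "TrackChanges"]),
    "bob":   profile(["InsertTable", "Save", "InsertTable", "Print"]),
}
session = ["Bold", "TrackChanges", "Paste", "Save"]
```

For active authentication, such a match (or, in the paper, a classifier score) would be computed continuously and a mismatch with the logged-in user's profile would trigger re-authentication.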
Saoud, Z.; Faci, N.; Maamar, Z.; Benslimane, D., "A Fuzzy Clustering-Based Credibility Model for Trust Assessment in a Service-Oriented Architecture," WETICE Conference (WETICE), 2014 IEEE 23rd International, pp.56,61, 23-25 June 2014. doi: 10.1109/WETICE.2014.35 This paper presents a credibility model to assess trust of Web services. The model relies on consumers' ratings, whose accuracy can be questioned due to different biases. A category of consumers, known as strict, is usually excluded from the process of reaching a majority consensus. We demonstrate that this exclusion is unwarranted. The proposed model reduces the gap between these consumers' ratings and the current majority rating. Fuzzy clustering is used to compute consumers' credibility. To validate this model, a set of experiments is carried out.
Keywords: Web services; customer satisfaction; fuzzy set theory; human computer interaction; pattern clustering; service-oriented architecture; trusted computing; Web services; consumer credibility; consumer ratings; credibility model; fuzzy clustering; majority rating; service-oriented architecture; trust assessment; Clustering algorithms; Communities; Computational modeling; Equations; Robustness; Social network services; Web services; Credibility; Trust; Web Service (ID#: 15-3644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927023&isnumber=6926989
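The core idea above — weight each consumer's rating by a credibility score derived from its closeness to the consensus, without excluding "strict" raters outright — can be sketched with a simple fixed-point iteration. The update rule below is an assumption chosen for clarity, not the paper's fuzzy-clustering formulation.

```python
# Illustrative credibility-weighted rating aggregation: the majority rating
# and per-consumer credibilities are refined together. The soft update rule
# is a stand-in for the paper's fuzzy-clustering computation.

def assess(ratings, rounds=20):
    credibility = [1.0] * len(ratings)
    for _ in range(rounds):
        majority = (sum(c * r for c, r in zip(credibility, ratings))
                    / sum(credibility))
        # Soft membership: closer to the consensus => more credible, but
        # strict (outlying) raters keep some weight instead of being excluded.
        credibility = [1.0 / (1.0 + abs(r - majority)) for r in ratings]
    return majority, credibility

# Ratings of one Web service on a 1-5 scale; the 1.0 is a "strict" consumer.
ratings = [4.0, 4.5, 4.0, 5.0, 1.0]
majority, cred = assess(ratings)
```

Note the design choice this illustrates: the strict rater's credibility shrinks but never reaches zero, so its rating still narrows the gap to the majority rating rather than being discarded, which is the behavior the paper argues for.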
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
![]() |
Input-Output (I/O) Systems Security |
Management of I/O devices is a critical part of the operating system, and entire I/O subsystems are devoted to its operation. These subsystems contend both with the movement towards standard interfaces for a wide range of devices, which makes it easier to add newly developed devices to existing systems, and with the development of entirely new types of devices to which existing standard interfaces can be difficult to apply. Typically, when accessing files, a security check is performed when the file is created or opened; the check is typically not done again unless the file is closed and reopened. If an opened file is passed to an untrusted caller, the security system can, but is not required to, prevent the caller from accessing the file. Research into I/O security addresses the need to provide adequate security economically and at scale.
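The open-time check described above can be sketched as a capability-style handle: permissions are verified once at open, and the returned handle carries the granted right, so later reads are not re-checked even if the handle is passed to another, possibly untrusted, component. The names below are illustrative, not any particular operating system's API.

```python
# Toy model of open-time security checking: the ACL is consulted only in
# secure_open(); the FileHandle it returns can read without further checks.

class AccessDenied(Exception):
    pass

ACL = {"/etc/secret": {"root"}}   # file -> users allowed to open it

class FileHandle:
    def __init__(self, path, mode):
        self.path, self.mode = path, mode
    def read(self):
        # No security check here: rights were established at open time.
        return f"<contents of {self.path}>"

def secure_open(path, user, mode="r"):
    if user not in ACL.get(path, set()):
        raise AccessDenied(f"{user} may not open {path}")
    return FileHandle(path, mode)

handle = secure_open("/etc/secret", "root")
# Handing `handle` to untrusted code now grants it read access; as noted
# above, the system may, but need not, revalidate on each read.
```

This is exactly the trade-off the research cited below addresses: per-access revalidation is safer when handles cross trust boundaries, but checking only at open is far cheaper at scale.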
The research cited here was published or presented in the first half of 2014. I/O security topics addressed in these works include avionic systems, virtual machines, device replication, RAID arrays, hypervisor design, and cloud storage.
Muller, K.; Sigl, G.; Triquet, B.; Paulitsch, M., "On MILS I/O Sharing Targeting Avionic Systems," Dependable Computing Conference (EDCC), 2014 Tenth European , vol., no., pp.182,193, 13-16 May 2014. doi: 10.1109/EDCC.2014.35 This paper discusses strategies for I/O sharing in Multiple Independent Levels of Security (MILS) systems mostly deployed in the special environment of avionic systems. MILS system designs are promising approaches for handling the increasing complexity of functionally integrated systems, where multiple applications run concurrently on the same hardware platform. Such integrated systems, also known as Integrated Modular Avionics (IMA) in the aviation industry, require communication to remote systems located outside of the hosting hardware platform. One possible solution is to provide each partition, the isolated runtime environment of an application, a direct interface to the communication's hardware controller. Nevertheless, this approach requires a special design of the hardware itself. This paper discusses efficient system architectures for I/O sharing in the environment of high-criticality embedded systems and the exemplary analysis of Freescale's proprietary Data Path Acceleration Architecture (DPAA) with respect to generic hardware requirements. Based on this analysis we also discuss the development of possible architectures matching with the MILS approach. Even though the analysis focuses on avionics, it is equally applicable to automotive architectures such as AUTOSAR.
Keywords: aerospace computing; avionics; embedded systems; security of data; DPAA; IMA; MILS I/O sharing; MILS system designs; AUTOSAR; aviation industry; avionic systems; communication hardware controller; Freescale proprietary data path acceleration architecture; hardware platform; high-criticality embedded systems; integrated modular avionics; multiple independent levels of security system; system architectures; Aerospace electronics; Computer architecture; Hardware; Portals; Runtime; Security; Software (ID#: 15-3724)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821104&isnumber=6821069
Aiash, M.; Mapp, G.; Gemikonakli, O., "Secure Live Virtual Machines Migration: Issues and Solutions," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 160-165, 13-16 May 2014. doi: 10.1109/WAINA.2014.35 In recent years, there has been a huge trend towards running network-intensive applications, such as Internet servers and cloud-based services, in virtual environments, where multiple virtual machines (VMs) running on the same machine share the machine's physical and network resources. In such an environment, the virtual machine monitor (VMM) virtualizes the machine's resources in terms of CPU, memory, storage, network, and I/O devices to allow multiple operating systems running in different VMs to operate and access the network concurrently. A key feature of virtualization is live migration (LM), which allows transfer of a virtual machine from one physical server to another without interrupting the services running in the virtual machine. Live migration facilitates workload balancing, fault tolerance, online system maintenance, consolidation of virtual machines, and so on. However, live migration is still in an early stage of implementation and its security is yet to be evaluated. The security concerns of live migration are a major factor in its adoption by the IT industry. Therefore, this paper uses the X.805 security standard to investigate attacks on live virtual machine migration. The analysis highlights the main sources of threats and suggests approaches to tackle them. The paper also surveys and compares different proposals in the literature to secure live migration.
Keywords: cloud computing; security of data; virtual machines; Internet server; VMM; X.805 security standard; cloud-based service; fault tolerance; live virtual machine migration; online system maintenance; virtual machine monitor; workload balancing; Authentication; Hardware; Servers; Virtual machine monitors; Virtual machining; Virtualization (ID#: 15-3725)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844631&isnumber=6844560
Ravindran, K.; Rabby, M.; Adiththan, A., "Model-Based Control of Device Replication for Trusted Data Collection," Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2014 Workshop on, pp. 1-6, 14 April 2014. doi: 10.1109/MSCPES.2014.6842399 Voting among replicated data collection devices is a means to achieve dependable data delivery to the end-user in a hostile environment. Failures may occur during the data collection process, such as data corruptions by malicious devices and security/bandwidth attacks on data paths. For a voting system, how often a correct data item is delivered to the user in a timely manner and with low overhead depicts the QoS. Prior works have focused on algorithm correctness issues and performance engineering of the voting protocol mechanisms. In this paper, we study methods for autonomic management of device replication in the voting system to deal with situations where the available network bandwidth fluctuates, the fault parameters change unpredictably, and the devices have battery energy constraints. We treat the voting system as a 'black box' with programmable I/O behaviors. A management module exercises macroscopic control of the voting box with situational inputs such as application priorities, network resources, battery energy, and external threat levels.
Keywords: quality of service; security of data; trusted computing; QoS; algorithm correctness; bandwidth attack; black-box; data corruptions; device replication autonomic management; malicious devices; security attack; trusted data collection; voting protocol mechanisms; Bandwidth; Batteries; Data collection; Delays; Frequency modulation; Protocols; Quality of service; Adaptive Fault-tolerance; Attacker Modeling; Hierarchical Control; Sensor Replication; Situational Assessment (ID#: 15-3726)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842399&isnumber=6842390
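The voting idea the paper builds on can be illustrated with a minimal majority-vote sketch (purely illustrative, not the authors' protocol): deliver the value reported by a quorum of replicated devices, and flag the round as failed if no quorum is reached.

```python
from collections import Counter

def majority_vote(readings, quorum=None):
    """Return the value reported by a quorum of replicated devices.

    readings: list of values from the replicated data-collection devices
    quorum: minimum agreeing replicas required; defaults to floor(n/2)+1
    Returns (value, True) on success, or (None, False) when no quorum is
    reached, e.g. because malicious devices split the vote.
    """
    if not readings:
        return None, False
    if quorum is None:
        quorum = len(readings) // 2 + 1
    value, count = Counter(readings).most_common(1)[0]
    return (value, True) if count >= quorum else (None, False)

# Three honest devices outvote one corrupted reading.
print(majority_vote([42, 42, 42, 99]))  # (42, True)
```

Raising the quorum parameter models the paper's replication-management knob: more required agreement tolerates more faults but costs more devices and bandwidth.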
Smith, S.; Woodward, C.; Liang Min; Chaoyang Jing; Del Rosso, A., "On-line Transient Stability Analysis Using High Performance Computing," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, pp. 1-5, 19-22 Feb. 2014. doi: 10.1109/ISGT.2014.6816438 In this paper, parallelization and high performance computing are utilized to enable ultrafast transient stability analysis that can be used in a real-time environment to quickly perform "what-if" simulations involving system dynamics phenomena. EPRI's Extended Transient Midterm Simulation Program (ETMSP) is modified and enhanced for this work. The contingency analysis is scaled for large-scale contingency analysis using Message Passing Interface (MPI) based parallelization. Simulations of thousands of contingencies on a high performance computing machine are performed, and results show that parallelization over contingencies with MPI provides good scalability and computational gains. Different ways to reduce the Input/Output (I/O) bottleneck are explored, and findings indicate that architecting a machine with a larger local disk and maintaining a local file system significantly improve the scaling results. Thread-parallelization of the sparse linear solve is also explored through use of the SuperLU_MT library.
Keywords: large-scale systems; message passing; power engineering computing; power system transient stability; real-time systems; EPRI extended transient midterm simulation program; ETMSP; MPI; SuperLU_MT library; high performance computing machine; input-output bottleneck; large-scale contingency analysis; local disk; local file system; message passing interface; on-line transient stability analysis; real-time environment; sparse linear solve; system dynamics phenomena; ultrafast transient stability analysis; Computational modeling; File systems; High performance computing; Power system stability; Stability analysis; Transient analysis; Dynamic security assessment; control center; high performance computing; parallelization; real-time simulation; transient stability (ID#: 15-3727)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816438&isnumber=6816367
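The parallelization-over-contingencies scheme can be sketched with a worker pool standing in for MPI ranks (a hedged illustration; `simulate_contingency` is a hypothetical placeholder, not an ETMSP run):

```python
from concurrent.futures import ThreadPoolExecutor  # pool standing in for MPI ranks

def simulate_contingency(cid):
    # Placeholder for one transient-stability run; a real run would
    # integrate the system dynamics for this contingency. Here, every
    # 7th contingency is arbitrarily declared unstable.
    return cid, (cid % 7 != 0)

def screen_contingencies(contingencies, workers=4):
    # Contingencies are mutually independent, so they parallelize
    # embarrassingly; this is exactly what MPI-over-contingencies exploits.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulate_contingency, contingencies))
    return [cid for cid, stable in results if not stable]

print(screen_contingencies(range(20)))  # unstable cases: [0, 7, 14]
```

In the paper's setting each rank would also write its own results to a local file system, which is where the I/O-bottleneck findings come in.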
Shropshire, J., "Analysis of Monolithic and Microkernel Architectures: Towards Secure Hypervisor Design," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 5008-5017, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.615 This research focuses on hypervisor security from a holistic perspective. It centers on hypervisor architecture: the organization of the various subsystems which collectively comprise a virtualization platform. It holds that the path to a secure hypervisor begins with a big-picture focus on architecture. Unfortunately, little research has been conducted with this perspective. This study investigates the impact of monolithic and microkernel hypervisor architectures on the size and scope of the attack surface. Six architectural features are compared: management API, monitoring interface, hypercalls, interrupts, networking, and I/O. These subsystems are core hypervisor components which could be used as attack vectors. Specific examples and three leading hypervisor platforms are referenced (ESXi for the monolithic architecture; Xen and Hyper-V for the microkernel architecture). The results describe the relative strengths and vulnerabilities of both types of architectures. It is concluded that neither design is more secure, since both incorporate security tradeoffs in core processes.
Keywords: application program interfaces; security of data; virtualization; ESXi; Hyper-V; Xen; attack surface; hypercalls; hypervisor security; management API; microarchitecture; microkernel hypervisor architectures; microkernel architectures; monitoring interface; monolithic architectures; monolithic hypervisor architectures; networking; secure hypervisor design; security tradeoffs; virtualization platform; Computer architecture; Hardware; Kernel; Monitoring; Security; Virtual machine monitors; Virtual machining; cloud computing; hypervisor security; microkernel architecture; monolithic architecture (ID#: 15-3728)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759218&isnumber=6758592
Youngjung Ahn; Yongsuk Lee; Jin-Young Choi; Gyungho Lee; Dongkyun Ahn, "Monitoring Translation Lookahead Buffers to Detect Code Injection Attacks," Computer, vol. 47, no. 7, pp. 66-72, July 2014. doi: 10.1109/MC.2013.228 By identifying memory pages that external I/O operations have modified, the proposed scheme blocks activation of maliciously injected code, accurately distinguishing an attack from legitimate code injection with negligible performance impact and no changes to the user application.
Keywords: buffer storage; computer crime; system monitoring; blocks malicious injected code activation; code injection attack detection; external I/O operations; legitimate code injection attack; memory pages identification; translation lookahead buffers monitoring; Decision support systems; Handheld computers; Code injection; TLB; data execution prevention; hackers; invasive software; security (ID#: 15-3729)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6560060&isnumber=6861869
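The detection idea, refusing to execute memory pages that external I/O wrote but no loader ever authorized, can be modeled with a small sketch (a toy model only; the paper works at the TLB/hardware level, and all names here are illustrative):

```python
class PageMonitor:
    """Toy model of I/O-tainted-page tracking for code-injection detection."""

    def __init__(self):
        self.io_written = set()   # pages modified by external I/O
        self.authorized = set()   # pages a loader legitimately marked executable

    def io_write(self, page):
        self.io_written.add(page)

    def authorize(self, page):
        # e.g. the OS loader mapping a code page in from disk
        self.authorized.add(page)
        self.io_written.discard(page)

    def may_execute(self, page):
        # Block injected-code activation: an I/O-written page that was
        # never authorized must not be executed.
        return page not in self.io_written or page in self.authorized

m = PageMonitor()
m.io_write(0x1000)    # attacker smuggles code in via an I/O buffer
m.authorize(0x2000)   # legitimate code load
print(m.may_execute(0x1000), m.may_execute(0x2000))  # False True
```

The legitimate-injection path (JIT compilers, loaders) corresponds to the `authorize` step, which is how the scheme avoids false positives on benign code generation.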
Mingqiang Li; Lee, P.P.C., "Toward I/O-Efficient Protection Against Silent Data Corruptions in RAID Arrays," Mass Storage Systems and Technologies (MSST), 2014 30th Symposium on, pp. 1-12, 2-6 June 2014. doi: 10.1109/MSST.2014.6855548 Although RAID is a well-known technique to protect data against disk errors, it is vulnerable to silent data corruptions that cannot be detected by disk drives. Existing integrity protection schemes designed for RAID arrays often introduce high I/O overhead. Our key insight is that by properly designing an integrity protection scheme that adapts to the read/write characteristics of storage workloads, the I/O overhead can be significantly mitigated. In view of this, this paper presents a systematic study on I/O-efficient integrity protection against silent data corruptions in RAID arrays. We formalize an integrity checking model, and justify that a large proportion of disk reads can be checked with simpler and more I/O-efficient integrity checking mechanisms. Based on this integrity checking model, we construct two integrity protection schemes that provide complementary performance advantages for storage workloads with different user write sizes. We further propose a quantitative method for choosing between the two schemes in real-world scenarios. Our trace-driven simulation results show that with the appropriate integrity protection scheme, we can reduce the I/O overhead to below 15%.
Keywords: RAID; data integrity; input-output programs; security of data; IO-efficient integrity checking mechanisms; IO-efficient protection; RAID arrays; disk drives; disk errors; integrity checking model; integrity protection schemes; read-write characteristics; silent data corruptions; storage workloads; trace-driven simulation; user write sizes; Arrays; Data models; Disk drives; Redundancy; Systematics; Taxonomy; I/O overhead; RAID; integrity protection schemes; silent data corruptions (ID#: 15-3730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855548&isnumber=6855532
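The basic mechanism being optimized, verifying a per-chunk checksum on every read so that a corruption the disk silently returns as "success" is still caught, can be sketched as follows (a generic checksum layer, not the paper's specific schemes):

```python
import hashlib

class ChecksummedStore:
    """Toy store that detects silent data corruption via per-chunk hashes."""

    def __init__(self):
        self.chunks = {}
        self.sums = {}  # kept separately, as a RAID layer keeps its metadata

    def write(self, key, data: bytes):
        self.chunks[key] = bytearray(data)
        self.sums[key] = hashlib.sha256(data).digest()

    def read(self, key) -> bytes:
        data = bytes(self.chunks[key])
        if hashlib.sha256(data).digest() != self.sums[key]:
            raise IOError(f"silent corruption detected in chunk {key!r}")
        return data

s = ChecksummedStore()
s.write("a", b"payload")
s.chunks["a"][0] ^= 0xFF   # flip a bit: a corruption the disk never reports
try:
    s.read("a")
except IOError as e:
    print(e)
```

The paper's contribution is precisely about when this extra hashing and metadata I/O can be skipped or simplified; the sketch shows only the always-check baseline it improves on.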
Mianxiong Dong; He Li; Ota, K.; Haojin Zhu, "HVSTO: Efficient Privacy Preserving Hybrid Storage in Cloud Data Center," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 529-534, 27 April-2 May 2014. doi: 10.1109/INFCOMW.2014.6849287 In cloud data centers, well-managed shared storage is the main structure used for storing virtual machines (VMs). In this paper, we propose Hybrid VM Storage (HVSTO), a privacy-preserving shared storage system designed for virtual machine storage in large-scale cloud data centers. Unlike traditional shared storage, HVSTO adopts a distributed structure to preserve the privacy of virtual machines, which is threatened in a traditional centralized structure. To improve I/O latency in this distributed structure, we use a hybrid system that combines solid-state disks and distributed storage. Evaluation of our demonstration system shows that HVSTO provides scalable and sufficient throughput for the platform-as-a-service infrastructure.
Keywords: cloud computing; computer centers; data privacy; virtual machines; virtualization; HVSTO; I/O latency performance improvement; distributed storage; distributed structure; hybrid VM storage; large-scale cloud data center; privacy preserving hybrid storage; privacy preserving shared storage system; service infrastructure; solid state disk; virtual machine storage; Conferences; Data privacy; Indexes; Security; Servers; Virtual machining; Virtualization (ID#: 15-3730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849287&isnumber=6849127
Chiang, R.; Rajasekaran, S.; Zhang, N.; Huang, H., "Swiper: Exploiting Virtual Machine Vulnerability in Third-Party Clouds with Competition for I/O Resources," Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no. 99, p. 1, June 2014. doi: 10.1109/TPDS.2014.2325564 The emerging paradigm of cloud computing, e.g., Amazon Elastic Compute Cloud (EC2), promises a highly flexible yet robust environment for large-scale applications. Ideally, while multiple virtual machines (VM) share the same physical resources (e.g., CPUs, caches, DRAM, and I/O devices), each application should be allocated to an independently managed VM and isolated from one another. Unfortunately, the absence of physical isolation inevitably opens doors to a number of security threats. In this paper, we demonstrate in EC2 a new type of security vulnerability caused by competition between virtual I/O workloads - i.e., by leveraging the competition for shared resources, an adversary could intentionally slow down the execution of a targeted application in a VM that shares the same hardware. In particular, we focus on I/O resources such as hard-drive throughput and/or network bandwidth - which are critical for data-intensive applications. We design and implement Swiper, a framework which uses a carefully designed workload to incur significant delays on the targeted application and VM with minimum cost (i.e., resource consumption). We conduct a comprehensive set of experiments in EC2, which clearly demonstrates that Swiper is capable of significantly slowing down various server applications while consuming a small amount of resources.
Keywords: Cloud computing; Delays; IP networks; Security; Synchronization; Throughput; Virtualization (ID#: 15-3731)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824231&isnumber=4359390
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Integrated Security
Cybersecurity has spent the past two decades largely as a "bolt-on" product, added as an afterthought. To get to composability, built-in, integrated security will be a key factor. The research cited here addresses issues in integrated security technologies and was presented in 2014.
Integrity of Outsourced Databases
The growth of distributed storage systems such as the Cloud has produced novel security problems. The works cited here address untrusted servers, generic trusted data, trust extension on commodity computers, defense against frequency-based attacks in wireless networks, and other topics. These articles were presented or published in the first half of 2014.
Matteo Maffei, Giulio Malavolta, Manuel Reinert, Dominique Schröder, "Brief Announcement: Towards Security and Privacy for Outsourced Data in the Multi-Party Setting," Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 144-146. doi: 10.1145/2611462.2611508 Cloud storage has rapidly acquired popularity among users, constituting a seamless solution for the backup, synchronization, and sharing of large amounts of data. This technology, however, puts user data in the direct control of cloud service providers, which raises increasing security and privacy concerns related to the integrity of outsourced data, the accidental or intentional leakage of sensitive information, the profiling of user activities and so on. We present GORAM, a cryptographic system that protects the secrecy and integrity of the data outsourced to an untrusted server and guarantees the anonymity and unlinkability of consecutive accesses to such data. GORAM allows the database owner to share outsourced data with other clients, selectively granting them read and write permissions. GORAM is the first system to achieve such a wide range of security and privacy properties for outsourced storage. Technically, GORAM builds on a combination of ORAM to conceal data accesses, attribute-based encryption to rule the access to outsourced data, and zero-knowledge proofs to prove read and write permissions in a privacy-preserving manner. We implemented GORAM and conducted an experimental evaluation to demonstrate its feasibility.
Keywords: GORAM, ORAM, cloud storage, oblivious ram, privacy-enhancing technologies (ID#: 15-3732)
URL: http://dl.acm.org/citation.cfm?id=2611462.2611508&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://doi.acm.org/10.1145/2611462.2611508
Andrew Miller, Michael Hicks, Jonathan Katz, Elaine Shi, "Authenticated Data Structures, Generically," Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2014, Pages 411-423. doi: 10.1145/2535838.2535851 An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations. This paper presents a generic method, using a simple extension to an ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual and it is compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented λ• as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black+ trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.
Keywords: authenticated data structures, cryptography, programming languages, security (ID#: 15-3733)
URL: http://dl.acm.org/citation.cfm?id=2535838.2535851&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://doi.acm.org/10.1145/2535838.2535851
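The prover/verifier pattern behind ADSs can be illustrated with a plain Merkle-tree membership proof, the classic special case that λ• generalizes (a standard textbook construction, far simpler than the paper's generic compilation):

```python
import hashlib

h = lambda b: hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, idx):
    """Prover: compact proof that leaves[idx] is in the tree."""
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))   # (sibling hash, sibling-is-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(root, leaf, proof):
    """Verifier: recompute the root from the leaf and the sibling path."""
    acc = h(leaf)
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
assert verify(root, b"c", prove(leaves, 2))       # honest prover passes
assert not verify(root, b"x", prove(leaves, 2))   # tampered result fails
```

The verifier stores only the root digest; proofs are O(log n), which is the integrity-for-free property λ• extends from trees to arbitrary recursive datatypes.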
Lifei Wei, Haojin Zhu, Zhenfu Cao, Xiaolei Dong, Weiwei Jia, Yunlu Chen, Athanasios V. Vasilakos, "Security and Privacy for Storage and Computation in Cloud Computing," Information Sciences: an International Journal, Volume 258, February 2014, Pages 371-386. doi: 10.1016/j.ins.2013.04.028 Cloud computing emerges as a new computing paradigm that aims to provide reliable, customized and quality of service guaranteed computation environments for cloud users. Applications and databases are moved to the large centralized data centers, called cloud. Due to resource virtualization, global replication and migration, the physical absence of data and machine in the cloud, the stored data in the cloud and the computation results may not be well managed and fully trusted by the cloud users. Most of the previous work on the cloud security focuses on the storage security rather than taking the computation security into consideration together. In this paper, we propose a privacy cheating discouragement and secure computation auditing protocol, or SecCloud, which is a first protocol bridging secure storage and secure computation auditing in cloud and achieving privacy cheating discouragement by designated verifier signature, batch verification and probabilistic sampling techniques. The detailed analysis is given to obtain an optimal sampling size to minimize the cost. Another major contribution of this paper is that we build a practical secure-aware cloud computing experimental environment, or SecHDFS, as a test bed to implement SecCloud. Further experimental results have demonstrated the effectiveness and efficiency of the proposed SecCloud.
Keywords: Batch verification, Cloud computing, Designated verifier signature, Privacy-cheating discouragement, Secure computation auditing, Secure storage (ID#: 15-3734)
URL: http://dl.acm.org/citation.cfm?id=2563733.2564107&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://dx.doi.org/10.1016/j.ins.2013.04.028
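The probabilistic-sampling idea can be made concrete with the standard detection-probability calculation (generic sampling math, not the paper's full cost model): if a fraction p of blocks is corrupted and the auditor samples k blocks uniformly at random, a cheat is caught with probability 1 - (1-p)^k, so the minimal sample size for a target confidence follows directly.

```python
import math

def detection_probability(p, k):
    """Chance that sampling k blocks catches a corruption rate p."""
    return 1 - (1 - p) ** k

def min_sample_size(p, confidence):
    """Smallest k with detection probability >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# Detecting 1% corruption with 99% confidence needs ~459 samples,
# independent of the total database size.
print(min_sample_size(0.01, 0.99))
```

The size-independence of the sample count is what makes sampling-based auditing cheap enough to run routinely against large cloud stores.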
Bryan Jeffery Parno, Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers, ACM Press, New York, NY, 2014. ISBN: 978-1-62705-477-5 doi: 10.1145/2611399 From the preface: As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981]. In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters.
• Introduction. This chapter introduces the notion of bootstrapping trust from one device or service to another and gives an overview of how the subsequent chapters fit together.
• Background and related work. This chapter focuses on existing techniques for bootstrapping trust in commodity computers, specifically by conveying information about a computer's current execution environment to an interested party. This would, for example, enable a user to verify that her computer is free of malware, or that a remote web server will handle her data responsibly.
• Bootstrapping trust in a commodity computer. At a high level, this chapter develops techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. While the problem is simply stated, finding a solution that is both secure and usable with existing hardware proves quite difficult.
• On-demand secure code execution. Rather than entrusting a user's data to the mountain of buggy code likely running on her computer, in this chapter, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
• Using trustworthy host data in the network. Having established an environment for secure code execution on an individual computer, this chapter shows how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network.
• Secure code execution on untrusted hardware. Lastly, this chapter extends the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner.
Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers. (ID#: 15-3735)
URL: http://dl.acm.org/citation.cfm?id=2611399&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526
Hongbo Liu, Hui Wang, Yingying Chen, Dayong Jia, "Defending against Frequency-Based Attacks on Distributed Data Storage in Wireless Networks," ACM Transactions on Sensor Networks (TOSN), Volume 10 Issue 3, April 2014, Article No. 49. doi: 10.1145/2594774 As wireless networks become more pervasive, the amount of the wireless data is rapidly increasing. One of the biggest challenges of wide adoption of distributed data storage is how to store these data securely. In this work, we study the frequency-based attack, a type of attack that is different from previously well-studied ones, that exploits additional adversary knowledge of domain values and/or their exact/approximate frequencies to crack the encrypted data. To cope with frequency-based attacks, the straightforward 1-to-1 substitution encryption functions are not sufficient. We propose a data encryption strategy based on 1-to-n substitution via dividing and emulating techniques to defend against the frequency-based attack, while enabling efficient query evaluation over encrypted data. We further develop two frameworks, incremental collection and clustered collection, which are used to defend against the global frequency-based attack when the knowledge of the global frequency in the network is not available. Built upon our basic encryption schemes, we derive two mechanisms, direct emulating and dual encryption, to handle updates on the data storage for energy-constrained sensor nodes and wireless devices. Our preliminary experiments with sensor nodes and extensive simulation results show that our data encryption strategy can achieve high security guarantee with low overhead.
Keywords: Frequency-based attack, secure distributed data storage, wireless networks (ID#: 15-3736)
URL: http://dl.acm.org/citation.cfm?id=2619982.2594774&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://doi.acm.org/10.1145/2594774
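The 1-to-n substitution idea can be sketched: give each plaintext value a number of ciphertext tokens proportional to its frequency, so every token appears roughly equally often and frequency analysis learns nothing (a hypothetical toy, not the paper's dividing-and-emulating scheme, and with no real encryption of the tokens themselves):

```python
import random

def build_mapping(freqs, unit):
    """Give each value ceil(freq/unit) ciphertext tokens, so every token
    ends up with roughly the same frequency in the encrypted column."""
    mapping, next_token = {}, 0
    for value, freq in freqs.items():
        n = -(-freq // unit)               # ceil division
        mapping[value] = list(range(next_token, next_token + n))
        next_token += n
    return mapping

def encrypt(value, mapping, rng=random):
    return rng.choice(mapping[value])      # 1-to-n substitution

def decrypt(token, mapping):
    return next(v for v, toks in mapping.items() if token in toks)

mapping = build_mapping({"flu": 90, "rare": 10}, unit=10)
print(len(mapping["flu"]), len(mapping["rare"]))  # 9 1
assert decrypt(encrypt("rare", mapping), mapping) == "rare"
```

Because "flu" spreads over nine tokens and "rare" over one, each token is expected to occur about ten times, hiding which value dominates while queries can still be rewritten over the token sets.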
She-I Chang, David C. Yen, I-Cheng Chang, Derek Jan, “Internal Control Framework For A Compliant ERP System,” Information and Management, Volume 51 Issue 2, March, 2014, Pages 187-205. doi>10.1016/j.im.2013.11.002 After the occurrence of numerous worldwide financial scandals, the importance of related issues such as internal control and information security has greatly increased. This study develops an internal control framework that can be applied within an enterprise resource planning (ERP) system. A literature review is first conducted to examine the necessary forms of internal control in information technology (IT) systems. The control criteria for the establishment of the internal control framework are then constructed. A case study is conducted to verify the feasibility of the established framework. This study proposes a 12-dimensional framework with 37 control items aimed at helping auditors perform effective audits by inspecting essential internal control points in ERP systems. The proposed framework allows companies to enhance IT audit efficiency and mitigates control risk. Moreover, companies that refer to this framework and consider the limitations of their own IT management can establish a more robust IT management mechanism.
Keywords: Enterprise resource planning, IT control, Internal control framework (ID#: 15-3737)
URL: http://dl.acm.org/citation.cfm?id=2592290.2592340&coll=DL&dl=GUIDE&CFID=404518475&CFTOKEN=44609526 or http://dx.doi.org/10.1016/j.im.2013.11.002
Miyoung Jang; Min Yoon; Jae-Woo Chang, "A Privacy-Aware Query Authentication Index for Database Outsourcing," Big Data and Smart Computing (BIGCOMP), 2014 International Conference on, pp. 72-76, 15-17 Jan. 2014. doi: 10.1109/BIGCOMP.2014.6741410 Recently, cloud computing has been spotlighted as a new paradigm for database management systems. In this environment, databases are outsourced and deployed on a service provider in order to reduce the cost of data storage and maintenance. However, the service provider might be untrusted, so two data security issues, data confidentiality and query result integrity, become major concerns for users. Existing bucket-based data authentication methods have the problem that the original spatial data distribution can be disclosed from the data authentication index due to unsophisticated data grouping strategies. In addition, the transmission overhead of the verification object is high. In this paper, we propose a privacy-aware query authentication scheme which guarantees data confidentiality and query result integrity for users. A periodic-function-based data grouping scheme is designed to privately partition a spatial database into small groups for generating a signature of each group. The group signature is used to check the correctness and completeness of outsourced data when answering a range query. Performance evaluation shows that the proposed method outperforms the existing method, reducing range query processing time by up to a factor of three.
Keywords: cloud computing; data integrity; data privacy; database indexing; digital signatures; outsourcing; query processing; visual databases; bucket-based data authentication methods; cloud computing; cost reduction; data confidentiality; data maintenance; data security; data storage; database management system; database outsourcing; group signature; periodic function-based data grouping scheme; privacy-aware query authentication index; query result integrity; range query answering; service provider; spatial data distribution; spatial database; unsophisticated data grouping strategy; verification object transmission overhead; Authentication; Encryption; Indexes; Query processing; Spatial databases; Data authentication index; Database outsourcing; Encrypted database; Query result integrity (ID#: 15-3738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6741410&isnumber=6741395
Omote, K.; Thao, T.P., "A New Efficient and Secure POR Scheme Based on Network Coding," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 98-105, 13-16 May 2014. doi: 10.1109/AINA.2014.17 As information grows rapidly, database owners tend to outsource their data to an external service provider, i.e., cloud computing. Using the cloud, clients can remotely store their data without the burden of local data storage and maintenance. However, such a service provider is untrusted, so there are several challenges in data security: integrity, availability and confidentiality. Since integrity and availability are prerequisites for the existence of a system, we mainly focus on them rather than confidentiality. To ensure integrity and availability, researchers have proposed network-coding-based POR (Proof of Retrievability) schemes that enable the servers to demonstrate whether the data is retrievable. However, most network-coding-based POR schemes are inefficient in data checking and cannot prevent a common attack on POR: the small corruption attack. In this paper, we propose a new network-coding-based POR scheme using a dispersal code in order to reduce the cost of the checking phase and to prevent the small corruption attack.
Keywords: cloud computing; data communication; network coding; security of data; cloud computing; corruption attack; corruption attack prevention; cost reduction; data availability; data checking; data confidentiality; data integrity; data security; dispersal code; efficient POR scheme; local data storage; maintenance; network coding-based POR; proof of retrievability; secure POR scheme; Availability; Decoding; Encoding; Maintenance engineering; Network coding; Servers; Silicon; data availability; data integrity; network coding; proof of retrievability (ID#: 15-3739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838653&isnumber=6838626
Yinan Jing; Ling Hu; Wei-Shinn Ku; Shahabi, C., "Authentication of k Nearest Neighbor Query on Road Networks," Knowledge and Data Engineering, IEEE Transactions on, vol. 26, no. 6, pp. 1494-1506, June 2014. doi: 10.1109/TKDE.2013.174 Outsourcing spatial databases to the cloud provides an economical and flexible way for data owners to deliver spatial data to users of location-based services. However, in the database outsourcing paradigm, the third-party service provider is not always trustworthy; therefore, ensuring spatial query integrity is critical. In this paper, we propose an efficient road network k-nearest-neighbor query verification technique which utilizes the network Voronoi diagram and neighbors to prove the integrity of query results. Unlike previous work that verifies k-nearest-neighbor results in the Euclidean space, our approach needs to verify both the distances and the shortest paths from the query point to its kNN results on the road network. We evaluate our approach on real-world road networks together with both real and synthetic points of interest datasets. Our experiments run on Google Android mobile devices which communicate with the service provider through wireless connections. The experiment results show that our approach leads to compact verification objects (VO) and the verification algorithm on mobile devices is efficient, especially for queries with low selectivity.
Keywords: computational geometry; outsourcing; query processing; smart phones; visual databases; Euclidean space; Google Android mobile devices; Voronoi diagram; database outsourcing paradigm; k nearest neighbor query; location-based services; road network k-nearest-neighbor query verification technique; spatial databases; spatial query integrity; third-party service provider; Artificial neural networks; Authentication; Generators; Outsourcing; Roads; Spatial databases; Spatial database outsourcing; location-based services; query authentication; road networks (ID#: 15-3740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6658750&isnumber=6824283
Wang, H., "Identity-Based Distributed Provable Data Possession in Multi-Cloud Storage," Services Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1, March 2014. doi: 10.1109/TSC.2014.1 Remote data integrity checking is of crucial importance in cloud storage. It allows clients to verify whether their outsourced data is kept intact without downloading the whole data. In some application scenarios, clients have to store their data on multi-cloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. From these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multi-cloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification and public verification.
Keywords: Cloud computing; Computational modeling; Distributed databases; Indexes; Protocols; Security; Servers (ID#: 15-3741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6762896&isnumber=4629387
Jinguang Han; Susilo, W.; Yi Mu, "Identity-Based Secure Distributed Data Storage Schemes," Computers, IEEE Transactions on, vol. 63, no. 4, pp. 941-953, April 2014. doi: 10.1109/TC.2013.26 Secure distributed data storage can shift the burden of maintaining a large number of files from the owner to proxy servers. Proxy servers can convert encrypted files for the owner into encrypted files for the receiver without needing to know the content of the original files. In practice, the original files are removed by the owner for the sake of space efficiency. Hence, the confidentiality and integrity of the outsourced data must be addressed carefully. In this paper, we propose two identity-based secure distributed data storage (IBSDDS) schemes. Our schemes capture the following properties: (1) the file owner can decide the access permission independently, without the help of the private key generator (PKG); (2) for one query, a receiver can only access one file, instead of all files of the owner; (3) our schemes are secure against collusion attacks, namely, even if the receiver compromises the proxy servers, he cannot obtain the owner's secret key. While the first scheme is only secure against chosen plaintext attacks (CPA), the second scheme is secure against chosen ciphertext attacks (CCA). To the best of our knowledge, these are the first IBSDDS schemes in which an access permission is made by the owner for an exact file and collusion attacks are prevented in the standard model.
Keywords: authorization; data integrity; distributed databases; file servers; private key cryptography; storage management; CCA; CPA; IBSDDS scheme; PKG; access permission; chosen ciphertext attack; chosen plaintext attack; collusion attacks; encrypted files conversion; file access; file maintenance; identity-based secure distributed data storage scheme; outsourced data confidentiality; outsourced data integrity; private key generator; proxy server; receiver; space efficiency; Educational institutions; Encryption; Memory; Receivers; Servers; Distributed data storage; access control; identity-based system; security (ID#: 15-3742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6463376&isnumber=6774900
Al-Anzi, F.S.; Salman, A.A.; Jacob, N.K.; Soni, J., "Towards Robust, Scalable and Secure Network Storage in Cloud Computing," Digital Information and Communication Technology and Its Applications (DICTAP), 2014 Fourth International Conference on, pp. 51-55, 6-8 May 2014. doi: 10.1109/DICTAP.2014.6821656 The term cloud computing did not appear overnight; it may date from the time when computer systems remotely accessed applications and services. Cloud computing is a ubiquitous technology receiving huge attention in the scientific and industrial community. It is a ubiquitous, next-generation information technology architecture which offers on-demand access to the network: a dynamic, virtualized, scalable, pay-per-use model over the Internet. In a cloud computing environment, a cloud service provider offers a "house of resources" including applications, data, runtime, middleware, operating system, virtualization, servers, data storage, sharing and networking, and tries to take up most of the client's overhead. Cloud computing offers many benefits, but the journey to the cloud is not easy. It has several pitfalls along the road, because most services are outsourced to third parties, which adds a level of risk. Cloud computing suffers from several issues, the most significant being security, privacy, service availability, confidentiality, integrity, authentication and compliance. Security is a shared responsibility of both client and service provider, and we believe security must be information-centric, adaptive, proactive and built in. Cloud computing and its security are an emerging study area nowadays. In this paper, we discuss data security in the cloud at the service provider end and propose a network storage architecture for data which ensures availability, reliability, scalability and security.
Keywords: cloud computing; data integrity; data privacy; security of data; storage management; ubiquitous computing; virtualisation; Internet; adaptive security; authentication; built in security; client overhead; cloud computing environment; cloud service provider; compliance; confidentiality; data security; data sharing; data storage; information centric security; integrity; middleware; network storage architecture; networking; on-demand access; operating system; pay per use model; privacy; proactive security; remote application access; remote service access; robust scalable secure network storage; server; service availability; service outsourcing; ubiquitous next generation information technology architecture; virtualization; Availability; Cloud computing; Computer architecture; Data security; Distributed databases; Servers; Cloud Computing; Data Storage; Data security; RAID (ID#: 15-3743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821656&isnumber=6821645
Intrusion Tolerance (2014)
This bibliography is a 2014 year in review collection. Intrusion tolerance refers to a fault-tolerant design approach to defending communications, computer and other information systems against malicious attack. Rather than detecting all anomalies, tolerant systems only identify those intrusions which lead to security failures. This collection cites publications of interest addressing new methods of building secure fault tolerant systems.
Wei Min; Keecheon Kim, "Intrusion Tolerance Mechanisms Using Redundant Nodes for Wireless Sensor Networks," Information Networking (ICOIN), 2014 International Conference on, pp. 131-135, 10-12 February 2014. doi: 10.1109/ICOIN.2014.6799679 Wireless sensor networks extend people's ability to explore, monitor, and control the physical world. They are susceptible to certain types of attacks because they are deployed in open and unprotected environments. A novel intrusion tolerance architecture is proposed in this paper, introducing an expert intrusion detection analysis system and an all-channel analyzer, and the proposed intrusion tolerance scheme is implemented. Results show that this scheme can detect data traffic and re-route it to a redundant node in the wireless network, prolong the lifetime of the network, and isolate malicious traffic introduced through compromised nodes or illegal intrusions.
Keywords: data communication; telecommunication channels; telecommunication network routing; telecommunication security; telecommunication traffic; wireless sensor networks; all-channel analyzer; data traffic detection; expert intrusion detection analysis system; intrusion tolerance architecture; intrusion tolerance mechanisms; re-route detection; redundant node; redundant nodes; wireless sensor networks; Intrusion detection; Monitoring; Protocols; Routing; Wireless networks; Wireless sensor networks; Wireless Sensor networks; intrusion tolerance; security (ID#: 15-3645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799679&isnumber=6799467
Hemalatha, A.; Venkatesh, R., "Redundancy Management in Heterogeneous Wireless Sensor Networks," Communications and Signal Processing (ICCSP), 2014 International Conference on, pp. 1849-1853, 3-5 April 2014. doi: 10.1109/ICCSP.2014.6950165 A wireless sensor network is a special type of ad hoc network, composed of a large number of sensor nodes spread over a wide geographical area. Each sensor node has wireless communication capability and sufficient intelligence for signal processing and dissemination of data to the collecting center. This paper deals with redundancy management for improving network efficiency and query reliability in heterogeneous wireless sensor networks. The proposed scheme finds a reliable path using a redundancy management algorithm and detects unreliable nodes by discarding the path. The algorithm selects the reliable path based on the redundancy level and the average distance between source and destination nodes, analyzing the redundancy level as path and source redundancy. For finding the path from the source cluster head to the processing center, we propose intrusion tolerance in the presence of unreliable nodes. Finally, we apply the analyzed results to the redundancy management algorithm to find the reliable path, improving network efficiency and query success probability.
Keywords: ad hoc networks; probability; queueing theory; redundancy; signal processing; telecommunication network reliability; wireless sensor networks; ad hoc network; destination node; geographical area; heterogeneous wireless sensor networks; intrusion tolerance; network efficiency; query reliability; query success probability; redundancy management algorithm; signal dissemination; signal processing; source node; unreliable nodes detection; Ad hoc networks; Indexes; Quality of service; Redundancy; Tin; Wireless sensor networks; intrusion tolerance; multipath routing; reliability; wireless sensor network (ID#: 15-3646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950165&isnumber=6949766
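The benefit of path redundancy described in the abstract above can be quantified with a back-of-the-envelope model. This sketch assumes independent hop failures and node-disjoint paths, which is an illustration of why redundancy raises query success probability, not the paper's algorithm:

```python
def path_reliability(per_hop: float, hops: int) -> float:
    # a source-to-sink path succeeds only if every hop on it succeeds
    return per_hop ** hops

def query_success(per_hop: float, hops: int, redundant_paths: int) -> float:
    # the query fails only if every one of the disjoint redundant paths fails
    p = path_reliability(per_hop, hops)
    return 1 - (1 - p) ** redundant_paths

# With 95%-reliable hops and 5-hop paths, one path succeeds with
# probability ~0.774; three disjoint paths raise that to ~0.988.
```

The model also shows the trade-off the paper manages: each extra path improves reliability with diminishing returns while consuming more node energy.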
Ing-Ray Chen; Jia Guo, "Dynamic Hierarchical Trust Management of Mobile Groups and Its Application to Misbehaving Node Detection," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 49-56, 13-16 May 2014. doi: 10.1109/AINA.2014.13 In military operations or emergency response situations, a commander frequently needs to assemble and dynamically manage Community of Interest (COI) mobile groups to achieve a critical mission despite failure, disconnection or compromise of COI members. We combine the designs of COI hierarchical management for scalability and reconfigurability with COI dynamic trust management for survivability and intrusion tolerance to compose a scalable, reconfigurable, and survivable COI management protocol for managing COI mission-oriented mobile groups in heterogeneous mobile environments. A COI mobile group in this environment would consist of heterogeneous mobile entities, such as communication-device-carrying personnel/robots and aerial or ground vehicles operated by humans, exhibiting not only quality of service (QoS) characteristics, e.g., competence and cooperativeness, but also social behaviors, e.g., connectivity, intimacy and honesty. A COI commander or a subtask leader must measure trust with both social and QoS cognition, depending on mission task characteristics and/or trustee properties, to ensure successful mission execution. In this paper, we present a dynamic hierarchical trust management protocol that can learn from past experiences and adapt to changing environment conditions, e.g., increasing misbehaving node population, evolving hostility and node density, to enhance agility and maximize application performance. With trust-based misbehaving node detection as an application, we demonstrate how our proposed COI trust management protocol is resilient to node failure, disconnection and capture events, and can help maximize application performance by minimizing false negatives and positives in the presence of mobile nodes exhibiting vastly distinct QoS and social behaviors.
Keywords: emergency services; military communication; mobile computing; protocols; quality of service; telecommunication security; trusted computing; COI dynamic hierarchical trust management protocol; COI mission-oriented mobile group management; aerial vehicles; agility enhancement; application performance maximization; communication-device-carried personnel; community-of-interest mobile groups; competence; connectivity; cooperativeness; emergency response situations; ground vehicles; heterogeneous mobile entities; heterogeneous mobile environments; honesty; intimacy; intrusion tolerance; military operation; misbehaving node population; node density; quality-of-service characters; robots; social behaviors; survivable COI management protocol; trust measurement; trust-based misbehaving node detection; Equations; Mathematical model; Mobile communication; Mobile computing; Peer-to-peer computing; Protocols; Quality of service; Trust management; adaptability; community of interest; intrusion detection; performance analysis; scalability (ID#: 15-3647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838647&isnumber=6838626
Myalapalli, V.K.; Chakravarthy, A.S.N., "A Unified Model for Cherishing Privacy in Database Systems: An Approach to Overhaul Vulnerabilities," Networks & Soft Computing (ICNSC), 2014 First International Conference on, pp. 263-266, 19-20 Aug. 2014. doi: 10.1109/CNSC.2014.6906658 Privacy is the most anticipated aspect in many perspectives, especially with sensitive data, and databases are incessantly targeted for vulnerabilities. The database must be persistently monitored to ensure comprehensive security. The proposed model is intended to cherish database privacy by thwarting intrusions and inferences. The Database Static Protection and Intrusion Tolerance Subsystems proposed in the architecture bolster this practice. This paper enunciates the Privacy Cherished Database architecture model and how it achieves security under sundry circumstances.
Keywords: data privacy; database management systems; security of data; database static protection; database system privacy; inference thwarting; intrusion thwarting; intrusion tolerance subsystem; privacy cherished database architecture model; security; Decision support systems; Handheld computers; Database Security; Database Security Configurations; Inference Detection; Intrusion detection; security policy (ID#: 15-3648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906658&isnumber=6906636
Wenbing Zhao, "Application-Aware Byzantine Fault Tolerance," Dependable, Autonomic and Secure Computing (DASC), 2014 IEEE 12th International Conference on, pp. 45-50, 24-27 Aug. 2014. doi: 10.1109/DASC.2014.17 Byzantine fault tolerance has been intensively studied over the past decade as a way to enhance the intrusion resilience of computer systems. However, state-machine-based Byzantine fault tolerance algorithms require deterministic application processing and sequential execution of totally ordered requests. One way of increasing the practicality of Byzantine fault tolerance is to exploit the application semantics, which we refer to as application-aware Byzantine fault tolerance. Application-aware Byzantine fault tolerance makes it possible to facilitate concurrent processing of requests, to minimize the use of Byzantine agreement, and to identify and control replica nondeterminism. In this paper, we provide an overview of recent works on application-aware Byzantine fault tolerance techniques. We elaborate the need for exploiting application semantics for Byzantine fault tolerance and the benefits of doing so, provide a classification of various approaches to application-aware Byzantine fault tolerance, and outline the mechanisms used in achieving application-aware Byzantine fault tolerance according to our classification.
Keywords: client-server systems; concurrency control; finite state machines; security of data; software fault tolerance; Byzantine agreement; application semantics; application-aware Byzantine fault tolerance; computer system intrusion resilience enhancement; deterministic application processing; replica nondeterminism; request concurrent processing; sequential execution; state-machine-based Byzantine fault tolerance algorithm; totally ordered request; Algorithm design and analysis; Fault tolerance; Fault tolerant systems; Message systems; Semantics; Servers; System recovery; Application Nondeterminism; Application Semantics; Application-Aware Byzantine Fault Tolerance; Deferred Byzantine Agreement; Dependability; Intrusion Resilience (ID#: 15-3649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6945302&isnumber=6945641
Fonseca, J.; Seixas, N.; Vieira, M.; Madeira, H., "Analysis of Field Data on Web Security Vulnerabilities," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 2, pp. 89-100, March-April 2014. doi: 10.1109/TDSC.2013.37 Most web applications have critical bugs (faults) affecting their security, which makes them vulnerable to attacks by hackers and organized crime. To prevent these security problems from occurring it is of utmost importance to understand the typical software faults. This paper contributes to this body of knowledge by presenting a field study on two of the most widely spread and critical web application vulnerabilities: SQL Injection and XSS. It analyzes the source code of security patches of widely used Web applications written in weak and strong typed languages. Results show that only a small subset of software fault types, affecting a restricted collection of statements, is related to security. To understand how these vulnerabilities are really exploited by hackers, this paper also presents an analysis of the source code of the scripts used to attack them. The outcomes of this study can be used to train software developers and code inspectors in the detection of such faults and are also the foundation for the research of realistic vulnerability and attack injectors that can be used to assess security mechanisms, such as intrusion detection systems, vulnerability scanners, and static code analyzers.
Keywords: Internet; SQL; security of data; software fault tolerance; source code (software); SQL injection; Web application vulnerabilities; Web security vulnerabilities; XSS; attack injectors; code inspectors; field data analysis; intrusion detection systems; realistic vulnerability; security mechanisms; security patches; software faults; source code; static code analyzers; vulnerability scanners; Awards activities; Blogs; Internet; Java; Security; Software; Internet applications; Security; languages; review and evaluation (ID#: 15-3650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6589556&isnumber=6785951
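The SQL injection fault class studied in the paper above ultimately comes down to a small set of vulnerable statement patterns. A minimal illustration (using Python's sqlite3 with an invented table, not drawn from the paper's data) contrasts the vulnerable and patched forms:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"

# Vulnerable pattern: string concatenation lets the input rewrite the query,
# turning the WHERE clause into a tautology
vulnerable = f"SELECT role FROM users WHERE name = '{attacker_input}'"
assert conn.execute(vulnerable).fetchall()  # rows returned despite no such user

# Patched pattern: a bound parameter is treated as data, never as SQL
safe = conn.execute("SELECT role FROM users WHERE name = ?", (attacker_input,))
assert safe.fetchall() == []  # no user is literally named "' OR '1'='1"
```

This is exactly the kind of "restricted collection of statements" the field study refers to: the fix is a one-line change from concatenation to parameter binding.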
Hua Chai; Wenbing Zhao, "Towards Trustworthy Complex Event Processing," Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on, pp. 758-761, 27-29 June 2014. doi: 10.1109/ICSESS.2014.6933677 Complex event processing has become an important technology for big data and intelligent computing because it facilitates the creation of actionable, situational knowledge from a potentially large number of events in soft real time. Complex event processing can be instrumental for many mission-critical applications, such as business intelligence, algorithmic stock trading, and intrusion detection. Hence, the servers that carry out complex event processing must be made trustworthy. In this paper, we present a threat analysis of complex event processing systems and describe a set of mechanisms that can be used to control various threats. By exploiting the application semantics of typical event processing operations, we are able to design lightweight mechanisms that incur minimal runtime overhead, appropriate for soft real-time computing.
Keywords: Big Data; trusted computing; Big Data; actionable situational knowledge; algorithmic stock trading; application semantics; business intelligence; complex event processing; event processing operations; intelligent computing; intrusion detection; minimum runtime overhead; mission-critical applications; servers; soft realtime computing; threat analysis; trustworthy; Business; Context; Fault tolerance; Fault tolerant systems; Runtime; Servers; Synchronization; Big Data; Business Intelligence; Byzantine Fault Tolerance; Complex Event Processing; Dependable Computing; Trust (ID#: 15-3652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933677&isnumber=6933501
Fonseca, J.; Vieira, M.; Madeira, H., "Evaluation of Web Security Mechanisms Using Vulnerability & Attack Injection," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 5, pp. 440-453, Sept.-Oct. 2014. doi: 10.1109/TDSC.2013.45 In this paper we propose a methodology and a prototype tool to evaluate web application security mechanisms. The methodology is based on the idea that injecting realistic vulnerabilities in a web application and attacking them automatically can be used to support the assessment of existing security mechanisms and tools in custom setup scenarios. To provide true to life results, the proposed vulnerability and attack injection methodology relies on the study of a large number of vulnerabilities in real web applications. In addition to the generic methodology, the paper describes the implementation of the Vulnerability & Attack Injector Tool (VAIT) that allows the automation of the entire process. We used this tool to run a set of experiments that demonstrate the feasibility and the effectiveness of the proposed methodology. The experiments include the evaluation of coverage and false positives of an intrusion detection system for SQL Injection attacks and the assessment of the effectiveness of two top commercial web application vulnerability scanners. Results show that the injection of vulnerabilities and attacks is indeed an effective way to evaluate security mechanisms and to point out not only their weaknesses but also ways for their improvement.
Keywords: Internet; SQL; fault diagnosis; security of data; software fault tolerance; SQL Injection attacks; VAIT; Web application security mechanism evaluation; attack injection methodology; fault injection; intrusion detection system; vulnerability injection methodology; vulnerability-&-attack injector tool; Databases; Educational institutions; Input variables; Probes; Security; Software; TV; Security; fault injection; internet applications; review and evaluation (ID#: 15-3653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6629992&isnumber=6893064
Kirsch, J.; Goose, S.; Amir, Y.; Dong Wei; Skare, P., "Survivable SCADA Via Intrusion-Tolerant Replication," Smart Grid, IEEE Transactions on, vol. 5, no. 1, pp. 60-70, Jan. 2014. doi: 10.1109/TSG.2013.2269541 Providers of critical infrastructure services strive to maintain the high availability of their SCADA systems. This paper reports on our experience designing, architecting, and evaluating the first survivable SCADA system, one that is able to ensure correct behavior with minimal performance degradation even during cyber attacks that compromise part of the system. We describe the challenges we faced when integrating modern intrusion-tolerant protocols with a conventional SCADA architecture and present the techniques we developed to overcome these challenges. The results illustrate that our survivable SCADA system not only functions correctly in the face of a cyber attack, but also processes in excess of 20,000 messages per second with a latency of less than 30 ms, making it suitable for even large-scale deployments managing thousands of remote terminal units.
Keywords: SCADA systems; fault tolerance; production engineering computing; security of data; SCADA architecture; cyberattacks; intrusion-tolerant protocols; intrusion-tolerant replication; performance degradation; survivable SCADA system; Clocks; Libraries; Monitoring; Protocols; SCADA systems; Servers; Synchronization; Cyberattack; SCADA systems; fault tolerance; reliability; resilience; survivability (ID#: 15-3654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6576306&isnumber=6693741
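The core guarantee of intrusion-tolerant replication, that a bounded number of compromised replicas cannot forge a result, rests on matching-reply voting. The sketch below is a generic f+1 matching-reply rule commonly used by such systems, not the protocol from the paper above, and the command strings are invented for illustration:

```python
from collections import Counter

def accept_reply(replies, f):
    """Accept a value only once f+1 replicas agree on it, so at least one
    correct replica vouches for it (tolerating up to f Byzantine replicas)."""
    if not replies:
        return None
    value, votes = Counter(replies).most_common(1)[0]
    return value if votes >= f + 1 else None

# 4 replicas, f = 1: a single compromised replica cannot forge a command,
# because its lone vote never reaches the f+1 = 2 threshold
replies = ["open_breaker", "open_breaker", "open_breaker", "trip_alarm"]
assert accept_reply(replies, f=1) == "open_breaker"
assert accept_reply(["open_breaker", "trip_alarm"], f=1) is None
```

Agreeing on the *order* of commands across replicas is the harder problem, which is what the intrusion-tolerant replication protocols integrated in the paper provide.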
Di Benedetto, M.D.; D'Innocenzo, A.; Smarra, F., "Fault-Tolerant Control of a Wireless HVAC Control System," Communications, Control and Signal Processing (ISCCSP), 2014 6th International Symposium on, pp. 235-238, 21-23 May 2014. doi: 10.1109/ISCCSP.2014.6877858 In this paper we address the problem of designing a fault-tolerant control scheme for an HVAC control system in which sensing and actuation data are exchanged with a centralized controller via a network of wireless sensors and actuators whose communication nodes are subject to permanent failures and malicious intrusions.
Keywords: HVAC; actuators; building management systems; failure analysis; fault tolerant control; wireless sensor networks; actuators network; centralized controller; communication nodes; fault tolerant control scheme; fault-tolerant control; malicious intrusions; permanent failures; sensing and actuation data; wireless HVAC control system; wireless sensors; Atmospheric modeling; Control systems; Fault tolerance; Fault tolerant systems; Sensors; Wireless communication; Wireless sensor networks; Building automation; fault detection; wireless sensor networks (ID#: 15-3655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877858&isnumber=6877795
Power Grid Security
Cyber-Physical Systems such as the power grid are complex networks linked with cyber capabilities. The complexity and potential consequences of cyber-attacks on the grid make them an important area for scientific research. The articles cited below appeared in 2014.
Software Tamper Resistance
Software tampering and reverse engineering of code create financial concerns for software developers and also open the door to malicious code injection. The three articles cited here from 2014 address fault analysis attacks on AES, detection of image tampering, and code obfuscation.
Yoshikawa, M.; Goto, H.; Asahi, K., "Error Value Driven Fault Analysis Attack," Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2014 15th IEEE/ACIS International Conference on, pp. 1-4, June 30-July 2, 2014. doi: 10.1109/SNPD.2014.6888689 The advanced encryption standard (AES) has been studied sufficiently to confirm that decrypting it without the key is computationally infeasible. However, its vulnerability to fault analysis attacks has been pointed out in recent years. To verify the vulnerability of future electronic devices incorporating cryptographic circuits, fault analysis attacks must be thoroughly studied. The present study proposes a new fault analysis attack method that exploits the tendency of operation errors caused by glitches. The study also verifies the validity of the proposed method through evaluation experiments on an FPGA.
Keywords: cryptography; field programmable gate arrays; AES; advanced encryption standard; cryptographic circuits; error value driven fault analysis attack method; Ciphers; Circuit faults; Encryption; Equations; Field programmable gate arrays; Standards; Error value; Fault analysis attacks; Side-channel attack; Tamper resistance (ID#: 15-3656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6888689&isnumber=6888665
Ketenci, S.; Ulutas, G.; Ulutas, M., "Detection of Duplicated Regions in Images Using 1D-Fourier Transform," Systems, Signals and Image Processing (IWSSIP), 2014 International Conference on, pp. 171-174, 12-15 May 2014. Large numbers of digital images and videos are acquired, stored, processed, and shared nowadays. High-quality imaging hardware and low-cost, user-friendly image editing software make digital media vulnerable to modification. One of the most popular image modification techniques is copy-move forgery. This tampering technique copies part of an image and pastes it into another part of the same image to conceal or replicate some portion of the image. Researchers have recently proposed many techniques to detect copy-move forged regions. These methods divide the image into overlapping blocks and extract features to determine similarity among groups of blocks. The choice of feature extraction algorithm plays an important role in the accuracy of detection. Here, column averages of the 1D-FT of the rows are used to extract features from the overlapping blocks: blocks are transformed into the frequency domain using the 1D-FT of their rows, and the average values of the transformed columns form the feature vectors. Similarity between feature vectors indicates possibly forged regions. Results show that the proposed method detects copy-pasted regions with higher accuracy than similar works reported in the literature, and is also more resistant to Gaussian blurring and JPEG compression attacks.
Keywords: Fourier transforms; feature extraction; frequency-domain analysis; image recognition; 1D-Fourier transform; Gaussian blurring; JPEG compression attacks; copy move forged region detection; digital images; digital mediums; duplicated region detection; feature extraction algorithm; feature vector similarity; frequency domain; high quality imaging hardware; image modification techniques; overlapping blocks; tampering technique; user friendly image editing software; Authentication; Digital images; Image coding; Resistance; Copy move forgery; Fourier transform; Gaussian Blurring (ID#: 15-3657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837658&isnumber=6837609
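The block-feature construction described in this abstract is simple enough to sketch. The following is an illustrative reimplementation of the core idea only, not the authors' code: the block size, the zero-distance threshold, and the brute-force pairwise comparison are simplifying assumptions, and the robustness measures against blurring and JPEG compression are omitted.

```python
import cmath

def dft_row(row):
    """Naive 1-D DFT of one block row, returning magnitudes only."""
    n = len(row)
    return [abs(sum(row[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def block_feature(block):
    """Transform each row with the 1-D DFT, then average the
    transformed columns to get one feature vector per block."""
    spectra = [dft_row(row) for row in block]
    n_cols = len(spectra[0])
    return [sum(s[c] for s in spectra) / len(spectra) for c in range(n_cols)]

def find_similar_blocks(image, bsize=4, threshold=1e-6):
    """Slide a bsize x bsize window over the image and report pairs of
    block positions whose feature vectors are closer than threshold."""
    h, w = len(image), len(image[0])
    feats = {}
    for y in range(h - bsize + 1):
        for x in range(w - bsize + 1):
            block = [row[x:x + bsize] for row in image[y:y + bsize]]
            feats[(y, x)] = block_feature(block)
    pairs = []
    keys = sorted(feats)
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            dist = sum((u - v) ** 2 for u, v in zip(feats[a], feats[b]))
            if dist < threshold:
                pairs.append((a, b))
    return pairs
```

On a clean copy-move forgery, identical blocks produce identical feature vectors, so an exact-match threshold suffices; a real detector relaxes the threshold to tolerate post-processing of the pasted region.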
Kulkarni, A.; Metta, R., "A New Code Obfuscation Scheme for Software Protection," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 409-414, 7-11 April 2014. doi: 10.1109/SOSE.2014.57 The IT industry loses tens of billions of dollars annually from security attacks such as tampering and malicious reverse engineering. Code obfuscation techniques counter such attacks by transforming code into patterns that resist the attacks. None of the current code obfuscation techniques satisfy all the obfuscation effectiveness criteria, such as resistance to reverse engineering attacks and state space increase. To address this, we introduce new code patterns that we call nontrivial code clones and propose a new obfuscation scheme that combines nontrivial clones with existing obfuscation techniques to satisfy all the effectiveness criteria. The nontrivial code clones need to be constructed manually, thus adding to the development cost. This cost can be limited by cloning only the code fragments that need protection and by reusing the clones across projects, which makes the approach worthwhile considering the security risks. In this paper, we present our scheme and illustrate it with a toy example.
Keywords: computer crime; reverse engineering; software engineering; systems re-engineering; IT industry; code fragment cloning; code obfuscation scheme; code patterns; code transformation; malicious reverse engineering; nontrivial code clones; security attacks; software protection; tampering; Cloning; Complexity theory; Data processing; Licenses; Resistance; Resists; Software (ID#: 15-3658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830939&isnumber=6825948
Theoretical Foundations for Software
Theory work helps enhance our understanding of basic principles. Much interest has developed around the theoretical foundations of software, which have direct and indirect implications for cyber security. The research cited here appeared in 2014 and includes such topics as malware propagation and mutant measurements.
Shigen Shen; Hongjie Li; Risheng Han; Vasilakos, A.V.; Yihan Wang; Qiying Cao, "Differential Game-Based Strategies for Preventing Malware Propagation in Wireless Sensor Networks," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 11, pp. 1962-1973, Nov. 2014. doi: 10.1109/TIFS.2014.2359333 Wireless sensor networks (WSNs) are prone to propagating malware because of special characteristics of sensor nodes. Considering the fact that sensor nodes periodically enter sleep mode to save energy, we develop traditional epidemic theory and construct a malware propagation model consisting of seven states. We formulate differential equations to represent the dynamics between states. We view the decision-making problem between system and malware as an optimal control problem; therefore, we formulate a malware-defense differential game in which the system can dynamically choose its strategies to minimize the overall cost whereas the malware intelligently varies its strategies over time to maximize this cost. We prove the existence of the saddle-point in the game. Further, we attain optimal dynamic strategies for the system and malware, which are bang-bang controls that can be conveniently operated and are suitable for sensor nodes. Experiments identify factors that influence the propagation of malware. We also determine that optimal dynamic strategies can reduce the overall cost to a certain extent and can suppress the malware propagation. These results support a theoretical foundation to limit malware in WSNs.
Keywords: bang-bang control; differential games; invasive software; telecommunication control; telecommunication security; wireless sensor networks; WSN; bang-bang controls; decision-making problem; differential equations; differential game-based strategy; malware propagation model; malware propagation prevention; malware-defense differential game; optimal control problem; optimal dynamic strategy; overall cost minimization; saddle-point; sensor node characteristics; sleep mode; traditional epidemic theory; wireless sensor networks; Control systems; Games; Grippers; Malware; Silicon; Wireless sensor networks; Differential game; Malware propagation; epidemic theory; wireless sensor networks (ID#: 15-3659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6905838&isnumber=6912034
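A much-reduced sketch conveys the modeling style: state dynamics as differential equations integrated over time, with the defense applying a bang-bang control (a rate that switches between its extreme values). The two-state model, rates, and switching time below are illustrative assumptions; the paper's model has seven states and derives the optimal switching from the game's saddle-point.

```python
def simulate(beta=0.5, gamma=0.1, patch_rate=0.4, switch_t=5.0,
             s0=0.99, i0=0.01, dt=0.01, t_end=20.0):
    """Euler-integrate a two-state (susceptible/infected) malware
    model.  The defense is a bang-bang control: the extra patching
    rate is 0 before switch_t and patch_rate afterwards."""
    s, i, t = s0, i0, 0.0
    trace = []
    while t < t_end:
        u = patch_rate if t >= switch_t else 0.0   # bang-bang control
        ds = -beta * s * i                          # new infections
        di = beta * s * i - (gamma + u) * i         # infect minus cure
        s += ds * dt
        i += di * dt
        t += dt
        trace.append((t, i))
    return trace
```

Comparing the final infected fraction with and without the control shows the suppression effect the paper quantifies through its game-theoretic cost.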
Baraldi, A.; Boschetti, L.; Humber, M.L., "Probability Sampling Protocol for Thematic and Spatial Quality Assessment of Classification Maps Generated From Spaceborne/Airborne Very High Resolution Images," Geoscience and Remote Sensing, IEEE Transactions on, vol. 52, no. 1, pp. 701-760, Jan. 2014. doi: 10.1109/TGRS.2013.2243739 To deliver sample estimates with the probability foundation necessary to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies must satisfy necessary (but not sufficient) conditions: 1) all inclusion probabilities must be greater than zero in the target population to be sampled (if some sampling units have an inclusion probability of zero, a map accuracy assessment does not represent the entire target region depicted in the map to be assessed); 2) the inclusion probabilities must be a) knowable for nonsampled units and b) known for the units selected in the sample: since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas, if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne very high resolution images, in which: 1) an original Categorical Variable Pair Similarity Index (proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and 2) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines.
Like a decision-tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts. The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session, the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps, and in agreement with theoretical expectations, visual (qualitative) evidence and quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ by related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with OQIs claimed for SIAM™ by related works, make the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities in accordance with the visionary goal of the Global Earth Observation System of Systems initiative and the QA4EO international guidelines.
Keywords: decision trees; geographic information systems; geophysical image processing; image classification; measurement uncertainty; probability; quality assurance; remote sensing; sampling methods; DigitalGlobe; Global Earth Observation System of Systems; QA4EO international guidelines; Quality Assurance Framework for Earth Observation guidelines; QuickBird-2 image; SIAM preclassification maps; Satellite Image Automatic Mapper; WorldView-2 images; categorical variable pair similarity index; decision-tree; inclusion probability; measurement uncertainty; probability sampling protocol; procedural knowledge; quality assessment; spaceborne/airborne very high resolution images; structural knowledge; subsymbolic object-based spatial quality indicators; symbolic pixel-based thematic quality indicators; thematic maps; Accuracy; Earth; Estimation; Guidelines; Indexes; Protocols; Spatial resolution; Contingency matrix; error matrix; land cover change (LCC) detection; land cover classification; maps comparison; nonprobability sampling; ontology; overlapping area matrix (OAMTRX); probability sampling; quality indicator of operativeness (OQI); spatial quality indicator (SQI); taxonomy; thematic quality indicator (TQI) (ID#: 15-3660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6479283&isnumber=6675822
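Condition 2 above is the heart of design-based estimation: each sampled unit is weighted by the reciprocal of its inclusion probability, which is why those probabilities must be known and strictly positive. A minimal Horvitz-Thompson estimator makes this concrete; the function and data are illustrative, not part of the paper's protocol.

```python
def horvitz_thompson_total(sample):
    """Estimate a population total from a probability sample.
    sample: list of (value, inclusion_probability) pairs.  Each unit
    is weighted by 1/pi, so every inclusion probability must be
    known and strictly positive (conditions 1 and 2 in the abstract)."""
    for _, pi in sample:
        if not (0.0 < pi <= 1.0):
            raise ValueError("inclusion probability must be in (0, 1]")
    return sum(v / pi for v, pi in sample)
```

With a census (every pi = 1) the estimator returns the exact total; with pi = 0.5 each sampled value counts double, keeping the estimate unbiased over repeated sampling.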
Cardoso, L.S.; Massouri, A.; Guillon, B.; Ferrand, P.; Hutu, F.; Villemaud, G.; Risset, T.; Gorce, J.-M., "CorteXlab: A Facility for Testing Cognitive Radio Networks in a Reproducible Environment," Cognitive Radio Oriented Wireless Networks and Communications (CROWNCOM), 2014 9th International Conference on, pp. 503-507, 2-4 June 2014. While many theoretical and simulation works have highlighted the potential gains of cognitive radio, several technical issues still need to be evaluated from an experimental point of view. Deploying complex heterogeneous system scenarios is tedious, time consuming and hardly reproducible. To address this problem, we have developed a new experimental facility, called CorteXlab, that allows complex multi-node cognitive radio scenarios to be easily deployed and tested by anyone in the world. Our objective is not to design new software defined radio (SDR) nodes, but rather to provide comprehensive access to a large set of high performance SDR nodes. The CorteXlab facility offers a 167 m2 electromagnetically (EM) shielded room and integrates a set of 24 universal software radio peripherals (USRPs) from National Instruments, 18 PicoSDR nodes from Nutaq and 42 IoT-Lab wireless sensor nodes from Hikob. CorteXlab is built upon the foundations of the SensLAB testbed and is based on the free and open-source toolkit GNU Radio. Automation in scenario deployment, experiment start, stop and results collection is performed by an experiment controller called Minus. CorteXlab is in its final stages of development and is already capable of running test scenarios. In this contribution, we show that CorteXlab is able to easily cope with the usual issues faced by other testbeds, providing a reproducible experiment environment for CR experimentation.
Keywords: Internet of Things; cognitive radio; controllers; electromagnetic shielding; software radio; testing; wireless sensor networks; CorteXlab facility; Hikob; IoT-Lab wireless sensor nodes; Minus; National Instruments; Nutaq; PicoSDR nodes; SDR nodes; SensLAB; cognitive radio networks; complex heterogeneous system scenarios; complex multinode cognitive radio scenarios; controller; electromagnetically shielded room; open-source toolkit GNU Radio; reproducible environment; software defined radio; testing facility; universal software radio peripherals; Cognitive radio; Field programmable gate arrays; Interference; MIMO; Orbits; Wireless sensor networks (ID#: 15-3661)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849736&isnumber=6849647
Chang, Lichen, "Convergence of Physical System and Cyber System Modeling Methods for Aviation Cyber Physical Control System," Information and Automation (ICIA), 2014 IEEE International Conference on, pp. 542-547, 28-30 July 2014. doi: 10.1109/ICInfA.2014.6932714 Recent attention to aviation cyber physical systems (ACPS) is driven by the need for seamless integration of the design disciplines that dominate physical-world and cyber-world convergence. System convergence is a big obstacle to good aviation cyber-physical system design, owing to the lack of an adequate scientific theoretical foundation for the subject. The absence of a good understanding of the science of aviation system convergence is not due to neglect, but rather to its difficulty. Most complex aviation system builders have abandoned any science or engineering discipline for system convergence; they simply treat it as a management problem. Aviation system convergence is almost totally absent from software engineering and engineering curricula. Hence, system convergence is particularly challenging in ACPS, where fundamentally different physical and computational design concerns intersect. In this paper, we propose an integrated approach to handle system convergence of aviation cyber physical systems based on multiple dimensions, views, paradigms, and tools. This model-integrated development approach addresses the development needs of cyber physical systems through the pervasive use of models: the physical world and the cyber world can be specified and modeled together, converged entirely, and integrated seamlessly. The effectiveness of the approach is illustrated by means of one practical case study: specifying and modeling aircraft systems.
In this paper, we specify and model aviation cyber-physical systems by integrating Modelica, ModelicaML, and the Architecture Analysis & Design Language (AADL): the physical world is modeled with Modelica and ModelicaML, and the cyber part with AADL and ModelicaML.
Keywords: Aerospace control; Aircraft; Analytical models; Atmospheric modeling; Convergence; Mathematical model; Unified modeling language; AADL; Aviation Cyber Physical System; Dynamic Continuous Features; Modelica; Modelicaml; Spatial-Temporal Features (ID#: 15-3662)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932714&isnumber=6932615
Hummel, M., "State-of-the-Art: A Systematic Literature Review on Agile Information Systems Development," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 4712-4721, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.579 Principles of agile information systems development (ISD) have attracted the interest of practice as well as research. The goal of this literature review is to validate, update and extend previous reviews in terms of the general state of research on agile ISD. Besides including categories such as the employed research methods and data collection techniques, the importance of theory is highlighted by evaluating the theoretical foundations and contributions of former studies. Since agile ISD is rooted in the IS as well as software engineering discipline, important outlets of both disciplines are included in the search process, resulting in 482 investigated papers. The findings show that quantitative studies and the theoretical underpinnings of agile ISD are lacking. Extreme Programming is still the most researched agile ISD method, and more efforts on Scrum are needed. In consequence, multiple research gaps that need further research attention are identified.
Keywords: software prototyping; Scrum; agile ISD; agile information systems development; data collection techniques; extreme programming; software engineering discipline; Abstracts; Data collection; Interviews; Programming; Systematics; Testing (ID#: 15-3663)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759181&isnumber=6758592
Ammann, P.; Delamaro, M.E.; Offutt, J., "Establishing Theoretical Minimal Sets of Mutants," Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, pp. 21-30, March 31-April 4, 2014. doi: 10.1109/ICST.2014.13 Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. Mutation analysis is widely considered to be a powerful approach to testing, and hence is often used to evaluate other test criteria in terms of mutation score, which is the fraction of mutants that are killed by a test set. But mutation analysis is also known to produce large numbers of redundant mutants, and these mutants can inflate the mutation score. While mutation approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is, at present, no way to characterize the contribution of, for example, a particular approach to reduced mutation with respect to any theoretical minimal set of mutants. This paper's contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result of the paper shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method with a widely-used benchmark.
Keywords: minimisation; program testing; set theory; mutant set minimization; mutation analysis; mutation score; redundant mutants; test cases; Benchmark testing; Computational modeling; Context; Electronic mail; Heuristic algorithms; Minimization; Mutation testing; dynamic subsumption; minimal mutant sets (ID#: 15-3664)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823862&isnumber=6823846
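The notion of dynamic subsumption mentioned in the keywords can be sketched against a fixed kill matrix: mutant m subsumes mutant n when every test that kills m also kills n, so n adds no discriminating power. The data representation and function below are assumptions for illustration; the paper's minimization procedure and its theoretical treatment are more general.

```python
def minimal_mutants(kill_matrix):
    """kill_matrix maps mutant id -> frozenset of tests that kill it.
    Mutant m dynamically subsumes n when kill(m) is a non-empty
    strict subset of kill(n): any test killing m also kills n.
    Return one representative per subsumption-minimal kill set;
    never-killed (equivalent) mutants are excluded."""
    killable = {m: k for m, k in kill_matrix.items() if k}
    minimal, seen_sets = [], set()
    for m, k in sorted(killable.items()):
        if any(killable[o] < k for o in killable):  # strictly subsumed
            continue
        if k in seen_sets:          # duplicate kill set: keep one rep
            continue
        seen_sets.add(k)
        minimal.append(m)
    return minimal
```

Any test set that kills every mutant in the returned set necessarily kills every killable mutant, which is the sense in which the redundant ones only inflate a mutation score.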
Achouri, A.; Hlaoui, Y.B.; Jemni Ben Ayed, L., "Institution Theory for Services Oriented Applications," Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, pp. 516-521, 21-25 July 2014. doi: 10.1109/COMPSACW.2014.86 In this paper, we present our approach for the transformation of workflow applications based on institution theory. The workflow application is modeled with a UML Activity Diagram (UML AD). Then, for formal verification purposes, the graphical model is translated to an Event-B specification. Institution theory is used at two levels. First, we define a local semantics for UML AD and the Event-B specification using a categorical description of each one. Second, we define an institution comorphism to link the two defined institutions. The theoretical foundations of our approach are studied within a single mathematical framework, thanks to the use of institution theory. The resulting Event-B specification, after applying the transformation approach, is used for the formal verification of functional properties and of the absence of problems such as deadlock. Additionally, with the institution comorphism, we define the semantic correctness and coherence of the model transformation.
Keywords: Unified Modeling Language; diagrams; formal specification; formal verification; programming language semantics; software engineering; UML AD; UML activity diagram; Event-B specification; formal verification; graphical model; institution comorphism; institution theory; local semantics; semantic correctness; service oriented applications; workflow applications; Context; Grammar; Manganese; Semantics; Syntactics; System recovery; Unified modeling language; Event-B; Formal semantics; Institution theory; Model transformation; UML Activity Diagram (ID#: 15-3665)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903182&isnumber=6903069
Lerchner, H.; Stary, C., "An Open S-BPM Runtime Environment Based on Abstract State Machines," Business Informatics (CBI), 2014 IEEE 16th Conference on, vol. 1, pp. 54-61, 14-17 July 2014. doi: 10.1109/CBI.2014.24 The paradigm shift from traditional BPM to Subject-oriented BPM (S-BPM) is attributed to identifying independently acting subjects. As such, they can perform arbitrary actions on arbitrary objects. Abstract State Machines (ASMs) work on a similar basis. Exploring their capabilities with respect to representing and executing S-BPM models strengthens the theoretical foundations of S-BPM, and thus, the validity of S-BPM tools. Moreover, it enables coherent intertwining of business process modeling with the execution of S-BPM representations. In this contribution we introduce the framework and roadmap tackling the exploration of the ASM approach in the context of S-BPM. We also report the major result, namely the implementation of an executable workflow engine with an Abstract State Machine interpreter based on an existing abstract interpreter model for S-BPM (applying the ASM refinement concept). This workflow engine serves as a baseline and reference implementation for further language and processing developments, such as simulation tools, as it has been developed within the Open-S-BPM initiative.
Keywords: business data processing; finite state machines; program interpreters; workflow management software; ASM approach; Open S-BPM runtime environment; S-BPM model; S-BPM tools; abstract interpreter model; abstract state machine interpreter; business process modeling; executable workflow engine; subject-oriented BPM; Abstracts; Analytical models; Business; Engines; Mathematical model; Semantics; Abstract State Machine; CoreASM; Open-S-BPM; Subject-oriented Business Process Management; workflow engine (ID#: 15-3670)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6904137&isnumber=6904121
Poberezhskiy, Y.S.; Poberezhskiy, G.Y., "Impact of the Sampling Theorem Interpretations on Digitization and Reconstruction in SDRs and CRs," Aerospace Conference, 2014 IEEE, pp. 1-20, 1-8 March 2014. doi: 10.1109/AERO.2014.6836423 Sampling and reconstruction (S&R) are used in virtually all areas of science and technology. The classical sampling theorem is a theoretical foundation of S&R. However, for a long time, only sampling rates and ways of the sampled signals representation were derived from it. The fact that the design of S&R circuits (SCs and RCs) is based on a certain interpretation of the sampling theorem was mostly forgotten. The traditional interpretation of this theorem was selected at the time of the theorem introduction because it offered the only feasible way of S&R realization then. At that time, its drawbacks did not manifest themselves. By now, this interpretation has largely exhausted its potential and inhibits future progress in the field. This tutorial expands the theoretical foundation of S&R. It shows that the traditional interpretation, which is indirect, can be replaced by the direct one or by various combinations of the direct and indirect interpretations that enable development of novel SCs and RCs (NSCs and NRCs) with advanced properties. The tutorial explains the basic principles of the NSCs and NRCs design, their advantages, as well as theoretical problems and practical challenges of their realization. The influence of the NSCs and NRCs on the architectures of SDRs and CRs is also discussed.
Keywords: analogue-digital conversion; cognitive radio; signal reconstruction; signal representation; signal sampling; software radio; CR; NRC design; NSC design; S&R circuits; SDR; cognitive radio; sampled signal representation; sampling and reconstruction; sampling rates; sampling theorem interpretation; software defined radio; Band-pass filters; Bandwidth; Barium; Baseband; Digital signal processing; Equations; Interference (ID#: 15-3671)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836423&isnumber=6836156
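The "traditional interpretation" the tutorial starts from is uniform sampling followed by Whittaker-Shannon (sinc) reconstruction, which takes only a few lines to sketch. Truncating the ideally infinite sum to the available samples, and the numeric tolerance used below, are practical concessions of this sketch, not part of the theorem.

```python
import math

def sinc(x):
    """Normalized sinc, the ideal interpolation kernel."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, fs, t):
    """Whittaker-Shannon reconstruction: evaluate the bandlimited
    signal at an arbitrary time t from uniform samples taken at
    rate fs.  The ideally infinite sum is truncated to the samples
    we actually have, so accuracy degrades near the record edges."""
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))
```

For a signal well below the Nyquist frequency and an evaluation point far from the record edges, the truncated sum recovers the signal to within a small truncation error.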
Chen, Qingyi; Kang, Hongwei; Zhou, Hua; Sun, Xingping; Shen, Yong; Jin, YunZhi; Yin, Jun, "Research on Cloud Computing Complex Adaptive Agent," Service Systems and Service Management (ICSSSM), 2014 11th International Conference on, pp. 1-4, 25-27 June 2014. doi: 10.1109/ICSSSM.2014.6943342 It has gradually been realized in industry that the increasing complexity of cloud computing, arising from the interaction of technology, business, society, and the like, cannot simply be solved by research on information technology; it should be explained and researched from a systematic, scientific perspective on the basis of the theory and methods of complex adaptive systems (CAS). Addressing basic problems in the CAS theoretical framework, this article investigates the definition of the active adaptive agents constituting the cloud computing system and proposes a service agent concept and basic model through commonality abstraction at two basic levels, cloud computing technology and business, thus laying a foundation for further research on cloud computing complexity as well as for multi-agent based simulation of cloud computing environments.
Keywords: Adaptation models; Adaptive systems; Business; Cloud computing; Complexity theory; Computational modeling; Economics; cloud computing; complex adaptive system; service agent (ID#: 15-3672)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6943342&isnumber=6874015
Time Frequency Analysis
A search for articles combining research in time frequency analysis and security produced nearly 3500 results. The works cited here only scratch the surface. They appear to have useful implications for the science of security.
Koga, H.; Honjo, S., "A Secret Sharing Scheme Based on a Systematic Reed-Solomon Code and Analysis of Its Security for a General Class of Sources," Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 1351-1355, June 29-July 4, 2014. doi: 10.1109/ISIT.2014.6875053 In this paper we investigate a secret sharing scheme based on a shortened systematic Reed-Solomon code. In the scheme, L secrets S1, S2, ..., SL and n shares X1, X2, ..., Xn satisfy certain n - k + L linear equations. The security of such a ramp secret sharing scheme is analyzed in detail. We prove that this scheme realizes a (k, n)-threshold scheme for the case L = 1 and a ramp (k, L, n)-threshold scheme for the case 2 ≤ L ≤ k - 1, under a certain assumption on S1, S2, ..., SL.
Keywords: Reed-Solomon codes; telecommunication security; linear equations; ramp secret sharing scheme; shortened systematic Reed-Solomon code; Cryptography; Equations; Probability distribution; Random variables; Reed-Solomon codes (ID#: 15-3673)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875053&isnumber=6874773
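For the case L = 1, the construction in this abstract reduces to a classical (k, n)-threshold scheme, which can be sketched in the familiar Shamir/polynomial form (a Reed-Solomon codeword viewed as evaluations of a degree-(k-1) polynomial). The prime field, the API, and the single-secret restriction are assumptions of this sketch; the paper's scheme handles L secrets via a shortened systematic code.

```python
import random

P = 2**31 - 1   # a convenient prime field size (illustrative)

def make_shares(secret, k, n):
    """Split one secret into n shares with threshold k: shares are
    evaluations of a random degree-(k-1) polynomial whose constant
    term is the secret."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any k distinct shares;
    pow(den, P - 2, P) is the modular inverse (Fermat's little
    theorem, valid because P is prime)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

Any k of the n shares reconstruct the secret exactly; any k - 1 shares leave it information-theoretically undetermined.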
Liu, Y.; Hatzinakos, D., "Earprint: Transient Evoked Otoacoustic Emission for Biometrics," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 12, pp. 2291-2301, Dec. 2014. doi: 10.1109/TIFS.2014.2361205 Biometrics is attracting increasing attention in privacy- and security-concerned applications, such as access control and remote financial transactions. However, advanced forgery and spoofing techniques are threatening the reliability of conventional biometric modalities. This has motivated our investigation of a novel yet promising modality, the transient evoked otoacoustic emission (TEOAE), an acoustic response generated by the cochlea after a click stimulus. Unlike conventional modalities that are easily accessible or captured, TEOAE is naturally immune to replay and falsification attacks, being a physiological outcome of the human auditory system. In this paper, we resort to wavelet analysis to derive the time-frequency representation of this nonstationary signal, which reveals individual uniqueness and long-term reproducibility. A machine learning technique, linear discriminant analysis, is subsequently utilized to reduce intrasubject variability and capture intersubject differentiation features. With practical application in mind, we also introduce a complete framework for the biometric system in both verification and identification modes. Comparative experiments on a TEOAE data set in a biometric setting show the merits of the proposed method. Performance is further improved by fusing information from both ears.
Keywords: Auditory system; Biometrics (access control); Ear; Feature extraction; Probes; Time-frequency analysis; Vectors; Robust Biometric Modality; Robust biometric modality; Time-frequency Analysis; Transient Evoked Otoacoustic Emission; biometric fusion; linear discriminant analysis; time-frequency analysis; transient evoked otoacoustic emission (ID#: 15-3674)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914592&isnumber=6953163
Guang Hua; Goh, J.; Thing, V.L.L., "A Dynamic Matching Algorithm for Audio Timestamp Identification Using the ENF Criterion," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 7, pp.1045, 1055, July 2014. doi: 10.1109/TIFS.2014.2321228 The electric network frequency (ENF) criterion is a recently developed technique for audio timestamp identification, which involves the matching between extracted ENF signal and reference data. For nearly a decade, conventional matching criterion has been based on the minimum mean squared error (MMSE) or maximum correlation coefficient. However, the corresponding performance is highly limited by low signal-to-noise ratio, short recording durations, frequency resolution problems, and so on. This paper presents a threshold-based dynamic matching algorithm (DMA), which is capable of autocorrecting the noise affected frequency estimates. The threshold is chosen according to the frequency resolution determined by the short-time Fourier transform (STFT) window size. A penalty coefficient is introduced to monitor the autocorrection process and finally determine the estimated timestamp. It is then shown that the DMA generalizes the conventional MMSE method. By considering the mainlobe width in the STFT caused by limited frequency resolution, the DMA achieves improved identification accuracy and robustness against higher levels of noise and the offset problem. Synthetic performance analysis and practical experimental results are provided to illustrate the advantages of the DMA.
Keywords: Fourier transforms; audio recording; correlation methods; frequency estimation; mean square error methods; ENF criterion; MMSE; STFT; audio timestamp identification; autocorrection process; dynamic matching; electric network frequency criterion; extracted ENF signal; frequency estimates; frequency resolution problems; maximum correlation coefficient; minimum mean squared error; reference data; short recording durations; short-time Fourier transform; signal-to-noise ratio; window size; Correlation; Estimation; Frequency estimation; Signal resolution; Signal to noise ratio; Time-frequency analysis; Electric network frequency (ENF); audio authentication; audio forensics; timestamp identification (ID#: 15-3675)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808537&isnumber=6819111
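The conventional MMSE matching criterion that the DMA above generalizes can be sketched in a few lines. This toy uses a synthetic random-walk reference ENF rather than real grid measurements; the nominal 50 Hz value, step sizes, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic reference ENF: an hour of per-second frequency estimates (Hz),
# modeled as a slow random walk around the 50 Hz nominal value.
ref = 50 + 0.02 * np.cumsum(rng.normal(size=3600))
start = 1234                                # true (unknown) recording timestamp
# ENF extracted from a 5-minute recording: the matching segment plus noise.
probe = ref[start:start + 300] + 0.005 * rng.normal(size=300)
# MMSE matching: slide the probe along the reference and pick the offset
# that minimizes the mean squared error.
errs = [np.mean((probe - ref[i:i + 300]) ** 2)
        for i in range(len(ref) - 300 + 1)]
t_hat = int(np.argmin(errs))                # recovered timestamp (seconds)
```

The DMA's contribution is what this sketch lacks: autocorrecting noise-affected frequency estimates against a resolution-derived threshold before the comparison.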
Sousa, J.; Vilela, J.P., "A Characterization Of Uncoordinated Frequency Hopping For Wireless Secrecy," Wireless and Mobile Networking Conference (WMNC), 2014 7th IFIP, pp. 1, 4, 20-22 May 2014. doi: 10.1109/WMNC.2014.6878885 We characterize the secrecy level of communication under Uncoordinated Frequency Hopping, a spread spectrum scheme where a transmitter and a receiver randomly hop through a set of frequencies with the goal of deceiving an adversary. In our work, the goal of the legitimate parties is to land on a given frequency without the eavesdroppers doing so, and thus to communicate securely in that period, which may be used for secret-key exchange. We also consider the effect on secrecy of the availability of friendly jammers that can be used to obstruct eavesdroppers by causing them interference. Our results show that tuning the number of frequencies and adding friendly jammers are effective countermeasures against eavesdroppers.
Keywords: cryptography; jamming; radio receivers; radio transmitters; spread spectrum communication; telecommunication security; communication secrecy level; interference; secret-key exchange; spread spectrum scheme; uncoordinated frequency hopping characterization; wireless secrecy; wireless transmissions; Interference; Jamming; Security; Spread spectrum communication; Throughput; Time-frequency analysis; Wireless communication (ID#: 15-3676)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878885&isnumber=6878843
Esquef, P.A.A.; Apolinario, J.A.; Biscainho, L.W.P., "Edit Detection in Speech Recordings via Instantaneous Electric Network Frequency Variations," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 12, pp. 2314, 2326, Dec. 2014. doi: 10.1109/TIFS.2014.2363524 In this paper, an edit detection method for forensic audio analysis is proposed. It develops and improves a previous method through changes in the signal processing chain and a novel detection criterion. As with the original method, electrical network frequency (ENF) analysis is central to the novel edit detector, for it allows monitoring anomalous variations of the ENF related to audio edit events. Working in unsupervised manner, the edit detector compares the extent of ENF variations, centered at its nominal frequency, with a variable threshold that defines the upper limit for normal variations observed in unedited signals. The ENF variations caused by edits in the signal are likely to exceed the threshold providing a mechanism for their detection. The proposed method is evaluated in both qualitative and quantitative terms via two distinct annotated databases. Results are reported for originally noisy database signals as well as versions of them further degraded under controlled conditions. A comparative performance evaluation, in terms of equal error rate (EER) detection, reveals that, for one of the tested databases, an improvement from 7% to 4% EER is achieved, respectively, from the original to the new edit detection method. When the signals are amplitude clipped or corrupted by broadband background noise, the performance figures of the novel method follow the same profile of those of the original method.
Keywords: Databases; Estimation; Forensics; Frequency estimation; Noise; Noise measurement; Time-frequency analysis; Acoustical signal processing; edit detection; instantaneous frequency; spectral analysis; voice activity detection (ID#: 15-3677)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6926817&isnumber=6953163
Pukkawanna, S.; Hazeyama, H.; Kadobayashi, Y.; Yamaguchi, S., "Investigating the Utility Of S-Transform For Detecting Denial-Of-Service And Probe Attacks," Information Networking (ICOIN), 2014 International Conference on, pp. 282, 287, 10-12 Feb. 2014. doi: 10.1109/ICOIN.2014.6799482 Denial-of-Service (DoS) and probe attacks are growing more modern and sophisticated in order to evade detection by Intrusion Detection Systems (IDSs) and to increase the potent threat to the availability of network services. Detecting these attacks is quite tough for network operators using misuse-based IDSs because they need to see through attackers and upgrade their IDSs by adding new accurate attack signatures. In this paper, we propose a novel signal- and image-processing-based method for detecting network probe and DoS attacks in which prior knowledge of attacks is not required. The method uses a time-frequency representation technique called the S-transform, which is an extension of the wavelet transform, to reveal abnormal frequency components caused by attacks in a traffic signal (e.g., a time series of the number of packets). First, the S-transform converts the traffic signal to a two-dimensional image which describes the time-frequency behavior of the traffic signal. The frequencies that behave abnormally are discovered as abnormal regions in the image. Second, Otsu's method is used to detect the abnormal regions and identify the times at which attacks occur. We evaluated the effectiveness of the proposed method with several network probe and DoS attacks such as port scans, packet flooding attacks, and a low-intensity DoS attack. The results clearly indicate that the method is effective for detecting probe and DoS attack streams generated on the real-world Internet.
Keywords: Internet; computer network security; telecommunication traffic; time-frequency analysis; wavelet transforms; DoS attacks; IDS; Internet; Otsu method; S-transform; accurate attack signatures; denial-of-service detection; frequency components; image processing method; intrusion detection systems; probe attacks; signal processing method; time-frequency representation technique; traffic signal; two-dimensional image; wavelet transform; Computer crime; Internet; Ports (Computers); Probes; Time-frequency analysis; Wavelet transforms (ID#: 15-3678)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799482&isnumber=6799467
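The two-stage pipeline described above (time-frequency image, then Otsu thresholding) can be sketched as follows. This is a simplified stand-in: it uses a plain windowed-FFT spectrogram instead of the S-transform, flags whole frames by spectral energy rather than image regions, and the traffic model and burst parameters are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    # Otsu's method: exhaustively pick the threshold that maximizes
    # the between-class variance of the resulting two-class split.
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    total, sum_all = hist.sum(), (hist * centers).sum()
    best_t, best_var, w0, sum0 = centers[0], -1.0, 0.0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        var = w0 * w1 * (sum0 / w0 - (sum_all - sum0) / w1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

# Traffic signal: Gaussian background rate with an injected flood burst.
rng = np.random.default_rng(0)
x = rng.normal(10, 1, 512)
x[200:260] += 40                       # simulated attack interval
win, hop = 32, 16
frames = np.array([np.abs(np.fft.rfft(x[i:i + win]))
                   for i in range(0, len(x) - win, hop)])
energy = (frames ** 2).sum(axis=1)     # per-frame spectral energy
t = otsu_threshold(energy)
anomalous = np.where(energy > t)[0]    # frames flagged as attack activity
```

The flagged frame indices cluster exactly where the burst overlaps the analysis windows, with no prior attack signature required, which is the property the abstract emphasizes.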
Rahayu, T.M.; Sang-Gon Lee; Hoon-Jae Lee, "Security Analysis Of Secure Data Aggregation Protocols In Wireless Sensor Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 471, 474, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779005 In order to conserve wireless sensor network (WSN) lifetime, data aggregation is applied. Some researchers consider the importance of security and propose secure data aggregation protocols. The essence of these secure approaches is to ensure that the aggregators aggregate the data in an appropriate and secure way. In this paper we describe the ESPDA (Energy-efficient and Secure Pattern-based Data Aggregation) and SRDA (Secure Reference-based Data Aggregation) protocols that work on cluster-based WSNs, and present a deep security analysis that differs from those previously presented.
Keywords: protocols; telecommunication security; wireless sensor networks; ESPDA protocol; SRDA protocol; WSN lifetime; cluster-based WSN; deep security analysis; energy-efficient and secure pattern-based data aggregation protocol; secure reference-based data aggregation protocol; wireless sensor network lifetime; Authentication; Cryptography; Energy efficiency; Peer-to-peer computing; Protocols; Wireless sensor networks; Data aggregation protocol; ESPDA; SRDA; WSN; secure data aggregation protocol (ID#: 15-3679)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779005&isnumber=6778899
Rezvani, M.; Ignjatovic, A.; Bertino, E.; Jha, S., "Provenance-Aware Security Risk Analysis For Hosts And Network Flows," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1, 8, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838250 Detection of high-risk network flows and high-risk hosts is becoming ever more important and more challenging. In order to selectively apply deep packet inspection (DPI), one has to isolate in real time high-risk network activities within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for a simultaneous assessment of risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner; thus, the risk score of a flow influences the risk scores of its source and destination hosts, and the risk score of a host is in turn evaluated by taking into account the risk scores of flows initiated by or terminated at the host. Our experimental results show that such an approach is not only effective in detecting high-risk hosts and flows but, when deployed in high-throughput networks, is also more efficient than PageRank-based algorithms.
Keywords: computer network security; risk analysis; deep packet inspection; high risk hosts; high risk network flows; provenance aware security risk analysis; risk score; Computational modeling; Educational institutions; Iterative methods; Monitoring; Ports (Computers); Risk management; Security (ID#: 15-3680)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838250&isnumber=6838210
Zahid, A.; Masood, R.; Shibli, M.A., "Security of Sharded NoSQL Databases: A Comparative Analysis," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp. 1, 8, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861323 NoSQL databases are easy to scale out because of their flexible schema and support for BASE (Basically Available, Soft State and Eventually Consistent) properties. The process of scaling out in most of these databases is supported by sharding, which is considered the key feature in providing faster reads and writes to the database. However, securing the data sharded over various servers is a challenging problem because the data are distributedly processed and transmitted over an unsecured network. Although extensive research has been performed on NoSQL sharding mechanisms, no specific criterion has been defined to analyze the security of sharded architectures. This paper proposes an assessment criterion comprising various security features for the analysis of sharded NoSQL databases. It presents a detailed view of the security features offered by NoSQL databases and analyzes them with respect to the proposed assessment criteria. The presented analysis helps organizations in the selection of an appropriate and reliable database in accordance with their preferences and security requirements.
Keywords: SQL; security of data; BASE; NoSQL sharding mechanisms; assessment criterion; security features; sharded NoSQL databases; Access control; Authentication; Distributed databases; Encryption; Servers; Comparative Analysis; Data and Applications Security; Database Security; NoSQL; Sharding (ID#: 15-3681)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861323&isnumber=6861314
Hongzhen Du; Qiaoyan Wen, "Security Analysis Of Two Certificateless Short Signature Schemes," Information Security, IET, vol. 8, no.4, pp.230, 233, July 2014. doi: 10.1049/iet-ifs.2013.0080 Certificateless public key cryptography (CL-PKC) combines the advantage of both traditional PKC and identity-based cryptography (IBC) as it eliminates the certificate management problem in traditional PKC and resolves the key escrow problem in IBC. Recently, Choi et al. and Tso et al. proposed two different efficient CL short signature schemes and claimed that the two schemes are secure against super adversaries and satisfy the strongest security. In this study, the authors show that both Choi et al.'s scheme and Tso et al.'s scheme are insecure against the strong adversaries who can replace users' public keys and have access to the signing oracle under the replaced public keys.
Keywords: digital signatures; public key cryptography; CL short signature schemes; CL-PKC; IBC; certificate management problem; certificateless public key cryptography; certificateless short signature schemes; identity-based cryptography; key escrow problem; security analysis; user public keys (ID#: 15-3682)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842408&isnumber=6842405
Wenli Liu; Xiaolong Zheng; Tao Wang; Hui Wang, "Collaboration Pattern and Topic Analysis on Intelligence and Security Informatics Research," Intelligent Systems, IEEE, vol.29, no. 3, pp. 39, 46, May-June 2014. doi: 10.1109/MIS.2012.106 In this article, researcher collaboration patterns and research topics on Intelligence and Security Informatics (ISI) are investigated using social network analysis approaches. The collaboration networks exhibit scale-free property and small-world effect. From these networks, the authors obtain the key researchers, institutions, and three important topics.
Keywords: groupware; security of data; social networking (online);collaboration pattern; intelligence and security informatics research; scale-free property; small-world effect; social network analysis approach; topic analysis; Collaboration; Computer security; Informatics; Intelligent systems; Network security; Social network services; Terrorism; ISI; Intelligence and Security Informatics; intelligent systems; social network analysis; topic analysis (ID#: 15-3683)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6357170&isnumber=6871688
Sasidharan, B.; Kumar, P.V.; Shah, N.B.; Rashmi, K.V.; Ramachandran, K., "Optimality of the Product-Matrix Construction For Secure MSR Regenerating Codes," Communications, Control and Signal Processing (ISCCSP), 2014 6th International Symposium on, pp. 10, 14, 21-23 May 2014. doi: 10.1109/ISCCSP.2014.6877804 In this paper, we consider the security of exact-repair regenerating codes operating at the minimum-storage-regenerating (MSR) point. The security requirement (introduced in Shah et al.) is that no information about the stored data file must be leaked in the presence of an eavesdropper who has access to the contents of ℓ1 nodes as well as all the repair traffic entering a second disjoint set of ℓ2 nodes. We derive an upper bound on the size of a data file that can be securely stored that holds whenever ℓ2 ≤ d - k + 1. This upper bound proves the optimality of the product-matrix-based construction of secure MSR regenerating codes by Shah et al.
Keywords: encoding; matrix algebra; MSR point; data file; eavesdropper; exact repair regenerating code security; minimum storage regenerating point; product matrix; product matrix construction; repair traffic; secure MSR regenerating codes; Bandwidth; Data collection; Entropy; Maintenance engineering; Random variables; Security; Upper bound; MSR codes; Secure regenerating codes; product-matrix construction; regenerating codes (ID#: 15-3684)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877804&isnumber=6877795
Mirmohseni, M.; Papadimitratos, P., "Scaling Laws For Secrecy Capacity In Cooperative Wireless Networks," INFOCOM, 2014 Proceedings IEEE, pp. 1527, 1535, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848088 We investigate large wireless networks subject to security constraints. In contrast to point-to-point, interference-limited communications considered in prior works, we propose active cooperative relaying based schemes. We consider a network with nl legitimate nodes and ne eavesdroppers, and path loss exponent α ≥ 2. As long as ne^2 (log(ne))^γ = o(nl) holds for some positive γ, we show one can obtain unbounded secure aggregate rate. This means zero-cost secure communication, given a fixed total power constraint for the entire network. We achieve this result with (i) the source using Wyner randomized encoder and a serial (multi-stage) block Markov scheme, to cooperate with the relays, and (ii) the relays acting as a virtual multi-antenna to apply beamforming against the eavesdroppers. Our simpler parallel (two-stage) relaying scheme can achieve the same unbounded secure aggregate rate when ne^(α/2+1) (log(ne))^(γ+δ(α/2+1)) = o(nl) holds, for some positive γ, δ.
Keywords: Markov processes; array signal processing; cooperative communication; interference (signal); relay networks (telecommunication); telecommunication security; Wyner randomized encoder; active cooperative relaying; beamforming ;cooperative wireless networks; interference limited communications; parallel relaying scheme; path loss exponent; scaling laws; secrecy capacity; secure communication; serial block Markov scheme; Aggregates; Array signal processing; Encoding; Relays; Tin; Transmitters; Wireless networks (ID#: 15-3685)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848088&isnumber=6847911
Yao, H.; Silva, D.; Jaggi, S.; Langberg, M., "Network Codes Resilient to Jamming and Eavesdropping," Networking, IEEE/ACM Transactions on, vol. PP, no. 99, pp. 1, 1, February 2014. doi: 10.1109/TNET.2013.2294254 We consider the problem of communicating information over a network secretly and reliably in the presence of a hidden adversary who can eavesdrop and inject malicious errors. We provide polynomial-time distributed network codes that are information-theoretically rate-optimal for this scenario, improving on the rates achievable in prior work by Ngai et al. Our main contribution shows that as long as the sum of the number of links the adversary can jam (denoted by $Z_O$) and the number of links he can eavesdrop on (denoted by $Z_I$) is less than the network capacity (denoted by $C$), i.e., $Z_O + Z_I < C$, our codes can communicate (with vanishingly small error probability) a single bit correctly and without leaking any information to the adversary. We then use this scheme as a module to design codes that allow communication at the source rate of $C - Z_O$ when there are no security requirements, and codes that allow communication at the source rate of $C - Z_O - Z_I$ while keeping the communicated message provably secret from the adversary. Interior nodes are oblivious to the presence of adversaries and perform random linear network coding; only the source and destination need to be tweaked. We also prove that the rate region obtained is information-theoretically optimal. In proving our results, we correct an error in prior work by a subset of the authors of this paper.
Keywords: Error probability; Jamming; Network coding; Robustness; Transforms; Vectors; Achievable rates; adversary; error control; network coding; secrecy (ID#: 15-3686)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730968&isnumber=4359146
Pasolini, G.; Dardari, D., "Secret Key Generation In Correlated Multi-Dimensional Gaussian Channels," Communications (ICC), 2014 IEEE International Conference on, pp.2171,2177, 10-14 June 2014. doi: 10.1109/ICC.2014.6883645 Wireless channel reciprocity can be successfully exploited as a common source of randomness for the generation of a secret key by two legitimate users willing to achieve confidential communications over a public channel. This paper presents an analytical framework to investigate the theoretical limits of secret-key generation when wireless multi-dimensional Gaussian channels are used as source of randomness. The intrinsic secrecy content of wide-sense stationary wireless channels in frequency, time and spatial domains is derived through asymptotic analysis as the number of observations in a given domain tends to infinity. Some significant case studies are presented where single and multiple antenna eavesdroppers are considered. In the numerical results, the role of signal-to-noise ratio, spatial correlation, frequency and time selectivity is investigated.
Keywords: Gaussian channels; antenna arrays; frequency-domain analysis; public key cryptography; radio networks; telecommunication security; time-domain analysis; wireless channels; analytical framework; asymptotic analysis; confidential communications; correlated multidimensional Gaussian channels; frequency domains; intrinsic secrecy content; multiple antenna eavesdroppers; public channel; secret key generation; signal-to-noise ratio; spatial correlation; spatial domains; time domains; time selectivity; wide-sense stationary wireless channels; wireless channel reciprocity; wireless networks; Communication system security; Covariance matrices; Security; Signal to noise ratio; Time-frequency analysis; Wireless communication (ID#: 15-3687)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883645&isnumber=6883277
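The reciprocity-based key generation idea the paper analyzes can be illustrated with a toy one-bit quantizer. The channel model, noise levels, and sign quantizer below are illustrative assumptions, not the paper's analytical framework.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256                                   # number of channel probes
h = rng.normal(size=n)                    # reciprocal channel gains (shared randomness)
alice = h + 0.05 * rng.normal(size=n)     # Alice's noisy channel estimates
bob = h + 0.05 * rng.normal(size=n)       # Bob's noisy channel estimates
eve = rng.normal(size=n)                  # eavesdropper observes an uncorrelated channel

quantize = lambda v: (v > 0).astype(int)  # 1-bit sign quantizer per probe
ka, kb, ke = quantize(alice), quantize(bob), quantize(eve)
agree_ab = (ka == kb).mean()              # near 1: legitimate keys almost match
agree_ae = (ka == ke).mean()              # near 0.5: Eve's bits are a coin flip
```

The residual disagreements between Alice and Bob would be removed by information reconciliation and privacy amplification in a full system; the paper's contribution is the information-theoretic limit on how many secret bits such observations can yield.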
Guoyuan Lin; Danru Wang; Yuyu Bie; Min Lei, "MTBAC: A Mutual Trust Based Access Control Model In Cloud Computing," Communications, China, vol. 11, no. 4, pp. 154, 162, April 2014. doi: 10.1109/CC.2014.6827577 As a new computing mode, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing. But directly applying traditional access control models to the Cloud cannot solve the uncertainty and vulnerability caused by the open conditions of cloud computing. In a cloud computing environment, data security can be effectively guaranteed during interactions between users and the Cloud only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control methods in cloud computing environments. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both users' behavior trust and cloud service nodes' credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism. Security problems of access control are solved by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee the interaction between users and cloud service nodes.
Keywords: Web services; authorisation; cloud computing; virtualisation; MTBAC model; cloud computing environment; cloud computing security; cloud service node credibility; data security; mutual trust based access control model; mutual trust mechanism; mutual trust relationship; open conditions; scalable Web services; trust management; user behavior trust; virtualized Web services; Computational modeling; Reliability; Time-frequency analysis; MTBAC; access control; cloud computing; mutual trust mechanism; trust model (ID#: 15-3688)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827577&isnumber=6827540
Chen, L.M.; Hsiao, S.-W.; Chen, M.C.; Liao, W., "Slow-Paced Persistent Network Attacks Analysis and Detection Using Spectrum Analysis," Systems Journal, IEEE, vol. PP, no. 99, pp.1, 12, September 2014. doi: 10.1109/JSYST.2014.2348567 A slow-paced persistent attack, such as slow worm or bot, can bewilder the detection system by slowing down their attack. Detecting such attacks based on traditional anomaly detection techniques may yield high false alarm rates. In this paper, we frame our problem as detecting slow-paced persistent attacks from a time series obtained from network trace. We focus on time series spectrum analysis to identify peculiar spectral patterns that may represent the occurrence of a persistent activity in the time domain. We propose a method to adaptively detect slow-paced persistent attacks in a time series and evaluate the proposed method by conducting experiments using both synthesized traffic and real-world traffic. The results show that the proposed method is capable of detecting slow-paced persistent attacks even in a noisy environment mixed with legitimate traffic.
Keywords: Discrete Fourier transforms; Grippers; Spectral analysis; Time series analysis; Time-domain analysis; Time-frequency analysis; Network security; persistent activity; slow-paced attack; spectrum analysis; time series (ID#: 15-3689)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906240&isnumber=4357939
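The core observation above, that a slow-paced persistent activity which hides in the time domain concentrates into a narrow spectral peak, can be sketched with a plain periodogram. The traffic model and the sinusoidal attack component are illustrative assumptions; the paper works with general spectral patterns extracted from network traces.

```python
import numpy as np

rng = np.random.default_rng(2)
n, period = 1024, 64
t = np.arange(n)
traffic = rng.poisson(100, n).astype(float)      # legitimate packets/sec
traffic += 20 * np.sin(2 * np.pi * t / period)   # slow-paced periodic attack
# Periodogram of the mean-removed series: the persistent activity shows up
# as a narrow spectral peak that the noisy time series hides.
x = traffic - traffic.mean()
psd = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)
peak = freqs[np.argmax(psd[1:]) + 1]             # skip the DC bin
est_period = 1.0 / peak                          # recovered attack period (sec)
```

In the time domain the 20-packet modulation is buried under Poisson noise of standard deviation 10, yet the spectral peak at 1/64 Hz dominates every noise bin by orders of magnitude, which is the detectability gap the paper exploits.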
Kun Wen; Jiahai Yang; Fengjuan Cheng; Chenxi Li; Ziyu Wang; Hui Yin, "Two-stage Detection Algorithm For RoQ Attack Based On Localized Periodicity Analysis Of Traffic Anomaly," Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, pp. 1, 6, 4-7 Aug. 2014. doi: 10.1109/ICCCN.2014.6911829 Reduction of Quality (RoQ) attack is a stealthy denial-of-service attack. It can decrease or inhibit normal TCP flows in a network. Victims can hardly perceive it, as the final network throughput decreases rather than increases during the attack. The attack is therefore strongly hidden and difficult for existing detection systems to detect. Based on the principles of time-frequency analysis, we propose a two-stage detection algorithm which combines anomaly detection with misuse detection. In the first stage, we try to detect the potential anomaly by analyzing network traffic through a wavelet multiresolution analysis method. According to different time-domain characteristics, we locate the abrupt change points. In the second stage, we further analyze the local traffic around each abrupt change point. We extract the potential attack characteristics by autocorrelation analysis. By the two-stage detection, we can ultimately confirm whether the network is affected by the attack. Results of simulations and real network experiments demonstrate that our algorithm can detect RoQ attacks with high accuracy and high efficiency.
Keywords: computer network security; time-frequency analysis; RoQ attack; anomaly detection; autocorrelation analysis; denial of service attack; detection algorithm; detection systems; inhibit normal TCP flows; localized periodicity analysis; network traffic; reduction of quality; time-frequency analysis; traffic anomaly; wavelet multiresolution analysis method; Algorithm design and analysis; Computer crime; Correlation; Detection algorithms; Multiresolution analysis; RoQ attack; anomaly detection; misuse detection; network security; wavelet analysis (ID#: 15-3690)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911829&isnumber=6911704
Yanbing Liu; Qingyun Liu; Ping Liu; Jianlong Tan; Li Guo, "A Factor-Searching-Based Multiple String Matching Algorithm For Intrusion Detection," Communications (ICC), 2014 IEEE International Conference on, pp. 653, 658, 10-14 June 2014. doi: 10.1109/ICC.2014.6883393 Multiple string matching plays a fundamental role in network intrusion detection systems. Automata-based multiple string matching algorithms like AC, SBDM and SBOM are widely used in practice, but the huge memory usage of automata prevents them from being applied to a large-scale pattern set. Meanwhile, poor cache locality of huge automata degrades the matching speed of algorithms. Here we propose a space-efficient multiple string matching algorithm BVM, which makes use of bit-vectors and a succinct hash table to replace the automata used in factor-searching-based algorithms. Space complexity of the proposed algorithm is O(rm^2 + Σ_{p∈P} |p|), which is more space-efficient than the classic automata-based algorithms. Experiments on datasets including Snort, ClamAV, URL blacklist and synthetic rules show that the proposed algorithm significantly reduces memory usage and still runs at a fast matching speed. Above all, BVM costs less than 0.75% of the memory usage of AC, and is capable of matching millions of patterns efficiently.
Keywords: automata theory; security of data; string matching; AC; ClamAV; SBDM; SBOM; Snort; URL blacklist; automata-based multiple string matching algorithms; bit-vector; factor searching-based algorithms; factor-searching-based multiple string matching algorithm; huge memory usage; matching speed; network intrusion detection systems; space complexity; space-efficient multiple string matching algorithm BVM; succinct hash table; synthetic rules; Arrays; Automata; Intrusion detection; Pattern matching; Time complexity; automata; intrusion detection; multiple string matching; space-efficient (ID#: 15-3691)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883393&isnumber=6883277
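For context, the classic automata-based baseline (AC, i.e., Aho-Corasick) that BVM is compared against can be sketched compactly. This is a plain dictionary-based implementation, not the paper's bit-vector and succinct-hash-table structure, and it makes the memory cost visible: every state carries its own transition dictionary.

```python
from collections import deque

def build_ac(patterns):
    # Trie stored as parallel arrays: goto transitions, failure links, outputs.
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    # BFS to set failure links; depth-1 states keep fail = root.
    q = deque(goto[0].values())
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]          # inherit matches ending at suffixes
    return goto, fail, out

def search(text, ac):
    goto, fail, out = ac
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]                     # follow failure links on mismatch
        s = goto[s].get(ch, 0)
        hits += [(i - len(p) + 1, p) for p in out[s]]
    return hits
```

Matching is linear in the text length regardless of the pattern count; the abstract's point is that the per-state storage above is exactly what blows up on million-pattern rule sets, motivating BVM's compressed representation.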
Van Vaerenbergh, S.; González, O.; Vía, J.; Santamaría, I., "Physical Layer Authentication Based On Channel Response Tracking Using Gaussian Processes," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp.2410,2414, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854032 Physical-layer authentication techniques exploit the unique properties of the wireless medium to enhance traditional higher-level authentication procedures. We propose to reduce the higher-level authentication overhead by using a state-of-the-art multi-target tracking technique based on Gaussian processes. The proposed technique has the additional advantage that it is capable of automatically learning the dynamics of the trusted user's channel response and the time-frequency fingerprint of intruders. Numerical simulations show very low intrusion rates, and an experimental validation using a wireless test bed with programmable radios demonstrates the technique's effectiveness.
Keywords: Gaussian processes; fingerprint identification; security of data; target tracking; telecommunication security; time-frequency analysis; wireless channels; Gaussian process; automatic learning; channel response tracking; higher level authentication overhead; higher level authentication procedure; intruder; multitarget tracking technique; numerical simulation; physical layer authentication; programmable radio; time-frequency fingerprint; trusted user channel response; wireless medium; wireless test bed; Authentication; Channel estimation; Communication system security; Gaussian processes; Time-frequency analysis; Trajectory; Wireless communication; Gaussian processes; multi-target tracking; physical-layer authentication; wireless communications (ID#: 15-3692)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854032&isnumber=6853544
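The paper's tracker is a full Gaussian-process multi-target model; as a much-simplified stand-in for the idea it describes (predict the trusted user's channel response, flag measurements that deviate from the prediction as possible intruders), the sketch below uses plain exponential smoothing. All names, the smoothing factor, and the threshold are illustrative assumptions, not values from the paper.

```python
# Toy channel-response tracker: smooth trusted measurements, flag outliers.
def track_channel(measurements, alpha=0.3, threshold=1.0):
    """Return a list of (estimate, is_suspicious) per measurement."""
    estimate = measurements[0]
    results = [(estimate, False)]
    for m in measurements[1:]:
        suspicious = abs(m - estimate) > threshold
        if not suspicious:
            # only fold trusted measurements into the channel estimate
            estimate = alpha * m + (1 - alpha) * estimate
        results.append((estimate, suspicious))
    return results

readings = [1.0, 1.1, 0.95, 1.05, 3.2, 1.0]  # 3.2 mimics an intruder's channel
flags = [s for _, s in track_channel(readings)]
```

A Gaussian process would additionally give a principled predictive variance to set the threshold, which is the advantage the authors exploit.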
Baofeng Wu; Qingfang Jin; Zhuojun Liu; Dongdai Lin, "Constructing Boolean Functions With Potentially Optimal Algebraic Immunity Based On Additive Decompositions Of Finite Fields (Extended Abstract)," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.1361,1365, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6875055 We propose a general approach to construct cryptographic significant Boolean functions of (r + 1)m variables based on the additive decomposition F2rm × F2m of the finite field F2(r+1)m, where r ≥ 1 is odd and m ≥ 3. A class of unbalanced functions is constructed first via this approach, which coincides with a variant of the unbalanced class of generalized Tu-Deng functions in the case r = 1. Functions belonging to this class have high algebraic degree, but their algebraic immunity does not exceed m, which is impossible to be optimal when r > 1. By modifying these unbalanced functions, we obtain a class of balanced functions which have optimal algebraic degree and high nonlinearity (shown by a lower bound we prove). These functions have optimal algebraic immunity provided a combinatorial conjecture on binary strings which generalizes the Tu-Deng conjecture is true. Computer investigations show that, at least for small values of number of variables, functions from this class also behave well against fast algebraic attacks.
Keywords: Boolean functions; combinatorial mathematics; cryptography; additive decomposition; algebraic immunity; binary strings; combinatorial conjecture; cryptographic significant Boolean functions; fast algebraic attacks; finite field; generalized Tu-Deng functions; optimal algebraic degree; unbalanced functions; Additives; Boolean functions; Cryptography; Electronic mail; FAA; Information theory; Transforms (ID#: 15-3693)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875055&isnumber=6874773
Luowei Zhou; Sucheng Liu; Weiguo Lu; Shuchang Hu, "Quasi-Steady-State Large-Signal Modelling Of DC–DC Switching Converter: Justification And Application For Varying Operating Conditions," Power Electronics, IET, vol.7, no.10, pp.2455, 2464, October 2014. doi: 10.1049/iet-pel.2013.0487 Quasi-steady-state (QSS) large-signal models are often taken for granted in the analysis and design of DC-DC switching converters, particularly for varying operating conditions. In this study, the premise for the QSS is justified quantitatively for the first time. Based on the QSS, the DC-DC switching converter under varying operating conditions is reduced to the linear time varying systems model. Thereafter, the QSS concept is applied to analysis of frequency-domain properties of the DC-DC switching converters by using three-dimensional Bode plots, which is then utilised to the optimisation of the controller parameters for wide variations of input voltage and load resistance. An experimental prototype of an average-current-mode-controlled boost DC-DC converter is built to verify the analysis and design by both frequency-domain and time-domain measurements.
Keywords: Bode diagrams; DC-DC power convertors; electric current control; electric resistance; linear systems; optimisation; switching convertors; time-frequency analysis; time-varying systems; 3D Bode plots; DC-DC switching converter; QSS; controller parameter optimisation; current mode controlled boost DC-DC converter; frequency-domain measurement; linear time varying systems model; load resistance variation; operating conditions variation; quasi steady-state large signal modelling; time-domain measurement (ID#: 15-3694)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919980&isnumber=6919884
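This is not the converter model from the paper, only a generic illustration of how a Bode surface like the one the abstract describes is built: sweep a transfer function over frequency and over an operating-condition parameter, the way the authors sweep the converter's small-signal response against input voltage and load. The first-order low-pass H(jw) and its corner frequencies below are assumptions chosen purely for illustration.

```python
import math

def bode_mag_db(wc, w):
    """Magnitude in dB of the illustrative H(jw) = 1 / (1 + jw/wc)."""
    h = 1 / (1 + 1j * w / wc)
    return 20 * math.log10(abs(h))

# one "slice" of the three-dimensional plot per operating condition (corner freq.)
surface = {wc: [round(bode_mag_db(wc, w), 1) for w in (10, 100, 1000)]
           for wc in (100, 1000)}
```

At w = wc the magnitude is 1/sqrt(2), the familiar -3 dB corner; stacking such slices over an operating-condition axis yields the 3-D Bode surface used for controller tuning.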
Trust and Trustworthiness
Trust is created in information security through cryptography to assure the identity of external parties. The works cited here place a strong emphasis on Bayesian methods and cloud environments. In addition, the new ISO/IEC/IEEE standard for secure device identity has been released: "ISO/IEC/IEEE International Standard for Information technology -- Telecommunications and information exchange between systems -- Local and metropolitan area networks -- Part 1AR: Secure device identity," ISO/IEC/IEEE 8802-1AR:2014(E), pp. 1-82, Feb. 15, 2014. It is available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6739984&isnumber=6739983
Trusted Platform Modules (TPMs)
Trusted Platform Module (TPM) is a computer chip that can securely store artifacts used to authenticate a network or platform. These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure that the platform remains trustworthy. Interest in TPMs is growing due to their potential for solving hard problems in security such as composability and cyber-physical system security and resilience. The works cited here are from 2014.
Akram, R.N.; Markantonakis, K.; Mayes, K., "Trusted Platform Module for Smart Cards," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814058 Near Field Communication (NFC)-based mobile phone services offer a lifeline to the under-appreciated multiapplication smart card initiative. The initiative could effectively replace heavy wallets full of smart cards for mundane tasks. However, the issue of the deployment model still lingers on. Possible approaches include, but are not restricted to, the User Centric Smart card Ownership Model (UCOM), GlobalPlatform Consumer Centric Model, and Trusted Service Manager (TSM). In addition, multiapplication smart card architecture can be a GlobalPlatform Trusted Execution Environment (TEE) and/or User Centric Tamper-Resistant Device (UCTD), which provide cross-device security and privacy preservation platforms to their users. In the multiapplication smart card environment, there might not be a prior off-card trusted relationship between a smart card and an application provider. Therefore, as a possible solution to overcome the absence of prior trusted relationships, this paper proposes the concept of Trusted Platform Module (TPM) for smart cards (embedded devices) that can act as a point of reference for establishing the necessary trust between the device and an application provider, and among applications.
Keywords: data privacy; mobile handsets; near-field communication; smart cards; TEE; Trusted Execution Environment; UCOM; UCTD; User Centric Tamper-Resistant Device; application provider; cross-device security; deployment model; embedded devices; global platform consumer centric model; multiapplication smart card initiative; near field communication-based mobile phone services; off-card trusted relationship; privacy preservation platforms; trusted platform module; trusted service manager; user centric smart card ownership model; Computational modeling; Computer architecture; Hardware; Mobile communication; Runtime; Security; Smart cards (ID#: 15-3713)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814058&isnumber=6813963
Das, S.; Wei Zhang; Yang Liu, "Reconfigurable Dynamic Trusted Platform Module for Control Flow Checking," VLSI (ISVLSI), 2014 IEEE Computer Society Annual Symposium on, pp.166,171, 9-11 July 2014. doi: 10.1109/ISVLSI.2014.84 Trusted Platform Module (TPM) has gained its popularity in computing systems as a hardware security approach. TPM provides the boot time security by verifying the platform integrity including hardware and software. However, once the software is loaded, TPM can no longer protect the software execution. In this work, we propose a dynamic TPM design, which performs control flow checking to protect the program from runtime attacks. The control flow checker is integrated at the commit stage of the processor pipeline. The control flow of program is verified to defend the attacks such as stack smashing using buffer overflow and code reuse. We implement the proposed dynamic TPM design in FPGA to achieve high performance, low cost and flexibility for easy functionality upgrade based on FPGA. In our design, neither the source code nor the Instruction Set Architecture (ISA) needs to be changed. The benchmark simulations demonstrate less than 1% of performance penalty on the processor, and an effective software protection from the attacks.
Keywords: field programmable gate arrays; formal verification; security of data; trusted computing; FPGA; buffer overflow; code reuse; control flow checking; dynamic TPM design; instruction set architecture; processor pipeline; reconfigurable dynamic trusted platform module; runtime attacks; stack smashing; Benchmark testing; Computer architecture; Field programmable gate arrays; Pipelines; Runtime; Security; Software; Control Flow Checking; Dynamic TPM; Reconfigurable Architecture; Runtime Security (ID#: 15-3714)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903354&isnumber=6903314
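The core of the control-flow checking the paper integrates at the commit stage can be pictured as checking each runtime branch against a whitelist of legal (source, destination) edges extracted offline from the program; a return address corrupted by a buffer overflow produces an edge outside the set. The sketch below is a toy model of that idea, not the authors' FPGA design, and the addresses are invented.

```python
# Whitelist of legal control-flow edges (illustrative addresses).
LEGAL_EDGES = {
    (0x400010, 0x4000A0),  # call: main -> helper
    (0x4000B4, 0x400014),  # return: helper -> main
}

def check_transfer(src, dst, legal_edges=LEGAL_EDGES):
    """Return True if the runtime control transfer is permitted."""
    return (src, dst) in legal_edges

assert check_transfer(0x400010, 0x4000A0)          # legitimate call
assert not check_transfer(0x4000B4, 0x41414141)    # smashed return address
```

In hardware, the same membership test is done on committed branch instructions, which is why neither the source code nor the ISA needs to change.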
Oberle, A.; Larbig, P.; Kuntze, N.; Rudolph, C., "Integrity based relationships and trustworthy communication between network participants," Communications (ICC), 2014 IEEE International Conference on, pp.610,615, 10-14 June 2014. doi: 10.1109/ICC.2014.6883386 Establishing trust relationships between network participants by having them prove their operating system's integrity via a Trusted Platform Module (TPM) provides interesting approaches for securing local networks at a higher level. In the introduced approach on OSI layer 2, attacks carried out by already authenticated and participating nodes (insider threats) can be detected and prevented. Forbidden activities and manipulations in hard- and software, such as executing unknown binaries, loading additional kernel modules or even inserting unauthorized USB devices, are detected and result in an autonomous reaction of each network participant. The provided trust establishment and authentication protocol operates independently from upper protocol layers and is optimized for resource constrained machines. Well known concepts of backbone architectures can maintain the chain of trust between different kinds of network types. Each endpoint, forwarding and processing unit monitors the internal network independently and reports misbehaviors autonomously to a central instance in or outside of the trusted network.
Keywords: computer network security; cryptographic protocols; trusted computing; OSI layer 2; authenticated node; authentication protocol; insider threat; integrity based relationship; network participants; operating system integrity; participating node; trust establishment; trusted platform module; trustworthy communication; Authentication; Encryption; Payloads; Protocols; Servers; Unicast; Cyber-physical systems; Security; authentication; industrial networks; integrity; protocol design; trust (ID#: 15-3715)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883386&isnumber=6883277
Abd Aziz, N.; Udzir, N.I.; Mahmod, R., "Performance Analysis For Extended TLS With Mutual Attestation For Platform Integrity Assurance," Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2014 IEEE 4th Annual International Conference on, pp.13,18, 4-7 June 2014. doi: 10.1109/CYBER.2014.6917428 A web service is a web-based application connected via the internet. Common web-based applications are deployed using web browsers and web servers. However, the security of Web Services is a major concern, since it was not widely studied and integrated into the design stage of the Web Service standard; security solutions are add-on modules rather than well-defined parts of the standards. Various web services security solutions have therefore been defined in order to protect interactions over a network. Remote attestation is an authentication technique proposed by the Trusted Computing Group (TCG) which enables the verification of the trusted environment of platforms and assures that the information is accurate. To incorporate this method into the web services framework in order to guarantee the trustworthiness and security of web-based applications, a new framework called TrustWeb is proposed. The TrustWeb framework integrates remote attestation into the SSL/TLS protocol to provide integrity information of the involved endpoint platforms. The framework enhances the TLS protocol with a mutual attestation mechanism which can help to address the weaknesses of transferring sensitive computations, and offers a practical way to solve the remote trust issue in the client-server environment. In this paper, we describe the work of designing and building a framework prototype in which the attestation mechanism is integrated into the Mozilla Firefox browser and Apache web server. We also present a framework solution to show improvement in the efficiency level.
Keywords: Web services; protocols; trusted computing; Apache Web server; Internet connectivity; Mozilla Firefox browser; SSL-TLS protocol; Web browsers; Web servers; Web service security; Web-based application; client-server environment; endpoint platforms; extended TLS; mutual attestation mechanism; platform integrity assurance; remote attestation; trusted computing group; trustworthiness; Browsers; Principal component analysis; Protocols; Security; Web servers (ID#: 15-3716)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6917428&isnumber=6917419
Chen Chen, Himanshu Raj, Stefan Saroiu, Alec Wolman, "cTPM: A Cloud TPM for Cross-Device Trusted Applications," NSDI'14: Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, April 2014, Pages 187-201. (no doi given) Current Trusted Platform Modules (TPMs) are ill-suited for cross-device scenarios in trusted mobile applications because they hinder the seamless sharing of data across multiple devices. This paper presents cTPM, an extension of the TPM's design that adds an additional root key to the TPM and shares that root key with the cloud. As a result, the cloud can create and share TPM-protected keys and data across multiple devices owned by one user. Further, the additional key lets the cTPM allocate cloud-backed remote storage so that each TPM can benefit from a trusted real-time clock and high-performance, non-volatile storage. This paper shows that cTPM is practical, versatile, and easily applicable to trusted mobile applications. Our simple change to the TPM specification is viable because its fundamental concepts - a primary root key and off-chip, NV storage - are already found in the current specification, TPM 2.0. By avoiding a clean-slate redesign, we sidestep the difficult challenge of re-verifying the security properties of a new TPM design. We demonstrate cTPM's versatility with two case studies: extending Pasture with additional functionality, and reimplementing TrInc without the need for extra hardware.
Keywords: (not provided) (ID#: 15-3717)
URL: http://dl.acm.org/citation.cfm?id=2616448.2616466
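The key idea in cTPM, a root key shared between the cloud and each TPM from which both sides independently derive the same protected keys, can be sketched as follows. The derivation shown is a generic HMAC-based KDF, not the TPM 2.0 key-derivation function, and the key material and context strings are invented for illustration.

```python
import hashlib
import hmac

def derive_device_key(root_key: bytes, user: str, device: str) -> bytes:
    """Derive a per-user, per-device key from the shared root key."""
    context = f"{user}/{device}".encode()
    return hmac.new(root_key, context, hashlib.sha256).digest()

root = b"root-key-shared-by-cloud-and-tpm"   # illustrative key material
k_device = derive_device_key(root, "alice", "phone")   # TPM side
k_cloud = derive_device_key(root, "alice", "phone")    # cloud side
assert k_device == k_cloud                              # both sides agree
assert k_device != derive_device_key(root, "alice", "tablet")
```

Because the cloud holds the same root, it can seal data under a key that any of the user's devices can re-derive, which is what enables the cross-device sharing the abstract describes.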
Vijay Varadharajan, Udaya Tupakula, Counteracting Security Attacks In Virtual Machines In The Cloud Using Property Based Attestation, Journal of Network and Computer Applications, Volume 40, April, 2014, Pages 31-45. doi: 10.1016/j.jnca.2013.08.002 Cloud computing technologies are receiving a great deal of attention. Furthermore most of the hardware devices such as the PCs and mobile phones are increasingly having a trusted component called Trusted Platform Module embedded in them, which helps to measure the state of the platform and hence reason about its trust. Recently attestation techniques such as binary attestation and property based attestation techniques have been proposed based on the TPM. In this paper, we propose a novel trust enhanced security model for cloud services that helps to detect and prevent security attacks in cloud infrastructures using trusted attestation techniques. We consider a cloud architecture where different services are hosted on virtualized systems on the cloud by multiple cloud customers (multi-tenants). We consider attacker model and various attack scenarios for such hosted services in the cloud. Our trust enhanced security model enables the cloud service provider to certify certain security properties of the tenant virtual machines and services running on them. These properties are then used to detect and minimise attacks between the cloud tenants running virtual machines on the infrastructure and its customers as well as increase the assurance of the tenant virtual machine transactions. If there is a variation in the behaviour of the tenant virtual machine from the certified properties, the model allows us to dynamically isolate the tenant virtual machine or even terminate the malicious services on a fine granular basis. The paper describes the design and implementation of the proposed model and discusses how it deals with the different attack scenarios.
We also show that our model is beneficial for the cloud service providers, cloud customers running tenant virtual machines as well as the customers using the services provided by these tenant virtual machines.
Keywords: Cloud, Malware, Rootkits, TPM attestation, Trusted computing, Virtual machine monitors, Zero day attacks (ID#: 15-3718)
URL: http://www.sciencedirect.com/science/article/pii/S1084804513001768
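The decision logic at the heart of property-based attestation as the abstract describes it (compare a VM's reported properties against the certified set, isolate on deviation) can be sketched in a few lines. The property names and values below are invented for illustration; the paper's actual certified properties and enforcement mechanism are more elaborate.

```python
# Illustrative certified property set for a tenant VM.
CERTIFIED = {
    "kernel": "v3.10-signed",
    "selinux": "enforcing",
    "rootkit_scan": "clean",
}

def attest(reported: dict, certified: dict = CERTIFIED) -> str:
    """Compare reported VM properties against the certified set."""
    for prop, expected in certified.items():
        if reported.get(prop) != expected:
            return f"isolate: property '{prop}' deviates"
    return "trusted"

ok = attest({"kernel": "v3.10-signed", "selinux": "enforcing",
             "rootkit_scan": "clean"})
bad = attest({"kernel": "v3.10-signed", "selinux": "permissive",
              "rootkit_scan": "clean"})
```

The "fine granular basis" in the abstract corresponds to reacting per deviating property rather than tearing down every VM on any alarm.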
Y. Seifi, S. Suriadi, E. Foo, C. Boyd, Security Properties Analysis In A TPM-Based Protocol, International Journal of Security and Networks, Volume 9, Issue 2, April 2014, Pages 85-103. doi: 10.1504/IJSN.2014.060742 Security protocols are designed in order to provide security property goals. They achieve their goals using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used in order to verify whether a security protocol achieves its goals or not. The properties analysed by special-purpose tools are predefined ones such as secrecy (confidentiality), authentication, or non-repudiation. There are also security goals that are defined by the user in systems with security requirements. Analysis of these properties is possible with general-purpose analysis tools such as coloured Petri nets (CPN). This research analyses two security properties defined in a protocol based on the trusted platform module (TPM). The analysed protocol, proposed by Delaune, uses TPM capabilities and secrets in order to open only one secret from two submitted secrets to a recipient.
Keywords: (not provided) (ID#: 15-3719)
URL: http://www.inderscience.com/offer.php?id=60742
Danan Thilakanathan, Shiping Chen, Surya Nepal, Rafael A. Calvo, Dongxi Liu, John Zic, CLOUD '14 Proceedings of the 2014 IEEE International Conference on Cloud Computing, June 2014, Pages 224-231. doi: 10.1109/CLOUD.2014.39 The trend towards Cloud computing infrastructure has increased the need for new methods that allow data owners to share their data with others securely, taking into account the needs of multiple stakeholders. The data owner should be able to share confidential data while delegating much of the burden of access control management to the Cloud and trusted enterprises. The lack of such methods to enhance privacy and security may hinder the growth of cloud computing. In particular, there is a growing need to better manage security keys of data shared in the Cloud. BYOD provides a first step to enabling secure and efficient key management; however, the data owner cannot guarantee that the data consumer's device itself is secure. Furthermore, in current methods the data owner cannot revoke a particular data consumer or group efficiently. In this paper, we address these issues by incorporating a hardware-based Trusted Platform Module (TPM) mechanism called the Trusted Extension Device (TED) together with our security model and protocol to allow stronger privacy of data compared to software-based security protocols. We demonstrate the concept of using TED for stronger protection and management of cryptographic keys and how our secure data sharing protocol will allow a data owner (e.g., author) to securely store data via untrusted Cloud services. Our work prevents keys from being stolen by outsiders and/or dishonest authorized consumers, thus making it particularly attractive to be implemented in a real-world scenario.
Keywords: Cloud Computing, Security, Privacy, Data sharing, Access control, TPM, BYOD, Key management (ID#: 15-3720)
URL: http://dx.doi.org/10.1109/CLOUD.2014.39
Rommel García, Ignacio Algredo-Badillo, Miguel Morales-Sandoval, Claudia Feregrino-Uribe, René Cumplido, A Compact FPGA-Based Processor for the Secure Hash Algorithm SHA-256, Computers and Electrical Engineering, Volume 40 Issue 1, January, 2014, Pages 194-202. doi: 10.1016/j.compeleceng.2013.11.014 This work reports an efficient and compact FPGA processor for the SHA-256 algorithm. The novel processor architecture is based on a custom datapath that exploits the reusing of modules, having as main component a 4-input Arithmetic-Logic Unit not previously reported. This ALU is designed as a result of studying the type of operations in the SHA algorithm, their execution sequence and the associated dataflow. The processor hardware architecture was modeled in VHDL and implemented in FPGAs. The results obtained from the implementation in a Virtex5 device demonstrate that the proposed design uses fewer resources achieving higher performance and efficiency, outperforming previous approaches in the literature focused on compact designs, saving around 60% FPGA slices with an increased throughput (Mbps) and efficiency (Mbps/Slice). The proposed SHA processor is well suited for applications like Wi-Fi, TMP (Trusted Mobile Platform), and MTM (Mobile Trusted Module), where the data transfer speed is around 50 Mbps.
Keywords: (not provided) (ID#: 15-3721)
URL: http://www.sciencedirect.com/science/article/pii/S0045790613002966
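The operations the paper's custom 4-input ALU reuses come from the SHA-256 compression function; the round functions themselves are fixed by FIPS 180-4 and are shown here in Python for reference (the hardware design in the paper is of course VHDL, not this code).

```python
MASK = 0xFFFFFFFF  # SHA-256 works on 32-bit words

def rotr(x, n):
    """Rotate a 32-bit word right by n bits."""
    return ((x >> n) | (x << (32 - n))) & MASK

def ch(x, y, z):
    """Choose: bits of y where x is 1, bits of z where x is 0."""
    return (x & y) ^ (~x & z)

def maj(x, y, z):
    """Majority of the three input bits at each position."""
    return (x & y) ^ (x & z) ^ (y & z)

def big_sigma0(x):
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)

def big_sigma1(x):
    return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)

assert ch(0xFFFFFFFF, 0xAAAAAAAA, 0x55555555) == 0xAAAAAAAA
```

Each compression round combines these with two modular additions of four operands, which is the dataflow pattern that motivates a 4-input ALU.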
Bryan Jeffery Parno, Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers, (book) Association for Computing Machinery and Morgan & Claypool New York, NY, June 2014. ISBN = 978-1-62705-477-5 As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981]. In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters.
URL: http://dl.acm.org/citation.cfm?id=2611399
Akshay Dua, Nirupama Bulusu, Wu-Chang Feng, Wen Hu, Combating Software and Sybil Attacks to Data Integrity in Crowd-Sourced Embedded Systems, ACM Transactions on Embedded Computing Systems (TECS), Volume 13 Issue 5s, September 2014, Article No. 154. doi: 10.1145/2629338 Crowd-sourced mobile embedded systems allow people to contribute sensor data, for critical applications, including transportation, emergency response and eHealth. Data integrity becomes imperative as malicious participants can launch software and Sybil attacks modifying the sensing platform and data. To address these attacks, we develop (1) a Trusted Sensing Peripheral (TSP) enabling collection of high-integrity raw or aggregated data, and participation in applications requiring additional modalities; and (2) a Secure Tasking and Aggregation Protocol (STAP) enabling aggregation of TSP trusted readings by untrusted intermediaries, while efficiently detecting fabricators. Evaluations demonstrate that TSP and STAP are practical and energy-efficient.
Keywords: Trust, critical systems, crowd-sourced sensing, data integrity, embedded systems, mobile computing, security (ID#: 15-3723)
URL: http://dl.acm.org/citation.cfm?doid=2660459.2629338
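The end-to-end integrity idea behind TSP/STAP, where a trusted peripheral authenticates each reading so an untrusted aggregator can batch but not fabricate data, can be sketched with a plain MAC. The wire format and key below are invented simplifications, not the STAP protocol itself.

```python
import hashlib
import hmac
import json

KEY = b"tsp-app-shared-key"  # illustrative shared secret

def sign_reading(reading: dict) -> dict:
    """Trusted peripheral side: attach a MAC to one sensor reading."""
    payload = json.dumps(reading, sort_keys=True).encode()
    return {"reading": reading,
            "mac": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def verify_reading(signed: dict) -> bool:
    """Application server side: detect tampering by the intermediary."""
    payload = json.dumps(signed["reading"], sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["mac"])

msg = sign_reading({"sensor": "gps", "lat": 45.51, "lon": -122.68})
assert verify_reading(msg)
msg["reading"]["lat"] = 0.0        # aggregator tampers with the value
assert not verify_reading(msg)
```

STAP additionally lets the server verify aggregates of many such readings efficiently; the sketch only shows the per-reading integrity check.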
Video Surveillance
Video surveillance is a fast-growing area of public security. With it have come policy issues related to privacy. Technical issues and opportunities have also arisen, including the potential to use advanced methods for positive identification, detection of abnormal behavior in crowds, intruder detection, and information fusion with other data. The research presented here came from multiple conferences and publications and was presented or published in 2014.
Xiaochun Cao; Na Liu; Ling Du; Chao Li, "Preserving Privacy For Video Surveillance Via Visual Cryptography," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp.607,610, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889315 The video surveillance widely installed in public areas poses a significant threat to privacy. This paper proposes a new privacy preserving method via the Generalized Random-Grid based Visual Cryptography Scheme (GRG-based VCS). We first separate the foreground from the background for each video frame. These foreground pixels contain the most important information that needs to be protected. Every foreground area is encrypted into two shares based on GRG-based VCS. One share is taken as the foreground, and the other one is embedded into another frame with random selection. The content of the foreground can only be recovered when these two shares are brought together. The performance evaluation on several surveillance scenarios demonstrates that our proposed method can effectively protect sensitive privacy information in surveillance videos.
Keywords: cryptography; data protection; video surveillance; GRG-based VCS; foreground pixels; generalized random-grid based visual cryptography scheme; performance evaluation; random selection; sensitive privacy information preservation method; video frame; video surveillance; Cameras; Cryptography; PSNR; Privacy; Video surveillance; Visualization; Random-Grid; Video surveillance; privacy protection; visual cryptography (ID#: 15-3584)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889315&isnumber=6889177
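The random-grid primitive underlying the paper's GRG-based scheme can be shown on a tiny binary image (1 = black): share 1 is pure noise; share 2 copies it for white secret pixels and inverts it for black ones, so stacking the shares (pixelwise OR) turns every black secret pixel fully black while white pixels remain roughly half-black noise. This is a sketch of the classic (2,2) random-grid construction, not the authors' generalized variant.

```python
import random

def encrypt(secret):
    """Split a binary image into two random-grid shares."""
    share1 = [[random.randint(0, 1) for _ in row] for row in secret]
    share2 = [[s ^ p for s, p in zip(r1, rs)]       # invert where secret is black
              for r1, rs in zip(share1, secret)]
    return share1, share2

def stack(share1, share2):
    """Superimpose the shares: pixelwise OR, as with printed transparencies."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]

secret = [[1, 0, 1],
          [0, 1, 0]]
s1, s2 = encrypt(secret)
recovered = stack(s1, s2)
# every black secret pixel is recovered as solid black
assert all(recovered[i][j] == 1
           for i in range(2) for j in range(3) if secret[i][j] == 1)
```

Each share alone is uniformly random noise, which is what lets one share travel as the visible "foreground" while the other hides in a different frame.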
Yoohwan Kim; Juyeon Jo; Shrestha, S., "A Server-Based Real-Time Privacy Protection Scheme Against Video Surveillance By Unmanned Aerial Systems," Unmanned Aircraft Systems (ICUAS), 2014 International Conference on, pp.684,691, 27-30 May 2014. doi: 10.1109/ICUAS.2014.6842313 Unmanned Aerial Systems (UAS) have raised a great concern on privacy recently. A practical method to protect privacy is needed for adopting UAS in civilian airspace. This paper examines the privacy policies, filtering strategies, existing techniques, then proposes a novel method based on the encrypted video stream and the cloud-based privacy servers. In this scheme, all video surveillance images are initially encrypted, then delivered to a privacy server. The privacy server decrypts the video using the shared key with the camera, and filters the image according to the privacy policy specified for the surveyed region. The sanitized video is delivered to the surveillance operator or anyone on the Internet who is authorized. In a larger system composed of multiple cameras and multiple privacy servers, the keys can be distributed using Kerberos protocol. With this method the privacy policy can be changed on demand in real-time and there is no need for a costly on-board processing unit. By utilizing the cloud-based servers, advanced image processing algorithms and new filtering algorithms can be applied immediately without upgrading the camera software. This method is cost-efficient and promotes video sharing among multiple subscribers, thus it can spur wide adoption.
Keywords: Internet; data privacy; video coding; video surveillance; Internet; Kerberos protocol; UAS; camera software; civilian airspace; cloud-based privacy servers; cloud-based servers; encrypted video stream; filtering algorithms; filtering strategies; image processing algorithms; multiple privacy servers; on-board processing unit; privacy policy; sanitized video; server-based real-time privacy protection scheme; surveillance operator; unmanned aerial systems; video sharing; video surveillance images; Cameras; Cryptography; Filtering; Privacy; Servers; Streaming media; Surveillance; Key Distribution; Privacy; Unmanned Aerial Systems; Video Surveillance (ID#: 15-3585)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842313&isnumber=6842225
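The scheme's data path, where the camera encrypts each frame under a key shared with the privacy server, which decrypts, filters, and re-serves it, can be sketched with a symmetric cipher. A real deployment would use something like AES-GCM with Kerberos-distributed keys, as the paper suggests; the SHA-256-based keystream XOR below is only a standard-library stand-in, and the key/nonce values are invented.

```python
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy CTR-style cipher: XOR data with a SHA-256-derived keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        stream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key, nonce = b"camera-server-shared-key", b"frame-000001"
frame = b"raw surveillance frame bytes ..."
ciphertext = keystream_xor(key, nonce, frame)   # camera side
assert ciphertext != frame
plaintext = keystream_xor(key, nonce, ciphertext)  # privacy-server side
assert plaintext == frame
```

Because the same operation encrypts and decrypts, the privacy server only needs the per-camera key (obtainable via Kerberos) to recover, sanitize, and re-distribute the stream.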
Hassan, M.M.; Hossain, M.A.; Al-Qurishi, M., "Cloud-based Mobile IPTV Terminal for Video Surveillance," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.876, 880, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779086 Surveillance video streams monitoring is an important task that the surveillance operators usually carry out. The distribution of video surveillance facilities over multiple premises and the mobility of surveillance users requires that they are able to view surveillance video seamlessly from their mobile devices. In order to satisfy this requirement, we propose a cloud-based IPTV (Internet Protocol Television) solution that leverages the power of cloud infrastructure and the benefits of IPTV technology to seamlessly deliver surveillance video content on different client devices anytime and anywhere. The proposed mechanism also supports user-controlled frame rate adjustment of video streams and sharing of these streams with other users. In this paper, we describe the overall approach of this idea, address and identify key technical challenges for its practical implementation. In addition, initial experimental results were presented to justify the viability of the proposed cloud-based IPTV surveillance framework over the traditional IPTV surveillance approach.
Keywords: IPTV; cloud computing; mobile television; video surveillance; Internet protocol television; cloud-based mobile IPTV terminal; mobile devices; surveillance operators; surveillance video streams monitoring; video surveillance facilities distribution; Cameras; IPTV; Mobile communication; Servers; Streaming media; Video surveillance; IPTV; Video surveillance; cloud computing; mobile terminal (ID#: 15-3586)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779086&isnumber=6778899
Gorur, P.; Amrutur, B., "Skip Decision and Reference Frame Selection for Low-Complexity H.264/AVC Surveillance Video Coding," Circuits and Systems for Video Technology, IEEE Transactions on, vol.24, no.7, pp.1156,1169, July 2014. doi: 10.1109/TCSVT.2014.2319611 H.264/advanced video coding surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static camera surveillance video encoders without affecting detection and recognition performance. A spatial sampler is proposed to sample pixels that are segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating point computations. A storage pattern of the parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. The second contribution is a low computational cost algorithm to choose the reference frames. The proposed reference frame selection algorithm reduces the cost of coding uncovered background regions. We also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit rate savings of up to 94.5% over methods proposed in literature on video surveillance data sets. The proposed techniques also provide up to 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence.
Keywords: Gaussian processes; cameras; data compression; distortion; motion compensation; video codecs; video coding; video surveillance; Gaussian mixture model; H.264/advanced video coding surveillance video encoders; bit rate savings; coding uncovered background regions; compression complexity; detection performance; distortion; floating point computations; foreground pixels; low-complexity H.264/AVC surveillance video coding; mixture model; motion-compensated prediction; multiple frames; recognition performance; reference frame selection; reference frame selection algorithm; skip decision; static camera surveillance video encoders; video sequence; video surveillance data sets; Cameras; Encoding; Motion detection; Motion segmentation; Streaming media; Surveillance; Video coding; Cache optimization; H.264/advanced video coding (AVC); motion detection; reference frame selection; skip decision; video surveillance (ID#: 15-3587)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805578&isnumber=6846390
Xianguo Zhang; Tiejun Huang; Yonghong Tian; Wen Gao, "Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding," Image Processing, IEEE Transactions on, vol.23, no.2, pp.769,784, Feb. 2014. doi: 10.1109/TIP.2013.2294549 The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. 
Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
Keywords: data compression; video coding; video surveillance; AVC; BDP; BMAP method; BRP; MPEG-4 advanced video coding; background difference prediction; background pixels; background prediction efficiency; background reference prediction; background-modeling-based adaptive prediction method; encoding complexity; exponential growth; foreground coding performance; foreground prediction efficiency; foreground-background-hybrid blocks; high-efficiency surveillance video coding technology; surveillance video compression ratio; Complexity theory; Decoding; Encoding; Image coding; Object oriented modeling; Surveillance; video coding; Surveillance video; background difference; background modeling; background reference; block classification (ID#: 15-3588)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6680670&isnumber=6685907
Chun-Rong Huang; Chung, P.-C.J.; Di-Kai Yang; Hsing-Cheng Chen; Guan-Jie Huang, "Maximum a Posteriori Probability Estimation for Online Surveillance Video Synopsis," Circuits and Systems for Video Technology, IEEE Transactions on, vol.24, no.8, pp.1417,1429, Aug. 2014. doi: 10.1109/TCSVT.2014.2308603 To reduce human efforts in browsing long surveillance videos, synopsis videos are proposed. Traditional synopsis video generation applying optimization on video tubes is very time consuming and infeasible for real-time online generation. This dilemma significantly reduces the feasibility of synopsis video generation in practical situations. To solve this problem, the synopsis video generation problem is formulated as a maximum a posteriori probability (MAP) estimation problem in this paper, where the positions and appearing frames of video objects are chronologically rearranged in real time without the need to know their complete trajectories. Moreover, a synopsis table is employed with MAP estimation to decide the temporal locations of the incoming foreground objects in the synopsis video without needing an optimization procedure. As a result, the computational complexity of the proposed video synopsis generation method can be significantly reduced. Furthermore, as it does not require prescreening the entire video, this approach can be applied on online streaming videos.
Keywords: maximum likelihood estimation; video signal processing; video streaming; video surveillance; MAP estimation problem; computational complexity reduction; human effort reduction; long surveillance video browsing; maximum-a-posteriori probability estimation problem; online streaming videos; online surveillance video synopsis; synopsis table; synopsis video generation problem; video summarization; video tubes; Estimation; Indexes; Optimization; Predictive models; Real-time systems; Streaming media; Surveillance; Maximum a posteriori (MAP) estimation; video summarization; video surveillance; video synopsis (ID#: 15-3589)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748870&isnumber=6869080
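The "synopsis table" idea in the abstract above — deciding the temporal location of each incoming object without a global optimization pass — can be illustrated with a toy greedy placement. This is a hypothetical sketch, not the authors' MAP formulation: each object "tube" is simply shifted to the earliest synopsis start frame where its spatial footprint does not collide with tubes already placed.

```python
# Hypothetical sketch of synopsis-table-style greedy placement: each object
# "tube" of a given length is shifted to the earliest start frame of the
# synopsis video where its spatial footprint is free of collisions.

def place_tube(table, footprint, length, synopsis_len):
    """table[t] is the set of occupied grid cells at synopsis frame t."""
    for start in range(synopsis_len - length + 1):
        if all(table[start + t].isdisjoint(footprint) for t in range(length)):
            for t in range(length):          # reserve the chosen slot
                table[start + t] |= footprint
            return start                     # temporal location decided online
    return None                              # no room in the synopsis yet

synopsis_len = 10
table = [set() for _ in range(synopsis_len)]
a = place_tube(table, {(1, 1), (1, 2)}, 4, synopsis_len)  # first tube: frame 0
b = place_tube(table, {(1, 2), (1, 3)}, 4, synopsis_len)  # collides, slides to 4
print(a, b)
```

Because placement is a constant-time table lookup per candidate slot rather than a joint optimization over all tubes, this style of decision can run as objects arrive, which is the property the paper exploits for online streaming video.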
Hong Jiang; Songqing Zhao; Zuowei Shen; Wei Deng; Wilford, P.A.; Haimi-Cohen, R., "Surveillance Video Analysis Using Compressive Sensing With Low Latency," Bell Labs Technical Journal , vol.18, no.4, pp. 63, 74, March 2014. doi: 10.1002/bltj.21646 We propose a method for analysis of surveillance video by using low rank and sparse decomposition (LRSD) with low latency combined with compressive sensing to segment the background and extract moving objects in a surveillance video. Video is acquired by compressive measurements, and the measurements are used to analyze the video by a low rank and sparse decomposition of a matrix. The low rank component represents the background, and the sparse component, which is obtained in a tight wavelet frame domain, is used to identify moving objects in the surveillance video. An important feature of the proposed low latency method is that the decomposition can be performed with a small number of video frames, which reduces latency in the reconstruction and makes it possible for real time processing of surveillance video. The low latency method is both justified theoretically and validated experimentally.
Keywords: compressed sensing; image motion analysis; image segmentation; video surveillance; wavelet transforms; LRSD; background segmentation; compressive sensing; low latency method; low rank and sparse decomposition; surveillance video analysis; video frames; wavelet frame domain; Matrix decomposition; Object recognition; Sparse decomposition; Sparse matrices; Streaming media; Surveillance; Video communication; Wavelet domain (ID#: 15-3590)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6770348&isnumber=6770341
Rasheed, N.; Khan, S.A.; Khalid, A., "Tracking and Abnormal Behavior Detection in Video Surveillance Using Optical Flow and Neural Networks," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 61, 66, 13-16 May 2014. doi: 10.1109/WAINA.2014.18 An abnormal behavior detection algorithm for surveillance must correctly identify targets as being in normal or chaotic movement. A model is developed here for this purpose. The uniqueness of this algorithm is the use of a foreground detection with Gaussian mixture (FGMM) model before passing the video frames to an optical flow model using the Lucas-Kanade approach. Information on the horizontal and vertical displacements and directions associated with each pixel of the object of interest is extracted. These features are then fed to a feed-forward neural network for classification and simulation. The study is conducted on real-time videos and some synthesized videos. The accuracy of the method has been calculated using the performance parameters for neural networks. In comparison with plain optical flow, this model obtains improved results without noise. Classes are correctly identified with an overall performance equal to 3.4e-02 and an error percentage of 2.5.
Keywords: Gaussian processes; feature selection; feedforward neural nets; image sequences; mixture models; object detection; video surveillance; FGMM model; Lucas-Kanade approach; abnormal behavior detection; chaotic movement; feed forward neural network; foreground detection with Gaussian mixture model; neural networks; normal movement; optical flow; real time videos; synthesized videos; targets identification; video frames; video surveillance; Adaptive optics; Computer vision; Image motion analysis; Neural networks; Optical computing; Optical imaging; Streaming media; Foreground Detection; Gaussian Mixture Models; Neural Network; Optical Flow; Video Surveillance (ID#: 15-3591)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844614&isnumber=6844560
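The normal-versus-chaotic decision described above can be illustrated with a toy sketch of the classification stage only. The paper feeds Lucas-Kanade flow features to a feed-forward neural network; here a simple circular-dispersion statistic on the flow directions stands in for that classifier, purely as an assumption for illustration.

```python
import math

# Toy stand-in for the classification stage: flow vectors whose directions
# are widely dispersed (circular variance near 1) suggest incoherent,
# "chaotic" motion; tightly aligned vectors suggest normal movement.

def is_chaotic(flow, threshold=0.5):
    """flow: list of (dx, dy) displacements for points of interest."""
    sx = sum(dx / math.hypot(dx, dy) for dx, dy in flow)
    sy = sum(dy / math.hypot(dx, dy) for dx, dy in flow)
    r = math.hypot(sx, sy) / len(flow)       # mean resultant length in [0, 1]
    return (1.0 - r) > threshold             # high dispersion -> chaotic

coherent = [(1.0, 0.1), (0.9, -0.1), (1.1, 0.0)]             # one direction
panic = [(1.0, 0.0), (-1.0, 0.2), (0.1, -1.0), (0.0, 1.0)]   # scattering
print(is_chaotic(coherent), is_chaotic(panic))               # False True
```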
Hammoud, R.I.; Sahin, C.S.; Blasch, E.P.; Rhodes, B.J., "Multi-source Multi-modal Activity Recognition in Aerial Video Surveillance," Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pp.237,244, 23-28 June 2014. doi: 10.1109/CVPRW.2014.44 Recognizing activities in wide aerial/overhead imagery remains a challenging problem due in part to low-resolution video and cluttered scenes with a large number of moving objects. In the context of this research, we deal with two un-synchronized data sources collected in real-world operating scenarios: full-motion videos (FMV) and analyst call-outs (ACO) in the form of chat messages (voice-to-text) made by a human watching the streamed FMV from an aerial platform. We present a multi-source multi-modal activity/event recognition system for surveillance applications, consisting of: (1) detecting and tracking multiple dynamic targets from a moving platform, (2) representing FMV target tracks and chat messages as graphs of attributes, (3) associating FMV tracks and chat messages using a probabilistic graph-based matching approach, and (4) detecting spatial-temporal activity boundaries. We also present an activity pattern learning framework which uses the multi-source associated data as training to index a large archive of FMV videos. Finally, we describe a multi-intelligence user interface for querying an index of activities of interest (AOIs) by movement type and geo-location, and for playing-back a summary of associated text (ACO) and activity video segments of targets-of-interest (TOIs) (in both pixel and geo-coordinates). Such tools help the end-user to quickly search, browse, and prepare mission reports from multi-source data.
Keywords: image matching; image motion analysis; image representation; indexing; learning (artificial intelligence); object detection; query processing; target tracking; user interfaces; video streaming; video surveillance; ACO; FMV streaming; FMV target track representation; FMV videos; activities of interest; activity pattern learning framework; activity video segments; aerial imagery; aerial video surveillance; analyst call-outs; associated text; full-motion video; geolocation; index query; multi-intelligence user interface; multiple dynamic target detection; multiple dynamic target tracking; multisource associated data; multisource multimodal activity recognition; multisource multimodal event recognition; overhead imagery; probabilistic graph-based matching approach; spatial-temporal activity boundary detection; targets-of-interest; unsynchronized data sources; voice-to-text chat messages; Pattern recognition; Radar tracking; Semantics; Streaming media; Target tracking; Vehicles; FMV exploitation; MINER; activity recognition; chat and video fusion; event recognition; fusion; graph matching; graph representation; surveillance (ID#: 15-3592)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909989&isnumber=6909944
Woon Cho; Abidi, M.A.; Kyungwon Jeong; Nahyun Kim; Seungwon Lee; Joonki Paik; Gwang-Gook Lee, "Object Retrieval Using Scene Normalized Human Model For Video Surveillance System," Consumer Electronics (ISCE 2014), The 18th IEEE International Symposium on, pp.1,2, 22-25 June 2014 doi: 10.1109/ISCE.2014.6884439 This paper presents a human model-based feature extraction method for a video surveillance retrieval system. The proposed method extracts, from a normalized scene, object features such as height, speed, and representative color using a simple human model based on multiple-ellipse. Experimental results show that the proposed system can effectively track moving routes of people such as a missing child, an absconder, and a suspect after events.
Keywords: feature extraction; image retrieval; object tracking; feature extraction; multiple ellipse human model; object retrieval; scene normalized human model; video surveillance retrieval system; Cameras; Databases; Feature extraction; Image color analysis; Shape; Video surveillance; human model; retrieval system; scene calibration; surveillance system (ID#: 15-3593)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884439&isnumber=6884278
Ma Juan; Hu Rongchun; Li Jian, "A Fast Human Detection Algorithm Of Video Surveillance In Emergencies," Control and Decision Conference (2014 CCDC), The 26th Chinese, vol., no., pp. 1500, 1504, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852404 This paper proposes a fast human detection algorithm for video surveillance in emergencies. First, through background subtraction based on a single Gaussian model and frame subtraction, we obtain the target mask, which is optimized by Gaussian filtering and dilation. Then the interest points of the head are obtained from figures with the target mask and edge detection. Finally, by detecting these points, we can track the head and count the number of people from the frequency of moving targets at the same place. Simulation results show that the algorithm can detect moving objects quickly and accurately.
Keywords: Gaussian processes; edge detection; object detection; video surveillance; Gaussian filter; background subtraction; dilation; edge detection; emergencies; frame subtraction; human detection algorithm; moving target; single Gaussian model; target mask; video surveillance; Conferences; Detection algorithms; Educational institutions; Electronic mail; Estimation; IEEE Computer Society; Image edge detection; background subtraction; edge tracking of head; frame subtraction; target mask (ID#: 15-3594)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852404&isnumber=6852105
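The single-Gaussian background subtraction step in the abstract above can be sketched in a few lines. This is a minimal illustration of the general technique, not the authors' implementation; the learning rate, threshold, and function names are assumptions, and the frame subtraction, Gaussian filtering, and dilation stages are omitted.

```python
# Minimal per-pixel single-Gaussian background model: a pixel is foreground
# when it falls outside k standard deviations of its background Gaussian;
# background pixels update the running mean and variance.

def update_and_mask(frame, mean, var, alpha=0.05, k=2.5):
    """frame/mean/var: flat lists of pixel intensities and model state.
    Returns the foreground mask and updates the model in place."""
    mask = []
    for i, x in enumerate(frame):
        d = x - mean[i]
        fg = d * d > (k * k) * var[i]        # outside k standard deviations?
        mask.append(1 if fg else 0)
        if not fg:                           # adapt on background pixels only
            mean[i] += alpha * d
            var[i] += alpha * (d * d - var[i])
    return mask

mean, var = [100.0] * 4, [25.0] * 4          # flat 2x2 "image", sigma = 5
mask = update_and_mask([101, 99, 200, 100], mean, var)
print(mask)                                  # third pixel jumps: foreground
```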
Harish, Palagati; Subhashini, R.; Priya, K., "Intruder Detection By Extracting Semantic Content From Surveillance Videos," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, pp. 1, 5, 6-8 March 2014. doi: 10.1109/ICGCCEE.2014.6922469 Surveillance cameras are in use everywhere, but the videos and images they capture are typically stored without being processed. Many methods have been proposed for tracking and detecting objects in videos, but what is needed is the meaningful content, called semantic content, of these videos. Human activity recognition is quite complex. The proposed method, called Semantic Content Extraction (SCE) from videos, is used to identify the objects and events present in a video. This model provides a useful methodology for intruder detection systems, capturing the behavior and activities performed by the intruder. Construction of an ontology enhances the spatial and temporal relations between the extracted objects or features. The proposed system thus provides an effective way of detecting intruders, thieves, and malpractices happening around us.
Keywords: Cameras; Feature extraction; Ontologies; Semantics; Video surveillance; Videos; Human activity recognition; Ontology; Semantic content; Spatial and Temporal Relations (ID#: 15-3595)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6922469&isnumber=6920919
Wang, S.; Orwell, J.; Hunter, G., "Evaluation of Bayesian and Dempster-Shafer Approaches to Fusion of Video Surveillance Information," Information Fusion (FUSION), 2014 17th International Conference on, pp. 1, 7, 7-10 July 2014. (no doi provided) This paper presents the application of fusion methods to a visual surveillance scenario. The range of relevant features for re-identifying vehicles is discussed, along with the methods for fusing the probabilistic estimates derived from these features. In particular, two statistical parametric fusion methods are considered: Bayesian networks and the Dempster-Shafer approach. The main contribution of this paper is the development of a metric to allow direct comparison of the benefits of the two methods. This is achieved by generalising the Kelly betting strategy to accommodate a variable total stake for each sample, subject to a fixed expected (mean) stake. This metric provides a method to quantify the extra information provided by the Dempster-Shafer method in comparison to a Bayesian fusion approach.
Keywords: Accuracy; Bayes methods; Color; Mathematical model; Shape; Uncertainty; Vehicles; Bayesian; Dempster-Shafer; evaluation; fusion; vehicle (ID#: 15-3596)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916172&isnumber=6915967
Yueguo Zhang; Lili Dong; Shenghong Li; Jianhua Li, "Abnormal Crowd Behavior Detection Using Interest Points," Broadband Multimedia Systems and Broadcasting (BMSB), 2014 IEEE International Symposium on, pp.1,4, 25-27 June 2014. doi: 10.1109/BMSB.2014.6873527 Abnormal crowd behavior detection is an important research issue in video processing and computer vision. In this paper we introduce a novel method to detect abnormal crowd behaviors in video surveillance based on interest points. A complex network-based algorithm is used to detect interest points and extract the global texture features in scenarios. The performance of the proposed method is evaluated on publicly available datasets. We present a detailed analysis of the characteristics of the crowd behavior in different density crowd scenes. The analysis of crowd behavior features and simulation results are also demonstrated to illustrate the effectiveness of our proposed method.
Keywords: behavioural sciences computing; complex networks; computer vision; feature extraction; image texture; object detection; video signal processing; video surveillance; abnormal crowd behavior detection; complex network-based algorithm; computer vision; crowd behavior feature analysis; global texture feature extraction; interest point detection; video processing; video surveillance; Broadband communication; Broadcasting; Complex networks; Computer vision; Feature extraction; Multimedia systems; Video surveillance; Crowd Behavior; Video Surveillance; Video processing (ID#: 15-3597)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873527&isnumber=6873457
Lu Wang; Yung, N.H.C.; Lisheng Xu, "Multiple-Human Tracking by Iterative Data Association and Detection Update," Intelligent Transportation Systems, IEEE Transactions on, vol. 15, no. 5, pp.1886,1899, Oct. 2014. doi: 10.1109/TITS.2014.2303196 Multiple-object tracking is an important task in automated video surveillance. In this paper, we present a multiple-human-tracking approach that takes the single-frame human detection results as input and associates them to form trajectories while improving the original detection results by making use of reliable temporal information in a closed-loop manner. It works by first forming tracklets, from which reliable temporal information is extracted, and then refining the detection responses inside the tracklets, which also improves the accuracy of tracklets' quantities. After this, local conservative tracklet association is performed and reliable temporal information is propagated across tracklets so that more detection responses can be refined. The global tracklet association is done last to resolve association ambiguities. Experimental results show that the proposed approach improves both the association and detection results. Comparison with several state-of-the-art approaches demonstrates the effectiveness of the proposed approach.
Keywords: feature extraction; intelligent transportation systems; iterative methods; object tracking; sensor fusion; video surveillance; automated video surveillance; detection responses; human detection results; intelligent transportation systems; iterative data association; multiple-human tracking; temporal information extraction; tracklet association; Accuracy; Computational modeling; Data mining; Reliability; Solid modeling; Tracking; Trajectory; Data association; detection update; multiple-human tracking; video surveillance (ID#: 15-3598)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6750747&isnumber=6910343
Shuai Yi; Xiaogang Wang, "Profiling Stationary Crowd Groups," Multimedia and Expo (ICME), 2014 IEEE International Conference on, pp. 1, 6, 14-18 July 2014. doi: 10.1109/ICME.2014.6890138 Detecting stationary crowd groups and analyzing their behaviors have important applications in crowd video surveillance, but have rarely been studied. The contributions of this paper are in two aspects. First, a stationary crowd detection algorithm is proposed to estimate the stationary time of foreground pixels. It employs spatial-temporal filtering and motion filtering in order to be robust to noise caused by occlusions and crowd clutters. Second, in order to characterize the emergence and dispersal processes of stationary crowds and their behaviors during the stationary periods, three attributes are proposed for quantitative analysis. These attributes are recognized with a set of proposed crowd descriptors which extract visual features from the results of stationary crowd detection. The effectiveness of the proposed algorithms is shown through experiments on a benchmark dataset.
Keywords: feature extraction; filtering theory; image motion analysis; object detection; video signal processing; video surveillance; crowd descriptors; crowd video surveillance; foreground pixel; motion filtering; quantitative analysis; spatial-temporal filtering; stationary crowd detection algorithm; stationary crowd group detection; stationary crowd groups profiling; visual feature extraction; Color; Estimation; Filtering; Indexes; Noise; Tracking; Trajectory; Stationary crowd detection; crowd video surveillance; stationary crowd analysis (ID#: 15-3599)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890138&isnumber=6890121
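The "stationary time of foreground pixels" estimate described above can be illustrated with a crude per-pixel counter. This is an assumption-laden stand-in for the paper's spatial-temporal and motion filtering: the counter grows while a pixel stays foreground and forgives short interruptions such as a passer-by briefly occluding a stationary person.

```python
# Illustrative per-pixel stationary-time counter: counts consecutive
# foreground frames for one pixel, tolerating gaps up to `tolerance`
# frames (a crude stand-in for the paper's temporal filtering).

def stationary_time(fg_history, tolerance=1):
    """fg_history: per-frame 0/1 foreground flags for one pixel."""
    count = gap = 0
    for fg in fg_history:
        if fg:
            count, gap = count + 1, 0
        else:
            gap += 1
            if gap > tolerance:              # real departure: reset counter
                count = 0
    return count

print(stationary_time([1, 1, 1, 0, 1, 1]))   # one-frame occlusion forgiven: 5
print(stationary_time([1, 1, 0, 0, 1]))      # two-frame gap resets: 1
```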
![]() |
Wireless Mesh Network Security |
With more than 70 protocols vying for preeminence over wireless mesh networks, the security problem is magnified. The research cited here was presented in 2014 and covers smart grid, specific protocols, 4-way handshaking, and fuzzy protocols.
Saavedra Benitez, Y.I.; Ben-Othman, J.; Claude, J.-P., "Performance Evaluation of Security Mechanisms in RAOLSR Protocol for Wireless Mesh Networks," Communications (ICC), 2014 IEEE International Conference on, pp. 1808, 1812, 10-14 June 2014. doi: 10.1109/ICC.2014.6883585 In this paper, we propose the IBE-RAOLSR and ECDSA-RAOLSR protocols for WMNs (Wireless Mesh Networks), which contribute to securing routing protocols. We have implemented the IBE (Identity Based Encryption) and ECDSA (Elliptic Curve Digital Signature Algorithm) methods to secure messages in RAOLSR (Radio Aware Optimized Link State Routing), namely TC (Topology Control) and Hello messages. We then compare the ECDSA-based RAOLSR with the IBE-based RAOLSR protocol. This study shows the great benefits of the IBE technique in securing the RAOLSR protocol for WMNs. Through extensive ns-3 (Network Simulator-3) simulations, results have shown that IBE-RAOLSR outperforms ECDSA-RAOLSR in terms of overhead and delay. Simulation results show that the IBE-based RAOLSR provides a greater level of security with light overhead.
Keywords: cryptography; routing protocols; telecommunication control; telecommunication network topology; wireless mesh networks; ECDSA-RAOLSR protocols; IBE-RAOLSR protocols; WMN; elliptic curve digital signature algorithm; hello messages; identity based encryption; network simulator-3 simulations; radio aware optimized link state routing; routing protocols; security mechanisms; topology control; wireless mesh networks; Delays; Digital signatures; IEEE 802.11 Standards; Routing; Routing protocols; IBE; Identity Based Encryption; Radio Aware Optimized Link State Routing; Routing Protocol; Security; Wireless Mesh Networks (ID#: 15-3695)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883585&isnumber=6883277
Tsado, Y.; Lund, D.; Gamage, K., "Resilient Wireless Communication Networking For Smart Grid BAN," Energy Conference (ENERGYCON), 2014 IEEE International, pp.846, 851, 13-16 May 2014. doi: 10.1109/ENERGYCON.2014.6850524 The concept of Smart grid technology sets greater demands for reliability and resilience on communications infrastructure. Wireless communication is a promising alternative for distribution level, Home Area Network (HAN), smart metering and even the backbone networks that connect smart grid applications to control centers. In this paper, the reliability and resilience of smart grid communication network is analyzed using the IEEE 802.11 communication technology in both infrastructure single hop and mesh multiple-hop topologies for smart meters in a Building Area Network (BAN). Performance of end to end delay and Round Trip Time (RTT) of an infrastructure mode smart meter network for Demand Response (DR) function is presented. Hybrid deployment of these network topologies is also suggested to provide resilience and redundancy in the network during network failure or when security of the network is circumvented. This recommendation can also be deployed in other areas of the grid where wireless technologies are used. DR communication from consumer premises is used to show the performance of an infrastructure mode smart metering network.
Keywords: home automation; home networks; redundancy; sensor placement; smart meters; smart power grids; telecommunication network reliability; telecommunication network topology; telecommunication security; wireless LAN; DR communication; IEEE 802.11 communication technology; RTT; backbone networks; building area network; control center; demand response function; distribution level; end to end delay; home area network; hybrid deployment; infrastructure mode smart meter network; infrastructure single hop topology; mesh multiple hop topology; network failure; network reliability; network security; redundancy; resilient wireless communication networking; round trip time; smart grid BAN; wireless technology; IEEE 802.11 Standards; Network topology; Resilience; Smart grids; Smart meters; Wireless communication; Infrastructure mode; Multi-hop mesh network; Resilience; Single-hop network (ID#: 15-3696)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850524&isnumber=6850389
Ghatak, Sumitro; Bose, Sagar; Roy, Siuli, "Intelligent Wall Mounted Wireless Fencing System Using Wireless Sensor Actuator Network," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1, 5, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921795 This paper presents the relative merits of IR and microwave sensor technology and their combination with a wireless camera for the development of a wall-mounted wireless intrusion detection system, and explains the phases by which intrusion information is collected and sent to the central control station over a wireless mesh network for analysis and processing. These days every protected zone faces numerous security threats, such as trespassing or damage to important equipment. Unwanted intrusion has become a growing problem, paving the way for newer technology that detects intrusion accurately. Almost all organizations protect their zones conventionally by constructing high walls, wire fencing, or power fencing, or by employing guards for manual observation. For large areas, manually observing the perimeter is not a viable option. To solve this type of problem, we have developed a wall-mounted wireless fencing system. This project studied how the different units could be made to collaborate and how the data collected from them could be further processed with purpose-built software. The intrusion detection system constitutes an important field of application for IR- and microwave-based wireless sensor networks.
A state-of-the-art wall-mounted wireless intrusion detection system will detect intrusion automatically through a multi-level detection mechanism (IR, microwave, active RFID and camera) and will generate multi-level alerts (buzzer, images, segment illumination, SMS, e-mail) to notify security officers and owners, and also illuminate the particular segment where the intrusion has happened. This system enables the authority to quickly handle the emergency by identifying the area of the incident at once and taking action quickly. IR-based perimeter protection is a proven technology. However, an IR-based intrusion detection system is not a foolproof solution, since IR may fail in foggy or dusty weather conditions and hence may generate false alarms. We therefore amalgamate this technology with microwave-based intrusion detection, which can work satisfactorily in foggy weather. Another significant element of our proposed system is camera-based intrusion detection. Some industries require this feature to capture snapshots of the affected location instantly as the intrusion happens. The intrusion information data are transmitted wirelessly to the control station via multi-hop routing (using active RFID or the IEEE 802.15.4 protocol). The control station receives intrusion information in real time and analyzes the data with the help of the intrusion software. It then sends an SMS to the predefined numbers of the respective authority through a GSM modem attached to the control station engine.
Keywords: Communication system security; Intrusion detection; Monitoring; Software; Wireless communication; Wireless sensor networks; IEEE 802.15.4; IR Transceiver Module; Wireless Sensor Network (ID#: 15-3697)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921795&isnumber=6921705
Junguo Liao; Mingyan Wang, "A New Dynamic Updating Key Strategy Based On EMSA In Wireless Mesh Networks," Information and Communications Technologies (ICT 2014), 2014 International Conference on, pp.1,5, 15-17 May 2014. doi: 10.1049/cp.2014.0635 In the security protocols of Efficient Mesh Security Association (EMSA), the key updating strategy is an effective method to ensure the security of communication. In the existing strategy of periodic automatic key updating, the PTK (Pairwise Transit Key) is regenerated each time through the complex 4-way handshake. The more frequently the PTK is updated, the greater the impact on network throughput and delay. On this basis, we propose a new dynamic key updating strategy to ensure both the safety and the performance of wireless mesh networks. In the new strategy, the mesh point (MP) and mesh authenticator (MA) negotiate a random function at initial authentication, and use the PTK generated by the 4-way handshake as the initial seed. When the PTK updating cycle arrives, both sides generate the new keys using the random function, without having to generate a new PTK through the complex 4-way handshake. Performance analysis against existing strategies shows that the dynamic key updating strategy proposed in this paper yields a larger improvement in network delay and throughput.
Keywords: EMSA; MESH network; key update; security protocol (ID#: 15-3698)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913688&isnumber=6913610
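The seed-based derivation described in the abstract above can be sketched in a few lines: both sides treat the handshake-produced PTK as a shared seed and apply an agreed keyed function each update cycle, so the keys stay synchronized without further handshakes. The sketch below uses HMAC-SHA256 as that function; the label string and epoch counter are illustrative assumptions, not details from the paper.

```python
import hashlib
import hmac

def derive_next_ptk(current_ptk: bytes, epoch: int) -> bytes:
    """Derive the next PTK from the current one without a new 4-way handshake.
    Both MP and MA run this independently at each update cycle (epoch)."""
    # HMAC-SHA256 keyed with the current PTK over a label and an epoch counter.
    msg = b"ptk-update" + epoch.to_bytes(4, "big")
    return hmac.new(current_ptk, msg, hashlib.sha256).digest()

# Both sides start from the PTK produced by the initial 4-way handshake.
initial_ptk = hashlib.sha256(b"seed-from-4-way-handshake").digest()

mp_key = ma_key = initial_ptk
for epoch in range(1, 4):                     # three update cycles
    mp_key = derive_next_ptk(mp_key, epoch)   # mesh point side
    ma_key = derive_next_ptk(ma_key, epoch)   # mesh authenticator side
    assert mp_key == ma_key                   # keys stay synchronized, no handshake
```

Because each new key is derived from the previous one, an attacker who misses one epoch cannot reconstruct later keys without the seed, which is the property the paper's strategy relies on.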
Mor, V.; Kumar, H., "Energy Efficient Techniques in Wireless Mesh Network," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, pp.1, 6, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799561 Wireless Mesh Network (WMN) is a promising wireless network architecture with the potential to provide last-mile connectivity. Considerable research has been carried out on various WMN issues such as design, performance and security. Owing to increasing interest in WMNs and the use of smart devices with bandwidth-hungry applications, WMNs must be designed with the objective of energy-efficient communication. The goal of this paper is to summarize the importance of energy efficiency in WMNs. Various techniques for achieving energy-efficient solutions are also reviewed.
Keywords: energy conservation; wireless mesh networks; WMN; bandwidth hungry applications; energy efficient techniques; smart devices; wireless mesh network; wireless network architecture; Energy efficiency; IEEE 802.11 Standards; Logic gates; Routing; Throughput; Wireless communication; Wireless mesh networks; energy aware techniques; energy efficient network; evolution (ID#: 15-3699)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799561&isnumber=6799496
Szott, S., "Selfish Insider Attacks in IEEE 802.11s Wireless Mesh Networks," Communications Magazine, IEEE, vol.52, no.6, pp.227, 233, June 2014. doi: 10.1109/MCOM.2014.6829968 The IEEE 802.11s amendment for wireless mesh networks does not provide incentives for stations to cooperate and is particularly vulnerable to selfish insider attacks, in which a legitimate network participant hopes to increase its QoS at the expense of others. This tutorial describes the attacks that can be executed against 802.11s networks, analyzing known attacks and identifying new ones. It also discusses possible countermeasures and detection methods, and attempts to quantify the threat of the attacks to determine which of the 802.11s vulnerabilities need to be secured with the highest priority.
Keywords: telecommunication security; wireless LAN; wireless mesh networks; IEEE 802.11s wireless mesh networks; selfish insider attacks; Ad hoc networks; IEEE 802.11 Standards; Logic gates; Protocols; Quality of service; Routing; Wireless mesh networks (ID#: 15-3700)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6829968&isnumber=6829933
El Masri, A.; Sardouk, A.; Khoukhi, L.; Merghem-Boulahia, L.; Gaiti, D., "Multimedia Support in Wireless Mesh Networks Using Interval Type-2 Fuzzy Logic System," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814034 Wireless mesh networks (WMNs) are attracting more and more real-time applications. These applications are constrained in terms of Quality of Service (QoS). Existing works in this area are mostly designed for mobile ad hoc networks, which, unlike WMNs, are mainly sensitive to energy and mobility. WMNs, however, have their own specific characteristics (e.g. static routers and heavy traffic load), which require dedicated QoS protocols. This paper proposes a novel traffic regulation scheme for multimedia support in WMNs. The proposed scheme regulates the traffic sending rate according to the network state, based on the buffer evolution at mesh routers and on the priority of each traffic type. By monitoring the buffer evolution at mesh routers, our scheme is able to predict possible congestion, or QoS violation, early enough before its occurrence; each flow is then regulated according to its priority and its QoS requirements. The idea behind the proposed scheme is to maintain lightly loaded buffers in order to minimize queuing delays as well as to avoid congestion. Moreover, the regulation process is performed smoothly in order to ensure the continuity of real-time and interactive services. We use the interval type-2 fuzzy logic system (IT2 FLS), known for its suitability for uncertain environments, to make suitable regulation decisions. The performance of our scheme is demonstrated through extensive simulations at different network and traffic load scales.
Keywords: fuzzy control; protocols; quality of service; queueing theory; telecommunication congestion control; telecommunication traffic; wireless mesh networks; QoS requirements; QoS violation; buffer evolution; dedicated QoS protocols; heavy traffic load; interval type-2 fuzzy logic system; lightly loaded buffers; mesh routers; mobile ad hoc networks; multimedia support; network state; quality of service; queuing delays; regulation process; static routers; traffic load scale; traffic regulation scheme; traffic sending rate; traffic type; wireless mesh networks; Ad hoc networks; Delays; Load management; Quality of service; Real-time systems; Throughput; Wireless communication (ID#: 15-3701)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814034&isnumber=6813963
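The regulation idea in the abstract above, keeping router buffers lightly loaded by predicting congestion from buffer evolution and throttling flows according to priority, can be illustrated with a much simpler crisp controller. This is a stand-in for the paper's interval type-2 fuzzy system; all thresholds and scaling factors below are invented for illustration.

```python
def regulate_rate(rate, buf_level, buf_slope, priority, max_rate):
    """Adjust a flow's sending rate from router buffer feedback.
    buf_level is current buffer occupancy in [0, 1]; buf_slope is its
    recent trend (> 0 means the queue is growing); priority is in [0, 1]."""
    predicted = buf_level + buf_slope            # naive one-step occupancy prediction
    if predicted > 0.8:                          # congestion expected soon: back off,
        return rate * (0.7 + 0.25 * priority)    # throttling low priority harder
    if predicted < 0.3:                          # buffers lightly loaded:
        return min(max_rate, rate * 1.05)        # probe gently for more rate
    return rate                                  # otherwise hold steady

# A growing queue near capacity throttles a low-priority flow more than a
# high-priority one, while a lightly loaded buffer lets rates creep up.
low_prio_rate = regulate_rate(10.0, 0.9, 0.05, priority=0.0, max_rate=20.0)
high_prio_rate = regulate_rate(10.0, 0.9, 0.05, priority=1.0, max_rate=20.0)
```

The IT2 FLS in the paper replaces the hard thresholds above with overlapping membership functions, which is what makes its decisions smooth under the uncertain buffer measurements the abstract mentions.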
Bin Hu; Gharavi, H., "Smart Grid Mesh Network Security Using Dynamic Key Distribution With Merkle Tree 4-Way Handshaking," Smart Grid, IEEE Transactions on, vol.5, no.2, pp.550,558, March 2014. doi: 10.1109/TSG.2013.2277963 Distributed mesh sensor networks provide cost-effective communications for deployment in various smart grid domains, such as home area networks (HAN), neighborhood area networks (NAN), and substation/plant-generation local area networks. This paper introduces a dynamically updating key distribution strategy to enhance mesh network security against cyber attack. The scheme has been applied to two security protocols known as simultaneous authentication of equals (SAE) and efficient mesh security association (EMSA). Since both protocols utilize 4-way handshaking, we propose a Merkle-tree based handshaking scheme, which is capable of improving the resiliency of the network in a situation where an intruder carries out a denial-of-service attack. Finally, by developing a denial-of-service attack model, we can evaluate the security of the proposed schemes against cyber attack, as well as network performance in terms of delay and overhead.
Keywords: computer network performance evaluation; computer network security; cryptographic protocols; home networks; smart power grids; substations; trees (mathematics); wireless LAN; wireless mesh networks; wireless sensor networks; EMSA; HAN; IEEE 802.11s; Merkle tree 4-way handshaking scheme; NAN; SAE; WLAN; cost-effective communications; cyber attack; denial-of-service attack model; distributed mesh sensor networks; dynamic key distribution strategy updating; efficient mesh security association; home area networks; neighborhood area networks; network performance; network resiliency improvement; plant-generation local area networks; security protocols; simultaneous authentication-of-equals; smart grid mesh network security enhancement; substation local area networks; wireless local area networks; Authentication; Computer crime; Logic gates; Mesh networks; Protocols; Smart grids; EMSA; IEEE 802.11s; SAE; security attacks; security protocols; smart grid; wireless mesh networks (ID#: 15-3702)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6599007&isnumber=6740878
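The Merkle-tree element of the handshake above rests on a standard construction: commit to a set of values with a single root hash so that any one value can later be verified with a logarithmic-size proof, letting a receiver discard forged frames cheaply during a DoS flood. A generic sketch of that construction follows (SHA-256 with duplicate-last-node padding; this is not the paper's exact frame format).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf values."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to verify leaves[index] against the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling hash, node-is-right?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path from a leaf to the root using the sibling hashes."""
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

msgs = [b"nonce-A", b"nonce-B", b"nonce-C", b"nonce-D"]
root = merkle_root(msgs)
assert verify(msgs[2], merkle_proof(msgs, 2), root)        # genuine value accepted
assert not verify(b"forged", merkle_proof(msgs, 2), root)  # forged value rejected
```

The resiliency benefit is that the verifier does one root comparison per frame instead of completing a full handshake, so flooding with bogus handshake frames costs the attacker more than it costs the defender.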
Ping Yi; Ting Zhu; Qingquan Zhang; Yue Wu; Jianhua Li, "A Denial Of Service Attack In Advanced Metering Infrastructure Network," Communications (ICC), 2014 IEEE International Conference on, pp.1029, 1034, 10-14 June 2014. doi: 10.1109/ICC.2014.6883456 Advanced Metering Infrastructure (AMI) is the core component in a smart grid and exhibits a highly complex network configuration. AMI shares information about consumption, outages, and electricity rates reliably and efficiently through bidirectional communication between smart meters and utilities. However, the numerous smart meters connected through mesh networks open new opportunities for attackers to interfere with communications and compromise utility assets or steal customers’ private information. In this paper, we present a new DoS attack, called the puppet attack, which can result in denial of service in an AMI network. The intruder can select any normal node as a puppet node and send attack packets to it. When the puppet node receives these attack packets, it is controlled by the attacker and floods more packets so as to exhaust the network communication bandwidth and node energy. Simulation results show that the puppet attack is serious: the packet delivery rate drops to 10%-20%.
Keywords: power engineering computing; power system measurement; radio telemetry; security of data; smart meters; smart power grids; wireless mesh networks; DoS attack; advanced metering infrastructure network; denial of service attack; mesh network; puppet attack; smart meter; smart power grid; Computer crime; Electricity; Floods; Routing protocols; Smart meters; Wireless mesh networks (ID#: 15-3703)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883456&isnumber=6883277
do Carmo, Rodrigo; Hoffmann, Justus; Willert, Volker; Hollick, Matthias, "Making Active-Probing-Based Network Intrusion Detection in Wireless Multihop Networks practical: A Bayesian Inference Approach To Probe Selection," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, pp.345,353, 8-11 Sept. 2014. doi: 10.1109/LCN.2014.6925790 Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. The distributed nature of the network makes centralized intrusion detection difficult, while resource constraints of the nodes and the characteristics of the wireless medium often render decentralized, node-based approaches impractical. We demonstrate that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. The key contribution of this paper is to optimize the active probing process: we introduce a general Bayesian model and design a probe selection algorithm that reduces the number of probes while maximizing the insights gathered by the AP-NIDS. We validate our model by means of testbed experimentation. We integrate it into our open-source AP-NIDS DogoIDS and run it in an indoor wireless mesh testbed utilizing the IEEE 802.11s protocol. For the example of a selective packet-dropping attack, we develop the detection states for our Bayes model and show its feasibility. We demonstrate that our approach does not need to execute the complete set of probes, yet we obtain good detection rates.
Keywords: Bayes methods; Equations; Intrusion detection; Probes; Spread spectrum communication; Testing; Wireless communication; Bayes inference; Intrusion Detection; Security; Wireless Multihop Networks (ID#: 15-3704)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925790&isnumber=6925725
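One common way to realize Bayesian probe selection, offered here as an illustration rather than the paper's exact algorithm, is to maintain a belief that a node is compromised and always run the probe whose outcome is expected to reduce uncertainty the most. The probe names and detection rates below are hypothetical.

```python
import math

def entropy(p):
    """Binary entropy of a belief p = P(compromised)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def posterior(prior, tpr, fpr, alarm):
    """Bayes update of P(compromised) after one probe result."""
    if alarm:
        num, den = tpr * prior, tpr * prior + fpr * (1 - prior)
    else:
        num, den = (1 - tpr) * prior, (1 - tpr) * prior + (1 - fpr) * (1 - prior)
    return num / den

def expected_gain(prior, tpr, fpr):
    """Expected entropy reduction from running a probe with the given rates."""
    p_alarm = tpr * prior + fpr * (1 - prior)
    post_alarm = posterior(prior, tpr, fpr, True)
    post_quiet = posterior(prior, tpr, fpr, False)
    return entropy(prior) - (p_alarm * entropy(post_alarm)
                             + (1 - p_alarm) * entropy(post_quiet))

# Hypothetical probe library: (name, true-positive rate, false-positive rate).
probes = [("forwarding-check", 0.90, 0.10),
          ("route-reply-check", 0.60, 0.05),
          ("noisy-probe", 0.55, 0.50)]

belief = 0.2                                   # prior P(node compromised)
best = max(probes, key=lambda p: expected_gain(belief, p[1], p[2]))
```

Stopping once the belief crosses a detection (or clearance) threshold is what lets such a selector avoid executing the complete probe set, which matches the behavior the abstract reports.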
Soderi, S.; Dainelli, G.; Iinatti, J.; Hamalainen, M., "Signal Fingerprinting In Cognitive Wireless Networks," Cognitive Radio Oriented Wireless Networks and Communications (CROWNCOM), 2014 9th International Conference on, pp.266,270, 2-4 June 2014. Future wireless communications will be made up of different wireless technologies. In such a scenario, cognitive and cooperative principles create a promising framework for the interaction of these systems. The opportunistic behavior of cognitive radio (CR) provides efficient use of the radio spectrum and makes wireless network setup easier. However, CR features are increasingly exploited by malicious attacks, e.g., denial-of-service (DoS). This paper introduces active radio frequency fingerprinting (RFF) with a dual application scenario. CRs could encapsulate common-control-channel (CCC) information in an existing channel using active RFF, avoiding any additional or dedicated link. On the other hand, a node inside a network could use the same technique to exchange a public key during the setup of secure communication. Results indicate that active RFF is a valuable technique for a cognitive radio manager (CRM) framework, facilitating data exchange between CRs without any dedicated channel or additional radio resources.
Keywords: cognitive radio; cryptographic protocols; public key cryptography; telecommunication security; telecommunication signalling; wireless mesh networks; CRM; DoS; RFF; active radiofrequency fingerprinting; cognitive radio manager framework; cognitive wireless networks; common-control-channel information; denial-of-service attacks; malicious attacks; public key; signal fingerprinting; Amplitude shift keying; Demodulation; Protocols; Security; Signal to noise ratio; Spread spectrum communication; Wireless communication; Cognitive; Fingerprinting; Security; Wireless (ID#: 15-3705)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849697&isnumber=6849647
Lichtblau, B.; Dittrich, A., "Probabilistic Breadth-First Search - A Method for Evaluation of Network-Wide Broadcast Protocols," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1, 6, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814046 In Wireless Mesh Networks (WMNs), Network-Wide Broadcasts (NWBs) are a fundamental operation, required by routing and other mechanisms that distribute information to all nodes in the network. However, due to the characteristics of wireless communication, NWBs are generally problematic. Optimizing them thus is a prime target when improving the overall performance and dependability of WMNs. Most existing optimizations neglect the real nature of WMNs and are based on simple graph models, which provide optimistic assumptions of NWB dissemination. On the other hand, models that fully consider the complex propagation characteristics of NWBs quickly become unsolvable due to their complexity. In this paper, we present the Monte Carlo method Probabilistic Breadth-First Search (PBFS) to approximate the reachability of NWB protocols. PBFS simulates individual NWBs on graphs with probabilistic edge weights, which reflect link qualities of individual wireless links in the WMN, and estimates reachability over a configurable number of simulated runs. This approach is not only more efficient than existing ones, but further provides additional information, such as the distribution of path lengths. Furthermore, it is easily extensible to NWB schemes other than flooding. The applicability of PBFS is validated both theoretically and empirically, in the latter by comparing reachability as calculated by PBFS and measured in a real-world WMN. Validation shows that PBFS quickly converges to the theoretically correct value and approximates the behavior of real-life testbeds very well. 
The feasibility of PBFS to support research on NWB optimizations or higher level protocols that employ NWBs is demonstrated in two use cases.
Keywords: Monte Carlo methods; graph theory; routing protocols; search problems; wireless mesh networks; Monte Carlo method; NWB dissemination; NWB optimizations; NWB protocols; WMN; complex propagation characteristics; link qualities; network-wide broadcast protocols; network-wide broadcasts; path lengths; probabilistic breadth-first search; probabilistic edge weights; simple graph models; wireless communication; wireless links; wireless mesh networks; Approximation methods; Complexity theory; Mathematical model; Optimization; Probabilistic logic; Protocols; Wireless communication (ID#: 15-3706)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814046&isnumber=6813963
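The core of PBFS as described in the abstract above, simulating individual broadcasts on a graph with probabilistic edge weights and averaging reachability over many runs, can be sketched directly. The toy topology and link probabilities below are invented for illustration.

```python
import random
from collections import deque

def pbfs_reachability(nodes, edges, source, runs=2000, rng=None):
    """Estimate the expected fraction of nodes reached by a flooding NWB.
    edges maps (u, v) -> delivery probability of that wireless link."""
    rng = rng or random.Random(42)
    total = 0.0
    for _ in range(runs):
        # Sample one realization: each link is alive with its link quality.
        alive = {(u, v) for (u, v), p in edges.items() if rng.random() < p}
        reached, queue = {source}, deque([source])
        while queue:                        # plain BFS on the sampled graph
            u = queue.popleft()
            for v in nodes:
                if v not in reached and ((u, v) in alive or (v, u) in alive):
                    reached.add(v)
                    queue.append(v)
        total += len(reached) / len(nodes)
    return total / runs

# Toy 4-node mesh: one reliable core link and two lossy edge links.
nodes = ["A", "B", "C", "D"]
edges = {("A", "B"): 0.95, ("B", "C"): 0.6, ("B", "D"): 0.6}
estimate = pbfs_reachability(nodes, edges, "A")
```

For this topology the exact expected fraction is (1 + 0.95 + 0.95·0.6 + 0.95·0.6) / 4 ≈ 0.77, so a few thousand runs land close to it; the same per-run loop could also record path lengths, the extra information the paper highlights.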
do Carmo, R.; Hollick, M., "Analyzing Active Probing For Practical Intrusion Detection in Wireless Multihop Networks," Wireless On-demand Network Systems and Services (WONS), 2014 11th Annual Conference on, pp.77,80, 2-4 April 2014. doi: 10.1109/WONS.2014.6814725 Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. It has been shown that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. However, understanding its interworking with real networks is still an unexplored challenge. In this paper, we investigate this in practice. We identify the general functional parameters that can be controlled, and by means of extensive experimentation, we tune these parameters and analyze the trade-offs between them, aiming at reducing false positives, overhead, and detection time. The traces we collected help us to understand when and why the active probing fails, and let us present countermeasures to prevent it.
Keywords: frequency hop communication; security of data; wireless mesh networks; active-probing-based network intrusion detection system; wireless mesh network; wireless multihop networks; Ad hoc networks; Communication system security; Intrusion detection; Routing protocols; Testing; Wireless communication; Wireless sensor networks (ID#: 15-3708)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814725&isnumber=6814711
Bhatia, R.K.; Bodade, V., "Defining The Framework For Wireless-AMI Security In Smart Grid," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, pp.1, 5, 6-8 March 2014. doi: 10.1109/ICGCCEE.2014.6921383 In a smart grid, critical data such as monitoring data, usage data, state estimation and billing data are regularly exchanged among its elements. If the security of such a system is violated, massive losses and damages result; compromising on the security of such a system is simply not an option. Thus, in this paper we propose a security mechanism for the Advanced Metering Infrastructure of the smart grid, formed as a mesh ZigBee topology. This security mechanism involves PKI-based digital certificate authentication and an intrusion detection system to protect the AMI from internal and external security attacks.
Keywords: Zigbee; computer network security; metering; power engineering computing; power system protection; public key cryptography; smart power grids; wireless mesh networks; PKI based digital certificate authentication; external security attack; internal security attack; intrusion detection system; public key infrastructure; smart grid advanced metering infrastructure; wireless AMI security; wireless mesh Zigbee network topology; Authentication; Intrusion detection; Smart grids; Smart meters; Wireless communication; Zigbee; AMI (Advanced Metering Infrastructure); PKI; Security; WMN (Wireless Mesh Network) (ID#: 15-3709)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921383&isnumber=6920919
de Alwis, Chamitha; Arachchi, H.Kodikara; Fernando, Anil; Pourazad, Mahsa, "Content And Network-Aware Multicast Over Wireless Networks," Heterogeneous Networking for Quality, Reliability, Security and Robustness (QShine), 2014 10th International Conference on, pp.122,128, 18-20 Aug. 2014. doi: 10.1109/QSHINE.2014.6928670 This paper proposes content and network-aware redundancy allocation algorithms for channel coding and network coding to optimally deliver data and video multicast services over error-prone wireless mesh networks. Each network node allocates redundancies for channel coding and network coding taking into account the content properties, channel bandwidth and channel status to improve the end-to-end performance of data and video multicast applications. For data multicast applications, redundancies are allocated at each network node in such a way that the total number of redundant bits transmitted is minimised. For video multicast applications, redundancies are allocated considering the priority of video packets, such that the probability of delivering high-priority video packets is increased. This not only ensures the continuous playback of a video but also increases the received video quality. Simulation results for bandwidth-sensitive data multicast applications exhibit up to a 10× reduction in the number of redundant bits required to achieve a 100% packet delivery ratio, compared to reference schemes. Similarly, for delay-sensitive video multicast applications, simulation results exhibit up to 3.5dB PSNR gains in the received video quality.
Keywords: Bandwidth; Channel coding; Delays; Network coding; Receivers; Redundancy; Streaming media; content and network-aware redundancy allocation; network coding; wireless mesh networks (ID#: 15-3710)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6928670&isnumber=6928645
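A greatly simplified, hypothetical version of priority-aware redundancy allocation can illustrate the idea in the abstract above: spend a limited parity-packet budget on flows in priority order, giving each flow enough redundancy to offset its expected link loss. The flow tuples and the greedy policy are assumptions for illustration, much simpler than the paper's joint channel/network-coding scheme.

```python
import math

def allocate_redundancy(flows, budget):
    """Greedy parity-packet allocation across flows on a lossy link.
    flows: list of (name, priority, loss_rate, data_packets).
    Higher-priority flows receive parity first."""
    alloc = {name: 0 for name, *_ in flows}
    for name, _prio, loss, n in sorted(flows, key=lambda f: -f[1]):
        # Parity needed so expected deliveries cover the n data packets:
        # (n + r)(1 - loss) >= n  =>  r >= n * loss / (1 - loss)
        need = math.ceil(n * loss / (1 - loss))
        take = min(need, budget)
        alloc[name] = take
        budget -= take
    return alloc

# Toy example: two video layers outrank best-effort data; budget of 5 parity packets.
alloc = allocate_redundancy(
    [("video-I", 3, 0.2, 8), ("video-P", 2, 0.2, 8), ("data", 1, 0.1, 10)],
    budget=5)
```

When the budget runs short, only the lowest-priority flow is left under-protected, which mirrors the paper's goal of raising the delivery probability of high-priority video packets first.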
Avallone, S.; Di Stasi, G., "WiMesh: A Tool for the Performance Evaluation of Multi-Radio Wireless Mesh Networks," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1, 5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814062 In this paper we present WiMesh, a software tool we developed during the last ten years of research in the field of multi-radio wireless mesh networks. WiMesh serves two main purposes: (i) to run different algorithms for the assignment of channels, transmission rate and power to the available network radios; (ii) to automatically set up and run ns-3 simulations based on the network configuration returned by such algorithms. WiMesh consists of three libraries and three corresponding utilities that allow experiments to be conducted easily. All these utilities accept as input an XML configuration file in which a number of options can be specified. WiMesh is freely available to the research community, with the purpose of easing the development of new algorithms and the verification of their performance.
Keywords: XML; performance evaluation; telecommunication channels; telecommunication computing; wireless mesh networks; WiMesh; XML configuration; channel assignment; multiradio wireless mesh networks; ns-3 simulations; performance evaluation; research community; software tool; Channel allocation; Libraries; Network topology; Throughput; Topology; Wireless mesh networks; XML (ID#: 15-3711)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814062&isnumber=6813963
Arieta, F.; Barabasz, L.T.; Santos, A.; Nogueira, M., "Mitigating Flooding Attacks on Mobility in Infrastructure-Based Vehicular Networks," Latin America Transactions, IEEE (Revista IEEE America Latina), vol.12, no.3, pp.475, 483, May 2014. doi: 10.1109/TLA.2014.6827876 Infrastructure-based vehicular networks can be applied in different social contexts, such as health care, transportation and entertainment. They can easily take advantage of the benefits that wireless mesh networks (WMNs) provide for mobility, since WMNs essentially support the technological convergence and resilience required for the effective operation of services and applications. However, infrastructure-based vehicular networks are prone to attacks such as ARP packet flooding that compromise mobility management and users' network access. Hence, this work proposes MIRF, a secure mobility scheme based on reputation and filtering to mitigate flooding attacks on mobility management. The efficiency of the MIRF scheme has been evaluated by simulations of urban scenarios with and without attacks. Analyses show that it significantly improves the packet delivery ratio in scenarios with attacks, mitigating their intentional negative effects through the reduction of malicious ARP requests. Furthermore, handoffs in scenarios under attack were observed to complete faster with the scheme than without it.
Keywords: mobility management (mobile radio); telecommunication security; wireless mesh networks; ARP packets flooding; MIRF; WMN; filtering; flooding attacks mitigation; handoffs; infrastructure-based vehicular networks; malicious ARP requests; mobility management; negative effects; network access; packet delivery ratio; secure mobility scheme; technological convergence; wireless mesh networks; Filtering; Floods; IP networks; Internet; Mobile radio mobility management; Monitoring; Flooding Attacks; Mobility; Security; Vehicular Networks (ID#: 15-3712)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827876&isnumber=6827455