Biblio

Found 3679 results

Filters: First Letter Of Last Name is C
2015-05-06
Chouhan, D.S., Mahajan, R.P..  2014.  An architectural framework for encryption & generation of digital signature using DNA cryptography. Computing for Sustainable Global Development (INDIACom), 2014 International Conference on. :743-748.

As most modern encryption algorithms have been fully or partially broken, the information security community is looking in new directions to protect the data it transmits. DNA computing has been identified as a possible technology that may bring forward a new hope for hybrid and unbreakable algorithms. Several DNA computing algorithms have already been proposed for cryptography, cryptanalysis, and steganography problems, and they have proven to be very powerful in these areas. This paper gives an architectural framework for encryption and generation of digital signatures using DNA cryptography. To analyze performance, the original plaintext size and the key size, together with the encryption and decryption times, are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program.

2015-05-05
Baughman, A.K., Chuang, W., Dixon, K.R., Benz, Z., Basilico, J..  2014.  DeepQA Jeopardy! Gamification: A Machine-Learning Perspective. Computational Intelligence and AI in Games, IEEE Transactions on. 6:55-66.

DeepQA is a large-scale natural language processing (NLP) question-and-answer system that responds across a breadth of structured and unstructured data, from hundreds of analytics that are combined with over 50 models, trained through machine learning. After the 2011 historic milestone of defeating the two best human players in the Jeopardy! game show, the technology behind IBM Watson, DeepQA, is undergoing gamification into real-world business problems. Gamifying a business domain for Watson is a composite of functional, content, and training adaptation for nongame play. During domain gamification for medical, financial, government, or any other business, each system change affects the machine-learning process. As opposed to the original Watson Jeopardy!, whose class distribution of positive-to-negative labels is 1:100, in adaptation the computed training instances, question-and-answer pairs transformed into true-false labels, result in a very low positive-to-negative ratio of 1:100 000. Such initial extreme class imbalance during domain gamification poses a big challenge for the Watson machine-learning pipelines. The combination of ingested corpus sets, question-and-answer pairs, configuration settings, and NLP algorithms contributes toward the challenging data state. We propose several data engineering techniques, such as answer key vetting and expansion, source ingestion, oversampling classes, and question set modifications to increase the computed true labels. In addition, algorithm engineering, such as an implementation of the Newton-Raphson logistic regression with a regularization term, relaxes the constraints of class imbalance during training adaptation. We conclude by empirically demonstrating that data and algorithm engineering are complementary and indispensable to overcome the challenges in this first Watson gamification for real-world business problems.
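
The algorithm-engineering step mentioned above, Newton-Raphson logistic regression with a regularization term, is a standard technique. The sketch below is a minimal, generic NumPy illustration of that update, not IBM's DeepQA code; the toy data, the `lam` regularization strength, and the iteration count are illustrative assumptions.

```python
import numpy as np

def logistic_regression_newton(X, y, lam=1.0, n_iter=20):
    """Fit w for P(y=1|x) = sigmoid(x.w) by Newton-Raphson with an
    L2 regularization term; a generic sketch of the algorithm-engineering
    step described in the abstract."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted probabilities
        grad = X.T @ (p - y) + lam * w                # gradient of regularized loss
        W = p * (1.0 - p)                             # Bernoulli variances
        H = X.T @ (X * W[:, None]) + lam * np.eye(d)  # regularized Hessian
        w -= np.linalg.solve(H, grad)                 # Newton step
    return w

# Toy usage with a 1:100-style imbalanced label vector (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (rng.random(2000) < 0.01).astype(float)
w = logistic_regression_newton(X, y)
```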

Demchenko, Y., Canh Ngo, De Laat, C., Lee, C..  2014.  Federated Access Control in Heterogeneous Intercloud Environment: Basic Models and Architecture Patterns. Cloud Engineering (IC2E), 2014 IEEE International Conference on. :439-445.

This paper presents on-going research to define the basic models and architecture patterns for federated access control in heterogeneous (multi-provider) multi-cloud and inter-cloud environments. The proposed research contributes to the further definition of the Intercloud Federation Framework (ICFF), which is a part of the general Intercloud Architecture Framework (ICAF) proposed by the authors in earlier works. ICFF attempts to address the interoperability and integration issues in provisioning on-demand multi-provider multi-domain heterogeneous cloud infrastructure services. The paper describes the major inter-cloud federation scenarios, which in general involve two types of federations: customer-side federation, which includes federation between cloud based services and the customer campus or enterprise infrastructure, and provider-side federation, which is created by a group of cloud providers to outsource or broker their resources when provisioning services to customers. The proposed federated access control model uses the Federated Identity Management (FIDM) model, which can also be supported by trusted third party entities such as a Cloud Service Broker (CSB) and/or trust broker to establish dynamic trust relations between entities without previously existing trust. The research analyses different federated identity management scenarios and defines the basic architecture patterns and the main components of the distributed federated multi-domain Authentication and Authorisation infrastructure.

2015-05-06
Kundu, S., Jha, A., Chattopadhyay, S., Sengupta, I., Kapur, R..  2014.  Framework for Multiple-Fault Diagnosis Based on Multiple Fault Simulation Using Particle Swarm Optimization. Very Large Scale Integration (VLSI) Systems, IEEE Transactions on. 22:696-700.

This brief proposes a framework to analyze multiple faults based on multiple fault simulation in a particle swarm optimization environment. Experimentation shows that up to ten faults can be diagnosed in a reasonable time. However, the scheme does not put any restriction on the number of simultaneous faults.
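
The brief names particle swarm optimization as the search engine but gives no further detail in the abstract. The following is a generic, minimal PSO loop in Python; the placeholder fitness function merely scores a candidate vector against a hidden target and stands in for the authors' multiple-fault simulation metric, which is an assumption on our part.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimization; each particle's position could
    encode a candidate (multiple-)fault hypothesis to be scored by fault
    simulation, which is the role of `fitness` here."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))             # velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()         # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Placeholder fitness: distance to a hidden target vector (illustrative only).
target = np.array([0.3, -0.5, 0.8])
best, best_f = pso_minimize(lambda p: np.sum((p - target) ** 2), dim=3)
```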

2015-04-30
Varadarajan, P., Crosby, G..  2014.  Implementing IPsec in Wireless Sensor Networks. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

There is an increasing need for wireless sensor networks (WSNs) to be more tightly integrated with the Internet. Several real-world deployments of stand-alone wireless sensor networks exist, and a number of solutions have been proposed to address the security threats in these WSNs. However, integrating WSNs with the Internet in such a way as to ensure a secure end-to-end (E2E) communication path between IPv6-enabled sensor networks and the Internet remains an open research issue. In this paper, the 6LoWPAN adaptation layer was extended to support both IPsec's Authentication Header (AH) and Encapsulating Security Payload (ESP). Thus, the communication endpoints in WSNs are able to communicate securely using encryption and authentication. The performance of the proposed compressed AH and ESP headers is evaluated via a test-bed implementation of 6LoWPAN for IPv6 communications on IEEE 802.15.4 networks. The results confirm the possibility of implementing E2E security in IPv6-enabled WSNs to create a smooth transition between WSNs and the Internet. This can potentially play a big role in the emerging "Internet of Things" paradigm.

2015-05-05
Kumar, S., Rama Krishna, C., Aggarwal, N., Sehgal, R., Chamotra, S..  2014.  Malicious data classification using structural information and behavioral specifications in executables. Engineering and Computational Sciences (RAECS), 2014 Recent Advances in. :1-6.

With the rise of the underground Internet economy, automated malicious programs, popularly known as malware, have become a major threat to computers and information systems connected to the Internet. Properties such as self-healing, self-hiding, and the ability to deceive security devices make this software hard to detect and mitigate. Therefore, the detection and mitigation of such malicious software is a major challenge for researchers and security personnel. Conventional systems for the detection and mitigation of such threats are mostly signature based, and a major drawback of such systems is their inability to detect malware samples for which no signature is available in their signature database; such malware is known as zero-day malware. Moreover, more and more malware writers use obfuscation techniques such as polymorphism, metamorphism, packing, and encryption to avoid detection by antivirus software. The traditional signature-based detection system is therefore neither effective nor efficient for detecting zero-day malware. Hence, to improve the effectiveness and efficiency of malware detection, we use a classification method based on structural information and behavioral specifications. In this paper we use both static and dynamic analysis approaches: in static analysis we extract the features of an executable file and then classify it, and in dynamic analysis we take traces of executable files using NtTrace within a controlled environment. Experimental results indicate that our proposed algorithm is effective in extracting the malicious behavior of executables, and it can also be used to detect malware variants.

Cam, H., Mouallem, P., Yilin Mo, Sinopoli, B., Nkrumah, B..  2014.  Modeling impact of attacks, recovery, and attackability conditions for situational awareness. Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014 IEEE International Inter-Disciplinary Conference on. :181-187.

A distributed cyber control system comprises various types of assets, including sensors, intrusion detection systems, scanners, controllers, and actuators. The modeling and analysis of these components usually require multi-disciplinary approaches. This paper presents a modeling and dynamic analysis of a distributed cyber control system for situational awareness by taking advantage of control theory and time Petri nets. Linear time-invariant systems are used to model the target system, attacks, asset influences, and an anomaly-based intrusion detection system. Time Petri nets are used to model the impact and timing relationships of attacks, vulnerability, and recovery at every node. To characterize those distributed control systems that are perfectly attackable, algebraic and topological attackability conditions are derived. Numerical evaluation is performed to determine the impact of attacks on the distributed control system.
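
The abstract's linear time-invariant formulation can be pictured, under the usual assumptions, as a state update of the form x[k+1] = A x[k] + B u[k] + Ba a[k], where a[k] is the attack input. The snippet below simulates that form in NumPy with made-up matrices and a made-up attack window; it only illustrates the modeling idea, not the authors' model or their attackability conditions.

```python
import numpy as np

# Illustrative LTI model of a plant under an additive attack input:
#   x[k+1] = A x[k] + B u[k] + Ba a[k]   (all matrices below are made up)
A  = np.array([[0.95, 0.10], [0.00, 0.90]])   # nominal dynamics
B  = np.array([[0.0], [1.0]])                 # control input matrix
Ba = np.array([[1.0], [0.0]])                 # attack injection matrix

x = np.zeros(2)
trajectory = []
for k in range(50):
    u = np.array([0.0])                                        # nominal control (placeholder)
    a = np.array([0.5]) if 20 <= k < 30 else np.array([0.0])   # attack active for 10 steps
    x = A @ x + B @ u + Ba @ a
    trajectory.append(x.copy())

impact = max(np.linalg.norm(s) for s in trajectory)  # crude impact measure
print(f"peak state deviation under attack: {impact:.3f}")
```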


2018-05-14
Behzad Tabibian, Michael Lewis, Christian Lebiere, Nilanjan Chakraborty, Katia Sycara, Meeko Oishi.  2014.  Towards a Cognitively-based Analytic Model of Human Control of Swarms. Proceedings of the AAAI Spring Symposium. :68-73.

Technical Report SS-14-02, "Formal Verification and Modeling in Human-Machine Systems"

2015-05-06
Castro Marquez, C.I., Strum, M., Wang Jiang Chau.  2014.  A unified sequential equivalence checking approach to verify high-level functionality and protocol specification implementations in RTL designs. Test Workshop - LATW, 2014 15th Latin American. :1-6.

Formal techniques provide exhaustive design verification, but computational margins have an important negative impact on their efficiency. Sequential equivalence checking is an effective approach, but traditionally it has only been applied between circuit descriptions with a one-to-one correspondence of states. Applying it between RTL descriptions and high-level reference models requires removing signals, variables, and states exclusive to the RTL description so as to comply with the state correspondence restriction. In this paper, we extend a previous formal methodology for RTL verification against high-level models to also check the signals and protocol implemented in the RTL design. This protocol implementation is compared formally to a description captured from the specification. Thus, we can thoroughly prove the sequential behavior of a design under verification.
 

2015-05-04
Rafii, Z., Coover, B., Jinyu Han.  2014.  An audio fingerprinting system for live version identification using image processing techniques. Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. :644-648.

Suppose that you are at a music festival checking out an artist, and you would like to quickly know about the song that is being played (e.g., title, lyrics, album, etc.). If you have a smartphone, you could record a sample of the live performance and compare it against a database of existing recordings from the artist. Services such as Shazam or SoundHound will not work here, as this is not the typical framework for audio fingerprinting or query-by-humming systems: a live performance is neither identical to its studio version (e.g., variations in instrumentation, key, tempo, etc.) nor is it a hummed or sung melody. We propose an audio fingerprinting system that can deal with live version identification by using image processing techniques. Compact fingerprints are derived using a log-frequency spectrogram and an adaptive thresholding method, and template matching is performed using the Hamming similarity and the Hough Transform.
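
As a rough illustration of the fingerprinting pipeline described above (log-frequency spectrogram, adaptive thresholding into a binary fingerprint, Hamming-based comparison), here is a hedged NumPy/SciPy sketch; the window sizes, band edges, and median-threshold rule are assumptions rather than the authors' parameters, and the Hough-transform template matching step is omitted.

```python
import numpy as np
from scipy.signal import spectrogram

def binary_fingerprint(audio, fs, n_bands=32):
    """Toy fingerprint: log-frequency band energies, thresholded against a
    per-frame median (a simple stand-in for adaptive thresholding)."""
    f, t, S = spectrogram(audio, fs=fs, nperseg=2048, noverlap=1024)
    edges = np.logspace(np.log10(100), np.log10(fs / 2), n_bands + 1)
    bands = np.array([S[(f >= lo) & (f < hi)].sum(axis=0)
                      for lo, hi in zip(edges[:-1], edges[1:])])
    logE = np.log1p(bands)
    return (logE > np.median(logE, axis=0, keepdims=True)).astype(np.uint8)

def hamming_similarity(fp_a, fp_b):
    """Fraction of matching bits over the overlapping frames."""
    n = min(fp_a.shape[1], fp_b.shape[1])
    return np.mean(fp_a[:, :n] == fp_b[:, :n])

# Toy usage on synthetic audio (illustrative only).
fs = 22050
tone = np.sin(2 * np.pi * 440 * np.arange(fs * 3) / fs)
fp1 = binary_fingerprint(tone, fs)
fp2 = binary_fingerprint(tone + 0.05 * np.random.randn(tone.size), fs)
print(hamming_similarity(fp1, fp2))
```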

2015-04-30
Cioranesco, J.-M., Danger, J.-L., Graba, T., Guilley, S., Mathieu, Y., Naccache, D., Xuan Thuy Ngo.  2014.  Cryptographically secure shields. Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on. :25-31.

Probing attacks are serious threats on integrated circuits. Security products often include a protective layer called shield that acts like a digital fence. In this article, we demonstrate a new shield structure that is cryptographically secure. This shield is based on the newly proposed SIMON lightweight block cipher and independent mesh lines to ensure the security against probing attacks of the hardware located behind the shield. Such structure can be proven secure against state-of-the-art invasive attacks. For the first time in the open literature, we describe a chip designed with a digital shield, and give an extensive report of its cost, in terms of power, metal layer(s) to sacrifice and of logic (including the logic to connect it to the CPU). Also, we explain how “Through Silicon Vias” (TSV) technology can be used for the protection against both frontside and backside probing.

Salman, A., Elhajj, I.H., Chehab, A., Kayssi, A..  2014.  DAIDS: An Architecture for Modular Mobile IDS. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :328-333.

The popularity of mobile devices and the enormous number of third-party mobile applications in the market have naturally led to several vulnerabilities being identified and abused. This is coupled with the immaturity of intrusion detection system (IDS) technology targeting mobile devices. In this paper we propose a modular host-based IDS framework for mobile devices that uses behavior analysis to profile applications on the Android platform. Anomaly detection can then be used to categorize malicious behavior and alert users. The proposed system accommodates different detection algorithms, and is being tested at a major telecom operator in North America. This paper highlights the architecture, findings, and lessons learned.

2015-05-06
Cook, A., Wunderlich, H.-J..  2014.  Diagnosis of multiple faults with highly compacted test responses. Test Symposium (ETS), 2014 19th IEEE European. :1-6.

Defects cluster, and the probability of a multiple fault is significantly higher than just the product of the single fault probabilities. While this observation is beneficial for high yield, it complicates fault diagnosis. Multiple faults will occur especially often during process learning, yield ramp-up and field return analysis. In this paper, a logic diagnosis algorithm is presented which is robust against multiple faults and which is able to diagnose multiple faults with high accuracy even on compressed test responses as they are produced in embedded test and built-in self-test. The developed solution takes advantage of the linear properties of a MISR compactor to identify a set of faults likely to produce the observed faulty signatures. Experimental results show an improvement in accuracy of up to 22 % over traditional logic diagnosis solutions suitable for comparable compaction ratios.
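
The key property exploited here is the linearity of the MISR over GF(2): with a zero initial state, the signature of a faulty response sequence equals the fault-free signature XOR-ed with the signature of the error pattern alone. A toy MISR model in Python (the register width and feedback taps are arbitrary assumptions) makes that property easy to check:

```python
# Minimal multiple-input signature register (MISR) model over GF(2).
# The register width and feedback tap positions are illustrative choices.
WIDTH = 8
TAPS = [0, 2, 3, 4]

def misr_signature(responses):
    """Compact a sequence of WIDTH-bit response words into one signature."""
    state = 0
    for word in responses:
        fb = 0
        for t in TAPS:                       # XOR of tapped state bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << WIDTH) - 1)
        state ^= word & ((1 << WIDTH) - 1)   # parallel injection of response bits
    return state

good = [0b10110010, 0b01100111, 0b11110000]
errs = [0b00000000, 0b00000100, 0b00000000]      # error pattern caused by a fault
faulty = [g ^ e for g, e in zip(good, errs)]

# Linearity: signature(faulty) == signature(good) XOR signature(error pattern)
assert misr_signature(faulty) == misr_signature(good) ^ misr_signature(errs)
```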

2015-05-05
Hussain, A., Faber, T., Braden, R., Benzel, T., Yardley, T., Jones, J., Nicol, D.M., Sanders, W.H., Edgar, T.W., Carroll, T.E. et al..  2014.  Enabling Collaborative Research for Security and Resiliency of Energy Cyber Physical Systems. Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on. :358-360.

The University of Illinois at Urbana Champaign (Illinois), Pacific Northwest National Labs (PNNL), and the University of Southern California Information Sciences Institute (USC-ISI) consortium is working toward providing tools and expertise to enable collaborative research to improve security and resiliency of cyber physical systems. In this extended abstract we discuss the challenges and the solution space. We demonstrate the feasibility of some of the proposed components through a wide-area situational awareness experiment for the power grid across the three sites.
 

2015-05-06
Chunhui Zhao.  2014.  Fault subspace selection and analysis of relative changes based reconstruction modeling for multi-fault diagnosis. Control and Decision Conference (2014 CCDC), The 26th Chinese. :235-240.

Online fault diagnosis has been a crucial task for industrial processes. Reconstruction-based fault diagnosis has been drawing special attention as a good alternative to the traditional contribution plot. It identifies the fault cause by finding the specific fault subspace that can well eliminate alarming signals from a bunch of alternatives that have been prepared based on historical fault data. In practice, however, the abnormality may result from the joint effects of multiple faults, which thus cannot be well corrected by a single fault subspace archived in the historical fault library. In the present work, an aggregative reconstruction-based fault diagnosis strategy is proposed to handle the case where multiple fault causes jointly contribute to the abnormal process behaviors. First, fault subspaces are extracted based on historical fault data in two different monitoring subspaces, where analysis of relative changes is used to enclose the major fault effects responsible for different alarming monitoring statistics. Then, a fault subspace selection strategy is developed to analyze the combinatorial fault nature, sorting and selecting the informative fault subspaces that are most likely to be responsible for the concerned abnormalities. Finally, an aggregative fault subspace is calculated by combining the selected fault subspaces; it represents the joint effects of multiple faults and serves as the final reconstruction model for online fault diagnosis. Theoretical support is framed and the related statistical characteristics are analyzed. Feasibility and performance are illustrated with simulated multiple faults using data from the Tennessee Eastman (TE) benchmark process.
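
The reconstruction step referenced above follows the standard pattern in this literature: the alarmed sample is corrected along a candidate fault subspace, the monitoring statistic is recomputed, and the subspace whose reconstruction best removes the alarm is blamed. Below is a hedged NumPy sketch of that single step, using a plain squared-norm statistic as a stand-in for the paper's subspace-specific monitoring statistics.

```python
import numpy as np

def reconstruct(x, Xi):
    """Remove the component of sample x that lies in fault subspace Xi
    (columns of Xi span the candidate fault directions)."""
    P = Xi @ np.linalg.pinv(Xi)      # projector onto the fault subspace
    return x - P @ x

def diagnose(x, subspaces, statistic=lambda z: float(z @ z)):
    """Rank candidate fault subspaces by how much reconstruction along each
    one reduces the monitoring statistic of the alarmed sample x."""
    scores = {name: statistic(reconstruct(x, Xi)) for name, Xi in subspaces.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Toy usage: the alarmed sample is mostly explained by fault direction "f1".
f1 = np.array([[1.0], [0.0], [0.0]])
f2 = np.array([[0.0], [1.0], [0.0]])
x = np.array([5.0, 0.3, 0.1])
print(diagnose(x, {"f1": f1, "f2": f2}))   # "f1" should rank first (lowest residual)
```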
 

Rui Zhou, Rong Min, Qi Yu, Chanjuan Li, Yong Sheng, Qingguo Zhou, Xuan Wang, Kuan-Ching Li.  2014.  Formal Verification of Fault-Tolerant and Recovery Mechanisms for Safe Node Sequence Protocol. Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on. :813-820.

Fault tolerance has a huge impact on embedded safety-critical systems. The Safe Node Sequence Protocol (SNSP) is one of the technologies designed to contribute to this improvement. In this paper, we present a mechanism for fault tolerance and recovery based on the SNSP to strengthen system robustness, and we analyze and verify the correctness of a fault-tolerant prototype system built on it. In order to verify the correctness of more than thirty failure modes, we have partitioned the complete protocol state machine into several subsystems, followed by the injection of corresponding fault classes into dedicated independent models. Experiments demonstrate that this method effectively reduces the size of the overall state space, and verification results indicate that the protocol is able to recover from the fault model in a fault-tolerant system and continue to operate as errors occur.
 

Chi Sing Chum, Changha Jun, Xiaowen Zhang.  2014.  Implementation of randomize-then-combine constructed hash function. Wireless and Optical Communication Conference (WOCC), 2014 23rd. :1-6.

Hash functions such as the SHA (secure hash algorithm) and MD (message digest) families, which are built upon the Merkle-Damgard construction, suffer many attacks due to the iterative nature of their block-by-block message processing. Chum and Zhang [4] proposed a new hash function construction that applies the randomize-then-combine technique, previously used in incremental hash functions, to the iterative hash function. In this paper, we implement such a hash construction in three ways distinguished by their corresponding padding methods. We conduct the experiment in parallel multi-threaded programming settings. The results show that the speed of the proposed hash function is no worse than that of SHA-1.
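
For readers unfamiliar with the randomize-then-combine idea: each message block is hashed together with its index (the randomizing step) and the per-block digests are then merged with a group operation (the combining step). The sketch below is a toy illustration only, using SHA-1 from hashlib as the per-block primitive and XOR as the combiner; the block size and combiner choice are assumptions, and the paper's three padding variants are not reproduced.

```python
import hashlib

BLOCK_SIZE = 64  # bytes per block (illustrative choice)

def randomize_then_combine(message: bytes) -> bytes:
    """Toy randomize-then-combine hash: hash(index || block) per block,
    combined by XOR.  SHA-1 is used purely for illustration."""
    digest = bytearray(hashlib.sha1(b"").digest_size)
    blocks = [message[i:i + BLOCK_SIZE]
              for i in range(0, len(message), BLOCK_SIZE)] or [b""]
    for index, block in enumerate(blocks):
        # "Randomize": bind the block to its position before hashing it.
        h = hashlib.sha1(index.to_bytes(8, "big") + block).digest()
        # "Combine": XOR the per-block digests (an order-independent combiner).
        digest = bytearray(a ^ b for a, b in zip(digest, h))
    return bytes(digest)

print(randomize_then_combine(b"hello world").hex())
```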
 

2015-05-04
Chakaravarthi, S., Selvamani, K., Kanimozhi, S., Arya, P.K..  2014.  An intelligent agent based privacy preserving model for Web Service security. Electrical and Computer Engineering (CCECE), 2014 IEEE 27th Canadian Conference on. :1-5.

Web Services (WS) play an important role in today's world in providing effective services for humans, and these web services are built with the standards of SOAP, WSDL, and UDDI. This technology enables various service providers to register their services and service requesters to utilize those services over the Internet through pre-established networks. Access to these services also needs to be secured and protected from various types of attacks in the network environment. Exchanging data between two applications on a secure channel is a challenging issue in today's communication world. Traditional security mechanisms such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Internet Protocol Security (IPsec) resolve this problem only partially; hence, this paper proposes a privacy-preserving protocol named HTTPI to secure the communication more efficiently. The HTTPI protocol satisfies QoS requirements such as authentication, authorization, integrity, and confidentiality at various levels of the OSI layers. This work also addresses QoS in terms of non-functional characteristics such as performance (throughput), response time, security, reliability, and capacity. The proposed intelligent agent based model results in excellent throughput, good response time, and improved satisfaction of the QoS requirements.
 

2015-05-06
Mukaddam, A., Elhajj, I., Kayssi, A., Chehab, A..  2014.  IP Spoofing Detection Using Modified Hop Count. Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on. :512-516.

With the global widespread usage of the Internet, more and more cyber-attacks are being performed, and many of these attacks utilize IP address spoofing. This paper describes IP spoofing attacks and the methods currently available to detect or prevent them. In addition, it presents a statistical analysis of the Hop Count parameter used in our proposed IP spoofing detection algorithm. We propose an algorithm, inspired by the Hop Count Filtering (HCF) technique, that changes the learning phase of HCF to include all the possible available Hop Count values. Compared to the original HCF method and its variants, our proposed method increases the true positive rate by at least 9% and consequently increases the overall accuracy of an intrusion detection system by at least 9%. In general, our proposed method performs better than the HCF method and its variants.
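
For context, the Hop Count Filtering idea underlying this work infers the hop count of an arriving packet from its TTL (by guessing the sender's initial TTL from common OS defaults) and flags packets whose inferred hop count disagrees with the value previously learned for that source IP. The snippet below is a generic sketch of that check, not the authors' modified learning phase; the learned table and slack parameter are illustrative assumptions.

```python
# Generic hop-count check in the spirit of Hop Count Filtering (HCF).
COMMON_INITIAL_TTLS = (32, 64, 128, 255)   # typical OS defaults

def infer_hop_count(observed_ttl: int) -> int:
    """Infer hops as (smallest common initial TTL >= observed) - observed."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def looks_spoofed(src_ip: str, observed_ttl: int, learned: dict, slack: int = 0) -> bool:
    """Flag a packet if its inferred hop count deviates from the hop count
    previously learned for this source IP by more than `slack`."""
    hops = infer_hop_count(observed_ttl)
    expected = learned.get(src_ip)
    if expected is None:
        return False                        # unknown source: cannot judge
    return abs(hops - expected) > slack

learned_table = {"192.0.2.10": 14}
print(looks_spoofed("192.0.2.10", observed_ttl=114, learned=learned_table))  # 128-114=14 -> False
print(looks_spoofed("192.0.2.10", observed_ttl=250, learned=learned_table))  # 255-250=5  -> True
```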
 

2015-05-01
Chiaradonna, S., Di Giandomenico, F., Murru, N..  2014.  On a Modeling Approach to Analyze Resilience of a Smart Grid Infrastructure. Dependable Computing Conference (EDCC), 2014 Tenth European. :166-177.

The evolution of electrical grids, both in terms of enhanced ICT functionalities to improve efficiency, reliability and economics, as well as the increasing penetration of renewable redistributed energy resources, results in a more sophisticated electrical infrastructure which poses new challenges from several perspectives, including resilience and quality of service analysis. In addition, the presence of interdependencies, which more and more characterize critical infrastructures (including the power sector), exacerbates the need for advanced analysis approaches, to be possibly employed since the early phases of the system design, to identify vulnerabilities and appropriate countermeasures. In this paper, we outline an approach to model and analyze smart grids and discuss the major challenges to be addressed in stochastic model-based analysis to account for the peculiarities of the involved system elements. Representation of dynamic and flexible behavior of generators and loads, as well as representation of the complex ICT control functions required to preserve and/or re-establish electrical equilibrium in presence of changes need to be faced to assess suitable indicators of the resilience and quality of service of the smart grid.

2015-05-04
Cherkaoui, A., Bossuet, L., Seitz, L., Selander, G., Borgaonkar, R..  2014.  New paradigms for access control in constrained environments. Reconfigurable and Communication-Centric Systems-on-Chip (ReCoSoC), 2014 9th International Symposium on. :1-4.

The Internet of Things (IoT) is here: more than 10 billion units are already connected, and five times more devices are expected to be deployed in the next five years. Technological standardization and the management and fostering of rapid innovation by governments are among the main challenges of the IoT. However, security and privacy are the key to making the IoT reliable and trusted. Security mechanisms for the IoT should provide features such as scalability, interoperability, and light weight. This paper addresses authentication and access control in the frame of the IoT. It presents Physical Unclonable Functions (PUF), which can provide cheap, secure, tamper-proof secret keys to authenticate constrained M2M devices. To be successfully used in the IoT context, this technology needs to be embedded in a standardized identity and access management framework. On the other hand, the Embedded Subscriber Identity Module (eSIM) can provide cellular connectivity with scalability, interoperability, and standard-compliant security protocols. The paper discusses an authorization scheme for a constrained resource server taking advantage of PUF and eSIM features. Concrete IoT use cases are discussed (SCADA and building automation).

Lin Chen, Lu Zhou, Chunxue Liu, Quan Sun, Xiaobo Lu.  2014.  Occlusive vehicle tracking via processing blocks in Markov random field. Progress in Informatics and Computing (PIC), 2014 International Conference on. :294-298.

The technology of vehicle video detection and tracking has been playing an important role in the ITS (Intelligent Transportation Systems) field in recent years. Occlusion among vehicles is one of the most difficult problems in vehicle tracking. In order to handle occlusion, this paper proposes an effective solution that applies a Markov Random Field (MRF) to the traffic images. The contour of the vehicle is first detected using background subtraction; then a number of blocks carrying the vehicle's texture and motion information are filled inside each vehicle. We extract several kinds of information from each block for use in the subsequent tracking. For each occluded block, two groups of clique functions in the MRF model are defined, representing spatial correlation and motion coherence, respectively. By calculating each occluded block's total energy function, we finally solve the attribution problem of occluded blocks. The experimental results show that our method can handle occlusion problems effectively and track each vehicle continuously.
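
To make the block-labeling step concrete: for each occluded block, a total energy combining a spatial clique term (disagreement with neighboring block labels) and a motion-coherence term (difference between the block's motion and the candidate vehicle's motion) is evaluated, and the block is attributed to the label with the lowest energy. The following toy sketch assumes simple forms for both terms and made-up weights; it is not the paper's clique functions.

```python
import numpy as np

def block_energy(label, block_motion, neighbor_labels, vehicle_motions,
                 w_spatial=1.0, w_motion=1.0):
    """Toy MRF energy for assigning one occluded block to vehicle `label`:
    spatial term counts disagreeing neighbors, motion term is the squared
    difference between the block's motion and that vehicle's motion."""
    spatial = sum(1 for nl in neighbor_labels if nl != label)
    motion = np.sum((np.asarray(block_motion) - np.asarray(vehicle_motions[label])) ** 2)
    return w_spatial * spatial + w_motion * motion

def assign_block(block_motion, neighbor_labels, vehicle_motions):
    """Pick the vehicle label minimizing the block's total energy."""
    return min(vehicle_motions,
               key=lambda lab: block_energy(lab, block_motion, neighbor_labels, vehicle_motions))

# Toy usage: two overlapping vehicles with different motion vectors.
vehicle_motions = {"car_A": (4.0, 0.0), "car_B": (-1.0, 0.5)}
print(assign_block(block_motion=(3.6, 0.2),
                   neighbor_labels=["car_A", "car_A", "car_B"],
                   vehicle_motions=vehicle_motions))   # expected: "car_A"
```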
 

2015-05-06
Weikun Hou, Xianbin Wang, Chouinard, J.-Y., Refaey, A..  2014.  Physical Layer Authentication for Mobile Systems with Time-Varying Carrier Frequency Offsets. Communications, IEEE Transactions on. 62:1658-1667.

A novel physical layer authentication scheme is proposed in this paper by exploiting the time-varying carrier frequency offset (CFO) associated with each pair of wireless communications devices. In realistic scenarios, radio frequency oscillators in each transmitter-and-receiver pair always present device-dependent biases to the nominal oscillating frequency. The combination of these biases and mobility-induced Doppler shift, characterized as a time-varying CFO, can be used as a radiometric signature for wireless device authentication. In the proposed authentication scheme, the variable CFO values at different communication times are first estimated. Kalman filtering is then employed to predict the current value by tracking the past CFO variation, which is modeled as an autoregressive random process. To achieve the proposed authentication, the current CFO estimate is compared with the Kalman predicted CFO using hypothesis testing to determine whether the signal has followed a consistent CFO pattern. An adaptive CFO variation threshold is derived for device discrimination according to the signal-to-noise ratio and the Kalman prediction error. In addition, a software-defined radio (SDR) based prototype platform has been developed to validate the feasibility of using CFO for authentication. Simulation results further confirm the effectiveness of the proposed scheme in multipath fading channels.
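
The authentication test described above amounts to predicting the next CFO from its tracked history and checking whether the newly estimated CFO stays within a threshold of that prediction. A minimal scalar Kalman filter for an AR(1) CFO model is sketched below; the AR coefficient, noise variances, and fixed threshold are placeholder assumptions, not the adaptive threshold derived in the paper.

```python
import numpy as np

class CFOTracker:
    """Scalar Kalman filter tracking a time-varying CFO modeled as AR(1):
    cfo[k] = a * cfo[k-1] + process noise; measurements add estimation noise."""
    def __init__(self, a=0.98, q=1e-4, r=1e-3):
        self.a, self.q, self.r = a, q, r      # AR coefficient, noise variances
        self.x, self.p = 0.0, 1.0             # state estimate and its variance

    def predict(self):
        return self.a * self.x, self.a ** 2 * self.p + self.q

    def update(self, z):
        x_pred, p_pred = self.predict()
        k = p_pred / (p_pred + self.r)        # Kalman gain
        self.x = x_pred + k * (z - x_pred)
        self.p = (1.0 - k) * p_pred
        return x_pred                         # prediction made before seeing z

def authenticate(tracker, cfo_estimate, threshold=0.05):
    """Accept the frame if the new CFO estimate is close to the prediction."""
    prediction = tracker.update(cfo_estimate)
    return abs(cfo_estimate - prediction) < threshold

# Toy usage: a legitimate slowly drifting CFO vs. a sudden impostor CFO.
rng = np.random.default_rng(1)
tracker, cfo = CFOTracker(), 0.10
for _ in range(50):
    cfo = 0.98 * cfo + rng.normal(0, 0.01)
    authenticate(tracker, cfo)                 # track legitimate frames
print(authenticate(tracker, cfo))              # legitimate frame -> likely accepted
print(authenticate(tracker, cfo + 0.5))        # impostor with different CFO -> rejected
```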
 

2015-05-04
Coover, B., Jinyu Han.  2014.  A Power Mask based audio fingerprint. Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. :1394-1398.

The Philips audio fingerprint [1] has been used for years, but its robustness against external noise has not been studied accurately. This paper shows the Philips fingerprint is noise resistant, and is capable of recognizing music that is corrupted by noise at a -4 to -7 dB signal-to-noise ratio. In addition, the drawbacks of the Philips fingerprint are addressed by utilizing a “Power Mask” in conjunction with the Philips fingerprint during the matching process. This Power Mask is a weight matrix given to the fingerprint bits, which allows mismatched bits to be penalized according to their relevance in the fingerprint. The effectiveness of the proposed fingerprint was evaluated by experiments using a database of 1030 songs and 1184 query files that were heavily corrupted by two types of noise at varying levels. Our experiments show the proposed method has significantly improved the noise resistance of the standard Philips fingerprint.
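
The matching step described above can be read as a weighted bit comparison: each mismatched fingerprint bit is penalized in proportion to its Power Mask weight instead of counting equally. The snippet below is a toy NumPy illustration of that scoring with made-up fingerprints and weights; it does not implement the Philips fingerprint extraction itself.

```python
import numpy as np

def masked_bit_error_rate(query_fp, ref_fp, power_mask):
    """Weighted bit error rate: mismatched fingerprint bits are penalized in
    proportion to their Power Mask weight instead of counting equally."""
    mismatches = (query_fp != ref_fp).astype(float)
    return float((mismatches * power_mask).sum() / power_mask.sum())

# Toy usage with made-up fingerprints and weights (illustrative only).
rng = np.random.default_rng(0)
ref = rng.integers(0, 2, size=(32, 256))                 # reference fingerprint bits
mask = rng.random((32, 256))                             # toy per-bit relevance weights
noisy = ref ^ (rng.random((32, 256)) < 0.05)             # query corrupted by noise

print(masked_bit_error_rate(noisy, ref, mask))           # low weighted error -> likely match
```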