Biblio

Found 918 results

Filters: First Letter Of Title is T
2015-05-06
Kitsos, Paris, Voyiatzis, Artemios G..  2014.  Towards a hardware Trojan detection methodology. Embedded Computing (MECO), 2014 3rd Mediterranean Conference on. :18-23.

Malicious hardware is a realistic threat. Malicious functionality can be inserted into a device as deep as the hardware design flow, long before the silicon product is manufactured. Towards developing a hardware Trojan horse detection methodology, we analyze the capabilities and limitations of existing techniques and frame a testing strategy for efficiently uncovering hardware Trojan horses in mass-produced integrated circuits.
 

McGettrick, Andrew, Cassel, Lillian N., Dark, Melissa, Hawthorne, Elizabeth K., Impagliazzo, John.  2014.  Toward Curricular Guidelines for Cybersecurity. Proceedings of the 45th ACM Technical Symposium on Computer Science Education. :81–82.

This session reports on a workshop convened by the ACM Education Board with funding from the US National Science Foundation, and invites discussion from the community on the workshop findings. The topic, curricular directions for cybersecurity, is one that resonates in many departments considering how best to prepare graduates to face the challenges of security issues in employment and future research. The session will include a presentation of the workshop context and conclusions, but will be open to participant discussion. This will be the first public presentation of the workshop results and the first opportunity for significant response.

Klaper, David, Hovy, Eduard.  2014.  A Taxonomy and a Knowledge Portal for Cybersecurity. Proceedings of the 15th Annual International Conference on Digital Government Research. :79–85.

Smart government is possible only if the security of sensitive data can be assured. The more knowledgeable government officials and citizens are about cybersecurity, the better the chances that government data is not compromised or abused. In this paper, we present two systems under development that aim at improving cybersecurity education. First, we are creating a taxonomy of cybersecurity topics that provides links to relevant educational or research material. Second, we are building a portal that serves as a platform for users to discuss the security of websites. These sources can be linked together, which helps to strengthen the knowledge of government officials and citizens with regard to cybersecurity issues. These issues are a central concern for open government initiatives.

 

2015-05-05
Lomotey, R.K., Deters, R..  2014.  Terms Mining in Document-Based NoSQL: Response to Unstructured Data. Big Data (BigData Congress), 2014 IEEE International Congress on. :661-668.

Unstructured data mining has become topical recently due to the availability of high-dimensional and voluminous digital content (known as "Big Data") across the enterprise spectrum. Relational Database Management Systems (RDBMS) have been employed over the past decades for content storage and management, but the ever-growing heterogeneity in today's data calls for a new storage approach. Thus, the NoSQL database has emerged as the preferred storage facility, since it supports unstructured data storage. This creates the need to explore efficient data mining techniques for such NoSQL systems, since the available tools and frameworks designed for RDBMS are often not directly applicable. In this paper, we focus on topic and term mining, based on clustering, in document-based NoSQL. This is achieved by adapting the architectural design of an analytics-as-a-service framework and applying the Viterbi algorithm to enhance the accuracy of term classification in the system. Results from pilot testing of our work show higher accuracy in comparison to some previously proposed techniques, such as parallel search.
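
As a rough illustration of the Viterbi component, the sketch below decodes the most likely hidden-state sequence for a stream of terms; the two-state topic/noise model, the probabilities, and the dict-based representation are illustrative assumptions, not the paper's actual parameterization.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence (e.g., topic labels) for `obs`."""
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-9) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-9), p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Illustrative toy model: classify terms as topic-bearing or noise.
states = ("topic", "noise")
start_p = {"topic": 0.5, "noise": 0.5}
trans_p = {"topic": {"topic": 0.7, "noise": 0.3},
           "noise": {"topic": 0.4, "noise": 0.6}}
emit_p = {"topic": {"nosql": 0.3, "cluster": 0.3, "the": 0.01},
          "noise": {"nosql": 0.01, "cluster": 0.01, "the": 0.5}}
print(viterbi(["nosql", "the", "cluster"], states, start_p, trans_p, emit_p))
```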

Babour, A., Khan, J.I..  2014.  Tweet Sentiment Analytics with Context Sensitive Tone-Word Lexicon. Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on. 1:392-399.

In this paper we propose a Twitter sentiment analytics approach that mines for opinion polarity about a given topic. Most current semantic sentiment analytics depend on polarity lexicons; however, many key tone words are frequently bipolar. In this paper we demonstrate a technique that accommodates the bipolarity of tone words through a context-sensitive tone lexicon learning mechanism, where the context is modeled by the semantic neighborhood of the main target. Performance analysis shows that the ability to contextualize tone word polarity significantly improves accuracy.
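
A minimal sketch of the underlying idea, assuming a hypothetical seed lexicon and pre-tokenized tweets: the polarity a bipolar tone word carries for a given topic is inferred from the seed polarities of the words in its local neighborhood.

```python
# Hypothetical seed lexicon; the paper's lexicon and learning mechanism differ.
SEED = {"great": 1, "love": 1, "fast": 1, "terrible": -1, "slow": -1, "crash": -1}

def contextual_polarity(tone_word, tweets, window=3):
    """Aggregate seed polarities of words co-occurring with `tone_word`
    within a +/- `window` token neighborhood across the tweet corpus."""
    score = 0
    for tokens in tweets:
        for i, tok in enumerate(tokens):
            if tok == tone_word:
                lo, hi = max(0, i - window), i + window + 1
                score += sum(SEED.get(w, 0) for w in tokens[lo:hi] if w != tone_word)
    return 1 if score > 0 else (-1 if score < 0 else 0)

# "unpredictable" is negative for cars but can be positive for thrillers.
tweets = [["the", "plot", "was", "unpredictable", "i", "love", "it"]]
print(contextual_polarity("unpredictable", tweets))  # -> 1 in this context
```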

Mukkamala, R.R., Hussain, A., Vatrapu, R..  2014.  Towards a Set Theoretical Approach to Big Data Analytics. Big Data (BigData Congress), 2014 IEEE International Congress on. :629-636.

Formal methods, models, and tools for social big data analytics are largely limited to graph-theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal, and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss the Social Data Analytics Tool (SODATO), which realizes the conceptual model in software and provisions social data analysis based on the conceptual and formal models. Fourth and last, based on the formal model and sentiment analysis of text, we present a method for profiling artifacts and actors, and apply this technique to the analysis of big social data collected from the Facebook page of the fast-fashion company H&M.
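
As one toy reading of such a set-theoretical model (the tuple fields and helper functions are our illustrative assumptions, not SODATO's actual formalism), social data can be treated as a set of (actor, action, artifact, time) tuples, with actor and artifact profiles obtained by set comprehension:

```python
# Social data as a set of (actor, action, artifact, time) tuples.
S = {
    ("alice", "like",    "post_1", 1),
    ("bob",   "comment", "post_1", 2),
    ("alice", "comment", "post_2", 3),
}

def actor_profile(S, actor):
    """All social-data tuples involving a given actor."""
    return {t for t in S if t[0] == actor}

def artifact_actors(S, artifact):
    """The set of actors who acted on a given artifact."""
    return {a for (a, _, art, _) in S if art == artifact}

print(actor_profile(S, "alice"))
print(artifact_actors(S, "post_1"))  # -> {'alice', 'bob'}
```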
 

Toshiro Yano, E., Bhatt, P., Gustavsson, P.M., Ahlfeldt, R.-M..  2014.  Towards a Methodology for Cybersecurity Risk Management Using Agents Paradigm. Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint. :325-325.

In order to deal with shortcomings of security management systems, this work proposes a methodology based on the agent paradigm for cybersecurity risk management. In this approach, a system is decomposed into agents that may be used to attain goals established by attackers. Threats to the business are realized through attacker goals in service and deployment agents. To support proactive behavior, sensors linked to security mechanisms are analyzed in accordance with a model for Situational Awareness (SA) [4].
 

Voigt, S., Schoepfer, E., Fourie, C., Mager, A..  2014.  Towards semi-automated satellite mapping for humanitarian situational awareness. Global Humanitarian Technology Conference (GHTC), 2014 IEEE. :412-416.

Very high resolution satellite imagery used to be a rare commodity, with infrequent satellite pass-over times over a specific area of interest precluding many useful applications. Today, more and more such satellite systems are available, with visual analysis and interpretation of imagery still important for deriving relevant features and changes from satellite data. In order to allow efficient, robust, and routine image analysis for humanitarian purposes, semi-automated feature extraction is of increasing importance for operational emergency mapping tasks. Within the framework of the European Earth Observation program COPERNICUS and related research activities under the European Union's Seventh Framework Program, substantial scientific developments and mapping services are dedicated to satellite-based humanitarian mapping and monitoring. In this paper, recent results in methodological research and development of routine services in satellite mapping for humanitarian situational awareness are reviewed and discussed. Ethical aspects of sensitivity and security of humanitarian mapping are deliberated. Furthermore, methods for monitoring and analysis of refugee/internally displaced persons camps in humanitarian settings are assessed. Advantages and limitations of object-based image analysis, sample supervised segmentation, and feature extraction are presented and discussed.
 

Schneider, S., Lansing, J., Gao, Fangjian, Sunyaev, A..  2014.  A Taxonomic Perspective on Certification Schemes: Development of a Taxonomy for Cloud Service Certification Criteria. System Sciences (HICSS), 2014 47th Hawaii International Conference on. :4998-5007.

Numerous cloud service certifications (CSCs) are emerging in practice. However, in their striving to establish the market standard, CSC initiatives proceed independently, resulting in a disparate collection of CSCs that are predominantly proprietary, based on various standards, and differ in terms of scope, audit process, and underlying certification schemes. Although literature suggests that a certification's design influences its effectiveness, research on CSC design is lacking and there are no commonly agreed structural characteristics of CSCs. Informed by data from 13 expert interviews and 7 cloud computing standards, this paper delineates and structures CSC knowledge by developing a taxonomy for criteria to be assessed in a CSC. The taxonomy consists of 6 dimensions with 28 subordinate characteristics and classifies 328 criteria, thereby building foundations for future research to systematically develop and investigate the efficacy of CSC designs as well as providing a knowledge base for certifiers, cloud providers, and users.
 

Hu, Jiankun, Pota, H.R., Guo, Song.  2014.  Taxonomy of Attacks for Agent-Based Smart Grids. Parallel and Distributed Systems, IEEE Transactions on. 25:1886-1895.

Being the most important critical infrastructure in Cyber-Physical Systems (CPSs), a smart grid exhibits the complicated nature of a large-scale, distributed, and dynamic environment. A taxonomy of attacks is an effective tool for systematically classifying attacks, and it has been placed as a top research topic in CPS by a National Science Foundation (NSF) Workshop. Most existing taxonomies of attacks in CPS are inadequate in addressing the tight coupling of the cyber-physical process and/or lack systematic construction. This paper introduces a taxonomy of attacks on agent-based smart grids as an effective tool to provide a structured framework. The proposed idea of introducing the structure of space-time and information flow direction, security features, and cyber-physical causality is innovative, and it can establish a taxonomy design mechanism that systematically constructs the taxonomy of cyber attacks that could have a potential impact on the normal operation of agent-based smart grids. Based on the cyber-physical relationship revealed in the taxonomy, a concrete physical-process-based cyber attack detection scheme is proposed. A numerical illustrative example is provided to validate the proposed detection scheme.
 

McDaniel, P., Rivera, B., Swami, A..  2014.  Toward a Science of Secure Environments. Security Privacy, IEEE. 12:68-70.

The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and conclude by considering how environmental security can be cast as a continuous optimization problem.
 

Mahmood, A., Akbar, A.H..  2014.  Threats in end to end commercial deployments of Wireless Sensor Networks and their cross layer solution. Information Assurance and Cyber Security (CIACS), 2014 Conference on. :15-22.

Commercial Wireless Sensor Networks (WSNs) can be accessed through sensor web portals. However, the associated security implications and threats to 1) users/subscribers, 2) investors, and 3) third-party operators of sensor web portals have not been considered in their entirety; contemporary work handles them only in parts. In this paper, we discuss different kinds of security attacks and vulnerabilities, at different layers, to users, investors including Wireless Sensor Network Service Providers (WSNSPs), and the WSN itself, in relation to the two well-known documents of the "Department of Homeland Security" (DHS) and the "Department of Defense" (DOD), as these remain the standard security documents to date. Further, we propose a comprehensive cross-layer security solution, in light of the guidelines given in the aforementioned documents, that is minimalist in implementation and achieves the purported security goals.
 

Rashad Al-Dhaqm, A.M., Othman, S.H., Abd Razak, S., Ngadi, A..  2014.  Towards adapting metamodelling technique for database forensics investigation domain. Biometrics and Security Technologies (ISBAST), 2014 International Symposium on. :322-327.

Threats from database insiders or outsiders pose a major challenge to the protection of integrity and confidentiality in many database systems. To overcome this situation, a new domain called Database Forensics (DBF) has been introduced to specifically investigate these dynamic threats, which have posed many problems in the Database Management Systems (DBMS) of many organizations. DBF is a process to identify, collect, preserve, analyse, reconstruct, and document all digital evidence caused by this challenge. However, to date, this domain still lacks a standard and generic knowledge base for its forensic investigation methods and tools, due to many issues and challenges in its complex processes. Therefore, this paper presents an approach adapted from the software engineering domain, called metamodelling, which unifies these complex DBF knowledge processes into one artifact, a metamodel (the DBF Metamodel). In future, the DBF Metamodel could benefit many DBF investigation users, such as database investigators, stakeholders, and other forensic teams, by offering various possible solutions for their problem domain.
 

Sanger, J., Richthammer, C., Hassan, S., Pernul, G..  2014.  Trust and Big Data: A Roadmap for Research. Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on. :278-282.

We are currently living in the age of Big Data, which comes along with the challenge of grasping the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On the one side, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other side, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and inter-dependencies. Based on this, we define focal issues which serve as future research directions on the track to our vision of Next Generation Online Trust within the FORSEC project.
 

Murthy, Praveen K..  2014.  Top ten challenges in Big Data security and privacy. Test Conference (ITC), 2014 IEEE International. :1-1.

Multiple Inductive Loop Detectors are advanced inductive loop sensors that can measure traffic flow parameters even in conditions where the traffic is heterogeneous and does not conform to lanes. This sensor consists of many inductive loops in series, with each loop having a parallel capacitor across it. These inductive and capacitive elements of the sensor may undergo open- or short-circuit faults during operation. Such faults lead to erroneous interpretation of data acquired from the loops. Conventional methods used for fault diagnosis in inductive loop detectors consume time and effort, as they require experienced technicians and involve extraction of loops from the saw-cut slots on the road. This also means that the traffic flow parameters cannot be measured until the sensor system becomes functional again. The repair activities would also disturb traffic flow. This paper presents a method for automating fault diagnosis for series-connected Multiple Inductive Loop Detectors, based on an impulse test. The system helps in the diagnosis of open/short faults associated with the inductive and capacitive elements of the sensor structure by displaying the fault status conveniently. Since the fault location as well as the fault type can be precisely identified using this method, the repair actions are also localised. The proposed system thereby results in significant savings in both repair time and repair costs. An embedded system was developed to realize this scheme, and the same was tested on a loop prototype.
 

Uymatiao, M.L.T., Yu, W.E.S..  2014.  Time-based OTP authentication via secure tunnel (TOAST): A mobile TOTP scheme using TLS seed exchange and encrypted offline keystore. Information Science and Technology (ICIST), 2014 4th IEEE International Conference on. :225-229.

The main objective of this research is to build upon existing cryptographic standards and web protocols to design an alternative multi-factor authentication cryptosystem for the web. It involves seed exchange to a software-based token through a login-protected Transport Layer Security (TLS/SSL) tunnel, encrypted local storage through a password-protected keystore (BC UBER) with a strong key derivation function (PBEWithSHAAndTwofish-CBC), and offline generation of one-time passwords through the TOTP algorithm (IETF RFC 6238). Authentication occurs through the use of a shared secret (the seed) to verify the correctness of the one-time password used to authenticate. With the traditional use of username and password no longer wholly adequate for protecting online accounts, and with regulators worldwide toughening up security requirements (e.g., BSP 808, FFIEC), this research hopes to increase research effort on further development of cryptosystems involving multi-factor authentication.
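
For reference, TOTP generation per RFC 6238 reduces to an HMAC over a time-step counter followed by dynamic truncation (RFC 4226). A minimal sketch is shown below; the TLS seed exchange and the encrypted UBER keystore described in the paper are outside its scope.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 one-time password from a base32-encoded seed."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)  # time-step counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The verifier recomputes the code from the shared seed and compares,
# typically allowing one time step of clock drift on either side.
print(totp("JBSWY3DPEHPK3PXP"))  # example seed, not from the paper
```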
 

Chen, Yi-Hui, Chan, Chi-Shiang, Hsu, Po-Yu, Huang, Wei-Lin.  2014.  Tagged visual cryptography with access control. Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on. :1-5.

Visual cryptography is a way to encrypt a secret image into several meaningless share images; no information can be obtained unless all of the shares are collected, and by stacking the share images the secret image can be retrieved. Because the share images are meaningless to their owners, they are difficult to manage. Tagged visual cryptography is a technique for printing a pattern onto the meaningless share images, after which users can easily manage their own share images according to the printed pattern. Access control is another popular topic, allowing a user or a group to see only their own authorizations. In this paper, a self-authentication mechanism with lossless construction ability for an image secret sharing scheme is proposed. The experiments provide positive data showing the feasibility of the proposed scheme.
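
For background, the classic Naor-Shamir (2, 2) scheme underlying visual cryptography expands each secret pixel into a pair of 2x2 subpixel blocks that are identical for white pixels and complementary for black ones; stacking is a pixel-wise OR. The sketch below shows only this base scheme and omits the tagging and access-control extensions the paper proposes.

```python
import numpy as np

rng = np.random.default_rng()
# The two complementary 2x2 subpixel patterns (1 = black subpixel).
PATTERNS = np.array([[[1, 0], [0, 1]],
                     [[0, 1], [1, 0]]])

def make_shares(secret):
    """secret: binary image, 1 = black. Returns two meaningless shares."""
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = rng.integers(2)
            s1[2*i:2*i+2, 2*j:2*j+2] = PATTERNS[p]
            # White pixel: identical blocks; black pixel: complementary blocks.
            q = p if secret[i, j] == 0 else 1 - p
            s2[2*i:2*i+2, 2*j:2*j+2] = PATTERNS[q]
    return s1, s2

def stack(s1, s2):
    """Physically stacking transparencies corresponds to a pixel-wise OR."""
    return np.logical_or(s1, s2).astype(int)
```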
 

Albino Pereira, A., Bosco M. Sobral, J., Merkle Westphall, C..  2014.  Towards Scalability for Federated Identity Systems for Cloud-Based Environments. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

As multi-tenant authorization and federated identity management systems for cloud computing mature, the provisioning of services using this paradigm allows maximum efficiency for businesses that require access control. However, regarding scalability support, mainly horizontal scalability, some characteristics of approaches based on central authentication protocols are problematic. The objective of this work is to address these issues by providing an adapted sticky-session mechanism for a Shibboleth architecture using CAS. Compared with the recommended shared-memory approach, this alternative showed improved efficiency and less overall infrastructure complexity.

2015-05-04
Kreutz, D., Bessani, A., Feitosa, E., Cunha, H..  2014.  Towards Secure and Dependable Authentication and Authorization Infrastructures. Dependable Computing (PRDC), 2014 IEEE 20th Pacific Rim International Symposium on. :43-52.

We propose a resilience architecture for improving the security and dependability of authentication and authorization infrastructures, in particular the ones based on RADIUS and OpenID. This architecture employs intrusion-tolerant replication, trusted components and entrusted gateways to provide survivable services ensuring compatibility with standard protocols. The architecture was instantiated in two prototypes, one implementing RADIUS and another implementing OpenID. These prototypes were evaluated in fault-free executions, under faults, under attack, and in diverse computing environments. The results show that, beyond being more secure and dependable, our prototypes are capable of achieving the performance requirements of enterprise environments, such as IT infrastructures with more than 400k users.
 

Bianchi, T., Piva, A..  2014.  TTP-free asymmetric fingerprinting protocol based on client side embedding. Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. :3987-3991.

In this paper, we propose a scheme to employ an asymmetric fingerprinting protocol within a client-side embedding distribution framework. The scheme is based on a novel client-side embedding technique that is able to transmit a binary fingerprint. This enables secure distribution of personalized decryption keys containing the buyer's fingerprint by means of existing asymmetric protocols, without using a trusted third party. Simulation results show that the fingerprint can be reliably recovered by using non-blind decoding, and that it is robust with respect to common attacks. The proposed scheme can be a valid solution to both customers' rights and scalability issues in multimedia content distribution.

2015-05-01
Rasheed, N., Khan, S.A., Khalid, A..  2014.  Tracking and Abnormal Behavior Detection in Video Surveillance Using Optical Flow and Neural Networks. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :61-66.

An abnormal behavior detection algorithm for surveillance is required to correctly identify targets as being in normal or chaotic movement. A model is developed here for this purpose. The uniqueness of this algorithm is the use of a foreground detection with Gaussian mixture model (FGMM) before passing the video frames to an optical flow model using the Lucas-Kanade approach. Information on the horizontal and vertical displacements and directions associated with each pixel of the object of interest is extracted. These features are then fed to a feed-forward neural network for classification and simulation. The study is conducted on real-time videos and some synthesized videos. The accuracy of the method has been calculated using the performance parameters for neural networks. In comparison with plain optical flow, this model obtained improved results without noise. Classes are correctly identified with an overall performance equal to 3.4e-02 and an error percentage of 2.5.
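
A rough sketch of such a front end using OpenCV is given below; the input file name and all parameter values are placeholders, and the paper's exact FGMM settings and network architecture are not reproduced.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")           # hypothetical input
fgmm = cv2.createBackgroundSubtractorMOG2()          # Gaussian-mixture foreground model

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
prev_mask = fgmm.apply(prev)                         # restrict tracking to foreground

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=7, mask=prev_mask)
    if pts is not None:
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = st.flatten() == 1
        d = (nxt - pts).reshape(-1, 2)[good]
        # Displacements, magnitudes, and directions: the per-point features
        # that a feed-forward classifier could label as normal or chaotic.
        feats = np.column_stack([d, np.hypot(d[:, 0], d[:, 1]),
                                 np.arctan2(d[:, 1], d[:, 0])])
    prev_gray, prev_mask = gray, fgmm.apply(frame)
```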

Parno, Bryan Jeffery.  2014.  Trust Extension As a Mechanism for Secure Code Execution on Commodity Computers.

From the Preface

As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981].

In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters.

Introduction. This chapter introduces the notion of bootstrapping trust from one device or service to another and gives an overview of how the subsequent chapters fit together.
Background and related work. This chapter focuses on existing techniques for bootstrapping trust in commodity computers, specifically by conveying information about a computer's current execution environment to an interested party. This would, for example, enable a user to verify that her computer is free of malware, or that a remote web server will handle her data responsibly. 
Bootstrapping trust in a commodity computer. At a high level, this chapter develops techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. While the problem is simply stated, finding a solution that is both secure and usable with existing hardware proves quite difficult.
On-demand secure code execution. Rather than entrusting a user's data to the mountain of buggy code likely running on her computer, in this chapter, we construct an on-demand secure execution environment which can perform security sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
Using trustworthy host data in the network. Having established an environment for secure code execution on an individual computer, this chapter shows how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network.
Secure code execution on untrusted hardware. Lastly, this chapter extends the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner. 

Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers.

Akram, R.N., Markantonakis, K., Mayes, K..  2014.  Trusted Platform Module for Smart Cards. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

Near Field Communication (NFC)-based mobile phone services offer a lifeline to the under-appreciated multiapplication smart card initiative. The initiative could effectively replace heavy wallets full of smart cards for mundane tasks. However, the issue of the deployment model still lingers on. Possible approaches include, but are not restricted to, the User Centric Smart card Ownership Model (UCOM), GlobalPlatform Consumer Centric Model, and Trusted Service Manager (TSM). In addition, multiapplication smart card architecture can be a GlobalPlatform Trusted Execution Environment (TEE) and/or User Centric Tamper-Resistant Device (UCTD), which provide cross-device security and privacy preservation platforms to their users. In the multiapplication smart card environment, there might not be a prior off-card trusted relationship between a smart card and an application provider. Therefore, as a possible solution to overcome the absence of prior trusted relationships, this paper proposes the concept of Trusted Platform Module (TPM) for smart cards (embedded devices) that can act as a point of reference for establishing the necessary trust between the device and an application provider, and among applications.

Pelechrinis, Konstantinos, Krishnamurthy, Prashant, Gkantsidis, Christos.  2014.  Trustworthy Operations in Cellular Networks: The Case of PF Scheduler. IEEE Trans. Parallel Distrib. Syst.. 25:292–300.

Cellular data networks are proliferating to address the need for ubiquitous connectivity. To cope with the increasing number of subscribers and with the spatiotemporal variations of the wireless signals, current cellular networks use opportunistic schedulers, such as the Proportional Fairness scheduler (PF), to maximize network throughput while maintaining fairness among users. Such scheduling decisions are based on channel quality metrics and Automatic Repeat reQuest (ARQ) feedback reports provided by the User's Equipment (UE). Implicit in current networks is the a priori trust in every UE's feedback. Malicious UEs can thus exploit this trust to disrupt service by intelligently faking their reports. This work proposes a trustworthy version of the PF scheduler (called TPF) to mitigate the effects of such Denial-of-Service (DoS) attacks. In brief, based on the channel quality reported by the UE, we assign a probability to each possible ARQ feedback. We then use the probability associated with the actual ARQ report to assess the UE's reporting trustworthiness, and adapt the scheduling mechanism to give higher priority to more trusted users. Our evaluations show that TPF 1) does not induce any performance degradation under benign settings, and 2) completely mitigates the effects of the activity of malicious UEs. In particular, while colluding attackers can obtain up to 77 percent of the time slots with the most sophisticated attack, TPF is able to contain this percentage to as low as 6 percent.
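
A schematic sketch of the scheme's core loop appears below; the CQI-to-ACK-probability mapping and the exponential trust update are our illustrative assumptions, not the paper's exact estimator.

```python
class User:
    def __init__(self, uid):
        self.uid = uid
        self.avg_rate = 1e-3   # exponentially averaged served rate R_i
        self.trust = 1.0       # reporting trustworthiness in (0, 1]

def ack_probability(reported_cqi):
    """Hypothetical mapping from reported channel quality (CQI 1..15)
    to the expected probability of a successful transmission (ACK)."""
    return min(0.99, 0.1 + 0.06 * reported_cqi)

def update_trust(user, reported_cqi, ack, alpha=0.05):
    # Probability the model assigned to the ARQ outcome actually observed:
    # consistently improbable reports (e.g., high CQI but NACKs) erode trust.
    p = ack_probability(reported_cqi)
    likelihood = p if ack else 1.0 - p
    user.trust = (1 - alpha) * user.trust + alpha * likelihood

def tpf_schedule(users, inst_rate):
    """Trust-weighted proportional fairness: the PF metric r_i / R_i
    scaled by each user's reporting trustworthiness."""
    return max(users, key=lambda u: u.trust * inst_rate[u.uid] / u.avg_rate)
```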

Wen, Kun, Yang, Jiahai, Cheng, Fengjuan, Li, Chenxi, Wang, Ziyu, Yin, Hui.  2014.  Two-stage detection algorithm for RoQ attack based on localized periodicity analysis of traffic anomaly. Computer Communication and Networks (ICCCN), 2014 23rd International Conference on. :1-6.

Reduction of Quality (RoQ) is a stealthy denial-of-service attack that can decrease or inhibit normal TCP flows in a network. Victims can hardly perceive it, as the final network throughput is decreasing instead of increasing during the attack. The attack is therefore strongly hidden and difficult for existing detection systems to detect. Based on the principles of time-frequency analysis, we propose a two-stage detection algorithm that combines anomaly detection with misuse detection. In the first stage, we detect potential anomalies by analyzing network traffic with the wavelet multiresolution analysis method; according to the different time-domain characteristics, we locate the abrupt change points. In the second stage, we further analyze the local traffic around each abrupt change point, extracting potential attack characteristics by autocorrelation analysis. Through this two-stage detection, we can ultimately confirm whether the network is affected by the attack. Results of simulations and real network experiments demonstrate that our algorithm can detect RoQ attacks with high accuracy and high efficiency.
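
A compressed sketch of the two stages using PyWavelets is shown below; the wavelet choice, thresholds, and index mapping are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
import pywt

def detect_change_points(traffic, wavelet="db4", level=4, k=3.0):
    """Stage 1: flag abrupt changes via wavelet multiresolution analysis."""
    coeffs = pywt.wavedec(traffic, wavelet, level=level)
    detail = coeffs[-1]                               # finest-scale detail coefficients
    sigma = np.median(np.abs(detail)) / 0.6745        # robust noise estimate (MAD)
    idx = np.nonzero(np.abs(detail) > k * sigma)[0]
    return idx * 2                                    # approximate sample positions

def is_periodic(local_traffic, min_peak=0.5):
    """Stage 2: autocorrelation around a change point; a strong secondary
    peak suggests the periodic low-rate pulses characteristic of RoQ."""
    x = local_traffic - np.mean(local_traffic)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]
    return bool(np.any(ac[1:] > min_peak))
```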