Biblio

Found 5756 results

Filters: Keyword is Human Behavior
2017-09-19
Huang, Jianjun, Zhang, Xiangyu, Tan, Lin.  2016.  Detecting Sensitive Data Disclosure via Bi-directional Text Correlation Analysis. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :169–180.

Traditional sensitive data disclosure analysis faces two challenges: to identify sensitive data that is not generated by specific API calls, and to report the potential disclosures when the disclosed data is recognized as sensitive only after the sink operations. We address these issues by developing BidText, a novel static technique to detect sensitive data disclosures. BidText formulates the problem as a type system, in which variables are typed with the text labels that they encounter (e.g., during key-value pair operations). The type system features a novel bi-directional propagation technique that propagates the variable label sets through forward and backward data-flow. A data disclosure is reported if a parameter at a sink point is typed with a sensitive text label. BidText is evaluated on 10,000 Android apps. It reports 4,406 apps that have sensitive data disclosures, with 4,263 apps having log-based disclosures and 1,688 having disclosures due to other sinks such as HTTP requests. Existing techniques can only report 64.0% of what BidText reports. Manual inspection shows that the false positive rate of BidText is 10%.
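As a rough illustration of the bi-directional label propagation the abstract describes (a deliberate simplification of BidText's type system; the variable names are hypothetical):

```python
def propagate_labels(flows, labels):
    """Propagate text labels along data-flow edges in both directions
    until a fixpoint is reached: any label attached to either endpoint
    of an edge is copied to the other endpoint."""
    changed = True
    while changed:
        changed = False
        for a, b in flows:
            union = labels.get(a, set()) | labels.get(b, set())
            if union != labels.get(a, set()) or union != labels.get(b, set()):
                labels[a] = set(union)
                labels[b] = set(union)
                changed = True
    return labels

# A variable typed with a sensitive label ("password") reaches a sink
# parameter through forward flow; backward flow works symmetrically.
flows = [("v_input", "v_tmp"), ("v_tmp", "sink_arg")]
labels = propagate_labels(flows, {"v_input": {"password"}})
```

A disclosure would then be reported whenever a sink parameter such as `sink_arg` ends up typed with a sensitive text label.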

Reaves, Bradley, Blue, Logan, Tian, Dave, Traynor, Patrick, Butler, Kevin R.B..  2016.  Detecting SMS Spam in the Age of Legitimate Bulk Messaging. Proceedings of the 9th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :165–170.

Text messaging is used by more people around the world than any other communications technology. As such, it presents a desirable medium for spammers. While this problem has been studied by many researchers over the years, the recent increase in legitimate bulk traffic (e.g., account verification, 2FA, etc.) has dramatically changed the mix of traffic seen in this space, reducing the effectiveness of previous spam classification efforts. This paper demonstrates the performance degradation of those detectors when used on a large-scale corpus of text messages containing both bulk and spam messages. Against our labeled dataset of text messages collected over 14 months, the precision and recall of past classifiers fall to 23.8% and 61.3% respectively. However, using our classification techniques and labeled clusters, precision and recall rise to 100% and 96.8%. We not only show that our collected dataset helps to correct many of the overtraining errors seen in previous studies, but also present insights into a number of current SMS spam campaigns.

2017-04-20
Sonewar, P. A., Thosar, S. D..  2016.  Detection of SQL injection and XSS attacks in three tier web applications. 2016 International Conference on Computing Communication Control and automation (ICCUBEA). :1–4.

Web applications are used on a large scale worldwide and handle sensitive personal data of users. Because a web application may maintain data ranging from something as simple as a telephone number to something as important as bank account information, security is a prime concern. With hackers aiming to break through this security using various attacks, we focus on SQL injection and XSS attacks. SQL injection is a very common attack that manipulates the data passing from the web application through the web server to the database server in such a way that it alters or reveals database contents, while Cross Site Scripting (XSS) attacks focus more on the view of the web application and try to trick users in ways that lead to security breaches. We consider the security of three-tier web applications with static and dynamic behavior. A static and dynamic mapping model is created to detect anomalies in the class of SQL injection and XSS attacks.

2017-11-27
Kuze, N., Ishikura, S., Yagi, T., Chiba, D., Murata, M..  2016.  Detection of vulnerability scanning using features of collective accesses based on information collected from multiple honeypots. NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium. :1067–1072.

Attacks against websites are increasing rapidly with the expansion of web services. The growing number of diversified web services makes it difficult to prevent such attacks, owing to the many known vulnerabilities in websites. To overcome this problem, it is necessary to collect the most recent attacks using decoy web honeypots and to implement countermeasures against malicious threats. Web honeypots collect not only malicious accesses by attackers but also benign accesses such as those by web search crawlers. Thus, it is essential to develop a means of automatically identifying malicious accesses in collected data that mixes malicious and benign accesses. In particular, detecting vulnerability scanning, a preliminary step of many attacks, is important for preventing them. In this study, we focus on distinguishing accesses for web crawling from accesses for vulnerability scanning, since these accesses are too similar to be easily told apart. For classification, we propose a feature vector that includes features of collective accesses, e.g., intervals of request arrivals and the dispersion of source port numbers, obtained from multiple honeypots deployed in different networks. Through evaluation using data collected from 37 honeypots in a real network, we show that features of collective accesses are effective for classifying vulnerability scanning and crawling.

2017-05-18
Giang, Nam Ky, Leung, Victor C.M., Lea, Rodger.  2016.  On Developing Smart Transportation Applications in Fog Computing Paradigm. Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications. :91–98.

Smart Transportation applications are by nature examples of Vehicular Ad-hoc Network (VANET) applications, in which mobile vehicles, roadside units, and transportation infrastructure interact with one another to provide value-added services. While abundant research has focused on the communication aspect of such mobile ad-hoc networks, little work targets the development of VANET applications. Among popular VANET applications, a dominant direction is to leverage Cloud infrastructure to execute and deliver applications and services. Recent studies showed that Cloud Computing is not sufficient for many VANET applications, due to the mobility of vehicles and the latency-sensitive requirements they impose. To this end, Fog Computing has been proposed to leverage computation infrastructure closer to the network edge, complementing Cloud Computing in providing latency-sensitive applications and services. However, application development in a Fog environment is much more challenging than in the Cloud because of the distributed nature of Fog systems. In this paper, we investigate how Smart Transportation applications are developed following the Fog Computing approach, the challenges involved, and possible mitigations based on the state of the art.

2017-09-05
Basan, Alexander, Basan, Elena, Makarevich, Oleg.  2016.  Development of the Hierarchal Trust Management System for Mobile Cluster-based Wireless Sensor Network. Proceedings of the 9th International Conference on Security of Information and Networks. :116–122.

In this paper a model of a secure wireless sensor network (WSN) is developed. This model is able to defend against most known network attacks without significantly reducing the energy reserves of the sensor nodes (SNs). We propose clustering as the way of organizing the network, which allows energy consumption to be reduced. Network protection is based on calculating trust levels and establishing trusted relationships between trusted nodes. The primary purpose of the hierarchical trust management system (HTMS) is to protect the WSN from malicious actions of an attacker. The developed system should combine the properties of energy efficiency and reliability. To achieve this goal the following tasks are performed: detection of illegal actions of an intruder; blocking of malicious nodes; avoidance of malicious attacks; determination of the authenticity of nodes; establishment of trusted connections between authentic nodes; and detection of defective nodes and the blocking of their work. The HTMS operation is based on the use of Bayes' theorem and the calculation of direct and centralized trust values.
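The abstract does not give the exact formulas, but a common Bayesian trust estimate of the kind alluded to (a Beta posterior over interaction outcomes) can be sketched as follows; the prior and the function itself are illustrative, not taken from the paper:

```python
def bayesian_trust(successes, failures):
    """Expected trust under a Beta(successes + 1, failures + 1)
    posterior over the probability that a node behaves correctly.
    With no observations, the estimate is a neutral 0.5."""
    return (successes + 1) / (successes + failures + 2)
```

A node observed to cooperate in 8 of 8 interactions would, for example, receive a trust value of 0.9, and a node whose trust falls below some threshold could then be blocked.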

2017-08-22
Ding, Han, Qian, Chen, Han, Jinsong, Wang, Ge, Jiang, Zhiping, Zhao, Jizhong, Xi, Wei.  2016.  Device-free Detection of Approach and Departure Behaviors Using Backscatter Communication. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. :167–177.

Smart environments and security systems require automatic detection of human behaviors, including approaching or departing from an object. Existing human motion detection systems usually require human beings to carry special devices, which limits their applications. In this paper, we present a system called APID that detects arm reaching by analyzing backscatter communication signals from a passive RFID tag on the object. APID does not require human beings to carry any device. The idea is based on the influence of human movements on the backscattered tag signals. APID is compatible with commodity off-the-shelf devices and the EPCglobal Class-1 Generation-2 protocol. In APID, a commercial RFID reader continuously queries tags by emitting RF signals, and tags simply respond with their IDs. A USRP monitor passively analyzes the communication signals and reports approach and departure behaviors. We have implemented the APID system for both single-object and multi-object scenarios, in both horizontal and vertical deployment modes. The experimental results show that APID achieves high detection accuracy.

2017-05-22
Hooshmand, Salman, Mahmud, Akib, Bochmann, Gregor V., Faheem, Muhammad, Jourdan, Guy-Vincent, Couturier, Russ, Onut, Iosif-Viorel.  2016.  D-ForenRIA: Distributed Reconstruction of User-Interactions for Rich Internet Applications. Proceedings of the 25th International Conference Companion on World Wide Web. :211–214.

We present D-ForenRIA, a distributed forensic tool that automatically reconstructs user sessions in Rich Internet Applications (RIAs), using solely the full HTTP traces of the sessions as input. D-ForenRIA automatically recovers each browser state, reconstructs the DOMs, and re-creates screenshots of what was displayed to the user. The tool also recovers every action taken by the user on each state, including the user-input data. Our application domain is security forensics, where sometimes months-old sessions must be quickly reconstructed for immediate inspection. We will demonstrate our tool on a series of RIAs, including a vulnerable banking application created by IBM Security for testing purposes. In that case study, the attacker visits the vulnerable web site and exploits several vulnerabilities (SQL injections, XSS...) to gain access to private information and to perform unauthorized transactions. D-ForenRIA can reconstruct the session, including screenshots of all pages seen by the attacker, the DOM of each page, the steps taken for the unauthorized login, and the inputs exploited for the SQL-injection attack. D-ForenRIA is made efficient by applying advanced reconstruction techniques and by using several browsers concurrently to speed up the reconstruction process. Although we developed D-ForenRIA in the context of security forensics, the tool can also be useful in other contexts, such as aiding RIA debugging and automated RIA scanning.

2017-05-22
Zhu, Xue, Sun, Yuqing.  2016.  Differential Privacy for Collaborative Filtering Recommender Algorithm. Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. :9–16.

Collaborative filtering plays an essential role in a recommender system, which recommends a list of items to a user by learning behavior patterns from the user rating matrix. However, if an attacker has some auxiliary knowledge about a user's purchase history, he or she can infer more information about this user. This poses a serious threat to user privacy. Some methods adopt differential privacy algorithms in collaborative filtering by adding noise to the rating matrix. Although they provide theoretically private results, the influence on recommendation accuracy is not discussed. In this paper, we solve the privacy problem in recommender systems in a different way, by applying the differential privacy method within the recommendation procedure. We design two differentially private recommender algorithms with sampling, named Differentially Private Item-Based Recommendation with sampling (DP-IR for short) and Differentially Private User-Based Recommendation with sampling (DP-UR for short). Both algorithms are based on the exponential mechanism with a carefully designed quality function. Theoretical analyses of the privacy of these algorithms are presented. We also investigate the accuracy of the proposed method and give theoretical results. Experiments are performed on real datasets to verify our methods.
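The paper's quality functions are not reproduced here, but the exponential mechanism on which both DP-IR and DP-UR are said to rest can be sketched as follows; the item names and quality scores are hypothetical:

```python
import math
import random

def exponential_mechanism(items, quality, epsilon, sensitivity=1.0):
    """Select an item with probability proportional to
    exp(epsilon * quality(item) / (2 * sensitivity))."""
    weights = [math.exp(epsilon * quality(i) / (2.0 * sensitivity)) for i in items]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for item, w in zip(items, weights):
        acc += w
        if r <= acc:
            return item
    return items[-1]  # guard against floating-point round-off

# Hypothetical quality scores: higher means more useful to recommend.
scores = {"item_a": 0.1, "item_b": 2.0, "item_c": 0.4}
pick = exponential_mechanism(list(scores), scores.get, epsilon=1.0)
```

Higher-quality items are exponentially more likely to be sampled, yet every item retains nonzero probability, which is what yields the differential privacy guarantee.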

To, Hien, Nguyen, Kien, Shahabi, Cyrus.  2016.  Differentially Private Publication of Location Entropy. Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. :35:1–35:10.

Location entropy (LE) is a popular metric for measuring the popularity of various locations (e.g., points-of-interest). Unlike other metrics computed from only the number of (unique) visits to a location, namely frequency, LE also captures the diversity of the users' visits, and is thus more accurate than other metrics. Current solutions for computing LE require full access to the past visits of users to locations, which poses privacy threats. This paper discusses, for the first time, the problem of perturbing location entropy for a set of locations according to differential privacy. The problem is challenging because removing a single user from the dataset will impact multiple records of the database; i.e., all the visits made by that user to various locations. Towards this end, we first derive non-trivial, tight bounds for both local and global sensitivity of LE, and show that to satisfy ε-differential privacy, a large amount of noise must be introduced, rendering the published results useless. Hence, we propose a thresholding technique to limit the number of users' visits, which significantly reduces the perturbation error but introduces an approximation error. To achieve better utility, we extend the technique by adopting two weaker notions of privacy: smooth sensitivity (slightly weaker) and crowd-blending (strictly weaker). Extensive experiments on synthetic and real-world datasets show that our proposed techniques preserve original data distribution without compromising location privacy.
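The paper's exact noise calibration is not reproduced here, but the underlying location entropy computation and the visit-capping (thresholding) idea can be sketched as follows; the visit counts are hypothetical:

```python
import math

def location_entropy(visit_counts):
    """Shannon entropy of a location's visits, computed from per-user
    visit counts; higher values indicate a more diverse visitor base."""
    total = sum(visit_counts)
    probs = [c / total for c in visit_counts if c > 0]
    return -sum(p * math.log(p) for p in probs)

def capped_counts(visit_counts, cap):
    """Thresholding step: limit each user's contribution to at most
    `cap` visits, bounding sensitivity at the cost of approximation error."""
    return [min(c, cap) for c in visit_counts]

# Four users visiting a point-of-interest equally gives maximal diversity.
le = location_entropy([5, 5, 5, 5])  # log(4)
le_capped = location_entropy(capped_counts([50, 1, 1, 1], cap=3))
```

Capping a dominant user's visits (50 down to 3 above) limits how much removing that user can change the published entropy, which is exactly why the thresholding reduces the noise that must be added.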

2017-06-27
Haimson, Oliver L., Brubaker, Jed R., Dombrowski, Lynn, Hayes, Gillian R..  2016.  Digital Footprints and Changing Networks During Online Identity Transitions. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :2895–2907.

Digital artifacts on social media can challenge individuals during identity transitions, particularly those who prefer to delete, separate from, or hide data that are representative of a past identity. This work investigates concerns and practices reported by transgender people who transitioned while active on Facebook. We analyze open-ended survey responses from 283 participants, highlighting types of data considered problematic when separating oneself from a past identity, and challenges and strategies people engage in when managing personal data in a networked environment. We find that people shape their digital footprints in two ways: by editing the self-presentational data that is representative of a prior identity, and by managing the configuration of people who have access to that self-presentation. We outline the challenging interplay between shifting identities, social networks, and the data that suture them together. We apply these results to a discussion of the complexities of managing and forgetting the digital past.

2017-05-30
Jadhao, Ankita R., Agrawal, Avinash J..  2016.  A Digital Forensics Investigation Model for Social Networking Site. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :130:1–130:4.

Social networking is fundamentally shifting the way we communicate, share ideas, and form opinions. People from every age group use social media and e-commerce sites for their needs. Nowadays, almost every kind of illegal activity also takes place over social networks and instant messaging, and present systems are not capable of finding all suspicious words. In this paper, we provide a brief description of the problem, review the different frameworks developed so far, and propose an improved system that can identify criminal activity through social networking more efficiently. The system uses an Ontology-Based Information Extraction (OBIE) technique to identify the domain of words and association rule mining to generate rules. A heuristic method checks the user database for malicious users according to predefined elements, and the Naïve Bayes method is used to identify the context behind a message or post. The experimental result is used for further action by the cyber crime department.

2017-08-02
Bertot, John Carlo, Estevez, Elsa, Janowski, Tomasz.  2016.  Digital Public Service Innovation: Framework Proposal. Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance. :113–122.

This paper proposes the Digital Public Service Innovation Framework that extends the "standard" provision of digital public services according to the emerging, enhanced, transactional and connected stages underpinning the United Nations Global e-Government Survey, with seven example "innovations" in digital public service delivery – transparent, participatory, anticipatory, personalized, co-created, context-aware and context-smart. Unlike the "standard" provisions, innovations in digital public service delivery are open-ended – new forms may continuously emerge in response to new policy demands and technological progress, and are non-linear – one innovation may or may not depend on others. The framework builds on the foundations of public sector innovation and Digital Government Evolution model. In line with the latter, the paper equips each innovation with sharp logical characterization, body of research literature and real-life cases from around the world to simultaneously serve the illustration and validation goals. The paper also identifies some policy implications of the framework, covering a broad range of issues from infrastructure, capacity, eco-system and partnerships, to inclusion, value, channels, security, privacy and authentication.

2017-06-05
Pevny, Tomas, Somol, Petr.  2016.  Discriminative Models for Multi-instance Problems with Tree Structure. Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security. :83–91.

Modelling network traffic is gaining importance in countering modern security threats of ever-increasing sophistication. It is, however, surprisingly difficult and costly to construct reliable classifiers on top of telemetry data, due to the variety and complexity of signals that no human can interpret in full. Obtaining training data with a sufficiently large and variable body of labels can thus be seen as a prohibitive problem. The goal of this work is to detect infected computers by observing their HTTP(S) traffic collected from network sensors, which are typically proxy servers or network firewalls, while relying on only minimal human input in the model-training phase. We propose a discriminative model that makes decisions based on all of a computer's traffic observed during a predefined time window (5 minutes in our case). The model is trained on traffic samples collected over equally sized time windows for a large number of computers, where the only labels needed are (human) verdicts about each computer as a whole (presumed infected vs. presumed clean). As part of training, the model itself learns discriminative patterns in traffic targeted at individual servers and constructs the final high-level classifier on top of them. We show the classifier to perform with very high precision, and demonstrate that the learned traffic patterns can be interpreted as Indicators of Compromise. We implement the discriminative model as a neural network with a special structure reflecting two stacked multi-instance problems. The main advantages of the proposed configuration include not only improved accuracy and the ability to learn from coarse labels, but also automatic learning of server types (together with their detectors) that are typically visited by infected computers.

Zhao, Dexin, Ma, Zhen, Zhang, Degan.  2016.  A Distributed and Adaptive Trust Evaluation Algorithm for MANET. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :47–54.

We propose a distributed and adaptive trust evaluation algorithm (DATEA) to calculate the trust between nodes. First, communication trust is calculated from the number of data packets exchanged between nodes, and a predicted trust is derived from the trend of this value; the comprehensive trust then combines the history trust with the predicted value. Energy trust is calculated from the residual energy of nodes, and direct trust is obtained from the communication trust and the energy trust. Second, recommendation trust is calculated from recommendation reliability and recommendation familiarity; an adaptive weighting method then combines the direct trust with the recommendation trust into an integrated direct trust. Third, based on the integrated direct trust and accounting for trust propagation distance, the indirect trust between nodes is calculated. Simulation experiments show that the proposed algorithm can effectively avoid the attacks of malicious nodes, and that the calculated direct and indirect trust of normal nodes is more consistent with the actual situation.
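The abstract does not give DATEA's concrete formulas; the following sketch only illustrates the kind of weighted trust combination it describes, with illustrative weights that are not taken from the paper:

```python
def direct_trust(comm_trust, energy_trust, w_comm=0.6, w_energy=0.4):
    """Direct trust as a weighted sum of communication trust and
    energy trust (weights are illustrative, not from the paper)."""
    return w_comm * comm_trust + w_energy * energy_trust

def integrated_trust(direct, recommendation, reliability):
    """Adaptive weighting: the more reliable the recommenders are
    judged to be, the more weight their recommendation receives."""
    return (1.0 - reliability) * direct + reliability * recommendation

# A node with good communication behavior but low residual energy,
# whose recommenders are only moderately reliable:
d = direct_trust(comm_trust=0.9, energy_trust=0.3)
t = integrated_trust(d, recommendation=0.5, reliability=0.4)
```

The adaptive part is that `reliability` shifts weight between a node's own observations and what its neighbors recommend, so unreliable recommenders have little influence.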

2017-05-19
Calumby, Rodrigo Tripodi.  2016.  Diversity-oriented Multimodal and Interactive Information Retrieval. SIGIR Forum. 50:86–86.

Information retrieval methods, especially those considering multimedia data, have evolved towards the integration of multiple sources of evidence in the analysis of the relevance of items for a given user search task. In this context, to attenuate the semantic gap between low-level features extracted from the content of digital objects and high-level semantic concepts (objects, categories, etc.), and to make systems adaptive to different user needs, interactive models have brought the user closer to the retrieval loop, allowing user-system interaction mainly through implicit or explicit relevance feedback. Analogously, diversity promotion has emerged as an alternative for tackling ambiguous or underspecified queries. Additionally, several works have addressed the issue of minimizing the user effort required to provide relevance assessments while keeping an acceptable overall effectiveness. This thesis discusses, proposes, and experimentally analyzes multimodal and interactive diversity-oriented information retrieval methods. This work comprehensively covers the interactive information retrieval literature and also discusses recent advances, the great research challenges, and promising research opportunities. We have proposed and evaluated two relevance-diversity trade-off enhancement workflows, which integrate multiple sources of information from images, such as visual features, textual metadata, geographic information, and user credibility descriptors.
In turn, as an integration of interactive retrieval and diversity promotion techniques, to maximize the coverage of multiple query interpretations/aspects and speed up the information transfer between the user and the system, we have proposed and evaluated a multimodal online learning-to-rank method trained with relevance feedback over diversified results. Our experimental analysis shows that the joint usage of multiple information sources positively impacted the relevance-diversity balancing algorithms. Our results also suggest that the integration of multimodal relevance-based filtering and reranking is effective in improving result relevance and also boosts diversity promotion methods. Beyond that, through a thorough experimental analysis we have investigated several research questions related to the possibility of improving result diversity while keeping or even improving relevance in interactive search sessions. Moreover, we analyze how much the diversification effort affects overall search session results and how different diversification approaches behave for the different data modalities. By analyzing overall and per-feedback-iteration effectiveness, we show that introducing diversity may harm initial results, whereas it significantly enhances overall session effectiveness, considering not only relevance and diversity but also how early the user is exposed to the same amount of relevant items and diversity.

2017-09-15
Nicholas, Charles, Brandon, Robert.  2016.  Document Engineering Issues in Malware Analysis. Proceedings of the 2016 ACM Symposium on Document Engineering. :3–3.

We present an overview of the field of malware analysis with emphasis on issues related to document engineering. We will introduce the field with a discussion of the types of malware, including executable binaries, malicious PDFs, polymorphic malware, ransomware, and exploit kits. We will conclude with our view of important research questions in the field. This is an updated version of last year's tutorial, with more information about web-based malware and malware targeting the Android market.

2017-05-17
Ostberg, Jan-Peter, Wagner, Stefan, Weilemann, Erica.  2016.  Does Personality Influence the Usage of Static Analysis Tools?: An Explorative Experiment. Proceedings of the 9th International Workshop on Cooperative and Human Aspects of Software Engineering. :75–81.

There are many techniques for improving software quality. One is the use of automatic static analysis tools. We have observed, however, that despite the low-cost help they offer, these tools are underused and often discourage beginners. There is evidence that personality traits influence the perceived usability of software. Thus, to support beginners better, we need to understand how the workflow of people with different prevalent personality traits varies when using these tools. For this purpose, we observed users' solution strategies and correlated them with their prevalent personality traits in an exploratory study with student participants within a controlled experiment. We gathered data through screen capture and chat protocols as well as a Big Five personality traits test. We found strong correlations between particular personality traits and different strategies for removing the findings of static code analysis, as well as between personality and tool utilization. Based on that, we offer take-away improvement suggestions. Our results imply that developers should be aware of these solution strategies and use this information to build tools that are more appealing to people with different prevalent personality traits.

2017-06-27
Hu, Gang, Bin Hannan, Nabil, Tearo, Khalid, Bastos, Arthur, Reilly, Derek.  2016.  Doing While Thinking: Physical and Cognitive Engagement and Immersion in Mixed Reality Games. Proceedings of the 2016 ACM Conference on Designing Interactive Systems. :947–958.

We present a study examining the impact of physical and cognitive challenge on reported immersion for a mixed reality game called Beach Pong. Contrary to prior findings for desktop games, we find significantly higher reported immersion among players who engage physically, regardless of their actual game performance. Building a mental map of the real, virtual, and sensed world is a cognitive challenge for novices, and this appears to influence immersion: in our study, participants who actively attended to both physical and virtual game elements reported higher immersion levels than those who attended mainly or exclusively to virtual elements. Without an integrated mental map, in-game cognitive challenges were ignored or offloaded to motor response when possible in order to achieve the minimum required goals of the game. From our results we propose a model of immersion in mixed reality gaming that is useful for designers and researchers in this space.

2017-09-19
Tromer, Eran, Schuster, Roei.  2016.  DroidDisintegrator: Intra-Application Information Flow Control in Android Apps. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :401–412.

In mobile platforms and their app markets, controlling app permissions and preventing abuse of private information are crucial challenges. Information Flow Control (IFC) is a powerful approach for formalizing and answering user concerns such as: "Does this app send my geolocation to the Internet?" Yet despite intensive research efforts, IFC has not been widely adopted in mainstream programming practice. We observe that the typical structure of Android apps offers an opportunity for a novel and effective application of IFC. In Android, an app consists of a collection of a few dozen "components", each in charge of some high-level functionality. Most components do not require access to most resources. These components are a natural and effective granularity at which to apply IFC (as opposed to the typical process-level or language-level granularity). By assigning different permission labels to each component, and limiting information flow between components, it is possible to express and enforce IFC constraints. Yet nuances of the Android platform, such as its multitude of discretionary (and somewhat arcane) communication channels, raise challenges in defining and enforcing component boundaries. We build a system, DroidDisintegrator, which demonstrates the viability of component-level IFC for expressing and controlling app behavior. DroidDisintegrator uses dynamic analysis to generate IFC policies for Android apps, repackages apps to embed these policies, and enforces the policies at runtime. We evaluate DroidDisintegrator on dozens of apps.

2017-06-05
Sterbenz, James P.G..  2016.  Drones in the Smart City and IoT: Protocols, Resilience, Benefits, and Risks. Proceedings of the 2Nd Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use. :3–3.

Drones have quickly become ubiquitous for both recreational and serious use. As is frequently the case with new technology in general, their rapid adoption already far exceeds our legal, policy, and social ability to cope with such issues as privacy and interference with well-established commercial and military air space. While the FAA has issued rulings, they will almost certainly be challenged in court as disputes arise, for example, when property owners shoot drones down. It is clear that drones will provide a critical role in smart cities and be connected to, if not directly a part of the IoT (Internet of Things). Drones will provide an essential role in providing network relay connectivity and situational awareness, particularly in disaster assessment and recovery scenarios. As is typical for new network technologies, the deployment of the drone hardware far exceeds our research in protocols – extending our previous understanding of MANETs (mobile ad hoc networks) and DTNs (disruption tolerant networks) – and more importantly, management, control, resilience, security, and privacy concerns. This keynote address will discuss these challenges and consider future research directions.

2017-09-19
Goyal, Shruti, Bindu, P. V., Thilagam, P. Santhi.  2016.  Dynamic Structure for Web Graphs with Extended Functionalities. Proceedings of the International Conference on Advances in Information Communication Technology & Computing. :46:1–46:6.

The hyperlink structure of the World Wide Web is modeled as a directed, dynamic, and huge web graph. Web graphs are analyzed for determining page rank, fighting web spam, detecting communities, and so on, by performing tasks such as clustering, classification, and reachability. These tasks involve operations such as graph navigation, checking link existence, and identifying active links, which demand scanning of entire graphs. Frequent scanning of very large graphs incurs substantial I/O and memory overhead. To rectify these issues, several data structures have been proposed to represent graphs in a compact manner. Even though the problem of representing graphs has been actively studied in the literature, there has been much less focus on the representation of dynamic graphs. In this paper, we propose Tree-Dictionary-Representation (TDR), a compressed graph representation that supports the dynamic nature of graphs as well as the various graph operations. Our experimental study shows that this representation works efficiently with limited main memory use and provides fast traversal of edges.
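The graph operations the paper targets (edge insertion/deletion, link-existence checks, neighbor traversal) can be illustrated with a minimal dynamic directed-graph sketch. This is a plain adjacency-set structure, not the compressed TDR representation; the page names are invented.

```python
# Minimal dynamic directed graph supporting the query operations
# discussed in the abstract (not the paper's compressed structure).
from collections import defaultdict

class DynamicWebGraph:
    def __init__(self):
        self.succ = defaultdict(set)  # page -> set of outgoing links

    def add_edge(self, u, v):
        self.succ[u].add(v)

    def remove_edge(self, u, v):
        self.succ[u].discard(v)

    def has_link(self, u, v):
        """Link-existence query."""
        return v in self.succ[u]

    def out_neighbors(self, u):
        """Forward navigation over active links."""
        return sorted(self.succ[u])

g = DynamicWebGraph()
g.add_edge("a.html", "b.html")
g.add_edge("a.html", "c.html")
g.remove_edge("a.html", "c.html")
print(g.has_link("a.html", "b.html"))  # True
print(g.out_neighbors("a.html"))       # ['b.html']
```

A compressed representation such as TDR aims to answer these same queries without the memory cost of explicit adjacency sets.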

2017-11-27
Pang, Y., Xue, X., Namin, A. S..  2016.  Early Identification of Vulnerable Software Components via Ensemble Learning. 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA). :476–481.

Software components that are vulnerable to being exploited need to be identified and patched. Employing prevention techniques designed to detect vulnerable software components in early stages can significantly reduce the expenses associated with the software testing process and thus help build a more reliable and robust software system. Although previous studies have demonstrated the effectiveness of adapting prediction techniques to vulnerability detection, the feasibility of those techniques is limited mainly because of insufficient training data sets. This paper proposes a prediction technique targeting early identification of potentially vulnerable software components. In the proposed scheme, the potentially vulnerable components are viewed as mislabeled data that may contain true but not yet observed vulnerabilities. The proposed hybrid technique combines the support vector machine algorithm and an ensemble learning strategy to better identify potentially vulnerable components. The proposed vulnerability detection scheme is evaluated using some Java Android applications. The results demonstrate that the proposed hybrid technique can identify potentially vulnerable classes with high precision and relatively acceptable accuracy and recall.
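The ensemble idea above (majority vote over base classifiers trained on subsamples of the data) can be sketched as follows. A trivial nearest-centroid learner stands in for the paper's SVM so the example stays dependency-free, and the "metrics" and labels are synthetic; the paper's actual features, base learner, and sampling scheme differ.

```python
# Toy bagging-style ensemble: majority vote over base learners trained
# on deterministic leave-one-out subsamples (standing in for random
# bootstraps). The base learner is a nearest-centroid classifier.

def train_centroid(samples):
    """Per-class mean of the feature vectors (a stand-in base learner)."""
    sums, counts = {}, {}
    for x, y in samples:
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict_centroid(model, x):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

def bagging_fit(data):
    # Leave-one-out subsamples keep the sketch deterministic.
    return [train_centroid(data[:i] + data[i + 1:]) for i in range(len(data))]

def bagging_predict(models, x):
    votes = [predict_centroid(m, x) for m in models]
    return max(set(votes), key=votes.count)  # majority vote

# Synthetic component metrics, e.g. (lines of code, coupling); 1 = vulnerable.
data = [([10, 1], 0), ([12, 2], 0), ([90, 9], 1), ([85, 8], 1)]
models = bagging_fit(data)
print(bagging_predict(models, [88, 9]))  # 1 (flagged as potentially vulnerable)
print(bagging_predict(models, [11, 1]))  # 0
```

The voting step is what makes the ensemble more robust to the mislabeled "not yet observed" vulnerabilities the paper describes: a few base learners trained on misleading subsamples are outvoted by the rest.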

2017-10-03
Lu, Yiqin, Wang, Meng.  2016.  An Easy Defense Mechanism Against Botnet-based DDoS Flooding Attack Originated in SDN Environment Using sFlow. Proceedings of the 11th International Conference on Future Internet Technologies. :14–20.

As today's networks become larger and more complex, Distributed Denial of Service (DDoS) flooding attack threats may come not only from outside a network but also from inside, such as in a cloud computing network hosting multiple tenants, some of which may be malicious. Thus, the need for a source-based defense mechanism against such attacks is pressing. In this paper, we focus on a source-based defense mechanism against Botnet-based DDoS flooding attacks that combines the power of Software-Defined Networking (SDN) and sample flow (sFlow) technology. First, we define a metric that measures the essential features of this kind of attack, namely its distribution and collaboration. We then design a simple detection algorithm based on a statistical inference model, along with a response scheme that uses the capabilities of SDN. Finally, we develop an application to realize our idea and test its effect on an emulation network with real network traffic. The results show that our mechanism can effectively detect DDoS flooding attacks originating in an SDN environment and identify attack flows, preventing the attack from harming its target or spreading beyond the network. We advocate the advantages of SDN in the area of defending against DDoS attacks, because it is difficult and laborious to organize selfish and undisciplined traditional distributed networks to confront well-coordinated DDoS flooding attacks.
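A distribution-based detection metric over sampled flow records, in the spirit of the approach above, can be sketched with source/destination address entropy: many distinct sources converging on a single destination is characteristic of a distributed flooding attack. The metric, thresholds, and addresses below are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative detection metric over (src_ip, dst_ip) pairs sampled by
# sFlow: high source-address entropy combined with a single dominant
# destination suggests a distributed flooding attack.
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of values."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_ddos(flows, src_entropy_min=2.0, dst_entropy_max=0.5):
    srcs = [s for s, _ in flows]
    dsts = [d for _, d in flows]
    return entropy(srcs) > src_entropy_min and entropy(dsts) < dst_entropy_max

# Many sources converging on one target vs. one source fanning out:
attack = [(f"10.0.0.{i}", "192.168.1.5") for i in range(16)]
normal = [("10.0.0.1", f"192.168.1.{i}") for i in range(16)]
print(looks_like_ddos(attack))  # True
print(looks_like_ddos(normal))  # False
```

In an SDN setting, a positive result would trigger the response scheme, e.g. installing flow rules at the source switches to drop or rate-limit the identified attack flows before they spread.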