Biblio

Found 7524 results

Filters: Keyword is Metrics
2019-03-28
Varga, S., Brynielsson, J., Franke, U.  2018.  Information Requirements for National Level Cyber Situational Awareness. 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :774-781.

As modern societies become more dependent on IT services, the potential impact of both adversarial cyberattacks and non-adversarial service management mistakes grows. This calls for better cyber situational awareness: decision-makers need to know what is going on. The main focus of this paper is to examine the information elements that need to be collected and included in a common operational picture in order for stakeholders to acquire cyber situational awareness. This problem is addressed through a survey conducted among the participants of a national information assurance exercise in Sweden. Most participants were government officials and employees of commercial companies that operate critical infrastructure. The results give insight into which information elements are perceived as useful, which can be contributed to and required from other organizations, which roles and stakeholders would benefit from certain information, and how organizations work on creating cyber common operational pictures today. Among the findings, it is noteworthy that adversarial behavior is not perceived as interesting, and that the respondents in general focus solely on their own organization.

2019-05-08
Mylrea, M., Gourisetti, S. N. G., Larimer, C., Noonan, C.  2018.  Insider Threat Cybersecurity Framework Webtool Methodology: Defending Against Complex Cyber-Physical Threats. 2018 IEEE Security and Privacy Workshops (SPW). :207–216.

This paper demonstrates how the Insider Threat Cybersecurity Framework (ITCF) web tool and methodology help provide a more dynamic, defense-in-depth security posture against insider cyber and cyber-physical threats. ITCF includes over 30 cybersecurity best practices to help organizations identify, protect against, detect, respond to, and recover from sophisticated insider threats and vulnerabilities. The paper tests the efficacy of this approach and helps validate and verify ITCF's capabilities and features through various insider attack use cases. Two case studies were explored to determine how organizations can leverage ITCF to increase their overall security posture against insider attacks. The paper also highlights how ITCF facilitates implementation of the goals outlined in two Presidential Executive Orders to improve the security of classified information and help owners and operators secure critical infrastructure. In realization of these goals, ITCF: provides an easy-to-use rapid assessment tool to perform an insider threat self-assessment; determines the current insider threat cybersecurity posture; defines investment-based goals to achieve a target state; connects the cybersecurity posture with business processes, functions, and continuity; and finally, helps develop plans to answer critical organizational cybersecurity questions. In this paper, the webtool and its core capabilities are tested by performing an extensive comparative assessment over two different high-profile insider threat incidents.

2019-05-20
Gschwandtner, Mathias, Demetz, Lukas, Gander, Matthias, Maier, Ronald.  2018.  Integrating Threat Intelligence to Enhance an Organization's Information Security Management. Proceedings of the 13th International Conference on Availability, Reliability and Security. :37:1-37:8.

As security incidents might have disastrous consequences for an enterprise's information technology (IT), organizations need to secure their IT against threats. Threat intelligence (TI) promises to provide actionable information about current threats for information security management systems (ISMS). Common information ranges from malware characteristics to observed perpetrator origins that allow customizing security controls. The aim of this article is to assess the impact of utilizing publicly available threat feeds within corporate processes on an organization's information security level. We developed a framework to integrate TI for large corporations and evaluated said framework in cooperation with a globally acting manufacturer and retailer. During the development of the TI framework, a specific provider of TI was analyzed and chosen for integration within the process of vulnerability management. The evaluation of this exemplary integration was assessed by members of the information security department at the cooperating enterprise. During our evaluation, practitioners emphasized prioritizing management activities based on whether threats observed in the wild are targeting them or similar companies. Furthermore, indicators of compromise (IoCs) provided by the chosen TI source can be automatically integrated using a provided software development kit. Theoretical relevance is based on the contribution towards the verification of proposed benefits of TI integration, such as increasing the resilience of an enterprise network, within a real-world environment. Overall, practitioners suggest that TI integration should result in enhanced management of security budgets and more resilient enterprise networks.

2019-01-31
Wang, Ningfei, Ji, Shouling, Wang, Ting.  2018.  Integration of Static and Dynamic Code Stylometry Analysis for Programmer De-Anonymization. Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security. :74–84.

De-anonymizing the authors of anonymous code (i.e., code stylometry) entails significant privacy and security implications. Most existing code stylometry methods rely solely on static (e.g., lexical, layout, and syntactic) features extracted from source code, while neglecting its key difference from regular text: it is executable! In this paper, we present Sundae, a novel code de-anonymization framework that integrates both static and dynamic stylometry analysis. Compared with existing solutions, Sundae departs in significant ways: (i) it requires far fewer static, hand-crafted features; (ii) it requires much less labeled data for training; and (iii) it can be readily extended to new programmers once their stylometry information becomes available. Through extensive evaluation on benchmark datasets, we demonstrate that Sundae delivers strong empirical performance. For example, under the setting of 229 programmers and 9 problems, it outperforms the state-of-the-art method by a margin of 45.65% on Python code de-anonymization. The empirical results highlight the integration of static and dynamic analysis as a promising direction for code stylometry research.
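
As a loose illustration of the static-plus-dynamic idea (the feature sets, trace format, and classifier below are assumptions for demonstration, not Sundae's actual design), one can concatenate lexical frequencies from source text with event frequencies from an execution trace and train a classifier on the combined vector:

```python
# Sketch: combine static (lexical) and dynamic (runtime) stylometry features.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def static_features(source: str, vocab: list) -> list:
    """Relative frequencies of a fixed set of lexical tokens in the source."""
    tokens = source.split()
    total = max(len(tokens), 1)
    counts = Counter(tokens)
    return [counts[t] / total for t in vocab]

def dynamic_features(trace: list, events: list) -> list:
    """Relative frequencies of runtime events (e.g., call/branch records)."""
    total = max(len(trace), 1)
    counts = Counter(trace)
    return [counts[e] / total for e in events]

def train(samples, vocab, events):
    """samples: (source_code, execution_trace, author_label) triples."""
    X = [static_features(s, vocab) + dynamic_features(t, events)
         for s, t, _ in samples]
    y = [label for _, _, label in samples]
    return RandomForestClassifier(n_estimators=100).fit(X, y)
```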

Simmons, Andrew J., Curumsing, Maheswaree Kissoon, Vasa, Rajesh.  2018.  An Interaction Model for De-Identification of Human Data Held by External Custodians. Proceedings of the 30th Australian Conference on Computer-Human Interaction. :23–26.

Reuse of pre-existing industry datasets for research purposes requires a multi-stakeholder solution that balances the researcher's analysis objectives with the need to engage the industry data custodian, whilst respecting the privacy rights of human data subjects. Current methods place the burden on the data custodian, who may not be sufficiently trained to fully appreciate the nuances of data de-identification. Through modelling of functional, quality, and emotional goals, we propose a de-identification in the cloud approach whereby the researcher proposes analyses along with the extraction and de-identification operations, while engaging the industry data custodian with secure control over authorising the proposed analyses. We demonstrate our approach through implementation of a de-identification portal for sports club data.

2019-06-28
Gulzar, Muhammad Ali.  2018.  Interactive and Automated Debugging for Big Data Analytics. Proceedings of the 40th International Conference on Software Engineering: Companion Proceedings. :509-511.

An abundance of data in many disciplines of science, engineering, national security, health care, and business has led to the emerging field of Big Data Analytics, which runs in a cloud computing environment. To process massive quantities of data in the cloud, developers leverage Data-Intensive Scalable Computing (DISC) systems such as Google's MapReduce, Hadoop, and Spark. Currently, developers do not have easy means to debug DISC applications. The use of cloud computing makes application development feel more like batch jobs, and the nature of debugging is therefore post-mortem. Developers of big data applications write code that implements a data processing pipeline and test it on their local workstation with a small data sample downloaded from a TB-scale data warehouse. They cross their fingers and hope that the program works in the expensive production cloud. When a job fails or they get a suspicious result, data scientists spend hours guessing at the source of the error, digging through post-mortem logs. In such cases, the data scientists may want to pinpoint the root cause of errors by investigating a subset of corresponding input records. The vision of my work is to provide interactive, real-time and automated debugging services for big data processing programs in modern DISC systems with minimum performance impact. My work investigates the following research questions in the context of big data analytics: (1) What are the necessary debugging primitives for interactive big data processing? (2) What scalable fault localization algorithms are needed to help the user to localize and characterize the root causes of errors? (3) How can we improve testing efficiency during iterative development of DISC applications by reasoning about the semantics of dataflow operators and the user-defined functions used inside them in tandem? To answer these questions, we synthesize and innovate ideas from software engineering, big data systems, and program analysis, and coordinate innovations across the software stack from the user-facing API all the way down to the systems infrastructure.

2019-05-20
Terkawi, A., Innab, N., al-Amri, S., Al-Amri, A.  2018.  Internet of Things (IoT) Increasing the Necessity to Adopt Specific Type of Access Control Technique. 2018 21st Saudi Computer Society National Computer Conference (NCC). :1–5.

The Internet of Things (IoT) is one of the emerging technologies that has seized the attention of researchers, because IoT is expected to be applied in our daily lives in the near future, making humans wholly dependent on this technology for comfort and an easy lifestyle. The Internet of Things is the interconnection of internet-enabled things or devices with each other and with humans in order to achieve some goals; in other words, it is the ability of everyday objects to connect to the Internet and to send and receive data. However, the Internet of Things (IoT) raises significant challenges that could stand in the way of realizing its potential benefits. This paper discusses access control as one of the most crucial aspects of security and privacy in IoT, and proposes a new access control approach that decides who is, and who is not, allowed to access the IoT subjects and sensors.

2019-03-28
Subasi, A., Al-Marwani, K., Alghamdi, R., Kwairanga, A., Qaisar, S. M., Al-Nory, M., Rambo, K. A.  2018.  Intrusion Detection in Smart Grid Using Data Mining Techniques. 2018 21st Saudi Computer Society National Computer Conference (NCC). :1-6.

The rapid growth of population and industrialization has paved the way for the use of technologies like the Internet of Things (IoT). Innovations in Information and Communication Technologies (ICT) carry with them many challenges to our expectations of privacy and security. Smart environments make use of security devices, smart appliances, sensors, and energy meters. New security and privacy requirements are driven by the massive growth in the number of devices connected to the IoT. The most ubiquitous threats to the security of smart grids (SG) arise from physical infrastructure damage, data destruction, malware, DoS attacks, and intrusions. Intrusion detection covers illegitimate access to information as well as attacks that physically disrupt the availability of servers. This work proposes an intrusion detection system using data mining techniques for the smart grid environment. The results showed that the proposed random forest method, with a total classification accuracy of 98.94%, F-measure of 0.989, area under the ROC curve (AUC) of 0.999, and kappa value of 0.9865, outperforms other classification methods. In addition, the feasibility of our method has been successfully demonstrated by comparison with other classification techniques such as ANN, k-NN, SVM, and Rotation Forest.
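
To make the evaluation recipe concrete, a minimal sketch of the same scoring setup on synthetic data (the paper itself uses smart grid intrusion records, which are not reproduced here) might look like this:

```python
# Sketch: random forest classification scored with accuracy, F-measure,
# AUC, and kappa, mirroring the metrics reported in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, roc_auc_score,
                             cohen_kappa_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]

print("accuracy :", accuracy_score(y_te, pred))
print("F-measure:", f1_score(y_te, pred))
print("AUC      :", roc_auc_score(y_te, prob))
print("kappa    :", cohen_kappa_score(y_te, pred))
```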

2019-01-21
Warzyński, A., Kołaczek, G.  2018.  Intrusion detection systems vulnerability on adversarial examples. 2018 Innovations in Intelligent Systems and Applications (INISTA). :1–4.

Intrusion detection systems define an important and dynamic research area for cybersecurity. The role of an Intrusion Detection System within a security architecture is to improve the security level by identifying all malicious and suspicious events that can be observed in a computer or network system. One of the more specific research areas related to intrusion detection is anomaly detection. Anomaly-based intrusion detection in networks refers to the problem of finding untypical events in the observed network traffic that do not conform to the expected normal patterns. It is assumed that everything untypical/anomalous could be dangerous and related to some security event. To detect anomalies, many security systems implement classification or clustering algorithms. However, recent research proved that machine learning models might misclassify adversarial events, i.e. observations created by applying intentionally non-random perturbations to the dataset. Such a weakness could increase the false negative rate, which implies undetected attacks. This fact can lead to one of the most dangerous vulnerabilities of intrusion detection systems. The goal of the research performed was verification of the anomaly detection system's ability to resist this type of attack. This paper presents the preliminary results of tests taken to investigate the existence of an attack vector that can use adversarial examples to conceal a real attack from being detected by intrusion detection systems.
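
The general mechanism is easy to demonstrate in a hedged sketch that uses a linear model on synthetic data rather than the authors' detectors: for logistic regression the loss gradient with respect to the input is proportional to the weight vector, so an FGSM-style perturbation shifts a flagged sample against the weights until it is misclassified.

```python
# Sketch: an adversarial perturbation that evades a linear anomaly classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

w = clf.coef_[0]              # for a linear model the input gradient ~ weights
x = X[y == 1][0]              # a sample from the "attack" class
eps = 0.5
x_adv = x - eps * np.sign(w)  # push each feature against the decision rule

print("original :", clf.predict([x])[0])      # typically 1 (detected)
print("perturbed:", clf.predict([x_adv])[0])  # frequently flips to 0 (evades)
```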

2018-12-03
Shearon, C. E.  2018.  IPC-1782 standard for traceability of critical items based on risk. 2018 Pan Pacific Microelectronics Symposium (Pan Pacific). :1–3.

Traceability has grown from being a specialized need for certain safety-critical segments of the industry to being a recognized value-add tool for the industry as a whole, one that can be utilized for manual to automated processes end to end throughout the supply chain. The perception persists of traceability data collection as a burden that provides value only when the rarest and most disastrous of events take place. Disparate standards, mainly dictated by large OEM companies in the market, have evolved in the industry and create confusion as a multitude of requirements and definitions proliferate. The intent of the IPC-1782 project is to bring the whole principle of traceability up to date and enable business to move faster, increase revenue, increase productivity, and decrease costs as a result of increased trust. Traceability, as defined in this standard, will represent the most effective quality tool available, becoming an intrinsic part of best practice operations, with the encouragement of automated data collection from existing manufacturing systems, which works well with Industry 4.0, integrating quality, reliability, product safety, predictive (routine, preventative, and corrective) maintenance, throughput, manufacturing, engineering and supply-chain data, reducing cost of ownership as well as ensuring timeliness and accuracy all the way from a finished product back through to the initial materials and granular attributes about the processes along the way. The goal of this standard is to create a single expandable and extendable data structure that can be adopted for all levels of traceability and enable easily exchanged information, as appropriate, across many industries. The scope includes support for the most demanding instances for detail and integrity, such as those required by critical safety systems, all the way through to situations where only basic traceability, such as for simple consumer products, is required. A key driver for the adoption of the standard is the ability to find a relevant and achievable level of traceability that exactly meets the requirement following a risk assessment of the business. The wealth of data accessible from traceability for analysis (e.g., Big Data) can easily and quickly yield information that can raise expectations of very significant quality and performance improvements, as well as providing the necessary protection against the costs of issues in the market and providing very timely information to regulatory bodies along with consumers/customers as appropriate. This information can also be used to quickly raise yields, drive product innovation that resonates with consumers, and help drive development tests and design requirements that are meaningful to the marketplace. Finally, the paper discusses leveraging IPC-1782 to create the best value of component traceability for your business.

2019-08-12
Cerny, Tomas, Sedlisky, Filip, Donahoo, Michael J.  2018.  On Isolation-Driven Automated Module Decomposition. Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems. :302-307.

Contemporary enterprise systems focus primarily on performance and development/maintenance costs. Dealing with cyber-threats and system compromise is relegated to good coding (i.e., defensive programming) and a secure environment (e.g., patched OS, firewalls, etc.). This approach, while a necessary start, is not sufficient. Such security relies on no missteps, while compromise needs only a single flaw; consequently, we must design for compromise and mitigate its impact. One approach is to utilize fine-grained modularization and isolation. In such a system, decomposition ensures that compromise of a single module presents limited and known risk of data/resource theft and denial. We propose mechanisms for automating such modular composition and consider its system performance impact.

2019-10-08
Hajomer, A. A. E., Yang, X., Sultan, A., Sun, W., Hu, W.  2018.  Key Generation and Distribution Using Phase Fluctuation in Classical Fiber Channel. 2018 20th International Conference on Transparent Optical Networks (ICTON). :1–3.

We propose a secure key generation and distribution scheme for data encryption in a classical optical fiber channel. A delay interferometer (DI) is used to track the random phase fluctuation inside the fiber, while reconfigurable lengths of polarization-maintaining (PM) fiber are set as the source of optical phase fluctuations. The output signals from the DI are extracted as the secret key and shared between the two legal parties, transmitter and receiver. Because of the randomness of the local environment and the uniqueness of the fiber channel, the phase fluctuation between orthogonal polarization modes (OPMs) can be used as a secure key to enhance the level of security in the physical layer. Experimentally, we realize random key generation and distribution over 25-km standard single-mode fiber (SSMF). Moreover, the proposed key generation scheme has the advantages of low cost, compatibility with current optical fiber networks, and long-distance transmission with optical amplifiers.

2019-01-31
McCulley, Shane, Roussev, Vassil.  2018.  Latent Typing Biometrics in Online Collaboration Services. Proceedings of the 34th Annual Computer Security Applications Conference. :66–76.

The use of typing biometrics—the characteristic typing patterns of individual keyboard users—has been studied extensively in the context of enhancing multi-factor authentication services. The key starting point for such work has been the collection of high-fidelity local timing data, and the key (implicit) security assumption has been that such biometrics could not be obtained by other means. We show the latter assumption to be false, and that it is entirely feasible to obtain useful typing biometric signatures from third-party timing logs. Specifically, we show that the logs produced by realtime collaboration services during their normal operation are of sufficient fidelity to successfully impersonate a user using remote data only. Since the logs are routinely shared as a byproduct of the services' operation, this creates an entirely new avenue of attack that few users would be aware of. As a proof of concept, we construct successful biometric attacks using only the log-based structure (complete editing history) of a shared Google Docs or Zoho Writer document, which is readily available to all contributing parties. Using the largest available public data set of typing biometrics, we are able to create successful forgeries 100% of the time against a commercial biometric service. Our results suggest that typing biometrics are not robust against practical forgeries, and should not be given the same weight as other authentication factors. Another important implication is that the routine collection of detailed timing logs by various online services also inherently (and implicitly) contains biometrics. This not only raises obvious privacy concerns, but may also undermine the effectiveness of network anonymization solutions, such as Tor, when used with existing services.
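
The core extraction step can be sketched under an assumed log format (real services store revision histories in their own richer formats): recover digraph (key-pair) latencies from timestamped character insertions and summarize them into a crude typing signature.

```python
# Sketch: digraph latency features from a timestamped edit log.
from collections import defaultdict

def digraph_latencies(events):
    """events: list of (timestamp_ms, char) insertions in document order."""
    latencies = defaultdict(list)
    for (t1, c1), (t2, c2) in zip(events, events[1:]):
        latencies[(c1, c2)].append(t2 - t1)
    return latencies

def profile(events):
    """Mean latency per digraph: a minimal typing-biometric signature."""
    return {pair: sum(v) / len(v)
            for pair, v in digraph_latencies(events).items()}

log = [(0, 't'), (110, 'h'), (195, 'e'), (430, ' '), (520, 't'), (640, 'h')]
print(profile(log))   # e.g. ('t', 'h') -> mean of 110 ms and 120 ms
```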

2019-10-08
del Pino, Rafael, Lyubashevsky, Vadim, Seiler, Gregor.  2018.  Lattice-Based Group Signatures and Zero-Knowledge Proofs of Automorphism Stability. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :574–591.

We present a group signature scheme, based on the hardness of lattice problems, whose outputs are more than an order of magnitude smaller than the currently most efficient schemes in the literature. Since lattice-based schemes are also usually non-trivial to efficiently implement, we additionally provide the first experimental implementation of lattice-based group signatures demonstrating that our construction is indeed practical – all operations take less than half a second on a standard laptop. A key component of our construction is a new zero-knowledge proof system for proving that a committed value belongs to a particular set of small size. The sets for which our proofs are applicable are exactly those that contain elements that remain stable under Galois automorphisms of the underlying cyclotomic number field of our lattice-based protocol. We believe that these proofs will find applications in other settings as well. The motivation of the new zero-knowledge proof in our construction is to allow the efficient use of the selectively-secure signature scheme (i.e. a signature scheme in which the adversary declares the forgery message before seeing the public key) of Agrawal et al. (Eurocrypt 2010) in constructions of lattice-based group signatures and other privacy protocols. For selectively-secure schemes to be meaningfully converted to standard signature schemes, it is crucial that the size of the message space is not too large. Using our zero-knowledge proofs, we can strategically pick small sets for which we can provide efficient zero-knowledge proofs of membership.

2019-05-08
Ning, W., Zhi-Jun, L.  2018.  A Layer-Built Method to the Relevancy of Electronic Evidence. 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). :416–420.

To combat cyber crimes, electronic evidence has played an increasing role, but in judicial practice electronic evidence is not highly applied because of the natural contradiction between the epistemic uncertainty of electronic evidence and the judge's principle of discretionary evidence in court. In this paper, we put forward a layer-built method to analyze the relevancy of electronic evidence, and discuss the analytical process combined with a case study. Initial practice shows the model is feasible and has consulting value in analyzing the relevancy of electronic evidence.

2019-05-01
Yu, Wenchao, Zheng, Cheng, Cheng, Wei, Aggarwal, Charu C., Song, Dongjin, Zong, Bo, Chen, Haifeng, Wang, Wei.  2018.  Learning Deep Network Representations with Adversarially Regularized Autoencoders. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :2663-2671.

The problem of network representation learning, also known as network embedding, arises in many machine learning tasks assuming that there exist a small number of variabilities in the vertex representations which can capture the "semantics" of the original network structure. Most existing network embedding models, with shallow or deep architectures, learn vertex representations from the sampled vertex sequences such that the low-dimensional embeddings preserve the locality property and/or global reconstruction capability. The resultant representations, however, are difficult for model generalization due to the intrinsic sparsity of sampled sequences from the input network. As such, an ideal approach to address the problem is to generate vertex representations by learning a probability density function over the sampled sequences. However, in many cases, such a distribution in a low-dimensional manifold may not always have an analytic form. In this study, we propose to learn the network representations with adversarially regularized autoencoders (NetRA). NetRA learns smoothly regularized vertex representations that well capture the network structure through jointly considering both locality-preserving and global reconstruction constraints. The joint inference is encapsulated in a generative adversarial training process to circumvent the requirement of an explicit prior distribution, and thus obtains better generalization performance. We demonstrate empirically how well key properties of the network structure are captured and the effectiveness of NetRA on a variety of tasks, including network reconstruction, link prediction, and multi-label classification.

2019-04-05
Ardi, Calvin, Heidemann, John.  2018.  Leveraging Controlled Information Sharing for Botnet Activity Detection. Proceedings of the 2018 Workshop on Traffic Measurements for Cybersecurity. :14-20.

Today's malware often relies on DNS to enable communication with command-and-control (C&C). As defenses that block C&C traffic improve, malware uses sophisticated techniques to hide this traffic, including "fast flux" names and Domain-Generation Algorithms (DGAs). Detecting this kind of activity requires analysis of DNS queries in network traffic, yet these signals are sparse. As bot countermeasures grow in sophistication, detecting these signals increasingly requires the synthesis of information from multiple sites. Yet sharing security information across organizational boundaries has to date been infrequent and ad hoc because of unknown risks and uncertain benefits. In this paper, we take steps towards formalizing cross-site information sharing and quantifying the benefits of data sharing. We use a case study on DGA-based botnet detection to evaluate how sharing cybersecurity data can improve detection sensitivity and allow the discovery of malicious activity with greater precision.
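
As a hedged illustration of the two ingredients (not the authors' actual detector), one can score DGA-like labels by character entropy and combine that with a cross-site sighting count, so a high-entropy domain queried at multiple sites is a stronger signal; the threshold here is an assumption.

```python
# Sketch: entropy-based DGA scoring plus cross-site aggregation.
import math
from collections import Counter

def entropy(name: str) -> float:
    counts = Counter(name)
    n = len(name)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious(domain: str, threshold: float = 3.0) -> bool:
    label = domain.split('.')[0]          # score the leftmost label only
    return entropy(label) > threshold

# Each site contributes the domains it observed; a domain seen at several
# independent sites with a high-entropy label is flagged.
site_reports = {
    "site_a": {"xkqjv92mdl.example", "news.example"},
    "site_b": {"xkqjv92mdl.example", "mail.example"},
}
sightings = Counter(d for seen in site_reports.values() for d in seen)
print([d for d, n in sightings.items() if n > 1 and suspicious(d)])
```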

2019-02-14
Jenkins, J., Cai, H.  2018.  Leveraging Historical Versions of Android Apps for Efficient and Precise Taint Analysis. 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR). :265-269.

Today, computing on various Android devices is pervasive. However, growing security vulnerabilities and attacks in the Android ecosystem constitute various threats through user apps. Taint analysis is a common technique for defending against these threats, yet it suffers from challenges in simultaneously attaining practical scalability and effectiveness. This paper presents a novel approach to fast and precise taint checking, called incremental taint analysis, that exploits the evolving nature of Android apps. The analysis narrows down the search space of taint checking from an entire app, as conventionally addressed, to the parts of the program that differ from its previous versions. This technique improves the overall efficiency of checking multiple versions of the app as it evolves. We have implemented the techniques as a tool prototype, EVOTAINT, and evaluated our analysis by applying it to real-world evolving Android apps. Our preliminary results show that the incremental approach largely reduced the cost of taint analysis, by 78.6% on average, without sacrificing analysis effectiveness, relative to a representative precise taint analysis as the baseline.
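
The incremental idea can be sketched in a few lines (method extraction and the taint analysis itself are stubbed out; this is not EVOTAINT's implementation): hash method bodies in consecutive versions and re-analyze only those that changed.

```python
# Sketch: restrict an expensive analysis to methods changed between versions.
import hashlib

def body_hash(code: str) -> str:
    return hashlib.sha256(code.encode()).hexdigest()

def changed_methods(old: dict, new: dict) -> set:
    """old/new map method signature -> body text; new methods count as changed."""
    return {sig for sig, body in new.items()
            if body_hash(body) != body_hash(old.get(sig, ""))}

old_app = {"A.onCreate()": "x = source(); sink(x)", "A.helper()": "return 1"}
new_app = {"A.onCreate()": "x = source(); sink(x)", "A.helper()": "return 2"}

for sig in changed_methods(old_app, new_app):
    print("re-analyzing", sig)   # run the taint analysis on this method only
```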

Yoshikawa, Masaya, Nozaki, Yusuke.  2018.  Lightweight Cipher Aware Countermeasure Using Random Number Masks and Its Evaluation. Proceedings of the 2nd International Conference on Vision, Image and Signal Processing. :55:1-55:5.

Recent advancements in Internet of Things (IoT) technology have left built-in devices vulnerable to interference from external networks. Power analysis attacks against cryptographic circuits are of particular concern, as they operate by illegally analyzing confidential information via the power consumption of a cryptographic circuit. In response to these threats, many researchers have turned to lightweight ciphers, which can be embedded in small-scale circuits, coupled with countermeasures to increase built-in device security, even against power analysis attacks. However, while researchers have examined the efficacy of embedding lightweight ciphers in circuits, neither cost nor tamper resistance has been considered in detail. To use lightweight ciphers and improve tamper resistance in the future, it is necessary to investigate the relationship between the cost of embedding a lightweight cipher with a power analysis countermeasure in a circuit and the tamper resistance of the cipher. Accordingly, the present study determined the tamper resistance of TWINE, a typical lightweight cipher, both with and without a countermeasure; costs were calculated for embedding the cipher with and without a countermeasure as well.
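
For readers unfamiliar with random number masking, here is a generic first-order masking sketch (a placeholder 4-bit S-box stands in for TWINE's actual table, and this is not the authors' circuit design): the S-box table is recomputed under fresh input/output masks so the device never manipulates the unmasked intermediate value, decorrelating power consumption from the secret.

```python
# Sketch: first-order Boolean masking via S-box table recomputation.
import secrets

# Placeholder 4-bit S-box (substitute the cipher's real table).
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def masked_sbox_lookup(x_masked: int, m_in: int, m_out: int) -> int:
    """Recompute T so that T[v ^ m_in] = SBOX[v] ^ m_out for all v."""
    table = [0] * 16
    for v in range(16):
        table[v ^ m_in] = SBOX[v] ^ m_out
    return table[x_masked]

x = 0x6                                       # secret intermediate value
m_in, m_out = secrets.randbelow(16), secrets.randbelow(16)
y_masked = masked_sbox_lookup(x ^ m_in, m_in, m_out)
assert y_masked ^ m_out == SBOX[x]            # unmasking recovers the output
```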

2019-06-24
Naeem, H., Guo, B., Naeem, M. R.  2018.  A light-weight malware static visual analysis for IoT infrastructure. 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD). :240–244.

Recently, a huge trend toward the Internet of Things (IoT) and an exponential increase in automated tools have been helping malware producers target IoT devices. Traditional security solutions against malware are infeasible due to the low computing power available for large-scale data in the IoT environment. The number of malware samples and their variants is increasing due to continuous malware attacks. Consequently, performance improvement in malware analysis is a critical requirement to stop the rapid expansion of malicious attacks in the IoT environment. To solve this problem, the paper proposes a novel framework for classifying malware in the IoT environment. To achieve fine-grained malware classification in the suggested framework, the malware image classification system (MICS) is designed to represent malware images globally and locally. MICS first converts the suspicious program into a gray-scale image and then captures hybrid local and global malware features to perform malware family classification. Preliminary experimental outcomes of MICS are quite promising, with 97.4% classification accuracy on 9342 suspicious Windows programs from 25 families. The experimental results indicate that the proposed framework is quite capable of processing large-scale IoT malware.
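
The binary-to-image step is a common one in visual malware analysis; a minimal sketch follows (the image width and library choice are assumptions, not necessarily MICS's parameters): raw bytes become pixel intensities, reshaped into a 2-D grayscale image whose texture can then be used for feature extraction.

```python
# Sketch: convert a binary file's bytes into a grayscale image.
import numpy as np
from PIL import Image

def binary_to_image(path: str, width: int = 256) -> Image.Image:
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    img = data[: rows * width].reshape(rows, width).copy()  # drop the tail
    return Image.fromarray(img, mode="L")                   # 8-bit grayscale

# Usage: binary_to_image("suspicious.exe").save("suspicious.png")
```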

2019-12-05
Zhai, Zhongyi, Qian, Junyan, Tao, Yuan, Zhao, Lingzhong, Cheng, Bo.  2018.  A Lightweight Timestamp-Based MAC Detection Scheme for XOR Network Coding in Wireless Sensor Networks. Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. :735-737.

Network coding has become a promising approach to improving the communication capability of WSNs, which are vulnerable to malicious attacks. Some existing solutions, including cryptographic and information-theoretic schemes, can thwart data pollution attacks but are not able to detect replay attacks. In this paper, we present a lightweight timestamp-based message authentication code method, called TMAC. Based on TMAC and the time synchronization technique, the proposed detection scheme can not only resist pollution attacks but also defend against replay attacks simultaneously.
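
TMAC's exact construction is not given in the abstract, but a generic timestamp-based MAC sketch conveys the idea: the tag binds the message to a timestamp, and verification enforces a freshness window, so a replayed packet with a stale timestamp is rejected even though its tag is cryptographically valid.

```python
# Sketch: MAC over message plus timestamp, with a freshness check.
import hmac, hashlib, time

KEY = b"shared-sensor-key"      # assumed pre-shared key
WINDOW = 5.0                    # seconds of allowed skew/transit delay

def tag(msg: bytes, ts: float) -> bytes:
    return hmac.new(KEY, msg + str(ts).encode(), hashlib.sha256).digest()

def verify(msg: bytes, ts: float, t: bytes, now: float) -> bool:
    fresh = abs(now - ts) <= WINDOW                 # defeats replay
    valid = hmac.compare_digest(t, tag(msg, ts))    # defeats pollution/forgery
    return fresh and valid

ts = time.time()
packet = (b"coded-payload", ts, tag(b"coded-payload", ts))
print(verify(*packet, now=time.time()))         # True: fresh and authentic
print(verify(*packet, now=time.time() + 60))    # False: replayed too late
```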

2020-05-22
Wang, Xi, Yao, Jun, Ji, Hongxia, Zhang, Ze, Li, Chen, Ma, Beizhi.  2018.  A Local Integral Hash Nearest Neighbor Algorithm. 2018 3rd International Conference on Mechanical, Control and Computer Engineering (ICMCCE). :544–548.

Nearest neighbor search algorithms play a very important role in computer image algorithms. When the search data is large, we need to use a fast search algorithm. Current fast retrieval algorithms are tree-based, and the efficiency of tree algorithms decreases sharply as the data dimension increases. In this paper, a local integral hash nearest neighbor algorithm is proposed that constructs the tree structure by changing the way the nodes of the tree are accessed, and is able to express data distribution characteristics. Experimental testing shows that the algorithm achieves more efficient performance on high-dimensional data.
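
The paper's specific "local integral hash" is not detailed in the abstract; a standard random-hyperplane LSH sketch illustrates the general approach of hashing nearby points into shared buckets so candidate neighbors are found without a full scan, which degrades less than trees in high dimensions.

```python
# Sketch: locality-sensitive hashing with random hyperplanes.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
data = rng.normal(size=(10000, 128))     # high-dimensional points
planes = rng.normal(size=(16, 128))      # 16 hyperplanes -> 16-bit bucket keys

def key(x):
    return tuple((planes @ x > 0).astype(int))

buckets = defaultdict(list)
for i, x in enumerate(data):
    buckets[key(x)].append(i)

def query(q):
    cand = buckets.get(key(q), [])       # scan one bucket, not all points
    if not cand:
        return None
    dists = np.linalg.norm(data[cand] - q, axis=1)
    return cand[int(np.argmin(dists))]

print(query(data[42] + 0.01 * rng.normal(size=128)))  # usually returns 42
```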

2019-09-26
Jackson, K. A., Bennett, B. T.  2018.  Locating SQL Injection Vulnerabilities in Java Byte Code Using Natural Language Techniques. SoutheastCon 2018. :1-5.

With so much of our daily lives relying on digital devices like personal computers and cell phones, there is a growing demand for code that not only functions properly, but is secure and keeps user data safe. However, ensuring this is not an easy task, and many developers do not have the required skills or resources to ensure their code is secure. Many code analysis tools have been written to find vulnerabilities in newly developed code, but this technology tends to produce many false positives and is still not able to identify all of the problems. Other methods of finding software vulnerabilities automatically are required. This proof-of-concept study applied natural language processing to Java byte code to locate SQL injection vulnerabilities in a Java program. Preliminary findings show that, due to the high number of terms in the dataset, single decision trees will not produce a suitable model for locating SQL injection vulnerabilities, while random forest structures proved more promising. Still, further work is needed to determine the best classification tool.
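
A hedged sketch of that recipe on toy data (the mnemonic strings and labels below are invented for illustration, not the study's dataset): treat sequences of bytecode mnemonics as text, vectorize them, and fit a random forest.

```python
# Sketch: bytecode mnemonics as "text" for vulnerability classification.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical training data: one string of mnemonics per method.
methods = [
    "aload invokevirtual ldc invokeinterface astore",   # concatenated query
    "aload ldc invokeinterface invokevirtual areturn",  # prepared statement
]
labels = [1, 0]   # 1 = SQL injection suspected, 0 = safe

vec = TfidfVectorizer(token_pattern=r"\S+")
X = vec.fit_transform(methods)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

new_method = ["aload invokevirtual ldc invokeinterface astore"]
print(clf.predict(vec.transform(new_method)))   # [1]
```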

2019-11-04
Altay, Osman, Ulas, Mustafa.  2018.  Location Determination by Processing Signal Strength of Wi-Fi Routers in the Indoor Environment with Linear Discriminant Classifier. 2018 6th International Symposium on Digital Forensic and Security (ISDFS). :1-4.

Location determination in indoor areas, as well as in open areas, is important for many applications, but it is a much more difficult process indoors than in open areas. The Global Positioning System (GPS) signals used for position detection are not effective in indoor areas. Wi-Fi signals are a widely used method for location detection indoors. In indoor areas, localization can be used for many different purposes, such as intelligent home systems, locating people, and locating products in a depot. In this study, we tried to determine location in four different indoor areas with a classification method, using Wi-Fi signal strength values obtained from different routers. Linear discriminant analysis (LDA) was used for classification. In a test using 10-fold cross-validation, an accuracy of 97.2% was obtained.
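
A minimal sketch of this setup on synthetic data (the router count, noise level, and area centers are assumptions): each sample is a vector of RSSI readings from several routers, the label is one of four areas, and LDA is scored with 10-fold cross-validation.

```python
# Sketch: LDA classification of Wi-Fi RSSI fingerprints into four areas.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
centers = rng.uniform(-90, -40, size=(4, 6))    # 4 areas x 6 routers (dBm)
X = np.vstack([c + rng.normal(0, 3, size=(100, 6)) for c in centers])
y = np.repeat(np.arange(4), 100)                # area label per sample

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10)
print("mean accuracy: %.3f" % scores.mean())
```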

2018-12-03
Catania, E., Corte, A. La.  2018.  Location Privacy in Virtual Cell-Equipped Ultra-Dense Networks. 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–4.

Ultra-dense networks are attracting significant interest due to their ability to provide next-generation 5G cellular networks with a high data rate, low delay, and seamless coverage. Several factors, such as interference, energy constraints, and backhaul bottlenecks, may limit wireless network densification. In this paper, we study the effect of mobile node densification, access node densification, and their aggregation into virtual entities, referred to as virtual cells, on location privacy. Simulations show that the number of tracked mobile nodes might be statistically reduced by up to 10 percent by implementing virtual cells. Moreover, experiments highlight that the success of tracking attacks has an inverse relationship to the number of moving nodes. The present paper is a preliminary attempt to analyse the effectiveness of cell virtualization in mitigating location privacy threats in ultra-dense networks.