Biblio

Found 7524 results

Filters: Keyword is Metrics
2017-11-03
Upadhyaya, R., Jain, A..  2016.  Cyber ethics and cyber crime: A deep dwelved study into legality, ransomware, underground web and bitcoin wallet. 2016 International Conference on Computing, Communication and Automation (ICCCA). :143–148.

Future wars will be cyber wars, and the attacks will be a sturdy amalgamation of cryptography and malware aimed at distorting information systems and their security. The explosive growth of the Internet facilitates cyber-attacks. Web threats include the risk of losing confidential data and of eroding consumer confidence in e-commerce. A new form of cyber hijacking threat that has emerged in cyberspace is known as ransomware, or the crypto virus. The locker bot waits for specific triggering events to become active. It blocks the task manager, command prompt, and other cardinal executables; a thread checks for their existence every few milliseconds and kills them if present. Posing a serious threat to the digital generation, ransomware preys on Internet users by hijacking their systems, encrypting the system's utility files and folders, and then demanding a ransom in exchange for the decryption key that restores the encrypted resources to their original form. In this research we present an anatomical study of a ransomware family that has recently gained notoriety, the CTB Locker, examining the money it makes per user and its command-and-control (C&C) server, which lies within the Internet's greatest incognito mode: the Dark Net. The CTB Locker (Cryptolocker ransomware) creates a Bitcoin wallet per victim, with payment made in bitcoins over the Tor anonymity network. CTB Locker is among the deadliest malware the world has encountered.

2017-05-19
Al-Shaer, Ehab.  2016.  A Cyber Mutation: Metrics, Techniques and Future Directions. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :1–1.

After decades of cyber warfare, it is well known that the static and predictable behavior of cyber configurations gives adversaries a great advantage in planning and launching successful attacks. At the same time, as cyber attacks become highly stealthy and more sophisticated, their detection and mitigation become much harder and more expensive. We developed a new foundation for moving target defense (MTD) based on cyber mutation, a new concept in cybersecurity that reverses this asymmetry in cyber warfare by embedding agility into cyber systems. Cyber mutation enables cyber systems to automatically change their configuration parameters in an unpredictable, safe, and adaptive manner in order to proactively achieve one or more of the following MTD goals: (1) deceiving attackers so they cannot reach their goals, (2) disrupting their plans by changing adversarial behaviors, and (3) deterring adversaries by prohibitively increasing the attack effort and cost. In this talk, we will present the formal foundations, metrics, and framework for developing effective cyber mutation techniques. The talk will also review several examples of developed techniques, including Random Host Mutation, Random Route Mutation, fingerprinting mutation, and mutable virtual networks, and will address the evaluation and lessons learned for advancing future research in this area.
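
A minimal sketch of the Random Host Mutation idea mentioned above, assuming a centrally managed pool of unused virtual addresses; all names, intervals, and the pool layout are illustrative, not taken from the talk:

```python
import random

# Illustrative sketch of Random Host Mutation: each host is reachable only
# through a short-lived virtual IP drawn at random from an unused pool,
# remapped every mutation interval. Hypothetical names throughout.

ADDRESS_POOL = [f"10.0.{i // 256}.{i % 256}" for i in range(1024)]

class MutationEngine:
    def __init__(self, hosts, interval_s=30):
        self.hosts = hosts            # real host identifiers
        self.interval_s = interval_s  # how often the layout changes
        self.mapping = {}             # host -> current virtual IP
        self.mutate()

    def mutate(self):
        # Fresh, mutually exclusive virtual IPs ("safe"), chosen uniformly
        # at random ("unpredictable"); called once per interval.
        fresh = random.sample(ADDRESS_POOL, len(self.hosts))
        self.mapping = dict(zip(self.hosts, fresh))

    def resolve(self, host):
        # Legitimate peers resolve through the engine; an attacker's scan
        # results go stale after every mutation interval.
        return self.mapping[host]

engine = MutationEngine(["web01", "db01"])
print(engine.resolve("web01"))
engine.mutate()                       # one interval later
print(engine.resolve("web01"))        # address has (very likely) changed
```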

Hellman, Martin E..  2016.  Cybersecurity, Nuclear Security, Alan Turing, and Illogical Logic. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1–2.

My work that is being recognized by the 2015 ACM A. M. Turing Award is in cybersecurity, while my primary interest for the last thirty-five years has been reducing the risk that nuclear deterrence will fail and destroy civilization. This Turing Lecture draws connections between those seemingly disparate areas, as well as Alan Turing's elegant proof that the computable real numbers, while denumerable, are not effectively denumerable.
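
For readers unfamiliar with the result the lecture ends on, a standard diagonalization sketch of Turing's argument follows; this is textbook material, not text from the lecture itself.

```latex
% Standard sketch of the diagonal argument (not drawn from the lecture).
Suppose the computable reals in $(0,1)$ were \emph{effectively} denumerable:
some computable $f$ enumerates machines $M_{f(1)}, M_{f(2)}, \dots$ whose
outputs include every computable real in $(0,1)$. Define $d \in (0,1)$
digit by digit:
\[
  d_n \;=\; \bigl(\text{$n$-th digit of the real computed by } M_{f(n)} + 1\bigr) \bmod 10 .
\]
Then $d$ is computable (simulate $M_{f(n)}$ to obtain its $n$-th digit), yet
$d$ differs from every enumerated real in at least one digit, a contradiction.
Hence no such computable $f$ exists: deciding which machines compute reals
runs into the halting problem, so the computable reals, while denumerable,
are not effectively denumerable.
```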

2017-03-29
Ibrahim, Ahmad, Sadeghi, Ahmad-Reza, Tsudik, Gene, Zeitouni, Shaza.  2016.  DARPA: Device Attestation Resilient to Physical Attacks. Proceedings of the 9th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :171–182.

As embedded devices (under the guise of "smart-whatever") rapidly proliferate into many domains, they become attractive targets for malware. Protecting them from software and physical attacks becomes both important and challenging. Remote attestation is a basic tool for mitigating such attacks. It allows a trusted party (verifier) to remotely assess software integrity of a remote, untrusted, and possibly compromised, embedded device (prover). Prior remote attestation methods focus on software (malware) attacks in a one-verifier/one-prover setting. Physical attacks on provers are generally ruled out as being either unrealistic or impossible to mitigate. In this paper, we argue that physical attacks must be considered, particularly, in the context of many provers, e.g., a network, of devices. Assuming that physical attacks require capture and subsequent temporary disablement of the victim device(s), we propose DARPA, a light-weight protocol that takes advantage of absence detection to identify suspected devices. DARPA is resilient against a very strong adversary and imposes minimal additional hardware requirements. We justify and identify DARPA's design goals and evaluate its security and costs.
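
A minimal sketch of the absence-detection idea DARPA builds on, assuming authenticated periodic heartbeats and a known minimum time a physical attack keeps a device offline; the threshold, key handling, and message format below are illustrative, not the actual protocol:

```python
import hmac, hashlib, time

# Sketch: devices emit periodic MAC-authenticated heartbeats; a gap longer
# than the assumed minimum capture-and-disable time marks a device suspect.

CAPTURE_THRESHOLD_S = 60            # assumed minimum offline time for capture
keys = {"dev-1": b"shared-key"}     # per-device MAC keys (placeholder values)
last_seen = {}                      # device id -> time of last valid heartbeat

def on_heartbeat(dev, counter, tag, now=None):
    msg = f"{dev}:{counter}".encode()
    expected = hmac.new(keys[dev], msg, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        last_seen[dev] = now if now is not None else time.time()

def suspects(now=None):
    now = now if now is not None else time.time()
    return [d for d, t in last_seen.items() if now - t > CAPTURE_THRESHOLD_S]

tag = hmac.new(keys["dev-1"], b"dev-1:1", hashlib.sha256).digest()
on_heartbeat("dev-1", 1, tag, now=1000.0)
print(suspects(now=1030.0))   # [] -- gap within threshold
print(suspects(now=1100.0))   # ['dev-1'] -- absent long enough to suspect capture
```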

2017-12-28
Thuraisingham, B., Kantarcioglu, M., Hamlen, K., Khan, L., Finin, T., Joshi, A., Oates, T., Bertino, E..  2016.  A Data Driven Approach for the Science of Cyber Security: Challenges and Directions. 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI). :1–10.

This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.

2017-10-18
Oertel, Catharine, Gustafson, Joakim, Black, Alan W..  2016.  On Data Driven Parametric Backchannel Synthesis for Expressing Attentiveness in Conversational Agents. Proceedings of the Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction. :43–47.

In this study, we use a multi-party recording as a template for building a parametric speech synthesiser that is able to express different levels of attentiveness in backchannel tokens. This allows us to investigate i) whether it is possible to express the same perceived level of attentiveness in synthesised as in natural backchannels; ii) whether it is possible to increase and decrease the perceived level of attentiveness of backchannels beyond the range observed in the original corpus.

2017-09-05
Luo, Chu, Fylakis, Angelos, Partala, Juha, Klakegg, Simon, Goncalves, Jorge, Liang, Kaitai, Seppänen, Tapio, Kostakos, Vassilis.  2016.  A Data Hiding Approach for Sensitive Smartphone Data. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. :557–568.

We develop and evaluate a data hiding method that enables smartphones to encrypt and embed sensitive information into carrier streams of sensor data. Our evaluation considers multiple handsets and a variety of data types, and we demonstrate that our method has a computational cost that allows real-time data hiding on smartphones with negligible distortion of the carrier stream. These characteristics make it suitable for smartphone applications involving privacy-sensitive data such as medical monitoring systems and digital forensics tools.
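
As a rough illustration of embedding a payload into a sensor carrier stream (a simplification: the paper's actual scheme, including its encryption step, is not reproduced here), consider a least-significant-bit sketch:

```python
# Sketch: hide an (already encrypted) byte string in the LSBs of successive
# integer sensor samples; distortion is at most one LSB per sample.

def embed(samples, payload):
    bits = [(b >> i) & 1 for b in payload for i in range(8)]  # LSB-first
    assert len(bits) <= len(samples), "carrier too short"
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit        # overwrite only the LSB
    return out

def extract(samples, n_bytes):
    bits = [s & 1 for s in samples[:n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

carrier = [512 + i for i in range(64)]      # e.g. raw accelerometer readings
stego = embed(carrier, b"secret")
assert extract(stego, 6) == b"secret"
```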

2017-09-27
Chariton, Antonios A., Degkleri, Eirini, Papadopoulos, Panagiotis, Ilia, Panagiotis, Markatos, Evangelos P..  2016.  DCSP: Performant Certificate Revocation: A DNS-based Approach. Proceedings of the 9th European Workshop on System Security. :1:1–1:6.

Trust in SSL-based communication on the Internet is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) making sure that it is not revoked. Currently, Certificate Revocation checks (i.e. step (iii) above) are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, both current approaches tend to incur such a high overhead that several browsers (including almost all mobile ones) choose not to check certificate revocation status, thereby exposing their users to significant security risks. To address this issue, we propose DCSP: a new low-latency approach that provides up-to-date and accurate certificate revocation information. DCSP capitalizes on the existing scalable and high-performance infrastructure of DNS. DCSP minimizes end user latency while, at the same time, requiring only a small number of cryptographic signatures by the CAs. Our design and initial performance results show that DCSP has the potential to perform an order of magnitude faster than the current state-of-the-art alternatives.
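
A minimal sketch of what a DNS-backed revocation check could look like on the client side, using the dnspython library; the record naming scheme and TXT payload format here are hypothetical, and the verification of the CA's signature that DCSP requires is elided:

```python
import dns.resolver  # dnspython

def revocation_status(cert_serial_hex, ca_zone="revocation.ca.example"):
    """Fetch revocation status from a (hypothetical) per-certificate DNS name."""
    qname = f"{cert_serial_hex}.{ca_zone}"
    try:
        answers = dns.resolver.resolve(qname, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return "unknown"
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        # A real client would verify the CA's signature carried in the
        # record before trusting it; that check is omitted here.
        if txt.startswith("revoked"):
            return "revoked"
    return "good"

print(revocation_status("01a4f2"))
```

Because resolution goes through ordinary DNS caches, the status answer is served by nearby resolvers rather than a distant OCSP responder, which is where the latency advantage would come from.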

2017-03-29
Lou, Jian, Vorobeychik, Yevgeniy.  2016.  Decentralization and Security in Dynamic Traffic Light Control. Proceedings of the Symposium and Bootcamp on the Science of Security. :90–92.

Complex traffic networks include a number of controlled intersections and, commonly, multiple districts or municipalities. As a result, the overall traffic control problem is extremely complex computationally. Moreover, given that different municipalities may have distinct, non-aligned interests, traffic light controller design is inherently decentralized, a consideration that is almost entirely absent from the related literature. Both complexity and decentralization have great bearing on the overall quality of the traffic network as well as on its security. We consider both of these issues in a dynamic traffic network. First, we propose an effective local search algorithm to efficiently design system-wide control logic for a collection of intersections. Second, we propose a game-theoretic (Stackelberg game) model of traffic network security in which an attacker can deploy denial-of-service attacks on sensors, and we develop a resilient control algorithm to mitigate such threats. Finally, we propose a game-theoretic model of decentralization and investigate this model both in the context of baseline traffic network design and in resilient design accounting for attacks. Our methods are implemented and evaluated using a simple traffic network scenario in SUMO.
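
To make the first contribution concrete, here is a toy local-search loop over green-phase durations; the delay model is a stand-in for a SUMO-style simulation run, and none of the constants come from the paper:

```python
import random

def delay(greens, demand=(0.6, 0.9, 0.4)):
    # Toy cost: main-road queueing falls as green time grows, cross-road
    # queueing rises with it. A real evaluation would be a SUMO run.
    return sum(d * 60.0 / g + g / 30.0 for g, d in zip(greens, demand))

def local_search(n=3, lo=10, hi=50, iters=500, seed=0):
    random.seed(seed)
    greens = [30] * n                 # initial green duration per intersection
    best = delay(greens)
    for _ in range(iters):
        cand = list(greens)
        i = random.randrange(n)       # perturb one intersection at a time
        cand[i] = min(hi, max(lo, cand[i] + random.choice((-5, 5))))
        cost = delay(cand)
        if cost < best:               # hill climbing: keep improving moves only
            greens, best = cand, cost
    return greens, best

print(local_search())
```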

2017-10-10
Kuehner, Holger, Hartenstein, Hannes.  2016.  Decentralized Secure Data Sharing with Attribute-Based Encryption: A Resource Consumption Analysis. Proceedings of the 4th ACM International Workshop on Security in Cloud Computing. :74–81.

Secure Data Sharing (SDS) enables users to share data in the cloud in a confidential and integrity-preserving manner. Many recent SDS approaches are based on Attribute-Based Encryption (ABE), leveraging the advantage that ABE allows a multitude of users to be addressed with only one ciphertext. However, ABE approaches often come with the downside that they require a central, fully trusted entity that is able to decrypt any ciphertext in the system. In this paper, we investigate whether ABE could be used to efficiently implement Decentralized Secure Data Sharing (D-SDS), which explicitly demands that authorization and access control enforcement be carried out solely by the owner of the data, without the help of a fully trusted third party. For this purpose, we conducted a comprehensive analysis of recent ABE approaches with regard to D-SDS requirements. We found one ABE approach to be suitable, and we show different alternatives for employing this ABE approach in a group-based D-SDS scenario. For a realistic estimation of the resource consumption, we give concrete resource consumption values for workloads taken from real-world system traces and exemplary up-to-date mobile devices. Our results indicate that, for most D-SDS operations, the resulting computation times and outgoing network traffic will be acceptable in many use cases. However, the computation times and outgoing traffic for the management of large groups might preclude the use of mobile devices.

2017-03-29
Martin, Jeremy, Rye, Erik, Beverly, Robert.  2016.  Decomposition of MAC Address Structure for Granular Device Inference. Proceedings of the 32nd Annual Conference on Computer Security Applications. :78–88.

Common among the wide variety of ubiquitous networked devices in modern use is wireless 802.11 connectivity. The MAC addresses of these devices are visible to a passive adversary, thereby presenting security and privacy threats, even when link or application-layer encryption is employed. While it is well known that the most significant three bytes of a MAC address, the OUI, coarsely identify a device's manufacturer, we seek to better understand the ways in which the remaining low-order bytes are allocated in practice. From a collection of more than two billion 802.11 frames observed in the wild, we extract device and model details for over 285K devices, as leaked by various management frames and discovery protocols. From this rich dataset, we characterize overall device populations and densities, vendor address allocation policies and utilization, and OUI sharing among manufacturers; we discover unique models occurring in multiple OUIs, and map contiguous address blocks to specific devices. Our mapping thus permits fine-grained device type and model predictions for unknown devices solely on the basis of their MAC address. We validate our inferences on both ground-truth data and a third-party dataset, where we obtain high accuracy. Our results empirically demonstrate the extant structure of the low-order MAC bytes due to manufacturers' sequential allocation policies, and the security and privacy concerns therein.
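
The decomposition itself is simple, as sketched below; the OUI split is standard IEEE address structure, the two vendor entries are well-known assignments, and the closing comment paraphrases the paper's observation:

```python
# Split a MAC address into the IEEE-assigned OUI (manufacturer prefix) and
# the 24-bit low-order remainder whose allocation structure the paper mines.

OUI_VENDORS = {
    "00:03:93": "Apple",
    "00:50:56": "VMware",
}

def decompose(mac):
    parts = mac.lower().split(":")
    assert len(parts) == 6, "expected a 48-bit MAC in colon notation"
    oui = ":".join(parts[:3])
    low = int("".join(parts[3:]), 16)   # device-specific low-order bytes
    return oui, low

oui, low = decompose("00:03:93:12:34:56")
print(OUI_VENDORS.get(oui, "unknown vendor"), hex(low))
# Sequential vendor allocation means devices of one model tend to occupy
# contiguous ranges of `low`, which is what enables model-level inference.
```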

2017-04-03
Wadhawan, Yatin, Neuman, Clifford.  2016.  Defending Cyber-Physical Attacks on Oil Pipeline Systems: A Game-Theoretic Approach. Proceedings of the 1st International Workshop on AI for Privacy and Security. :7:1–7:8.

The security of critical infrastructures such as oil and gas cyber-physical systems is a significant concern in today's world, where malicious activities are more frequent than ever before. On one side we have cyber criminals who compromise cyber infrastructure to control physical processes; we also have physical criminals who attack the physical infrastructure, motivated to destroy the target or to steal oil from pipelines. Unfortunately, due to limited resources and physical dispersion, it is impossible for the system administrator to protect each target all the time. In this research paper, we tackle the problem of cyber and physical attacks on oil pipeline infrastructure by proposing a Stackelberg Security Game with three players: the system administrator as the leader, and cyber and physical attackers as followers. The novelty of this paper is that we formulate a real-world oil-stealing problem using a game-theoretic approach. The game has two different types of targets attacked by two distinct types of adversaries with different motives, who can coordinate to maximize their rewards. The solution to this game assists the system administrator of the oil pipeline cyber-physical system in allocating cyber security controls to the cyber targets and in assigning patrol teams to the pipeline regions efficiently. This paper provides a theoretical framework for formulating and solving the above problem.

2017-03-20
Fowler, James E..  2016.  Delta Encoding of Virtual-Machine Memory in the Dynamic Analysis of Malware. :592–592.

Malware is an ever-increasing threat to personal, corporate, and government computing systems alike. Particularly in the corporate and government sectors, the attribution of malware—including the identification of the authorship of malware as well as potentially the malefactor responsible for an attack—is of growing interest. Such malware attribution is often enabled by the fact that malware authors build on the work of others through the use of generators, libraries, and borrowed code. Determining malware phylogeny—the evolutionary history of and the derivative relations between malware—is consequently an endeavor of increasing importance, with a growing focus on the dynamic analysis of malware, which involves executing a malware sample and determining the actions it takes after some period of operation. In most cases, such dynamic analysis occurs in a virtual machine, or "sandbox," in order to confine the malware to an environment in which it can do no harm to real systems. In sandbox-driven dynamic analysis of malware, a virtual machine is typically run starting from some known, malware-free baseline state. The malware is injected into the virtual machine, and the machine is allowed to run for some period of time during which the malware presumably activates. The machine is then suspended, and the current machine memory is dumped to disk. The process may then be repeated for other malware samples, each time starting from the baseline state. Stored in raw form on disk, the dumped memory file is the same size as the virtual-machine memory; for virtual machines running modern operating systems, such memory would likely be no less than 512 MB but could be up to several GB. If the corresponding memory dumps are to be retained for repeated analysis—as is likely to be required in order to determine a phylogeny for a large database of malware samples—lossless compression of the memory dumps is necessary to prevent explosive disk usage. For example, the VirusShare project maintains a database of over 19 million malware samples; running these in a virtual machine with 512 MB of memory would require over 9 petabytes (PB) of storage to retain the memory dumps. In this paper, we develop a scheme for the lossless compression of memory dumps resulting from the repeated execution of malware samples in a virtual-machine sandbox. Rather than compress each memory dump individually, we capitalize on the fact that memory dumps stem from a known baseline virtual-machine state and code with respect to this baseline memory. Additionally, to further improve compression efficiency, we exploit the fact that a significant portion of the difference between the baseline memory and that of the currently running machine is the result of the loading of known executable programs and shared libraries. Consequently, we propose delta coding to compress the current virtual-machine memory dump by coding its differences with respect to a predicted memory image, with the latter formed by duplicating the loading of the executables and libraries into the baseline memory, resulting in a significant improvement in compression performance over straightforward delta coding alone. In experimental results for a body of malware samples, the proposed approach outperformed the widely used xdelta3 delta coder by approximately 20% and the popular generic gzip coder by 79%.
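
As a baseline illustration of the delta-coding idea (straightforward delta coding only; the paper's stronger predicted-image step, which replays executable and library loads into the baseline, is not reproduced), a minimal sketch:

```python
import zlib

# Delta-code a memory dump against the known baseline by XORing the two
# images, then losslessly compress the result: unchanged pages become runs
# of zero bytes, which compress extremely well.

def delta_compress(baseline: bytes, dump: bytes) -> bytes:
    assert len(baseline) == len(dump), "same VM memory size expected"
    diff = bytes(a ^ b for a, b in zip(baseline, dump))
    return zlib.compress(diff, 9)

def delta_decompress(baseline: bytes, blob: bytes) -> bytes:
    diff = zlib.decompress(blob)
    return bytes(a ^ b for a, b in zip(baseline, diff))

baseline = bytes(4096)                      # stand-in for the clean snapshot
dump = bytearray(baseline)
dump[100:104] = b"\xde\xad\xbe\xef"         # malware touched a few bytes
blob = delta_compress(baseline, bytes(dump))
assert delta_decompress(baseline, blob) == bytes(dump)
print(len(blob), "bytes instead of", len(dump))
```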

2017-04-20
Bronzino, F., Raychaudhuri, D., Seskar, I..  2016.  Demonstrating Context-Aware Services in the Mobility First Future Internet Architecture. 2016 28th International Teletraffic Congress (ITC 28). 01:201–204.

As the number of mobile devices populating the Internet keeps growing at a tremendous pace, context-aware services have gained a lot of traction thanks to the wide set of potential use cases they can be applied to. Environmental sensing applications, emergency services, and location-aware messaging are just a few examples of applications that are expected to increase in popularity in the next few years. The MobilityFirst future Internet architecture, a clean-slate Internet architecture design, provides the necessary abstractions for creating and managing context-aware services. Starting from these abstractions, we design a context services framework based on three fundamental mechanisms: an easy way to specify context using human-understandable techniques, i.e. names; an architecture-supported management mechanism that allows the service to be conveniently deployed and efficiently managed; and a native delivery system that reduces the tax on network components and the overhead cost of deploying such applications. In this paper, we present an emergency alert system for vehicles assisting first responders that exploits users' location awareness to support quick and reliable alert messages for interested vehicles. By deploying a demo of the system on a nationwide testbed, we aim to provide a better understanding of the dynamics involved in our designed framework.

2017-04-24
Fabre, Arthur, Martinez, Kirk, Bragg, Graeme M., Basford, Philip J., Hart, Jane, Bader, Sebastian, Bragg, Olivia M..  2016.  Deploying a 6LoWPAN, CoAP, Low Power, Wireless Sensor Network: Poster Abstract. Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. :362–363.

In order to integrate equipment from different vendors, wireless sensor networks need to become more standardized. Using IP as the basis of low-power radio networks, together with application-layer standards designed for this purpose, is one way forward. This research focuses on implementing and deploying a system using Contiki, 6LoWPAN over an 868 MHz radio network, together with CoAP as a standard application-layer protocol. A system was deployed in the Cairngorm mountains in Scotland as an environmental sensor network, measuring streams, temperature profiles in peat, and periglacial features. It was found that RPL provided an effective routing algorithm, and that the use of UDP packets with CoAP proved to be an energy-efficient application layer. This combination of technologies can be very effective in large-area sensor networks.

2017-05-30
Gu, Yufei, Lin, Zhiqiang.  2016.  Derandomizing Kernel Address Space Layout for Memory Introspection and Forensics. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :62–72.

Modern OS kernels including Windows, Linux, and Mac OS have all adopted kernel Address Space Layout Randomization (ASLR), which shifts the base address of kernel code and data into different locations in different runs. Consequently, when performing introspection or forensic analysis of kernel memory, we cannot use any pre-determined addresses to interpret the kernel events. Instead, we must derandomize the address space layout and use the new addresses. However, few efforts have been made to derandomize the kernel address space, and many questions remain, such as which approach is more efficient and robust. Therefore, we present the first systematic study of how to derandomize a kernel when given a memory snapshot of a running kernel instance. Unlike the derandomization approaches used in traditional memory exploits, in which only remote access is available, with introspection and forensics applications we can use all the information available in kernel memory to generate signatures and derandomize the ASLR. In other words, there exists a large volume of solutions for this problem. As such, in this paper we examine a number of typical approaches to generating strong signatures from both kernel code and data, based on the insight of how kernel code and data are updated, and compare them from the perspectives of efficiency (in terms of simplicity, speed, etc.) and robustness (e.g., whether the approach is hard to evade or forge). In particular, we have designed four approaches: brute-force code scanning, patched code signature generation, unpatched code signature generation, and a read-only pointer based approach, according to the intrinsic behavior of kernel code and data with respect to kernel ASLR. We obtained encouraging results for each of these approaches, and the corresponding experimental results are reported in this paper.
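
A minimal sketch of the simplest of the four approaches, brute-force code scanning: slide a known kernel-code byte signature over the memory snapshot and recover the randomized base from the match position. All byte values and offsets below are made up for illustration:

```python
PAGE = 0x1000  # kernel bases are page (or larger) aligned, shrinking the search

def find_kernel_base(snapshot: bytes, signature: bytes, sig_offset: int):
    """Return candidate bases: match position minus the signature's known
    offset inside the (unrandomized) kernel image."""
    bases = []
    pos = snapshot.find(signature)
    while pos != -1:
        base = pos - sig_offset
        if base >= 0 and base % PAGE == 0:   # discard misaligned hits
            bases.append(base)
        pos = snapshot.find(signature, pos + 1)
    return bases

snap = bytearray(0x10000)
sig = b"\x48\x89\xe5\x0f\x01\xf8\x90\xc3"    # fabricated code signature
snap[0x8000:0x8008] = sig                    # "kernel" loaded at base 0x7000
print([hex(b) for b in find_kernel_base(bytes(snap), sig, sig_offset=0x1000)])
```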

2017-10-13
Weichslgartner, Andreas, Wildermann, Stefan, Götzfried, Johannes, Freiling, Felix, Glaß, Michael, Teich, Jürgen.  2016.  Design-Time/Run-Time Mapping of Security-Critical Applications in Heterogeneous MPSoCs. Proceedings of the 19th International Workshop on Software and Compilers for Embedded Systems. :153–162.

Different applications concurrently running on modern MPSoCs can interfere with each other when they use shared resources. This interference can cause side channels, i.e., sources of unintended information flow between applications. To prevent such side channels, we propose a hybrid mapping methodology that attempts to ensure spatial isolation, i.e., a mutually-exclusive allocation of resources to applications in the MPSoC. At design time and as a first step, we compute compact and connected application mappings (called shapes). In a second step, run-time management uses this information to map multiple spatially segregated shapes to the architecture. We present and evaluate a (fast) heuristic and an (exact) SAT-based mapper, demonstrating the viability of the approach.

2017-09-05
Siddiqui, Sana, Khan, Muhammad Salman, Ferens, Ken, Kinsner, Witold.  2016.  Detecting Advanced Persistent Threats Using Fractal Dimension Based Machine Learning Classification. Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. :64–69.

Advanced Persistent Threats (APTs) are a new breed of internet-based smart threats, which can go undetected by existing state-of-the-art internet traffic monitoring and protection systems. With the evolution of the internet and cloud computing, a new generation of smart APT attacks has also evolved, and signature-based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. Current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously; mostly, current algorithms focus only on reducing false positives. This paper presents a fractal-based anomaly classification mechanism, with the goal of reducing both false positives and false negatives simultaneously. A comparison of the proposed fractal-based method with a traditional Euclidean-based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates simultaneously, while improving the overall classification rates.
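
For intuition, a standard box-counting estimate of fractal dimension over a univariate traffic feature is sketched below; the paper's exact fractal measure may differ, so treat this purely as an illustration of how a fractal feature can be computed:

```python
import math

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 1-D point set via box counting."""
    lo, hi = min(points), max(points)
    norm = [(p - lo) / (hi - lo + 1e-12) for p in points]  # map into [0, 1]
    xs, ys = [], []
    for s in scales:
        occupied = {min(int(p * s), s - 1) for p in norm}  # boxes hit at scale s
        xs.append(math.log(s))
        ys.append(math.log(len(occupied)))
    # Least-squares slope of log N(s) versus log s estimates the dimension.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# E.g. per-window packet counts extracted from a TCP/IP connection trace:
trace = [abs(math.sin(i * 0.7)) for i in range(500)]
print(box_counting_dimension(trace))
```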

2017-08-18
Francis-Christie, Christopher A..  2016.  Detecting Insider Attacks with Video Websites Using Distributed Image Steganalysis (Abstract Only). Proceedings of the 47th ACM Technical Symposium on Computing Science Education. :725–725.

The safety of information inside cloud networks is of interest to network administrators. In a new insider attack, inside attackers merge confidential information with videos using digital video steganography. The video can then be uploaded to video websites, distributing the information online, which can cost firms millions in damages. Standard behavior-based exfiltration detection does not always prevent these attacks, as this form of steganography is almost invisible, and existing compressed-video steganalysis only detects small-payload watermarks. We develop a detection strategy using distributed algorithms to classify videos, and compare existing algorithms to new ones. We find our approach improves on behavior-based exfiltration detection and on existing online video steganalysis.

2017-09-19
Reaves, Bradley, Blue, Logan, Tian, Dave, Traynor, Patrick, Butler, Kevin R.B..  2016.  Detecting SMS Spam in the Age of Legitimate Bulk Messaging. Proceedings of the 9th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :165–170.

Text messaging is used by more people around the world than any other communications technology. As such, it presents a desirable medium for spammers. While this problem has been studied by many researchers over the years, the recent increase in legitimate bulk traffic (e.g., account verification, 2FA, etc.) has dramatically changed the mix of traffic seen in this space, reducing the effectiveness of previous spam classification efforts. This paper demonstrates the performance degradation of those detectors when used on a large-scale corpus of text messages containing both bulk and spam messages. Against our labeled dataset of text messages collected over 14 months, the precision and recall of past classifiers fall to 23.8% and 61.3% respectively. However, using our classification techniques and labeled clusters, precision and recall rise to 100% and 96.8%. We not only show that our collected dataset helps to correct many of the overtraining errors seen in previous studies, but also present insights into a number of current SMS spam campaigns.

2017-08-22
Meitei, Irom Lalit, Singh, Khundrakpam Johnson, De, Tanmay.  2016.  Detection of DDoS DNS Amplification Attack Using Classification Algorithm. Proceedings of the International Conference on Informatics and Analytics. :81:1–81:6.

The Domain Name System (DNS) is a critically fundamental element of internet technology, as it translates domain names into corresponding IP addresses. DNS queries and responses are UDP (User Datagram Protocol) based. DNS name servers constantly face the threat of DNS amplification attacks, one of the major Distributed Denial of Service (DDoS) attacks on DNS. DNS amplification attacks have victimized large businesses, financial companies, and other organizations by disrupting service to their customers. In this paper, a mechanism is proposed to detect such attacks coming from compromised machines. We comparatively analysed DNS traffic packets using machine learning classification algorithms such as Decision Tree (TREE), Multi Layer Perceptron (MLP), Naïve Bayes (NB), and Support Vector Machine (SVM) to classify DNS traffic into normal and abnormal. In this approach, attribute selection algorithms such as Information Gain, Gain Ratio, and Chi Square are used to achieve an optimal feature subset. Experimental results show that the Decision Tree achieved 99.3% accuracy, the highest accuracy and performance among the compared machine learning algorithms.
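
A compact sketch of this kind of pipeline with scikit-learn, using synthetic stand-in features (the paper's dataset and exact attribute list are not reproduced here):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Toy features: query rate, mean response size, ANY-query fraction, #src ports.
normal = np.column_stack([rng.poisson(5, n), rng.normal(200, 40, n).clip(1),
                          rng.uniform(0.0, 0.05, n), rng.integers(100, 500, n)])
attack = np.column_stack([rng.poisson(80, n), rng.normal(3000, 500, n).clip(1),
                          rng.uniform(0.5, 1.0, n), rng.integers(1, 20, n)])
X = np.vstack([normal, attack])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = normal, 1 = amplification

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
selector = SelectKBest(chi2, k=3).fit(Xtr, ytr)   # chi-square needs X >= 0
tree = DecisionTreeClassifier(random_state=0).fit(selector.transform(Xtr), ytr)
print("accuracy:", accuracy_score(yte, tree.predict(selector.transform(Xte))))
```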

2017-08-18
Aljamea, Moudhi M., Iliopoulos, Costas S., Samiruzzaman, M..  2016.  Detection Of URL In Image Steganography. Proceedings of the International Conference on Internet of Things and Cloud Computing. :23:1–23:6.

Steganography is the science of hiding data within data, whether for the good purpose of secret communication or with the bad intention of leaking sensitive confidential data or embedding malicious code or URLs. Many different carrier file formats can be used to hide these data (network, audio, image, etc.), but the most common steganography carrier is the image, as embedding secret data within images is considered the best and easiest way to hide all types of secret files (another image, text, video, virus, URL, etc.). To the human eye, the changes in the image's appearance caused by the hidden data can be imperceptible; in fact, images can carry more than what we see with our eyes. Many solutions have therefore been proposed to help detect these hidden data, but each has its own strengths and weaknesses, whether limited to resolving one type of image along with a specific hiding technique, and most often without extracting the hidden data. This paper proposes a novel detection approach that concentrates on detecting any kind of hidden URL in all types of images and on extracting the hidden URL from carrier images that use the least-significant-bit (LSB) hiding technique.
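
A minimal sketch of LSB extraction and URL matching (the bit ordering and payload framing are illustrative choices, not the paper's algorithm; real images would first be decoded to raw pixel bytes with e.g. Pillow):

```python
import re

URL_RE = re.compile(rb"https?://[\x21-\x7e]+")   # printable-ASCII URL run

def lsb_stream(pixel_bytes):
    """Pack the least-significant bit of each pixel byte, MSB-first."""
    bits = [b & 1 for b in pixel_bytes]
    return bytes(
        sum(bits[i + j] << (7 - j) for j in range(8))
        for i in range(0, len(bits) - 7, 8)
    )

def find_hidden_urls(pixel_bytes):
    return URL_RE.findall(lsb_stream(pixel_bytes))

# Build a tiny carrier with "http://x.io" hidden MSB-first in the pixel LSBs.
payload = b"http://x.io"
pixels = bytearray(200)
for i, byte in enumerate(payload):
    for j in range(8):
        pixels[i * 8 + j] = (pixels[i * 8 + j] & ~1) | ((byte >> (7 - j)) & 1)
print(find_hidden_urls(bytes(pixels)))           # [b'http://x.io']
```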

2017-11-27
Kuze, N., Ishikura, S., Yagi, T., Chiba, D., Murata, M..  2016.  Detection of vulnerability scanning using features of collective accesses based on information collected from multiple honeypots. NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium. :1067–1072.

Attacks against websites are increasing rapidly with the expansion of web services. An increasing number of diversified web services makes it difficult to prevent such attacks, due to the many known vulnerabilities in websites. To overcome this problem, it is necessary to collect the most recent attacks using decoy web honeypots and to implement countermeasures against malicious threats. Web honeypots collect not only malicious accesses by attackers but also benign accesses such as those by web search crawlers. Thus, it is essential to develop a means of automatically identifying malicious accesses in mixed collected data that includes both malicious and benign accesses. Specifically, detecting vulnerability scanning, which is a preliminary step of an attack, is important for preventing attacks. In this study, we focused on the classification of accesses for web crawling and vulnerability scanning, since these accesses are too similar to be distinguished. We propose a feature vector including features of collective accesses, e.g., the intervals of request arrivals and the dispersion of source port numbers, obtained from multiple honeypots deployed in different networks. Through evaluation using data collected from 37 honeypots in a real network, we show that features of collective accesses are advantageous for vulnerability scanning and crawler classification.
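
Two of the collective-access features named above can be computed per source as sketched here (feature names are illustrative, and the toy traces only caricature scanner versus crawler behaviour):

```python
import statistics

def collective_features(accesses):
    """accesses: list of (timestamp_s, src_port) tuples for one source,
    merged across all honeypots that observed it."""
    times = sorted(t for t, _ in accesses)
    intervals = [b - a for a, b in zip(times, times[1:])]
    ports = [p for _, p in accesses]
    return {
        "mean_interval": statistics.mean(intervals) if intervals else 0.0,
        "interval_stdev": statistics.pstdev(intervals) if intervals else 0.0,
        "port_dispersion": len(set(ports)) / len(ports),  # 1.0 = fresh port each time
    }

scanner = [(i * 0.05, 40000 + i) for i in range(100)]  # burst, many source ports
crawler = [(i * 5.0, 51234) for i in range(100)]       # paced, one source port
print(collective_features(scanner))
print(collective_features(crawler))
```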

2017-04-20
Wolf, Flynn.  2016.  Developing a Wearable Tactile Prototype to Support Situational Awareness. Proceedings of the 13th Web for All Conference. :37:1–37:2.

Research towards my dissertation has involved a series of perceptual and accessibility-focused studies concerned with the use of tactile cues for spatial and situational awareness, displayed through head-mounted wearables. These studies were informed by an initial participatory design study of mobile technology multitasking and tactile interaction habits. This research has yielded a number of actionable conclusions regarding the development of tactile interfaces for the head, and endeavors to provide greater insight into the design of advanced tactile alerting for contextual and spatial understanding in assistive applications (e.g. for individuals who are blind or those encountering situational impairments), as well as guidance for developers regarding assessment of interaction between under-utilized sensory modalities and underlying perceptual and cognitive processes.

2017-05-22
Strackx, Raoul, Piessens, Frank.  2016.  Developing Secure SGX Enclaves: New Challenges on the Horizon. Proceedings of the 1st Workshop on System Software for Trusted Execution. :3:1–3:2.

The combination of (1) hard-to-eradicate low-level vulnerabilities, (2) a large trusted computing base written in a memory-unsafe language, and (3) a desperate need to provide strong software security guarantees led to the development of protected-module architectures. Such architectures provide strong isolation of protected modules: the security of code and data depends only on a module's own implementation. In this paper we discuss how such protected modules should be written. From an academic perspective, it is clear that the future lies with memory-safe languages. Unfortunately, from a business and management perspective, that is a risky path and will remain so in the near future. The use of well-known but memory-unsafe languages such as C and C++ seems inevitable. We argue that the academic world should take another look at the automatic hardening of software written in such languages to mitigate low-level security vulnerabilities. This is a well-studied topic for full applications, but protected-module architectures introduce a new, and much more challenging, environment. Porting existing security measures to a protected-module setting without a thorough security analysis may even harm the security of the protected modules they try to protect.