Biblio
Tactics, Techniques, and Procedures (TTPs) in the cyber domain are important threat information that describes the behavior and attack patterns of an adversary. Timely identification of associations between TTPs can lead to effective strategies for diagnosing Cyber Threat Actors (CTAs) and their attack vectors. This study profiles the prevalence of, and regularities in, the TTPs of CTAs. We developed a machine learning-based framework that takes Cyber Threat Intelligence (CTI) documents as input, selects the most prevalent TTPs with high information gain as features, and, based on them, mines interesting regularities between TTPs using Association Rule Mining (ARM). We evaluated the proposed framework on publicly available TTP-based CTI documents. The results show that 28 TTPs are more prevalent than the others. Our system identified 155 interesting association rules among the TTPs of CTAs. A summary of these rules is given to support effective investigation of threats in the network.
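[Editor's sketch] A minimal illustration of the rule-mining stage, assuming each CTI report is one-hot encoded over observed TTPs; the mlxtend library, the support/confidence thresholds, and the TTP names are illustrative stand-ins, not the authors' implementation:

# Sketch: mine association rules over TTP occurrence vectors (illustrative only).
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot matrix: rows = CTI reports, columns = observed TTPs.
reports = pd.DataFrame([
    {"T1566 Phishing": True,  "T1059 Scripting": True, "T1041 Exfiltration": False},
    {"T1566 Phishing": True,  "T1059 Scripting": True, "T1041 Exfiltration": True},
    {"T1566 Phishing": False, "T1059 Scripting": True, "T1041 Exfiltration": True},
])

# Frequent TTP sets above a support threshold, then rules ranked by confidence.
frequent = apriori(reports, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])

In practice the information-gain feature selection described in the abstract would run first, restricting the columns to the most prevalent TTPs before mining.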
Traditional security controls, such as firewalls, anti-virus and IDS, are ill-equipped to help IT security and response teams keep pace with the rapid evolution of the cyber threat landscape. Cyber Threat Intelligence (CTI) can help remediate this problem by exploiting non-traditional information sources, such as hacker forums and "dark-web" social platforms. Security and response teams can use the collected intelligence to identify emerging threats. Unfortunately, manual extraction of CTI from non-traditional sources is a time-consuming, error-prone and resource-intensive process. We address these issues with a hybrid Machine Learning model that automatically searches through hacker forum posts, identifies the posts most relevant to cyber security, and then clusters the relevant posts into estimates of the topics the hackers are discussing. The first (identification) stage uses Support Vector Machines and the second (clustering) stage uses Latent Dirichlet Allocation. We tested our model on data from an actual hacker forum, automatically extracting information about various threats such as leaked credentials, malicious proxy servers, and malware that evades AV detection. The results demonstrate that our method is an effective means of quickly extracting relevant and actionable intelligence that can be integrated with traditional security controls to increase their effectiveness.
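[Editor's sketch] A rough outline of the two-stage pipeline, assuming scikit-learn; the forum posts, relevance labels, and topic count below are toy placeholders rather than the paper's data or configuration:

# Sketch: two-stage pipeline -- SVM relevance filter, then LDA topic clustering.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.decomposition import LatentDirichletAllocation

posts = ["selling fresh dumps cvv", "new gpu benchmarks thread",
         "crypter bypasses av detection", "favorite linux distro poll"]
labels = [1, 0, 1, 0]  # 1 = security-relevant, 0 = irrelevant (hand-labelled)

# Stage 1: identify security-relevant posts with a linear SVM on TF-IDF features.
tfidf = TfidfVectorizer()
svm = LinearSVC().fit(tfidf.fit_transform(posts), labels)
relevant = [p for p in posts if svm.predict(tfidf.transform([p]))[0] == 1]

# Stage 2: cluster the relevant posts into topic estimates with LDA.
counts = CountVectorizer()
doc_term = counts.fit_transform(relevant)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)
for topic in lda.components_:
    top = [counts.get_feature_names_out()[i] for i in topic.argsort()[-3:]]
    print("topic:", top)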
Smartphones have become ubiquitous in our everyday lives, providing diverse functionalities via millions of readily available applications (apps). To achieve these functionalities, apps need to access and utilize potentially sensitive data stored on the user's device. This can pose a serious threat to users' security and privacy when malicious or underskilled developers are considered. While application marketplaces, like the Google Play Store and Apple App Store, provide factors like ratings, user reviews, and number of downloads to distinguish benign from risky apps, studies have shown that these metrics are not adequately effective. The security and privacy health of an application should also be considered to generate a more reliable and transparent trustworthiness score. To automate the trustworthiness assessment of mobile applications, we introduce the Trust4App framework, which not only considers the publicly available factors mentioned above, but also takes into account the Security and Privacy (S&P) health of an application. Additionally, it considers the S&P posture of a user and provides a holistic, personalized trustworthiness score. While existing automatic trustworthiness frameworks only consider trustworthiness indicators (e.g. permission usage, privacy leaks) individually, Trust4App is, to the best of our knowledge, the first framework to combine these indicators. We also implement a proof-of-concept realization of our framework and demonstrate that Trust4App provides a more comprehensive, intuitive and actionable trustworthiness assessment compared to existing approaches.
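[Editor's sketch] The abstract does not publish Trust4App's scoring function, so the following is only a hypothetical illustration of combining marketplace metrics with S&P indicators into a personalized score; every indicator name and weight here is an assumption:

# Sketch: combining public marketplace metrics with S&P indicators into one
# personalized score. Indicator names and weights are hypothetical.
def trust_score(indicators: dict, user_privacy_weight: float) -> float:
    """All indicators normalized to [0, 1]; higher means more trustworthy."""
    base = 0.4 * indicators["rating"] + 0.2 * indicators["review_sentiment"]
    sp_health = (0.5 * indicators["permission_hygiene"]
                 + 0.5 * (1 - indicators["privacy_leaks"]))
    # A privacy-conscious user shifts weight from popularity toward S&P health.
    return (1 - user_privacy_weight) * base / 0.6 + user_privacy_weight * sp_health

score = trust_score({"rating": 0.9, "review_sentiment": 0.8,
                     "permission_hygiene": 0.6, "privacy_leaks": 0.3},
                    user_privacy_weight=0.7)
print(f"{score:.2f}")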
Information Centric Networking (ICN) changed the communication model from host-based to content-based to cope with the high volume of traffic generated by the rapidly increasing number of users, data objects, devices, and applications. The ICN communication model requires new security solutions that are integrated with ICN architectures. In this paper, we present a security framework to manage ICN traffic by detecting, preventing, and responding to ICN attacks. The framework consists of three components: availability, access control, and privacy. The availability component ensures that contents are available to legitimate users. The access control component allows only legitimate users to obtain restricted-access contents. The privacy component prevents attackers from learning content popularities or user requests. We also present our specific solutions as examples of the framework components.
Autonomous systems are gaining momentum in various application domains, such as autonomous vehicles, autonomous transport robotics and self-adaptation in smart homes. Product liability regulations impose high standards on manufacturers of such systems with respect to dependability (safety, security and privacy). Today's conventional engineering methods are not adequate for providing dependability guarantees in a cost-efficient manner; for example, road tests in the automotive industry add up to millions of miles before a system can be considered sufficiently safe. System engineers will no longer be able to test, or formally verify, autonomous systems during development time in order to guarantee the dependability requirements in advance. In this vision paper, we introduce a new holistic software systems engineering approach for autonomous systems, which integrates development-time methods with operation-time techniques. With this approach, we aim to give users a transparent view of the confidence level of the autonomous system in use with respect to its dependability requirements. We present results already obtained and point out research goals to be addressed in the future.
At a time when all it takes to open a Twitter account is a mobile phone, the act of authenticating information encountered on social media becomes very complex, especially when we lack measures to verify digital identities in the first place. Because the platform supports anonymity, fake news generated by dubious sources has been observed to travel much faster and farther than real news. Hence, we need valid measures to identify authors of misinformation to avert these consequences. Researchers have proposed different authorship attribution techniques for this kind of problem. However, because tweets are limited to 280 characters, finding a suitable authorship attribution technique is a challenge. This research aims to classify authors of tweets by comparing machine learning methods such as logistic regression and naive Bayes. The application pipeline comprises fetching tweets, pre-processing, feature extraction, and developing a machine learning model for classification. This paper illustrates this authorship classification process using machine learning techniques. In total, 46,895 tweets were used as training and testing data, and unique features specific to Twitter were extracted. Several steps were taken in the pre-processing phase, including removal of short texts, removal of stop-words and punctuation, tokenization, and stemming. This approach transforms the pre-processed data into a set of feature vectors in Python. Logistic regression and naive Bayes algorithms were applied to the feature vectors for training and testing the classifier. The logistic regression-based classifier gave the highest accuracy, 91.1%, compared to 89.8% for the naive Bayes classifier.
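[Editor's sketch] A condensed illustration of the classification stage, assuming scikit-learn and TF-IDF features; the tweets, author labels, and split below are toy placeholders, not the 46,895-tweet dataset or the paper's Twitter-specific features:

# Sketch: TF-IDF features from pre-processed tweets, then logistic regression
# vs. naive Bayes, compared by held-out accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

tweets = ["just shipped new build", "coffee then code", "merge conflicts again",
          "sunset run tonight", "race day tomorrow", "long tempo run done"]
authors = ["dev", "dev", "dev", "runner", "runner", "runner"]

X = TfidfVectorizer().fit_transform(tweets)
X_tr, X_te, y_tr, y_te = train_test_split(X, authors, test_size=0.33,
                                          random_state=0)
for model in (LogisticRegression(max_iter=1000), MultinomialNB()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))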
There are over 1 billion websites today, and most of them are designed using content management systems. Cybersecurity is one of the most discussed topics for web applications, and protecting the confidentiality and integrity of data has become paramount. SQL injection (SQLi) is one of the techniques most commonly used by hackers to exploit security vulnerabilities in web applications. In this paper, we compared SQLi vulnerabilities found on the three most commonly used content management systems, using a vulnerability scanner called Nikto and then SQLMAP for penetration testing. This was carried out on default WordPress, Drupal and Joomla website pages installed on a LAMP server (localhost). Results showed that none of the content management systems was susceptible to SQLi attacks, but the scans gave warnings about other vulnerabilities that could be exploited. We also suggest practices that could be implemented to prevent SQL injection.
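[Editor's sketch] For illustration only, a naive error-based probe of the kind SQLMAP automates at far greater depth; the target URL, parameter name, and error signatures are placeholders, this is not how either tool is implemented, and such probes should only be run against systems you own:

# Sketch: a naive error-based SQLi probe (test only systems you own).
import requests

TARGET = "http://localhost/index.php"   # hypothetical LAMP test page
ERROR_SIGNS = ["SQL syntax", "mysql_fetch", "Warning: mysqli"]

for payload in ["'", "' OR '1'='1", "1; --"]:
    resp = requests.get(TARGET, params={"id": payload}, timeout=5)
    if any(sign in resp.text for sign in ERROR_SIGNS):
        print(f"possible SQLi with payload {payload!r}")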
SQL injection is a well-known method of executing SQL queries to retrieve sensitive information from a website's backend database. It poses a threat to poorly coded applications, and injection remains listed among the top 10 vulnerabilities even in 2018. To keep track of the vulnerabilities a website faces, we employ a tool called Acunetix, which finds the vulnerabilities of a specific website and also suggests preventive measures. Using this implementation, we discover vulnerabilities in an actual website. Such a real-world implementation would be useful for instructional use in a foundational cybersecurity course.
The number of Internet users is increasing day by day, as is demand for web services and for mobile and desktop web applications, and with it the chances of a system being hacked. Web applications maintain data in a backend database from which results are retrieved, and because they can be accessed from anywhere in the world, they must be available to all of their users. The SQL injection attack is nowadays one of the top threats to web application security; by using SQL injection, attackers can steal confidential information. In this paper, a SQL injection attack detection method based on removing the parameter values of the SQL query is discussed and results are presented.
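[Editor's sketch] A minimal illustration of the value-removal idea: strip literals from the runtime query and compare its skeleton against the skeleton produced by benign input; the regexes and queries are illustrative, not the paper's implementation:

# Sketch: detect injection by comparing query skeletons after value removal.
import re

def skeleton(query: str) -> str:
    """Remove quoted strings and numeric literals, keeping query structure."""
    q = re.sub(r"'[^']*'", "?", query)     # string literals -> placeholder
    q = re.sub(r"\b\d+\b", "?", q)         # numeric literals -> placeholder
    return re.sub(r"\s+", " ", q).strip().lower()

expected = skeleton("SELECT * FROM users WHERE name = 'alice' AND pin = 1234")
observed = skeleton("SELECT * FROM users WHERE name = '' OR '1'='1' AND pin = 1234")

# Injected input changes the structure itself, not just the values.
print("injection suspected" if observed != expected else "query looks benign")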
The Structured Query Language Injection Attack (SQLIA) is one of the most serious and prevalent threats to web applications. The consequences of SQLIA include data loss or complete host takeover. Detecting SQLIA is a persistent challenge because of the heterogeneity of attack payloads. In this paper, a novel method to detect SQLIA based on word vectors of SQL tokens and LSTM neural networks is described. In the proposed method, SQL query strings are first syntactically analyzed into tokens, a likelihood ratio test is then used to build the word vectors of the SQL tokens, and finally an LSTM model is trained on sequences of token word vectors. We developed a tool named WOVSQLI, which implements the proposed technique, and evaluated it on a dataset drawn from several sources. The experimental results demonstrate that WOVSQLI can effectively identify SQLIA.
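[Editor's sketch] A toy version of the model stage, assuming Keras; the integer-encoded token sequences and random labels stand in for real tokenized queries, and a plain learned embedding replaces the paper's likelihood-ratio word vectors:

# Sketch: classify tokenized SQL queries with an LSTM, in the spirit of WOVSQLI.
import numpy as np
from tensorflow import keras

vocab, max_len = 50, 10
# Toy integer-encoded token sequences: benign (0) vs. injected (1) queries.
X = np.random.randint(1, vocab, size=(32, max_len))
y = np.random.randint(0, 2, size=(32,))

model = keras.Sequential([
    keras.layers.Embedding(vocab, 16),   # stand-in for LRT-derived word vectors
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
print(model.predict(X[:1], verbose=0))   # probability the query is injected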
Immersive technologies have been touted as empathetic mediums, a capability that has yet to be fully explored through machine learning integration. Our demo explores proxemics in mixed-reality (MR) human-human interactions. The author developed a system in which spatial features can be manipulated in real time by identifying emotions corresponding to unique combinations of facial micro-expressions and tonal analysis. The Magic Leap One, the first commercial spatial-computing head-mounted (virtual retinal) display (HMD), is used as the interactive interface. A novel spatial user interface visualization element is prototyped that leverages the affordances of mixed reality by introducing both a spatial and an affective component to interfaces.
We investigate mobile phone pointing in Spatial Augmented Reality (SAR). Three pointing methods are compared: raycasting, viewport, and tangible (i.e., direct contact), using a five-projector "full" SAR environment with targets distributed on varying surfaces. Participants were permitted free movement in the environment to create realistic variations in target occlusion and target incident angle. Our results show that raycasting is fastest for high and distant targets, tangible is fastest for targets in close proximity to the user, and viewport performance lies in between.
While existing visual recognition approaches, which rely on 2D images to train their underlying models, work well for object classification, recognizing the changing state of a 3D object requires addressing several additional challenges. This paper proposes an active visual recognition approach to this problem, leveraging camera pose data available on mobile devices. With this approach, the state of a 3D object, which captures its appearance changes, can be recognized in real time. Our novel approach selects informative video frames filtered by 6-DOF camera poses to train a deep learning model to recognize object state. We validate our approach through a prototype for Augmented Reality-assisted hardware maintenance.
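[Editor's sketch] One plausible reading of the pose-filtered frame-selection step, sketched with NumPy: keep a video frame for training only when the 6-DOF camera pose has translated or rotated enough relative to the last kept frame; the thresholds and pose encoding are assumptions, not the paper's algorithm:

# Sketch: pose-driven selection of informative frames for model training.
import numpy as np

def select_frames(poses, min_dist=0.15, min_angle_deg=10.0):
    """poses: list of (position xyz ndarray, viewing-direction unit ndarray)."""
    kept = [0]
    for i, (pos, direction) in enumerate(poses[1:], start=1):
        last_pos, last_dir = poses[kept[-1]]
        moved = np.linalg.norm(pos - last_pos) >= min_dist
        cos = float(np.clip(np.dot(direction, last_dir), -1.0, 1.0))
        rotated = np.degrees(np.arccos(cos)) >= min_angle_deg
        if moved or rotated:
            kept.append(i)   # informative viewpoint -> use for training
    return kept

poses = [(np.array([0.0, 0, 0]),  np.array([0.0, 0, 1])),
         (np.array([0.05, 0, 0]), np.array([0.0, 0, 1])),
         (np.array([0.3, 0, 0]),  np.array([0.0, 0, 1]))]
print(select_frames(poses))  # -> [0, 2]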
The utility of mediated environments increases when environmental scale (size and distance) is perceived accurately. We present the use of perceived affordances (judgments of action capabilities) as an objective way to assess space perception in an augmented reality (AR) environment. The current study extends the previous use of this methodology in virtual reality (VR) to AR. We tested two locomotion-based affordance tasks. In the first experiment, observers judged whether they could pass through a virtual aperture presented at different widths and distances, and also judged the distance to the aperture. In the second experiment, observers judged whether they could step over a virtual gap on the ground. In both experiments, the virtual objects were displayed with the HoloLens in a real laboratory environment. We demonstrate that affordances for passing through, and perceived distance to the aperture, are similar in AR to those measured in the real world, but that judgments of gap-crossing in AR were underestimated. These differences across the two affordances may result from the different spatial characteristics of the virtual objects (on the ground versus extending off the ground).
This study discusses the results and findings of an augmented reality navigation app for indoor navigation, created from vector data uploaded to an online mapping service. The main objective of this research is to address the problems of indoor navigation solutions that rely on GPS signals, as these signals are sparse inside buildings. The data was uploaded as GeoJSON files to MapBox, which relayed it to the app through an API in the form of tilesets. The application converted the tilesets into a miniaturized map, calculated the navigation path, and then overlaid the navigation line onto the floor via the camera. Once the project setup was completed, multiple navigation paths were tested numerous times between different sync points and destination rooms, and their accuracy, ease of access and several other factors, along with their issues, were recorded. The testing revealed that the navigation system was not only accurate despite the lack of GPS signal, but also detected device motion precisely. Furthermore, the navigation system did not take much time to generate the navigation path, as the app processed the data tile by tile. The application was also able to accurately measure the ground plane along with the walls, overlaying the navigation line precisely. However, a few observations indicated that various factors affected the accuracy of the navigation, and testing revealed areas where major improvements can be made to both accuracy and ease of access.
Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal response models, which are rough approximations to true physical models, ignoring dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of self-aware smart buildings, such that buildings are able to explicitly construct physical models of themselves (e.g., incorporating building structures and materials, and thermal flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using augmented reality. The extensive user-environment interactions in augmented reality not only can provide intuitive user interfaces for building systems, but also can capture the physical structures and possibly materials of buildings accurately to enable real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality, and discusses its applications.
In augmented reality systems, labelling is a useful assistive technique for browsing and understanding unfamiliar objects or environments: virtual labels of words or pictures superimposed on the real scene provide convenient information to viewers, expand recognition of areas of interest, and promote interaction with the real scene. How to lay out labels in the user's field of view, keep virtual information legible, and balance the ratio between virtual and real-scene information is a key problem in view management. This paper presents empirical results from an experiment on users' visual perception of label layouts, reflecting subjective preferences for the different factors that influence labelling. Statistical analysis of the results captures the subjects' intuitive visual judgements, and a quantitative measurement of clutter indicates the change that labels induce in the real scene, informing future label design for view management.
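[Editor's sketch] The abstract does not specify its clutter metric, so as a purely illustrative stand-in, here is a simple edge-density measure comparing a scene before and after labels are composited; the threshold and the synthetic images are assumptions:

# Sketch: edge density as a crude clutter proxy, before vs. after labelling.
import numpy as np
from scipy import ndimage

def edge_density(gray: np.ndarray) -> float:
    """Fraction of pixels with strong intensity gradients (Sobel magnitude)."""
    gx = ndimage.sobel(gray.astype(float), axis=0)
    gy = ndimage.sobel(gray.astype(float), axis=1)
    magnitude = np.hypot(gx, gy)
    return float((magnitude > 0.2 * magnitude.max()).mean())

scene = np.random.rand(120, 160)      # placeholder grayscale frame
labelled = scene.copy()
labelled[10:30, 20:90] = 1.0          # a flat white label patch
print(edge_density(scene), edge_density(labelled))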