Biblio

Found 2371 results

Filters: First Letter of Last Name is G
2019-03-15
Cozzi, M., Galliere, J., Maurine, P.  2018.  Exploiting Phase Information in Thermal Scans for Stealthy Trojan Detection. 2018 21st Euromicro Conference on Digital System Design (DSD). :573-576.

Infrared thermography has been recognized for its ability to investigate integrated circuits in a non-destructive way. Coupled with lock-in correlation, it has proven efficient in detecting thermal hot spots. Most state-of-the-art measurement systems are based on amplitude analysis. In this paper we propose to investigate weak thermal hot spots using the phase of infrared signals. We demonstrate that phase analysis is a formidable alternative to amplitude analysis for detecting small heat signatures. Finally, we apply our measurement platform and its detection method to the identification of stealthy hardware Trojans.
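
The lock-in correlation the abstract builds on reduces to correlating the per-pixel thermal signal with sine and cosine references at the excitation frequency; the phase image the authors exploit is then the arctangent of the two correlation products. A minimal numpy sketch of that principle (function and parameter names are illustrative, not the paper's):

    import numpy as np

    def lockin(signal, f_exc, fs):
        # Digital lock-in: correlate one pixel's thermal time series with
        # sine/cosine references at the excitation frequency f_exc (Hz),
        # sampled at fs (Hz), to recover amplitude and phase.
        t = np.arange(len(signal)) / fs
        i = np.mean(signal * np.sin(2 * np.pi * f_exc * t))   # in-phase
        q = np.mean(signal * np.cos(2 * np.pi * f_exc * t))   # quadrature
        amplitude = 2.0 * np.hypot(i, q)
        phase_deg = np.degrees(np.arctan2(q, i))
        return amplitude, phase_deg

Running this over every pixel yields the amplitude and phase images; the paper's observation is that weak hot spots which vanish in the amplitude image can still stand out in phase.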

Ye, J., Yang, Y., Gong, Y., Hu, Y., Li, X.  2018.  Grey Zone in Pre-Silicon Hardware Trojan Detection. 2018 IEEE International Test Conference in Asia (ITC-Asia). :79-84.

Pre-silicon hardware Trojan detection has been studied for years. The most popular benchmark circuits are from the Trust-Hub. Their common feature is that the probability of activating hardware Trojans is very low. This has led to a series of machine learning based hardware Trojan detection methods that search for nets whose signal probability of being 0 or 1 is very low. On the other hand, it is assumed that if the probability of activating a hardware Trojan is high, the Trojan can easily be found through behaviour simulation or functional test. This paper explores the "grey zone" between these two opposite scenarios: if the activation probability of a hardware Trojan is not low enough for machine learning to detect it, yet not high enough for behaviour simulation or functional test to find it, the Trojan can escape detection. Experiments show the existence of such hardware Trojans, and this paper suggests a new set of hardware Trojan benchmark circuits for future study.
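
The grey-zone argument is ultimately arithmetic on signal probabilities, which pre-silicon detectors propagate through the netlist under an independence assumption. A toy Python illustration (the thresholds and sizes are invented for illustration; they are not the paper's numbers):

    # Signal probability of a net being logic 1, propagated through basic
    # gates under the usual assumption that gate inputs are independent.
    def p_and(pa, pb): return pa * pb
    def p_or(pa, pb):  return pa + pb - pa * pb
    def p_not(pa):     return 1.0 - pa

    p_trigger = 0.5 ** 17    # AND-tree trigger over 17 ordinary nets (~7.6e-6)
    ml_threshold = 1e-6      # a detector flagging only rarer nets misses it
    sim_vectors = 10_000     # ...while a functional test of this size
    p_escape = (1 - p_trigger) ** sim_vectors
    print(p_trigger > ml_threshold, p_escape)   # True, ~0.93: the grey zone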

Queiroz, Diego V., Gomes, Ruan D., Benavente-Peces, Cesar, Fonseca, Iguatemi E., Alencar, Marcelo S.  2018.  Evaluation of Channels Blacklists in TSCH Networks with Star and Tree Topologies. Proceedings of the 14th ACM International Symposium on QoS and Security for Wireless and Mobile Networks. :116-123.
The Time-Slotted Channel Hopping (TSCH) mode, defined by the IEEE 802.15.4e protocol, aims to reduce the effects of narrowband interference and multipath fading on some channels through frequency hopping. To work satisfactorily, this method must be based on an evaluation of the quality of the channels over which packets will be transmitted, to avoid packet losses. In addition to this estimation, it is necessary to manage channel blacklists, which prevent the sensors from hopping to bad-quality channels. Blacklists can be applied locally or globally, and this paper evaluates the use of a local blacklist through simulation of a TSCH network in a simulated harsh industrial environment. The work evaluates two approaches, both of which use a protocol developed on top of TSCH, called Adaptive Blacklist TSCH (AB-TSCH), that considers beacon packets and includes link quality estimation with blacklists. The first approach uses the protocol to compare a simple version of TSCH to configurations with different blacklist sizes in a star topology. In this approach, it is possible to analyze the channel adaptation method that takes effect when the blacklist has 15 channels. The second approach uses the protocol to evaluate blacklists in a tree topology and discusses the inherent problems of this topology. The results show that, when the estimation is performed continuously, a larger blacklist leads to better performance in the star topology. In the tree topology, due to simultaneous transmissions among some nodes, a smaller blacklist showed better performance.
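
Mechanically, a local blacklist amounts to each node keeping a per-channel link-quality estimate and excluding the worst channels from its hopping sequence. A hedged sketch with an EWMA packet-delivery-ratio estimator standing in for AB-TSCH's link quality estimation (class and parameter names are ours, not the paper's):

    class LocalBlacklist:
        def __init__(self, channels=range(11, 27), size=4, alpha=0.1):
            self.pdr = {ch: 1.0 for ch in channels}   # EWMA delivery ratio
            self.size = size                          # channels to ban
            self.alpha = alpha

        def update(self, channel, delivered):
            # Continuous estimation: fold each TX outcome into the EWMA.
            self.pdr[channel] += self.alpha * (float(delivered) - self.pdr[channel])

        def usable_channels(self):
            # Skip the `size` channels with the worst estimated quality.
            banned = set(sorted(self.pdr, key=self.pdr.get)[:self.size])
            return [ch for ch in self.pdr if ch not in banned]
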
2019-03-11
Go, Wooyoung, Lee, Daewoo.  2018.  Toward Trustworthy Deep Learning in Security. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2219–2221.

In the security area, there has been an increasing tendency to apply deep learning, which is perceived as a black box method because of the lack of understanding of its internal functioning. Can we trust deep learning models when they achieve high test accuracy? Using a visual explanation method, we find that deep learning models used in security tasks can easily focus on semantically non-discriminative parts of input data even though they produce the right answers. Furthermore, when a model is re-trained without any change in the learning procedure (i.e., no change in training/validation data, initialization/optimization methods and hyperparameters), it can focus on significantly different parts of many samples while producing the same answers. For trustworthy deep learning in security, therefore, we argue that it is necessary to verify the classification criteria of deep learning models before deploying them, even though they successfully achieve high test accuracy.
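
The verification step the authors call for can be approximated even without access to model internals: occlusion analysis masks regions of the input and records the drop in the class score, exposing which parts of the input the model actually relies on. A minimal sketch (occlusion is our stand-in; the abstract does not name the specific visual explanation method used):

    import numpy as np

    def occlusion_map(predict, image, label, patch=8):
        # predict: callable mapping a batch of images to class probabilities.
        # A large score drop when a patch is blanked out means the model's
        # decision depends on that region.
        base = predict(image[None])[0, label]
        h, w = image.shape[:2]
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                masked = image.copy()
                masked[i:i + patch, j:j + patch] = image.mean()
                heat[i // patch, j // patch] = base - predict(masked[None])[0, label]
        return heat   # high cells = regions the classifier truly attends to

If the hot cells fall on semantically non-discriminative parts of the input, the model is right for the wrong reasons, which is exactly the failure mode the paper reports.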

Li, Z., Xie, X., Ma, X., Guan, Z.  2018.  Trustworthiness Optimization of Industrial Cluster Network Platform Based on Blockchain. 2018 8th International Conference on Logistics, Informatics and Service Sciences (LISS). :1–6.

Industrial clusters are an important organizational form and development carrier for small and medium-sized enterprises, and the information service platform is an important facility of an industrial cluster. Improving the credibility of the network platform helps eliminate the adverse effects of distrust and information asymmetry on industrial clusters. The decentralization, transparency, openness, and intangibility of blockchain technology make it an inevitable choice for trustworthiness optimization of the industrial cluster network platform. This paper first studies trusted standards for industrial cluster network platforms and constructs a new trusted framework for them. It then focuses on trustworthiness optimization of the data layer and application layer of the platform. The purpose of this paper is to build an industrial cluster network platform with trusted data access, trustworthy information, available functionality, high speed and low consumption, and to promote the sustainable and efficient development of industrial clusters.

Ghafoor, K. Z., Kong, L., Sadiq, A. S., Doukha, Z., Shareef, F. M.  2018.  Trust-aware routing protocol for mobile crowdsensing environments. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :82–87.
Link quality, trust management and energy efficiency are considered the main factors that affect the performance and lifetime of Mobile CrowdSensing (MCS). Routing packets toward the sink node can be a daunting task when these factors are considered together; correspondingly, routing packets by considering only the shortest path or residual energy leads to suboptimal data forwarding. To this end, we propose a Fuzzy logic based Routing (FR) solution that incorporates the social behaviour of human beings, link quality, and node quality to make the optimal routing decision. FR leverages a friendship mechanism for trust management, Signal to Noise Ratio (SNR) to ensure the selection of nodes with good link quality, and residual energy for a long-lasting sensor lifetime. Extensive simulations show that the FR solution outperforms existing approaches in terms of network lifetime and packet delivery ratio.
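
The core of such a fuzzy routing decision is to fuzzify each metric into a membership degree, combine the degrees with min/max rules, and rank candidate next hops by the resulting score. A toy sketch with invented membership breakpoints (the abstract does not give FR's actual rule base):

    def ramp(x, lo, hi):
        # Rising ramp membership: 0 below lo, 1 above hi.
        return max(0.0, min(1.0, (x - lo) / (hi - lo)))

    def hop_score(snr_db, energy_frac, friendship):
        good_link  = ramp(snr_db, 5.0, 30.0)
        has_energy = ramp(energy_frac, 0.1, 0.8)
        trusted    = ramp(friendship, 0.2, 0.9)
        # Each rule fires at the min of its antecedents; the candidate's
        # score is the max over the rules (Mamdani-style aggregation).
        return max(min(good_link, has_energy), min(good_link, trusted))

    # (snr_db, residual energy fraction, friendship score in [0, 1])
    candidates = [(30.0, 0.9, 0.2), (18.0, 0.5, 0.9), (8.0, 1.0, 1.0)]
    next_hop = max(candidates, key=lambda c: hop_score(*c))
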
Hu, Xiaohe, Gupta, Arpit, Feamster, Nick, Panda, Aurojit, Shenker, Scott.  2018.  Preserving Privacy at IXPs. Proceedings of the 2nd Asia-Pacific Workshop on Networking. :43–49.
Autonomous systems (ASes) on the Internet increasingly rely on Internet Exchange Points (IXPs) for peering. A single IXP may interconnect several hundred or thousand participants (ASes), all of which might peer with each other through BGP sessions. IXPs have addressed this scaling challenge through the use of route servers. However, route servers require participants to trust the IXP and reveal their policies, a drastic change from the accepted norm where all policies are kept private. In this paper we look at techniques to build route servers that provide the same functionality as existing route servers without requiring participants to reveal their policies, thus preserving the status quo and enabling wider adoption of IXPs. Prior work has looked at secure multiparty computation (SMPC) as a means of implementing such route servers; however, this affects performance and reduces policy flexibility. In this paper we take a different tack and build on trusted execution environments (TEEs) such as Intel SGX to keep policies private and flexible. We present results from an initial route server implementation that runs under Intel SGX and show that our approach has 20x better performance than SMPC-based approaches. Furthermore, we demonstrate that the additional privacy provided by our approach comes at minimal cost: our implementation is at worst 2.1x slower than a current route server implementation (and in some situations up to 2x faster).
2019-03-06
Khalil, Issa M., Guan, Bei, Nabeel, Mohamed, Yu, Ting.  2018.  A Domain Is Only As Good As Its Buddies: Detecting Stealthy Malicious Domains via Graph Inference. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy. :330-341.

Inference-based techniques are one of the major approaches to analyzing DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and to have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new approach to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on the weak "co-IP" relationship of domains (i.e., two domains are resolved to the same IP), which results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed approach not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithms are specifically designed for DNS data analysis. They are effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improve inference efficiency, we construct a new domain-IP graph that can work well with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor impact on detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.
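
For intuition, the effect of running generic belief propagation over the domain-IP graph can be approximated by iterative score propagation: known-malicious seeds push suspicion to domains sharing dedicated IPs, attenuated at each hop. A simplified sketch (plain label propagation standing in for full loopy belief propagation):

    def propagate(edges, seeds, rounds=10, damping=0.5):
        # edges: node -> set of neighbours in the domain-IP graph, where
        # domains link to the dedicated IPs they resolve to (and back).
        # seeds: domains known to be malicious (score pinned at 1.0).
        score = {n: (1.0 if n in seeds else 0.0) for n in edges}
        for _ in range(rounds):
            score = {n: 1.0 if n in seeds else
                     damping * max((score[m] for m in nbrs), default=0.0)
                     for n, nbrs in edges.items()}
        return score   # threshold to flag candidate malicious domains

    g = {"evil.com": {"203.0.113.7"},
         "shady.net": {"203.0.113.7"},
         "203.0.113.7": {"evil.com", "shady.net"}}
    print(propagate(g, seeds={"evil.com"}))   # shady.net inherits suspicion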

Zong, Fang, Yong, Ouyang, Gang, Liu.  2018.  3D Modeling Method Based on Deep Belief Networks (DBNs) and Interactive Evolutionary Algorithm (IEA). Proceedings of the 2018 International Conference on Big Data and Computing. :124-128.

3D modeling usually refers to the use of 3D software to build a model in a virtual 3D space from 3D data. At present, most 3D modeling software, such as 3dmax, FLAC3D and Midas, requires either adjusting models repeatedly to obtain a satisfactory result or precise modeling through coding. This involves complicated steps, demands strong professional expertise, and carries a high modeling cost. Aiming at this problem, the paper presents a new 3D modeling method based on Deep Belief Networks (DBN) and an Interactive Evolutionary Algorithm (IEA). Following this method, characteristic vectors are first extracted from the vertices, normals and surfaces of the imported model samples. Secondly, an evolution strategy is used to evolve the extracted feature vectors stochastically, with manual grading controlling the direction of evolution, and the characteristics of user preferences are extracted in the process. Then, an evolution function matrix is used to establish a fitness-approximation evaluation model that simulates subjective evaluation. Lastly, the user can control the whole simulated evaluation process at any time and obtain a satisfactory model. The experimental results show that the method in this paper is feasible.

Guerriero, Michele, Tamburri, Damian Andrew, Di Nitto, Elisabetta.  2018.  Defining, Enforcing and Checking Privacy Policies in Data-Intensive Applications. Proceedings of the 13th International Conference on Software Engineering for Adaptive and Self-Managing Systems. :172-182.
The rise of Big Data is leading to an increasing demand for large-scale data-intensive applications (DIAs), which have to analyse massive amounts of personal data (e.g. customers' locations, cars' speeds, people's heartbeats, etc.), some of which can be sensitive, meaning that its confidentiality has to be protected. In this context, DIA providers are responsible for enforcing privacy policies that account for the privacy preferences of data subjects as well as for general privacy regulations. This is the case, for instance, of data brokers, i.e. companies that continuously collect and analyse data in order to provide useful analytics to their clients. Unfortunately, the enforcement of privacy policies in modern DIAs tends to become cumbersome because (i) the number of policies can easily explode, depending on the number of data subjects, (ii) policy enforcement has to autonomously adapt to the application context, thus requiring some non-trivial runtime reasoning, and (iii) designing and developing modern DIAs is complex per se. For the above reasons, we need specific design and runtime methods enabling so-called privacy-by-design in a Big Data context. In this article we propose an approach for specifying, enforcing and checking privacy policies on DIAs designed according to the Google Dataflow model, and we show that the enforcement approach behaves correctly in the considered cases and introduces a performance overhead that is acceptable given the requirements of a typical DIA.
Gursoy, Mehmet Emre, Liu, Ling, Truex, Stacey, Yu, Lei, Wei, Wenqi.  2018.  Utility-Aware Synthesis of Differentially Private and Attack-Resilient Location Traces. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :196-211.
As mobile devices and location-based services become increasingly ubiquitous, the privacy of mobile users' location traces continues to be a major concern. Traditional privacy solutions rely on perturbing each position in a user's trace and replacing it with a fake location. However, recent studies have shown that such point-based perturbation of locations is susceptible to inference attacks and suffers from serious utility losses, because it disregards the moving trajectory and continuity in full location traces. In this paper, we argue that privacy-preserving synthesis of complete location traces can be an effective solution to this problem. We present AdaTrace, a scalable location trace synthesizer with three novel features: provable statistical privacy, deterministic attack resilience, and strong utility preservation. AdaTrace builds a generative model from a given set of real traces through a four-phase synthesis process consisting of feature extraction, synopsis learning, privacy and utility preserving noise injection, and generation of differentially private synthetic location traces. The output traces crafted by AdaTrace preserve utility-critical information existing in real traces, and are robust against known location trace attacks. We validate the effectiveness of AdaTrace by comparing it with three state-of-the-art approaches (ngram, DPT, and SGLT) using real location trace datasets (Geolife and Taxi) as well as a simulated dataset of 50,000 vehicles in Oldenburg, Germany. AdaTrace offers up to a 3-fold improvement in trajectory utility, and is orders of magnitude faster than previous work, while preserving differential privacy and attack resilience.
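
The "privacy and utility preserving noise injection" phase rests on the standard Laplace mechanism: each count in a learned synopsis receives noise scaled to sensitivity/epsilon. A minimal sketch of that one building block (grid and parameter names are illustrative, not AdaTrace's API):

    import numpy as np

    def laplace_release(grid_counts, epsilon, sensitivity=1.0):
        # Laplace mechanism: adding Lap(sensitivity / epsilon) noise to each
        # cell yields an epsilon-DP release, under the assumption that one
        # trace changes any cell count by at most `sensitivity`. Clipping is
        # post-processing, so it does not weaken the guarantee.
        noise = np.random.laplace(0.0, sensitivity / epsilon, grid_counts.shape)
        return np.clip(grid_counts + noise, 0.0, None)

    counts = np.array([[12.0, 0.0], [3.0, 41.0]])   # toy spatial density
    print(laplace_release(counts, epsilon=0.5))
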
Aniculaesei, Adina, Grieser, Jörg, Rausch, Andreas, Rehfeldt, Karina, Warnecke, Tim.  2018.  Towards a Holistic Software Systems Engineering Approach for Dependable Autonomous Systems. Proceedings of the 1st International Workshop on Software Engineering for AI in Autonomous Systems. :23-30.

Autonomous systems are gaining momentum in various application domains, such as autonomous vehicles, autonomous transport robotics and self-adaptation in smart homes. Product liability regulations impose high standards on manufacturers of such systems with respect to dependability (safety, security and privacy). Today's conventional engineering methods are not adequate for providing dependability guarantees in a cost-efficient manner; e.g., road tests in the automotive industry sum up to millions of miles before a system can be considered sufficiently safe. System engineers will no longer be able to test or formally verify autonomous systems during development time in order to guarantee the dependability requirements in advance. In this vision paper, we introduce a new holistic software systems engineering approach for autonomous systems, which integrates development-time methods as well as operation-time techniques. With this approach, we aim to give users a transparent view of the confidence level of the autonomous system in use with respect to its dependability requirements. We present results already obtained and point out research goals to be addressed in the future.

2019-03-04
Schwartz, Edward J., Cohen, Cory F., Duggan, Michael, Gennari, Jeffrey, Havrilla, Jeffrey S., Hines, Charles.  2018.  Using Logic Programming to Recover C++ Classes and Methods from Compiled Executables. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :426–441.
High-level C++ source code abstractions such as classes and methods greatly assist human analysts and automated algorithms alike when analyzing C++ programs. Unfortunately, these abstractions are lost when compiling C++ source code, which impedes the understanding of C++ executables. In this paper, we propose a system, OOAnalyzer, that uses a novel design to statically recover detailed C++ abstractions from executables in a scalable manner. OOAnalyzer's design is motivated by the observation that many human analysts reason about C++ programs by recognizing simple patterns in binary code and then combining these findings using logical inference, domain knowledge, and intuition. We codify this approach by combining a lightweight symbolic analysis with a flexible Prolog-based reasoning system. Unlike most existing work, OOAnalyzer is able to recover both polymorphic and non-polymorphic C++ classes. We show in our evaluation that OOAnalyzer assigns over 78% of methods to the correct class on our test corpus, which includes both malware and real-world software such as Firefox and MySQL. These recovered abstractions can help analysts understand the behavior of C++ malware and cleanware, and can also improve the precision of program analyses on C++ executables.
Hammad, Mahmoud, Garcia, Joshua, Malek, Sam.  2018.  Self-protection of Android Systems from Inter-component Communication Attacks. Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. :726–737.
The current security mechanisms for Android apps, both static and dynamic analysis approaches, are insufficient for detection and prevention of the increasingly dynamic and sophisticated security attacks. Static analysis approaches suffer from false positives whereas dynamic analysis approaches suffer from false negatives. Moreover, they all lack the ability to efficiently analyze systems with incremental changes—such as adding/removing apps, granting/revoking permissions, and dynamic components’ communications. Each time the system changes, the entire analysis needs to be repeated, making the existing approaches inefficient for practical use. To mitigate their shortcomings, we have developed SALMA, a novel self-protecting Android software system that monitors itself and adapts its behavior at runtime to prevent a wide-range of security risks. SALMA maintains a precise architectural model, represented as a Multiple-Domain-Matrix, and incrementally and efficiently analyzes an Android system in response to incremental system changes. The maintained architecture is used to reason about the running Android system. Every time the system changes, SALMA determines (1) the impacted part of the system, and (2) the subset of the security analyses that need to be performed, thereby greatly improving the performance of the approach. Our experimental results on hundreds of real-world apps corroborate SALMA’s scalability and efficiency as well as its ability to detect and prevent security attacks at runtime with minimal disruption.
Hejderup, J., Deursen, A. van, Gousios, G.  2018.  Software Ecosystem Call Graph for Dependency Management. 2018 IEEE/ACM 40th International Conference on Software Engineering: New Ideas and Emerging Technologies Results (ICSE-NIER). :101–104.
A popular form of software reuse is the use of open source software libraries hosted on centralized code repositories, such as Maven or npm. Developers only need to declare dependencies on external libraries, and automated tools make them available in the project workspace. Recent incidents, such as the Equifax data breach and the leftpad package removal, demonstrate the difficulty in assessing the severity, impact and spread of bugs in dependency networks. While dependency checkers are being adopted as a countermeasure, they only provide indicative information. To remedy this situation, we propose a fine-grained dependency network that goes beyond packages and into call graphs. The result is a versioned ecosystem-level call graph. In this paper, we outline the process to construct the proposed graph and present a preliminary evaluation of a security issue from a core package to an affected client application.
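
The proposed structure is, in essence, a call graph whose nodes are qualified by package and version, so a vulnerable function can be traced to every client that transitively reaches it. A sketch of the data structure and the query it enables (the node naming scheme is ours, not the paper's):

    from collections import defaultdict

    # Ecosystem-level call graph: nodes are "package@version::function",
    # edges point from caller to callee, crossing package boundaries.
    calls = defaultdict(set)
    calls["app@1.0.0::main"].add("left-pad@1.3.0::leftPad")
    calls["left-pad@1.3.0::leftPad"].add("util@0.2.1::repeat")

    def affected_by(vulnerable, graph):
        # Reverse reachability: every function (hence package version) that
        # can transitively call the vulnerable one is potentially affected.
        reverse = defaultdict(set)
        for caller, callees in graph.items():
            for callee in callees:
                reverse[callee].add(caller)
        seen, stack = set(), [vulnerable]
        while stack:
            for caller in reverse[stack.pop()] - seen:
                seen.add(caller)
                stack.append(caller)
        return seen

    print(affected_by("util@0.2.1::repeat", calls))
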
Gafurov, Davrondzhon, Hurum, Arne Erik, Markman, Martin.  2018.  Achieving Test Automation with Testers Without Coding Skills: An Industrial Report. Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. :749–756.
We present a process-driven test automation solution which enables delegating (part of) the automation tasks from the test automation engineer (an expensive resource) to the test analyst (a non-developer, less expensive). In our approach, a test automation engineer implements test steps (or actions) which are executed automatically. Such automated test steps represent user actions in the system under test and are specified in natural language understandable by a non-technical person. A test analyst with domain knowledge then organizes the automated steps, combined with test input, to create an automated test case. It should be emphasized that the test analyst does not need programming skills to create, modify or execute automated test cases. We refine a benchmark test automation architecture to better suit an effective separation and sharing of responsibilities between the test automation engineer (with coding skills) and the test analyst (with domain knowledge). In addition, we propose a metric to empirically estimate the cooperation between the test automation engineer's and the test analyst's work. The proposed automation solution has been defined based on our experience in the development and maintenance of Helsenorge, the national electronic health services in Norway, which has had over one million visits per month over the past year, and we still use it to automate the execution of regression tests.
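
The division of labour described here is the classic keyword-driven pattern: the automation engineer registers executable steps under natural-language names, and the test analyst composes test cases from those names plus test data. A hedged Python sketch of the mechanism (all names invented; this is not the Helsenorge implementation):

    STEPS = {}

    def step(phrase):
        # Used by the test automation engineer (the coding role) to expose
        # an executable action under a natural-language phrase.
        def register(fn):
            STEPS[phrase] = fn
            return fn
        return register

    @step("log in as user")
    def log_in(ctx, username, password):
        ctx["session"] = f"session-for-{username}"   # placeholder for UI code

    @step("open the messages page")
    def open_messages(ctx):
        assert "session" in ctx, "must be logged in first"

    # Authored by the test analyst: step phrases plus data, no code.
    test_case = [("log in as user", ["alice", "secret"]),
                 ("open the messages page", [])]

    ctx = {}
    for phrase, args in test_case:
        STEPS[phrase](ctx, *args)
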
Gugelmann, D., Sommer, D., Lenders, V., Happe, M., Vanbever, L.  2018.  Screen watermarking for data theft investigation and attribution. 2018 10th International Conference on Cyber Conflict (CyCon). :391–408.
Organizations not only need to defend their IT systems against external cyber attackers, but also from malicious insiders, that is, agents who have infiltrated an organization or malicious members stealing information for their own profit. In particular, malicious insiders can leak a document by simply opening it and taking pictures of the document displayed on the computer screen with a digital camera. Using a digital camera allows a perpetrator to easily avoid a log trail that results from using traditional communication channels, such as sending the document via email. This makes it difficult to identify and prove the identity of the perpetrator. Even a policy prohibiting the use of any device containing a camera cannot eliminate this threat since tiny cameras can be hidden almost everywhere. To address this leakage vector, we propose a novel screen watermarking technique that embeds hidden information on computer screens displaying text documents. The watermark is imperceptible during regular use, but can be extracted from pictures of documents shown on the screen, which allows an organization to reconstruct the place and time of the data leak from recovered leaked pictures. Our approach takes advantage of the fact that the human eye is less sensitive to small luminance changes than digital cameras. We devise a symbol shape that is invisible to the human eye, but still robust to the image artifacts introduced when taking pictures. We complement this symbol shape with an error correction coding scheme that can handle very high bit error rates and retrieve watermarks from cropped and compressed pictures. We show in an experimental user study that our screen watermarks are not perceivable by humans and analyze the robustness of our watermarks against image modifications.
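
The embedding principle (luminance offsets too small for the eye but recoverable by a camera, plus heavy redundancy) fits in a few lines: each watermark bit slightly brightens or darkens a pixel block, and extraction averages the block against a reference. A simplified numpy sketch (the paper's symbol shapes and error-correction coding are considerably more elaborate, and it does not assume a reference image):

    import numpy as np

    DELTA = 2     # luminance offset, below the human perception threshold
    BLOCK = 32    # pixel columns per embedded bit

    def embed(luma, bits):
        out = luma.astype(np.int16).copy()
        for k, bit in enumerate(bits):
            out[:, k * BLOCK:(k + 1) * BLOCK] += DELTA if bit else -DELTA
        return np.clip(out, 0, 255).astype(np.uint8)

    def extract(photo, reference, n_bits):
        # The sign of the mean difference over each block recovers the bit
        # despite per-pixel camera noise.
        diff = photo.astype(np.int16) - reference.astype(np.int16)
        return [int(diff[:, k * BLOCK:(k + 1) * BLOCK].mean() > 0)
                for k in range(n_bits)]

    screen = np.full((64, 128), 128, dtype=np.uint8)   # flat grey test image
    marked = embed(screen, [1, 0, 1, 0])
    noisy = marked.astype(np.int16) + np.random.randint(-1, 2, marked.shape)
    print(extract(noisy, screen, 4))   # -> [1, 0, 1, 0]
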
2019-02-25
Lucas, Gale M., Krämer, Nicole, Peters, Clara, Taesch, Lisa-Sophie, Mell, Johnathan, Gratch, Jonathan.  2018.  Effects of Perceived Agency and Message Tone in Responding to a Virtual Personal Trainer. Proceedings of the 18th International Conference on Intelligent Virtual Agents. :247-254.
Research has demonstrated promising benefits of applying virtual trainers to promote physical fitness. The current study investigated the value of virtual agents in the context of personal fitness, compared to trainers with greater levels of perceived agency (avatar or live human). We also explored the possibility that the effectiveness of the virtual trainer might depend on the affective tone it uses when trying to motivate users. Accordingly, participants received either positively or negatively valenced motivational messages from a virtual human they believed to be either an agent or an avatar, or they received the messages from a human instructor via Skype. Both self-report and physiological data were collected. Like in-person coaches, the live human trainer who used negatively valenced messages was well-regarded; however, when the agent or avatar used negatively valenced messages, participants responded more poorly than when they used positively valenced ones. Perceived agency also affected rapport: compared to the agent, users felt more rapport with the live human trainer or the avatar. Regardless of trainer type, they also felt more rapport, and said they put in more effort, with trainers that used positively valenced messages than with those that used negatively valenced ones. However, in reality, they put in more physical effort (as measured by heart rate) when trainers employed the more negatively valenced affective tone. We discuss implications for human–computer interaction.
Grynszpan, Ouriel, Mouquet, Esther, Rushworth, Matthew, Sallet, Jérôme, Khamassi, Mehdi.  2018.  Computational Model of the User's Learning Process When Cued by a Social Versus Non-Social Agent. Proceedings of the 6th International Conference on Human-Agent Interaction. :347-349.
There are ongoing debates on whether learning involves the same mechanisms when it is mediated by social skills as when it is not [1]. Gaze cues serve as a strong communicative modality that is profoundly human. They have been shown to trigger automatic attentional orienting [2]. However, arrow cues have been shown to elicit similar effects [3]. Hence, gaze and arrow cues are often compared to investigate differences between social and non-social cognitive processes [4]. The present study sought to compare cued learning when the cue is provided by a social agent versus a non-social agent.
Lucas, Gale M., Boberg, Jill, Traum, David, Artstein, Ron, Gratch, Jonathan, Gainer, Alesia, Johnson, Emmanuel, Leuski, Anton, Nakano, Mikio.  2018.  Culture, Errors, and Rapport-Building Dialogue in Social Agents. Proceedings of the 18th International Conference on Intelligent Virtual Agents. :51-58.
This work explores whether culture impacts the extent to which social dialogue can mitigate (or exacerbate) the loss of trust caused when agents make conversational errors. Our study uses an agent designed to persuade users to agree with its rankings on two tasks. Participants from the U.S. and Japan completed our study. We perform two manipulations: (1) the presence of conversational errors – the agent exhibited errors in the second task or not; (2) the presence of social dialogue – between the two tasks, users either engaged in a social dialogue with the agent or completed a control task. Replicating previous research, conversational errors reduce the agent's influence. However, we found that culture matters: there was a marginally significant three-way interaction between culture, presence of social dialogue, and presence of errors. The pattern of results suggests that, for American participants, social dialogue backfired if it was followed by errors, presumably because it extends the period of good performance, creating a stronger contrast effect with the subsequent errors. However, for Japanese participants, social dialogue, if anything, mitigates the detrimental effect of errors; the negative effect of errors is only seen in the absence of a social dialogue. Agent design should therefore take the culture of the intended users into consideration when considering the use of social dialogue to bolster agents against conversational errors.
Cayetano, Trisha Anne, Dogao, Averyl, Guipoc, Cristopher, Palaoag, Thelma.  2018.  Cyber-Physical IT Assessment Tool and Vulnerability Assessment for Semiconductor Companies. Proceedings of the 2nd International Conference on Cryptography, Security and Privacy. :67–71.

Information and systems are the most valuable assets of almost all global organizations. Thus, sufficient security is key to protecting these assets. The reliability and security of a manufacturing company's supply chain are key concerns as it manages assurance and quality of supply. Traditional concerns such as physical security, disasters, political issues and counterfeiting remain, but cyber security is an area of growing interest. Statistics show that cyber-attacks continue with no signs of slowing down. Technical controls, no matter how good, will only take a company so far, since no usable system is 100 percent secure or impenetrable. Evaluating an organization's security vulnerabilities and taking action to mitigate the risks strengthens the layers of protection in the manufacturing company's supply chain. In this paper, the researchers created an IT Security Assessment Tool to facilitate the evaluation of the sufficiency of policies, procedures, and controls implemented by semiconductor companies. The proposed IT Security Assessment Tool was developed considering the factors that are critical in protecting the information and systems of various semiconductor companies. Subsequently, the tool was used to evaluate existing semiconductor companies and identify their areas of security vulnerability. The results show that none of the suppliers visited have cyber security programs and that most dwell on physical and network security controls. Best practices were shared and action items suggested to improve the security controls and minimize the risk of service disruption for customers, theft of sensitive data, and reputation damage.

Gupta, M., Bakliwal, A., Agarwal, S., Mehndiratta, P.  2018.  A Comparative Study of Spam SMS Detection Using Machine Learning Classifiers. 2018 Eleventh International Conference on Contemporary Computing (IC3). :1–7.
With technological advancements and the increase in content-based advertisement, the use of the Short Message Service (SMS) on phones has increased to such a level that devices are sometimes flooded with spam SMS. These spam messages can also lead to the loss of private data. There are many content-based machine learning techniques which have proven effective in filtering spam emails, and modern-day researchers have used stylistic features of text messages to classify them as ham or spam. SMS spam detection can be greatly influenced by the presence of known words, phrases, abbreviations and idioms. This paper aims to compare different classification techniques on different datasets collected from previous research works, and to evaluate them on the basis of their accuracy, precision, recall and CAP curve. The comparison has been performed between traditional machine learning techniques and deep learning methods.
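
The traditional content-based side of such a comparison typically starts from a bag-of-words or TF-IDF representation; a compact baseline with scikit-learn (an illustrative pipeline of ours, not the paper's exact setup):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["WIN a FREE prize now!!!", "Are we still on for lunch?",
             "URGENT: claim your reward", "See you at 7pm"]
    labels = ["spam", "ham", "spam", "ham"]

    # TF-IDF over word 1- and 2-grams feeding multinomial Naive Bayes, one
    # of the classic machine learning baselines such studies include.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["free prize, claim now"]))   # -> ['spam']
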
2019-02-22
Zhou, Bing, Guven, Sinem, Tao, Shu, Ye, Fan.  2018.  Pose-Assisted Active Visual Recognition in Mobile Augmented Reality. Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. :756-758.

While existing visual recognition approaches, which rely on 2D images to train their underlying models, work well for object classification, recognizing the changing state of a 3D object requires addressing several additional challenges. This paper proposes an active visual recognition approach to this problem, leveraging camera pose data available on mobile devices. With this approach, the state of a 3D object, which captures its appearance changes, can be recognized in real time. Our novel approach selects informative video frames filtered by 6-DOF camera poses to train a deep learning model to recognize object state. We validate our approach through a prototype for Augmented Reality-assisted hardware maintenance.
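
The frame selection at the heart of this approach reduces to keeping only frames whose 6-DOF camera pose adds a genuinely new viewpoint, i.e. whose translation or rotation distance to every kept pose exceeds a threshold. A hedged sketch with poses as (position, unit quaternion) pairs (our representation and thresholds, not the paper's):

    import numpy as np

    def rot_angle_deg(q1, q2):
        # Relative rotation angle between two unit quaternions.
        dot = abs(np.clip(np.dot(q1, q2), -1.0, 1.0))
        return np.degrees(2.0 * np.arccos(dot))

    def select_frames(poses, min_dist=0.10, min_angle=15.0):
        # Keep a frame only if it differs enough, in position or viewing
        # angle, from every frame already kept: cheap pose-based diversity.
        kept = []
        for idx, (pos, quat) in enumerate(poses):
            if all(np.linalg.norm(pos - p) >= min_dist or
                   rot_angle_deg(quat, q) >= min_angle
                   for p, q in (poses[i] for i in kept)):
                kept.append(idx)
        return kept

    identity = np.array([1.0, 0.0, 0.0, 0.0])
    poses = [(np.zeros(3), identity),
             (np.array([0.02, 0.0, 0.0]), identity),   # near-duplicate view
             (np.array([0.50, 0.0, 0.0]), identity)]
    print(select_frames(poses))   # -> [0, 2]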

Börsting, Ingo, Gruhn, Volker.  2018.  Towards Rapid Digital Prototyping for Augmented Reality Applications. Proceedings of the 4th International Workshop on Rapid Continuous Software Engineering. :12-15.

In rapid continuous software development, time- and cost-effective prototyping techniques are beneficial through enabling software designers to quickly explore and evaluate different design concepts. Regarding low-fidelity prototyping for augmented reality (AR) applications, software designers are so far restricted to non-digital prototypes, which enable the visualization of first design concepts, but can be laborious in capturing interactivity. The lack of empirical values and standards for designing user interactions in AR-software leads to a particular need for applying end-user feedback to software refinement. In this paper we present the concept of a tool for rapid digital prototyping for augmented reality applications, enabling software designers to rapidly design augmented reality prototypes, without requiring programming skills. The prototyping tool focuses on modeling multimodal interactions, especially regarding the interaction with physical objects, as well as performing user-based studies to integrate valuable end-user feedback into the refinement of software aspects.

Bakour, K., Ünver, H. M., Ghanem, R.  2018.  The Android Malware Static Analysis: Techniques, Limitations, and Open Challenges. 2018 3rd International Conference on Computer Science and Engineering (UBMK). :586-593.

This paper aims to explain static analysis techniques in detail and to highlight the weaknesses and challenges they face. To this end, more than 80 static analysis-based frameworks have been studied, and in their light the process of detecting malicious applications has been divided into four phases, which are explained schematically. The features used in static analysis are also discussed in detail, divided into four categories: manifest-based features, code-based features, semantic features and features based on an app's metadata. The challenges facing methods based on static analysis are likewise discussed in detail. Finally, a case study was conducted to test the strength of some well-known commercial antivirus products and one of the state-of-the-art academic static analysis frameworks against the obfuscation techniques used by developers of malicious applications. The results show a significant impact on the performance of most of the tested antivirus products and frameworks, reflecting the urgent need for more accurate tools.
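
Of the four feature categories, manifest-based features are the simplest to illustrate: once an APK's AndroidManifest.xml has been decoded to plain XML, requested permissions and exported components fall out of a small tree walk. A sketch (assumes an already-decoded manifest; the file path is illustrative):

    import xml.etree.ElementTree as ET

    ANDROID = "{http://schemas.android.com/apk/res/android}"

    def manifest_features(path):
        # Extract two classic manifest-based feature groups used by static
        # detectors: requested permissions and exported components.
        root = ET.parse(path).getroot()
        permissions = [e.get(ANDROID + "name")
                       for e in root.iter("uses-permission")]
        exported = [e.get(ANDROID + "name")
                    for tag in ("activity", "service", "receiver", "provider")
                    for e in root.iter(tag)
                    if e.get(ANDROID + "exported") == "true"]
        return {"permissions": permissions, "exported": exported}

    # feats = manifest_features("AndroidManifest.xml")
    # A permission such as android.permission.SEND_SMS is the kind of
    # feature manifest-based detectors weight heavily.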