Biblio

Found 7524 results

Filters: Keyword is Metrics
2017-07-24
Berndt, Sebastian, Liśkiewicz, Maciej.  2016.  Provable Secure Universal Steganography of Optimal Rate: Provably Secure Steganography Does Not Necessarily Imply One-Way Functions. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :81–92.

We present the first complexity-theoretically secure steganographic protocol which, for any communication channel, is provably secure, reliable, and has nearly optimal bandwidth. Our system is unconditionally secure, i.e., our proof does not rely on any unproven complexity-theoretic assumption, such as the existence of one-way functions. This disproves the claim that the existence of one-way functions and access to a communication channel oracle are both necessary and sufficient conditions for the existence of secure steganography, in the sense that secure and reliable steganography exists independently of the existence of one-way functions.

2017-12-27
Aromataris, G., Annovazzi-Lodi, V..  2016.  Two- and three-laser chaos communications. 18th Italian National Conference on Photonic Technologies (Fotonica 2016). :1–4.

After a brief introduction to optical chaotic cryptography, we compare the standard short-cavity, closed-loop, two-laser and three-laser schemes for secure transmission, showing that both are suitable for secure data exchange, with the three-laser scheme offering a slightly better level of privacy due to its symmetrical topology.

2017-03-20
Fowler, James E..  2016.  Delta Encoding of Virtual-Machine Memory in the Dynamic Analysis of Malware. :592–592.

Malware is an ever-increasing threat to personal, corporate, and government computing systems alike. Particularly in the corporate and government sectors, the attribution of malware—including the identification of the authorship of malware as well as potentially the malefactor responsible for an attack—is of growing interest. Such malware attribution is often enabled by the fact that malware authors build on the work of others through the use of generators, libraries, and borrowed code. Determining malware phylogeny—the evolutionary history of and the derivative relations between malware—is consequently an endeavor of increasing importance, with a growing focus on the dynamic analysis of malware, which involves executing a malware sample and determining the actions it takes after some period of operation. In most cases, such dynamic analysis occurs in a virtual machine, or "sandbox," in order to confine the malware to an environment in which it can do no harm to real systems. In sandbox-driven dynamic analysis of malware, a virtual machine is typically run starting from some known, malware-free baseline state. The malware is injected into the virtual machine, and the machine is allowed to run for some period of time during which the malware presumably activates. The machine is then suspended, and the current machine memory is dumped to disk. The process may then be repeated for other malware samples, each time starting from the baseline state. Stored in raw form on disk, the dumped memory file is the same size as the virtual-machine memory; for virtual machines running modern operating systems, such memory would likely be no less than 512 MB but could be up to several GB. If the corresponding memory dumps are to be retained for repeated analysis—as is likely to be required in order to determine a phylogeny for a large database of malware samples—lossless compression of the memory dumps is necessary to prevent explosive disk usage. For example, the VirusShare project maintains a database of over 19 million malware samples; running these in a virtual machine with 512 MB of memory would require roughly 9 petabytes (PB) of storage to retain the memory dumps. In this paper, we develop a scheme for the lossless compression of memory dumps resulting from the repeated execution of malware samples in a virtual-machine sandbox. Rather than compress each memory dump individually, we capitalize on the fact that memory dumps stem from a known baseline virtual-machine state, and we code with respect to this baseline memory. Additionally, to further improve compression efficiency, we exploit the fact that a significant portion of the difference between the baseline memory and that of the currently running machine is the result of the loading of known executable programs and shared libraries. Consequently, we propose delta coding to compress the current virtual-machine memory dump by coding its differences with respect to a predicted memory image, with the latter formed by duplicating the loading of the executables and libraries into the baseline memory, resulting in a significant improvement in compression performance over straightforward delta coding alone. In experimental results for a body of malware samples, the proposed approach outperformed the widely used xdelta3 delta coder by approximately 20% and the popular generic gzip coder by 79%.
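
The baseline-relative coding idea lends itself to a compact sketch. The Python fragment below (file names hypothetical; the paper's extra predicted-image step of replaying executable and library loads is omitted, and equal-size memory images are assumed) XORs a dump against the baseline so unchanged regions become runs of zeros, then applies a generic coder:

```python
# Minimal sketch of baseline-relative delta coding of a memory dump.
# Assumes baseline and dump are the same size; the paper's refinement
# (coding against a predicted image) is not reproduced here.
import gzip

def delta_compress(baseline_path, dump_path, out_path):
    with open(baseline_path, "rb") as f:
        baseline = f.read()
    with open(dump_path, "rb") as f:
        dump = f.read()
    # XOR against the baseline: unchanged pages become zero runs,
    # which a generic coder such as gzip compresses very well.
    delta = bytes(a ^ b for a, b in zip(dump, baseline))
    with gzip.open(out_path, "wb") as f:
        f.write(delta)
```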

Canfora, Gerardo, Medvet, Eric, Mercaldo, Francesco, Visaggio, Corrado Aaron.  2016.  Acquiring and Analyzing App Metrics for Effective Mobile Malware Detection. Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. :50–57.

Android malware is becoming very effective at evading detection techniques, and traditional malware detection techniques are demonstrating their weaknesses. Signature-based detection shows at least two drawbacks: first, the detection is possible only after the malware has been identified, and the time needed to produce and distribute the signature provides attackers with a window of opportunity for spreading the malware in the wild. To solve this problem, different approaches that try to characterize the malicious behavior through the invoked system and API calls have emerged. Unfortunately, several evasion techniques have proven effective at evading detection based on system and API calls. In this paper, we propose an approach for capturing the malicious behavior in terms of device resource consumption (using a thorough set of features), which is much more difficult to camouflage. We describe a procedure, and the corresponding practical setting, for extracting those features with the aim of maximizing their discriminative power. Finally, we describe the promising results we obtained experimenting on more than 2000 applications, on which our approach exhibited an accuracy greater than 99%.
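
To illustrate the classification stage, the sketch below trains an off-the-shelf classifier on resource-consumption feature vectors; the feature count, the random data, and the choice of a random forest are placeholders, not the authors' exact setup:

```python
# Sketch: classifying apps from device resource-consumption features
# (CPU, memory, network, ...). Data and labels are random toys, so the
# score here is chance-level; real features come from device monitoring.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((2000, 6))      # rows: app runs; cols: resource features
y = rng.integers(0, 2, 2000)   # 0 = benign, 1 = malware (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```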

2017-09-05
Thakar, Bhavik, Parekh, Chandresh.  2016.  Advance Persistent Threat: Botnet. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :143:1–143:6.

The growth of the Internet era and the move of corporate-sector dealings and communication online have introduced crucial security challenges in cyberspace. Statistics from recent large-scale attacks define a new class of threat to the online world: the advanced persistent threat (APT), able to impact the national security and economic stability of any country. Among all APTs, the botnet is one of the most well-articulated and stealthy attacks used to perform cybercrime. Botnet owners and their criminal organizations are continuously developing innovative ways to infect new targets and exploit them. The concept of a botnet refers to a collection of compromised computers (bots) infected by automated software robots that interact to accomplish some distributed task and run without human intervention for illegal purposes. Botnets are mostly malicious in nature and allow cyber criminals to control the infected machines remotely without the victim's knowledge. They use various techniques, communication protocols, and topologies in different stages of their lifecycle, and can upgrade their methods at any time. Botnets are global in nature, and their target is to steal or destroy valuable information from organizations as well as individuals. In this paper we present a survey of real-world botnets (APTs).

2017-06-27
Maheswaran, John, Jackowitz, Daniel, Zhai, Ennan, Wolinsky, David Isaac, Ford, Bryan.  2016.  Building Privacy-Preserving Cryptographic Credentials from Federated Online Identities. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :3–13.

Federated identity providers, e.g., Facebook and PayPal, offer a convenient means for authenticating users to third-party applications. Unfortunately, such cross-site authentications carry privacy and tracking risks. For example, federated identity providers can learn what applications users are accessing; meanwhile, the applications can learn the users' real identities. This paper presents Crypto-Book, an anonymizing layer enabling federated identity authentications while preventing these risks. Crypto-Book uses a set of independently managed servers that employ a (t,n)-threshold cryptosystem to collectively assign credentials to each federated identity (in the form of either a public/private keypair or blinded signed messages). With the credentials in hand, clients can then leverage anonymous authentication techniques such as linkable ring signatures or partially blind signatures to log into third-party applications in an anonymous yet accountable way. We have implemented a prototype of Crypto-Book and demonstrated its use with three applications: a Wiki system, an anonymous group communication system, and a whistleblower submission system. Crypto-Book is practical and has low overhead: in a deployment within our research group, Crypto-Book group authentication took 1.607s end-to-end, an overhead of 1.2s compared to traditional non-privacy-preserving federated authentication.
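
A minimal sketch of the (t,n)-threshold idea behind the independently managed servers, using textbook Shamir secret sharing over a prime field (the actual system threshold-shares a cryptosystem rather than ever reconstructing a raw secret; parameters here are toys):

```python
# Shamir (t, n) secret sharing: any t of n shares reconstruct the
# secret; fewer reveal nothing. Illustrative stand-in for Crypto-Book's
# threshold cryptosystem, not its actual protocol.
import random

P = 2**127 - 1                       # prime field modulus (Mersenne prime)

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the degree-(t-1) polynomial at x = 0.
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```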

2017-08-02
Bondi, André B..  2016.  Challenges with Applying Performance Testing Methods for Systems Deployed on Shared Environments with Indeterminate Competing Workloads: Position Paper. Companion Publication for ACM/SPEC on International Conference on Performance Engineering. :41–44.

There is a tendency to move production environments from corporate-owned data centers to cloud-based services. Users who do not maintain a private production environment might not wish to maintain a private performance test environment either. The application of performance engineering methods to the development and delivery of software systems is complicated when the form and/or parameters of the target deployment environment cannot be controlled or determined. The difficulty of diagnosing the causes of performance issues during testing or production may be increased by the presence of highly variable workloads on the target platform that compete with the application of interest for resources in ways that may be hard to determine. In particular, performance tests might be conducted in virtualized environments that introduce factors influencing customer-affecting metrics (such as transaction response time) and observed resource usage. Observed resource-usage metrics in virtualized environments can have different meanings from those in a native environment, and virtual machines may suffer delays in execution. We explore factors that exacerbate these complications. We argue that these complexities reinforce the case for rigorously using software performance engineering methods rather than diminishing it. We also explore possible performance testing methods for mitigating the risk associated with these complexities.

2017-10-03
Majumder, Abhishek, Deb, Subhrajyoti, Roy, Sudipta.  2016.  Classification and Performance Analysis of Intra-domain Mobility Management Schemes for Wireless Mesh Network. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :113:1–113:6.

Nowadays, Wireless Mesh Networks (WMNs) have emerged as a promising solution for modern wireless communications. However, one of the major problems with WMNs is the mobility of the Mesh Clients (MCs). To offer seamless connectivity to the MCs, their mobility must be managed. During mobility management, one of the major concerns is the communication overhead incurred during handoff of the MCs. To address this concern, many schemes have been proposed by researchers. In this paper, a classification of the existing intra-domain mobility management schemes is presented. The schemes have been numerically analyzed, and their performance has been compared with respect to handoff cost considering different mobility rates of the MCs.
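
For intuition, a toy handoff-cost calculation in the spirit of the comparison (scheme names, message counts, and the linear cost model below are invented for illustration, not taken from the paper):

```python
# Illustrative handoff-cost comparison: total signaling cost grows with
# a mesh client's mobility rate and each scheme's per-handoff overhead.
def handoff_cost(mobility_rate, signaling_msgs, cost_per_msg=1.0):
    """Expected signaling cost per unit time for one mesh client."""
    return mobility_rate * signaling_msgs * cost_per_msg

for rate in (0.1, 0.5, 1.0):                       # handoffs per second
    for name, msgs in (("scheme-A", 4), ("scheme-B", 7)):
        print(f"rate={rate}  {name}: cost={handoff_cost(rate, msgs):.1f}")
```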

2017-05-18
Jindal, Shikha, Sharma, Manmohan.  2016.  Design and Implementation of Kerberos Using DES Algorithm. Proceedings of the ACM Symposium on Women in Research 2016. :92–95.

Security plays a very important and crucial role in the field of network communication systems and the Internet. The Kerberos authentication protocol, designed and developed by the Massachusetts Institute of Technology (MIT), provides authentication by encrypting information and allows clients to access servers in a secure manner. This paper describes the design and implementation of Kerberos using the Data Encryption Standard (DES), a private-key cryptography system that provides security in the communication system. The Java Development Kit is used as the front end and MS Access as the back end for the implementation.
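
A minimal Python rendering of the DES encryption step alone (the paper's implementation is in Java; this sketch uses the third-party pycryptodome package, and DES itself is obsolete, shown here purely for fidelity to the paper):

```python
# Encrypt/decrypt a ticket-like payload with DES-CBC, the primitive the
# paper builds Kerberos on. Requires pycryptodome. DES's 56-bit key is
# far too weak for new systems; this mirrors the paper, not best practice.
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = b"8bytekey"                          # DES key is 8 bytes (56 effective bits)
cipher = DES.new(key, DES.MODE_CBC)        # random IV generated internally
ticket = cipher.iv + cipher.encrypt(pad(b"client/realm@EXAMPLE", DES.block_size))

iv, body = ticket[:8], ticket[8:]
plain = unpad(DES.new(key, DES.MODE_CBC, iv).decrypt(body), DES.block_size)
assert plain == b"client/realm@EXAMPLE"
```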

2017-10-03
Bello, Oumarou Mamadou, Taiwe, Kolyang Dina.  2016.  Mesh Node Placement in Wireless Mesh Network Based on Multiobjective Evolutionary Metaheuristic. Proceedings of the International Conference on Internet of Things and Cloud Computing. :59:1–59:6.

The necessity of deploying wireless mesh networks is determined by real-world application requirements. WMNs do not fit some applications well due to latency issues and capacity-related problems on paths with more than two hops. With promising IEEE 802.11ac-based devices, better fairness for multi-hop communications is expected in order to support broadband applications; the rate usually varies according to link quality and the network environment. Careful network planning can effectively improve the throughput and delay of the overall network. We provide a model for the placement of router nodes as an optimization process to improve performance. Our aim is to propose a WMN planning model based on multiobjective constraints such as coverage, reliability, and deployment cost. A bit-rate guarantee therefore makes it necessary to limit the number of stations connected to each access point; to take the delay and fairness of the network into account, users' behaviors are modeled. We use a multiobjective evolutionary metaheuristic to evaluate the performance of our proposed placement algorithm.
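
A toy, scalarized stand-in for the evolutionary placement search (client layout, radio radius, and cost weight are invented; a genuine multiobjective run would maintain a Pareto front over coverage, reliability, and cost rather than a single weighted score):

```python
# Evolve mesh-router positions to cover random clients while penalizing
# router count -- a (mu + lambda)-style sketch of metaheuristic placement.
import random

random.seed(1)
CLIENTS = [(random.random(), random.random()) for _ in range(50)]
RADIUS = 0.25                                   # toy radio range

def coverage(routers):
    covered = sum(any((cx - rx)**2 + (cy - ry)**2 <= RADIUS**2
                      for rx, ry in routers) for cx, cy in CLIENTS)
    return covered / len(CLIENTS)

def fitness(routers, cost_weight=0.05):
    # Scalarized objective: maximize coverage, penalize deployment cost.
    return coverage(routers) - cost_weight * len(routers)

pop = [[(random.random(), random.random()) for _ in range(6)]
       for _ in range(30)]
for _ in range(100):                            # simple evolutionary loop
    parent = max(pop, key=fitness)
    child = [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05))
             for x, y in parent]                # Gaussian mutation
    pop.append(child)
    pop.sort(key=fitness, reverse=True)
    pop = pop[:30]
print(f"best coverage: {coverage(pop[0]):.0%} with {len(pop[0])} routers")
```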

2017-12-27
Hassene, S., Eddine, M. N..  2016.  A new hybrid encryption technique permuting text and image based on hyperchaotic system. 2016 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP). :63–68.

This paper proposes a new hybrid technique for the combined encryption of text and images based on a hyperchaotic system. Since antiquity, man has continued looking for ways to send messages to his correspondents in order to communicate with them safely. Through successive epochs, this required both physical and intellectual effort to find an effective and appropriate communication technique. On another note, there is a behavior between rigid regularity and randomness. This behavior is called chaos, a new field of investigation that has opened up along with a new understanding of frequently misunderstood long-term effects. Chaotic cryptography was thus born by the inclusion of chaos in encryption algorithms. This article falls within this context. Its objective is to design and implement an encryption algorithm based on a hyperchaotic system. The algorithm is composed of four methods: two for encrypting images and two for encrypting texts. The user chooses the type of the encryption input (image or text) as well as that of the output. With its hybrid methods, this algorithm is considered a renovation in the science of cryptology, and this research opens up new possibilities.
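
For intuition, a minimal chaotic stream-cipher sketch using a simple logistic map rather than the paper's hyperchaotic system; the constants are arbitrary and the construction is illustrative only, not a secure design:

```python
# Chaotic keystream sketch: iterate the logistic map, quantize its state
# to bytes, and XOR with the plaintext. The initial condition x0 and
# parameter r act as the key. Toy construction, not secure.
def logistic_keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)             # logistic map iteration
        out.append(int(x * 256) % 256)  # quantize state to one byte
    return bytes(out)

def chaos_xor(data: bytes, x0=0.3141592, r=3.99):
    ks = logistic_keystream(x0, r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ct = chaos_xor(b"attack at dawn")
assert chaos_xor(ct) == b"attack at dawn"   # XOR is its own inverse
```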

2017-07-24
Melis, Luca, Asghar, Hassan Jameel, De Cristofaro, Emiliano, Kaafar, Mohamed Ali.  2016.  Private Processing of Outsourced Network Functions: Feasibility and Constructions. Proceedings of the 2016 ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization. :39–44.

Aiming to reduce the cost and complexity of maintaining networking infrastructures, organizations are increasingly outsourcing their network functions (e.g., firewalls, traffic shapers and intrusion detection systems) to the cloud, and a number of industrial players have started to offer network function virtualization (NFV)-based solutions. Alas, outsourcing network functions in its current setting implies that sensitive network policies, such as firewall rules, are revealed to the cloud provider. In this paper, we investigate the use of cryptographic primitives for processing outsourced network functions, so that the provider does not learn any sensitive information. More specifically, we present a cryptographic treatment of privacy-preserving outsourcing of network functions, introducing security definitions as well as an abstract model of generic network functions, and then propose a few instantiations using partial homomorphic encryption and public-key encryption with keyword search. We include a proof-of-concept implementation of our constructions and show that network functions can be privately processed by an untrusted cloud provider in a few milliseconds.

2017-08-02
Matsuki, Tatsuma, Matsuoka, Naoki.  2016.  A Resource Contention Analysis Framework for Diagnosis of Application Performance Anomalies in Consolidated Cloud Environments. Proceedings of the 7th ACM/SPEC on International Conference on Performance Engineering. :173–184.

Cloud services have made large contributions to the agile development and rapid revision of various applications. However, the performance of these applications is still one of the largest concerns for developers. Although many performance analysis frameworks have been created, most of them are not efficient for rapid application revisions because they require performance models, which may have to be remodeled whenever application revisions occur. We propose an analysis framework for the diagnosis of application performance anomalies. We designed our framework so that it does not require any performance models, making it efficient under rapid application revisions. The framework investigates the Pearson correlation and association rules between system metrics and application performance. Association rules are widely used in data mining to find relations between variables in databases. We demonstrated through an experiment and testing on a real data set that our framework can select causal metrics even when the metrics are temporally correlated, which reduces the false negatives obtained from cause diagnosis. We evaluated our framework from the perspective of the expected remaining diagnostic costs of framework users. The results indicated that it can be expected to reduce diagnostic costs by up to 84.8% compared with a method that only uses the Pearson correlation.
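
The correlation step is easy to sketch: rank system metrics by the absolute Pearson correlation of each with application performance. Metric names and data below are placeholders, and the association-rule mining that complements this step in the paper is omitted:

```python
# Rank candidate system metrics by |Pearson r| against app performance.
import numpy as np

rng = np.random.default_rng(0)
perf = rng.random(200)                           # app response times
metrics = {"cpu_steal": perf * 0.8 + rng.random(200) * 0.2,
           "disk_io":   rng.random(200),
           "net_rx":    rng.random(200)}

scores = {name: abs(np.corrcoef(vals, perf)[0, 1])
          for name, vals in metrics.items()}
for name, r in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: |r| = {r:.2f}")              # cpu_steal ranks first
```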

2017-03-29
Aono, Yoshinori, Hayashi, Takuya, Trieu Phong, Le, Wang, Lihua.  2016.  Scalable and Secure Logistic Regression via Homomorphic Encryption. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :142–144.

Logistic regression is a powerful machine learning tool for classifying data. When dealing with sensitive data such as private or medical information, care is necessary. In this paper, we propose a secure system for protecting the training data in logistic regression via homomorphic encryption. Perhaps surprisingly, despite the non-polynomial tasks of training in logistic regression, we show that only additively homomorphic encryption is needed to build our system. Our system is secure and scalable with the dataset size.
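
The core trick, aggregating encrypted per-record statistics without decryption, can be sketched with a textbook Paillier cryptosystem (toy, insecure key sizes; the paper's concrete scheme and data encoding differ):

```python
# Additively homomorphic aggregation: multiplying Paillier ciphertexts
# adds the underlying plaintexts, so a server can sum encrypted training
# statistics while only the key holder ever sees the total.
import random

p, q = 293, 433                       # toy primes -- NOT secure sizes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)                  # valid because g = n + 1

def enc(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

data = [5, 17, 42]                    # per-record statistics
agg = 1
for c in (enc(m) for m in data):
    agg = (agg * c) % n2              # ciphertext product = plaintext sum
assert dec(agg) == sum(data)
```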

2017-09-15
Rodrigues, Bruno, Quintão Pereira, Fernando Magno, Aranha, Diego F..  2016.  Sparse Representation of Implicit Flows with Applications to Side-channel Detection. Proceedings of the 25th International Conference on Compiler Construction. :110–120.

Information flow analyses traditionally use the Program Dependence Graph (PDG) as a supporting data structure. This graph relies on Ferrante et al.'s notion of control dependences to represent implicit flows of information. A limitation of this approach is that it may create O(|I| × |E|) implicit flow edges in the PDG, where I is the set of instructions in a program and E is the set of edges in its control flow graph. This paper shows that it is possible to compute information flow analyses using a different notion of implicit dependence, which yields a number of edges linear in the number of definitions plus uses of variables. Our algorithm computes these dependences in a single traversal of the program's dominance tree. This efficiency is possible due to a key property of programs in Static Single Assignment form: the definition of a variable dominates all its uses. Our algorithm correctly implements Hunt and Sands' system of security types. Contrary to their original formulation, which required O(|I|²) space and time for structured programs, we require only O(|I|). We have used our ideas to build FlowTracker, a tool that uncovers side-channel vulnerabilities in cryptographic algorithms. FlowTracker handles programs with over one million assembly instructions in less than 200 seconds, and creates 24% fewer implicit flow edges than Ferrante et al.'s technique. FlowTracker has detected an issue in a constant-time implementation of Elliptic Curve Cryptography; it has found several time-variant constructions in OpenSSL and one issue in TrueCrypt, and it has validated the isochronous behavior of the NaCl library.

2017-08-02
Menninghaus, Mathias, Pulvermüller, Elke.  2016.  Towards Using Code Coverage Metrics for Performance Comparison on the Implementation Level. Proceedings of the 7th ACM/SPEC on International Conference on Performance Engineering. :101–104.

The development process for new algorithms or data structures often begins with the analysis of benchmark results to identify the drawbacks of already existing implementations. It ends with the comparison of old and new implementations using one or more well-established benchmarks. However relevant, reproducible, fair, verifiable, and usable those benchmarks may be, they have certain drawbacks. On the one hand, a new implementation may be biased to provide good results for a specific benchmark. On the other hand, benchmarks are very general and often fail to identify the worst and best cases of a specific implementation. In this paper we present a new approach for the comparison of algorithms and data structures on the implementation level using code coverage. Our approach uses model checking and multi-objective evolutionary algorithms to create test cases with high code coverage. It then executes each of the given implementations with each of the test cases in order to calculate a cross coverage. From this it calculates a combined coverage and a weighted performance in which implementations that are not fully covered by the test cases of the other implementations are penalized. These metrics can be used to compare the performance of several implementations on a much deeper level than traditional benchmarks, and they incorporate worst, best, and average cases in an equal manner. We demonstrate this approach on two example sets of algorithms and outline the next research steps required in this context along with the greatest risks and challenges.
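
A sketch of the cross-coverage bookkeeping (the coverage sets, timings, and the particular penalty of inflating time by uncovered code are invented placeholders, not the paper's exact metrics):

```python
# Each implementation is run with every implementation's tests; code not
# covered by the pooled tests penalizes that implementation's score.
def combined_coverage(own_lines, lines_hit_by_all_tests):
    return len(lines_hit_by_all_tests & own_lines) / len(own_lines)

impls = {
    "impl_a": {"lines": set(range(100)), "time_s": 1.9},
    "impl_b": {"lines": set(range(120)), "time_s": 1.4},
}
hit = {"impl_a": set(range(95)), "impl_b": set(range(80))}  # toy data

for name, info in impls.items():
    cov = combined_coverage(info["lines"], hit[name])
    # One plausible weighting: uncovered code inflates reported time.
    print(f"{name}: coverage={cov:.0%} weighted_time={info['time_s']/cov:.2f}s")
```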

2017-09-26
Jangra, Ajay, Singh, Niharika, Lakhina, Upasana.  2016.  VIP: Verification and Identification Protective Data Handling Layer Implementation to Achieve MVCC in Cloud Computing. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :124:1–124:6.

In transactional database systems, multiversion concurrency control (MVCC) is maintained for secure, fast, and efficient access to shared data files. Effective coordination must be set up between owners and users, as well as developers and system operators, to maintain inter-cloud and intra-cloud communication. Most of the services and applications offered in the cloud world are real-time, which entails an optimized, compatible service environment between master and slave clusters. The methodology offered in this paper supports replication and triggering methods intended for data consistency and dynamicity, where intercommunication between different clusters is processed through middleware and slave intra-communication is handled by verification and identification protection. The proposed approach incorporates a resistive flow to handle high-impact systems that identifies and verifies multiple processes. Results show that the new scheme reduces the overheads on the different master and slave servers, as they are co-located in clusters, which allows increased horizontal and vertical scalability of resources.
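
A minimal sketch of the MVCC idea itself, independent of the paper's cloud layering: writers append timestamped versions instead of overwriting, and each reader sees the newest version at or before its snapshot timestamp:

```python
# Tiny multiversion store: reads never block writes because every write
# creates a new (timestamp, value) version.
import itertools

class MVCCStore:
    def __init__(self):
        self.versions = {}               # key -> list of (ts, value)
        self.clock = itertools.count(1)  # global logical timestamp

    def write(self, key, value):
        ts = next(self.clock)
        self.versions.setdefault(key, []).append((ts, value))
        return ts

    def read(self, key, snapshot_ts):
        # Newest version written at or before the reader's snapshot.
        visible = [v for ts, v in self.versions.get(key, [])
                   if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVCCStore()
t1 = store.write("x", "v1")
t2 = store.write("x", "v2")
assert store.read("x", t1) == "v1" and store.read("x", t2) == "v2"
```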

2017-03-29
Mallaiah, Kurra, Gandhi, Rishi Kumar, Ramachandram, S..  2016.  Word and Phrase Proximity Searchable Encryption Protocols for Cloud Based Relational Databases. Proceedings of the International Conference on Internet of Things and Cloud Computing. :42:1–42:12.

In this paper, we propose practical and efficient word and phrase proximity searchable encryption protocols for cloud-based relational databases. The proposed advanced searchable encryption protocols are provably secure. We formalize the security assurance with cryptographic security definitions and prove the security of our searchable encryption protocols under Shannon's perfect secrecy assumption. We have tested the proposed protocols comprehensively on Amazon's high-performance computing servers using a MySQL database and present the results. The proposed protocols ensure that there is zero space and communication overhead, because the ciphertext size is equal to the plaintext size. For the same reason, the database schema does not change for existing applications. In this paper, we also present the results of a comprehensive analysis of the Song, Wagner, and Perrig scheme.
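
A toy sketch of proximity search over an encrypted positional index, with keyed HMAC tokens standing in for the paper's protocols (deterministic tokens leak word frequencies, so this is a deliberate simplification for illustration):

```python
# Server stores (token -> positions); it can answer "w1 within k words
# of w2" without seeing plaintext words, since tokens are keyed HMACs.
import hmac, hashlib

KEY = b"secret-index-key"   # held by the client, never by the server

def token(word):
    return hmac.new(KEY, word.lower().encode(), hashlib.sha256).hexdigest()

def build_index(doc):
    index = {}
    for pos, word in enumerate(doc.split()):
        index.setdefault(token(word), []).append(pos)
    return index

def proximity(index, w1, w2, k):
    p1, p2 = index.get(token(w1), []), index.get(token(w2), [])
    return any(abs(a - b) <= k for a in p1 for b in p2)

idx = build_index("the quick brown fox jumps over the lazy dog")
assert proximity(idx, "quick", "fox", 2)
```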

2017-11-20
Yap, B. L., Baskaran, V. M..  2016.  Active surveillance using depth sensing technology - Part I: Intrusion detection. 2016 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW). :1–2.

In part I of a three-part series on active surveillance using depth-sensing technology, this paper proposes an algorithm to identify outdoor intrusion activities by monitoring skeletal positions from a Microsoft Kinect sensor in real time. The algorithm implements three techniques to identify a premise intrusion. The first technique observes a boundary line along the wall (or fence) of a surveilled premise for skeletal trespassing detection. The second technique observes the duration of a skeletal object within a region of a surveilled premise for loitering detection. The third technique analyzes differences in skeletal height to identify wall climbing. Experimental results suggest that the proposed algorithm is able to detect trespassing, loitering and wall climbing at rates of 70%, 85% and 80% respectively.
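
The three checks reduce to simple geometry on skeletal tracks; the sketch below uses invented coordinates and thresholds rather than real Kinect SDK output:

```python
# Toy versions of the three intrusion checks on a skeleton track.
BOUNDARY_X = 5.0        # fence line along the x-axis, in metres
LOITER_SECS = 30.0      # dwell time that counts as loitering
CLIMB_DELTA = 0.5       # unexplained head-height gain, in metres

def trespassing(track):
    """track: list of (x, t) skeleton positions over time."""
    return any(x > BOUNDARY_X for x, t in track)

def loitering(track):
    inside = [t for x, t in track if x > BOUNDARY_X]
    return bool(inside) and (max(inside) - min(inside)) >= LOITER_SECS

def climbing(head_heights):
    return max(head_heights) - min(head_heights) >= CLIMB_DELTA

track = [(4.0, 0.0), (5.5, 2.0), (5.6, 40.0)]     # invented track
print(trespassing(track), loitering(track), climbing([1.7, 1.7, 2.3]))
```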

2017-11-27
Chopade, P., Zhan, J., Bikdash, M..  2016.  Micro-Community detection and vulnerability identification for large critical networks. 2016 IEEE Symposium on Technologies for Homeland Security (HST). :1–7.

In this work we put forward a novel approach using graph partitioning and micro-community detection techniques. We first use algebraic connectivity (the Fiedler eigenvector) and spectral partitioning for community detection. We then use modularity maximization and micro-level clustering for detecting micro-communities with the concept of community energy. We run the micro-community clustering algorithm recursively with modularity maximization, which helps us identify dense, deeper, and hidden community structures. We evaluated our Micro-Community Clustering (MCC) algorithm on various types of complex technological and social community networks: directed weighted, directed unweighted, undirected weighted, and undirected unweighted. A notable property of this algorithm is that it is scalable in nature.
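
The first stage, spectral bisection by the sign of the Fiedler eigenvector, can be sketched in a few lines of numpy (the six-node adjacency matrix below is a toy two-community graph, not one of the paper's networks):

```python
# Spectral bisection: split a graph by the sign of the eigenvector of
# the second-smallest Laplacian eigenvalue (the Fiedler vector).
import numpy as np

A = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)          # eigh: symmetric eigensolver,
fiedler = vecs[:, 1]                    # eigenvalues sorted ascending
print("community 1:", np.where(fiedler >= 0)[0])
print("community 2:", np.where(fiedler < 0)[0])
```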

2017-11-20
Aqel, S., Aarab, A., Sabri, M. A..  2016.  Shadow detection and removal for traffic sequences. 2016 International Conference on Electrical and Information Technologies (ICEIT). :168–173.

This paper addresses the problem of shadow detection and removal in traffic vision analysis. The presence of shadows in traffic sequences is unavoidable, leading to errors at the segmentation stage: shadows are often misclassified as an object region or as a moving object. This paper presents a shadow removal method based on both color and texture features, aiming to efficiently retrieve moving objects whose detection is usually affected by cast shadows. Additionally, in order to obtain a shadow-free foreground segmentation image, a morphological reconstruction algorithm is used to recover the foreground disturbed by shadow removal. Once shadows are detected, an automatic shadow removal model is applied based on information retrieved from the histogram shape. Experimental results on a real traffic sequence are presented to test the proposed approach and validate the algorithm's performance.
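
One classic color cue used by shadow detectors of this kind (thresholds invented; the paper additionally combines texture features and morphological reconstruction) is that a cast shadow darkens the background while roughly preserving its color ratios:

```python
# Chromaticity-based shadow test: flag pixels that are darker than the
# background but keep a similar normalized color.
import numpy as np

def shadow_mask(frame, background, lo=0.4, hi=0.9, color_tol=0.1):
    f = frame.astype(float) + 1e-6
    b = background.astype(float) + 1e-6
    ratio = f.sum(axis=2) / b.sum(axis=2)            # brightness drop
    chroma = np.abs(f / f.sum(axis=2, keepdims=True)
                    - b / b.sum(axis=2, keepdims=True)).sum(axis=2)
    return (ratio > lo) & (ratio < hi) & (chroma < color_tol)

bg = np.full((2, 2, 3), 200.0)
frame = bg.copy()
frame[0, 0] *= 0.6              # uniformly darkened pixel -> shadow
print(shadow_mask(frame, bg))   # True only at (0, 0)
```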

2017-07-24
Xu, Peng, Li, Jingnan, Wang, Wei, Jin, Hai.  2016.  Anonymous Identity-Based Broadcast Encryption with Constant Decryption Complexity and Strong Security. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :223–233.

Anonymous Identity-Based Broadcast Encryption (AIBBE) allows a sender to broadcast a ciphertext to multiple receivers while keeping the receivers' anonymity. Existing AIBBE schemes fail to achieve efficient decryption or strong security, such as constant decryption complexity, security under adaptive attack, or security in the standard model. Hence, we propose two new AIBBE schemes to overcome the drawbacks of previous schemes in the state of the art. The biggest contribution of our work is an AIBBE scheme with constant decryption complexity and provable security under adaptive attack in the standard model. This scheme is the first to obtain advantages in all the aspects mentioned above, and it makes a substantial theoretical contribution due to its strong security. We also propose another AIBBE scheme in the Random Oracle (RO) model, which is of practical interest, as demonstrated by our experiments.

2017-09-19
Costin, Andrei, Zarras, Apostolis, Francillon, Aurélien.  2016.  Automated Dynamic Firmware Analysis at Scale: A Case Study on Embedded Web Interfaces. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :437–448.

Embedded devices are becoming more widespread, interconnected, and web-enabled than ever. However, recent studies showed that embedded devices are far from being secure. Moreover, many embedded systems rely on web interfaces for user interaction or administration. Web security is still difficult, and therefore the web interfaces of embedded systems represent a considerable attack surface. In this paper, we present the first fully automated framework that applies dynamic firmware analysis techniques to achieve, in a scalable manner, automated vulnerability discovery within embedded firmware images. We apply our framework to study the security of embedded web interfaces running in Commercial Off-The-Shelf (COTS) embedded devices, such as routers, DSL/cable modems, VoIP phones, and IP/CCTV cameras. We introduce a methodology and implement a scalable framework for discovery of vulnerabilities in embedded web interfaces regardless of the devices' vendor, type, or architecture. To reach this goal, we perform full system emulation to achieve the execution of firmware images in a software-only environment, i.e., without involving any physical embedded devices. Then, we automatically analyze the web interfaces within the firmware using both static and dynamic analysis tools. We also present some interesting case studies and discuss the main challenges associated with the dynamic analysis of firmware images and their web interfaces and network services. The observations we make in this paper shed light on an important aspect of embedded devices which was not previously studied at a large scale.

2017-06-27
Hardjono, Thomas, Smith, Ned.  2016.  Cloud-Based Commissioning of Constrained Devices Using Permissioned Blockchains. Proceedings of the 2Nd ACM International Workshop on IoT Privacy, Trust, and Security. :29–36.

In this paper we describe a privacy-preserving method for commissioning an IoT device into a cloud ecosystem. The commissioning consists of the device proving its manufacturing provenance in an anonymous fashion, without reliance on a trusted third party, and being anonymously registered through the use of a blockchain system. We introduce the ChainAnchor architecture, which provides device commissioning in a privacy-preserving fashion. The goals of ChainAnchor are (i) to support anonymous device commissioning, (ii) to support device owners being remunerated for selling their device sensor data to service providers, and (iii) to incentivize device owners and service providers to share sensor data in a privacy-preserving manner.

2017-07-24
Fang, Fuyang, Li, Bao, Lu, Xianhui, Liu, Yamin, Jia, Dingding, Xue, Haiyang.  2016.  (Deterministic) Hierarchical Identity-based Encryption from Learning with Rounding over Small Modulus. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :907–912.

In this paper, we propose a hierarchical identity-based encryption (HIBE) scheme in the random oracle (RO) model based on the learning with rounding (LWR) problem over a small modulus q. Compared with previous HIBE schemes based on the learning with errors (LWE) problem, the ciphertext expansion ratio of our scheme can be decreased to 1/2. Then, we utilize the HIBE scheme to construct a deterministic hierarchical identity-based encryption (D-HIBE) scheme based on the LWR problem over a small modulus. Finally, with the technique of binary tree encryption (BTE), we construct HIBE and D-HIBE schemes in the standard model based on the LWR problem over a small modulus.
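
For reference, the rounding operation that distinguishes LWR from LWE (the standard definition, not this paper's specific parameters): instead of adding an explicit error term, each sample deterministically rounds the inner product down into a smaller modulus.

```latex
% LWR rounding from Z_q to Z_p (p < q); a sample is (a, rounded inner product).
\[
  \lfloor x \rceil_p \;=\; \Big\lfloor \tfrac{p}{q}\, x \Big\rceil \bmod p,
  \qquad x \in \mathbb{Z}_q,
  \qquad \text{LWR sample: } \big(\mathbf{a},\; \lfloor \langle \mathbf{a}, \mathbf{s} \rangle \rceil_p \big).
\]
```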