Bibliography
The Internet of Things (IoT) market is growing rapidly, driving the continuous evolution of new technologies. Alongside this development, most IoT devices are easy to compromise, as security is often not a prioritized characteristic. This paper proposes a novel IoT Security Model (IoTSM) that organizations can use to formulate and implement a strategy for developing end-to-end IoT security. IoTSM is grounded in the Software Assurance Maturity Model (SAMM) framework, but expands it with new security practices and empirical data gathered from IoT practitioners. Moreover, we generalize the model into a conceptual framework. This approach allows the formal analysis of security in general and the evaluation of an organization's security practices. Overall, our proposed approach can help researchers, practitioners, and IoT organizations reason about IoT security from an end-to-end perspective.
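To make the maturity-evaluation idea concrete, here is a minimal scoring sketch in the spirit of SAMM-style maturity models; the practice names and levels are illustrative assumptions, not IoTSM's actual ones.

```python
# Hypothetical maturity scoring sketch (SAMM-style); practices and the
# target level are assumed for illustration, not taken from IoTSM.
PRACTICES = ["threat_modeling", "secure_update", "device_hardening"]

def assess(answers):
    """answers maps each practice to a maturity level 0-3; the overall
    score is the average, flagging practices below a target level of 2."""
    score = sum(answers[p] for p in PRACTICES) / len(PRACTICES)
    gaps = [p for p in PRACTICES if answers[p] < 2]
    return score, gaps

print(assess({"threat_modeling": 3, "secure_update": 1, "device_hardening": 2}))
# (2.0, ['secure_update'])
```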
An advanced metering infrastructure (AMI) allows real-time fine-grained monitoring of the energy consumption data of individual consumers. Collected metering data can be used for a multitude of applications. For example, energy demand forecasting, based on the reported fine-grained consumption, can help manage near-future energy production. However, fine-grained metering data reporting can lead to privacy concerns. It is, therefore, imperative that the utility company receives the fine-grained data needed to perform the intended demand response service, without learning any sensitive information about individual consumers. In this paper, we propose an anonymous privacy preserving fine-grained data aggregation scheme for AMI networks. In this scheme, the utility company receives only the distribution of the energy consumption by the consumers at different time slots. We leverage a network tree topology structure in which each smart meter randomly reports its energy consumption data to its parent smart meter (according to the tree). The parent node updates the consumption distribution and forwards the data to the utility company. Our analysis results show that the proposed scheme can preserve the privacy and security of individual consumers while guaranteeing the demand response service.
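The distribution-only reporting can be illustrated with a short sketch. The bins, names, and flow below are assumptions for illustration; the paper's exact protocol, including its randomized parent selection, is not reproduced.

```python
# Sketch of tree-based distribution aggregation: each meter merges its
# children's histograms, adds its own bucketed reading, and forwards only
# the counts upward. Bin boundaries (kWh) are assumed for illustration.
from collections import Counter

BINS = [(0, 1), (1, 3), (3, 6), (6, float("inf"))]

def bin_index(kwh):
    return next(i for i, (lo, hi) in enumerate(BINS) if lo <= kwh < hi)

def report(own_kwh, child_distributions):
    dist = Counter()
    for d in child_distributions:
        dist.update(d)           # merge histograms from child meters
    dist[bin_index(own_kwh)] += 1  # fold in the meter's own reading
    return dist                  # only counts travel toward the utility

# Leaves send one-entry histograms; the root forwards the final
# distribution, so the utility never sees any per-meter value.
leaf_a, leaf_b = report(0.4, []), report(4.2, [])
print(report(2.1, [leaf_a, leaf_b]))  # Counter({0: 1, 2: 1, 1: 1})
```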
Maritime transportation plays a critical role for the U.S. and global economies, and has evolved into a complex system that involves a plethora of supply chain stakeholders spread around the globe. The inherent complexity brings huge security challenges, including cargo loss and high burdens in cargo inspection against illicit activities and potential terrorist attacks. The emerging blockchain technology provides a promising tool to build a unified maritime cargo tracking system critical for cargo security. However, most existing efforts focus on the transportation data itself, while ignoring how to consistently bind physical cargo movements to the information managed by the system. This can severely undermine the effectiveness of securing cargo transportation. To fill this gap, we propose a binding scheme leveraging a novel digital identity management mechanism. The digital identity management mechanism maps the best practice in the physical world to the cyber world and can be seamlessly integrated with a blockchain-based cargo management system.
Traceability has grown from being a specialized need for certain safety-critical segments of industry to a recognized value-add tool for industry as a whole, one that can be applied to manual through automated processes end to end throughout the supply chain. The perception persists, however, that traceability data collection is a burden that provides value only when the rarest and most disastrous of events takes place. Disparate standards, mainly dictated by large OEM companies in the market, have evolved in the industry and create confusion as a multitude of requirements and definitions proliferate. The intent of the IPC-1782 project is to bring the whole principle of traceability up to date and enable business to move faster, increase revenue, increase productivity, and decrease costs as a result of increased trust. Traceability as defined in this standard represents the most effective quality tool available, becoming an intrinsic part of best-practice operations. It encourages automated data collection from existing manufacturing systems, which works well with Industry 4.0, integrating quality, reliability, product safety, predictive (routine, preventative, and corrective) maintenance, throughput, manufacturing, engineering, and supply-chain data. This reduces cost of ownership while ensuring timeliness and accuracy all the way from a finished product back through to the initial materials, including granular attributes of the processes along the way. The goal of this standard is to create a single expandable and extendable data structure that can be adopted for all levels of traceability and enables information to be exchanged easily, as appropriate, across many industries. The scope includes support for the most demanding instances of detail and integrity, such as those required by critical safety systems, all the way through to situations where only basic traceability, such as for simple consumer products, is required. A key driver for adoption of the standard is the ability to find a relevant and achievable level of traceability that exactly meets the requirements identified by a risk assessment of the business. The wealth of data accessible from traceability for analysis (e.g., Big Data) can quickly yield information that raises expectations of very significant quality and performance improvements, while providing protection against the costs of issues in the market and timely information to regulatory bodies, consumers, and customers as appropriate. This information can also be used to raise yields quickly, drive product innovation that resonates with consumers, and help shape development tests and design requirements that are meaningful to the marketplace. Leveraging IPC-1782 thus allows a business to extract the best value from component traceability.
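The "single expandable and extendable data structure" can be imagined as a nested record along the lines of the hypothetical sketch below; the field names are illustrative and are not taken from the IPC-1782 standard itself.

```python
# Hypothetical traceability record in the spirit of IPC-1782; field names
# are illustrative, not the standard's actual schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProcessStep:
    station_id: str
    timestamp: str                                             # ISO 8601
    attributes: Dict[str, str] = field(default_factory=dict)   # extensible detail

@dataclass
class TraceRecord:
    unit_id: str                                               # finished-product serial
    components: List[str] = field(default_factory=list)        # material lot/serial refs
    steps: List[ProcessStep] = field(default_factory=list)     # process history
    level: int = 1                                             # depth chosen by risk assessment

# A simple consumer product might carry level=1 with lot codes only, while
# a safety-critical assembly records serialized components plus granular
# process attributes at every step, using the same expandable structure.
```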
SQL injection is one of the most critical security vulnerabilities in web applications. Most web applications use SQL databases as their back end, and SQL injection mainly affects these websites and web applications. An attacker can easily bypass a web application's authentication and authorization and gain access to the contents they want through SQL injection. This unauthorized access lets the attacker retrieve confidential data and trade secrets, and even delete or modify valuable documents. Although many preventive measures exist to an extent, there is still no complete solution to this problem. Hence, based on the surveys and analyses done, an enhanced methodology is proposed for SQL injection disclosure and deterrence, ensuring proper authentication using Heisenberg analysis and password security using a honeypot mechanism.
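As a baseline for the prevention discussion, the sketch below shows the standard parameterized-query defense; it is not the paper's Heisenberg/honeypot mechanism, only the conventional countermeasure that such proposals build beyond.

```python
# Baseline SQL-injection defense via parameterized queries (standard
# practice; the paper's own mechanism is not reproduced here).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'h1')")

def login(name, pw_hash):
    # Placeholders keep attacker input as data, never as SQL text, so a
    # payload like "' OR '1'='1" cannot alter the query structure.
    cur = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND pw_hash = ?", (name, pw_hash)
    )
    return cur.fetchone() is not None

print(login("alice", "h1"))        # True
print(login("' OR '1'='1", "x"))   # False: the payload is inert
```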
If, as most experts agree, the mathematical basis of major blockchain systems is (probably if not provably) sound, why do they have a bad reputation? Human misbehavior (such as failed Bitcoin exchanges) accounts for some of the issues, but there are also deeper and more interesting vulnerabilities here. These include design faults and code-level implementation defects, ecosystem issues (such as wallets), and approaches such as the "51% attack", all of which can compromise the integrity of blockchain systems. With particular attention to the emerging non-financial applications of blockchain technology, this paper demonstrates the kinds of attacks that are possible and provides suggestions for minimizing the risks involved.
Recently, digital transactions in real estate, insurance, and similar fields have become popular, and researchers are actively studying digital signatures as a method for distinguishing individuals. However, existing digital signature systems make signatures differently depending on the platform and device, and because they run on platforms owned by corporations, they have the disadvantage of being highly platform-dependent and offering low software extensibility. Therefore, in this paper we analyze existing digital signature systems and design a heterogeneous integrated digital signature system that provides per-user contract management, guarantees platform independence, and improves the ease of software extension and maintenance by using a browser environment.
Many companies within the Internet of Things (IoT) sector rely on the personal data of users to deliver and monetize their services, creating a high demand for personal information. A user can be seen as making a series of transactions, each involving the exchange of personal data for a service. In this paper, we argue that privacy can be described quantitatively, using the game-theoretic concept of value of information (VoI), enabling us to assess whether each exchange is an advantageous one for the user. We introduce PrivacyGate, an extension to the Android operating system built for the purpose of studying privacy of IoT transactions. An example study, and its initial results, are provided to illustrate its capabilities.
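The value-of-information idea can be illustrated with a toy calculation; the utilities and probabilities below are assumed for illustration and are not PrivacyGate's actual model.

```python
# Toy VoI calculation: a data-for-service exchange is advantageous when
# the gain from sharing, relative to withholding (utility 0), is positive.
# All numbers are assumptions for illustration.

def expected_utility(p_misuse, benefit, privacy_loss):
    """Utility of sharing = service benefit minus expected privacy cost."""
    return benefit - p_misuse * privacy_loss

def value_of_information(p_misuse, benefit, privacy_loss):
    """Gain from sharing relative to the withholding baseline of 0."""
    return expected_utility(p_misuse, benefit, privacy_loss) - 0.0

# Coarse location for a weather app: small expected loss, clear benefit.
print(value_of_information(p_misuse=0.1, benefit=1.0, privacy_loss=2.0))  # 0.8 -> share
# Fine GPS traces: larger expected loss outweighs the same benefit.
print(value_of_information(p_misuse=0.3, benefit=1.0, privacy_loss=6.0))  # -0.8 -> withhold
```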
Despite widespread use of commercial anti-virus products, the number of malicious files detected on home and corporate computers continues to increase at a significant rate. Recently, anti-virus companies have started investing in machine learning solutions to augment signatures manually designed by analysts. A malicious file's determination is often represented as a hierarchical structure consisting of a type (e.g. Worm, Backdoor), a platform (e.g. Win32, Win64), a family (e.g. Rbot, Rugrat) and a family variant (e.g. A, B). While there has been substantial research in automated malware classification, this hierarchical structure, which can provide additional information to classification models, has been ignored. In this paper, we propose the novel idea of employing hierarchical learning algorithms for the automated classification of malicious files and study its performance. To the best of our knowledge, this is the first research effort to incorporate the hierarchical structure of the malware label in automated classification, and in the security domain in general. It is important to note that our method does not require any additional effort by analysts, because they already assign these hierarchical labels today. Our empirical results on a real-world, industrial-scale malware dataset of 3.6 million files demonstrate that incorporating the label hierarchy achieves a significant reduction of 33.1% in the binary error rate compared to the non-hierarchical classifier traditionally used in such problems.
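One common way to exploit such a label hierarchy is top-down classification: predict the type first, then the family conditioned on the type. The sketch below, on synthetic data, illustrates this scheme; it is not necessarily the paper's exact algorithm.

```python
# Minimal top-down hierarchical classifier (one of several ways to use a
# label hierarchy; not the paper's exact method). Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
types = rng.choice(["Worm", "Backdoor"], size=200)
# Family labels nested under each type (names follow the abstract's examples).
fams = np.where(types == "Worm",
                rng.choice(["Rbot", "Sasser"], size=200),
                rng.choice(["Rugrat", "Hupigon"], size=200))

top = LogisticRegression(max_iter=1000).fit(X, types)    # level 1: type
leaf = {t: LogisticRegression(max_iter=1000).fit(X[types == t], fams[types == t])
        for t in ["Worm", "Backdoor"]}                   # level 2: family | type

def predict(x):
    t = top.predict(x.reshape(1, -1))[0]                 # route by predicted type
    return t, leaf[t].predict(x.reshape(1, -1))[0]

print(predict(X[0]))  # e.g. ('Worm', 'Rbot')
```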
Code signing is at present the only methodology for trusting code that is distributed to others, and it relies heavily on the security of the software provider's private key. Attackers mount targeted attacks on the code signing infrastructure to steal signing keys, which are later used to distribute malware disguised as genuine software. Differentiating malware from benign software becomes extremely difficult once it is signed with a trusted software provider's private key, as operating systems implicitly trust such signed code. In this paper, we analyze the growing menace of signed malware by examining several real-world incidents and present a threat model for the current code signing infrastructure. We also propose a novel solution that prevents malicious code signing by requiring additional verification of the executable, and we discuss the serious threat this issue poses and its consequences. To our knowledge, this is the first time this specific issue of malicious code signing has been thoroughly studied and an implementable solution proposed.
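One plausible reading of "additional verification of the executable" is checking the binary's hash against a list the publisher distributes out of band, so a stolen signing key alone no longer suffices. The sketch below is hypothetical; the names and flow are illustrative, not the paper's actual scheme.

```python
# Hypothetical extra verification layer on top of the OS signature check:
# the file is trusted only if its hash also appears in a manifest fetched
# independently from the publisher. Illustrative, not the paper's design.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, os_signature_ok, publisher_manifest):
    """Require both checks: a malware sample signed with a stolen key
    passes the first but fails the manifest lookup."""
    return os_signature_ok and sha256_of(path) in publisher_manifest
```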
The online portion of modern life is growing at an astonishing rate, with the consequence that more of a user's critical information is stored online. This poses an immediate threat to the privacy and security of user data. This work covers the increasing dangers and security risks of adware, adware injection, and malware injection. These programs increase in direct proportion to the number of users on the Internet, and each presents an imminent threat to a user's privacy and sensitive information whenever they use the Internet. We discuss how current ad blockers are not an actual solution to these threats, but rather a premise for our work. Current ad-blocking tools can be detected by web servers, which often demand that the tool be disabled. Disabling the tool creates vulnerabilities in a user's system, but even when the tool is active the system remains at risk, since ad-blocking tools can still let adware content through. Our solution to these contemporary threats is our tool, MalFire.
In this paper we present a case study of applying fitness dimensions in API design assessment. We argue that API assessment is company specific and should take into consideration various stakeholders in the API ecosystem. We identified new fitness dimensions and introduced the notion of design considerations for fitness dimensions such as priorities, tradeoffs, and technical versus cognitive classification.
There is no doubt that security issues are on the rise, and defense mechanisms are becoming one of the leading subjects for academic and industry experts. In this paper, we focus on the security domain and envision a new way of looking at the security life cycle. We use this vision to propose an asset-based approach to countering zero-day attacks. To evaluate our proposal, we built a prototype. The initial results are promising and indicate that our prototype will achieve its goal of detecting zero-day attacks.
There are vast amounts of information in our world. Accessing the most accurate information quickly is becoming more difficult and complicated, and much relevant information is ignored, which leads to duplication of work and effort. Current efforts therefore focus on providing rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Because search results are massive, the user will normally have difficulty identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information filtering system that predicts the relevance of retrieved information to the user's needs according to some criteria, and can hence provide the user with the results that best fit those needs. Services provided through the web normally return massive information about any requested item or service, and an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with trust information, so that recommendations are ranked according to their level of trust. In our research, we produced a recommendation system combined with an efficient level-of-trust system to guarantee that the posts, comments, and feedback from users are trusted. We customized the concept of LoT (Level of Trust) [1], since it can cover medical, shopping, and learning through social media. The proposed system, TRS_LoT, provides trusted recommendations to users with a high percentage of accuracy. A dataset of 300 Amazon posts with more than 5,000 comments was selected, and the experiment was conducted on this dataset using post ratings.
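A trust-augmented ranking can be sketched as a weighted blend of rating and rater trust; the weights and scoring formula below are assumptions, since the paper's exact LoT computation is not given here.

```python
# Illustrative trust-weighted ranking in the spirit of TRS_LoT; the blend
# weight alpha and the score formula are assumed for illustration.
def rank(posts, alpha=0.7):
    """Score each post by a blend of its normalized average rating and the
    average trust level of the users who rated or commented on it."""
    def score(p):
        return alpha * p["avg_rating"] / 5.0 + (1 - alpha) * p["avg_trust"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "avg_rating": 4.8, "avg_trust": 0.30},  # highly rated, low-trust raters
    {"id": 2, "avg_rating": 4.2, "avg_trust": 0.95},  # slightly lower, trusted raters
]
print([p["id"] for p in rank(posts)])  # [2, 1]: trust reorders near-tied ratings
```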
Caching query results is an efficient technique for Web search engines, and a state-of-the-art approach named Static-Dynamic Cache (SDC) is widely used in practice. The replacement policy is the key factor in the performance of a cache system, and policies such as LIRS, ARC, CLOCK, SKLRU, and RANDOM have been widely studied in different research areas. In this paper, we discuss replacement policies for the static-dynamic cache and conduct experiments on real large-scale query logs from two well-known commercial Web search engine companies. The experimental results show that the ARC replacement policy works well with the static-dynamic cache, especially for large-scale query result caches.
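A minimal SDC sketch follows: the static segment is frozen to the historically most frequent queries, while the dynamic segment uses a pluggable replacement policy, here plain LRU for brevity (the paper's focus, ARC, would slot in at the same place).

```python
# Minimal static-dynamic cache: a frozen frequency-based static segment
# plus an LRU dynamic segment (stand-in for ARC and the other policies).
from collections import Counter, OrderedDict

class SDC:
    def __init__(self, log, static_size, dynamic_size):
        top = Counter(log).most_common(static_size)
        self.static = {q for q, _ in top}      # read-only segment from history
        self.dynamic = OrderedDict()           # LRU segment
        self.cap = dynamic_size

    def hit(self, q):
        if q in self.static:
            return True
        if q in self.dynamic:
            self.dynamic.move_to_end(q)        # refresh recency
            return True
        self.dynamic[q] = True                 # admit on miss
        if len(self.dynamic) > self.cap:
            self.dynamic.popitem(last=False)   # evict least recently used
        return False

cache = SDC(log=["a", "a", "b", "c"], static_size=1, dynamic_size=2)
print([cache.hit(q) for q in ["a", "b", "b", "c", "d", "b"]])
# [True, False, True, False, False, False]
```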
The extensive use of information and communication technologies in power grid systems makes them vulnerable to cyber-attacks. One class of cyber-attack is the advanced persistent threat, where highly skilled attackers steal user authentication information and then move laterally in the network, from host to host in a hidden manner, until they reach an attractive target. Once the presence of the attacker has been detected in the network, appropriate actions should be taken quickly to prevent the attacker from going deeper. This paper presents a game-theoretic approach to optimizing the defense against an invader attempting to use a set of known vulnerabilities to reach critical nodes in the network. First, the network is modeled as a vulnerability multigraph where the nodes represent physical hosts and the edges represent the vulnerabilities that the attacker can exploit to move laterally from one host to another. Second, a two-player zero-sum Markov game is built where the states of the game are the nodes of the vulnerability multigraph and the transitions correspond to the edge vulnerabilities that the attacker can exploit. The solution of the game gives the optimal strategy for disconnecting vulnerable services and thus slowing down the attack.
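A simplified value-iteration sketch for such a game appears below. For brevity the stage game is solved with pure-strategy maximin; the exact solution mixes strategies via a small linear program at each state (here mixing would lower the attacker's value from 10 to about 9.09). The graph, rewards, and moves are illustrative, not the paper's model.

```python
# Simplified Shapley-style value iteration on a tiny vulnerability graph:
# the attacker exploits edge e1 or e2, the defender patches one of them,
# and reaching "target" pays the attacker 10. Illustrative instance only.
GAMMA = 0.9
STATES = ["host", "target"]
A_MOVES = {"host": ["e1", "e2"], "target": ["stay"]}
D_MOVES = {"host": ["patch_e1", "patch_e2"], "target": ["none"]}

def step(s, a, d):
    if s == "host" and ("patch_" + a) != d:
        return "target"            # an unpatched edge lets the attacker advance
    return s

def reward(s, a, d):
    return 10.0 if s != "target" and step(s, a, d) == "target" else 0.0

def solve(iters=100):
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        # Defender minimizes the attacker's best discounted payoff per state.
        V = {s: min(max(reward(s, a, d) + GAMMA * V[step(s, a, d)]
                        for a in A_MOVES[s])
                    for d in D_MOVES[s])
             for s in STATES}
    return V

print(solve())  # {'host': 10.0, 'target': 0.0} under pure maximin
```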
In this paper, we address the problem of grouping employees in an organization into peer groups for identifying security risks. Our motivation for studying peer grouping is its importance for a clear understanding of user and entity behavior analytics (UEBA), the primary tool for identifying insider threats by detecting anomalies in network traffic. We show that using the Louvain method of community detection it is possible to automate peer group creation with feature-based weight assignments. Depending on the number of employees and their features, we show that it is also possible to give each group a meaningful description. We present three new algorithms: one that allows the addition of new employees to already generated peer groups, another that incorporates user feedback, and a third that provides the user with recommended nodes to be reassigned. We use Niara's data to validate our claims. The novelty of our method lies in its robustness, simplicity, scalability, and ease of deployment in a production environment.
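Feature-weighted peer grouping with Louvain can be sketched as follows: connect employees by the number of shared feature values and run community detection on the weighted graph. The employees and features are illustrative; the sketch requires networkx 2.8 or later for louvain_communities.

```python
# Feature-weighted peer grouping via Louvain community detection.
# Employee names and features are made up for illustration.
import networkx as nx

employees = {
    "ana":  {"dept": "finance", "server": "s1"},
    "bo":   {"dept": "finance", "server": "s1"},
    "caro": {"dept": "eng",     "server": "s2"},
    "dev":  {"dept": "eng",     "server": "s2"},
}

G = nx.Graph()
names = list(employees)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        # Edge weight = number of feature values the pair shares.
        w = sum(employees[u][k] == employees[v][k] for k in employees[u])
        if w:
            G.add_edge(u, v, weight=w)

groups = nx.community.louvain_communities(G, weight="weight", seed=1)
print(groups)  # e.g. [{'ana', 'bo'}, {'caro', 'dev'}]
```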