Biblio

Filters: Keyword is Network provenance
2022-02-25
Sadineni, Lakshminarayana, Pilli, Emmanuel S., Battula, Ramesh Babu.  2021.  Ready-IoT: A Novel Forensic Readiness Model for Internet of Things. 2021 IEEE 7th World Forum on Internet of Things (WF-IoT). :89–94.
Internet of Things (IoT) networks are often attacked to compromise the security and privacy of application data and to disrupt the services they offer. Attacks are launched at different layers of the IoT protocol stack by exploiting their inherent weaknesses. Forensic investigations need substantial artifacts and datasets to support the decisions taken during analysis and when attributing an attack to an adversary. Network provenance plays a crucial role in establishing the relationships between network entities; hence, IoT networks can be made forensic-ready so that network provenance is collected to help construct these artifacts. The paper proposes Ready-IoT, a novel forensic readiness model for IoT environments that collects provenance from the network, comprising both network parameters and traffic. A link-layer dataset, the Link-IoT Dataset, is also generated by querying provenance graphs. Finally, the Link-IoT Dataset is compared with other IoT datasets to highlight how it differs from them and its applicability to IoT environments. We believe that the proposed features have the potential to detect attacks performed on the IoT network.
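The abstract describes recording provenance relations between network entities and querying the resulting provenance graph to emit dataset rows. As a minimal illustrative sketch (all entity names, relation labels, and attributes here are hypothetical, not taken from the paper or its dataset), such a graph and a relation query might look like:

```python
# Hypothetical sketch of network provenance as a labeled directed graph.
# Entity names, relations, and attributes are illustrative only.
from collections import defaultdict

class ProvenanceGraph:
    def __init__(self):
        # src entity -> list of (relation, dst entity, attributes)
        self.edges = defaultdict(list)

    def record(self, src, relation, dst, **attrs):
        """Record one provenance relation between two network entities."""
        self.edges[src].append((relation, dst, attrs))

    def query(self, relation):
        """Return all (src, dst, attrs) triples carrying the given relation."""
        return [(src, dst, attrs)
                for src, triples in self.edges.items()
                for rel, dst, attrs in triples if rel == relation]

# Two link-layer events: a node sends a packet, another node receives it.
g = ProvenanceGraph()
g.record("node-1", "sent", "pkt-42", layer="link", ts=1.0)
g.record("pkt-42", "received_by", "node-2", layer="link", ts=1.1)

# Querying one relation yields rows for a link-layer dataset.
rows = g.query("sent")
```

A real system would populate such a graph from observed traffic and node state rather than manual calls; the point is only that dataset generation reduces to relation queries over recorded edges.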
2019-09-23
Pham, Quan, Malik, Tanu, That, Dai Hai Ton, Youngdahl, Andrew.  2018.  Improving Reproducibility of Distributed Computational Experiments. Proceedings of the First International Workshop on Practical Reproducible Evaluation of Computer Systems. :2:1–2:6.
Conference and journal publications increasingly require that the experiments associated with a submitted article be repeatable. Authors comply with this requirement by sharing all associated digital artifacts, i.e., code, data, and environment configuration scripts. To ease this process, several tools have recently emerged that automate the aggregation of digital artifacts by auditing an experiment's execution and building a portable container of code, data, and environment. However, current tools only package non-distributed computational experiments; distributed computational experiments must either be packaged manually or supplemented with sufficient documentation. In this paper, we outline the reproducibility requirements of distributed experiments using a distributed computational science experiment that uses the message-passing interface (MPI), and we propose a general method for auditing and repeating distributed experiments. Using Sciunit, we show how this method can be implemented. We validate our method with initial experiments showing that application re-execution runtime can be improved by 63%, at the cost of a longer runtime for the initial audited execution.