Biblio

Filters: Author is Pohlmann, N.
Puesche, A., Bothe, D., Niemeyer, M., Sachweh, S., Pohlmann, N., Kunold, I. 2018. Concept of Smart Building Cyber-physical Systems Including Tamper Resistant Endpoints. 2018 International IEEE Conference and Workshop in Óbuda on Electrical and Power Engineering (CANDO-EPE). :000127–000132.

Cyber-physical systems (CPS) and their Internet of Things (IoT) components are repeatedly subject to attacks targeting weaknesses in their firmware. This creates a pressing demand for secure update mechanisms that cover not only individual systems but all parts of the critical infrastructure. In this paper we introduce a theoretical concept for a secure CPS device update and verification mechanism and explain how hardware-based security can be handled by incorporating trusted platform modules (TPMs) on those CPS devices. We describe secure communication channels built on state-of-the-art technology, as well as integrity measurement mechanisms that ensure the system is in a known state. In addition, we present a multi-level fail-over concept that enables continuous patching and minimizes the need to restart those systems.
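The update-and-verify flow sketched in the abstract can be illustrated with a minimal example. The following Python sketch is hypothetical and not taken from the paper: an HMAC stands in for the vendor's asymmetric signature, a SHA-256 digest stands in for a TPM integrity measurement, and a dual-slot (A/B) class mimics the fail-over idea of activating an update only after verification succeeds.

```python
import hashlib
import hmac

# Hypothetical shared key standing in for the vendor's asymmetric signing key;
# a real device would verify an asymmetric signature rooted in the TPM.
VENDOR_KEY = b"hypothetical-vendor-key"

def measure(image: bytes) -> str:
    # SHA-256 digest standing in for a TPM PCR measurement of the firmware.
    return hashlib.sha256(image).hexdigest()

def verify_signature(image: bytes, signature: bytes) -> bool:
    # HMAC comparison as a stand-in for verifying the vendor's signature.
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

class DualSlotUpdater:
    """A/B update slots: the new image is staged in the inactive slot and
    activated only after its signature and integrity measurement check out,
    so a failed update leaves the device running the previous image."""

    def __init__(self, current_image: bytes):
        self.slots = {"A": current_image, "B": b""}
        self.active = "A"

    def apply_update(self, image: bytes, signature: bytes, reference: str) -> bool:
        if not verify_signature(image, signature):
            return False  # reject a tampered or mis-signed image
        if measure(image) != reference:
            return False  # measurement does not match the known-good state
        inactive = "B" if self.active == "A" else "A"
        self.slots[inactive] = image
        self.active = inactive  # fail-over point: switch only after checks pass
        return True

# Usage: a well-signed update is activated; a corrupted one is rejected.
updater = DualSlotUpdater(b"firmware-v1")
v2 = b"firmware-v2"
sig = hmac.new(VENDOR_KEY, v2, hashlib.sha256).digest()
assert updater.apply_update(v2, sig, measure(v2))
assert not updater.apply_update(b"firmware-corrupt", sig, measure(v2))
```

Because the active slot is switched only after both checks pass, a rejected or interrupted update never removes the device's last known-good image, which is the property the paper's fail-over concept aims for.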

Rossow, C., Dietrich, C.J., Grier, C., Kreibich, C., Paxson, V., Pohlmann, N., Bos, H., van Steen, M. 2012. Prudent Practices for Designing Malware Experiments: Status Quo and Outlook. 2012 IEEE Symposium on Security and Privacy (SP). :65–79.

Malware researchers rely on the observation of malicious code in execution to collect datasets for a wide array of experiments, including generation of detection models, study of longitudinal behavior, and validation of prior research. For such research to reflect prudent science, the work needs to address a number of concerns relating to the correct and representative use of the datasets, presentation of methodology in a fashion sufficiently transparent to enable reproducibility, and due consideration of the need not to harm others. In this paper we study the methodological rigor and prudence in 36 academic publications from 2006 to 2011 that rely on malware execution. 40% of these papers appeared in the 6 highest-ranked academic security conferences. We find frequent shortcomings, including problematic assumptions regarding the use of execution-driven datasets (25% of the papers), absence of description of security precautions taken during experiments (71% of the articles), and often insufficient description of the experimental setup. Deficiencies occur in top-tier venues and elsewhere alike, highlighting a need for the community to improve its handling of malware datasets. In the hope of aiding authors, reviewers, and readers, we frame guidelines regarding transparency, realism, correctness, and safety for collecting and using malware datasets.
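As an illustration of the correctness guideline on dataset use, here is a small, hypothetical Python sketch (not from the paper) of one hygiene step such guidelines imply: deduplicating samples by hash and splitting training and evaluation sets by malware family, so the same or closely related binaries do not appear on both sides.

```python
import random
from collections import defaultdict

def family_aware_split(samples, test_fraction=0.3, seed=42):
    """Deduplicate samples by hash, then split by malware family so no
    family appears in both the training and the evaluation set -- one way
    to avoid the train/test overlap flagged as a correctness pitfall."""
    unique = {}
    for sha256, family in samples:
        unique.setdefault(sha256, family)  # drop duplicate binaries
    by_family = defaultdict(list)
    for sha256, family in unique.items():
        by_family[family].append(sha256)
    families = sorted(by_family)
    random.Random(seed).shuffle(families)
    cut = max(1, int(len(families) * test_fraction))
    test_families = set(families[:cut])
    train = [(h, f) for f, hashes in by_family.items()
             if f not in test_families for h in hashes]
    test = [(h, f) for f, hashes in by_family.items()
            if f in test_families for h in hashes]
    return train, test

# Usage with toy records of (hash, family); hashes are placeholders.
samples = [("aa11", "zeus"), ("aa11", "zeus"),   # duplicate binary
           ("bb22", "zeus"), ("cc33", "conficker"),
           ("dd44", "waledac"), ("ee55", "waledac")]
train, test = family_aware_split(samples)
assert not {f for _, f in train} & {f for _, f in test}
```

Splitting at the family level rather than the sample level is a deliberate choice here: near-identical variants of one family in both sets would inflate detection results, which is exactly the kind of unrepresentative dataset use the paper cautions against.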