Biblio
Filters: Keyword is regulatory requirements
Trustworthiness in Supply Chains: A Modular Extensible Approach Applied to Industrial IoT. 2020 Global Internet of Things Summit (GIoTS). :1–6.
2020. Typical transactions in cross-company Industry 4.0 supply chains require a dynamically evaluable form of trustworthiness. Specific requirements on the parties involved, down to the machine level, for automatically verifiable operations should therefore facilitate realizing the economic advantages of future flexible process chains in production. The core of the paper is a modular and extensible model for the assessment of trustworthiness in the industrial IoT, based on the Industrial Internet Security Framework of the Industrial Internet Consortium, which among other things defines five key trustworthiness characteristics drawing on NIST definitions. This is the starting point for a flexible model that incorporates features as discussed in ISO/IEC JTC 1/AG 7 N51 as well as trustworthiness profiles as used in regulatory requirements. Specific minimum and maximum requirement parameters define the range of trustworthy operation. An automated calculation of trustworthiness in a dynamic environment, starting from an initial trust metric, is presented. The evaluation can be device-based, connection-based, behaviour-based, and context-based, and can thus become part of measurable, trustworthy, monitorable Industry 4.0 scenarios. Finally, the dynamic evaluation of automatable trust models of industrial components is illustrated using the Multi-Vendor Industry scenario of the Horizon 2020 project SecureIoT (grant agreement number 779899).
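As a rough illustration of how such an automated trustworthiness calculation might look, the following Python sketch combines device-, connection-, behaviour- and context-based scores into a single trust value and checks it against minimum and maximum requirement parameters. All names, weights, and thresholds are hypothetical and not taken from the paper or the SecureIoT project.

```python
# Hypothetical sketch: dynamic trustworthiness score from weighted evidence.
# The dimension names, weights, and thresholds are illustrative only,
# not the metric defined in the paper.
from dataclasses import dataclass

@dataclass
class RequirementRange:
    minimum: float  # lowest acceptable trust value
    maximum: float  # highest meaningful trust value (cap)

def trust_score(evidence: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-dimension scores in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(evidence[k] * w for k, w in weights.items()) / total_weight

def is_trustworthy(score: float, req: RequirementRange) -> bool:
    return req.minimum <= score <= req.maximum

evidence = {"device": 0.9, "connection": 0.8, "behaviour": 0.7, "context": 0.95}
weights = {"device": 0.3, "connection": 0.2, "behaviour": 0.3, "context": 0.2}
score = trust_score(evidence, weights)
print(score, is_trustworthy(score, RequirementRange(minimum=0.75, maximum=1.0)))
```

In a dynamic environment, the evidence values would be re-sampled as devices, connections, behaviour, or context change, and the score re-checked against the requirement range.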
Compliance Checking of Open Source EHR Applications for HIPAA and ONC Security and Privacy Requirements. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:704–713.
2019. Electronic Health Record (EHR) applications are digital versions of patients' paper-based health records. They are increasingly adopted to improve the quality of healthcare, offering convenient access to histories of patient medication and clinic visits, easier follow-up of patient treatment plans, and more precise medical decision-making. EHR applications are guided by measures of the Health Insurance Portability and Accountability Act (HIPAA) to ensure confidentiality, integrity, and availability. Furthermore, the Office of the National Coordinator (ONC) for Health Information Technology (HIT) defines certification criteria for the usability of EHRs. A compliance checking approach attempts to identify whether or not an adopted EHR application meets the security and privacy criteria. There is no study in the literature examining whether traditional static code analysis-based vulnerability discovery can assist in compliance checking of the regulatory requirements of HIPAA and ONC. This paper attempts to address this issue. We identify security and privacy requirements from the HIPAA technical requirements, identify a subset of ONC criteria related to security and privacy, and then evaluate EHR applications for security vulnerabilities. Finally, we propose mitigations of the security issues towards better compliance and to help practitioners reuse open source tools towards certification compliance.
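A minimal sketch of how static-analysis output could be checked against regulatory criteria is shown below, assuming a hypothetical mapping from CWE categories to HIPAA technical safeguard clauses. The mapping and the `findings` structure are illustrative, not the ones used in the paper or produced by any particular analyzer.

```python
# Hypothetical sketch: mapping static-analysis findings to compliance criteria.
# The CWE-to-requirement mapping is illustrative, and `findings` stands in
# for the parsed output of any static-analysis tool.
from collections import defaultdict

# Example (illustrative) mapping from vulnerability categories to HIPAA clauses.
CWE_TO_REQUIREMENT = {
    "CWE-798": "HIPAA 164.312(d) Person or Entity Authentication",  # hard-coded credentials
    "CWE-311": "HIPAA 164.312(e)(1) Transmission Security",         # missing encryption
    "CWE-89":  "HIPAA 164.312(c)(1) Integrity",                     # SQL injection can alter ePHI
}

findings = [  # would normally be parsed from a static-analysis report
    {"cwe": "CWE-89", "file": "patient_search.php", "line": 42},
    {"cwe": "CWE-311", "file": "export.php", "line": 7},
]

violations = defaultdict(list)
for finding in findings:
    requirement = CWE_TO_REQUIREMENT.get(finding["cwe"])
    if requirement:
        violations[requirement].append(finding)

for requirement, items in violations.items():
    print(f"{requirement}: {len(items)} potential violation(s)")
```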
Test-Driven Anonymization for Artificial Intelligence. 2019 IEEE International Conference on Artificial Intelligence Testing (AITest). :103–110.
2019. In recent years, the amount of data published and shared with third parties to develop artificial intelligence (AI) tools and services has increased significantly. When there are regulatory or internal requirements regarding the privacy of data, anonymization techniques are used to maintain privacy by transforming the data. The side effect is that anonymization may render the data useless for training and testing the AI, because AI quality is highly dependent on the quality of the data. To overcome this problem, we propose a test-driven anonymization approach for artificial intelligence tools. The approach tests different anonymization efforts to achieve a trade-off between privacy (non-functional quality) and the functional suitability of the artificial intelligence technique (functional quality). The approach has been validated by means of two real-life datasets in the domains of healthcare and health insurance. Each of these datasets is anonymized with several levels of privacy protection and then used to train classification AIs. The results show how the data can be anonymized to achieve adequate functional suitability in the AI context while keeping the privacy of the anonymized data as high as possible.
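The test-driven loop described above could be sketched roughly as follows; `anonymize` is a placeholder for any anonymization tool (e.g. a k-anonymity implementation), scikit-learn serves only as an example classifier, and the accuracy threshold is an assumed functional-suitability requirement. None of these details are taken from the paper.

```python
# Hypothetical sketch of a test-driven anonymization loop: anonymize the
# training data at increasing privacy levels, train a classifier, and keep the
# strongest anonymization whose accuracy still meets a functional threshold.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.85  # illustrative functional-suitability requirement

def evaluate(X_train, y_train, X_test, y_test) -> float:
    """Train an example classifier on (possibly anonymized) data and score it."""
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

def select_anonymization(X_train, y_train, X_test, y_test, anonymize, levels):
    """Return the strongest privacy level that still passes the accuracy test."""
    chosen = None
    for k in levels:  # e.g. k-anonymity parameter, from weakest to strongest
        X_anon = anonymize(X_train, k)
        if evaluate(X_anon, y_train, X_test, y_test) >= ACCURACY_THRESHOLD:
            chosen = k   # still functionally suitable at this privacy level
        else:
            break        # stronger anonymization already degrades the model
    return chosen
```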