Biblio

Filters: Author is Islam, M. M.
2020-12-07
Islam, M. M., Karmakar, G., Kamruzzaman, J., Murshed, M..  2019.  Measuring Trustworthiness of IoT Image Sensor Data Using Other Sensors’ Complementary Multimodal Data. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :775–780.
Trust in image sensor data is becoming increasingly important as Internet of Things (IoT) applications grow from home appliances to surveillance. To the best of our knowledge, only one work in the literature estimates the trustworthiness of digital images, applied to forensic applications and based on a machine learning technique. The efficacy of that technique depends heavily on the availability of an appropriate training set with adequate variation in IoT sensor data across noise, interference, and environmental conditions, but such data cannot always be assured. To overcome this limitation, a robust method capable of estimating a trustworthiness measure with high accuracy is needed. The falling cost of sensors allows many IoT applications to use multiple types of sensors to observe the same event; in such cases, the complementary multimodal data of one sensor can be exploited to measure the trust level of another sensor's data. In this paper, we introduce, for the first time, an approach that estimates the trustworthiness of image sensor data using another sensor's numerical data. We develop a theoretical model within the Dempster-Shafer theory (DST) framework. The efficacy of the proposed model in estimating the trust level of image sensor data is analyzed by observing a fire event using IoT image and temperature sensor data in a residential setup under different scenarios. The proposed model produces highly accurate trust levels in all scenarios with both authentic and forged image data.
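The core of a DST-based fusion like the one this abstract describes is Dempster's rule of combination, which merges independent bodies of evidence. The sketch below is illustrative only: the paper's actual frame of discernment and mass assignments are not given in the abstract, so the binary frame {trustworthy, not trustworthy} and the example mass values are assumptions.

```python
def combine_dst(m1, m2):
    """Dempster's rule of combination over the binary frame {'T', 'F'},
    with 'U' denoting the uncertain set {'T', 'F'}.

    m1, m2: basic probability assignments (masses summing to 1), e.g.
    m1 from image-sensor evidence, m2 from temperature-sensor evidence.
    """
    # Conflicting mass: intersections of {'T'} with {'F'} are empty.
    k = m1['T'] * m2['F'] + m1['F'] * m2['T']
    norm = 1.0 - k  # renormalization factor (assumes k < 1)
    return {
        # {'T'} results from T∩T, T∩U, and U∩T
        'T': (m1['T'] * m2['T'] + m1['T'] * m2['U'] + m1['U'] * m2['T']) / norm,
        # {'F'} results from F∩F, F∩U, and U∩F
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['U'] + m1['U'] * m2['F']) / norm,
        # only U∩U leaves the full frame
        'U': (m1['U'] * m2['U']) / norm,
    }

# Illustrative masses: image evidence weakly supports trustworthiness,
# temperature evidence supports it more strongly.
image_mass = {'T': 0.6, 'F': 0.1, 'U': 0.3}
temp_mass = {'T': 0.7, 'F': 0.2, 'U': 0.1}
fused = combine_dst(image_mass, temp_mass)
```

Because the two sources agree, the fused belief in 'T' exceeds either source's individual mass, which is the intuition behind using one sensor's data to corroborate another's.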
2019-03-11
Habib, S. M., Alexopoulos, N., Islam, M. M., Heider, J., Marsh, S., Mühlhäuser, M..  2018.  Trust4App: Automating Trustworthiness Assessment of Mobile Applications. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :124–135.

Smartphones have become ubiquitous in our everyday lives, providing diverse functionalities via the millions of applications (apps) readily available. To deliver these functionalities, apps need to access and utilize potentially sensitive data stored on the user's device. This can pose a serious threat to users' security and privacy when developers are malicious or underskilled. While application marketplaces such as the Google Play Store and the Apple App Store provide factors like ratings, user reviews, and download counts to distinguish benign from risky apps, studies have shown that these metrics are not adequately effective. The security and privacy health of an application should also be considered to generate a more reliable and transparent trustworthiness score. To automate the trustworthiness assessment of mobile applications, we introduce the Trust4App framework, which considers not only the publicly available factors mentioned above but also the Security and Privacy (S&P) health of an application. Additionally, it takes the S&P posture of the user into account and provides a holistic, personalized trustworthiness score. While existing automatic trustworthiness frameworks consider trustworthiness indicators (e.g., permission usage, privacy leaks) only individually, Trust4App is, to the best of our knowledge, the first framework to combine these indicators. We also implement a proof-of-concept realization of our framework and demonstrate that Trust4App provides a more comprehensive, intuitive, and actionable trustworthiness assessment than existing approaches.
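One minimal way to combine marketplace factors with S&P health into a single personalized score is a weighted aggregation, where the weights reflect the user's S&P posture. The abstract does not specify Trust4App's actual indicator set or aggregation function, so every factor name, value, and weight below is a hypothetical illustration.

```python
def trust_score(indicators, weights):
    """Hypothetical weighted aggregation of normalized trust indicators.

    indicators: dict mapping factor -> score in [0, 1], 1 = most trustworthy
    weights:    dict mapping factor -> non-negative importance weight
    Returns a holistic score in [0, 1].
    """
    total = sum(weights[k] for k in indicators)
    return sum(weights[k] * indicators[k] for k in indicators) / total

# Illustrative factors only; not Trust4App's real feature set.
app = {
    "marketplace_rating": 0.9,   # normalized star rating
    "download_volume":    0.7,   # normalized download count
    "permission_health":  0.4,   # 1 - permission over-request risk
    "privacy_health":     0.5,   # 1 - privacy-leak risk
}
user_weights = {                 # a privacy-conscious user's S&P posture
    "marketplace_rating": 1.0,
    "download_volume":    0.5,
    "permission_health":  2.0,
    "privacy_health":     2.0,
}
score = trust_score(app, user_weights)
```

Personalization enters through the weights: a privacy-conscious user's profile down-weights popularity signals and up-weights S&P health, so the same app can receive different scores for different users.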

2018-04-02
He, X., Islam, M. M., Jin, R., Dai, H..  2017.  Foresighted Deception in Dynamic Security Games. 2017 IEEE International Conference on Communications (ICC). :1–6.

Deception has been widely considered in the literature as an effective means of enhancing security protection when the defender holds private information about the ongoing rivalry that is unknown to the attacker. However, most existing works on deception assume static environments and therefore consider only myopic deception, while practical security games between the defender and the attacker may unfold in dynamic scenarios. To better exploit the defender's private information in dynamic environments and improve security performance, a stochastic deception game (SDG) framework is developed in this work that enables the defender to conduct foresighted deception. To solve the proposed SDG, a new iterative algorithm with provable convergence is developed. A corresponding learning algorithm is also developed to facilitate foresighted deception in unknown dynamic environments. Numerical results show that the proposed foresighted deception offers a substantial performance improvement over conventional myopic deception.
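The contrast between myopic and foresighted play can be sketched with value iteration, the standard iterative algorithm for discounted dynamic decision problems: a myopic defender maximizes only the immediate reward, while a foresighted one maximizes the expected discounted sum of future rewards. The paper's SDG and its provably convergent algorithm are not detailed in the abstract, so this single-agent MDP sketch is only an assumption-laden analogue, not the authors' method.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Foresighted planning via value iteration.

    P: array of shape (A, S, S); P[a, s, t] = Pr(next state t | state s, action a)
    R: array of shape (S, A); immediate defender reward
    Returns the converged values V and a greedy (foresighted) policy.
    """
    V = np.zeros(R.shape[0])
    while True:
        # Q[s, a] = immediate reward + discounted expected future value
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy 2-state, 2-action example (illustrative, not from the paper):
# action 0 stays in the current state, action 1 moves to the other state.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
# Only state 0 with action 0 yields reward; from state 1, both actions
# pay nothing now, so a myopic player is indifferent.
R = np.array([[1.0, 0.0],
              [0.0, 0.0]])
V, policy = value_iteration(P, R)
```

In state 1, both actions have zero immediate reward, yet the foresighted policy chooses action 1 to reach the rewarding state 0, which is exactly the kind of advantage over myopic play that the abstract reports.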