Biblio

Filters: Keyword is Multi-Modal
2019-03-11
Rao, Aakarsh, Rozenblit, Jerzy, Lysecky, Roman, Sametinger, Johannes.  2018.  Trustworthy Multi-modal Framework for Life-critical Systems Security. Proceedings of the Annual Simulation Symposium. :17:1–17:9.
With the advent of network connectivity and complex software applications, life-critical systems like medical devices are subject to a plethora of security risks and vulnerabilities. Security threats and attacks exploiting these vulnerabilities have been shown to compromise patient safety by hampering essential functionality. This necessitates incorporating security from the very design of software. Isolating software functionality into different modes and switching between them based on risk assessment, while maintaining a fail-safe mode that ensures the device's essential functionality, is a compelling design. Formal modeling is an essential ingredient for verification of such a design. Hence, in this paper, we formally model a trustworthy multi-modal framework for life-critical systems security and, in turn, safety. We formalize a multi-mode software design approach with a fail-safe mode that maintains critical functionality. We ensure trustworthiness by formalizing a composite risk model incorporated into the design for run-time risk assessment and management.
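As an illustration of the mode-based design the abstract describes, here is a minimal Python sketch of risk-driven mode switching with a fail-safe fallback. The mode names, thresholds, and the toy risk score are assumptions for illustration; the paper's formal composite risk model is not reproduced here.

```python
from enum import Enum

class Mode(Enum):
    FULL = "full"              # all features enabled
    RESTRICTED = "restricted"  # non-essential features disabled
    FAIL_SAFE = "fail_safe"    # essential functionality only

# Hypothetical thresholds; the paper derives risk from a formal composite model.
RESTRICTED_THRESHOLD = 0.4
FAIL_SAFE_THRESHOLD = 0.8

def assess_risk(indicators: dict) -> float:
    """Toy composite risk score: average of indicator values in [0, 1]."""
    if not indicators:
        return 0.0
    return sum(indicators.values()) / len(indicators)

def select_mode(risk: float) -> Mode:
    """Map a run-time risk score to an operating mode, degrading toward FAIL_SAFE."""
    if risk >= FAIL_SAFE_THRESHOLD:
        return Mode.FAIL_SAFE
    if risk >= RESTRICTED_THRESHOLD:
        return Mode.RESTRICTED
    return Mode.FULL

# Example: an elevated anomaly score moves the device into RESTRICTED mode.
mode = select_mode(assess_risk({"network_anomaly": 0.5, "auth_failures": 0.4}))
print(mode)  # Mode.RESTRICTED
```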
2018-11-19
Chelaramani, S., Jha, A., Namboodiri, A. M..  2018.  Cross-Modal Style Transfer. 2018 25th IEEE International Conference on Image Processing (ICIP). :2157–2161.
We, humans, have the ability to easily imagine scenes that depict sentences such as "Today is a beautiful sunny day" or "There is a Christmas feel in the air". While it is hard to precisely describe what one person may imagine, the essential high-level themes associated with such sentences largely remain the same. The ability to synthesize novel images that depict the feel of a sentence is very useful in a variety of applications such as education, advertisement, and entertainment. While existing papers tackle this problem given a style image, we aim to provide a far more intuitive and easy-to-use solution that synthesizes novel renditions of an existing image, conditioned on a given sentence. We present a method for cross-modal style transfer between an English sentence and an image, to produce a new image that captures the essential theme of the sentence. We do this by modifying the style transfer mechanism used in image style transfer to incorporate a style component derived from the given sentence. We demonstrate promising results using the YFCC100m dataset.
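The abstract describes conditioning a Gatys-style transfer loss on a style component derived from a sentence rather than a style image. Below is a minimal PyTorch sketch of one way that could look: a hypothetical SentenceToStyle mapper predicts a target Gram matrix from a sentence embedding, and the style loss compares it against the synthesized image's features. All names, dimensions, and the mapper architecture are assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a (batch, channels, h, w) feature map, as in image style transfer."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class SentenceToStyle(nn.Module):
    """Hypothetical mapper from a sentence embedding to a target Gram matrix."""
    def __init__(self, embed_dim: int, channels: int):
        super().__init__()
        self.channels = channels
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 512), nn.ReLU(),
            nn.Linear(512, channels * channels),
        )

    def forward(self, sentence_emb: torch.Tensor) -> torch.Tensor:
        g = self.mlp(sentence_emb).view(-1, self.channels, self.channels)
        return (g + g.transpose(1, 2)) / 2  # symmetrize; Gram matrices are symmetric

# Toy usage: style loss against the sentence-derived target; a content loss on
# the existing image's features would be added as in standard style transfer.
mapper = SentenceToStyle(embed_dim=300, channels=64)
sentence_emb = torch.randn(1, 300)       # e.g., an averaged word-vector embedding
synth_feat = torch.randn(1, 64, 32, 32)  # CNN features of the image being optimized
style_loss = nn.functional.mse_loss(gram_matrix(synth_feat), mapper(sentence_emb))
```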