Biblio

Filters: Keyword is video
2023-04-14
Wu, Shaocheng, Jiang, Hefang, Li, Sijian, Liu, Tao.  2022.  Design of a chaotic sequence cipher algorithm. 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA). :320–323.
Encryption is an effective means of protecting video information. In practical applications, the structural complexity and real-time requirements of video leave some commonly used algorithms with shortcomings, so it is necessary to design practical encryption algorithms around the characteristics of video. This paper proposes a novel chaotic image encryption scheme based on a scrambling-and-diffusion structure. First, a breadth-first search is used to scramble the pixel positions in the original image; then a pseudo-random sequence generated by a time-varying bilateral chaotic symbol system is combined with each pixel of the scrambled image, bit by bit, for encryption. Simulation experiments and analysis of the information entropy of the encrypted image show that the new chaotic image encryption scheme is effective.
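The scramble-then-diffuse structure described in the abstract can be sketched in a few lines. The logistic map, the key values, and the sort-based permutation below are assumptions standing in for the paper's breadth-first-search scramble and time-varying bilateral chaotic symbol system, not the authors' exact algorithm:

```python
def logistic_keystream(x0, r, n):
    """n pseudo-random bytes from iterating the logistic map x -> r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def encrypt(pixels, x0=0.3141, r=3.99):
    n = len(pixels)
    ks = logistic_keystream(x0, r, 2 * n)
    # Scramble: permute pixel positions using a chaos-derived ordering.
    order = sorted(range(n), key=lambda i: ks[i])
    scrambled = [pixels[i] for i in order]
    # Diffuse: XOR each scrambled pixel with fresh keystream bytes.
    return [p ^ k for p, k in zip(scrambled, ks[n:])]

def decrypt(cipher, x0=0.3141, r=3.99):
    n = len(cipher)
    ks = logistic_keystream(x0, r, 2 * n)
    # Undo diffusion, then invert the deterministic permutation.
    scrambled = [c ^ k for c, k in zip(cipher, ks[n:])]
    order = sorted(range(n), key=lambda i: ks[i])
    plain = [0] * n
    for j, i in enumerate(order):
        plain[i] = scrambled[j]
    return plain
```

Because both the permutation and the keystream are derived deterministically from the key (x0, r), decryption simply regenerates them and inverts each stage.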
2022-06-30
Jadhav, Mohit, Kulkarni, Nupur, Walhekar, Omkar.  2021.  Doodling Based CAPTCHA Authentication System. 2021 Asian Conference on Innovation in Technology (ASIANCON). :1–5.
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a widely used challenge for distinguishing humans from automated programs. Several existing CAPTCHAs are workable for typical users, whereas visually impaired users face many problems with the CAPTCHA authentication process. CAPTCHAs such as Google reCAPTCHA alternatively provide an audio CAPTCHA, but many users find it difficult to decipher due to noise, language barriers, and the accent of the audio. Existing CAPTCHA systems also lack user satisfaction on smartphones, which limits their use. Our proposed system addresses the problems faced by visually impaired users during CAPTCHA authentication, and it makes the authentication process generic across users as well as platforms.
2020-02-10
Velmurugan, K.Jayasakthi, Hemavathi, S..  2019.  Video Steganography by Neural Networks Using Hash Function. 2019 Fifth International Conference on Science Technology Engineering and Mathematics (ICONSTEM). 1:55–58.

Video steganography is an extension of image steganography in which a file of any type is hidden inside a digital video. Video content is dynamic in nature, which makes detecting hidden data more difficult than in other steganographic techniques. The main motive for using video steganography is that videos can store a large amount of data. This paper focuses on security, using a combination of hybrid neural networks and a hash function to determine the best bits in the cover video in which to embed the secret data. For the embedding process, the cover video and the data to be hidden are uploaded; the hash algorithm and neural networks are then applied to form the stego video. For the extraction process, the reverse process is applied and the secret data is recovered. All experiments were done using MATLAB R2016a.
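As a simplified illustration of hash-driven embedding, the sketch below uses SHA-256 to pick which pixels' least significant bits carry the payload. The paper's neural-network bit selection is omitted, and the key string, position derivation, and plain LSB embedding are all assumptions:

```python
import hashlib

def select_positions(key, n_needed, n_pixels):
    """Derive n_needed distinct pixel positions from a SHA-256 keystream."""
    positions, seen, counter = [], set(), 0
    while len(positions) < n_needed:
        digest = hashlib.sha256(f"{key}:{counter}".encode()).digest()
        for i in range(0, len(digest), 2):
            p = int.from_bytes(digest[i:i + 2], "big") % n_pixels
            if p not in seen:
                seen.add(p)
                positions.append(p)
                if len(positions) == n_needed:
                    break
        counter += 1
    return positions

def embed(frame, bits, key):
    """Write each payload bit into the LSB of a hash-selected pixel."""
    out = list(frame)
    for p, bit in zip(select_positions(key, len(bits), len(frame)), bits):
        out[p] = (out[p] & ~1) | bit
    return out

def extract(stego, n_bits, key):
    """Recompute the same positions from the key and read the LSBs back."""
    return [stego[p] & 1 for p in select_positions(key, n_bits, len(stego))]
```

Sender and receiver only need to share the key: the embedding positions are regenerated deterministically on both sides.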

2018-09-12
Mattmann, Chris A., Sharan, Madhav.  2017.  Scalable Hadoop-Based Pooled Time Series of Big Video Data from the Deep Web. Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval. :117–120.

We contribute a scalable, open-source implementation of the Pooled Time Series (PoT) algorithm from CVPR 2015. The algorithm is evaluated on approximately 6800 human trafficking (HT) videos collected from the deep and dark web, and on an open dataset: the Human Motion Database (HMDB). We describe PoT, our motivation for using it on larger data, and the issues we encountered. Our new solution reimagines PoT as an Apache Hadoop-based algorithm. We demonstrate that the Hadoop-based algorithm successfully identifies similar videos in the HT and HMDB datasets, and we evaluate it qualitatively and quantitatively.
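The pooling idea at the heart of PoT can be illustrated with a much-simplified descriptor: pool a variable-length sequence of per-frame feature vectors into one fixed-length vector, then compare videos by cosine similarity. This toy sketch stands in for neither the CVPR 2015 PoT features nor the Hadoop pipeline; the pooling operators and similarity measure are illustrative choices:

```python
def pot_descriptor(frames):
    """Pool per-frame feature vectors over time into one fixed-length
    descriptor via mean and max pooling (a toy stand-in for PoT)."""
    d = len(frames[0])
    mean = [sum(f[j] for f in frames) / len(frames) for j in range(d)]
    peak = [max(f[j] for f in frames) for j in range(d)]
    return mean + peak

def cosine(a, b):
    """Cosine similarity between two pooled descriptors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0
```

Because the descriptor length is independent of the number of frames, videos of different durations become directly comparable, which is what makes large-scale pairwise similarity search practical to distribute.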

2017-10-19
Nikravesh, Ashkan, Hong, David Ke, Chen, Qi Alfred, Madhyastha, Harsha V., Mao, Z. Morley.  2016.  QoE Inference Without Application Control. Proceedings of the 2016 Workshop on QoE-based Analysis and Management of Data Communication Networks. :19–24.
Network quality-of-service (QoS) does not always directly translate to users' quality-of-experience (QoE), e.g., changes in a video streaming app's frame rate in reaction to changes in packet loss rate depend on various factors such as the adaptation strategy used by the app and the app's use of forward error correction (FEC) codes. Therefore, knowledge of user QoE is desirable in several scenarios that have traditionally operated on QoS information. Examples include traffic management by ISPs and resource allocation by the operating system (OS). However, today, entities such as ISPs and OSes that implement these optimizations typically do not have a convenient way of obtaining input from applications on user QoE. To address this problem, we propose offline generation of per-application models mapping application-independent QoS metrics to corresponding application-specific QoE metrics, thereby enabling entities (such as ISPs and OSes) that can observe a user's network traffic to infer the user's QoE, in the absence of direct input. In this paper, we describe how such models can be generated and present our results from two popular video applications with significantly different QoE metrics. We also showcase the use of these models for ISPs to perform QoE-aware traffic management and for the OS to offer an efficient QoE diagnosis service.
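A toy version of such an offline QoS-to-QoE model: fit a function from an application-independent QoS metric (packet loss rate) to an application-specific QoE metric (frame rate) on training data, then use it online to infer QoE from passively observed traffic. The linear model and the numbers are illustrative assumptions; the paper builds per-application models whose form and metrics differ:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b: the offline model-generation step."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical offline training data for one video app:
# packet loss rate (%) vs. observed frame rate (fps).
loss = [0.0, 1.0, 2.0, 4.0, 8.0]
fps = [30.0, 27.0, 24.0, 18.0, 6.0]
a, b = fit_line(loss, fps)

def infer_qoe(observed_loss):
    """Online step: an ISP or OS observing only QoS infers the app's QoE."""
    return a * observed_loss + b
```

The point of the split is that the expensive, application-specific measurement happens once offline; at run time the ISP or OS needs only the QoS metric it can already observe.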
2017-10-18
Gingold, Mathew, Schiphorst, Thecla, Pasquier, Philippe.  2017.  Never Alone: A Video Agents Based Generative Audio-Visual Installation. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :1425–1430.

Never Alone (2016) is a generative large-scale urban screen video-sound installation, which presents the idea of generative choreographies amongst multiple video agents, or "digital performers". This generative installation questions how we navigate in urban spaces and the ubiquity and disruptive nature of encounters within the cities' landscapes. The video agents explore precarious movement paths along the façade inhabiting landscapes that are both architectural and emotional.

2017-05-19
Selim, Ahmed, Elgharib, Mohamed, Doyle, Linda.  2016.  Painting Style Transfer for Head Portraits Using Convolutional Neural Networks. ACM Trans. Graph. 35:129:1–129:18.

Head portraits are popular in traditional painting. Automating portrait painting is challenging, as the human visual system is sensitive to the slightest irregularities in human faces, and applying generic painting techniques often deforms facial structures. On the other hand, portrait-painting techniques are mainly designed for the graphite style and/or are based on image analogies, in which an example painting as well as its original unpainted version are required; this limits their domain of applicability. We present a new technique for transferring the painting style from one head portrait onto another. Unlike previous work, our technique requires only the example painting and is not restricted to a specific style. We impose novel spatial constraints by locally transferring the color distributions of the example painting, which better captures the painting texture and maintains the integrity of facial structures. We generate a solution through convolutional neural networks, and we present an extension to video in which motion is exploited to reduce temporal inconsistencies and the shower-door effect. Our approach transfers the painting style while maintaining the identity of the input photograph, and it significantly reduces facial deformations over the state of the art.
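The core operation of "transferring the color distributions of the example painting" can be sketched per channel as mean/standard-deviation matching. The paper applies such transfers locally under spatial constraints; the global, single-channel version below is a simplification for illustration:

```python
def transfer_channel(src, example):
    """Shift and scale one color channel of the input image so its mean and
    standard deviation match the example painting's channel."""
    m_s, m_e = sum(src) / len(src), sum(example) / len(example)
    sd_s = (sum((v - m_s) ** 2 for v in src) / len(src)) ** 0.5
    sd_e = (sum((v - m_e) ** 2 for v in example) / len(example)) ** 0.5
    scale = sd_e / sd_s if sd_s else 1.0
    # Center on the source mean, rescale to the example's spread,
    # then recenter on the example mean.
    return [(v - m_s) * scale + m_e for v in src]
```

Applying this locally (per facial region rather than globally) is what lets the method pick up painting texture without distorting facial structure.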

2015-05-05
Coatsworth, M., Tran, J., Ferworn, A..  2014.  A hybrid lossless and lossy compression scheme for streaming RGB-D data in real time. Safety, Security, and Rescue Robotics (SSRR), 2014 IEEE International Symposium on. :1–6.

Mobile and aerial robots used in urban search and rescue (USAR) operations have shown the potential for allowing us to explore, survey and assess collapsed structures effectively at a safe distance. RGB-D cameras, such as the Microsoft Kinect, allow us to capture 3D depth data in addition to RGB images, providing a significantly richer user experience than flat video, which may provide improved situational awareness for first responders. However, the richer data comes at a higher cost in terms of data throughput and computing power requirements. In this paper we consider the problem of live streaming RGB-D data over wired and wireless communication channels, using low-power, embedded computing equipment. When assessing a disaster environment, a range camera is typically mounted on a ground or aerial robot along with the onboard computer system. Ground robots can use both wireless radio and tethers for communications, whereas aerial robots can only use wireless communication. We propose a hybrid lossless and lossy streaming compression format designed specifically for RGB-D data and investigate the feasibility and usefulness of live-streaming this data in disaster situations.
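The hybrid idea, lossy compression for the RGB channel where approximation is tolerable, lossless compression for the depth map where accuracy matters, can be sketched as follows. Coarse quantization as the lossy step, DEFLATE as the codec, and 8-bit depth values are all illustrative assumptions, not the paper's format:

```python
import zlib

def compress_frame(rgb, depth, quant=16):
    """Hybrid scheme sketch: quantization makes the RGB channel lossy but
    highly compressible, while the depth map is DEFLATE-compressed as-is."""
    lossy_rgb = bytes((v // quant) * quant for v in rgb)
    return zlib.compress(lossy_rgb), zlib.compress(bytes(depth))

def decompress_frame(rgb_blob, depth_blob):
    """Recover the (approximate) RGB channel and the exact depth map."""
    return list(zlib.decompress(rgb_blob)), list(zlib.decompress(depth_blob))
```

The split matches the use case: responders can tolerate a softened video image, but corrupted depth values would directly distort the 3D reconstruction of a collapsed structure.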

2015-05-04
Su, Hui, Hajj-Ahmad, A., Wu, Min, Oard, D.W..  2014.  Exploring the use of ENF for multimedia synchronization. Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. :4613–4617.

The electric network frequency (ENF) signal can be captured in multimedia recordings due to electromagnetic influences from the power grid at the time of recording. Recent work has exploited ENF signals for forensic applications, such as authenticating and detecting forgery of ENF-containing multimedia signals, and inferring their time and location of creation. In this paper, we explore a new use of ENF signals: automatic synchronization of audio and video. The ENF signal, as a time-varying random process, can serve as a timing fingerprint for multimedia signals, so synchronization of audio and video recordings can be achieved by aligning their embedded ENF signals. We demonstrate the proposed scheme with two applications: multi-view video synchronization and synchronization of historical audio recordings. The experimental results show that the ENF-based synchronization approach is effective and has the potential to solve problems that are intractable by other existing methods.