Safe Coding Guidelines 2015
Coding standards encourage programmers to follow a set of uniform rules and guidelines determined by the requirements of the project and organization, rather than by the programmer’s personal familiarity or preference. Developers and software designers apply these coding standards during software development to create secure systems. The development of secure coding standards is a work in progress by security researchers, language experts, and software developers. The articles cited here cover topics related to the Science of Security hard problems of resilience, metrics, human factors, and policy-based governance. They were presented in 2015.
Sodanil, M.; Porrawatpreyakorn, N.; Quirchmayr, G.; Tjoa, A.M., “A Knowledge Transfer Framework for Secure Coding Practices,” in Computer Science and Software Engineering (JCSSE), 2015 12th International Joint Conference on, vol., no.,
pp. 120–125, 22–24 July 2015. doi:10.1109/JCSSE.2015.7219782
Abstract: Building a secure software product requires an understanding of security principles and of secure coding guidelines for the programming languages involved, in order to develop safe, reliable, and secure systems in the software development process. Effective knowledge transfer is therefore required and strongly influences the success of a secure software development project. This paper proposes a knowledge transfer framework for secure coding practices, with guidance for the development of secure software products, and shows how the framework can be applied in the telecommunication industry. A set of knowledge transfer activities aligned with secure coding is specified. Finally, implementing a knowledge transfer framework for secure coding practices could mitigate at least the most common mistakes in software development processes.
Keywords: programming languages; security of data; software engineering; knowledge transfer activities; knowledge transfer framework; secure coding practices; secure software development project; secure software product; software development processes; telecommunication industry; Communications technology; Encoding; Knowledge transfer; Privacy; Security; Software; Standards; Knowledge Transfer Framework; Secure Coding Practices; Security Vulnerable; Software Engineering (ID#: 15-7806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219782&isnumber=7219755
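As an illustration of the class of common mistakes such secure coding practices aim to mitigate, here is a minimal Python sketch contrasting an injectable SQL query with a parameterized one; the example is generic and not drawn from the paper.

    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable: attacker-controlled input is spliced into the SQL text,
        # so username = "x' OR '1'='1" returns every row.
        query = "SELECT id, name FROM users WHERE name = '%s'" % username
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Parameterized query: the driver treats the input strictly as data.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
    print(find_user_unsafe(conn, "x' OR '1'='1"))  # leaks both rows
    print(find_user_safe(conn, "x' OR '1'='1"))    # returns []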
Liu, S.; Qu, Q.; Chen, L.; Ni, L.M., “SMC: A Practical Schema for Privacy-Preserved Data Sharing over Distributed Data Streams,” in Big Data, IEEE Transactions on, vol. 1, no. 2, pp. 68–81, June 1 2015.
doi:10.1109/TBDATA.2015.2498156
Abstract: Data collection must be both safe and efficient, considering data privacy as well as system performance. In this paper, we study a new problem: distributed data sharing with privacy-preserving requirements. Given a data demander requesting data from multiple distributed data providers, the objective is to enable the data demander to access the distributed data without learning the private data of any individual provider. The problem is challenged by two questions: how to transmit the data safely and accurately, and how to handle data streams efficiently. As a first study, we propose a practical method, Shadow Coding, to preserve privacy in data transmission and ensure recovery in data collection, achieving privacy-preserving computation in a data-recoverable, efficient, and scalable way. We also provide practical techniques to make Shadow Coding efficient and safe in data streams. An extensive experimental study on a large-scale real-life dataset offers insight into the performance of our schema. The proposed schema is also implemented as a pilot system in a city to collect distributed mobile phone data.
Keywords: Base stations; Big data; Data privacy; Distributed databases; Encoding; Mobile handsets; Distributed data streams; data mining; distributed data sharing; privacy preserving; shadow coding (ID#: 15-7807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321000&isnumber=7153538
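The Shadow Coding construction itself is not reproduced in this citation; the following Python sketch illustrates the general recoverable-masking idea the abstract describes, in which masked values are individually meaningless while the aggregate remains exactly recoverable. The masking scheme and all names here are illustrative assumptions.

    import random

    def share_with_masks(values, seed=0):
        # Each provider i reports x_i + m_i, where the masks m_i sum to zero:
        # the demander recovers the exact total without seeing any x_i.
        # (Generic additive masking, not the paper's Shadow Coding.)
        rng = random.Random(seed)
        masks = [rng.uniform(-100, 100) for _ in range(len(values) - 1)]
        masks.append(-sum(masks))                  # forces sum(masks) == 0
        return [x + m for x, m in zip(values, masks)]

    providers = [12.0, 7.5, 30.25, 4.0]            # private readings
    received = share_with_masks(providers)
    print(received)                                # individually meaningless
    print(round(sum(received), 6), sum(providers)) # totals agree: 53.75 53.75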
Prabhakar, B.; Reddy, D.K., “Analysis of Video Coding Standards Using PSNR and Bit Rate Saving,” in Signal Processing and Communication Engineering Systems (SPACES), 2015 International Conference on, vol., no., pp. 306–308, 2–3 Jan. 2015. doi:10.1109/SPACES.2015.7058271
Abstract: This paper deals with the performance comparison of several video coding standards by means of peak signal-to-noise ratio and subjective testing, using Rate Distortion (RD) curves and average bit-rate savings. A common procedure is applied to the video coding standards H.265/High Efficiency Video Coding (HEVC), H.264/MPEG4-Advanced Video Coding (AVC), MPEG4V2, and MPEG4, and to the Google video codecs VP8 and VP9, at different bit-rates, and Peak Signal-to-Noise Ratios (PSNR) are estimated. An average bit-rate reduction of about 50% is achieved relative to earlier video coding standards. The results show H.265/HEVC achieving a high peak signal-to-noise ratio at low bit rates, which corresponds to higher coding efficiency than earlier video coding standards.
Keywords: code standards; error statistics; rate distortion theory; video codecs; video coding; AVC; Google video codec; HEVC; PSNR; advance video coding; bit rate saving; high coding efficiency; peak signal-to-noise ratio; rate distortion curves; video coding standards; Bit rate; Encoding; MPEG 4 Standard; Transform coding; Video coding; H.264/MPEG4-AVC; H.265/HEVC; MPEG4; MPEG4V2; RD-curves; VP8; VP9; bit-rate (ID#: 15-7808)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058271&isnumber=7058196
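The PSNR metric underlying this comparison is standard: PSNR = 10 log10(MAX^2 / MSE). A minimal Python implementation over two equal-length pixel sequences:

    import math

    def psnr(ref, dist, max_val=255.0):
        # Mean squared error between reference and distorted samples.
        mse = sum((a - b) ** 2 for a, b in zip(ref, dist)) / len(ref)
        if mse == 0:
            return float("inf")   # identical signals
        return 10.0 * math.log10(max_val ** 2 / mse)

    print(psnr([100, 120, 140], [101, 119, 142]))  # ~45.1 dB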
Jokinen, E.; Lecomte, J.; Schinkel-Bielefeld, N.; Bäckström, T., “Intelligibility Evaluation of Speech Coding Standards in Severe Background Noise and Packet Loss Conditions,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, vol., no., pp. 5152–5156, 19–24 April 2015. doi:10.1109/ICASSP.2015.7178953
Abstract: Speech intelligibility is an important aspect of speech transmission but often when speech coding standards are compared only the quality is evaluated using perceptual tests. In this study, the performance of three wideband speech coding standards, adaptive multi-rate wideband (AMR-WB), G.718, and enhanced voice services (EVS), is evaluated in a subjective intelligibility test. The test covers different packet loss conditions as well as a near-end background noise condition. Additionally, an objective quality evaluation in different packet loss conditions is conducted. All of the test conditions extend beyond the specification range to evaluate the attainable performance of the codecs in extreme conditions. The results of the subjective tests show that both EVS and G.718 are better in terms of intelligibility than AMR-WB. EVS attains the same performance as G.718 with lower algorithmic delay.
Keywords: speech coding; AMR-WB; EVS; adaptive multirate wideband; background noise; enhanced voice services; intelligibility evaluation; near end background noise condition; packet loss conditions; speech coding standards; speech intelligibility; speech transmission; Codecs; Noise; Packet loss; Speech; Speech coding; Standards; G.718; Speech intelligibility; adaptive multi-rate wideband; packet loss concealment (ID#: 15-7809)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178953&isnumber=7177909
Panichella, S.; Arnaoudova, V.; Di Penta, M.; Antoniol, G., “Would Static Analysis Tools Help Developers with Code Reviews?,” in Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, vol., no., pp. 161–170, 2–6 March 2015. doi:10.1109/SANER.2015.7081826
Abstract: Code reviews have been conducted for decades in software projects, with the aim of improving code quality from many different points of view. During code reviews, developers are supported by checklists, coding standards and, possibly, by various kinds of static analysis tools. This paper investigates whether warnings highlighted by static analysis tools are taken care of during code reviews and whether there are kinds of warnings that tend to be removed more than others. Results of a study conducted by mining the Gerrit repository of six Java open source projects indicate that the density of warnings varies only slightly after each review. The overall percentage of warnings removed during reviews is slightly higher than what previous studies found for the overall project evolution history. However, when looking (quantitatively and qualitatively) at specific categories of warnings, we found that during code reviews developers focus on certain kinds of problems. For such categories of warnings the removal percentage tends to be very high, often above 50% and sometimes up to 100%. Examples are warnings in the imports, regular expressions, and type resolution categories. In conclusion, while broad warning detection might produce far too many false positives, enforcing the removal of certain warnings prior to patch submission could reduce the effort required during the code review process.
Keywords: Java; project management; software management; software quality; software tools; Gerrit repository; Java open source projects; broad warning detection; code quality; code review process; code reviews developers; coding standards; patch submission; project evolution history; software projects; static analysis tools; Context; Data mining; Encoding; History; Software; Standards; Code Review; Empirical Study; Mining Software Repositories; Static Analysis (ID#: 15-7810)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081826&isnumber=7081802
Takhma, Y.; Rachid, T.; Harroud, H.; Abid, M.R.; Assem, N., “Third-Party Source Code Compliance Using Early Static Code Analysis,” in Collaboration Technologies and Systems (CTS), 2015 International Conference on, vol., no., pp. 132–139, 1–5 June 2015. doi:10.1109/CTS.2015.7210413
Abstract: This paper presents a generic tool for static code analysis for the MyIC Phone developer community. Its major aim is to verify, early in the development cycle, the compliance of third-party software with the MyIC Phone platform coding standards, ensuring successful deployment through the MyIC Phone App Store. Built as an extendable Eclipse plug-in, our tool facilitates the collaborative software acceptance tests imposed by the target platform provider. Our approach to code compliance is based on static code analysis, which consists of constructing an abstract model of the source code of the application under analysis. The abstract model is then traversed to find potential non-compliances with the set of rules defined by the platform provider; the rules are distributed as XML files and loaded by the developer into the Eclipse environment upon project instantiation. The results of the analysis are presented in a tree view with the offending code lines highlighted, so they can be easily accessed by the developer. Statistics on conformity with the rules are calculated and displayed in a pie chart for the developer's consideration.
Keywords: XML; conformance testing; program diagnostics; program testing; software quality; software tools; source code (software); Eclipse environment; MyIC Phone App Store; MyIC Phone developer community; MyIC Phone platform coding standards; XML files; abstract model; collaborative software acceptance tests; early static code analysis; extendable Eclipse plug-in; line code; target platform provider; third-party software compliance; third-party source code compliance; tree view; Algorithm design and analysis; Analytical models; Decision support systems; Encoding; Software quality; Standards; Collaborative acceptance tests; Software compliance; Software conformity; Software quality; Software verification; Static code analysis
(ID#: 15-7811)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210413&isnumber=7210375
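The plug-in's rule schema is not published in this citation, so the following Python sketch only illustrates the described flow, loading rules distributed as XML and scanning source lines for non-compliances; the pattern/message attributes are assumptions for illustration.

    import re
    import xml.etree.ElementTree as ET

    # Assumed rule schema: each rule carries a regex pattern and a message.
    RULES_XML = """
    <rules>
      <rule id="R1" pattern="System\\.exit" message="direct exit calls are forbidden"/>
      <rule id="R2" pattern="Thread\\.sleep" message="blocking sleeps are discouraged"/>
    </rules>
    """

    def load_rules(xml_text):
        return [(r.get("id"), re.compile(r.get("pattern")), r.get("message"))
                for r in ET.fromstring(xml_text)]

    def check(source_lines, rules):
        # Report (line number, rule id, message) for every non-compliance.
        return [(n, rid, msg)
                for n, line in enumerate(source_lines, 1)
                for rid, pat, msg in rules if pat.search(line)]

    src = ["int x = 1;", "System.exit(1);", "Thread.sleep(100);"]
    for n, rid, msg in check(src, load_rules(RULES_XML)):
        print("line %d [%s]: %s" % (n, rid, msg))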
Luheng Jia; Chi-ying Tsui; Oscar C. Au; Amin Zheng, “A Fast Variable Block Size Motion Estimation Algorithm with Refined Search Range for a Two-Layer Data Reuse Scheme,” in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, vol., no., pp. 1206–1209, 24–27 May 2015. doi:10.1109/ISCAS.2015.7168856
Abstract: Motion estimation (ME) serves as a key tool in a variety of video coding standards. With the increasing need for higher-resolution video formats, limited memory bandwidth becomes a bottleneck for ME implementation. The huge volume of data loaded from external memory to on-chip memory and the frequent data fetches from on-chip memory to the ME engine are two major problems. To reduce both off-chip and on-chip memory bandwidth, we propose a two-layer data reuse scheme. On the macroblock (MB) layer, an advanced Level C data reuse scheme is presented. It employs two cooperating on-chip caches which load data in a novel local-snake scanning manner. On the block layer, we propose a fast variable block size motion estimation with a refined search window (RSW-VBSME). A new approach for hardware implementation of VBSME is then employed based on the fast algorithm. Instead of obtaining the SADs of all modes at the same time, the ME for different block sizes is performed separately. This enables higher data reusability within an MB. The two-layer data reuse scheme achieves a more than 90% reduction of off-chip memory bandwidth with a slight increase in on-chip memory size. Moreover, on-chip memory bandwidth is also greatly reduced compared with other reuse methods with different VBSME implementations.
Keywords: cache storage; microprocessor chips; motion estimation; video coding; RSW-VBSME; advanced Level C data reuse scheme; block layer; cooperating on-chip caches; external memory; fast variable block size motion estimation algorithm; frequent data fetching; huge data loading; limited memory bandwidth; local-snake scanning manner; macroblock layer; off-chip memory bandwidth reduction; on-chip memory bandwidth reduction; refined search range; refined search window; two-layer data reuse scheme; video coding standards; video format; Bandwidth; Loading; Manganese; Motion estimation; Strips; System-on-chip; Video coding; VBSME; data reuse; memory bandwidth (ID#: 15-7812)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168856&isnumber=7168553
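To make the bandwidth problem concrete, the following Python sketch shows textbook full-search block matching, in which every candidate position re-reads a window of reference pixels; this redundant traffic is what data reuse schemes such as the paper's Level C variant cut down. It is the generic baseline, not the authors' algorithm.

    def sad(block_a, block_b):
        # Sum of absolute differences: the matching cost used in VBSME.
        return sum(abs(a - b)
                   for row_a, row_b in zip(block_a, block_b)
                   for a, b in zip(row_a, row_b))

    def full_search(cur, ref, bx, by, bsize, search_range):
        # Exhaustive search: every candidate re-reads bsize*bsize reference
        # pixels, the redundant traffic that data reuse schemes remove.
        block = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
        best_mv, best_cost = (0, 0), float("inf")
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                x, y = bx + dx, by + dy
                if 0 <= x <= len(ref[0]) - bsize and 0 <= y <= len(ref) - bsize:
                    cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
                    cost = sad(block, cand)
                    if cost < best_cost:
                        best_mv, best_cost = (dx, dy), cost
        return best_mv, best_cost

    cur = [[(x + y) % 16 for x in range(16)] for y in range(16)]
    ref = [[(x + y + 2) % 16 for x in range(16)] for y in range(16)]
    print(full_search(cur, ref, 4, 4, 8, 4))   # a zero-SAD offset: ((2, -4), 0)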
Taeyoung Na; Sangkwon Na; Kiwon Yoo, “A Probabilistic-Based CU Size Pre-Determination Method for Parallel Processing of HEVC Encoders,” in Consumer Electronics (ICCE), 2015 IEEE International Conference on, vol., no., pp. 327–330, 9–12 Jan. 2015. doi:10.1109/ICCE.2015.7066432
Abstract: The advent of the state-of-the-art video coding standard, High Efficiency Video Coding (HEVC), is expected to bring great changes to the fields of broadcasting, storage, and communications. HEVC achieves higher coding gains than previous video coding standards in terms of rate-distortion (R-D) performance, using various improved coding tools. This imposes heavy computational complexity and cost on HEVC encoders, a strong restriction especially for hardware (H/W) encoders, which are preferred for real-time applications and services. In particular, the quad-tree based coding unit (CU) structures with various sizes are known to contribute to the high coding gains of HEVC. However, R-D cost calculation for mode decision with all CU sizes normally cannot be afforded in H/W HEVC encoders for real-time operation. To overcome this, a CU size pre-determination method based on a probabilistic decision model, suited to H/W HEVC encoder implementation, is proposed in this paper. All available CU sizes are checked before inter prediction, and unnecessary CU sizes are excluded from inter prediction according to the decision model. Inter prediction with the reduced number of CU sizes can then be performed in parallel with pipeline structures. The experimental results show that the proposed method effectively determines the necessary CU sizes, with a negligible BD-BR coding loss of 1.57% for the LD (low-delay) coding structure and 1.08% for the RA (random access) coding structure, respectively.
Keywords: probability; quadtrees; video coding; HEVC encoders; computational complexity; high efficiency video coding; parallel processing; probabilistic decision model; probabilistic-based CU size pre-determination method; quad-tree based coding unit structures; rate-distortion performance; Complexity theory; Conferences; Consumer electronics; Encoding; Probabilistic logic; Standards; Video coding (ID#: 15-7813)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066432&isnumber=7066289
Siwei Ma; Tiejun Huang; Wen Gao, “The Second Generation IEEE 1857 Video Coding Standard,” in Signal and Information Processing (ChinaSIP), 2015 IEEE China Summit and International Conference on, vol., no., pp. 171–175, 12–15 July 2015. doi:10.1109/ChinaSIP.2015.7230385
Abstract: A new-generation video coding standard developed by the IEEE 1857 working group will be published as IEEE 1857.4. It builds on the first generation of the IEEE 1857 video coding standard, IEEE Std 1857-2013 (IEEE 1857.1), and targets doubling its coding efficiency. This paper provides an overview of the forthcoming IEEE 1857.4 video coding standard, including the background and the key coding tools used in IEEE 1857.4. Performance comparisons between IEEE 1857.4 and state-of-the-art coding standards are also provided.
Keywords: video coding; IEEE 1857 video coding standard; IEEE 1857.1; IEEE 1857.4; IEEE std. 1857-2013; Encoding; Filtering; Redundancy; Standards; Surveillance; Transforms; Video coding; AVS2; IEEE 1857 (ID#: 15-7814)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230385&isnumber=7230339
Layman, L.; Seaman, C.; Falessi, D.; Diep, M., “Ask the Engineers: Exploring Repertory Grids and Personal Constructs for Software Data Analysis,” in Cooperative and Human Aspects of Software Engineering (CHASE), 2015 IEEE/ACM 8th International Workshop on, vol., no., pp. 81–84, 18–18 May 2015. doi:10.1109/CHASE.2015.25
Abstract: Maturity in software projects is often equated with data-driven predictability. However, data collection is expensive and measuring all variables that may correlate with project outcome is neither practical nor feasible. In contrast, a project engineer can identify a handful of factors that he or she believes influence the success of a project. The challenge is to quantify engineers’ insights in a way that is useful for data analysis. In this exploratory study, we investigate the repertory grid technique for this purpose. The repertory grid technique is an interview-based procedure for eliciting “constructs” (e.g., Adhering to coding standards) that individuals believe influence a worldly phenomenon (e.g., What makes a high-quality software project) by comparing example elements from their past (e.g., Projects they have worked on). We investigate the relationship between objective metrics of project performance and repertory grid constructs elicited from eight software engineers. Our results show correlations between the engineers’ subjective constructs and the objective project outcome measures. This suggests that repertory grids may be of benefit in developing models of project outcomes, particularly when project data is limited.
Keywords: data analysis; project management; software development management; data collection; interview-based procedure; personal constructs; project outcome measures; project performance; project success; repertory grid technique; software data analysis; software project maturity; subjective constructs; Atmospheric measurements; Companies; Interviews; Particle measurements; Productivity; Software; practitioners; repertory grids; software data analytics (ID#: 15-7815)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166093&isnumber=7166073
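The correlation analysis the abstract reports can be sketched with a plain Pearson correlation between construct ratings and an objective project metric; the data below are invented for illustration, and the paper's actual grids and metrics are not reproduced.

    import math

    def pearson(xs, ys):
        # Sample Pearson correlation coefficient.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Invented data: one engineer rates five past projects (1-5) on the
    # elicited construct "adheres to coding standards"; the objective
    # outcome is each project's measured defect density.
    construct_ratings = [5, 4, 2, 3, 1]
    defect_density = [0.8, 1.1, 3.0, 2.2, 4.5]
    print(round(pearson(construct_ratings, defect_density), 2))   # -0.98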
Hang Chen; Rong Xie; Liang Zhang, “Gradient Based Fast Mode and Depth Decision for High Efficiency Intra Frame Video Coding,” in Broadband Multimedia Systems and Broadcasting (BMSB), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 17–19 June 2015. doi:10.1109/BMSB.2015.7177230
Abstract: Intra frame coding plays an important role in video coding. High efficiency coding standards exploit new techniques to improve intra-coding performance. A flexible partition structure and multiple prediction modes are applied to achieve accurate intra prediction: different block sizes and prediction modes are traversed to find the best coding unit and the most accurate prediction direction. All these processes add considerable computational complexity. To optimize the intra coding process, a gradient based algorithm is proposed in this paper to make fast mode and depth decisions. A Sobel operator is applied to obtain pixel gradient information, and its statistical properties are studied to find the most probable prediction modes for further selection. Furthermore, the texture information guides coding unit partitioning to avoid unnecessary calculation. To reduce encoding complexity, we start from the smallest coding unit and build a bottom-up partition process. Experiments on the high efficiency coding standard AVS2 show that about 48% of encoding time is saved on average, with negligible coding performance loss.
Keywords: computational complexity; encoding; statistical analysis; video coding; Sobel operator; bottom-up partition process; computational complexity; depth decision; encoding complexity; flexible partition structure; gradient based fast mode; gradient pixel information; high efficiency coding standard AVS2; high efficiency intra frame video coding; multiple prediction modes; statistical properties; Algorithm design and analysis; Complexity theory; Encoding; Indexes; Partitioning algorithms; Standards; Video coding; Depth decision; Gradient; Intra prediction; Mode decision (ID#: 15-7816)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177230&isnumber=7177182
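A minimal Python sketch of the gradient step described above: Sobel responses are accumulated over a block, and the dominant gradient angle hints at the most probable intra prediction direction. The mapping from angles to AVS2 mode indices is the paper's contribution and is not reproduced here.

    import math

    SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

    def dominant_gradient_angle(block):
        # Accumulate Sobel responses over the block interior; the resulting
        # angle suggests the most probable intra prediction direction.
        gx_sum = gy_sum = 0.0
        for y in range(1, len(block) - 1):
            for x in range(1, len(block[0]) - 1):
                for j in range(3):
                    for i in range(3):
                        p = block[y + j - 1][x + i - 1]
                        gx_sum += SOBEL_X[j][i] * p
                        gy_sum += SOBEL_Y[j][i] * p
        return math.degrees(math.atan2(gy_sum, gx_sum))

    block = [[10, 10, 80, 80]] * 4   # vertical edge down the middle
    # 0 degrees: a purely horizontal gradient, i.e. a vertical structure,
    # so vertical prediction modes are the likely candidates.
    print(dominant_gradient_angle(block))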
Mandal, D.K.; Mody, M.; Mehendale, M.; Yadav, N.; Chaitanya, G.; Goswami, P.; Sanghvi, H.; Nandan, N., “Accelerating H.264/HEVC Video Slice Processing Using Application Specific Instruction Set Processor,” in Consumer Electronics (ICCE), 2015 IEEE International Conference on, vol., no., pp. 408–411, 9–12 Jan. 2015. doi:10.1109/ICCE.2015.7066465
Abstract: Video coding standards (e.g., H.264, HEVC) use the slice, consisting of a header and payload video data, as an independent coding unit for low-latency encode-decode and better transmission error resiliency. In typical video streams, decoding the slice header is simple enough to be done on standard embedded RISC processor architectures. However, universal decoding scenarios require handling worst-case slice header complexity, which grows to an unmanageable level, well beyond the capacity of most embedded RISC processors. Hardwiring the slice processing control logic is potentially helpful, but it reduces the flexibility to tune the decoder for error conditions, an important differentiator for the end user. The paper presents a programmable approach to accelerate slice header decoding using an Application Specific Instruction Set Processor (ASIP). Purpose-built instructions, implemented as extensions to a RISC processor (ARP32), accelerate slice processing by 30% for typical cases, reaching up to 70% for slices with worst-case decoding complexity. The approach enables real-time universal video decoding for all slice-complexity scenarios without sacrificing the flexibility and adaptability to customize and differentiate the codec solution via software programmability.
Keywords: instruction sets; reduced instruction set computing; video codecs; video coding; video streaming; ARP32; ASIP; H.264-HEVC video slice processing; RISC processor; application specific instruction set processor; codec solution; programmable approach; real time universal video decoding; slice header decoding; slice-complexity-scenarios; software programmability; worst case decoding complexity; Decoding; Engines; Real-time systems; Software; Standards; Streaming media; Video coding; ASIP; Custom instructions; H.264; HEVC; Slice; Universal Decoder (ID#: 15-7817)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066465&isnumber=7066289
Takada, R.; Orihashi, S.; Matsuo, Y.; Katto, J., “Improvement of 8K UHDTV Picture Quality for H.265/HEVC by Global Zoom Estimation,” in Consumer Electronics (ICCE), 2015 IEEE International Conference on, vol., no., pp. 58–59, 9–12 Jan. 2015. doi:10.1109/ICCE.2015.7066317
Abstract: Block-based Motion Estimation (ME) has been widely used in various video coding standards to remove temporal redundancy. However, this ME has the limitation that it can only compensate for parallel translation, and various methods have been proposed for other motions such as zooming. In recent years, 8K UHDTV (7,680 × 4,320 pixels) has been developed. Since 8K content exhibits large zooming motion that is difficult to predict by block matching, it is important to improve zoom motion estimation. In this paper, to handle zooming in 8K video sequences, we propose a method for improving picture quality by global zoom estimation based on analysis of the motion vectors extracted by block matching.
Keywords: estimation theory; high definition television; image matching; image sequences; motion estimation; video coding; 8K UHDTV picture quality; H.265-HEVC; ME; block based motion estimation; block matching; global zoom estimation; motion vector analysis; parallel translation; video coding standards; video sequences; zoom motion estimation; Encoding; Estimation; Motion estimation; Motion segmentation; Proposals; Vectors; Video coding (ID#: 15-7818)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066317&isnumber=7066289
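The idea of estimating a global zoom from block-matching motion vectors can be sketched as a least-squares fit to the radial model v = (s - 1)(p - c), where s is the zoom factor and c the zoom center. This generic estimator is an assumption for illustration, not the authors' method.

    def estimate_zoom(points, vectors, center):
        # Least-squares fit of v = (s - 1) * (p - c): a purely zooming camera
        # moves every pixel radially away from (or toward) the center.
        cx, cy = center
        num = den = 0.0
        for (px, py), (vx, vy) in zip(points, vectors):
            rx, ry = px - cx, py - cy
            num += rx * vx + ry * vy
            den += rx * rx + ry * ry
        return 1.0 + num / den

    # Synthetic motion vector field for a 5% zoom-in about the frame center.
    center = (960.0, 540.0)
    pts = [(100.0, 100.0), (1800.0, 200.0), (500.0, 900.0), (1500.0, 1000.0)]
    mvs = [((x - center[0]) * 0.05, (y - center[1]) * 0.05) for x, y in pts]
    print(estimate_zoom(pts, mvs, center))   # ~1.05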
Papadopoulos, M.A.; Agrafiotis, D.; Bull, D., “On the Performance of Modern Video Coding Standards with Textured Sequences,” in Systems, Signals and Image Processing (IWSSIP), 2015 International Conference on, vol., no., pp. 137–140,
10–12 Sept. 2015. doi:10.1109/IWSSIP.2015.7314196
Abstract: This work presents two studies on the topic of coding highly textured content with H.265/HEVC. The aim of the studies is to identify any potential for improvement in the performance of the codec with this type of content. Both studies employ a texture-focused video database developed by the authors. Study I evaluates the performance of H.265/HEVC relative to H.264/AVC for the case of static, dynamic and mixed texture content. Study II evaluates the effectiveness of the currently used objective quality measures with this type of content. The results suggest that there is potential for improvement in coding performance by matching the quality/error measure used to the type of content (textured/non-textured) and type of texture (static, dynamic, mixed) encountered.
Keywords: image sequences; image texture; video coding; H.265 video coding performance; HEVC performance; modern video coding standards; textured sequences; Correlation; Databases; Encoding; Quality assessment; Rate-distortion; Silicon; Video coding; AVC; BVI Texture; HEVC; texture sequences (ID#: 15-7819)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314196&isnumber=7313917
Blasi, S.G.; Zupancic, I.; Izquierdo, E.; Peixoto, E., “Adaptive Precision Motion Estimation for HEVC Coding,” in Picture Coding Symposium (PCS), 2015, vol., no., pp. 144–148, May 31 2015–June 3 2015. doi:10.1109/PCS.2015.7170064
Abstract: Most video coding standards, including the state-of-the-art High Efficiency Video Coding (HEVC), make use of sub-pixel Motion Estimation (ME) with Motion Vectors (MV) at fractional precisions to achieve high compression ratios. Unfortunately, sub-pixel ME comes at very high computational costs due to the interpolation step and additional motion searches. In this paper, a fast sub-pixel ME algorithm is proposed. The MV precision is adaptively selected on each block to skip the half or quarter precision steps when not needed. The algorithm bases the decision on local features, such as the behaviour of the residual error samples, and global features, such as the amount of edges in the pictures. Experimental results show that the method reduces total encoding time by up to 17.6% compared to conventional HEVC, at modest efficiency losses.
Keywords: data compression; motion estimation; vectors; video codecs; video coding; HEVC coding; MV; adaptive precision motion estimation; high efficiency video coding; motion vectors; subpixel ME algorithm; subpixel motion estimation; video coding standards; video compression ratios; Algorithm design and analysis; Encoding; Libraries; Vehicles; HEVC; Sub-Pixel Motion Estimation; Video Coding (ID#: 15-7820)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170064&isnumber=7170026
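A toy Python sketch of the kind of per-block precision decision the abstract describes, in which fractional-pel refinement is skipped when local and global features suggest it will not pay off; the features and thresholds are illustrative assumptions, not the paper's actual decision rules.

    def mv_precision(residual_energy, edge_density,
                     energy_thresh=50.0, edge_thresh=0.10):
        # Pick the motion vector precision for a block: smooth, already
        # well-predicted blocks skip fractional refinement entirely.
        # Thresholds and features are illustrative, not the paper's.
        if residual_energy < energy_thresh:
            return "integer"   # integer-pel match is already good enough
        if edge_density < edge_thresh:
            return "half"      # moderate detail: stop at half-pel
        return "quarter"       # detailed block: full quarter-pel search

    print(mv_precision(12.0, 0.02))    # integer
    print(mv_precision(80.0, 0.04))    # half
    print(mv_precision(300.0, 0.40))   # quarter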
Argyriou, V., “DNA Based Image Coding,” in Digital Signal Processing (DSP), 2015 IEEE International Conference on, vol., no., pp. 468–472, 21–24 July 2015. doi:10.1109/ICDSP.2015.7251916
Abstract: Lossless image compression is necessary for many applications related to digital cameras, medical imaging, mobile telecommunications, security and entertainment. Image compression, an important field in image processing, includes several coding standards providing high compression ratios. In this work a novel method for lossless image encoding and decoding is introduced. Inspired by the storage and data representation architectures used in living multicellular organisms, the proposed DNA coding approach encodes images based on the same principles. The coding process includes three main stages (division, differentiation and specialization), allowing the exploitation of spatial and inter-pixel redundancies. The key element in achieving this representation and efficiency is the novel concept of 'stem' pixels that is introduced. A comparative study was performed against current state-of-the-art lossless image coding standards, showing that the proposed methodology provides high compression ratios.
Keywords: DNA; data compression; image coding; image representation; medical image processing; redundancy; DNA based image coding; data representation; data storage; image processing; interpixel redundancy; living multicellular organism; lossless image compression; lossless image decoding; lossless image encoding; spatial redundancy; Biomedical imaging; Cameras; Encoding; Mobile communication; Transform coding; lossless compression (ID#: 15-7821)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7251916&isnumber=7251315
Daehyeok Gwon; Haechul Choi; Youn, J.M., “HEVC Fast Intra Mode Decision Based on Edge and SATD Cost,” in Multimedia and Broadcasting (APMediaCast), 2015 Asia Pacific Conference on, vol., no., pp. 1–5, 23–25 April 2015. doi:10.1109/APMediaCast.2015.7210287
Abstract: HEVC (high efficiency video coding) achieves much higher coding efficiency than previous video coding standards, at the cost of significant computational complexity. This paper proposes a fast intra mode decision scheme in which edge orientation and the sum of absolute Hadamard transformed differences (SATD) are used to capture the texture characteristics of blocks. According to these features, the numbers of candidate modes to be tested in the rough mode decision and rate-distortion optimization processes are reduced, respectively. In particular, the rate-distortion optimization candidates are selected by a Bayesian classification framework to minimize risk in terms of coding loss and computational complexity. Experimental results reveal that the proposed scheme reduces encoding run time by 30.3%, with a negligible coding loss of 0.9% BD-rate for the all-intra coding scenario.
Keywords: belief networks; computational complexity; distortion; optimisation; video coding; Bayesian classification framework; HEVC fast intra mode decision; SATD cost; computational complexity; edge orientation; high efficiency video coding; rate-distortion optimization processes; rough mode decision; sum of absolute Hadamard transformed difference; Bayes methods; Computational complexity; Encoding; Image edge detection; Indexes; Multimedia communication; Video coding; HEVC; fast encoder; intra coding; mode decision (ID#: 15-7822)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210287&isnumber=7210263
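The SATD cost named above is standard: the residual block is Hadamard-transformed and the absolute coefficients are summed. A 4x4 Python version follows (encoders typically apply a normalization factor, omitted here):

    def satd4x4(cur, pred):
        # SATD = sum |H * D * H^T| for the 4x4 residual D = cur - pred.
        # (Encoders usually scale this by a constant; omitted here.)
        H = [[1, 1, 1, 1],
             [1, -1, 1, -1],
             [1, 1, -1, -1],
             [1, -1, -1, 1]]
        D = [[c - p for c, p in zip(cr, pr)] for cr, pr in zip(cur, pred)]
        def mul(A, B):
            return [[sum(A[i][k] * B[k][j] for k in range(4))
                     for j in range(4)] for i in range(4)]
        # H is symmetric, so H^T == H.
        return sum(abs(v) for row in mul(mul(H, D), H) for v in row)

    cur = [[60, 61, 62, 63]] * 4   # a horizontal ramp
    pred = [[60, 60, 60, 60]] * 4  # flat prediction
    print(satd4x4(cur, pred))      # 48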
Yu, M.; Lakshman, H.; Girod, B., “A Framework to Evaluate Omnidirectional Video Coding Schemes,” in Mixed and Augmented Reality (ISMAR), 2015 IEEE International Symposium on, vol., no., pp. 31–36, Sept. 29 2015–Oct. 3 2015. doi:10.1109/ISMAR.2015.12
Abstract: Omnidirectional videos of real world environments viewed on head-mounted displays with real-time head motion tracking can offer immersive visual experiences. For live streaming applications, compression is critical to reduce the bitrate. Omnidirectional videos, which are spherical in nature, are mapped onto one or more planes before encoding to interface with modern video coding standards. In this paper, we consider the problem of evaluating the coding efficiency in the context of viewing with a head-mounted display. We extract viewport based head motion trajectories, and compare the original and coded videos on the viewport. With this approach, we compare different sphere-to-plane mappings. We show that the average viewport quality can be approximated by a weighted spherical PSNR.
Keywords: Approximation methods; Bit rate; Encoding; Head; Streaming media; Trajectory; Video coding (ID#: 15-7823)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7328056&isnumber=7328030
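The closing observation, that average viewport quality can be approximated by a weighted spherical PSNR, admits a compact sketch: each row of an equirectangular frame is weighted by the cosine of its latitude so that oversampled polar pixels count less. The exact weighting used in the paper may differ.

    import math

    def ws_psnr(ref, dist, max_val=255.0):
        # Weight each equirectangular row by cos(latitude) so that
        # oversampled polar pixels contribute less to the error.
        h = len(ref)
        num = den = 0.0
        for j, (r_row, d_row) in enumerate(zip(ref, dist)):
            w = math.cos((j + 0.5 - h / 2.0) * math.pi / h)
            for r, d in zip(r_row, d_row):
                num += w * (r - d) ** 2
                den += w
        wmse = num / den
        return float("inf") if wmse == 0 else 10.0 * math.log10(max_val ** 2 / wmse)

    ref = [[128] * 8 for _ in range(4)]
    dist = [[126] * 8 for _ in range(4)]
    print(ws_psnr(ref, dist))   # ~42.1 dB; uniform error, so weights cancel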
Tok, M.; Eiselein, V.; Sikora, T., “Motion Modeling for Motion Vector Coding in HEVC,” in Picture Coding Symposium (PCS), 2015, vol., no., pp. 154–158, May 31 2015–June 3 2015. doi:10.1109/PCS.2015.7170066
Abstract: During the standardization of HEVC, new motion information coding and prediction schemes such as temporal motion vector prediction have been investigated to reduce the spatial redundancy of motion vector fields used for motion compensated inter prediction. In this paper a general motion model based vector coding scheme is introduced. This scheme includes estimation, coding and dynamic recombination of parametric motion models to generate vector predictors and merge candidates for all common HEVC inter coding settings. Bit rate reductions of up to 4.9% indicate that higher order motion models can increase the efficiency of motion information coding in modern hybrid video coding standards.
Keywords: motion estimation; video coding; HEVC; bit rate reduction; general motion model based vector coding scheme; high efficiency video coding; motion information coding; motion model coding; motion model dynamic recombination; motion model estimation; motion prediction scheme; parametric motion model; spatial redundancy reduction; Bit rate; Delays; Encoding; Image coding; Predictive models; Standards; Video coding (ID#: 15-7824)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170066&isnumber=7170026
Weijia Zhu; Wenpeng Ding; Jizheng Xu; Yunhui Shi; Baocai Yin, “Multi-stage Hash Based Motion Estimation for HEVC,” in Data Compression Conference (DCC), 2015, vol., no., pp. 478–478, 7–9 April 2015. doi:10.1109/DCC.2015.25
Abstract: Motion estimation plays an important role in video coding standards such as H.264/AVC and HEVC. In this paper, we propose a multi-stage hash based motion estimation algorithm for HEVC, which enables hash based motion estimation for natural videos. In the proposed method, prediction blocks significantly different from the current prediction unit are eliminated from the motion estimation process. Locality sensitive hashing functions are used to measure the difference between the input block and predicted blocks. The proposed algorithm is implemented in the HM 12.0 software, and the simulation results show that the complexity of motion estimation is significantly reduced with negligible coding performance loss.
Keywords: computational complexity; cryptography; motion estimation; video coding; HEVC; HM12.0 software; complexity reduction; locality sensitive hashing function; multistage hash based motion estimation; natural video coding standard; Asia; Data compression; Encoding; Motion estimation; Multimedia communication; Software; Transportation; Locality sensitive hashing; fast motion estimation (ID#: 15-7825)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149341&isnumber=7149089
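Locality sensitive hashing is the only ingredient the abstract names; the Python sketch below shows a generic random-projection LSH used to discard dissimilar prediction candidates before any SAD is computed. It is a stand-in for, not a reproduction of, the paper's multi-stage hash.

    import random

    def lsh_signatures(blocks, n_bits=8, seed=42):
        # Random-projection LSH: vectors pointing in similar directions get
        # the same sign bits with high probability (generic sketch, not the
        # paper's multi-stage hash).
        rng = random.Random(seed)
        dim = len(blocks[0])
        planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
        def signature(v):
            return tuple(int(sum(p * x for p, x in zip(plane, v)) >= 0)
                         for plane in planes)
        return [signature(b) for b in blocks]

    cur = [10, 12, 200, 205]              # current prediction unit (flattened)
    candidates = [[11, 13, 198, 207],     # similar block
                  [200, 10, 12, 5]]       # very different block
    sig_cur = lsh_signatures([cur])[0]    # same seed, so same hyperplanes
    sigs = lsh_signatures(candidates)
    # Candidates whose signature differs are pruned before any SAD is spent;
    # the similar block almost always survives, the dissimilar one rarely does.
    print([c for c, s in zip(candidates, sigs) if s == sig_cur])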
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.