Biblio

Filters: Keyword is quality
2021-08-11
Nazarenko, Maxim A.  2020.  What is Mobile Operation System Quality? 2020 International Conference Quality Management, Transport and Information Security, Information Technologies (IT QM IS). :145–147.
There are several modern mobile operating systems; the main two are iOS and Android, while two others, Windows Mobile and Symbian, were widely used in the past. Each of these systems has its own pros and cons, and none of them is the best or the worst against every criterion. This paper discusses the main criteria of mobile operating system quality and defines what mobile operating system quality is.
2020-07-13
Agrawal, Shriyansh, Sanagavarapu, Lalit Mohan, Reddy, YR.  2019.  FACT - Fine grained Assessment of web page CredibiliTy. TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). :1088–1097.
With more than a trillion web pages, there is a plethora of content available for consumption. Search engine queries invariably lead to overwhelming information, parts of it relevant and some of it irrelevant. Often the information provided can be conflicting, ambiguous, and inconsistent, contributing to the loss of credibility of the content. In the past, researchers have proposed approaches for credibility assessment and enumerated factors influencing the credibility of web pages. In this work, we detail the WEBCred framework for automated genre-aware credibility assessment of web pages. We developed a tool based on the proposed framework to extract web page feature instances, identify the genre a web page belongs to, and assess its Genre Credibility Score (GCS). We validated our approach on an `Information Security' dataset of 8,550 URLs with 171 features across 7 genres. The supervised learning algorithm, Gradient Boosted Decision Tree, classified genres with 88.75% testing accuracy over 10-fold cross-validation, an improvement over the current benchmark. We also examined our approach on `Health' domain web pages and obtained comparable results. The calculated GCS correlated 69% with the crowdsourced Web Of Trust (WOT) score and 13% with the algorithm-based Alexa ranking across 5 Information Security groups. This difference in correlation suggests that our GCS approach aligns more closely with the human way (WOT) than with the algorithmic way (Alexa) of web assessment in both experiments.
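A minimal sketch, assuming scikit-learn, of the genre-classification step the abstract describes. The arrays below are random placeholders standing in for the paper's 171 web-page features and 7 genres; none of the WEBCred feature extraction or dataset is reproduced.

```python
# Illustrative only: a gradient boosted decision tree evaluated with 10-fold
# cross-validation, as in the abstract. The data is synthetic placeholder data,
# not the paper's `Information Security' dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 171))    # placeholder for 171 feature instances per URL
y = rng.integers(0, 7, size=700)   # placeholder labels for 7 genres

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```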
2020-02-10
Ashfaq, Qirat, Khan, Rimsha, Farooq, Sehrish.  2019.  A Comparative Analysis of Static Code Analysis Tools That Check Java Code Adherence to Java Coding Standards. 2019 2nd International Conference on Communication, Computing and Digital Systems (C-CODE). :98–103.

The Java programming language is considered highly important due to its extensive use in the development of web, desktop, and handheld device applications. Implementing Java coding standards in Java code has great importance, as it creates consistency and, as a result, better development and maintenance. Finding bugs and standards violations is more important at an early stage of software development than at a later stage, when change becomes impossible or too expensive. In this paper, some tools and research work done on coding standard analyzers are reviewed. These tools are categorized based on the type of rules they check, namely: style, concurrency, exceptions, quality, security, dependency, and general methods of static code analysis. Finally, a list of Java coding standard enforcing tools is analyzed against certain predefined parameters that are limited by the scope of the research paper under study. This review will provide the basis for selecting a static code analysis tool that enforces international Java coding standards such as the Rule of Ten and the JPL Coding Standards. Such tools have great importance, especially in the development of mission/safety-critical systems. This work can be very useful for developers in selecting a good tool for Java code analysis according to their requirements.
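As a rough illustration of the kind of style rule such analyzers check, here is a toy checker, not one of the surveyed tools; the 120-character line limit is an assumption for the example, not a quoted standard.

```python
# Illustrative only: a toy static style check over Java source files, flagging
# lines longer than an assumed 120-character limit. Real analyzers such as the
# tools surveyed in the paper implement far richer rule sets.
import sys
from pathlib import Path

MAX_LINE_LENGTH = 120  # assumed limit for illustration

def check_file(path: Path) -> int:
    violations = 0
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
        if len(line) > MAX_LINE_LENGTH:
            print(f"{path}:{lineno}: line exceeds {MAX_LINE_LENGTH} characters")
            violations += 1
    return violations

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    total = sum(check_file(p) for p in root.rglob("*.java"))
    print(f"{total} violation(s) found")
```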

2017-09-19
Mercado, Iván Tactuk, Munaiah, Nuthan, Meneely, Andrew.  2016.  The Impact of Cross-platform Development Approaches for Mobile Applications from the User's Perspective. Proceedings of the International Workshop on App Market Analytics. :43–49.

Mobile app developers today have a hard decision to make: to independently develop native apps for different operating systems or to develop an app that is cross-platform compatible. The availability of different tools and approaches to support cross-platform app development only makes the decision harder. In this study, we used user reviews of apps to empirically understand the relationship (if any) between the approach used in the development of an app and its perceived quality. We used Natural Language Processing (NLP) models to classify 787,228 user reviews of the Android and iOS versions of 50 apps as complaints about one of four quality concerns: performance, usability, security, and reliability. We found that hybrid apps (on both Android and iOS platforms) tend to be more prone to user complaints than interpreted/generated apps. In a study of Facebook, an app that underwent a change in development approach from hybrid to native, we found that the change in development approach was accompanied by a reduction in user complaints about performance and reliability.
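A minimal sketch, assuming scikit-learn and invented example reviews, of the kind of review-classification step the study describes; the actual NLP models and the 787,228-review corpus are not reproduced here.

```python
# Illustrative only: classify user reviews into the four quality concerns named
# in the abstract. A TF-IDF + linear classifier pipeline stands in for the
# study's NLP models; the training reviews are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "app crashes every time I open it",             # reliability complaint
    "scrolling is laggy and slow",                  # performance complaint
    "the login screen is confusing",                # usability complaint
    "it asks for permissions it should not need",   # security complaint
]
labels = ["reliability", "performance", "usability", "security"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(reviews, labels)
print(clf.predict(["the app freezes and loses my data"]))
```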

2017-08-02
Dmitriev, Pavel, Wu, Xian.  2016.  Measuring Metrics. Proceedings of the 25th ACM International Conference on Information and Knowledge Management. :429–437.

You get what you measure, and you can't manage what you don't measure. Metrics are a powerful tool used in organizations to set goals, decide which new products and features should be released to customers, which new tests and experiments should be conducted, and how resources should be allocated. To a large extent, metrics drive the direction of an organization, and getting metrics 'right' is one of the most important and difficult problems an organization needs to solve. However, creating good metrics that capture long-term company goals is difficult. They try to capture abstract concepts such as success, delight, loyalty, engagement, lifetime value, etc. How can one determine that a metric is a good one? Or that one metric is better than another? In other words, how do we measure the quality of metrics? Can the evaluation process be automated so that anyone with an idea of a new metric can quickly evaluate it? In this paper we describe the metric evaluation system deployed at Bing, where we have been working on designing and improving metrics for over five years. We believe that by applying a data-driven approach to metric evaluation we have been able to substantially improve our metrics and, as a result, ship better features and improve the search experience for Bing's users.
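One hedged illustration of what a data-driven metric evaluation can look like; this is an invented example, not the Bing system described in the paper: score a candidate metric by how often its movement agrees with the known outcome of past experiments.

```python
# Illustrative only: check how often a candidate metric moves in the direction
# implied by historically labeled experiments. The data is invented and the
# paper's actual evaluation system is not reproduced here.
# Each experiment: (metric delta between treatment and control, known outcome),
# where outcome is +1 if the treatment is known good and -1 if known bad.
labeled_experiments = [
    (+0.8, +1), (+0.2, +1), (-0.5, -1), (+0.1, -1), (-0.3, -1), (+0.4, +1),
]

agreements = sum(1 for delta, outcome in labeled_experiments if delta * outcome > 0)
directional_agreement = agreements / len(labeled_experiments)
print(f"directional agreement with labeled outcomes: {directional_agreement:.2f}")
```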

2017-04-20
Zaidan, Firas, Hannebauer, Christoph, Gruhn, Volker.  2016.  Quality Attestation: An Open Source Pattern. Proceedings of the 21st European Conference on Pattern Languages of Programs. :2:1–2:7.

A number of small Open Source projects let independent providers measure different aspects of their quality that would otherwise be hard to see. This paper describes this observation as the pattern Quality Attestation. Quality Attestation belongs to a family of Open Source patterns written by various authors.

2015-11-12
Emfinger, W., Karsai, G.  2015.  Modeling Network Medium Access Protocols for Network Quality of Service Analysis. Real-Time Distributed Computing (ISORC), 2015 IEEE 18th International Symposium on. :292–295.

Design-time analysis and verification of distributed real-time embedded systems necessitates the modeling of the time-varying performance of the network and comparing that to application requirements. Earlier work has shown how to build a system network model that abstracted away the network's physical medium and protocols which govern its access and multiplexing. In this work we show how to apply a network medium channel access protocol, such as Time-Division Multiple Access (TDMA), to our network analysis methods and use the results to show that the abstracted model without the explicit model of the protocol is valid.
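A minimal sketch of the general idea of gating available capacity with a TDMA schedule; this is an illustration under assumed parameters (link rate, slot length, slot count), not the paper's modeling framework.

```python
# Illustrative only: within each TDMA period a node may transmit only during its
# assigned slot, so its available capacity profile is the link rate gated by the
# slot schedule. All parameters below are assumptions for the example.
LINK_RATE_MBPS = 100.0   # assumed physical link rate
NUM_SLOTS = 4            # slots per TDMA period
SLOT_LEN_MS = 10.0       # assumed slot length
NODE_SLOT = 2            # slot index assigned to the node of interest

def available_rate(t_ms: float) -> float:
    """Instantaneous rate (Mbps) the node may use at time t under TDMA."""
    slot = int(t_ms // SLOT_LEN_MS) % NUM_SLOTS
    return LINK_RATE_MBPS if slot == NODE_SLOT else 0.0

# Averaged over one period, the node sees 1/NUM_SLOTS of the raw link capacity.
period_ms = NUM_SLOTS * SLOT_LEN_MS
samples = [available_rate(t * 0.5) for t in range(int(period_ms * 2))]
print(f"average available rate: {sum(samples) / len(samples):.1f} Mbps")
```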

2015-05-04
Pandey, A.K., Agrawal, C.P.  2014.  Analytical Network Process based model to estimate the quality of software components. Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on. :678–682.

Software components are software units designed to interact with other independently developed software components. These components are assembled by third parties into software applications. The success of the final software application largely depends upon the selection of appropriate and easy-to-fit components according to the needs of the customer. It is a primary requirement to evaluate the quality of components before using them in the final software application system. All quality characteristics may not be of the same significance for a particular software application in a specific domain. Therefore, it is necessary to identify only those characteristics/sub-characteristics which may have higher importance than the others. The Analytical Network Process (ANP) is used to solve decision problems where the attributes of decision parameters form dependency networks. The objective of this paper is to propose an ANP-based model to prioritize the characteristics/sub-characteristics of quality and to estimate a numeric value for software quality.
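A minimal sketch of the pairwise-comparison prioritization that underlies ANP/AHP-style methods; the paper's full ANP model with dependency networks is not reproduced here, and the comparison matrix and component ratings below are hypothetical.

```python
# Illustrative only: derive weights for quality characteristics from a pairwise
# comparison matrix via its principal eigenvector (the basic AHP priority step,
# simpler than the paper's ANP model), then score a component.
import numpy as np

# Hypothetical pairwise comparisons among three characteristics:
# reliability, usability, maintainability (Saaty 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()                           # priority vector

component_scores = np.array([0.8, 0.6, 0.7])    # hypothetical ratings per characteristic
quality = float(weights @ component_scores)
print(f"weights: {np.round(weights, 3)}, estimated quality: {quality:.3f}")
```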