SoS Newsletter - Advanced Book Block

Data Race Vulnerabilities

2015

 

A race condition is a flaw that occurs when the timing or ordering of events affects a program’s correctness. A data race happens when two threads concurrently access the same memory location and at least one of the accesses is a write. For the Science of Security, data races may impact compositionality. The research work cited here was presented in 2015.



D. Last, “Using Historical Software Vulnerability Data to Forecast Future Vulnerabilities,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-7. doi: 10.1109/RWEEK.2015.7287429
Abstract: The field of network and computer security is a never-ending race with attackers, trying to identify and patch software vulnerabilities before they can be exploited. In this ongoing conflict, it would be quite useful to be able to predict when and where the next software vulnerability would appear. The research presented in this paper is the first step towards a capability for forecasting vulnerability discovery rates for individual software packages. This first step involves creating forecast models for vulnerability rates at the global level, as well as the category (web browser, operating system, and video player) level. These models will later be used as a factor in the predictive models for individual software packages. A number of regression models are fit to historical vulnerability data from the National Vulnerability Database (NVD) to identify historical trends in vulnerability discovery. Then, k-NN classification is used in conjunction with several time series distance measurements to select the appropriate regression models for a forecast. 68% and 95% confidence bounds are generated around the actual forecast to provide a margin of error. Experimentation using this method on the NVD data demonstrates the accuracy of these forecasts, as well as the accuracy of the confidence bounds forecasts. Analysis of these results indicates which time series distance measures produce the best vulnerability discovery forecasts.
Keywords: pattern classification; regression analysis; security of data; software packages; time series; computer security; k-NN classification; regression model; software package; software vulnerability data; time series distance measure; vulnerability forecasting; Accuracy; Market research; Predictive models; Software packages; Time series analysis; Training; cybersecurity; vulnerability discovery model; vulnerability prediction (ID#: 16-11192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287429&isnumber=7287407
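The forecasting pipeline above is straightforward to sketch: fit several candidate regression models to a historical vulnerability-count series, rank them k-NN style by a time-series distance, and derive confidence bounds from residuals. A minimal Python illustration follows; the toy data, the Euclidean distance, and all function names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def fit_models(t, y):
    """Fit candidate regression models (here: polynomials of degree 1-3)
    to a historical vulnerability-count series."""
    models = {}
    for name, deg in [("linear", 1), ("quadratic", 2), ("cubic", 3)]:
        coeffs = np.polyfit(t, y, deg)
        models[name] = (np.polyval(coeffs, t), coeffs)
    return models

def knn_select(models, y, k=1):
    """Rank models by Euclidean distance between the fitted curve and the
    actual series; the paper compares several such time-series distances."""
    ranked = sorted(models, key=lambda n: np.linalg.norm(models[n][0] - y))
    return ranked[:k]

# Toy monthly vulnerability counts (invented, not NVD data).
rng = np.random.default_rng(0)
t = np.arange(24)
y = 5 + 0.8 * t + rng.normal(0, 2, 24)

models = fit_models(t, y)
best = knn_select(models, y)[0]
fit, coeffs = models[best]
future = np.arange(24, 30)
forecast = np.polyval(coeffs, future)
sigma = np.std(y - fit)          # in-sample residual spread
print(best, forecast.round(1))
print("95% bounds:", (forecast - 1.96 * sigma).round(1), (forecast + 1.96 * sigma).round(1))
```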

 

F. Schuster, T. Tendyck, C. Liebchen, L. Davi, A.-R. Sadeghi and T. Holz, “Counterfeit Object-Oriented Programming: On the Difficulty of Preventing Code Reuse Attacks in C++ Applications,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 745-762. doi: 10.1109/SP.2015.51
Abstract: Code reuse attacks such as return-oriented programming (ROP) have become prevalent techniques to exploit memory corruption vulnerabilities in software programs. A variety of corresponding defenses has been proposed, of which some have already been successfully bypassed -- and the arms race continues. In this paper, we perform a systematic assessment of recently proposed CFI solutions and other defenses against code reuse attacks in the context of C++. We demonstrate that many of these defenses that do not consider object-oriented C++ semantics precisely can be generically bypassed in practice. Our novel attack technique, denoted as counterfeit object-oriented programming (COOP), induces malicious program behavior by only invoking chains of existing C++ virtual functions in a program through corresponding existing call sites. COOP is Turing complete in realistic attack scenarios and we show its viability by developing sophisticated, real-world exploits for Internet Explorer 10 on Windows and Firefox 36 on Linux. Moreover, we show that even recently proposed defenses (CPS, T-VIP, vfGuard, and VTint) that specifically target C++ are vulnerable to COOP. We observe that constructing defenses resilient to COOP that do not require access to source code seems to be challenging. We believe that our investigation and results are helpful contributions to the design and implementation of future defenses against control flow hijacking attacks.
Keywords: C++ language; Turing machines; object-oriented programming; security of data; C++ applications; C++ virtual functions; CFI solutions; COOP; CPS; Firefox 36; Internet Explorer 10; Linux; ROP; T-VIP; Turing complete; VTint; Windows; code reuse attack prevention; code reuse attacks; control flow hijacking attacks; counterfeit object-oriented programming; malicious program behavior; memory corruption vulnerabilities; return-oriented programming; software programs; source code; vfGuard; Aerospace electronics; Arrays; Layout; Object oriented programming; Runtime; Semantics; C++; CFI; ROP; code reuse attacks (ID#: 16-11193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163058&isnumber=7163005

 

Z. Wu, K. Lu and X. Wang, “Efficiently Trigger Data Races Through Speculative Execution,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 90-95. doi: 10.1109/HPCC-CSS-ICESS.2015.57
Abstract: Harmful data races hidden in concurrent programs are hard to detect due to non-determinism. Many race detectors report a large number of benign data races. To detect harmful data races automatically, previous tools dynamically execute the program and actively insert delay to create a real race condition, checking whether a failure occurs due to the race. If so, a harmful race is detected. However, performance may suffer due to the inserted delay. We use speculative execution to alleviate this problem. Unlike previous tools that suspend one thread's memory access to wait for another thread's memory access, we continue to execute this thread's memory accesses and do not suspend the thread until it is about to execute a memory access that may change the effect of the race. Therefore, a real race condition is created with less delay or even no delay. To our knowledge, this is the first technique that can trigger data races by speculative execution. The speculative execution does not affect the detection of harmful races. We have implemented a prototype tool and experimented on some real-world programs. Results show that our tool can detect harmful races effectively. With speculative execution, performance is improved significantly.
Keywords: concurrency control; parallel programming; program compilers; security of data; concurrent programs; data race detection; dynamic program execution; nondeterminism; race detectors; speculative execution; thread memory access; Concurrent computing; Delays; Instruction sets; Instruments; Message systems; Programming; Relays; concurrent program; dynamic analysis; harmful data race (ID#: 16-11194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336149&isnumber=7336120
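The baseline technique this paper improves on, actively inserting delay at a suspected racy access to widen the race window, can be sketched in a few lines. The bank-balance example and all names below are illustrative assumptions, not the authors' tool.

```python
import threading, time

balance = 100
reached_racy_read = threading.Event()

def withdraw(amount, inject_delay=False):
    global balance
    local = balance              # racy read
    if inject_delay:
        reached_racy_read.set()
        time.sleep(0.05)         # actively widen the race window here
    balance = local - amount     # racy write

t1 = threading.Thread(target=withdraw, args=(30, True))
t2 = threading.Thread(target=withdraw, args=(20,))
t1.start(); reached_racy_read.wait(); t2.start()
t1.join(); t2.join()

# Without the delay the lost update is rare; with it, the failure shows up
# almost every run: balance ends at 70 instead of the correct 50.
print("balance =", balance)
```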

 

J. Adebayo and L. Kagal, “A Privacy Protection Procedure for Large Scale Individual Level Data,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 120-125. doi: 10.1109/ISI.2015.7165950
Abstract: We present a transformation procedure for large scale individual level data that produces output data in which no linear combinations of the resulting attributes can yield the original sensitive attributes from the transformed data. In doing this, our procedure eliminates all linear information regarding a sensitive attribute from the input data. The algorithm combines principal components analysis of the data set with orthogonal projection onto the subspace containing the sensitive attribute(s). The algorithm presented is motivated by applications where there is a need to drastically 'sanitize' a data set of all information relating to sensitive attribute(s) before analysis of the data using a data mining algorithm. Sensitive attribute removal (sanitization) is often needed to prevent disparate impact and discrimination on the basis of race, gender, and sexual orientation in high stakes contexts such as determination of access to loans, credit, employment, and insurance. We show through experiments that our proposed algorithm outperforms other privacy preserving techniques by more than 20 percent in lowering the ability to reconstruct sensitive attributes from large scale data.
Keywords: data analysis; data mining; data privacy; principal component analysis; data mining algorithm; large scale individual level data; orthogonal projection; principal component analysis; privacy protection procedure; sanitization; sensitive attribute removal; Data privacy; Loans and mortgages; Noise; Prediction algorithms; Principal component analysis; Privacy; PCA; privacy preserving (ID#: 16-11195)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165950&isnumber=7165923
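The projection step at the heart of this procedure is simple linear algebra: remove from each centred feature column its component along the sensitive attribute vector, so that no linear combination of the sanitized columns can recover it. A minimal sketch follows; it covers only the projection step, not the PCA stage, and the toy data are invented.

```python
import numpy as np

def sanitize(X, s):
    """Project the columns of data matrix X (n, d) onto the orthogonal
    complement of the sensitive attribute vector s (n,). Every column of
    the result is uncorrelated with s, so no linear combination of the
    output attributes can reconstruct s."""
    s = s - s.mean()                       # centre so 'linear' includes an intercept
    Xc = X - X.mean(axis=0)
    proj = np.outer(s, s @ Xc) / (s @ s)   # component of each column along s
    return Xc - proj

rng = np.random.default_rng(1)
s = rng.integers(0, 2, 200).astype(float)         # sensitive binary attribute
X = rng.normal(size=(200, 5)) + 0.7 * s[:, None]  # features that leak s linearly

X_clean = sanitize(X, s)
w, *_ = np.linalg.lstsq(X_clean, s - s.mean(), rcond=None)
print(np.linalg.norm(X_clean @ w))                 # ~0: best linear reconstruction is trivial
print(np.abs(X_clean.T @ (s - s.mean())).max())    # ~0: zero correlation with s
```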

 

J. García, “Broadband Connected Aircraft Security,” 2015 Integrated Communication, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 2015, pp. 1-23. doi: 10.1109/ICNSURV.2015.7121291
Abstract: There is an inter-company race among service providers to offer the highest speed connections and services to the passenger. With some providers offering up to 50 Mbps per aircraft and global coverage, traditional data links between aircraft and ground are becoming obsolete.
Keywords: (not provided) (ID#: 16-11197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7121291&isnumber=7121207

 

D. H. Summerville, K. M. Zach and Y. Chen, “Ultra-Lightweight Deep Packet Anomaly Detection for Internet of Things Devices,” 2015 IEEE 34th International Performance Computing and Communications Conference (IPCCC), Nanjing, 2015, pp. 1-8. doi: 10.1109/PCCC.2015.7410342
Abstract: As we race toward the Internet of Things (IoT), small embedded devices are increasingly becoming network-enabled. Often, these devices can't meet the computational requirements of current intrusion prevention mechanisms, or designers prioritize additional features and services over security; as a result, many IoT devices are vulnerable to attack. We have developed an ultra-lightweight deep packet anomaly detection approach that is feasible to run on resource-constrained IoT devices yet provides good discrimination between normal and abnormal payloads. Feature selection uses efficient bit-pattern matching, requiring only a bitwise AND operation followed by a conditional counter increment. The discrimination function is implemented as a lookup table, allowing both fast evaluation and flexible feature space representation. Due to its simplicity, the approach can be efficiently implemented in either hardware or software and can be deployed in network appliances, interfaces, or in the protocol stack of a device. We demonstrate near perfect payload discrimination for data captured from off-the-shelf IoT devices.
Keywords: Internet of Things; feature selection; security of data; table lookup; Internet of Things devices; IoT devices; bit-pattern matching; bitwise AND operation; conditional counter increment; lookup-table; ultra-lightweight deep packet anomaly detection approach; Computational complexity; Detectors; Feature extraction; Hardware; Hidden Markov models; Payloads; Performance evaluation; network anomaly detection (ID#: 16-11198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7410342&isnumber=7410258
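The feature extraction described above (a bitwise AND followed by a conditional counter increment, classified by a lookup table) can be sketched compactly. The masks, binning scheme, and payloads below are illustrative assumptions, not the paper's trained detector.

```python
def count_features(payload: bytes, masks):
    """One counter per mask: a bitwise AND plus a conditional increment,
    the ultra-lightweight feature extraction the abstract describes."""
    counters = [0] * len(masks)
    for b in payload:
        for i, m in enumerate(masks):
            if b & m == m:
                counters[i] += 1
    return counters

def to_key(counters, length):
    """Coarsely bin the normalised counters so they index a small lookup table."""
    return tuple(min(c * 8 // max(length, 1), 7) for c in counters)

masks = [0x80, 0x20, 0x0F]        # illustrative bit patterns
normal = b"GET /sensor/temp HTTP/1.1\r\n"

lut = {to_key(count_features(normal, masks), len(normal)): "normal"}

def classify(payload):
    key = to_key(count_features(payload, masks), len(payload))
    return lut.get(key, "anomalous")   # unseen counter profile -> anomalous

print(classify(normal))                          # normal
print(classify(b"\x90\x90\x90\x90\x90\x90"))     # anomalous (high-bit-heavy payload)
```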

 

B. M. Bhatti and N. Sami, “Building Adaptive Defense Against Cybercrimes Using Real-Time Data Mining,” Anti-Cybercrime (ICACC), 2015 First International Conference on, Riyadh, 2015, pp. 1-5. doi: 10.1109/Anti-Cybercrime.2015.7351949
Abstract: In today's fast-changing world, cybercrimes are growing at a perturbing pace. By its very definition, cybercrime is engendered by capitalizing on threats and the exploitation of vulnerabilities. However, recent history reveals that such crimes often come with surprises and seldom follow the trends. This puts defense systems behind in the race, because of their inability to identify new patterns of cybercrime and to improve to the required levels of security. This paper visualizes the empowerment of security systems through real-time data mining, by virtue of which these systems will be able to dynamically identify patterns of cybercrimes. This will help those security systems step up their defense capabilities while adapting to the required levels posed by newly germinating patterns. To stay within the scope of this paper, the application of this approach is discussed in the context of selected cybercrime scenarios.
Keywords: computer crime; data mining; perturbation techniques; adaptive cybercrime defense system; real-time data mining; security systems; vulnerability exploitation; Computer crime; Data mining; Engines; Internet; Intrusion detection; Real-time systems; Cybercrime; Cybercrime Pattern Recognition (CPR); Information Security; Real-time Data Mining Engine (RTDME); Real-time Security Protocol (RTSP); Realtime Data Mining; TPAC (Threat Prevention & Adaptation Code); Threat Prevention and Response Algorithm Generator (TPRAG) (ID#: 16-11199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351949&isnumber=7351910

 

K. V. Muhongya and M. S. Maharaj, “Visualising and Analysing Online Social Networks,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-6.  doi: 10.1109/CCCS.2015.7374121
Abstract: The immense popularity of online social networks generates sufficient data that, when carefully analysed, can reveal unexpected realities. People are using them to establish relationships in the form of friendships. Based on the data collected, students' networks were extracted, visualized, and analysed with Gephi to reflect the connections among South African communities. The analysis revealed slow progress in connections among communities from different ethnic groups in South Africa. This was facilitated through analysis of data collected through Netvizz, as well as by using Gephi to visualize social media network structures.
Keywords: data visualisation; social networking (online); Gephi; South African communities; analysing online social networks; student network; visualising online social networks; visualize social media network structures; Business; Data visualization; Facebook; Image color analysis; Joining processes; Media; Gephi; Online social network; betweeness centrality; closeness centrality; graph; race; visualization (ID#: 16-11200)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374121&isnumber=7374113

 

W. A. R. d. Souza and A. Tomlinson, “SMM Revolutions,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1466-1472. doi: 10.1109/HPCC-CSS-ICESS.2015.278
Abstract: The System Management Mode (SMM) is a highly privileged processor operating mode in x86 platforms. The goal of the SMM is to perform system management functions, such as hardware control and power management. Because of this, SMM has powerful resources. Moreover, its executive software executes unnoticed by any other component in the system, including operating systems and hypervisors. For that reason, SMM has been exploited in the past to facilitate attacks and misuse, or, alternatively, to build security tools capitalising on its resources. In this paper, we discuss how the use of the SMM has been contributing to the arms race between attackers and defenders of systems. We analyse the main published work on attacks, misuse, and security tools implemented in the SMM, and how the SMM has been modified to respond to those issues. Finally, we discuss how Intel Software Guard Extensions (SGX) technology, a sort of “hypervisor in the processor”, presents a possible answer to the issue of using the SMM for security purposes.
Keywords: operating systems (computers); security of data; virtualisation; Intel Software Guard Extensions technology; SGX technology; SMM; hardware control; hypervisor; hypervisors; operating systems; power management; processor operating mode; system attackers; system defenders; system management mode; Hardware; Operating systems; Process control; Registers; Security; Virtual machine monitors; PC architecture; SGX; SMM; security; virtualization (ID#: 16-11201)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336375&isnumber=7336120

 

S. Pietrowicz, B. Falchuk, A. Kolarov and A. Naidu, “Web-Based Smart Grid Network Analytics Framework,” Information Reuse and Integration (IRI), 2015 IEEE International Conference on, San Francisco, CA, 2015, pp. 496-501. doi: 10.1109/IRI.2015.82
Abstract: As utilities across the globe continue to deploy Smart Grid technology, there is an immediate and growing need for analytics, diagnostics and forensics tools akin to those commonly employed in enterprise IP networks to provide visibility and situational awareness into the operation, security and performance of Smart Energy Networks. Large-scale Smart Grid deployments have raced ahead of mature management tools, leaving gaps and challenges for operators and asset owners. Proprietary Smart Grid solutions have added to the challenge. This paper reports on the research and development of a new vendor-neutral, packet-based, network analytics tool called MeshView that abstracts information about system operation from low-level packet detail and visualizes endpoint and network behavior of wireless Advanced Metering Infrastructure, Distribution Automation, and SCADA field networks. Using real utility use cases, we report on the challenges and resulting solutions in the software design, development and Web usability of the framework, which is currently in use by several utilities.
Keywords: Internet; power engineering computing; smart power grids; software engineering; Internet protocols; MeshView tool; SCADA field network; Web usability; Web-based smart grid network analytics framework; distribution automation; enterprise IP networks; smart energy networks; smart grid technology; software design; software development; wireless advanced metering infrastructure; Conferences; Advanced Meter Infrastructure; Big data visualization; Cybersecurity; Field Area Networks; Network Analytics; Smart Energy; Smart Grid; System scalability; Web management (ID#: 16-11202)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301018&isnumber=7300933

 

M. Phillips, B. M. Knoppers and Y. Joly, “Seeking a ‘Race to the Top’ in Genomic Cloud Privacy?,” Security and Privacy Workshops (SPW), 2015 IEEE, San Jose, CA, 2015, pp. 65-69. doi: 10.1109/SPW.2015.26
Abstract: The relationship between data-privacy lawmakers and genomics researchers may have gotten off on the wrong foot. Critics of protectionism in the current laws advocate that we abandon the existing paradigm, which was formulated in an entirely different medical research context. Genomic research no longer requires physically risky interventions that directly affect participants' integrity. But to simply strip away these protections for the benefit of research projects neglects not only new concerns about data privacy, but also broader interests that research participants have in the research process. Protectionism and privacy should not be treated as unwelcome anachronisms. We should instead seek to develop an updated, positive framework for data privacy and participant participation and collective autonomy. It is beginning to become possible to imagine this new framework, by reflecting on new developments in genomics and bioinformatics, such as secure remote processing, data commons, and health data co-operatives.
Keywords: bioinformatics; cloud computing; data privacy; genomics; security of data; collective autonomy; data commons; genomic cloud privacy; genomics research; health data cooperatives; medical research; participant participation; protectionism; secure remote processing; Bioinformatics; Cloud computing; Context; Data privacy; Genomics; Law; Privacy; data protection; health data co-operatives; privacy (ID#: 16-11203)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163210&isnumber=7163193

 

N. Koutsopoulos, M. Northover, T. Felden and M. Wittiger, “Advancing Data Race Investigation and Classification Through Visualization,” Software Visualization (VISSOFT), 2015 IEEE 3rd Working Conference on, Bremen, 2015, pp. 200-204. doi: 10.1109/VISSOFT.2015.7332437
Abstract: Data races in multi-threaded programs are a common source of serious software failures. Their undefined behavior may lead to intermittent failures with unforeseeable, and in embedded systems, even life-threatening consequences. To mitigate these risks, various detection tools have been created to help identify potential data races. However, these tools produce thousands of data race warnings, often in text-based format, which makes the manual assessment process slow and error-prone. Through visualization, we aim to speed up the data race assessment process by reducing the amount of information to be investigated, and to provide a versatile interface that quality assurance engineers can use to investigate data race warnings. The ultimate goal of our integrated software suite, called RaceView, is to improve the usability of the data race information to such an extent that the elimination of data races can be incorporated into the regular software development process.
Keywords: data visualisation; multi-threading; pattern classification; program diagnostics; software quality; RaceView; data race assessment process; data race classification; data race elimination; data race information usability; data race warnings; integrated software suite; interface; multithreaded programs; quality assurance engineers; software development process; visualization; Data visualization; Instruction sets; Manuals; Merging; Navigation; Radiation detectors; data race detection; graph navigation; graph visualization; static analysis; user interface (ID#: 16-11204)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332437&isnumber=7332403

 

J. Schimmel, K. Molitorisz, A. Jannesari and W. F. Tichy, “Combining Unit Tests for Data Race Detection,” Automation of Software Test (AST), 2015 IEEE/ACM 10th International Workshop on, Florence, 2015, pp. 43-47. doi: 10.1109/AST.2015.16
Abstract: Multithreaded programs are subject to data races. Data race detectors find such defects by static or dynamic inspection of the program. Current race detectors suffer from high numbers of false positives, slowdown, and false negatives. Because of these disadvantages, recent approaches reduce the false positive rate and the runtime overhead by applying race detection only to a subset of the whole program. To achieve this, they make use of parallel test cases, but this has other disadvantages: parallel test cases have to be engineered manually, must cover code regions that are affected by data races, and must execute with input data that provoke the data races. This paper introduces an approach that does not need additional parallel test cases to be engineered. Instead, we take conventional unit tests as input and automatically generate parallel test cases, execution contexts, and input data. As can be observed, most real-world software projects nowadays have high test coverage, so a large information base as input for our approach is already available. We analyze and reuse input data, initialization code, and mock objects that conventional unit tests already contain. With this information, no further oracles are necessary for generating parallel test cases. Instead, we reuse the knowledge that is already implicitly available in conventional unit tests. We implemented our parallel test case generation strategy in a tool called TestMerge. To evaluate these test cases we used them as input for the dynamic race detector CHESS, which explores all possible thread interleavings for a given program. We evaluated TestMerge using six sample programs and one industrial application with a high test case coverage of over 94%. For this benchmark, TestMerge identified all previously known data races and even revealed previously unknown ones.
Keywords: multi-threading; program testing; CHESS; TestMerge; data race detectors; dynamic race detector; multithreaded programs; parallel test case generation; thread interleavings; unit tests; Computer bugs; Context; Customer relationship management; Detectors; Schedules; Software; Testing; Data Races; Multicore Software Engineering; Unit Testing (ID#: 16-11205)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166265&isnumber=7166248
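The core idea, reusing conventional unit tests as the threads of a generated parallel test case, can be sketched as follows. The merge_tests helper, the Counter fixture, and the injected sleep are illustrative assumptions, not TestMerge itself.

```python
import threading, time

def merge_tests(test_a, test_b, fixture_factory, postcondition):
    """Run two conventional unit tests concurrently against one shared
    fixture (a generated parallel test case) and check a combined
    postcondition afterwards."""
    obj = fixture_factory()
    workers = [threading.Thread(target=t, args=(obj,)) for t in (test_a, test_b)]
    for w in workers: w.start()
    for w in workers: w.join()
    assert postcondition(obj), f"race exposed: unexpected final state {obj.n}"

class Counter:
    def __init__(self): self.n = 0
    def incr(self):
        n = self.n
        time.sleep(0.01)   # widen the racy read-modify-write window
        self.n = n + 1

# Two ordinary unit tests, each correct when run alone:
def test_incr_once(c):  c.incr()
def test_incr_twice(c): c.incr(); c.incr()

try:
    merge_tests(test_incr_once, test_incr_twice, Counter, lambda c: c.n == 3)
except AssertionError as e:
    print(e)   # the merged parallel case exposes the lost update (n == 2)
```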

 

S. W. Park, O. K. Ha and Y. K. Jun, “A Loop Filtering Technique for Reducing Time Overhead of Dynamic Data Race Detection,” 2015 8th International Conference on Database Theory and Application (DTA), Jeju, 2015, pp. 29-32. doi: 10.1109/DTA.2015.18
Abstract: Data races are the hardest defects to handle in multithreaded programs due to the nondeterministic interleaving of concurrent threads. The main drawback of data race detection using dynamic techniques is the additional overhead of monitoring program execution and analyzing every conflicting memory operation. It is therefore important to reduce this additional overhead when debugging data races. This paper presents a loop filtering technique that excludes repeatedly executed loop regions from the monitoring targets in multithreaded programs. Empirical results using multithreaded programs show that the filtering technique reduces the average runtime overhead to 60% of that of dynamic data race detection.
Keywords: concurrency (computers); monitoring; multi-threading; program debugging; concurrent threads; data races debugging; dynamic data race detection; dynamic techniques; loop filtering technique; monitoring program execution; multithread programs; nondeterministic interleaving; Databases; Filtering; Monitoring; Performance analysis; Runtime; Multithread programs; data race detection; dynamic analysis; filtering; runtime overheads (ID#: 16-11207)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7433734&isnumber=7433698
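The filtering idea lends itself to a short sketch: count how often each static access site executes per thread and stop forwarding events to the race detector after the first few loop iterations. The monitor interface below is hypothetical, not the paper's implementation.

```python
from collections import defaultdict

class FilteringMonitor:
    """Forward a memory access to the (expensive) race detector only the
    first few times its static site executes; later iterations of the same
    loop body are filtered out, cutting the monitoring overhead."""
    def __init__(self, detector, limit=2):
        self.detector = detector
        self.counts = defaultdict(int)   # (thread_id, site) -> executions seen
        self.limit = limit

    def access(self, thread_id, site, addr, is_write):
        key = (thread_id, site)
        self.counts[key] += 1
        if self.counts[key] <= self.limit:   # repeated loop executions skipped
            self.detector(thread_id, site, addr, is_write)

seen = []
monitor = FilteringMonitor(lambda *ev: seen.append(ev))
for i in range(1000):    # a hot loop: site "L1" executes 1000 times
    monitor.access(thread_id=1, site="L1", addr=0x1000 + i, is_write=True)
print(len(seen))         # 2 events analysed instead of 1000
```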

 

C. Jia, C. Yang and W. K. Chan, “Architecturing Dynamic Data Race Detection as a Cloud-Based Service,” Web Services (ICWS), 2015 IEEE International Conference on, New York, NY, 2015, pp. 345-352. doi: 10.1109/ICWS.2015.54
Abstract: A web-based service consists of layers of programs (components) in the technology stack. Analyzing program executions of these components separately allows service vendors to acquire insights into specific program behaviors or problems in these components, thereby pinpointing areas of improvement in their service offerings. Many existing approaches to testing as a service take an orchestration approach that splits components under test and the analysis services into a set of distributed modules communicating through message-based approaches. In this paper, we present the first work providing dynamic analysis as a service using a virtual machine (VM)-based approach to dynamic data race detection. Such detection needs to track a huge number of events performed by each thread of a program execution of a service component, making it impractical to transmit the huge number of events individually via message passing. In our model, we instruct VMs to perform holistic dynamic race detection on service components and only transfer the detection results to our service selection component. With such result data as guidance, the service selection component accordingly selects VM instances to fulfill subsequent analysis requests. The experimental results show that our model is feasible.
Keywords: Web services; cloud computing; program diagnostics; virtual machines; VM-based approach; Web-based service; cloud-based service; dynamic analysis-as-a-service; dynamic data race detection; message-based approach; orchestration approach; program behavior; program execution analysis; program execution thread; virtual machine; Analytical models; Clocks; Detectors; Instruction sets; Optimization; Performance analysis; cloud-based usage model; data race detection; dynamic analysis; service engineering; service selection strategy (ID#: 16-11208)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195588&isnumber=7195533

 

K. Shankari and N. G. B. Amma, “Clasp: Detecting Potential Deadlocks and Its Removal by Iterative Method,” 2015 Online International Conference on Green Engineering and Technologies (IC-GET), Coimbatore, 2015, pp. 1-5. doi: 10.1109/GET.2015.7453824
Abstract: In a multithreaded program there is a possibility of deadlocks at runtime, which prevent the program from producing its required output, so eliminating deadlocks is necessary for the process to succeed. The proposed system actively eliminates the dependencies that are removable, enabling potential deadlock localization, and it works iteratively: in each iteration it detects lock dependencies, identifies a possible deadlock, and then confirms it. This is achieved by finding the lock dependencies, dividing them into partitions, validating the thread-specific partitions, and then searching the dependencies again iteratively to eliminate them. In this way the bugs in the multithreaded program can be traced. When a data race is identified, it is isolated and then removed; the bug is removed by using a scheduler, which can increase the execution time of the code. By iterating this process the code can be made free of bugs and deadlocks. The approach can be applied to real-world problems to detect the causes of deadlocks.
Keywords: concurrency control; iterative methods; multi-threading; program debugging; system recovery; Clasp; bugs; code execution time; data race; deadlock removal; iterative method; multithreaded program; potential deadlock detection; thread specific partitions; Algorithm design and analysis; Clocks; Computer bugs; Heuristic algorithms; Instruction sets; Synchronization; System recovery; data races; deadlock; lock dependencies; multi threaded code (ID#: 16-11209)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7453824&isnumber=7453764
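Lock-dependency analysis of the kind described above is classically done by building a lock-order graph (an edge H -> A when a thread acquires lock A while holding H) and searching it for cycles. A sketch under that standard formulation follows; it is not the paper's exact algorithm, and the traces are invented.

```python
from collections import defaultdict

def lock_order_edges(traces):
    """traces: per-thread lists of (held_locks, acquired_lock) events.
    An edge H -> A means some thread acquired A while holding H;
    a cycle in this graph is a potential deadlock."""
    graph = defaultdict(set)
    for trace in traces:
        for held, acquired in trace:
            for h in held:
                graph[h].add(acquired)
    return graph

def find_cycle(graph):
    """Plain DFS cycle detection over the lock-order graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u, path):
        color[u] = GREY
        for v in graph[u]:
            if color[v] == GREY:
                return path + [u, v]
            if color[v] == WHITE:
                c = dfs(v, path + [u])
                if c: return c
        color[u] = BLACK
        return None
    for u in list(graph):
        if color[u] == WHITE:
            c = dfs(u, [])
            if c: return c
    return None

# Thread 1 takes A then B; thread 2 takes B then A: classic potential deadlock.
t1 = [((), "A"), (("A",), "B")]
t2 = [((), "B"), (("B",), "A")]
print(find_cycle(lock_order_edges([t1, t2])))   # e.g. ['A', 'B', 'A']
```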

 

J. R. Wilcox, P. Finch, C. Flanagan and S. N. Freund, “Array Shadow State Compression for Precise Dynamic Race Detection (T),” Automated Software Engineering (ASE), 2015 30th IEEE/ACM International Conference on, Lincoln, NE, 2015, pp. 155-165. doi: 10.1109/ASE.2015.19
Abstract: Precise dynamic race detectors incur significant time and space overheads, particularly for array-intensive programs, due to the need to store and manipulate analysis (or shadow) state for every element of every array. This paper presents SlimState, a precise dynamic race detector that uses an adaptive, online algorithm to optimize array shadow state representations. SlimState is based on the insight that common array access patterns lead to analogous patterns in array shadow state, enabling optimized, space efficient representations of array shadow state with no loss in precision. We have implemented SlimState for Java. Experiments on a variety of benchmarks show that array shadow compression reduces the space and time overhead of race detection by 27% and 9%, respectively. It is particularly effective for array-intensive programs, reducing space and time overheads by 35% and 17%, respectively, on these programs.
Keywords: Java; program testing; system monitoring; Java; SLIMSTATE; adaptive online algorithm; analogous patterns; array access patterns; array shadow state compression; array shadow state representations; array-intensive programs; precise dynamic race detection; space efficient representations; space overhead; time overhead; Arrays; Clocks; Detectors; Heuristic algorithms; Instruction sets; Java; Synchronization; concurrency; data race detection; dynamic analysis (ID#: 16-11210)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372005&isnumber=7371976
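The paper's insight, that uniformly accessed arrays need only one shared shadow cell until some element's analysis state diverges, can be sketched with a simple adaptive representation. The ArrayShadow class below is an illustrative assumption, not SlimState's actual encoding.

```python
class ArrayShadow:
    """Adaptive shadow state for one array: start with a single compressed
    cell shared by all elements, and inflate to per-element cells only when
    an update makes some element's state diverge from the shared one."""
    def __init__(self, length, initial_state):
        self.length = length
        self.shared = initial_state   # compressed representation
        self.cells = None             # inflated representation (lazy)

    def update(self, index, new_state):
        if self.cells is None:
            if new_state == self.shared:
                return                                    # still uniform: O(1) space
            self.cells = [self.shared] * self.length     # inflate on divergence
        self.cells[index] = new_state

    def get(self, index):
        return self.shared if self.cells is None else self.cells[index]

sh = ArrayShadow(1_000_000, initial_state=("epoch", 0))
for i in range(1_000_000):
    sh.update(i, ("epoch", 0))   # uniform init loop: stays compressed
print(sh.cells is None)           # True: one cell instead of a million
sh.update(42, ("epoch", 3))       # first divergent access inflates
print(sh.get(42), sh.get(43))
```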

 

A. S. Rajam, L. E. Campostrini, J. M. M. Caamaño and P. Clauss, “Speculative Runtime Parallelization of Loop Nests: Towards Greater Scope and Efficiency,” Parallel and Distributed Processing Symposium Workshop (IPDPSW), 2015 IEEE International, Hyderabad, 2015, pp. 245-254. doi: 10.1109/IPDPSW.2015.10
Abstract: Runtime loop optimization and speculative execution are becoming more and more prominent to leverage performance in the current multi-core and many-core era. However, a wider and more efficient use of such techniques is mainly hampered by the prohibitive time overhead induced by centralized data race detection, dynamic code behavior modeling and code generation. Most of the existing Thread Level Speculation (TLS) systems rely on slicing the target loops into chunks and trying to execute the chunks in parallel with the help of a centralized performance-penalizing verification module that takes care of data races. Due to the lack of a data dependence model, these speculative systems are not capable of doing advanced transformations and, more importantly, the chances of rollback are high. The polytope model is a well-known mathematical model to analyze and optimize loop nests. The current state-of-the-art tools limit the application of the polytope model to static control codes. Thus, none of these tools can handle codes with while loops, indirect memory accesses or pointers. Apollo (Automatic Polyhedral Loop Optimizer) is a framework that goes one step beyond, and applies the polytope model dynamically by using TLS. Apollo can predict, at runtime, whether the codes are behaving linearly or not, and applies polyhedral transformations on-the-fly. This paper presents a novel system, which extends the capability of Apollo to handle codes whose memory accesses are not necessarily linear. More generally, this approach expands the applicability of the polytope model at runtime to a wider class of codes.
Keywords: multiprocessing systems; optimisation; parallel programming; program compilers; program verification; Apollo; TLS; automatic polyhedral loop optimizer; centralized data race detection; centralized performance-penalizing verification module; code generation; data dependence model; dynamic code behavior modeling; loop nests; many-core era; memory accesses; multicore era; polyhedral transformations; polytope model; prohibitive time overhead; runtime loop optimization; speculative execution; speculative runtime parallelization; static control codes; thread level speculation systems; Adaptation models; Analytical models; Mathematical model; Optimization; Predictive models; Runtime; Skeleton; Automatic parallelization; Polyhedral model; Thread level speculation; loop optimization; non affine accesses (ID#: 16-11211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284316&isnumber=7284273

 

C. Segulja and T. S. Abdelrahman, “Clean: A Race Detector with Cleaner Semantics,” 2015 ACM/IEEE 42nd Annual International Symposium on Computer Architecture (ISCA), Portland, OR, 2015, pp. 401-413. doi: 10.1145/2749469.2750395
Abstract: Data races make parallel programs hard to understand. Precise race detection that stops an execution on first occurrence of a race addresses this problem, but it comes with significant overhead. In this work, we exploit the insight that precisely detecting only write-after-write (WAW) and read-after-write (RAW) races suffices to provide cleaner semantics for racy programs. We demonstrate that stopping an execution only when these races occur ensures that synchronization-free-regions appear to be executed in isolation and that their writes appear atomic. Additionally, the undetected racy executions can be given certain deterministic guarantees with efficient mechanisms. We present CLEAN, a system that precisely detects WAW and RAW races and deterministically orders synchronization. We demonstrate that the combination of these two relatively inexpensive mechanisms provides cleaner semantics for racy programs. We evaluate both software-only and hardware-supported CLEAN. The software-only CLEAN runs all Pthread benchmarks from the SPLASH-2 and PARSEC suites with an average 7.8x slowdown. The overhead of precise WAW and RAW detection (5.8x) constitutes the majority of this slowdown. Simple hardware extensions reduce the slowdown of CLEAN's race detection to on average 10.4% and never more than 46.7%.
Keywords: parallel programming; programming language semantics; synchronisation; CLEAN system; PARSEC; Pthread benchmarks; RAW races; SPLASH-2; WAW races; cleaner semantics; data races; deterministic guarantees; hardware-supported CLEAN; parallel programs; race detection; race detector; racy executions; racy programs; read-after-write races; software-only CLEAN; synchronization-free-regions; write-after-write races; Instruction sets; Switches; Synchronization (ID#: 16-11212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284082&isnumber=7284049
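A minimal happens-before checker that reports only WAW and RAW conflicts (deliberately ignoring write-after-read, per the insight above) can be sketched with vector clocks. This is a simplification for illustration; CLEAN's actual mechanisms, including its deterministic synchronization ordering, are more involved.

```python
class WawRawDetector:
    """Vector-clock detector that reports only WAW and RAW races: a write
    or a read racing with a previous write. Write-after-read conflicts are
    deliberately not checked."""
    def __init__(self, nthreads):
        self.vc = [[0] * nthreads for _ in range(nthreads)]  # per-thread clocks
        self.last_write = {}     # addr -> (thread, clock snapshot)

    def _hb(self, snap, t):      # did the snapshot happen-before thread t's now?
        return all(snap[i] <= self.vc[t][i] for i in range(len(snap)))

    def write(self, t, addr):
        self.vc[t][t] += 1
        if addr in self.last_write and not self._hb(self.last_write[addr][1], t):
            print(f"WAW race on {addr:#x} between T{self.last_write[addr][0]} and T{t}")
        self.last_write[addr] = (t, list(self.vc[t]))

    def read(self, t, addr):
        self.vc[t][t] += 1
        if addr in self.last_write and not self._hb(self.last_write[addr][1], t):
            print(f"RAW race on {addr:#x} between T{self.last_write[addr][0]} and T{t}")

    def release_acquire(self, t_from, t_to):   # e.g. unlock/lock, signal/wait
        self.vc[t_to] = [max(a, b) for a, b in zip(self.vc[t_to], self.vc[t_from])]

d = WawRawDetector(2)
d.write(0, 0x10)          # T0 writes x
d.read(1, 0x10)           # T1 reads x with no synchronisation -> RAW race
d.release_acquire(0, 1)   # synchronise T0 -> T1
d.write(1, 0x10)          # now ordered after T0's write -> no report
```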

 

P. Wang, D. J. Dean and X. Gu, “Understanding Real World Data Corruptions in Cloud Systems,” Cloud Engineering (IC2E), 2015 IEEE International Conference on, Tempe, AZ, 2015, pp. 116-125. doi: 10.1109/IC2E.2015.41
Abstract: Big data processing is one of the killer applications for cloud systems. MapReduce systems such as Hadoop are the most popular big data processing platforms used in the cloud system. Data corruption is one of the most critical problems in cloud data processing, which not only has a serious impact on the integrity of individual application results but also affects the performance and availability of the whole data processing system. In this paper, we present a comprehensive study of 138 real world data corruption incidents reported in Hadoop bug repositories. We characterize those data corruption problems in four aspects: (1) what impact can data corruption have on the application and system? (2) how is data corruption detected? (3) what are the causes of the data corruption? and (4) what problems can occur while attempting to handle data corruption? Our study has made the following findings: (1) the impact of data corruption is not limited to data integrity; (2) existing data corruption detection schemes are quite insufficient: only 25% of data corruption problems are correctly reported, 42% are silent data corruptions without any error message, and 21% receive imprecise error reports. We also found the detection system raised 12% false alarms; (3) there are various causes of data corruption such as improper runtime checking, race conditions, inconsistent block states, improper network failure handling, and improper node crash handling; and (4) existing data corruption handling mechanisms (i.e., data replication, replica deletion, simple re-execution) make frequent mistakes including replicating corrupted data blocks, deleting uncorrupted data blocks, or causing undesirable resource hogging.
Keywords: cloud computing; data handling; Hadoop; MapReduce systems; big data processing; cloud data processing; cloud systems; data corruption; data corruption problems; data integrity; improper network failure handling; improper node crash handling; inconsistent block states; race conditions; real world data corruptions; runtime checking; Availability; Computer bugs; Data processing; Radiation detectors; Software; Yarn (ID#: 16-11213)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092909&isnumber=7092808

 

J. Zarei, M. M. Arefi and H. Hassani, “Bearing Fault Detection Based on Interval Type-2 Fuzzy Logic Systems for Support Vector Machines,” Modeling, Simulation, and Applied Optimization (ICMSAO), 2015 6th International Conference on, Istanbul, 2015, pp. 1-6. doi: 10.1109/ICMSAO.2015.7152214
Abstract: A method based on Interval Type-2 Fuzzy Logic Systems (IT2FLSs) for combining different Support Vector Machines (SVMs) for bearing fault detection is the main contribution of this paper. For this purpose, an experimental setup was built to collect data samples of stator current phase a of an induction motor using healthy and defective bearings. The defective bearing has a 1-mm inner race hole created by a spark. An Interval Type-2 Fuzzy Fusion Model (IT2FFM) consisting of two phases is presented. Using this IT2FFM, testing data samples have been classified. A comparison among T1FFM, IT2FFM, SVMs, and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) in classifying the testing data samples has been done, and the results show the effectiveness of the proposed IT2FFM.
Keywords: electrical engineering computing; fault diagnosis; fuzzy logic; fuzzy neural nets; fuzzy reasoning; fuzzy set theory; induction motors; machine bearings; mechanical engineering computing; pattern classification; stators; support vector machines; ANFIS; IT2FFM; Interval Type-2 Fuzzy Fusion Model; SVM; T1FFM; adaptive neuro fuzzy inference systems; bearing fault detection; defective bearing; healthy bearing; induction motor; inner race hole; interval type-2 fuzzy logic systems; size 1 mm; stator current phase; support vector machines; testing data sample classification; Accuracy; Fault detection; Fuzzy logic; Fuzzy sets; Kernel; Support vector machines; Testing; Bearing; Fault Detection; Support Vector Machines; Type-2 fuzzy logic system (ID#: 16-11214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152214&isnumber=7152193

 

R. Z. Haddad, C. A. Lopez, J. Pons-Llinares, J. Antonino-Daviu and E. G. Strangas, “Outer Race Bearing Fault Detection in Induction Machines Using Stator Current Signals,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 801-808. doi: 10.1109/INDIN.2015.7281839
Abstract: This paper discusses the effect of the operating load as well as the suitability of combined startup and steady-state analysis for the detection of bearing faults in induction machines. Motor Current Signature Analysis and Linear Discriminant Analysis are used to detect and estimate the severity of an outer race bearing fault. The algorithm is based on using the machine stator current signals instead of the conventional vibration signals, which has the advantages of simplicity and low equipment cost. The machine stator current signals are analyzed during steady state and startup using the Fast Fourier Transform and the Short Time Fourier Transform. For steady-state operation, two main changes appear in the spectrum compared to the healthy case: firstly, new harmonics related to bearing faults are generated, and secondly, the amplitude of the grid harmonics changes with the degree of the fault. For startup signals, the energy of the current signal frequency within a specific frequency band related to the bearing fault increases with the fault severity. Linear Discriminant Analysis classification is used to detect a bearing fault and estimate its severity for different loads, using the amplitude of the grid harmonics as features for the classifier. Experimental data were collected from a 1.1 kW, 400 V, 50 Hz induction machine in healthy condition and with two severities of outer race bearing fault, at three different load levels: no load, 50% load, and 100% load.
Keywords: asynchronous machines; fast Fourier transforms; fault diagnosis; machine bearings; stators; bearing faults detection; fast Fourier transform; fault severity; grid harmonics amplitude; induction machines; linear discriminant analysis; machine stator current signals; motor current signature analysis; outer race bearing fault; short time Fourier transform; steady-state analysis; Fault detection; Harmonic analysis; Induction machines; Stators; Steady-state; Torque; Vibrations; Ball bearing; Fast Fourier Transform; Induction machine; Linear Discriminant Analysis; Outer race bearing fault; Short Time Fourier Transform (ID#: 16-11215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7281839&isnumber=7281697
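The steady-state detection step, looking for fault-related sideband harmonics in the spectrum of the stator current, can be sketched with synthetic data. The fault frequency, amplitudes, and band below are illustrative assumptions; the paper classifies such spectral features with Linear Discriminant Analysis rather than a fixed threshold.

```python
import numpy as np

fs, dur, f_grid = 10_000, 2.0, 50.0   # sample rate (Hz), duration (s), supply frequency
f_fault = 87.3                        # illustrative outer-race fault frequency (Hz)
t = np.arange(0, dur, 1 / fs)

def stator_current(faulty):
    rng = np.random.default_rng(0)
    i = np.sin(2 * np.pi * f_grid * t) + 0.02 * rng.normal(size=t.size)
    if faulty:
        # A bearing fault modulates the current, adding sidebands near f_grid +/- k*f_fault.
        i += 0.01 * np.sin(2 * np.pi * (f_grid + f_fault) * t)
    return i

def sideband_amplitude(i):
    spec = np.abs(np.fft.rfft(i)) / len(i)
    freqs = np.fft.rfftfreq(len(i), 1 / fs)
    band = (freqs > f_grid + f_fault - 2) & (freqs < f_grid + f_fault + 2)
    return spec[band].max()

print("healthy:", sideband_amplitude(stator_current(False)).round(5))
print("faulty: ", sideband_amplitude(stator_current(True)).round(5))
# The faulty spectrum shows a clearly larger sideband amplitude; such harmonic
# amplitudes would then be fed to a classifier.
```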

 

S. Saidi and Y. Falcone, “Dynamic Detection and Mitigation of DMA Races in MPSoCs,” Digital System Design (DSD), 2015 Euromicro Conference on, Funchal, 2015, pp. 267-270. doi: 10.1109/DSD.2015.77
Abstract: Explicitly managed memories have emerged as a good alternative in multicore processor design to reduce energy and performance costs. Memory transfers then rely on Direct Memory Access (DMA) engines, which provide hardware support for accelerating data transfers. However, programming explicit data transfers is very challenging for developers, who must manually orchestrate data movements through the memory hierarchy. This is in practice very error-prone and can easily lead to memory inconsistency. In this paper, we propose a runtime approach for monitoring DMA races. The monitor acts as a safeguard for programmers and is able to enforce at runtime a correct behavior w.r.t. the semantics of the program execution. We validate the approach using traces extracted from industrial benchmarks and executed on the multiprocessor system-on-chip platform STHORM. Our experiments demonstrate that the monitoring algorithm has a low on-chip memory overhead (less than 1.5 KB) and adds less than 2% of execution time.
Keywords: multiprocessing systems; storage management; system-on-chip; DMA engines; DMA races monitoring; MPSoC; STHORM; accelerating data; data movements; data transfers; direct memory access engines; dynamic detection and mitigation; energy reduction; hardware support; memories management; memory hierarchy; memory inconsistency; memory transfers; monitoring algorithm; multicore processors design; multiprocessor system-on-chip platform; on-chip memory consumption; performance costs; program execution semantics; runtime approach; Benchmark testing; Memory management; Monitoring; Program processors; Runtime; System-on-chip (ID#: 16-11216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302281&isnumber=7302233
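The monitor's essential check can be sketched as interval bookkeeping: record each in-flight DMA transfer's address range and flag any overlapping access that occurs before the corresponding wait. The API names below are hypothetical, not the paper's instrumentation.

```python
class DmaRaceMonitor:
    """Track in-flight DMA transfers as address intervals and flag any
    access that overlaps one before the transfer has been waited on."""
    def __init__(self):
        self.in_flight = {}   # tag -> (start, end, is_write)

    def dma_start(self, tag, start, size, is_write):
        self.in_flight[tag] = (start, start + size, is_write)

    def dma_wait(self, tag):
        self.in_flight.pop(tag, None)    # transfer completed and synchronised

    def access(self, start, size, is_write):
        for tag, (s, e, w) in self.in_flight.items():
            # Overlap is a race unless both sides only read.
            if start < e and s < start + size and (is_write or w):
                print(f"DMA race: access [{start:#x},{start + size:#x}) "
                      f"overlaps in-flight transfer '{tag}'")

m = DmaRaceMonitor()
m.dma_start("get_tile", 0x1000, 256, is_write=True)  # DMA writes into buffer
m.access(0x1080, 4, is_write=False)                  # CPU reads before dma_wait -> race
m.dma_wait("get_tile")
m.access(0x1080, 4, is_write=False)                  # after the wait: no report
```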

 

P. Chatarasi and V. Sarkar, “Extending Polyhedral Model for Analysis and Transformation of OpenMP Programs,” 2015 International Conference on Parallel Architecture and Compilation (PACT), San Francisco, CA, 2015, pp. 490-491. doi: 10.1109/PACT.2015.57
Abstract: The polyhedral model is a powerful algebraic framework that has enabled significant advances in analysis and transformation of sequential affine (sub)programs, relative to traditional AST-based approaches. However, given the rapid growth of parallel software, there is a need for increased attention to using polyhedral compilation techniques to analyze and transform explicitly parallel programs. In our PACT'15 paper titled “Polyhedral Optimizations of Explicitly Parallel Programs” [1, 2], we addressed the problem of analyzing and transforming programs with explicit parallelism that satisfy the serial-elision property, i.e., the property that removal of all parallel constructs results in a sequential program that is a valid (albeit inefficient) implementation of the parallel program semantics. In this poster, we address the problem of analyzing and transforming more general OpenMP programs that do not satisfy the serial-elision property. Our contributions include the following: (1) An extension of the polyhedral model to represent input OpenMP programs, (2) Formalization of May Happen in Parallel (MHP) and Happens before (HB) relations in the extended model, (3) An approach for static detection of data races in OpenMP programs by generating race constraints that can be solved by an SMT solver such as Z3, and (4) An approach for transforming OpenMP programs.
Keywords: algebra; parallel programming; program compilers; AST-based approach; OpenMP programs; SMT solver; algebraic framework; parallel programs; parallel software; polyhedral compilation techniques; polyhedral model; sequential affine (sub)programs; serial-elision property; Analytical models; Instruction sets; Parallel architectures; Parallel processing; Schedules; Semantics (ID#: 16-11217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7429335&isnumber=7429279

 

J. Huang, Q. Luo and G. Rosu, “GPredict: Generic Predictive Concurrency Analysis,” 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Florence, 2015, pp. 847-857. doi: 10.1109/ICSE.2015.96
Abstract: Predictive trace analysis (PTA) is an effective approach for detecting subtle bugs in concurrent programs. Existing PTA techniques, however, are typically based on ad hoc algorithms tailored to low-level errors such as data races or atomicity violations, and are not applicable to high-level properties such as “a resource must be authenticated before use” and “a collection cannot be modified when being iterated over”. In addition, most techniques assume as input a globally ordered trace of events, which is expensive to collect in practice as it requires synchronizing all threads. In this paper, we present GPredict: a new technique that realizes PTA for generic concurrency properties. Moreover, GPredict does not require a global trace but only the local traces of each thread, which incurs much less runtime overhead than existing techniques. Our key idea is to uniformly model violations of concurrency properties and the thread causality as constraints over events. With an existing SMT solver, GPredict is able to precisely predict property violations allowed by the causal model. Through our evaluation using both benchmarks and real world applications, we show that GPredict is effective in expressing and predicting generic property violations. Moreover, it reduces the runtime overhead of existing techniques by 54% on DaCapo benchmarks on average.
Keywords: concurrency control; program debugging; program diagnostics; DaCapo benchmarks; GPredict; PTA; SMT solver; concurrent programs; generic predictive concurrency analysis; local traces; predictive trace analysis; subtle bug detection; Concurrent computing; Java; Prediction algorithms; Predictive models; Runtime; Schedules; Syntactics (ID#: 16-11219)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194631&isnumber=7194545
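Modeling trace events as integer order variables and handing the constraints to an SMT solver, as GPredict does, can be illustrated with z3py. The tiny two-thread trace and the property encoded below are invented for illustration; dropping the lock constraint makes the query satisfiable, i.e., the violation would be predicted.

```python
# pip install z3-solver
from z3 import Ints, Solver, Or, And

# Order variables for four events: thread 1 acquires/releases lock l around a
# write to x; thread 2 acquires/releases l around a read of x.
a1, r1, a2, r2 = Ints("acq1 rel1 acq2 rel2")

s = Solver()
s.add(a1 < r1, a2 < r2)          # program order within each thread
s.add(Or(r1 < a2, r2 < a1))      # lock semantics: critical sections cannot overlap
s.add(And(a1 < r2, a2 < r1))     # violation to witness: both threads inside at once

print(s.check())  # unsat: no feasible interleaving violates the property.
# Removing the lock constraint makes this sat, i.e., a violation is predicted.
```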

 

R. Pang, A. Baretto, H. Kautz and J. Luo, “Monitoring Adolescent Alcohol Use via Multimodal Analysis in Social Multimedia,” Big Data (Big Data), 2015 IEEE International Conference on, Santa Clara, CA, 2015, pp. 1509-1518. doi: 10.1109/BigData.2015.7363914
Abstract: Underage drinking or adolescent alcohol use is a major public health problem that causes more than 4,300 deaths annually. Traditional methods for monitoring adolescent alcohol consumption are based on surveys, which have many limitations and are difficult to scale. The main limitations include 1) respondents may not provide accurate, honest answers, 2) surveys with closed-ended questions may have a lower validity rate than other question types, 3) respondents who choose to respond may differ from those who choose not to respond, thus creating bias, 4) cost, 5) small sample size, and 6) lack of temporal sensitivity. We propose a novel approach to monitoring underage alcohol use by analyzing Instagram users' contents in order to overcome many of the limitations of surveys. First, Instagram users' demographics (such as age, gender and race) are determined by analyzing their selfie photos with automatic face detection and face analysis techniques supplied by a state-of-the-art face processing toolkit called Face++. Next, the tags associated with the pictures uploaded by users are used to identify the posts related to alcohol consumption and to discover the existence of drinking patterns in terms of time, frequency, and location. To that end, we have built an extensive dictionary of drinking activities based on internet slang and major alcohol brands. Finally, we measure the penetration of alcohol brands among underage users within Instagram by analyzing the followers of such brands, in order to evaluate to what extent they might influence their followers' drinking behaviors. Experimental results using a large number of Instagram users have revealed several findings that are consistent with those of the conventional surveys, thus partially validating the proposed approach. Moreover, new insights are obtained that may help develop effective intervention. We believe that this approach can be effectively applied to other domains of public health.
Keywords: face recognition; medical computing; multimedia computing; social networking (online); Face++; Instagram user content analysis; Instagram user demographics; Internet slang; adolescent alcohol consumption monitoring; adolescent alcohol monitoring; automatic face detection; drinking behaviors; face analysis techniques; face processing toolkit; major alcohol brands; multimodal analysis; public health problem; selfie photo analysis; social multimedia; temporal sensitivity; underage alcohol usage monitoring; underage drinking; Big data; Conferences; Decision support systems; Dictionaries; Handheld computers; Media; Multimedia communication; data mining; social media; social multimedia; underage drinking public health (ID#: 16-11220)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363914&isnumber=7363706

 

G. A. Skrimpas et al., “Detection of Generator Bearing Inner Race Creep by Means of Vibration and Temperature Analysis,” Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED), 2015 IEEE 10th International Symposium on, Guarda, 2015, pp. 303-309. doi: 10.1109/DEMPED.2015.7303706
Abstract: Vibration and temperature analysis are the two dominant condition monitoring techniques applied to fault detection of bearing failures in wind turbine generators. Relative movement between the bearing inner ring and the generator axle is one of the most severe failure modes in terms of secondary damage and fault development. Detection of bearing creep can be achieved reliably based on continuous trending of the amplitude of the vibration running-speed harmonic and absolute temperature values. In order to decrease the number of condition indicators which need to be assessed, it is proposed to exploit a weighted average descriptor calculated from the 3rd through 6th harmonic orders. Two cases of different bearing creep severity are presented, showing the consistency of the combined vibration and temperature data utilization. In general, vibration monitoring reveals early signs of abnormality several months prior to any permanent temperature increase, depending on the fault development.
Keywords: condition monitoring; creep; electric generators; failure analysis; fault diagnosis; harmonic analysis; machine bearings; thermal analysis; vibrations; bearing failures; bearing inner ring; condition monitoring techniques; fault detection; generator axle; generator bearing inner race creep; temperature absolute values; temperature analysis; vibration analysis; vibration running speed harmonic; weighted average descriptor; wind turbine generators; Creep; Generators; Harmonic analysis; Market research; Shafts; Vibrations; Wind turbines; Condition monitoring; angular resampling; bearing creep; rotational looseness; vibration analysis (ID#: 16-11221)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7303706&isnumber=7303654

 

M. Charles, T. N. Miano, X. Zhang, L. E. Barnes and J. M. Lobo, “Monitoring Quality Indicators for Screening Colonoscopies,” Systems and Information Engineering Design Symposium (SIEDS), 2015, Charlottesville, VA, 2015, pp. 171-175. doi: 10.1109/SIEDS.2015.7116968
Abstract: The detection rate of adenomas in screening colonoscopies is an important quality indicator for endoscopists. Successful detection of adenomas is linked to reduced cancer incidence and mortality. This study focuses on evaluating the performance of endoscopists on adenoma detection rate (ADR), polyp detection rate (PDR), and scope withdrawal time. The substitution of PDR for ADR has been suggested due to the reliance of ADR calculation on pathology reports. We compare these metrics to established clinical guidelines and to the performance of other individual endoscopists. Our analysis (n = 2730 screening colonoscopies) found variation in ADR for 14 endoscopists, ranging from 0.20 to 0.41. PDR ranged from 0.38 to 0.62. Controlling for age, sex, race, withdrawal time, and the presence of a trainee endoscopist accounted for 34% of the variation in PDR but failed to account for any variation in ADR. The Pearson correlation between PDR and ADR is 0.82. These results suggest that PDR has significant value as a quality indicator. The reported variation in detection rates after controlling for case mix signals the need for greater scrutiny of individual endoscopists' skill. Understanding the root cause of this variation could potentially lead to better patient outcomes.
Keywords: cancer; endoscopes; medical image processing; object detection; ADR; PDR; Pearson correlation; adenomas detection rate; cancer incidence; cancer mortality; clinical guidelines; endoscopists; pathology reports; polyp detection rate; quality indicator monitoring; screening colonoscopies; Cancer; Colonoscopy; Endoscopes; Guidelines; Logistics; Measurement; Predictive models; Electronic Medical Records; Health Data; Machine Learning; Physician Performance (ID#: 16-11222)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7116968&isnumber=7116953
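The metrics themselves are simple per-endoscopist rates, and the reported relationship is a plain Pearson correlation. A toy computation follows; the counts are invented, not the study's data.

```python
import numpy as np

# Per-endoscopist counts (illustrative, not the study's data).
screenings   = np.array([210, 180, 250, 140, 300])
with_polyp   = np.array([ 95,  70, 150,  55, 160])
with_adenoma = np.array([ 60,  40, 100,  30, 105])

pdr = with_polyp / screenings       # polyp detection rate
adr = with_adenoma / screenings     # adenoma detection rate

r = np.corrcoef(pdr, adr)[0, 1]     # Pearson correlation (the paper reports 0.82)
print(np.round(pdr, 2), np.round(adr, 2), round(r, 2))
```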

 

B. Bayart, A. Vartanian, P. Haefner and J. Ovtcharova, “TechViz XL Helps KITs Formula Student Car ‘Become Alive,’” 2015 IEEE Virtual Reality (VR), Arles, 2015, pp. 395-396. doi: 10.1109/VR.2015.7223462
Abstract: TechViz has been a supporter of Formula Student at KIT for several years, reflecting the company's long-term commitment to enhance engineering and education by providing students with powerful VR system software to connect curriculum to real-world applications. Incorporating an immersive visualisation and interaction environment into Formula Student vehicle design is proven to deliver race-day success by helping to detect faults and optimise the product life cycle. The TechViz LESC system helps to improve the car design despite the short time available, thanks to the direct visualisation of the CAD mockup in the VR system and its ease of use for non-VR experts.
Keywords: automobiles; computer aided instruction; data visualisation; graphical user interfaces; human computer interaction; virtual reality; CAD mockup; KITs formula student car; TechViz LESC system; TechViz XL; VR system software; fault detection; formula student vehicle design; immersive visualisation; interaction environment; product life cycle optimization; virtual reality system; Companies; Hardware; Solid modeling; Three-dimensional displays; Vehicles; Virtual reality; Visualization (ID#: 16-11223)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7223462&isnumber=7223305

 

T. Sim and L. Zhang, “Controllable Face Privacy,” Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, Ljubljana, 2015, pp. 1-8. doi: 10.1109/FG.2015.7285018
Abstract: We present the novel concept of Controllable Face Privacy. Existing methods that alter face images to conceal identity inadvertently also destroy other facial attributes such as gender, race or age. This all-or-nothing approach is too harsh. Instead, we propose a flexible method that can independently control the amount of identity alteration while keeping other facial attributes unchanged. To achieve this flexibility, we apply a subspace decomposition to our face encoding scheme, effectively decoupling facial attributes such as gender, race, age, and identity into mutually orthogonal subspaces, which in turn enables independent control of these attributes. Our method is thus useful for nuanced face de-identification, in which only facial identity is altered, but others, such as gender, race and age, are retained. These altered face images protect identity privacy, and yet allow other computer vision analyses, such as gender detection, to proceed unimpeded. Controllable Face Privacy is therefore useful for reaping the benefits of surveillance cameras while preventing privacy abuse. Our proposal also permits privacy to be applied not just to identity, but also to other facial attributes. Furthermore, privacy-protection mechanisms, such as k-anonymity, L-diversity, and t-closeness, may be readily incorporated into our method. Extensive experiments with commercial facial analysis software show that our alteration method is indeed effective.
Keywords: computer vision; data privacy; face recognition; image coding; L-diversity mechanism; computer vision analysis; controllable face privacy concept; face de-identification; face encoding scheme; face images; facial attributes; identity alteration control; k-anonymity mechanism; mutually orthogonal subspaces; privacy-protection mechanisms; subspace decomposition; t-closeness mechanism; Cameras; Detectors; Face; Privacy; Shape; Training; Visual analytics (ID#: 16-11224)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285018&isnumber=7285013
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.