Biblio

Filters: Author is Huang, T.
2021-01-20
Lei, M., Jin, M., Huang, T., Guo, Z., Wang, Q., Wu, Z., Chen, Z., Chen, X., Zhang, J..  2020.  Ultra-wideband Fingerprinting Positioning Based on Convolutional Neural Network. 2020 International Conference on Computer, Information and Telecommunication Systems (CITS). :1–5.

The Global Positioning System (GPS) can determine the position of any person or object on earth from satellite signals, but GPS signals cannot be received inside buildings, so an indoor positioning system is needed to determine a precise position there. Achieving more precise positioning is the main difficulty for indoor positioning systems today. In this paper, we propose an ultra-wideband fingerprinting positioning method based on a convolutional neural network (CNN); we collect a dataset in a room to test the model and compare our method with existing methods. In the experiments, our method reaches an accuracy of 98.36% and, compared with other fingerprint positioning methods, shows a great improvement in robustness. These results show that our method is practical while achieving higher accuracy.
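The abstract does not give the network architecture, so the following is only a rough Python sketch of a CNN fingerprint classifier of the general kind described; the input length, layer sizes, and number of reference positions are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical CNN that maps a UWB channel-impulse-response fingerprint to one
# of a fixed set of reference positions; all dimensions below are assumptions.
import torch
import torch.nn as nn

class FingerprintCNN(nn.Module):
    def __init__(self, cir_len=152, n_positions=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (cir_len // 4), n_positions)

    def forward(self, x):                 # x: (batch, 1, cir_len)
        return self.classifier(self.features(x).flatten(1))

model = FingerprintCNN()
logits = model(torch.randn(8, 1, 152))    # scores for 8 synthetic fingerprints
print(logits.shape)                       # torch.Size([8, 100])
```

Training such a classifier with cross-entropy loss over fingerprints collected at known reference points is the standard fingerprinting setup; whether the paper frames positioning as classification or regression is not stated in the abstract.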

2020-11-30
Pan, T., Xu, C., Lv, J., Shi, Q., Li, Q., Jia, C., Huang, T., Lin, X..  2019.  LD-ICN: Towards Latency Deterministic Information-Centric Networking. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :973–980.
Deterministic latency is a key challenge that must be addressed in numerous 5G applications such as AR/VR. However, it is difficult to make customized end-to-end resource reservations across multiple ISPs using IP-based QoS mechanisms. Information-Centric Networking (ICN) provides scalable and efficient content distribution at Internet scale thanks to its in-network caching and native multicast capabilities, and deterministic latency can be guaranteed by caching the relevant content objects in appropriate locations. Existing proposals formulate the ICN cache placement problem as various theoretical models, but the underlying mechanisms needed to support such cache coordination are not discussed in detail: how can cache reservations be made efficiently, how can route oscillation be avoided when cached content is updated, and how can latency be measured in real time? In this work, we propose Latency Deterministic Information-Centric Networking (LD-ICN). LD-ICN relies on source routing-based latency telemetry and leverages an on-path caching technique to avoid frequent route oscillation while still achieving optimal cache placement under the SDN architecture. Extensive evaluation shows that under LD-ICN, 90.04% of content requests are satisfied within the hard latency requirements.
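The abstract describes the goal (placing content so that hard latency bounds are met) but not the placement algorithm itself. Purely as an illustration of latency-constrained cache placement, and not of LD-ICN's actual optimization or telemetry, a greedy sketch in Python might look like this; the node names, latencies, and capacities are made up.

```python
# Toy greedy placement: serve each request from the closest cache node that
# meets the latency bound and still has free capacity. This is a generic
# illustration, not the LD-ICN algorithm.
def greedy_place(requests, latency, capacity, bound):
    """requests: {client: demand}, latency: {(client, node): ms},
    capacity: {node: free slots}. Returns {node: [clients served]}."""
    placement = {n: [] for n in capacity}
    for client, _demand in sorted(requests.items(), key=lambda kv: -kv[1]):
        feasible = [n for n in capacity
                    if latency[(client, n)] <= bound
                    and len(placement[n]) < capacity[n]]
        if feasible:
            best = min(feasible, key=lambda n: latency[(client, n)])
            placement[best].append(client)
    return placement

print(greedy_place(
    requests={"c1": 10, "c2": 5},
    latency={("c1", "edge"): 2, ("c1", "core"): 9,
             ("c2", "edge"): 8, ("c2", "core"): 3},
    capacity={"edge": 1, "core": 1},
    bound=5))                              # {'edge': ['c1'], 'core': ['c2']}
```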
2018-05-15
Wu, L., Liu, C., Huang, T., Sharma, A., Sarkar, S..  2017.  Traffic sensor health monitoring using spatiotemporal graphical modeling. Proceedings of the 2nd ACM SIGKDD Workshop on Machine Learning for Prognostics & Health Management. (Halifax, NS, Canada).
Huang, T., Liu, C., Sharma, A., Sarkar, S..  2017.  Traffic System Anomaly Detection using Spatiotemporal Pattern Networks. Proceedings of the 2nd ACM SIGKDD Workshop on Machine Learning for Prognostics & Health Management. (Halifax, NS, Canada).
2015-05-06
Lei, X., Liao, X., Huang, T., Li, H..  2014.  Cloud Computing Service: the Case of Large Matrix Determinant Computation. IEEE Transactions on Services Computing. PP:1-1.

The cloud computing paradigm provides an alternative, economical service that lets resource-constrained clients perform large-scale data computation. Since large matrix determinant computation (DC) is ubiquitous in science and engineering, this paper takes a first step toward designing a protocol that enables clients to securely, verifiably, and efficiently outsource DC to a malicious cloud. The main idea for protecting privacy is to apply transformations to the original matrix to obtain an encrypted matrix, which is sent to the cloud; the result returned from the cloud is then transformed back to recover the correct determinant of the original matrix. A randomized Monte Carlo verification algorithm with one-sided error is then introduced, demonstrating its value for designing inexpensive result-verification algorithms for secure outsourcing. In addition, it is shown analytically that the proposed protocol simultaneously fulfills the goals of correctness, security, robust cheating resistance, and high efficiency. Extensive theoretical analysis and experimental evaluation also show its high efficiency and immediate practicability. It is hoped that the proposed protocol can shed light on designing other secure outsourcing protocols and inspire companies and working groups to build comprehensive software systems for outsourcing scientific computation; such systems could be profitable by providing large-scale scientific computation services to many potential clients.
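The abstract sketches the protocol only at a high level: mask the matrix, let the cloud compute on the masked matrix, then undo the masking and verify the result. The toy Python snippet below illustrates the general masking idea with random permutation and diagonal factors; it is not the paper's actual transformation, verification algorithm, or security analysis.

```python
# Toy illustration of determinant outsourcing by masking; the specific factors
# used here are an assumption, not the protocol from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal((n, n))             # client's private matrix

# client side: mask X on both sides with (diagonal x permutation) factors
p1, p2 = rng.permutation(n), rng.permutation(n)
d1, d2 = rng.uniform(1.0, 2.0, n), rng.uniform(1.0, 2.0, n)
Y = (np.eye(n)[p1] * d1[:, None]) @ X @ (d2[:, None] * np.eye(n)[p2])

det_Y = np.linalg.det(Y)                    # the step delegated to the cloud

# client side: divide out the known determinants of the masking factors
sign = np.linalg.det(np.eye(n)[p1]) * np.linalg.det(np.eye(n)[p2])
det_X = det_Y / (sign * d1.prod() * d2.prod())
print(np.isclose(det_X, np.linalg.det(X)))  # True
```

A verification step of the one-sided-error kind mentioned in the abstract would additionally require the cloud to return evidence that the client can spot-check cheaply; how that is done is part of the paper itself and is not reproduced here.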

Huang, T., Drake, B., Aalfs, D., Vidakovic, B..  2014.  Nonlinear Adaptive Filtering with Dimension Reduction in the Wavelet Domain. Data Compression Conference (DCC), 2014. :408-408.

Recent advances in adaptive filter theory and in signal-acquisition hardware have led to the realization that purely linear algorithms are often inadequate: nonlinearities in the input space are evident in many real-world problems, and the algorithms that process the data must keep pace with advances in signal acquisition. Recently, kernel adaptive (online) filtering algorithms have been proposed that make no assumptions about the linearity of the input space. In addition, advances in wavelet data compression and dimension reduction have led to new algorithms that are well suited to a hybrid nonlinear filtering framework. In this paper we combine wavelet dimension reduction with kernel adaptive filtering. We derive algorithms in which the dimension of the data is first reduced by a wavelet transform; kernel adaptive filtering is then applied to the reduced-domain data to find the appropriate model parameters, yielding improved minimization of the mean-squared error (MSE). Another important feature of our methods is that the wavelet filter is itself chosen from the data, on the fly. In particular, we show that using a few optimal wavelet coefficients from the constructed wavelet filter, for both the training and testing data sets, as input to the kernel adaptive filter yields convergence to a near-optimal learning curve (MSE). We demonstrate these algorithms on simulated data and on a real data set from food processing.
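As a compact sketch of the two-stage idea in the abstract (reduce each input window with a wavelet transform, then run a kernel online adaptive filter on the reduced coefficients), the Python snippet below uses a one-level Haar transform and a Gaussian-kernel least-mean-squares (KLMS) update; the transform, kernel, step size, and window length are illustrative choices, not the paper's data-driven wavelet selection.

```python
# Wavelet dimension reduction followed by kernel LMS; the Haar transform,
# Gaussian kernel, step size and window length are illustrative assumptions.
import numpy as np

def haar_reduce(window, keep=8):
    """One-level Haar transform of an even-length window; return the `keep`
    largest-magnitude coefficients (in their original order)."""
    avg = (window[0::2] + window[1::2]) / np.sqrt(2)
    diff = (window[0::2] - window[1::2]) / np.sqrt(2)
    coeffs = np.concatenate([avg, diff])
    idx = np.sort(np.argsort(np.abs(coeffs))[-keep:])
    return coeffs[idx]

def klms(inputs, targets, eta=0.5, sigma=1.0):
    """Kernel least-mean-squares: predict with a Gaussian kernel expansion,
    then add the current input as a new center weighted by the error."""
    centers, weights, sq_errors = [], [], []
    for x, d in zip(inputs, targets):
        if centers:
            k = np.exp(-np.sum((np.array(centers) - x) ** 2, axis=1) / (2 * sigma ** 2))
            y = float(np.dot(weights, k))
        else:
            y = 0.0
        e = d - y
        centers.append(x)
        weights.append(eta * e)
        sq_errors.append(e ** 2)
    return np.array(sq_errors)

# toy nonlinear series: predict x[t] from a Haar-reduced window of the past 16 samples
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 20, 600)) ** 3 + 0.05 * rng.standard_normal(600)
windows = [haar_reduce(x[t - 16:t]) for t in range(16, 600)]
mse = klms(windows, x[16:600])
print(mse[:50].mean(), mse[-50:].mean())   # later errors are typically smaller
```

In a fuller implementation one would also prune or merge kernel centers (the dictionary grows with every sample here) and, as the paper emphasizes, select the wavelet filter itself from the data rather than fixing Haar in advance.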