Biblio

Filters: Keyword is compressive sampling
2021-04-27
Sekar, K., Devi, K. Suganya, Srinivasan, P., SenthilKumar, V. M..  2020.  Deep Wavelet Architecture for Compressive sensing Recovery. 2020 Seventh International Conference on Information Technology Trends (ITT). :185–189.
Deep learning-based compressive sensing (CS) has shown substantially improved performance and reduced run time for signal sampling and reconstruction. In most cases, however, these techniques suffer from disruptive artefacts or high-frequency content at low sampling ratios. The same occurs in the multi-resolution sampling method, which collects more of the lower-frequency components. A promising innovation combining CS with convolutional neural networks has eliminated the sparsity constraint, yet recovery remains slow. We propose a deep wavelet-based compressive sensing framework with multi-resolution sampling that improves both reconstruction quality and run time. The proposed model demonstrates outstanding quality on test functions compared with previous approaches.
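The wavelet connection above rests on a standard fact: piecewise-smooth signals concentrate their energy in a few wavelet coefficients, which is what makes them recoverable from compressive samples. A minimal numpy sketch of this sparsifying effect with the Haar wavelet (illustrative only; not the paper's deep architecture):

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level orthonormal Haar wavelet transform of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass branch
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass branch
        details.append(detail)
        x = approx
    return np.concatenate([x] + details[::-1])

# A piecewise-constant signal is highly compressible in the Haar basis.
sig = np.concatenate([np.full(64, 1.0), np.full(64, -2.0)])
coeffs = haar_dwt(sig, levels=5)
top5 = np.sort(np.abs(coeffs))[::-1][:5]
energy_fraction = np.sum(top5 ** 2) / np.sum(coeffs ** 2)
```

The transform is orthonormal, so total energy is preserved while almost all of it lands in a handful of coefficients — the sparsity a CS recovery stage exploits.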
K, S., Devi, K. Suganya, Srinivasan, P., Dheepa, T., Arpita, B., Singh, L. Dolendro.  2020.  Joint Correlated Compressive Sensing based on Predictive Data Recovery in WSNs. 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). :1–5.
Data sampling is a critical process for energy-constrained Wireless Sensor Networks. In this article, we propose a Predictive Data Recovery Compressive Sensing (PDR-CS) procedure for data sampling. PDR-CS samples data measurements from the monitoring field on the basis of spatial and temporal correlation, and the sparse measurements are recovered at the sink. Our proposed algorithm, PDR-CS, extends iterative re-weighted ℓ1 (IRW-ℓ1) minimization and regularization on top of spatio-temporal compressibility to enhance the accuracy of signal recovery and reduce energy consumption. The simulation study shows that a small number of samples is enough to recover the signal, and that PDR-CS requires less time than other compressive sensing procedures.
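The iterative re-weighting idea the abstract builds on can be sketched in a few lines: each pass solves a weighted minimum-energy problem whose weights favour the currently-large coefficients, steering the minimum-ℓ2 solution towards a sparse one. This is a generic FOCUSS-style reweighted least-squares toy (all sizes and parameters are illustrative assumptions, not the PDR-CS algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 40, 4                            # length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 + rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                 # compressive measurements

# Reweighted least squares: weights w_i = x_i^2 + eps shrink small entries.
x = A.T @ np.linalg.solve(A @ A.T, y)          # least-norm initialisation
eps = 1.0
for _ in range(30):
    W = x * x + eps                            # diagonal weights
    x = W * (A.T @ np.linalg.solve(A @ (W[:, None] * A.T), y))
    eps = max(eps / 10.0, 1e-6)                # anneal towards sparsity

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With measurements comfortably exceeding twice the sparsity, the iteration converges to the sparse generator of `y`.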
Mahamat, A. D., Ali, A., Tanguier, J. L., Donnot, A., Benelmir, R..  2020.  Mechanical and thermophysical characterization of local clay-based building materials. 2020 5th International Conference on Renewable Energies for Developing Countries (REDEC). :1–6.
The work we present is a comparative study based on an experimental approach to the mechanical and thermal properties of different local clay-based building materials with the incorporation of agricultural waste in Chad. These local building materials have been used since ancient times by the low-income population. They were the subject of a detailed characterization of their mechanical and thermal parameters. The objective is to obtain lightweight materials with good thermomechanical performance and which can contribute to improving thermal comfort, energy-saving, and security in social housing in Chad while reducing the cost of investment. Several clay-based samples with increasing incorporation of 0 to 8% of agricultural waste (cow dung or millet pod) were made. We used appropriate experimental methods for porous materials (the hydraulic press for mechanical tests and the box method for thermal tests). In this article, we have highlighted the values and variations of the mechanical compressive resistances, thermal conductivities, and thermal resistances of test pieces made with these materials. Knowing the mechanical and thermal characteristics, we also carried out a thermomechanical study. The thermal data made it possible to make Dynamic Thermal Simulations (STD) of the buildings thanks to the Pléiades + COMFIE software. The results obtained show that the use of these materials in a building presents good mechanical and thermal performance with low consumption of electrical energy for better thermal comfort of the occupants. Thus agricultural waste can be recovered thanks to its integration into building materials based on clay.
Kuldeep, G., Zhang, Q..  2020.  Revisiting Compressive Sensing based Encryption Schemes for IoT. 2020 IEEE Wireless Communications and Networking Conference (WCNC). :1–6.
Compressive sensing (CS) is regarded as one of the promising solutions for IoT data encryption, as it achieves simultaneous sampling, compression, and encryption. Theoretical work in the literature has proved that CS provides computational secrecy. It also provides asymptotic perfect secrecy for a Gaussian sensing matrix, with constraints on the input signal. In this paper, we design an attack decoding algorithm based on a block compressed sensing decoding algorithm to perform a ciphertext-only attack on real-life time-series IoT data. It shows that it is possible to retrieve vital information in the plaintext under some conditions. Furthermore, it is also applied to a state-of-the-art CS-based encryption scheme for smart grids, and the power profile is reconstructed using a ciphertext-only attack. Additionally, a statistical analysis of Gaussian and Binomial measurements is conducted to investigate the randomness they provide.
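One reason CS encryption cannot be perfectly secret for arbitrary inputs is easy to demonstrate: with a Gaussian sensing matrix, the ciphertext norm concentrates around the plaintext norm, so an eavesdropper learns the signal's energy without the key. A small numpy illustration of that leakage (generic toy, not the paper's attack decoder):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 64
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # secret Gaussian sensing matrix

quiet = 0.2 * rng.standard_normal(n)             # low-power plaintext
loud = 5.0 * rng.standard_normal(n)              # high-power plaintext
y_quiet, y_loud = Phi @ quiet, Phi @ loud        # ciphertexts

# Without Phi, ||y||^2 still concentrates around ||x||^2:
ratio_quiet = np.linalg.norm(y_quiet) ** 2 / np.linalg.norm(quiet) ** 2
ratio_loud = np.linalg.norm(y_loud) ** 2 / np.linalg.norm(loud) ** 2
```

For time-series data such as power profiles, this energy side channel alone distinguishes high-activity periods from quiet ones.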
Balestrieri, E., Vito, L. De, Picariello, F., Rapuano, S., Tudosa, I..  2020.  A Novel CS-based Measurement Method for Impairments Identification in Wireline Channels. 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC). :1–6.
The paper proposes a new measurement method for impairments identification in wireline channels (i.e. wire cables) by exploiting a Compressive Sampling (CS)-based technique. The method consists of two phases: (i) acquisition and reconstruction of the channel impulse response in the nominal working condition, and (ii) analysis of the channel state to detect any physical anomaly/discontinuity such as deterioration (e.g. aging due to a harsh environment) or unauthorized side channel attacks (e.g. taps). The first results demonstrate that the proposed method is capable of estimating the channel impairments with an accuracy that could allow the classification of the main channel impairments. The proposed method could be used to develop low-cost instrumentation for continuous monitoring of the physical layer of data networks and to improve their hardware security.
2020-09-14
Chandrala, M S, Hadli, Pooja, Aishwarya, R, Jejo, Kevin C, Sunil, Y, Sure, Pallaviram.  2019.  A GUI for Wideband Spectrum Sensing using Compressive Sampling Approaches. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.
Cognitive Radio is a prominent solution for effective spectral resource utilization. The rapidly growing device-to-device (D2D) communications and the next generation networks urge cognitive radio networks to facilitate wideband spectrum sensing in order to assure newer spectral opportunities. As Nyquist sampling rates are formidable owing to the complexity and cost of the ADCs, compressive sampling approaches are becoming increasingly popular. One such approach exploited in this paper is the Modulated Wideband Converter (MWC) to recover the spectral support. On the multiple measurement vector (MMV) framework provided by the MWC, threshold-based Orthogonal Matching Pursuit (OMP) and Sparse Bayesian Learning (SBL) algorithms are employed for support recovery. We develop a Graphical User Interface (GUI) that assists a beginner to simulate the RF front-end of an MWC and thereby enables the user to explore support recovery as a function of Signal to Noise Ratio (SNR), the number of measurement vectors, and the threshold. The GUI enables the user to explore spectrum sensing in the DVB-T, 3G and 4G bands and recovers the support using the OMP or SBL approach. The results show that the performance of SBL is better than that of OMP at lower SNR values.
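The OMP support-recovery step used above is short enough to sketch in full: greedily pick the dictionary atom most correlated with the residual, re-fit by least squares, repeat. A single-measurement-vector numpy version (the MMV and threshold variants in the paper extend this; all sizes here are assumptions):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of A to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # best new atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(2)
n, m, k = 128, 48, 3
true_support = [5, 60, 100]
x_true = np.zeros(n)
x_true[true_support] = [2.0, -1.5, 3.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat, support = omp(A, A @ x_true, k)
```

In the spectrum-sensing setting, `support` plays the role of the occupied-band indices recovered from the MWC measurements.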
Zhu, Xiaofeng, Huang, Liang, Wang, Ziqian.  2019.  Dynamic range analysis of one-bit compressive sampling with time-varying thresholds. The Journal of Engineering. 2019:6608–6611.
From the point of view of statistical signal processing, the dynamic range of one-bit quantisers with time-varying thresholds is studied. Maximum tolerable amplitudes, minimum detectable amplitudes and dynamic ranges of this one-bit sampling approach and of uniform quantisers, such as N-bit analogue-to-digital converters (ADCs), are derived and simulated. The results reveal that, like those of conventional ADCs, the dynamic ranges of the one-bit sampling approach are linearly proportional to the Gaussian noise standard deviation, while the one-bit approach's dynamic ranges are lower than those of N-bit ADCs at the same noise levels.
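The core trick behind one-bit sampling with time-varying thresholds is that a dithered comparator turns amplitude into a probability, which statistics can invert. A minimal Monte Carlo sketch for a constant signal and uniformly distributed thresholds (the uniform dither and all values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
amplitude, T, N = 0.7, 2.0, 200_000      # true amplitude, dither range, samples

# Time-varying thresholds drawn uniformly from [-T, T].
thresholds = rng.uniform(-T, T, N)
bits = (amplitude > thresholds).astype(float)   # one-bit comparator output

# P(bit = 1) = (amplitude + T) / (2T), so invert the comparator statistics:
estimate = (2.0 * bits.mean() - 1.0) * T
```

Amplitudes outside [-T, T] saturate the comparator, which is exactly why the maximum tolerable amplitude is bounded by the threshold range.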
Anselmi, Nicola, Poli, Lorenzo, Oliveri, Giacomo, Rocca, Paolo, Massa, Andrea.  2019.  Dealing with Correlation and Sparsity for an Effective Exploitation of the Compressive Processing in Electromagnetic Inverse Problems. 2019 13th European Conference on Antennas and Propagation (EuCAP). :1–4.
In this paper, a novel method for tomographic microwave imaging based on the Compressive Processing (CP) paradigm is proposed. The retrieval of the dielectric profiles of the scatterers is carried out by efficiently solving both the sampling and the sensing problems suitably formulated under the first order Born approximation. Selected numerical results are presented in order to show the improvements provided by the CP with respect to conventional compressive sensing (CSE) approaches.
Wang, Lizhi, Xiong, Zhiwei, Huang, Hua, Shi, Guangming, Wu, Feng, Zeng, Wenjun.  2019.  High-Speed Hyperspectral Video Acquisition By Combining Nyquist and Compressive Sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence. 41:857–870.
We propose a novel hybrid imaging system to acquire 4D high-speed hyperspectral (HSHS) videos with high spatial and spectral resolution. The proposed system consists of two branches: one branch performs Nyquist sampling in the temporal dimension while integrating the whole spectrum, resulting in a high-frame-rate panchromatic video; the other branch performs compressive sampling in the spectral dimension with longer exposures, resulting in a low-frame-rate hyperspectral video. Owing to the high light throughput and complementary sampling, these two branches jointly provide reliable measurements for recovering the underlying HSHS video. Moreover, the panchromatic video can be used to learn an over-complete 3D dictionary to represent each band-wise video sparsely, thanks to the inherent structural similarity in the spectral dimension. Based on the joint measurements and the self-adaptive dictionary, we further propose a simultaneous spectral sparse (3S) model to reinforce the structural similarity across different bands and develop an efficient computational reconstruction algorithm to recover the HSHS video. Both simulation and hardware experiments validate the effectiveness of the proposed approach. To the best of our knowledge, this is the first time that hyperspectral videos can be acquired at a frame rate up to 100fps with commodity optical elements and under ordinary indoor illumination.
Feng, Qi, Huang, Jianjun, Yang, Zhaocheng.  2019.  Jointly Optimized Target Detection and Tracking Using Compressive Samples. IEEE Access. 7:73675–73684.
In this paper, we consider the problem of joint target detection and tracking in compressive sampling and processing (CSP-JDT). CSP can process the compressive samples of sparse signals directly without signal reconstruction, which is suitable for handling high-resolution radar signals. However, in CSP, the radar target detection and tracking problems are usually solved separately or by a two-stage strategy, which cannot obtain a globally optimal solution. To jointly optimize the target detection and tracking performance and inspired by the optimal Bayes joint decision and estimation (JDE) framework, a jointly optimized target detection and tracking algorithm in CSP is proposed. Since detection and tracking are highly correlated, we first develop a measurement matrix construction method to acquire the compressive samples, and then a joint CSP Bayesian approach is developed for target detection and tracking. The experimental results demonstrate that the proposed method outperforms the two-stage algorithms in terms of the joint performance metric.
Kafedziski, Venceslav.  2019.  Compressive Sampling Stepped Frequency Ground Penetrating Radar Using Group Sparsity and Markov Chain Sparsity Model. 2019 14th International Conference on Advanced Technologies, Systems and Services in Telecommunications (TELSIKS). :265–268.
We investigate an implementation of a compressive sampling (CS) stepped frequency ground penetrating radar. Due to the small number of targets, the B-scan is represented as a sparse image. Due to the nature of stepped frequency radar, a smaller number of random frequencies can be used to obtain each A-scan (sparse delays). Also, the measurements obtained from different antenna positions can be reduced to a smaller number of random antenna positions. We also use the structure in the B-scan, i.e. the shape of the targets, which can be known, for instance, when detecting land mines. We demonstrate our method using radar data available on the Web for land mine targets buried in the ground. We use group sparsity, i.e. we assume that the targets have some non-zero (and presumably known) dimension in the cross-range coordinate of the B-scan. For such targets, we also use a Markov chain model, where we simultaneously estimate the model parameters using the EMturboGAMP algorithm. Both approaches result in improved performance.
Wang, Hui, Yan, Qiurong, Li, Bing, Yuan, Chenglong, Wang, Yuhao.  2019.  Sampling Time Adaptive Single-Photon Compressive Imaging. IEEE Photonics Journal. 11:1–10.
We propose a time-adaptive sampling method and demonstrate a sampling-time-adaptive single-photon compressive imaging system. In order to achieve self-adapting adjustment of the sampling time, a threshold on the accuracy of light intensity estimation is derived. According to this threshold, a sampling control module based on a field-programmable gate array is developed. Finally, the advantage of the time-adaptive sampling method is proved experimentally. Imaging performance experiments show that the time-adaptive sampling method can automatically adjust the sampling time to changes in the light intensity of the imaged object, obtaining an image with better quality while avoiding speculative selection of the sampling time.
Quang-Huy, Tran, Nguyen, Van Dien, Nguyen, Van Dung, Duc-Tan, Tran.  2019.  Density Imaging Using a Compressive Sampling DBIM approach. 2019 International Conference on Advanced Technologies for Communications (ATC). :160–163.
Density information has been used as an acoustic property to restore objects in a quantitative manner in ultrasound tomography based on backscatter theory. In the traditional method, the authors only study the distorted Born iterative method (DBIM) to create density images using Tikhonov regularization. The downsides are that the image quality is still low, the resolution is low, and the convergence rate is not high. In this paper, we study the DBIM method to create density images using a compressive sampling technique. With compressive sampling, the probes are randomly distributed on the measurement system (unlike the traditional method, where the probes are evenly distributed). This approach uses ℓ1 regularization to restore images. The proposed method gives superior results in image recovery quality and spatial resolution. Its limitation is that the imaging time is longer than in the traditional method, although fewer iterations are required.
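The ℓ1-regularized recovery contrasted with Tikhonov above is typically solved by iterative shrinkage. A compact numpy sketch of ISTA, the basic such solver (a generic stand-in under assumed sizes, not the paper's DBIM inner loop):

```python
import numpy as np

def ista(A, y, lam, iters=1000):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))        # gradient step on the l2 term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(4)
n, m = 100, 50
true_support = [7, 23, 41, 66, 90]
x_true = np.zeros(n)
x_true[true_support] = [2.0, -2.0, 1.5, 3.0, -1.8]
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true, lam=0.02)
```

Unlike a Tikhonov (ℓ2) penalty, the soft-threshold step drives most coefficients exactly to zero, which is what sharpens the recovered profile.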
HANJRI, Adnane EL, HAYAR, Aawatif, Haqiq, Abdelkrim.  2019.  Combined Compressive Sampling Techniques and Features Detection using Kullback Leibler Distance to Manage Handovers. 2019 IEEE International Smart Cities Conference (ISC2). :504–507.
In this paper, we present a new handover technique which combines a Distribution Analysis Detector and Compressive Sampling techniques. The proposed approach consists of analysing the received signal's probability density function instead of demodulating and analysing the received signal itself, as in classical handover. In this method we exploit mathematical tools such as the Kullback-Leibler Distance, the Akaike Information Criterion (AIC) and Akaike weights, in order to decide blindly the best handover and the best Base Station (BS) for each user. The Compressive Sampling algorithm is designed to take advantage of the primary signals' sparsity and to preserve the linearity and properties of the original signal, so that the Distribution Analysis Detector can be applied to the compressed measurements.
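The two statistical tools named above are both one-liners worth seeing concretely: the Kullback-Leibler distance compares candidate distributions, and Akaike weights turn per-candidate AIC scores into a normalized preference. A small numpy sketch (the AIC scores are hypothetical placeholders for per-base-station fits):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler distance between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                       # terms with p=0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def akaike_weights(aic):
    """Relative likelihood of each candidate model from its AIC score."""
    aic = np.asarray(aic, float)
    rel = np.exp(-0.5 * (aic - aic.min()))   # shift for numerical stability
    return rel / rel.sum()

# Hypothetical per-base-station AIC scores: lower AIC -> larger weight.
w = akaike_weights([100.0, 102.0, 110.0])
```

Picking the base station with the largest Akaike weight is the blind decision rule the abstract describes.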
2019-12-10
Ponuma, R, Amutha, R, Haritha, B.  2018.  Compressive Sensing and Hyper-Chaos Based Image Compression-Encryption. 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). :1-5.

A 2D-Compressive Sensing and hyper-chaos based image compression-encryption algorithm is proposed. The 2D image is compressively sampled and encrypted using two measurement matrices. A chaos based measurement matrix construction is employed. The construction of the measurement matrix is controlled by the initial and control parameters of the chaotic system, which are used as the secret key for encryption. The linear measurements of the sparse coefficients of the image are then subjected to a hyper-chaos based diffusion which results in the cipher image. Numerical simulation and security analysis are performed to verify the validity and reliability of the proposed algorithm.
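The key-controlled measurement-matrix construction described above can be sketched with the simplest chaotic generator, the logistic map: the initial condition is the secret key, and tiny key changes yield a completely different matrix. A toy numpy version (the map, parameters, and scaling are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def chaotic_matrix(key, m, n, r=3.99, burn=1000):
    """Build an m x n measurement matrix from a logistic-map orbit.

    `key` is the initial condition in (0, 1) and acts as the secret key."""
    x, seq = key, np.empty(m * n)
    for _ in range(burn):              # discard the transient
        x = r * x * (1.0 - x)
    for i in range(m * n):
        x = r * x * (1.0 - x)
        seq[i] = x
    # Centre and scale so the matrix behaves like a random sensing matrix.
    return (2.0 * seq - 1.0).reshape(m, n) / np.sqrt(m)

A1 = chaotic_matrix(0.345678, 16, 64)
A2 = chaotic_matrix(0.345679, 16, 64)   # key differs in the 6th decimal
```

Because the orbit diverges exponentially from the perturbed key, `A1` and `A2` share no usable structure — the key sensitivity the scheme relies on.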

Tian, Yun, Xu, Wenbo, Qin, Jing, Zhao, Xiaofan.  2018.  Compressive Detection of Random Signals from Sparsely Corrupted Measurements. 2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC). :389-393.

Compressed sensing (CS) integrates sampling and compression into a single step to reduce the processed data amount. However, the CS reconstruction generally suffers from high complexity. To solve this problem, compressive signal processing (CSP) is recently proposed to implement some signal processing tasks directly in the compressive domain without reconstruction. Among various CSP techniques, compressive detection achieves the signal detection based on the CS measurements. This paper investigates the compressive detection problem of random signals when the measurements are corrupted. Different from the current studies that only consider the dense noise, our study considers both the dense noise and sparse error. The theoretical performance is derived, and simulations are provided to verify the derived theoretical results.
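Compressive detection as described above decides signal presence directly from `y = Phi x + noise`, with no reconstruction. A minimal Monte Carlo energy detector in numpy (dense noise only; the paper's sparse-error model is not sketched here, and all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, trials = 256, 64, 2000
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # fixed measurement matrix

def measurement_energy(x, noise_std=0.1):
    """Energy of one compressive measurement y = Phi x + dense noise."""
    y = Phi @ x + noise_std * rng.standard_normal(m)
    return np.linalg.norm(y) ** 2

# H0: noise only; H1: a sparse signal is present.
signal = np.zeros(n)
signal[[10, 70, 200]] = 1.5
e0 = np.array([measurement_energy(np.zeros(n)) for _ in range(trials)])
e1 = np.array([measurement_energy(signal) for _ in range(trials)])

thresh = np.quantile(e0, 0.99)    # threshold for a ~1% false-alarm rate
p_detect = (e1 > thresh).mean()
```

Because `Phi` roughly preserves signal energy, the H1 energies separate cleanly from the H0 ones even though the signal is never reconstructed.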

Shiddik, Luthfi Rakha, Novamizanti, Ledya, Ramatryana, I N Apraz Nyoman, Hanifan, Hasya Azqia.  2019.  Compressive Sampling for Robust Video Watermarking Based on BCH Code in SWT-SVD Domain. 2019 International Conference on Sustainable Engineering and Creative Computing (ICSECC). :223-227.

The security and confidentiality of data can be guaranteed by using a technique called watermarking. In this study, compressive sampling is designed and analyzed for video watermarking. Before the watermark compression process, the watermark is encoded with a Bose-Chaudhuri-Hocquenghem (BCH) code. After that, the watermark is processed using the Discrete Sine Transform (DST) and Discrete Wavelet Transform (DWT). The watermark is then inserted into the host video using the Stationary Wavelet Transform (SWT) and Singular Value Decomposition (SVD) methods. Our system obtains a PSNR of 47.269 dB, an MSE of 1.712, and a BER of 0.080, and is resistant to Gaussian blur and rescaling noise attacks.
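The three figures of merit quoted above (PSNR, MSE, BER) have standard definitions worth pinning down; a small numpy sketch computing them on a synthetic frame (the frame and perturbation are made-up stand-ins, not the paper's data):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher = less visible distortion)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def ber(sent, received):
    """Bit error rate of the extracted watermark bits."""
    sent, received = np.asarray(sent), np.asarray(received)
    return float(np.mean(sent != received))

rng = np.random.default_rng(6)
frame = rng.integers(0, 256, (64, 64))                     # host frame stand-in
marked = np.clip(frame + rng.integers(-2, 3, frame.shape), 0, 255)
frame_psnr = psnr(frame, marked)
```

A PSNR in the mid-40s dB, as reported in the paper, corresponds to an embedding perturbation of only a couple of grey levels per pixel.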

Zhou, Guorui, Zhu, Xiaoqiang, Song, Chenru, Fan, Ying, Zhu, Han, Ma, Xiao, Yan, Yanghui, Jin, Junqi, Li, Han, Gai, Kun.  2018.  Deep Interest Network for Click-Through Rate Prediction. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :1059-1068.

Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods, large-scale sparse input features are first mapped into low-dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together and fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of the candidate ads. The use of a fixed-length vector becomes a bottleneck that makes it difficult for Embedding&MLP methods to capture the user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model, the Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques, mini-batch aware regularization and a data adaptive activation function, which help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system at Alibaba, serving the main traffic.
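The local activation unit's effect — an ad-dependent user vector instead of one fixed pooling — can be sketched in a few numpy lines. The bilinear relevance score below is a stand-in for DIN's small activation MLP, and the identity weight matrix is an untrained placeholder:

```python
import numpy as np

def local_activation(behaviors, ad, W):
    """DIN-style adaptive pooling: weight each historical behavior
    embedding by its relevance to the candidate ad."""
    scores = behaviors @ W @ ad           # one relevance score per behavior
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # normalized attention weights
    return weights @ behaviors            # ad-dependent user representation

rng = np.random.default_rng(7)
d, hist = 8, 5
behaviors = rng.standard_normal((hist, d))   # user's behavior embeddings
W = np.eye(d)                                # assumed, untrained weights

u1 = local_activation(behaviors, behaviors[0], W)  # ad similar to behavior 0
u2 = local_activation(behaviors, behaviors[3], W)  # ad similar to behavior 3
```

`u1` and `u2` differ because each candidate ad re-weights the same history, which is exactly what a fixed-length pooled vector cannot do.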

Tai, Kai Sheng, Sharan, Vatsal, Bailis, Peter, Valiant, Gregory.  2018.  Sketching Linear Classifiers over Data Streams. Proceedings of the 2018 International Conference on Management of Data. :757-772.

We introduce a new sub-linear space sketch—the Weight-Median Sketch—for learning compressed linear classifiers over data streams while supporting the efficient recovery of large-magnitude weights in the model. This enables memory-limited execution of several statistical analyses over streams, including online feature selection, streaming data explanation, relative deltoid detection, and streaming estimation of pointwise mutual information. Unlike related sketches that capture the most frequently-occurring features (or items) in a data stream, the Weight-Median Sketch captures the features that are most discriminative of one stream (or class) compared to another. The Weight-Median Sketch adopts the core data structure used in the Count-Sketch, but, instead of sketching counts, it captures sketched gradient updates to the model parameters. We provide a theoretical analysis that establishes recovery guarantees for batch and online learning, and demonstrate empirical improvements in memory-accuracy trade-offs over alternative memory-budgeted methods, including count-based sketches and feature hashing.
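The Count-Sketch core that the Weight-Median Sketch adopts is compact enough to show directly: hashed signed updates into a small table, with a median-of-rows estimator. This toy sketches plain counts rather than gradient updates (hash functions and sizes are illustrative assumptions):

```python
import numpy as np

class CountSketch:
    """Count-Sketch with median-of-rows estimation: the core structure
    the Weight-Median Sketch builds on (here sketching item counts)."""

    def __init__(self, rows=5, width=256, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((rows, width))
        self.width = width
        self.p = 2_147_483_647                    # large prime for hashing
        self.a = rng.integers(1, self.p, (rows, 2))

    def _bucket_sign(self, item, r):
        h = (self.a[r, 0] * item + self.a[r, 1]) % self.p
        return int(h % self.width), 1 if (h // self.width) % 2 == 0 else -1

    def update(self, item, delta=1.0):
        for r in range(len(self.table)):
            b, s = self._bucket_sign(item, r)
            self.table[r, b] += s * delta         # signed update per row

    def query(self, item):
        ests = []
        for r in range(len(self.table)):
            b, s = self._bucket_sign(item, r)
            ests.append(s * self.table[r, b])
        return float(np.median(ests))             # median defeats collisions

cs = CountSketch()
for _ in range(100):
    cs.update(42)                 # heavy item
for it in range(1000):
    cs.update(it % 7)             # background traffic
```

The Weight-Median Sketch replaces the count updates with sketched gradient updates, so `query` recovers large-magnitude model weights instead of heavy-hitter counts.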

Deng, Lijin, Piao, Yan, Liu, Shuo.  2018.  Research on SIFT Image Matching Based on MLESAC Algorithm. Proceedings of the 2nd International Conference on Digital Signal Processing. :17–21.

Differences between sensor devices and camera position offsets lead to geometric differences between the images to be matched. The traditional SIFT image matching algorithm produces a large number of incorrect matching point pairs, and its matching accuracy is low. In order to solve this problem, a SIFT image matching method based on the Maximum Likelihood Estimation Sample Consensus (MLESAC) algorithm is proposed. Compared with the traditional SIFT feature matching algorithm, the SURF feature matching algorithm and the RANSAC feature matching algorithm, the proposed algorithm can effectively remove false matching feature point pairs during the image matching process. Experimental results show that the proposed algorithm has higher matching accuracy and faster matching efficiency.
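The sample-consensus idea underlying both RANSAC and MLESAC is easiest to see on a toy robust-fitting problem: repeatedly fit a minimal model to a random sample and keep the model with the best consensus. The sketch below scores models by hard inlier count (plain RANSAC); MLESAC instead scores them by the likelihood of the residuals, but the loop is the same (line fitting here is an illustrative stand-in for SIFT correspondence filtering):

```python
import numpy as np

def ransac_line(pts, iters=200, tol=0.1, seed=0):
    """Robustly fit y = a*x + b by random minimal samples + consensus."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)   # minimal sample
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol)
        if inliers > best_inliers:                      # keep best consensus
            best, best_inliers = (a, b), inliers
    return best

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 80)
pts = np.column_stack([x, 2.0 * x + 1.0 + rng.normal(0, 0.02, 80)])
pts[:20, 1] = rng.uniform(-30, 30, 20)      # 25% gross outliers
a, b = ransac_line(pts)                     # near the true line y = 2x + 1
```

In the matching pipeline, the "model" is a homography or fundamental matrix between images and the "points" are SIFT correspondences; the consensus step is what discards the false pairs.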

Feng, Chenwei, Wang, Xianling, Zhang, Zewang.  2018.  Data Compression Scheme Based on Discrete Sine Transform and Lloyd-Max Quantization. Proceedings of the 3rd International Conference on Intelligent Information Processing. :46-51.

With the increase in mobile equipment and transmission data, the Common Public Radio Interface (CPRI) between the Building Baseband Unit (BBU) and the Remote Radio Unit (RRU) must carry increasing amounts of data. It is essential to compress the data on the CPRI if more data are to be transferred without congestion while limiting fiber consumption. A data compression scheme based on the Discrete Sine Transform (DST) and Lloyd-Max quantization is proposed for the distributed Base Station (BS) architecture. The time-domain samples are transformed by the DST according to the characteristics of Orthogonal Frequency Division Multiplexing (OFDM) baseband signals, and the coefficients after transformation are quantified by the Lloyd-Max quantizer. The simulation results show that the proposed scheme can work at various Compression Ratios (CRs) while keeping the Error Vector Magnitude (EVM) within the limits set by 3GPP.
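The transform-then-quantize pipeline above can be sketched end to end in numpy: an orthonormal DST-II matrix, and Lloyd's alternating algorithm for an MSE-optimal scalar quantizer. Gaussian samples stand in for the OFDM baseband signal, and all sizes are illustrative assumptions:

```python
import numpy as np

def dst2_matrix(N):
    """Orthonormal DST-II matrix: rows are discrete sine basis vectors."""
    k = np.arange(N).reshape(-1, 1)               # frequency index
    t = np.arange(N).reshape(1, -1)               # time index
    M = np.sqrt(2.0 / N) * np.sin(np.pi * (t + 0.5) * (k + 1) / N)
    M[-1, :] /= np.sqrt(2.0)                      # last row needs half weight
    return M

def lloyd_max(samples, levels=8, iters=100):
    """Lloyd's algorithm: alternate nearest-level assignment and centroid
    updates to approach the MSE-optimal scalar quantizer."""
    centers = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        for j in range(levels):
            if np.any(idx == j):
                centers[j] = samples[idx == j].mean()   # centroid condition
    idx = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
    return centers[idx]                                  # quantized samples

rng = np.random.default_rng(9)
N = 64
x = rng.standard_normal(N)                        # stand-in baseband samples
D = dst2_matrix(N)
c = D @ x                                         # DST coefficients
x_rec = D.T @ lloyd_max(c)                        # dequantize + inverse DST
evm = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
```

Because the DST is orthonormal, quantization error in the coefficient domain maps one-to-one to time-domain error, so the EVM is governed directly by the Lloyd-Max distortion at the chosen number of levels.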

Huang, Lilian, Zhu, Zhonghang.  2018.  Compressive Sensing Image Reconstruction Using Super-Resolution Convolutional Neural Network. Proceedings of the 2nd International Conference on Digital Signal Processing. :80–83.

Compressed sensing (CS) can recover a signal that is sparse in a certain representation from samples taken at a rate far below the Nyquist rate. However, limited by the accuracy of atom matching in traditional reconstruction algorithms, CS struggles to reconstruct the initial signal at high resolution. Meanwhile, researchers have found that trained neural networks have a strong ability to solve such inverse problems. Thus, we propose a Super-Resolution Convolutional Neural Network (SRCNN) that consists of three convolutional layers. Each layer has a fixed number of kernels and its own specific function. A classical compressed sensing algorithm first processes the input image; afterwards, the output images are refined via the SRCNN. The proposed SRCNN algorithm yields higher-resolution images. The simulation results show that the proposed method improves the PSNR value and the visual quality.

Sun, Jie, Yu, Jiancheng, Zhang, Aiqun, Song, Aijun, Zhang, Fumin.  2018.  Underwater Acoustic Intensity Field Reconstruction by Kriged Compressive Sensing. Proceedings of the Thirteenth ACM International Conference on Underwater Networks & Systems. :5:1-5:8.

This paper presents a novel Kriged Compressive Sensing (KCS) approach for the reconstruction of underwater acoustic intensity fields sampled by multiple gliders following sawtooth sampling patterns. Blank areas in between the sampling trajectories may cause unsatisfying reconstruction results. The KCS method leverages spatial statistical correlation properties of the acoustic intensity field being sampled to improve the compressive reconstruction process. Virtual data samples generated from a kriging method are inserted into the blank areas. We show that by using the virtual samples along with real samples, the acoustic intensity field can be reconstructed with higher accuracy when coherent spatial patterns exist. Corresponding algorithms are developed for both unweighted and weighted KCS methods. By distinguishing the virtual samples from real samples through weighting, the reconstruction results can be further improved. Simulation results show that both algorithms can improve the reconstruction results according to the PSNR and SSIM metrics. The methods are applied to process the ocean ambient noise data collected by the Sea-Wing acoustic gliders in the South China Sea.

Cui, Wenxue, Jiang, Feng, Gao, Xinwei, Zhang, Shengping, Zhao, Debin.  2018.  An Efficient Deep Quantized Compressed Sensing Coding Framework of Natural Images. Proceedings of the 26th ACM International Conference on Multimedia. :1777-1785.

Traditional image compressed sensing (CS) coding frameworks solve an inverse problem based on measurement coding tools (prediction, quantization, entropy coding, etc.) and an optimization-based image reconstruction method. These CS coding frameworks face the challenge of improving coding efficiency at the encoder, while also suffering from high computational complexity at the decoder. In this paper, we move forward a step and propose a novel deep network based CS coding framework for natural images, which consists of three sub-networks: a sampling sub-network, an offset sub-network and a reconstruction sub-network, which are responsible for sampling, quantization and reconstruction, respectively. By cooperatively utilizing these sub-networks, the framework can be trained end-to-end with a proposed rate-distortion optimization loss function. The proposed framework not only improves the coding performance, but also dramatically reduces the computational cost of image reconstruction. Experimental results on benchmark datasets demonstrate that the proposed method achieves superior rate-distortion performance against state-of-the-art methods.

Braverman, Mark, Kol, Gillat.  2018.  Interactive Compression to External Information. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. :964-977.

We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the participants' private inputs to an observer that watches the communication, can be simulated by a new protocol that communicates at most poly(I) $\cdot$ loglog(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost.