Privacy governance for Big Data is challenging: data may be rich enough to allow the inference of private information that has been removed, redacted, or minimized. We must protect against both malicious and accidental inference, whether by data analysts or by automated systems. To do this, we are extending existing methods for controlling the inference risks of common analysis tools, drawing on the literature on the related problem of nondiscriminatory data analysis. We are coupling these methods with auditing tools such as verifiably integral audit logs.
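We do not fix a particular log construction here; as one minimal illustration, a verifiably integral audit log can be built as a hash chain, in which every entry commits to the hash of its predecessor, so that any retroactive alteration or deletion of a logged analysis breaks verification. The sketch below is a simplified example of this idea; the class and field names are hypothetical, not a specific system of ours.

```python
import hashlib
import json

class HashChainedLog:
    """Tamper-evident audit log: each entry commits to the previous
    entry's hash, so any retroactive edit breaks the chain."""

    def __init__(self):
        self.entries = []
        # Fixed starting point for the chain.
        self.head = hashlib.sha256(b"genesis").hexdigest()

    def append(self, record: dict) -> str:
        # Bind this record to the current chain head.
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((self.head + payload).encode()).hexdigest()
        self.entries.append((payload, digest))
        self.head = digest
        return digest

    def verify(self) -> bool:
        # Recompute the chain; a modified entry changes every later hash.
        head = hashlib.sha256(b"genesis").hexdigest()
        for payload, digest in self.entries:
            expected = hashlib.sha256((head + payload).encode()).hexdigest()
            if expected != digest:
                return False
            head = expected
        return head == self.head

# Example: log each analyst query so a later audit can check what was run.
log = HashChainedLog()
log.append({"analyst": "alice", "query": "SELECT AVG(age) FROM patients"})
assert log.verify()
```

In a deployed system the chain head would additionally be signed or published to a party independent of the log's operator, since an attacker who can rewrite the whole chain can otherwise recompute it consistently.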