How To Audit An AI Model Owned by Someone Else (Part 1): Many AI systems today lack comprehensive external evaluation of their value alignment, fairness, safety, and other critical properties. The few external entities that do have access are often granted excessive permissions, creating privacy, security, and intellectual-property risks. Current access methods struggle to balance auditor access against AI owner risk. This blog post series presents a new approach using PySyft “Domain” servers. The system enables an auditor to ask a specific question about an AI system; if all parties approve the question, the auditor receives the answer without any additional information being exposed, making the audit process more focused and secure.
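The approve-then-answer flow described above can be sketched in plain Python. This is a toy illustration, not the PySyft API: the `AuditRequest` and `DomainServer` classes and the party names are hypothetical stand-ins for the real Domain-server mechanics covered in the series.

```python
class AuditRequest:
    """One audit question posed against a remote model (hypothetical class)."""

    def __init__(self, question, metric_fn):
        self.question = question
        self.metric_fn = metric_fn  # computes only the requested statistic
        self.approvals = set()

    def approve(self, party):
        self.approvals.add(party)


class DomainServer:
    """Toy stand-in for a server hosting a private model; names are illustrative."""

    REQUIRED_PARTIES = {"model_owner", "auditor"}

    def __init__(self, model):
        self._model = model  # never exposed directly to the auditor

    def answer(self, request):
        # Release only the requested answer, and only with full approval.
        if request.approvals != self.REQUIRED_PARTIES:
            raise PermissionError("audit question not approved by all parties")
        return request.metric_fn(self._model)


# Usage: the auditor asks for a test-accuracy figure, not the model weights.
model = {"accuracy": 0.91}
req = AuditRequest("What is the model's test accuracy?",
                   lambda m: m["accuracy"])
req.approve("model_owner")
req.approve("auditor")
server = DomainServer(model)
print(server.answer(req))  # 0.91
```

The key design point mirrored here is that the auditor never touches `_model` itself: the server evaluates the approved question on the owner's side and returns only the scalar answer.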
Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers: We present prediction sensitivity, an approach for auditing counterfactual fairness while the model is deployed and making predictions.
Maintaining Privacy in Medical Data with Differential Privacy: How can you make use of these datasets without accessing them directly? How can you assure these hospitals that their patients’ data will be protected? Is it even possible? We try to answer these questions in this blog post.
Comic – PATE Analysis: A comic representing the PATE Framework.
Deep CARs — Transfer Learning With PyTorch: How can you teach a computer to recognize different car brands? Would you like to take a picture of any car and have your phone automatically tell you what the make of the car is?
Population vs Sample, Statistic vs Parameter: Basics of descriptive statistics.