xD to Present Four Papers at FCSM

July 10, 2024

The xD team will present four papers at the upcoming 2024 Federal Committee on Statistical Methodology Research and Policy Conference in October. xD's papers focus on the responsible implementation and oversight of AI systems, including mitigating AI bias by using explainable AI and causal learning, employing privacy-enhancing technologies to enable third-party audits of algorithms, and harnessing Census data to better understand the societal impact of AI systems. See below for a short description of each paper.

  • “A Semi-Supervised Active Learning Approach for Block-Status Classification”: We have developed an AI/ML solution that improves both the labeling and the classification of parcel data, enabling new data-driven insights while reducing the cost and effort of data assessment. Here we combine Explainable AI (XAI) and Causal Learning (CL) for bias identification with an active semi-supervised learning approach to ensure that model predictions are robust, fair, and trustworthy (a generic sketch of such a labeling loop appears after this list). The project aims to save approximately 800,000 hours of manual labeling and to showcase the Census Bureau's expertise and evolution in applying AI/ML to geographic data.

  • “Explainable Artificial Intelligence for Bias Identification and Mitigation in Demographic Models”: In the language project with the Social, Economic, and Housing Statistics Division (SEHSD) of the Census Bureau, we highlight use-case examples of XAI and Causal Learning for identifying bias in demographic models and in the datasets those models are built on. The use of AI in demographic applications is under increasing scrutiny for bias. Treating causal learning and XAI as must-have features of demographic AI will help alleviate problematic algorithmic bias and inject much-needed transparency into a process that would otherwise remain opaque (see the second sketch after this list).

  • “Official Statistics for Responsible AI: The Role of the Federal Statistical System in Enabling a More Accountable AI/ML ecosystem”: As the country’s premier source of statistical information, the Federal Statistical System (FSS) is in a unique position to enable understanding of the social impacts of AI systems. In this policy paper, we propose three ways that FSS agencies can enhance the Responsible AI ecosystem: 1) data collection on societal impacts of AI through existing or novel survey products; 2) creation of computational tools and educational materials to encourage use of federal open data, including demographic data, for algorithm auditing and responsible AI practices; and 3) research across agencies on AI decision systems as drivers of social inequality, with a focus on employment, credit, housing, and healthcare.

  • “Enabling Third-Party Audits of Algorithmic Systems with Privacy Enhancing Technologies”: In this short position paper, we explore Privacy Enhancing Technologies (PETs) as a potential enabling intermediary for third-party auditing of algorithmic systems. We assess the utility of two approaches, Differential Privacy (DP) and Secure Multiparty Computation (SMPC), and outline potential directions for further research into the practicality of using these techniques in third-party auditing methodologies (a toy DP example follows this list).
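
To make the first paper's "active semi-supervised learning" component concrete, the snippet below sketches a standard pool-based uncertainty-sampling loop in Python. It is a generic illustration on synthetic data; the scikit-learn model, seed set size, batch size, and labeling budget are placeholder assumptions, not the project's actual pipeline.

```python
# Minimal sketch of pool-based active learning with uncertainty sampling.
# Generic illustration only; dataset, model, and budget are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for block/parcel feature records with binary status labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

labeled = list(rng.choice(len(X), size=50, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]   # unlabeled pool
budget, batch_size = 10, 25                                  # 10 query rounds of 25 labels each

model = LogisticRegression(max_iter=1000)
for round_ in range(budget):
    model.fit(X[labeled], y[labeled])

    # Uncertainty sampling: query the pool points whose predicted
    # class probabilities are least confident (closest to 0.5).
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)
    query = np.argsort(uncertainty)[-batch_size:]

    # In practice these would go to a human analyst; here we reveal the true labels.
    newly_labeled = [pool[i] for i in query]
    labeled.extend(newly_labeled)
    pool = [i for i in pool if i not in set(newly_labeled)]

print(f"labeled {len(labeled)} of {len(X)} records; "
      f"accuracy on full synthetic set: {model.score(X, y):.3f}")
```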
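
Similarly, the bias-identification idea in the SEHSD paper can be pictured as a two-part check: an explainability step that measures how much a model relies on a sensitive attribute, and a disparity check of outcomes across groups. The sketch below uses synthetic data, a generic random-forest model, and permutation importance as a stand-in for the project's XAI and causal-learning tooling; all names and numbers are illustrative assumptions.

```python
# Minimal sketch of XAI-style bias identification: does a demographic model
# lean on a sensitive attribute, and do its predictions differ across groups?
# Synthetic data and placeholder column names only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic records: two ordinary features plus a binary "group" attribute
# that leaks into the label, mimicking a biased data-generating process.
group = rng.integers(0, 2, n)
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = (x1 + 0.5 * x2 + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
X = np.column_stack([x1, x2, group])
feature_names = ["x1", "x2", "group"]

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Explainability step: permutation importance flags how much the model
# relies on each feature, including the sensitive one.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(feature_names, imp.importances_mean):
    print(f"importance[{name}] = {score:.3f}")

# Simple disparity check: positive-prediction rates by group.
pred = model.predict(X_te)
for g in (0, 1):
    print(f"group {g}: positive prediction rate = {pred[g_te == g].mean():.3f}")
```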
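
Finally, the differential-privacy half of the PETs paper can be illustrated with a single noisy audit query: an auditor requests an aggregate statistic and the data holder answers through the Laplace mechanism, so no individual record can be inferred from the reply. The epsilon, bounds, and synthetic decisions below are assumptions for illustration, not a complete auditing protocol.

```python
# Toy differentially private audit query: "what fraction of this group
# received a positive decision?" answered with Laplace noise.
import numpy as np

rng = np.random.default_rng(0)

def dp_mean(values, lower, upper, epsilon, rng):
    """Release a mean via the Laplace mechanism.

    The sensitivity of the mean of n values bounded in [lower, upper]
    is (upper - lower) / n.
    """
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    return values.mean() + rng.laplace(scale=sensitivity / epsilon)

# Synthetic decisions (1 = approved) for one demographic group.
decisions_group_a = rng.integers(0, 2, size=10_000)

true_rate = decisions_group_a.mean()
noisy_rate = dp_mean(decisions_group_a, 0, 1, epsilon=0.5, rng=rng)
print(f"true approval rate: {true_rate:.4f}, DP answer: {noisy_rate:.4f}")
```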
