Jason Roberts

NYC 144 AI Law Overview

New York City Local Law 144, often called the NYC 144 AI Law, is groundbreaking legislation requiring businesses to disclose their use of Artificial Intelligence (AI) and Automated Decision Systems (ADS) in employment decisions. The law significantly impacts the Human Resources (HR) sector, where AI and ADS have been increasingly adopted for functions such as screening, ranking, and evaluating candidates and employees.

Key effects of the NYC 144 AI Law on HR include:

  1. Transparency: The law mandates that companies disclose when and how they are using AI in HR processes such as recruitment, employee evaluation, and other decision-making. This transparency allows applicants and employees to understand why certain decisions were made.

  2. Fairness and Bias Scrutiny: With AI algorithms now required to be more transparent, there is an increased opportunity to examine these systems for bias or unfair practices. This allows companies to address potential issues and ensures that decisions are made equitably.

  3. Third-Party Audits: The law requires companies to have their AI and ADS audited by an independent third party, scrutinizing the data used, how the system learns from that data, and the decisions it produces. This allows for independent verification of the fairness and transparency of the systems in use.

In summary, the NYC 144 AI Law brings about significant transparency and fairness in the use of AI and ADS within HR. It holds companies accountable for their use of such technologies, ensuring that their systems are fair, unbiased, and transparent in their decision-making processes.

Jason Roberts

Best Practices in HR AI Audit


Artificial Intelligence (AI) has revolutionized various aspects of our lives, including the recruitment and hiring process. However, as AI systems become more prevalent, there is a growing concern about potential biases that can inadvertently perpetuate discrimination and hinder diversity and inclusion efforts. To mitigate these risks, independent AI audits have emerged as a crucial practice for ensuring fairness and transparency in the hiring process. In fact, New York City has a new law, in effect as of July 5, 2023, that requires an annual independent audit of AI use in hiring for every provider and every end-using company. That’s right, every company has to conduct an independent audit of the use of computational processes in hiring. This is being followed by legislation in other states and new AI legislation in Europe. In this article, we will outline the best practices for conducting an independent AI audit to remove bias in hiring.

1. Pre-Planning: Define the Audit Scope and Objectives, and Map the Process:

Before embarking on an AI audit, it is essential to clearly define the scope and objectives. This includes identifying the specific AI systems used in the hiring process, the stages of recruitment where bias may occur, and the metrics used to assess fairness and accuracy. Setting well-defined goals will help guide the audit process and ensure that the focus remains on addressing bias effectively.

It is also important to thoroughly map out the hiring processes within the organization. This includes understanding how decisions are made at each stage, from initial screening to final selection, and identifying the specific points where technology, including AI systems, is integrated.

By mapping the hiring processes, auditors gain a comprehensive understanding of how technology influences decision-making. This step helps identify potential areas where bias can be introduced or amplified. It allows auditors to analyze the impact of technology on key decision points, such as resume screening, candidate ranking, and interview selection.

Furthermore, mapping the hiring processes enables auditors to assess the roles and responsibilities of human decision-makers in conjunction with AI systems. It helps determine whether human judgment is adequately balanced with technology-driven decision-making or if there is an overreliance on automated systems.

Understanding the interplay between technology and human decision-making provides crucial context for the AI audit. It allows auditors to accurately evaluate the impact of AI systems and recommend appropriate remediation strategies to remove bias effectively.

2. Select an Independent Auditor:

To ensure an unbiased audit, it is crucial to engage an independent third-party auditor with expertise in AI ethics and fairness. Independent auditors should have no direct involvement in the design, development, or implementation of the AI systems being audited. They should possess a strong understanding of machine learning algorithms, statistical methods, and the legal and ethical implications of bias in hiring.

3. Access Relevant Data and Documentation:

For a comprehensive audit, access to relevant data and documentation is vital. The auditor should have access to training data, testing data, algorithms used, and any other relevant information that sheds light on the inner workings of the AI system. This access will enable the auditor to evaluate the fairness of the system and identify potential bias.

4. Assess Data Collection and Preprocessing:

During the audit, the data collection and preprocessing stages need to be thoroughly examined. It is essential to ensure that the data used to train the AI system is representative of the diverse applicant pool. Auditors should evaluate whether the data collection methods introduce any inherent bias or if the preprocessing steps inadvertently reinforce existing biases.
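As a rough illustration of this representativeness check, the share of each demographic group in the training data can be compared against its share of the overall applicant pool. The group names, counts, and encoding below are hypothetical, a minimal sketch rather than a prescribed audit procedure:

```python
def representation_gap(sample_counts, population_counts):
    """Difference between each group's share of a data sample and its
    share of a reference population (positive = over-represented)."""
    sample_total = sum(sample_counts.values())
    population_total = sum(population_counts.values())
    return {
        group: round(
            sample_counts.get(group, 0) / sample_total
            - count / population_total,
            3,
        )
        for group, count in population_counts.items()
    }

# Hypothetical counts: training data drawn from past hires, compared
# against the applicant pool the tool will actually score.
training = {"group_a": 800, "group_b": 200}
applicant_pool = {"group_a": 550, "group_b": 450}
print(representation_gap(training, applicant_pool))
# group_a is over-represented by 25 points; group_b under-represented
```

A large gap for any group is a signal to revisit how the training data was collected before trusting downstream fairness metrics.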

5. Evaluate Model Training and Validation:

The auditor should examine the model training and validation process to determine if fairness considerations were integrated. This includes assessing the features selected, the algorithms used, and the evaluation metrics employed. The auditor should also examine if any biased proxies, such as zip codes or educational institutions, were utilized, as these can lead to indirect discrimination.
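One lightweight way to probe for such proxies is to measure the correlation between each candidate feature and a protected attribute; a strongly correlated feature may be acting as an indirect stand-in. The feature names, numeric encodings, and the 0.5 threshold below are illustrative assumptions, not requirements of the law:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def flag_proxies(features, protected, threshold=0.5):
    """Return feature names whose |correlation| with the protected
    attribute exceeds the threshold (the cutoff is a judgment call)."""
    return [name for name, values in features.items()
            if abs(pearson_r(values, protected)) > threshold]

# Hypothetical encoded data: zip_code_income tracks the protected
# attribute closely; years_experience does not.
features = {
    "zip_code_income": [1, 1, 2, 8, 9, 9],
    "years_experience": [3, 7, 2, 5, 4, 6],
}
protected = [0, 0, 0, 1, 1, 1]
print(flag_proxies(features, protected))  # ['zip_code_income']
```

Correlation is only a first pass; a flagged feature warrants closer review rather than automatic removal, since it may also carry legitimate signal.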

6. Analyze Decision-Making and Outcome Metrics:

The audit should analyze the decision-making process of the AI system, specifically focusing on the criteria used to make hiring decisions. The auditor should investigate whether the system is inadvertently favoring or discriminating against certain groups based on protected attributes (e.g., race, gender, age). Outcome metrics, such as applicant demographics and the success rates of different groups, should also be evaluated for potential bias.
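New York City's implementing rules quantify this kind of outcome analysis with an "impact ratio": each group's selection rate divided by the rate of the most-selected group. The sketch below uses hypothetical counts, and the 0.8 reference line comes from the EEOC's four-fifths guideline rather than the NYC law itself:

```python
def impact_ratios(selected, applicants):
    """Selection rate per group divided by the highest group's rate,
    in the style of a NYC Local Law 144 bias-audit report."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top_rate = max(rates.values())
    return {g: round(rate / top_rate, 3) for g, rate in rates.items()}

# Hypothetical audit counts per demographic category.
selected = {"group_a": 40, "group_b": 18}
applicants = {"group_a": 100, "group_b": 90}
ratios = impact_ratios(selected, applicants)
print(ratios)                                # {'group_a': 1.0, 'group_b': 0.5}
print({g: r < 0.8 for g, r in ratios.items()})  # flag ratios below 4/5
```

A ratio well below 0.8 for any group does not prove illegal discrimination on its own, but it is exactly the kind of disparity an auditor should investigate and document.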

7. Propose Remediation Strategies:

Upon identifying bias or areas for improvement, the auditor should propose remediation strategies. These strategies may involve modifying the training data, adjusting the algorithmic design, or changing the decision-making process to reduce bias and enhance fairness. The proposed recommendations should be practical and aligned with legal and ethical guidelines.

8. Monitor and Iterate:

After implementing the suggested remediation strategies, it is essential to continuously monitor the AI system to ensure ongoing fairness. Periodic audits should be conducted to identify any new biases that may arise due to changes in data or system updates. Auditors should work alongside organizations to create a feedback loop, allowing for continuous improvement and refinement of the AI system.
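The monitoring loop described above can be sketched as a periodic re-check of the same audit metrics, alerting when any group's impact ratio drops below an agreed threshold. The period labels, group names, and the 0.8 threshold are illustrative:

```python
def audit_alerts(snapshots, threshold=0.8):
    """Scan periodic impact-ratio snapshots and return every
    (period, group, ratio) that falls below the threshold."""
    return [
        (period, group, ratio)
        for period, ratios in snapshots
        for group, ratio in ratios.items()
        if ratio < threshold
    ]

# Hypothetical quarterly re-audit results.
snapshots = [
    ("2024-Q1", {"group_a": 1.0, "group_b": 0.91}),
    ("2024-Q2", {"group_a": 1.0, "group_b": 0.72}),
]
print(audit_alerts(snapshots))
# only 2024-Q2 / group_b falls below 0.8
```

Feeding each alert back into the remediation step from Section 7 closes the loop, so fairness is maintained as data and models change rather than verified once and forgotten.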
