Bias, AI ethics
and the HireVue Approach

Measuring the impact of software

HireVue recognizes the impact that our software can have on individuals and on society, and we act upon this responsibility with deep commitment. The following principles guide our thoughts and actions as we develop artificial intelligence (AI) technology and incorporate it into our products. Technology is changing rapidly, and HireVue's practices will continue to evolve as we work with our customers, job-seekers, technology partners, ethicists, legal advisors, and society at large to ensure we are always holding ourselves to the highest possible standards.

HireVue AI ethical principles

1. We are committed to benefiting society

We recognize that our software affects individuals, companies, and potentially society at large, and we are committed to understanding that impact and acting thoughtfully to do the greatest good. Our goal is to build systems that augment and improve human decision-making. We are focused on helping our customers find the best people to work for their companies, and on helping them grow and develop those people after hire.

2. We design to promote diversity and fairness

Because HireVue uses AI technology in our products, we actively work to prevent the introduction or propagation of bias against any group or individual. We will continue to carefully review the datasets we use in our work and ensure that they are as accurate and diverse as possible. We also continue to advance our abilities to monitor, detect, and mitigate bias. We strive to build teams from diverse backgrounds with diverse knowledge, experiences, and perspectives to best represent the people our systems serve.

3. We design to help people make better decisions

Our goal is to develop solutions that combine the strengths of machines and people to improve decision-making. Our systems are designed to consume complex data to augment and improve human decision-making. We carefully design our products to provide a clear understanding of what is being predicted, the confidence in the prediction, and an appropriate explanation of the data. We also provide feedback mechanisms that allow human input to build trust and improve the overall functioning of our systems.

4. We design for privacy and data protection

We are committed to incorporating data privacy at each step of our technology development and deployment process. We will provide opportunities for notice and consent, encourage architectures built on a foundation of privacy safeguards, and provide appropriate transparency and control over the use of data, consistent with best practices and legal standards.

5. We validate and test continuously

All of HireVue’s algorithms are tested prior to use to validate both that they do what they are intended to do and that they generate unbiased outputs. Ongoing monitoring is performed to the fullest extent possible to ensure that real-world behavior matches the expected behavior.

How HireVue works to prevent and mitigate bias in assessment algorithms

In 1978, the Uniform Guidelines on Employee Selection Procedures were jointly adopted by the U.S. Civil Service Commission, the U.S. Department of Labor, the U.S. Department of Justice, and the U.S. Equal Employment Opportunity Commission (EEOC) to provide a uniform set of principles governing the appropriate use of employee selection procedures and employment-related decisions. Consistent with the guidelines, our mission is not just to avoid bias in the inferences and employment decisions made based on our technology, but to use the technology to actively promote diversity and aid in the achievement of equal opportunity for everyone regardless of gender, ethnicity, age, or disability status.
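One concrete test the Uniform Guidelines are commonly operationalized with is the "four-fifths rule": a selection rate for any protected group that is less than 80% of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The sketch below is illustrative only; the group names and counts are assumptions, not HireVue data:

```python
def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratios(groups):
    """Compare each group's selection rate to the highest group's rate.

    groups: dict mapping group name -> (selected, applicants).
    Returns a dict mapping group name -> impact ratio; ratios below 0.8
    flag potential adverse impact under the four-fifths rule.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical applicant pools (illustration only)
groups = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(groups)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Here group_b's selection rate (0.30) is only 62.5% of group_a's (0.48), so it falls below the four-fifths threshold and would warrant further investigation.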

HireVue’s data scientists work alongside a team of industrial-organizational (IO) psychologists with decades of assessment and adverse-impact testing experience. Consistent with generally accepted legal, professional, and validation standards established within the field of psychology, our data scientists and IO psychologists continuously evaluate the degree to which evidence and theory support the interpretations and employment decisions made based on assessment results, while ensuring protected groups are not adversely impacted. HireVue has accumulated a significant and growing body of validity evidence providing a sound scientific basis for the use of our assessments to aid in job-related decisions while minimizing the potential for adverse impact.

Our data scientists and IO psychologists build HireVue Assessment algorithms in a way that removes from the algorithm's consideration data that contributes to adverse impact, without significantly reducing the assessment’s predictive accuracy. The result is a highly valid, bias-mitigated assessment that helps to enhance human decision-making while actively promoting diversity and equal opportunity regardless of gender, ethnicity, age, or disability status.

The HireVue assessment model development process

HireVue does not offer a one-size-fits-all algorithm that evaluates all candidates for all job types in the same way. Each assessment model is purpose-built for a specific job role after following these critical steps:

  • Ensure that there is a clear performance indicator for the job role that differentiates the strongest performers from the least promising.
  • Ask the right questions to elicit responses that can be measured and that are pertinent to predicting job performance based on IO psychology research.
  • Train the model to notice everything that is relevant in the interview (what someone says and how they say it), and build a model that uses only the data points that help predict success in the job.
  • Rigorously audit the algorithms to ensure that they aren’t adversely impacting protected groups.
  • Remove features that may cause biased results.
  • Re-train the model.
  • Re-test the model.
  • Repeat these procedures as needed so the algorithm evolves with the customer’s data and changing requirements of the job.
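The audit, remove, re-train, re-test loop described above can be sketched as follows. All of the functions here are hypothetical stand-ins for illustration; this is not HireVue's actual implementation, and the training, auditing, and accuracy callables are assumptions:

```python
def build_bias_mitigated_model(features, train_model, audit_adverse_impact,
                               accuracy, min_accuracy):
    """Iteratively re-train a model, dropping features flagged by a bias audit.

    features             -- candidate input features for the model
    train_model(feats)   -- hypothetical: fits and returns a model
    audit_adverse_impact(model) -- hypothetical: returns features linked
                            to adverse impact against protected groups
    accuracy(model)      -- hypothetical: validation accuracy score
    min_accuracy         -- floor below which predictive accuracy
                            would be considered significantly impacted
    """
    features = list(features)
    while True:
        model = train_model(features)
        flagged = [f for f in audit_adverse_impact(model) if f in features]
        if not flagged:
            return model  # audit passed: no remaining flagged features
        remaining = [f for f in features if f not in flagged]
        if accuracy(train_model(remaining)) < min_accuracy:
            return model  # removing more would cost too much accuracy
        features = remaining  # drop flagged features and re-train

# --- Toy demonstration with stand-in functions (illustration only) ---
def _train(feats):
    return tuple(sorted(feats))       # the "model" is just its feature set

def _audit(model):
    return ["feat_flagged"] if "feat_flagged" in model else []

def _accuracy(model):
    return 1.0                        # pretend accuracy is unaffected

final = build_bias_mitigated_model(
    ["feat_flagged", "feat_ok"], _train, _audit, _accuracy, min_accuracy=0.5)
```

In the toy run, the audit flags `feat_flagged`, the loop drops it, re-trains, re-audits, and returns a model built only on the remaining feature.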

When HireVue creates an assessment model or algorithm, a primary focus of the development and testing process is finding and mitigating factors that may cause bias (or “adverse impact”) against protected classes. The HireVue team carefully tests for bias related to age, gender, or ethnicity throughout the process — before, during, and after development of the assessment model. Thorough testing is done prior to deployment and continues as part of an ongoing process of prevention.

Once candidates have completed their assessments, recruiters see a list prioritized by assessment scores and can choose which candidates advance from the screening stage to the person-to-person interviewing stages. Skilled recruiting specialists and hiring managers decide which candidate to hire after the completion of multiple stages in the hiring process.

For more information on the work we do to mitigate bias and promote diversity, visit the HireVue blog or email