For workplace diversity, the ‘ideal’ is an established meritocracy that judges the individual purely on merit. It’s the objective judgement of capability, free from bias, which could promote organic diversity without the need for quotas or targets.
However, the reality is a constant reminder that there is work to be done. And while there is general agreement on the benefits of workplace diversity (such as greater creativity and innovation), it's important that we recognise we haven't got it sorted just yet. While the number of campaigns depicting diverse teams bursting with happiness suggests we've cracked it, your diversity data may paint a slightly different picture. One thing is clear: we have enough data to make informed selection decisions, but it's the quality and application of this data that's critical.
A challenge in achieving diversity is highlighted by its own definition: the celebration of individual difference. Individual difference is a beautiful thing – for a start, it underpins the majority of psychological research – but it can also crop up in candidate selection in the form of subjectivity or bias, as candidates and recruiters alike bring their own unique interpretations.
To cope with this, we have an opportunity to re-focus our ideals on a more realistic objective, that of ‘inclusive assessment’. This recognises how different groups can be impacted by bias across any tool, exercise or recruiter. It also aims to reduce this impact. While the Equality Act 2010 clearly defines discrimination, it’s only by exploring the performance data that we see the subtleties of bias and can begin to identify methods to reduce it.
1. Assessment criteria should come from a cross-section of employees
Assessment criteria (e.g. competencies) should be developed with a cross-section of people from your organisation. These people should provide the data that shapes your criteria, the analysis of that data must be robust, and the results must be validated with a representative sample. With data from diverse groups forming an inherent part of the criteria itself, you're reducing potential bias before you've really begun. Where this is overlooked, you run the risk of a small few setting the agenda for the majority.
2. Use multiple assessment methods
The assessments you use should be driven by your criteria. In addition, job-relevant tools, like Situational Judgement Tests, provide a role preview that levels the playing field for different candidates by providing deeper insight into the job itself. Application is also important. We know high cut-off scores for ability tests can disproportionately impact different applicant groups by setting unnecessarily high expectations. Instead, using multiple methods that assess multiple criteria is the real golden ticket. It's a win-win situation: you provide greater opportunity for candidates to shine, whilst also capturing more data to inform your selection decisions.
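The effect of a high cut-off score can be illustrated with a quick sketch. The figures below are entirely hypothetical – two applicant groups whose average test scores differ by a modest 0.3 standard deviations – but they show how raising the cut-off shrinks the lower-scoring group's selection rate faster than the higher-scoring group's, widening the gap between them:

```python
from statistics import NormalDist

# Hypothetical score distributions for two applicant groups.
# The 0.3-SD mean difference is illustrative, not drawn from real test data.
group_a = NormalDist(mu=100, sigma=15)
group_b = NormalDist(mu=95.5, sigma=15)  # 0.3 SD lower on average

for cutoff in (90, 100, 110):
    pass_a = 1 - group_a.cdf(cutoff)  # proportion of group A above the cut-off
    pass_b = 1 - group_b.cdf(cutoff)  # proportion of group B above the cut-off
    ratio = pass_b / pass_a           # selection-rate ratio between the groups
    print(f"cut-off {cutoff}: A {pass_a:.0%}, B {pass_b:.0%}, ratio {ratio:.2f}")
```

As the cut-off climbs from 90 to 110, the selection-rate ratio falls steadily, even though the underlying difference between the groups never changed – which is why a cut-off set higher than the job actually requires is a diversity risk as well as a recruitment one.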
3. Reduce bias and error through training
Direct observation of behaviour is both the most powerful measure of performance we know, and the most prone to bias. This is because the performance data we extract relies on human judgement. Training recruiters on best practice builds awareness of potential bias and promotes consistency within and across different assessments, averaging out the impact of individual bias across the wider process.
4. Keep a close eye on the data
Capturing diversity data does not just mean recording what religion, if any, someone practises – it also means tracking the progress of candidate groups across your selection process. Whilst the former suggests how reflective your applicant pool is of the wider job-hunting population, the latter shines a light on a specific tool that may impact one applicant group more than others.
To avoid any major issues arising, it is important to track both demographic and candidate performance data. Without this kind of analysis, you risk candidates challenging the fairness of your selection decisions. It’s also likely that if the data isn’t monitored, the diversity of your applicant pool will vanish and you’ll be left with a homogenous group reaching the final assessment centre.
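One common way to monitor this kind of stage-by-stage data is to compare each group's selection rate against the best-performing group's rate, flagging any ratio below 0.80 – the widely cited "four-fifths" guideline. The sketch below uses invented group names and counts purely for illustration:

```python
# Hypothetical funnel data for one selection stage:
# group name -> (applicants entering the stage, applicants passing it).
stage = {
    "group_1": (200, 120),
    "group_2": (150, 66),
    "group_3": (50, 28),
}

# Selection rate per group, and the best rate as the benchmark.
rates = {g: passed / entered for g, (entered, passed) in stage.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    # Ratios under 0.80 flag the stage for closer review of that group.
    flag = "review" if ratio < 0.80 else "ok"
    print(f"{group}: rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A flagged ratio isn't proof of discrimination, but it tells you exactly which stage and which tool to scrutinise – which is far easier than discovering at the final assessment centre that your diverse applicant pool has quietly disappeared.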
Whilst some aspects of a true meritocracy may be hard to achieve, our awareness of diversity and the value we place on performance data is a hugely positive sign. As best-practice assessment evolves and we continue to monitor the progress of different applicant groups, we’ll further reduce the impact of bias on all applicants and deliver a truer form of ‘inclusive assessment’.
By James Lewis, Cubiks Consultant