Creating content-specific knowledge tests is a time-intensive proposition. When the company using them does not anticipate many openings, the return on investment of those tests can be questionable. For these reasons, I am taking a foray into crowd-sourced tests. I’ll be using one such platform, hopefully, for selecting software developers. I don’t mean to be coy about the website, but I’d like to have some data and experience with the process before naming the company.
How do these tests work? Multiple-choice items are written and submitted to the platform. Then people (applicants, incumbents, and the curious) take them online. Each item’s difficulty is calibrated from the responses of the crowd taking it. Using those calibrated difficulties, the platform’s algorithm keeps giving the person items until it can accurately estimate his or her skill level. It is very similar to computer adaptive testing models.
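The adaptive loop described above can be sketched with a toy one-parameter (Rasch) item-response model. To be clear, this is only an illustration of the general technique, not the platform’s actual algorithm, which isn’t public; the `item_bank`, `answer_fn`, and the simple stepwise ability update are all hypothetical simplifications.

```python
import math

def rasch_prob(ability, difficulty):
    """Probability of a correct response under a 1PL (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def adaptive_test(item_bank, answer_fn, max_items=10):
    """Toy adaptive test: repeatedly pick the item whose difficulty is
    closest to the current ability estimate (the most informative item),
    update the estimate after each response, and stop after max_items.

    item_bank: list of item difficulties (hypothetical values).
    answer_fn(difficulty) -> bool, simulating a test taker's response.
    """
    ability = 0.0          # start at an average ability estimate
    step = 1.0             # initial size of the ability adjustment
    remaining = list(item_bank)
    for _ in range(min(max_items, len(remaining))):
        # Choose the item whose difficulty is nearest the current estimate.
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        correct = answer_fn(item)
        # Simple stepwise update; real systems fit a maximum-likelihood
        # ability estimate over all responses instead.
        ability += step if correct else -step
        step = max(step * 0.5, 0.25)  # shrink steps as evidence accumulates
    return ability
```

In practice, adaptive platforms typically stop when the standard error of the ability estimate falls below a threshold rather than after a fixed number of items, but the core idea is the same: each answer narrows in on the test taker’s skill level, which is why fewer items are needed than on a fixed-form test.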
What’s great about it?
- It is free (for now) to both the user and the company. This makes for a high ROI, assuming it’s valid (more on that in a minute).
- There is a high level of sophistication in the scoring. This reduces the number of items a person has to take.
- Since everyone takes a different set of test items, it maintains a high level of test security. A friend may pass on to me certain questions, but there is no guarantee that I’ll see those items.
- Results can be fed directly into an applicant tracking system.
- A wide base of people can write items, so the test is likely to cover more content than if it were written by one set of job experts.
What makes me hesitant?
- I am concerned about the quality of the test items. Some items may appear difficult simply because they have more than one defensible answer. Of course, this could be fixed via user comments.
- While the items appear to measure job content (I’m no expert for a lot of them), it doesn’t mean that they measure those things that really distinguish between high and low performance. Of course, this is a problem for all content-validated tests, but it’s of particular concern here as each test taker will be getting different items. For this reason, I’m not going to use the platform to select candidates until we collect enough test data to do a criterion-related validation study (statistically show that scores on the test are correlated with job performance) and set test norms.
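The core statistic in a criterion-related validation study is simply the correlation between test scores and job-performance ratings. Here is a minimal sketch of that computation; the function name and the sample data in the test are hypothetical, and a real study would also report sample size and statistical significance.

```python
import math

def pearson_r(test_scores, performance_ratings):
    """Pearson correlation between test scores and performance ratings.

    A substantial positive correlation is the evidence a
    criterion-related validation study looks for.
    """
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(performance_ratings) / n
    cov = sum((x - mx) * (y - my)
              for x, y in zip(test_scores, performance_ratings))
    sx = math.sqrt(sum((x - mx) ** 2 for x in test_scores))
    sy = math.sqrt(sum((y - my) ** 2 for y in performance_ratings))
    return cov / (sx * sy)
```

A correlation near +1 means test scores track job performance closely; near 0, the test tells you little about who will perform well, no matter how job-relevant its content appears.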
I am excited about this new adventure in testing. There are computer-adaptive approaches to some tests, primarily in the educational realm, but this is the first that I know of for such specific tests. If successful, it will streamline the implementation of skills/knowledge tests for several of my clients.
What are your thoughts on using crowd-sourced tests?
For more information on legal pre-employment testing systems, skills assessment and talent management, please contact Warren at 310 670-4175 or [email protected]