Testing times

A few weeks back I wrote a post called the Appliance of Science following local press coverage about the use of psychometric testing tools in the public sector and a comment on Twitter from the Head of the PSA that testing was a “fake science.” As a result of this blog, the PSA’s Brenda Pilott offered to share the PSA view of psychometric testing in a forthcoming article they were publishing for their members which I subsequently received (thanks Brenda).


This has now been published here (pages 11-14). It covers four full pages and I am sure you will agree it is about as one-sided and blinkered as an article could possibly be (sorry Brenda). It’s an all-out attack on psychometric or personality testing. While it’s a shocking read, I’m not here to bury the PSA, and there are some interesting points to note.

For example, the article talks about the recent Employment Court decision that found against an employer who had used a psych test to make a redundancy decision. I don’t know the full details, but it appears the employer disregarded previous employment history and relied solely or heavily on the test results. Disregarding previous service and performance is always going to land you in trouble, whatever the circumstances in which someone loses their employment.

The history of psychometric testing is largely dismissed by focusing mainly on Myers-Briggs and comparing it to a teen magazine survey. I have to say I have no problem with rubbishing Myers-Briggs, which is not a tool I would ever use in a recruitment or development context. It’s okay as a fluffy bit of teambuilding, a bit of fun, but there are plenty of other tools out there that do that sort of thing better. It reminds me of the sort of surveys they used to have on Facebook of the “Which Muppet are you?” ilk.

An HR academic is wheeled out (a job probably as far removed from the reality of modern HR as you can get) who offers the view that it’s just HR trying to be a “much more serious management function…an attempt to try and ape the harder sciences.”

There is a real-life example of a PSA member who missed out on a job with a Government department after testing. He arrogantly claims he was given every indication in the interview that “the job was his” but didn’t get it after completing testing and a feedback discussion. The article dryly remarks that “Jeff thinks he made a bad mistake in telling the [assessment] company he thought their tests were rubbish.” Do you think so, Jeff? Sorry, being well qualified doesn’t guarantee you a job ANYWHERE. And being a smart arse certainly doesn’t help.

And then they go on to quote several PSA members’ views on testing and to summarise how much New Zealand’s Government departments have spent on testing over the last year. Overall, it’s a thorough and comprehensive hatchet job in my opinion. Not an HR practitioner, Psychologist or counter-argument anywhere in sight. Wellington’s Dominion Post has been playing along, giving them substantial paper time on the subject. A selection of articles is here and here.

As I said, I’m not rubbishing the PSA here. You can read the material and make your own mind up. What bothers me more is this: what the hell are we doing to people to give them such negative and extreme views of psychometric and cognitive testing?

Perhaps an example of my own might give us a clue. A few years ago I was put forward for a senior HR role at one of New Zealand’s best known companies. I was interviewed by a General Manager (not an HR person) who then asked his HR team to organize testing. I was sent a link to a personality questionnaire I had not come across before (there was no briefing, just an email) and I duly completed it.

Then the recruitment agency rang. They had been told the results showed I was negative and had no leadership skills, so the GM was not going to pursue my application. That was it. No feedback, no discussion, dismissed out of hand. The recruiter said she did not believe this was an accurate description of the person she had interviewed, or of what my work history suggested, and asked whether they would consider an alternative, more established test administered by a Psychologist. The GM reluctantly agreed, and I completed it with very different results. It made no difference.

The GM decided he couldn’t get past the first profile and that was that. I asked who had interpreted the results and was told it was a junior recruitment consultant within the company, i.e. someone not qualified to do so. He or she had simply printed the standard report. I emailed the guy directly to say I thought it was unfair that they had not discussed the results with me or taken any other factors into account; I had neither seen nor been able to discuss the profile. He didn’t even afford me the courtesy of a reply.

I have used many psychometric tools over the years and have been trained to interpret several of the major ones. I am a big believer in testing but I understand the key principles:
• Testing is not the be-all and end-all. You use it in association with other assessment methods, not in isolation. It confirms or informs your thinking; it doesn’t drive it.
• Personality profiles are a starting point for a discussion. Printed reports show only part of the picture, not the whole of it. You learn more about people by letting them talk about what the results are showing.
• All candidates should be offered feedback.
• You don’t put psychometric testing in the hands of people who are not trained to administer and/or interpret the results.

So why are we getting it so wrong? Have we become so complacent about testing that these principles are not being observed? Do assessment providers no longer care how their tools are being used now that it’s all done online rather than face-to-face? Do HR people not know or care how the results are interpreted or the context in which they are used? Do recruiters not care about the impact the results have on candidates?

Or are people taking tests unhappy/suspicious because these tools on the whole are very good at sorting the wheat from the chaff and making it much harder for candidates to “bluff” their way into a job? I’ve come across many candidates who interviewed well only to fail abysmally at cognitive testing or prove to be unsuitable once a personality test was administered.

I once had a Psychologist help me identify a blatant liar who had convinced a senior, experienced recruiter and two senior managers that he was the real deal, with a global executive career behind him. It was an executive-level role and he was the preferred candidate. His testing, however, told another story, and she alerted me to the fact that she was suspicious of his lack of engagement when she tried to discuss the results with him. Something didn’t feel right. She suggested we needed to dig deeper.

A request for an additional reference from the CEO of his previous employer was met with a New York number and a specific time to ring. The recruiter deliberately rang an hour later. Turned out the “CEO” was out delivering pizza. We terminated the process immediately but the candidate successfully secured a similar senior role with another New Zealand employer. I doubt that organization either tested him properly or checked his CV in any detail.

Similarly, I’ve seen many candidates appointed to roles where the test results showed there would be issues, but managers chose to ignore the advice, and there invariably were: exactly the issues the testing predicted. Expensive mistakes to make.

And that’s the irony. Despite the tools, some companies are still making as many poor hiring decisions as successful ones, either because they don’t use the information properly or because they don’t test at all. What is it they say about bad workmen always blaming their tools? I would suggest the tools are not the problem if used properly. It’s many of the people using them that are.

9 thoughts on “Testing times”

  1. Gwynn says:

    Great article, thanks. As an HR manager in one of the Govt departments that has spent a large amount of money on testing over the past year, I can quite honestly say that the investment has been well worth it. We use the tests as part of an assessment centre, alongside a range of other tools, when we need to be 100% sure we have the right person for the job. We usually use the test results, along with all the other information we get from the assessment centre, as a means of informing the interview questions. This way we believe we get a true assessment of a candidate and can make a more informed recruitment decision.
    I was a little disappointed in the PSA approach, as I believe there is value in doing the tests. But, as the post suggests, they need to be used in conjunction with other assessment methods, and they should provide part of the picture, not all of it.

    1. hrmannz says:

      Thanks Gwynn. Yes, I am disappointed by the strength of their anti-testing message, which shows a lack of basic understanding of what these tools can do. But clearly people need to be better educated about how and why they are used.

  2. Iain MacGibbon (@nzheadhunter) says:

    Excellent article Richard. Balance and looking at the whole picture is the key. As you state, testing in isolation can be foolish, but it certainly adds stronger validity than the interview (competency-based or not) or, even worse, the “I know what makes a successful candidate because I have a good gut instinct” approach.
    It is incredibly unprofessional to ask someone to complete an assessment process and then give them no feedback. The candidate has sweated through ability and personality profiling and deserves to know. We find that the “debrief” of the candidate is almost as revealing as the testing itself. Some people have no real insight into themselves at all.
    The PSA’s head-in-the-sand approach to testing is heading back to promotion by tenure rather than merit!

  3. annetynan says:

    Interesting blog on the article in the PSA Journal – good to see that this is available publicly as so many require a subscription.

    Professor Spillane of Macquarie University in Sydney states that personality tests “are as discriminatory as age, religion, gender or anything else”, so I decided to see whether the United Nations had expressed views on the issue. It is always useful to stand back from local issues to obtain a more global perspective.

    This is an extract from the document ‘Staff recruitment in United Nations system organizations: a comparative analysis and benchmarking framework: The recruitment process’, United Nations Joint Inspection Unit, Geneva, 2012.

    (Available as JIU_NOTE_2012_2_English.pdf)

    Other assessment methods
    59. Other assessment methods include oral presentations, written outputs to assess technical knowledge, review of performance appraisals, and psychometric assessment of personality, cognitive ability, work styles and motivation. UNDP uses psychometric tests for managers, and IAEA uses them for director-level posts. Research indicates that combining cognitive ability tests with personality tests will provide a better prediction of work performance. Such tests are useful for all positions, but in particular for leadership roles, and should be administered by professionals.(10)

    (10) State Government of Victoria, Department of Planning and Community Development, Best practice recruitment and selection – a tool kit for the community sector, Victoria, Australia, 2010, pp. 35-37

    The link for the above reference was outdated; the document is now available in both Word and PDF formats at:
    http://www.dhs.vic.gov.au/for-business-and-community/not-for-profit-organisations/workforce-capability-tools/workforce-capability-framework-recruitment-and-retention
    (PDF: Best-practice-recruitment-and-selection-a-tool-kit-for-the-community-sector-1-July-2013.pdf, pp. 40-42)

    It was interesting to check out Professor Spillane’s background – I am always looking for interesting sources. As well as opposing personality tests, he also “criticises the fraud of mental illness” and describes ADHD as a myth. http://robertspillane.info/

    I leave readers to draw their own conclusions about it all.

  4. Jon Court says:

    So many variables there. The quality of the test; the quality, skill and attitude of the people interpreting the result and assessing the impact on the role; the insult of being tested and judged by some unknown person of questionable skill; the anxiety and fear some people have of testing in any way, shape or form; the testing environment; the insufferable surety that the test is capable of catching out the professional liar (or serial killer); the gaping exposure to prejudice and bias; the fact that two tests in two days can have completely different results; and so on….

    It would take a very senior and objective crew to even begin to unravel the complexity; and how do you test the tester? These tests are akin to the horoscopes you read in trashy newspapers when put in the hands of nearly anyone; and I include in this assessment the various other numeracy, verbal and logic tests that abound. They are so uninterpretable as to be worse than nothing and would generally have no bearing on someone’s ability or cultural fit for a role. There was some comment about the cost that not testing can invite when someone proves wrong for a role; but what about the cost of missing out on someone awesome because you demeaned them with a nutty bunch of tests?

    For the record I’ve done a few of these myself and requested them when company policy dictates it. I have yet to find an example where I learned something I didn’t already know, and I’ve yet to be consistent in my own results.
