I had a chance to present on innovations in assessment at the annual industrial and organizational psychology conference (SIOP) in Houston a couple of weeks ago.  The panel was fun and engaging; I had a good time and learned a lot during those 90 minutes.  The rest of the conference left me with a mix of hope and concern.  It felt a bit more like an industry trade show than a conference, and it left me with two main concerns about the field:

1)  We are way too dependent on statistical techniques.  Don't get me wrong: correctly interpreting data is how every scientific and business field moves forward.  However, I was listening to a newly minted Ph.D. present on manager assessments.  His statistical analysis completely clouded the results (even a young tenured professor in the audience couldn't figure it out), and it was divorced from reality.  There was no way to get from what he was explaining to something meaningful in the use of the assessment.  That might get the presenter a job teaching others about convoluted statistics, but nowhere else.  And it wasn't just this presenter.  I saw several presentations and papers where the statistical techniques took precedence over the ideas (much as in our peer-reviewed journals).  The value of using these analyses was lost on me.  I'm guessing that every new statistical breakthrough was met with similar skepticism, and maybe these techniques will eventually be the norm, but I doubt it.  Let's teach graduate students how to think and come up with creative new ideas rather than how to twist numbers.

2)  Speaking of new ideas, this was not the place to find them.  There were lots of papers and presentations on slicing what we already know into smaller and smaller pieces, but nothing anywhere close to a breakthrough.  This was a theme that came up in my conversations with others at the conference.  Some companies presented on "big data," which translated to their plans to link their recruiting, assessment, talent management, performance, and salary databases.  If they had any interesting findings, they weren't sharing them (yet).  Most disconcerting is the lack of progress on defining successful leadership outcomes.  In most ways, we're still in the "I know it when I see it" phase of understanding leadership, which makes it seem very complex.  On the other hand, studies of it consistently come back to basically the same conclusions (see this article on Google's study of effective leadership, brought to my attention by Dennis Adsit [@DennisAtKombea]), which makes leadership seem simple.  We can quibble about the difference between good management and good leadership, but what's obvious is that there is nothing new under the sun regarding the behaviors required to make either happen.  This is reassuring from an assessment point of view, but a bit troubling when thinking about the field as a whole.

Having said that, there is great interest in the field all over the world, and the study of people at work is growing, keeping pace with changes in technology, connectivity, and how we live our lives.  The gap between science and practice in many areas is closing, to the benefit of both.

For those in the field, I'm sure you had different experiences, and I would love to hear about them.  For users of testing and assessment services, you can rest assured that the underpinnings of evaluating talent are still solid.

For more information on leadership and talent management, please contact Warren at 310 670-4175 or warren@allaboutperformance.biz.