At its core, I would say the interview is a recorded conversation. You can add structure to it that suits your purpose, but too much structure puts you at risk of missing novel insights and valuable corrections – the questions you thought were most relevant to ask might quickly be thrown away once you actually start using your interview “script”.
My background in Business Intelligence taught me to value structured quantitative data, but I want to grab the opportunity as a student to experiment with more qualitative methods – and only extract quantitative data from them when I come across it. There is still a lot of magic and mystery to qualitative methods for me, especially since I used to be much more focused on order and structure in data.
So far, for the user interface course I took last year and Design for Use this year, I have tried both fairly structured (but open-ended) and looser interview guides. The takeaway so far is that I do not yet have the interview experience to build a high-quality interview guide before it gets field tested, and that the first few interviews often provide a lot of new insight!
It has been very interesting to see which changes we have had to make to the interview guides in these two courses. Some questions become unnecessary – the interviewee always answers them earlier in the interview. Likewise, some questions become completely irrelevant because they were based on preconceptions and assumptions that are proven wrong, or cover areas that become less interesting once you start talking to people. Both of these points make the interview highly valuable as an initial data gathering tool, to be used before more high-cost methods like focus groups or questionnaires. You can test your initial hypothesis and gain insights that help you radically change direction early on.
Ok, so interviews are also high cost if you intend to use them on a large chunk of your population, at least measured strictly by time spent per respondent – but perhaps that is not a smart or economical use of the method anyway? If the purpose is to inform a design process rather than scientific research, you can easily make a small but broad interview selection by mixing personality types from different life phases.
A good example of interview value came last year, in the user interface course. We ran a big questionnaire with 140 respondents before we had interviewed anyone. This turned out to be pretty wasteful, as half of the 30 or so questions (that is over 2000 data points!) were proven irrelevant to the project after just three interviews! We should really have done it the other way around. In fact, for this year's project, we chose to skip the questionnaire altogether in the initial phase, as the more qualitative interview and focus group methods provided enough insights to start a first iteration.
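Just for fun, the scale of that waste is easy to sanity-check with a quick back-of-the-envelope calculation (the question count of 30 is the rough figure from above, so treat this as an estimate, not an exact tally):

```python
# Rough estimate of wasted questionnaire effort, using the figures from the text.
respondents = 140
questions = 30                # "30 or so" questions in the survey
irrelevant = questions // 2   # about half turned out to be irrelevant

wasted_data_points = respondents * irrelevant
print(wasted_data_points)     # 2100 – indeed over 2000 wasted data points
```

Three interviews would have cost us a few hours; those 2000-plus data points cost us far more to collect and analyze.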
I have of course also had a lot of informal topical discussions – almost in interview style – held simply because I am doing the course, without any guide or preset questions at all. Some of these were never planned as interviews, and so were not really recorded except for the most interesting insights they gave. Oh, and I must mention that I also see the “user test debrief” as a perfect short interview setting! I will write more about the user test in a later blog post.
To close off, I would like to mention a couple of interesting limitations of the interview. One point, mentioned in design literature like Design-Driven Innovation (Verganti, 2009), is that users seldom know what they really need in the future. If you work on an assignment where you need to define future meanings or more radical innovations, the interview becomes an information gathering tool far more removed from the finished design than it was in this project. I have yet to try it in such a context, but I hope to do that a lot in the future.
Another context in which the interview is limited is when your task is too trivial. For instance, working on the calendar project, interviewees gave similar responses, and the outcomes became predictable. The same can happen with user testing. Again, check out my next post for more on that! Until then – keep observing and keep asking pointed questions to your users and stakeholders :)