Considerations for the Qualitative Researcher
Tell us about yourself and what drew you to your research topic.
I am Laurie Hinze, and I am an assessment specialist. I had a topic that I was very interested in: faculty feedback, which came out of my work as an assessment specialist with faculty. I graduated in 2013, and I am still involved in ongoing course development and assessment specialist work on faculty feedback.
Controlling for Initial Bias
How did you assess for initial bias as you homed in on the topic and started considering research design?
The conversations that I've had with faculty over time really dealt with the assessments in the course, or the assignments and the scoring guides. And in the online setting, there's always a challenge of providing effective feedback for the learners but doing it in an efficient manner. So my initial thought was, well, what scoring guide do we use in order to make that efficient and effective? And as I went through a number of steps of the process, I learned there was a lot more to it than that. But that was my interest: how can we help faculty, how can we help learners, and how can we help the field, really, in online education, by identifying an effective and efficient tool for providing feedback? My initial focus was on the scoring guides, but as I went through it a little bit more, that shifted, and now it's definitely more the course design component of it. But that's the topic, faculty feedback.
I came in with ideas that I was sure were exactly what I was going to do. And I found out that would probably be a lot bigger, a lot more involved than I really needed to be doing in a beginning component of researching this. So, one of the things for checking bias, because I knew that obviously there was a bias that I was bringing to the table: my thought was, well, rubrics are the way to do that. Faculty that I talked to thought checklists are the way to do that. So in talking to my mentor, she was instrumental in saying you really need to start with a lit review, in order to really set the stage, as I call it. Setting the stage really shed a light on what I really wanted to do. But it was very helpful in making sure that the biases weren't there, that I didn't bring in, well, this is the way I think it should be, or go with, well, this is what the faculty said. So starting with the lit review, and ongoing conversations with her, were very important to monitor that bias, that initial thought of what I wanted to explore.
I guess going back to the lit review, before you even do Chapter One, that was her initial direction. Starting with that literature review really became my tool. The lit review that I did was also a way that I could create an outline that would really capture what research was out there, what it was saying, and how that really combined with my thoughts and the topic I wanted to explore. So much so that I really had to narrow it down, and that was the other piece. The two things I think you'll probably hear me mention a lot are the use of the lit review, and the outline that I actually created from it, and the mentor conversations that I had. And sometimes it was even beyond that, committee members.
Sometimes it was colleagues in my department. I would share a comment and they would ask questions, and it would help me think about, is this really what we should be looking at? Is this really what the problem is? So it helped me bring it down to a much narrower focus, so that I ended up really looking more at the course design. Really, what is being delivered by faculty? Not so much what do they think is effective and efficient, but let's start at a baseline of how they are going about it now, without even thinking about what's effective and efficient; what are they doing? So after many conversations, many reviews of the literature, and creating that outline, which really showed me how things were flowing, that became more of my topic: what is the perception of the faculty of course design components and how they deliver feedback? And that's really what developed into what I felt the problem was.
Still effective and efficient feedback, although I wasn't going to be answering that specifically; I was going to be looking now more at the perception. And then the problem, again of the effectiveness and the efficiency, really looking at the course design features, so that became more of the, not so much the problem, but what we didn't know and would be looking at. And then I think the last one was the participants that you had asked about, and that would really be where, again, I looked at the literature review and looked at who other researchers were involving in their research. So it was helpful for me to narrow down who I was going to be looking at, who I was going to be working with, again working very closely with my mentor.
We ended up selecting people who had experience, so that I was getting somebody who could reflect back a little bit more than just, oh, this is how I do it: two to three years of experience in the online teaching world. And at the higher ed level, I really wanted to keep it there; at first we started with master's and PhD level, and I did a snowball technique, where people recommend others that they think would be good participants, and that wasn't giving me the number that I needed to reach a saturation point. So we did go back through the IRB and change that so that I was also including faculty teaching at the undergraduate level. But I don't think that made a big difference at all in the study.
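The snowball recruitment described here (seed participants recommend others, each recommendation is screened against the eligibility criterion of two to three years of online teaching experience, and recruiting stops at a target sample size) can be sketched as a simple referral-chain traversal. This is a hypothetical illustration of the technique, not code from the study; the names, referral map, and criterion function are invented for the example:

```python
from collections import deque

def snowball_sample(seeds, referrals, is_eligible, target_size):
    """Recruit participants by following referral chains from initial seeds.

    `referrals` maps each person to the people they recommend;
    `is_eligible` screens candidates against the study's criteria.
    """
    recruited = []
    seen = set()
    queue = deque(seeds)
    while queue and len(recruited) < target_size:
        person = queue.popleft()
        if person in seen:
            continue
        seen.add(person)
        if is_eligible(person):
            recruited.append(person)
            # Only recruited participants are asked for further referrals.
            queue.extend(referrals.get(person, []))
    return recruited

# Invented example: years of online teaching experience per candidate.
years = {"ana": 3, "ben": 1, "cam": 5, "dee": 4}
referrals = {"ana": ["ben", "cam"], "cam": ["dee"]}
result = snowball_sample(["ana"], referrals,
                         lambda p: years.get(p, 0) >= 2, target_size=3)
```

One design point this sketch makes visible: when a referral chain dries up before `target_size` is reached (as happened in the study), the only remedy is to widen `is_eligible` or add seeds, which in a real study means going back through the IRB.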
How did your alignment problem, purpose, and research question(s) affect participant selection?
I did the interview questions; there were about eight or nine of them, if I remember correctly. I did keep my study to one research question. And it really didn't change that a lot, because I was looking for perception. But the challenge with the snowball technique was a little bit of making sure that I still had a sample size, like you said, where I would know when I reached saturation. And maybe this was a benefit too, that I was able to start coding immediately after the interviews. So I was able to see, slowly but surely, whether I was reaching saturation. It takes a little while to do, but you really can start your coding and start getting a feel for your saturation point. And I reached that after about 10 or 11 participants.
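The saturation check described above (coding each interview as it comes in and watching for the point where new interviews stop contributing new codes) can be sketched roughly as follows. This is a hypothetical illustration of the idea, with invented code labels and an invented stopping rule, not the interviewee's actual procedure:

```python
def new_codes_per_interview(coded_interviews):
    """For an ordered list of code sets (one per interview), count how many
    previously unseen codes each interview contributes."""
    seen, counts = set(), []
    for codes in coded_interviews:
        fresh = set(codes) - seen
        counts.append(len(fresh))
        seen |= fresh
    return counts

def reached_saturation(counts, window=3):
    """One rough rule of thumb: `window` consecutive interviews
    adding no new codes."""
    tail = counts[-window:]
    return len(tail) == window and all(c == 0 for c in tail)

# Invented example: codes assigned after each of five interviews.
interviews = [
    {"timeliness", "tone"},
    {"tone", "rubric use"},
    {"rubric use"},
    {"tone"},
    {"timeliness"},
]
counts = new_codes_per_interview(interviews)
```

In practice, of course, the judgment is qualitative ("I was getting the same types of responses"), but coding immediately after each interview is what makes this kind of running tally possible at all.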
I was getting the same types of responses, those types of things. Of course, I think any type of sampling technique can have its challenges, but this is the one that really fit what I felt I was looking for. With the online component of it, it gave me people who were geographically mixed, which was very good; and then, as I was saying about the different levels, finally going back to the IRB for the undergraduate level gave me a nice spread of the levels too, as well as the different disciplines that they taught in. That could be challenging as I got into the coding, because of the way they may have responded feedback-wise. Say, in a math course, they were much more linear with those types of feedback, as opposed to a different course where the feedback would be more of a critical-thought type of requirement that the faculty wanted the learners to search out. So the subject matter was good, but challenging.
Developing Initial Codes
How did you develop your initial codes? How did they change over the course of your data analysis process? Did you consider the use of qualitative software?
The initial recommendations to start the snowball sampling were from my committee members. They reached out to somebody that they felt would be a good participant based on what I had put as requirements. And you know, I think some of it is, it's such an additional thing to do, so you get some people that say, oh yeah, I'd be happy to, and then you just don't always hear back from them. Those people were probably more at the master's and PhD level that I was interviewing, so that was a little bit similar, and as I mentioned, I know the math one was definitely an undergrad component, where he talked a little bit more about the specifics of feedback. So in the long run it probably broadened what I was looking for. Because I was doing the coding along the way, I think there was probably still a convergence of thoughts; it was just a matter of maybe expanding my list of codes, as opposed to deleting them or any kind of divergence of coding.
One of the tools (I'm a big person for using tools and researching things; I don't always want to reinvent the wheel) was a document I had found that really listed the coding levels, and I did broad coding first. Then the next one was patterns, axial coding, and the last one really was themes, or the selective coding. So that first component was broad, very broad, and that's where I would talk about the convergence. That's where I would go back to the literature review and really see, where does this fit? How can this fit in there? So I was using that list in Excel to really construct that, and then using my literature review outline to help with it. But then also really looking at what the participants were saying (I wanted to say the respondents, but what the participants would say), and then I had two things.
One, when looking at what I was going to highlight: did it apply to the study, and could it stand on its own? So I think at times there might have been more that I might have elected to put as a code, but that really didn't fit those criteria. And that was a comment I had found as part of how you do the coding in that manner, where you do broad coding, then pattern or axial coding, and then selective coding: those two criteria were really important to keep in mind. So I'm not sure if that answered the convergence question, but you go through it over and over and over, and I think you just start really getting a feel for what is going to fit for your study. And those two items were really, really helpful for me.
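The three-level scheme described above (broad or open codes, then patterns or axial codes, then selective themes, tracked in a flat spreadsheet) can be sketched as a single table with one column per level. The excerpts, codes, and themes below are invented for illustration; the interviewee did this work in Excel, not in code:

```python
# Each row: (excerpt, broad/open code, pattern/axial code, selective theme).
# Per the two criteria mentioned in the interview, a code is kept only if it
# applies to the study and can stand on its own.
rows = [
    ("I leave audio comments on drafts",
     "audio comments", "delivery mode", "course design"),
    ("The rubric rows drive what I say",
     "rubric-driven", "scoring guides", "course design"),
    ("I try to respond within two days",
     "turnaround time", "timeliness", "feedback practice"),
]

def rollup(rows, level):
    """Group excerpts by one coding level: 1 = broad, 2 = pattern, 3 = theme."""
    grouped = {}
    for row in rows:
        grouped.setdefault(row[level], []).append(row[0])
    return grouped

themes = rollup(rows, 3)  # the three excerpts collapse into two themes
```

The flat-table layout mirrors the Excel workflow: each pass over the data fills in (or revises) the next column, and grouping by any column reproduces that level of the analysis.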
I always want to fill in the gaps. So, learning to ask the question, and this is just basic interviewing, and there again I looked at what types of tips there were for that, and there are tons of them out there. But really letting them respond; otherwise, I think I was too eager, and I think the biases would have come in there, so I really had to be careful of that. And that was a really conscientious effort to make sure that I did that. It's hard, I think, but it's an important one, because otherwise your biases, unknowingly I think, can sway the participant's response, and I did not want that at all.
So, catching myself, and having a document that had the questions, I just made sure that I stuck to that and just had to not say anything for a little bit. And the fact that they're recorded helps, so after listening to the first one, if I felt like I walked over that line, I really had to take a step back and think, okay, I can't do that. And by the time you get towards the end, you're feeling pretty comfortable about not leading. That's really important in the development of the questions, too. There again, my mentor, and using the lit review, those two items; I can't say enough about how that helps keep a balance on bias all the way through. It's important to remember that. It's not that you don't know it right off the top of your head; having those conversations is really important.
I did explore them; I can't even think of the names anymore, it's been a couple of years. There again, my mentor was like, you don't want to do that, just use Excel, so it was really bringing it down to that baseline level. I enjoyed coding, and I always thought I wanted to do a quantitative research study. So to find out that I really liked qualitative, that I really like coding; I do coding in my job now. And using an Excel document, I don't want to say it's easy, because you still have to think through the different levels of it: what are the broad categories, what are the patterns, and what are the themes? But I also find that energizing, and that starts answering those questions that we ask. So yeah, I used just a basic Excel document and reviewed it over and over and over. There were probably many, many versions of it. I shouldn't say this because I haven't done it the other way, but I would like to say that it's easier, you know, for me. It was just easy to control.
How did you determine when there was sufficient evidence to develop a finding?
I felt some of my hunches were accurate. Yeah, I do think, and you say never left, I think I was probably a little bit of that, always checking back in. And that actually inspired me as I developed that outline from the literature review: oh, this is what I would have been expecting, this is what I'm hoping for, this is where I hope my research goes. And in my situation, that's where it did go. There were other components of it that I learned and was able to add to that, which you want to do; you want to contribute to the world of online education, and for me it was course design and how faculty felt it influenced their feedback practices.
But yeah, I felt like I was able to really confirm my initial thoughts, my initial hunches, but with a much more detailed understanding, and to be able to support that as I share it in faculty training, faculty discussions, and the work that I do with other assessment specialists, those types of things. Yeah, I don't think I ever left it, but I was very happy that I felt like the findings matched and enhanced what I initially thought and where I wanted to go with it.
I don't think there were as many surprises as there were differences in the way people do and approach things. But again, that's me wondering why. I wonder why they do that? So I can't say that it was a surprise, but it was like, oh, that's an interesting way of looking at that. It's always so easy to think, well, this is the way it is and this is how everybody thinks. So in some ways it was a pleasant surprise to really look at that breadth of information. But I'm trying to think if there was anything that really caught me off guard, and I can't think of anything, other than, you know, really, especially with my mentor consistently getting me to narrow down; so maybe the surprise would be that I'm always too broad of a thinker, and I really needed to focus on building the blocks as opposed to looking at that high-level end product right away.