The Sports Physical Therapy Podcast
How to Read a Journal Article with Phil Page - Episode 38
There is a ton of research being published these days. Some good, some bad.
In this podcast, I’m joined by Phil Page to discuss how clinicians can find quality research, read an article, and draw clinical implications.
We’ll cover some great tips to ensure you are doing your best to stay current with the literature, but not thrown off in the wrong direction!
Full Show Notes: https://mikereinold.com/how-to-read-a-journal-article-with-phil-page
Want to learn more from me? I have a variety of online courses on my website!
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
_____
Want to learn more? Check out my blog, podcasts, and online courses
Follow me: Instagram | Twitter | Facebook | YouTube
On this episode of the Sports Physical Therapy Podcast, I am joined by Phil Page. Phil's an associate professor and research director at the Franciscan University DPT program in Baton Rouge. He's also one of the editors of the International Journal of Sports Physical Therapy. In this episode, we're going to talk about how to find quality research and review articles to best determine the clinical implications that may improve our practice.
Mike: Hey, Phil, welcome to today's podcast. Thank you so much for taking some time out of your schedule to join us.
Phil: Mike, glad to be here. Thanks for asking.
Mike: Awesome. You know, I think I say this a lot, but I think we're gonna have a great show today. You and I have been going back and forth with emails trying to come up with a really great episode here, and I think it's gonna be really good. Obviously you and I have known each other for years now, and I know how much this topic is a passion of yours. With decades of journal reviewer experience, and as an editor of IJSPT, you really understand the concepts of digging into research, but then, more importantly, how we use that information to improve our clinical practice, because that's the whole point of research, right?
Phil: Yeah, that's evidence-based practice. My passion, as you know, Mike, is really, I love research, but making it fun and making it understandable for everyday clinicians, because it is so nebulous. I think it has a bad reputation with a lot of folks because of bad experiences. You mention the word statistics and everyone crawls into a hole. My goal as an educator now in academia is not to taint the young kids coming out, thinking that this is terrible and they hate it. I really want them to appreciate it and be able to use it correctly.
Mike: Right, I like that. And that's a good way of saying it, because I think you can use research in lots of ways. You can use it to prove your point. You can use it to keep an open mind and maybe expand your thoughts. There are multiple ways to use it. And I would even say, over the years there's just so much research coming out that it's not uncommon for me to read an article and say, that was really interesting, but I don't know what I can take out of this and put back into my clinical practice. It just depends on the article and the journal. But it's interesting.
Phil: Yeah, we all go through this. If you have journals that you look at every month, you get the table of contents or whatever, and you flip through it. You don't have a specific question; sometimes you're just trying to keep up. And then there are other times when you do have a specific question that you're trying to find an answer to. So you need to have that in mind as you're going through: what's your purpose when you're looking at these articles, and what are you trying to look for?
Mike: Right, yeah, I agree. And unfortunately, I think sometimes this isn't the most glamorous topic, how to read research and how to take implications from it. But it's extremely important, and that's why I'm excited for this episode and have really been looking forward to talking to you about it. Every time you and I have chatted about this in the past, I've learned something, and it helps me digest the literature. Every time I've seen you give these talks at some of the big meetings about how to get better at doing this, I've grown. So I'm super looking forward to this, Phil. Alright, let's dig in. Why don't we start with this, and I kind of alluded to this a bit ago: there have been a ton of journals popping up recently. As a published author myself, my email is in the spam system apparently, because I get so many emails from so many random things I've never heard of, where you just put a bunch of random words together and call it a journal. There's a ton of journals popping up, and a ton of formats now: open access, pay to play, the traditional models. So before we even talk about how to read a research article, why don't we start at the beginning: how do we find quality research nowadays? How do we know what to watch out for, and what's good and what's bad?
Phil: Well, it does start with the journal. There are specific guidelines, for example, from the International Committee of Medical Journal Editors, that a journal needs to follow. If you're looking for a good, credible journal, you look for those guidelines, and obviously the biggest part of that is gonna be the peer review. I'm in the same boat as you. Every day I'm getting spammed with, you know, "dear so and so," they get your name wrong, and they want me to write an article about a letter to the editor that I published years ago. I actually use these in my class now, because the grammar's terrible in them as well. Anyway, the problem today is the issue of paywalls with traditional journals and people not having access. So first it became this model where you have to pay for articles. Then the model shifted to, no, now we want you as the author to pay to have the article published. And there are some great, credible journals, just like IJSPT, where you do pay a nominal fee to submit. They're open access, but they're peer reviewed, and they follow the editorial guidelines, that kind of stuff. So you can't say that just because something's open access or has a publishing charge, it's a bad journal or a good journal. We need to put all these things into context. Who is the publisher is the first thing. You have your credible publishers, like an Elsevier, the large publishing houses that have highly credible research journals. Then there are some that I call journal farms. Those are the ones with thousands of journal names that all sound scientific, and those tend to be the ones you have to pay for, and they probably go a little lighter on the peer review. Anytime I see someone sending me a request saying, "we'll publish your article in 24 hours," I'm like, no, that is not happening.
Mike: Yeah, I've seen that, and... wow. Wow. That's all I'll say.
Phil: So what I do when I see a journal is look at the title, then go to the publisher and see where they're coming from. One of my favorite things to do, Mike, is to actually Google their address. It's usually something like a "Delaware Inc." or a UPS store. That's another clue that this is not necessarily a credible journal. Then go look for the peer review and editorial board; hopefully you'll see somebody you know, or have at least heard of. But I don't want to discredit all of these journals. The worst of them are what we call the predatory journals, and there is a list that you can actually access too. It's called Beall's List, B-E-A-L-L-S; just Google it. It's generally kept updated, though there was a lot of controversy with that list. But it gives you a starting point for spotting these predatory journals. And when you start reading the articles, you can kind of tell that they're letting a lot of things slip by that they shouldn't. So it really comes down to the article itself more so than the journal.
Mike: Right, right. And to take a step back as to why this is important: peer review helps assure that the research is unbiased, quality work with valid methodology, those types of things. There's so much that goes into that. I have seen articles published in these journals that say "we'll get it published in 24 to 48 hours," and not only are there grammar errors, but you look at the methodology and you say, what reviewer would allow this article to be published with those methods, which are clearly biased and clearly do not result in a good, objective outcome? How did that get published? It blows my mind, Phil.
Phil: Actually, that's one of the reasons, and one of my passions again: we can't just take the conclusion when reading these articles, because even in credible, high-impact journals there are errors. I have seen these several times, and I'm the first to send a letter to the editor. There was a situation where I was looking at an article and reading the tables and going, this doesn't look right. Sure enough, they had flipped the data; the abstract and the p-values were all off. I sent several emails to the editor, or whoever was in charge of the journal, and it wasn't until I got on social media and said, this is not right and you need to fix this, because it's totally wrong in the abstract, that anything happened. What they decided to do was publish a correction, like an erratum. And there were seven points, Mike, that I found wrong with this article that had been published.
Mike:right.
Phil: A year later, all they did was publish this little thing that said, "we acknowledge this is wrong." If you go back to that article now, it's still in error. The whole thing. They won't change it.
Mike: Wow. And they won't fix it?
Phil: No. And so those are still issues, which is why it's so important for us to be able to critically appraise articles ourselves, because errors do get through peer review, even at a credible journal. No one's perfect, right? And here's the other problem, Mike: reviewers are a dying breed, because of the time it takes and because you don't get paid to do it. There's a lot of pushback against these journals that are getting paid for these articles, and the reviewers are going, no.
Mike: All right. That's awesome, Phil. I love it. So we've identified how to find a good journal, and I like the tips you gave there. I would probably add to that just a touch: there's a lot of research out there, so stick to the name-brand, reputable journals for now. When you're an advanced-level clinician or you're digging for an answer, you can start perusing and getting a little deeper into the literature. But there are some very high-quality, top-tier sports medicine and sports physical therapy journals, so just stick with those. And then follow some good researchers and clinicians that you like on Twitter, because they're sharing articles, and I don't think we'd all be sharing bad articles around the internet. So that's what I would add.
Phil: Yeah, that's exactly right. The other thing I wanted to mention is that PubMed is a really good way to filter out the bad journals. There's a claim I saw that about 10% still get through on PubMed, but I always tell my students, you have to have a PubMed citation in order for me to think that it's credible. These journals that you're seeing that aren't in PubMed have a lot less chance of being highly credible. But I always tell them,
Mike:sure.
Phil: Even if it is in PubMed and it is a good journal, the key is still the quality of the article itself. And it even goes beyond the level of evidence. Everyone says, oh, these level one studies! That doesn't mean it's a good article.
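For readers who want to apply Phil's PubMed rule of thumb programmatically, here is a minimal sketch using NCBI's public E-utilities search endpoint. The journal name below is just an example, and treat the field tag and response handling as a reasonable assumption rather than the only way to do this; zero hits is a red flag, but as Phil says, indexing alone doesn't guarantee quality.

```python
# Minimal sketch: check whether a journal appears in PubMed using
# NCBI's E-utilities "esearch" endpoint. The journal name is only
# an illustrative example; swap in the journal you want to vet.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def pubmed_article_count(journal_name: str) -> int:
    """Return how many PubMed records carry this journal name."""
    params = urlencode({
        "db": "pubmed",
        # The [journal] field tag restricts the search to the journal name.
        "term": f'"{journal_name}"[journal]',
        "retmode": "json",
        "retmax": 0,  # we only need the count, not the records
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urlopen(url) as response:
        data = json.load(response)
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    journal = "International Journal of Sports Physical Therapy"
    count = pubmed_article_count(journal)
    # Zero hits is a red flag per Phil's rule of thumb; a nonzero count
    # just clears the first hurdle before appraising the article itself.
    print(f"{journal}: {count} PubMed-indexed records")
```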
Mike: Right, oh yeah, for sure. I like the systematic reviews of level ones, and then systematic reviews of systematic reviews of level ones. It's just...
Phil: I'm doing one now.
Mike: I mean, I get it. We could go into why there are so many systematic reviews right now. They're quick, they're easy, residents and fellows can do 'em pretty well, and they're often cited very well, so the journals love them. There are reasons why they're there. But I always say, and I put this on Twitter, and maybe I got a little heat, I don't remember, I try not to get too worried about Twitter, but I said: putting a bunch of bad articles together into one systematic review doesn't make a good article, right? You have to make sure. But alright, let's keep going with this thought. I like this. So we've found some quality research. We like it, we're happy, we think this could be helpful for what we do every day. This is the big question, and I'm sure you're gonna go bananas on this one: how do we approach reading it? You can see I'm not a good podcast host, because I just ask you these huge 20-minute questions. But how do we go about reading it? This is where I think I've really learned a lot from you over the years, and I know this is where you shine. Walk me through what goes through your head, Phil, when you start reading an article and what we should look for. And not just from your lens as an expert clinician, but also maybe from the lens of a younger, early-career professional who doesn't wanna get too overwhelmed.
Phil: So here's what I think about as I'm going through an article. You obviously have some type of question in mind. I use a PICO approach: the population, intervention, comparison, and outcome. That's your guide for what you're looking for. The title's good to start with, but I find that titles are like newspaper headlines. Sometimes they're a little over-embellished.
Mike: Yes, yes. More so lately, I feel.
Phil: Yes, exactly. It's eye candy. Then you might scan through the abstract. Quite honestly, I'll look at the conclusion to see if it's relevant, but I don't put any merit into it. All I'm doing is trying to figure out, is this really gonna fit what I'm looking for? So the biggest thing for me, Mike, is the purpose, and this is where a lot of people, I think, miss the boat. What you have to look for is a purpose statement, the hypothesis, the research question, because that's the central core of any article. What are you trying to do here? What's your purpose statement? One of the things that Barb Hoogenboom has just preached as the editor of IJSPT is that the purpose statement has to be consistent every single time you state it in an article, which I love.
Mike: Mm-hmm.
Phil: That principle matters because the purpose then tells me the design it should be, which then tells me the statistics I should see, and also gives me an idea of the sample. So get your purpose and understand your research question. And if there's a hypothesis involved, that's when statistics come into play, which I can talk about later. Then I look for the design. Does the design answer the purpose or the question? If you're looking for a difference, I use keywords, Mike: difference, effect, association, relationship.
Mike: All right. So I understand the purpose. I get it, and I like that. To me, that is definitely an approach I don't always take myself, so again, I just learned something here, which is great: make sure that I really, really understand the purpose. What do you do next from there? Is this when you start dipping into the methods, or what do you do from here?
Phil: Yeah, pretty much. Once I've got the design, and here's another hint, Mike: don't believe what the authors tell you. They're gonna tell you it's a randomized controlled trial when it's not. Why do they do that? Because it gives them more bang for the buck. Remember, this is sometimes a game. Most researchers who are trying to publish want to have these randomized controlled trials, the highest level of evidence for a clinical trial, and they also want to have significance. So there's an inherent problem there, and it's the one we're always worried about: publication bias. Not only does the researcher look for statistical significance, but so do the journals. I still hear that they don't publish non-significant findings, and that blows my mind. I would want to know if something didn't work.
Mike: Right, exactly. The answer is yes or no. Why is "no" not valid?
Phil: Yeah. So the next thing I like to look for, and these are some really easy things you can do as a clinician to help the process: we talk about quality, right? Quality is really about internal validity. How well is the study done, and how well can it be replicated?
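To make the PICO framing concrete, here is a small illustrative sketch, not from the episode, of how a clinical question might be decomposed into PICO components and assembled into a simple search string. The example question, terms, and class design are all hypothetical.

```python
# Illustrative sketch: decompose a clinical question into PICO parts
# and assemble a basic PubMed-style search string. The question and
# terms below are hypothetical examples, not from the episode.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str    # P: who are the patients?
    intervention: str  # I: what are you considering doing?
    comparison: str    # C: what is the alternative?
    outcome: str       # O: what result matters clinically?

    def search_string(self) -> str:
        """Join the components with AND, the way a basic search would."""
        parts = [self.population, self.intervention, self.comparison, self.outcome]
        return " AND ".join(f"({p})" for p in parts if p)

question = PicoQuestion(
    population="adults with rotator cuff tendinopathy",
    intervention="eccentric exercise",
    comparison="standard physical therapy",
    outcome="pain and function",
)
print(question.search_string())
# -> (adults with rotator cuff tendinopathy) AND (eccentric exercise)
#    AND (standard physical therapy) AND (pain and function)
```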
Phil: That's what the science is all about, right? Quality can be broken down into two components: the actual reporting guidelines, and risk of bias. Luckily, there are tons of tools on the internet that'll help you with that. If you go to equator-network.org, you can find reporting guidelines for all the different types of research designs, and they tell you what to look for throughout every article. The authors should have done this, this, this, and this, and you can check it off.
Mike: Mm-hmm.
Phil: The other thing you do, since you already know the design, is go to a risk of bias tool. For randomized controlled trials in physical therapy, probably the most popular is PEDro, P-E-D-R-O. There's another group I came across when I started in academia, called Joanna Briggs, which is a really good collection covering a ton of different research designs, including both risk of bias and reporting. So once I've looked at these, I can use these quality guidelines to help me evaluate the literature. But to go further, I look at the tests and measures. From the design, from the question: what are they measuring? What tests are they using? A lot of times you see what I call proxy measures; they're not the real measurement, so you have to be careful. What's the validity and reliability of those measures? Test, don't guess, you know. George Davies is huge on validity, reliability, the psychometrics. Then I get into the statistics behind it. Again, I refer back to the design and the question, and the question should link me to the proper stats to answer it correctly. Now, this is where it gets dicey for some people, because unfortunately we're usually not taught statistics by clinicians; we're taught by statisticians. So then we get into the statistics and making sure we're answering our question. People unfortunately fall for the word "significance" way too much, and I think it's a very bad word to use. It's one of those words where people think significant means "a lot," and that's not what it means. Even taking a p-value of 0.05 as the threshold is still arbitrary. I use this example: if I review an anesthesia paper and a p-value of 0.05 is used for significance, you mean to tell me you have a 5% chance you could be wrong and kill someone? That's different from our field, where being wrong 5% of the time is probably acceptable. What I like to look for are the clinical outcomes, those clinical statistics. What's the mean difference? What's the minimal clinically important difference, the MCID? Just because it's significant, was it meaningful clinically? And then what's the confidence interval, the ability to say where we believe the true value lies in the population? This is where people lose it, Mike. They just look at significance and go, oh, well, the treatment works. That doesn't mean it works in your population.
Mike: Right, right.
Phil: You gotta go back and look at the actual inclusion and exclusion criteria of the sample, because the way inferential stats work is to infer the results from that sample onto a population, and that's why you have a confidence interval.
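As a concrete illustration of Phil's point about looking past the p-value, here is a minimal sketch with made-up numbers, not data from any study discussed, of computing a mean difference with a 95% confidence interval and weighing it against an assumed MCID.

```python
# Minimal sketch with made-up numbers: weigh a treatment effect against
# an assumed MCID instead of stopping at "p < 0.05". Not data from any
# study discussed in the episode.
import math

# Hypothetical change scores (e.g., points on a patient-reported outcome)
treatment = [12.0, 9.5, 14.0, 8.0, 11.5, 10.0, 13.0, 9.0]
control   = [ 7.0, 8.5,  6.0, 9.0,  5.5,  7.5,  8.0, 6.5]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

diff = mean(treatment) - mean(control)

# Standard error of the difference between two independent means
se = math.sqrt(var(treatment) / len(treatment) + var(control) / len(control))

# 95% CI using the normal approximation (a t critical value would be
# more exact for samples this small; 1.96 keeps the sketch simple)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

MCID = 5.0  # assumed minimal clinically important difference for this scale

print(f"mean difference = {diff:.1f}, 95% CI ({ci_low:.1f}, {ci_high:.1f})")
if ci_low > MCID:
    print("Even the low end of the CI exceeds the MCID: clinically meaningful.")
elif ci_high < MCID:
    print("Statistically significant or not, the effect is below the MCID.")
else:
    print("The CI straddles the MCID: the clinical importance is uncertain.")
```

With these particular made-up numbers, the interval straddles the MCID, which mirrors Phil's point: a difference can clear the significance bar while the confidence interval still leaves the clinical question open.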
Phil: What that means is: I'm pretty confident that the true value of whatever this outcome is lies between these two numbers. It's not a range of observed values; it's the range of possible values within which there's one true value. And it doesn't mean it's gonna represent your patient, but those are the things you want to look for. And lastly: read the tables. Don't just read the narrative. Go back and look at the tables, because as I mentioned earlier, they could be wrong. If it doesn't pass the smell test, start saying, that doesn't look right. Don't be afraid to do that.
Mike: And I would say, I often do that too. Like you said, I'll read that there was a significant finding, and then I'll look at the table and be like, well, I don't know how clinically important that may be. It might be statistically significant, but I don't know about the clinical part.
Phil: That's exactly right.
Mike: So I would agree. So what you're saying here is that the p-value isn't the end-all, be-all, right? There's more than that, especially in the clinic.
Phil: Right. Clinical decisions should not be based on p-values. What a p-value really is, let me go back, is null hypothesis testing. And you said it right: it's a yes or no question. So is clinical practice a yes or no question? No way. In a Petri dish, in a lab where everything's controlled, I'm good with statistical testing and p-values, but I can only use them so much in the clinic. And again, those samples are not always representative of the patients in front of you.
Mike: Right. So, less p-value in our heads? Well, obviously there's value to the p-value, but more confidence intervals, is that right?
Phil: Confidence intervals and effect sizes, and clinically important differences. Those are the three things you look for.
Mike: Yeah, and I think that's a great way of saying it, and a great way for new clinicians who are just trying to get used to this sort of thing to figure out what to look for. Alright, so we've gone through the article. We understand the methods, we understand the results, we know the whole purpose. What are some of your tips now on how to apply the information as a clinician? What do you look for in the results, and in general, to say, how is this article going to change what I do every day? What assistance can you offer people?
Phil: Once you've looked at all that, one of the things I like to do, Mike, is look at the limitations. By this point you should be able to say there were certain limitations to the article. Every article should have limitations; there's no perfect article. And hopefully the authors have synced up with the limitations you spotted. But don't take the authors' stated limitations at face value. What they list is usually at least the elephant in the room, but there are gonna be other things they don't always bring out, though they should pick up the big ones. Those are what I call contextual things: you take the results, but within the context of the limitations. So you have to know those limitations. What were the areas for bias?
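Since effect sizes sit alongside confidence intervals and MCIDs in Phil's trio of things to look for, here is a minimal sketch, again with hypothetical numbers, of computing Cohen's d, a common standardized effect size for a two-group comparison. The benchmark values in the comment are conventional rules of thumb, not anything stated on the show.

```python
# Minimal sketch with hypothetical numbers: Cohen's d, a standardized
# effect size for a two-group comparison. Conventional benchmarks
# (roughly 0.2 small, 0.5 medium, 0.8 large) are rules of thumb only.
import math

def cohens_d(group_a, group_b):
    """Mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

treatment = [12.0, 9.5, 14.0, 8.0, 11.5, 10.0, 13.0, 9.0]
control   = [ 7.0, 8.5,  6.0, 9.0,  5.5,  7.5,  8.0, 6.5]

d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.2f}")  # unlike a p-value, d doesn't shrink or
                               # grow just because the sample got bigger
```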
Phil: For example, in particular the sample, some of the outcome measures, those types of things you've looked at. So you'll know your limitations and the context within them. Now, the biggest thing I'm gonna look for goes back to the statistics: looking at that confidence interval, how much confidence do I have that the intervention would have an influence on the specific individual right in front of me? That's where the confidence interval comes in. That's where your effect sizes come in. So you apply the results to that patient if it's appropriate, but remember, it doesn't mean it's gonna work.
Mike: Right. There's still a 5% chance. Exactly, point oh five, right?
Phil: Right. Remember, you're working on the 95% curve of a normal distribution, so there's always a chance that this isn't gonna be applicable to that patient. But you try it, and you make sure everything else fits. And the last thing that's important about all this: being able to read the research, as we've talked about, is not just about applying it to a patient. We need more peer reviewers. We need more authors. We need more people to understand this, not be put off by research and statistics or frightened by it, and be able to actually do this on a regular basis beyond just applying it to patients. And as you've seen today, research is not easy. If it were easy, everyone would be doing it. Every time you do research, something goes wrong. Every time.
Mike: I agree. Welcome to research. This podcast too, it just happens.
Phil: That's right. Just the way it is.
Mike: I wish people understood that a little bit, especially the people who are so overly critical on social media. It's really challenging, and there's a lot of work on so many levels to make these things work. So for the listeners, I want you to take a step back here. Phil just walked us through a really next-level, expert view of how to read a journal article. I don't want you to feel nervous or anxious about that; Phil's thinking about this from a completely different perspective, with his years of experience as an editor and reviewer of journal articles. You can still apply the basics. Scale it back to the things he said: make sure the article's purpose is there. Make sure the methods match the purpose. Make sure that when you're reading the results and the limitations, you're thinking, is this applicable to the person I'm trying to answer this question for? Is this for the person in front of me? If you approach it that way, from a quality journal and a quality article, then I think you can get a lot more out of these journal articles. So, Phil, that was awesome. I know you gotta get going, and I apologize for keeping you so long, but thank you so much for joining us and sharing these tips on how people can get the most out of journal articles. That was awesome. Thank you.
Phil: Thanks, Mike.