Kate Grandbois: Welcome to SLP Nerdcast, your favorite professional resource for evidence-based practice in speech-language pathology. I'm Kate Grandbois, and I'm Amy
Amy Wonkka: Wonkka. We are both speech-language pathologists working in the field and co-founders of SLP Nerdcast. Each
Kate Grandbois: episode of this podcast is a course offered for ASHA CEUs.
Our podcast audio courses are here to help you level up your knowledge and earn those professional development hours that you need. This course, plus the corresponding short post-test, is equal to one certificate of attendance. To earn CEUs today and take the post-test after this session, follow the link provided in the show notes or head to SLPnerdcast.com.
Amy Wonkka: Before we get started, one quick disclaimer: our courses are not meant to replace clinical advice. We do not endorse products, procedures, or other services mentioned by our guests, unless otherwise
Kate Grandbois: specified. We hope you enjoy
Announcer: the course. Are you an SLP or related [00:01:00] professional? The SLP Nerdcast unlimited subscription gives members access to over 100 courses offered for ASHA CEUs and certificates of attendance.
With an SLP Nerdcast membership, you can earn unlimited CEUs all year, at any time. SLP Nerdcast courses are unique and evidence-based, with a focus on information that is useful. When you join SLP Nerdcast as a member, you'll have access to the best online platform for continuing education in speech-language pathology.
Join as a member today and save 10% using code NERDCASTER10. A link for membership is in the show notes.
Kate Grandbois: Welcome everyone to SLP Nerdcast. We are really excited for another edition of SLP On Demand. For those of you listening for the first time, SLP On Demand is a series that we put out occasionally where we answer [00:02:00] questions from our audience. So if you are a member and you have a clinical question, you can write in, and our doctor of speech-language pathology who is here with us, Dr.
Ana Paula Mumy, will answer your question. Welcome, Ana Paula. Thank you. I'm very excited for today's question. It touches something that I do for a living. I was really excited to catch up with you before we hit the record button and learn a little bit more about what the research says, because that's always fun for me.
Before we read our clinical question aloud, I am going to quickly review our learning objectives for today's discussion. Learning objective number one, identify the relationship between data collection, target selection, and goal writing. And learning objective number two, identify at least two different types of data collection that can be used when working with AAC users.
Anyone who is listening can also find information about our financial [00:03:00] and non-financial disclosures in the show notes. And Ana Paula, why don't you get us started by reading aloud our listener's question?
Ana Paula Mumy: Sure. So the question relates to resources for data collection for AAC users. Andy, who wrote in,
stated that her mentee has 14 life skills elementary students on her caseload, and she's looking for ways to help her efficiently track device use throughout her students' day. So that's a
Kate Grandbois: really big question.
Ana Paula Mumy: Yes. And I think we just have to first acknowledge that this is a big question that's hard to answer, because there is so much that we don't know about these particular students.
We also don't know specifics about what devices they're using, or what that looks like. So I would say, just in general, this is going to be a little bit difficult to touch on, while also acknowledging that data collection is tricky regardless, [00:04:00] right? It's tricky whether we're working on articulation or language; it doesn't really matter, right?
In all of these areas, the challenge is especially tracking data, or taking data, without sacrificing genuine engagement with the person that's in front of you, right? So that's the big thing. I work a lot with grad students, and I think about how sometimes they're so attuned to the data collection process that they forget: oh, wait, but there's a person in front of me, and I should be engaging and really building that relationship and rapport.
So, yeah, I just wanted to acknowledge those challenges related to data collection.
Kate Grandbois: And to piggyback on that, obviously this episode is under an hour long; we are not going to be able to cover everything about data collection in this short amount of time.
Anyone listening who would like [00:05:00] to learn more about data collection, either while you're listening to this episode or after it's over: we do have three or maybe even four episodes specifically on monitoring progress and data collection, including probe data, or discontinuous data, which I know we're going to talk a little bit about today. That is a very complex topic.
So if you are listening and you already know that that's something you'd want to learn more about, check out the show notes. We will put links to all of those episodes in the show notes for
Ana Paula Mumy: you. So I wanted to focus on one article in my research, and really just admit, first of all, that
I am not an expert on AAC. It's an area that is a stretch for me. So as I looked through some of the research, I found one helpful article on data collection and monitoring AAC intervention in the schools, by Katya Hill, from 2009, in the [00:06:00] journal Perspectives on AAC. And I appreciated how they talked about deciding what data to collect depending on the design and the targets of the intervention program.
So, in other words, really thinking about: what are we actually tracking in relation to device use? They divide this up into, or talk through, two different areas: performance data and outcomes data. Performance data really represents the quantification of specific language targets.
So things like spontaneous or novel utterances, communication rate potentially, or any word-based measures. That might include things like total number of words used, or maybe percentage of core vocabulary used, or mean length of utterance, or even diversity of words. So is there a [00:07:00] mixture, right? Are they using nouns, verbs, adjectives, and so on?
So just looking at these word-based measures and other types of performance data. And then the other area was outcomes data, really representing the results of intervention related to things like quality of life, satisfaction, and functionality. So that was helpful to me, to categorize and make that distinction.
And because our goal, and this is what they talk through, is to optimize communication in a student's daily environment, we really should have both performance data that's collected in those environments and also outcome measures that report, you know, perceptions or satisfaction of performance by those closest to the student.
So that could be teachers, of course, caregivers, but then, of course, the student him or herself, right? And so [00:08:00] I did wonder, Kate, if you would just touch on examples of the performance data that might be tied to, or more appropriate for, complex learners, because this might be easier for a child who is maybe more verbal, but not for one with a more complex profile.
So if you wanted to talk about that for a little bit, I would love to hear your input.
Kate Grandbois: Sure. So, I mean, anyone who's been listening to this podcast for a while knows that this is my jam. This is my clinical wheelhouse. I'm very fortunate to have worked in AAC for almost 20 years now.
Not quite 20 years, but over 15, not that we're counting. And this particular profile, complex learners, emergent learners, early language learners, is what I love to do. I really appreciate the way that you've described, at least from this article, these two different categories of data [00:09:00] collection. Because I think often when people think of data collection, they think of tally notes scribbled on a sticky note, right?
You know, going back to your point of not wanting to sacrifice connection: we grab what we have, and, oh gosh, I've got this goal on whatever it is, and so we scribble our tally notes, and we think that that's our data collection. And yes, that is data collection. Is it quality data collection?
Perhaps not. And I really just wanted to take a second to appreciate the different qualifiers when it comes to the kind of data that you are collecting. That is a really important first thought, to zoom back to this member's question about what recommendations they can make to their mentee.
And I think the first recommendation, based on that article and what I'm hearing from you, is really reflecting on your purpose. What are you taking data about? Is it outcomes related? [00:10:00] Is it performance related? Is it aligned with our EBP model, in terms of considering the clinician's perspective and the client's perspectives and values?
I think when you keep that as a lens, it's a lot easier to then zoom in a little bit further and think about what data collection methods are most appropriate to your next step, or to the targets that you're trying to work towards. Now, I know that I just went really off topic, but to answer your question about an example for a complex learner or an emergent learner: first of all, every child is unique.
Every learner is unique. I have big feelings when I hear things like, well, this is the way it's done, or this is what we do here. No: you are always customizing your AAC intervention to your learner, especially if that learner has a complex profile. So number one, you're always making data-driven decisions, person-driven decisions, [00:11:00] patient-centered decisions,
particularly when there is complexity involved. When you're working with complex learners, often your first objectives are related to teaching symbolic exchange, teaching the use of symbolic language. Now, when I say symbolic exchange, I'm not talking about PECS, before anybody gets a little grouchy thinking about PECS and all of the grouchy feelings that we've developed about PECS.
We are talking about moving through a developmental lens to teach a person how to use symbols to communicate. And I think, you know, that can look like a lot of different things. When you're talking about AAC, depending on your learner, that could look like selecting icons in a sequence.
It could look like scanning a visual field to select an icon and make a purposeful choice. It could look like sequencing two icons together to produce voice output. [00:12:00] It could be producing one symbol for a function that isn't just requesting. Or perhaps they are an emergent learner, and, developmentally, making requests and making basic wants and needs known is a main goal.
So you want them to produce a single symbol to get their wants and needs met, and then everybody's throwing a party, right? So it really will depend so much on who the learner is, in terms of choosing those targets and choosing that data collection strategy for performance. If anyone listening wants to learn more about the lens of AAC and going back to basics, we did a great interview with Dr.
Kathy Binger and Dr. Jennifer Kent-Walsh, called AAC Back to Basics, that really takes a good look at the intersection of AAC and language development and how we can better integrate those two things. Did I answer your question? I know I [00:13:00] went on like four tangents.
Ana Paula Mumy: No, you did, and thank you. That's perfect. I appreciate it. That makes it a lot clearer, and for sure, just having those tangible examples is super helpful. Another recommendation that I found, for the initial stages of device use, which goes back a little bit to what you were saying, was to actually take data on what the SLP or the communication partner is doing.
So there were some really helpful questions, again, for me, because this is not my area, that helped me think through: okay, so what does that mean exactly? Things like: how often does the student have access to their system throughout the day? That is a pretty important question, right? And then: how many opportunities did the student have to actually use their device?
Another question: how often are adults modeling on the device? That modeling component is huge, and maybe you could [00:14:00] speak to that a little bit more.
Kate Grandbois: I was going to say, I've got a great example for that.
Ana Paula Mumy: But keep going. Yes, so I have one more here: how often is the student attending to the modeling that's provided?
So again, this isn't necessarily looking at the output from the child. It's really more about the input, right? What is happening with the individuals around that child who are providing access, or providing that modeling, and so on? So yes, please give me examples.
Kate Grandbois: I was going to say, I was jumping in my seat because I have such a great example for this.
So backing up really quickly to our sticky note with tally marks on it, right? We think that that is frequency data. A frequency data collection strategy would be marking every single instance of the target behavior that happened. And again, we're not going to get into this in detail; we will list additional references and episodes in the show notes
that unpack a lot more of the different kinds of data collection strategies. Frequency is one of them. [00:15:00] Percentage: who doesn't love a good percentage? I think we over-rely on them. Out of 80 percent of opportunities, right? Everybody knows how to take percentage data. And rate: how many times you do something in a certain period of time. Those are pretty common data collection strategies in speech pathology.
One of the less common ones that I love, and I swear I'm going to answer your question, is trials to criterion. Trials to criterion is a data collection strategy where you're looking at the number of opportunities, or the number of trials, that a person needs to achieve a certain outcome
threshold, or to achieve a predetermined standard of mastery. The reason that I love trials to criterion is because I have applied it to measuring the behavior of communication partners. And this is my story. I consult to a wide variety of programs in the Massachusetts area. Because I'm a BCBA, don't anybody hate me,
I'm not evil. Because I'm a [00:16:00] BCBA, I work a lot with behavioral programs, I work a lot with BCBAs, trying to integrate some of this speech pathology research, knowledge, best practice, and person-centered care into some of these programs. And in that work, in one program in particular, we did a lot of parent education, a lot of teacher education, around the importance of modeling, the importance of language bombardment, the importance of making someone's program linguistically rich.
And what we did, to kind of flip the script, was say: okay, how many trials does this one particular complex learner need to produce a word? How many models do they need? What's nice about this is that we switch from asking the question, what does a student know, to how do they learn? Once you know how a complex student learns: rinse, repeat, you've got the recipe, [00:17:00] let's make all the cookies, let's make all the words, let's do this again and again and again. When you really flip your thinking to asking questions and taking data to learn about how they learn, instead of what they know, you can apply that to the entire environment.
So in this particular example, we took trials to criterion data on the number of models that were provided in a day, to learn how many times this one kiddo needed to be exposed to this one word to be able to produce it. And the answer was hundreds. What's amazing is that he was able to get hundreds of exposures in a short period of time, because the staff got super competitive, and they started becoming more aware of their own behavior and their own roles and responsibilities as communication partners.
So another tangent; I guess that's my function and my role here today, to go off on these tangents. But the different data collection [00:18:00] strategies you choose can really help you shift the way you're thinking about where the quality data comes from, because it's not just your student,
it's not just your client. You could be looking at data about the environment. You could be looking at data about the communication partners. We're really talking about a whole human and a whole microcosm, a whole environment, a whole set of variables that we need to consider for AAC success.
I hope I answered your question again.
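[Editor's note: the trials-to-criterion measure Kate describes can be sketched as a simple running tally. This is a minimal illustration, not part of the episode; the session log and the mastery definition of "three consecutive independent productions" are hypothetical.]

```python
def trials_to_criterion(trials, criterion=3):
    """Count how many trials (e.g., adult models of a target word)
    occur before the learner meets a mastery criterion, here defined
    as `criterion` consecutive independent productions."""
    consecutive = 0
    for count, produced in enumerate(trials, start=1):
        # Reset the streak on any trial without an independent production
        consecutive = consecutive + 1 if produced else 0
        if consecutive == criterion:
            return count  # trials needed to reach criterion
    return None  # criterion never met in this data set

# Hypothetical session log: True = independent production after a model
session_log = [False] * 6 + [True, False, True, True, True]
print(trials_to_criterion(session_log))  # -> 11
```

The same tally works for communication partners: log each model provided and mark when the learner first produces the word, and the count answers "how many exposures did this learner need?"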
Ana Paula Mumy: Yes, thank you. One more thing that I wanted to touch on, before I pose another question to you, Kate, is that I found a variety of data collection sheets that were downloadable for free. And again, for me, it was helpful just to think through how they are structured and organized in different ways.
So there were some that were goal-based versus prompt-based data collection. One example was data collection that was based on [00:19:00] modeled words. So you select a word and then, of course, show the child what happens when you select that word, having that sequence of modeling and then giving them a taste of what that produces, right,
or what the outcome is after that happens. Then it was also a word selected by the child after a prompt was provided, and then a word selected spontaneously by the child without any prompting. So they had essentially an MPS format, where you were tracking modeled words, words that were prompted, and then words that were spontaneous.
So MPS was one way that it was structured. Another example was based on a variety of language functions, like requesting, protesting, commenting, [00:20:00] describing, negotiating, and so on. So there were lots of different options to think through, because I
feel like so often we get stuck on just requesting, right? It's just requesting with a button, and that's the only thing that counts or that really is being monitored, when there's so much more, right, that we can look for when it comes to language use, beyond just number of words. So those were really helpful for me to look through, just in terms of thinking about efficiently tracking usage with different parameters.
Do you have anything to add in relation to that?
Kate Grandbois: I think,
you know, everybody wants a good data sheet. Data sheets are better than your sticky note scratched with tally marks. I think something that you bring up that's really important to think about is the relationship between data collection and goal writing.
Again, this is a whole other [00:21:00] episode that we've done, on the importance of measurement and target selection; we will link that in the show notes as well. The short CliffsNotes version is: we think of data collection and goal writing as something that happens in a sequence.
So first we write our goal, then we take our data. And that's absolutely not the case. We need to be thinking of these as two things that happen in tandem; they influence one another. Before we write our goal, we want to be thinking about what the data collection is going to look like.
Is it reasonable? Is it doable? Who's collecting the data? How often is it going to get collected? You also want to think about your target when you're writing your goal, and how your target is going to get measured. Is it a target skill that's really fleeting, so you have to be watching the entire time?
Is it a target skill that is prolonged? Is it something that's low frequency, so you're going to be lucky if it happens once a day? Or is it high frequency, where it's potentially happening multiple times in a half hour? All of [00:22:00] these things are really important to consider when you are thinking about your data collection.
And when it comes to recording your data, there are a lot of different ways in AAC to do that. I think the best way is the one that works for you, that keeps your hands free, that keeps your attention on your client. There are a handful of strategies that I think are worth considering.
The first is maybe a tally counter or a golf counter. I don't know why it's called a golf counter, but they're the little clickers that bouncers use at clubs, clicking for capacity in a bar or whatever. Those are nice to hang on your belt with a carabiner.
You could do one on each side. And then, if you're doing percentage data, your right side [00:23:00] is for successful trials, your left side is for unsuccessful trials. At the end of the session, you've got a total percentage, and you didn't have to do any sticky notes. Another consideration would be probe data.
So probe data is really complex; we have a whole episode on probe data. The short version of the story is that with any data collection system you have, you want your data collection to be accurate, reliable, and valid. If you are not measuring accurately, then you're not going to be able to inform your goals, and you're not going to be able to measure progress.
In a lot of instances, it's impossible to track every single instance of an occurrence. That is when you get into the trouble of not being able to engage with your client and have a nice connected session. A potential answer to this problem is probe data, where you're only recording a predetermined set
of a certain number of trials. The problem with probe data is that it can be really [00:24:00] inaccurate. It can violate that ideal standard of data collection that's accurate, reliable, and valid. The way to mitigate that and take probe data in a better way: the more probes you take, the more accurate it is.
And if you add a qualifier to that, so let's say you're recording the first three trials, but you're also recording whether or not it was prompted, and you're also recording the duration, you're adding some qualifier onto the probe. The more probes you take, and the more qualifiers you add, the more accurate it is.
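[Editor's note: the "probe plus qualifiers" idea, record only the first few trials per session, but tag each one with extra information, could be sketched roughly as follows. The field names and numbers here are hypothetical illustrations, not from the episode.]

```python
# Minimal sketch of probe data collection: record only the first
# n trials per session, but attach qualifiers (prompt status, duration)
# so each probe carries more information than a bare correct/incorrect.
from dataclasses import dataclass

@dataclass
class Probe:
    correct: bool
    prompted: bool       # qualifier 1: was a prompt or model given?
    duration_sec: float  # qualifier 2: latency/duration of the response

def summarize(probes, n=3):
    sample = probes[:n]  # only the predetermined first n trials count
    independent = [p for p in sample if p.correct and not p.prompted]
    return {
        "probes_taken": len(sample),
        "independent_correct": len(independent),
        "mean_duration": sum(p.duration_sec for p in sample) / len(sample),
    }

# Hypothetical session: four responses, only the first three are probed
session = [Probe(True, False, 4.0), Probe(True, True, 2.5),
           Probe(False, False, 6.0), Probe(True, False, 3.0)]
print(summarize(session))
```

Because each probe carries its qualifiers, a prompted success is not counted the same as an independent one, which is exactly the accuracy gain described above.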
And again, we have an entire hour-long episode that reviews the research. It's very dry; I know it sounds boring, but we really liked it. So there are a lot of different ways that you can make your probe data more accurate, reliable, and valid. I think another problem with AAC data collection in particular is that it feels cumbersome, because you've got this extra device.
So you're like: I've got the device, and I've got the student, and now I have these golf counters and a pen and a sticky note, and there are all these things. [00:25:00] It can feel really overwhelming. And I think there is often, in a lot of instances, a big temptation to use the internal tracking system.
A lot of our devices come with internal data collection trackers, where you can toggle it on and it will record every instance of a target behavior, or every instance of a target communication, or, rather, every instance of an icon selection. Those are really tempting, but they have a lot of limitations.
So the first major limitation is ethics. We have heard from the AAC community that these mechanisms feel very much like spying. Imagine if there was someone walking around with you all day, following you around, listening to every single thing you said, but they didn't tell you they were listening.
If we're going to use these mechanisms, we need to be extremely careful about turning them on and off, and doing it with informed consent. And that's informed consent from the AAC user and potentially their families, depending on their age and all these [00:26:00] other kinds of things. The other thing that we really need to think about with these internal data collection systems is the law.
So, depending on your state, there is a potential that you are violating a privacy law by taking this data and storing it in a cloud that is not part of your district. It could be a violation of FERPA. Here in Massachusetts, we have to be very careful about that, and we have families sign additional permissions. Some schools and some programs I work with won't even do it, because it is too close to some violations.
So check with your administrators, check with your state, and make sure that use of these tools is even within the provision of what would be considered secure storage of data as part of an educational file. So that's another consideration there. The last limitation of these internal tracking systems is that they are going to track everything.
These machines don't know if it's you that selected the button or the student that selected the button. So if you [00:27:00] are using them for short periods of time, because we know we have to turn them off so that we're not, theoretically, following someone around listening to what they say all day,
then while it is on, you want to make sure that you're only capturing what it is that you're measuring. So if you're capturing models from a communication partner, you want to make sure that the student doesn't select an icon. Or, conversely, if you're using it to measure independent student productions, you have to make sure you're not providing any prompting or any modeling on the device, so that the tally mark you're getting is actually an independent production of the student.
So there are a lot of limitations to those, and those are all really important things to consider.
Ana Paula Mumy: Absolutely. Yeah, I hadn't thought about that. And I think in some ways that almost defeats the purpose of the [00:28:00] therapy strategy of modeling, right? If your goal is to model a ton and to really see that growth through modeling, then you almost would be shooting yourself in the foot if you used it,
right? Well, I want to just ask a follow-up question in relation to the goal writing. You kind of already answered the initial question that I had, so I'm just going to add on to the question, again thinking about our member who's mentoring someone,
and really just understanding that relationship between goal writing and data collection. How might she help her mentee with goal writing to help with better data collection? Does that make sense?
Kate Grandbois: Yeah. And I think, you know, again, this is a really great question. And like you said at the beginning of the episode, to really answer this question well, we need a lot more information, right? Because we don't know if this individual is a complex learner; we don't know what their goals are.
And the goals and [00:29:00] targets are going to have a significant impact on how progress is monitored, because, again, data collection and goal writing are BFFs. They cannot be separated. You cannot do one without the other. They don't happen in a sequence; they happen in tandem, and they influence each other continually.
Because as you're monitoring progress, theoretically, if they're making progress, the goal may need to be adjusted, right? That's why we have annual IEP meetings, because we're rewriting goals based on progress. I think when you're working with a mentee
and you're trying to unpack some of these concepts, I would go back to the goal first, and I would go back to the target, and think about what it is that you're measuring and all the variables that will influence how that measurement is taken.
That might include all of the things that we've already mentioned: the environment, the communication partners, how fleeting the communication [00:30:00] target is. And I also think it's important to have a little bit of forward thinking. I know this is a question about a school environment, so theoretically you have an entire year under the IEP to take that data.
But having a really good baseline measurement is also really important, because if you don't know where you started, how do you know where you're going? And for some particularly complex learners, really small steps are really huge deals, and we don't want to miss them. We don't want to fail to give credit where it's due, for our students who are working so hard, and the paraprofessionals, the teachers, and the whole team who are working so hard. Right. So I would also be asking a lot of questions about where the student is currently, and taking really good baseline measurement, so that you have a strong foundation off of which to judge what progress was made. Did I answer your question? [00:31:00]
Ana Paula Mumy: Yes. Okay. So do we have time to just touch on: do you have favorite ways to measure baseline, or recommendations or strategies, like your go-to options for baseline?
Kate Grandbois: Oh, that's a really good question. I think I don't have a standard one, because it is going to be influenced so much by the learner and the environment.
Um, I think in a perfect universe, we would take enough baseline measurement to have a solid understanding that this is exactly where the student is and not just a bad day. Um, particularly for more complex learners who might be, you know, presenting with sleep disturbances or, you know, there might be other things going on in the child's life that make that one day that you took baseline measurement, not the best day for baseline measurement.
So in a perfect world, we would have a decent amount of baseline, a decent amount of measurement at the beginning of treatment, to have a good understanding of where [00:32:00] we are, so we can decide where we're going. Um, I also think that designing data collection systems that feel achievable and doable is really important, because if it's not achievable and doable, then the data that you collect is going to be inaccurate. Uh, in one of our data collection courses, we talk about this in a little bit more depth, but there's an expression: garbage in, garbage out. So if your data collection is inaccurate, that's going to inform your progress in an inaccurate way, which is going to lead to inaccurate decision making and clinical reflection, um, and potentially poor choices for implementation and intervention.
So really making sure that we hold data collection strategies that are, um, accurate, reliable, and valid at the center is really, really important. And there's really no gold standard, because it's such a customized experience. We also have a handout on our website, which I will include in the show notes as well, [00:33:00] on what accurate, reliable, and valid data means.
Um, and again, I'd refer people back to some of our previous work in data collection, just because I recognize that this is a very nuanced conversation.
Ana Paula Mumy: Yes, well, and just, you know, one takeaway for me, just as you were talking is to really think about representative samples, right?
And when we think about this, whether it's a speech sample for an artic kid or a language sample for a child with a developmental delay, I mean, it doesn't matter what the situation is: we have to make sure that we are doing what we can to sample, um, their speech, their device usage, whatever, in the best way possible, while also yielding the most representative sample possible.
And so it might take more than one trial, right, to get there, because, like you said, there are so many variables that could impact that individual's willingness [00:34:00] to participate, or willingness to show what they do know or what they are capable of doing. So, um, yeah, that's important. I think sometimes, you know, with AAC, we tend to maybe think differently, or we don't use the basic knowledge that we already have, like, yeah, in the same way that this applies to X, Y, Z, it's going to also apply to our AAC users.
Um, there are maybe, like you said, nuances or different things that we have to take into consideration, but there are still some basics that are just foundational, right?
Kate Grandbois: Totally agree. Totally agree. So, I mean, I think I really appreciate the literature that you brought to the table. I know I shared quite a bit, but I couldn't help myself, because this is my clinical area.
Ana Paula Mumy: I love learning from you.
Kate Grandbois: So great. Um, we will link all of the additional resources in the show notes. And Ana Paula, was there anything else that you wanted to [00:35:00] share?
Ana Paula Mumy: No, that's it. That's all I had for today.
Kate Grandbois: And to the listener who wrote in this question, thank you so much for writing in.
We hope we did it justice, um, in terms of what you shared. Anyone out there who's listening, if you have a question for us and you're a member, please write in. We would love to read your questions, do a little literature search for you, and discuss your clinical case on the air. Um, Dr. Ana Paula Mumy, thank you so much for being here. This was really wonderful, and we look forward to the next iteration of SLP On Demand.
Thank you so much for joining us in today's episode. As always, you can use this episode for ASHA CEUs. You can also potentially use this episode for other credits, depending on the regulations of your governing body. To determine if this episode will count towards professional development in your area of study, please check in with your governing bodies, or you can go to our website, [00:36:00] www.slpnerdcast.com. All of the references and information listed throughout the course of the episode will be listed in the show notes. And as always, if you have any questions, please email us at info@slpnerdcast.com.
thank you so much for joining us and we hope to welcome you back here again soon.