Mon., 04/08/19
Met with Matt Cohen. Showed him the results of the Stats SE experiment.
Discussed what to do next.
One possibility would be to compare the effects of practicing retrieval, spacing of practice, or both on retention. Recruiting from the subject pool would make that tough: we had a very difficult time getting 199 of 364 trained subjects to come back for a retention test, even with a one-year sabbatical, an extension into a second year with a full teaching load, and a $15 payment for returning.
Another possibility might be to use the statistics and experimental classes for recruiting subjects. That would be more realistic, but getting high enough n's would be a problem. It would require the cooperation of other faculty to increase the n's, and that in turn raises the problem of asking them to follow our procedures when conducting their classes.
Let me try to imagine how using the stats and exp students might work. Let's suppose we stay with just learning the two associations (the easier: number of conditions, and the harder: types of conditions) used in Stats SE 2016.
Experimental condition (SE, spaced, practicing retrieving information during training, then a test): four training and test sessions in SE spaced throughout the semester, requiring students to retrieve followed by feedback.
Training sessions: do four examples (not worked examples) done by the student, with feedback by the instructor.
Test: take four test items, followed by feedback.
Control condition (RS, massed, information provided during training, then a test): one training session in RS at the end of the semester, providing the students with the information.
Training session: do sixteen examples (worked examples) read by the student.
Test: take sixteen test items, followed by feedback.
Then a retention test at the beginning of Experimental:
Test: sixteen items equivalent to the test at the end of the statistics training and test sessions.
Problems: How could we do the sessions en masse instead of individually? How could we do SE and RS in the same room at the same time?
Possible solutions: First, large n's are important in order to interpret a non-significant result as a lack of an effect, rather than as "either there isn't any effect, or it is just too small for us to have enough power to find it." But since this is an applied study, we are mostly interested in the effect only if it is large enough to find even with low power.
Second, if we could interest some statistics professors at the other PASSHE schools, or any others that would be willing to collaborate with us, then we might be able to increase the n’s. That would still leave us with the other problems. For example, in order to run subjects en masse in a class, and to run randomly assigned students to the SE and RS conditions at the same time, we would have to have them just silently reading instructions, not engaging in any verbal interaction with the experimenter/instructor.
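A minimal sketch of how that en-masse random assignment within one class might work (stdlib Python; the roster names and seed are hypothetical, and this is just an illustration of the idea, not a planned procedure):

```python
import random

def assign_conditions(roster, seed=None):
    """Shuffle the roster and split it into equal-sized SE and RS groups.
    With an odd-sized roster, the extra student goes to SE."""
    students = list(roster)
    random.Random(seed).shuffle(students)  # seed allows a reproducible split
    half = (len(students) + 1) // 2
    return {"SE": students[:half], "RS": students[half:]}

# Hypothetical roster of 24 students:
roster = ["student%02d" % i for i in range(1, 25)]
groups = assign_conditions(roster, seed=1)
print(len(groups["SE"]), len(groups["RS"]))  # 12 12
```

Each student could then be handed the silent-reading instruction packet for their assigned condition, so both conditions run in the same room at the same time.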
Perhaps the manipulation could be "massed/info-provided" versus "spaced/practice-retrieval." Since we have already established that SE works for immediate learning but does not improve retention, perhaps all the subjects could engage in SEing, not RSing.
Met with Chase Leckerman. Got him documented in the “members.xls” file.
Sent him instructions and URL to get the CITI certification. Basic – Social Behavioral Educational
Sent him the Ideas for Fall 2019
Got him on the OSF on the Stats SE project.
After the semester ended
Mon., 07/08/19
Cleaning up the OSF Ryan Lab Group project. I sent this email:
==================
from: Bob Ryan <cogprofessor@gmail.com>
to:
"Pritchard, Adam" <aprit681@live.kutztown.edu>,
Emmy Velazquez <evela303@live.kutztown.edu>,
Isaac Perez <ipere636@live.kutztown.edu>,
"Hess, Dalice" <dhess728@live.kutztown.edu>,
Olivia Jaindl <ojain658@live.kutztown.edu>,
John Riter <jriter515@gmail.com>,
Gage Baughman <gbaug313@live.kutztown.edu>,
Trisha Gillott <tgill664@live.kutztown.edu>,
Natalie Santiago <nsant636@live.kutztown.edu>,
James Koppenhofer <jkopp945@live.kutztown.edu>
date: Jul 8, 2019, 12:14 PM
subject: Removing contributors to Ryan Lab Group project on OSF
Hello all,
I'm in the process of trying to get my lab group re-organized. I have a Ryan Lab Group project on the OSF on which you are a contributor. However, the list of contributors is old. I believe everyone on the list is a person who has graduated from Kutztown, and my current research assistants are not on the project as contributors yet. So, as a first step, I'm proposing to remove anyone who has graduated and is not active.
I am planning to NOT remove James because we have been continuing our collaboration.
Please look at the list below. That is the list of graduates that I'm proposing to remove. If you agree that you don't need to be on the project any more, you don't need to do anything. If you'd prefer to be left on, you are welcome to be retained. But if you wish to stay on, please email me back before Fri. 7/12/19. Thanks.
Adam Pritchard
Emmy Velazquez
Isaac Perez
Dalice Hess
Olivia Jaindl
John Riter (subsequently asked to be left on)
Gage Baughman
Trisha Gillott
Natalie Santiago
--- Dr. Ryan
==================
Then I made sure that the current Res. Assts. are on the Ryan Lab Group project – James Koppenhofer, Chase Leckerman, Matt Cohen, Brooke Marton, Morgan Milano, and Meagan Carson.
To work on getting started for the Fall of 2019, I sent the following email:
from: Bob Ryan <cogprofessor@gmail.com>
to: "Leckerman, Chase" <cleck035@live.kutztown.edu>,
date: Jul 8, 2019, 3:10 PM
subject: Update for Ryan Lab Group
Hello all,
Here's an update on what I need my research assistants for the Ryan Lab Group to do. The next big project I need to have done is the removal of all personally identifying information from the response sheets of the StatsSE project. I have created a working version of the StatsSE project on the OSF for this work to be done on. The work would be to go to the working project, download a response sheet, and remove the first page. Then delete the response sheet on the working directory (which has the personally identifying info on the first page), and upload the response sheet that has the first page removed. Unfortunately, that needs to be done for all 364 response sheets.
If you could help with that work, please let me know. I'd like to have you come in to my office for me to show you how to do those steps, just to be sure it all goes smoothly. Also, if you're going to work on that job, I'd have to add you to the contributors on the working version of the project on the OSF.
Then we have to get a plan for data collection in the Fall. In the Spring 2019 meeting notes - twelfth week ( http://faculty.kutztown.edu/rryan/research/labgrp/meetings_2192.html#twelfth ) I documented a discussion I had with Matt Cohen about what we need to do next. If you're interested in helping to design the next project, please read that discussion and send me any ideas you have. Perhaps we could try piloting a procedure in my section of Statistics in the Fall.
Thanks,
--- Dr. Ryan
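The per-sheet steps in the email above (drop the identifying first page, save a cleaned copy) could in principle be batched. Here is a minimal sketch, under the loud assumption that the response sheets are plain-text files with pages separated by form-feed characters; the actual sheets on the OSF may well be PDFs or scans, which would need a different tool, and the directory names here are hypothetical:

```python
from pathlib import Path

def strip_first_page(text, page_sep="\f"):
    """Drop everything up to and including the first page separator.
    A file with no separator (single page) is returned unchanged."""
    head, sep, rest = text.partition(page_sep)
    return rest if sep else text

def deidentify_folder(src_dir, dst_dir):
    """Write a de-identified copy of every .txt response sheet in src_dir
    into dst_dir, leaving the originals untouched."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for sheet in sorted(Path(src_dir).glob("*.txt")):
        (dst / sheet.name).write_text(strip_first_page(sheet.read_text()))
```

Even if the files turn out to need manual handling, keeping the cleaned copies in a separate working directory (as the email describes) avoids clobbering the originals.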
Fri. 07/12/19
In addition to the ideas in http://faculty.kutztown.edu/rryan/research/labgrp/meetings_2192.html#twelfth, here is another important idea.
James Koppenhofer devised a method of scoring the experimental subjects' responses to the last two questions on the last two training examples for the quality of their explanations. First, he showed that there was a positive correlation between those quality measures and performance on the posttest.
The posttest questions were of two types: questions about the number of conditions (which we considered an easier, more familiar concept because it just meant "only two" or "more than two") and questions about the types of conditions (which we considered a harder, more unfamiliar concept because it meant "between subjects" or "within subjects"). The explanations were also about those two different issues. So James also examined the correlation between the quality of the explanations and the posttest performance separately for the two types of question. He found that the correlation for the easier concept was lower and not quite statistically significant, while the correlation for the harder concept was stronger and statistically significant.
But what about retention? James began his work before the retention data were all scored and checked. Any errors we later found were in the retention data, so James was working with correct data. But the same kinds of analyses that James did with the posttest could now be done with retention. It's possible that even though we found no retention overall, those experimental subjects who did an especially good job on their explanations might have exhibited some retention. If so, that's important to know.
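Re-running James's analysis on retention would amount to an ordinary Pearson correlation between explanation-quality scores and retention scores. A minimal stdlib sketch (the quality and retention numbers below are hypothetical illustrations, not StatsSE data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: explanation-quality scores vs. retention-test scores
quality = [1, 2, 2, 3, 4, 4, 5]
retention = [3, 4, 5, 5, 6, 7, 8]
print(round(pearson_r(quality, retention), 2))
```

As with the posttest analysis, the correlation could be computed separately for the easier (number of conditions) and harder (types of conditions) questions.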
Wed., 07/31/19
I now have John Riter and Joseph Moyer as contributors on the working version of the StatsSE study on the OSF, for them to strip identifying info from the response sheets.