Displaying results 1 - 7 of 7
Project
Description: The Meadows Center for Preventing Educational Risk (MCPER) partnered with the University of Houston, The University of Texas Health Science Center at Houston, Texas A&M University, and Florida State University to improve the reading comprehension of students in grades 7 through 12.
Project
Description: Language sampling is a critical component of language assessments. However, language samples can be elicited in many ways, and the elicitation method likely influences the results. The purpose of this study was to examine how different discourse types and elicitation tasks affect various language sampling outcomes.
Project
Description: The analysis of narratives often accompanies comprehensive language assessments of students. Although analyzing narratives is time-consuming and labor-intensive, recent advances in large language models (LLMs) suggest that it may be possible to automate this process.
Dataset
Part of Project: Impact of Discourse Type and Elicitation Task on Language Sampling Outcomes
Description: These are the data for 1037 K-3 students who contributed oral academic language samples. https://doi.org/10.1044/2023_AJSLP-22-00365
Dataset
Part of Project: Automated Narrative Scoring Using Large Language Models
Description: Narrative language samples elicited using the ALPS Oral Narrative Retell and Oral Narrative Generation tasks from diverse K-3 students. The test data set was drawn randomly from the larger corpus of narrative language samples.
Dataset
Part of Project: Automated Narrative Scoring Using Large Language Models
Description: Narrative language samples elicited using the ALPS Oral Narrative Retell and Oral Narrative Generation tasks from diverse K-3 students. The training data set was drawn randomly from the larger corpus of narrative language samples.
Dataset
Part of Project: Automated Narrative Scoring Using Large Language Models
Description: Narrative language samples elicited using the ALPS Oral Narrative Retell and Oral Narrative Generation tasks from diverse K-3 students. The two-raters data set was drawn randomly from the larger corpus of narrative language samples. These samples were scored by two raters for reliability purposes.