Learning Disabilities Research & Practice, 25(2), 60-75
© 2010 The Division for Learning Disabilities of the Council for Exceptional Children

Creating a Progress-Monitoring System in Reading for Middle-School
Students: Tracking Progress Toward Meeting High-Stakes Standards

Christine Espin, Teri Wallace, Erica Lembke, Heather Campbell,
and Jeffrey D. Long
University of Minnesota

In this study, we examined the reliability and validity of curriculum-based measures (CBM)
in reading for indexing the performance of secondary-school students. Participants were 236
eighth-grade students (134 females and 102 males) in the classrooms of 17 English teachers.
Students completed 1-, 2-, and 3-minute reading aloud and 2-, 3-, and 4-minute maze selection
tasks. The relation between performance on the CBMs and the state reading test was examined.
Results revealed that both reading aloud and maze selection were reliable and valid predictors
of performance on the state standards tests, with validity coefficients above .70. An exploratory
follow-up study was conducted in which the growth curves produced by the reading-aloud and
maze-selection measures were compared for a subset of 31 students from the original study. For
these 31 students, maze selection reflected change over time whereas reading aloud did not. This
pattern of results was found for both lower- and higher-performing students. Results suggest
that it is important to consider both performance and progress when examining the technical
adequacy of CBMs. Implications for the use of measures with secondary-level students for
progress monitoring are discussed.

In recent years, much attention has been directed to early
intervention and prevention in reading. An alternative to a
singular focus on early intervention is an approach in which
early intervention is combined with continuous, long-term,
intensive interventions for struggling readers. Long term
in this approach refers to reading instruction that extends
into the high school years. The goal of such an approach
would be to diminish the magnitude of reading difficulties
experienced by struggling readers and increase the likelihood
of postgraduation success. Supporting the notion that long-
term, intensive reading interventions may be needed for a
select group of students are two sources of data: (1) results
of early intervention studies and (2) results of secondary-
school studies for students with learning disabilities.

Need for Long-Term, Intensive Intervention
Efforts

Recent research on the effects of early identification and
intervention programs has produced promising outcomes
and demonstrated reductions in the magnitude and preva-
lence of reading failure (O'Connor, Fulmer, Harty, & Bell,
2005; O'Connor, Harty, & Fulmer, 2005; Vaughn, Linan-
Thompson, & Hickman, 2003). However, these studies also

Requests for reprints should be sent to Christine Espin, Wassenaarseweg
52, PO Box 9555, 2300 RB Leiden, The Netherlands. Electronic inquiries
should be sent to [emailprotected]

have uncovered a small group of children who fail to thrive
(Vaughn et al., 2003), even when given intensive and poten-
tially powerful interventions. Such children either do not
reach a level of performance that warrants placement into a
typical instructional setting or do not maintain satisfactory
levels of performance without continued intensive interven-
tions. These students have reading difficulties that seem to be
especially resistant to change (see Torgesen, 2000) and are
often considered to have learning disabilities (LD).

Research at the secondary-school level reveals that stu-
dents with LD continue to experience reading difficulties
well into their high school years. Secondary-school students
with LD experience difficulties with phonological, language
comprehension, and reading fluency skills (Fuchs, Fuchs,
Mathes, & Lipsey, 2000; Vellutino, Fletcher, Snowling, &
Scanlon, 2004; Vellutino, Scanlon, & Tanzman, 1994; Vel-
lutino, Tunmer, Jaccard, & Chen, 2007). They typically per-
form at levels 4-6 years behind non-LD peers in reading
and score in the lowest decile on reading achievement tests
(Deshler, Schumaker, Alley, Warner, & Clark, 1982; Levin,
Zigmond, & Birch, 1985; Warner, Schumaker, Alley, & Desh-
ler, 1980). For example, on the 2007 National Assessment
of Educational Progress (Lee, Grigg, & Donahue, 2007), 66
percent of students with disabilities in public schools scored
below a Basic Level, compared to only 24 percent of students
without disabilities. (A Basic Level implies partial mastery
of the knowledge and skills needed for proficient work at a
given grade level.)

Taken together, research on younger and older children
with reading difficulties produces a picture of students whose
reading difficulties begin early and persist throughout their
school career. For such students a program of intervention
that begins early, and then continues throughout their school
careers, is needed.

Reading Interventions at the Secondary-School
Level

Two questions arise when considering reading interventions
for secondary-school students with LD. The first is: At what
level do students need to read to be successful after high
school graduation? In recent years, this question often has
been addressed through the development of state standards
tests in reading. Such tests define, by design or default, the
level of reading considered to be necessary for students to
be successful at the secondary-school level, and this despite the
fact that the extent to which many state tests reflect the type of
reading necessary for success either in school or in postsec-
ondary settings is unknown. However, given the high-stakes
nature of state tests for schools in terms of meeting No Child
Left Behind standards, and for students who are required to
pass reading tests to graduate (as is the case in 23 states;
Center on Education Policy, 2008), the tests are an important
outcome for students and schools at the secondary-school
level.

The second question is: How can we determine whether
our reading interventions are effective? The reading progress
of secondary-school students with LD might prove to be
slow and incremental, but not necessarily unimportant. For
example, improvement of even one grade level (to use a
typical metric) in reading over the course of 4 years in high
school might translate into large advantages in post-high
school settings. Yet are there instruments that are sensitive
to such slow and incremental growth? Are those instruments
reliable and valid, and can they be tied to success on tasks
of importance, such as performance on state reading tests
or performance in postsecondary educational settings? One
instrument that might potentially fulfill these requirements is
curriculum-based measurement (CBM).

CBM

CBM is a system of measurement designed to allow teachers
to monitor student progress and evaluate the effectiveness of
instructional programs (Deno, 1985). The success of CBM
relies on two key characteristics: practicality and technical
adequacy (Deno, 1985). With respect to practicality, if the
measures are to be given on a frequent basis, they must be
time efficient and easy to develop, administer, and score and
must allow for the creation of multiple equivalent forms. With
respect to technical adequacy, if the measures are to provide
educationally useful information, they must be valid and re-
liable indicators of performance in an academic area. For a
measure to be considered a valid indicator of performance,
evidence must demonstrate that performance on the measure
relates to performance in the academic domain more broadly.

In reading, the number of words read correctly in 1 minute
is often used as a CBM indicator of general reading perfor-
mance at the elementary-school level (Wayman, Wallace,
Wiley, Ticha, & Espin, 2007). One-minute reading-aloud
measures are time efficient and easy to develop, adminis-
ter, and score, and they allow for the creation of multiple
equivalent forms. Further, a large body of research supports
the relation between the number of words read aloud in 1
minute and other measures of reading proficiency, including
reading comprehension (see reviews by Marston, 1989; Way-
man et al., 2007). Although most CBM reading research has
focused on a reading-aloud measure, support also has been
found for the technical adequacy of a maze-selection mea-
sure (see Wayman et al., 2007). In a maze-selection measure,
every seventh word of a passage is deleted and replaced with
a multiple-choice item consisting of the correct word plus
two distracters. Students read through the text and choose
the correct word for each multiple-choice item. Specific to
the present study, both reading-aloud (Crawford, Tindal, &
Stieber, 2001; Hintze & Silberglitt, 2005; McGlinchey &
Hixson, 2004; Silberglitt & Hintze, 2005; Stage & Jacobsen,
2001) and maze-selection measures (Wiley & Deno, 2005)
have been shown to predict performance on state standards
tests.

Although research supports the technical adequacy of
both reading aloud and maze selection, the majority of
that research has been done at the elementary-school level
(Wayman et al., 2007). Far less research has been conducted
in reading at the secondary-school level, even though the re-
sults of cross-age studies suggest that the nature and type of
CBM in reading might need to change as students become
older and more proficient readers (Jenkins & Jewell, 1993;
MacMillan, 2000; Yovanoff, Duesbery, Alonzo, & Tindal,
2005). Many of the studies that have been conducted in read-
ing at the secondary-school level have focused on reading
as it relates to learning in the content areas (e.g., Espin &
Deno, 1993a, 1993b; Espin & Deno, 1994-1995; Fewster &
MacMillan, 2002) rather than on the development of general
reading proficiency. However, a small group of studies has
focused on general reading proficiency.

Fuchs, Fuchs, and Maxwell (1988) examined the va-
lidity of reading aloud for students with mild disabili-
ties across grades 4-8. Across-grade correlations between
words read correctly (WRC) in 1 minute and scores
on comprehension and word study subtests of a stan-
dardized achievement test were .91 and .80, respectively;
however, because the study was not specifically focused
on the secondary-school level, correlations were not re-
ported separately for the secondary-school students in the
study.

Three subsequent studies focused specifically on
secondary-school students. Espin and Foegen (1996) ex-
amined the validity of three CBMs (reading aloud,
maze selection, and vocabulary matching) on the
comprehension, acquisition, and retention of expository
text for students in grades 6-8. Comprehension, acquisi-
tion, and retention were measured with researcher-designed,
multiple-choice questions given immediately after reading
(comprehension), immediately after instruction on the text
(acquisition), and a week or more following instruction
(retention). Correlations ranged from .54 to .65 and were
similar for comprehension, acquisition, and retention mea-
sures. Brown-Chidsey, Davis, and Maya (2003) examined
the reliability and validity of a 10-minute maze task (a
somewhat long task by CBM standards) as an indicator of
reading for students in grades 5-8. They found that scores
generally differentiated students by grade level and special
education status. Rasinski et al. (2005), in discussing the
importance of reading fluency for high school students, re-
ported correlations between WRC in 1 minute and scores
on a state standards test of .53 for ninth-grade students.
Descriptive data and methods were not reported in the
article.

In sum, little research has been conducted at the
secondary-school level on the development of CBM read-
ing measures as indicators of general reading proficiency,
and that which has been done has been limited in terms of
measures and methodology, or has not focused specifically
on secondary-school students. What is more, the research to
date has focused on the characteristics of the measures as
performance or static measures, not as progress or growth
measures. The validity and reliability of the measures may
differ based on their intended use.

In this article, we examine the technical adequacy of CBM
reading measures for secondary-school students. Specif-
ically, the reliability and validity of CBMs as predic-
tors of performance on a state standards test in reading
are examined. Differences related to time frame and scor-
ing procedure are examined. Reading-aloud and maze-
selection measures are selected because of previous re-
search demonstrating their practical and technical adequacy
at the elementary-school level and their potential promise
at the secondary-school level. Time frames are examined
because longer samples of work might be needed at the
middle-school level to obtain a distribution of student scores.
For example, reading-aloud scores might bunch together at
1 minute but spread out at 3 minutes. Finally, scoring pro-
cedures are examined to determine the influence of er-
rors on the reliability and validity of students scores.
For example, counting the number of correct selections
on a maze task is less time consuming than counting the
number of correct minus incorrect selections, but using
a correct minus incorrect score may help to control for
guessing.

Two research questions are addressed in the study:

(1) What are the reliability and validity of reading aloud
and maze selection for predicting performance on a
state standards test in reading?

(2) Do reliability and validity vary with time frame and
scoring procedures?

Our primary focus was on the technical adequacy of
CBMs as static measures or indicators of performance at
a single point in time. However, we were also able to col-
lect progress measures on a small subsample of the orig-
inal sample. Thus, we conducted an exploratory study in
which we compared the growth rates produced by reading-
aloud and maze-selection measures for this subsample of
students.
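
Although the growth analyses belong to the exploratory follow-up study, the general idea of treating repeated CBM scores as a progress measure can be illustrated with a simple least-squares slope. The sketch below is not the authors' analysis; the weekly schedule and maze scores are hypothetical, and it only shows how a per-week growth rate might be summarized for a single student.

    # Minimal sketch: summarizing repeated CBM scores as a weekly growth rate.
    # This is an illustration, not the analysis used in the article; the score
    # values and weekly schedule below are hypothetical.

    def growth_slope(weeks, scores):
        """Ordinary least-squares slope of scores regressed on week numbers."""
        n = len(weeks)
        mean_w = sum(weeks) / n
        mean_s = sum(scores) / n
        num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
        den = sum((w - mean_w) ** 2 for w in weeks)
        return num / den

    # Hypothetical maze-selection scores (correct choices) collected weekly.
    weeks  = [1, 2, 3, 4, 5, 6, 7, 8]
    scores = [14, 15, 15, 17, 18, 18, 20, 21]

    print(round(growth_slope(weeks, scores), 2))  # 1.0 correct choice gained per week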

STUDY 1: READING ALOUD AND MAZE
SELECTION AS PERFORMANCE INDICATORS

Method

Setting and Participants

The study took place in two middle schools in an urban dis-
trict of a large, midwestern metropolitan area. The district
enrolled over 47,000 students. Seventy-five percent of the
students were from diverse cultural backgrounds, 24 percent
received ESL services, 67 percent were eligible for free and
reduced lunches, and 13 percent were in special education.
The first school had 669 students in grades 6-8. Eighty-
three percent of the students were from diverse cultural back-
grounds, 35 percent received ESL services, 83 percent were
eligible for free and reduced lunches, and 15 percent were
in special education. The second school had 778 students
in grades 6-8. Sixty-two percent of the students were from
diverse cultural backgrounds, 18 percent received ESL ser-
vices, 56 percent were eligible for free or reduced lunches,
and 16 percent were in special education.

All eighth-grade students were invited to participate in
the study to ensure a range of student performance levels.
Participants were 236 eighth-grade students (134 females
and 102 males) in the classrooms of 17 English teachers
from the two schools. Fifty-eight percent of the participants
were eligible for free or reduced lunches. Students were
Caucasian (34 percent), Asian American (24 percent),
African American (20 percent), Hispanic (19 percent), and
Native American (3 percent). Nine percent of the students
were receiving special education services for learning
disabilities or mild disabilities (4 percent), speech and
language (3 percent), emotional and behavior disorders (1
percent), or other health impaired (1 percent). Fifty-eight
percent of the students spoke English at home. The rest
spoke Spanish (18.5 percent), Hmong (16 percent), Laotian
(4 percent), Vietnamese (1 percent), Cambodian (1 percent),
Amharic (0.5 percent), Chinese (0.5 percent), and Somali
(0.5 percent). The mean standard score on the state standards
reading test for Sample 1 was 626.9. This compared to a
state-wide mean score of 640.6 and a district-wide mean
score of 607.3.

Note that the sample did not consist of struggling read-
ers only, even though the primary purpose of the study was
to identify performance and progress measures for strug-
gling readers. To establish the reliability and validity of
CBM, it was necessary to have a sample that represented
a range of student ability levels, because validity and relia-
bility coefficients could be negatively affected by a truncated
distribution of scores. We had two options. One was to se-
lect students who were struggling readers across a range
of grade levels, similar to the approach taken by Fuchs
et al. (1988). A second was to work within one grade level,
but to include students across a range of performance lev-
els within that grade. Given that the purpose of the study
was to tie the CBM to performance on a state standards
test, and given that the state standards test was given in only
one grade, we chose the latter approach. This approach is
not unique. In a review of the CBM research in reading
(Wayman et al., 2007), 28 of the 29 technical adequacy stud-
ies conducted at the elementary-school level used general
education samples (13 studies) or mixed samples of general
and special education (15 studies). Only 1 used an exclusively
special education sample.

Measures

Predictor variables. Predictor variables were scores on two
CBM tasks: reading aloud and maze selection. The reading-
aloud and maze-selection tasks were drawn from human-
interest stories published in the local daily newspaper and
were selected on the basis of content, readability level, length,
and scores on a pilot test conducted with four students who
were not involved in the study. Passages whose content was
determined to be too technical or culturally specific were not
used. To ensure that students would not complete the CBM
tasks before time expired, only passages that were longer
than 800 words were selected. Readability was calculated us-
ing the Flesch-Kincaid formula (Kincaid, Fishburne, Rogers,
& Chissom, 1975) via Microsoft Word, and the Degrees of
Reading Power (DRP; Touchstone Applied Science and As-
sociates, 2006). Readability levels for the selected passages
ranged from fifth to seventh grade and DRP levels ranged
from 51 to 61. Means (number of words read aloud in 3 min-
utes) and standard deviations from the pilot study for selected
passages were: 421.5 (SD = 80.5), 489.5 (SD = 117), 432.5
(SD = 140.5), and 401.7 (SD = 75).
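
For reference, the Flesch-Kincaid index used here is a fixed formula over average sentence length and average syllables per word. The sketch below applies the standard formula; the passage counts are hypothetical, and in practice the syllable total would come from a dictionary or heuristic syllable counter rather than hand-entered values.

    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    # The counts below are hypothetical, not statistics from the study's passages.

    def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
        return (0.39 * (total_words / total_sentences)
                + 11.8 * (total_syllables / total_words)
                - 15.59)

    grade = flesch_kincaid_grade(total_words=850,
                                 total_sentences=60,
                                 total_syllables=1150)
    print(round(grade, 1))  # about 5.9, roughly a sixth-grade readability estimate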

The reading-aloud task was administered to students on
an individual basis using standardized administration proce-
dures. Students read aloud from the passage while the exam-
iner followed along on a numbered copy of the same passage,
making a slash through words read incorrectly or words sup-
plied for the student. The examiner timed for 3 minutes using
a stopwatch, marking progress at 1, 2, and 3 minutes. Read-
ing aloud was scored for total words read (TWR) and WRC
at 1, 2, and 3 minutes.
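
As a concrete illustration of this scoring, the short sketch below turns per-minute tallies into cumulative TWR and WRC at the 1-, 2-, and 3-minute marks. The tallies are hypothetical; in the study, examiners counted from marked paper copies rather than from code.

    # Cumulative total words read (TWR) and words read correct (WRC) at each
    # minute mark. The per-minute tallies here are hypothetical examples.

    words_attempted_per_minute = [128, 124, 121]  # minutes 1, 2, 3
    errors_per_minute          = [7, 5, 6]

    twr = []
    wrc = []
    running_words, running_errors = 0, 0
    for attempted, errors in zip(words_attempted_per_minute, errors_per_minute):
        running_words += attempted
        running_errors += errors
        twr.append(running_words)                   # total words read so far
        wrc.append(running_words - running_errors)  # correct = total minus errors

    print(twr)  # [128, 252, 373]
    print(wrc)  # [121, 240, 355]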

Maze-selection passages were created from the same sto-
ries used for reading aloud. Every seventh word was deleted
and replaced by the correct choice and two distracters. The
distracters were within one letter in length of the correct
word but started with different letters of the alphabet and
comprised different parts of speech (see Fuchs, Fuchs, Ham-
lett, & Ferguson, 1992, for maze-construction procedures).
The three word choices were underlined in bold print and
were not split at the end of the sentence in order to preserve
continuity for the reader.
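
A simplified version of this construction step might look like the sketch below: every seventh word becomes a three-choice item consisting of the correct word and two distracters. The distracter rules are only partly mirrored (length within one letter and a different starting letter); matching part of speech would require a part-of-speech source and is omitted, and the word pool and function names are illustrative assumptions rather than the authors' procedure.

    import random

    # Simplified maze-construction sketch: every seventh word is replaced by a
    # three-choice item (the correct word plus two distracters). The real
    # procedure also matched distracters on part of speech; that step is omitted.

    DISTRACTER_POOL = ["garden", "planet", "window", "basket", "singer",
                       "umbrella", "remaining", "wonderful", "bridge", "napkin"]

    def pick_distracters(word, pool, k=2):
        candidates = [w for w in pool
                      if abs(len(w) - len(word)) <= 1        # within one letter in length
                      and w[0].lower() != word[0].lower()]   # different starting letter
        if len(candidates) < k:                              # fall back if the pool is too small
            candidates = [w for w in pool if w.lower() != word.lower()]
        return random.sample(candidates, k)

    def build_maze(text, pool=DISTRACTER_POOL):
        words = text.split()
        items = []
        for i, word in enumerate(words, start=1):
            if i % 7 == 0:                                   # every seventh word
                choices = [word] + pick_distracters(word, pool)
                random.shuffle(choices)
                items.append((word, choices))
        return items

    sample = ("The reporter followed the family through their first year in the "
              "city and described the small victories that slowly changed their lives")
    for correct, choices in build_maze(sample):
        print(correct, "->", choices)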

The maze-selection task was administered to students in a
group setting using standardized administration procedures.
Students read silently for 4 minutes, making selections for
each multiple-choice item. Examiners timed for 4 minutes
and instructed students to mark their progress with a slash
at 2, 3, and 4 minutes. Examiners monitored to ensure that
students made the slashes. Maze selection was scored for cor-
rect maze choices (CMC) and correct minus incorrect choices
(CMI) in 2, 3, and 4 minutes. As a control for guessing, and
following the procedures used in previous research on maze
selection (Espin, Deno, Maruyama, & Cohen, 1989; Fuchs
et al., 1992), maze scoring was stopped when three consec-
utive incorrect choices were made. A recent investigation
comparing different maze-selection scoring procedures re-
vealed no differences in criterion-related validity associated
with using a two-in-a-row versus three-in-a-row incorrect
rule (Wayman et al., 2009).
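
Put in code form, the scoring rules just described might look like the sketch below: count correct maze choices (CMC) and correct minus incorrect choices (CMI), and stop scoring once three consecutive incorrect selections occur. The response sequence is a hypothetical example, not study data.

    # Maze-scoring sketch: CMC = correct maze choices, CMI = correct minus
    # incorrect choices, with scoring stopped after three consecutive errors.

    def score_maze(responses):
        """responses: sequence of booleans, True = correct selection."""
        correct = incorrect = consecutive_wrong = 0
        for is_correct in responses:
            if is_correct:
                correct += 1
                consecutive_wrong = 0
            else:
                incorrect += 1
                consecutive_wrong += 1
                if consecutive_wrong == 3:      # three-in-a-row incorrect rule
                    break
        return correct, correct - incorrect     # (CMC, CMI)

    # Hypothetical responses; scoring stops at the run of three errors near the end.
    responses = [True, True, False, True, True, True, False, True,
                 True, False, False, False, True, True]
    print(score_maze(responses))  # (7, 2): CMC = 7, CMI = 7 - 5 = 2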

Criterion variables. The criterion variable in this study
was performance on the Minnesota Basic Standards Test
(MBST) in reading, a high-stakes test required for gradu-
ation. The MBST was designed by the state of Minnesota
to test the minimum level of reading skills needed for "sur-
vival" (MN Department of Education, 2001) and, at the time
of the study, was administered annually in the winter to all
eighth-grade students in Minnesota.1 The untimed test com-
prised four or more passages of 500 words or more selected
from newspaper and magazine articles. Passages were both
narrative and expository and had average DRP levels ranging
from 64 to 67. Each passage was followed by multiple-choice
questions, with approximately 40 questions per test. The test
was constructed so that 60 percent of the questions on the test
were literal, 30 percent inferential, and 10 percent could be
either. The test was machine-scored on a scale from 0 to 40,
and then the raw score was converted to a scale score between
375 and 750. A passing scale score was 600, which corre-
sponded to 75 percent correct (MN Department of Education,
2001). Students who did not pass the test were permitted to
retake it two times each year. Students had to pass the test in
order to graduate from high school.

The MBST Technical Manual (MN Department of Educa-
tion, 2001) reported reliability and validity information for
the MBST Reading test. Internal consistency measures for
reliability were based on the Rasch model index of person
separation. The Kuder-Richardson 20 internal consistency
reliability estimate was .90. No alternate-form reliability was
calculated. Content validity, according to the manual, was
determined by the relationship of the reading test items to
statewide content standards as verified by educators, item
developers, and experts in the field. Construct validity was
measured by item point-biserial correlations (the correlation
between students' raw scores on the MBST and their scores
on individual test items). The mean point-biserial correlation
was .38. There were no criterion-related validity statistics
noted.

Procedures

In the fall, students completed two maze passages in a group
setting in their classrooms. On a subsequent day in the same
week, students completed two reading-aloud passages indi-
vidually. Type of measure (reading aloud vs. maze selection)
and passage were counterbalanced across students, as was the
order in which the students completed the passages within
reading aloud or maze selection. Examples of each task were
given to students prior to administration. The MBST was
administered by teachers to students in February.

Sixteen graduate students administered and scored the
reading-aloud and maze-selection measures. Prior to data
collection, the graduate students were interviewed by mem-
bers of the research team to ascertain their ability to work
with students and to accurately score reading samples. Fol-
lowing this initial screening, the graduate students partic-
ipated in two 2-hour training sessions on administration
and scoring. During training, the graduate students admin-
istered and scored three samples. Inter-scorer agreement
on the three passages between the data collectors and the
trainer was calculated by dividing the smaller by the larger
score and multiplying by 100. Inter-scorer agreement ex-
ceeded 95 percent on maze selection and 90 percent on read-
ing aloud for all scorers. During data collection and scor-
ing, 33 percent of the reading-aloud and 10 percent of the
maze-selection probes were randomly selected to be checked
for accuracy of scoring. Inter-scorer agreement exceeded
90 percent for all measures.
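
The agreement index described here is simply the smaller of the two scorers' counts divided by the larger, times 100; the sketch below shows the calculation with hypothetical counts.

    # Inter-scorer agreement as described above: smaller score divided by the
    # larger score, multiplied by 100. The example counts are hypothetical.

    def percent_agreement(score_a, score_b):
        smaller, larger = sorted((score_a, score_b))
        return smaller / larger * 100

    # Two scorers counting words read correct on the same 3-minute sample.
    print(round(percent_agreement(352, 361), 1))  # 97.5 percent agreement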

Results

Means and standard deviations for reading-aloud and maze-
selection scores for each time frame are reported in Table 1.
Examination of mean scores reveals that students worked at
a steady pace across the duration of the passages. Students
read aloud approximately 125 words with 6 errors per minute
across the 3 minutes and made approximately 6 correct maze
choices with 0.5 errors per minute across the 4 minutes of
maze. The mean score for study participants on the MBST in
reading was a standard score of 626.90 (SD = 65.66), with a
range of 475-750.

To determine alternate-form reliability, correlations be-
tween scores on the two forms of the maze-selection and
reading-aloud measures were calculated for each time frame
and scoring procedure (see Table 2). Reliabilities for both
reading aloud and maze were generally above .80. Relia-
bilities for reading aloud ranged from .93 to .96, and were
similar across scoring method and sample duration. Relia-
bilities for maze ranged from .79 to .96, and were generally
similar for scoring method, but increased somewhat with time
frame. The highest obtained reliability coefficient was for the
4-minute maze passages scored for CMI (r = .96); however,
reliabilities for the 3-minute maze selection were above .85,
regardless of scoring method.
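
Alternate-form reliability as used here is the Pearson correlation between scores on the two parallel forms. The sketch below computes that correlation for a handful of hypothetical students; the coefficients in Table 2 were of course computed across the full sample.

    from math import sqrt

    # Alternate-form reliability sketch: Pearson correlation between scores on
    # form A and form B of the same measure. The scores below are hypothetical.

    def pearson_r(x, y):
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        var_x = sum((a - mean_x) ** 2 for a in x)
        var_y = sum((b - mean_y) ** 2 for b in y)
        return cov / sqrt(var_x * var_y)

    form_a = [18, 25, 31, 12, 22, 27, 15, 29]  # correct maze choices, form A
    form_b = [20, 24, 33, 10, 21, 29, 17, 27]  # correct maze choices, form B

    print(round(pearson_r(form_a, form_b), 2))  # about .97 for these made-up scores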

TABLE 1
Means and Standard Deviations for Reading Aloud and Maze Selection by Scoring Procedure and Time Frame

Curriculum-Based Measurements and scoring procedure          Time

Reading aloud                        1 minute    2 minutes   3 minutes
  Total words read                   125.88      250.46      373.31
                                     (43.75)     (85.05)     (125.95)
  Words read correct                 119.82      238.54      355.27
                                     (47.29)     (92.14)     (136.92)

Maze selection                       2 minutes   3 minutes   4 minutes
  Correct choices                    12.33       18.76       25.24
                                     (7.12)      (10.87)     (14.53)
  Correct minus incorrect choices    11.18       17.17       23.10
                                     (7.53)      (11.40)     (15.17)

Note: Standard deviations are in parentheses.

TABLE 2
Alternate-Form Reliability for Reading Aloud and Maze Selection by Scoring Procedure and Time Frame

Curriculum-Based Measurements and scoring procedure          Time

Reading aloud                        1 minute    2 minutes   3 minutes
  Total words read                   .93         .96         .95
  Words read correct                 .94         .96         .94

Maze selection                       2 minutes   3 minutes   4 minutes
  Correct choices                    .80         .86         .88
  Correct minus incorrect choices    .79         .86         .96
Note: All correlations significant at p < .01.

TABLE 3
Predictive Validity Coefficients for Reading Aloud and Maze Selection with MBST by Scoring Procedure and Time Frame

Curriculum-Based Measurements and scoring procedure          Time

Reading aloud                        1 minute    2 minutes   3 minutes
  Total words read                   .76         .77         .76
  Words read correct                 .78         .79         .78

Maze selection                       2 minutes   3 minutes   4 minutes
  Correct choices                    .75         .77         .80
  Correct minus incorrect choices    .77         .78         .81

Note: All correlations significant at p < .01. MBST: Minnesota Basic Standards Test.

To examine the predictive validity of the measures, correlations between mean scores on the two forms of the reading-aloud and maze-selection measures and scores on the MBST were calculated (see Table 3). Correlations ranged from .75 to .81. The magnitude of the correlations was similar across type of measure (reading aloud and maze) and method of scoring. For reading aloud, correlations for 1, 2, and 3 minutes were virtually identical. For maze selection, a consistent but small increase in correlations was seen across time frames, with correlations of .75 (CMC) and .77 (CMI) for the 2-minute measure and .80 (CMC) and .81 (CMI) for the 4-minute measure.

In summary, results revealed that both maze selection and reading aloud produced respectable alternate-form reliabilities, although reading aloud yielded consistently larger reliability coefficients than maze. Few differences in reliabilities were seen for scoring procedure or time frame, with the exception that reliabilities for maze selection increased somewhat with time. Predictive validity coefficients were similar for the two types of measures. Correlations were similar across scoring procedures for both measures. With regard to time frame, small but consistent increases in correlations were seen for maze selection.

Discussion

In this study, we examined the reliability and validity of reading aloud and maze selection as indicators of performance on a state standards test. Differences in technical characteristics related to time frame and scoring procedure were examined.

Both reading aloud and maze selection showed reasonable alternate-form reliabilities at all time frames, with most coefficients at or above .80. In general, reading aloud resulted in higher alternate-form reliability coefficients (ranging from .93 to .96) than did maze selection (ranging from .79 to .96), but reliability for maze selection was in the range typical for CBM. Time frame did not influence reliability coefficients for reading aloud but had some influence on maze selection. Obtained reliability coefficients for maze increased with time frame, with coefficients for the 2-minute time frame hovering around .80, but increasing for the 3-minute (rs = .86) and 4-minute (rs = .88 and .96) time frames. Finally, scoring procedure had little effect on reliability, with the exception that when 4-minute maze selection was scored for CMI, reliability was somewhat larger (r = .96) than when it was scored for CMC (r = .88).

Like reliability coefficients, validity coefficients were quite similar across type of measure, time frame, and scoring procedure. Validity coefficients for reading aloud ranged between .76 and .79 and were similar across scoring procedure and time frames. Maze-selection coefficients ranged between .75 and .81 and also were similar across scoring procedure. A systematic increase in validity coefficients was seen with an increase in time for maze, but differences were small.
We wish to make two observations regarding the magnitude of the validity coefficients found in the performance study. First, the correlations obtained in our study were larger than those found in previous research at the middle-school level. For example, Yovanoff et al. (2005) reported correlations of .51 and .52 between WRC in 1 minute and scores on a reading comprehension task for eighth-grade students. Espin and Foegen (1996) reported correlations of .57 and .56, respectively, between WRC in 1 minute and CMC in 2 minutes and scores on a reading comprehension task.

One might hypothesize that the differences in correlations are related to the materials used to develop the CBMs, although no consistent pattern of differences can be seen across studies. Yovanoff et al. (2005) used grade-level prose material, Espin and Foegen (1996) used fifth-grade level expository material, and we used fifth- to seventh-grade human-interest stories from the newspaper, material that might be considered to be both narrative and expository. Moreover, previous research conducted at the elementary-school level has revealed few differences in reliability and validity for CBMs drawn from material of different difficulty levels or from various sources (see Wayman et al., 2007, for a review).

It is possible that differences are related to the criterion variable used. Both Yovanoff et al. (2005) and Espin and Foegen (1996) used a limited number of researcher-designed multiple-choice questions as an outcome, whereas in our study we used a broad-based measure of comprehension designed to scale student performance across a range of levels. Supporting this hypothesis are data from two studies demonstrating nearly identical correlations (in the .70s) to those we found between the CBM reading-aloud and maze-selection measures and the MBST (Muyskens & Marston, 2006; Ticha, Espin, & Wayman, 2009). In addition, Ticha et al. (2009) found high correlations between maze-selection scores and a standardized achievement test.

Second, the state standards test used in the current study was designed to test the minimal reading competency for students in eighth grade. Thus, one might question whether the CBMs would predict reading competence as well if the criterion measures were mea

  
