Let us begin with informal assessments. Informal assessments are outstanding because they can be as simple as a bellringer or an exit ticket. If my students don't yelp and raise their hands when they see the word "antecedent" on an exit ticket, then I know they are ready to move on from pronouns. I find informal assessments infinitely helpful in judging whether my lesson has succeeded or whether I need to revisit an objective. Informal assessments are wonderful, and I give them often.
Formal assessments, in my first year at NPJH, were numerous and ineffective. We had iReady testing, which was aligned with specific standards and helped mostly with math and science. We had STAR testing, which assessed reading levels. We had Acuity testing, a simulation of the state test, given every nine weeks; I graded the students' written responses within 24 hours of their testing window. We also had bi-weekly tests created by our consultant from MDE: paper exams, given with a Scantron, that typically required two class periods to finish.
My students were constantly tested, and sure enough, the data reflected their general apathy toward testing. In many cases, students filled in random answer bubbles and tried to finish early. On most test days, I did my best to keep them motivated and kept an eye on those who seemed to be giving up early.
This year, thanks to a fortunate gift from the gods of assessment, Ms. G and I have far less testing. This has been fantastic for students, as testing often consumed multiple instruction days and was messily organized. Last year, testing sometimes happened in the labs, while on other days students were crammed into the "data room" (a gigantic closet sparsely stocked with broken furniture and mismatched desks). I never consistently tested the writing standards, and Ms. G never consistently tested the reading literature standards; yet each test consistently brought hysteria when scores finally posted.
My informal assessments include the daily exit ticket and multi-day projects in which students must complete the close-reading steps. Students answer text-based questions, usually with a partner or small group, and turn in their work on Friday. This way, I can post the rubric on the board for several days, and students can work at their own pace and ask me questions as needed. Usually I model what I expect on Monday and Tuesday; for the remainder of the week, students analyze an unfamiliar passage and work through a summary, some type of graphic organizer, and state-test-style questions.
The formal testing is more of an issue. Every two weeks, the reading standards are assessed using iReady's Standards Mastery. Ms. G is held accountable for all R.L standards, while I am responsible for all R.I standards. Only one paired standard is tested at a time (for instance, students might test on R.I.2 and R.L.2 in one sitting). These summative tests mean almost nothing because our pacing guide is so rigid that we cannot go back and re-teach standards. Therefore, I try to address weak standards in bellringers.
The final summative assessment is Case 21. Every nine weeks, students take the Case 21 comprehensive assessment, which is treated and weighted like the state test. Thankfully, the final nine weeks of school are committed entirely to reviewing all standards. Every Thursday, my students sit down for a timed writing test built around a simulated state-test writing task. I grade all 102 essays using the four-point MDE rubric, and on Friday we workshop the writing together.
Testing informs and changes how I teach in that I focus mostly on the standards that appear most frequently on the MAP blueprint. If students fall behind, I use bellringers and group activity centers to rotate through skills and standards.
Enclosed is an example writing task I have used as a summative W.3 assessment. My students seem to love argumentative writing the most and have an odd affinity for wolves and other dog-like creatures. They always argue to protect whatever wildlife is in question, but this time I encouraged them to write according to how much evidence was present in the sources. Most students then wrote claims arguing against the protection of wolves.