Wednesday, October 29, 2008

O'Malley's/Valdez Portfolio Assessment Ch 3

Wow! The reading makes it sound easy to begin a portfolio in the classroom. I can picture using this next year along with the SBA folders. I like the idea that it gives parents a clear picture of student learning and development, rather than me showing the parents the tests they've taken and talking about where they failed. It is also a wonderful tool for teachers when a student transfers or is promoted. Each year I get students who transfer into Bethel, and all I see is the phase check-off sheet when they come from LKSD. I have to guess how they passed these assessments, and I don't know what strengths and weaknesses the student has; this is where a portfolio would have been very helpful for me. I'd know where to begin instruction for that student if there were portfolios. This is something that I have to do each new school year when Kindergartners become first graders. I have no idea of their oral proficiency level in Yugtun, other than the YPT scores, which don't validly reflect their speaking level.

In the past, I’ve used portfolios, but I didn’t have a clear understanding of how they worked. I like that with portfolio assessments, you have a specific learning goal. It is so much more authentic than the tests that I give to my students, which don’t explain to the parent the process it took for the student to learn that concept. Portfolio assessment is a working process where you and your student choose what goes into the portfolio to show growth toward the content/goal you’re teaching. You and the student can decide on the criteria for grading; of course, students should know what they’re graded against (an exemplary model as well as a non-exemplary example, and ELL students will need more time). You’ll have to do much practice with the whole class by comparing papers to the model paper. From this, they can learn to reflect on their work and become self-assessors who begin to ask questions of themselves as well as of their peers. From there, you can have students partner with a peer to review their work against the criteria and give each other feedback. After practicing all these processes, you can have the student independently check their work, with the teacher becoming the inter-rater. The weaknesses that are found become improvement goals for the student, and they can also be a helpful tool for teachers in planning instruction so all students can work on those goals, such as in centers, peer-to-peer teaching, and extra teacher time for ELL students.

Like I said in the beginning, it sounds so easy, but it takes a lot of work. In the future, I’d like to see myself beginning with one subject before I do everything. I can picture this going along with the SBAs and sharing the portfolio with the parent. I’d be curious to hear and see the actual growth the student has made to perform the assessment confidently.
Tua-i, piurci.

Monday, October 27, 2008

Solano-Flores' Who is Given Tests...

After reading this article, I thought of our school's and district's current Yugtun reading assessments. The reading assessments come from tests aimed at Yugtun First Language (YFL) students. The tests for reading involve comprehension questions whose language assumes that the student will understand. I often adjust the questions to my current students' understanding, mostly using gestures. This is like adjusting the question for an English Language Learner (ELL). The way I translate is different from another teacher's, so the test might be invalid.
The language I use to ask the questions comes from my dialect, and I often revert to the dialect they've learned in Kindergarten. There isn't an official standardized Yugtun language, and if there were one, I'd have to question whether my students know this language, just as an ELL test has to consider. Dialectal differences also have to be taken into consideration when giving a test.
The "whom" portion of the test is the teacher. The way I rate the test is different from another teacher's, just as it might be for an ELL test administrator/rater. Ratings might differ between teachers, the number of years of learning the language differs for each student, and so does each student's proficiency in the language they're learning. It makes you question the validity of the test you administer. The student who has more exposure to the Yugtun language will most likely score better than the student who had only a few years of instruction. Like ELL students, our Yugtun immersion students need many years to develop their language before being given a language test. Another consideration is the number of times they were tested. I've seen a student memorize a test, and the results became invalid.
The "where" of the student is the environment in which they're taking the test. As I said at the beginning, the reading assessment my students take is aimed at YFL students, so the approach I use is different from a YFL teacher's approach to the reading assessment. The approach for an ELL student depends on what their school/environment determines to be proficient.
The article overall explains in more detail the complications of testing ELL students. How ELL students are categorized differs from state to state. The way ELL students are accommodated, how proficiency is labeled, how tests are administered, and how students are rated all differ across the states.
Something I need more clarification on is G-theory (generalizability theory).

Friday, October 24, 2008

Making Assessment Practices Valid for Indigenous American Students

When did the test writers realize that assessments written by outsiders are not valid for Indigenous people? I remember taking the CAT test when I was in third grade. I vividly remember filling in any answer to questions I wasn't familiar with. One of the questions was on where paper comes from, and I didn't know the answer; it was culturally biased. I don't remember my teachers teaching this concept. The judgment of the test writers was not valid to the funds of knowledge I grew up with, nor to my sociocultural background. This test was aimed at students who grew up in the Lower 48. The assessment did not agree with the curriculum and the instruction I received growing up, and it did not take into consideration my language proficiency in English. This is the same case for the high-stakes tests students currently take.

It is true that many of us Indigenous people grew up observing before we performed our knowledge. It may take years to be confident enough to perform what we know, and it reminds me of Walkie's "Only when they're ready." (It would be great to have him do a presentation on it.) This is unlike the school culture, where teachers decide when students are ready, probably mainly because of NCLB. Despite the legislation, we continue to educate our students about our culture, mainly in the primary grades. Each day Ayaprun Elitnaurvik recites Yuuyaraq, although the students do not understand exactly what it means. We hope that as their lives go on, they'll begin to understand the meaning when they meet it. Like, only when they're ready. We continue to feel the pressure to make AYP every year. Ever since this law was passed, our school has been more focused on reading, writing, and math. Before this law was passed, the Kindergartners were mainly taught the Yugtun oral language. It is unfortunate that they're more focused on SBAs now.

The current assessments that I give are not culturally relevant, even though they try to include culturally relevant pictures. These assessments were translated into Yugtun from the English SBA assessments, although some of them are not on par with the GLEs.

As schools of Indigenous students, we need to become multimodal teachers and assess our students by drawing on their funds of knowledge. We could do this by implementing Demmert's findings, as well as those of the researchers on culture-based curriculum. When I read this part, it was complicated for me to picture it in the classroom. I have become so attuned to how assessments are done in school.

After reading this article, I am curious to see the rubric that the Navajo used. Where can I find it?

Monday, October 20, 2008

McNamara's Validity: Testing the test

Assessments are written to demonstrate learning, and the data they produce is captured as evidence. Before it is considered valid, the evidence has to be scrutinized by carefully investigating the procedures used and the conclusions made about the evidence. It is a matter of judgment what calls for validation. It is not the test itself that calls for validity, but the interpretations that we make of the test.

In developing and validating a language test, one has to review the procedures by which responses were elicited, the judgment of the test writer, and the observations that were used to draw conclusions about the insights of the test takers.

Determining the validity of a test involves the evidence of the test performance as well as the appropriateness of the test to what was taught. We need to determine what procedures were used in the test, the judgment, the purpose, the stakeholders, the criterion, the content, the method, and who developed and validated the test. A test may be valid, but the conclusion, that is, the judgment of the test, may be invalid. If the test has been proven to be faulty, we have to speculate why it happened, then observe and experiment to determine the validity of the test, rather than theorizing.

Once a test has been validated, it will not always be valid for different groups of students. It has to be revisited and revised for the criterion needs of the assessment, as well as for the inferences made about students. Each year there will be different data from each group of students, and the test will continually need some investigation. A valid test considers the intended population, and its author bases the assessment on the evidence, not assumptions.

The questions to think about are:
Was the construct of what you're measuring defined?
Is the domain you're looking for being measured?
Does the test measure the intelligence or skill you're looking for?

Friday, October 3, 2008

Designing authentic assessment and The Language Assessment Process...

These readings seemed to have much in common on the planning, development, and use of authentic assessment. When designing authentic assessment, you have to involve co-teachers, administrators, and parents. A group has to determine the purpose of the assessment, what you want to build on, which objectives will be assessed, who you will share it with, what types of authentic assessments to use, what to look for when piloting the assessment you've made, who will review the assessment with you, the assessment's validity to the curriculum, fairness for all students, grading and reporting of the assessment, and reliability in scoring.

I liked Shohamy and Inbar's list of language assessment tools that can be included, on page 4. Putting on a play reminded me of Abby Augustine's study for her Master's degree. Her students would do a short skit while her whole class told the story. I am curious whether this was part of her assessment tools. Also, I thought Shohamy and Inbar's article was easier reading for me, especially since it listed various ways of administering a test for authenticity, along with other tables in the article. This is something I can quickly refer to when I have questions on authentic assessments, especially the validity and reliability part. It would be good to talk more about this in class, especially how a test can be reliable but not valid.

As I began reading about rater training in O'Malley and Pierce's chapter, on page 21, it reminded me of being one of many raters for the LKSD writing assessment that takes place in the fall. In this type of meeting, teachers first have to practice rating papers for, I think, half a day. The next two and a half days involved a lot of papers. We had to rate the written assessments by following a rubric given by the school district. If two scores were two or more points apart, the two teachers had to explain why they scored as they did, and both had to come to a consensus. We also compared each paper to another paper that had been scored by an expert.