Wednesday, March 25, 2015

Whose Test Is It?

This is a long post, and rambles a bit. My Mom said the other day that I'm wordy, and she's right. But it's my blog and I'll be verbose if I want to, verbose if I want to . . . Also, keep in mind that I would prefer to radically change what we do each day in school (school should be "different", not "better"), but this post is written from the perspective of how we can do what we are currently doing better.

While it's clear I'm not a fan of standardized tests such as PARCC, there is one advertised feature of PARCC that I think is an improvement over previous state-mandated testing we've done: the idea that we would get the results back faster, in which case - whatever their value - we might at least be able to use them to help students. So far, I don't see any indication that we actually will get those results back faster (end of the school year is better than next fall, but still not very helpful), but perhaps that will happen as they iron out the wrinkles.

Research indicates that timely and effective feedback is key for student learning growth, so if the point of assessment is to help students learn more effectively, then both "end-of-the-year" and "next fall" don't do us much good. While we don't have much (any?) control over state assessments like PARCC, we do have control over the assessments we create and give in our own classrooms. So what frustrates me is the timeliness and effectiveness of the feedback we often give, because this is something we do have some control over.

The genesis of this post was when a student I know well took three major tests on the Friday before Spring Break. Which means that, in the best-case scenario, this student won't receive any feedback for at least ten days. How "timely" and "effective" do you think that feedback is going to be? Now, let me be clear, as a teacher I've done this before as well. You want to finish a "unit" before a scheduled break in school and you want to assess before that break while it's still fresh in their minds. But that doesn't make it right and, as I've gotten older (and hopefully wiser), I've done my best to resist that urge. While not a perfect solution, I at least tried to give any assessments two days before a scheduled break, so that students could receive feedback before going on break.

Which brings up the next issue, which is how quickly we get these assessments back to our students. Now, in this case, it's going to be at least ten days due to Spring Break, but what about assessments that aren't given right before Spring Break? Here's my thinking. If something is important enough that we are going to assess (and grade) all of our students at one point in time, and we are expecting all of our students to be ready and to take that assessment, then we should be willing to commit to returning that assessment to them, with feedback, the very next day that class meets.

This, obviously, is an opinion that some folks will take issue with. They'll point to a limited amount of time for teachers, and many competing obligations, and the sheer amount of time it takes to grade assessments of multiple sections in only one to three days (depending on your schedule and weekends). I readily acknowledge those issues, I just don't think they are a sufficient excuse. Again, if this assessment is important enough to give to all your students at one time, and if the goal of the assessment is to determine how well they know this essential material and then to help them learn anything they are still confused about, then as teachers we need to get this back to our students with meaningful feedback as soon as possible. While immediate feedback is often best, slightly delayed (the next day that class meets) feedback can be useful as well. Greatly delayed feedback? Not so much.

So that addresses "timely," but what about "effective?" What does effective feedback look like? I am in no way an expert on this, and there are many books you can read to help you with this, but I do think I can identify a few practices that aren't effective. Let me focus on two of them. First, feedback that is just a grade, or perhaps a grade with a few things circled, is usually not going to be effective feedback. Second, an assessment that you don't give back to the students and allow them to keep is usually not going to be effective.

Letting students keep assessments is a controversial topic for some teachers, so let's explore that a bit. In my experience, every reason given for this basically boils down to the same reason: cheating (with a side helping of time). Some teachers don't like to let students keep the assessments because they are worried other students will use them to cheat, either because they were absent when the assessment was given and need to make it up, or that students will pass the assessment along to future students. There's a fairly easy way to solve that problem, of course, which is to have several versions of that assessment made, which is where the side helping of time comes in - teachers will say they simply don't have time to create multiple versions of their assessments. I disagree.

Creating quality assessments is obviously a complicated issue that can't be addressed in this post, but the majority of assessments I see in schools fall into three categories: textbook-generated assessments, teacher-created assessments, and essay-type assessments (either textbook or teacher generated). (There are obviously other types, but I think these three do a fairly good job of putting them into categories.) For those teachers who use textbook-generated assessments, the software will easily create multiple versions of the assessment for you. For those who decide to go a little further and create their own assessments, it will take a bit more time, but it's not that hard to create multiple versions of the same assessment. (And, if you're really good, which I'm not, great assessment questions are really hard to cheat on anyway, so you don't need multiple versions.) Essays are both easier and tougher. Easier because they are harder to cheat on, tougher because they do take a fair amount of time to evaluate and provide feedback on. In my perfect world, though, that feedback is being provided throughout the writing process, so there's really not one "due date" where the essays have to be turned in and evaluated en masse.

There are some other strategies that I think are helpful. In math and science classes especially, for example, I still see most teachers giving long unit assessments that take students a long time to complete and teachers a long time to evaluate. Why not give shorter assessments more frequently? This not only makes it easier to provide more timely feedback, but it gives students more frequent feedback as well. I also see many teachers trying to assess everything, instead of just what we've identified as being essential. Which is better, assessing everything and providing delayed and incomplete feedback? Or assessing only a few really important things, and giving our students thorough feedback in a timely fashion? I would suggest the latter.

Finally, let's talk about final exams. Many high schools, including mine, give final exams during the last week of the semester. At the end of first semester we have winter break, and then students may or may not have the same course and teacher when the next semester starts two weeks later. At the end of the second semester, students go to summer break. The vast majority of students don't get any feedback (other than the grade on the online portal) on these final exams. Some folks will suggest that since these are "summative" assessments, it's not that important to give feedback. I think that argument only works if you view each course as its own isolated world with a goal of finishing the course and getting a grade. If we truly value what we are teaching in that course and think that it's important for students to learn, then it doesn't "end" when the course ends. From this viewpoint, all assessment is formative.

If we're going to continue to have final exams, then I have a simple suggestion: don't give them on the last days of school each semester. Give them a few days before and then allow each class to meet at least one more time after the final exam in order to give the assessments back to the students and provide feedback to them. With my school's schedule, for example, that would require two class days after final exams, one running an MWF schedule and one running a TR schedule. Some folks will argue that students won't use that time well, or might not even show up, and that may be true. But if that's the case, then what does that say about what we're doing in the first place? If what we are doing is truly valuable, then students will want to show up.

For me, it boils down to what is the purpose of assessment. Whose test is it? If it's designed for the adults, then the prevailing practices are probably just fine. But if it's designed for the students, to help them learn and grow and be successful, then we need to do some rethinking about how we assess and provide feedback to - and for - our students.

Monday, March 16, 2015

Monitoring Student Use of Social Media

This past weekend a story regarding Pearson monitoring social media for "security breaches" related to PARCC was a popular topic of conversation in my network (original story, although the server often has trouble handling the traffic). I'm not going to focus so much on that story here, as many others have written about it, other than to point out one thing. While students don't seem to have the opportunity to accept or reject PARCC's terms of service, our states, school districts and schools do. We all agreed to this; it is part and parcel of administering PARCC to our students. (Not sure if it was just Colorado, but as a proctor I had to sign a form agreeing to the terms of service.) So I think it is worth some conversation at the state, district and school level about whether we are okay with this or not. Schools can't really blame Pearson for doing what they said they would do (although others can).

In response, many folks have wondered why we aren't outraged by the many school districts that are also monitoring students' social media use. Now, to the best of my knowledge, my district is not actively monitoring our students' use of social media. But in some respects, I am. Let me explain.

As part of my presence on Twitter I engage in at least two activities that wander into the territory of monitoring. I have a search column set up for the name of my school, primarily so that I can retweet mentions of my school and occasionally answer questions or address concerns. And - as a result of various interactions over the years - I follow some students, both former and current. On occasion, both of these activities have resulted in me coming into contact with student behavior that I have acted upon. This ranges from contacting a student to suggest that a particular tweet might not be looked favorably upon by a college admissions officer or future employer (and discussing their digital footprint with them), to meeting with a student and their counselor because there is some concern the student might be engaging in behaviors that could be harmful to themselves or others.

Now, I don't think this is "actively monitoring" my student body. I am not attempting to monitor all student accounts, nor am I actively looking for "misbehavior" on the part of our students. But I readily admit that this could be a slippery slope. It's all well and good for me to say that I'm not surveilling our students, but are they just supposed to trust me on that?

This is something I've thought about a lot. A. lot. And I'm still not completely comfortable with where I've landed, because I think this is a very complicated subject and the parameters around it are constantly changing along with the uses of social media. But, at the moment, this is my best attempt to thread the needle of privacy vs. obligation. If we see a student in need, are we not obligated to try to help? I've chosen to err on the side of caring, but that doesn't mean I might not cross the line.

Part of the way I'm currently viewing this is through the lens of a parent. I ask myself as a parent of a teenager, if another caring adult noticed something of concern in my daughter's social media activity (or any activity for that matter), would I want them to ignore it? I would not. On the other hand, I wouldn't want her school (which, conveniently or inconveniently, is also my school) to be searching through her social media activity looking for something we deemed "inappropriate." It's a fine line.

Because this is such a tricky issue, some school districts have implemented policies to limit or forbid employees' use of social media in relation to their students. My district does not currently have such a policy, but they are working on a draft of one (including possible rules around texting). I think this is a mistake. We don't have policy around whether a teacher can talk to a student in the grocery store or at a volleyball game, whether they can call a student's home or what they are allowed to say to them in the hallway, so we don't need policy specifically regarding social media. Our existing policies cover social media just fine; we don't need a new policy for every new technology or social media platform that is created. As near as I can tell, these policies are really not about student safety, but about school district liability. I don't think anyone believes that simply having a policy in place would stop an adult who means harm toward a student from acting; the policy is just there so the school district can say we have a policy against it.

Our students are active in these spaces. We have a choice: we can ignore these spaces and implement policies designed to protect our institutions, or we can thoughtfully engage with our students and try to help them learn, grow and stay safe. I'm reasonably comfortable with my current position, although I'm constantly reexamining it to see if my thoughts have changed. I'm curious as to how others navigate this issue. Is it okay to "infringe" on a student's privacy if they are at risk? How do we determine they are at risk? Who decides?

Monday, March 02, 2015


Our daughter will be opting out of the PARCC testing this spring at my high school. Some folks will applaud this decision, others will vehemently disagree, but we thought it was important to share our thinking. This is the letter we submitted to my administration and the school board this morning.

February 28, 2015
To: Arapahoe High School Administration and LPS Board of Education

This letter is to let you know that our daughter will be opting out of the PARCC testing in the Spring of 2015 (both the PBA and the EOY). This request is not meant in any way to reflect poorly on Arapahoe High School or Littleton Public Schools. Our daughter loves her teachers and frequently comes home and tells us what a good job they are doing, with specific examples of what she thinks they did well. But as educators with a combined 48 years teaching every grade level (except Kindergarten and 2nd grade) from Pre-K through 12th, as well as professional development for adults, we do not feel like this testing is in the best interests of our daughter or the school.

We feel that the skills that this testing purports to measure reflect a very narrow and flawed version of what it means to be educated; of what it means to learn and to have learned. We don’t necessarily think that the standards themselves are bad; as standards go most of the Common Core State Standards (and the Colorado modification of them) are well written. To paraphrase Yong Zhao, there’s nothing wrong with the Common Core State Standards, as long as they weren’t common and they weren’t core.

While at times we may disagree with a specific assessment one of her teachers gives her (the content, the format, or the way it’s delivered), in general we believe that her teachers are in the best position to assess her progress as a learner (in conjunction with our daughter herself). More importantly, we believe these teacher-given assessments at least have the potential to help her grow as a learner. Standardized testing such as PARCC, however, is mostly designed to meet the needs of adults.

Instead of taking the tests, she will use that time to learn. She might read a book, or work on assignments from her teacher, or watch videos on YouTube of things that interest her, or perhaps just catch up on sleep to compensate for the ridiculousness of beginning school for teenagers at 7:21 am each day. Whatever she does, it is more likely to contribute to her growth as a learner than taking the tests, and less likely to negatively impact her and her school as a whole.

We don’t just think that these tests are bad for our daughter, we believe these tests are bad for all the students at Arapahoe, and for Arapahoe in general. These tests are forcing teachers to narrow their focus; to value a fixed, pre-determined set of skills that someone else has decided that all students need over the needs and desires of the living and breathing students that are actually in their classrooms. While there are many criticisms we would make about the curriculum currently being taught and the restraints that imposes on both teachers and learners, we still put our trust in Abby’s teachers to make the best of that curriculum.

But in our current environment, the mandated testing is overwhelming teachers’ abilities to make decisions in the best interest of their students. Because the results of these tests are being used to evaluate teachers, teachers and administrators are being forced to toe the line in order to keep their jobs. While some folks would argue that this “only” represents 50% of a teacher’s evaluation, we have both seen how this has come to dominate all the discussions of teaching and learning in our schools. We would ask school administrators the following question: If there is a teacher who you have observed many times over the years that you feel is a master teacher, and yet the results of mandated testing over a narrow band of skills don’t support that, would you really change your evaluation of that teacher? There is so much more to teaching and learning than students simply performing well on a single test on a single day.

Make no mistake, we believe in high standards, we just don’t think that this approach actually helps promote them. We believe you can have high standards without being standardized; in fact, we don’t think it’s possible to truly have high standards if you are standardized. The goal of K-12 education is not to help all students master a pre-determined, fixed set of knowledge all at the same time and at the same pace. Algebra may (or may not) be important for all students to learn, but it is ludicrous to state that all students must learn it by the time they are fifteen years old. Why not fourteen? Or sixteen? If a student decides they need - and want - to learn Algebra at eighteen and master it then, is that so bad?

Anyone who has had children, or has met more than one of them, knows that each and every student is different and learns differently, yet we continue to act as if they are widgets on an assembly line, performing the same processes for the same amount of time on each one of them, and expecting that they will all turn out identical at the end of the line. Not only is this not true, we shouldn’t even want it to be true. We say we value diversity and each individual student, that we value and cherish the individual personalities and strengths of each and every child, yet we’ve developed a system that values conformity and compliance over individuality and initiative. We say that we value critical thinking, yet we are apparently unwilling to model it for our students.

We believe in a vision of education that focuses on the needs of each student over the needs of the system. We believe that school should be a place where students are encouraged to pursue their passions, and then actually prepare them to achieve those passions. That doesn’t mean we don’t value community; we believe one of the greatest strengths of the concept of public schools is bringing together students with different strengths and different backgrounds into a common space where they can learn and grow together. Where they can find others who share their passion, but also learn with and alongside those who have other passions. We believe that the way you meet the needs of society is by meeting the needs of each individual student. If you truly meet each student’s needs, then in the end you will meet the needs of society.

For all of these reasons (and many more, but this is already fairly long), we are choosing to opt our daughter out of testing. We have given her the option of opting out each year but this is the first time she has chosen to do it; previously she has never wanted to stand out and “be different” than the other students. She is aware enough now to understand, however, that taking these tests is not only not in her own best interests, but also not in the interests of her friends, classmates and teachers. We think this is important enough that we would give her this option even if it did “negatively” impact Arapahoe or Littleton Public Schools but, thankfully, with the recent changes at the state level surrounding the 95% participation rate, that will not happen.

Which is why we also have a request for the leadership of Arapahoe and Littleton Public Schools. Littleton Public Schools is the highest scoring district in the Denver Metro area, and one of the highest scoring districts in the state, and Arapahoe scores very well as a school. This puts the school and the district in a position where others might listen if they stood up and said this is not in the best interests of our students. A school and a school district that always come out looking good under this system are in the unique position of making the case for why this approach is fatally flawed. Instead of simply reacting to events and the decisions of others, we would ask you to lead.

We - the students, parents, educators and citizens of Colorado - need you to be proactive, not reactive. Instead of reacting to and appeasing the folks who are imposing this system on us, we need you to advocate for a different version of learning, a truly higher standard of what we expect from our schools, a vision for what school can and should be. We don’t need schools that are “better” at scoring well on standardized tests, we need schools that are different, and we need you to advocate for that vision and for our students. We hope you will. Our students deserve nothing less from us.


Karl and Jill Fisch

More Information

Colorado Department of Education

Denver Post

United Opt Out

Update 3-4-15: LPS has a page (not sure if it's brand new or was just updated) with FAQs about PARCC/CMAS that includes a mention of opting out.

Tuesday, February 10, 2015

Real Leaders Sometimes Lose

This post is going to veer away from the usual education focus and slightly into politics, but I think it's related.

The Denver Post ran an editorial today titled, Repeal TABOR? It's not happening, where they said,
Gov. John Hickenlooper told an assembly of school administrators last week what some of them clearly didn't want to hear: that any effort to repeal the Taxpayer's Bill of Rights would be "doomed." But Hickenlooper is very likely right about the odds, and education leaders shouldn't waste their time urging political leaders to undertake the electoral equivalent of the Charge of the Light Brigade.

Remember the thrashing that Amendment 66, which would have raised the income tax for education, sustained two years ago? Any attempt to repeal TABOR outright could easily face an even worse drubbing.

Hickenlooper was responding to a request by Boulder Superintendent Bruce Messinger that the governor lead a campaign to repeal TABOR, according to Chalkbeat Colorado. "We will need the governor to lead that charge," Messinger said.

To which Hickenlooper replied: "To take on that battle ... right now, that would be a doomed effort."

Indeed it would. Opponents of a repeal effort would have a field day portraying the campaign as contemptuous of popular opinion and bent on huge tax hikes.

The Denver Post, like many media outlets, pundits, and politicians themselves, has succumbed to the viewpoint that governing (and politics) is always (and only) about winning. It's not.

I find it interesting that nowhere in that article does the Post's editorial board actually discuss the merits of repealing TABOR, it's only about whether it's a winning issue or not. And, to be clear, they are probably right, it would be a long shot to pass. But that's not the point.

What we need is real leadership, from Governor Hickenlooper, the state legislature, and even the Denver Post. Real leadership would realize that TABOR, Gallagher and Amendment 23 all prevent our elected leaders from actually governing. They are a horrible way to govern in a representative democracy, and they effectively make it impossible for our state government to operate efficiently and effectively, and to plan and implement policy.

Real leadership would look at the polls, realize it's most likely a losing issue, and take it on anyway. Real leadership would realize that this is so important that it's worth spending a lot of time and effort educating the public on it, even if it loses. Real leadership would propose repealing all three amendments and ask the voters to let their elected leaders actually govern.

It's not "contemptuous of popular opinion" to see a serious problem and then try to educate voters on why it's a problem and propose a solution. How many times in history has "popular opinion" been absolutely, utterly wrong and immoral? Would the Post suggest that Abraham Lincoln, Susan B. Anthony and Martin Luther King, Jr. (to name just a few) were "wasting their time?"

It may indeed be a doomed effort, but that doesn't mean it's not worth fighting. And sometimes even doomed efforts succeed. After all, I'm sure the Post thought that when a little-known junior Senator from Illinois announced his candidacy for President in 2007, it was a "doomed effort." In fact, I bet when a little-known bar owner, who was a failed geologist, decided to run for Denver mayor, that was a "doomed effort" as well. I wonder whatever happened to him?

The basic problems with TABOR/Gallagher/Amendment 23 can be easily explained in less than five minutes. What if Governor Hickenlooper spent five minutes explaining those problems each and every day at each and every event he was at? And what if other like-minded leaders in Colorado - on both sides of the aisle - also took five minutes at each and every stop in their day and described the problem? And what if the Denver Post, instead of focusing on winning and losing and the horserace aspects of politics, actually tried advocating for a solution?

So many of our problems today can be traced back to a lack of leadership. Whether it's education policy, the dysfunctional United States Congress, or the Colorado State Government being unwilling to have an honest conversation with the voters of Colorado about how TABOR, Gallagher, and Amendment 23 are crippling their ability to govern, our problems come down to folks being more concerned about political "victories" than actually trying to find solutions and solve problems.

What we need is leadership. Real leaders sometimes lose, but they choose to fight the battle anyway, because they know it's the right thing to do. And because they know that leading sometimes means being out in front of the crowd and that, over time, you can bring the crowd along with you. That's not being contemptuous of public opinion, that's leadership.

Wednesday, February 04, 2015

If I Had A Million Dollars

We first started seriously discussing laptops for our students in the fall of 1999. At that time, the obstacles were cost and infrastructure (wireless), and not everyone was convinced that they would help students learn. Over the years the cost came down, the infrastructure began to be built out, and more and more folks were convinced that laptops would not only be helpful for students, but essential to their learning process. Yet still we didn't do it.

It took until the Fall of 2012 to pilot a program, and then the Fall of 2013 to roll it out for all Freshmen at AHS. We did it via a Bring-Your-Own-Device program, counting on a large percentage of our students to bring their own, and then we would provide laptops (netbooks) for those who couldn't afford one or didn't want to bring one. The district provided support in terms of helping us with a few netbooks and, more importantly, guaranteeing that if we didn't get enough students bringing their own, they would help us financially to make up the difference. It turns out that our students did bring their own in the expected amounts (roughly 65% that first year, and now well over 70%), but it was nice to have that insurance. Since then we've now rolled it out to two classes (this year's Freshmen and Sophomores), and next year will roll it out to a third class (Freshmen, Sophomores and Juniors), and possibly to our Seniors as well depending on a few things (more on that later).

Two weeks ago my school began receiving what will ultimately be 993 Chromebooks from our district. These weren't purchased because we've finally decided that laptops are important enough instructionally for our students to provide them; we're receiving them due to mandated state testing. Because both the PARCC and the CMAS tests are taken via computer, and because we can't sufficiently lock down the netbooks we had previously, the district decided to replace them with Chromebooks - and, of course, we had to add significantly more in order to test all of our students. After sixteen years of not being willing to spend money to support our students instructionally, we are willing (actually, forced) to spend money to support testing. Our Superintendent told us in a faculty meeting that district-wide more than $1 million was being spent to purchase Chromebooks.

Now some folks might argue that I shouldn't complain, we are getting laptops that we will be able to use instructionally when we are not testing. (And, given this influx, this may allow us to accelerate our rollout to include Seniors next year - one year early.) I am certainly appreciative of this, and we will do our best to take full advantage of it, but I still think it's important to note the priorities of our national and state leaders, and what actually makes school districts spend money they otherwise wouldn't.

Since we have so many of our own students bringing their own devices, much of this $1 million worth of Chromebooks will end up sitting in carts, unused, most of the time (once we've rolled out Connected Learners to all four grades). So I wonder what else we could've spent that $1 million on? I'm sure we could all come up with lots of ideas, but here's one pretty simple one: let's hire more teachers.

Now, I realize that $1 million doesn't go very far when you're talking about hiring teachers, but what if we did this: what if we hired eighteen teachers and provided six teachers each to three elementary schools in our district that we identify as being the most at-risk? Each school could decide how best to utilize those teachers. One school might decide to create one more class at each grade level (K-5), thereby lowering class sizes and student-to-teacher ratios across the board. Another school might decide to leave classes the same, but have one teacher work at each grade level, helping the existing teachers co-teach, or working with individual or small groups of students. Or a school might choose to place all six of those teachers in K-2, creating two extra sections at each level. How many of you think any of these ideas - or some permutation I haven't enumerated - would have a more positive effect on students than state-mandated testing? Which is more likely to change students' lives?

The problem with testing isn't limited to the dubious quality of the data we get when we purport to measure what's "important" for students to know. It's the opportunity cost of the testing. It's the $1 million spent on Chromebooks that will mostly sit in carts, money that could have been spent on something that would actually help students learn. It's the tremendous monetary value of the staff time that goes into administering these tests, including, but not limited to, a district assessment coordinator and their secretary, the building-level assistant principals and counselors who spend an inordinate amount of time coordinating these tests, and the time teachers spend in proctor training for these exams.

And then there's the value of the lost instructional time, not just the time students spend taking the tests, but the time taken in class to prepare for the tests (even teachers who don't do test-prep are very much encouraged to expose their students to the format of the test ahead of time), and the lowered quality of the instructional time that we typically have on testing days (where we test in the morning and have altered schedules in the afternoon).

And then there's the effect on students, both psychological and philosophical. Psychologically, they are stressed by the testing, and their motivation is decreased by constantly being told what they aren't good at. And philosophically, we send students the message that being able to prove that adults are doing their jobs is more important than the students' learning.

If I had a million dollars, I'd buy you the opportunity for more learning, not more testing.

Friday, January 09, 2015

What If We Just Tried It?

Michael Feldstein, Dave Cormier (1, 2), Stephen Downes and many others in the comments had an interesting discussion around student learning and engagement that's worth your time to check out. While I agree with Chris Lehmann that perhaps engagement isn't always the word we're looking for, I think the discussion in the above posts is using engagement in the right way; the students aren't just engaging in the activity, but in the learning.

You should read the posts (and the comments), but I wanted to pull a few quotes out to highlight and think about.
So. In this case, we’re trying to make students move from the ‘not care’ category to the ‘care’ category by threatening to not allow them to stay with their friends. Grades serve a number of ‘not care to care’ purposes in our system. Your parents may get mad, so you should care. You’ll be embarrassed in front of your friends so you should care. In none of these cases are you caring about ‘learning’ but rather caring about things you, apparently, already care about. We take the ‘caring about learning’ part as a lost cause.
The problem with threatening people is that in order for it to continue to work, you have to continue to threaten them (well… there are other problems, but this is the relevant one for this discussion). And, as has happened, students no longer care about grades, or their parents believe their low grades are the fault of the teacher, then the whole system falls apart. You can only threaten people with things they care about. (Cormier, emphasis mine)
I've had many discussions with fellow educators around these same ideas, and I find it interesting that we so quickly dismiss "caring about learning" as a lost cause and therefore have to find all these other ways to coerce students into learning. I wonder whether, if we just stepped back and really thought about that statement and what it says about what we're doing, we might figure out that we're doing it wrong.
Why bother learning how to use all these “effective instructional strategies” when people aren’t even going to engage with them? (David Wiley, in the comments).
For my purposes, I might modify that to say "when people aren't even going to care about what they're learning." More and more I'm struggling with the idea of learning about what someone else cares about, for someone else's sake, which is what I feel like we're doing. Yes, folks will argue it is still for the student's sake, but if they don't care about what they're learning, then aren't we putting our needs in front of theirs?
The issue for me, then, is more the mismatch between my students’ desires to connect and what I, or the curriculum, wants them to connect to. Almost all my students want to connect to certain people, ideas, skills, and professions, but most of them do not want to connect to academic writing, the subject I happen to teach. Schools are not adept at, or even interested in, identifying students’ existing interests and playing to those interests. We should be. There is great capital in students’ interests and desires for connection, and we are squandering it. (Keith Hamon, in the comments, emphasis mine)
Separate from the institution of school, when you think about learning, doesn't it start with interest? Then why in school do we think we need to start with curriculum and hope that it will generate interest?
My take is different. I see education less as an enterprise in making people do what they don't want to do, and more as one of helping people do what they want to do. (Stephen Downes)
Stephen is referring to ‘education’ and not to ‘learning’. That word usually indicates that we are talking about the institutions that support learning inside of our culture rather than the broader ‘learning’ that happens as part of being alive. Our education system is always a victim of the need for bureaucratization. It’s terrible… but it’s a necessary evil. (Cormier)
I wonder at the assumption that it's a "necessary evil." I often argue the practical side as well, so I totally get what Dave is saying, but I wonder if we've ever really tried to do it differently? Given the affordances of modern learning (technology, access to information, connectivism, relatively high standard of living - at least in my neck of the world), perhaps we should examine the assumption that 'education' and 'learning' need to be so very different.
I’m suggesting that we need to replace the measurable ‘content’ for the non-counting noun ‘caring’. Give me a kid who’s forgotten 95% of the content they were measured in during K-12 and I will match that with almost every adult i know. Give me a kid who cares about learning… well… then i can help them do just about anything. We simply don’t need all that content, and even if we do need it, we don’t have it anyway . . . We currently have ‘this student has once proved they knew tons of stuff’ as our baseline for ‘having an education’. That’s dumb. (Cormier)
If you have a second, Dave, check out Matthew Lieberman's book Social, particularly Ch. 12, where he discusses education. He echoes your point on p. 282, where he writes: “We spend more than 20,000 hours in classrooms before graduating from high school, and research suggests that of the things we learn in school, we retain little more than half of the knowledge just three months after initially learning it, and significantly less than half of that knowledge is accessible to us a few years later.”
Brutal. Yet we continue to double down. (Dave Quinn, in the comments)
I think most of us know this, both intuitively and from experience, yet we continue to "double down." It's like we acknowledge that what we're doing is ridiculous but, hey, it would be really hard to do it differently, so let's just keep doing it.
The Gallup Purdue Index Report picks up where Wellbeing leaves off. Having established some metrics that correlate both with overall personal happiness and success as well as workplace success, Gallup backs up and asks the question, “What kind of education is more likely to promote wellbeing?” They surveyed a number of college graduates in various age groups and with various measured levels of wellbeing, asking them to reflect back on their college experiences. What they didn’t find is in some ways as important as what they did find. They found no correlation between whether you went to a public or private, selective or non-selective school and whether you achieved high levels of overall wellbeing. It doesn’t matter, on average, whether you go to Harvard University or Podunk College. It doesn’t matter whether your school scored well in the U.S. News and World Report rankings . . .
What factors did matter? What moved the needle? Odds of thriving in all five areas of Gallup’s wellbeing index were
  • 1.7 times higher if “I had a mentor who encouraged me to pursue my goals and dreams” 
  • 1.5 times higher if “I had at least one professor at [College] who made me excited about learning” 
  • 1.7 times higher if “My professors at [College] cared about me as a person” 
  • 1.5 times higher if “I had an internship or job that allowed me to apply what I was learning in the classroom” 
  • 1.1 times higher if “I worked on a project that took a semester or more to complete” 
  • 1.4 times higher if “I was extremely active in extracurricular activities and organizations while attending [College]” 
. . . It really comes down to feeling connected to your school work and your teachers, which does not correlate well with the various traditional criteria people use for evaluating the quality of an educational institution. If you buy Gallup’s chain of argument and evidence this, in turn, suggests that being a hippy-dippy earthy-crunchy touchy-feely constructivy-connectivy commie pinko guide on the side will produce more productive workers and a more robust economy (not to mention healthier, happier human beings who get sick less and therefore keep healthcare costs lower) than being a hard-bitten Taylorite-Skinnerite practical this-is-the-real-world-kid type career coach. It turns out that pursuing your dreams is a more economically productive strategy, for you and your country, than pursuing your career. It turns out that learning a passion to learn is more important for your practical success than learning any particular facts or skills. It turns out that it is more important to know whether there will be weather than what the weather will be . . .
. . . The core problem with our education system isn’t the technology or even the companies. It’s how we deform teaching and learning in the name of accountability in education. Corporate interests amplify this problem greatly because they sell to it, thus reinforcing it. But they are not where the problem begins. It begins when we say, “Yes, of course we want the students to love to learn, but we need to cover the material.” Or when we say, “It’s great that kids want to go to school every day, but really, how do we know that they’re learning anything?” It’s daunting to think about trying to change this deep cultural attitude. (Michael Feldstein, emphasis mine)
And there it is. It's a systemic problem, and we depend on that system to create order out of chaos and, of course, for our employment. It truly is daunting to think about trying to change this and yet . . . we should try anyway.

I think Carol Black nails it when she says,
This is when it occurred to me: people today do not even know what children are actually like. They only know what children are like in schools.
I think we've forgotten that despite all the good intentions behind the idea of schools, and the fact that good stuff does indeed happen in them, they are terribly artificial constructs. Again, as Black says,
Traits that would be valued in the larger American society –– energy, creativity, independence –– will get you into trouble in the classroom . . .

When you see children who do not learn well in school, they will often display characteristics that would be valued and admired if they lived in any number of traditional societies around the world. They are physically energetic; they are independent; they are sociable; they are funny. They like to do things with their hands. They crave real play, play that is exuberant, that tests their strength and skill and daring and endurance; they crave real work, work that is important, that is concrete, that makes a valued contribution. They dislike abstraction; they dislike being sedentary; they dislike authoritarian control. They like to focus on the things that interest them, that spark their curiosity, that drive them to tinker and explore . . .

But any Maori parent knows that you have to watch a child patiently, quietly, without interference, to learn whether he has the nature of the warrior or the priest. Our children come to us as seeking beings, Maori teachers tell us, with two rivers running through them — the celestial and the physical, the knowing and the not-yet-knowing. Their struggle is to integrate the two. Our role as adults is to support this process, not to shape it. It is not ours to control. 
Last night my wife was talking about one of her first graders who is really struggling with school right now and she said something like, "He doesn't want to do anything he doesn't want to do." That makes us both wonder, "Then why are we making him do it?"

So many of the problems that our children have in school are a result of school itself, not any inherent problem in the children.
So one hypothesis is that American schools are not only assuming the normal developmental window for reading to be too narrow, they’re also placing it too early. In other words, it’s not like expecting all children to take their first steps at the average age of twelve months: it’s like expecting them all to take their first steps at the precocious age of ten months. In doing this you create a sub-class of children so bewildered, so anxious, whose natural processes of physical and neurological development and organization are so severely disrupted, that you literally have no way of knowing what they would have been like if you had not done this to them.
“Grade level standards,” please recall, do not exist in nature; they are not created scientifically, but by fiat. And there has been almost no serious study of cognitive development in children whose learning has not been shaped by the arbitrary age grading of the school system. Finland simply sets its standards at a place where most children will succeed. The U.S. sets them at a place where a really significant percentage will fail. This is a choice. In making it, we may be creating disabilities in kids who would have been fine if allowed to learn to read on their own developmental schedule. (Black)
So what if we stopped making them "do what they don't want to do?" What if we tried helping them do what they want to do?
We totally want to be in the business of helping people do what they want to do. Try it. No really. Just try it. Sit down with a child and help them do what they want to do. (Cormier)
What if we just tried it?

Wednesday, January 07, 2015

What Grade Should They Get?

If you've followed this blog for a while, then you're probably aware that I'm not a big fan of grades. I won't rehash the philosophical underpinnings of why I'd like to get rid of grades, but I thought I'd briefly share three recent examples that I think help illustrate why you might want to rethink the way you grade even if you don't agree with me that we should eliminate them entirely.

One of the big frustrations I have when discussing grades with others (whether that be teachers, students, or parents) is that the argument frequently comes down to an unfounded faith in percent. The argument goes something like this:
  • Well, we have to have grades. (I disagree.)
  • You have to set a cut off somewhere. (Why?)
  • This is the percent the student got, math never lies, so therefore this grade is accurate and fair. (Oh really?)
Recent Example #1
It's toward the end of the semester and a student has an 89.5% in a class. They turn in a review guide and get a 20 out of 20 on it. What happens to their overall grade? Does it go up? Stay the same? Or go down?

The vast majority of folks say it will go up. The answer, of course, is it depends. In this particular case, the grade goes down. Yes, a student who has an 89.5% in the class turns in their review guide assignment like a good student should and gets a 100% on it, yet their grade still goes down.

How is that possible? Well, this teacher weights their grades by category. This assignment falls in the Homework category, which gets a weight of 10%. Because this teacher previously offered some extra credit (which is a whole different blog rant), the student's percentage in the homework category before the review guide was turned in was 105.7%. After turning in their correctly completed review guide, their percentage in that category drops from 105.7 to 105, and their overall grade drops from an 89.5 to an 89.4 (which, for many teachers, is the difference between an A and a B - most teachers in my building will "round up" an 89.5).

In effect, the student is penalized for turning in a perfect assignment. What grade should they get?
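For the skeptical, the mechanism is easy to reproduce. Here's a minimal sketch in Python of the weighted-category arithmetic, using numbers I reconstructed to be consistent with the example above (148/140 points in the homework category thanks to extra credit, and the remaining 90% of the grade assumed to average 87.7% - neither figure comes from the actual grade book):

```python
def overall(categories):
    """Weighted-category grade: sum of (category percent, weight) products."""
    return sum(pct * weight for pct, weight in categories)

other = 87.7  # assumed average across the other 90% of the grade

# Homework is 10% of the grade and sits above 100% due to extra credit.
before = overall([(148 / 140 * 100, 0.10), (other, 0.90)])  # homework at 105.7%
after  = overall([(168 / 160 * 100, 0.10), (other, 0.90)])  # after a perfect 20/20

print(round(before, 1), round(after, 1))  # 89.5 89.4
```

Any perfect score added to a category that's already over 100% pulls that category's percentage down toward 100%, so the "reward" for the assignment is a lower grade.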

Recent Example #2
At the end of the semester a student has an 89.1% in a class out of a total of 2,389 points. What happens to their overall grade if they scored 1 point higher on one single assignment earlier in the semester?

Again, of course, it depends. In this particular case, it would raise their overall grade to 89.815% which, again for most teachers in my building, is probably the difference between a B and an A. Some of you will doubt that 1 point out of 2,389 can raise their grade from an 89.1 to an 89.815, but it can. This teacher weights categories as well, and one of their categories is titled Homework Checks and is worth 10% of the overall grade. Here are the student's scores in that category:

See that Slope Quiz on October 31st that the student scored a 7 out of 8 on? If they had received an 8 out of 8, their category percentage would have risen to 100%, increasing their overall percentage in the class by 0.715%, from 89.1 to 89.815.

One point, on one quiz, on one day. What grade should they get?
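The size of that swing follows directly from the category weighting. A 10%-weighted category that holds only about 14 total points (my inference from the numbers above, not the actual grade book) makes a single point worth roughly 0.71 on the overall grade - about seventeen times what a point "should" be worth out of 2,389:

```python
weight = 0.10          # "Homework Checks" category weight
category_points = 14   # inferred category total; not from the actual grade book
total_points = 2389    # total points in the class, from the example

# Moving the Slope Quiz from 7/8 to 8/8 adds one point to the category.
swing = weight * (14 / category_points - 13 / category_points) * 100

# What one point would be worth if every point counted equally.
naive = 1 / total_points * 100

print(round(swing, 3), round(naive, 3))
```

The fewer points a weighted category contains, the more each individual point distorts the overall grade.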

Recent Example #3
Here's a student's percentages in different categories for a particular class:

Homework: 100%
Tests & Quizzes: 88%
Lab Reports: 88%
Participation: 100%
Checkpoints: 85%
Responsibility: 100%
Final Exam: 74%

What grade should this student get in this class?

Well, we could have a long and valuable philosophical discussion about this, but the point of this example is that this student could get two different grades in the same class at my school. How? It depends on what teacher they have and how that teacher weights their categories. Here's what it looks like for three teachers of this class in my building:

And here's what that translates to for the student's percentages in each category:

These teachers all teach the same class. Students are randomly scheduled into their classes by the computer. This student could have performed exactly the same and, depending on the teacher, received an 89.2% (a B), an 89.5% (probably an A, but possibly a B), or a 90.4% (an A), because the teachers choose to weight the categories differently. Oh, and there are two other teachers of this course who grade on total points, so the student would have yet another percentage that we can't determine from this information.

The same student, in the same class, with the same curriculum, at the same school. What grade should they get?
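You can see the mechanism with any two weight sets. The sketch below uses the category percentages from the example but entirely hypothetical weights (the actual teachers' weights were in a table I can't reproduce here); identical performance still produces different grades:

```python
# Category percentages from the example above.
scores = {"Homework": 100, "Tests & Quizzes": 88, "Lab Reports": 88,
          "Participation": 100, "Checkpoints": 85, "Responsibility": 100,
          "Final Exam": 74}

# Two hypothetical weight sets (NOT the actual teachers' weights);
# each set sums to 1.0.
teachers = {
    "Teacher A": {"Homework": 0.10, "Tests & Quizzes": 0.40,
                  "Lab Reports": 0.15, "Participation": 0.05,
                  "Checkpoints": 0.05, "Responsibility": 0.05,
                  "Final Exam": 0.20},
    "Teacher B": {"Homework": 0.15, "Tests & Quizzes": 0.30,
                  "Lab Reports": 0.15, "Participation": 0.10,
                  "Checkpoints": 0.10, "Responsibility": 0.10,
                  "Final Exam": 0.10},
}

# Identical performance, different weights, different grades.
grades = {name: sum(scores[c] * w for c, w in weights.items())
          for name, weights in teachers.items()}

for name, grade in grades.items():
    print(f"{name}: {grade:.1f}%")
```

With these particular weights the two grades land about three percentage points apart, which is more than enough to cross a letter-grade cutoff.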

All three of these examples are real, from my school, from the end of last semester, although I did manipulate the overall percentages for effect (but the assignments and student scores on examples 1 and 2, and the teacher weights on all three examples, are real).

So, even if you believe grades are worthwhile (or if you don't believe grades are worthwhile but you have to give them anyway), I would at least ask that you spend a little more time thinking about them. Your computer grade book is mathematically accurate; it computes exactly what you tell it to compute. But that doesn't mean it makes sense. You are the professional, and if you give a grade to a student you should come up with a more thoughtful way to assign that grade than simply relying on a percentage.

Tuesday, January 06, 2015

You Keep Using That Word

Yesterday was our first day back after winter break and we had a faculty meeting (it was one of our few non-student-contact days). We heard from teachers, our principal and our superintendent (which is a nice mix, although I feel compelled to point out that there was at least one important group we didn't hear from: students). A variety of topics were addressed, but I think it's fair to say that "accountability" was a major theme.

Both my principal and my superintendent addressed standardized testing and, to be clear, it was very nice to hear from both of them that they believe we are testing too much, and that they are both working in the political arena to try to convince the state to reduce the amount of required testing. But I found it interesting that they both repeated almost exactly several sentences that I hear many folks in education use: "I'm not against accountability. I think accountability is important. We need to be held accountable to make sure we're doing our jobs."

Every time I hear phrases like those I find myself thinking of a line from The Princess Bride,
You keep using that word. I do not think it means what you think it means.
Google gives me this definition, which I guess is as good as any. (I also find the use-over-time graph very interesting.)

Of course, since it defines accountability in terms of being accountable, we have to dig a bit deeper.

So accountability is being responsible for and justifying our actions or decisions. In our current environment, and the way that most folks in the education discussion seem to use it, that means using test scores to "justify" that what we're doing with our students is "working." Therefore we need some amount of standardized testing to prove that we're being successful, to hold us accountable. That's wrong.

The problem isn't so much with their understanding of the word accountable, it's with their assumptions of who we are accountable to and what we are accountable for. We are not accountable to the test, or to the state, or even to the curriculum - we are (or at least should be) accountable to our students. We are (or should be) responsible for our actions and decisions in relation to our students' wants and needs - what they care about, and test scores don't measure that. Even for folks who believe that learning is mastering a fixed body of knowledge and being able to regurgitate that on command, test scores wouldn't hold us "accountable." Test scores don't measure the quality of our actions and decisions while interacting with our students. And, if you don't believe that mastering a fixed body of knowledge and regurgitating it on command is "learning," then using test scores for "accountability" is even more ludicrous.

Test scores don't hold me accountable as a teacher; they don't make sure I'm "doing my job". Standing up in front of (or beside) students each and every day, meeting their needs and helping them find out what they care about, and then helping them learn more about that, that holds me accountable. As long as educators continue to agree and reinforce that test scores are the way to keep us accountable, we're never going to make any progress. It's inconceivable.

Wednesday, November 12, 2014

Data Doesn't Create Meaning. We Do.

I found this TED Talk by Susan Etlinger to be interesting in and of itself, so I think it's worth 12 minutes of your time. Several of the things she said really resonated with me, so I'll discuss them briefly after you watch.

At just past the one-minute mark, she says:
We have to ask questions, and hard questions, to move past counting things to understanding them.
This is reminiscent of the oft-used quote (usually attributed to Einstein, but he probably didn't say it), 
Not everything that can be counted counts, and not everything that counts can be counted
but I sorta like this one better. Because counting things is often a good thing but we can't stop there, we have to provide the context, the understanding, the wisdom to do something good with what we've counted.

At about the 6:30 mark, she says,
this is what happens when assessments and analytics overvalue one metric — in this case, verbal communication — and undervalue others, such as creative problem-solving
This sums up my main objection to PISA/PARCC/CMAS/fill in your own state test. We're so proud of ourselves for coming up with the metric that we've stopped asking ourselves whether it's an important metric in the first place. (I just finished Yong Zhao's new book where he goes into great detail discussing the history of education in China, and why the PISA results - and especially the conclusions assigned to those results - are almost meaningless.) We are overvaluing a metric that may (or may not) show how well you will do in school, but has very little worth in determining how well you will do in life.

At about 8:20, she brings it home,
And at this point, you might be thinking, "Okay, Susan, we get it, you can take data, you can make it mean anything." And this is true, it's absolutely true, but the challenge is that we have this opportunity to try to make meaning out of it ourselves, because frankly, data doesn't create meaning. We do. So as businesspeople, as consumers, as patients, as citizens, we have a responsibility, I think, to spend more time focusing on our critical thinking skills. Why? Because at this point in our history, as we've heard many times over, we can process exabytes of data at lightning speed, and we have the potential to make bad decisions far more quickly, efficiently, and with far greater impact than we did in the past. Great, right? And so what we need to do instead is spend a little bit more time on things like the humanities and sociology, and the social sciences, rhetoric, philosophy, ethics, because they give us context that is so important for big data, and because they help us become better critical thinkers. (emphasis mine)
At various times in my life I've taught students mathematics, so in some ways I'm a big fan of data. But the mistake we've made (and are currently doubling down on with our new state tests) is confusing data with meaning. Data is only as good as the questions you ask, the way you ask them, the way you collect the data, and - critically - how you then interpret it.

Or, as Susan says at about 10:40,
if I don't know what steps you took, I don't know what steps you didn't take, and if I don't know what questions you asked, I don't know what questions you didn't ask
In education we currently have a love affair with data, without bothering to ask whether the questions we're asking are the right ones, or the only ones.

Data doesn't create meaning. We do.

Data doesn't define learning. We do. Or at least we should.

Monday, October 27, 2014

Data-Driven Schools: Homework

In my school's student handbook we state,
Homework is an expectation . . . Achieving students do homework at least 5 out of every 7 days . . . Do homework Sunday through Thursday, take Friday and Saturday off! . . . Average nearly two hours of homework each night.
Since we're increasingly encouraged to be "data-driven", I have a few questions.

Let's start with the "two hours of homework Sunday through Thursday." This has been an expectation since I started at Arapahoe . . . in 1991. I wonder what kind of "data" we based the two hours on. Why not 1.5 hours? Or 2.5 hours? Or for that matter, why not 111 minutes instead of 120? (We have an overly fond appreciation for numbers that end in 5 or 0.)

What kind of research did we do to determine that 120 minutes was the appropriate and most effective amount of homework each night? I'm one of only about four or five staff members who've been here since 1991; we've never done any research on this since then that I know of, and I don't know of any research that was done before then, so I suspect there is none. So if we just made up this number, how is that "data-driven"? Perhaps we need to sit down, rethink this, and decide if that's truly the best number.

Of course if we did that, then we'd probably also want to look at the research on the effectiveness of homework in general. Alfie Kohn has been a longtime skeptic on the value of homework, so much so that he wrote a book called The Homework Myth. In that book he argues that the research shows no support for homework at all at the elementary level, and at the high school level there is only a weak correlation between homework and increased test scores (and, of course, that then leads into the debate about whether those test scores are meaningful or worthwhile). It's fair to say that he advocates for no homework at all, other than reading or self-assigned homework.

He recently wrote an article in the Washington Post about a new study that looked at homework and its effect on test scores and grades. In terms of test scores,
Was there a correlation between the amount of homework that high school students reported doing and their scores on standardized math and science tests? Yes, and it was statistically significant but “very modest”: Even assuming the existence of a causal relationship, which is by no means clear, one or two hours’ worth of homework every day buys you two or three points on a test. Is that really worth the frustration, exhaustion, family conflict, loss of time for other activities, and potential diminution of interest in learning? And how meaningful a measure were those tests in the first place, since, as the authors concede, they’re timed measures of mostly mechanical skills? (Thus, a headline that reads “Study finds homework boosts achievement” can be translated as “A relentless regimen of after-school drill-and-skill can raise scores a wee bit on tests of rote learning.”)
And the effect on grades?
There was no relationship whatsoever between time spent on homework and course grade, and “no substantive difference in grades between students who complete homework and those who do not.” This result clearly caught the researchers off-guard. Frankly, it surprised me, too. When you measure “achievement” in terms of grades, you expect to see a positive result — not because homework is academically beneficial but because the same teacher who gives the assignments evaluates the students who complete them, and the final grade is often based at least partly on whether, and to what extent, students did the homework. Even if homework were a complete waste of time, how could it not be positively related to course grades?

And yet it wasn’t. Again. Even in high school. Even in math. The study zeroed in on specific course grades, which represents a methodological improvement, and the moral may be: The better the research, the less likely one is to find any benefits from homework. 
It's important to note that not everyone agrees with Kohn's interpretation of the data, but even most of what I've read in support of homework tends to show it having a relatively small effect on student "achievement" (I prefer the word learning, myself), and often ignores the question of whether this work should be done at home or could be done at school.

I find it interesting, however, that we haven't looked at any of the research, or any of the dialogue between folks like Kohn and Willingham; we've just decided homework is good, and that two hours a night, five days a week, is the optimal amount. So why do we assign homework?

In general, I think there are three main reasons that I've heard teachers use (and have used myself).
  1. Students need the practice.
  2. I can't cover the curriculum unless I give homework.
  3. It teaches responsibility.
The research provides little or no support for number one. What little support it does give could be accomplished by giving students time in class to practice. At what point did we decide that school was so important that we should assign students a "second shift" of work at home after school was purportedly over?

Which leads to number two: there's not enough time to cover the curriculum. I agree with the diagnosis 100%, but not the treatment. Instead of assigning homework (and assigning students a "second shift") in order to cover the curriculum, we should change the curriculum.

I struggle with the increasing emphasis on covering more, and more advanced, topics earlier and earlier, and with the emphasis on curriculum over learning. For example, we are now teaching topics in Algebra I (typically a freshman course) that we used to teach in Algebra II (typically a junior course). Why? And does it matter if you learn Algebra by age 15, or would it be okay if you mastered it at 16? (Or 25, for that matter?) We say we want to create lifelong learners, yet our policy is that students must learn things at certain ages that we determine (and standardize for all students). It's as if we think there's an expiration date on learning.

As for the third reason, I have yet to see any research showing that assigning homework teaches responsibility. In fact, anecdotally, I would say that it does not. How many high school teachers have you heard complain about students not doing homework? Yet we've been assigning homework for years; shouldn't that have taught them responsibility by now? But even if it did, would that be the best way to teach responsibility? I would suggest that giving students meaningful and important things to do might teach them responsibility better than assigning homework of dubious value.

So, where does that leave us? If we truly believe that "data-driven" is the way to go, then the data is telling us that we need to step back and reexamine both our assumptions and our practices. I've previously suggested that with textbooks the default should be not to adopt one, and that we should then have to justify why we need one. I would propose something similar for homework: the default should be no homework, and any homework we assign should be justified. That justification has to be well thought out, can't rely on any of the three reasons above, and has to take into consideration the social and emotional health of our students.

And what about "average two hours of homework each night Sunday through Thursday"? Show me the data.