Who Is Really Responsible For The Summer Slide?

This post was originally published in @PeterMDeWitt’s blog Finding Common Ground in Education Week.

Today’s post is written by frequent Finding Common Ground guest blogger Lisa Westman. Lisa is an instructional coach specializing in differentiation for Skokie School District 73.5 in suburban Chicago. She taught middle school gifted humanities, ELA, and SS for twelve years before becoming a coach.

School let out in the Chicago suburbs just over a week ago. While I have never been a proponent of the “last days of school countdown” and much prefer Twitter movements like #lastbell, I must admit, I like the time off. I appreciate waking up in the morning without an alarm and drinking coffee from a real mug.

Similarly, my children (ages 11 and 8) have enjoyed sleeping in and playing outside. It wasn’t until day 6 of our time off together that we did something “educational.” We visited the library, where we were greeted by a large poster reminding us to read and avoid the dreaded “summer slide.”

What is the summer slide?
The summer slide refers to the phenomenon of academic growth lost by students over the summer months, when they are not actively engaged at school. On average, students lose one to three months of learning during the summer, with students from low-income homes disproportionately affected (ASCD).

There is a plethora of recommendations for minimizing the impact of the summer slide. Most suggestions, including those listed in a recent article in Forbes Magazine, focus on two aspects of the slide, one preventative and one reactionary:

  1. what parents/guardians can do to avoid the summer slide

  2. what educators can/need to do to fix the damage done over the summer when school resumes in the fall

Why are we placing the burden of preventing the summer slide on parents?
As an educator, I have insight into what my children should be doing over the summer, and I have the luxury of time off to do things like read with them. Yet, to be honest, I don’t assess whether our activities help their retention, nor do I want to. This leads me to wonder about the majority of parents who aren’t trained educators or who don’t have time off from work. Are they really the right party to rely on to prevent the summer slide?

There are people, like Geoffrey Canada, who say the idea of no school in the summer is asinine altogether:

“every 10 years they reproduce the same study. It says exactly the same thing: Poor kids lose ground in the summertime. The system decides you can’t run schools in the summer…who makes up those rules? — I went to the Harvard Ed School. I thought I knew something. They said it was the agrarian calendar — but let me tell you why that doesn’t make sense…anyone knows if you farm, you don’t plant crops in July and August. You plant them in the spring” (TED Talk, Our Failing Schools. Enough Is Enough, 2013).

However, considering that a systemic change (like mandated year-round school) could take years to legislate, we ought to focus less on what parents and students should do to prevent the summer slide and focus more on what we (educators) can control. The questions we should be asking ourselves are:

  1. What are we doing during the school year to ensure that the growth our students make is permanent?

  2. What are we (inadvertently) doing to make students resistant to learning in the summer?

And, I propose that the following practices (or lack thereof) are unwittingly contributing to our students’ summer slide:

Reliance on Bells and Schedules
During the nine months we have students in our classrooms we consistently send them subliminal messages that learning is fixed and structured, rather than fluid and ubiquitous. This is not malicious, but true nonetheless.

We offer our students instruction in the form of “periods” or “blocks” which typically rely on bells to indicate when learning starts and stops. Students learn reading from 8-9, and then they learn science from 9-10. And, while many schools claim to teach literacy in all classes, or engage in interdisciplinary learning, on the whole, these connections are not clear to students. Students struggle to transfer information learned in one class to another class, let alone from one year to another.

What we need to do is recognize, vocalize, and celebrate the fact that the content, skills, and concepts we cover in our classrooms just scratch the surface of what there is to be learned. We need to focus on building students’ metacognitive awareness so they recognize when and where they are learning, so they can self-identify what strategies to use to best understand the new information to which they are constantly exposed. By doing so, even when students are at home “playing video games” all summer, we give them the greatest opportunity to learn something from playing these games (plotline of a story, digital imagery, strategizing) and make connections.

Incorrectly “using” formative assessment
In Formative Assessment 2.0, Larry Ainsworth offers Stiggins’ explanation of formative assessment as something that, “happens while learning is still underway. These are the assessments that we conduct throughout teaching and learning to diagnose student needs, plan for next steps in instruction, provide students with feedback they can use…”

When done correctly, formative assessment (sometimes referred to as assessment for learning) informs both the teacher and the student of whether or not concepts/skills have been consistently mastered. The consistent “loss” of skills or knowledge over the summer months is indicative of improperly assessing students’ progress/mastery throughout the year. Furthermore, this loss suggests the focus is on moving students as a whole, rather than focusing on individual student growth which would require the use of formative assessment evidence to differentiate for their needs.

Perhaps, if we truly shift our focus to assessment for learning rather than assessment of learning, and resume teaching our students where they actually left off the year before, the gaps will not be as cavernous.

Making reading a punishment
If, as advertised, reading is the key to preventing the summer slide, the one thing all educators must do is cultivate a love of reading.

Unfortunately, however, we tend to do just the opposite and systemize reading. For many students, reading is seen as a chore, a measure of compliance, or worse, something it is ok to “lie” about (read more about this here or here).

With this in mind, it is no wonder that students choose to not read in the summer. They need a break because reading feels strenuous and stressful.

Rather than assign reading in and of itself, we need to pose relevant and provocative questions that naturally compel students to read. Instead of assigning 20 minutes of reading a night, we can ask students questions about what they read outside of class (online, in books, in magazines, even subtitles) and accept that reading takes on many forms.

When we expose students to reading in a variety of forms and recognize learning from reading of any source (“Wow, that’s pretty cool. Where did you learn/read about that? I’ve never heard that before. Can you show me?”), it’s pretty incredible how much more students are willing to read.

In The End
Until school runs year-round we may never fully eradicate the summer slide. But, we can certainly do our best to ensure that what our students learn is permanent and not fleeting. What are your thoughts on the summer slide?

Questions about this post? Connect with Lisa on Twitter.

Photo courtesy of Pixabay.

Differentiation: Attainable Or Somewhere Over The Rainbow?

This post was originally published in @PeterMDeWitt’s blog Finding Common Ground in Education Week.

Today’s guest post is written by frequent Finding Common Ground blogger Lisa Westman. Lisa is an instructional coach specializing in differentiation for Skokie School District 73.5 in suburban Chicago. She taught middle school gifted humanities, ELA, and SS for twelve years before becoming a coach.

“If we have data, let’s look at data. If all we have are opinions, let’s go with mine.” Jim Barksdale

If you want to get an educator’s attention, just say the word differentiation. Call me naive, but until last summer, I had no idea that this word provoked such a wide range of reactions from all education stakeholders.

Then, last August, I wrote my first guest blog post for Finding Common Ground, Yes, Differentiation Is Hard. So, Let’s Get It Right, and the floodgates opened. It turns out, people have strong feelings about differentiation, and I have been listening and gathering specifics on these views.

Since the post was published, I have written and presented about differentiation quite a few times. Most recently, I presented alongside Carol Ann Tomlinson at ASCD Empower17 and co-moderated #ohedchat on the topic. Every time I write or present on differentiation, I note the questions and comments readers or participants make, and I have found some common themes in response to the topic of differentiation, such as:

  • Teachers often feel unequipped to differentiate effectively.
  • Administrators don’t always recognize differentiation when they see it, or they think they see “differentiation” when what they really see is “different”.
  • Many gifted education advocates believe the needs of gifted students cannot be met in the ‘regular’ classroom through differentiation.
  • There is a pervasive generalization and misunderstanding of the words: assessment and data.

Over time, I plan to address the first three bullet points in detail, but for the purposes of this post, I want to explore the fourth item: the words assessment and data. The misleading associations with these words (assessment = test, data = numbers) have become a giant barrier for teachers who strive to differentiate instruction yet struggle to do so effectively.

The Wicked Witch of The West Assess
I was never the best geometry student, but the one thing that stuck with me was “all squares are rectangles, but not all rectangles are squares.” The same can be said about assessments: “all tests are assessments, but not all assessments are tests.”

The Merriam-Webster dictionary defines assess as “to make an approximate or tentative judgment,” and tests can certainly do this. However, oftentimes tests are the least effective way to ascertain where students are and what they need. Test results amass only a certain type of information; to differentiate successfully, other evaluations (observations, writing samples, conversations) and facets (social-emotional, aptitude, growth) of student performance must be considered.

The way we assess and the assessments we use give us the data we need to inform how to appropriately differentiate instruction for students. Therefore, if we are not using a variety of reliable assessments, our attempts to differentiate instruction often fall flat because the data we try to use doesn’t give us the information we need.

Follow The Yellow Brick Road Data
The word data does not have a warm connotation. Saying “data” in conjunction with student learning often feels sterile and uncaring. I often hear sentiments like, “students are more than a number.” And, when I presented with Carol Ann Tomlinson, she responded to a question about using data with, “data sounds like something spit out by a machine.”

And, I agree, students are more than a data point. They are more than a number spit out by a machine. And, so is data. Data is more than just numbers, and it can indeed be gathered and appraised in compassionate ways.

Let’s look at an analogous situation: a child’s visit to his pediatrician. When a child visits his doctor, he is more than a number there, too. Therefore, in order to form a diagnosis, pediatricians look at a variety of evidence, some of which comes from a lab or machine (weight, temperature, blood count) and some of which comes from other assessments (conversations, questionnaires, observing the patient perform a task). Yet, there is little complaint about using multiple types of data in a medical setting. In fact, I surmise that if a doctor made a diagnosis without various types of data, there would be quite a bit of protesting.

So, what is the difference?

In education, we seem to think that the only usable data we have are numbers: test scores, IQ scores, attendance rates, etc. This is like saying the only data a doctor can use is the patient’s height, weight, blood pressure, etc.

If this were the case, think of how many misdiagnoses would be made from using only these pieces of evidence. The doctor would not have some of the vital information (data) he needs to diagnose the patient and prescribe a course of action.

Instead, doctors are also highly dependent on information that comes directly from the patient via conversations and observations. This is data which is collected with sensitivity and not calculated by an algorithm. A doctor uses information from all of these sources to differentiate his approach for his patients, so they thrive.

The same holds true for using data to differentiate for our students in the classroom. When we say the word data in education, we are simply referring to the different types of evidence we gather and consider to differentiate instruction for our students, so they thrive.

There’s No Place Like Home A Data Dashboard
In summary, differentiation is a natural byproduct of collecting and using the right information, and the traditional methods of teacher data collection are quickly becoming (if not already) obsolete.

Luckily, help is here. In Data Dashboards a High Priority in National Ed-Tech Plan, Education Week contributor Malia Herman states:

“The push for wider and better use of data (dashboards)–which allow educators to examine and connect relevant student data from multiple sources–is growing stronger…learning dashboards integrate information from assessments, learning tools, educator observations, and other sources to provide compelling, comprehensive visual representations of student progress in real time.”

To keep the analogy going, a data dashboard is like a patient’s chart at his doctor’s office. This is the place where all of the information on individual students is housed and where their growth over time can be contemplated. And, like a patient’s chart, only certain people are privy to an individual student’s information. This comprehensive view of a student makes differentiating for their needs more accessible, attainable, and sustainable.

What are your thoughts? What is your experience using data to differentiate instruction? What successes and struggles have you encountered? Please comment below or tweet your response so we can learn from each other.

Questions about this post? Connect with Lisa on Twitter.


Differentiation is The Key to Assessment For Learning

“Too often, educational tests, grades, and report cards are treated by teachers as autopsies when they should be viewed as physicals.” Douglas Reeves

One of my favorite stories is about the man who taught his dog to whistle. The man was so proud of his teaching. He walked his dog around town and proudly proclaimed, “I taught my dog to whistle!”

Then, one day, a neighbor stopped the man and said, “I don’t hear your dog whistling.”

To which the man responded, “I said I taught him to whistle; I didn’t say he learned.”


Think about your students, past and present, on a “test” day. Can you recall a student who was nervous? Maybe a student who even cried? This is a common reaction to assessment of learning rather than assessment for learning.

For many years, assessment was used as a measure to inform teachers and students how students performed in comparison to each other at arbitrary points in time. Thankfully, with years of research and a shift in the way teaching and learning are approached, the recommended method of determining student success is now using assessment to measure growth. The focus has shifted to ensuring that students learn rather than simply that teachers taught. Assessment results are no longer final verdicts for students, but rather information for them and their teachers on where to go next, otherwise known as assessment for learning.

Assessment for learning is “the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there,” otherwise known as formative assessment (Assessment Reform Group, 2002).

The key to effectively implementing assessment for learning is using the evidence gathered to inform instruction rather than just collecting data. Pre-assessments (or the first in a series of formative assessments) give teachers a starting point, and formative assessment helps teachers set the pace and choose content and strategies for students as they progress in their learning. Too often, educators believe they are practicing assessment for learning when they are not. Below are some common mistakes that contradict assessment for learning:

  • Collecting data (evidence) and not using it (watch this commercial for a funny visualization of this practice)
  • Simply not counting “formative” assessments toward a final grade, but not adapting pacing or differentiating for students who need modifications or extensions
  • Assessing the wrong criteria (i.e. assessing content recall when the learning target is a skill)
  • Focusing on arbitrary dates to finish learning, like “end of quarter”

Quick check to see if you are using assessment for learning correctly

A quick test to see if you are using formative assessment properly is to ask yourself, “Am I differentiating for my students?” It is nearly impossible to practice assessment for learning without differentiating. When looking at evidence of learning, teachers will inevitably find that some students will move more quickly than others and need extensions while others will require scaffolding to achieve. Additionally, some students will need to approach the material in an entirely different way.

This is where differentiation is vital. Teachers will need to determine which differentiation category, or combination of categories, they need to adapt to meet the needs of their students: the content (what students learn), the process (how students learn), the product (how students demonstrate their understanding), and the learning environment (where and with whom students learn).

Differentiating instruction is frequently the piece of assessment for learning that teachers find the most intimidating, but it doesn’t need to be this way. Below is a list of common concerns teachers have with differentiating instruction and some considerations that may help ease these apprehensions.


In summary: in order for something to be taught, it must be learned. In order for all students to learn, appropriate evidence (assessment results) must be gathered and used to inform future instruction. As educators, the onus is on us to ensure students learn. When we confirm learning, we confirm we have taught.


This post was originally published on Corwin Connect.

Standards Based Grading Made My Kid Average


This post was originally published in @PeterMDeWitt’s blog Finding Common Ground in Education Week.

Recently, a friend called me in a panic. She was beside herself because she had just received her seventh-grade daughter’s new standards-based report card. My friend relayed that her daughter (who was formerly an “A” student) was now “just average” according to the new report card.

I asked my friend if the report card had the word “average” on it and my friend said, “no.” She elaborated that her daughter had received all “meets” and no “exceeds” on her report card, and, therefore, her daughter was now, “just average.”

I calmly responded that “meets standards” does not equate to average. I clarified that a standards-based grading system does not neatly align to the traditional grading system we experienced in our schooling. I explained that standards-based grading is a much more pragmatic and informative way of reporting student progress than the traditional A-F approach.

I expected my friend to accept this explanation and settle down, but instead, her emotions escalated, and she replied, “well, my daughter’s teacher thinks standards-based grading is stupid, too.”

“We are the stories we tell ourselves.” Joan Didion

Many school districts that have made the switch to standards-based reporting have been met with reactions like the one illustrated above. And, although I was surprised by my friend’s response, I shouldn’t have been. Reactions like hers are to be expected when identities are threatened, and eliminating traditional grading practices poses a threat to many people’s identities.

How so?

The A-F/100-point traditional grading system has been in place since the early twentieth century. This means all parents and grandparents of students currently in kindergarten through 12th grade, as well as the vast majority of today’s teachers, experienced school with a traditional grading system.

Based on the grades we received as students, we told ourselves we were “good” or “bad” students. We used our grades to tell ourselves which subjects we were “smart” in and which ones we weren’t. We used our grades to compare ourselves to our peers. Our parents used our grades to compare us to their peers and their peers’ children. We used our grades to determine if we were cut out for certain careers. We allowed grades to tell us many stories about who we were. For better or for worse, these stories have played a part in shaping our identities as adults. Therefore, when we remove a critical piece of our identity formation (traditional grades), we may, consciously or not, feel threatened.

So, now what?
We will be uncomfortable for a little while. Ultimately, just like us, our children’s identities will be shaped, in part, by the educational experience they have. However, if implemented correctly (as extensively researched and reported by Thomas Guskey and Rick Wormeli), standards-based reporting should allow students to identify as individual learners, rather than comparably “good” or “bad” students.

The concept of standards-based grading is not easily enacted by teachers, nor is it easily understood by parents. Rather, this change is a work in progress which requires both educators and parents to work together to relearn what we have been taught in the past about grades.

While this shift is difficult for both educators and parents, it is the educators who must lead the charge and be the first to relearn (watch this video for some inspiration on relearning). The way in which educators share information about standards-based grading with parents is crucial for successful implementation. If educators are positive, admit that change is hard, and stick with the change because it is in the best interest of students, parents will follow suit. However, if educators protest, criticize, or are ambivalent about the benefits of standards-based grading, parents will react similarly. Educators must model the reaction they hope to elicit from parents and students.

To effectively communicate with parents, educators must put to rest some of the widely-held fallacies about grading like the three listed below:

Fallacy #1: Parents need letter grades to understand their child’s performance.
Reality: Traditional grades give the facade of understanding because they use familiar words and measures. Consider a report card that lists Math: A, Reading: B+. Parents understand the words math and reading. They understand that an A is the highest grade and that a B is close to an A. But the reality is, this communication does not actually tell parents anything about what was learned. Math and reading are categories too broad to offer any insight, and the letter grades could mean a variety of things, many of which have nothing to do with reading or math.

Now what? Standards-based grading is an opportunity to create a common understanding of exactly what is being assessed. When teachers take care to ensure assessments are appropriately aligned to the standards they are assessing, the assessments become a vehicle for dialogue between students, parents, and teachers to adequately discuss where students are in their learning progression and where they are going.

Fallacy #2: Letter grades are more objective.
Reality: Once again, an A-F system creates a facade of objectivity. Using a percentage attached to a letter (93% = A) feels objective. But what isn’t necessarily objective are the tools used to garner those scores. When I taught English, I often struggled to determine the critical difference between an 89% and a 90% on a student’s narrative writing assignment. When I taught social studies, I assumed the multiple-choice tests I created were completely objective due to the right/wrong nature of the questions. I didn’t consider, however, the inherent bias of the questions, since I had written them.

Now what? There is a reason teachers are part of a PLC/team and there are reasons why these teams are encouraged to meet frequently. This is a time for teachers to discuss topics like objectivity. It is no longer frowned upon for educators to admit that learning is not an entirely empirical process. Learning is complex and, therefore, grading is complex, too. When we look at student work as a team, engage in dialogue about assessments, and come to a consensus as to what “meeting standards” is, we are making the reporting process as objective as possible.

Fallacy #3: By the time we shift to standards-based grading, there will be a new fad, and we will have to start all over again.
Reality: It will take time for individual school systems and the educational system as a whole to fully embrace this change. It is likely that once we become comfortable with this change, there will be additional amendments to the way we grade. But, such is life. This is part of what all successful industries do to stay relevant. They make changes to improve processes, gather new information, and make more changes to improve processes again.

Next Steps: Don’t lament the process. Don’t worry about what the future holds. We are doing what is best for students with the information we have right now. Celebrate the progressive and long overdue steps we are taking to use grading as an indicator of learning rather than a symbol of finality.