The Illusion of Change

By guest blogger Dr. Anthony Muhammad, author and leadership consultant. Dr. Muhammad is the keynote speaker at our complimentary VIP Leadership Summit, hosted by the Center for Reading Recovery and Literacy Collaborative on May 15, 2017. This invitation-only event will offer school leaders an opportunity to discuss transforming school culture to build teacher leadership and improve student outcomes. Please email literacy@lesley.edu for more information.

Change is a very difficult process, but it is the catalyst to continuous improvement. It tests our ability as professionals at many different levels. Sometimes, when things get too challenging, we tend to look for shortcuts or we quietly surrender. We live in a political climate that demands that we change, whether we choose to or not, but I have found that some organizations are good at creating the illusion of change rather than being fully involved in the process of change. There are three key phrases that clearly indicate an organization is not fully committed to the change process.

Phrase #1: “We are having conversations”

This phrase is code for: “we have a lot of opposition to this idea and we are afraid to make people too uncomfortable and release an onslaught of political and social opposition.” I recently worked with a school that has been involved with the implementation of the Professional Learning Community (PLC) process for three years. They have created collaborative teams and they have designated time for those collaborative teams to meet. They have created district-wide formative assessments that are administered four times per year. These milestones were reached in the first year of the process. So, I asked about PLC Questions #3 and #4, which address systems of student intervention and enrichment, and the room got very quiet. When people finally began to speak, each answer began with the phrase “we are having conversations.” If your district is “having conversations,” the change process has stalled.

Phrase #2: “We are in different places”

This phrase is code for: “we don’t have a universal system of accountability; people who understand the intrinsic value of what we propose have embraced it, and those who are averse are allowed to disregard it until they ‘buy in’.” Schools and systems that use this phrase are engaged in what I call “accountability light.” This is a diet version of universal professional accountability, in which group expectations and coherence are the norm. Healthy school cultures make collaborative decisions and they hold each other mutually accountable for full participation. When shared commitment is not achieved, a tiered system of commitment emerges where implementation is based upon personal preference. Partial commitment is the same as no commitment.

Phrase #3: “District initiatives”

This phrase is code for: “there is a huge philosophical divide between school practitioners and central office, which has led to a stalemate.” I have had the pleasure of working with thousands of schools on the change process, and whenever practitioners refer to the change process as a “district initiative,” it is never good. In essence, what they are expressing is a feeling of imposition. In the mind of the school practitioner, they are confronting real-world issues and have their fingers on the pulse of the needs of the school, while central office lives in a world disconnected from reality and its priorities are unreasonable and unnecessary. This is a clear indication of poor communication and professional disconnection. If your district has a lot of “initiatives,” effective change is probably not on the horizon.

Are You Teaching or Testing Comprehension?

by Irene Fountas, Author and Founder/Director of the Center for Reading Recovery and Literacy Collaborative at Lesley University

All too often, successful comprehension has been regarded as a student’s ability to answer a teacher’s questions. That is one way of assessing comprehension, but it does not enhance the reader’s self-regulating power for processing a new text with deep understanding. Think about how your teaching moves may be focused on testing when you continually pose questions, and how you can shift to teaching, helping students learn how to comprehend texts for themselves.

Teaching for comprehending means supporting your students’ ability to construct the meaning of the text in a way that expands their reading ability. You can help them learn what to notice in a text and what is important to think about, how to solve problems when meaning is not clear, and provide scaffolds to develop their in-the-head systems for working through the meaning of the text. These abilities are generative, so students will be able to transfer what they learn how to do as readers before, during, or after reading to a variety of increasingly challenging texts in every genre.

Introduce the text to readers

When you introduce a challenging text to your students, be sure to help them notice how the writer constructed the meaning, organized the text, used language, and made decisions about the print features. Help them know how the book works and get them started thinking about the writer’s purpose and message and the characteristics of the genre.

Prompt the readers for constructive activity

As students read orally, interact very briefly at points of difficulty to demonstrate, prompt for, or reinforce effective problem-solving actions that they can try out and make their own. Your facilitative language is a call for the reader to engage in problem-solving that expands their reading strengths.

Teach students how to read closely

Take the readers back into the text after reading to notice the writer’s craft more closely. Select a phrase, sentence or paragraph, or focus on helping them notice how the writer organized the whole text. Revisiting the text calls the reader’s attention to particular features.

Engage students in talk about texts

Talk represents thinking. When students talk about a text, they are processing the vocabulary, language and content aloud. This enables them to articulate their understandings, reactions and wonderings. When they learn to be articulate in their talk, they can then show their ability to communicate their thinking about texts in their writing.

Engage students in writing about texts

Writing about reading is a tool for sharing and thinking about a text. When students articulate their thoughts in writing, they confirm their understandings, reflect on the meaning and explore new understandings.

Testing is a controlled task for measuring what students can do without teacher help. Teaching is the opportunity to make a difference in the self-regulating capacity of the learner. Reflect on your teaching moves and engage in a discussion with your colleagues to shift from testing to teaching. When students focus on meaning-making with every text they read, they will be able to show their competencies on the test.

For more information about the Lesley University Center for Reading Recovery and Literacy Collaborative’s events and trainings, visit http://www.lesley.edu/crr

Solid research or pseudo-science: How can you tell?

by Wendy Vaulton, Senior Researcher, Center for Reading Recovery and Literacy Collaborative

Although there are some red flags that may tip you off to poor scholarship or propaganda disguised as science, sorting the good from the bad when it comes to educational research can be daunting. This topic is important for practitioners and scholars alike. In fact, the most recent issue of Educational Researcher focuses specifically on the question of what should count as high quality education research (http://edr.sagepub.com/content/43/1/7).

So, while the conversation about judging research quality continues, how can you tell if the next study that comes across your desk is worthwhile or belongs in the garbage bin? Let’s start with the easy stuff. Remember those red flags I mentioned? Be very cautious when:

  • A researcher uses inflammatory or insulting language
  • Sources cited are almost entirely from special interest groups or popular media (e.g., Wikipedia)
  • Study limitations and alternate explanations of findings are ignored or sidestepped
  • The author uses logical fallacies and spurious arguments to support a point

It gets trickier when errors and bias in research are subtle or technical. Although not a guarantee of every aspect of quality, research that appears in reputable, peer-reviewed journals is generally more credible than articles that appear in other sources. Peer review means that researchers who know the field critique and assess an article before it is published to make sure it is accurate, the methods are rigorous, and the topic is relevant to the journal or publisher. Many government reports are also reviewed by a commission of experts.

Typically, the tone of quality social science is circumspect and judicious. Good research points out its own flaws and limitations rather than obscuring them. Desirable practices include drawing conclusions that are supported by data and avoiding grandiosity and hyperbole. In other words, good research values substance over flash, focusing on the steak rather than the sizzle.

I feel pretty confident saying that to date, there has never been a perfect piece of social science. Bias can enter into a study at any point in its development. What research questions are asked and how they are answered depends on how someone thinks about the issue. Good social science does its best to limit and acknowledge those sources of bias rather than pretending they don’t exist.

A number of people have compiled good checklists of questions for judging the quality of research. Some of the questions that frequently appear on these lists include:

  1. Does the researcher clearly identify a relevant research problem or question?
  2. Is there a theory or framework that guides the study and analysis?
  3. Has the author reviewed the relevant literature thoroughly?
  4. Is the study’s methodology clearly described? This includes the study’s sample selection criteria, data collection instruments and scientific procedures.
  5. Are the results presented in a well-organized, easy to follow format?
  6. Are the limitations of the study discussed?
  7. Do the researcher’s conclusions follow logically from the results?
  8. Are the implications of the research of an appropriate size and scope given the nature of the study?
  9. Does the author have a financial interest in the outcomes of the research?

So take these things into consideration the next time you come across a piece of research. Also, for a good example of bad science, take a look at The Dangers of Bread. This hilarious send-up is a great way to see how easily the trappings of science and statistics can be abused to prove a point. You can find it at http://www.geoffmetcalf.com/bread.html

Visit our Center for Reading Recovery and Literacy Collaborative Research and Outcomes page at http://www.lesley.edu/center-for-reading-recovery/research-and-outcomes/

Dive in, but don’t drown

by Wendy Vaulton, Senior Researcher

In an era of information overload, figuring out what to do with data can feel a bit like drinking from a fire hydrant. Not only is the volume of data sometimes overwhelming, but information from different sources often seems to conflict. How often have teachers found that state test results don’t mesh with classroom assessment results? The end result can be confusion and paralysis. So, how can you move forward and find meaningful, actionable information in a sea of data? This is the first in a series of posts to help you figure out how to look at data without getting overwhelmed.

First, it is imperative to be clear about your questions. When we take information in without a sense of direction or purpose, it is easy to jump to the most obvious and sometimes misleading conclusions.  Then, we take premature action and become frustrated with a lack of meaningful change. To avoid this, work with your colleagues to identify the questions that matter most to your school. These questions should be aligned with state and district goals, but should also reflect the concerns and issues that are unique to your school and/or classroom. Once you are clear about the questions that matter most, then you can begin to figure out whether they can be answered with the information you have.

Second, be assured that you don’t need special skills or equipment to dive safely into data. You just need honest curiosity and a willingness to explore (knowing how to use Excel doesn’t hurt, but isn’t critical). Empower yourself to examine one source of data in-depth rather than trying to take in everything at once.  For instance, spending time with colleagues examining state ELA test results by item may lead to more actionable results than looking at a stack of different assessments all at once and comparing results. Digging deeply into a single source will allow you to explore which kinds of questions seem to trip up which students. What do these patterns say about student learning? What implications do they have for instruction?
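If you are comfortable with a little scripting, a few lines of code can do the item-by-item tallying for you. The sketch below is only an illustration, not a recommended tool: it assumes a hypothetical spreadsheet saved as item_results.csv with one row per student response and made-up column names (student, item, correct), so you would adapt it to however your own results are exported.

```python
# A minimal sketch of an item analysis, assuming a hypothetical file
# item_results.csv with columns: student, item, correct (1 = right, 0 = wrong).
import pandas as pd

responses = pd.read_csv("item_results.csv")

# Percent of students answering each item correctly, hardest items first.
item_difficulty = (
    responses.groupby("item")["correct"]
    .mean()
    .mul(100)
    .sort_values()
)
print("Hardest items (lowest percent correct):")
print(item_difficulty.head(10).round(1))

# Students who missed items that most of their classmates answered correctly.
easy_items = item_difficulty[item_difficulty > 80].index
struggling = (
    responses[responses["item"].isin(easy_items)]
    .groupby("student")["correct"]
    .mean()
    .sort_values()
)
print("\nStudents missing otherwise easy items:")
print(struggling.head(10).round(2))
```

The point of the script is the same as the point of the protocol: one data source, one narrow question (which items trip up which students), examined in depth.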


  • Helpful resource #1: When digging into data, it is much easier to visualize trends using graphics. Pie charts and bar graphs are easy to make in Excel and can convey a world of information that is impossible to absorb when looking at numbers in a table. (A scripted way to draw such a graph is sketched after this list.)

  • Helpful resource #2: If you don’t have a lot of tech capacity, think about using an online resource like Fiverr.com where people market their services for five dollars. Just don’t forget about student privacy if you are going to ask someone else to graph your data.
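If you have a bit of tech capacity in-house and would rather not send student data to an outside service, a short script can also draw the graph for you. The example below is just a sketch with made-up placeholder numbers; substitute the items and percent-correct values from your own analysis.

```python
# A minimal sketch of a bar graph of percent correct by item,
# using made-up placeholder data in place of real assessment results.
import matplotlib.pyplot as plt

items = ["Item 1", "Item 2", "Item 3", "Item 4", "Item 5"]
percent_correct = [92, 78, 64, 55, 41]  # hypothetical values

plt.figure(figsize=(6, 4))
plt.bar(items, percent_correct, color="steelblue")
plt.ylabel("Percent correct")
plt.ylim(0, 100)
plt.title("ELA item analysis (placeholder data)")
plt.tight_layout()
plt.savefig("item_analysis.png")  # saved as an image file so it can be shared with colleagues
```

A bar chart like this makes the trend visible at a glance, which is exactly what is hard to see in a table of raw numbers.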

The idea of data-driven decision making is not to try to understand everything going on at once. It is better to get a real answer to one narrow but meaningful question than a superficial answer to a dozen. By digging deeply into a single data source, we give ourselves room to think deeply and strategically. Stay tuned for next time, when we’ll talk more about the process of digging into data to inform instruction.

NCLB Reauthorization Proposal and What Really Works in Turning Schools Around?

by Charlene DiCalogero
Assistant Director, Federal Grants Programs

Two items in the January 18th EdWeek caught my eye. The first is an article about the ongoing attempts to reauthorize the Elementary and Secondary Education Act, most recently known as No Child Left Behind (NCLB), as well as the Obama Administration’s proposed waiver/modification plan for some of NCLB’s provisions even if a revised version of the law does not pass.

The article includes a helpful comparison chart of what is currently in the law, and some major differences (and similarities) between the two proposed bills as well as the White House’s alternative plan.

The second is the back-page commentary by Alan M. Blankstein and Pedro Noguera entitled “What Really Works in Turning Schools Around?” Professor Noguera is a highly respected and progressive professor of education, now at NYU and formerly at Harvard. Two quotes stood out:

“The first problem with the administration’s approach is that it specifies the remedy rather than beginning with an accurate diagnosis of the problem. Firing staff members or rewarding them based on performance assumes schools are failing because the staff is lazy or uninterested in improving. The actual problem is always more complicated.”

“There must be a clear and deliberate strategy for improving instruction. Professional development must be directly related to the skill areas where assessments show students are weakest. Professional development is effective when it is site-based, ongoing, and draws upon the expertise of the most effective teachers in the building. Creating a climate of collaboration among teachers is essential.”

The Observation Survey

by Diane Powell, Primary Literacy Trainer

I’ve been reading about how Dr. Marie Clay’s Observation Survey of Early Literacy Achievement has received the highest possible ratings for scientific rigor from the National Center on Response to Intervention (NCRTI). The ratings are intended to inform and assist educators as they select screening tools for RTI. There are several links from the Reading Recovery Council of North America website that I will include at the end of this post.

We have known for years about the quality of these instruments, which allow us to capture students’ current understandings and then use that information to plan our next instructional moves. All six tasks (Letter Identification, Word Reading, Concepts About Print, Writing Vocabulary, Hearing and Recording Sounds in Words, and Text Reading) capture authentic data that aligns with real-life classroom experiences rather than testing items of knowledge in isolation.

The National Center defines screening

as brief assessments that are valid, reliable, and evidence-based. They are conducted with all students or targeted groups of students to identify students who are at risk of academic failure and, therefore, likely to need additional or alternative forms of instruction to supplement the conventional general education approach

and now the tasks of An Observation Survey of Early Literacy Achievement can be used as one of the screening tools that schools involved in RTI can select. If you have trained Reading Recovery teachers on staff, they can work with other faculty to learn more about the tasks and avoid the additional expense of purchasing other screening and progress monitoring devices. Thank goodness someone has finally realized how powerful this work is.

View the NCRTI ratings for the Observation Survey.

Read the RRCNA press release [pdf].

Learn more about the Observation Survey.

Expanded Time for Teacher Professional Learning: Reading Recovery and Literacy Collaborative Align with Expectations in High Achieving Nations

By Michelle LaPointe, Researcher

Reading Recovery and Literacy Collaborative require many hours of professional development to learn these high-impact instructional strategies that help children learn to read. Beyond the time required in training to upgrade skills, both also require substantial teacher time to diagnose student needs, plan lessons that meet those needs, collect data to continuously understand student progress, and reflect on the data. Although districts and schools in the U.S. may balk at the amount of time spent NOT in direct contact with students, this amount of time is more common in schools in other higher-performing nations.

Teaching is a learning profession. Teachers must constantly assess and analyze student progress and understand the strategies that will best meet the needs of their students. Teachers need adequate time to plan and individualize lessons. In nations with high-performing education systems, teachers are given adequate time for ongoing professional learning and collaboration with peers around meeting student needs and improving classroom practice. Planning, reflection, documenting classroom work and daily student progress, and collaboration with peers to share challenges and solutions are all vital components of the professional practice of teachers. Each of these activities, even if not performed in the classroom or while in direct contact with individual students, is aligned with improved student outcomes and mastering challenging standards. The following table demonstrates the amount of professional learning time that is available to and expected of teachers in other parts of the world (*Note: if you are having trouble reading the table, please print the blog post):

Sources:

OECD (2010). Education at a Glance 2010: OECD Indicators. Paris: OECD.

*The New Teacher Induction Program: A Case Study on Its Effect on New Teachers and Their Mentors (2007)

± part of collective bargaining agreement created in 2005 and renewed in 2008. In Strong Performers and Successful Reformers in Education: Lessons from PISA for the United States (OECD, 2011, p. 74)

** Training and Development Agency for Schools

º In Finland, aspiring teachers work in schools affiliated with the university training program.  They learn instructional practice in classrooms, under the tutelage of exceptional teachers.

Data taken from Teacher and Leader Effectiveness in High Performing Nations (2011). Percentage calculated by author.