Upstream Thinking . . . How to know you’re succeeding?

“When faced with a difficult question, we often answer an easier one instead, often without realizing the substitution.”

~ Dan Heath

This year’s Blueprint Bulletin theme is Upstream Thinking for Systems Improvement.  The first Blueprint Bulletin opened our series with a focus on the three forces that push us downstream:  1) problem blindness, 2) lack of ownership, and 3) tunneling.  The second Blueprint Bulletin focused on the first of Dan Heath’s seven questions for upstream leaders, How to unite the right people?  The third Blueprint Bulletin supported our thinking and understanding around How to change the system?  The next Blueprint Bulletin continued with the third question, Where to find a point of leverage?  The last Blueprint Bulletin focused on How to get early warning of the problem?  This edition focuses on the fifth question, How to know you’re succeeding?, where Heath encourages you to “pre-game” your measures, use “paired” measures, and avoid “ghost victories”.

What might it mean to “pre-game” your measures, use “paired” measures, and avoid “ghost victories”?  Let’s start with the three types of “ghost victories”.  It’s not enough to take action; you need to know whether your actions are actually helping.  Even after you choose the right ways to measure success, you need to make sure that you are not experiencing “ghost victories”.

  1. The first kind of ghost victory reflects the expression “a rising tide lifts all boats.”  A simplified example in an educational setting:  the district celebrates increased student achievement, but it turns out that every district’s student achievement has increased as well because the statewide assessment tool has changed.  In other words, the measures show improvement, but the cause is something external to your efforts.
  2. The second kind of ghost victory is that you’ve succeeded on your short-term measures, but they didn’t align with your long-term mission.  When your short-term success has pulled you away from your main mission, you’ve ultimately failed.  For example, a short-term measure may be completing all of the teacher evaluations within a set timeframe.  Congratulations!  This measure is achieved . . . much reason to celebrate.  Unfortunately, the long-term goal of growing, supporting, and increasing teacher capacity hasn’t been met because actionable, targeted formative feedback wasn’t provided to the teachers.  If you don’t provide targeted feedback with actionable next steps, then how are you growing your teachers?
  3. The last kind of ghost victory is that your short-term measures became the mission in a way that undermined the work.  This is where “gaming the system” comes into play.  Perhaps you’ve heard stories where the data doesn’t really reflect reality . . . a district sets a goal to reduce dropouts by 20% . . . and instead of documenting a dropout as a dropout, the data is skewed to record the dropout as a “transfer”.  The district was “successful” in achieving its goal . . . stakeholders celebrated this victory, AND this victory really isn’t a victory at all.

The Blueprint Connections, Leader’s Corner, and Teacher’s Corner sections will confirm your thinking and give you some additional food for thought and potential actions.

Blueprint Connections

“Getting short-term measures right is frustratingly complex.  And it’s critical.  In fact, the only thing worse than contending with short-term measures is not having them at all.”

~ Dan Heath

In past editions of the Blueprint Bulletin, you have become familiar with Chicago Public Schools (CPS), whose graduation rate was 52% in 1997.  By shifting to upstream thinking, eliminating problem blindness, taking ownership of the problem, and putting systems in place, CPS increased its graduation rate to 78% by 2018.  You might be asking, “So what does this have to do with short-term and long-term measures?”  So glad you asked!  CPS’s leaders focused on and cared about reducing the dropout rate . . . this was their ultimate goal (long-term measure).  Unfortunately, they couldn’t wait four years to see whether the strategies they put in place were actually working.  CPS used short-term measures and interim targets to monitor progress, which guided their work and allowed them a chance to adapt.  Putting this into perspective, connections can be made to Michigan’s Integrated Continuous Improvement Process and its use of improvement cycles to monitor.  Let’s break this thinking down . . .

  • Long-term measure/outcome:  Decreasing dropout rate
  • Interim targets/measures:
    • a freshman’s completion of five full-year course credits
    • a freshman is not failing more than one semester of a core course, such as math or English
  • Short-term measures/outcomes:
    • Weekly attendance and grades

Using a systems approach that focuses on the interconnectedness of systems, processes, and people, CPS created the Freshman On-Track system.  CPS knew that freshmen who were on track by these interim targets at the end of their freshman year had an 81 percent chance of graduating.  In other words, they were 3½ times more likely to graduate than students who were off track.
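Heath’s relative-likelihood figure can be checked with quick arithmetic: if 81 percent of on-track freshmen graduate and that is 3½ times the likelihood for off-track freshmen, the implied off-track graduation rate is roughly 23 percent.  A minimal sketch (the off-track rate below is an inference from those two figures, not a number reported in the bulletin):

```python
# Quick check of the Freshman On-Track arithmetic from the CPS example.
# The 81% on-track graduation rate and the "3 1/2 times more likely"
# claim come from the bulletin; the off-track rate is inferred from them.

on_track_grad_rate = 0.81    # stated: on-track freshmen who graduate
relative_likelihood = 3.5    # stated: "3 1/2 times more likely"

implied_off_track_rate = on_track_grad_rate / relative_likelihood
print(f"Implied off-track graduation rate: {implied_off_track_rate:.0%}")
# prints "Implied off-track graduation rate: 23%"
```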

How might you use the improvement cycle IPOF (input, process, output, feedback) iterative model to monitor the short-term measures/outcomes that will ultimately lead the district to meeting its interim and long-term measures?  As you reflect on what this IPOF might look like for your district, here are a few considerations:

Feedback – Based on data collected, adjustments will be made to input and/or process.  
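The feedback consideration above can be sketched as a simple iterative loop: each cycle collects output data, and a feedback rule adjusts the inputs before the next cycle.  This is an illustrative sketch only; the attendance numbers, the tutoring-hours input, and the 95% target are hypothetical examples, not part of Michigan’s framework:

```python
# Illustrative sketch of an IPOF (input, process, output, feedback) cycle
# applied to a short-term measure such as weekly attendance.
# All metric names, numbers, and thresholds here are hypothetical.

def ipof_cycle(inputs, process, feedback_rule, weeks):
    """Run an iterative improvement cycle, adjusting inputs from feedback."""
    for week in range(weeks):
        output = process(inputs)                 # output: weekly data collected
        inputs = feedback_rule(inputs, output)   # feedback: adjust the inputs
    return inputs

# Hypothetical weekly attendance rates observed over a four-week cycle.
attendance_by_week = iter([0.88, 0.90, 0.93, 0.95])

def collect_attendance(inputs):
    return next(attendance_by_week)

def adjust(inputs, attendance):
    # Feedback rule: add a tutoring hour while attendance misses the target.
    if attendance < 0.95:                        # short-term target (hypothetical)
        inputs = {**inputs, "tutoring_hours": inputs["tutoring_hours"] + 1}
    return inputs

final = ipof_cycle({"tutoring_hours": 2}, collect_attendance, adjust, weeks=4)
print(final)  # tutoring hours were raised in each week attendance lagged
```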

Heath states, “Getting short-term measures right is frustratingly complex.  And it’s critical.  In fact, the only thing worse than contending with short-term measures is not having them at all.”  For CPS, the ultimate goal was to decrease dropout rates and increase graduation rates.  By breaking strategies down into manageable chunks and using short-term measures that could be monitored on a short cycle, CPS was able to get closer to meeting its goal.

The Leader’s Corner and Teacher’s Corner sections will support you with deeper thinking around how to know you’re succeeding by “pre-gaming” your measures, using “paired” measures, and avoiding “ghost victories”.

Leader’s Corner

“What counts as success?  With downstream work, success can be wonderfully tangible, and that’s partly because it involves restoration.  Downstream efforts restore the previous state. . . . But with upstream efforts, success is not always self-evident.”

~ Dan Heath

Instructional Leadership Routines are specific to leading instructional improvement at the building level.  In a district using the Blueprint framework, building leaders are able to reorganize their time and rely on established systems so that their focus can be on teaching and learning rather than on putting out fires and managerial tasks.  Two of the many practices include 1) routinely observing instruction and providing teachers with feedback, support, and coaching, and 2) monitoring teaching and learning.  As you reflect on “How do you know you’re succeeding?”, you will want to consider “pre-gaming” your measures, using “paired” measures, and avoiding the three types of “ghost victories”.  What might that look and sound like for you?  Let’s start with examples of ghost victories . . .

  1. As mentioned above, the first kind of ghost victory occurs when your measures show that you’re succeeding, but you’ve mistakenly attributed that success to your own work.  Suppose staff received and began to apply professional learning on student engagement strategies in a virtual learning environment.  Teachers (and leaders) began to see changes in climate and culture as well as student engagement.  This overall positive change really had little to do with your interactions with teachers through observations, support, and coaching.
  2. The second kind of ghost victory is that you’ve succeeded on your short-term measures, but they didn’t align with your long-term mission. The short-term measure was to complete “X” number of observations each week and to provide actionable feedback to teachers.  You may have completed twice the expected observations, yet there is no change in instructional practice.  
  3. How might “gaming the system”, the last kind of ghost victory, come into play?  This is when your short-term measures become the mission in a way that undermines the work.  Perhaps you’re hitting the weekly short-term measure of observing instruction and providing teachers with feedback; however, the time is spent in the classrooms of teachers who are already “hitting it out of the park”.  On paper, you’re meeting your short-term measure, but the long-term mission is far out of reach.

Those examples are not a glamorous glimpse of an instructional leader’s possible outcomes.  Yet each of us may have experienced such ghost victories in our profession.  How do you avoid these ghost victories?  You are absolutely right . . . by being upstream leaders and thinking through pre-gaming and paired measures, we can be proactive in avoiding ghost victories.

Using paired measures means that there’s a balance between quality and quantity.  For example, CPS paired a quantity metric (the number of students graduating) with quality metrics (ACT scores and AP class enrollments).  The district was assured that its data represented the “truth”, and ghost victories were not an issue.  By pre-gaming, upstream leaders carefully consider and anticipate how short-term measures might be misused.  According to Heath, upstream leaders ask questions such as, “What else might explain success other than our own efforts?”, “If someone wanted to succeed on these measures with the least effort possible, what would they do?”, and “Imagine that years from now, we have succeeded brilliantly according to our short-term measures, yet we have actually undermined our long-term mission . . . what happened?”  As it relates to the example above, how might reflecting on these questions, as well as using paired measures, affect the overall outcome of our long-term goal?  What might you need or want to be intentional about as you and your building and district networks think through short- and long-term measures/targets as they relate to your overall goals?
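The paired-measures idea above can be expressed as a simple rule: a gain on the quantity metric only counts as success when its companion quality metric holds steady or improves.  A minimal sketch, where the specific numbers are hypothetical and only loosely modeled on the CPS example:

```python
# Minimal sketch of Heath's "paired measures": pair each quantity metric
# with a quality metric so gains in one can't quietly hide losses in the
# other.  The metric values below are hypothetical illustrations.

def paired_success(quantity_now, quantity_before, quality_now, quality_before):
    """Success only if quantity improved AND quality did not decline."""
    return quantity_now > quantity_before and quality_now >= quality_before

# Graduation rate up, average ACT score steady -> genuine progress
print(paired_success(0.78, 0.52, 20.1, 20.1))   # True

# Graduation rate up, average ACT score down -> possible ghost victory
print(paired_success(0.78, 0.52, 18.0, 20.1))   # False
```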

Teacher’s Corner

“Choosing the wrong short-term measures can doom upstream work.  The truth is, though, that short-term measures are indispensable.  They are critical navigational aids.”

~ Dan Heath

The phrase, “short-term measures are indispensable,” brings to mind the vital role that the formative assessment process plays in the learning environment.  According to the Michigan Assessment Consortium, “formative assessment is a planned, ongoing process used by all students and teachers during learning and teaching to elicit and use evidence of student learning to improve student understanding of intended disciplinary learning outcomes and support students to become more self-directed learners” (CCSSO SCASS, 2017).

The goal of formative assessment is to monitor student learning.  Formative assessment is a type of diagnostic check-in that allows a teacher to adjust instruction to support students’ growth.   By understanding exactly what students know before, during, and after instruction, educators have much more power to improve student learning.  Formative assessment gives teachers critical information so that they, in turn, can provide students with timely and action-oriented feedback.  

John Hattie’s analysis of influences related to learning outcomes lists feedback as having a 0.70 effect size on student achievement.  Knowing that the average effect size of all the interventions he studied is 0.40, an effect size of 0.70 is substantial.  Hattie states the following in his book Visible Learning:

“Feedback is a compelling influence on learner achievement.  When teachers seek, or at least are open to, what learners know, what they understand, where they make errors, when they have misconceptions, when they are not engaged – then teaching and learning can be synchronised [sic] and powerful.  Feedback to teachers makes learning visible” (p. 173).

Formative assessment practices provide many opportunities for feedback.  When teachers come together to create common assessments, they increase the likelihood that students will have access to the same curriculum, take assessments of the same rigor, and have their work judged according to the same criteria.  Think about how teacher collaborative routines efficiently allow your team to work together to determine the best methods to assess student learning.  Collaboratively deepening your knowledge of what and how students are learning allows you to capitalize on the expertise of your peers. For improvement in student learning to occur, data needs to be analyzed from assessments. The data analysis helps drive the instructional practices that best meet the needs of the students.

Dylan Wiliam describes assessment as “the bridge between teaching and learning.”  Formative assessment is integral to students’ learning, and its application is continuous.

These assessments give both teachers and students feedback, so that teaching and learning can be improved.  As stated in the opening quote, “. . . short-term measures are indispensable. They are critical navigational aids.”  The formative assessment process serves as a navigational aid for both teachers and students.

Timely Topics

Our next round of online learning begins June 16 and runs through August 11.  Please browse the professional learning calendar for courses that may interest you and/or address your district’s goals. To register, click Events Registration.  

Check out this video to learn more about our 2021 Virtual Leadership Team Institute!


“Do not judge me by my successes, judge me by how many times I fell down and got back up again.”

~ Nelson Mandela

The Lean Thinker – Thoughts and Insights from the Shop Floor blog, in a March 3, 2020 post, digs into “Management by Measurement = Ghost Victories”.  As you read, you are encouraged to reflect on your own experiences and context.  Mark, the blogger, makes connections to Heath’s ghost victories by sharing real-world examples and raising considerations for his readers.  How do YOU know you’re succeeding?  By being proactive, your short- and long-term measures and actual output will tell the true story.

Bevan, G. (2006). What’s measured is what matters: Targets and gaming in the English public health care system. Public Administration, 84(3), 517.

Hattie, J. (2009). Visible Learning. Routledge.

Heath, D. (2020). Upstream: How to Solve Problems Before They Happen. Transworld Publishers.

Michigan Assessment Consortium Assessment Learning Network. (2017, September). What do we mean by formative assessment? In ALN Learning Points. Retrieved from

Phillips, E. K. (2019). The Make-or-Break Year: Solving the Dropout Crisis One Ninth Grader at a Time. The New Press.

Wiliam, D. (2010). The role of formative assessment in effective learning environments. In H. Dumont, D. Istance, & F. Benavides (Eds.), The Nature of Learning: Using Research to Inspire Practice (p. 137). OECD Publishing.

Contact Us
Our goal is to meet your needs. If there’s specific information that you believe you and others would benefit from learning about, please communicate those wishes to your SWFT facilitator or email Lynn Batchelder, Coordinator of Professional Learning,
