By Arielle Wang
At the end of 2025, SASPD announced that grades will now come with descriptors, moving away from the precise numerical system students and teachers had long relied on. For those used to calculating down to two decimal places, the change quickly spread through hallway conversations… and, at times, sparked debate.
Grades have always carried enormous weight at SAS, and now the school suggests they should carry “clearer and more specific” meaning. But will adding words like “meeting” or “approaching” actually make performance easier to understand, or will it just complicate a long-established system?
In this interview, we sit down with our high school Principal, Mr. Ivan Velasco, to explore what the change really means for students and teachers alike.
Q1: Can you introduce the new rubric and what students and teachers will notice most immediately?
The main change is the addition of grade descriptors alongside grades. Instead of relying only on numbers and letters, we are clarifying what grades actually mean.
Rather than saying, “You got 3 right out of 5, so that’s 60%,” we want to describe what that represents in terms of learning. Students may start seeing letter grades and performance descriptors instead of numerical grades.
One major shift is moving away from the idea that we can measure performance precisely on a 100-point scale. Instead, we’re focusing on broader categories of achievement.
Q2: What do those categories look like?
We are still using letter grades; they are simply now paired with descriptors such as approaching, meeting, or exceeding.
Departments will further define these based on subject-specific criteria, so that students understand what each performance bracket looks like in a specific context.
Q3: What were the main limitations of the old rubric, and how does the new one improve accuracy?
The issue with percentage-based grading is the illusion of precision. It suggests we can meaningfully distinguish between, say, an 84 and an 85. In reality, we can’t.
The new rubric aims to create broader categories that allow for larger margins of error in representing a student’s level of understanding. As an example, a grading system that was purely pass/fail would be extremely broad but very accurate. Fine numerical distinctions create narrow and often misleading differences, while broader categories better reflect actual performance.
Q4: In STEM subjects, where answers are often right or wrong, how does the rubric distinguish levels of mastery?
Even in STEM, the standards aren’t only about whether you can accurately perform one skill. They’re about learning how to think like a scientist or a mathematician. They’re about justifying your answers. They’re about communicating in different ways. And so I think there’s a complexity to assessment that sometimes we want to narrow down to “If it’s right, you get a point. If it’s wrong, you don’t.”
In the end, it comes down to assessment design. Teachers decide whether to use tests, projects, or presentations, and how to evaluate them. Traditional grading strategies like deducting points for small errors don’t always reflect what a student truly understands.
So I’m not sure I fully answered the question, but the idea is that assessment design makes a big difference, and that’s something else we’re working on that you might not see as part of the work on grade descriptors.
Q5: Why introduce rubric changes before fully redesigning assessments?
This is a long-term process. We are not changing the style of assessments in every class. I think a variety of assessment types is really important.
In AP or IB classes, we are also trying to make sure that students have enough practice with the types of questions they will face on the external examination. So it doesn’t mean that we will create assessments that look completely different from the external tests. Our goal is to make sure we are preparing students, not only in terms of knowledge but also skills.
Q6: Some humanities teachers say the difference between meeting and exceeding is unclear, which may make grading less transparent. How is that being addressed?
I think that’s part of the work we need to continue. A rubric needs to have details. However, we can’t produce a rubric for every department and every quiz, test, or project they give; it’s up to individual teachers to build detailed rubrics for each assessment.
Administration provides general guidelines, but teachers define what each level looks like in their specific assignments. It’s their responsibility to clearly communicate expectations to students before each assessment.
Q7: If some teachers are still unclear on the rubric, how will the school ensure consistency?
That’s part of the work we’re doing this semester and in the upcoming year. The conversations we’re having in faculty meetings and department meetings are helping that clarity come about and making sure that teachers have a full understanding.
Many teachers are already familiar with standards-based grading, but others may need a bit more support and space. We conducted a self-assessment to identify where teachers need additional training, and we are providing professional learning accordingly.
Q8: How will this work in AP or IB courses where scores are converted into school grades?
We have developed a system that converts external AP and IB scores into our grading system in a structured and consistent way. That process is still being finalized with teacher input.
Q9: Will past assessments be regraded under the new system?
No. Nothing from first semester or past years will be revisited or regraded.
Q10: Who was involved in designing the new rubric?
We presented this in our conversations with the head of educational programs, the associate director of educational programs, our colleagues at the Puxi high school campus, our administration, and our instructional coaches.
We first brought it to our heads of department and teachers, shared it with them, and got some general feedback. The rubric has continued to evolve with the feedback we’ve received.
Q11: To what extent did parent concerns factor into the decision?
I think parents highlighted the discrepancies between our academic profile and Puxi’s, which prompted us to take a closer look. So I guess, indirectly, that has prompted us to do this. But it’s not that parents have been putting direct pressure on us. It’s more that parents have given us their feedback, and their input has really helped us get deeper into this conversation.
Q12: How much flexibility do teachers have in interpreting or adapting the rubric, and how would you prevent that flexibility from turning into inconsistency?
We have asked departments to take a look at it and make sure it’s adapted to their needs. Teachers already have a fair bit of flexibility in designing assessments and in how they interpret the standards. This system actually aims to reduce inconsistency by providing clearer definitions of what grades mean, so it should result in less variability between teachers.
Q13: If the rubric ends up producing unintended issues, what mechanisms are in place to revise it?
It’s hard to imagine that it would lead to more inconsistency when we’re giving teachers more guidance and more common language. But, of course, if we determine that it’s inconsistent or causing the opposite effect of what we intend, we will have to take a closer look, see what’s causing the problem, and decide what we need to adapt or adjust to fix it.
Q14: How will students be taught to understand and use the new rubric?
I guess my question back is: what did you have to go on before you had these grade descriptors? How subtle were the differences between grades? It’s a learning process for both teachers and students.
With time, we will see the results of descriptor-based grading. In a fast-paced era, it’s crucial for everyone—educators, students, and parents—to approach change with an open and daring mind. This is not the first time SASPD has gone through significant structural changes, nor will it be the last. But regardless of how this trial semester unfolds, it’s important to remember that our conversation about how we measure learning is far from over.
*Many thanks to Mr. Velasco for his time and insightful responses.