Luvreet Sangha Classroom Case Studies: Standards-Based Grading 2
Friday, June 13, 1:00 — 2:00 PM EDT
In this talk, I will discuss how I have iterated on my standards-based grading (SBG) courses since I first adopted the approach in 2021. I have implemented SBG in 10 different classes (each with fewer than 40 students) at Santa Clara University and UC Berkeley. I will describe the mistakes I made in early implementations (e.g., too many learning objectives, too many reassessments) and how I addressed them. I will share strategies to
(1) avoid my mistakes
(2) have a successful first implementation
(3) create a sustainable workload
(4) iterate on your alternatively graded classes
Casey Sherman Classroom Case Studies: Specifications Grading in Math
Wednesday, June 11, 5:00 — 6:00 PM EDT
Over seven consecutive years, I taught seventeen sections of our Discrete Mathematics course, each with approximately 20 students. In the first three years (seven sections), I used specifications grading: every question (homework and assessment) was graded on an EMRN scale, and breakpoints were provided for each version to earn various grades in the course. From the fourth year on, I switched to a hybrid system in which each question was still graded on an EMRN scale, but the homework and preparation assignments were scored as a limited percentage of the course grade and each assessment question was assigned a point value. This switch was made both to improve the clarity and acceptance of the system and to "fix" grades where students landed in awkward places under the previous rubric. This shift toward a slightly more traditional approach resulted in comparable final grade distributions while reducing student resistance to the grading system. I will present my reflections on the changes, explore the key decision points in these grading methods, and discuss how they can be applied to similar courses.
Ruth Lamprecht, Jonathan McCurdy Research: Standards-Based Grading (and other models) in Context
Friday, June 13, 2:30 — 3:30 PM EDT
This talk presents the results of a literature review on the application of standards-based assessments (SBA), with a focus on technology courses. The authors have implemented SBA in a collection of computer science, data science, and cybersecurity courses for undergraduate students. As part of the methodological approach to using SBA, a literature review was conducted on alternative grading methods. Published works related to research in computer science education, general alternative grading practices, and specifications grading were analyzed to gather insights into how best to utilize these assessment options, specifically in technology classes. While there is a wealth of research on the broad use of alternative grading, with a significant focus on STEM classes, little was found that focused solely on computer science, data science, and cybersecurity. These areas are distinctive in their use of technology: students are expected to work directly with the technology, generally in environments with easy access to solutions to the problems presented. As artificial intelligence becomes more capable and more widespread, this talk emphasizes the need for more research and published resources on how alternative grading can be used to improve student learning in technology courses in a fair and reliable manner.
Serene Rodrigues, Marina Milner-Bolotin, Firas Moosvi Research: Qualitative Work on Labor-Based Grading and Other Models
Wednesday, June 11, 5:00 — 6:00 PM EDT
Grades have shaped the student experience for over a hundred years, appearing on homework assignments, exams, and transcripts. Growing concerns about their effects on learning highlight the need for grading reform in higher education. Alternative grading (AG) offers a promising solution by emphasizing formative assessment practices, including feedback, reassessment, and clearly defined rubrics [Townsley]. However, there is a lack of research on instructor perspectives on AG, and existing research is largely based on specific AG implementations.
In this qualitative research study, we used a phenomenographic approach [Akerlind_2025] to explore how university instructors experience and implement AG practices. Fifteen faculty members across disciplines at the University of British Columbia were interviewed about their AG practices, motivations, and perceived benefits and challenges. Using a pre-interview survey, we gathered contextual information on faculty rank, department, class size and course level. Additionally, we solicited course syllabi from instructors to explore how instructors at UBC structured their AG courses.
The data was analyzed using thematic analysis and examined through the lens of emancipatory pedagogy and constructivist ways of teaching [Clark]. Preliminary results revealed diverse approaches to AG, each tailored to instructors' specific courses. While participants reported increased student engagement and a decrease in student stress, they also noted challenges such as increased workload and institutional constraints. We aim to provide valuable insights for educators considering the implementation of alternative grading (AG), making it more accessible while also advocating for further experimentation. In addition, we intend to publish the interview and analysis protocol so other institutions can conduct similar studies. By building on these findings, we aim to support the development of more standardized, evidence-based practices in higher education.
Sarah Riddick, Melissa Kagen, Courtney Kurlanska, Gbetonmasse Somasse, Carly Thorp Panel
Friday, June 13, 2:30 — 3:30 PM EDT
This panel presents an interdisciplinary group of instructors who are experimenting with alternative grading methods at a STEM-focused institute that uses a quarter system with 7-week courses, requires no prerequisites, and does not assign grades lower than a C. Panelists’ grading approaches use a combination of specifications-, standards-, and portfolio-based assessment across a variety of undergraduate courses and research projects. Panelists will speak to the challenges of motivating students in both STEM and non-STEM disciplines in courses that use alternative grading. This panel is relevant to instructors who are interested in experimenting with alternative forms of grading in their classes and who, more broadly, are interested in discussing how variations in student motivation across disciplines can affect alternative grading’s effectiveness.
Panelists will reflect on the following questions by sharing specific examples and experiences from their respective courses and disciplines:
“Why are you using alternative assessment?” Panelists will discuss their teaching motivations and values, including from the perspectives of their different disciplines.
“How are you using alternative assessment?” Panelists will each provide a specific example of how they are using alternative assessment in one of their courses.
“How has your particular alternative grading structure motivated (or failed to motivate) the students you teach?” Panelists will share their successes and challenges (e.g., designing effective assignments, rubrics, and systems; managing logistics; responding to unexpected student events and/or extenuating circumstances).
“What are your concerns about student motivation, from the perspective of your discipline in the context of a STEM-focused institute?”
Panelists: Sarah Riddick (Chair), Rhetoric & Writing; Melissa Kagen, Interactive Media & Game Design; Courtney Kurlanska, Global Studies; Gbetonmasse Somasse, Economics; Carly Thorp, Statistics.
Sophia D'Agostino Classroom Case Studies: Alternative Grading in STEM
Thursday, June 12, 1:00 — 2:00 PM EDT
Traditional exams often lead to student stress, disengagement, and prioritizing of memorization over meaningful understanding and application. In response, I developed a case study-based assessment model within my hybrid biology courses at CSU, designed to foster deeper comprehension, reduce anxiety, and provide students with real-world problem-solving skills. While implemented in a STEM course, this structured yet flexible framework is applicable across disciplines.
This talk will explore an alternative grading structure that guides students through five key phases: Learn, Assess, Question, Apply, and Evaluate. Students engage with multi-modal resources, complete low-stakes participation assessments (with unlimited attempts), collaboratively pose and answer conceptual questions, and apply their knowledge through hands-on projects and case studies. By shifting away from traditional exams, this model enhances student engagement and confidence while maintaining academic rigor.
Student feedback strongly supports this approach: "Traditional exams stress me out—this format actually makes me think!" and "I remember what I learned instead of cramming and forgetting after a test." Many students report greater conceptual retention, lower test anxiety, and improved critical thinking skills. However, implementation requires careful design, instructor investment, and is easier to manage in smaller classes — challenges that will be discussed in this session.
Although my experience comes from a biology classroom, I believe this model is adaptable across disciplines. Whether in a literature seminar, history course, or engineering class, instructors can replace high-stakes exams with structured, real-world assessments that promote critical thinking and deeper learning. The goal is for attendees to leave with tangible, scalable ideas to implement alternative grading strategies in their own classrooms regardless of the class they support.
Laura Fox Classroom Case Studies: Specifications Grading
Thursday, June 12, 2:30 — 3:30 PM EDT
Retention of evidence-based medicine (EBM) decision-making skills is essential for success in coursework and assessments throughout the pharmacy curriculum. However, traditional assessment methods often fail to support long-term learning. In response to persistently low performance on EBM evaluations, a PharmD biostatistics course was redesigned using a combination of specifications grading for coursework and standards-based grading for exams. The new structure emphasized frequent practice, self-assessment, and reassessment, shifting the focus from point accumulation to mastery learning.
This quantitative study evaluates the impact of the grading schema on retention of learning gains by comparing performance on standardized assessments across multiple years before and after course redesign. While students showed no difference in item scores during the course, they demonstrated substantial gains when assessed five weeks later in a follow-up course, with correct response rates increasing by an average of 19 percentage points across learning outcomes. The greatest improvements were seen in calculating number needed to treat (+38 points) and determining the appropriate statistical test (+35 points). Gains were sustained at fifteen weeks on a high-stakes competency exam, with average retention increasing by 15 points post-redesign, including a 31-point improvement for number needed to treat.
Student perceptions of the grading model were highly positive in 2024, with 93% agreeing that the system provided clear expectations, fair goals, and increased control over their learning. Additionally, 100% of students felt the grading system afforded them autonomy.
This session will discuss the impact of grading reform on course design, coordination across sequential coursework, and retention of learning gains. Practical considerations for communicating course redesign and managing reassessment logistics, as well as their influence on student perceptions, will also be explored.
Kelley Sullivan Classroom Case Studies: Standards-Based Grading 2
Friday, June 13, 1:00 — 2:00 PM EDT
In this talk I describe a standards-based grading system that I use in a small class of first-semester physics majors at a mid-sized comprehensive college.
The course structure provides students with opportunities for preparation through “pre-flight” reading and quizzes, participation through a variety of in-class activities, and practice through weekly homework sets. Work geared toward learning is either not included in the grade, counted for completion only, or rolled into an engagement credit system.
Students earn engagement credits by choosing from a menu of activities that support their learning including attending class, completing pre-flights, seeking help outside of class, completing optional homework problems and post-lab activities, and engaging in metacognitive reflection. Anonymous survey responses reveal that the engagement credit system encouraged a majority of students to complete the pre-flights and attend class. A smaller number of students reported that the credits pushed them to seek help or complete extra practice. A variety of credit options is key to supporting diverse student needs.
Students demonstrate their understanding of each content standard two times, first on a quiz covering a single standard and second on an in-class midterm covering multiple standards. Quizzes are marked as satisfactory or revise. Exam standards are marked as Exemplary, Solid, Progressing, or Not Yet. Students can revise or reassess standards to improve their marks. Anonymous survey responses reveal that students find the quizzes useful for assessing their understanding of the material and the revisions helpful for improving their understanding. A clear system for assigning and collecting revisions is essential for timely student reflection.
This system works well for a small class and promotes content understanding and the development of supportive learning habits. Some practices are scalable and can be applied to other disciplines.
Silvia Vong Special Topics: Liberation Education and Pedagogical Dissonance
Thursday, June 12, 1:00 — 2:00 PM EDT
One of the many challenges that may emerge for faculty who take a Critical Race Feminist approach to teaching in neoliberal institutions is pedagogical dissonance: the disconnect between one's educational praxis and one's identified praxis, which results in feelings of tension or internal conflict. Through a critical exploration of Critical Race Feminist pedagogy, this session aims to identify how dominant grading practices create pedagogical dissonance that can pose challenges for the teacher and the students in classrooms that apply Critical Race Feminist approaches (e.g., use of storytelling, valuing lived experiences) to learning. This approach is rooted in centering student concerns, their own meanings, and finding their voice (Maher & Tetreault, 2001), which does not align with the dominant grading approach of assigning numbers and values. What is more challenging is finding a way to honor the personal experiences and insights of a student with an ethic of care, recognizing that assessment of human experience can itself be a kind of harm (e.g., gaslighting). This session will unpack ways to address these concerns that value and center human experience, and will reflect on how best to engage with grading to minimize pedagogical dissonance.
References:
Maher, F. A., & Tetreault, M. K. T. (2001). The feminist classroom: Dynamics of gender, race, and privilege (Expanded ed.). Rowman & Littlefield.
Misawa, M., & Johnson-Bailey, J. (2004). Examining feminist pedagogy from the perspective of transformative learning: Do race and gender matter in feminist classrooms? International Journal for Talent Development and Creativity, 12(1), 123-135.
Christina Michaud, Amy Bennett-Zendzian Workshop
Friday, June 13, 1:00 — 2:00 PM EDT
Walk with us through an intersectional process of identifying your particular challenges in responding effectively to students’ writing (emotional labor? disabilities? time? institutional constraints?). Reflect with peers and explore creative approaches to feedback and/or grading. Together, we will brainstorm strategies for maximizing your use of (un)grading conferences, and more. Our audience is instructors of writing-intensive courses, primarily though not exclusively in the humanities and social sciences.
Though we love reading students’ writing, we acknowledge the emotional and physical challenges of assessing students’ work. When we feel crushed under the labor of feedback and grading, our capacity for flexibility and generosity is reduced. How can this burden ease? We approach this workshop as instructors with decades of experience in diverse classrooms. Our particular situations led us to rely heavily on conferencing for offering feedback to students: not simply draft conferences for formative feedback, but also final conferences for summative feedback or grades. We guide participants to consider their particular situations, preferences, biases, limitations (physical, emotional, and/or institutional), and goals in order to help them refine, or redefine, their own feedback processes.
Our workshop is interactive. We start with a Mentimeter to elicit frustrations of assessing writing. Discussion centers participants’ own experiences and identities, leading to a reflective exercise to consider moments of joy in their own teaching, and their own values as teachers. Next, we explore the frameworks of critical validity inquiry (Perry, 2012) and queer validity inquiry (West-Puckett, Caswell, & Banks, 2024) as possible ways to reconsider our approach to feedback, reconceptualizing failure both on the part of student writers and ourselves as assessors. We spotlight the use of ungrading conferences, and briefly introduce research on the value of dialogic feedback.
Josh Mannix Classroom Case Studies: Specifications Grading in Math
Wednesday, June 11, 5:00 — 6:00 PM EDT
MATH 310 is a math education course for elementary education and middle school math teaching majors at Ball State University. This course focuses on developing pre-service teachers’ conceptions of algebraic reasoning and how it manifests in the lower grades. In a world where algebra is often thought of as mathematics reserved for high school and beyond, this course can cause students to feel stress and anxiety, particularly as we work to change their ideas of what algebra is. In the course, we study topics like symbols and variables, patterns and generalization, expressions and equations, and functional reasoning. Students read articles, participate in discussions, complete tasks, and write lesson plans associated with each of these topics.
When I designed this class, I knew that non-traditional grading practices would fit well with the discussion-based nature of the course. I chose specifications grading because it added a level of organization and clarity that was missing from a previous attempt to use non-traditional grading practices. In my version of specs grading, assignments earn a mark of complete or incomplete, and final grades are based on specified criteria. I decided what types of tasks I wanted to use in the class, determined what sort of criteria I would use to determine whether assignments were complete or incomplete, and created a table where students could see what they needed to do to earn certain letter grades in the course. Students responded well to this system, saying they felt less stress in this class because of the grading system. Some students said they could focus more on the content because they weren’t worried about points. I am still working on how to assess tasks for completeness while still maintaining a high level of rigor. I would recommend other instructors consider specifications grading for courses that require a significant amount of reading and discussion and where students might report feeling high levels of stress.
Rachael Hannah, DeVaun Baker, Christopher Brown, Rhiannon Glover, Nader Munye Classroom Case Studies: Labor-based and Contract Grading
Wednesday, June 11, 2:00 — 3:00 PM EDT
In three upper-level physiology courses, students shaped their own progress through contract-based grading, centering their voices in teaching and learning. This talk explains the design, implementation, and outcomes, guided by educational change theory (Chang et al., 2023) and ungrading research (Hackerson et al., 2024). Traditional grading undermines equity by emphasizing grades over understanding (Inoue, 2019). Following bell hooks (1994), I replaced conventional assessments with flexible contracts that encourage engagement and collaboration. Applying change theory (Chang et al., 2023), I identified barriers, co-created student-developed contracts, and aligned personal goals with course outcomes. Learners chose from multiple assessment paths and refined work through iterative peer- and self-assessment, aided by reflections allowing timely strategy updates. Surveys and peer reviews revealed increased motivation, stronger metacognition, and deeper conceptual integration. Students appreciated the freedom to showcase mastery, noting peer evaluation fostered community—central to hooks (1994). Transparent communication, reflective feedback, and scaffolding sustained momentum; rubrics and structured support anchored the process. Students will share perspectives on timely feedback, clarity of expectations, and summative reflections that show a shift to seeing grades as a growth tool. Overall, contract-based grading—rooted in change theory—centered student voices, encouraged deeper learning, and cultivated an inclusive culture. Attendees will also receive strategies for implementation, including rubric design, peer feedback, and sustaining buy-in.
Jacob Adler Panel
Wednesday, June 11, 3:30 — 4:30 PM EDT
In large-enrollment courses, students are often divided into smaller sections to complete specific tasks, whether problem solving, written assessments and discussions, or laboratories. Frequently, these small-enrollment sections are taught by graduate teaching assistants. In the alternative grading classroom, graduate teaching assistants may require specific training to be best prepared for the alternative grading course structure. There have been reports on instructors’ and students’ perspectives on alternative grading course structures; however, it is important that we also hear from the graduate teaching assistants who fill these key implementation roles. In this session, we present a panel of graduate teaching assistants who have taught in alternative grading courses. Prompts will be presented concerning graduate teaching assistants’ thoughts on alternative grading course design. We will discuss the importance, for graduate teaching assistants, of clear, well-defined standards; mentorship; graduate student and student buy-in; and holistic-style rubrics with clear, pre-prepared language. Audience participation is encouraged for a question-and-answer session with our panel.
Jennifer Fishovitz Classroom Case Studies: Hybrid and Specialized Grading Models
Friday, June 13, 2:30 — 3:30 PM EDT
In this talk, I will discuss how I implemented a specifications/contract hybrid grading system in an upper-division advanced biochemistry course. One goal of this course is to have students practice being scientists: reading and analyzing literature, sharing results with others, and pursuing topics they are interested in learning more about. Traditional grading made it difficult to balance this goal with equitable grading. Recently, I have shifted to a grading system that is more focused on increasing intrinsic motivation of students and providing them with greater autonomy and choice than the traditional grading system. In this alternative grading system, students are guaranteed a “B” in the course if they satisfactorily complete a set of core assignments, with opportunities for no-penalty revisions. Higher grades can be earned by completing “level-up” activities that are evaluated based on effort and reflection. While the core assignments focus on their understanding of biochemistry, the level-up activities encourage them to take risks and learn new things that will benefit them in their professions as scientists or doctors and in their lives as citizens of a global community. I will share feedback from the students in the form of written reflections and survey data about factors of intrinsic motivation and autonomy and lessons learned for using this type of grading system in future iterations of this course and others.
Robert Erdmann Special Topics: Equity
Wednesday, June 11, 5:00 — 6:00 PM EDT
In higher education, the practice of allowing a certain number of dropped scores—where missed assignments or poor quiz performances do not count against a student's final grade—has become increasingly common. Despite their prevalence, little research has been conducted on the equity of such policies. This study investigates whether dropped score policies are equitable across different student identities. Do dropped score policies provide a potential mechanism to address opportunity gaps? Do dropped scores facilitate a “rich get richer” dynamic, where high-performing students gain the largest benefits? What patterns are there to who benefits most from the policy?
To explore these questions and hypotheses, data mining was performed on grade books from courses implementing dropped score policies. The analysis involved calculating hypothetical final grades without the dropped score policy and comparing them to actual final grades to assess the policy's impact on individual students. This data was then combined with demographic information to evaluate the equity of the policies.
Preliminary results indicate that dropped score policies are generally equitable, with no significant disparities in impact across different identities. These findings suggest that instructors can use dropped score policies without contributing to inequitable outcomes in their courses. This research provides valuable insights for educators seeking to implement fair grading practices.
Melanie Butler Research: Standards-Based Grading (and other models) in Context
Friday, June 13, 2:30 — 3:30 PM EDT
This talk examines the effects of three alternative grading systems—specifications grading (specs), standards-based grading (SBG), and ungrading—on key student outcomes such as motivation, engagement, stress, enjoyment, and perceptions of fairness. Drawing from quantitative and qualitative data collected over three semesters across mathematics, computer science, and statistics courses, I will present a comparative analysis of how these systems shape student experiences. Highlights include findings that SBG significantly boosts motivation and engagement, with 95% of students reporting it encourages learning through mistakes. Specs grading was praised for transparency, though partial credit issues arose, while SBG's flexibility was lauded despite some logistical hurdles. Ungrading emerged as a unique approach fostering autonomy and collaboration, contributing to a low-pressure, mastery-focused environment. The discussion will emphasize practical trade-offs between systems, offering actionable insights for educators aiming to align grading practices with diverse learner needs. The session concludes with recommendations for integrating the strengths of each system and considerations for future research on long-term impacts and faculty training.
Amy Lee, Maggie Bergeron, Merle Davis Matthews Panel
Wednesday, June 11, 5:00 — 6:00 PM EDT
As part of our ongoing work in a faculty development program in a large liberal arts college within a public R1 university, we have noted that participants are particularly interested in critically engaging in, reflecting on, and being supported to innovate both their pedagogical framework for, and their material practices of, assessment and evaluation. Across the roughly 30 faculty who participate in annual, semester-long cohorts on equitable and inclusive pedagogy, participants have articulated emergent, as yet unexamined but deeply felt, senses that their feedback and grading (not only how they do it but what aspects of the course they base it on) are bound up in inequitable, historical practices of sorting students rather than supporting students and their growth. This reflects the current historical, social, and political conditions informing our work: the lack of safety for various visible identities in the current federal administration context; the increased reporting of loneliness, isolation, and mental health struggles; and the widespread availability of AI. We have found our participants increasingly interested in cultivating dispositions and enacting practices that critical pedagogues have long called for, and which center coaching, feedback, engagement, and holistic assessment rather than isolated, product-based evaluation.
This Panel will gather Ungrading Teaching Fellows from the Spring 2025 cohort to speak to their experience developing a project that involves identifying an opportunity for alignment in an assessment practice and then creating a plan to develop and integrate this practice in a way that supports student learning and growth. The semester-long Teaching Fellows cohort supports instructors in thinking about their role as teacher, and how this impacts the way they design and facilitate assessment and feedback opportunities for students.
Jennifer Gray, Stephanie Conner Special Topics: Writing Assessment
Wednesday, June 11, 3:30 — 4:30 PM EDT
One challenge faculty often face is implementing unique grading activities while working within an institutional system that values traditional assessment practices. To work around these constraints, we created two different assessment experiences that invite students into the assessment process while using the traditional assessment artifacts our institution expects, such as rubrics and grading scales, in more student-centered ways. Our presentation shares the experiences, results, and artifacts from these grading activities. For most students, grading rubrics are concrete, established tools created before their exposure to assignment materials. This pre-established approach distances students from the process of discovery and critical thinking we seek to encourage. Instead of being active participants in learning and creating, students may passively receive these assessment documents and robotically perform without engagement, having had no agency in the assessment process. Speaker One will discuss a first-year writing course activity that invites students into conversations about what makes a paper fit into an A-F category. The activity involves reviewing a sample student paper during class time and then creating guidelines for each graded category. For example, we might discuss the concept of audience and detail as a component of a B-level paper. Speaker Two will discuss an activity for an upper-level literature course that asks students to develop their own rubrics for a project of their own design. Instead of just selecting topics, students also create their own evaluation criteria. Session attendees will receive copies of artifacts. These experiences and artifacts represent ways to work within imposed institutional assessment restrictions while creating room for a negotiated experience in which students contribute to their own assessment devices and experience the challenge and agency of crafting an assessment process.
Laura Cruz, Noel Habashy Research: Student Engagement and Career Readiness
Wednesday, June 11, 2:00 — 3:00 PM EDT
Much of the work in global agriculture is collaborative and cooperative, so it seems fitting that upper-division, project-based capstone courses in the major should incorporate grading schemas that reflect the relational dynamics characterizing these partnerships. The cooperative grading structure mirrors how professional performance reviews are conducted in the field, thereby further preparing students for future careers that address critical issues in global food security.
For their capstone course, students participated in periodic, one-on-one discussions of their performance with their instructor, receiving qualitative feedback on their strengths and areas for improvement, in lieu of any other grades. Prior to each meeting, students ranked their performance using a rubric derived from a combination of course learning outcomes and career attributes (reported by employers in this sector). Their self-reported ratings then served as the basis for the conversation with the instructor. Both the criteria and the feedback emphasize intellectual growth and the lifelong practice of cultural humility.
A mixed-methods study of the outcomes of the cooperative grading process indicates that students from two classes (n=22) initially resisted the approach but grew to appreciate the shift toward a more personalized, reflective assessment model. Using previously validated scales, pre- and post-survey results affirm the findings of prior studies, indicating significant positive shifts in academic goal orientation (from extrinsic-performance to intrinsic-mastery) and in beliefs about grades as motivators, while also strengthening a sense of classroom community. A systematic review of student artifacts (including their self-ratings) affirms the shift toward intrinsic motivation, along with a stronger disposition of professional reflexivity in global contexts. The findings suggest distinctive contributions of global agriculture to the scholarship of ungrading.
Daniel Look Classroom Case Studies: Alternative Grading in STEM
Thursday, June 12, 1:00 — 2:00 PM EDT
In the fall of 2024, I implemented an alternative grading system in my Introduction to Real Analysis course, an upper-level mathematics class at a liberal arts institution. This approach emphasizes growth, mastery, and communication, replacing traditional point-based grading with a “complete/not complete yet” framework that allows for revisions. Course assessments include weekly homework, portfolio problems, and oral assessments, each designed to reflect the stages of mathematical research, from brainstorming to polished presentations.
Weekly homework represents the “napkin math” stage, encouraging exploratory problem-solving. Submissions must be substantial and mostly correct, with opportunities to revise incomplete work. Students earn a base grade of 1.0 by completing all but two assignments.
Portfolio problems simulate the publishing process. Students submit polished, typeset proofs in LaTeX that meet professional standards for mathematical writing. Weekly submissions are allowed, with unlimited revisions until completion, and students earn grade boosts for each successful submission.
Oral assessments emulate conference talks, helping students transition from written to verbal communication. These assessments are carefully structured to reduce apprehension while ensuring students demonstrate their knowledge effectively.
After three iterations, I have refined this system by scaffolding the rigor of portfolio problems and aligning assessment categories with departmental and institutional goals, facilitating administrative support. Students have responded enthusiastically, particularly valuing the focus on revision and process over outcomes.
In this presentation, I will discuss the system’s implementation, highlight its strengths and challenges, and share student feedback and insights.
Dan Martin, Kelly Wilbanks, Sheila Richardson, Emily Carter, Lydia Smaciarz Panel
Thursday, June 12, 1:00 — 2:00 PM EDT
The focus of this session is on using a role-reversal pedagogy to prepare graduate teaching assistants (GTAs) to provide labor-based grading in a first-year writing (FYW) course. A role-reversal pedagogy asks GTAs to become the students they will teach by enrolling in an online version of the first-year writing course they will later teach. This approach can help GTAs develop a more empathetic and equitable approach to alternative grading. GTAs receive extensive feedback on the work they produce as mock students, which they can use as a model when they assess student writing.
In this presentation, we start by providing an overview of how GTAs are prepared to assess and grade student writing. Then, four GTAs discuss their experiences learning how to grade and comment on student writing. They will discuss how they implemented the grading practices they learned, identify some of the struggles and triumphs for learning how to grade first-year writing, and note why they’ve been motivated to experiment with alternative grading practices. All of the GTAs are currently using alternative grading practices like grading contracts, portfolios, and reflection memos and will report on their experiences learning to use these approaches.
This presentation will benefit WPAs, writing instructors, and grad students learning how to teach writing. WPAs can learn more about how GTAs learn to grade and assess writing and what motivates them to try alternative grading practices. Instructors and GTAs can learn more about developing alternative grading practices.
This presentation will engage the following questions:
How well does a role-reversal pedagogy prepare GTAs to grade writing? What worked? What didn’t work?
What motivates GTAs to use alternative grading approaches?
What lessons did we learn from this preparation method?
Brandon Yik, Lisa Morkowchuk, Lindsay Wheeler, Josipa Roksa, Haleigh Machost, Marilyne Stains Research: Alternative Grading in Chemistry
Thursday, June 12, 2:30 — 3:30 PM EDT
Specifications grading has been proposed as an alternative grading method to better promote student success over traditional grading schemes. Within the chemistry community, specifications grading has been growing in popularity over the last decade as demonstrated by the rise of publications and conference talks. While several studies describe shifts in the final grade distribution as a result of the implementation of specifications grading, no study explores the differential impact on students of different social identities. In this study, we analyze over 9,700 final course grades of a year-long general chemistry laboratory course under both traditional and specifications grading schemes. Data are analyzed by individual students’ social identity (i.e., sex, generation status, underrepresented minority status, and transfer student status) and students’ intersectional identities using intersectionality theory. Our results are mixed and conflicting. More systemically minoritized students pass these courses with high grades under specifications grading, but opportunity gaps between systemically minoritized students and their systemically advantaged counterparts remain. The results of this implementation show that the impact of specifications grading on students is complex and that much still needs to be understood about students’ experiences with different grading schemes and their impact.
Jason Elsinger Classroom Case Studies: Standards-Based Grading 1
Wednesday, June 11, 3:30 — 4:30 PM EDT
Standards-Based Grading (SBG) offers a powerful framework for assessing student learning, but adapting it to a high school setting can present challenges, particularly when working within a rigid grading platform like PowerSchool (PS). After seven years of implementing SBG at the college level, I transitioned my approach to high school math courses, primarily AP-level classes. This transition required navigating structural constraints, such as PS’s required weighted assignment categories, necessitating a hybrid grading model.
In this talk, I will share my experiences designing and implementing this hybrid SBG approach, including the strategies I have used to maintain the integrity of SBG while meeting high school grading policy requirements. I will discuss both successes and obstacles, drawing from assessment results, student performance trends, and instructor reflections. I will also discuss parent communication and classroom management, both aspects that play a larger role in the high school environment, and share recommendations for educators looking to implement SBG in similar contexts.
Elizabeth Vaughan, Nicole James Research: Alternative Grading in Chemistry
Thursday, June 12, 2:30 — 3:30 PM EDT
Education literature suggests that traditional grading methods have been linked to increases in student anxiety, decreases in motivation to learn, and increases in competitiveness among classmates. Alternative grading methods are gaining popularity due to their potential to mitigate these effects; however, little is known about the impact that alternative grading practices actually have in undergraduate STEM classrooms. To address this gap, we investigate the use and impact of alternative grading methods in undergraduate STEM courses. Guided by self-determination theory, self-efficacy theory, and teacher-centered systemic reform theory, we investigate chemistry student and faculty experiences with grading methods through cognitive interviews and analysis of course artifacts (e.g., syllabi) to address the following research questions: 1) How do chemistry students perceive the grading structure in their courses? 2) How do students perceive aspects of their course grading structures to impact their self-efficacy and motivation in the course, if at all? 3) How do chemistry instructors describe the grading structure in their courses? 4) What factors influence instructors' choice of grading method, and how? This presentation will include a discussion of the findings generated from student interviews, faculty interviews, and course artifacts, as well as the alignment (or lack thereof) of themes identified from each data source. By considering alternative grading practices in STEM courses from a variety of angles, these results will provide useful insights to both discipline-based education researchers and practitioners about alternative grading methods.
Elizabeth Kubek Special Topics: Artificial Intelligence and Grading
Friday, June 13, 1:00 — 2:00 PM EDT
The landscape of “AI” is evolving so rapidly that it seems almost impossible to respond in an informed manner. We are told how essential it is that students know how to use AI, a term which embraces a range of models, agents, and embedded applications. Given all of this, what does it now mean to teach and evaluate “writing”?
One thread we can follow is provided by “reverse design” SLO-based assessment. But what do outcomes look like when the process students are using (prompt writing) is dialogic, and the interlocutor is also a proofreader and reference “source”? Can we align authentic processes with this parasocial mode of assisted writing, transparently and in a way that supports AI literacy? If AI agents can demonstrate proficiency, our emphasis should shift to process-based criteria. At the same time we must teach students the essential limits of AI, including linguistic and “reasoning” biases.
AI literacy can start with examining practices recommended by the Open Source Initiative, which include users’ ability to see and modify the data, code, and parameters used to build and train a system (https://opensource.org/ai/open-source-ai-definition). We can "see" training by asking a question of a given AI agent (chatbot) with the stipulation that its “reasoning” be visible. Knowing the training instructions of an agent allows us to discern its biases.
Valid outcomes should be that students learn to assess and use AI in a manner consistent with project goals; practice modes of writing based in goals they themselves select; and develop individual voice and discernment, thus reducing motivation to rely on AI guidance. The proposed rubrics and assignments are currently in use in my classes, with assessment built into student self-reflections. I hope that these methods will help students reclaim writing as a joyful process where they themselves develop as agents.
Melanie Newell Classroom Case Studies: Alternative Grading in STEM
Thursday, June 12, 1:00 — 2:00 PM EDT
The purpose of this session is to explore the potential benefits of reintroducing Lab Write-Ups in science courses at Estrella Mountain Community College (EMCC) as a means to enhance students’ science literacy under a competency-based assessment model. Lab Write-Ups provide an opportunity for students to share their data collection experiences, interpret results, and engage in scientific reading and writing. They also allow students to incorporate findings into their understanding of the natural world. While writing skills are often underemphasized in science education, they are essential for students pursuing science fields, as they will be required to write reports, correspondence, and publish in peer-reviewed journals. To support this initiative, a Lab Write-Up template has been created and shared among teachers and academic support centers at EMCC. Students are provided with the template for use in completing their assignments, and a peer review process has been integrated to encourage collaborative feedback. Results from a survey conducted at the end of the Fall 2024 semester for my CHM 151AA and CHM 130AA courses indicate that 86% of students found the Lab Write-Up format effective or somewhat effective in communicating their lab experiences. Student feedback highlighted the benefits of structured guidance in organizing thoughts, though some struggled with interpretation of data. Mastery of learning outcomes such as data analysis, error identification, and scientific communication showed varied results. Notably, students in CHM 151AA demonstrated higher mastery in identifying logical connections between principles and data, while students in CHM 130AA exhibited greater proficiency in effectively communicating findings. Overall, the implementation of Lab Write-Ups shows promise in improving mastery of science literacy skills, though further refinement of the process may enhance its effectiveness.
Rochelle Rodrigo, Josh Barrows, Sallie Koenig Research: Qualitative Work on Labor-Based Grading and Other Models
Wednesday, June 11, 5:00 — 6:00 PM EDT
This presentation discusses the outcomes of a qualitative coding analysis using Saldaña's (2021) framework looking at students' beliefs, values, and attitudes around the concept of labor and grading. It further puts completion rubrics in conversation with labor-based grading practices in the context of online education, considering the move over the last decade to implement anti-racist and alternative assessment practices (Condon & Young, 2016; Inoue & Poe, 2012; Inoue, 2015; Inoue, 2019). Our research investigates student values, beliefs, and attitudes about labor in online environments, focusing specifically on 100- and 300-level writing courses. We aim to determine whether completion rubrics can serve as a viable alternative to traditional assessment practices, particularly in online settings that lack the synchronous components essential to many labor-based grading practices. We will share our qualitative coding scheme, the outcomes of our study, and possible future directions for research in this area.
Claire Mayo Research: Student Engagement and Career Readiness
Wednesday, June 11, 2:00 — 3:00 PM EDT
In the past five years, the scholarship of teaching and learning has focused on linking learning outcomes with career readiness, which complements wider shifts in higher education as the Carnegie Foundation for the Advancement of Teaching and the American Council on Education now consider a university’s learning and employment records when awarding institutional rankings. This discussion has elevated ungrading assessment models like Linda Nilson’s Specifications Grading (2014) as a way to indicate mastery of learning outcomes. However, with the exception of digital learning fields, the conversation rarely turns to maintaining or elevating the rigor of the course in tandem with the emphasis on learning outcomes. Furthermore, while Nilson promises that Specifications Grading increases the rigor of the course, current scholarship on the ungrading assessment model focuses on implementing it rather than testing its effectiveness for maintaining or increasing course rigor. My study merges these conversations in a quantitative and qualitative study across two semesters to examine whether the ungrading assessment model increases student engagement with the course material for a high level of mastery of the course’s learning outcomes.
This study uses a mixed methodology with grounded theory and thematic analysis to analyze the work of 63 students across a sequence of general education courses in history. For this study, rigor is defined as a standard of excellence that students are equipped to meet through incremental learning outcomes that build and are reinforced throughout the semester. The results of this study show the frequency of student engagement with the course learning outcomes including the following: analyze primary and secondary sources in their historical context, recognize silences and absences in the archival corpus, identify ideology in historical argument, and implement their own arguments in response to the course’s central question.
Rebecca Torrey, Tim Hickey, Keith Merrill, Ella Tuson Workshop
Thursday, June 12, 4:00 — 5:30 PM
Are you interested in learning about Mastery Grading? Or are you interested in trying Mastery Grading but are intimidated by the setup? This workshop is for you.
From this workshop, participants will come away with:
1. An understanding of Mastery Assessment Pedagogy (MAP), and
2. Their own custom course in the Mastery Learning App (MLA)
The MAP approach uses backward design where the course grade depends on the number of learning objectives mastered by the student. Each exam provides a challenging problem for each of the objectives introduced so far. The exam problems are graded Mastery/NotYet. We will show how the MAP approach incentivizes students to adopt effective study strategies while minimizing their stress, as well as aligning with many other pedagogical best practices. Participants will redesign all or part of their own course using the MAP approach.
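To make the MAP grade computation concrete, here is a minimal sketch of a course grade that depends only on the number of learning objectives a student has mastered. The objective count and grade cutoffs below are invented for illustration; they are not the presenters' actual thresholds.

```python
# Hypothetical sketch of a MAP-style course grade: exam problems are
# marked Mastery/NotYet, and the grade depends only on how many
# objectives have been mastered. Cutoffs here are illustrative only.

def map_course_grade(mastered: int, total_objectives: int = 20) -> str:
    """Return a letter grade from a count of mastered objectives."""
    if not 0 <= mastered <= total_objectives:
        raise ValueError("mastered must be between 0 and total_objectives")
    # Each cutoff is the minimum number of mastered objectives for a grade.
    cutoffs = [(18, "A"), (15, "B"), (12, "C"), (9, "D")]
    for minimum, grade in cutoffs:
        if mastered >= minimum:
            return grade
    return "F"

# Example: a student who has mastered 16 of 20 objectives earns a B.
print(map_course_grade(16))  # B
```

Because a single challenging problem per objective is graded Mastery/NotYet, the whole grade pipeline reduces to a count and a lookup like this, which is part of what makes the scheme transparent to students.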
The MLA is an open-source app custom-designed and programmed by Dr. Hickey to support the MAP course structure. The software allows instructors to import and customize skill lists and problems from pre-existing courses. Workshop participants will create their own course in the app, import materials from an existing course, create exams, generate personalized exams, grade an exam, and upload the grades into the app.
The MAP approach was introduced and analyzed for computer science courses in the dissertation of Dr. Tuson. It has been used by Drs. Torrey and Merrill to teach Precalculus, Calculus, and Integral Calculus, and by Dr. Hickey to teach Discrete Math. All of these courses were taught at Brandeis University in the past three years.
The target audience is anyone who is teaching a class where each learning objective could be assessed using a single challenging question for that objective. We will guide participants through the setup of their own course using MAP with MLA.
This workshop has no prerequisites, but participants will use a laptop and a smartphone.
Meg Steinweg, Elizabeth Parkins Research: Student Perceptions
Wednesday, June 11, 3:30 — 4:30 PM EDT
Traditional grading systems have faced scrutiny for undermining intrinsic motivation, fostering anxiety, and creating inequitable learning environments. As an alternative, ungrading has emerged as a progressive assessment practice that emphasizes qualitative feedback over quantitative scores. Despite its benefits, ungrading remains unfamiliar to many in higher education, leading to uncertainty. Adding to this unfamiliarity and uncertainty are the several forms of ungrading, such as specifications grading, mastery-based grading, grading for growth, and self-grading. Following an Ungrading (Blum, 2020) book club and community of practice, we developed a research project to ask the question: what are student perceptions of ungrading, and do those change over the semester? We surveyed students anonymously at the beginning and end of the Spring 2024 semester in five different courses that utilized some form of ungrading.
In pre- and post-surveys, students scored their level of anxiety with ungrading, satisfaction with traditional grading systems, level of control they felt over their own learning, level of control over their own grade, and their level of concern with ungrading. Overall, there was no change in students' responses between pre- and post-surveys for level of control over their own grade, with those scores being neutral. Students had a low level of concern regarding ungrading at the beginning of the semester (2.4 ± 0.09), and that concern decreased significantly by the end of the semester (1.7 ± 0.13; t-test, p < 0.001), an expected finding. Few professors at our institution use ungrading, but exposure to it decreased student concerns. We are analyzing professor survey data and interviews to determine how professors talked about ungrading in their courses and whether that is related to the student perceptions.
Carter Moulton, Lauren Shumaker, Suzy Beeler, Shannon Mancus, Becky Swanson Panel
Wednesday, June 11, 2:00 — 3:00 PM EDT
Making the shift from traditional grading to alternative grading can be overwhelming, frustrating, and isolating. While Centers for Teaching and Learning (CTLs) play a crucial role in supporting faculty, programming often relies on one-off workshops that are difficult to sustain. Meaningful, long-term shifts in assessment require structured support, ongoing faculty engagement, peer mentorship, and a community of shared experimentation.
Through book clubs, workshops, faculty learning communities, intensives, peer working groups, and assessment support, Colorado School of Mines has built a network of over 50 faculty and staff rethinking assessment and grading. This panel brings together faculty and educational developers to discuss strategies for fostering long-term faculty engagement with alternative grading, sharing lessons learned from our multi-year interdisciplinary implementation efforts. Faculty panelists from a range of disciplines will briefly (5 minutes) discuss their alternative grading initiatives, including mastery-based testing in mathematics, specifications grading in computational biology and project-based honors courses, and ungrading in humanities courses.
Our panel will explore key challenges, joys, and motivations behind alternative grading adoption, as well as what makes for successful collaboration between faculty and CTLs. To frame this, we will draw on the four-part lens of "hub, incubator, temple, and sieve" (POD Network 2008) to highlight how teaching centers support faculty development and sustain alternative grading efforts.
This panel is designed for faculty, faculty developers, and teaching center staff interested in moving beyond one-time workshops to foster sustained faculty engagement. Attendees will leave with insights into alternative grading implementation and practical strategies for building and sustaining faculty buy-in and community.
Jayme Dyer, Chris Mansfield, Liz Bailey, Gavin Bell Special Topics: Equity
Wednesday, June 11, 5:00 — 6:00 PM EDT
In STEM fields, persistent opportunity gaps remain between PEERs (People Excluded based on their Ethnicity or Race) and non-PEERs, despite widespread deployment of institutional initiatives designed to “fix the student.” Rather than focus on deficits in student preparation, institutional policies that may perpetuate systemic inequities are now being examined, including traditional grading.
To improve equitable outcomes in our community college Math and Physics courses, we implemented a points-based grading policy we call Multiple Grading Schemes (“MGS”). We calculate grades using a weighted average (e.g., homework is 20%, quizzes are 20%, etc.), but rather than have one grading scheme, we use three. Each grading scheme emphasizes a different aspect of the course; for example, one scheme weights homework more heavily, whereas another weights exams more heavily. At the end of the semester, each student’s grade is calculated under every scheme, and the student receives the highest grade.
Using 6 semesters of final course grade data (n>3500 students), we asked what percentage of students benefited from MGS: we set one grading scheme as baseline and determined the percentage of students who received a higher letter grade in one of the other grading schemes. We found that a higher proportion of PEERs (13-16%) benefited from MGS relative to white students (10%) (p<0.05). We also asked whether MGS harms students by allowing some students to pass the class who are not prepared for subsequent courses. Comparing students who earned a C using the baseline grading scheme to those who only earned a C in one of the other schemes (i.e. they would have failed without MGS), we find that both groups of students have similar pass rates in subsequent courses.
Providing multiple paths to earn higher grades appears to disproportionately benefit PEERs without causing harm. Thus, we argue that Multiple Grading Schemes may be one tool to improve equity in STEM persistence in higher education.
Sarah Lacy Research: Growth Mindset and Student Autonomy
Thursday, June 12, 1:00 — 2:00 PM EDT
Grading writing has long been an emotional experience for both the teacher and the student (Driscoll & Powell, 2016; Yu et al., 2021). In this presentation, I will discuss a new iteration of a pedagogical protocol I first outlined in a 2022 article, “Feedback Conversations (FCs),” which aims to lessen this tension by inviting students into the grading/feedback process. In my writing classes, an FC is an in-class activity in which students reflect metacognitively on the feedback I provide on their writing during the grading process. I have since made these reflections more intentional by having students determine a revision plan for their next essay based on self-identified writing and learning needs, alongside a genre-based study of writing in their discipline. In this way, students can come to understand that their grade merely signals whether they followed the assignment instructions and is not an indictment of their ability to write “well.” My feedback practices aim to lessen the intensity of receiving a grade by offering students a chance to work with my feedback, understand the grade they received, and pose questions to me with a focus on planning for future assignments. Additionally, by adding the discipline-specific research component to the course structure, my feedback now focuses less on justifying the grade and more on guiding students toward success in future compositions. These protocols also lessen my grading stress because I afford myself the space to incorporate detailed lessons into each student’s feedback, outlining how they may navigate the revision process in order to become more confident writers.
To demonstrate the efficacy of this protocol, I will showcase examples from student work collected during an IRB-approved study in Fall 2024, as well as provide example FC prompts and activities for instructors to adapt. FCs suit any discipline or course that employs writing as a means of learning.
Derek Eckman, Randa Kress, Jason Reed Classroom Case Studies: Standards-Based Grading 1
Wednesday, June 11, 3:30 — 4:30 PM EDT
Idaho State University has spent the last four semesters implementing a standards-based grading system for our coordinated college algebra courses. As part of this system, we initiated a proctoring room where student workers proctor reassessments one day per week for college algebra attendees. In this presentation, we describe the logistical structure we constructed for scheduling reassessments and the communication mechanisms we created to keep instructors, students, and proctors in alignment. Specifically, we describe the structure of our Qualtrics-based survey form, the triggers and workflows that facilitate stakeholder communication, and our Google Sheet to track reassessment schedules, printing, and other logistical issues of our proctoring room. Our presentation is a practical demonstration of how instructors wishing to incorporate reassessments into their courses might approach the construction of a reassessment system in individual or coordinated courses.
Lexi Almy Special Topics: Liberation Education and Pedagogical Dissonance
Thursday, June 12, 1:00 — 2:00 PM EDT
The classroom is a space ideally designed to disrupt and question the status quo of knowledge production and social responsibility. There is great potential for liberation in education, but we can still observe forms of subjugation, unquestioned obedience, constraint, and oppression. One place oppression shows up is in the traditional grading system. Grades have been shown to reduce risk-taking, reduce creativity, act as a major stressor, and increase anxiety. This two-phase research project, based on teacher interviews and a student survey, examined the impact and perceptions of alternative grading in higher education. Eighty semi-structured interviews were conducted with higher education teachers who have implemented alternative grading. They were asked how they define and practice ungrading and what impact ungrading has on the stress and mental health outcomes of college students and educators. Teacher interviews were coded and analyzed to inform the student survey, along with direct feedback from participants via a collaborative Google Doc. Phase two was a Qualtrics survey disseminated to students who attended alternatively graded courses across the U.S., Canada, Brazil, and Europe. Students were asked about the benefits and challenges of alternative grading and traditional grading, as well as about stress as it related to each. Qualitative responses were analyzed for this study (N = 370-445). Teachers proposed the following benefits of alternative grading for students: reduced stress, greater focus on learning than on grades, more equity, and more student agency. Teachers also often felt more fulfilled and provided a more equitable curriculum. Students reported feelings of reduced stress, increased agency, and increased focus on learning over grades. In addition, students reported higher levels of stress in traditional courses when compared to their alternatively graded course.
Nicolas Cherone Classroom Case Studies: Ungrading and Collaborative Grading
Thursday, June 12, 2:30 — 3:30 PM EDT
This presentation examines a novel "ungrading" approach implemented in an advanced acting class of ten high school students at Annapolis Area Christian School, a private, coeducational, faith-based K-12 school in Maryland. The method replaces traditional letter grades with a system based on reflective practice and collaborative assessment. It employs rich, individualized feedback, regular student self-assessments, and one-on-one conferences to empower students to take ownership of their learning. While this specific case study involves a theatre class, the process was adapted from Joy Kirr’s chapter in Ungrading (ed. S. Blum, 2020), which described an application in a middle school ELA context. Reflective and collaborative ungrading has potential applicability across diverse educational environments.
Rationale: Shifting away from numerical scores toward qualitative, reflective feedback that emphasizes intrinsic motivation.
Implementation: Detailed discussion of the design and structure of self-assessments and collaborative conferences.
Outcomes: Preliminary results highlighting enhanced student engagement, deeper reflective practices, and valuable instructor insights.
This session offers practical recommendations for educators interested in rethinking traditional grading systems and creating a learning environment that prioritizes continuous improvement and authentic understanding.
Iva Katzarska-Miller Classroom Case Studies: Specifications Grading
Thursday, June 12, 2:30 — 3:30 PM EDT
The presentation will discuss the implementation of specifications grading for an AI-infused Senior Seminar in Psychology course. In the current age of AI, with students accepting the inappropriate use of AI-generated academic work (McMurtrie, 2024), traditional grading methods seem outdated as motivators. At the same time, students need to develop the AI literacy necessary to use AI tools ethically and effectively. Combining these two considerations, the Senior Seminar course asked students to use two AI tools, ChatGPT and Elicit, in specific ways and to reflect on the utility of the tools for the assigned tasks, while specifications grading was used. After a brief description of how AI was infused in the course, I will discuss the grading for the course. Every assignment (a six-step scaffolded APA-style literature review paper, leading classroom discussion, reading journals, and self-evaluation of engagement) was graded as complete/incomplete using rubrics with specific criteria (each criterion also marked as complete/incomplete). For each class component, students were provided with tokens for late work or work that had earned an incomplete. At the end of each week, students placed physical stickers on a “power disruptor quest map” for each component on which they had earned complete marks, along with used tokens. Final grades were determined based on bundles containing a certain number of assignments for a specific grade. I will discuss students' responses to the grading system, my observations about the impact of the grading system on students’ motivation and learning, and considerations for improvement.
Sharona Krinsky, Robert Bosley Research: Standards-Based Grading (and other models) in Context
Friday, June 13, 2:30 — 3:30 PM EDT
In this talk we will present the results of two different grading implementations across a tightly coordinated Quantitative Reasoning course with a large number of small sections and many instructors. In Fall 2024 we ran half of our sections using a points-and-percentages grading system with category weights, while the other half used standards-based grading. Given the tight coordination of the course, all sections followed the same pacing guide with the same iClicker materials and the same quizzes, and all were built on the pillars of clearly defined learning outcomes, looking for evidence of learning, and the principle that eventual success matters.
We will present the similarities and differences between the way the two sets of sections ran as well as the results, including pass rates and grade distributions. We will also discuss the impact of non-grade related changes that were included in the points-and-percentages grading system, including reflections on why these changes were incorporated and where we might need additional work to separate grading from assessment practices and articulate best practices in each for future iterations of the course.
This course is run at a 4-year large public minority-serving institution in California.
Edgar Fuller, Roneet Merkin, Jeremiah Hower Classroom Case Studies: Hybrid and Specialized Grading Models
Friday, June 13, 2:30 — 3:30 PM EDT
At Florida International University, we developed a process to extend certain mathematics courses beyond the end of a regular semester for close-to-passing students. These ‘stretch’ sections give students several weeks to build on their current knowledge, demonstrate proficiency, and possibly pass the course in which they were originally enrolled. This opportunity comes at no cost to the students, allowing them to avoid retaking a course and potentially alleviating some of the time constraints that typically lead to failure.
Initially offered as an extension to a standards-based grading Pre-Calculus and Trigonometry course, the stretch sections are now offered at the end of each semester to students in College Algebra, Calculus 1 and 2 as well. The stretch sections utilize peer learning assistants, small group work and active learning to help reinforce knowledge. Approximately 500 out of the more than 5000 students taking these four courses make use of the opportunity every year.
In our session, we will discuss the motivation behind the stretch courses along with the steps involved in their creation. We will compare and contrast this program to other existing interventions that use course extension to increase student success such as two-semester equivalents, prerequisite and corequisite models. Data presented will show student success not only in the primary course, but in the next course as well. We will also discuss the resulting impact on the university at large, concluding with a collection of lessons learned and suggestions that other institutions could use when implementing a similar course structure for their students.
Melissa Ko, Rachel Weiher Research: Student Perceptions
Wednesday, June 11, 3:30 — 4:30 PM EDT
Much research suggests that grades can be detrimental to the learning process, but how do students make sense of the different ways that instructors grade, and which strategies do they identify as beneficial to their learning? We investigated students’ perceptions of what grades accomplish, how they are assigned, and how they affect students emotionally and materially. We surveyed these beliefs and perceptions and then conducted a series of focus groups with a pre-screened sample of UC Berkeley undergraduate students that represented a diversity of self-reported academic and social identities. Participants shared numerous examples of how the presence of grades influenced their behavior in courses and ultimately had neutral to negative impacts on their learning. Moreover, some approaches to designing assessment were perceived by students as particularly misaligned with and unreflective of their learning and/or their current capabilities, while others were identified as positive influences. These findings can inform instructors as they plan course-level assessments by revealing how even subtle design choices or framing can have profound impacts on student beliefs and behaviors.
Katherine Mattaini Panel
Thursday, June 12, 2:30 — 3:30 PM EDT
Instructors in the alternative grading community often discuss how to adapt evaluation methods for their particular student population, and yet The Grading Conference rarely features the voices of some of the biggest stakeholders in the grading reform movement: students themselves. This session will be a panel of students who have taken at least one alternatively graded course in higher ed, recruited from across various disciplines, institution types, and methods of alternative grading. Students will be asked to answer several questions provided to them ahead of time, and they will have the opportunity to address questions from attendees. Some questions might include:
What were the major pros and cons of the alternative grading system you experienced? Please consider both your own experience and that of others in the class.
How did your instructor communicate with you about their alternative grading system, and how do you think that impacted your & other students’ experience with it?
How did the alternative grading system in your course affect your motivation to do work for the course, if it did at all? Please also consider the experience of other students in the course.
What advice would you give an instructor thinking of converting their course to a form of alternative grading?
Hannah Kinmonth-Schultz Research: Growth Mindset and Student Autonomy
Thursday, June 12, 1:00 — 2:00 PM EDT
Fear of failure in a science, technology, engineering, or mathematics (STEM) classroom can lead to student hesitancy to seek feedback or to student disengagement in the form of missed classes, skipped answers on exams, or failure to turn in assignments. Growth mindset interventions emphasize a student’s capacity to improve with practice and show promise at diminishing disengagement. However, tracking improvement on specific objectives may be difficult in a large-lecture setting. Skills-based activities, such as short-form written responses on a scientific data figure, offer one method by which improvement can be easily tracked. We asked whether implementation of growth-based grading methods on a single assignment type in a large-lecture biology classroom would alter student perceptions of their general improvement in critical thinking, content knowledge, and motivation in STEM compared to students who received traditional grading approaches. We used a quasi-experimental static control group post-test design and student responses to a researcher-developed reflection survey to assess students’ perceptions of their learning after receiving growth-based grading on consecutive short-form written responses. We noted an overall positive sentiment and emphasis on growth in the treatment group relative to the traditionally graded control. Additionally, in the treatment group, men perceived an overall greater improvement in content knowledge while women demonstrated greater STEM motivation than men.
We showcase our intervention instrument as one example of how growth-based grading approaches could be implemented in a large-lecture setting.
Adriana Streifer, Michael Palmer, Jessica Taggart Research: Student Perceptions
Wednesday, June 11, 3:30 — 4:30 PM EDT
Specifications grading (aka specs grading) is an alternative grading system that emphasizes transparency, low stakes, student engagement and learning, and equity. It attracts practitioners for its potential to enhance student motivation and remedy several challenges of traditional grading. Specs grading is growing in popularity, and most literature on the subject addresses instructors’ experiences with implementation, and the impact on students’ grades and learning outcomes. Much less is known about students' perceptions of and experiences with it. Our research questions were: “What are students’ perceptions of specs grading both before and after they experience it?”, and, “How does specs grading impact students’ motivations to learn?” We examined students’ predicted and actual experiences of specifications grading across several semesters, courses, and disciplines at a research-intensive, public university in the United States. This presentation will describe the methods, results, and conclusions of our study. Data were collected using a pre/post survey, which included both Likert and open-ended questions. Most students expressed positive attitudes toward specifications grading both before and after experiencing it. Facets of motivation, including choice, value, and expectations of success, were important factors shaping students’ perceptions: students perceive specs grading to align their efforts to their resulting grades, to increase transparency, and to give them more choice and control over their work. Based on these results, we propose a set of recommendations for practice, both for instructors who wish to implement specs grading and for educational developers who support instructors in implementing efficacious and equitable grading practices.
Madeline Sutton Special Topics: Writing Assessment
Wednesday, June 11, 3:30 — 4:30 PM EDT
This session shares results from a qualitative research study that used narrative inquiry to explore writing assessment literacy, the sum of instructors’ writing assessment knowledge, beliefs, and practices (Crusan et al., 2016). Writing assessment shapes what students learn and how they develop writing ability, with lasting consequences for academic growth (O’Neill et al., 2009; White, 2009). New composition instructors, particularly graduate teaching associates (GTAs), often receive incomplete or inadequate training in writing assessment theory and practice (Saenkhum, 2020; Weigle, 2007), creating a significant gap between the importance of effective writing assessment and the often-minimal preparation instructors receive to deliver that assessment. To address this gap, this study examined how new GTAs developed writing assessment literacy. Blending research with practical implications, the talk introduces the construct of writing assessment literacy and shares the stories of eight GTAs who used equitable assessment practices during their first year teaching first-year writing courses. Results uncover prior writing assessment knowledge, investigate factors that impact whether and how writing assessment literacy evolves over time, and identify how assessment knowledge and beliefs inform assessment practices. Based on narratives and thematic analysis, the talk presents an expanded conceptual model for tracing writing assessment literacy development that emphasizes lived experience, prior knowledge, affect, and labor in teacher learning. I redefine the development of writing assessment literacy as literacy labor to draw attention to the material, intellectual, affective, and embodied dimensions of GTA development. I offer practical recommendations for teacher training and learning to foster the development of writing assessment literacy for new and experienced composition instructors.
Daniel Guberman Panel
Friday, June 13, 1:00 — 2:00 PM EDT
As alternative grading has expanded, discussions through blogs and social media, scholarly articles, conferences, and books, have focused on instructor experiences and perceptions. Yet, most arguments in favor of alternative practices emphasize potential benefits for students. As a community, we have done an insufficient job of inviting students into discussions, particularly inviting students not as subjects but as partners and co-creators of new practices. This panel contributes toward remedying this issue by bringing students who have been engaged in discussions and projects related to alternative grading during the spring 2025 semester into discussion with conference attendees.
The panel will draw on a class of 15 students enrolled in a scholarly project course titled (Un)Grading at a large research-intensive public university in the United States. The students in this course explored alternative grading systems by examining relevant literature and real-world examples in addition to interviewing peers and faculty. They discussed and debated practices, benefits, and drawbacks from multiple perspectives, including disciplinary differences, mental health considerations, and overall well-being. They identified opportunities to implement changes in individual classes and advocate for broader systemic reform. Each student developed and presented an individual or paired scholarly project focusing on a variety of topics related to grades and grading (such as exploring curving practices, examining well-being associated with honors programs, and looking at emerging structures that provide professional credentials for certain course grades). Drawing on these experiences and projects, the members of the student panel, which will be facilitated by the course instructor, will share their own perspectives and projects while responding to questions from attendees. When possible, they will also make their scholarly projects available to attendees.
Hannah Jardine Classroom Case Studies: Labor-based and Contract Grading
Wednesday, June 11, 2:00 — 3:00 PM EDT
In this talk, I present an example of an alternative grading approach I applied in an undergraduate Psychology of Education course and describe how I helped students connect the grading system they were experiencing to the course concepts we were learning. I will first describe the alternative grading model that I used, which integrated aspects of labor-based and mastery-based grading. Then, I will share how I explicitly encouraged student discussion and metacognitive reflection, connecting the alternative grading structure to course concepts such as Universal Design for Learning, culturally sustaining pedagogy, and theories about motivation. I will also share anonymous student reflections about their experience with the alternative grading structure at the beginning, middle, and end of the course. The talk will conclude with recommendations for others aiming to support students’ metacognitive reflection on alternative grading, in education courses and beyond.
Amy Ernstes Special Topics: Neurodiversity
Wednesday, June 11, 2:00 — 3:00 PM EDT
For my dissertation I have conducted qualitative research on the topic of ungrading. The research questions of my project specifically focus on the student experience of ungrading. To address these questions, I will utilize data from interviews that I conducted in the Spring 2023 semester. I conducted 99 interviews with 38 undergraduate students across the semester: one interview at the beginning, one in the middle, and one at the end of the semester (28 students completed all three rounds). These students were drawn from six courses using ungrading practices in the Spring 2023 semester. I also interviewed the teachers of these courses. My group of participants will allow me to focus on the student experience of ungrading with regard to three specific groups: neurodivergent students, first-generation college students, and students who had negative experiences of ungrading. For analysis of these student experiences, I will be using bell hooks’ concept of engaged pedagogy as an analytic framework. I am currently working on the results section of my project and intend to have the results, analysis, and conclusions completed and ready to report at the conference.
Martin Cenek Special Topics: Artificial Intelligence and Grading
Friday, June 13, 1:00 — 2:00 PM EDT
In recent years, generative AI tools have made great strides in quality, availability, and reliability, enabling a wide range of applications that automate many daily tasks. The disruptive nature of these technologies is felt across all levels of education, driving personalized instruction, broadening access, refining curriculum delivery, and fostering critical thinking. Automating the assessment of student learning outcomes is no exception: grading student work can now be powered by generative AI. We present our findings and reflections on deploying an AI grader in a college-level computer science laboratory course. Instead of using a large language model (LLM), we trained and used a small language model (SLM) to assess students’ short-answer responses demonstrating learning outcomes. The SLM allows the AI grader system to run on consumer-level hardware, while its accuracy in evaluating students’ short-answer responses matches that of an LLM. The structure of the questions and responses graded by the SLM also avoided the bias that arises when an LLM is used to grade long essay questions. We report lessons on when automated grading excels and identify scenarios in which bias and accuracy limitations hinder its effectiveness.
Leigha McReynolds, Alexandra Harlig Workshop
Thursday, June 12, 4:00 — 5:30 PM
This workshop offers participants the opportunity to re-think and re-structure their grading and assessment practices to better align with their pedagogical values. After completing this workshop participants will have: identified their pedagogical values, brainstormed alternative approaches, and identified a change to make in their classes. While this workshop assumes that participants will have a basic familiarity with alternative grading terms, it is geared toward teachers who have not yet implemented these practices and are looking for help in thinking about how alternative grading would best fit their own teaching needs. Both of the workshop leaders have several years' experience with alternative grading in their own classes – including labor-based contract grading and ungrading – and have offered alternative grading workshops for their department and their university through the Teaching and Learning Center. The workshop will begin with a reflection on participants' relationships to grading, continue with a brief lecture on the key values of alternative grading, and then give participants time in several activities – including breakout room discussions, time for independent work, and general Q&A – to work through and implement the material. Participants should leave with a better sense of how to make decisions about their own grading practices and one specific change to make in their teaching.
Sarah Silverman Special Topics: Neurodiversity
Wednesday, June 11, 2:00 — 3:00 PM EDT
Neurodivergent people often think and communicate differently than neurotypical people, which can lead to miscommunications about expectations for assignments and assessments. This presentation is a theoretical exploration of the potential for alternative grading methods that integrate "tolerance for error" (a principle of Universal Design) to support neurodivergent students and instructors in situations where both accommodations and Universal Design for Learning fail to reduce barriers. The motivation for this research is that there are gaps in the two main available approaches to reducing barriers for neurodivergent students and instructors: accommodations and Universal Design for Learning. Accommodations have the well-known drawbacks of relying on diagnosis and disclosure of a known disability (which not every neurodivergent student has access to) and of locating disability in the individual. Universal Design for Learning builds flexibility into learning, but rarely addresses the communication differences that often create barriers for neurodivergent people. This presentation uses two elements of neurodiversity theory to argue for alternative grading methods that integrate tolerance for error, that is, elements that prevent adverse consequences for unintended actions or misunderstandings. The neurodiversity paradigm, which advances the idea that there is no one superior way of thinking or communicating, is used to argue for collaborative grading that includes opportunities for dissonance between instructor and student perspectives. The double empathy concept, which is a neutral framing of communication challenges between neurodivergent and neurotypical people, is extended to suggest that alternative grading methods can support neurodivergent students and instructors by building in curious tolerance for communication issues such as misunderstanding expectations. Applications to collaborative grading, complete/incomplete grading, and other methods are reviewed.
Marilyne Stains, Ying Wang, Haleigh Machost, Brandon Yik Research: Alternative Grading in Chemistry
Thursday, June 12, 2:30 — 3:30 PM EDT
Extensive empirical evidence shows that traditional grading systems (A-F grades; 100% scale) fail to accurately reflect student learning and reinforce systemic inequities in education. In response, grading reforms like ungrading and alternative grading have emerged, gaining popularity among STEM college instructors. Specifications grading, formalized by Linda Nilson in 2015, focuses on criterion-referenced evaluations and mastery of course content. Its adoption has rapidly increased, particularly among chemistry instructors, as evidenced by the rise in related symposia and peer-reviewed articles. However, chemistry instructors' motivations for shifting to specifications grading are unclear. Insight into their motivation can provide valuable information for the further dissemination of this grading scheme. This qualitative study investigates the reasons behind chemistry instructors' adoption of specifications grading. We conducted semi-structured interviews with 29 chemistry instructors from 24 U.S. academic institutions currently using this alternative grading method. The interviews aimed to understand their perceptions of the benefits of specifications grading, their dissatisfaction with traditional grading, and the challenges they face in implementing this approach. Our findings reveal that instructors adopted specifications grading primarily to address their dissatisfaction with traditional grading. They frequently cited perceived benefits such as enhanced student learning gains and increased flexibility for students. This work provides valuable insights for future dissemination efforts aimed at STEM instructors who are considering implementing specifications grading. Specifically, to encourage broader adoption, dissemination efforts should emphasize how perceived benefits, even if not yet empirically supported, align with instructors’ dissatisfaction with the status quo and relate to their real-world needs and aspirations for their classroom.
Jeannette Byrne Classroom Case Studies: Ungrading and Collaborative Grading
Thursday, June 12, 2:30 — 3:30 PM EDT
Much has been written about the potential negative impacts of grades on student learning (Blum, 2020; Clark & Talbert, 2023; Kohn, 2013). In response to these negative aspects of grades, alternative methods of assessing students are emerging (Blum, 2020). One such approach is ungrading. While many definitions of ungrading exist, at their core is a shift of focus away from grades and onto student learning. In this presentation I will describe how I implemented ungrading in an advanced biomechanics course. This lab-based course is taken by a mix of kinesiology and biomedical engineering students. In previous iterations of the course, a focus on grades was negatively impacting student learning in the lab portion; despite many attempts to shift the focus back to learning, grades remained the students' priority. Four years ago, I adopted ungrading for the lab portion of the course. Rather than grading individual lab submissions, grades were replaced by student reflections and small group discussions related to the lab topics. Lab grades were ultimately determined jointly by student and teacher at the end of the semester. I will describe how I implemented ungrading in these labs and share how it impacted my students and me. Student feedback suggested greater engagement and learning; many students talked about how removing grades meant they were no longer afraid of getting a bad mark, which gave them the freedom to explore and just learn. The impacts on students were profound. Seeing them fundamentally changed my approach to teaching in ways I could never have imagined four years ago, and seeing how students thrived in an environment where learning was the focus made teaching joyful again for me. I look forward to sharing my (and my students') ungrading journey with you.
Josh Stangle Classroom Case Studies: Standards-Based Grading 1
Wednesday, June 11, 3:30 — 4:30 PM EDT
Several years ago, I switched to standards-based grading (SBG) in all of my mathematics courses. I still use a hybrid version of SBG in my courses, and I think there are many advantages to clearly defined learning outcomes and multiple assessment points. I also believe that there is room for more open-ended assignments which ask students to reflect on when methods do not work, push them to apply the course content to new questions, and try novel problem-solving methods outside of the course "standards." In an effort to allow for this type of exploration and growth, but not overly tax students emotionally, I began using revision as a tool to stimulate conversation, encourage reflection, and give students agency in their own grade. This talk will discuss my use of revisable Challenge Homework in lower- and upper-level mathematics courses. I will discuss lessons I've learned in implementing revisable assignments (including recommendations for other instructors), how they affect my experience and the student experience (based on conversations and student evaluations), and plans for future development of the practice.
Deborah Rifkin Classroom Case Studies: Ungrading and Collaborative Grading
Thursday, June 12, 2:30 — 3:30 PM EDT
College music courses develop listening and performance skills, which are traditionally assessed by individual performances in class. This can be a harrowing process for students because it requires application of complex concepts among peers in a time-pressured context. Assessments can be unreliable because of debilitating anxiety, which also damages a positive learning environment. To mitigate these problems, I changed to a portfolio approach, in which students submit videos of their performances. Assignments are tailored to a particular learning goal, which is made explicit to the student. As students submit their videos, they comment on their learning process and development toward the learning goal. I provide feedback, and at the end of each unit students submit a portfolio. For each learning goal, students comment on their development, cite their homework videos as evidence of their learning arc, and assign themselves a grade. Students report much less stress and anxiety, and the iterative metacognition focuses attention on learning, not grading. In addition, the portfolio can be tailored to the individual. While traditional assessments tend to reward students whose background conforms to what was typical of preparatory training a few generations ago, the portfolio allows students whose experience is primarily in popular music or oral traditions to thrive. Despite these successes, there remain significant challenges. When musical skills are culturally recognized as a gift or an inherited talent, the idea that musicianship can be improved through an introspective practice can threaten beliefs and identities. In addition, students can be unreliable self-assessors, failing to recognize errors. It helps to transparently and intentionally facilitate a new learning culture. Incorporating peer assessment could help students recognize errors; however, this introduces the kind of stress and anxiety the portfolio was aimed at reducing.
Hayley Blackburn
This poster presents the development and preliminary implementation of an AI-enhanced ungrading system, featuring both an AI-powered chatbot (named PAL, your personal assistant in learning) and a web-based rubric-building application (named PAT, your personal advisor in teaching), in a 2000-level Introduction to Technical Writing and Presentation course. PAL provides immediate feedback, Q&A, and project clarification support to students, which is key in an unfamiliar ungraded classroom. PAL is trained on my class materials, making it more course-specific and user-friendly than the general AI tools currently available to me. The rubric-building PAT utilizes LLM-powered models to analyze project descriptions and automatically extract intended learning outcomes and skills, simplifying the creation of dynamic, criteria-based assessment frameworks for ungrading. This tool is designed to empower faculty to more effectively implement ungrading by streamlining the rubric creation process. This in-progress research aims to compare student grades and performance metrics from before and after implementation of these ungrading strategies, the chatbot integration, and the rubric tool. The poster will showcase the rubric application's functionality and invite participants to test it with their own project descriptions, providing valuable feedback for further development. This project offers practical strategies for educators aiming to adopt technology-enhanced ungrading practices, providing a model for leveraging AI to promote student autonomy, reduce instructor workload, and foster a more transparent assessment process. The code for PAL and PAT is open-sourced for educator use.
Gwen Miller
This poster session explores a work-in-progress on using and grading student lab notebooks in an introductory biology course. This work attempts to recenter the lab notebook as a student learning tool in a second-semester introductory biology course at a large community college while at the same time not overwhelming the instructor with tedious grading. Instructors can adapt this approach to various disciplines that require students to record their learning.
Faculty created a new lab curriculum to support students in BIOL 1407, an introductory biology lab that covers evolution, organisms and ecology. Students must write purpose statements before class, complete a handwritten lab notebook, draw images, answer questions and make conclusions weekly. Students complete their lab notebooks in various formats. Students upload their lab notebooks to the Canvas learning management system for grading.
Traditional grading of every item in the notebook each week is impractical, so a quick method for grading purpose statements and an alternative grading practice for the lab notebooks were developed. Initial results show that this grading practice does not negatively impact student outcomes. Lab notebooks are graded against a common rubric: the instructor grades one small, unannounced section in fine detail for half of the credit, and the rest of the grade reflects the labor spent completing the lab notebook.
The poster session will cover recommendations on establishing grading/assignment expectations for students at the beginning of the semester and lessons learned from the first two semesters of implementation. Preliminary data regarding student success in the course compared to sections using traditional grading methods and instructor reflections will be shared.
Angela Brown
Traditional grading has stood as the “way to grade” for many years. However, there are conversations and expanding research on the potential impact of alternative grading options. This study considered how the approaches used in alternative grading could benefit a college-level intermediate accounting course.
Many accounting students consider pursuing certification after college. These students enter a process that requires studying and completing exams but is quite different from the traditional college experience. These exams offer multiple attempts and are “graded” entirely on a pass/fail scale with limited feedback.
With this consideration in mind, I adopted an alternative grading process for Intermediate Accounting II in the Spring 2025 semester. The final course grade is based on meeting a specific number of objectives rather than on traditional point accumulation. Specific elements include exams formatted to simulate a portion of the professional exams, multiple attempts allowed for each exam, and completion of other activities designed to support the learning process. Students were given a survey at the beginning of the semester to assess their initial feelings toward a traditional grading process. A follow-up survey will be administered at the end of the semester to reassess how students feel after having experienced this alternative grading format.
The initial goal of alternative grading in this specific course was to benefit accounting students. I would like to use the results of this course to determine how alternative grading techniques could be applied to other business courses to create a more effective learning experience for students.
Laura Vernon
I propose presenting the findings of a literature review of labor-based contract grading in the technical communication classroom. Technical communication/writing (also known as professional communication/writing) focuses on workplace writing and differs in many ways from academic writing taught in the composition classroom. (Most research on this topic relates to the composition classroom.) As a professional writing (the term I prefer to use) instructor who is beginning my alternative grading journey, I wanted to know if instructors in my field were using labor-based contract grading, why and how they were using it, and what they were experiencing. My database search resulted in five articles, which I analyzed to answer my research questions. My findings indicate that labor-based contract grading is a new practice in my field and more research is needed; instructors using it had positive experiences; and it increased student agency and involvement, improved writing quality, and focused students on the writing process rather than on the writing product. The findings are important for composition instructors as well because many of them are integrating workplace writing into their composition courses.
Courtney Summers
Traditional grading methods in higher education often emphasize standardized assessments, which may not fully capture students' critical thinking, problem-solving, and real-world application skills. The use of case studies provides a more dynamic and practical method for evaluating student learning. In this talk, I will explore the implementation of case studies as an alternative grading mechanism, highlighting their benefits in fostering engagement, analytical skills, and interdisciplinary learning. Students work in learning communities on their cases, with class time dedicated to group work, allowing for deeper collaboration and knowledge-sharing. By analyzing real-world scenarios, students demonstrate their understanding through written analyses, oral presentations, and discussions, moving beyond rote memorization. In the past two years, 100% of students (N=160) who have participated in case-based assessment report experiencing less stress compared to traditional courses, suggesting that this method creates a more supportive and engaging learning environment. Case-based assessments also allow for more flexible, student-centered evaluation, reducing test anxiety and accommodating diverse learning styles. Additionally, they promote continuous feedback and iterative learning, aligning with competency-based education models. Case studies can be used in a variety of courses, and several examples will be provided.
Megan Patnott
I've used a mix of standards-based and specifications grading in most of my classes for a few years. In my introduction to proofs course in Spring 2024, I removed the specifications for final letter grades from my syllabus. Instead, my students and I collaboratively determined their grades through self-assessment reflections and grading conferences. I'll describe how the course was structured and how collaborative grading went.
Kelley Sullivan
Scholarship related to alternative grading is still in a nascent stage. Few published studies use validated tools to measure student learning related to alternative grading strategies. The work-in-progress described herein aims to contribute knowledge to fill this gap in the field.
In this poster, I discuss the effect of alternative grading strategies on student learning outcomes as assessed by comparing the average learning gains of students in two sections of a second-semester algebra-based physics class at my medium-sized comprehensive college. The advantages of comparing results within my institution are that the student population, physical learning space, and teaching pedagogies in the two sections are closely matched. The primary difference between sections is the grading methodology.
In Spring 2024, I measured learning gains related to electricity and magnetism topics using the Conceptual Survey of Electricity and Magnetism. Data analysis using a paired t-test revealed significantly higher learning gains in my alternatively graded section. I am repeating the study in Spring 2025 using the Brief Electricity and Magnetism Assessment and will compare the learning gains in my alternatively graded section to the traditionally graded section within my institution. I will also compare the learning gains from both semesters to those reported in the literature for similarly taught algebra-based electricity and magnetism courses across diverse institutions. Qualitative data regarding student beliefs about their learning will be collected via an anonymous survey and presented alongside the quantitative results.
Converting a course to include alternative grading takes considerable time and effort. Evidence of improved learning gains and positive student outlooks on learning in alternatively graded courses may encourage faculty to adopt alternative grading strategies.
Mike Griffin
Being an educator involves a continuous, reflective cycle filled with questioning, assessing, and revising of one’s teaching practice. A focus of mine over the last three years has been how to make my courses more student-centred, giving students an increased amount of agency within the work. Part of this work involves challenging ingrained and passed-down pedagogical approaches. Through an analysis of recent course innovations and experimentations in studio-based teaching, I will share steps towards the development of my student-centred pedagogy. My presentation will explore the benefits of an artist mindset over a student mindset, with a focus on individual connection to the learning process. This includes sharing my creation of The Do-Over, an embedded structure that embraces the individuality of each student's learning journey. This structure gives students an opportunity to revisit performance-based assignments as their understanding of material develops. As well, I shall discuss the successes and challenges of bringing students into the assessment process. When are students equipped to assess themselves or their peers? I will also include a discussion around choice within grading practices. Additionally, I plan to examine mentorship vs. mastership in the teacher-student relationship. To conclude, I aim to offer insight into my next steps as I continue to advance my practice in a way that centres students in the learning experience.
Amy Lee, Maggie Bergeron, Merle Davis Matthews
In the context of transformations in higher education and a ‘turn to learning,’ essential questions have been raised about the teacher/learner relationship and the role of education. The University of Minnesota College of Liberal Arts Office of Undergraduate Education Faculty Engagement Team has developed a Teaching Fellows Program designed to support the development of a generative community of practice and feedback amongst faculty, both Tenured/Tenure-Track and VITAL/Contingent Teaching Faculty. This cohort program is interwoven with a framework for thinking about teaching as a skill that can be developed, deepened, and researched. A cohort focused on Ungrading/alternative assessment was convened in Spring 2025, allowing faculty to investigate their teaching and assessment practices and work to better align their goals with design and facilitation.
Drawing on literature around communities of practice and critical pedagogies, this poster will share strategies for creating communities of practice and feedback where teachers begin to see themselves as learners, doing the necessary work of shifting assessment practices to better align with student growth and development. This poster will offer data collected from pre- and post-cohort surveys, showing how fellows experienced this cohort and what they will take into their assessment practices as a result of their experience. This poster will also offer recommendations and next steps for future Teaching Fellows cohorts around design, literature, and reflective practices. We will share what the facilitation team learned through facilitating this cohort around identifying and working with cohort members to align their teaching and assessment with the experiences of joy that can happen in the classroom. The target audience for this poster is faculty, administrators, and faculty development leaders.
Matthew Charnley
A common question is whether student grades in a course correspond to their learning and whether they carry this learning into future classes. Ideally, since standards-based grading directly ties letter grades to understanding of learning objectives, this correspondence should be stronger in an alternatively graded class. Since I have noticed that my grade distribution tends to differ from that of my colleagues who teach the same class with traditional grading, I want to analyze the correlation between grades in Differential Equations and follow-up courses in the School of Engineering and see if this differs between alternatively and traditionally graded classes. Following previous articles about alternatively graded classes, I want to compare test anxiety between these classes as well. This study is currently in the data collection phase, but I would like feedback on the survey design, other things to look for, or ways to improve response rate.
Aisling Dugan, Kaitlin Dang
Completion-credit writing assignments provide a low-stakes method for students to share ideas without the fear of penalty or perfection. Multiple completion credit writing assignments have been implemented in two large-enrollment undergraduate biology courses (Microbiology and Immunology) at Brown University for the last 2 years. Students are divided into small groups of 4-6 students who work on other class projects with each other, facilitating some familiarity and trust within this mixed year group. Each assignment provides reading material and asks students to answer one of several possible questions that explore broad bioethics topics, including problematic history in human research, scarcity medicine, indigenous knowledge of plants and medicine, and health disparities. We reviewed all of these written assignments and evaluated student engagement based on the following criteria: (1) word count, (2) responses to group members, and (3) use of AI. Evaluating these data will help us better understand the level of rigor that students devote to these types of completion credit assignments.
Dina Newman
One key takeaway from Susan Blum’s Ungrading is that students learn best from feedback when no grade is attached. To encourage productive use of feedback, I developed an alternative grading system for my Introductory Biology and Genetics courses at a large, private institution in the northeastern U.S. (20-50 students per class). This approach evaluates students’ progress toward course learning objectives (LOs) and provides feedback before assigning grades. Each day, I outline specific LOs, assign aligned readings, and require a preclass quiz with retry options (no penalty for the first retry, slight penalty afterward to avoid random clicking). In class, active learning exercises deepen LO comprehension, and students receive feedback from the Learning Assistant before submitting work for grading. Weekly homework reinforces the same LOs, with feedback available before grading. Exams are scored as "Met" or "Unmet" for each LO, with some personalized feedback. Students can take a new version of the exam, answering only questions aligned with LOs they did not meet. This method is faster, fairer, and promotes mastery. Students are often able to find their own mistakes, but they can also get help before the retake as needed. Nearly all students take advantage of retakes, indicating motivation is not a barrier. Final grades are based on the percentage of LOs met for each topic via class activities (25%), homework (25%), and exams (50%). This system has led to more As and Bs, fewer low grades, and stronger alignment between grades and material mastery. Students report lower anxiety, better time management, and appreciation for improvement opportunities. In Fall 2024, Genetics grades were 34% A, 44% B, 14% C, and 8% D/F/W. I am continuing to adjust the system to achieve an optimal balance of feedback and efficiency. Future work will involve interviews and surveys.
Shelley Dougherty, Mike Weimerskirch
All mathematics instructors value students who can effectively communicate their mathematical knowledge; however, these mathematical communication skills rarely make it into course grading schemes. These skills, and hence their assessments, differ from building content knowledge and mechanical/computational skills. For example, when the goals of an assignment are to apply and communicate knowledge rather than recite it, students need to be allowed to demonstrate their ability to write in a way their peers understand, to make connections among symbolic, graphic, and tabular representations, and to create alternate paths to a solution. This talk will discuss a detailed rubric for written and oral communication skills, derived from the ELIPSS (Enhancing Learning by Improving Process Skills in STEM) process skills and rubrics. This rubric is currently in use in the large-enrollment (100+ students per section) precalculus courses at the University of Minnesota, where students learn the course content through a flipped-classroom model and spend class time on active learning activities and communicating their problem-solving processes to their peers. We will go into detail about how this rubric is implemented (e.g., how we communicate feedback through the LMS) and the effect it has had on students' progress through calculus.
Michael Johnson
This poster presents an alternative grading framework centered on student-led, instructor-guided assessments designed for writing-intensive courses. I first implemented an early version of this framework in the 2023 Fall semester and have since used it within three different classes at the 100-, 200-, and 300-level. In brief, the assessment framework asks students to engage in metacognitive evaluation at three intervals, discussing their performance, engagement, and growth in the course. The goal of the framework is to enhance student learning by increasing student agency, investment, and competence while fostering a course environment that is more equitable, inclusive, and empowering. This poster will outline the current framework, including ancillary considerations influenced by the framework, such as course design and feedback strategies. By sharing this work, I hope to invite generative conversations on the implementation and future assessment of the framework.
Megan Rhee, DJ Trischler
In career-focused creative fields like communication and graphic design, hiring decisions often prioritize portfolios and organizational “fit” over GPAs. Yet, many design programs rely on traditional grading structures as the primary assessment method, reinforcing expectations that may not align with industry realities. While research on ungrading has expanded across disciplines, a comprehensive synthesis of the alternative assessment approaches in creative education is noticeably absent. The lack of literature suggests that either ungrading is not widely practiced in creative education or that existing practices have not been clearly identified, formally studied, and published.
As communication design educators, we continue to explore and experiment with ungrading by assembling disparate resources across many fields. Our research focuses on a literature review where seemingly little literature exists, aiming to bridge the design industry, design education, and broader pedagogical research. Additionally, because there is no established body of research in our field, we are developing a faculty survey and interview questions to uncover existing alternative assessment strategies in design education from faculty who may be using alternative grading informally or without structured research documentation. Preliminary findings from the review will be presented, mapping key themes for further exploration. Survey and interview designs will also be shared before promoting and recruiting study participants. By identifying and amplifying these practices, this project seeks to consolidate and address gaps in the literature and advocate for more authentic, creative, and industry-aligned assessment models for faculty consideration.
Alyson Huff, Mark Hussey
The growth of student-centered teaching practices has necessitated widespread reforms to our teaching, challenging us to develop an alternative method of assessing learning that subverts the point- and task-based grading tradition with a more transformative and meaningful model. Liberated from points for the last six years, your presenters have learned from trial and error, gained colleague insights, gathered learner feedback, and noticed an increase in student completion rates. In our series of talks, you will: 1) Learn about a ‘holistic assessment’ alternative grading technique that applies the science of learning to advance student achievement of learning outcomes through formative and summative assessments supported with narrative feedback, 2) Discover what this approach looks like in a College Composition classroom at a community college and the impacts on teaching, and 3) Contrast the same approach in a Logic course at a state university, accompanied by student perspectives. Presenters will share tips and practical strategies for how to get started.
Beth Rawlins
Alternative grading is a great way to encourage a growth mindset, but offering students the chance for reassessment is a balancing act due to practical constraints. This presentation will cover the experience of setting up and implementing standards-based grading for the first time in a STEM Calculus 1 course. The results will include instructor reflections and student surveys, which will be used to offer "fixes" for continued implementation.
Heather Barker, Ryne VanKrevelen, Nicholas Bussberg
We explore how three statistics instructors at a mid-sized private, liberal arts university have implemented and refined different alternative grading approaches to better assess and support student learning, growth, and engagement. These approaches have been used in statistics courses ranging from introductory level through upper-level undergraduate courses. Sections typically contain 20-30 students with an emphasis on engaged learning. The three instructors have implemented grading practices such as specifications, standards-based, ungrading, and combinations of the three. Through firsthand experiences, we will share insights into what has worked, the challenges we have encountered, and how our grading philosophies have evolved over time. Data from IRB-approved pre-, mid-, and post-semester student surveys will be shared in our poster. In the surveys, students reflect on how their experience in these courses compares to previous mathematics and statistics classes in terms of grade anxiety, how grades reflect their understanding of material, opportunities for growth in the class, and more. In collaborative discussions amongst the three of us, reflecting on our own practices has helped us refine and create our own grading systems that not only value student growth but also help us to prioritize our own beliefs about what assessment looks like in our individual classrooms. In addition to the results from the student surveys and our own reflections, we will show the importance of the role of messaging to students, and we will provide recommendations on how to scaffold alternative grading approaches over time as opposed to an all-or-nothing model.
Michelle Davidson
I propose a talk sharing a feedback delivery method developed over the past four semesters which has improved student trust in me and in the no-grades/only feedback philosophy of learning.
Too many students enter college without receiving quality feedback. In a survey delivered to 77 of my first-year writing students in fall 2023, the majority reported receiving only superficial comments, surface-error marks, and corrections from high school teachers. Many described the feedback as “negative” and as comments about “what I did wrong.”
It is not surprising then that students don’t warmly receive my promise to emphasize learning through feedback, not grades. I realized that they were expecting negative feedback.
This inspired a “Here’s what I am learning about you as a writer” approach to providing positive feedback, which stretches the boundaries of formulaic feedback models. In a short letter to the student, I identify and explain the skills the writer is demonstrating in a paper, what the student has done well, and provide evidence from the student writing to back the claim—so that the student recognizes my feedback as authentic, substantive and helpful. This is followed by formative feedback for improvement and a revision checklist fashioned by hacking the Blackboard rubric tool.
I have observed significant increases in trust in me as an instructor and in alternative grading, increased student confidence and motivation to revise papers, and less fear and dread as deadlines approach.
Student comments from end-of-term reflections and in course evaluations speak to this. It is also notable that students report that their confidence and abilities “spillover” into other classes and job settings. I, too, notice that I am improving my feedback delivery skills.
This talk will provide attendees with frameworks for positive response for writing projects and guidance and strategies for tailoring this approach to STEM-H assignments.
Joss Ives
In my second-year Computational Physics course, I implemented a variant of Specifications Grading that focused primarily on three projects, which also served as the narrative anchors of the course. Students had multiple resubmission opportunities to bring each project up to "Meets Specifications" standards, and were expected to address even minor revisions before meeting specifications. The additional layer is that students can also have their projects awarded with distinction when a project goes significantly above and beyond expectations. This is framed to students as something that happens via "editor spotlights," where work is highlighted based on a combination of originality, potential impact on the field, and quality of communication. To have a project awarded with distinction, students must first self-nominate as wanting to go above and beyond expectations, or have a grader nominate their project. Projects seeking distinction then go through additional rounds of feedback directly with the instructor, once the grader has established that the project has met specifications. In this presentation, I will reflect on the large improvement in project quality under this distinction mechanism, as opposed to the systems implemented in the previous two years, which had provided different opportunities and incentives for students to go above and beyond in their project work. I will discuss the features of this system that seemed to increase student motivation related to the ambition of their projects, and the joy I felt from collaborating with students to help guide their creativity and ambition toward final products they were extremely proud of. Finally, I will provide recommendations to other educators interested in implementing a high-touch grading system that encourages students to go above and beyond meeting specifications.
Jessica Tinklenberg, Jeremy Schnieder
Transdisciplinarity values collaboration, boundary crossing, and empathy to address complex problems with diverse and invested partners. We propose that negotiated grading, a form of ungrading, can benefit from a transdisciplinary framework and approach. In this talk, we will use examples from our classrooms at a regional university and medium-sized community college system to discuss the ways a transdisciplinary practice can meaningfully enhance the process of collaborative, negotiated grading.
Tori Day, Jordan Lassonde
In their book Grading for Growth, Talbert and Clark discuss the Four Pillars of Alternative Grading framework, which includes the pillar of helpful feedback and emphasizes the importance of feedback loops. In her classes, Tori recognized that the way she was implementing instructor-based feedback loops put high demands on her and privileged one type of feedback (i.e., instructor-to-student rather than student-to-student). Across campus, while administering the MHC Speaking, Arguing, and Writing (SAW) Program, Jordan realized that the mentoring done by SAW peer mentors could offer an alternate feedback loop by empowering students to give each other helpful feedback.
In this poster presentation, we will introduce a layered student-to-student feedback loop designed to support a mathematical writing assignment for an upper-level probability course Tori taught. The first layer of this feedback loop was SAW peer mentor to probability student in the form of a peer review workshop. The second layer of the loop involved the creation of a peer review structure so probability students could give and receive helpful feedback from each other. The third layer of the loop was the (alternative) grading of that peer review task.
Along the way, we will also discuss how this task’s layered nature shifted the power dynamic in the classroom (from instructor giving meaningful feedback to students to students being empowered to give each other meaningful feedback). We will conclude by alluding to other collaborations between our two programs and further thoughts we have about how to build a bridge between college writing centers and alternatively graded mathematics classrooms.
Nisha Fernando
This paper presents a competency-based alternative ‘grading’ process in two undergraduate interior architecture courses at a four-year university. One was a lecture course with 25 second-year students, while the other was a capstone studio where 12 fourth-year students completed a semester-long, individual design project with one-on-one instruction. A competency-based alternative evaluation allows for more flexibility to determine a range of competency levels among students, thus enabling accommodation of different learning styles and performances. In the studio, a short pre-semester survey revealed varied learning styles. Based on their learning style, the author set instructions for each student on expected competencies. Students found this flexibility immensely helpful, and it generated more enthusiasm for learning. Competencies were evaluated individually, through personal verbal feedback, and students could continue to modify their work. The lecture course included three solo assignments. The author formed small groups based on common learning styles, and competencies were generated for each such group. Students completed the assignments on their own, groups met with the author to discuss the competencies broadly, and each student discussed their work individually. If certain competencies were not satisfactory, the student was able to work on them more. No due dates were set; students could continue to work on their assignments and demonstrate their knowledge and skills throughout the semester. Many positive outcomes were noted. The anxiety of grades was significantly reduced; demonstrating various skills instead of gaining points led students to own their work. The flexibility led to more inclusive learning environments for diverse learners. The absence of rigid timelines encouraged more cumulative learning in both courses, and feedback was a better direct assessment of learning. This approach may work only in smaller classes where direct interaction is possible.
Tamar More, Todd Basi
One potential barrier for faculty implementing alternative grading methods such as specifications or proficiency grading is the concern about managing large numbers of reassessments outside of course meeting times. We present our implementation of a pilot proctoring program for reassessments that includes both a stand-alone proctoring model, in which faculty are paid to proctor reassessments in a dedicated space, and a co-op model, in which faculty welcome reassessing students from other courses into their classrooms during regular testing. We will discuss the impact of our first trial run of the project on alternative grading efforts on campus, as well as other creative ways to support reassessment.
Christopher Adamson
The values of the ungrading movement to promote student autonomy and confidence for the sake of reigniting a love of learning provide an excellent foundation for responding to educational concerns about generative AI. With the movement’s focus on promoting equity and eschewing carceral practices characterized by surveillance and control, ungrading methods can promote the project of forming students as whole persons who can evaluate and use or reject emerging technologies in a mature way.
This workshop responds to the recent crisis surrounding developments in large language models and generative AI with a relational view of education informed by the emerging world-centered approach to education and care ethics. In it, higher education faculty, staff, and administrators, along with secondary education teachers engaged in college readiness, will explore how to form students as autonomous and confident evaluators of emerging technologies. An assignment sequence for a first-year experience class that invites students into the intellectual life and prepares them to respond to AI with maturity will be demonstrated, along with conversation on how aspects of it can be applied to different disciplines. The sequence transitions from grade-free zones, where students are restricted from using AI, to a traditionally graded AI-assisted research project, and concludes with a student-driven, public-facing digital scholarship project. In the final project, students propose their own AI policy and evaluate their own work with ungrading reflection rubrics.
Throughout, participants will work within a personal digital journal powered by H5P to apply principles from the session to their own formation of the students entrusted to their care. Participants will leave with a plan to promote student autonomy and confidence in their classroom through the integration of ungrading practices with mature technology adoption.
Richard Wilson, Itamar Kastner
This presentation discusses scaling skills-based grading (SBG) in a large UK undergraduate linguistics class (160 students). One of the main topics in our second-year introductory course is syntax (4 weeks of 11), a subject many students find difficult or intimidating. Recent findings indicate students often view syntax as "exhausting", "frustrating", or even exclusionary (Bjorkman et al 2023), raising parallels with math: a subject students are told, implicitly or explicitly, that they aren't good at (Coles & Sinclair 2022). Compounding this general trend was our course's traditional "one and done" assessment, which provided limited structures for feedback or development.
SBG has been championed in the linguistics sub-disciplines of phonetics, phonology, and semantics (Zuraw et al 2019; O'Leary & Stockwell 2021, 2022), and has been implemented for syntax by a colleague at a US institution as well. While we wanted to address the issues above by implementing SBG (coupled with other changes such as more active learning), the large class size and limited TA hours necessitated some form of automated grading without sacrificing feedback quality.
All previous linguistics SBG systems relied on manual grading. We discuss how we spread our 24 skills over discrete automatically-graded Blackboard exercises (two attempts allowed), and four integrated short-form write-ups submitted at recitations graded by TAs. In addition, we had 5 general research skills students could write up for an "A" grade. Grading was binary (proficient / not yet) with feedback provided automatically on Blackboard, or as answer keys after recitations.
We further discuss differences between our iteration and the traditional one, not just in grade distribution but student growth, reactions, and ability to "catch up" during the semester; the importance of support from our Learning Technologists; the impact this approach had on the classroom sessions; and other suggestions for scaling alternative grading to large classes.
Robert Weston
Inspired by Grading for Equity by Joe Feldman, and Building Thinking Classrooms by Peter Liljedahl, I have attempted to implement Standards-Based Grading (SBG) in 100-level and corequisite support math classes at my home institution, a two-year college on quarters (10 weeks). The shift has required significant investment of time, alignment of course materials, and iteration as I receive student and peer feedback.
In this session I will share granular learning outcomes (LOs), the grading structure showing how student performance on these LOs builds their grade, assessment protocols, aspects of SBG I am struggling to implement within the compressed timeline of a quarter system, how students track their grades, and how this information is displayed in the LMS, Canvas.
I would welcome suggestions, feedback, references, and anything else the community could share to improve my implementation of these practices.
Daniel Dries
The Four Pillars of Alternative Grading serve as the guiding principles for changes to grading architectures in our courses. Some traditionalists have argued that the Four Pillars erode the seriousness – or the “rigor” – of coursework. Since the Four Pillars are structural – and not cognitive – features of a course, these arguments are arguments about structural rigor, not cognitive rigor. Such debates are academic and theoretical. The Four Pillars, however, are felt principally by the students. This raises the question, “How do students view the ‘rigor’ of an alternatively graded course versus a traditionally graded course?” To probe this question, we qualitatively interviewed students taking an introductory science course that used standards-based grading (SBG). Interviewees were asked to compare their self-efficacy, their motivation, and their science identity between SBG and a points-and-percentages format, as well as the “rigors” of each format. A subset of interviewees were students retaking the course, providing an opportunity to understand how a student responds to SBG within the same curricular context. Together, these interviews provide instructors – from the alternative grading practitioner to the alternative grading skeptic – with insights into how students perceive the learning environment of traditionally and alternatively graded courses.
Emily Nemeth
Faculty grading practices are shaped by a range of forces: inherited norms, expectations, and institutional policies; personal experiences in schooling systems; and graduate training, to name a few. And while the practice of grading is central to the work of faculty, it receives far less attention and energy than our pedagogies, which are often the focus of reflection and evolution to meet the needs of students and to respond to changing demographics and changing times. This research places grading and our grading ecologies at the center of the work of faculty, and as central to the social practice of teaching and learning. In this larger research project on grading we ask: why do faculty grade the way they grade, and how do they describe the relationship between their grading practices and their pedagogies?
While the goal is to pursue human participants research for this project, in this poster we address one of its several subquestions in preparation: what is the legacy we’ve inherited around grading, and how did it take shape? The research displayed on this poster presents a preliminary analysis of documents from the archives of a small midwestern college. Drawing on the work of Deborah Brandt, we’re interested in the literacy sponsors that gave shape to the grading practices at the college and how these sponsors came to be perceived and deputized as stakeholders and social actors in grading policies. The archival documents focus mostly on a single institution, but they also include some correspondence about grading, dating back to the early part of the 20th century, among deans and presidents of small midwestern colleges.
We are eager to receive feedback from the community that assembles as a part of The Grading Conference on this early research.
Kyle Teller
In Spring 2025, we implemented a mastery grading system combined with a flipped classroom in an introductory statistics course at Salisbury University. We compare the effectiveness of and student attitudes toward our approach to the traditional grading systems used by other professors in our department teaching the same course that semester. We measure efficacy by comparing common final exam scores between students in our classes and those in other classes, and we compare student attitudes using a 7-question survey distributed to all students in all classes. Our grading system is based on the work of Heubach and Krinsky: for each of the 17 Student Learning Outcomes (SLOs) based on the course syllabus, students received a course grade based on their cumulative number of masteries on online SLO Homework, weekly SLO Quizzes, and course projects. Each SLO Homework and project had a mastery standard of 90%. Each SLO Quiz had a mastery standard of at most one minor error out of all (4–7) questions, with up to 4 reattempts. A student’s score on the final exam could then raise, lower, or leave unchanged the student’s course grade. We plan to refine our approach and collect data over several semesters to understand its impact.
Angela Hanson
Ungrading is an alternative assessment method that focuses on feedback and self-assessment over scores and letter grades. This can take many forms, but much of the time, it requires meeting with students individually to discuss their progress. These meetings can be a barrier for many faculty because of scalability. In an effort to address this, I have developed a Qualtrics survey to be completed weekly that walks students through updating me on their progress. Then I schedule follow-up discussions as needed to clarify or fill in gaps in their responses.
While this survey helps streamline the self-assessment process, it also provides a record of student progress on course goals over the course of the semester. I am using these surveys (along with a couple extra) as an opportunity to measure student progress and try to quantify the effectiveness of ungrading in improving equity, mental health, metacognition, and growth mindset.
Angela Hanson
Ungrading is an alternative assessment method that focuses on feedback and self-assessment over scores and letter grades. This can take many forms, but much of the time, it requires meeting with students individually to discuss their progress. These meetings can be a barrier for many faculty because of scalability. In an attempt to address this, I have developed a Qualtrics survey to be completed weekly that walks students through updating me on their progress. The key to guiding students through self-assessment has been to provide clear course goals and then outline specific progress levels for each type of goal.
So far, I have implemented this in two math electives for Liberal Arts students. These are survey courses designed to provide students with a positive experience with math and expose them to how math is used in their fields and daily life. I have been developing these courses for two semesters, Fall of 2024 and Spring of 2025. In the fall semester, I had about 50 students using this system, and in the spring, I had about 25 students using it.
In this workshop, I will share how I am currently implementing this system, and then we will work in groups to develop a course goal, the levels for that goal, and survey questions that lead students through self-assessment. Then we will trade materials with other groups and simulate student responses to discuss what challenges can come up and how we might overcome them in our course design. The goal will be to provide a framework for designing a similar system in your own course and to address any questions or concerns about implementation.
It helps to be familiar with ungrading, but it is not expected of participants. To get the most out of this experience, participants should have a sample course to work on and bring a syllabus or course description that outlines the learning objectives or required material. This workshop is designed for college-level instructors, but all are welcome!
Sarah Ghoshal
This 20-minute talk will showcase how to use spreadsheets for effective invention and revision in the composition (often First Year Writing) classroom, as well as how grading these entries is equitable and focused on keeping all students at the same starting point, regardless of past work or grades.
Spreadsheets are used to brainstorm ideas and then revise those ideas, offering a collaborative environment. They are projected in front of the class to show work in real time. Effective in both traditional and blended/online courses, these spreadsheets are used consistently throughout the class, revisited regularly, and graded based on completion of the columns associated with specific assignments. Students have so far reported appreciating the streamlined and equitable approach of using spreadsheets in their courses.
Maggie Prater, Laura Baumgartner
In our Community College Science department, multiple faculty and instructors have shifted to alternative grading, most recently through a department-wide grant to support active and inclusive learning. As a result, we have a wide array of alternative grading strategies, yet none of them fit into the strict boxes of “Specifications” or “Ungrading”. Instead, each pulls pieces from different frameworks to support the values of the designing instructor. Panelists at different stages of implementation in multiple science disciplines will share the systems they have developed, how their values shaped their systems, as well as pitfalls and solutions encountered. Panelists will share their syllabi and other documents used to support their grading systems, along with a one-pager on how each system supports Talbert’s Four Pillars of Alternative Grading.
Michelle Hock
All educators have heard the question: “How many points is this worth?” which students often pose to determine how much – if any – effort they should be putting into a required reading or assignment. However, educators also know that students need ample opportunities to learn new content and develop new skills, and that at this formative phase when meaning-making and feedback are key, assigning points for “completion” or “effort” goes against best practices in assessment and grading.
In asynchronous online courses specifically, where preparing for class is not a motivating factor for completing non-graded tasks, instructors often resort to assigning point values to formative work (often simply on the basis of whether something was completed). This “carrots and sticks” approach to grading may actually undermine the learning process, as students view online activities as boxes to check rather than as opportunities to develop knowledge and skills.
In designing several online courses, I was confronted with this challenge. If I assigned no points to formative activities, students either did not complete them or expressed dissatisfaction with the fact that the tasks “weren’t worth anything.” Students also struggled to conceptualize why summative assessments were weighted so heavily, often expressing anxiety about how much of their grade was determined by these larger assignments. To contend with these challenges related to students’ perceptions of the course grading scheme, I (a) focused on increasing grading transparency, and (b) taught my students about best practices in formative and summative assessment use, with particular emphasis on grading. With these changes to my courses, I received positive feedback from students that I applied to designing new courses moving forward. In this poster, I will share my lessons learned and recommendations for other educators who are looking to improve grading practices and motivation in asynchronous online courses.