Upgrading Packback’s Gradebook: A UX Case Study

Katie Stakland
Published in The Startup
7 min read · Apr 30, 2020

A professor’s primary role is to teach but, as with most jobs, professors often get stuck doing repetitive administrative tasks to run their courses. For professors using Packback, one of these tasks was finalizing and uploading Packback assignment grades to their Learning Management System (LMS) gradebook. The gradebook feature had clear opportunities for improvement that would make calculating and downloading grades much easier.

At the start of the project, the gradebook feature provided users with data that could be used to calculate a grade for a student, but required the use of outside software, such as Excel, to perform final calculations. To upload those grades into their LMS, professors then needed to format these grades in a specific way. This process often required training or support from a member of the Packback team.
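As a rough illustration of that formatting step, the sketch below builds a CSV in the general shape an LMS gradebook import expects. The column names are hypothetical examples; the exact layout varies by LMS.

```typescript
// Sketch of exporting grades as a CSV for an LMS gradebook upload.
// Column names vary by LMS; the headers here are hypothetical examples.
interface GradeRow {
  studentId: string;
  studentName: string;
  grade: number;
}

function toLmsCsv(rows: GradeRow[], assignmentName: string): string {
  const header = `Student ID,Student Name,${assignmentName}`;
  const lines = rows.map((r) => `${r.studentId},${r.studentName},${r.grade}`);
  return [header, ...lines].join("\n");
}
```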

We set out with the primary objective of enabling users to perform all final grade calculations within Packback’s gradebook and prepare the file in the proper format for direct upload into the LMS, without the use of any outside software (like Excel). Our second objective was to produce a solution that allowed for a wide range of grading schema customization while still being quick and self-explanatory for a professor to configure on their own.
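To make that customization objective concrete, here is a minimal sketch of the kind of settings such a grading schema might expose. The field names and defaults are illustrative, not Packback’s actual data model.

```typescript
// Illustrative sketch of a customizable grading schema. Field names and
// defaults are examples, not Packback's actual data model.
interface GradingSchema {
  questionsPerDeadline: number;  // how many questions a student must post
  responsesPerDeadline: number;  // how many responses a student must post
  minCuriosityScore: number;     // quality threshold for posts to count
  pointsPerDeadline: number;     // credit for meeting all requirements
}

// A starting point a professor could accept as-is or customize.
const defaultSchema: GradingSchema = {
  questionsPerDeadline: 1,
  responsesPerDeadline: 2,
  minCuriosityScore: 50,
  pointsPerDeadline: 10,
};
```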

Research the Problem

I dove into the project by creating a list of questions. Some of these questions were:

  • What is the current Packback experience? What data is generated for professors and how do they use this data to calculate final grades?
  • What data does each LMS require for professors to upload grades? What is this experience like?
  • How much time are professors spending grading Packback assignments each week? How many are using other software, or relying on Packback’s support staff, to generate grading spreadsheets? What formulas do those spreadsheets use?
Image of questions handwritten in a notebook

I also developed a list of assumptions the team was currently making about the experience. I countered those with anti-assumptions to keep myself unbiased while researching and designing.

A chart of assumptions and anti-assumptions about grading Packback assignments

To begin answering my questions, I needed to perform an audit of the current experience. I was new to the team and had little exposure to the gradebook, so I used a test account to replicate a professor’s experience in Packback. I then downloaded the data available on the platform, ran it through Excel with a formula, and attempted to upload it to test instances of multiple LMSs. I documented my experience, highlighting pain points and steps that required high cognitive load.
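For a sense of what that Excel step involved, here is the kind of per-student, per-deadline calculation a professor might have maintained by hand. The weighting below is a hypothetical example of the pattern, not a formula from an actual course.

```typescript
// Hypothetical example of the per-student calculation professors rebuilt in
// Excel. Credit each requirement proportionally, cap at 100%, then average.
interface StudentActivity {
  questionsPosted: number;
  responsesPosted: number;
}

function deadlineGrade(
  activity: StudentActivity,
  requiredQuestions: number,
  requiredResponses: number,
  maxPoints: number,
): number {
  const questionCredit = Math.min(activity.questionsPosted / requiredQuestions, 1);
  const responseCredit = Math.min(activity.responsesPosted / requiredResponses, 1);
  return (maxPoints * (questionCredit + responseCredit)) / 2;
}
```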

This gave me valuable insight and understanding, but I still didn’t have the full picture; I needed to gather more information from real users. First, I met with internal users: Experience Managers from the Customer Success Team and the Customer Support Team Lead. We discussed how they were supporting professors and the feedback they’d heard from professors about grading. I asked them about best practices they’d observed on the platform to learn how I could guide users to adopt those practices in the new grading experience.

Image of handwritten interview notes in a notebook

Next, I met with external users: professors at universities and community colleges who had used Packback for at least a semester. We discussed their grading requirements, processes, and pain points. Interestingly, these interviews validated some of my anti-assumptions. For example, I learned that some professors had no complaints about the current experience and did not want to change how they were grading. My upgraded gradebook needed to have no negative impact on these users.

I gathered quantitative data on the current user experience through a survey. The survey provided me with data on how much time professors spent grading and how many users were utilizing external software in the grading process. It also gave me a baseline to determine the success of my final solution after implementation.

Charts of survey results
According to my survey, approximately 70% of users were using outside software to grade Packback assignments.

Map Out the Experience

I used all my research to define user goals. With the help of a user story map, I broke down these goals into user tasks. I also determined what needed to be included in this upgrade and what was out of scope for this project.

Image of post-its on a window in downtown Chicago

Wireframe & Sketch

I sketched out preliminary solutions for the experience on paper. These went through multiple rounds of ideation. I discussed the sketches with my CPO, and we decided on the most successful designs. I translated these sketches into digital designs in Figma. I had recently implemented a new design system at Packback (read more about that project here), so I was able to quickly add some basic user interface (UI) elements.

Wireframe sketches on paper

I continued to iterate on the designs in a mid-fidelity state. Iterating in mid-fidelity is helpful in my design process because it allows me to more accurately see how elements will fit on a page.

Define the UI

I met with my CPO again to get feedback on the mid-fidelity designs and the user experience I had created. We decided which iterations should become the finalized solution, and I transformed these from mid-fidelity to high-fidelity designs.

A few areas of the designs required further visual design exploration. For example, there was a field where users selected how many questions and responses a student needed to post per deadline. In initial designs I envisioned this working as a slider. However, my discussions with experience managers taught me that best practice was for professors to require 1–4 responses and 1–2 questions per deadline. A slider is not a good solution for choosing between two options, so it would not work for the questions. A dropdown also felt wrong here. To solve this, I designed a kind of radio button with the number selection inside the button. This saved space and added visual interest to the page.

View of grading calculator with radio button design
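For readers curious how such a control might be built, here is a minimal sketch assuming a plain HTML/TypeScript setup; the names are mine, and Packback’s actual implementation may differ.

```typescript
// Minimal sketch of the numeric radio group: one radio button per allowed
// count, with the number itself as the label. Assumes a plain DOM setup;
// Packback's actual front-end implementation may differ.
function numberRadioGroup(
  name: string,                      // e.g. "responses-per-deadline"
  options: number[],                 // e.g. [1, 2, 3, 4]
  onSelect: (value: number) => void,
): HTMLElement {
  const group = document.createElement("div");
  group.setAttribute("role", "radiogroup");
  for (const value of options) {
    const label = document.createElement("label");
    const input = document.createElement("input");
    input.type = "radio";
    input.name = name;
    input.value = String(value);
    input.addEventListener("change", () => onSelect(value));
    label.append(input, String(value));
    group.append(label);
  }
  return group;
}
```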

The Curiosity Score option needed further visual exploration as well. I originally envisioned this as a dropdown, but a dropdown limited user selection and was not as visually appealing as it could be. Instead, I designed a slider that would guide users toward the best practices the experience managers had observed.

Stylized slider

I showed the slider design to a front-end developer to discuss feasibility. He expressed concern that the design was too ambitious for the project timeline. We also discussed accessibility concerns for visually impaired users: color was the only indication that the user had set an appropriate Curiosity Score, making the experience unequal. I sketched out a few ideas and designed an input field with arrows that a user could either type into or click through. A message and icon appeared next to the selector indicating the difficulty level for students, guiding users to make an appropriate selection for their course level.

View of Curiosity Score selector with message indicating the setting is appropriate for intermediate courses
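The guidance message reduces to a simple mapping from the selected score to a difficulty level. The thresholds below are hypothetical stand-ins for the best-practice cut-offs the experience managers described.

```typescript
// Sketch of the guidance logic beside the Curiosity Score selector. The
// score thresholds are hypothetical, not Packback's actual cut-offs.
type Difficulty = "introductory" | "intermediate" | "advanced";

function difficultyFor(minCuriosityScore: number): Difficulty {
  if (minCuriosityScore < 40) return "introductory";
  if (minCuriosityScore < 70) return "intermediate";
  return "advanced";
}

function guidanceMessage(minCuriosityScore: number): string {
  return `This setting is appropriate for ${difficultyFor(minCuriosityScore)} courses.`;
}
```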

Prototype & Test

I built a prototype in Figma to test the experience with real users. I reached out to professors who had filled out the survey from the research stage and asked them to test my prototype over a video call. I also met with experience managers again for their feedback on the solution.

Initial prototype used for usability testing

I gathered all the feedback from the tests and incorporated it into my designs for a finalized solution.

Finalized designs

Implement the Solution

The designs were now ready to be implemented, but first I needed buy-in from leadership. I met with my CPO for final approval on the designs. Then I met with the leaders of the Customer Success team. I wrote user stories in JIRA and presented the project to the engineering team. We collaborated closely throughout the development process.

This was a major update to our product, so I created training documents and presented them to our revenue teams before we released this to users. I wanted to make sure our sales teams were knowledgeable about the upgrades so they could help current users with any questions and present it to new users during a sale.

Training materials
A quick reference sheet sent to the whole company about the update

Take-aways

This was my first major feature update for Packback and I learned a lot throughout the experience. Here are some of my main take-aways from this project:

  • Interviewing users is a skill all Product Designers should learn. My initial interviews with professors and teammates were invaluable to this project. They provided me with critical insights into the product and deep empathy for my users.
  • Collaborate with developers early and often. Getting feedback from developers early on helped me design a solution that worked for all users and could be delivered on time.
  • Consider all users when designing solutions. While this feature upgrade mostly focused on the experience of professors, it was important to improve the experience for students as well, since they are the majority of Packback’s users. I also added the grading requirements input by the professor to the student view.
Student view of course requirements and performance
