
Phase II - Refining interaction and designing wireframes

Introduction

The last report left the team with a solid understanding of the users and user interactions we are most likely to encounter when designing this application. We knew what we needed to design around, but we lacked a clear visual layout for accomplishing our objectives. The main purpose of this sprint's work was to refine the user workflow and the design of the wireframe. This led to a digital representation of a potential workflow to design and build from.

Methods

1. Cognitive Walkthrough

The cognitive walkthrough was performed by usability engineering students external to the project. Two students were given the same persona from our list of user personas and tasked with interacting with the wireframe of our product to complete their persona's scenario. At each point in the interaction, they were to ask themselves the following two questions, sourced from Spencer (2000):

  1. Will the user know what to do at this step?
  2. If the user does the right thing, will they know that they did the right thing and that they are making progress toward the goal?

The two students were to take notes on their answers to these questions and provide them to us for review.

2. Informal Feedback

A class (n ≈ 30) of software engineering students sat in the audience of an end-of-sprint demo performed by the engineers working on this project. The demo showcased the current state of the project, with the engineers walking through a standard user flow from start to finish: accessing the application, uploading a PDF, and general navigation of the website.

At the end of the demo, the engineers asked the audience the following two questions:

  1. If you've used similar products, are there any features that are missing you think would be beneficial to this app?
  2. Do you think there would be a benefit to being able to load/work on multiple PDFs at once?

The floor was then opened for the audience to answer, with multiple answers solicited for each question. Responses were recorded by the engineers performing the demo and posted internally for review.

Findings

1. Cognitive Walkthrough

The primary finding from this research was that certain pages necessary for the user flow were missing. Specifically, there was no indication that the software was loading (or doing anything at all) when the user clicked upload on a PDF. Most importantly, there was no design for what the "alt-text" tagging tool would look like, nor for how the user would interact with it.
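As a concrete illustration of the missing feedback, the sketch below shows one way the upload step could expose a loading state for a "processing" page or spinner to display. It is a minimal sketch only: the `/api/upload` endpoint, the `UploadState` shape, and the TypeScript front end are all assumptions, not part of the current wireframe.

```typescript
// Hypothetical upload helper: the endpoint name and state shape are
// illustrative only and do not reflect the current design.
type UploadState =
  | { kind: "idle" }
  | { kind: "uploading" }                  // drives a spinner / "processing" page
  | { kind: "done"; documentId: string }
  | { kind: "error"; message: string };

async function uploadPdf(
  file: File,
  onChange: (state: UploadState) => void,
): Promise<void> {
  onChange({ kind: "uploading" }); // user immediately sees that work has started

  try {
    const body = new FormData();
    body.append("file", file);

    const response = await fetch("/api/upload", { method: "POST", body });
    if (!response.ok) {
      throw new Error(`Upload failed with status ${response.status}`);
    }

    const { documentId } = (await response.json()) as { documentId: string };
    onChange({ kind: "done", documentId }); // signal that the next page can load
  } catch (err) {
    onChange({ kind: "error", message: (err as Error).message });
  }
}
```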

The UX students raised some other minor questions, namely whether certain pages would automatically redirect to the next page or whether the user would need to click a button.

2. Informal Feedback

Entirely new avenues of exploration were raised during the informal feedback. An OCR feature, previously out of scope, received enough suggestions to bring it back into consideration. However, this is primarily an engineering concern; the design for the OCR feature will follow a similar structure to the other tools in the application.

Attendees also suggested a view showing only the images that still need tagging, along with some way of keeping track of progress (so the user knows how many images are left to tag). A modal popup dialog will need to be designed to support this, and the tool component design will need to be altered to support tools that simply open a modal; a rough sketch of one way to structure this is given below.
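The following sketch shows one possible shape for a tool component that can either render inline or simply open a modal, with tagging progress passed to the modal. The interface names (`Tool`, `ImageEntry`, `TaggingProgress`) are assumptions for illustration, not the existing component design.

```typescript
// Hypothetical extension of the tool component: names and shapes are
// illustrative, not taken from the existing wireframe or codebase.
interface TaggingProgress {
  tagged: number;    // images that already have alt text
  remaining: number; // images still needing alt text
}

interface ImageEntry {
  id: string;
  altText?: string; // undefined means the image still needs tagging
}

// Tools either render inline in the editor or simply open a modal.
type Tool =
  | { kind: "inline"; name: string; render: () => void }
  | { kind: "modal"; name: string; openModal: (progress: TaggingProgress) => void };

function taggingProgress(images: ImageEntry[]): TaggingProgress {
  const tagged = images.filter((img) => img.altText !== undefined).length;
  return { tagged, remaining: images.length - tagged };
}

function activateTool(tool: Tool, images: ImageEntry[]): void {
  if (tool.kind === "modal") {
    // Modal tools receive the current progress so the dialog can show
    // "N images left to tag" and list only the untagged images.
    tool.openModal(taggingProgress(images));
  } else {
    tool.render();
  }
}
```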

Conclusions

From our cognitive walkthroughs and informal feedback, we identified a need for a straightforward flow from document upload to editing, getting the user into the editing phase as soon as possible. We also need to provide a mix of easy-to-use tagging tools for quickly editing PDFs alongside more advanced options for power users. These conclusions resulted in more in-depth tools in our wireframe, along with designs for the individual tools. Extra informational pages were added to give feedback to the user, and previously included steps were refined to show more information and give the user more control.

Caveats

For this sprint, our main research method was an informal cognitive walkthrough conducted by non-experts; compared to walkthroughs run by UX experts, ours could be missing crucial aspects. We also did not gather a large data set: only a few evaluations were conducted, which does not give us much to work with when evaluating the cognitive walkthroughs. A good way to gather more data would be to conduct surveys to discover users' unmet needs. Finally, the informal feedback consisted of only a few responses, given by students who have not used the product themselves and were only able to watch a demonstration.
