We all know we should be using usability testing to improve our library websites. However, lack of time, resources, and know-how can make the process feel daunting. For academic libraries like ours, there can be an additional pressure (perceived or real) to turn every usability test into a full-blown study complete with statistics and a twenty-page, publication-worthy paper.
While deep, robust, and expansive studies can be valuable in many cases, I’d like to argue that the kinds of usability tests needed to meet the expectations of today’s users are much smaller and have an audience of just one: you (or your team).
In this post, I’ll show you how we used fast, cheap, and loose usability testing techniques when redesigning the HBLL website, walking through a specific example in which we carried out three usability tests in a single day. Our goal was to improve the design of our site’s header on mobile devices (we use responsive design).
Before we dive into the story, here’s a quick rundown of our equipment, setup, and procedure for mobile testing.
Equipment and Setup
When we want to record mobile tests, we use the following equipment:
- A standard tripod
- A JOBY Action Clamp + GorillaPod Arm
- A JOBY GripTight Mount
- My personal smartphone
The Action Clamp screws into the top of the tripod. (It also has a clamp for attaching it to a structure, such as the side of a study carrel, but we use the tripod mount.) The bendable GorillaPod Arm, which comes with the Action Clamp, attaches to the clamp, and the GripTight Mount (sold separately from the other two JOBY products) attaches to the arm and holds my smartphone. I place my iPhone in the GripTight Mount with the screen facing up and the camera facing down, so the user’s phone appears on my screen.
What it looks like:
To record a test, we use the camera app on my phone, making sure the user holds their phone under my phone so it’s in view. (When we run desktop tests, we use a MacBook Pro with Silverback installed. It works great.)
Finding Participants
According to most web usability experts, including the well-known Steve Krug in Don’t Make Me Think, there is no need to spend lots of time finding a “representative sample” for your tests. You also don’t need more than 6–8 willing participants to find a majority of the usability issues.
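The usual justification for small samples comes from Nielsen and Landauer’s problem-discovery model, in which the expected fraction of usability problems found with n testers is 1 − (1 − L)^n, where L is the average probability that one tester uncovers a given problem. As a rough sketch (using their commonly cited average of L ≈ 0.31; your own discovery rate will vary by site and task):

```python
# Problem-discovery model (Nielsen & Landauer, 1993):
# fraction of usability problems found by n testers is 1 - (1 - L)**n,
# where L is the chance one tester uncovers a given problem.
# L = 0.31 is their commonly cited average -- an assumption, not a law.

def problems_found(n: int, L: float = 0.31) -> float:
    """Expected fraction of usability problems uncovered by n testers."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 8):
    print(f"{n} testers -> {problems_found(n):.0%} of problems")
    # e.g. 5 testers -> 84%, 8 testers -> 95%
```

The curve flattens quickly, which is why a handful of testers per round (and more rounds, not bigger rounds) tends to be the better trade.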
We normally find our participants in the main atrium of our library, where students are constantly coming and going. We set up just off the main stream of traffic and lay out a big bag of fun-size candy bars on a table. Then we wait. Some students notice the candy and come to us. Other times we have to politely wave handfuls of candy bars at students to coax them over. Once, though, we had a long line form, with some students waiting up to thirty minutes just to get some candy! (Admittedly, we were using full-size candy bars for that particular test.) And yes, we realize this means our sample populations are skewed in favor of those who like chocolate. Oh well.
Three Tests in One Day
So, back to our story. It was just twenty-four hours before the official release of our most recent redesign. Though we’d been doing user tests on the site throughout the redesign process, we had just made some pretty drastic changes to the mobile version of our header and we wanted to test it with users before the big day. Our main goal was to find out if they would notice and use the “hamburger” menu icon (in the upper left corner), which accesses the main navigation panel. To be honest, we felt certain the design would fare well, but we wanted to be sure.
Test #1
Our initial interface looked like this:
We provided four tasks for the users to do:
- Start a live chat with the library
- Browse newspapers the library has available digitally
- Find contact information for the library’s circulation department
- Find information on the history of the library
Some of these required the user to tap the hamburger icon in order to complete the task; others could be finished through other parts of the interface.
After testing just three users, we knew we had a problem. Not a single one even noticed the hamburger icon. After each test, I would point out the icon and show them how it worked, to which they all responded, “Oh! That navigation looks awesome. I just didn’t notice the icon.”
Test #2
After tweaking the design, we asked the same questions as before, but this time we tested eight students. The results were (mostly) better. Only one of the eight didn’t notice the navigation at all, a definite improvement. Five of the students used the icon, two of them rather quickly. The remaining two saw the icon but decided not to tap it.
This new design seemed to be a step in the right direction. However, we were concerned about those two users who saw the hamburger icon and yet didn’t tap it, so we decided to try something else.
Test #3
For this test, we changed the hamburger icon to a simple link labeled “Menu”:
We asked the same questions as in the first two tests, again with a sample of eight students. We were surprised by the results. Just as in the second test, five students saw and used the menu link. However, of the three who didn’t use it, two didn’t see it at all.
The Final Decision
In the end, we decided to go with the design we used in Test #2, with the hamburger icon. We knew it wasn’t perfect, but the majority of users used it, and almost everyone saw it.
All this testing made for a busy but fun day. It felt good to make such rapid improvements to our site, knowing we had usability tests to back up our decisions. We are currently working on some ideas that will (hopefully) make the site navigation even easier to find and use. And of course we’ll test those ideas as well.
Questions? Leave a comment! We’d love to hear from you.