All posts in User Testing

Five Things I’ve Learned About redditors (so far)

This is my reddit avatar. All employees get one
About a month ago, I took a gig to design the user experience of reddit. It’s a pretty exciting challenge!  My first projects have been mostly on mobile, and they’ve been a blast.  Check out our recently released AMA app on iTunes and Android, and our recently acquired Alien Blue iOS app.

The first step towards better user experience is better understanding of the users, so the quest begins with understanding redditors. And there are a lot of them: 6% of all online adults! ((Pew Research: 6% of Online Adults are reddit Users)) Understanding so many people requires attacking the problem from multiple angles.

One of the most direct ways to learn about a large user population is through surveys. The benefit of surveys is that they can be deployed broadly and analyzed statistically.  The main drawback is that they skew results towards the users who choose to complete them.

A few weeks ago, I released a test survey to the subreddit ((subreddit: a sub-community within reddit focused on a specific topic)) called /r/samplesize, which is dedicated to posting and taking surveys for other redditors.  I received 226 responses. Bearing in mind the enormous grain of salt that these results come entirely from self-selecting users, here’s what I learned:

1. Twice as many men responded to the survey as women.

While I don’t know how representative this ratio is of reddit as a whole, this is already far more gender-balanced than previous self-selected surveys from three years ago. ((Who in the World is reddit? Results are in…)) ((I made a basic Reddit Demographic Survey. Let’s find out who we are…))

2. Most active users have been redditors for 1-3 years.

This isn’t too surprising considering the survey was given to a subreddit that only longer-term users would be aware of. However, given the site’s high bounce rate, it’s likely that reddit could improve at welcoming and retaining newer users.  After all, if reddit can’t create core users at a rate at or above dropout (churn) rate, its population will gradually decline.

3. Reading favorite subreddits is redditors’ most valued activity.

I asked users to rate their activities on reddit from not very important to extremely important; here are the responses only for extremely important.  As you can see, reading favorite subreddits was by far the most commonly marked as extremely important.  Users’ front pages were the second most commonly marked as extremely important, which isn’t surprising since 99.2% of survey respondents have accounts which they use to modify their front page.

4. Users primarily want reddit to entertain them. Their secondary expectation is for community.

Here, I asked “what do you expect of reddit?” with a freeform response.  The tallies are per response rather than per user, such that if a user said she expected “community and humor,” I’d give one tally to community and another to humor.

Wanting reddit to be entertaining isn’t surprising: it’s the front page of the internet, after all.  What’s particularly interesting is how often community and communication were cited as expectations.  Discussion, particularly through comments, was the second most frequently cited expectation.  The responses grouped under “free speech,” “openness,” and “local content” were mainly variations on the idea that reddit’s content is different primarily because of its community.  Of these, about half mentioned the value of varied perspectives – that reddit provided content and stories that users might otherwise not have found (or answers to questions they were afraid to ask).

5. What frustrates redditors most are other redditors.

For this freeform question, I simply asked redditors what frustrates them about reddit. The majority of responses could be summarized by concern that reddit is or is becoming dominated by negative viewpoints. Most common were concerns that homophobia, racism, and/or misogyny were unduly influencing the community. Second most common were concerns that reddit culture was becoming homogenized. Words such as “hivemind,” “groupthink,” and “in-jokes” appeared frequently. The most common frustration not related to the community was that the site itself was ugly and/or poorly designed.

It’s fascinating to get some insight into how these long-term users think about reddit and its future.  The challenge from here will be to learn more about the people who may not self-select to take a survey: newer users, non-users, and the population of reddit overall.  We’re planning user tests now to learn about newer redditors, and in-person interviews can help give more in-depth data on behavior.  But we’ll continue using surveys too: here’s the next one if you’d like to take it!

How People Use New Tabs

As the web evolves, so does the way people interact with the web. Firefox’s user experience and research teams have been eager to learn about our users’ browsing habits so that we can design for them better.  Lately, Mozillians like Lilian Weng and Jono X have been running some fascinating studies using Test Pilot to determine how, when, and why Firefox users open new tabs.  I wanted to note a few key takeaways from their recent study that give us a glimpse into how our users browse (full studies are linked at the bottom of this post).

A caveat is that these results – as with all Test Pilot studies – are gathered from anonymized data submitted by users who have signed up to participate in Test Pilot. Thus, Test Pilot data tends to skew slightly towards the technical, early-adopter crowd.

How are people currently using new tabs?

Each day, the average Firefox user creates 11 new tabs, loads 7 pages from a new tab, and visits 2 unique domains from a new tab.[1] The average new tab loads two pages before the user closes or leaves it.[2]

Once users have a new tab page open, about half of the time (53%) they navigate to a new page using their mouse, and about half of the time (47%) they use the keyboard.[1]

Here’s a breakdown of what actions users take once they’ve opened a new tab:

Breakdown of user actions taken from the new tab page

As you can see above, the URL bar was the most-used item on a new tab page, with 53% of user actions originating there. The search bar accounted for only 27% of user actions. Even though the bookmark bar isn’t enabled by default in Firefox, 16% of new tab page actions were clicks on a URL there. The history and bookmarks menus were each used less than 5% of the time.

In this study, 17.4% of the domains recorded accounted for 80% of the page views for all participants. You might think that the number of unique domains a user visits would grow in proportion to how active they are. However, this study found that the more sites a user visited online, the more often they would visit the same 20% of domains. It turns out the most active internet users are even more loyal to a few choice domains than their less active counterparts.[2]

[1]Quick report on new tab study, by Lilian Weng

[2]Test Pilot New Tab Study Results, by Mozilla Research Team

Thumbnails, Titles, and URLs: How Users Recognize Representations of Websites

The Mozilla user experience team often designs features that represent sites to users in a variety of ways. For example, Firefox tabs display favicons and page titles, while Panorama displays favicons, titles, and page thumbnails. So, I thought it would be useful to investigate the effectiveness of various ways of representing sites to users.

One interesting piece of research on page representation was published by Shaun Kaasten, Saul Greenberg, and Christopher Edwards at the University of Calgary in their paper How People Recognize Previously Seen Web Pages from Titles, URLs and Thumbnails (download it here). This team conducted a series of studies, most of which involved increasing one variable which represented a site the user had previously visited (such as thumbnail size) until the user recognized it, at which point the user would buzz in to stop the expansion and identify the site.

Here are some key takeaways from what the Canadians learned:

Running sums of how large a growing thumbnail became before participants recognized it

– The graph above plots the thumbnail sizes at which test participants could recognize a domain (black lines) and a specific page within a domain (blue lines). The dotted lines show all responses, and the solid lines show only correct responses. You can see that by the time a thumbnail was 96×96 pixels, 60% of test subjects had identified it.  80% of test subjects identified sites by 144×144 pixels, and by 304×304 pixels everyone had identified the site.



– Users’ guesses about what site a thumbnail was representing were correct about 90% of the time. Not bad, considering on most sites they had no readable text to go by until the thumbnail was over 96×96 pixels. This shows how effective thumbnails are at identifying sites to users.

– Color and layout were the most important factors for identifying a site when the thumbnail was 64×64 pixels and smaller. From 64×64 to 96×96 pixels, color, layout, images, and text were equally important. Above 100×100 pixels, text was most important.  Presumably, sites that hadn’t been identified by that size were visually similar to other sites, leaving text as the only effective differentiator.

– Looking at only truncated URLs and page titles, test subjects could correctly identify sites 90% of the time.  The researchers experimented with URL and title representation by showing users right, middle, and left truncated strings and recording when they buzzed in to identify the site correctly.

Running sums of how many characters a page title (top) and URL (bottom) became before participants correctly recognized it

– The graph above shows the running sum of correct answers in identifying sites based only on page title (top graph) and URL (bottom graph).  You can see that right truncation proved the most effective for domain-level site identification.  For titles and URLs that were truncated on the right, sites were correctly identified 15% of the time with 5-6 characters revealed, 30% of the time with 8 characters, 60% of the time with 13-15 characters, and 80% of the time with 25-31 characters. Left truncation was the most effective for identifying a specific page within a domain.  So, if you want users to identify a site based on a string, at least 15ish characters are needed for even a majority.  If you want users to identify a specific page or subdomain, clip the left side of the URL; to identify the domain itself, clip the right.  A quick illustration follows.
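This is only a toy illustration (the URL is made up, and a real interface would truncate at render time), but it shows the practical difference between clipping the right and the left of a string:

```python
def truncate(text, limit, side="right"):
    """Clip a string to `limit` characters, keeping either its left or right end."""
    if len(text) <= limit:
        return text
    if side == "right":
        return text[:limit] + "…"   # keeps the beginning of the URL (the domain)
    return "…" + text[-limit:]      # keeps the end of the URL (the specific page)

url = "www.example.com/recipes/sourdough-starter"   # hypothetical URL
print(truncate(url, 15, "right"))  # www.example.com… -> enough to spot the domain
print(truncate(url, 15, "left"))   # …urdough-starter -> enough to spot the page
```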

Firefox Would Love to Read Your Mind

I’d like to highlight the awesome research project that intern Lilian Weng is leading around Firefox’s new tab page.

While our goal is to make users more efficient at their browsing tasks, what makes them more efficient is a question we keep returning to. Most other browsers display links on new tab pages based on frecency. Frecency is a portmanteau which combines frequency and recency. At Mozilla, we use it to refer to sites that users have been to often, recently, or both. It’s how we calculate which site should appear first, second, third, and so on when you type a letter into Firefox’s URL bar.
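To make the idea concrete, here’s a minimal sketch of a frecency-style score in Python. This is not Firefox’s actual Places algorithm (which uses bucketed recency weights and visit-type bonuses rather than a smooth decay); it just shows how visits can be weighted so that sites visited both often and recently rise to the top. The half-life constant and the example history are invented for illustration.

```python
import time

HALF_LIFE_DAYS = 30.0  # assumed decay rate, purely for illustration

def frecency(visit_timestamps, now=None):
    """Score a site from its visit times (seconds since the epoch):
    every visit counts, but recent visits count more."""
    now = now if now is not None else time.time()
    score = 0.0
    for ts in visit_timestamps:
        age_days = max(0.0, (now - ts) / 86400.0)
        score += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return score

# Rank candidate sites for a new tab page from a made-up visit history.
day = 86400
history = {
    "news.example.com": [time.time() - d * day for d in (1, 2, 3, 40)],
    "shop.example.com": [time.time() - d * day for d in (90, 95)],
}
ranked = sorted(history, key=lambda site: frecency(history[site]), reverse=True)
print(ranked)  # ['news.example.com', 'shop.example.com']
```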

Using frecency to list links on a new tab page seems an obvious design direction, but we want to truly investigate whether another solution would be best for users. So, Lilian is spinning up a brave new study. Once her test is ready, users of Test Pilot, our platform for collecting structured feedback on Firefox, will be asked if they’d like to participate in a new study. If they say yes, they will be randomly assigned one of six new designs on their new, blank Firefox tabs. One of these six designs will be our control group: a blank white tab, just as Firefox users see currently. The other five will look almost identical to each other. They will display a simple 8×8 grid of favicons, each set on a button colored to highlight it based on a color-matching algorithm designed by Margaret Leibovic:

Minimal 8x8 Grid Layout of Site Links
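As a purely illustrative aside (I don’t know the details of Margaret’s color-matching algorithm), one simple way to derive a highlight color from a favicon is to average its opaque pixels and tint the button with the result. A tiny sketch using Pillow, with a made-up file path:

```python
from PIL import Image  # Pillow

def favicon_tint(path):
    """Average the opaque pixels of a favicon to pick a button tint color."""
    icon = Image.open(path).convert("RGBA")
    opaque = [(r, g, b) for r, g, b, a in icon.getdata() if a > 0]
    if not opaque:
        return (128, 128, 128)  # fall back to a neutral gray
    n = len(opaque)
    return tuple(sum(channel) // n for channel in zip(*opaque))

print(favicon_tint("favicon.png"))  # e.g. (52, 131, 190) for a mostly blue icon
```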

The only variable changing among the five designs is which sites are displayed in this grid. Here are the five variations we’re testing:

  1. Frecency. A combination of a user’s most frequently and most recently visited sites.
  2. Most recently bookmarked sites. By displaying prominently what a user has recently starred, we effectively turn the new tab page into a read-it-later list.
  3. Most recently closed sites. This could lead users to treat the new tab page as an undo feature, or to close tabs in order to temporarily store them on the new tab page as a short-term read-it-later list.
  4. Sites based on content similarity. Intern Abhinav Sharma is trying out his project, called Predictive Newtabs, which displays sites based on where the user has opened a new tab from. For instance, if the user has been browsing a news site, a new tab would offer other news sites the user has been to.
  5. Sites based on groups of sites frequently visited together. In another part of Abhinav’s Predictive Newtabs experiments, he has designed an algorithm to predict which sites to show based on the sites users visit in groups. For instance, if every time you get to work you first check the weather and then check stock prices, this design would offer you a stock page on a new tab after you checked the weather. If you want to try this experiment out yourself, you can download the Jetpack here. (A rough sketch of this grouping idea follows the list.)
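Here’s that rough sketch of the group-prediction idea in Python. This is not Abhinav’s actual Predictive Newtabs code; it simply tallies how often pairs of domains appear in the same browsing session and suggests the strongest partners of the domain the user just left. The domains are invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

pair_counts = defaultdict(int)  # how often two domains were visited in one session

def record_session(domains):
    """Tally every unordered pair of domains visited in a single session."""
    for a, b in combinations(sorted(set(domains)), 2):
        pair_counts[(a, b)] += 1

def suggest(current_domain, limit=3):
    """Return the domains most often co-visited with current_domain."""
    scores = defaultdict(int)
    for (a, b), count in pair_counts.items():
        if a == current_domain:
            scores[b] += count
        elif b == current_domain:
            scores[a] += count
    return sorted(scores, key=scores.get, reverse=True)[:limit]

# Hypothetical morning routine: weather, then stocks, then news.
record_session(["weather.example.com", "stocks.example.com", "news.example.com"])
record_session(["weather.example.com", "stocks.example.com"])
print(suggest("weather.example.com"))  # ['stocks.example.com', 'news.example.com']
```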

The above study is still in preparation, and once it goes live I predict that we’ll learn tons of valuable information about how new tab suggestions can positively impact users. Lilian will be collecting data on many aspects of users’ responses to these designs, such as how they affect the breadth of sites users visit, how likely users are to click on each item in the grid, and how long they spend deciding where to navigate. I can’t wait to start poring over the data that comes back: it’s very new research in an area that has a profound impact on how we use the web.

User Testing in the Wild: Joe’s First Computer Encounter

This past Friday, I went to Westfield Mall in San Francisco to conduct user tests on how people browse the web, and especially how (or if) they use tabs. This was part of a larger investigation some Mozillians are doing to learn about users’ tab behavior.

The mall is a fantastic place to find user test participants, because the range of technical expertise varies widely. Also, the people I encountered tended to be bored out of their minds, impatiently waiting for their partners to shop or friends to meet them. However, rather than completing all 20 tests I was hoping to, I ended up spending three hours testing a man I’ll call Joe.

I find Joe, a 60-year-old hospital cafeteria employee, in the food court looking suitably bored out of his mind. Joe agrees to do a user test, so I begin by asking my standard demographic questions about his experience with the internet. Joe tells me he’s never used a computer, and my eyes light up. It’s very rare in San Francisco to meet a person who’s not used a computer even once, but such people are amazingly useful. It’s a unique opportunity to see how someone who hasn’t been biased by any prior usage reacts. I ask Joe if I can interview him more extensively, and he agrees.

I decide to first expose Joe to the three major browsers. I begin by pulling up Internet Explorer.

Internet Explorer (as Joe encountered it)

Me: “Joe, let’s pretend you’ve sat down at this computer, and your goal is finding a local restaurant to eat at.”

Joe: “But I don’t know what to do.”

Me: “I know, but I want you to approach this computer like you approach a city you’re not familiar with. I want you to investigate and look around to try and figure out how it works. And I want you to talk out loud about what you’re thinking and what you’re trying.”

(I show Joe how to use a mouse. He looks skeptical, but takes it in his hand and stares at the screen.)

Joe: “I don’t know what anything means.”

(Joe reads the text on IE and clicks on “Suggested Sites”)

Me: “Why did you click on that?”

Joe: “I don’t really know what to do, so I thought this would suggest something to me.”

(Joe reads a notification that there are no suggestions because the current site is private)

Joe: “I guess not.”

Joe looks around a bit more, but he’s getting visibly frustrated with IE, so I move on to Firefox.

Firefox (as Joe encountered it)

I give him the same task: find a local restaurant. He stares at the screen for a while with his hand off the mouse, looking confused. I ask what he’s looking for. “I don’t know, anything that looks like it will help!” he says.  Finally, he reads the menu bar at the top of the screen, and his gaze falls on the word Help.

“Help, that’s what I need!” says Joe. He clicks on Help, but looks disappointed at what he sees in the menu.

“None of these can help me,” he says.

Joe is getting frustrated again, so I move on to Chrome and give him the same task.

Chrome (as Joe encountered it)

He proceeds to read all of the words on Chrome’s new tab page, looking for any that may offer guidance. Luckily for Joe, he spies a link to Yelp marked San Francisco. He clicks it and, seeing restaurants, declares he’s won.

I want to put Joe through other experiments at this point, but the tests are clearly taxing him. He looks very agitated, and has frequently declared during the tests that he “just doesn’t know,” “should have learned this by now,” and “has no excuse for not taking a class on computers.” No amount of assurance that I’m testing the software, not him, calms him down. So, I decide to cut Joe a break. “Alright Joe, you’ve helped me, maybe I can help you.”

Because Joe has mentioned a few times that he wants email, I get him a gmail.com email address and show him how to access it at a public computer. We practice logging into Gmail several times, and I end up writing a very explicit list of steps for Joe which includes items like “move mouse cursor to white box.” One of the hardest things to convey to Joe is the idea that you must first click in a text field in order to type.

When I’m convinced that Joe understands how to check his email, I want to show him how he can use his new email address. So, I ask him why he wanted an email address in the first place. I imagine he’ll say he wants to communicate with friends and relatives.

Joe: “I want discounts at Boudin Bakery.”

Me: “Sorry, what?”

Joe: “I want Boudin discounts, but they keep telling me I need email.”

(Joe takes his Boudin Bakery customer appreciation card out of his wallet and shows it to me)

I’m a little confused, but go ahead and register Joe’s Boudin Bakery card with his new email address. I show him the web summary of all the bread he’s bought lately. “Woah!” says Joe.

So, what did I learn from Joe?

  • There is little that modern applications do to guide people who have never used a computer. Even when focusing on new users, designers tend to take for granted that users understand basic concepts such as cursors, text boxes, and buttons. And, perhaps, rightfully so – if all software could accommodate people like Joe, it would be little but instructions on how to do each new task. But Joe was looking for a single point of help in an unfamiliar environment, and he never truly got it – not even in a Help menu.
  • No matter their skill level, users will try to make sense of a new situation by leveraging what they know about previous situations. Joe knew nothing about computers, so he focused on the only item he recognized: text.  Icons, buttons, and other interface elements he ignored completely.
  • We shouldn’t assume that new users will inquisitively try to discover how new software works by clicking buttons and trying things out. Joe found using software for the first time to be frightening, and only continued at my reassurance and (sometimes) insistence. If he had been on his own in an internet cafe, I think he would have given up and left after a minute or so.  Giving visual feedback and help when someone is lost may help people like Joe feel they’re getting somewhere.
  • Don’t make too many assumptions about how users will benefit from your technology – they may surprise you!