AutoPlay

Summary 

I began my work on AutoPlay using a prompt from DesignLab about designing an audio app. After an initial round of self-directed user research, I found that the majority of those I surveyed wished their music-playing apps had more robust “driving modes” and related features. After reviewing the driving modes in existing applications, I made some initial sketches.

User testing, however, caused me to reassess my design. Despite reporting increased confidence while driving, users still took their eyes off the road for significant stretches while using large on-screen buttons. Seeing that my design goals were not being met, I pivoted toward a physical interface.

My experience with the DesignLab prompt led me to reflect on their approach. While DesignLab’s process doubtless produces visually appealing material for some students, it neglects fundamental aspects of user experience and leads designers astray by not rooting itself in user research.

The majority of participants I surveyed wished that their music-playing apps had more robust “driving modes.”

Starting the Project 

I enrolled in a design course with DesignLab in order to deepen my knowledge of UI design. We were given an assignment: “make a mobile music player (at least 3 screens) for either iOS or Android.” I started off stumped. Why design something new? How did I want to differentiate what I made from other products that were out there?

In order to answer those questions, I engaged in some self-directed user research. Reaching out to a number of people who use music-playing apps on their smartphones, I identified several areas of desired improvement: recommendation management, continuity of playback experience, and controls while driving. I chose the last, in large part because I shared the concern my respondents expressed: like them, I worried about the risk of getting into a car accident while using a music-playing application behind the wheel. I settled on a problem statement: how can we design an application that minimizes distraction when listening to streaming audio while driving?

With a concept in hand, I went to the drawing board. I took into consideration two important design principles: Fitts’s Law and Hick’s Law. Fitts’s Law relates pointing speed to target distance and size: the farther a button is from the pointer’s origin, the larger it must be to maintain the same speed and accuracy of pressing. Hick’s Law describes how decision time grows with the number of options. Based on these principles, I decided that an interface demanding minimal attention from drivers needed a few large, easily identifiable buttons.
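In their standard formulations, with a and b as empirically fitted constants, the two laws can be written as:

\[ MT = a + b \log_2\!\left(\frac{2D}{W}\right) \qquad \text{(Fitts's Law)} \]

\[ RT = a + b \log_2(n + 1) \qquad \text{(Hick's Law)} \]

Here MT is the time to move to a target of width W at distance D, and RT is the time to choose among n equally likely options. Both grow only logarithmically, which is why a handful of large, well-spaced buttons keeps both movement time and decision time low.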

Initial wireframes

Refining the Design

I built and iterated on my design. I quickly realized there was no need for a playback bar, and discovered that a six-button configuration was better proportioned on most phones. I also decided I wanted the application to be highly customizable. Because no existing icons covered several of the functions I wanted, I iterated on icon designs and tested their legibility with a small group of users. I eventually made wireframes of the entire configuration process and started to define my visual design language.

Surprises During User Testing 

The Coronavirus pandemic ruled out in-person testing, so I ran user tests with a small remote group. I workshopped the set-up flow, but, more importantly, I ran a simulated “driving” test that produced disconcerting results.

For the simulated driving test, each participant set up two webcams: one pointing at them, and another pointing at the paper prototype. I played music on my end while they played Highway Racer on their computer. When they wanted to change songs, they tapped the paper prototype with a finger, and I changed the song accordingly. I then asked them to perform the same tasks with their own smartphone and usual music application.

Although users spent less time looking at the prototype than at applications without a driving mode, they still had to take their eyes off the game screen, the stand-in for the road, to make accurate button presses. When the prototype was placed to the right of the keyboard in either horizontal or vertical orientation (modeling a phone sitting in a cup holder or on a magnetic vent mount), users were not able to find buttons by proprioception alone.

This method of testing was far from perfect. Playing a driving game on a computer differs significantly from driving in real life, and interacting with a piece of paper is different from interacting with a phone. The requirement that participants be familiar with OBS and own two webcams also restricted the testing pool to a handful of individuals. Nonetheless, this limited user testing raised important questions, and I decided to follow up by testing on myself. Over the course of a week, I recorded myself driving while listening to audio through smartphone applications with driving modes; I then did the same with applications without them. While a simplified graphical interface saved time in some circumstances, it did not reduce how much I looked at my phone enough to seriously mitigate accident risk.

Documentation

Several forms of documentation for the project remain available. In addition to the gallery of images within this site, a demo of the Incident Response Form remains online, as does the Feedback Form.

Back to the Drawing Board 

Feeling as if I didn’t fully grasp the problem I was working with, I returned to the first two phases of the design thinking process: empathize and define. I did some background research into distracted driving and engaged in a new round of user interviews, explicitly targeting audio use while driving. 

Understanding the Problem  

Distracted driving is a significant cause of injury and death. According to the NHTSA, distracted driving was a factor in 10% of fatal crashes and 15% of injury crashes in 2015. An estimated 3,476 people died that year in accidents related to distracted driving, and an estimated 391,000 more were injured. AAA has found that driver distraction is responsible for more than 58% of teen crashes.

EndDD.org identifies three kinds of distraction that drivers suffer from: manual, visual, and cognitive. Using mobile applications for audio playback on touch screen devices causes all three.

Modes of distraction as presented by EndDD: Manual, Visual, and Cognitive

While a simplified graphical interface may mitigate cognitive distraction, it does not sufficiently address visual or manual distraction. Since any smartphone must be placed at the periphery of vision and movement, reaching for and pressing a button requires both manual movement and visual attention. The movement is large and involves both gross and fine motor systems, making it difficult to press graphical buttons on a smartphone by muscle memory alone. Haptic feedback could help, but it would increase time away from the wheel, and therefore manual distraction. I began to think that a smartphone application might not be the best approach.

User Interviews

I then contacted six subjects for a series of formative user interviews about audio application use while driving. Drawing on techniques from Grounded Theory, I developed a holistic understanding of user needs through a process of reviewing and coding the interviews and developing both hierarchies of categorization and a conceptual framework.
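As an illustration of the coding step, here is a minimal sketch in Python; the participant IDs, codes, and category groupings are hypothetical stand-ins for the ones that emerged from the actual transcripts:

```python
from collections import Counter, defaultdict

# Hypothetical coded excerpts: (participant, code) pairs produced
# during open coding of the interview transcripts.
coded_excerpts = [
    ("P1", "wants-basic-remote-controls"),
    ("P2", "wants-extended-controls"),
    ("P3", "concern-distracted-driving"),
    ("P3", "wants-extended-controls"),
    ("P4", "tried-and-failed-behavior-change"),
    ("P5", "concern-distracted-driving"),
    ("P6", "wants-basic-remote-controls"),
]

# Axial coding: group low-level codes under higher-level categories.
categories = {
    "wants-basic-remote-controls": "desired inputs",
    "wants-extended-controls": "desired inputs",
    "concern-distracted-driving": "safety attitudes",
    "tried-and-failed-behavior-change": "safety attitudes",
}

by_category = defaultdict(Counter)
for participant, code in coded_excerpts:
    by_category[categories[code]][code] += 1

for category, counts in by_category.items():
    print(category, dict(counts))
```

Tallies like these feed the hierarchy of categorization; the conceptual framework then comes from reading the categories against one another.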

The participants broke down neatly into three categories: two had built-in car interfaces such as Apple CarPlay or Android Auto, two connected their phones to their cars over Bluetooth, and the final two used an auxiliary jack or cassette-tape adapter to play music from their phones. All respondents at some point expressed concern about distracted driving, and over half said they wanted to change their behavior and had tried, and failed, to do so in the past.

The range of use cases was broad: some users frequently switched between both playlists and applications, others only playlists. The desired inputs, however, fell largely into two camps: users who wanted basic controls on a remote (pause, play, next track, previous track, etc.) and users who wanted a remote to expose controls not available over a Bluetooth connection or through a smart-car application. This second camp seemed to be the largest and most motivated segment of the user base. Based on this information, I decided that the device should come with a set of default commands but allow customization.
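A minimal sketch of that default-plus-customization decision, with hypothetical button names and actions:

```python
# Hypothetical default command mapping for a four-button remote.
DEFAULT_COMMANDS = {
    "button_1": "play_pause",
    "button_2": "next_track",
    "button_3": "previous_track",
    "button_4": "switch_playlist",
}

def effective_commands(user_overrides):
    """Merge a user's custom mapping over the defaults, so the
    device works out of the box but remains customizable."""
    commands = dict(DEFAULT_COMMANDS)
    commands.update(user_overrides)
    return commands

# Example: a user who wants quick access to a voice assistant
# instead of playlist switching.
print(effective_commands({"button_4": "voice_assistant"}))
```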

Among the respondents who had successfully changed their distracted-driving habits, a common theme emerged: they physically put their phones away where they could not be seen and used either Bluetooth controls or voice commands. Three respondents noted that sometimes, while listening to music, they began other tasks on their phones. It appeared that a hardware peripheral separate from the phone could provide substantial benefit as a remote.

Prototyping and Testing 

Having decided on a hardware interface, I made a few cardboard mockups of different shapes and concluded that something round or square the hand could rest on easily would be ideal. I identified a few desirable qualities: at least four buttons; button locations that feel distinct and are easy to find without looking; and a remote that is easy to use with the hand in a resting position.

Researching existing peripherals, I realized that a class of products exists for speeding up editing workflows. I reached out within my social circle and borrowed a Logic Keyboard ShuttleXPress. I set it up in my car, connected it to an audio player on my laptop, and gave it a shot. It was easy to use. Back home, I repeated the Highway Racer test and saw a marked improvement.
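As a sketch of how such a peripheral can drive an audio player, the glue can be as thin as translating button events into the operating system's media keys, which most players respond to. This uses the pynput library; the button names are hypothetical, and since the ShuttleXPress normally maps its buttons through its vendor driver, the event source below is stubbed:

```python
from pynput.keyboard import Controller, Key

keyboard = Controller()

# Map abstract remote buttons to OS-level media keys, which any
# standard audio player responds to.
BUTTON_TO_KEY = {
    "center": Key.media_play_pause,
    "right": Key.media_next,
    "left": Key.media_previous,
}

def handle_button(button):
    key = BUTTON_TO_KEY.get(button)
    if key is not None:
        keyboard.tap(key)  # press and release the media key

# Stubbed event: in practice this would come from the device driver.
handle_button("center")
```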

Wrapping Things Up 

At this point I felt fairly confident I had found a good design solution. I started some sketches for a remote-configuration workflow, but ultimately decided against continuing the work. This was not, after all, a product I could feasibly bring to market, and the task at hand did not make for the kind of UI design learning experience I wanted. I moved on to other things.

Reflection

Partway through my project I faced a difficult decision: should I continue with the prompt and put a project in my portfolio that, while containing solid visual design, was not sound? I decided not to go down that route, and instead approached the design problem I had found with a structured research process. In doing so, I learned a number of valuable lessons.

My brief experience with bootcamp-style skill building left me with serious doubts about its value as a model. Designing a portfolio piece in the fashion DesignLab recommended put the cart before the horse: there was a sense of the desired aesthetic and depth of the deliverable, but no reason for the deliverable to exist, and no design thinking behind the assignment. Without the constraints of a real design situation, the assignment made it easy to run out of steam, and without context, it did not tap into the kind of thinking that drives good design. Instead, it encouraged using the tools at hand to find a software solution to a problem that ultimately lay beyond the scope of software, one best solved with a mix of hardware and behavioral intervention.
