TagStream is audio and video parsing software that uses AI to identify insightful information within large troves of A/V content. Using proprietary technology, it creates "digital DNA" to identify and navigate to key moments, common items, or important information in order to quickly provide value to viewers and content creators.
Once we onboarded and familiarized ourselves with the software, we conducted a heuristic evaluation, a competitive analysis, and an ethics/inclusivity discussion. These helped us orient around industry standards and identify early areas of research focus.
To begin our formative research, we met with an end-user of the software and conducted a contextual inquiry to examine how the software is currently used. This key user was highly knowledgeable about their industry's workflow and was able to demonstrate typical use cases for TagStream.
To expand our research across more diverse use cases, we conducted a series of interviews with media professionals and content creators to learn more about a day in the life of prospective end-users. These semi-structured interviews taught us a great deal about these professionals' needs, organizational patterns, common tools, and expectations.
After completing the preliminary research, we analyzed our findings to identify common themes to guide the product redesign. The three themes that stood out most strongly were onboarding, optimization, and compatibility.
To begin the design process in earnest, we sketched existing and prospective interfaces, making notes to specify design intentions and flow. We gathered feedback on these sketches from colleagues and stakeholders. (Images have been blurred at stakeholder request.)
Once we had a cohesive idea of the visual design, we used Figma to create a low-fidelity prototype for our first round of user testing. In this prototype we included revised tag organization and content exportability, along with changes to the general visual layout. The prototype was somewhat interactive but mostly served to provide visual elements and rudimentary navigation. We used it to test participants' understanding of the software's core concepts, to surface points of navigational confusion, and to gauge general visual appeal.
For our first round of user testing, we conducted cognitive walkthroughs of the prototype, prompting participants to complete simple tasks like "export a file" while encouraging them to think aloud as they navigated each page.
Findings in this round of testing indicated that participants still found the interface "cluttered," were visually drawn to the wrong areas of pages, and fundamentally failed to understand the purpose and capability of the product as a whole.
With the design direction now clearly scoped, each member of my UCSC team individually designed a high-fidelity prototype incorporating our research and iteration findings. My design focused on lowering the software's learning curve, as well as making the product as a whole more intuitive and its usefulness more immediately apparent.
To achieve this, I incorporated the following changes:
Mass video uploading:
The legacy software only allowed users to upload one video at a time. In addition to improving efficiency, enabling mass upload created an affordance for handling large volumes of content, as sketched below.
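For illustration only, here is a minimal sketch of what a batch-upload interaction might look like in a browser-based client. The names (`uploadVideo`, `handleBatchUpload`, `#video-upload`) are hypothetical and do not reflect TagStream's actual code:

```ts
// Minimal sketch of a batch-upload handler; assumes a browser-based UI.
// All names here are illustrative, not TagStream's actual API.

// Hypothetical single-file upload primitive the legacy flow would call once per video.
declare function uploadVideo(file: File): Promise<void>;

// Accept many files at once and upload them with bounded concurrency,
// so selecting a large trove of content doesn't fire hundreds of parallel requests.
async function handleBatchUpload(files: FileList, concurrency = 4): Promise<void> {
  const queue = Array.from(files).filter((f) => f.type.startsWith("video/"));
  const workers = Array.from({ length: concurrency }, async () => {
    while (queue.length > 0) {
      const file = queue.shift();
      if (file) await uploadVideo(file);
    }
  });
  await Promise.all(workers); // resolves once every video has uploaded
}

// Wiring: a single file input with the `multiple` attribute replaces
// the one-at-a-time picker.
const input = document.querySelector<HTMLInputElement>("#video-upload");
input?.addEventListener("change", () => {
  if (input.files) void handleBatchUpload(input.files);
});
```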
Workspace organization:
The primary workspace of the software was viewed as cluttered and disorganized. My revision reorganized the page to place features where industry standards lead users to expect them, centralized the primary taskbars, and grouped those taskbars into a tabbed system to reduce clutter.
Tutorial:
New users fundamentally did not understand the software without explicit verbal explanation. For this reason, I created a skippable tutorial within the software that presented an easily understandable example use case and then guided users through a short series of steps to complete a simple task showcasing the software's value. A rough sketch of this flow appears below.
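As a rough illustration of the tutorial's structure (not TagStream's actual implementation), a skippable step sequence can be modeled as a simple state machine; the step prompts below are invented for the example:

```ts
// Sketch of a skippable, step-by-step tutorial. Step content is invented
// for illustration and does not reflect TagStream's actual copy or tasks.
interface TutorialStep {
  prompt: string;            // instruction shown to the user
  isComplete: () => boolean; // has the user performed this step's task?
}

class Tutorial {
  private index = 0;
  private skipped = false;

  constructor(private steps: TutorialStep[]) {}

  // The whole tutorial can be dismissed at any time.
  skip(): void {
    this.skipped = true;
  }

  // Advance only once the current step's task is done.
  advance(): void {
    if (!this.skipped && this.steps[this.index]?.isComplete()) {
      this.index += 1;
    }
  }

  get finished(): boolean {
    return this.skipped || this.index >= this.steps.length;
  }

  get currentPrompt(): string | null {
    return this.finished ? null : this.steps[this.index].prompt;
  }
}

// Example wiring with hypothetical steps walking a simple, value-showcasing task.
const tutorial = new Tutorial([
  { prompt: "Upload the sample interview video.", isComplete: () => true },
  { prompt: "Open the tag panel to see detected key moments.", isComplete: () => true },
  { prompt: "Export a clip of your favorite moment.", isComplete: () => true },
]);
```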
Results were excellent:
Users already familiar with the legacy software recognized the improvements as such without prompting.
Unfamiliar users completed the tutorial quickly and without error. Most importantly, when asked "What do you suppose TagStream is?", participants who had completed the tutorial answered correctly 100% of the time.