We needed to secure funds to get us through development, manufacturing, and our go-to-market campaign. We started with crowdfunding, then moved on to raising investor capital.
The initial concept hadn’t really been validated yet. There were details to iron out and unique hurdles to clear that came with building something that had never been done before.
It was a race against a looming deadline, with intense pressure to get it right the first time, because that might be the only shot we’d get. This is where it could get gruesome.
My first task after joining the company was to use imagery and footage from Human’s latest shoot to design social media posts, digital ads, and emails that would build hype, grow our following, and promote the upcoming crowdfunding campaign.
As soon as we started marketing our Indiegogo campaign, the trolls began surfacing: “Do I have to be naked to wear these?” Our brand imagery was a bit polarizing and at times took focus away from the actual product. As a brand-new hire, I was hesitant to suggest a different approach, especially after seeing the resources already invested in the current direction. But I believed showing relatable people in relatable scenarios could shift the conversation back toward the benefits of our product, rather than what felt like the entrance to an exclusive club. We had succeeded in capturing everyone’s attention; now we needed potential backers to visualize themselves wearing our headphones and support our vision for a totally new headphone form factor.
With that in mind, I called up some friends one weekend to model for me and shot new lifestyle imagery. The photos turned out great and helped silence some of the noise, but more importantly they marked a shift toward a more “human” era of Human. We made those photos our new hero images for the campaign and began shooting lifestyle videos in the same vein, showcasing product benefits in real-world scenarios.
During the push to secure our Series B round, I was tasked with creating a video that showed a working demo of our product to send to potential investors. Given where our product was in development, I knew that approach would require a lot of post-production smoke and mirrors. Instead, I proposed a product vision video in the form of “A Day in the Life.” While others thought it would be a costlier and riskier route, I knew I could produce it with less time and fewer resources, and that it would be more impactful than a demo of our current state.
It was a tight deadline. I quickly wrote the shot list and edited the script, then rented a DJI Ronin stabilizer and shot all of the footage in one day. After a week of late-night editing and post-production, we sent the video to all potential investors. It became one of the main tools used to secure funding during that round. The result? We got a call from Satya Nadella (CEO of Microsoft) the next day. It was just the push we needed to land one of our main strategic partners, along with many other key investors.
Which one do we optimize for? When do we rely on LEDs to communicate vs tones?
How do you switch between known Bluetooth connections? What happens when you go out of range and come back into range?
When are the headphone's LEDs visible? How do you know they're Off or in Speaker Mode?
Provide a dedicated area for users to adjust their device and app defaults.
Notify user when an update is available and guide them to a successful update.
Translate one or multiple languages directly into your headphones.
Enable hands-free voice assistance via keyword and dedicated gesture.
Transform headphones into a portable speaker by clasping them together.
Provide levels of sound transparency including a quick gesture to engage.
Our product had no buttons, could turn into a speaker when clasped, and used LEDs, tones, and voice prompts to communicate status to the user. These unique interactions needed constant iteration to get right; as soon as team members grasped the original design intent, it had already changed. I needed a way to summarize the constantly evolving UX and share it with the team, so I started by designing a bird’s-eye, holistic overview of our product states and transitions to evangelize how the product should flow and function. Because the product evolved so often, it was important for everyone to stay on the same page so we were all working toward, and building, the same product.
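For a flavor of what that overview boiled down to, here’s a minimal sketch of a top-level state map written as C firmware code. The state and event names are hypothetical stand-ins for our actual diagram, not the shipped firmware:

```c
#include <stdio.h>

/* Hypothetical top-level product states, loosely mirroring the
 * overview diagram. */
typedef enum {
    STATE_OFF,         /* clasped, powered down                   */
    STATE_HEADPHONES,  /* unclasped, worn, Bluetooth audio active */
    STATE_SPEAKER,     /* clasped together, speaker mode engaged  */
} product_state_t;

typedef enum { EVENT_UNCLASP, EVENT_CLASP, EVENT_SPEAKER_TOGGLE } product_event_t;

/* One place that answers "what happens when X?" for the whole team. */
static product_state_t next_state(product_state_t s, product_event_t e)
{
    switch (s) {
    case STATE_OFF:
        if (e == EVENT_UNCLASP)        return STATE_HEADPHONES;
        if (e == EVENT_SPEAKER_TOGGLE) return STATE_SPEAKER;
        break;
    case STATE_HEADPHONES:
        if (e == EVENT_CLASP)          return STATE_OFF;
        break;
    case STATE_SPEAKER:
        if (e == EVENT_SPEAKER_TOGGLE) return STATE_OFF;
        break;
    }
    return s; /* events that don't apply in a state are ignored */
}

int main(void)
{
    product_state_t s = STATE_OFF;
    s = next_state(s, EVENT_SPEAKER_TOGGLE);
    printf("state: %d\n", s); /* prints 2 (STATE_SPEAKER) */
    return 0;
}
```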
Our audio chip came with a tone generator and pre-recorded voice prompts that sounded very robotic. I opted not to use the tone generator and instead stored custom tones in the voice prompt library, choosing audio tones that clearly communicated their purpose and felt like us. Due to our partnership with Microsoft, the plan was to use Cortana as the voice of our headphones, but as we got closer to launch I became concerned about the business and legal implications, so I hired a voice actor to record a variety of phrases for our product. This proved to be a critical hedge when we learned we wouldn’t be able to use Cortana after all. Plus, we were able to record custom phrases with a voice that felt more human.
What should happen when the headphones are clasped? How quickly should they turn off? What happens when your battery dies?
I used Sketch to design every screen, then found a cool app called Overflow that helped me quickly build interactive flows our stakeholders could click through and give feedback on. To prototype micro-interactions, I used a combination of Flinto, Framer, and After Effects. Finally, I used Zeplin to streamline the final handoff to our engineers. Having the right tools made certain jobs much simpler, and I needed all the help I could get.
Once the app was in TestFlight and in the hands of beta testers, we used real data to refine our flows. I read every comment and bug report from our beta testers, organized them into a stack rank of the biggest and most common user pain points, and created a plan to prioritize our work according to UX impact. After many tense ship-room meetings, we decided the app was ready to release to the public, and our GTM campaign began.
I worked with our engineering team to prep for the Apple and Google app store submissions, which included a very poorly produced video of the founder and me explaining just how the app works, in which I look incredibly stressed and sleep-deprived.
Designing the companion app was a huge learning process, but incredibly fruitful. I dive deep into the specific details of each feature and lay out my process in a separate case study. Don’t worry, I also include a link at the end.
Once I started using the product myself, I discovered pain points like long boot times, slow reconnection, and missing or unexpected feedback (LED/audio). When things weren’t improving, I began asking questions and educated myself on the limitations the hardware team faced. I dug into how our firmware state architecture was designed, looking for favorable tradeoffs, and found opportunities to prioritize core user flows over edge cases and improve the experience (or perceived experience) for most of our users.
Through experimentation and close collaboration with our firmware engineers, we created a new state architecture that brought us closer to the desired experience and unlocked new possibilities.
To wake up the device’s functions faster, we experimented with not powering everything down when the headphones were clasped together, as originally specified. Instead, we created two new low-power states, Standby and Sleep, which turned off certain functions like the audio speakers and Bluetooth Classic (BR/EDR) while keeping Bluetooth Low Energy (BLE) connected in the background, so the headphones could be managed and controlled remotely for a period of time.
These low-power modes preserved LED availability and let the LEDs light up much more quickly, decreasing perceived startup and connection time from 30 seconds down to less than 6 seconds, and provided a faster way to communicate timely statuses like “battery dead.”
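Here’s a rough sketch of how tradeoffs like these might be written down in firmware terms, as a per-state power profile. Every name and number below is an illustrative assumption, not our shipped configuration:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { PWR_ACTIVE, PWR_STANDBY, PWR_SLEEP, PWR_OFF } power_state_t;

/* Illustrative per-state power profile: which subsystems stay on and
 * how long before dropping to the next state. */
typedef struct {
    bool speakers_on;
    bool bt_classic_on;  /* BR/EDR: the audio streaming link   */
    bool ble_on;         /* BLE: lightweight control channel   */
    bool leds_on;
    uint32_t timeout_s;  /* 0 = stay here until an event fires */
} power_profile_t;

static const power_profile_t PROFILES[] = {
    [PWR_ACTIVE]  = { true,  true,  true,  true,  0   },
    /* Clasped: audio and BR/EDR drop immediately, but BLE keeps a
     * control channel alive and the LEDs can answer instantly. */
    [PWR_STANDBY] = { false, false, true,  true,  300 },
    /* Deeper saving once standby times out; BLE still reachable. */
    [PWR_SLEEP]   = { false, false, true,  false, 0   },
    [PWR_OFF]     = { false, false, false, false, 0   },
};

int main(void)
{
    const power_profile_t *p = &PROFILES[PWR_STANDBY];
    printf("standby: ble=%d leds=%d bt_classic=%d\n",
           p->ble_on, p->leds_on, p->bt_classic_on);
    return 0;
}
```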
By transitioning from a cold to a warm boot, we achieved faster overall reconnection times in most scenarios, and by working together as a team we were even able to land some of these improvements by launch, despite tackling the problem so late in development.
By turning Bluetooth Classic off instead of just disconnecting, we made it impossible for the headphones to reconnect to the paired device and unintentionally hijack its audio. When the headphones were clasped closed, the user’s phone would revert audio routing back to its defaults, and audio services like streaming and Siri would continue uninterrupted, all while the phone remained connected to the headphones via BLE so the user could still manage and control them remotely through the app (see case 3).
Turning Bluetooth off instead of relying on a successful disconnection also removed any delay in communicating that the headphones had disconnected. The LEDs turned off immediately when the headphones were clasped closed, and there was no longer any risk of audio still streaming to the headphones after the user indicated they were done listening.
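Sketched as a clasp handler, the ordering is the point: hard-kill the BR/EDR radio first so the phone reclaims audio routing right away, then update the LEDs without waiting on any disconnect handshake. The hal_* functions are hypothetical hardware-abstraction hooks, stubbed so the sketch stands alone:

```c
#include <stdio.h>

/* Hypothetical HAL hooks; real firmware would bind these to the audio
 * chip vendor's SDK. */
static void hal_bt_classic_power_off(void) { puts("BR/EDR radio: off"); }
static void hal_audio_path_disable(void)   { puts("audio path: off");   }
static void hal_leds_all_off(void)         { puts("LEDs: off");         }
static void power_enter_standby(void)      { puts("state: standby");    }

/* Called from the clasp sensor's debounce logic. Ordering matters. */
static void on_clasp_closed(void)
{
    /* Hard radio-off (vs. a polite disconnect) means the phone reclaims
     * audio routing immediately and we can't accidentally re-grab it. */
    hal_bt_classic_power_off();
    hal_audio_path_disable();

    /* No disconnect handshake to wait on, so feedback is instant. */
    hal_leds_all_off();

    /* BLE stays up inside standby for remote control (see below). */
    power_enter_standby();
}

int main(void) { on_clasp_closed(); return 0; }
```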
But why keep BLE connected? It gave the user more flexible ways to transition from headphone to speaker mode, instead of having to trigger the switch while the device was still on and in headphone mode.
Any time the user wanted speaker mode while the headphones were already clasped, they didn’t have to wake the device up first. They could just remotely turn on speaker mode, which powered on Bluetooth Classic and automatically reconnected to the last paired device, and voila... from low-power standby mode to speaker mode, all through pressing a button in the app.
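As a sketch, that remote path is just a BLE write handler that can run while the rest of the device sleeps. The opcode and function names below are assumptions for illustration, not our actual protocol:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative opcode on our BLE control channel; the value and the
 * framing are made up for this sketch. */
#define CMD_SPEAKER_MODE_ON 0x10u

static void hal_bt_classic_power_on(void)       { puts("BR/EDR radio: on");      }
static void hal_bt_classic_reconnect_last(void) { puts("reconnect last device"); }
static void hal_audio_path_enable_speaker(void) { puts("speaker mode: on");      }

/* Invoked by the BLE stack when the app writes to the control
 * characteristic, even while the headphones sit clasped in standby. */
static void on_ble_command(uint8_t opcode)
{
    if (opcode == CMD_SPEAKER_MODE_ON) {
        hal_bt_classic_power_on();        /* wake the audio radio         */
        hal_bt_classic_reconnect_last();  /* rejoin the last paired phone */
        hal_audio_path_enable_speaker();  /* standby -> speaker mode      */
    }
}

int main(void) { on_ble_command(CMD_SPEAKER_MODE_ON); return 0; }
```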
Providing this magical experience via voice command required the wake-word sensor to remain on, which was a power drain, so for that use case we set a shorter five-minute window before turning the microphone off and going to sleep.
When the headphones are closed and charging, both BLE and the wake-word sensor remain on, giving access to Speaker Mode indefinitely.
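A minimal sketch of that timeout logic, assuming a 1 Hz housekeeping tick while clasped in standby; the tick rate, charger check, and function names are all illustrative:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WAKE_WORD_TIMEOUT_S 300u  /* five minutes on battery */

static bool charging = false;     /* stand-in for a charger-detect pin */

static bool hal_is_charging(void)          { return charging; }
static void hal_wake_word_sensor_off(void) { puts("wake-word mic: off"); }
static void power_enter_sleep(void)        { puts("state: sleep");       }

/* Hypothetical 1 Hz tick while clasped in standby. On battery the
 * wake-word mic is the main drain, so it gets a five-minute budget;
 * on the charger there is no budget to enforce. */
static void standby_tick_1hz(void)
{
    static uint32_t idle_s;

    if (hal_is_charging()) {
        idle_s = 0;  /* BLE and the wake-word sensor stay on indefinitely */
        return;
    }

    if (++idle_s >= WAKE_WORD_TIMEOUT_S) {
        hal_wake_word_sensor_off();
        power_enter_sleep();  /* BLE stays reachable per this sketch */
    }
}

int main(void)
{
    for (uint32_t s = 0; s < WAKE_WORD_TIMEOUT_S; ++s)
        standby_tick_1hz();  /* simulate five idle minutes on battery */
    return 0;
}
```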