FREQUENTLY ASKED QUESTIONS



  • Our approach to identifying a customer problem usually starts with reviewing product usage and speaking with customer-facing teams. Once we have that lay of the land, we talk with customers about their problem, focusing less on existing product pain points and more on where they find (or don’t find) value, as well as on their goals. For smaller companies, it’s hard to get to the root of the customer problem by only talking with customers about their experience using the product. From there, we map out the existing journey based on customer goals and value, which usually highlights a clear customer problem.

    Finding the minimum solution goes back to those underlying customer needs and goals. We like Henrik Kniberg’s illustration here: if you understand that a customer just wants to get from A to B faster, you don’t need to jump straight into building a car. You just need to build enough to accomplish that task.

    Performing an attitudinal research study is a great way to get to the root of those foundational needs. Once you are clear on them, the minimum solution should offer just enough value to enable a user to complete that goal, which means you should be able to clearly measure whether users are completing it. Iteration can happen from there, driven (pun intended) by the cheapest and simplest way to learn.

  • We would flip the question back to you and ask: do you need to prioritize one persona for a problem where many personas play a role? For example, if persona A depends on persona B to accomplish a job-to-be-done, we would encourage the journey map and the defined customer problem to encompass the goals of both personas.

    In an instance where you are genuinely evaluating which persona to solve for, because each persona has an independent problem requiring individual attention, we would use the prioritization frameworks covered below. Additionally, be mindful of your business goals: if the most important business goal is to increase MRR, for example, you may want to prioritize the persona that has the largest direct impact on that revenue.

  • Our approach to customer research is both expected and a little unexpected.

    It’s expected in the sense that we come into research the way you would assume any trained designer would: the goal of research is to learn, in order to save time and money. Customer research is best conducted (if it’s even needed - that’s the unexpected take we’ll get into) at all or select points in the design cycle:

    1. Upon project definition - A completed Define phase should offer you a few outputs: a clearly defined customer and their pain point/problem, and various questions about solving that customer problem framed as hypotheses.

    2. During discovery and design - Sometimes (though we find not often) you may be stuck between two equally strong directions and want to understand which one your customers prefer. More often, you have a handful of usability/behavioral unknowns that place risk on the customer experience and on customers’ ability to get their job done.

    3. Pre-delivery - Once the team is aligned on the designed user experience and visuals, it can be beneficial to run a small round of research to ensure all risks have been accounted for and to once again validate those hypotheses about solving your customers’ pain points.

    4. Post-delivery - Also part of the research cycle, measuring qualitative success post-delivery is often overlooked. This can be done formally (as a research initiative) or informally (through feedback channels, social media, etc.). Alongside watching your success metrics change, it is a sure-fire way to know your customers feel the changed user experience/design is bringing them value.

    Each of these steps requires a research plan, recruitment, prototypes, and synthesis. You can approach research attitudinally, by listening to users’ words in interviews, or behaviorally, by watching their actions. Our approach is often to encourage a hands-off, behavior-focused user study with a prototype during usability testing, using products like Maze. The reason: just as restaurant customers rarely tell a waiter (who did not even cook the food) their honest opinion of the meal, customers rarely give researchers their honest opinion to their face, so watching what they do is more reliable than asking what they think.

    Here’s our unexpected opinion: we don’t think the time invested in a user research study always pays off. Performing a competitive analysis or analyzing your product data can be equally informative. A great designer can lean on their capacity for empathy, their psychological instincts, and their knowledge of the laws of UX to validate a direction. If the goal is to learn and there isn’t a sizable risk or concession to the UX (the team should be firmly aligned on this), it’s okay to trust a great designer and ship something based on reasonable empathy, instinct, and UX rationale. That said, each project is unique and deserves the customer research recommendation best suited to the project and the team.

  • We really see this as a question of prioritization. If many clearly defined problems exist that need prioritizing, the next step is gathering the decision-makers before running a structured prioritization play. These decision-makers should ideally be both business- and customer-focused, and mindful of existing resources and expertise before running the play; for example, if the team doesn’t have a strong front-end engineer, it likely won’t be able to execute a major design project. On the business side, the team should evaluate the organization’s needs: if the company is in survival mode, reducing user experience risk and generating revenue should be top of mind, while a company with room to innovate can take on more risk when prioritizing. The vision and mission are another point of business focus; they can bound and guide your prioritization strategy.

    We like two frameworks for prioritizing customer problems (and have used these templates to facilitate collaborative prioritization sessions):

    • The Impact Effort matrix

    • The RICE framework

    When it comes to having many solutions, you can also prioritize using the same frameworks, or you can run a comparative A/B-style user test with customer research prototypes (shown to various customers in differing orders). P.S. Google loves to do this via Google Opinion Rewards when looking for feedback on UI changes.
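
    To make the RICE framework concrete, here is a minimal scoring sketch in Python; the problem names and figures are hypothetical, and the impact/confidence scales are just one common rubric, so treat it as an illustration of the arithmetic rather than a prescribed tool. RICE scores each item as (Reach × Impact × Confidence) / Effort, so broader reach, higher impact, and higher confidence push an item up the list, while higher effort pushes it down.

      from dataclasses import dataclass

      @dataclass
      class Problem:
          name: str
          reach: float       # e.g. customers affected per quarter (hypothetical figures)
          impact: float      # e.g. 0.25 minimal, 0.5 low, 1 medium, 2 high, 3 massive
          confidence: float  # 0.0-1.0: how sure we are of the reach/impact estimates
          effort: float      # e.g. person-months of design and engineering work

          def rice_score(self) -> float:
              # RICE = (Reach * Impact * Confidence) / Effort
              return (self.reach * self.impact * self.confidence) / self.effort

      problems = [
          Problem("Confusing onboarding flow", reach=900, impact=2, confidence=0.8, effort=3),
          Problem("Slow report exports",       reach=300, impact=1, confidence=0.9, effort=1),
          Problem("Missing bulk editing",      reach=150, impact=3, confidence=0.5, effort=5),
      ]

      # Highest score = highest priority
      for p in sorted(problems, key=lambda p: p.rice_score(), reverse=True):
          print(f"{p.name}: {p.rice_score():.0f}")

    With these made-up numbers, the onboarding problem scores 480, the export problem 270, and the bulk-editing problem 45, which is the order a prioritization session would start from.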

  • Full contractorship

    We can approach each project as a full contractorship. We’ll set expectations up front around timing and how long we expect the project’s full design process to take. Once we align on time, we become full-time designers for the duration of the project; rather than a full onboarding or lay-of-the-land exercise, we digest only the context of that specific problem space. We attend rituals and demos, and integrate ourselves and our work with the squads. We also recommend an allocated point person with great collaboration skills (another designer, a product manager, or an engineering manager) for a weekly cadence of check-ins, to keep our work on track and to clarify anything that is unclear. Our work ends at delivery hand-off, or we can be extended into visual QA; if there is an incremental roll-out to production and you need us available, that’s an option too.

    Agency hand-off

    The next approach is more like an agency model, where you are completely hands-off with the problem and the work. We’ll estimate the work and hold an independent kick-off meeting with the product manager and engineering manager (and the squad, if that’s a good use of their time) to break down the design work and remove any ambiguity up front. We’ll follow up with a design plan using our design phases and will only be involved with the team at the checkpoints outlined (at the top of that page) or on a cadence we agree upon.

    To get more specific about working with squads and triad members, we envision using Notion, Figma, FigJam, and Loom (or collaborative tools of your choice) to come prepared for and facilitate any design rituals. We are also strong believers in async collaboration; we would be commenting and responding frequently in those collaboration tools. Define and Discover phase outputs like empathy maps or user flows should create a strong foundation with the PM, EM, and squad, ensuring we exchange technical knowledge and remove any blockers to moving forward.

    Having designed a myriad of UIs across varying industries, devices, and levels of scale, we are confident we can integrate quickly with the team and understand and tackle your problems and problem spaces.