Duplicate Management

Senior Product Designer

2025


Background

The cost of duplicates

Venture capital firms are always searching for their next breakout company. Just one successful investment can transform a firm's reputation and make raising future funds significantly easier.

For individual investors, a single successful investment can define an entire career and establish lasting credibility, so it's no surprise investors gravitate toward tools that give them a competitive edge.


A tool like Affinity helps investors do their jobs better by surfacing insights from their existing network, uncovering opportunities they might not have otherwise seen.


Since these insights are the foundation of the Affinity platform, clean and accurate data is essential. However, because data is imported from multiple sources, such as emails, calendars, and CSVs, duplicates often surface, creating confusion and preventing users from seeing accurate insights.


At the time of this project, Affinity was in the midst of a company-wide initiative to Nail the Basics, a strategy focused on strengthening the platform's foundation before shipping new features. The initiative was driven by customer feedback, the lion's share of which prioritized trust in the data over shiny new features.

Reducing duplicates naturally became a key part of this initiative to establish a strong base for data quality before building toward more advanced capabilities.

👥 Team

I worked in a cross-functional squad that included 1 PM, 1 front-end engineer, and a team of back-end engineers. I led weekly syncs with engineering to stay aligned on progress.

💼 Role

I was the sole designer and led the end-to-end design process from problem definition through launch. Some UX research had been conducted prior to my joining.

PROBLEM

The current experience was seen as inaccurate and untrustworthy

At the start of the project, Affinity already offered a duplicate manager that allowed users to merge duplicate people and companies. However, the experience was falling well short of customer expectations and was perceived as untrustworthy. In a survey, participants rated the current Duplicate Manager 1.6 out of 5 for usefulness.

Improving the duplicate manager had become one of the most requested features, with more than 100 customers submitting feedback citing duplicate-related frustrations.


At the core, the problem existed in two areas:

  1. Lack of identifiers

The duplicate manager lacked the identifiers users needed to feel confident in their decisions. The information displayed didn't align with what users needed, making it difficult to determine whether matches were true positives.

Without these identifiers, merging felt risky, which discouraged any action from being taken.

  2. Inaccurate matches

The algorithm often surfaced incorrect matches, which made it feel inconsistent and unreliable. As a result, users avoided using the system entirely and relied more on manual workarounds like merging records directly from profiles, which came at a cost of around 20 seconds per duplicate to resolve.


These issues all compounded into a deeper root problem, which was a lack of trust in the system. Users didn’t feel confident committing merges from the duplicate manager, which ultimately left data quality issues unresolved and resulted in larger downstream problems.

GOALS

Building trust to drive efficiency and improve data quality

We kicked off by recruiting participants based on high activity in Productboard and Amplitude. While we had a grasp of the core problems from prior feedback, we needed to uncover the underlying nuances driving those issues. To do this, we conducted eight sessions with users to gain a deeper understanding of the challenges they faced with the current experience.


With a better grasp of the current issues, we moved into problem definition sessions that involved cross-functional partners across product, design, and engineering. These exercises leveraged frameworks like How Might We questions and the 5 Whys to uncover underlying user challenges.


Our hypothesis was that by surfacing relevant identifiers and providing more transparency into how matches are determined, users would be able to build trust over time and eventually resolve duplicates with more efficiency.

To measure success, we focused on adoption and data quality. We wanted to increase usage through higher merge rates, which could ultimately lead to a reduction in duplicates.


Explorations

Designing with long-term impact in mind

I explored a range of layout variations, each offering a slightly different experience. These included an accordion pattern optimized for efficiency and speed, and a full-page layout that prioritized trust and confidence.

I reconnected with the customers we had spoken with and walked them through the different design options in prototype sessions.


The accordion layout was more efficient and allowed users to quickly scan details without needing to navigate to another page, while the full-page layout was more thorough and provided a more focused environment for evaluating details.

However, since our eventual goal was to enable users to confidently bulk merge for maximum efficiency, we prioritized the full-page layout, as it better aligned with our goal of building trust. Users were unwilling to bulk merge unless they felt fully confident in the system, so while the accordion layout was more efficient, it didn’t support the long-term behavior we aimed to drive.


Usability Testing

Addressing friction points identified in testing

After multiple rounds of feedback, we conducted usability testing with six participants. While users were able to complete most core tasks with ease, the sessions surfaced a few areas where they struggled or identified opportunities for improvement, which we addressed following the testing.


After sharing the usability testing results with the broader product organization, we aligned with engineering to assess feasibility and prioritized updates that were low effort and high impact.


Final Designs

All roads lead back to building trust


Our goal with the list view was to surface the most relevant details upfront, allowing users to act with confidence directly from this view. In parallel with engineering improvements to the algorithm, we made those improvements visible by introducing confidence scores and match reasons to explain why records were identified as duplicates.
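To make the idea of confidence scores and match reasons concrete, here is a minimal illustrative sketch of a duplicate scorer of the general kind described above. The field names, weights, and thresholds are assumptions for the example, not Affinity's actual algorithm.

```python
# Illustrative only: score a candidate duplicate pair and explain why it
# matched. Weights and the 0.85 name-similarity cutoff are assumed values.
from difflib import SequenceMatcher

def match_score(a: dict, b: dict) -> tuple[float, list[str]]:
    """Return a confidence score (0-1) and human-readable match reasons."""
    score, reasons = 0.0, []
    # An exact email match is treated as the strongest signal.
    if a.get("email") and a.get("email") == b.get("email"):
        score += 0.6
        reasons.append("Same email address")
    # Fuzzy name similarity contributes the remainder of the score.
    name_sim = SequenceMatcher(None, a.get("name", "").lower(),
                               b.get("name", "").lower()).ratio()
    if name_sim > 0.85:
        score += 0.4 * name_sim
        reasons.append(f"Similar names ({name_sim:.0%} match)")
    return min(score, 1.0), reasons

score, reasons = match_score(
    {"name": "Jon Smith", "email": "jon@acme.com"},
    {"name": "Jon Smyth", "email": "jon@acme.com"},
)
```

Surfacing `reasons` alongside the score is what turns an opaque number into something users can sanity-check themselves.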


We learned that while transparency helped, users still wanted to directly inspect the underlying data to fully understand the system. Rather than being asked to just "trust" the algorithm, they wanted to verify it themselves, with the added ability to control which values were carried into the merged record.

The detail view addressed this by presenting all fields side by side, making it easy to compare records and select which values to include in the merged outcome. A profile preview that updated in real time displayed the final result, giving users more confidence before completing a merge.
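The field-level selection behind that preview can be sketched as a simple merge function: the user picks which record's value survives for each field, and the preview is just the resulting record. The record shape and function names here are illustrative assumptions, not the production implementation.

```python
# Illustrative sketch: build the merged record from the user's per-field
# selections, so the live preview can display it before the merge commits.

def merge_records(primary: dict, duplicate: dict, keep_from_duplicate: set) -> dict:
    """Return the merged record, taking selected fields from the duplicate."""
    merged = dict(primary)  # start from the primary record's values
    for field in keep_from_duplicate:
        if field in duplicate:
            merged[field] = duplicate[field]
    # Fields present only on the duplicate are carried over so no data is lost.
    for field, value in duplicate.items():
        merged.setdefault(field, value)
    return merged

preview = merge_records(
    {"name": "Acme Inc", "domain": "acme.com", "stage": "Seed"},
    {"name": "Acme, Inc.", "domain": "acme.com", "employees": 42},
    keep_from_duplicate={"name"},
)
# `preview` reflects the user's selections before anything is committed
```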


We also introduced the ability to unmerge, which was consistently highlighted as a key driver of confidence: users felt reassured they could safely move forward, knowing any unintended outcome could easily be undone.
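One common way to support unmerge, sketched below under assumed storage details, is to snapshot the original records at merge time so the operation can later be reversed.

```python
# Illustrative sketch: record the pre-merge state so a merge can be undone.
# The in-memory log and function names are assumptions for the example.
merge_log: dict[str, tuple[dict, dict]] = {}

def merge_with_undo(merge_id: str, primary: dict, duplicate: dict) -> dict:
    """Merge two records, snapshotting the originals for a later unmerge."""
    merge_log[merge_id] = (dict(primary), dict(duplicate))  # keep copies
    merged = {**duplicate, **primary}  # primary's values win on conflicts
    return merged

def unmerge(merge_id: str) -> tuple[dict, dict]:
    """Restore the two original records from the snapshot."""
    return merge_log.pop(merge_id)
```

Because the snapshot is taken before the merge mutates anything, an unmerge is a straightforward restore rather than a best-effort reconstruction.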


As users built more trust in the system, they had the option to bulk merge, enabling greater efficiency. Many also wanted the option to merge all records in a single action instead of in batches. To support this, we introduced a “merge all” option with clear safeguards for maximum throughput.
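One natural safeguard for a "merge all" action, sketched here with an assumed threshold value, is to auto-merge only the pairs above a confidence cutoff and route the rest to manual review.

```python
# Illustrative sketch: a guarded "merge all" that splits candidate pairs
# into auto-merge and needs-review buckets. The 0.9 cutoff is an assumption.

def merge_all(pairs: list[tuple[float, str]], threshold: float = 0.9):
    """Split (confidence, pair_id) candidates by the confidence threshold."""
    merged = [pid for score, pid in pairs if score >= threshold]
    review = [pid for score, pid in pairs if score < threshold]
    return merged, review

merged, review = merge_all([(0.97, "pair-a"), (0.62, "pair-b"), (0.91, "pair-c")])
```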


Impact

Measuring impact through quantitative and qualitative feedback

Within three months, merge completions increased 3.4x, driven by a 530% increase in company merges (3,630 vs. 576) and a 152% increase in people merges (5,542 vs. 2,199), exceeding our adoption goals.

Within six months, duplicates also decreased by 55%, which demonstrated significant improvements in data quality.


After launch, we received a lot of positive feedback from users, including some from our Big 10 Customers, whom we consistently prioritized due to their significant impact on revenue.


Takeaways

Trust is earned through clarity, not just accuracy

While there were many takeaways from this project, if I had to boil it down to just one, it's that trust isn't earned through accuracy alone; it's also built through transparency and clarity.

When a system has a reputation for being untrustworthy, users approach it with skepticism and won't take any information at face value. Even with improvements to the algorithm, we initially believed that exposing confidence scores and match reasons would be enough to build users' trust. However, after multiple feedback sessions, users still remained hesitant.

This led us to shift our approach: instead of asking users to simply trust the system, we gave them the tools to verify it themselves. By surfacing the underlying data and giving them control over how records were merged, we built up their confidence and, in turn, strengthened their trust in the system.

© Eric Hishinuma 2026
