Overview

During COVID, Wattpad saw a large spike in new users, and with it came a large spike in hate & harassment tickets that operationally overloaded our Support team. We saw this as an opportunity to build a long-overdue feature we had been advocating for: the ability to block users. But we quickly realized that Block couldn’t be the only solution to our user and business problems. We needed a holistic approach to safety on Wattpad, one that protected not only our users but also the internal teams on the front line.

 

Team: Audience trust

Product manager, Frontend + Backend engineers, Trust & Safety champions, Community Wellness team, Data champion

Role: Lead designer

Product strategy, UX flows & high-fidelity designs, Interaction design, Prototyping, Stakeholder management, QA

Understanding the problem space: Harassment on Wattpad

To kick off discovery, we first needed a holistic understanding of the current landscape of harassment on Wattpad: where it happens, who it impacts most, and where our existing systems fall short. Our goal was to identify high-risk negative interactions, the limitations of current safety features, and the underlying causes of unactionable support tickets.

 

1. Consulting the experts

We worked closely with stakeholders from our Trust & Safety and Community Wellness teams to gain foundational insight into harassment on Wattpad. Some of the things we wanted to know:

  • What does Wattpad consider to be harassment?
  • Which user segments receive the most bullying/toxic interactions: writers or readers?
  • What is an example of an unactionable hate & harassment report?
  • What operational issues were the team facing when dealing with hate & harassment tickets?

 

From these conversations, we identified “unactionable support tickets” as a key signal for design opportunity. These tickets typically reflected situations where users were harmed but lacked the tools to act — indicating clear UX gaps. If we could empower users with better controls, we could prevent these incidents from escalating to support.

Together with Trust & Safety, we agreed to use unactionable Hate & Harassment tickets as our core metric for harassment on Wattpad: a signal for issues users could have resolved on their own if the right safety tools had existed.

2. Mapping risk across the product ecosystem

To quantify and prioritize where these harmful interactions occurred, I mapped out all areas of social interaction across Wattpad (e.g., story comments, direct messages, profile posts), overlaying:

  • Volume of interactions (via analytics)
  • Existing safety tools available in each area
  • Support tagging behavior in Zendesk

 

Through collaboration with our Analytics team, we discovered that:

  • Story comments drive the overwhelming majority of social engagement—700M interactions/month
  • The next highest, Profile posts, had just 6M interactions/month

 

This data made it clear: story comments were the highest-risk surface, and likely our highest-leverage opportunity for safety improvements. I translated these findings into a Safety Landscape Map, helping the team visualize where current tools were failing and where the biggest gaps existed.
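
To give a sense of what the map looked like in data terms, here is a rough sketch of how each surface could be represented. The field names and the tool/tag values are illustrative assumptions; only the volume figures for story comments and profile posts come from the analysis above.

```typescript
// Illustrative sketch of rows in the Safety Landscape Map.
// Field names, safety tools, and Zendesk tags are assumptions for illustration.

interface SafetySurface {
  surface: string;              // where the interaction happens
  interactionsPerMonth: number; // volume from analytics
  safetyTools: string[];        // tools already available on that surface
  zendeskTags: string[];        // how Support tags tickets originating there
}

const landscape: SafetySurface[] = [
  {
    surface: "Story comments",
    interactionsPerMonth: 700_000_000,
    safetyTools: ["Report"],
    zendeskTags: ["hate_harassment", "unactionable"],
  },
  {
    surface: "Profile posts",
    interactionsPerMonth: 6_000_000,
    safetyTools: ["Report", "Mute"],
    zendeskTags: ["hate_harassment"],
  },
];

// Sorting by volume makes the highest-leverage surface obvious.
landscape.sort((a, b) => b.interactionsPerMonth - a.interactionsPerMonth);
console.log(landscape[0].surface); // "Story comments"
```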


3. Synthesizing user sentiment and qualitative insights

To ground our understanding in user sentiment, we pulled from multiple qualitative sources:

  • Wattpad’s monthly sentiment survey
  • App store reviews
  • 1:1 interviews with six top-performing writers, representing our most engaged and influential users

 

Across these sources, consistent pain points emerged:

  • Current tools don’t offer real protection — users felt that features like Mute weren’t effective.
  • Support experiences feel unresolved — users felt ignored or dismissed after reporting harassment.
  • Lack of control over visibility — writers expressed a strong need to prevent specific users from accessing their stories.
  • Emotional impact — users described feeling unsafe, unheard, and frustrated by the platform’s lack of urgency around harassment.

 

These insights underscored that feeling safe isn’t just about having tools—it’s about feeling seen, supported, and in control.

From feature to strategy

At the beginning of the project, the team was eager to build a Block feature; it had long been overdue on the platform. I used this research to reframe the problem and guide the team toward a broader, more strategic goal.

 

At kickoff, I presented my findings:

  • Many unactionable tickets resulted from users not having the tools to manage harassment on their own.
  • Emotional safety—feeling heard, supported, and in control—was just as important as functional tools.
  • A single, reactive solution like blocking only addressed part of the user journey and wouldn’t fully solve the broader trust and safety challenges.

 

I led a reframing of the team’s focus. Instead of shipping what we thought was a silver-bullet feature, we aligned on a broader strategic goal: design a set of safeguards that would help users feel safe while reducing how often negative interactions happened in the first place. These safeguards could be reactive (Block) or proactive (an anti-bullying campaign). By grounding our direction in real user needs and system-level gaps, we moved from delivering a single feature to building a more thoughtful, scalable approach to platform safety.

Ideation

With a clear understanding of the gaps in our current safety tools and the emotional pain points experienced by users, I led a cross-functional ideation workshop to generate and prioritize solutions.

The session included stakeholders from Engineering, Trust & Safety, and Community Wellness. We prioritized ideas based on user impact, feasibility, and alignment with our safety goals.

The top ideas included:

  • Hiding comments from muted users (fixing a critical gap in the Mute feature)
  • Updating the Report flow to better handle unactionable tickets
  • In-context education during reporting to guide users
  • Story Block, which allows writers to restrict access to their stories

This session helped shape our safety roadmap with a balance of quick wins and long-term features that empower users and reduce harm.

Solutions shipped

Comment Muting

  • Muting now hides a user’s comments across all stories — and hides yours from them (a minimal sketch of this rule follows this list).
  • Users can now protect themselves in the largest area of interaction on Wattpad (~700M+ comments/month).
  • Mute now covers all interaction areas (Private Messages, Public Conversations, Comments).
  • Unmuting restores all previously hidden comments.
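
As a minimal sketch of the visibility rule described above (the names `MuteStore` and `isCommentVisible` are hypothetical, not Wattpad’s actual implementation), the mute relationship is checked in both directions before a comment is shown:

```typescript
// Hypothetical sketch of the bidirectional mute rule; names and data shapes
// are illustrative only, not Wattpad's real data model.

interface Comment {
  id: string;
  authorId: string;
  body: string;
}

// Each user keeps a set of user IDs they have muted.
type MuteStore = Map<string, Set<string>>;

// A comment is hidden if the viewer muted its author OR the author muted the viewer.
function isCommentVisible(viewerId: string, comment: Comment, mutes: MuteStore): boolean {
  const viewerMuted = mutes.get(viewerId)?.has(comment.authorId) ?? false;
  const authorMuted = mutes.get(comment.authorId)?.has(viewerId) ?? false;
  return !viewerMuted && !authorMuted;
}

// Unmuting removes the relationship, so previously hidden comments simply
// reappear on the next fetch; no per-comment state needs to be restored.
function unmute(viewerId: string, targetId: string, mutes: MuteStore): void {
  mutes.get(viewerId)?.delete(targetId);
}

// Example: filter a comment thread for a given viewer.
const mutes: MuteStore = new Map([["alice", new Set(["troll42"])]]);
const thread: Comment[] = [
  { id: "1", authorId: "bob", body: "Loved this chapter!" },
  { id: "2", authorId: "troll42", body: "..." },
];
const visible = thread.filter((c) => isCommentVisible("alice", c, mutes));
console.log(visible.map((c) => c.id)); // ["1"]
```

In this sketch, unmuting only removes the relationship, so previously hidden comments reappear on the next load without any per-comment bookkeeping.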

"I don't like what I'm seeing" Report Option

  • Introduced a new report category for comments users dislike but that don’t violate community guidelines. A large portion of our unactionable harassment tickets came from cases like this.
    • Ex. "This person doesn’t support my HarryxLiam ship, I’m going to report them."
  • Redirects these reports away from the Harassment flow, easing the burden on Trust & Safety (see the routing sketch after this list).
  • Updated the confirmation screen with a Code of Conduct link and a Mute button to support user education and self-resolution.
  • Unified previously inconsistent designs and copy across iOS, Android, and Web.
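
Here is a rough routing sketch of how the new category bypasses the Trust & Safety queue and resolves in-product instead; all names and types are assumptions for illustration, not the production report service:

```typescript
// Illustrative sketch of routing the new report category away from the
// Trust & Safety harassment queue. All names here are assumptions.

type ReportCategory = "harassment" | "spam" | "dont_like_what_im_seeing";

interface ReportOutcome {
  routedTo: "trust_and_safety_queue" | "self_service";
  showCodeOfConductLink: boolean;
  offerMute: boolean;
}

function routeReport(category: ReportCategory): ReportOutcome {
  if (category === "dont_like_what_im_seeing") {
    // Not a guideline violation: resolve in-product instead of creating a ticket.
    return { routedTo: "self_service", showCodeOfConductLink: true, offerMute: true };
  }
  // Everything else still goes to Trust & Safety for review.
  return { routedTo: "trust_and_safety_queue", showCodeOfConductLink: false, offerMute: false };
}

console.log(routeReport("dont_like_what_im_seeing"));
// { routedTo: "self_service", showCodeOfConductLink: true, offerMute: true }
```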

Final Thoughts

We started the quarter focused on building a Block feature. But through deeper research, we realized what users really needed were tools to feel safe, supported, and in control of their experience.

 

By grounding our work in user insight and operational realities, we delivered features that not only improved platform safety but also reduced friction for our Trust & Safety team. Within weeks of launch, over 2.5 million mutes had been recorded, and more than 250,000 users had built a muted list of 1–10 users.

This project was one of the most fulfilling I’ve worked on. It reminded me how powerful design can be when it’s grounded in real user needs and aimed at building trust and community well-being.