Company: ByteDance
Product: Pangle
Project Duration: 09.2021 - 10.2021 (1 month)
My Role: UX Researcher
Methods: Survey, Interview
Collaboration: 5 Product managers, 3 Product designers, 1 UX researcher
Pangle is an advertising network of ByteDance. It enables application development companies to request advertisements through the platform and generate revenue.
Pangle has improved significantly over the last six months, with many new features added (e.g., additional data analysis methods, dedicated test tools for advertising strategy, an alarm system, and growth diagnosis).
The product team wanted to know how well these changes landed and to learn more about its users.
How does user satisfaction with Pangle vary across different user segments and roles within their companies?
Which features or improvements have had the most significant impact on user experience?
What are the current satisfaction levels?
How do users perceive the new features and what are their usage patterns?
What are the differences in usage and satisfaction levels among various user groups?
Survey: Collect quantifiable data from a large group of users to assess overall satisfaction, utility, and comprehension, helping to identify general trends and statistical correlations between different user segments and their experiences.
Interview: Provide in-depth, qualitative insights into individual user experiences, utility, and comprehension, with a detailed exploration of users' perceptions, challenges, and the specific features they find beneficial or problematic.
Goal:
Gather statistical data on attitudes, behaviors, and pain points in each section of the Pangle platform.
Enable comparison with the previous satisfaction scores.
Identify potential participants and key focus questions for the interview phase.
Define Metrics:
Previous satisfaction results contained only feature and usability assessments, which cannot fully reflect the overall experience. Based on the product features, stakeholder interviews, and key user experiences, the primary metrics were defined as Features and Operation, Monetization Effectiveness, Ads Quality, Learning Resources, and Service Support.
Survey Structure and Logic:
Data cleaning: Based on response time, I removed responses that were too fast (under 100 seconds) or too slow (over 999 seconds) to ensure data quality.
Sample size: n = 58
Analysis: Used RStudio and Excel to analyze the survey data. Sample code from the analysis:
# Load the survey results
library(readxl)
my_data <- read_excel("SurveyResult.xlsx", sheet = 2)

# Clean data: keep responses between 100 and 999 seconds
my_data <- my_data[my_data$response_time >= 100 & my_data$response_time <= 999, ]

# Summary statistics for the overall satisfaction score
summary_stats <- summary(my_data$overall)
mean_score <- mean(my_data$overall)
percentage_above_4 <- sum(my_data$overall > 4) / nrow(my_data) * 100

# Print summary statistics
cat("Summary Statistics for October Scores:\n")
print(summary_stats)
cat("Mean of October Scores:", mean_score, "\n")
cat("Percentage of Scores Above 4:", percentage_above_4, "%\n")

# Compare October scores with the previous April result
# (two-sample t-test on the two score columns)
t_test_results <- t.test(my_data$overall_Oct, my_data$overall_April)
print(t_test_results)

# Correlation between overall satisfaction and a primary metric
correlation_result <- cor(my_data$overall, my_data$sub1, use = "complete.obs")
cat("Correlation between Overall and Sub1:", correlation_result, "\n")
To identify the key areas needing improvement, I ran a correlation analysis comparing the overall satisfaction score with each primary metric.
To verify the product improvements, I ran a t-test comparing this October's satisfaction scores with the previous April's results.
1. Based on the correlation analysis, the product's current priority should be improving monetization effectiveness, while the product's features and operation already meet user expectations.
2. There is a significant difference in satisfaction between key customers and medium- and small-sized companies. This difference suggests that different segments have varying expectations and experiences with Pangle. Key customers' satisfaction may be higher due to personalized customer support or solutions, while medium- and small-sized companies may feel underserved or find the solutions insufficiently customized for their needs.
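The segment gap above can be checked with a two-sample t-test on the overall scores grouped by account segment. A minimal sketch in R, using simulated values and hypothetical column names (segment, overall) rather than the real survey schema:

```r
# Hypothetical data: overall satisfaction by account segment
# (values and column names are illustrative, not the actual survey data)
seg_data <- data.frame(
  segment = c(rep("key", 6), rep("smb", 6)),
  overall = c(4.5, 4.8, 4.2, 4.6, 4.9, 4.4,
              3.6, 3.9, 3.2, 3.8, 3.5, 3.4)
)

# Welch two-sample t-test: does mean satisfaction differ between
# key customers and medium/small companies?
seg_test <- t.test(overall ~ segment, data = seg_data)
print(seg_test)
```

The formula interface (`overall ~ segment`) is the natural fit here because the grouping variable is a two-level factor, unlike the October-vs-April comparison, which passes two score vectors directly.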
Goal:
Dive into emotional and cognitive responses to the platform and features.
To assess how well users understand and utilize the platform's features, identify gaps in knowledge or usability issues.
Interview Structure and Logic:
Introduction: Understand the basic information of the company size, industry, and role to help further classification and understanding.
Experience with Other Advertising Platforms: Understand their skill level and previous experience with the advertisement platforms.
Daily Routine on Pangle: Understand the user flow and experience, which informs the follow-up questions (e.g., whether to probe the technical integration or the dashboard experience).
Detailed Walkthrough of All Functions: Evaluate the specific user experience and dig into detailed reasons for being satisfied or dissatisfied with specific features.
Based on the survey results and the product operations team's requirements for the interviews, I interviewed 16 companies spanning different account sizes and roles.
Color-coded the transcriptions into behaviors, goals, attitudes, and pain points.
Using the color codes, categorized the most frequently mentioned points in each journey stage to form the user journey map.
Summarized each participant's basic information and their typical behavior, motivation, and mindset.
Based on summaries of each participant, I explored various methods to categorize users according to key information. The most significant factor for grouping participants was their experience with advertising platforms, which influenced their mindsets and behaviors.
1. User Journey: The user journey map combines the interview findings with the survey satisfaction scores, giving the product team a clear view of the experience of each feature, including how users utilize it, their goals, pain points, needs, and areas for future improvement.
2. User Segmentation: User segmentation is based on users' previous experience and their typical behavior and mindset on the Pangle platform. There are three distinct groups, ranging from beginners to experts.
This segmentation helps the product team understand different users and design personalized experiences, such as targeted training courses, so users can get up to speed on the platform quickly and improve their monetization capability.
The combined analysis of survey and interview data in the report and the established satisfaction metrics have significantly impacted the project by pinpointing key improvement areas and providing actionable insights.
The product team first came to me wanting to confirm they had done a good job; instead, I presented metrics they could follow to improve, and the report was read by the whole team over 100 times.
The product team assigned each P0 and P1 issue in my report to a specific person to improve.
The research has streamlined decision-making, prioritized product development efforts, and established a quantitative benchmark for assessing user satisfaction over time. It has enabled a focused, data-driven strategy for enhancing the Pangle platform's user experience, aligning improvements with user needs, and facilitating a culture of continuous improvement.
This project emphasizes the value of a mixed-methods approach in user experience research, highlighting how qualitative insights from interviews can deepen and contextualize the understanding gained from quantitative survey data. This comprehensive approach facilitated targeted, informed decisions for platform improvement and set a foundation for measuring ongoing user satisfaction effectively.
Sample Diversity: The interviews may have captured a limited range of user perspectives, potentially missing nuanced feedback from less-represented user segments.
Time and Resource Constraints: Conducting thorough interviews and in-depth analysis within tight timelines or with limited resources impacted the depth and breadth of insights that could be gathered.
Statistical Methods: I used correlation to identify the primary metrics influencing the overall satisfaction score. To further understand the relationships between these factors, I could apply linear regression. Combining these methods would show how the various metrics both correlate with and predict overall satisfaction, ensuring actionable insights.
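As a sketch of that follow-up, the five primary metrics could be entered as predictors in a linear model. The data below are simulated for illustration only, and the column names mirror the primary metrics rather than the real survey schema:

```r
# Simulated survey data (n matches the 58 cleaned responses);
# the response is driven mainly by monetization effectiveness by design.
set.seed(42)
n <- 58
metrics <- data.frame(
  features     = runif(n, 2, 5),
  monetization = runif(n, 2, 5),
  ads_quality  = runif(n, 2, 5),
  learning     = runif(n, 2, 5),
  support      = runif(n, 2, 5)
)
metrics$overall <- 0.6 * metrics$monetization +
  0.2 * metrics$features + rnorm(n, sd = 0.3)

# Regress overall satisfaction on the five primary metrics
model <- lm(overall ~ features + monetization + ads_quality +
              learning + support, data = metrics)
summary(model)
```

Unlike pairwise correlations, the regression coefficients estimate each metric's contribution while holding the other metrics constant, which is what makes the priority ranking actionable.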
Stakeholder Management: During the research process, stakeholders such as product managers and designers joined the user interviews. Their participation sometimes led to biased questions that could influence user responses. To mitigate this, I should establish clear guidelines for stakeholders before interviews, helping them trust the research process and understand the importance of unbiased data collection.