UX Research

Chrono:
Usability Evaluation of Montreal's Transit App

Chrono is Montreal's public transit app. Our team conducted a real-world usability test to identify pain points in core features such as trip planning, OPUS card reloading, and saving favorite locations.

The goal was to uncover actionable insights to improve task completion, navigation, and overall user satisfaction.

Client

Project

Tools

Airtable, Google Forms, Figma, Audio Recording, Android Devices

Timeline

2025

Role

UX Researcher

Tools Used

We used Airtable to plan and organize research tasks, Google Forms for participant intake and post-test surveys, and Figma and FigJam to document insights and propose design recommendations. Usability sessions were recorded using audio tools on Android devices to capture participant interactions and feedback for later analysis.

Deliverables

  • Usability Test Plan

  • Heuristic Evaluations

  • Session Recordings & Observation Notes

  • Affinity Map & Key Findings

  • Design Recommendations Report

  • Presentation Deck summarizing results and actionable insights

Research Focus

The usability test explored three core user tasks within the Chrono app to identify pain points and improvement opportunities:

Planning a trip from A to B
Reloading an OPUS transit card
Saving a location to favorites

You Can’t Test Everything, So What Should You Test?

We adopted the Nielsen Norman Group's usability testing prioritization matrix to select the three tasks to examine, based on their relevance to core app functions and their potential impact on the user experience and business outcomes.

These focus areas were selected based on frequent user complaints and comparisons with competitor apps like Google Maps and Transit. The goal was to assess how smoothly users could perform everyday tasks and uncover where the app’s experience could be enhanced.

Usability testing prioritization matrix adopted from NNGroup (2021), showing task evaluation based on risk, business impact, and user impact.


1 = High
2 = Medium
3 = Low
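The prioritization above can be sketched as a small scoring script. The task scores below are hypothetical placeholders, not the exact values from our matrix:

```python
# Sketch of an NN/g-style task prioritization matrix.
# Scale: 1 = High, 2 = Medium, 3 = Low, so a LOWER total means HIGHER priority.
# The scores below are hypothetical, not the values from our actual matrix.
tasks = {
    "Plan a trip from A to B": {"risk": 1, "business_impact": 1, "user_impact": 1},
    "Reload an OPUS card": {"risk": 1, "business_impact": 2, "user_impact": 1},
    "Save a location to favorites": {"risk": 3, "business_impact": 2, "user_impact": 2},
}

def priority_score(scores: dict) -> int:
    """Sum risk, business impact, and user impact; the lowest total is tested first."""
    return scores["risk"] + scores["business_impact"] + scores["user_impact"]

# Rank candidate tasks from highest to lowest testing priority.
ranked = sorted(tasks, key=lambda name: priority_score(tasks[name]))
for name in ranked:
    print(priority_score(tasks[name]), name)
```

Summing equally weighted dimensions keeps the method transparent; a team could also weight risk more heavily if failures there are costlier.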

Target Users

Intended users: Public transit users in urban areas (commuters, students, etc.) 

We recruited six participants, aged 24–42, all regular public transit users in Montreal.

Some participants were experienced Chrono users, while others were first-time users.

Methodology

We conducted moderated in-person usability tests at real transit locations (near metro/bus stops) to capture authentic usage behavior.

Test Type: Moderated, in-person

Location: Near metro/bus stops to simulate real-world conditions

Data Collected:
Task success rate
Time on task
Clicks, errors
Think-aloud observations
Post-task interviews

Usability Test Plan

The purpose was to identify usability issues in the main user flows through real-time user testing and to pinpoint key pain points affecting navigation, task efficiency, and user satisfaction.

Participants: 6 users, in-person, moderated sessions
Measures: Task success, user behavior tags (Thinking, Error), direct quotes
Method: Think-aloud protocol + observational coding

Three team members held distinct roles during each session:
A note-taker for quantitative data
A note-taker for qualitative observations
The moderator

The Findings - QUANTITATIVE

50% error rate: 9 of 18 task attempts (6 participants × 3 tasks) had at least one error.

Overall rating of the app: 3.67 / 5 average

Average age: 33.2 years (range 24 to 42)

Primary transportation: Most use public transit (with combinations of walking, rideshare, and driving)

Accessibility consideration: All participants marked No

Tasks were perceived as easy, but error rates were high, suggesting users blamed themselves for issues.
Participants took longer and needed more steps than benchmark users to complete tasks.
Despite this, app ratings were decent, indicating a disconnect between perceived and actual usability.

We used Airtable to record survey responses with participants and later coded user steps and think-aloud comments to surface emerging themes.

Screenshot from our Airtable coding
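The headline numbers above can be recomputed directly from task-level logs. A minimal sketch, using the study's aggregate counts (the per-participant ratings shown are illustrative values chosen to match the 3.67 average, not our raw data):

```python
# Recomputing the headline usability metrics from task-level results.
# 6 participants x 3 tasks = 18 task attempts; 9 attempts had at least one error.
attempts_with_error = 9
total_attempts = 6 * 3

error_rate = attempts_with_error / total_attempts
print(f"Error rate: {error_rate:.0%}")  # -> Error rate: 50%

# Illustrative per-participant ratings (out of 5) that average to 3.67.
ratings = [3, 4, 4, 3, 4, 4]
avg_rating = sum(ratings) / len(ratings)
print(f"Average rating: {avg_rating:.2f} / 5")  # -> Average rating: 3.67 / 5
```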

The Findings - QUALITATIVE

Users Blaming Themselves, Not the App

Despite struggling during tasks, users often assumed the problem was their own fault, not the design.

“I figured I was doing it wrong—it’s probably just me.”
- Participant, 29, student

This mindset masked key usability issues and emphasized a need for:

  • Better in-app guidance

  • Clearer feedback

  • Intuitive iconography and labeling

Key Findings

Task-Specific Usability Issues

Trip Planning:
67% encountered confusion; only 2/6 completed with no errors.
No visible back button, swipe gesture not discoverable.
“I am afraid it is gonna take me back home.”

Input & Flow Issues:
No autosave or confirmation → users lost data when switching options.
"Arrive by" vs. "Depart at" confused 3/6 users.

Key Findings

UI & Feature Expectations

Recommendations

Trip Planning

Make the "Back" button visible on trip detail screens.
Add autosave so users do not lose entered data when switching options.
Add more route details (e.g., number of stops, walking time/distance).
Distinguish the "Arrive by" and "Depart at" selections more clearly; even a simple checkbox fix would help.

Recommendations

Improving the Favorites Feature

Users struggled with the icons and naming process when saving favorite locations. Many icons felt irrelevant, and requiring custom names for each location created friction.


Recommendations:
Simplify and clarify the icon set using familiar, intuitive symbols (e.g., leaf for parks, fork for restaurants).
Make naming optional by auto-filling location names.
Fix the confusing "Enter" key behavior to align with user expectations.

These changes reduce cognitive load and make the Favorites feature quicker and more user-friendly.

Recommendations

Card Loading

Reload errors should be the top priority to fix; the card reader flow needs further testing and investigation.

Simplify the fare history UI; separate active and expired fares.

Improve the information hierarchy of the purchase flow.

Figure 12 - Redesign recommendation for OPUS reload

Challenges & Future Work

Future Research Opportunities:

Running comparative testing with Google Maps to identify must-have features.
Conducting an accessibility audit to ensure usability for users with diverse needs.
A/B testing redesigned icon sets for better recognition and preference.
Gathering long-term feedback by tracking user behavior post-onboarding.

Reflections

I gained hands-on experience planning and moderating usability tests in real-world settings.
I saw how easily users blame themselves for poor UX, which reminded me of the importance of clear, intuitive design.
I observed that the think-aloud method affects user behavior and task timing; I would try remote or repeat-task methods in future studies.
I noted the limitations of a small, non-diverse participant group; future tests should include long-term and older users for deeper insights.
This project strengthened my confidence in research and deepened my empathy for users navigating unclear interfaces.

