UI/UX Feedback Standards — RBA Dev Team Portal v1.0.0
Testing Guide

Great UX feedback is specific, actionable, and grounded in user behavior. This guide shows you how to give feedback that actually improves the product.

Feedback vs Bugs: Know the Difference

Before you write anything, ask yourself: Is the system broken, or could it just be better? That distinction determines where your input goes.

Bug reports are for things that don't work as intended—broken validation, data not saving, buttons that do nothing, errors that shouldn't appear. Feedback is for things that work but could be clearer, faster, or more intuitive. Both are valuable, but they get triaged differently.

Quick Test

Can you fix this with code changes alone, or does it require a design decision? Code fix = bug report. Design decision = feedback. If you're not sure, default to feedback—we can always reclassify it later.

The Three Levels of Feedback Quality

Not all feedback is created equal. Here's how to move from surface-level reactions to insights that drive real improvements.

Low Quality → High Quality
  • Opinion: "I don't like the color scheme."
    Observation: "The blue accent (#4a7fbd) on the dark background (#0d1117) has a contrast ratio of 3.2:1, which fails WCAG AA for normal text. Users with low vision may struggle to read links."
  • Vague: "The form is confusing."
    Specific: "The Case Intake Form asks for 'Appeal Reason' before explaining what an RBA appeal is. New users won't know which checkbox to select without reading the handbook first."
  • Problem-Only: "The dropdown is hard to use on mobile."
    Problem + Context: "The county dropdown requires precise tapping on a 14px target. On mobile (375×667), this caused 3 mis-taps before I could select the right option. Consider increasing the touch target to at least 44×44px."

The pattern: Observation + Impact + Suggestion. Tell us what you saw, why it matters, and what might help. You don't need to solve the problem yourself, but giving context makes your feedback immediately actionable.
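Contrast claims like the one in the table above can be measured rather than eyeballed. As a minimal sketch, this is the WCAG 2.x relative-luminance and contrast-ratio calculation in plain Python (function names are illustrative; the 4.5:1 threshold is WCAG AA for normal text, 3:1 for large text):

```python
def _linearize(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per WCAG 2.x."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a '#rrggbb' color, in [0, 1]."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio between two colors; always >= 1.0."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Sanity check: pure white on pure black is the maximum possible ratio.
print(round(contrast_ratio("#ffffff", "#000000"), 1))  # → 21.0
```

Citing a computed ratio like this turns "the link is hard to read" into a verifiable observation John can act on immediately.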

What to Focus On

Good UX feedback covers more than just "does it look nice." Here are the dimensions to evaluate when you're reviewing a component.

Key UX Dimensions
  • Clarity: Can a first-time user understand what to do without instructions? Are labels descriptive? Is the next action obvious?
  • Efficiency: How many clicks/steps to complete a task? Could it be fewer? Are common actions easy to reach, or buried in menus?
  • Feedback: Does the system tell you what's happening? Loading states, success messages, error explanations—users should never wonder if their action worked.
  • Consistency: Does this component match the patterns used elsewhere in the system? Same button styles, same dropdown behavior, same terminology.
  • Error Prevention: Does the design help users avoid mistakes? Disabled states, confirmation dialogs, inline validation before submission.
  • Accessibility: Can you navigate with keyboard only? Are contrast ratios sufficient? Do screen readers get enough context?

How to Structure Your Feedback

When you submit feedback through the portal, follow this template to keep it organized and actionable:

Feedback Template
  • Component: Which page/form/feature you're reviewing. Be specific—"Case Intake Form" not "the form."
  • Observation: What you noticed. Stick to facts, not opinions. "The Save button appears below the fold on laptop screens" not "I wish the button was higher."
  • Impact: Why this matters. Who's affected? How does it slow them down or confuse them? "New users might not realize they need to scroll down to submit."
  • Suggestion (optional): If you have an idea for improvement, share it. But don't force it—sometimes just identifying the problem is enough.
  • Rating: How significant is this? Use the portal's 1-5 scale: 1 = minor polish, 5 = critical usability issue.
Example

Component: Dashboard — Case List

Observation: Case list sorts by most recent first, but there's no visual indicator that it's sorted. Users might assume it's unsorted or sorted alphabetically.

Impact: Arbitrators managing 50+ cases might waste time looking for a specific file number in the wrong order.

Suggestion: Add a small "Sorted by: Date (newest first)" label above the list, or make the Date column header show an arrow.

Rating: 3 — Medium priority, affects daily workflow.
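The template above maps naturally onto a small structured record. As a sketch only (the field names and validation here are hypothetical, not the portal's actual schema), the worked example could be modeled like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    """One atomic feedback item, mirroring the template above.
    Field names are illustrative, not the portal's real schema."""
    component: str                     # e.g. "Dashboard — Case List"
    observation: str                   # what you saw, stated as fact
    impact: str                        # who is affected and how
    rating: int                        # 1 = minor polish ... 5 = critical
    suggestion: Optional[str] = None   # optional; problem-only is fine

    def __post_init__(self) -> None:
        # Enforce the portal's 1-5 significance scale.
        if not 1 <= self.rating <= 5:
            raise ValueError("rating must be on the 1-5 scale")

item = Feedback(
    component="Dashboard — Case List",
    observation="List sorts newest-first, but nothing indicates the sort.",
    impact="Arbitrators with 50+ cases may scan in the wrong order.",
    suggestion="Label the sort, or show an arrow on the Date column header.",
    rating=3,
)
```

Note that suggestion is the only optional field: a submission with a clear observation and impact is complete even without a proposed fix.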

What Not to Do

Here are the feedback patterns that don't help and usually get deprioritized:

Avoid These
  • Personal taste without justification: "I don't like this color" tells us nothing. "This color fails contrast standards" tells us everything.
  • Feature requests disguised as feedback: "You should add a mobile app" isn't UX feedback on the current system—it's a product roadmap discussion.
  • Comparisons to other products: "Airtable does this better" isn't useful unless you explain how they do it and why it works.
  • Vague feelings: "This feels clunky" or "It just doesn't flow" without examples. What specifically felt clunky? Where did the flow break?
  • Multiple issues in one submission: Keep feedback atomic—one issue per submission. If you find five problems on the same page, file five feedback items.

Screenshots: When and How

For UX feedback, screenshots are even more valuable than for bug reports. Visual issues are hard to describe in words alone.

When to include a screenshot:
  • Layout concerns (spacing, alignment, overflow)
  • Visual hierarchy problems (what stands out vs what fades)
  • Mobile/responsive issues (how it looks at different sizes)
  • Labeling or copy clarity (show the confusing text in context)

Annotation helps: Use arrows, circles, or highlights to draw attention to the specific area you're talking about. A screenshot with a red circle around the problem saves us ten minutes of hunting.

Rating Your Feedback

The portal asks you to rate feedback on a 1-5 scale. Use this guide to calibrate your ratings:

Feedback Rating Scale
  • 5 — Critical: Blocks core workflows, major usability barrier. Example: Can't find the Save button, form structure completely unclear.
  • 4 — High: Significantly slows down users, causes frustration. Example: Dropdown requires scrolling through 254 unsorted counties with no search.
  • 3 — Medium: Noticeable friction but workarounds exist. Example: Button placement forces extra scrolling, but users can still complete the task.
  • 2 — Low: Minor polish, doesn't affect core tasks. Example: Inconsistent capitalization, slightly misaligned labels.
  • 1 — Nice-to-Have: Cosmetic improvement, no functional impact. Example: Could use a slightly larger font size for headers.
Calibration Tip

If you find yourself rating everything a 5, you're diluting the signal. Reserve top ratings for genuine blockers. A well-calibrated feedback score tells us exactly where to focus first.

After You Submit

Once your feedback is submitted, it goes into the same Submissions queue as bug reports. John reviews everything and may reach out for clarification or to let you know when changes are implemented. Keep an eye on your assignments dashboard for follow-ups.

Your feedback directly shapes the final product. The difference between a system that works and one that works well is often just a few dozen well-placed observations like yours.