Mobility and Street Design

Benchmarking 'Flow': Qualitative Measures for Evaluating Cyclist and Pedestrian Priority at Complex Intersections

Evaluating how well a complex intersection serves cyclists and pedestrians goes beyond counting delays or measuring Level of Traffic Stress. This guide introduces qualitative benchmarks for 'flow'—the perceived ease, safety, and continuity of movement for active travelers. We cover core concepts like intuitive wayfinding, conflict point density, and user comfort, then walk through a repeatable evaluation framework. Learn how to compare intersection designs using observational checklists, intercept surveys, and video analysis. The article also addresses common pitfalls such as over-relying on vehicle throughput metrics, ignoring vulnerable user priority at signal phases, and failing to account for all age and ability groups. With practical steps and a decision-making rubric, this resource helps planners, engineers, and advocates assess whether an intersection truly prioritizes active mobility—not just in theory, but in real-world use.

This overview reflects widely shared professional practices as of May 2026; verify critical details against current official guidance where applicable. Evaluating cyclist and pedestrian priority at complex intersections often relies on quantitative metrics like delay times or Level of Traffic Stress, but these numbers don't capture the full experience. This guide introduces qualitative benchmarks for 'flow'—the perceived ease, safety, and continuity of movement for active travelers—and provides a practical framework for assessment.

Why Qualitative Flow Matters for Active Travelers

At complex intersections—multi-lane roundabouts, signalized junctions with slip lanes, or grade-separated crossings—cyclists and pedestrians face unique challenges that quantitative metrics alone cannot capture. A low delay time might hide a terrifying merge with turning trucks, while a short wait at a signal could mask confusing wayfinding that forces users onto the road. Qualitative flow measures fill this gap by focusing on how people feel moving through an intersection: Is the path intuitive? Are conflict points predictable? Does the design reduce stress for all ages and abilities?

The Gap in Standard Evaluation

Common tools like the Highway Capacity Manual (HCM) provide Level of Service (LOS) for pedestrians and cyclists, but these are based on delay and space, not on perceived safety or comfort. Many practitioners report that LOS A intersections still feel dangerous to users because of high-speed turning traffic or poor sightlines. Qualitative benchmarks—such as the number of unexpected movements, the clarity of signage, or the presence of protective islands—offer a more holistic view. For example, a signalized crosswalk with a leading pedestrian interval may score poorly on delay but excellent on flow because users feel seen and prioritized.

Defining 'Flow' for Active Modes

In this context, flow refers to the uninterrupted, predictable, and comfortable movement of cyclists and pedestrians through an intersection. Key attributes include: continuity (few stops or diversions), predictability (user can anticipate other road user behavior), and comfort (adequate space, low speed differentials, and protection from weather or debris). These attributes are inherently qualitative but can be systematically observed and rated. Teams often find that focusing on flow reveals design flaws that delay-based metrics miss, such as a bicycle lane that disappears at the intersection or a pedestrian refuge island that is too narrow for a cargo bike or wheelchair.

Core Frameworks for Qualitative Assessment

Several frameworks exist to evaluate qualitative aspects of intersection design for active travelers. The most widely referenced include the Level of Traffic Stress (LTS) method, the Bicycle Compatibility Index (BCI), and the Pedestrian Environmental Quality Index (PEQI). Each offers a different lens on flow, and combining them provides a robust assessment.

Level of Traffic Stress (LTS)

LTS classifies facilities into four tiers based on the stress a typical cyclist or pedestrian would experience. LTS 1 is suitable for children and seniors; LTS 4 only for experienced, confident users. At intersections, LTS considers traffic speed, volume, number of lanes, and the presence of protected infrastructure. While LTS is semi-quantitative, its categories rely on qualitative judgments about user comfort. For example, a right-turn slip lane without a yield sign may raise LTS from 2 to 3 because of the uncertainty it creates. The limitation is that LTS does not account for intersection-specific factors like signal timing or visibility.
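To make the tiering logic concrete, here is a minimal sketch of an LTS-style classifier for one intersection approach. The thresholds are simplified assumptions for illustration, not the published LTS criteria; the slip-lane bump mirrors the 2-to-3 example above.

```python
# Illustrative LTS-style tier assignment for an intersection approach.
# Thresholds are simplified assumptions, not the official LTS tables.

def assign_lts(speed_mph: int, lanes: int, protected: bool,
               uncontrolled_slip_lane: bool = False) -> int:
    """Return a Level of Traffic Stress tier from 1 (low stress) to 4 (high)."""
    if protected and speed_mph <= 25 and lanes <= 2:
        tier = 1
    elif speed_mph <= 30 and lanes <= 2:
        tier = 2
    elif speed_mph <= 35 and lanes <= 4:
        tier = 3
    else:
        tier = 4
    # An uncontrolled slip lane adds uncertainty, bumping the tier up one
    # (mirroring the example in the text of LTS rising from 2 to 3).
    if uncontrolled_slip_lane:
        tier = min(tier + 1, 4)
    return tier

print(assign_lts(30, 2, protected=False, uncontrolled_slip_lane=True))  # 3
```

Because the categories are judgment-based, a team adopting something like this should calibrate the thresholds against its own field observations rather than treat them as fixed.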

Bicycle Compatibility Index (BCI) and Pedestrian Environmental Quality Index (PEQI)

BCI was originally developed for road segments, but adapted versions consider intersection crossing difficulty, including the presence of bike boxes, detection loops, and turning conflict points. PEQI evaluates pedestrian amenities such as crosswalk markings, curb ramps, lighting, and buffer width. Both indices use a mix of quantitative inputs (e.g., traffic volume) and qualitative ratings (e.g., sidewalk condition). When applied together, they highlight trade-offs: a wide crosswalk might score well on PEQI but poorly on BCI if cyclists must weave through pedestrians.

A Composite Qualitative Scorecard

For a practical evaluation, many teams create a custom scorecard that rates each intersection on five qualitative dimensions: Continuity (are paths direct and uninterrupted?), Predictability (can users anticipate conflicts?), Comfort (is there adequate space and protection?), Clarity (are markings and signs intuitive?), and Priority (do signals and geometry favor active modes?). Each dimension can be scored on a 1–5 scale based on field observations. The composite score gives a quick diagnostic of flow quality. For instance, an intersection with a protected bike lane that ends abruptly at the crosswalk would score low on continuity, even if comfort is high.
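The five-dimension scorecard can be sketched as a small data structure with a composite average. Dimension names follow the text; the example ratings are hypothetical field observations.

```python
# Minimal sketch of the five-dimension composite flow scorecard.
# Ratings are 1-5 per dimension; the composite is a simple mean.

DIMENSIONS = ("continuity", "predictability", "comfort", "clarity", "priority")

def composite_score(ratings: dict) -> float:
    """Average the 1-5 ratings across the five flow dimensions."""
    missing = set(DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Protected bike lane that ends abruptly at the crosswalk: comfort is
# high, but continuity drags the composite down.
obs = {"continuity": 1, "predictability": 4, "comfort": 5,
       "clarity": 3, "priority": 3}
print(composite_score(obs))  # 3.2
```

An unweighted mean keeps the diagnostic simple; teams that consider one dimension more critical could weight it, at the cost of comparability across studies.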

Step-by-Step Evaluation Workflow

Conducting a qualitative flow evaluation requires a structured approach. The following workflow has been refined through multiple projects and is adaptable to different intersection types.

Pre-Visit Preparation

Start by gathering intersection drawings, signal timing plans, and traffic volume data. Identify all user movements: cyclists going straight, turning left, or mixing with pedestrians. Review crash history for patterns involving vulnerable users. Create a blank scorecard with the five dimensions from the composite framework. Prepare equipment: video cameras (or a smartphone with a wide-angle lens), a clipboard, and a stopwatch. If possible, recruit at least two observers to reduce bias.

Field Observation: A Structured Protocol

Observe during peak and off-peak periods, ideally on a weekday. For each movement, record: the number of conflict points encountered (e.g., merging with right-turning vehicles), the time spent waiting (not just delay, but also perceived hesitation), and any instances of user confusion (e.g., cyclists dismounting or pedestrians backtracking). Use a consistent rating scale for each dimension. For example, for predictability: 1 = movements are chaotic and unpredictable; 5 = all users follow clear, expected paths. Take notes on weather and lighting, as these affect comfort. Video recordings allow later review for missed details.
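One way to structure the per-movement observation log described above is a small record type. Field names mirror the protocol; the event categories and example entries are illustrative.

```python
# Sketch of a per-movement observation record for the field protocol.
# Counts and the 1-5 predictability rating follow the text; specifics
# here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class MovementObservation:
    movement: str                 # e.g. "cyclist through, east approach"
    conflict_points: int = 0      # merges, crossing turn lanes, etc.
    hesitations: int = 0          # visible pauses beyond signal delay
    confusion_events: int = 0     # dismounts, backtracking, wrong paths
    predictability: int = 3       # 1 = chaotic ... 5 = clear expected paths
    notes: list = field(default_factory=list)

    def log_confusion(self, note: str) -> None:
        """Tally a confusion event and keep the narrative detail."""
        self.confusion_events += 1
        self.notes.append(note)

obs = MovementObservation("cyclist through, east approach", conflict_points=2)
obs.log_confusion("cyclist dismounted at slip lane")
print(obs.confusion_events)  # 1
```

Keeping counts and free-text notes in one record makes it easier to reconcile tallies against the video review later.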

Intercept Surveys and User Feedback

Brief interviews with 20–30 users can reveal perceptions that observers miss. Ask open-ended questions: 'How did you feel crossing this intersection?' 'Was there any moment that felt unsafe or confusing?' Record responses and look for recurring themes. One team found that a well-designed roundabout with a raised crossing was rated poorly by cyclists because the yield signs were not visible from the bike lane approach—a detail that video review confirmed. Combine survey insights with observational scores to adjust final ratings.

Analysis and Reporting

Compile scores from all observers and calculate averages. Highlight discrepancies—if two observers rate comfort differently, discuss the reasons and revisit video. Create a visual summary: a radar chart of the five dimensions for each movement direction. Identify the lowest-scoring dimension as the priority for redesign. For example, if clarity scores low, recommend improved signage or pavement markings. The final report should include before-and-after photos, user quotes (anonymized), and specific design recommendations tied to each low score.
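The score-compilation step above can be sketched in a few lines: average each dimension across observers, then flag the lowest-scoring dimension as the redesign priority. The observer scores here are hypothetical.

```python
# Sketch of the analysis step: per-dimension averages across observers,
# then the weakest dimension as the redesign priority.

from statistics import mean

observer_scores = [
    {"continuity": 4, "predictability": 3, "comfort": 4, "clarity": 2, "priority": 3},
    {"continuity": 4, "predictability": 2, "comfort": 5, "clarity": 2, "priority": 3},
]

averages = {
    dim: mean(obs[dim] for obs in observer_scores)
    for dim in observer_scores[0]
}
worst = min(averages, key=averages.get)
print(worst)  # clarity
```

Large per-dimension spreads between observers are the discrepancies worth discussing and rechecking against video before the averages go into the report.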

Tools and Practical Considerations

Qualitative evaluation does not require expensive software, but certain tools can enhance consistency and efficiency. This section covers low-tech and digital options, along with maintenance realities.

Low-Tech Toolkit

A printed scorecard, clipboard, and stopwatch are sufficient for basic evaluations. Use a simple tally sheet to count conflict types (e.g., near misses, abrupt stops). A measuring wheel can verify lane widths and refuge island dimensions. For intercept surveys, pre-printed cards with Likert scale questions speed up data collection. These tools are reliable, require no batteries, and work in any weather. The downside is manual data entry and potential observer fatigue on long sessions.

Digital Enhancements

Services like StreetLight Data or Google Maps can provide baseline traffic data, but for qualitative scoring, specialized tools like BikeFlow (a custom spreadsheet) or open-source video analysis software (e.g., Tracker) allow frame-by-frame review of user behavior. Some teams use 360-degree cameras mounted on a bicycle to capture the full intersection experience from the rider's perspective. Digital tools reduce observer bias and allow multiple reviewers to code the same video independently. However, they require training and may raise privacy concerns if pedestrians are identifiable.

Cost and Maintenance Realities

For a typical city department, a qualitative evaluation of five intersections costs roughly $2,000 to $5,000 in staff time (20–40 hours of observation and analysis) plus any equipment rental. Compared to a full traffic study, this is modest. The main maintenance challenge is keeping the evaluation criteria updated as design standards evolve. For example, the rise of e-bikes and cargo cycles has shifted expectations for lane width and turning radii. Teams should review their scorecard annually against current best practices from NACTO or local guidelines.

Growth Mechanics: Scaling Qualitative Evaluation

Once a single intersection has been evaluated, the process can be scaled to a corridor or district. This section explains how to build a program that delivers consistent, comparable results over time.

Building a Baseline

Start with a pilot evaluation of 3–5 intersections that represent common typologies (e.g., signalized T-junction, multi-lane roundabout, grade-separated crossing). Calibrate observers by scoring the same intersection independently and discussing differences until agreement reaches 80% or higher. This baseline ensures that future evaluations are consistent even if staff changes. Document the calibration exercise in a brief manual that includes example photos for each score level.
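A simple way to check the 80% calibration target is the share of dimension ratings on which two observers agree exactly (a looser variant could count ratings within one point as agreement). The ratings below are hypothetical.

```python
# Percent-agreement check for observer calibration: the fraction of
# shared dimensions on which two observers gave the same 1-5 rating.

def percent_agreement(a: dict, b: dict) -> float:
    """Exact-match agreement over the dimensions both observers rated."""
    shared = a.keys() & b.keys()
    matches = sum(1 for d in shared if a[d] == b[d])
    return matches / len(shared)

obs_a = {"continuity": 4, "predictability": 3, "comfort": 4, "clarity": 2, "priority": 3}
obs_b = {"continuity": 4, "predictability": 2, "comfort": 4, "clarity": 2, "priority": 3}
agreement = percent_agreement(obs_a, obs_b)
print(agreement >= 0.8)  # True
```

Exact-match agreement is strict for a 1-5 scale; if a team adopts the within-one-point variant instead, it should say so in the calibration manual so future rounds are comparable.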

Expanding to Corridors

For a corridor, evaluate intersections at 0.5-mile intervals and also assess mid-block crossings. The qualitative scores can be mapped along the corridor to identify 'hot spots' of poor flow. Use a weighted average that considers traffic volume and user counts to prioritize improvements. For example, an intersection with low clarity but high pedestrian volume should be addressed before a low-volume intersection with similar scores. This approach aligns with Vision Zero and Safe Systems principles by focusing on high-risk locations.
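The volume-weighted prioritization described above can be sketched as a simple ranking: urgency rises as the flow score falls and the user count rises. The formula and the site names below are illustrative choices, not a standard.

```python
# Sketch of volume-weighted prioritization along a corridor: invert the
# 1-5 flow score and scale by exposure. Formula and sites are
# hypothetical illustrations.

def priority_rank(flow_score: float, daily_users: int) -> float:
    """Higher value = fix sooner: worse flow, more people exposed."""
    return (5 - flow_score) * daily_users

sites = {
    "Main & 5th (low clarity, busy)": priority_rank(2.5, 4000),
    "Elm & 9th (similar score, quiet)": priority_rank(2.5, 400),
}
top = max(sites, key=sites.get)
print(top)  # Main & 5th (low clarity, busy)
```

With identical flow scores, the busier intersection ranks first, which matches the prioritization logic in the text.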

Tracking Changes Over Time

Re-evaluate after design changes (e.g., adding a bike box, adjusting signal timing, installing a refuge island). Compare pre- and post-intervention scores to measure improvement. One city redesigned a complex five-leg intersection by adding a protected intersection design (raised crosswalks, corner islands, and dedicated bike signals). The qualitative flow score improved from 2.1 to 4.3 out of 5, and user satisfaction surveys showed a 60% reduction in reported stress. Documenting such before-and-after cases builds a business case for continued investment.

Risks, Pitfalls, and Mitigations

Qualitative evaluation is powerful but prone to specific errors. Recognizing these pitfalls helps ensure reliable results.

Observer Bias

Observers may unconsciously favor familiar designs or penalize unusual ones. Mitigation: use multiple observers from diverse backgrounds (engineers, planners, community advocates) and blind them to the evaluation purpose if possible. Calibration exercises reduce individual drift. Another approach is to use video review with a standardized coding form, which allows independent scoring by different raters.

Confusing Comfort with Familiarity

Experienced cyclists may rate a high-stress intersection as 'comfortable' simply because they are used to it. This skews results toward the needs of confident users. Mitigation: include diverse user groups in intercept surveys, especially those who avoid the intersection currently. Ask about willingness to let a child or elderly relative use the facility. This reveals the true stress level for vulnerable populations.

Overreliance on Quantitative Thresholds

Some teams try to convert qualitative scores into rigid numerical thresholds (e.g., 'flow score below 3 requires redesign'). While useful for prioritization, these thresholds can mask nuance. Mitigation: use qualitative scores as a diagnostic, not a pass/fail. Pair them with quantitative metrics like crash rates and delay to make balanced decisions. For example, an intersection with a flow score of 2.5 but no crashes may still need improvement if users report high stress.

Frequently Asked Questions and Decision Checklist

Common Questions from Practitioners

Q: How many observers do I need?
A: At least two per intersection to reduce bias. For large studies, three to four is ideal.

Q: Can I evaluate an intersection in under an hour?
A: A quick scan (15 minutes) can give a rough sense, but a thorough evaluation requires 45–60 minutes per movement direction, plus survey time.

Q: Should I evaluate during rain or snow?
A: Yes, if possible, because weather dramatically affects comfort and visibility. Note conditions in the report.

Q: How do I handle intersections with no cyclists observed?
A: That itself is a signal of poor flow—users are avoiding it. Score based on the design and hypothetical user experience, and note the absence.

Decision Checklist for Using Qualitative Flow Measures

  • When to use: Comparing design alternatives; evaluating a new intersection before opening; identifying priority locations for a bike/ped master plan; assessing user experience after a redesign.
  • When not to rely solely on qualitative measures: When legal compliance with a specific standard (e.g., ADA or MUTCD) is the primary question; when crash data is the main concern; when resources are too limited for field observation.
  • Checklist for a robust evaluation:
    ☐ Define scope (intersections, movements, time periods).
    ☐ Recruit and calibrate observers.
    ☐ Prepare scorecards and equipment.
    ☐ Conduct field observations (minimum two sessions).
    ☐ Collect user feedback (at least 20 responses per intersection).
    ☐ Analyze and report with visual aids.
    ☐ Share results with stakeholders and discuss next steps.

Synthesis and Next Actions

Qualitative flow measures fill a critical gap in how we evaluate cyclist and pedestrian priority at complex intersections. By focusing on continuity, predictability, comfort, clarity, and priority, practitioners can uncover design flaws that delay-based metrics miss. The step-by-step workflow—preparation, structured observation, user feedback, and analysis—provides a repeatable method that any city or advocacy group can adopt. While pitfalls like observer bias and familiarity effects exist, they can be mitigated with multiple observers and diverse user surveys.

To get started, select one intersection that your community has flagged as problematic. Conduct a pilot evaluation using the composite scorecard and intercept survey template. Compare the results with existing quantitative data and present the findings to your team. Even a single evaluation can shift the conversation from 'how many seconds of delay' to 'how does it feel to cross here?'—a perspective that ultimately leads to safer, more inclusive streets.

About the Author

This article was prepared by the editorial team for this publication. We focus on practical explanations and update articles when major practices change.

Last reviewed: May 2026
