A pixel art illustration showing the requirements communication problem. A customer describes a tree swing, but different team members imagine different interpretations - a developer sees complex ropes, a designer sees art, a tester sees safety features. The delivered product differs from all expectations. Tagline: What They Said. What We Heard. What They Needed.

CS 3100: Program Design and Implementation II

Lecture 9: Interpreting Requirements

©2025 Jonathan Bell, CC-BY-SA

Learning Objectives

After this lecture, you will be able to:

  1. Explain the overall purpose of requirements analysis and why it matters
  2. Enumerate and explain three major dimensions of risk in requirements analysis
  3. Identify the stakeholders of a software module, along with their values and interests
  4. Describe multiple methods to elicit users' requirements

"I Need a System to Help My TAs Grade More Efficiently"

"Can you build something by next month?"

Sounds straightforward, right? But consider:

  • What does "efficiently" mean?
  • What specific problems are TAs facing?
  • What does "grading" include?
  • Is it just running tests, or also feedback, regrades, statistics?

What the Developer Built vs. What Was Actually Needed

Version 1: What was built

  • Run test cases on submissions
  • Return a numerical score
  • Store the final grade

Version 2: What was needed

  • Run automated tests on code
  • Check code style compliance
  • Enable inline commenting on code
  • Preserve previous grading on resubmit
  • Notify original graders of resubmissions
  • Prevent same grader from repeatedly grading same student
  • Randomly sample 10% for quality review
  • Track grader workload for fair distribution
  • Support multi-stage regrade requests
  • Integrate with university grade system

⚠ The difference: hundreds of hours of wasted work, frustrated users, missed deadlines.
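
To make the gap concrete, here is a minimal sketch (in Python, with illustrative names only, not the actual system) of how narrowly "Version 1" can be read versus the state the real requirements imply:

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    student_id: str
    passed_tests: int
    total_tests: int

def grade_v1(sub: Submission) -> float:
    """Version 1: run test cases, return a numerical score, store the grade."""
    return 100.0 * sub.passed_tests / sub.total_tests

# Version 2 needs far more state than a single number: who graded it, inline
# comments, history preserved across resubmissions, regrade activity, and
# whether the submission was sampled for quality review.
@dataclass
class GradeRecord:
    score: float
    grader_id: str
    inline_comments: list[str] = field(default_factory=list)
    previous_scores: list[float] = field(default_factory=list)  # kept on resubmit
    sampled_for_review: bool = False
    regrade_requests: list[str] = field(default_factory=list)

if __name__ == "__main__":
    print(grade_v1(Submission("s123", passed_tests=9, total_tests=10)))  # 90.0
```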

The Requirements Communication Problem Is Not New

The classic tire swing cartoon showing how requirements get lost in translation: what the customer described, what was designed, what was built, and what the customer actually needed.

Versions of this image have supposedly circulated since the early 1900s. Christopher Alexander used it in The Oregon Experiment (1975).

The Cost of Fixing Requirements Errors Grows Exponentially

Exponential cost curve showing: Requirements (1x), Design (5x), Implementation (10x), Testing (20x), Deployment (100x). Below, a crack in requirements cascades into larger problems at each stage.

Requirements Analysis Is Not Just "Asking What They Want"

Users often:

  • Don't know what they want until they see it
  • Can't articulate their needs clearly
  • Focus on solutions instead of problems
  • Have conflicting needs with other stakeholders
  • Change their minds as they learn more

Key insight: Users are experts in their domain. The goal is to engage them as design partners, not extract requirements like mining ore.

Extractive vs. Participatory Approaches

Extractive Approach ❌

Dev: "What features do you need?"

Prof: "I need automated grading."

Dev: "OK, I'll build an autograder."

[Much Later...]

Prof: "This isn't what I wanted!"

Participatory Approach ✓

Dev: "Tell me about your grading challenges."

Prof: "I spend weekends grading, students complain about inconsistency."

Dev: "Can you show me the process?"

Prof: [Shows] "I check correctness, style, approach..."

Dev: "What if we automated mechanical parts so you could focus on pedagogy?"

Prof: "That would change how I design assignments!"

Collaborative Discovery Reveals Hidden Requirements

Professor: "I need the system to be fair."

You: "What does 'fair' mean to you?"

Professor: "Students complain some TAs grade harder than others."

You: "How would you like to address that?"

Professor: "Maybe... standardize the grading somehow?"

You: "Would a rubric help? Multiple graders? Statistical normalization?"

Professor: "Oh! Actually, the real problem might be TAs interpret my rubric differently..."

You: "What if TAs could see each other's grading on example submissions?"

Professor: "Like a calibration exercise? That's brilliant!"

✓ Neither party had this solution in mind—it emerged from collaboration.

Shared Language Enables Ongoing Collaboration

  • Professor stops saying "the system should be fair"
  • Professor starts saying "we need inter-rater reliability"
  • Developer stops thinking in databases
  • Developer starts thinking in rubrics and learning outcomes

Requirements Analysis Is Fundamentally About Managing Risk

3D coordinate system with three risk axes: Understanding (clear to ambiguous), Scope (tiny to massive), Volatility (stable to shifting). Origin is green 'Safe Zone', far corner is red 'Danger Zone'. Example requirements plotted: timestamp (green/safe), regrade workflow (yellow/medium), ML grading (red/dangerous).

Every requirement carries risks. Understanding these risks helps you focus effort where it matters most.

Risk Dimension 1: Understanding

How well do we understand what's needed?

Low Understanding Risk ✓

"The system shall record the UTC timestamp when a submission is received."

Clear, unambiguous, everyone knows what a timestamp is.

High Understanding Risk ❌

"The system shall ensure grading quality through meta-reviews."

What's a meta-review? Who does it? When? What constitutes "quality"?
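
One way to see the difference: the timestamp requirement translates almost directly into code, while the meta-review requirement cannot be implemented until its questions are answered. A minimal sketch, assuming a Python implementation:

```python
from datetime import datetime, timezone

def record_submission_received() -> str:
    # "The system shall record the UTC timestamp when a submission is received."
    # One unambiguous line; no stakeholder interpretation required.
    return datetime.now(timezone.utc).isoformat()

# There is no equivalent one-liner for "ensure grading quality through
# meta-reviews": who reviews, when, and what counts as quality must all be
# decided before any code can be written.
```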

Different Stakeholders Interpret the Same Term Differently

Three people reading 'meta-reviews' requirement with different interpretations: Professor sees spot-checks by senior TAs, TA sees peer review during grading, Student sees appeals process with impartial review.

⚠ One vague term, three completely different interpretations.

Risk Dimension 2: Scope

How much are we trying to build?

Low Scope Risk ✓

"Students have a fixed number of late tokens, each allowing a 24-hour extension."

Small, isolated change with clear boundaries.

High Scope Risk ❌

"Students can request regrades for any grading decision, which can be escalated to instructors."

Sounds like one feature, but what does it really mean?

Poll: Regrade Requests

If you were asked to design a regrade request system, what questions would you ask to better understand the requirements?

Poll Everywhere QR Code or Logo

Text espertus to 22333 if the URL isn't working for you.

https://pollev.com/espertus

One Feature Hides Dozens of Decisions - Regrades

Visible requirements:

  • Students can request regrades
  • Requests include justification
  • Original grader reviews requests
  • Decisions can be appealed

Hidden requirements:

  • Who can request? (Student only? Team members?)
  • When can they request? (Immediately? Cooling period? Before finals?)
  • How many requests allowed? (Per assignment? Per semester?)
  • What's the escalation path? (Grader → Meta-grader → Instructor?)
  • What prevents gaming? (Shopping for easy grader?)
  • How are groups handled? (One request per group? Individual?)
  • What about concurrent modifications?
  • What audit trail is needed?

⚠ Each question represents additional scope. What seemed like one feature is actually dozens of interconnected decisions.
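
A hypothetical data-model sketch (Python, with illustrative names and limits) shows how each hidden question becomes a field, a rule, or a state that someone must design, implement, and test:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto

class RegradeStatus(Enum):
    OPEN = auto()
    RESOLVED_BY_GRADER = auto()
    ESCALATED_TO_INSTRUCTOR = auto()   # the escalation path
    CLOSED = auto()

@dataclass
class RegradeRequest:
    assignment_id: str
    requester_id: str                  # who can request?
    group_id: str | None               # one request per group, or per member?
    justification: str                 # requests include justification
    submitted_at: datetime             # when can they request?
    assigned_grader_id: str            # prevents shopping for an easier grader
    status: RegradeStatus = RegradeStatus.OPEN
    history: list[str] = field(default_factory=list)   # audit trail

# How many requests are allowed? (An assumed limit, purely for illustration.)
MAX_REQUESTS_PER_ASSIGNMENT = 2
```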

Risk Dimension 3: Volatility

How likely are requirements to change?

Low Volatility Risk ✓

"Use the university's standard letter grading scale (A, A-, B+, B, B-, ...)"

Stable for decades, unlikely to change.

High Volatility Risk ❌

"Integrate with the university's new AI-powered plagiarism detector currently in contract negotiations with three vendors."

Unstable API, multiple vendors, external organization control.

Volatile Requirements Evolve Unpredictably

Rollercoaster timeline starting at Chancellor's Office (Week 1, simple request) through loops (Week 3), steep climbs (Week 5 legal/compliance), crashes into Ethics Committee wall (Week 7), then loops back to the SAME Chancellor's Office (Week 10) - illustrating that the source of volatility was the original stakeholder all along.

Sources of volatility: External dependencies, regulatory requirements, political factors, technical uncertainty, market changes.

High-Risk Requirements Score High on Multiple Dimensions

Consider this requirement:

"The system shall use machine learning to automatically assign partial credit in a way that's pedagogically sound and legally defensible."

Dimension | Risk Level | Why
Understanding | HIGH | What is "pedagogically sound"? "Legally defensible"? What ML?
Scope | HIGH | Training data, scaling to new assignments, ML pipeline, evaluation, legal review, appeals system
Volatility | HIGH | ML evolves, legal requirements change, pedagogy is debated

Poll: Stakeholders

Who are stakeholders for an online grading system?

Poll Everywhere QR Code or Logo

Text espertus to 22333 if the URL isn't working for you.

https://pollev.com/espertus

Give short answers for a word cloud.

A Stakeholder Is Anyone Affected By or Affecting the System

Concentric circles of stakeholders: Inner (Students, TAs, Instructors), Middle (Meta-graders, Administrators, IT), Outer (Parents, Future Employers, Accreditation). Shows stakeholders extend beyond obvious users.

Missing a key stakeholder is like designing a building without talking to the people who will live in it.

Primary Stakeholders Have Different Values and Fears

Stakeholder | Primary Concern | Values | Fears
Students | "Will I get the grade I deserve?" | Fairness, transparency, timeliness | Biased grading, lost submissions, harsh TA
Graders (TAs) | "Can I grade fairly without losing my weekend?" | Efficiency, accuracy, workload balance | Overwhelming workload, hostile regrades
Instructors | "Are students learning and evaluated fairly?" | Educational outcomes, integrity, oversight | Grade complaints, inconsistent grading

Hidden Stakeholders Can Derail Projects

Software project with visible stakeholders working, while hidden stakeholders emerge from shadows: Parents calling dean, Employers confused by transcripts, Accreditation body demanding audit, Regulators with compliance checklists. Speech bubble: We weren't consulted!

  • Parents: May call demanding grade explanations
  • Future Employers: Rely on grades as competence signals
  • Accreditation Bodies: Can shut down programs that don't meet standards

Poll: Stakeholder Conflicts

Identify a conflict between students and graders.

Poll Everywhere QR Code or Logo

Text espertus to 22333 if the URL isn't working for you.

https://pollev.com/espertus

Stakeholder Interests Often Directly Conflict

Conflict | Stakeholder 1 | Stakeholder 2
Regrades | Students want unlimited requests for fairness | TAs want protection from frivolous requests
Analytics | Instructors want to audit all student code for trojan horses | Students want privacy of mistakes
Automation | TAs want fully automated grading | Instructors want nuanced evaluation
Feedback timing | Fast students want immediate feedback | TAs need to batch for efficiency
Audit trails | Administrators want detailed logs | TAs want quick, simple grade entry

Resolving conflicts is a negotiation skill—understanding what each party truly needs and finding creative solutions.

Balancing Competing Stakeholder Needs

Example: Designing Pawtograder's regrade request feature

Stakeholder | Need | Solution Element
Student | Easy ability to request regrades | Simple request form in Pawtograder
TA | Protection from frivolous requests | 7-day time limit after grades posted
Instructor | Oversight of grading quality | 3-day appeal window to instructor if unsatisfied
Administrator | Compliance and audit trail | Complete log of all regrade activities

✓ A good solution addresses all concerns, not just the loudest voice.
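
A minimal sketch of how these solution elements might be encoded, assuming a Python policy module; the names and helpers are illustrative, not Pawtograder's actual implementation:

```python
from datetime import datetime, timedelta

REGRADE_WINDOW = timedelta(days=7)   # TA: protection from late, frivolous requests
APPEAL_WINDOW = timedelta(days=3)    # Instructor: bounded oversight of quality

audit_log: list[str] = []            # Administrator: complete record of activity

def can_request_regrade(grade_posted: datetime, now: datetime) -> bool:
    """Student: a simple request, accepted only within the 7-day window."""
    allowed = now - grade_posted <= REGRADE_WINDOW
    audit_log.append(f"{now.isoformat()} regrade request {'accepted' if allowed else 'rejected'}")
    return allowed

def can_appeal(regrade_closed: datetime, now: datetime) -> bool:
    """Escalation to the instructor, accepted only within 3 days of the decision."""
    allowed = now - regrade_closed <= APPEAL_WINDOW
    audit_log.append(f"{now.isoformat()} appeal {'accepted' if allowed else 'rejected'}")
    return allowed
```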

Different Methods Reveal Different Information

Six elicitation methods as different lenses: Interviews (dialogue), Observation (eye watching work), Workshops (group with sticky notes), Prototypes (hand sketching), Document Analysis (magnifying glass on policies), Scenarios (storyboard). They converge on complete requirements.

Method 1: Interviews Reveal What Stakeholders Say

Q: Walk me through grading a typical assignment from start to finish.

"I download submissions from Canvas, run them locally, open a spreadsheet, test each function manually..."

Q: What takes the most time?

"Setting up the test environment for each student's code, and writing the same feedback over and over."

Q: Tell me about the last time grading went really badly.

"A student submitted at 11:59 PM, I graded at midnight, but they claimed they resubmitted at 12:01 AM and I graded the wrong version..."

Techniques: Open-ended questions, critical incident technique, probing, silence

Method 2: Observation Reveals What Stakeholders Actually Do

Time | Observed Activity
2:00 PM | TA logs into Canvas, downloads submission ZIP
2:05 PM | Discovers some submissions are .tar.gz (unexpected!)
2:08 PM | Writes quick script to handle multiple archive formats
2:15 PM | Opens first submission, realizes it's Python 2 not Python 3
2:18 PM | Sets up separate testing environment for Python 2
2:20 PM | Manually copies test cases from assignment PDF
2:30 PM | Student order in spreadsheet doesn't match Canvas
2:37 PM | Phone notification interrupts flow
2:38 PM | Loses track of which submission was being graded

⚠ File format issues, manual test copying, and interruptions weren't mentioned in interviews.
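
For example, the "quick script" observed at 2:08 PM might look something like this (a sketch; the paths and layout are assumptions, not the TA's actual code):

```python
import tarfile
import zipfile
from pathlib import Path

def extract_submission(archive: Path, dest: Path) -> None:
    """Handle both the expected .zip and the surprise .tar.gz submissions."""
    dest.mkdir(parents=True, exist_ok=True)
    if archive.suffix == ".zip":
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)
    elif archive.name.endswith(".tar.gz"):
        with tarfile.open(archive, "r:gz") as tf:
            tf.extractall(dest)
    else:
        raise ValueError(f"Unexpected submission format: {archive.name}")

if __name__ == "__main__":
    for archive in Path("submissions").iterdir():
        extract_submission(archive, Path("extracted") / archive.stem)
```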

Method 3: Workshops Resolve Conflicts Early

Requirements workshop: diverse group around whiteboard with sticky note clusters, dot voting in progress, two people negotiating. Whiteboard shows requirement clusters. Dialogue bubble shows student-TA negotiation reaching compromise on regrade limits.

Benefits: Stakeholders hear each other directly, creative solutions emerge, conflicts surface and resolve early, builds buy-in.

Method 4: Prototypes Make Abstract Ideas Concrete

Split image showing two stages: Stage 1 (Mirror) - developer reflects words back to professor for confirmation, useful starting point. Stage 2 (Window) - developer shows paper prototype, professor discovers new requirements she didn't know she had. Progression from confirming understanding to sparking deeper discovery.

Method 5: Document Analysis Reveals Hidden Constraints

Documents to analyze:

  • Grading rubrics → evaluation criteria and point distributions
  • Email complaints → common pain points and misunderstandings
  • Grade appeal forms → existing process and requirements
  • TA training materials → hidden complexities
  • University policies → compliance constraints
  • Previous syllabi → evolution of grading policies

Example from CS 3100 syllabus:

"All regrade requests must be submitted within 7 days from your receipt of the graded work. If your regrade request is closed and you feel that the response was not satisfactory, you may appeal to the instructor via Pawtograder within 3 days."

Requirements discovered: a 7-day time limit measured from receipt (not posting), written requests submitted via Pawtograder, an escalation path to the instructor, a 3-day appeal window, and an implicit assumption that graders are not instructors.

Method 6: Scenarios Reveal Edge Cases

Scenario: End-of-Semester Grade Dispute

Sarah, a senior, needs a B+ to keep her scholarship. She receives a B after the final assignment. Checking her grades, she notices Assignment 3 seems low. She reviews the rubric and believes her recursive solution was incorrectly marked wrong.

She initiates a regrade request on December 15th. The original TA, Tom, is on winter break. The system escalates to the instructor, who must respond within 48 hours due to grade submission deadlines. The instructor realizes they never saw Sarah's original grade—they need visibility into all grading decisions, not just escalations...

Requirements revealed: Instructor visibility into all grades (not just appeals!), escalation when grader unavailable, urgency handling, time-sensitive responses.
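
A small sketch of the escalation rule this scenario surfaces, with hypothetical names and an assumed normal turnaround time:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Grader:
    grader_id: str
    available: bool

def route_regrade(grader: Grader, instructor_id: str,
                  requested_at: datetime) -> tuple[str, datetime]:
    """Return (assignee, respond_by) for a regrade request."""
    if grader.available:
        # Assumed normal turnaround; the lecture does not fix this number.
        return grader.grader_id, requested_at + timedelta(days=7)
    # Grader unavailable (winter break): escalate to the instructor, with a
    # tight deadline driven by the grade submission date.
    return instructor_id, requested_at + timedelta(hours=48)

if __name__ == "__main__":
    tom = Grader("tom", available=False)
    assignee, respond_by = route_regrade(tom, "instructor", datetime(2025, 12, 15))
    print(assignee, respond_by)   # instructor 2025-12-17 00:00:00
```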

Combine Methods for Complete Requirements

  1. Start broad: Interview stakeholders for general needs
  2. Get specific: Observe actual work to see hidden complexities
  3. Resolve conflicts: Run workshops with multiple stakeholders
  4. Validate understanding: Test with prototypes
  5. Check constraints: Analyze policy documents
  6. Verify completeness: Walk through detailed scenarios

Red flags to watch for:

  • Only talking to managers (they don't do the actual work)
  • Leading questions (you'll hear what you want to hear)
  • Single method (each method has blind spots)
  • Missing stakeholders (the quiet ones often have critical needs)

Key Takeaways

  • Requirements analysis bridges vague needs to concrete specifications
  • Three risk dimensions: Understanding, Scope, and Volatility
  • Stakeholder analysis identifies everyone affected, including hidden stakeholders
  • Multiple elicitation methods needed—each has blind spots
  • Participatory approach: Engage users as design partners, not just requirement sources

The key insight: Requirements analysis isn't asking "what do you want?"—it's understanding problems deeply enough to design solutions stakeholders didn't know were possible.

Bonus Slide

Left panel has title 'Product features' and shows a fancy cat tree. Right panel has title 'User Needs' and shows a cat in a cardboard box.

Source: @_yes_but