8-bit pixel art cover showing a round-table discussion between students and faculty at Khoury College. Topic bubbles float above: Curriculum, Ethics, Research, Careers, Tools, Responsibility. An AI symbol glows at the center. Tagline: 14 Questions. No Easy Answers. Let's Talk.

AI at Khoury College

An Undergraduate Student & Faculty Conversation

Prof. Jonathan Bell & Associate Dean Christo Wilson

Organized by the Khoury UG Advisory Committee

©2026 Jonathan Bell, CC-BY-SA

Q1: What Is Khoury's Role in AI?

Are we primarily builders of AI systems, critics and auditors of them, or both?

Pixel art showing a developer at a crossroads: left path 'BUILD' shows constructing AI systems with foundations of Algorithms, Systems, Statistics. Right path 'CRITIQUE' shows inspecting AI with checklist (Correct? Fair? Safe?). Both paths converge into 'CS Education' with caption: Building gives depth, critique gives judgment, you need both.

Q2: What Differentiates Khoury's AI Education?

What do we uniquely offer students that they couldn't get elsewhere?

Triptych showing Khoury's three AI education advantages: 1) Co-op feedback loop between campus and industry, 2) Balance scale showing engineering skills (integration, validation, maintenance) outweighing pure algorithms when code generation gets cheap, 3) Faculty and students co-building the curriculum.

Q3: Should AI/ML Be Required for All CS Majors?

If AI is becoming foundational, does it belong alongside systems and algorithms as core knowledge?

Horizontal spectrum of AI knowledge: Left (everyone) shows baseline literacy — LLMs predict text, they hallucinate, and their output is non-deterministic. Center shows practical integration — building software that uses AI. Right shows deep theory for researchers. Below, two approaches fork: a standalone course vs. weaving AI into existing courses.
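To ground the leftmost tier, here is a minimal sketch of why "non-deterministic" belongs in baseline literacy. The vocabulary and probabilities below are invented for illustration (a real LLM derives a distribution over its whole vocabulary from its weights), but the sampling mechanics are the same: at temperature 0 decoding is greedy and repeatable; above 0, the same prompt can produce different continuations.

```python
import random

# Invented next-token probabilities for the prompt "The capital of France is".
# A real LLM computes such a distribution from billions of parameters.
next_token_probs = {"Paris": 0.90, "Lyon": 0.06, "beautiful": 0.04}

def sample_next_token(probs: dict[str, float], temperature: float) -> str:
    """Pick one token; higher temperature flattens the distribution."""
    if temperature == 0:
        return max(probs, key=probs.get)  # greedy decoding: deterministic
    # p ** (1/T), renormalized by random.choices, equals softmax(log p / T).
    weights = [p ** (1 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights)[0]

print([sample_next_token(next_token_probs, 0.0) for _ in range(5)])  # identical
print([sample_next_token(next_token_probs, 1.0) for _ in range(5)])  # varies
```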

Q4: Should All University Majors Get AI Training?

Should there be required foundational AI courses beyond Khoury?

Hub-and-spoke diagram: Khoury at center providing technical AI understanding. Spokes to Journalism (deepfakes, source verification), Nursing (clinical decision support, failure modes), Business (AI analytics, bias), Law (hallucinated citations). Warning: the worst outcome is each department building its own AI curriculum independently, without technical grounding.

Q5: Tools vs. Theory — What Comes First?

Should students first learn how transformers work, or how to effectively use them?

Three-floor building showing AI competency tiers: Ground floor (every graduate) — using AI assistants with judgment, illustrated as a forklift in a warehouse. Middle floor (growing share) — building AI-enabled software, connecting APIs, managing context windows. Top floor (researchers) — advancing AI itself with math and architectures. Callout highlights the gap at the middle tier.
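The middle tier's "managing context windows" can be made concrete. A minimal sketch, assuming a whitespace word count as a stand-in for real tokenization (production code would use the model's own tokenizer), of keeping the most recent conversation turns within a fixed budget:

```python
def fit_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit within the token budget."""
    kept: list[str] = []
    used = 0
    # Walk history from newest to oldest so recent turns survive truncation.
    for message in reversed(messages):
        cost = len(message.split())  # crude proxy for a real token count
        if used + cost > max_tokens:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = ["You are a helpful assistant.",
           "User: summarize chapter 1",
           "Assistant: Chapter 1 introduces ...",
           "User: now compare it to chapter 2"]
print(fit_context(history, max_tokens=12))  # keeps only the last two turns
```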

Q6: What Skills Matter Most in an AI-Driven Workforce?

Math depth? Systems thinking? Product intuition? Ethics? Something else?

Venn diagram: Left circle (blue) shows what AI does well — code generation, pattern matching, boilerplate. Right circle (gold) shows human skills AI can't replace — evaluation, requirements, communication, systems thinking, adaptability. Overlap zone: AI accelerates, humans evaluate. Quote: 'The hardest parts of building software have never been typing code.'

Q7: How Do We Prevent Over-Reliance on AI Tools?

Where should the line be drawn in coursework?

Three-layer semester timeline: Layer 1 'Sequence' shows progression from no-AI zone (building skills manually) to AI-permitted to AI-required. Layer 2 'Explicit Goals' shows assignment cards with AI usage levels and pedagogical explanations. Layer 3 'Assess Process' shows proctored exams, oral walkthroughs, and self-reflections that measure understanding AI can't fake.

Q8: How Is AI Changing Software Engineering?

Are we training students for a future of less writing and more reviewing?

Left side shows the historical pattern: each productivity improvement (high-level languages, CI/CD, AI generation) led to MORE software, not less. Right side shows bottleneck diagram: massive code generation funnel narrows at human evaluation, integration, and maintenance. Key insight: when generation is fast, the bottleneck is evaluation and maintenance.

Q9: What New Engineering Challenges Does AI Create?

Reliability, testing, reproducibility, long-term maintenance?

Six warning panels showing AI engineering challenges: 1) Non-determinism — same input, different output. 2) Hard-to-evaluate tasks — some need months to validate. 3) Maintenance debt — accepting code you don't understand. 4) Supply chain — unknown training data provenance. 5) Reproducibility — model updates break working code. 6) Vibe coding collapse — only checking execution, never reading code.
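Panel 1 has a direct engineering consequence worth sketching: exact-match tests break when the same input yields different output, so tests assert properties every acceptable answer must satisfy. The generate_summary below is a hypothetical stand-in for an LLM-backed function, with randomness simulating model non-determinism:

```python
import random

def generate_summary(text: str) -> str:
    """Stand-in for an LLM call: output varies from run to run."""
    opening = random.choice(["In short,", "To summarize,", "Briefly,"])
    return f"{opening} {text.split('.')[0]}."

def test_summary_properties(article: str) -> None:
    summary = generate_summary(article)
    # Exact-match assertions fail under non-determinism, so we check
    # invariants that every acceptable output must satisfy instead.
    assert len(summary) < len(article)         # a summary must be shorter
    assert summary.endswith(".")               # well-formed sentence
    assert article.split(".")[0] in summary    # key content preserved

article = ("Khoury College is debating how AI should shape its curriculum. "
           "Fourteen questions were posed to students and faculty.")
test_summary_properties(article)
print("properties hold across runs, even though outputs differ")
```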

Q10: Who Is Responsible When AI Systems Cause Harm?

Developer? Company? Researcher? Institution?

Concentric circle bullseye showing layered responsibility: innermost — Developer (I ship it, I own it), then Company (review processes), then Model Provider (transparency), outermost — Institution (teaching evaluation skills). Bottom parallel: Grace Hopper's compiler skeptics mirror today's AI skeptics. Caption: The tool generates, the human validates and takes responsibility.

Q11: How Do Undergraduates Get Into AI Research?

What's the first concrete step?

Four ascending steps to AI research: Step 1 — Talk to a professor with a specific question about their paper. Step 2 — Build strong foundations (linear algebra, probability, algorithms, systems). Step 3 — Build and break things (implement papers, reproduce benchmarks). Step 4 — Don't follow the hype (evaluation, safety, and applied research are impactful paths).

Q12: How Do You Stand Out When Everyone Lists "AI"?

What signals depth over trend-chasing?

Split comparison of two profiles: Left (gray, generic) shows surface signals — 'Python, PyTorch, LLMs', tutorial projects, bored interviewer. Right (vibrant, distinctive) shows depth signals — specific problems solved with tradeoff analysis, open-source contributions, co-op production experience, honest AI attribution. Callout: Show what you built, why, and what you learned from failure.

Q13: Is Graduate School Becoming Necessary for AI?

Can undergraduates still break into meaningful AI roles directly?

Fork in the road: Left path 'Build with AI' shows wide highway to Software Engineering, Product Development, AI Integration — high demand, strong BS + co-op works well. Right path 'Advance AI' shows steeper trail to new architectures, training methods, NeurIPS — requires grad school depth. Signpost: Both are valuable, require different preparation.

Q14: If You Were an Undergraduate Again Today...

What would you focus on — and what would you ignore?

Split panel: Green 'Double Down' section shows five power-ups — Fundamentals (don't expire), Reading Code (THE core skill), Communication (more important with AI), Building Complete Systems (deployed end-to-end), Co-ops (nothing replaces real stakes). Red 'Ignore' section shows three traps — Framework of the Month (learn principles instead), Hype Cycle (focus on reality), The Anxiety (every tool in history has raised the same fear, and the answer is always the same: no replacement, just a change in what we do).

Bonus: "We'll Solve All Your Software Problems"

Dario Amodei says software engineering will be obsolete within 12 months. Where have we heard this before?

A 70-year horizontal timeline of 'magic beans' — promises to solve all business software problems. 1960s: IBM mainframes (vendor lock-in). 1980s: PCs (shadow IT, data silos). 1990s-2000s: ERP systems like SAP (years-long implementations, budget overruns), with a special callout of Northeastern's botched Banner-to-Workday migration (dumpster fire). 2000s-2010s: Salesforce/SaaS ('No Software!' led to subscription fatigue and integration spaghetti). 2010s-2020s: no-code platforms like Monday.com and ClickUp (hitting walls at scale). 2025: Dario Amodei declaring software engineering obsolete in 12 months while software stocks tumble — the same crowd of hopeful businesspeople reaching for magic beans again. Bottom: a graveyard of previous magic beans, with a grizzled developer saying 'I've survived six of these. The tools change, the problems don't. Who evaluates what AI builds?' Caption: Every generation gets its magic beans. They DO grow something. But they never eliminate the need for someone who understands what they're growing.

Thank You — Let's Keep This Conversation Going

The through-line of every answer tonight:

  • Technology changes. Responsibility doesn't.
  • The hardest parts of building software have never been typing code.
  • Build the judgment that makes tools valuable — then use the tools aggressively.

Stay connected:

  • These conversations continue in CS 3100 — AI is woven throughout the rest of the semester
  • Your survey responses and feedback directly shape what we teach next
  • The UG Advisory Committee is your voice — use it