Case Study

PrepMaster.ai

An AI-powered learning platform that converts academic content into personalised practice questions, evaluates answers, and tracks student progress — with question generation powered by Claude API.

Next.js · TypeScript · Node.js · Express · PostgreSQL · Redis · Claude API · TailwindCSS

Repository is private. Deployed as a controlled prototype for test users.

The Challenge

The hardest problem on PrepMaster was designing a question generation pipeline that could produce consistent, curriculum-aligned practice questions while keeping latency low and API costs predictable. Students expect near-instant feedback — a 3-second wait to see a question set feels broken. But generating fresh content from Claude on every request creates unpredictable latency spikes and API spend that scales linearly with usage. We needed a system that felt fast for students, cost-efficient at scale, and produced questions at a consistent difficulty level regardless of how the prompt was phrased.

What I Built & Why

I owned the full stack — the Next.js frontend, the Node.js/Express API, the PostgreSQL schema for user progress tracking, and the Claude API integration with Redis caching. The core engineering challenge was making generation fast without paying the API cost on every request.

Decision

Separated question generation from user session management into an independent service

Why

Tightly coupling question generation to the user session flow would have meant every new session triggered a fresh API call — expensive and slow. By treating generation as a standalone service, we could cache output, reuse questions across users with similar inputs, and swap the underlying AI model without touching session logic.

Outcome

Cache hit rate reached 68%, API request volume dropped 42%, and system responsiveness improved significantly as the user base grew.
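A sketch of what this separation can look like in TypeScript. All names here are illustrative, not the production code: the session layer depends only on a small generator interface, so the backing model can be swapped, or faked in tests, without touching session logic.

```typescript
// Hypothetical contract for the standalone generation service.
// Session code depends on this interface, never on a concrete backend.
interface QuestionRequest {
  subject: string;
  topic: string;
  difficulty: string;
}

interface Question {
  prompt: string;
  choices: string[];
  answerIndex: number;
}

interface QuestionGenerator {
  generate(req: QuestionRequest): Promise<Question[]>;
}

// One implementation per backend; swapping models is a one-line change
// at the composition root.
class ClaudeGenerator implements QuestionGenerator {
  async generate(req: QuestionRequest): Promise<Question[]> {
    // ...build a structured prompt from req and call the Claude API...
    throw new Error("stub: real implementation calls the model");
  }
}

// A fixture backend for tests and local development.
class FixtureGenerator implements QuestionGenerator {
  async generate(req: QuestionRequest): Promise<Question[]> {
    return [
      { prompt: `Sample ${req.topic} question`, choices: ["A", "B"], answerIndex: 0 },
    ];
  }
}
```

Because the cache and session layers only see `QuestionGenerator`, replacing `ClaudeGenerator` with a different model backend requires no changes outside the service itself.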

Decision

Cache generated questions in Redis keyed by subject + topic + difficulty hash

Why

The same subject/topic/difficulty combination is requested by many students — regenerating identical question sets for every request wastes API quota and adds latency. A Redis cache with a TTL means the first request pays the generation cost; subsequent requests are sub-millisecond.

Outcome

Average question generation latency fell from 3.1s to 1.2s. API spend stayed predictable as user volume grew.
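A minimal sketch of the cache-key construction, assuming the parameters are normalised and hashed with SHA-256 (the exact production scheme may differ). Normalising first means "Algebra " and "algebra" resolve to the same Redis key.

```typescript
import { createHash } from "node:crypto";

// Hypothetical cache-key builder: normalise, join, hash.
// Hashing keeps keys short and uniform regardless of parameter length.
function questionCacheKey(subject: string, topic: string, difficulty: string): string {
  const canonical = [subject, topic, difficulty]
    .map((s) => s.trim().toLowerCase())
    .join("|");
  const digest = createHash("sha256").update(canonical).digest("hex").slice(0, 16);
  return `questions:${digest}`;
}
```

The key is then used with a standard GET / SET-with-TTL pattern against Redis: the first request populates the entry, and subsequent requests for the same subject/topic/difficulty read it back without touching the model.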

Decision

Structured prompt schema — subject, topic, difficulty as explicit parameters

Why

Free-text prompts produce inconsistent difficulty calibration. By defining a fixed prompt schema with explicit parameters, every generation call uses the same structure — making output quality predictable and making it easy to A/B test prompt variations without breaking the application.

Outcome

Question consistency improved meaningfully across subjects. Prompt iteration became a controlled, low-risk process.
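A hedged sketch of what a fixed prompt schema can look like. The field names and output format below are illustrative, not the production prompt; the point is that every call to the model is phrased identically, so variation comes from the model, not from the wording.

```typescript
type Difficulty = "easy" | "medium" | "hard";

interface PromptParams {
  subject: string;
  topic: string;
  difficulty: Difficulty;
  count: number;
}

// Hypothetical prompt builder: parameters slot into a fixed template,
// which makes A/B testing a matter of versioning this one function.
function buildUserPrompt(p: PromptParams): string {
  return [
    `Subject: ${p.subject}`,
    `Topic: ${p.topic}`,
    `Difficulty: ${p.difficulty}`,
    `Generate ${p.count} multiple-choice questions.`,
    `Return JSON: [{"prompt": "...", "choices": ["..."], "answerIndex": 0, "explanation": "..."}]`,
  ].join("\n");
}
```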

System Architecture

Student / Browser → Next.js Frontend (TypeScript · TailwindCSS · App Router)
Frontend → Node.js / Express API over REST (Auth · Routing · Cache lookup · Prompt builder)
API → PostgreSQL (Users · Progress · Sessions)
API → Redis Cache (Question sets · TTL-keyed)
API → Claude API on cache miss (Question generation · Structured prompts)

AI Layer

Model

Claude API for question generation. PrepMaster does not use a retrieval-augmented generation pipeline — questions are generated dynamically from structured prompts that define the subject, topic, and difficulty level. Generated content is cached in Redis to avoid redundant API calls.

Generation Pipeline

  1. Student submits subject, topic, and difficulty selection
  2. API hashes the parameters to a cache key and checks Redis
  3. Cache hit → return stored questions immediately (sub-ms)
  4. Cache miss → build structured prompt and call Claude API
  5. Generated questions stored in Redis (TTL) and returned to student
  6. Student answers are evaluated against correct answers stored in PostgreSQL
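The steps above can be sketched as a cache-aside function. The Redis read/write and the Claude call are injected as parameters here so the flow is testable; all names and the 24-hour TTL are illustrative assumptions, not the production values.

```typescript
type CacheGet = (key: string) => Promise<string | null>;
type CacheSet = (key: string, value: string, ttlSeconds: number) => Promise<void>;
type Generate = (prompt: string) => Promise<string>;

const TTL_SECONDS = 24 * 60 * 60; // assumed 24-hour TTL

// Cache-aside: check Redis first, fall back to the model, store the result.
async function getQuestions(
  subject: string,
  topic: string,
  difficulty: string,
  cacheGet: CacheGet,
  cacheSet: CacheSet,
  generate: Generate
): Promise<string> {
  const key = `questions:${subject}:${topic}:${difficulty}`.toLowerCase();
  const cached = await cacheGet(key);        // step 2: check Redis
  if (cached !== null) return cached;        // step 3: hit, return immediately
  const prompt = `Subject: ${subject}\nTopic: ${topic}\nDifficulty: ${difficulty}`;
  const fresh = await generate(prompt);      // step 4: miss, call the model
  await cacheSet(key, fresh, TTL_SECONDS);   // step 5: store with TTL
  return fresh;
}
```

In the real service the two cache callbacks wrap the Redis client's get and set-with-expiry commands, and `generate` wraps the Claude API call behind the structured prompt builder.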

System Prompt (Simplified)

// system prompt

You are an educational assistant that generates practice questions.

Generate questions based on the specified subject, topic, and difficulty level.
Ensure questions are clear, relevant, and appropriate for the learner.
Provide concise explanations where necessary.

User-Facing Feature

Students log in and select a subject, topic, and difficulty level. Within seconds, a set of practice questions appears. Students answer each question, receive immediate feedback on correctness, and can retry incorrect answers or generate a fresh set. The platform tracks their history so they can see which topics need more work.

Results

3.1s → 1.2s: question generation latency
68%: cache hit rate (Redis)
42%: reduction in Claude API calls
99%: system uptime in testing cycles

What I'd Do Differently

If I rebuilt PrepMaster today, I would implement structured logging and evaluation metrics from day one. Early iterations were focused on making generation work reliably — and they did — but we had limited visibility into where latency was coming from, how often cache keys were actually being hit, or whether question quality was degrading for edge-case topics. Adding an observability layer early would have made optimization data-driven rather than intuition-driven. It would also have made it much easier to justify infrastructure decisions (like Redis TTL tuning) with hard numbers.
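As an illustration of the kind of observability layer described here, a minimal timing wrapper might record each generation as a structured event. Everything below is a hypothetical sketch, not code from the project.

```typescript
// One structured record per generation request.
interface GenerationEvent {
  cacheKey: string;
  cacheHit: boolean;
  latencyMs: number;
  at: string;
}

const events: GenerationEvent[] = []; // in production: a log sink or metrics store

// Wrap any async call, recording latency and cache outcome even on failure.
async function timed<T>(cacheKey: string, cacheHit: boolean, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    events.push({
      cacheKey,
      cacheHit,
      latencyMs: Date.now() - start,
      at: new Date().toISOString(),
    });
  }
}

// Derived metric: the cache hit rate falls straight out of the event log.
function cacheHitRate(): number {
  if (events.length === 0) return 0;
  return events.filter((e) => e.cacheHit).length / events.length;
}
```

With events like these in place from day one, numbers such as the 68% hit rate come from the log rather than from ad-hoc measurement.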