Backend
January 10, 2025
12 min read

Case Study: Optimizing Database Queries for 40% Performance Gain

How we reduced API response times by 40% through systematic database optimization in a production healthcare platform serving 5,000+ users.

Sheryar Ahmed

Full-Stack Engineer | Building scalable systems

Database
Performance
MySQL
Optimization
Case Study


Background

At HealthSIA, we were experiencing slow API response times as our user base grew from 1,000 to 5,000+ active users. Our patient dashboard was taking 3-5 seconds to load, and our meal plan API was timing out during peak hours.

Initial Metrics

- Patient dashboard load time: 3-5 seconds
- Meal plan API: timeouts during peak hours
- Heaviest dashboard query: 2.3s average

Goal: Reduce response times by at least 30% without adding more database resources.

Problem Analysis

Identifying Bottlenecks

We used the MySQL slow query log and New Relic APM to identify problematic queries. The main culprit was a single heavy join:

```sql
-- Slow query (avg 2.3s)
SELECT p.*, u.email, u.name,
       mp.meal_name, mp.calories,
       ep.exercise_name, ep.duration
FROM patients p
LEFT JOIN users u ON p.user_id = u.id
LEFT JOIN meal_plans mp ON p.id = mp.patient_id
LEFT JOIN exercise_plans ep ON p.id = ep.patient_id
WHERE p.clinic_id = ? AND p.status = 'active'
ORDER BY p.created_at DESC;
```

Issues identified:

- `SELECT p.*` pulled every patient column, most of them unused by the dashboard.
- Two one-to-many LEFT JOINs (`meal_plans`, `exercise_plans`) multiplied the result set into a near-cartesian product per patient.
- No composite index covered the `clinic_id` + `status` filter or the `created_at` sort, forcing a full scan plus filesort.
- No `LIMIT`: the query returned every active patient in one shot.

Optimization Strategy

1. Added Strategic Indexes

```sql
CREATE INDEX idx_patients_clinic_status
    ON patients(clinic_id, status, created_at);

CREATE INDEX idx_meal_plans_patient
    ON meal_plans(patient_id);
```

Impact: 35% reduction in query time
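It's worth confirming that the optimizer actually picks the new index rather than assuming it does. A quick check looks something like this (the exact `EXPLAIN` output columns vary by MySQL version):

```sql
EXPLAIN SELECT p.id, p.name
FROM patients p
WHERE p.clinic_id = ? AND p.status = 'active'
ORDER BY p.created_at DESC;
-- Expect key = idx_patients_clinic_status and no "Using filesort":
-- the composite index serves both the filter and the sort.
```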

2. Optimized Query Structure

```javascript
// Before: heavy single query
const patients = await db.query(heavyJoinQuery);

// After: split into targeted queries
const patients = await db.query(`
  SELECT p.id, p.name, p.status, u.email
  FROM patients p
  INNER JOIN users u ON p.user_id = u.id
  WHERE p.clinic_id = ? AND p.status = 'active'
  LIMIT ? OFFSET ?
`, [clinicId, limit, offset]);

// Fetch related data separately
const mealPlans = await db.query(`
  SELECT patient_id, COUNT(*) as count
  FROM meal_plans
  WHERE patient_id IN (?)
  GROUP BY patient_id
`, [patientIds]);
```

Impact: Additional 25% reduction
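Splitting the query means the application has to stitch the result sets back together. A minimal sketch of that merge step, assuming row shapes like those the two queries above would return (the function name and `mealPlanCount` field are illustrative, not from the production code):

```javascript
// Merge per-patient meal plan counts into the patient rows.
function attachMealPlanCounts(patients, mealPlanCounts) {
  // Build a lookup table so the merge is O(n) instead of O(n^2).
  const countsById = new Map(
    mealPlanCounts.map(row => [row.patient_id, row.count])
  );
  return patients.map(p => ({
    ...p,
    mealPlanCount: countsById.get(p.id) || 0,
  }));
}

// Example rows as the split queries might return them:
const patients = [
  { id: 1, name: 'Ada', status: 'active' },
  { id: 2, name: 'Grace', status: 'active' },
];
const mealPlanCounts = [{ patient_id: 1, count: 3 }];

const merged = attachMealPlanCounts(patients, mealPlanCounts);
// merged[0].mealPlanCount === 3, merged[1].mealPlanCount === 0
```

The in-memory join is cheap because the patient page is already capped by `LIMIT`, so both arrays stay small.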

3. Implemented Redis Caching

```javascript
async function getPatientDashboard(patientId) {
  const cacheKey = `patient:dashboard:${patientId}`;

  let data = await redis.get(cacheKey);
  if (data) return JSON.parse(data);

  data = await fetchPatientDashboard(patientId);
  await redis.setex(cacheKey, 300, JSON.stringify(data)); // 5-minute TTL

  return data;
}
```

Impact: 60% reduction for cached requests
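The same cache-aside pattern can be sketched without a Redis instance, using a `Map` with expiry timestamps; this is an illustrative stand-in (the `cached`, `fetchFn`, and `ttlMs` names are ours), useful for seeing why repeat requests skip the database entirely:

```javascript
// In-process cache-aside sketch: Map + expiry instead of Redis.
const cache = new Map();

async function cached(key, ttlMs, fetchFn) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit

  const value = await fetchFn(); // cache miss: load from the source
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Usage: the second call within the TTL never invokes the loader.
let dbCalls = 0;
const load = async () => { dbCalls++; return { patientId: 42 }; };

(async () => {
  await cached('patient:dashboard:42', 300_000, load);
  await cached('patient:dashboard:42', 300_000, load);
  console.log(dbCalls); // 1
})();
```

The trade-off is identical to the Redis version: reads inside the TTL window may be up to five minutes stale.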

Results

After Optimization

Combined, the three changes reduced overall API response times by 40%, exceeding the initial 30% goal without adding any database resources.

Key Learnings

1. Measure First, Optimize Later

Don't optimize blindly. Use profiling tools to identify actual bottlenecks.
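Even before reaching for the slow query log or an APM, a tiny timing wrapper can flag slow code paths. A sketch of such a helper (illustrative, not from the HealthSIA codebase):

```javascript
// Wrap any async function and log its wall-clock duration.
async function timed(label, fn) {
  const start = process.hrtime.bigint();
  try {
    return await fn();
  } finally {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${label} took ${ms.toFixed(1)}ms`);
  }
}

// Usage with a stand-in for a database call:
(async () => {
  const rows = await timed('patients query', async () => [{ id: 1 }]);
  console.log(rows.length); // 1
})();
```

Logs like these give you a ranked list of suspects before you spend time on `EXPLAIN`.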

2. Indexes Are Powerful

Proper indexing gave us the biggest single improvement (35% on its own). But remember:

- Every index slows down writes, since INSERT and UPDATE must maintain it.
- Column order in a composite index matters: `(clinic_id, status, created_at)` serves our filter-then-sort pattern, but would not help a query filtering on `status` alone.
- Indexes consume storage and memory; add them for measured query patterns, not speculatively.

3. Trade-offs Are Real

Each optimization has a cost: the split queries add round trips to the database, the composite index adds write overhead, and the 5-minute Redis TTL means users can briefly see stale dashboard data. We accepted each trade-off because the measured read-path gains outweighed it.

Tools Used

  1. MySQL Slow Query Log
  2. EXPLAIN ANALYZE
  3. New Relic APM
  4. Redis for caching
  5. Artillery for load testing
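To compare performance before and after each change, a minimal Artillery scenario along these lines can replay the hot endpoint under load (the target URL, endpoint path, and arrival rate below are placeholders, not our production values):

```yaml
config:
  target: "http://localhost:3000"   # placeholder target
  phases:
    - duration: 60        # run for 60 seconds
      arrivalRate: 20     # 20 new virtual users per second
scenarios:
  - name: "Patient dashboard"
    flow:
      - get:
          url: "/api/patients/123/dashboard"   # placeholder endpoint
```

Running the same scenario against each build makes the p95/p99 latency deltas directly comparable.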

Conclusion

Database optimization is an ongoing process, not a one-time task. By systematically identifying bottlenecks, applying targeted fixes, and monitoring continuously, we achieved a 40% performance gain that directly improved the experience of thousands of users.
