A terminal timeline showing a mysterious delay. This sci-fi art was generated with Nano Banana Pro, from a custom prompt written by yours truly.

The mystery of the 5-second delay that almost made me cry


The goal

I was wiring a Spotify playback flow into a small side project that migrates an Apple Music library into Spotify. The app ingests an Apple Music XML export, lets me step through each track, search Spotify, and add matches to playlists or liked songs. The playback control is the tiny bit of polish that makes the workflow feel alive: click play on a candidate track, verify it sounds right, move on. That action should be sub-500ms because the backend is a thin OAuth + Spotify API proxy.
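
For context, the route behind that click is roughly this shape. This is a minimal sketch rather than the project's actual code; getAccessToken is a stand-in for the app's token-refresh helper, and it assumes an Express app plus Node 18+ for the global fetch.

// Sketch of the proxy route; getAccessToken is a stand-in for the real helper.
app.post('/api/spotify/player/play', async (req, res) => {
  // Refresh-token exchange happens server-side, per request.
  const accessToken = await getAccessToken(req);

  // Forward to Spotify's playback endpoint: PUT /v1/me/player/play.
  const spotifyRes = await fetch('https://api.spotify.com/v1/me/player/play', {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ uris: [req.body.uri] }),
  });

  // Spotify replies 204 No Content on success; relay the status as-is.
  res.sendStatus(spotifyRes.status);
});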

Then I hit a 5-second delay. Every click. Every time. No errors. Just a long, awkward pause before the track finally started. It was absurd in that particular way only a local dev environment can be.

The first theory: Spotify is slow

The most obvious culprit was Spotify itself. So I measured the backend:

[spotify] refresh token exchange: 165ms
[spotify] playTrack.spotifyPlay: 311ms
[spotify] playTrack.total: 476ms
POST /api/spotify/player/play: 482ms
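
Those numbers came from plain wall-clock deltas around each step, something like this (illustrative; getAccessToken and spotifyPlay are stand-ins for the real helpers):

// Inside the play handler: time each step and the total.
const t0 = Date.now();
const accessToken = await getAccessToken(req);
console.log(`[spotify] refresh token exchange: ${Date.now() - t0}ms`);

const t1 = Date.now();
await spotifyPlay(accessToken, req.body.uri);
console.log(`[spotify] playTrack.spotifyPlay: ${Date.now() - t1}ms`);

console.log(`[spotify] playTrack.total: ${Date.now() - t0}ms`);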

Backend was fine. Spotify was fine. The numbers were polite, almost smug. The mystery got worse.

The crescendo: if the server is fast, where is the time going?

I added timing logs at every boundary I could reach:

  • React mutation start/end
  • Axios request/response interceptors (sketched just after this list)
  • Express request arrival/response sent
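
The Axios half is small enough to quote. A sketch of what those interceptors look like; the log format is mine, and api is whatever Axios instance the frontend already uses:

import axios from 'axios';

const api = axios.create({ baseURL: '/api' });

// Timestamp every request as it leaves the browser.
api.interceptors.request.use((config) => {
  config.metadata = { start: Date.now() };
  console.log(`${new Date().toISOString()} - [axios] Request initiated (frontend)`);
  return config;
});

// Timestamp every response as it comes back, and report the round trip.
api.interceptors.response.use((response) => {
  const start = response.config.metadata?.start;
  console.log(
    `${new Date().toISOString()} - [axios] Response received (frontend)` +
      (start ? ` after ${Date.now() - start}ms` : '')
  );
  return response;
});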

Then I lined up the timestamps:

18:49:20.264Z - [axios] Request initiated (frontend)
18:49:24.367Z - [express] Request arrived (backend)
                ↑ 4.1 SECOND GAP ↑
18:49:24.821Z - [express] Response sent (461ms processing)
18:49:25.435Z - [axios] Response received (frontend)

This was the turning point. The backend wasn't slow. The request wasn't even arriving. A four-second void existed between the browser saying "go" and Express hearing it. That void became the entire story.

The absurdity: it was just "localhost"

The Vite proxy was configured like this:

proxy: {
  '/api': {
    target: 'http://localhost:3001',
    changeOrigin: true
  }
}

And that tiny, innocent localhost was the whole delay.

macOS was trying IPv6 first (::1), timing out for ~4 seconds, then falling back to IPv4. So each click was a tiny DNS drama: a polite IPv6 knock, a long silence, then finally the working IPv4 door. Over and over.¹
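
You can watch the ambiguity from Node itself: localhost resolves to both addresses, and the OS ordering decides which one gets tried first. A quick throwaway diagnostic, not part of the app:

// check-localhost.mjs - print every address 'localhost' resolves to locally.
import { lookup } from 'node:dns/promises';

const addresses = await lookup('localhost', { all: true });
// On macOS this typically lists ::1 (family 6) ahead of 127.0.0.1 (family 4).
console.log(addresses);

If ::1 comes back first and nothing useful is answering on it, every connection pays the IPv6 detour before falling back to IPv4.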

Switching to 127.0.0.1 collapsed the delay instantly:

proxy: {
  '/api': {
    // Go straight to the IPv4 loopback; no hostname lookup, no IPv6 detour.
    target: 'http://127.0.0.1:3001',
    changeOrigin: true,
    ws: false, // WebSocket proxying disabled for this route
  }
}

Secondary cleanup: preflight overhead

While I was there, I tightened up CORS so the preflight could be cached and stop costing me extra round trips:

app.use(cors({
  origin: FRONTEND_ORIGIN,
  methods: ['GET', 'POST', 'PATCH', 'DELETE', 'OPTIONS'],
  allowedHeaders: ['Content-Type', 'Authorization'],
  credentials: true,
  maxAge: 86400, // Access-Control-Max-Age: let browsers cache the preflight for 24 hours
}));

It wasn't the root cause, but it removed a small, consistent tax. It also made me feel like I was returning from battle with trophies.

The payoff

Before:

Total:              5400ms
Network delay:      4100ms
Backend:             480ms

After:

Total:               476ms
Network delay:         8ms
Backend:             461ms

The fix was literally swapping one word in a config file, but I don't regret the whole investigation. It turned a ghost delay into a clean, documented learning moment, and it reminded me that "localhost" is not always as local as it looks.

A few notes for next time

  • Always instrument the full request path; a tiny Express timing middleware (sketched below) goes a long way.
  • Prefer 127.0.0.1 over localhost in dev proxies on macOS.
  • Cache CORS preflights when you can.
  • Treat big delays as missing data, not slow code.
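
For that first note, the Express half of the instrumentation fits in one middleware. A sketch, assuming an Express app; the names and log format are illustrative:

// Log request arrival and response completion at the Express boundary.
app.use((req, res, next) => {
  const arrived = Date.now();
  console.log(`${new Date(arrived).toISOString()} - [express] Request arrived: ${req.method} ${req.originalUrl}`);

  // 'finish' fires once the response has been handed off to the OS.
  res.on('finish', () => {
    console.log(`${new Date().toISOString()} - [express] Response sent (${Date.now() - arrived}ms processing)`);
  });

  next();
});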

Footnotes

  1. macOS prefers IPv6 per RFC 6724. If your service isn't listening on ::1, you can get the exact kind of multi-second timeout seen here.