How I Made Our Next.js 15 App 3x Faster With These 7 Techniques


Three weeks ago, our SaaS dashboard was failing Core Web Vitals. Users complained about sluggish navigation, and our TTFB averaged 1.2 seconds. After implementing these 7 Next.js 15 optimization techniques, we cut TTFB to 400ms, improved our Core Web Vitals scores by roughly 40%, and the complaints stopped. Here's exactly what worked.

The Problem: A Dashboard That Felt Like 2015

Our B2B SaaS dashboard serves 10,000+ daily active users who expect instant responses. Built on the App Router in Next.js 14 (and since upgraded to 15), it worked but felt slow. Chrome DevTools revealed the truth:

  • TTFB: 1.2-1.8 seconds
  • LCP: 3.4 seconds
  • Total blocking time: 890ms
  • 15+ sequential API calls on dashboard load

I spent a weekend profiling every millisecond. The culprit? We were using the App Router like the old Pages Router - missing its most powerful features.

Technique 1: Parallel Routes Eliminated Our Waterfall

Our dashboard had four widgets loading sequentially. Each waited for the previous one to complete. Classic waterfall problem.

Before: Sequential Loading Nightmare

// app/dashboard/page.tsx - The slow way
export default async function Dashboard() {
  // Each await blocks the next one
  const analytics = await fetchAnalytics() // 400ms
  const notifications = await fetchNotifications() // 300ms
  const metrics = await fetchMetrics() // 350ms
  const activities = await fetchActivities() // 280ms
  // Total: 1330ms minimum
  
  return (
    <div className="dashboard">
      <AnalyticsWidget data={analytics} />
      <NotificationsPanel items={notifications} />
      <MetricsChart data={metrics} />
      <ActivityFeed items={activities} />
    </div>
  )
}

After: Parallel Routes Magic

// app/dashboard/layout.tsx - The fast way
export default function DashboardLayout({
  children,
  analytics,
  notifications,
  metrics,
  activities
}: {
  children: React.ReactNode
  analytics: React.ReactNode
  notifications: React.ReactNode
  metrics: React.ReactNode
  activities: React.ReactNode
}) {
  return (
    <div className="dashboard-grid">
      <main>{children}</main>
      <section className="analytics">{analytics}</section>
      <aside className="notifications">{notifications}</aside>
      <div className="metrics">{metrics}</div>
      <div className="activities">{activities}</div>
    </div>
  )
}
 
// app/dashboard/@analytics/page.tsx
export default async function AnalyticsSlot() {
  const data = await fetchAnalytics()
  return <AnalyticsWidget data={data} />
}
 
// app/dashboard/@notifications/page.tsx
export default async function NotificationsSlot() {
  const items = await fetchNotifications()
  return <NotificationsPanel items={items} />
}
 
// Similar for @metrics and @activities

Result: All four API calls now happen simultaneously. Total load time dropped from 1330ms to 400ms (the slowest request). That's a 70% improvement with just restructuring.
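
One caveat worth knowing: once you add sub-routes under /dashboard, a hard navigation tries to render every slot, and any slot without a matching page falls back to its default.tsx - or 404s if that file doesn't exist. A one-line fallback per slot avoids this:

// app/dashboard/@analytics/default.tsx (repeat for each slot)
// Rendered when the slot has no match for the current URL
export default function Default() {
  return null
}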

Technique 2: Partial Prerendering for Instant Static Shells

I've found that users perceive speed based on when they see content, not when it's interactive. PPR (Partial Prerendering) ships static HTML instantly while streaming dynamic parts.

// app/dashboard/page.tsx with PPR
import { Suspense } from 'react'
 
export default function Dashboard() {
  return (
    <div className="dashboard">
      {/* Static shell renders immediately */}
      <DashboardHeader />
      <DashboardNav />
      
      {/* Dynamic content streams in */}
      <Suspense fallback={<ChartSkeleton />}>
        <UserSpecificAnalytics />
      </Suspense>
      
      <Suspense fallback={<FeedSkeleton />}>
        <PersonalizedActivityFeed />
      </Suspense>
    </div>
  )
}
 
// This component uses user data
async function UserSpecificAnalytics() {
  const session = await getSession()
  const analytics = await fetchUserAnalytics(session.userId)
  return <AnalyticsChart data={analytics} />
}

The static shell (header, nav, layout) renders in 50ms. Dynamic content streams in as it's ready. Users see meaningful content immediately instead of staring at a spinner.
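
One thing to flag: PPR is still experimental in Next.js 15, so it has to be switched on explicitly. A minimal opt-in, using the incremental adoption path:

// next.config.ts
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  experimental: {
    ppr: 'incremental' // enable PPR route by route
  }
}

export default nextConfig

// app/dashboard/page.tsx - add alongside the component above
export const experimental_ppr = true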

Technique 3: The New Caching Strategy That Actually Works

Next.js 15 changed caching defaults. Client router cache now has staleTime: 0 for Page segments by default. This broke our assumptions but led to a better approach.

Our Three-Layer Cache Strategy

// 1. Route Handler Cache - For API responses
// app/api/products/route.ts
import { unstable_cache } from 'next/cache'
 
const getCachedProducts = unstable_cache(
  async (category: string) => {
    const products = await db.products.findMany({
      where: { category },
      include: { reviews: true }
    })
    return products
  },
  ['products'],
  { 
    revalidate: 300, // 5 minutes
    tags: ['products']
  }
)
 
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url)
  const category = searchParams.get('category') || 'all'
  
  const products = await getCachedProducts(category)
  
  return Response.json(products, {
    headers: {
      'Cache-Control': 'public, s-maxage=300, stale-while-revalidate=59'
    }
  })
}
 
// 2. Component-Level Cache - For expensive computations
// app/products/page.tsx
const getProcessedProducts = unstable_cache(
  async () => {
    // fetch returns a Response, so parse the JSON before mapping.
    // Server-side fetch also needs an absolute URL; APP_URL is a
    // hypothetical env var pointing at this deployment.
    const res = await fetch(`${process.env.APP_URL}/api/products`)
    const products = await res.json()
    const processed = products.map(transformProduct)
    return sortByPopularity(processed)
  },
  ['processed-products'],
  { revalidate: 60 }
)
 
// 3. Client Router Cache - Configure in next.config.ts
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  experimental: {
    staleTimes: {
      dynamic: 30,  // Dynamic routes cache for 30s
      static: 180   // Static routes cache for 3 minutes
    }
  }
}

export default nextConfig

This strategy cut our database queries by 80% while keeping data fresh. The key insight: cache at multiple levels with different TTLs based on data volatility.
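
The tags we attached above also buy us on-demand invalidation: when a product changes, a single revalidateTag call purges every entry tagged 'products'. A sketch, assuming a Server Action and the same Prisma-style db client used earlier:

// app/actions.ts
'use server'

import { revalidateTag } from 'next/cache'
import { db } from '@/lib/db' // assumed db client

export async function updateProduct(id: string, data: { name?: string; price?: number }) {
  await db.products.update({ where: { id }, data })
  // Purges every cache entry tagged 'products', including getCachedProducts above
  revalidateTag('products')
}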

Technique 4: Edge Runtime for Lightning-Fast Middleware

Our authentication middleware was adding 200ms to every request. Moving it to Edge Runtime changed everything.

Before: Node.js Middleware

// middleware.ts - Slow Node.js version
import { NextResponse, type NextRequest } from 'next/server'
import jwt from 'jsonwebtoken' // 50KB library

export async function middleware(request: NextRequest) {
  const token = request.cookies.get('token')?.value
  
  if (!token) {
    return NextResponse.redirect(new URL('/login', request.url))
  }
  
  try {
    // This library doesn't work in Edge Runtime
    const decoded = jwt.verify(token, process.env.SECRET!) as { userId: string }
    // Database check adds latency
    const user = await fetchUser(decoded.userId)
    
    if (!user) {
      return NextResponse.redirect(new URL('/login', request.url))
    }
  } catch {
    return NextResponse.redirect(new URL('/login', request.url))
  }
}

After: Edge Runtime Middleware

// middleware.ts - Fast Edge Runtime version
import { NextResponse, type NextRequest } from 'next/server'
import { jwtVerify } from 'jose' // Edge-compatible, 8KB

// Middleware runs on the Edge Runtime by default, so no runtime export is needed

export async function middleware(request: NextRequest) {
  const token = request.cookies.get('token')?.value
  
  if (!token) {
    return NextResponse.redirect(new URL('/login', request.url))
  }
  
  try {
    // Jose works in Edge Runtime
    const { payload } = await jwtVerify(
      token,
      new TextEncoder().encode(process.env.JWT_SECRET!)
    )
    
    // Skip database check - validate on page level if needed
    const response = NextResponse.next()
    response.headers.set('x-user-id', payload.userId as string)
    return response
  } catch {
    return NextResponse.redirect(new URL('/login', request.url))
  }
}
 
export const config = {
  matcher: ['/dashboard/:path*', '/api/protected/:path*']
}

Impact: Middleware execution dropped from 200ms to 15ms. Edge Runtime's geographic distribution means users in Singapore get the same speed as those in California.
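
Downstream, any Server Component can pick up the x-user-id header the middleware set and do the heavier validation only where it matters. A sketch using next/headers (DashboardShell is a placeholder component; note that headers() is async in Next.js 15):

// app/dashboard/page.tsx - downstream of the middleware
import { headers } from 'next/headers'

export default async function DashboardPage() {
  // headers() returns a Promise in Next.js 15
  const userId = (await headers()).get('x-user-id')
  // The database check we skipped in middleware happens here instead
  const user = userId ? await fetchUser(userId) : null

  return <DashboardShell user={user} />
}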

Technique 5: Intercepting Routes for Instant Interactions

Modal dialogs used to require full page navigations. Intercepting routes let us show modals while keeping the underlying page loaded.

// app/products/[id]/page.tsx - Full product page
export default async function ProductPage({
  params
}: {
  params: Promise<{ id: string }>
}) {
  // In Next.js 15, params is a Promise and must be awaited
  const { id } = await params
  return <FullProductView id={id} />
}
 
// app/products/[id]/@modal/(.)photo/page.tsx - Intercepted route
export default async function ProductPhotoModal({
  params
}: {
  params: Promise<{ id: string }>
}) {
  const { id } = await params
  return (
    <Modal>
      <ProductGallery id={id} />
    </Modal>
  )
}
 
// Usage in product list
<Link href={`/products/${product.id}/photo`}>
  View Gallery
</Link>

When users click "View Gallery", they see a modal instantly without navigation. The URL updates, back button works, and sharing the URL opens the full page. It feels native.
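
A few pieces of plumbing the snippet above assumes: the segment's layout has to render the @modal slot, the slot needs a default.tsx so it stays empty when nothing is intercepted, and there has to be a real photo page for direct visits and shared URLs. Roughly:

// app/products/[id]/layout.tsx - render the modal slot alongside the page
export default function Layout({
  children,
  modal
}: {
  children: React.ReactNode
  modal: React.ReactNode
}) {
  return (
    <>
      {children}
      {modal}
    </>
  )
}

// app/products/[id]/@modal/default.tsx - empty when no photo is open
export default function Default() {
  return null
}

// app/products/[id]/photo/page.tsx - full page for direct navigation
export default async function PhotoPage({
  params
}: {
  params: Promise<{ id: string }>
}) {
  const { id } = await params
  return <ProductGallery id={id} />
}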

Technique 6: Smart Component Splitting

I learned the hard way that 'use client' is viral - everything a client component imports becomes client code. Our solution? Surgical component splitting.

The Problem Component

// SearchableProductList.tsx - 45KB of client JS
'use client'
 
import { useState } from 'react'
import { formatPrice, calculateDiscount } from '@/utils' // 12KB
import { ProductCard } from '@/components' // 15KB
import { analytics } from '@/lib/analytics' // 18KB
 
export default function SearchableProductList({ products }) {
  const [search, setSearch] = useState('')
  
  const filtered = products.filter(p => 
    p.name.includes(search)
  )
  
  return (
    <>
      <input 
        value={search} 
        onChange={(e) => setSearch(e.target.value)}
      />
      {filtered.map(product => (
        <ProductCard 
          key={product.id} 
          product={product}
          price={formatPrice(product.price)}
          discount={calculateDiscount(product)}
        />
      ))}
    </>
  )
}

The Optimized Version

// ProductList.tsx - Server Component (0KB client JS)
import { formatPrice, calculateDiscount } from '@/utils'
import { ProductCard } from '@/components'
 
export default function ProductList({ products }) {
  // All formatting happens on server
  const processedProducts = products.map(p => ({
    ...p,
    formattedPrice: formatPrice(p.price),
    discount: calculateDiscount(p)
  }))
  
  return <ProductGrid products={processedProducts} />
}
 
// SearchFilter.client.tsx - Client Component (3KB)
'use client'
 
import { useRouter, useSearchParams } from 'next/navigation'
import { useTransition } from 'react'
 
export default function SearchFilter() {
  const router = useRouter()
  const searchParams = useSearchParams()
  const [isPending, startTransition] = useTransition()
  
  const handleSearch = (term: string) => {
    startTransition(() => {
      const params = new URLSearchParams(searchParams)
      params.set('search', term)
      router.push(`?${params.toString()}`)
    })
  }
  
  return (
    <input
      defaultValue={searchParams.get('search') || ''}
      onChange={(e) => handleSearch(e.target.value)}
      className={isPending ? 'opacity-50' : ''}
    />
  )
}
 
// page.tsx - Composed together
export default async function Page({
  searchParams
}: {
  searchParams: Promise<{ search?: string }>
}) {
  // In Next.js 15, searchParams is a Promise and must be awaited
  const { search } = await searchParams
  const products = await getProducts(search)
  
  return (
    <>
      <SearchFilter />
      <ProductList products={products} />
    </>
  )
}

Result: Client bundle reduced from 45KB to 3KB. The search still feels instant thanks to useTransition.
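
If you want to verify bundle numbers like these yourself, @next/bundle-analyzer makes the per-route client bundle visible. A minimal setup sketch:

// next.config.ts
import type { NextConfig } from 'next'
import bundleAnalyzer from '@next/bundle-analyzer'

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === 'true'
})

const nextConfig: NextConfig = {
  // ...your existing config
}

export default withBundleAnalyzer(nextConfig)

// Then run: ANALYZE=true next build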

Technique 7: Image Optimization That Actually Matters

We were using next/image correctly but not optimally. These tweaks made our images load 2x faster.

// components/HeroImage.tsx
import Image from 'next/image'
import { getPlaiceholder } from 'plaiceholder'

export default async function HeroImage({ src, alt }: { src: string; alt: string }) {
  // generateBlurPlaceholder is async, so resolve it in this Server Component
  const blurDataURL = await generateBlurPlaceholder(src)

  return (
    <Image
      src={src}
      alt={alt}
      width={1920}
      height={1080}
      priority // Critical for LCP
      quality={85} // Sweet spot for quality/size
      placeholder="blur" // Requires blurDataURL
      blurDataURL={blurDataURL}
      sizes="(max-width: 768px) 100vw, (max-width: 1200px) 80vw, 1920px"
      className="hero-image"
    />
  )
}

// For dynamic images, generate blur placeholders on the server
async function generateBlurPlaceholder(src: string) {
  // plaiceholder v3 takes a Buffer, so fetch the image bytes first
  const res = await fetch(src)
  const buffer = Buffer.from(await res.arrayBuffer())
  const { base64 } = await getPlaiceholder(buffer, { size: 10 })
  return base64
}
 
// Configure in next.config.ts for optimal delivery
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  images: {
    formats: ['image/avif', 'image/webp'],
    deviceSizes: [640, 750, 1080, 1200, 1920],
    minimumCacheTTL: 31536000, // 1 year
  }
}

export default nextConfig

Combined with Cloudflare Images, our hero images now load in 200ms on 4G.
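
For reference, one common way to route next/image through Cloudflare's resizing is a custom loader, along the lines of the example in the Next.js docs (example.com stands in for your zone):

// image-loader.ts
export default function cloudflareLoader({
  src,
  width,
  quality
}: {
  src: string
  width: number
  quality?: number
}) {
  const params = [`width=${width}`, `quality=${quality || 85}`, 'format=auto']
  return `https://example.com/cdn-cgi/image/${params.join(',')}/${src}`
}

// next.config.ts - point the optimizer at the loader
// images: { loader: 'custom', loaderFile: './image-loader.ts' }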

Real-World Results After 3 Weeks

The numbers tell the story:

Performance Metrics:

  • TTFB: 1.2s → 400ms (-67%)
  • LCP: 3.4s → 1.8s (-47%)
  • FID: 210ms → 45ms (-79%)
  • CLS: 0.21 → 0.05 (-76%)
  • Total Blocking Time: 890ms → 230ms (-74%)

Business Impact:

  • Session duration: +18%
  • Bounce rate: -31%
  • Conversion rate: +12%
  • Support tickets about "slow dashboard": -95%

Technical Improvements:

  • Database queries: -80% (thanks to caching)
  • API calls: -60% (parallel routes)
  • Client bundle: -55% (smart splitting)
  • Server costs: -30% (better resource utilization)

Lessons Learned The Hard Way

Parallel routes aren't free. Each route is a separate React tree. Too many parallel routes can increase memory usage. We found 4-6 parallel routes to be the sweet spot.

PPR requires discipline. It's tempting to wrap everything in Suspense. But too many boundaries create "popcorn" loading. Group related content under single boundaries.

Edge Runtime has limits. Not all npm packages work. We maintain a list of Edge-compatible alternatives for common libraries.

Cache invalidation is still hard. Our three-layer cache strategy works, but we spent days tuning TTLs and revalidation logic. Start with conservative (short) TTLs and lengthen them only where real usage shows the data tolerates staleness.

Migration isn't all-or-nothing. We migrated route by route over two weeks. The App Router and Pages Router can coexist, making gradual migration possible.

What's Next

We're now experimenting with Next.js 15.5's typed routes for build-time route validation and Turbopack for 3x faster local development. Early results show another 20% improvement in build times.

The App Router isn't just a new routing system - it's a fundamental shift in how we think about React applications. By embracing its patterns instead of fighting them, we transformed our sluggish dashboard into something that feels instant. Our users noticed, our metrics improved, and our AWS bill went down. That's a win in my book.