Next.js Performance Optimization: Core Web Vitals, PPR, Streaming, and Bundle Analysis
Optimize Next.js app performance: achieve green Core Web Vitals with Partial Prerendering, React Suspense streaming, image optimization, font loading, and bundle size reduction.
A Next.js app can ship with perfect Lighthouse scores on day one and terrible Core Web Vitals by month six. Not because Next.js got worse, but because teams add features without tracking the performance budget each one spends.
This post covers the practical techniques that move the needle: PPR for eliminating TTFB on dynamic pages, Suspense boundaries that parallelize data fetching, image and font loading patterns that fix LCP, and bundle analysis that finds the 200KB library you accidentally imported.
Core Web Vitals: What Actually Matters
Google's page experience ranking uses three signals:
| Metric | What it Measures | Good | Needs Work | Poor |
|---|---|---|---|---|
| LCP | Largest Contentful Paint: when the main content renders | < 2.5s | 2.5–4s | > 4s |
| INP | Interaction to Next Paint: responsiveness (replaced FID in 2024) | < 200ms | 200–500ms | > 500ms |
| CLS | Cumulative Layout Shift: visual stability | < 0.1 | 0.1–0.25 | > 0.25 |
The biggest LCP killers in Next.js apps: unoptimized hero images, blocking fonts, slow server response times, and render-blocking scripts.
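The thresholds in the table map mechanically onto the rating buckets that field tools report. A minimal sketch of that classification (thresholds copied from the table above; the helper name is my own):

```typescript
// Classify a measured value into the buckets from the table above.
// LCP/INP thresholds are in milliseconds; CLS is a unitless score.
type Rating = "good" | "needs-improvement" | "poor";

const thresholds = {
  LCP: [2500, 4000], // ms
  INP: [200, 500],   // ms
  CLS: [0.1, 0.25],  // unitless
} as const;

export function rate(metric: keyof typeof thresholds, value: number): Rating {
  const [good, poor] = thresholds[metric];
  if (value <= good) return "good";
  return value <= poor ? "needs-improvement" : "poor";
}
```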
Partial Prerendering (PPR)
PPR is Next.js 15's most important performance feature. It lets a single route have a static shell (served instantly from CDN) with dynamic holes filled by streaming:
// next.config.ts
import type { NextConfig } from "next";
const nextConfig: NextConfig = {
experimental: {
ppr: "incremental", // Enable PPR per-route
},
};
export default nextConfig;
// src/app/dashboard/page.tsx
import { Suspense } from "react";
import { unstable_noStore as noStore } from "next/cache";
// Mark this route as using PPR
export const experimental_ppr = true;
// Static shell: renders at build time, served from the CDN
export default function DashboardPage() {
return (
<div className="container mx-auto p-6">
{/* Static: always renders immediately */}
<h1 className="text-2xl font-bold">Dashboard</h1>
<nav>{/* Static navigation */}</nav>
{/* Dynamic hole 1: user stats – streams in */}
<Suspense fallback={<StatsSkeleton />}>
<UserStats />
</Suspense>
{/* Dynamic hole 2: recent activity – streams in independently */}
<Suspense fallback={<ActivitySkeleton />}>
<RecentActivity />
</Suspense>
{/* Dynamic hole 3: notifications – streams in independently */}
<Suspense fallback={<NotificationSkeleton />}>
<Notifications />
</Suspense>
</div>
);
}
// This component fetches dynamic data – it's the "hole"
async function UserStats() {
noStore(); // Opt out of caching for this component
const stats = await fetchUserStats(); // Could take 200ms
return <StatsGrid stats={stats} />;
}
async function RecentActivity() {
noStore();
const activity = await fetchRecentActivity(); // Could take 300ms
return <ActivityFeed items={activity} />;
}
Before PPR: User waits for the slowest data fetch (300ms) before seeing anything. After PPR: Static shell appears in ~50ms from CDN; data streams in as it's ready, in parallel.
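The skeleton fallbacks referenced above are ordinary components sized to match the content they stand in for; the names and markup here are illustrative, and the real versions should mirror the dimensions of the content they replace so the stream swap causes no shift:

```typescript
// Hypothetical skeletons for the dashboard above. The important property is
// that each one reserves the same space as the content it replaces.
export function StatsSkeleton() {
  return (
    <div className="grid grid-cols-4 gap-4">
      {Array.from({ length: 4 }, (_, i) => (
        <div key={i} className="h-24 animate-pulse rounded-lg bg-gray-200" />
      ))}
    </div>
  );
}

export function ActivitySkeleton() {
  return (
    <div className="space-y-2">
      {Array.from({ length: 5 }, (_, i) => (
        <div key={i} className="h-10 animate-pulse rounded bg-gray-200" />
      ))}
    </div>
  );
}
```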
Suspense Boundaries for Parallel Data Fetching
Without Suspense, Server Components fetch sequentially. With Suspense, they fetch in parallel:
// ❌ Sequential: total wait = 200ms + 300ms + 150ms = 650ms
async function ProductPage({ id }: { id: string }) {
const product = await fetchProduct(id); // 200ms
const reviews = await fetchReviews(id); // 300ms
const related = await fetchRelatedProducts(id); // 150ms
return (
<div>
<ProductDetail product={product} />
<ReviewList reviews={reviews} />
<RelatedProducts products={related} />
</div>
);
}
// ✅ Parallel with Suspense: total wait = max(200, 300, 150) = 300ms
function ProductPage({ id }: { id: string }) {
return (
<div>
<Suspense fallback={<ProductSkeleton />}>
<ProductDetail id={id} />
</Suspense>
<Suspense fallback={<ReviewSkeleton />}>
<ReviewList id={id} />
</Suspense>
<Suspense fallback={<RelatedSkeleton />}>
<RelatedProducts id={id} />
</Suspense>
</div>
);
}
// Each component fetches independently
async function ProductDetail({ id }: { id: string }) {
const product = await fetchProduct(id);
return <div>{/* render product */}</div>;
}
async function ReviewList({ id }: { id: string }) {
const reviews = await fetchReviews(id);
return <div>{/* render reviews */}</div>;
}
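When a single component genuinely needs several results at once (no separate Suspense boundaries), the same parallelism is available with plain Promise.all: start every request first, then await them together. The fetchers below are stand-in stubs for the data layer assumed above:

```typescript
// Stand-in fetchers; in the real app these would hit your API or database.
type Product = { id: string; name: string };

async function fetchProduct(id: string): Promise<Product> {
  return { id, name: "Widget" };
}
async function fetchReviews(id: string): Promise<string[]> {
  return ["Solid product"];
}
async function fetchRelatedProducts(id: string): Promise<Product[]> {
  return [{ id: "2", name: "Gadget" }];
}

export async function loadProductPage(id: string) {
  // All three requests are in flight concurrently, so the total wait is
  // roughly the slowest fetch, not the sum of all three.
  const [product, reviews, related] = await Promise.all([
    fetchProduct(id),
    fetchReviews(id),
    fetchRelatedProducts(id),
  ]);
  return { product, reviews, related };
}
```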
use() Hook for Client-Side Parallel Fetching
"use client";
import { use, Suspense } from "react";
// Kick off both fetches before rendering the Suspense tree (avoids a waterfall)
function ProductPageClient({ id }: { id: string }) {
// Both requests are in flight before either child suspends. Note: the fetch
// functions must return a stable, cached promise per id; otherwise every
// re-render creates new promises and use() suspends again.
const productPromise = fetchProduct(id);
const reviewsPromise = fetchReviews(id);
return (
<div>
<Suspense fallback={<ProductSkeleton />}>
<ProductDetail promise={productPromise} />
</Suspense>
<Suspense fallback={<ReviewSkeleton />}>
<ReviewList promise={reviewsPromise} />
</Suspense>
</div>
);
}
function ProductDetail({ promise }: { promise: Promise<Product> }) {
const product = use(promise); // Suspends until resolved
return <div>{product.name}</div>;
}
Image Optimization for LCP
The hero image is almost always the LCP element. Get it right:
// src/components/HeroImage.tsx
import Image from "next/image";
export function HeroImage() {
return (
<div className="relative w-full h-[400px]">
<Image
src="/images/hero.jpg"
alt="Hero image description"
fill
// priority: preloads this image โ critical for LCP
priority
// sizes: tells browser what size to expect at each breakpoint
// Without this, browser downloads the largest possible size
sizes="100vw"
quality={85}
className="object-cover"
// placeholder: shows a blurred preview instead of empty space while loading
placeholder="blur"
blurDataURL="data:image/jpeg;base64,/9j/4AAQSkZJRg..."
/>
</div>
);
}
// Non-hero images: lazy load (default), proper sizes
export function ProductImage({ src, name }: { src: string; name: string }) {
return (
<div className="relative aspect-square">
<Image
src={src}
alt={name}
fill
// No priority โ let browser decide when to load
sizes="(max-width: 768px) 50vw, (max-width: 1200px) 33vw, 25vw"
quality={80}
className="object-cover rounded-lg"
/>
</div>
);
}
Generating Blur Placeholders
// scripts/generate-blur-placeholders.ts
import { getPlaiceholder } from "plaiceholder";
import fs from "fs/promises";
import path from "path";
async function generatePlaceholders() {
const imageDir = path.join(process.cwd(), "public/images");
const files = await fs.readdir(imageDir);
const placeholders: Record<string, string> = {};
for (const file of files.filter((f) => /\.(jpg|jpeg|png|webp)$/i.test(f))) {
const buffer = await fs.readFile(path.join(imageDir, file));
const { base64 } = await getPlaiceholder(buffer);
placeholders[`/images/${file}`] = base64;
}
await fs.writeFile(
"src/lib/image-placeholders.ts",
`export const imagePlaceholders: Record<string, string> = ${JSON.stringify(placeholders, null, 2)};`
);
}
generatePlaceholders();
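The generated map can then feed blurDataURL without hardcoding base64 strings in components. A sketch that assumes the imagePlaceholders file produced by the script above:

```typescript
// Sketch: look up the generated placeholder for a given image path.
import Image from "next/image";
import { imagePlaceholders } from "@/lib/image-placeholders";

export function PlaceholderImage({ src, alt }: { src: string; alt: string }) {
  const blurDataURL = imagePlaceholders[src];
  return (
    <div className="relative aspect-video">
      <Image
        src={src}
        alt={alt}
        fill
        sizes="100vw"
        // Fall back to the default empty placeholder if the script
        // has not generated an entry for this image yet
        placeholder={blurDataURL ? "blur" : "empty"}
        blurDataURL={blurDataURL}
        className="object-cover"
      />
    </div>
  );
}
```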
Font Loading Optimization
Fonts are a major source of CLS and render blocking. The next/font loader is the fix:
// src/app/layout.tsx
import { Inter, JetBrains_Mono } from "next/font/google";
// next/font automatically:
// 1. Downloads and self-hosts the font (no external requests)
// 2. Subsets the font to the character sets you declare (e.g. latin)
// 3. Generates a fallback font with matched metrics (eliminates CLS)
// 4. Inlines font-face in <head> with preload links
const inter = Inter({
subsets: ["latin"],
display: "swap",
// Variable font covers all weights – single file
variable: "--font-inter",
// Preload: include only the weights you actually use
preload: true,
});
const jetbrainsMono = JetBrains_Mono({
subsets: ["latin"],
display: "swap",
variable: "--font-mono",
// Only include code block weight
weight: ["400", "500"],
});
export default function RootLayout({ children }: { children: React.ReactNode }) {
return (
<html
lang="en"
className={`${inter.variable} ${jetbrainsMono.variable}`}
>
<body className="font-sans">{children}</body>
</html>
);
}
/* tailwind.config.ts – use CSS variables */
fontFamily: {
sans: ["var(--font-inter)", "system-ui", "sans-serif"],
mono: ["var(--font-mono)", "monospace"],
}
CLS impact: Without next/font, the font swap causes layout shift. With next/font, the fallback font is metrically matched, so the swap causes near-zero shift.
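For licensed or brand fonts that aren't on Google Fonts, next/font/local gives the same self-hosting and metric-matched fallback. A sketch; the file paths and variable name below are placeholders:

```typescript
// src/app/fonts.ts – sketch using next/font/local (paths are hypothetical)
import localFont from "next/font/local";

export const brandFont = localFont({
  src: [
    { path: "../fonts/Brand-Regular.woff2", weight: "400", style: "normal" },
    { path: "../fonts/Brand-Bold.woff2", weight: "700", style: "normal" },
  ],
  display: "swap",
  variable: "--font-brand",
});
// Then add brandFont.variable to the <html> className exactly like the
// Google Fonts example above.
```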
Bundle Analysis and Size Reduction
# Install analyzer
npm install --save-dev @next/bundle-analyzer
// next.config.ts
import bundleAnalyzer from "@next/bundle-analyzer";
const withBundleAnalyzer = bundleAnalyzer({
enabled: process.env.ANALYZE === "true",
});
export default withBundleAnalyzer({
// your next config
});
# Run analysis
ANALYZE=true npm run build
Common Bundle Bloat Patterns
// โ Entire lodash library imported (70KB gzipped)
import _ from "lodash";
const sorted = _.sortBy(items, "name");
// ✅ Tree-shaken import (only sortBy and its dependencies)
import sortBy from "lodash/sortBy";
const sorted = sortBy(items, "name");
// ✅ Better: use native methods
const sorted = [...items].sort((a, b) => a.name.localeCompare(b.name));
// โ Moment.js โ 67KB gzipped, includes all locales
import moment from "moment";
const formatted = moment(date).format("MMM D, YYYY");
// ✅ date-fns: tree-shakeable, ~2KB per function
import { format } from "date-fns";
const formatted = format(date, "MMM d, yyyy");
// ✅ Or native Intl (zero bundle cost)
const formatted = new Intl.DateTimeFormat("en-US", {
month: "short",
day: "numeric",
year: "numeric",
}).format(date);
// ❌ Importing entire icon library (800KB)
import { FaHome, FaUser } from "react-icons/fa";
// ✅ Per-icon imports via @react-icons/all-files (tree-shaken by construction)
import { FaHome } from "@react-icons/all-files/fa/FaHome";
import { FaUser } from "@react-icons/all-files/fa/FaUser";
// ✅ Better: use Lucide React (already tree-shakeable)
import { Home, User } from "lucide-react";
Dynamic Imports for Heavy Components
// ❌ Chart library loaded on initial page load (400KB)
import { LineChart } from "recharts";
// ✅ Dynamic import: only loaded when the component mounts
import dynamic from "next/dynamic";
const LineChart = dynamic(
() => import("recharts").then((mod) => mod.LineChart),
{
loading: () => <ChartSkeleton />,
ssr: false, // charts often need DOM APIs; in the App Router this requires a Client Component
}
);
// ✅ Route-level code splitting for heavy pages
const AdminPanel = dynamic(() => import("@/components/AdminPanel"), {
loading: () => <div>Loading admin panel...</div>,
});
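Dynamic import also composes with interaction gating: instead of merely deferring a heavy chunk to mount, defer it until the user actually asks for the feature. The editor component below is hypothetical:

```typescript
"use client";
import { useState } from "react";
import dynamic from "next/dynamic";

// Hypothetical heavy editor: its chunk is only requested once the user
// clicks, never on initial page load.
const RichTextEditor = dynamic(() => import("@/components/RichTextEditor"), {
  loading: () => <p>Loading editor…</p>,
  ssr: false, // allowed here because this file is a Client Component
});

export function CommentBox() {
  const [editing, setEditing] = useState(false);
  return editing ? (
    <RichTextEditor />
  ) : (
    <button onClick={() => setEditing(true)}>Write a comment</button>
  );
}
```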
Caching Strategy
// src/lib/fetch.ts – typed fetch with cache control
interface FetchOptions extends RequestInit {
revalidate?: number | false; // seconds, or false to bypass the cache entirely
tags?: string[]; // for on-demand revalidation
}
export async function fetchWithCache<T>(
url: string,
options: FetchOptions = {}
): Promise<T> {
const { revalidate, tags, ...fetchOptions } = options;
// Note: in Next's fetch cache, next.revalidate: false means "cache
// indefinitely", so a `false` here is translated into an explicit
// no-store request instead
const response = await fetch(url, {
...fetchOptions,
...(revalidate === false
? { cache: "no-store" as const }
: { next: { revalidate: revalidate ?? 60, tags } }), // default: revalidate every 60s
});
if (!response.ok) {
throw new Error(`Fetch failed: ${response.status} ${url}`);
}
return response.json() as Promise<T>;
}
// Usage examples:
// Static content: revalidate daily
const config = await fetchWithCache("/api/site-config", { revalidate: 86400 });
// User data: no cache (dynamic per user)
const user = await fetchWithCache("/api/me", { revalidate: false });
// Product catalog: revalidate hourly, tag for on-demand invalidation
const products = await fetchWithCache("/api/products", {
revalidate: 3600,
tags: ["products"],
});
// On-demand revalidation when products change
// import { revalidateTag } from "next/cache";
// revalidateTag("products"); // Called from a webhook or Server Action
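The revalidateTag call mentioned above is typically wired to a Route Handler that a CMS or inventory webhook hits. A sketch; the route path, header name, and REVALIDATE_SECRET env var are assumptions:

```typescript
// src/app/api/revalidate/route.ts – hypothetical webhook endpoint
import { revalidateTag } from "next/cache";
import { NextResponse, type NextRequest } from "next/server";

export async function POST(request: NextRequest) {
  // Shared-secret check so only trusted systems can flush the cache
  const secret = request.headers.get("x-revalidate-secret");
  if (secret !== process.env.REVALIDATE_SECRET) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }
  // Every fetch tagged "products" is re-fetched on its next request
  revalidateTag("products");
  return NextResponse.json({ revalidated: true, now: Date.now() });
}
```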
INP: Reducing Interaction Latency
INP measures the time from user interaction to next paint. Heavy JavaScript on the main thread is the primary cause:
// ❌ Synchronous processing blocks the main thread
function FilteredList({ items }: { items: Item[] }) {
const [query, setQuery] = useState("");
// This runs synchronously on every keystroke
const filtered = items.filter((item) =>
item.name.toLowerCase().includes(query.toLowerCase())
);
return (
<div>
<input value={query} onChange={(e) => setQuery(e.target.value)} />
{filtered.map((item) => <ItemRow key={item.id} item={item} />)}
</div>
);
}
// ✅ useDeferredValue defers the expensive computation
import { useDeferredValue, useMemo } from "react";
function FilteredList({ items }: { items: Item[] }) {
const [query, setQuery] = useState("");
const deferredQuery = useDeferredValue(query); // lags behind query during urgent updates
const filtered = useMemo(
() => items.filter((item) =>
item.name.toLowerCase().includes(deferredQuery.toLowerCase())
),
[items, deferredQuery]
);
return (
<div>
<input value={query} onChange={(e) => setQuery(e.target.value)} />
{/* Show stale results with visual hint while computing */}
<div style={{ opacity: query !== deferredQuery ? 0.7 : 1 }}>
{filtered.map((item) => <ItemRow key={item.id} item={item} />)}
</div>
</div>
);
}
// ✅ For >10,000 items: move filtering to a Web Worker
const worker = new Worker(new URL("./filter.worker.ts", import.meta.url));
worker.postMessage({ items, query });
worker.onmessage = (e) => setFiltered(e.data);
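The worker file itself just listens for messages and posts results back. A sketch of filter.worker.ts (the Item shape is assumed; the message handler is guarded so the filter logic is also importable outside a worker):

```typescript
// filter.worker.ts – runs off the main thread, so even a filter over
// hundreds of thousands of items cannot block input handling.
type Item = { id: number; name: string };

export function filterItems(items: Item[], query: string): Item[] {
  const q = query.toLowerCase();
  return items.filter((item) => item.name.toLowerCase().includes(q));
}

// Worker entry point: reply to each { items, query } message with the result.
// Guarded so importing this module outside a worker does not throw.
if (typeof addEventListener === "function" && typeof postMessage === "function") {
  addEventListener("message", (e: any) => {
    const { items, query } = e.data as { items: Item[]; query: string };
    postMessage(filterItems(items, query));
  });
}
```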
Performance Monitoring in Production
// src/app/web-vitals.tsx – Web Vitals reporting
// In the App Router, the pages-router `reportWebVitals` export is not picked
// up; use the useReportWebVitals hook in a Client Component and render
// <WebVitals /> once in the root layout instead
"use client";
import { useReportWebVitals } from "next/web-vitals";
export function WebVitals() {
useReportWebVitals((metric) => {
const { name, value, rating, id } = metric;
// Send to analytics
window.gtag?.("event", name, {
value: Math.round(name === "CLS" ? value * 1000 : value),
metric_id: id,
metric_rating: rating, // "good" | "needs-improvement" | "poor"
non_interaction: true,
});
// Alert on poor metrics
if (rating === "poor") {
console.warn(`Poor ${name}: ${value}`);
// Send to error tracking
Sentry.captureMessage(`Poor Core Web Vital: ${name}`, {
level: "warning",
extra: { value, rating, id },
});
}
});
return null;
}
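gtag is one possible sink; a first-party endpoint avoids ad blockers and keeps raw values. navigator.sendBeacon matters here because LCP and CLS often finalize as the user leaves the page. A sketch, where the /api/vitals route is hypothetical:

```typescript
// Sketch: ship a metric to your own endpoint instead of (or alongside) gtag.
type VitalsPayload = {
  name: string;
  value: number;
  rating: string;
  id: string;
};

export function sendToAnalytics(metric: VitalsPayload): void {
  const body = JSON.stringify(metric);
  if (typeof navigator !== "undefined" && navigator.sendBeacon) {
    // sendBeacon queues the request even if the page is being unloaded
    navigator.sendBeacon("/api/vitals", body);
  } else {
    // keepalive lets the request outlive the page, like sendBeacon does
    fetch("/api/vitals", { method: "POST", body, keepalive: true });
  }
}
```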
Performance Budget Reference
| Resource | Target | Warning | Fail |
|---|---|---|---|
| JS bundle (initial) | < 150KB gzip | 150โ250KB | > 250KB |
| CSS bundle | < 30KB gzip | 30โ60KB | > 60KB |
| Hero image | < 100KB | 100โ200KB | > 200KB |
| TTFB (server) | < 200ms | 200โ500ms | > 500ms |
| LCP | < 2.5s | 2.5โ4s | > 4s |
| Total page weight | < 1MB | 1โ2MB | > 2MB |
See Also
- React Performance Optimization Techniques โ React-specific rendering
- React Server Components โ server-side rendering patterns
- Web Accessibility Engineering โ a11y without performance cost
- Next.js App Router Patterns โ App Router architecture
Working With Viprasol
We build Next.js applications that achieve green Core Web Vitals from day one, with caching strategies, image pipelines, and bundle budgets that stay within spec as the product grows. Our clients have seen 40–70% improvement in LCP after performance audits.
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.
Need a Modern Web Application?
From landing pages to complex SaaS platforms โ we build it all with Next.js and React.
Free consultation • No commitment • Response within 24 hours