
The Cost of the DOM: Micro-Optimizing Next.js for Low-Bandwidth Markets

A deep, practical engineering guide on reducing React payload size, enforcing Server Component boundaries, managing SVG bloat, eliminating heavy dependencies, and building fintech frontends that survive 3G networks.


When developing modern web applications on an M-series MacBook Pro hooked up to gigabit fiber, everything is fast. A 3-megabyte JavaScript bundle downloads in half a second. A deeply nested React component tree with 1,500 DOM nodes renders at 60fps. Your Lighthouse score is green. Ship it.

Now deploy that same application to an emerging market where users rely on unpredictable 3G mobile networks, throttled to ~1.5 Mbps, on sub-$100 Android smartphones with 1GB RAM and a CPU throttled to 30% of what your dev machine runs. That 3MB bundle now takes 16 seconds to download. JavaScript parsing and execution — which happens on the main thread — can take another 8–12 seconds on a low-end Snapdragon CPU. Your checkout page freezes. Your transaction list causes the browser tab to crash. Your users leave.
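The back-of-envelope arithmetic behind that 16-second figure fits in a few lines (using the 3MB bundle and ~1.5 Mbps throughput figures from above):

```typescript
// Transfer time for a JS bundle over a throttled mobile link.
const bundleBytes = 3 * 1024 * 1024;       // 3 MB bundle, in bytes
const linkBitsPerSecond = 1.5 * 1_000_000; // ~1.5 Mbps effective 3G throughput
const transferSeconds = (bundleBytes * 8) / linkBitsPerSecond;
console.log(transferSeconds.toFixed(1)); // ≈ 16.8 seconds — before any parse/execute cost
```

And that is the best case: it assumes the link sustains full throughput with no packet loss, which real 3G connections rarely do.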

This is not a hypothetical scenario. According to the HTTP Archive's Web Almanac, the median mobile page ships roughly 500KB of JavaScript. For complex React SPAs and Next.js apps, that number routinely exceeds 2MB before any route-level code splitting kicks in. And research from Cloudflare shows that for users in sub-Saharan Africa, Southeast Asia, and South Asia — the fastest-growing internet user populations on Earth — effective throughput is frequently 50–70% lower than advertised network speeds due to congestion and signal quality.

If you are building fintech, e-commerce, or healthcare software for these markets, performance is not a nice-to-have. It is your product's core reliability guarantee.


Understanding the Performance Stack

Before jumping into fixes, it helps to understand the full stack of what is making your page slow. The Core Web Vitals are Google's standardized performance metrics, and they map directly to user experience:

| Metric | Full Name                 | What It Measures              | Good Threshold |
| ------ | ------------------------- | ----------------------------- | -------------- |
| LCP    | Largest Contentful Paint  | Time for main content to load | < 2.5s         |
| INP    | Interaction to Next Paint | Responsiveness to user input  | < 200ms        |
| CLS    | Cumulative Layout Shift   | Visual stability              | < 0.1          |
| FCP    | First Contentful Paint    | Time to first visible content | < 1.8s         |
| TTFB   | Time to First Byte        | Server response speed         | < 800ms        |

For low-bandwidth markets, LCP and INP are the critical battlegrounds. LCP is hurt by large JavaScript bundles and unoptimized images. INP — which replaced FID in March 2024 — is hurt by long main-thread tasks, which in React apps are almost always caused by excessive hydration.

Run your Next.js app through WebPageTest on a "Moto G4" device profile with a "3G Fast" network connection to get a realistic view of what your users in these markets actually experience. The numbers will surprise you.


The Illusion of Server-Side Rendering

A persistent misconception is that Next.js Server-Side Rendering (SSR) or Static Site Generation (SSG) automatically solves performance. Developers see a fast First Contentful Paint and conclude the performance problem is solved.

Here is what actually happens on a slow network:

1. 0ms     → Request sent to server
2. 180ms   → Server responds with HTML (instant — TTFB is fast)
3. 180ms   → Browser starts rendering the HTML skeleton (FCP is good ✅)
4. 180ms   → Browser discovers <script src="/_next/static/chunks/main.js">
5. 180ms   → Browser starts downloading main.js (3MB over 3G...)
6. ~18000ms → Download complete (16 seconds later)
7. ~22000ms → JavaScript parsed and executed (main thread was fully blocked)
8. ~22000ms → React hydration starts — attaches event listeners to HTML
9. ~26000ms → Hydration complete — page is interactive ✅

The user saw content at 180ms. But they could not interact with anything until 26 seconds in. They clicked the "Pay Now" button at second 3. Nothing happened. At second 8, they clicked again. Nothing. At second 15, they concluded the product was broken and left.

This zombie state — where the page looks ready but is actually non-interactive — is what Google now tracks with INP (Interaction to Next Paint). A bad INP score means your hydration is locking the main thread and making your app feel unresponsive even after the UI is visible.


Measuring Your Bundle: The First Step

Before you optimize, you need to see what you are shipping. Install @next/bundle-analyzer:

# 1. Install the plugin
npm install --save-dev @next/bundle-analyzer

// 2. next.config.js
const withBundleAnalyzer = require("@next/bundle-analyzer")({
  enabled: process.env.ANALYZE === "true",
});

module.exports = withBundleAnalyzer({
  // your next config
});

# 3. Run a build with analysis enabled
ANALYZE=true npm run build

This opens an interactive treemap in your browser showing exactly what is inside your JavaScript bundle. You will often find:

  • moment.js (72KB gzipped) imported just to format a date
  • lodash (69KB) used only for _.debounce
  • An entire icon library where you only use three icons
  • A chart library shipped on pages that have no charts

These are your immediate targets.
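As a concrete example, the lodash dependency that exists only for _.debounce can be replaced by a few lines of native TypeScript. This is a minimal sketch of a trailing-edge debounce — it lacks lodash's cancel/flush/leading options, which most search inputs and resize handlers never need:

```typescript
// Minimal debounce: collapses a burst of calls into one trailing call.
export function debounce<T extends (...args: any[]) => void>(
  fn: T,
  waitMs: number,
): (...args: Parameters<T>) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    clearTimeout(timer); // Reset the window on every call
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

If you genuinely need cancellation or leading-edge invocation, weigh those ~15 extra lines against the 69KB you would be re-importing.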


Attack Vector 1: SVG and Icon Bloat

One of the most silent killers of Next.js performance is how developers handle icons. The standard approach — importing from react-icons, lucide-react, or a custom icon component library — inlines every SVG's full path data directly into the React component tree.

// ❌ This ships ALL of lucide-react's bundle unless tree-shaking works perfectly
import { ArrowRight, CheckCircle, X, Search, Bell, User } from "lucide-react";

Even with tree-shaking, each icon component is still a React component that needs to be hydrated. On a transaction list showing 200 rows, each with a category icon and a status icon, you have just added 400 React components to the hydration queue — each with their own DOM nodes for every <path>, <circle>, and <g> element.

Fix 1: SVG Sprites with the <use> tag

Compile all your icons into a single SVG sprite file. The browser downloads the sprite once, caches it aggressively, and references icons by ID — with zero React hydration cost:

# Install svg-sprite or use svgstore
npm install --save-dev svg-sprite
// ❌ Every icon is a React component — hydrates individually
import { BankIcon } from "./icons";
 
export const TransactionRow = ({ transaction }) => (
  <div>
    <BankIcon className="w-4 h-4 text-gray-500" />
    <span>{transaction.description}</span>
  </div>
);
 
// ✅ SVG sprite reference — zero React hydration cost
export const TransactionRow = ({ transaction }) => (
  <div>
    <svg className="w-4 h-4 text-gray-500" aria-hidden="true">
      <use href={`/sprite.svg#icon-${transaction.category}`} />
    </svg>
    <span>{transaction.description}</span>
  </div>
);

The sprite file is served as a static asset from your CDN, cached with long Cache-Control headers, and shared across every page of your app. A list of 1,000 transactions renders with the same network cost for icons as a list of 1.
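Generating the sprite does not require heavy tooling. Here is a sketch of the core transformation; the `icon-` naming convention, the 24×24 viewBox, and the output path mentioned below are assumptions for illustration:

```typescript
// build-sprite.ts — wrap each icon's inner markup in a <symbol> and concatenate.
// Input: a map of icon name → inner SVG markup (the <path>/<circle>/<g> elements).
export function buildSprite(icons: Record<string, string>): string {
  const symbols = Object.entries(icons)
    .map(
      ([name, body]) =>
        `<symbol id="icon-${name}" viewBox="0 0 24 24">${body}</symbol>`,
    )
    .join("");
  // display:none keeps the sprite out of layout if it is ever inlined in a page
  return `<svg xmlns="http://www.w3.org/2000/svg" style="display:none">${symbols}</svg>`;
}
```

In a real build step you would read ./icons/*.svg, strip each file's outer <svg> wrapper, and write the result to public/sprite.svg so that `<use href="/sprite.svg#icon-bank" />` resolves.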

Fix 2: Use next/image for complex illustrations

If you have complex SVG illustrations (hero graphics, empty state artwork), consider converting them to WebP or AVIF images served via next/image. SVGs with gradients, filters, and complex paths can be significantly larger in bytes than an equivalent compressed raster image.


Attack Vector 2: Unnecessary Client Component Boundaries

Next.js App Router and React Server Components (RSC) are designed specifically to solve the hydration problem. Server Components run exclusively on the server and contribute zero bytes of JavaScript to the client bundle. They are rendered to HTML on the server, and that HTML is sent to the browser — no hydration needed.

The critical mistake: developers reach for "use client" at the top of a page or layout because a single deep child component needs interactivity. This collapses the entire subtree into a client bundle.

// ❌ Entire page becomes a client component because of one onClick handler
"use client";
 
export default function DashboardPage() {
  // This is wrong — the chart, the table, all the static text
  // are now included in the JS bundle
  return (
    <div>
      <RevenueChart />
      <TransactionTable />
      <StatCards />
      <button onClick={handleExport}>Export CSV</button>
    </div>
  );
}
// ✅ Only the interactive element is a client component
// app/dashboard/page.tsx — runs 100% on the server
import { RevenueChart } from "./revenue-chart"; // Server Component
import { TransactionTable } from "./transaction-table"; // Server Component
import { StatCards } from "./stat-cards"; // Server Component
import { ExportButton } from "./export-button"; // Client Component
 
export default async function DashboardPage() {
  const data = await fetchDashboardData(); // Direct DB/API call on server
 
  return (
    <div>
      <StatCards stats={data.stats} />
      <RevenueChart data={data.revenue} />
      <TransactionTable transactions={data.transactions} />
      {/* Only this tiny button ships JS to the client */}
      <ExportButton tenantId={data.tenantId} />
    </div>
  );
}
// app/dashboard/export-button.tsx — isolated Client Component
"use client";
 
export function ExportButton({ tenantId }: { tenantId: string }) {
  const handleExport = async () => {
    const res = await fetch(`/api/export?tenant=${tenantId}`);
    const blob = await res.blob();
    // ... trigger download
  };
 
  return (
    <button onClick={handleExport} className="btn-primary">
      Export CSV
    </button>
  );
}

The rule is: push interactivity down to the leaves. Every component should be a Server Component by default. Only reach for "use client" when you genuinely need useState, useEffect, browser APIs, or event handlers. The further down the tree the client boundary sits, the less JavaScript ships to the browser.

Visualizing Your Server vs Client Split

Next.js does not ship a dedicated visualizer, but the React DevTools browser extension distinguishes the two: recent versions tag Server Components with a "Server" badge in the component tree, while everything without the badge is hydrated on the client. If your page root or your layouts show up as client components, you have a problem.

You can also use the bundle analyzer output to identify which chunks are server-only versus shipped to the client.


Attack Vector 3: Dynamic Imports and Route-Level Code Splitting

Even within client components, not everything needs to be in the initial bundle. Next.js supports dynamic imports with next/dynamic, which defers loading a component until it is needed:

import dynamic from "next/dynamic";
 
// ❌ Chart library ships in the initial bundle — even if user never scrolls to it
import { RevenueChart } from "../components/charts/revenue-chart";
 
// ✅ Chart loads only when rendered — with a loading state
const RevenueChart = dynamic(
  () => import("../components/charts/revenue-chart"),
  {
    loading: () => <div className="chart-skeleton animate-pulse" />,
    ssr: false, // Charts that use window/canvas don't need SSR
  },
);

This is particularly impactful for:

  • Heavy charting libraries (recharts, chart.js, d3) — these can add 150–500KB to a bundle
  • Rich text editors (@tiptap, quill)
  • PDF renderers
  • Map components (leaflet, mapbox-gl)
  • Any component below the fold that your median user may never see

A practical pattern for fintech dashboards is to lazy-load everything below the first viewport:

// Only the hero stats and the first table load eagerly
// Everything else is deferred
const AdvancedFilters = dynamic(() => import("./advanced-filters"));
const AnalyticsChart = dynamic(() => import("./analytics-chart"), {
  ssr: false,
});
const ExportModal = dynamic(() => import("./export-modal"));
const AuditLog = dynamic(() => import("./audit-log"));

On a 3G connection, this means the user's critical path — the content they need to make a payment decision — loads first. Everything else loads progressively in the background.


Attack Vector 4: Heavy Dependencies and Bundle Size

The npm install command is a loaded gun pointed at your performance budget. Every package you pull in has a size cost — and many popular packages are shockingly large for what they do.

Here are the most common offenders in Next.js finance/SaaS apps and their lean alternatives:

| Library     | Why You Use It    | Bundle Cost (gzip)         | Lean Alternative             | Savings |
| ----------- | ----------------- | -------------------------- | ---------------------------- | ------- |
| moment      | Date formatting   | ~72KB                      | Intl.DateTimeFormat (native) | 72KB    |
| lodash      | Utility functions | ~69KB                      | Native ES2022+               | 69KB    |
| date-fns    | Date manipulation | ~29KB                      | Intl APIs + small utils      | ~20KB   |
| axios       | HTTP requests     | ~12KB                      | fetch (native)               | 12KB    |
| react-icons | All icons         | ~1MB+ (if not tree-shaken) | SVG sprites or inline SVG    | ~900KB  |
| numeral     | Number formatting | ~8KB                       | Intl.NumberFormat (native)   | 8KB     |
| validator   | String validation | ~13KB                      | Native URL, email regex      | ~10KB   |

Concrete replacements:

// ❌ moment.js — 72KB for a date label
import moment from "moment";
const label = moment(invoice.date).format("MMM D, YYYY");
 
// ✅ Native Intl API — 0 bytes
const label = new Intl.DateTimeFormat("en-US", {
  month: "short",
  day: "numeric",
  year: "numeric",
}).format(new Date(invoice.date));
// ❌ lodash for currency formatting
import { round } from "lodash";
const amount = `$${round(transaction.amount, 2).toFixed(2)}`;
 
// ✅ Intl.NumberFormat — 0 bytes, handles 150+ currencies natively
const amount = new Intl.NumberFormat("en-NG", {
  style: "currency",
  currency: "NGN",
  minimumFractionDigits: 2,
}).format(transaction.amount);
// → "₦12,500.00"
// ❌ lodash.cloneDeep — pulls in the entire lodash ecosystem
import cloneDeep from "lodash/cloneDeep";
const copy = cloneDeep(formState);
 
// ✅ structuredClone — native, available in Node 17+ and all modern browsers
const copy = structuredClone(formState);
// ❌ axios — 12KB when fetch is built-in
import axios from "axios";
const { data } = await axios.get("/api/invoices");
 
// ✅ fetch — 0 bytes, with a thin typed wrapper if needed
const res = await fetch("/api/invoices");
const data: Invoice[] = await res.json();

Attack Vector 5: Image Optimization

Images are the single largest contributor to page weight on most web applications. On a financial dashboard, poorly optimized logos, avatars, and chart exports can easily add 2–5MB.

Next.js's next/image component handles the heavy lifting automatically — but only if you use it correctly:

import Image from 'next/image';
 
// ❌ Raw <img> — no optimization, no lazy loading, no WebP conversion
<img src="/merchant-logo.png" width={80} height={80} />
 
// ✅ next/image — auto WebP/AVIF conversion, lazy loading, prevents CLS
<Image
  src="/merchant-logo.png"
  alt="Merchant logo"
  width={80}
  height={80}
  quality={75}     // 75 is a good balance of quality vs size
  priority={false} // Only set true for above-the-fold images
/>

Key next/image best practices for low-bandwidth environments:

  1. Use priority only for LCP images. Adding priority to every image defeats lazy loading and forces the browser to download all images upfront.
  2. Use sizes for responsive images. Without it, Next.js generates a very large image for every viewport.
  3. Store images on a CDN. Configure images.remotePatterns in next.config.js (the older domains option is deprecated in recent Next.js versions) to serve images from Cloudflare or AWS CloudFront — CDN edges are geographically closer to your users.
  4. Use AVIF where possible. Next.js will serve AVIF (30–50% smaller than WebP) to browsers that support it automatically.
// next.config.js
module.exports = {
  images: {
    formats: ["image/avif", "image/webp"], // Try AVIF first, fall back to WebP
    deviceSizes: [640, 750, 828, 1080, 1200], // Fewer sizes = fewer variants generated
    remotePatterns: [{ protocol: "https", hostname: "assets.yourcdn.com" }],
    minimumCacheTTL: 60 * 60 * 24 * 30, // Cache optimized images for 30 days
  },
};

Attack Vector 6: Font Loading

Fonts are a subtle but impactful performance vector. Downloading a heavy font file before rendering text blocks your FCP. Google Fonts, while convenient, introduces a cross-origin request that adds a DNS lookup and TLS handshake.

Next.js has a built-in solution: next/font.

// ❌ Google Fonts via <link> — cross-origin request, blocks render
// <link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600" rel="stylesheet" />
 
// ✅ next/font — downloaded and self-hosted at build time, zero runtime request
import { Inter } from "next/font/google";
 
const inter = Inter({
  subsets: ["latin"],
  display: "swap", // Show fallback font while loading
  preload: true,
  variable: "--font-inter", // Expose as CSS variable for Tailwind or CSS
});
 
export default function RootLayout({ children }) {
  return (
    <html lang="en" className={inter.variable}>
      <body>{children}</body>
    </html>
  );
}

next/font downloads the font at build time, self-hosts it on your domain, eliminates the cross-origin request, automatically applies font-display: swap, and inlines the @font-face declaration into the HTML — all with a single import. For users on slow connections in regions far from Google's CDN nodes, this can reduce font load time from 400–800ms to near zero.


Building for Network Resilience: Offline and Retry Patterns

On 3G networks, requests fail. Connections drop mid-transaction. Payment confirmations time out. Your app must handle this gracefully — not with a blank screen or a generic error.

Pattern 1: Explicit pending and failure states for critical actions

// components/payment-button.tsx
"use client";
import { useState, useTransition } from "react";
 
export function PaymentButton({ invoiceId }: { invoiceId: string }) {
  const [isPending, startTransition] = useTransition();
  const [status, setStatus] = useState<"idle" | "success" | "error">("idle");
 
  const handlePayment = () => {
    startTransition(async () => {
      try {
        await processPayment(invoiceId);
        setStatus("success");
      } catch {
        setStatus("error");
      }
    });
  };
 
  if (status === "success") return <p>✅ Payment confirmed</p>;
  if (status === "error")
    return (
      <div>
        <p>Connection failed. Your payment was NOT processed.</p>
        <button onClick={handlePayment}>Retry</button>
      </div>
    );
 
  return (
    <button onClick={handlePayment} disabled={isPending}>
      {isPending ? "Processing..." : "Pay Now"}
    </button>
  );
}
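processPayment above is assumed to be your payment API call; whatever it does, on a flaky network it must not hang forever, or the user never reaches the error branch with its Retry button. A small timeout wrapper (a sketch — withTimeout is a hypothetical helper, not a library API) makes any promise fail fast:

```typescript
// with-timeout.ts — reject a slow promise so the UI can surface a retry path.
export function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (value) => {
        clearTimeout(timer); // Settled in time — cancel the timeout
        resolve(value);
      },
      (err) => {
        clearTimeout(timer);
        reject(err);
      },
    );
  });
}
```

Usage inside handlePayment: `await withTimeout(processPayment(invoiceId), 15_000);` — after 15 seconds the user sees the error state instead of an eternal "Processing...".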

Pattern 2: Retry with exponential backoff

// lib/fetch-with-retry.ts
export async function fetchWithRetry(
  url: string,
  options?: RequestInit,
  retries = 3,
): Promise<Response> {
  for (let attempt = 0; attempt < retries; attempt++) {
    let res: Response;
    try {
      res = await fetch(url, options);
    } catch (err) {
      // Network failure (offline, dropped connection) — retry with backoff
      if (attempt === retries - 1) throw err;
      await new Promise((r) => setTimeout(r, 1000 * 2 ** attempt)); // 1s, 2s, 4s
      continue;
    }
    if (res.ok) return res;
    // Don't retry 4xx (client errors) — the request itself is wrong
    if (res.status >= 400 && res.status < 500) {
      throw new Error(`HTTP ${res.status}`);
    }
    // 5xx (server errors) are worth retrying
    if (attempt === retries - 1) throw new Error(`HTTP ${res.status}`);
    // Exponential backoff: 1s, 2s, 4s
    await new Promise((r) => setTimeout(r, 1000 * 2 ** attempt));
  }
  throw new Error("Max retries exceeded");
}
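One refinement worth considering: when many clients lose connectivity at once (a cell tower blip), pure exponential backoff makes them all retry in lockstep and hammer the recovering server together. Randomizing the delay — the technique commonly called "full jitter" — spreads retries out. A sketch of just the delay computation:

```typescript
// Full-jitter backoff: a random delay in [0, min(cap, base * 2^attempt)).
export function backoffDelayMs(
  attempt: number,
  baseMs = 1000,
  capMs = 30_000,
): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt); // Exponential, but capped
  return Math.floor(Math.random() * ceiling);
}
```

Swap this in wherever the retry loop computes its fixed 1s/2s/4s delay.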

Pattern 3: Skeleton screens over spinners

Spinners give no layout information, which contributes to Cumulative Layout Shift (CLS) when content loads. Skeleton screens preserve the layout and signal to the user that something specific is on its way:

// components/transaction-skeleton.tsx
export function TransactionSkeleton() {
  return (
    <div className="transaction-row animate-pulse">
      <div className="skeleton-circle w-8 h-8 rounded-full bg-gray-200" />
      <div className="flex-1 space-y-2">
        <div className="h-3 bg-gray-200 rounded w-3/4" />
        <div className="h-3 bg-gray-200 rounded w-1/2" />
      </div>
      <div className="h-4 bg-gray-200 rounded w-16" />
    </div>
  );
}

The Performance Budget

A performance budget is a hard limit on how much JavaScript, CSS, and image weight your app is allowed to ship. Treat it like a financial budget: every dependency is an expenditure, and you need sign-off before going over.

A reasonable budget for a fintech Next.js app targeting 3G users:

| Resource               | Budget       | How to Enforce                                |
| ---------------------- | ------------ | --------------------------------------------- |
| JS (initial)           | < 200KB gzip | Bundle analyzer in CI, fail build if exceeded |
| CSS                    | < 30KB gzip  | PurgeCSS, avoid unused Tailwind classes       |
| LCP image              | < 150KB      | next/image with quality tuning                |
| Total page weight      | < 1MB        | Lighthouse CI + WebPageTest in CI             |
| Lighthouse Performance | > 85         | Lighthouse CI action in GitHub Actions        |

You can enforce this automatically in CI with Lighthouse CI:

# .github/workflows/lighthouse.yml
- name: Run Lighthouse CI
  uses: treosh/lighthouse-ci-action@v10
  with:
    urls: |
      http://localhost:3000/dashboard
      http://localhost:3000/checkout
    budgetPath: ./budget.json
    uploadArtifacts: true
// budget.json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 200 },
      { "resourceType": "total", "budget": 800 }
    ],
    "timings": [
      { "metric": "interactive", "budget": 5000 },
      { "metric": "first-contentful-paint", "budget": 2000 }
    ]
  }
]
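If you want a dependency-free guard alongside Lighthouse CI, the budget check itself is only a few lines: compare measured sizes against limits and fail the build on any violation. A sketch — in a real pipeline the measured sizes would come from your build output, not hardcoded values:

```typescript
// check-budget.ts — returns the list of budget violations (empty array = pass).
type Budget = Record<string, number>; // resource type → max size in KB

export function checkBudget(measuredKb: Budget, budgetKb: Budget): string[] {
  return Object.entries(budgetKb)
    .filter(([type, limit]) => (measuredKb[type] ?? 0) > limit)
    .map(
      ([type, limit]) =>
        `${type}: ${measuredKb[type]}KB exceeds budget of ${limit}KB`,
    );
}
```

In CI, exit non-zero on any violation: `if (violations.length) { console.error(violations.join("\n")); process.exit(1); }`.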

The Bottom Line: Respect the Network

The golden engineering rule for emerging markets is simple: assume the network is hostile, and assume the device is slow.

Your checklist for a performance-hardened Next.js app:

  • Measure realistic performance using WebPageTest on a Moto G4 + 3G Fast profile
  • Run ANALYZE=true npm run build and identify your largest bundle chunks
  • Push all possible components to Server Components — "use client" should be rare
  • Lazy-load everything below the fold with next/dynamic
  • Replace moment, lodash, axios, and date-fns with native APIs
  • Compile icons into an SVG sprite; eliminate inline React SVG components
  • Use next/font instead of Google Fonts CDN links
  • Implement skeleton screens instead of spinners
  • Add retry logic with exponential backoff for all critical API calls
  • Set a JS performance budget and enforce it in CI with Lighthouse CI

Every kilobyte of JavaScript you remove is not just a speed improvement — it is a battery charge preserved, a data plan saved, and a user kept from bouncing. In markets where the next 2 billion internet users are, that distinction is the difference between a product that works and a product that does not.

Build lean. Render smart. Respect the network.


© 2026 Daniel Dallas Okoye

The best code is no code at all.