A boring, repeatable image pipeline for Next.js: cap images at 800px, strip metadata, and auto-pick lossless vs lossy WebP so pages feel complete faster.

This One Script Fixed the “Images Drop In Late” Problem

I run amankumar.ai on Next.js. Pages would load, layouts would appear, and React would do its thing, but something still felt slow.

The problem wasn’t that the page didn’t render.

The problem was this: the layout showed up quickly, but the images trailed in noticeably later.

That gap, where placeholders sit there and images arrive late, is subtle, but once you notice it, you can’t unsee it. It makes the site feel heavier than it should.

So I stopped guessing and looked at the boring part I’d been ignoring: image payloads.


Why this keeps happening on modern sites (without anyone messing up)

This problem shows up a lot more today, even on well-built sites, and it’s not because people are careless.

A few things are happening at once:

  • Design tools export big images by default. Posters, UI mockups, and screenshots often come out as large PNGs.
  • AI-generated images are heavy by nature. High detail, high resolution, no concern for delivery size.
  • Frameworks don’t change asset intent. Next.js can optimize delivery, but if the source image is huge, it still has to download.
  • Image bloat accumulates quietly. No errors, no warnings, just slower visual completion over time.

The result is exactly what I was seeing: the page loads, but images arrive last.


Measuring the problem on amankumar.ai

Before touching anything, I measured the image assets in the repo.

They were a mix of:

  • PNGs (most of them)
  • some JPG/JPEGs
  • different resolutions
  • photos, posters, and UI graphics, all mixed together

The total came out to roughly 83 MB of image data across the repo.

This is repo-level image weight, not “every page ships 83 MB.” But it clearly showed the root issue: I was carrying a lot of unnecessary image data, and the browser was paying for it.
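For reference, getting that repo-level number doesn’t need anything fancy. Here is a minimal sketch in Node/TypeScript; the `public` directory default and the extension list are assumptions, so adjust them for your repo:

```ts
// measure-images.ts — sum the size of every image file under a directory.
import { promises as fs } from "node:fs";
import path from "node:path";

const IMAGE_EXTS = new Set([".png", ".jpg", ".jpeg", ".webp"]);

// Recursively list every file under a directory.
async function walk(dir: string): Promise<string[]> {
  const entries = await fs.readdir(dir, { withFileTypes: true });
  const nested = await Promise.all(
    entries.map((entry) => {
      const full = path.join(dir, entry.name);
      return entry.isDirectory() ? walk(full) : Promise.resolve([full]);
    })
  );
  return nested.flat();
}

async function main(): Promise<void> {
  const root = process.argv[2] ?? "public"; // assumed asset directory
  const files = (await walk(root)).filter((f) =>
    IMAGE_EXTS.has(path.extname(f).toLowerCase())
  );
  let total = 0;
  for (const file of files) total += (await fs.stat(file)).size;
  console.log(`${files.length} images, ${(total / 1024 / 1024).toFixed(2)} MB total`);
}

main();
```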


The engineering question I cared about

I didn’t want a clever setup. I wanted something:

  • repeatable
  • boring
  • safe for mixed assets
  • easy to re-run later

So the real question became: how do I cut image weight across the whole repo with one rule set I can re-run anytime, without hand-tuning individual files?


First principle: pixels matter more than formats

Before debating PNG vs WebP vs AVIF, the biggest issue was pixel count.

If an image is 2400px wide but is never rendered above ~800px, shipping the extra pixels is pure waste.

So I made one hard rule: cap every image at 800px.

This single decision removes a surprising amount of weight.
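A minimal sketch of that cap, assuming the sharp library (the filenames are placeholders):

```ts
import sharp from "sharp";

// Cap both dimensions at 800px while preserving the aspect ratio.
// withoutEnlargement leaves already-small images at their original size.
await sharp("poster.png")
  .resize(800, 800, { fit: "inside", withoutEnlargement: true })
  .toFile("poster-capped.png");
```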


Image formats: quality vs size (same perceived quality)

Once dimensions are sane, format choice actually matters.

Here’s the practical ranking when comparing images at roughly the same visual quality:

AVIF is usually the smallest, WebP comes next, and the original JPEGs and PNGs are the largest. AVIF genuinely compresses better than WebP in many cases.


Why I didn’t standardize on AVIF (yet)

This is important, because AVIF is genuinely impressive.

The reason I didn’t standardize on it is not quality. It’s practical browser support and operational safety.

According to current browser compatibility data:

  • AVIF support is good, but not universal
  • Some older browsers, embedded webviews, and edge cases still fall back poorly

You can see the current state clearly here: https://caniuse.com/?search=avif

For this project, I didn’t want:

  • extra format negotiation
  • more fallbacks
  • “why didn’t this image load on X device?” debugging

So the decision wasn’t “AVIF is bad.”

It was: for this site, WebP is small enough, supported everywhere I care about, and needs zero fallback logic.


The tricky part: photos vs UI/posters

My repo wasn’t just photos.

It had:

  • UI graphics
  • posters with text
  • logos and illustrations

If you treat everything like a photo and apply lossy compression everywhere, UI assets suffer:

  • fuzzy text
  • halos around edges
  • cheap-looking posters

WebP helped here because it supports both lossy and lossless modes.

The challenge was choosing between them without manual tagging.


The simple rule that worked

I used one clean signal:

  • If the image has transparency (alpha), it’s likely UI or a graphic: use lossless WebP
  • If it’s fully opaque, it’s likely a photo: use lossy WebP (quality ~80)

Is this perfect? No. Is it correct often enough to automate safely? Yes.

That single rule handled mixed assets without human intervention.
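A minimal sketch of that rule with sharp, using its `hasAlpha` metadata flag (the function name and filenames are illustrative):

```ts
import sharp from "sharp";

// Choose the WebP mode from one signal: whether the image has an alpha channel.
async function toWebp(input: string, output: string): Promise<void> {
  const { hasAlpha } = await sharp(input).metadata();
  if (hasAlpha) {
    // Transparency usually means a UI asset, logo, or poster: keep pixels exact.
    await sharp(input).webp({ lossless: true }).toFile(output);
  } else {
    // Fully opaque usually means a photo: lossy at quality ~80 looks the same, weighs far less.
    await sharp(input).webp({ quality: 80 }).toFile(output);
  }
}

// e.g. await toWebp("logo.png", "logo-optimized.webp");
```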


Metadata: invisible weight

Many images carry metadata:

  • camera EXIF
  • editing history
  • embedded profiles

None of this helps a webpage load faster or look better.

So I strip metadata unconditionally. Sometimes the savings are small; sometimes they’re large. Either way, it’s free.
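If you’re re-encoding with sharp, stripping is the default behaviour: unless you explicitly chain `.withMetadata()`, the output is written without the input’s metadata. A minimal sketch:

```ts
import sharp from "sharp";

// Re-encoding writes the output without the input's EXIF/XMP/IPTC,
// because .withMetadata() is deliberately not chained here.
await sharp("photo.jpg").webp({ quality: 80 }).toFile("photo-optimized.webp");
```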


Sanity check on a very different repo

To make sure this wasn’t a one-off, I ran the same script, unchanged, on another repo: promptsmint.com, an AI prompts library with only AI-generated images.

Before optimization

  • 92 images
  • 228.12 MB total
  • 2.48 MB per image (avg)

After optimization

  • 92 images
  • 4.28 MB total
  • ~50 KB per image (avg)

Result

  • 98.0% reduction
  • 223.84 MB saved

Different site. Different content. Same outcome.

That confirmed this wasn’t luck. It was just removing waste.


The script

I packaged the whole pipeline into a single script and hosted it here:

https://gist.github.com/onlyoneaman/cb5dbd36ed351b02e46db13d74e6dbe2

It:

  • finds all jpg/jpeg/png/webp
  • outputs *-optimized.webp next to originals
  • caps size at 800px
  • strips metadata
  • uses lossless WebP for transparent images
  • uses lossy WebP for opaque images

No clever tricks. Just consistent rules.
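For reference, here is a minimal sketch of the same rules in Node/TypeScript with sharp. It is not the gist itself, just the shape of the pipeline; the directory default, the `-optimized` naming check, and the quality value are assumptions:

```ts
// optimize-images.ts — cap at 800px, strip metadata, auto-pick WebP mode.
import { promises as fs } from "node:fs";
import path from "node:path";
import sharp from "sharp";

const IMAGE_EXTS = new Set([".jpg", ".jpeg", ".png", ".webp"]);

// Recursively list every file under a directory.
async function walk(dir: string): Promise<string[]> {
  const entries = await fs.readdir(dir, { withFileTypes: true });
  const nested = await Promise.all(
    entries.map((e) => {
      const full = path.join(dir, e.name);
      return e.isDirectory() ? walk(full) : Promise.resolve([full]);
    })
  );
  return nested.flat();
}

// Apply the rules to one image and write *-optimized.webp next to the original.
async function optimize(file: string): Promise<void> {
  const { dir, name } = path.parse(file);
  const out = path.join(dir, `${name}-optimized.webp`);
  const { hasAlpha } = await sharp(file).metadata();

  // Rule 1: cap both dimensions at 800px, never upscale smaller images.
  // Rule 2: metadata is stripped because .withMetadata() is never called.
  const resized = sharp(file).resize(800, 800, {
    fit: "inside",
    withoutEnlargement: true,
  });

  // Rule 3: transparency => lossless (UI/graphics); opaque => lossy ~80 (photos).
  const encoded = hasAlpha
    ? resized.webp({ lossless: true })
    : resized.webp({ quality: 80 });

  await encoded.toFile(out);
}

async function main(): Promise<void> {
  const root = process.argv[2] ?? "public"; // assumed asset directory
  const files = (await walk(root)).filter(
    (f) =>
      IMAGE_EXTS.has(path.extname(f).toLowerCase()) &&
      !f.includes("-optimized")
  );
  for (const file of files) await optimize(file);
  console.log(`Optimized ${files.length} images`);
}

main();
```

Run it with something like `npx tsx optimize-images.ts public`, then point your components at the `-optimized.webp` outputs.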


The actual outcome

I didn’t magically make Next.js faster.

What changed is simpler and more important:

That was the slowness I was noticing, and this fixed it.


Thanks for reading. If you found this useful, you can follow or connect with me on X or LinkedIn.
