Compress JPEG without losing quality — the honest guide
Every "compress image" tutorial on the internet ends with "set quality to 80 and you're done." It's the lazy answer. Quality is a curve, not a dial — q=95 costs three times more bytes than q=85 for output your eye can't tell apart, and chroma subsampling, progressive encoding, and metadata stripping matter at least as much as the slider most tutorials obsess over. Here's the honest version, with the numbers.
The short version
- Resize before you compress. A 6000×4000 photo shrunk to 2000×1333 drops to ~10% the size before the encoder touches a single DCT block. Our image resizer is the first stop.
- Drop the resized file on the compressor. Leave quality at 82 unless you have a reason not to.
- Leave 4:2:0 chroma subsampling on for photos. Only switch to 4:4:4 for graphics with saturated reds, magentas, or pixel-sharp text.
- Leave progressive JPEG on if the file will be larger than ~10 KB. Turn it off for tiny UI sprites.
- Leave strip metadata on unless you know you need EXIF, GPS, or camera data downstream.
- Click Download. Single file → single JPEG. Batch → ZIP.
That's the recipe that produces a JPEG 70-85% smaller than what your camera or editor wrote, and visually indistinguishable on any monitor you'll ever look at it on. The rest of this post explains why each of those defaults is what it is, and when to break them.
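The whole recipe fits in one function. Here's a minimal sketch using Pillow as a stand-in for our in-browser encoder (the function name and the Pillow dependency are this sketch's assumptions, not part of our tool): Lanczos downscale first, then q=82, 4:2:0 chroma, progressive, metadata stripped.

```python
from PIL import Image

def compress_for_web(src_path: str, dst_path: str,
                     max_width: int = 2000, quality: int = 82) -> None:
    """Resize-then-compress recipe: Lanczos downscale, q=82, 4:2:0, progressive."""
    img = Image.open(src_path)
    if img.width > max_width:
        new_h = round(img.height * max_width / img.width)
        img = img.resize((max_width, new_h), Image.LANCZOS)
    # Saving a fresh RGB copy drops EXIF/GPS: Pillow only writes metadata
    # you pass explicitly via exif= / icc_profile=.
    img.convert("RGB").save(
        dst_path,
        format="JPEG",
        quality=quality,
        subsampling=2,    # 2 = 4:2:0 chroma subsampling
        progressive=True,
        optimize=True,    # extra entropy-coding pass, free bytes
    )
```

The same defaults our compressor ships with, in nine lines of encoder settings.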
The quality=80 myth
Every "compress JPEG" tutorial says "set quality to 80." It's not wrong — it's just incomplete. Quality in JPEG (the libjpeg 0-100 scale that almost everyone uses) controls how aggressively the frequency coefficients are quantized, and the mapping from quality to filesize is emphatically not linear.
Here's a representative 4032×3024 iPhone photo — real data from a recent camera roll, not synthetic — re-encoded at a range of qualities with 4:2:0 chroma subsampling, metadata stripped:
| Quality | File size | vs q=100 | Visually |
|---|---|---|---|
| 100 | ~8.4 MB | 100% | Reference. Still lossy (JPEG always is), but as close as the format gets. |
| 95 | ~3.1 MB | 37% | Indistinguishable from q=100 except under pixel-peeping. |
| 90 | ~1.8 MB | 21% | Indistinguishable from q=95 on any non-archival use. |
| 85 | ~1.1 MB | 13% | Our favorite tradeoff point. Visually identical to q=90 on phone/laptop screens. |
| 82 | ~900 KB | 11% | Default. Sub-pixel differences from q=85 that no human eye reliably resolves. |
| 75 | ~650 KB | 7.7% | Slight softening in high-detail areas (foliage, textures). Fine for social. |
| 65 | ~430 KB | 5.1% | Visible artifacts in gradients (sky, skin). Starting to look "compressed." |
| 50 | ~280 KB | 3.3% | Clearly compressed. Blocking visible on flat regions. Only for thumbnails. |
The jump from q=95 to q=90 drops 42% of the bytes for output your eye can't distinguish. q=90 to q=85 drops another 39%, still invisible. By q=82 you're at roughly 1/9th the size of q=100 with no perceptible quality loss. Below q=80 the curve flattens — you save bytes more slowly and start paying in visible artifacts. That's why every competent encoder's defaults cluster at 75-85; it's where the math produces the sweet spot.
The actual rule: start at q=82 and only move up if you can see a problem, not because 82 "sounds low." Most people move the slider up on instinct and pay for it in filesize three times over.
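You can reproduce the shape of that table on your own photos. A small sketch (Pillow assumed; your absolute sizes will differ from the table's, but the curve's shape won't):

```python
from io import BytesIO
from PIL import Image

def size_at_quality(img: Image.Image, quality: int) -> int:
    """Encode in memory and return the byte count at a given libjpeg quality."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality,
                            subsampling=2, optimize=True)
    return buf.tell()

def quality_curve(path: str, qualities=(100, 95, 90, 85, 82, 75, 65, 50)):
    """Map each quality to (bytes, percent of the q=100 size)."""
    img = Image.open(path)
    sizes = {q: size_at_quality(img, q) for q in qualities}
    base = sizes[100]
    return {q: (s, round(100 * s / base, 1)) for q, s in sizes.items()}
```

Run it on a real photo and watch the big drops land between q=100 and q=85, exactly where the table says they should.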
Chroma subsampling — the lever most tools hide
JPEG stores images in YCbCr (luminance + two chroma channels), not RGB. Human vision resolves fine spatial detail in brightness far better than in colour — chroma acuity is roughly half luma acuity — so JPEG exploits that by storing the two chroma channels at lower resolution than the luminance channel. The ratio is called chroma subsampling, and it's the single biggest lever after quality.
- 4:4:4 — no chroma subsampling. Colour stored at full resolution. Largest file, sharpest colour edges.
- 4:2:2 — chroma stored at half horizontal resolution. Middle ground.
- 4:2:0 — chroma stored at half horizontal and half vertical resolution. Smallest file — roughly 20-30% smaller than 4:4:4 at the same quality. Default for virtually every camera, browser, and editor on Earth.
On a photograph, 4:2:0 is invisible outside pixel-peeping a saturated red on a reference monitor. It's the right answer 95% of the time and our default. The 5% of the time it isn't:
- Vibrant red or magenta fine detail — red flower on red foliage, lipstick macro, neon sign with thin strokes. 4:2:0 smears the chroma edges; 4:4:4 preserves them. Textbook "red fringing" case.
- Pixel-sharp text on coloured backgrounds — 4:2:0 produces visible ringing around glyphs. If the file is basically text and UI, it shouldn't be a JPEG at all — see the next section.
- Archival masters that will be re-edited. 4:4:4 resists chroma loss across generations.
TinyJPG and most online compressors don't expose the subsampling setting at all. Squoosh does. Photoshop does. cjpeg/mozjpeg exposes it as -sample. We expose it in advanced settings with 4:2:0 as the default.
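If you script this with Pillow (an assumption of this sketch, not what any of those tools use internally), the subsampling setting is an integer: 0 = 4:4:4, 1 = 4:2:2, 2 = 4:2:0. Measuring the gap is two encodes:

```python
from io import BytesIO
from PIL import Image

# Pillow's JPEG writer takes subsampling as an int: 0=4:4:4, 1=4:2:2, 2=4:2:0.
def encoded_size(img: Image.Image, subsampling: int, quality: int = 82) -> int:
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG",
                            quality=quality, subsampling=subsampling)
    return buf.tell()

def subsampling_savings(img: Image.Image) -> float:
    """Percent saved by 4:2:0 over 4:4:4 at the same quality."""
    full, sub = encoded_size(img, 0), encoded_size(img, 2)
    return round(100 * (full - sub) / full, 1)
```

On typical photographic content the savings land in the 20-30% range quoted above; on a flat-colour graphic they can be near zero, which is another hint the file shouldn't be a JPEG.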
Progressive JPEG — when it's a win
A baseline JPEG stores pixels top-to-bottom; the browser paints it in bands. A progressive JPEG stores pixels in passes of increasing detail — the browser paints a fuzzy full-image preview first, then sharpens it pass by pass. On a slow network it feels dramatically faster.
Two properties worth knowing:
- Progressive files are 2-8% smaller at the same quality — better entropy coding. Free win, not a tradeoff.
- Decoding takes more CPU/memory because of the multiple passes. On tiny files this overhead isn't free.
Rule: over ~10 KB → progressive; under ~10 KB → baseline. Real photos progressive; UI icons and small thumbnails baseline. Our compressor picks automatically by size. Most CMS pipelines (WordPress, Shopify) don't — compressing progressively before upload is a real win there.
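The automatic pick is easy to sketch: encode once, check the size, re-encode progressively if it clears the threshold. A Pillow-based illustration (the ~10 KB cutoff is the rule of thumb from the text, not a magic constant):

```python
from io import BytesIO
from PIL import Image

def save_jpeg_auto_progressive(img: Image.Image, path: str,
                               quality: int = 82,
                               threshold: int = 10_000) -> bool:
    """Baseline for tiny files, progressive above ~10 KB.
    Returns True if progressive encoding was used."""
    rgb = img.convert("RGB")
    buf = BytesIO()
    rgb.save(buf, format="JPEG", quality=quality, subsampling=2)
    use_progressive = buf.tell() > threshold
    rgb.save(path, format="JPEG", quality=quality, subsampling=2,
             progressive=use_progressive, optimize=True)
    return use_progressive
```

The double encode costs a few milliseconds; the browser-side payoff on slow networks is the fuzzy-then-sharp paint described above.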
The recompression trap
A JPEG saved at q=85 isn't "an 85% quality image." It's an image whose DCT coefficients have been quantized using the q=85 tables. Re-saving at q=85 re-quantizes the already-quantized coefficients, introducing additional rounding error. Every save after the first accumulates more loss. This is "generation loss" and it's real.
- Don't recompress without reason. A 500-KB JPEG re-saved at q=85 gets smaller and worse, not just smaller.
- If you must recompress, go lower. Re-saving at q=75 at least gives you meaningful bytes back for the artifacts you're accepting.
- Archive the lossless original. Keep the PNG/TIFF/RAW; compress to JPEG only at delivery. Never edit a JPEG into another JPEG you care about.
- Rotations and crops should be lossless. jpegtran and our compressor do transforms that don't re-quantize; most consumer tools silently accumulate loss on every save.
Our compressor detects likely-already-compressed input (high quantization tables, small filesize for the pixel count) and warns: "this file is already compressed; re-saving reduces quality without saving much space." Almost no other tool surfaces this. TinyJPG cheerfully recompresses; Squoosh will happily save you a 3% smaller worse file.
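A rough version of that detection can be sketched by inspecting the quantization tables Pillow exposes on opened JPEGs: coarse tables (big divisors) mean the file was already saved at moderate or low quality. The threshold below is an illustrative guess for this sketch, not our tool's calibrated value:

```python
from PIL import Image

def looks_recompressed(path: str, mean_threshold: float = 12.0) -> bool:
    """Heuristic: a coarse luminance quantization table suggests the file
    was already encoded at moderate/low quality, so re-saving will mostly
    add artifacts. Threshold is illustrative, not calibrated."""
    img = Image.open(path)
    if img.format != "JPEG" or not getattr(img, "quantization", None):
        return False
    luma = img.quantization[0]  # table 0 is normally luminance
    return sum(luma) / len(luma) > mean_threshold
```

At q=95 the mean table value sits around 4-5; at q=50 it's near 40, so even a crude threshold separates "fresh high-quality save" from "already squeezed".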
When JPEG isn't the right target
The biggest mistake in image compression is using JPEG for things JPEG is bad at. JPEG is built for photographs — smooth gradients, natural textures, the statistical properties of camera sensor output. For anything else the compression math works against you.
- Line art, logos, vector exports — sharp edges ring and smear under DCT. Use PNG or WebP-lossless. A 40-KB logo PNG becomes an uglier 80-KB JPEG.
- Screenshots with text — antialiased glyph edges are what JPEG destroys. Use PNG unless the shot is ~90% photo content.
- Anything with transparency — JPEG has no alpha. Saving a transparent image as JPEG silently fills the background. Use PNG for lossless alpha; WebP for lossy alpha.
- Pixel art, diagrams, UI mockups — hard edges and flat regions. JPEG rings around every boundary. Use PNG or SVG.
- Modern web delivery — WebP at equivalent quality is typically 25-35% smaller than JPEG, supported in every mainstream browser since 2020.
Shortest rule: if you can count the distinct colours in your image, it's not a JPEG.
Metadata stripping — size wins plus privacy
Every JPEG your camera writes carries an EXIF payload. Camera make, lens, ISO, shutter, focal length, date — and if location was on, the GPS coordinates where you took the photo. Often a separate embedded thumbnail (yes, a second copy of the image in the metadata; yes, 10-50 KB). Plus colour profile, plus increasingly ML-generated face and subject tags.
- Size — EXIF plus thumbnail is easily 50-100 KB. On an 800-KB photo that's a free 6-12% win.
- Privacy — GPS coordinates in social uploads have doxxed people. Posting from home and attaching your address is easy to do without realising.
- Consistency — stripped metadata means the file looks the same on every viewer.
Our compressor strips everything except the sRGB ICC profile by default. Toggle off if you need EXIF downstream (archival photography, legal evidence). TinyJPG strips silently. Photoshop "Save for Web" gives per-category control. cjpeg/mozjpeg strips by default.
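Checking for the GPS payload and stripping everything is short in Pillow (assumed here as the scripting stand-in; note this sketch also drops the ICC profile, unlike our default, and re-encodes lossily, unlike jpegtran's lossless strip):

```python
from PIL import Image

GPS_IFD_TAG = 34853  # standard EXIF pointer to the GPS sub-IFD

def has_gps(path: str) -> bool:
    """True if the file's EXIF block carries GPS coordinates."""
    return GPS_IFD_TAG in Image.open(path).getexif()

def strip_metadata(src: str, dst: str, quality: int = 82) -> None:
    """Re-encode without passing exif= or icc_profile=;
    Pillow then writes no metadata at all (profile included)."""
    Image.open(src).convert("RGB").save(
        dst, format="JPEG", quality=quality,
        subsampling=2, progressive=True)
```

Worth running `has_gps` over anything headed to a public upload.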
Resize first, compress second
The single biggest source of oversized JPEGs is failing to resize before compressing. A 4032×3024 iPhone photo has ~12 million pixels; the blog post displays it at 2000×1500 or less. Those extra pixels are costing you bytes for zero benefit — the browser downscales them on every page load anyway.
Filesize scales roughly with pixel count, which is the square of the linear dimension ratio. A 4032×3024 photo at 1.5 MB becomes a 2000×1500 photo at ~370 KB before the compressor touches it — the encoder just has 1/4 the pixels. Then q=82 takes it to ~180 KB, where most web images should live.
- Decide the max display size. Retina-2x of the biggest it'll ever show. 2000px wide is usually plenty.
- Resize with a good resampler. Our resizer uses Lanczos — sharper than browser-default bilinear.
- Then compress with our JPEG compressor.
Flipping the order produces a worse result — the resampler ends up smoothing over compression artifacts instead of clean pixels. Resize first, always.
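Computing the target dimensions from the display width is pure arithmetic. A small sketch of the retina-2x rule from step one (function name is this sketch's invention):

```python
def target_size(width: int, height: int,
                css_width: int = 1000, dpr: int = 2) -> tuple[int, int]:
    """Retina-aware encode target: css display width times device pixel
    ratio, preserving aspect ratio and never upscaling."""
    max_w = css_width * dpr
    if width <= max_w:
        return width, height  # already small enough, don't upscale
    return max_w, round(height * max_w / width)
```

For the iPhone example: `target_size(4032, 3024)` gives (2000, 1500), the ~1/4-the-pixels figure quoted above.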
Batch workflow and presets
If you're doing this daily (blogger, e-commerce, real estate), you want presets. Our compressor ships with four:
- Web (default) — q=82, 4:2:0, progressive, strip metadata. 90% of web-destined photos.
- Social — q=85, 4:2:0, progressive, strip metadata, cap 2400px. Matches Instagram/Twitter/Facebook re-encoding.
- Email — q=78, 4:2:0, baseline, strip metadata, cap 1600px. Small enough for Gmail without Drive fallback.
- Archival — q=92, 4:4:4, progressive, preserve metadata. Still 1/3 the size of q=100.
Drop a folder, pick a preset, download the ZIP.
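The presets-plus-ZIP flow is a short loop if you're scripting it yourself. A Pillow/stdlib sketch of the same four presets (metadata preservation for the archival preset is omitted here for brevity; the text's version keeps it):

```python
import zipfile
from io import BytesIO
from pathlib import Path
from PIL import Image

# The four presets from the text; "cap" is the max long edge in px.
PRESETS = {
    "web":      dict(quality=82, subsampling=2, progressive=True,  cap=None),
    "social":   dict(quality=85, subsampling=2, progressive=True,  cap=2400),
    "email":    dict(quality=78, subsampling=2, progressive=False, cap=1600),
    "archival": dict(quality=92, subsampling=0, progressive=True,  cap=None),
}

def compress_folder(folder: str, zip_path: str, preset: str = "web") -> int:
    """Apply a preset to every JPEG/PNG in a folder, write results to a ZIP.
    Returns the number of files processed."""
    p = PRESETS[preset]
    count = 0
    with zipfile.ZipFile(zip_path, "w") as zf:
        for f in sorted(Path(folder).iterdir()):
            if f.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
                continue
            img = Image.open(f).convert("RGB")
            if p["cap"] and max(img.size) > p["cap"]:
                img.thumbnail((p["cap"], p["cap"]), Image.LANCZOS)
            buf = BytesIO()
            img.save(buf, format="JPEG", quality=p["quality"],
                     subsampling=p["subsampling"],
                     progressive=p["progressive"])
            zf.writestr(f.stem + ".jpg", buf.getvalue())
            count += 1
    return count
```

Same shape as the tool's batch mode: drop a folder in, get one ZIP out.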
How our tool compares (honestly)
Image compression is a crowded space. What differs is whether the tool uploads your files, what encoder it uses, whether it exposes chroma subsampling, and whether it pretends q=80 is the whole story. Honest scoresheet:
| Tool | Cost | Where it wins | Where it loses |
|---|---|---|---|
| FireConvertApp | Free | mozjpeg-grade encoder, runs in-browser (no upload), exposes chroma subsampling and progressive mode, detects recompression and warns, metadata strip defaults, presets for web/social/email/archival, batch via folder drop | No perceptual-quality targeting (SSIM/Butteraugli) like guetzli — a roadmap item; no multi-image color-profile consolidation for bulk brand work |
| Squoosh (Google) | Free | Excellent reference tool; runs in-browser; exposes every encoder knob; supports WebP, AVIF, JPEG XL; side-by-side quality preview | One file at a time — no batch; no presets; UI is a fiddler's tool, not a workflow tool; no recompression warning; maintenance has slowed since Google deprioritised it |
| TinyJPG / TinyPNG | Free up to 20/mo, $39/yr Pro | Aggressive smart compression; typically 50-70% size reduction; simple UI; well-known brand | Uploads every file to their servers (privacy concern for some); no quality or subsampling controls; opaque about what it did to your file; 20-file/month free cap; per-file size cap on free tier |
| Photoshop "Save for Web" | $22.99/mo (Photography plan) | Exposes quality, subsampling, progressive, metadata individually; live preview with target-filesize mode; colour-managed; scriptable batch via Image Processor | Expensive for a compressor; uses older libjpeg encoder (bigger files than mozjpeg at equivalent quality); "Save for Web" is technically deprecated in favour of Export As, which has fewer controls |
| cjpeg / mozjpeg CLI | Free | The reference encoder; best size-at-quality in the industry; full scriptability; jpegtran for lossless rotations and metadata strips; batch via shell | Command-line only; requires install + a little toolchain literacy; no UI; no batch ZIP; steep first-time curve |
| Apple Preview "Reduce File Size" | Free (Mac only) | Zero-install; built into every Mac; one-click; works on PDF and image | Mac only; single hidden quality target ("Reduce File Size", no slider); often produces files larger than the original if the original was already compressed; no batch |
Honest summary: Photoshop for pixel-level encoder control, cjpeg/mozjpeg for scripted pipelines, Squoosh for single-file fiddling and learning. For everyone else — batches of real photos, sensible defaults, in-browser privacy, no subscription — our image compressor is the shortest path.
Works well / doesn't work
Works well
- Real-world photography heading to blogs, e-commerce, social, email
- Batches up to ~500 photos per session
- Mixed-quality input — we detect already-compressed files and warn
- Privacy-sensitive files (runs in your browser tab; no upload)
- Pre-upload prep for platforms that will re-encode (Instagram, WordPress)
Doesn't work (well) yet
- Perceptual-quality targeting (guetzli, SSIMULACRA 2) — on the roadmap
- JPEG XL and AVIF encoding — WebP is supported; JXL/AVIF planned for late 2026
- Arithmetic-coded JPEG — marginal size win, no decoder support, skipped
Tips for the best result
- Resize before compressing, not after. Run our resizer first, then the compressor. Compounding.
- Start at q=82. Only move up if you can see a problem. Most people move the slider up on instinct and pay three times the filesize.
- Leave 4:2:0 chroma on for photos. Only switch to 4:4:4 for red-heavy macros, coloured text, or archival masters.
- Leave progressive on for anything over 10 KB. Strictly smaller and perceptibly faster on slow networks.
- Strip metadata unless you specifically need it. Saves bytes plus removes GPS coordinates from anything you post publicly.
- Don't recompress if you don't have to. Archive the lossless original; compress once, at delivery time.
- If the file is text, line art, or transparency, it shouldn't be a JPEG. Pick PNG or WebP instead.
Common questions
What quality should I use for web images?
q=82 with 4:2:0 chroma and progressive encoding. Visually identical to q=95 at roughly 1/3 the filesize. Go higher for archival masters; go down to q=75 for thumbnails viewed small.
Why did my file get bigger after compression?
Three likely reasons: the source was already more compressed than your target (q=70 re-saved at q=90 gets bigger); you kept 4:4:4 chroma on a 4:2:0 source; or you preserved metadata the source didn't have. Our compressor warns when this is about to happen.
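The guard against that is cheap: re-encode into memory, compare, and only keep the new bytes if they won. A Pillow sketch of that keep-the-smaller check (the function name is this sketch's, not a tool API):

```python
from io import BytesIO
from PIL import Image

def compress_if_smaller(src: str, dst: str, quality: int = 82) -> bool:
    """Re-encode only if it actually shrinks the file; otherwise pass the
    original bytes through untouched. Returns True if re-encoded."""
    with open(src, "rb") as f:
        original = f.read()
    buf = BytesIO()
    Image.open(src).convert("RGB").save(
        buf, format="JPEG", quality=quality,
        subsampling=2, progressive=True, optimize=True)
    recompressed = buf.getvalue()
    winner = recompressed if len(recompressed) < len(original) else original
    with open(dst, "wb") as f:
        f.write(winner)
    return winner is recompressed
```

Passing the original through also avoids the generation loss from the recompression-trap section: if re-encoding can't save bytes, it can only cost quality.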
Is there a difference between "compress" and "optimize"?
"Optimize" usually means lossless — better entropy coding and lossless transforms (jpegtran -optimize) without touching the DCT. Gains of 3-10%. "Compress" means lossy re-encoding at a chosen quality, gains of 60-85%. Our tool offers an "optimize only" mode that leaves quality untouched.
Should I use WebP instead of JPEG?
For modern web delivery, probably yes — WebP at equivalent quality is 25-35% smaller and supported in every mainstream browser since 2020. JPEG still wins for email and downloads that might land in older viewers. Our format converter does JPEG→WebP in the same session.
Will compressing a JPEG lose quality every time?
Yes, unless you use lossless-optimize mode. Every re-encoding re-quantizes the DCT coefficients and accumulates rounding error. Keep the lossless source (PNG, TIFF, RAW); compress to JPEG once at delivery; don't edit a JPEG into another JPEG you care about.
Are my files private?
Yes. Compression runs in your browser tab via WebAssembly — the file never leaves your machine. Different from TinyJPG, iLoveIMG, or online-convert.com, all of which upload. Matters for personal photos or NDA-covered client work.
Can I compress PNGs the same way?
Yes, but the mechanics differ. PNG is lossless, so compression means better DEFLATE tuning plus optional palette reduction. Our tool detects PNG input and switches to PNG-specific optimization (pngquant-style palette reduction plus zopfli-class DEFLATE). For photographic PNGs, converting to JPEG at q=82 beats any PNG optimization by 5-10×.
Ready?
Image compressor →. Drop your photos, leave the defaults, download the ZIP. Free, in your browser, no upload, no watermark, no sign-up. If the images are coming off a phone or camera at full resolution, chain them through our resizer first — resize then compress always beats compress alone. If the file is text, line art, or needs transparency, pick PNG or WebP instead; our format converter handles that in the same session.