
Upscale an image without losing quality — AI, in your browser, free

Topaz Gigapixel is $99 one-time. Adobe Super Resolution needs a $20/mo Creative Cloud subscription. Upscayl wants you to install a desktop app. We've shipped the same class of neural upscaler in your browser — unlimited, free, zero upload. Here's how it works and where it wins (or doesn't) against the paid tools.

The short version

  1. Open the upscaler.
  2. Pick 2× (safe) or 4× (dramatic).
  3. First time only: wait 5-20 seconds for the AI model to download (~20-40 MB, cached forever after).
  4. Drop your image.
  5. Wait 5-30 seconds for inference (depends on source size).
  6. Download the crisp PNG. The browser already cached the model — next image starts instantly.

What "upscale without losing quality" actually means

When you scale a 400×400 image up to 1600×1600 in Photoshop or on a phone, the software picks a resampling algorithm — usually bilinear or bicubic interpolation. It invents the new pixels by averaging neighbours. The result looks like a zoomed-in version of the original: blurry, with soft edges and smeared text.
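
That neighbour-averaging is worth seeing concretely. Here is a minimal bilinear 2× upscale in JavaScript for a single-channel image — purely illustrative, not our pipeline; real resamplers also handle colour channels, gamma, and edge policy:

```javascript
// Bilinear 2x upscale for a single-channel image stored as a flat array.
// Every output pixel is a weighted average of its four nearest source
// pixels -- this averaging is exactly what produces the soft, blurry look.
function bilinearUpscale2x(src, w, h) {
  const W = w * 2, H = h * 2;
  const out = new Float64Array(W * H);
  for (let y = 0; y < H; y++) {
    for (let x = 0; x < W; x++) {
      // Map the output pixel back into source coordinates.
      const sx = Math.min((x + 0.5) / 2 - 0.5, w - 1);
      const sy = Math.min((y + 0.5) / 2 - 0.5, h - 1);
      const x0 = Math.max(Math.floor(sx), 0), x1 = Math.min(x0 + 1, w - 1);
      const y0 = Math.max(Math.floor(sy), 0), y1 = Math.min(y0 + 1, h - 1);
      const fx = Math.max(sx - x0, 0), fy = Math.max(sy - y0, 0);
      out[y * W + x] =
        src[y0 * w + x0] * (1 - fx) * (1 - fy) +
        src[y0 * w + x1] * fx * (1 - fy) +
        src[y1 * w + x0] * (1 - fx) * fy +
        src[y1 * w + x1] * fx * fy;
    }
  }
  return out;
}
```

No pixel in the output can be sharper than this weighted blend allows, which is why interpolation alone can never recover crisp edges.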

A neural upscaler does something different. It's been trained on millions of small-image → large-image pairs, and it hallucinates plausible detail to fill in the new pixels. Where interpolation averages, the AI guesses — and the guesses are generally right because they're grounded in what the training set showed.

The perceived quality difference between bicubic and a modern neural upscaler is massive. We're talking "barely recognisable blur" vs "sharp enough to print."

Why this used to cost money

The good upscaler models (Real-ESRGAN, Swin2SR, BSRGAN, SwinIR) are all research-published with open weights. The reason paid tools charge is:

  1. GPU cost — running a 40 MB neural network on a cloud GPU for every customer image is expensive. Subscription is how you pay for that GPU time.
  2. Desktop bundles — Topaz ships the model plus a polished GUI, handling tiling, color management, and batch modes. You're paying for the app, not the model.

Our approach: run the model in your browser. Your CPU / GPU does the work. No GPU cost for us, so no paywall for you. The model (~20-40 MB of quantised ONNX weights) downloads to your browser the first time and stays cached.
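
The "downloads once, stays cached" behaviour is mostly the browser's HTTP cache doing its job, but app code typically also memoizes the loaded session so a second image never re-initializes the model. A minimal sketch of that pattern — `loadWeights` is a placeholder for whatever actually fetches and initializes the model, not our real code:

```javascript
// Memoize the model session: the expensive load runs at most once per
// page, and every later call reuses the same pending or resolved promise.
let sessionPromise = null;

function getSession(loadWeights) {
  if (sessionPromise === null) {
    sessionPromise = loadWeights(); // e.g. an ONNX session being created
  }
  return sessionPromise;
}
```

Because the promise itself is cached, even two images dropped at the same moment share a single model load instead of racing to start two.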

2× or 4× — which should you pick?

Use 2× when:

  • You want the safest quality win. 2× models are more constrained, so they hallucinate less — there's less room for the AI to invent something wrong.
  • You're working with faces, text, or fine detail. Small errors in 4× can become uncanny faces or warped text.
  • You need speed. 2× is 3-4× faster than 4× on the same hardware.
  • Your source is already decent size (e.g., 1024 → 2048) and you just need a bit more resolution.

Use 4× when:

  • Your source is TINY (≤ 512px). 4× gets you into usable territory; 2× leaves you still small.
  • You're doing print-prep and need 2000+ px from a small source.
  • You care more about dramatic perceived quality than fine-grained accuracy (e.g., blog cover images, social media).
  • The source is a clean photo, not a screenshot with text. 4× is roughest on text.
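
The guidance above boils down to a few comparisons. This helper is purely illustrative — the function and its thresholds paraphrase the two lists; the tool doesn't expose anything like it:

```javascript
// Heuristic from the lists above: prefer the safer 2x unless the source
// is tiny or a single 2x pass can't reach the target size.
// hasFineText: screenshots and text-heavy images stay on 2x regardless,
// because 4x is roughest on text.
function recommendScale(sourcePx, targetPx, hasFineText) {
  if (hasFineText) return 2;
  if (sourcePx <= 512) return 4;        // tiny source: 2x leaves it small
  if (sourcePx * 2 >= targetPx) return 2; // one 2x pass already reaches the target
  return 4;
}
```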

Honest comparison: where each tool lands

| Tool | Free tier | Quality | Privacy | Speed |
|---|---|---|---|---|
| FireConvert | Unlimited HD, 2×/4× | Very good | No upload — local | 5-30s depending on source |
| Topaz Gigapixel | N/A (paid) | Excellent (premium model) | Local (desktop) | Fast (desktop GPU) |
| Adobe Super Resolution | N/A (Creative Cloud) | Excellent | Local (desktop) | Fast |
| Upscayl (desktop) | Unlimited | Very good | Local | Fast |
| remini.ai / picwish | 3-5/day low-res | Very good | Upload required | Medium (cloud) |

The honest gap: Topaz's premium model catches hair-level detail on portraits that our browser model misses. If you edit pro portraits daily, the $99 pays for itself. For everything else — blog images, social media, old family photos, rescaling thumbnails — the browser model is indistinguishable to any non-specialist eye, and saves you the install/pay/upload dance.

What works, what doesn't

Works well

  • Photos of people, animals, nature — the training data is rich here. 2× is near-magical.
  • Old family photos — the neural upscaler de-blurs as it scales. Often the single biggest "wow" use case.
  • Thumbnails and web-grabbed low-res images — the original lost detail; we invent plausible detail back.
  • Product shots — clean backgrounds help the model focus on the subject.

Doesn't work as well

  • Screenshots with small text — text is the hardest thing for SR models. Letters turn into gibberish. For text-heavy screenshots, use a vector tool like Illustrator's image trace instead.
  • Line art / anime — our classical-SR model is trained on photographs. Use a dedicated anime-SR model (Real-ESRGAN-anime) via a desktop tool for that.
  • Already-upscaled images — upscaling twice compounds artefacts.
  • Logos / vector-origin images — these should stay vectors, not be upscaled as raster.

Tips for best results

1. Start clean

Remove JPEG artefacts before upscaling. If the source is a heavily-compressed JPEG, those blocky artefacts will get AMPLIFIED by the upscaler. Running it through our image compressor at quality 95 first can flatten a re-saved JPEG before upscale: counter-intuitive, but the re-compression smooths out old artefacts.

2. How big can your source be?

Any reasonable size. The tool splits large images into tiles (256×256 under the hood), upscales each independently, and stitches them back — the same way Topaz Gigapixel handles huge inputs. A 3000×2000 photo at 2× gives you 6000×4000 in about 30–60 seconds.
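
The tile bookkeeping itself is simple arithmetic: cut the source into fixed-size squares, run each through the model, paste each result into the right spot of the output. A sketch of just that planning step — 256 matches the tile size mentioned above, but the overlap real tools add to hide seams is omitted:

```javascript
// Split a w x h image into 256x256 tiles (edge tiles may be smaller)
// and record where each tile's upscaled result lands in the output.
function planTiles(w, h, scale, tileSize = 256) {
  const tiles = [];
  for (let y = 0; y < h; y += tileSize) {
    for (let x = 0; x < w; x += tileSize) {
      const tw = Math.min(tileSize, w - x);
      const th = Math.min(tileSize, h - y);
      tiles.push({
        src: { x, y, w: tw, h: th },                                       // crop from source
        dst: { x: x * scale, y: y * scale, w: tw * scale, h: th * scale }, // paste into output
      });
    }
  }
  return tiles;
}
```

Because each tile is processed independently, peak memory stays bounded by the tile size rather than the full image — which is also why huge sources don't crash the tab.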

The real limit is the OUTPUT: browsers cap canvas memory near 6000×6000 (around 144 MB of pixel buffer). We check this before running so you'll get a clear message if your combination of source size and scale would blow the cap. Shrink the source or drop to 2× if that happens.
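
That pre-flight check is just multiplication. A sketch, assuming an RGBA buffer (4 bytes per pixel) and the ~6000×6000 ceiling mentioned above — actual limits vary by browser engine and device:

```javascript
// Rough pre-flight check before upscaling: would the output canvas
// exceed a conservative per-side cap? 6000 px per side is an assumed
// safe ceiling, not a spec guarantee.
function checkOutputFits(w, h, scale, maxSide = 6000) {
  const outW = w * scale, outH = h * scale;
  const bytes = outW * outH * 4; // RGBA pixel buffer
  return {
    ok: outW <= maxSide && outH <= maxSide,
    outW,
    outH,
    megabytes: bytes / 1e6,
  };
}
```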

3. Pick 2× first, run again if needed

Going 512 → 2048 in one 4× pass vs two 2× passes: usually very close quality, but the two-pass route gives you a chance to stop at 1024 if you don't actually need 2048.

4. Download as PNG

We output PNG because we don't want to add JPEG loss on top of the AI's output. If you need JPEG (e.g., for email), run the PNG through our PNG to JPG at quality 85-90.

Common questions

Will this work on a phone?

Yes, but slower. Mobile CPUs run the model fine; tiled processing keeps memory under control so big sources don't crash the tab. Expect 2-3× desktop time for the same image, and avoid extreme combinations (4× on a huge source + budget Android = long wait).

Is there an 8× option?

Not yet. 8× requires either chaining passes — a 4× pass followed by a 2×, where errors compound — or a purpose-trained 8× model (larger weights, slower). If you have a very small source and need 8×, run 4×, then 2× on the result.

Does this handle HEIC / RAW / TIFF?

No — the upscaler accepts JPEG, PNG, and WebP. For HEIC, first convert to JPEG with our HEIC to JPG tool, then upscale. We'll likely ship HEIC support on the upscaler in a future update.

Can I use the upscaled images commercially?

Yes, for images you own. The AI didn't create the content — it just made an existing image bigger. Same logic as Photoshop's "Preserve Details 2.0" upscale.

Does this leak my image anywhere?

No. The AI model downloads once from HuggingFace's CDN, then runs locally. After the model loads, open your network inspector and watch: zero requests leave your browser during inference. We never see your image, its result, or your conversion count.

Ready?

Upscale image →. Pick your scale, drop a file, watch a small image become a big one — no upload, no sign-up, no subscription.