How to Upscale Low-Resolution Artwork for Print
The complete guide to enlarging images for print-on-demand: pixel math, bicubic vs AI upscaling, chained upscale factors, and when it works (and when it does not).
Why low-resolution images look bad in print
Every printed image is made up of tiny dots of ink. The density of these dots is measured in DPI (dots per inch). At 300 DPI, each inch contains 300 dots, which is fine enough that the human eye cannot distinguish individual dots at normal viewing distance.
When an image does not have enough pixels to fill the target print size at 300 DPI, the printer must spread fewer pixels over a larger area. The result is pixelation: individual color blocks become visible, smooth gradients become jagged staircase patterns, and the overall appearance looks blocky and low-quality.
This is especially relevant for AI-generated artwork. Most AI image generators (Midjourney, DALL-E, Stable Diffusion, Leonardo) output images at 1024 x 1024 pixels, which prints at only 3.41 x 3.41 inches at 300 DPI. Even the highest-quality AI output needs significant enlargement before it is usable for any physical product larger than a sticker.
DPI alone does not fix the problem
Changing the DPI metadata tag from 72 to 300 does not add pixels. It only tells the printer to pack existing pixels more tightly, making the printed output physically smaller. To make it print larger, you must actually add new pixels through upscaling.
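The arithmetic is easy to check: the same pixel count maps to a different physical size depending only on the DPI value. A quick sketch:

```python
px = 1024  # pixel width is fixed; only the DPI tag differs

for dpi in (72, 300):
    print(f"{px}px at {dpi} DPI prints {px / dpi:.2f} inches wide")

# Same 1024 pixels either way: ~14.22" at 72 DPI, ~3.41" at 300 DPI.
```

Retagging from 72 to 300 DPI shrinks the print from roughly 14 inches to roughly 3.4 inches; it never adds detail.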
The pixel math: minimum pixels for common print sizes
The formula is straightforward: pixels = print size (inches) x DPI. For 300 DPI, a 10-inch print edge requires 3000 pixels. A 24-inch poster edge requires 7200 pixels.
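The formula is simple enough to wrap in a one-line helper; the function below is a minimal sketch of the calculation used throughout this guide.

```python
def pixels_needed(print_inches: float, dpi: int = 300) -> int:
    """Minimum pixels along one edge: print size (inches) x DPI."""
    return round(print_inches * dpi)

print(pixels_needed(10))    # 3000 px for a 10-inch edge
print(pixels_needed(24))    # 7200 px for a 24-inch poster edge
print(pixels_needed(9.5))   # 2850 px for a mug wrap
```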
| Product / Size | Print Size | Pixels at 300 DPI | Megapixels | Typical AI Source |
|---|---|---|---|---|
| Sticker (3") | 3 x 3" | 900 x 900 | 0.81 | 1024px OK |
| Mug (wrap) | 9.5 x 4" | 2850 x 1200 | 3.4 | Needs 2x-4x |
| T-Shirt | 15 x 18" | 4500 x 5400 | 24.3 | Needs 4x+ |
| Poster (18x24") | 18 x 24" | 5400 x 7200 | 38.9 | Needs 4x-8x |
| Canvas (24x36") | 24 x 36" | 7200 x 10800 | 77.8 | Needs 8x+ |
| Large Format | 36 x 48" | 10800 x 14400 | 155.5 | Needs 12x+ |
Minimum pixel dimensions at 300 DPI. "Typical AI Source" assumes a 1024x1024 starting image.
The gap between what AI generators produce and what print providers require is enormous. A standard 1024 x 1024 AI output contains about 1 megapixel. A 24 x 36 inch poster at 300 DPI requires roughly 78 megapixels, about a 74x increase in pixel data.
Traditional upscaling: bicubic interpolation and why it blurs
Before AI upscaling existed, the standard method for enlarging images was interpolation. The most common algorithm is bicubic interpolation. It calculates new pixel values as a weighted average of surrounding source pixels. When you enlarge a 1000 x 1000 image to 4000 x 4000, the algorithm creates 15 million new pixels by averaging the colors of nearby source pixels.
The problem with averaging is that it smooths away detail. Sharp edges become gradual gradients. Fine textures get smeared into blobs. The result looks like viewing the original through frosted glass.
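The smoothing effect is visible even in one dimension. The toy function below upscales a row of pixel values by linear interpolation (the 1-D analogue of bicubic's weighted averaging, shown here for simplicity) and turns a sharp black-to-white edge into a gradual ramp:

```python
def upscale_linear(row, factor):
    """Upscale a 1-D row of pixel values by linear interpolation:
    every new pixel is a weighted average of its two source neighbors."""
    out = []
    n = len(row)
    for i in range((n - 1) * factor + 1):
        pos = i / factor          # position in source coordinates
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        t = pos - lo              # blend weight between the two neighbors
        out.append(round(row[lo] * (1 - t) + row[hi] * t))
    return out

sharp_edge = [0, 0, 255, 255]     # a hard black-to-white transition
print(upscale_linear(sharp_edge, 4))
# [0, 0, 0, 0, 0, 64, 128, 191, 255, 255, 255, 255, 255]
```

The single-pixel jump from 0 to 255 becomes the ramp 64, 128, 191: exactly the "frosted glass" softening described above, and bicubic behaves the same way in two dimensions.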
For small enlargements (up to 150% or 1.5x), bicubic interpolation produces acceptable results. Beyond 200% (2x), the quality loss becomes clearly visible. At 400% (4x), the result is noticeably blurry. At 800% (8x), the output looks like a watercolor painting with no crisp edges.
Photoshop's Preserve Details 2.0
Adobe Photoshop's "Preserve Details 2.0" uses a basic neural network to improve on bicubic results. It produces better edges and less blur than standard bicubic, but it still cannot match dedicated AI upscaling models that are trained specifically for image super-resolution.
AI upscaling: how neural networks add realistic detail
AI upscaling, also called super-resolution, uses deep neural networks trained on millions of image pairs (low-resolution input, high-resolution ground truth) to learn what fine detail should look like when an image is enlarged. Instead of averaging neighboring pixels, the network predicts what plausible high-resolution detail should exist.
The most widely used architecture is ESRGAN (Enhanced Super-Resolution Generative Adversarial Network). ESRGAN uses two competing networks: a generator that creates convincing high-resolution images, and a discriminator that detects whether an image is real or generated. Through millions of training iterations, the generator learns to produce output that contains realistic-looking detail.
Beyond ESRGAN, newer models like Bria Increase Resolution and TopazLabs use proprietary architectures optimized for different image types. Bria excels at preserving fine detail in artwork, while TopazLabs handles photographs with particularly convincing results. Each model has strengths for different content types, which is why professional workflows often chain multiple models together.
AI upscaling does not create "real" detail
The detail added by AI upscaling is plausible, not actual. The network synthesizes textures and edges that look correct based on training data. For print-on-demand artwork, this is perfectly acceptable because the goal is visual quality, not documentary accuracy.
Comparison: traditional vs AI upscaling
The difference between traditional interpolation and AI upscaling becomes most apparent at higher enlargement factors. Traditional methods are adequate for small enlargements (up to 1.5x) but fall apart at the 4x to 8x factors required for most print-on-demand products.
| Feature | Bicubic | Photoshop PD 2.0 | AI (ESRGAN) |
|---|---|---|---|
| Sharp edges at 2x | Yes | Yes | Yes |
| Sharp edges at 4x | No | Partial | Yes |
| Sharp edges at 8x | No | No | Partial |
| Realistic texture synthesis | No | No | Yes |
| Alpha channel preservation | Yes | Yes | Varies |
| Batch processing | Manual | Actions | Manual |
| Processing time (4x, 4K) | ~2 sec | ~5 sec | ~30 sec |
AI upscaling maintains visual quality at factors where bicubic interpolation produces unusable results. The trade-off is processing time: bicubic is nearly instant, while AI upscaling takes 10 to 60 seconds per image.
When AI upscaling works well vs. when it does not
AI upscaling is not a magic wand. It produces excellent results for some image types and poor results for others.
AI upscaling works best with
- AI-generated artwork: Consistent detail patterns that upscale cleanly
- Digital illustrations: Clean edges and solid color areas
- High-quality photographs: Good focus, proper exposure, low noise
- Typography and logos: Well-defined edges produce sharp results
- Images starting at 1024px or larger: More source detail means less hallucination
AI upscaling struggles with
- Heavily compressed JPEGs: Compression artifacts get amplified
- Very small images (under 256px): Too few source pixels to work from
- Noisy or grainy photos: Grain gets magnified and treated as detail
- Thin lines and fine patterns: May break apart or develop moiré artifacts
- Extreme enlargement (16x+): Hallucinated textures become visible
For print-on-demand, the sweet spot is 2x to 8x upscaling from a source image at 1024px or larger. This covers the vast majority of use cases.
Understanding upscale factors: 2x, 4x, and chained upscaling
Upscale factor refers to the multiplication applied to each dimension. A 2x upscale doubles both width and height, resulting in 4 times as many total pixels. A 4x upscale quadruples each dimension, producing 16 times as many pixels.
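Because the factor applies to each edge, total pixel count grows with the square of the factor:

```python
w, h = 1024, 1024
for factor in (2, 4):
    new_w, new_h = w * factor, h * factor
    print(f"{factor}x: {new_w} x {new_h} = {factor ** 2}x the total pixels")

# 2x: 2048 x 2048 = 4x the total pixels
# 4x: 4096 x 4096 = 16x the total pixels
```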
Most AI upscaling models support either 2x or 4x natively. To reach higher factors, you chain multiple upscale steps: from a 1024px source, a 4x pass reaches 4096px, then a 2x pass on that result reaches 8192px. Each step adds realistic detail at its native scale factor.
| Source Size | Target Size | Total Factor | Chain Steps | Example Chain |
|---|---|---|---|---|
| 1024 px | 2048 px | 2x | 1 step | ESRGAN 2x |
| 1024 px | 4096 px | 4x | 1 step | ESRGAN 4x |
| 1024 px | 8192 px | 8x | 2 steps | ESRGAN 4x then Bria 2x |
| 1024 px | 10800 px | ~10.5x | 3 steps | ESRGAN 4x, Bria 2x, TopazLabs ~1.32x |
Common upscale chains. The chain calculator automatically selects the cheapest multi-step path.
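One way to plan such a chain is a greedy search: apply the largest native factor until another step would overshoot the target, then finish with a fractional resize. The sketch below is an illustration under that assumption only; a real calculator would also weigh the per-step cost of each model.

```python
def chain_for_target(source_px, target_px, native_factors=(4, 2)):
    """Greedy sketch of a chain planner. native_factors lists the models'
    native scales, largest first; the remainder is a fractional final step."""
    steps = []
    size = source_px
    for factor in native_factors:
        while size * factor <= target_px:   # take this factor while it fits
            steps.append(factor)
            size *= factor
    if size < target_px:                    # close the gap with a small resize
        steps.append(round(target_px / size, 2))
    return steps

print(chain_for_target(1024, 8192))    # [4, 2]
print(chain_for_target(1024, 10800))   # [4, 2, 1.32]
```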
Skip upscaling if your source is large enough
If your source image is already at or above the target size, no upscaling is needed. Automated systems detect this and skip the upscale step entirely, saving both time and credits.
Automated: AI upscaler with multiple quality tiers
Manually running images through individual upscaling tools, downloading results, checking quality, and re-running at different factors is tedious. Ratio Ready's upscale pipeline automates this entire process with multiple quality tiers.
| Target Size | Max Edge (px) | Best For | From 1024px |
|---|---|---|---|
| Standard | 2048 | Stickers, mugs, small prints | ESRGAN 2x |
| Large | 4096 | T-shirts, tote bags, phone cases | ESRGAN 4x |
| XL | 8192 | Posters, canvas, wall art up to 24" | ESRGAN 4x + Bria 2x |
| Max | 12000 | Large format, gallery prints | ESRGAN 4x + Bria 2x + TopazLabs |
Images already at or above the target size are skipped without charge.
Alpha channel (transparency) is preserved through the entire upscale chain. The pipeline extracts the alpha channel before upscaling, processes RGB and alpha independently, and recomposites the result at the output resolution.
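In miniature, the split-and-recomposite step looks like this (a toy sketch on lists of RGBA tuples; the real pipeline operates on full image buffers):

```python
def split_alpha(rgba_pixels):
    """Separate RGBA pixels into an RGB list and an alpha list,
    mirroring the extract-before-upscale step."""
    rgb = [(r, g, b) for r, g, b, _a in rgba_pixels]
    alpha = [a for _r, _g, _b, a in rgba_pixels]
    return rgb, alpha

def recomposite(rgb, alpha):
    """Re-attach the (independently upscaled) alpha channel to the RGB result."""
    return [(r, g, b, a) for (r, g, b), a in zip(rgb, alpha)]

pixels = [(255, 0, 0, 255), (0, 0, 255, 0)]  # opaque red, transparent blue
rgb, alpha = split_alpha(pixels)
# ... RGB and alpha would each be upscaled separately here ...
print(recomposite(rgb, alpha) == pixels)  # True: round-trip preserves RGBA
```

Processing the channels separately keeps the upscaler from bleeding background color into transparent regions.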
For sellers using automation tools (Make.com, n8n, Zapier), the upscale endpoint (POST /v1/batch/upscale) accepts 1 to 25 images with a target size parameter and returns individual download URLs for each result.
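A minimal request sketch follows. The path /v1/batch/upscale and the 1-to-25 image limit come from this guide; the JSON field names, tier value, and base URL are assumptions for illustration, so check the API reference before wiring this into an automation.

```python
import json
from urllib import request

# Hypothetical payload — "images" and "target_size" are assumed field names,
# and api.example.com is a placeholder base URL.
payload = {
    "images": [  # 1 to 25 image URLs per batch
        "https://example.com/art-1.png",
        "https://example.com/art-2.png",
    ],
    "target_size": "xl",
}

req = request.Request(
    "https://api.example.com/v1/batch/upscale",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <API_KEY>",  # replace with your key
    },
    method="POST",
)
# response = request.urlopen(req)  # response lists a download URL per image
```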
Clipart auto-upscale
You do not need to upscale separately when using the clipart batch processor. When an uploaded image is smaller than the selected print size, the pipeline automatically upscales it as part of the same job.
Related guides
Understand DPI metadata, check your images, and convert them correctly for print.
Ratios, DPI, color space, and the production workflow for wall art sellers.
The 5 standard ratios every wall art seller should offer.
Upscale your artwork for print in seconds
AI-powered upscaling to 12,000px. 100 free credits on signup.