Blob Objects in JavaScript: A Practical Guide to Files, Previews, Downloads, and Memory
A deep, real-world walkthrough of Blob APIs for frontend developers who need to handle files efficiently without destroying performance.
Frontend developers spend a lot of time thinking about UI, state, routing, and performance. But there is one area that quietly causes a surprising number of real-world problems: file handling.
It usually starts innocently.
You add image uploads. Then someone wants drag-and-drop PDF previews. Then product asks for CSV export. Then users upload 200MB videos from low-end mobile devices. Suddenly the browser starts freezing, memory climbs, tabs crash, and your “simple file feature” becomes one of the most fragile parts of the application.
This is exactly where Blob objects become useful.
Blob is one of those browser APIs that many developers have seen, but not everyone has really learned to use properly. It often appears in quick snippets for downloads or image previews, but its actual role in file-heavy applications is much bigger. Blob sits at the center of modern client-side file workflows: generating files, slicing large files, converting formats, previewing content, and controlling memory more carefully than naive string-based approaches.
In this article, we’ll take a practical approach to Blob objects. No abstract theory for the sake of theory. Just real use cases, better patterns, and production-friendly code.
We’ll cover:
what Blob actually is
why it’s better than huge strings or data URLs in many cases
how to generate files efficiently
how to split large files into chunks
how to build image compression pipelines
how to preview files in the browser
how to export data safely
how to avoid memory leaks caused by object URLs
If you work on dashboards, admin panels, editors, media tools, or upload-heavy products, this API deserves a place in your toolkit.
What a Blob Actually Is
Blob is short for Binary Large Object. In browser terms, it represents raw, immutable data that can be treated like a file, even if it didn’t come from the filesystem.
That last part matters.
A Blob can come from:
plain text
JSON
canvas output
image transformations
sliced file chunks
fetched binary content
generated CSV or HTML
You can think of a Blob as a browser-friendly wrapper around file-like data. It lets you package content with a MIME type and pass it into APIs that expect binary or file-based input.
Here’s a very simple example:
const reportPayload = new Blob(
  ['Hello from a generated file'],
  { type: 'text/plain' }
)

That Blob can now be:
downloaded
previewed
uploaded
passed into URL.createObjectURL
wrapped in a File
read as text, bytes, or streams in modern APIs
This is much more efficient and flexible than stuffing everything into long strings and hoping the browser handles it well.
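The reader methods mentioned above are worth seeing concretely. Here is a short sketch (the names are mine; text(), arrayBuffer(), stream(), size, and type are all standard Blob members) that reads the same Blob back in two ways:

```javascript
// Standard Blob reader methods: text() and arrayBuffer() return
// Promises; stream() additionally exposes a ReadableStream for
// incremental reads of large content.
const greetingBlob = new Blob(['Hello from a generated file'], {
  type: 'text/plain'
})

async function inspectBlob(blob) {
  const asText = await blob.text()          // decoded as UTF-8 text
  const asBuffer = await blob.arrayBuffer() // raw bytes
  return {
    type: blob.type,
    size: blob.size, // byte length, available synchronously
    text: asText,
    byteLength: asBuffer.byteLength
  }
}
```

Note that size and type are available without reading anything; the actual bytes are only pulled into JavaScript when you call a reader method.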
Why Developers Run Into Trouble Without Blob
A common beginner approach to file generation looks like this:
const hugeText = 'row,data\n'.repeat(200000)
const downloadHref = 'data:text/plain;charset=utf-8,' + encodeURIComponent(hugeText)

This works for small examples. It becomes painful for larger payloads.
Why?
Because:
the string itself already consumes memory
encodeURIComponent creates more work and more memory pressure
the resulting URL can become extremely large
browsers may struggle or fail with very long data URLs
Blob is usually the cleaner and safer choice.
const hugeText = 'row,data\n'.repeat(200000)
const exportBlob = new Blob([hugeText], {
  type: 'text/plain;charset=utf-8'
})

Now you have a file-like object without forcing the browser to build a massive inline data URL.
That difference becomes more important as payload size grows.
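You can make the difference tangible by measuring both representations directly. This is a quick sketch (the repeat count mirrors the earlier snippet and is otherwise arbitrary) comparing the Blob’s byte size with the length of the equivalent data URL string:

```javascript
const payload = 'row,data\n'.repeat(200000)

// The Blob stores the bytes once; blob.size reports the byte length.
const measuredBlob = new Blob([payload], { type: 'text/plain;charset=utf-8' })

// The data URL is a second, percent-encoded copy of the same content.
// Because ',' and '\n' are both percent-encoded, the string alone is
// noticeably larger than the original payload.
const dataUrl = 'data:text/plain;charset=utf-8,' + encodeURIComponent(payload)
```

On this payload the encoded URL string comes out over 40% larger than the Blob, and with the data URL approach both copies sit in memory at once.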
1. Creating Blob Objects the Right Way
The Blob constructor accepts an array of parts. Those parts can be strings, typed arrays, ArrayBuffers, or other Blob instances.
That means you can assemble files incrementally and explicitly.
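A minimal illustration of that flexibility (the variable names here are hypothetical): a Blob, a typed array, and a plain string can all be passed as parts and are concatenated in order.

```javascript
// Parts are concatenated in the order given, regardless of their kind.
const csvHeader = new Blob(['id,name\n'], { type: 'text/csv' })
const encodedRow = new TextEncoder().encode('1,anton\n') // Uint8Array
const combinedCsv = new Blob([csvHeader, encodedRow, '2,maria\n'], {
  type: 'text/csv;charset=utf-8'
})
```

This is what makes incremental assembly practical: you can build up file content from whatever pieces you already have without converting everything to one giant string first.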
function makeJsonBlob(payload) {
  const serialized = JSON.stringify(payload, null, 2)
  return new Blob([serialized], {
    type: 'application/json'
  })
}

function makeHtmlBlob(markup) {
  return new Blob([markup], {
    type: 'text/html;charset=utf-8'
  })
}

function makeTextBlob(lines) {
  return new Blob([lines.join('\n')], {
    type: 'text/plain;charset=utf-8'
  })
}

This is already a step up from ad hoc string handling because you’re making the file type explicit.
For example:
const profileBlob = makeJsonBlob({
  id: 42,
  username: 'anton',
  plan: 'pro'
})

A few practical tips here:
First, always set the MIME type when you know it. It improves interoperability with previews, downloads, and downstream consumers.
Second, think of Blob creation as a boundary. Once you create the Blob, treat it as a file artifact, not just “some random string.”
Third, don’t confuse Blob with File. File extends Blob and adds metadata like file name and last modified timestamp. If you need filename semantics, wrap the Blob:
const notesBlob = makeTextBlob(['First line', 'Second line'])
const notesFile = new File([notesBlob], 'notes.txt', {
  type: 'text/plain'
})

2. Splitting Large Files Instead of Reading Everything at Once
One of the easiest ways to crash a browser tab is to read a large file all at once and then process it in memory.
This is especially risky for:
CSV imports
big JSON files
video uploads
logs
archives
A safer pattern is to use slice() and process the file in chunks.
async function readChunkAsText(blobPart) {
  return await blobPart.text()
}

async function scanFileByChunks(sourceFile, options = {}) {
  const partSize = options.partSize ?? 1024 * 1024
  const onProgress = options.onProgress ?? (() => {})
  const partCount = Math.ceil(sourceFile.size / partSize)
  const summary = []

  for (let partIndex = 0; partIndex < partCount; partIndex++) {
    const offsetStart = partIndex * partSize
    const offsetEnd = Math.min(offsetStart + partSize, sourceFile.size)
    const filePart = sourceFile.slice(offsetStart, offsetEnd)
    const textPreview = await readChunkAsText(filePart)

    summary.push({
      partIndex,
      bytes: filePart.size,
      preview: textPreview.slice(0, 80)
    })

    onProgress({
      done: partIndex + 1,
      total: partCount,
      percent: Math.round(((partIndex + 1) / partCount) * 100)
    })
  }

  return summary
}

This approach gives you several benefits.
You avoid loading the entire file at once. You can show progress. You can retry failed chunks individually. And you can build resumable upload pipelines much more easily.
Here’s a cleaner upload-oriented version:
class ResumableUploader {
  constructor(fileHandle, config) {
    this.fileHandle = fileHandle
    this.endpoint = config.endpoint
    this.segmentSize = config.segmentSize ?? 2 * 1024 * 1024
    this.onUpdate = config.onUpdate ?? (() => {})
  }

  async start() {
    const totalSegments = Math.ceil(this.fileHandle.size / this.segmentSize)
    const transferId = crypto.randomUUID()

    for (let segmentNumber = 0; segmentNumber < totalSegments; segmentNumber++) {
      const byteStart = segmentNumber * this.segmentSize
      const byteEnd = Math.min(byteStart + this.segmentSize, this.fileHandle.size)
      const segmentBlob = this.fileHandle.slice(byteStart, byteEnd)

      await this.sendSegment({
        segmentBlob,
        segmentNumber,
        transferId
      })

      this.onUpdate({
        uploadedSegments: segmentNumber + 1,
        totalSegments,
        percent: Math.round(((segmentNumber + 1) / totalSegments) * 100)
      })
    }

    return this.finishUpload(transferId, totalSegments)
  }

  async sendSegment({ segmentBlob, segmentNumber, transferId }) {
    const payload = new FormData()
    payload.append('segment', segmentBlob)
    payload.append('segmentNumber', String(segmentNumber))
    payload.append('transferId', transferId)

    const response = await fetch(this.endpoint, {
      method: 'POST',
      body: payload
    })

    if (!response.ok) {
      throw new Error(`Failed to upload segment ${segmentNumber}`)
    }

    return response.json()
  }

  async finishUpload(transferId, totalSegments) {
    const response = await fetch('/api/uploads/complete', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        transferId,
        totalSegments
      })
    })

    if (!response.ok) {
      throw new Error('Failed to finalize upload')
    }

    return response.json()
  }
}

This is the kind of pattern that scales much better than “read the whole thing and hope.”
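One thing this pattern leaves out is retries. Because each segment is an independent request, a failed chunk can be retried on its own. The helper below is my own addition, not part of any standard API, and the attempt count and backoff delays are arbitrary defaults:

```javascript
// Retries an async task with exponential backoff: waits baseDelayMs,
// then 2x, then 4x between attempts, and rethrows the last error if
// every attempt fails.
async function withRetries(task, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await task(attempt)
    } catch (error) {
      lastError = error
      if (attempt < attempts - 1) {
        const delay = baseDelayMs * 2 ** attempt
        await new Promise((resolve) => setTimeout(resolve, delay))
      }
    }
  }
  throw lastError
}
```

Inside an uploader like the one above, the per-segment call would become something like `await withRetries(() => this.sendSegment({ segmentBlob, segmentNumber, transferId }))`.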
3. Using Blob for Image Compression and Format Conversion
Client-side image processing is one of the most practical Blob use cases.
Uploading the original photo straight from a phone is often wasteful. It might be 5MB, 8MB, or even larger. In many products, that’s unnecessary. A resized image with reasonable compression can look almost identical while being dramatically smaller.
The browser already gives us the tools: Image, canvas, and canvas.toBlob().
async function compressPhoto(sourceFile, settings = {}) {
  const maxWidth = settings.maxWidth ?? 1600
  const maxHeight = settings.maxHeight ?? 1600
  const quality = settings.quality ?? 0.82
  const outputType = settings.outputType ?? 'image/jpeg'

  const sourceUrl = URL.createObjectURL(sourceFile)

  try {
    const bitmap = await loadImage(sourceUrl)
    const { width, height } = fitIntoBounds(
      bitmap.width,
      bitmap.height,
      maxWidth,
      maxHeight
    )

    const canvas = document.createElement('canvas')
    canvas.width = width
    canvas.height = height

    const context = canvas.getContext('2d')
    context.drawImage(bitmap, 0, 0, width, height)

    const compressedBlob = await canvasToBlob(canvas, outputType, quality)

    return {
      blob: compressedBlob,
      width,
      height,
      beforeBytes: sourceFile.size,
      afterBytes: compressedBlob.size
    }
  } finally {
    URL.revokeObjectURL(sourceUrl)
  }
}

function loadImage(src) {
  return new Promise((resolve, reject) => {
    const image = new Image()
    image.onload = () => resolve(image)
    image.onerror = reject
    image.src = src
  })
}

function canvasToBlob(canvas, mimeType, quality) {
  return new Promise((resolve, reject) => {
    canvas.toBlob((blob) => {
      if (!blob) {
        reject(new Error('Canvas export returned null'))
        return
      }
      resolve(blob)
    }, mimeType, quality)
  })
}

function fitIntoBounds(originalWidth, originalHeight, maxWidth, maxHeight) {
  let nextWidth = originalWidth
  let nextHeight = originalHeight

  if (nextWidth > maxWidth) {
    nextHeight = (nextHeight * maxWidth) / nextWidth
    nextWidth = maxWidth
  }

  if (nextHeight > maxHeight) {
    nextWidth = (nextWidth * maxHeight) / nextHeight
    nextHeight = maxHeight
  }

  return {
    width: Math.round(nextWidth),
    height: Math.round(nextHeight)
  }
}

This is more than a demo. It’s the foundation of avatar uploaders, CMS media tools, admin dashboards, and image-heavy forms.
You can build on it like this:
async function prepareAvatarUpload(rawImageFile) {
  if (!rawImageFile.type.startsWith('image/')) {
    throw new Error('Only image files are allowed')
  }

  const result = await compressPhoto(rawImageFile, {
    maxWidth: 400,
    maxHeight: 400,
    quality: 0.9,
    outputType: 'image/jpeg'
  })

  const previewLink = URL.createObjectURL(result.blob)

  return {
    fileBlob: result.blob,
    previewLink,
    savedPercent: Math.round(
      (1 - result.afterBytes / result.beforeBytes) * 100
    )
  }
}

This creates a compressed Blob for upload and a preview URL for the UI.
That’s exactly the kind of workflow Blob is meant for.
4. Building a File Preview System That Doesn’t Turn Into a Mess
File preview features often start small and then grow into unmaintainable code. One function handles images, another handles text, another handles PDFs, another handles videos, and eventually every new type becomes its own special case.
A better approach is to centralize the logic.
class PreviewHub {
  constructor(rootElement) {
    this.rootElement = rootElement
    this.activeObjectUrl = null
  }

  async show(fileEntry) {
    this.reset()

    const majorType = this.detectCategory(fileEntry)

    if (majorType === 'image') {
      return this.renderImage(fileEntry)
    }
    if (majorType === 'text') {
      return this.renderText(fileEntry)
    }
    if (majorType === 'video') {
      return this.renderVideo(fileEntry)
    }
    if (majorType === 'audio') {
      return this.renderAudio(fileEntry)
    }

    return this.renderFallback(fileEntry)
  }

  detectCategory(fileEntry) {
    const mime = (fileEntry.type || '').toLowerCase()

    if (mime.startsWith('image/')) return 'image'
    if (mime.startsWith('text/') || mime === 'application/json') return 'text'
    if (mime.startsWith('video/')) return 'video'
    if (mime.startsWith('audio/')) return 'audio'

    return 'unknown'
  }

  renderImage(fileEntry) {
    const imageNode = document.createElement('img')
    imageNode.style.maxWidth = '100%'
    imageNode.style.maxHeight = '480px'
    imageNode.style.objectFit = 'contain'

    const objectUrl = URL.createObjectURL(fileEntry)
    this.activeObjectUrl = objectUrl
    imageNode.src = objectUrl

    this.rootElement.append(imageNode)
  }

  async renderText(fileEntry) {
    const rawText = await fileEntry.text()
    const previewText = rawText.length > 12000
      ? rawText.slice(0, 12000) + '\n\n...truncated for preview'
      : rawText

    const codeBlock = document.createElement('pre')
    codeBlock.textContent = previewText
    codeBlock.style.whiteSpace = 'pre-wrap'
    codeBlock.style.overflow = 'auto'
    codeBlock.style.maxHeight = '420px'

    this.rootElement.append(codeBlock)
  }

  renderVideo(fileEntry) {
    const videoNode = document.createElement('video')
    videoNode.controls = true
    videoNode.style.maxWidth = '100%'

    const objectUrl = URL.createObjectURL(fileEntry)
    this.activeObjectUrl = objectUrl
    videoNode.src = objectUrl

    this.rootElement.append(videoNode)
  }

  renderAudio(fileEntry) {
    const audioNode = document.createElement('audio')
    audioNode.controls = true
    audioNode.style.width = '100%'

    const objectUrl = URL.createObjectURL(fileEntry)
    this.activeObjectUrl = objectUrl
    audioNode.src = objectUrl

    this.rootElement.append(audioNode)
  }

  renderFallback(fileEntry) {
    const message = document.createElement('div')
    message.textContent = `Preview is not available for ${fileEntry.name || 'this file'}`
    this.rootElement.append(message)
  }

  reset() {
    this.rootElement.innerHTML = ''

    if (this.activeObjectUrl) {
      URL.revokeObjectURL(this.activeObjectUrl)
      this.activeObjectUrl = null
    }
  }
}

This gives you one interface for many file types and keeps the preview lifecycle under control.
The important detail here is not just rendering. It’s cleanup. Preview systems that use object URLs without revoking them can leak memory over time, especially in file managers and media-heavy dashboards.
5. Generating Downloads Without Data URL Headaches
Export features are everywhere:
CSV exports
config downloads
JSON backups
generated reports
text logs
HTML snapshots
Blob is one of the safest ways to build these features.
function triggerBlobDownload(fileBlob, fileName) {
  const tempUrl = URL.createObjectURL(fileBlob)

  const anchor = document.createElement('a')
  anchor.href = tempUrl
  anchor.download = fileName
  anchor.style.display = 'none'

  document.body.append(anchor)
  anchor.click()
  anchor.remove()

  setTimeout(() => {
    URL.revokeObjectURL(tempUrl)
  }, 1000)
}

Now wrap that in useful helpers:
function exportAsJson(payload, fileName = 'export.json') {
  const fileBlob = new Blob(
    [JSON.stringify(payload, null, 2)],
    { type: 'application/json;charset=utf-8' }
  )
  triggerBlobDownload(fileBlob, fileName)
}

function exportAsText(content, fileName = 'notes.txt') {
  const fileBlob = new Blob(
    [content],
    { type: 'text/plain;charset=utf-8' }
  )
  triggerBlobDownload(fileBlob, fileName)
}

function exportAsCsv(rows, fileName = 'table.csv') {
  if (!rows.length) {
    throw new Error('Cannot export an empty dataset')
  }

  const headers = Object.keys(rows[0])
  const csvLines = [headers.join(',')]

  for (const row of rows) {
    const values = headers.map((key) => {
      const value = String(row[key] ?? '')
      // Escape embedded double quotes by doubling them, then wrap the field
      const escaped = value.replaceAll('"', '""')
      return `"${escaped}"`
    })
    csvLines.push(values.join(','))
  }

  const csvBlob = new Blob(
    ['\uFEFF' + csvLines.join('\n')],
    { type: 'text/csv;charset=utf-8' }
  )

  triggerBlobDownload(csvBlob, fileName)
}

This approach is simple, predictable, and much more scalable than embedding giant payloads into data URLs.
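The quoting rule inside a CSV exporter is easy to get subtly wrong, so it can help to pull it out as a small, testable helper. This follows the RFC 4180 convention (wrap each field in double quotes and double any embedded quote); the function names here are my own:

```javascript
// RFC 4180-style field escaping: double embedded quotes, wrap in quotes.
function escapeCsvField(value) {
  const text = String(value ?? '')
  return `"${text.replaceAll('"', '""')}"`
}

// Joins a row of already-escaped fields into one CSV line.
function toCsvLine(values) {
  return values.map(escapeCsvField).join(',')
}
```

Quoting every field is slightly verbose in the output, but it means commas, quotes, and newlines inside values can never break the table structure.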
6. The Part Everyone Forgets: Memory Management
Blob itself is usually not the problem.
The real trouble often starts with object URLs created through URL.createObjectURL(blob).
These URLs are incredibly useful. They let you point img, video, audio, or download links to Blob-backed content. But they also need cleanup.
If you keep creating object URLs and never revoke them, memory usage can slowly grow.
A safer pattern is to track them intentionally.
class ObjectUrlRegistry {
  constructor() {
    this.liveUrls = new Set()
  }

  allocate(blobValue) {
    const objectUrl = URL.createObjectURL(blobValue)
    this.liveUrls.add(objectUrl)
    return objectUrl
  }

  release(objectUrl) {
    if (!this.liveUrls.has(objectUrl)) return
    URL.revokeObjectURL(objectUrl)
    this.liveUrls.delete(objectUrl)
  }

  releaseAll() {
    for (const objectUrl of this.liveUrls) {
      URL.revokeObjectURL(objectUrl)
    }
    this.liveUrls.clear()
  }
}

And then use it in components or modules:
}And then use it in components or modules:
const urlRegistry = new ObjectUrlRegistry()

function mountPreview(imageBlob, mountPoint) {
  const objectUrl = urlRegistry.allocate(imageBlob)

  const imageNode = document.createElement('img')
  imageNode.src = objectUrl
  imageNode.style.maxWidth = '100%'

  mountPoint.innerHTML = ''
  mountPoint.append(imageNode)

  return () => {
    urlRegistry.release(objectUrl)
    mountPoint.innerHTML = ''
  }
}

This becomes especially useful in:
image editors
file galleries
drag-and-drop uploaders
long-lived admin apps
preview modals that open and close many times
If you’ve ever had a file-heavy SPA get progressively slower during a session, forgotten object URLs may be one of the causes.
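For one-off image previews there is an even simpler discipline than a registry: revoke the URL as soon as the element has consumed it. Once an img has decoded the data, the decoded image lives in the element and the object URL itself is no longer needed. A sketch, with a function name of my own choosing:

```javascript
// Creates a preview <img> and revokes its object URL as soon as the
// image has loaded (or failed to load), so there is nothing to track.
function showTransientPreview(imageBlob, mountPoint) {
  const objectUrl = URL.createObjectURL(imageBlob)
  const imageNode = document.createElement('img')

  imageNode.onload = () => URL.revokeObjectURL(objectUrl)
  imageNode.onerror = () => URL.revokeObjectURL(objectUrl)

  imageNode.src = objectUrl
  mountPoint.append(imageNode)
}
```

This works for images because the element keeps its decoded copy. For video and audio the URL must stay alive as long as the element may seek, so for media the registry approach above is the safer pattern.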
Practical Rules I’d Actually Follow in Production
After working with Blob-based flows, a few rules stand out.
Use Blob when you are generating or transforming file-like data. It gives you cleaner boundaries and better interoperability.
Prefer object URLs over giant data URLs for previews and downloads, but always clean them up.
Chunk large files instead of processing them in one shot. A slightly more complex implementation is worth the stability.
Compress images before upload when your product doesn’t require the original full-resolution asset.
Use Blob-backed exports for CSV, JSON, and text generation. It scales better and is easier to maintain.
Keep file preview logic centralized. Scattered preview code becomes technical debt very quickly.
And most importantly: think about file features as performance features. A bad upload or preview experience is not just a technical issue. Users feel it immediately.
Final Thoughts
Blob objects are one of those browser APIs that quietly power a huge amount of modern frontend functionality.
They help bridge the gap between raw in-memory data and file-oriented browser workflows. They make previews easier. Downloads cleaner. Upload pipelines safer. Image processing more practical. Large-file handling more realistic.
But the real value isn’t just knowing the API exists.
The real value is knowing when to use it instead of naive strings, oversized data URLs, or “just read the whole thing into memory” patterns.
That’s where Blob stops being a trivia item and starts becoming an architectural tool.


