Astro Build Speed Optimization: From 35 to 127 Pages/Second (Complete Beginner's Guide)
Complete beginner's guide to dramatically speeding up Astro build times and why Static Site Generation (SSG) beats SSR for large-scale sites. Real-world optimization steps that improved build speed by 3.6x.

Table of Contents
- Understanding SSG vs SSR (The Basics)
- Why SSG Often Beats SSR at Scale
- Real-World Performance Case Study
- 8 Steps to Optimize Your Astro Builds
- Advanced Caching Strategies
- Hardware Considerations
- When to Choose SSG vs SSR
- Common Optimization Myths
- Monitoring and Measuring Success
- Troubleshooting Common Issues
- Conclusion and Next Steps
Are you frustrated with slow Astro builds? Have you been told to “just switch to SSR” for your large site? This comprehensive guide will show you how to dramatically improve your Astro build times while sticking with Static Site Generation (SSG) - the smart choice for scalable, cost-effective websites.
We’ll walk through real optimization techniques that improved a large SSG site from 35 pages/second to 127 pages/second - that’s a 3.6x speed improvement! Best of all, these techniques work for beginners and don’t require abandoning SSG.
This article is based on a detailed Reddit case study: Astro build speed optimization from 9642s to 2659s.
Before diving deep into optimization, you might want to check out these related Astro articles:
- Build your Astro blog for free
- Add YouTube videos to your Astro blog
- Build real-time apps with Astro and Convex
- Deploy Astro and Convex to Vercel
Understanding SSG vs SSR (The Basics)
Let’s start with the fundamentals. Understanding the difference between Static Site Generation (SSG) and Server-Side Rendering (SSR) is crucial for making the right choice for your project.
Static Site Generation (SSG)
What it is: Pages are pre-built at build time and served as static HTML files.
How it works:
- During build, Astro processes your content and components
- Generates static HTML files for each page
- These files are served directly by a CDN or web server
- No server processing needed for each request
Pros:
- ⚡ Lightning fast delivery - files served directly from CDN
- 💰 Very cost-effective - minimal server resources needed
- 🛡️ Highly resilient - can handle massive traffic spikes
- 🔒 More secure - no server-side vulnerabilities
- 📈 Excellent SEO - search engines love static content
Cons:
- ⏱️ Build time grows with more pages
- 🔄 Data freshness depends on rebuild frequency
- 🎯 Limited personalization without JavaScript
Server-Side Rendering (SSR)
What it is: Pages are generated on each request (or cached with smart rules).
How it works:
- User requests a page
- Server processes the request in real-time
- Generates HTML dynamically
- Sends response to user
Pros:
- 🔥 Always fresh data - content is up-to-date on every request
- 👤 Full personalization - can customize per user/request
- ⚡ Fast time-to-first-page - no build step needed
Cons:
- 💰 Higher costs - requires server capacity for each request
- 🐌 Slower under load - server processing needed for every request
- 🕷️ Vulnerable to crawler load - bots can overwhelm your server
Why SSG Often Beats SSR at Scale
The key insight from our case study: “You don’t ever fear a single item getting a million views in a day, you fear 100,000 items getting 10 views in a day.”
The Spider Problem
Modern websites face an unprecedented crawler load:
- Search engine bots (Google, Bing, etc.)
- AI training scrapers (ChatGPT, Claude, etc.)
- SEO tools and monitoring services
- Malicious scraping attempts
Real numbers from our case study:
- 2.3 million requests per day
- 774,860 unique visitors
- 710k unique URLs requested
- 30:1 ratio of spider traffic to human traffic
With SSG, each of these requests is a cheap file serve. With SSR, each request requires server processing power.
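To put that traffic in perspective, the case-study numbers work out to a surprisingly modest average request rate - a load that static file serving absorbs trivially:

```javascript
// Average request rate implied by the case-study traffic figures
const requestsPerDay = 2_300_000;
const secondsPerDay = 86_400;
const requestsPerSecond = requestsPerDay / secondsPerDay;

console.log(requestsPerSecond.toFixed(1) + ' requests/second on average');
```

Of course traffic is bursty, not uniform, but even a 10x peak over this average is easy work for a CDN or a web server handing out pre-built files.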
Cost Comparison
SSG Setup (from case study):
- $29 web server + memcached + workers
- $29 database server
- $89 build server
- Total: $147/month
This setup handles 2.3M daily requests easily, with average load under 2 on an 8-core system.
Equivalent SSR Setup:
- Would need multiple high-powered application servers
- Database connection pooling and caching layers
- Load balancers and auto-scaling
- Estimated cost: $500-2000+/month
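A quick sanity check on the gap between the two setups (using the estimates above; your hosting prices will vary):

```javascript
// Monthly cost ratio implied by the estimates above
const ssgMonthly = 29 + 29 + 89; // web + database + build server
const ssrLow = 500;
const ssrHigh = 2000;

console.log(ssgMonthly + ' USD/month for the SSG setup');
console.log((ssrLow / ssgMonthly).toFixed(1) + 'x at the low SSR estimate');
console.log((ssrHigh / ssgMonthly).toFixed(1) + 'x at the high SSR estimate');
```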
Real-World Performance Case Study
Let’s look at the actual optimization journey that inspired this guide:
Site Stats
- 349,734 total files
- 346,236 HTML pages
- 43GB total size
- API-powered build (no local .md files)
Performance Journey
| Stage | Pages Built | Build Time | Speed | Improvement |
|---|---|---|---|---|
| Initial | 339,194 | 9,642s (2.7 hours) | ~35 pages/sec | Baseline |
| Mid-optimization | 339,251 | 3,583s (1 hour) | ~94 pages/sec | 2.7x faster |
| Final optimized | 339,340 | 2,659s (44 minutes) | ~127 pages/sec | 3.6x faster |
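The speed column follows directly from the pages and seconds - you can verify the figures yourself:

```javascript
// Verify the pages/second figures in the table above
const stages = [
  { name: 'Initial',          pages: 339_194, seconds: 9_642 },
  { name: 'Mid-optimization', pages: 339_251, seconds: 3_583 },
  { name: 'Final optimized',  pages: 339_340, seconds: 2_659 },
];

for (const { name, pages, seconds } of stages) {
  console.log(`${name}: ${(pages / seconds).toFixed(1)} pages/sec`);
}
```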
Now let’s break down exactly how they achieved this improvement.
8 Steps to Optimize Your Astro Builds
Step 1: Upgrade Node.js and Astro
Why this matters: Newer versions include performance improvements, bug fixes, and optimizations.
What to do:
# Check current versions
node --version
npm list astro
# Upgrade Node.js to latest LTS (22+)
nvm install 22
nvm use 22
# Upgrade Astro to latest
npm install astro@latest
Expected improvement: ~30% faster builds from version improvements alone.
Step 2: Increase Node.js Memory Allocation
Why this matters: Large builds can hit memory limits, causing garbage collection pauses and slowdowns.
What to do:
# Method 1: Environment variable (recommended)
export NODE_OPTIONS="--max-old-space-size=8192"
# Method 2: Direct command
node --max-old-space-size=8192 ./node_modules/.bin/astro build
Memory allocation guide:
- Small sites (< 1k pages): 4GB (4096)
- Medium sites (1k-10k pages): 8GB (8192)
- Large sites (10k+ pages): 16GB+ (16384)
Expected improvement: Reduced build time and eliminated memory-related crashes.
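The tiers above can be wrapped in a tiny helper for build scripts. The thresholds are this guide's rough rules of thumb, not an official formula:

```javascript
// Suggest a --max-old-space-size value (in MB) from page count.
// Thresholds follow the memory allocation guide above.
function recommendHeapMB(pageCount) {
  if (pageCount < 1_000) return 4096;   // small sites
  if (pageCount < 10_000) return 8192;  // medium sites
  return 16384;                         // large sites
}

console.log(`NODE_OPTIONS="--max-old-space-size=${recommendHeapMB(5_000)}"`);
```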
Step 3: Optimize Build Concurrency
Why this matters: Astro can process multiple pages simultaneously, but too much concurrency can cause resource contention.
Finding your sweet spot:
// astro.config.mjs
export default defineConfig({
  build: {
    concurrency: 4, // Start here, then test 2, 6, 8
  },
});
Testing methodology:
- Start with concurrency: 2
- Run a build and time it
- Increase to 4, then 6, then 8
- Use the fastest setting
Important: More isn’t always better! The case study found 4 was optimal on a 12-core system.
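If you record each run, picking the winner is mechanical. A small sketch - the timings below are made up for illustration, not from the case study:

```javascript
// Pick the concurrency setting with the fastest measured build
function fastestConcurrency(runs) {
  return runs.reduce((best, r) => (r.seconds < best.seconds ? r : best)).concurrency;
}

// Hypothetical timings - substitute your own measurements
const runs = [
  { concurrency: 2, seconds: 3100 },
  { concurrency: 4, seconds: 2659 },
  { concurrency: 6, seconds: 2710 },
  { concurrency: 8, seconds: 2900 },
];

console.log(fastestConcurrency(runs));
```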
Step 4: Configure Vite and Rollup for Speed
Why this matters: Vite handles bundling and optimization. Proper configuration can significantly impact build speed.
Here’s the optimized configuration from our case study:
// astro.config.mjs
import { defineConfig } from "astro/config";
import { cpus } from "os";

const CPU_COUNT = cpus().length;

export default defineConfig({
  build: {
    // Optimize concurrency for your CPU
    concurrency: 4,
  },
  vite: {
    build: {
      // Allow larger chunks for speed
      chunkSizeWarningLimit: 10000,
      // Fastest minifier
      minify: 'esbuild',
      // Less transformation needed
      target: 'es2022',
      // Rollup options live under vite.build in Astro's config
      rollupOptions: {
        // Maximum parallel file operations
        maxParallelFileOps: CPU_COUNT * 3,
        output: {
          // Fewer, larger chunks = less overhead
          manualChunks: undefined,
          // Faster code generation ('es2015' is Rollup's most modern preset)
          generatedCode: { preset: 'es2015' }
        }
      }
    },
    esbuild: {
      target: 'es2022',
      // Fast minification settings
      minifyIdentifiers: false, // Skip for speed
      minifySyntax: true,
      minifyWhitespace: true,
    },
    // Aggressive caching for faster subsequent builds
    optimizeDeps: {
      force: false // Use cache when possible
    }
  },
  // Skip HTML compression for faster builds
  compressHTML: false,
});
Key optimizations explained:
- manualChunks: undefined - Reduces chunk fragmentation overhead
- target: 'es2022' - Modern target means less transpilation
- minify: 'esbuild' - Fastest minifier available
- compressHTML: false - Skip compression for speed (enable in production if needed)
- maxParallelFileOps - Utilize all CPU cores efficiently
Step 5: Implement Smart Caching
Why this matters: If your site pulls data from APIs, caching eliminates redundant network requests.
Here’s a robust caching implementation:
// utils/fetchWithCache.js
import fs from 'fs';
import path from 'path';
import crypto from 'crypto';

export async function fetchWithCache(url, expirationSeconds = 600) {
  const start = Date.now();

  // Create unique cache filename
  const urlHash = crypto.createHash('md5').update("cache_v1_" + url).digest('hex');
  const cacheDir = path.join(process.cwd(), '.cache');
  const cacheFile = path.join(cacheDir, `${urlHash}.json`);

  // Ensure cache directory exists
  if (!fs.existsSync(cacheDir)) {
    fs.mkdirSync(cacheDir, { recursive: true });
  }

  // Check if cache file exists and is fresh
  if (fs.existsSync(cacheFile)) {
    const stats = fs.statSync(cacheFile);
    const ageInSeconds = (Date.now() - stats.mtime.getTime()) / 1000;
    if (ageInSeconds < expirationSeconds) {
      const cachedData = JSON.parse(fs.readFileSync(cacheFile, 'utf8'));
      console.log(`Cache hit: ${url} (${ageInSeconds.toFixed(1)}s old)`);
      return cachedData;
    }
  }

  // Fetch fresh data
  console.log(`Fetching: ${url}`);
  const response = await fetch(url, {
    headers: {
      'User-Agent': 'Astro Build Bot',
    },
  });

  if (!response.ok) {
    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
  }

  const data = await response.json();

  // Save to cache
  fs.writeFileSync(cacheFile, JSON.stringify(data, null, 2));
  console.log(`Fresh fetch completed: ${((Date.now() - start) / 1000).toFixed(2)}s`);
  return data;
}
Usage in your Astro pages:
// pages/[...slug].astro
---
import { fetchWithCache } from '../utils/fetchWithCache.js';

export async function getStaticPaths() {
  // Use cached fetch instead of regular fetch
  const posts = await fetchWithCache('https://api.example.com/posts');

  return posts.map(post => ({
    params: { slug: post.slug },
    props: { post }
  }));
}
---
Step 6: Cache Prewarming (Advanced)
Why this matters: For very large sites, you can prewarm your cache before the main build starts.
Here’s a Node.js cache prewarming script:
// scripts/prewarmCache.js
import { fetchWithCache } from '../utils/fetchWithCache.js';

async function prewarmCache() {
  console.log('Starting cache prewarming...');

  // Define your API endpoints to prewarm
  const endpoints = [
    'https://api.example.com/posts',
    'https://api.example.com/categories',
    'https://api.example.com/authors',
    // Add more endpoints as needed
  ];

  // Warm all endpoints in parallel (fine for a handful of URLs;
  // batch them if you have hundreds)
  const results = await Promise.allSettled(
    endpoints.map(url => fetchWithCache(url, 3600)) // 1 hour cache
  );

  const successful = results.filter(r => r.status === 'fulfilled').length;
  console.log(`Cache prewarming complete: ${successful}/${endpoints.length} successful`);
}

prewarmCache().catch(console.error);
Run before your main build:
// package.json scripts
{
  "scripts": {
    "prewarm": "node scripts/prewarmCache.js",
    "build": "npm run prewarm && astro build"
  }
}
Step 7: Consider Ramdisk (Conditional)
When it helps: Only with slow storage (spinning disks, old SSDs).
When it doesn’t help: Modern NVMe drives - improvement is typically smaller than 1%.
How to set up (Linux/macOS):
# Create 4GB ramdisk
sudo mkdir -p /tmp/astro-build
sudo mount -t tmpfs -o size=4g tmpfs /tmp/astro-build
# Copy your project in and build inside the ramdisk
cd /tmp/astro-build
# ... run your build here ...
# Remember: tmpfs is volatile - copy dist/ back out before unmounting
Step 8: Hardware Upgrades
When it’s worth it: If you’re building multiple times per day, hardware ROI is real.
What matters most:
- CPU single-core performance - Node.js loves fast cores
- CPU cache (L3/L4) - More cache = faster builds
- Fast storage - NVMe > SATA SSD > HDD
- Adequate RAM - Avoid swapping at all costs
Case study hardware impact:
- Old: Intel Xeon E5-1650 v3 → 3,583s build time
- New: AMD Ryzen 9 5900X → 2,659s build time
- 25% improvement from CPU upgrade alone
Advanced Caching Strategies
Cache Invalidation Strategy
Smart cache invalidation ensures fresh data when needed:
// utils/smartCache.js
import { fetchWithCache } from './fetchWithCache.js';

// Bypass the cache entirely by requesting a zero-second max age
const fetchFresh = (url) => fetchWithCache(url, 0);

export async function fetchWithSmartCache(url, options = {}) {
  const {
    maxAge = 600,
    forceRefresh = false,
    invalidateOn = []
  } = options;

  if (forceRefresh) {
    return await fetchFresh(url);
  }

  // Check for invalidation conditions
  for (const condition of invalidateOn) {
    if (await condition()) {
      console.log(`Cache invalidated for ${url}`);
      return await fetchFresh(url);
    }
  }

  return await fetchWithCache(url, maxAge);
}

// Usage with invalidation (lastDeployTime comes from your deploy metadata)
const posts = await fetchWithSmartCache('https://api.example.com/posts', {
  maxAge: 3600, // 1 hour
  invalidateOn: [
    () => process.env.FORCE_REFRESH === 'true',
    () => Date.now() - lastDeployTime < 300000 // 5 minutes after deploy
  ]
});
Batch Request Optimization
Minimize API calls by batching requests:
// utils/batchFetch.js
import { fetchWithCache } from './fetchWithCache.js';

export async function batchFetchWithCache(urls, batchSize = 10) {
  const results = [];

  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    const batchResults = await Promise.allSettled(
      batch.map(url => fetchWithCache(url))
    );
    results.push(...batchResults);

    // Small delay to be nice to the API
    if (i + batchSize < urls.length) {
      await new Promise(resolve => setTimeout(resolve, 100));
    }
  }

  return results;
}
Hardware Considerations
CPU Requirements
| Site Size | Recommended CPU | Cores | Cache |
|---|---|---|---|
| Small (< 1k pages) | Any modern CPU | 4+ | 8MB+ |
| Medium (1k-10k pages) | Intel i7/AMD Ryzen 7 | 8+ | 16MB+ |
| Large (10k+ pages) | Intel i9/AMD Ryzen 9 | 12+ | 32MB+ |
| Huge (100k+ pages) | Server-grade CPU | 16+ | 64MB+ |
Memory Requirements
Base calculation: ~2-4MB per page in memory during build.
| Site Size | Minimum RAM | Recommended |
|---|---|---|
| Small (< 1k pages) | 8GB | 16GB |
| Medium (1k-10k pages) | 16GB | 32GB |
| Large (10k+ pages) | 32GB | 64GB |
| Huge (100k+ pages) | 64GB | 128GB+ |
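The “~2-4MB per page” rule of thumb translates into a quick estimate. This is a rough sketch - actual usage depends heavily on your components and data:

```javascript
// Rough build-memory estimate from the ~2-4MB/page rule of thumb
function estimateBuildRamGB(pages) {
  const lowGB = (pages * 2) / 1024;  // MB -> GB
  const highGB = (pages * 4) / 1024;
  return { lowGB, highGB };
}

const { lowGB, highGB } = estimateBuildRamGB(10_000);
console.log(`${lowGB.toFixed(0)}-${highGB.toFixed(0)}GB of page data during build`);
```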
Storage Considerations
Speed hierarchy:
- NVMe Gen4 - Best for large builds
- NVMe Gen3 - Great for most uses
- SATA SSD - Minimum recommended
- HDD - Only with ramdisk
When to Choose SSG vs SSR
Decision Matrix
| Factor | SSG | SSR | Hybrid |
|---|---|---|---|
| Content freshness | Rebuild required | Always fresh | Mixed |
| Personalization | Limited | Full | Per-route |
| Performance | Excellent | Variable | Excellent |
| Cost at scale | Very low | High | Medium |
| Crawler resilience | Excellent | Poor | Good |
| Development complexity | Simple | Complex | Medium |
Use Case Recommendations
Choose SSG when:
- ✅ Content doesn’t change frequently (minutes/hours)
- ✅ Heavy anonymous/crawler traffic expected
- ✅ Budget constraints are important
- ✅ Maximum performance is priority
- ✅ Simple deployment preferred
Choose SSR when:
- ✅ Real-time data is essential
- ✅ Heavy personalization needed
- ✅ User-generated content is primary
- ✅ Small number of pages
- ✅ Server resources aren’t constrained
Choose Hybrid when:
- ✅ Most content is static, some dynamic
- ✅ Need personalization on some routes
- ✅ Want to optimize costs and performance
- ✅ Can handle route-level complexity
Hybrid Implementation Example
// astro.config.mjs - Hybrid setup
import { defineConfig } from 'astro/config';
import vercel from '@astrojs/vercel/serverless';

export default defineConfig({
  // 'hybrid' in Astro 2-4; in Astro 5 it merged into the default
  // 'static' output, which allows per-route SSR with an adapter
  output: 'hybrid',
  adapter: vercel(),
  // Most pages are pre-rendered (SSG);
  // specific routes can opt into SSR
});

// pages/dashboard/[user].astro - SSR route
---
// (Layout and UserDashboard imports omitted for brevity)
export const prerender = false; // This page uses SSR

const { user } = Astro.params;
const response = await fetch(new URL(`/api/user/${user}`, Astro.url));
const userData = await response.json();
---
<Layout title="Dashboard">
  <UserDashboard data={userData} />
</Layout>
Common Optimization Myths
Myth 1: “More concurrency is always better”
Reality: Concurrency has diminishing returns and can cause resource contention.
Test this: Try concurrency values of 2, 4, 6, 8, and 16. Most sites perform best between 2 and 6.
Myth 2: “Ramdisk always speeds up builds”
Reality: Only helps with slow storage. NVMe drives make ramdisk nearly useless.
Test this: Time your build with and without ramdisk on your storage setup.
Myth 3: “You need SSR for large sites”
Reality: SSG can handle hundreds of thousands of pages efficiently with proper optimization.
Evidence: Our case study site has 339k+ pages and builds in under 45 minutes.
Myth 4: “Build time doesn’t matter in production”
Reality: Faster builds mean:
- Quicker deployments
- More frequent updates
- Lower CI/CD costs
- Better developer experience
Myth 5: “HTML compression always saves significant space”
Reality: Modern CDNs handle compression better, and build-time compression slows builds significantly.
Recommendation: Let your CDN handle compression for better performance.
Monitoring and Measuring Success
Build Performance Metrics
Track these metrics to measure optimization success:
// build-metrics.js
const startTime = Date.now();

export function logBuildMetrics(pageCount) {
  const buildTime = (Date.now() - startTime) / 1000;
  const pagesPerSecond = pageCount / buildTime;
  const heapMB = process.memoryUsage().heapUsed / 1024 / 1024;

  console.log(`
📊 Build Metrics:
- Pages built: ${pageCount.toLocaleString()}
- Build time: ${buildTime.toFixed(1)}s
- Speed: ${pagesPerSecond.toFixed(1)} pages/sec
- Memory usage: ${heapMB.toFixed(1)}MB
`);
}
CI/CD Integration
Track build performance over time:
# .github/workflows/build-monitor.yml
name: Build Performance Monitor
on: [push, pull_request]

jobs:
  build-perf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '22'
      - run: npm ci
      - name: Build with timing
        run: |
          echo "BUILD_START=$(date +%s)" >> $GITHUB_ENV
          npm run build
          echo "BUILD_END=$(date +%s)" >> $GITHUB_ENV
      - name: Report performance
        run: |
          BUILD_TIME=$((BUILD_END - BUILD_START))
          echo "Build completed in ${BUILD_TIME} seconds"
          # Send to your analytics/monitoring system
Troubleshooting Common Issues
Out of Memory Errors
Symptoms:
FATAL ERROR: Ineffective mark-compacts near heap limit
JavaScript heap out of memory
Solutions:
- Increase --max-old-space-size
- Reduce build concurrency
- Clear cache: rm -rf .cache node_modules/.vite
- Check for memory leaks in your code
Slow API Responses
Symptoms:
- Build hangs on certain pages
- Inconsistent build times
- Network timeout errors
Solutions:
- Implement request timeout and retry logic
- Use caching aggressively
- Batch API requests when possible
- Consider API rate limiting
// Robust fetch with retries (uses AbortSignal.timeout, Node 18.17+)
async function fetchWithRetry(url, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      const response = await fetch(url, {
        signal: AbortSignal.timeout(10000) // 10 second timeout
      });
      if (response.ok) return response;
      if (i === retries - 1) throw new Error(`HTTP ${response.status}`);
      // Wait before retry (linear backoff)
      await new Promise(resolve => setTimeout(resolve, 1000 * (i + 1)));
    } catch (error) {
      if (i === retries - 1) throw error;
      await new Promise(resolve => setTimeout(resolve, 1000 * (i + 1)));
    }
  }
}
Inconsistent Build Times
Symptoms:
- Build time varies significantly between runs
- Some builds much slower than others
Solutions:
- Implement consistent caching strategy
- Check for system resource contention
- Monitor CPU/memory usage during builds
- Use fixed versions for all dependencies
Conclusion and Next Steps
Optimizing Astro builds for large SSG sites is entirely achievable with the right approach. The key takeaways:
Quick Wins (Implement First)
- ✅ Upgrade Node.js and Astro
- ✅ Increase memory allocation
- ✅ Tune build concurrency (start with 4)
- ✅ Configure Vite for speed
Medium Effort (High Impact)
- ✅ Implement smart caching for API calls
- ✅ Optimize your astro.config.mjs
- ✅ Monitor and measure build performance
Advanced Optimizations
- ✅ Cache prewarming for very large sites
- ✅ Hardware upgrades if building frequently
- ✅ Custom fetch implementations with retry logic
Remember the Core Principle
SSG isn’t just about static content - it’s about economic efficiency at scale. When crawlers and bots drive most of your traffic, serving pre-built files is far more cost-effective than processing every request server-side.
Ready to Learn More?
If you’re new to Astro or want to explore more advanced topics, check out:
- Build your first Astro blog for free
- Enhance your blog with YouTube videos
- Create real-time apps with Astro and Convex
- Deploy Astro apps to Vercel with Convex
The full case study with detailed logs and configurations is available in the original Reddit thread.
Have questions about optimizing your specific Astro setup? The techniques in this guide have been tested on real-world sites with hundreds of thousands of pages. Start with the quick wins, measure your improvements, and gradually implement the more advanced optimizations as needed.
Happy building! 🚀