How We Cut Infrastructure Costs from 4万円 to 5千円/Month: Building a Self-Hosted PaaS with Coolify

We were paying 4万円/month across Vercel, Heroku, and Netlify. By self-hosting with Coolify, we brought that down to under 5千円/month — saving the equivalent of roughly 45万円/year. Here is the full story.
CEO / Engineer
TL;DR
- Cut monthly infrastructure costs from 4万円 to 5千円 by migrating to Coolify
- Running 5 web services stably on Xserver VPS 2GB plan (1,150円/month)
- Automated SSL, GitHub integration, and multi-site hosting — all for free
- 45万円/year in savings after 3 months of operation
Introduction
At the start of 2024, our accountant showed me the cloud services invoice. I was genuinely alarmed.
Vercel: $60. Heroku: $50. Netlify: $19. AWS dev environment: $150. That is roughly 4万円/month — nearly 48万円/year. Close to one engineer's monthly salary, gone just to keep servers running.
At first I told myself this was just the cost of cloud-native development. But as the number of projects grew, so did the bill. Something had to change.
And really, why were we paying so much? Vercel is convenient — git push and your app is live. But under the hood, it is just Docker containers running. Same with Heroku.
"Could we do this ourselves?"
That question started everything.
The Problem: What We Were Actually Paying
Here is the breakdown. It is a bit raw, but this was our reality.
Vercel Pro ($60/month)
- Corporate website
- Landing page
- Blog
$20 per project. The "Pro" plan comes with team features and Analytics — neither of which we were really using.
Heroku ($50/month)
- Main API server (Dyno Standard): $25
- Batch processing server (Dyno Standard): $25
When Heroku killed its free tier, we had no choice but to upgrade. Add the PostgreSQL add-on and that is another $9.
Netlify ($19/month)
- Static site hosting
- Form handling
- Identity features
Started on the free plan, hit bandwidth limits, moved to paid.
AWS EC2 ($150/month)
- Dev and staging environments
- t3.medium × 2 instances
- RDS (PostgreSQL)
- ELB
Our dev environment was costing more than production. We kept telling ourselves it was "good AWS practice."
Total: ~$279/month (approximately 4万円)
Every month, charged to the company card.
Discovering Coolify
We looked at Kubernetes first — the classic "if we're serious, we should use k8s" instinct. But the more we researched, the more obvious it became that it was overkill. Standing up a Kubernetes cluster to run five web apps is like driving a Ferrari to the convenience store. Cool, sure, but the maintenance and learning costs are not trivial.
We considered Docker Swarm — simpler than Kubernetes, but still too much for our scale. Dokku looked promising but everything is CLI-based, with no UI, which was a dealbreaker for us.
Then Coolify appeared in GitHub trending.
The README stopped me: "Open-source & self-hostable Heroku / Netlify alternative." That was exactly what we were looking for. The screenshots showed a surprisingly polished dashboard.
We decided to try it, half-skeptical.
Options We Considered
| Solution | Learning Curve | Monthly Cost | Operational Complexity | Decision |
|---|---|---|---|---|
| Kubernetes | Very high | $200+ | High | ❌ Overkill |
| Docker Swarm | High | $100+ | Medium–High | ❌ Too complex |
| Dokku | Medium | $10+ | Medium | 🤔 Considered |
| Coolify | Low | $10+ | Low | ✅ Adopted! |
Implementation: The 3-Week Migration
Week 1: Experimenting in a Test Environment
On a Friday evening after everyone had left the office, I signed up for Xserver VPS on their smallest plan (830円/month). Cheap enough that failure would not hurt.
We chose Xserver simply because it is a Japanese service — easier billing. The technical justification came later.
I SSH'd in and ran the command from the README:
```bash
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
```

The installer set up Docker and launched Coolify via docker-compose. Done in about 5 minutes.
I opened a browser. A polished dashboard appeared. Created an account, connected GitHub, selected a repository, chose a branch, clicked Deploy.
15 minutes later, the first Next.js app was running.
"Wait, that's it?"
It was almost anticlimactic. Nearly identical to the Vercel experience — except this was running on our own server.
Week 2: Setting Up Production
On Monday morning I reported back: "Coolify works." Our CTO's eyes lit up.
We provisioned Xserver VPS 2GB (1,150円/month) for production. Modest specs, but more than enough for our scale:
- CPU: 3 cores
- Memory: 2GB
- SSD: 50GB NVMe
- Bandwidth: Unlimited
Setup was the same as before. Coolify was running in under 30 minutes.
First production deploy. Build started… then crashed.
```
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
```
We could have upgraded the plan, but that defeats the purpose. We needed to make 2GB work.
It turns out you can extend available memory with a swap file:
```bash
# Create a 4GB swap file
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

We also bumped Node.js's memory limit via Coolify's environment variable settings:

```bash
NODE_OPTIONS=--max-old-space-size=3072
```

Second attempt: success. Builds are a bit slower, but once you deploy to production you rarely touch it again.
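The 3072 figure was not pulled from thin air: with 2GB of RAM plus the 4GB swap file, we wanted Node's heap capped at roughly half of total virtual memory, leaving room for the OS and the other containers. A minimal sketch of that rule of thumb (the 50% ratio is our own heuristic, not a Coolify or Node recommendation):

```shell
# Rough heap-sizing helper: cap Node's old-space at ~50% of RAM + swap.
# The 50% ratio is our own rule of thumb, not an official recommendation.
heap_limit_mb() {
  ram_mb=$1   # physical RAM in MB
  swap_mb=$2  # swap in MB
  echo $(( (ram_mb + swap_mb) / 2 ))
}

# 2GB RAM + 4GB swap -> the 3072 we set in NODE_OPTIONS
heap_limit_mb 2048 4096  # -> 3072
```

If you later resize the VPS or the swap file, recompute the cap rather than carrying the old number forward.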
Week 3: Going Multi-Site
Now the real migration. Three projects on Vercel, two API servers on Heroku — all moving to Coolify.
We were nervous. Could one VPS handle five apps? What about port conflicts? Domain routing? SSL certificates?
Coolify handled all of it.
Each new app gets an automatically assigned port. Traefik, the built-in reverse proxy, routes traffic by domain name. It felt like magic.
Here is what the final architecture looks like:
```mermaid
graph TB
    subgraph "Internet"
        User[Users]
    end
    subgraph "Xserver VPS (1,150円/month)"
        subgraph "Coolify Platform"
            Proxy[Traefik Proxy :80/:443]
            subgraph "Applications"
                Corp[Corporate Site<br/>Next.js :3000]
                API[API Server<br/>Express :4000]
                Admin[Admin Dashboard<br/>React :5000]
                Blog[Blog<br/>Astro :6000]
                LP[Landing Page<br/>Static HTML :7000]
            end
            subgraph "Management"
                CoolifyUI[Coolify UI :8000]
            end
        end
    end
    User --> Proxy
    Proxy --> Corp
    Proxy --> API
    Proxy --> Admin
    Proxy --> Blog
    Proxy --> LP
```

Each app runs in its own isolated Docker container. Combined memory usage across all five sits around 1.5GB — with headroom to spare.
Operations: Best Practices From 3 Months of Running This
1. Bulk Environment Variable Management
Early on, I was typing environment variables one by one into the Coolify UI. NODE_ENV, DATABASE_URL, API_KEY — 20+ variables entered by hand. Predictably, I made typos.
The fix: use a .env.production file and paste the whole thing at once.
```bash
# .env.production
NODE_ENV=production
DATABASE_URL=postgresql://user:pass@localhost/db
NEXT_PUBLIC_API_URL=https://api.example.com
REDIS_URL=redis://localhost:6379
JWT_SECRET=your-secret-key
# ... other environment variables
```

Paste this into Coolify's environment variable screen. Done in 5 seconds.
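Before pasting, it is worth linting the file for the mistakes that bit us when typing by hand: malformed lines and duplicate keys (a later duplicate silently overrides the earlier one). A small sketch of such a check — the function and its rules are our own convention, not a Coolify feature:

```shell
# Sanity-check a .env file before pasting it into Coolify.
# Flags lines that are not comments, blanks, or KEY=value,
# and duplicate keys (which silently override each other).
lint_env() {
  file=$1
  bad=$(grep -Evc '^(#|$|[A-Za-z_][A-Za-z0-9_]*=)' "$file" || true)
  dups=$(grep -Eo '^[A-Za-z_][A-Za-z0-9_]*=' "$file" | sort | uniq -d || true)
  if [ "$bad" -eq 0 ] && [ -z "$dups" ]; then
    echo "OK"
  else
    echo "malformed lines: $bad, duplicate keys: ${dups:-none}"
  fi
}

# Quick demo on a throwaway file
printf 'NODE_ENV=production\nPORT=3000\n' > /tmp/example.env
lint_env /tmp/example.env  # -> OK
```

Thirty seconds of linting beats debugging a deploy that fails because `API_KEY` was pasted twice with different values.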
2. Cutting Deploy Time by 50%
Our first deploys took 8 minutes. Waiting 8 minutes for every single-line change is painful.
We rewrote the Dockerfile using multi-stage builds to maximize layer caching:
```dockerfile
# Stage 1: Install production dependencies (changes rarely — highly cacheable)
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# Stage 2: Install all dependencies and build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 3: Lean production image
FROM node:18-alpine AS runner
WORKDIR /app
# COPY needs --from=<stage> here; a plain COPY reads from the build context
COPY --from=builder /app/package*.json ./
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
EXPOSE 3000
CMD ["npm", "start"]
```

Results:
- First build: 8 minutes (unchanged)
- Subsequent builds: 2–4 minutes (only changed layers rebuild)
- After adding a package: ~5 minutes
Subjectively, it now feels faster than Vercel: because Coolify reuses the Docker layer cache intelligently, most of our deploys skip the expensive dependency-install step entirely.
3. Database Backup Strategy
We run PostgreSQL on Coolify too. Losing data would be catastrophic, so automated backups are non-negotiable.
Automated nightly backup at 3 AM:
```bash
#!/bin/bash
# /home/backup/backup.sh
DATE=$(date +%Y%m%d)
BACKUP_DIR="/home/backup/postgres"

# Dump all PostgreSQL databases
docker exec coolify-postgres pg_dumpall -U postgres > $BACKUP_DIR/backup_$DATE.sql

# Remove backups older than 7 days
find $BACKUP_DIR -name "backup_*.sql" -mtime +7 -delete

# Upload to S3-compatible storage (Wasabi)
rclone copy $BACKUP_DIR/backup_$DATE.sql wasabi:coolify-backup/
```

Wasabi costs about 500円/month and gives us unlimited backup retention. The peace of mind is worth every yen.
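A `find ... -delete` with the wrong expression can wipe the wrong files, so it pays to rehearse the retention rule on throwaway files before pointing it at real dumps. A self-contained dry run (all paths here are temporary and illustrative):

```shell
# Rehearse the 7-day retention rule on dummy files in a temp dir.
BACKUP_DIR=$(mktemp -d)

# A "fresh" backup (mtime = now) and a "stale" one (10 days old)
touch "$BACKUP_DIR/backup_new.sql"
touch -d '10 days ago' "$BACKUP_DIR/backup_old.sql"

# Same expression as the nightly script: delete dumps older than 7 days
find "$BACKUP_DIR" -name "backup_*.sql" -mtime +7 -delete

ls "$BACKUP_DIR"  # -> backup_new.sql
```

Note that `-mtime +7` means "strictly more than 7 whole days old", so a 7-day-old dump survives until day 8.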
4. Monitoring and Alerts
Coolify does not include built-in monitoring, so we use an external service.
UptimeRobot (1,000円/month) checks all our sites every 5 minutes and sends a Slack notification on downtime.
```
# Monitored endpoints
- https://corporate.example.com (every 5 minutes)
- https://api.example.com/health (every 5 minutes)
- https://admin.example.com (every 5 minutes)
```

Over 3 months of operation, total downtime has been about 15 minutes. Comparable to Vercel.
Results: The Numbers
Cost Reduction
| Item | Before | After | Reduction |
|---|---|---|---|
| Monthly cost | 40,000円 | 2,650円 | 93.4% reduction |
| Annual cost | 480,000円 | 31,800円 | 448,200円 saved |
Breakdown:
- Xserver VPS 2GB: 1,150円/month
- Backups (Wasabi): 500円/month
- Monitoring (UptimeRobot): 1,000円/month
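The table's figures follow directly from these line items. A quick reproduction of the arithmetic (using the ~40,000円 "before" total from earlier):

```shell
# Reproduce the cost table from the line items (amounts in yen)
before_monthly=40000
vps=1150; backup=500; monitoring=1000

after_monthly=$(( vps + backup + monitoring ))               # 2650
annual_saving=$(( (before_monthly - after_monthly) * 12 ))   # 448200
# Reduction in tenths of a percent (integer math): 934 -> 93.4%
reduction_permille=$(( 1000 - after_monthly * 1000 / before_monthly ))

echo "$after_monthly $annual_saving $reduction_permille"
```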
Unexpected Performance Gains
Performance actually improved:
| Metric | Vercel | Coolify | Improvement |
|---|---|---|---|
| TTFB | 450ms | 280ms | -38% |
| FCP | 1.2s | 0.9s | -25% |
| LCP | 2.5s | 1.9s | -24% |
Why:
- VPS is physically located in Tokyo — closer to our users
- Dedicated resources mean no noisy-neighbor effect
- Fine-grained tuning of the reverse proxy (Traefik) is possible
Developer Experience
- Deploy speed: On par with Vercel
- Flexibility: If Docker can run it, Coolify can run it
- No artificial limits: No build time caps, concurrency limits, or bandwidth quotas
Fully Automated SSL
When we were on Vercel, SSL was something we never thought about. It just worked.
Moving to Coolify, SSL was our biggest concern. Let's Encrypt certificates expire every 90 days. Miss a renewal, and every visitor sees a "Not Secure" warning. That was unacceptable.
The worry turned out to be completely unfounded.
In Coolify's settings, just type your domain with https:// instead of http://. That is literally all you do — Coolify fetches the Let's Encrypt certificate and renews it automatically before each 90-day expiry.
```
# In Coolify's domain settings
Domain: https://corporate.example.com  # Just use https://
```

Here is what happens under the hood:
```mermaid
sequenceDiagram
    participant B as Browser
    participant T as Traefik
    participant L as Let's Encrypt
    participant A as Application
    B->>T: HTTPS connection request
    T-->>T: No cert or cert expired?
    T->>L: Certificate request
    L->>T: Issue challenge
    T->>L: Challenge response
    L->>T: Issue certificate
    T->>T: Store certificate
    T->>A: Forward request
    A->>T: Response
    T->>B: HTTPS response
```

Initial certificate issuance takes under a minute. After that, it is fully automatic. You wake up and your certificates have already been renewed.
Troubleshooting: Real Issues We Hit
Deploy Succeeded but Got 404
We deployed the first app, visited the domain, and got a 404. Coolify's logs showed a successful deploy. The container was running. Still 404.
The culprit was DNS propagation delay. Changing DNS settings takes time to propagate globally. Our local DNS had updated, but the DNS server Coolify was querying had not.
```bash
# Check DNS propagation status
dig @8.8.8.8 yourdomain.com  # Google's DNS
dig @1.1.1.1 yourdomain.com  # Cloudflare's DNS

# To force an immediate refresh, restart the proxy
docker restart coolify-proxy
```

Waiting 30 minutes resolved it. Our rule now: make DNS changes at night, verify in the morning.
Disk Space Exhaustion
One month in, deploys started failing out of nowhere.
```bash
df -h
# /dev/sda1  50G  49G  0.5G  99%  /
```

Full disk. Docker build cache and images had been accumulating unnoticed.
```bash
docker system df
# Images      42    15    28.3GB
# Containers  23     5     1.2GB
# Volumes      8     8     3.5GB
```

We automated weekly cleanup:
```bash
# Runs every Sunday at 2 AM
crontab -e
0 2 * * 0 docker system prune -a -f --volumes
```

Problem solved. Disk usage now stays consistently around 30GB. (One caution: `--volumes` also removes volumes not attached to any container, so make sure database volumes are always referenced by a container before letting this run unattended.)
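To avoid being ambushed by a full disk again, we also like pairing the weekly prune with a simple early-warning check. A sketch of the idea — the 85% threshold and the function name are our own choices, not a Coolify feature:

```shell
# Warn before the disk fills up again. The 85% threshold is our choice.
disk_alert() {
  used_pct=$1     # current usage percent (integer)
  limit=${2:-85}  # alert threshold, default 85%
  if [ "$used_pct" -ge "$limit" ]; then
    echo "WARN: disk at ${used_pct}% (limit ${limit}%)"
  else
    echo "OK: disk at ${used_pct}%"
  fi
}

# In cron, feed it the real figure from df, for example:
#   disk_alert "$(df --output=pcent / | tail -1 | tr -dc '0-9')"
disk_alert 99  # -> WARN: disk at 99% (limit 85%)
```

Pipe the `WARN` line into whatever notifier you already use (Slack webhook, mail) and a 99%-full disk becomes an email instead of a failed deploy.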
GitHub Actions Integration Headache
We tried setting up a CI/CD pipeline that calls the Coolify API from GitHub Actions. Kept getting 401 errors.
After digging in, we found that Coolify API tokens rotate periodically. The token we stored in GitHub Secrets had expired.
The fix was switching to Webhook URLs instead. Webhook URLs do not rotate, so once configured you can forget about them.
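With the webhook approach, the whole Actions workflow reduces to one curl step. A sketch, not our exact file: it assumes the per-app deploy webhook URL from Coolify's UI has been stored as a repository secret, and the secret name `COOLIFY_WEBHOOK_URL` is our own invention.

```yaml
# .github/workflows/deploy.yml -- illustrative sketch
name: Deploy via Coolify webhook
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # COOLIFY_WEBHOOK_URL is the per-app deploy webhook copied from
      # Coolify's UI into GitHub Secrets (the secret name is ours)
      - name: Trigger Coolify deploy
        run: curl -fsS "${{ secrets.COOLIFY_WEBHOOK_URL }}"
```

Since the webhook URL never rotates, this is set-and-forget in a way the API-token version was not.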
Is Coolify Right for Your Team?
✅ Good fit
- Startups: Cost reduction is a top priority
- Small to mid-size teams: Up to ~10 applications
- Technical teams: Basic Linux and Docker knowledge on hand
- Teams that value flexibility: No restrictions on customization
❌ Not a good fit
- Large enterprises: Running hundreds of applications
- Non-technical teams: No server administration experience
- Global deployments: Need CDN presence worldwide
- Strict SLA requirements: 24/7 support is mandatory
Suggested Migration Roadmap
Based on our experience, we recommend a phased approach:
```mermaid
graph LR
    A[Week 1–2<br/>Test in dev environment] --> B[Week 3–4<br/>Build staging]
    B --> C[Month 2<br/>Migrate small projects]
    C --> D[Month 3<br/>Migrate production]
```

Key principles:
- Do not migrate everything at once
- Start small and expand gradually
- Build experience handling issues before they matter
Three Months In: Honest Reflections
Coolify is not perfect.
Occasionally there are errors with no obvious cause. There are moments of staring at logs going "why won't this work?" Documentation is in English, and there is not much of it — especially in Japanese. (That is partly why I am writing this.)
But the advantages more than make up for it.
First, the cost. 4万円/month became 5千円/month. That is 45万円/year. Enough to hire a junior engineer. Buy three new MacBooks. Take the team on a company trip.
Second, the freedom. On Vercel you run into walls — "this Node.js version is not supported," "you have exceeded your build time limit." On Coolify, if Docker can run it, you can deploy it. Python, Ruby, Go, Rust — no problem.
Third, the learning. This experience taught us a lot about infrastructure. We now actually understand how Docker, Traefik, and Let's Encrypt work. That knowledge is an asset.
Why I Wrote This
Honestly, because the lack of Coolify resources in Japanese made our journey much harder than it needed to be.
When errors came up, searches returned nothing. The official docs did not always resolve the issue. Discord questions took time to get answered.
But the struggle built up a lot of know-how, and I wanted to share it so someone else does not have to learn the hard way.
There is another reason too.
I suspect a lot of Japanese startups are significantly overpaying for infrastructure. Companies spending 10万円/month on Vercel, 20万円/month on Heroku — they exist.
Is that spend really justified?
For large enterprises, maybe. But for a startup, 10万円/month is real money. That money could be doing something better.
Closing Thoughts
Migrating to Coolify was a technical challenge. But more than that, it was a business decision.
"Convenient but expensive" versus "slightly more work but dramatically cheaper."
We chose the latter. It was the right call.
If you are struggling with infrastructure costs, give Coolify a serious look. The learning curve is real, but the savings waiting on the other side are too.
And with a leaner infrastructure budget, you can invest in what actually matters. That is the startup advantage.
Next Steps
If this piqued your interest, try it as a weekend project:
- Sign up for Xserver VPS on the cheapest plan (830円/month)
- Install Coolify (5 minutes)
- Connect a GitHub repo and deploy (10 minutes)
In 15 minutes, you will have your own personal PaaS.
Resources
Questions or feedback? Reach out at @aduce_tech.
I hope this article helps other startups wrestling with infrastructure costs.