How We Turned a $5 Compute Engine Into a Swiss Army Knife
Picture this: you're building an email forwarding manager for your organization, and you hit a wall. The Namecheap API requires IP whitelisting. Vercel wants $20/month for a static IP. Setting up a proper Cloud Run deployment with a static IP involves NAT gateways, VPC configurations, and more GCP wizardry than you want to deal with on a Tuesday night.
So what do you do? You grab the tiniest GCP Compute Engine instance money can buy (spoiler: it's about $5/month) and see just how much you can cram into it.
Here's the story of how we built our own miniature data center and probably violated every "separation of concerns" principle in the process. But hey, it works beautifully.
The Problem That Started It All
Let me paint you the picture. We're building an email forwarding management system for our organization. Think of it as a nice UI where team members can create email forwards like events@organization.com → john@gmail.com without having to log into domain registrar panels or remember cryptic API calls.
The technical setup seemed straightforward:
- Build a Next.js app with a clean interface for managing email forwards
- Hit the Namecheap API to create/update/delete forwarding rules
- Deploy on Vercel and call it a day
But here's where things got interesting. Namecheap's API has a security requirement: you have to whitelist the IP addresses that can make API calls. Makes sense from a security perspective, but it's a pain for serverless deployments where your IP changes constantly.
Our options were:
- Vercel Pro: $20/month for a static IP (ouch for a side project)
- Cloud Run + NAT Gateway: Technically possible but involves VPC setup, static IP reservations, and probably $15-30/month
- Compute Engine: $5/month for an e2-micro instance with a static IP
Guess which one we picked?
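For the curious, reserving a static IP and attaching it to a tiny instance is only a couple of gcloud commands. A minimal sketch, assuming an authenticated gcloud CLI; the resource names and region here are placeholders, not our actual setup:

```shell
# Reserve a static external IP (name and region are hypothetical)
gcloud compute addresses create forwarder-ip --region=us-east1

# Create the e2-micro instance and attach the reserved IP to it
gcloud compute instances create forwarder \
  --machine-type=e2-micro \
  --zone=us-east1-b \
  --address=forwarder-ip
```

That reserved address is the one you hand to Namecheap's API whitelist, and it survives instance restarts.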
The "Why Not Maximize It?" Moment
Once we had a Compute Engine instance running our email forwarding API, we looked at the resource usage and laughed. We were using maybe 10% of the available resources. That's when the dangerous thought crept in: "What else could we run on this thing?"
You know what's fun? Having a bunch of useful services scattered across different platforms, each with their own logins, their own monitoring, their own deployment pipelines. Said no one ever.
So we decided to see just how much we could squeeze into our tiny instance. The goal wasn't just to save money (though that's nice). It was to create a single, well-organized hub for all our self-hosted tools.
Enter Docker Compose: The Magic Orchestrator
Here's where Docker Compose became our best friend. Instead of manually managing different services, processes, and configurations, we could define everything in a single docker-compose.yml file and let Docker handle the orchestration.
Our final setup looks like this:
```yaml
version: '3.8'

services:
  proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - nginx-proxy

  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      - ROCKET_PORT=8080
      - ROCKET_ADDRESS=0.0.0.0
      - ADMIN_TOKEN=[REDACTED]
    volumes:
      - ./vaultwarden-data:/data
    networks:
      - nginx-proxy

  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - ./uptime-kuma-data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - nginx-proxy

  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - '9000:9000'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    networks:
      - nginx-proxy

  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: >
      --interval 300
      --cleanup
      --debug
    networks:
      - nginx-proxy

  peerjs-server:
    image: peerjs/peerjs-server:latest
    container_name: peerjs
    restart: unless-stopped
    command:
      - peerjs
      - --port
      - '9000'
      - --key
      - peerjs
      - --path
      - /peerjs
      - --allow_discovery
      - 'true'
      - --proxied
      - 'true'
      - --alive_timeout
      - '15000'
    networks:
      - nginx-proxy

networks:
  nginx-proxy:
    external: true
    name: nginx-proxy

volumes:
  portainer_data:
```

Let me break down what we crammed into this tiny machine and why each service earned its spot.
Service #1: Nginx Proxy Manager (The Traffic Controller)
First up: Nginx Proxy Manager. This is the crown jewel of our setup, the service that makes everything else possible.
Here's the problem: we have multiple services that need to be accessible via nice domain names with HTTPS certificates. Manually configuring Nginx, managing SSL certificates, and keeping everything updated is a nightmare.
Nginx Proxy Manager gives us:
- A beautiful web UI for managing reverse proxy configurations
- Automatic Let's Encrypt certificate generation and renewal
- Easy subdomain routing (email.example.com → email service, passwords.example.com → Vaultwarden, etc.)
- Access lists and basic authentication for sensitive services
Setting it up was surprisingly simple. Point a domain to your server's IP, access the admin panel on port 81, and start creating proxy hosts. Want passwords.yourdomain.com to route to your Vaultwarden instance? Two clicks and a minute later, you've got HTTPS-enabled password management.
The genius of this setup is that every other service can run on internal ports (like 8080, 3000, etc.) and only the proxy manager needs to expose ports 80 and 443 to the world.
Service #2: Vaultwarden (Password Management)
Next up: Vaultwarden, which is an unofficial Bitwarden server implementation written in Rust. Why not just use Bitwarden's cloud service? A few reasons:
- Cost: Bitwarden Premium is $10/year, but for organizations you need the business plan at $36/year per user
- Control: We want our password vault on infrastructure we control
- Performance: Local hosting means faster sync times
- Learning: It's fun to understand how these systems work under the hood
Vaultwarden is incredibly lightweight. It uses maybe 50MB of RAM and barely touches the CPU. The setup is dead simple:
```yaml
vaultwarden:
  image: vaultwarden/server:latest
  environment:
    - ROCKET_PORT=8080
    - ROCKET_ADDRESS=0.0.0.0
    - ADMIN_TOKEN=[REDACTED]
  volumes:
    - ./vaultwarden-data:/data
```

Point your Bitwarden client apps to passwords.yourdomain.com and boom. You've got enterprise-grade password management for the cost of a domain name.
The admin token gives you access to a web panel where you can invite users, monitor usage, and configure advanced settings. We've got the whole team using it now, and it's rock solid.
Since losing everyone's passwords would be catastrophic, we run automated external backups of the Vaultwarden data directory every 6 hours. A simple cron job tars up the data, keeps the last 14 backups locally for quick recovery, and uploads a copy to Google Cloud Storage for disaster recovery. At $0.02/GB/month, the peace of mind is worth way more than the cost.
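The backup routine itself is nothing exotic. A minimal sketch of the rotation logic, wrapped in a function so it's easy to schedule from cron; the function name, paths, and bucket are placeholders, and the cloud upload line is left commented out since it depends on your gsutil setup:

```shell
# Tar up a data directory, keep only the newest $keep local archives.
# All names here are hypothetical; adjust paths for your own setup.
backup_vaultwarden() {
  local data_dir="$1" backup_dir="$2" keep="${3:-14}"
  mkdir -p "$backup_dir"

  local stamp archive
  stamp="$(date +%Y%m%d-%H%M%S)"
  archive="$backup_dir/vaultwarden-$stamp.tar.gz"

  # Snapshot the data directory into a timestamped tarball
  tar -czf "$archive" -C "$(dirname "$data_dir")" "$(basename "$data_dir")"

  # Prune everything beyond the newest $keep archives
  ls -1t "$backup_dir"/vaultwarden-*.tar.gz | tail -n +"$((keep + 1))" | xargs -r rm -f

  # Off-site copy for disaster recovery (bucket name is a placeholder)
  # gsutil cp "$archive" "gs://your-backup-bucket/vaultwarden/"
}
```

A crontab entry like `0 */6 * * * /opt/scripts/backup_vaultwarden.sh` (path hypothetical) then runs it every 6 hours.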
Service #3: Uptime Kuma (Monitoring Everything)
Here's something they don't tell you about running your own infrastructure: stuff breaks. Services go down, SSL certificates expire, APIs start returning errors, and you won't know until someone complains.
Uptime Kuma is like having a tireless intern who checks on all your services 24/7 and sends you notifications when something goes wrong.
What makes Uptime Kuma special:
- Monitoring everything: HTTP/HTTPS endpoints, TCP ports, ping tests, DNS lookups, database connections
- Beautiful dashboard: Clean, modern interface that actually looks good
- Flexible notifications: Discord, Slack, email, webhook, you name it
- Status pages: Generate public status pages for your services
- Docker socket monitoring: Since we mounted /var/run/docker.sock, it can even monitor Docker containers directly
Setting up monitoring for our email forwarding API was as simple as adding the endpoint URL and configuring Discord notifications. Now we get instant alerts if the API goes down, response times spike, or SSL certificates are about to expire.
You can actually see our Uptime Kuma instance in action at status.hackpsu.org - it's monitoring all our HackPSU services and provides a public status page that shows real-time uptime statistics and response times for everything from our main website to our internal APIs.
The psychological benefit is huge. Instead of constantly wondering "is everything still working?", you can trust that you'll know immediately if something breaks.
Service #4: Portainer (Container Management)
Managing Docker containers from the command line is fine when you have 2-3 services. When you have 6+ containers with different configurations, volumes, networks, and dependencies, a GUI becomes really valuable.
Portainer gives you:
- Visual container management: See all your containers, their status, resource usage, and logs in one place
- Easy updates: Click a button to pull new images and restart containers
- Volume management: Browse and manage Docker volumes without cryptic CLI commands
- Network visualization: Understand how your containers are connected
- Template system: Deploy common applications with pre-built templates
The killer feature for us is the logging interface. When something goes wrong, you can instantly view container logs, filter by time ranges, and download logs for analysis. No more docker logs container_name | grep error | tail -100.
Service #5: Watchtower (Automatic Updates)
Security updates are important. Feature updates are nice. But manually updating 6+ Docker containers every week? That's a recipe for neglect and vulnerabilities.
Watchtower solves this by automatically updating your Docker containers when new images are available. Here's how it works:
- Every 5 minutes (configurable), Watchtower checks Docker Hub for newer versions of your images
- If it finds an update, it gracefully stops the old container
- Pulls the new image
- Starts a new container with the same configuration
- Cleans up the old image to save disk space
```yaml
watchtower:
  image: containrrr/watchtower:latest
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  command: >
    --interval 300
    --cleanup
    --debug
```

The --cleanup flag is crucial. Without it, old Docker images accumulate and you'll run out of disk space fast on a small instance.
We've been running this for months, and it's updated everything from security patches to major feature releases without any manual intervention. The peace of mind is incredible.
Service #6: PeerJS Server (WebRTC Signaling)
This one's a bit more niche, but hear me out. We occasionally build applications that need real-time communication between browsers. WebRTC is perfect for this, but it needs a signaling server to help peers find each other.
Instead of using a third-party service or deploying a separate Node.js app, we just throw a PeerJS server into our Docker Compose stack:
```yaml
peerjs-server:
  image: peerjs/peerjs-server:latest
  command:
    - peerjs
    - --port
    - '9000'
    - --key
    - peerjs
    - --path
    - /peerjs
    - --allow_discovery
    - 'true'
    - --proxied
    - 'true'
    - --alive_timeout
    - '15000'
```

Now any WebRTC application we build can use peer.yourdomain.com/peerjs as its signaling server. The resource usage is minimal unless you're actively using it, but having it available means one less external dependency for future projects.
The Email Forwarding Manager (The Original Purpose)
And of course, running alongside all these containerized services, we have our original email forwarding manager. It's a Next.js application with a clean React interface for managing Namecheap email forwards.
The frontend is built with React Table for filtering and sorting forwards, plus a dialog system for creating new forwarding rules:
```typescript
// Simplified version of our forwarding management interface
const handleDelete = async (mailbox: string, forwardTo: string) => {
  try {
    const response = await fetch(
      `/api/email?mailbox=${encodeURIComponent(mailbox)}&forwardTo=${encodeURIComponent(forwardTo)}`,
      { method: 'DELETE' },
    )
    if (!response.ok) {
      throw new Error('Failed to delete forwarding rule')
    }
    // Refresh the data and notify
    const res = await fetch('/api/email')
    const data = await res.json()
    if (data.ok) {
      onEntriesChange(data.ok)
      toast.success('Forwarding rule deleted successfully')
    }
  } catch (err) {
    toast.error('Failed to delete forwarding rule')
  }
}
```

The backend is a simple Next.js API route that interfaces with the Namecheap API:
```typescript
export async function DELETE(req: NextRequest) {
  const { searchParams } = req.nextUrl
  const mailbox = searchParams.get('mailbox')
  const forwardTo = searchParams.get('forwardTo')

  if (!mailbox || !forwardTo) {
    return NextResponse.json(
      { error: 'mailbox and forwardTo query params are required' },
      { status: 400 },
    )
  }

  const list = await getEmailForwarding()
  const filtered = list.filter(
    (e) => !(e.mailbox === mailbox && e.forwardTo === forwardTo),
  )
  await setEmailForwarding(filtered)

  return NextResponse.json({ ok: true })
}
```

The beauty is that this runs directly on the Compute Engine instance using PM2 for process management, giving us the static IP we need for Namecheap's API whitelist requirements.
Resource Usage: The Numbers
Here's the fun part. After cramming all these services into a single e2-micro instance (2 shared-core vCPUs, 1GB RAM), let's look at the actual resource usage:
Memory Usage:
- Nginx Proxy Manager: ~80MB
- Vaultwarden: ~50MB
- Uptime Kuma: ~120MB
- Portainer: ~60MB
- Watchtower: ~20MB
- PeerJS Server: ~30MB
- Email Forwarding App: ~200MB
- System overhead: ~200MB
Total: ~760MB out of 1GB available
CPU Usage: Typically under 10% unless we're actively using multiple services
Storage: About 5GB used out of 10GB available (mostly Docker images and data volumes)
The instance handles everything beautifully. Response times are snappy, services are stable, and we've got room to grow.
The Networking Magic
The networking setup deserves special mention because it's what makes this whole thing possible. Every service runs in a shared Docker network called nginx-proxy:
```yaml
networks:
  nginx-proxy:
    external: true
    name: nginx-proxy
```

This means:
- Services can communicate with each other using container names (like http://vaultwarden:8080)
- Only the proxy manager exposes ports to the internet
- Everything else is protected behind the reverse proxy
- SSL termination happens once at the proxy level
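One wrinkle worth noting: because the network is declared external, Docker Compose won't create it for you. It has to exist before the stack first comes up, which is a one-time setup:

```shell
# Create the shared network the compose file expects (one-time)
docker network create nginx-proxy

# Then bring up (or update) every service in the background
docker compose up -d
```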
When you want to add a new service, you just:
- Add it to the Docker Compose file on the nginx-proxy network
- Create a new proxy host in Nginx Proxy Manager
- Point it to http://container_name:port
- Enable SSL with one click
It's like having your own private cloud with enterprise-grade networking, but simpler.
What We Learned (And What Went Wrong)
The Good:
- Running multiple services on one instance is surprisingly stable
- Docker Compose makes complex deployments feel simple
- Automatic updates with Watchtower are a game-changer
- Having everything in one place reduces cognitive overhead significantly
The Tricky:
- Resource monitoring becomes crucial. One misbehaving service can impact everything
- Backup strategies get more complex when you have multiple data volumes
- Docker logs can fill up disk space if you don't configure log rotation
- Some services don't play nice with shared resources (looking at you, services that assume they own port 80)
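On the log-rotation point: Docker's default json-file driver grows without bound unless you cap it. One way to do that globally is via the daemon config; a sketch, with the size limits chosen arbitrarily for a 10GB disk:

```shell
# Cap each container's log at 3 files of 10MB; restart the daemon to apply
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
sudo systemctl restart docker
```

Note this only affects containers created after the change, so existing containers need to be recreated to pick it up.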
The Surprising:
- Our tiny instance handles traffic spikes better than expected
- The psychological benefit of "everything just works" is huge
- Other team members started requesting access to add their own services
Security Considerations
Running multiple services on one instance requires extra attention to security:
- Network isolation: Using Docker networks to prevent unnecessary inter-service communication
- Access control: Nginx Proxy Manager access lists to restrict sensitive services
- Regular updates: Watchtower keeps everything patched automatically
- Monitoring: Uptime Kuma alerts us to any suspicious behavior
- Firewall: Only ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) are open to the world
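On GCP, that firewall posture is a single ingress rule, since everything not explicitly allowed is denied by default. A sketch, with a placeholder rule name and assuming the instance sits on the default VPC network:

```shell
# Allow only SSH, HTTP, and HTTPS from the outside world
gcloud compute firewall-rules create allow-ssh-web \
  --direction=INGRESS \
  --allow=tcp:22,tcp:80,tcp:443
```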
We also use Cloudflare as a CDN/proxy layer, which adds DDoS protection and hides our server's real IP from casual discovery.
What's Next?
We're thinking about adding some more fun (and useful) tools to our little server:
- Link shortener: Something like Shlink so we can have custom short URLs like go.hackpsu.org/apply instead of those ad-infested link shorteners
- File drop: A simple FileBrowser instance for when we need to quickly share files with the team
- Quick notes: HedgeDoc for collaborative markdown notes during meetings
- QR code generator: A simple web app for generating QR codes for event check-ins
- Team dashboard: Something that shows our GitHub activity, uptime stats, and maybe a weather widget (because why not?)
But honestly? The current setup works so well that we're hesitant to mess with it. Sometimes the best engineering decision is knowing when to stop optimizing.
The Real Lesson
Here's the thing: this project taught us that you don't need a massive Kubernetes cluster or a complex microservices architecture to run serious infrastructure. Sometimes the best solution is the simplest one that meets your needs.
We needed a static IP for API access. We ended up with a complete self-hosted infrastructure platform that costs less than a couple of coffee drinks per month and gives us more control and functionality than most SaaS tools.
The key insights:
- Start with constraints: Having a tiny instance forced us to be efficient
- Embrace Docker: Containerization makes complex deployments manageable
- Automate everything: Watchtower, Uptime Kuma, and automatic SSL certificates mean less maintenance
- Monitor proactively: Know when things break before users do
- Keep it simple: The best architecture is the one you can understand and maintain
If you're building side projects or small team infrastructure, seriously consider the "tiny instance, maximum value" approach. Your wallet will thank you, your team will love the simplicity, and you might just learn something cool about systems administration along the way.
Plus, there's something deeply satisfying about seeing docker stats show a perfectly balanced resource usage across six different services, all humming along quietly in their little corner of the internet.

