How I reduced AWS networking costs by 93% while removing public attack surface
MOJAHID UL HAQUE
DevOps Engineer
I recently tackled a common but expensive challenge in AWS: the hidden cost of public IPv4 addresses.
In a setup with dozens of ECS Fargate tasks, my "In-use Public IP" charges were hitting hundreds of dollars per month. Beyond the cost, having backend workers exposed to the public internet was a security risk I wanted to eliminate.
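For a rough sense of the math (the address count here is illustrative, not my exact figure): AWS bills every in-use public IPv4 address at $0.005 per hour, which is about $3.65 per address per month, so 60 always-on task IPs alone come to roughly $220/month before any data transfer charges.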
The Fix: I transitioned the entire architecture to a private-first model.
1. Disabled Public IPs: Moved all Fargate tasks to private subnets within the VPC, with no public addresses assigned.
2. VPC Peering: Connected multiple VPCs using VPC Peering to enable secure, private communication between services across environments, with no internet routing required.
3. Optimized Routing: Worked through the DNS and route-table requirements so services reach each other seamlessly without needing a NAT Gateway.
4. Added a Public Load Balancer: Introduced an internet-facing Application Load Balancer to handle inbound traffic. Only the load balancer is publicly accessible; backend services remain private.

Rough CLI sketches of these steps follow below.
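For concreteness, here's an AWS CLI sketch of steps 1-3. Every name and ID below (cluster, service, subnets, security groups, VPC, route-table, and pcx- peering IDs) is a placeholder, not a value from my actual setup:

```bash
# Step 1: run the Fargate service in private subnets with no public IP.
aws ecs update-service \
  --cluster app-cluster \
  --service worker-service \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-priv1,subnet-priv2],securityGroups=[sg-workers],assignPublicIp=DISABLED}'

# Step 2: peer the two VPCs (create on the requester side, accept on the peer).
aws ec2 create-vpc-peering-connection --vpc-id vpc-aaa111 --peer-vpc-id vpc-bbb222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789abcdef0

# Step 3: route each VPC's traffic for the other's CIDR over the peering link...
aws ec2 create-route --route-table-id rtb-aaa111 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-bbb222 \
  --destination-cidr-block 10.0.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# ...and let DNS names resolve to private IPs across the peering connection.
aws ec2 modify-vpc-peering-connection-options \
  --vpc-peering-connection-id pcx-0123456789abcdef0 \
  --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
  --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true
```

One caveat worth flagging: private tasks still need a path to pull images and ship logs. Peering and routing covered it in my layout, but in setups where it doesn't, VPC endpoints for ECR, S3, and CloudWatch are the usual NAT-free answer.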
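And a similar sketch for step 4, the internet-facing ALB in front of the private services (again, names, ports, and ARNs are placeholders):

```bash
# The ALB lives in public subnets; it is the only internet-facing piece.
aws elbv2 create-load-balancer \
  --name public-edge-alb \
  --scheme internet-facing \
  --type application \
  --subnets subnet-pub1 subnet-pub2 \
  --security-groups sg-alb

# Target type "ip" because awsvpc-mode Fargate tasks register by ENI address.
aws elbv2 create-target-group \
  --name api-targets \
  --protocol HTTP --port 8080 \
  --vpc-id vpc-aaa111 \
  --target-type ip \
  --health-check-path /healthz

# Forward inbound traffic to the private targets.
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn-from-the-first-command> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```

The piece that actually enforces "only the load balancer is public" is the workers' security group: it should allow ingress on the app port from sg-alb only, with no 0.0.0.0/0 rules.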
The Results:
- Cost: Monthly networking spend for public IPs was eliminated entirely, replaced by a much smaller, fixed endpoint fee.
- Security: Drastically reduced the attack surface by ensuring backend workers are no longer reachable from the internet.
- Efficiency: The system is now more robust, secure, and cost-predictable.
Originally posted on LinkedIn
Related Posts
Scaling Applications on AWS (Real Example)
See how to scale an application on AWS with a real architecture example covering stateless compute, data bottlenecks, caching, queues, and rollout safety.
AWS ECS Mumbai has mood swings - DevOps engineer perspective
As a DevOps engineer, I've basically accepted that AWS ECS Mumbai has mood swings. Once or twice a month, it just… decides it's done with life. Deploy? Maybe. Pull images? If it feels like it. Random crash? Always a crowd pleaser. And of course, the AWS status page sits there smiling like everything's perfectly normal. Meanwhile, I'm digging through IAM, logs, task defs, pipelines, wondering if I forgot how computers work… only to realize it's just Mumbai taking a personal day again. But who gets blamed? "DevOps can't deploy." Yes. Clearly, I woke up and told ECS to stop doing its job. At this point, I just want a little stability and a status page that doesn't gaslight me while the region is on vacation.
DevOps Rescue Story: Recovering an EC2 Instance Without a PEM Key
"Lost PEM? No SSH? SSM dead? Don't panic — AWS always leaves a backdoor for those who know where to look." Yesterday I ran into one of those heart-sinking moments: an EC2 instance was completely locked out. - PEM key gone → SSH impossible - SSM agent broken → root volume full, wouldn't start even after EBS expansion - EC2 Instance Connect failing Basically… the instance was bricked. Or so it seemed. The Recovery Playbook I Followed 1. Spun up a helper EC2 instance with a fresh key pair. 2. Detached the root volume from the locked instance → attached it to the helper. 3. Mounted the volume → discovered the partition still capped at 100GB even though the EBS size was already 150GB. 4. Ran growpart + resize2fs → filesystem finally stretched to the full 150GB. (49GB free instantly.) 5. Cleared old logs and temp files for breathing room. 6. Added a new SSH public key into ~/.ssh/authorized_keys. 7. Detached the fixed root volume → reattached it back to the original instance. 8. Rebooted → boom! SSH worked with the new PEM, and the SSM Agent sprang back to life.