How to free up space on an AWS EC2 Ubuntu instance running nginx and Jenkins?



I have an Ubuntu EC2 instance running nginx and Jenkins. There is no space left to install updates, and every command I try to free up space fails. On top of that, Jenkins now returns 502 Bad Gateway.

When I run sudo apt-get update I get a long list of errors, but the main one that stands out is E: Write error - write (28: No space left on device)

I have no idea why there is no more space or what caused it, but df -h gives the following output:

Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           394M  732K  393M   1% /run
/dev/xvda1       15G   15G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop1       56M   56M     0 100% /snap/core18/1988
/dev/loop3       34M   34M     0 100% /snap/amazon-ssm-agent/3552
/dev/loop0      100M  100M     0 100% /snap/core/10958
/dev/loop2       56M   56M     0 100% /snap/core18/1997
/dev/loop4      100M  100M     0 100% /snap/core/10908
/dev/loop5       33M   33M     0 100% /snap/amazon-ssm-agent/2996
tmpfs           394M     0  394M   0% /run/user/1000

I tried to free up space by running sudo apt-get autoremove, which gave me E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

I ran sudo dpkg --configure -a and got dpkg: error: failed to write status database record about 'libexpat1-dev:amd64' to '/var/lib/dpkg/status': No space left on device

Lastly, I ran sudo apt-get clean; sudo apt-get autoclean, which produced the following errors:

Reading package lists... Error!
E: Write error - write (28: No space left on device)
E: IO Error saving source cache
E: The package lists or status file could not be parsed or opened.

Any help to free up space and get the server running again will be greatly appreciated.


In my case, I have an app with nginx, PostgreSQL, and Gunicorn, all containerized. I followed these steps to solve my issue:

  1. First, I figured out which files were consuming the most storage
     using the command below (note the escaped `\;` that terminates `-exec`):

    sudo find / -type f -size +10M -exec ls -lh {} \;

  2. As the listing showed, it turns out that unused and Docker-related containers were the source.


  3. I then purged all unused, stopped, or dangling images:

    docker system prune -a

I was able to reclaim about 4.4 GB in the end!
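If the `find` listing from step 1 is too noisy to scan, a `du` summary is often a quicker way to spot the heavy directories. This is a sketch, not part of the original answer; the paths in the second command are typical locations on a Jenkins/Docker host and may differ on your instance:

```shell
# Summarize disk usage one level deep on the root filesystem, largest first.
# -x keeps du on a single filesystem, so it skips /proc, /dev and snap mounts.
sudo du -xh -d 1 / 2>/dev/null | sort -rh | head -n 15

# Then drill into the usual suspects (paths are typical, not guaranteed):
sudo du -sh /var/lib/docker /var/lib/jenkins /var/log 2>/dev/null
```

Sorting by human-readable sizes needs GNU `sort -h`; on Ubuntu this is the default `sort`.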
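As a sanity check before (or instead of) a full prune, `docker system df` reports how much of Docker's usage is actually reclaimable. Keep in mind that `prune -a` deletes every image not referenced by at least one container, so the next deploy will have to re-pull them. A hedged sketch, assuming Docker is installed:

```shell
# Show Docker's disk usage broken down by images, containers, local
# volumes, and build cache, including how much is reclaimable.
docker system df

# prune -a removes stopped containers, unused networks, dangling build
# cache, and ALL images not used by at least one container.
# Add --volumes only if you are sure no needed data lives in unused volumes.
docker system prune -a --volumes
```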

Answered By – Zekarias Taye Hirpo

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
