[Troubleshooting] “No space left on device” Even Though df -h Looks Normal

During server operations, you may encounter a “No space left on device” error even though disk usage checked with df -h shows sufficient free space.
In most cases, this issue is not caused by a lack of physical disk capacity, but by inode exhaustion or deleted-but-still-open files (so-called zombie files).

This article is based on the ext4 filesystem.


Symptoms (The Paradox) #

Error Log #

cp: cannot create regular file 'backup.tar': No space left on device
Error: ENOSPC: no space left on device, write

Disk Check (df -h) #

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       100G   40G   60G  40% /

Situation #

The disk still has 60 GB of free space, yet the system reports that no space is available.


Cause 1: Inode Exhaustion (Most Common Cause) #

A filesystem has limits not only on capacity, but also on the number of files (inodes) it can manage.
Even if disk space is available, no new files can be created once inodes are exhausted.

How to Check #

df -i
Filesystem       Inodes   IUsed    IFree IUse% Mounted on
/dev/sda1       6553600 6553600        0  100% /

If IUse% is 100%, the root cause is inode exhaustion.
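
On ext4 (which this article assumes), the inode count is fixed when the filesystem is created and cannot be grown afterwards. You can read the exact numbers straight from the superblock; the device name below is just the example from above:

tune2fs -l /dev/sda1 | grep -i inode

If inodes are truly exhausted on ext4, the only remedies are deleting files or recreating the filesystem with more inodes (mkfs.ext4 -N, or a smaller -i bytes-per-inode ratio).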


Finding the Culprit (Where File Counts Exploded) #

# Count files per top-level directory (may take time)
find / -xdev -type f 2>/dev/null | cut -d "/" -f 2 | sort | uniq -c | sort -nr
# Count files under each subdirectory in the current path
for i in *; do echo -n "$i: "; find "$i" -type f 2>/dev/null | wc -l; done
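
With GNU coreutils 8.22 or newer, du can count inodes directly, which is usually easier to read than the find pipelines above (the version requirement is an assumption about your system; check with du --version):

# Inode usage per top-level directory, largest first
du --inodes -x -d 1 / 2>/dev/null | sort -nr | head -20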

Common Problematic Paths #

/var/spool/postfix
/var/lib/php/sessions
/var/log
/usr/src/linux-headers

Why These Directories Become the Culprit #

These directories accumulate enormous numbers of tiny files: session files, deferred mail, rotated logs, kernel header files. The problem is not disk size but inode slots: a “No space” error of this kind means the disk’s parking spaces (inodes) are full, not its weight capacity (bytes).

  • 1 file = 1 inode
    Even a 0-byte file consumes one inode, as the quick check below demonstrates.
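
A quick check that an empty file really does consume an inode (using /tmp as a scratch location):

touch /tmp/empty-inode-test
stat -c 'inode=%i size=%s bytes' /tmp/empty-inode-test
rm /tmp/empty-inode-test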

Conclusion #

If a single directory accounts for the majority of the total inodes reported by df -i (for example 80–90%), that directory is the direct cause of the failure, regardless of how small the individual files are.


Caution Before Deleting Files #

Not everything that “looks like a lot of files” is safe to delete.
Always identify what the files actually are.

Safe to Delete (Garbage / Cache) #

  • sess_* (PHP sessions)
  • postfix/defer (mail spool)
  • Temporary cache files
  • Old files based on mtime

Do NOT Delete (System / Data) #

  • /var/lib/docker/overlay2
    Never delete manually; doing so corrupts Docker’s layer metadata and breaks the container system. (Use docker system prune instead.)
  • Service data: thumbnails, active logs, database files, etc.

Tip:
Do not blindly run rm. First inspect the files with:

ls -lh | head

to confirm their names, sizes, and modification times before deciding what is safe to remove.
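
Once you are sure the files are disposable, a minimal cleanup sketch for the PHP session example (the path is the Debian/Ubuntu default and the 2-day threshold is an assumption; adjust both for your system):

# Dry run: list what would be deleted
find /var/lib/php/sessions -name 'sess_*' -type f -mtime +2 -print | head
# Actual deletion
find /var/lib/php/sessions -name 'sess_*' -type f -mtime +2 -delete

Note that rm sess_* fails with “Argument list too long” once a directory holds millions of files; find -delete streams through them without hitting that limit.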


Cause 2: Deleted but Still Open Files #

This occurs when a log file is deleted with rm, but a running process continues to hold it open.
In this case the space is still consumed: df counts it as used, but ls and du no longer see the file, so the usage appears to come from nowhere.

How to Check #

lsof | grep deleted
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java      1234  root   1w   REG  8,1    50G      101  /var/log/app.log (deleted)

Meaning #

The java process is holding a deleted log file and occupying 50 GB of disk space.
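
When several processes hold deleted files, it helps to total the reclaimable space first. A rough sketch that sums the SIZE/OFF column of the default lsof output (for some descriptors this column is an offset rather than a size, so treat the result as an estimate):

lsof -nP 2>/dev/null | awk '/\(deleted\)/ {sum += $7} END {printf "%.2f GiB held by deleted-but-open files\n", sum/1024^3}'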

Resolution #

Most reliable solution: Restart the process

systemctl restart <service_name>

If a restart is not possible, truncate the file through /proc as a temporary workaround (1234 and 1 are the PID and FD from the lsof output above):

cp /dev/null /proc/1234/fd/1
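
An equivalent form uses truncate from coreutils:

truncate -s 0 /proc/1234/fd/1

Either way the blocks are released immediately, but the process keeps writing to the same descriptor, so the lasting fix remains a restart plus proper log rotation.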

Cause 3: Docker / OverlayFS Issues #

This frequently occurs in container environments.
If Docker’s overlay2 filesystem or container logs are not cleaned up, you may see ENOSPC errors inside containers even though df -h on the host looks normal.

Inspection and Cleanup #

# Remove stopped containers, unused networks, and (with -a) all images not used by a running container
docker system prune -a
# Check container log disk usage
du -h --max-depth=1 /var/lib/docker/containers
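
To keep container logs from growing unbounded in the first place, the json-file log driver can be capped in /etc/docker/daemon.json (the limits below are illustrative; the setting requires a daemon restart and applies only to containers created afterwards):

# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}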

Key Summary #

  • If you see “No space left on device” but df -h looks normal, do not assume capacity issues first
  • Step 1: Check inode exhaustion with df -i
  • Step 2: If inodes are fine, look for zombie files with lsof | grep deleted
  • Step 3: In Docker / Kubernetes environments, clean up overlay2 and container logs

Following this order helps you avoid unnecessary disk expansion and pointless application-level debugging.
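
Putting the checklist together, a minimal read-only triage sketch to run on a suspect host (the Docker step applies only if containers are in use):

#!/bin/sh
# Step 0: byte capacity
df -h
# Step 1: inode capacity
df -i
# Step 2: deleted-but-open files still holding space
lsof -nP 2>/dev/null | grep '(deleted)' | head
# Step 3: Docker container log growth
du -h --max-depth=1 /var/lib/docker/containers 2>/dev/null | sort -hr | head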



💡 Need Professional Support?
If you need deployment, optimization, or troubleshooting support for Zabbix, Kubernetes, or any other open-source infrastructure in your production environment, or if you are interested in sponsorships, ads, or technical collaboration, feel free to contact me anytime.

📧 Email: jikimy75@gmail.com
💼 Services: Deployment Support | Performance Tuning | Incident Analysis Consulting

📖 PDF eBook (Gumroad): Zabbix Enterprise Optimization Handbook
A single, production-ready PDF that compiles my in-depth Zabbix and Kubernetes monitoring guides.


Updated on 2025-12-23