Last modified: March 23, 2026

This article is written in: 🇺🇸

Disk Usage Management

Managing and monitoring disk usage is a core part of server maintenance. It lets administrators spot disk space shortages caused by large log files, such as Apache or system logs, or by malfunctioning applications that generate excessive data. Tools like df give a quick overview of available disk space, while du breaks down directory sizes to locate space hogs. For capacity planning, tracking data growth with monitoring software such as Nagios or Grafana enables accurate forecasting and timely upgrades of storage hardware or cloud resources. Regular cleanup involves deleting obsolete backups, removing temporary files from /tmp, archiving old user data, and eliminating redundant application caches, either with automated scripts or with cleanup utilities like BleachBit.

Understanding the df command

The df (disk free) command reports information about the filesystems on your machine: total size, used space, available space, and the percentage of space used. To display these figures in human-readable units such as KB, MB, or GB, add the -h (human-readable) option.

For example, executing df -h might produce an output like the following:

Filesystem      Size  Used  Available  Use%  Mounted on
/dev/sda1       2.0T  1.0T       1.0T   50%  /
/dev/sda2       500G  200G       300G   40%  /boot

This output provides the following information:

- Filesystem: the device or partition backing the mount (for example, /dev/sda1).
- Size: the total capacity of the filesystem.
- Used: the amount of space currently occupied.
- Available: the space still available to non-root users.
- Use%: the percentage of capacity in use.
- Mounted on: the directory where the filesystem is attached to the directory tree.
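When df output feeds a script, the -P (POSIX) flag guarantees one line per filesystem in a fixed column order. As a small sketch, this extracts just the usage percentage of the root filesystem:

```shell
#!/bin/sh
# Print only the usage percentage (e.g. "50%") of the filesystem mounted on /.
# With df -P, the second line describes that filesystem and $5 is the Use% column.
df -P / | awk 'NR == 2 { print $5 }'
```

The same pattern works for any mount point, which makes it easy to wire into alerts or cron jobs.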

Exploring the du Command

The du (disk usage) command is used to estimate the space occupied by files or directories. To display the output in a human-readable format, you can use the -h option. The -s option provides a summarized result for directories. For example, running du -sh . will show the total size of the current directory in a human-readable format.

To find the top 10 largest directories starting from the root directory (/), you can use the following command:

du -x / | sort -nr | head -10

An example output might look like this:

10485760    /usr
5120000     /var
2097152     /lib
1024000     /opt
524288      /boot
256000      /home
128000      /bin
64000       /sbin
32000       /etc
16000       /tmp

In this command:

- du -x /: reports the size of every directory under /, in 1 KB blocks by default; -x keeps the scan on the root filesystem, skipping other mounts such as /proc or network shares.
- sort -nr: sorts the results numerically in reverse order, largest first.
- head -10: keeps only the first ten lines.

This command sequence helps you quickly identify the directories consuming the most space on your system.
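The same pipeline can be wrapped in a small reusable function. This is a sketch that assumes GNU du (for --max-depth) and introduces a hypothetical helper named top_dirs:

```shell
#!/bin/sh
# top_dirs DIR N: print the N largest immediate subdirectories of DIR.
top_dirs() {
    dir=$1; n=${2:-10}
    # du also prints a total line for DIR itself; the awk filter drops it.
    du -x --max-depth=1 -- "$dir" 2>/dev/null \
        | sort -nr \
        | awk -v d="$dir" '$2 != d' \
        | head -n "$n"
}

# Example: the ten largest directories directly under /var
top_dirs /var 10
```

Limiting the report with --max-depth=1 avoids flooding the output with every nested subdirectory while still scanning the full tree for accurate totals.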

To speed up du when it must walk many subdirectories, you can use xargs -P to process several directories in parallel. Because du is usually limited by disk I/O rather than CPU, the gain is largest on fast SSDs or when the directories live on different devices. Combining the pipeline with awk also lets you format the output more cleanly.

Here’s an enhanced example that finds the top 10 largest directories and uses xargs to speed up the process:

find / -mindepth 1 -maxdepth 1 -type d | xargs -I{} -P 4 du -sh {} 2>/dev/null | sort -hr | head -10 | awk '{printf "%-10s %s\n", $1, $2}'

Explanation:

I. find / -mindepth 1 -maxdepth 1 -type d: lists only the top-level directories under / (-maxdepth 1); -mindepth 1 excludes / itself, so du does not redundantly scan the entire tree a second time.

II. xargs -I{} -P 4 du -sh {} 2>/dev/null:

- -I{} substitutes each directory name from find into the {} placeholder.
- -P 4 runs up to four du processes in parallel.
- du -sh {} prints a summarized, human-readable size for each directory.
- 2>/dev/null discards "Permission denied" messages for directories the current user cannot read.

III. sort -hr: Sorts by the human-readable sizes du printed (-h understands suffixes such as K, M, and G) in reverse order, so the largest directories come first.

IV. head -10: Limits the output to the top 10 largest directories.

V. awk '{printf "%-10s %s\n", $1, $2}': Formats the output, ensuring the size and directory name align neatly. The %-10s ensures the size column has a fixed width, making the output more readable.

On hardware that handles concurrent reads well, xargs -P can noticeably shorten the time needed to compute disk usage across many directories. On a single slow disk the improvement may be modest, since the parallel du processes compete for the same I/O bandwidth.

The ncdu Command

For a more visual and interactive view of disk usage, you can use ncdu (NCurses Disk Usage), an ncurses-based tool with a user-friendly interface for quickly assessing which directories consume the most disk space. If it is not already installed, install it through your package manager, such as apt on Debian-based systems or dnf (or yum) on Red Hat-based systems.

Running the command ncdu -x / starts the program at the root directory (/) and presents an interactive interface; the -x flag keeps the scan on a single filesystem, so other mounts are not descended into. You can navigate through directories with the arrow keys and view their sizes, making it easier to identify space hogs.

Here’s an example of what the output might look like in a non-interactive, textual representation:

ncdu 1.15 ~ Use the arrow keys to navigate, press ? for help
--- / -----------------------------------------------------------------------
    4.6 GiB [##########] /usr
    2.1 GiB [####      ] /var
  600.0 MiB [#         ] /lib
  500.0 MiB [#         ] /opt
  400.0 MiB [          ] /boot
  300.0 MiB [          ] /sbin
  200.0 MiB [          ] /bin
  100.0 MiB [          ] /etc
   50.0 MiB [          ] /tmp
   20.0 MiB [          ] /home
   10.0 MiB [          ] /root
    5.0 MiB [          ] /run
    1.0 MiB [          ] /srv
    0.5 MiB [          ] /dev
    0.1 MiB [          ] /mnt
    0.0 MiB [          ] /proc
    0.0 MiB [          ] /sys
 Total disk usage: 8.8 GiB  Apparent size: 8.8 GiB  Items: 123456

In this output:

- Each line shows a directory's size, a bar proportional to that size, and the directory name.
- Entries are sorted with the largest first, so /usr (4.6 GiB) is the biggest consumer here.
- Pseudo-filesystems such as /proc and /sys show 0.0 MiB because they occupy no real disk space.
- The footer reports the total disk usage, the apparent size, and the number of items scanned.

ncdu is especially useful for quickly finding large directories and files, thanks to its intuitive interface. The ability to easily navigate through directories makes it a powerful tool for managing disk space on your system.

Cleaning Up Disk Space

Once you've identified what's using your disk space, the next step is often to free it up. Here are a few strategies:

- Delete obsolete backups and archives that are past their retention period.
- Remove temporary files from /tmp and clear redundant application caches.
- Compress or archive old log files, and configure log rotation so they do not grow without bound.
- Remove unused packages and clean the package manager's cache.
- Move rarely accessed data to cheaper storage, such as an archive server or object storage.
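For example, stale files in /tmp can be located safely with a dry run before anything is deleted; the 7-day cutoff here is an illustrative choice:

```shell
#!/bin/sh
# Dry run: list files under /tmp not accessed for more than 7 days.
# After reviewing the list, replace -print with -delete to remove them.
# -xdev keeps find from crossing into other mounted filesystems.
find /tmp -xdev -type f -atime +7 -print
```

Running the -print form first and only then switching to -delete is a good habit, since an overly broad find expression can otherwise remove files you still need.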

Managing Disk Quotas

Disk quotas allow administrators to limit the amount of disk space or the number of files (inodes) that individual users or groups can consume on a filesystem. This prevents any single user from exhausting shared storage and helps maintain fair resource allocation on multi-user systems.

Enabling Quotas on a Filesystem

To use quotas, the filesystem must be mounted with quota options. Edit /etc/fstab to add usrquota and grpquota to the desired filesystem:

/dev/sda1  /home  ext4  defaults,usrquota,grpquota  0  2

After modifying /etc/fstab, remount the filesystem and initialize the quota database:

mount -o remount /home
quotacheck -cug /home
quotaon /home

Setting Quotas for a User

Use the edquota command to set soft and hard limits for a specific user:

edquota -u username

This opens an editor displaying the current quota settings:

Disk quotas for user username (uid 1001):
  Filesystem  blocks    soft    hard  inodes  soft  hard
  /dev/sda1    51200  204800  256000     100     0     0

Blocks are counted in 1 KB units, so this user has roughly 50 MB in use against a 200 MB soft limit and a 250 MB hard limit; a limit of 0 means unlimited.

Checking Quota Usage

To display the current quota status for a specific user, run:

quota -u username

For a summary of all users' quota usage on a filesystem, use:

repquota /home

Example output:

*** Report for user quotas on device /dev/sda1
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --   10240       0       0              5     0     0
alice     --   51200  204800  256000            100     0     0
bob       +-  230400  204800  256000  5days     320     0     0

The +- indicator next to bob shows that the user has exceeded the soft limit and is within the grace period.
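Because the status flags in the second column mark violations (+- for a soft limit, ++ for both), the report is easy to filter from a script. Below is a sketch with the sample rows inlined for illustration; in practice you would pipe in the output of repquota /home instead:

```shell
#!/bin/sh
# Flag users over quota by matching a '+' in the repquota status column.
flag_over_quota() {
    awk '$2 ~ /\+/ { print $1, "is over quota (" $3 " KB used)" }'
}

# Sample rows (taken from the report above); replace with: repquota /home | flag_over_quota
printf '%s\n' \
  'alice     --   51200  204800  256000            100     0     0' \
  'bob       +-  230400  204800  256000  5days     320     0     0' \
  | flag_over_quota
# prints: bob is over quota (230400 KB used)
```

Header and separator lines pass through harmlessly because none of them carry a '+' in the second field.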

Setting Group Quotas

Group quotas work similarly to user quotas. Use the -g flag to manage them:

edquota -g groupname
repquota -g /home

Configuring the Grace Period

The grace period determines how long a user can remain above the soft limit before it is enforced as a hard limit. To adjust it:

edquota -t

This opens an editor where you can set the grace period for both block and inode limits across the filesystem.

Automating Disk Usage Checks

For ongoing disk usage monitoring, consider setting up automated tasks. For instance, you can schedule a cron job that runs df and du at regular intervals and sends reports via email or logs them for later review.

Monitoring disk usage proactively can prevent potential issues related to low disk space, such as application errors, slow performance, or system crashes.

Bash Script Example for Disk Usage Monitoring

#!/bin/bash

# Script to monitor disk usage and report

# Set the path for the log file
LOG_FILE="/var/log/disk_usage_report.log"

# Get disk usage with df
echo "Disk Usage Report - $(date)" >> "$LOG_FILE"
echo "---------------------------------" >> "$LOG_FILE"
df -h >> "$LOG_FILE"

# Get top 10 directories consuming space
echo "" >> "$LOG_FILE"
echo "Top 10 Directories by Size:" >> "$LOG_FILE"
du -x / | sort -nr | head -10 >> "$LOG_FILE"

# Optionally, you can send this log via email instead of writing to a file
# For email, you can use: mail -s "Disk Usage Report" recipient@example.com < "$LOG_FILE"

# End of script

Make the script executable and install it in /etc/cron.daily/ so cron runs it once a day:

sudo chmod +x /path/to/disk_usage_monitor.sh
sudo mv /path/to/disk_usage_monitor.sh /etc/cron.daily/
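A lighter-weight variant produces output only when something is wrong, which pairs well with cron's default behavior of mailing any output to the administrator. This is a sketch; the 90% threshold is an illustrative value:

```shell
#!/bin/sh
# Print a warning for every mounted filesystem above a usage threshold;
# stay silent otherwise. df -P guarantees a stable, POSIX column layout
# in which $5 is the usage percentage and $6 the mount point.
THRESHOLD=90
df -P | awk -v t="$THRESHOLD" '
    NR > 1 { use = $5 + 0; if (use > t) print "WARNING:", $6, "is at", use "%" }'
```

Dropped into /etc/cron.daily/, this yields a notification only on days when a filesystem actually crosses the threshold.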

Monitoring Disk Usage with Grafana

For long-term disk usage monitoring and visualization, Grafana provides powerful dashboarding capabilities when paired with a data collection agent like Prometheus and its Node Exporter.

Installing Node Exporter

Node Exporter collects hardware and OS metrics, including disk usage, and exposes them for Prometheus to scrape.

sudo apt install prometheus-node-exporter
sudo systemctl enable --now prometheus-node-exporter

By default, Node Exporter listens on port 9100. Verify it is running:

curl -s http://localhost:9100/metrics | grep node_filesystem_avail_bytes

Example output:

node_filesystem_avail_bytes{device="/dev/sda1",fstype="ext4",mountpoint="/"} 1.073741824e+10

The value is in bytes; 1.073741824e+10 bytes is exactly 10 GiB (about 10.7 GB) of available space.

Configuring Prometheus to Scrape Metrics

Add the Node Exporter target to the Prometheus configuration file (/etc/prometheus/prometheus.yml):

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']

Restart Prometheus to apply the change (running promtool check config /etc/prometheus/prometheus.yml beforehand catches syntax errors):

sudo systemctl restart prometheus

Setting Up Grafana Dashboards

After installing Grafana and logging in (default at http://localhost:3000), add Prometheus as a data source, then import a community dashboard for disk metrics.

I. Add Prometheus as a data source under Configuration → Data Sources → Add data source, and set the URL to http://localhost:9090.

II. Import the Node Exporter Full dashboard (ID 1860) via Dashboards → Import, which includes pre-built panels for disk space, inode usage, and I/O rates.

III. Create custom alerts under Alerting → Alert Rules to trigger notifications when available disk space drops below a threshold:

node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} < 0.1

This PromQL expression fires when the root filesystem has less than 10% free space.
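Beyond a fixed threshold, PromQL can also extrapolate trends. As an illustrative sketch, the following expression fires when the linear trend of the last six hours predicts that the root filesystem will run out of space within four hours:

```
predict_linear(node_filesystem_avail_bytes{mountpoint="/"}[6h], 4 * 3600) < 0
```

predict_linear fits a simple linear regression over the range vector and evaluates it the given number of seconds into the future, which suits the kind of capacity forecasting mentioned earlier.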

Overview of the Monitoring Stack

+----------------+        +---------------+        +--------------+
| Node Exporter  | scrape |  Prometheus   | query  |   Grafana    |
| (collects disk |------->| (stores time  |------->| (visualizes  |
|  metrics)      |        |  series data) |        |  dashboards) |
+----------------+        +---------------+        +--------------+

Challenges

  1. Explain the concept of filesystems and mount points, and then display the available free space on the root filesystem (/). Discuss why monitoring free space on the root is crucial for system stability.
  2. List all currently mounted filesystems and calculate the percentage of space used on each. Explain the importance of monitoring multiple filesystems, especially in systems with separate partitions for critical directories like /var, /home, or /boot.
  3. Identify all filesystems configured on the system, whether mounted or not, and display relevant information such as filesystem type, size, and last mount point. Discuss the purpose of different filesystem types and reasons they might not be mounted.
  4. Calculate the total size of the directory you’re in, including all files and subdirectories. Discuss recursive disk usage and the impact of nested directories on storage.
  5. Provide a breakdown of disk space usage within the /home directory for each user. Discuss the significance of managing space within /home and how it affects individual user accounts.
  6. List the top 10 directories consuming the most disk space across the entire system. Explain how these large directories can affect disk performance and the importance of periodically checking them.
  7. Track data being written to the disk in real-time for a set period, displaying a summary of write activity. Discuss the reasons behind tracking disk write activity, including potential implications for system performance and health.
  8. Identify individual files that occupy the most space on the disk. Discuss strategies for managing large files and how deleting or relocating these files can reclaim disk space.
  9. Take snapshots of disk usage at two different times and compare them to identify any significant changes or trends. Discuss the importance of historical data in predicting future disk space needs and planning for expansion or cleanup.
  10. Analyze disk usage by categorizing files based on their extensions (e.g., .txt, .jpg, .log). Explain how file type classification can help in identifying disk space hogs and in organizing cleanup strategies.