When chmod 777 is not enough - and the service still can’t access the file

a simple permission error - and why you might be looking at the wrong layer

Imagine this: You are staring, confused, at your webserver’s logfile. At this one single line that states it cannot serve the file you want - because of a “permission denied” error.

[Wed Apr 01 13:34:31.831239 2026] [core:error] [pid 769:tid ...]
(13)Permission denied: [client 100.115.92.25:42722] AH00132:
file permissions deny server access: /srv/www/htdocs/index.html

But haven’t you already - while promising it’s only for a test - done a chmod 777 on this file?

The website should have been up and running for hours by now - but this error is all you get.

You followed the same procedure as the last time you did something similar. You even copied & pasted command lines from previous documentation …

… but the server somehow still fails to serve the site, because something still refuses to give it access.

And shouldn’t the final chmod 777 give everyone full access to the file? And shouldn’t that mean the service is able to serve it?

It seems to make no sense anymore:

The permissions are world-open. So there simply cannot be a permission denied anymore!

Wait - not this fast …

As always - Linux doesn’t behave somehow “randomly”:

If something fails, it fails because something is enforcing that failure - a part of the whole system (a “layer”) we may not have taken into account yet.

And our job is to find this layer and correct the issue.

So take a step back and approach this like an experienced administrator would:

On every system, most of the time there isn’t one single source of truth - instead there are multiple aspects that interact with each other and lead to the behavior we experience.

Or more memorably: We need to look at all the layers that could explain the behavior we see.

So let’s go through the applicable layers systematically and see where your intention breaks.
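A first sketch of such a layer-by-layer check, using a throwaway directory instead of the real webroot (the paths below are illustrative): the permission bits of the file itself aren’t enough - every parent directory must grant execute (x) as well, and namei -l shows the whole chain at once.

```shell
# Rebuild the trap in a throwaway directory (not the real webroot):
mkdir -p /tmp/layerdemo/htdocs
echo 'hello' > /tmp/layerdemo/htdocs/index.html

chmod 777 /tmp/layerdemo/htdocs/index.html   # the file is world-open ...
chmod 700 /tmp/layerdemo                     # ... but one parent directory is not

# namei -l walks the path and lists the permissions of every component -
# the file stays unreachable for others as long as any directory blocks them:
namei -l /tmp/layerdemo/htdocs/index.html
```

And the path is only one candidate layer - on hardened systems, mandatory access control such as SELinux or AppArmor is another one to rule out.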

Continue reading »

Why disk space is full - even when df says it isn’t

a simple incident - and what it shows us about operating a Linux system instead of just using it

Suddenly you get a disk-full error while writing something to disk.

robert@ubuntu1:~$ sudo tar -cf /srv/data/etc_backup.tar /etc
tar: /srv/data/etc_backup.tar: Cannot open: No space left on device

Ah - not again …

So let’s quickly spin up the df tool and check what’s going on here.

And surprisingly - the disk still seems to have free space left.

robert@ubuntu1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs      392M 932K  391M   1% /run
/dev/vda2   12G 7.2G  4.0G  65% /
/dev/vdb1    3G 1.2G  1.8G  40% /srv/data    <-- free space left
...

(see the 40% in the Use% column for the mountpoint “/srv/data”?)

But despite this - it even fails to create a completely empty file:

robert@ubuntu1:~$ sudo touch /srv/data/testonly
touch: cannot touch '/srv/data/testonly': No space left on device

Tools don’t lie. But they show you only a single layer of the system.

Here are a few of the real-world problems I’ve seen multiple times on production systems, seemingly coming out of nowhere:

  • a service suddenly dies or misbehaves
    … and users call you to solve this immediately.
  • users can no longer log in
    … but they need to - right now!
  • the company’s website suddenly doesn’t accept new connections
    … the “worst case”, as your boss puts it.
  • users complain about lost or delayed emails
    … and they are always waiting for the most important one right now.
  • backups and even log-rotations fail
    … and this may have accumulated even more risks hidden in the background over time.

… and at the end, each of these problems was caused by something like a “disk full error”.

Even though the monitoring in place never alerted on a full disk.

I think it’s obvious that in such a situation, panicking and deleting some large files from the filesystem will neither really solve the problem nor leave you confident in your solution.

Instead - and this should always be your mantra in troubleshooting - we should tackle the problem systematically, ruling out possible root causes one by one until we have identified the real underlying problem.

Or to say it a bit differently:

If you are faced with a problem like this - despite the urgency you may feel to solve it instantly: Always think your way through all the layers that may be involved. And then apply your solution to the right one.

A quick workaround may help short-term, but it leaves you with uncertainty and the risk of a solution that won’t last.

Let’s use this scenario as an example here - while the same thinking applies far beyond disk space.
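To name just one concrete example of such a layer (whether it applies here is exactly what the systematic check will tell us): every file needs a free inode, and a filesystem can run out of inodes long before it runs out of blocks. df can show that layer too - with the -i switch:

```shell
# Block usage (what df -h shows) and inode usage are two separate layers.
# Replace "/" with the affected mountpoint, e.g. /srv/data:
df -h /
df -i /

# If df -i reports IUse% at 100%, "No space left on device" really means
# "no free inodes left" - even though plenty of blocks are still free.
```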

Continue reading »

From echo to syslog: Smarter logging for your scripts

… or: how to use the logger command to send messages into the system log.

If you are monitoring or troubleshooting a Linux system, then one of the most important things to check is the logfiles.

Running services like Postfix or Apache write them, small tools like sudo write logs - and even the system itself, the kernel, tells you what it’s doing and what didn’t go so smoothly.

What if your cronjobs or scripts could do the same? You could simply use your known toolbox - grep, less, tail, and friends - to see what’s going on, or use the more modern journalctl.

Here’s the thing: you can easily write your own logs from the command line (and therefore from cronjobs or scripts) and inject your own messages into those central log repositories.

And if you know a few basic concepts, you can even filter your messages into certain files or “sections” of your system log.

But first: why not just append your messages to your own log file?

You could, of course, log messages yourself by just appending them to a file:

echo "my message" >> my-logfile

That works - but it misses some smart benefits you get when you log messages the way I’ll show you next:

  • Automatic timestamping and tagging
    Messages written via the syslog protocol (RFC 5424 / 3164) - that’s what we’ll use - include timestamps, hostnames, and tags automatically.
  • Centralized handling
    Your messages become part of the system’s managed logs, so they rotate, compress, and get archived together with the rest.
  • Remote logging integration
    Syslog already supports forwarding logs to a central host. So if your system’s rsyslog or syslog-ng is configured for remote forwarding, your messages will follow automatically.

Once you use the existing log infrastructure, your messages show up in the same places as the rest of the system - viewable with grep, less, or journalctl.

So, how can you tap into that same system yourself? Let’s look at the command that makes this easy.

Continue reading »

Linux Distributions? What They Are and Which to Pick

When people for the first time dive into Linux, one of the most confusing things is that there isn’t just one Linux. Instead, you’ll find hundreds of so-called distributions (or distros) floating around.

So what exactly is a distribution, and why does it matter which one you use?

What is a Linux Distribution?

As described in “First things first: What’s Linux, anyway?”, a running Linux system consists of a few independent components:

  • The Linux Kernel
    This gives you hardware abstraction, process management, security isolation and so on.
  • The GNU Tools
    These provide you with a shell (the “command line”) and the tools you need for daily tasks.
  • Other applications
    Like user applications, servers and any other software you can imagine running on a PC or server.

Now you could collect all these parts on your own from their developers’ websites (kernel.org, gnu.org, …), compile them for your CPU architecture and somehow get them onto a bootable disk.

If you really want to do this: Go to www.linuxfromscratch.org and follow the instructions.

This may be a lot of fun and you’ll learn a ton. But for the daily deployment of a Linux system, this won’t be the best way to go.

And this is where a Linux distributor comes into play: They bundle everything you need for a running system, add an installer and provide it all to you on a bootable medium.

So a distribution is basically:

  • The Linux kernel itself (sometimes slightly customized)
  • A set of system libraries and tools (GNU utilities, shells, compilers, etc.)
  • A package management system to install and update software in a convenient way
  • A preselected bundle of applications (from web servers to desktop environments)
  • Policies and defaults chosen by the distribution maintainers

Instead of you assembling all these pieces yourself, a distribution team does it for you, adds testing and updates, and makes sure everything fits together.

👉 Think of a distribution like a curated “bundle” of Linux plus tools, shaped with a certain philosophy or audience in mind.

So what are the differences?

Continue reading »

First things first: What's Linux, anyway?

Glad you’re back! Today, let’s tackle a fundamental but super-important question:

What exactly is Linux?

Knowing this is your ticket to figuring out why your Linux-based systems behave the way they do.

Getting your head around the Linux core components is the key to understanding how your systems really tick.

It All Starts With the Kernel

If we want to break Linux down to its purest form, we need to talk about the Linux kernel - the thing Linus Torvalds created himself back in the day.

Most of the time, when someone says “Linux,” they’re talking about the entire OS, but strictly speaking, Linux is just the kernel, the very heart of the system.

Want a nostalgia trip? Head over to kernel.org, and you can dig through code archives going all the way back to version 1.0, published in 1994, with a compressed source size of 1 MB.

Try to find the file linux-1.0.tar.gz at https://www.kernel.org/pub/linux/kernel/.

Sure, Linus released the first kernel back in 1991, but it was version 1.0 that hit the “stable” milestone.

Fast-forward to now, and we’re dealing with kernel version 6.16 from July 2025, weighing in at about 236 MB compressed. That’s one heck of a jump! Why the massive growth? Well, mostly because of device drivers.

Continue reading »

Your first steps with regular expressions - the essentials

When you search for something at the Linux command line, you’ll typically come across two different types of search patterns.

First, there are the ones you probably already use intuitively for file operations, like in cp *.pdf /tmp - which means “copy all files ending with .pdf to /tmp”. These patterns are known as filename globbing. You typically use them when you want to address multiple files at once on the command line.

The second type of pattern you’ll come across are the regular expressions. At first glance, regular expressions might look similar to filename globbing, but they operate very differently. And more importantly, they’re incredibly powerful for searching.

For example, let’s look at a regular expression for matching email addresses:

\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b

And here’s one for matching any possible IP address:

\b((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.){3}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\b

A very prominent tool you will typically use in combination with regular expressions is grep. Therefore I’ll use it here for doing the illustrations. (For an introduction to using grep see Searching with grep: The Essentials).

But over time you’ll find that many other Linux command-line tools - such as find, sed and awk, just to name a few - can also leverage the power of regular expressions.

From the examples above you can guess that regular expressions can quickly become quite complex. But don’t worry - as a starting point you can achieve quite a lot with just a few basics. And these basics are typically referred to as “Basic Regular Expressions (BRE)” or simply standard regular expressions.

I typically use the phrase “standard regular expressions” in courses and conversations as a contrast to the much more complex “Extended Regular Expressions (ERE)”.

The most important patterns of standard regular expressions

Let’s dive into the use of standard regular expressions with an example:

Let’s say you want to search through the file “/etc/passwd” (this file contains the locally defined users of a system) for the user named “max”.

To do this search with the command grep, you first need to know about the layout of the file, which is really straightforward: Every single line describes the properties of one user in seven fields. And these fields are separated by colons (“:”).

Here are a few possible lines from “/etc/passwd” for illustration.

...
tux:x:1099:1099:Tux:/home/tux:/bin/bash
max:x:1100:1100:Max:/home/max:/bin/bash
test1:x:1101:1101:Testuser 1:/home/max:/bin/false
...

Search for something at the beginning of a line

If we are now searching for the defined user “max”, we could simply try the following:

grep max /etc/passwd

But because the user “test1” has a defined home directory that contains the phrase “max” too (have a look into the 6th field), we need to specify what exactly we are looking for: The phrase “max” exactly at the beginning of a line (the first field contains the username), followed by a colon.
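In regular expressions, that “beginning of a line” anchor is the caret (^). A minimal sketch, using the sample lines from above in a scratch file:

```shell
# The sample lines shown above, saved to a scratch file:
cat > /tmp/passwd-sample <<'EOF'
tux:x:1099:1099:Tux:/home/tux:/bin/bash
max:x:1100:1100:Max:/home/max:/bin/bash
test1:x:1101:1101:Testuser 1:/home/max:/bin/false
EOF

# Without an anchor, "max" also matches test1's home directory:
grep 'max' /tmp/passwd-sample      # -> 2 matching lines

# Anchored to the beginning of the line and ended by the field
# separator, only the user "max" itself matches:
grep '^max:' /tmp/passwd-sample    # -> 1 matching line
```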

Continue reading »

cat - This multitool can help you more than you think

When you think about essential Linux command-line tools, cat is probably one of the first that comes to mind. Short for “concatenate”, this seemingly simple command is part of the basic equipment of every Linux user’s toolkit. Most commonly, it’s used to display the contents of a file in the terminal - a quick and easy way to peek inside a document.

But if you think that’s all cat is good for - you’re in for a surprise :-)

cat has a few tricks up its sleeve that can help you streamline your workflow: from transforming data to merging files or creating new ones - cat definitely deserves a closer look.

And along the way, I promise, we will stumble upon one or the other interesting tool or concept too …

Let’s start simple and understand the basic way cat works

If you start cat on the command line without any additional parameter, then you will “lose your prompt”: You’ll have only a cursor on a blank line as a sign that you can enter some text:

Now if you enter some text and finish your input-line by hitting “<enter>”, then cat will immediately repeat the line you just typed in:

After that - again an empty line with a lonely cursor. Now you can enter the next line, which will also be repeated, and so on. (You can stop the cat command in this state at any time by simply hitting <ctrl>+c.)

What we just observed is exactly how cat works:

  • It reads in data line by line from its input datastream, which is by default bound to the terminal - and therefore to your keyboard.
  • The output of cat then goes to its output datastream, that is in this simple example bound to the terminal.
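This read-from-input, copy-to-output behavior is easy to verify without typing interactively - feed cat its input through a pipe instead of the keyboard, and it simply repeats the stream:

```shell
# cat copies its input datastream to its output datastream, line by line.
# Here the input comes from a pipe instead of the terminal:
printf 'first line\nsecond line\n' | cat
# -> first line
# -> second line
```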

For illustration: This is part of a screenshot taken from the video I linked below: On the left-hand side I’ve tried to draw a keyboard, on the right-hand side a terminal (such an artist I am … :))

Continue reading »