DevOps interviews rarely include a round labeled “Linux”, but Linux is present in almost every technical discussion.
It appears when containers fail to start, when a deployment hangs, when a disk fills unexpectedly, or when a service cannot be reached.
Linux questions inside DevOps interviews are rarely about commands alone. They reveal whether you understand how systems behave underneath modern infrastructure.
Below are common Linux-style questions that come up in DevOps interviews, and what they actually evaluate.
Process Questions Reveal How You Think About System Control
You may be asked how to list running processes:
ps -ef
or
top
That is the surface response.
What is being evaluated is whether you understand:
- What a process is.
- How the kernel schedules it.
- How parent and child processes relate.
- How signals control termination.
- What happens when memory or CPU is exhausted.
If a production server spikes to 100 percent CPU, the correct thought process is:
- Identify the PID consuming resources.
- Inspect what the process is doing.
- Determine whether the behavior is expected or runaway.
- If it must be stopped, choose how to terminate it.
If termination is required:
kill -15 <pid> # SIGTERM
kill -9 <pid> # SIGKILL
SIGTERM allows cleanup handlers to run.
SIGKILL forces immediate termination and cannot be trapped.
In distributed systems, killing a process without allowing cleanup can corrupt state, drop connections, or leave locks behind.
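The difference between the two signals is easy to demonstrate with a small script. This is a sketch: the handler body is illustrative, and the background-sleep-plus-wait pattern is there because bash defers traps while a foreground command runs.

```shell
#!/usr/bin/env bash
# On SIGTERM, run a cleanup handler before exiting.
# SIGKILL cannot be trapped, so "kill -9" would skip this entirely.
cleanup() {
  echo "cleaning up"   # e.g. flush buffers, release locks
  exit 0
}
trap cleanup TERM

# Sleep in the background and wait on it, so the trap fires promptly.
sleep 300 &
wait $!
```

Sending `kill -15` to this script runs the handler before exit; `kill -9` terminates it with no cleanup at all.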
You may also be asked about zombie processes. That question tests whether you understand how the kernel manages exit states and how parents must reap child processes. It reveals whether you understand how process termination actually works.
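Zombies are visible in the process table as state Z. A quick way to look for them (a sketch; exact column formatting varies slightly between ps implementations):

```shell
# Processes in state Z are zombies: they have already exited, but
# their parent has not yet called wait() to collect the exit status.
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'
```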
Filesystem Questions Reveal Operational Depth
A common question:
What is the difference between a hard link and a soft link?
ln file.txt hardlink.txt
ln -s file.txt softlink.txt
A hard link is another name for the same file.
file.txt and hardlink.txt both point to the same data on disk. They are equal references: editing one affects the other, and deleting one does not remove the data as long as another hard link still exists. The file is only removed when all hard links to it are deleted.
A soft link (symlink) stores a path to another file. It does not share the same underlying data. If the original file is deleted, the symlink breaks because it points to a path that no longer exists.
You can observe the behavior directly:
echo "hello" > file.txt
ln file.txt hardlink.txt
ln -s file.txt softlink.txt
rm file.txt
After deleting file.txt:
hardlink.txt still works. softlink.txt is broken.
Hard link = another name for the same data. Soft link = a pointer to a path.
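The difference is also visible in the inode numbers, which `ls -li` prints in the first column. A sketch, repeating the demo in a scratch directory:

```shell
cd "$(mktemp -d)"            # scratch directory for the demo
echo "hello" > file.txt
ln file.txt hardlink.txt
ln -s file.txt softlink.txt

# file.txt and hardlink.txt share one inode and show a link count of 2;
# softlink.txt has its own inode and an "l" type flag.
ls -li file.txt hardlink.txt softlink.txt
```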
When this question appears in a DevOps interview, it reveals whether you understand how the filesystem tracks data and references it. That understanding is important when diagnosing broken deployments, unexpected file persistence, or disk space issues in production.
Similarly, permission questions such as:
chmod 777 file.txt
are not just about recalling octal values from memory.
They expose whether you understand how Linux enforces boundaries between users, services, and processes.
777 grants read, write, and execute permissions to everyone. In a shared environment, that allows unintended users or services to modify or execute files they should not touch.
Permission questions are evaluating whether you consider:
- Who owns the file.
- Which user or service runs the application.
- Whether write access is actually required.
- Whether execution should be restricted.
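A more deliberate alternative to 777 looks like this. A sketch: the file name, user, and group are illustrative, and the chown line is commented out because it requires root and an existing service account.

```shell
touch config.yml                     # stand-in for a real config file
# chown appuser:appgroup config.yml  # hypothetical service user and group
chmod 640 config.yml                 # owner rw-, group r--, others ---
ls -l config.yml
```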
Special bits such as the sticky bit:
chmod +t /shared
are used in shared directories like /tmp. With the sticky bit set, users can create files, but they cannot delete files owned by others.
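The sticky bit shows up as a trailing t in the mode string, and /tmp is the canonical example (a sketch; the mktemp directory stands in for a real shared directory):

```shell
# /tmp is the canonical sticky-bit directory; note the trailing "t":
ls -ld /tmp                # typically drwxrwxrwt

# 1777 = world-writable plus the sticky bit:
d=$(mktemp -d)
chmod 1777 "$d"
ls -ld "$d"
```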
This reflects awareness of multi-user isolation and shared-host behavior. In DevOps environments where multiple services or engineers interact with the same host, misconfigured permissions can cause outages or security incidents.
Disk and Resource Questions Reveal Production Experience
You may respond to disk questions with:
df -h
But disk usage in production is rarely that simple.
df shows filesystem usage.
du shows directory usage.
Sometimes they do not match.
If a large file is deleted while still open by a running process, disk space remains allocated until the file descriptor is closed.
An application writes to /var/log/app.log.
The log file grows to several gigabytes.
You delete the file to free space.
You expect disk usage to drop, but it does not.
Why?
Because the application still has the file open. The filename is gone, but the process holds an open file descriptor, so the kernel keeps the data allocated.
The space is only freed when the process closes the file, usually after a restart.
That is why restarting a service sometimes “magically” frees disk space.
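The scenario can be reproduced directly. A sketch: /proc is Linux-specific, and lsof must be installed for the commented-out system-wide check.

```shell
# Hold a file open, delete it, and observe that the data survives
# until the descriptor is closed.
tmp=$(mktemp)
exec 3> "$tmp"        # keep a write descriptor open
echo "still here" >&3
rm "$tmp"             # the name is gone, the data is not
cat /proc/$$/fd/3     # the content is still readable via /proc

# lsof can list such unlinked-but-open files across the whole system:
# lsof +L1
exec 3>&-             # closing the descriptor finally frees the space
```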
Another production scenario is inode exhaustion:
df -i
A filesystem may have available disk space but no inodes left to create new files.
These are common failure modes in container hosts, CI runners, and log-heavy systems.
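To find where the inodes are going, GNU du can count them per directory (a sketch; `--inodes` requires GNU coreutils 8.22 or later, and /var is just a common place to look):

```shell
# Inode usage per filesystem: IUse% near 100 means no new files
# can be created even if df -h still shows free space.
df -i

# Directories holding the most inodes under /var:
du --inodes -x /var 2>/dev/null | sort -n | tail -5
```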
Interviewers are looking for evidence that you understand these patterns.
Shell Questions Reveal How You Handle Failure
You may be asked to:
- Write a loop.
- Explain break versus continue.
- Filter specific log output.
This is testing logic.
What reveals systems thinking is how you handle failure and propagation.
To verify a command succeeded:
echo $?
Exit code 0 indicates success.
Non-zero indicates failure.
In automation environments, exit codes determine whether pipelines continue or fail.
Robust scripts account for failure deliberately:
set -e
set -u
set -o pipefail
Without pipefail, pipeline errors can be silently ignored:
curl example.com | grep "500"
If curl fails but grep exits successfully, the overall pipeline may appear successful.
In CI/CD systems, that behavior can allow broken builds to pass silently.
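The effect is easy to demonstrate in bash:

```shell
# By default, a pipeline's status is that of its last command:
false | true
echo $?            # 0 — the failure of "false" is invisible

set -o pipefail
false | true
echo $?            # 1 — any failing stage now fails the pipeline
```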
Shell is powerful but limited:
- Weak typing.
- Manual error handling.
- External commands spawn separate processes.
- Complex orchestration becomes difficult to maintain.
Understanding where shell scripting fits and where a higher-level language is more appropriate reflects engineering judgment.
Cron Questions Reveal Awareness of Execution Context
Scheduling a script is straightforward:
crontab -e
The real evaluation is whether you understand execution context.
Cron runs with a minimal environment:
- PATH may differ from your interactive shell.
- Environment variables may not be present.
- Output is not logged unless explicitly redirected.
A production-ready cron entry looks like:
0 6 * * * /usr/local/bin/script.sh >> /var/log/script.log 2>&1
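Because cron's environment is minimal, it is also common to pin the shell and PATH at the top of the crontab itself (a sketch; adjust the paths to your system):

```
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

0 6 * * * /usr/local/bin/script.sh >> /var/log/script.log 2>&1
```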
If execution is required after reboot:
@reboot /usr/local/bin/script.sh
These details are important in infrastructure automation because scripts that work interactively can fail silently when executed by cron.
Interviewers evaluate whether you understand environment differences and startup behavior.
Networking Questions Reveal Your Understanding of Packet Flow
You may be asked to explain traceroute:
traceroute google.com
At a surface level, it shows hops between your machine and a destination.
At a systems level, it manipulates TTL values and observes ICMP responses from intermediate routers.
In production troubleshooting, the reasoning process involves asking:
- Is DNS resolving correctly?
- Is the service listening?
- Is it bound to 127.0.0.1 or 0.0.0.0?
- Is the port open?
- Is a firewall blocking traffic?
- Is there a routing issue?
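Several of those questions map to quick checks. A sketch: the hostname and port are illustrative, and the /dev/tcp redirection is a bash-specific feature.

```shell
host=example.com   # illustrative target
port=443

# DNS: does the name resolve?
getent hosts "$host"

# Reachability: can we open a TCP connection at all?
if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port"; then
  echo "port $port reachable"
else
  echo "connection failed: check listening state, firewall, routing" >&2
fi
```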
To inspect listening sockets:
ss -tuln
Binding to 127.0.0.1 restricts exposure to the local host.
Binding to 0.0.0.0 exposes the service on all interfaces.
In containerized and cloud environments, this distinction affects load balancers, reverse proxies, and security groups.
Interpreter and Portability Questions Reveal Environment Awareness
Consider:
#!/bin/bash
versus
#!/usr/bin/env bash
The first hardcodes the interpreter path. The second resolves the interpreter through the system’s environment.
On different systems, interpreter paths may vary.
In heterogeneous environments, hardcoded paths can cause scripts to fail unexpectedly.
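You can see why the env form travels better (a sketch):

```shell
# Where does bash actually live on this system? /bin/bash on most
# Linux distributions, /usr/local/bin/bash on some BSDs, elsewhere
# under package managers like Nix or Homebrew.
command -v bash

# "#!/usr/bin/env bash" asks env to search PATH for bash instead of
# assuming one fixed location, so the same script runs on all of them.
```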
This question evaluates whether you think about portability and cross-environment consistency.
What This Actually Signals in a DevOps Interview
Linux questions inside DevOps interviews evaluate whether you understand:
- Process lifecycle and signal handling.
- File reference behavior and access control.
- Resource exhaustion scenarios.
- Error propagation and exit codes.
- Execution context and startup behavior.
- Network binding and packet flow.
Modern infrastructure runs on layers of abstraction: containers, orchestration, CI/CD pipelines, and cloud platforms.
Underneath those layers is the operating system.
If you understand how Linux behaves under load, during failure, and across environments, you understand the foundation those abstractions depend on.
That is what interviewers are evaluating.