test-your-sysadmin-skills
A collection of Linux Sysadmin Test Questions and Answers. Test your knowledge and skills in different fields with these Q/A.
Top Related Projects
Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions
Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
Master the command line, in one page
Interactive roadmaps, guides and other educational content to help developers grow in their careers.
🤪 A list of funny and tricky JavaScript examples
An attempt to answer the age old interview question "What happens when you type google.com into your browser and press enter?"
Quick Overview
The "test-your-sysadmin-skills" repository is a comprehensive collection of questions and exercises designed to test and improve system administration skills. It covers a wide range of topics including Linux, networking, security, and DevOps practices, making it an excellent resource for both aspiring and experienced system administrators to assess and enhance their knowledge.
Pros
- Extensive coverage of various sysadmin topics, providing a well-rounded learning experience
- Regular updates and contributions from the community, keeping the content relevant and up-to-date
- Includes both theoretical questions and practical scenarios, helping to bridge the gap between knowledge and real-world application
- Free and open-source, making it accessible to anyone interested in improving their sysadmin skills
Cons
- May be overwhelming for beginners due to the vast amount of information and advanced topics covered
- Lacks a structured learning path or difficulty levels, which could make it challenging for users to track their progress
- Some questions may be subjective or have multiple correct answers, potentially leading to confusion
- Limited explanations for answers, requiring users to research further on their own
As this is not a code library, we'll skip the code examples and getting started instructions sections.
Competitor Comparisons
Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions
Pros of devops-exercises
- Broader scope covering DevOps practices beyond just system administration
- More structured content with categorized topics and exercises
- Includes hands-on exercises and scenarios for practical learning
Cons of devops-exercises
- Less focused on specific sysadmin skills compared to test-your-sysadmin-skills
- May be overwhelming for beginners due to the wide range of topics covered
- Lacks the concise Q&A format found in test-your-sysadmin-skills
Code Comparison
test-your-sysadmin-skills:
Q: How to check if port 22 is open on a remote host?
A: nc -vz <host> 22
devops-exercises:
- name: Check if port 22 is open
  hosts: all
  tasks:
    - name: Check port 22
      wait_for:
        port: 22
        timeout: 10
      register: result
The code comparison shows that test-your-sysadmin-skills focuses on command-line solutions, while devops-exercises often uses more complex tools and frameworks like Ansible for similar tasks. This reflects the broader DevOps approach of devops-exercises compared to the more targeted sysadmin focus of test-your-sysadmin-skills.
Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
Pros of system-design-primer
- Comprehensive coverage of system design concepts and principles
- Includes visual aids, diagrams, and real-world examples
- Regularly updated with contributions from the community
Cons of system-design-primer
- Focuses primarily on high-level system design, less practical for day-to-day sysadmin tasks
- May be overwhelming for beginners due to its extensive content
- Lacks hands-on exercises or quizzes for self-assessment
Code comparison
While neither repository primarily focuses on code, system-design-primer includes some code snippets for illustration. For example:
system-design-primer:
def get_user(request):
    user_id = request.GET.get('user_id')
    user = User.objects.get(id=user_id)
    return JsonResponse({'user': user})
test-your-sysadmin-skills:
#!/bin/bash
echo "This repository doesn't contain code examples."
The system-design-primer repository provides code snippets to illustrate concepts, while test-your-sysadmin-skills focuses on theoretical questions and answers without code examples.
Both repositories serve different purposes: system-design-primer is geared towards system design and architecture, while test-your-sysadmin-skills is tailored for sysadmin interview preparation and skill assessment.
Master the command line, in one page
Pros of the-art-of-command-line
- Provides a comprehensive guide for both beginners and experienced users
- Covers a wide range of command-line topics and tools
- Regularly updated with community contributions
Cons of the-art-of-command-line
- Lacks structured exercises or quizzes for self-assessment
- May be overwhelming for absolute beginners due to its extensive content
Code Comparison
the-art-of-command-line:
# Find files with names that contain 'foo':
find . -name '*foo*'
# Find files that have been modified in the last day:
find . -mtime -1
test-your-sysadmin-skills:
# What command is used to create a new user?
useradd [options] USERNAME
# How to check the kernel version?
uname -r
The code examples in the-art-of-command-line are more practical and demonstrate actual command usage, while test-your-sysadmin-skills focuses on question-answer format for learning system administration concepts.
Both repositories serve as valuable resources for learning Linux and system administration skills, but they approach the subject differently. the-art-of-command-line offers a comprehensive guide with practical examples, while test-your-sysadmin-skills provides a structured set of questions and answers to test and improve knowledge.
Interactive roadmaps, guides and other educational content to help developers grow in their careers.
Pros of developer-roadmap
- Provides comprehensive visual roadmaps for various tech career paths
- Regularly updated with new technologies and industry trends
- Offers interactive versions of roadmaps for enhanced user experience
Cons of developer-roadmap
- Focuses on development roles, lacking coverage for system administration
- May overwhelm beginners with the sheer amount of information presented
- Doesn't provide specific questions or tests to assess skills
Code comparison
While neither repository contains significant code samples, they differ in their content structure:
test-your-sysadmin-skills:
## :diamond_shape_with_a_dot_inside: Main targets:
* [Test your knowledge](https://github.com/trimstray/test-your-sysadmin-skills#test-your-knowledge)
* [Build your skills](https://github.com/trimstray/test-your-sysadmin-skills#build-your-skills)
developer-roadmap:
## Frontend Roadmap
![Frontend Roadmap](./img/frontend.png)
## Backend Roadmap
![Backend Roadmap](./img/backend.png)
test-your-sysadmin-skills focuses on providing questions and resources for system administrators, while developer-roadmap offers visual career path guides for various development roles. The former is more targeted towards a specific field, while the latter covers a broader range of development-related topics.
🤪 A list of funny and tricky JavaScript examples
Pros of wtfjs
- Focuses specifically on JavaScript quirks and oddities, making it a valuable resource for JS developers
- Includes explanations and examples for each "WTF" moment, helping users understand the underlying reasons
- Regularly updated with new content and community contributions
Cons of wtfjs
- Limited in scope to JavaScript, unlike the broader system administration focus of test-your-sysadmin-skills
- May not be as directly applicable to real-world scenarios or job interviews as test-your-sysadmin-skills
- Lacks the structured question-and-answer format found in test-your-sysadmin-skills
Code Comparison
wtfjs example:
[] == ![] // -> true
test-your-sysadmin-skills example:
find /var/log -type f -mtime +7 -exec rm {} \;
While wtfjs focuses on JavaScript quirks, test-your-sysadmin-skills covers practical system administration tasks. The wtfjs example demonstrates a counterintuitive JavaScript comparison, while the test-your-sysadmin-skills example shows a command to find and remove log files older than 7 days.
Both repositories serve as educational resources in their respective domains, with wtfjs catering to JavaScript developers and test-your-sysadmin-skills targeting system administrators and DevOps professionals.
An attempt to answer the age old interview question "What happens when you type google.com into your browser and press enter?"
Pros of what-happens-when
- Provides a comprehensive, step-by-step explanation of web browsing process
- Offers in-depth technical details across various layers of networking and web technologies
- Encourages community contributions to expand and refine the content
Cons of what-happens-when
- Focuses solely on web browsing, limiting its scope compared to broader sysadmin skills
- May be overwhelming for beginners due to its technical depth
- Lacks interactive elements or practical exercises for hands-on learning
Code comparison
Not applicable, as neither repository contains significant code samples. Both primarily focus on explanatory text and documentation.
Summary
what-happens-when offers a deep dive into the web browsing process, making it an excellent resource for understanding internet technologies. test-your-sysadmin-skills, on the other hand, covers a broader range of system administration topics and includes practical questions for self-assessment.
While what-happens-when provides a more focused and detailed exploration of a single topic, test-your-sysadmin-skills offers a wider breadth of knowledge and interactive elements for skill evaluation. The choice between the two depends on whether the user seeks in-depth understanding of web technologies or a broader overview of sysadmin skills with practical exercises.
README
:star:
"A great Admin doesn't need to know everything, but they should be able to come up with amazing solutions to impossible projects." - cwheeler33 (ServerFault)
:star:
"My skills are making things work, not knowing a billion facts. [...] If I need to fix a system Iâll identify the problem, check the logs and look up the errors. If I need to implement a solution Iâll research the right solution, implement and document it, the later on only really have a general idea of how it works unless I interact with it frequently... itâs why itâs documented." - Sparcrypt (Reddit)
:information_source: This project contains 284 test questions and answers that can be used to test your knowledge or during an interview/exam for a position such as Linux (*nix) System Administrator.
:heavy_check_mark: The answers are only examples and do not exhaust the whole topic. Most of them contain useful resources for a deeper understanding.
:warning: Questions marked *** don't have an answer yet or the answer is incomplete - make a pull request to add one!
:traffic_light: If you find something which doesn't make sense, or something that doesn't seem right, please make a pull request and add a valid, well-reasoned explanation for your changes or comments.
:books: In order to improve your knowledge/skills please see devops-interview-questions. It looks really interesting.
» All suggestions are welcome «
Table of Contents
The type of chapter | Number of questions | Short description |
---|---|---|
Introduction | ||
:small_orange_diamond: Simple Questions | 14 questions | Relaxed, fun and simple - great for starting everything. |
General Knowledge | ||
:small_orange_diamond: Junior Sysadmin | 65 questions | Reasonably simple and straightforward, based on basic knowledge. |
:small_orange_diamond: Regular Sysadmin | 94 questions | Mid-level questions that assume sound knowledge. |
:small_orange_diamond: Senior Sysadmin | 99 questions | Hard questions and riddles. Check them if you want to be good. |
Secret Knowledge | ||
:small_orange_diamond: Guru Sysadmin | 12 questions | Really deep questions to get to know a Guru Sysadmin. |
Introduction
:diamond_shape_with_a_dot_inside: Simple Questions
- What did you learn this week?
- What excites or interests you about the sysadmin world?
- What is a recent technical challenge you experienced and how did you solve it?
- Tell me about the last major project you finished.
- Do you contribute to any open source projects?
- Describe the setup of your homelab.
- What personal achievement are you most proud of?
- Tell me about the biggest mistake you've made. How would you do it differently today?
- What software tools are you going to install on the first day at a new job?
- Tell me about how you manage your knowledge database (e.g. wikis, files, portals).
- What news sources do you check daily? (sysadmin, security-related or other)
- Your NOC team has a new budget for sysadmin certifications. What certificate would you like and why?
- How do you interact with developers: us vs. them or all pulling together with a different approach?
- Which sysadmin question would you ask, if you were interviewing me, to find out how good I am with non-standard situations?
General Knowledge
:diamond_shape_with_a_dot_inside: Junior Sysadmin
System Questions (37)
Give some examples of Linux distributions. What is your favorite distro and why?
- Red Hat Enterprise Linux
- Fedora
- CentOS
- Debian
- Ubuntu
- Mint
- SUSE Linux Enterprise Server (SLES)
- SUSE Linux Enterprise Desktop (SLED)
- Slackware
- Arch
- Kali
- Backbox
My favorite Linux distribution:
- Arch Linux, which offers a nice minimalist base system on which one can build a custom operating system. The beauty of it too is that it has the Arch User Repository (AUR), which when combined with its official binary repositories allows it to probably have the largest repositories of any distribution. Its packaging process is also very simple, which means if one wants a package not in its official repositories or the AUR, it should be easy to make it for oneself.
- Linux Mint, which is also built from Ubuntu LTS releases, but ships editions with a few different desktop environments, including Cinnamon, MATE and Xfce. Mint is quite polished and its aesthetics are rather appealing; I especially like its new icon theme, although I do quite dislike its GTK+ theme (too bland for my taste). I've also found a bug in its latest release, Mint 19, that is getting quite irritating, as I asked about it over a fortnight ago on their forums and have received no replies so far, and it is a bug that makes my life on it more difficult.
- Kali Linux, a Debian-based Linux distribution aimed at advanced Penetration Testing and Security Auditing. Kali contains several hundred tools which are geared towards various information security tasks, such as Penetration Testing, Security Research, Computer Forensics and Reverse Engineering.
Useful resources:
What are the differences between Unix, Linux, BSD, and GNU?
GNU isn't really an OS. It's more of a set of rules or philosophies that govern free software, and that at the same time gave birth to a bunch of tools while trying to create an OS. So GNU tools are basically open versions of tools that already existed, but were reimplemented to conform to the principles of open software. GNU/Linux is a mesh of those tools and the Linux kernel to form a complete OS, but there are other GNUs, e.g. GNU/Hurd.
Unix and BSD are "older" implementations of POSIX that are various levels of "closed source". Unix is usually totally closed source, but there are as many flavors of Unix as there are Linux (if not more). BSD is not usually considered "open", but it was considered to be very open when it was released. Its licensing also allowed for commercial use with far fewer restrictions than the more "open" licenses of the time allowed.
Linux is the newest of the four. Strictly speaking, it's "just a kernel"; however, in general, it's thought of as a full OS when combined with GNU Tools and several other core components.
The main governing differences between these are their ideals. Unix, Linux, and BSD have different ideals that they implement. They are all POSIX, and are all basically interchangeable. They do solve some of the same problems in different ways. So other than ideals and how they choose to implement POSIX standards, there is little difference.
For more info I suggest you read a brief article on the creation of GNU, OSS, Linux, BSD, and UNIX. They will be slanted towards their individual ideas, but those articles should give you a better idea of the differences.
Useful resources:
What is a CLI? Tell me about your favorite CLI tools, tips, and hacks.
CLI is an acronym for Command Line Interface or Command Language Interpreter. The command line is one of the most powerful ways to control your system/computer.
In Unix like systems, CLI is the interface by which a user can type commands for the system to execute. The CLI is very powerful, but is not very error-tolerant.
The CLI allows you to do manipulations with your system's internals and with code in a much more fine-tuned way. It offers greater flexibility and control than a GUI regardless of what OS is used. Many programs that you might want to use in your software, for example those hosted on GitHub, also require running some commands on the CLI in order to get them working.
My favorite tools:
- screen - a free terminal multiplexer; I can start a session and my terminals will be saved even when the connection is lost, so I can resume later or from home
- ssh - the most valuable over-all command to learn; I can use it to do some amazing things:
  - mount a file system over the internet with sshfs
  - forward commands: run against an rsync server with no rsync daemon by starting one itself via ssh
  - run in batch files: I can redirect the output from the remote command and use it within a local batch file
- vi/vim - the most popular and powerful text editor; it's universal and works very fast, even on large files
- bash-completion - contains a number of predefined completion rules for the shell

Tips & Hacks:
- search the command history with CTRL + R
- popd/pushd and other shell builtins allow you to manipulate the directory stack
- editing keyboard shortcuts like CTRL + U, CTRL + E
- these combinations will be auto-expanded:
  - !* - all arguments of the last command
  - !! - the whole of the last command
  - !ssh - the last command starting with ssh
Useful resources:
What is your favorite shell and why?
BASH is my favorite. It's really a preferential kind of thing, where I love the syntax and it just "clicks" for me. The input/output redirection syntax (>>, <<, 2>&1, 2>, 1>, etc.) is similar to C++, which makes it easier for me to recognize.
I also like the ZSH shell, because it is much more customizable than BASH. It has the Oh-My-Zsh framework, powerful context-based tab completion, pattern matching/globbing on steroids, loadable modules and more.
Useful resources:
How do you get help on the command line? ***
- man [commandname] can be used to see a description of a command (e.g. man less, man cat)
- -h or --help - some programs print usage instructions when passed this parameter (e.g. python -h and python --help)
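For example, a few common ways to get help (the command names below are just examples):
# manual page for a command
man less
# many programs accept a short or long help flag
python --help
# help for shell builtins
help cd
# search manual page descriptions by keyword
apropos directory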
Your first 5 commands on a *nix server after login.
- w - a lot of great information in there, along with the server uptime
- top - you can see all running processes, then order them by CPU, memory utilization and more
- netstat - to know what port and IP your server is listening on and what processes are using those
- df - reports the amount of available disk space being used by file systems
- history - tells you what was previously run by the user you are currently connected to
Useful resources:
What do the fields in ls -al output mean?
In the order of output:
-rwxrw-r-- 1 root root 2048 Jan 13 07:11 db.dump
- file permissions,
- number of links,
- owner name,
- owner group,
- file size,
- time of last modification,
- file/directory name
File permissions are displayed as follows:
- the first character is -, l or d: d indicates a directory, - represents a file, l is a symlink (or soft link) - a special type of file
- three sets of three characters, indicating permissions for owner, group and other:
  - r = readable
  - w = writable
  - x = executable
In the example -rwxrw-r-- above, the line displayed means:
- a regular file (displayed as -)
- readable, writable and executable by owner (rwx)
- readable, writable, but not executable by group (rw-)
- readable but not writable or executable by other (r--)
Useful resources:
How do you get a list of logged-in users?
For a summary of logged-in users, including each login session of a username, the terminal each user is attached to, the date/time they logged in, and possibly the computer from which they are making the connection, enter:
# It uses /var/run/utmp and /var/log/wtmp files to get the details.
who
For extensive information, including username, terminal, IP number of the source computer, the time the login began, any idle time, process CPU cycles, job CPU cycles, and the currently running command, enter:
# It uses /var/run/utmp, and their processes /proc.
w
Also useful: to display a list of the last logged-in users, enter:
# It uses /var/log/wtmp.
last
Useful resources:
What is the advantage of executing the running processes in the background? How can you do that?
The most significant advantage of executing the running process in the background is that you can do any other task simultaneously while other processes are running in the background. So, more processes can be completed in the background while you are working on different processes. It can be achieved by adding a special character &
at the end of the command.
Generally, applications that take too long to execute and don't require user interaction are sent to the background so that we can continue our work in the terminal.
For example if you want to download something in background, you can:
wget https://url-to-download.com/download.tar.gz &
When you run the above command you get the following output:
[1] 2203
Here 1 is the serial number of job and 2203 is PID of the job.
You can see the jobs running in background using the following command:
jobs
When you execute a job in the background it gives you the PID of the job; you can kill a job running in the background using the following command:
kill PID
Replace the PID with the PID of the job. If you have only one job running you can bring it to foreground using:
fg
If you have multiple jobs running in the background, you can bring any job to the foreground using:
fg %#
Replace the # with the serial number of the job.
Useful resources:
Before you can manage processes, you must be able to identify them. Which tools will you use? ***
To be completed.
Running commands as the root user. Is it a good or bad practice?
Running (everything) as root is bad because:
-
Stupidity: nothing prevents you from making a careless mistake. If you try to change the system in any potentially harmful way, you need to use sudo, which ensures a pause (while you're entering the password) to ensure that you aren't about to make a mistake.
-
Security: harder to hack if you don't know the admin user's login account. root means you already have one half of the working set of admin credentials.
-
You don't really need it: if you need to run several commands as root, and you're annoyed by having to enter your password several times when
sudo
has expired, all you need to do issudo -i
and you are now root. Want to run some commands using pipes? Then usesudo sh -c "command1 | command2"
. -
You can always use it in the recovery console: the recovery console allows you to recover from a major mistake, or fix a problem caused by an app (which you still had to run as
sudo
). Ubuntu doesn't have a password for the root account in this case, but you can search online for changing that - this will make it harder for anyone that has physical access to your box to be able to do harm.
Useful resources:
How to check memory stats and CPU stats?
You'd use top/htop for both. Using the free and vmstat commands we can display the physical and virtual memory statistics respectively. With the help of the sar command we can see CPU utilization and other stats (but sar isn't even installed on most systems).
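For example (a quick sketch of typical usage):
# memory usage in human-readable units
free -h
# virtual memory statistics, refreshed every 2 seconds
vmstat 2
# one batch-mode snapshot of processes sorted by CPU usage
top -b -n 1 | head -n 20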
Useful resources:
What is load average?
Linux load averages are "system load averages" that show the running thread (task) demand on the system as an average number of running plus waiting threads. This measures demand, which can be greater than what the system is currently processing. Most tools show three averages, for 1, 5, and 15 minutes.
These 3 numbers are not the numbers for the different CPUs. These numbers are mean values of the load number for a given period of time (of the last 1, 5 and 15 minutes).
Load average is usually described as "average length of run queue". So few CPU-consuming processes or threads can raise load average above 1. There is no problem if load average is less than total number of CPU cores. But if it gets higher than number of CPUs, this means some threads/processes will stay in queue, ready to run, but waiting for free CPU.
It is meant to give you an idea of the state of the system, averaged over several periods of time. Since it is averaged, it takes time for it to go back to 0 after a heavy load was placed on the system.
Some interpretations:
- if the averages are 0.0, then your system is idle
- if the 1 minute average is higher than the 5 or 15 minute averages, then load is increasing
- if the 1 minute average is lower than the 5 or 15 minute averages, then load is decreasing
- if they are higher than your CPU count, then you might have a performance problem (it depends)
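A quick way to read the numbers, as a minimal sketch:
# the three load averages are shown in the last three fields
uptime
# the same values straight from the kernel
cat /proc/loadavg
# the number of CPU cores to compare against
nproc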
Useful resources:
Where is my password stored on Linux/Unix?
The passwords are not stored anywhere on the system at all. What is stored in /etc/shadow
are so called hashes of the passwords.
A hash of some text is created by performing a so called one way function on the text (password), thus creating a string to check against. By design it is "impossible" (computationally infeasible) to reverse that process.
Older Unix variants stored the encrypted passwords in /etc/passwd
along with other information about each account.
Newer ones simply have a *
in the relevant field in /etc/passwd
and use /etc/shadow
to store the password, in part to ensure nobody gets read access to the passwords when they only need the other stuff (shadow
is usually protected more strongly than passwd
).
For more info consult man crypt
, man shadow
, man passwd
.
Useful resources:
How to recursively change permissions for all directories except files and for all files except directories?
To change all the directories e.g. to 755 (drwxr-xr-x
):
find /opt/data -type d -exec chmod 755 {} \;
To change all the files e.g. to 644 (-rw-r--r--
):
find /opt/data -type f -exec chmod 644 {} \;
Useful resources:
Every command fails with command not found. How to trace the source of the error and resolve it?
It looks like at one point or another something is overwriting the default PATH environment variable. The type of errors you have indicates that PATH does not contain e.g. /bin, where the commands (including bash) reside.
One way to begin debugging your bash script or command would be to start a subshell with the -x
option:
bash --login -x
This will show you every command, and its arguments, which is executed when starting that shell.
It is also very helpful to show the PATH variable's value:
echo $PATH
If you run this:
PATH=/bin:/sbin:/usr/bin:/usr/sbin
most commands should start working - and then you can edit ~/.bash_profile
instead of ~/.bashrc
and fix whatever is resetting PATH
there. Default PATH variable values for root and other users are set in the /etc/profile file.
Useful resource:
You type CTRL + C but your script is still running. How do you stop it?
In most cases, you can stop a running script by using the CTRL + C
keyboard combination. This sends an interrupt signal (SIGINT) to the script, which terminates its execution. If this does not work and the script is still running, you can try using the CTRL + \
combination, which sends a quit signal (SIGQUIT) to the script, which may terminate it immediately.
Alternatively, if you are using a terminal or command line interface, you can try using the kill
command to send a signal to the script process. You can find the process ID (PID) of the script by using the ps
or top
command, and then use kill
with the PID to stop the script.
In some cases, you may need to use the kill -9
command to force the script to stop, as the regular kill command may not work if the script is stuck or not responding. The -9
option sends a SIGKILL signal, which forces the process to stop immediately.
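For example (the script name and PID below are made up for illustration):
# find the PID of the running script
pgrep -af my-script.sh
# ask it to terminate gracefully (SIGTERM)
kill 12345
# force it to stop if it ignores SIGTERM
kill -9 12345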
What is the grep command? How to match multiple strings in the same line?
The grep utilities are a family of Unix tools, including egrep and fgrep.
grep searches files for patterns. If you are looking for a specific pattern in the output of another command, grep highlights the relevant lines. Use the grep command for searching log files, specific processes, and more.
To match multiple strings:
grep -E "string1|string2" filename
or
grep -e "string1" -e "string2" filename
Useful resources:
Explain the file content commands along with the description.
- head: to check the beginning of a file
- tail: to check the end of a file; it is the reverse of the head command
- cat: used to view, create and concatenate files
- more: used to display text in the terminal window in pager form
- less: used to view text, allowing backward movement and single-line movement
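A few typical invocations (the file paths are illustrative):
# first lines of a file
head /var/log/syslog
# follow a log file as it grows
tail -f /var/log/syslog
# page through a file, scrolling in both directions
less /etc/services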
Useful resources:
SIGHUP, SIGINT, SIGKILL, and SIGTERM POSIX signals. Explain.
- SIGHUP - is sent to a process when its controlling terminal is closed. It was originally designed to notify the process of a serial line drop (a hangup). Many daemons will reload their configuration files and reopen their logfiles instead of exiting when receiving this signal.
- SIGINT - is sent to a process by its controlling terminal when a user wishes to interrupt the process. This is typically initiated by pressing Ctrl+C, but on some systems, the "delete" character or "break" key can be used.
- SIGKILL - is sent to a process to cause it to terminate immediately (kill). In contrast to SIGTERM and SIGINT, this signal cannot be caught or ignored, and the receiving process cannot perform any clean-up upon receiving this signal.
- SIGTERM - is sent to a process to request its termination. Unlike the SIGKILL signal, it can be caught and interpreted or ignored by the process. This allows the process to perform nice termination releasing resources and saving state if appropriate. SIGINT is nearly identical to SIGTERM.
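As an illustration, sending these signals to a process (the PID 12345 is made up):
# graceful termination request (SIGTERM is the default signal)
kill 12345
# the signal a closed terminal would send; many daemons reload their config
kill -HUP 12345
# interrupt, like pressing Ctrl+C
kill -INT 12345
# cannot be caught or ignored; use as a last resort
kill -KILL 12345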
Useful resources:
What does the kill command do?
In Unix and Unix-like operating systems, kill
is a command used to send a signal to a process. By default, the message sent is the termination signal, which requests that the process exit. But kill
is something of a misnomer; the signal sent may have nothing to do with process killing.
Useful resources:
What is the difference between rm and rm -rf?
rm
only deletes the named files (and not directories). With -rf
as you say:
- -r, -R, --recursive - recursively deletes the content of a directory, including hidden files and subdirectories
- -f, --force - ignore nonexistent files, never prompt
Useful resources:
How do I grep recursively? Explain with several examples. ***
To be completed.
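A minimal sketch of common approaches, pending a full answer (the paths and patterns are illustrative):
# search every file under a directory tree
grep -r "pattern" /etc
# recursive, case-insensitive, with line numbers, only in *.conf files
grep -rni --include="*.conf" "pattern" /etc
# alternative: let find select the files and hand them to grep
find /var/log -type f -name "*.log" -exec grep -l "error" {} +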
archive.tgz has ~30 GB. How do you list its contents and extract only one file?
# list of content
tar tf archive.tgz
# extract file
tar xf archive.tgz filename
Useful resources:
Execute and combine multiple shell commands in one line.
If you want to execute each command only if the previous one succeeded, then combine them using the &&
operator:
cd /my_folder && rm *.jar && svn co path to repo && mvn compile package install
If one of the commands fails, then all other commands following it won't be executed.
If you want to execute all commands regardless of whether the previous ones failed or not, separate them with semicolons:
cd /my_folder; rm *.jar; svn co path to repo; mvn compile package install
In most cases, you want the first form, where execution of the next command depends on the success of the previous one.
You can also put all commands in a script and execute that instead:
#! /bin/sh
cd /my_folder \
&& rm *.jar \
&& svn co path to repo \
&& mvn compile package install
Useful resources:
What symbolic representation can you pass to chmod to give all users execute access to a file without affecting other permissions?
chmod a+x /path/to/file
- a - for all users
- x - for execution permission
- r - for read permission
- w - for write permission
Useful resources:
How can I sync two local directories?
To sync the contents of dir1 to dir2 on the same system, type:
rsync -av --progress --delete dir1/ dir2
- -a, --archive - archive mode
- --delete - delete extraneous files from destination dirs
- -v, --verbose - verbose mode (increase verbosity)
- --progress - show progress during transfer
Useful resources:
Many basic maintenance tasks require you to edit config files. Explain ways to undo the changes you make.
- manually back up a file before editing (with brace expansion, like this: cp filename{,.orig}) - see the example below
- make a manual copy of the directory structure where the file is stored (e.g. with cp, rsync or tar)
- make a backup of the original file in your editor (e.g. set rules in your editor configuration file)
- the best solution is to use git (or any other version control) to keep track of configuration files (e.g. etckeeper for the /etc directory)
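For example (the file name is illustrative; the etckeeper commands assume the package is installed):
# quick backup copy via brace expansion: creates sshd_config.orig
cp /etc/ssh/sshd_config{,.orig}
# restore it if the edit goes wrong
cp /etc/ssh/sshd_config{.orig,}
# or keep /etc under version control with etckeeper
etckeeper init
etckeeper commit "initial import"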
Useful resources:
You have to find all files larger than 20MB. How do you do it?
find / -type f -size +20M
Useful resources:
Why do we use sudo su -
and not just sudo su
?
sudo is used because in most modern Linux distributions (but not always) the root user is disabled and has no password set. Therefore you cannot switch to the root user with su alone (you can try). You have to call su with root privileges: sudo su.
su
just switches the user, providing a normal shell with an environment nearly the same as with the old user.
su -
invokes a login shell after switching the user. A login shell resets most environment variables, providing a clean base.
Useful resources:
How to find files that have been modified on your system in the past 60 minutes?
find / -mmin -60 -type f
Useful resources:
What are the main reasons for keeping old log files?
They are essential to investigate issues on the system. Log management is absolutely critical for IT security.
Servers, firewalls, and other IT equipment keep log files that record important events and transactions. This information can provide important clues about hostile activity affecting your network from within and without. Log data can also provide information for identifying and troubleshooting equipment problems including configuration problems and hardware failure.
It's your server's record of who's come to your site, when, and exactly what they looked at. It's incredibly detailed, showing:
- where folks came from
- what browser they were using
- exactly which files they looked at
- how long it took to load each file
- and a whole bunch of other nerdy stuff
Factors to consider:
- legal requirements for retention or destruction
- company policies for retention and destruction
- how long the logs are useful
- what questions you're hoping to answer from the logs
- how much space they take up
By collecting and analyzing logs, you can understand what transpires within your network. Each log file contains many pieces of information that can be invaluable, especially if you know how to read them and analyze them.
Useful resources:
What is an incremental backup?
An incremental backup is a type of backup that only copies files that have changed since the previous backup.
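A minimal sketch with GNU tar (the paths are illustrative):
# level-0 (full) backup; the snapshot file records what was archived
tar --listed-incremental=/backup/home.snar -czf /backup/home-full.tar.gz /home
# later runs with the same snapshot file only archive files changed since then
tar --listed-incremental=/backup/home.snar -czf /backup/home-inc1.tar.gz /home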
Useful resources:
What is RAID? What is RAID0, RAID1, RAID5, RAID6, RAID10?
A RAID (Redundant Array of Inexpensive Disks) is a technology that is used to increase the performance and/or reliability of data storage.
- RAID0: Also known as disk striping, is a technique that breaks up a file and spreads the data across all the disk drives in a RAID group. There are no safeguards against failure
- RAID1: A popular disk subsystem that increases safety by writing the same data on two drives. Called "mirroring," RAID 1 does not increase write performance, but read performance may equal up to the sum of each disks' performance. However, if one drive fails, the second drive is used, and the failed drive is manually replaced. After replacement, the RAID controller duplicates the contents of the working drive onto the new one
- RAID5: It is disk subsystem that increases safety by computing parity data and increasing speed by interleaving data across three or more drives (striping). Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost
- RAID6: RAID 6 extends RAID 5 by adding another parity block. It requires a minimum of four disks and can continue to execute read and write of any two concurrent disk failures. RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations
- RAID10: Also known as RAID 1+0, is a RAID configuration that combines disk mirroring and disk striping to protect data. It requires a minimum of four disks, and stripes data across mirrored pairs. As long as one disk in each mirrored pair is functional, data can be retrieved. If two disks in the same mirrored pair fail, all data will be lost because there is no parity in the striped sets
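As an illustration, creating a simple RAID1 (mirror) array with mdadm - a sketch with made-up device names, not a full procedure:
# mirror two partitions into /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# watch the array status and the initial sync
cat /proc/mdstat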
Useful resources:
How is a user's default group determined? How would you change it?
useradd -m -g initial_group username
-g/--gid
: defines the group name or number of the user's initial login group. If specified, the group name must exist; if a group number is provided, it must refer to an already existing group.
If not specified, the behaviour of useradd will depend on the USERGROUPS_ENAB
variable contained in /etc/login.defs
. The default behaviour (USERGROUPS_ENAB yes
) is to create a group with the same name as the username, with GID equal to UID.
Useful resources:
What is your best command line text editor for daily working and scripting? ***
To be completed.
Why would you want to mount servers in a rack?
- Protecting Hardware
- Proper Cooling
- Organized Workspace
- Better Power Management
- Cleaner Environment
Useful resources:
Network Questions (23)
Draw me a simple network diagram: you have 20 systems, 1 router, 4 switches, 5 servers, and a small IP block. ***
To be completed.
What are the most important things to understand about the OSI (or any other) model?
The most important things to understand about the OSI (or any other) model are:
- we can divide up the protocols into layers
- layers provide encapsulation
- layers provide abstraction
- layers decouple functions from others
Useful resources:
What is the difference between a VLAN and a subnet? Do you need a VLAN to setup a subnet?
VLANs and subnets solve different problems. VLANs work at Layer 2, thereby altering broadcast domains (for instance). Whereas subnets are Layer 3 in the current context.
Subnet - is a range of IP addresses determined by part of an address (often called the network address) and a subnet mask (netmask). For example, if the netmask is 255.255.255.0
(or /24
for short), and the network address is 192.168.10.0
, then that defines a range of IP addresses 192.168.10.0
through 192.168.10.255
. Shorthand for writing that is 192.168.10.0/24
.
VLAN - a good way to think of this is "switch partitioning." Let's say you have an 8 port switch that is VLAN-able. You can assign 4 ports to one VLAN (say VLAN 1
) and 4 ports to another VLAN (say VLAN 2
). VLAN 1
won't see any of VLAN 2's
traffic and vice versa, logically, you now have two separate switches. Normally on a switch, if the switch hasn't seen a MAC address it will "flood" the traffic to all other ports. VLANs prevent this.
A subnet is nothing more than a range of IP addresses that helps hosts communicate over layers 2 and 3. Each subnet does not require its own VLAN. VLANs are implemented for isolation (they are a sandbox for layer-two communication: no two systems in different VLANs may communicate directly, though it can be done through inter-VLAN routing), ease of management and security.
Useful resources:
List 5 common network ports you should know.
SERVICE | PORT |
---|---|
SMTP | 25 |
FTP | 20 for data transfer and 21 for the control connection |
DNS | 53 |
DHCP | 67/UDP for DHCP server, 68/UDP for DHCP client |
SSH | 22 |
Useful resources:
What are POP and IMAP, and how do you choose which of them to implement?
POP and IMAP are both protocols for retrieving messages from a mail server to a mail client.
POP (Post Office Protocol) uses a one way push from mail server to client. By default this will send messages to the POP mail client and remove them from the mail server, though it is possible to configure the mail server to retain all messages. Any actions you take on the message in your mail client (labeling, deleting, moving to a folder) will not be reflected on the mail server, and thus inaccessible to other mail clients pulling from the mail server. POP uses little storage space on the mail server and can be seen as more secure since messages only exist on one mail client instead of the mail server and multiple clients.
IMAP (Internet Message Access Protocol) uses two-way communication between mail server and client. Deleting or labeling a message in your mail client configured with IMAP will also delete or label the message on the mail server. IMAP allows for a similar experience when accessing mail across different clients or devices since messages can exist in the same state across multiple devices. IMAP can also save disk space on the mail client by selectively syncing messages, deleting older messages from the mail client since it can sync them from the mail server later as needed.
Choose IMAP if you need to access messages across multiple devices and you want to save disk space on your client device. Choose POP if you want to save disk space on your mail server, only access messages from one client device, and ensure that messages do not exist on multiple systems.
How to check default route and routing table?
Using the commands netstat -nr
, route -n
or ip route show
we can see the default route and routing tables.
Useful resources:
What is the difference between 127.0.0.1 and localhost?
Well, the most likely difference is that you still have to do an actual lookup of localhost somewhere.
If you use 127.0.0.1
, then (intelligent) software will just turn that directly into an IP address and use it. Some implementations of gethostbyname
will detect the dotted format (and presumably the equivalent IPv6 format) and not do a lookup at all.
Otherwise, the name has to be resolved. And there's no guarantee that your hosts file will actually be used for that resolution (first, or at all) so localhost
may become a totally different IP address.
By that I mean that, on some systems, a local hosts file can be bypassed. The host.conf
file controls this on Linux (and many other Unices).
If you use a Unix domain socket it'll be slightly faster than using TCP/IP (because you have less overhead). Windows uses TCP/IP by default, whereas Linux tries to use a Unix domain socket if you choose localhost and TCP/IP if you use 127.0.0.1.
Useful resources:
Which port is used for the ping command?
ping
uses ICMP, specifically ICMP echo request and ICMP echo reply packets. There is no 'port' associated with ICMP. Ports are associated with the two IP transport layer protocols, TCP and UDP. ICMP, TCP, and UDP are "siblings"; they are not based on each other, but are three separate protocols that run on top of IP.
ICMP packets are identified by the 'protocol' field in the IP datagram header. ICMP does not use either UDP or TCP communications services, it uses raw IP communications services. This means that the ICMP message is carried directly in an IP datagram data field. raw
comes from how this is implemented in software, to create and send an ICMP message, one opens a raw
socket, builds a buffer containing the ICMP message, and then writes the buffer containing the message to the raw socket.
The IP protocol value for ICMP is 1. The protocol field is part of the IP header and identifies what is in the data portion of the IP datagram.
However, you could use nmap
to see whether ports are open or not:
nmap -p 80 example.com
Useful resources:
Server A can't talk to Server B. Describe possible reasons in a few steps.
To troubleshoot communication problems between servers, it is best to follow the TCP/IP stack (example commands follow the list):
-
Application Layer: are the services up and running on both servers? Are they correctly configured (eg. bind the correct IP and correct port)? Do application and system logs show meaningful errors?
-
Transport Layer: are the ports used by the application open (try telnet!)? Is it possible to ping the server?
-
Network Layer: is there a firewall on the network or on the OS correctly configured? Is the IP stack correctly configured (IP, routes, dns, etc.)? Are switches and routers working (check the ARP table!)?
-
Physical Layer: are the servers connected to a network? Are packets being lost?
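A few commands that map to those layers (the host name and port below are illustrative):
# can we reach the host at all?
ping server-b.example.com
# is the route sane?
traceroute server-b.example.com
# is the service port reachable?
nc -vz server-b.example.com 443
# on server B: is the service listening on the expected port?
ss -tlnp | grep :443
# does name resolution work?
dig server-b.example.com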
Why won't the hostnames resolve on your server? Fix this issue. ***
To be completed.
How to resolve the domain name (using external dns) with CLI? Can IPs be resolved to domain names?
Examples of resolving a domain name using an external DNS server:
# with host command:
host domain.com 8.8.8.8
# with dig command:
dig @9.9.9.9 google.com
# with nslookup command:
nslookup domain.com 8.8.8.8
You can (sometimes) resolve an IP Address back to a hostname. IP Address can be stored against a PTR record. You can then do:
dig A <hostname>
To lookup the IPv4 address for a host, or:
dig AAAA <hostname>
To lookup the IPv6 address for a host, or:
dig PTR ZZZ.YYY.XXX.WWW.in-addr.arpa.
To lookup the hostname for IPv4 address WWW.XXX.YYY.ZZZ
(note the octets are reversed), or:
dig PTR b.a.9.8.7.6.5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
Useful resources:
How to test port connectivity with telnet or nc?
# with telnet command:
telnet code42.example.com 5432
# with nc (netcat) command:
nc -vz code42.example.com 5432
Why should you avoid telnet to administer a system remotely?
Modern operating systems turn off all potentially insecure services by default. On the other hand, some vendors of network devices still allow establishing communication using the telnet protocol.
Telnet uses the most insecure method of communication. It sends data across the network in plain text format, and anybody can easily find out the password using a network sniffing tool.
In the case of Telnet, these include the passing of login credentials in plain text, which means anyone running a sniffer on your network can find the information he needs to take control of a device in a few seconds by eavesdropping on a Telnet login session.
Useful resources:
What is the difference between wget
and curl
?
The main differences are: wget's major strong side compared to curl is its ability to download recursively. wget is command line only. curl supports FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, POP3, IMAP, SMTP, RTMP and RTSP.
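For example (the URL is illustrative):
# download a single file, following redirects, keeping the remote name
curl -L -O https://example.com/file.tar.gz
# mirror part of a site recursively (something curl cannot do)
wget --recursive --level=2 --no-parent https://example.com/docs/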
Useful resources:
What is SSH and how does it work?
SSH stands for Secure Shell. It is a protocol that lets you drop from server "A" into a shell session on server "B". It allows you to interact with server "B".
For an SSH connection to be established, the remote machine (server B) must be running a piece of software called an SSH daemon, and the user's computer (server A) must have an SSH client.
The SSH daemon listens for connections on a specific network port (default 22), authenticates connection requests, and spawns the appropriate environment if the user provides the correct credentials.
Useful resources:
Most tutorials suggest using SSH key authentication rather than password authentication. Why it is considered more secure?
An SSH key is an access credential in the SSH protocol. Its function is similar to that of user names and passwords, but the keys are primarily used for automated processes and for implementing single sign-on by system administrators and power users.
Instead of requiring a user's password, it is possible to confirm the client's identity by using asymmetric cryptography algorithms, with public and private keys.
If your SSH service only allows public-key authentication, an attacker needs a copy of a private key corresponding to a public key stored on the server.
If your SSH service allows password based authentication, then your Internet connected SSH server will be hammered day and night by bot-nets trying to guess user-names and passwords. The bot net needs no information, it can just try popular names and popular passwords. Apart from anything else this clogs your logs.
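A minimal sketch of setting up key-based authentication (the user and host names are illustrative):
# generate an ed25519 key pair on the client
ssh-keygen -t ed25519 -C "admin@workstation"
# append the public key to the server's authorized_keys
ssh-copy-id user@server.example.com
# log in; the key is used instead of a password
ssh user@server.example.com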
Useful resources:
What is a packet filter and how does it work?
Packet filtering is a firewall technique used to control network access by monitoring outgoing and incoming packets and allowing them to pass or halt based on the source and destination Internet Protocol (IP) addresses, protocols and ports.
Packet filtering is appropriate where there are modest security requirements. The internal (private) networks of many organizations are not highly segmented. Highly sophisticated firewalls are not necessary for isolating one part of the organization from another.
However it is prudent to provide some sort of protection of the production network from a lab or experimental network. A packet filtering device is a very appropriate measure for providing isolation of one subnet from another.
Operating at the network layer and transport layer of the TCP/IP protocol stack, every packet is examined as it enters the protocol stack. The network and transport headers are examined closely for the following information:
- protocol (IP header, network layer) - in the IP header, byte 9 (remember the byte count begins with zero) identifies the protocol of the packet. Most filter devices have the capability to differentiate between TCP, UDP, and ICMP.
- source address (IP header, network layer) - the source address is the 32-bit IP address of the host which created the packet.
- destination address (IP header, network layer) - the destination address is the 32-bit IP address of the host the packet is destined for.
- source port (TCP or UDP header, transport layer) - each end of a TCP or UDP network connection is bound to a port. TCP ports are separate and distinct from UDP ports. Ports numbered below 1024 are reserved - they have a specifically defined use. Ports numbered 1024 and above are known as ephemeral ports. They can be used however a vendor chooses. For a list of "well known" ports, refer to RFC 1700. The source port is a pseudo-randomly assigned ephemeral port number, so it is often not very useful to filter on the source port.
- destination port (TCP or UDP header, transport layer) - the destination port number indicates a port that the packet is sent to. Each service on the destination host listens to a port. Some well-known ports that might be filtered are 20/TCP and 21/TCP - ftp connection/data, 23/TCP - telnet, 80/TCP - http, and 53/TCP - DNS zone transfers.
- connection status (TCP header, transport layer) - the connection status tells whether the packet is the first packet of the network session. The ACK bit in the TCP header is set to "false" or 0 if this is the first packet in the session. It is simple to disallow a host from establishing a connection by rejecting or discarding any packets which have the ACK bit set to "false" or 0.
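As a small illustration of packet filtering with iptables (the interface and addresses are made up; a sketch, not a hardened ruleset):
# allow replies to connections we initiated
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# allow SSH only from a management subnet
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 22 -j ACCEPT
# drop everything else arriving on eth0
iptables -A INPUT -i eth0 -j DROP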
Useful resources:
What are the advantages of using a reverse proxy server?
Hide the topology and characteristics of your back-end servers
The reverse proxy server can hide the presence and characteristics of the origin server. It acts as an intermediary between the internet and the web server. It is good for security reasons, especially when you are using web hosting services.
Allows transparent maintenance of backend servers
Changes you make to servers running behind a reverse proxy are going to be completely transparent to your end users.
Load Balancing
The reverse proxy will then enforce a load balancing algorithm like round robin, weighted round robin, least connections, weighted least connections, or random, to distribute the load among the servers in the cluster.
When a server goes down, the system will automatically failover to the next server up and users can continue with their secure file transfer activities.
SSL offloading/termination
Handles incoming HTTPS connections, decrypting the requests and passing unencrypted requests on to the web servers.
IP masking
Using a single ip but different URLs to route to different back end servers.
Useful resources:
What is the difference between a router and a gateway? What is the default gateway?
Router describes the general technical function (layer-3 forwarding) or a hardware device intended for that purpose, while gateway describes the function for the local segment (providing connectivity to elsewhere). You could also state that "you set up a router as gateway". Another term is hop which describes the forwarding in between subnets.
The term default gateway is used to mean the router on your LAN which has the responsibility of being the first point of contact for traffic to computers outside the LAN.
It's just a matter of perspective, the device is the same.
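To see the default gateway on a Linux host, for example:
# modern iproute2 way
ip route show default
# legacy equivalents
route -n
netstat -rn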
Useful resources:
Explain the function of each of the following DNS records: SOA, PTR, A, MX, and CNAME.
DNS records are basically mapping files that tell the DNS server which IP address each domain is associated with, and how to handle requests sent to each domain. Some DNS records syntax that are commonly used in nearly all DNS record configurations are A
, AAAA
, CNAME
, MX
, PTR
, NS
, SOA
, SRV
, TXT
, and NAPTR
.
- SOA - A Start Of Authority
- A - Address Mapping records
- AAAA - IP Version 6 Address records
- CNAME - Canonical Name records
- MX - Mail exchanger record
- NS - Name Server records
- PTR - Reverse-lookup Pointer records
Useful resources:
Why couldn't MAC addresses be used instead of IPv4/6 for networking?
The OSI model explains why it doesn't make sense to make routing decisions (a layer 3 concept) based on a physical, layer 2 mechanism.
Modern networking is broken into many different layers to accomplish your end-to-end communication. Your network card (what is addressed by the MAC address - the physical address) needs to only be responsible for communicating with peers on its physical network.
The communication that you are allowed to accomplish with your MAC address is going to be limited to other devices that reside within physical contact to your machine. On the internet, for example, you are not physically connected to each machine. That's why we make use of TCP/IP (a layer 3, logical address) mechanism when we need to communicate with a machine that we are not physically connected to.
IP is an arbitrary numbering scheme imposed in a hierarchical fashion on a group of computers to logically distinguish them as a group (that's what a subnet is). Sending messages between those groups is done by routing tables, themselves divided into multiple levels so that we don't have to keep track of every single subnet.
It's also pretty easy to relate this to another pair of systems. You have a State Issued ID Number, why would you need a mailing address if that ID number is already unique to just you? You need the mailing address because it's an arbitrary system that describes where the unique destination for communications to you should go.
On the other hand, the distribution of MAC addresses across the network is random and completely unrelated to topology. Route grouping would be impossible; every router would need to keep track of routes for every single device that relays traffic through it. That is what layer 2 switches do, and that does not scale well beyond a certain number of hosts.
Useful resources:
What is the smallest IPv4 subnet mask that can be applied to a network containing up to 30 devices?
Whether you have a standard /24 VLAN for end users, a /30 for point-to-point links, or something in between, a subnet that must contain up to 30 devices works out to be a /27 - or a subnet mask of 255.255.255.224.
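A minimal sketch of the arithmetic, plus an optional check with the ipcalc utility if it happens to be installed (the network below is just an example):
# a /27 leaves 32 - 27 = 5 host bits: 2^5 = 32 addresses,
# minus the network and broadcast addresses = 30 usable hosts
echo $(( 2 ** (32 - 27) - 2 ))     # prints 30
ipcalc 192.168.1.0/27              # optional: shows the netmask and usable host range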
Useful resources:
What are some common HTTP status codes?
- 1xx - Informational responses - communicates transfer protocol-level information
- 2xx - Success - indicates that the client's request was accepted successfully
- 3xx - Redirection - indicates that the client must take some additional action in order to complete their request
- 4xx - Client side error - this category of error status codes points the finger at clients
- 5xx - Server side error - the server takes responsibility for these error status codes
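As a quick illustration, curl can print just the status code a server returns (the URL is a placeholder):
# -s silences progress output, -o discards the body, -w prints only the status code
curl -s -o /dev/null -w "%{http_code}\n" https://example.com/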
Useful resources:
Devops Questions (5)
What is DevOps? Which is more important to the success of any DevOps community: how people communicate or the tools that you choose to deploy? ***
DevOps is a cohesive team that engages in both Development and Operations tasks, or it's individual Operations and Development teams that work very closely together. It's more of a "way" of working collaboratively with other departments to achieve common goals.
What is a version control? Are your commit messages good looking?
It is a system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems consist of a central shared repository where teammates can commit changes to a file or set of files. Then you can mention the uses of version control.
Version control allows you to:
- revert files back to a previous state
- revert the entire project back to a previous state
- compare changes over time
- see who last modified something that might be causing a problem
- who introduced an issue and when
The seven rules of a great commit message:
- separate subject from body with a blank line
- limit the subject line to 50 characters
- capitalize the subject line
- do not end the subject line with a period
- use the imperative mood in the subject line
- wrap the body at 72 characters
- use the body to explain what and why vs. how
Useful resources:
Explain some basic git commands.
- git init - create a new local repository
- git commit -m "message" - commit changes to head
- git status - list the files you've added with git add, and the files you've changed that are not yet committed
- git push origin master - send changes to the master branch of your remote repository
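Tying those commands together, a minimal local-to-remote flow might look like this (the file name and the remote are assumptions for the example):
git init                              # start a new local repository
echo "notes" > README.md              # create a file to track (example file)
git add README.md                     # stage it
git status                            # review what is staged / modified
git commit -m "Add project README"    # record the change with a clear message
git push origin master                # publish it, assuming a remote named "origin" exists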
Explain a simple Continuous Integration pipeline.
- clone repository
- deploy stage (QA)
- testing environment (QA)
- deploy stage (PROD)
Explain some basic docker commands.
- docker ps - show running containers
- docker ps -a - show all containers
- docker images - show docker images
- docker logs <container-id|container-name> - get logs from a container
- docker network ls - show all docker networks
- docker volume ls - show all docker volumes
- docker exec -it <container-id|container-name> bash - execute bash in a container with an interactive shell
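A short, hedged example of those commands in use; the image name, container name, and port mapping are arbitrary choices for illustration:
docker run -d --name web -p 8080:80 nginx   # start a container from the nginx image
docker ps                                   # confirm it is running
docker logs web                             # inspect its output
docker exec -it web bash                    # open an interactive shell inside it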
Cyber Security Questions (1)
What is a Security Misconfiguration?
Security misconfiguration is a vulnerability when a device/application/network is configured in a way which can be exploited by an attacker to take advantage of it. This can be as simple as leaving the default username/password unchanged or too simple for device accounts etc.
:diamond_shape_with_a_dot_inside: Regular Sysadmin
System Questions (60)
Tell me about your experience with the production environments? ***
To be completed.
Which distribution would you select for running a major web server? ***
To be completed.
Explain in a few points the boot process of the Linux system.
BIOS: BIOS stands for Basic Input/Output System; it performs integrity checks, then searches for, loads, and executes the bootloader.
Bootloader: Since the earlier phases are not specific to the operating system, the BIOS-based boot process for x86 and x86-64 architectures is considered to start when the master boot record (MBR) code is executed in real mode and the first-stage boot loader is loaded. In UEFI systems, a payload, such as the Linux kernel, can be executed directly. Thus no boot loader is necessary. Some popular bootloaders: GRUB, Syslinux/Isolinux or Lilo.
Kernel: The kernel in Linux handles all operating system processes, such as memory management, task scheduling, I/O, interprocess communication, and overall system control. This is loaded in two stages - in the first stage, the kernel (as a compressed image file) is loaded into memory and decompressed, and a few fundamental functions such as basic memory management are set up.
Init: Is the parent of all processes on the system, it is executed by the kernel and is responsible for starting all other processes.
- SysV init - init's job is "to get everything running the way it should be" once the kernel is fully running. Essentially it establishes and operates the entire user space. This includes checking and mounting file systems, starting up necessary user services, and ultimately switching to a user environment when system startup is completed.
- systemd - the developers of systemd aimed to replace the Linux init system inherited from UNIX System V. Like init, systemd is a daemon that manages other daemons. All daemons, including systemd, are background processes. Systemd is the first daemon to start (during booting) and the last daemon to terminate (during shutdown).
- runit - an init scheme for Unix-like operating systems that initializes, supervises, and ends processes throughout the operating system. It is a reimplementation of the daemontools process supervision toolkit that runs on Linux, macOS, *BSD, and Solaris.
Useful resources:
How and why Linux daemons drop privileges? Why some daemons need root permissions to start? Explain. ***
To be completed.
Why is a load of 1.00 not ideal on a single-core machine?
The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70.
The "Need to Look into it" Rule of Thumb: 0.70. If your load average is staying above 0.70, it's time to investigate before things get worse.
The "Fix this now" Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you're going to get woken up in the middle of the night, and it's not going to be fun.
Rule of Thumb: 5.0. If your load average is above 5.00, you could be in serious trouble, your box is either hanging or slowing way down, and this will (inexplicably) happen in the worst possible time like in the middle of the night or when you're presenting at a conference. Don't let it get there.
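For reference, the three load-average figures (1, 5 and 15 minutes) are visible with uptime; the output below is illustrative only:
uptime
# 10:14:32 up 12 days,  3:05,  2 users,  load average: 0.61, 0.74, 0.89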
Useful resources:
What does it mean when the effective user is root, but the real user ID is still your name?
The real user ID is who you really are (the user who owns the process), and the effective user ID is what the operating system looks at to make a decision whether or not you are allowed to do something (most of the time, there are some exceptions).
When you log in, the login shell sets both the real and effective user ID to the same value (your real user ID) as supplied by the password file.
If, for instance, you execute a setuid program, then besides running as another user (e.g. root), the setuid program is also supposed to do something on your behalf.
After executing it, the process will have your real user ID (since you're the process owner) and the effective user ID of the file owner (for example root), since the file is setuid.
Let's use the case of passwd
:
-rwsr-xr-x 1 root root 45396 may 25 2012 /usr/bin/passwd
When user2 wants to change their password, they execute /usr/bin/passwd
.
The RUID will be user2 but the EUID of that process will be root.
user2 can use only passwd to change their own password, because internally passwd checks the RUID and, if it is not root, its actions will be limited to real user's password.
It's necessary that the EUID becomes root in the case of passwd because the process needs to write to /etc/passwd
and/or /etc/shadow
.
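A small way to observe this split in practice: start passwd as an unprivileged user in one terminal and inspect the process from another. The PID and user names in the sample output are made up:
# while passwd is waiting for input, list its real and effective users
ps -o pid,ruser,euser,comm -C passwd
#   PID RUSER    EUSER    COMMAND
#  1234 user2    root     passwd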
Useful resources:
A developer added a cron job which generates massive log files. How do you prevent them from getting so big?
Using logrotate
is the usual way of dealing with logfiles. But instead of adding content to /etc/logrotate.conf
you should add your own job to /etc/logrotate.d/
, otherwise you would have to look at more diffs of configuration files during release upgrades.
If the file is actively being written to, deleting it won't free the space while the process holds it open; the practical option is to truncate the file in place:
: >/var/log/massive-logfile
It's very helpful, because it truncates the file without disrupting the processes writing to it.
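A hedged sketch of a drop-in logrotate policy for such a file; the path, rotation count and size threshold are assumptions to adapt:
# /etc/logrotate.d/massive-logfile  (illustrative values)
/var/log/massive-logfile {
    daily
    size 100M          # rotate as soon as the file exceeds 100 MB
    rotate 7           # keep seven rotated copies
    compress
    missingok
    notifempty
    copytruncate       # truncate in place so the writing process keeps its file handle
}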
Useful resources:
How the Linux kernel creates, manages and deletes the processes in the system? ***
To be completed.
Useful resources:
Explain the selected information you can see in top
and htop
. How to diagnose load, high user time and out-of-memory problems with these tools? ***
To be completed.
Useful resources:
How would you recognize a process that is hogging resources?
top
works reasonably well, as long as you look at the right numbers.
- M Sorts by current resident memory usage
- T Sorts by total (or cumulative) CPU usage
- P Sorts by current CPU usage (this is the default refresh)
- ? Displays a usage summary for all top commands
This is very important information to obtain when problem solving why a computer process is running slowly and making decisions on what processes to kill/software to uninstall.
Useful resources:
You need to upgrade the ntpd service on 200 servers. What is the best way to go about upgrading all of these to the latest version?
By using Infrastructure as a Code approach, there are multiple good ways:
- Configuration Synchronization Change Management Model:
There are Configuration Management Tools (Ansible, Chef, Puppet, Saltstack, ...), that can be used to automatically update ntpd
service on all servers. To keep systems stable, system packages on servers are usually auto-updated with only security updates. Major or minor versions of packages are usually version locked in configuration definitions to prevent misconfiguration of the service. Change is then deployed by changing ntpd
version in configuration definition.
With this approach, it is important to be careful when deploying changes massively across the infrastructure. The deployment pipeline should include Unit, Integration and System tests, and the change should first be deployed into a Staging environment to prove the configuration. If the tests prove the configuration correct, deployment should be done by incremental rollout, with the ability to roll back in case of errors or failures.
- Immutable Servers Model:
In Immutable Server model, whole unit (server, container) is replaced by new updated image rather than making changes to running server (this eliminates configuration drift). With this approach you usually build server image with tools like Packer or Docker with Dockerfile. This image is then tested and deployed similarly as in option above (1.), but now using techniques such as Canary Release, which also has ability to incremental rollout and rollback.
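As a minimal illustration of the configuration-management tooling only (not the full, version-pinned pipeline described above), an Ansible ad-hoc run could push the package change in batches; the inventory group name is an assumption:
# upgrade the ntp package on a hypothetical "ntp_servers" inventory group, 20 hosts at a time
ansible ntp_servers --forks 20 --become -m package -a "name=ntp state=latest"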
Useful resources:
How to permanently set $PATH
on Linux/Unix? Why is this variable so important? ***
To be completed.
When your server is booting up some errors appears on the console. How to examine boot messages and where are they stored?
Your console has two types of messages:
- generated by the kernel (via printk)
- generated by userspace (usually your init system)
Kernel messages are always stored in the kmsg buffer, visible via dmesg
command. They're also often copied to your syslog. This also applies to userspace messages written to /dev/kmsg
, but those are fairly rare.
Meanwhile, when userspace writes its fancy boot status text to /dev/console
or /dev/tty1
, it's not stored anywhere at all. It just goes to the screen and that's it.
dmesg
is used to review boot messages contained in the kernel ring buffer. A ring buffer is a buffer of fixed size for which any new data added to it overwrites the oldest data in it.
It shows what happened during and after the boot process, such as command line options passed to the kernel, hardware components detected, events when a new USB device is added, or errors like a NIC (Network Interface Card) failure where the driver reports no link activity, and much more.
If system logging is done via the journal component you should use journalctl
. It shows messages including kernel and boot messages, as well as messages from syslog and various services.
Boot issues/errors call for a system administrator to look into certain important files, in conjunction with particular commands (handled differently by different versions of Linux):
- /var/log/boot.log - system boot log, it contains all that unfolded during the system boot
- /var/log/messages - stores global system messages, including the messages that are logged during system boot
- /var/log/dmesg - contains kernel ring buffer information
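For example, to surface only boot-time problems from those sources (a minimal sketch; availability of these flags depends on your distribution):
dmesg --level=err,warn     # kernel ring buffer, errors and warnings only
journalctl -b -p err       # current boot, priority "err" and above (systemd systems)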
Useful resources:
Swap usage too high. What are the reasons for this and how to resolve swapping problems?
Swap space is a restricted amount of disk space that is allocated for use by the operating system when available physical memory has been fully utilized. It is a memory management technique that involves swapping sections of memory to and from physical storage.
If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory.
A workload that requires more memory than you have RAM increases your swap demand; usage of the entire swap space indicates that. Also, changing swappiness to 1 might not be a wise decision. Setting swappiness to 1 does not mean that swapping will not be done; it only indicates how aggressive the kernel will be about swapping, it does not eliminate it. Swapping will still happen if it needs to be done.
- Increasing the size of the swap space - firstly, you'd have increased disk use. If your disks aren't fast enough to keep up, then your system might end up thrashing, and you'd experience slowdowns as data is swapped in and out of memory. This would result in a bottleneck.
- Adding more RAM - the real solution is to add more memory. There's no substitute for RAM, and if you have enough memory, you'll swap less.
For monitoring swap space usage:
- cat /proc/swaps - to see total and used swap size
- grep SwapTotal /proc/meminfo - to show total swap space
- free - to display the amount of free and used system memory (including swap)
- vmstat - to check swapping statistics
- top, htop - to check swap space usage
- atop - to show whether your system is overcommitting memory
- or use a one-liner shell command to list all applications with how much swap space each is using, in kilobytes:
for _fd in /proc/*/status ; do
awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $_fd
done | sort -k 2 -n -r | less
Useful resources:
What is umask? How to set it permanently for a user?
On Linux and other Unix-like operating systems, new files are created with a default set of permissions. Specifically, a new file's permissions may be restricted in a specific way by applying a permissions "mask" called the umask. The umask command is used to set this mask, or to show you its current value.
To change it permanently for a user, add a umask line (e.g. umask 002) to one of that user's shell startup files:
~/.profile
~/.bashrc
~/.zshrc
~/.cshrc
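A minimal sketch, assuming a bash login shell and the example value above:
echo "umask 002" >> ~/.profile    # picked up at the next login
umask                             # run in a new login shell to verify the value was applied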
Useful resources:
Explain the differences among the following umask values: 000, 002, 022, 027, 077, and 277.
Umask | File result | Directory result |
---|---|---|
000 | 666 rw- rw- rw- | 777 rwx rwx rwx |
002 | 664 rw- rw- r-- | 775 rwx rwx r-x |
022 | 644 rw- r-- r-- | 755 rwx r-x r-x |
027 | 640 rw- r-- --- | 750 rwx r-x --- |
077 | 600 rw- --- --- | 700 rwx --- --- |
277 | 400 r-- --- --- | 500 r-x --- --- |
Useful resources:
What is the difference between a symbolic link and a hard link?
Underneath the file system, files are represented by inodes
- a file in the file system is basically a link to an inode
- a hard link then just creates another file with a link to the same underlying inode
When you delete a file it removes one link to the underlying inode. The inode is only deleted (or deletable/over-writable) when all links to the inode have been deleted.
- a symbolic link is a link to another name in the file system
Once a hard link has been made, the link is to the inode. Deleting, renaming, or moving the original file will not affect the hard link, as it links to the underlying inode. Any changes to the data on the inode are reflected in all files that refer to that inode.
Note: Hard links are only valid within the same file system. Symbolic links can span file systems as they are simply the name of another file.
Differences:
- A hard link cannot be created for directories; hard links can only be created for files
- A soft link, also termed a symbolic link or symlink, can link to a directory
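A small demonstration of the difference; the file names are arbitrary:
echo "data" > original.txt
ln original.txt hard.txt                # hard link: shares the same inode as original.txt
ln -s original.txt soft.txt             # symbolic link: its own inode, pointing at the name
ls -li original.txt hard.txt soft.txt   # -i prints inode numbers for comparison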
Useful resources:
How does the sticky bit work? Is it the same as SUID/SGID?
This is one of the things people mess up all the time: the SUID/SGID bits and the sticky bit are two completely different things.
If you do a man chmod
you can read about the SUID and sticky-bits.
SUID/SGID
What the above man page is trying to say is that the position that the x bit takes in the rwxrwxrwx for the user octal (1st group of rwx) and the group octal (2nd group of rwx) can take an additional state where the x becomes an s. When this occurs this file when executed (if it's a program and not just a shell script) will run with the permissions of the owner or the group of the file.
So if the file is owned by root and the SUID bit is turned on, the program will run as root, even if you execute it as a regular user. The same thing applies to the SGID bit and the group.
Examples:
no suid/guid - just the bits rwxr-xr-x
are set.
ls -lt b.pl
-rwxr-xr-x 1 root root 179 Jan 9 01:01 b.pl
suid & user's executable bit enabled (lowercase s) - the bits rwsr-x-r-x
are set.
chmod u+s b.pl
ls -lt b.pl
-rwsr-xr-x 1 root root 179 Jan 9 01:01 b.pl
suid enabled & executable bit disabled (uppercase S) - the bits rwSr-xr-x
are set.
chmod u-x b.pl
ls -lt b.pl
-rwSr-xr-x 1 root root 179 Jan 9 01:01 b.pl
sgid & group's executable bit enabled (lowercase s) - the bits rwxr-sr-x
are set.
chmod g+s b.pl
ls -lt b.pl
-rwxr-sr-x 1 root root 179 Jan 9 01:01 b.pl
sgid enabled & executable bit disabled (uppercase S) - the bits rwxr-Sr-x
are set.
chmod g-x b.pl
ls -lt b.pl
-rwxr-Sr-x 1 root root 179 Jan 9 01:01 b.pl
sticky bit
The sticky bit on the other hand is denoted as t
, such as with the /tmp
directory:
ls -l /|grep tmp
drwxrwxrwt. 168 root root 28672 Jun 14 08:36 tmp
This bit should have always been called the restricted deletion bit given that's what it really connotes. When this mode bit is enabled, it makes a directory such that users can only delete files & directories within it that they are the owners of.
Useful resources:
What does LC_ALL=C before a command do? In what cases will it be useful?
LC_ALL is the environment variable that overrides all the other localisation settings; it sets all LC_* type variables at once to a specified locale.
The main reason to set LC_ALL=C before a command is simply to get English output (in general, to change the locale used by the command).
It is also important for increasing the speed of command execution, e.g. with grep or fgrep: using the LC_ALL=C locale avoids locale-aware (multi-byte) string handling and brings command execution time down.
For example, if you set LC_ALL=en_US.utf8, your system opens multiple files from the /usr/lib/locale directory; with LC_ALL=C a minimum number of open and read operations is performed.
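A hedged comparison you can run locally; the file name and pattern are placeholders, and the actual speed-up depends on the data:
time LC_ALL=en_US.UTF-8 grep -c "pattern" big.log    # locale-aware matching
time LC_ALL=C           grep -c "pattern" big.log    # plain byte matching, usually faster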
If you want to restore all your normal (original) locale settings for the session:
LC_ALL=
If LC_ALL does not work, try using LANG (and if that still does not work, try LANGUAGE):
LANG=C date +%A
Monday
Useful resources:
How to make high availability of web application? ***
To be completed.
You are configuring a new server. One of the steps is setting the permissions to the app directories. What steps will you take and what mistakes to avoid?
1) Main requirements - remember about this
- which users have access to the app filesystem
- permissions for web servers, e.g. Apache and app servers e.g. uwsgi
- permissions for specific directories like uploads, cache, and the main app directory, e.g. /var/www/app01/html
- correct umask value for users, and suid/sgid only for specific situations
- permissions for all future files and directories
- permissions for cron jobs and scripts
2) Application directories
/var/www contains a directory for each website (isolation of the apps), e.g. /var/www/app01, /var/www/app02:
mkdir /var/www/{app01,app02}
3) Application owner and group
Each application has a designated owner (e.g. u01-prod, u02-prod) and group (e.g. g01-prod, g02-prod) which are set as the owner of all files and directories in the website's directory:
chown -R u01-prod:g01-prod /var/www/app01
chown -R u02-prod:g02-prod /var/www/app02
4) Developers owner and group
All of the users that maintain the website have their own groups and are attached to the application group:
id alice
uid=2000(alice) gid=4000(alice) groups=8000(g01-prod)
id bob
uid=2001(bob) gid=4001(bob) groups=8000(g01-prod),8001(g02-prod)
So user alice has standard privileges for /var/www/app01, and user bob has standard privileges for /var/www/app01 and /var/www/app02.
5) Web server owner and group
Any files or directories that need to be written by the web server have their own owner. If the web server is Apache, the default owner/group is apache:apache or www-data:www-data, and for Nginx it will be nginx:nginx. Don't change these settings.
If the application works with app servers like uwsgi or php-fpm, you should set the appropriate user and group (e.g. for app01 it will be u01-prod:g01-prod) in the specific config files.
6) Permissions
Set properly permissions with Access Control Lists:
# For web server
setfacl -Rdm "g:apache:rwx" /var/www/app01
setfacl -Rm "g:apache:rwx" /var/www/app01
# For developers
setfacl -Rdm "g:g01-prod:rwx" /var/www/app01
setfacl -Rm "g:g01-prod:rwx" /var/www/app01
If you use SELinux remember about security context:
chcon -R system_u:object_r:httpd_sys_content_t /var/www/app01
7) Security mistakes
- root owner for files and directories
- root never executes any files in website directory, and shouldn't be creating files in there
- too-wide permissions like 777, so some critical files may be world-writable and world-readable
- avoid creating maintenance scripts or other critical files with suid root
If you allow your site to modify the files which form the code running your site, you make it much easier for someone to take over your server.
A file upload tool allows users to upload a file with any name and any contents. This allows a user to upload a mail relay PHP script to your site, which they can place wherever they want to turn your server into a machine to forward unsolicited commercial email. This script could also be used to read every email address out of your database, or other personal information.
If the malicious user can upload a file with any name but not control the contents, then they could easily upload a file which overwrites your index.php
(or another critical file) and breaks your site.
Useful resources:
What steps will be taken by init when you run telinit 1
from run level 3? What will be the final result of this? If you use telinit 6
instead of reboot
command your server will be restarted? ***
To be completed.
Useful resources:
I have forgotten the root password! What do I do in BSD? What is the purpose of booting into single user mode?
Restart the system and type boot -s at the Boot: prompt to enter single-user mode.
At the question about the shell to use, hit Enter, which will display a # prompt.
Enter mount -urw / to remount the root file system read/write, then run mount -a to remount all the file systems.
Run passwd root to change the root password, then run exit to continue booting.
Single user mode should basically let you log in with root access & change just about anything. For example, you might use single-user mode when you are restoring a damaged master database or a system database, or when you are changing server configuration options (e.g. password recovery).
Useful resources:
How could you modify a text file without invoking a text editor?
For example:
# cat >filename ... - overwrite file
# cat >>filename ... - append to file
cat > filename << __EOF__
data
__EOF__
How to change the kernel parameters? What kernel options might you need to tune? ***
To set kernel parameters on a Unix-like system, first edit the file /etc/sysctl.conf; after making the changes, save the file and run the command sysctl -p. This applies the changes immediately, without rebooting the machine, while the entry in the file makes them persistent across reboots.
Useful resources:
Explain the /proc
filesystem.
/proc
is a virtual file system that provides detailed information about kernel, hardware and running processes.
Since /proc
contains virtual files, it is called virtual file system. These virtual files have unique qualities. Most of them are listed as zero bytes in size.
Virtual files such as /proc/interrupts, /proc/meminfo, /proc/mounts and /proc/partitions provide an up-to-the-moment glimpse of the system's hardware. Others, like the /proc/filesystems file and the /proc/sys/ directory, provide system configuration information and interfaces.
Useful resources:
Describe your data backup process. How often should you test your backups? ***
To be completed.
Explain three types of journaling in ext3/ext4.
There are three types of journaling available in ext3/ext4 file systems:
- Journal - metadata and content are saved in the journal
- Ordered - only metadata is saved in the journal. Metadata are journaled only after writing the content to disk. This is the default
- Writeback - only metadata is saved in the journal. Metadata might be journaled either before or after the content is written to the disk
What is an inode? How to find file's inode number and how can you use it?
An inode is a data structure on a filesystem on Linux and other Unix-like operating systems that stores all the information about a file except its name and its actual data. A data structure is a way of storing data so that it can be used efficiently.
A Unix file is stored in two different parts of the disk - the data blocks and the inodes. I won't get into superblocks and other esoteric information. The data blocks contain the "contents" of the file. The information about the file is stored elsewhere - in the inode.
A file's inode number can easily be found by using the ls
command, which by default lists the objects (i.e. files, links and directories) in the current directory (i.e. the directory in which the user is currently working), with its -i
option. Thus, for example, the following will show the name of each object in the current directory together with its inode number:
ls -i
df's
-i
option instructs it to supply information about inodes on each filesystem rather than about available space. Specifically, it tells df to return for each mounted filesystem the total number of inodes, the number of free inodes, the number of used inodes and the percentage of inodes used. This option can be used together with the -h
option as follows to make the output easier to read:
df -hi
Finding files by inodes
If you know the inode, you can find it using the find command:
find . -inum 435304 -print
Deleting files with strange names
Sometimes files are created with strange characters in the filename. The Unix file system will allow any character as part of a filename except for a null (ASCII 000) or a "/". Every other character is allowed.
Users can create files with characters that make it difficult to see the directory or file. They can create the directory ".. " with a space at the end, or create a file that has a backspace in the name, using:
touch `printf "aa\bb"`
Now see what happens when you use the ls
command:
ls
aa?b
ls | grep 'a'
ab
Note that when ls
sends the result to a terminal, it places a "?" in the filename to show an unprintable character.
You can get rid of this file by using rm -i *
and it will prompt you before it deletes each file. But you can also use find
to remove the file, once you know the inode number.
ls -i
435304 aa?b
find . -inum 435304 -delete
Useful resources:
ls -l
shows file attributes as question marks. What this means and what steps will you take to remove unused "zombie" files?
This problem may be more difficult to solve because several steps may be required - sometimes you get test/file: Permission denied, test/file: No such file or directory or test/file: Input/output error.
That happens when the user can't do a stat()
on the files (which requires execute permissions), but can read the directory entries (which requires read access on the directory). So you get a list of files in the directory, but can't get any information on the files because they can't be read. If you have a directory which has read permission but not execute, you'll see this.
Some processes, like rsync, generate temporary files that get created and dropped quickly, which will cause errors if you try to run simple file management commands such as rm or mv on them.
Example of output:
?????????? ? ? ? ? ? sess_kee6fu9ag7tiph2jae
- change permissions: chmod 0777 sess_kee6fu9ag7tiph2jae and try to remove it
- change the owner: chown root:root sess_kee6fu9ag7tiph2jae and try to remove it
- change permissions and owner for the directory: chmod -R 0777 dir/ && chown -R root:root dir/ and try to remove it
- recreate the file: touch sess_kee6fu9ag7tiph2jae and try to remove it
- watch out for other running processes on the server, for example rsync; sometimes you can see this as a transient error when an NFS server is heavily overloaded
- find the file's inode with ls -i and try to remove it: find . -inum <inode_num> -delete
- remount (if possible) your filesystem
- boot the system into single-user mode and repair your filesystem with fsck
Useful resources:
To LVM or not to LVM. What benefits does it provide?
- LVM makes it quite easy to move file systems around
- you can extend a volume group onto a new physical volume
- move any number of logical volumes off an old physical one
- remove that volume from the volume group without needing to unmount any partitions
- you can also make snapshots of logical volumes for making backups
- LVM has built in mirroring support so you can have a logical volume mirrored across multiple physical volumes
- LVM even supports TRIM
Useful resources:
How to increase the size of LVM partition?
Use the lvextend command to resize an LVM partition.
- extending the size by 500MB:
lvextend -L +500M /dev/vgroup/lvolume
- extending all available free space:
lvextend -l +100%FREE /dev/vgroup/lvolume
and resize2fs or xfs_growfs to resize the filesystem:
- for ext filesystems:
resize2fs /dev/vgroup/lvolume
- for xfs filesystem:
xfs_growfs mountpoint_for_/dev/vgroup/lvolume
Useful resources:
What is a zombie/defunct process?
A zombie is a process that has completed execution (via the exit system call) but still has an entry in the process table: it is a process in the "terminated" state.
Processes marked defunct are dead processes (so-called "zombies") that remain because their parent has not destroyed them properly. These processes will be destroyed by init if the parent process exits.
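A minimal sketch for spotting them, relying only on standard ps output (a Z in the STAT column marks a zombie):
# list defunct processes together with their parent PID
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'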
Useful resources:
What is the proper way to upgrade/update a system in production? Do you automate these processes? Do you set downtime for them? Write recommendations. ***
To be completed.
Your friend during configuration of the MySQL server asked you: Should I run sudo mysql_secure_installation
after installing mysql? What do you think about it?
It would be better to run this command, as it provides many security options, for example:
- You can set a password for root accounts
- You can remove root accounts that are accessible from outside the local host
- You can remove anonymous-user accounts
- You can remove the test database, which by default can be accessed by anonymous users
Useful resources:
Present and explain the good ways of using the kill
command.
Speaking of killing processes: never use kill -9 (SIGKILL) unless absolutely necessary. This kill can cause problems because of its brute force.
Always try to use the following simple procedure:
- first, send the SIGTERM (kill -15) signal, which tells the process to shut down and is generally accepted as the signal to use when shutting down cleanly (but remember that this signal can be ignored)
- next, try to send the SIGHUP (kill -1) signal, which is commonly used to tell a process to shut down and restart; this signal can also be caught and ignored by a process
The far majority of the time, this is all you need - and is much cleaner.
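A hedged sketch of that escalation in shell form; the PID is a made-up example and the sleep intervals are arbitrary:
PID=12345                                        # hypothetical PID of the misbehaving process
kill -15 "$PID"                                  # polite request: SIGTERM
sleep 5
kill -0 "$PID" 2>/dev/null && kill -1 "$PID"     # still alive? try SIGHUP
sleep 5
kill -0 "$PID" 2>/dev/null && kill -9 "$PID"     # last resort: SIGKILL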
Useful resources:
What is the strace command and how should it be used? Explain an example of attaching to an already running process.
strace
is a powerful command line tool for debugging and troubleshooting programs in Unix-like operating systems such as Linux. It captures and records all system calls made by a process and the signals received by the process.
Strace Overview
strace
can be seen as a light weight debugger. It allows a programmer/user to quickly find out how a program is interacting with the OS. It does this by monitoring system calls and signals.
Uses
Good for when you don't have source code or don't want to be bothered to really go through it. Also, useful for your own code if you don't feel like opening up GDB, but are just interested in understanding external interaction.
Example of attach to the process
strace -p <PID>
- to attach a process to strace.
strace -e trace=read,write -p <PID>
- by this you can also trace a process/program for an event, like read and write (in this example). So here it will print all such events that include read and write system calls by the process.
Other such examples
- -e trace=network - trace all the network related system calls
- -e trace=signal - trace all signal related system calls
- -e trace=ipc - trace all IPC related system calls
- -e trace=desc - trace all file descriptor related system calls
- -e trace=memory - trace all memory mapping related system calls
Useful resources:
When would you use access control lists instead of or in conjunction with the chmod
command? ***
To be completed.
Which algorithms are supported in /etc/shadow
file?
Typical current algorithms are:
- MD5
- SHA-1 (also called SHA)
Both should no longer be used for cryptographic/security purposes.
- SHA-256
- SHA-512
- SHA-3 (KECCAK was announced the winner in the competition for a new federal approved hash algorithm in October 2012)
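The algorithm actually in use can be read from the $id$ prefix of the password field; the mapping below covers the common crypt identifiers, and reading /etc/shadow requires root (the account must have a password set for the field to be meaningful):
# $1$ = MD5, $5$ = SHA-256, $6$ = SHA-512 (plus e.g. $2b$ bcrypt or $y$ yescrypt on some systems)
sudo grep '^root:' /etc/shadow | cut -d: -f2 | cut -d'$' -f2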
Useful resources:
What is the use of ulimit in Unix-like systems?
Most Unix-like operating systems, including Linux and BSD, provide ways to limit and control the usage of system resources such as threads, files, and network connections on a per-process and per-user basis. These "ulimits" prevent single users from using too many system resources.
What are soft limits and hard limits?
The hard limit is the maximum allowed to a user, set by the superuser (root); these values are defined in the file /etc/security/limits.conf. The soft limit is the value actually enforced for the current session: the user can increase it on their own when more resources are needed, but cannot set it higher than the hard limit.
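For example (the user name and numbers in the comments are illustrative values to adapt):
ulimit -Sn     # current soft limit on open file descriptors
ulimit -Hn     # current hard limit on open file descriptors
# persistent per-user limits go in /etc/security/limits.conf, e.g.:
#   alice  soft  nofile  4096
#   alice  hard  nofile  8192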
While configuring HAProxy to work with Redis, you get General socket error (Permission denied) in the log. SELinux is enabled. Explain basic SELinux troubleshooting in the CLI. ***
Useful resources:
You have configured RSA key login but your server shows Server refused our key as expected. Where will you look for the cause of the problem?
Server side
Setting LogLevel VERBOSE in the file /etc/ssh/sshd_config is probably what you need, although there are higher levels.
SSH auth failures are logged in /var/log/auth.log, /var/log/secure or /var/log/audit/audit.log.
The following should give you only ssh related log lines (for example):
grep 'sshd' /var/log/auth.log
Next, the most simple command to list all failed SSH logins is the one shown below:
grep "Failed password" /var/log/auth.log
also useful is:
grep "Failed\|Failure" /var/log/auth.log
On newer Linux distributions you can query the runtime log file maintained by Systemd daemon via journalctl
command (ssh.service
or sshd.service
). For example:
journalctl _SYSTEMD_UNIT=ssh.service | egrep "Failed|Failure"
Client side
Also, you should run the SSH client with -v/--verbose for the first level of verbosity. You can then enable additional verbosity (levels 2 and 3) for even more debugging messages, e.g. with -vv or -vvv.
Useful resources:
Why do most distros use ext4, as opposed to XFS or other filesystems? Why are there so many of them? ***
To be completed.
A project manager needs a new SQL Server. What do you ask her/his? ***
I want the DBA to ask questions like:
- How big will the database be? (whether we can add the database to an existing server)
- How critical is the database? (about clustering, disaster recovery, high availability)
Create a file with 100 lines with random values.
For example:
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 100 > /path/to/file
How to run script as another user without password?
For example (with visudo
command):
user1 ALL=(user2) NOPASSWD: /opt/scripts/bin/generate.sh
The command paths must be absolute! Then call sudo -u user2 /opt/scripts/bin/generate.sh
from a user1 shell.
How to check if running as root in a bash script? What should you watch out for?
In a bash script, you have several ways to check if the running user is root.
As a warning, do not check if a user is root by using the root username. Nothing guarantees that the user with ID 0 is called root. It's a very strong convention that is broadly followed but anybody could rename the superuser another name.
I think the best way when using bash is to use $EUID
because $UID
could be changed and not reflect the real user running the script.
if (( $EUID != 0 )); then
echo "Please run as root"
exit
fi
Can you give a particular example when is indicated to use nobody
account? Tell me the differences running httpd service as a nobody
and www-data
accounts.
In many Unix variants, nobody
is the conventional name of a user account which owns no files, is in no privileged groups, and has no abilities except those which every other user has.
It is common to run daemons as nobody
, especially servers, in order to limit the damage that could be done by a malicious user who gained control of them.
However, the usefulness of this technique is reduced if more than one daemon is run like this, because then gaining control of one daemon would provide control of them all. The reason is that nobody
-owned processes have the ability to send signals to each other and even debug each other, allowing them to read or even modify each other's memory.
When should I use nobody
account?
When permissions aren't required for a program's operations. This is most notable when there isn't ever going to be any disk activity.
A real world example of this is memcached (a key-value in-memory cache/database/thing), sitting on my computer and my server running under the nobody
account. Why? Because it just doesn't need any permissions and to give it an account that did have write access to files would just be a needless risk.
A good example are also web servers. Imagine if Apache ran as root and someone found a way to send custom commands to the console through Apache would have access to your entire system.
nobody
account also is used as a restricted shell for giving users filesystem access without an actual shell like bash. This should prevent them from being able to execute things.
nobody
or www-data
for httpd (Apache)
Upon starting Apache needs root access, but it quickly drops this and assumes the identity of a non privileged user. This user can either be nobody
or apache
, or www-data
.
Several applications use the user nobody
as a default. For example you probably never really want say the Apache service to be overwriting files that belong to bind. Having a per-service account tends to be a very good idea.
Getting Apache to run as nobody:nobody
is pretty easy, just update the user and group settings. But as mentioned above, that particular user/group is not really recommended: it is entirely possible that you may be tempted to add a service to the system at some time in the future that also runs as nobody, and you will forget that you have given write access on the filesystem to the user nobody.
If somehow nobody were to become compromised, it could potentially have more impact than an application-isolated user such as www-data. Of course a lot of this will depend on the file and group permissions: nobody uses the permissions of "others", while an application-specific user could be configured to allow file read access while other users are still denied.
Useful resources:
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1
redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), such that both are written to stdout; that combined stream is then also written to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a
as:
program [arguments...] 2>&1 | tee -a outfile
What is the preferred bash shebang and why? What is the difference between executing a file using ./script
or bash script
?
You should use #!/usr/bin/env bash
for portability: different *nixes put bash in different places, and using /usr/bin/env
is a workaround to run the first bash found on the PATH
.
Running ./script
does exactly that, and requires execute permission on the file, but is agnostic to what type of a program it is. It might be a bash script, an sh script, or a Perl, Python, awk, or expect script, or an actual binary executable. Running bash script
would force it to be interpreted by bash, regardless of its shebang line, and does not require the execute bit to be set.
Useful resources:
You must run a command that will take a very long time to complete. How do you prevent this process from being killed after the SSH session drops?
Use nohup
to make your process ignore the hangup signal:
nohup long-running-process &
exit
or you want to be using GNU Screen:
screen -d -m long-running-process
exit
Useful resources:
What is the main purpose of the intermediate certification authorities?
To find out the main purpose of an intermediate CA, you should first learn about Root CAs, Intermediate CAs, and the SSL Certificate Chain Trust.
Root CAs are primary CAs which typically don't directly sign end entity/server certificates. They issue Root certificates which are usually pre-installed within all browsers, mobiles, and applications. The private key of these certificates is used to sign other subsequent certificates called intermediate certificates. Root CAs are usually kept "offline" and in a highly secure environment with stringently limited access.
Intermediates CAs are CAs that subordinate to the Root CA by one or more levels, being trusted by these to sign certificates on their behalf. The purpose of creating and using Intermediate CAs is primarily for security because if the intermediate private key is compromised, then the Root CA can revoke the intermediate certificate and create a new one with a new cryptographic key pair.
SSL Certificate Chain Trust is the list of SSL certificates, from the root certificate to the end entity/server certificate. For an SSL Certificate to be trusted, it must be issued by a trusted CAs which is included in the trusted CA list of the connecting device (browser, mobile, and application). Therefore, the connecting device will test the trustworthiness of each SSL Certificate in the Chain Trust until it matches the one issued by a trusted CA.
The Root-Intermediate CA structure is created by each major CA to protect against the disastrous effects of a root key compromise. If a root key is compromised, it would render the root and all subordinated certificates untrustworthy. For this reason, creating an Intermediate CA is a best practice to ensure a rigorous protection of the primary root key.
Useful resources:
How to reload PostgreSQL after configuration changes?
Solution 1:
systemctl reload postgresql
Solution 2:
su - postgres
/usr/bin/pg_ctl reload
Solution 3:
SELECT pg_reload_conf();
You have added several aliases to .profile
. How to reload shell without exit?
The best way is exec $SHELL -l, because exec replaces the current process with a new one. Another good (but different) solution is . ~/.profile, which re-reads the file in the current shell.
Useful resources:
How to exit without saving shell history?
kill -9 $$
or
unset HISTFILE && exit
Useful resources:
What is this UID 0 toor account? Have I been compromised?
toor is an alternative superuser account, where toor is root spelled backwards. It is intended to be used with a non-standard shell so the default shell for root does not need to change.
This is important as shells which are not part of the base distribution, but are instead installed from ports or packages, are installed in /usr/local/bin which, by default, resides on a different file system. If root's shell is located in /usr/local/bin and the file system containing /usr/local/bin is not mounted, root will not be able to log in to fix a problem and will have to reboot into single-user mode in order to enter the path to a shell.
Some people use toor for day-to-day root tasks with a non-standard shell, leaving root, with a standard shell, for single-user mode or emergencies. By default, a user cannot log in using toor as it does not have a password, so log in as root and set a password for toor before using it to login.
Useful resources:
Is there an easy way to search inside 1000s of files in a complex directory structure to find files which contain a specific string?
For example use fgrep
:
fgrep -r "string" .
or:
grep -insr "pattern" *
-i
ignore case distinctions in both the PATTERN and the input files-n
prefix each line of output with the 1-based line number within its input file-s
suppress error messages about nonexistent or unreadable files.-r
read all files under each directory, recursively.
Useful resources:
How to find out the dynamic libraries executables loads when run?
You can do this with ldd
command:
ldd /bin/ls
You have the task of sync the testing and production environments. What steps will you take?
It's easy to get dragged down into bikeshedding about cloning environments and miss the real point:
- only production is production, and every time you deploy there you are testing a unique combination of deploy code + software + environment.
Every once in a while a good solution is regular cloning of the production servers to create testing servers. You can create instances with an exact copy of your production environment under a dev/test with snapshots, for example:
- generate a snapshot of production
- copy the snapshot to staging (or other)
- create a new disk using this snapshot
Sure, you can spin up clones of various system components or entire systems, and capture real traffic to replay offline (the gold standard of systems testing). But many systems are too big, complex, and cost-prohibitive to clone.
Before environment synchronization a good way is keeping track of every change that you make to the testing environment and provide a way for propagating this to the production environment, so that you do not skip any step and do it as smoothly as possible.
A structure comparison tool, or deploy scripts that update the testing environment from the production environment, is also a good solution.
Presync tasks
First of all, inform developers and clients not to make changes on the test environment (if possible, disable test domains that target this environment or set static pages with information about the synchronization).
It is also important to make backup/snapshots of both environments.
Database servers
- sync/update system version (e.g. packages)
- create dump file from database on production db server
- import dump file on testing db server
- if necessary, syncs login permissions, roles, database permissions, open connections to the database and other
Web/App servers
- sync/update system version (e.g. packages)
- if necessary, updated kernel parameters, firewall rules and other
- sync/update configuration files of all running/important services
- sync/update user accounts (e.g. permissions) and their home directories
- deploy project from git/svn repository
- sync/update important directories existing in project, e.g. static, asset and other
- sync/update permissions for project directory
- remove/update all webhooks
- update cron jobs
Others tasks
- updated configurations of load balancers for testing domains and specific urls
- updated configurations of queues, session and storage instances
Useful resources:
Network Questions (24)
Configure a virtual interface on your workstation. ***
To be completed.
According to an HTTP monitor, a website is down. You're able to telnet to the port, so how do you resolve it?
If you can telnet to the port, this means that the service listening on the port is running and you can connect to it (it's not a networking problem). It is good to check this against the IP address the domain resolves to, and also to test the connection using the domain name itself.
First of all, check if your site is online from another location. That lets you know whether the site is down everywhere, or only your network is unable to view it. It is also a good idea to check what the web browser returns.
If only IP connection working
- you can use whois to see what DNS servers serve up the hostname to the site:
whois www.example.com
- you can use tools like dig or host to test DNS, to see if the host name is resolving: host www.example.org dns.example.org
- you can also check global public dns servers:
host www.example.com 9.9.9.9
If the domain is not resolved, it's probably a problem with the DNS servers.
If domain resolved properly
- investigate the log files and resolve the issue according to what they show; it's the best way to see what's wrong
- check the HTTP status code; usually it will be a 5xx response. Maybe the server is overloaded because clients are making lots of connections to the website or a specific URL? Maybe your caching rules are not working properly?
- check the web/proxy server configuration (e.g. nginx -t -c </path/to/nginx.conf>); maybe another sysadmin has made some changes to the domain configuration?
- maybe something on the server has crashed? Maybe it has run out of disk space or memory?
- maybe it's a programming error on the website?
Load balancing can dramatically impact server performance. Discuss several load balancing mechanisms. ***
To be completed.
List examples of network troubleshooting tools that can degrade during DNS issues. ***
To be completed.
Explain difference between HTTP 1.1 and HTTP 2.0.
HTTP/2 supports queries multiplexing, headers compression, priority and more intelligent packet streaming management. This results in reduced latency and accelerates content download on modern web pages.
Key differences with HTTP/1.1:
- it is binary, instead of textual
- fully multiplexed, instead of ordered and blocking
- can therefore use one connection for parallelism
- uses header compression to reduce overhead
- allows servers to "push" responses proactively into client caches
Useful resources:
Dev team reports an error: POST http://ws.int/api/v1/Submit/ resulted in a 413 Request Entity Too Large
. What's wrong?
Modify NGINX configuration file for domain
Set correct client_max_body_size
variable value:
client_max_body_size 20M;
Restart Nginx to apply the changes.
Modify php.ini file for upload limits
It's not needed on all configurations, but you may also have to modify the PHP upload settings to ensure that nothing is going out of limit in the PHP configuration.
Now find following directives one by one:
upload_max_filesize
post_max_size
and increase its limit to 20M, by default they are 8M and 2M:
upload_max_filesize = 20M
post_max_size = 20M
Finally save it and restart PHP.
Useful resources:
What is handshake mechanism and why do we need 3 way handshake?
Handshaking begins when one device sends a message to another device indicating that it wants to establish a communications channel. The two devices then send several messages back and forth that enable them to agree on a communications protocol.
A three-way handshake is a method used in a TCP/IP network to create a connection between a local host/client and server. It is a three-step method that requires both the client and server to exchange SYN
and ACK
(SYN
, SYN-ACK
, ACK
) packets before actual data communication begins.
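A hedged way to watch the exchange on the wire; the port, host and interface choice are assumptions for the example:
# show only segments with SYN or ACK flags set towards/from example.com on port 443
tcpdump -nn -i any "tcp port 443 and host example.com and (tcp[tcpflags] & (tcp-syn|tcp-ack) != 0)"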
Useful resources:
Why is UDP faster than TCP?
UDP is faster than TCP, and the simple reason is its lack of an acknowledgement (ACK) mechanism, which permits a continuous packet stream; TCP, by contrast, acknowledges sets of packets, calculated by using the TCP window size and round-trip time (RTT).
Useful resources:
Which, in your opinion, are the 5 most important OpenSSH parameters that improve the security? ***
To be completed.
Useful resources:
What is NAT? What is it used for?
It enables private IP networks that use unregistered IP addresses to connect to the Internet. NAT operates on a router, usually connecting two networks together, and translates the private (not globally unique) addresses in the internal network into legal addresses, before packets are forwarded to another network.
Workstations or other computers requiring special access outside the network can be assigned specific external IPs using NAT, allowing them to communicate with computers and applications that require a unique public IP address. NAT is also a very important aspect of firewall security.
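A minimal sketch of source NAT on a Linux router with iptables; the outbound interface name and the use of MASQUERADE are assumptions for the example:
sysctl -w net.ipv4.ip_forward=1                         # allow the box to forward packets
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE    # rewrite source addresses leaving eth0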
Useful resources:
What is the purpose of Spanning Tree?
This protocol operates at layer 2 of the OSI model with the purpose of preventing loops on the network. Without STP, a redundant switch deployment would create broadcast storms that cripple even the most robust networks. There are several iterations based on the original IEEE 802.1D standard; each operates slightly different than the others while largely accomplishing the same loop-free goal.
How to check which ports are listening on my Linux Server?
Use one of the following:
- lsof -i
- ss -l
- netstat -atn - for TCP
- netstat -aun - for UDP
- netstat -tulapn
What does Host key verification failed mean when you connect to a remote host? Do you accept it automatically?
Host key verification failed
means that the host key of the remote host has changed. This can easily happen when connecting to a computer whose host keys in /etc/ssh
have changed if that computer was upgraded without copying its old host keys. The host keys here are proof when you reconnect to a remote computer with ssh that you are talking to the same computer you connected to the first time you accessed it.
Whenever you connect to a server via SSH, that server's public key is stored in your home directory (or possibly in your local account settings if using a Mac or Windows desktop) file called known_hosts. When you reconnect to the same server, the SSH connection will verify the current public key matches the one you have saved in your known_hosts file. If the server's key has changed since the last time you connected to it, you will receive the above error.
Don't delete the entire known_hosts file as recommended by some people, this totally voids the point of the warning. It's a security feature to warn you that a man in the middle attack may have happened.
Before accepting the new host key, contact your/other system administrator for verification.
Useful resources:
How to send an HTTP request using telnet
?
For example:
telnet example.com 80
Trying 192.168.252.10...
Connected to example.com.
Escape character is '^]'.
GET /questions HTTP/1.0
Host: example.com
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
...
How do you kill a program using e.g. port 80 in Linux?
To list any process listening to the port 80:
# with lsof
lsof -i:80
# with fuser
fuser 80/tcp
To kill any process listening to the port 80:
kill $(lsof -t -i:80)
or more violently:
kill -9 $(lsof -t -i:80)
or with fuser
command:
fuser -k 80/tcp
Useful resources:
You get curl: (56) TCP connection reset by peer
. What steps will you take to solve this problem?
- check if the URL is correct; maybe you should add www or set the Host: header correctly? Also check the scheme (http or https)
- check that the domain resolves to the correct IP address
- enable debug tracing with --trace-ascii curl.dump; Recv failure is a really generic error, so it is hard to get more information without it (see the example after this list)
- use an external proxy with --proxy to debug the connection from an external IP
- use a network sniffer (e.g. tcpdump) to debug the connection in the lower TCP/IP layers
- check firewall rules in the production environment and at the exit point of your network; also check your NAT rules
- check the MTU size of packets traveling over your network
- check the SSL version with curl's ssl/tls parameters if you are connecting over https
- it may be a problem on the client side, e.g. netfilter dropping or rate-limiting connections from your IP address to the domain
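For example, the tracing and sniffing steps might look like this (the hostname is a placeholder):
# verbose output plus a full ASCII dump of what was sent and received
curl -v --trace-ascii curl.dump https://example.com/
# watch the TCP conversation itself, e.g. to see which side sends the RST
tcpdump -nn -i any host example.com and port 443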
Useful resources:
How to allow traffic to/from specific IP with iptables?
For example:
/sbin/iptables -A INPUT -p tcp -s XXX.XXX.XXX.XXX -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp -d XXX.XXX.XXX.XXX -j ACCEPT
How to block abusive IP addresses with pf
in OpenBSD?
The best way to do this is to define a table and create a rule to block the hosts, in pf.conf
:
table <badhosts> persist
block on fxp0 from <badhosts> to any
And then dynamically add/delete IP addresses from it:
pfctl -t badhosts -T add 1.2.3.4
pfctl -t badhosts -T delete 1.2.3.4
When does the web server like Apache or Nginx write info to log file? Before or after serving the request?
Both servers provide very comprehensive and flexible logging capabilities, covering everything that happens on your server: from the initial request, through the URL mapping process, to the final resolution of the connection, including any errors that may have occurred in the process.
Apache
The Apache server access log records all requests processed by the server (after the request has been completed).
Nginx
NGINX writes information about client requests in the access log right after the request is processed.
Useful resources:
Analyse web server log and show only 5xx
http codes. What external tools do you use?
tail -n 100 -f /path/to/logfile | grep "HTTP/[1-2].[0-1]\" [5]"
Examples of http/https log management tools:
- goaccess - is an open source real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser
- graylog - is a free and open-source log management platform that supports in-depth log collection and analysis
Useful resources:
A developer keeps a private key on the server to deploy an app through SSH. Why is this incorrect behavior, and what is a better (but not ideal) solution in such situations?
You have the private key for your personal account. The server needs your public key so that it can verify that your private key for the account you are trying to use is authorized.
The whole point with private keys is that they are private, meaning only you have your private key. If someone takes over your private key, they will be able to impersonate you any time they want.
A better solution is to use SSH agent forwarding. In essence, you need to create a ~/.ssh/config
file, if it doesn't exist. Then, add the hosts (either domain name or IP address) to the file and set ForwardAgent yes
. Example:
Host git.example.com
User john
PreferredAuthentications publickey
IdentityFile ~/.ssh/id_rsa.git.example.com
ForwardAgent yes
Your remote server must allow SSH agent forwarding on inbound connections and your local ssh-agent
must be running.
Forwarding an ssh agent carries its own security risk. If someone on the remote machine can gain access to your forwarded ssh agent connection, they can still make use of your keys. However, this is better than storing keys on remote machines: the attacker can only use the ssh agent connection, not the key itself. Thus, only while you're logged into the remote machine can they do anything. If you store the key on the remote machine, they can make a copy of it and use it whenever they want.
If you use SSH keys, remember to protect them with passphrases; this is strongly recommended to reduce the risk of keys accidentally leaking.
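A minimal sketch of how this is used in practice, assuming the host and user from the example above:
# on the workstation: start an agent (if needed) and load the key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa.git.example.com
# connect with agent forwarding (the -A flag is equivalent to ForwardAgent yes)
ssh -A john@git.example.com
# on the remote host: the forwarded agent lists the key, no private key file is stored there
ssh-add -l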
Useful resources:
What is the difference between CORS and CSPs?
CORS allows the Same Origin Policy to be relaxed for a domain.
e.g. normally if the user logs into both example.com
and example.org
, the Same Origin Policy prevents example.com
from making an AJAX request to example.org/current_user/full_user_details
and gaining access to the response.
This is the default policy of the web and prevents the user's data from being leaked when logged into multiple sites at the same time.
Now with CORS, example.org
could set a policy to say it will allow the origin https://example.com
to read responses made by AJAX. This would be done if both example.com
and example.org
are run by the same company and data sharing between the origins is to be allowed in the user's browser. It only affects the client side of things, not the server side.
CSPs on the other hand set a policy of what content can run on the current site. For example, if JavaScript can be executed inline, or which domains .js
files can be loaded from. This can be beneficial to act as another line of defense against XSS attacks, where the attacker will try and inject script into the HTML page. Normally output would be encoded, however say the developer had forgotten only on one output field. Because the policy is preventing in-line script from executing, the attack is thwarted.
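A quick way to see both policies in action is to inspect the response headers; the URLs below are just the hypothetical ones from the explanation:
# does example.org allow cross-origin reads from https://example.com? (CORS)
curl -sI -H "Origin: https://example.com" https://example.org/current_user/full_user_details | grep -i '^access-control-allow-origin'
# what is the page itself allowed to load and execute? (CSP)
curl -sI https://example.com/ | grep -i '^content-security-policy'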
Useful resources:
Explain four types of responses from firewall when scanning with nmap
.
There might be four types of responses:
- Open port - only a few ports are open when a firewall is in place
- Closed port - most ports are reported closed because of the firewall
- Filtered - nmap is not sure whether the port is open or not, because the probes are being dropped
- Unfiltered - nmap can access the port but is still unsure whether it is open or closed
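As a rough sketch (the target name is a placeholder, and a SYN scan needs root):
# TCP SYN scan of a few ports, skipping host discovery
nmap -sS -Pn -p 22,80,443 target.example.com
# --reason shows why nmap classified each port as open/closed/filtered/unfiltered
nmap -sS -Pn --reason -p 22,80,443 target.example.com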
Useful resources:
What does a tcpdump
do? How to capture only incoming traffic to your interface?
tcpdump
is one of the most powerful and widely used command-line packet sniffers (packet analyzers). It is used to capture or filter TCP/IP packets received or transmitted over a network on a specific interface.
tcpdump
puts your network card into promiscuous mode, which basically tells it to accept every packet it receives. This allows the user to see all traffic passing over that network segment. Like Wireshark, it uses libpcap to capture packets.
If you want to view only packets that arrive on your interface, use:
- -Q in - for the Linux tcpdump version
- -D in - for the BSD tcpdump version
Both parameters set the send/receive direction for which packets should be captured.
tcpdump -nei eth0 -Q in host 192.168.252.125 and port 8080
Devops Questions (7)
Which are the top DevOps tools? Which tools have you worked on?
The most popular DevOps tools are mentioned below:
- Git : Version Control System tool
- Jenkins : Continuous Integration tool
- Selenium : Continuous Testing tool
- Puppet, Chef, Ansible : Configuration Management and Deployment tools
- Nagios : Continuous Monitoring tool
- Docker : Containerization tool
How do all these tools work together?
A common workflow with these tools looks like this:
- Developers write the code and this source code is managed by Version Control System tools like Git
- Developers push this code to the Git repository and any changes made to the code are committed to this repository
- Jenkins pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven
- Configuration management tools like Puppet deploy and provision the testing environment, and then Jenkins releases this code to the test environment, where testing is done using tools like Selenium
- Once the code is tested, Jenkins sends it for deployment to the production server (even the production server is provisioned and maintained by tools like Puppet)
- After deployment it is continuously monitored by tools like Nagios
- Docker containers provide a disposable environment in which the build's features can be tested
What are playbooks in Ansible?
Playbooks are Ansible's configuration, deployment, and orchestration language.
They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Playbooks are designed to be human-readable and are developed in a basic text language.
At a basic level, playbooks can be used to manage configurations of and deployments to remote machines.
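A minimal sketch of a playbook and a dry run, assuming an inventory file inventory.ini with a group called web (both names are made up):
cat > site.yml <<'EOF'
---
- hosts: web
  become: yes
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
EOF
# --check performs a dry run without changing the remote systems
ansible-playbook -i inventory.ini site.yml --check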
What is NRPE (Nagios Remote Plugin Executor) in Nagios?
The NRPE addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor "local" resources (like CPU load, memory usage, etc.) on remote machines.
Since these public resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines.
What is the difference between Active and Passive check in Nagios?
The major difference between Active and Passive checks is that Active checks are initiated and performed by Nagios, while passive checks are performed by external applications.
Passive checks are useful for monitoring services that are:
- asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis.
- located behind a firewall and cannot be checked actively from the monitoring host.
The main features of Actives checks are as follows:
- active checks are initiated by the Nagios process.
- active checks are run on a regularly scheduled basis.
How to git clone
including submodules?
For example:
# With -j8 - performance optimization
git clone --recurse-submodules -j8 git://github.com/foo/bar.git
# For already cloned repos or older Git versions
git clone git://github.com/foo/bar.git
cd bar
git submodule update --init --recursive
Mention what are the advantages of using Redis? What is redis-cli
?
- it provides very high speed (exceptionally fast compared to many alternatives)
- it supports server-side locking
- it has lots of client libraries
- it offers command-level atomic operations (and transactions)
- it supports rich data types like hashes, sets and bitmaps
redis-cli
is the Redis command line interface, a simple program that allows you to send commands to Redis, and read the replies sent by the server, directly from the terminal.
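For example (the remote address is a placeholder):
redis-cli ping                      # expect: PONG
redis-cli set greeting "hello"
redis-cli get greeting
redis-cli -h 10.0.0.5 -p 6379 info replication   # query a remote instance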
Useful resources:
Cyber Security Questions (4)
What is XSS, how will you mitigate it?
Cross-Site Scripting (XSS) is a JavaScript injection vulnerability in web applications. The easiest way to explain it is the case where a user enters a script into a client-side input field and that input gets processed without being validated. This leads to untrusted data being saved and executed on the client side.
Countermeasures against XSS include input validation, implementing a CSP (Content Security Policy), and others.
HIDS vs NIDS and which one is better and why?
HIDS is a host intrusion detection system and NIDS is a network intrusion detection system. Both systems work along similar lines; the difference is in their placement: HIDS is placed on each host, whereas NIDS is placed in the network. For an enterprise, NIDS is preferred, as HIDS is difficult to manage and it also consumes processing power on the host.
What is compliance?
Abiding by a set of standards set by a government, independent party or organisation, e.g. an industry which stores, processes or transmits payment-related information needs to comply with PCI DSS (Payment Card Industry Data Security Standard). Another compliance example is an organisation complying with its own policies.
What is a WAF and what are its types?
WAF stands for web application firewall. It is used to protect the application by filtering legitimate traffic from malicious traffic. WAF can be either a box type or cloud based.
:diamond_shape_with_a_dot_inside: Senior Sysadmin
System Questions (61)
Explain the current architecture you're responsible for and point out where it's scalable or fault-tolerant. ***
To be completed.
Tell me how code gets deployed in your current production. ***
To be completed.
What are the different types of kernels? Explain.
Monolithic Kernels
Earlier, in this type of kernel architecture, all the basic system services like process and memory management, interrupt handling, etc. were packaged into a single module in kernel space. This type of architecture led to some serious drawbacks like:
- the size of the kernel, which was huge
- poor maintainability, which means bug fixing or addition of new features resulted in recompilation of the whole kernel which could consume hours
In a modern day approach to monolithic architecture, the kernel consists of different modules which can be dynamically loaded and unloaded. This modular approach allows easy extension of OS's capabilities. With this approach, maintainability of kernel became very easy as only the concerned module needs to be loaded and unloaded every time there is a change or bug fix in a particular module.
Linux follows the monolithic modular approach.
Microkernels
This architecture majorly caters to the problem of ever growing size of kernel code which we could not control in the monolithic approach. This architecture allows some basic services like device driver management, protocol stack, file system etc to run in user space.
In this architecture, all the basic OS services which are made part of user space are made to run as servers which are used by other programs in the system through inter process communication (IPC).
Example: We have servers for device drivers, network protocol stacks, file systems, graphics, etc. Microkernel servers are essentially daemon programs like any others, except that the kernel grants some of them privileges to interact with parts of physical memory that are otherwise off limits to most programs.
Hybrid Kernels (Modular Kernels)
This is a combination of the above two: most Operating System services live in kernel space, so there is no message passing and no performance overhead of a microkernel, but also none of the reliability benefits of having those services in user space.
This is used by Microsoft's NT kernels, all the way up to the latest Windows version.
Useful resources:
A program returns an error about a missing library. How do you provide dynamically linked libraries?
Environment variable LD_LIBRARY_PATH
is a colon-separated set of directories where libraries should be searched for first, before the standard set of directories; this is useful when debugging a new library or using a nonstandard library for special purposes.
The best way to use LD_LIBRARY_PATH
is to set it on the command line or in a script immediately before executing the program. This way the new LD_LIBRARY_PATH
is isolated from the rest of your system.
Example of use:
LD_LIBRARY_PATH="/list/of/library/paths:/another/path" ./program
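To find out which library is actually missing before setting the variable (the library name below is hypothetical):
# list the shared libraries the binary needs and spot the unresolved one
ldd ./program | grep 'not found'
# check whether the dynamic linker cache already knows about the library
ldconfig -p | grep libfoo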
Useful resources:
Write the most important rules for using root privileges safely for novice administrators. ***
To be completed.
What is the advantage of synchronizing UID/GID across multiple systems?
There are several principle reasons why you want to co-ordinate the user/UID and group/GID management across your network.
The first is relatively obvious - it has to do with user and administrative convenience.
If each of your users are expected to have relatively uniform access to the systems throughout the network, then they'll expect the same username and password to work on each system that they are supposed to use. If they change their password they will expect that change to be global.
It also has to do with how user names and group names work in Unix and Linux. They are mapped into numeric forms (UIDs and GIDs respectively). All file ownership (inodes) and processes use these numerics for all access and identity determination throughout the kernel and drivers. These numeric values are reverse mapped back to their corresponding symbolic representations (the names) by the utilities that display or process that information.
It is also recommended that you adopt a policy that UID's are not re-used. When a user leaves your organization you "retire" their UID (disabling their access by *'ing out their passwd, removing them from the groups maps, setting their "shell" to some /bin/denied
binary and their home directory to a secured graveyard - I use /home/.graveyard
on my systems).
The reason for this may not be obvious. However, if you are maintaining archival backups for several years (or indefinitely) you'll want to avoid any ambiguities and confusion that might result from restoring one (long gone) user's files and finding them owned by one of your new users.
Useful resources:
What principles to follow for successful system performance tuning? ***
To be completed.
Useful resources:
Describe start-up configuration files and directory in BSD systems.
In BSD the primary start-up configuration file is /etc/defaults/rc.conf
. System startup scripts such as /etc/rc
and /etc/rc.d
just include this file.
If you want to add other programs to system startup you need to change /etc/rc.conf
file instead of /etc/defaults/rc.conf
.
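As a small sketch, enabling a service on FreeBSD typically means adding an override to /etc/rc.conf and starting it without a reboot:
# /etc/rc.conf - local overrides of /etc/defaults/rc.conf
sshd_enable="YES"
ntpd_enable="YES"
# then start it right away
service sshd start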
The CPU spends most of its time waiting for I/O operations to complete. Which tools do you use to diagnose which process(es) are actually waiting for I/O? How do you minimize I/O wait time? ***
To be completed.
Useful resources:
The Junior dev accidentally destroyed production database. How can you prevent such situations?
Create disaster recovery plan
Disaster recovery and business continuity planning are integral parts of the overall risk management for an organization. Is a documented process or set of procedures to recover and protect a business IT infrastructure.
If you don't have a recovery solution, then your restoration efforts will become rebuilding efforts, starting from scratch to recreate whatever was lost.
You should use commonly occurring real life data disaster scenarios to simulate what your backups will and wonât do in a crisis.
Create disaster recovery center
In the event of unplanned interruptions in the functioning of the primary location, service and all operational activities are switched to the backup center, so the unavailability of services is limited to the absolute minimum.
Does the facility have sufficient bandwidth options and power to scale and deal with the increased load during a major disaster? Are resources available to periodically test failover?
Create regular backups and test them!
Backups are a way to protect the investment in data. By having several copies of the data, it does not matter as much if one is destroyed (the cost is only that of the restoration of the lost data from the backup).
When you lose data, one thing is certain: downtime.
To assure the validity and integrity of any backup, it's essential to carry out regular restoration tests. Ideally, a test should be conducted after every backup completes to ensure data can be successfully secured and recovered. However, this often isn't practical due to a lack of available resources or time constraints.
Make backups of entire virtual machines and of the important components inside them.
Create snapshots: vm, disks or lvm
Snapshots are perfect if you want to recover a server from a previous state but it's only a "quick method", it cannot restore the system after too many items changed.
Create them always before making changes on production environments (and not only).
Disk snapshots are used to generate a snapshot of an entire disk. These snapshots don't make it easy to restore individual chunks of data (e.g. a lost user account), though it's possible. The primary purpose is to restore entire disks in case of disk failure.
The LVM snapshots can be primarily used to easily copy data from production environment to staging environment.
Remember: Snapshots are not backups!
Development and testing environments
A production environment is the real instance of the application and its database used by the company or the clients. The production database has all the real data.
Never set up development environments directly on the production database; build them from a backup instead (which also removes the need for direct access). Provide dev and test environments that your engineers can freely reach, and a prod environment that only a few people can push updates to, following an approved change.
All environments such as prod, dev and test should have one major difference: authorization data for services. For example, the postgres database instance in the testing environment should be consistent (if possible) with the production database; however, in order to eliminate mistakes, the database names, logins and passwords used for authorization should be different.
Single point of failure
The general method to avoid single points of failures is to provide redundant components for each necessary resource, so service can continue if a component fails.
Synchronization and replication process for databases
The replication procedure is super fragile and prone to error.
A slightly delayed replica (e.g. in the disaster recovery center) is also a good idea: with real-time replication, data changes are usually copied within minutes, so accidentally deleted data would quickly be gone from the replica database as well.
Create database model with users, roles and rights, use different methods of protection
Only a few very experienced people should have DB admin permissions; the others really don't need write access just to clone a database. In short, don't give a developer write access to prod.
The production database should refuse connections from any server or PC which isn't the one running the production application, even if a valid username/password is provided.
Development machines should not be able to reach a production database at all; a simple firewall rule that lets only the servers needing the DB data connect to the database goes a long way.
Create summary/postmortem documents after failures
The post-mortem audience includes customers, direct reports, peers, the company's executive team and often investors.
Explain what caused the outage on a timeline. Every incident begins with a specific trigger at a specific time, which often causes some unexpected behavior. For example, our servers were rebooted and we expected them to come back up intact, which didn't happen.
Furthermore, every incident has a root cause: the reboot itself was the trigger; however, a bug in the driver caused the actual outage. Finally, there are consequences to every incident, the most obvious one being that the site goes down.
The post-mortem answers the single most important question of what could have prevented the outage.
Despite how painful an outage may have been, the worst thing you can do is to bury it and never properly close the incident in a clear and transparent way.
If you also made a big mistake...
"Humans are just apes with bigger computers." - african_cheetah (Reddit)
"I've come to appreciate not having access to things I don't absolutely need." - warm_vanilla_sugar (Reddit)
Document whatever happened somewhere. Write setup guides. Failure is instructive.
Useful resources:
How to add a new disk to a Linux server without rebooting? How to rescan it and add it to LVM?
To be completed.
Useful resources:
Explain each system calls used for process management in Linux.
There are some system calls for process management. These are as follows:
- fork(): used to create a new process (a copy of the caller)
- exec(): used to replace the current process image with a new program
- wait(): used to make the parent process wait for a child's state change
- exit(): used to terminate the calling process
- getpid(): used to find the unique process ID of the calling process
- getppid(): used to find the parent process ID
- nice(): used to change the scheduling priority (niceness) of the currently running process
Useful resources:
Can't mount the root file system. Why? ***
To be completed.
Useful resources:
You have to delete 100GB files. Which method will be the most optimal? ***
To be completed.
Useful resources:
Explain interrupts and interrupt handlers in Linux.
Here's a high-level view of the low-level processing. I'm describing a simple typical architecture, real architectures can be more complex or differ in ways that don't matter at this level of detail.
When an interrupt occurs, the processor looks if interrupts are masked. If they are, nothing happens until they are unmasked. When interrupts become unmasked, if there are any pending interrupts, the processor picks one.
Then the processor executes the interrupt by branching to a particular address in memory. The code at that address is called the interrupt handler. When the processor branches there, it masks interrupts (so the interrupt handler has exclusive control) and saves the contents of some registers in some place (typically other registers).
The interrupt handler does what it must do, typically by communicating with the peripheral that triggered the interrupt to send or receive data. If the interrupt was raised by the timer, the handler might trigger the OS scheduler, to switch to a different thread. When the handler finishes executing, it executes a special return-from-interrupt instruction that restores the saved registers and unmasks interrupts.
The interrupt handler must run quickly, because it's preventing any other interrupt from running. In the Linux kernel, interrupt processing is divided in two parts:
- The "top half" is the interrupt handler. It does the minimum necessary, typically communicate with the hardware and set a flag somewhere in kernel memory.
- The "bottom half" does any other necessary processing, for example copying data into process memory, updating kernel data structures, etc. It can take its time and even block waiting for some other part of the system since it runs with interrupts enabled.
Useful resources:
What considerations come into play when designing a highly available application, both at the architecture level and the application level? ***
To be completed.
What fields are stored in an inode?
Within a POSIX system, a file has the following attributes which may be retrieved by the stat system call:
- Device ID (this identifies the device containing the file; that is, the scope of uniqueness of the serial number)
- File serial number (the inode number)
- The file mode which determines the file type and how the file's owner, its group, and others can access the file
- A link count telling how many hard links point to the inode
- The User ID of the file's owner
- The Group ID of the file
- The device ID of the file if it is a device file.
- The size of the file in bytes
- Timestamps telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time)
- The preferred I/O block size
- The number of blocks allocated to this file
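Most of these fields can be inspected with stat (the format string below assumes GNU coreutils):
# human-readable dump of the inode metadata
stat /etc/passwd
# selected fields: inode number, hard link count, owner UID/GID, size in bytes
stat -c 'inode=%i links=%h uid=%u gid=%g size=%s' /etc/passwd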
Useful resources:
Ordinary users are able to read /etc/passwd
. Is it a security hole? Do you know any other password shadowing schemes?
Typically, the hashed passwords are stored in /etc/shadow
on most Linux systems:
-rw-r----- 1 root shadow 1349 2016-07-03 03:54 /etc/shadow
They are stored in /etc/master.passwd
on BSD systems.
Programs that need to perform authentication still need to run with root
privileges:
-rwsr-xr-x 1 root root 42792 2016-02-14 14:13 /usr/bin/passwd
If you dislike the setuid root
programs and one single file containing all the hashed passwords on your system, you can replace it with the Openwall TCB PAM module. This provides every single user with their own file for storing their hashed password - as a result the number of setuid root
programs on the system can be drastically reduced.
Useful resources:
What are some of the benefits of using systemd over SysV init? ***
To be completed.
How do you run a command every time a file is modified?
For example:
while inotifywait -e close_write filename ; do
echo "changed" >> /var/log/changed
done
You need to copy a large amount of data. Explain the most effective way. ***
To be completed.
Useful resources:
Tell me about the dangers and caveats of LVM.
Risks of using LVM
- Vulnerable to write caching issues with SSD or VM hypervisor
- Harder to recover data due to more complex on-disk structures
- Harder to resize filesystems correctly
- Snapshots are hard to use, slow and buggy
- Requires some skill to configure correctly given these issues
Useful resources:
Python dev team in your company have a dilemma what to choose: uwsgi or gunicorn. What are the pros/cons of each of the solutions from the admin's perspective? ***
To be completed.
Useful resources:
What if kill -9
does not work? Describe exceptions for which the use of SIGKILL is insufficient.
kill -9
(SIGKILL
) always works, provided you have the permission to kill the process. Basically either the process must be started by you and not be setuid or setgid, or you must be root. There is one exception: even root cannot send a fatal signal to PID 1 (the init process).
However kill -9
is not guaranteed to work immediately. All signals, including SIGKILL
, are delivered asynchronously: the kernel may take its time to deliver them. Usually, delivering a signal takes at most a few microseconds, just the time it takes for the target to get a time slice. However, if the target has blocked the signal, the signal will be queued until the target unblocks it.
Normally, processes cannot block SIGKILL
. But kernel code can, and processes execute kernel code when they call system calls.
A process blocked in a system call is in uninterruptible sleep. The ps
or top
command will (on most unices) show it in state D.
Since a process in D state is uninterruptible, removing it usually requires a machine reboot, unless the condition is resolved automatically by the system.
Usually there is very little chance that a process stays in D state for long. If it does, something is not being handled properly in the system, and it can indicate a bug as well.
A classical case of long uninterruptible sleep is processes accessing files over NFS when the server is not responding; modern implementations tend not to impose uninterruptible sleep (e.g. under Linux, the intr mount option allows a signal to interrupt NFS file accesses).
You may sometimes see entries marked Z (or H under Linux) in the ps
or top
output. These are technically not processes, they are zombie processes, which are nothing more than an entry in the process table, kept around so that the parent process can be notified of the death of its child. They will go away when the parent process pays attention (or dies).
Summary exceptions:
- Zombie processes cannot be killed since they are already dead and waiting for their parent processes to reap them
- Processes that are in the blocked state will not die until they wake up again
- The init process is special: It does not get signals that it does not want to handle, and thus it can ignore SIGKILL. An exception from this exception is while init is ptraced on Linux
- An uninterruptibly sleeping process may not terminate (and free its resources) even when sent SIGKILL. This is one of the few cases in which a Unix system may have to be rebooted to solve a temporary software problem
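To spot the processes that SIGKILL cannot touch right now, a quick check for D-state processes might look like this (GNU ps assumed):
# list processes in uninterruptible sleep, with the kernel function they are waiting in
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'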
Useful resources:
Difference between nohup
, disown
, and &
. What happens when using all together?
- & puts the job in the background, that is, makes it block on attempting to read input, and makes the shell not wait for its completion
- disown removes the process from the shell's job control, but it still leaves it connected to the terminal. One of the results is that the shell won't send it a SIGHUP. Obviously, it can only be applied to background jobs, because you cannot enter it when a foreground job is running
- nohup disconnects the process from the terminal, redirects its output to nohup.out and shields it from SIGHUP. One of the effects (the naming one) is that the process won't receive any sent SIGHUP. It is completely independent from job control and could in principle be used also for foreground jobs (although that's not very useful)
If you use all three together, the process is running in the background, is removed from the shell's job control and is effectively disconnected from the terminal.
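A typical combination for a long-running job that has to survive the terminal closing (the script name is made up):
# start in the background, detached from the terminal, output captured to a file
nohup ./backup.sh > backup.log 2>&1 &
# remove it from the shell's job table as well
disown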
Useful resources:
What is the main advantage of using chroot
? When and why do we use it? What is the purpose of mounting dev, proc and sys in a chroot environment?
An advantage of having a chroot environment is that the file system is totally isolated from the physical host. chroot
creates a separate file system inside the file system; the difference is that it uses a newly created root (/) as its root directory.
A chroot jail is a way to isolate a process and its children from the rest of the system. It should only be used for processes that don't run as root, as root users can break out of the jail very easily.
The idea is that you create a directory tree where you copy or link in all the system files needed for a process to run. You then use the chroot()
system call to change the root directory to be at the base of this new tree and start the process running in that chroot'd environment. Since it can't actually reference paths outside the modified root, it can't perform operations (read/write etc.) maliciously on those locations.
On Linux, using a bind mounts is a great way to populate the chroot tree. Using that, you can pull in folders like /lib
and /usr/lib
while not pulling in /usr
, for example. Just bind the directory trees you want to directories you create in the jail directory.
Chroot environment is useful for:
- reinstall bootloader
- reset a forgotten password
- perform a kernel upgrade (or downgrade)
- rebuild your initramdisk
- fix your /etc/fstab
- reinstall packages using your package manager
- whatever
When working in a chrooted environment, there are a few special file systems that need to be mounted so that all programs behave properly.
The limitation is that /dev, /sys and /proc are not mounted inside the chroot by default, but they are needed for many tasks.
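A minimal sketch, assuming the target root filesystem is already mounted at /mnt/chroot:
mount --bind /dev /mnt/chroot/dev
mount -t proc proc /mnt/chroot/proc
mount -t sysfs sys /mnt/chroot/sys
chroot /mnt/chroot /bin/bash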
Useful resources:
What are segmentation faults (segfaults), and how can you identify what's causing them?
A segmentation fault (aka segfault) is a common condition that causes programs to crash. Segfaults are caused by a program trying to read or write an illegal memory location.
Program memory is divided into different segments:
- a text segment for program instructions
- a data segment for variables and arrays defined at compile time
- a stack segment for temporary (or automatic) variables defined in subroutines and functions
- a heap segment for variables allocated during runtime by functions, such as
malloc
(in C)
In practice, segfaults are almost always due to trying to read or write a non-existent array element, not properly defining a pointer before using it, or (in C programs) accidentally using a variable's value as an address. Note that each process has its own virtual address space: when Process A reads memory location 0x877, it reads information residing at a different physical location in RAM than when Process B reads its own 0x877.
All modern operating systems use memory protection for process address spaces, and so all of them can produce a segmentation fault.
A segmentation fault can also occur under the following circumstances:
- a buggy program/command, which can only be fixed by applying a patch
- accessing an array beyond its end in C programs
- inside a chrooted jail, when critical shared libs, config files or /dev/ entries are missing
- faulty hardware, memory or drivers
- equipment running outside its recommended environment (overheating can also cause this problem)
To debug this kind of error try one or all of the following techniques:
- enable core files:
$ ulimit -c unlimited
- reproduce the crash:
$ ./<program>
- debug crash with gdb:
$ gdb <program> [core file]
- or run
LD_PRELOAD=...path-to.../libSegFault.so <program>
to get a report with backtrace, loaded libs, etc
Also:
- make sure the correct hardware is installed and configured
- always apply all patches and keep the system updated
- make sure all dependencies are installed inside the jail
- turn on core dumping for supported services such as Apache
- use
strace
which is a useful diagnostic, instructional, and debugging tool
Sometimes segmentation faults are not caused by bugs in the program but are caused instead by system memory limits being set too low. Usually it is the limit on stack size that causes this kind of problem (stack overflows). To check memory limits, use the ulimit
command in bash.
Useful resources:
One of the processes runs slowly. How do you check how long it has been running, and which tools will you use?
To be completed.
Useful resources:
What is a file descriptor in Linux?
In Unix and related computer operating systems, a file descriptor (FD, less frequently fildes) is an abstract indicator (handle) used to access a file or other input/output resource, such as a pipe or network socket. File descriptors form part of the POSIX application programming interface.
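On Linux you can see a process's open file descriptors under /proc, for example:
# descriptors held by the current shell (0=stdin, 1=stdout, 2=stderr, plus any others)
ls -l /proc/$$/fd
# descriptors of another process, e.g. PID 1234 (hypothetical)
ls -l /proc/1234/fd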
Which way of additionally feeding the random entropy pool would you suggest for producing random passwords? How can it be improved?
You should use /dev/urandom
, not /dev/random
. The two differences between /dev/random
and /dev/urandom
are:
-
/dev/random
might be theoretically better in the context of an information-theoretically secure algorithm. This is the kind of algorithm which is secure against today's technology, and also tomorrow's technology, and technology used by aliens, and God's own iPad as well. -
/dev/urandom
will not block, while/dev/random
may do so./dev/random
maintains a counter of "how much entropy it still has" under the assumption that any bits it has produced is a lost entropy bit. Blocking induces very real issues, e.g. a server which fails to boot after an automated install because it is stalling on its SSH server key creation.
So you want to use /dev/urandom
and stop worrying about this entropy business.
The trick is that /dev/urandom
never blocks, ever, even when it should: /dev/urandom
is secure as long as it has received enough bytes of "initial entropy" since the last boot (32 random bytes are enough). A normal Linux installation will create a random seed (from /dev/random
) upon installation, and save it on the disk. Upon each reboot, the seed will be read, fed into /dev/urandom
, and a new seed immediately generated (from /dev/urandom
) to replace it. Thus, this guarantees that /dev/urandom
will always have enough initial entropy to produce cryptographically strong random data, perfectly sufficient for any mundane cryptographic job, including password generation.
Should any services require randomness from /dev/random when the kernel considers its entropy pool exhausted, they may pause to wait for more, which can cause excessive delays in your application. Even worse, since many applications will either resort to using their own random seed created at program initialization, or to using /dev/urandom
to avoid blocking, a poorly seeded generator will yield lower quality random data. This can affect the integrity of your secure communications, and can increase the chance of cryptanalysis of your private data.
To check the amount of bytes of entropy currently available, use:
cat /proc/sys/kernel/random/entropy_avail
rng-tools
On Fedora/RHEL/CentOS: sudo yum install rng-tools.
On Debian-based systems: sudo apt-get install rng-tools to set it up.
Then run sudo rngd -r /dev/urandom
before generating the keys.
haveged
On Fedora/RHEL/CentOS: sudo yum install haveged and add /usr/local/sbin/haveged -w 1024 to /etc/rc.local.
On Debian-based systems: sudo apt-get install haveged
and add DAEMON_ARGS="-w 1024"
to /etc/default/haveged
to set it up.
Then run sudo rngd -r /dev/urandom
before generating the keys.
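With a properly seeded /dev/urandom, password generation itself can be as simple as:
# 16-character alphanumeric password straight from /dev/urandom
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo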
Useful resources:
What is the difference between /sbin/nologin
, /bin/false
, and /bin/true
?
When /sbin/nologin
is set as the shell, if a user with that shell logs in, they'll get a polite message saying 'This account is currently not available'.
/bin/false
is just a binary that immediately exits, returning false, when it's called, so when someone who has false as their shell logs in, they're immediately logged out when false exits. Setting the shell to /bin/true
has the same effect of not allowing someone to log in, but false is probably used as a convention over true since it's much better at conveying that the person doesn't have a shell.
/sbin/nologin
is the more user-friendly option, with a customizable message given to the user trying to log in, so you would theoretically want to use that; but both nologin and false will have the same end result: the person not having a shell and not being able to ssh in.
Useful resources:
Which symptoms might be suffering from a disk bottleneck? ***
To be completed.
What is the meaning of the error maxproc limit exceeded by uid %i ...
in FreeBSD?
The FreeBSD kernel will only allow a certain number of processes to exist at one time. The number is based on the kern.maxusers variable.
kern.maxusers also affects various other in-kernel limits, such as network buffers. If the machine is heavily loaded, increase kern.maxusers. This will increase these other system limits in addition to the maximum number of processes.
To adjust the kern.maxusers value, see the File/Process Limits section of the Handbook. While that section refers to open files, the same limits apply to processes.
If the machine is lightly loaded but running a very large number of processes, adjust the kern.maxproc tunable by defining it in /boot/loader.conf
.
How to read a file line by line and assign each line to a variable?
For example:
while IFS='' read -r line || [[ -n "$line" ]] ; do
echo "Text read from file: $line"
done < "/path/to/filename"
Explanation:
IFS=''
(orIFS=
) prevents leading/trailing whitespace from being trimmed.-r
prevents backslash escapes from being interpreted.|| [[ -n $line ]]
prevents the last line from being ignored if it doesn't end with a\n
(since read returns a non-zero exit code when it encounters EOF).
Useful resources:
The client reports that his site received a grade B in the ssllabs scanner. Prepare a checklist of best practice for ssl configuration. ***
Useful resources:
What do CPU jumps mean?
An OS is a very busy thing, particularly so when you have it doing something (and even when you aren't). And when we are looking at an active enterprise environment, something is always going on.
Most of this activity is "bursty", meaning processes are typically quiescent with short periods of intense activity. This is certainly true of any type of network-based activity (e.g. processing PHP requests), but also applies to OS maintenance (e.g. file system maintenance, page reclamation, disk I/O requests).
If you take a situation where you have a lot of such bursty processes, you get a very irregular and spiky CPU usage plot.
On top of that, a high number of context switches will make the situation even worse.
Useful resources:
How do you trace a system call in Linux? Explain the possible methods.
SystemTap
This is the most powerful method. It can even show the call arguments:
Usage:
sudo apt-get install systemtap
sudo stap -e 'probe syscall.mkdir { printf("%s[%d] -> %s(%s)\n", execname(), pid(), name, argstr) }'
Then on another terminal:
sudo rm -rf /tmp/a /tmp/b
mkdir /tmp/a
mkdir /tmp/b
Sample output:
mkdir[4590] -> mkdir("/tmp/a", 0777)
mkdir[4593] -> mkdir("/tmp/b", 0777)
strace
with -f|-ff
params
You can use the -f
and -ff
option. Something like this:
strace -f -e trace=process bash -c 'ls; :'
-
-f
: Trace child processes as they are created by currently traced processes as a result of the fork(2) system call. -
-ff
: If the-o
filename option is in effect, each processes trace is written to filename.pid where pid is the numeric process id of each process. This is incompatible with-c
, since no per-process counts are kept.
ltrace -S
shows both system calls and library calls
This awesome tool therefore gives even further visibility into what executables are doing.
ftrace
minimal runnable example
Here goes a minimal runnable example. Run with sudo
:
#!/bin/sh
set -eux
d=debug/tracing
mkdir -p debug
if ! mountpoint -q debug; then
mount -t debugfs nodev debug
fi
# Stop tracing.
echo 0 > "${d}/tracing_on"
# Clear previous traces.
echo > "${d}/trace"
# Find the tracer name.
cat "${d}/available_tracers"
# Disable tracing functions, show only system call events.
echo nop > "${d}/current_tracer"
# Find the event name with.
grep mkdir "${d}/available_events"
# Enable tracing mkdir.
# Both statements below seem to do the exact same thing,
# just with different interfaces.
# https://www.kernel.org/static/html/v4.18/trace/events.html
echo sys_enter_mkdir > "${d}/set_event"
# echo 1 > "${d}/events/syscalls/sys_enter_mkdir/enable"
# Start tracing.
echo 1 > "${d}/tracing_on"
# Generate two mkdir calls by two different processes.
rm -rf /tmp/a /tmp/b
mkdir /tmp/a
mkdir /tmp/b
# View the trace.
cat "${d}/trace"
# Stop tracing.
echo 0 > "${d}/tracing_on"
umount debug
Sample output:
# tracer: nop
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
mkdir-5619 [005] .... 10249.262531: sys_mkdir(pathname: 7fff93cbfcb0, mode: 1ff)
mkdir-5620 [003] .... 10249.264613: sys_mkdir(pathname: 7ffcdc91ecb0, mode: 1ff)
One cool thing about this method is that it shows the function call for all processes on the system at once, although you can also filter PIDs of interest with set_ftrace_pid
.
Useful resources:
- How do I trace a system call in Linux? (original)
- Does ftrace allow capture of system call arguments to the Linux kernel, or only function names?
- How to trace just system call events with ftrace without showing any other functions in the Linux kernel?
- What system call is used to load libraries in Linux?
How to remove all files except some from a directory?
Solution 1 - with extglob
:
shopt -s extglob
rm !(textfile.txt|backup.tar.gz|script.php|database.sql|info.txt)
Solution 2 - with find
:
find . -type f -not -name '*txt' -print0 | xargs -0 rm --
How to check if a string contains a substring in Bash?
You can use *
(wildcards) outside a case statement, too, if you use double brackets:
string='My long string'
if [[ $string = *"My long"* ]] ; then
true
fi
Explain differences between 2>&-
, 2>/dev/null
, |&
, &>/dev/null
, and >/dev/null 2>&1
.
- a number 1 = standard out (i.e.
STDOUT
) - a number 2 = standard error (i.e.
STDERR
) - if a number isn't explicitly given, then number 1 is assumed by the shell (bash)
First let's tackle the function of these.
2>&-
The general form of this one is M>&-
, where "M" is a file descriptor number. This will close output for whichever file descriptor is referenced, i.e. "M".
2>/dev/null
The general form of this one is M>/dev/null
, where "M" is a file descriptor number. This will redirect the file descriptor, "M", to /dev/null
.
2>&1
The general form of this one is M>&N
, where "M" & "N" are file descriptor numbers. It combines the output of file descriptors "M" and "N" into a single stream.
|&
This is just an abbreviation for 2>&1 |
. It was added in Bash 4.
&>/dev/null
This is just an abbreviation for >/dev/null 2>&1
. It redirects file descriptor 2 (STDERR
) and descriptor 1 (STDOUT
) to /dev/null
.
>/dev/null
This is just an abbreviation for 1>/dev/null
. It redirects file descriptor 1 (STDOUT
) to /dev/null
.
Useful resources:
How to redirect stderr and stdout to different files in the same line?
Just add them in one line command 2>> error 1>> output
.
However, note that >>
is for appending if the file already has data. Whereas, >
will overwrite any existing data in the file.
So, command 2> error 1> output
if you do not want to append.
Just for completion's sake, you can write 1>
as just >
since the default file descriptor is the output. so 1>
and >
is the same thing.
So, command 2> error 1> output
becomes, command 2> error > output
.
Load averages are above 30 on a server with 24 cores, but the CPU shows around 70 percent idle. What is one of the common causes of this condition? How do you debug and fix it?
Requests which involve disk I/O can be slowed greatly if the CPU(s) need to wait on the disk to read or write data. I/O wait is the percentage of time the CPU spends waiting on disk.
Let's look at how we can confirm whether disk I/O is slowing down application performance by using a few terminal command-line tools (top, atop and iotop).
Example of debug:
- answering whether or not I/O is causing system slowness
- finding which disk is being written to
- finding the processes that are causing high I/O
- process list state
- finding which files are being written to heavily
- do you see your copy process put in D state waiting for I/O work to be done by pdflush?
- do you see heavy synchronous write activity on your disks?
also:
- using the top command: load averages and wa (wait time)
- using the atop command to monitor DSK (disk) I/O stats
- using the iotop command for real-time insight on disk reads/writes
To improve performance:
- check drive array configuration
- check disk queuing algorithms and tune them
- tune general block I/O parameters
- tune virtual memory management to improve I/O performance
- check and tune mount options and filesystem parameters (also responsible for caching)
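A few concrete starting points for the debugging steps above (iostat comes from the sysstat package, iotop is a separate tool):
# per-device utilisation, queue sizes and await times, refreshed every second
iostat -xz 1
# processes generating the most I/O right now (-o only active, -P per process, -a accumulated)
iotop -oPa
# in top, watch the "wa" value and look for processes stuck in state D
top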
Useful resources:
How to enforce authentication methods in SSH? In what situations would this be useful?
Force login with a password:
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no user@remote_host
Force login using the key:
ssh -o PreferredAuthentications=publickey -o PubkeyAuthentication=yes -i id_rsa user@remote_host
Useful resources:
Getting Too many Open files
error for Postgres. How to resolve it?
The issue can be fixed by reducing max_files_per_process, e.g. to 200 from the default of 1000. This parameter lives in the postgresql.conf
file and sets the maximum number of simultaneously open files allowed to each server subprocess.
Usually people start by editing the /etc/security/limits.conf
file, but forget that this file only applies to users actively logged in through the PAM system.
In what circumstance can df
and du
disagree on available disk space? How do you solve it?
du
walks the directory tree and sums up file sizes, while df
reports the filesystem's own accounting of used and free blocks. Files that are deleted but still held open by a process keep consuming space that df sees but du cannot.
Solution 1
Check for files located under mount points. Frequently if you mount a directory (say a sambafs) onto a filesystem that already had files or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk.
I've had file copies performed in single user mode dump files into directories that I couldn't see except in single user mode (due to other file systems being mounted on top of them).
Solution 2
On the other hand, df -h and du -sh can disagree by as much as 50% of the hard disk size. This can be caused by e.g. Apache (httpd) keeping handles to large log files open after they have been deleted from disk.
This was tracked down by running lsof | grep "/var" | grep deleted
where /var
was the partition I needed to clean up.
The output showed lines like this:
httpd 32617 nobody 106w REG 9,4 1835222944 688166 /var/log/apache/awstats_log (deleted)
The situation was then resolved by restarting Apache (service httpd restart
) and cleared of disk space, by allowing the locks on deleted files to be cleared.
Useful resources:
What is the difference between encryption and hashing?
Hashing differs from encryption: encryption is a two-way process in which a message is first encrypted and can later be decrypted with the proper key, while hashing condenses a message into an irreversible fixed-length value, or hash, that cannot be turned back into the original input.
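A small illustration of the difference (the passphrase is obviously a placeholder; the -pbkdf2 option requires a reasonably recent OpenSSL):
# hashing: one-way, no key, fixed-length output
echo -n "secret data" | sha256sum
# encryption: two-way, the same passphrase decrypts the ciphertext again
echo -n "secret data" | openssl enc -aes-256-cbc -pbkdf2 -pass pass:mypassword -base64 > cipher.txt
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:mypassword -base64 < cipher.txt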
Should the root certificate go on the server?
Self-signed root certificates need not/should not be included in web server configuration. They serve no purpose (clients will always ignore them) and they incur a slight performance (latency) penalty because they increase the size of the SSL handshake.
If the client does not have the root in their trust store, then it won't trust the web site, and there is no way to work around that problem. Having the web server send the root certificate will not help - the root certificate has to come from a trusted 3rd party (in most cases the browser vendor).
Useful resources:
How to log all commands run by root on production servers?
auditd
is the correct tool for the job here:
- Add these 2 lines to
/etc/audit/audit.rules
:
-a exit,always -F arch=b64 -F euid=0 -S execve
-a exit,always -F arch=b32 -F euid=0 -S execve
These will track all commands run by root (euid=0). Why two rules? The execve syscall must be tracked in both 32 and 64 bit code.
-
To get rid of
auid=4294967295
messages in logs, addaudit=1
to the kernel's cmdline (by editing/etc/default/grub
) -
Place the line
session required pam_loginuid.so
in all PAM config files that are relevant to login (/etc/pam.d/{login,kdm,sshd}
), but not in the files that are relevant to su or sudo. This will allow auditd to get the calling user's uid correctly when calling sudo or su.
Restart your system now.
Let's login and run some commands:
$ id -u
1000
$ sudo ls /
bin boot data dev etc home initrd.img initrd.img.old lib lib32 lib64 lost+found media mnt opt proc root run sbin scratch seLinux srv sys tmp usr var vmlinuz vmlinuz.old
$ sudo su -
# ls /etc
[...]
Now read /var/log/audit/audit.log
to see what has been logged.
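If the audit userspace tools are installed, ausearch is usually more convenient than reading the raw log:
# show execve events with UIDs and syscall arguments decoded
ausearch -m EXECVE -i | less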
Useful resources:
How to prevent dd
from freezing your system?
Try using ionice:
ionice -c3 dd if=/dev/zero of=file
This starts the dd
process with the "idle" IO priority: it only gets disk time when no other process has used disk IO for a certain amount of time.
Of course this can still flood the buffer cache and cause freezes while the system flushes out the cache to disk. There are tunables under /proc/sys/vm/
to influence this, particularly the dirty_*
entries.
How to limit processes to not exceed more than X% of CPU usage?
nice/renice
nice is a great tool for 'one off' tweaks to a system:
nice COMMAND
cpulimit
Use cpulimit if you need to run a CPU-intensive job and having free CPU time is essential for the responsiveness of the system:
cpulimit -l 50 COMMAND
cgroups
cgroups apply limits to a set of processes, rather than to just one:
cgcreate -g cpu:/cpulimited
cgset -r cpu.shares=512 cpulimited
cgexec -g cpu:cpulimited COMMAND_1
cgexec -g cpu:cpulimited COMMAND_2
cgexec -g cpu:cpulimited COMMAND_3
How to mount a temporary RAM partition?
# -t - filesystem type
# -o - mount options
mount -t tmpfs tmpfs /mnt -o size=64M
How to kill a process that is locking a file?
fuser -k filename
Another admin trying to debug a server accidentally typed: chmod -x /bin/chmod
. How do you reset the permissions back to default?
# 1: copy any executable to get a file with the execute bit set, then overwrite
#    its contents with chmod (cp keeps the destination's existing permissions)
cp /bin/ls chmod.01
cp /bin/chmod chmod.01
./chmod.01 700 file

# 2: use the statically linked busybox chmod applet
/bin/busybox chmod 0700 /bin/chmod

# 3: set the permission bits through the ACL tools instead of chmod
setfacl --set u::rwx,g::---,o::--- /bin/chmod

# 4: run the chmod binary through the dynamic loader, which does not require the execute bit
/usr/lib/ld*.so /bin/chmod 0700 /bin/chmod
Useful resources:
grub>
vs grub-rescue>
. Explain.
- grub> - the mode GRUB enters when it finds everything it needs to start the system, including its configuration file. In this mode we have access to most (if not all) modules and commands. It can also be reached from the menu by pressing the 'c' key
- grub-rescue> - the mode GRUB falls back to when it cannot find its own directory (especially the directory with modules and additional commands, e.g. /boot/grub/i386-pc), when its contents are damaged, or when no normal module is found; it contains only basic commands
How to check whether the private key and the certificate match?
(openssl rsa -noout -modulus -in private.key | openssl md5 ; openssl x509 -noout -modulus -in certificate.crt | openssl md5) | uniq
How to add new user without using useradd
/adduser
commands?
- Add an entry of user details in
/etc/passwd
withvipw
:
# username:password:UID:GID:Comments:Home_Directory:Login Shell
user:x:501:501:test user:/home/user:/bin/bash
Be careful with the syntax. Do not edit directly with an editor.
vipw
locks the file, so that other commands won't try to update it at the same time.
- You will have to create a group with same name in
/etc/group
withvigr
(similar tool forvipw
):
user:x:501:
- Assign a password to the user:
passwd user
- Create the home directory of the user with mkdir:
mkdir -m 0700 /home/user
- Copy the files from
/etc/skel
to the new home directory:
rsync -av --delete /etc/skel/ /home/user
- Fix ownerships and permissions with
chown
andchmod
:
chown -R user:user /home/user
chmod -R go-rwx /home/user
Useful resources:
Why do we need mktemp
command? Present an example of use.
mktemp
randomizes the name. It is very important from the security point of view.
Just imagine that you do something like:
echo "random_string" > /tmp/temp-file
in your root-running script. And someone (who has read your script) does
ln -s /etc/passwd /tmp/temp-file
The mktemp
command could help you in this situation:
TEMP=$(mktemp /tmp/temp-file.XXXXXXXX)
echo "random_string" > ${TEMP}
Now this ln -s /etc/passwd symlink attack will not work.
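A minimal sketch of the usual pattern in a script, combining mktemp with a cleanup trap (the file name template is just an example):
#!/usr/bin/env bash
# Create an unpredictable temporary file and remove it when the script exits
TEMP=$(mktemp /tmp/temp-file.XXXXXXXX) || exit 1
trap 'rm -f "$TEMP"' EXIT
echo "random_string" > "$TEMP"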
Is it safe to attach strace to a running process in production? What are the consequences?
strace
is the system call tracer for Linux. It currently uses the arcane ptrace()
(process trace) debugging interface, which operates in a violent manner: pausing the target process for each syscall so that the debugger can read state. And doing this twice: when the syscall begins, and when it ends.
This means strace
pauses your application twice for each syscall, and context-switches each time between the application and strace
. It's like putting traffic metering lights on your application.
Cons:
- can cause significant and sometimes massive performance overhead, in the worst case, slowing the target application by over 100x. This may not only make it unsuitable for production use, but any timing information may also be so distorted as to be misleading
- can't trace multiple processes simultaneously (with the exception of followed children)
- visibility is limited to the system call interface
Useful resources:
What is the easiest, safest, and most portable way to remove a directory entry named -fr?
They're effective but not optimally portable:
rm -- -fr
perl -le 'unlink("-fr");'
People who go on about shell command line quoting and character escaping are almost as dangerous as those who simply don't even recognize why a file name like that poses any problem at all.
The most portable solution:
rm ./-fr
Write a simple bash script (or pair of scripts) to back up and restore your system. ***
To be completed.
What are salted hashes? Generate a salted password for the /etc/shadow file.
Salt at its most fundamental level is random data. When a properly protected password system receives a new password, it will create a hashed value for that password, create a new random salt value, and then store that combined value in its database. This helps defend against dictionary attacks and known hash attacks.
For example, if a user uses the same password on two different systems, if they used the same hashing algorithm, they could end up with the same hash value. However, if even one of the systems uses salt with its hashes, the values will be different.
The encrypted passwords in /etc/shadow
file are stored in the following format:
$ID$SALT$ENCRYPTED
The $ID indicates the type of encryption, $SALT is a random string (up to 16 characters), and $ENCRYPTED is the password's hash.
| Hash Type | ID | Hash Length |
|---|---|---|
| MD5 | $1 | 22 characters |
| SHA-256 | $5 | 43 characters |
| SHA-512 | $6 | 86 characters |
Use the commands below from the Linux shell to generate a hashed password for /etc/shadow with a random salt:
- Generate MD5 password hash
python3 -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print(crypt.crypt('MySecretPassword', '\$1\$%s\$' % randomsalt))"
- Generate SHA-256 password hash
python3 -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print(crypt.crypt('MySecretPassword', '\$5\$%s\$' % randomsalt))"
- Generate SHA-512 password hash
python3 -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print(crypt.crypt('MySecretPassword', '\$6\$%s\$' % randomsalt))"
Network Questions (27)
Create SPF records for your site to help control spam.
- Start with the SPF version. This part defines the record as SPF: an SPF record should always start with the version tag v=spf1 (version 1). There used to be a second version of SPF (called SenderID), but it was discontinued.
- After the v=spf1 version tag, list all IP addresses that are authorized to send email on your behalf, for example:
v=spf1 ip4:34.243.61.237 ip6:2a05:d018:e3:8c00:bb71:dea8:8b83:851e
- Next, add an include tag for every third-party organization that sends email on your behalf, e.g. include:thirdpartydomain.com. This tag indicates that this particular third party is authorized to send email on behalf of your domain. Consult the third party to learn which domain to use as the value of the "include" statement.
- Once you have added all IP addresses and include tags, end your record with an ~all or -all tag. The all tag is an important part of the SPF record: it indicates what policy should be applied when ISPs detect a server which is not listed in your SPF record. If an unauthorized server does send email on behalf of your domain, action is taken according to the published policy (e.g. reject the email or mark it as spam). What is the difference between these tags? The ~all tag indicates a soft fail and -all indicates a hard fail. The all tag has the following basic markers:
  - -all - servers that aren't listed in the SPF record are not authorized to send email (non-compliant emails will be rejected)
  - ~all - if the email is received from a server that isn't listed, it will be marked as a soft fail (emails will be accepted but marked)
  - +all - we strongly recommend not to use this option; this tag allows any server to send email from your domain
- After defining your SPF record it might look something like this:
v=spf1 ip4:34.243.61.237 ip6:2a05:d018:e3:8c00:bb71:dea8:8b83:851e include:thirdpartydomain.com -all
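Once the record is published as a TXT record for the domain, it can be checked with dig (example.com is a placeholder):
# Print only the TXT records; the SPF record should appear among them
dig +short TXT example.com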
Useful resources:
What is the difference between an authoritative and a nonauthoritative answer to a DNS query? ***
An authoritative DNS query answer comes from the server that contains the zone files for the domain queried. This is the name server that the domain administrator set up the DNS records on. A nonauthoritative answer comes from a name server that does not host the domain's zone files (for example, a commonly used name server that has the answer cached, such as Google's 8.8.8.8 or OpenDNS's 208.67.222.222).
If you try to resolve a hostname you get NXDOMAIN from the host command. Your resolv.conf lists two nameservers, but only the second one knows this domain name. Why didn't the local resolver check the second nameserver?
NXDOMAIN is nothing but a non-existent Internet or intranet domain name. If a domain name cannot be resolved using DNS, the result is an NXDOMAIN condition.
The default behavior for resolv.conf
and the resolver
is to try the servers in the order listed. The resolver will only try the next nameserver if the first nameserver times out.
The algorithm used is to try a name server, and if the query times out, try the next, until out of name servers, then repeat trying all the name servers until a maximum number of retries are made.
If a nameserver responds at all, even with SERVFAIL or a negative answer, the resolver treats that as a response and does not go on to ask the remaining nameservers.
Example:
nameserver 192.168.250.20 # not a DNS server at all
nameserver 8.8.8.8        # does not know gate.test.int
nameserver 127.0.0.1      # knows gate.test.int
so if you check:
host -v -t a gate.test.int
Trying "gate.test.int"                     # the first nameserver (192.168.250.20) times out, so the next one is tried
Host gate.test.int not found: 3(NXDOMAIN)  # 8.8.8.8 answers, but with NXDOMAIN (domain not found)
Received 88 bytes from 8.8.8.8#53 in 43 ms
# so the last server in the list was never asked
To avoid this you can use e.g. nslookup
command which will use the second nameserver if it receives a SERVFAIL from the first nameserver.
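The resolver behaviour can also be tuned in resolv.conf itself; a sketch of the relevant options (the values are examples, see resolv.conf(5)):
# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 127.0.0.1
# timeout: seconds to wait before trying the next server, attempts: passes over the list,
# rotate: spread queries round-robin over all listed servers
options timeout:2 attempts:3 rotate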
Useful resources:
Explore the current MTA configuration at your site. What are some of the special features of the MTA that are in use? ***
To be completed.
How to find a domain based on the IP address? What techniques/tools can you use? ***
To be completed.
Is it possible to have SSL certificate for IP address, not domain name?
It is possible (but rarely used) as long as it is a public IP address.
An SSL certificate is typically issued to a Fully Qualified Domain Name (FQDN) such as https://www.domain.com. However, some organizations need an SSL certificate issued to a public IP address. This option allows you to specify a public IP address as the Common Name in your Certificate Signing Request (CSR). The issued certificate can then be used to secure connections directly with the public IP address (e.g. https://1.1.1.1).
According to the CA Browser forum, there may be compatibility issues with certificates for IP addresses unless the IP address is in both the commonName and subjectAltName fields. This is due to legacy SSL implementations which are not aligned with RFC 5280, notably, Windows OS prior to Windows 10.
Useful resources:
How do you do load testing and capacity planning for websites? ***
To be completed.
Useful resources:
A developer reports a problem with connectivity to a remote service. Use /dev for troubleshooting.
# <host> - set remote host
# <port> - set destination port
# 1
timeout 1 bash -c "</dev/tcp/<host>/<port>" >/dev/null 2>&1 ; echo $?
# 2
timeout 1 bash -c 'cat < /dev/null > /dev/tcp/<host>/<port>' ; echo $?
# 3
echo > "/dev/tcp/<host>/<port>" ; echo $?
Useful resources:
How do I measure request and response times at once using curl
?
curl supports formatted output for the details of the request (see the curl man page, under -w, --write-out <format>). For our purposes we'll focus just on the timing details that are provided.
- Create a new file,
curl-format.txt
, and paste in:
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
- Make a request:
curl -w "@curl-format.txt" -o /dev/null -s "http://example.com/"
What this does:
-w "@curl-format.txt"
- tells cURL to use our format file-o /dev/null
- redirects the output of the request to /dev/null-s
- tells cURL not to show a progress meterhttp://example.com/
is the URL we are requesting. Use quotes particularly if your URL has "&" query string parameters
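For a quick one-off check, the format can also be passed inline instead of via a file:
# Print connect time, time to first byte and total time for a single request
curl -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n' -o /dev/null -s "http://example.com/"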
You need to move ext4 journal on another disk/partition. What are the reasons for this? ***
To be completed.
Useful resources:
Does having Varnish in front of your website/app mean you don't need to care about load balancing or redundancy?
It depends. Varnish is a cache server, so its purpose is to cache contents and to act as a reverse proxy, to speed up retrieval of data and to lessen the load on the webserver. Varnish can be also configured as a load-balancer for multiple web servers, but if we use just one Varnish server, this will become our single point of failure on our infrastructure.
A better solution to ensure load-balancing or redundancy will be a cluster of at least two Varnish instances, in active-active mode or active-passive mode.
What are hits, misses, and hit-for-pass in Varnish Cache?
A hit is a request which is successfully served from the cache. A miss is a request that goes through the cache but does not find a cached object, and therefore has to be fetched from the origin. A hit-for-pass comes in when Varnish Cache realizes that one of the objects it has requested is uncacheable, and results in a pass.
Useful resources:
What is a reasonable TTL for cached content given the following parameters? ***
To be completed.
A developer says: .htaccess is full of magic and should be used. What is your opinion about using .htaccess files? How do they affect the web app?
.htaccess files were born out of an era when shared hosting was commonplace:
- sysadmins needed a way to allow multiple clients to access their server under different accounts, with different configurations for their websites.
The .htaccess
file allowed them to modify how Apache works without having access to the entire server. These files can reside in any and every directory in the directory tree of the website and provide features to the directory and the files and folders inside it.
It's horrible for performance
For .htaccess to work, Apache needs to check EVERY directory in the requested path for the existence of a .htaccess file, and if one exists it reads EVERY one of them and parses it. This happens for EVERY request. Remember that the second you change that file it takes effect, because Apache reads it every time.
Every single request the webserver handles, even for the lowliest .png or .css file, causes Apache to:
- look for a .htaccess file in the directory of the current request
- then look for a .htaccess file in every directory from there up to the server root
- coalesce all of these .htaccess files together
- reconfigure the webserver using the new settings
- finally, deliver the file
Every webpage can generate dozens of requests. This is overhead you don't need, and what's more, it's completely unnecessary.
Security and permission loss
Allowing individual users to modify the configuration of a server using .htaccess can cause security concerns if not handled properly. Any directive you add to a .htaccess file is treated as if it had been added to the Apache configuration file.
This means it may be possible for non-admins to write these files and thus 'undo' all of your security. If you need to do something temporary, .htaccess is a good place to do it; if you need to do something more permanent, just put it in your /etc/apache/sites-available/site.conf (or httpd.conf, or whatever your server calls it).
Summary
You should avoid using .htaccess files completely if you have access to the main httpd server configuration file. Anything that works in .htaccess will also work in your virtual host .conf file.
If you cannot avoid using .htaccess
files, you should follow these rules.
- use only one
.htaccess
file or as few as possible - place the
.htaccess
file in the site root directory - keep your
.htaccess
file short and simple
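If you do control the main server configuration, the usual approach is to disable .htaccess processing entirely and keep the directives in the vhost; a minimal sketch (the path is an example):
# In the virtual host / main configuration: ignore .htaccess files completely
<Directory "/var/www/example.com">
    AllowOverride None
    Require all granted
</Directory>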
Useful resources:
Is it safe to use SNI SSL in production? How to test connection with and without it? In which cases it is useful?
With OpenSSL:
# Testing connection to remote host (with SNI support)
echo | openssl s_client -showcerts -servername google.com -connect google.com:443
# Testing connection to remote host (without SNI support)
echo | openssl s_client -connect google.com:443 -showcerts
With GnuTLS:
# Testing connection to remote host (with SNI support)
gnutls-cli -p 443 google.com
# Testing connection to remote host (without SNI support)
gnutls-cli --disable-sni -p 443 google.com
How are cookies passed in the HTTP protocol?
The server sends the following in its response header to set a cookie field:
Set-Cookie:name=value
If there is a cookie set, then the browser sends the following in its request header:
Cookie:name=value
How to prevent a web server from processing requests with undefined server names? Can the lack of a default server name rule be a security issue? ***
To be completed.
You need to rewrite POST requests with their payload to an external API, but the POST requests lose the parameters they carry after the redirect. How to fix this problem (e.g. in Nginx), and what are the reasons for this behavior?
The issue is that external redirects will never resend POST data. This is written into the HTTP spec (check the 3xx
section). Any client that does do this is violating the spec.
POST data is passed in the body of the request, which gets dropped if you do a standard redirect.
Look at this:
+-------------------------------------------+-----------+-----------+
| | Permanent | Temporary |
+-------------------------------------------+-----------+-----------+
| Allows changing the request method from | 301 | 302 |
| POST to GET | | |
| Does not allow changing the request | 308 | 307 |
| method from POST to GET | | |
+-------------------------------------------+-----------+-----------+
You can try HTTP status code 307; an RFC-compliant client should repeat the POST request. You just need to write an Nginx rewrite rule with HTTP status code 307 or 308:
location / {
proxy_pass http://localhost:80;
client_max_body_size 10m;
}
location /api {
# HTTP 307 only for POST method.
if ($request_method = POST) {
return 307 https://api.example.com$request_uri;
}
# You can keep this for non-POST requests.
rewrite ^ https://api.example.com$request_uri permanent;
client_max_body_size 10m;
}
HTTP status code 307 or 308 should be used instead of 301, because 301 allows clients to change the request method from POST to GET, while 307 and 308 do not.
Useful resources:
What is the proper way to test NFS performance? Prepare a short checklist.
The best benchmark is always "the application(s) that you normally use". The load on an NFS system when 20 people simultaneously compile a Linux kernel is completely different from a bunch of people logging in at the same time, or from the accounts used as home directories for the local web server.
But we have some good tools for testing this.
- bonnie++ - a classic performance evaluation tool. The main program tests database-type access to a single file (or a set of files if you wish to test more than 1G of storage), and it tests creation, reading, and deleting of small files, which can simulate the usage of programs such as Squid, INN, or Maildir-format email.
- DBench - was written to allow independent developers to debug and test SAMBA. It is heavily inspired by the original SAMBA tool.
- IOzone - a filesystem performance test suite, POSIX and 64-bit compliant (the filesystem test from the L.S.E.). Main features: POSIX async I/O, mmap() file I/O, normal file I/O, single-stream and multiple-stream measurement, distributed file server measurements (cluster), POSIX pthreads, multi-process measurement, selectable measurements with fsync and O_SYNC, latency plots.
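A rough sketch of how two of these tools might be pointed at an NFS mount (the mount point, size and file count are example assumptions):
# bonnie++: use a dataset larger than RAM so the page cache doesn't hide the NFS latency
bonnie++ -d /mnt/nfs/benchmark -s 16g -n 128 -u nobody
# iozone: automatic mode, include fsync timings, write to a file on the NFS mount
iozone -a -e -f /mnt/nfs/benchmark/iozone.tmp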
You need to block several IPs from the same subnet. What is more efficient for the system: traversing the iptables rule set or using a black-hole route?
If you have a system with thousands of routes defined in the routing table and nothing in the iptables rules, then it might actually be more efficient to add an iptables rule.
In most systems however the routing table is fairly small, in cases like this it is actually more efficient to use null routes. This is especially true if you already have extensive iptables rules in place.
Assuming you're blocking based on source address and not destination, then doing the DROP in raw/PREROUTING would work well as you would essentially be able to drop the packet before any routing decision is made.
Remember however that iptables rules are essentially a linked-list and for optimum performance when blocking a number of addresses you should use an ipset
.
On the other hand if blocking by destination, there is likely little difference between blocking at the routing table vs iptables EXCEPT if source IPs are spoofed in which case the blackholed entries may consume routing cache resources; in this case, raw/PREROUTING remains preferable.
Your outgoing route isn't going to matter until you try to send a packet back to the attacker. By that time you will have already incurred most of the cost of socket setup and may even have a thread blocking waiting for the kernel to conclude you have no route to host, plus whatever error handling your server process does when it concludes there's a network problem.
iptables or another firewall will allow you to block the incoming traffic and discard it before it reaches the daemon process on your server. It seems clearly superior in this use case.
iptables -A INPUT -s 192.168.200.0/24 -j DROP
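Regarding the ipset suggestion above, a minimal sketch (the set name and addresses are examples); a single iptables rule then matches the whole set in roughly constant time:
# Create a hash-based set, fill it, and drop everything that matches it
ipset create blacklist hash:net
ipset add blacklist 192.168.200.0/24
ipset add blacklist 203.0.113.15
iptables -I INPUT -m set --match-set blacklist src -j DROP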
When you define a route on a Linux/Unix system it tells the system in order to communicate with the specified IP address you will need to route your network communication to this specific place.
When you define a null route, it simply tells the system to drop the network communication that is destined for the specified IP address. This means any TCP-based network communication will fail to be established, as your server will no longer send a SYN/ACK reply. UDP-based network communication will still be received, but your system will no longer send any response to the originating IP.
While iptables can accept tens of thousands of rules in a chain, the chains are walked sequentially until a match is found on every packet. So, lots of rules can lead to the system spending amazing amounts of CPU time walking through the rules.
The routing rules are much simpler than iptables. With iptables, a match can be based on many different variables including protocols, source and destination packets, and even other packets that were sent before the current packet.
In routing, all that matters is the remote IP address, so it's very easy to optimize. Also, many systems have a lot of routing rules. A typical system may only have 5 or 10, but something that's acting as a BGP router can have tens of thousands. So, for a very long time there have been extensive optimizations in selecting the right route for a particular packet.
In less technical terms this means your system will receive data from the attackers but no longer respond to it.
ip route add blackhole 192.168.200.0/24
or
ip route add 192.168.200.0/24 via 127.0.0.1
Useful resources:
How to run scp
with a second remote host?
With ssh
:
ssh user1@remote1 'ssh user2@remote2 "cat file"' > file
With tar
(with compression):
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj file"' | tar xj
With ssh
and port forwarding tunnel:
# First, open the tunnel
ssh -L 1234:remote2:22 -p 45678 user1@remote1
# Then, use the tunnel to copy the file directly from remote2
scp -P 1234 user2@localhost:file .
How can you reduce the load time of a dynamic website?
- webpage optimization
- cached web pages
- quality web hosting
- compressed text files
- apache/nginx tuning
What types of DNS cache are involved when you type api.example.com in your browser and press return?
Browser checks if the domain is in its cache (to see the DNS Cache in Chrome, go to chrome://net-internals/#dns
). When this cache fails, it simply asks the OS to resolve the domain.
The OS resolver has its own cache which it will check. If that fails too, it resorts to asking the DNS servers configured in the OS.
The OS configured DNS servers will typically be configured by DHCP from the router where the DNS servers are likely to be the ISP's DNS servers configured by DHCP from the internet gateway to the router.
If the router has its own DNS server, it may have its own cache; otherwise you are directed straight to your ISP's DNS servers as soon as the OS cache is found to be empty.
Useful resources:
What is the difference between Cache-Control: max-age=0
and Cache-Control: no-cache
?
When sent by the origin server
max-age=0 simply tells caches (and user agents) the response is stale from the get-go and so they SHOULD revalidate the response (e.g. with the If-Modified-Since or If-None-Match headers) before using a cached copy, whereas no-cache tells them they MUST revalidate before using a cached copy.
In other words, caches may sometimes choose to use a stale response (although I believe they have to then add a Warning header), but no-cache
says they're not allowed to use a stale response no matter what. Maybe you'd want the SHOULD-revalidate behavior when baseball stats are generated in a page, but you'd want the MUST-revalidate behavior when you've generated the response to an e-commerce purchase.
When sent by the user agent
If a user agent sends a request with Cache-Control: max-age=0 (aka "end-to-end revalidation"), then each cache along the way will revalidate its cache entry (e.g. with the If-Modified-Since or If-None-Match headers) all the way to the origin server. If the reply is then 304 (Not Modified), the cached entity can be used.
On the other hand, sending a request with Cache-Control: no-cache
(aka. "end-to-end reload") doesn't revalidate and the server MUST NOT use a cached copy when responding.
What are the security risks of setting Access-Control-Allow-Origin
?
By responding with Access-Control-Allow-Origin: *
, the requested resource allows sharing with every origin. This basically means that any site can send an XHR request to your site and access the server's response, which would not be the case if you hadn't implemented this CORS response.
So any site can make a request to your site on behalf of their visitors and process its response. If you have something implemented like an authentication or authorization scheme that is based on something that is automatically provided by the browser (cookies, cookie-based sessions, etc.), the requests triggered by the third party sites will use them too.
Create a single-use TCP or UDP proxy with netcat
.
### TCP -> TCP
nc -l -p 2000 -c "nc [ip|hostname] 3000"
### TCP -> UDP
nc -l -p 2000 -c "nc -u [ip|hostname] 3000"
### UDP -> UDP
nc -l -u -p 2000 -c "nc -u [ip|hostname] 3000"
### UDP -> TCP
nc -l -u -p 2000 -c "nc [ip|hostname] 3000"
Explain 3 techniques for avoiding firewalls with nmap
.
Use Decoy addresses
# Generates a random number of decoys.
nmap -D RND:10 [target]
# Manually specify the IP addresses of the decoys.
nmap -D decoy1,decoy2,decoy3
In this type of scan you can instruct Nmap to spoof packets from other hosts. In the firewall logs there will be not only our IP address but also the IP addresses of the decoys, so it will be much harder to determine from which system the scan started.
Source port number specification
nmap --source-port 53 [target]
A common error that many administrators make when configuring firewalls is to set up a rule that allows all incoming traffic coming from a specific source port. The --source-port option of Nmap can be used to exploit this misconfiguration. Common ports you can use for this type of scan are 20, 53, and 67.
Append Random Data
nmap --data-length 25 [target]
Many firewalls inspect packets by looking at their size in order to identify a potential port scan, because many scanners send packets of a specific size. To avoid that kind of detection you can use the --data-length option to add additional data and send packets with a different size than the default.
TCP ACK Scan
nmap -sA [target]
It is often better to send ACK packets rather than SYN packets, because if there is an active firewall on the remote computer it will usually not log the ACK packets, since firewalls treat an ACK packet as the response to a SYN packet.
Useful resources:
Devops Questions (5)
Explain how Flap Detection works in Nagios?
Flapping occurs when a service or host changes state too frequently; this causes a lot of problem and recovery notifications.
Whenever Nagios checks the status of a host or service, it checks whether that host or service has started or stopped flapping. Nagios follows this procedure (a sample configuration with flap detection enabled is shown after the list):
- storing the results of the last 21 checks of the host or service
- analyzing the historical check results and determining where state changes/transitions occur
- using the state transitions to determine a percent state change value (a measure of change) for the host or service
- comparing the percent state change value against low and high flapping thresholds
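A minimal sketch of the corresponding configuration (the host name and thresholds are example values, and other required host directives are omitted):
# nagios.cfg
enable_flap_detection=1
# host object definition
define host {
    host_name                web01
    flap_detection_enabled   1
    low_flap_threshold       5.0     ; stop considering the host flapping below 5% state change
    high_flap_threshold      20.0    ; start considering the host flapping above 20% state change
}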
What are the advantages that Containerization provides over Virtualization?
Below are the advantages of containerization over virtualization:
- containers provide real-time provisioning and scalability but VMs provide slow provisioning
- containers are lightweight when compared to VMs
- VMs have limited performance when compared to containers
- containers have better resource utilization compared to VMs
Is distributing Docker apps (e.g. Apache, MySQL) from Docker Hub a good approach for production environments? Describe security problems and possible solutions. ***
To be completed.
Some of the common use cases of LXC and LXD come from the following requirements... Explain.
- the need for an isolated development environment without polluting your host machine
- isolation within production servers and the possibility to run more than one service in its own container
- a need to test things with more than one version of the same software or different operating system environments
- experimenting with different and new releases of GNU/Linux distributions without having to install them on a physical host machine
- trying out a software or development stack that may or may not be used after some playing around
- installing many types of software in your primary development machine or production server and maintaining them on a longer run
- doing a dry run of any installation or maintenance task before actually executing it on production machines
- better utilization and provisioning of server resources with multiple services running for different users or clients
- high-density virtual private server (VPS) hosting, where isolation without the cost of full virtualization is needed
- easy access to host hardware from a container, compared to complicated access methods from virtual machines
- multiple build environments with different customizations in place
You have to prepare a Redis cluster. How will you ensure security?
- protect a given Redis instance from outside accesses via firewall
- binding it to 127.0.0.1 if only local clients are accessing it
- sandboxed environment
- enabling AUTH
- enabling Protected Mode
- data encryption support (e.g.
spiped
) - disabling of specific commands
- users ACLs
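Several of these points map directly onto redis.conf directives; a minimal sketch (the password and renamed command suffix are placeholders):
# redis.conf
bind 127.0.0.1                      # listen only on localhost
protected-mode yes                  # refuse remote clients when no bind/password is configured
requirepass <strong-password>       # enable AUTH
rename-command CONFIG ""            # disable a dangerous command entirely
rename-command FLUSHALL "FLUSHALL_<random-suffix>"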
Useful resources:
Cyber Security Questions (5)
What is OWASP Application Security Verification Standard? Explain in a few points. ***
To be completed.
What is CSRF?
Cross-Site Request Forgery is a web application vulnerability in which the server does not check whether the request came from a trusted client or not; the request is simply processed directly. This can be followed up with ways to detect it, examples, and countermeasures.
What is the difference between policies, processes and guidelines?
A security policy defines the security objectives and the security framework of an organisation. A process is a detailed, step-by-step how-to document that specifies the exact actions necessary to implement an important security mechanism. Guidelines are recommendations which can be customized and used in the creation of procedures.
What is a false positive and false negative in case of IDS?
When the device generates an alert for an intrusion which has not actually happened, this is a false positive. If the device does not generate any alert and an intrusion has actually happened, this is a false negative.
10 quick points about web server hardening.
Example:
- if the machine is a new install, protect it from hostile network traffic until the operating system is installed and hardened
- create a separate partition with the
nodev
,nosuid
, andnoexec
options set for/tmp
- create separate partitions for
/var
,/var/log
,/var/log/audit
, and/home
- enable randomized virtual memory region placement
- remove legacy services (e.g.
telnet-server
,rsh
,rlogin
,rcp
,ypserv
,ypbind
,tftp
,tftp-server
,talk
,talk-server
). - limit connections to services running on the host to authorized users of the service via firewalls and other access control technologies
- disable source routed packet acceptance
- enable TCP/SYN cookies
- disable SSH root login
- install and configure AIDE
- install and configure OSsec HIDS
- configure SELinux
- all administrator or root access must be logged
- integrity checking of system accounts, group memberships, and their associated privileges should be enabled and tested
- set password creation requirements (e.g. with PAM)
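For the partition-related points, a sketch of matching /etc/fstab entries (the device names and filesystem type are example assumptions):
# /etc/fstab
/dev/mapper/vg0-tmp   /tmp       ext4   defaults,nodev,nosuid,noexec   0 2
/dev/mapper/vg0-var   /var       ext4   defaults                       0 2
/dev/mapper/vg0-log   /var/log   ext4   defaults,nodev,nosuid,noexec   0 2
/dev/mapper/vg0-home  /home      ext4   defaults,nodev,nosuid          0 2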
Useful resources:
Secret Knowledge
:diamond_shape_with_a_dot_inside: Guru Sysadmin
Explain what is Event-Driven architecture and how it improves performance? ***
To be completed.
An application encounters some performance issues. You need to find the code that has to be optimized. How to profile an app in a Linux environment?
Ideally, I need a tool that will attach to a process and log periodic snapshots of: memory usage, number of threads, CPU usage.
- You can use top in batch mode. It runs in batch mode either until it is killed or until N iterations are done:
top -b -p `pidof a.out`
or
top -b -p `pidof a.out` -n 100
- You can use ps (for instance in a shell script):
ps --format pid,pcpu,cputime,etime,size,vsz,cmd -p `pidof a.out`
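To actually log periodic snapshots, the ps call can be wrapped in a small loop; a minimal sketch (the interval, log path and the a.out process name are example assumptions):
#!/usr/bin/env bash
# Append a timestamped snapshot of CPU, memory and thread count every 10 seconds
pid=$(pidof a.out)
while kill -0 "$pid" 2>/dev/null ; do
    echo "=== $(date '+%F %T') ===" >> /tmp/a.out-profile.log
    ps --format pid,pcpu,cputime,etime,size,vsz,nlwp,cmd -p "$pid" >> /tmp/a.out-profile.log
    sleep 10
done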
I need some means of recording the performance of an application on a Linux machine.
- To record performance data:
perf record -p `pidof a.out`
or to record for 10 secs:
perf record -p `pidof a.out` sleep 10
or to record with a call graph:
perf record -g -p `pidof a.out`
- To analyze the recorded data
perf report --stdio
perf report --stdio --sort=dso -g none
perf report --stdio -g none
perf report --stdio -g
This is an example of profiling a test program
- I run my test program (C++):
./my_test 100000000
- Then I record performance data of a running process:
perf record -g -p `pidof my_test` -o ./my_test.perf.data sleep 30
- Then I analyze load per module:
perf report --stdio -g none --sort comm,dso -i ./my_test.perf.data
# Overhead Command Shared Object
# ........ ....... ............................
#
70.06% my_test my_test
28.33% my_test libtcmalloc_minimal.so.0.1.0
1.61% my_test [kernel.kallsyms]
- Then load per function is analyzed:
perf report --stdio -g none -i ./my_test.perf.data | c++filt
# Overhead Command Shared Object Symbol
# ........ ....... ............................ ...........................
#
29.30% my_test my_test [.] f2(long)
29.14% my_test my_test [.] f1(long)
15.17% my_test libtcmalloc_minimal.so.0.1.0 [.] operator new(unsigned long)
13.16% my_test libtcmalloc_minimal.so.0.1.0 [.] operator delete(void*)
9.44% my_test my_test [.] process_request(long)
1.01% my_test my_test [.] operator delete(void*)@plt
0.97% my_test my_test [.] operator new(unsigned long)@plt
0.20% my_test my_test [.] main
0.19% my_test [kernel.kallsyms] [k] apic_timer_interrupt
0.16% my_test [kernel.kallsyms] [k] _spin_lock
0.13% my_test [kernel.kallsyms] [k] native_write_msr_safe
...
- Then call chains are analyzed:
perf report --stdio -g graph -i ./my_test.perf.data | c++filt
# Overhead Command Shared Object Symbol
# ........ ....... ............................ ...........................
#
29.30% my_test my_test [.] f2(long)
|
--- f2(long)
|
--29.01%-- process_request(long)
main
__libc_start_main
29.14% my_test my_test [.] f1(long)
|
--- f1(long)
|
|--15.05%-- process_request(long)
| main
| __libc_start_main
|
--13.79%-- f2(long)
process_request(long)
main
__libc_start_main
...
So at this point you know where your program spends time.
A simpler way to profile an app is to use the pstack or lsstack utilities.
Another tool is Valgrind. This is what I recommend; run the program first:
valgrind --tool=callgrind --dump-instr=yes -v --instr-atstart=no ./binary > tmp
Now when it works and we want to start profiling we should run in another window:
callgrind_control -i on
This turns profiling on. To turn it off and stop the whole task we might use:
callgrind_control -k
Now we have some files named callgrind.out.* in the current directory. To see the profiling results use:
kcachegrind callgrind.out.*
I recommend clicking on the Self column header in the next window; otherwise it shows main() as the most time-consuming task.
Useful resources:
You are using a Linux system with a limited number of packages installed, and telnet is not available. Use the sysfs virtual filesystem to test the link state on all interfaces (excluding loopback).
For example:
#!/usr/bin/env bash
state=0
# /sys/class/net/<iface>/carrier is 1 when the link is up
for iface in $(ls /sys/class/net/ | grep -v "^lo$") ; do
  if [[ $(cat /sys/class/net/$iface/carrier 2>/dev/null) = 1 ]] ; then state=1 ; fi
done
if [[ $state -eq 0 ]] ; then echo "no connection" > /dev/stderr ; exit 1 ; fi
Write two golden rules for reducing the impact of a hacked system.
- The principle of least privilege
You should configure services to run as a user with the least possible rights necessary to complete the service's tasks. This can contain a hacker even after they break in to a machine.
As an example, a hacker breaking into a system using a zero-day exploit of the Apache webserver service is highly likely to be limited to just the system memory and file resources that can be accessed by that process. The hacker would be able to download your html and php source files, and probably look into your mysql database, but they should not be able to get root or extend their intrusion beyond apache-accessible files.
Many default Apache webserver installations create the 'apache' user and group by default and you can easily configure the main Apache configuration file (httpd.conf
) to run apache using those groups.
- The principle of separation of privileges
If your web site only needs read-only access to the database, then create an account that only has read-only permissions, and only to that database.
SELinux is a good choice for creating security contexts; AppArmor is another tool. Bastille was a previous choice for hardening.
Reduce the consequences of any attack by confining the compromised service to its own "box".
- Whitelist, don't blacklist
This is a blacklist approach; a whitelist approach would be much safer.
An exclusive club will never try to list everyone who can't come in; they will list everyone who can come in and exclude those not on the list.
Similarly, trying to list everything that shouldn't access a machine is doomed. Restricting access to a short list of programs/IP addresses/users would be more effective.
Of course, like anything else, this involves some trade-offs. Specifically, a whitelist is massively inconvenient and requires constant maintenance.
To go even further in the tradeoff, you can get great security by disconnecting the machine from the network.
Also worth remembering: use the tools available. It's highly unlikely that you can do as well as the people who are security experts, so use their talents to protect yourself.
- public key encryption provides excellent security
- enforce password complexity
- understand why you are making exceptions to the rules above - review your exceptions regularly
- hold someone to account for failure, it keeps you on your toes
Useful resources:
You're on a security conference. Members debating about putting up the OpenBSD firewall on the core of the network. Go to the podium and express your opinion about this solution. What are the pros/cons and why? ***
To be completed.
Is there a way to allow multiple cross-domains using the Access-Control-Allow-Origin header in Nginx?
To match a list of domains and subdomains, this regex makes it easy to work with fonts:
location ~* \.(?:ttf|ttc|otf|eot|woff|woff2)$ {
if ( $http_origin ~* (https?://(.+\.)?(domain1|domain2|domain3)\.(?:me|co|com)$) ) {
add_header "Access-Control-Allow-Origin" "$http_origin";
}
}
A slightly more elaborate configuration:
location / {
if ($http_origin ~* (^https?://([^/]+\.)*(domainone|domaintwo)\.com$)) {
set $cors "true";
}
# Nginx doesn't support nested If statements. This is where things get slightly nasty.
# Determine the HTTP request method used
if ($request_method = 'GET') {
set $cors "${cors}get";
}
if ($request_method = 'POST') {
set $cors "${cors}post";
}
if ($cors = "true") {
# Catch all in case there's a request method we're not dealing with properly
add_header 'Access-Control-Allow-Origin' "$http_origin";
}
if ($cors = "trueget") {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
if ($cors = "truepost") {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
}
Explain :(){ :|:& };: and how to stop this code if you are already logged into a system?
It's a fork bomb.
- :() - this defines the function; : is the function name, and the empty parentheses show that it will not accept any arguments
- { } - these characters mark the beginning and end of the function definition
- :|: - this loads a copy of the function : into memory and pipes its output to another copy of :, which also has to be loaded into memory
- & - this puts the call in the background, so that the child processes will not get killed even if the parent gets killed
- the final : executes the function again, and hence the chain reaction begins
The best way to protect a multi-user system is to use PAM to limit the number of processes a user can use. We know the biggest problem with a fork bomb is the fact it takes up so many processes.
So we have two ways of attempting to fix this, if you are already logged into the system:
- execute a SIGSTOP command to stop the process:
killall -STOP -u user1
- if you can't run anything at the command line (because all available processes are used up), you will have to use exec to force it to run:
exec killall -STOP -u user1
With fork bombs, your best approach is to prevent them from becoming a big issue in the first place, e.g. with per-user process limits (see the sketch below).
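A sketch of such a preventive limit with pam_limits (the group name and values are example assumptions, and pam_limits must be enabled for the login services):
# /etc/security/limits.conf
# Limit the number of processes (nproc) per user to blunt fork bombs
@users     soft    nproc   2048
@users     hard    nproc   4096
root       hard    nproc   unlimited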
How to recover a deleted file that is still held open, e.g. by Apache?
If a file has been deleted but is still open, that means the file still exists in the filesystem (it has an inode) but has a hard link count of 0. Since there is no link to the file, you cannot open it by name. There is no facility to open a file by inode either.
Linux exposes open files through special symbolic links under /proc
. These links are called /proc/12345/fd/42
where 12345 is the PID of a process and 42 is the number of a file descriptor in that process. A program running as the same user as that process can access the file (the read/write/execute permissions are the same as when the file was deleted).
The name under which the file was opened is still visible in the target of the symbolic link: if the file was /var/log/apache/foo.log
, then the target of the link is /var/log/apache/foo.log (deleted)
.
Thus you can recover the content of an open deleted file given the PID of a process that has it open and the descriptor that it's opened on like this:
recover_open_deleted_file () {
old_name=$(readlink "$1")
case "$old_name" in
*' (deleted)')
old_name=${old_name%' (deleted)'}
if [ -e "$old_name" ]; then
new_name=$(TMPDIR=${old_name%/*} mktemp)
echo "$oldname has been replaced, recovering content to $new_name"
else
new_name="$old_name"
fi
cat <"$1" >"$new_name";;
*) echo "File is not deleted, doing nothing";;
esac
}
recover_open_deleted_file "/proc/$pid/fd/$fd"
If you only know the process ID but not the descriptor, you can recover all files with:
for x in /proc/$pid/fd/* ; do
recover_open_deleted_file "$x"
done
If you don't know the process ID either, you can search among all processes:
for x in /proc/[1-9]*/fd/* ; do
case $(readlink "$x") in
/var/log/apache/*) recover_open_deleted_file "$x";;
esac
done
You can also obtain this list by parsing the output of lsof
, but it isn't simpler nor more reliable nor more portable (this is Linux-specific anyhow).
The team of admins needs your support. You must remotely reinstall the system on one of the main servers. There is no access to the management console (e.g. iDRAC). How to install Linux on a disk from which another Linux is currently running?
In other words, the task is: installing a system from within, and in place of, an already running system.
Using the Debian GNU/Linux distribution as an example.
- Creating a working directory and downloading the system using the debootstrap tool.
_working_directory="/mnt/system"
mkdir $_working_directory
debootstrap --verbose --arch amd64 {wheezy|jessie} . http://ftp.en.debian.org/debian
- Mounting sub-systems:
proc
,sys
,dev
anddev/pts
.
for i in proc sys dev dev/pts ; do mount -o bind $i $_working_directory/$i ; done
- Copy system backup for restore.
cp system_backup_22012015.tgz $_working_directory/mnt
However, it is better not to waste space and do it in a different way (assuming that the copy is in /mnt/backup
):
_backup_directory="${_working_directory}/mnt/backup"
mkdir $_backup_directory && mount --bind /mnt/backup $_backup_directory
- Chroot to "new" system.
chroot $_working_directory /bin/bash
- Updating information about mounted devices.
grep -v rootfs /proc/mounts > /etc/mtab
- In the "new" system, the next thing to do is mount the disk on which the "old" system is located (e.g.
/dev/sda1
).
_working_directory="/mnt/old_system"
_backup_directory="/mnt/backup"
mkdir $_working_directory && mount /dev/sda1 $_working_directory
- Remove all files of the old system.
cd $_working_directory && for i in $(ls | awk '!(/proc/ || /dev/ || /sys/ || /mnt/)') ; do rm -fr $i ; done
- The next step is to restore the system from a backup.
tar xzvfp $_backup_directory/system_backup_22012015.tgz -C $_working_directory
- And mount
proc
,sys
,dev
anddev/pts
in a new working directory.
for i in proc sys dev dev/pts ; do mount -o bind $i $_working_directory/$i ; done
- Install and update grub configuration.
chroot $_working_directory /bin/bash -c "grub-install --no-floppy --root-directory=/ /dev/sda"
chroot $_working_directory /bin/bash -c "update-grub"
- Unmount
proc
,sys
,dev
anddev/pts
filesystems.
cd
grep $_working_directory /proc/mounts | cut -f2 -d " " | sort -r | xargs umount -n
None of the available commands, i.e. halt, shutdown or reboot, will work. You need to reload the system configuration; to do this, use the kernel's magic SysRq interface (without the 'b' option):
echo 1 > /proc/sys/kernel/sysrq
# /proc/sysrq-trigger processes one key per write, so send the keys one at a time
for key in r e i s u ; do echo "$key" > /proc/sysrq-trigger ; done
Of course, it is recommended to fully restart the machine in order to completely load the current system. To do this:
sync ; reboot -f
Rsync triggered Linux OOM killer on a single 50 GB file. How does the OOM killer decide which process to kill first? How to control this?
Major distribution kernels set the default value of /proc/sys/vm/overcommit_memory
to zero, which means that processes can request more memory than is currently free in the system.
If memory is exhaustively used up by processes, to the extent which can possibly threaten the stability of the system, then the OOM killer comes into the picture.
NOTE: It is the task of the OOM killer to continue killing processes until enough memory is freed for the smooth functioning of the rest of the processes that the kernel is attempting to run.
The OOM Killer has to select the best process(es) to kill. Best here refers to that process which will free up the maximum memory upon killing and is also the least important to the system.
The primary goal is to kill the least number of processes that minimizes the damage done and at the same time maximizing the amount of memory freed.
To facilitate this, the kernel maintains an oom_score
for each of the processes. You can see the oom_score of each of the processes in the /proc
filesystem under the pid directory.
When analyzing OOM killer logs, it is important to look at what triggered it.
cat /proc/10292/oom_score
The higher the value of oom_score
of any process, the higher is its likelihood of getting killed by the OOM Killer in an out-of-memory situation.
If you want to create a special control group containing the list of processes which should be the first to receive the OOM killer's attention, create a directory under /mnt/oom-killer
to represent it:
mkdir lambs
Set oom.priority
to a value high enough:
echo 256 > /mnt/oom-killer/lambs/oom.priority
oom.priority is a 64-bit unsigned integer. While scanning for the process to be killed, the OOM killer selects a process from the list of tasks with the highest oom.priority value.
Add the PID of the process to be added to the list of tasks:
echo <pid> > /mnt/oom-killer/lambs/tasks
To create a list of processes, which will not be killed by the OOM-killer, make a directory to contain the processes:
mkdir invincibles
Setting oom.priority to zero excludes all the processes in this cgroup from the list of target processes to be killed.
echo 0 > /mnt/oom-killer/invincibles/oom.priority
To add more processes to this group, add the pid of the task to the list of tasks in the invincible group:
echo <pid> > /mnt/oom-killer/invincibles/tasks
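If the oom.priority cgroup interface is not available on your kernel, the per-process knob exposed under /proc is oom_score_adj; a minimal sketch (replace <pid> with a real PID):
# Make a process the preferred OOM victim (valid range is -1000 .. 1000)
echo 1000 > /proc/<pid>/oom_score_adj
# Exempt a critical process from the OOM killer as far as possible
echo -1000 > /proc/<pid>/oom_score_adj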
Useful resources:
You have a lot of sockets hanging in TIME_WAIT. Your HTTP service behind a proxy serves a lot of small HTTP requests. How to check and reduce TIME_WAIT sockets? ***
To be completed.
Useful resources:
How do SO_REUSEADDR
and SO_REUSEPORT
differ? Explain all socket implementations. ***
To be completed.