

Ctrl-w  delete previous word

#ProTip: || and && are shell control operators for conditional execution:

command1 || command2   # command2 runs only if command1 has failed (non-zero exit status)
command1 && command2   # command2 runs only if command1 has succeeded (zero exit status)
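A quick way to see both operators in action, using the no-op commands true and false:

```shell
# false always fails, so the fallback after || runs
false || echo "ran because the first command failed"

# true always succeeds, so the command after && runs
true && echo "ran because the first command succeeded"

# chain both: try something, then report success or failure
mkdir -p /tmp/demo_dir && echo "created" || echo "could not create"
```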


cp -rf a/* b/    if this doesn't work, it is because cp is aliased to something else with some flags set (often cp -i).
unalias a command for the session by prefixing it with \    e.g. \cp
OR use     $ yes | cp -rf a/* b/

systemctl --> handles system services

ubuntu disable fancy view theme
sudo apt-get install gnome-session-flashback
Once installed, log out; on the login screen click the Ubuntu badge beside the text field and select "GNOME Flashback (Compiz)" (if you want Compiz) or "GNOME Flashback (Metacity)" from the session menu.

disable window animations
$ sudo apt-get install unity-tweak-tool 
$ unity-tweak-tool   

ultimate ubuntu speed up in virtualbox link
Execute the following command to see whether 3D acceleration is being used:
$ /usr/lib/nux/unity_support_test -p
It will probably say:

Unity 3D supported:       no
Now that’s bad news, because the graphical interface of Ubuntu makes your whole system slow and laggy.
So first of all, make sure you have the VirtualBox Guest additions installed.
Once this is installed, we now install the vboxvideo driver:

$ sudo bash -c 'echo vboxvideo >> /etc/modules'
Now shut down Ubuntu, open the settings of the virtual machine, go to 'Display', and tick 'Enable 3D Acceleration'.

Move faster in bash
Ctrl-a   beginning of line
Ctrl-e   end of line

shuffle a text file line by line:       
$ cat a.csv | while IFS= read -r f; do printf "%05d %s\n" "$RANDOM" "$f"; done | sort -n | cut -c7-         # Read the file, prepend every line with a random number, sort the file on those random prefixes, cut the prefixes afterwards.
$ F=a.csv; cat $F | while IFS= read -r f; do printf "%05d %s\n" "$RANDOM" "$f"; done | sort -n | cut -c7-  > $F.shuffled
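On systems with GNU coreutils, shuf does the same job in one step (the filenames are the same placeholders as above):

```shell
shuf a.csv > a.csv.shuffled    # shuffle the lines of a.csv
shuf -n 10 a.csv               # 10 random lines (sample without replacement)
```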

for i in $(seq 1 10); do time curl -s <url> > /dev/null; done 2>&1 | grep real   # time 10 requests against <url>

Copy as cURL
Chrome > DevTools > reload a page, right-click on a request that has been downloaded, and choose "Copy as cURL". It puts the equivalent curl command (including any auth headers required) in the paste buffer.

jq commandline parsing of json. $ brew install jq

Ctrl-r   keep pressing Ctrl-r to see previous matches
$ cat * | grep -C5 exception     # also print 5 lines before and after

grep -v 'RuntimeException'   select lines that don't match

perf-top(1): System profiling tool - Linux man page

$ comm -23 file1 file2   # lines that are in file1 but not in file2 (both files must be sorted)
$ comm -23 file2 file1   # lines that are in file2 but not in file1

if you have a separate drive for /boot and it is full: make a new directory somewhere and move a few of the kernel files of the same version to that folder, BUT keep the last two versions. link

grep 'create' -ir --exclude-dir={\*node_modules\*,\*migrations\*} .

mkdir t && cd $_     # $_ = last argument of the previous command
For a specific argument of the previous command:
!:1    !:2     !:1-2

Use zgrep to grep a gzip (gz) file

Turn off Monitor from command line
$ xset dpms force off
and to turn the monitor back on:
$ xset dpms force on
You can also check the status of the X server settings by using:
$ xset -q

batch rename:    $ rename .src .c *   # renames all files in current directory with extension .src to .c

Shell Built-in Variables    Meaning
$#    Number of command-line arguments. Useful to test the no. of command-line args in a shell script.
$*    All arguments to the shell
$@    Same as above
$-    Options supplied to the shell
$$    PID of the shell
$!    PID of the last started background process (started with &)
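A throwaway script to watch these variables live (the name demo.sh is made up; run it as e.g. ./demo.sh a b c):

```shell
#!/bin/bash
# demo.sh -- print the shell's built-in variables
echo "number of args: $#"
echo "all args:       $*"
echo "shell options:  $-"
echo "shell PID:      $$"
sleep 1 &                        # start a background job...
echo "background PID: $!"        # ...so $! has something to report
wait                             # reap the background job before exiting
```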

To pause a process: Ctrl-z in the terminal it is running in. To resume: $ fg (foreground) or $ bg (keep it running in the background)

$ file a.txt    # determine file type

$ time sleep 5   # time track how much time a process takes to finish

watch "ls -lSh /media/sde/entitySIs"
watch "grep '^>' run*Log.txt"   # to check whether a new line beginning with > has appeared in the text files in this directory

daemontools: a toolset for UNIX services. supervise monitors a service: it starts the service and restarts it if it dies. softlimit runs another program with new resource limits.

printenv    print environment variables
Who is online?   $ w   $ uptime

What Linux version are you running?   
cat /proc/version       ; cat /etc/*release*      ; uname -a      ; whoami   ; hostname  ;  find /etc/ -type f -maxdepth 1  -name "*release*" 2> /dev/null | xargs cat

Everything on the system that produces or accepts data is treated as a file; this includes hardware devices like disk drives and terminals.

man: The man command is used to show you the manual of other commands. Try "man man" to get the man page for man itself. e.g. man nautilus
info -->  lists system commands and a brief description of what they do.

CTRL + L ---> for location bar
Split view to browse two different locations at the same time: View->Extra Pane
Use tabs Ctrl-T

Switch desktop  Ctrl-Alt ArrowKeys
Move windows between dektops/workspaces    Shift-Ctrl-Alt ArrowKeys
Lock Screen     Ctrl-Alt L
Show Desktop Ctrl-Alt-D   OR  SUPER-D
Open home folder    SUPER-Num1

su logs you in as the root user for all commands that follow; use exit to log out of su
OR $ sudo su
sudo runs just one command as root
$ gksudo    # run graphical applications using this: "You should never use normal sudo to start graphical applications as Root. You should use gksudo."

$ sudo shutdown -h now
$ sudo reboot
$ gnome-session-save --kill    //to log out or switch user

Daily Works (bash commands)

/ means filesystem root (/home --> the / specifies the filesystem root)
./ means current directory
` back quote (found on the key with the ~)
The command inside back quotes executes as if it were the only command on the command line; its output is substituted in its place. For example, echo "The contents of this directory are " `ls -l` > dir.txt prints nothing on the console, but creates a file containing "The contents of this directory are " followed by the result of ls -l. Since the result of ls could be a very large list of files, the command that takes the substitution of `ls` as its parameters may not be able to handle all of it; for such cases xargs is the suggested choice.
~ means user home directory == $HOME
! history e.g. ps aux | grep yp  then !ps will execute command with ps in its content
; to run multiple commands at the shell
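A small demo of the back-quote substitution above, plus the xargs alternative for long lists:

```shell
# command substitution: the inner command's output becomes arguments/text
echo "Files here: `ls`"       # old back-quote form
echo "Files here: $(ls)"      # modern $(...) form, which also nests cleanly

# for long lists, pipe into xargs instead of substituting inline;
# -n 3 calls echo with at most 3 names per invocation
ls | xargs -n 3 echo
```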

cd -  --> changes to the directory you were in before the current one (return where you were)
cd .. --> parent directory
cd    --> (with no argument) user home
cd Desktop/

Terminal Profile
Screen size : 105 x 33
Font : Monospace 9
 Clear Terminal:    $ reset

 Wildcard Matches
 ?  Any single character
 *  Any string of characters
 [set]  Any character in set
 [!set] Any character not in set
ls *.[cho] : any file ending in .c or .h or .o
ls /usr*/[be]* : any file in directories starting with usr and the file name begins with b or e
Brace Expansion  b{ar{d,n,k},ed}s   --> This will result in the expansion bards, barns, barks, and beds
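Brace expansion can be tried directly with echo (nothing is created; you just see the expansion), and it is handy with mkdir:

```shell
echo b{ar{d,n,k},ed}s               # → bards barns barks beds
echo {a..e}                         # ranges expand too: a b c d e
mkdir -p project/{src,docs,tests}   # create several directories at once
```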
Multiple Commands per line
$ cd docs; mkdir old_docs; mv *.* old_docs

is the same as
$ cd docs
$ mkdir old_docs
$ mv *.* old_docs 
 Keyboard Special Signals
Ctrl-c  =  ^C   (default action: terminate the process) SIGINT
Ctrl-\  =  ^\   (default action: terminate the process and dump to a core file) SIGQUIT
Ctrl-z  =  ^Z   (default action: suspend the process) SIGTSTP
Ctrl-d  =  ^D   end-of-input (EOF) on the terminal; not a signal
Ctrl-r  search previously typed commands
Shift-PageUp    see upper screens of terminal
Note: These are overridable by the running process
Background process (non-blocking): run a command in the background and get the prompt back immediately by following the command name with an ampersand (&), e.g.  $ uncompress gcc.tar &
to check for background jobs :  $ jobs   will print running jobs
  $ history

$ more : page by page output scrolling

$ history | more

history: you can also have a look at word designators and how to access arguments to former commands
 !  Start a history substitution, except when followed by a space, tab, the end of the line, `=' or `('.
 !!  Refer to the previous command.
 !string Refer to the most recent command starting with string.
 !?string[?]  Refer to the most recent command containing string. The trailing `?' may be omitted if the string is followed immediately by a newline.

I/O Redirection  standard input / standard output / standard error
program < file_path         # file treated as standard input 
program > file_path         # standard output
program 1>sys_out_file_path 2>sys_err_file_path    # no space between the stream number and >
program >> file_path  will append instead of overwrite
*/10 * * * * /bin/execute/this/ >> /var/log/script_output.log 2>&1     //a record in crontab (note: 2>&1 must come AFTER the >> redirection)
There's standard output (STDOUT) and standard errors (STDERR). STDOUT is marked 1, STDERR is marked 2. So the following statement tells Linux to store STDERR in STDOUT as well, creating one datastream for messages & errors:
Now that we have 1 output stream, we can pour it into a file. Where > will overwrite the file, >> will append to the file. In this case we'd like to append:
>> /var/log/script_output.log
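The order of redirections matters. A sketch with a throwaway helper function (the name `both` is made up for the demo):

```shell
# helper that writes one line to each stream
both() { echo "to stdout"; echo "to stderr" >&2; }

both > out.log 2>&1       # both streams land in out.log:
                          # stdout is redirected first, then stderr is duped onto it
both 2>&1 > out.log       # stderr still goes to the terminal: 2>&1 duplicated
                          # the ORIGINAL stdout before it was redirected
both > out.log 2> err.log # split the streams into separate files
```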

ls -la | grep -i 'm' --> (similar to dir) show all files (-a) and their details (-l) whose names contain 'm'; the -i on grep ignores case
ls -ltra  --> show all files(-a) sort by modification time (t) reverse the order of sort (r) and their details(-l)
$ ls -lSrh     # Sort by size, human readable.                               ls -lSh
ls -shX | more --> show the size (s) show in human readable format (h) sort by extension (X)
$ du -ah    --> recursive file size
$ tree -sh
$ tree -d   -->Tree structure of the directories    $ sudo apt-get install tree

cp index.html i.php --> copy 'index.html' to the current directory with new name 'i.php'.
This was done because if the extension of the file is not php, the server does not recognize it as a php file, hence there's no php code executed
cp -v:  verbose(explain what's going on)
sudo cp -r jre/ /usr/local/ --> copy the subdirectory jre to /usr/local/ r=recursively

mkdir -p work/junk/questions 
make parent directory if not available

rm file.extension --> remove file
rm -r directory --> remove directory
rm -f --> no more confirmation
rm /path/*
rm `find /path -type f`
find /path -type f -print0 | xargs -0 rm
find feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist.
find /path -type f -exec rm '{}' \;
calls rm once for every single file.
 find /path -type f -exec rm '{}' +
same as xargs version
xargs command is designed to construct argument lists and invoke other utility. xargs reads items from the standard input or pipes, delimited by blanks or newlines, and executes the command one or more times with any initial-arguments followed by items read from standard input. Blank lines on the standard input are ignored.
xargs covers the same functionality as the backquote but is more flexible and often also safer, especially if there are blanks or special characters in the input. It is a good companion for commands that output long lists of files like find, locate and grep, but only if you use -0, since xargs without -0 deals badly with file names containing ', " and space.
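A minimal sketch of why -0 matters, using a throwaway directory and a filename containing a space:

```shell
# create two files, one with a space in its name
mkdir -p /tmp/x0demo && cd /tmp/x0demo
touch "plain.txt" "with space.txt"

# naive pipe splits "with space.txt" into two bogus arguments
find . -type f | xargs ls

# NUL-delimited handoff handles any filename safely
find . -type f -print0 | xargs -0 ls
```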

find /Users/morteza/zProject/  -type d -depth 1 -exec sh -c "echo {}; git --git-dir={}/.git --work-tree={} pull"  \;

find . -name "*.foo" | xargs grep bar   ===     grep bar `find . -name "*.foo"`
These commands will not work as expected if there are whitespace characters, including newlines, in the filenames. In order to avoid this limitation one may use:   find . -name "*.foo" -print0 | xargs -0 grep bar
You can also use -L to limit the number of arguments. If you do that, the command will be run repeatedly until it is out of arguments. Thus, -L1 runs the command once for each argument (needed for tools like tar and such).
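For example, grouping two input lines per invocation:

```shell
seq 1 5 | xargs -L 2 echo
# → 1 2
#   3 4
#   5
```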

$ find . -name "*.bak" -print0 | xargs -0 -I {} mv {} ~/old.files  ---> -I assigns a name to the input list and makes it possible to locate the exact position that the input list should be put in the command xargs will invoke.

$ find /etc -name '*.conf' | xargs head -q -n 2

"Argument list too long"?
$ file /directory/*   # may fail when the glob expands to too many names
INSTEAD use find and xargs

Search File (Content)

-i ignore case
-r recursively search  all files under each directory
-q  quiet: don't print the line you found it
grep -ir 'hw_abstract' .       recursively find files containing that content
grep -v    negative matching:  -v, --invert-match select non-matching lines
-name 'fileName'
-iname ignore case
-type d | f | s   (dir, file, socket)
-exec command {} \;    
{} is replaced by the current file name: forks a new process for each call, whereas xargs appends file names to one and forks one process to work on the file list.
-ok command Same as -exec, but asks for confirmation first.
-not  (e.g. -not -name 'myFile*')
Print the names, dates, sizes, and so on of matching files.
-size +10000k


find / -name java   # find is only good to get the name of the files
$ find . -name "*.orig" -print0 | xargs -0 rm

$ find . -type f -exec ls -s {} \; | sort -n  | head -5       #  5 largest files in a directory

find `pwd` -name .htaccess     find with absolute names
find /home/username/xhtml_repos -name '*.xhtml' -print | xargs perl -z     //-print shows which file is being processed
find . -name '*.htslp' -exec grep -l 'mortez' {} \;    --> find all files with name ending in htslp where the text 'mortez' is inside it
find . -exec grep -l 'mortez' {} \;
find / -name '*was*' 2>/dev/null  

find -name '000*' -exec ls -l {} \;  --> call ls -l on the found results, prints the details of the found files

$ find /mp3-collection -name 'Metallica*' -and -size +10000k
$ find /mp3-collection -size +10000k ! -name "Metallica*"
$ find /mp3-collection -name 'Metallica*' -or -size +10000k

Create Alias for Frequent Find Operations:
$ alias rmao="find . -iname a.out -exec rm {} \;"
$ rmao
Search File Contents
find . -exec grep -qi "textInsideTheFile" {} \; -print 2>/dev/null
2>/dev/null   --> while searching entire disk, you get error messages for places with a permission deny error, to avoid these too many messages that sometimes dominate the result, use this, or $ find / -name 'program.c' 2>errors.txt
Note : 2>/dev/null is not related to find tool as such. 2 indicates the error stream in Linux, and /dev/null is the device where anything you send simply disappears. So 2>/dev/null in this case means that while finding for the files, in case any error messages pop up simply send them to /dev/null i.e. simply discard all error messages.
* Alternatively you could use 2>error.txt where after the search is completed you would have a file named error.txt in the current directory with all the error messages in it.


Search inside zip or jar files (for a file name)
find . -name '*.zip' -or -name '*.jar' | while read -r file; do
  echo "Listing coincidences inside ${file}:"
  unzip -l "$file" | grep "fileName"
done

pwd -P get current directory, ignoring symbolic links

$ zcat         zless
$ zip -0 -r zipFileName directory_or_filenames    -0 store, -9 the best compression
$ tar -cf project1.tar project1_directory
$ gzip project1.tar  //make tar.gz

tar -zcvf archive-name.tar.gz directory-name        -z: Compress archive using gzip program, -c: Create archive, -v: Verbose i.e display progress while creating archive, -f: Archive File name

$ unzip zipFileName -d ./ppp
$ unzip -l zipFileName    //list content of zip file or jar file, etc
$ unzip \*.zip
$ gzip -d file.gz
$ tar zxf file.tar.gz -C directory
$ tar zxf file.tgz
$ tar jxf file.tar.bz2
$ tar jxf file.tbz2
$ tar -xvf yourfile.tar
for i in *.tar.gz; do tar xvzf $i -C path/to/output/directory; done   # extract all tarballs into a directory

To unzip and rezip:
unzip archive.zip -d temp
zip -r archive.zip temp/*
rm -r temp/

Batch Rename

$ rename s/"SEARCH"/"REPLACE"/g *

Run/Install Programs
apt-cache search samba --> search the package cache for samba (lists available packages, not just installed ones)
sudo apt-get install phpmyadmin  --> install software called phpmyadmin

sudo apt-get remove phpmyadmin
sudo apt-get autoremove                  To remove packages that were automatically installed to satisfy dependencies for some package and that are no longer needed.

sudo chown -R morteza (OR $USER) .     --> take ownership of a directory so that it doesn't ask for a password every time you make any mere change

To Execute
To make a file have executable attributes
$ chmod +x my_program
To run it
$ ./my_program
The dot and slash at the start of this command mean to find the program in the current working directory. This prevents the shell (terminal) from executing some other command with a similar name found in $PATH.

Less Common Commands

$ ls -l
  drwxrwxrwx  1  username  users  2525 Feb 18 09:17 index.htm
  │└┬┘└┬┘└┬┘ │      │        └─ group the file belongs to
  │ │  │  │  │      └─ user who owns the file
  │ │  │  │  └─ number of links (directory entries that refer to the file)
  │ │  │  └─ permissions for others (users who are neither the owner nor in the group [WORLD!])
  │ │  └─ permissions for the group (people in the group)
  │ └─ permissions for the user who owns the file
  └─ d=directory, -=file, l=link, etc.
Every file on your Linux system, including directories, is owned by a specific user and group. Therefore, file permissions are defined separately for users, groups, and others.

User: The username of the person who owns the file. By default, the user who creates the file will become its owner.

Group: The usergroup that owns the file. All users who belong to the group that owns the file will have the same access permissions to the file. This is useful if, for example, you have a project that requires a bunch of different users to be able to access certain files, while others can't. In that case, you'll add all the users into the same group, make sure the required files are owned by that group, and set the file's group permissions accordingly.

Other: A user who isn't the owner of the file and doesn't belong in the same group the file does. In other words, if you set a permission for the "other" category, it will affect everyone else by default. For this reason, people often talk about setting the "world" permission bit when they mean setting the permissions for "other."
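A sketch of the shared-project setup described above; the group name webteam, the user alice, and the path /srv/project are all hypothetical:

```shell
# hypothetical names: group "webteam", member "alice", directory /srv/project
sudo groupadd webteam
sudo usermod -aG webteam alice        # add each collaborator to the group
sudo chgrp -R webteam /srv/project    # hand the directory tree to the group
sudo chmod -R g+rw /srv/project       # group members may read and write
sudo chmod g+s /srv/project           # setgid on the dir: new files inherit the group
```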
Which user?
u user/owner
g group
o other
a all
What to do?
+ add this permission
- remove this permission
= set exactly this permission
Which permissions?
r read
w write
x execute

chmod u+wx testfile 
chmod ug-x testfile
chmod a=r testfile

Which number?
0 ---
1 --x
2 -w-
3 -wx
4 r--
5 r-x
6 rw-
7 rwx

chmod 640 testfile
chmod 755 testfile  ::: you read/write/execute; group and others read and execute
chmod 600 testfile   only you can rw, others can't even see it

To make sure that group ownership is inherited on future files:
chmod -R g+s /tmp/mytemp*

$ ssh user@host.domain.etc  OR user@ip_address

$ sudo tcpdump -nne to view mac addresses and port numbers

finger queries a (possibly remote) computer for a list of currently logged-in users. It works by connecting to a finger daemon (usually named fingerd) that listens on port 79. finger makes its request of fingerd using a custom "finger" protocol, and fingerd replies with the appropriate information.
$ finger    reports all users of local machine
$ finger username  all details of the user
$ finger @computerName  all users of specified host

$ expr 2 + 3   # evaluate expression

$ which  firefox 
/usr/bin/firefox      will give the actual directory of the command file

$ top -p `pgrep -f firefox`                # only the process you are interested in      # table of processes
$ htop    similar to top more interactive                    $ glances
$ htop -d 100  # update htop every 10 seconds (-d is in tenths of a second)
$ iotop   table of processes for disk io measurements 
$ dstat   ditto: versatile resource statistics

ps aux            processes snapshot list 
pstree process list in a tree
$ top  #table of processes

$ ps aux | grep httpd
OR   $ ps -fu postgres

kill -9 pid     kill the process with that PID
pkill -9 -f .*Pi.*

Memory - Disk space
$ free -t -m       check how much memory you have in megabytes
$ df -h available disk space
$ du -sh directory size
$ du -hs /home/* | sort -rh   # -h on sort understands human-readable sizes
$ gparted        # format and partition

$ lsblk    list disks, raid , etc

lsof list open files    lsof -Pnl +M -i4   list ip4 ports


Linux Pipeline Execution

$ cat fred barney | sort | ./your_program | grep something | lpr

Output character stream of the first command will be the input character stream of the second command.
This line says that the cat command should print out all of the lines of file fred followed by all of the lines of file barney. Then that output should be the input of the sort command, which sorts those lines and passes them on to your_program. After it has done its processing, your_program will send the data on to grep, which discards certain lines in the data, sending the others on to the lpr command, which should print everything that it gets on a printer. Whew!

Pipelines like that are common in Unix and many other systems today because they let you build powerful, complex commands out of simple, standard building blocks. Each building block does one thing very well, and it’s your job to use them together in the right way.

There’s one more standard I/O stream. If (in the previous example) your_program had to emit any warnings or other diagnostic messages, those shouldn’t go down the pipeline. The grep command is set to discard anything that it hasn’t specifically been told to look for, and so it will most likely discard the warnings. Even if it did keep the warnings, you probably don’t want to pass them downstream to the other programs in the pipeline. So that’s why there’s also the standard error stream: STDERR. Even if the
standard output is going to another program or file, the errors will go to wherever the user desires. By default, the errors will generally go to the user’s display screen,* but the user may send the errors to a file with a shell command like this one:

     $ netstat | ./your_program 2>/tmp/my_errors

Also, generally, errors aren’t buffered. That means that if the standard error and standard output streams are both going to the same place (such as the monitor), the errors may appear earlier than the normal output. For example, if your program prints a line of ordinary text, then tries to divide by zero, the output may show the message about dividing by zero first, and the ordinary text second.

You are using | (pipe) to direct the output of a command into another command. What you are looking for is && operator to execute the next command only if the previous one succeeded: cp /templates/apple /templates/used && mv /templates/apple /templates/inuse

If you want to save the output of program1 into a file and pipe it into program2, you can use tee(1):

program1 arg arg | tee output-file | program2 arg arg
to allow the second program to process data as it comes out from the first program, before the first program has completed its operation.

System Parameters

gconf-editor   ===   Alt+F2 gconf-editor
"Configuration Editor - Directly edit your entire configuration database. - Linux registry" The Configuration Editor is often referred to as "GConf".
GConf provides a central storage location for preferences,    ---/apps/gnome-system-tools/users

Standard Input (Console) as File Parameter

Hyphen Command-line Argument: If you give no invocation arguments, the program should process the standard input stream. Or, as a special case, if you give just a hyphen as one of the arguments, that means standard input as well.
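For example, many standard tools honor the hyphen convention:

```shell
echo "hello" | cat -           # "-" names standard input; prints: hello
printf '2\n1\n3\n' | sort -n -  # sort reads stdin when given "-"
```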

Linux Important 


/usr/lib   :  contains shared objects of the libraries, which can be used by other programs but not for development
/usr/include    :   contains the header files of the libraries you might use to compile (make / make install) other packages.
For example, libxml2: the library itself was installed and available, but when I wanted to build a package that required libxml2, the build couldn't find it, and the error message said you might need the -devel (.h header) files. When we checked, /usr/include had no libxml2 headers; we installed them via sudo apt-get install libxml2-dev, and the build then recognized libxml2.


/var/log/syslog    : 
/etc/environment    : system environment variables    
/home/username/.local/share/Trash  or    /root/.local/share/Trash

Network - Internet

Default Gateway
$ route
look for something like 
default         UG    0      0        0 eth0
$ ip route

DNS server
Right click on the network sign, it will show
$ cat /etc/resolv.conf

Renew IP
$ ifdown eth0 && ifup eth0
OR set manually $ ifconfig ethX inet
OR $ sudo dhclient
OR $ sudo /etc/init.d/networking restart
$ sudo tcpdump -s0 -i eth0
Prints every activity of the network adapter along with its protocol details! Add -A and it even prints the whole packet! wow.  This guy could tell that the Comcast applet was not using a secure connection!

Might not work at first; in Synaptic / the update manager, mark samba-common-bin for upgrade, that's it!

Check internet connectivity
$ nslookup
OR $ wget to download a whole webpage
OR for text-based browsing install and use lynx

Name an IP
if you don't have a name associated with an ip, you can do it internally by modifying the following file:
$ vim /etc/hosts

Firefox initial home page:   chrome://ubufox/content/startpage.html

active ports and network connections
$ netstat -atun

which ports you are listening on:
$ netstat -an | grep "LISTEN "

Modify Resolution (Virtual Machine)!

If you have installed Linux as a virtual machine and it shows in a tiny 800x600 pixel screen and does not adjust to the full resolution of your monitor:
    install system-config-display (sudo apt-get install system-config-display or yum install system-config-display)
run it and set the hardware monitor to the resolution you wish

Symbolic Link

If you have a disk mounted on some directory and want to use it in another directory you can use symlink.
$ echo abcd > a.txt
$ ln -s a.txt b.txt
$ ll    # will show that b.txt is pointing to a.txt. Any operation on b is like doing it on a.

To mount windows file system

Just open up terminal and write sudo fdisk -l. then try figuring out which partition is your c drive. and then just mount the partition as:

sudo mkdir /storage
sudo mount /dev/sda3 /storage //in case sda3 is your c drive.

Why not enable mount of the windows partition on boot time? There is a utility called ntfs-config which mounts your windows partition at boot time. Install it with sudo apt-get install ntfs-config and enable mount at boot from there.


Create an Application Shortcut to Open Nautilus as Root in Ubuntu

$ sudo nano /usr/share/applications/Nautilus-root.desktop
[Desktop Entry]
Name=File Browser (Root)
Comment=Browse the filesystem with the file manager
Exec=gksudo nautilus
Ctrl+X, then Y, then Enter (to save and exit nano)
After this, you should have a shortcut under Applications > System Tools > File Browser (Root), from which you can enter a session of Nautilus with full write permissions.

Drag 'n Drop as Sudo

Create a launcher with the following command:
gksudo "gnome-open %u"
When you drag and drop any file on this launcher (it's useful to put it on the desktop or on a panel), it will be opened as Root with its own associated application. This is helpful especially when you're editing config files owned by Root, since they will be opened as read only by default with gedit, etc.

Automate or schedule Tasks (cron)

To view current scheduled tasks
$ crontab -l
To edit scheduled tasks
$ crontab -e

* * * * * /bin/execute/this/
1. minute (from 0 to 59)
2. hour (from 0 to 23)
3. day of month (from 1 to 31)
4. month (from 1 to 12)
5. day of week (from 0 to 6) (0=Sunday)

0 1 * * 1-5 /bin/execute/this/
  minute: 0 of hour: 1 of day of month: * (every day of month) of month: * (every month)  and weekday: 1-5 (=Monday til Friday)
10 * 1 * * /bin/execute/this/
Execute 10 past after every hour on the 1st of every month
0,10,20,30,40,50 * * * * /bin/execute/this/   ====   */10 * * * * /bin/execute/this/
Run it every ten minutes
Special Words
@reboot     Run once, at startup
@yearly     Run once  a year     "0 0 1 1 *"
@annually   (same as  @yearly)
@monthly    Run once  a month    "0 0 1 * *"
@weekly     Run once  a week     "0 0 * * 0"
@daily      Run once  a day      "0 0 * * *"
@midnight   (same as  @daily)
@hourly     Run once  an hour    "0 * * * *
@daily /bin/execute/this/

Mailing the crontab output
By default cron saves the output in the user's mailbox (root in this case) on the local system. But you can also configure crontab to forward all output to a real email address by starting your crontab with the following line:
MAILTO="yourname@example.com"
Mailing the crontab output of just one cronjob
If you'd rather receive only one cronjob's output in your mail, make sure this package is installed:
$ aptitude install mailx
And change the cronjob like this:
*/10 * * * * /bin/execute/this/ 2>&1 | mail -s "Cronjob output" yourname@example.com

Install an RPM (Red Hat):   rpm -ihv packagename.rpm

You can also use nc (NetCat) to transfer the data. On the receiving machine:

nc -l 1234 > big.txt

This will set up nc to listen to port 1234 and copy anything sent to that port to the big.txt file. Then, on the sending machine:

echo "Lots of data" | nc <receiver-host> 1234

This command will tell nc on the sending side to connect to port 1234 on the receiver and copy the data from stdin across the network.

However, the nc solution has a few downsides:

  • There's no authentication; anyone could connect to port 1234 and send data to the file.
  • The data is not encrypted, as it would be with ssh.

Add new disk and format and mount it.             gparted     linux disk format and management
$ fdisk -l
$ lvmdiskscan
$ mkfs.ext4 /dev/sdf
$ mkdir /mnt/sdf
$ mount -t ext4 /dev/sdf/ /mnt/sdf/

Convert MP3 to WAV to be burnt onto an AUDIO CD

for file in ./*.mp3; do
  mpg123 -w ./"${file}".wav "$file"
done

Parallel execution over a large file
split the text file into as many chunks as you wish:
 split ../sample.txt -l 100 -d
have a bash script which calls your script with an ampersand (&) at the end of each line to run them in parallel
time find /media/sd{d,e}/ -name '*.gpg' | parallel -u -j+0 --progress "gpg --quiet --no-permission-warning --trust-model always --decrypt {} | xz --decompress --stdout | ../cpp/entity_match '{}'"

$ htop                                              here
F5 tree. + or - to expand/collapse a branch
F9 kill process
l     for lsof   //list of open files
u    to view processes of a specific user 
SPACE        to select a process
F9               to kill the selected processes


Reset Ubuntu Administrator password:
Reboot into recovery mode.

Select the root option

Type this: mount -rw -o remount /
       This remounts the root "/".
root@ubuntu:~# passwd Administrator
       Enter new UNIX password: "Enter the desired password"
       Retype new UNIX password:
       passwd:  "Re-enter"    password updated successfully        
Resume the normal boot and login with the new password that you just created.