Command Substitution - $(command)

You can use the output of one command as the input of another command:

  • Method-1: via the $(command) expression (whatever is inside the $() gets evaluated / run first)

    echo "Content of the file temp.md: \n$(cat temp.md)"
    vi $(fzf) #FZF will be open first, and the stdout will of FZF will be use as input for vi command
    
  • Method-2: via the var=$(command) expression (a variant of the above that first saves the output of a command to a variable; personally, I find this method easier to comprehend and easier to debug)

    admin_username=$(ddev drush uinf --uid=1 --fields=name --format=string)  # save the name of user 1 to a variable
    ddev drush uublk "$admin_username"                                       # unblock that user
    ddev drush uli                                                           # generate a one-time login link
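
  For a more generic sketch of the same pattern (assuming you are inside a git repository):

    current_branch=$(git rev-parse --abbrev-ref HEAD)  # capture the command output in a variable first
    echo "You are on branch: $current_branch"          # the variable is easy to echo and inspect while debugging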
    

Unnamed Piping - cmd1 | xargs cmd2

If a command can accept data from standard input rather than as an argument, then piping alone is enough; otherwise, you will need xargs to “feed” the output of the previous command to it as arguments:

history | grep "ls -al"              # grep reads stdin, so plain piping works
history | pbcopy                     # pbcopy reads stdin too (macOS)
echo "temp.md" | cat                 # <--- WRONG! cat reads stdin here, so this prints the string "temp.md", not the file's contents
echo "temp.md" | xargs cat           # xargs turns stdin into arguments: this runs `cat temp.md`
echo "temp.md" | xargs -I {} cat {}  # same, with an explicit placeholder token
echo "temp.md" | xargs -I % cat %    # any token can serve as the placeholder
  • the flag -I {} basically says: use the {} sign as the placeholder token; similarly, -I % means use % as the placeholder token;
  • by default, if you do not specify a placeholder, xargs appends the input to the end of the command after the pipe (see the sketch below).
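
A small sketch of why the placeholder matters: it lets the argument land anywhere in the command, not just at the end (temp.md here is a placeholder filename):

echo "temp.md" | xargs -I {} cp {} {}.bak   # token used twice, mid-command: runs `cp temp.md temp.md.bak`
echo "a b c" | xargs mkdir                  # no -I: input is appended, so this runs `mkdir a b c`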

You can also chain multiple pipes together:

cat *.txt | sort | uniq > result-file   # concatenate, sort, de-duplicate, save
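
Note that uniq only collapses adjacent duplicate lines, which is why sort must come first; sort -u does both steps in one:

sort -u *.txt > result-file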

Redirection - >, >>, <

Save stdout to File

Use > and >> to write stdout (the output of a command) to a file (save it to the file):

echo "HELLO WORLD"  > "temp_1.txt"
echo "HELLO WORLD" >> "temp_1.txt"
cat  "temp_1.txt"   > "temp_2.txt"
cat  "temp_1.txt"  >> "temp_2.txt"
  • > will overwrite the existing content of the file with the output of the previous command

  • >> will append the output of the previous command to the end of the file
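
A quick demonstration of the difference (demo.txt is a scratch file):

echo "line 1"  > demo.txt   # demo.txt contains: line 1
echo "line 2"  > demo.txt   # overwritten: demo.txt now contains only: line 2
echo "line 3" >> demo.txt   # appended: demo.txt now contains line 2 and line 3
cat demo.txt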

Read from File as stdin

Use < to direct the content of a file to a command as stdin (input):

grep "Lorem" < temp.md (identical to [cat "temp.md" | grep "Lorem"])
grep "Lorem" < temp.md > temp_2.md (check for matching string and save matching line to temp_2.md file)

Brace Expansion - file_{1..5}.txt

Generate sequences: file{1..5}.txt becomes file1.txt file2.txt ... file5.txt.

cat temp_{1..5}.md            # = cat temp_1.md temp_2.md ... temp_5.md
cat temp_{a..d}.md            # = cat temp_a.md temp_b.md ... temp_d.md
cat temp_{001..005}.md        # = cat temp_001.md temp_002.md ... temp_005.md (zero-padded)
cat temp.{md,txt,yml}         # = cat temp.md temp.txt temp.yml
cat temp_{1..5}.{md,txt,yml}  # = cat temp_1.md temp_1.txt temp_1.yml temp_2.md ... temp_5.yml (every combination)
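
Brace expansion works with any command, not just cat; it is handy for creating files and directory trees in one go:

mkdir -p project/{src,test,docs}   # creates project/src project/test project/docs
touch draft_{01..10}.md            # creates ten zero-padded draft files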

Pipe a Single stdout to Multiple Commands - mkfifo, tee

For this we’ll need to use a combination of the commands tee and mkfifo:

Named Pipe - mkfifo

The mkfifo command creates a named pipe (a FIFO file) on disk that behaves like a pipe: writing input into this named pipe (file) unblocks and feeds the command that reads from the pipe.

mkfifo test_pipe                              # Any terminal
cat < test_pipe                               # Terminal 1 (once you run it, it will be blocked, waiting for the input from pipe)
echo "X" > test_pipe                          # Terminal 2 (this is a brand new terminal instance (tab/window))
mkfifo example_pipe                           # Any terminal
xargs -I "@" echo "Hello @" < example_pipe    # Terminal 1
echo "Simon" > example_pipe                   # Terminal 2
mkfifo example_pipe
xargs -I "@" echo "Hello @" < example_pipe &   # <-- a trailing "&" runs the reader in the background, so one terminal suffices
echo "Simon" > example_pipe

T-Splitter / T-Piece - tee

The tee command reads from stdin and writes the same data to stdout and to one or more files.

echo "Simon" | tee name.txt | xargs -I "@" echo "Hello @"

Combined Usage - mkfifo & tee

Using mkfifo and tee together:

  1. create one or more named pipes (FIFOs)
  2. start one or more readers on the FIFOs (and detach them with &)
  3. run a producer command that sends its output through tee to the named pipes
  4. delete the named pipes via rm
mkfifo hello_pipe thanks_pipe                               # step 1: create the FIFOs
xargs -I "%" echo "Hello %"  < hello_pipe  &                # step 2: reader 1, detached
xargs -I "%" echo "Thanks %" < thanks_pipe &                # step 2: reader 2, detached
echo "Simon" | tee thanks_pipe | tee hello_pipe > name.txt  # step 3: fan the output out to both pipes and a file
rm hello_pipe thanks_pipe                                   # step 4: clean up

Text Filtering and Editing - grep, sed, awk

Usage Distinction

  • Use grep for searching: find lines containing text or matching a regex, in files or in stdout (command output)

  • Use sed for editing: replace text, delete or insert lines, perform simple transformations

  • Use awk for field-oriented processing (it works best with structured data such as CSV and logs): work with columns, filter by condition, aggregate numbers

Example Usage

grep "HELLO" /var/log/app.log            # Basic search: shows lines containing HELLO (Case-insensitive and recursive) 
echo .... | grep "HELLO"                 # Basic search: shows lines in stdout of previous cmmand containting "HELLO"
grep -Ri "timeout" .                     # Searches all files below current directory, ignoring case, show line number and filename
sed 's/old_value/new_value/' example.txt         # Replace text (first match per line)
sed 's/http:/https:/g' urls.txt                  # Replace text (trailing g = all occurrences on a line, global): converts every http to https
sed -i.bak 's/debug=false/debug=true/' app.conf  # In-place edit (-i.bak keeps a backup at app.conf.bak before the edit)
sed '/^#/d' config.conf                          # Delete lines that start with "#" (the /^#/ part is a regular expression)
sed '/^#/i INSERTED LINE' config.conf            # Insert a line before each line starting with "#"
sed '/^#/a APPENDED LINE' config.conf            # Append a line after each line starting with "#"
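Note: the one-line i and a forms above are GNU sed syntax; BSD/macOS sed requires a backslash followed by a newline before the text to insert.
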
awk '{print $1, $3}' data.txt            # Print specific columns (whitespace-separated): prints columns 1 and 3
awk -F',' '{print $2, $5}' data.csv      # Print specific columns (comma-separated): prints columns 2 and 5 (-F sets a custom field separator)
awk '{sum += $3} END {print "Total:", sum}' data.txt   # Sum column 3; the END block runs after all lines are read
threshold=500                                          # declare a shell variable
awk -v T="$threshold" '$2 > T {print $1, $2}' data.txt # use the shell variable inside awk (-v passes it in)
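
awk can also filter rows by a condition before acting on them (data.txt and the column numbers here are placeholders):

awk '$3 > 100' data.txt                                # print only lines where column 3 exceeds 100 (default action is print)
awk '$3 > 100 {sum += $3} END {print sum}' data.txt    # combine a row filter with aggregation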

Using them together

A common pattern is to use grep to narrow down the lines, then sed or awk to transform the data:

grep "ERROR" app.log | awk '{print $1, $2, $5}'         # Extract only error lines, then print time and message columns
grep -l "DEBUG" *.conf | xargs sed -i 's/DEBUG/INFO/g'  # Replace in only matching lines 
grep "Australia" data.csv | awk -F',' '{print $1, $3}'  # Quick column fix in CSVs that match a pattern
