Remove Duplicate Lines From File in Linux

When processing files on a Linux system, you may need to remove duplicate lines from a file. This is especially useful with log files, which often contain repeated lines. This tutorial shows how to remove duplicate lines from a file in Linux.

Create a new file for testing:

printf "Line1\nLine2\nLine2\nLine1\nLine3\n" > test.txt

The awk command can be used to process text files. Execute the following command to remove duplicate lines from a file while preserving their original order:

awk -i inplace '!seen[$0]++' test.txt

The expression !seen[$0]++ uses an associative array keyed by the whole line ($0). It evaluates to true only the first time a given line appears, so awk prints each unique line once and skips every repeat. The -i option with the value inplace edits the file in place; note that this is a GNU awk (gawk 4.1+) extension.
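If your awk does not support the GNU in-place extension, the same idiom works by redirecting to a new file. The sketch below (the file name deduped.txt is just an example) also contrasts it with sort -u, which removes duplicates but sorts the file as a side effect:

```shell
# Recreate the test file
printf "Line1\nLine2\nLine2\nLine1\nLine3\n" > test.txt

# Portable variant: write unique lines (first occurrence, original order)
# to a new file instead of editing in place
awk '!seen[$0]++' test.txt > deduped.txt
cat deduped.txt

# sort -u also removes duplicates, but it reorders the lines,
# so the original order of first occurrences is lost
sort -u test.txt
```

Use the awk approach when the original line order matters; use sort -u when sorted output is acceptable or desired.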
