Getting Familiar with the CLI

If you’re doing a lot of work in Linux, there are some things you’ll want to get used to. I spend a lot of time in the command line. (It’s kind of hard to avoid when you’re working on a headless server.) These tips won’t mean much if you’ve never touched a shell, but if you have a basic working knowledge, here are a few that might come in handy:

Very often in less, I want to jump to the end of a file and work my way up. I can hit the spacebar over and over. One day I thought I was clever when I realized less would tell me how many lines were in the file, and I began jumping straight to a given line (typing 123g jumps to line 123). But it turns out it’s even easier: G takes you to the last line, and g takes you to the first. There are many more handy tips here.
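For reference, here are the motion keys just mentioned, plus a couple of related ones, all from less’s documented key bindings:

```
g       jump to the first line
G       jump to the last line
123g    jump to line 123
50%     jump halfway through the file
F       follow the file as it grows (like tail -f); interrupt to stop
```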

Of course, I spend even more time in vi. Search and replace is handy, but keep in mind that the :s/old/new command only replaces the first occurrence on the current line. You can append a g, ending up with :s/old/new/g, but that still only works on one line, which is usually not what you want. You can specify a line range; generally, though, you want the whole file. $ denotes the last line, so you can write the range as 1,$, meaning “from line 1 to the end of the file.” But it’s even easier: % means “the whole file.” So I end up with :%s/old/new/g to replace every “old” with “new”. And if that isn’t what you wanted, press u to undo. The G trick to jump to the end works in vi, too. It also turns out you can replace :wq with ZZ, which is essentially the same.
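The progression above, as vi/vim command-mode examples:

```vim
:s/old/new        " replace the first 'old' on the current line
:s/old/new/g      " replace every 'old' on the current line
:1,$s/old/new/g   " replace in a range: line 1 through the last line ($)
:%s/old/new/g     " replace in the whole file (% is shorthand for 1,$)
```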

I’ve known about the uniq command for quite some time: its job is to weed out duplicate lines. This is handy far more often than you might imagine: say you string a ton of commands together to pull out a list of all the e-mail addresses your mail server has rejected. There are bound to be many, many duplicates, because apparently bumttwagnerfor@domain is commonly spammed (?!).

But uniq has a peculiar quirk that I had missed. They call it a feature, although I’m not sure I agree: it only filters out adjacent duplicate lines. If the duplicates aren’t next to each other, it will merrily pass them through. I suppose there may be scenarios where that’s desirable, though I’m at a loss to think of any. In a nutshell, whenever you want uniq, you probably want to run the input through sort first. grep something /var/log/messages | sort | uniq, for example, will pull out all lines containing “something,” omitting duplicates.
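A quick demonstration of the quirk (the file name and addresses are just for illustration):

```shell
# Build a sample file with non-adjacent duplicates.
printf 'a@x\nb@y\na@x\n' > /tmp/rejected.txt

# uniq alone keeps the second a@x, because the duplicates aren't adjacent:
uniq /tmp/rejected.txt                # prints a@x, b@y, a@x

# Sorting first makes duplicates adjacent, so uniq can collapse them:
sort /tmp/rejected.txt | uniq         # prints a@x, b@y

# sort -u is a one-step equivalent; uniq -c prepends a repeat count,
# so a final sort -rn lists the most-repeated lines first:
sort /tmp/rejected.txt | uniq -c | sort -rn
```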

And note that use of grep. For some reason people seem to think that cat filename | grep search_pattern is the way to do it. There’s no reason for the cat: just do grep search_pattern filename.
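To make the difference concrete (using a throwaway example file):

```shell
# Hypothetical log file for illustration:
printf 'error: disk full\nall ok\nerror: disk full\n' > /tmp/demo.log

# The roundabout way: an extra process just to feed grep its input.
cat /tmp/demo.log | grep error

# The direct way: grep opens the file itself.
grep error /tmp/demo.log

# Bonus: -c counts matching lines instead of printing them.
grep -c error /tmp/demo.log           # prints 2
```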
