> Extract tar.gz in new directory: tar zxvf package.tar.gz -C new_dir
For quick reference, since it seems to be hard[1], these are the flags you'll most need in tar:
c = create
x = extract
v = verbose
z = gzipped (compressed)
f = file
So for example `tar xf example.tar` will extract that tar file. Or `tar cf example.tar .` will pack all files in the current directory to example.tar. For compression and verbosity, it would be `tar zvcf example.tar.gz .`. The .gz at the end is optional, but now others can see what kind of archive it is. Extract .tar.gz files with `tar xzf example.tar.gz`.
For bonus points, here is a poor man's version of scp/rsync:
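The snippet itself seems to have gone missing; the version of this trick I've usually seen pipes a tar stream through ssh (the user, host, and paths below are placeholders, not anything from the original post):

```shell
# Poor man's scp/rsync: stream a tarball over ssh instead of copying
# files one by one. "user@remote" and "/dest" are placeholders:
#   tar czf - mydir | ssh user@remote 'tar xzf - -C /dest'

# The same pipe works purely locally, which is handy for copying a
# tree while preserving permissions and ownership:
rm -rf /tmp/src /tmp/dest && mkdir -p /tmp/src/mydir /tmp/dest
echo hello > /tmp/src/mydir/file.txt
(cd /tmp/src && tar czf - mydir) | tar xzf - -C /tmp/dest
cat /tmp/dest/mydir/file.txt    # prints "hello"
```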
Cheese note: I still do the ol' pipe dance when de-compressing archives, like this:
gunzip -c somefile.tar.gz | tar xvf -
The reason? Well, it's like this: I learned it that way on machines that had, typically, about 4 megs available. So it used to be, back in the ol' days, that if you tried to do it like:
tar xvzf somefile.tar.gz
.. you'd run out of RAM, get into swapfile sadness, and so on.
Piping it through first used the pipe buffers, which blocked whenever memory exhaustion occurred, but nevertheless recovered a lot more smoothly than when heavy swap got involved ..
Dunno if it's still 'true' (could be a mythos I'm passing on from the old, drunk sysadmin I learned the rule from) but it always bugs me when I forget to do it 'the new way' ..
This is at least sometimes still true in corporate Linux environments. Giant, Immoral MegaCorps apparently can't be bothered to spring for big disks, so doing "tar xzf" or gunzipping something before un-tarring often fills up the filesystem. Embarrassing, but true, given that you can hoof it to Office Depot and buy terabytes for a couple hundred, if they'd just let you.
Nowadays, `tar -xf foo.tar.gz` (at least in Linux) will auto-detect the gzipping and just do the right thing. I think it uses file extension, so I'd suspect it works for .bz2 and .xz too. (Not sure I'd use this for shell scripts, but for everyday command line it should work.)
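A quick way to try this on your own system (GNU tar assumed; file names below are invented for the demo):

```shell
# Make a small gzipped tarball, then extract it WITHOUT the z flag.
rm -rf /tmp/autodetect && mkdir -p /tmp/autodetect/src
echo data > /tmp/autodetect/src/a.txt
tar czf /tmp/autodetect/foo.tar.gz -C /tmp/autodetect src

# No z needed: tar works out the compression on its own.
mkdir -p /tmp/autodetect/out
tar xf /tmp/autodetect/foo.tar.gz -C /tmp/autodetect/out
ls /tmp/autodetect/out/src    # a.txt
```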
Can't edit my post anymore, but good tip! As dmd commented though, it seems tar automatically detects compression on files because of the file extension.
Yesterday I got an error 'error: tar archive is compressed. Aborting' or something along those lines. When reading a file it might detect it because of the filename, but with stdin you need to specify the flag it seems.
Well, the default Linux shell is /bin/bash on most systems.
I'd say more about the article itself and what it should and/or does say on this, but at present, I suspect the author is looking up "Webserver and Database Scalability and Availability Tips & Tricks":
I'm well-aware that bash is a (if not the) popular choice on desktops and servers, but my terse and somewhat snarky answer was motivated by two other factors:
1. Not only do distributions like Debian or Ubuntu use dash, but various other shells are becoming more relevant to programmers due to their inclusion in special-purpose systems, such as devices running with a Busybox-userspace (which includes pretty much every Android phone, for instance).
2. If the article claims to have any educative value, it should definitely mention that this is something specific to a particular shell. Assuming the author isn't lying about his use of *nix systems (the diversity of shells is common knowledge to anyone seriously using Unices), the other logical explanation is that saying bash instead of Linux isn't as cool in terms of SEO. The WWW is cluttered enough with marketing drones hiring cheap labour on freelancer.com to fill it with bad spinoffs promoting their semi-scam businesses, making any kind of legitimate information so hard to find you'd think it's 1998 again for some topics. It itches me on a personal level when I see programmers doing that. We should know better.
Debian uses dash as the default system (that is: scripting) shell, and that is what you'll typically find /bin/sh symlinked to (OpenBSD doesn't truck with such nonsense and gives you statically-linked Bourne when you ask for sh, and there's a pretty good argument for why that should be). I actually got in the habit of writing bash (not sh) scripts some time back. My understanding is that the article was referring to command-line tricks, which would generally indicate bash.
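One quick way to see the system-shell vs. user-shell split in practice (behaviour varies by distro, so treat this as a sketch, not gospel):

```shell
# On Debian/Ubuntu, /bin/sh usually points at dash rather than bash:
ls -l /bin/sh

# [[ ]] is a bashism; POSIX sh (and therefore dash) only guarantees [ ].
bash -c '[[ -n hello ]] && echo "bash accepts [[ ]]"'

# The portable spelling works in any POSIX shell:
sh -c '[ -n hello ] && echo "[ ] works everywhere"'
```

This is also why a script that starts with `#!/bin/sh` but uses bash features can break on Debian-family systems even though it works fine when run interactively.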
Defaults do have their use, though from what I saw of the article it was hardly rigorous enough to even note what defaults it was referring to.
Agreed with you on the quantity of low-quality information out there. Google seems to be badly losing its edge in winnowing wheat from chaff (as does HN).
A number of Linux distributions have moved to the BSD-licensed dash[1] for running their system initialization scripts, as it's noticeably faster than bash (resulting in faster boot times). Ubuntu made this change back in 2006 [2][3], and Debian switched to dash for Debian Squeeze.
Bash is still the default user shell on those systems, though.
This is incorrect and misleading right off the bat. Ctrl+Z does not "send process to background." You will almost always need to use the bg command for that after suspending the running program.
Came here to say that, and to add: a long list of commands with no explanation or details is minimally useful.
I challenge you to read a man page a week. Use that command every day that week. Find a way to use it with two or three other commands you know. Break down tasks into units the size of commands you know. Fill in missing pieces with bash or your scripting language of choice.
I've been using Linux since 2006... And only this year did I finally type 'man' into my terminal. I no longer use google for finding those sorts of things out.
Content is typical blog-style content, and even the comments come from Disqus, so there should not be any reason to connect to a database on every request.
Nice list of stuff, especially for people relatively new to command line stuff
there's a cool twitter account called "Command Line Magic" (https://twitter.com/climagic) that does some fun stuff, and is pretty shell-agnostic.
Also, for those willing to, I'd strongly suggest checking out fish. It's not bash-compatible, but there are a lot of good usability aspects to it (especially compared to vanilla bash); I love it.
Out of curiosity though, this command :
tar zxvf package.tar.gz -C new_dir
it's always intrigued me that the tar command (I imagine it is mainly used to "unzip" things) does not make it very simple to do this. I've started to memorize the 4-letter combo, but for a while I had to google it every time (man pages are not useful for learning things for a lot of unix command-line tools).
tar is not a compression program per se, but a file archiving tool. It is used because it preserves file system information such as permissions, owner and group, which would otherwise be lost if you used zip/gzip/bzip alone. Each letter has a meaning: z is for gzip, x is for extract, v is verbose (print details to screen) and f is for file. So the cryptic four letters are really easy to remember: eXtract the Zipped File Verbosely. Change the x for a c (Create) and you can make a new archive.
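To cover the -C part of the question above, here's a small end-to-end sketch (all the file and directory names are invented for the demo):

```shell
# Create a sample tree and pack it into a gzipped tarball.
rm -rf /tmp/demo && mkdir -p /tmp/demo/pkg
echo v1 > /tmp/demo/pkg/version
cd /tmp/demo
tar czf package.tar.gz pkg

# -C switches into the target directory before extracting;
# the directory has to exist first.
mkdir new_dir
tar zxvf package.tar.gz -C new_dir
cat new_dir/pkg/version    # prints "v1"
```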
I was going to ask what is wrong with the man page (because in my experience they do a mighty fine job of explaining most traditional unix tools). But then I checked the manuals for two variants of tar (gnu and obsd), and I can see how they might appear less than helpful for a new user who has never dealt with gzip before.
I am surprised the xz(v)f invocation isn't in the examples. I could've sworn it's there...
> it's always intrigued me that the tar command (I imagine it is mainly used to "unzip" things) does not make it very simple to do this.
It helps to remember that tar stands for Tape ARchiver (or Tape ARchive, or something, depending on who you ask), and that its interface really does date from the era when dealing with tape was a common task for pretty much any sysadmin.
The quick 'n dirty recipe to test the disk write speed isn't ideal, since there's the buffer cache which significantly skews the results. A much better way would be:
> sync && time sh -c "dd if=/dev/zero of=foo bs=1M count=10000 && sync"
Then just divide 10000 (or whatever count you chose; with bs=1M that's the number of megabytes written) by the number of seconds elapsed, and you should get a much closer approximation of the sequential write speed in MB/s, since it accounts for completely flushing the buffers to disk.
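Put into a script, the arithmetic looks roughly like this (the size is shrunk so it finishes quickly, and /tmp is an arbitrary choice; point OUT at the filesystem you actually want to measure):

```shell
# Time a buffered write plus a final sync, then divide MB by seconds.
SIZE_MB=100
OUT=/tmp/ddtest.bin

START=$(date +%s)
sync
dd if=/dev/zero of="$OUT" bs=1M count="$SIZE_MB" 2>/dev/null
sync
END=$(date +%s)

ELAPSED=$(( END - START ))
[ "$ELAPSED" -eq 0 ] && ELAPSED=1   # guard against division by zero on fast disks
echo "~$(( SIZE_MB / ELAPSED )) MB/s sequential write"
rm -f "$OUT"
```

Note that `date +%s` only has one-second resolution, so for a meaningful number the write needs to take at least a few seconds; bump SIZE_MB up accordingly on fast disks.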
This is a collection of quick shell recipes, so in that context it's perfectly acceptable. Besides, there are some environments where you can't just install packages willy-nilly, even if they're present on the repositories.
I had read some time ago that sync doesn't actually flush the buffer cache, it just schedules it to be written (and that may happen a little later). Could be wrong though, or the information could be outdated in recent Linux versions (I read that about some UNIX version, IIRC).
[1] Obligatory xkcd: https://xkcd.com/1168/
Edit: replaced all asterisks with a period (.) since it would become italics. Tar is recursive by default, so it should work the same.