Watching this discussion makes it painfully obvious that the most vocal opponents of the post are those who have never used unified software package management. Sorry, guys, but complaining about how broken package management is, when all you know is the google, browse, download, run, next-next-finish dance, doesn't exactly help.
I don't want to know Mozilla released a new Firefox. I don't want to have to run it to find out. I don't want to know there is a new Java available and I don't care which version of OpenOffice is the latest and greatest. Or bzlib, or glibc. Or Python, Ruby, CMUCL... If the packaged version is good enough, I don't want to waste my time managing my computer, not a single minute. I'll let the system-wide updater do its job.
And that's what a unified software package management system (I use APT) does for me. It does its job so that I can do mine.
In the parts where I need more control, I skip the whole thing and install my own programs in whatever way makes sense for the specific product. My distro provides a reasonably up-to-date Python, but if I'm going to muck around with libraries that conflict with the packages provided by the distro, I'll use a virtualenv, which is a stand-alone Python environment with its own libraries and easy_install/pip. This is similar to the "bundle your versions" approach, but as the exception rather than the norm.
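For anyone unfamiliar with the virtualenv approach, here's a minimal sketch using the stdlib `venv` module, the modern descendant of virtualenv (the directory names are just illustrative):

```python
import os
import tempfile
import venv

# Create a self-contained Python environment in a scratch directory.
# with_pip=False keeps the example offline; real use would enable pip
# so the environment gets its own easy_install/pip-style tooling.
env_dir = os.path.join(tempfile.mkdtemp(), "myenv")
venv.create(env_dir, with_pip=False)

# The environment has its own interpreter configuration, independent
# of whatever Python packages the distro manages system-wide.
print(os.path.isfile(os.path.join(env_dir, "pyvenv.cfg")))  # True
```

Packages installed into such an environment never touch the distro-managed site-packages, which is the whole point.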
"Linux distributions must build everything from source code."
No, they don't have to. That's a self-imposed rule that's beneficial in some cases but not others. When an upstream developer releases nothing but source code, I appreciate the fact that the distros compile it on my behalf. But when an upstream developer (like Mozilla or Google or all these Java developers) releases a fully built and tested binary, I'd prefer the distros to just pass that on to me.
That's impossible, because the distribution is responsible for the updates it provides. If you package libraries with your program à la Firefox, Chrome, or Java, you simply make your program unmaintainable and unsupportable from the distro's point of view. This is as true of Ubuntu as it is of Debian or RedHat. This way of thinking is incompatible with basically all of the major distros.
Honestly, I don't even want to get apps from distros at all; I'd prefer to get tested binaries directly from upstream, but all the disagreement over package formats and such makes that nearly impossible. I feel like distros have painted themselves into a corner with their rigid policies and now they're complaining about it.
But then the tested binaries + libraries is a big directory which doesn't respect any of the unices way of organizing files.
I don't see how this makes things any more convenient for users than downloading it from upstream does.
I compare this to things like installing nginx on Ubuntu, which comes with files in the places I expect (/etc instead of the configure script default of /usr/local/conf), etc. Packages are actually very convenient for 90% of what I use on my computer.
Ruby and Clojure stuff are the only thing I download and install myself.
Why does it matter whether it respects the way Unix organizes files? Other than having config files in /etc and having binaries in $PATH, who cares whether libfoo.so is in /usr/lib or in /usr/myawesomeapp.com/vendor/libraries?
That's, more or less, what /opt is supposed to be. Qt and KDE used to be "integrated" that way, until distros finally got around to putting them into /usr. So there really is a standardized mechanism in the FHS for those self-contained packages. (Note that the Unix family as a whole does not standardize filesystem layout, even though Solaris also has /opt and uses it for exotic packages such as GNU gcc.)
Very true. The traditional Unix way of organizing files makes sense in a command line world, and I'd expect that sed, awk, grep and friends will continue to live in directories like /usr/bin. But for GUI desktop apps, there's really no good reason each app can't live in its own directory.
> Ruby and Clojure stuff are the only thing I download and install myself.
And why do you do this? Because the Debian/Ubuntu packages for these things are old and broken, primarily because they are insistent on breaking up the packages and scattering the files all over the FHS they worship. They despise RubyGems because it installs packages in a self-contained way and lets you install multiple versions of everything without mangling the names. It puts them out of a job.
> They despise RubyGems because it installs packages in a self-contained way
A way that conflicts with the software management Linux distros use. You can't issue a command and have your system updated and, at the same time, your Ruby environment updated. Introducing some new gem can crash system software written in Ruby and make it impossible for the system management tools to rely on the same interpreter you do.
If you want to install your gems, something like Python's virtualenv is recommended. I don't know if Ruby has one. If I relied on Ruby, I would make sure it had, even if it meant writing C.
For my own personal servers and workstations, yeah either way works pretty well. But when you have several hundred servers with a variety of different applications, many of which you don't use at all but you still need to keep patched and up to date, having a distribution helps a lot.
I curse the guys at CSW all the time (eg up and changing config directories from /opt/csw/etc to /etc/opt/csw[1]) but they have probably saved me quite a bit of work testing and building packages.
[1]For the record, I think /etc/opt/csw is better, just not better enough to be worth all the hassle of changing.
I want to agree with you, but I have a question.
Binaries tested against what? Obviously the test suite that upstream uses, but every program will have dynamic dependencies on the distro. In that case, wouldn't the upstream maintainers have to test every version of every distro to make sure it works for you? That seems time intensive (not infeasible, but it would become chaotic very quickly when we multiply the number of distros by the number of apps people want).
I'd also presume that the upstream maintainers include their test suite with their source, so nothing should prevent the downstream distro maintainers from running it before shipping (I have always assumed they do; I hope I'm not wrong).
If there hadn't been multiple divergent distros in the first place, upstream developers wouldn't be faced with the prospect of a huge testing effort. The result seems to be that we're locked into a model of getting all software from the distros, but they don't necessarily have the resources to do a good job of packaging everything.
If you're willing to take some abuse you can release server software only on RHEL/SLES and desktop apps only on Ubuntu and tell everyone else to fend for themselves.
I agree. I'd much rather be able to go out to, as an example, mozilla.com and download the latest Firefox in an easy-to-run installer (like I can do on Windows) than have to upgrade my entire fricking OS just to get the latest Firefox.
Again, just using Firefox as an example. The same applies to most end user apps, stuff like Java and Flash, etc.
In the case of Firefox, I don't even want an installer. I like the OS X install process for Firefox (and many other apps): copy Firefox.app to /Applications. That's it.
As much as I love application bundles, I'd like to point out that this method of distribution leads to code duplication. For example, almost every popular OS X application ships its own version of Sparkle and Growl (to verify this yourself, cd into the app bundle and go to Contents/Frameworks).
Does it matter? Disk space and RAM are both cheap and plentiful. The difference in disk space or RAM taken by apps on an average OS X desktop, an average Windows desktop, and an average Linux desktop seems to be negligible.
Does it matter? Disk space and RAM are both cheap and plentiful.
That just sounds like promoting a culture of waste to me. "Do we need to care about resources?" "Nah, disk and RAM are cheap, and our users won't mind paying for more of it."
What happened to elegance, efficiency, and modular, maintainable design? Just because you have a shiny box with 8GB of RAM and a terabyte of space, it doesn't mean everyone does, or everyone should have to.
The difference in disk space or RAM taken by apps on an average OS X desktop, an average Windows desktop, and an average Linux desktop seems to be negligible.
I'm not sure you can meaningfully make this comparison since the platforms (at least at the GUI level) are quite different. A better test would be to take a "standard" Linux system, and then create a "bundled" system that has each app in its own directory with all its dependencies bundled with it (perhaps leaving out some of the fundamental ones like libc). A little bit of common reasoning would suggest that memory and disk usage would be affected greatly.
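To make that comparison concrete, here's a back-of-envelope sketch; all the numbers are hypothetical, chosen only to show the shape of the trade-off:

```python
# Hypothetical figures: 50 desktop apps, each with 20 MB of app-specific
# code; common libraries total 300 MB when shared once system-wide,
# versus roughly 60 MB of bundled libraries per app.
apps = 50
app_code_mb = 20
shared_libs_mb = 300
bundled_per_app_mb = 60

shared_total = apps * app_code_mb + shared_libs_mb
bundled_total = apps * (app_code_mb + bundled_per_app_mb)
print(shared_total, bundled_total)  # 1300 4000
```

Under these made-up numbers the bundled layout costs roughly three times the disk; whether that multiplier matters in practice is exactly the question being argued here.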
> What happened to elegance, efficiency, and modular, maintainable design?
Each app having its own copy of the libraries it needs is actually MORE maintainable.
It's not uncommon for a newer version of a library to introduce a change which breaks something in your app. One example from my work: we tried building with a newer version of Qt and all our text was suddenly getting rendered upside-down due to changes in how Qt uses OpenGL. If we had been using the system copy of Qt, it would have been our customers seeing this problem instead of our dev team.
There are other advantages too, but to my mind this alone is enough reason to bundle libraries with your app.
But is that the common case? Or is the common case that an incompatible change is introduced every now and then, but much more often bugs and security issues are fixed that your users benefit from immediately, without having to wait for you to release a new version with new bundled libraries?
I'm a bit surprised that the Qt guys would make an incompatible change during a stable series. Is it possible you were relying on undocumented behavior that was subject to change?
It matters when there's a security bug which needs patching in an underlying library.
With shared libraries, only one library needs updating. With each package providing its entire ecosystem, every package on your system needs such an update.
Then again, if a library is patched, who tests whether all applications using it still work? Updates never messed up my system on Windows as badly as has already happened a few times on several of my Linux systems (with different distributions). To stay with Java: the reason I gave up on Eclipse was when it broke for the second time after a Java VM update and I just didn't want to fight the package system anymore.
I generally trust application developers to test better than distributors do, and distributors certainly don't care about third-party packages at all. I would prefer a system where application developers are responsible for which libraries their applications link against, with a completely different mechanism for security: for example, a list of library versions with known security problems against which shared libs can be checked at startup.
And I'm sure there are other solutions, for example searching my system for all versions of a certain library and asking me which to replace (and maybe keep me informed about not-updated versions). Preferably while keeping the old version around for a while to make it easy for me to switch back for the applications which do break now.
It's not like having a single version of a library is the only way security can be handled; it's just the current way of doing things. And one can discuss which is preferable: a secure system which occasionally breaks working applications, or insecure applications which at least do work.
edit: Btw, this might even be in the interest of free software. Ever had a problem with a certain library which won't be patched in the official sources for whatever reason? Right now, basically, you're fucked. Even if you patch it yourself, the chances of getting distributors to add a second, nearly identical library with one or two simple patches are basically zero. So although you have those free libraries with source, it doesn't really matter, because you can't change those sources yourself if you still want to be in the official distributions. A truly free system would be one where changing sources is made easy.
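The startup check proposed above could be sketched roughly like this; the library names and version numbers are invented purely for illustration:

```python
# Invented advisory data: library versions with known security problems.
VULNERABLE = {
    "libfoobar": {"1.2.0", "1.2.1"},
    "libbaz": {"0.9"},
}

def check_bundled(libs):
    """Return the (name, version) pairs that match a known advisory."""
    return sorted((name, ver) for name, ver in libs.items()
                  if ver in VULNERABLE.get(name, set()))

# An application's bundled libraries, mapped name -> version.
bundled = {"libfoobar": "1.2.1", "libbaz": "1.0", "libqux": "3.3"}
print(check_bundled(bundled))  # [('libfoobar', '1.2.1')]
```

A real mechanism would pull the advisory list from a distro or vendor feed rather than hard-coding it, but the per-application check itself is this cheap.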
Which is why I like systems which are mostly from the vendor, with very few custom applications.
IME, application vendors only test on a few common systems, not across the board. Try running something which isn't common (say, a 64-bit Linux distribution three years ago), even on newer hardware. My Linux experience has been with third-party packages breaking, not anything shipped by the vendor.
I'll agree with that! That's always been one of my favorite things about OSX. It makes it easy and obvious how to uninstall an app (duh! just delete it), but even better, it makes it easy to back up an app. Just back up your /Applications directory and at least 90% of your apps will work fine after a restore, without having to re-install all of them!
No, Debian is doing it wrong. Ubuntu is doing it wrong at a faster pace.
What's wrong is the cycle of freezing all the packages for a 'release' and then 'supporting' them in the future by trying to extract 'bugfix' patches from newer versions instead of just shipping the goddamn upstream software.
The answer is rolling releases and a proper slots system — look at Gentoo or Arch.
Which is fine if you're comfortable compiling from source or installing packages manually.
My point is that on OSX or Windows, installing an application from "upstream", like getting the latest Firefox from Mozilla.org for example, is something even relatively non-technical users can do by themselves. On a Linux desktop, it's a "here there be dragons" operation.
Not to mention that on Windows or OSX, a developer of a relatively small, obscure application (like say, HN favorite Bingo Card Creator) has a standard, relatively easy way to package their application up for installation by end users.
On a Linux desktop that's really damn difficult for both the developer and the end user.
Wouldn't it be harder if you didn't have the distro's standard environment to target? If the user had gone upstream to get any significant number of packages for their system, that would be very messy, no?
In my experience there is nothing better than having a repo/PPA or even a well-built .deb/.rpm; I wish for it every other day on my Mac/Windows boxes.
Then do that across a few hundred (or few thousand) systems, making sure nothing breaks, including installed plugins and other applications needing that package.
What disagreement? There are two package formats in large use, rpm and deb. A lot of software is distributed upstream as .deb or .rpm packages, and you're free to use them that way. But this flies in the face of ease of administration and maintenance. It's OK for a personal machine, but it simply isn't acceptable for any professional setup.
So? There is already a lot of software for which Canonical, for example, explicitly states that they do not provide updates. The software is then updated either directly by the user or in the next OS release.
The next obvious solution is to make separate packages for every version of library that the software uses. The problem is that there is no real convergence on “commonly-used” versions of libraries. There is no ABI protection, nor general guidelines on versioning. You end up having to package each and every minor version of a library that the software happens to want.
I don't understand why this is a problem, exactly. If a dev runs and tests against a particular version of a library, use that version. Even minor updates in libraries can cause problems. If the library has a security issue, blacklist the version and force the upstream devs of both the library and the app to acknowledge the issue and release new versions of their projects.
Force upstream to fix a bug and release a new version? And what if they don't? Quite a few packages in Debian have somewhat dormant upstream authors - the packages work, perhaps needing a few patches to compile with the latest versions of common libraries, but the original author has essentially moved on.
Consider:
libfoobar has a bug in at least one version seen in the wild. Is my system safe?
If your distro packages just one or two stable versions of libfoobar, any package that depends on libfoobar is either OK or not OK, and if it's not OK, you can patch the bug in one place and you're safe again. If upstream is dormant, perhaps the current package maintainer can fix it.
If there are various versions of libfoobar being linked by individual apps, you need to check every app for the flaw and work with both the app author and libfoobar's author to determine whether the flaw exists and how to fix it. Upstream libfoobar might say the bug has been fixed in the latest release, so just upgrade to that. Upstream app then has more work to do, and may be in denial about the importance of the bug. And if the source isn't available for the precise libfoobar bundled with the app, the package maintainer would have to either (a) rework the app to work with a stable system version of libfoobar, (b) package the odd version of libfoobar separately, and link that, or (c) delete the package from the distribution.
If upstream won't, can't, or doesn't want to fix an upstream bug, then no one wins. Distributions patching upstream outside of the original source is a bad idea; most likely other distributions need the same fix. Kindly convincing the upstream app to rework their app with the new stable library is the ideal.
I realize it's a give and take, but the blog post is putting the blame squarely on the java dev's shoulders.
This is essentially the same issue as Zed's recent Python rant, and the "DLL hell" mentioned previously.
FreeBSD went through the pain of extracting itself away from its perl dependency years ago. And, it's way better for it. There's nothing like a minimal install. A nice, clean, empty slate.
This sounds peculiarly like the issue of "DLL Hell" on the Windows platform. Ultimately, you either hope that the shared libraries match, ship with the DLLs you need, or (as a newer option) throw your chips in with WinSxS -- where the cure might actually be worse than the disease.
Maybe the fact that Java grew out of the proprietary world is part of the reason why it doesn't play nicely with open-source/free operating systems.
It's a little different than DLL hell: most applications provide their own libraries precisely to avoid DLL hell. Rather than relying on (or trying to install) some globally-shared DLL, they include their own versions of libraries, and it doesn't matter if those versions match the versions used by other applications. (As a developer you have to worry about dependency differences between libraries, but as an application installer/user you don't have to worry about it once the developer's got everything working and bundled up). The Linux distro guys are in some ways asking Java applications to go back to a world where DLL Hell is possible, which might be good for distro packagers but would be a disaster for developers.
I think it's because Java sees itself as its own operating system, and because it started out with a pretty crummy module system that's only been incrementally improved over the years.
This problem is different from the Python issue Zed Shaw wrote about.
In Java, developers will bundle dozens of random .jar files with their application. Other versions of these libraries may exist in a Linux distribution already, but specific Java apps aren't linking to those and carefully documenting how to install all the dependencies (or which ones are optional). Instead, from the distro's point of view, these Java apps are being released with a bunch of binary blobs that may or may not contain bugs that need patching later. Which partly defeats the purpose of package management.
Python programmers don't do this. Distros love Python, which is why so many of their system administration/automation scripts depend on it. This puts some tension between packagers, who use it as a bash/perl replacement, and developers like Zed, who want to treat it like an up-to-date library -- but realize that the Java solution of just bundling the whole thing with the main app is ugly. That tension exists because Python developers haven't run amok the way Java developers have.
I'd love to see distros start to introduce (for instance) separate python-sys and python-dev packages. The OS would use python-sys for the admin and automation scripts, but if you wanted to use Python for your own projects, you could use python-dev, which could be a later version. Possibly python-dev could even track upstream more closely. Either way, you'd decouple the system's need for a stable, proven and predictable Python from the user's need for new(er) shiny.
Maybe RedHat, but that's about it. Even Debian Stable comes with 2.5.
BTW, this rant about the impossibility of upgrading is a classic, and wrong nonetheless. You're perfectly able to install new versions of Python, Perl, or whatever, provided you do it in /usr/local so the official version stays alongside the new one. I do this all the time: my system comes with Perl 5.10, but I regularly install the newest releases in /usr/local, or even development versions. No problem.
CentOS 5 (even the latest 5.5). You can install python25 or even python26, but 2.4 always stays the default.
As long as you know about it, you can fix scripts, but any downloaded program needs a change to its shebang line, or at least to be run explicitly via python2.X.
But yes, "most Linux distros" part is completely untrue. Most have at least 2.5 - or higher.
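The shebang fix described above could be automated along these lines (the interpreter path is just an example, not a claim about any particular distro's layout):

```python
def pin_shebang(script_text, interpreter="/usr/bin/python2.6"):
    """Replace a script's shebang so it runs under a specific interpreter
    instead of whatever 'python' happens to be the system default."""
    lines = script_text.splitlines(keepends=True)
    if lines and lines[0].startswith("#!"):
        lines[0] = "#!" + interpreter + "\n"
    return "".join(lines)

old = "#!/usr/bin/python\nprint('hello')\n"
print(pin_shebang(old).splitlines()[0])  # #!/usr/bin/python2.6
```

Scripts without a shebang are returned unchanged, which is the safe behavior when sweeping a directory of downloaded programs.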
For the most part yes, but there are minor incompatibilities. New keywords have been introduced, and if you used those keywords as names in your app...
I've come to the conclusion that every language is broken from the point of view of people actually working with the language rather than relying on tools previously written in it. Ruby has broken out of this with rvm, maven kinda-sorta fixes Java but seems to come with a bundle of baggage, virtualenv with your own python build works well enough. CLBuild and QuickLisp, the Haskell Platform and from what I can tell Racket automate a similar approach. I don't know much about the Perl ecosystem, but at a guess I'd say that Perl's backwards compatibility saves it better than most. And this is before you start looking at the code quality and release processes of the libraries packaged in the languages' own distribution mechanisms.
Seriously, unless you're working in C/C++, I think it's only worth using the system-bundled languages if you're actually targeting the OS itself. If you're targeting more than one platform, using your own build seems to be the only way to stay sane. If you're only targeting a single platform, unless you're tracking a bleeding-edge-style distro like Arch, you can guarantee that all the fun development is going to be going on somewhere you can't reach.
It's exactly the same thing and one of the reasons distros should get the hell out of the business of shipping stuff like this.
I'm on a mission. Distros that like to use languages, say like python, for system level stuff should stuff that shit somewhere isolated and ONLY use it for system level stuff.
Then they can provide or not provide a native package for python2.7/python3.1/ruby1.9.2 ... you get the point. With distros like RHEL and Ubuntu LTS, they lose ALL value as a platform for ruby or python development because they don't release often enough or worry about breakage to keep those languages up to date.
This is why companies like ActiveState are making a KILLING providing supported after-market dynamic language binaries.
What the distros should be doing is, besides isolating any dynamic language they use for system-level configuration, providing, with the support of the language vendors, an installable local package repository. I.e., you should be able to install a base RHEL-provided Python 2.7 RPM plus a local PyPI server and grab which packages you want to standardize on. Same goes for Ruby and gems.
This would solve the issue entirely and keep LTS distros like RHEL and Ubuntu from becoming irrelevant two weeks later when a new version of a gem comes out that you have to have for app X.
Ruby, Perl and Python packages usually come with a README that says:
This depends on these external packages: ...
Java programs usually come with a bunch of .jar files which were once independent packages, but have been dropped into the release itself. No dependency problems!
Then, if someone wants to package a Java application for Debian, the process is:
1. Look through the collection of .jar files in the release
2. Do you recognize one of these as already being packaged for Debian?
3. Work with upstream to delete that .jar from Debian's copy and depend on the system's version instead
4. Repeat for every other .jar in the release, until you hit a wall
5. Upload the package to Debian with an acceptably small number of bundled .jars
6. Time permitting, get someone to package the other .jars that aren't available in Debian yet
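Steps 1 and 2 of that process amount to a set-membership check over the release's jar files; here's a toy sketch (the "already packaged" set is invented for illustration):

```python
# Invented stand-in for the set of jars the distro already packages.
DEBIAN_PACKAGED = {"commons-logging.jar", "log4j.jar"}

def triage_jars(release_files):
    """Split a release's bundled .jar files into those the distro
    already packages and those nobody has packaged yet."""
    jars = [f for f in release_files if f.endswith(".jar")]
    packaged = [j for j in jars if j in DEBIAN_PACKAGED]
    unknown = [j for j in jars if j not in DEBIAN_PACKAGED]
    return packaged, unknown

release = ["log4j.jar", "vendor-internal.jar", "README", "app.jar"]
print(triage_jars(release))
# (['log4j.jar'], ['vendor-internal.jar', 'app.jar'])
```

The hard part, of course, is steps 3 onward: the matching itself is trivial, while convincing upstream to drop its bundled copies is the labor-intensive bit.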
That said, you can cause a similar amount of trouble in other languages, it's just not the convention (thanks to the success of gems, CPAN, PyPI). For example, Ubuntu appears to have deleted the sagemath package because upstream keeps their own patched copies of dozens of libraries they depend on:
They generally do much better than a README: machine-parseable is the way to go.
Perl (CPAN) packages include a YAML file that specifies what other modules (packages) and what minimum version of each is required to configure, build and run the package. There's an ecosystem of tools available for turning a Perl package into an RPM or deb, some of which can even work recursively. Even before the YAML was standard in CPAN packages, it just wasn't that hard to parse out all the "use" and "require" statements to automatically detect all the required packages (and minimum versions of those). There's also a unit testing framework to make sure you don't accidentally introduce any incompatibilities with untested newer versions of required packages.
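As a rough illustration of why machine-parseable metadata beats a README, here's a toy reader for the flat `requires` section of a CPAN-style META.yml; a real tool would use a proper YAML parser, and the module names here are just examples:

```python
META_YML = """\
name: Some-Module
requires:
  perl: 5.008
  List::Util: 1.19
  Scalar::Util: 0
"""

def parse_requires(text):
    """Extract module -> minimum-version pairs from the 'requires' block."""
    deps, in_requires = {}, False
    for line in text.splitlines():
        if not line.startswith(" "):           # a top-level key
            in_requires = line.strip() == "requires:"
        elif in_requires and ": " in line:
            module, version = line.strip().rsplit(": ", 1)
            deps[module] = version
    return deps

print(parse_requires(META_YML))
# {'perl': '5.008', 'List::Util': '1.19', 'Scalar::Util': '0'}
```

Given a mapping like this, turning a CPAN distribution into an RPM or deb with correct dependency fields becomes a mechanical translation, which is exactly what the gem2rpm-style tools do.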
I haven't dealt with Ruby quite as much, but the gem format also includes dependency information and there's gem2rpm for RPM and dpkg-gem for deb packages.
I think I've only ever once had to build an RPM or deb package of a Python package, but Python seems to natively support building both formats; just call the same "build and install" method you'd usually use with an extra argument and you get a native package, which will use dependency information if the python package provided it.
"The problem is that Java open source upstream projects do not really release code"
Wait, what... If projects do not release source which you can modify, build and repackage, do they really deserve to be called open source projects?
And I also thought that integrating different projects was a distribution's main task. Isn't that exactly what he is talking about? Distros like Debian already backport massive amounts of code over the lifetime of a release to get everything working with their specific versions. Does Java really differ that much?
I think the real problem is that there are too few package maintainers for Java packages, and the upstream binaries are usable enough that there is less incentive to become a Java package maintainer compared to, e.g., C packages.
edit: also, I don't believe in either of his solutions. The real solution, IMHO, would be to patch the upstream source to work with distro-provided libraries (of course, in some cases patching the library is also a viable alternative).
I think the real real problem is that assuming a "package maintainer" must exist for every single goddamn software package for every single goddamn Linux distribution has very obvious human scalability problems.
Just because an infinite number of monkeys appeared to create Debian does not mean it is a safe assumption to assume that there are many other groups of infinite monkeys out there to support every other software ecosystem.
Eventually, the Linux community will figure out that they've run out of bodies to build the same crap over-and-over and 'binary compatibility' will stop being a dirty word.
Stop worrying about duplicating files: disk is cheap.
Stop worrying about building everything from source: I'd rather use the official Firefox than the screwy patched version. (I'm looking at you, Debian Iceweasel.)
Just look at the way OS X does things.
I've been using Linux since 1993 and it's amazing this crap hasn't been fixed by now.
We want to avoid code duplication (so that a security update in a library package benefits all software that uses it)
The crux of the problem seems to be the Big Brother attitude distros take toward users and apps, specifically protecting users from unpatched vulnerabilities in upstream apps.
While this is crucial on servers and mass deployments, it's entirely possible users find this more of an annoyance than a feature, and thus we may have to wait just a little while longer for the so-called Year-of-Linux-on-the-Desktop.
Some have suggested maven as a solution. I guess the only part I'm missing is how maven ties into the actual system. From what I've seen, it is always pulling from my ~/.m2 repository or a full repository upstream. Is there a way to have a "system" repository that yum/apt/etc. could install into?
Getting things to build with maven's idea of a build process has traditionally been tough, but it's pretty good for auto-downloading dependencies. Usually I needed a shell script that had maven download some, then wget'd a few more (that, for licensing reasons, couldn't go in an upstream server).
Unless a slightly-customized maven is distributed, you'd have to put a ~/.m2/settings.xml in the new-user template that specified your local repository. Which isn't too bad.
Anyone with more than a handful of developers should really be running their own maven repository mirror to shield deployments from external outages. http://nexus.sonatype.org/ is stable and simple to get running.
The fact that the libraries get downloaded to ~/.m2 should be irrelevant in production, because your deployments should happen against a deployment-specific role user, and downloads should be super-fast, because they're all from within your colo.
(all this being said, I haven't touched java and don't miss it at all since switching to Ruby on Rails to write AdGrok, and that's coming from more than 10 years soaking in java).
I disagree: you should put in version control each library that your project depends on. I don't see what Maven adds beyond introducing an artificial dependency on a tool outside of your control.
I think we need a radical break with the past to solve the horrible dependency nightmare we're looking at today. These would be my initial principles:
There should be _one_ unified Linux OS that only includes the bare minimum set of applications. It should be very clear what belongs to the OS and what doesn't. The root directory should contain exactly three subdirectories: os, app and home.
There should be no dependencies on any non OS software. Each application should live in its own subdirectory of /app. Libraries that are not part of the OS should not be shared across applications. The only external dependency an application should have is the OS.
Whatever the new solution is, it should not include package managers or package maintainers. Their existence is a symptom of an overly complicated system.
I realise that having a central registry of all installed software components does have advantages like being able to fix some security issues in one place. However, I think the idea has failed. It creates too many intractable dependencies, it is too complicated and hence ultimately insecure and unproductive.
One thing that always shocked me is how many Java apps play "library bingo".
There is no reason to use every third-party library available. IDEs make it easy to introduce all kinds of weird dependencies in your code, but, please, don't.
Actually, the "standard" JRE (as far as most Java apps are concerned) is the official one from java.sun.com (packaged as "sun-java" in Ubuntu, IIRC).
I have encountered numerous instances of OpenJDK not being able to run java apps that run fine on sun-java & quite often when an app/applet I support doesn't work on a user's machine it's due to that user having OpenJDK installed. Replacing it normally solves the problem.
I don't really know why Linux distros so often insist on installing OpenJDK as the default when it has so many incompatibility issues.