If you've never listened to the History of Rome podcast, you'd enjoy it a whole lot. The host also makes a lot of book recommendations over the course of the podcast.
Developer of AWX here. We're working on this! Unvaulting is available during playbook runs, but we definitely need to make it available during inventory syncs as well. The features coming in Ansible 2.4 will enable us to do this.
Are there any pointers on how to work around this?
I think a large part of our problem is that we are using Ganeti for most of our VMs, rather than something supported natively by AWX like OpenStack/EC2/Azure. I have an inventory script we have been using, but I couldn't get it to run in AWX because the AWS credentials weren't made available in a way that boto recognized (the inventory is pulled from both Ganeti and EC2).
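For anyone stuck on the same thing: boto falls back to the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables when no config file is present, so one workaround is to export those before the script runs and have the script merge the two sources itself. A minimal sketch of such a merged inventory script, assuming the made-up group names and hosts are placeholders (the real Ganeti/EC2 lookups would replace them):

```python
#!/usr/bin/env python
# Sketch: merge two dynamic inventories (e.g. Ganeti + EC2) into the JSON
# shape Ansible expects from an inventory script invoked with --list.
# Group names and hostnames below are placeholders, not real infrastructure.
import json
import os


def aws_credentials():
    """Resolve credentials the way boto does when no boto config exists:
    from the standard environment variables (may return Nones)."""
    return (os.environ.get("AWS_ACCESS_KEY_ID"),
            os.environ.get("AWS_SECRET_ACCESS_KEY"))


def merge_inventories(*inventories):
    """Merge several {group: {"hosts": [...]}} dicts into one inventory,
    combining host lists and any per-host variables under _meta."""
    merged = {"_meta": {"hostvars": {}}}
    for inv in inventories:
        for group, data in inv.items():
            if group == "_meta":
                merged["_meta"]["hostvars"].update(data.get("hostvars", {}))
                continue
            merged.setdefault(group, {"hosts": []})
            merged[group]["hosts"].extend(data.get("hosts", []))
    return merged


if __name__ == "__main__":
    ganeti = {"ganeti": {"hosts": ["vm1.example.com"]}}   # placeholder lookup
    ec2 = {"ec2": {"hosts": ["i-abc123.example.com"]}}    # placeholder lookup
    print(json.dumps(merge_inventories(ganeti, ec2)))
```

The point is that the script, not AWX, decides where credentials come from, so as long as the env vars reach the process, boto's default chain should pick them up.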
One of the developers of OGS here! We're pretty excited to make this happen and continue development of the site and hopefully bring Go (the game) to a larger audience.
I have a cluster on AWS that I'm looking to upgrade, and I was holding off for some information on upgrade procedures, but all I see is "Instructions coming soon" under "Known Issues and Important Steps before Upgrading".
We have a brand new repo for fixing these things - https://github.com/kubernetes/kubernetes.github.io - but that said, if you EVER run into something misleading, outdated, or incomplete, please ping us immediately!
Thanks! I'm aware of that repo. I think the problem is much deeper than can be fixed by creating more issues in the repo. The docs for k8s really need to be approached holistically by a technical writer or someone else who works on it full time. What's there now, and the approach the project has taken towards docs thus far, has resulted in information that is inconsistent, not well organized, and not well maintained. I think comprehensive documentation should be considered a requirement for a release—just as important as the code or the test suite. Software is only useful if people can figure out how to use it, and it's very hard to do that when it's so difficult to find high quality information.
I used this as part of an Analytics BI project years ago. The biggest problem for me was dealing with the optimistic concurrency model.
We ran into enough issues with partial writes during rollbacks of failed transactions that we eventually had to abandon it in favor of Infobright. Having said that, queries were ridiculously fast for OLAP workflows involving millions of records.
Normally for OLAP workflows you want your updates either to be idempotent, or to be tagged with an id identifying the ETL job that generated them (very helpful for auditing and related functions), or to use a kind of "build and swap" model to ensure availability. That, and performance concerns, mean a lot of OLAP servers have very limited support for transactions.
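The job-id tagging pattern above also buys you idempotent re-runs for free: a job first deletes its own rows, then inserts, so replaying a failed load never double-counts. A minimal sketch using sqlite3 (the table and column names are made up for illustration):

```python
# Sketch: idempotent OLAP load where every fact row carries the id of the
# ETL job that produced it. Re-running a job deletes its own rows first,
# so a retry after a failure cannot double-count.
import sqlite3


def load_job(conn, job_id, amounts):
    """Idempotently (re)load one ETL job's rows into the fact table."""
    with conn:  # one transaction: the delete and inserts commit together
        conn.execute("DELETE FROM fact_sales WHERE etl_job_id = ?", (job_id,))
        conn.executemany(
            "INSERT INTO fact_sales (etl_job_id, amount) VALUES (?, ?)",
            [(job_id, amount) for amount in amounts],
        )


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (etl_job_id TEXT, amount REAL)")

load_job(conn, "job-42", [10.0, 20.0])         # first attempt
load_job(conn, "job-42", [10.0, 20.0, 30.0])   # re-run after a "failure"

total, = conn.execute("SELECT SUM(amount) FROM fact_sales").fetchone()
# total reflects only the latest run of job-42, not both runs
```

The "build and swap" variant is the same idea at table granularity: load into fact_sales_new, then rename it over fact_sales, so readers only ever see a complete build.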