For all the fancy bits of technology in your infrastructure there are still plenty of simple things that are useful and will probably never go away. One of these is blobs. Blobs are useful: they can be workload payloads, binaries for software, or whatever bits you need to deploy and manage.

Juju never had a way of knowing about blobs. Sure, you could plop something on an HTTP server and your charm could snag it. But then we’re not really solving any problems for you: you still need to manage that blob, version it, run a server to serve it to clients, and so on.

Ideally, these blobs are accounted for, just like anything else in your infrastructure, so it makes sense that as of Juju 2.0 we can model blobs as part of a model; we call it Juju Resources. That way we can track them, cache them, acl them, and so on, just like everything else.

Resources

A new concept has been introduced into Charms called “resources”. Resources are binary blobs that the charm can utilize, and they are declared in the metadata for the Charm. All declared resources have a version stored in the Charm Store; however, updated versions can also be uploaded from an admin’s local machine to the controller.

Change to Metadata

A new clause has been added to metadata.yaml for resources. Resources can be declared as follows:

resources:
  name:
    type: file                         # the only type initially
    filename: filename.tgz
    description: "One line that is useful when operators need to push it."

New User Commands

Three new commands have been introduced:

  1. juju list-resources

    Pretty obvious: this command shows the resources required by, and those in use by, an existing service or unit in your model.

  2. juju push-resource

    This command uploads a file from your local disk to the juju controller to be used as a resource for a service.

  3. juju charm list-resources

    juju charm is the juju CLI equivalent of the “charm” command used by charm authors, though only applicable functionality is mirrored. (Example invocations of all three commands follow this list.)
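
For illustration, here’s roughly how those three commands hang together, using a hypothetical foo service with a resource named bar (the same names as the deploy examples below); treat the exact arguments as a sketch and check juju help <command> for the authoritative syntax:

juju list-resources foo                       # resources for the deployed foo service
juju push-resource foo bar=/some/file.tgz     # upload a new revision of bar to the controller
juju charm list-resources foo                 # what resources the charm declares in the store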

In addition, resources may be uploaded when deploying or upgrading charms by specifying the --resource option. The option takes a name=filepath pair and may be repeated to upload more than one resource.

juju deploy foo --resource bar=/some/file.tgz --resource baz=./docs/cfg.xml

or

juju upgrade-charm foo --resource bar=/some/file.tgz --resource baz=./docs/cfg.xml

Where bar and baz are resources named in the metadata for the foo charm.
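
For completeness, the foo charm’s metadata.yaml would declare those two resources along the lines of the clause shown earlier; the filenames and descriptions here are made up for illustration:

resources:
  bar:
    type: file
    filename: file.tgz
    description: "Binary payload the service needs."
  baz:
    type: file
    filename: cfg.xml
    description: "Configuration file the operator pushes."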

Conclusion

It’s pretty simple. Put stuff in a place and be able to snag it later. People can use resources for all sorts of things.

  • Payloads for the service you’re deploying.
  • Software. Let’s face it, when you look at the amount of enterprise software in the wild, you’re not going to be able to apt everything; you can now gate trusted binaries into resources to be used by charms.

Hope you enjoy it!

I’ve pushed new sample bundles to the Juju Charm Store. The first is a simple mediawiki with mysql.

For a more scalable approach I’ve also pushed up a version with MariaDB, haproxy, and memcached. This allows you to add more wiki units to horizontally scale.
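
If you want to try them, deploying a bundle and scaling it out looks roughly like this; the bundle name below is a placeholder, so search the Charm Store for the real one:

juju deploy mediawiki-scalable    # placeholder bundle name, look it up in the store
juju add-unit mediawiki -n 2      # add two more wiki units to scale horizontally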

I’ll be working on a “smoosh everything onto one machine” bundle next, so stay tuned!

One of the nice things about system containers is that they make the cost of creating a new machine essentially zero. With LXD 2.0 around the corner this is now easier than ever. LXD now ships with native simplestreams support, which means we can now list and cache all sorts of interesting OS cloud images.

You can see every image provided by doing a lxc image list images:, but the nice thing is the syntax is easy to remember. You can now just launch all sorts of Linux Server cloud images from all sorts of vendors and projects:

$ lxc launch images:centos/7/amd64 test-centos
Creating test-centos
Retrieving image: 100%
Starting test-centos
$ lxc exec test-centos /bin/bash
[root@test-centos ~]# yum update
Loaded plugins: fastestmirror

… and so on. Give ’em a try:

lxc launch images:debian/sid/amd64
lxc launch images:gentoo/current/amd64
lxc launch images:oracle/6.5/i386

And of course Ubuntu is supported:

lxc launch ubuntu:14.04

So if you’re sick of manually snagging ISOs for things and keeping those up to date, then you’ll dig 16.04: just install LXD and you can launch almost any Linux instantly. We’ll keep ’em cached and updated for you too. I can lxc launch ubuntu:12.04 and do python --version faster than I can look it up on packages.ubuntu.com. That’s pretty slick.
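
That last claim is easy to check yourself; the container name here is arbitrary:

lxc launch ubuntu:12.04 precise-test
lxc exec precise-test -- python --version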

It’s been about six months since ricotz and mamarley started bringing fresh Nvidia drivers to Ubuntu users. I covered some of this in my SCaLE 14x talk on Ubuntu Gaming. If you missed SCaLE then feel free to check out Liz’s summary.

In that talk I summarized some of the challenges (and victories!) to making our platform be better for gamers. One of the obvious pain points is of course, delivering fresh drivers for people, which can be difficult (especially when it comes to managing regressions). First let’s look at the version breakdown:

346.72: 57
346.96: 292
352.21: 219
352.79: 62
355.06: 3291
355.11: 11483
358.09: 16949
358.16: 22317
361.18: 4076
361.28: 10219

And then by Ubuntu release:

precise: 630
trusty: 30665
vivid: 10908
wily: 21537
xenial: 5225

Trusty wins, no surprise there. Wow, way more people on Xenial than I expected, good to see people are testing our upcoming LTS! And finally, by architecture:

amd64: 65661
armhf: 31
i386: 3273

So when you add them all up, it’s about 70,000 downloads of Nvidia drivers from the PPA in the last six months. This is great, as it means there’s certainly a demand for fresh drivers, but it’s also frightening when you consider the end-user experience of traditional packaging and all the wonderful face punching that entails. But hey, that’s XCOM. Snappy can’t come fast enough!

Want to help? Buy a game and check out ppa:graphics-drivers.
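
If you want to kick the tires, the PPA is a couple of commands away. The versioned driver package changes over time (nvidia-361 was current when these stats were pulled), so check the PPA page for the latest:

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-361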

Oh and happy belated Vulkan day!

I was at Config Management Camp last week in Belgium and I ran into James Page, who was running OpenStack on his laptop. I mean a real OpenStack in containers, not a devstack, a real thing you could poke at. He would stand it up, do a bit of work on it, commit, test, tear it all down, retest, and so on.

He had repartitioned his hard drive so he could have a ZFS partition, and together with LXD and the OpenStack Juju Charms it all just worked and was very fast. His Thinkpad X230 was sweating a bit, but it all worked.

I had to have this. Real instances with IPs that behave just as they would on a real cloud, except you don’t spend money, and thanks to LXD and ZFS it’s hella fast. It’s up to you if you want to run xenial a few months before release, but for me it’s worth it, so I went all in. Here are my notes:

First off you need a machine. :) I needed to redo my workstation anyway so this became an evening of moving drives around and scrounging some parts. When I was done I had an i7 3770, 16GB of RAM, 4x2TB drives, and 2 SSDs.

Step 1 - Installation

Install Xenial. I installed this on one of my SSDs, using the normal ext4 filesystem. Next up were the 4x2TB drives and a 60GB Intel SSD: let’s put the spinning rust in a mirror and use the SSD as a cache device (EDIT: apparently the cache device in this context helps reads, not writes; thanks Manual Zachs for the correction). Feel free to set it up however you want, but I wanted a bunch of room so I could run large workloads and not worry about space or speed.

# apt install zfsutils-linux
# zpool create home mirror /dev/sda /dev/sdb /dev/sdc /dev/sdd cache /dev/sde

As you can tell, /dev/sde is the SSD and we’re just going to use the array as our home directory. If you’re doing this from your desktop session, log out first so nothing is sitting in your home directory when the pool mounts over it. After some activity, you can see the SSD start to be used as a cache device:

# zpool iostat -v

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
home         132G  7.12T      3     64   314K  5.34M
  sda       33.0G  1.78T      0     16  78.8K  1.33M
  sdb       32.9G  1.78T      0     16  78.0K  1.33M
  sdc       32.9G  1.78T      0     16  78.7K  1.34M
  sdd       32.9G  1.78T      0     16  78.8K  1.34M
cache           -      -      -      -      -      -
  sde       25.9G  30.0G      0     34  41.0K  4.19M
----------  -----  -----  -----  -----  -----  -----

Step 2 - LXD configuration

Now it’s time for our containers. First install what we need:

sudo apt install lxd
newgrp lxd

The newgrp command puts you in the lxd group, so you don’t even need to log out. Bad ass. Now we tell LXD to use our ZFS pool for the containers:

lxd init

Follow the directions: select zfs and put in your zpool name. Mine was called home, from the command above.
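
If you’d rather skip the interactive prompts, the daemon setting behind that question can be set directly; this is a sketch based on how I understand the LXD configuration keys, so double-check against the LXD docs:

lxc config set storage.zfs_pool_name home    # point LXD at the zpool created earlier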

LXD needs images of operating systems to launch containers (duh), so we’ll need to download them. While my host is xenial, we want trusty here because, just like in the real world, cloud workloads run on Ubuntu LTS:

lxd-images import ubuntu trusty amd64 --sync --alias ubuntu-trusty

Now chill for a minute while it fetches the image. OK, so now you’ve got LXD installed; let’s make sure it works by “sshing” into a container:

lxc launch ubuntu-trusty my-test-container
lxc exec my-test-container /bin/bash

And there you go, your own new OS container. Make as many as you want, go nuts. Make sure you check out the fine docs at linuxcontainers.org.

Exit out of that and move on to step 3!

Step 3 - Modelling Workloads on your shiny new setup

OK, now we need to put something awesome on this. Check out the docs for the LXD provider for Juju; here’s the TL;DR:

sudo apt-add-repository -y ppa:juju/devel
sudo apt update
sudo apt install juju-local

OK, now let’s tell Juju to use LXD:

juju init
juju switch lxd
juju bootstrap --upload-tools

Now let’s plop a workload on there … how about some realtime syslog analytics?

juju deploy realtime-syslog-analytics

Nine containers’ worth of Hadoop, Spark, Yarn, and Flume, with an Apache Zeppelin plopped on top to make it all pretty.

Since we’re fetching all those Hadoop resources it’ll take a bit; on my system with decent internet it was about 10 minutes total from zero to finished. Do a watch juju status and the bundle will update the status messages with exactly what it’s doing. Keep an eye on I/O and CPU usage while this is happening, and if you’re coming from the world of VMs, prepare to be impressed.

Step 4 - Small tweaks

apt update and apt upgrade can be slow. If you think about it, that’s a bunch of HTTP requests and deb package downloads, which then get unpacked and installed in the container. Multiply that by nine, all happening at the same time on your computer. We can mitigate this by telling Juju not to update/upgrade when we spawn a new instance. Find ~/.local/share/juju/environments.yaml and turn off updates for your lxd provider:

lxd:
    type: lxd
    enable-os-refresh-update: false
    enable-os-upgrade: false

Since we publish cloud images every few weeks anyway (and lxd will refresh these for you via a cron job) you don’t really need to have every update installed when doing development. For obvious reasons, we recommend you leave updates on when doing things “for real”.

Conclusion

Well, I hope you enjoy the speed and convenience. It’s a really nice combination of technologies. And I haven’t even gotten to things like rollback, snapshots, super dense stuff, and more complex workloads. LXD, ZFS, and Juju each have a ton more features that I won’t cover today, but this should get you started working faster!

In the meantime here are some more big data workloads you can play with. Next up will be OpenStack but that will be for another day.

I have so many people to thank this time around that I’m just going to post them all throughout the next few days. The first is Merlijn Sebrechts, one of the new breed of experts collaborating around Big Data.

Merlijn is bringing the Tengu Platform, a platform for big data experimentation, to users; you can find out more about it here.

You can find Merlijn’s work here.

Another day, another Ubuntu release! I know not everyone bothers to read mhall’s spreadsheet on where the money goes, so I decided to just post a few of the things I’ve used community funds for this cycle. I also hope this encourages those of you out there in the community to apply for these funds when you’re doing something awesome.

The one getting the most attention is the new Nvidia Driver PPA. We now have a really nice way to deliver new drivers for gamers. I applied for these funds to get Michael Marley and Rico Tzischichholz some new hardware so that they could run this PPA. It wasn’t too hard of a process, and now you can have a pretty kickass gaming experience in Ubuntu. Not bad!

This money also sends people to conferences, so if you’re at your favorite conference and you see someone sitting at the booth helping people with their laptop or answering questions about Ubuntu or showing people how to use Ubuntu, then there’s a good chance that you made that possible by your donations.

And lastly, we have events like the Ubucon Summit, which allow us to provide hours of technical content to our users for free. You can go to other, much more expensive conferences and not get nearly the amount of technical knowledge that you will get when you attend this Ubucon and SCaLE 14x.

So yes, we do use the money you donate for useful things!

After an exhausting and brilliant Juju Charmer Summit I took a week off so my parents could visit and hang out.

My dad and I try to do one project together every time we visit, so while we did do useful things around the house, this time we did something fun. We built a simulation racing chair with some wood, a recycled car seat, and these plans from Ricmotech.

Here are some build pics:

My dad scavenged some logos from a junkyard, so we decided to paint it green and make it a sim version of my old, beloved, 1999 Grand Prix GTP. Total dorkfest, I love it.

And of course, we raced it, here’s the video:

Some tips if you’re going to build an RS-1:

  • I found working with MDF to be painful; we rebuilt parts of it in normal plywood. For the shifter in particular, we used a 2x4 for the middle section for stiffness.
  • Wood filler is like, simracing bondo.
  • Everyone will make fun of you, and then line up to race.

NOTE: Updated for August 2015.

It’s been about a year since I started building my own Steam console for my living room. A ton has changed since then. SteamOS has been released, In Home Streaming is out of beta and generally speaking the living room experience has gotten a ton better.

This blog post will be a summary of what’s changed in the past year, in the hopes that it will help someone who might be interested in building their own “next-gen console” for about the same price, and take advantage of nicer hardware and all the things that PC gaming has to offer.

Step 1: Choosing the hardware

  • I consider the NVIDIA GTX 750Ti to be the best thing to happen in hardware for this sort of project. It’s based on their newest Maxwell technology so it runs cool, it does not need a special power supply plug, and it’s pretty small. It’s also between $120-$150 – which means nearly any computer is now capable of becoming a game console. And a competent one at that.

  • I have settled on the Cooler Master 110 case, which is one of the least obnoxious PC cases you can find that won’t look too bad in the living room. Unfortunately Valve’s slick-looking case did not kick the case makers into making awesome-looking living room style cases. The closest you can find is the Silverstone RVZ01, which has the right insides, but they ruined the outside with crazy plastic ribs. The Digital Storm Bolt II looks great, but you can’t buy the case separately. Both cases have CD drives for some reason, boo!

  • Nvidia has a great guide on building a PC within the console-price range if you want to look around. I also recommend checking out r/buildapc, which has tons of Mini-ITX/750Ti builds.

  • Other alternatives are the excellent Intel NUC and Gigabyte Brix. These make for great portable machines, but for the upcoming AAA titles for Linux like Metro Redux, Star Citizen, and so on I decided to go with a dedicated graphics card. Gigabyte makes a very interesting model that is the size of a NUC, but with a GTX 760(!). This looks to be ideal, but unfortunately when Linus reviewed it he found heat/throttling issues. When they make a Maxwell-based one of these it will likely be awesome.

  • Don’t forget the controller. The Xbox wireless ones will work out of the box. I recommend avoiding the off-brand dongles you see on Amazon, they can be hit or miss.

Step 2: Choosing the software

I’ve been using SteamOS since it came out. The genius of SteamOS is that fundamentally it does only two things: it boots, and then it runs Steam Big Picture mode (BPM). This means that for a dedicated console, the OS is really not important. I have two drives in the box, one with SteamOS, and one with Ubuntu running BPM. After running both I prefer Ubuntu/Steam to SteamOS:

  • Faster boot (Upstart v. SysV)
  • PPAs enable fresh access to new Nvidia drivers and Plex Home Theater
  • Newer kernels and access to HWE kernels over the next 5 years

I tend to alternate between the two, but since I am more familiar with Ubuntu it’s easier for me to work with, so the rest of this post will cover how to build a dedicated Steam box using Ubuntu.

This isn’t to say SteamOS is bad; in fact, setting it up is actually easier than doing the next few steps. Remember that the entire point is to not care about the OS underneath and get you into Steam. So build whatever is most comfortable for you!

Step 3: Installation

These are the steps I am currently doing. It’s not for beginners; you should be comfortable administering an Ubuntu system.

  • Install Ubuntu 14.04.
  • (Optional) - Install openssh-server. I don’t know about you but lugging a keyboard/mouse back and forth to my living room is not my idea of a good time. I prefer to sit on the couch, and ssh into the box from my laptop.
  • Add the graphics driver PPA.
  • Install the latest Nvidia drivers: as of this writing, nvidia-355. (A condensed command sketch follows this list.)
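
Condensed into commands, the post-install bits above look roughly like this; nvidia-355 was the current package as of this writing, so substitute whatever the PPA ships now:

sudo apt-get install openssh-server              # optional, for admin from the couch
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-355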

After you’ve installed the drivers and all the security updates you should reboot to get to your nice new clean desktop system. Now it’s time to make it a console:

  • Log in and install Steam. Log into Steam and make sure it works.
  • Add Marc Deslauriers’ SteamOS packages PPA. These are rebuilt for Ubuntu and he does a great job keeping them up to date.
  • sudo apt-get install steamos-compositor steamos-modeswitch-inhibitor steamos-xpad-dkms
  • Log out, and at the login screen, click on the Ubuntu symbol by the username and select the Steam session. This will get you the dedicated Steam session. Make sure that works, then exit out of it; now let’s make it so we can boot into that new session by default.
  • Enable autologin in LightDM after the fact so that when your machine boots it goes right into Steam’s Big Picture mode (see the sketch after this list).
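
For the autologin bit, a LightDM drop-in along these lines is the general idea; the filename, user, and session name are assumptions on my part, so check /usr/share/xsessions for the session name the steamos-compositor package actually installs:

# /etc/lightdm/lightdm.conf.d/50-steam-autologin.conf (hypothetical filename)
[SeatDefaults]
autologin-user=steam
autologin-session=steam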

We’re using Valve’s xpad module instead of xboxdrv because that’s what they use in SteamOS and I don’t want to deviate too much. But if you prefer xboxdrv, then follow this guide.

  • Steam updates itself at the client level, so there’s no need to worry about that. The final step for a console-like experience is to enable automatic updates for the OS; a minimal sketch follows. Remember you’re using PPAs, so if you’re not confident that you can fix things, just leave it and do maintenance by hand every once in a while.
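
If you do want automatic updates, Ubuntu’s stock unattended-upgrades mechanism is a reasonable start; note that out of the box it only covers the official archive, so your PPAs need to be added to its allowed origins:

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades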

Step 4: Home Theater Bling

If you’re going to have a nice living room box, then let’s use it for other things. I have a dedicated server with media that I share out with Plex Media Server, so in this step I’ll install the client side.

Plex Home Theater:

sudo add-apt-repository ppa:plexapp/plexht
sudo add-apt-repository ppa:pulse-eight/libcec
sudo apt-get update && sudo apt-get install plexhometheater

In Steam you can then click on the + symbol, Add a non-steam game, and then add Plex. Use the gamepad (not the stick) to navigate the UI once you launch it. If you prefer XBMC/Kodi you can install that instead. I found that the controller also works out of the box there, so it’s a nice experience no matter which one you choose.

Step 5: In Home Streaming

This is a killer Steam feature that allows you to stream your Windows games to your new console. It’s very straightforward: just have both machines on and logged into Steam on the same network; they will autodiscover each other, your Windows games will show up in your Ubuntu/Steam UI, and you can stream them. Though it works surprisingly well over wireless, you’ll definitely want to ensure you’ve got gigabit ethernet if you want to stream games at 1080p and 60 frames per second.

Conclusion

And that’s basically it! There’s tons of stuff I’ve glossed over, but these are the basic steps. There are lots of little things you can do, like removing a bunch of desktop packages you won’t need (so you don’t have to download and update them) and other tips and tricks; I’ll try to keep everyone up to date on how it’s going.

Enjoy your new next-gen gaming console!

TODO:

  • You can change out the Plymouth theme to use the one from SteamOS, but I have an SSD in the box and, combined with the fast boot, it never comes up for me anyway.
  • It’d be cool to make a prototype of Ubuntu Core and then provide Steam in an LXC container on top of that, so we don’t have to use a full-blown desktop ISO.