Greetings folks,

The Juju Ecosystem team at Canonical (joined remotely by community members) recently had a developer sprint in beautiful Dillon, Colorado to Get Things Done(™).

Here are the highlights:

Automated Charm Testing

Tim Van Steenburgh and Marco Ceppi made a ton of progress with automated charm testing. Here’s the preliminary state of the world:

Jenkins Jobs Fired off: 22

This enabled us to dedicate blocks of time to getting as many of those red charms to green as possible. The priority for our team over the next few weeks will be fixing these charms, gating new charms via this method, and kicking broken charms back to personal namespaces.

Ben Saller helped out by prototyping “Dockerizing” charm testing so that developers can test their charms in a fully containerized way. This will help CI by giving us isolation, density, and reliability.

Charm tests are now launched from the review queue to help gate charms based on tests passing.

Thanks to Aaron Bentley for supporting our efforts here!

Review Queue

The Charmers (Marco Ceppi, Charles Butler, and Matt Bruzek) dedicated time to getting through reviews. The whole team spent time creating fixes for the automated test results mentioned above. We’re in great shape to drive this down and never let it get out of control again, thanks to our new team review guidelines: http://review.juju.solutions/

The goal is to help submitters and reviewers know where they are in a review and what next steps are needed.

Here are the numbers:

  • Reviews Performed: 189
  • Commits: 228
  • Charms Promulgated: 10
  • Charms Unpromulgated: 7
  • Lines of Code touched: 34109 (artificially high due to SVG icons, heh)
  • Reviews Submitted: 84
  • Energy Drinks: 80

Some new features:

  • Users can now log in with Ubuntu SSO and see which reviews they have submitted and reviewed
  • Ability to query the review system and search/filter reviews based on several metrics (http://review.juju.solutions/search)
  • Ability for charmers to fire off an automated test of a charm on demand right from the queue. When an MP is done against a charm, we’ll now automatically reply to the MP with a link to the test results. \o/
  • You can now “lock” a review when you’re doing one so that the rest of the community can see when a review is claimed so we don’t duplicate work. (Essential for mass reviews!)
  • Queues divided and separated to highlight priority items and items for different teams

CloudFoundry

  • Improving the downloader/packaging story so it’s more reusable
  • Cory Johns developed a pattern for charm helpers for CloudFoundry; the CF sub-team feels this will be a useful pattern for other charmers. They’re calling it the “charm services framework”, expect to hear more from them in the future.
  • We were able to replicate the Juju/Rails Framework deployment of an application and compare doing the same thing on CF: https://plus.google.com/117270619435440230164/posts/gHgB6k5f7Fv
  • Whit concentrated on tracking changes to Pivotal’s build procedures.

Charm Developer Workflow

This involves two things:

“The first 30 minutes of Juju”

This primarily involved finding and fixing issues with our user and developer workflow. It included some initial work on what we’re calling “Landing Pages”: topic-based landing pages for the different areas where people use Juju. For example, a “Big Data” page with specific solutions for that field. We expect to have these for a bunch of different fields of study.

We have identified the following five charms as “flagbearers”: Rails (in progress), elasticsearch, postgresql, meteor, and chamilo. We consider these charms to be excellent examples of quality, integration with other tools, and usage of charm tools. We will be modifying the documentation to highlight these charms as reference points. All of these charms have tests now, though some might not have landed yet.

Better tools for Charm Authors:

Ben, Tim, and Whit have a prototype of a fully Dockerized developer environment that contains all of the local developer tools and all of the flagbearer charms. The intention is to also provide a fully bootstrapped local provider. The goal is “test anything in 30 seconds in a container”.
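
As a rough sketch of that workflow (the image name here is hypothetical; the prototype hadn’t been published at the time of writing):

# Pull an image containing the local dev tools and flagbearer charms (name assumed)
docker pull jujusolutions/charmbox
# Drop into a shell with everything pre-installed and run your charm's tests
docker run --rm -it jujusolutions/charmbox /bin/bash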

In addition to this, Adam Israel tackled some of our Vagrant development stories, which will allow us to provide a better Vagrant developer workflow. Thanks to Ben Howard and his team for helping us get these features into our boxes.

We expect both the Docker-based and Vagrant-based approaches to eventually converge. Having both now gives us a nice “spread” to cover developers on multiple operating systems with tools they’re familiar with.

Big Data

Amir/Chuck worked on the following things:

  • Upgrading the ELK stack for Trusty
  • Planning out new Landing Pages focused on the Big Data story
  • Bringing up existing Big Data (Hortonworks) Stack to Charm Store standards for Trusty, and getting those charms merged
  • Pre-planning for next phase of Big Data Workloads (MapR, Apache distributions)

Other

  • General familiarity training with MAAS and OpenStack on OBs and NUCs.
  • Very fast firehose drinking for new team members: Adam Israel, Randall Ross, and Kevin Monroe have joined the team.
  • Special thanks to Jose Antonio Rey, Sebas, and Josh Strobl for joining us to help get reviews and fixes into the store and documentation.
  • We have a new team blog at: http://juju-solutions.github.io/ (Beta, thanks Whit.)
  • Most of the topics here had corresponding fixes/updates to the Juju documentation.

ElasticSearch is one of those tools that’s just handy to have. When combined with Logstash and Kibana, you can use the “ELK” stack for all sorts of things.

One thing we’d like to do is bring these capabilities to our users. So I’ve been working on bundling together some of our ElasticSearch resources in Ubuntu so we can bring them to the cloud. It looks like this:

This is a standalone ElasticSearch cluster. I’ve added a few things:

A Production-ready Stack

One of the nice things about this bundle is that it uses our ElasticSearch charm. Right off the bat you’re getting a charm that we’re using in production today. That means it’s tested (see the included tests), and it also uses many of the modern techniques for writing a charm. In this case, Michael Nelson leveraged Ansible to do most of the heavy lifting. Being able to use multiple tools to get the job done is one of the nice things we support in charms, so if you prefer a certain tool, then by all means use it! It also consumes upstream packages directly, so you’ll always have a fresh version.

Why ElasticSearch?

The major use of ElasticSearch is within Ubuntu’s Software Store for Unity 8. On Ubuntu Touch, any time a user searches for something or navigates the store, they’re using ElasticSearch. ElasticSearch was chosen for a few reasons:

  • It allowed us to design the store so that every category is basically a search term. This allows the team to dynamically generate categories based on certain criteria; for example, a list of “Top 10 apps” might be different depending on whether you are in the US or in China. (See the sketch after this list.)
  • Since everything is designed around search, it lets the team “future proof” the Software Store for future categories and queries.
  • ElasticSearch was designed to scale much better than the other solutions we looked at. It can be horizontally scaled with just juju add-unit and no extra configuration.
  • Since the team didn’t have to worry about learning how to build search, they could concentrate on the store itself and let ElasticSearch do the heavy lifting.
  • Since ElasticSearch is driving the backend, this will enable Ubuntu Phone partners to offer customized views on top of the existing store without having to do major engineering work around building a branded store.
  • Our team found the ElasticSearch documentation to be excellent.
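
To make the “every category is a search term” idea concrete, a category page boils down to an ordinary query. A minimal sketch only; the index and field names are invented for illustration:

# A "Top 10 games" category is just a size-limited search (names hypothetical)
curl -s 'http://localhost:9200/apps/_search?q=category:games&size=10'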

The quick way to get up and running

Ok, so let’s deploy this bad boy. Assuming you’re brand new:

sudo apt-get install juju juju-quickstart
juju quickstart bundle:elasticsearch/cluster

At this point, follow the ncurses menus to pick which cloud you want to deploy to and add your credentials; Juju will then bootstrap and start deploying ES.

Total deployment time will depend on where you’re deploying to, but on AWS US East 1 I can go from zero to cluster in about 10 minutes. By default you get one Kibana node and one ElasticSearch node. For ElasticSearch, we ensure your node has at least 4 CPU cores and 16GB of RAM. You can follow the instructions to load up the sample data and the other sample plugins I’ve included. Just go to the IP address of the Kibana unit in your browser and start searching:
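
If you’re curious how that sizing is expressed, it’s just Juju constraints. A minimal sketch, assuming the service is named elasticsearch (the bundle already sets this up for you):

# New elasticsearch units get machines with at least 4 cores and 16GB of RAM
juju set-constraints --service elasticsearch cpu-cores=4 mem=16G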

Now you’re ready to horizontally scale:

 juju add-unit elasticsearch

To add a new node.

And that’s it

You now have an ElasticSearch cluster using the same best practices that we use in production. Amir Sanjar and I are also prototyping a bundle with the ElasticSearch Hadoop plugin, though that’s not quite ready (testing and help wanted!).

As you can see, we’ve barely scratched the surface. Chuck Butler has been reworking the Logstash charm for Trusty, and we expect that to land soon. We plan to make the charm a subordinate so you can just plop it onto any unit. The idea is to let people put Logstash’s indexer on every unit so they can shove everything they care about into ElasticSearch.
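
Once that lands, usage should look roughly like this; the charm name and relation here are assumptions until the subordinate charm is actually published:

juju deploy logstash
juju add-relation logstash elasticsearch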

It’s been about a year since I started building my own Steam console for my living room. A ton has changed since then. SteamOS has been released, In Home Streaming is out of beta and generally speaking the living room experience has gotten a ton better.

This blog post will be a summary of what’s changed in the past year, in the hopes that it will help someone who might be interested in building their own “next-gen console” for about the same price, and take advantage of nicer hardware and all the things that PC gaming has to offer.

Step 1: Choosing the hardware

  • I consider the NVIDIA GTX 750Ti to be the best thing to happen in hardware for this sort of project. It’s based on their newest Maxwell technology, so it runs cool, it does not need a special power supply plug, and it’s pretty small. It’s also between $120 and $150, which means nearly any computer is now capable of becoming a game console. And a competent one at that.

  • I have settled on the Cooler Master 110 case, which is one of the least obnoxious PC cases you can find and won’t look too bad in the living room. Unfortunately Valve’s slick-looking case did not kick the case makers into making awesome-looking living room style cases. The closest you can find is the Silverstone RVZ01, which has the right insides, but they ruined the outside with crazy plastic ribs. The Digital Storm Bolt II looks great, but you can’t buy the case separately. Both cases have CD drives for some reason, boo!

  • Nvidia has a great guide on building a PC within the console-price range if you want to look around. I also recommend checking out r/buildapc, which has tons of Mini-ITX/750Ti builds.

  • Another alternative is the excellent Intel NUC or Gigabyte Brix. These make for great portable machines, but for upcoming AAA Linux titles like Metro Redux, Star Citizen, and so on, I decided to go with a dedicated graphics card. Gigabyte makes a very interesting model that is the size of a NUC, but with a GTX 760(!). This looks ideal, but unfortunately when Linus reviewed it he found heat/throttling issues. When they make a Maxwell-based one of these, it will likely be awesome.

  • Don’t forget the controller. The Xbox wireless ones will work out of the box. I recommend avoiding the off-brand dongles you see on Amazon; they can be hit or miss.

Step 2: Choosing the software

I’ve been using SteamOS since it came out. The genius of SteamOS is that fundamentally it does only two things: it boots, and then it runs Steam Big Picture (BPM) mode. This means that for a dedicated console, the OS is really not important. I have two drives in the box, one with SteamOS and one with Ubuntu running BPM. After running both, I prefer Ubuntu/Steam to SteamOS:

  • Faster boot (Upstart v. SysV)
  • PPAs enable fresh access to new Nvidia drivers and Plex Home Theater
  • Newer kernels and access to HWE kernels over the next 5 years

I tend to alternate between the two, but since I am more familiar with Ubuntu, it’s easier for me to use, so the rest of this post will cover how to build a dedicated Steam box using Ubuntu.

This isn’t to say SteamOS is bad; in fact, setting it up is actually easier than doing the next few steps. Remember that the entire point is to not care about the OS underneath and to get you into Steam. So build whatever is most comfortable for you!

Step 3: Installation

These are the steps I am currently doing. This isn’t for beginners; you should be comfortable administering an Ubuntu system.

  • Install Ubuntu 14.04.
  • (Optional) - Install openssh-server. I don’t know about you but lugging a keyboard/mouse back and forth to my living room is not my idea of a good time. I prefer to sit on the couch, and ssh into the box from my laptop.
  • Add the xorg-edgers PPA. You don’t need this per se, but let’s go all in!
  • Install the latest Nvidia drivers: as of this writing, nvidia-graphics-drivers-343. (A command sketch follows this list.)
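
Roughly, those steps look like this; note that the binary driver package name is an assumption and may differ from the source package named above:

sudo apt-get install openssh-server          # optional, for remote administration
sudo add-apt-repository ppa:xorg-edgers/ppa  # bleeding-edge graphics stack
sudo apt-get update
sudo apt-get install nvidia-343              # package name assumed; check the PPA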

After you’ve installed the drivers and all the security updates you should reboot to get to your nice new clean desktop system. Now it’s time to make it a console:

  • Log in and install Steam. Log into Steam and make sure it works.
  • Add Marc Deslauriers’ SteamOS packages PPA. These are rebuilt for Ubuntu and he does a great job keeping them up to date.
  • sudo apt-get install steamos-compositor steamos-modeswitch-inhibitor steamos-xpad-dkms
  • Log out, and on the login screen, click the Ubuntu symbol by the username and select the Steam session. This will get you the dedicated Steam session; make sure it works. Exit out of that, and now let’s make it so we can boot into that new session by default.
  • Enable autologin in LightDM after the fact so that when your machine boots it goes right into Steam’s Big Picture mode. (A config sketch follows this list.)
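
Something like the following should do it. This is a minimal sketch: the file path, user name, and session name are assumptions, so check /usr/share/xsessions/ for the exact session name on your system.

# /etc/lightdm/lightdm.conf.d/50-steam.conf (path assumed)
# autologin-user is your login user; the session name below is assumed
[SeatDefaults]
autologin-user=steam
autologin-session=steam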

We’re using Valve’s xpad module instead of xboxdrv because that’s what they use in SteamOS and I don’t want to deviate too much. But if you prefer xboxdrv, then follow this guide.

  • Steam updates itself at the client level, so there’s no need to worry about that. The final step for a console-like experience is to enable automatic updates (a sketch follows). Remember, you’re using PPAs, so if you’re not confident that you can fix things, just leave it and do maintenance by hand every once in a while.
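
One way to do that is with unattended-upgrades; a minimal sketch (you may want to tune which origins it pulls from, since PPAs are involved):

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades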

Step 4: Home Theater Bling

If you’re going to have a nice living room box, then let’s use it for other things. I have a dedicated server with media that I share out with Plex Media Server, so in this step I’ll install the client side.

Plex Home Theater:

sudo add-apt-repository ppa:plexapp/plexht
sudo add-apt-repository ppa:pulse-eight/libcec
sudo apt-get update && sudo apt-get install plexhometheater

In Steam you can then click on the + symbol, choose “Add a non-Steam game”, and then add Plex. Use the gamepad (not the stick) to navigate the UI once you launch it. If you prefer XBMC/Kodi, you can install that instead. I found that the controller also works out of the box there, so it’s a nice experience no matter which one you choose.

Step 5: In Home Streaming

This is a killer Steam feature that allows you to stream your Windows games to your new console. It’s very straightforward: have both machines on and logged into Steam on the same network, and they will autodiscover each other. Your Windows games will show up in your Ubuntu/Steam UI, and you can stream them. Though it works surprisingly well over wireless, you’ll definitely want gigabit ethernet if you want to stream games at 1080p and 60 frames per second.

Conclusion

And that’s basically it! There’s tons of stuff I’ve glossed over, but these are the basic steps. There are lots of little things you can do, like removing a bunch of desktop packages you won’t need (so you don’t have to download and update them), and other tips and tricks. I’ll try to keep everyone up to date on how it’s going.

Enjoy your new next-gen gaming console!

TODO:

  • You can change out the Plymouth theme to use the one from SteamOS, but I have an SSD in the box, and combined with the fast boot it never comes up for me anyway. (See the sketch after this list.)
  • It’d be cool to make a prototype of Ubuntu Core and then provide Steam in an LXC container on top of that, so we don’t have to use a full-blown desktop ISO.
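
For the Plymouth item, the standard Ubuntu mechanism should work; a sketch, assuming a SteamOS-style theme package is already installed:

# Pick the new default theme, then rebuild the initramfs so it takes effect
sudo update-alternatives --config default.plymouth
sudo update-initramfs -u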

Prentice Hall has just released the 8th Ed. of “The Official Ubuntu Book”, authored by Matthew Helmke and Elizabeth K. Joseph with José Antonio Rey, Philip Ballew and Benjamin Mako Hill.

This is the book’s first update in 2 years and as the authors state in their Preface, “…a large part of this book has been rewritten—not because the earlier editions were bad, but because so much has happened since the previous edition was published. This book chronicles the major changes that affect typical users and will help anyone learn the foundations, the history, and how to harness the potential of the free software in Ubuntu.”

As with prior editions, publisher Prentice Hall has kindly offered to ship each approved LoCo team one (1) free copy of this new edition. To keep this as simple as possible, you can request your book by following these steps. The team contact shown on our LoCo Team List (and only the team contact) should send an email to Heather Fox at heather.fox@pearson.com and include the following details:

  • Your full name
  • Which team you are from
  • If your team resides within North America, please provide: Your complete street address (the book will ship by UPS)
  • If your team resides outside North America, you will first be emailed a voucher code to download the complete eBook bundle from the publisher site, InformIT, which includes the ePub/mobi/pdf files.

If your team does reside outside North America and you wish to be considered for a print copy, please provide:

Your complete street address, region, country AND IMPORTANT: Your phone number, including country and area code. (Pearson will make its best effort to arrange shipment through its nearest corporate office.)

A few notes:

  • Only approved teams are eligible for a free copy of the book.
  • Only the team contact for each team can make the request for the book.
  • There is a limit of (1) copy of each book per approved team.
  • Prentice Hall will cover postage, but not any import tax or other shipping fees.
  • When you have the books, it is up to you what you do with them. We recommend you share them between members of the team. LoCo Leaders: please don’t hog them for yourselves!

If you have any questions or concerns, please contact Pearson/Prentice Hall’s Heather Fox directly at heather.fox@pearson.com. Also, for those teams that are not approved or are yet to be approved, you can still score a rather nice 35% discount on the books by registering your LoCo with the Pearson User Group Program.

The following is a guest post from Curtis Hovey, the Juju release manager. You can find the original announcement on the Juju mailing list.

A new stable release of Juju, juju-core 1.20.0, is now available.

Getting Juju

juju-core 1.20.0 is available for utopic and backported to earlier series in the following PPA:

New and Notable

  • High Availability
  • Availability Zone Placement
  • Azure Availability Sets
  • Juju debug-log Command Supports Filtering and Works with LXC
  • Constraints Support instance-type
  • The lxc-use-clone Option Makes LXC Faster for Non-Local Providers
  • Support for Multiple NICs with the Same MAC
  • MAAS Network Constraints and Deploy Argument
  • MAAS Provider Supports Placement and add-machine
  • Server-Side API Versioning

High Availability

The Juju state-server (bootstrap node) can be placed into high availability mode. Juju will automatically recover when one or more of the state-servers fail. You can use the ‘ensure-availability’ command to create the additional state-servers:

juju ensure-availability

The ‘ensure-availability’ command creates 3 state servers by default, but you may use the ‘-n’ option to specify a larger number. The number of state servers must be odd. The command supports the ‘series’ and ‘constraints’ options like the ‘bootstrap’ command. You can learn more details by running ‘juju ensure-availability --help’.
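
For example, combining the options mentioned above (the values here are purely illustrative):

juju ensure-availability -n 5 --constraints mem=4G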

Availability Zone Placement

Juju supports explicit placement of machines to availability zones (AZs), and implicitly spreads units across the available zones.

When bootstrapping or adding a machine, you can specify the availability zone explicitly as a placement directive. e.g.

juju bootstrap --to zone=us-east-1b
juju add-machine zone=us-east-1c

If you don’t specify a zone explicitly, Juju will automatically and uniformly distribute units across the available zones within the region. Assuming the charm and the charm’s service are well written, you can rest assured that IaaS downtime will not affect your application. Commands you already use will ensure your services are always available. e.g.

juju deploy -n 10 <service>

When adding machines without an AZ explicitly specified, or when adding units to a service, the ec2 and openstack providers will now automatically spread instances across all available AZs in the region. The spread is based on density of instance “distribution groups”.

State servers compose a distribution group: when running ‘juju ensure-availability’, state servers will be spread across AZs. Each deployed service (e.g. mysql, redis, whatever) composes a separate distribution group; the AZ spread of one service does not affect the AZ spread of another service.

Amazon’s EC2 and OpenStack Havana-based clouds and newer are supported. This includes HP Cloud. Older versions of OpenStack are not supported.

Azure Availability Sets

Azure environments can be configured to use availability sets. This feature ensures services are distributed for high availability; as long as at least two units are deployed, Azure guarantees 99.95% availability of the service overall. Exposed ports will be automatically load balanced across all units within the service.

New Azure environments will have support for availability sets by default. To revert to the previous behaviour, the ‘availability-sets-enabled’ option must be set in environments.yaml like so:

availability-sets-enabled: false

Placement is disabled when ‘availability-sets-enabled’ is true. The option cannot be disabled after the environment is bootstrapped.

Juju debug-log Command Supports Filtering and Works with LXC

The ‘debug-log’ command shows the consolidated logs of all Juju agents running on all machines in the environment. The command operates like ‘tail -f’ to stream the logs to your terminal. The feature now supports local-provider LXC environments. Several options are available to select which log lines to display.

The ‘lines’ and ‘limit’ options allow you to select the starting log line and how many additional lines to display. The default behaviour is to show the last 10 lines of the log. The ‘lines’ option selects the starting line from the end of the log. The ‘limit’ option restricts the number of lines to show. For example, you can see just 20 lines from the last 100 lines of the log like this:

juju debug-log --lines 100 --limit 20

There are many ways to filter the juju log to see just the pertinent information. A juju log line is written in this format:

<entity> <timestamp> <log-level> <module>:<line-no> <message>
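
For instance, a line matching that format might look like this (the example line is invented for illustration):

machine-0: 2014-07-08 10:15:30 INFO juju.cmd supercommand.go:37 running jujud [1.20.0]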

The ‘include’ and ‘exclude’ options select the entity that logged the message. An entity is a juju machine or unit agent. The entity names are similar to the names shown by ‘juju status’. You can exclude all the log messages from the bootstrap machine that hosts the state-server like this:

juju debug-log --exclude machine-0

The options can be used multiple times to select the log messages. This example selects all the messages from a unit and its machine as reported by status:

juju debug-log --include unit-mysql-0 --include machine-1

The ‘level’ option restricts messages to the specified log-level or greater. The levels from lowest to highest are TRACE, DEBUG, INFO, WARNING, and ERROR. The WARNING and ERROR messages from the log can be seen like this:

juju debug-log --level WARNING

The ‘include-module’ and ‘exclude-module’ options are used to select the kind of message displayed. The module name is dotted. You can specify all or part of a module name to include or exclude messages from the log. This example progressively excludes more content from the logs:

juju debug-log --exclude-module juju.state.apiserver
juju debug-log --exclude-module juju.state
juju debug-log --exclude-module juju

The ‘include-module’ and ‘exclude-module’ options can be used multiple times to select the modules you are interested in. For example, you can see the juju.cmd and juju.worker messages like this:

juju debug-log --include-module juju.cmd --include-module juju.worker

The ‘debug-log’ command output can be piped to grep to filter the messages like this:

juju debug-log --lines 500 | grep amd64

You can learn more by running ‘juju debug-log --help’ and ‘juju help logging’.

Constraints Support instance-type

You can specify ‘instance-type’ with the ‘constraints’ option to select a specific image defined by the cloud provider. The ‘instance-type’ constraint can be used with Azure, EC2, HP Cloud, and all OpenStack-based clouds. For example, when creating an EC2 environment, you can specify ‘m1.small’:

juju bootstrap --constraints instance-type=m1.small

Constraints are validated by all providers to ensure conflicting values and unsupported options are rejected. Previously, Juju would reconcile such problems and select an image, possibly one that didn’t meet the needs of the service.

The lxc-use-clone Option Makes LXC Faster for Non-Local Providers

When ‘lxc-use-clone’ is set to true, the LXC provisioner will be configured to use cloning regardless of provider type. This option cannot be changed once it is set. You can set the option to true in environments.yaml like this:

lxc-use-clone: true

This speeds up LXC provisioning when using placement with any provider. For example, deploying mysql to a new LXC container on machine 0 will start faster:

juju deploy --to lxc:0 mysql

Support for Multiple NICs with the Same MAC

Juju now supports multiple physical and virtual network interfaces with the same MAC address on the same machine. Juju takes care of this automatically; there is nothing you need to do.

Caution: network settings are not upgraded from 1.18.x to 1.20.x. If you used Juju 1.18.x to deploy an environment with specified networks, you must redeploy your environment instead of upgrading to 1.20.0.

The output of ‘juju status’ will include information about networks when there is more than one. The networks will be presented in this manner:

machines: ...
services: ...
networks:
  net1:
    provider-id: foo
    cidr: 0.1.2.0/24
    vlan-tag: 42

MAAS Network Constraints and Deploy Argument

You can specify which networks to include or exclude as a constraint to the deploy command. The constraint is used to select a machine to deploy the service’s units to. The value of ‘networks’ is a comma-delimited list of Juju network names (provided by MAAS). Excluded networks are prefixed with a “^”. For example, this command specifies that the service requires the “logging” and “storage” networks and conflicts with the “db” and “dmz” networks:

juju deploy mongodb --constraints networks=logging,storage,^db,^dmz

The network constraint does not enable the network for the service. It only defines what machine to pick.

Use the ‘deploy’ command’s ‘networks’ option to specify service-specific network requirements. The ‘networks’ option takes a comma-delimited list of juju-specific network names. Juju will enable the networks on the machines that host service units.

Juju networking support is still experimental and under development, and is currently only supported with the MAAS provider.

juju deploy mongodb --networks=logging,storage

The ‘exclude-network’ option was removed from the deploy command as it is superseded by the constraint option.

There are plans to add support for network constraint and argument with Amazon EC2, Azure, and OpenStack Havana-based clouds like HP Cloud in the future.

MAAS Provider Supports Placement and add-machine

You can specify which MAAS host to place the juju state-server on with the ‘to’ option. To bootstrap on a host named ‘fnord’, run this:

juju bootstrap --to fnord

The MAAS provider supports the add-machine command now. You can provision an existing host in a MAAS-based Juju environment. For example, you can add a running machine named fnord like this:

juju add-machine fnord

Server-Side API Versioning

The Juju API server now has support for a Version field in requests that are made. For this release, there are no RPC calls that require anything other than ‘version=0’ which is the default when no Version is supplied. This should have limited impact on existing CLI or API users, since it allows us to maintain exact compatibility with existing requests. New features and APIs should be exposed under versioned requests.

For details on the internals (for people writing API clients), see this document.

Finally

We encourage everyone to subscribe to the mailing list at juju-dev at lists.canonical.com, or join us on #juju-dev on freenode.

PS. Juju just got 20% more amazing.

We’ve got some changes in Juju and the Juju ecosystem that have been landing this week.

Ian Booth announced the move of Juju core to github.com. You can find all our work at: https://github.com/juju.

Workflow instructions for contributing are available in the CONTRIBUTING file. Ian also adds:

Once the dust settles on the migration of juju-core, we’ll also be migrating various dependencies like goose, gwacl, gomaasapi and golxc.

You can find the code for Juju Core at: https://github.com/juju/juju

On a related note, we have a one way mirror of the Juju Charm Store as well: https://github.com/charms

You can combine these with Francesco Banconi’s git-deploy plugin to deploy right from GitHub. As an example:

juju git-deploy charms/mysql

Hopefully 2-way syncing will be possible soon, stay tuned!

The following is a guest post by Ian Booth:

Juju has the ability to set up and change logging levels at a package level so that problems within a particular area can be better diagnosed. Recently there were some issues with containers (both kvm and lxc) started by the local provider transitioning from pending to started. We wanted to be able to inspect the sequence of commands Juju uses to start a container. Fortunately, the Juju code which starts lxc and kvm containers does log out the actual commands used to download the requisite image, start the container, and so on. The logging level used for this information is TRACE. By default, Juju machine agents log at the DEBUG level, and that output can be seen when running ‘juju debug-log’. Unfortunately, this means the TRACE-level information we are interested in is not visible by default in the machine logs.

Luckily we can change the logging level used for particular packages so that the information is logged. This is done using the ‘logging-config’ attribute and can either be done at bootstrap:

juju bootstrap --logging-config=golxc=TRACE

or on a running system:

juju set-env logging-config=golxc=TRACE

As an aside, you can use:

juju get-env logging-config

to see what the current logging-config value is.

The logging-config value above turns on TRACE-level debugging for the golxc package, which is responsible for starting and managing lxc containers on behalf of Juju. For kvm containers, the package name is ‘kvm’.
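
So for kvm containers the equivalent would be the following; the semicolon-separated form for enabling both packages at once follows the same logging-config syntax:

juju set-env logging-config=kvm=TRACE
juju set-env logging-config="golxc=TRACE;kvm=TRACE"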

We can use debug-log to look at the logging we have just enabled. From 1.19 onwards, filtering can be used to show just what we are interested in. Run this command in a separate terminal:

juju debug-log --include-module=golxc

Now we can deploy a charm or use add-machine to initiate a new container. We can see the commands Juju issues to download the image and start the container. This allows us to see what parameters are passed in to the various lxc commands, and we could even run the commands manually if we wish, in order to reproduce and examine in more detail how they behave. The logging information produced when TRACE-level debugging is enabled for lxc startup looks like this:

machine-0: 2014-05-27 04:28:39 TRACE golxc.run.lxc-start golxc.go:439 run: lxc-start [--daemon -n ian-local-machine-1 -c /var/lib/juju/containers/ian-local-machine-1/console.log -o /var/lib/juju/containers/ian-local-machine-1/container.log -l DEBUG]
machine-0: 2014-05-27 04:28:40 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1
machine-0: 2014-05-27 04:28:45 TRACE golxc.run.lxc-start golxc.go:448 run failed output: lxc-start: command get_cgroup failed to receive response
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1

You can see that when the various commands are logged, the format is cmd [arg arg arg]. So to run these manually, leave out the []. You can also see that there was a problem starting the lxc container due to a cgroups issue. This error is shown in juju status, but it’s often useful to see what happened leading up to the error occurring.

In summary, Juju’s configurable logging output can be used to help diagnose issues and understand what Juju is doing under the covers. It offers the ability to turn on extra logging when required, and it can be turned off again when no longer required.

For those of you using Vagrant, we are now listing our Juju Vagrant boxes here:

For those of you on OS X and Windows, this is a slick way to get Juju with the GUI out of the box, deploying to containers, as a nice local development resource. Here are the docs on how to use them if you are unfamiliar with Vagrant.
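
If you haven’t used Vagrant before, the flow is roughly this; a sketch only, so substitute the box name and URL from the listing above:

vagrant box add juju-trusty <box-url>   # URL comes from the box listing
vagrant init juju-trusty
vagrant up                              # boots the box with Juju and the GUI ready
vagrant ssh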

Just how popular is Ubuntu with Vagrant users? 7 out of the top 10!

And just a reminder that we’ll be using these tomorrow (Friday) during our charm school on using Juju with Vagrant, see you at 1500 EDT on #juju and http://ubuntuonair.com!

For the past cycle we have been working with IBM to bring Ubuntu 14.04 to the POWER8 architecture. For my part I’ve been working on helping get our collection of over 170 services to Just Work for this new set of Ubuntu users.

This week Mark Shuttleworth and Doug Balog (IBM) demoed how people would use a POWER8 machine. Since Ubuntu is supported, it comes with the same great tools for deploying and managing your software in the cloud. Using Juju we were able to fire up Hadoop with Ganglia monitoring, a SugarCRM site with MariaDB and memcached, and finally a WebSphere Liberty application server serving a simple Pet Store:

No part of this demo is staged; I literally went from nothing to fully deployed in 178 seconds on a POWER8. Dynamic languages just work, and most things in the archive (that are not architecture specific) also just work; we recompiled the archive to ppc64le over the course of this cycle and if you’re using 14.04 you’re good to go.

For reference here’s the entire Juju demo as we did it from behind the curtain:

This is the power of Juju!