The following is a guest post from Curtis Hovey, the Juju release manager. You can find the original announcement on the Juju mailing list.

A new stable release of Juju, juju-core 1.20.0, is now available.

Getting Juju

juju-core 1.20.0 is available for utopic and backported to earlier series in the following PPA:
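
That PPA is presumably ppa:juju/stable, the same stable PPA shown in the Getting Started section later in this post. A minimal way to add it and install the release on Ubuntu (the juju-core package name is assumed to match this release):

sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-core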

New and Notable

  • High Availability
  • Availability Zone Placement
  • Azure Availability Sets
  • Juju debug-log Command Supports Filtering and Works with LXC
  • Constraints Support instance-type
  • The lxc-use-clone Option Makes LXC Faster for Non-Local Providers
  • Support for Multiple NICs with the Same MAC
  • MAAS Network Constraints and Deploy Argument
  • MAAS Provider Supports Placement and add-machine
  • Server-Side API Versioning

High Availability

The juju state-server (bootstrap node) can be placed into high availability mode. Juju will automatically recover when one or more of the state-servers fail. You can use the ‘ensure-availability’ command to create the additional state-servers:

juju ensure-availability

The ‘ensure-availability’ command creates 3 state servers by default, but you may use the ‘-n’ option to specify a larger number. The number of state servers must be odd. The command supports the ‘series’ and ‘constraints’ options like the ‘bootstrap’ command. You can learn more by running ‘juju ensure-availability --help’.
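
For example, a sketch of a larger deployment (the series and constraint values here are illustrative, using the options the command shares with ‘bootstrap’):

juju ensure-availability -n 5 --series trusty --constraints mem=4G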

Availability Zone Placement

Juju supports explicit placement of machines to availability zones (AZs), and implicitly spreads units across the available zones.

When bootstrapping or adding a machine, you can specify the availability zone explicitly as a placement directive, e.g.

juju bootstrap --to zone=us-east-1b
juju add-machine zone=us-east-1c

If you don’t specify a zone explicitly, Juju will automatically and uniformly distribute units across the available zones within the region. Assuming the charm and the charm’s service are well written, you can rest assured that IaaS downtime will not affect your application. Commands you already use will ensure your services are always available, e.g.

juju deploy -n 10 <service>

When adding machines without an AZ explicitly specified, or when adding units to a service, the ec2 and openstack providers will now automatically spread instances across all available AZs in the region. The spread is based on the density of instance “distribution groups”.

State servers compose a distribution group: when running ‘juju ensure-availability’, state servers will be spread across AZs. Each deployed service (e.g. mysql, redis, whatever) composes a separate distribution group; the AZ spread of one service does not affect the AZ spread of another service.
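
For example, each of these services forms its own distribution group, so its units are spread across the region’s zones independently of the other (the unit counts are illustrative):

juju deploy -n 3 mysql
juju deploy -n 3 redis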

Amazon EC2 and clouds based on OpenStack Havana or newer are supported; this includes HP Cloud. Older versions of OpenStack are not supported.

Azure Availability Sets

Azure environments can be configured to use availability sets. This feature ensures services are distributed for high availability; as long as at least two units are deployed, Azure guarantees 99.95% availability of the service overall. Exposed ports will be automatically load balanced across all units within the service.

New Azure environments will have support for availability sets by default. To revert to the previous behaviour, the ‘availability-sets-enabled’ option must be set in environments.yaml like so:

availability-sets-enabled: false

Placement is disabled when ‘availability-sets-enabled’ is true. The option cannot be disabled after the environment is bootstrapped.
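
As a sketch, the option sits alongside the rest of your Azure settings in environments.yaml (the environment name and the other keys shown here are illustrative):

environments:
  azure:
    type: azure
    availability-sets-enabled: false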

Juju debug-log Command Supports Filtering and Works with LXC

The ‘debug-log’ command shows the consolidated logs of all juju agents running on all machines in the environment. The command operates like ‘tail -f’ to stream the logs to your terminal. The feature now supports local-provider LXC environments. Several options are available to select which log lines to display.

The ‘lines’ and ‘limit’ options allow you to select the starting log line and how many additional lines to display. The default behaviour is to show the last 10 lines of the log. The ‘lines’ option selects the starting line from the end of the log. The ‘limit’ option restricts the number of lines to show. For example, you can see just 20 lines from the last 100 lines of the log like this:

juju debug-log --lines 100 --limit 20

There are many ways to filter the juju log to see just the pertinent information. A juju log line is written in this format:

<entity> <timestamp> <log-level> <module>:<line-no> <message>
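
For example, this line taken from the container-debugging walk-through later in this post follows that format, with ‘machine-0’ as the entity and ‘golxc.run.lxc-ls’ as the module:

machine-0: 2014-05-27 04:28:40 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]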

The ‘include’ and ‘exclude’ options select the entity that logged the message. An entity is a juju machine or unit agent. The entity names are similar to the names shown by ‘juju status’. You can exclude all the log messages from the bootstrap machine that hosts the state-server like this:

juju debug-log --exclude machine-0

The options can be used multiple times to select the log messages. This example selects all the messages from a unit and its machine as reported by status:

juju debug-log --include unit-mysql-0 --include machine-1

The ‘level’ option restricts messages to the specified log-level or greater. The levels from lowest to highest are TRACE, DEBUG, INFO, WARNING, and ERROR. The WARNING and ERROR messages from the log can be seen like this:

juju debug-log --level WARNING

The ‘include-module’ and ‘exclude-module’ options are used to select the kind of messages displayed. The module name is dotted. You can specify all or some of a module name to include or exclude messages from the log. This example progressively excludes more content from the logs:

juju debug-log --exclude-module juju.state.apiserver
juju debug-log --exclude-module juju.state
juju debug-log --exclude-module juju

The ‘include-module’ and ‘exclude-module’ options can be used multiple times to select the modules you are interested in. For example, you can see the juju.cmd and juju.worker messages like this:

juju debug-log --include-module juju.cmd --include-module juju.worker

The ‘debug-log’ command output can be piped to grep to filter the messages like this:

juju debug-log --lines 500 | grep amd64

You can learn more by running ‘juju debug-log --help’ and ‘juju help logging’.

Constraints Support instance-type

You can specify ‘instance-type’ with the ‘constraints’ option to select a specific machine type defined by the cloud provider. The ‘instance-type’ constraint can be used with Azure, EC2, HP Cloud, and all OpenStack-based clouds. For example, when creating an EC2 environment, you can specify ‘m1.small’:

juju bootstrap --constraints instance-type=m1.small

Constraints are validated by all providers to ensure that conflicting values and unsupported options are rejected. Previously, juju would reconcile such problems and select an instance type, possibly one that didn’t meet the needs of the service.
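
For example, combining an instance type with an explicit memory constraint is the kind of conflict that is now rejected rather than silently reconciled (this particular pairing and its rejection are an illustrative assumption; the exact error text varies by provider):

juju bootstrap --constraints "instance-type=m1.small mem=8G"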

The lxc-use-clone Option Makes LXC Faster for Non-Local Providers

When ‘lxc-use-clone’ is set to true, the LXC provisioner will be configured to use cloning regardless of provider type. This option cannot be changed once it is set. You can set the option to true in environments.yaml like this:

lxc-use-clone: true

This speeds up LXC provisioning when using placement with any provider. For example, deploying mysql to a new LXC container on machine 0 will start faster:

juju deploy --to lxc:0 mysql

Support for Multiple NICs with the Same MAC

Juju now supports multiple physical and virtual network interfaces with the same MAC address on the same machine. Juju takes care of this automatically; there is nothing you need to do.

Caution: network settings are not upgraded from 1.18.x to 1.20.x. If you used juju 1.18.x to deploy an environment with specified networks, you must redeploy your environment instead of upgrading to 1.20.0.

The output of ‘juju status’ will include information about networks when there is more than one. The networks will be presented in this manner:

machines: ...
services: ...
networks:
  net1:
    provider-id: foo
    cidr: 0.1.2.0/24
    vlan-tag: 42

MAAS Network Constraints and Deploy Argument

You can specify which networks to include or exclude as a constraint to the deploy command. The constraint is used to select a machine to deploy the service’s units to. The value of ‘networks’ is a comma-delimited list of juju network names (provided by MAAS). Excluded networks are prefixed with a “^”. For example, this command specifies that the service requires the “logging” and “storage” networks and conflicts with the “db” and “dmz” networks:

juju deploy mongodb --constraints networks=logging,storage,^db,^dmz

The network constraint does not enable the network for the service. It only defines what machine to pick.

Use the ‘deploy’ command’s ‘networks’ option to specify service-specific network requirements. The ‘networks’ option takes a comma-delimited list of juju network names. Juju will enable the networks on the machines that host the service’s units:

juju deploy mongodb --networks=logging,storage

Note that Juju networking support is still experimental and under development; it is currently supported only by the MAAS provider.

The ‘exclude-network’ option was removed from the deploy command as it is superseded by the constraint option.

There are plans to add support for the network constraint and argument to Amazon EC2, Azure, and OpenStack Havana-based clouds like HP Cloud in the future.

MAAS Provider Supports Placement and add-machine

You can specify which MAAS host to place the juju state-server on with the ‘to’ option. To bootstrap on a host named ‘fnord’, run this:

juju bootstrap --to fnord

The MAAS provider now supports the add-machine command. You can provision an existing host in a MAAS-based Juju environment. For example, you can add a running machine named ‘fnord’ like this:

juju add-machine fnord

Server-Side API Versioning

The Juju API server now supports a Version field in requests. For this release, there are no RPC calls that require anything other than ‘version=0’, which is the default when no Version is supplied. This should have limited impact on existing CLI or API users, since it allows us to maintain exact compatibility with existing requests. New features and APIs will be exposed under versioned requests.
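
For those writing API clients, a rough sketch of what a versioned request might look like on the wire is below; the field layout is an assumption based on the API’s JSON encoding and is illustrative only, so consult the document referenced below for the authoritative details:

{
  "RequestId": 1,
  "Type": "Client",
  "Version": 0,
  "Request": "FullStatus",
  "Params": {}
}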

For details on the internals (for people writing API clients), see this document.

Finally

We encourage everyone to subscribe to the mailing list at juju-dev at lists.canonical.com, or join us in #juju-dev on freenode.

PS. Juju just got 20% more amazing.

We’ve got some changes landing in Juju and the Juju ecosystem this week.

Ian Booth announced the move of Juju core to github.com. You can find all our work at: https://github.com/juju.

Workflow instructions for contributing are available in the CONTRIBUTING file. Ian also adds:

Once the dust settles on the migration of juju-core, we’ll also be migrating various dependencies like goose, gwacl, gomaasapi and golxc.

You can find the code for Juju Core at: https://github.com/juju/juju

On a related note, we have a one-way mirror of the Juju Charm Store as well: https://github.com/charms

You can combine these with Francesco Banconi’s git-deploy plugin to deploy right from github. As an example:

juju git-deploy charms/mysql

Hopefully two-way syncing will be possible soon; stay tuned!

The following is a guest post by Ian Booth:

Juju has the ability to set up and change logging levels at a package level so that problems within a particular area can be better diagnosed. Recently there were some issues with containers (both kvm and lxc) started by the local provider transitioning from pending to started. We wanted to be able to inspect the sequence of commands Juju uses to start a container. Fortunately, the Juju code which starts lxc and kvm containers does log the actual commands used to download the requisite image, start the container, and so on. The logging level used for this information is TRACE. By default, Juju machine agents log at the DEBUG level, and this information can be seen when running ‘juju debug-log’. So, unfortunately, the information we are interested in is not visible by default in the machine logs.

Luckily we can change the logging level used for particular packages so that the information is logged. This is done using the ‘logging-config’ attribute and can either be done at bootstrap:

juju bootstrap --logging-config=golxc=TRACE

or on a running system:

juju set-env logging-config=golxc=TRACE

As an aside, you can use:

juju get-env logging-config

to see what the current logging-config value is.

The logging-config value above turns on TRACE level debugging for the golxc package, which is responsible for starting and managing lxc containers on behalf of Juju. For kvm containers, the package name is ‘kvm’.
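
For example, to get the same trace-level output for kvm containers instead:

juju set-env logging-config=kvm=TRACE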

We can use debug-log to look at the logging we have just enabled. From 1.19 onwards, filtering can be used to show just what we are interested in. Run this command in a separate terminal:

juju debug-log --include-module=golxc

Now we can deploy a charm or use add-machine to initiate a new container. We can see the commands Juju issues to download the image and start the container. This allows us to see what parameters are passed in to the various lxc commands, and we could even manually run the commands if we wish, in order to reproduce and examine in more detail how the commands behave. An example of the logging information when adding TRACE level debugging to lxc startup looks like:

machine-0: 2014-05-27 04:28:39 TRACE golxc.run.lxc-start golxc.go:439 run: lxc-start [--daemon -n ian-local-machine-1 -c /var/lib/juju/containers/ian-local-machine-1/console.log -o /var/lib/juju/containers/ian-local-machine-1/container.log -l DEBUG]
machine-0: 2014-05-27 04:28:40 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1
machine-0: 2014-05-27 04:28:45 TRACE golxc.run.lxc-start golxc.go:448 run failed output: lxc-start: command get_cgroup failed to receive response
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1

You can see that when the various commands are logged, the format is cmd [arg arg arg]; to run these manually, leave out the []. You can also see that there was a problem starting the lxc container due to a cgroups issue. This error is shown in juju status, but often it’s useful to see what happened leading up to the error.
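
For example, taking one of the lxc-info lines from the log above and dropping the brackets gives a command you can run by hand on the host:

lxc-info -n ian-local-machine-1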

In summary, Juju’s configurable logging output can be used to help diagnose issues and understand what Juju is doing under the covers. It offers the ability to turn on extra logging when required, and it can be turned off again when no longer required.

For those of you using Vagrant, we are now publishing a list of our Juju Vagrant boxes.

For those of you on OSX and Windows, this is a slick way to get Juju with the GUI out of the box, deploying to containers for a nice local development resource. Here are the docs on how to use them if you are unfamiliar with Vagrant.

Just how popular is Ubuntu with Vagrant users? 7 out of the top 10!

And just a reminder that we’ll be using these tomorrow (Friday) during our charm school on using Juju with Vagrant; see you at 1500 EDT on #juju and http://ubuntuonair.com!

For the past cycle we have been working with IBM to bring Ubuntu 14.04 to the POWER8 architecture. For my part I’ve been working on helping get our collection of over 170 services to Just Work for this new set of Ubuntu users.

This week Mark Shuttleworth and Doug Balog (IBM) demoed how people would use a POWER8 machine. Since Ubuntu is supported, it comes with the same great tools for deploying and managing your software in the cloud. Using Juju we were able to fire up Hadoop with Ganglia monitoring, a SugarCRM site with MariaDB and memcached, and finally a Websphere Liberty application server serving a simple Pet Store.

No part of this demo is staged; I literally went from nothing to fully deployed in 178 seconds on a POWER8. Dynamic languages just work, and most things in the archive (that are not architecture specific) also just work; we recompiled the archive to ppc64le over the course of this cycle and if you’re using 14.04 you’re good to go.

For reference, here’s the entire Juju demo as we did it from behind the curtain.

This is the power of Juju!

We’ve been trying to get our Java developer story in order, as it’s one of the areas users have told us they’d like to see made easier to manage. We brought Matthew Bruzek on board, who will be focusing on our Java framework stuff.

Here’s his first cut, and it’s a big one. We now finally have an all-encompassing Apache Tomcat charm. This charm allows you to develop your app on top of it as a subordinate charm and deploy it. As an example, let’s deploy OpenMRS.

juju deploy tomcat7
juju deploy openmrs
juju deploy mysql
juju add-relation openmrs mysql
juju add-relation openmrs tomcat7
juju expose tomcat7

The magic happens when you relate openmrs to tomcat7: that installs OpenMRS in Tomcat, and then you’re good to go.

The charm contains a ton of options for you to check out, including deploying to Tomcat6 if that’s how you roll.

Those of you looking for “just shove my war file out there” can check out Robert Ayre’s j2ee-deployer charm. Here’s a blog post on how to use it.

That one is not in the charm store yet, but we’ll get to it!

Some good news for monitoring SaaS fans: ServerDensity is now in the Juju Charm Store. This means that the ServerDensity agent is now available to tack onto any deployed service in your environment. We call this kind of charm a subordinate; here’s how you’d use it.

Let’s deploy a MongoDB Cluster:

juju-quickstart bundle:mongodb/cluster

This is a 13-node cluster; let’s add some monitoring! First we’ll deploy serverdensity, and then perform the crucial step of relating serverdensity to the service:

juju deploy serverdensity
juju add-relation mongos serverdensity

Now let’s tell ServerDensity that we want it to monitor Mongo. We also need to supply an API key, since the service requires one:

juju set serverdensity agentkey="Keyfromyourserverdensitycontrolpanel"
juju set serverdensity mongodb-server="ip of mongos node"
juju set serverdensity mongodb-dbstats=true

Now, why do I need to give it the IP of the mongos node? Well, in v1 of this charm we’re setting this manually, but in the future we could take care of those settings as soon as the relation is made instead of making the user set those config values. But hey, why nitpick? Monitoring a MongoDB cluster with 5 commands isn’t a bad place to start!

There are a bunch of configuration options for you to play with for service monitoring with Apache, MySQL, and RabbitMQ. When used standalone, the charm will just do machine monitoring for whatever you relate it to.

“Rick, I want you to fire up an empty cloud deployment”, said no boss ever.

The teams who make up “The Ubuntu Server Team” work on different bits. Some work to make OpenStack easy to deploy, some work on packaging, and some work on building images for all the public clouds we support. All this is fine and dandy, but at the end of the day our users deploy things on top of Ubuntu Server, and these workloads are really what it’s all about.

Over the past two years it’s become increasingly obvious to us that orchestrating workloads is one of the most important things we can help users with. What’s the point of an easy-to-use OpenStack installer if people can’t plop Hadoop, or Cassandra, or their Rails app right on top easily?

One part of this solution is our work on Juju, which makes orchestration simple; we can manage deployments at the service level while not caring (too much) about individual instances. Some people wanted and needed things to be even higher level: “Look, I’ve got a pile of machines over here, I need a MongoDB cluster”, and you don’t have the time! Or, increasingly, you know how to do all these steps, but need a way to make them portable so everyone else in your organization can benefit from your hard work.

So today we announced Juju Bundles. Technically, they’re very simple: a bundle is a yaml file that describes charms and their relationships to other charms. It can contain machine constraints, or any of the parameters you’d ship in a charm. That means deploying large swaths of services is now one command.

juju quickstart bundle:~charmers/mongodb/cluster
juju quickstart bundle:~charmers/hadoop/cluster
juju quickstart bundle:~charmers/mediawiki/scalable

There’s a 13-node, 3-shard MongoDB cluster, a 7-node starter Hadoop cluster, and a 5-node MediaWiki. All of them can be horizontally scaled out of the box. Run ‘juju add-unit -n10 hadoop-slavecluster’ and go to town. Not bad!
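
For the curious, a bundle file itself is roughly a sketch like the following; the bundle name, charm URLs, and options here are illustrative, in the deployer-style yaml format bundles used at the time:

my-bundle:
  services:
    mediawiki:
      charm: cs:precise/mediawiki
      num_units: 1
      options:
        name: My Wiki
    mysql:
      charm: cs:precise/mysql
      num_units: 1
      constraints: mem=2G
  relations:
    - - mediawiki:db
      - mysql:db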

Getting Started

If you’re on Ubuntu, Rick has the info in his blog post; you basically:

sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-quickstart

Then run one of the commands I mentioned:

juju quickstart bundle:~charmers/mongodb/cluster

Juju will then walk you through setting up an environment. Pick one (AWS, Microsoft Windows Azure, Joyent, HP Cloud, or your own OpenStack or laptop via LXC), then fill out your creds.

Now get a cup of coffee, watch the console for the magic, or maybe read the README.

Now go put that bad boy to work.

Quickstart is not yet available for OSX or Windows (we’re working on it), but if you want to try it out we’ve made a Vagrant box available for you to play with.

The Future

Over the course of the next few weeks as we ramp up to 14.04, we’ll be doing a few things. We’ll be tightening up the URLs to something easier to remember, like ‘juju quickstart bundle:mongodb/cluster’. Obviously we’d love to see more bundles being submitted from the community.

Check out the documentation for more, and for a more technical description of how bundles work check out Rick’s blog post. We also made a tutorial video on how to make and share your own bundles.

It’s that time of the year when elections are needed for 2 available moderator positions on Ask Ubuntu.

Here is the list of nominees; if you have 150 rep on the site, you can cast a vote.

The primary closes in 3 days, after which the election begins. Please take the time to read the statements from each of the candidates and cast a vote!