Most recent items from Ubuntu feeds:
The Fridge: Ubuntu Online Summit: 15-16 November 2016 from Planet Ubuntu

The next Ubuntu Online Summit is going to happen:
15-16 November 2016
At the event we are going to celebrate the 16.10 release and all the great new things in it, and talk about what’s coming up in Ubuntu 17.04.
The event will be added to summit.ubuntu.com shortly and you will all receive a reminder or two to add your sessions.
We’re looking forward to seeing you there.
Originally posted to the ubuntu-news-team mailing list on Thu Sep 22 09:46:35 UTC 2016 by Daniel Holbach

2 days ago

Ubuntu Insights: Rocket.chat, a new snap is landing on your Nextcloud box and beyond! from Planet Ubuntu

Last week Nextcloud, Western Digital and Canonical launched the Nextcloud box, a simple box powered by a Raspberry Pi that makes it easy to build a private cloud storage service at home. With the Nextcloud box, you are already securely sharing files with your colleagues, friends, and/or family.
The team at Rocket.chat has been asking “Why not add a group chat so you can chat in real-time or leave messages for one another?”
We got in touch with the team to hear about their story!
Here’s Sing Li from Rocket.chat telling us about their journey to Ubuntu Core!
Introducing Rocket.chat
Rocket.chat is a Slack-like server that you can run on your own servers or at home… and installing it with Ubuntu Core couldn’t be easier. Your private chat server will be up and running in 2 minutes.
If you’ve not heard of them, Rocket.Chat is one of the largest MIT-licensed open source group chat projects on GitHub, with over 200 global contributors, 9,000 stars, and 40,000 community servers deployed worldwide.
Rocket.chat is an optimized Node.js application, bundled as a compressed tarball. To install it, a system administrator would need to untar the optimized app and install its dependencies on the server. They would then need to configure the server via environment variables and set it up to survive restarts using a service manager. Combine that with the typical routine of setting up and configuring a reverse proxy and getting DNS plus SSL set up correctly, and system administrators had to spend on average 3 hours deploying the Rocket.chat server before they could even see the first login page.
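To make that concrete, a manual deployment looks roughly like this (a sketch; the URL, paths and variable values are illustrative assumptions, not official Rocket.Chat instructions):
curl -L https://releases.rocket.chat/latest/download -o rocket.chat.tgz
tar -xzf rocket.chat.tgz
(cd bundle/programs/server && npm install)
export ROOT_URL=http://chat.example.com MONGO_URL=mongodb://localhost:27017/rocketchat PORT=3000
cd bundle && node main.js
And that is before writing a service manager unit, configuring the reverse proxy, and sorting out DNS and SSL.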
Being a mature production server, Rocket.Chat also has a very large configuration surface. We currently have over a hundred configurable settings for all sorts of different use cases. Getting the configuration just right for a use case adds to the already long time required to deploy the server.
Making installation a breeze
We started to look for alternatives that could ubiquitously deliver our server product to the largest possible body of end users (deployers of chat servers) and provide a pleasant, uncomplicated out-of-box experience. If it could help with updating software and expediting the installation of new versions, that would be a bonus. If it could further reduce the complexity of our build pipeline, it would be a sure winner.
When we first saw snaps, we knew they had everything we were looking for. The ubiquity of Ubuntu 16.04 LTS and Ubuntu 14.04 LTS, plus Ubuntu Core for IoT, means we can deliver Rocket.Chat to an incredibly large audience of server deployers globally, all via one single package, while also catering for the increasing number of users asking us to run a Rocket.chat server on a Raspberry Pi.
With snaps, we have only one bundle for all the supported Linux platforms, and it is right here in the snap store. What’s really cool is that even the formerly manually built ARM-based server, for Raspberry Pi and other IoT devices, can be part of the same snap. It enjoys the same simplicity of installation and the same transactional updates as the Intel 64-bit Linux platforms.
Our next step will be to distribute our desktop client with snaps. We have a vision that once we tag a release, within seconds a build process is kicked off through the CI and published to the snap store.
The asymmetric nature of the snap delivery pipeline has definite benefits. By paying the cost of heavy processing work up front during the build phase, snap deployment is lightning fast. On most modern Intel servers or VPSes, `snap install rocketchat.server` takes only about 30 seconds. The server is ready to handle hundreds of users immediately, available at URL `http://<server address>:3000`.
Consistent with the snap design philosophy, we have done substantial engineering to come up with a default set of ready-to-go configuration settings that supports the most common use cases of a simple chat server.
What this enables us to do is deliver a 30-second “instantly gratifying” experience to system administrators who’d like to give Rocket.Chat server a try – with no obligation whatsoever. Try it; if you like it, keep it and learn.
All of the complexity of configuring a full-fledged production server is folded away. The system administrator can learn more about configuration and customization at her/his own pace later.
Distributing updates in a flash
We work on new features on a develop branch (Github), and many of our community members test on this branch with us. Once the features are deemed stable, they are merged down to master (Github), where our official releases (Github) reside. Both branches are wired up to continuous integration via Travis. Our Travis script optimizes, compresses and bundles the distributions, and then pushes them out to the various distribution channels that we support, many of which require further special sub-builds and repackaging that can take substantial time.
Some distributions even call for a manual build every release. For example, we build manually for the ARM architecture (Github) to support a growing community of Raspberry Pi makers, hobbyists, and IoT enthusiasts.
In addition to the server builds, we also have our desktop client (Github). Every new release requires a manual build on Windows, OS X, and Ubuntu. This build process requires a member of our team to physically log in to Windows (x86 and x86_64), OS X, and Ubuntu (x86 and x86_64) machines to create our releases. Once a release was built, our users then had to go to our release page to manually download the Ubuntu release and install it.
That’s where snaps also bring a much simpler distribution mechanism and a better user experience. When we add features and push a new version, every one of our users will be enjoying it within a few hours, automatically. The version update is transactional, so they can always roll back to the previous version if they’d like.
In fact, we make use of the stable and edge channels, corresponding to our master and develop branches. Community members helping us test the latest software are on the edge channel, and often get multiple updates throughout the day as we fix bugs and add new features.
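For a user, moving between those channels is a couple of standard snap commands (a sketch, using the snap name mentioned above):
sudo snap install rocketchat.server
sudo snap refresh --channel=edge rocketchat.server
sudo snap revert rocketchat.server
The last command is the transactional rollback mentioned earlier: it returns the snap to the previously installed revision.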
We look forward to the point where our desktop client is also available as a snap, and our users no longer have to wrestle with updating their desktop clients. Like the server, we will be able to deliver their updates quickly and seamlessly.

3 days ago

Ubuntu Podcast from the UK LoCo: S09E30 – Pie Till You Die - Ubuntu Podcast from Planet Ubuntu

It’s Episode Thirty of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress are here again.

Most of us are here, but one of us is busy!
In this week’s show:

We discuss the Raspberry Pi hitting 10 million sales and the impact it has had.

We share a Command Line Lurve:

set -o vi – which makes bash use vi keybindings.

We also discuss solving an “Internet Mystery” #blamewindows

And we go over all your amazing feedback – thanks for sending it – please keep sending it!

This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Join us on IRC in #ubuntu-podcast on Freenode

3 days ago

Ubuntu Insights: Monitoring “big software” stacks with the Elastic Stack from Planet Ubuntu

Big Software is a new class of application. It’s composed of so many moving pieces that humans, by themselves, cannot design, deploy or operate them. OpenStack, Hadoop and container-based architectures are all examples of Big Software.
Gathering service metrics for complex big software stacks can be a chore. Especially when you need to warehouse, visualize, and share the metrics. It’s not just about measuring machine performance, but application performance as well.
You usually need to warehouse months of history of these metrics so you can spot trends. This enables you to make educated infrastructure decisions. That’s a powerful tool that’s usually offered on the provider level. But what if you run a hybrid cloud deployment? Not every cloud service is created equally.
The Elastic folks provide everything we need to make this possible. Additionally we can connect it to all sorts of other bundles in the charm store. We can now collect data on any cluster, store it, and visualize it. Let’s look at the pieces that are modeled in this bundle:

Elasticsearch – a distributed RESTful search engine
Beats – lightweight processes that gather metrics on nodes and ship them to Elasticsearch.

Filebeat ships logs
Topbeat ships “top-like” data
Packetbeat provides network protocol monitoring

Dockerbeat is a community beat that provides app container monitoring
Logstash – Performs data transformations, and routing to storage. As an example: Elasticsearch for instant visualization or HDFS for long term storage and analytics
Kibana – a web front end to visualize and analyze the gathered metrics

Getting Started
First, install and configure Juju. This will allow us to model our clusters easily and repeatably. We used LXD as a backend in order to maximize our ability to explore the cluster on our desktops/laptops, though you can easily deploy these onto any major public cloud.
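If you haven’t set Juju up before, the initial steps look roughly like this (a sketch for Juju 2.0; the bootstrap argument order changed between 2.0 betas, so check `juju help bootstrap` on your version):
sudo apt install juju lxd
juju bootstrap localhost lxd-test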
juju deploy ~containers/bundle/beats-core
This will give you a complete stack. It looks like this:

Note: if you wish to deploy the latest version of this bundle, the ~containers team is publishing a development channel release as new beats are added to the core bundle.
juju deploy ~containers/bundle/beats-core --channel=development
Once everything is deployed we need to deploy the dashboards:
juju action do kibana/0 deploy-dashboard dashboard=beats
Now run `juju status kibana` to get the IP address of the unit it’s allocated to. At this point we are… monitoring nothing. We need something to connect it to, and then introduce it to beats, so something like:
juju deploy myapplication
juju add-relation filebeat:beats-host myapplication
juju add-relation topbeat:beats-host myapplication
Let’s connect it to something interesting, like an Apache Spark deployment.
Integrating with other bundles
The standalone bundle is useful, but let’s use a more practical example. The Juju Ecosystem team has added Elastic Stack monitoring to a bunch of existing bundles. You don’t even have to manually connect the beats-core deployment to anything; you can just use an all-in-one bundle:

To deploy this bundle in the command line:
juju deploy apache-processing-spark
We also recommend running `juju status` periodically to check the progress of the deployment. You can also just open up a new terminal and keep `watch juju status` running in a window, so the status displays continuously while you continue on.
In this bundle, Filebeat and Topbeat act as subordinate charms, which means they are co-located on the spark units. This allows us to use these beats to track each spark node. And since we’re adding this relationship at the service level, any subsequent spark nodes you add will automatically include the beats monitors. The horizontal scaling of our cluster is now observable.
Let’s get the kibana dashboard ready:
juju set-config kibana dashboards="beats"
Notice that this time we used charm config instead of an action to deploy the dashboard. This allows us to blanket-configure and deploy the kibana dashboards from a bundle, reducing the number of steps a user must take to get started.
After deployment you will need to do a `juju status kibana` to get the IP address of the unit. Then browse to it in your web browser. For those of you deploying on public clouds: you will need to also do `juju expose kibana` to open a port in the firewall to allow access. Remember, to make things accessible to others in our clouds Juju expects you to explicitly tell it to do this. Out-of-the-box we keep things closed.
When you get to the kibana GUI you need to add `topbeat-*` or `filebeat-*` on the initial setup screen to set up Kibana’s index. Make sure you click the “Create” button for each one:

Now we need to load the dashboards we’ve included for you. Click on the “Dashboard” section, click the load icon, then select the “topbeat-dashboard”:

Now you should see your shiny new dashboard:

You now have an observable Spark cluster! Now that your graphs are up, let’s run something to make sure all the pieces are working. Let’s do a quick pagerank benchmark:
juju run-action spark/0 pagerank
This will output a UUID for your job, which you can use to query for results:
juju show-action-output <uuid-from-above>
You can find more about available actions in the bundle’s documentation. Feel free to launch the action multiple times if you want to exercise the hardware, or run your own Spark jobs as you see fit.
By default the `apache-processing-spark` bundle gives us three nodes. I left those running for a while and then decided to grow the cluster. Let’s add 10 nodes:
juju add-unit -n10 spark
Your `juju status` should be lighting up now with the new units being fired up, and in Kibana itself we can see the rest of the cluster coming online in near-realtime:

Here you can see the CPU and memory consumption of the cluster. You can see the initial three nodes hanging around, and then as the other nodes come up, beats gets installed and they report in, automatically.
Why automatically? ‘apache-processing-spark’ is technically just some YAML. The magic is that we are not just deploying code; we’re modelling the relationships between these applications:
relations:
- [spark, zookeeper]
- ["kibana:rest", "elasticsearch:client"]
- ["filebeat:elasticsearch", "elasticsearch:client"]
- ["filebeat:beats-host", "spark:juju-info"]
- ["topbeat:elasticsearch", "elasticsearch:client"]
- ["topbeat:beats-host", "spark:juju-info"]
So when spark is added, you’re not just adding a new machine; you’re mutating the scale of the application within the model. But what does that mean?
A good way to think about it is just like simple elements and compounds. For example, carbon monoxide (CO) and carbon dioxide (CO2) are built from the exact same elements, but the combination of those elements allows for two different compounds with different characteristics. If you think of your infrastructure similarly, you’re not just designing the components that compose it, but also the number of interactions that those components have with themselves and others.
So, automatically deploying filebeat and topbeat when spark is scaled just becomes an automatic part of the lifecycle. In this case, one new spark unit results in one new unit of filebeat, and one new unit of topbeat. Similarly, we can change this model as our requirements change.
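For example, if we later decided we no longer needed top-like metrics from the spark nodes, removing a single relation would mutate the model accordingly (a hypothetical change, using the relation endpoints from the bundle above):
juju remove-relation topbeat:beats-host spark:juju-info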
This post-deployment mutability of infrastructure is one of Juju’s key unique features. You’re not just defining how applications talk and relate to each other. You’re also defining the ratios of units to their supporting applications like metrics collection.
We’ve given you two basic elements of beats today: filebeat and topbeat. And like chemistry, more elements make for more interesting things. So now let’s show you how to expand your metrics gathering to another level.
Charming up your own custom beat
Elastic has engineered Beats to be expandable. They have invested effort in making it easy for you to write your own “beat”. As you can imagine, this can lead to an explosion of community-generated beats for measuring all sorts of things. We wanted to enable any enthusiast of the beats community to be able to hook into a Juju deployed workload.
As part of this work we’ve published a beats base layer. This will allow you to generate a charm for your custom beat, or for any of the community-written beats for that matter, and then deploy it right into your model, just like we do with topbeat and filebeat. Let’s look at an example:
The Beats-base layer
Beats Base provides some helper Python code to handle the common patterns every beats unit will undergo, such as declaring to the model how it will talk to Logstash and/or Elasticsearch. This is always handled the same way among all the beats, so we’re keeping developers from needing to repeat themselves.
Additionally the elasticbeats library handles:

Unit index creation
Template rendering in any context
Enabling the beat as a system service

So starting from beats-base, we have three concerns to address before we have delivered our beat:

How to install your beat (delivery)
How to configure your beat (template config)
Declare your beats index (payload delivery from installation step)

Let’s start with Packetbeat as an example. Packetbeat is an open source project that is designed to provide real‑time analytics for web, database, and other network protocols.
charm create packetbeat
Every charm starts with a layer.yaml:
includes:
  - beats-base
  - apt
repository: http://github.com/juju-solutions/layer-packetbeat

Let’s add a little bit of metadata in metadata.yaml:
name: packetbeat
summary: Deploys packetbeat
maintainer: Charles Butler
description: |
  data shipper that integrates with Elasticsearch to provide
  real-time analytics for web, database, and other
  network protocols
series:
  - trusty
tags:
  - monitoring
  - analytics
  - networking
With those meta files in place we’re ready to write our reactive code.
reactive/packetbeat.py
For delivery of packetbeat, Elastic has provided a deb repository for the official beats. This makes delivery a bit simpler using the apt layer. The consuming code is very simple:
import charms.apt

from charms.reactive import when_not
from charmhelpers.core.hookenv import status_set


@when_not('apt.installed.packetbeat')
def install_packetbeat():
    status_set('maintenance', 'Installing packetbeat')
    charms.apt.queue_install(['packetbeat'])
This completes our need to deliver the application. The apt layer will handle all the usual software-delivery details for us, like installing and configuring an apt repository. Since this layer is reused in charms all across the community, we merely reuse it here.
The next step is modeling how we react to our data-sources being connected. This typically requires rendering a yaml file to configure the beat, starting the beat daemon, and reacting to the beats-base beat.render state.
In order to do this we’ll be adding:

Configuration options to our charm
A Jinja template to render the yaml configuration
Reactive code to handle the state change and events

The configuration for packetbeat comes in the form of declaring protocol and port. This makes attaching packetbeat to anything transmitting data on the wire simple to model with configuration. We’ll provide some sane defaults, and allow the admin to configure the device to listen on.
config.yaml

options:
  device:
    type: string
    default: any
    description: Device to listen on, e.g. eth0
  protocols:
    type: string
    description: |
      the ports on which Packetbeat can find each protocol. space
      separated protocol:port format.
    default: "http:80 http:8080 dns:53 mysql:3306 pgsql:5432 redis:6379 thrift:9090 mongodb:27017 memcached:11211"

templates/packetbeat.yml

# This file is controlled by Juju. Hand edits will not persist!
interfaces:
  device: {{ device }}
protocols:
{% for protocol in protocols -%}
  {{ protocol }}:
    ports: {{ protocols[protocol] }}
{% endfor %}
{% if elasticsearch -%}
output:
  elasticsearch:
    hosts: {{ elasticsearch }}
{% endif -%}
{% if principal_unit %}
shipper:
  name: {{ principal_unit }}
{% endif %}

reactive/packetbeat.py

from charms.reactive import when, when_any, remove_state
from charmhelpers.core.host import service_restart
from charmhelpers.core.hookenv import status_set
from elasticbeats import render_without_context


@when('beat.render')
@when_any('elasticsearch.available', 'logstash.available')
def render_packetbeat_template():
    render_without_context('packetbeat.yml', '/etc/packetbeat/packetbeat.yml')
    remove_state('beat.render')
    service_restart('packetbeat')
    status_set('active', 'Packetbeat ready')
With all these pieces of the charm plugged in, run a `charm build` in your layer directory and you’re ready to deploy the packetbeat charm.
juju deploy cs:bundles/beats-core
juju deploy cs:trusty/consul
juju deploy ./builds/packetbeat

juju add-relation packetbeat elasticsearch
juju add-relation packetbeat consul
Consul is a great test: we can attach a single beat and monitor DNS and web traffic, thanks to its UI.
juju set-config packetbeat protocols="dns:53 http:8500"

Load up the kibana dashboard and look under the “Discover” tab. There will be a packetbeat index, with data aggregating underneath it. Units requesting cluster DNS will start to pile on as well.
To test both of these metrics, browse around the Consul UI on port 8500. Additionally, you can ssh into a unit and run dig against the Consul DNS server to see DNS metrics populate.
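For example, something along these lines (a sketch; substitute your Consul unit’s address and the DNS port you configured packetbeat to watch above):
juju ssh spark/0
dig @<consul-address> -p 53 consul.service.consul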
Populating the Packetbeat dashboard from here is a game of painting with data by the numbers.
Conclusion
Observability is a great feature to have in your deployments, whether it’s a brand new 12-factor application or the simplest of MVC apps. Being able to see inside the box is always a good feature for modern infrastructure to have.
This is why we’re excited about the Elastic stack! We can plug this into just about anything and immediately start gathering data. We’re looking forward to seeing how people bring in new beats to connect other metrics to existing bundles.
We’ve included this bundle in our Swarm, Kubernetes and big data bundles out of the box. I encourage everyone who is publishing bundles in the charm store to consider plugging in this bundle for production-grade observability.

3 days ago

Valorie Zimmerman: Kubuntu beta; please test! from Planet Ubuntu

Kubuntu 16.10 beta has been published. It is possible that it will be re-spun, but we have our beta images ready for testing now.

Please go to http://iso.qa.ubuntu.com/qatracker/milestones/367/builds, log in, click on the CD icon and download the image. I prefer zsync, which I download via the command line:

~$ cd /media/valorie/ISOs
~$ zsync http://cdimage.ubuntu.com/kubuntu/daily-live/20160921/yakkety-desktop-i386.iso.zsync

The other methods of downloading work as well, including wget or just downloading in your browser.

I tested usb-creator-kde, which has sometimes not worked in the past, but it worked like a champ once the images were downloaded. Simply choose the proper ISO and the device to write to, and create the live image.

I had to figure out how to get my little Dell travel laptop to let me boot from USB (delete key as it is booting; quickly hit F12, legacy boot, then finally I could actually choose to boot from USB). Secure Boot and UEFI make this more difficult these days.

I found no problems in the live session, including logging into wireless, so I went ahead and started Firefox, logged into http://iso.qa.ubuntu.com/qatracker, chose my test, and reported my results. We need more folks to install on various equipment, including VMs.

When you run into bugs, try to report them via apport, which means running ubuntu-bug packagename on the command line. Once apport has logged into Launchpad and uploaded the relevant error information, you can give some details, like a short description of the bug, and get the bug number. Please report the bug numbers on the QA site in your test report.

Thanks so much for helping us make Kubuntu friendly and high-quality.

3 days ago

Dustin Kirkland: HOWTO: Launch an Ubuntu Cloud Image with KVM from the Command Line from Planet Ubuntu

I reinstalled my primary laptop (Lenovo x250) about 3 months ago (June 30, 2016), when I got a shiny new SSD, with a fresh Ubuntu 16.04 LTS image.

Just yesterday, I needed to test something in KVM. Something that could only be tested in KVM.

kirkland@x250:~⟫ kvm
The program 'kvm' is currently not installed. You can install it by typing:
sudo apt install qemu-kvm
127 kirkland@x250:~⟫

I don't have KVM installed? How is that even possible? I used to be the maintainer of the virtualization stack in Ubuntu (kvm, qemu, libvirt, virt-manager, et al.)! I lived and breathed virtualization on Ubuntu for years...

Alas, it seems that I use LXD for everything these days! It's built into every Ubuntu 16.04 LTS server, and one 'apt install lxd' away from having it on your desktop. With ZFS, instances start in under 3 seconds. Snapshots, live migration, an image store, a REST API, all built in. Try it out, if you haven't, it's great!

kirkland@x250:~⟫ time lxc launch ubuntu:x
Creating supreme-parakeet
Starting supreme-parakeet
real 0m1.851s
user 0m0.008s
sys 0m0.000s
kirkland@x250:~⟫ lxc exec supreme-parakeet bash
root@supreme-parakeet:~#

But that's enough of a LXD advertisement... back to the title of the blog post.

Here, I want to download an Ubuntu cloud image and boot into it. There's one extra step nowadays: you need to create your "user data" and feed it into cloud-init.

First, create a simple text file, called "seed":

kirkland@x250:~⟫ cat seed
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
ssh_import_id: kirkland

Now, generate a "seed.img" disk, like this:

kirkland@x250:~⟫ cloud-localds seed.img seed
kirkland@x250:~⟫ ls -halF seed.img
-rw-rw-r-- 1 kirkland kirkland 366K Sep 20 17:12 seed.img

Next, download your image from cloud-images.ubuntu.com:

kirkland@x250:~⟫ wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
--2016-09-20 17:13:57-- http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 91.189.88.141, 2001:67c:1360:8001:ffff:ffff:ffff:fffe
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|91.189.88.141|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 312606720 (298M) [application/octet-stream]
Saving to: ‘xenial-server-cloudimg-amd64-disk1.img’
xenial-server-cloudimg-amd64-disk1.img 100%[=================================] 298.12M 3.35MB/s in 88s
2016-09-20 17:15:25 (3.39 MB/s) - ‘xenial-server-cloudimg-amd64-disk1.img’ saved [312606720/312606720]

In the nominal case, you can now just launch KVM and add your user data as a cdrom disk. When it boots, you can log in with "ubuntu" and "passw0rd", which we set in the seed:

kirkland@x250:~⟫ kvm -cdrom seed.img -hda xenial-server-cloudimg-amd64-disk1.img

Finally, let's enable more bells and whistles, and speed this VM up. Let's give it all 4 CPUs, a healthy 8GB of memory, a virtio disk, and let's port forward ssh to 5555:

kirkland@x250:~⟫ kvm -m 8192 \
    -smp 4 \
    -cdrom seed.img \
    -device e1000,netdev=user.0 \
    -netdev user,id=user.0,hostfwd=tcp::5555-:22 \
    -drive file=xenial-server-cloudimg-amd64-disk1.img,if=virtio,cache=writeback,index=0

And with that, we can now ssh into the VM, with the public SSH key specified in our seed:

kirkland@x250:~⟫ ssh -p 5555 ubuntu@localhost
The authenticity of host '[localhost]:5555 ([127.0.0.1]:5555)' can't be established.
RSA key fingerprint is SHA256:w2FyU6TcZVj1WuaBA799pCE5MLShHzwio8tn8XwKSdg.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-36-generic x86_64)

 * Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

 Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

ubuntu@ubuntu:~⟫

Cheers,
:-Dustin

4 days ago

Salih Emin: Vivaldi browser: Interview with Jon Stephenson von Tetzchner from Planet Ubuntu

Vivaldi browser has taken the world of internet browsing by storm, and only months after its initial release it has found its way into the computers of millions of power users. In this interview, Mr. Jon Stephenson von Tetzchner talks about how he got the idea to create this project and what to expect in the future.

4 days ago

Jonathan Riddell: Plasma Wayland ISO Now Working on VirtualBox/virt-manager from Planet Ubuntu

I read that Neon Dev Edition Unstable Branches is moving to Plasma Wayland by default instead of X, so I thought it a good time to check out this week’s Plasma Wayland ISO. Joy of joys, it has gained the ability to work in VirtualBox and virt-manager since I last tried. It’s full of flickers and Spectacle doesn’t take screenshots, but it’s otherwise perfectly functional. Very exciting!

4 days ago

Eric Hammond: Developing CloudStatus, an Alexa Skill to Query AWS Service Status -- an interview with Kira Hammond by Eric Hammond from Planet Ubuntu

Interview conducted in writing July-August 2016.

[Eric] Good morning, Kira. It is a pleasure to interview you today and to help you introduce your recently launched Alexa skill, “CloudStatus”. Can you provide a brief overview about what the skill does?

[Kira] Good morning, Papa! Thank you for inviting me.

CloudStatus allows users to check the service availability of any AWS region. On opening the skill, Alexa says which (if any) regions are experiencing service issues or were recently having problems. Then the user can inquire about the services in specific regions.

This skill was made at my dad’s request. He wanted to quickly see how AWS services were operating, without needing to open his laptop. As well as summarizing service issues for him, my dad thought CloudStatus would be a good opportunity for me to learn about retrieving and parsing web pages in Python.

All the data can be found in more detail at status.aws.amazon.com. But with CloudStatus, developers can hear AWS statuses with their Amazon Echo. Instead of scrolling through dozens of green checkmarks to find errors, users of CloudStatus listen to which services are having problems, as well as how many services are operating satisfactorily.

CloudStatus is intended for anyone who uses Amazon Web Services and wants to know about current (and recent) AWS problems. Eventually it might be expanded to talk about other clouds as well.

[Eric] Assuming I have an Amazon Echo, how do I install and use the CloudStatus Alexa skill?

[Kira] Just say “Alexa, enable CloudStatus skill”! Ask Alexa to “open CloudStatus” and she will give you a summary of regions with problems. An example of what she might say on the worst of days is:

“3 out of 11 AWS regions are experiencing service issues: Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved: Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Or on most days:

“All 62 regional services in the 12 AWS regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Request any AWS region you are interested in, and Alexa will present you with current and recent service issues in that region.

Here’s the full recording of an example session:
http://pub.alestic.com/alexa/cloudstatus/CloudStatus-Alexa-Skill-sample-20160908.mp3

[Eric] What technologies did you use to create the CloudStatus Alexa skill?

[Kira] I wrote CloudStatus using AWS Lambda, a service that manages servers and scaling for you. Developers need only pay for their servers when the code is called. AWS Lambda also displays metrics from Amazon CloudWatch.

Amazon CloudWatch gives statistics from the last couple weeks, such as the number of invocations, how long they took, and whether there were any errors. CloudWatch Logs is also a very useful service. It allows me to see all the errors and print() output from my code. Without it, I wouldn’t be able to debug my skill!

I used Amazon EC2 to build the Python modules necessary for my program. The modules (Requests and LXML) download and parse the AWS status page, so I can get the data I need. The Python packages and my code files are zipped and uploaded to AWS Lambda.
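(For readers who want to try this, the workflow described above is roughly the following; the file and function names are hypothetical, not Kira’s actual script.)

pip install requests lxml -t build/
cp cloudstatus.py build/
cd build && zip -r ../cloudstatus.zip . && cd ..
aws lambda update-function-code --function-name CloudStatus --zip-file fileb://cloudstatus.zip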

Fun fact: My Lambda function is based in us-east-1. If AWS Lambda stops working in that region, you can’t use CloudStatus to check if Northern Virginia AWS Lambda is working! For that matter, CloudStatus will be completely dysfunctional.

[Eric] Why do you enjoy programming?

[Kira] Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy.

Let’s rephrase that: Sometimes I’m repeatedly doing a non-programming activity—say, making a long list of equations for math practice. I think of two “random” numbers between one and a hundred (a human can’t actually come up with a random set of numbers) and pick an operation: addition, subtraction, multiplication, or division. After doing this several times, the activity begins to tire me. My brain starts to shut off and wants to do something more interesting. Then I realize that I’m doing the same thing over and over again. Hey! Why not make a program?

Computers can do so much in so little time. Unlike humans, they are capable of picking completely random items from a list. And they aren’t going to make mistakes. You can tell a computer to do the same thing hundreds of times, and it won’t be bored.

Finish the program, type in a command, and voila! Look at that page full of math problems. Plus, I can get a new one whenever I want, in just a couple seconds. Laziness in this case drives a person to put time and effort into ever-changing problem-solving, all so they don’t have to put time and effort into a dull, repetitive task. See http://threevirtues.com/.

But programming isn’t just for tools! I also enjoy making simple games and am learning about websites.

One downside to having computers do things for you: You can’t blame a computer for not doing what you told it to. It did do what you told it to; you just didn’t tell it to do what you thought you did.

Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!

The problem-solving can be exciting even when a program is nowhere near finished. My second Alexa program wasn’t coming along that well when—finally!—I got her to say “One plus one is eleven.” and later “Three plus four is twelve.” Though it doesn’t seem that impressive, it showed me that I was getting somewhere and the next problem seemed reasonable.

[Eric] How did you get started programming with the Alexa Skills Kit (ASK)?

[Kira] My very first Alexa skill was based on an AWS Lambda blueprint called Color Expert (alexa-skills-kit-color-expert-python). A blueprint is a sample program that AWS programmers can copy and modify. In the sample skill, the user tells Alexa their favorite color and Alexa stores the color name. Then the user can ask Alexa what their favorite color is. I didn’t make many changes: maybe Alexa’s responses here and there, and I added the color “rainbow sparkles.”

I also made a skill called Calculator in which the user gets answers to simple equations.

Last year, I took a music history class. To help me study for the test, I created a trivia game from Reindeer Games, an Alexa Skills Kit template (see https://developer.amazon.com/public/community/post/TxDJWS16KUPVKO/New-Alexa-Skills-Kit-Template-Build-a-Trivia-Skill-in-under-an-Hour). That was a lot of fun and helped me to grow in my knowledge of how Alexa works behind the scenes.

[Eric] How does Alexa development differ from other programming you have done?

[Kira] At first Alexa was pretty overwhelming. It was so different from anything I’d ever done before, and there were lines and lines of unfamiliar code written by professional Amazon people.

I found the ASK blueprints and templates extremely helpful. Instead of just being a functional program, the code is commented so developers know why it’s there and are encouraged to play around with it.

Still, the pages of code can be scary. One thing new Alexa developers can try: Before modifying your blueprint, set up the skill and ask Alexa to run it. Everything she says from that point on is somewhere in your program! Find her response in the program and tweak it. The variable name is something like “speech_output” or “speechOutput.”

It’s a really cool experience making voice apps. You can make Alexa say ridiculous things in a serious voice! Because CloudStatus started with the Color Expert blueprint, my first successful edit ended with our Echo saying, “I now know your favorite color is Northern Virginia. You can ask me your favorite color by saying, ‘What’s my favorite color?’.”

Voice applications involve factors you never need to deal with in a text app. When the user is interacting through text, they can take as long as they want to read and respond. Speech must be concise so the listener understands the first time. Another challenge is that Alexa doesn’t necessarily know how to pronounce technical terms and foreign names, but the software is always improving.

One plus side to voice apps is not having to build your own language model. With text-based programs, I spend a considerable amount of time listing all the ways a person can answer “yes,” or request help. Luckily, with Alexa I don’t have to worry too much about how the user will phrase their sentences. Amazon already has an algorithm, and it’s constantly getting smarter! Hint: If you’re making your own skill, use some built-in Amazon intents, like AMAZON.YesIntent or AMAZON.HelpIntent.
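(As a sketch of what that looks like: in the skill’s intent schema, built-in intents are simply listed alongside your own. The format below reflects the Alexa developer console of the time and may differ from yours.)

{
  "intents": [
    { "intent": "AMAZON.YesIntent" },
    { "intent": "AMAZON.HelpIntent" }
  ]
}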

[Eric] What challenges did you encounter as you built the CloudStatus Alexa skill?

[Kira] At first, I edited the code directly in the Lambda console. Pretty soon though, I needed to import modules that weren’t built in to Python. Now I keep my code and modules in the same directory on a personal computer. That directory gets zipped and uploaded to Lambda, so the modules are right there sitting next to the code.

One challenge of mine has been wanting to fix and improve everything at once. Naturally, there is an error practically every time I upload my code for testing. Isn’t that what testing is for? But when I modify everything instead of improving bit by bit, the bugs are more difficult to sort out. I’m slowly learning from my dad to make small changes and update often. “Ship it!” he cries regularly.

During development, I grew tired of constantly opening my code, modifying it, zipping it and the modules, uploading it to Lambda, and waiting for the Lambda function to save. Eventually I wrote a separate Bash program that lets me type “edit-cloudstatus” into my shell. The program runs unit tests and opens my code files in the Atom editor. After that, it calls the command “fileschanged” to automatically test and zip all the code every time I edit something or add a Python module. That was exciting!

I’ve found that the Alexa speech-to-text conversions aren’t always what I think they will be. For example, if I tell CloudStatus I want to know about “Northern Virginia,” it sends my code “northern Virginia” (lowercase then capitalized), whereas saying “Northern California” turns into “northern california” (all lowercase). To at least fix the capitalization inconsistencies, my dad suggested lowercasing the input and mapping it to the standardized AWS region code as soon as possible.
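A minimal sketch of that idea (the mapping and function name here are hypothetical, not CloudStatus’s actual code):

# Normalize Alexa's speech-to-text output to a canonical AWS region code.
SPOKEN_TO_REGION = {
    'northern virginia': 'us-east-1',
    'northern california': 'us-west-1',
}

def region_code(spoken_name):
    return SPOKEN_TO_REGION.get(spoken_name.strip().lower())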

[Eric] What Alexa skills do you plan on creating in the future?

[Kira] I will probably continue to work on CloudStatus for a while. There’s always something to improve, a feature to add, or something to learn about—right now it’s Speech Synthesis Markup Language (SSML). I don’t think it’s possible to finish a program for good!

My brother and I also want to learn about controlling our lights and thermostat with Alexa. Every time my family leaves the house, we say basically the same thing: “Alexa, turn off all the lights. Alexa, turn the kitchen light to twenty percent. Alexa, tell the thermostat we’re leaving.” I know it’s only three sentences, but wouldn’t it be easier to just say: “Alexa, start Leaving Home” or something like that? If I learned to control the lights, I could also make them flash and turn different colors, which would be super fun. :)

In August a new ASK template was released for decision tree skills. I want to make some sort of dichotomous key with that.
https://developer.amazon.com/public/community/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill

[Eric] Do you have any advice for others who want to publish an Alexa skill?

[Kira]

Before submitting your skill for certification, make sure you read through the submission checklist.
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-submission-checklist#submission-checklist

Remember to check your skill’s home cards often. They are displayed in the Alexa App. Sometimes the text that Alexa pronounces should be different from the reader-friendly card content. For example, in CloudStatus, “N. Virginia (us-east-1)” might be easy to read, but Alexa is likely to pronounce it “En Virginia, Us [as in ‘we’] East 1.” I have to tell Alexa to say “northern virginia, u.s. east 1,” while leaving the card readable for humans.

Since readers can process text at their own pace, the home card may display more details than Alexa speaks, if necessary.

If you don’t want a card to accompany a specific response, remove the ‘card’ item from your response dict. Look for the function build_speechlet_response() or buildSpeechletResponse().
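In the Python blueprints that response dict is assembled in one place, so the change is local (a sketch based on the common blueprint layout; your blueprint may differ slightly):

def build_speechlet_response(title, output, reprompt_text, should_end_session):
    return {
        'outputSpeech': {'type': 'PlainText', 'text': output},
        # Omit this 'card' entry entirely for responses that should not
        # display a home card in the Alexa App.
        'card': {'type': 'Simple', 'title': title, 'content': output},
        'reprompt': {'outputSpeech': {'type': 'PlainText', 'text': reprompt_text}},
        'shouldEndSession': should_end_session,
    }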

Never point your live/public skill at the $LATEST version of your code. The $LATEST version is for you to edit and test your code, and it’s where you catch errors.

If the skill raises errors frequently, don’t be intimidated! It’s part of the process of coding. To find out exactly what the problem is, read the “log streams” for your Lambda function. To print debug information to the logs, print() the information you want (Python) or use a console.log() statement (JavaScript/Node.js).

It helps me to keep a list of phrases to try, including words that the skill won’t understand. Make sure Alexa doesn’t raise an error and exit the skill, no matter what nonsense the user says.

Many great tips for designing voice interactions are on the ASK blog.
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-voice-design-best-practices

Have fun!

In The News

Amazon had early access to this interview and to Kira and wrote an
article about her in the Alexa Blog:

14-Year-Old Girl Creates CloudStatus Alexa Skill That Benefits AWS Developers

which was then picked up by VentureBeat:

A 14-year-old built an Alexa skill for checking the status of AWS

which was then copied, referenced, tweeted, and retweeted.

Original article and comments: https://alestic.com/2016/09/alexa-skill-aws-cloudstatus/

5 days ago

Launchpad News: Beta test: new package picker from Planet Ubuntu

If you are a member of Launchpad’s beta testers team, you’ll now have a slightly different interface for selecting source packages in the Launchpad web interface, and we’d like to know if it goes wrong for you.
One of our longer-standing bugs has been #42298 (“package picker lists unpublished (invalid) packages”).  When selecting a package – for example, when filing a bug against Ubuntu, or if you select “Also affects distribution/package” on a bug – and using the “Choose…” link to pop up a picker widget, the resulting package picker has historically offered all possible source package names (or sometimes all possible source and binary package names) that Launchpad knows about, without much regard for whether they make sense in context.  For example, packages that were removed in Ubuntu 5.10, or packages that only exist in Debian, would be offered in search results, and to make matters worse, search results were often ordered alphabetically by name rather than by relevance.  There was some work on this problem back in 2011 or so, but it suffered from performance problems and was never widely enabled.
We’ve now resurrected that work from 2011, fixed the performance problems, and converted all relevant views to use it.  You should now see something like this:

Exact matches on either source or binary package names always come first, and we try to order other matches in a reasonable way as well.  The disclosure triangles alongside each package allow you to check for more details before you make a selection.
Please report any bugs you find with this new feature.  If all goes well, we’ll enable this for all users soon.
Update: as of 2016-09-22, this feature is enabled for all Launchpad users.

5 days ago

Valorie Zimmerman: Kubuntu needs some K/Ubuntu Developer help this week from Planet Ubuntu

Our packaging team has been working very hard; however, we have a lack of active Kubuntu Developers involved right now. So we're asking developers with a bit of extra time and some experience with KDE packages to look at our Frameworks, Plasma and Applications packaging in our staging PPAs, sign off, and upload them to the Ubuntu Archive.

If you have the time and permissions, please stop by #kubuntu-devel on IRC or Telegram and give us a shove across the beta timeline!

6 days ago

Jono Bacon: Looking For Talent For ClusterHQ from Planet Ubuntu

Recently I signed ClusterHQ as a client. If you are unfamiliar with them, they provide a neat technology for managing data as part of the overall lifecycle of an application. You can learn more about them here.

I will be consulting with ClusterHQ to help them (a) build their community strategy, (b) find a great candidate as Senior Developer Evangelist, and (c) help mentor that person in their role to be successful.

If you are looking for a new career, this could be a good opportunity. ClusterHQ are doing some interesting work, and if this role is a good fit for you, I will also be there to help you work within a crisply defined strategy and be successful in the execution. Think of it as having a friend on the inside.

You can learn more in the job description, but you should have these skills:

You are a deep full-stack cloud technologist. You have a track record of building distributed applications end-to-end.
You either have a Bachelor’s in Computer Science or are self-motivated and self-taught such that you don’t need one.
You are passionate about containers, data management, and building stateful applications in modern clusters.
You have a history of leadership and service in developer and DevOps communities, and you have a passion for making applications work.
You have expertise in lifecycle management of data.
You understand how developers and IT organizations consume cloud technologies, and are able to influence enterprise technology adoption outcomes based on that understanding.
You have great technical writing skills demonstrated via documentation, blog posts and other written work.
You are a social butterfly. You like meeting new people on and offline.
You are a great public speaker and are sought after for your expertise and presentation style.
You don’t mind charging your laptop and phone in airport lounges, so you are willing and eager to travel anywhere our developer communities live, and stay productive and professional on the road.
You like your weekend and evening time to focus on your outside-of-work passions, but don’t mind working irregular hours and weekends occasionally (as the job demands) to support hackathons, conferences, user groups, and other developer events.

ClusterHQ are primarily looking for help with:

Creating high-quality technical content for publication on our blog and other channels to show developers how to implement specific stateful container management technologies.
Spreading the word about container data services by speaking and sharing your expertise at relevant user groups and conferences.
Evangelizing stateful container management and ClusterHQ technologies to the Docker Swarm, Kubernetes, and Mesosphere communities, as well as to DevOps/IT organizations chartered with operational management of stateful containers.
Promoting the needs of developers and users to the ClusterHQ product & engineering team, so that we build the right products in the right way.
Supporting developers building containerized applications wherever they are, on forums, social media, and everywhere in between.

Pretty neat opportunity.

Interested?

If you are interested in this role, there are a few options for next steps:

You can apply directly by clicking here.
Alternatively, if I know you, I would invite you to confidentially share your interest in this role by filling in my form here. This way I can get a good sense of who is interested and also recommend people I personally know and can vouch for. I will then reach out to those of you who this seems to be a good potential fit for and play a supporting role in brokering the conversation.

By the way, there are going to be a number of these kinds of opportunities shared here on my blog. So, be sure to subscribe to my posts if you want to keep up to date with the latest opportunities.
The post Looking For Talent For ClusterHQ appeared first on Jono Bacon.

6 days ago

Daniel Holbach: Get your software snapped tomorrow! from Planet Ubuntu

For a few weeks now, we have been running the Snappy Playpen as a pet research project. Many great things have happened since then:

With the Playpen we now have a repository of great best-practice examples.
We brought together a lot of people who are excited about snaps, who worked together, collaborated, wrote plugins together and improved snapcraft and friends.
A number of cloud parts were put together by the team as well.
We landed quite a few high-quality snaps in the store.
We had lots of fun.

Opening the Sandpit
With our next Snappy Playpen event tomorrow, 20th September 2016, we want to extend the scheme. We are opening the Sandpit part of the Playpen!
One thing we realised in recent weeks is that we treated the Playpen more and more like a place where well-working, tested and well-understood snaps go to inspire people who are new to snapping software. What we saw as well was that lots of fellow snappers kept their half-done snaps on their hard disks instead of sharing them and giving others the chance to finish them or get involved in fixing them. Time to change that; time for the Sandpit!
In the Sandpit things can get messy, but you get to explore and play around. It’s fun. Naturally things need to be lightweight, which is why we organise the Sandpit on just a simple wiki page. The way it works is that if you have a half-finished snap, you simply push it to a repo and add your name and a link to the wiki, so others get a chance to take a look and work together with you on it.
Tomorrow, 20th September 2016, we are going to get together again and help each other snapping, clean up old bits, fix things, explain, hang out and have a good time. If you want to join, you’re welcome. We’re on Gitter and on IRC.

WHEN: 2016-09-20
WHAT: Snappy Playpen event – opening the Sandpit
WHERE: Gitter and on IRC

Added bonus
As an added bonus, we are going to invite Michael Vogt, one of the core developers of snapd to the Ubuntu Community Q&A tomorrow. Join us at 15:00 UTC tomorrow on http://ubuntuonair.com and ask all the questions you always had!
See you tomorrow!

6 days ago

Paul Tagliamonte: DNSync from Planet Ubuntu

While setting up my new network at my house, I figured I’d do things right and set up an IPsec VPN (and a few other fancy bits). One thing that became annoying when I wasn’t on my LAN was that I’d have to fiddle with the DNS resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, with some Golang glue and AWS Route 53, I had written code that would sit on my dnsmasq.leases file, watch inotify for IN_MODIFY signals, and sync the records to AWS Route 53.

I pushed it up to my GitHub as DNSync.

PRs welcome!

7 days ago

Aurélien Gâteau: Starting an interactive rebase from Tig from Planet Ubuntu

One of the tools I use a lot to work with git repositories is Tig. This handy ncurses tool lets you browse your history, cherry-pick commits, do partial commits and a few other things. But one thing I wanted was to be able to start an interactive rebase from within Tig. This week I decided to dig into the documentation a bit to see if it was possible.
Reading the manual, I found out Tig is extensible: one can bind shortcut keys to trigger commands. The bound commands can make use of several state variables, such as the current commit or the current branch. This makes it possible to use Tig as a commit selector for custom commands. Armed with this knowledge, I added these lines to $HOME/.tigrc:
bind main R !git rebase -i %(commit)^
bind diff R !git rebase -i %(commit)^

That worked! If you add these two lines to your .tigrc file, you can start Tig, scroll to the commit you want, and press Shift+R to start the rebase from it. No more copying the commit id and going back to the command line!
Note: Shift+R is already bound to the refresh action in Tig, but this action can also be triggered with F5, so it's not really a problem.

9 days ago