Planet Ubuntu is a window into the world, work and lives of Ubuntu developers and contributors
KDE turned twenty recently, which seems significant in a world that seems to change so fast. Yet somehow we stay relevant, and excited to continue to build a better future.

Lydia asked recently on the KDE-Community list what we were most proud of.

For the KDE community, I'm proud that we continue to grow and change, while remaining friendly, welcoming, and ever more diverse. Our software shows that. As we change and update, some things get left behind, only to re-appear in fresh new ways. And as people get new jobs, or build new families, sometimes they disappear for a while as well. And yet we keep growing, attracting students, hobbyist programmers, writers, artists, translators, designers and community people, and sometimes we see former contributors re-appear too. See more about that in our 20 Years of KDE Timeline.

I'm proud that we develop whole new projects within the community. Recently Peruse, Atelier, Minuet, WikitoLearn, KDEConnect, Krita, Plasma Mobile and neon have all made the news. We welcome projects from outside as well, such as Gcompris, Kdenlive, and the new KDE Store. And our established projects continue to grow and extend. I've been delighted to hear about Calligra Author, for instance, which is for those who want to write and publish a book or article in PDF or EPUB. Gcompris has long been available for Windows and Mac, but now you can get it on your Android phone or tablet. Marble is on Android, and I hear that KStars will be available soon.

I'm proud of how established projects continue to grow and attract new developers. The Plasma team, hand-in-hand with the Visual Design Group, continues to blow testers and users away with power, beauty and simplicity on the desktop. Marble, Kdevelop, Konsole, Kate, KDE-PIM, KDElibs (now Frameworks), KOffice (now Calligra), KDE-Edu, KDE-Games, Digikam, kdevplatform, Okular, Konversation and Yakuake, just to mention a few, continue to grow as projects, stay relevant, and are often offered on new platforms.
Heck, KDE 1 runs on modern computer systems!

For myself, I'm proud of how the KDE community welcomed in a grandma, a non-coder, and how I'm valued as part of the KDE Student Programs team, and the Community Working Group, and as an author and editor. Season of KDE, Google Summer of Code, and now Google Code-in all work to integrate new people into the community, and give more experienced developers a way to share their knowledge as mentors. I'm proud of how the Amarok handbook we developed on the Userbase wiki has shown the way to other open user documentation. And thanks to the wonderful documentation and translation teams, the help is available to millions of people around the world, in multiple forms.

I'm proud to be part of the e.V., the group supporting the fantastic community that creates the software we offer freely to the world.
At the last OpenStack Design Summit in Austin, TX we showed you a preview of deploying your physical server and network infrastructure from the top-of-rack switch, which included OpenStack with your choice of SDN solution.
This was made possible by disaggregating the network stack functionality (the "N" in Network Operating System) to run on general-purpose, device-centric operating systems. In the world of the Open Compute Project and whitebox switches, a switch can be more than just a switch. Switches are no longer closed systems where you can only see the command line of the network operating system. Whitebox switches are produced by marrying common server components with high-powered switching ASICs, loading a Linux OS, and running the network operating system (NOS) functionality as an application.
Users can not only choose hardware from multiple providers; they can also choose the Linux distribution and the NOS that best match their environment. Commands can be issued from the Linux prompt or the NOS prompt and, most importantly, other applications can be securely installed alongside the NOS. This new switch design opens up the ability to architect secure distributed data center networks with higher scale and more efficient utilization of existing resources in each rack.
Since the last ODS we have witnessed a continued trend for whitebox switches to provide more server-like, general-purpose functionality, with increases in CPU, memory, storage, and internal bandwidth between the CPU and ASIC, as well as power management (BMC) and secure boot options (UEFI+PXE). This month Mellanox announced the availability of their standard Linux kernel driver, included in Ubuntu Core 16 (and classic Ubuntu), for their Open Ethernet Spectrum switch platforms. More recently Facebook announced the acceptance of the Wedge 100 into OCP, which includes Facebook's OpenBMC and their continued effort to disaggregate the stack.
“We are excited to work with Facebook on next generation switch hardware, adding Facebook’s Wedge OpenBMC power driver to our physical cloud (‘Metal-As-A-Service’) MAAS 2.1, and packaging the Facebook Open Switch System (FBOSS) as a snap.” said David Duffey, Director of Technical Partnerships, Canonical. “Facebook with OCP is leading the way to modern, secure, and flexible datacenter design and management. Canonical’s MAAS and snaps give the datacenter operator free choice of network bootloader, operating system, and network stack.”
At this OpenStack Design Summit we are also going to show you the latest integration with MAAS, how you can use snaps as a universal way to install software across Linux distributions (including non-Ubuntu, non-Debian-based distributions), and how to deploy WiFi-based solutions, like OpenWrt, as a snap.
Please stop by our booth and let us help you plan your transition to a fully automated, secure modern datacenter.
My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).
So, here are the graphs updated for the 668 CVEs known today:
Critical: 3 @ 5.2 years average
High: 44 @ 6.2 years average
Medium: 404 @ 5.3 years average
Low: 216 @ 5.5 years average
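For reference, the per-severity figures above can be combined into an overall weighted average lifetime. This is a quick back-of-the-envelope calculation from the listed numbers, not a figure from the original talk (note the four categories sum to 667 of the 668 CVEs):

```shell
# Severity buckets from the list above: counts and average lifetimes (years).
awk 'BEGIN {
    split("3 44 404 216", count)      # Critical, High, Medium, Low
    split("5.2 6.2 5.3 5.5", years)
    for (i = 1; i <= 4; i++) { total += count[i]; sum += count[i] * years[i] }
    printf "%d %.2f\n", total, sum / total
}'
# → 667 5.42
```

So across all listed severities, the average flaw lifetime works out to roughly 5.4 years.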
© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Last year I went to All Things Open for the first time and did a keynote. You can watch the keynote here.
I was really impressed with All Things Open last year and have subsequently become friends with the principal organizer, Todd Lewis. I loved how the team put together a show with the right balance of community and corporation, great content, exhibition and more.
All Things Open 2016 is happening next week and I will be participating in a number of areas:
I will be MCing the keynotes for the event. I am looking forward to introducing such a tremendous group of speakers.
Jeremy King, CTO of Walmart Labs and I will be having a fireside chat. I am looking forward to delving into the work they are doing.
I will also be participating in a panel about openness and collaboration, and delivering a talk about building a community exoskeleton.
It is looking pretty likely I will be doing a book signing with free copies of The Art of Community to be given away thanks to my friends at O’Reilly!
The event takes place in Raleigh, and if you haven’t registered yet, do so right here!
Also, a huge thanks to Red Hat and opensource.com for flying me out. I will be joining the team for a day of meetings prior to All Things Open – looking forward to the discussions!
The post All Things Open Next Week – MCing, Talks, and More appeared first on Jono Bacon.
In the middle of July the Juju team got together to work towards making Juju more accessible. For now the aim was to reach Level AA compliance, with the intention of reaching AAA in the future.
We started by reading through the W3C accessibility guidelines and distilling each principle into sentences that made sense to us as a team and documenting this into a spreadsheet.
We then created separate columns for how this would affect the main areas across Juju as a product: namely, static pages on jujucharms.com, the GUI, and the inspector element within the GUI.
GUI live on jujucharms.com
Inspector within the GUI
Example of static page content from the homepage
The Juju team working through the accessibility guidelines
Tackling this as a team meant that we were all on the same page about which areas of the Juju GUI were not AA compliant and how we could work to improve them.
We also discussed the amount of design effort needed for each of the areas that aren't AA compliant and how long we thought it would take to make improvements.
You can have a look at the spreadsheet we created to help us track the changes that we need to make to Juju to make it more accessible:
Spreadsheet created to track changes and improvements needed to be done
This workflow has helped us manage and scope the tasks ahead and clear up uncertainties about which tasks are done and which requirements need to be met to achieve the level of accessibility we are aiming for.
Ubuntu 16.10 (Yakkety Yak) is released and you can now download the new wallpaper by clicking here. It's the latest part of the set for the Ubuntu 2016 releases, following Xenial Xerus. You can read about our wallpaper visual design process here.
Ubuntu 16.10 Yakkety Yak
Ubuntu 16.10 Yakkety Yak (light version)
It's Episode Thirty-Four of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Emma Marshall are connected and speaking to your brain.
The three amigos are back with our new amiga!
In this week’s show:
We discuss going to a Randall Munroe book signing of What If? and Thing Explainer and getting extra signed copies for you to try and win in a c-o-m-p-e-t-i-t-i-o-n!
We share a Command Line Lurve:
direnv – An environment switcher for the shell
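As a minimal sketch of the direnv workflow (the project name and variable values below are just examples): you hook direnv into your shell once, drop an `.envrc` file into a project directory, approve it, and direnv loads those variables when you cd in and unloads them when you cd out.

```shell
# One-time setup: hook direnv into your shell (bash shown; zsh/fish differ).
# Add this line to ~/.bashrc:
#   eval "$(direnv hook bash)"

# Example project-local environment: contents of myproject/.envrc
export DATABASE_URL=postgres://localhost/myproject_dev   # example value
export PATH=$PWD/bin:$PATH

# direnv refuses to load an unreviewed .envrc, so approve it once:
#   cd myproject && direnv allow
```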
We also discuss fixing bugs in Ubuntu and visiting Barcelona.
And we go over all your amazing feedback – thanks for sending it – please keep sending it!
This week's cover image is taken from Wikimedia.
Thing Explainer Competition!
Prize: Signed copies of “What If?” and “Thing Explainer” by Randall Munroe (creator of XKCD)
Question: Listen to the podcast for instructions
You can use the upgoer5 editor to help.
Send your entries to competition AT ubuntupodcast DOT org. We’ll pick our favourite and announce the winner on the show.
Here are some examples to help get you in the groove:
I write words that are read by a computer. Students who want to learn about something ask their computer for part of a book. Their computer talks to another computer over phone lines, and that computer uses the words I’ve written to send them the book part they want. Sometimes students want new types of book parts that they can use to share their learning with other students. I have to work out the right words for the computer to let them do this, and write them. When I can, I share my words with other people so that their computers can send better book parts to their students.
I talk to people about computer things to help make the stuff they make and the stuff we make better. Also I sometimes write things that the computer gets but I am not great at that. We give away a lot of the things we make which is not like the way some other people share their work. It makes me happy inside that we do this.
I help write a group of books that a computer reads and stores. These books make the computer work much better. When a computer has stored the books I help make you can do things with your computer, like write to people and send what you wrote to other people's computers. Or you can ask your computer to talk to other computers to learn things, look at moving pictures, listen to music or buy shopping.
The group of books I help write are free for anyone to give to their computer. You are also free to change these books and share those changes with anyone. This way everyone can help make the books even better so your computer can do more for you.
I help people change their computer to something better. I fix things that are broken and make people happy again. I talk to a lot of people about computers all day. I put my heart into every conversation so people feel like they are talking with a human instead of speaking with a pretend human.
That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to email@example.com or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
Join us on IRC in #ubuntu-podcast on Freenode
We are delighted to announce the availability of a new service for Ubuntu which any user can enable on their current installations – the Canonical Livepatch Service.
This new live kernel patching service can be used on any Ubuntu 16.04 LTS system (using the generic Linux 4.4 kernel) to minimise unplanned downtime and maintain the highest levels of security.
First a bit of background…
Since the release of the Linux 4.0 kernel about 18 months ago, users have been able to patch and update their kernel packages without rebooting. However, until now, no Linux distribution has offered this feature for free to its users. That changes today with the release of the Canonical Livepatch Service:
The Canonical Livepatch Service is available for free to all users up to 3 machines.
If you want to enable the Canonical Livepatch Service on more than three machines, please purchase an Ubuntu Advantage support package from buy.ubuntu.com or get in touch.
Beyond securing your desktop, server, IoT device or virtual guest, the Canonical Livepatch Service is particularly useful in container environments since every container will share the same kernel.
“Kernel live patching enables runtime correction of critical security issues in your kernel without rebooting. It’s the best way to ensure that machines are safe at the kernel level, while guaranteeing uptime, especially for container hosts where a single machine may be running thousands of different workloads,” says Dustin Kirkland, Ubuntu Product and Strategy for Canonical.
Here’s how to enable the Canonical Livepatch Service today
First, go to the Canonical Livepatch Service portal and retrieve your livepatch token.
Next, install the livepatch snap using the first command below, then enable the service with the second command, using the token you obtained:
sudo snap install canonical-livepatch
sudo canonical-livepatch enable [Token]
That's it! You've just enabled kernel live patching for your Ubuntu system, and you can do that, for free, on two more installations! However, if you want to enable the Canonical Livepatch Service on more than three systems, you'll need to purchase an Ubuntu Advantage support package, starting from as little as $12 per month.
Need a bit more help?
Here’s a quick video to guide you through the steps in less than a minute:
For further details on the Canonical Livepatch Service please read Dustin Kirkland’s useful list of FAQs.
One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.
Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.
If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.
Choice of smart card
For standard PGP use, the OpenPGP card provides a good choice.
For X.509 use cases, such as VPN access, there is a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.
Choice of card reader
The technical factors to consider are most easily explained with a table:

| | Smartcard reader without PIN-pad | Smartcard reader with PIN-pad |
|---|---|---|
| Software/firmware | Mostly free/open | Proprietary firmware in reader |
| PIN capture on the host | Possible (PIN is entered on the host keyboard) | Not generally possible |
| Passphrase compromise attack vectors | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Exploiting firmware bugs over USB (only sophisticated attackers) |
| Form factor | Small, USB key form-factor | Largest form factor |
Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.
Choice of computer to run the clean room environment
There are a wide array of devices to choose from. Here are some principles that come to mind:
Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
Even better if there is no wired networking either
Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
No hard disks required
Having built-in SD card readers or the ability to add them easily
SD cards and SD card readers
The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.
It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.
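A minimal sketch of the redundant backup this implies. The mount points and filenames here are assumptions for illustration: stand-in directories play the role of the SD card mounts, and on a real clean-room machine the key material would come from a GnuPG export rather than an echo.

```shell
# Demonstration only: these directories stand in for SD card mount points
# (on real hardware, e.g. /media/sd-vendor1, /media/sd-vendor2, ...).
mkdir -p card-a card-b card-c

# Stand-in for the exported master key; with GnuPG this would be roughly:
#   gpg --armor --export-secret-keys "$KEYID" > master-secret.asc
echo "dummy key material" > master-secret.asc

# Copy the backup to every card and flush write caches before removal.
for card in card-a card-b card-c; do
    cp master-secret.asc "$card/" && sync
done
```

The point is simply that each card carries an identical copy, so the loss of any one card (or one vendor's whole batch) does not lose the key.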
For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.
One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.
Can you help with ideas or donations?
If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.
Introducing the Canonical Livepatch Service

Howdy!

Ubuntu 16.04 LTS's 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect. This is particularly amazing for container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.

I've tried to answer below some questions that you might have. If you have others, you're welcome to add them to the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?
A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system:
1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token.
2. Install the canonical-livepatch snap: $ sudo snap install canonical-livepatch
3. Enable the service with your token: $ sudo canonical-livepatch enable [TOKEN]
And you're done! You can check the status at any time using: $ canonical-livepatch status --verbose

Q: What are the system requirements?
A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS servers and desktops, on physical machines, virtual machines, and in the cloud. Safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443).
You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).

Q: What about other architectures?
A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxONE mainframe), and there is also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

Q: What about other flavors?
A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

Q: What about other releases of Ubuntu?
A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS's Linux 4.4 kernel. Older releases of Ubuntu will not work, because they're missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime. We will consider providing livepatches for the HWE kernels in 2017.

Q: What about derivatives of Ubuntu?
A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

Q: How does Canonical test livepatches?
A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines.
Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows. And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service. Systemic failures are automatically detected and raised for inspection by Canonical engineers. Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

Q: What kinds of updates will be provided by the Canonical Livepatch Service?
A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We'll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?
A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using 'sudo apt update; sudo apt upgrade -y', using the Desktop "Software Updates" application, or Landscape systems management.
These standard (non-security) updates will still require a reboot, as they always have.

Q: Can I roll back a Canonical Livepatch?
A: Currently, rolling back/removing an already-inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

Q: What about low and medium severity CVEs?
A: We're currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team. We'll livepatch other CVEs opportunistically.

Q: Why are Canonical Livepatches provided as a subscription service?
A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

Q: But I don't want to buy UA support!
A: You don't have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

Q: But I don't have an Ubuntu SSO account!
A: An Ubuntu SSO account is free, and provides services similar to those Google, Microsoft, and Apple provide for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

Q: But I don't want to log in to ubuntu.com!
A: You don't have to!
Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously 'sudo apt update; sudo apt upgrade; sudo reboot' as often as you like, and receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

Q: But I don't have Internet access to livepatch.canonical.com:443!
A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox. It's an Internet streaming service for security hotfixes for your kernel. You have access to the stream of bits when you can connect to the service over the Internet. On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

Q: Where's the source code?
A: The source code of livepatch modules can be found here. The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

Q: What about Ubuntu Core?
A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures.
Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?
A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:
- Oracle Ksplice uses its own technology, which is not in upstream Linux.
- RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
- Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
- Ksplice is free, but unsupported, for Ubuntu desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2,299/node/year).
- It's a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires a Red Hat Enterprise Linux Server Premium Subscription at $1,299/node/year. (I'm happy to be corrected and will update this post.)
- SUSE Live Patching is available as an add-on to the SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
- Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry-level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

Q: What happens if I run into problems/bugs with Canonical Livepatches?
A: Ubuntu Advantage customers will file a support request at support.canonical.com, where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best-effort basis.

Q: Why does the canonical-livepatch client/server have a proprietary license?
A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers.
We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

Q: How do I build my own livepatches?
A: It's certainly possible for you to build your own Linux kernel livepatches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back: http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

Q: How do I get notifications of which CVEs are livepatched and which are not?
A: You can, at any time, query the status of the canonical-livepatch daemon using 'canonical-livepatch status --verbose'. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

Q: Isn't livepatching just a big ole rootkit?
A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability, which is required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you're an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply 'echo 1 | sudo tee /proc/sys/kernel/modules_disabled'.

Keep the uptime!
:-Dustin
Like each month, here comes a report about the work of paid contributors to Debian LTS.
In September, about 152 work hours have been dispatched among 13 paid contributors. Their reports are available:
Balint Reczey did 15 hours (out of 12.25 hours allocated + 7.25 remaining, thus keeping 4.5 extra hours for October).
Ben Hutchings did 6 hours (out of 12.3 hours allocated + 1.45 remaining, he gave back 7h and thus keeps 9.75 extra hours for October).
Brian May did 12.25 hours.
Chris Lamb did 12.75 hours (out of 12.30 hours allocated + 0.45 hours remaining).
Emilio Pozuelo Monfort did 1 hour (out of 12.3 hours allocated + 2.95 remaining) and gave back the unused hours.
Guido Günther did 6 hours (out of 7h allocated, thus keeping 1 extra hour for October).
Hugo Lefeuvre did 12 hours.
Jonas Meurer did 8 hours (out of 9 hours allocated, thus keeping 1 extra hour for October).
Markus Koschany did 12.25 hours.
Ola Lundqvist did 11 hours (out of 12.25 hours assigned thus keeping 1.25 extra hours).
Raphaël Hertzog did 12.25 hours.
Roberto C. Sanchez did 14 hours (out of 12.25h allocated + 3.75h remaining, thus keeping 2 extra hours).
Thorsten Alteholz did 12.25 hours.
Evolution of the situation
The number of sponsored hours reached 172 hours per month thanks to maxcluster GmbH joining as silver sponsor and RHX Srl joining as bronze sponsor.
We only need a couple of supplementary sponsors now to reach our objective of funding the equivalent of a full time position.
The security tracker currently lists 39 packages with a known CVE and the dla-needed.txt file 34. It’s a small bump compared to last month, but almost all issues are assigned to someone.
Thanks to our sponsors
New sponsors are in bold.
TOSHIBA (for 12 months)
GitHub (for 3 months)
The Positive Internet (for 28 months)
Blablacar (for 27 months)
Linode LLC (for 17 months)
Babiel GmbH (for 6 months)
Plat’Home (for 6 months)
UR Communications BV
Domeneshop AS (for 27 months)
Université Lille 3 (for 27 months)
Trollweb Solutions (for 25 months)
Nantes Métropole (for 21 months)
University of Luxembourg (for 19 months)
Dalenys (for 18 months)
Univention GmbH (for 13 months)
Université Jean Monnet de St Etienne (for 13 months)
Sonus Networks (for 7 months)
David Ayers – IntarS Austria (for 28 months)
Evolix (for 28 months)
Offensive Security (for 28 months)
Seznam.cz, a.s. (for 28 months)
Freeside Internet Service (for 27 months)
MyTux (for 27 months)
Intevation GmbH (for 25 months)
Linuxhotel GmbH (for 25 months)
Daevel SARL (for 23 months)
Bitfolk LTD (for 22 months)
Megaspace Internet Services GmbH (for 22 months)
Greenbone Networks GmbH (for 21 months)
NUMLOG (for 21 months)
WinGo AG (for 21 months)
Ecole Centrale de Nantes – LHEEA (for 17 months)
Sig-I/O (for 14 months)
Entr’ouvert (for 12 months)
Adfinis SyGroup AG (for 9 months)
GNI MEDIA (for 4 months)
Laboratoire LEGI – UMR 5519 / CNRS (for 4 months)
Quarantainenet BV (for 4 months)
In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.
As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example, CVE-2016-0728 shows:
break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2
This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.
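The extraction step can be sketched in a few lines of Python. This is not the Ubuntu Security Team's actual tooling, just a minimal illustration of parsing the break-fix lines described above (the `parse_break_fix` helper is hypothetical):

```python
import re

# Matches a tracker line of the form:
#   break-fix: <introduced-sha> <fixed-sha>
# A "-" in the introduced slot is shorthand for the start of git history.
BREAK_FIX = re.compile(r"^\s*break-fix:\s+(\S+)\s+(\S+)")

def parse_break_fix(cve_text):
    """Return a list of (introduced_sha, fixed_sha) pairs found in a CVE file."""
    pairs = []
    for line in cve_text.splitlines():
        m = BREAK_FIX.match(line)
        if m:
            pairs.append(m.groups())
    return pairs

sample = """Patches_linux:
 break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2
"""
print(parse_break_fix(sample))
```

A file with multiple break-fix lines simply yields multiple pairs, matching the multi-SHA case described above.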
Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:
And here it is zoomed in to just Critical and High:
The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:
Critical: 2 @ 3.3 years
High: 34 @ 6.4 years
Medium: 334 @ 5.2 years
Low: 186 @ 5.0 years
This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis.
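The “roughly 5 years” figure can be double-checked by taking the count-weighted mean of the per-severity averages in the summary above; a quick sketch:

```python
# Count-weighted mean of the per-severity average lifetimes listed above.
lifetimes = {            # severity: (CVE count, average lifetime in years)
    "Critical": (2, 3.3),
    "High": (34, 6.4),
    "Medium": (334, 5.2),
    "Low": (186, 5.0),
}
total = sum(n for n, _ in lifetimes.values())
weighted = sum(n * years for n, years in lifetimes.values()) / total
print(f"{total} CVEs, weighted mean lifetime: {weighted:.1f} years")
```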
While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or sometimes any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.
(Edit: see my updated graphs that include CVE-2016-5195.)
© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
<figure class="wp-caption alignright" id="attachment_2858" style="width: 400px;">My Plasma Desktop in 2016</figure>On Monday, KDE’s Plasma team held its traditional kickoff meeting for the new development cycle. We took this opportunity to also look and plan ahead a bit further into the future. In what areas are we lacking, where do we want or need to improve? Where do we want to take Plasma in the next two years?
Our general direction points towards professional use-cases. We want Plasma to be a solid tool, a reliable work-horse that gets out of the way, allowing users to get the job done quickly and elegantly. We want it to be faster and of better quality than the competition.
With these big words out there, let’s have a look at some specifics we talked about.
Release schedule until 2018
Our plan is to move from 4 to 3 yearly releases in 2017 and 2018, which we think strikes a nice balance between our pace of development, and stabilization periods around that. Our discussion of the release schedule resulted in the following plan:
Plasma 5.9: 31 January 2017
Plasma 5.10: May 2017
Plasma 5.11: September 2017
Plasma 5.12: December 2017
Plasma 5.13: April 2018
Plasma 5.14 LTS: August 2018
A cautionary note: we can’t know whether everything will play out exactly like this, as the schedule depends to a degree on external factors, such as Qt’s release schedule. It is what we intend to do, really our “best guess”. Still, it aligns with the plans of Qt, which is also looking at an LTS release in summer 2018. So, what will these upcoming releases bring?
<figure class="wp-caption aligncenter" id="attachment_2860" style="width: 739px;">
Breeze Look and Feel</figure>
UI and Theming
The Breeze icon theme will see further completion work and refinements of details in its existing icons. Icon usage across the whole UI will see more streamlining work as well. We also plan to tweak the Breeze-themed scrollbars a bit, so watch out for changes in that area. A Breeze-themed Firefox theme is planned, as well as more refinement in the widget themes for Qt, GTK, etc. We do not plan any radical changes in the overall look and feel of our Breeze theme, but will further improve and evolve it, both in its light and dark flavors.
<figure class="wp-caption alignright" id="attachment_2870" style="width: 400px;">The menu button is a first sign of the global menu returning to Plasma</figure>One thing that many of our users are missing is support for a global menu, similar to how macOS displays application menus outside of the app’s window (for example at the top of the screen). We’re currently working on bringing this feature, which was well supported in Plasma 4, back in Plasma 5, modernized and updated to current standards. It may land as soon as the upcoming 5.9 release, at least for X11.
Better support for customizing the locale (the system which shows things like time, currencies, numbers in the way the user expects them) is on our radar as well. In this area, we lost some features due to the transition to Frameworks 5, or rather QLocale, away from kdelibs’ custom, but sometimes incompatible locale handling classes.
The next releases will bring further improvements to our Wayland session overall. Currently, Plasma’s KWin provides an almost feature-complete Wayland display server, which already works for many use-cases. However, it hasn’t seen the real-world testing it needs, and it lacks certain features that our users expect from their X11 session, as well as new features we want to offer to better support modern hardware.
We plan to improve multi-screen rendering on Wayland and the input stack in areas such as relative pointers, pointer confinement, touchpad gestures, wacom tablet support, clipboard management (for example, Klipper). X11 dependencies in KWin will be further reduced with the goal to make it possible to start up KWin entirely without hard X11 dependencies.
One new feature which we want to offer in our Wayland session is support for scaling the contents of each output individually, which allows users to use multiple displays with vastly varying pixel densities more seamlessly.
There are also improvements planned around virtual desktops under Wayland, as well as their relation to Plasma’s Activities features. Output configuration as of now is also not complete, and needs more work in the coming months. Some features we plan will also need changes in QtWayland, so there’s some upstream bug-fixing needed, as well.
One thing we’d like to see, to improve our users’ experience under Wayland, is application developers testing their apps under Wayland. It still happens a bit too often that an application ends up running into a code-path that assumes X11 is used as the display server protocol. While we can run applications in backwards-compatible XWayland mode, applications can benefit from the better rendering quality under Wayland only when actually using the Wayland protocol. (This is mostly handled transparently by Qt, but applications do their own thing, so unless it’s tested, it will contain bugs.)
Plasma’s Mobile flavor will be further stabilized and its stack cleaned up; we are further reducing the stack’s footprint without losing important functionality. The recently released Kirigami framework, which allows developers to create convergent applications that work on both mobile and desktop form-factors, will be adjusted to use the new, more light-weight QtQuick Controls 2. This makes Kirigami a more attractive technology for creating powerful, yet lean applications that work across a number of mobile and desktop operating systems, such as Plasma Mobile, Android, iOS, and others.
<figure class="wp-caption aligncenter" id="attachment_2863" style="width: 739px;">Discover, Plasma’s software center, integrates online content from the KDE Store; its convergent user interface is provided by the Kirigami framework</figure>
Planned improvements in our integration of online services include dependency handling for assets installed from the store. This will allow us to support installation of meta-themes directly from the KDE Store. We also want to improve our support for online data storage, prioritizing Free services, but also offering support for proprietary ones, such as the GDrive support we recently added to Plasma’s feature set.
We want to further increase our contributor base. We plan to work towards an easier on-boarding experience, through better documentation, mentoring and communication in general. KDE is recruiting, so if you are looking for a challenging and worthwhile way to work as part of a team, or on your individual project, join our ranks of developers, artists, sysadmins, translators, documentation writers, evangelists, media experts and free culture activists and let us help each other.
Incomplete bug reports
Since writing an earlier post on the subject I've continued to monitor new bug reports. I have been very disappointed to see that so many have to be marked as "incomplete", as they give little information about the problem and don't really give anyone an incentive to work on a fix. Many are very vague about the problem being reported, while some are just an indication that a problem exists. Reports which just say something along the lines of:
help
bug
i don't know
dont remember
don't do a lot to point to the problem that is being reported. Maybe some information can be gleaned from any attached log files, but please, bug reporters, tell us what the problem is, as it will greatly increase the chances of your issue being fixed, investigated or (re)assigned to the correct package. Reporters need to reply when asked for further information about the bug or the version of Ubuntu being used, even if it is to say that, for whatever reason, the problem no longer affects them. And I say to all novice reporters: "Please don't keep the Ubuntu version or flavour that you are using a secret!"
Bug report or support request?
Some reports are probably submitted as a desperate measure when help is needed and no-one is around to help. Over the last couple of months I've seen dozens of bug reports being closed as "expired" because there had been no response to a request for information within 59 days of the request being made. Obviously Ubuntu users are having problems, but are their issues being resolved? Are those users moving back to Windows or to another Linux distribution because they aren't getting the help they need and don't know how to ask for it? Many of the issues that I'm referring to should have been posted initially as support requests at the Ubuntu Forums, Ask Ubuntu or Launchpad Answers, and then filed as bug reports once sufficient help and guidance had been obtained and the presence of a bug confirmed.
A bug with the bug reporting tool ubuntu-bug?
Sometimes trying to establish the correct package against which to file a bug is a difficult task, especially if you are not conversant with the inner workings of Ubuntu. Launchpad can often guide the reporter, but it seems many reports are being filed against the xorg package in error. Bug #1631748 (ubuntu-bug selects wrong category) seems to confirm this widespread problem. If a bug is reported against the wrong package and no description of the issue is given, there is no chance of the issue being investigated.
Further reading
The following links will give those who are new to bug reporting some help in filing a good bug report that can be worked on by a bug triager or developer.
How to Report Bugs
How to Report Bugs Effectively
Improving Ubuntu: A Beginners Guide to Filing Bug Reports
How do I report a bug?
To the future and some events of September 1973
In just a couple of weeks I'll no longer have to worry about getting up early for work, fighting my way through the local traffic and aiming for an 8 o'clock start, which is something that I seldom manage to achieve these days. No doubt I'll be able to devote much more time to working on Ubuntu, and who knows, I may well revisit some of the teams and projects that I've left over the past couple of years. Looking at Mark Shuttleworth's Wikipedia page, it seems that he was born just a week or two after I started my working life in September 1973. A lot has changed since then. We didn't have personal computers or mobile phones and, as far as I can remember, we managed perfectly well without them. Back then I had very different interests, some of which I've recently returned to, but obviously I had no idea what was in store for me around 40 years later. Thanks for everything so far, Mark!
zz = Zesty Zapus, a mouse that jumped
So we now have a code-name for the next Ubuntu release, which Mark has confirmed will be Zesty Zapus. Apparently a zapus is a North American mouse that jumps. So, now that we've reached the end of the alphabet, what next? Prediction: there will be much discussion about the code-name for the 17.10 release, and its announcement will probably be the most anticipated yet.
I've seen Ms. Belkin go ahead and wrap up the Y (Yakkety) season while giving a look ahead to the Z (Zesty) season. I'm afraid I cannot give as much of a report. My participation in the Ubuntu realm has been a bit held back by things beyond my control.
During the Y season I was stuck at work. I have a hefty commute which pretty much wrecks my day when added to my working hours. My work is considered seasonal, which for a federal civil servant means that it is subject to workload constraints. Apparently we did not have a proper handle on workload this year. The estimate was that our work would be done by a certain date and we would go on "seasonal release" or furlough until we were recalled to duty. We missed that date by quite a long shot. After quite a bit of attrition, angry resignations, people checking into therapy, people developing cardiac issues, and worse, my unit received "seasonal release" only last Friday. Recall could be as soon as two weeks away.
The only main action I really wanted to handle during Y was to get backports in of dianara and pumpa if they dropped new releases. I was a little late in doing so, but I just filed the backport bug for dianara tonight. I kept saying I would wait for furlough to do the testing, but furlough took long enough that a couple of versions of dianara went by in the interim. Folks looking at pump.io have to remember that even the website of a server is itself a client, and new features have to be implemented in clients for the main server engine to pass around. The website isn't the point of pump.io; rather, the use of a client of your choice is, and a list is being maintained.
I don't really know what the plan is for Z for me. Right now many eyes around the world are focused on the election for President of the United States. People regard that office as the so-called leader of the free world. That person also happens to be head of the civil service in the United States. Neither of the major party candidates has nice plans for my employing agency. Both scare me. A good chunk of the attrition and angry resignations at work has been people fleeing for the safety of the private sector in light of what is expected from either major party candidate.
Backporting will continue subject to resource restrictions. I remain a student in the Emergency Management and Planning Administration program at Lakeland Community College with graduation expected in May 2017 subject to course availability. Right now I'm working on learning about the Incident Command System and how it is applied in addition to Continuity of Operations.
Graphic From FEMA's Emergency Management Institute IS-800 Class of the Incident Command System
Time will tell where things go. Clues are not readily available to me. I wish they were, perhaps...