Most recent items from Ubuntu feeds:
Ricardo Salveti: Netgear R7000, DD-WRT, IPv6 and the lack of a stable gateway from Planet Ubuntu

After playing with a few IoT-related devices and scenarios, I wanted to finally enable IPv6 in my home network. After checking whether my ISP (NET Virtua) already supported IPv6, I was happy to find out that they were claiming my city (Florianópolis) was covered, so I decided to give it a try (and post my setup/experience).
I’m using a Netgear R7000 as my main router, simply because it’s known to be well supported by DD-WRT.
Installing DD-WRT on the Netgear R7000 is quite an easy job, and there is also a huge list of links covering several useful tips at http://www.dd-wrt.com/phpBB2/viewtopic.php?t=264152, which is quite handy (overclocking, QoS, installing Tor, etc.).
After updating DD-WRT to the latest stable build from Kong (v3.0-r29300M kongac) and enabling bridge mode on my cable modem (from Arris), it was finally time to start playing with IPv6.
I first configured my IPv4 network to use DNSMasq for both DHCP and DNS. Being able to use DNSMasq is great because it is quite easy to configure, and it also supports the IPv6 Router Advertisement feature, which is quite handy for my own IPv6 network (no need for radvd).

Then I just enabled IPv6 by going to the Setup->IPv6 tab. I’m using DHCPv6 with prefix delegation since that is supported by my ISP, making it even easier to configure my own network. Since I’m using DNSMasq for DHCP, I can safely disable radvd, but an additional step is required (custom DNSMasq config).

To configure DNSMasq just go to the Services->Services tab, and you will find a field that can be used to store your custom config entries. For a complete overview and description of the supported options, please check http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html. In my case, all I needed was to define the dhcp-range, dhcp-option, ra-param and enable-ra options.
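For illustration, custom entries along these lines should do the trick (a sketch only: the bridge interface name, lease time and RA timers below are assumptions, so adjust them to your own setup):
enable-ra
dhcp-range=::,constructor:br0,ra-names,12h
ra-param=br0,60,1200
dhcp-option=option6:dns-server,[::]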

Time to reboot and check if everything would indeed work as expected. Once booted, I decided to check the setup over telnet, since it’s easier to debug and understand what is going on.
Using telnet is quite easy, and enabled by default in DD-WRT (internal network only):
rsalveti@evapro:~$ telnet 192.168.1.1
Trying 192.168.1.1...
Connected to 192.168.1.1.
Escape character is '^]'.

DD-WRT v3.0-r29300M kongac (c) 2016 NewMedia-NET GmbH
Release: 04/14/16

domosys login: root
Password:
==========================================================

___ ___ _ _____ ______ ____ ___
/ _ \/ _ \___| | /| / / _ \/_ __/ _ __|_ / / _ \
/ // / // /___/ |/ |/ / , _/ / / | |/ //_ <_/ // /
/____/____/ |__/|__/_/|_| /_/ |___/____(_)___/

DD-WRT v3.0
http://www.dd-wrt.com

==========================================================

BusyBox v1.24.1 (2016-04-14 23:48:45 CEST) built-in shell (ash)

root@domosys:~#

Great, I’m in. Time to check if the interfaces were configured correctly:
root@domosys:~# ip -6 addr show
1: lo: mtu 65536
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0: mtu 1500 qlen 1000
inet6 fe80::e6f4:c6ff:XXXX:XXXX/64 scope link
valid_lft forever preferred_lft forever
4: vlan1@eth0: mtu 1500
inet6 fe80::e6f4:c6ff:XXXX:XXXX/64 scope link
valid_lft forever preferred_lft forever
5: vlan2@eth0: mtu 1500
inet6 fe80::e6f4:c6ff:XXXX:XXXX/64 scope link
valid_lft forever preferred_lft forever
6: eth1: mtu 1500 qlen 1000
inet6 fe80::e6f4:c6ff:XXXX:XXXX/64 scope link
valid_lft forever preferred_lft forever
7: eth2: mtu 1500 qlen 1000
inet6 fe80::e6f4:c6ff:XXXX:XXXX/64 scope link
valid_lft forever preferred_lft forever
10: br0: mtu 1500
inet6 2804:14d:badb:XXXX:XXXX:ff:fe00:0/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::e6f4:c6ff:XXXX:XXXX/64 scope link
valid_lft forever preferred_lft forever

Everything is looking good, br0 got a valid IPv6 address, now to check the default route:
root@domosys:~# ip -6 route
2804:14d:badb:XXXX::/64 dev vlan2 proto kernel metric 256
2804:14d:badb:XXXX::/64 dev vlan2 proto kernel metric 256
2804:14d:badb:XXXX::/64 dev br0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev vlan1 proto kernel metric 256
fe80::/64 dev eth1 proto kernel metric 256
fe80::/64 dev eth2 proto kernel metric 256
fe80::/64 dev br0 proto kernel metric 256
fe80::/64 dev vlan2 proto kernel metric 256
default via fe80::217:10ff:fe8b:a78b dev vlan2 proto ra metric 1024 expires 1783sec hoplimit 64
unreachable default dev lo proto kernel metric -1 error -101
ff00::/8 dev eth0 metric 256
ff00::/8 dev vlan1 metric 256
ff00::/8 dev eth1 metric 256
ff00::/8 dev eth2 metric 256
ff00::/8 dev br0 metric 256
ff00::/8 dev vlan2 metric 256
unreachable default dev lo proto kernel metric -1 error -101

Default route in place, looking correct, so let’s try pinging devices over IPv6, to see if it is indeed functional:
root@domosys:~# ping6 2001:4860:4860::8844
PING 2001:4860:4860::8844 (2001:4860:4860::8844): 56 data bytes
64 bytes from 2001:4860:4860::8844: seq=0 ttl=42 time=80.255 ms
64 bytes from 2001:4860:4860::8844: seq=1 ttl=42 time=135.638 ms
64 bytes from 2001:4860:4860::8844: seq=2 ttl=42 time=93.105 ms
^C
--- 2001:4860:4860::8844 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 80.255/102.999/135.638 ms
root@domosys:~# ping6 google.com
PING google.com (2800:3f0:4004:804::200e): 56 data bytes
64 bytes from 2800:3f0:4004:804::200e: seq=0 ttl=52 time=43.273 ms
64 bytes from 2800:3f0:4004:804::200e: seq=1 ttl=52 time=52.316 ms
64 bytes from 2800:3f0:4004:804::200e: seq=2 ttl=52 time=77.638 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 43.273/57.742/77.638 ms

Awesome, it seems everything is going as planned, time to connect from my notebook and test my IPv6 connection:
rsalveti@evapro:~$ ip -6 addr show wlp3s0
2: wlp3s0: mtu 1500 state UP qlen 1000
inet6 2804:14d:badb:XXXX:XXXX:d259:65cf:a8f1/64 scope global temporary dynamic
valid_lft 43057sec preferred_lft 43057sec
inet6 2804:14d:badb:XXXX:XXXX:2de6:6164:8e93/64 scope global mngtmpaddr noprefixroute dynamic
valid_lft 43057sec preferred_lft 43057sec
inet6 fe80::a65e:60ff:fee4:XXXX/64 scope link
valid_lft forever preferred_lft forever

rsalveti@evapro:~$ ping6 google.com
PING google.com(2800:3f0:4004:806::200e) 56 data bytes
64 bytes from 2800:3f0:4004:806::200e: icmp_seq=1 ttl=52 time=30.4 ms
64 bytes from 2800:3f0:4004:806::200e: icmp_seq=2 ttl=52 time=35.7 ms
64 bytes from 2800:3f0:4004:806::200e: icmp_seq=3 ttl=52 time=45.8 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 30.414/37.353/45.860/6.404 ms

Looking great. Now before we celebrate, time to check the results from http://test-ipv6.com

and http://ipv6-test.com

Nice, easy and working as expected!
Unfortunately life is not always that easy and simple: something was making my IPv6 network die completely after a few minutes, though it was always functional again after rebooting my router. I tried the same ping6 commands from both my notebook and the router, but all I got was confirmation that my IPv6 network was down. Time to get my hands dirty and debug!
root@domosys:~# ip -6 addr show br0
10: br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500
inet6 2804:14d:badb:XXXX:XXXX:ff:fe00:0/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::e6f4:c6ff:XXXX:XXXX/64 scope link
valid_lft forever preferred_lft forever

root@domosys:~# ip -6 route
2804:14d:badb:XXXX::/64 dev vlan2 proto kernel metric 256
2804:14d:badb:XXXX::/64 dev vlan2 proto kernel metric 256
2804:14d:badb:XXXX::/64 dev br0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev vlan1 proto kernel metric 256
fe80::/64 dev eth1 proto kernel metric 256
fe80::/64 dev eth2 proto kernel metric 256
fe80::/64 dev br0 proto kernel metric 256
fe80::/64 dev vlan2 proto kernel metric 256
unreachable default dev lo proto kernel metric -1 error -101
ff00::/8 dev eth0 metric 256
ff00::/8 dev vlan1 metric 256
ff00::/8 dev eth1 metric 256
ff00::/8 dev eth2 metric 256
ff00::/8 dev br0 metric 256
ff00::/8 dev vlan2 metric 256
unreachable default dev lo proto kernel metric -1 error -101

IPv6 address looks correct (same as before), but there is no default route for IPv6! Let’s try to manually add the route and check the network again:
root@domosys:~# ip -6 route add default via fe80::217:10ff:fe8b:a78b dev vlan2 proto ra metric 1024 hoplimit 64

root@domosys:~# ping6 google.com
PING google.com (2800:3f0:4004:804::200e): 56 data bytes
64 bytes from 2800:3f0:4004:804::200e: seq=0 ttl=52 time=35.507 ms
64 bytes from 2800:3f0:4004:804::200e: seq=1 ttl=52 time=40.768 ms
64 bytes from 2800:3f0:4004:804::200e: seq=2 ttl=52 time=30.964 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 30.964/35.746/40.768 ms

Good, that was it, now to understand why my default route disappeared.
IPv6 uses the ICMPv6 Router Solicitation (RS) and Router Advertisement (RA) messages to request and announce the default gateway on a network. To investigate whether my ISP’s router is periodically sending unsolicited RA messages (Cisco routers with ipv6 nd ra suppress?) all we need is some help from tcpdump, which is also available in DD-WRT. To make it easier, let’s filter for just the RS and RA messages (in the filter below, ip6[40] is the ICMPv6 type field: 133 is RS and 134 is RA):
root@domosys:~# tcpdump -vvvv -ttt -i vlan2 icmp6 and 'ip6[40] >= 133 && ip6[40] <= 134'
tcpdump: listening on vlan2, link-type EN10MB (Ethernet), capture size 65535 bytes

I waited several minutes and saw nothing, so it looks like we’re heading in the right direction. As a test, let’s play with the vlan2 interface a bit:
root@domosys:~# ifconfig vlan2 down; ifconfig vlan2 up; tcpdump -vv -ttt -i vlan2 icmp6 and 'ip6[40] >= 133 && ip6[40] <= 134'
tcpdump: listening on vlan2, link-type EN10MB (Ethernet), capture size 65535 bytes
00:00:00.000000 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 16) fe80::e6f4:c6ff:XXXX:XXXX > ff02::2: [icmp6 sum ok] ICMP6, router solicitation, length 16
source link-address option (1), length 8 (1): e4:f4:c6:XX:XX:XX
0x0000: e4f4 c6XX XXXX
00:00:00.014184 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 96) fe80::217:10ff:fe8b:a78b > fe80::e6f4:c6ff:XXXX:XXXX: [icmp6 sum ok] ICMP6, router advertisement, length 96
hop limit 64, Flags [managed, other stateful], pref medium, router lifetime 1800s, reachable time 0s, retrans time 0s
source link-address option (1), length 8 (1): 00:17:10:8b:a7:8b
0x0000: 0017 108b a78b
mtu option (5), length 8 (1): 1500
0x0000: 0000 0000 05dc

So the ISP router is sending the required RA message, but only after the initial RS message that is sent by my router when establishing the connection (simulated here by bringing the interface down and up). The router lifetime also gives a hint as to why my IPv6 network only works for a while after booting my router (30 minutes in this case, matching the 1800s lifetime in the RA).
Unfortunately there is not much that can be done from the client side, as my ISP’s router is really not helping here. One possible workaround would be to force a default gateway after booting my router, but that is not a good solution, as the gateway address might change after a while.
After investigating a bit more, I decided to try to periodically send RS messages by hand (even if not really recommended by the spec), so I could get the desired RA messages that would be used to update my default gateway route.
Luckily there is already a tool called rdisc6 (from http://www.remlab.net/ndisc6) that can be used to send RS messages from the command line, but unfortunately that is not available by default in the DD-WRT build I used, so time to install the tool by hand.
One nice thing about using Kong’s DD-WRT builds is that you can use the bootstrap command to enable additional OPKG repositories, allowing the user to extend the image with custom packages. After checking Kong’s OPKG repo, I was able to confirm that rdisc6 was already available as a package, so all I needed to do was install it, which is great!
Since there is not much disk space in flash, to install additional packages it’s recommended to first add a USB disk (I used an old 4GB pendrive I had). First clean the disk and create a single ext3 partition, then just plug it into the Netgear R7000 router and reboot. To configure the USB disk just go to the Services->USB tab:

Make sure to mount the disk into /opt, otherwise the bootstrap script will fail to run (first enable core USB and USB storage support, reboot, add the partition UUID and reboot again).
With the disk in place and mounted at the right path, just open a telnet connection and run the bootstrap command (which is available in the DD-WRT image):
root@domosys:~# bootstrap
Bootstrap is checking prerequisites...

USB automounter is enabled.
Found a valid partition: /opt.

Proceed with download and install of opkg? (y/n) [default=n]:
y
Connecting to www.desipro.de (217.160.231.132:80)
opkg.ipk 100% |******| 60071 0:00:00 ETA
Connecting to www.desipro.de (217.160.231.132:80)
opkg.ipk.sig 100% |******| 256 0:00:00 ETA
Connecting to www.desipro.de (217.160.231.132:80)
functions.sh 100% |******| 7269 0:00:00 ETA
Bootstrap complete. You can now use opkg to install additional packages.

Now install the rdisc6 package by using the opkg tool:
root@domosys:~# opkg install rdisc6

And now, to finally test my theory, let’s keep tcpdump running while we manually send an RS message, to check whether the default route gets updated once the RA message comes back from my ISP’s router.
Terminal 1:
root@domosys:~# tcpdump -vvvv -ttt -i vlan2 icmp6 and 'ip6[40] >= 133 && ip6[40] <= 134'
tcpdump: listening on vlan2, link-type EN10MB (Ethernet), capture size 65535 bytes

Terminal 2:
root@domosys:~# ip -6 route
...
fe80::/64 dev vlan2 proto kernel metric 256
unreachable default dev lo proto kernel metric -1 error -101
ff00::/8 dev eth0 metric 256
ff00::/8 dev vlan1 metric 256
ff00::/8 dev eth1 metric 256
ff00::/8 dev eth2 metric 256
ff00::/8 dev br0 metric 256
ff00::/8 dev vlan2 metric 256
unreachable default dev lo proto kernel metric -1 error -101

root@domosys:~# /opt/usr/bin/rdisc6 -1 -q vlan2
2804:14d:badb:XXXX::/64
2804:14d:badb:XXXX::/64

root@domosys:~# ip -6 route
...
fe80::/64 dev vlan2 proto kernel metric 256
default via fe80::217:10ff:fe8b:a78b dev vlan2 proto ra metric 1024 expires 1796sec hoplimit 64
unreachable default dev lo proto kernel metric -1 error -101
ff00::/8 dev eth0 metric 256
ff00::/8 dev vlan1 metric 256
ff00::/8 dev eth1 metric 256
ff00::/8 dev eth2 metric 256
ff00::/8 dev br0 metric 256
ff00::/8 dev vlan2 metric 256
unreachable default dev lo proto kernel metric -1 error -101

Great, default route added again, now back to Terminal 1 to check the output from tcpdump:
00:00:00.000000 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 8) fe80::e6f4:c6ff:XXXX:XXXX > ff02::2: [icmp6 sum ok] ICMP6, router solicitation, length 8
00:00:00.011702 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 96) fe80::217:10ff:fe8b:a78b > fe80::e6f4:c6ff:XXXX:XXXX: [icmp6 sum ok] ICMP6, router advertisement, length 96
hop limit 64, Flags [managed, other stateful], pref medium, router lifetime 1800s, reachable time 0s, retrans time 0s
source link-address option (1), length 8 (1): 00:17:10:8b:a7:8b
0x0000: 0017 108b a78b
mtu option (5), length 8 (1): 1500
0x0000: 0000 0000 05dc
...

Lovely: RS message sent with help from rdisc6, RA message received back from the ISP’s router, and default route updated.
Now all that remains is to add the rdisc6 command as a cron job, making sure it gets executed often (e.g. every minute). To add custom cron jobs just go to the Administration->Management tab (a sketch of such a cron entry is shown after the screenshot):

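For illustration, a cron entry along the following lines should work (a sketch only: DD-WRT’s cron field takes standard crontab lines including the user column, the rdisc6 path matches the opkg install location used above, and running it every minute is an arbitrary choice):
* * * * * root /opt/usr/bin/rdisc6 -1 -q vlan2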
With that in place I can safely reboot my router and get a stable IPv6 network setup (until my ISP improves/fixes their infrastructure).
That’s it, now to start doing some real IoT IPv6-based deployments in my local network.

about 6 hours ago

Jono Bacon: Bacon Roundup from Planet Ubuntu

In my work I tend to create a lot of material both on my website here as well as on other websites (for example, my opensource.com column and my Forbes column). I also participate in interviews and other pieces.

I couldn’t think of an efficient way to pull these together for you folks to check out. So, I figured I would periodically share these goings-on in a post. Let’s get this first Bacon Roundup rolling…

How hackers are making products safer (cio.com)
An interview about the work I am doing at HackerOne in building a global community of hackers that are incentivized to find security issues, build their expertise/skills, and earn some money.

8 ways to get your swag on (opensource.com)
A column about the challenges of shipping swag out to community members. Here are 8 things I have learned to make this easier, covering production, shipping, and more.

10 tips for new GitHub projects (opensource.com)
Kicking off a new GitHub project can be tough for new communities. I wrote this piece to provide 10 simple tips and tricks to ensure your new GitHub project is setting off on the right path.

The Risks of Over-Rewarding Communities (jonobacon.org)
A piece about some interesting research into the risks of over-rewarding people to the point of it impacting their performance. This covers the research, the implications for communities, and some practical ways to harness this in your community/organization.

GC On-Demand Podcast Interview (http://podcast.discoposse.com/)
I had a blast chatting to Eric Wright about community management, career development, traversing challenges, and constantly evolving and improving your craft. A fun discussion and I think a fun listen too.

Taking your GitHub issues from good to great (zenhub.com)
I was invited by my friends at ZenHub to participate in a piece about doing GitHub issues right. They wrote the lion’s share of this piece but I contributed some material.

Finally, if you want to get my blog posts directly to your inbox, simply put your email address into the box to the right of this post. This will ensure you never miss a beat.

about 18 hours ago

Jonathan Riddell: Neon Updates – KDE Network, KDE Applications from Planet Ubuntu

Not a great week for Neon last week. A server we used for building packages filled up, limiting the work we could do, and then a patch from Plasma broke some people’s startup and they were faced with a dreaded black screen. Apologies, folks.
But then, magically, we got an upgrade to the server with lots of nice new disk space, and the problem patch was reverted, so hopefully anyone affected was able to upgrade again and recover.
So I added some KDE Network bits and rebuilt the live/installable ISO images so they’re all updated to Applications 16.04.3 in User Edition.  And Applications forked so now Dev Edition Stable Branches uses the 16.08 Beta branches and you can try out lots of updated apps.  And because the developer made a special release just for us and wears cute bunny ears I added in Konversation to our builds for good old fashioned IRC chit chat (none of your modern Slacky/Telegram/Web2.0 protocols here).

about 22 hours ago

Jono Bacon: The Risks of Over-Rewarding Communities from Planet Ubuntu

Incentive plays an important role in communities. We see it everywhere: community members are rewarded with extrinsic rewards such as t-shirts, stickers, gadgets, or other material, or intrinsic rewards such as increased responsibilities, kudos, reputation, or other benefits.

The logic seems sound: if someone is the bee’s knees and doing a great job, they deserve to be rewarded. People like rewards, and rewards make people want to stick around and contribute more. What’s not to love?

There is, though, some interesting evidence to suggest that over-rewarding your communities, whether internal to an organization or external, has some potent risks. Let’s explore the evidence and then see how we can harness it.

The Research

Back in 1908, the psychologists Yerkes and Dodson (and potential prog rock band) developed the Yerkes-Dodson Law. It suggests performance on a task increases with arousal, but only to a point. Now, before you get too hot under the collar, this study refers to mental or physiological arousal such as motivation. The study highlights a "peak arousal" point, the ideal amount of arousal for hitting maximal performance.

Dan Ariely in The Upside of Irrationality took this research and built on it to test the effect of extrinsic rewards on performance. He asked a series of people in India to perform tasks with varying levels of financial reward (very small up to very high). His results were interesting:

Relative to those in the low- or medium-bonus conditions, they achieved good or very good performance less than a third of the time. The experience was so stressful to those in the very-large-bonus condition that they choked under the pressure.

I found this choke point insight interesting. We often see an inverse choke point when the stress of joining a community is too high (e.g. submitting a first code pull request to your peers). Do we see choke points for community members under a high level of pressure to perform, though?

Community Strategy Implications

I am not so sure. Many communities have high performing community members with high levels of responsibility (e.g. release managers, security teams, and core maintainers) who perform with predictably high quality results.

Where we often see community rear its ugly head is with entitlement; that is, when some community members expect to be treated differently from others.

When I think back to the cases where I have seen examples of this entitlement (which shall remain anonymous to protect the innocent) it has invariably been due to an imbalance of expectations and rewards. In other words, when their expectations don’t match their level of influence on a community and/or they feel rewarded beyond that suitable level of influence, entitlement tends to brew.

As such, my graph looks a little like this:

This shows the Yerkes-Dodson curve but subdivides the timeline into three distinctive areas. The first area is used for growth, and we use rewards as a means to encourage participation. The middle area is for maintenance and ensuring regular contribution over an extended period of time. The final area is the danger zone – this is where entitlement can set in, so we want to ensure that we manage expectations and rewards carefully. In this end zone we want to reward great work, but ultimately cap the size of the reward – lavish gifts and experiences are probably not going to have as much impact and may even risk the dreaded entitlement phenomenon.

This narrative matches a hunch I have had for a while that rewards have a direct line to expectations. If we can map our rewards to effectively mitigate the inverse choke point for new members (thus make it easier to get involved) and reduce the latter choke point (thus reduce entitlement), we will have a balanced community.

Things You Can Do

So, dear reader, this is where I give you some homework you can do to harness this research:

Design what a ‘good’ contribution is – before you can reward people well you need to decide what a good contribution is. As an example, is a good code contribution a well formed, submitted, reviewed, and merged pull request? Decide what it is and write it down.
Create a platform for effectively tracking capabilities – while you can always throw out rewards willy-nilly based on observations of performance, this risks accusations of rewarding some but not others. As such, implement an independent way of mapping this good contribution to some kind of automatically generated numeric representation (e.g. reputation/karma).
Front-load intrinsic rewards – for new contributors in the growth stage, intrinsic rewards (such as respect, support, and mentoring) are more meaningful as these new members are often nervous about getting started. You want these intrinsic rewards primarily at the beginning of a new contributor on-ramp – it will build a personal sense of community with them.
Carefully distribute extrinsic rewards – extrinsic rewards such as clothing, gadgets, and money should be carefully distributed along the curve in the graph above. In other words, give out great material, but don’t make it too opulent otherwise you may face the latter choke point.
Create a distribution curve of expectations – in the same way we are mapping rewards to the above graph, we should do the same with expectations. At different points in the community lifecycle we need to provide different levels of expectations and information (e.g. limited scope for new contributions, much wider for regular participants). Map this out and design systems for delivering it.

If we can be mindful of the Yerkes-Dodson curve and balance expectations and rewards well, we have the ability to build truly engaging and incentivized communities and organizations.

I would love to have a discussion about this in the comments. Do you think this makes sense? What am I missing in my thinking here? What are great examples of effective rewards? How have you reduced entitlement? Share your thoughts…

1 day ago

Rhonda D'Vine: Debian LGBTIQA+ from Planet Ubuntu

I have a long overdue blog entry about what happened in recent times. People that follow my tweets did catch some things. Most noteworthy there was the Trans*Inter*Congress in Munich at the start of May. It was an absolute blast. I met so many nice and great people, talked and experienced so many great things there that I'm still having a great motivational push from it every time I think back. It was also the time when I realized that I in fact do have body dysphoria even though I thought I'm fine with my body in general: Being tall is a huge issue for me. Realizing that I have a huge issue (yes, pun intended) with my length was quite relieving, even though it doesn't make it go away. It's something that makes passing and transitioning for me harder. I'm well aware that there are tall women, and that there are dedicated shops for lengthy women, but that's not the only thing that I have trouble with. What bothers me most is what people read into tall people: that they are always someone they can lean on for comfort, that tall people are always considered to be self confident and standing up for themselves (another pun, I know ... my bad).

And while I'm fine with people coming to me for leaning on to, I rarely get the chance to do so myself. And people don't even consider it. When I was there in Munich, talking with another great (... pun?) trans woman who was as tall as me I finally had the possibility to just rest my head on her shoulder and finally feel the comfort I need just as much as everyone else out there, too. Probably that's also the reason why I'm so touchy and do go Free Hugging as often as possible. But being tall also means that you are usually only the big spoon when cuddling up. Having a small mental breakdown because of realizing that didn't change the feeling directly but definitely helped with looking for what I could change to fix that for myself.

Then, at the end of May, the movie FtWTF - female to what the fuck came to the cinema. It's a documentary about six people who got assigned female at birth. And it's absolutely charming, and has great food for thought in it. If you ever get the chance to watch it you definitely should.

And then came debconf16 in Capetown. The flight there was canceled and we had to get rebooked. The first offer was to go through Dubai, and gladly a colleague did point out to the person behind the desk that that wouldn't be safe for myself and thus out of scope. In the end we managed to get to Capetown quite nicely, and even though it was winter, when the sun was shining it was quite nice. Besides the cold nights, that is. Or being stuck on the way up to Table Mountain because a colleague had cramps in his legs and we had to call mountain rescue. Gladly the night was clear, and when the mountain rescue finally got us to the top and it was night already, we had one of the nicest views from up there that most people will probably never experience.

And then ... I got invited to a trans meetup in Capetown. I was both excited and nervous about it, not knowing what to expect there. But it was simply great. The group there was simply outstandingly great. The host gave updated information on the progress of clinical support within South Africa; what I took with me is that there is only one clinic there for SRS, which manages only two people a year, which is simply ... yuck. Guess you can guess how many years (yes, decades) the waiting line is ... I was blown away though by the diversity of the group, on so many levels, most notably on the age spectrum. It was a charm to meet you all there! If you ever stop by in Capetown and you are part of the LGBTIQ community, make sure you get in contact with the Triangle Project.

But, about the real reason to write this entry: I was approached at Debconf by at least two people who asked me what I thought about creating an LGBTIQA+ group within Debian, and if I'd like to push for that. Actually I think it would be a good idea to have some sort of exchange between people on the queer spectrum (and I hope I don't offend anyone with just saying queer for LGBTIQA+ people). Given that I'm quite outspoken, people approach me every now and then, so I'm aware that there is a fair number of people who would fall into that category. On the other hand some of them wouldn't want to have it publicly known because it shouldn't matter and isn't really the business of others.

So I'm uncertain. If we follow that path I guess something that is closed, or at least offers the possibility of closed communication, would be needed so as not to out someone just because they join the discussion. It was easier with Debian Women, where it was (somewhat) clear that male participants are allies supporting the cause and not considered to be women themselves, but often enough (mostly cis hetero male) people are afraid to join a dedicated LGBTIQA+ group because they have the fear of having their identity judged. These things should be considered before creating such a place so that people can feel comfortable when joining and know what to expect beforehand.

For the time being I created #debian-diversity on irc.debian.org to discuss how to move forward. Please bear in mind that even the channel name is up for discussion. Acronyms might not be the way to go in my opinion; just read back over the discussion that led to the Diversity Statement of Debian, where the original approach was to start listing groups for inclusiveness but it quickly became clear that such a list can get outdated too easily.

I am willing to be part of that effort, but right now I have some personal things to deal with which eat up a fair amount of my time. My kid starts school in September (yes, it's that long already, time flies ...). And it looks like I'll have to move a second time in the near future: I'll have to leave my current flat by the end of the year and the Que[e]rbau I'm moving into won't be ready by that time to host me yet ... F*ck. :(

2 days ago

Canonical Design Team: New starter Raul (UX designer) – “I want to challenge myself to do the most difficult things” from Planet Ubuntu

Meet the newest member of the design team, UX designer Raul Alvarez, who will be working on the Ubuntu convergence story. Raul will be bringing new ideas to improve our apps to allow for a seamless experience across all devices. We caught up with him to tell us more about his background and what attracted him to the open source world of Ubuntu.

You can find Raul’s blog here and reach out to him on Twitter using his handle @raulalgo.
Tell me about your background
If we go all the way back to university, I started as a computer engineering student, but after a while I got to a point where I was rather burnt out by it. Then, almost by chance, I ended up studying another degree in Advertising and PR. When studying my second degree I gained a fresh perspective. I was coming from studying maths and physics to then finding myself in classes for Spanish, history, law, and eventually design, which is where I got hooked.
I turned 30 and decided to move to London, as everyone in the small town of Salamanca (West Spain) was either getting married or bored; I was the latter. I wanted to challenge myself to do the most difficult things and push a bit more. I moved into designing Forex trading apps, which was a great experience with very smart people. I got to work very closely with the developers too.
I then went into e-commerce as a designer, which was another diverse industry I wanted to learn from. Getting into something I know nothing about is key for me. It’s tricky, as people want experience, but once I’m there and I learn, I feel that I have the ability to take a fresh look at things. From studying advertising and knowing how apps are built, I could bring those disciplines together to work on different platforms.
Canonical was a company I wanted to be part of. Just so happens they were looking for a designer, and now here I am!
Do you have any projects you’re working / or have worked on?
In the late days of my computer engineering degree, me and some fellow students started our own business. It was when the Social Network movie was out and everyone wanted to be Mark Zuckerberg; and so did we. We created a photography social network that was like a Flickr wannabe, or closer to what 500px is now. We had good intentions and we worked very hard on it. However, we lacked the business vision and strategy to push it forward. We had two choices: we close it off and do something else, or we find a better way to make money.
Salamanca is a small town and has little going on, but it just so happened that a company there was doing mobile apps on demand for clients. Instead of hiring more people when they had large spikes of work, they would reach out to other companies. My three partners were playing the role of developers and I was the designer. We spent four years designing mobile apps for various clients’ specific needs; most came from the advertising industry. We had some startups come to us who didn’t have much money, and we would help them advertise and prototype their apps. It was always a rather constrained working environment, with a low budget and working by trial and error.
What attracted you to the open source world of Ubuntu?
For me, being here is amazing because I had been using a laptop that ran Ubuntu in my uni days. I’ve always known open source and the ideas around it. I remember playing with Linux when I was at high school too.
What does UX mean to you?
User Experience (laughs). But seriously, I think the term ‘UX’ is thrown back and forth a lot and people forget what it means. It’s a lot of ideas that could or could not be UX.
People might think that UX is just associated with apps and web design. But it’s not. If you think about user experience, it’s in everything. You can use user experience to build your hotel for instance. I could say how is the lobby going to be decorated, what is the uniform going to be like, do I want the guests to find a little chocolate under their pillow? THAT is defining the user experience. You don’t need to do a lot of research. Well, you can research user experience in other hotels, that would be one approach. Or you can say I have this vision I want to make my approach work. For this you need good judgement and to think about people, but also be prepared to take risks.
One of the parts I enjoy most about designing is whenever I don’t know what I’m going to do. That is the fun bit.
What have you learned in your first week at Canonical?
I came here thinking I knew how complex an operating system was. I wasn’t even close. I realised the complexity was way down below, every single little thing is taken into account, which amazes me. Then I realised the scale of the task. It’s amazing how much work is going on here. I have a lot of respect for it.
What is your proudest achievement?
Making a decision like: I’m stuck and I need a change. I made the effort to move to a different country and to change my degree. It has always been very natural for me to take risks, but I didn’t realize how scary it actually is until I stop and think about it.

2 days ago

Dirk Deimeke: Timewarrior 1.0.0beta1 ... from Planet Ubuntu

It is finally here: the first beta version of Timewarrior has been released. Timewarrior is a command-line application for time tracking; it can be integrated into Taskwarrior via hooks.
If you have a use for it, now is the right time to get on board and support us with your tests and bug reports.

2 days ago

The Fridge: Ubuntu Weekly Newsletter Issue 475 from Planet Ubuntu

Welcome to the Ubuntu Weekly Newsletter. This is issue #475 for the week July 18 – 24, 2016, and the full version is available here.
In this issue we cover:

Ubuntu 16.04.1 LTS released
Ubuntu Stats
On The State Of Health Of Our LoCos
Ubuntu 16.04 in the SF Bay Area
LoCo Events
Ubuntu Women: Event Report: WWFS-FWD’2016, Kolkata
Canonical Design Team: See what our interns got up to and what they thought of our apps
Simos Xenitellis: Playing around with LXD containers (LXC) on Ubuntu
Timo Aaltonen: Intel Graphics Gen4 and Newer Now Defaults to Modesetting Driver on X
Ubuntu Cloud News
In The Blogosphere
Featured Audio and Video
Weekly Ubuntu Development Team Meetings
Upcoming Meetings and Events
Updates and Security for 12.04, 14.04, 15.10 and 16.04
And much more!

This issue of The Ubuntu Weekly Newsletter is brought to you by:

Elizabeth K. Joseph
Simon Quigley
Chris Sirrs
Athul Muralidhar
And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

2 days ago

Elizabeth K. Joseph: The Official Ubuntu Book, 9th Edition released! from Planet Ubuntu

Back in 2014 I had the opportunity to lend my expertise to the 8th edition of The Official Ubuntu Book and began my path into authorship. Since then, I’ve completed the first edition of Common OpenStack Deployments, coming out in September. I was thrilled this year when Matthew Helmke invited me back to work on the 9th edition of The Official Ubuntu Book. We also had José Antonio Rey joining us for this edition as a third co-author.

One of the things we focused on with the 8th edition was, knowing that it would have a shelf life of 2 years, future-proofing. With the 9th edition we continued this focus, but also wanted to add a whole new chapter: Ubuntu, Convergence, and Devices of the Future.
Taking a snippet from the book’s sample content, the chapter gives a whirlwind tour of where Ubuntu on desktops, servers and devices is going:
Chapter 10: Ubuntu, Convergence, and Devices of the Future 261
The Convergence Vision 262
Unity 263
Ubuntu Devices 264
The Internet of Things and Beyond 268
The Future of the Ubuntu Desktop 272
Summary 273
The biggest challenge with this chapter was the future-proofing. We’re in an exciting point in the world of Ubuntu and how it’s moved far beyond “Linux for Human Beings” on the desktop and into powering servers, tablets, robots and even refrigerators. With the Snappy and Ubuntu Core technologies both powering much of this progress and changing rapidly, we had to be cautious about how in depth we covered this tooling. With the help of Michael Hall, Nathan Haines and Sergio Schvezov I believe we’ve succeeded in presenting a chapter that gives the reader a firm overview of these new technologies, while being general enough to last us until the 10th edition of this book.

Also thanks to Thomas Mashos of the Mythbuntu team and Paul Mellors who also pitched in with this edition. Finally, as with the last edition, it was a pleasure to work with Matthew and José on this book. I hope you enjoy it!

3 days ago

Michael Terry: An Ubuntu Touch Lockscreen Refresh from Planet Ubuntu

It’s been a while since I’ve blogged about any unity8 work. Lately I’ve been updating the visual look and feel of the Ubuntu Touch lockscreen. I’m digging the new look, so I wanted to share.
Note that these have not landed or been fully reviewed. Looks may change before hitting devices.
A new lighter default background and snazzier infographic.

A proper passphrase box like on the tablet or desktop.

And use a more consistent look and feel for passcodes too.

With a custom background image, we use white infographic bubbles. (image credit)

3 days ago

Marcin Juszkiewicz: AArch64 desktop hardware? from Planet Ubuntu

Soon it will be four years since I started working on the AArch64 architecture. A lot of software things changed during that time. A lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke was AArch64 developers standing in a queue for 10GHz x86-64 CPUs. So I was generating working binaries by using cross compilation. But many distributions only do native builds. In models. Imagine Qt4 building for 3-4 days…

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. Highly unstable: we could use just one core, etc. GCC was crashing like hell, but we managed to get stable build results from it. Qt4 was building in a few hours now.

Then the amount of hardware at Red Hat was growing and growing. Farms of APM Mustangs, AMD Seattle and several other servers appeared, got racked and became available to use. In 2014 one Mustang even landed on my desk (as the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more). And availability was hard too.

Linaro tried to do something about it and created the 96boards project.

First came the ‘Consumer Edition’ range: yet more small form factor boards with functionality stripped as much as possible. No Ethernet, no storage other than eMMC/USB, low amounts of memory, chips taken from mobile phones, etc. But they were selling! Only because people were hungry to get ANYTHING with AArch64 cores. First HiKey, then the DragonBoard 410 got released. Then a few other boards. All with the same set of issues: non-mainline kernel, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ got announced, with another ridiculous form factor (and microATX as an option). And that was it. There was a leak of the Husky board which showed how fucked up the design was: ports all around the edges, memory above and under the board, and of course incompatible with any industrial form factor. I would like to know what they were smoking…

Time passed by. Husky got forgotten for another year. Then Cello was announced as a “new EE 96boards board”, while it looked like a redesigned Husky with two fewer SATA ports (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand-soldered: USB controller mounted rotated, dead on-board Ethernet, etc.

In the meantime we got a few devices from other companies. Pine64 had a big campaign on Kickstarter and shipped to developers. Hardkernel started selling the ODROID-C2, Geekbox released their TV box, and probably something else got released as well. But all those boards were limited to 1-2GB of memory, often lacked SATA, and used mobile processors with their own sets of bootloaders, causing extra work for distributions.

The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

So we have 2016 now. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but most of this effort goes unused because developers do not have sane hardware to play with (sane means expandable, supported by distributions, capable).

There are no standard form factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit-Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals. So vendors do not have to worry about or deal with this part. But still there is nothing to put those cards into. No mainboards which can be mounted into a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on. And the current hardware situation does not help. All those projects that make hardware available “in a cloud” help only a subset of projects: ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration, etc.?

So where is my AArch64 workstation? In desktop or laptop form.

Post written after my Google+ post, where a similar discussion happened in the comments.

Related posts:
96 boards again?
Cello: new AArch64 enterprise board from 96boards project
96boards goes enterprise?

3 days ago

Shih-Yuan Lee: Disable Secure Boot in shim-signed from Planet Ubuntu

The latest Ubuntu kernel updates bring some Secure Boot enhancements for kernel modules when Secure Boot is enabled in the BIOS settings. However, there is no easy way to automatically sign the kernel modules built by DKMS packages so far. If we want to use those DKMS packages, we need to disable Secure Boot in the BIOS settings temporarily, or we can disable Secure Boot in shim-signed temporarily. The following steps describe how to disable Secure Boot in shim-signed.
1. Open a terminal with Ctrl + Alt + T, execute `sudo update-secureboot-policy` and then select ‘Yes’.
2. Enter a temporary password of between 8 and 16 digits (for example, 12345678; we will use this password later).
3. Enter the same password again to confirm.
4. Reboot the system and press any key when you see the blue screen (MOK management).
5. Select “Change Secure Boot state”.
6. Press the corresponding character of the temporary password from steps 2 and 3 (for example, ‘2’ from ‘12345678’ on this screen) and press Enter. Repeat this several times as requested to confirm the password.
7. Select ‘Yes’ to disable Secure Boot in shim-signed.
8. Press the Enter key to finish the whole procedure.
We can still enable Secure Boot in shim-signed again: just execute `sudo update-secureboot-policy --enable` and then follow steps similar to the above.

3 days ago

David Tomaschik: Chrome on Kali for root from Planet Ubuntu

For many of the tools on Kali Linux, it’s easiest to run
them as root, so the de facto standard has more or less become to run as root
when using Kali. Google Chrome, on the other hand, would not like to be run as
root (because it makes sandboxing harder when your user is all-powerful) so
there have been a number of tricks to get it to run. I’m going to describe my
preferred setup here. (Mostly as documentation for myself.)

Download and Install the Chrome .deb file.

I prefer the Google Chrome Beta
build, but stable will work just fine too. Download the .deb file and install
it:

dpkg -i google-chrome*.deb

If it’s a fresh Kali install, you’ll be missing libappindicator, but you can
fix that via:

apt-get install -f

Getting a User to Run Chrome

We’ll create a dedicated user to run Chrome; this provides some user isolation
and prevents Chrome from complaining.

useradd -m chrome

Setting up su

Now we’ll set up su to handle the passing of X credentials between the users.
We’ll add pam_xauth to forward it, and configure root to pass credentials to
the chrome user.

sed -i 's/@include common-session/session optional pam_xauth.so\n&/' \
/etc/pam.d/su
mkdir -p /root/.xauth
echo chrome > /root/.xauth/export

Setting up the Chrome Desktop Icon

All that’s left now is to change the Application Menu entry (aka .desktop) to
use the following as the command:

su -l -c "/usr/bin/google-chrome-beta" chrome

4 days ago

Timo Aaltonen: Intel Graphics Gen4 and Newer Now Defaults to Modesetting Driver on X from Planet Ubuntu

Earlier this week Debian unstable and Ubuntu Yakkety switched to load the ‘modesetting’ X video driver by default on Intel graphics gen4 and newer. This roughly maps to GPUs made since 2007 (965GM->). The main reason for this was to get rid of chasing after upstream git, because there hasn’t been a stable release in nearly three years and even the latest devel snapshot is over a year and a half old. It also means sharing the glamor 2D acceleration backend with radeon/amdgpu, which is a nice change knowing that the intel SNA backend was constantly slightly broken for some GPU generation(s).
Xserver 1.18.4 was released this week with a number of backported fixes to glamor and the modesetting driver from master, so the time was right to make the switch now while both Stretch and Yakkety are still in the development phase. So I wrote a small patch for the xserver to load the intel driver only on gen2 & gen3, which can’t do glamor efficiently. Newer Intel GPUs will fall back to modesetting. This approach is good since it can be easily overridden by dropping a conffile into /etc/X11 that uses something else.
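For illustration, such a conffile can be as small as the following sketch (dropped either as /etc/X11/xorg.conf or as a snippet under /etc/X11/xorg.conf.d/; the exact filename is up to you):
Section "Device"
        Identifier "Intel Graphics"
        Driver     "intel"
EndSection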
I’ve seen only one bug filed that was caused by this change so far, and it turned out to be a kernel bug fixed in 4.6 (Yak will ship with 4.8). If you see something strange like corrupt widgets or whatnot after upgrading to current Yakkety, verify it doesn’t happen with intel (‘cp /usr/share/doc/xserver-xorg-video-intel/xorg.conf /etc/X11’ followed by a login manager restart or reboot) and file a bug against xserver-xorg-core (verify xdiagnose is installed, then run ‘ubuntu-bug xserver-xorg-core’). We’ll take it from there.

4 days ago

Simos Xenitellis: How to set up multiple secure (SSL/TLS, Qualys SSL Labs A+) websites using LXD containers from Planet Ubuntu

In previous posts we saw how to set up LXD on a DigitalOcean VPS, how to set up LXD on a Scaleway VPS, and what the lifecycle of an LXD container looks like.
In this post, we are going to

Create multiple websites, each in a separate LXD container
Install HAProxy as a TLS Termination Proxy, in an LXD container
Configure HAProxy so that each website is only accessible through TLS
Perform the SSL Server Test so that our websites really get the A+!

In this post, we are not going to install WordPress (or other CMS) on the websites. We keep this post simple as that is material for our next post.
The requirements are

We have at least one domain under our ownership, and we configure a few hostnames to resolve to the IP address of the new VPS. This is required in order to get those free TLS certificates from Let’s Encrypt.

Set up a VPS
We are using DigitalOcean in this example.

Ubuntu 16.04.1 LTS was released a few days ago and DigitalOcean changed the Ubuntu default to 16.04.1. This is nice.
We are trying out the smallest droplet in order to figure out how many websites we can squeeze in containers. That is, 512MB RAM on a single virtual CPU core, at only 20GB disk space!
In this example we are not using the new DigitalOcean block storage as at the moment it is available in only two datacentres.
Let’s click on the Create droplet button and the VPS is created!
Initial configuration
We are using DigitalOcean in this HowTo, and we have covered the initial configuration in this previous post.
Trying out LXD containers on Ubuntu on DigitalOcean

Go through the post and perform the tasks described in section «Set up LXD on DigitalOcean».
Creating the containers
We create three containers for three websites, plus one container for HAProxy.
ubuntu@ubuntu-512mb-ams3-01:~$ lxc init ubuntu:x web1
Creating web1
Retrieving image: 100%
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web2
Creating web2

real    0m6.620s
user    0m0.016s
sys    0m0.004s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web3
Creating web3

real    1m15.723s
user    0m0.012s
sys    0m0.020s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x haproxy
Creating haproxy

real    0m48.747s
user    0m0.012s
sys    0m0.012s
ubuntu@ubuntu-512mb-ams3-01:~$
Normally it takes a few seconds for a new container to initialize. Remember that we are squeezing here, it’s a 512MB VPS, and the ZFS pool is stored on a file (not a block device)! We are looking into the kernel messages of the VPS for lines similar to «Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child», which indicate that we reached the memory limit. While preparing this blog post, there were a couple of Out of memory kills, so I made sure that nothing critical was dying. If this is too much for you, you can select a 1GB RAM (or more) VPS and start over.
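As an aside (not part of this walkthrough, just a sketch of an LXD feature that can help on such a small VPS), LXD can also cap how much memory each container may use; the 128MB figure below is an arbitrary assumption:
lxc config set web1 limits.memory 128MB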
Let’s start the containers up!
ubuntu@ubuntu-512mb-ams3-01:~$ lxc start web1 web2 web3 haproxy
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$
You may need to run lxc list a few times until you make sure that all containers got an IP address. That means that they all completed their startup.
DNS configuration
The public IP address of this specific VPS is 188.166.10.229. For this test, I am using the domain ubuntugreece.xyz as follows:

Container web1: ubuntugreece.xyz and www.ubuntugreece.xyz have IP 188.166.10.229
Container web2: web2.ubuntugreece.xyz has IP 188.166.10.229
Container web3: web3.ubuntugreece.xyz has IP 188.166.10.229

Here is how it looks when configured on a DNS management console,

From here onwards, it is a waiting game until these DNS configurations propagate to the rest of the Internet. We need to wait until those hostnames resolve to their IP address.
ubuntu@ubuntu-512mb-ams3-01:~$ host ubuntugreece.xyz
ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$ host web2.ubuntugreece.xyz
Host web2.ubuntugreece.xyz not found: 3(NXDOMAIN)
ubuntu@ubuntu-512mb-ams3-01:~$ host web3.ubuntugreece.xyz
web3.ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$
These are the results after ten minutes. ubuntugreece.xyz and web3.ubuntugreece.xyz are resolving fine, while web2.ubuntugreece.xyz needs a bit more time.
We can continue! (and ignore for now web2)
Web server configuration
Let’s see the configuration for web1. You must repeat the following for web2 and web3.
We install the nginx web server,
ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec web1 -- /bin/bash
root@web1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web1:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web1:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web1:~#
nginx needs to be configured so that it understands the domain name for web1. Here is the diff,
diff --git a/etc/nginx/sites-available/default b/etc/nginx/sites-available/default
index a761605..b2cea8f 100644
--- a/etc/nginx/sites-available/default
+++ b/etc/nginx/sites-available/default
@@ -38,7 +38,7 @@ server {
        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;
 
-       server_name _;
+       server_name ubuntugreece.xyz www.ubuntugreece.xyz;
 
        location / {
                # First attempt to serve request as file, then
and finally we restart nginx and exit the web1 container,
root@web1:/etc/nginx/sites-enabled# systemctl restart nginx
root@web1:/etc/nginx/sites-enabled# exit
exit
ubuntu@ubuntu-512mb-ams3-01:~$
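Even before DNS has propagated, we can verify from the VPS that nginx inside web1 is up and serving. A quick check with curl against the container IP (10.234.150.169 in the lxc list output above), passing the Host header by hand; install curl first if it is not already there,
# Request the default page from web1, pretending the request was for ubuntugreece.xyz.
curl -H "Host: ubuntugreece.xyz" http://10.234.150.169/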
Forwarding connections to the HAProxy container
We are about to set up the HAProxy container. Let's add iptables rules to forward connections arriving at ports 80 and 443 on the VPS to the HAProxy container.
ubuntu@ubuntu-512mb-ams3-01:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 04:01:36:50:00:01  
          inet addr:188.166.10.229  Bcast:188.166.63.255  Mask:255.255.192.0
          inet6 addr: fe80::601:36ff:fe50:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26362 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:360767509 (360.7 MB)  TX bytes:3863846 (3.8 MB)

ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 80 -j DNAT --to-destination 10.234.150.39:80
[sudo] password for ubuntu:
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 443 -j DNAT --to-destination 10.234.150.39:443
ubuntu@ubuntu-512mb-ams3-01:~$
If you want to make those changes permanent, see Saving Iptables Firewall Rules Permanently (the part about the package iptables-persistent).
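For reference, with the iptables-persistent package that article describes, making the rules permanent boils down to two commands (the installer also offers to save the current rules for you),
sudo apt install iptables-persistent   # asks whether to save the current IPv4/IPv6 rules
sudo netfilter-persistent save         # re-save the rules later, after any change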
HAProxy initial configuration
Let’s see how to configure HAProxy in container haproxy. We enter the container, update the software and install the haproxy package.
ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy -- /bin/bash
root@haproxy:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@haproxy:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@haproxy:~# apt install haproxy
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
root@haproxy:~#
We add the following configuration to /etc/haproxy/haproxy.cfg. Initially, we do not have any certificates for TLS, but we need the web servers to work over plain HTTP so that Let's Encrypt can verify that we own the websites. Therefore, here is the complete configuration, with two lines commented out (they start with ###) so that plain HTTP can work. As soon as we have dealt with Let's Encrypt, we go full TLS (by uncommenting the two lines that start with ###) and never look back. We mention later in the post when to uncomment them.
diff --git a/etc/haproxy/haproxy.cfg b/etc/haproxy/haproxy.cfg
index 86da67d..f6f2577 100644
--- a/etc/haproxy/haproxy.cfg
+++ b/etc/haproxy/haproxy.cfg
@@ -18,11 +18,17 @@ global
     ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
     ssl-default-bind-options no-sslv3
 
+        # Minimum DH ephemeral key size. Otherwise, this size would drop to 1024.
+        # @link: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
+        tune.ssl.default-dh-param 2048
+
 defaults
     log    global
     mode    http
     option    httplog
     option    dontlognull
+        option  forwardfor
+        option  http-server-close
         timeout connect 5000
         timeout client  50000
         timeout server  50000
@@ -33,3 +39,56 @@ defaults
     errorfile 502 /etc/haproxy/errors/502.http
     errorfile 503 /etc/haproxy/errors/503.http
     errorfile 504 /etc/haproxy/errors/504.http
+
+# Configuration of the frontend (HAProxy as a TLS Termination Proxy)
+frontend www_frontend
+    # We bind on port 80 (http) but (see below) get HAProxy to force-switch to HTTPS.
+    bind *:80
+    # We bind on port 443 (https) and specify a directory with the certificates.
+####    bind *:443 ssl crt /etc/haproxy/certs/
+    # We get HAProxy to force-switch to HTTPS, if the connection was just HTTP.
+####    redirect scheme https if !{ ssl_fc }
+    # TLS terminates at HAProxy, the container runs in plain HTTP. Here, HAProxy informs nginx
+    # that there was a TLS Termination Proxy. Required for WordPress and other CMS.
+    reqadd X-Forwarded-Proto:\ https
+
+    # Distinguish between secure and insecure requests (used in the next two lines)
+    acl secure dst_port eq 443
+
+    # Mark all cookies as secure if sent over SSL
+    rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
+
+    # Add the HSTS header with a 1 year max-age
+    rspadd Strict-Transport-Security:\ max-age=31536000 if secure
+
+    # Configuration for each virtual host (uses Server Name Indication, SNI)
+    acl host_ubuntugreece_xyz hdr(host) -i ubuntugreece.xyz www.ubuntugreece.xyz
+    acl host_web2_ubuntugreece_xyz hdr(host) -i web2.ubuntugreece.xyz
+    acl host_web3_ubuntugreece_xyz hdr(host) -i web3.ubuntugreece.xyz
+
+    # Directing the connection to the correct LXD container
+    use_backend web1_cluster if host_ubuntugreece_xyz
+    use_backend web2_cluster if host_web2_ubuntugreece_xyz
+    use_backend web3_cluster if host_web3_ubuntugreece_xyz
+
+# Configuration of the backend (HAProxy as a TLS Termination Proxy)
+backend web1_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web1", directs to container "web1.lxd" (hostname).
+    server web1 web1.lxd:80 check
+
+backend web2_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web2", directs to container "web2.lxd" (hostname).
+    server web2 web2.lxd:80 check
+
+backend web3_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web3", directs to container "web3.lxd" (hostname).
+    server web3 web3.lxd:80 check
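Before restarting, it is worth asking HAProxy to validate the configuration file; a typo would otherwise only show up in systemctl status haproxy. Inside the haproxy container,
# Parse /etc/haproxy/haproxy.cfg and report any errors, without starting the proxy.
haproxy -c -f /etc/haproxy/haproxy.cfg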
Let’s restart HAProxy. If you get any errors, run systemctl status haproxy and try to figure out what went wrong.
root@haproxy:~# systemctl restart haproxy
root@haproxy:~# exit
ubuntu@ubuntu-512mb-ams3-01:~$
Does it work? Let’s visit the website,

It is working! Let's Encrypt will be able to access and verify that we own the domain in the next step.
Get certificates from Let’s Encrypt
We exit out to the VPS and install letsencrypt.
ubuntu@ubuntu-512mb-ams3-01:~$ sudo apt install letsencrypt
[sudo] password for ubuntu:
Reading package lists... Done
...
Setting up python-pyicu (1.9.2-2build1) ...
ubuntu@ubuntu-512mb-ams3-01:~$
We run letsencrypt three times, once for each website. Update: it is also possible to simplify the following by using a multi-domain (Subject Alternative Name, SAN) certificate. Thanks to @jack who mentioned this in the comments.
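For the record, the single SAN certificate variant would look roughly like the following; each -w (short for --webroot-path) applies to the -d domains that follow it. Adapt paths and domains to your own setup; below we continue with one certificate per website.
# One certificate covering all four names, validated through each container's webroot.
sudo letsencrypt certonly --authenticator webroot \
  -w /var/lib/lxd/containers/web1/rootfs/var/www/html -d ubuntugreece.xyz -d www.ubuntugreece.xyz \
  -w /var/lib/lxd/containers/web2/rootfs/var/www/html -d web2.ubuntugreece.xyz \
  -w /var/lib/lxd/containers/web3/rootfs/var/www/html -d web3.ubuntugreece.xyz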
ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web1/rootfs/var/www/html -d ubuntugreece.xyz -d www.ubuntugreece.xyz
... they ask for a contact e-mail address and whether we accept the Terms of Service...

IMPORTANT NOTES:
 - If you lose your account credentials, you can recover through
   e-mails sent to xxxxx@gmail.com.
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/ubuntugreece.xyz/fullchain.pem. Your cert
   will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - Your account credentials have been saved in your Let's Encrypt
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Let's
   Encrypt so making regular backups of this folder is ideal.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$
For completeness, here are the command lines for the other two websites,
ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web2/rootfs/var/www/html -d web2.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web2.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$ time sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web3/rootfs/var/www/html -d web3.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web3.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

real    0m18.458s
user    0m0.852s
sys    0m0.172s
ubuntu@ubuntu-512mb-ams3-01:~$
Yeah, it takes only around twenty seconds to get your Let’s Encrypt certificate!
We got the certificates; now we need to prepare them so that HAProxy (our TLS Termination Proxy) can make use of them. We just need to concatenate the certificate chain and the private key for each certificate, and place the result in the haproxy container at the appropriate directory.
ubuntu@ubuntu-512mb-ams3-01:~$ sudo mkdir /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web2.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web3.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$
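Keep in mind that Let's Encrypt certificates expire after about 90 days, so this concatenation has to be repeated after each renewal. Here is a rough sketch of that maintenance step, assuming your client version supports the renew subcommand (otherwise just re-run the certonly commands above),
# Renew whatever is close to expiry, rebuild the combined chain+key files, restart HAProxy.
sudo letsencrypt renew
for DOMAIN in ubuntugreece.xyz web2.ubuntugreece.xyz web3.ubuntugreece.xyz; do
  sudo bash -c "cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem \
                    /etc/letsencrypt/live/$DOMAIN/privkey.pem \
                    > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem"
done
lxc exec haproxy -- systemctl restart haproxy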
HAProxy final configuration
We are almost there. We need to enter the haproxy container and uncomment those two lines (those that started with ###) that will enable HAProxy to work as a TLS Termination Proxy. Then, restart the haproxy service.
ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy bash
root@haproxy:~# vi /etc/haproxy/haproxy.cfg

root@haproxy:/etc/haproxy# systemctl restart haproxy
root@haproxy:/etc/haproxy# exit
ubuntu@ubuntu-512mb-ams3-01:~$
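If you prefer not to edit the file by hand, the same uncommenting can be done with a one-liner from the VPS; this is just a sketch that strips the #### prefix we added earlier,
# Remove the leading '####' from the two TLS lines in haproxy.cfg, then restart HAProxy.
lxc exec haproxy -- sed -i 's/^####//' /etc/haproxy/haproxy.cfg
lxc exec haproxy -- systemctl restart haproxy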
Let’s test them!
Here are the three websites, notice the padlocks on all three of them,

The SSL Server Report (Qualys)
Here are the SSL Server Reports for each website,

You can check the cached reports for LXD container web1, LXD container web2 and LXD container web3.
Results
The disk space requirements for those four containers (three static websites plus haproxy) are
ubuntu@ubuntu-512mb-ams3-01:~$ sudo zpool list
[sudo] password for ubuntu:
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool-lxd  14.9G  1.13G  13.7G         -     4%     7%  1.00x  ONLINE  -
ubuntu@ubuntu-512mb-ams3-01:~$
The four containers required a bit over 1GB of disk space.
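To see how that space breaks down per container, zfs list shows the individual datasets in the pool (mypool-lxd, from the zpool output above),
# Show the space used by each LXD dataset (containers, images, snapshots) in the pool.
sudo zfs list -r mypool-lxd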
The biggest concern has been the limited RAM of 512MB. The Out Of Memory (OOM) handler was invoked a few times during the first steps of container creation, but not afterwards when launching the nginx instances.
ubuntu@ubuntu-512mb-ams3-01:~$ dmesg | grep "Out of memory"
[  181.976117] Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child
[  183.792372] Out of memory: Kill process 3834 (unsquashfs) score 525 or sacrifice child
[  190.332834] Out of memory: Kill process 3831 (unsquashfs) score 525 or sacrifice child
[  848.834570] Out of memory: Kill process 6378 (localedef) score 134 or sacrifice child
[  860.833991] Out of memory: Kill process 6400 (localedef) score 143 or sacrifice child
[  878.837410] Out of memory: Kill process 6436 (localedef) score 151 or sacrifice child
ubuntu@ubuntu-512mb-ams3-01:~$
There was an error while creating one of the containers in the beginning. I repeated the creation command and it completed successfully. That error was probably related to one of these unsquashfs kills.
Summary
We set up a $5 VPS (512MB RAM, 1CPU core and 20GB SSD disk) with Ubuntu 16.04.1 LTS, then configured LXD to handle containers.
We created three containers for three static websites, and an additional container for HAProxy to work as a TLS Termination Proxy.
We got certificates for those three websites, and verified that they all pass with A+ at the Qualys SSL Server Report.
The 512MB RAM VPS should be OK for a few low traffic websites, especially those generated by static site generators.
 
