Most recent items from Ubuntu feeds:
Ubuntu Blog: Discover cool apps with snap find from Planet Ubuntu

Software discovery and installation broadly comes in two flavors – via graphical user interface or on the command line. If you’re using a Linux distribution with a friendly software frontend offering integrated snap support, e.g. KDE Discover or GNOME Software, you can enjoy the experience without having to resort to using a terminal window.

For command-line users, the experience is somewhat different. When you want to search for snaps, the default results may be a little too generic, and finding the applications you need can take some time. Moreover, this is in contrast to the colorful Snap Store, which lets you browse categories, individual publishers, and so on. Well, working on the command line does not mean you need to be left out.

Finding your way

Running snap find without any arguments will show the list of featured snaps, a curated list showcasing new, interesting or unique software available in the Snap Store. The results are identical to the set you will see in GNOME Software on Ubuntu, as well as to what appears under the Featured category on the store page.

snap find
No search term specified. Here are some interesting snaps:
Name          Version  Publisher              Notes    Summary
postman       7.13.0   postman-inc✓           -        API Development Environment
color-picker  1.0      keshavnrj              -        A colour picker and colour editor for web designers and digital artists
powershell    6.2.3    microsoft-powershell✓  classic  PowerShell for every system!
...

If you are not happy with the default results, you can fine-tune your search. Type snap find -h to see what additional options are available. Please note that typing snap find help will in fact search for snaps containing the string “help” in their name, summary or description.

[find command options]
      --private                       Search private snaps
      --narrow                        Only search for snaps in “stable”
      --section=                      Restrict the search to a given section (default: no-section-specified)
      --color=[auto|never|always]     Use a little bit of colour to highlight some things. (default: auto)
      --unicode=[auto|never|always]   Use a little bit of Unicode to improve legibility. (default: auto)

Notably, the section search is rather useful. If you don’t know which sections exist, you can just type snap find --section to get the list of available categories.

snap find --section
No section specified. Available sections:
 * art-and-design
 * books-and-reference
 * development
 * devices-and-iot
...

For example, the results for the development section:

snap find --section=development
Name               Version  Publisher     Notes    Summary
sublime-text       3211     snapcrafters  classic  A sophisticated text editor for code, markup and prose.
pycharm-community  2019.3   jetbrains✓    classic  Python IDE for Professional Developers
postman            7.13.0   postman-inc✓  -        API Development Environment
atom               1.42.0   snapcrafters  classic  A hackable text editor for the 21st Century.
...

Another interesting feature is the ability to view snaps that are not necessarily accessible to everyone. Developers have the option to have their applications listed publicly, have them unlisted (accessible if you know the name but not searchable) or marked as private, in which case you can only access them if you have the right credentials. Indeed, if you’re logged into the store in the current shell, you can also search for your private snaps. This can be useful if you’re a publisher with multiple collaborators, and need to look for content created by your colleagues or peers.
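
For instance, here is a minimal sketch of a private search once you are logged in; the e-mail address and snap name below are placeholders, not real entries in the store:

snap login publisher@example.com
snap find --private my-private-snap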

Once you’ve narrowed down your search, you can then get more details with snap info. This command returns the full snap description, as well as a list of all the available channels and the versions they include. For example, this could be useful if you intend to parallel install the snap and test multiple versions of the application.

snap info vlc
name:      vlc
summary:   The ultimate media player
publisher: VideoLAN✓
contact:   https://www.videolan.org/support/
license:   GPL-2.0+
description: |
  VLC is the VideoLAN project's media player.
  Completely open source and privacy-friendly, it plays every multimedia file and streams. It notably plays MKV, MP4, MPEG, MPEG-2, MPEG-4, DivX, MOV, WMV, QuickTime, WebM, FLAC, MP3, Ogg/Vorbis files, BluRays, DVDs, VCDs, podcasts, and multimedia streams from various network sources. It supports subtitles, closed captions and is translated in numerous languages.
snap-id: RT9mcUhVsRYrDLG8qnvGiy26NKvv6Qkd
channels:
  stable:    3.0.7                       2019-06-07 (1049) 212MB -
  candidate: 3.0.8                       2019-12-16 (1397) 212MB -
  beta:      3.0.8-107-g46d459f          2019-12-17 (1399) 212MB -
  edge:      4.0.0-dev-10357-g3977a4f1de 2019-12-17 (1398) 328MB -
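
As a rough sketch of the parallel-install workflow mentioned above (parallel instances are still an experimental snapd feature, so the exact steps may change; the instance name vlc_edge is arbitrary):

sudo snap set system experimental.parallel-instances=true
sudo snap install vlc
sudo snap install vlc_edge --edge

Each instance keeps its own data, so the stable and edge builds can be tested side by side.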

Summary

Small things can sometimes go a long way. In this regard, the snap command-line search functionality does offer some useful extras that people might not necessarily be aware of, or discover right away. Our goal is to make the overall snap experience as streamlined as possible, so if you have any clever ideas on this topic, please join our forum for a discussion.

Photo by Christer Ehrling on Unsplash.

over 4 years ago

Podcast Ubuntu Portugal: Ep 71 – 2020 from Planet Ubuntu

Episode 71 – 2020. For us it is the last episode of 2019, for you the first of 2020: this is the episode in which we tallied the predictions we made a year ago and launched ourselves into the prediction contest for 2020. You know the drill: listen, comment and share!
Support
This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.
You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like.
If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.
Attribution and licences
The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).
This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

over 4 years ago

Ubuntu Blog: OpenStack vs VMware: Bringing costs down from Planet Ubuntu

Moving to OpenStack from VMware can significantly reduce the TCO associated with the initial roll-out and ongoing maintenance of your cloud infrastructure. An OpenStack vs VMware economic analysis shows that, under certain circumstances, it is possible to bring the costs down by an entire order of magnitude. This requires choosing an OpenStack distribution which can be maintained economically. An example of such a distribution is Canonical’s Charmed OpenStack.

We recently published a webinar and a whitepaper presenting the outcomes of our analysis of the cost savings resulting from migrating from VMware to OpenStack. You can refer to those materials, or you can just read through this blog to capture the most important information. Let’s start by highlighting the differences between OpenStack and VMware and elaborate on how they influence the costs associated with each.

OpenStack vs VMware: Economics comparison

VMware provides its virtualisation platform under the vRealize Suite. The entire platform is proprietary and owned by VMware. vRealize is designed to run on special-purpose hardware, such as blade servers and storage arrays. Its architecture is centralised, meaning that control services run on their own dedicated nodes while compute, network and storage resources are provided independently by other nodes. VMware vRealize Suite is available in three different variants, allowing access to certain services only to those users who are willing to pay more.


OpenStack, in turn, is an open-source project hosted by the OpenStack Foundation. It is a fully functional cloud platform which organisations can use to implement private and public clouds. Contrary to VMware, OpenStack is designed to run on regular hardware and supports so-called hyper-converged architecture, in which all nodes are the same and provide control, compute, network and storage services. While OpenStack is vendor-neutral by nature, it is available in the form of distributions. Canonical created its Charmed OpenStack distribution with economics in mind, ensuring cost savings when migrating from VMware.

So where are those cost savings exactly coming from?

Licensing

The adoption process starts with a software purchase. Since vRealize is proprietary software, licensing costs apply, and they are pretty significant. For example, the PLU (Portable License Unit) for vRealize Advanced costs $6,445. Moreover, as VMware uses a per-CPU pricing model, you need as many PLUs as there are CPUs in your cluster. So if your physical servers have four, eight or more CPUs, the licensing costs can inflate very quickly.
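
As a purely illustrative calculation (the cluster size is hypothetical, the list price is the one quoted above): a 50-node cluster with four CPUs per node requires 200 PLUs, so licensing alone comes to 200 × $6,445 ≈ $1.29 million, before any hardware, support or operations costs.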

OpenStack, for its part, is open-source software. It is available under the Apache License, which means that it can be used free of charge. Even when it is deployed as one of the available distributions, vendors do not have to add any licensing costs on top, and this applies to Charmed OpenStack too. Canonical provides its own OpenStack distribution at no cost, while encouraging its customers to buy consulting, support and managed services.

Hardware and architecture

Another differentiator is the hardware and architecture used by each. VMware vRealize is designed to run on special-purpose hardware, such as blade servers and storage arrays. Such hardware is usually more expensive than regular hardware. This drives the initial costs up, but may also influence OpEx over time, as hardware has to be refreshed on a regular basis. Moreover, as vRealize’s architecture is centralised, dedicated hardware has to be purchased to host control, compute/network and storage services, even if those machines would not be fully utilised.

OpenStack, however, is designed to run on regular hardware. Furthermore, OpenStack supports hyper-converged architecture, meaning that control, compute, network and storage services are distributed across all nodes in the cluster. As a result, all physical machines in the cluster can share the same hardware specification. This helps to lower hardware acquisition costs and ensures optimal utilisation of resources.

Consulting and operations

Although organisations can perform the initial deployment of both VMware and OpenStack on their own, the complexity usually makes consulting services necessary. VMware provides consulting services at a fixed price of $400,000. However, a successful deployment is just the beginning of the journey: organisations have to maintain the entire platform on a daily basis. As VMware does not offer managed services for vRealize, its customers have to hire and train dedicated staff. This makes operational costs unpredictable and hard to evaluate.

Canonical offers consulting services at a more reasonable price. The Private Cloud Build package provides hardware guidance, access to the reference architecture and a two-week delivery. The price varies from $75,000 to $150,000 depending on the complexity of your environment. And, as OpenStack operations tend to be challenging, Canonical offers fully managed services for OpenStack at $4,275 per physical server per year. The managed services offering includes support and is the only cost that customers have to pay on a regular basis post-deployment.

Support

Proper support services are an essential component of every production environment. Organisations have to upgrade the entire platform when new releases become available, patch it against security vulnerabilities, and so on. VMware applies the same pricing model to support services as to licenses: the more CPUs there are in your cluster, the more you pay. This again drives TCO up, even if your physical servers remain underloaded.

Canonical, on the other hand, applies a per-node model to support services for Charmed OpenStack. This makes OpEx more predictable and allows you to benefit from advances in computing, using ever more powerful physical servers while paying the same amount for OpenStack support. Support services for Charmed OpenStack are available under the UA-I (Ubuntu Advantage for Infrastructure) package. In its most comprehensive variant, Advanced, they cost $1,500 per physical server per year.
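
To make the per-node model concrete, take the same hypothetical 50-node cluster used above: support under UA-I Advanced costs 50 × $1,500 = $75,000 per year, and that figure stays flat regardless of how many CPUs each server carries.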

OpenStack vs VMware: Conclusions

Due to the open-source nature of OpenStack, the costs associated with its initial roll-out and ongoing operations are usually lower compared to VMware. By applying a per-node support model, using hyper-converged architecture and offering consulting, support and managed services at a reasonable price, Canonical’s Charmed OpenStack distribution goes one step further: it helps to bring the costs down by an entire order of magnitude.

To learn more about Canonical’s solution for OpenStack, visit our website.

Read the whitepaper: From VMware to Charmed OpenStack

Watch the webinar: From VMware to Charmed OpenStack

over 4 years ago

Sean Davis: Catfish 1.4.12 Released from Planet Ubuntu

Welcome to 2020! Let's ring in the new year with a brand new Catfish release.

What's New

Wayland Support

Catfish 1.4.12 adds support for running on Wayland. Before now, there were some X-specific dependencies related to handling display sizes. These have now been resolved, and Catfish should run smoothly and consistently everywhere.

[Image: Catfish 1.4.12 on Wayland on Ubuntu 19.10]

Dialog Improvements

All dialogs now utilize client-side decorations (CSD) and are modal. The main window will continue to respect the window layout setting introduced in the 1.4.10 release. I also applied a number of fixes to the new Preferences and Search Index dialogs, so they should behave more consistently and work well with keyboard navigation.

[Image: The new dialogs are more streamlined and standardized.]

Release Process Updates

I've improved the release process to make it easier for maintainers and to ensure builds are free of temporary files. This helps ensure a faster delivery to package maintainers, and therefore to distributions.

Translation Updates

Albanian, Catalan, Chinese (China), Chinese (Taiwan), Czech, Danish, Dutch, French, Galician, German, Italian, Japanese, Norwegian Bokmål, Russian, Serbian, Spanish, Turkish

Downloads

Source tarball

$ md5sum catfish-1.4.12.tar.bz2
9aad6a0bc695ec8793d4294880974cb2

$ sha1sum catfish-1.4.12.tar.bz2
4e78e291a2f17c85122a85049bdc837b49afdd66

$ sha256sum catfish-1.4.12.tar.bz2
c3fb30e02b217752aa493b49769be1a5fc2adde70b22aef381e6c67d5227134a

Catfish 1.4.12 will be included in Xubuntu 20.04 "Focal Fossa", available in April.
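
If you want to verify a downloaded tarball against the published SHA-256 sum above, one quick check (assuming the file sits in the current directory) is:

$ echo "c3fb30e02b217752aa493b49769be1a5fc2adde70b22aef381e6c67d5227134a  catfish-1.4.12.tar.bz2" | sha256sum -c -
catfish-1.4.12.tar.bz2: OK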

over 4 years ago

The Fridge: Ubuntu Weekly Newsletter Issue 611 from Planet Ubuntu


Welcome to the Ubuntu Weekly Newsletter, Issue 611 for the week of December 22 – 28, 2019. The full version of this issue is available here.

In this issue we cover:

Ubuntu Stats
Hot in Support
LoCo Events
Core Channel Operator Appointments, December 2019
Canonical News
In the Blogosphere
Featured Audio and Video
Meeting Reports
Upcoming Meetings and Events
And much more!

The Ubuntu Weekly Newsletter is brought to you by:

Krytarik
Bashing-om
Chris Guiver
Wild Man
And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

over 4 years ago

Eric Hammond: Running AWS CLI Commands Across All Accounts In An AWS Organization from Planet Ubuntu

by generating a temporary IAM STS session with MFA then assuming
cross-account IAM roles

I recently had the need to run some AWS commands across all AWS
accounts in my AWS Organization. This was a bit more difficult to
accomplish cleanly than I had assumed it might be, so I present the
steps here for me to find when I search the Internet for it in the
future.

You are also welcome to try out this approach, though if your account
structure doesn’t match mine, it might require some tweaking.

Assumptions And Background

(Almost) all of my AWS accounts are in a single AWS Organization. This
allows me to ask the Organization for the list of account ids.

I have a role named “admin” in each of my AWS accounts. It has a lot
of power to do things. The default cross-account admin role name for
accounts created in AWS Organizations is “OrganizationAccountAccessRole”.

I start with an IAM principal (IAM user or IAM role) that the aws-cli
can access through a “source profile”. This principal has the power to
assume the “admin” role in other AWS accounts. In fact, that principal
has almost no other permissions.

I require MFA whenever a cross-account IAM role is assumed.

You can read about how I set up AWS accounts here, including the above
configuration:

Creating AWS Accounts From The Command Line With AWS Organizations

I use and love the aws-cli and bash. You should, too, especially if
you want to use the instructions in this guide.

I jump through some hoops in this article to make sure that AWS
credentials never appear in command lines, in the shell history, or in
files, and are not passed as environment variables to processes that
don’t need them (no export).

Setup

For convenience, we can define some bash functions that will improve
clarity when we want to run commands in AWS accounts. These freely use
bash variables to pass information between functions.

The aws-session-init function obtains temporary session credentials
using MFA (optional). These are used to generate temporary assume-role
credentials for each account without having to re-enter an MFA token
for each account. This function will accept optional MFA serial number
and source profile name. This is run once.

aws-session-init() {
  # Sets: source_access_key_id source_secret_access_key source_session_token
  local source_profile=${1:-${AWS_SESSION_SOURCE_PROFILE:?source profile must be specified}}
  local mfa_serial=${2:-$AWS_SESSION_MFA_SERIAL}
  local token_code=
  local mfa_options=
  if [ -n "$mfa_serial" ]; then
    read -s -p "Enter MFA code for $mfa_serial: " token_code
    echo
    mfa_options="--serial-number $mfa_serial --token-code $token_code"
  fi
  read -r source_access_key_id \
          source_secret_access_key \
          source_session_token \
    <<<$(aws sts get-session-token \
           --profile $source_profile \
           $mfa_options \
           --output text \
           --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]')
  test -n "$source_access_key_id" && return 0 || return 1
}

The aws-session-set function obtains temporary assume-role
credentials for the specified AWS account and IAM role. This is run
once for each account before commands are run in that account.

aws-session-set() {
  # Sets: aws_access_key_id aws_secret_access_key aws_session_token
  local account=$1
  local role=${2:-$AWS_SESSION_ROLE}
  local name=${3:-aws-session-access}
  read -r aws_access_key_id \
          aws_secret_access_key \
          aws_session_token \
    <<<$(AWS_ACCESS_KEY_ID=$source_access_key_id \
         AWS_SECRET_ACCESS_KEY=$source_secret_access_key \
         AWS_SESSION_TOKEN=$source_session_token \
         aws sts assume-role \
           --role-arn arn:aws:iam::$account:role/$role \
           --role-session-name "$name" \
           --output text \
           --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]')
  test -n "$aws_access_key_id" && return 0 || return 1
}

The aws-session-run function runs a provided command, passing in AWS
credentials in environment variables for that process to use. Use this
function to prefix each command that needs to run in the currently set
AWS account/role.

aws-session-run() {
  AWS_ACCESS_KEY_ID=$aws_access_key_id \
  AWS_SECRET_ACCESS_KEY=$aws_secret_access_key \
  AWS_SESSION_TOKEN=$aws_session_token \
    "$@"
}

The aws-session-cleanup function should be run once at the end, to
make sure that no AWS credentials are left lying around in bash
variables.

aws-session-cleanup() {
  unset source_access_key_id source_secret_access_key source_session_token
  unset aws_access_key_id aws_secret_access_key aws_session_token
}

Running aws-cli Commands In Multiple AWS Accounts

After you have defined the above bash functions in your current
shell, here’s an example for how to use them to run aws-cli commands
across AWS accounts.

As mentioned in the assumptions, I have a role named “admin” in each
account. If your role names are less consistent, you’ll need to do
extra work to automate commands.

role="admin" # Yours might be called "OrganizationAccountAccessRole"

This command gets all of the account ids in the AWS Organization. You
can use whatever accounts and roles you wish, as long as you are
allowed to assume-role into them from the source profile.

accounts=$(aws organizations list-accounts \
  --output text \
  --query 'Accounts[].[JoinedTimestamp,Status,Id,Email,Name]' |
  grep ACTIVE |
  sort |
  cut -f3) # just the ids
echo "$accounts"

Run the initialization function, specifying the aws-cli source profile
for assuming roles, and the MFA device serial number or ARN. These are
the same values as you would use for source_profile and mfa_serial
in the aws-cli config file for a profile that assumes an IAM
role. Your “source_profile” is probably “default”. If you
don’t use MFA for assuming a cross-account IAM role, then you may
leave MFA serial empty.
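
For reference, an assume-role profile in the aws-cli config file typically looks something like the snippet below; the profile name, role name and account ids are placeholders, with the role matching the "admin" example used in this article:

[profile admin-target-account]
role_arn = arn:aws:iam::TARGET_ACCOUNT_ID:role/admin
source_profile = default
mfa_serial = arn:aws:iam::YOUR_ACCOUNT_ID:mfa/YOUR_USER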

source_profile=default # The "source_profile" in your aws-cli config
mfa_serial=arn:aws:iam::YOUR_ACCOUNTID:mfa/YOUR_USER # Your "mfa_serial"

aws-session-init $source_profile $mfa_serial

Now, let’s iterate through the AWS accounts, running simple AWS CLI
commands in each account. This example will output each AWS account id
followed by the list of S3 buckets in that account.

for account in $accounts; do
  # Set up temporary assume-role credentials for an account/role
  # Skip to next account if there was an error.
  aws-session-set $account $role || continue

  # Sample command 1: Get the current account id (should match)
  this_account=$(aws-session-run \
    aws sts get-caller-identity \
      --output text \
      --query 'Account')
  echo "Account: $account ($this_account)"

  # Sample command 2: List the S3 buckets in the account
  aws-session-run aws s3 ls
done

Wrap up by clearing out the bash variables holding temporary
credentials.

aws-session-cleanup

Note: The credentials used by this approach are all temporary and use
the default expiration. If any expire before you complete your tasks,
you may need to adjust some of the commands and limits in your
accounts.
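
If your tasks run long, one knob worth knowing about is the --duration-seconds
option accepted by both STS calls. The values below are illustrative only and
are capped by your account settings and by the role's maximum session duration;
adapting the two functions above to pass them through is left to the reader.

# Longer-lived session token (the call made by aws-session-init)
aws sts get-session-token \
  --profile default \
  --duration-seconds 43200 \
  --output text \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'

# Longer assumed-role session (the call made by aws-session-set)
aws sts assume-role \
  --role-arn arn:aws:iam::ACCOUNT_ID:role/admin \
  --role-session-name aws-session-access \
  --duration-seconds 7200 \
  --output text \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'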

Credits

Thanks to my role model, Jennine Townsend, the above code
uses a special bash syntax to set the AWS environment variables for
the aws-cli commands without an export, which would have made the
sensitive environment variables available to other commands we might
need to run. I guess nothing makes you as (justifiably) paranoid as
deep sysadmin experience.

Jennine also wrote code that demonstrates the same approach of STS
get-session-token with MFA followed by STS assume-role for multiple
roles, but I never quite understood what she was trying to explain to
me until I tried to accomplish the same result. Now I see the light.

GitHub Repo

For my convenience, I’ve added the above functions into a GitHub repo,
so I can easily add them to my $HOME/.bashrc and use them in my
regular work.

https://github.com/alestic/aws-cli-multi-account-sessions

Perhaps you may find it convenient as well. The README provides
instructions for how I set it up, but again, your environment may need
tailoring.
Original article and comments: https://alestic.com/2019/12/aws-cli-across-organization-accounts/

over 4 years ago

Full Circle Magazine: Full Circle Weekly News #160 from Planet Ubuntu

Zorin OS 15.1 is Released
https://zoringroup.com/blog/2019/12/12/zorin-os-15-1-is-released-a-better-way-to-work-learn-and-play/

Firefox 71 is Now Available for All Supported Ubuntu Releases
https://news.softpedia.com/news/mozilla-firefox-71-is-now-available-for-all-supported-ubuntu-linux-releases-528537.shtml

KDE’s December 2019 Apps Update
https://kde.org/announcements/releases/2019-12-apps-update/

Oracle Virtualbox 6.1 Now Available
https://blogs.oracle.com/virtualization/oracle-vm-virtualbox-61-now-available

Microsoft Teams is Now Available for Linux
https://www.ostechnix.com/microsoft-teams-is-now-officially-available-for-linux/

DXVK to Enter Maintenance Mode
https://www.linuxuprising.com/2019/12/dxvk-to-enter-maintenance-mode-because.html

Credits:
Ubuntu “Complete” sound: Canonical
Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

over 4 years ago

Rhonda D'Vine: Puberty from Planet Ubuntu

I was musing about whether to write about this publicly. For the first time in all these years of writing pretty personal stuff about my feelings, my way of becoming more honest with myself and a more authentic person, I wasn't sure whether letting you in on this was a good idea.

You see, people have used information from my personal blog in the past, and tried to use it against me. Needless to say they failed with it, and it only showed their true face. So why does it feel different this time?

Thing is, I'm in the midst of my second puberty, and the hormones are kicking in in complete hardcore mode. And it doesn't help at all that there is trans antagonist crap from the past and also from the present popping up left and right at a pace and a concentrated amount that is hard to swallow on its own without the puberty.

Yes, I used to be able to take those things in a much more stable state. But every. Single. One. Of. These. Issues is draining all the energy out of me. And even though I'm aware that I'm not the only one trying to fix all of those, even though for some spots I'm the only one doing the work, it's easier said than done that I don't have to fix the world, when the areas involved mean the world to me. They are areas that support me in so many ways. They are places that I need. And on top of that, the hormones are multiplying the energy drain of all of this.

So ... I know it's not that common. I know you are not used to a grown-up person going through puberty. But for god's sake, don't make it harder than it has to be. I know it's hard to deal with a 46-year-old teenager, so to speak; I'm just trying to survive in this world of systematic oppression of trans people.

It would be nice to go for a week without having to cry your eyes out because another hostile event happened that directly affects your existence. The existence of trans lives isn't a matter of different opinions or different points of view, so don't treat it like that if you want me to believe that you are a person capable of empathy and basic respect.

Sidenote: Finishing this post at this year's #36c3 is quite interesting because of the conference title: Resource Exhaustion. Oh, the irony.


over 4 years ago

Riccardo Padovani: My year on HackerOne from Planet Ubuntu

Last year, totally by chance, I found a security issue on Facebook: I reported it, and it was fixed quite fast. In 2018 I also found a security issue in Gitlab, so I signed up to HackerOne and reported it as well. That first experience with Gitlab was far from ideal, but after that first report I started reporting more, and Gitlab has improved its program a lot.

2019

Since June 2019, when I opened my first report of the year, I have reported 27 security vulnerabilities: 4 have been marked as duplicates, 3 as informative, 2 as not applicable, 9 have been resolved, and 9 are currently confirmed with a fix in progress. All 27 vulnerabilities were reported to Gitlab.

Especially in October and November, I had a lot of fun testing the ElasticSearch integration in Gitlab. Two of the issues I found on this topic have already been disclosed:

Group search leaks private MRs, code, commits
Group search with Elastic search enable leaks unrelated data

Why just Gitlab?

I have an amazing day job as a Solutions Architect at Nextbit that I love. I am not interested in becoming a full-time security researcher, but I am having fun dedicating a few hours every month to looking for security vulnerabilities.

However, since I don’t want it to be a job, I focus on a product I know very well, also because I sometimes contribute to it and use it daily.

I also tried targeting some programs I didn’t know anything about, but I got bored quite fast: to find an interesting vulnerability you need to spend quite some time learning how the system works, and how to exploit it.

Last but not least, Gitlab nowadays manages its HackerOne program in a very cool way: they are very responsive, kind, and I like that they are very transparent! You can read a lot about how their security team works in their handbook.

Can you teach me?

Since I have shared a lot of the disclosed reports on Twitter, some people have come and asked me to teach them how to get started in the bug bounty world. Unfortunately, I don’t have any useful suggestions: I haven’t studied any specific resource, and all the issues I reported this year come from a deep knowledge of Gitlab, and from what I know thanks to my day job.
There are definitely more interesting people to follow on Twitter; just check some common hashtags, such as TogetherWeHitHarder.

Gitlab’s Contest

I am writing this blog post from my new keyboard: a custom-made WASD VP3, generously donated by Gitlab after I won a contest marking the first year of their public program on HackerOne. I won the best written report category, and it was a complete surprise; I am not a native English speaker, 5 years ago my English was a monstrosity (if you want to have some fun, just go read my old blog posts), and to this day I still think it is quite poor, as you can read here.

Indeed, if you have any suggestions on how to improve this text, please write to me!

Congratulations to Gitlab for their first year on HackerOne, and keep up the good work! Your program rocks, and in the last months you improved a lot!

HackerOne Clear

HackerOne has started a new, invitation-only program called HackerOne Clear, in which they vet all researchers. I was invited and thought about accepting the invitation. However, the scope of the data that has to be shared in order to be vetted is definitely too wide, and to be honest I am surprised that so many people accepted. HackerOne doesn’t perform the check itself, but delegates it to a third party, and this third-party company asks for a lot of things.


T&Cs for joining HackerOne Clear ask to hand over a lot of personal data. I totally don't feel comfortable in doing so, and I wonder why so many people, that should be very aware of the importance of privacy, accepted.


I totally understand the need of background checks, and I’d be more than happy to provide my criminal record. It wouldn’t be the first time I am vetted, and I am quite sure it wouldn’t be the last.

More than the criminal record, I am puzzled by these requirements:

Financial history, including credit history, bankruptcy and financial judgments;
Employment or volunteering history, including fiduciary or directorship responsibilities;
Gap activities, including travel;
Health information, including drug tests;
Identity, including identifying numbers and identity documents;

Not only is the scope definitely too wide, but all this data will also be stored and processed outside the EU:
“Personal information will be stored in the United States, Canada and Ireland. Personal information will be processed in the United States, Canada, the United Kingdom, India and the Philippines.”

As a European citizen who wants to protect his privacy, I cannot accept such conditions. I have written to HackerOne asking why they need such a wide scope of data, and they replied that, since it is their partner that actually collects the information, there is nothing they can do. I really hope HackerOne will require less data in the future, preserving the privacy of their researchers.

2020

Over the last few days I’ve thought a lot about what I want to do with bug bounties in the future, and in 2020 I will continue as I have done in the last months: assessing Gitlab, dedicating no more than a few hours a month. I don’t feel ready to step up my game at the moment. I have a lot of other interests I want to pursue in 2020 (travelling, learning German, improving my cooking skills), so I will not prioritize bug bounties for the time being.

That’s all for today, and also for 2019! It has been a lot of fun, and I wish you all a great 2020!
For any comments, feedback, or criticism, write to me on Twitter (@rpadovani93)
or drop an email at riccardo@rpadovani.com.

Ciao,
R.

Updates

29th December 2019: added a paragraph about having asked HackerOne why they need such a wide scope of personal data.

over 4 years ago

Podcast Ubuntu Portugal: Ep 70 – WikiCon Portugal from Planet Ubuntu

Episode 70 – WikiCon Portugal. We are counting down to the start of 2020, and right afterwards comes WikiCon Portugal 2020. To learn everything about the conference organised by Wikimedia Portugal, we talked with Gonçalo Themudo (aka GoEThe), president of Wikimedia Portugal. You know the drill: listen, comment and share!

https://apps.nextcloud.com/apps/passwords
https://dev.to/selrond/tips-to-use-vscode-more-efficiently-3h6p
https://fontlibrary.org/
https://fosdem.org
https://github.com/PixelsCamp/talks
https://meta.wikimedia.org/wiki/WikiCon_Portugal
https://pixels.camp/
https://snapcraft.io/remmina
https://snapcraft.io/teams-for-linux
https://snapcraft.io/unofficial-webapp-office
https://teams.microsoft.com/downloads
https://ubuntu.com/blog/the-ubuntu-20-04-lts-pre-release-survey

Support
This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.
You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like.
If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.
Attribution and licences
The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).
This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

over 4 years ago

Kubuntu General News: Kubuntu Focus Laptop Christmas Unboxing from Planet Ubuntu

Christmas is here, and the first review units of the Kubuntu Focus have begun arriving, with reviewers eager to get their hands on them.

Our own Kubuntu Councillor and Community Manager, Rick Timmis, who has been leading the project on the Kubuntu side, provides us with a festive sneak preview and unboxing experience.

We are expecting the pre-order site to be available in mid-January 2020 and the first consumer units to ship at the beginning of February 2020.

We will keep you posted.

over 4 years ago

Full Circle Magazine: Full Circle Weekly News #159 from Planet Ubuntu

elementaryOS 5.1 Hera Released
https://blog.elementary.io/introducing-elementary-os-5-1-hera/

Linux Mint 19.3 Beta Available
https://blog.linuxmint.com/?p=3816

Debian 11 “Bullseye” Alpha 1 Released
https://lists.debian.org/debian-devel-announce/2019/12/msg00001.html

Ubuntu Cinnamon Remix 19.10 Released
https://discourse.ubuntu.com/t/ubuntu-cinnamon-remix-19-10-eoan-ermine-released/13579

Canonical Introduces Ubuntu AWS Rolling Kernel
https://ubuntu.com/blog/introducing-the-ubuntu-aws-rolling-kernel

Canonical Announces Ubuntu Pro for AWS
https://ubuntu.com/blog/canonical-announces-ubuntu-pro-for-amazon-web-services

Purism Announces the Librem 5 USA
https://puri.sm/posts/librem-5-usa/

Thunderbird 68.3.0 Released
https://www.thunderbird.net/en-US/thunderbird/68.3.0/releasenotes/

Tails 4.1 Released
https://news.softpedia.com/news/tails-4-1-anonymous-os-released-with-latest-tor-browser-linux-kernel-5-3-9-528437.shtml

Credits:
Ubuntu “Complete” sound: Canonical
Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

over 4 years ago

Stephen Michael Kellat: Early 2019 Summation & 2020 Predictions from Planet Ubuntu

I had an opportunity to listen to a year-end wrap up in the
most
recent episode of Late Night Linux while at the gym. I
encourage you, the reader, to listen to their summation. Mr.
Ressington noted that an upcoming episode would deal with reviewing
predictions for the year.
In light of that, I took a look back at the blog. In retrospect,
I apparently did not make any predictions for 2019. The
year for me was noted in When Standoffs Happen
as starting with the longest shutdown of the federal government in
the history of the USA. I wrote the linked post on Christmas Eve
last year and had no clue I would end up working without pay for
part of that 35 day crisis. After that crisis ended we wound up
effectively moving from one further crisis to another at work until
my sudden resignation at the start of October. The job was eating
me alive. Blog posts would reflect that as time went by and a
former Mastodon account would show my decline over time.
I’m not sure how to feel about the fact that my old slot apparently was not
filled and that people followed me in departing. Will the last one out
of the Post of Duty turn out the lights?
As to what happened during my year a significant chunk is a
story that can’t be told. That was the job. Significant bits of
life in the past year for me are scattered across case files that I
hope to never, ever see again that are held by multiple agencies.
It most simply can be explained in this piece of Inspirobot.me
output: "Believe in chaos. Prepare for tears."
Frankly, I don’t think anybody could have seen the events of my
2019 coming. For three quarters of the year I was not acting but
rather reacting as were most people around me. I’ve been trying to
turn things around but that has been slow going. I’ve thankfully
gotten back to doing some contributions in different
Ubuntu community areas. I had missed doing that and the job-related
restrictions I had been working under kept me away for too long.
Apparently I've been present on AskUbuntu
longer than I had thought, for example.
In short, depending on your perspective 2019 was either a great
year of growth or a nightmare you’re thankful to escape. Normally
you don’t want to live life on the nightmare side all that much. I
look forward to 2020 as a time to rebuild and make things
better.
All that being said I should roll onwards to predictions. My
predictions for 2020 include:

There will be a “scorched earth” presidential campaign in the
United States without a clear winner.
The 20.04 LTS will reach new records for downloads and
installations across all flavours.
Ubuntu Core will become flight-qualified to run a lunar lander
robot. It won’t be an American robot, though.
One of the flavours will have a proof of concept installable
desktop image where everything is a snap. Redditors will not
rejoice, though.
The Ubuntu Podcast goes
on a brief hiatus in favor of further episodes of 8 Bit Versus.
I will finish the hard sci-fi story I am working on and get it
in shape to submit somewhere.
Erie Looking Productions will pick up an additional paying
client.
There will be a safe design for a Raspberry Pi 4 laptop and I
will switch to that as a "daily driver".

And now for non sequitur theater… For those seeking a movie to
watch between Christmas and New Year's Eve, due to the paucity of
good television programming, I recommend Invasion of
the Neptune Men, which can be found on the Internet Archive. The
version there is not the version covered by
Mystery Science Theater 3000, though. Other films to watch
from the archives, especially if you're still reeling in shock from
the horror that is the film version of Cats, can be
found by visiting https://archive.org/details/moviesandfilms.

over 4 years ago

Benjamin Mako Hill: Reflections on Janet Fulk and Peter Monge from Planet Ubuntu

In May 2019, my research group was invited to give short remarks on the impact of Janet Fulk and Peter Monge at the International Communication Association‘s annual meeting as part of a session called “Igniting a TON (Technology, Organizing, and Networks) of Insights: Recognizing the Contributions of Janet Fulk and Peter Monge in Shaping the Future of Communication Research.”

[Video: Mako Hill @ Janet Fulk and Peter Monge Celebration at ICA 2019, on YouTube]

I gave a five-minute talk on Janet and Peter’s impact on the work of the Community Data Science Collective by unpacking some of the cryptic acronyms on the CDSC-UW lab’s whiteboard as well as explaining that our group has a home in the academic field of communication, in no small part, because of the pioneering scholarship of Janet and Peter. You can view the talk in WebM or on YouTube.

[This blog post was first published on the Community Data Science Collective blog.]

over 4 years ago

Balint Reczey: Introducing show-motd, Message Of The Day for WSL and container shells from Planet Ubuntu


People logging in to Ubuntu systems via SSH or on the virtual terminals are familiar with the Message Of The Day greeter, which contains useful URLs and important system information, including the number of updates that need to be installed manually.

However, when starting an Ubuntu container or an Ubuntu terminal on WSL, you enter a shell directly, which is far less welcoming and also hides whether there are software updates waiting to be installed:

user@host:~$ lxc shell bionic-container
root@bionic-container:~#

To make containers and the WSL shell friendlier to new users and more informative to experts, it would be nice to show the MOTD there too, and this is exactly what the show-motd package does. The message is printed only once a day, in the first interactive shell started, to provide up-to-date information without becoming annoying. The package is now present in Ubuntu 19.10, and WSL users already get it installed when running apt upgrade. Please give it a try and tell us what you think!
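
If you want to try it on an Ubuntu 19.10 machine, container or WSL instance today, a minimal sketch (package availability on other releases may vary) is:

sudo apt update
sudo apt install show-motd
bash -l    # start a new login shell; the MOTD is printed at most once per day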

Bug reports and feature requests are welcome, and if the package proves to be useful it will be backported to the current LTS releases!

over 4 years ago