From the Canyon Edge -- :-Dustin

Friday, February 16, 2018

10 Amazing Years of Ubuntu and Canonical

February 2008, Canonical's office in Lexington, MA
10 years ago today, I joined Canonical, as part of the very earliest incarnation of the Ubuntu Server Team!

And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.

Marking the occasion, I've reflected on much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)...  Maybe one day, my daughters will read this and think their daddy was a real geek :-)

1. update-motd / motd.ubuntu.com (September 2008)

Throughout the history of UNIX, the "message of the day" was always manually edited and updated by the local system administrator.  Until Ubuntu's message-of-the-day.  In fact, I received an email from Dennis Ritchie and Jon "maddog" Hall, confirming this, in April 2010.  This started as a feature request for the Landscape team, but has turned out to be tremendously useful and informative to all Ubuntu users.  Just last year, we launched motd.ubuntu.com, which provides even more dynamic information about important security vulnerabilities and general news from the Ubuntu ecosystem.  Mathias Gug helped me with the design and publication.
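
If you've never poked at the mechanism, it's delightfully simple: executable scripts in /etc/update-motd.d/ are run, in order, at login, and their combined output becomes your message of the day.  Here's a minimal sketch of a custom snippet (the filename and contents are just an example):

$ printf '#!/bin/sh\necho "Kernel: $(uname -r)"\n' | sudo tee /etc/update-motd.d/99-example
$ sudo chmod +x /etc/update-motd.d/99-example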

2. manpages.ubuntu.com (September 2008)

This was the first public open source project I worked on, in my spare time at Canonical.  I had a local copy of the Ubuntu archive and I was thinking about what sorts of automated jobs I could run on it.  So I wrote some scripts that extracted the manpages out of each package, formatted them as HTML, and published them into a structured set of web directories.  10 years later, it's still up and running, serving thousands of hits per day.  In fact, this was one of the ways we were able to shrink the Ubuntu minimal image, by removing the manpages, since they're readable online.  Colin Watson and Kees Cook helped me with the initial implementation, and Matthew Nuzum helped with the CSS and Ubuntu theme in the HTML.
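
The core idea is easy to reproduce at home.  Something along these lines (hypothetical package filename and paths -- not the actual manpages.ubuntu.com scripts) extracts a manpage from a .deb and renders it to HTML:

$ dpkg-deb -x byobu_5.133-0ubuntu1_all.deb extracted/
$ zcat extracted/usr/share/man/man1/byobu.1.gz | groff -mandoc -Thtml > byobu.1.html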

3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu".  Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit.  Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others), shared our tips and tricks from our own ~/.screenrc configuration files.  In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux.  Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.

4. eCryptfs / Ubuntu Encrypted Home Directories (October 2009)

I was familiar with eCryptfs from its inception in 2005, in the IBM Linux Technology Center's Security Team, sitting next to Michael Halcrow, who was the original author.  When I moved to Canonical, I helped Michael maintain the userspace portion of eCryptfs (ecryptfs-utils) and I shepherded it into Ubuntu.  eCryptfs was super powerful, with hundreds of options and supported configurations, but all of that proved far too difficult for users at large.  So I set out to simplify it drastically, with an opinionated set of basic defaults.  I started with a simple command to mount a "Private" directory inside of your home directory, where you could stash your secrets.  A few months later, on a long flight to Paris, I managed to hack a new PAM module, pam_ecryptfs.c, that actually encrypted your entire home directory!  This was pretty revolutionary at the time -- predating Apple's FileVault or Microsoft's Bitlocker, even.  Today, tens of millions of Ubuntu users have used eCryptfs to secure their personal data.  I worked closely with Tyler Hicks, Kees Cook, Jamie Strandboge, Michael Halcrow, Colin Watson, and Martin Pitt on this project over the years.
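
If you'd like to try that simplified "Private" directory setup yourself, the ecryptfs-utils package ships a helper for exactly this; run it as your normal user and answer its prompts:

$ sudo apt install ecryptfs-utils
$ ecryptfs-setup-private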

5. ssh-import-id (March 2010)

With the explosion of virtual machines and cloud instances in 2009 / 2010, I found myself constantly copying public SSH keys around.  Moreover, given Canonical's globally distributed nature, I also regularly found myself asking someone for their public SSH keys, so that I could give them access to an instance, perhaps for some pair programming or assistance debugging.  As it turns out, everyone I worked with had a Launchpad.net account and had their public SSH keys available there.  So I created (at first) a simple shell script to securely fetch and install those keys.  Scott Moser helped clean up that earliest implementation.  Eventually, I met Casey Marshall, who helped rewrite it entirely in Python.  Later, we contacted the folks at GitHub and asked them to expose users' public SSH keys via their API -- which they did!  Now, ssh-import-id is integrated directly into Ubuntu's new subiquity installer and used by many other tools, such as cloud-init and MAAS.
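
Usage is about as simple as it gets -- fetch and authorize someone's public keys straight from Launchpad or GitHub (the usernames below are placeholders):

$ ssh-import-id lp:your-launchpad-id
$ ssh-import-id gh:your-github-id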

6. Orchestra / MAAS (August 2011)

In 2009, Canonical purchased 5 Dell laptops, which was the Ubuntu Server team's first "cloud".  These laptops were our very first lab for deploying and testing Eucalyptus clouds.  I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data.  That said -- it worked!  As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason.  I was mentoring a new hire at Canonical, named Andres Rodriguez, at the time; he took over our part-time hacks, and we worked together to create the Orchestra project.  Orchestra itself was short-lived.  It was severely limited by Cobbler as a foundation technology, so the Orchestra project was killed by Canonical.  But, six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by...Andres Rodriguez :-)  MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.

7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to secure Ubuntu at large from a class of attacks stemming from insufficient entropy at first boot.  This was especially problematic in virtual machine instances, in public clouds, where every instance is, by design, exactly identical to many others.  Moreover, the first thing such an instance does, is usually ... generate SSH keys.  This isn't hypothetical -- it's quite real.  Raspberry Pis running Debian were deemed susceptible to this exact problem in November 2015.  So I designed and implemented a client (a shell script that runs at boot and fetches some entropy from one or more sources), as well as a high-performance server (written in Golang).  The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com.  Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.
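
You can run the client by hand on any Ubuntu system and then peek at the kernel's entropy estimate (the exact numbers will vary; on a machine that has already seeded itself at first boot, see pollinate(1) for the option to force a re-seed):

$ sudo pollinate
$ cat /proc/sys/kernel/random/entropy_avail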

8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house.  I had 3, each running Ubuntu and attached to a TV around the house as a media player (music, videos, pictures, etc.).  In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines.  Mark immediately asked, "How many of those could you fit into a suitcase?"  Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box.  I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014.  Each chassis held 10 little Intel NUC servers, and a switch, as well as a few peripherals.  Effectively, it's a small data center that travels.  We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014.  We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos at the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase".  I worked extensively with Dan Poler on this project, over the course of a couple of years.

9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016.  Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot.  On the computer screen behind the main character, I clearly spotted Hollywood!  Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim.  But now, it's been used in Saturday Night Live, NBC Dateline News, and an Experian TV commercial!  Even Jess Frazelle created a Docker container for it!

10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun, and user experience here in Ubuntu.  How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"?  Or your container "adfxkenw"?  We set out to make something a little more user-friendly with our "petnames".  Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember.  I curated and created libraries that are easily usable in Shell, Golang, and Python.  With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!

Cheers,
:-Dustin

Tuesday, October 4, 2016

A Parody within a Parody

My wife, Kimberly, and I watch Saturday Night Live religiously.  As in, we probably haven't missed a single episode since we started dating more than 12 years ago.  And in fact, we both watched our fair share of SNL before we had even met, going back to our teenage years.

We were catching up on SNL's 42nd season premiere late this past Sunday night, after putting the kids to bed, when I was excited to see a hilarious sketch/parody of Mr. Robot.

If SNL is my oldest TV favorite, Mr. Robot is certainly my newest!  Just wrapping its 2nd season, it's a brilliantly written, flawlessly acted, impeccably set techno drama series on USA.  I'm completely smitten, and the story seems to be just getting started!

Okay, so Kim and I are watching a hilarious sketch where Leslie Jones asks Elliot to track down the person who recently hacked her social media accounts.  And, as always, I take note of what's going on in the background on the computer screen.  It's just something I do.  I love to try and spot the app, the OS, the version, identify the Linux kernel oops, etc., of anything on any computer screen on TV.

At about the 1:32 mark of the SNL/Mr. Robot skit, there was something unmistakable on the left computer, just over actor Pete Davidson's right shoulder.  Merely a fraction of a second, and I recognized it instantly!  A dark terminal, split into a dozen sections.  A light grey border, with a thicker grey highlighting one split.  The green drip of text from The Matrix in one of the splits.  A flashing, bouncing yellow audio wave in another.  An instant rearrangement of all of those windows each second.

It was Byobu and Hollywood!  I knew it.  Kim didn't believe me at first, until I proved it ;-)

A couple of years ago, after seeing a 007 film in the theater, I created a bit of silliness -- a joke of a program that could turn any Linux terminal into a James Bond caliber hacker screen.  The result is a package called hollywood, which any Ubuntu user can install and run by simply typing:

$ sudo apt install hollywood
$ hollywood

And a few months ago, Hollywood found its way into an NBC News piece that took itself perhaps a little too seriously, as it drummed up a bit of fear around "Ransomware".

But, far more appropriately, I'm absolutely delighted to see another NBC program -- Saturday Night Live -- using Hollywood exactly as intended -- for parody!

Enjoy a few screenshots below...








Cheers!
:-Dustin

Monday, June 20, 2016

HOWTO: Classic, apt-based Ubuntu 16.04 LTS Server on the rpi2!

Classic Ubuntu 16.04 LTS, on an rpi2
Hopefully by now you're well aware of Ubuntu Core -- the snappiest way to run Ubuntu on a Raspberry Pi...

But have you ever wanted to run classic (apt/deb) Ubuntu Server on a Raspberry Pi 2?


Well, you're in luck!  Follow these instructions, and you'll be up and running in minutes!

First, download the released image (214MB):

$ wget http://cdimage.ubuntu.com/releases/16.04/release/ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz

Next, uncompress it:

$ unxz *.xz

Now, write it to a microSD card using dd.  I'm using the card reader built into my Thinkpad, but you might use a USB adapter.  You'll need to figure out the block device of your card, and perhaps unmount it, if necessary.  Then, you can write the image to disk:

$ sudo dd if=ubuntu-16.04-preinstalled-server-armhf+raspi2.img of=/dev/mmcblk0 bs=32M
$ sync
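
If you're not sure which block device is your card, a quick sanity check looks something like this (the device names here are examples; yours may differ):

$ lsblk
$ sudo umount /dev/mmcblk0p1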

Now, pop it into your rpi2, and power it on.

If it's connected to a USB keyboard and an HDMI monitor, then you'll land in a console where you can log in with the username 'ubuntu' and password 'ubuntu', and then you'll be forced to choose a new password.

Assuming it has an Ethernet connection, it should DHCP.  You might need to check your router to determine what IP address it got; alternatively, since it sets its hostname to 'ubuntu', you may be able to resolve it by name.  In my case, I could automatically resolve it on my network, at ubuntu.canyonedge, with IP address 10.0.0.113, and ssh to it:

$ ssh ubuntu@ubuntu.canyonedge

Again, you can login on first boot with password 'ubuntu' and you're required to choose a new password.

On first boot, it will automatically resize the filesystem to use all of the available space on the MicroSD card -- much nicer than having to resize2fs yourself in some offline mode!

Now, you're off and running.  Have fun with sudo, apt, byobu, lxd, docker, and everything else you'd expect to find on a classic Ubuntu server ;-)  Heck, you'll even find the snap command, where you'll be able to install snap packages, right on top of your classic Ubuntu Server!  And if that doesn't just bake your noodle...

Cheers,
Dustin

Thursday, June 16, 2016

sudo purge-old-kernels: Recover some disk space!


If you have long-running Ubuntu systems (server or desktop), and you keep those systems up to date, you will, over time, accumulate a lot of Linux kernels.

Canonical's Ubuntu Kernel Team regularly (about once a month) provides kernel updates, patching security issues, fixing bugs, and enabling new hardware drivers.  The apt utility tries its best to remove unneeded packages, from time to time, but kernels are a little tricky, due to their version strings.

Over time, you might find your /boot directory filled with vmlinuz kernels, consuming a considerable amount of disk space.  Sometimes, sudo apt-get autoremove will clean these up.  However, it doesn't always work very well (especially if you're running a development release of Ubuntu that hasn't been released yet).

What's the safest way to clean these up?  (This question has been asked numerous times, on the UbuntuForums.org and AskUbuntu.com.)

The definitive answer is:

sudo purge-old-kernels

You'll already have the purge-old-kernels command in Ubuntu 16.04 LTS (and later), as part of the byobu package.  In earlier releases of Ubuntu, you might need to install the bikeshed package, which you can grab directly from Launchpad or GitHub.

Here, for example, I'll save almost 700MB of disk space, by removing kernels I no longer need:

$ sudo purge-old-kernels 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  linux-headers-4.4.0-10-generic* linux-headers-4.4.0-12-generic* linux-headers-4.4.0-15-generic* linux-headers-4.4.0-16-generic*
  linux-headers-4.4.0-17-generic* linux-headers-4.4.0-18-generic* linux-image-4.4.0-10-generic* linux-image-4.4.0-12-generic*
  linux-image-4.4.0-15-generic* linux-image-4.4.0-16-generic* linux-image-4.4.0-17-generic* linux-image-4.4.0-18-generic*
  linux-image-extra-4.4.0-17-generic* linux-image-extra-4.4.0-18-generic*
0 upgraded, 0 newly installed, 14 to remove and 196 not upgraded.
After this operation, 696 MB disk space will be freed.
Do you want to continue? [Y/n] 

From the manpage:
purge-old-kernels will remove old kernel and header packages from the system, freeing disk space. It will never remove the currently running kernel. By default, it will keep at least the latest 2 kernels, but the user can override that value using the --keep parameter. Any additional parameters will be passed directly to apt-get(8).
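
For example, to keep the 3 most recent kernels and let apt-get run quietly and non-interactively, something like this should do the trick:

$ sudo purge-old-kernels --keep 3 -qy
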
Full disclosure: I'm the author of the purge-old-kernels utility.

Enjoy,
:-Dustin

Monday, May 16, 2016

Byobu Hollywood Melodrama and Ubuntu Featured on NBCNews!

A few years ago, I wrote and released a fun little script that would carve up an Ubuntu Byobu terminal into a bunch of splits, running various random command line status utilities.

100% complete technical mumbo jumbo.  The goal was to turn your terminal into something that belongs in a Hollywood hacker film.

I am proud to see it included in this NBCNews piece about "Ransomware".  All of the screenshots demonstrating what a "hacker" is doing with a system are straight from Ubuntu, Byobu, and Hollywood!







Here are a few screenshots, and the video is embedded below...



Enjoy!
:-Dustin

Thursday, April 14, 2016

Docker 1.10 with Fan Networking in Ubuntu 16.04, for Every Architecture!


I'm thrilled to introduce Docker 1.10.3, available on every Ubuntu architecture, for Ubuntu 16.04 LTS, and announce the General Availability of Ubuntu Fan Networking!

That's Ubuntu Docker binaries and Ubuntu Docker images for:
  • armhf (rpi2, et al. IoT devices)
  • arm64 (Cavium, et al. servers)
  • i686 (does anyone seriously still run 32-bit intel servers?)
  • amd64 (most servers and clouds under the sun)
  • ppc64el (OpenPower and IBM POWER8 machine learning super servers)
  • s390x (IBM System Z LinuxOne super uptime mainframes)
That's Docker-Docker-Docker-Docker-Docker-Docker, from the smallest Raspberry Pis to the biggest IBM mainframes in the world today!  Never more than one 'sudo apt install docker.io' command away.

Moreover, we now have Docker running inside of LXD!  Containers all the way down.  Application containers (e.g. Docker), inside of Machine containers (e.g. LXD), inside of Virtual Machines (e.g. KVM), inside of a public or private cloud (e.g. Azure, OpenStack), running on bare metal (take your pick).

Let's have a look at launching a Docker application container inside of a LXD machine container:

kirkland@x250:~⟫ lxc launch ubuntu-daily:x -p default -p docker
Creating magical-damion
Starting magical-damion
kirkland@x250:~⟫ lxc list | grep RUNNING
| magical-damion | RUNNING | 10.16.4.52 (eth0) |      | PERSISTENT | 0         |
kirkland@x250:~⟫ lxc exec magical-damion bash
root@magical-damion:~# apt update >/dev/null 2>&1 ; apt install -y docker.io >/dev/null 2>&1 
root@magical-damion:~# docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
759d6771041e: Pull complete 
8836b825667b: Pull complete 
c2f5e51744e6: Pull complete 
a3ed95caeb02: Pull complete 
Digest: sha256:b4dbab2d8029edddfe494f42183de20b7e2e871a424ff16ffe7b15a31f102536
Status: Downloaded newer image for ubuntu:latest
root@0577bd7d5db1:/# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02  
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1296 (1.2 KB)  TX bytes:648 (648.0 B)


Oh, and let's talk about networking...  We're also pleased to announce the general availability of Ubuntu Fan networking -- specially designed to connect all of your Docker containers spread across your network.  Ubuntu's Fan networking feature is an easy way to make every Docker container on your local network easily addressable by every other Docker host and container on the same network.  It's high performance, super simple, utterly deterministic, and we've tested it on every major public cloud as well as OpenStack and our private networks.

Simply installing Ubuntu's Docker package will also install the ubuntu-fan package, which provides an interactive setup script, fanatic, should you choose to join the Fan.  Just run 'sudo fanatic' and answer the questions.  You can revert your Fan networking setup at any time with 'sudo fanatic deconfigure'.

kirkland@x250:~$ sudo fanatic 
Welcome to the fanatic fan networking wizard.  This will help you set
up an example fan network and optionally configure docker and/or LXD to
use this network.  See fanatic(1) for more details.
Configure fan underlay (hit return to accept, or specify alternative) [10.0.0.0/16]: 
Configure fan overlay (hit return to accept, or specify alternative) [250.0.0.0/8]: 
Create LXD networking for underlay:10.0.0.0/16 overlay:250.0.0.0/8 [Yn]: n
Create docker networking for underlay:10.0.0.0/16 overlay:250.0.0.0/8 [Yn]: Y
Test docker networking for underlay:10.0.0.45/16 overlay:250.0.0.0/8
(NOTE: potentially triggers large image downloads) [Yn]: Y
local docker test: creating test container ...
34710d2c9a856f4cd7d8aa10011d4d2b3d893d1c3551a870bdb9258b8f583246
test master: ping test (250.0.45.0) ...
test slave: ping test (250.0.45.1) ...
test master: ping test ... PASS
test master: short data test (250.0.45.1 -> 250.0.45.0) ...
test slave: ping test ... PASS
test slave: short data test (250.0.45.0 -> 250.0.45.1) ...
test master: short data ... PASS
test slave: short data ... PASS
test slave: long data test (250.0.45.0 -> 250.0.45.1) ...
test master: long data test (250.0.45.1 -> 250.0.45.0) ...
test master: long data ... PASS
test slave: long data ... PASS
local docker test: destroying test container ...
fanatic-test
fanatic-test
local docker test: test complete PASS (master=0 slave=0)
This host IP address: 10.0.0.45

I've run 'sudo fanatic' here on a couple of machines on my network -- x250 (10.0.0.45) and masterbr (10.0.0.8), and now I'm going to launch a Docker container on each of those two machines, obtain each IP address on the Fan (250.x.y.z), install iperf, and test the connectivity and bandwidth between each of them (on my gigabit home network).  You'll see that we'll get 900 Mbps+ of throughput:

kirkland@x250:~⟫ sudo docker run -it ubuntu bash
root@c22cf0d8e1f7:/#  apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@c22cf0d8e1f7:/#  ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:fa:00:2d:00  
          inet addr:250.0.45.0  Bcast:0.0.0.0  Mask:255.0.0.0
          inet6 addr: fe80::42:faff:fe00:2d00/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:6423 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4120 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:22065202 (22.0 MB)  TX bytes:227225 (227.2 KB)

root@c22cf0d8e1f7:/# iperf -c 250.0.8.0
multicast ttl failed: Invalid argument
------------------------------------------------------------
Client connecting to 250.0.8.0, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 250.0.45.0 port 54274 connected with 250.0.8.0 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.05 GBytes   902 Mbits/sec

And the second machine:
kirkland@masterbr:~⟫ sudo docker run -it ubuntu bash
root@effc8fe2513d:/#  apt update >/dev/null 2>&1 ; apt install -y iperf >/dev/null 2>&1
root@effc8fe2513d:/#  ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:fa:00:08:00  
          inet addr:250.0.8.0  Bcast:0.0.0.0  Mask:255.0.0.0
          inet6 addr: fe80::42:faff:fe00:800/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:7659 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3433 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:22131852 (22.1 MB)  TX bytes:189875 (189.8 KB)

root@effc8fe2513d:/# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 250.0.8.0 port 5001 connected with 250.0.45.0 port 54274
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.05 GBytes   899 Mbits/sec


Finally, let's have another long hard look at the image from the top of this post.  Download it in full resolution to study very carefully what's happening here, because it's pretty [redacted] amazing!


Here, we have a Byobu session, split into 6 panes (Shift-F2 5 times, Shift-F8 6 times).  In each pane, we have an SSH session to Ubuntu 16.04 LTS servers spread across 6 different architectures -- armhf, arm64, i686, amd64, ppc64el, and s390x.  I used the Shift-F9 key to simultaneously run the same commands in each and every window.  Here are the commands I ran:

clear
lxc launch ubuntu-daily:x -p default -p docker
lxc list | grep RUNNING
uname -a
dpkg -l docker.io | grep docker.io
sudo docker images | grep -m1 ubuntu
sudo docker run -it ubuntu bash
 apt update >/dev/null 2>&1 ; apt install -y net-tools >/dev/null 2>&1
 ifconfig eth0
 exit

That's right.  We just launched Ubuntu LXD containers, as well as Docker containers against every Ubuntu 16.04 LTS architecture.  How's that for Ubuntu everywhere!?!

Ubuntu 16.04 LTS will be one hell of a release!

:-Dustin

Monday, August 10, 2015

The Golden Ratio calculated to a record 2 trillion digits, on Ubuntu, in the Cloud!

The Golden Ratio is one of the oldest and most visible irrational numbers known to humanity.  Pi is perhaps more famous, but the Golden Ratio is found in more of our art, architecture, and culture throughout human history.

I think of the Golden Ratio as sort of "Pi in 1 dimension".  Whereas Pi is the ratio of a circle's circumference to its diameter, the Golden Ratio is the ratio of a whole to its larger part, when that ratio equals the ratio of the larger part to the smaller part.
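
Written out (a standard formulation, with a as the larger part and b as the smaller):

\[ \frac{a+b}{a} \;=\; \frac{a}{b} \;=\; \varphi \;\approx\; 1.6180339887\ldots \]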

Visually, this diagram from Wikipedia helps explain it:


We find the Golden Ratio in the architecture of antiquity, from the Egyptians to the Greeks to the Romans, right up to the Renaissance and even modern times.



While the bases of the pyramids are squares, the Golden Ratio can be observed in the base and the hypotenuse of a basic triangular cross section, like so:


The floor plan of the Parthenon has a width/depth ratio matching the Golden Ratio...



For the first 300 years of printing, nearly all books were printed on pages whose length to width ratio matched that of the Golden Ratio.

Leonardo da Vinci used the Golden Ratio throughout his works.  I'm told that his Vitruvian Man displays the Golden Ratio...


From school, you probably remember that the Golden Ratio is approximately 1.6 (and change).
There's a strong chance that your computer or laptop monitor has a 16:10 aspect ratio.  Does 1280x800 or 1680x1050 sound familiar?



That ~1.6 number is only an approximation, of course.  The Golden Ratio is in fact an irrational number and can be calculated to much greater precision through several different representations, including:
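
Two of the standard representations, for instance, are the closed form and the all-ones continued fraction:

\[ \varphi \;=\; \frac{1+\sqrt{5}}{2} \;=\; 1 + \cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cdots}}} \]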


You can plug that number into your computer's calculator and crank out a dozen or so significant digits.


However, if you want to go much farther than that, Alexander Yee has created a program called y-cruncher, which has been used to calculate most of the famous constants to world record precision.  (Sorry free software readers of this blog -- y-cruncher is not open source code...)

I came across y-cruncher a few weeks ago when I was working on the mprime post, which demonstrated how easily you can put any workload into a Docker container and then package it as both a Juju Charm and an Ubuntu Snap.  While I opted to use mprime in that post, I saved y-cruncher for this one :-)

Also, while doing some network benchmark testing of Fan networking among Docker containers, I experimented for the first time with some of Amazon's biggest instances, which have dedicated 10 Gbps network links.  While I had a couple of those instances up, I did some small-scale benchmarking of y-cruncher.

Presently, none of the mathematical constant records are even remotely approachable with CPU and Memory alone.  All of them require multiple terabytes of disk, which act as a sort of swap space for temporary files, as bits are moved in and out of memory while the CPU crunches.  As such, approaching these records is overwhelmingly I/O bound -- not CPU or Memory bound, as you might imagine.

After a variety of tests, I settled on the AWS d2.2xlarge instance size as the most affordable instance size to break the previous Golden Ratio record (1 trillion digits, by Alexander Yee on his gaming PC in 2010).  I say "affordable", in that I could have cracked that record "2x faster" with a d2.4xlarge or d2.8xlarge, however, I would have paid much more (4x) for the total instance hours.  This was purely an economic decision :-)


Let's geek out on technical specifications for a second...  So what's in a d2.2xlarge?
  • 8x Intel Xeon CPUs (E5-2676 v3 @ 2.4GHz)
  • 60GB of Memory
  • 6x 2TB HDDs
First, I arranged all 6 of those 2TB disks into a RAID0 with mdadm, and formatted it with xfs (which performed better than ext4 or btrfs in my cursory tests).

$ sudo mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=6 /dev/xvd?
$ sudo mkfs.xfs /dev/md0
$ sudo mount /dev/md0 /mnt
$ df -h /mnt
/dev/md0         11T   34M   11T   1% /mnt

Here's a brief look at raw read performance with hdparm:

$ sudo hdparm -tT /dev/md0
 Timing cached reads:   21126 MB in  2.00 seconds = 10576.60 MB/sec
 Timing buffered disk reads: 1784 MB in  3.00 seconds = 593.88 MB/sec

The beauty here of RAID0 is that each of the 6 disks can be used to read and/or write simultaneously, perfectly in parallel.  600 MB/sec is a pretty quick read rate by any measure!  In fact, when I tested the d2.8xlarge, I put all 24x 2TB disks into the same RAID0 and saw nearly 2.4 GB/sec read performance across that 48TB array!

With /dev/md0 mounted on /mnt and writable by my ubuntu user, I kicked off y-cruncher with these parameters:

Program Version:       0.6.8 Build 9461 (Linux - x64 AVX2 ~ Airi)
Constant:              Golden Ratio
Algorithm:             Newton's Method
Decimal Digits:        2,000,000,000,000
Hexadecimal Digits:    1,660,964,047,444
Threading Mode:        Thread Spawn (1 Thread/Task)  ? / 8
Computation Mode:      Swap Mode
Working Memory:        61,342,174,048 bytes  ( 57.1 GiB )
Logical Disk Usage:    8,851,913,469,608 bytes  ( 8.05 TiB )

Byobu was very handy here, being able to track in the bottom status bar my CPU load, memory usage, disk usage, and disk I/O, as well as connecting and disconnecting from the running session multiple times over the 4 days of running.


And approximately 79 hours later, it finished successfully!

Start Date:            Thu Jul 16 03:54:11 2015
End Date:              Sun Jul 19 11:14:52 2015

Computation Time:      221548.583 seconds
Total Time:            285640.965 seconds

CPU Utilization:           315.469 %
Multi-core Efficiency:     39.434 %

Last Digits:
5027026274 0209627284 1999836114 2950866539 8538613661  :  1,999,999,999,950
2578388470 9290671113 7339871816 2353911433 7831736127  :  2,000,000,000,000

Amazingly, another person (whom I don't know), named Ron Watkins, performed the exact same computation and published his results within 24 hours, on July 22nd/23rd.  As such, Ron and I are "sharing" credit for the Golden Ratio record.


Now, let's talk about the economics here, which I think are the most interesting part of this post.

Look at the above chart of records, which is published on the y-cruncher page; the vast majority of those records have been calculated on physical PCs -- most of them seem to be gaming PCs running Windows.

What's different about my approach is that I used Linux in the Cloud -- specifically Ubuntu in AWS.  I paid hourly (actually, my employer, Canonical, reimbursed me for that expense, thanks!)  It took right at 160 hours to run the initial calculation (79 hours) as well as the verification calculation (81 hours), at the current rate of $1.38/hour for a d2.2xlarge, which is a grand total of $220!

$220 is a small fraction of the cost of 6x 2TB disks, 60 GB of memory, or 8 Xeon cores, not to mention the electricity and cooling required to run a system of this size (~750W) for 160 hours.

If we say the first trillion digits were already known from the previous record, that comes out to approximately 4.5 billion record-digits per dollar, and 12.5 billion record-digits per hour!
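
For the curious, the back-of-the-envelope arithmetic behind those figures (counting only the second trillion digits as "new", and using the ~79-hour record computation itself):

\[ \frac{10^{12}\ \text{digits}}{\$220} \;\approx\; 4.5\times10^{9}\ \text{digits per dollar}, \qquad \frac{10^{12}\ \text{digits}}{79\ \text{hours}} \;\approx\; 12.5\times10^{9}\ \text{digits per hour} \]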

Hopefully you find this as fascinating as I do!

Cheers,
:-Dustin

Monday, July 20, 2015

Prime Time: Docker, Juju, and Snappy Ubuntu Core


As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves.  2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.

Many computer operations, such as public-key cryptography, depend entirely on prime numbers.  In fact, RSA encryption, invented in 1978, performs encryption and decryption modulo the product of two very large primes.  The security of asymmetric encryption is tightly coupled with the computational difficulty of factoring large numbers.  I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and distribute the update spikes.

Around 300 BC, Euclid proved that there are infinitely many prime numbers.  But the Prime Number Theorem (proven in the 19th century) says that the probability that a given number is prime is inversely proportional to its number of digits.  That means that larger prime numbers are notoriously harder to find, and it gets harder as they get bigger!
What's the largest known prime number in the world?

Well, it has 17,425,170 decimal digits!  If you wanted to print it out, size 11 font, it would take 6,543 pages -- or 14 reams of paper!

That number is actually one less than a very large power of 2: 2^57,885,161-1.  It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.

Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P-1.  Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.


Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.

mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or the Protein Folding Project.  It runs in the background, consuming spare resources, working on its little piece of the problem.  mprime is open source code, and also distributed as a statically compiled binary.  And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.


Docker Container

First, let's build the Docker container, which will serve as our fundamental building block.  You'll first need to download the mprime tarball from here.  Extract it, and the directory structure should look a little like this (or you can browse it here):

├── license.txt
├── local.txt
├── mprime
├── prime.log
├── prime.txt
├── readme.txt
├── results.txt
├── stress.txt
├── undoc.txt
├── whatsnew.txt
└── worktodo.txt

And then, create a Dockerfile that copies the files we need into the image.  Here's our example.

FROM ubuntu
MAINTAINER Dustin Kirkland email@example.com
COPY ./mprime /opt/mprime/
COPY ./license.txt /opt/mprime/
COPY ./prime.txt /opt/mprime/
COPY ./readme.txt /opt/mprime/
COPY ./stress.txt /opt/mprime/
COPY ./undoc.txt /opt/mprime/
COPY ./whatsnew.txt /opt/mprime/
CMD ["/opt/mprime/mprime", "-w/opt/mprime/"]

Now, build your Docker image with:

$ sudo docker build .
Sending build context to Docker daemon 36.02 MB
Sending build context to Docker daemon 
Step 0 : FROM ubuntu
...
Successfully built de2e817b195f

Then publish the image to Dockerhub.

$ sudo docker push kirkland/mprime

You can see that image, which I've publicly shared here: https://registry.hub.docker.com/u/kirkland/mprime/



Now you can run this image anywhere you can run Docker.

$ sudo docker run -d kirkland/mprime

And verify that it's running:

$ sudo docker ps
CONTAINER ID        IMAGE                    COMMAND                CREATED             STATUS              PORTS               NAMES
c9233f626c85        kirkland/mprime:latest   "/opt/mprime/mprime    24 seconds ago      Up 23 seconds                           furious_pike     

Juju Charm

So now, let's create a Juju Charm that uses this Docker container.  Actually, we're going to create a subordinate charm.  Subordinate services in Juju are often monitoring and logging services, things that run alongside primary services.  Something like mprime is a good example of something that could be a subordinate service, attached to one or many other services in a Juju model.

Our directory structure for the charm looks like this (or you can browse it here):

└── trusty
    └── mprime
        ├── config.yaml
        ├── copyright
        ├── hooks
        │   ├── config-changed
        │   ├── install
        │   ├── juju-info-relation-changed
        │   ├── juju-info-relation-departed
        │   ├── juju-info-relation-joined
        │   ├── start
        │   ├── stop
        │   └── upgrade-charm
        ├── icon.png
        ├── icon.svg
        ├── metadata.yaml
        ├── README.md
        └── revision
3 directories, 15 files

The three key files we should look at here are metadata.yaml, hooks/install and hooks/start:

$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland 
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See http://www.mersenne.org/ for more details!
tags:
  - misc
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container

And:

$ cat hooks/install
#!/bin/bash
apt-get install -y docker.io
docker pull kirkland/mprime

And:

$ cat hooks/start
#!/bin/bash
service docker restart
docker run -d kirkland/mprime

Now, we can add the mprime service to any other running Juju service.  As an example here, I'll bootstrap an environment, deploy the Apache2 charm, and attach mprime to it.

$ juju bootstrap
$ juju deploy apache2
$ juju deploy cs:~kirkland/mprime
$ juju add-relation apache2 mprime

Looking at our services, we can see everything deployed and running here:

$ juju status
services:
  apache2:
    charm: cs:trusty/apache2-14
    exposed: false
    service-status:
      current: unknown
      since: 20 Jul 2015 11:55:59-05:00
    relations:
      juju-info:
      - mprime
    units:
      apache2/0:
        workload-status:
          current: unknown
          since: 20 Jul 2015 11:55:59-05:00
        agent-status:
          current: idle
          since: 20 Jul 2015 11:56:03-05:00
          version: 1.24.2
        agent-state: started
        agent-version: 1.24.2
        machine: "1"
        public-address: 23.20.147.158
        subordinates:
          mprime/0:
            workload-status:
              current: unknown
              since: 20 Jul 2015 11:58:52-05:00
            agent-status:
              current: idle
              since: 20 Jul 2015 11:58:56-05:00
              version: 1.24.2
            agent-state: started
            agent-version: 1.24.2
            upgrading-from: local:trusty/mprime-1
            public-address: 23.20.147.158
  mprime:
    charm: local:trusty/mprime-1
    exposed: false
    service-status: {}
    relations:
      juju-info:
      - apache2
    subordinate-to:
    - apache2


Snappy Ubuntu Core Snap

Finally, let's build a Snap.  Snaps are applications that run in Ubuntu's transactional, atomic OS, Snappy Ubuntu Core.

We need the simple directory structure below (or you can browse it here):

├── meta
│   ├── icon.png
│   ├── icon.svg
│   ├── package.yaml
│   └── readme.md
└── start.sh
1 directory, 5 files

The package.yaml describes what we're actually building, and what capabilities the service needs.  It looks like this:

name: mprime
vendor: Dustin Kirkland 
architecture: [amd64]
icon: meta/icon.png
version: 28.5-11
frameworks:
  - docker
services:
  - name: mprime
    description: "Search for Mersenne Prime Numbers"
    start: start.sh
    caps:
      - docker_client
      - networking

And the start.sh launches the service via Docker.

#!/bin/sh
PATH=$PATH:/apps/docker/current/bin/
docker rm -v -f mprime
docker run --name mprime -d kirkland/mprime
docker wait mprime

Now, we can build the snap like so:

$ snappy build .
Generated 'mprime_28.5-11_amd64.snap' snap
$ ls -halF *snap
-rw-rw-r-- 1 kirkland kirkland 9.6K Jul 20 12:38 mprime_28.5-11_amd64.snap

First, let's install the Docker framework, upon which we depend:

$ snappy-remote --url ssh://snappy-nuc install docker
=======================================================
Installing docker from the store
Installing docker
Name          Date       Version   Developer 
ubuntu-core   2015-04-23 2         ubuntu    
docker        2015-07-20 1.6.1.002           
webdm         2015-04-23 0.5       sideload  
generic-amd64 2015-04-23 1.1                 
=======================================================

And now, we can install our locally built Snap.
$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
=======================================================
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name          Date       Version   Developer 
ubuntu-core   2015-04-23 2         ubuntu    
docker        2015-07-20 1.6.1.002           
mprime        2015-07-20 28.5-11   sideload  
webdm         2015-04-23 0.5       sideload  
generic-amd64 2015-04-23 1.1                 
=======================================================

Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:

$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
=======================================================
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name          Date       Version   Developer 
ubuntu-core   2015-04-23 2         ubuntu    
docker        2015-07-20 1.6.1.002           
mprime        2015-07-20 28.5-11   kirkland  
webdm         2015-04-23 0.5       sideload  
generic-amd64 2015-04-23 1.1                 
=======================================================

Conclusion

How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime?  Almost certainly never :-)  I want to be clear: that was never the point of this exercise!

Rather, I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap.  And maybe learned something about prime numbers along the way ;-)

Join us in #docker, #juju, and #snappy on irc.freenode.net.

Cheers,
Dustin

Wednesday, December 17, 2014

Hollywood Technodrama -- There's an App for that!



Wargames.  Hackers.  Swordfish.  Superman 3.  Jurassic Park.  GoldenEye.  The Matrix.

You've all seen the high stakes hacking scene, packed with techno-babble and dripping in drama.  And the command and control center with dozens of over-sized monitors, overloaded with scrolling text...

I was stuck on a plane a few weeks back, traveling home from Las Vegas, and the in flight WiFi was down.  I know, I know.  Real world problems.  Suddenly, I had 2 hours on my hands, without access to email, IRC, or any other distractions.

It's at this point I turned to my folder of unfinished ideas, and cherry-picked one that would take just a couple of fun hours to hack.  And I'm pleased to introduce the fruits of that, um, labor -- the hollywood package for Ubuntu :-)  Call it an early Christmas present!  All code is on both Launchpad and Github.


If you're already running Vivid (Ubuntu 15.04) -- I salute you! -- and you can simply:

sudo apt-get install hollywood

If you're on any other version of Ubuntu, you'll need to:

sudo apt-add-repository ppa:hollywood/ppa
sudo apt-get update
sudo apt-get install hollywood

Fire up a terminal, maximize it, open byobu, and run the hollywood command.  Then sit back and soak into the trance...

I recently jumped on the vertical monitor bandwagon, for my secondary display.  It's fantastic for reading and writing code.  It's also hollywood-worthy ;-)


How does all of this work?

For starters, it's all running in a Byobu (tmux) session, which enables us to split a single shell console into a bunch of "panes" or "splits".

The hollywood package depends on a handful of utilities that I found (mostly apt-cache searching the Ubuntu archives for monitors and utilities).  You can find a handful of scripts in /usr/lib/hollywood/.  Each of these is a "driver" for a widget that might run in one of these splits.  And ccze is magical, accepting input on stdin and colorizing the text.

In fact, they're quite easy to write :-)  I'm happy to accept contributions of new driver widgets, as long as you follow a couple of simple rules.  Each widget:
  • Must run as a regular, non-root user
  • Must not eat all available CPU, Disk, or Memory
  • Must not write data
  • Must run indefinitely, until receiving a Ctrl-C
  • Must look hollywood cool!
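
A minimal, entirely hypothetical driver that follows those rules might look something like this (the filename is just an example; it loops a colorized disk usage report through ccze):

$ cat /usr/lib/hollywood/disk-example
#!/bin/sh
# Hypothetical hollywood widget: colorize a disk usage report, forever.
while true; do
        df -h | ccze -A
        sleep 2
done
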
So far, we have widgets that: generate passphrases encoded in NATO phonetic, monitor and render network bandwidth, emulate The Matrix, find and display, with syntax highlighting, source code on the system, show a bunch of error codes, hexdump a bunch of binaries, monitor some processes, render some images to ASCII art, colorize some log files, open random manpages, generate SSH keys and show their random art, stat a bunch of inodes in /proc and /sys and /dev, and show the tree output of some directories.

I also grabbed a copy of the Mission Impossible theme song, licensed under the Creative Commons.  I played it in the Totem music player in Ubuntu, with the Monoscope visual effect, and recorded a screencast with gtk-recordmydesktop.  I then mixed the output .ogv file with the original .mp3 file, and transcoded it to mp4/h264/aac, reducing the audio bitrate to 64k and frame size to 128x96, using this command:
avconv -i missionimpossible.ogv -i MissionImpossibleTheme.mp3 -s 128x96 -b 64k -vcodec libx264 -acodec aac -f mpegts -strict experimental -y mi.mp4

Then, hollywood plays it in one of the splits with mplayer's ascii art video output on the console :-)

DISPLAY= mplayer -vo caca /usr/share/hollywood/mi.mp4

Sound totally cheesy?  Why, yes, it is :-)  That's the point :-)

Oh, and by the way...  If you ever sit down at someone else's Linux PC, and want to freak them out a little, just type:

ubuntu@x230:~⟫ PS1="root@$(hostname):~# "; clear 
root@x230:~# 

And then have fun!
That latter "hack", as well as the entire concept of hollywood, was inspired in part by Kees Cook's awesome talk, in particular his "Useless Hollywood Drama Mode" in his exploit demo.
Happy hacking!
:-Dustin
