Sunday, September 20, 2009

Atlanta LinuxFest the Day After

Wow, I'm still in a daze. Atlanta LinuxFest (ALF) was an incredible event. It was held at the IBM facility in Atlanta on Saturday.

Some background first. My wife Amber (akgraner) was on the planning team, so I watched this event come together from the periphery. I would hear her on planning calls, and I never really understood what it takes to put an event like this together. My hat is off to all involved; it gave me a new appreciation for all the work that goes into community events like this.

This was ALF's 2nd year, and as such it was expected to be small. It was far from that: there were over 600 registrations, and at any given time there were 300+ people at the event. It was planned by 4 people with limited resources, and they pulled off one hell of a LinuxFest.

The speaker lineup was diverse and impressive; there was something for everyone. Additionally, Amber had organized a UbuCon and it was a big hit. They had session topics like "Community Leadership" & "Burnout", and every time I went by the UbuCon area it was packed, with some intense discussions going on.

The Ubuntu Kernel Team took advantage of the event to get some Karmic testing done on the plethora of laptops/notebooks/netbooks that were there. Manjo Iyer from the Kernel Team ran the testing. I don't have the exact numbers on what makes and models were tested (we still have to sift thru the data), but I do believe well over 100 people came through, and some had 3 or 4 machines with them. Lots of bugs were filed. I want to put out a heartfelt "Thank You" to everyone that came by for testing. You guys will make Karmic a far better release.

I gave a talk on Ubuntu, Canonical & the Ubuntu Kernel Team. I had given a talk at the Southeast LinuxFest back in June and received lots of feedback about the content. I was surprised to learn that people wanted to know about Canonical, a bit about its structure and how Ubuntu fits in. So I incorporated lots of that back into the presentation. I also stressed how the Kernel Team is looking to expand its community, and if you want to participate you don't have to be a kernel developer, or even a programmer! We welcome anyone who is willing to test, triage or help us organize. Like any community, we need people with diverse skill sets.

Steve Conklin of the Ubuntu Kernel Team gave a talk called "Debugging the Kernel". This talk originated with another Kernel Team member, Colin King (cking). The talk is basically a collection of all the wild & useful debugging techniques that Colin has come up with over the last few years.

John Johansen & Stefan Bader from the Ubuntu Kernel Team gave a rehash of Greg Kroah-Hartman's "Write a real working Linux driver" tutorial. The tutorial consisted of an Ubuntu USB live stick tricked out with compilers, headers and a git tree. Users would boot the live stick so that they would have a consistent development environment. John then walked them through the basics of git and kernel device drivers, and in the end the users wrote a device driver that would work with a GoTemp USB Thermometer. They had 16 thermometer devices, and in the end the temperature could be read by reading a file in the /sys file system. Each session was full; I just wish we had more devices so that everyone would have had the chance to fully participate, not just watch.
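
For the curious, getting the temperature out at the end of the tutorial was as simple as reading a sysfs attribute. The path below is only a guess at what the finished driver might expose (it depends entirely on how the driver names its attribute), but it gives you the flavor of it:

# hypothetical path -- the real one depends on the driver's attribute name
cat /sys/bus/usb/drivers/gotemp/*/temperature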

Dan Chen of Ubuntu Audio fame gave a great talk on debugging audio. Judging by the size of the crowd in his talk, audio is still an issue with quite a few users.

SUSE, Red Hat & Fedora were all there with booths and talks as well; however, I would say that the mind share at the event went to Ubuntu. You couldn't turn around without hearing the Ubuntu login music or seeing Ubuntu stickers, banners and t-shirts everywhere!

Surprisingly quite a few folks have been running Karmic in some state of Alpha for quite a while!

The day started with a video podcast from Mark Shuttleworth made specifically for the event. Mark was about to announce the name of 10.04 when the video cut to a slide that said "Find out at the UbuCon!"... d'oh! It left everyone hanging for about another hour.

In the UbuCon area they had a monitor set up with people crowded all around waiting for the announcement. They played the whole video from start to finish, and finally, after much anticipation, Mark announced that 10.04 would be called Lucid Lynx. This was quite a departure from other "naming announcements", where he would send out an email or post it on his blog. It was really special that it was announced at a UbuCon at a community event!

There is so much more, but I'll leave that for the other bloggers... I have to catch a plane on my way to LinuxCon & Linux Plumbers in Portland!

~pete

Tuesday, August 18, 2009

Atlanta LinuxFest

Wanted to drop a quick note about ALF...

I'll be speaking at the Atlanta LinuxFest about the Ubuntu kernel. The talk will cover the team and how we develop & maintain the Ubuntu kernel.

In fact there are several Canonical folks talking at ALF.
  • Steve Conklin - Kernel Debugging
  • John Pugh - The Weather Ahead - Clouds
  • Ken VanDine - Ubuntu Desktop Experience
  • Rick Clark - Ubuntu Server
Along with all the speakers noted above, members of the Ubuntu Kernel Team will be conducting a driver writing session. This is based on GregKH's original driver writing presentation. We will be using a USB thermometer as the hardware and the object is to write a working driver that will get the current temperature.

Along with ALF there will be a Ubucon (Ubuntu User Conference). The Ubuntu Kernel Team will be holding an "Is your hardware ready for Karmic" workshop. We will have USB sticks loaded with a testsuite that will exercise most of the new Karmic hardware features like Kernel Mode Setting (KMS), grub2, netbook/notebook hotkeys, web cams, audio and the like. This is non-destructive to the hard disk and will let folks know in advance what the Karmic experience will look like on their hardware. If items fail to work we will file bugs on the spot with the proper debugging info attached.
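
As a rough illustration (this is not the testsuite itself, just the kind of thing it pokes at), here is how you could check from a live session whether KMS kicked in on Intel graphics:

# quick, unofficial check for KMS on Intel graphics
cat /sys/module/i915/parameters/modeset   # 1 means KMS is enabled
dmesg | grep -i drm                       # look for the mode setting messages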

I want to give a huge shout out to Manjo Iyer & Ronald Fader, who have put in lots of time and hard work to make the testing happen, and to John Johansen for putting together the driver writing session! Great work, guys!

More on ALF as it gets closer.

~pete

Split Routing Over 2 DSL Lines

It's been a while since I last blogged. Most of that is due to moving. We moved from Raleigh, NC to a small town in the western part of NC called Union Mills. The good thing is we are on a farm; we have lots of land, space, fresh air... however, the net connection just plain sucks.

I'm 2.3 miles from the CO, so the best I can get with DSL is a 3Mb/384Kb connection thru AT&T. So I called AT&T commercial and was assured I could get 2x bonded ADSL lines that would give me effectively a 6Mb connection. Guess what? They fibbed. In the end I had to settle for 2x ADSL lines, not bonded, and I elected for the non-commercial option since it was far cheaper.

Now came the big question: how to make the best use of 'em. After hitting up Google I found lots of interesting solutions. Most were very complex, routing various protocols out this interface or that... I wanted something that would give me as close to a load-balanced connection as possible.

I'm using an old HP desktop with 3 network cards as my gateway router, running Jaunty 9.04 Server Edition.

Below is what I came up with using ip route and iptables. I've put in bogus IPs, but otherwise it's as close to the real thing as possible.

First I added two routing tables to the /etc/iproute2/rt_tables file:

1 line1
2 line2
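
(If you'd rather do that from a shell than an editor, appending the two entries works just as well; run it as root:)

echo "1 line1" >> /etc/iproute2/rt_tables
echo "2 line2" >> /etc/iproute2/rt_tables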

Then I found a script on the net, used it as a template and hacked it up like so:

#!/bin/bash

# DSL Lines are IF0 & IF1, IF2 is local net.
IF0=eth0
IF1=eth1
IF2=eth2

# IP Addr on Gateway matching interfaces above
IP0=192.168.1.1
IP1=192.168.2.1
IP2=172.31.0.1

# DSL IPs
P0=192.168.1.254
P1=192.168.2.254

# Network addresses
P0_NET=192.168.1.0
P1_NET=192.168.2.0

# Routing table entries
T1=1
T2=2

# Set up routes
ip route add $P0_NET dev $IF0 src $IP0 table $T1
ip route add default via $P0 table $T1
ip route add $P1_NET dev $IF1 src $IP1 table $T2
ip route add default via $P1 table $T2

ip route add $P0_NET dev $IF0 src $IP0
ip route add $P1_NET dev $IF1 src $IP1

# Set up default route to balance between both interfaces
ip route add default scope global nexthop via $P0 dev $IF0 weight 1 \
nexthop via $P1 dev $IF1 weight 1

# Add the rules for the routing tables
ip rule add from $IP0 table $T1
ip rule add from $IP1 table $T2

# Now for the masq bits
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t filter -F
iptables -t filter -X
#iptables -t nat -A POSTROUTING -o $IF0 -j MASQUERADE
#iptables -t nat -A POSTROUTING -o $IF1 -j MASQUERADE
iptables -t nat -I POSTROUTING -s 172.31.0.0/24 -j MASQUERADE

# Turn on ip_forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -A FORWARD -i $IF2 -s 172.31.0.0/24 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Flush the routing cache every four hours (14400 seconds)
echo 14400 > /proc/sys/net/ipv4/route/secret_interval
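
Once the script has run you can sanity check the result with a few commands (the IPs and table names will obviously be your own):

ip rule show                # should list the from-$IP0 / from-$IP1 rules
ip route show table line1   # the per-line routing tables
ip route show table line2
ip route show               # the main table, including the two-nexthop default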

It works like this... each time an outbound connection is established, the gateway decides which interface to route it out of, based on congestion and a few other factors. Over time this should give a rough 50/50 split. A given destination will only ever be reached over one route at a time, because the kernel caches the chosen route.

If I start a download, it will only ever utilize a single line, and another download from the same remote IP will also utilize that line. This is because the kernel has already cached the route. If a 3rd download to a new server were started, it would likely be established over the second line, since the first route is congested and the second is idle.

The route will be flushed 4 hours after the first download completes, and then a new decision will be made on how to contact the original server.
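
If you don't want to wait for the cache to expire, you can force the kernel to make fresh routing decisions right away by flushing the route cache by hand:

ip route flush cache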

I've been using this for a bit now and it seems to do what I need it to do... give me faster response times when I have intensive net operations going on, like Vonage calls & rsyncs to offsite machines.

By no means am I an iptables or routing guru. If anyone has any other suggestions or better ways to do it, I'd love to hear them.

~pete

Thursday, May 28, 2009

Android and the Ubuntu Kernel

More news out of UDS. As many know, Canonical demoed Android running on Ubuntu on the x86 architecture. As was noted, quite a bit did not work because the demo was running a stock Ubuntu kernel. For some background you can read an article on Ars Technica about the demo.

Today we held an open session on Incorporating Android Into the Ubuntu Kernel. It was decided that we would make an Android Enabled Ubuntu kernel available. The kernel will be available on x86 and ARM architectures.

We will be forming up the spec over the next few weeks and I'll keep updating it here as well.

For the curious.... You can find more info on Android here:

Wednesday, May 27, 2009

Daily upstream "crack of the day"

Announcing "crack of the day" kernel builds... Yes, it's true: plain ol' upstream kernel builds of Linus' daily tree. For those of you who feel like testing some new bits, running the bleeding edge or who just like pain, these kernels are for you. We make no guarantee that you'll get one every day, due to the shape of the upstream tree; if it compiles, you'll get it.

You can find 'em here:

http://kernel.ubuntu.com/~kernel-ppa/mainline/daily/
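
Installing one is just a matter of grabbing the .deb files and feeding them to dpkg. The file names below are placeholders; substitute the real ones (matching your architecture) from the directory listing above:

# placeholders -- pick the actual file names from the directory above
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/daily/<date>/linux-image-<version>_<arch>.deb
sudo dpkg -i linux-image-<version>_<arch>.deb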

You can thank Andy Whitcroft, it's all his fault :-)

More of the UDS outcomes for the kernel over the next few days....

Sunday, February 22, 2009

More kernel bits...

As of Jaunty Alpha 5 we have enabled kernel oops reporting on both our normal kernels and our vanilla kernel builds. Thanks to Jef for pointing out we should do it on our vanilla builds as well.

We have also made ext4 available for those users that would like to try it. I need to point out that it is not the default option, but you can get to it thru the installer or convert to ext4 thru the command line. Why is it not the default? At UDS in December, Ted Ts'o recommended that we not make it the default for this release. He felt it was very stable but not yet ready for mass consumption. I have been running it since Alpha 3 and it's been working great for me, however your mileage may vary.
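
For reference, the command-line conversion looks roughly like the sketch below (this follows the upstream ext4 documentation, not an Ubuntu-blessed procedure). Do it with the filesystem unmounted, e.g. from a live CD, keep backups, and substitute your actual partition for the /dev/sdXN placeholder:

# rough sketch of an ext3 -> ext4 conversion; /dev/sdXN is a placeholder
sudo tune2fs -O extents,uninit_bg,dir_index /dev/sdXN
sudo e2fsck -fD /dev/sdXN
# then change that partition's entry in /etc/fstab from ext3 to ext4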

late edit: Amber asked what the difference is between "normal kernels and vanilla kernels". Normal kernels are upstream kernels that Ubuntu patches with code that we need in order to integrate the kernel into the distribution; for example, we add AUFS to make the live CD work. We at Ubuntu call these patches "Sauce". Vanilla kernels are pure, unpatched upstream kernels. As stated in a previous blog post, we provide no support on the vanilla kernels; they are there to assist users who want to test the latest upstream kernel on Ubuntu, and they also help us as Ubuntu kernel developers find where our "sauce" patches might be breaking something.

My wife Amber is still hanging in there with Ubuntu; we are at about two weeks and counting. So far she is doing quite well. She has even found IRC and is in the Ubuntu channels. She has filed a few bugs in Launchpad and is participating in the Jaunty Test Day. The only thing she really seems to be lacking is iMovie/iDVD equivalent applications. Linux has various apps that claim to do the same thing, but none of them have the ease of use and integration. For those who are interested, you can follow her exploits here: http://amber.redvoodoo.org

~pete

Sunday, February 15, 2009

To the hills and some observations

This weekend I took the family to the mountains of western North Carolina. That is where my wife was born and raised, and we will be moving there when the kids get out of school in June.

The weather was a bit crappy today, and that gave my wife lots of time to continue her investigation of Ubuntu. If you want to read it, it's here: http://amber.redvoodoo.org

I find it very enlightening reading it. She has not been asking for my help, and I have to deliberately stay away so I don't volunteer. One thing I found very informative is the Ubuntu help. To be honest I never bothered to read it. Watching her use it, she was grumbling about sudo. That caught my attention so I listened more... "Why do I care what sudo does? Why do I care about a command line?" were some of the statements I heard her utter. The one that really struck me was "on my Mac I *never* use the command line..." Hmmmm, it was at that point I realized we (techies) assume everyone will need to sudo and use a terminal; if we did a better job of designing interfaces they wouldn't need to. In fact you have to hunt for the terminal application on a Mac. We have it in the Accessories menu. Something to think about.

At Red Hat I managed the Base OS group, which dealt primarily with userspace & plumbing, so I never really thought about how to make the desktop better. At Canonical I manage the Kernel Team and again I don't give the desktop much thought. I have been using Linux so long I remember when you had to configure FVWM to launch your applications. Anything that was easier than that has been a big win to me. I just take it for granted that you need to do things differently than Windows & Mac users do. Watching Amber struggle to understand things has given me a whole new appreciation for the work we as a community need to do.

Amber managed to get on Freenode, join #ubuntu-women and join the ubuntu-women mailing list (her first mailing list subscription ever!). The folks in the channel were very patient and supportive of her endeavor with Linux. She is very much enjoying the community aspect of it all.

~pete