Developers Conference 2013

The Developers Conference took place in Brno this past weekend (February 23rd and 24th). #devconf is an annual event organised by the Red Hat office in Brno together with members of the Fedora and JBoss.org communities. The convention primarily targets Linux and JBoss developers and admins, and it covers a range of topics from the free software universe.

This year, there were three main tracks, one of which was focused primarily on JBoss and the remaining two on Linux-related topics, such as the kernel, networking, security, or virtualization/cloud technologies. Apart from that, the conference hosted a number of hackfests and lab sessions for people who wanted to learn something more practical or have a more focused discussion on a certain topic.

Bryn M. Reeves talking at the Developers Conference

I was there this year and it was amazing. According to the numbers posted to Twitter, the conference had at least 500 attendees on Saturday. There were so many great talks that the organisers even had to turn down presenters, simply because there was no room for more. Actually, I don’t think that three tracks will be enough next year.

Saturday

I didn’t get up very early, so for me, Saturday started with a talk from Debarshi Ray about Gnome Online Accounts. Debarshi talked about how the new Gnome 3.8 integrates with various online services, such as Google Docs/Mail, Facebook, Flickr, ownCloud, and others. The integration is a very promising area in my opinion, because these services are used by millions of people. However, there are still some problems that need to be addressed and that are being worked on.

The track in the D3 room continued with Tom Callaway’s talk on Fedora User Experience. Tom explained the design-driven methodology: we should think about the user experience before we code. However, the community around Fedora has been focused on the contributors, who are quite often technical people who like to code first. He presented several mockups they have been working on. The first one is a sort of rich web interface for mailing lists called Hyperkitty. The goal of this project is to improve collaboration in communities. At the moment, there are two groups of users, one preferring mailing lists, the other discussion boards. People from these two groups tend to miss each other. Hyperkitty should also provide karma functionality to help reduce the pollution of big mailing lists by chronically non-constructive people wasting everyone’s time with pointless discussions.

Hyperkitty Design (source: http://blog.linuxgrrl.com/)

The third presentation I saw, again in the D3 room, was Negotiation Theory for Open Source Hackers by Leslie Hawthorn. This was one of the less technical talks, but it was very insightful. Arguing takes up a fair amount of the time and effort technical people put into their work, especially in open source, and it is not always time well spent. The slides from this presentation are available here.

After Leslie’s talk, we moved to the kernel track in D1 to see Lukáš Czerner speak about what is happening in local file systems in the Linux kernel. Lukáš summarized the new features in XFS (the most suitable option for enterprise workloads), ext4 (great for general-purpose use), and btrfs (still not stable). Based on a comparison of the number of commits made during the last year, the most active development is going on in btrfs. Its codebase also grows steadily, while XFS has lost some weight during the last few years as its developers try to remove unnecessary things. He also discussed the challenges file system developers will need to deal with in the future. The rules of the game don’t change that much with SSDs, but PCI-based solid-state drives can be problematic, as the current block layer doesn’t scale that well to storage technologies that fast. A similar increase in speed has already happened in networking, so future development might be inspired by some ideas from that area.

After the file systems update, it was time for my talk about the Linux Network Stack Test project. LNST is a network testing tool designed to simplify testing of real-life scenarios involving more than one computer. It provides building blocks for test development, and it also serves as a test management tool that handles the proper execution of your test cases. Tests created using LNST are entirely automated and completely independent of the underlying network infrastructure, so they can be migrated to a different network without any changes whatsoever. This is important when you want to share them with others.

Slides: You can download the slides I used for the LNST presentation here.

I took a short break from the presentations after that and returned to see Daniel Borkmann with his presentation about zero-copy packet capturing and netsniff-ng. At this point, I started to get really tired, so I certainly didn’t catch everything here. Finally, the kernel track was closed by three lightning talks: Jiří Benc on PTP, Jiří Pírko with an update on what happened in the team driver, and Zdeněk Kabeláč, who closed the room with his talk about LVM2.

If you came early enough to get a keyring at the entrance to the venue, you were in possession of a ticket to the after party, which took place approximately half an hour later at Fléda. The party was, just as the conference itself, awesome. There was beer and food for free, a live band, and most importantly hundreds of like-minded people and colleagues from Red Hat to talk to about Linux :).

Sunday

Sunday was again crammed with amazing talks. This time, I made sure not to oversleep (even though getting up after the party wasn’t easy at all). The very first talk in the morning in D3 was Evolution of Linux Network Management from Pavel Šimerda. Pavlix talked about the NetworkManager project, the things they improved in the 0.9.8 release (which happened just a few days prior to the conference), and what they are focusing on at the moment. NetworkManager is moving from desktops to servers, and it should be used as the primary way of configuring the network in Fedora and also in RHEL. This requires revising various things inside NetworkManager and also implementing additional functionality that is required in the enterprise area. Networking is absolutely crucial on servers, so they plan to test the code very carefully using multiple different methods (one of them might be a set of real-life test scenarios using LNST).

Thomas Woerner continued the networking track in D3 with his presentation of firewalld, a dynamic firewall daemon that provides dynamic functionality on top of the static iptables. The daemon supports network zones, which represent levels of trust for network connections. These might be public, home, work, etc. Firewalld also supports grouping rules into services, which are basically lists of ports that are required for some service to work. This way, you can handle all the rules in a group at once.
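
For illustration, here is roughly how zones and services are managed with the firewall-cmd client (a sketch based on my notes, assuming a recent firewalld):

$ firewall-cmd --get-default-zone
$ firewall-cmd --zone=home --add-service=ssh
$ firewall-cmd --zone=public --add-port=8080/tcp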

The last networking talk in D3 before the Core OS track was given by Thomas Graf. The presentation was focused on Open vSwitch, which is a software implementation of a switch similar to the Linux bridge. However, Open vSwitch is aimed more at the enterprise market. It is designed for virtualized server environments, so it comes with support for things such as OpenFlow and VLAN tagging.

Preparations for What are we breaking now?

Probably the most crowded presentation at the Developers Conference was What are we breaking now?, delivered by Kay Sievers and Lennart Poettering. They discussed several topics that (in their opinion) need fixing. The first one was persistent network interface names. This has been a problem for a long time, because the kernel names devices as it finds them, and the order can change with every other boot. The plan is to use names based on properties of the hardware, such as the position of the card on the bus, instead of just numbering the cards as they are recognised. Other than that, they would like to implement D-Bus in the kernel. There have been a couple of attempts at this in the past, but they all failed. I personally liked the plan they mentioned next: to modify the bootloader (GRUB2) and the kernel-install script to work with drop-in config files when a new kernel is installed, rather than with a convoluted set of self-replicating scripts. Finally, they mentioned app sandboxes that should provide some protection for the user from the actions of potentially malicious third-party applications.

The Core OS track continued after a short break with a great talk from Bryn M. Reeves called Who moved my /usr?? – staying sane in a changing world. This talk was again a little bit lighter on the technical details, but like last year’s presentation from Bryn, it was not only interesting but very entertaining as well. The talk was focused on change. Bryn went through the historic releases of Red Hat Linux, described what happened where, and how the users reacted.

I didn’t actually hear the whole talk that followed in D3, because my stomach was getting pretty unhappy at that time and I went down to get that extremely big hot dog. The leader of the SELinux project, Dan Walsh, talked about and also demonstrated the creation of hundreds of secure application containers with virt-sandbox. The containers are much cheaper than virtualization, but they can provide the same amount of security.

Lennart Poettering had one more talk in the Core OS track, called Systemd Journal. This one was an introduction to the logging facility that will be a part of systemd. He explained the motivation for going down this path and what (in his opinion) the benefits of journald are. In the second part of the presentation, Lennart gave a small demonstration of what can be done with the journalctl tool for reading logs.

The very last talk I attended at this year’s Developers Conference was Log Message Processing, Formatting and Normalizing with Rsyslog from Rainer Gerhards, the main author of the rsyslog project, but I was getting really tired and sleepy again, so unfortunately I wasn’t listening that carefully.

Summary

Long story short, this year’s #devconf was awesome! Lots of interesting talks, labs, and hackfests. If you missed a talk this year, the good news is that all the presentations from the three main rooms were recorded. The videos should soon be available online.

Big thanks go to the main organisers, Radek Vokál and Jiří Eischmann, for making this possible, but also to the many volunteers who were involved in organising it and making sure everything went as planned. For me, the organisation was flawless, as I personally didn’t encounter any difficulties. Man, I’m already looking forward to 2014. See you everyone next year!


Just For Fun – The Biography of Linus Torvalds

Book Cover

Did you know there actually is a book about the creator of the Linux kernel? And that it was published twelve years ago, in 2001? I have used Linux-based operating systems for almost 8 years, and I must admit I had never heard of such a book until three months ago. I ordered it the minute I found out about it. It was a good choice :-)!

Just For Fun: The Story of an Accidental Revolutionary is about the life journey of the father of the Linux kernel, Linus Torvalds, but not only that. It is mostly written as an autobiography, with shorter sections between chapters from his co-author, journalist David Diamond (whose idea it was to write a book about Linus in the first place).

It starts with a short preface, a conversation between the two authors on a road trip Linus, his family, and David went on (their destination being an IKEA store in Los Angeles). During the trip, they talk about the book, and Linus starts to explain a theory he has about the meaning of life — Linus’s Law. The title of the book is in fact based on this theory (which is quite interesting).

Linus starts by describing the memories he has from his childhood in Finland with the same kind of wit you might know from his emails (the first chapter actually starts with: I was an ugly child.). He says he doesn’t remember much of it, except when it comes to the computers he had. He spent a lot of time with his scientist grandfather, who introduced him to programming and let him use his computer. Linus did what any serious nerdy kid does: loved maths, stayed indoors a lot, and programmed as much as he could.

The story continues to the time when Linus got an extremely powerful (for that time) and also extremely expensive 386AT machine. He must have taken out a loan to be able to afford it. The things Linus did back then were basically programming, sleeping, and eating pretzels. And, of course, the occasional snooker.

These were the early times, and they certainly were not as easy as journalists often portray them. Even after the famous release announcement on the Minix newsgroup, Linux was not an instant hit. It took quite a lot of hard work before it got any commercial attention, and even more work before Linux actually gained any commercial success.

Linus got a job at the University of Helsinki that allowed him to focus on developing Linux. He decided he would like to turn Linux into a full-featured operating system (which took a lot more work than he had expected). After some time, he announced the release of Linux 1.0.0. Only this time it was not an email sent to a newsgroup. The usual (and boring) email announcement turned into an event that took place at the university and attracted quite a lot of attention.

In the following chapters, Linus writes about his other early experiences with public speaking and his encounters with journalists coming to his house. Linus explains why he took the job at the Transmeta Corporation and also his decision to relocate from Finland to the US. He also mentions the stock grants he received from Red Hat and various other Linux companies (and why he refused to work for any of them).

Towards the end of the book, there are several essays on open source, intellectual property, copyright, and the future of Linux, presenting his own opinions on these topics. Interestingly enough, Linus mentions that he expects the Linux kernel to power some sort of handheld device for reading email in the future (the smartphones that are common today were certainly not yet that ubiquitous in the early 2000s).

This book was quite an interesting read for me. It is funny and casual, so it reads quite fast. And if you actually use Linux or even participate in the community that surrounds the Linux kernel, I certainly recommend picking it up!


FOSDEM 2013

FOSDEM Logo

This year, I was given the opportunity to go to Brussels to attend the best Free Software and Open Source event in Europe, also known as the Free and Open source Software Developers’ European Meeting (FOSDEM). It is one of the biggest open source conferences in Europe; it is a place where the developers of many open projects from around the world come together to have a beer and to talk about their progress. Over 5000 visitors come to Brussels every year to see hundreds of talks organised into dozens of tracks. It was actually my first time at an event this big. And it was great, indeed!

The main reason for our trip to Brussels was to present our project, LNST, in the Test and Automation Devroom. Unfortunately, due to the travel arrangements, we couldn’t make it to the second day of the conference. Nevertheless, it was a great experience. Saturday was crammed with great talks and they all seemed to happen at the same time! Hopefully, the organisers managed to record most of them and will make them available online soon.

Friday’s Beer Event

The conference actually started on February 1st with the beer event in probably the largest pub I have ever seen — the Delirium Café. It had at least 3 floors, all of which were full of geeks, as was part of the street in front of the pub. There was a really large variety of beers to choose from, with alcohol content going up to (and sometimes probably even over) 10%. I personally liked Belgian beer. It is different from what we are used to in the Czech Republic, but in a good way.

Delirium Café in Brussels (source: http://deliriumcafe.be/)

Saturday’s Talks

The official event was held at the Université Libre de Bruxelles (ULB) campus, which is situated in southern Brussels. The first talks on Saturday mostly started from 11:00, so there was some time to get yourself together if you had overdone it with the beer the previous day. The first thing we did was actually to get a coffee and a croissant. There was an improvised bar conveniently located on the campus, which was really nice.

ULB Campus Solbosch, Brussels, Belgium.

The first presentation we decided to see was about the architecture and the recent developments of GNU/Hurd, from Samuel Thibault in the Microkernels and Component-based OS devroom. Samuel explained that Hurd is designed to give regular users as much of the freedom that root has as possible (accessing hardware being the notable exception). For example, Hurd allows users to build and use their own TCP/IP stack. Due to the modularity of the kernel, you can replace the majority of the OS without even asking.

Then we decided to move to the Virtualisation track to see a talk about virtual networking. We got there late, and I must admit that I really didn’t understand the concept. On the other hand, what was really nice were the stands in buildings K and AW. They were usually overflowing with swag to take (or buy). Red Hat had a Fedora booth there with Jaroslav Řezník and Jiří Eischmann, giving out DVDs with Fedora 18 (apart from other things). I got the opportunity to try the new Firefox OS there. It looked okay, but only one of the phone’s four buttons actually worked. I am sure it will do better next year :-).

Željko Filipin talked from 13:30 in the Testing and Automation Devroom about the way Wikipedia is tested, which was really interesting. They use a set of tools called Selenium to automate web browsers. It provides a Ruby API to manipulate all the major browsers. This is very helpful during the testing process, as it allows scripting unit and regression tests easily.

My talk about the Linux Network Stack Test project was scheduled right after that. I was really nervous about it, because it was my first serious public speaking (and in English). I made it through without passing out, so I guess it went well. Still, I am not sure whether I made everything as clear as I possibly could. The devroom was very well organised (or at least that was the impression I got when I entered), thanks to R. Tyler Croy and the others who were running it.

Slides: You can download the slides I used for the LNST presentation here.

The last two talks I saw there were about the Linux kernel; both were part of the Operating Systems main track. In the first one, Thomas Petazzoni from Free Electrons explained the challenges of ARM support in the kernel and described how Linux deals with supporting different ARM-based SoCs. The very last talk we attended was from the maintainer of the I2C subsystem in the Linux kernel — Wolfram Sang. Wolfram talked about what it takes to be a kernel subsystem maintainer. He explained how a person becomes one and then focused mainly on his own experiences and how he prefers to do it.

Apparently, RMS was there attending the conference, but unfortunately I didn’t see him. $*#.! Well, maybe next time.

Brussels

There was not much time left, but we managed to find a while in the evening to go buy some chocolate and have a look around the city. To me, Brussels is somewhere in between Prague and London. They write everything in two languages — French and Dutch, but they seem to use French a lot more. If you are hungry there, you can get a variety of things from Thai to Spanish, Italian, and Mexican food. If you don’t know where to go, I can recommend a great tex-mex restaurant called ChiChi’s. As always, you can get a whole lot of junk food from McDonald’s. There are a couple of Starbucks locations in the city and an infinite number of other cafés to maintain proper caffeine levels :-).

Anyway, it was both a great conference and a really nice trip. I hope I will be able to come next year as well!


Raspberry Pi

Raspberry Pi Logo

It’s Christmas time, which means that there is a pause from school and a pause from work as well (the office is closed). So I finally had enough time to set up the Raspberry Pi board I ordered a few weeks ago. And it’s fantastic! I was really surprised how easy it was to set it up and get everything to work properly. Due to its popularity, there are guides and howtos everywhere.

I would like to use mine as a sort of all-around home server for backups, file sharing, and git for now. But there is so much more you can do with the Pi :). The best thing, at least for the purposes I plan to use mine for, is its compactness and, most importantly, the complete lack of any noisy mechanical parts, such as fans or hard drives. That’s the killer feature for me. I mean, I sure have a drawer full of old hardware that would make a machine usable for just copying backups. Unfortunately, I don’t have a basement with ethernet, and I wouldn’t survive with a cluster of fans going 24/7 in my apartment. Additionally, it consumes much less power than a regular PC does.

Get One

Two models of Raspberry Pi exist at the moment: Model A and Model B. The most notable differences between the two are the amount of memory (Model B has 512MB, while Model A has only 256MB); Model B also has one additional USB port and an ethernet port (RJ-45). The full specs and a comparison of the two are available on Wikipedia. I don’t know if Model A is available already, but even if it were, go for Model B. The ethernet port and the additional USB port make a big difference. You don’t have to bother buying an external USB hub or ethernet dongle.

Raspberry Pi Model B Illustration (source: http://www.raspberrypi.org/faqs)

As far as I know, you can get this board from two distributors — RS and Farnell. I got mine from RS, because Farnell stopped shipping these things to the Czech Republic just a couple of days before I ordered it. My brother got his from Farnell and they are almost identical, with only a few cosmetic differences. The prices are also about the same. From RS, it cost me roughly £31 with shipping to the Czech Republic included.

However, this is not everything you’ll need to set it up. No accessories are supplied with the board. You will need a power supply, an SDHC card to boot from, possibly an HDMI to DVI cable to plug it into a display, and an ethernet cable to connect it to the network. These things cost me an additional £17. I purchased a Samsung phone charger with micro-USB and a 16GB Class 10 SDHC card from ADATA. Pay attention to what you’re buying, because not everything will be compatible with the Pi. There is a neat list of verified peripherals on the elinux.com wiki. You can have a problem with a power supply that is not able to provide a current of at least 0.7A. There have been some problems with certain SD cards, so make sure to check the list. The Raspberry Pi also comes without a case, so it might be a good idea to get one. There is a ton of cases available from different vendors. I liked this metal one.

I use the board without a head and access it over ssh, but you might need a display, a keyboard, and a mouse to handle the installation — for instance, to click through some settings or install sshd. It works really nicely with a monitor as well. I’m running the Fedora 17 Raspberry Pi Remix on it at the moment, which comes with XFCE.

Working Raspberry Pi in a case

Set It Up

As I said earlier, it is very easy to start with the Raspberry Pi. Basically, all you need to do is prepare the SD card: you copy one of the prepared OS images to the card and that’s it. The hardest thing is probably choosing the distro. You have a number of options available. There is, again, a list of available distributions on elinux.com. The “default” is Raspbian, which is a port of Debian Wheezy. I myself am used to Fedora, so I started with that one. Arch is also available, as well as Gentoo ARM, OpenWrt, or even Android. And the best thing is that you can experiment with these really easily. If you have a spare SD card, you can reinstall your board as often as you want :).

To prepare the SD card on Linux, all you will need is the OS image and the dd command, which will copy the image to the card. Note that the following command doesn’t reference a partition device (e.g. /dev/sde1); instead, it uses the card device directly (/dev/sde).

dd bs=1M if=/path/to/rpfr-17-xfce-r2.img of=/dev/sde
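
Before running dd, it’s worth double-checking the device name, and flushing the buffers afterwards (a small precaution of my own, not from any official guide):

$ lsblk    # verify which device is the SD card (/dev/sde in my case)
$ sync     # after dd finishes, flush all buffers before removing the card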

When this is done, you can plug everything in and try it out. Just make sure you have connected everything before you plug in the power. The Raspberry Pi doesn’t have an off switch, so it will boot right away. If you encounter any problems, make sure to check the beginners guide and the troubleshooting advice.


Brief GDB Basics

In this post, I would like to go through some of the very basic cases in which gdb can come in handy. I’ve seen people avoid using gdb, saying it is a CLI tool and therefore it would be hard to use. Instead, they opted for this:

std::cout << "qwewtrer" << std::endl;
DEBUG("stupid segfault already?");

That’s just stupid. In fact, printing a backtrace in gdb is as easy as writing two letters. I don’t appreciate lengthy debugging sessions that much either, but it’s something you simply cannot avoid in software development. What you can do to speed things up is know the right tools and be able to use them efficiently. One of them is the GNU debugger.

Example program

All the examples in the text will refer to the following short piece of code. I have it stored as segfault.c and it’s basically a program that calls a function which results in a segmentation fault. The code looks like this:

/* Just a segfault within a function. */

#include <stdio.h>
#include <unistd.h>

void segfault(void)
{
	int *null = NULL;
	*null = 0;
}

int main(void)
{
	printf("PID: %d\n", getpid());
	fflush(stdout);

	segfault();

	return 0;
}

Debugging symbols

One more thing before we proceed to gdb itself. Well, two actually. In order to get anything more than a bunch of hex addresses, you need to compile your binary without stripping symbols and with debug info included. Let me explain.

Symbols (in this case) can be thought of simply as variable and function names. You can strip them from your binary either during compilation/linking (by passing the -s argument to gcc) or later with the strip(1) utility from binutils. People do this because it can significantly reduce the size of the resulting object file. Let’s see how it works exactly. First, compile the code with the symbols stripped:

[astro@desktop ~/MyBook/code]$ gcc -s segfault.c

Now let’s fire up gdb:

[astro@desktop ~/MyBook/code]$ gdb ./a.out
GNU gdb (GDB) Fedora (7.3.1-48.fc15)
Reading symbols from /mnt/MyBook/code/a.out...(no debugging symbols found)...done.

Notice the last line of the output. gdb is complaining that it didn’t find any debugging symbols. Now, let’s try to run the program and display the stack trace after it crashes:

(gdb) run
Starting program: /mnt/MyBook/code/a.out 
PID: 21568

Program received signal SIGSEGV, Segmentation fault.
0x08048454 in ?? ()
(gdb) bt
#0  0x08048454 in ?? ()
#1  0x0804848d in ?? ()
#2  0x4ee4a3f3 in __libc_start_main (main=0x804845c, argc=1, ubp_av=0xbffff1a4, init=0x80484a0, fini=0x8048510, 
    rtld_fini=0x4ee1dfc0 <_dl_fini>, stack_end=0xbffff19c) at libc-start.c:226
#3  0x080483b1 in ?? ()

You can imagine that this won’t help you very much with the debugging. Now let’s see what happens when the code is compiled with symbols, but without the debuginfo.

[astro@desktop ~/MyBook/code]$ gcc segfault.c 
[astro@desktop ~/MyBook/code]$ gdb ./a.out 
GNU gdb (GDB) Fedora (7.3.1-48.fc15)
Reading symbols from /mnt/MyBook/code/a.out...(no debugging symbols found)...done.
(gdb) run
Starting program: /mnt/MyBook/code/a.out 
PID: 21765

Program received signal SIGSEGV, Segmentation fault.
0x08048454 in segfault ()
(gdb) bt
#0  0x08048454 in segfault ()
#1  0x0804848d in main ()

As you can see, gdb still complains about the symbols at the beginning, but the results are much better. The program crashed while it was executing the segfault() function, so we can start looking for any problems from there. Now let’s see what we get when debuginfo gets compiled in.

[astro@desktop ~/MyBook/code]$ gcc -g segfault.c 
[astro@desktop ~/MyBook/code]$ gdb ./a.out 
GNU gdb (GDB) Fedora (7.3.1-48.fc15)
Reading symbols from /mnt/MyBook/code/a.out...done.
(gdb) run
Starting program: /mnt/MyBook/code/a.out 
PID: 21934

Program received signal SIGSEGV, Segmentation fault.
0x08048454 in segfault () at segfault.c:9
9		*null = 0;

That’s more like it! gdb printed the exact line from the code that caused the program to crash! The lesson: every time you want gdb to give you useful directions for debugging, make sure that you don’t strip symbols and that you have debuginfo available!

Start, Stop, Interrupt, Continue

These are the basic commands to control your application’s runtime. You can start a program by writing

(gdb) run

When a program is running, you can interrupt it with the usual Ctrl-C, which will send SIGINT to the debugged process. When the process is interrupted, you can examine it (this is described later in the post) and then either stop it completely or let it continue. To stop the execution, write

(gdb) kill

If you’d like to let your program carry on executing, use

(gdb) continue

I should point out that in gdb, you can abbreviate most of the commands to as little as a single character. For instance, r can be used for run, k for kill, c for continue, and so on :).

Stack traces

Stack traces are very powerful when you need to localize the point of failure. Seeing a stack trace will point you directly to the function that caused your program to crash. If your project is small, or you keep your functions short and straightforward, this could be all you’ll ever need from a debugger. You can display a stack trace in case of a segmentation fault, or generally at any time when the program is interrupted. The stack trace is displayed by the backtrace (or bt) command

(gdb) bt
#0  0x08048454 in segfault () at segfault.c:9
#1  0x0804848d in main () at segfault.c:17

You can see that the program stopped (more precisely, was killed by the kernel with a SIGSEGV signal) at line 9 of the segfault.c file while it was executing the function segfault(). The segfault() function was called directly from the main() function.

Listing source code

When the program is interrupted (and was compiled with debuginfo), you can list the source code directly using the list command. It will show the precise line of code (with some context) where the program was interrupted. This can be more convenient, because you don’t have to go back into your editor and search for the place of the crash by line numbers.

(gdb) list
4	#include <unistd.h>
5
6	void segfault(void)
7	{
8		int *null = NULL;
9		*null = 0;
10	}
11
12	int main(void)
13	{

We know (from the stack trace) that the program stopped at line 9. This command will show you exactly what is going on around there.

Breakpoints

Up to this point, we have only interrupted the program manually from the keyboard. This is not very useful in practice though. In most cases, you will want the program to stop at some exact place during the execution, so you can inspect what is going on, what values the variables have, and possibly step manually further through the program. To achieve this, you can use breakpoints. By attaching a breakpoint to a line of code, you tell the debugger to interrupt the execution every time the program reaches that particular line and to wait for your instructions.

A breakpoint can be set with the break command (before the program is executed) like this

(gdb) break 8
Breakpoint 2 at 0x4005c0: file segfault.c, line 8.

I’m using a line number to specify where to put the breakpoint, but you can also use a function name or a file name. There are multiple variants of arguments to the break command.
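
For example, all of these are valid ways to set a breakpoint in the segfault.c program from above (the conditional variant is there just to show the syntax):

(gdb) break segfault
(gdb) break segfault.c:9
(gdb) break main if argc > 1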

You can list the breakpoints you have set up by writing info breakpoints:

(gdb) info breakpoints 
Num     Type           Disp Enb Address            What
1       breakpoint     keep n   0x00000000004005d8 in main at segfault.c:14
2       breakpoint     keep y   0x00000000004005c0 in segfault at segfault.c:8

To disable a breakpoint, use the disable <Num> command with the number you find in the info listing.
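
Breakpoints can be toggled and removed just as easily; for instance:

(gdb) disable 2
(gdb) enable 2
(gdb) delete 2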

Stepping through the code

When gdb stops your application, you can resume the execution manually, step-by-step, through the instructions. There are several commands to help you with that. You can use the step and next commands to advance to the following line of code. However, these two commands are not entirely the same. next will ‘jump’ over function calls and run them at once; step, on the other hand, will allow you to descend into the function and execute it line-by-line as well. When you decide you’ve had enough of stepping, use the continue command to resume the execution uninterrupted until the next breakpoint.

Breakpoint 1, segfault () at segfault.c:8
8		int *null = NULL;
(gdb) step
9		*null = 0;

There are multiple things you can do during the process of stepping through a running program. You can dump values of variables using the print command, and even set values of variables (using the set command). And this is definitely not all. Gdb is great! It really can save a lot of time and lets you focus on the important parts of software development. Think of it the next time you try to bisect errors in a program with inappropriate debug messages :-).
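
For instance, while stopped inside segfault(), you can inspect the faulty pointer and even overwrite it (the address in the set command is made up purely to show the syntax):

(gdb) print null
$1 = (int *) 0x0
(gdb) set var null = (int *) 0xbffff0a0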

Magical container_of() Macro

When you begin working with the kernel and you start to look around and read the code, you will eventually come across this magical preprocessor construct. What does it do? Well, precisely what its name indicates. It takes three arguments — a pointer, the type of the container, and the name of the member the pointer refers to. The macro will then expand to a new address pointing to the container which accommodates the respective member. It is indeed a particularly clever macro, but how the hell can this possibly work? Let me illustrate …

The first diagram illustrates the principle of the container_of(ptr, type, member) macro for those who might find the above description too clumsy.

Illustration of how the container_of macro works

Below is the actual implementation of the macro from the Linux kernel:

#define container_of(ptr, type, member) ({                      \
        const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
        (type *)( (char *)__mptr - offsetof(type,member) );})

At first glance, this might look like a whole lot of magic, but it isn’t quite so. Let’s take it step by step.

Statements in Expressions

The first thing to catch your attention might be the structure of the whole expression. The statement should return a pointer, right? But there is just some kind of weird ({}) block with two statements in it. This is in fact a GNU extension to the C language called a braced-group within an expression. The compiler will evaluate the whole block and use the value of the last statement contained in the block. Take, for instance, the following code. It will print 5.

int x = ({1; 2;}) + 3;
printf("%d\n", x);

typeof()

This is a non-standard GNU C extension. It takes one argument and returns its type. Its exact semantics are thoroughly described in the gcc documentation.

int x = 5;
typeof(x) y = 6;
printf("%d %d\n", x, y);

Zero Pointer Dereference

But what about the zero pointer dereference? Well, it’s a little pointer magic to get the type of the member. It won’t crash, because the expression itself will never be evaluated. All the compiler cares about is its type. The same situation occurs when we ask for the address: the compiler again doesn’t care about the value, it will simply add the offset of the member to the address of the structure, in this particular case 0, and return the new address.

struct s {
	char m1;
	char m2;
};

/* This will print 1 */
printf("%zu\n", (size_t) &((struct s*)0)->m2);

Also note that the following two definitions are equivalent:

typeof(((struct s *)0)->m2) c;

char c;

offsetof(st, m)

This macro will return the byte offset of a member from the beginning of the structure. It is even part of the standard library (available in stddef.h). It is not available in kernel space though, as the standard C library is not present there. It relies on a little bit of the same 0 pointer dereference magic we saw earlier, and to avoid that, modern compilers usually offer a built-in function that implements it. Here is the messy version (from the kernel):

#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

It returns the address of a member called MEMBER of a structure of type TYPE that is stored in memory at address 0 (which happens to be the offset we’re looking for).

Putting It All Together

#define container_of(ptr, type, member) ({                      \
        const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
        (type *)( (char *)__mptr - offsetof(type,member) );})

When you look more closely at the original definition from the beginning of this post, you might start wondering whether the first line is really good for anything. The first line is not intrinsically important for the result of the macro; it is there for type-checking purposes, to make sure ptr really points to the given member. And what does the second line really do? It subtracts the offset of the structure’s member from its address, yielding the address of the container structure. That’s it!

After you strip all the magical operators, constructs and tricks, it is that simple :-).
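
To see this in action outside the kernel, here is a minimal userspace sketch (struct packet and its members are made up for illustration; it needs gcc, since the macro relies on the GNU extensions described above):

#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) ({                      \
        const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
        (type *)( (char *)__mptr - offsetof(type,member) );})

struct packet {
	int id;
	int checksum;
};

int main(void)
{
	struct packet p = { .id = 42, .checksum = 7 };
	int *member_ptr = &p.checksum;

	/* Recover a pointer to the whole struct from the member pointer. */
	struct packet *pkt = container_of(member_ptr, struct packet, checksum);
	printf("id = %d\n", pkt->id); /* prints: id = 42 */

	return 0;
}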

Fedora 17 Tweaks

I finally had a moment to perform the annual desktop re-install to upgrade the OS on my primary computer. It’s always a little bit of a pain, because I pretty much rely on that machine and I want to have everything in place and ready to use. I upgraded from Lovelock to Beefy Miracle, which took me from Gnome 3.0 right to 3.4. Things have moved forward a lot during the past year! Lots of people hate Gnome Shell, but I’m ok with it – despite the occasional segfault. It’s definitely not perfect, but I like the idea of using the SUPER key for switching between applications. I have bindings for switching workspaces and, after all, I end up using vim, chromium, and a terminal anyway :).

There are always some post-installation tweaks I like to do to make the system fit my personal needs better. Here they are; maybe someone else can use them as well.

Gnome Tweak Tool

This is the first thing to install on a fresh Fedora 17 or probably any distribution using the Gnome 3 desktop. For some reason, the designers decided that normal users don’t like to change fonts or window manager themes, and they moved all these “things for advanced users” to gnome-tweak-tool. To install it, use the following command

 $ su -c 'yum install gnome-tweak-tool'

This will add an Advanced Settings icon to the application picker in the activities overview (the one that comes up after you hit the windows key). The application allows you to fine-tune various parts of Gnome, such as font sizes and desktop icons, and to enable or disable extensions.

Gnome 3 – Advanced Settings in Application Menu

Gnome Tweak Tool

Clearlooks in Gnome 3.4

What surprised me the most is the lack of the Clearlooks theme in Fedora 17. I got used to that theme too much over the years! Luckily for me, there’s a port for Gtk 3 called Clearlooks-Phenix! To install this theme, you need to download and unpack Clearlooks-Phenix 2, which is suited for Gtk 3.4. After that, copy the contents of the archive either to /usr/share/themes to make it available for all users, or to ~/.themes just for yourself. There are more detailed installation instructions available directly on the project’s website. Here’s a series of commands that will do that:

$ wget http://jpfleury.indefero.net/p/clearlooks-phenix/source/download/master/
$ unzip clearlooks-phenix-master.zip
$ mv clearlooks-phenix-master Clearlooks-Phenix
$ su -c 'cp -r Clearlooks-Phenix /usr/share/themes'
Clearlooks-Phenix theme preview in Fedora 17

Nautilus Loading Too Long

I have a problem with Nautilus on my installation. When the file browser starts, the cursor changes to the loading circle and stays like that for 20 seconds. The application, on the other hand, seems to be loaded and fully usable. Fortunately, there is a simple workaround for this problem. To fix it, simply edit /usr/share/applications/nautilus.desktop, find the line that contains StartupNotify=true (line 131 on my system), and change the value to false.
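
If you prefer doing this from the command line, a single sed substitution should do the same thing (a sketch; back up the file first, as the exact contents may differ between builds):

 $ su -c "sed -i 's/StartupNotify=true/StartupNotify=false/' /usr/share/applications/nautilus.desktop"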

Nautilus keeps up the loading cursor for 20 sec

Nautilus executable config options

Multiple Workspaces with Two Screens

Another thing that bothers me on Gnome 3.4 is that there are no workspaces on the secondary screen. The designers assume that if you have two monitors, you use workspaces on the primary one and “pin” some windows to the secondary screen, where they stay all the time through the context switching. I prefer to have this the other way though (i.e., the workspaces change on the secondary screen as well). This can be easily altered by issuing this command:

 $ gsettings set org.gnome.shell.overrides workspaces-only-on-primary false

Blue Faces in Flash Player

This issue is caused by a combination of the Adobe Flash player and nVidia proprietary drivers. For some reason, the colors are messed up in Flash videos. I guess Linus was right (again). To fix this, you need to turn off hardware acceleration in the Flash player settings. Right click on a Flash video, choose Settings, and untick the Enable Hardware Acceleration option. You need to restart your browser for the change to take effect.

Corrupt colors in Flash (this is just annoying)

The normal colors (right click + Settings, untick the option, and restart your browser)

Login Screen Background

I personally don’t like fireworks that much 😛 and since they’re the default background of Beefy Miracle, that wallpaper is also present behind the login screen. You can change the default wallpaper by editing

 # vim /usr/share/backgrounds/beefy-miracle/default/beefy-miracle.xml

and changing the paths to the wallpapers for the different screen sizes. I like the blue Gnome stripes. For that one, you can set the path to /usr/share/backgrounds/gnome/Stripes.jpg.

Keyboard Layout Resets to US on Logout

I don’t know what’s causing this and I didn’t find a complete workaround either. Every time I log off from my account, the keyboard layout is reset to English (US), despite Czech being set as the default in the settings. The workaround is to remove English (US) from the layout settings completely and keep only Czech. I can live with that, but I guess it can be pretty annoying to other people.


Custom Kernel on Fedora

Why would someone want to have a custom kernel? Well, maybe you like the cutting-edge features, or maybe you want to hack on it! Anyway, this post explains, step-by-step, how to download, build, and install a custom kernel on Fedora. I’ll be building a kernel from the mainline tree for i686.

Warning: Keep in mind that if something goes wrong you might end up irreversibly damaging your system! Always keep a fail-safe kernel to boot to, just in case!

1. Getting the kernel tree

The Linux kernel tree is developed using the git revision control system. There are a LOT of different versions and different trees of the kernel maintained by various people. The development tree is linux-next, Linus’ mainline is called simply linux, and there is also linux-mm, which used to serve a purpose similar to today’s linux-next. You need to pick one and clone it locally. In this how-to, I’ll stick with the linux tree from Linus, which is at least a tiny bit more stable than linux-next. Use the following to get the source:

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

2. Patching the source

Since this is really bleeding-edge code, there might be some problems with compilation or other bugs. Now is the time to patch the tree to resolve all these issues. Also, if you want to hack a little and make some changes of your own to the kernel, now is the time! Don’t be afraid, it’s not that hard …

3. Configure the kernel

The kernel is a pretty big piece of software and it provides a shitload of configuration options. Various limits and settings can be adjusted in this phase. You can also decide which parts of the kernel, such as which drivers, will be included in your build. Like I said, there are a lot of options and a couple of ways of setting them:

$ make config     # Go through each and every option by hand
$ make defconfig  # Use the default config for your architecture
$ make menuconfig # Edit the config with ncurses gui
$ make gconfig    # Editing with gtk GUI app

Sometimes, the kernel is built with an option that makes it save its configuration in /proc/config.gz. If this is your case, you can copy the configuration inside the tree and use make oldconfig. This will ask only about the new features that were not present in the previous version.

$ zcat /proc/config.gz > .config
$ make oldconfig

On Fedora, the configuration of currently installed kernels can be found in the /boot directory:

$ ll /boot/config*
-rw-r--r--. 1 root root 123540 Mar 20 17:31 /boot/config-2.6.42.12-1.fc15.i686
-rw-r--r--. 1 root root 125193 Apr 21 15:54 /boot/config-2.6.43.2-6.fc15.i686
-rw-r--r--. 1 root root 125204 May  8 14:23 /boot/config-2.6.43.5-2.fc15.i686
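
A reasonable starting point is to reuse the config of the kernel you are currently running and let make oldconfig ask only about the new options (the exact file name depends on your installed kernel):

$ cp /boot/config-$(uname -r) .config
$ make oldconfig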

4. Build it

When you’re done configuring, you can advance to the build. The only advice I can give you here is to take advantage of the -j option that make offers if you’re on a system with multiple cores. To build the kernel using 2 jobs per core on a dual-core processor, use:

$ make -j4

It will significantly improve the build times. Either way, it will take some time to build, so it’s time to get a coffee!

5. Installation

The result of a successful build should be a bzImage located at arch/i386/boot/bzImage and a bunch of built modules *.ko (kernel objects). It’s vital to point out that bzImage isn’t just a bzip2-compressed kernel object. It’s a specific bootable file format that contains compressed kernel code along with some boot code (like a stub for decompressing the kernel, etc.).

Anatomy of bzImage (source: wikipedia.org)

To install the new kernel, rename and copy the bzImage to /boot and install the modules by writing:

# cp arch/i386/boot/bzImage "/boot/vmlinuz-"`make kernelrelease`
# make modules_install

The modules will be installed to /lib/modules. Also, a file called System.map will be created in the root of the kernel source tree; it is a symbol look-up table for kernel debugging. You can place it into /boot along with the image:

# cp System.map "/boot/System.map-"`make kernelrelease`

The file’s contents look like this:

$ head System.map
00000000 A VDSO32_PRELINK
00000040 A VDSO32_vsyscall_eh_frame_size
000001d5 A kexec_control_code_size
00000400 A VDSO32_sigreturn
0000040c A VDSO32_rt_sigreturn
00000414 A VDSO32_vsyscall
00000424 A VDSO32_SYSENTER_RETURN
00400000 A phys_startup_32
c0400000 T _text
c0400000 T startup_32

6. Initramfs

When all the files are in place, you need to generate the initial ramdisk (initramfs). This initial filesystem, which is created in RAM, is there to make some preparations before the real root partition is mounted. For instance, if your root is on a RAID or LVM, you’ll need to pre-load some drivers, etc. It usually just loads the block device modules necessary to mount the root.

There’s a utility called dracut that will generate this for you.

# dracut "" `make kernelrelease`

This will generate the image and store it into /boot/initramfs-kernelrelease.img. To inspect the file use

# lsinitrd /boot/initramfs-kernelrelease.img

7. Bootloader Settings

The final step before you can actually boot the new kernel is to configure your bootloader and tell it that the new kernel is there. There was a transition from GRUB 0.97 (nowadays known as grub-legacy) to the new GRUB2 between Fedora 15 and Fedora 16, so I’ll explain both.

grub-legacy

In the old version (which I am currently using on F15), you need to edit the /boot/grub/menu.lst file and add a new entry with paths to your kernel and initrd. The entry might look like the following:

title Fedora (2.6.41.10-3.fc15.i686)
      root (hd0,0)
      kernel /boot/vmlinuz-3.1.0-rc7 ro root=UUID=58042206-7ffe-4285-8a07-a1874d5a70d2 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=cz-us-qwertz rhgb quiet
      initrd /boot/initramfs-3.1.0-rc7.img

grub2

In grub2, you should be able to do this automatically with this command

# grub2-mkconfig -o /boot/grub2/grub.cfg

After this step you can reboot and let your machine chew on some fresh meat directly from the developers! Probably as fresh as it’ll ever get! Boot up and enjoy ;-).



Core dumps in Fedora

This post will demonstrate a way of obtaining and examining a core dump on Fedora Linux. A core file is a snapshot of the working memory of a process. Normally there’s not much use for such a thing, but when it comes to debugging software, it’s more than useful. Especially for those hard-to-reproduce random bugs. When your program crashes in such a way, the core might be your only source of information, since the problem might not come up again in the next million executions of your application.

The thing is, creation of core dumps is disabled by default in Fedora, which is fine, since the user doesn’t want some magic file spawned in his home folder every time an app goes down. But we’re here to fix stuff, so how do you turn it on? Well, there are a couple of things that might prevent the cores from appearing.

1. Permissions

First, make sure that the program has write permission for the directory in which the core is to be created (by default, the current working directory of the process). From my experience, core dump creation doesn’t work on programs executed from NTFS drives mounted through ntfs3g.

2. ulimit

This is the place where the core dump creation is disabled. You can see for yourself by using the ulimit command in bash:

astro@desktop:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15976
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

To enable core dumps, set some reasonable size for the core files. I usually opt for unlimited, since disk space is not an issue for me:

ulimit -c unlimited

This setting is local to the current shell though. To keep it, you need to put the above line into your ~/.bashrc or (which is cleaner) adjust the limits in /etc/security/limits.conf.
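
The corresponding limits.conf entry might look like this (“astro” stands for your own user name):

astro            soft    core            unlimited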

3. Ruling out ABRT

In Fedora, cores are sent to the Automatic Bug Reporting Tool — ABRT, so crashes can be reported to the Red Hat Bugzilla for the developers to analyse. The kernel is configured so that all core dumps are piped right to abrt. This is set in /proc/sys/kernel/core_pattern. My settings look like this

|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t %h %e 636f726500

That means that all core files are passed to the standard input of abrt-hook-ccpp. Change this setting simply to “core”, i.e.:

core

Then the core files will be stored in the current working directory of the process and will be called core.PID.
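
As root, you can make the change like this (it lasts only until reboot; put kernel.core_pattern=core into /etc/sysctl.conf to make it permanent):

# echo core > /proc/sys/kernel/core_pattern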

4. Send Right Signals

Not every process termination leads to a core dump. Keep in mind that a core file will be created only if the process receives one of these signals:

  • SIGSEGV
  • SIGFPE
  • SIGABRT
  • SIGILL
  • SIGQUIT

Example program

Here’s a series of steps to test whether your configuration is valid and the cores will appear where they should. You can use this simple program to test it:

/* Print PID and loop. */

#include <stdio.h>
#include <unistd.h>

void infinite_loop(void)
{
    while(1);
}

int main(void)
{
    printf("PID: %d\n", getpid());
    fflush(stdout);

    infinite_loop();

    return 0;
}

Compile the source, run the program, and send a signal like the following to get a memory dump:

astro@desktop:~$ gcc infinite.c
astro@desktop:~$ ./a.out &
[1] 19233
PID: 19233
astro@desktop:~$ kill -SEGV 19233
[1]+  Segmentation fault      (core dumped) ./a.out
astro@desktop:~$ ls core*
core.19233

Analysing Core Files

If you already have a core file, you can open it using the GNU Debugger (gdb). For instance, to open the core file that was created earlier in this post and display a backtrace, do the following:

astro@desktop:~$ gdb a.out core.19233
GNU gdb (GDB) Fedora (7.3.1-47.fc15)
Copyright (C) 2011 Free Software Foundation, Inc.
Reading symbols from /home/astro/a.out...(no debugging symbols found)...done.
[New LWP 19233]
Core was generated by `./a.out'.
Program terminated with signal 11, Segmentation fault.
#0  0x08048447 in infinite_loop ()
Missing separate debuginfos, use: debuginfo-install glibc-2.14.1-5.i686
(gdb) bt
#0  0x08048447 in infinite_loop ()
#1  0x0804847a in main ()
(gdb)
