Tagged: Fedora

Developers Conference 2013

The Developers Conference took place in Brno this past weekend (February 23rd and 24th). The #devconf is an annual event organised by the Red Hat office in Brno together with members of the Fedora and JBoss.org communities. The convention primarily targets Linux and JBoss developers and admins, and it covers a range of topics from the free software universe.

This year, there were three main tracks, one of which was focused primarily on JBoss and the remaining two on Linux-related topics, such as the kernel, networking, security, and virtualization/cloud technologies. Apart from that, the conference hosted a number of hackfests and lab sessions for people who wanted to learn something more practical or have a more focused discussion on a certain topic.

Bryn M. Reeves talking at the Developers Conference

I was there this year and it was amazing. According to the numbers posted to Twitter, the conference had at least 500 attendees on Saturday. There were so many great talks that the organisers even had to turn down presenters, simply because there was no room for more. Actually, I don’t think that three tracks will be enough next year.

Saturday

I didn’t get up very early, so for me, Saturday started with a talk from Debarshi Ray about Gnome Online Accounts. Debarshi talked about how the new Gnome 3.8 integrates with various online services, such as Google Docs/Mail, Facebook, Flickr, ownCloud, and others. The integration is a very promising area in my opinion, because these services are used by millions of people. However, there are still some problems that need to be addressed and that are being worked on.

The track in the D3 room continued with Tom Callaway’s talk on Fedora User Experience. Tom explained the design-driven methodology: we should think about the user experience before we code. However, the community around Fedora has been focused on the contributors, who are quite often technical people who like to code first. He presented several mockups they have been working on. The first one is a sort of rich web interface for mailing lists called Hyperkitty. The goal of this project is to improve collaboration in communities. At the moment, there are two groups of users, one preferring mailing lists, the other discussion boards. People from these two groups tend to miss each other. Hyperkitty should also provide karma functionality to help decrease the pollution of big mailing lists by chronically non-constructive people wasting everyone’s time with pointless discussions.

Hyperkitty

Hyperkitty Design (source: http://blog.linuxgrrl.com/)

The third presentation I saw, again in the D3 room, was Negotiation Theory for Open Source Hackers by Leslie Hawthorn. This was one of the less technical talks, but it was very insightful. Arguing takes up a fair amount of the time and effort technical people put into their work, especially in open source, and it is not always time well spent. The slides from this presentation are available here.

After Leslie’s talk, we moved to the kernel track in D1 to see Lukáš Czerner speak about what is happening in local Linux kernel file systems. Lukáš summarized the new features in XFS (the most suitable option for enterprise workloads), ext4 (great for general purpose), and btrfs (still not stable). Based on a comparison of the number of commits made during the last year, the most active development is going on in btrfs. Its codebase also grows steadily, while XFS has lost some weight during the last few years as its developers try to remove unnecessary things. He also discussed the challenges file system developers will need to deal with in the future. The rules of the game don’t change that much with SSDs, but PCI-based solid-state drives can be problematic, as the current block layer doesn’t scale that well to storage technologies that fast. A similar increase in speed has already happened in networking, so future development might be inspired by some ideas from that area.

After the file systems update, it was time for my talk about the Linux Network Stack Test project. LNST is a network testing tool designed to simplify testing of real-life scenarios with more than one computer involved. It provides some building blocks for test development and it also serves as a test management tool that will handle proper execution of your test cases. The tests created using LNST are then entirely automated and also completely independent from the underlying network infrastructure, so they can be migrated to a different network without any changes whatsoever. This is important when you want to share them with others.

Slides: You can download the slides I used for the LNST presentation here.

I took a short break from the presentations after that and returned to see Daniel Borkmann with his presentation about zero-copy packet capturing and netsniff-ng. At this point, I started to get really tired, so I certainly didn’t catch everything here. Finally, the kernel track was closed by three lightning talks: Jiří Benc on PTP, Jiří Pírko with an update on what has happened in the team driver, and Zdeněk Kabeláč, who closed the room with his talk about LVM2.

If you came early enough to get a keyring at the entrance to the venue, you were in possession of a ticket to the after-party, which took place approximately half an hour later at Fléda. The party was, just like the conference itself, awesome. There was free beer and food, a live band, and most importantly hundreds of like-minded people and colleagues from Red Hat to talk to about Linux :).

Sunday

Sunday was again crammed with amazing talks. This time, I made sure not to oversleep (even though getting up after the party wasn’t easy at all). The very first talk in the morning in D3 was Evolution of Linux Network Management from Pavel Šimerda. Pavlix talked about the NetworkManager project, the things they improved in the 0.9.8 release (which happened just a few days prior to the conference), and what they are focusing on at the moment. NetworkManager is moving from desktops to servers, and it should be used as the primary way of configuring the network in Fedora and also in RHEL. This requires revising various things inside NetworkManager and also implementing additional functionality that is required in the enterprise area. Networking is absolutely crucial on servers, so they plan to test the code very carefully using multiple different methods (one of them might be a set of real-life test scenarios using LNST).

Thomas Woerner continued the networking track in D3 with his presentation of firewalld, a daemon that provides dynamic firewall functionality on top of the static iptables. The daemon supports network zones, which represent levels of trust for network connections; these might be public, home, work, etc. Firewalld also supports grouping rules into services, which are basically lists of ports that are required for some service to work. This way, you can handle all the rules in a group at the same time.
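To give an idea of how zones and services fit together, the client-side commands might look like this (a sketch assuming the firewall-cmd tool is installed and the daemon is running; without --permanent the changes affect only the runtime configuration):

```shell
# Which zone do new connections fall into by default?
firewall-cmd --get-default-zone

# Allow the predefined "ssh" service (a named group of ports) in the home zone
firewall-cmd --zone=home --add-service=ssh

# List everything currently allowed in that zone
firewall-cmd --zone=home --list-services
```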

The last networking talk in D3 before the Core OS track was given by Thomas Graf. The presentation was focused on Open vSwitch, a software implementation of a switch similar to the Linux bridge. However, Open vSwitch is aimed more at the enterprise market. It is designed for virtualized server environments, so it comes with support for things such as OpenFlow and VLAN tagging.

Preparations for What are we breaking now?

Probably the most crowded presentation at the Developers Conference was What are we breaking now? delivered by Kay Sievers and Lennart Poettering. They discussed several topics that (in their opinion) need fixing. The first one was persistent network interface names. This has been a problem for a long time, because the kernel names devices as it finds them, and the order can change with every other boot. The plan is to use names based on some properties of the hardware, such as the position of the card on the bus, instead of just numbering the cards as they are recognised. Other than that, they would like to implement D-Bus in the kernel. There have been a couple of attempts at this in the past, but they all failed. I personally liked the plan they mentioned next: to modify the bootloader (GRUB2) and the kernel-install script to work with drop-in config files when a new kernel is installed, rather than with a convoluted set of self-replicating scripts. Finally, they mentioned app sandboxes that should provide some protection for the user from the actions of potentially malicious third-party applications.

The Core OS track continued after a short break with a great talk from Bryn M. Reeves called Who moved my /usr?? – staying sane in a changing world. This talk was again a little bit lighter on the technical details, but like last year’s presentation from Bryn, it was not only interesting but very entertaining as well. The talk was focused on change: Bryn went through the historic releases of Red Hat Linux and described what happened where, and how the users reacted.

I didn’t actually hear all of the talk that followed in D3, because my stomach was getting pretty unhappy at that time and I went down to get that extremely big hot dog. Dan Walsh, the leader of the SELinux project, talked about and also demonstrated the creation of hundreds of secure application containers with virt-sandbox. The containers are much cheaper than virtualization, but they can provide the same amount of security.

Lennart Poettering had one more talk in the Core OS track, called Systemd Journal. This one was an introduction to the logging facility that will be a part of systemd. He explained the motivation, why they decided to go down this path, and what (in his opinion) the benefits of journald are. In the second part of the presentation, Lennart gave a small demonstration of what can be done with the journalctl tool for reading logs.

The very last talk I attended at this year’s Developers Conference was Log Message Processing, Formatting and Normalizing with Rsyslog from Rainer Gerhards, the main author of the rsyslog project, but I was getting really tired and sleepy again, so unfortunately I wasn’t listening that carefully.

Summary

Long story short, this year’s #devconf was awesome! Lots of interesting talks, labs, and hackfests. If you missed a talk this year, the good news is that all the presentations from the three main rooms were recorded. The videos should be available online soon.

Big thanks go to the main organisers, Radek Vokál and Jiří Eischmann, for making this possible, but also to the many volunteers who were involved in organising it and making sure everything went as planned. For me, the organisation was flawless, as I personally didn’t encounter any difficulties. Man, I’m already looking forward to 2014. See you all next year!

If you liked this post, make sure you subscribe to receive notifications about new content on this site by email or an RSS feed.
Alternatively, feel free to follow me on Twitter or Google+.


Raspberry Pi

Raspberry Pi Logo

It’s Christmas time, which means there is a pause from school and a pause from work as well (the office is closed). So I finally had enough time to set up the Raspberry Pi board I ordered a few weeks ago. And it’s fantastic! I was really surprised how easy it was to set it up and get everything to work properly. Due to its popularity, there are guides and howtos everywhere.

I would like to use mine as a sort of all-around home server for backups, file sharing, and git for now. But there is so much more you can do with the Pi :). The best thing, at least for my purposes, is its compactness and, most importantly, the complete lack of any noisy mechanical parts, such as fans or hard drives. That’s the killer feature for me. I mean, sure, I have a drawer full of old hardware that would make a machine usable for just copying backups. Unfortunately, I don’t have a basement with ethernet, and I wouldn’t survive with a cluster of coolers running 24/7 in my apartment. Additionally, it consumes much less power than a regular PC does.

Get One

Two models of the Raspberry Pi exist at the moment: model A and model B. The most notable differences between the two are the amount of memory (model B has 512MB while model A has only 256MB); model B also has one additional USB port and an ethernet port (RJ-45). The full spec and a comparison of the two are available on Wikipedia. I don’t know if model A is available already, but even if it were, go for model B. The ethernet port and the additional USB port make a big difference: you don’t have to buy an external USB hub or ethernet dongle.

Raspberry Pi Model B Illustration (source: http://www.raspberrypi.org/faqs)

As far as I know, you can get this board from two manufacturers — RS and Farnell. I got mine from RS, because Farnell stopped shipping these things to the Czech Republic just a couple of days before I ordered it. My brother got his from Farnell, and they are almost identical, with only a few cosmetic differences. The prices are also about the same; from RS it cost me roughly £31 with shipping to the Czech Republic included.

However, this is not everything you’ll need to set it up. No accessories are supplied with the board: you will need a power supply, an SDHC card to boot from, possibly an HDMI-to-DVI cable to plug it into a display, and an ethernet cable to connect it to the network. These things cost me an additional £17. I purchased a Samsung phone charger with micro-USB and a 16GB Class 10 SDHC card from ADATA. Pay attention to what you’re buying, because not everything is compatible with the Pi; there is a neat list of verified peripherals on the elinux.com wiki. You can have problems with a power supply that cannot provide a current of at least 0.7 A, and there have been some problems with certain SD cards, so make sure to check the list. The Raspberry Pi also comes without a case, so it might be a good idea to get one. There is a ton of cases available from different vendors; I liked this metal one.

I use the board headless and access it over ssh, but you might need a display, a keyboard, and a mouse to handle the installation — for instance, to click through some settings or install sshd. It works really nicely with a monitor as well. I’m running the Fedora 17 Raspberry Pi Remix on it at the moment, which comes with XFCE.

Working Raspberry Pi in a case

Set It Up

As I said earlier, it is very easy to get started with the Raspberry Pi. Basically, all you need to do is prepare the SD card: copy one of the prepared OS images to it and that’s it. The hardest thing is probably choosing the distro, as you have a number of options available. There is, again, a list of available distributions on elinux.com. The “default” is Raspbian, which is a port of Debian Wheezy. I myself am used to Fedora, so I started with that one. Arch is also available, as are Gentoo ARM, OpenWrt, and even Android. And the best thing is that you can experiment with these really easily: if you have a spare SD card, you can reinstall your board as often as you want :).

To write the image on Linux, all you need is the OS image and the dd command, which will copy the image to the card. Note that the following command doesn’t reference a partition device (e.g. /dev/sde1); instead it uses the card device itself (/dev/sde).

dd bs=1M if=/path/to/rpfr-17-xfce-r2.img of=/dev/sde
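If you want to rehearse the dd invocation before pointing it at a real device, you can run the exact same command shape against ordinary files and verify the copy (the file names here are just placeholders):

```shell
# Create a small dummy "image" standing in for the real OS image
dd if=/dev/zero of=/tmp/demo.img bs=1M count=4 2>/dev/null

# Same invocation shape as writing the card, but against a plain file
dd bs=1M if=/tmp/demo.img of=/tmp/card.img 2>/dev/null

# Verify the copy is byte-for-byte identical
cmp /tmp/demo.img /tmp/card.img && echo "images match"
```

With a real card, you can do the analogous check by running sync first and then comparing the image against the device.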

When this is done, you can plug everything in and try it out. Just make sure you have connected everything before you plug in the power; the Raspberry Pi doesn’t have a power switch, so it will boot right away. If you encounter any problems, make sure to check the beginner’s guide and the troubleshooting advice.

Useful Links


If you liked this post, make sure you subscribe to receive notifications of new content by email or RSS feed.
Alternatively, feel free to follow me on Twitter or Google+.

Custom Kernel on Fedora

Why would someone want to have a custom kernel? Well, maybe you like the cutting-edge features or maybe you want to hack on it! Either way, this post explains, step by step, how to download, build, and install a custom kernel on Fedora. I’ll be building a kernel from the mainline tree for i686.

Warning: Keep in mind that if something goes wrong you might end up irreversibly damaging your system! Always keep a fail-safe kernel to boot to, just in case!

1. Getting the kernel tree

The Linux kernel tree is developed using the git revision control system. There are a LOT of different versions and trees of the kernel maintained by various people. The development tree is linux-next; Linus’ mainline is called simply linux; there is also linux-mm, which historically served a similar purpose to linux-next. You need to pick one and clone it locally. In this how-to, I’ll stick with the linux tree from Linus, which is at least a tiny bit more stable than linux-next. Use the following to get the source:

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

2. Patching the source

Since this is really bleeding-edge code, there might be some compilation problems or other bugs. Now is the time to patch the tree to resolve all these issues. Also, if you want to hack a little and make some changes of your own to the kernel, now is the time! Don’t be afraid, it’s not that hard …

3. Configure the kernel

The kernel is a pretty big piece of software and it provides a shitload of configuration options. Various limits and settings can be adjusted in this phase. You can also decide which parts of the kernel, such as individual drivers, will be included in your build. Like I said, there are a lot of options and a couple of ways of setting them:

$ make config     # Go through each and every option by hand
$ make defconfig  # Use the default config for your architecture
$ make menuconfig # Edit the config with ncurses gui
$ make gconfig    # Editing with gtk GUI app

Sometimes the kernel is built with an option that makes it save its configuration in /proc/config.gz. If this is your case, you can copy it into the tree and use make oldconfig. This will ask only about new features that were not present in the previous version.

$ zcat /proc/config.gz > .config
$ make oldconfig

On Fedora, the configuration of currently installed kernels can be found in the /boot directory:

$ ll /boot/config*
-rw-r--r--. 1 root root 123540 Mar 20 17:31 /boot/config-2.6.42.12-1.fc15.i686
-rw-r--r--. 1 root root 125193 Apr 21 15:54 /boot/config-2.6.43.2-6.fc15.i686
-rw-r--r--. 1 root root 125204 May  8 14:23 /boot/config-2.6.43.5-2.fc15.i686

4. Build it

When you’re done configuring, you can move on to the build. The only advice I can give you here is to take advantage of the -j option that make offers if you’re on a system with multiple cores. To build the kernel using 2 jobs per core on a dual-core processor, use:

$ make -j4

It will significantly improve the build times. Either way, it will take some time to build, so it’s time to get a coffee!
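If you don’t want to hard-code the job count, you can derive it from the machine; nproc is part of GNU coreutils, so it should be on any Fedora install (the actual make call is commented out so the snippet stands alone):

```shell
# Two jobs per online CPU, the same rule of thumb as above
jobs=$(( $(nproc) * 2 ))
echo "building with $jobs jobs"
# make -j"$jobs"
```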

5. Installation

The result of a successful build should be a bzImage located in arch/i386/boot/bzImage and a bunch of built modules, *.ko (kernel object) files. It’s vital to point out that bzImage isn’t just a bzip2-compressed kernel object; it’s a specific bootable file format that contains compressed kernel code along with some boot code (like a stub for decompressing the kernel, etc.).

Anatomy of bzImage (source: wikipedia.org)

To install the new kernel, rename and copy the bzImage to /boot and install the modules by writing:

# cp arch/i386/boot/bzImage "/boot/vmlinuz-"`make kernelrelease`
# make modules_install

The modules will be installed to /lib/modules. A file called System.map will also be created in the root of the kernel source tree; it is a symbol look-up table for kernel debugging. You can place it into /boot along with the image:

# cp System.map "/boot/System.map-"`make kernelrelease`

The file’s contents look like this:

$ head System.map
00000000 A VDSO32_PRELINK
00000040 A VDSO32_vsyscall_eh_frame_size
000001d5 A kexec_control_code_size
00000400 A VDSO32_sigreturn
0000040c A VDSO32_rt_sigreturn
00000414 A VDSO32_vsyscall
00000424 A VDSO32_SYSENTER_RETURN
00400000 A phys_startup_32
c0400000 T _text
c0400000 T startup_32

6. Initramfs

When all the files are in place, you need to generate the initial ramdisk (initramfs). This initial filesystem, which is unpacked into RAM, makes some preparations before the real root partition is mounted. For instance, if your root is on RAID or LVM, you’ll need to pre-load some drivers, etc. It usually just loads the block device modules necessary to mount the root.

There’s a utility called dracut that will generate this for you.

# dracut "" `make kernelrelease`

This will generate the image and store it as /boot/initramfs-&lt;kernelrelease&gt;.img. To inspect the file, use:

# lsinitrd /boot/initramfs-kernelrelease.img

7. Bootloader Settings

The final step before you can actually boot the new kernel is to configure your bootloader and tell it that the new kernel is there. Between Fedora 15 and Fedora 16 there was a transition from GRUB 0.97 (nowadays known as grub-legacy) to the new GRUB2, so I’ll explain both.

grub-legacy

In the old version (which I am currently using on F15), you need to edit the /boot/grub/menu.lst file and add a new entry with paths to your kernel and initrd. The entry might look like the following:

title Fedora (2.6.41.10-3.fc15.i686)
      root (hd0,0)
      kernel /boot/vmlinuz-3.1.0-rc7 ro root=UUID=58042206-7ffe-4285-8a07-a1874d5a70d2 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=cz-us-qwertz rhgb quiet
      initrd /boot/initramfs-3.1.0-rc7.img

grub2

In grub2, you should be able to do this automatically with the following command:

# grub2-mkconfig -o /boot/grub2/grub.cfg

After this step you can reboot and let your machine chew on some fresh meat directly from the developers! Probably as fresh as it’ll ever get! Boot up and enjoy ;-).

Sources


If you liked this post, make sure you subscribe to receive notifications of new content by email or RSS feed.
Alternatively, feel free to follow me on Twitter or Google+.

Core dumps in Fedora

This post will demonstrate a way of obtaining and examining a core dump on Fedora Linux. A core file is a snapshot of the working memory of a process. Normally there’s not much use for such a thing, but when it comes to debugging software it’s more than useful, especially for those hard-to-reproduce random bugs. When your program crashes in such a way, the core file might be your only source of information, since the problem might not come up again in the next million executions of your application.

The thing is, creation of core dumps is disabled by default in Fedora, which is fine, since users don’t want some magic file spawned in their home folder every time an app goes down. But we’re here to fix stuff, so how do you turn it on? Well, there are a couple of things that might prevent the cores from appearing.

1. Permissions

First, make sure that the program has write permission for the directory it resides in, since the core files are created in the directory of the executable. From my experience, core dump creation doesn’t work for programs executed from NTFS drives mounted through ntfs-3g.
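A quick way to check this (a trivial sketch; the mktemp directory just stands in for the directory your program lives in):

```shell
# With the plain "core" pattern, the dump lands in the process's
# working directory, so that directory must be writable
cd "$(mktemp -d)"
test -w . && echo "writable" || echo "not writable"
```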

2. ulimit

This is the place where the core dump creation is disabled. You can see for yourself by using the ulimit command in bash:

astro@desktop:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15976
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

To enable core dumps, set some reasonable size for core files. I usually opt for unlimited, since disk space is not an issue for me:

ulimit -c unlimited

This setting is local to the current shell, though. To keep it, you need to put the above line into your ~/.bashrc or (which is cleaner) adjust the limits in /etc/security/limits.conf.
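For the limits.conf route, the relevant lines might look like this (the user name is just an example; "soft" is the limit ulimit reports, "hard" is the ceiling the user may raise it to):

```
# /etc/security/limits.conf
# <domain>   <type>   <item>   <value>
astro        soft     core     unlimited
astro        hard     core     unlimited
```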

3. Ruling out ABRT

In Fedora, cores are sent to the Automatic Bug Reporting Tool (ABRT), so they can be posted to the Red Hat Bugzilla for developers to analyse. The kernel is configured so that all core dumps are piped right to abrt. This is set in /proc/sys/kernel/core_pattern. My settings look like this:

|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t %h %e 636f726500

This means that all core files are passed to the standard input of abrt-hook-ccpp. To rule ABRT out, change this setting simply to “core”, i.e.:

core

The core files will then be stored in the same directory as the executable and will be named core.PID.
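For reference, inspecting and changing the pattern from a shell might look like this (a sketch; writing the file requires root and changes system-wide behaviour):

```shell
# Inspect the current pattern (works as a regular user)
cat /proc/sys/kernel/core_pattern

# Switch to plain core files; %p expands to the PID of the dumped process
echo "core.%p" | sudo tee /proc/sys/kernel/core_pattern
```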

4. Send Right Signals

Not every process termination leads to a core dump. Keep in mind that a core file will be created only if the process receives one of these signals:

  • SIGSEGV
  • SIGFPE
  • SIGABRT
  • SIGILL
  • SIGQUIT

Example program

Here’s a series of steps to test whether your configuration is valid and the cores appear where they should. You can use this simple program to test it:

/* Print PID and loop. */

#include <stdio.h>
#include <unistd.h>

void infinite_loop(void)
{
    while(1);
}

int main(void)
{
    printf("PID: %d\n", getpid());
    fflush(stdout);

    infinite_loop();

    return 0;
}

Compile the source, run the program, and send a signal like the following to get a memory dump:

gcc infinite.c
astro@desktop:~$ ./a.out &
[1] 19233
PID: 19233
astro@desktop:~$ kill -SEGV 19233
[1]+  Segmentation fault      (core dumped) ./a.out
astro@desktop:~$ ls core*
core.19233

Analysing Core Files

If you already have a core file, you can open it using the GNU Debugger (gdb). For instance, to open the core file that was created earlier in this post and display a backtrace, do the following:

astro@desktop:~$ gdb a.out core.19233
GNU gdb (GDB) Fedora (7.3.1-47.fc15)
Copyright (C) 2011 Free Software Foundation, Inc.
Reading symbols from /home/astro/a.out...(no debugging symbols found)...done.
[New LWP 19233]
Core was generated by `./a.out'.
Program terminated with signal 11, Segmentation fault.
#0  0x08048447 in infinite_loop ()
Missing separate debuginfos, use: debuginfo-install glibc-2.14.1-5.i686
(gdb) bt
#0  0x08048447 in infinite_loop ()
#1  0x0804847a in main ()
(gdb)

Sources

New to Fedora

Just yesterday, after a long time of evaluating all the pros and cons, I finally decided to install Fedora 15 on my desktop machine. I’ve been using Debian-like distros since high school and I’ve grown quite attached to them over the years. I started with Ubuntu 7.04, then switched to Kubuntu to enjoy some KDE. By the time I went to college I was using Debian, which made me happy for some time. After Debian I switched back to Ubuntu.

Now, why did I choose Fedora? Well, the Ubuntu on my machine was quite outdated, the new 11.04 comes with Unity and software centers, and I just wanted to try something different. On top of that, Fedora 15 comes with the new Gnome 3!

Gnome 3 Shell

Now, after the install and some baby steps with Fedora, I have to say it’s pretty amazing :-). The Gnome 3 environment is FAR from perfect, but it’s usable, and in my opinion it’s way better than KDE4. It takes some getting used to, but not that much, actually. I was pretty bummed out at first without a task bar and the top menu, but I see now that I don’t need those anymore :-).