Wrapping up year 2015 and plans for the new year, 2016

It has been around 4 months and I haven't written a single blog post! I missed blogging. I wanted to share many things through this blog, but for one reason or another I didn't find the time. So, finally I decided to write this post. This is going to be a non-technical post; I'll write about my learning at Oracle in separate posts.

The year 2015 turned out to be one of the most awesome years of my life. I went through a lot of ups and downs which almost changed my life. So, here are some highlights from the year 2015, in no particular order:

  • Got accepted as an Outreachy Linux kernel intern and worked with my mentor Julia Lawall on the Coccinelle project
  • Became an active contributor to the Linux kernel community and got a chance to interact with so many intelligent people in the open source community
  • Finally finished my graduation in computer engineering
  • Was selected as a winner of the Linux Foundation Training scholarship, 2015 under the Kernel Guru category
  • Became physically fit [Oh, breaking 2-3 fracture bolts is an exception :P]
  • Got a chance to attend LinuxCon Europe with other Outreachy interns, but the Indian interns couldn't make it because of visa issues. Hope to attend it this year.
  • Helped Outreachy round 11 applicants with Linux kernel specific tasks during the application period
  • Joined Oracle in November as a Linux Kernel Engineer and started living a life I always dreamed about
  • Relocated to the Silicon Valley of India: Bangalore
  • Mentoring for the RTEMS project in Google Code-in to help pre-university students with open source tasks
  • Was interviewed by Swapnil Bhartiya, one of the well-known open source journalists, for the IT World magazine. Here is a link to my interview.
  • Started writing on Quora in my free time and wish to continue it in the next year as well
  • Finally received my new Lenovo T450 ultrabook yesterday [after so much waiting] and just finished setting it up. A perfect end to the year 2015, isn't it! :)

Some of my future plans:

  • Excited about new adventures in a new city
  • Hoping to learn many new things by contributing more to the Linux kernel community through my new job
  • I wish to complete some of my personal projects this year
  • Planning to contribute to other open source projects of my interest this year
  • We have started a new Linux Kernel meetup group in Bangalore and I am very excited for the first meetup, which is on 16th Jan, 2016
  • Started learning Kannada and wish to continue it this year so that I can connect with local people more naturally
  • Planning to attend some conferences and meetups

There are some more updates and plans but let’s just wait for the right time to publish them.

Basically, I have so many expectations from the upcoming year, and I hope to learn more from new experiences. I will try to blog more often.

Stay tuned and have a great year ahead!

Devm functions and their CORRECT usage

Hi,

Let's start this post with some good news. Guess what, I won the Linux Foundation Training Scholarship under the Kernel Guru category! I am so excited about taking the Linux Foundation training class. They announced the names of the scholarship winners at LinuxCon NA, which was in Seattle last week, and it turned out that I am the youngest scholar to win this scholarship. :)

So, let's come to the topic of this post. Over the last couple of weeks, I worked on devm functions and their incorrect usage. I couldn't manage to update my blog properly about it, but in this post I have tried to write a good summary of my work on devm functions, adding links wherever possible for anyone who is more interested in the discussions and patches. First of all, let's start by introducing devm functions.

Devm Functions: What, Why and Where

There are some common resources used by drivers which are allocated using various functions; in the managed version, these allocations are stored as a linked list of arbitrarily sized memory areas called devres. Some of this memory is allocated in the probe function, but all of it should be freed on a failure path or before the driver is detached. Otherwise, the driver leaks resources and wastes main memory.

There are a couple of reasons behind introducing devm functions. The first is, of course, resource leaking: if anything fails in the middle of the probe function, everything allocated so far must be freed. Also, the remove function is often just duplicated code from probe's error handling. And most importantly, it is hard to spot problems in failure handling with non-managed resource allocation. So, we need managed versions of such functions: devm functions. Devm functions basically record allocations in the order the resources are allocated and deallocate them automatically in the reverse order. Here is a link to slides which can be useful for understanding devm functions and their usage.

Problems I encountered or came across while looking into devm functions

1. Sometimes what happens is that developers want to use devm functions because they think it is better to use managed resource functions, but they don't know how to use them and end up messing things up. For example, here is one of my patches. In this file, we already have devm_snd_soc_register_card and, as I said, devm functions automatically handle when to free memory. So we actually don't need to call the function for unregistering the card. I have sent some patches for similar cases, and I have provided links to the accepted patches at the end of this post.

2. While working on devm functions, I found that many files use devm_free_irq in the remove function along with devm_request_irq in the probe function. One can understand needing devm_free_irq in the remove function when we don't have devm counterparts for every allocated resource and the IRQ needs to be released before something else. That can be a case where we need to call devm_free_irq explicitly. But not all cases are like that. Here is a link to one such discussion, on a case where the use of devm_free_irq is not necessary at all. There are some other interesting cases for devm_free_irq, but one needs to look closely at each case. It is also possible that there are similar cases for other devm functions. Another example is this.

3. Sometimes the memory lives within a single function and is freed a few lines of code later. Then we don't really need devm functions, because they just waste the extra memory used for the devres structures and a few extra cycles maintaining them. Here is such an example.

Here are the Coccinelle semantic patches which I used to detect such cases. The file devm_entry_points is used to detect problematic usage of devm functions, and devm_opportunity is used to find files which still use non-managed functions whose devm counterparts already exist. I worked on each case found by devm_entry_points, but I am not going to discuss them all here. In case you have any questions, please ask me here and I'll be happy to answer them.

I sent many patches using both of these files, plus some for issues which came my way while working on devm functions. Here are links to some of my accepted patches for reference, and there are many more on their way to being accepted. All of my accepted patches can be found here. I am going to send some more, as there are still many opportunities where devm functions can be used.

I know understanding devm functions is a little bit complicated at first, but once you understand them properly, you can enjoy their beauty. Please ping me with any queries regarding devm functions or my patches.

Links:

  1. power_supply: bq24735: Convert to using managed resources
  2. ASoC: tegra: Use devm_clk_get
  3. ASoC: tegra: Convert to managed resources
  4. ASoC: davinci-vcif: Use devm_snd_soc_register_component
  5. crypto: sahara - Use dmam_alloc_coherent
  6. ASoC: rockchip: i2s: Adjust devm usage
  7. ASoC: samsung: Remove redundant arndale_audio_remove
  8. ata: pata_arasan_cf: Use devm_clk_get

Update 1: I have continued working on devm functions even after the internship, so there are now many patches accepted in the mainline kernel. And there are still many opportunities where one can keep working. So, here is a link to all of my patches; one can go there and check things out for better understanding.

Creating a virtual machine using KVM and virt-install/virt-manager

This post discusses creating a virtual machine using KVM, virt-install and virt-manager. So, here is a step-by-step process, from installing the necessary packages to running your favourite OS in a virtual machine. I am assuming that you are running Ubuntu.

Step-1 :- Checking hardware virtualization

1. Check if your processor supports hardware virtualization or not.

egrep -c '(vmx|svm)' /proc/cpuinfo

If the output of the above command is 0, then your CPU doesn't support hardware virtualization; if the output is 1 or more, then it does. But you still need to check in your BIOS whether virtualization [VT-x on Intel and AMD-V on AMD processors] is enabled or not.

2. You can also execute the following command to check for KVM compatibility:

kvm-ok

The output of this command should look like the following:

INFO: /dev/kvm exists
KVM acceleration can be used

If you see something like the following, then you can still run virtual machines, but they'll be much slower without the KVM extensions.

INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used

But if you see something like the following, then check your kernel version. KVM works with the Ubuntu kernel only, so if you are running a vanilla kernel there may be an issue. Either download an Ubuntu kernel image, or use the kernel version which comes by default with your particular Ubuntu version. For example, Ubuntu 14.04 comes with 3.13.0-24-generic.

INFO: /dev/kvm does not exist
HINT: sudo modprobe kvm_intel modprobe
FATAL: Module msr not found.

Step-2 :- Installation of kvm

1. sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

2. Add your username to the group libvirtd:

sudo adduser `id -un` libvirtd

After running the above command, reboot your system to effectively add your username to the libvirtd group. I repeat: do not just re-login to your system, please reboot.

Step-3 :- Verifying installation

You can verify your installation using the following command:

virsh -c qemu:///system list

This should give output like this:

Id Name State
———————————-

If this command fails to connect to the hypervisor, then you can check the errors and their solutions here.

Step-4 :- Restart kernel modules

modprobe -a kvm

Step-5 :- Install virt-viewer

You can install virt-viewer for viewing instances:

sudo apt-get install virt-viewer

Step-6 :- Install virt-manager

This is optional. Virt-manager is a GUI application for creating and managing virtual machines. You can install it either from the Ubuntu Software Center or using the following command:

sudo apt-get install virt-manager

Step-7 :- Creating virtual machine

You can either use the following command for creating virtual machines, or follow this link if you want to create one using virt-manager.

virt-install --name=guest_name --arch=x86_64 --vcpus=1 --cdrom=/var/lib/libvirt/image/ubuntu-12.04.5-desktop-amd64.iso --disk path=/mnt/virtual_machines/guest_name.img,size=20

Virt-install is written using the libvirt API. It is very interesting to learn how all the pieces are bound together. In case someone is interested, go ahead and clone the repo or browse it online here. Also, in the command above I only gave values for the mandatory options; one can check the other options too.

So, this is the complete process of creating a virtual machine using KVM and libvirt. I will write some other posts about libvirt, as I am learning it these days along with my internship.

Macro builtin_platform_driver

This post introduces the use of the macro builtin_platform_driver. I came across it while solving cases of module init/exit boilerplate code. Paul Gortmaker introduced this macro; one can see many patches with his name in the Linux kernel git tree. So, let's discuss the reason behind introducing it and where we can use it. I am not going into too much detail; instead I am providing links so that one can go there and understand what is happening.

Why do we need a macro like builtin_platform_driver?

Basically, there are an increasing number of non-modular drivers which use module_driver type register functions, and there are several downsides to this. One can see the commit from Paul Gortmaker here, which has a very informative commit log.

Where can we use this macro?

One can check the Kconfig file while handling any driver, and see whether the driver is configured with a Kconfig option that is declared as a bool. If that is the case, then one can use this macro for that file. But one needs to make sure that if the driver depends on other drivers, all of them are declared as bool too. Module drivers usually use tristate.
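
For example, a hypothetical Kconfig fragment (both option names are made up for illustration) might look like this; only the bool-style option is a candidate for builtin_platform_driver:

```kconfig
# Hypothetical examples -- the option names are invented.

config FOO_CORE
	bool "Foo core support"
	# bool: can never be built as a module, so builtin_platform_driver fits

config BAR_DRIVER
	tristate "Bar driver support"
	# tristate: may be built as a module, so keep module_platform_driver
```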

How did I come across this macro?
When I was working on the module init/exit boilerplate issue, I saw some patches from Paul Gortmaker. Also, while handling one case, Paul Bolle suggested that I use this macro. Although I knew that this macro existed, I was not sure about using it for that particular file. I ended up using it for the file drivers/hwtracing/coresight/coresight-replicator.c. My patch can be found here.

Can Coccinelle help handle such cases?
No, Coccinelle can't detect such cases, because for each file one needs to check the options in its Kconfig file, so that has to be done by hand. But Coccinelle can help with the transformation part; one can write a semantic patch accordingly. I have added some semantic patches I worked on to my github account. One can use them and change them according to their needs.

Ether device API functions and some other hacks

Hi again

I worked on ether device API functions in the last month, before the module init/exit cases. Mainly I worked on 3 functions and got an understanding of some others. I am going to talk about some of those other functions along with the ether device API functions in this post. So, this is going to be a somewhat mixed post.

1. eth_zero_addr:

This is an ether device API function which replaces memset for assigning the zero address. The Linux kernel community prefers using the eth_zero_addr function instead of memset to assign the zero address to a given array. The function definition goes like this:

static inline void eth_zero_addr(u8 *addr)
{
	memset(addr, 0x00, ETH_ALEN);
}

There were around 10-12 files with such cases. I sent patches for all of them, and most are now in the kernel tree. Here is the semantic patch which I used to make this change. It is pretty simple. The only subtlety is that sometimes the value '6' is used in place of ETH_ALEN. In that case, one needs to check that it really is network code and that 6 in the memset means ETH_ALEN.

@eth_zero_addr@
expression e;
@@

-memset(e,0x00,\(6\|ETH_ALEN\));
+eth_zero_addr(e);

2. eth_broadcast_addr:
This is also an ether device API function that replaces memset. The only difference is that this function is used to assign the broadcast address to the given address array. There were only 2-3 such cases, and I handled all of them too. The semantic patch I used is as follows:

@eth_broadcast_addr@
identifier e;
@@

-memset(e,\(0xff\|0xFF\|255\),\(6\|ETH_ALEN\));
+eth_broadcast_addr(e);

3. eth_hw_addr_random:

This is an interesting function. It generates a random Ethernet [MAC] address to be used by a net device and sets addr_assign_type so that the state can be read via sysfs and used by userspace. The definition of this function goes like this:

static inline void eth_hw_addr_random(struct net_device *dev)
{
	dev->addr_assign_type = NET_ADDR_RANDOM;
	eth_random_addr(dev->dev_addr);
}

Basically, I found this case from the deprecated function list. There were around 63 uses of random_eth_addr in Linux 3.0, and now very few remain. The primary reason is that random_eth_addr was renamed to eth_random_addr for consistency, so there were many commits proposing that change. But when I looked into the details, I found a large bunch of commits from Danny Kukawka in 2012 replacing random_eth_addr with eth_hw_addr_random. In each case, one needs to check many things at a time, because sometimes the original call of random_eth_addr/eth_random_addr does not make the assignment to NET_ADDR_RANDOM, and in some cases we have seen that merging the call and the assignment is not correct. So, from my side I can only compile-test the change, and in each case I need to mention this possibility under the '---' line. As an experiment, I sent one patch explaining the situation and it turned out positive: after testing it, the maintainer applied the patch. That patch can be found here. I am going to send some more patches in the future.

4. Deprecated macro DEFINE_PCI_DEVICE_TABLE

This macro is deprecated, and a comment says so in the header file too. I accidentally came across it while reading some other code and thought I would handle it. The point is that we can use struct pci_device_id directly instead of this macro. When I searched for the remaining uses of this macro in linux-next, it turned out that most of them had been handled by one of Julia's other interns, in Paris. Only 3 uses remained; I sent patches for them. One is applied in a local branch and the others are on their way. This was a simple case too. The semantic patch I used is as follows:

@@
identifier a;
declarer name DEFINE_PCI_DEVICE_TABLE;
initializer i;
@@
- DEFINE_PCI_DEVICE_TABLE(a)
+ const struct pci_device_id a[]
= i;

5. Unnecessary function snd_pcm_lib_preallocate_free_for_all()

I picked this from Julia's list of deprecated functions. There is no point in using the function snd_pcm_lib_preallocate_free_for_all(), because the ALSA core takes care that all preallocated memory is freed when the PCM itself is freed. Only one case was left. I sent a patch for it yesterday, and to my surprise it was applied the very next minute by Mark. :) One can look at the patch here.

By the way, I came across the non-modular versions of the module init/exit macros. Recently, many such patches have been added to the kernel tree. It is interesting to understand the reasons behind introducing and using such macros; I guess one separate post would be fine for that. I will publish that post in 2-3 days. Also, I have been working on fixing some usages of devm functions for the last 2 weeks. I'll write a post about it next week too.

Till then, signing off. Have a happy weekend.

[Part II] Macros module init/exit, Boilerplate code and Linux Kernel

So, as I said in my last post, I will explain the transformation semantic patch which can be used after matching the module init/exit part. Before I go into detail, I would like to clear up some points. Someone asked me about the use of * in the semantic patch of my last post, so I thought I should cover it here too. Basically, we use * to mark something of interest. For example, in our case we just want to check whether there are actually some cases present which do nothing except register/unregister in module init/exit. One can't use * together with +/-. Following is the output of the semantic patch explained in the last post.

diff -u -p ./lib/ts_fsm.c /tmp/nothing/lib/ts_fsm.c
--- ./lib/ts_fsm.c
+++ /tmp/nothing/lib/ts_fsm.c
@@ -327,12 +327,10 @@ static struct ts_ops fsm_ops = {
static int __init init_fsm(void)
{
- return textsearch_register(&fsm_ops);
}
static void __exit exit_fsm(void)
{
- textsearch_unregister(&fsm_ops);
}
MODULE_LICENSE("GPL");

Now let's continue from where we left off in the last post. So, after matching the cases, we need to remove the module init/exit functions along with the module_init/module_exit declarations. Also, we need to add a helper macro like module_platform_driver. We can do that for individual cases by matching the register/unregister functions. Here is an example of such a semantic patch, for cases where we can use the helper macro module_platform_driver.

@r@
declarer name module_init;
identifier f;
@@
module_init(f);

@s@
declarer name module_exit;
identifier e;
@@
module_exit(e);

@a@
identifier r.f;
identifier x;
@@
static f(...) {return platform_driver_register(&x); }

@b depends on a@
identifier s.e,a.x;
@@
static e(...) { platform_driver_unregister(&x); }

@t depends on r && a@
identifier r.f;
@@
-module_init(f);

@v depends on s && a && b@
declarer name module_platform_driver;
identifier s.e, a.x;
@@
-module_exit(e);
+module_platform_driver(x);

@c depends on b@
identifier r.f, a.x;
@@
-static f(...) { return platform_driver_register(&x); }

@d depends on c@
identifier s.e, a.x;
@@

-static e(...) { platform_driver_unregister(&x); }

In the first four rules of the semantic patch we do the matching, and then, depending on those 4 rules, we do the transformation. This semantic patch can be used for any such case; the only things one needs to change are the names of the register/unregister functions and the helper macro. Interesting, right! The best part is the algorithms used by Julia in developing the tool, because Coccinelle is very fast. And I think that's the beauty of Coccinelle :) In the last 2 weeks, I worked on some ether device API functions too, but maybe one separate post will be good to explain them. So, that will be the subject of my next post. Till then stay connected. Stay happy. Ttyl.

[Part I] Macros module init/exit, Boilerplate code and Linux Kernel

Hii

Let's talk about boilerplate code today. I worked on cases of boilerplate code in the Linux kernel during the last few days. I am going to divide this topic into 2 posts. In this post, I will talk about boilerplate code, cases of it, and how we can find them in the kernel using Coccinelle. In the second post, I will talk about how Coccinelle can help handle such cases.

Boiler plate code:

Boilerplate code is any seemingly repetitive code that shows up again and again in order to get some result that seems like it ought to be much simpler. Basically, boilerplate code (or boilerplate) refers to sections of code that have to be included in many places with little or no alteration.

Boilerplate code and init/exit macros:

In the kernel, the macro module_init can either be called during do_initcalls (if builtin) or at module insertion time (if a module). The macro module_exit is used to wrap the driver clean-up code with cleanup_module when used with rmmod and the driver is a module. If the driver is statically compiled into the kernel, module_exit has no effect. There can only be one module_init and one module_exit per module. In 70% of cases, drivers don't do anything special in module init/exit, so such boilerplate code can be eliminated using helper macros like module_platform_driver, module_pci_driver, module_pcmcia_driver etc. Here is an example of such code from the kernel:


static int __init snirm710_init(void)
{
	return platform_driver_register(&snirm710_driver);
}

static void __exit snirm710_exit(void)
{
	platform_driver_unregister(&snirm710_driver);
}

module_init(snirm710_init);
module_exit(snirm710_exit);

[From file drivers/scsi/sni_53c710.c]

Basically, these helper macros are defined for drivers whose init and exit paths do nothing but register and unregister. Sometimes we have unnecessary print statements and other code in module init/exit too; in such cases we can still use these helper macros. Currently, there are some general macros and some driver-specific helper macros present in the kernel, and many more such opportunities exist.

Macros module init/exit and Coccinelle

Generally, we can use the following Coccinelle semantic patch to match the functions and statements associated with module init/exit.

@r@
declarer name module_init;
identifier f;
@@
module_init(f);

@s@
declarer name module_exit;
identifier f;
@@
module_exit(f);

@a@
identifier r.f;
statement S;
@@
f(...) { S }

@depends on a@
identifier s.f;
statement S;
@@
f(...) {
*S
}

@b@
identifier s.f;
statement S;
@@
f(...) { S }

@depends on b@
identifier r.f;
statement S;
@@
f(...) {
*S
}

So, after getting the output of this semantic patch, I analyzed all the cases and grouped them together accordingly. I grouped them into categories like 'old macro' and 'new macro', meaning cases where we can use an already defined helper macro and cases where we need to define a new macro, respectively. I am thinking of putting that file on my github along with some Coccinelle scripts so that others can use it. I am going to handle most of the cases myself. I have sent some patches already and will send patches for all the cases where an old macro can be used. For the new macro cases, I will send patches introducing some of those macros which can handle the maximum number of cases, since such cases are already present there.

Note that the above script can help with matching only. For the transformation, one needs to be more specific in the script. I will talk about those scripts and some interesting related stuff in my next post.

Till then stay tuned!