Kernel driver testing

KUnit provides a quick way of running unit tests during development, without requiring a virtual machine or separate hardware. A unit test is supposed to test a single unit of code in isolation, hence the name. KUnit tests can be run on most architectures, and most tests are architecture independent. All built-in KUnit tests run on kernel startup. Alternatively, KUnit and KUnit tests can be built as modules, and the tests will run when the test module is loaded.

KUnit can also run tests under User Mode Linux, without needing a virtual machine or actual hardware. KUnit is fast: excluding build time, from invocation to completion KUnit can run several dozen tests in only 10 to 20 seconds. This might not sound like a big deal to some people, but having such fast, easy-to-run tests fundamentally changes the way you go about testing, and even writing code, in the first place.
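
As a sketch of what a KUnit test looks like, the following writes a minimal test suite to a file. The function under test, my_add(), is a made-up example; the KUnit macros (KUNIT_CASE, KUNIT_EXPECT_EQ, kunit_test_suites) are the real API. Inside a kernel tree, the test would be wired into a Kconfig/Makefile and run with the kunit.py wrapper.

```shell
# Write a minimal KUnit test file; my_add() is a hypothetical
# function under test used only for illustration.
cat > /tmp/example_test.c << 'EOF'
#include <kunit/test.h>

static int my_add(int a, int b) { return a + b; }

static void my_add_test(struct kunit *test)
{
	/* Logs a failure if the two values differ. */
	KUNIT_EXPECT_EQ(test, 3, my_add(1, 2));
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(my_add_test),
	{}
};

static struct kunit_suite example_test_suite = {
	.name = "example",
	.test_cases = example_test_cases,
};
kunit_test_suites(&example_test_suite);
EOF

# Inside a kernel tree, the UML-based wrapper runs such tests:
#   ./tools/testing/kunit/kunit.py run
grep -c 'KUNIT_' /tmp/example_test.c
```

Built into the kernel, such a test runs at boot; built as a module, it runs when the module is loaded, matching the behavior described above.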

In addition to the testing resources we have discussed so far, there are projects, both open source and initiated by hardware vendors, that are worth a mention. Each of these projects focuses on a specific area of the kernel, and in some cases on a specific space, such as embedded or enterprise, where the kernel is used.

We will look at a few in this section. The Linux Test Project (LTP) test suite is a collection of tools to test the reliability, robustness, and stability of the Linux kernel and related features. The test suite can be customized by adding new tests, and the LTP project welcomes contributions.

The Linux Driver Verification project's goals are to improve the quality of Linux device drivers, develop an integrated platform for device driver verification, and adopt the latest research outcomes to enhance the quality of verification tools.

The Linux Standard Base (LSB) is a Linux Foundation workgroup created to reduce the costs of supporting the Linux platform by reducing the differences between various Linux distributions and ensuring application portability between distributions. If anything, divergence in the Unix world taught us that it is vital to avoid it in the Linux world.

This is exactly the reason why you can take an rpm, convert it to a deb, and install and run it, and how sweet is that. Static analysis tools analyze the code without executing it, hence the name static analysis.

There are a couple of static analysis tools that are specifically written for analyzing the Linux kernel code base. Sparse is a static type-checking program written specifically for the Linux kernel by Linus Torvalds. Sparse is a semantic parser: it creates a semantic parse tree to validate C semantics, and it performs lazy type evaluation. The kernel build system has support for sparse and provides make options to compile the kernel with sparse checking enabled. Smatch analyzes source code to detect programming logic errors.
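
To make the sparse workflow concrete, here is a toy file containing the kind of issue sparse flags ("Using plain integer as NULL pointer"), with the invocations shown as comments; sparse itself is assumed to be installed when the checks are actually run, and the file name and contents are made up.

```shell
# A toy file with a type issue sparse would flag.
cat > /tmp/sparse_demo.c << 'EOF'
static int *p = 0;   /* sparse warns: use NULL, not plain 0 */
EOF

# Check one file:     sparse /tmp/sparse_demo.c
# In a kernel tree:   make C=1    # check files being re-compiled
#                     make C=2    # check all source files
grep -n 'p = 0' /tmp/sparse_demo.c
```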

It can detect logic errors such as an attempt to unlock an already unlocked spinlock. It is actively used to detect logic errors in the Linux kernel sources. Please follow the instructions in the smatch git repo on how to get and compile smatch; smatch is a work in progress, and the instructions keep changing. Now consider the case of a function foo() that gains a new parameter: all usages of foo will need to be updated to the new convention, which would be a very laborious task.

Using Coccinelle, this task becomes easier, with a script that looks for all instances of foo(parameter1) and replaces them with foo(parameter1, NULL). Once this task is done, all instances of foo() can be examined to see whether passing a NULL value for parameter2 is a good assumption. For more information on Coccinelle and how it is used in fixing problems in various projects, including the Linux kernel, please refer to the Coccinelle project page. We covered a lot of ground in this paper.
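
As an illustration of the foo() change just described, a Coccinelle semantic patch can express it in a few lines of SmPL. Since spatch may not be installed, the script below also performs the equivalent naive textual rewrite with sed on a toy file; Coccinelle parses C properly, sed is just pattern matching, and foo/bar here are made-up names.

```shell
# The SmPL semantic patch: add a NULL second argument to foo().
cat > /tmp/add_null_arg.cocci << 'EOF'
@@
expression E;
@@
- foo(E)
+ foo(E, NULL)
EOF
# With Coccinelle installed:
#   spatch --sp-file /tmp/add_null_arg.cocci --in-place file.c

# Naive sed equivalent on a toy file, for illustration only:
cat > /tmp/cocci_demo.c << 'EOF'
int bar(void) { return foo(x) + foo(y); }
EOF
sed -i 's/foo(\([^)]*\))/foo(\1, NULL)/g' /tmp/cocci_demo.c
cat /tmp/cocci_demo.c
```

Unlike the sed one-liner, the semantic patch will not touch strings, comments, or identifiers that merely contain "foo", which is why Coccinelle is the right tool for mass changes.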

I leave you with a few references for further reading on the topics we discussed. I would like to thank Khalid Aziz of Oracle for his review, proofreading, and valuable suggestions for improvement. My special thanks to Mauro Chehab and Guy Martin of Samsung for their review and feedback at various stages of writing this paper.

My special thanks to Ibrahim Haddad of Samsung for his support and encouragement, without which I would probably never have set out to write this paper in the first place.

Linux Kernel Testing and Debugging

Linux Kernel Testing Philosophy

Testing is an integral and important part of any software development cycle, open or closed, and the Linux kernel is no exception to that.

Configuring Development and Test System

Let's get started. If build-essential is not already installed, run the following command to install it: sudo apt-get install build-essential. At this point, you may install the following packages as well, so the system is ready for cross-compiling Linux kernels.

The Stable Kernel

Start by cloning the stable kernel git, building, and installing the latest stable kernel.

Living in The Fast Lane

If you like driving in the fast lane and have the need for speed, clone the mainline kernel git, or better yet, the linux-next git.

Applying Patches

Linux kernel patch files are text files that contain the differences from the original source to the new source. There are a couple of ways to tell git about the new files and have it track them, thereby avoiding the above issues. Option 1: when a patch that adds new files is applied using the patch command, run "git clean" to remove untracked files before running "git reset --hard". Option 2: alternatively, tell git to track the newly added files by applying the patch with "git apply --index".
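
The effect of "git apply --index" can be seen in a toy repository; the patch and file names below are made up. With --index, git stages the new file as the patch is applied, so it is tracked immediately:

```shell
# Demonstrate "git apply --index" tracking a newly added file.
set -e
rm -rf /tmp/patchdemo && mkdir /tmp/patchdemo && cd /tmp/patchdemo
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base commit"

# A patch that creates a brand-new file:
cat > add-newfile.patch << 'EOF'
--- /dev/null
+++ b/newfile.c
@@ -0,0 +1 @@
+int new_symbol;
EOF

# --index applies the patch AND stages newfile.c, so a later
# "git reset --hard" will not leave it behind as untracked.
git apply --index add-newfile.patch
git status --porcelain newfile.c
```

With plain "git apply" (or patch -p1), newfile.c would show up as untracked instead, which is exactly the situation Option 1 cleans up with "git clean".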

Basic Testing

Once a new kernel is installed, the next step is to try to boot it and see what happens. Then run a few usage tests: is networking (wifi or wired) functional?

Does ssh work? Run rsync of a large file over ssh, run git clone and git pull, start a web browser, read email, and download files via ftp, wget, etc.

Examine Kernel Logs

Checking for regressions in dmesg is a good way to identify problems, if any, introduced by the new code. A few resources go into detail on how to run ktest: ktest-eLinux.

Auto Testing Tools

There are several automated testing tools and test infrastructures that you can choose from based on your specific testing needs.
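
One simple way to check for regressions is to save dmesg from the known-good kernel and diff it against the new kernel's dmesg; the log lines below are fabricated for illustration:

```shell
# Fabricated sample logs standing in for saved dmesg output.
cat > /tmp/dmesg.old << 'EOF'
usb 1-1: new high-speed USB device number 2
e1000e: eth0 NIC Link is Up 1000 Mbps
EOF
cat > /tmp/dmesg.new << 'EOF'
usb 1-1: new high-speed USB device number 2
e1000e: eth0 NIC Link is Up 1000 Mbps
mydriver 0000:00:1f.0: firmware load failed
EOF

# Lines prefixed with ">" appear only in the new kernel's log
# and are candidate regressions (diff exits non-zero on change).
diff /tmp/dmesg.old /tmp/dmesg.new || true
```

In practice dmesg lines carry timestamps, so capturing the logs with dmesg -t (no timestamps) makes the diff much cleaner.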

Autotest

Autotest is a framework for fully automated testing. Running the lava-test tool to install LTP will automatically install any dependencies, download the source for the most recent release of LTP, compile it, and install the binaries in a self-contained area so that they can be removed easily when the user runs uninstall.

These results are saved for future reference; this is a good feature for finding regressions, if any, between test runs. A summary of commands to run, as an example:

Show a list of tests supported by lava-test: lava-test list-tests
Install a new test: lava-test install ltp
Run the test: lava-test run ltp
Check results: lava-test results show ltp-timestamp.

Kernel Debug Interfaces

The Linux kernel has support for static and dynamic debugging via configuration options, debug APIs, interfaces, and frameworks.

Debug Configuration Options - Static

The Linux kernel core and several Linux kernel modules, if not all, include kernel configuration options for debugging. Several of these static debug options can be enabled at compile time.

Debug messages are logged in the dmesg buffer. To enable the dynamic debug feature in a module so that it persists across reboots, create or change the module's modname.conf file under /etc/modprobe.d.

Tracepoints

So far we talked about various static and dynamic debug features. Please read "Tips on how to implement good tracepoint code" for more insight into how tracing works.

Tracepoint mechanism

Tracepoints use jump labels, a technique that modifies the code of a branch in place.
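
At run time, dynamic debug is driven by writes to the debugfs control file. The sketch below exercises the statement syntax against a scratch file, since writing to the real control file requires a live system with root access; mymod is a made-up module name.

```shell
# Stand-in for /sys/kernel/debug/dynamic_debug/control, which
# needs root and a CONFIG_DYNAMIC_DEBUG kernel on a live system.
ctl=/tmp/dynamic_debug_control

echo 'module mymod +p' > "$ctl"               # enable pr_debug() in module mymod
echo 'file drivers/usb/core/*.c +p' >> "$ctl" # enable by source file glob
echo 'module mymod -p' >> "$ctl"              # disable again
cat "$ctl"
```

The matching modprobe.d line to make such a setting persist would be of the form "options mymod dyndbg=+p".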

When the tracepoint is disabled, the code path looks like:

    [ code ]
    nop
back:
    [ code ]
    return;
tracepoint:
    [ tracepoint code ]
    jmp back;

When it is enabled (notice how the tracepoint code appears in the code path):

    [ code ]
    jmp tracepoint
back:
    [ code ]
    return;
tracepoint:
    [ tracepoint code ]
    jmp back;

Linux PM Sub-system Testing

Using debug, dynamic debug, and tracing, let's run a few suspend-to-disk PM tests. Note: this mode is tested on ACPI systems. This is how the bisect process works:

    git bisect start
    git bisect bad          # current version is bad
    git bisect good v3.<n>  # last known good version

Once the code is ready, compile it. Save the make output to a file to see whether the new code introduced any new warnings, and address the warnings, if any. Once the code compiles cleanly, install the compiled kernel and boot-test it. If it boots successfully, make sure there are no new errors in dmesg by comparing it with the previous kernel's dmesg. Run a few usage and stress tests; please refer to the testing content we discussed earlier in this paper. If the patch is for fixing a specific bug, make sure the patch indeed fixes the bug.

If the patch fixes the problem, make sure the other regression tests for the module also pass. Identify regression tests for the patched module and run them.

When a patch touches other architectures, cross-compile build testing is recommended. Please check the following in the source git as a reference to identify tests.

Static Analysis and Tools

Static analysis tools analyze the code without executing it, hence the name static analysis.

Smatch

So what do we do about all of these semantic and logic problems found by Sparse and Smatch? Some of these semantic issues are global in nature, due to cut-and-paste of code. In some cases, when interfaces become obsolete or change slightly, a mass change to update several source files becomes necessary.

This is where Coccinelle comes to the rescue. Coccinelle is a program matching and transformation engine that provides the language SmPL (Semantic Patch Language) for specifying desired matches and transformations in C code. Coccinelle was initially targeted towards performing collateral evolutions in Linux.

For more information on Coccinelle and how it is used in fixing problems in various projects, including the Linux kernel, please refer to the Coccinelle project page.

References

We covered a lot of ground in this paper.

Use the Contoso test certificate. For more information about how this certificate was created, see Creating Test Certificates. The sign command configures SignTool to sign the specified catalog file and add a time stamp; including a time stamp provides the necessary information for key revocation in case the signer's code signing private key is compromised.

You can open the .cat file as described before. The samples do not come with the Windows 8 or 8.1 WDK. When the catalog file is opened by double-clicking it in Windows Explorer, you can inspect its contents. Below we provide the preferred command-line option of installing the certificate, using the certmgr tool. The driver can now be tested either on the signing computer or on the test computer.

If you are using the test computer, copy the driver package to the machine, keeping the file structure intact. Copy the certificate file to any directory on the test computer. Reboot the computer. You can now run certmgr to verify that the certificate is installed. If it is not visible, another way to install the certificate is to open it, install it on the above two nodes, and verify again. Then verify the signing of the .cat file and the .sys file.

Open an elevated command window and, assuming signtool is available, execute the following commands in the appropriate directory. The two verification commands will generate one error, because the driver is test signed and the certificate is not a trusted certificate.

The above two verification commands will be very useful in release signing, which is discussed later. The driver is now ready to be installed and tested on the test computer.

It is always advisable to set the following registry key correctly, so that verbose logs are gathered in the setupapi log. After the install, the new setupapi log can be examined. Once the driver is successfully installed, it can be tested on the development computer or on the test computer. After the system has rebooted in Step 2, the test-signed driver package can be installed and loaded.

There are four ways to install a driver package: DPInst, PnPUtil, DevCon, and the Windows Add Hardware Wizard. DPInst and PnPUtil pre-install the driver package, whereas with DevCon and the Add Hardware Wizard the device as well as the driver can be installed.

Pre-installing a driver helps the OS find the driver when a device is connected to the computer. The above command will install all the drivers corresponding to all the .inf files. With the DPInst tool, a driver can be removed just by referring to the .inf file of the driver. The DevCon install command will install the driver as well as the device, and its remove command removes devices with the specified hardware or instance ID; it is valid only on the local computer, and to reboot when necessary, include -r.

After a device has been removed, two commands are necessary to remove the driver. This command will show the list of all the oemNnn.inf files. Run the following command, which will show all the available switches; use of the switches is self-explanatory, so there is no need to show any examples.
