Merge branch 'mauro-books' into docs-next

Merge Mauro's massive patch series creating the process and admin-guide
books.  I think there's a lot of stuff to clean up here, but there's no
point in holding things up for that.

Mauro sez:

This patch series continues the efforts of converting the Linux Kernel
documentation to Sphinx.

It contains text to ReST conversions of several files under Documentation,
and a few under the main dir (README, REPORTING-BUGS).

All patches on this series can be found on my development tree:
	https://git.linuxtv.org/mchehab/experimental.git/log/?h=lkml-books-v2

The Kernel docs html output after this series can be seen at:
	https://mchehab.fedorapeople.org/kernel_docs/
Commit 9e1f08607f by Jonathan Corbet, 2016-10-25 18:05:23 -06:00
149 changed files with 6031 additions and 5529 deletions


@@ -15,11 +15,11 @@ Following translations are available on the WWW:
ABI/
- info on kernel <-> userspace ABI and relative interface stability.
BUG-HUNTING
admin-guide/bug-hunting.rst
- brute force method of doing binary search of patches to find bug.
Changes
process/changes.rst
- list of changes that break older software packages.
CodingStyle
process/coding-style.rst
- how the maintainers expect the C code in the kernel to look.
DMA-API.txt
- DMA API, pci_ API & extensions for non-consistent memory machines.
@@ -33,7 +33,7 @@ DocBook/
- directory with DocBook templates etc. for kernel documentation.
EDID/
- directory with info on customizing EDID for broken gfx/displays.
HOWTO
process/howto.rst
- the process and procedures of how to do Linux kernel development.
IPMI.txt
- info on Linux Intelligent Platform Management Interface (IPMI) Driver.
@@ -48,7 +48,7 @@ Intel-IOMMU.txt
Makefile
- This file does nothing. Removing it breaks make htmldocs and
make distclean.
ManagementStyle
process/management-style.rst
- how to (attempt to) manage kernel hackers.
RCU/
- directory with info on RCU (read-copy update).
@@ -56,13 +56,13 @@ SAK.txt
- info on Secure Attention Keys.
SM501.txt
- Silicon Motion SM501 multimedia companion chip
SecurityBugs
admin-guide/security-bugs.rst
- procedure for reporting security bugs found in the kernel.
SubmitChecklist
process/submit-checklist.rst
- Linux kernel patch submission checklist.
SubmittingDrivers
process/submitting-drivers.rst
- procedure to get a new driver source included into the kernel tree.
SubmittingPatches
process/submitting-patches.rst
- procedure to get a source patch included into the kernel tree.
VGA-softcursor.txt
- how to change your VGA cursor from a blinking underscore.
@@ -72,7 +72,7 @@ acpi/
- info on ACPI-specific hooks in the kernel.
aoe/
- description of AoE (ATA over Ethernet) along with config examples.
applying-patches.txt
process/applying-patches.rst
- description of various trees and how to apply their patches.
arm/
- directory with info about Linux on the ARM architecture.
@@ -86,7 +86,7 @@ auxdisplay/
- misc. LCD driver documentation (cfag12864b, ks0108).
backlight/
- directory with info on controlling backlights in flat panel displays
bad_memory.txt
admin-guide/bad-memory.rst
- how to use kernel parameters to exclude bad RAM regions.
basic_profiling.txt
- basic instructions for those who want to profile the Linux kernel.
@@ -150,11 +150,11 @@ debugging-via-ohci1394.txt
- how to use firewire like a hardware debugger memory reader.
dell_rbu.txt
- document demonstrating the use of the Dell Remote BIOS Update driver.
development-process/
process/
- how to work with the mainline kernel development process.
device-mapper/
- directory with info on Device Mapper.
devices.txt
admin-guide/devices.rst
- plain ASCII listing of all the nodes in /dev/ with major minor #'s.
devicetree/
- directory with info on device tree files used by OF/PowerPC/ARM
@@ -166,8 +166,6 @@ dontdiff
- file containing a list of files that should never be diff'ed.
driver-model/
- directory with info about Linux driver model.
dvb/
- info on Linux Digital Video Broadcast (DVB) subsystem.
dynamic-debug-howto.txt
- how to use the dynamic debug (dyndbg) feature.
early-userspace/
@@ -178,7 +176,7 @@ efi-stub.txt
- How to use the EFI boot stub to bypass GRUB or elilo on EFI systems.
eisa.txt
- info on EISA bus support.
email-clients.txt
process/email-clients.rst
- info on how to use e-mail to send un-mangled (git) patches.
extcon/
- directory with porting guide for Android kernel switch driver.
@@ -226,9 +224,9 @@ ia64/
- directory with info about Linux on Intel 64 bit architecture.
infiniband/
- directory with documents concerning Linux InfiniBand support.
init.txt
admin-guide/init.rst
- what to do when the kernel can't find the 1st process to run.
initrd.txt
admin-guide/initrd.rst
- how to use the RAM disk as an initial/temporary root filesystem.
input/
- info on Linux input device support.
@@ -248,7 +246,7 @@ isapnp.txt
- info on Linux ISA Plug & Play support.
isdn/
- directory with info on the Linux ISDN support, and supported cards.
java.txt
admin-guide/java.rst
- info on the in-kernel binary support for Java(tm).
ja_JP/
- directory with Japanese translations of various documents
@@ -256,11 +254,11 @@ kbuild/
- directory with info about the kernel build process.
kdump/
- directory with mini HowTo on getting the crash dump code to work.
kernel-docs.txt
process/kernel-docs.rst
- listing of various WWW + books that document kernel internals.
kernel-documentation.rst
- how to write and format reStructuredText kernel documentation
kernel-parameters.txt
admin-guide/kernel-parameters.rst
- summary listing of command line / boot prompt args for the kernel.
kernel-per-CPU-kthreads.txt
- List of all per-CPU kthreads and how they introduce jitter.
@@ -302,7 +300,7 @@ magic-number.txt
- list of magic numbers used to mark/protect kernel data structures.
mailbox.txt
- How to write drivers for the common mailbox framework (IPC).
md.txt
admin-guide/md.rst
- info on boot arguments for the multiple devices driver.
media-framework.txt
- info on media framework, its data structures, functions and usage.
@@ -326,7 +324,7 @@ module-signing.txt
- Kernel module signing for increased security when loading modules.
mtd/
- directory with info about memory technology devices (flash)
mono.txt
admin-guide/mono.rst
- how to execute Mono-based .NET binaries with the help of BINFMT_MISC.
namespaces/
- directory with various information about namespaces
@@ -340,7 +338,7 @@ nommu-mmap.txt
- documentation about no-mmu memory mapping support.
numastat.txt
- info on how to read Numa policy hit/miss statistics in sysfs.
oops-tracing.txt
admin-guide/oops-tracing.rst
- how to decode those nasty internal kernel error dump messages.
padata.txt
- An introduction to the "padata" parallel execution API
@@ -378,7 +376,7 @@ ptp/
- directory with info on support for IEEE 1588 PTP clocks in Linux.
pwm.txt
- info on the pulse width modulation driver subsystem
ramoops.txt
admin-guide/ramoops.rst
- documentation of the ramoops oops/panic logging module.
rapidio/
- directory with info on RapidIO packet-based fabric interconnect
@@ -406,7 +404,7 @@ security/
- directory that contains security-related info
serial/
- directory with info on the low level serial API.
serial-console.txt
admin-guide/serial-console.rst
- how to set up Linux with a serial line console as the default.
sgi-ioc4.txt
- description of the SGI IOC4 PCI (multi function) device.
@@ -420,9 +418,9 @@ sparse.txt
- info on how to obtain and use the sparse tool for typechecking.
spi/
- overview of Linux kernel Serial Peripheral Interface (SPI) support.
stable_api_nonsense.txt
process/stable-api-nonsense.rst
- info on why the kernel does not have a stable in-kernel api or abi.
stable_kernel_rules.txt
process/stable-kernel-rules.rst
- rules and procedures for the -stable kernel releases.
static-keys.txt
- info on how static keys allow debug code in hotpaths via patching
@@ -444,7 +442,7 @@ trace/
- directory with info on tracing technologies within linux
unaligned-memory-access.txt
- info on how to avoid arch breaking unaligned memory access in code.
unicode.txt
admin-guide/unicode.rst
- info on the Unicode character/font mapping used in Linux.
unshare.txt
- description of the Linux unshare system call.
@@ -458,15 +456,13 @@ vgaarbiter.txt
- info on enable/disable the legacy decoding on different VGA devices
video-output.txt
- sysfs class driver interface to enable/disable a video output device.
video4linux/
- directory with info regarding video/TV/radio cards and linux.
virtual/
- directory with information on the various linux virtualizations.
vm/
- directory with info on the Linux vm code.
vme_api.txt
- file relating info on the VME bus API in linux
volatile-considered-harmful.txt
process/volatile-considered-harmful.rst
- Why the "volatile" type class should not be used
w1/
- directory with documents regarding the 1-wire (w1) subsystem.


@@ -84,4 +84,4 @@ stable:
- Kernel-internal symbols. Do not rely on the presence, absence, location, or
type of any kernel symbol, either in System.map files or the kernel binary
itself. See Documentation/stable_api_nonsense.txt.
itself. See Documentation/process/stable-api-nonsense.rst.


@@ -347,7 +347,7 @@ Description:
because of fragmentation, SLUB will retry with the minimum order
possible depending on its characteristics.
When debug_guardpage_minorder=N (N > 0) parameter is specified
(see Documentation/kernel-parameters.txt), the minimum possible
(see Documentation/admin-guide/kernel-parameters.rst), the minimum possible
order is used and this sysfs entry can not be used to change
the order at run time.

(File diff suppressed because it is too large.)


@@ -1208,8 +1208,8 @@ static struct block_device_operations opt_fops = {
<listitem>
<para>
Finally, don't forget to read <filename>Documentation/SubmittingPatches</filename>
and possibly <filename>Documentation/SubmittingDrivers</filename>.
Finally, don't forget to read <filename>Documentation/process/submitting-patches.rst</filename>
and possibly <filename>Documentation/process/submitting-drivers.rst</filename>.
</para>
</listitem>
</itemizedlist>


@@ -1,841 +1 @@
.. _submittingpatches:
How to Get Your Change Into the Linux Kernel or Care And Operation Of Your Linus Torvalds
=========================================================================================
For a person or company who wishes to submit a change to the Linux
kernel, the process can sometimes be daunting if you're not familiar
with "the system." This text is a collection of suggestions which
can greatly increase the chances of your change being accepted.
This document contains a large number of suggestions in a relatively terse
format. For detailed information on how the kernel development process
works, see :ref:`Documentation/development-process <development_process_main>`.
Also, read :ref:`Documentation/SubmitChecklist <submitchecklist>`
for a list of items to check before
submitting code. If you are submitting a driver, also read
:ref:`Documentation/SubmittingDrivers <submittingdrivers>`;
for device tree binding patches, read
Documentation/devicetree/bindings/submitting-patches.txt.
Many of these steps describe the default behavior of the ``git`` version
control system; if you use ``git`` to prepare your patches, you'll find much
of the mechanical work done for you, though you'll still need to prepare
and document a sensible set of patches. In general, use of ``git`` will make
your life as a kernel developer easier.
Creating and Sending your Change
********************************
0) Obtain a current source tree
-------------------------------
If you do not have a repository with the current kernel source handy, use
``git`` to obtain one. You'll want to start with the mainline repository,
which can be grabbed with::
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Note, however, that you may not want to develop against the mainline tree
directly. Most subsystem maintainers run their own trees and want to see
patches prepared against those trees. See the **T:** entry for the subsystem
in the MAINTAINERS file to find that tree, or simply ask the maintainer if
the tree is not listed there.
It is still possible to download kernel releases via tarballs (as described
in the next section), but that is the hard way to do kernel development.
1) ``diff -up``
---------------
If you must generate your patches by hand, use ``diff -up`` or ``diff -uprN``
to create patches. Git generates patches in this form by default; if
you're using ``git``, you can skip this section entirely.
All changes to the Linux kernel occur in the form of patches, as
generated by :manpage:`diff(1)`. When creating your patch, make sure to
create it in "unified diff" format, as supplied by the ``-u`` argument
to :manpage:`diff(1)`.
Also, please use the ``-p`` argument which shows which C function each
change is in - that makes the resultant ``diff`` a lot easier to read.
Patches should be based in the root kernel source directory,
not in any lower subdirectory.
To create a patch for a single file, it is often sufficient to do::
SRCTREE=linux
MYFILE=drivers/net/mydriver.c
cd $SRCTREE
cp $MYFILE $MYFILE.orig
vi $MYFILE # make your change
cd ..
diff -up $SRCTREE/$MYFILE{.orig,} > /tmp/patch
To create a patch for multiple files, you should unpack a "vanilla",
or unmodified kernel source tree, and generate a ``diff`` against your
own source tree. For example::
MYSRC=/devel/linux
tar xvfz linux-3.19.tar.gz
mv linux-3.19 linux-3.19-vanilla
diff -uprN -X linux-3.19-vanilla/Documentation/dontdiff \
linux-3.19-vanilla $MYSRC > /tmp/patch
``dontdiff`` is a list of files which are generated by the kernel during
the build process, and should be ignored in any :manpage:`diff(1)`-generated
patch.
Make sure your patch does not include any extra files which do not
belong in a patch submission. Make sure to review your patch -after-
generating it with :manpage:`diff(1)`, to ensure accuracy.
If your changes produce a lot of deltas, you need to split them into
individual patches which modify things in logical stages; see
:ref:`split_changes`. This will facilitate review by other kernel developers,
very important if you want your patch accepted.
If you're using ``git``, ``git rebase -i`` can help you with this process. If
you're not using ``git``, ``quilt`` <http://savannah.nongnu.org/projects/quilt>
is another popular alternative.
.. _describe_changes:
2) Describe your changes
------------------------
Describe your problem. Whether your patch is a one-line bug fix or
5000 lines of a new feature, there must be an underlying problem that
motivated you to do this work. Convince the reviewer that there is a
problem worth fixing and that it makes sense for them to read past the
first paragraph.
Describe user-visible impact. Straight up crashes and lockups are
pretty convincing, but not all bugs are that blatant. Even if the
problem was spotted during code review, describe the impact you think
it can have on users. Keep in mind that the majority of Linux
installations run kernels from secondary stable trees or
vendor/product-specific trees that cherry-pick only specific patches
from upstream, so include anything that could help route your change
downstream: provoking circumstances, excerpts from dmesg, crash
descriptions, performance regressions, latency spikes, lockups, etc.
Quantify optimizations and trade-offs. If you claim improvements in
performance, memory consumption, stack footprint, or binary size,
include numbers that back them up. But also describe non-obvious
costs. Optimizations usually aren't free but trade-offs between CPU,
memory, and readability; or, when it comes to heuristics, between
different workloads. Describe the expected downsides of your
optimization so that the reviewer can weigh costs against benefits.
Once the problem is established, describe what you are actually doing
about it in technical detail. It's important to describe the change
in plain English for the reviewer to verify that the code is behaving
as you intend it to.
The maintainer will thank you if you write your patch description in a
form which can be easily pulled into Linux's source code management
system, ``git``, as a "commit log". See :ref:`explicit_in_reply_to`.
Solve only one problem per patch. If your description starts to get
long, that's a sign that you probably need to split up your patch.
See :ref:`split_changes`.
When you submit or resubmit a patch or patch series, include the
complete patch description and justification for it. Don't just
say that this is version N of the patch (series). Don't expect the
subsystem maintainer to refer back to earlier patch versions or referenced
URLs to find the patch description and put that into the patch.
I.e., the patch (series) and its description should be self-contained.
This benefits both the maintainers and reviewers. Some reviewers
probably didn't even receive earlier versions of the patch.
Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
to do frotz", as if you are giving orders to the codebase to change
its behaviour.
If the patch fixes a logged bug entry, refer to that bug entry by
number and URL. If the patch follows from a mailing list discussion,
give a URL to the mailing list archive; use the https://lkml.kernel.org/
redirector with a ``Message-Id``, to ensure that the links cannot become
stale.
However, try to make your explanation understandable without external
resources. In addition to giving a URL to a mailing list archive or
bug, summarize the relevant points of the discussion that led to the
patch as submitted.
If you want to refer to a specific commit, don't just refer to the
SHA-1 ID of the commit. Please also include the oneline summary of
the commit, to make it easier for reviewers to know what it is about.
Example::
Commit e21d2170f36602ae2708 ("video: remove unnecessary
platform_set_drvdata()") removed the unnecessary
platform_set_drvdata(), but left the variable "dev" unused,
delete it.
You should also be sure to use at least the first twelve characters of the
SHA-1 ID. The kernel repository holds a *lot* of objects, making
collisions with shorter IDs a real possibility. Bear in mind that, even if
there is no collision with your six-character ID now, that condition may
change five years from now.
If your patch fixes a bug in a specific commit, e.g. you found an issue using
``git bisect``, please use the 'Fixes:' tag with the first 12 characters of
the SHA-1 ID, and the one line summary. For example::
Fixes: e21d2170f366 ("video: remove unnecessary platform_set_drvdata()")
The following ``git config`` settings can be used to add a pretty format for
outputting the above style in the ``git log`` or ``git show`` commands::
[core]
abbrev = 12
[pretty]
fixes = Fixes: %h (\"%s\")
.. _split_changes:
3) Separate your changes
------------------------
Separate each **logical change** into a separate patch.
For example, if your changes include both bug fixes and performance
enhancements for a single driver, separate those changes into two
or more patches. If your changes include an API update, and a new
driver which uses that new API, separate those into two patches.
On the other hand, if you make a single change to numerous files,
group those changes into a single patch. Thus a single logical change
is contained within a single patch.
The point to remember is that each patch should make an easily understood
change that can be verified by reviewers. Each patch should be justifiable
on its own merits.
If one patch depends on another patch in order for a change to be
complete, that is OK. Simply note **"this patch depends on patch X"**
in your patch description.
When dividing your change into a series of patches, take special care to
ensure that the kernel builds and runs properly after each patch in the
series. Developers using ``git bisect`` to track down a problem can end up
splitting your patch series at any point; they will not thank you if you
introduce bugs in the middle.
If you cannot condense your patch set into a smaller set of patches,
then only post say 15 or so at a time and wait for review and integration.
4) Style-check your changes
---------------------------
Check your patch for basic style violations, details of which can be
found in
:ref:`Documentation/CodingStyle <codingstyle>`.
Failure to do so simply wastes
the reviewers' time and will get your patch rejected, probably
without even being read.
One significant exception is when moving code from one file to
another -- in this case you should not modify the moved code at all in
the same patch which moves it. This clearly delineates the act of
moving the code and your changes. This greatly aids review of the
actual differences and allows tools to better track the history of
the code itself.
Check your patches with the patch style checker prior to submission
(scripts/checkpatch.pl). Note, though, that the style checker should be
viewed as a guide, not as a replacement for human judgment. If your code
looks better with a violation, then it's probably best left alone.
The checker reports at three levels:
- ERROR: things that are very likely to be wrong
- WARNING: things requiring careful review
- CHECK: things requiring thought
You should be able to justify all violations that remain in your
patch.
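As a hedged example (run from the top of the source tree; the patch file name
is only a placeholder), the checker can be pointed at a patch file or, in
recent versions, at a git revision range with ``-g``::

    ./scripts/checkpatch.pl 0001-my-change.patch
    ./scripts/checkpatch.pl -g HEAD~3..HEAD     # check the last three commits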
5) Select the recipients for your patch
---------------------------------------
You should always copy the appropriate subsystem maintainer(s) on any patch
to code that they maintain; look through the MAINTAINERS file and the
source code revision history to see who those maintainers are. The
script scripts/get_maintainer.pl can be very useful at this step. If you
cannot find a maintainer for the subsystem you are working on, Andrew
Morton (akpm@linux-foundation.org) serves as a maintainer of last resort.
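For example, the script accepts either a patch file or, with ``-f``, a path in
the tree (the patch name below is only a placeholder)::

    ./scripts/get_maintainer.pl 0001-my-change.patch
    ./scripts/get_maintainer.pl -f drivers/net/mydriver.c

Its output is the set of maintainers and mailing lists to put on Cc.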
You should also normally choose at least one mailing list to receive a copy
of your patch set. linux-kernel@vger.kernel.org functions as a list of
last resort, but the volume on that list has caused a number of developers
to tune it out. Look in the MAINTAINERS file for a subsystem-specific
list; your patch will probably get more attention there. Please do not
spam unrelated lists, though.
Many kernel-related lists are hosted on vger.kernel.org; you can find a
list of them at http://vger.kernel.org/vger-lists.html. There are
kernel-related lists hosted elsewhere as well, though.
Do not send more than 15 patches at once to the vger mailing lists!!!
Linus Torvalds is the final arbiter of all changes accepted into the
Linux kernel. His e-mail address is <torvalds@linux-foundation.org>.
He gets a lot of e-mail, and, at this point, very few patches go through
Linus directly, so typically you should do your best to -avoid-
sending him e-mail.
If you have a patch that fixes an exploitable security bug, send that patch
to security@kernel.org. For severe bugs, a short embargo may be considered
to allow distributors to get the patch out to users; in such cases,
obviously, the patch should not be sent to any public lists.
Patches that fix a severe bug in a released kernel should be directed
toward the stable maintainers by putting a line like this::
Cc: stable@vger.kernel.org
into the sign-off area of your patch (note, NOT an email recipient). You
should also read
:ref:`Documentation/stable_kernel_rules.txt <stable_kernel_rules>`
in addition to this file.
Note, however, that some subsystem maintainers want to come to their own
conclusions on which patches should go to the stable trees. The networking
maintainer, in particular, would rather not see individual developers
adding lines like the above to their patches.
If changes affect userland-kernel interfaces, please send the MAN-PAGES
maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
least a notification of the change, so that some information makes its way
into the manual pages. User-space API changes should also be copied to
linux-api@vger.kernel.org.
For small patches you may want to CC the Trivial Patch Monkey
trivial@kernel.org which collects "trivial" patches. Have a look
into the MAINTAINERS file for its current manager.
Trivial patches must qualify for one of the following rules:
- Spelling fixes in documentation
- Spelling fixes for errors which could break :manpage:`grep(1)`
- Warning fixes (cluttering with useless warnings is bad)
- Compilation fixes (only if they are actually correct)
- Runtime fixes (only if they actually fix things)
- Removing use of deprecated functions/macros
- Contact detail and documentation fixes
- Non-portable code replaced by portable code (even in arch-specific,
since people copy, as long as it's trivial)
- Any fix by the author/maintainer of the file (ie. patch monkey
in re-transmission mode)
6) No MIME, no links, no compression, no attachments. Just plain text
----------------------------------------------------------------------
Linus and other kernel developers need to be able to read and comment
on the changes you are submitting. It is important for a kernel
developer to be able to "quote" your changes, using standard e-mail
tools, so that they may comment on specific portions of your code.
For this reason, all patches should be submitted by e-mail "inline".
.. warning::
Be wary of your editor's word-wrap corrupting your patch,
if you choose to cut-n-paste your patch.
Do not attach the patch as a MIME attachment, compressed or not.
Many popular e-mail applications will not always transmit a MIME
attachment as plain text, making it impossible to comment on your
code. A MIME attachment also takes Linus a bit more time to process,
decreasing the likelihood of your MIME-attached change being accepted.
Exception: If your mailer is mangling patches then someone may ask
you to re-send them using MIME.
See :ref:`Documentation/email-clients.txt <email_clients>`
for hints about configuring your e-mail client so that it sends your patches
untouched.
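If you use ``git``, ``git send-email`` transmits patches inline without
mangling them. A minimal sketch, assuming your SMTP details are already
configured in ``.gitconfig`` and using a placeholder list address::

    git format-patch -o outgoing/ HEAD~2
    git send-email --to=some-list@vger.kernel.org outgoing/*.patch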
7) E-mail size
--------------
Large changes are not appropriate for mailing lists or for some
maintainers. If your patch, uncompressed, exceeds 300 kB in size,
it is preferred that you store your patch on an Internet-accessible
server, and provide instead a URL (link) pointing to your patch. But note
that if your patch exceeds 300 kB, it almost certainly needs to be broken up
anyway.
8) Respond to review comments
-----------------------------
Your patch will almost certainly get comments from reviewers on ways in
which the patch can be improved. You must respond to those comments;
ignoring reviewers is a good way to get ignored in return. Review comments
or questions that do not lead to a code change should almost certainly
bring about a comment or changelog entry so that the next reviewer better
understands what is going on.
Be sure to tell the reviewers what changes you are making and to thank them
for their time. Code review is a tiring and time-consuming process, and
reviewers sometimes get grumpy. Even in that case, though, respond
politely and address the problems they have pointed out.
9) Don't get discouraged - or impatient
---------------------------------------
After you have submitted your change, be patient and wait. Reviewers are
busy people and may not get to your patch right away.
Once upon a time, patches used to disappear into the void without comment,
but the development process works more smoothly than that now. You should
receive comments within a week or so; if that does not happen, make sure
that you have sent your patches to the right place. Wait for a minimum of
one week before resubmitting or pinging reviewers - possibly longer during
busy times like merge windows.
10) Include PATCH in the subject
--------------------------------
Due to high e-mail traffic to Linus, and to linux-kernel, it is common
convention to prefix your subject line with [PATCH]. This lets Linus
and other kernel developers more easily distinguish patches from other
e-mail discussions.
11) Sign your work
------------------
To improve tracking of who did what, especially with patches that can
percolate to their final resting place in the kernel through several
layers of maintainers, we've introduced a "sign-off" procedure on
patches that are being emailed around.
The sign-off is a simple line at the end of the explanation for the
patch, which certifies that you wrote it or otherwise have the right to
pass it on as an open-source patch. The rules are pretty simple: if you
can certify the below:
Developer's Certificate of Origin 1.1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
then you just add a line saying::
Signed-off-by: Random J Developer <random@developer.example.org>
using your real name (sorry, no pseudonyms or anonymous contributions.)
Some people also put extra tags at the end. They'll just be ignored for
now, but you can do this to mark internal company procedures or just
point out some special detail about the sign-off.
If you are a subsystem or branch maintainer, sometimes you need to slightly
modify patches you receive in order to merge them, because the code is not
exactly the same in your tree and the submitters'. If you stick strictly to
rule (c), you should ask the submitter to rediff, but this is a totally
counter-productive waste of time and energy. Rule (b) allows you to adjust
the code, but then it is very impolite to change one submitter's code and
make him endorse your bugs. To solve this problem, it is recommended that
you add a line between the last Signed-off-by header and yours, indicating
the nature of your changes. While there is nothing mandatory about this, it
seems like prepending the description with your mail and/or name, all
enclosed in square brackets, is noticeable enough to make it obvious that
you are responsible for last-minute changes. Example::
Signed-off-by: Random J Developer <random@developer.example.org>
[lucky@maintainer.example.org: struct foo moved from foo.c to foo.h]
Signed-off-by: Lucky K Maintainer <lucky@maintainer.example.org>
This practice is particularly helpful if you maintain a stable branch and
want at the same time to credit the author, track changes, merge the fix,
and protect the submitter from complaints. Note that under no circumstances
can you change the author's identity (the From header), as it is the one
which appears in the changelog.
Special note to back-porters: It seems to be a common and useful practice
to insert an indication of the origin of a patch at the top of the commit
message (just after the subject line) to facilitate tracking. For instance,
here's what we see in a 3.x-stable release::
Date: Tue Oct 7 07:26:38 2014 -0400
libata: Un-break ATA blacklist
commit 1c40279960bcd7d52dbdf1d466b20d24b99176c8 upstream.
And here's what might appear in an older kernel once a patch is backported::
Date: Tue May 13 22:12:27 2008 +0200
wireless, airo: waitbusy() won't delay
[backport of 2.6 commit b7acbdfbd1f277c1eb23f344f899cfa4cd0bf36a]
Whatever the format, this information provides a valuable help to people
tracking your trees, and to people trying to troubleshoot bugs in your
tree.
12) When to use Acked-by: and Cc:
---------------------------------
The Signed-off-by: tag indicates that the signer was involved in the
development of the patch, or that he/she was in the patch's delivery path.
If a person was not directly involved in the preparation or handling of a
patch but wishes to signify and record their approval of it then they can
ask to have an Acked-by: line added to the patch's changelog.
Acked-by: is often used by the maintainer of the affected code when that
maintainer neither contributed to nor forwarded the patch.
Acked-by: is not as formal as Signed-off-by:. It is a record that the acker
has at least reviewed the patch and has indicated acceptance. Hence patch
mergers will sometimes manually convert an acker's "yep, looks good to me"
into an Acked-by: (but note that it is usually better to ask for an
explicit ack).
Acked-by: does not necessarily indicate acknowledgement of the entire patch.
For example, if a patch affects multiple subsystems and has an Acked-by: from
one subsystem maintainer then this usually indicates acknowledgement of just
the part which affects that maintainer's code. Judgement should be used here.
When in doubt people should refer to the original discussion in the mailing
list archives.
If a person has had the opportunity to comment on a patch, but has not
provided such comments, you may optionally add a ``Cc:`` tag to the patch.
This is the only tag which might be added without an explicit action by the
person it names - but it should indicate that this person was copied on the
patch. This tag documents that potentially interested parties
have been included in the discussion.
13) Using Reported-by:, Tested-by:, Reviewed-by:, Suggested-by: and Fixes:
--------------------------------------------------------------------------
The Reported-by tag gives credit to people who find bugs and report them and it
hopefully inspires them to help us again in the future. Please note that if
the bug was reported in private, then ask for permission first before using the
Reported-by tag.
A Tested-by: tag indicates that the patch has been successfully tested (in
some environment) by the person named. This tag informs maintainers that
some testing has been performed, provides a means to locate testers for
future patches, and ensures credit for the testers.
Reviewed-by:, instead, indicates that the patch has been reviewed and found
acceptable according to the Reviewer's Statement:
Reviewer's statement of oversight
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By offering my Reviewed-by: tag, I state that:
(a) I have carried out a technical review of this patch to
evaluate its appropriateness and readiness for inclusion into
the mainline kernel.
(b) Any problems, concerns, or questions relating to the patch
have been communicated back to the submitter. I am satisfied
with the submitter's response to my comments.
(c) While there may be things that could be improved with this
submission, I believe that it is, at this time, (1) a
worthwhile modification to the kernel, and (2) free of known
issues which would argue against its inclusion.
(d) While I have reviewed the patch and believe it to be sound, I
do not (unless explicitly stated elsewhere) make any
warranties or guarantees that it will achieve its stated
purpose or function properly in any given situation.
A Reviewed-by tag is a statement of opinion that the patch is an
appropriate modification of the kernel without any remaining serious
technical issues. Any interested reviewer (who has done the work) can
offer a Reviewed-by tag for a patch. This tag serves to give credit to
reviewers and to inform maintainers of the degree of review which has been
done on the patch. Reviewed-by: tags, when supplied by reviewers known to
understand the subject area and to perform thorough reviews, will normally
increase the likelihood of your patch getting into the kernel.
A Suggested-by: tag indicates that the patch idea is suggested by the person
named and ensures credit to the person for the idea. Please note that this
tag should not be added without the reporter's permission, especially if the
idea was not posted in a public forum. That said, if we diligently credit our
idea reporters, they will, hopefully, be inspired to help us again in the
future.
A Fixes: tag indicates that the patch fixes an issue in a previous commit. It
is used to make it easy to determine where a bug originated, which can help
review a bug fix. This tag also assists the stable kernel team in determining
which stable kernel versions should receive your fix. This is the preferred
method for indicating a bug fixed by the patch. See :ref:`describe_changes`
for more details.
14) The canonical patch format
------------------------------
This section describes how the patch itself should be formatted. Note
that, if you have your patches stored in a ``git`` repository, proper patch
formatting can be had with ``git format-patch``. The tools cannot create
the necessary text, though, so read the instructions below anyway.
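For instance, here is a sketch of letting ``git format-patch`` produce the
numbering, version prefix, and cover letter described below (the revision
range is hypothetical)::

    git format-patch -v2 --cover-letter -o v2/ master..my-feature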
The canonical patch subject line is::
Subject: [PATCH 001/123] subsystem: summary phrase
The canonical patch message body contains the following:
- A ``from`` line specifying the patch author (only needed if the person
sending the patch is not the author).
- An empty line.
- The body of the explanation, line wrapped at 75 columns, which will
be copied to the permanent changelog to describe this patch.
- The ``Signed-off-by:`` lines, described above, which will
also go in the changelog.
- A marker line containing simply ``---``.
- Any additional comments not suitable for the changelog.
- The actual patch (``diff`` output).
The Subject line format makes it very easy to sort the emails
alphabetically by subject line - pretty much any email reader will
support that - since the sequence number is zero-padded,
the numerical and alphabetic sort is the same.
The ``subsystem`` in the email's Subject should identify which
area or subsystem of the kernel is being patched.
The ``summary phrase`` in the email's Subject should concisely
describe the patch which that email contains. The ``summary
phrase`` should not be a filename. Do not use the same ``summary
phrase`` for every patch in a whole patch series (where a ``patch
series`` is an ordered sequence of multiple, related patches).
Bear in mind that the ``summary phrase`` of your email becomes a
globally-unique identifier for that patch. It propagates all the way
into the ``git`` changelog. The ``summary phrase`` may later be used in
developer discussions which refer to the patch. People will want to
google for the ``summary phrase`` to read discussion regarding that
patch. It will also be the only thing that people may quickly see
when, two or three months later, they are going through perhaps
thousands of patches using tools such as ``gitk`` or ``git log
--oneline``.
For these reasons, the ``summary`` must be no more than 70-75
characters, and it must describe both what the patch changes, as well
as why the patch might be necessary. It is challenging to be both
succinct and descriptive, but that is what a well-written summary
should do.
The ``summary phrase`` may be prefixed by tags enclosed in square
brackets: "Subject: [PATCH <tag>...] <summary phrase>". The tags are
not considered part of the summary phrase, but describe how the patch
should be treated. Common tags might include a version descriptor if
multiple versions of the patch have been sent out in response to
comments (i.e., "v1, v2, v3"), or "RFC" to indicate a request for
comments. If there are four patches in a patch series the individual
patches may be numbered like this: 1/4, 2/4, 3/4, 4/4. This assures
that developers understand the order in which the patches should be
applied and that they have reviewed or applied all of the patches in
the patch series.
A couple of example Subjects::
Subject: [PATCH 2/5] ext2: improve scalability of bitmap searching
Subject: [PATCH v2 01/27] x86: fix eflags tracking
The ``from`` line must be the very first line in the message body,
and has the form:
From: Original Author <author@example.com>
The ``from`` line specifies who will be credited as the author of the
patch in the permanent changelog. If the ``from`` line is missing,
then the ``From:`` line from the email header will be used to determine
the patch author in the changelog.
The explanation body will be committed to the permanent source
changelog, so should make sense to a competent reader who has long
since forgotten the immediate details of the discussion that might
have led to this patch. Including symptoms of the failure which the
patch addresses (kernel log messages, oops messages, etc.) is
especially useful for people who might be searching the commit logs
looking for the applicable patch. If a patch fixes a compile failure,
it may not be necessary to include _all_ of the compile failures; just
enough that it is likely that someone searching for the patch can find
it. As in the ``summary phrase``, it is important to be both succinct as
well as descriptive.
The ``---`` marker line serves the essential purpose of marking for patch
handling tools where the changelog message ends.
One good use for the additional comments after the ``---`` marker is for
a ``diffstat``, to show what files have changed, and the number of
inserted and deleted lines per file. A ``diffstat`` is especially useful
on bigger patches. Other comments relevant only to the moment or the
maintainer, not suitable for the permanent changelog, should also go
here. A good example of such comments might be ``patch changelogs``
which describe what has changed between the v1 and v2 version of the
patch.
If you are going to include a ``diffstat`` after the ``---`` marker, please
use ``diffstat`` options ``-p 1 -w 70`` so that filenames are listed from
the top of the kernel source tree and don't use too much horizontal
space (easily fit in 80 columns, maybe with some indentation). (``git``
generates appropriate diffstats by default.)
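For a hand-built patch, one way to produce such a diffstat (the revision range
here is only illustrative) is::

    git diff v4.8..HEAD | diffstat -p 1 -w 70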
See more details on the proper patch format in the following
references.
.. _explicit_in_reply_to:
15) Explicit In-Reply-To headers
--------------------------------
It can be helpful to manually add In-Reply-To: headers to a patch
(e.g., when using ``git send-email``) to associate the patch with
previous relevant discussion, e.g. to link a bug fix to the email with
the bug report. However, for a multi-patch series, it is generally
best to avoid using In-Reply-To: to link to older versions of the
series. This way multiple versions of the patch don't become an
unmanageable forest of references in email clients. If a link is
helpful, you can use the https://lkml.kernel.org/ redirector (e.g., in
the cover email text) to link to an earlier version of the patch series.
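As a sketch (the Message-Id and addresses are made up), ``git send-email`` can
add the header for you::

    git send-email --in-reply-to='<20161025.1234-1-reporter@example.com>' \
        --to=some-list@vger.kernel.org 0001-fix-the-reported-bug.patch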
16) Sending ``git pull`` requests
---------------------------------
If you have a series of patches, it may be most convenient to have the
maintainer pull them directly into the subsystem repository with a
``git pull`` operation. Note, however, that pulling patches from a developer
requires a higher degree of trust than taking patches from a mailing list.
As a result, many subsystem maintainers are reluctant to take pull
requests, especially from new, unknown developers. If in doubt you can use
the pull request as the cover letter for a normal posting of the patch
series, giving the maintainer the option of using either.
A pull request should have [GIT] or [PULL] in the subject line. The
request itself should include the repository name and the branch of
interest on a single line; it should look something like::
Please pull from
git://jdelvare.pck.nerim.net/jdelvare-2.6 i2c-for-linus
to get these changes:
A pull request should also include an overall message saying what will be
included in the request, a ``git shortlog`` listing of the patches
themselves, and a ``diffstat`` showing the overall effect of the patch series.
The easiest way to get all this information together is, of course, to let
``git`` do it for you with the ``git request-pull`` command.
Some maintainers (including Linus) want to see pull requests from signed
commits; that increases their confidence that the request actually came
from you. Linus, in particular, will not pull from public hosting sites
like GitHub in the absence of a signed tag.
The first step toward creating such tags is to make a GNUPG key and get it
signed by one or more core kernel developers. This step can be hard for
new developers, but there is no way around it. Attending conferences can
be a good way to find developers who can sign your key.
Once you have prepared a patch series in ``git`` that you wish to have somebody
pull, create a signed tag with ``git tag -s``. This will create a new tag
identifying the last commit in the series and containing a signature
created with your private key. You will also have the opportunity to add a
changelog-style message to the tag; this is an ideal place to describe the
effects of the pull request as a whole.
If the tree the maintainer will be pulling from is not the repository you
are working from, don't forget to push the signed tag explicitly to the
public tree.
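A minimal sequence, reusing the tag name from the example below, might be::

    git tag -s my-signed-tag -m "Summary of what this series does"
    git push origin my-signed-tag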
When generating your pull request, use the signed tag as the target. A
command like this will do the trick::
git request-pull master git://my.public.tree/linux.git my-signed-tag
REFERENCES
**********
Andrew Morton, "The perfect patch" (tpp).
<http://www.ozlabs.org/~akpm/stuff/tpp.txt>
Jeff Garzik, "Linux kernel patch submission format".
<http://linux.yyz.us/patch-format.html>
Greg Kroah-Hartman, "How to piss off a kernel subsystem maintainer".
<http://www.kroah.com/log/linux/maintainer.html>
<http://www.kroah.com/log/linux/maintainer-02.html>
<http://www.kroah.com/log/linux/maintainer-03.html>
<http://www.kroah.com/log/linux/maintainer-04.html>
<http://www.kroah.com/log/linux/maintainer-05.html>
<http://www.kroah.com/log/linux/maintainer-06.html>
NO!!!! No more huge patch bombs to linux-kernel@vger.kernel.org people!
<https://lkml.org/lkml/2005/7/11/336>
Kernel Documentation/CodingStyle:
:ref:`Documentation/CodingStyle <codingstyle>`
Linus Torvalds's mail on the canonical patch format:
<http://lkml.org/lkml/2005/4/7/183>
Andi Kleen, "On submitting kernel patches"
Some strategies to get difficult or controversial changes in.
http://halobates.de/on-submitting-patches.pdf
This file has moved to process/submitting-patches.rst


@@ -1,39 +0,0 @@
Software cursor for VGA by Pavel Machek <pavel@atrey.karlin.mff.cuni.cz>
======================= and Martin Mares <mj@atrey.karlin.mff.cuni.cz>
Linux now has some ability to manipulate cursor appearance. Normally, you
can set the size of hardware cursor (and also work around some ugly bugs in
those miserable Trident cards--see #define TRIDENT_GLITCH in drivers/video/
vgacon.c). You can now play a few new tricks: you can make your cursor look
like a non-blinking red block, make it the inverse of the background of the character
it's over or have it highlight that character, and still choose whether the original
hardware cursor should remain visible or not. There may be other things I have
never thought of.
The cursor appearance is controlled by a "<ESC>[?1;2;3c" escape sequence
where 1, 2 and 3 are parameters described below. If you omit any of them,
they will default to zeroes.
Parameter 1 specifies cursor size (0=default, 1=invisible, 2=underline, ...,
8=full block) + 16 if you want the software cursor to be applied + 32 if you
want to always change the background color + 64 if you dislike having the
background the same as the foreground. Highlights are ignored for the last two
flags.
The second parameter selects character attribute bits you want to change
(by simply XORing them with the value of this parameter). On standard VGA,
the high four bits specify background and the low four the foreground. In both
groups, low three bits set color (as in normal color codes used by the console)
and the most significant one turns on highlight (or sometimes blinking--it
depends on the configuration of your VGA).
The third parameter consists of character attribute bits you want to set.
Bit setting takes place before bit toggling, so you can simply clear a bit by
including it in both the set mask and the toggle mask.
Examples:
=========
To get normal blinking underline, use: echo -e '\033[?2c'
To get blinking block, use: echo -e '\033[?6c'
To get red non-blinking block, use: echo -e '\033[?17;0;64c'


@@ -101,6 +101,6 @@ received a notification, it will set the backlight level accordingly. This does
not affect the sending of event to user space, they are always sent to user
space regardless of whether or not the video module controls the backlight level
directly. This behaviour can be controlled through the brightness_switch_enabled
module parameter as documented in kernel-parameters.txt. It is recommended to
module parameter as documented in admin-guide/kernel-parameters.rst. It is recommended to
disable this behaviour once a GUI environment starts up and wants to have full
control of the backlight level.
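As a hedged illustration (assuming the ACPI video driver is built as the
"video" module), the parameter can be set on the kernel command line or
changed at runtime through sysfs:

    video.brightness_switch_enabled=0
    echo 0 > /sys/module/video/parameters/brightness_switch_enabled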


@@ -0,0 +1,411 @@
Linux kernel release 4.x <http://kernel.org/>
=============================================
These are the release notes for Linux version 4. Read them carefully,
as they tell you what this is all about, explain how to install the
kernel, and what to do if something goes wrong.
What is Linux?
--------------
Linux is a clone of the operating system Unix, written from scratch by
Linus Torvalds with assistance from a loosely-knit team of hackers across
the Net. It aims towards POSIX and Single UNIX Specification compliance.
It has all the features you would expect in a modern fully-fledged Unix,
including true multitasking, virtual memory, shared libraries, demand
loading, shared copy-on-write executables, proper memory management,
and multistack networking including IPv4 and IPv6.
It is distributed under the GNU General Public License - see the
accompanying COPYING file for more details.
On what hardware does it run?
-----------------------------
Although originally developed first for 32-bit x86-based PCs (386 or higher),
today Linux also runs on (at least) the Compaq Alpha AXP, Sun SPARC and
UltraSPARC, Motorola 68000, PowerPC, PowerPC64, ARM, Hitachi SuperH, Cell,
IBM S/390, MIPS, HP PA-RISC, Intel IA-64, DEC VAX, AMD x86-64, AXIS CRIS,
Xtensa, Tilera TILE, AVR32, ARC and Renesas M32R architectures.
Linux is easily portable to most general-purpose 32- or 64-bit architectures
as long as they have a paged memory management unit (PMMU) and a port of the
GNU C compiler (gcc) (part of The GNU Compiler Collection, GCC). Linux has
also been ported to a number of architectures without a PMMU, although
functionality is then obviously somewhat limited.
Linux has also been ported to itself. You can now run the kernel as a
userspace application - this is called UserMode Linux (UML).
Documentation
-------------
- There is a lot of documentation available both in electronic form on
the Internet and in books, both Linux-specific and pertaining to
general UNIX questions. I'd recommend looking into the documentation
subdirectories on any Linux FTP site for the LDP (Linux Documentation
Project) books. This README is not meant to be documentation on the
system: there are much better sources available.
- There are various README files in the Documentation/ subdirectory:
these typically contain kernel-specific installation notes for some
drivers for example. See Documentation/00-INDEX for a list of what
is contained in each file. Please read the
:ref:`Documentation/process/changes.rst <changes>` file, as it
contains information about problems that may result from upgrading
your kernel.
- The Documentation/DocBook/ subdirectory contains several guides for
kernel developers and users. These guides can be rendered in a
number of formats: PostScript (.ps), PDF, HTML, & man-pages, among others.
After installation, ``make psdocs``, ``make pdfdocs``, ``make htmldocs``,
or ``make mandocs`` will render the documentation in the requested format.
Installing the kernel source
----------------------------
- If you install the full sources, put the kernel tarball in a
directory where you have permissions (e.g. your home directory) and
unpack it::
xz -cd linux-4.X.tar.xz | tar xvf -
Replace "X" with the version number of the latest kernel.
Do NOT use the /usr/src/linux area! This area has a (usually
incomplete) set of kernel headers that are used by the library header
files. They should match the library, and not get messed up by
whatever the kernel-du-jour happens to be.
- You can also upgrade between 4.x releases by patching. Patches are
distributed in the xz format. To install by patching, get all the
newer patch files, enter the top level directory of the kernel source
(linux-4.X) and execute::
xz -cd ../patch-4.x.xz | patch -p1
Replace "x" for all versions bigger than the version "X" of your current
source tree, **in_order**, and you should be ok. You may want to remove
the backup files (some-file-name~ or some-file-name.orig), and make sure
that there are no failed patches (some-file-name# or some-file-name.rej).
If there are, either you or I have made a mistake.
Unlike patches for the 4.x kernels, patches for the 4.x.y kernels
(also known as the -stable kernels) are not incremental but instead apply
directly to the base 4.x kernel. For example, if your base kernel is 4.0
and you want to apply the 4.0.3 patch, you must not first apply the 4.0.1
and 4.0.2 patches. Similarly, if you are running kernel version 4.0.2 and
want to jump to 4.0.3, you must first reverse the 4.0.2 patch (that is,
patch -R) **before** applying the 4.0.3 patch. You can read more on this in
:ref:`Documentation/process/applying-patches.rst <applying_patches>`.
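A hedged worked example of the 4.0.2 to 4.0.3 jump described above, using the
same ``xz``/``patch`` invocation::

    cd linux-4.0
    xz -cd ../patch-4.0.2.xz | patch -p1 -R    # reverse, back to plain 4.0
    xz -cd ../patch-4.0.3.xz | patch -p1       # apply the 4.0.3 patch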
Alternatively, the script patch-kernel can be used to automate this
process. It determines the current kernel version and applies any
patches found::
linux/scripts/patch-kernel linux
The first argument in the command above is the location of the
kernel source. Patches are applied from the current directory, but
an alternative directory can be specified as the second argument.
- Make sure you have no stale .o files and dependencies lying around::
cd linux
make mrproper
You should now have the sources correctly installed.
Software requirements
---------------------
Compiling and running the 4.x kernels requires up-to-date
versions of various software packages. Consult
:ref:`Documentation/process/changes.rst <changes>` for the minimum version numbers
required and how to get updates for these packages. Beware that using
excessively old versions of these packages can cause indirect
errors that are very difficult to track down, so don't assume that
you can just update packages when obvious problems arise during
build or operation.
Build directory for the kernel
------------------------------
When compiling the kernel, all output files will by default be
stored together with the kernel source code.
Using the option ``make O=output/dir`` allows you to specify an alternate
place for the output files (including .config).
Example::
kernel source code: /usr/src/linux-4.X
build directory: /home/name/build/kernel
To configure and build the kernel, use::
cd /usr/src/linux-4.X
make O=/home/name/build/kernel menuconfig
make O=/home/name/build/kernel
sudo make O=/home/name/build/kernel modules_install install
Please note: If the ``O=output/dir`` option is used, then it must be
used for all invocations of make.
Configuring the kernel
----------------------
Do not skip this step even if you are only upgrading one minor
version. New configuration options are added in each release, and
odd problems will turn up if the configuration files are not set up
as expected. If you want to carry your existing configuration to a
new version with minimal work, use ``make oldconfig``, which will
only ask you for the answers to new questions.
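One common way to do that, assuming your distribution installs the running
kernel's configuration under /boot (not all do), is::

    cp /boot/config-$(uname -r) .config
    make oldconfig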
- Alternative configuration commands are::
"make config" Plain text interface.
"make menuconfig" Text based color menus, radiolists & dialogs.
"make nconfig" Enhanced text based color menus.
"make xconfig" Qt based configuration tool.
"make gconfig" GTK+ based configuration tool.
"make oldconfig" Default all questions based on the contents of
your existing ./.config file and asking about
new config symbols.
"make silentoldconfig"
Like above, but avoids cluttering the screen
with questions already answered.
Additionally updates the dependencies.
"make olddefconfig"
Like above, but sets new symbols to their default
values without prompting.
"make defconfig" Create a ./.config file by using the default
symbol values from either arch/$ARCH/defconfig
or arch/$ARCH/configs/${PLATFORM}_defconfig,
depending on the architecture.
"make ${PLATFORM}_defconfig"
Create a ./.config file by using the default
symbol values from
arch/$ARCH/configs/${PLATFORM}_defconfig.
Use "make help" to get a list of all available
platforms of your architecture.
"make allyesconfig"
Create a ./.config file by setting symbol
values to 'y' as much as possible.
"make allmodconfig"
Create a ./.config file by setting symbol
values to 'm' as much as possible.
"make allnoconfig" Create a ./.config file by setting symbol
values to 'n' as much as possible.
"make randconfig" Create a ./.config file by setting symbol
values to random values.
"make localmodconfig" Create a config based on current config and
loaded modules (lsmod). Disables any module
option that is not needed for the loaded modules.
To create a localmodconfig for another machine,
store the lsmod of that machine into a file
and pass it in as a LSMOD parameter.
target$ lsmod > /tmp/mylsmod
target$ scp /tmp/mylsmod host:/tmp
host$ make LSMOD=/tmp/mylsmod localmodconfig
The above also works when cross compiling.
"make localyesconfig" Similar to localmodconfig, except it will convert
all module options to built in (=y) options.
You can find more information on using the Linux kernel config tools
in Documentation/kbuild/kconfig.txt.
- NOTES on ``make config``:
- Having unnecessary drivers will make the kernel bigger, and can
under some circumstances lead to problems: probing for a
nonexistent controller card may confuse your other controllers
- A kernel with math-emulation compiled in will still use the
coprocessor if one is present: the math emulation will just
never get used in that case. The kernel will be slightly larger,
but will work on different machines regardless of whether they
have a math coprocessor or not.
- The "kernel hacking" configuration details usually result in a
bigger or slower kernel (or both), and can even make the kernel
less stable by configuring some routines to actively try to
break bad code to find kernel problems (kmalloc()). Thus you
should probably answer 'n' to the questions for "development",
"experimental", or "debugging" features.
Compiling the kernel
--------------------
- Make sure you have at least gcc 3.2 available.
For more information, refer to :ref:`Documentation/process/changes.rst <changes>`.
Please note that you can still run a.out user programs with this kernel.
- Do a ``make`` to create a compressed kernel image. It is also
possible to do ``make install`` if you have lilo installed to suit the
kernel makefiles, but you may want to check your particular lilo setup first.
To do the actual install, you have to be root, but none of the normal
build should require that. Don't take the name of root in vain.
- If you configured any of the parts of the kernel as ``modules``, you
will also have to do ``make modules_install``.
- Verbose kernel compile/build output:
Normally, the kernel build system runs in a fairly quiet mode (but not
totally silent). However, sometimes you or other kernel developers need
to see compile, link, or other commands exactly as they are executed.
For this, use "verbose" build mode. This is done by passing
``V=1`` to the ``make`` command, e.g.::
make V=1 all
To have the build system also tell the reason for the rebuild of each
target, use ``V=2``. The default is ``V=0``.
- Keep a backup kernel handy in case something goes wrong. This is
especially true for the development releases, since each new release
contains new code which has not been debugged. Make sure you keep a
backup of the modules corresponding to that kernel, as well. If you
are installing a new kernel with the same version number as your
working kernel, make a backup of your modules directory before you
do a ``make modules_install``.
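A minimal sketch of such a backup (the version number is just an example)::

	cp -a /lib/modules/4.8.0 /lib/modules/4.8.0.backup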
Alternatively, before compiling, use the kernel config option
"LOCALVERSION" to append a unique suffix to the regular kernel version.
LOCALVERSION can be set in the "General Setup" menu.
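For example (the suffix itself is arbitrary), setting::

	CONFIG_LOCALVERSION="-test1"

in your ``.config`` makes the new kernel and its modules install under a
release string such as ``4.8.0-test1``, so they do not overwrite the files
of the running ``4.8.0`` kernel.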
- In order to boot your new kernel, you'll need to copy the kernel
image (e.g. .../linux/arch/x86/boot/bzImage after compilation)
to the place where your regular bootable kernel is found.
- Booting a kernel directly from a floppy without the assistance of a
bootloader such as LILO, is no longer supported.
If you boot Linux from the hard drive, chances are you use LILO, which
uses the kernel image as specified in the file /etc/lilo.conf. The
kernel image file is usually /vmlinuz, /boot/vmlinuz, /bzImage or
/boot/bzImage. To use the new kernel, save a copy of the old image
and copy the new image over the old one. Then, you MUST RERUN LILO
to update the loading map! If you don't, you won't be able to boot
the new kernel image.
Reinstalling LILO is usually a matter of running /sbin/lilo.
You may wish to edit /etc/lilo.conf to specify an entry for your
old kernel image (say, /vmlinux.old) in case the new one does not
work. See the LILO docs for more information.
After reinstalling LILO, you should be all set. Shutdown the system,
reboot, and enjoy!
If you ever need to change the default root device, video mode,
ramdisk size, etc. in the kernel image, use the ``rdev`` program (or
alternatively the LILO boot options when appropriate). No need to
recompile the kernel to change these parameters.
- Reboot with the new kernel and enjoy.
If something goes wrong
-----------------------
- If you have problems that seem to be due to kernel bugs, please check
the file MAINTAINERS to see if there is a particular person associated
with the part of the kernel that you are having trouble with. If there
isn't anyone listed there, then the second best thing is to mail
the report to me (torvalds@linux-foundation.org), and possibly to any other
relevant mailing-list or to the newsgroup.
- In all bug-reports, *please* tell what kernel you are talking about,
how to duplicate the problem, and what your setup is (use your common
sense). If the problem is new, tell me so, and if the problem is
old, please try to tell me when you first noticed it.
- If the bug results in a message like::
unable to handle kernel paging request at address C0000010
Oops: 0002
EIP: 0010:XXXXXXXX
eax: xxxxxxxx ebx: xxxxxxxx ecx: xxxxxxxx edx: xxxxxxxx
esi: xxxxxxxx edi: xxxxxxxx ebp: xxxxxxxx
ds: xxxx es: xxxx fs: xxxx gs: xxxx
Pid: xx, process nr: xx
xx xx xx xx xx xx xx xx xx xx
or similar kernel debugging information on your screen or in your
system log, please duplicate it *exactly*. The dump may look
incomprehensible to you, but it does contain information that may
help debugging the problem. The text above the dump is also
important: it tells something about why the kernel dumped code (in
the above example, it's due to a bad kernel pointer). More information
on making sense of the dump is in Documentation/admin-guide/oops-tracing.rst
- If you compiled the kernel with CONFIG_KALLSYMS you can send the dump
as is, otherwise you will have to use the ``ksymoops`` program to make
sense of the dump (but compiling with CONFIG_KALLSYMS is usually preferred).
This utility can be downloaded from
ftp://ftp.<country>.kernel.org/pub/linux/utils/kernel/ksymoops/ .
Alternatively, you can do the dump lookup by hand:
- In debugging dumps like the above, it helps enormously if you can
look up what the EIP value means. The hex value as such doesn't help
me or anybody else very much: it will depend on your particular
kernel setup. What you should do is take the hex value from the EIP
line (ignore the ``0010:``), and look it up in the kernel namelist to
see which kernel function contains the offending address.
To find out the kernel function name, you'll need to find the system
binary associated with the kernel that exhibited the symptom. This is
the file 'linux/vmlinux'. To extract the namelist and match it against
the EIP from the kernel crash, do::
nm vmlinux | sort | less
This will give you a list of kernel addresses sorted in ascending
order, from which it is simple to find the function that contains the
offending address. Note that the address given by the kernel
debugging messages will not necessarily match exactly with the
function addresses (in fact, that is very unlikely), so you can't
just 'grep' the list: the list will, however, give you the starting
point of each kernel function, so by looking for the function that
has a starting address lower than the one you are searching for but
is followed by a function with a higher address you will find the one
you want. In fact, it may be a good idea to include a bit of
"context" in your problem report, giving a few lines around the
interesting one.
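A rough sketch of that by-hand lookup (the EIP value below is made up;
``nm`` prints zero-padded, lower-case addresses, so a plain string
comparison is good enough here)::

	nm vmlinux | sort > nm.sorted
	# print the last symbol whose start address is <= the faulting EIP
	awk '$1 <= "c01a8f32"' nm.sorted | tail -1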
If you for some reason cannot do the above (you have a pre-compiled
kernel image or similar), telling me as much about your setup as
possible will help. Please read the :ref:`admin-guide/reporting-bugs.rst <reportingbugs>`
document for details.
- Alternatively, you can use gdb on a running kernel. (read-only; i.e. you
cannot change values or set break points.) To do this, first compile the
kernel with -g; edit arch/x86/Makefile appropriately, then do a ``make
clean``. You'll also need to enable CONFIG_PROC_FS (via ``make config``).
After you've rebooted with the new kernel, do ``gdb vmlinux /proc/kcore``.
You can now use all the usual gdb commands. The command to look up the
point where your system crashed is ``l *0xXXXXXXXX``. (Replace the XXXes
with the EIP value.)
gdb'ing a non-running kernel currently fails because ``gdb`` (wrongly)
disregards the starting offset for which the kernel is compiled.

View File

@ -1,9 +1,10 @@
How to deal with bad memory e.g. reported by memtest86+ ?
=========================================================
March 2008
Jan-Simon Moeller, dl9pf@gmx.de
How to deal with bad memory e.g. reported by memtest86+ ?
#########################################################
There are three possibilities I know of:
@ -19,6 +20,7 @@ This Howto is about number 3) .
BadRAM
######
BadRAM is actively developed and available as a kernel patch
here: http://rick.vanrein.org/linux/badram/
@ -31,15 +33,18 @@ memmap is already in the kernel and usable as kernel-parameter at
boot-time. Its syntax is slightly strange and you may need to
calculate the values by yourself!
Syntax to exclude a memory area (see kernel-parameters.txt for details):
memmap=<size>$<address>
Syntax to exclude a memory area (see admin-guide/kernel-parameters.rst for details)::
memmap=<size>$<address>
Example: memtest86+ reported here errors at address 0x18691458, 0x18698424 and
some others. All had 0x1869xxxx in common, so I chose a pattern of
0x18690000,0xffff0000.
some others. All had 0x1869xxxx in common, so I chose a pattern of
0x18690000,0xffff0000.
With the numbers of the example above:
memmap=64K$0x18690000
or
memmap=0x10000$0x18690000
With the numbers of the example above::
memmap=64K$0x18690000
or::
memmap=0x10000$0x18690000

View File

@ -0,0 +1,68 @@
Basic kernel profiling
======================
These instructions are deliberately very basic. If you want something clever,
go read the real docs ;-)
Please don't add more stuff, but feel free to
correct my mistakes ;-) (mbligh@aracnet.com)
Thanks to John Levon, Dave Hansen, et al. for help writing this.
``<test>`` is the thing you're trying to measure.
Make sure you have the correct ``System.map`` / ``vmlinux`` referenced!
It is probably easiest to use ``make install`` for linux and hack
``/sbin/installkernel`` to copy ``vmlinux`` to ``/boot``, in addition to
``vmlinuz``, ``config``, ``System.map``, which are usually installed by default.
Readprofile
-----------
A recent ``readprofile`` command is needed for 2.6, such as found in util-linux
2.12a, which can be downloaded from:
http://www.kernel.org/pub/linux/utils/util-linux/
Most distributions will ship it already.
Add ``profile=2`` to the kernel command line.
Some ``readprofile`` commands::
clear           readprofile -r
                <test>
dump output     readprofile -m /boot/System.map > captured_profile
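A complete round trip might therefore look like this (the workload is
whatever you are trying to measure)::

	readprofile -r                                      # reset the counters
	./my-test-workload                                  # hypothetical test
	readprofile -m /boot/System.map | sort -nr | head   # busiest kernel functions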
Oprofile
--------
Get the source (see Changes for required version) from
http://oprofile.sourceforge.net/ and add ``idle=poll`` to the kernel command
line.
Configure with ``CONFIG_PROFILING=y`` and ``CONFIG_OPROFILE=y`` & reboot on new kernel::
./configure --with-kernel-support
make install
For superior results, be sure to enable the local APIC. If opreport sees
a 0Hz CPU, APIC was not on. Be aware that idle=poll may mean a performance
penalty.
One time setup::
opcontrol --setup --vmlinux=/boot/vmlinux
Some ``opcontrol`` commands::
clear           opcontrol --reset
start           opcontrol --start
                <test>
stop            opcontrol --stop
dump output     opreport > output_file
To only report on the kernel, run ``opreport -l /boot/vmlinux > output_file``
A reset is needed to clear old statistics, which survive a reboot.

View File

@ -0,0 +1,151 @@
Kernel Support for miscellaneous (your favourite) Binary Formats v1.1
=====================================================================
This Kernel feature allows you to invoke almost (for restrictions see below)
every program by simply typing its name in the shell.
This includes for example compiled Java(TM), Python or Emacs programs.
To achieve this you must tell binfmt_misc which interpreter has to be invoked
with which binary. Binfmt_misc recognises the binary-type by matching some bytes
at the beginning of the file with a magic byte sequence (masking out specified
bits) you have supplied. Binfmt_misc can also recognise a filename extension
such as ``.com`` or ``.exe``.
First you must mount binfmt_misc::
mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
To actually register a new binary type, you have to set up a string looking like
``:name:type:offset:magic:mask:interpreter:flags`` (where you can choose the
``:`` upon your needs) and echo it to ``/proc/sys/fs/binfmt_misc/register``.
Here is what the fields mean:
- ``name``
is an identifier string. A new /proc file will be created with this
``name`` below ``/proc/sys/fs/binfmt_misc``; it cannot contain slashes ``/`` for
obvious reasons.
- ``type``
is the type of recognition. Give ``M`` for magic and ``E`` for extension.
- ``offset``
is the offset of the magic/mask in the file, counted in bytes. This
defaults to 0 if you omit it (i.e. you write ``:name:type::magic...``).
Ignored when using filename extension matching.
- ``magic``
is the byte sequence binfmt_misc is matching for. The magic string
may contain hex-encoded characters like ``\x0a`` or ``\xA4``. Note that you
must escape any NUL bytes; parsing halts at the first one. In a shell
environment you might have to write ``\\x0a`` to prevent the shell from
eating your ``\``.
If you chose filename extension matching, this is the extension to be
recognised (without the ``.``, the ``\x0a`` specials are not allowed).
Extension matching is case sensitive, and slashes ``/`` are not allowed!
- ``mask``
is an (optional, defaults to all 0xff) mask. You can mask out some
bits from matching by supplying a string like magic and as long as magic.
The mask is anded with the byte sequence of the file. Note that you must
escape any NUL bytes; parsing halts at the first one. Ignored when using
filename extension matching.
- ``interpreter``
is the program that should be invoked with the binary as first
argument (specify the full path)
- ``flags``
is an optional field that controls several aspects of the invocation
of the interpreter. It is a string of capital letters, each controls a
certain aspect. The following flags are supported:
``P`` - preserve-argv[0]
Legacy behavior of binfmt_misc is to overwrite
the original argv[0] with the full path to the binary. When this
flag is included, binfmt_misc will add an argument to the argument
vector for this purpose, thus preserving the original ``argv[0]``.
e.g. If your interp is set to ``/bin/foo`` and you run ``blah``
(which is in ``/usr/local/bin``), then the kernel will execute
``/bin/foo`` with ``argv[]`` set to ``["/bin/foo", "/usr/local/bin/blah", "blah"]``.
The interp has to be aware of this so it can execute
``/usr/local/bin/blah`` with ``argv[]`` set to ``["blah"]``.
``O`` - open-binary
Legacy behavior of binfmt_misc is to pass the full path
of the binary to the interpreter as an argument. When this flag is
included, binfmt_misc will open the file for reading and pass its
descriptor as an argument, instead of the full path, thus allowing
the interpreter to execute non-readable binaries. This feature
should be used with care - the interpreter has to be trusted not to
emit the contents of the non-readable binary.
``C`` - credentials
Currently, the behavior of binfmt_misc is to calculate
the credentials and security token of the new process according to
the interpreter. When this flag is included, these attributes are
calculated according to the binary. It also implies the ``O`` flag.
This feature should be used with care as the interpreter
will run with root permissions when a setuid binary owned by root
is run with binfmt_misc.
``F`` - fix binary
The usual behaviour of binfmt_misc is to spawn the
binary lazily when the misc format file is invoked. However,
this doesn't work very well in the face of mount namespaces and
changeroots, so the ``F`` mode opens the binary as soon as the
emulation is installed and uses the opened image to spawn the
emulator, meaning it is always available once installed,
regardless of how the environment changes.
There are some restrictions:
- the whole register string may not exceed 1920 characters
- the magic must reside in the first 128 bytes of the file, i.e.
offset+size(magic) has to be less than 128
- the interpreter string may not exceed 127 characters
To use binfmt_misc you have to mount it first. You can mount it with
``mount -t binfmt_misc none /proc/sys/fs/binfmt_misc`` command, or you can add
a line ``none /proc/sys/fs/binfmt_misc binfmt_misc defaults 0 0`` to your
``/etc/fstab`` so it auto mounts on boot.
You may want to add the binary formats in one of your ``/etc/rc`` scripts during
boot-up. Read the manual of your init program to figure out how to do this
right.
Think about the order of adding entries! Later added entries are matched first!
A few examples (assumed you are in ``/proc/sys/fs/binfmt_misc``):
- enable support for em86 (like binfmt_em86, for Alpha AXP only)::
echo ':i386:M::\x7fELF\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x03:\xff\xff\xff\xff\xff\xfe\xfe\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfb\xff\xff:/bin/em86:' > register
echo ':i486:M::\x7fELF\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x06:\xff\xff\xff\xff\xff\xfe\xfe\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfb\xff\xff:/bin/em86:' > register
- enable support for packed DOS applications (pre-configured dosemu hdimages)::
echo ':DEXE:M::\x0eDEX::/usr/bin/dosexec:' > register
- enable support for Windows executables using wine::
echo ':DOSWin:M::MZ::/usr/local/bin/wine:' > register
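- as a further sketch (the entry name and interpreter path are arbitrary),
  an extension-matching entry needs neither magic nor mask; this one would
  feed every file ending in ``.py`` to ``/usr/bin/python3``::

	echo ':py3:E::py::/usr/bin/python3:' > register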
For java support see Documentation/admin-guide/java.rst
You can enable/disable binfmt_misc or one binary type by echoing 0 (to disable)
or 1 (to enable) to ``/proc/sys/fs/binfmt_misc/status`` or
``/proc/.../the_name``.
Catting the file tells you the current status of ``binfmt_misc/the_entry``.
You can remove one entry or all entries by echoing -1 to ``/proc/.../the_name``
or ``/proc/sys/fs/binfmt_misc/status``.
Hints
-----
If you want to pass special arguments to your interpreter, you can
write a wrapper script for it. See Documentation/admin-guide/java.rst for an
example.
Your interpreter should NOT look in the PATH for the filename; the kernel
passes it the full filename (or the file descriptor) to use. Using ``$PATH`` can
cause unexpected behaviour and can be a security hazard.
Richard Günther <rguenth@tat.physik.uni-tuebingen.de>

View File

@ -0,0 +1,38 @@
Linux Braille Console
=====================
To get early boot messages on a braille device (before userspace screen
readers can start), you first need to compile the support for the usual serial
console (see :ref:`Documentation/admin-guide/serial-console.rst <serial_console>`), and
for braille device
(in :menuselection:`Device Drivers --> Accessibility support --> Console on braille device`).
Then you need to specify a ``console=brl`` option on the kernel command line; the
format is::
console=brl,serial_options...
where ``serial_options...`` are the same as described in
:ref:`Documentation/admin-guide/serial-console.rst <serial_console>`.
So for instance you can use ``console=brl,ttyS0`` if the braille device is connected to the first serial port, and ``console=brl,ttyS0,115200`` to
override the baud rate to 115200, etc.
By default, the braille device will just show the last kernel message (console
mode). To review previous messages, press the Insert key to switch to the VT
review mode. In review mode, the arrow keys let you browse the VT content,
the :kbd:`PAGE-UP`/:kbd:`PAGE-DOWN` keys jump to the top/bottom of the screen,
and the :kbd:`HOME` key goes back to the cursor, providing a very basic
screen-review facility.
Sound feedback can be obtained by adding the ``braille_console.sound=1`` kernel
parameter.
For simplicity, only one braille console can be enabled, other uses of
``console=brl,...`` will be discarded. Also note that it does not interfere with
the console selection mechanism described in
:ref:`Documentation/admin-guide/serial-console.rst <serial_console>`.
For now, only the VisioBraille device is supported.
Samuel Thibault <samuel.thibault@ens-lyon.org>

View File

@ -1,18 +1,8 @@
Table of contents
=================
Bug hunting
+++++++++++
Last updated: 20 December 2005
Contents
========
- Introduction
- Devices not appearing
- Finding patch that caused a bug
-- Finding using git-bisect
-- Finding it the old way
- Fixing the bug
Introduction
============
@ -24,7 +14,8 @@ Finding bugs is not always easy. Have a go though. If you can't find it don't
give up. Report as much as you have found to the relevant maintainer. See
MAINTAINERS for who that is for the subsystem you have worked on.
Before you submit a bug report read REPORTING-BUGS.
Before you submit a bug report read
:ref:`Documentation/admin-guide/reporting-bugs.rst <reportingbugs>`.
Devices not appearing
=====================
@ -37,15 +28,16 @@ Finding patch that caused a bug
Finding using git-bisect
------------------------
Finding using ``git-bisect``
----------------------------
Using the provided tools with git makes finding bugs easy provided the bug is
reproducible.
Using the provided tools with ``git`` makes finding bugs easy provided the bug
is reproducible.
Steps to do it:
- start using git for the kernel source
- read the man page for git-bisect
- read the man page for ``git-bisect``
- have fun
Finding it the old way
@ -58,22 +50,22 @@ It's a brute force approach but it works pretty well.
You need:
. A reproducible bug - it has to happen predictably (sorry)
. All the kernel tar files from a revision that worked to the
- A reproducible bug - it has to happen predictably (sorry)
- All the kernel tar files from a revision that worked to the
revision that doesn't
You will then do:
. Rebuild a revision that you believe works, install, and verify that.
. Do a binary search over the kernels to figure out which one
- Rebuild a revision that you believe works, install, and verify that.
- Do a binary search over the kernels to figure out which one
introduced the bug. I.e., suppose 1.3.28 didn't have the bug, but
you know that 1.3.69 does. Pick a kernel in the middle and build
that, like 1.3.50. Build & test; if it works, pick the mid point
between .50 and .69, else the mid point between .28 and .50.
. You'll narrow it down to the kernel that introduced the bug. You
- You'll narrow it down to the kernel that introduced the bug. You
can probably do better than this but it gets tricky.
. Narrow it down to a subdirectory
- Narrow it down to a subdirectory
- Copy kernel that works into "test". Let's say that 3.62 works,
but 3.63 doesn't. So you diff -r those two kernels and come
@ -83,7 +75,7 @@ You will then do:
Copy the non-working directory next to the working directory
as "dir.63".
One directory at time, try moving the working directory to
"dir.62" and mv dir.63 dir"time, try
"dir.62" and mv dir.63 dir"time, try::
mv dir dir.62
mv dir.63 dir
@ -97,15 +89,15 @@ You will then do:
found in my case that they were self explanatory - you may
or may not want to give up when that happens.
. Narrow it down to a file
- Narrow it down to a file
- You can apply the same technique to each file in the directory,
hoping that the changes in that file are self contained.
. Narrow it down to a routine
- Narrow it down to a routine
- You can take the old file and the new file and manually create
a merged file that has
a merged file that has::
#ifdef VER62
routine()
@ -120,7 +112,7 @@ You will then do:
#endif
And then walk through that file, one routine at a time and
prefix it with
prefix it with::
#define VER62
/* both routines here */
@ -153,94 +145,105 @@ To debug a kernel, use objdump and look for the hex offset from the crash
output to find the valid line of code/assembler. Without debug symbols, you
will see the assembler code for the routine shown, but if your kernel has
debug symbols the C code will also be available. (Debug symbols can be enabled
in the kernel hacking menu of the menu configuration.) For example:
in the kernel hacking menu of the menu configuration.) For example::
objdump -r -S -l --disassemble net/dccp/ipv4.o
NB.: you need to be at the top level of the kernel tree for this to pick up
your C files.
.. note::
You need to be at the top level of the kernel tree for this to pick up
your C files.
If you don't have access to the code you can also debug on some crash dumps
e.g. crash dump output as shown by Dave Miller.
e.g. crash dump output as shown by Dave Miller::
> EIP is at ip_queue_xmit+0x14/0x4c0
> ...
> Code: 44 24 04 e8 6f 05 00 00 e9 e8 fe ff ff 8d 76 00 8d bc 27 00 00
> 00 00 55 57 56 53 81 ec bc 00 00 00 8b ac 24 d0 00 00 00 8b 5d 08
> <8b> 83 3c 01 00 00 89 44 24 14 8b 45 28 85 c0 89 44 24 18 0f 85
>
> Put the bytes into a "foo.s" file like this:
>
> .text
> .globl foo
> foo:
> .byte .... /* bytes from Code: part of OOPS dump */
>
> Compile it with "gcc -c -o foo.o foo.s" then look at the output of
> "objdump --disassemble foo.o".
>
> Output:
>
> ip_queue_xmit:
> push %ebp
> push %edi
> push %esi
> push %ebx
> sub $0xbc, %esp
> mov 0xd0(%esp), %ebp ! %ebp = arg0 (skb)
> mov 0x8(%ebp), %ebx ! %ebx = skb->sk
> mov 0x13c(%ebx), %eax ! %eax = inet_sk(sk)->opt
EIP is at ip_queue_xmit+0x14/0x4c0
...
Code: 44 24 04 e8 6f 05 00 00 e9 e8 fe ff ff 8d 76 00 8d bc 27 00 00
00 00 55 57 56 53 81 ec bc 00 00 00 8b ac 24 d0 00 00 00 8b 5d 08
<8b> 83 3c 01 00 00 89 44 24 14 8b 45 28 85 c0 89 44 24 18 0f 85
Put the bytes into a "foo.s" file like this:
.text
.globl foo
foo:
.byte .... /* bytes from Code: part of OOPS dump */
Compile it with "gcc -c -o foo.o foo.s" then look at the output of
"objdump --disassemble foo.o".
Output:
ip_queue_xmit:
push %ebp
push %edi
push %esi
push %ebx
sub $0xbc, %esp
mov 0xd0(%esp), %ebp ! %ebp = arg0 (skb)
mov 0x8(%ebp), %ebx ! %ebx = skb->sk
mov 0x13c(%ebx), %eax ! %eax = inet_sk(sk)->opt
In addition, you can use GDB to figure out the exact file and line
number of the OOPS from the vmlinux file. If you have
CONFIG_DEBUG_INFO enabled, you can simply copy the EIP value from the
OOPS:
number of the OOPS from the ``vmlinux`` file. If you have
``CONFIG_DEBUG_INFO`` enabled, you can simply copy the EIP value from the
OOPS::
EIP: 0060:[<c021e50e>] Not tainted VLI
And use GDB to translate that to human-readable form:
And use GDB to translate that to human-readable form::
gdb vmlinux
(gdb) l *0xc021e50e
If you don't have CONFIG_DEBUG_INFO enabled, you use the function
offset from the OOPS:
If you don't have ``CONFIG_DEBUG_INFO`` enabled, you use the function
offset from the OOPS::
EIP is at vt_ioctl+0xda8/0x1482
And recompile the kernel with CONFIG_DEBUG_INFO enabled:
And recompile the kernel with ``CONFIG_DEBUG_INFO`` enabled::
make vmlinux
gdb vmlinux
(gdb) p vt_ioctl
(gdb) l *(0x<address of vt_ioctl> + 0xda8)
or, as one command
or, as one command::
(gdb) l *(vt_ioctl + 0xda8)
If you have a call trace, such as :-
>Call Trace:
> [<ffffffff8802c8e9>] :jbd:log_wait_commit+0xa3/0xf5
> [<ffffffff810482d9>] autoremove_wake_function+0x0/0x2e
> [<ffffffff8802770b>] :jbd:journal_stop+0x1be/0x1ee
> ...
If you have a call trace, such as::
Call Trace:
[<ffffffff8802c8e9>] :jbd:log_wait_commit+0xa3/0xf5
[<ffffffff810482d9>] autoremove_wake_function+0x0/0x2e
[<ffffffff8802770b>] :jbd:journal_stop+0x1be/0x1ee
...
this shows the problem in the :jbd: module. You can load that module in gdb
and list the relevant code.
and list the relevant code::
gdb fs/jbd/jbd.ko
(gdb) p log_wait_commit
(gdb) l *(0x<address> + 0xa3)
or
or::
(gdb) l *(log_wait_commit + 0xa3)
Another very useful option of the Kernel Hacking section in menuconfig is
Debug memory allocations. This will help you see whether data has been
initialised and not set before use etc. To see the values that get assigned
with this look at mm/slab.c and search for POISON_INUSE. When using this an
Oops will often show the poisoned data instead of zero which is the default.
with this look at ``mm/slab.c`` and search for ``POISON_INUSE``. When using
this an Oops will often show the poisoned data instead of zero which is the
default.
Once you have worked out a fix please submit it upstream. After all open
source is about sharing what you do and don't you want to be recognised for
your genius?
Please do read Documentation/SubmittingPatches though to help your code get
accepted.
Please do read
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>` though
to help your code get accepted.

View File

@ -0,0 +1,10 @@
# -*- coding: utf-8; mode: python -*-
project = 'Linux Kernel User Documentation'
tags.add("subproject")
latex_documents = [
('index', 'linux-user.tex', 'Linux Kernel User Documentation',
'The kernel development community', 'manual'),
]

View File

@ -0,0 +1,353 @@
Dynamic debug
+++++++++++++
Introduction
============
This document describes how to use the dynamic debug (dyndbg) feature.
Dynamic debug is designed to allow you to dynamically enable/disable
kernel code to obtain additional kernel information. Currently, if
``CONFIG_DYNAMIC_DEBUG`` is set, then all ``pr_debug()``/``dev_dbg()`` and
``print_hex_dump_debug()``/``print_hex_dump_bytes()`` calls can be dynamically
enabled per-callsite.
If ``CONFIG_DYNAMIC_DEBUG`` is not set, ``print_hex_dump_debug()`` is just a
shortcut for ``print_hex_dump(KERN_DEBUG)``.
For ``print_hex_dump_debug()``/``print_hex_dump_bytes()``, the format string is
its ``prefix_str`` argument if that is a constant string, or ``hexdump``
in case ``prefix_str`` is built dynamically.
Dynamic debug has even more useful features:
* Simple query language allows turning on and off debugging
statements by matching any combination of 0 or 1 of:
- source filename
- function name
- line number (including ranges of line numbers)
- module name
- format string
* Provides a debugfs control file: ``<debugfs>/dynamic_debug/control``
which can be read to display the complete list of known debug
statements, to help guide you
Controlling dynamic debug Behaviour
===================================
The behaviour of ``pr_debug()``/``dev_dbg()`` is controlled by writing to a
control file in the 'debugfs' filesystem. Thus, you must first mount
the debugfs filesystem in order to make use of this feature.
Subsequently, we refer to the control file as:
``<debugfs>/dynamic_debug/control``. For example, if you want to enable
printing from source file ``svcsock.c``, line 1603 you simply do::
nullarbor:~ # echo 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
If you make a mistake with the syntax, the write will fail thus::
nullarbor:~ # echo 'file svcsock.c wtf 1 +p' >
<debugfs>/dynamic_debug/control
-bash: echo: write error: Invalid argument
Viewing Dynamic Debug Behaviour
===============================
You can view the currently configured behaviour of all the debug
statements via::
nullarbor:~ # cat <debugfs>/dynamic_debug/control
# filename:lineno [module]function flags format
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:323 [svcxprt_rdma]svc_rdma_cleanup =_ "SVCRDMA Module Removed, deregister RPC RDMA transport\012"
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:341 [svcxprt_rdma]svc_rdma_init =_ "\011max_inline : %d\012"
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:340 [svcxprt_rdma]svc_rdma_init =_ "\011sq_depth : %d\012"
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:338 [svcxprt_rdma]svc_rdma_init =_ "\011max_requests : %d\012"
...
You can also apply standard Unix text manipulation filters to this
data, e.g.::
nullarbor:~ # grep -i rdma <debugfs>/dynamic_debug/control | wc -l
62
nullarbor:~ # grep -i tcp <debugfs>/dynamic_debug/control | wc -l
42
The third column shows the currently enabled flags for each debug
statement callsite (see below for definitions of the flags). The
default value, with no flags enabled, is ``=_``. So you can view all
the debug statement callsites with any non-default flags::
nullarbor:~ # awk '$3 != "=_"' <debugfs>/dynamic_debug/control
# filename:lineno [module]function flags format
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svcsock.c:1603 [sunrpc]svc_send p "svc_process: st_sendto returned %d\012"
Command Language Reference
==========================
At the lexical level, a command comprises a sequence of words separated
by spaces or tabs. So these are all equivalent::
nullarbor:~ # echo -n 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
nullarbor:~ # echo -n '  file   svcsock.c     line  1603 +p  ' >
<debugfs>/dynamic_debug/control
nullarbor:~ # echo -n 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
Command submissions are bounded by a write() system call.
Multiple commands can be written together, separated by ``;`` or ``\n``::
~# echo "func pnpacpi_get_resources +p; func pnp_assign_mem +p" \
> <debugfs>/dynamic_debug/control
If your query set is big, you can batch them too::
~# cat query-batch-file > <debugfs>/dynamic_debug/control
Another way is to use wildcards. The match rule supports ``*`` (matches
zero or more characters) and ``?`` (matches exactly one character). For
example, you can match all usb drivers::
~# echo "file drivers/usb/* +p" > <debugfs>/dynamic_debug/control
At the syntactical level, a command comprises a sequence of match
specifications, followed by a flags change specification::
command ::= match-spec* flags-spec
The match-spec's are used to choose a subset of the known pr_debug()
callsites to which to apply the flags-spec. Think of them as a query
with implicit ANDs between each pair. Note that an empty list of
match-specs will select all debug statement callsites.
A match specification comprises a keyword, which controls the
attribute of the callsite to be compared, and a value to compare
against. Possible keywords are::
match-spec ::= 'func' string |
'file' string |
'module' string |
'format' string |
'line' line-range
line-range ::= lineno |
'-'lineno |
lineno'-' |
lineno'-'lineno
lineno ::= unsigned-int
.. note::
``line-range`` cannot contain space, e.g.
"1-30" is valid range but "1 - 30" is not.
The meanings of each keyword are:
func
The given string is compared against the function name
of each callsite. Example::
func svc_tcp_accept
file
The given string is compared against either the full pathname, the
src-root relative pathname, or the basename of the source file of
each callsite. Examples::
file svcsock.c
file kernel/freezer.c
file /usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svcsock.c
module
The given string is compared against the module name
of each callsite. The module name is the string as
seen in ``lsmod``, i.e. without the directory or the ``.ko``
suffix and with ``-`` changed to ``_``. Examples::
module sunrpc
module nfsd
format
The given string is searched for in the dynamic debug format
string. Note that the string does not need to match the
entire format, only some part. Whitespace and other
special characters can be escaped using C octal character
escape ``\ooo`` notation, e.g. the space character is ``\040``.
Alternatively, the string can be enclosed in double quote
characters (``"``) or single quote characters (``'``).
Examples::
format svcrdma: // many of the NFS/RDMA server pr_debugs
format readahead // some pr_debugs in the readahead cache
format nfsd:\040SETATTR // one way to match a format with whitespace
format "nfsd: SETATTR" // a neater way to match a format with whitespace
format 'nfsd: SETATTR' // yet another way to match a format with whitespace
line
The given line number or range of line numbers is compared
against the line number of each ``pr_debug()`` callsite. A single
line number matches the callsite line number exactly. A
range of line numbers matches any callsite between the first
and last line number inclusive. An empty first number means
the first line in the file, an empty last number means the
last line in the file. Examples::
line 1603 // exactly line 1603
line 1600-1605 // the six lines from line 1600 to line 1605
line -1605 // the 1605 lines from line 1 to line 1605
line 1600- // all lines from line 1600 to the end of the file
The flags specification comprises a change operation followed
by one or more flag characters. The change operation is one
of the characters::
- remove the given flags
+ add the given flags
= set the flags to the given flags
The flags are::
p enables the pr_debug() callsite.
f Include the function name in the printed message
l Include line number in the printed message
m Include module name in the printed message
t Include thread ID in messages not generated from interrupt context
_ No flags are set. (Or'd with others on input)
For ``print_hex_dump_debug()`` and ``print_hex_dump_bytes()``, only the ``p`` flag
has meaning; the other flags are ignored.
For display, the flags are preceded by ``=``
(mnemonic: what the flags are currently equal to).
Note the regexp ``^[-+=][flmpt_]+$`` matches a flags specification.
To clear all flags at once, use ``=_`` or ``-flmpt``.
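A small sketch that combines match-specs with flags changes (the module and
file names here are made up)::

	// enable pr_debug()s in one file of one module, with function name and line number
	~# echo 'module foomod file foo_main.c +pfl' > <debugfs>/dynamic_debug/control
	// later, drop just the line numbers again
	~# echo 'module foomod file foo_main.c -l' > <debugfs>/dynamic_debug/control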
Debug messages during Boot Process
==================================
To activate debug messages for core code and built-in modules during
the boot process, even before userspace and debugfs exist, use
``dyndbg="QUERY"``, ``module.dyndbg="QUERY"``, or ``ddebug_query="QUERY"``
(``ddebug_query`` is obsoleted by ``dyndbg``, and deprecated). QUERY follows
the syntax described above, but must not exceed 1023 characters. Your
bootloader may impose lower limits.
These ``dyndbg`` params are processed just after the ddebug tables are
processed, as part of the arch_initcall. Thus you can enable debug
messages in all code run after this arch_initcall via this boot
parameter.
On an x86 system for example ACPI enablement is a subsys_initcall and::
dyndbg="file ec.c +p"
will show early Embedded Controller transactions during ACPI setup if
your machine (typically a laptop) has an Embedded Controller.
PCI (or other devices) initialization also is a hot candidate for using
this boot parameter for debugging purposes.
If the ``foo`` module is not built in, ``foo.dyndbg`` will still be processed at
boot time, without effect, but will be reprocessed when the module is
loaded later. ``ddebug_query=`` and bare ``dyndbg=`` are only processed at
boot.
Debug Messages at Module Initialization Time
============================================
When ``modprobe foo`` is called, modprobe scans ``/proc/cmdline`` for
``foo.params``, strips ``foo.``, and passes them to the kernel along with
params given in modprobe args or ``/etc/modprobe.d/*.conf`` files,
in the following order:
1. parameters given via ``/etc/modprobe.d/*.conf``::
options foo dyndbg=+pt
options foo dyndbg # defaults to +p
2. ``foo.dyndbg`` as given in boot args, ``foo.`` is stripped and passed::
foo.dyndbg=" func bar +p; func buz +mp"
3. args to modprobe::
modprobe foo dyndbg==pmf # override previous settings
These ``dyndbg`` queries are applied in order, with last having final say.
This allows boot args to override or modify those from ``/etc/modprobe.d``
(sensible, since 1 is system wide, 2 is kernel or boot specific), and
modprobe args to override both.
In the ``foo.dyndbg="QUERY"`` form, the query must exclude ``module foo``.
``foo`` is extracted from the param-name, and applied to each query in
``QUERY``, and only 1 match-spec of each type is allowed.
The ``dyndbg`` option is a "fake" module parameter, which means:
- modules do not need to define it explicitly
- every module gets it tacitly, whether they use pr_debug or not
- it doesn't appear in ``/sys/module/$module/parameters/``
To see it, grep the control file, or inspect ``/proc/cmdline``.
For ``CONFIG_DYNAMIC_DEBUG`` kernels, any settings given at boot-time (or
enabled by ``-DDEBUG`` flag during compilation) can be disabled later via
the sysfs interface if the debug messages are no longer needed::
echo "module module_name -p" > <debugfs>/dynamic_debug/control
Examples
========
::
// enable the message at line 1603 of file svcsock.c
nullarbor:~ # echo -n 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
// enable all the messages in file svcsock.c
nullarbor:~ # echo -n 'file svcsock.c +p' >
<debugfs>/dynamic_debug/control
// enable all the messages in the NFS server module
nullarbor:~ # echo -n 'module nfsd +p' >
<debugfs>/dynamic_debug/control
// enable all 12 messages in the function svc_process()
nullarbor:~ # echo -n 'func svc_process +p' >
<debugfs>/dynamic_debug/control
// disable all 12 messages in the function svc_process()
nullarbor:~ # echo -n 'func svc_process -p' >
<debugfs>/dynamic_debug/control
// enable messages for NFS calls READ, READLINK, READDIR and READDIR+.
nullarbor:~ # echo -n 'format "nfsd: READ" +p' >
<debugfs>/dynamic_debug/control
// enable messages in files of which the paths include string "usb"
nullarbor:~ # echo -n '*usb* +p' > <debugfs>/dynamic_debug/control
// enable all messages
nullarbor:~ # echo -n '+p' > <debugfs>/dynamic_debug/control
// add module, function to all enabled messages
nullarbor:~ # echo -n '+mf' > <debugfs>/dynamic_debug/control
// boot-args example, with newlines and comments for readability
Kernel command line: ...
// see whats going on in dyndbg=value processing
dynamic_debug.verbose=1
// enable pr_debugs in 2 builtins, #cmt is stripped
dyndbg="module params +p #cmt ; module sys +p"
// enable pr_debugs in 2 functions in a module loaded later
pc87360.dyndbg="func pc87360_init_device +p; func pc87360_find +p"

View File

@ -0,0 +1,34 @@
Linux Kernel User's Documentation
=================================
Contents:
.. toctree::
:maxdepth: 2
:numbered:
README
reporting-bugs
bug-hunting
oops-tracing
ramoops
initrd
init
dynamic-debug-howto
security-bugs
kernel-parameters
serial-console
braille-console
parport
md
module-signing
sysrq
unicode
vga-softcursor
sysfs-rules
devices
binfmt-misc
mono
java
bad-memory
basic-profiling

View File

@ -5,6 +5,7 @@ OK, so you've got this pretty unintuitive message (currently located
in init/main.c) and are wondering what the H*** went wrong.
Some high-level reasons for failure (listed roughly in order of execution)
to load the init binary are:
A) Unable to mount root FS
B) init binary doesn't exist on rootfs
C) broken console device
@ -12,37 +13,39 @@ D) binary exists but dependencies not available
E) binary cannot be loaded
Detailed explanations:
0) Set "debug" kernel parameter (in bootloader config file or CONFIG_CMDLINE)
A) Set "debug" kernel parameter (in bootloader config file or CONFIG_CMDLINE)
to get more detailed kernel messages.
A) make sure you have the correct root FS type
(and root= kernel parameter points to the correct partition),
B) make sure you have the correct root FS type
(and ``root=`` kernel parameter points to the correct partition),
required drivers such as storage hardware (such as SCSI or USB!)
and filesystem (ext3, jffs2 etc.) are builtin (alternatively as modules,
to be pre-loaded by an initrd)
C) Possibly a conflict in console= setup --> initial console unavailable.
C) Possibly a conflict in ``console=`` setup --> initial console unavailable.
E.g. some serial consoles are unreliable due to serial IRQ issues (e.g.
missing interrupt-based configuration).
Try using a different console= device or e.g. netconsole= .
Try using a different ``console=`` device or e.g. ``netconsole=``.
D) e.g. required library dependencies of the init binary such as
/lib/ld-linux.so.2 missing or broken. Use readelf -d <INIT>|grep NEEDED
to find out which libraries are required.
``/lib/ld-linux.so.2`` missing or broken. Use
``readelf -d <INIT>|grep NEEDED`` to find out which libraries are required.
E) make sure the binary's architecture matches your hardware.
E.g. i386 vs. x86_64 mismatch, or trying to load x86 on ARM hardware.
In case you tried loading a non-binary file here (shell script?),
you should make sure that the script specifies an interpreter in its shebang
header line (#!/...) that is fully working (including its library
header line (``#!/...``) that is fully working (including its library
dependencies). And before tackling scripts, better first test a simple
non-script binary such as /bin/sh and confirm its successful execution.
To find out more, add code to init/main.c to display kernel_execve()s
non-script binary such as ``/bin/sh`` and confirm its successful execution.
To find out more, add code to ``init/main.c`` to display ``kernel_execve()``
return values.
Please extend this explanation whenever you find new failure causes
(after all loading the init binary is a CRITICAL and hard transition step
which needs to be made as painless as possible), then submit patch to LKML.
Further TODOs:
- Implement the various run_init_process() invocations via a struct array
which can then store the kernel_execve() result value and on failure
log it all by iterating over _all_ results (very important usability fix).
- Implement the various ``run_init_process()`` invocations via a struct array
which can then store the ``kernel_execve()`` result value and on failure
log it all by iterating over **all** results (very important usability fix).
- try to make the implementation itself more helpful in general,
e.g. by providing additional error messages at affected places.

View File

@ -2,7 +2,7 @@ Using the initial RAM disk (initrd)
===================================
Written 1996,2000 by Werner Almesberger <werner.almesberger@epfl.ch> and
Hans Lermen <lermen@fgan.de>
Hans Lermen <lermen@fgan.de>
initrd provides the capability to load a RAM disk by the boot loader.
@ -16,7 +16,7 @@ where the kernel comes up with a minimum set of compiled-in drivers, and
where additional modules are loaded from initrd.
This document gives a brief overview of the use of initrd. A more detailed
discussion of the boot process can be found in [1].
discussion of the boot process can be found in [#f1]_.
Operation
@ -27,10 +27,10 @@ When using initrd, the system typically boots as follows:
1) the boot loader loads the kernel and the initial RAM disk
2) the kernel converts initrd into a "normal" RAM disk and
frees the memory used by initrd
3) if the root device is not /dev/ram0, the old (deprecated)
3) if the root device is not ``/dev/ram0``, the old (deprecated)
change_root procedure is followed. see the "Obsolete root change
mechanism" section below.
4) root device is mounted. if it is /dev/ram0, the initrd image is
4) root device is mounted. if it is ``/dev/ram0``, the initrd image is
then mounted as root
5) /sbin/init is executed (this can be any valid executable, including
shell scripts; it is run with uid 0 and can do basically everything
@ -38,7 +38,7 @@ When using initrd, the system typically boots as follows:
6) init mounts the "real" root file system
7) init places the root file system at the root directory using the
pivot_root system call
8) init execs the /sbin/init on the new root filesystem, performing
8) init execs the ``/sbin/init`` on the new root filesystem, performing
the usual boot sequence
9) the initrd file system is removed
@ -51,7 +51,7 @@ be accessible.
Boot command-line options
-------------------------
initrd adds the following new options:
initrd adds the following new options::
initrd=<path> (e.g. LOADLIN)
@ -83,36 +83,36 @@ Recent kernels have support for populating a ramdisk from a compressed cpio
archive. On such systems, the creation of a ramdisk image doesn't need to
involve special block devices or loopbacks; you merely create a directory on
disk with the desired initrd content, cd to that directory, and run (as an
example):
example)::
find . | cpio --quiet -H newc -o | gzip -9 -n > /boot/imagefile.img
find . | cpio --quiet -H newc -o | gzip -9 -n > /boot/imagefile.img
Examining the contents of an existing image file is just as simple:
Examining the contents of an existing image file is just as simple::
mkdir /tmp/imagefile
cd /tmp/imagefile
gzip -cd /boot/imagefile.img | cpio -imd --quiet
mkdir /tmp/imagefile
cd /tmp/imagefile
gzip -cd /boot/imagefile.img | cpio -imd --quiet
Installation
------------
First, a directory for the initrd file system has to be created on the
"normal" root file system, e.g.
"normal" root file system, e.g.::
# mkdir /initrd
# mkdir /initrd
The name is not relevant. More details can be found on the pivot_root(2)
man page.
The name is not relevant. More details can be found on the
:manpage:`pivot_root(2)` man page.
If the root file system is created during the boot procedure (i.e. if
you're building an install floppy), the root file system creation
procedure should create the /initrd directory.
procedure should create the ``/initrd`` directory.
If initrd will not be mounted in some cases, its content is still
accessible if the following device has been created:
accessible if the following device has been created::
# mknod /dev/initrd b 1 250
# chmod 400 /dev/initrd
# mknod /dev/initrd b 1 250
# chmod 400 /dev/initrd
Second, the kernel has to be compiled with RAM disk support and with
support for the initial RAM disk enabled. Also, at least all components
@ -131,60 +131,76 @@ kernels, at least three types of devices are suitable for that:
We'll describe the loopback device method:
1) make sure loopback block devices are configured into the kernel
2) create an empty file system of the appropriate size, e.g.
# dd if=/dev/zero of=initrd bs=300k count=1
# mke2fs -F -m0 initrd
2) create an empty file system of the appropriate size, e.g.::
# dd if=/dev/zero of=initrd bs=300k count=1
# mke2fs -F -m0 initrd
(if space is critical, you may want to use the Minix FS instead of Ext2)
3) mount the file system, e.g.
# mount -t ext2 -o loop initrd /mnt
4) create the console device:
3) mount the file system, e.g.::
# mount -t ext2 -o loop initrd /mnt
4) create the console device::
# mkdir /mnt/dev
# mknod /mnt/dev/console c 5 1
5) copy all the files that are needed to properly use the initrd
environment. Don't forget the most important file, /sbin/init
Note that /sbin/init's permissions must include "x" (execute).
environment. Don't forget the most important file, ``/sbin/init``
.. note:: ``/sbin/init`` permissions must include "x" (execute).
6) correct operation the initrd environment can frequently be tested
even without rebooting with the command
# chroot /mnt /sbin/init
even without rebooting with the command::
# chroot /mnt /sbin/init
This is of course limited to initrds that do not interfere with the
general system state (e.g. by reconfiguring network interfaces,
overwriting mounted devices, trying to start already running demons,
etc. Note however that it is usually possible to use pivot_root in
such a chroot'ed initrd environment.)
7) unmount the file system
# umount /mnt
7) unmount the file system::
# umount /mnt
8) the initrd is now in the file "initrd". Optionally, it can now be
compressed
# gzip -9 initrd
compressed::
# gzip -9 initrd
For experimenting with initrd, you may want to take a rescue floppy and
only add a symbolic link from /sbin/init to /bin/sh. Alternatively, you
can try the experimental newlib environment [2] to create a small
only add a symbolic link from ``/sbin/init`` to ``/bin/sh``. Alternatively, you
can try the experimental newlib environment [#f2]_ to create a small
initrd.
Finally, you have to boot the kernel and load initrd. Almost all Linux
boot loaders support initrd. Since the boot process is still compatible
with an older mechanism, the following boot command line parameters
have to be given:
have to be given::
root=/dev/ram0 rw
(rw is only necessary if writing to the initrd file system.)
With LOADLIN, you simply execute
With LOADLIN, you simply execute::
LOADLIN <kernel> initrd=<disk_image>
e.g. LOADLIN C:\LINUX\BZIMAGE initrd=C:\LINUX\INITRD.GZ root=/dev/ram0 rw
With LILO, you add the option INITRD=<path> to either the global section
or to the section of the respective kernel in /etc/lilo.conf, and pass
the options using APPEND, e.g.
e.g.::
LOADLIN C:\LINUX\BZIMAGE initrd=C:\LINUX\INITRD.GZ root=/dev/ram0 rw
With LILO, you add the option ``INITRD=<path>`` to either the global section
or to the section of the respective kernel in ``/etc/lilo.conf``, and pass
the options using APPEND, e.g.::
image = /bzImage
initrd = /boot/initrd.gz
append = "root=/dev/ram0 rw"
and run /sbin/lilo
and run ``/sbin/lilo``
For other boot loaders, please refer to the respective documentation.
@ -204,33 +220,33 @@ The procedure involves the following steps:
- unmounting the initrd file system and de-allocating the RAM disk
Mounting the new root file system is easy: it just needs to be mounted on
a directory under the current root. Example:
a directory under the current root. Example::
# mkdir /new-root
# mount -o ro /dev/hda1 /new-root
# mkdir /new-root
# mount -o ro /dev/hda1 /new-root
The root change is accomplished with the pivot_root system call, which
is also available via the pivot_root utility (see pivot_root(8) man
page; pivot_root is distributed with util-linux version 2.10h or higher
[3]). pivot_root moves the current root to a directory under the new
is also available via the ``pivot_root`` utility (see :manpage:`pivot_root(8)`
man page; ``pivot_root`` is distributed with util-linux version 2.10h or higher
[#f3]_). ``pivot_root`` moves the current root to a directory under the new
root, and puts the new root at its place. The directory for the old root
must exist before calling pivot_root. Example:
must exist before calling ``pivot_root``. Example::
# cd /new-root
# mkdir initrd
# pivot_root . initrd
# cd /new-root
# mkdir initrd
# pivot_root . initrd
Now, the init process may still access the old root via its
executable, shared libraries, standard input/output/error, and its
current root directory. All these references are dropped by the
following command:
following command::
# exec chroot . what-follows <dev/console >dev/console 2>&1
# exec chroot . what-follows <dev/console >dev/console 2>&1
Where what-follows is a program under the new root, e.g. /sbin/init
Where what-follows is a program under the new root, e.g. ``/sbin/init``
If the new root file system will be used with udev and has no valid
/dev directory, udev must be initialized before invoking chroot in order
to provide /dev/console.
``/dev`` directory, udev must be initialized before invoking chroot in order
to provide ``/dev/console``.
Note: implementation details of pivot_root may change with time. In order
to ensure compatibility, the following points should be observed:
@ -244,13 +260,13 @@ to ensure compatibility, the following points should be observed:
- use relative paths for dev/console in the exec command
Now, the initrd can be unmounted and the memory allocated by the RAM
disk can be freed:
disk can be freed::
# umount /initrd
# blockdev --flushbufs /dev/ram0
# umount /initrd
# blockdev --flushbufs /dev/ram0
It is also possible to use initrd with an NFS-mounted root, see the
pivot_root(8) man page for details.
:manpage:`pivot_root(8)` man page for details.
Usage scenarios
@ -263,21 +279,21 @@ as follows:
1) system boots from floppy or other media with a minimal kernel
(e.g. support for RAM disks, initrd, a.out, and the Ext2 FS) and
loads initrd
2) /sbin/init determines what is needed to (1) mount the "real" root FS
2) ``/sbin/init`` determines what is needed to (1) mount the "real" root FS
(i.e. device type, device drivers, file system) and (2) the
distribution media (e.g. CD-ROM, network, tape, ...). This can be
done by asking the user, by auto-probing, or by using a hybrid
approach.
3) /sbin/init loads the necessary kernel modules
4) /sbin/init creates and populates the root file system (this doesn't
3) ``/sbin/init`` loads the necessary kernel modules
4) ``/sbin/init`` creates and populates the root file system (this doesn't
have to be a very usable system yet)
5) /sbin/init invokes pivot_root to change the root file system and
5) ``/sbin/init`` invokes ``pivot_root`` to change the root file system and
execs - via chroot - a program that continues the installation
6) the boot loader is installed
7) the boot loader is configured to load an initrd with the set of
modules that was used to bring up the system (e.g. /initrd can be
modules that was used to bring up the system (e.g. ``/initrd`` can be
modified, then unmounted, and finally, the image is written from
/dev/ram0 or /dev/rd/0 to a file)
``/dev/ram0`` or ``/dev/rd/0`` to a file)
8) now the system is bootable and additional installation tasks can be
performed
@ -290,7 +306,7 @@ different hardware configurations in a single administrative domain. In
such cases, it is desirable to generate only a small set of kernels
(ideally only one) and to keep the system-specific part of configuration
information as small as possible. In this case, a common initrd could be
generated with all the necessary modules. Then, only /sbin/init or a file
generated with all the necessary modules. Then, only ``/sbin/init`` or a file
read by it would have to be different.
A third scenario is more convenient recovery disks, because information
@ -301,9 +317,9 @@ auto-detection).
Last but not least, CD-ROM distributors may use it for better installation
from CD, e.g. by using a boot floppy and bootstrapping a bigger RAM disk
via initrd from CD; or by booting via a loader like LOADLIN or directly
via initrd from CD; or by booting via a loader like ``LOADLIN`` or directly
from the CD-ROM, and loading the RAM disk from CD without need of
floppies.
floppies.
Obsolete root change mechanism
@ -316,51 +332,52 @@ continued availability.
It works by mounting the "real" root device (i.e. the one set with rdev
in the kernel image or with root=... at the boot command line) as the
root file system when linuxrc exits. The initrd file system is then
unmounted, or, if it is still busy, moved to a directory /initrd, if
unmounted, or, if it is still busy, moved to a directory ``/initrd``, if
such a directory exists on the new root file system.
In order to use this mechanism, you do not have to specify the boot
command options root, init, or rw. (If specified, they will affect
the real root file system, not the initrd environment.)
If /proc is mounted, the "real" root device can be changed from within
linuxrc by writing the number of the new root FS device to the special
file /proc/sys/kernel/real-root-dev, e.g.
file /proc/sys/kernel/real-root-dev, e.g.::
# echo 0x301 >/proc/sys/kernel/real-root-dev
Note that the mechanism is incompatible with NFS and similar file
systems.
This old, deprecated mechanism is commonly called "change_root", while
the new, supported mechanism is called "pivot_root".
This old, deprecated mechanism is commonly called ``change_root``, while
the new, supported mechanism is called ``pivot_root``.
Mixed change_root and pivot_root mechanism
------------------------------------------
In case you did not want to use root=/dev/ram0 to trigger the pivot_root
mechanism, you may create both /linuxrc and /sbin/init in your initrd image.
In case you did not want to use ``root=/dev/ram0`` to trigger the pivot_root
mechanism, you may create both ``/linuxrc`` and ``/sbin/init`` in your initrd
image.
/linuxrc would contain only the following:
``/linuxrc`` would contain only the following::
#! /bin/sh
mount -n -t proc proc /proc
echo 0x0100 >/proc/sys/kernel/real-root-dev
umount -n /proc
#! /bin/sh
mount -n -t proc proc /proc
echo 0x0100 >/proc/sys/kernel/real-root-dev
umount -n /proc
Once linuxrc exited, the kernel would mount again your initrd as root,
this time executing /sbin/init. Again, it would be the duty of this init
to build the right environment (maybe using the root= device passed on
the cmdline) before the final execution of the real /sbin/init.
this time executing ``/sbin/init``. Again, it would be the duty of this init
to build the right environment (maybe using the ``root=`` device passed on
the cmdline) before the final execution of the real ``/sbin/init``.
Resources
---------
[1] Almesberger, Werner; "Booting Linux: The History and the Future"
.. [#f1] Almesberger, Werner; "Booting Linux: The History and the Future"
http://www.almesberger.net/cv/papers/ols2k-9.ps.gz
[2] newlib package (experimental), with initrd example
http://sources.redhat.com/newlib/
[3] util-linux: Miscellaneous utilities for Linux
http://www.kernel.org/pub/linux/utils/util-linux/
.. [#f2] newlib package (experimental), with initrd example
https://www.sourceware.org/newlib/
.. [#f3] util-linux: Miscellaneous utilities for Linux
https://www.kernel.org/pub/linux/utils/util-linux/

View File

@ -1,5 +1,5 @@
Java(tm) Binary Kernel Support for Linux v1.03
----------------------------------------------
Java(tm) Binary Kernel Support for Linux v1.03
----------------------------------------------
Linux beats them ALL! While all other OS's are TALKING about direct
support of Java Binaries in the OS, Linux is doing it!
@ -19,70 +19,80 @@ other program after you have done the following:
as the application itself).
2) You have to compile BINFMT_MISC either as a module or into
the kernel (CONFIG_BINFMT_MISC) and set it up properly.
the kernel (``CONFIG_BINFMT_MISC``) and set it up properly.
If you choose to compile it as a module, you will have
to insert it manually with modprobe/insmod, as kmod
cannot easily be supported with binfmt_misc.
cannot easily be supported with binfmt_misc.
Read the file 'binfmt_misc.txt' in this directory to know
more about the configuration process.
3) Add the following configuration items to binfmt_misc
(you should really have read binfmt_misc.txt now):
support for Java applications:
(you should really have read ``binfmt_misc.txt`` now):
support for Java applications::
':Java:M::\xca\xfe\xba\xbe::/usr/local/bin/javawrapper:'
support for executable Jar files:
support for executable Jar files::
':ExecutableJAR:E::jar::/usr/local/bin/jarwrapper:'
support for Java Applets:
support for Java Applets::
':Applet:E::html::/usr/bin/appletviewer:'
or the following, if you want to be more selective:
or the following, if you want to be more selective::
':Applet:M::<!--applet::/usr/bin/appletviewer:'
Of course you have to fix the path names. The path/file names given in this
document match the Debian 2.1 system. (i.e. jdk installed in /usr,
custom wrappers from this document in /usr/local)
document match the Debian 2.1 system. (i.e. jdk installed in ``/usr``,
custom wrappers from this document in ``/usr/local``)
Note that for the more selective applet support you have to modify
existing html-files to contain <!--applet--> in the first line
('<' has to be the first character!) to let this work!
existing html-files to contain ``<!--applet-->`` in the first line
(``<`` has to be the first character!) to let this work!
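
For reference, and assuming binfmt_misc is already mounted in its usual place
(see ``binfmt_misc.txt`` for details), such strings are registered by writing
them to the ``register`` file; a sketch for the Java entry::

    # mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
    # echo ':Java:M::\xca\xfe\xba\xbe::/usr/local/bin/javawrapper:' > /proc/sys/fs/binfmt_misc/register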
For the compiled Java programs you need a wrapper script like the
following (this is because Java is broken in case of the filename
handling), again fix the path names, both in the script and in the
above given configuration string.
You, too, need the little program after the script. Compile like
gcc -O2 -o javaclassname javaclassname.c
and stick it to /usr/local/bin.
You, too, need the little program after the script. Compile like::
gcc -O2 -o javaclassname javaclassname.c
and stick it to ``/usr/local/bin``.
Both the javawrapper shellscript and the javaclassname program
were supplied by Colin J. Watson <cjw44@cam.ac.uk>.
====================== Cut here ===================
#!/bin/bash
# /usr/local/bin/javawrapper - the wrapper for binfmt_misc/java
Javawrapper shell script::
if [ -z "$1" ]; then
#!/bin/bash
# /usr/local/bin/javawrapper - the wrapper for binfmt_misc/java
if [ -z "$1" ]; then
exec 1>&2
echo Usage: $0 class-file
exit 1
fi
fi
CLASS=$1
FQCLASS=`/usr/local/bin/javaclassname $1`
FQCLASSN=`echo $FQCLASS | sed -e 's/^.*\.\([^.]*\)$/\1/'`
FQCLASSP=`echo $FQCLASS | sed -e 's-\.-/-g' -e 's-^[^/]*$--' -e 's-/[^/]*$--'`
CLASS=$1
FQCLASS=`/usr/local/bin/javaclassname $1`
FQCLASSN=`echo $FQCLASS | sed -e 's/^.*\.\([^.]*\)$/\1/'`
FQCLASSP=`echo $FQCLASS | sed -e 's-\.-/-g' -e 's-^[^/]*$--' -e 's-/[^/]*$--'`
# for example:
# CLASS=Test.class
# FQCLASS=foo.bar.Test
# FQCLASSN=Test
# FQCLASSP=foo/bar
# for example:
# CLASS=Test.class
# FQCLASS=foo.bar.Test
# FQCLASSN=Test
# FQCLASSP=foo/bar
unset CLASSBASE
unset CLASSBASE
declare -i LINKLEVEL=0
declare -i LINKLEVEL=0
while :; do
while :; do
if [ "`basename $CLASS .class`" == "$FQCLASSN" ]; then
# See if this directory works straight off
cd -L `dirname $CLASS`
@ -119,9 +129,9 @@ while :; do
exit 1
fi
CLASS=`ls --color=no -l $CLASS | sed -e 's/^.* \([^ ]*\)$/\1/'`
done
done
if [ -z "$CLASSBASE" ]; then
if [ -z "$CLASSBASE" ]; then
if [ -z "$FQCLASSP" ]; then
GOODNAME=$FQCLASSN.class
else
@ -131,24 +141,23 @@ if [ -z "$CLASSBASE" ]; then
echo $0:
echo " $FQCLASS should be in a file called $GOODNAME"
exit 1
fi
fi
if ! echo $CLASSPATH | grep -q "^\(.*:\)*$CLASSBASE\(:.*\)*"; then
if ! echo $CLASSPATH | grep -q "^\(.*:\)*$CLASSBASE\(:.*\)*"; then
# class is not in CLASSPATH, so prepend dir of class to CLASSPATH
if [ -z "${CLASSPATH}" ] ; then
export CLASSPATH=$CLASSBASE
else
export CLASSPATH=$CLASSBASE:$CLASSPATH
fi
fi
fi
shift
/usr/bin/java $FQCLASS "$@"
====================== Cut here ===================
shift
/usr/bin/java $FQCLASS "$@"
javaclassname.c::
====================== Cut here ===================
/* javaclassname.c
/* javaclassname.c
*
* Extracts the class name from a Java class file; intended for use in a Java
* wrapper of the type supported by the binfmt_misc option in the Linux kernel.
@ -170,57 +179,57 @@ shift
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
#include <sys/types.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
#include <sys/types.h>
/* From Sun's Java VM Specification, as tag entries in the constant pool. */
/* From Sun's Java VM Specification, as tag entries in the constant pool. */
#define CP_UTF8 1
#define CP_INTEGER 3
#define CP_FLOAT 4
#define CP_LONG 5
#define CP_DOUBLE 6
#define CP_CLASS 7
#define CP_STRING 8
#define CP_FIELDREF 9
#define CP_METHODREF 10
#define CP_INTERFACEMETHODREF 11
#define CP_NAMEANDTYPE 12
#define CP_METHODHANDLE 15
#define CP_METHODTYPE 16
#define CP_INVOKEDYNAMIC 18
#define CP_UTF8 1
#define CP_INTEGER 3
#define CP_FLOAT 4
#define CP_LONG 5
#define CP_DOUBLE 6
#define CP_CLASS 7
#define CP_STRING 8
#define CP_FIELDREF 9
#define CP_METHODREF 10
#define CP_INTERFACEMETHODREF 11
#define CP_NAMEANDTYPE 12
#define CP_METHODHANDLE 15
#define CP_METHODTYPE 16
#define CP_INVOKEDYNAMIC 18
/* Define some commonly used error messages */
/* Define some commonly used error messages */
#define seek_error() error("%s: Cannot seek\n", program)
#define corrupt_error() error("%s: Class file corrupt\n", program)
#define eof_error() error("%s: Unexpected end of file\n", program)
#define utf8_error() error("%s: Only ASCII 1-255 supported\n", program);
#define seek_error() error("%s: Cannot seek\n", program)
#define corrupt_error() error("%s: Class file corrupt\n", program)
#define eof_error() error("%s: Unexpected end of file\n", program)
#define utf8_error() error("%s: Only ASCII 1-255 supported\n", program);
char *program;
char *program;
long *pool;
long *pool;
u_int8_t read_8(FILE *classfile);
u_int16_t read_16(FILE *classfile);
void skip_constant(FILE *classfile, u_int16_t *cur);
void error(const char *format, ...);
int main(int argc, char **argv);
u_int8_t read_8(FILE *classfile);
u_int16_t read_16(FILE *classfile);
void skip_constant(FILE *classfile, u_int16_t *cur);
void error(const char *format, ...);
int main(int argc, char **argv);
/* Reads in an unsigned 8-bit integer. */
u_int8_t read_8(FILE *classfile)
{
/* Reads in an unsigned 8-bit integer. */
u_int8_t read_8(FILE *classfile)
{
int b = fgetc(classfile);
if(b == EOF)
eof_error();
return (u_int8_t)b;
}
}
/* Reads in an unsigned 16-bit integer. */
u_int16_t read_16(FILE *classfile)
{
/* Reads in an unsigned 16-bit integer. */
u_int16_t read_16(FILE *classfile)
{
int b1, b2;
b1 = fgetc(classfile);
if(b1 == EOF)
@ -229,11 +238,11 @@ u_int16_t read_16(FILE *classfile)
if(b2 == EOF)
eof_error();
return (u_int16_t)((b1 << 8) | b2);
}
}
/* Reads in a value from the constant pool. */
void skip_constant(FILE *classfile, u_int16_t *cur)
{
/* Reads in a value from the constant pool. */
void skip_constant(FILE *classfile, u_int16_t *cur)
{
u_int16_t len;
int seekerr = 1;
pool[*cur] = ftell(classfile);
@ -270,19 +279,19 @@ void skip_constant(FILE *classfile, u_int16_t *cur)
}
if(seekerr)
seek_error();
}
}
void error(const char *format, ...)
{
void error(const char *format, ...)
{
va_list ap;
va_start(ap, format);
vfprintf(stderr, format, ap);
va_end(ap);
exit(1);
}
}
int main(int argc, char **argv)
{
int main(int argc, char **argv)
{
FILE *classfile;
u_int16_t cp_count, i, this_class, classinfo_ptr;
u_int8_t length;
@ -349,19 +358,19 @@ int main(int argc, char **argv)
free(pool);
fclose(classfile);
return 0;
}
====================== Cut here ===================
}
jarwrapper::
#!/bin/bash
# /usr/local/java/bin/jarwrapper - the wrapper for binfmt_misc/jar
java -jar $1
====================== Cut here ===================
#!/bin/bash
# /usr/local/java/bin/jarwrapper - the wrapper for binfmt_misc/jar
Now simply ``chmod +x`` the ``.class``, ``.jar`` and/or ``.html`` files you
want to execute.
java -jar $1
====================== Cut here ===================
Now simply chmod +x the .class, .jar and/or .html files you want to execute.
To add a Java program to your path, best put a symbolic link to the main
.class file into /usr/bin (or another place you like) omitting the .class
extension. The directory containing the original .class file will be
@ -369,7 +378,7 @@ added to your CLASSPATH during execution.
To test your new setup, enter in the following simple Java app, and name
it "HelloWorld.java":
it "HelloWorld.java"::
class HelloWorld {
public static void main(String args[]) {
@ -377,23 +386,28 @@ it "HelloWorld.java":
}
}
Now compile the application with:
Now compile the application with::
javac HelloWorld.java
Set the executable permissions of the binary file, with:
Set the executable permissions of the binary file, with::
chmod 755 HelloWorld.class
And then execute it:
And then execute it::
./HelloWorld.class
To execute Java Jar files, simple chmod the *.jar files to include
the execution bit, then just do
To execute Java Jar files, simply chmod the ``*.jar`` files to include
the execution bit, then just do::
./Application.jar
To execute Java Applets, simple chmod the *.html files to include
the execution bit, then just do
To execute Java Applets, simply chmod the ``*.html`` files to include
the execution bit, then just do::
./Applet.html
@ -401,4 +415,3 @@ originally by Brian A. Lantz, brian@lantz.com
heavily edited for binfmt_misc by Richard Günther
new scripts by Colin J. Watson <cjw44@cam.ac.uk>
added executable Jar file support by Kurt Huwig <kurt@iku-netz.de>

View File

@ -1,5 +1,5 @@
Kernel Parameters
~~~~~~~~~~~~~~~~~
Kernel Parameters
~~~~~~~~~~~~~~~~~
The following is a consolidated list of the kernel parameters as
implemented by the __setup(), core_param() and module_param() macros
@ -14,7 +14,7 @@ environment, others are passed as command line arguments to init.
Everything after "--" is passed as an argument to init.
Module parameters can be specified in two ways: via the kernel command
line with a module name prefix, or via modprobe, e.g.:
line with a module name prefix, or via modprobe, e.g.::
(kernel command line) usbcore.blinkenlights=1
(modprobe command line) modprobe usbcore blinkenlights=1
@ -25,12 +25,16 @@ kernel command line (/proc/cmdline) and collects module parameters
when it loads a module, so the kernel command line can be used for
loadable modules too.
Hyphens (dashes) and underscores are equivalent in parameter names, so
Hyphens (dashes) and underscores are equivalent in parameter names, so::
log_buf_len=1M print-fatal-signals=1
can also be entered as
can also be entered as::
log-buf-len=1M print_fatal_signals=1
Double-quotes can be used to protect spaces in values, e.g.:
Double-quotes can be used to protect spaces in values, e.g.::
param="spaces in here"
cpu lists:
@ -69,12 +73,12 @@ This document may not be entirely up to date and comprehensive. The command
module. Loadable modules, after being loaded into the running kernel, also
reveal their parameters in /sys/module/${modulename}/parameters/. Some of these
parameters may be changed at runtime by the command
"echo -n ${value} > /sys/module/${modulename}/parameters/${parm}".
``echo -n ${value} > /sys/module/${modulename}/parameters/${parm}``.
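
As a concrete sketch, reusing the ``usbcore.blinkenlights`` example from
above (whether a given parameter is actually writable at runtime depends on
the module)::

    cat /sys/module/usbcore/parameters/blinkenlights
    echo -n 1 > /sys/module/usbcore/parameters/blinkenlights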
The parameters listed below are only valid if certain kernel build options were
enabled and if respective hardware is present. The text in square brackets at
the beginning of each description states the restrictions within which a
parameter is applicable:
parameter is applicable::
ACPI ACPI support is enabled.
AGP AGP (Accelerated Graphics Port) is enabled.
@ -165,7 +169,7 @@ parameter is applicable:
X86_UV SGI UV support is enabled.
XEN Xen support is enabled
In addition, the following text indicates that the option:
In addition, the following text indicates that the option::
BUGS= Relates to possible processor bugs on the said processor.
KNL Is a kernel start-up parameter.
@ -194,7 +198,7 @@ and is between 256 and 4096 characters. It is defined in the file
Finally, the [KMG] suffix is commonly described after a number of kernel
parameter values. These 'K', 'M', and 'G' letters represent the _binary_
multipliers 'Kilo', 'Mega', and 'Giga', equalling 2^10, 2^20, and 2^30
bytes respectively. Such letter suffixes can also be entirely omitted.
bytes respectively. Such letter suffixes can also be entirely omitted::
acpi= [HW,ACPI,X86,ARM64]
@ -811,7 +815,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
bits, and "f" is flow control ("r" for RTS or
omit it). Default is "9600n8".
See Documentation/serial-console.txt for more
See Documentation/admin-guide/serial-console.rst for more
information. See
Documentation/networking/netconsole.txt for an
alternative.
@ -2235,7 +2239,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
mce=option [X86-64] See Documentation/x86/x86_64/boot-options.txt
md= [HW] RAID subsystems devices and level
See Documentation/md.txt.
See Documentation/admin-guide/md.rst.
mdacon= [MDA]
Format: <first>,<last>
@ -2545,7 +2549,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
will be sent.
The default is to send the implementation identification
information.
nfs.recover_lost_locks =
[NFSv4] Attempt to recover locks that were lost due
to a lease timeout on the server. Please note that
@ -3318,7 +3322,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
r128= [HW,DRM]
raid= [HW,RAID]
See Documentation/md.txt.
See Documentation/admin-guide/md.rst.
ramdisk_size= [RAM] Sizes of RAM disks in kilobytes
See Documentation/blockdev/ramdisk.txt.
@ -4197,7 +4201,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
See also Documentation/input/joystick-parport.txt
udbg-immortal [PPC] When debugging early kernel crashes that
happen after console_init() and before a proper
happen after console_init() and before a proper
console driver takes over, this boot options might
help "seeing" what's going on.
@ -4565,8 +4569,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Format:
<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
______________________________________________________________________
------------------------
TODO:
Todo
----
Add more DRM drivers.

View File

@ -1,42 +1,77 @@
Tools that manage md devices can be found at
http://www.kernel.org/pub/linux/utils/raid/
RAID arrays
===========
Boot time assembly of RAID arrays
---------------------------------
Tools that manage md devices can be found at
http://www.kernel.org/pub/linux/utils/raid/
You can boot with your md device with the following kernel command
lines:
for old raid arrays without persistent superblocks:
for old raid arrays without persistent superblocks::
md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
for raid arrays with persistent superblocks
for raid arrays with persistent superblocks::
md=<md device no.>,dev0,dev1,...,devn
or, to assemble a partitionable array:
or, to assemble a partitionable array::
md=d<md device no.>,dev0,dev1,...,devn
md device no. = the number of the md device ...
0 means md0,
1 md1,
2 md2,
3 md3,
4 md4
raid level = -1 linear mode
0 striped mode
other modes are only supported with persistent super blocks
``md device no.``
+++++++++++++++++
chunk size factor = (raid-0 and raid-1 only)
Set the chunk size as 4k << n.
fault level = totally ignored
dev0-devn: e.g. /dev/hda1,/dev/hdc1,/dev/sda1,/dev/sdb1
A possible loadlin line (Harald Hoyer <HarryH@Royal.Net>) looks like this:
The number of the md device
e:\loadlin\loadlin e:\zimage root=/dev/md0 md=0,0,4,0,/dev/hdb2,/dev/hdc3 ro
================= =========
``md device no.`` device
================= =========
0 md0
1 md1
2 md2
3 md3
4 md4
================= =========
``raid level``
++++++++++++++
level of the RAID array
=============== =============
``raid level`` level
=============== =============
-1 linear mode
0 striped mode
=============== =============
other modes are only supported with persistent super blocks
``chunk size factor``
+++++++++++++++++++++
(raid-0 and raid-1 only)
Set the chunk size as 4k << n.
``fault level``
+++++++++++++++
Totally ignored
``dev0`` to ``devn``
++++++++++++++++++++
e.g. ``/dev/hda1``, ``/dev/hdc1``, ``/dev/sda1``, ``/dev/sdb1``
A possible loadlin line (Harald Hoyer <HarryH@Royal.Net>) looks like this::
e:\loadlin\loadlin e:\zimage root=/dev/md0 md=0,0,4,0,/dev/hdb2,/dev/hdc3 ro
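
Similarly, a sketch of a boot-loader append line assembling a
persistent-superblock array as ``md0`` (the component devices here are only
examples)::

    append = "md=0,/dev/sda1,/dev/sdb1 root=/dev/md0 ro"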
Boot time autodetection of RAID arrays
@ -45,10 +80,10 @@ Boot time autodetection of RAID arrays
When md is compiled into the kernel (not as module), partitions of
type 0xfd are scanned and automatically assembled into RAID arrays.
This autodetection may be suppressed with the kernel parameter
"raid=noautodetect". As of kernel 2.6.9, only drives with a type 0
``raid=noautodetect``. As of kernel 2.6.9, only drives with a type 0
superblock can be autodetected and run at boot time.
The kernel parameter "raid=partitionable" (or "raid=part") means
The kernel parameter ``raid=partitionable`` (or ``raid=part``) means
that all auto-detected arrays are assembled as partitionable.
Boot time assembly of degraded/dirty arrays
@ -56,22 +91,23 @@ Boot time assembly of degraded/dirty arrays
If a raid5 or raid6 array is both dirty and degraded, it could have
undetectable data corruption. This is because the fact that it is
'dirty' means that the parity cannot be trusted, and the fact that it
``dirty`` means that the parity cannot be trusted, and the fact that it
is degraded means that some datablocks are missing and cannot reliably
be reconstructed (due to no parity).
For this reason, md will normally refuse to start such an array. This
requires the sysadmin to take action to explicitly start the array
despite possible corruption. This is normally done with
despite possible corruption. This is normally done with::
mdadm --assemble --force ....
This option is not really available if the array has the root
filesystem on it. In order to support this booting from such an
array, md supports a module parameter "start_dirty_degraded" which,
array, md supports a module parameter ``start_dirty_degraded`` which,
when set to 1, bypasses the checks and allows dirty degraded
arrays to be started.
So, to boot with a root filesystem of a dirty degraded raid[56], use
So, to boot with a root filesystem of a dirty degraded raid 5 or 6, use::
md-mod.start_dirty_degraded=1
@ -80,30 +116,30 @@ Superblock formats
------------------
The md driver can support a variety of different superblock formats.
Currently, it supports superblock formats "0.90.0" and the "md-1" format
Currently, it supports superblock formats ``0.90.0`` and the ``md-1`` format
introduced in the 2.5 development series.
The kernel will autodetect which format superblock is being used.
Superblock format '0' is treated differently to others for legacy
Superblock format ``0`` is treated differently to others for legacy
reasons - it is the original superblock format.
General Rules - apply for all superblock formats
------------------------------------------------
An array is 'created' by writing appropriate superblocks to all
An array is ``created`` by writing appropriate superblocks to all
devices.
It is 'assembled' by associating each of these devices with an
It is ``assembled`` by associating each of these devices with an
particular md virtual device. Once it is completely assembled, it can
be accessed.
An array should be created by a user-space tool. This will write
superblocks to all devices. It will usually mark the array as
'unclean', or with some devices missing so that the kernel md driver
can create appropriate redundancy (copying in raid1, parity
calculation in raid4/5).
``unclean``, or with some devices missing so that the kernel md driver
can create appropriate redundancy (copying in raid 1, parity
calculation in raid 4/5).
When an array is assembled, it is first initialized with the
SET_ARRAY_INFO ioctl. This contains, in particular, a major and minor
@ -126,13 +162,12 @@ Devices that have failed or are not yet active can be detached from an
array using HOT_REMOVE_DISK.
Specific Rules that apply to format-0 super block arrays, and
arrays with no superblock (non-persistent).
-------------------------------------------------------------
Specific Rules that apply to format-0 super block arrays, and arrays with no superblock (non-persistent)
--------------------------------------------------------------------------------------------------------
An array can be 'created' by describing the array (level, chunksize
etc) in a SET_ARRAY_INFO ioctl. This must have major_version==0 and
raid_disks != 0.
An array can be ``created`` by describing the array (level, chunksize
etc) in a SET_ARRAY_INFO ioctl. This must have ``major_version==0`` and
``raid_disks != 0``.
Then uninitialized devices can be added with ADD_NEW_DISK. The
structure passed to ADD_NEW_DISK must specify the state of the device
@ -142,24 +177,26 @@ Once started with RUN_ARRAY, uninitialized spares can be added with
HOT_ADD_DISK.
MD devices in sysfs
-------------------
md devices appear in sysfs (/sys) as regular block devices,
e.g.
md devices appear in sysfs (``/sys``) as regular block devices,
e.g.::
/sys/block/md0
Each 'md' device will contain a subdirectory called 'md' which
Each ``md`` device will contain a subdirectory called ``md`` which
contains further md-specific information about the device.
All md devices contain:
level
a text file indicating the 'raid level'. e.g. raid0, raid1,
a text file indicating the ``raid level``. e.g. raid0, raid1,
raid5, linear, multipath, faulty.
If no raid level has been set yet (array is still being
assembled), the value will reflect whatever has been written
to it, which may be a name like the above, or may be a number
such as '0', '5', etc.
such as ``0``, ``5``, etc.
raid_disks
a text file with a simple number indicating the number of devices
@ -172,10 +209,10 @@ All md devices contain:
A change to this attribute will not be permitted if it would
reduce the size of the array. To reduce the number of drives
in an e.g. raid5, the array size must first be reduced by
setting the 'array_size' attribute.
setting the ``array_size`` attribute.
chunk_size
This is the size in bytes for 'chunks' and is only relevant to
This is the size in bytes for ``chunks`` and is only relevant to
raid levels that involve striping (0,4,5,6,10). The address space
of the array is conceptually divided into chunks and consecutive
chunks are striped onto neighbouring devices.
@ -183,7 +220,7 @@ All md devices contain:
of 2. This can only be set while assembling an array
layout
The "layout" for the array for the particular level. This is
The ``layout`` for the array for the particular level. This is
simply a number that is interpreted differently by different
levels. It can be written while assembling an array.
@ -193,22 +230,24 @@ All md devices contain:
devices. Writing a number (in Kilobytes) which is less than
the available size will set the size. Any reconfiguration of the
array (e.g. adding devices) will not cause the size to change.
Writing the word 'default' will cause the effective size of the
Writing the word ``default`` will cause the effective size of the
array to be whatever size is actually available based on
'level', 'chunk_size' and 'component_size'.
``level``, ``chunk_size`` and ``component_size``.
This can be used to reduce the size of the array before reducing
the number of devices in a raid4/5/6, or to support external
metadata formats which mandate such clipping.
reshape_position
This is either "none" or a sector number within the devices of
the array where "reshape" is up to. If this is set, the three
This is either ``none`` or a sector number within the devices of
the array where ``reshape`` is up to. If this is set, the three
attributes mentioned above (raid_disks, chunk_size, layout) can
potentially have 2 values, an old and a new value. If these
values differ, reading the attribute returns
values differ, reading the attribute returns::
new (old)
and writing will effect the 'new' value, leaving the 'old'
and writing will affect the ``new`` value, leaving the ``old``
unchanged.
component_size
@ -223,9 +262,9 @@ All md devices contain:
metadata_version
This indicates the format that is being used to record metadata
about the array. It can be 0.90 (traditional format), 1.0, 1.1,
1.2 (newer format in varying locations) or "none" indicating that
1.2 (newer format in varying locations) or ``none`` indicating that
the kernel isn't managing metadata at all.
Alternately it can be "external:" followed by a string which
Alternately it can be ``external:`` followed by a string which
is set by user-space. This indicates that metadata is managed
by a user-space program. Any device failure or other event that
requires a metadata update will cause array activity to be
@ -233,9 +272,9 @@ All md devices contain:
resync_start
The point at which resync should start. If no resync is needed,
this will be a very large number (or 'none' since 2.6.30-rc1). At
this will be a very large number (or ``none`` since 2.6.30-rc1). At
array creation it will default to 0, though starting the array as
'clean' will set it much larger.
``clean`` will set it much larger.
new_dev
This file can be written but not read. The value written should
@ -246,10 +285,10 @@ All md devices contain:
safe_mode_delay
When an md array has seen no write requests for a certain period
of time, it will be marked as 'clean'. When another write
request arrives, the array is marked as 'dirty' before the write
commences. This is known as 'safe_mode'.
The 'certain period' is controlled by this file which stores the
of time, it will be marked as ``clean``. When another write
request arrives, the array is marked as ``dirty`` before the write
commences. This is known as ``safe_mode``.
The ``certain period`` is controlled by this file which stores the
period as a number of seconds. The default is 200msec (0.200).
Writing a value of 0 disables safemode.
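
For example, the current value can be inspected and changed from the
shell (``md0`` is just an example array)::

    cat /sys/block/md0/md/safe_mode_delay
    echo 5 > /sys/block/md0/md/safe_mode_delay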
@ -260,38 +299,50 @@ All md devices contain:
cannot be explicitly set, and some transitions are not allowed.
Select/poll works on this file. All changes except between
active_idle and active (which can be frequent and are not
very interesting) are notified. active->active_idle is
reported if the metadata is externally managed.
active_idle and active (which can be frequent and are not
very interesting) are notified. active->active_idle is
reported if the metadata is externally managed.
clear
No devices, no size, no level
Writing is equivalent to STOP_ARRAY ioctl
inactive
May have some settings, but array is not active
all IO results in error
all IO results in error
When written, doesn't tear down array, but just stops it
suspended (not supported yet)
All IO requests will block. The array can be reconfigured.
Writing this, if accepted, will block until array is quiescent
readonly
no resync can happen. no superblocks get written.
write requests fail
read-auto
like readonly, but behaves like 'clean' on a write request.
clean - no pending writes, but otherwise active.
Write requests fail
read-auto
like readonly, but behaves like ``clean`` on a write request.
clean
no pending writes, but otherwise active.
When written to inactive array, starts without resync
If a write request arrives then
if metadata is known, mark 'dirty' and switch to 'active'.
if not known, block and switch to write-pending
if metadata is known, mark ``dirty`` and switch to ``active``.
if not known, block and switch to write-pending
If written to an active array that has pending writes, then fails.
active
fully active: IO and resync can be happening.
When written to inactive array, starts with resync
write-pending
clean, but writes are blocked waiting for 'active' to be written.
clean, but writes are blocked waiting for ``active`` to be written.
active-idle
like active, but no writes have been seen for a while (safe_mode_delay).
@ -299,57 +350,71 @@ All md devices contain:
bitmap/location
This indicates where the write-intent bitmap for the array is
stored.
It can be one of "none", "file" or "[+-]N".
"file" may later be extended to "file:/file/name"
"[+-]N" means that many sectors from the start of the metadata.
This is replicated on all devices. For arrays with externally
managed metadata, the offset is from the beginning of the
device.
It can be one of ``none``, ``file`` or ``[+-]N``.
``file`` may later be extended to ``file:/file/name``
``[+-]N`` means that many sectors from the start of the metadata.
This is replicated on all devices. For arrays with externally
managed metadata, the offset is from the beginning of the
device.
bitmap/chunksize
The size, in bytes, of the chunk which will be represented by a
single bit. For RAID456, it is a portion of an individual
device. For RAID10, it is a portion of the array. For RAID1, it
is both (they come to the same thing).
bitmap/time_base
The time, in seconds, between looking for bits in the bitmap to
be cleared. In the current implementation, a bit will be cleared
between 2 and 3 times "time_base" after all the covered blocks
between 2 and 3 times ``time_base`` after all the covered blocks
are known to be in-sync.
bitmap/backlog
When write-mostly devices are active in a RAID1, write requests
to those devices proceed in the background - the filesystem (or
other user of the device) does not have to wait for them.
'backlog' sets a limit on the number of concurrent background
``backlog`` sets a limit on the number of concurrent background
writes. If there are more than this, new writes will be
synchronous.
bitmap/metadata
This can be either 'internal' or 'external'.
'internal' is the default and means the metadata for the bitmap
is stored in the first 256 bytes of the allocated space and is
managed by the md module.
'external' means that bitmap metadata is managed externally to
the kernel (i.e. by some userspace program)
This can be either ``internal`` or ``external``.
``internal``
is the default and means the metadata for the bitmap
is stored in the first 256 bytes of the allocated space and is
managed by the md module.
``external``
means that bitmap metadata is managed externally to
the kernel (i.e. by some userspace program)
bitmap/can_clear
This is either 'true' or 'false'. If 'true', then bits in the
This is either ``true`` or ``false``. If ``true``, then bits in the
bitmap will be cleared when the corresponding blocks are thought
to be in-sync. If 'false', bits will never be cleared.
This is automatically set to 'false' if a write happens on a
to be in-sync. If ``false``, bits will never be cleared.
This is automatically set to ``false`` if a write happens on a
degraded array, or if the array becomes degraded during a write.
When metadata is managed externally, it should be set to true
once the array becomes non-degraded, and this fact has been
recorded in the metadata.
As component devices are added to an md array, they appear in the 'md'
directory as new directories named
As component devices are added to an md array, they appear in the ``md``
directory as new directories named::
dev-XXX
where XXX is a name that the kernel knows for the device, e.g. hdb1.
where ``XXX`` is a name that the kernel knows for the device, e.g. hdb1.
Each directory contains:
block
a symlink to the block device in /sys/block, e.g.
a symlink to the block device in /sys/block, e.g.::
/sys/block/md0/md/dev-hdb1/block -> ../../../../block/hdb/hdb1
super
@ -358,51 +423,83 @@ Each directory contains:
state
A file recording the current state of the device in the array
which can be a comma separated list of
faulty - device has been kicked from active use due to
a detected fault, or it has unacknowledged bad
blocks
in_sync - device is a fully in-sync member of the array
writemostly - device will only be subject to read
requests if there are no other options.
This applies only to raid1 arrays.
blocked - device has failed, and the failure hasn't been
acknowledged yet by the metadata handler.
Writes that would write to this device if
it were not faulty are blocked.
spare - device is working, but not a full member.
This includes spares that are in the process
of being recovered to
write_error - device has ever seen a write error.
want_replacement - device is (mostly) working but probably
should be replaced, either due to errors or
due to user request.
replacement - device is a replacement for another active
device with same raid_disk.
which can be a comma separated list of:
faulty
device has been kicked from active use due to
a detected fault, or it has unacknowledged bad
blocks
in_sync
device is a fully in-sync member of the array
writemostly
device will only be subject to read
requests if there are no other options.
This applies only to raid1 arrays.
blocked
device has failed, and the failure hasn't been
acknowledged yet by the metadata handler.
Writes that would write to this device if
it were not faulty are blocked.
spare
device is working, but not a full member.
This includes spares that are in the process
of being recovered to
write_error
device has ever seen a write error.
want_replacement
device is (mostly) working but probably
should be replaced, either due to errors or
due to user request.
replacement
device is a replacement for another active
device with same raid_disk.
This list may grow in future.
This can be written to.
Writing "faulty" simulates a failure on the device.
Writing "remove" removes the device from the array.
Writing "writemostly" sets the writemostly flag.
Writing "-writemostly" clears the writemostly flag.
Writing "blocked" sets the "blocked" flag.
Writing "-blocked" clears the "blocked" flags and allows writes
to complete and possibly simulates an error.
Writing "in_sync" sets the in_sync flag.
Writing "write_error" sets writeerrorseen flag.
Writing "-write_error" clears writeerrorseen flag.
Writing "want_replacement" is allowed at any time except to a
replacement device or a spare. It sets the flag.
Writing "-want_replacement" is allowed at any time. It clears
the flag.
Writing "replacement" or "-replacement" is only allowed before
starting the array. It sets or clears the flag.
Writing ``faulty`` simulates a failure on the device.
Writing ``remove`` removes the device from the array.
Writing ``writemostly`` sets the writemostly flag.
Writing ``-writemostly`` clears the writemostly flag.
Writing ``blocked`` sets the ``blocked`` flag.
Writing ``-blocked`` clears the ``blocked`` flags and allows writes
to complete and possibly simulates an error.
Writing ``in_sync`` sets the in_sync flag.
Writing ``write_error`` sets writeerrorseen flag.
Writing ``-write_error`` clears writeerrorseen flag.
Writing ``want_replacement`` is allowed at any time except to a
replacement device or a spare. It sets the flag.
Writing ``-want_replacement`` is allowed at any time. It clears
the flag.
Writing ``replacement`` or ``-replacement`` is only allowed before
starting the array. It sets or clears the flag.
This file responds to select/poll. Any change to 'faulty'
or 'blocked' causes an event.
This file responds to select/poll. Any change to ``faulty``
or ``blocked`` causes an event.
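
A short sketch of exercising these writes from the shell (the array and
component names are only examples)::

    echo faulty > /sys/block/md0/md/dev-sdb1/state
    cat /sys/block/md0/md/dev-sdb1/state
    echo remove > /sys/block/md0/md/dev-sdb1/state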
errors
An approximate count of read errors that have been detected on
@ -417,9 +514,9 @@ Each directory contains:
slot
This gives the role that the device has in the array. It will
either be 'none' if the device is not active in the array
either be ``none`` if the device is not active in the array
(i.e. is a spare or has failed) or an integer less than the
'raid_disks' number for the array indicating which position
``raid_disks`` number for the array indicating which position
it currently fills. This can only be set while assembling an
array. A device for which this is set is assumed to be working.
@ -437,7 +534,7 @@ Each directory contains:
written, it will be rejected.
recovery_start
When the device is not 'in_sync', this records the number of
When the device is not ``in_sync``, this records the number of
sectors from the start of the device which are known to be
correct. This is normally zero, but during a recovery
operation it will steadily increase, and if the recovery is
@ -447,21 +544,21 @@ Each directory contains:
This can be set whenever the device is not an active member of
the array, either before the array is activated, or before
the 'slot' is set.
the ``slot`` is set.
Setting this to ``none`` is equivalent to setting ``in_sync``.
Setting to any other value also clears the ``in_sync`` flag.
Setting this to 'none' is equivalent to setting 'in_sync'.
Setting to any other value also clears the 'in_sync' flag.
bad_blocks
This gives the list of all known bad blocks in the form of
start address and length (in sectors respectively). If output
is too big to fit in a page, it will be truncated. Writing
"sector length" to this file adds new acknowledged (i.e.
``sector length`` to this file adds new acknowledged (i.e.
recorded to disk safely) bad blocks.
unacknowledged_bad_blocks
This gives the list of known-but-not-yet-saved-to-disk bad
blocks in the same form of 'bad_blocks'. If output is too big
blocks in the same form of ``bad_blocks``. If output is too big
to fit in a page, it will be truncated. Writing to this file
adds bad blocks without acknowledging them. This is largely
for testing.
@ -469,16 +566,18 @@ Each directory contains:
An active md device will also contain an entry for each active device
in the array. These are named
in the array. These are named::
rdNN
where 'NN' is the position in the array, starting from 0.
where ``NN`` is the position in the array, starting from 0.
So for a 3 drive array there will be rd0, rd1, rd2.
These are symbolic links to the appropriate 'dev-XXX' entry.
Thus, for example,
These are symbolic links to the appropriate ``dev-XXX`` entry.
Thus, for example::
cat /sys/block/md*/md/rd*/state
will show 'in_sync' on every line.
will show ``in_sync`` on every line.
@ -488,50 +587,62 @@ also have
sync_action
a text file that can be used to monitor and control the rebuild
process. It contains one word which can be one of:
resync - redundancy is being recalculated after unclean
shutdown or creation
recover - a hot spare is being built to replace a
failed/missing device
idle - nothing is happening
check - A full check of redundancy was requested and is
happening. This reads all blocks and checks
them. A repair may also happen for some raid
levels.
repair - A full check and repair is happening. This is
similar to 'resync', but was requested by the
user, and the write-intent bitmap is NOT used to
optimise the process.
resync
redundancy is being recalculated after unclean
shutdown or creation
recover
a hot spare is being built to replace a
failed/missing device
idle
nothing is happening
check
A full check of redundancy was requested and is
happening. This reads all blocks and checks
them. A repair may also happen for some raid
levels.
repair
A full check and repair is happening. This is
similar to ``resync``, but was requested by the
user, and the write-intent bitmap is NOT used to
optimise the process.
This file is writable, and each of the strings that could be
read are meaningful for writing.
'idle' will stop an active resync/recovery etc. There is no
guarantee that another resync/recovery may not be automatically
started again, though some event will be needed to trigger
this.
'resync' or 'recovery' can be used to restart the
corresponding operation if it was stopped with 'idle'.
'check' and 'repair' will start the appropriate process
providing the current state is 'idle'.
``idle`` will stop an active resync/recovery etc. There is no
guarantee that another resync/recovery may not be automatically
started again, though some event will be needed to trigger
this.
``resync`` or ``recovery`` can be used to restart the
corresponding operation if it was stopped with ``idle``.
``check`` and ``repair`` will start the appropriate process
providing the current state is ``idle``.
This file responds to select/poll. Any important change in the value
triggers a poll event. Sometimes the value will briefly be
"recover" if a recovery seems to be needed, but cannot be
achieved. In that case, the transition to "recover" isn't
``recover`` if a recovery seems to be needed, but cannot be
achieved. In that case, the transition to ``recover`` isn't
notified, but the transition away is.
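
For example, a redundancy check can be requested and its outcome inspected
like this (``md0`` is an example)::

    echo check > /sys/block/md0/md/sync_action
    cat /sys/block/md0/md/sync_action
    cat /sys/block/md0/md/mismatch_cnt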
degraded
This contains a count of the number of devices by which the
arrays is degraded. So an optimal array will show '0'. A
single failed/missing drive will show '1', etc.
array is degraded. So an optimal array will show ``0``. A
single failed/missing drive will show ``1``, etc.
This file responds to select/poll, any increase or decrease
in the count of missing devices will trigger an event.
mismatch_count
When performing 'check' and 'repair', and possibly when
performing 'resync', md will count the number of errors that are
found. The count in 'mismatch_cnt' is the number of sectors
that were re-written, or (for 'check') would have been
When performing ``check`` and ``repair``, and possibly when
performing ``resync``, md will count the number of errors that are
found. The count in ``mismatch_cnt`` is the number of sectors
that were re-written, or (for ``check``) would have been
re-written. As most raid levels work in units of pages rather
than sectors, this may be larger than the number of actual errors
by a factor of the number of sectors in a page.
@ -542,27 +653,30 @@ also have
would need to check the corresponding blocks. Either individual
numbers or start-end pairs can be written. Multiple numbers
can be separated by a space.
Note that the numbers are 'bit' numbers, not 'block' numbers.
Note that the numbers are ``bit`` numbers, not ``block`` numbers.
They should be scaled by the bitmap_chunksize.
sync_speed_min
sync_speed_max
This are similar to /proc/sys/dev/raid/speed_limit_{min,max}
sync_speed_min, sync_speed_max
These are similar to ``/proc/sys/dev/raid/speed_limit_{min,max}``
however they only apply to the particular array.
If no value has been written to these, or if the word 'system'
If no value has been written to these, or if the word ``system``
is written, then the system-wide value is used. If a value,
in kibibytes-per-second is written, then it is used.
When the files are read, they show the currently active value
followed by "(local)" or "(system)" depending on whether it is
followed by ``(local)`` or ``(system)`` depending on whether it is
a locally set or system-wide value.
sync_completed
This shows the number of sectors that have been completed of
whatever the current sync_action is, followed by the number of
sectors in total that could need to be processed. The two
numbers are separated by a '/' thus effectively showing one
numbers are separated by a ``/`` thus effectively showing one
value, a fraction of the process that is complete.
A 'select' on this attribute will return when resync completes,
A ``select`` on this attribute will return when resync completes,
when it reaches the current sync_max (below) and possibly at
other times.
@ -570,26 +684,24 @@ also have
This shows the current actual speed, in K/sec, of the current
sync_action. It is averaged over the last 30 seconds.
suspend_lo
suspend_hi
suspend_lo, suspend_hi
The two values, given as numbers of sectors, indicate a range
within the array where IO will be blocked. This is currently
only supported for raid4/5/6.
sync_min
sync_max
sync_min, sync_max
The two values, given as numbers of sectors, indicate a range
within the array where 'check'/'repair' will operate. Must be
a multiple of chunk_size. When it reaches "sync_max" it will
within the array where ``check``/``repair`` will operate. Must be
a multiple of chunk_size. When it reaches ``sync_max`` it will
pause, rather than complete.
You can use 'select' or 'poll' on "sync_completed" to wait for
You can use ``select`` or ``poll`` on ``sync_completed`` to wait for
that number to reach sync_max. Then you can either increase
"sync_max", or can write 'idle' to "sync_action".
``sync_max``, or can write ``idle`` to ``sync_action``.
The value of 'max' for "sync_max" effectively disables the limit.
The value of ``max`` for ``sync_max`` effectively disables the limit.
When a resync is active, the value can only ever be increased,
never decreased.
The value of '0' is the minimum for "sync_min".
The value of ``0`` is the minimum for ``sync_min``.
@ -598,13 +710,15 @@ personality module that manages it.
These are specific to the implementation of the module and could
change substantially if the implementation changes.
These currently include
These currently include:
stripe_cache_size (currently raid5 only)
number of entries in the stripe cache. This is writable, but
there are upper and lower limits (32768, 17). Default is 256.
stripe_cache_active (currently raid5 only)
number of active entries in the stripe cache
preread_bypass_threshold (currently raid5 only)
number of times a stripe requiring preread will be bypassed by
a stripe that does not require preread. For fairness defaults

View File

@ -1,22 +1,21 @@
==============================
KERNEL MODULE SIGNING FACILITY
==============================
Kernel module signing facility
------------------------------
CONTENTS
- Overview.
- Configuring module signing.
- Generating signing keys.
- Public keys in the kernel.
- Manually signing modules.
- Signed modules and stripping.
- Loading signed modules.
- Non-valid signatures and unsigned modules.
- Administering/protecting the private key.
.. CONTENTS
..
.. - Overview.
.. - Configuring module signing.
.. - Generating signing keys.
.. - Public keys in the kernel.
.. - Manually signing modules.
.. - Signed modules and stripping.
.. - Loading signed modules.
.. - Non-valid signatures and unsigned modules.
.. - Administering/protecting the private key.
========
OVERVIEW
Overview
========
The kernel module signing facility cryptographically signs modules during
@ -36,17 +35,19 @@ SHA-512 (the algorithm is selected by data in the signature).
==========================
CONFIGURING MODULE SIGNING
Configuring module signing
==========================
The module signing facility is enabled by going to the "Enable Loadable Module
Support" section of the kernel configuration and turning on
The module signing facility is enabled by going to the
:menuselection:`Enable Loadable Module Support` section of
the kernel configuration and turning on::
CONFIG_MODULE_SIG "Module signature verification"
This has a number of options available:
(1) "Require modules to be validly signed" (CONFIG_MODULE_SIG_FORCE)
(1) :menuselection:`Require modules to be validly signed`
(``CONFIG_MODULE_SIG_FORCE``)
This specifies how the kernel should deal with a module that has a
signature for which the key is not known or a module that is unsigned.
@ -64,35 +65,39 @@ This has a number of options available:
cannot be parsed, it will be rejected out of hand.
(2) "Automatically sign all modules" (CONFIG_MODULE_SIG_ALL)
(2) :menuselection:`Automatically sign all modules`
(``CONFIG_MODULE_SIG_ALL``)
If this is on then modules will be automatically signed during the
modules_install phase of a build. If this is off, then the modules must
be signed manually using:
be signed manually using::
scripts/sign-file
(3) "Which hash algorithm should modules be signed with?"
(3) :menuselection:`Which hash algorithm should modules be signed with?`
This presents a choice of which hash algorithm the installation phase will
sign the modules with:
CONFIG_MODULE_SIG_SHA1 "Sign modules with SHA-1"
CONFIG_MODULE_SIG_SHA224 "Sign modules with SHA-224"
CONFIG_MODULE_SIG_SHA256 "Sign modules with SHA-256"
CONFIG_MODULE_SIG_SHA384 "Sign modules with SHA-384"
CONFIG_MODULE_SIG_SHA512 "Sign modules with SHA-512"
=============================== ==========================================
``CONFIG_MODULE_SIG_SHA1`` :menuselection:`Sign modules with SHA-1`
``CONFIG_MODULE_SIG_SHA224`` :menuselection:`Sign modules with SHA-224`
``CONFIG_MODULE_SIG_SHA256`` :menuselection:`Sign modules with SHA-256`
``CONFIG_MODULE_SIG_SHA384`` :menuselection:`Sign modules with SHA-384`
``CONFIG_MODULE_SIG_SHA512`` :menuselection:`Sign modules with SHA-512`
=============================== ==========================================
The algorithm selected here will also be built into the kernel (rather
than being a module) so that modules signed with that algorithm can have
their signatures checked without causing a dependency loop.
(4) "File name or PKCS#11 URI of module signing key" (CONFIG_MODULE_SIG_KEY)
(4) :menuselection:`File name or PKCS#11 URI of module signing key`
(``CONFIG_MODULE_SIG_KEY``)
Setting this option to something other than its default of
"certs/signing_key.pem" will disable the autogeneration of signing keys
``certs/signing_key.pem`` will disable the autogeneration of signing keys
and allow the kernel modules to be signed with a key of your choosing.
The string provided should identify a file containing both a private key
and its corresponding X.509 certificate in PEM form, or — on systems where
@ -102,10 +107,11 @@ This has a number of options available:
If the PEM file containing the private key is encrypted, or if the
PKCS#11 token requires a PIN, this can be provided at build time by
means of the KBUILD_SIGN_PIN variable.
means of the ``KBUILD_SIGN_PIN`` variable.
(5) "Additional X.509 keys for default system keyring" (CONFIG_SYSTEM_TRUSTED_KEYS)
(5) :menuselection:`Additional X.509 keys for default system keyring`
(``CONFIG_SYSTEM_TRUSTED_KEYS``)
This option can be set to the filename of a PEM-encoded file containing
additional certificates which will be included in the system keyring by
@ -116,7 +122,7 @@ packages to the kernel build processes for the tool that does the signing.
=======================
GENERATING SIGNING KEYS
Generating signing keys
=======================
Cryptographic keypairs are required to generate and check signatures. A
@ -126,14 +132,14 @@ it can be deleted or stored securely. The public key gets built into the
kernel so that it can be used to check the signatures as the modules are
loaded.
Under normal conditions, when CONFIG_MODULE_SIG_KEY is unchanged from its
Under normal conditions, when ``CONFIG_MODULE_SIG_KEY`` is unchanged from its
default, the kernel build will automatically generate a new keypair using
openssl if one does not exist in the file:
openssl if one does not exist in the file::
certs/signing_key.pem
during the building of vmlinux (the public part of the key needs to be built
into vmlinux) using parameters in the:
into vmlinux) using parameters in the::
certs/x509.genkey
@ -142,14 +148,14 @@ file (which is also generated if it does not already exist).
It is strongly recommended that you provide your own x509.genkey file.
Most notably, in the x509.genkey file, the req_distinguished_name section
should be altered from the default:
should be altered from the default::
[ req_distinguished_name ]
#O = Unspecified company
CN = Build time autogenerated kernel key
#emailAddress = unspecified.user@unspecified.company
The generated RSA key size can also be set with:
The generated RSA key size can also be set with::
[ req ]
default_bits = 4096
@ -158,23 +164,23 @@ The generated RSA key size can also be set with:
It is also possible to manually generate the key private/public files using the
x509.genkey key generation configuration file in the root node of the Linux
kernel sources tree and the openssl command. The following is an example to
generate the public/private key files:
generate the public/private key files::
openssl req -new -nodes -utf8 -sha256 -days 36500 -batch -x509 \
-config x509.genkey -outform PEM -out kernel_key.pem \
-keyout kernel_key.pem
The full pathname for the resulting kernel_key.pem file can then be specified
in the CONFIG_MODULE_SIG_KEY option, and the certificate and key therein will
in the ``CONFIG_MODULE_SIG_KEY`` option, and the certificate and key therein will
be used instead of an autogenerated keypair.
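As an illustration (the path below is hypothetical), the resulting
configuration entry might then look like::

    CONFIG_MODULE_SIG_KEY="/path/to/kernel_key.pem"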
=========================
PUBLIC KEYS IN THE KERNEL
Public keys in the kernel
=========================
The kernel contains a ring of public keys that can be viewed by root. They're
in a keyring called ".system_keyring" that can be seen by:
in a keyring called ".system_keyring" that can be seen by::
[root@deneb ~]# cat /proc/keys
...
@ -184,27 +190,27 @@ in a keyring called ".system_keyring" that can be seen by:
Beyond the public key generated specifically for module signing, additional
trusted certificates can be provided in a PEM-encoded file referenced by the
CONFIG_SYSTEM_TRUSTED_KEYS configuration option.
``CONFIG_SYSTEM_TRUSTED_KEYS`` configuration option.
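For instance (the file name below is purely illustrative)::

    CONFIG_SYSTEM_TRUSTED_KEYS="extra_certificates.pem"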
Further, the architecture code may take public keys from a hardware store and
add those in also (e.g. from the UEFI key database).
Finally, it is possible to add additional public keys by doing:
Finally, it is possible to add additional public keys by doing::
keyctl padd asymmetric "" [.system_keyring-ID] <[key-file]
e.g.:
e.g.::
keyctl padd asymmetric "" 0x223c7853 <my_public_key.x509
Note, however, that the kernel will only permit keys to be added to
.system_keyring _if_ the new key's X.509 wrapper is validly signed by a key
``.system_keyring`` **if** the new key's X.509 wrapper is validly signed by a key
that is already resident in the .system_keyring at the time the key was added.
=========================
MANUALLY SIGNING MODULES
=========================
========================
Manually signing modules
========================
To manually sign a module, use the scripts/sign-file tool available in
the Linux kernel source tree. The script requires 4 arguments:
@ -214,7 +220,7 @@ the Linux kernel source tree. The script requires 4 arguments:
3. The public key filename
4. The kernel module to be signed
The following is an example to sign a kernel module:
The following is an example to sign a kernel module::
scripts/sign-file sha512 kernel-signkey.priv \
kernel-signkey.x509 module.ko
@ -228,11 +234,11 @@ $KBUILD_SIGN_PIN environment variable.
============================
SIGNED MODULES AND STRIPPING
Signed modules and stripping
============================
A signed module has a digital signature simply appended at the end. The string
"~Module signature appended~." at the end of the module's file confirms that a
``~Module signature appended~.`` at the end of the module's file confirms that a
signature is present but it does not confirm that the signature is valid!
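A quick, non-authoritative check for the marker (a sketch; ``module.ko``
stands for any signed module) is to print the last printable string in
the file::

    strings module.ko | tail -n 1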
Signed modules are BRITTLE as the signature is outside of the defined ELF
@ -242,19 +248,19 @@ debug information present at the time of signing.
======================
LOADING SIGNED MODULES
Loading signed modules
======================
Modules are loaded with insmod, modprobe, init_module() or finit_module(),
exactly as for unsigned modules as no processing is done in userspace. The
signature checking is all done within the kernel.
Modules are loaded with insmod, modprobe, ``init_module()`` or
``finit_module()``, exactly as for unsigned modules as no processing is
done in userspace. The signature checking is all done within the kernel.
=========================================
NON-VALID SIGNATURES AND UNSIGNED MODULES
Non-valid signatures and unsigned modules
=========================================
If CONFIG_MODULE_SIG_FORCE is enabled or module.sig_enforce=1 is supplied on
If ``CONFIG_MODULE_SIG_FORCE`` is enabled or ``module.sig_enforce=1`` is supplied on
the kernel command line, the kernel will only load validly signed modules
for which it has a public key. Otherwise, it will also load modules that are
unsigned. Any module for which the kernel has a key, but which proves to have
@ -264,7 +270,7 @@ Any module that has an unparseable signature will be rejected.
=========================================
ADMINISTERING/PROTECTING THE PRIVATE KEY
Administering/protecting the private key
=========================================
Since the private key is used to sign modules, viruses and malware could use
@ -275,5 +281,5 @@ in the root node of the kernel source tree.
If you use the same private key to sign modules for multiple kernel
configurations, you must ensure that the module version information is
sufficient to prevent loading a module into a different kernel. Either
set CONFIG_MODVERSIONS=y or ensure that each configuration has a different
kernel release string by changing EXTRAVERSION or CONFIG_LOCALVERSION.
set ``CONFIG_MODVERSIONS=y`` or ensure that each configuration has a different
kernel release string by changing ``EXTRAVERSION`` or ``CONFIG_LOCALVERSION``.
View File
@ -1,5 +1,5 @@
Mono(tm) Binary Kernel Support for Linux
-----------------------------------------
Mono(tm) Binary Kernel Support for Linux
-----------------------------------------
To configure Linux to automatically execute Mono-based .NET binaries
(in the form of .exe files) without the need to use the mono CLR
@ -19,22 +19,22 @@ other program after you have done the following:
http://www.go-mono.com/compiling.html
Once the Mono CLR support has been installed, just check that
/usr/bin/mono (which could be located elsewhere, for example
/usr/local/bin/mono) is working.
``/usr/bin/mono`` (which could be located elsewhere, for example
``/usr/local/bin/mono``) is working.
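A quick check (assuming the runtime supports the usual ``--version``
switch)::

    /usr/bin/mono --version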
2) You have to compile BINFMT_MISC either as a module or into
the kernel (CONFIG_BINFMT_MISC) and set it up properly.
the kernel (``CONFIG_BINFMT_MISC``) and set it up properly.
If you choose to compile it as a module, you will have
to insert it manually with modprobe/insmod, as kmod
cannot be easily supported with binfmt_misc.
Read the file 'binfmt_misc.txt' in this directory to know
cannot be easily supported with binfmt_misc.
Read the file ``binfmt_misc.txt`` in this directory to know
more about the configuration process.
3) Add the following entries to /etc/rc.local or similar script
to be run at system startup:
3) Add the following entries to ``/etc/rc.local`` or similar script
to be run at system startup::
# Insert BINFMT_MISC module into the kernel
if [ ! -e /proc/sys/fs/binfmt_misc/register ]; then
# Insert BINFMT_MISC module into the kernel
if [ ! -e /proc/sys/fs/binfmt_misc/register ]; then
/sbin/modprobe binfmt_misc
# Some distributions, like Fedora Core, perform
# the following command automatically when the
@ -43,24 +43,26 @@ if [ ! -e /proc/sys/fs/binfmt_misc/register ]; then
# Thus, it is possible that the following line
# is not needed at all.
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
fi
fi
# Register support for .NET CLR binaries
if [ -e /proc/sys/fs/binfmt_misc/register ]; then
# Register support for .NET CLR binaries
if [ -e /proc/sys/fs/binfmt_misc/register ]; then
# Replace /usr/bin/mono with the correct pathname to
# the Mono CLR runtime (usually /usr/local/bin/mono
# when compiling from sources or CVS).
echo ':CLR:M::MZ::/usr/bin/mono:' > /proc/sys/fs/binfmt_misc/register
else
else
echo "No binfmt_misc support"
exit 1
fi
fi
4) Check that .exe binaries can be ran without the need of a
wrapper script, simply by launching the .exe file directly
from a command prompt, for example:
4) Check that ``.exe`` binaries can be run without the need for a
wrapper script, simply by launching the ``.exe`` file directly
from a command prompt, for example::
/usr/bin/xsd.exe
NOTE: If this fails with a permission denied error, check
that the .exe file has execute permissions.
.. note::
If this fails with a permission denied error, check
that the ``.exe`` file has execute permissions.
View File
@ -1,7 +1,13 @@
NOTE: ksymoops is useless on 2.6. Please use the Oops in its original format
(from dmesg, etc). Ignore any references in this or other docs to "decoding
the Oops" or "running it through ksymoops". If you post an Oops from 2.6 that
has been run through ksymoops, people will just tell you to repost it.
OOPS tracing
============
.. note::
``ksymoops`` is useless on 2.6 or upper. Please use the Oops in its original
format (from ``dmesg``, etc). Ignore any references in this or other docs to
"decoding the Oops" or "running it through ksymoops".
If you post an Oops from 2.6+ that has been run through ``ksymoops``,
people will just tell you to repost it.
Quick Summary
-------------
@ -12,7 +18,7 @@ If you are unsure send it to the person responsible for the code relevant to
what you were doing. If it occurs repeatably try and describe how to recreate
it. That's worth even more than the oops.
If you are totally stumped as to whom to send the report, send it to
If you are totally stumped as to whom to send the report, send it to
linux-kernel@vger.kernel.org. Thanks for your help in making Linux as
stable as humanly possible.
@ -20,24 +26,25 @@ Where is the Oops?
----------------------
Normally the Oops text is read from the kernel buffers by klogd and
handed to syslogd which writes it to a syslog file, typically
/var/log/messages (depends on /etc/syslog.conf). Sometimes klogd dies,
in which case you can run dmesg > file to read the data from the kernel
buffers and save it. Or you can cat /proc/kmsg > file, however you
have to break in to stop the transfer, kmsg is a "never ending file".
handed to ``syslogd`` which writes it to a syslog file, typically
``/var/log/messages`` (depends on ``/etc/syslog.conf``). Sometimes ``klogd``
dies, in which case you can run ``dmesg > file`` to read the data from the
kernel buffers and save it. Or you can ``cat /proc/kmsg > file``, however you
have to break in to stop the transfer, ``kmsg`` is a "never ending file".
If the machine has crashed so badly that you cannot enter commands or
the disk is not available then you have three options :-
the disk is not available then you have three options:
(1) Hand copy the text from the screen and type it in after the machine
has restarted. Messy but it is the only option if you have not
planned for a crash. Alternatively, you can take a picture of
the screen with a digital camera - not nice, but better than
nothing. If the messages scroll off the top of the console, you
may find that booting with a higher resolution (eg, vga=791)
will allow you to read more of the text. (Caveat: This needs vesafb,
may find that booting with a higher resolution (eg, ``vga=791``)
will allow you to read more of the text. (Caveat: This needs ``vesafb``,
so won't help for 'early' oopses)
(2) Boot with a serial console (see Documentation/serial-console.txt),
(2) Boot with a serial console (see
:ref:`Documentation/admin-guide/serial-console.rst <serial_console>`),
run a null modem to a second machine and capture the output there
using your favourite communication program. Minicom works well.
@ -49,117 +56,126 @@ the disk is not available then you have three options :-
Full Information
----------------
NOTE: the message from Linus below applies to 2.4 kernel. I have preserved it
for historical reasons, and because some of the information in it still
applies. Especially, please ignore any references to ksymoops.
.. note::
From: Linus Torvalds <torvalds@osdl.org>
the message from Linus below applies to the 2.4 kernel. I have preserved it
for historical reasons, and because some of the information in it still
applies. Especially, please ignore any references to ksymoops.
How to track down an Oops.. [originally a mail to linux-kernel]
::
The main trick is having 5 years of experience with those pesky oops
messages ;-)
From: Linus Torvalds <torvalds@osdl.org>
Actually, there are things you can do that make this easier. I have two
separate approaches:
How to track down an Oops.. [originally a mail to linux-kernel]
The main trick is having 5 years of experience with those pesky oops
messages ;-)
Actually, there are things you can do that make this easier. I have two
separate approaches::
gdb /usr/src/linux/vmlinux
gdb> disassemble <offending_function>
That's the easy way to find the problem, at least if the bug-report is
well made (like this one was - run through ksymoops to get the
information of which function and the offset in the function that it
That's the easy way to find the problem, at least if the bug-report is
well made (like this one was - run through ``ksymoops`` to get the
information of which function and the offset in the function that it
happened in).
Oh, it helps if the report happens on a kernel that is compiled with the
Oh, it helps if the report happens on a kernel that is compiled with the
same compiler and similar setups.
The other thing to do is disassemble the "Code:" part of the bug report:
The other thing to do is disassemble the "Code:" part of the bug report:
ksymoops will do this too with the correct tools, but if you don't have
the tools you can just do a silly program:
the tools you can just do a silly program::
char str[] = "\xXX\xXX\xXX...";
main(){}
and compile it with gcc -g and then do "disassemble str" (where the "XX"
stuff are the values reported by the Oops - you can just cut-and-paste
and do a replace of spaces to "\x" - that's what I do, as I'm too lazy
and compile it with ``gcc -g`` and then do ``disassemble str`` (where the ``XX``
stuff are the values reported by the Oops - you can just cut-and-paste
and do a replace of spaces to ``\x`` - that's what I do, as I'm too lazy
to write a program to automate this all).
Alternatively, you can use the shell script in scripts/decodecode.
Its usage is: decodecode < oops.txt
Alternatively, you can use the shell script in ``scripts/decodecode``.
Its usage is::
decodecode < oops.txt
The hex bytes that follow "Code:" may (in some architectures) have a series
of bytes that precede the current instruction pointer as well as bytes at and
following the current instruction pointer. In some cases, one instruction
byte or word is surrounded by <> or (), as in "<86>" or "(f00d)". These
<> or () markings indicate the current instruction pointer. Example from
i386, split into multiple lines for readability:
byte or word is surrounded by ``<>`` or ``()``, as in ``<86>`` or ``(f00d)``.
These ``<>`` or ``()`` markings indicate the current instruction pointer.
Code: f9 0f 8d f9 00 00 00 8d 42 0c e8 dd 26 11 c7 a1 60 ea 2b f9 8b 50 08 a1
64 ea 2b f9 8d 34 82 8b 1e 85 db 74 6d 8b 15 60 ea 2b f9 <8b> 43 04 39 42 54
7e 04 40 89 42 54 8b 43 04 3b 05 00 f6 52 c0
Example from i386, split into multiple lines for readability::
Finally, if you want to see where the code comes from, you can do
Code: f9 0f 8d f9 00 00 00 8d 42 0c e8 dd 26 11 c7 a1 60 ea 2b f9 8b 50 08 a1
64 ea 2b f9 8d 34 82 8b 1e 85 db 74 6d 8b 15 60 ea 2b f9 <8b> 43 04 39 42 54
7e 04 40 89 42 54 8b 43 04 3b 05 00 f6 52 c0
Finally, if you want to see where the code comes from, you can do::
cd /usr/src/linux
make fs/buffer.s # or whatever file the bug happened in
and then you get a better idea of what happens than with the gdb
and then you get a better idea of what happens than with the gdb
disassembly.
Now, the trick is just then to combine all the data you have: the C
sources (and general knowledge of what it _should_ do), the assembly
listing and the code disassembly (and additionally the register dump you
also get from the "oops" message - that can be useful to see _what_ the
corrupted pointers were, and when you have the assembler listing you can
also match the other registers to whatever C expressions they were used
Now, the trick is just then to combine all the data you have: the C
sources (and general knowledge of what it **should** do), the assembly
listing and the code disassembly (and additionally the register dump you
also get from the "oops" message - that can be useful to see **what** the
corrupted pointers were, and when you have the assembler listing you can
also match the other registers to whatever C expressions they were used
for).
Essentially, you just look at what doesn't match (in this case it was the
"Code" disassembly that didn't match with what the compiler generated).
Then you need to find out _why_ they don't match. Often it's simple - you
see that the code uses a NULL pointer and then you look at the code and
wonder how the NULL pointer got there, and if it's a valid thing to do
Essentially, you just look at what doesn't match (in this case it was the
"Code" disassembly that didn't match with what the compiler generated).
Then you need to find out **why** they don't match. Often it's simple - you
see that the code uses a NULL pointer and then you look at the code and
wonder how the NULL pointer got there, and if it's a valid thing to do
you just check against it..
Now, if somebody gets the idea that this is time-consuming and requires
some small amount of concentration, you're right. Which is why I will
mostly just ignore any panic reports that don't have the symbol table
info etc looked up: it simply gets too hard to look it up (I have some
programs to search for specific patterns in the kernel code segment, and
sometimes I have been able to look up those kinds of panics too, but
that really requires pretty good knowledge of the kernel just to be able
Now, if somebody gets the idea that this is time-consuming and requires
some small amount of concentration, you're right. Which is why I will
mostly just ignore any panic reports that don't have the symbol table
info etc looked up: it simply gets too hard to look it up (I have some
programs to search for specific patterns in the kernel code segment, and
sometimes I have been able to look up those kinds of panics too, but
that really requires pretty good knowledge of the kernel just to be able
to pick out the right sequences etc..)
_Sometimes_ it happens that I just see the disassembled code sequence
from the panic, and I know immediately where it's coming from. That's when
**Sometimes** it happens that I just see the disassembled code sequence
from the panic, and I know immediately where it's coming from. That's when
I get worried that I've been doing this for too long ;-)
Linus
---------------------------------------------------------------------------
Notes on Oops tracing with klogd:
Notes on Oops tracing with ``klogd``
------------------------------------
In order to help Linus and the other kernel developers there has been
substantial support incorporated into klogd for processing protection
substantial support incorporated into ``klogd`` for processing protection
faults. In order to have full support for address resolution at least
version 1.3-pl3 of the sysklogd package should be used.
version 1.3-pl3 of the ``sysklogd`` package should be used.
When a protection fault occurs the klogd daemon automatically
When a protection fault occurs the ``klogd`` daemon automatically
translates important addresses in the kernel log messages to their
symbolic equivalents. This translated kernel message is then
forwarded through whatever reporting mechanism klogd is using. The
forwarded through whatever reporting mechanism ``klogd`` is using. The
protection fault message can be simply cut out of the message files
and forwarded to the kernel developers.
Two types of address resolution are performed by klogd. The first is
Two types of address resolution are performed by ``klogd``. The first is
static translation and the second is dynamic translation. Static
translation uses the System.map file in much the same manner that
ksymoops does. In order to do static translation the klogd daemon
ksymoops does. In order to do static translation the ``klogd`` daemon
must be able to find a system map file at daemon initialization time.
See the klogd man page for information on how klogd searches for map
See the klogd man page for information on how ``klogd`` searches for map
files.
Dynamic address translation is important when kernel loadable modules
@ -178,101 +194,106 @@ information available if the developer of the loadable module chose to
export symbol information from the module.
Since the kernel module environment can be dynamic there must be a
mechanism for notifying the klogd daemon when a change in module
mechanism for notifying the ``klogd`` daemon when a change in module
environment occurs. There are command line options available which
allow klogd to signal the currently executing daemon that symbol
information should be refreshed. See the klogd manual page for more
information should be refreshed. See the ``klogd`` manual page for more
information.
A patch is included with the sysklogd distribution which modifies the
modules-2.0.0 package to automatically signal klogd whenever a module
``modules-2.0.0`` package to automatically signal klogd whenever a module
is loaded or unloaded. Applying this patch provides essentially
seamless support for debugging protection faults which occur with
kernel loadable modules.
The following is an example of a protection fault in a loadable module
processed by klogd:
---------------------------------------------------------------------------
Aug 29 09:51:01 blizard kernel: Unable to handle kernel paging request at virtual address f15e97cc
Aug 29 09:51:01 blizard kernel: current->tss.cr3 = 0062d000, %cr3 = 0062d000
Aug 29 09:51:01 blizard kernel: *pde = 00000000
Aug 29 09:51:01 blizard kernel: Oops: 0002
Aug 29 09:51:01 blizard kernel: CPU: 0
Aug 29 09:51:01 blizard kernel: EIP: 0010:[oops:_oops+16/3868]
Aug 29 09:51:01 blizard kernel: EFLAGS: 00010212
Aug 29 09:51:01 blizard kernel: eax: 315e97cc ebx: 003a6f80 ecx: 001be77b edx: 00237c0c
Aug 29 09:51:01 blizard kernel: esi: 00000000 edi: bffffdb3 ebp: 00589f90 esp: 00589f8c
Aug 29 09:51:01 blizard kernel: ds: 0018 es: 0018 fs: 002b gs: 002b ss: 0018
Aug 29 09:51:01 blizard kernel: Process oops_test (pid: 3374, process nr: 21, stackpage=00589000)
Aug 29 09:51:01 blizard kernel: Stack: 315e97cc 00589f98 0100b0b4 bffffed4 0012e38e 00240c64 003a6f80 00000001
Aug 29 09:51:01 blizard kernel: 00000000 00237810 bfffff00 0010a7fa 00000003 00000001 00000000 bfffff00
Aug 29 09:51:01 blizard kernel: bffffdb3 bffffed4 ffffffda 0000002b 0007002b 0000002b 0000002b 00000036
Aug 29 09:51:01 blizard kernel: Call Trace: [oops:_oops_ioctl+48/80] [_sys_ioctl+254/272] [_system_call+82/128]
Aug 29 09:51:01 blizard kernel: Code: c7 00 05 00 00 00 eb 08 90 90 90 90 90 90 90 90 89 ec 5d c3
processed by ``klogd``::
Aug 29 09:51:01 blizard kernel: Unable to handle kernel paging request at virtual address f15e97cc
Aug 29 09:51:01 blizard kernel: current->tss.cr3 = 0062d000, %cr3 = 0062d000
Aug 29 09:51:01 blizard kernel: *pde = 00000000
Aug 29 09:51:01 blizard kernel: Oops: 0002
Aug 29 09:51:01 blizard kernel: CPU: 0
Aug 29 09:51:01 blizard kernel: EIP: 0010:[oops:_oops+16/3868]
Aug 29 09:51:01 blizard kernel: EFLAGS: 00010212
Aug 29 09:51:01 blizard kernel: eax: 315e97cc ebx: 003a6f80 ecx: 001be77b edx: 00237c0c
Aug 29 09:51:01 blizard kernel: esi: 00000000 edi: bffffdb3 ebp: 00589f90 esp: 00589f8c
Aug 29 09:51:01 blizard kernel: ds: 0018 es: 0018 fs: 002b gs: 002b ss: 0018
Aug 29 09:51:01 blizard kernel: Process oops_test (pid: 3374, process nr: 21, stackpage=00589000)
Aug 29 09:51:01 blizard kernel: Stack: 315e97cc 00589f98 0100b0b4 bffffed4 0012e38e 00240c64 003a6f80 00000001
Aug 29 09:51:01 blizard kernel: 00000000 00237810 bfffff00 0010a7fa 00000003 00000001 00000000 bfffff00
Aug 29 09:51:01 blizard kernel: bffffdb3 bffffed4 ffffffda 0000002b 0007002b 0000002b 0000002b 00000036
Aug 29 09:51:01 blizard kernel: Call Trace: [oops:_oops_ioctl+48/80] [_sys_ioctl+254/272] [_system_call+82/128]
Aug 29 09:51:01 blizard kernel: Code: c7 00 05 00 00 00 eb 08 90 90 90 90 90 90 90 90 89 ec 5d c3
---------------------------------------------------------------------------
Dr. G.W. Wettstein Oncology Research Div. Computing Facility
Roger Maris Cancer Center INTERNET: greg@wind.rmcc.com
820 4th St. N.
Fargo, ND 58122
Phone: 701-234-7556
::
Dr. G.W. Wettstein Oncology Research Div. Computing Facility
Roger Maris Cancer Center INTERNET: greg@wind.rmcc.com
820 4th St. N.
Fargo, ND 58122
Phone: 701-234-7556
---------------------------------------------------------------------------
Tainted kernels:
Some oops reports contain the string 'Tainted: ' after the program
Tainted kernels
---------------
Some oops reports contain the string **'Tainted: '** after the program
counter. This indicates that the kernel has been tainted by some
mechanism. The string is followed by a series of position-sensitive
characters, each representing a particular tainted value.
1: 'G' if all modules loaded have a GPL or compatible license, 'P' if
1) ``G`` if all modules loaded have a GPL or compatible license, ``P`` if
any proprietary module has been loaded. Modules without a
MODULE_LICENSE or with a MODULE_LICENSE that is not recognised by
insmod as GPL compatible are assumed to be proprietary.
2: 'F' if any module was force loaded by "insmod -f", ' ' if all
2) ``F`` if any module was force loaded by ``insmod -f``, ``' '`` if all
modules were loaded normally.
3: 'S' if the oops occurred on an SMP kernel running on hardware that
3) ``S`` if the oops occurred on an SMP kernel running on hardware that
hasn't been certified as safe to run multiprocessor.
Currently this occurs only on various Athlons that are not
SMP capable.
4: 'R' if a module was force unloaded by "rmmod -f", ' ' if all
4) ``R`` if a module was force unloaded by ``rmmod -f``, ``' '`` if all
modules were unloaded normally.
5: 'M' if any processor has reported a Machine Check Exception,
' ' if no Machine Check Exceptions have occurred.
5) ``M`` if any processor has reported a Machine Check Exception,
``' '`` if no Machine Check Exceptions have occurred.
6: 'B' if a page-release function has found a bad page reference or
6) ``B`` if a page-release function has found a bad page reference or
some unexpected page flags.
7: 'U' if a user or user application specifically requested that the
Tainted flag be set, ' ' otherwise.
7) ``U`` if a user or user application specifically requested that the
Tainted flag be set, ``' '`` otherwise.
8: 'D' if the kernel has died recently, i.e. there was an OOPS or BUG.
8) ``D`` if the kernel has died recently, i.e. there was an OOPS or BUG.
9: 'A' if the ACPI table has been overridden.
9) ``A`` if the ACPI table has been overridden.
10: 'W' if a warning has previously been issued by the kernel.
10) ``W`` if a warning has previously been issued by the kernel.
(Though some warnings may set more specific taint flags.)
11: 'C' if a staging driver has been loaded.
11) ``C`` if a staging driver has been loaded.
12: 'I' if the kernel is working around a severe bug in the platform
12) ``I`` if the kernel is working around a severe bug in the platform
firmware (BIOS or similar).
13: 'O' if an externally-built ("out-of-tree") module has been loaded.
13) ``O`` if an externally-built ("out-of-tree") module has been loaded.
14: 'E' if an unsigned module has been loaded in a kernel supporting
14) ``E`` if an unsigned module has been loaded in a kernel supporting
module signature.
15: 'L' if a soft lockup has previously occurred on the system.
15) ``L`` if a soft lockup has previously occurred on the system.
16: 'K' if the kernel has been live patched.
16) ``K`` if the kernel has been live patched.
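The same flags are also exposed at run time as a decimal bitmask, with
bit 0 corresponding to flag 1 above (a minimal sketch)::

    cat /proc/sys/kernel/tainted    # prints 0 for an untainted kernel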
The primary reason for the 'Tainted: ' string is to tell kernel
The primary reason for the **'Tainted: '** string is to tell kernel
debuggers if this is a clean kernel or if anything unusual has
occurred. Tainting is permanent: even if an offending module is
unloaded, the tainted value remains to indicate that the kernel is not
View File
@ -0,0 +1,286 @@
Parport
+++++++
The ``parport`` code provides parallel-port support under Linux. This
includes the ability to share one port between multiple device
drivers.
You can pass parameters to the ``parport`` code to override its automatic
detection of your hardware. This is particularly useful if you want
to use IRQs, since in general these can't be autoprobed successfully.
By default IRQs are not used even if they **can** be probed. This is
because there are a lot of people using the same IRQ for their
parallel port and a sound card or network card.
The ``parport`` code is split into two parts: generic (which deals with
port-sharing) and architecture-dependent (which deals with actually
using the port).
Parport as modules
==================
If you load the ``parport`` code as a module, say::
# insmod parport
to load the generic ``parport`` code. You then must load the
architecture-dependent code with (for example)::
# insmod parport_pc io=0x3bc,0x378,0x278 irq=none,7,auto
to tell the ``parport`` code that you want three PC-style ports, one at
0x3bc with no IRQ, one at 0x378 using IRQ 7, and one at 0x278 with an
auto-detected IRQ. Currently, PC-style (``parport_pc``), Sun ``bpp``,
Amiga, Atari, and MFC3 hardware is supported.
PCI parallel I/O card support comes from ``parport_pc``. Base I/O
addresses should not be specified for supported PCI cards since they
are automatically detected.
modprobe
--------
If you use modprobe, you will find it useful to add lines like the following to a
configuration file in the ``/etc/modprobe.d/`` directory::
alias parport_lowlevel parport_pc
options parport_pc io=0x378,0x278 irq=7,auto
modprobe will load ``parport_pc`` (with the options ``io=0x378,0x278 irq=7,auto``)
whenever a parallel port device driver (such as ``lp``) is loaded.
Note that these are example lines only! You shouldn't in general need
to specify any options to ``parport_pc`` in order to be able to use a
parallel port.
Parport probe [optional]
------------------------
In 2.2 kernels there was a module called ``parport_probe``, which was used
for collecting IEEE 1284 device ID information. This has since been
enhanced and now lives with the IEEE 1284 support. When a parallel
port is detected, the devices that are connected to it are analysed,
and information is logged like this::
parport0: Printer, BJC-210 (Canon)
The probe information is available from files in ``/proc/sys/dev/parport/``.
Parport linked into the kernel statically
=========================================
If you compile the ``parport`` code into the kernel, then you can use
kernel boot parameters to get the same effect. Add something like the
following to your LILO command line::
parport=0x3bc parport=0x378,7 parport=0x278,auto,nofifo
You can have many ``parport=...`` statements, one for each port you want
to add. Adding ``parport=0`` to the kernel command-line will disable
parport support entirely. Adding ``parport=auto`` to the kernel
command-line will make ``parport`` use any IRQ lines or DMA channels that
it auto-detects.
Files in /proc
==============
If you have configured the ``/proc`` filesystem into your kernel, you will
see a new directory entry: ``/proc/sys/dev/parport``. In there will be a
directory entry for each parallel port for which parport is
configured. In each of those directories are a collection of files
describing that parallel port.
The ``/proc/sys/dev/parport`` directory tree looks like::
parport
|-- default
| |-- spintime
| `-- timeslice
|-- parport0
| |-- autoprobe
| |-- autoprobe0
| |-- autoprobe1
| |-- autoprobe2
| |-- autoprobe3
| |-- devices
| | |-- active
| | `-- lp
| | `-- timeslice
| |-- base-addr
| |-- irq
| |-- dma
| |-- modes
| `-- spintime
`-- parport1
|-- autoprobe
|-- autoprobe0
|-- autoprobe1
|-- autoprobe2
|-- autoprobe3
|-- devices
| |-- active
| `-- ppa
| `-- timeslice
|-- base-addr
|-- irq
|-- dma
|-- modes
`-- spintime
.. tabularcolumns:: |p{4.0cm}|p{13.5cm}|
======================= =======================================================
File Contents
======================= =======================================================
``devices/active`` A list of the device drivers using that port. A "+"
will appear by the name of the device currently using
the port (it might not appear against any). The
string "none" means that there are no device drivers
using that port.
``base-addr`` Parallel port's base address, or addresses if the port
has more than one in which case they are separated
with tabs. These values might not have any sensible
meaning for some ports.
``irq`` Parallel port's IRQ, or -1 if none is being used.
``dma`` Parallel port's DMA channel, or -1 if none is being
used.
``modes`` Parallel port's hardware modes, comma-separated,
meaning:
- PCSPP
PC-style SPP registers are available.
- TRISTATE
Port is bidirectional.
- COMPAT
Hardware acceleration for printers is
available and will be used.
- EPP
Hardware acceleration for EPP protocol
is available and will be used.
- ECP
Hardware acceleration for ECP protocol
is available and will be used.
- DMA
DMA is available and will be used.
Note that the current implementation will only take
advantage of COMPAT and ECP modes if it has an IRQ
line to use.
``autoprobe`` Any IEEE-1284 device ID information that has been
acquired from the (non-IEEE 1284.3) device.
``autoprobe[0-3]`` IEEE 1284 device ID information retrieved from
daisy-chain devices that conform to IEEE 1284.3.
``spintime`` The number of microseconds to busy-loop while waiting
for the peripheral to respond. You might find that
adjusting this improves performance, depending on your
peripherals. This is a port-wide setting, i.e. it
applies to all devices on a particular port.
``timeslice`` The number of milliseconds that a device driver is
allowed to keep a port claimed for. This is advisory,
and a driver can ignore it if it must.
``default/*`` The defaults for spintime and timeslice. When a new
port is registered, it picks up the default spintime.
When a new device is registered, it picks up the
default timeslice.
======================= =======================================================
Device drivers
==============
Once the parport code is initialised, you can attach device drivers to
specific ports. Normally this happens automatically; if the lp driver
is loaded it will create one lp device for each port found. You can
override this, though, by using parameters either when you load the lp
driver::
# insmod lp parport=0,2
or on the LILO command line::
lp=parport0 lp=parport2
Both the above examples would inform lp that you want ``/dev/lp0`` to be
the first parallel port, and ``/dev/lp1`` to be the **third** parallel port,
with no lp device associated with the second port (parport1). Note
that this is different from the way older kernels worked; there used to
be a static association between the I/O port address and the device
name, so ``/dev/lp0`` was always the port at 0x3bc. This is no longer the
case - if you only have one port, it will default to being ``/dev/lp0``,
regardless of base address.
Also:
* If you selected the IEEE 1284 support at compile time, you can say
``lp=auto`` on the kernel command line, and lp will create devices
only for those ports that seem to have printers attached.
* If you give PLIP the ``timid`` parameter, either with ``plip=timid`` on
the command line, or with ``insmod plip timid=1`` when using modules,
it will avoid any ports that seem to be in use by other devices.
* IRQ autoprobing works only for a few port types at the moment.
Reporting printer problems with parport
=======================================
If you are having problems printing, please go through these steps to
try to narrow down where the problem area is.
When reporting problems with parport, really you need to give all of
the messages that ``parport_pc`` spits out when it initialises. There are
several code paths:
- polling
- interrupt-driven, protocol in software
- interrupt-driven, protocol in hardware using PIO
- interrupt-driven, protocol in hardware using DMA
The kernel messages that ``parport_pc`` logs give an indication of which
code path is being used. (They could be a lot better actually..)
For normal printer protocol, having IEEE 1284 modes enabled or not
should not make a difference.
To turn off the 'protocol in hardware' code paths, disable
``CONFIG_PARPORT_PC_FIFO``. Note that when they are enabled they are not
necessarily **used**; it depends on whether the hardware is available,
enabled by the BIOS, and detected by the driver.
So, to start with, disable ``CONFIG_PARPORT_PC_FIFO``, and load ``parport_pc``
with ``irq=none``. See if printing works then. It really should,
because this is the simplest code path.
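A sketch of that first step (the I/O address is an example only)::

    modprobe parport_pc io=0x378 irq=none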
If that works fine, try with ``io=0x378 irq=7`` (adjust for your
hardware), to make it use interrupt-driven in-software protocol.
If **that** works fine, then one of the hardware modes isn't working
right. Enable ``CONFIG_PARPORT_PC_FIFO`` (no, it isn't a module option,
and yes, it should be), set the port to ECP mode in the BIOS and note
the DMA channel, and try with::
io=0x378 irq=7 dma=none (for PIO)
io=0x378 irq=7 dma=3 (for DMA)
----------
philb@gnu.org
tim@cyberelk.net
View File
@ -5,34 +5,37 @@ Sergiu Iordache <sergiu@chromium.org>
Updated: 17 November 2011
0. Introduction
Introduction
------------
Ramoops is an oops/panic logger that writes its logs to RAM before the system
crashes. It works by logging oopses and panics in a circular buffer. Ramoops
needs a system with persistent RAM so that the content of that area can
survive after a restart.
1. Ramoops concepts
Ramoops concepts
----------------
Ramoops uses a predefined memory area to store the dump. The start and size
and type of the memory area are set using three variables:
* "mem_address" for the start
* "mem_size" for the size. The memory size will be rounded down to a
power of two.
* "mem_type" to specifiy if the memory type (default is pgprot_writecombine).
Typically the default value of mem_type=0 should be used as that sets the pstore
mapping to pgprot_writecombine. Setting mem_type=1 attempts to use
pgprot_noncached, which only works on some platforms. This is because pstore
* ``mem_address`` for the start
* ``mem_size`` for the size. The memory size will be rounded down to a
power of two.
* ``mem_type`` to specify the memory type (default is ``pgprot_writecombine``).
Typically the default value of ``mem_type=0`` should be used as that sets the pstore
mapping to ``pgprot_writecombine``. Setting ``mem_type=1`` attempts to use
``pgprot_noncached``, which only works on some platforms. This is because pstore
depends on atomic operations. At least on ARM, pgprot_noncached causes the
memory to be mapped strongly ordered, and atomic operations on strongly ordered
memory are implementation defined, and won't work on many ARMs such as omaps.
The memory area is divided into "record_size" chunks (also rounded down to
power of two) and each oops/panic writes a "record_size" chunk of
The memory area is divided into ``record_size`` chunks (also rounded down to
power of two) and each oops/panic writes a ``record_size`` chunk of
information.
Dumping both oopses and panics can be done by setting 1 in the "dump_oops"
Dumping both oopses and panics can be done by setting the ``dump_oops``
variable to 1, while setting it to 0 dumps only the panics.
The module uses a counter to record multiple dumps but the counter gets reset
@ -43,7 +46,8 @@ This might be useful when a hardware reset was used to bring the machine back
to life (i.e. a watchdog triggered). In such cases, RAM may be somewhat
corrupt, but usually it is restorable.
2. Setting the parameters
Setting the parameters
----------------------
Setting the ramoops parameters can be done in several different manners:
@ -52,12 +56,13 @@ Setting the ramoops parameters can be done in several different manners:
boot and then use the reserved memory for ramoops. For example, assuming a
machine with > 128 MB of memory, the following kernel command line will tell
the kernel to use only the first 128 MB of memory, and place ECC-protected
ramoops region at 128 MB boundary:
"mem=128M ramoops.mem_address=0x8000000 ramoops.ecc=1"
ramoops region at 128 MB boundary::
mem=128M ramoops.mem_address=0x8000000 ramoops.ecc=1
B. Use Device Tree bindings, as described in
Documentation/device-tree/bindings/reserved-memory/ramoops.txt.
For example:
``Documentation/device-tree/bindings/reserved-memory/ramoops.txt``.
For example::
reserved-memory {
#address-cells = <2>;
@ -73,60 +78,63 @@ Setting the ramoops parameters can be done in several different manners:
};
C. Use a platform device and set the platform data. The parameters can then
be set through that platform data. An example of doing that is:
be set through that platform data. An example of doing that is::
#include <linux/pstore_ram.h>
[...]
#include <linux/pstore_ram.h>
[...]
static struct ramoops_platform_data ramoops_data = {
static struct ramoops_platform_data ramoops_data = {
.mem_size = <...>,
.mem_address = <...>,
.mem_type = <...>,
.record_size = <...>,
.dump_oops = <...>,
.ecc = <...>,
};
};
static struct platform_device ramoops_dev = {
static struct platform_device ramoops_dev = {
.name = "ramoops",
.dev = {
.platform_data = &ramoops_data,
},
};
};
[... inside a function ...]
int ret;
[... inside a function ...]
int ret;
ret = platform_device_register(&ramoops_dev);
if (ret) {
ret = platform_device_register(&ramoops_dev);
if (ret) {
printk(KERN_ERR "unable to register platform device\n");
return ret;
}
}
You can specify either RAM memory or peripheral devices' memory. However, when
specifying RAM, be sure to reserve the memory by issuing memblock_reserve()
very early in the architecture code, e.g.:
very early in the architecture code, e.g.::
#include <linux/memblock.h>
#include <linux/memblock.h>
memblock_reserve(ramoops_data.mem_address, ramoops_data.mem_size);
memblock_reserve(ramoops_data.mem_address, ramoops_data.mem_size);
3. Dump format
Dump format
-----------
The data dump begins with a header, currently defined as "====" followed by a
The data dump begins with a header, currently defined as ``====`` followed by a
timestamp and a new line. The dump then continues with the actual data.
4. Reading the data
Reading the data
----------------
The dump data can be read from the pstore filesystem. The format for these
files is "dmesg-ramoops-N", where N is the record number in memory. To delete
files is ``dmesg-ramoops-N``, where N is the record number in memory. To delete
a stored record from RAM, simply unlink the respective pstore file.
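A minimal sketch (the mount point and record name are illustrative)::

    # mount -t pstore pstore /sys/fs/pstore
    # ls /sys/fs/pstore
    dmesg-ramoops-0
    # rm /sys/fs/pstore/dmesg-ramoops-0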
5. Persistent function tracing
Persistent function tracing
---------------------------
Persistent function tracing might be useful for debugging software or hardware
related hangs. The functions call chain log is stored in a "ftrace-ramoops"
file. Here is an example of usage:
related hangs. The functions call chain log is stored in a ``ftrace-ramoops``
file. Here is an example of usage::
# mount -t debugfs debugfs /sys/kernel/debug/
# echo 1 > /sys/kernel/debug/pstore/record_ftrace
View File
@ -1,3 +1,8 @@
.. _reportingbugs:
Reporting bugs
++++++++++++++
Background
==========
@ -50,12 +55,13 @@ maintainer replies to you, make sure to 'Reply-all' in order to keep the
public mailing list(s) in the email thread.
If you know which driver is causing issues, you can pass one of the driver
files to the get_maintainer.pl script:
files to the get_maintainer.pl script::
perl scripts/get_maintainer.pl -f <filename>
If it is a security bug, please copy the Security Contact listed in the
MAINTAINERS file. They can help coordinate bugfix and disclosure. See
Documentation/SecurityBugs for more information.
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>` for more information.
If you can't figure out which subsystem caused the issue, you should file
a bug in kernel.org bugzilla and send email to
@ -69,8 +75,9 @@ Tips for reporting bugs
If you haven't reported a bug before, please read:
http://www.chiark.greenend.org.uk/~sgtatham/bugs.html
http://www.catb.org/esr/faqs/smart-questions.html
http://www.chiark.greenend.org.uk/~sgtatham/bugs.html
http://www.catb.org/esr/faqs/smart-questions.html
It's REALLY important to report bugs that seem unrelated as separate email
threads or separate bugzilla entries. If you report several unrelated
@ -87,7 +94,7 @@ step-by-step instructions for how a user can trigger the bug.
If the failure includes an "OOPS:", take a picture of the screen, capture
a netconsole trace, or type the message from your screen into the bug
report. Please read "Documentation/oops-tracing.txt" before posting your
report. Please read "Documentation/admin-guide/oops-tracing.rst" before posting your
bug report. This explains what you should do with the "Oops" information
to make it useful to the recipient.
@ -99,34 +106,34 @@ relevant to your bug, feel free to exclude it.
First run the ver_linux script included as scripts/ver_linux, which
reports the version of some important subsystems. Run this script with
the command "sh scripts/ver_linux".
the command ``sh scripts/ver_linux``.
Use that information to fill in all fields of the bug report form, and
post it to the mailing list with a subject of "PROBLEM: <one line
summary from [1.]>" for easy identification by the developers.
summary from [1.]>" for easy identification by the developers::
[1.] One line summary of the problem:
[2.] Full description of the problem/report:
[3.] Keywords (i.e., modules, networking, kernel):
[4.] Kernel information
[4.1.] Kernel version (from /proc/version):
[4.2.] Kernel .config file:
[5.] Most recent kernel version which did not have the bug:
[6.] Output of Oops.. message (if applicable) with symbolic information
resolved (see Documentation/oops-tracing.txt)
[7.] A small shell script or example program which triggers the
problem (if possible)
[8.] Environment
[8.1.] Software (add the output of the ver_linux script here)
[8.2.] Processor information (from /proc/cpuinfo):
[8.3.] Module information (from /proc/modules):
[8.4.] Loaded driver and hardware information (/proc/ioports, /proc/iomem)
[8.5.] PCI information ('lspci -vvv' as root)
[8.6.] SCSI information (from /proc/scsi/scsi)
[8.7.] Other information that might be relevant to the problem
(please look in /proc and include all information that you
think to be relevant):
[X.] Other notes, patches, fixes, workarounds:
[1.] One line summary of the problem:
[2.] Full description of the problem/report:
[3.] Keywords (i.e., modules, networking, kernel):
[4.] Kernel information
[4.1.] Kernel version (from /proc/version):
[4.2.] Kernel .config file:
[5.] Most recent kernel version which did not have the bug:
[6.] Output of Oops.. message (if applicable) with symbolic information
resolved (see Documentation/admin-guide/oops-tracing.rst)
[7.] A small shell script or example program which triggers the
problem (if possible)
[8.] Environment
[8.1.] Software (add the output of the ver_linux script here)
[8.2.] Processor information (from /proc/cpuinfo):
[8.3.] Module information (from /proc/modules):
[8.4.] Loaded driver and hardware information (/proc/ioports, /proc/iomem)
[8.5.] PCI information ('lspci -vvv' as root)
[8.6.] SCSI information (from /proc/scsi/scsi)
[8.7.] Other information that might be relevant to the problem
(please look in /proc and include all information that you
think to be relevant):
[X.] Other notes, patches, fixes, workarounds:
Follow up
@ -153,7 +160,8 @@ Expectations for kernel maintainers
Linux kernel maintainers are busy, overworked human beings. Some times
they may not be able to address your bug in a day, a week, or two weeks.
If they don't answer your email, they may be on vacation, or at a Linux
conference. Check the conference schedule at LWN.net for more info:
conference. Check the conference schedule at https://lwn.net for more info:
https://lwn.net/Calendar/
In general, kernel maintainers take 1 to 5 business days to respond to
View File
@ -8,8 +8,8 @@ like to know when a security bug is found so that it can be fixed and
disclosed as quickly as possible. Please report security bugs to the
Linux kernel security team.
1) Contact
----------
Contact
-------
The Linux kernel security team can be contacted by email at
<security@kernel.org>. This is a private list of security officers
@ -19,12 +19,12 @@ area maintainers to understand and fix the security vulnerability.
As it is with any bug, the more information provided the easier it
will be to diagnose and fix. Please review the procedure outlined in
REPORTING-BUGS if you are unclear about what information is helpful.
``Documentation/admin-guide/reporting-bugs.rst`` if you are unclear about what information is helpful.
Any exploit code is very helpful and will not be released without
consent from the reporter unless it has already been made public.
2) Disclosure
-------------
Disclosure
----------
The goal of the Linux kernel security team is to work with the
bug submitter to bug resolution as well as disclosure. We prefer
@ -39,8 +39,8 @@ disclosure is from immediate (esp. if it's already publicly known)
to a few weeks. As a basic default policy, we expect report date to
disclosure date to be on the order of 7 days.
3) Non-disclosure agreements
----------------------------
Non-disclosure agreements
-------------------------
The Linux kernel security team is not a formal body and therefore unable
to enter any non-disclosure agreements.
View File
@ -1,15 +1,21 @@
Linux Serial Console
.. _serial_console:
Linux Serial Console
====================
To use a serial port as console you need to compile the support into your
kernel - by default it is not compiled in. For PC style serial ports
it's the config option next to "Standard/generic (dumb) serial support".
it's the config option next to the menu option:
:menuselection:`Character devices --> Serial drivers --> 8250/16550 and compatible serial support --> Console on 8250/16550 and compatible serial port`
You must compile serial support into the kernel and not as a module.
It is possible to specify multiple devices for console output. You can
define a new kernel command line option to select which device(s) to
use for console output.
The format of this option is:
The format of this option is::
console=device,options
@ -28,11 +34,11 @@ The format of this option is:
You can specify multiple console= options on the kernel command line.
Output will appear on all of them. The last device will be used when
you open /dev/console. So, for example:
you open ``/dev/console``. So, for example::
console=ttyS1,9600 console=tty0
defines that opening /dev/console will get you the current foreground
defines that opening ``/dev/console`` will get you the current foreground
virtual console, and kernel messages will appear on both the VGA
console and the 2nd serial port (ttyS1 or COM2) at 9600 baud.
@ -44,61 +50,61 @@ first looks for a VGA card and then for a serial port. So if you don't
have a VGA card in your system the first serial port will automatically
become the console.
You will need to create a new device to use /dev/console. The official
/dev/console is now character device 5,1.
You will need to create a new device to use ``/dev/console``. The official
``/dev/console`` is now character device 5,1.
(You can also use a network device as a console. See
Documentation/networking/netconsole.txt for information on that.)
``Documentation/networking/netconsole.txt`` for information on that.)
Here's an example that will use /dev/ttyS1 (COM2) as the console.
Here's an example that will use ``/dev/ttyS1`` (COM2) as the console.
Replace the sample values as needed.
1. Create /dev/console (real console) and /dev/tty0 (master virtual
console):
1. Create ``/dev/console`` (real console) and ``/dev/tty0`` (master virtual
console)::
cd /dev
rm -f console tty0
mknod -m 622 console c 5 1
mknod -m 622 tty0 c 4 0
cd /dev
rm -f console tty0
mknod -m 622 console c 5 1
mknod -m 622 tty0 c 4 0
2. LILO can also take input from a serial device. This is a very
useful option. To tell LILO to use the serial port:
In lilo.conf (global section):
In lilo.conf (global section)::
serial = 1,9600n8 (ttyS1, 9600 bd, no parity, 8 bits)
serial = 1,9600n8 (ttyS1, 9600 bd, no parity, 8 bits)
3. Adjust to kernel flags for the new kernel,
again in lilo.conf (kernel section)
again in lilo.conf (kernel section)::
append = "console=ttyS1,9600"
append = "console=ttyS1,9600"
4. Make sure a getty runs on the serial port so that you can login to
it once the system is done booting. This is done by adding a line
like this to /etc/inittab (exact syntax depends on your getty):
like this to ``/etc/inittab`` (exact syntax depends on your getty)::
S1:23:respawn:/sbin/getty -L ttyS1 9600 vt100
S1:23:respawn:/sbin/getty -L ttyS1 9600 vt100
5. Init and /etc/ioctl.save
5. Init and ``/etc/ioctl.save``
Sysvinit remembers its stty settings in a file in /etc, called
`/etc/ioctl.save'. REMOVE THIS FILE before using the serial
Sysvinit remembers its stty settings in a file in ``/etc``, called
``/etc/ioctl.save``. REMOVE THIS FILE before using the serial
console for the first time, because otherwise init will probably
set the baudrate to 38400 (baudrate of the virtual console).
6. /dev/console and X
6. ``/dev/console`` and X
Programs that want to do something with the virtual console usually
open /dev/console. If you have created the new /dev/console device,
open ``/dev/console``. If you have created the new ``/dev/console`` device,
and your console is NOT the virtual console some programs will fail.
Those are programs that want to access the VT interface, and use
/dev/console instead of /dev/tty0. Some of those programs are:
``/dev/console`` instead of ``/dev/tty0``. Some of those programs are::
Xfree86, svgalib, gpm, SVGATextMode
Xfree86, svgalib, gpm, SVGATextMode
It should be fixed in modern versions of these programs though.
Note that if you boot without a console= option (or with
console=/dev/tty0), /dev/console is the same as /dev/tty0. In that
case everything will still work.
Note that if you boot without a ``console=`` option (or with
``console=/dev/tty0``), ``/dev/console`` is the same as ``/dev/tty0``.
In that case everything will still work.
7. Thanks
View File
@ -0,0 +1,192 @@
Rules on how to access information in the Linux kernel sysfs
============================================================
The kernel-exported sysfs exports internal kernel implementation details
and depends on internal kernel structures and layout. It is agreed upon
by the kernel developers that the Linux kernel does not provide a stable
internal API. Therefore, there are aspects of the sysfs interface that
may not be stable across kernel releases.
To minimize the risk of breaking users of sysfs, which are in most cases
low-level userspace applications, with a new kernel release, the users
of sysfs must follow some rules to use an as-abstract-as-possible way to
access this filesystem. The current udev and HAL programs already
implement this and users are encouraged to plug, if possible, into the
abstractions these programs provide instead of accessing sysfs directly.
But if you really do want or need to access sysfs directly, please follow
the following rules and then your programs should work with future
versions of the sysfs interface.
- Do not use libsysfs
It makes assumptions about sysfs which are not true. Its API does not
offer any abstraction, it exposes all the kernel driver-core
implementation details in its own API. Therefore it is not better than
reading directories and opening the files yourself.
Also, it is not actively maintained, in the sense of reflecting the
current kernel development. The goal of providing a stable interface
to sysfs has failed; it causes more problems than it solves. It
violates many of the rules in this document.
- sysfs is always at ``/sys``
Parsing ``/proc/mounts`` is a waste of time. Other mount points are a
system configuration bug you should not try to solve. For test cases,
possibly support a ``SYSFS_PATH`` environment variable to overwrite the
application's behavior, but never try to search for sysfs. Never try
to mount it, if you are not an early boot script.
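A minimal shell sketch of honouring such an override::

    sysfs_path="${SYSFS_PATH:-/sys}"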
- devices are only "devices"
There is no such thing like class-, bus-, physical devices,
interfaces, and such that you can rely on in userspace. Everything is
just simply a "device". Class-, bus-, physical, ... types are just
kernel implementation details which should not be expected by
applications that look for devices in sysfs.
The properties of a device are:
- devpath (``/devices/pci0000:00/0000:00:1d.1/usb2/2-2/2-2:1.0``)
- identical to the DEVPATH value in the event sent from the kernel
at device creation and removal
- the unique key to the device at that point in time
- the kernel's path to the device directory without the leading
``/sys``, and always starting with a slash
- all elements of a devpath must be real directories. Symlinks
pointing to /sys/devices must always be resolved to their real
target and the target path must be used to access the device.
That way the devpath to the device matches the devpath of the
kernel used at event time.
- using or exposing symlink values as elements in a devpath string
is a bug in the application
- kernel name (``sda``, ``tty``, ``0000:00:1f.2``, ...)
- a directory name, identical to the last element of the devpath
- applications need to handle spaces and characters like ``!`` in
the name
- subsystem (``block``, ``tty``, ``pci``, ...)
- simple string, never a path or a link
- retrieved by reading the "subsystem"-link and using only the
last element of the target path
- driver (``tg3``, ``ata_piix``, ``uhci_hcd``)
- a simple string, which may contain spaces, never a path or a
link
- it is retrieved by reading the "driver"-link and using only the
last element of the target path
- devices which do not have a "driver"-link just do not have a
driver; copying the driver value in a child device context is a
bug in the application
- attributes
- the files in the device directory or files below subdirectories
of the same device directory
- accessing attributes reached by a symlink pointing to another device,
like the "device"-link, is a bug in the application
Everything else is just a kernel driver-core implementation detail
that should not be assumed to be stable across kernel releases.
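As a rough illustration of the subsystem and driver rules above, a userspace
application might resolve them like this (a minimal C sketch, not part of the
original text; the device path is hypothetical)::

  /*
   * Minimal sketch: derive the subsystem and driver of a device by reading
   * the "subsystem" and "driver" links and keeping only the last element of
   * the link target, as the rules above require.
   */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <limits.h>

  /* Read the link <devdir>/<linkname> and copy its last path element. */
  static int read_link_basename(const char *devdir, const char *linkname,
                                char *out, size_t outlen)
  {
          char path[PATH_MAX], target[PATH_MAX];
          const char *base;
          ssize_t len;

          snprintf(path, sizeof(path), "%s/%s", devdir, linkname);
          len = readlink(path, target, sizeof(target) - 1);
          if (len < 0)
                  return -1;      /* no such link, e.g. no driver bound */
          target[len] = '\0';

          base = strrchr(target, '/');
          snprintf(out, outlen, "%s", base ? base + 1 : target);
          return 0;
  }

  int main(void)
  {
          /* hypothetical device directory, already resolved below /sys/devices */
          const char *devdir =
                  "/sys/devices/pci0000:00/0000:00:1d.1/usb2/2-2/2-2:1.0";
          char subsystem[64], driver[64];

          if (read_link_basename(devdir, "subsystem", subsystem, sizeof(subsystem)) == 0)
                  printf("subsystem: %s\n", subsystem);
          if (read_link_basename(devdir, "driver", driver, sizeof(driver)) == 0)
                  printf("driver: %s\n", driver);
          else
                  printf("no driver bound\n");
          return 0;
  }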
- Properties of parent devices never belong in a child device.
Always look at the parent devices themselves for determining device
context properties. If the device ``eth0`` or ``sda`` does not have a
"driver"-link, then this device does not have a driver. Its value is empty.
Never copy any property of the parent-device into a child-device. Parent
device properties may change dynamically without any notice to the
child device.
- Hierarchy in a single device tree
There is only one valid place in sysfs where hierarchy can be examined
and this is below ``/sys/devices``.
It is planned that all device directories will end up in the tree
below this directory.
- Classification by subsystem
There are currently three places for classification of devices:
``/sys/block``, ``/sys/class`` and ``/sys/bus``. It is planned that these will
not contain any device directories themselves, but only flat lists of
symlinks pointing to the unified ``/sys/devices`` tree.
All three places have completely different rules on how to access
device information. It is planned to merge all three
classification directories into one place at ``/sys/subsystem``,
following the layout of the bus directories. All buses and
classes, including the converted block subsystem, will show up
there.
The devices belonging to a subsystem will create a symlink in the
"devices" directory at ``/sys/subsystem/<name>/devices``,
If ``/sys/subsystem`` exists, ``/sys/bus``, ``/sys/class`` and ``/sys/block``
can be ignored. If it does not exist, you always have to scan all three
places, as the kernel is free to move a subsystem from one place to
the other, as long as the devices are still reachable by the same
subsystem name.
Assuming ``/sys/class/<subsystem>`` and ``/sys/bus/<subsystem>``, or
``/sys/block`` and ``/sys/class/block`` are not interchangeable is a bug in
the application.
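The fallback logic just described could be sketched like this (an illustrative
C snippet, not part of the original text; the subsystem name is only an
example)::

  /*
   * Minimal sketch: choose the directories to scan for devices of one
   * subsystem, preferring the planned /sys/subsystem layout and falling
   * back to all three legacy locations.
   */
  #include <stdio.h>
  #include <sys/stat.h>

  static int dir_exists(const char *path)
  {
          struct stat st;

          return stat(path, &st) == 0 && S_ISDIR(st.st_mode);
  }

  int main(void)
  {
          const char *subsystem = "block";
          char path[256];

          if (dir_exists("/sys/subsystem")) {
                  /* unified layout: the only place that needs scanning */
                  snprintf(path, sizeof(path),
                           "/sys/subsystem/%s/devices", subsystem);
                  printf("scan %s\n", path);
                  return 0;
          }

          /* legacy layout: all three places must be considered */
          snprintf(path, sizeof(path), "/sys/bus/%s/devices", subsystem);
          if (dir_exists(path))
                  printf("scan %s\n", path);
          snprintf(path, sizeof(path), "/sys/class/%s", subsystem);
          if (dir_exists(path))
                  printf("scan %s\n", path);
          if (dir_exists("/sys/block"))
                  printf("scan /sys/block\n");
          return 0;
  }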
- Block
The converted block subsystem at ``/sys/class/block`` or
``/sys/subsystem/block`` will contain the links for disks and partitions
at the same level, never in a hierarchy. Assuming the block subsystem to
contain only disks and not partition devices in the same flat list is
a bug in the application.
- "device"-link and <subsystem>:<kernel name>-links
Never depend on the "device"-link. The "device"-link is a workaround
for the old layout, where class devices are not created in
``/sys/devices/`` like the bus devices. If the link-resolving of a
device directory does not end in ``/sys/devices/``, you can use the
"device"-link to find the parent devices in ``/sys/devices/``, That is the
single valid use of the "device"-link; it must never appear in any
path as an element. Assuming the existence of the "device"-link for
a device in ``/sys/devices/`` is a bug in the application.
Accessing ``/sys/class/net/eth0/device`` is a bug in the application.
Never depend on the class-specific links back to the ``/sys/class``
directory. These links are also a workaround for the design mistake
that class devices are not created in ``/sys/devices``. If a device
directory does not contain directories for child devices, these links
may be used to find the child devices in ``/sys/class``. That is the single
valid use of these links; they must never appear in any path as an
element. Assuming the existence of these links for devices which are
real child device directories in the ``/sys/devices`` tree is a bug in
the application.
It is planned to remove all these links when all class device
directories live in ``/sys/devices``.
- Position of devices along device chain can change.
Never depend on a specific parent device position in the devpath,
or the chain of parent devices. The kernel is free to insert devices into
the chain. You must always request the parent device you are looking for
by its subsystem value. You need to walk up the chain until you find
the device that matches the expected subsystem. Depending on a specific
position of a parent device or exposing relative paths using ``../`` to
access the chain of parents is a bug in the application.
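A parent walk following this rule might look roughly like this (a C sketch
under the assumption that the starting path has already been resolved below
``/sys/devices``; the path and wanted subsystem are only examples)::

  /*
   * Minimal sketch: walk up the chain of parent devices until one with the
   * wanted subsystem is found, instead of relying on a fixed position in
   * the devpath.
   */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <limits.h>

  /* Does <devdir>/subsystem resolve to a link whose last element is "wanted"? */
  static int has_subsystem(const char *devdir, const char *wanted)
  {
          char link[PATH_MAX], target[PATH_MAX];
          const char *base;
          ssize_t len;

          snprintf(link, sizeof(link), "%s/subsystem", devdir);
          len = readlink(link, target, sizeof(target) - 1);
          if (len < 0)
                  return 0;
          target[len] = '\0';
          base = strrchr(target, '/');
          return strcmp(base ? base + 1 : target, wanted) == 0;
  }

  int main(void)
  {
          char dev[PATH_MAX] =
                  "/sys/devices/pci0000:00/0000:00:1d.1/usb2/2-2/2-2:1.0";
          const char *wanted = "usb";
          char *slash;

          while (strcmp(dev, "/sys/devices") != 0) {
                  if (has_subsystem(dev, wanted)) {
                          printf("matching parent: %s\n", dev);
                          return 0;
                  }
                  slash = strrchr(dev, '/');
                  if (!slash)
                          break;
                  *slash = '\0';      /* step up to the parent directory */
          }
          printf("no %s parent found\n", wanted);
          return 1;
  }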
- When reading and writing sysfs device attribute files, avoid dependency
on specific error codes wherever possible. This minimizes coupling to
the error handling implementation within the kernel.
In general, failures to read or write sysfs device attributes shall
propagate errors wherever possible. Common errors include, but are not
limited to:
``-EIO``: The read or store operation is not supported, typically
returned by the sysfs system itself if the read or store pointer
is ``NULL``.
``-ENXIO``: The read or store operation failed.
Error codes will not be changed without good reason, and should a change
to error codes result in user-space breakage, it will be fixed, or the
offending change will be reverted.
Userspace applications can, however, expect the format and contents of
the attribute files to remain consistent in the absence of a version
attribute change in the context of a given attribute.
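For instance, a reader of an attribute might treat any failure generically
(a minimal sketch, not from the original text; the attribute path is
hypothetical)::

  /*
   * Minimal sketch: read one attribute and handle any failure generically
   * instead of branching on a particular error code.
   */
  #include <stdio.h>

  int main(void)
  {
          const char *attr = "/sys/devices/virtual/block/loop0/size";
          char value[64];
          FILE *f = fopen(attr, "r");

          if (!f)
                  return 1;               /* attribute not available */
          if (!fgets(value, sizeof(value), f)) {
                  fclose(f);
                  return 1;               /* any read error: just give up */
          }
          printf("%s = %s", attr, value);
          fclose(f);
          return 0;
  }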

View File

@ -0,0 +1,289 @@
Linux Magic System Request Key Hacks
====================================
Documentation for sysrq.c
What is the magic SysRq key?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is a 'magical' key combo you can hit which the kernel will respond to
regardless of whatever else it is doing, unless it is completely locked up.
How do I enable the magic SysRq key?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You need to say "yes" to 'Magic SysRq key (CONFIG_MAGIC_SYSRQ)' when
configuring the kernel. When running a kernel with SysRq compiled in,
/proc/sys/kernel/sysrq controls the functions allowed to be invoked via
the SysRq key. The default value in this file is set by the
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE config symbol, which itself defaults
to 1. Here is the list of possible values in /proc/sys/kernel/sysrq:
- 0 - disable sysrq completely
- 1 - enable all functions of sysrq
- >1 - bitmask of allowed sysrq functions (see below for detailed function
description)::
2 = 0x2 - enable control of console logging level
4 = 0x4 - enable control of keyboard (SAK, unraw)
8 = 0x8 - enable debugging dumps of processes etc.
16 = 0x10 - enable sync command
32 = 0x20 - enable remount read-only
64 = 0x40 - enable signalling of processes (term, kill, oom-kill)
128 = 0x80 - allow reboot/poweroff
256 = 0x100 - allow nicing of all RT tasks
You can set the value in the file by the following command::
echo "number" >/proc/sys/kernel/sysrq
The number may be written here either as decimal or as hexadecimal
with the 0x prefix. CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE must always be
written in hexadecimal.
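For example, writing 176 (``0xb0`` = 16 + 32 + 128) allows only the sync,
remount read-only and reboot/poweroff functions.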
Note that the value of ``/proc/sys/kernel/sysrq`` influences only the invocation
via a keyboard. Invocation of any operation via ``/proc/sysrq-trigger`` is
always allowed (by a user with admin privileges).
How do I use the magic SysRq key?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On x86 - You press the key combo :kbd:`ALT-SysRq-<command key>`.
.. note::
Some
keyboards may not have a key labeled 'SysRq'. The 'SysRq' key is
also known as the 'Print Screen' key. Also some keyboards cannot
handle so many keys being pressed at the same time, so you might
have better luck with: press :kbd:`Alt`, press :kbd:`SysRq`,
release :kbd:`SysRq`, press :kbd:`<command key>`, release everything.
On SPARC - You press :kbd:`ALT-STOP-<command key>`, I believe.
On the serial console (PC style standard serial ports only)
You send a ``BREAK``, then within 5 seconds a command key. Sending
``BREAK`` twice is interpreted as a normal BREAK.
On PowerPC
Press :kbd:`ALT - Print Screen` (or :kbd:`F13`) - :kbd:`<command key>`,
:kbd:`Print Screen` (or :kbd:`F13`) - :kbd:`<command key>` may suffice.
On other
If you know of the key combos for other architectures, please
let me know so I can add them to this section.
On all
write a character to ``/proc/sysrq-trigger``, e.g.::
echo t > /proc/sysrq-trigger
What are the 'command' keys?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
=========== ===================================================================
Command Function
=========== ===================================================================
``b`` Will immediately reboot the system without syncing or unmounting
your disks.
``c`` Will perform a system crash by a NULL pointer dereference.
A crashdump will be taken if configured.
``d`` Shows all locks that are held.
``e`` Send a SIGTERM to all processes, except for init.
``f`` Will call the oom killer to kill a memory hog process, but do not
panic if nothing can be killed.
``g`` Used by kgdb (kernel debugger)
``h`` Will display help (actually any other key than those listed
here will display help, but ``h`` is easy to remember :-)
``i`` Send a SIGKILL to all processes, except for init.
``j`` Forcibly "Just thaw it" - filesystems frozen by the FIFREEZE ioctl.
``k`` Secure Access Key (SAK) Kills all programs on the current virtual
console. NOTE: See important comments below in SAK section.
``l`` Shows a stack backtrace for all active CPUs.
``m`` Will dump current memory info to your console.
``n`` Used to make RT tasks nice-able
``o`` Will shut your system off (if configured and supported).
``p`` Will dump the current registers and flags to your console.
``q`` Will dump per CPU lists of all armed hrtimers (but NOT regular
timer_list timers) and detailed information about all
clockevent devices.
``r`` Turns off keyboard raw mode and sets it to XLATE.
``s`` Will attempt to sync all mounted filesystems.
``t`` Will dump a list of current tasks and their information to your
console.
``u`` Will attempt to remount all mounted filesystems read-only.
``v`` Forcefully restores framebuffer console
``v`` Causes ETM buffer dump [ARM-specific]
``w`` Dumps tasks that are in uninterruptible (blocked) state.
``x`` Used by xmon interface on ppc/powerpc platforms.
Show global PMU Registers on sparc64.
Dump all TLB entries on MIPS.
``y`` Show global CPU Registers [SPARC-64 specific]
``z`` Dump the ftrace buffer
``0``-``9`` Sets the console log level, controlling which kernel messages
will be printed to your console. (``0``, for example would make
it so that only emergency messages like PANICs or OOPSes would
make it to your console.)
=========== ===================================================================
Okay, so what can I use them for?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Well, ``unraw(r)`` is very handy when your X server or a svgalib program crashes.
``sak(k)`` (Secure Access Key) is useful when you want to be sure there is no
trojan program running at console which could grab your password
when you try to log in. It will kill all programs on the given console,
thus letting you make sure that the login prompt you see is actually
the one from init, not some trojan program.
.. important::
In its true form it is not a true SAK like the one in a
C2-compliant system, and it should not be mistaken as
such.
Others find it useful as a System Attention Key, which is handy when
you want to exit a program that will not let you switch consoles
(for example, X or a svgalib program).
``reboot(b)`` is good when you're unable to shut down. But you should also
``sync(s)`` and ``umount(u)`` first.
``crash(c)`` can be used to manually trigger a crashdump when the system is hung.
Note that this just triggers a crash if there is no dump mechanism available.
``sync(s)`` is great when your system is locked up: it allows you to sync your
disks and will certainly lessen the chance of data loss and fscking. Note
that the sync hasn't taken place until you see the "OK" and "Done" appear
on the screen. (If the kernel is really in strife, you may not ever get the
OK or Done message...)
``umount(u)`` is basically useful in the same ways as ``sync(s)``. I generally
``sync(s)``, ``umount(u)``, then ``reboot(b)`` when my system locks. It's saved
me many a fsck. Again, the unmount (remount read-only) hasn't taken place until
you see the "OK" and "Done" message appear on the screen.
The loglevels ``0``-``9`` are useful when your console is being flooded with
kernel messages you do not want to see. Selecting ``0`` will prevent all but
the most urgent kernel messages from reaching your console. (They will
still be logged if syslogd/klogd are alive, though.)
``term(e)`` and ``kill(i)`` are useful if you have some sort of runaway process
you are unable to kill any other way, especially if it's spawning other
processes.
"just thaw ``it(j)``" is useful if your system becomes unresponsive due to a
frozen (probably root) filesystem via the FIFREEZE ioctl.
Sometimes SysRq seems to get 'stuck' after using it, what can I do?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
That happens to me, also. I've found that tapping shift, alt, and control
on both sides of the keyboard, and hitting an invalid sysrq sequence again
will fix the problem. (i.e., something like :kbd:`alt-sysrq-z`). Switching to
another virtual console (:kbd:`ALT+Fn`) and then back again should also help.
I hit SysRq, but nothing seems to happen, what's wrong?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are some keyboards that produce a different keycode for SysRq than the
pre-defined value of 99 (see ``KEY_SYSRQ`` in ``include/linux/input.h``), or
which don't have a SysRq key at all. In these cases, run ``showkey -s`` to find
an appropriate scancode sequence, and use ``setkeycodes <sequence> 99`` to map
this sequence to the usual SysRq code (e.g., ``setkeycodes e05b 99``). It's
probably best to put this command in a boot script. Oh, and by the way, you
exit ``showkey`` by not typing anything for ten seconds.
I want to add SysRQ key events to a module, how does it work?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to register a basic function with the table, you must first include
the header ``include/linux/sysrq.h``; this will define everything else you need.
Next, you must create a ``sysrq_key_op`` struct, and populate it with A) the key
handler function you will use, B) a help_msg string that will print when SysRQ
prints help, and C) an action_msg string that will print right before your
handler is called. Your handler must conform to the prototype in 'sysrq.h'.
After the ``sysrq_key_op`` is created, you can call the kernel function
``register_sysrq_key(int key, struct sysrq_key_op *op_p);`` this will
register the operation pointed to by ``op_p`` at table key 'key',
if that slot in the table is blank. At module unload time, you must call
the function ``unregister_sysrq_key(int key, struct sysrq_key_op *op_p)``, which
will remove the key op pointed to by 'op_p' from the key 'key', if and only if
it is currently registered in that slot. This is in case the slot has been
overwritten since you registered it.
The Magic SysRQ system works by registering key operations against a key op
lookup table, which is defined in 'drivers/tty/sysrq.c'. This key table has
a number of operations registered into it at compile time, but is mutable,
and 2 functions are exported for interface to it::
register_sysrq_key and unregister_sysrq_key.
Of course, never ever leave an invalid pointer in the table. I.e., when
your module that called register_sysrq_key() exits, it must call
unregister_sysrq_key() to clean up the sysrq key table entry that it used.
Null pointers in the table are always safe. :)
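A minimal module sketch of the register/unregister sequence described above
(not part of the original text; the handler prototype and the assumption that
the 'a' slot is free should be checked against ``include/linux/sysrq.h`` for
your kernel version)::

  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/printk.h>
  #include <linux/sysrq.h>

  static void sysrq_handle_example(int key)
  {
          pr_info("example sysrq handler invoked (key %d)\n", key);
  }

  static struct sysrq_key_op example_sysrq_op = {
          .handler    = sysrq_handle_example,
          .help_msg   = "example(a)",             /* shown in the help line */
          .action_msg = "Running example handler",
  };

  static int __init example_sysrq_init(void)
  {
          /* fails if the 'a' slot is already taken */
          return register_sysrq_key('a', &example_sysrq_op);
  }

  static void __exit example_sysrq_exit(void)
  {
          unregister_sysrq_key('a', &example_sysrq_op);
  }

  module_init(example_sysrq_init);
  module_exit(example_sysrq_exit);
  MODULE_LICENSE("GPL");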
If for some reason you feel the need to call the handle_sysrq function from
within a function called by handle_sysrq, you must be aware that you are in
a lock (you are also in an interrupt handler, which means don't sleep!), so
you must call ``__handle_sysrq_nolock`` instead.
When I hit a SysRq key combination only the header appears on the console?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sysrq output is subject to the same console loglevel control as all
other console output. This means that if the kernel was booted 'quiet'
as is common on distro kernels the output may not appear on the actual
console, even though it will appear in the dmesg buffer, and be accessible
via the dmesg command and to the consumers of ``/proc/kmsg``. As a specific
exception the header line from the sysrq command is passed to all console
consumers as if the current loglevel was maximum. If only the header
is emitted it is almost certain that the kernel loglevel is too low.
Should you require the output on the console channel then you will need
to temporarily up the console loglevel using :kbd:`alt-sysrq-8` or::
echo 8 > /proc/sysrq-trigger
Remember to return the loglevel to normal after triggering the sysrq
command you are interested in.
I have more questions, who can I ask?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Just ask them on the linux-kernel mailing list:
linux-kernel@vger.kernel.org
Credits
~~~~~~~
Written by Mydraal <vulpyne@vulpyne.net>
Updated by Adam Sulmicki <adam@cfar.umd.edu>
Updated by Jeremy M. Dolan <jmd@turbogeek.org> 2001/01/28 10:15:59
Added to by Crutcher Dunnavant <crutcher+kernel@datastacks.com>

View File

@ -1,12 +1,16 @@
Unicode support
===============
Last update: 2005-01-17, version 1.4
This file is maintained by H. Peter Anvin <unicode@lanana.org> as part
of the Linux Assigned Names And Numbers Authority (LANANA) project.
The current version can be found at:
http://www.lanana.org/docs/unicode/unicode.txt
http://www.lanana.org/docs/unicode/unicode.txt
------------------------
Introduction
------------
The Linux kernel code has been rewritten to use Unicode to map
characters to fonts. By downloading a single Unicode-to-font table,
@ -16,12 +20,14 @@ the font as indicated.
This changes the semantics of the eight-bit character tables subtly.
The four character tables are now:
=============== =============================== ================
Map symbol Map name Escape code (G0)
=============== =============================== ================
LAT1_MAP Latin-1 (ISO 8859-1) ESC ( B
GRAF_MAP DEC VT100 pseudographics ESC ( 0
IBMPC_MAP IBM code page 437 ESC ( U
USER_MAP User defined ESC ( K
=============== =============================== ================
In particular, ESC ( U is no longer "straight to font", since the font
might be completely different than the IBM character set. This
@ -55,10 +61,12 @@ In addition, the following characters not present in Unicode 1.1.4
have been defined; these are used by the DEC VT graphics map. [v1.2]
THIS USE IS OBSOLETE AND SHOULD NO LONGER BE USED; PLEASE SEE BELOW.
====== ======================================
U+F800 DEC VT GRAPHICS HORIZONTAL LINE SCAN 1
U+F801 DEC VT GRAPHICS HORIZONTAL LINE SCAN 3
U+F803 DEC VT GRAPHICS HORIZONTAL LINE SCAN 7
U+F804 DEC VT GRAPHICS HORIZONTAL LINE SCAN 9
====== ======================================
The DEC VT220 uses a 6x10 character matrix, and these characters form
a smooth progression in the DEC VT graphics character set. I have
@ -74,10 +82,12 @@ keyboard symbols that are unlikely to ever be added to Unicode proper
since they are horribly vendor-specific. This, of course, is an
excellent example of horrible design.
====== ======================================
U+F810 KEYBOARD SYMBOL FLYING FLAG
U+F811 KEYBOARD SYMBOL PULLDOWN MENU
U+F812 KEYBOARD SYMBOL OPEN APPLE
U+F813 KEYBOARD SYMBOL SOLID APPLE
====== ======================================
Klingon language support
------------------------
@ -99,8 +109,10 @@ of the dingbats/symbols/forms type and this is a language, I have
located it at the end, on a 16-cell boundary in keeping with standard
Unicode practice.
NOTE: This range is now officially managed by the ConScript Unicode
Registry. The normative reference is at:
.. note::
This range is now officially managed by the ConScript Unicode
Registry. The normative reference is at:
http://www.evertype.com/standards/csur/klingon.html
@ -112,6 +124,7 @@ However, since the set of symbols appear to be consistent throughout,
with only the actual shapes being different, in keeping with standard
Unicode practice these differences are considered font variants.
====== =======================================================
U+F8D0 KLINGON LETTER A
U+F8D1 KLINGON LETTER B
U+F8D2 KLINGON LETTER CH
@ -155,6 +168,7 @@ U+F8F9 KLINGON DIGIT NINE
U+F8FD KLINGON COMMA
U+F8FE KLINGON FULL STOP
U+F8FF KLINGON SYMBOL FOR EMPIRE
====== =======================================================
Other Fictional and Artificial Scripts
--------------------------------------

View File

@ -0,0 +1,66 @@
Software cursor for VGA
=======================
by Pavel Machek <pavel@atrey.karlin.mff.cuni.cz>
and Martin Mares <mj@atrey.karlin.mff.cuni.cz>
Linux now has some ability to manipulate cursor appearance. Normally, you
can set the size of the hardware cursor (and also work around some ugly bugs
in those miserable Trident cards [#f1]_). You can now play a few new tricks:
you can make your cursor look like a non-blinking red block, make it the
inverse of the background of the character it's over, or highlight that
character, and still choose whether the original hardware cursor should
remain visible or not. There may be other things I have never thought of.
The cursor appearance is controlled by a ``<ESC>[?1;2;3c`` escape sequence
where 1, 2 and 3 are parameters described below. If you omit any of them,
they will default to zeroes.
first parameter
specifies cursor size::
0=default
1=invisible
2=underline,
...
8=full block
+ 16 if you want the software cursor to be applied
+ 32 if you want to always change the background color
+ 64 if you dislike having the background the same as the
foreground.
Highlights are ignored for the last two flags.
second parameter
selects character attribute bits you want to change
(by simply XORing them with the value of this parameter). On standard
VGA, the high four bits specify background and the low four the
foreground. In both groups, low three bits set color (as in normal
color codes used by the console) and the most significant one turns
on highlight (or sometimes blinking -- it depends on the configuration
of your VGA).
third parameter
consists of character attribute bits you want to set.
Bit setting takes place before bit toggling, so you can simply clear a
bit by including it in both the set mask and the toggle mask.
.. [#f1] see ``#define TRIDENT_GLITCH`` in ``drivers/video/vgacon.c``.
Examples:
=========
To get normal blinking underline, use::
echo -e '\033[?2c'
To get blinking block, use::
echo -e '\033[?6c'
To get red non-blinking block, use::
echo -e '\033[?17;0;64c'

View File

@ -51,7 +51,7 @@ As an alternative, the boot loader can pass the relevant 'console='
option to the kernel via the tagged lists specifying the port, and
serial format options as described in
Documentation/kernel-parameters.txt.
Documentation/admin-guide/kernel-parameters.rst.
3. Detect the machine type

View File

@ -16,7 +16,7 @@ will fail. Something like the following should suffice:
typedef struct { long counter; } atomic_long_t;
Historically, counter has been declared volatile. This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.
See Documentation/process/volatile-considered-harmful.rst for the complete rationale.
local_t is very similar to atomic_t. If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate. Please see

View File

@ -1,56 +0,0 @@
These instructions are deliberately very basic. If you want something clever,
go read the real docs ;-) Please don't add more stuff, but feel free to
correct my mistakes ;-) (mbligh@aracnet.com)
Thanks to John Levon, Dave Hansen, et al. for help writing this.
<test> is the thing you're trying to measure.
Make sure you have the correct System.map / vmlinux referenced!
It is probably easiest to use "make install" for linux and hack
/sbin/installkernel to copy vmlinux to /boot, in addition to vmlinuz,
config, System.map, which are usually installed by default.
Readprofile
-----------
A recent readprofile command is needed for 2.6, such as found in util-linux
2.12a, which can be downloaded from:
http://www.kernel.org/pub/linux/utils/util-linux/
Most distributions will ship it already.
Add "profile=2" to the kernel command line.
clear          readprofile -r
               <test>
dump output    readprofile -m /boot/System.map > captured_profile
Oprofile
--------
Get the source (see Changes for required version) from
http://oprofile.sourceforge.net/ and add "idle=poll" to the kernel command
line.
Configure with CONFIG_PROFILING=y and CONFIG_OPROFILE=y & reboot on new kernel
./configure --with-kernel-support
make install
For superior results, be sure to enable the local APIC. If opreport sees
a 0Hz CPU, APIC was not on. Be aware that idle=poll may mean a performance
penalty.
One time setup:
               opcontrol --setup --vmlinux=/boot/vmlinux
clear          opcontrol --reset
start          opcontrol --start
               <test>
stop           opcontrol --stop
dump output    opreport > output_file
To only report on the kernel, run opreport -l /boot/vmlinux > output_file
A reset is needed to clear old statistics, which survive a reboot.

View File

@ -1,131 +0,0 @@
Kernel Support for miscellaneous (your favourite) Binary Formats v1.1
=====================================================================
This Kernel feature allows you to invoke almost (for restrictions see below)
every program by simply typing its name in the shell.
This includes for example compiled Java(TM), Python or Emacs programs.
To achieve this you must tell binfmt_misc which interpreter has to be invoked
with which binary. Binfmt_misc recognises the binary-type by matching some bytes
at the beginning of the file with a magic byte sequence (masking out specified
bits) you have supplied. Binfmt_misc can also recognise a filename extension
such as '.com' or '.exe'.
First you must mount binfmt_misc:
mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
To actually register a new binary type, you have to set up a string looking like
:name:type:offset:magic:mask:interpreter:flags (where you can choose the ':'
upon your needs) and echo it to /proc/sys/fs/binfmt_misc/register.
Here is what the fields mean:
- 'name' is an identifier string. A new /proc file will be created with this
name below /proc/sys/fs/binfmt_misc; cannot contain slashes '/' for obvious
reasons.
- 'type' is the type of recognition. Give 'M' for magic and 'E' for extension.
- 'offset' is the offset of the magic/mask in the file, counted in bytes. This
defaults to 0 if you omit it (i.e. you write ':name:type::magic...'). Ignored
when using filename extension matching.
- 'magic' is the byte sequence binfmt_misc is matching for. The magic string
may contain hex-encoded characters like \x0a or \xA4. Note that you must
escape any NUL bytes; parsing halts at the first one. In a shell environment
you might have to write \\x0a to prevent the shell from eating your \.
If you chose filename extension matching, this is the extension to be
recognised (without the '.', the \x0a specials are not allowed). Extension
matching is case sensitive, and slashes '/' are not allowed!
- 'mask' is an (optional, defaults to all 0xff) mask. You can mask out some
bits from matching by supplying a string like magic and as long as magic.
The mask is anded with the byte sequence of the file. Note that you must
escape any NUL bytes; parsing halts at the first one. Ignored when using
filename extension matching.
- 'interpreter' is the program that should be invoked with the binary as first
argument (specify the full path)
- 'flags' is an optional field that controls several aspects of the invocation
of the interpreter. It is a string of capital letters, each controls a
certain aspect. The following flags are supported -
'P' - preserve-argv[0]. Legacy behavior of binfmt_misc is to overwrite
the original argv[0] with the full path to the binary. When this
flag is included, binfmt_misc will add an argument to the argument
vector for this purpose, thus preserving the original argv[0].
e.g. If your interp is set to /bin/foo and you run `blah` (which is
in /usr/local/bin), then the kernel will execute /bin/foo with
argv[] set to ["/bin/foo", "/usr/local/bin/blah", "blah"]. The
interp has to be aware of this so it can execute /usr/local/bin/blah
with argv[] set to ["blah"].
'O' - open-binary. Legacy behavior of binfmt_misc is to pass the full path
of the binary to the interpreter as an argument. When this flag is
included, binfmt_misc will open the file for reading and pass its
descriptor as an argument, instead of the full path, thus allowing
the interpreter to execute non-readable binaries. This feature
should be used with care - the interpreter has to be trusted not to
emit the contents of the non-readable binary.
'C' - credentials. Currently, the behavior of binfmt_misc is to calculate
the credentials and security token of the new process according to
the interpreter. When this flag is included, these attributes are
calculated according to the binary. It also implies the 'O' flag.
This feature should be used with care as the interpreter
will run with root permissions when a setuid binary owned by root
is run with binfmt_misc.
'F' - fix binary. The usual behaviour of binfmt_misc is to spawn the
binary lazily when the misc format file is invoked. However,
this doesn't work very well in the face of mount namespaces and
changeroots, so the F mode opens the binary as soon as the
emulation is installed and uses the opened image to spawn the
emulator, meaning it is always available once installed,
regardless of how the environment changes.
There are some restrictions:
- the whole register string may not exceed 1920 characters
- the magic must reside in the first 128 bytes of the file, i.e.
offset+size(magic) has to be less than 128
- the interpreter string may not exceed 127 characters
To use binfmt_misc you have to mount it first. You can mount it with
"mount -t binfmt_misc none /proc/sys/fs/binfmt_misc" command, or you can add
a line "none /proc/sys/fs/binfmt_misc binfmt_misc defaults 0 0" to your
/etc/fstab so it auto mounts on boot.
You may want to add the binary formats in one of your /etc/rc scripts during
boot-up. Read the manual of your init program to figure out how to do this
right.
Think about the order of adding entries! Later added entries are matched first!
A few examples (assuming you are in /proc/sys/fs/binfmt_misc):
- enable support for em86 (like binfmt_em86, for Alpha AXP only):
echo ':i386:M::\x7fELF\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x03:\xff\xff\xff\xff\xff\xfe\xfe\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfb\xff\xff:/bin/em86:' > register
echo ':i486:M::\x7fELF\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x06:\xff\xff\xff\xff\xff\xfe\xfe\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfb\xff\xff:/bin/em86:' > register
- enable support for packed DOS applications (pre-configured dosemu hdimages):
echo ':DEXE:M::\x0eDEX::/usr/bin/dosexec:' > register
- enable support for Windows executables using wine:
echo ':DOSWin:M::MZ::/usr/local/bin/wine:' > register
For java support see Documentation/java.txt
You can enable/disable binfmt_misc or one binary type by echoing 0 (to disable)
or 1 (to enable) to /proc/sys/fs/binfmt_misc/status or /proc/.../the_name.
Catting the file tells you the current status of binfmt_misc/the entry.
You can remove one entry or all entries by echoing -1 to /proc/.../the_name
or /proc/sys/fs/binfmt_misc/status.
HINTS:
======
If you want to pass special arguments to your interpreter, you can
write a wrapper script for it. See Documentation/java.txt for an
example.
Your interpreter should NOT look in the PATH for the filename; the kernel
passes it the full filename (or the file descriptor) to use. Using $PATH can
cause unexpected behaviour and can be a security hazard.
Richard Günther <rguenth@tat.physik.uni-tuebingen.de>

View File

@ -14,7 +14,7 @@ Contents:
The RAM disk driver is a way to use main system memory as a block device. It
is required for initrd, an initial filesystem used if you need to load modules
in order to access the root filesystem (see Documentation/initrd.txt). It can
in order to access the root filesystem (see Documentation/admin-guide/initrd.rst). It can
also be used for a temporary filesystem for crypto work, since the contents
are erased on reboot.

View File

@ -1,34 +0,0 @@
Linux Braille Console
To get early boot messages on a braille device (before userspace screen
readers can start), you first need to compile the support for the usual serial
console (see serial-console.txt), and for braille device (in Device Drivers -
Accessibility).
Then you need to specify a console=brl, option on the kernel command line; the
format is:
console=brl,serial_options...
where serial_options... are the same as described in serial-console.txt
So for instance you can use console=brl,ttyS0 if the braille device is connected
to the first serial port, and console=brl,ttyS0,115200 to override the baud rate
to 115200, etc.
By default, the braille device will just show the last kernel message (console
mode). To review previous messages, press the Insert key to switch to the VT
review mode. In review mode, the arrow keys permit browsing the VT content,
the page up/down keys go to the top/bottom of the screen, and the home key goes
back to the cursor, hence providing a very basic screen reviewing facility.
Sound feedback can be obtained by adding the braille_console.sound=1 kernel
parameter.
For simplicity, only one braille console can be enabled; other uses of
console=brl,... will be discarded. Also note that it does not interfere with
the console selection mechanism described in serial-console.txt
For now, only the VisioBraille device is supported.
Samuel Thibault <samuel.thibault@ens-lyon.org>

View File

@ -8,7 +8,7 @@ cpuacct.txt
- CPU Accounting Controller; account CPU usage for groups of tasks.
cpusets.txt
- documents the cpusets feature; assign CPUs and Mem to a set of tasks.
devices.txt
admin-guide/devices.rst
- Device Whitelist Controller; description, interface and security.
freezer-subsystem.txt
- checkpointing; rationale to not use signals, interface.

View File

@ -336,9 +336,11 @@ latex_elements = {
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('user/index', 'linux-user.tex', 'Linux Kernel User Documentation',
'The kernel development community', 'manual'),
('kernel-documentation', 'kernel-documentation.tex', 'The Linux Kernel Documentation',
'The kernel development community', 'manual'),
('development-process/index', 'development-process.tex', 'Linux Kernel Development Documentation',
('process/index', 'development-process.tex', 'Linux Kernel Development Documentation',
'The kernel development community', 'manual'),
('gpu/index', 'gpu.tex', 'Linux GPU Driver Developer\'s Guide',
'The kernel development community', 'manual'),

View File

@ -1,9 +0,0 @@
Linux Kernel Development Documentation
======================================
Contents:
.. toctree::
:maxdepth: 2
development-process

View File

@ -1,7 +1,7 @@
* Maxim DS3231 Real Time Clock
Required properties:
see: Documentation/devicetree/bindings/i2c/trivial-devices.txt
see: Documentation/devicetree/bindings/i2c/trivial-devices.txt
Optional property:
- #clock-cells: Should be 1.

View File

@ -3,7 +3,7 @@
Philips PCF8563/Epson RTC8564 Real Time Clock
Required properties:
see: Documentation/devicetree/bindings/i2c/trivial-devices.txt
see: Documentation/devicetree/bindings/i2c/trivial-devices.txt
Optional property:
- #clock-cells: Should be 0.

View File

@ -3,7 +3,7 @@
I. For patch submitters
0) Normal patch submission rules from Documentation/SubmittingPatches
0) Normal patch submission rules from Documentation/process/submitting-patches.rst
apply.
1) The Documentation/ portion of the patch should be a separate patch.

View File

@ -1,340 +0,0 @@
Introduction
============
This document describes how to use the dynamic debug (dyndbg) feature.
Dynamic debug is designed to allow you to dynamically enable/disable
kernel code to obtain additional kernel information. Currently, if
CONFIG_DYNAMIC_DEBUG is set, then all pr_debug()/dev_dbg() and
print_hex_dump_debug()/print_hex_dump_bytes() calls can be dynamically
enabled per-callsite.
If CONFIG_DYNAMIC_DEBUG is not set, print_hex_dump_debug() is just a
shortcut for print_hex_dump(KERN_DEBUG).
For print_hex_dump_debug()/print_hex_dump_bytes(), the format string is
its 'prefix_str' argument, if it is a constant string; or "hexdump"
in case 'prefix_str' is built dynamically.
Dynamic debug has even more useful features:
* Simple query language allows turning on and off debugging
statements by matching any combination of 0 or 1 of:
- source filename
- function name
- line number (including ranges of line numbers)
- module name
- format string
* Provides a debugfs control file: <debugfs>/dynamic_debug/control
which can be read to display the complete list of known debug
statements, to help guide you
Controlling dynamic debug Behaviour
===================================
The behaviour of pr_debug()/dev_dbg()s is controlled via writing to a
control file in the 'debugfs' filesystem. Thus, you must first mount
the debugfs filesystem, in order to make use of this feature.
Subsequently, we refer to the control file as:
<debugfs>/dynamic_debug/control. For example, if you want to enable
printing from source file 'svcsock.c', line 1603 you simply do:
nullarbor:~ # echo 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
If you make a mistake with the syntax, the write will fail thus:
nullarbor:~ # echo 'file svcsock.c wtf 1 +p' >
<debugfs>/dynamic_debug/control
-bash: echo: write error: Invalid argument
Viewing Dynamic Debug Behaviour
===============================
You can view the currently configured behaviour of all the debug
statements via:
nullarbor:~ # cat <debugfs>/dynamic_debug/control
# filename:lineno [module]function flags format
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:323 [svcxprt_rdma]svc_rdma_cleanup =_ "SVCRDMA Module Removed, deregister RPC RDMA transport\012"
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:341 [svcxprt_rdma]svc_rdma_init =_ "\011max_inline : %d\012"
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:340 [svcxprt_rdma]svc_rdma_init =_ "\011sq_depth : %d\012"
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:338 [svcxprt_rdma]svc_rdma_init =_ "\011max_requests : %d\012"
...
You can also apply standard Unix text manipulation filters to this
data, e.g.
nullarbor:~ # grep -i rdma <debugfs>/dynamic_debug/control | wc -l
62
nullarbor:~ # grep -i tcp <debugfs>/dynamic_debug/control | wc -l
42
The third column shows the currently enabled flags for each debug
statement callsite (see below for definitions of the flags). The
default value, with no flags enabled, is "=_". So you can view all
the debug statement callsites with any non-default flags:
nullarbor:~ # awk '$3 != "=_"' <debugfs>/dynamic_debug/control
# filename:lineno [module]function flags format
/usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svcsock.c:1603 [sunrpc]svc_send p "svc_process: st_sendto returned %d\012"
Command Language Reference
==========================
At the lexical level, a command comprises a sequence of words separated
by spaces or tabs. So these are all equivalent:
nullarbor:~ # echo -c 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
nullarbor:~ # echo -c ' file svcsock.c line 1603 +p ' >
<debugfs>/dynamic_debug/control
nullarbor:~ # echo -n 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
Command submissions are bounded by a write() system call.
Multiple commands can be written together, separated by ';' or '\n'.
~# echo "func pnpacpi_get_resources +p; func pnp_assign_mem +p" \
> <debugfs>/dynamic_debug/control
If your query set is big, you can batch them too:
~# cat query-batch-file > <debugfs>/dynamic_debug/control
Another way is to use wildcards. The match rule supports '*' (matches
zero or more characters) and '?' (matches exactly one character). For
example, you can match all usb drivers:
~# echo "file drivers/usb/* +p" > <debugfs>/dynamic_debug/control
At the syntactical level, a command comprises a sequence of match
specifications, followed by a flags change specification.
command ::= match-spec* flags-spec
The match-spec's are used to choose a subset of the known pr_debug()
callsites to which to apply the flags-spec. Think of them as a query
with implicit ANDs between each pair. Note that an empty list of
match-specs will select all debug statement callsites.
A match specification comprises a keyword, which controls the
attribute of the callsite to be compared, and a value to compare
against. Possible keywords are:
match-spec ::= 'func' string |
'file' string |
'module' string |
'format' string |
'line' line-range
line-range ::= lineno |
'-'lineno |
lineno'-' |
lineno'-'lineno
// Note: line-range cannot contain space, e.g.
// "1-30" is valid range but "1 - 30" is not.
lineno ::= unsigned-int
The meanings of each keyword are:
func
The given string is compared against the function name
of each callsite. Example:
func svc_tcp_accept
file
The given string is compared against either the full pathname, the
src-root relative pathname, or the basename of the source file of
each callsite. Examples:
file svcsock.c
file kernel/freezer.c
file /usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svcsock.c
module
The given string is compared against the module name
of each callsite. The module name is the string as
seen in "lsmod", i.e. without the directory or the .ko
suffix and with '-' changed to '_'. Examples:
module sunrpc
module nfsd
format
The given string is searched for in the dynamic debug format
string. Note that the string does not need to match the
entire format, only some part. Whitespace and other
special characters can be escaped using C octal character
escape \ooo notation, e.g. the space character is \040.
Alternatively, the string can be enclosed in double quote
characters (") or single quote characters (').
Examples:
format svcrdma: // many of the NFS/RDMA server pr_debugs
format readahead // some pr_debugs in the readahead cache
format nfsd:\040SETATTR // one way to match a format with whitespace
format "nfsd: SETATTR" // a neater way to match a format with whitespace
format 'nfsd: SETATTR' // yet another way to match a format with whitespace
line
The given line number or range of line numbers is compared
against the line number of each pr_debug() callsite. A single
line number matches the callsite line number exactly. A
range of line numbers matches any callsite between the first
and last line number inclusive. An empty first number means
the first line in the file, an empty last number means the
last line in the file. Examples:
line 1603 // exactly line 1603
line 1600-1605 // the six lines from line 1600 to line 1605
line -1605 // the 1605 lines from line 1 to line 1605
line 1600- // all lines from line 1600 to the end of the file
The flags specification comprises a change operation followed
by one or more flag characters. The change operation is one
of the characters:
- remove the given flags
+ add the given flags
= set the flags to the given flags
The flags are:
p enables the pr_debug() callsite.
f Include the function name in the printed message
l Include line number in the printed message
m Include module name in the printed message
t Include thread ID in messages not generated from interrupt context
_ No flags are set. (Or'd with others on input)
For print_hex_dump_debug() and print_hex_dump_bytes(), only the 'p' flag
has meaning; other flags are ignored.
For display, the flags are preceded by '='
(mnemonic: what the flags are currently equal to).
Note the regexp ^[-+=][flmpt_]+$ matches a flags specification.
To clear all flags at once, use "=_" or "-flmpt".
Debug messages during Boot Process
==================================
To activate debug messages for core code and built-in modules during
the boot process, even before userspace and debugfs exists, use
dyndbg="QUERY", module.dyndbg="QUERY", or ddebug_query="QUERY"
(ddebug_query is obsoleted by dyndbg, and deprecated). QUERY follows
the syntax described above, but must not exceed 1023 characters. Your
bootloader may impose lower limits.
These dyndbg params are processed just after the ddebug tables are
processed, as part of the arch_initcall. Thus you can enable debug
messages in all code run after this arch_initcall via this boot
parameter.
On an x86 system for example ACPI enablement is a subsys_initcall and
dyndbg="file ec.c +p"
will show early Embedded Controller transactions during ACPI setup if
your machine (typically a laptop) has an Embedded Controller.
PCI (or other devices) initialization also is a hot candidate for using
this boot parameter for debugging purposes.
If the foo module is not built-in, foo.dyndbg will still be processed at
boot time, without effect, but will be reprocessed when the module is
loaded later. dyndbg_query= and bare dyndbg= are only processed at
boot.
Debug Messages at Module Initialization Time
============================================
When "modprobe foo" is called, modprobe scans /proc/cmdline for
foo.params, strips "foo.", and passes them to the kernel along with
params given in modprobe args or /etc/modprobe.d/*.conf files,
in the following order:
1. # parameters given via /etc/modprobe.d/*.conf
options foo dyndbg=+pt
options foo dyndbg # defaults to +p
2. # foo.dyndbg as given in boot args, "foo." is stripped and passed
foo.dyndbg=" func bar +p; func buz +mp"
3. # args to modprobe
modprobe foo dyndbg==pmf # override previous settings
These dyndbg queries are applied in order, with last having final say.
This allows boot args to override or modify those from /etc/modprobe.d
(sensible, since 1 is system wide, 2 is kernel or boot specific), and
modprobe args to override both.
In the foo.dyndbg="QUERY" form, the query must exclude "module foo".
"foo" is extracted from the param-name, and applied to each query in
"QUERY", and only 1 match-spec of each type is allowed.
The dyndbg option is a "fake" module parameter, which means:
- modules do not need to define it explicitly
- every module gets it tacitly, whether they use pr_debug or not
- it doesn't appear in /sys/module/$module/parameters/
To see it, grep the control file, or inspect /proc/cmdline.
For CONFIG_DYNAMIC_DEBUG kernels, any settings given at boot-time (or
enabled by -DDEBUG flag during compilation) can be disabled later via
the sysfs interface if the debug messages are no longer needed:
echo "module module_name -p" > <debugfs>/dynamic_debug/control
Examples
========
// enable the message at line 1603 of file svcsock.c
nullarbor:~ # echo -n 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
// enable all the messages in file svcsock.c
nullarbor:~ # echo -n 'file svcsock.c +p' >
<debugfs>/dynamic_debug/control
// enable all the messages in the NFS server module
nullarbor:~ # echo -n 'module nfsd +p' >
<debugfs>/dynamic_debug/control
// enable all 12 messages in the function svc_process()
nullarbor:~ # echo -n 'func svc_process +p' >
<debugfs>/dynamic_debug/control
// disable all 12 messages in the function svc_process()
nullarbor:~ # echo -n 'func svc_process -p' >
<debugfs>/dynamic_debug/control
// enable messages for NFS calls READ, READLINK, READDIR and READDIR+.
nullarbor:~ # echo -n 'format "nfsd: READ" +p' >
<debugfs>/dynamic_debug/control
// enable messages in files of which the paths include string "usb"
nullarbor:~ # echo -n '*usb* +p' > <debugfs>/dynamic_debug/control
// enable all messages
nullarbor:~ # echo -n '+p' > <debugfs>/dynamic_debug/control
// add module, function to all enabled messages
nullarbor:~ # echo -n '+mf' > <debugfs>/dynamic_debug/control
// boot-args example, with newlines and comments for readability
Kernel command line: ...
// see what's going on in dyndbg=value processing
dynamic_debug.verbose=1
// enable pr_debugs in 2 builtins, #cmt is stripped
dyndbg="module params +p #cmt ; module sys +p"
// enable pr_debugs in 2 functions in a module loaded later
pc87360.dyndbg="func pc87360_init_device +p; func pc87360_find +p"

View File

@ -19,7 +19,7 @@ forever.
This should not cause problems for anybody, since everybody using a
2.1.x kernel should have updated their C library to a suitable version
anyway (see the file "Documentation/Changes".)
anyway (see the file "Documentation/process/changes.rst".)
1.2 Allow Mixed Locks Again
---------------------------

View File

@ -11,7 +11,7 @@ Updated 2006 by Horms <horms@verge.net.au>
In order to use a diskless system, such as an X-terminal or printer server
for example, it is necessary for the root filesystem to be present on a
non-disk device. This may be an initramfs (see Documentation/filesystems/
ramfs-rootfs-initramfs.txt), a ramdisk (see Documentation/initrd.txt) or a
ramfs-rootfs-initramfs.txt), a ramdisk (see Documentation/admin-guide/initrd.rst) or a
filesystem mounted via NFS. The following text describes on how to use NFS
for the root filesystem. For the rest of this text 'client' means the
diskless system, and 'server' means the NFS server.
@ -284,7 +284,7 @@ They depend on various facilities being available:
"kernel <relative-path-below /tftpboot>". The nfsroot parameters
are passed to the kernel by adding them to the "append" line.
It is common to use a serial console in conjunction with pxelinux,
see Documentation/serial-console.txt for more information.
see Documentation/admin-guide/serial-console.rst for more information.
For more information on isolinux, including how to create bootdisks
for prebuilt kernels, see http://syslinux.zytor.com/

View File

@ -119,7 +119,7 @@ separated by spaces:
253:0 Device with major 253 and minor 0
Authoritative information can be found in
"Documentation/kernel-parameters.txt".
"Documentation/admin-guide/kernel-parameters.rst".
(*) rw

View File

@ -10,10 +10,10 @@ increase the chances of your change being accepted.
----------
* It should be unnecessary to mention, but please read and follow
Documentation/SubmitChecklist
Documentation/SubmittingDrivers
Documentation/SubmittingPatches
Documentation/CodingStyle
Documentation/process/submit-checklist.rst
Documentation/process/submitting-drivers.rst
Documentation/process/submitting-patches.rst
Documentation/process/coding-style.rst
* Please run your patch through 'checkpatch --strict'. There should be no
errors, no warnings, and few if any check messages. If there are any

View File

@ -11,8 +11,9 @@ Contents:
.. toctree::
:maxdepth: 2
admin-guide/index
kernel-documentation
development-process/index
process/index
dev-tools/tools
driver-api/index
media/index

View File

@ -332,7 +332,7 @@ README for the ISDN-subsystem
4. Device-inodes
The major and minor numbers and their names are described in
Documentation/devices.txt. The major numbers are:
Documentation/admin-guide/devices.rst. The major numbers are:
43 for the ISDN-tty's.
44 for the ISDN-callout-tty's.

View File

@ -127,15 +127,15 @@ linux-api@ver.kernel.org に送ることを勧めます。
小限のレベルで必要な数々のソフトウェアパッケージの一覧を示してい
ます。
Documentation/CodingStyle
Documentation/process/coding-style.rst
これは Linux カーネルのコーディングスタイルと背景にある理由を記述
しています。全ての新しいコードはこのドキュメントにあるガイドライン
に従っていることを期待されています。大部分のメンテナはこれらのルー
ルに従っているものだけを受け付け、多くの人は正しいスタイルのコード
だけをレビューします。
Documentation/SubmittingPatches
Documentation/SubmittingDrivers
Documentation/process/submitting-patches.rst
Documentation/process/submitting-drivers.rst
これらのファイルには、どうやってうまくパッチを作って投稿するかに
ついて非常に詳しく書かれており、以下を含みます(これだけに限らない
けれども)
@ -153,7 +153,7 @@ linux-api@ver.kernel.org に送ることを勧めます。
"Linux kernel patch submission format"
http://linux.yyz.us/patch-format.html
Documentation/stable_api_nonsense.txt
Documentation/process/stable-api-nonsense.rst
このファイルはカーネルの中に不変のAPIを持たないことにした意識的な
決断の背景にある理由について書かれています。以下のようなことを含
んでいます-
@ -164,29 +164,29 @@ linux-api@ver.kernel.org に送ることを勧めます。
このドキュメントは Linux 開発の思想を理解するのに非常に重要です。
そして、他のOSでの開発者が Linux に移る時にとても重要です。
Documentation/SecurityBugs
Documentation/admin-guide/security-bugs.rst
もし Linux カーネルでセキュリティ問題を発見したように思ったら、こ
のドキュメントのステップに従ってカーネル開発者に連絡し、問題解決を
支援してください。
Documentation/ManagementStyle
Documentation/process/management-style.rst
このドキュメントは Linux カーネルのメンテナ達がどう行動するか、
彼らの手法の背景にある共有されている精神について記述しています。こ
れはカーネル開発の初心者なら(もしくは、単に興味があるだけの人でも)
重要です。なぜならこのドキュメントは、カーネルメンテナ達の独特な
行動についての多くの誤解や混乱を解消するからです。
Documentation/stable_kernel_rules.txt
Documentation/process/stable-kernel-rules.rst
このファイルはどのように stable カーネルのリリースが行われるかのルー
ルが記述されています。そしてこれらのリリースの中のどこかで変更を取
り入れてもらいたい場合に何をすれば良いかが示されています。
Documentation/kernel-docs.txt
Documentation/process/kernel-docs.rst
  カーネル開発に付随する外部ドキュメントのリストです。もしあなたが
探しているものがカーネル内のドキュメントでみつからなかった場合、
このリストをあたってみてください。
Documentation/applying-patches.txt
Documentation/process/applying-patches.rst
パッチとはなにか、パッチをどうやって様々なカーネルの開発ブランチに
適用するのかについて正確に記述した良い入門書です。
@ -314,7 +314,7 @@ Andrew Morton が Linux-kernel メーリングリストにカーネルリリー
た問題がなければもう少し長くなることもあります。セキュリティ関連の問題
の場合はこれに対してだいたいの場合、すぐにリリースがされます。
カーネルツリーに入っている、Documentation/stable_kernel_rules.txt ファ
カーネルツリーに入っている、Documentation/process/stable-kernel-rules.rst ファ
イルにはどのような種類の変更が -stable ツリーに受け入れ可能か、またリ
リースプロセスがどう動くかが記述されています。
@ -372,7 +372,7 @@ bugzilla.kernel.org は Linux カーネル開発者がカーネルのバグを
場所です。ユーザは見つけたバグの全てをこのツールで報告すべきです。
どう kernel bugzilla を使うかの詳細は、以下を参照してください-
http://bugzilla.kernel.org/page.cgi?id=faq.html
メインカーネルソースディレクトリにあるファイル REPORTING-BUGS はカーネ
メインカーネルソースディレクトリにあるファイル admin-guide/reporting-bugs.rst はカーネ
ルバグらしいものについてどうレポートするかの良いテンプレートであり、問
題の追跡を助けるためにカーネル開発者にとってどんな情報が必要なのかの詳
細が書かれています。
@ -438,7 +438,7 @@ MAINTAINERS ファイルにリストがありますので参照してくださ
メールの先頭でなく、各引用行の間にあなたの言いたいことを追加するべきで
す。
もしパッチをメールに付ける場合は、Documentation/SubmittingPatches に提
もしパッチをメールに付ける場合は、Documentation/process/submitting-patches.rst に提
示されているように、それは プレーンな可読テキストにすることを忘れない
ようにしましょう。カーネル開発者は 添付や圧縮したパッチを扱いたがりま
せん-

View File

@ -1,5 +1,5 @@
NOTE:
This is a version of Documentation/SubmitChecklist into Japanese.
This is a version of Documentation/process/submit-checklist.rst into Japanese.
This document is maintained by Takenori Nagano <t-nagano@ah.jp.nec.com>
and the JF Project team <http://www.linux.or.jp/JF/>.
If you find any difference between this document and the original file
@ -14,7 +14,7 @@ to update the original English file first.
Last Updated: 2008/07/14
==================================
これは、
linux-2.6.26/Documentation/SubmitChecklist の和訳です。
linux-2.6.26/Documentation/process/submit-checklist.rst の和訳です。
翻訳団体: JF プロジェクト < http://www.linux.or.jp/JF/ >
翻訳日: 2008/07/14
@ -27,7 +27,7 @@ Linux カーネルパッチ投稿者向けチェックリスト
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
本書では、パッチをより素早く取り込んでもらいたい開発者が実践すべき基本的な事柄
をいくつか紹介します。ここにある全ての事柄は、Documentation/SubmittingPatches
をいくつか紹介します。ここにある全ての事柄は、Documentation/process/submitting-patches.rst
などのLinuxカーネルパッチ投稿に際しての心得を補足するものです。
1: 妥当なCONFIGオプションや変更されたCONFIGオプション、つまり =y, =m, =n
@ -84,7 +84,7 @@ Linux カーネルパッチ投稿者向けチェックリスト
必ずドキュメントを追加してください。
17: 新しいブートパラメータを追加した場合には、
必ずDocumentation/kernel-parameters.txt に説明を追加してください。
必ずDocumentation/admin-guide/kernel-parameters.rst に説明を追加してください。
18: 新しくmoduleにパラメータを追加した場合には、MODULE_PARM_DESC()を
利用して必ずその説明を記述してください。

View File

@ -1,5 +1,5 @@
NOTE:
This is a version of Documentation/SubmittingPatches into Japanese.
This is a version of Documentation/process/submitting-patches.rst into Japanese.
This document is maintained by Keiichi KII <k-keiichi@bx.jp.nec.com>
and the JF Project team <http://www.linux.or.jp/JF/>.
If you find any difference between this document and the original file
@ -15,7 +15,7 @@ Last Updated: 2011/06/09
==================================
これは、
linux-2.6.39/Documentation/SubmittingPatches の和訳
linux-2.6.39/Documentation/process/submitting-patches.rst の和訳
です。
翻訳団体: JF プロジェクト < http://www.linux.or.jp/JF/ >
翻訳日: 2011/06/09
@ -34,9 +34,9 @@ Linux カーネルに変更を加えたいと思っている個人又は会社
おじけづかせることもあります。この文章はあなたの変更を大いに受け入れ
てもらえやすくする提案を集めたものです。
コードを投稿する前に、Documentation/SubmitChecklist の項目リストに目
コードを投稿する前に、Documentation/process/submit-checklist.rst の項目リストに目
を通してチェックしてください。もしあなたがドライバーを投稿しようとし
ているなら、Documentation/SubmittingDrivers にも目を通してください。
ているなら、Documentation/process/submitting-drivers.rst にも目を通してください。
--------------------------------------------
セクション1 パッチの作り方と送り方
@ -148,7 +148,7 @@ http://savannah.nongnu.org/projects/quilt
4) パッチのスタイルチェック
あなたのパッチが基本的な( Linux カーネルの)コーディングスタイルに違反し
ていないかをチェックして下さい。その詳細を Documentation/CodingStyle
ていないかをチェックして下さい。その詳細を Documentation/process/coding-style.rst
見つけることができます。コーディングスタイルの違反はレビューする人の
時間を無駄にするだけなので、恐らくあなたのパッチは読まれることすらなく
拒否されるでしょう。
@ -246,7 +246,7 @@ MIME 形式の添付ファイルは Linus に手間を取らせることにな
あれば、誰かが MIME 形式のパッチを再送するよう求めるかもしれません。
余計な変更を加えずにあなたのパッチを送信するための電子メールクライアントの設定
のヒントについては Documentation/email-clients.txt を参照してください。
のヒントについては Documentation/process/email-clients.rst を参照してください。
8) 電子メールのサイズ
@ -609,7 +609,7 @@ diffstat の結果を生成するために「 git diff -M --stat --summary 」
し例外を適用するには、本当に妥当な理由が不可欠です。あなたは恐らくこの
セクションを Linus のコンピュータ・サイエンス101と呼ぶでしょう。
1) Documentation/CodingStyleを参照
1) Documentation/process/coding-style.rstを参照
言うまでもなく、あなたのコードがこのコーディングスタイルからあまりに
も逸脱していると、レビューやコメントなしに受け取ってもらえないかもし
@ -704,8 +704,8 @@ Greg Kroah-Hartman, "How to piss off a kernel subsystem maintainer".
NO!!!! No more huge patch bombs to linux-kernel@vger.kernel.org people!
<https://lkml.org/lkml/2005/7/11/336>
Kernel Documentation/CodingStyle:
<http://users.sosdg.org/~qiyong/lxr/source/Documentation/CodingStyle>
Kernel Documentation/process/coding-style.rst:
<http://users.sosdg.org/~qiyong/lxr/source/Documentation/process/coding-style.rst>
Linus Torvalds's mail on the canonical patch format:
<http://lkml.org/lkml/2005/4/7/183>
View File
@ -1,5 +1,5 @@
NOTE:
This is a version of Documentation/stable_api_nonsense.txt into Japanese.
This is a version of Documentation/process/stable-api-nonsense.rst into Japanese.
This document is maintained by IKEDA, Munehiro <m-ikeda@ds.jp.nec.com>
and the JF Project team <http://www.linux.or.jp/JF/>.
If you find any difference between this document and the original file
@ -14,7 +14,7 @@ to update the original English file first.
Last Updated: 2007/07/18
==================================
これは、
linux-2.6.22-rc4/Documentation/stable_api_nonsense.txt の和訳
linux-2.6.22-rc4/Documentation/process/stable-api-nonsense.rst の和訳
です。
翻訳団体: JF プロジェクト < http://www.linux.or.jp/JF/ >
翻訳日 2007/06/11
View File
@ -1,5 +1,5 @@
NOTE:
This is Japanese translated version of "Documentation/stable_kernel_rules.txt".
This is Japanese translated version of "Documentation/process/stable-kernel-rules.rst".
This one is maintained by Tsugikazu Shibata <tshibata@ab.jp.nec.com>
and JF Project team <www.linux.or.jp/JF>.
If you find difference with original file or problem in translation,
@ -12,7 +12,7 @@ file at first.
==================================
これは、
linux-2.6.29/Documentation/stable_kernel_rules.txt
linux-2.6.29/Documentation/process/stable-kernel-rules.rst
の和訳です。
翻訳団体: JF プロジェクト < http://www.linux.or.jp/JF/ >
@ -43,7 +43,7 @@ linux-2.6.29/Documentation/stable_kernel_rules.txt
"理論的には競合状態になる"ようなものは不可。
- いかなる些細な修正も含めることはできない。(スペルの修正、空白のクリー
ンアップなど)
- Documentation/SubmittingPatches の規則に従ったものでなければならない。
- Documentation/process/submitting-patches.rst の規則に従ったものでなければならない。
- パッチ自体か同等の修正が Linus のツリーに既に存在しなければならない。
  Linus のツリーでのコミットID を -stable へのパッチ投稿の際に引用す
ること。
View File
@ -264,7 +264,7 @@ To reduce its OS jitter, do at least one of the following:
kthreads from being created in the first place.
2. Boot with "nosoftlockup=0", which will also prevent these kthreads
from being created. Other related watchdog and softlockup boot
parameters may be found in Documentation/kernel-parameters.txt
parameters may be found in Documentation/admin-guide/kernel-parameters.rst
and Documentation/watchdog/watchdog-parameters.txt.
3. Echo a zero to /proc/sys/kernel/watchdog to disable the
watchdog timer.
View File
@ -1,5 +1,5 @@
NOTE:
This is a version of Documentation/HOWTO translated into korean
This is a version of Documentation/process/howto.rst translated into korean
This document is maintained by Minchan Kim <minchan@kernel.org>
If you find any difference between this document and the original file or
a problem with the translation, please contact the maintainer of this file.
@ -11,7 +11,7 @@ try to update the original English file first.
==================================
이 문서는
Documentation/HOWTO
Documentation/process/howto.rst
의 한글 번역입니다.
역자: 김민찬 <minchan@kernel.org>
@ -98,18 +98,18 @@ mtk.manpages@gmail.com의 메인테이너에게 보낼 것을 권장한다.
빌드하기 위해 필요한 것을 설명한다. 커널에 입문하는 사람들은 여기서
시작해야 한다.
Documentation/Changes
Documentation/process/changes.rst
이 파일은 커널을 성공적으로 빌드하고 실행시키기 위해 필요한 다양한
소프트웨어 패키지들의 최소 버젼을 나열한다.
Documentation/CodingStyle
Documentation/process/coding-style.rst
이 문서는 리눅스 커널 코딩 스타일과 그렇게 한 몇몇 이유를 설명한다.
모든 새로운 코드는 이 문서에 가이드라인들을 따라야 한다. 대부분의
메인테이너들은 이 규칙을 따르는 패치들만을 받아들일 것이고 많은 사람들이
그 패치가 올바른 스타일일 경우만 코드를 검토할 것이다.
Documentation/SubmittingPatches
Documentation/SubmittingDrivers
Documentation/process/submitting-patches.rst
Documentation/process/submitting-drivers.rst
이 파일들은 성공적으로 패치를 만들고 보내는 법을 다음의 내용들로
굉장히 상세히 설명하고 있다(그러나 다음으로 한정되진 않는다).
- Email 내용들
@ -126,7 +126,7 @@ mtk.manpages@gmail.com의 메인테이너에게 보낼 것을 권장한다.
"Linux kernel patch submission format"
http://linux.yyz.us/patch-format.html
Documentation/stable_api_nonsense.txt
Documentation/process/stable-api-nonsense.rst
이 문서는 의도적으로 커널이 불변하는 API를 갖지 않도록 결정한
이유를 설명하며 다음과 같은 것들을 포함한다.
- 서브시스템 shim-layer(호환성을 위해?)
@ -136,12 +136,12 @@ mtk.manpages@gmail.com의 메인테이너에게 보낼 것을 권장한다.
리눅스로 전향하는 사람들에게는 매우 중요하다.
Documentation/SecurityBugs
Documentation/admin-guide/security-bugs.rst
여러분들이 리눅스 커널의 보안 문제를 발견했다고 생각한다면 이 문서에
나온 단계에 따라서 커널 개발자들에게 알리고 그 문제를 해결할 수 있도록
도와 달라.
Documentation/ManagementStyle
Documentation/process/management-style.rst
이 문서는 리눅스 커널 메인테이너들이 그들의 방법론에 녹아 있는
정신을 어떻게 공유하고 운영하는지를 설명한다. 이것은 커널 개발에 입문하는
모든 사람들(또는 커널 개발에 작은 호기심이라도 있는 사람들)이
@ -149,17 +149,17 @@ mtk.manpages@gmail.com의 메인테이너에게 보낼 것을 권장한다.
독특한 행동에 관하여 흔히 있는 오해들과 혼란들을 해소하고 있기
때문이다.
Documentation/stable_kernel_rules.txt
Documentation/process/stable-kernel-rules.rst
이 문서는 안정적인 커널 배포가 이루어지는 규칙을 설명하고 있으며
여러분들이 이러한 배포들 중 하나에 변경을 하길 원한다면
무엇을 해야 하는지를 설명한다.
Documentation/kernel-docs.txt
Documentation/process/kernel-docs.rst
커널 개발에 관계된 외부 문서의 리스트이다. 커널 내의 포함된 문서들
중에 여러분이 찾고 싶은 문서를 발견하지 못할 경우 이 리스트를
살펴보라.
Documentation/applying-patches.txt
Documentation/process/applying-patches.rst
패치가 무엇이며 그것을 커널의 다른 개발 브랜치들에 어떻게
적용하는지에 관하여 자세히 설명하고 있는 좋은 입문서이다.
@ -276,7 +276,7 @@ Andrew Morton의 글이 있다.
4.x.y는 "stable" 팀<stable@vger.kernel.org>에 의해 관리되며 거의 매번 격주로
배포된다.
커널 트리 문서들 내에 Documentation/stable_kernel_rules.txt 파일은 어떤
커널 트리 문서들 내에 Documentation/process/stable-kernel-rules.rst 파일은 어떤
종류의 변경들이 -stable 트리로 들어왔는지와 배포 프로세스가 어떻게
진행되는지를 설명한다.
@ -328,7 +328,7 @@ bugzilla.kernel.org는 리눅스 커널 개발자들이 커널의 버그를 추
kernel bugzilla를 사용하는 자세한 방법은 다음을 참조하라.
http://test.kernel.org/bugzilla/faq.html
메인 커널 소스 디렉토리에 있는 REPORTING-BUGS 파일은 커널 버그라고 생각되는
메인 커널 소스 디렉토리에 있는 admin-guide/reporting-bugs.rst 파일은 커널 버그라고 생각되는
것을 보고하는 방법에 관한 좋은 템플릿이며 문제를 추적하기 위해서 커널
개발자들이 필요로 하는 정보가 무엇들인지를 상세히 설명하고 있다.
@ -391,7 +391,7 @@ bugme-janitor 메일링 리스트(bugzilla에 모든 변화들이 여기서 메
"John 커널해커는 작성했다...."를 유지하며 여러분들의 의견을 그 메일의 윗부분에
작성하지 말고 각 인용한 단락들 사이에 넣어라.
여러분들이 패치들을 메일에 넣는다면 그것들은 Documentation/SubmittingPatches
여러분들이 패치들을 메일에 넣는다면 그것들은 Documentation/process/submitting-patches.rst
나와있는데로 명백히(plain) 읽을 수 있는 텍스트여야 한다. 커널 개발자들은
첨부파일이나 압축된 패치들을 원하지 않는다. 그들은 여러분들의 패치의
각 라인 단위로 코멘트를 하길 원하며 압축하거나 첨부하지 않고 보내는 것이
View File
@ -1,5 +1,5 @@
NOTE:
This is a version of Documentation/stable_api_nonsense.txt translated
This is a version of Documentation/process/stable-api-nonsense.rst translated
into korean
This document is maintained by Minchan Kim <minchan@kernel.org>
If you find any difference between this document and the original file or
@ -12,7 +12,7 @@ try to update the original English file first.
==================================
이 문서는
Documentation/stable_api_nonsense.txt
Documentation/process/stable-api-nonsense.rst
의 한글 번역입니다.
역자: 김민찬 <minchan@kernel.org>
View File
@ -11,7 +11,7 @@ details), without giving other tasks a chance to run. The current
stack trace is displayed upon detection and, by default, the system
will stay locked up. Alternatively, the kernel can be configured to
panic; a sysctl, "kernel.softlockup_panic", a kernel parameter,
"softlockup_panic" (see "Documentation/kernel-parameters.txt" for
"softlockup_panic" (see "Documentation/admin-guide/kernel-parameters.rst" for
details), and a compile option, "BOOTPARAM_SOFTLOCKUP_PANIC", are
provided for this.
@ -23,7 +23,7 @@ upon detection and the system will stay locked up unless the default
behavior is changed, which can be done through a sysctl,
'hardlockup_panic', a compile time knob, "BOOTPARAM_HARDLOCKUP_PANIC",
and a kernel parameter, "nmi_watchdog"
(see "Documentation/kernel-parameters.txt" for details).
(see "Documentation/admin-guide/kernel-parameters.rst" for details).
The panic option can be used in combination with panic_timeout (this
timeout is set through the confusingly named "kernel.panic" sysctl),
View File
@ -139,7 +139,7 @@ follows:
PARTUUID=00112233-4455-6677-8899-AABBCCDDEEFF/PARTNROFF=-2
Authoritative information can be found in
"Documentation/kernel-parameters.txt".
"Documentation/admin-guide/kernel-parameters.rst".
2.2) ro, rw
View File
@ -1,158 +0,0 @@
This file is a registry of magic numbers which are in use. When you
add a magic number to a structure, you should also add it to this
file, since it is best if the magic numbers used by various structures
are unique.
It is a *very* good idea to protect kernel data structures with magic
numbers. This allows you to check at run time whether (a) a structure
has been clobbered, or (b) you've passed the wrong structure to a
routine. This last is especially useful --- particularly when you are
passing pointers to structures via a void * pointer. The tty code,
for example, does this frequently to pass driver-specific and line
discipline-specific structures back and forth.
The way to use magic numbers is to declare them at the beginning of
the structure, like so:
struct tty_ldisc {
int magic;
...
};
Please follow this discipline when you are adding future enhancements
to the kernel! It has saved me countless hours of debugging,
especially in the screwy cases where an array has been overrun and
structures following the array have been overwritten. Using this
discipline, these cases get detected quickly and safely.
Theodore Ts'o
31 Mar 94
The magic table is current to Linux 2.1.55.
Michael Chastain
<mailto:mec@shout.net>
22 Sep 1997
Now it should be up to date with Linux 2.1.112. Because
we are in feature freeze time it is very unlikely that
something will change before 2.2.x. The entries are
sorted by number field.
Krzysztof G. Baranowski
<mailto: kgb@knm.org.pl>
29 Jul 1998
Updated the magic table to Linux 2.5.45. Right over the feature freeze,
but it is possible that some new magic numbers will sneak into the
kernel before 2.6.x yet.
Petr Baudis
<pasky@ucw.cz>
03 Nov 2002
Updated the magic table to Linux 2.5.74.
Fabian Frederick
<ffrederick@users.sourceforge.net>
09 Jul 2003
Magic Name Number Structure File
===========================================================================
PG_MAGIC 'P' pg_{read,write}_hdr include/linux/pg.h
CMAGIC 0x0111 user include/linux/a.out.h
MKISS_DRIVER_MAGIC 0x04bf mkiss_channel drivers/net/mkiss.h
HDLC_MAGIC 0x239e n_hdlc drivers/char/n_hdlc.c
APM_BIOS_MAGIC 0x4101 apm_user arch/x86/kernel/apm_32.c
CYCLADES_MAGIC 0x4359 cyclades_port include/linux/cyclades.h
DB_MAGIC 0x4442 fc_info drivers/net/iph5526_novram.c
DL_MAGIC 0x444d fc_info drivers/net/iph5526_novram.c
FASYNC_MAGIC 0x4601 fasync_struct include/linux/fs.h
FF_MAGIC 0x4646 fc_info drivers/net/iph5526_novram.c
ISICOM_MAGIC 0x4d54 isi_port include/linux/isicom.h
PTY_MAGIC 0x5001 drivers/char/pty.c
PPP_MAGIC 0x5002 ppp include/linux/if_pppvar.h
SERIAL_MAGIC 0x5301 async_struct include/linux/serial.h
SSTATE_MAGIC 0x5302 serial_state include/linux/serial.h
SLIP_MAGIC 0x5302 slip drivers/net/slip.h
STRIP_MAGIC 0x5303 strip drivers/net/strip.c
X25_ASY_MAGIC 0x5303 x25_asy drivers/net/x25_asy.h
SIXPACK_MAGIC 0x5304 sixpack drivers/net/hamradio/6pack.h
AX25_MAGIC 0x5316 ax_disp drivers/net/mkiss.h
TTY_MAGIC 0x5401 tty_struct include/linux/tty.h
MGSL_MAGIC 0x5401 mgsl_info drivers/char/synclink.c
TTY_DRIVER_MAGIC 0x5402 tty_driver include/linux/tty_driver.h
MGSLPC_MAGIC 0x5402 mgslpc_info drivers/char/pcmcia/synclink_cs.c
TTY_LDISC_MAGIC 0x5403 tty_ldisc include/linux/tty_ldisc.h
USB_SERIAL_MAGIC 0x6702 usb_serial drivers/usb/serial/usb-serial.h
FULL_DUPLEX_MAGIC 0x6969 drivers/net/ethernet/dec/tulip/de2104x.c
USB_BLUETOOTH_MAGIC 0x6d02 usb_bluetooth drivers/usb/class/bluetty.c
RFCOMM_TTY_MAGIC 0x6d02 net/bluetooth/rfcomm/tty.c
USB_SERIAL_PORT_MAGIC 0x7301 usb_serial_port drivers/usb/serial/usb-serial.h
CG_MAGIC 0x00090255 ufs_cylinder_group include/linux/ufs_fs.h
RPORT_MAGIC 0x00525001 r_port drivers/char/rocket_int.h
LSEMAGIC 0x05091998 lse drivers/fc4/fc.c
GDTIOCTL_MAGIC 0x06030f07 gdth_iowr_str drivers/scsi/gdth_ioctl.h
RIEBL_MAGIC 0x09051990 drivers/net/atarilance.c
NBD_REQUEST_MAGIC 0x12560953 nbd_request include/linux/nbd.h
RED_MAGIC2 0x170fc2a5 (any) mm/slab.c
BAYCOM_MAGIC 0x19730510 baycom_state drivers/net/baycom_epp.c
ISDN_X25IFACE_MAGIC 0x1e75a2b9 isdn_x25iface_proto_data
drivers/isdn/isdn_x25iface.h
ECP_MAGIC 0x21504345 cdkecpsig include/linux/cdk.h
LSOMAGIC 0x27091997 lso drivers/fc4/fc.c
LSMAGIC 0x2a3b4d2a ls drivers/fc4/fc.c
WANPIPE_MAGIC 0x414C4453 sdla_{dump,exec} include/linux/wanpipe.h
CS_CARD_MAGIC 0x43525553 cs_card sound/oss/cs46xx.c
LABELCL_MAGIC 0x4857434c labelcl_info_s include/asm/ia64/sn/labelcl.h
ISDN_ASYNC_MAGIC 0x49344C01 modem_info include/linux/isdn.h
CTC_ASYNC_MAGIC 0x49344C01 ctc_tty_info drivers/s390/net/ctctty.c
ISDN_NET_MAGIC 0x49344C02 isdn_net_local_s drivers/isdn/i4l/isdn_net_lib.h
SAVEKMSG_MAGIC2 0x4B4D5347 savekmsg arch/*/amiga/config.c
CS_STATE_MAGIC 0x4c4f4749 cs_state sound/oss/cs46xx.c
SLAB_C_MAGIC 0x4f17a36d kmem_cache mm/slab.c
COW_MAGIC 0x4f4f4f4d cow_header_v1 arch/um/drivers/ubd_user.c
I810_CARD_MAGIC 0x5072696E i810_card sound/oss/i810_audio.c
TRIDENT_CARD_MAGIC 0x5072696E trident_card sound/oss/trident.c
ROUTER_MAGIC 0x524d4157 wan_device [in wanrouter.h pre 3.9]
SAVEKMSG_MAGIC1 0x53415645 savekmsg arch/*/amiga/config.c
GDA_MAGIC 0x58464552 gda arch/mips/include/asm/sn/gda.h
RED_MAGIC1 0x5a2cf071 (any) mm/slab.c
EEPROM_MAGIC_VALUE 0x5ab478d2 lanai_dev drivers/atm/lanai.c
HDLCDRV_MAGIC 0x5ac6e778 hdlcdrv_state include/linux/hdlcdrv.h
PCXX_MAGIC 0x5c6df104 channel drivers/char/pcxx.h
KV_MAGIC 0x5f4b565f kernel_vars_s arch/mips/include/asm/sn/klkernvars.h
I810_STATE_MAGIC 0x63657373 i810_state sound/oss/i810_audio.c
TRIDENT_STATE_MAGIC 0x63657373 trient_state sound/oss/trident.c
M3_CARD_MAGIC 0x646e6f50 m3_card sound/oss/maestro3.c
FW_HEADER_MAGIC 0x65726F66 fw_header drivers/atm/fore200e.h
SLOT_MAGIC 0x67267321 slot drivers/hotplug/cpqphp.h
SLOT_MAGIC 0x67267322 slot drivers/hotplug/acpiphp.h
LO_MAGIC 0x68797548 nbd_device include/linux/nbd.h
OPROFILE_MAGIC 0x6f70726f super_block drivers/oprofile/oprofilefs.h
M3_STATE_MAGIC 0x734d724d m3_state sound/oss/maestro3.c
VMALLOC_MAGIC 0x87654320 snd_alloc_track sound/core/memory.c
KMALLOC_MAGIC 0x87654321 snd_alloc_track sound/core/memory.c
PWC_MAGIC 0x89DC10AB pwc_device drivers/usb/media/pwc.h
NBD_REPLY_MAGIC 0x96744668 nbd_reply include/linux/nbd.h
ENI155_MAGIC 0xa54b872d midway_eprom drivers/atm/eni.h
CODA_MAGIC 0xC0DAC0DA coda_file_info fs/coda/coda_fs_i.h
DPMEM_MAGIC 0xc0ffee11 gdt_pci_sram drivers/scsi/gdth.h
YAM_MAGIC 0xF10A7654 yam_port drivers/net/hamradio/yam.c
CCB_MAGIC 0xf2691ad2 ccb drivers/scsi/ncr53c8xx.c
QUEUE_MAGIC_FREE 0xf7e1c9a3 queue_entry drivers/scsi/arm/queue.c
QUEUE_MAGIC_USED 0xf7e1cc33 queue_entry drivers/scsi/arm/queue.c
HTB_CMAGIC 0xFEFAFEF1 htb_class net/sched/sch_htb.c
NMI_MAGIC 0x48414d4d455201 nmi_s arch/mips/include/asm/sn/nmi.h
Note that special per-driver magic numbers are also defined for sound memory
management; see include/sound/sndmagic.h for the complete list. Many OSS sound
drivers construct their magic numbers from the soundcard PCI ID - these are not
listed here either.
The IrDA subsystem also uses a large number of its own magic numbers; see
include/net/irda/irda.h for a complete list.
HFS is another large user of magic numbers - you can find them in
fs/hfs/hfs.h.
View File
@ -648,12 +648,12 @@ microcode programming. A new interface for MPEG compression and playback
devices is documented in :ref:`extended-controls`.
.. [#f1]
According to Documentation/devices.txt these should be symbolic links
According to Documentation/admin-guide/devices.rst these should be symbolic links
to ``/dev/video0``. Note the original bttv interface is not
compatible with V4L or V4L2.
.. [#f2]
According to ``Documentation/devices.txt`` a symbolic link to
According to ``Documentation/admin-guide/devices.rst`` a symbolic link to
``/dev/radio0``.
.. [#f3]
View File
@ -304,10 +304,10 @@ bug. It is very helpful if you can tell where exactly it broke
With a hard freeze you probably won't find anything in the logfiles.
The only way to capture any kernel messages is to hook up a serial
console and let some terminal application log the messages. /me uses
screen. See Documentation/serial-console.txt for details on setting
screen. See Documentation/admin-guide/serial-console.rst for details on setting
up a serial console.
Read Documentation/oops-tracing.txt to learn how to get any useful
Read Documentation/admin-guide/oops-tracing.rst to learn how to get any useful
information out of a register+stack dump printed by the kernel on
protection faults (so-called "kernel oops").
View File
@ -324,7 +324,7 @@ guarantee that the memory block contains only migratable pages.
Now, a boot option for making a memory block which consists of migratable pages
is supported. By specifying "kernelcore=" or "movablecore=" boot option, you can
create ZONE_MOVABLE...a zone which is just used for movable pages.
(See also Documentation/kernel-parameters.txt)
(See also Documentation/admin-guide/kernel-parameters.rst)
Assume the system has "TOTAL" amount of memory at boot time, this boot option
creates ZONE_MOVABLE as following.
View File
@ -200,7 +200,7 @@ priority messages to the console. You can change this at runtime using:
or by specifying "debug" on the kernel command line at boot, to send
all kernel messages to the console. A specific value for this parameter
can also be set using the "loglevel" kernel boot option. See the
dmesg(8) man page and Documentation/kernel-parameters.txt for details.
dmesg(8) man page and Documentation/admin-guide/kernel-parameters.rst for details.
Netconsole was designed to be as instantaneous as possible, to
enable the logging of even the most critical kernel bugs. It works
View File
@ -136,14 +136,14 @@ A: Normally Greg Kroah-Hartman collects stable commits himself, but
Q: I see a network patch and I think it should be backported to stable.
Should I request it via "stable@vger.kernel.org" like the references in
the kernel's Documentation/stable_kernel_rules.txt file say?
the kernel's Documentation/process/stable-kernel-rules.rst file say?
A: No, not for networking. Check the stable queues as per above 1st to see
if it is already queued. If not, then send a mail to netdev, listing
the upstream commit ID and why you think it should be a stable candidate.
Before you jump to go do the above, do note that the normal stable rules
in Documentation/stable_kernel_rules.txt still apply. So you need to
in Documentation/process/stable-kernel-rules.rst still apply. So you need to
explicitly indicate why it is a critical fix and exactly what users are
impacted. In addition, you need to convince yourself that you _really_
think it has been overlooked, vs. having been considered and rejected.
@ -165,7 +165,7 @@ A: No. See above answer. In short, if you think it really belongs in
If you think there is some valid information relating to it being in
stable that does _not_ belong in the commit log, then use the three
dash marker line as described in Documentation/SubmittingPatches to
dash marker line as described in Documentation/process/submitting-patches.rst to
temporarily embed that information into the patch that you send.
Q: Someone said that the comment style and coding convention is different
@ -220,5 +220,5 @@ A: Attention to detail. Re-read your own work as if you were the
If it is your first patch, mail it to yourself so you can test apply
it to an unpatched tree to confirm infrastructure didn't mangle it.
Finally, go back and read Documentation/SubmittingPatches to be
Finally, go back and read Documentation/process/submitting-patches.rst to be
sure you are not repeating some common mistake documented there.
View File
@ -364,7 +364,7 @@ steps you should take:
- The contents of your report will vary a lot depending upon the
problem. If it's a kernel crash then you should refer to the
REPORTING-BUGS file.
admin-guide/reporting-bugs.rst file.
But for most problems it is useful to provide the following:
View File
@ -1,267 +0,0 @@
The `parport' code provides parallel-port support under Linux. This
includes the ability to share one port between multiple device
drivers.
You can pass parameters to the parport code to override its automatic
detection of your hardware. This is particularly useful if you want
to use IRQs, since in general these can't be autoprobed successfully.
By default IRQs are not used even if they _can_ be probed. This is
because there are a lot of people using the same IRQ for their
parallel port and a sound card or network card.
The parport code is split into two parts: generic (which deals with
port-sharing) and architecture-dependent (which deals with actually
using the port).
Parport as modules
==================
If you load the parport code as a module, say
# insmod parport
to load the generic parport code. You then must load the
architecture-dependent code with (for example):
# insmod parport_pc io=0x3bc,0x378,0x278 irq=none,7,auto
to tell the parport code that you want three PC-style ports, one at
0x3bc with no IRQ, one at 0x378 using IRQ 7, and one at 0x278 with an
auto-detected IRQ. Currently, PC-style (parport_pc), Sun `bpp',
Amiga, Atari, and MFC3 hardware is supported.
PCI parallel I/O card support comes from parport_pc. Base I/O
addresses should not be specified for supported PCI cards since they
are automatically detected.
modprobe
--------
If you use modprobe, you will find it useful to add lines like the ones below
to a configuration file in the /etc/modprobe.d/ directory:
alias parport_lowlevel parport_pc
options parport_pc io=0x378,0x278 irq=7,auto
modprobe will load parport_pc (with the options "io=0x378,0x278 irq=7,auto")
whenever a parallel port device driver (such as lp) is loaded.
Note that these are example lines only! You shouldn't in general need
to specify any options to parport_pc in order to be able to use a
parallel port.
Parport probe [optional]
-------------
In 2.2 kernels there was a module called parport_probe, which was used
for collecting IEEE 1284 device ID information. This has now been
enhanced and now lives with the IEEE 1284 support. When a parallel
port is detected, the devices that are connected to it are analysed,
and information is logged like this:
parport0: Printer, BJC-210 (Canon)
The probe information is available from files in /proc/sys/dev/parport/.
Parport linked into the kernel statically
=========================================
If you compile the parport code into the kernel, then you can use
kernel boot parameters to get the same effect. Add something like the
following to your LILO command line:
parport=0x3bc parport=0x378,7 parport=0x278,auto,nofifo
You can have many `parport=...' statements, one for each port you want
to add. Adding `parport=0' to the kernel command-line will disable
parport support entirely. Adding `parport=auto' to the kernel
command-line will make parport use any IRQ lines or DMA channels that
it auto-detects.
Files in /proc
==============
If you have configured the /proc filesystem into your kernel, you will
see a new directory entry: /proc/sys/dev/parport. In there will be a
directory entry for each parallel port for which parport is
configured. In each of those directories are a collection of files
describing that parallel port.
The /proc/sys/dev/parport directory tree looks like:
parport
|-- default
| |-- spintime
| `-- timeslice
|-- parport0
| |-- autoprobe
| |-- autoprobe0
| |-- autoprobe1
| |-- autoprobe2
| |-- autoprobe3
| |-- devices
| | |-- active
| | `-- lp
| | `-- timeslice
| |-- base-addr
| |-- irq
| |-- dma
| |-- modes
| `-- spintime
`-- parport1
|-- autoprobe
|-- autoprobe0
|-- autoprobe1
|-- autoprobe2
|-- autoprobe3
|-- devices
| |-- active
| `-- ppa
| `-- timeslice
|-- base-addr
|-- irq
|-- dma
|-- modes
`-- spintime
File: Contents:
devices/active A list of the device drivers using that port. A "+"
will appear by the name of the device currently using
the port (it might not appear against any). The
string "none" means that there are no device drivers
using that port.
base-addr Parallel port's base address, or addresses if the port
has more than one in which case they are separated
with tabs. These values might not have any sensible
meaning for some ports.
irq Parallel port's IRQ, or -1 if none is being used.
dma Parallel port's DMA channel, or -1 if none is being
used.
modes Parallel port's hardware modes, comma-separated,
meaning:
PCSPP PC-style SPP registers are available.
TRISTATE Port is bidirectional.
COMPAT Hardware acceleration for printers is
available and will be used.
EPP Hardware acceleration for EPP protocol
is available and will be used.
ECP Hardware acceleration for ECP protocol
is available and will be used.
DMA DMA is available and will be used.
Note that the current implementation will only take
advantage of COMPAT and ECP modes if it has an IRQ
line to use.
autoprobe Any IEEE-1284 device ID information that has been
acquired from the (non-IEEE 1284.3) device.
autoprobe[0-3] IEEE 1284 device ID information retrieved from
daisy-chain devices that conform to IEEE 1284.3.
spintime The number of microseconds to busy-loop while waiting
for the peripheral to respond. You might find that
adjusting this improves performance, depending on your
peripherals. This is a port-wide setting, i.e. it
applies to all devices on a particular port.
timeslice The number of milliseconds that a device driver is
allowed to keep a port claimed for. This is advisory,
and driver can ignore it if it must.
default/* The defaults for spintime and timeslice. When a new
port is registered, it picks up the default spintime.
When a new device is registered, it picks up the
default timeslice.
Device drivers
==============
Once the parport code is initialised, you can attach device drivers to
specific ports. Normally this happens automatically; if the lp driver
is loaded it will create one lp device for each port found. You can
override this, though, by using parameters either when you load the lp
driver:
# insmod lp parport=0,2
or on the LILO command line:
lp=parport0 lp=parport2
Both the above examples would inform lp that you want /dev/lp0 to be
the first parallel port, and /dev/lp1 to be the _third_ parallel port,
with no lp device associated with the second port (parport1). Note
that this is different to the way older kernels worked; there used to
be a static association between the I/O port address and the device
name, so /dev/lp0 was always the port at 0x3bc. This is no longer the
case - if you only have one port, it will default to being /dev/lp0,
regardless of base address.
Also:
* If you selected the IEEE 1284 support at compile time, you can say
`lp=auto' on the kernel command line, and lp will create devices
only for those ports that seem to have printers attached.
* If you give PLIP the `timid' parameter, either with `plip=timid' on
the command line, or with `insmod plip timid=1' when using modules,
it will avoid any ports that seem to be in use by other devices.
* IRQ autoprobing works only for a few port types at the moment.
Reporting printer problems with parport
=======================================
If you are having problems printing, please go through these steps to
try to narrow down where the problem area is.
When reporting problems with parport, really you need to give all of
the messages that parport_pc spits out when it initialises. There are
several code paths:
o polling
o interrupt-driven, protocol in software
o interrupt-driven, protocol in hardware using PIO
o interrupt-driven, protocol in hardware using DMA
The kernel messages that parport_pc logs give an indication of which
code path is being used. (They could be a lot better actually..)
For normal printer protocol, having IEEE 1284 modes enabled or not
should not make a difference.
To turn off the 'protocol in hardware' code paths, disable
CONFIG_PARPORT_PC_FIFO. Note that when they are enabled they are not
necessarily _used_; it depends on whether the hardware is available,
enabled by the BIOS, and detected by the driver.
So, to start with, disable CONFIG_PARPORT_PC_FIFO, and load parport_pc
with 'irq=none'. See if printing works then. It really should,
because this is the simplest code path.
If that works fine, try with 'io=0x378 irq=7' (adjust for your
hardware), to make it use interrupt-driven in-software protocol.
If _that_ works fine, then one of the hardware modes isn't working
right. Enable CONFIG_PARPORT_PC_FIFO (no, it isn't a module option,
and yes, it should be), set the port to ECP mode in the BIOS and note
the DMA channel, and try with:
io=0x378 irq=7 dma=none (for PIO)
io=0x378 irq=7 dma=3 (for DMA)
--
philb@gnu.org
tim@cyberelk.net
View File
@ -6,7 +6,7 @@ basic-pm-debugging.txt
- Debugging suspend and resume
charger-manager.txt
- Battery charger management.
devices.txt
admin-guide/devices.rst
- How drivers interact with system-wide power management
drivers-testing.txt
- Testing suspend and resume support in device drivers
View File
@ -8,7 +8,7 @@ management. Based on previous work by Patrick Mochel <mochel@transmeta.com>
This document only covers the aspects of power management specific to PCI
devices. For general description of the kernel's interfaces related to device
power management refer to Documentation/power/devices.txt and
power management refer to Documentation/power/admin-guide/devices.rst and
Documentation/power/runtime_pm.txt.
---------------------------------------------------------------------------
@ -417,7 +417,7 @@ pm->runtime_idle() callback.
2.4. System-Wide Power Transitions
----------------------------------
There are a few different types of system-wide power transitions, described in
Documentation/power/devices.txt. Each of them requires devices to be handled
Documentation/power/admin-guide/devices.rst. Each of them requires devices to be handled
in a specific way and the PM core executes subsystem-level power management
callbacks for this purpose. They are executed in phases such that each phase
involves executing the same subsystem-level callback for every device belonging
@ -623,7 +623,7 @@ System restore requires a hibernation image to be loaded into memory and the
pre-hibernation memory contents to be restored before the pre-hibernation system
activity can be resumed.
As described in Documentation/power/devices.txt, the hibernation image is loaded
As described in Documentation/power/admin-guide/devices.rst, the hibernation image is loaded
into memory by a fresh instance of the kernel, called the boot kernel, which in
turn is loaded and run by a boot loader in the usual way. After the boot kernel
has loaded the image, it needs to replace its own code and data with the code
@ -677,7 +677,7 @@ controlling the runtime power management of their devices.
At the time of this writing there are two ways to define power management
callbacks for a PCI device driver, the recommended one, based on using a
dev_pm_ops structure described in Documentation/power/devices.txt, and the
dev_pm_ops structure described in Documentation/power/admin-guide/devices.rst, and the
"legacy" one, in which the .suspend(), .suspend_late(), .resume_early(), and
.resume() callbacks from struct pci_driver are used. The legacy approach,
however, doesn't allow one to define runtime power management callbacks and is
@ -1046,5 +1046,5 @@ PCI Local Bus Specification, Rev. 3.0
PCI Bus Power Management Interface Specification, Rev. 1.2
Advanced Configuration and Power Interface (ACPI) Specification, Rev. 3.0b
PCI Express Base Specification, Rev. 2.0
Documentation/power/devices.txt
Documentation/power/admin-guide/devices.rst
Documentation/power/runtime_pm.txt
View File
@ -674,7 +674,7 @@ left in runtime suspend. If that happens, the PM core will not execute any
system suspend and resume callbacks for all of those devices, except for the
complete callback, which is then entirely responsible for handling the device
as appropriate. This only applies to system suspend transitions that are not
related to hibernation (see Documentation/power/devices.txt for more
related to hibernation (see Documentation/power/admin-guide/devices.rst for more
information).
The PM core does its best to reduce the probability of race conditions between
View File
@ -8,7 +8,7 @@ Some prerequisites:
You know how dm-crypt works. If not, visit the following web page:
http://www.saout.de/misc/dm-crypt/
You have read Documentation/power/swsusp.txt and understand it.
You did read Documentation/initrd.txt and know how an initrd works.
You did read Documentation/admin-guide/initrd.rst and know how an initrd works.
You know how to create or how to modify an initrd.
Now your system is properly set up, your disk is encrypted except for
View File
@ -22,7 +22,7 @@ Coding style
************
The kernel has long had a standard coding style, described in
Documentation/CodingStyle. For much of that time, the policies described
Documentation/process/coding-style.rst. For much of that time, the policies described
in that file were taken as being, at most, advisory. As a result, there is
a substantial amount of code in the kernel which does not meet the coding
style guidelines. The presence of that code leads to two independent
@ -343,7 +343,7 @@ user-space developers to know what they are working with. See
Documentation/ABI/README for a description of how this documentation should
be formatted and what information needs to be provided.
The file Documentation/kernel-parameters.txt describes all of the kernel's
The file Documentation/admin-guide/kernel-parameters.rst describes all of the kernel's
boot-time parameters. Any patch which adds new parameters should add the
appropriate entries to this file.
View File
@ -9,8 +9,8 @@ kernel. Unsurprisingly, the kernel development community has evolved a set
of conventions and procedures which are used in the posting of patches;
following them will make life much easier for everybody involved. This
document will attempt to cover these expectations in reasonable detail;
more information can also be found in the files SubmittingPatches,
SubmittingDrivers, and SubmitChecklist in the kernel documentation
more information can also be found in the files process/submitting-patches.rst,
process/submitting-drivers.rst, and process/submit-checklist.rst in the kernel documentation
directory.
@ -198,7 +198,7 @@ pass it to diff with the "-X" option.
The tags mentioned above are used to describe how various developers have
been associated with the development of this patch. They are described in
detail in the SubmittingPatches document; what follows here is a brief
detail in the process/submitting-patches.rst document; what follows here is a brief
summary. Each of these lines has the format:
::
@ -210,7 +210,7 @@ The tags in common use are:
- Signed-off-by: this is a developer's certification that he or she has
the right to submit the patch for inclusion into the kernel. It is an
agreement to the Developer's Certificate of Origin, the full text of
which can be found in Documentation/SubmittingPatches. Code without a
which can be found in Documentation/process/submitting-patches.rst. Code without a
proper signoff cannot be merged into the mainline.
- Acked-by: indicates an agreement by another developer (often a
@ -221,7 +221,7 @@ The tags in common use are:
it to work.
- Reviewed-by: the named developer has reviewed the patch for correctness;
see the reviewer's statement in Documentation/SubmittingPatches for more
see the reviewer's statement in Documentation/process/submitting-patches.rst for more
detail.
- Reported-by: names a user who reported a problem which is fixed by this
@ -248,7 +248,7 @@ take care of:
be examined in any detail. If there is any doubt at all, mail the patch
to yourself and convince yourself that it shows up intact.
Documentation/email-clients.txt has some helpful hints on making
Documentation/process/email-clients.rst has some helpful hints on making
specific mail clients work for sending patches.
- Are you sure your patch is free of silly mistakes? You should always
View File
@ -176,5 +176,3 @@ security issues, duplication of code found elsewhere, adequate
documentation, adverse effects on performance, user-space ABI changes, etc.
All types of review, if they lead to better code going into the kernel, are
welcome and worthwhile.
View File
@ -5,9 +5,9 @@ For more information
There are numerous sources of information on Linux kernel development and
related topics. First among those will always be the Documentation
directory found in the kernel source distribution. The top-level HOWTO
file is an important starting point; SubmittingPatches and
SubmittingDrivers are also something which all kernel developers should
directory found in the kernel source distribution. The top-level process/howto.rst
file is an important starting point; process/submitting-patches.rst and
process/submitting-drivers.rst are also something which all kernel developers should
read. Many internal kernel APIs are documented using the kerneldoc
mechanism; "make htmldocs" or "make pdfdocs" can be used to generate those
documents in HTML or PDF format (though the version of TeX shipped by some
View File
@ -3,7 +3,7 @@ Adding a New System Call
This document describes what's involved in adding a new system call to the
Linux kernel, over and above the normal submission advice in
Documentation/SubmittingPatches.
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`.
System Call Alternatives
@ -19,30 +19,33 @@ interface.
object, it may make more sense to create a new filesystem or device. This
also makes it easier to encapsulate the new functionality in a kernel module
rather than requiring it to be built into the main kernel.
- If the new functionality involves operations where the kernel notifies
userspace that something has happened, then returning a new file
descriptor for the relevant object allows userspace to use
poll/select/epoll to receive that notification.
- However, operations that don't map to read(2)/write(2)-like operations
have to be implemented as ioctl(2) requests, which can lead to a
somewhat opaque API.
``poll``/``select``/``epoll`` to receive that notification.
- However, operations that don't map to
:manpage:`read(2)`/:manpage:`write(2)`-like operations
have to be implemented as :manpage:`ioctl(2)` requests, which can lead
to a somewhat opaque API.
- If you're just exposing runtime system information, a new node in sysfs
(see Documentation/filesystems/sysfs.txt) or the /proc filesystem may be
more appropriate. However, access to these mechanisms requires that the
(see ``Documentation/filesystems/sysfs.txt``) or the ``/proc`` filesystem may
be more appropriate. However, access to these mechanisms requires that the
relevant filesystem is mounted, which might not always be the case (e.g.
in a namespaced/sandboxed/chrooted environment). Avoid adding any API to
debugfs, as this is not considered a 'production' interface to userspace.
- If the operation is specific to a particular file or file descriptor, then
an additional fcntl(2) command option may be more appropriate. However,
fcntl(2) is a multiplexing system call that hides a lot of complexity, so
an additional :manpage:`fcntl(2)` command option may be more appropriate. However,
:manpage:`fcntl(2)` is a multiplexing system call that hides a lot of complexity, so
this option is best for when the new function is closely analogous to
existing fcntl(2) functionality, or the new functionality is very simple
existing :manpage:`fcntl(2)` functionality, or the new functionality is very simple
(for example, getting/setting a simple flag related to a file descriptor).
- If the operation is specific to a particular task or process, then an
additional prctl(2) command option may be more appropriate. As with
fcntl(2), this system call is a complicated multiplexor so is best reserved
for near-analogs of existing prctl() commands or getting/setting a simple
flag related to a process.
additional :manpage:`prctl(2)` command option may be more appropriate. As
with :manpage:`fcntl(2)`, this system call is a complicated multiplexor so
is best reserved for near-analogs of existing ``prctl()`` commands or
getting/setting a simple flag related to a process.
Designing the API: Planning for Extension
@ -54,15 +57,16 @@ interface on the kernel mailing list, and it's important to plan for future
extensions of the interface.
(The syscall table is littered with historical examples where this wasn't done,
together with the corresponding follow-up system calls -- eventfd/eventfd2,
dup2/dup3, inotify_init/inotify_init1, pipe/pipe2, renameat/renameat2 -- so
together with the corresponding follow-up system calls --
``eventfd``/``eventfd2``, ``dup2``/``dup3``, ``inotify_init``/``inotify_init1``,
``pipe``/``pipe2``, ``renameat``/``renameat2`` -- so
learn from the history of the kernel and plan for extensions from the start.)
For simpler system calls that only take a couple of arguments, the preferred
way to allow for future extensibility is to include a flags argument to the
system call. To make sure that userspace programs can safely use flags
between kernel versions, check whether the flags value holds any unknown
flags, and reject the system call (with EINVAL) if it does:
flags, and reject the system call (with ``EINVAL``) if it does::
if (flags & ~(THING_FLAG1 | THING_FLAG2 | THING_FLAG3))
return -EINVAL;
@ -72,7 +76,7 @@ flags, and reject the system call (with EINVAL) if it does:
For more sophisticated system calls that involve a larger number of arguments,
it's preferred to encapsulate the majority of the arguments into a structure
that is passed in by pointer. Such a structure can cope with future extension
by including a size argument in the structure:
by including a size argument in the structure::
struct xyzzy_params {
u32 size; /* userspace sets p->size = sizeof(struct xyzzy_params) */
@ -81,19 +85,19 @@ by including a size argument in the structure:
u64 param_3;
};
As long as any subsequently added field, say param_4, is designed so that a
As long as any subsequently added field, say ``param_4``, is designed so that a
zero value gives the previous behaviour, then this allows both directions of
version mismatch:
- To cope with a later userspace program calling an older kernel, the kernel
code should check that any memory beyond the size of the structure that it
expects is zero (effectively checking that param_4 == 0).
expects is zero (effectively checking that ``param_4 == 0``).
- To cope with an older userspace program calling a newer kernel, the kernel
code can zero-extend a smaller instance of the structure (effectively
setting param_4 = 0).
setting ``param_4 = 0``).
See perf_event_open(2) and the perf_copy_attr() function (in
kernel/events/core.c) for an example of this approach.
See :manpage:`perf_event_open(2)` and the ``perf_copy_attr()`` function (in
``kernel/events/core.c``) for an example of this approach.
Designing the API: Other Considerations
@ -104,57 +108,60 @@ should use a file descriptor as the handle for that object -- don't invent a
new type of userspace object handle when the kernel already has mechanisms and
well-defined semantics for using file descriptors.
If your new xyzzy(2) system call does return a new file descriptor, then the
flags argument should include a value that is equivalent to setting O_CLOEXEC
on the new FD. This makes it possible for userspace to close the timing
window between xyzzy() and calling fcntl(fd, F_SETFD, FD_CLOEXEC), where an
unexpected fork() and execve() in another thread could leak a descriptor to
If your new :manpage:`xyzzy(2)` system call does return a new file descriptor,
then the flags argument should include a value that is equivalent to setting
``O_CLOEXEC`` on the new FD. This makes it possible for userspace to close
the timing window between ``xyzzy()`` and calling
``fcntl(fd, F_SETFD, FD_CLOEXEC)``, where an unexpected ``fork()`` and
``execve()`` in another thread could leak a descriptor to
the exec'ed program. (However, resist the temptation to re-use the actual value
of the O_CLOEXEC constant, as it is architecture-specific and is part of a
numbering space of O_* flags that is fairly full.)
of the ``O_CLOEXEC`` constant, as it is architecture-specific and is part of a
numbering space of ``O_*`` flags that is fairly full.)
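As a sketch only (the flag value, ``xyzzy_fops`` and the use of an anonymous
inode are illustrative assumptions, not requirements), honouring such a flag
can look roughly like this::
	#define XYZZY_CLOEXEC	0x00000001	/* hypothetical per-syscall flag */

	SYSCALL_DEFINE1(xyzzy, unsigned int, flags)
	{
		int fd_flags = 0;

		if (flags & ~XYZZY_CLOEXEC)
			return -EINVAL;
		if (flags & XYZZY_CLOEXEC)
			fd_flags |= O_CLOEXEC;

		/*
		 * Install the new object as a file descriptor; close-on-exec is
		 * applied atomically, with no window for another thread's exec.
		 */
		return anon_inode_getfd("[xyzzy]", &xyzzy_fops, NULL, fd_flags);
	}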
If your system call returns a new file descriptor, you should also consider
what it means to use the poll(2) family of system calls on that file
what it means to use the :manpage:`poll(2)` family of system calls on that file
descriptor. Making a file descriptor ready for reading or writing is the
normal way for the kernel to indicate to userspace that an event has
occurred on the corresponding kernel object.
If your new xyzzy(2) system call involves a filename argument:
If your new :manpage:`xyzzy(2)` system call involves a filename argument::
int sys_xyzzy(const char __user *path, ..., unsigned int flags);
you should also consider whether an xyzzyat(2) version is more appropriate:
you should also consider whether an :manpage:`xyzzyat(2)` version is more appropriate::
int sys_xyzzyat(int dfd, const char __user *path, ..., unsigned int flags);
This allows more flexibility for how userspace specifies the file in question;
in particular it allows userspace to request the functionality for an
already-opened file descriptor using the AT_EMPTY_PATH flag, effectively giving
an fxyzzy(3) operation for free:
already-opened file descriptor using the ``AT_EMPTY_PATH`` flag, effectively
giving an :manpage:`fxyzzy(3)` operation for free::
- xyzzyat(AT_FDCWD, path, ..., 0) is equivalent to xyzzy(path,...)
- xyzzyat(fd, "", ..., AT_EMPTY_PATH) is equivalent to fxyzzy(fd, ...)
(For more details on the rationale of the *at() calls, see the openat(2) man
page; for an example of AT_EMPTY_PATH, see the fstatat(2) man page.)
(For more details on the rationale of the \*at() calls, see the
:manpage:`openat(2)` man page; for an example of AT_EMPTY_PATH, see the
:manpage:`fstatat(2)` man page.)
If your new xyzzy(2) system call involves a parameter describing an offset
within a file, make its type loff_t so that 64-bit offsets can be supported
even on 32-bit architectures.
If your new :manpage:`xyzzy(2)` system call involves a parameter describing an
offset within a file, make its type ``loff_t`` so that 64-bit offsets can be
supported even on 32-bit architectures.
If your new xyzzy(2) system call involves privileged functionality, it needs
to be governed by the appropriate Linux capability bit (checked with a call to
capable()), as described in the capabilities(7) man page. Choose an existing
capability bit that governs related functionality, but try to avoid combining
lots of only vaguely related functions together under the same bit, as this
goes against capabilities' purpose of splitting the power of root. In
particular, avoid adding new uses of the already overly-general CAP_SYS_ADMIN
capability.
If your new :manpage:`xyzzy(2)` system call involves privileged functionality,
it needs to be governed by the appropriate Linux capability bit (checked with
a call to ``capable()``), as described in the :manpage:`capabilities(7)` man
page. Choose an existing capability bit that governs related functionality,
but try to avoid combining lots of only vaguely related functions together
under the same bit, as this goes against capabilities' purpose of splitting
the power of root. In particular, avoid adding new uses of the already
overly-general ``CAP_SYS_ADMIN`` capability.
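A minimal sketch of such a check at the top of the implementation (the choice
of ``CAP_NET_ADMIN`` is purely an example; pick whichever existing capability
actually matches the functionality)::
	if (!capable(CAP_NET_ADMIN))
		return -EPERM;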
If your new xyzzy(2) system call manipulates a process other than the calling
process, it should be restricted (using a call to ptrace_may_access()) so that
only a calling process with the same permissions as the target process, or
with the necessary capabilities, can manipulate the target process.
If your new :manpage:`xyzzy(2)` system call manipulates a process other than
the calling process, it should be restricted (using a call to
``ptrace_may_access()``) so that only a calling process with the same
permissions as the target process, or with the necessary capabilities, can
manipulate the target process.
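A hedged sketch of that restriction, assuming the target is identified by a
PID argument (the surrounding lookup code and the choice of
``PTRACE_MODE_ATTACH_REALCREDS`` are illustrative)::
	struct task_struct *task;
	int ret;

	rcu_read_lock();
	task = find_task_by_vpid(pid);
	if (task)
		get_task_struct(task);
	rcu_read_unlock();
	if (!task)
		return -ESRCH;

	/* Only callers allowed to ptrace the target may manipulate it. */
	if (!ptrace_may_access(task, PTRACE_MODE_ATTACH_REALCREDS)) {
		ret = -EPERM;
		goto out_put;
	}

	/* ... operate on the target task ... */
	ret = 0;
out_put:
	put_task_struct(task);
	return ret;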
Finally, be aware that some non-x86 architectures have an easier time if
system call parameters that are explicitly 64-bit fall on odd-numbered
@ -175,7 +182,7 @@ distinct commits (each of which is described further below):
- Wiring up of the new system call for one particular architecture, usually
x86 (including all of x86_64, x86_32 and x32).
- A demonstration of the use of the new system call in userspace via a
selftest in tools/testing/selftests/.
selftest in ``tools/testing/selftests/``.
- A draft man-page for the new system call, either as plain text in the
cover letter, or as a patch to the (separate) man-pages repository.
@ -186,24 +193,24 @@ be cc'ed to linux-api@vger.kernel.org.
Generic System Call Implementation
----------------------------------
The main entry point for your new xyzzy(2) system call will be called
sys_xyzzy(), but you add this entry point with the appropriate
SYSCALL_DEFINEn() macro rather than explicitly. The 'n' indicates the number
of arguments to the system call, and the macro takes the system call name
The main entry point for your new :manpage:`xyzzy(2)` system call will be called
``sys_xyzzy()``, but you add this entry point with the appropriate
``SYSCALL_DEFINEn()`` macro rather than explicitly. The 'n' indicates the
number of arguments to the system call, and the macro takes the system call name
followed by the (type, name) pairs for the parameters as arguments. Using
this macro allows metadata about the new system call to be made available for
other tools.
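For illustration only (the ``xyzzy`` call and its ``XYZZY_VALID_FLAGS`` mask
are hypothetical, as elsewhere in this document), a two-argument entry point
defined this way might look like::
	SYSCALL_DEFINE2(xyzzy, const char __user *, path, unsigned int, flags)
	{
		/* Reject any flag bits this kernel does not know about (see above). */
		if (flags & ~XYZZY_VALID_FLAGS)
			return -EINVAL;

		/* ... the real work of the system call goes here ... */
		return 0;
	}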
The new entry point also needs a corresponding function prototype, in
include/linux/syscalls.h, marked as asmlinkage to match the way that system
calls are invoked:
``include/linux/syscalls.h``, marked as asmlinkage to match the way that system
calls are invoked::
asmlinkage long sys_xyzzy(...);
Some architectures (e.g. x86) have their own architecture-specific syscall
tables, but several other architectures share a generic syscall table. Add your
new system call to the generic list by adding an entry to the list in
include/uapi/asm-generic/unistd.h:
``include/uapi/asm-generic/unistd.h``::
#define __NR_xyzzy 292
__SYSCALL(__NR_xyzzy, sys_xyzzy)
@ -212,30 +219,30 @@ Also update the __NR_syscalls count to reflect the additional system call, and
note that if multiple new system calls are added in the same merge window,
your new syscall number may get adjusted to resolve conflicts.
The file kernel/sys_ni.c provides a fallback stub implementation of each system
call, returning -ENOSYS. Add your new system call here too:
The file ``kernel/sys_ni.c`` provides a fallback stub implementation of each
system call, returning ``-ENOSYS``. Add your new system call here too::
cond_syscall(sys_xyzzy);
Your new kernel functionality, and the system call that controls it, should
normally be optional, so add a CONFIG option (typically to init/Kconfig) for
it. As usual for new CONFIG options:
normally be optional, so add a ``CONFIG`` option (typically to
``init/Kconfig``) for it. As usual for new ``CONFIG`` options:
- Include a description of the new functionality and system call controlled
by the option.
- Make the option depend on EXPERT if it should be hidden from normal users.
- Make any new source files implementing the function dependent on the CONFIG
option in the Makefile (e.g. "obj-$(CONFIG_XYZZY_SYSCALL) += xyzzy.c").
option in the Makefile (e.g. ``obj-$(CONFIG_XYZZY_SYSCALL) += xyzzy.c``).
- Double check that the kernel still builds with the new CONFIG option turned
off.
To summarize, you need a commit that includes:
- CONFIG option for the new function, normally in init/Kconfig
- SYSCALL_DEFINEn(xyzzy, ...) for the entry point
- corresponding prototype in include/linux/syscalls.h
- generic table entry in include/uapi/asm-generic/unistd.h
- fallback stub in kernel/sys_ni.c
- ``CONFIG`` option for the new function, normally in ``init/Kconfig``
- ``SYSCALL_DEFINEn(xyzzy, ...)`` for the entry point
- corresponding prototype in ``include/linux/syscalls.h``
- generic table entry in ``include/uapi/asm-generic/unistd.h``
- fallback stub in ``kernel/sys_ni.c``
x86 System Call Implementation
@ -244,11 +251,11 @@ x86 System Call Implementation
To wire up your new system call for x86 platforms, you need to update the
master syscall tables. Assuming your new system call isn't special in some
way (see below), this involves a "common" entry (for x86_64 and x32) in
arch/x86/entry/syscalls/syscall_64.tbl:
arch/x86/entry/syscalls/syscall_64.tbl::
333 common xyzzy sys_xyzzy
and an "i386" entry in arch/x86/entry/syscalls/syscall_32.tbl:
and an "i386" entry in ``arch/x86/entry/syscalls/syscall_32.tbl``::
380 i386 xyzzy sys_xyzzy
@ -267,48 +274,49 @@ However, there are a couple of situations where a compatibility layer is
needed to cope with size differences between 32-bit and 64-bit.
The first is if the 64-bit kernel also supports 32-bit userspace programs, and
so needs to parse areas of (__user) memory that could hold either 32-bit or
so needs to parse areas of (``__user``) memory that could hold either 32-bit or
64-bit values. In particular, this is needed whenever a system call argument
is:
- a pointer to a pointer
- a pointer to a struct containing a pointer (e.g. struct iovec __user *)
- a pointer to a varying sized integral type (time_t, off_t, long, ...)
- a pointer to a struct containing a pointer (e.g. ``struct iovec __user *``)
- a pointer to a varying sized integral type (``time_t``, ``off_t``,
``long``, ...)
- a pointer to a struct containing a varying sized integral type.
The second situation that requires a compatibility layer is if one of the
system call's arguments has a type that is explicitly 64-bit even on a 32-bit
architecture, for example loff_t or __u64. In this case, a value that arrives
at a 64-bit kernel from a 32-bit application will be split into two 32-bit
values, which then need to be re-assembled in the compatibility layer.
architecture, for example ``loff_t`` or ``__u64``. In this case, a value that
arrives at a 64-bit kernel from a 32-bit application will be split into two
32-bit values, which then need to be re-assembled in the compatibility layer.
(Note that a system call argument that's a pointer to an explicit 64-bit type
does *not* need a compatibility layer; for example, splice(2)'s arguments of
type loff_t __user * do not trigger the need for a compat_ system call.)
does **not** need a compatibility layer; for example, :manpage:`splice(2)`'s arguments of
type ``loff_t __user *`` do not trigger the need for a ``compat_`` system call.)
The compatibility version of the system call is called compat_sys_xyzzy(), and
is added with the COMPAT_SYSCALL_DEFINEn() macro, analogously to
The compatibility version of the system call is called ``compat_sys_xyzzy()``,
and is added with the ``COMPAT_SYSCALL_DEFINEn()`` macro, analogously to
SYSCALL_DEFINEn. This version of the implementation runs as part of a 64-bit
kernel, but expects to receive 32-bit parameter values and does whatever is
needed to deal with them. (Typically, the compat_sys_ version converts the
values to 64-bit versions and either calls on to the sys_ version, or both of
needed to deal with them. (Typically, the ``compat_sys_`` version converts the
values to 64-bit versions and either calls on to the ``sys_`` version, or both of
them call a common inner implementation function.)
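As a hedged illustration of that pattern (the call and its argument list are
still hypothetical), a compat wrapper might re-assemble a 64-bit value from
two 32-bit halves and then defer to a common inner function shared with the
native entry point; note that the order of the two halves is ABI-specific::

	static long do_xyzzy(const char __user *pathname, loff_t offset)
	{
		/* common implementation used by sys_xyzzy() and compat_sys_xyzzy() */
		return 0;
	}

	COMPAT_SYSCALL_DEFINE3(xyzzy, const char __user *, pathname,
			       u32, offset_low, u32, offset_high)
	{
		return do_xyzzy(pathname,
				((loff_t)offset_high << 32) | offset_low);
	}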
The compat entry point also needs a corresponding function prototype, in
include/linux/compat.h, marked as asmlinkage to match the way that system
calls are invoked:
``include/linux/compat.h``, marked as ``asmlinkage`` to match the way that system
calls are invoked::
asmlinkage long compat_sys_xyzzy(...);
If the system call involves a structure that is laid out differently on 32-bit
and 64-bit systems, say struct xyzzy_args, then the include/linux/compat.h
header file should also include a compat version of the structure (struct
compat_xyzzy_args) where each variable-size field has the appropriate compat_
type that corresponds to the type in struct xyzzy_args. The
compat_sys_xyzzy() routine can then use this compat_ structure to parse the
arguments from a 32-bit invocation.
and 64-bit systems, say ``struct xyzzy_args``, then the ``include/linux/compat.h``
header file should also include a compat version of the structure (``struct
compat_xyzzy_args``) where each variable-size field has the appropriate
``compat_`` type that corresponds to the type in ``struct xyzzy_args``. The
``compat_sys_xyzzy()`` routine can then use this ``compat_`` structure to
parse the arguments from a 32-bit invocation.
For example, if there are fields:
For example, if there are fields::
struct xyzzy_args {
const char __user *ptr;
@ -317,7 +325,7 @@ For example, if there are fields:
/* ... */
};
in struct xyzzy_args, then struct compat_xyzzy_args would have:
in ``struct xyzzy_args``, then ``struct compat_xyzzy_args`` would have::
struct compat_xyzzy_args {
compat_uptr_t ptr;
@ -327,18 +335,19 @@ in struct xyzzy_args, then struct compat_xyzzy_args would have:
};
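The body of the compat handler can then copy the 32-bit layout in from
userspace and widen the fields before handing off to the shared
implementation; a possible sketch, again hypothetical::

	COMPAT_SYSCALL_DEFINE1(xyzzy, struct compat_xyzzy_args __user *, uargs)
	{
		struct compat_xyzzy_args cargs;
		struct xyzzy_args args;

		if (copy_from_user(&cargs, uargs, sizeof(cargs)))
			return -EFAULT;
		args.ptr = compat_ptr(cargs.ptr);
		/* ... widen the remaining fields, then call the common helper ... */
		return 0;
	}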
The generic system call list also needs adjusting to allow for the compat
version; the entry in include/uapi/asm-generic/unistd.h should use
__SC_COMP rather than __SYSCALL:
version; the entry in ``include/uapi/asm-generic/unistd.h`` should use
``__SC_COMP`` rather than ``__SYSCALL``::
#define __NR_xyzzy 292
__SC_COMP(__NR_xyzzy, sys_xyzzy, compat_sys_xyzzy)
To summarize, you need:
- a COMPAT_SYSCALL_DEFINEn(xyzzy, ...) for the compat entry point
- corresponding prototype in include/linux/compat.h
- (if needed) 32-bit mapping struct in include/linux/compat.h
- instance of __SC_COMP not __SYSCALL in include/uapi/asm-generic/unistd.h
- a ``COMPAT_SYSCALL_DEFINEn(xyzzy, ...)`` for the compat entry point
- corresponding prototype in ``include/linux/compat.h``
- (if needed) 32-bit mapping struct in ``include/linux/compat.h``
- instance of ``__SC_COMP`` not ``__SYSCALL`` in
``include/uapi/asm-generic/unistd.h``
Compatibility System Calls (x86)
@ -347,9 +356,9 @@ Compatibility System Calls (x86)
To wire up the x86 architecture of a system call with a compatibility version,
the entries in the syscall tables need to be adjusted.
First, the entry in arch/x86/entry/syscalls/syscall_32.tbl gets an extra
First, the entry in ``arch/x86/entry/syscalls/syscall_32.tbl`` gets an extra
column to indicate that a 32-bit userspace program running on a 64-bit kernel
should hit the compat entry point:
should hit the compat entry point::
380 i386 xyzzy sys_xyzzy compat_sys_xyzzy
@ -359,8 +368,8 @@ should either match the 64-bit version or the 32-bit version.
If there's a pointer-to-a-pointer involved, the decision is easy: x32 is
ILP32, so the layout should match the 32-bit version, and the entry in
arch/x86/entry/syscalls/syscall_64.tbl is split so that x32 programs hit the
compatibility wrapper:
``arch/x86/entry/syscalls/syscall_64.tbl`` is split so that x32 programs hit
the compatibility wrapper::
333 64 xyzzy sys_xyzzy
...
@ -384,8 +393,9 @@ stack the same and most of the registers the same as before the system call,
and with the same virtual memory space.
However, a few system calls do things differently. They might return to a
different location (rt_sigreturn) or change the memory space (fork/vfork/clone)
or even architecture (execve/execveat) of the program.
different location (``rt_sigreturn``) or change the memory space
(``fork``/``vfork``/``clone``) or even architecture (``execve``/``execveat``)
of the program.
To allow for this, the kernel implementation of the system call may need to
save and restore additional registers to the kernel stack, allowing complete
@ -395,31 +405,31 @@ This is arch-specific, but typically involves defining assembly entry points
that save/restore additional registers and invoke the real system call entry
point.
For x86_64, this is implemented as a stub_xyzzy entry point in
arch/x86/entry/entry_64.S, and the entry in the syscall table
(arch/x86/entry/syscalls/syscall_64.tbl) is adjusted to match:
For x86_64, this is implemented as a ``stub_xyzzy`` entry point in
``arch/x86/entry/entry_64.S``, and the entry in the syscall table
(``arch/x86/entry/syscalls/syscall_64.tbl``) is adjusted to match::
333 common xyzzy stub_xyzzy
The equivalent for 32-bit programs running on a 64-bit kernel is normally
called stub32_xyzzy and implemented in arch/x86/entry/entry_64_compat.S,
called ``stub32_xyzzy`` and implemented in ``arch/x86/entry/entry_64_compat.S``,
with the corresponding syscall table adjustment in
arch/x86/entry/syscalls/syscall_32.tbl:
``arch/x86/entry/syscalls/syscall_32.tbl``::
380 i386 xyzzy sys_xyzzy stub32_xyzzy
If the system call needs a compatibility layer (as in the previous section)
then the stub32_ version needs to call on to the compat_sys_ version of the
system call rather than the native 64-bit version. Also, if the x32 ABI
then the ``stub32_`` version needs to call on to the ``compat_sys_`` version
of the system call rather than the native 64-bit version. Also, if the x32 ABI
implementation is not common with the x86_64 version, then its syscall
table will also need to invoke a stub that calls on to the compat_sys_
table will also need to invoke a stub that calls on to the ``compat_sys_``
version.
For completeness, it's also nice to set up a mapping so that user-mode Linux
still works -- its syscall table will reference stub_xyzzy, but the UML build
doesn't include arch/x86/entry/entry_64.S implementation (because UML
doesn't include the ``arch/x86/entry/entry_64.S`` implementation (because UML
simulates registers etc.). Fixing this is as simple as adding a ``#define`` to
arch/x86/um/sys_call_table_64.c:
``arch/x86/um/sys_call_table_64.c``::
#define stub_xyzzy sys_xyzzy
@ -432,9 +442,9 @@ occasional exception that may need updating for your particular system call.
The audit subsystem is one such special case; it includes (arch-specific)
functions that classify some special types of system call -- specifically
file open (open/openat), program execution (execve/exeveat) or socket
multiplexor (socketcall) operations. If your new system call is analogous to
one of these, then the audit system should be updated.
file open (``open``/``openat``), program execution (``execve``/``execveat``) or
socket multiplexor (``socketcall``) operations. If your new system call is
analogous to one of these, then the audit system should be updated.
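The exact hooks vary by architecture and kernel version, but as a rough,
hedged sketch: the classifier in ``lib/audit.c`` (and its arch-specific
counterparts) is a switch over syscall numbers, so an ``openat``-like call
would need an extra case mirroring the existing ``openat`` classification
(the ``__NR_xyzzy`` case and the return value shown are illustrative only)::

	int audit_classify_syscall(int abi, unsigned syscall)
	{
		switch (syscall) {
	#ifdef __NR_openat
		case __NR_openat:
			return 3;
	#endif
	#ifdef __NR_xyzzy		/* hypothetical new openat-like call */
		case __NR_xyzzy:
			return 3;	/* mirror whatever the analogous call returns */
	#endif
		default:
			return 0;
		}
	}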
More generally, if there is an existing system call that is analogous to your
new system call, it's worth doing a kernel-wide grep for the existing system
@ -447,10 +457,10 @@ Testing
A new system call should obviously be tested; it is also useful to provide
reviewers with a demonstration of how user space programs will use the system
call. A good way to combine these aims is to include a simple self-test
program in a new directory under tools/testing/selftests/.
program in a new directory under ``tools/testing/selftests/``.
For a new system call, there will obviously be no libc wrapper function and so
the test will need to invoke it using syscall(); also, if the system call
the test will need to invoke it using ``syscall()``; also, if the system call
involves a new userspace-visible structure, the corresponding header will need
to be installed to compile the test.
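For illustration, a minimal (hypothetical) test program could invoke the new
call directly via :manpage:`syscall(2)`; everything below other than the
``syscall()`` wrapper itself is made up for this example::

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	#ifndef __NR_xyzzy
	#define __NR_xyzzy 292		/* hypothetical number; use the real one */
	#endif

	int main(void)
	{
		long ret = syscall(__NR_xyzzy, "/tmp/testfile", 0);

		if (ret < 0) {
			perror("xyzzy");
			return 1;
		}
		printf("xyzzy() returned %ld\n", ret);
		return 0;
	}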
@ -461,6 +471,7 @@ and x32 (-mx32) ABI program.
For more extensive and thorough testing of new functionality, you should also
consider adding tests to the Linux Test Project, or to the xfstests project
for filesystem-related changes.
- https://linux-test-project.github.io/
- git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
@ -487,12 +498,14 @@ References and Sources
arguments: https://lwn.net/Articles/311630/
- Pair of LWN articles from David Drysdale that describe the system call
implementation paths in detail for v3.14:
- https://lwn.net/Articles/604287/
- https://lwn.net/Articles/604515/
- Architecture-specific requirements for system calls are discussed in the
syscall(2) man-page:
:manpage:`syscall(2)` man-page:
http://man7.org/linux/man-pages/man2/syscall.2.html#NOTES
- Collated emails from Linus Torvalds discussing the problems with ioctl():
- Collated emails from Linus Torvalds discussing the problems with ``ioctl()``:
http://yarchive.net/comp/linux/ioctl.html
- "How to not invent kernel interfaces", Arnd Bergmann,
http://www.ukuug.org/events/linux2007/2007/papers/Bergmann.pdf
@ -507,17 +520,19 @@ References and Sources
commit: https://lkml.org/lkml/2014/11/19/254
- Suggestion from Greg Kroah-Hartman that it's good for new system calls to
come with a man-page & selftest: https://lkml.org/lkml/2014/3/19/710
- Discussion from Michael Kerrisk of new system call vs. prctl(2) extension:
- Discussion from Michael Kerrisk of new system call vs. :manpage:`prctl(2)` extension:
https://lkml.org/lkml/2014/6/3/411
- Suggestion from Ingo Molnar that system calls that involve multiple
arguments should encapsulate those arguments in a struct, which includes a
size field for future extensibility: https://lkml.org/lkml/2015/7/30/117
- Numbering oddities arising from (re-)use of O_* numbering space flags:
- commit 75069f2b5bfb ("vfs: renumber FMODE_NONOTIFY and add to uniqueness
check")
- commit 12ed2e36c98a ("fanotify: FMODE_NONOTIFY and __O_SYNC in sparc
conflict")
- commit bb458c644a59 ("Safer ABI for O_TMPFILE")
- Discussion from Matthew Wilcox about restrictions on 64-bit arguments:
https://lkml.org/lkml/2008/12/12/187
- Recommendation from Greg Kroah-Hartman that unknown flags should be

View File

@ -427,7 +427,7 @@ The -mm patches are experimental patches released by Andrew Morton.
In the past, the -mm tree was also used to test subsystem patches, but this
function is now done via the
:ref:`linux-next <https://www.kernel.org/doc/man-pages/linux-next.html>`
`linux-next <https://www.kernel.org/doc/man-pages/linux-next.html>`_
tree. The subsystem maintainers push their patches first to linux-next,
and, during the merge window, send them directly to Linus.
@ -462,4 +462,3 @@ the kernel.
Thank you's to Randy Dunlap, Rolf Eike Beer, Linus Torvalds, Bodo Eggert,
Johannes Stezenbach, Grant Coady, Pavel Machek and others that I may have
forgotten for their reviews and contributions to this document.

View File

@ -19,7 +19,8 @@ please contact the Linux Foundation's Technical Advisory Board at
will work to resolve the issue to the best of their ability. For more
information on who is on the Technical Advisory Board and what their
role is, please see:
http://www.linuxfoundation.org/projects/linux/tab
- http://www.linuxfoundation.org/projects/linux/tab
As a reviewer of code, please strive to keep things civil and focused on
the technical issues involved. We are all humans, and frustrations can

File diff suppressed because it is too large

View File

@ -5,6 +5,6 @@ project = 'Linux Kernel Development Documentation'
tags.add("subproject")
latex_documents = [
('index', 'development-process.tex', 'Linux Kernel Development Documentation',
('index', 'process.tex', 'Linux Kernel Development Documentation',
'The kernel development community', 'manual'),
]

View File

@ -26,4 +26,3 @@ development (or, indeed, free software development in general). While
there is some technical material here, this is very much a process-oriented
discussion which does not require a deep knowledge of kernel programming to
understand.

View File

@ -90,19 +90,19 @@ required reading:
what is necessary to do to configure and build the kernel. People
who are new to the kernel should start here.
:ref:`Documentation/Changes <changes>`
:ref:`Documentation/process/changes.rst <changes>`
This file gives a list of the minimum levels of various software
packages that are necessary to build and run the kernel
successfully.
:ref:`Documentation/CodingStyle <codingstyle>`
:ref:`Documentation/process/coding-style.rst <codingstyle>`
This describes the Linux kernel coding style, and some of the
rationale behind it. All new code is expected to follow the
guidelines in this document. Most maintainers will only accept
patches if these rules are followed, and many people will only
review code if it is in the proper style.
:ref:`Documentation/SubmittingPatches <submittingpatches>` and :ref:`Documentation/SubmittingDrivers <submittingdrivers>`
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>` and :ref:`Documentation/process/submitting-drivers.rst <submittingdrivers>`
These files describe in explicit detail how to successfully create
and send a patch, including (but not limited to):
@ -122,7 +122,7 @@ required reading:
"Linux kernel patch submission format"
http://linux.yyz.us/patch-format.html
:ref:`Documentation/stable_api_nonsense.txt <stable_api_nonsense>`
:ref:`Documentation/process/stable-api-nonsense.rst <stable_api_nonsense>`
This file describes the rationale behind the conscious decision to
not have a stable API within the kernel, including things like:
@ -135,29 +135,29 @@ required reading:
philosophy and is very important for people moving to Linux from
development on other Operating Systems.
:ref:`Documentation/SecurityBugs <securitybugs>`
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`
If you feel you have found a security problem in the Linux kernel,
please follow the steps in this document to help notify the kernel
developers, and help solve the issue.
:ref:`Documentation/ManagementStyle <managementstyle>`
:ref:`Documentation/process/management-style.rst <managementstyle>`
This document describes how Linux kernel maintainers operate and the
shared ethos behind their methodologies. This is important reading
for anyone new to kernel development (or anyone simply curious about
it), as it resolves a lot of common misconceptions and confusion
about the unique behavior of kernel maintainers.
:ref:`Documentation/stable_kernel_rules.txt <stable_kernel_rules>`
:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
This file describes the rules on how the stable kernel releases
happen, and what to do if you want to get a change into one of these
releases.
:ref:`Documentation/kernel-docs.txt <kernel_docs>`
:ref:`Documentation/process/kernel-docs.rst <kernel_docs>`
A list of external documentation that pertains to kernel
development. Please consult this list if you do not find what you
are looking for within the in-kernel documentation.
:ref:`Documentation/applying-patches.txt <applying_patches>`
:ref:`Documentation/process/applying-patches.rst <applying_patches>`
A good introduction describing exactly what a patch is and how to
apply it to the different development branches of the kernel.
@ -307,7 +307,7 @@ two weeks, but it can be longer if there are no pressing problems. A
security-related problem, instead, can cause a release to happen almost
instantly.
The file Documentation/stable_kernel_rules.txt in the kernel tree
The file Documentation/process/stable-kernel-rules.rst in the kernel tree
documents what kinds of changes are acceptable for the -stable tree, and
how the release process works.
@ -366,7 +366,7 @@ tool. For details on how to use the kernel bugzilla, please see:
https://bugzilla.kernel.org/page.cgi?id=faq.html
The file REPORTING-BUGS in the main kernel source directory has a good
The file Documentation/admin-guide/reporting-bugs.rst in the kernel source tree has a good
template for how to report a possible kernel bug, and details what kind
of information is needed by the kernel developers to help track down the
problem.
@ -440,7 +440,7 @@ add your statements between the individual quoted sections instead of
writing at the top of the mail.
If you add patches to your mail, make sure they are plain readable text
as stated in Documentation/SubmittingPatches.
as stated in Documentation/process/submitting-patches.rst.
Kernel developers don't want to deal with
attachments or compressed patches; they may want to comment on
individual lines of your patch, which works only that way. Make sure you

View File

@ -0,0 +1,32 @@
.. raw:: latex
\renewcommand\thesection*
\renewcommand\thesubsection*
Linux Kernel Development Documentation
======================================
Contents:
.. toctree::
:maxdepth: 2
howto
changes
coding-style
submitting-patches
submitting-drivers
stable-api-nonsense
management-style
stable-kernel-rules
kernel-docs
applying-patches
email-clients
submit-checklist
code-of-conflict
adding-syscalls
magic-number
volatile-considered-harmful
development-process

View File

@ -0,0 +1,164 @@
Linux magic numbers
===================
This file is a registry of magic numbers which are in use. When you
add a magic number to a structure, you should also add it to this
file, since it is best if the magic numbers used by various structures
are unique.
It is a **very** good idea to protect kernel data structures with magic
numbers. This allows you to check at run time whether (a) a structure
has been clobbered, or (b) you've passed the wrong structure to a
routine. This last is especially useful --- particularly when you are
passing pointers to structures via a void * pointer. The tty code,
for example, does this frequently to pass driver-specific and line
discipline-specific structures back and forth.
The way to use magic numbers is to declare them at the beginning of
the structure, like so::
struct tty_ldisc {
int magic;
...
};
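A routine that receives such a structure can then sanity-check the magic
value before trusting the rest of the contents; a minimal sketch (the names
and the magic value here are hypothetical)::

	#define XYZZY_MAGIC 0x58595a5a		/* hypothetical; register it in this file */

	struct xyzzy_state {
		int magic;
		/* ... */
	};

	static int xyzzy_do_something(void *arg)
	{
		struct xyzzy_state *state = arg;

		if (state->magic != XYZZY_MAGIC) {
			printk(KERN_ERR "xyzzy: bad magic, clobbered or wrong structure?\n");
			return -EINVAL;
		}
		/* ... safe to use the structure ... */
		return 0;
	}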
Please follow this discipline when you are adding future enhancements
to the kernel! It has saved me countless hours of debugging,
especially in the screwy cases where an array has been overrun and
structures following the array have been overwritten. Using this
discipline, these cases get detected quickly and safely.
Changelog::
Theodore Ts'o
31 Mar 94
The magic table is current to Linux 2.1.55.
Michael Chastain
<mailto:mec@shout.net>
22 Sep 1997
Now it should be up to date with Linux 2.1.112. Because
we are in feature freeze time it is very unlikely that
something will change before 2.2.x. The entries are
sorted by number field.
Krzysztof G. Baranowski
<mailto: kgb@knm.org.pl>
29 Jul 1998
Updated the magic table to Linux 2.5.45. Right over the feature freeze,
but it is possible that some new magic numbers will sneak into the
kernel before 2.6.x yet.
Petr Baudis
<pasky@ucw.cz>
03 Nov 2002
Updated the magic table to Linux 2.5.74.
Fabian Frederick
<ffrederick@users.sourceforge.net>
09 Jul 2003
===================== ================ ======================== ==========================================
Magic Name Number Structure File
===================== ================ ======================== ==========================================
PG_MAGIC 'P' pg_{read,write}_hdr ``include/linux/pg.h``
CMAGIC 0x0111 user ``include/linux/a.out.h``
MKISS_DRIVER_MAGIC 0x04bf mkiss_channel ``drivers/net/mkiss.h``
HDLC_MAGIC 0x239e n_hdlc ``drivers/char/n_hdlc.c``
APM_BIOS_MAGIC 0x4101 apm_user ``arch/x86/kernel/apm_32.c``
CYCLADES_MAGIC 0x4359 cyclades_port ``include/linux/cyclades.h``
DB_MAGIC 0x4442 fc_info ``drivers/net/iph5526_novram.c``
DL_MAGIC 0x444d fc_info ``drivers/net/iph5526_novram.c``
FASYNC_MAGIC 0x4601 fasync_struct ``include/linux/fs.h``
FF_MAGIC 0x4646 fc_info ``drivers/net/iph5526_novram.c``
ISICOM_MAGIC 0x4d54 isi_port ``include/linux/isicom.h``
PTY_MAGIC 0x5001 ``drivers/char/pty.c``
PPP_MAGIC 0x5002 ppp ``include/linux/if_pppvar.h``
SERIAL_MAGIC 0x5301 async_struct ``include/linux/serial.h``
SSTATE_MAGIC 0x5302 serial_state ``include/linux/serial.h``
SLIP_MAGIC 0x5302 slip ``drivers/net/slip.h``
STRIP_MAGIC 0x5303 strip ``drivers/net/strip.c``
X25_ASY_MAGIC 0x5303 x25_asy ``drivers/net/x25_asy.h``
SIXPACK_MAGIC 0x5304 sixpack ``drivers/net/hamradio/6pack.h``
AX25_MAGIC 0x5316 ax_disp ``drivers/net/mkiss.h``
TTY_MAGIC 0x5401 tty_struct ``include/linux/tty.h``
MGSL_MAGIC 0x5401 mgsl_info ``drivers/char/synclink.c``
TTY_DRIVER_MAGIC 0x5402 tty_driver ``include/linux/tty_driver.h``
MGSLPC_MAGIC 0x5402 mgslpc_info ``drivers/char/pcmcia/synclink_cs.c``
TTY_LDISC_MAGIC 0x5403 tty_ldisc ``include/linux/tty_ldisc.h``
USB_SERIAL_MAGIC 0x6702 usb_serial ``drivers/usb/serial/usb-serial.h``
FULL_DUPLEX_MAGIC 0x6969 ``drivers/net/ethernet/dec/tulip/de2104x.c``
USB_BLUETOOTH_MAGIC 0x6d02 usb_bluetooth ``drivers/usb/class/bluetty.c``
RFCOMM_TTY_MAGIC 0x6d02 ``net/bluetooth/rfcomm/tty.c``
USB_SERIAL_PORT_MAGIC 0x7301 usb_serial_port ``drivers/usb/serial/usb-serial.h``
CG_MAGIC 0x00090255 ufs_cylinder_group ``include/linux/ufs_fs.h``
RPORT_MAGIC 0x00525001 r_port ``drivers/char/rocket_int.h``
LSEMAGIC 0x05091998 lse ``drivers/fc4/fc.c``
GDTIOCTL_MAGIC 0x06030f07 gdth_iowr_str ``drivers/scsi/gdth_ioctl.h``
RIEBL_MAGIC 0x09051990 ``drivers/net/atarilance.c``
NBD_REQUEST_MAGIC 0x12560953 nbd_request ``include/linux/nbd.h``
RED_MAGIC2 0x170fc2a5 (any) ``mm/slab.c``
BAYCOM_MAGIC 0x19730510 baycom_state ``drivers/net/baycom_epp.c``
ISDN_X25IFACE_MAGIC 0x1e75a2b9 isdn_x25iface_proto_data ``drivers/isdn/isdn_x25iface.h``
ECP_MAGIC 0x21504345 cdkecpsig ``include/linux/cdk.h``
LSOMAGIC 0x27091997 lso ``drivers/fc4/fc.c``
LSMAGIC 0x2a3b4d2a ls ``drivers/fc4/fc.c``
WANPIPE_MAGIC 0x414C4453 sdla_{dump,exec} ``include/linux/wanpipe.h``
CS_CARD_MAGIC 0x43525553 cs_card ``sound/oss/cs46xx.c``
LABELCL_MAGIC 0x4857434c labelcl_info_s ``include/asm/ia64/sn/labelcl.h``
ISDN_ASYNC_MAGIC 0x49344C01 modem_info ``include/linux/isdn.h``
CTC_ASYNC_MAGIC 0x49344C01 ctc_tty_info ``drivers/s390/net/ctctty.c``
ISDN_NET_MAGIC 0x49344C02 isdn_net_local_s ``drivers/isdn/i4l/isdn_net_lib.h``
SAVEKMSG_MAGIC2 0x4B4D5347 savekmsg ``arch/*/amiga/config.c``
CS_STATE_MAGIC 0x4c4f4749 cs_state ``sound/oss/cs46xx.c``
SLAB_C_MAGIC 0x4f17a36d kmem_cache ``mm/slab.c``
COW_MAGIC 0x4f4f4f4d cow_header_v1 ``arch/um/drivers/ubd_user.c``
I810_CARD_MAGIC 0x5072696E i810_card ``sound/oss/i810_audio.c``
TRIDENT_CARD_MAGIC 0x5072696E trident_card ``sound/oss/trident.c``
ROUTER_MAGIC 0x524d4157 wan_device [in ``wanrouter.h`` pre 3.9]
SAVEKMSG_MAGIC1 0x53415645 savekmsg ``arch/*/amiga/config.c``
GDA_MAGIC 0x58464552 gda ``arch/mips/include/asm/sn/gda.h``
RED_MAGIC1 0x5a2cf071 (any) ``mm/slab.c``
EEPROM_MAGIC_VALUE 0x5ab478d2 lanai_dev ``drivers/atm/lanai.c``
HDLCDRV_MAGIC 0x5ac6e778 hdlcdrv_state ``include/linux/hdlcdrv.h``
PCXX_MAGIC 0x5c6df104 channel ``drivers/char/pcxx.h``
KV_MAGIC 0x5f4b565f kernel_vars_s ``arch/mips/include/asm/sn/klkernvars.h``
I810_STATE_MAGIC 0x63657373 i810_state ``sound/oss/i810_audio.c``
TRIDENT_STATE_MAGIC 0x63657373 trient_state ``sound/oss/trident.c``
M3_CARD_MAGIC 0x646e6f50 m3_card ``sound/oss/maestro3.c``
FW_HEADER_MAGIC 0x65726F66 fw_header ``drivers/atm/fore200e.h``
SLOT_MAGIC 0x67267321 slot ``drivers/hotplug/cpqphp.h``
SLOT_MAGIC 0x67267322 slot ``drivers/hotplug/acpiphp.h``
LO_MAGIC 0x68797548 nbd_device ``include/linux/nbd.h``
OPROFILE_MAGIC 0x6f70726f super_block ``drivers/oprofile/oprofilefs.h``
M3_STATE_MAGIC 0x734d724d m3_state ``sound/oss/maestro3.c``
VMALLOC_MAGIC 0x87654320 snd_alloc_track ``sound/core/memory.c``
KMALLOC_MAGIC 0x87654321 snd_alloc_track ``sound/core/memory.c``
PWC_MAGIC 0x89DC10AB pwc_device ``drivers/usb/media/pwc.h``
NBD_REPLY_MAGIC 0x96744668 nbd_reply ``include/linux/nbd.h``
ENI155_MAGIC 0xa54b872d midway_eprom ``drivers/atm/eni.h``
CODA_MAGIC 0xC0DAC0DA coda_file_info ``fs/coda/coda_fs_i.h``
DPMEM_MAGIC 0xc0ffee11 gdt_pci_sram ``drivers/scsi/gdth.h``
YAM_MAGIC 0xF10A7654 yam_port ``drivers/net/hamradio/yam.c``
CCB_MAGIC 0xf2691ad2 ccb ``drivers/scsi/ncr53c8xx.c``
QUEUE_MAGIC_FREE 0xf7e1c9a3 queue_entry ``drivers/scsi/arm/queue.c``
QUEUE_MAGIC_USED 0xf7e1cc33 queue_entry ``drivers/scsi/arm/queue.c``
HTB_CMAGIC 0xFEFAFEF1 htb_class ``net/sched/sch_htb.c``
NMI_MAGIC 0x48414d4d455201 nmi_s ``arch/mips/include/asm/sn/nmi.h``
===================== ================ ======================== ==========================================
Note that there are also special per-driver magic numbers defined in sound
memory management. See ``include/sound/sndmagic.h`` for a complete list of
them. Many OSS sound drivers have their magic numbers constructed from the
soundcard PCI ID - these are not listed here either.
The IrDA subsystem also uses a large number of its own magic numbers; see
``include/net/irda/irda.h`` for a complete list of them.
HFS is another large user of magic numbers - you can find them in
``fs/hfs/hfs.h``.

View File

@ -5,7 +5,7 @@ Linux kernel management style
This is a short document describing the preferred (or made up, depending
on who you ask) management style for the linux kernel. It's meant to
mirror the CodingStyle document to some degree, and mainly written to
mirror the process/coding-style.rst document to some degree, and mainly written to
avoid answering [#f1]_ the same (or similar) questions over and over again.
Management style is very personal and much harder to quantify than

View File

@ -27,7 +27,7 @@ Rules on what kind of patches are accepted, and which ones are not, into the
- It cannot contain any "trivial" fixes in it (spelling changes,
whitespace cleanups, etc).
- It must follow the
:ref:`Documentation/SubmittingPatches <submittingpatches>`
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
rules.
- It or an equivalent fix must already exist in Linus' tree (upstream).
@ -40,7 +40,7 @@ Procedure for submitting patches to the -stable tree
Documentation/networking/netdev-FAQ.txt
- Security patches should not be handled (solely) by the -stable review
process but should follow the procedures in
:ref:`Documentation/SecurityBugs <securitybugs>`.
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
For all other submissions, choose one of the following procedures
-----------------------------------------------------------------

View File

@ -7,7 +7,7 @@ Here are some basic things that developers should do if they want to see their
kernel patch submissions accepted more quickly.
These are all above and beyond the documentation that is provided in
:ref:`Documentation/SubmittingPatches <submittingpatches>`
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
and elsewhere regarding submitting Linux kernel patches.
@ -31,7 +31,7 @@ and elsewhere regarding submitting Linux kernel patches.
tends to use ``unsigned long`` for 64-bit quantities.
5) Check your patch for general style as detailed in
:ref:`Documentation/CodingStyle <codingstyle>`.
:ref:`Documentation/process/coding-style.rst <codingstyle>`.
Check for trivial violations with the patch style checker prior to
submission (``scripts/checkpatch.pl``).
You should be able to justify all violations that remain in
@ -78,7 +78,7 @@ and elsewhere regarding submitting Linux kernel patches.
16) All new ``/proc`` entries are documented under ``Documentation/``
17) All new kernel boot parameters are documented in
``Documentation/kernel-parameters.txt``.
``Documentation/admin-guide/kernel-parameters.rst``.
18) All new module parameters are documented with ``MODULE_PARM_DESC()``

Some files were not shown because too many files have changed in this diff