m68k: Correct some typos in comments

Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Link: https://lore.kernel.org/r/f54e99e9bd1e25ad70a6a1d7a7ec9ab2b4e50d68.1595460351.git.fthain@telegraphics.com.au
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Finn Thain 2020-07-23 09:25:51 +10:00 committed by Geert Uytterhoeven
parent c75e59e401
commit 5661bccb70


@@ -57,7 +57,7 @@
  * Of course, readability is a subjective issue, so it will never be
  * argued that that goal was accomplished. It was merely a goal.
  * A key way to help make code more readable is to give good
- * documentation. So, the first thing you will find is exaustive
+ * documentation. So, the first thing you will find is exhaustive
  * write-ups on the structure of the file, and the features of the
  * functional subroutines.
  *
@@ -1304,7 +1304,7 @@ L(mmu_fixup_done):
  * mmu_engage
  *
  * This chunk of code performs the gruesome task of engaging the MMU.
- * The reason its gruesome is because when the MMU becomes engaged it
+ * The reason it's gruesome is because when the MMU becomes engaged it
  * maps logical addresses to physical addresses. The Program Counter
  * register is then passed through the MMU before the next instruction
  * is fetched (the instruction following the engage MMU instruction).
@@ -1369,7 +1369,7 @@ L(mmu_fixup_done):
 /*
  * After this point no new memory is allocated and
  * the start of available memory is stored in availmem.
- * (The bootmem allocator requires now the physicall address.)
+ * (The bootmem allocator requires now the physical address.)
  */
 	movel	L(memory_start),availmem
@@ -1547,7 +1547,7 @@ func_return	get_bi_record
  * seven bits of the logical address (LA) are used as an
  * index into the "root table." Each entry in the root
  * table has a bit which specifies if it's a valid pointer to a
- * pointer table. Each entry defines a 32KMeg range of memory.
+ * pointer table. Each entry defines a 32Meg range of memory.
  * If an entry is invalid then that logical range of 32M is
  * invalid and references to that range of memory (when the MMU
  * is enabled) will fault. If the entry is valid, then it does
@@ -1584,7 +1584,7 @@ func_return	get_bi_record
  * bits 17..12 - index into the Page Table
  * bits 11..0 - offset into a particular 4K page
  *
- * The algorithms which follows do one thing: they abstract
+ * The algorithms which follow do one thing: they abstract
  * the MMU hardware. For example, there are three kinds of
  * cache settings that are relevant. Either, memory is
  * being mapped in which case it is either Kernel Code (or
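
(Aside, not from the patch: with 4K pages, the layout described in this comment splits a 32-bit logical address into 7 bits of root-table index, 7 bits of pointer-table index, 6 bits of page-table index and a 12-bit page offset, which is why each root entry spans 32Meg, i.e. 2^25 bytes. The short C sketch below only illustrates that split; the helper name and the exact bit positions of the pointer-table field are inferred from the usual '040 layout rather than quoted from the file.)

#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of a logical address (LA) under the 4K-page
 * layout described above.  decode_logical() is a made-up name. */
static void decode_logical(uint32_t la)
{
	unsigned root_idx = (la >> 25) & 0x7f;   /* bits 31..25: root table    */
	unsigned ptr_idx  = (la >> 18) & 0x7f;   /* bits 24..18: pointer table */
	unsigned page_idx = (la >> 12) & 0x3f;   /* bits 17..12: page table    */
	unsigned offset   =  la        & 0xfff;  /* bits 11..0 : byte in page  */

	printf("0x%08x -> root %u, ptr %u, page %u, offset 0x%03x\n",
	       (unsigned)la, root_idx, ptr_idx, page_idx, offset);
}

int main(void)
{
	decode_logical(0x00001000);   /* one page into the first 32Meg chunk */
	decode_logical(0x02000000);   /* first byte of the second root entry */
	return 0;
}
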
@@ -2082,7 +2082,7 @@ func_return	mmu_map_tt
  * mmu_map
  *
  * This routine will map a range of memory using a pointer
- * table and allocating the pages on the fly from the kernel.
+ * table and allocate the pages on the fly from the kernel.
  * The pointer table does not have to be already linked into
  * the root table, this routine will do that if necessary.
  *
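
(Aside, not from the patch: the two sentences above boil down to a loop over the range, where each iteration looks up - or builds, allocating fresh pages from the kernel - the tables for that address and then fills in the descriptor. The C sketch below is self-contained and every name in it is invented; it only mirrors the flow the comment describes, not the assembly.)

#include <stdint.h>

#define PAGE_SIZE   4096u
#define DESC_VALID  0x1u

/* Toy descriptor array standing in for the real root/pointer/page
 * tables, only so the sketch compiles on its own. */
static uint32_t toy_descriptors[1024];

/* In the real routine this step walks the root table and allocates and
 * links a pointer table or page table on the fly when one is missing. */
static uint32_t *lookup_descriptor(uint32_t logical)
{
	return &toy_descriptors[(logical / PAGE_SIZE) % 1024];
}

/* Map [logical, logical + length) onto [physical, physical + length). */
static void mmu_map_sketch(uint32_t logical, uint32_t physical,
			   uint32_t length)
{
	for (uint32_t off = 0; off < length; off += PAGE_SIZE)
		*lookup_descriptor(logical + off) = (physical + off) | DESC_VALID;
}
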
@@ -2528,7 +2528,7 @@ func_start	mmu_get_root_table_entry,%d0/%a1
 	/* Find the start of free memory, get_bi_record does this for us,
 	 * as the bootinfo structure is located directly behind the kernel
-	 * and and we simply search for the last entry.
+	 * we simply search for the last entry.
 	 */
 	get_bi_record	BI_LAST
 	addw	#PAGESIZE-1,%a0
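
(Aside, not from the patch: the addw #PAGESIZE-1,%a0 at the end of this hunk is the first half of the usual round-up-to-a-page-boundary idiom; the companion masking step presumably follows outside the context shown. In C the whole idiom looks like the sketch below, with PAGESIZE standing for the 4K page size.)

#define PAGESIZE 4096u

/* Round addr up to the next page boundary (illustrative only). */
static inline unsigned long page_align_up(unsigned long addr)
{
	return (addr + PAGESIZE - 1) & ~(unsigned long)(PAGESIZE - 1);
}
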
@@ -2654,7 +2654,7 @@ func_start	mmu_get_page_table_entry,%d0/%a1
 	jne	2f
 	/* If the page table entry doesn't exist, we allocate a complete new
-	 * page and use it as one continues big page table which can cover
+	 * page and use it as one continuous big page table which can cover
 	 * 4MB of memory, nearly almost all mappings have that alignment.
 	 */
 	get_new_page
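
(Aside, not from the patch: the 4MB figure in this comment follows from the sizes involved, assuming 4K pages and 4-byte page descriptors: a freshly allocated 4K page holds 4096 / 4 = 1024 descriptors, and 1024 descriptors x 4K per page = 4MB of mapped memory.)

/* Sanity check of the 4MB claim, assuming 4K pages and 4-byte
 * page descriptors; the names below are for illustration only. */
#define PAGE_BYTES      4096u
#define DESC_BYTES      4u
#define DESCS_PER_PAGE  (PAGE_BYTES / DESC_BYTES)        /* 1024  */
#define TABLE_SPAN      (DESCS_PER_PAGE * PAGE_BYTES)    /* 4 MiB */
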