Marcel Holtmann [Sun, 6 Oct 2002 12:35:03 +0000 (05:35 -0700)]
[PATCH] Bluetooth kbuild fix and config cleanup
This removes the obsolete O_TARGET and cleans up the Config.* and *.c
files to have a unique CONFIG_BLUEZ prefix. Additionally, two missing
help entries are added.
This patch adds the new nsp32 driver - NinjaSCSI-32Bi/UDE PCI/Cardbus
SCSI adapter - for 2.5.40. This driver supports at least (we tested) 7
different PCI/Cardbus SCSI cards which use the Workbit NinjaSCSI-32
SCSI processor.
This is the driver part, next one is for things like Config.help,
Makefile, and so on.
Alan Cox [Sun, 6 Oct 2002 08:57:42 +0000 (01:57 -0700)]
[PATCH] add the mini 4x6 font from uclinux
This stands alone from UCLinux and is independent of whether it ever
merges with the mainstream. It's rather handy for getting an entire oops
onto a PDA screen.
Alan Cox [Sun, 6 Oct 2002 08:57:20 +0000 (01:57 -0700)]
[PATCH] NCR5380 port to 2.5 first pass
There is still more work to do; the driver sucks in 2.4 and 2.5, but 2.5
has a lot more of what is needed to make it work nicely. Basically,
NCR5380_main probably has to become a thread in the next generation of
the code.
After reverting my nice but totally broken idea about accelerating
the linking steps, make the three-stage .tmp_kallsyms.o generation
/ addition work again.
Yeah, that means that we now link vmlinux three times when
CONFIG_KALLSYMS is set, and that's annoying.
The kallsyms patches added __kallsyms as last section into vmlinux,
behind .bss.
This was done to save two additional kallsyms passes: since the added
section was last, it did not change the symbols before it.
With the new infrastructure in the top-level Makefile, we do not need
to do full relinks for these passes, so they are cheaper. We now
use one additional link/kallsyms run to be able to place the __kallsyms
section before .bss. The other pass is saved by adding an empty but
allocated __kallsyms section in kernel/kallsyms.c, so the first kallsyms
pass already generates a section of the final size.
kbuild: Generalize adding of additional sections to vmlinux
kallsyms needs to actually have a final vmlinux to extract the symbols,
and then add this information as a new section to the final vmlinux.
Currently, we basically just do the vmlinux link twice, adding
.tmp_kallsyms.o the second time. However, it's actually possible to just
link together the temporary vmlinux generated the first time and the
new object file directly without going back to all the single parts
that the temporary vmlinux was linked from.
This mechanism should be useful for sparc as well, where the btfix
mechanism needs an already linked vmlinux, too.
IMPORTANT: This only works as desired if the link script can be
used recursively, i.e.
Jaroslav Kysela [Sat, 5 Oct 2002 11:40:53 +0000 (13:40 +0200)]
ALSA update
- CS46xx driver - removed unused variable
- USB code
- pass struct usb_interface pointer to the usb-midi parser.
in usb-midi functions, this instance is used instead of parsing
the interface from dev and ifnum.
- allocate the descriptor buffer only for parsing the audio device.
- clean up, new probe/disconnect callbacks for 2.4 API.
- added support for Yamaha and Midiman devices.
Linus Torvalds [Sat, 5 Oct 2002 10:13:01 +0000 (03:13 -0700)]
Increase the delay in waiting for pcmcia drivers to register.
Reported by Peter Osterlund.
(Yeah, the real fix would be to make driver services not have to
know about low-level pcmcia core drivers beforehand, but that's not
life as we know it right now).
Russell King [Sun, 6 Oct 2002 01:02:22 +0000 (02:02 +0100)]
[SERIAL] Fix serial includes for modversions/modules.
This fixes the build error that occurs if you have a certain selection
of module/modversions settings.
Russell King [Sun, 6 Oct 2002 00:30:10 +0000 (01:30 +0100)]
[SERIAL] Allow PCMCIA serial cards to work again.
The PCMCIA layer claims the IO or memory regions for all cards. This
means that any port registered via 8250_cs must not cause the 8250
code to claim the resources itself.
We also add support for iomem-based ports at initialisation time for
PPC.
Andrew Morton [Sat, 5 Oct 2002 03:35:54 +0000 (20:35 -0700)]
[PATCH] stricter dirty memory clamping
The ratelimiting logic in balance_dirty_pages_ratelimited() is designed
to prevent excessive calls to the expensive get_page_state(): On a big
machine we only check to see if we're over dirty memory limits once per
1024 dirtyings per cpu.
This works OK normally, but it has the effect of allowing each process
to go 1024 pages over the dirty limit before it gets throttled.
So if someone runs 16000 tiobench threads, they can go 16G over the
dirty memory threshold and die the death of buffer_head consumption,
because page dirtiness pins the page's buffer_heads, defeating the
special buffer_head reclaim logic.
I'd left this overshoot artifact in place because it provides a degree
of adaptivity - if someone is running hundreds of dirtying processes
(dbench!) then they do want to overshoot the dirty memory limit.
But it's hard to balance, and is really not worth the futzing around.
So change the logic to only perform the get_page_state() call rate
limiting if we're known to be under the dirty memory threshold.
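As a rough userspace sketch of the changed logic (not the kernel code; all names and constants are illustrative stand-ins for the per-cpu counter and get_page_state()), the ratelimit only applies while we last observed ourselves under the threshold:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the changed ratelimiting: the expensive
 * state check (get_page_state() in the kernel) is skipped at most
 * while we are known to be under the dirty threshold; once over it,
 * every dirtying checks, so the overshoot is bounded. */
#define RATELIMIT 1024

static long nr_dirty;          /* stand-in for the global dirty page count */
static long dirty_threshold = 100;
static int ratelimit_count;    /* stand-in for the per-cpu counter */
static bool over_limit;        /* last known state from the expensive check */
static int expensive_checks;   /* how often we paid for the expensive check */

static void check_dirty_state(void)
{
    expensive_checks++;        /* models a get_page_state() call */
    over_limit = nr_dirty > dirty_threshold;
}

static void dirty_one_page(void)
{
    nr_dirty++;
    /* Only ratelimit the expensive check while under the threshold. */
    if (!over_limit && ++ratelimit_count < RATELIMIT)
        return;
    ratelimit_count = 0;
    check_dirty_state();
}
```

With the old logic every process could dirty up to RATELIMIT pages past the threshold; here, once `over_limit` flips, every further dirtying re-checks.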
Andrew Morton [Sat, 5 Oct 2002 03:35:48 +0000 (20:35 -0700)]
[PATCH] remove page->virtual
The patch removes page->virtual for all architectures which do not
define WANT_PAGE_VIRTUAL. Hash for it instead.
Possibly we could define WANT_PAGE_VIRTUAL for CONFIG_HIGHMEM4G, but it
seems unlikely.
A lot of the pressure went off kmap() and page_address() as a result of
the move to kmap_atomic(). That should be the preferred way to address
CPU load in the set_page_address() and page_address() hashing and
locking.
If kmap_atomic is not usable then the next best approach is for users
to cache the result of kmap() in a local rather than calling
page_address() repeatedly.
One heavy user of kmap() and page_address() is the ext2 directory code.
On a 7G Quad PIII, running four concurrent instances of
  while true
  do
    find /usr/src/linux > /dev/null
  done
on ext2 with everything cached, profiling shows that the new hashed
set_page_address() and page_address() implementations consume 0.4% and
1.3% of CPU time respectively. I think that's OK.
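A minimal userspace sketch of the idea (not the kernel implementation; the real set_page_address()/page_address() also take a per-bucket lock, and the names here are illustrative) - map the page's address to its virtual mapping via a small chained hash instead of a field in struct page:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of replacing page->virtual with a hash: look up a page's
 * mapped virtual address via chained hashing on the page pointer. */
#define PA_HASH_SIZE 128

struct page { int dummy; };

struct pa_entry {
    struct page *page;
    void *virtual;
    struct pa_entry *next;
};

static struct pa_entry *pa_hash[PA_HASH_SIZE];
static struct pa_entry entries[32];   /* toy fixed-size entry pool */
static int nr_entries;

static unsigned pa_hash_fn(struct page *page)
{
    return ((size_t)page >> 4) % PA_HASH_SIZE;
}

static void set_page_address(struct page *page, void *virtual)
{
    struct pa_entry *e = &entries[nr_entries++];
    unsigned h = pa_hash_fn(page);

    e->page = page;
    e->virtual = virtual;
    e->next = pa_hash[h];
    pa_hash[h] = e;
}

static void *page_address(struct page *page)
{
    struct pa_entry *e;

    for (e = pa_hash[pa_hash_fn(page)]; e; e = e->next)
        if (e->page == page)
            return e->virtual;
    return NULL;   /* no mapping recorded for this page */
}
```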
Andrew Morton [Sat, 5 Oct 2002 03:35:43 +0000 (20:35 -0700)]
[PATCH] use buffer_boundary() for writeback scheduling hints
This is the replacement for write_mapping_buffers().
Whenever the mpage code sees that it has just written a block which had
buffer_boundary() set, it assumes that the next block is dirty
filesystem metadata. (This is a good assumption - that's what
buffer_boundary is for).
So we do a lookup in the blockdev mapping for the next block and, if
it is present and dirty, schedule it for IO.
So the indirect blocks in the blockdev mapping get merged with the data
blocks in the file mapping.
This is a bit more general than the write_mapping_buffers() approach.
write_mapping_buffers() required that the fs carefully maintain the
correct buffers on the mapping->private_list, and that the fs call
write_mapping_buffers(), and the implementation was generally rather
yuk.
This version will "just work" for filesystems which implement
buffer_boundary correctly. Currently this is ext2, ext3 and some
not-yet-merged reiserfs patches. JFS implements buffer_boundary() but
does not use ext2-like layouts - so there will be no change there.
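The mechanism can be sketched as a toy model (not the mpage code itself; the "mapping" is just an array of flagged blocks, and the names are illustrative): after writing a block that carries the boundary hint, probe block + 1 and queue it too if dirty.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the boundary hint: after writing a block marked
 * buffer_boundary(), the next blockdev block is very likely dirty
 * filesystem metadata (e.g. an ext2 indirect block), so schedule it
 * for IO in the same batch as the data it describes. */
#define NR_BLOCKS 16

struct bdev_block {
    bool dirty;
    bool scheduled;   /* queued for IO in this batch */
};

static struct bdev_block bdev[NR_BLOCKS];

static void schedule_block(int blk)
{
    bdev[blk].scheduled = true;
    bdev[blk].dirty = false;
}

static void write_data_block(int blk, bool boundary)
{
    schedule_block(blk);
    if (boundary && blk + 1 < NR_BLOCKS && bdev[blk + 1].dirty)
        schedule_block(blk + 1);   /* merge metadata with the data IO */
}
```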
Andrew Morton [Sat, 5 Oct 2002 03:35:37 +0000 (20:35 -0700)]
[PATCH] remove write_mapping_buffers()
When the global buffer LRU was present, dirty ext2 indirect blocks were
automatically scheduled for writeback alongside their data.
I added write_mapping_buffers() to replace this - the idea was to
schedule the indirects close in time to the scheduling of their data.
It works OK for small-to-medium sized files but for large, linear writes
it doesn't work: the request queue is completely full of file data and
when we later come to scheduling the indirects, their neighbouring data
has already been written.
So writeback of really huge files tends to be a bit seeky.
So. Kill it. Will fix this problem by other means.
Andrew Morton [Sat, 5 Oct 2002 03:35:13 +0000 (20:35 -0700)]
[PATCH] fix reclaim for higher-order allocations
The page reclaim logic will bail out if all zones are at pages_high.
But if the caller is requesting a higher-order allocation we need to go
on and free more memory anyway. That's the only way we have of
addressing buddy fragmentation.
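The changed bail-out condition can be sketched like this (a simplified model, not the shrink-list code; the function name and the freed/target accounting are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative bail-out test: for order-0 allocations it is enough
 * that every zone is at pages_high, but a higher-order request keeps
 * reclaim going until the full target is freed, so the buddy
 * allocator gets a chance to coalesce contiguous free pages. */
struct zone {
    long free_pages;
    long pages_high;
};

static bool reclaim_may_stop(const struct zone *zones, int nr, int order,
                             long freed, long target)
{
    if (order > 0)
        return freed >= target;   /* must free the full target anyway */
    for (int i = 0; i < nr; i++)
        if (zones[i].free_pages < zones[i].pages_high)
            return false;
    return true;
}
```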
Andrew Morton [Sat, 5 Oct 2002 03:35:08 +0000 (20:35 -0700)]
[PATCH] separation of direct-reclaim and kswapd functions
There is some lack of clarity in what kswapd does and what
direct-reclaim tasks do; try_to_free_pages() tries to service both
functions, and they are different.
- kswapd's role is to keep all zones on its node at
zone->free_pages >= zone->pages_high.
and to never stop as long as any zones do not meet that condition.
- A direct reclaimer's role is to try to free some pages from the
zones which are suitable for this particular allocation request, and
to return when that has been achieved, or when all the relevant zones
are at
zone->free_pages >= zone->pages_high.
The patch explicitly separates these two code paths; kswapd does not
run try_to_free_pages() any more. kswapd should not be aware of zone
fallbacks.
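The two termination conditions might be sketched like this (a toy model, not the vmscan code; the function names and the freed/target bookkeeping are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

struct zone {
    long free_pages;
    long pages_high;
};

/* kswapd: never stop while any zone on its node is below pages_high. */
static bool kswapd_may_sleep(struct zone **node_zones, int nr)
{
    for (int i = 0; i < nr; i++)
        if (node_zones[i]->free_pages < node_zones[i]->pages_high)
            return false;
    return true;
}

/* Direct reclaim: stop once enough progress was made, or once the
 * zones relevant to THIS allocation are all at pages_high. */
static bool direct_reclaim_done(struct zone **classzones, int nr,
                                long nr_freed, long target)
{
    if (nr_freed >= target)
        return true;
    return kswapd_may_sleep(classzones, nr);  /* same per-zone test */
}
```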
Andrew Morton [Sat, 5 Oct 2002 03:35:02 +0000 (20:35 -0700)]
[PATCH] mempool wakeup fix
When the mempool is empty, tasks wait on the waitqueue in "exclusive
mode". So one task is woken for each returned element.
But if the number of tasks which are waiting exceeds the mempool's
specified size (min_nr), mempool_free() ends up deciding that as the
pool is fully replenished, there cannot possibly be anyone waiting for
more elements.
But with 16384 threads running tiobench, it happens.
We could fix this with a waitqueue_active() test in mempool_free().
But rather than adding that test to this fastpath I changed the wait to
be non-exclusive, and used the prepare_to_wait/finish_wait API, which
will be quite beneficial in this case.
Also, convert the schedule() in mempool_alloc() to an io_schedule(), so
this sleep time is accounted as "IO wait". Which is a bit approximate
- we don't _know_ that the caller is really waiting for IO completion.
But for most current users of mempools, io_schedule() is more accurate
than schedule() here.
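The flawed inference can be shown in a toy, single-threaded counter model (not the kernel code - woken waiters are not modeled taking elements back out of the pool; the point is only the skipped wakeup):

```c
#include <assert.h>

/* Toy model of the mempool wakeup bug: with exclusive waits,
 * mempool_free() wakes one waiter per returned element, but only
 * while it believes the pool is not yet fully replenished.  With
 * more waiters than min_nr elements, the last waiters never get a
 * wakeup. */
static int min_nr = 2;     /* pool size */
static int curr_nr;        /* elements currently in the pool */
static int waiters;        /* tasks asleep in exclusive mode */
static int woken;

static void mempool_free_buggy(void)
{
    if (curr_nr < min_nr) {
        curr_nr++;
        if (waiters > 0) {      /* wake_up(): one exclusive waiter */
            waiters--;
            woken++;
        }
        return;
    }
    /* Pool full: the old code inferred nobody can be waiting and
     * skipped the wakeup entirely - wrong once waiters > min_nr. */
}
```

A non-exclusive wait (every waiter woken, each re-checks the pool) sidesteps the inference entirely, at the price of some thundering-herd wakeups.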
Andrew Morton [Sat, 5 Oct 2002 03:34:57 +0000 (20:34 -0700)]
[PATCH] O_DIRECT invalidation fix
If the alignment checks in generic_direct_IO() fail, we end up not
forcing writeback of dirty pagecache pages, but we still run
invalidate_inode_pages2(). The net result is that dirty pagecache gets
incorrectly removed. I guess this will expose unwritten disk blocks.
So move the sync up into generic_file_direct_IO(), where we perform the
invalidation. So we know that pagecache and disk are in sync before we
do anything else.
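The ordering requirement can be shown with a toy cache model (not the real pagecache code; the flags and function names are illustrative): invalidating dirty entries without first writing them back loses data, so the sync must unconditionally precede the invalidation.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the ordering fix: invalidating dirty pagecache without
 * first writing it back throws dirty data away.  Moving the sync into
 * the caller guarantees it always precedes the invalidation, even
 * when the direct-IO alignment checks subsequently fail. */
#define NR_PAGES 4

static bool page_dirty[NR_PAGES];
static bool disk_uptodate[NR_PAGES];

static void sync_pages(void)
{
    for (int i = 0; i < NR_PAGES; i++)
        if (page_dirty[i]) {
            disk_uptodate[i] = true;  /* write back to disk */
            page_dirty[i] = false;
        }
}

static void invalidate_pages(void)
{
    for (int i = 0; i < NR_PAGES; i++)
        page_dirty[i] = false;        /* page dropped from cache */
}

/* Fixed path: sync BEFORE any invalidation, unconditionally. */
static void direct_io(void)
{
    sync_pages();
    invalidate_pages();
}
```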
Andrew Morton [Sat, 5 Oct 2002 03:34:50 +0000 (20:34 -0700)]
[PATCH] truncate fixes
The new truncate code needs to check page->mapping after acquiring the
page lock. Because the page could have been unmapped by page reclaim
or by invalidate_inode_pages() while we waited for the page lock.
Also, the page may have been moved between a tmpfs inode and
swapper_space. Because we don't hold the mapping->page_lock across the
entire truncate operation any more.
Also, change the initial truncate scan (the non-blocking one which is
there to stop as much writeout as possible) so that it is immune to
other CPUs decreasing page->index.
Also fix negated test in invalidate_inode_pages2(). Not sure how that
got in there.
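The re-check can be sketched as a toy model (not the truncate code itself; the structs and names are illustrative stand-ins for lock_page() and page->mapping):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the re-check: between finding a page and locking it,
 * reclaim or invalidate_inode_pages() may detach the page from its
 * mapping, so truncate must verify page->mapping again under the
 * page lock before touching the page. */
struct mapping { int unused; };

struct page {
    struct mapping *mapping;
    bool locked;
};

static bool truncate_one_page(struct page *page, struct mapping *expected)
{
    page->locked = true;                 /* lock_page() may have slept */
    if (page->mapping != expected) {     /* detached while we waited */
        page->locked = false;
        return false;                    /* skip: not ours any more */
    }
    /* ... actually truncate the page here ... */
    page->locked = false;
    return true;
}
```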
Andrew Morton [Sat, 5 Oct 2002 03:34:45 +0000 (20:34 -0700)]
[PATCH] distinguish between address span of a zone and the number
From David Mosberger
The patch below fixes a bug in nr_free_zone_pages() which shows up when
a zone has holes. The problem is due to the fact that "struct zone"
didn't keep track of the amount of real memory in a zone. Because of
this, nr_free_zone_pages() simply assumed that a zone consists entirely
of real memory. On machines with large holes, this has catastrophic
effects on VM performance, because the VM system ends up thinking that
there is plenty of memory left over in a zone, when in fact it may be
completely full.
The patch below fixes the problem by replacing the "size" member in
"struct zone" with "spanned_pages" and "present_pages" and updating
page_alloc.c.
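A sketch of the change (a simplified model, not the kernel's struct zone or the full nr_free_zone_pages() logic; the accounting here is reduced to a sum for illustration):

```c
#include <assert.h>

/* Sketch of the struct zone change: the old "size" becomes
 * spanned_pages (the address range the zone covers, holes included)
 * plus present_pages (pages actually backed by memory).  Any sizing
 * calculation must use present_pages, or a holey zone looks far
 * bigger than it really is. */
struct zone {
    unsigned long spanned_pages;   /* total range, including holes */
    unsigned long present_pages;   /* real memory in the zone */
    unsigned long free_pages;
};

static unsigned long nr_zone_pages(const struct zone *zones, int nr)
{
    unsigned long sum = 0;

    for (int i = 0; i < nr; i++)
        sum += zones[i].present_pages;   /* NOT spanned_pages */
    return sum;
}
```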
Andrew Morton [Sat, 5 Oct 2002 03:34:35 +0000 (20:34 -0700)]
[PATCH] hugetlb kmap fix
From Bill Irwin
This patch makes alloc_hugetlb_page() kmap() the memory it's zeroing,
and cleans up a tiny bit of list handling on the side. Without this
fix, it oopses every time it's called.
Petr Vandrovec [Sat, 5 Oct 2002 01:30:22 +0000 (18:30 -0700)]
[PATCH] FAT/VFAT memory corruption during mount()
This patch fixes memory corruption during vfat mount: one byte
before the mount options was overwritten by ',' after the
strtok->strsep conversion.
This patch also fixes another problem introduced by the strtok->strsep
conversion: VFAT requires that FAT does not modify the passed options,
but unfortunately the FAT driver fails to preserve the options string
if there is more than one consecutive comma in it.
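The strsep() pitfall generalizes; a runnable userspace sketch of option parsing (not the FAT driver's parser - the function name and option strings are illustrative) shows the two properties the conversion has to respect: strsep() writes '\0' over each separator inside the caller's buffer, and, unlike strtok(), it returns an empty token for every consecutive comma, which must be skipped explicitly.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>

/* Userspace sketch of strsep-based mount option parsing.  strsep()
 * modifies the buffer in place (so pass a private copy if the string
 * must survive), and returns "" for consecutive separators instead of
 * silently skipping them the way strtok() does. */
static int parse_options(char *options, const char **opt, int max)
{
    char *p;
    int n = 0;

    while ((p = strsep(&options, ",")) != NULL) {
        if (*p == '\0')
            continue;          /* consecutive commas: empty token */
        if (n < max)
            opt[n++] = p;
    }
    return n;
}
```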