- Use "existing" directory in <_devfs_make_parent_for_leaf>
- Use slab cache rather than fixed buffer for devfsd events
+===============================================================================
+Changes for patch v199
+
+- Removed obsolete usage of DEVFS_FL_NO_PERSISTENCE
+
+- Send DEVFSD_NOTIFY_REGISTERED events in <devfs_mk_dir>
+
+- Fixed locking bug in <devfs_d_revalidate_wait> due to typo
+
+- Do not send CREATE, CHANGE, ASYNC_OPEN or DELETE events from devfsd
+ or children
+===============================================================================
+Changes for patch v200
+
+- Ported to kernel 2.5.1-pre2
+===============================================================================
+Changes for patch v201
+
+- Fixed bug in <devfsd_read>: was dereferencing freed pointer
+===============================================================================
+Changes for patch v202
+
+- Fixed bug in <devfsd_close>: was dereferencing freed pointer
+
+- Added process group check for devfsd privileges
+===============================================================================
+Changes for patch v203
+
+- Use SLAB_ATOMIC in <devfsd_notify_de> from <devfs_d_delete>
+===============================================================================
+Changes for patch v204
+
+- Removed long obsolete rc.devfs
+
+- Return old entry in <devfs_mk_dir> for 2.4.x kernels
+
+- Updated README from master HTML file
+
+- Increment refcount on module in <check_disc_changed>
+
+- Created <devfs_get_handle> and exported <devfs_put>
+
+- Increment refcount on module in <devfs_get_ops>
+
+- Created <devfs_put_ops> and used where needed to fix races
+
+- Added clarifying comments in response to preliminary EMC code review
+
+- Added poisoning to <devfs_put>
+
+- Improved debugging messages
+
+- Fixed unregister bugs in drivers/md/lvm-fs.c
+===============================================================================
+Changes for patch v205
+
+- Corrected (made useful) debugging message in <unregister>
+
+- Moved <kmem_cache_create> in <mount_devfs_fs> to <init_devfs_fs>
+
+- Fixed drivers/md/lvm-fs.c to create "lvm" entry
+
+- Added magic number to guard against scribbling drivers
+
+- Only return old entry in <devfs_mk_dir> if a directory
+
+- Defined macros for error and debug messages
+
+- Updated README from master HTML file
Linux Devfs (Device File System) FAQ
Richard Gooch
-9-NOV-2001
+21-DEC-2001
-----------------------------------------------------------------------------
Making things work
Alternatives to devfs
+What I don't like about devfs
+How to report bugs
+Strange kernel messages
Other resources
Devfsd
OK, if you're reading this, I assume you want to play with
-devfs. First you need to compile devfsd, the device management daemon,
-available at
+devfs. First you should ensure that /usr/src/linux contains a
+recent kernel source tree. Then you need to compile devfsd, the device
+management daemon, available at
+
http://www.atnf.csiro.au/~rgooch/linux/.
Because the kernel has a naming scheme
which is quite different from the old naming scheme, you need to
Making things work
Alternatives to devfs
What I don't like about devfs
+How to report bugs
+Strange kernel messages
This is not even remotely true. As shown above,
both code and data size are quite modest.
+
+How to report bugs
+
+If you have (or think you have) a bug with devfs, please follow the
+steps below:
+
+
+- please make sure you have the latest devfs patches applied. The
+  latest kernel version might not have the latest devfs patches applied
+  yet (Linus is very busy)
+
+- save a copy of your complete kernel logs (preferably by using the
+  dmesg programme) for later inclusion in your bug report. You may
+  need to use the -s switch to increase the internal buffer size so
+  you can capture all the boot messages
+
+- try booting with devfs=dall passed to the kernel boot command line
+  (read the documentation on your bootloader on how to do this), and
+  save the result to a file. This may be quite verbose, and it may
+  overflow the messages buffer, but try to get as much of it as you
+  can
+
+- if you get an Oops, run ksymoops to decode it so that the names of
+  the offending functions are provided. A non-decoded Oops is pretty
+  useless
+
+- send a copy of your devfsd configuration file(s)
+
+- send the bug report to me first. Don't expect that I will see it if
+  you post it to the linux-kernel mailing list. Include all the
+  information listed above, plus anything else that you think might be
+  relevant. Put the string devfs somewhere in the subject line, so my
+  mail filters mark it as urgent
+
+
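The log-capture steps above can be sketched as a couple of shell commands. This is only an illustration; the file names (devfs-boot.log, oops.txt) are examples, not names the kernel or devfsd expects:

```shell
# Grab the full kernel log, enlarging the read buffer with -s so the
# early boot messages are not lost (buffer size is an example value).
dmesg -s 65536 > devfs-boot.log 2>/dev/null || true

# If you hit an Oops, decode it before mailing (paths are examples):
# ksymoops < oops.txt > oops-decoded.txt

ls devfs-boot.log
```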
+Here is a general guide on how to ask questions in a way that greatly
+improves your chances of getting a reply:
+
+http://www.tuxedo.org/~esr/faqs/smart-questions.html. If you have
+a bug to report, you should also read
+
+http://www.chiark.greenend.org.uk/~sgtatham/bugs.html.
+
+
+Strange kernel messages
+
+You may see devfs-related messages in your kernel logs. Below are some
+messages and what they mean (and what you should do about them, if
+anything).
+
+
+
+devfs_register(fred): could not append to parent, err: -17
+
+You need to check what the error code means, but usually 17 means
+EEXIST. This means that a driver attempted to create an entry
+fred in a directory, but there already was an entry with that
+name. This is often caused by flawed boot scripts which untar a bunch
+of inodes into /dev, as a way to restore permissions. This
+message is harmless, as the device nodes will still
+provide access to the driver (unless you use the devfs=only
+boot option, which is only for dedicated souls:-). If you want to get
+rid of these annoying messages, upgrade to devfsd-v1.3.20 and use the
+recommended RESTORE directive to restore permissions.
+
+
+devfs_mk_dir(bill): using old entry in dir: c1808724 ""
+
+This is similar to the message above, except that a driver attempted
+to create a directory named bill, and the parent directory
+has an entry with the same name. In this case, to ensure that drivers
+continue to work properly, the old entry is re-used and given to the
+driver. In 2.5 kernels, the driver is given a NULL entry, and thus,
+under rare circumstances, may not create the required device nodes.
+The solution is the same as above.
+
+
+
+
-----------------------------------------------------------------------------
+++ /dev/null
-#! /bin/sh
-#
-# /etc/rc.d/rc.devfs
-#
-# Linux Boot Scripts by Richard Gooch <rgooch@atnf.csiro.au>
-# Copyright 1993-1999 under GNU Copyleft version 2.0. See /etc/rc for
-# copyright notice.
-#
-# Save and restore devfs ownerships and permissions
-#
-# Written by Richard Gooch 11-JAN-1998
-#
-# Updated by Richard Gooch 23-JAN-1998: Added "start" and "stop".
-#
-# Updated by Richard Gooch 5-AUG-1998: Robustness improvements by
-# Roderich Schupp.
-#
-# Updated by Richard Gooch 9-AUG-1998: Took account of change from
-# ".epoch" to ".devfsd".
-#
-# Updated by Richard Gooch 19-AUG-1998: Test and tty pattern patch
-# by Roderich Schupp.
-#
-# Updated by Richard Gooch 24-MAY-1999: Use sed instead of tr.
-#
-# Last updated by Richard Gooch 25-MAY-1999: Don't save /dev/log.
-#
-#
-# Usage: rc.devfs save|restore [savedir] [devfsdir]
-#
-# Note: "start" is a synonym for "restore" and "stop" is a synonym for "save".
-
-# Set VERBOSE to "no" if you would like a more quiet operation.
-VERBOSE=yes
-
-# Set TAROPTS to "v" or even "vv" to see which files get saved/restored.
-TAROPTS=
-
-option="$1"
-
-case "$option" in
- save|restore) ;;
- start) option=restore ;;
- stop) option=save ;;
- *) echo "No save or restore option given" ; exit 1 ;;
-esac
-
-if [ "$2" = "" ]; then
- savedir=/var/state
-else
- savedir=$2
-fi
-
-if [ ! -d $savedir ]; then
- echo "Directory: $savedir does not exist"
- exit 1
-fi
-
-if [ "$3" = "" ]; then
- if [ -d /devfs ]; then
- devfs=/devfs
- else
- devfs=/dev
- fi
-else
- devfs=$3
-fi
-
-grep devfs /proc/filesystems >/dev/null || exit 0
-
-if [ ! -d $devfs ]; then
- echo "Directory: $devfs does not exist"
- exit 1
-elif [ ! -c $devfs/.devfsd ]; then
- echo "Directory: $devfs is not the root of a devfs filesystem"
- exit 1
-fi
-
-savefile=`echo $devfs | sed 's*/*_*g'`
-tarfile=${savedir}/devfssave.${savefile}.tar.gz
-
-cd $devfs
-
-case "$option" in
- save)
- [ "$VERBOSE" != no ] && echo "Saving $devfs permissions..."
-
- # You might want to adjust the pattern below to control
- # which file's permissions will be saved.
- # The sample pattern exludes all virtual consoles
- # as well as old and new style pseudo terminals.
- files=`find * -noleaf -cnewer .devfsd \
- ! -regex 'tty[0-9]+\|vc/.*\|vcsa?[0-9]+\|vcc/.*\|[pt]ty[a-z][0-9a-f]\|pt[ms]/.*\|log' -print`
- rm -f $tarfile
- [ -n "$files" ] && tar cz${TAROPTS}f $tarfile $files
- ;;
-
- restore)
- [ "$VERBOSE" != no ] && echo "Restoring $devfs permissions..."
- [ -f $tarfile ] && tar xpz${TAROPTS}f $tarfile
- ;;
-esac
-
-exit 0
M: vojtech@suse.cz
L: linux-joystick@atrey.karlin.mff.cuni.cz
W: http://www.suse.cz/development/joystick/
-S: Supported
+S: Maintained
KERNEL AUTOMOUNTER (AUTOFS)
P: H. Peter Anvin
M: vojtech@suse.cz
L: linux-usb-users@lists.sourceforge.net
L: linux-usb-devel@lists.sourceforge.net
-S: Supported
+S: Maintained
USB BLUETOOTH DRIVER
P: Greg Kroah-Hartman
L: linux-usb-users@lists.sourceforge.net
L: linux-usb-devel@lists.sourceforge.net
W: http://www.suse.cz/development/input/
-S: Supported
+S: Maintained
USB HUB
P: Johannes Erdfelt
M: vojtech@suse.cz
L: linux-usb-users@lists.sourceforge.net
L: linux-usb-devel@lists.sourceforge.net
-S: Supported
+S: Maintained
USB SE401 DRIVER
P: Jeroen Vreeken
VERSION = 2
PATCHLEVEL = 5
SUBLEVEL = 2
-EXTRAVERSION =-pre2
+EXTRAVERSION =-pre3
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
{
/* An endless idle loop with no priority at all. */
current->nice = 20;
- current->counter = -100;
while (1) {
/* FIXME -- EV6 and LCA45 know how to power down
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
while (1) {
void (*idle)(void) = pm_idle;
/* Purpose : Function to change a bit
* Prototype: int change_bit(int bit, void *addr)
*/
-ENTRY(change_bit)
+ENTRY(_change_bit_be)
+ eor r0, r0, #0x18 @ big endian byte ordering
+ENTRY(_change_bit_le)
and r2, r0, #7
mov r3, #1
mov r3, r3, lsl r2
* Purpose : Function to clear a bit
* Prototype: int clear_bit(int bit, void *addr)
*/
-
-ENTRY(clear_bit)
+ENTRY(_clear_bit_be)
+ eor r0, r0, #0x18 @ big endian byte ordering
+ENTRY(_clear_bit_le)
and r2, r0, #7
mov r3, #1
mov r3, r3, lsl r2
* Purpose : Find a 'zero' bit
* Prototype: int find_first_zero_bit(void *addr, int maxbit);
*/
-ENTRY(find_first_zero_bit)
+ENTRY(_find_first_zero_bit_le)
mov r2, #0
-.bytelp: ldrb r3, [r0, r2, lsr #3]
+1: ldrb r3, [r0, r2, lsr #3]
eors r3, r3, #0xff @ invert bits
bne .found @ any now set - found zero bit
add r2, r2, #8 @ next bit pointer
cmp r2, r1 @ any more?
- bcc .bytelp
+ bcc 1b
add r0, r1, #1 @ no free bits
RETINSTR(mov,pc,lr)
* Purpose : Find next 'zero' bit
* Prototype: int find_next_zero_bit(void *addr, int maxbit, int offset)
*/
-ENTRY(find_next_zero_bit)
+ENTRY(_find_next_zero_bit_le)
ands ip, r2, #7
- beq .bytelp @ If new byte, goto old routine
+ beq 1b @ If new byte, goto old routine
ldrb r3, [r0, r2, lsr#3]
eor r3, r3, #0xff @ now looking for a 1 bit
movs r3, r3, lsr ip @ shift off unused bits
+ bne .found
+ orr r2, r2, #7 @ if zero, then no bits here
+ add r2, r2, #1 @ align bit pointer
+ b 1b @ loop for next bit
+
+#ifdef __ARMEB__
+
+ENTRY(_find_first_zero_bit_be)
+ mov r2, #0
+1: eor r3, r2, #0x18 @ big endian byte ordering
+ ldrb r3, [r0, r3, lsr #3]
+ eors r3, r3, #0xff @ invert bits
+ bne .found @ any now set - found zero bit
+ add r2, r2, #8 @ next bit pointer
+ cmp r2, r1 @ any more?
+ bcc 1b
+ add r0, r1, #1 @ no free bits
+ RETINSTR(mov,pc,lr)
+
+ENTRY(_find_next_zero_bit_be)
+ ands ip, r2, #7
+ beq 1b @ If new byte, goto old routine
+ eor r3, r2, #0x18 @ big endian byte ordering
+ ldrb r3, [r0, r3, lsr#3]
+ eor r3, r3, #0xff @ now looking for a 1 bit
+ movs r3, r3, lsr ip @ shift off unused bits
orreq r2, r2, #7 @ if zero, then no bits here
addeq r2, r2, #1 @ align bit pointer
- beq .bytelp @ loop for next bit
+ beq 1b @ loop for next bit
+
+#endif
/*
* One or more bits in the LSB of r3 are assumed to be set.
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+ .text
/*
* Purpose : Function to set a bit
* Prototype: int set_bit(int bit, void *addr)
*/
-
-ENTRY(set_bit)
- and r2, r0, #7
- mov r3, #1
- mov r3, r3, lsl r2
+ENTRY(_set_bit_be)
+ eor r0, r0, #0x18 @ big endian byte ordering
+ENTRY(_set_bit_le)
+ and r2, r0, #7
+ mov r3, #1
+ mov r3, r3, lsl r2
save_and_disable_irqs ip, r2
ldrb r2, [r1, r0, lsr #3]
orr r2, r2, r3
strb r2, [r1, r0, lsr #3]
restore_irqs ip
RETINSTR(mov,pc,lr)
-
-
#include <asm/assembler.h>
.text
-ENTRY(test_and_change_bit)
+ENTRY(_test_and_change_bit_be)
+ eor r0, r0, #0x18 @ big endian byte ordering
+ENTRY(_test_and_change_bit_le)
add r1, r1, r0, lsr #3
and r3, r0, #7
mov r0, #1
#include <asm/assembler.h>
.text
-ENTRY(test_and_clear_bit)
+ENTRY(_test_and_clear_bit_be)
+ eor r0, r0, #0x18 @ big endian byte ordering
+ENTRY(_test_and_clear_bit_le)
add r1, r1, r0, lsr #3 @ Get byte offset
and r3, r0, #7 @ Get bit offset
mov r0, #1
#include <asm/assembler.h>
.text
-ENTRY(test_and_set_bit)
+ENTRY(_test_and_set_bit_be)
+ eor r0, r0, #0x18 @ big endian byte ordering
+ENTRY(_test_and_set_bit_le)
add r1, r1, r0, lsr #3 @ Get byte offset
and r3, r0, #7 @ Get bit offset
mov r0, #1
-
-/*
--------------------------------------------------------------------------------
-One of the macros `BIGENDIAN' or `LITTLEENDIAN' must be defined.
--------------------------------------------------------------------------------
-*/
-#define LITTLEENDIAN
-
/*
-------------------------------------------------------------------------------
The macro `BITS64' can be defined to indicate that 64-bit integer types are
This routine does three things:
-1) It saves SP into a variable called userRegisters. The kernel has
-created a struct pt_regs on the stack and saved the user registers
-into it. See /usr/include/asm/proc/ptrace.h for details. The
-emulator code uses userRegisters as the base of an array of words from
-which the contents of the registers can be extracted.
+1) The kernel has created a struct pt_regs on the stack and saved the
+user registers into it. See /usr/include/asm/proc/ptrace.h for details.
+The emulator code uses userRegisters as the base of an array of words
+from which the contents of the registers can be extracted.
2) It calls EmulateAll to emulate a floating point instruction.
EmulateAll returns 1 if the emulation was successful, or 0 if not.
of stealing two regs from the register allocator. Not sure if
it's worth it. */
str sp, [r10] @ Store the user registers pointer in the fpa11 structure.
- mov r4, sp @ use r4 for local pointer
- mov r10, lr @ save the failure-return addresses
+ mov r4, lr @ save the failure-return addresses
- ldr r5, [r4, #60] @ get contents of PC;
+ mov r0, r10
+ bl FPA11_CheckInit @ check to see if we are initialised
+
+ ldr r5, [sp, #60] @ get contents of PC;
sub r8, r5, #4
.Lx2: ldrt r0, [r8] @ get actual instruction into r0
emulate:
bl EmulateAll @ emulate the instruction
cmp r0, #0 @ was emulation successful
- moveq pc, r10 @ no, return failure
+ moveq pc, r4 @ no, return failure
next:
.Lx1: ldrt r6, [r5], #4 @ get the next instruction and
teqne r2, #0x0E000000
movne pc, r9 @ return ok if not a fp insn
- str r5, [r4, #60] @ update PC copy in regs
+ str r5, [sp, #60] @ update PC copy in regs
mov r0, r6 @ save a copy
- ldr r1, [r4, #64] @ fetch the condition codes
+ ldr r1, [sp, #64] @ fetch the condition codes
bl checkCondition @ check the condition
cmp r0, #0 @ r0 = 0 ==> condition failed
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
-
+#include <linux/compiler.h>
#include <asm/system.h>
#include "fpa11.h"
}
}
-/* Emulate the instruction in the opcode. */
-unsigned int EmulateAll(unsigned int opcode)
+void FPA11_CheckInit(FPA11 *fpa11)
{
- unsigned int nRc = 0;
- unsigned long flags;
- FPA11 *fpa11;
- save_flags(flags); sti();
-
- fpa11 = GET_FPA11();
-
- if (fpa11->initflag == 0) /* good place for __builtin_expect */
+ if (unlikely(fpa11->initflag == 0))
{
resetFPA11();
SetRoundingMode(ROUND_TO_NEAREST);
SetRoundingPrecision(ROUND_EXTENDED);
fpa11->initflag = 1;
}
+}
- if (TEST_OPCODE(opcode,MASK_CPRT))
- {
- /* Emulate conversion opcodes. */
- /* Emulate register transfer opcodes. */
- /* Emulate comparison opcodes. */
- nRc = EmulateCPRT(opcode);
- }
- else if (TEST_OPCODE(opcode,MASK_CPDO))
- {
- /* Emulate monadic arithmetic opcodes. */
- /* Emulate dyadic arithmetic opcodes. */
- nRc = EmulateCPDO(opcode);
- }
- else if (TEST_OPCODE(opcode,MASK_CPDT))
- {
- /* Emulate load/store opcodes. */
- /* Emulate load/store multiple opcodes. */
- nRc = EmulateCPDT(opcode);
- }
- else
+/* Emulate the instruction in the opcode. */
+unsigned int EmulateAll(unsigned int opcode)
+{
+ unsigned int nRc = 1, code;
+ unsigned long flags;
+ FPA11 *fpa11;
+
+ save_flags(flags); sti();
+
+ code = opcode & 0x00000f00;
+ if (code == 0x00000100 || code == 0x00000200)
{
- /* Invalid instruction detected. Return FALSE. */
- nRc = 0;
+ /* For coprocessor 1 or 2 (FPA11) */
+ code = opcode & 0x0e000000;
+ if (code == 0x0e000000)
+ {
+ if (opcode & 0x00000010)
+ {
+ /* Emulate conversion opcodes. */
+ /* Emulate register transfer opcodes. */
+ /* Emulate comparison opcodes. */
+ nRc = EmulateCPRT(opcode);
+ }
+ else
+ {
+ /* Emulate monadic arithmetic opcodes. */
+ /* Emulate dyadic arithmetic opcodes. */
+ nRc = EmulateCPDO(opcode);
+ }
+ }
+ else if (code == 0x0c000000)
+ {
+ /* Emulate load/store opcodes. */
+ /* Emulate load/store multiple opcodes. */
+ nRc = EmulateCPDT(opcode);
+ }
+ else
+ {
+ /* Invalid instruction detected. Return FALSE. */
+ nRc = 0;
+ }
}
restore_flags(flags);
fpa11->fType[Fn] = typeDouble;
get_user(p[0], &pMem[1]);
get_user(p[1], &pMem[0]); /* sign & exponent */
-}
+}
static inline
void loadExtended(const unsigned int Fn,const unsigned int *pMem)
get_user(p[0], &pMem[0]); /* sign & exponent */
get_user(p[1], &pMem[2]); /* ls bits */
get_user(p[2], &pMem[1]); /* ms bits */
-}
+}
static inline
void loadMultiple(const unsigned int Fn,const unsigned int *pMem)
p = (unsigned int*)&(fpa11->fpreg[Fn]);
get_user(x, &pMem[0]);
fpa11->fType[Fn] = (x >> 14) & 0x00000003;
-
+
switch (fpa11->fType[Fn])
{
case typeSingle:
get_user(p[1], &pMem[1]); /* double msw */
p[2] = 0; /* empty */
}
- break;
-
+ break;
+
case typeExtended:
{
get_user(p[1], &pMem[2]);
get_user(p[2], &pMem[1]); /* msw */
- p[0] = (x & 0x80003fff);
+ p[0] = (x & 0x80003fff);
}
break;
}
void storeSingle(const unsigned int Fn,unsigned int *pMem)
{
FPA11 *fpa11 = GET_FPA11();
- float32 val;
- register unsigned int *p = (unsigned int*)&val;
-
+ union
+ {
+ float32 f;
+ unsigned int i[1];
+ } val;
+
switch (fpa11->fType[Fn])
{
- case typeDouble:
- val = float64_to_float32(fpa11->fpreg[Fn].fDouble);
+ case typeDouble:
+ val.f = float64_to_float32(fpa11->fpreg[Fn].fDouble);
break;
- case typeExtended:
- val = floatx80_to_float32(fpa11->fpreg[Fn].fExtended);
+ case typeExtended:
+ val.f = floatx80_to_float32(fpa11->fpreg[Fn].fExtended);
break;
- default: val = fpa11->fpreg[Fn].fSingle;
+ default: val.f = fpa11->fpreg[Fn].fSingle;
}
-
- put_user(p[0], pMem);
-}
+
+ put_user(val.i[0], pMem);
+}
static inline
void storeDouble(const unsigned int Fn,unsigned int *pMem)
{
FPA11 *fpa11 = GET_FPA11();
- float64 val;
- register unsigned int *p = (unsigned int*)&val;
+ union
+ {
+ float64 f;
+ unsigned int i[2];
+ } val;
switch (fpa11->fType[Fn])
{
- case typeSingle:
- val = float32_to_float64(fpa11->fpreg[Fn].fSingle);
+ case typeSingle:
+ val.f = float32_to_float64(fpa11->fpreg[Fn].fSingle);
break;
case typeExtended:
- val = floatx80_to_float64(fpa11->fpreg[Fn].fExtended);
+ val.f = floatx80_to_float64(fpa11->fpreg[Fn].fExtended);
break;
- default: val = fpa11->fpreg[Fn].fDouble;
+ default: val.f = fpa11->fpreg[Fn].fDouble;
}
- put_user(p[1], &pMem[0]); /* msw */
- put_user(p[0], &pMem[1]); /* lsw */
-}
+
+ put_user(val.i[1], &pMem[0]); /* msw */
+ put_user(val.i[0], &pMem[1]); /* lsw */
+}
static inline
void storeExtended(const unsigned int Fn,unsigned int *pMem)
{
FPA11 *fpa11 = GET_FPA11();
- floatx80 val;
- register unsigned int *p = (unsigned int*)&val;
-
+ union
+ {
+ floatx80 f;
+ unsigned int i[3];
+ } val;
+
switch (fpa11->fType[Fn])
{
- case typeSingle:
- val = float32_to_floatx80(fpa11->fpreg[Fn].fSingle);
+ case typeSingle:
+ val.f = float32_to_floatx80(fpa11->fpreg[Fn].fSingle);
break;
- case typeDouble:
- val = float64_to_floatx80(fpa11->fpreg[Fn].fDouble);
+ case typeDouble:
+ val.f = float64_to_floatx80(fpa11->fpreg[Fn].fDouble);
break;
- default: val = fpa11->fpreg[Fn].fExtended;
+ default: val.f = fpa11->fpreg[Fn].fExtended;
}
-
- put_user(p[0], &pMem[0]); /* sign & exp */
- put_user(p[1], &pMem[2]);
- put_user(p[2], &pMem[1]); /* msw */
-}
+
+ put_user(val.i[0], &pMem[0]); /* sign & exp */
+ put_user(val.i[1], &pMem[2]);
+ put_user(val.i[2], &pMem[1]); /* msw */
+}
static inline
void storeMultiple(const unsigned int Fn,unsigned int *pMem)
{
FPA11 *fpa11 = GET_FPA11();
register unsigned int nType, *p;
-
+
p = (unsigned int*)&(fpa11->fpreg[Fn]);
nType = fpa11->fType[Fn];
-
+
switch (nType)
{
case typeSingle:
put_user(p[1], &pMem[1]); /* double msw */
put_user(nType << 14, &pMem[0]);
}
- break;
-
+ break;
+
case typeExtended:
{
put_user(p[2], &pMem[1]); /* msw */
case TRANSFER_EXTENDED: loadExtended(getFd(opcode),pAddress); break;
default: nRc = 0;
}
-
+
if (write_back) writeRegister(getRn(opcode),(unsigned int)pFinal);
return nRc;
}
{
unsigned int *pBase, *pAddress, *pFinal, nRc = 1,
write_back = WRITE_BACK(opcode);
-
+
//printk("PerformSTF(0x%08x), Fd = 0x%08x\n",opcode,getFd(opcode));
SetRoundingMode(ROUND_TO_NEAREST);
-
+
pBase = (unsigned int*)readRegister(getRn(opcode));
if (REG_PC == getRn(opcode))
{
case TRANSFER_EXTENDED: storeExtended(getFd(opcode),pAddress); break;
default: nRc = 0;
}
-
+
if (write_back) writeRegister(getRn(opcode),(unsigned int)pFinal);
return nRc;
}
{
unsigned int i, Fd, *pBase, *pAddress, *pFinal,
write_back = WRITE_BACK(opcode);
-
+
pBase = (unsigned int*)readRegister(getRn(opcode));
if (REG_PC == getRn(opcode))
{
pBase += 2;
write_back = 0;
}
-
+
pFinal = pBase;
if (BIT_UP_SET(opcode))
pFinal += getOffset(opcode);
unsigned int nRc = 0;
//printk("EmulateCPDT(0x%08x)\n",opcode);
-
+
if (LDF_OP(opcode))
{
nRc = PerformLDF(opcode);
else if (STF_OP(opcode))
{
nRc = PerformSTF(opcode);
- }
+ }
else if (SFM_OP(opcode))
{
nRc = PerformSFM(opcode);
{
nRc = 0;
}
-
+
return nRc;
}
#endif
int cpu_idle(void *unused)
{
while(1) {
- current->counter = -100;
schedule();
}
}
maxlvt = get_maxlvt();
+ /*
+ * Masking an LVT entry on a P6 can trigger a local APIC error
+ * if the vector is zero. Mask LVTERR first to prevent this.
+ */
+ if (maxlvt >= 3) {
+ v = ERROR_APIC_VECTOR; /* any non-zero vector will do */
+ apic_write_around(APIC_LVTERR, v | APIC_LVT_MASKED);
+ }
/*
* Careful: we have to set masks only first to deassert
* any level-triggered sources.
apic_write_around(APIC_LVT0, v | APIC_LVT_MASKED);
v = apic_read(APIC_LVT1);
apic_write_around(APIC_LVT1, v | APIC_LVT_MASKED);
- if (maxlvt >= 3) {
- v = apic_read(APIC_LVTERR);
- apic_write_around(APIC_LVTERR, v | APIC_LVT_MASKED);
- }
if (maxlvt >= 4) {
v = apic_read(APIC_LVTPC);
apic_write_around(APIC_LVTPC, v | APIC_LVT_MASKED);
apic_write_around(APIC_LVTERR, APIC_LVT_MASKED);
if (maxlvt >= 4)
apic_write_around(APIC_LVTPC, APIC_LVT_MASKED);
+ apic_write(APIC_ESR, 0);
+ v = apic_read(APIC_ESR);
}
void __init connect_bsp_APIC(void)
l &= ~MSR_IA32_APICBASE_BASE;
l |= MSR_IA32_APICBASE_ENABLE | APIC_DEFAULT_PHYS_BASE;
wrmsr(MSR_IA32_APICBASE, l, h);
+ apic_write(APIC_LVTERR, ERROR_APIC_VECTOR | APIC_LVT_MASKED);
apic_write(APIC_ID, apic_pm_state.apic_id);
apic_write(APIC_DFR, apic_pm_state.apic_dfr);
apic_write(APIC_LDR, apic_pm_state.apic_ldr);
apic_write(APIC_SPIV, apic_pm_state.apic_spiv);
apic_write(APIC_LVT0, apic_pm_state.apic_lvt0);
apic_write(APIC_LVT1, apic_pm_state.apic_lvt1);
+ apic_write(APIC_LVTPC, apic_pm_state.apic_lvtpc);
+ apic_write(APIC_LVTT, apic_pm_state.apic_lvtt);
+ apic_write(APIC_TDCR, apic_pm_state.apic_tdcr);
+ apic_write(APIC_TMICT, apic_pm_state.apic_tmict);
apic_write(APIC_ESR, 0);
apic_read(APIC_ESR);
apic_write(APIC_LVTERR, apic_pm_state.apic_lvterr);
apic_write(APIC_ESR, 0);
apic_read(APIC_ESR);
- apic_write(APIC_LVTPC, apic_pm_state.apic_lvtpc);
- apic_write(APIC_LVTT, apic_pm_state.apic_lvtt);
- apic_write(APIC_TDCR, apic_pm_state.apic_tdcr);
- apic_write(APIC_TMICT, apic_pm_state.apic_tmict);
__restore_flags(flags);
if (apic_pm_state.perfctr_pmdev)
pm_send(apic_pm_state.perfctr_pmdev, PM_RESUME, data);
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
while (1) {
void (*idle)(void) = pm_idle;
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
-
while (1) {
#ifdef CONFIG_SMP
struct file_operations *
hwgraph_cdevsw_get(devfs_handle_t de)
{
- return(devfs_get_ops(de));
+ struct file_operations *fops = devfs_get_ops(de);
+
+ devfs_put_ops(de); /* FIXME: this may need to be moved to callers */
+ return(fops);
}
/*
* hwgraph_bdevsw_get - returns the fops of the given devfs entry.
*/
-struct file_operations *
+struct file_operations * /* FIXME: shouldn't this be a blkdev? */
hwgraph_bdevsw_get(devfs_handle_t de)
{
- return(devfs_get_ops(de));
+ struct file_operations *fops = devfs_get_ops(de);
+
+ devfs_put_ops(de); /* FIXME: this may need to be moved to callers */
+ return(fops);
}
/*
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
idle();
}
{
/* endless idle loop with no priority at all */
current->nice = 20;
- current->counter = -100;
init_idle();
while (1) {
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
while (1) {
while (!current->need_resched)
if (wait_available)
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
while (1) {
while (!current->need_resched) {
printk("lsr = %d (jiff=%lu)...", lsr, jiffies);
#endif
current->state = TASK_INTERRUPTIBLE;
-/* current->counter = 0; make us low-priority */
+/* current->dyn_prio = 0; make us low-priority */
schedule_timeout(char_time);
if (signal_pending(current))
break;
printk("lsr = %d (jiff=%lu)...", lsr, jiffies);
#endif
current->state = TASK_INTERRUPTIBLE;
-/* current->counter = 0; make us low-priority */
+/* current->dyn_prio = 0; make us low-priority */
schedule_timeout(char_time);
if (signal_pending(current))
break;
/* endless loop with no priority at all */
current->nice = 20;
- current->counter = -100;
init_idle();
for (;;) {
#ifdef CONFIG_SMP
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
wait_psw.mask = _WAIT_PSW_MASK;
wait_psw.addr = (unsigned long) &&idle_wakeup | 0x80000000L;
while(1) {
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
wait_psw.mask = _WAIT_PSW_MASK;
wait_psw.addr = (unsigned long) &&idle_wakeup;
while(1) {
/* endless idle loop with no priority at all */
init_idle();
current->nice = 20;
- current->counter = -100;
while (1) {
if (hlt_counter) {
/* endless idle loop with no priority at all */
current->nice = 20;
- current->counter = -100;
init_idle();
for (;;) {
{
/* endless idle loop with no priority at all */
current->nice = 20;
- current->counter = -100;
init_idle();
while(1) {
/* endless idle loop with no priority at all */
current->nice = 20;
- current->counter = -100;
init_idle();
for (;;) {
int cpu_idle(void)
{
current->nice = 20;
- current->counter = -100;
init_idle();
while(1) {
static int __init acornscsi_init(void)
{
- acornscsi_template.module = THIS_MODULE;
- scsi_register_module(MODULE_SCSI_HA, &acornscsi_template);
+ scsi_register_host(&acornscsi_template);
if (acornscsi_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &acornscsi_template);
+ scsi_unregister_host(&acornscsi_template);
return -ENODEV;
}
static void __exit acornscsi_exit(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &acornscsi_template);
+ scsi_unregister_host(&acornscsi_template);
}
module_init(acornscsi_init);
static int __init init_arxe_scsi_driver(void)
{
arxescsi_template.module = THIS_MODULE;
- scsi_register_module(MODULE_SCSI_HA, &arxescsi_template);
+ scsi_register_host(&arxescsi_template);
if (arxescsi_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &arxescsi_template);
+ scsi_unregister_host(&arxescsi_template);
return -ENODEV;
}
static void __exit exit_arxe_scsi_driver(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &arxescsi_template);
+ scsi_unregister_host(&arxescsi_template);
}
module_init(init_arxe_scsi_driver);
static int __init cumanascsi_init(void)
{
- scsi_register_module(MODULE_SCSI_HA, &cumanascsi_template);
+ scsi_register_host(&cumanascsi_template);
if (cumanascsi_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &cumanascsi_template);
+ scsi_unregister_host(&cumanascsi_template);
return -ENODEV;
}
static void __exit cumanascsi_exit(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &cumanascsi_template);
+ scsi_unregister_host(&cumanascsi_template);
}
module_init(cumanascsi_init);
static int __init cumanascsi2_init(void)
{
- scsi_register_module(MODULE_SCSI_HA, &cumanascsi2_template);
+ scsi_register_host(&cumanascsi2_template);
if (cumanascsi2_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &cumanascsi2_template);
+ scsi_unregister_host(&cumanascsi2_template);
return -ENODEV;
}
static void __exit cumanascsi2_exit(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &cumanascsi2_template);
+ scsi_unregister_host(&cumanascsi2_template);
}
module_init(cumanascsi2_init);
static int __init ecoscsi_init(void)
{
- scsi_register_module(MODULE_SCSI_HA, &ecoscsi_template);
+ scsi_register_host(&ecoscsi_template);
if (ecoscsi_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &ecoscsi_template);
+ scsi_unregister_host(&ecoscsi_template);
return -ENODEV;
}
static void __exit ecoscsi_exit(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &ecoscsi_template);
+ scsi_unregister_host(&ecoscsi_template);
}
module_init(ecoscsi_init);
static int __init eesox_init(void)
{
- scsi_register_module(MODULE_SCSI_HA, &eesox_template);
+ scsi_register_host(&eesox_template);
if (eesox_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &eesox_template);
+ scsi_unregister_host(&eesox_template);
return -ENODEV;
}
static void __exit eesox_exit(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &eesox_template);
+ scsi_unregister_host(&eesox_template);
}
module_init(eesox_init);
static int __init oakscsi_init(void)
{
- scsi_register_module(MODULE_SCSI_HA, &oakscsi_template);
+ scsi_register_host(&oakscsi_template);
if (oakscsi_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &oakscsi_template);
+ scsi_unregister_host(&oakscsi_template);
return -ENODEV;
}
static void __exit oakscsi_exit(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &oakscsi_template);
+ scsi_unregister_host(&oakscsi_template);
}
module_init(oakscsi_init);
static int __init powertecscsi_init(void)
{
- scsi_register_module(MODULE_SCSI_HA, &powertecscsi_template);
+ scsi_register_host(&powertecscsi_template);
if (powertecscsi_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &powertecscsi_template);
+ scsi_unregister_host(&powertecscsi_template);
return -ENODEV;
}
static void __exit powertecscsi_exit(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &powertecscsi_template);
+ scsi_unregister_host(&powertecscsi_template);
}
module_init(powertecscsi_init);
struct gendisk *g;
u64 ullval = 0;
int intval, *iptr;
+ unsigned short usval;
if (!dev)
return -EINVAL;
return put_user(iptr[MINOR(dev)], (long *) arg);
case BLKSECTGET:
- if ((q = blk_get_queue(dev)))
- return put_user(q->max_sectors, (unsigned short *)arg);
- return -EINVAL;
+ if ((q = blk_get_queue(dev)) == NULL)
+ return -EINVAL;
+
+ usval = q->max_sectors;
+ blk_put_queue(q);
+ return put_user(usval, (unsigned short *)arg);
case BLKFLSBUF:
if (!capable(CAP_SYS_ADMIN))
case BLKBSZGET:
/* get the logical block size (cf. BLKSSZGET) */
- intval = BLOCK_SIZE;
- if (blksize_size[MAJOR(dev)])
- intval = blksize_size[MAJOR(dev)][MINOR(dev)];
+ intval = block_size(dev);
return put_user(intval, (int *) arg);
case BLKBSZSET:
err = -ENOTTY;
}
-#if 0
blk_put_queue(q);
-#endif
return err;
}
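The BLKSECTGET hunk above follows a copy-then-release pattern: snapshot the field while the queue reference is held, drop the reference, and only then hand the copy to `put_user()`, which may sleep. A minimal stand-alone sketch of that ordering, using a hypothetical refcounted type (`toy_queue` and its helpers are illustration only, not the block layer API):

```c
#include <assert.h>

/* Hypothetical refcounted object standing in for request_queue_t. */
struct toy_queue {
	int refs;
	unsigned short max_sectors;
};

static void toy_get_queue(struct toy_queue *q) { q->refs++; }
static void toy_put_queue(struct toy_queue *q) { q->refs--; }

/*
 * Snapshot the field while the reference is held, release it, and
 * only then return the copy for a possibly-blocking consumer --
 * the same ordering the BLKSECTGET change introduces.
 */
static unsigned short toy_read_max_sectors(struct toy_queue *q)
{
	unsigned short usval;

	toy_get_queue(q);
	usval = q->max_sectors;
	toy_put_queue(q);
	return usval;
}
```

The point of the reordering is simply that the reference is never held across the copy-to-userspace step.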
*/
inline int elv_rq_merge_ok(struct request *rq, struct bio *bio)
{
- if (!(rq->flags & REQ_CMD))
+ if (!rq_mergeable(rq))
return 0;
/*
*/
if (bio_data_dir(bio) != rq_data_dir(rq))
return 0;
- if (rq->flags & REQ_NOMERGE)
- return 0;
/*
* same device and no special stuff set, merge is ok
inline int elv_try_merge(struct request *__rq, struct bio *bio)
{
unsigned int count = bio_sectors(bio);
-
- if (!elv_rq_merge_ok(__rq, bio))
- return ELEVATOR_NO_MERGE;
+ int ret = ELEVATOR_NO_MERGE;
/*
* we can merge and sequence is ok, check if it's possible
*/
- if (__rq->sector + __rq->nr_sectors == bio->bi_sector) {
- return ELEVATOR_BACK_MERGE;
- } else if (__rq->sector - count == bio->bi_sector) {
- __rq->elevator_sequence -= count;
- return ELEVATOR_FRONT_MERGE;
+ if (elv_rq_merge_ok(__rq, bio)) {
+ if (__rq->sector + __rq->nr_sectors == bio->bi_sector) {
+ ret = ELEVATOR_BACK_MERGE;
+ } else if (__rq->sector - count == bio->bi_sector) {
+ __rq->elevator_sequence -= count;
+ ret = ELEVATOR_FRONT_MERGE;
+ }
}
- return ELEVATOR_NO_MERGE;
+ return ret;
}
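The restructured `elv_try_merge()` can be illustrated in isolation: check mergeability once, then classify the bio as a back merge (it starts exactly where the request ends) or a front merge (it ends exactly where the request starts). This is a simplified sketch with hypothetical toy types, not the kernel's `struct request`/`struct bio`:

```c
#include <assert.h>

enum { NO_MERGE = 0, BACK_MERGE = 1, FRONT_MERGE = 2 };

/* Simplified stand-ins for the kernel types -- illustration only. */
struct toy_rq  { unsigned long sector, nr_sectors; int mergeable; };
struct toy_bio { unsigned long sector, nr_sectors; };

/*
 * Mirrors the flow of the rewritten elv_try_merge(): a single
 * mergeability gate, then positional classification of the bio.
 */
static int toy_try_merge(const struct toy_rq *rq, const struct toy_bio *bio)
{
	int ret = NO_MERGE;

	if (rq->mergeable) {
		if (rq->sector + rq->nr_sectors == bio->sector)
			ret = BACK_MERGE;	/* bio continues the request */
		else if (rq->sector - bio->nr_sectors == bio->sector)
			ret = FRONT_MERGE;	/* bio immediately precedes it */
	}
	return ret;
}
```

Folding the `rq_mergeable()` test into one gate is what lets the real code drop the separate `REQ_NOMERGE` check.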
int elevator_linus_merge(request_queue_t *q, struct request **req,
break;
if (!(__rq->flags & REQ_CMD))
continue;
- if (__rq->elevator_sequence < 0)
+ if (__rq->elevator_sequence < bio_sectors(bio))
break;
if (!*req && bio_rq_in_between(bio, __rq, &q->queue_head))
blk_queue_max_sectors(q, MAX_SECTORS);
blk_queue_hardsect_size(q, 512);
+ /*
+ * by default assume old behaviour and bounce for any highmem page
+ */
+ blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
+
init_waitqueue_head(&q->queue_wait);
}
if (req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
req->flags |= REQ_NOMERGE;
+ q->last_merge = NULL;
return 0;
}
if (req->nr_hw_segments + nr_hw_segs > q->max_hw_segments) {
req->flags |= REQ_NOMERGE;
+ q->last_merge = NULL;
return 0;
}
return 0;
/* Merge is OK... */
- if (q->last_merge == &next->queuelist)
- q->last_merge = NULL;
-
req->nr_phys_segments = total_phys_segments;
req->nr_hw_segments = total_hw_segments;
return 1;
q->plug_tq.data = q;
q->queue_flags = (1 << QUEUE_FLAG_CLUSTER);
q->queue_lock = lock;
+ q->last_merge = NULL;
- /*
- * by default assume old behaviour and bounce for any highmem page
- */
- blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
-
blk_queue_segment_boundary(q, 0xffffffff);
blk_queue_make_request(q, __make_request);
if (!rq && (gfp_mask & __GFP_WAIT))
rq = get_request_wait(q, rw);
+ if (rq) {
+ rq->flags = 0;
+ rq->buffer = NULL;
+ rq->bio = rq->biotail = NULL;
+ rq->waiting = NULL;
+ }
return rq;
}
drive_stat_acct(req, req->nr_sectors, 1);
/*
- * debug stuff...
+ * it's a bug to let this rq preempt an already started request
*/
- if (insert_here == &q->queue_head) {
- struct request *__rq = __elv_next_request(q);
-
- BUG_ON(__rq && (__rq->flags & REQ_STARTED));
- }
+ if (insert_here->next != &q->queue_head)
+ BUG_ON(list_entry_rq(insert_here->next)->flags & REQ_STARTED);
/*
* elevator indicated where it wants this request to be
void blkdev_release_request(struct request *req)
{
struct request_list *rl = req->rl;
+ request_queue_t *q = req->q;
req->rq_status = RQ_INACTIVE;
req->q = NULL;
req->rl = NULL;
+ if (q) {
+ if (q->last_merge == &req->queuelist)
+ q->last_merge = NULL;
+ }
+
/*
* Request may not have originated from ll_rw_blk. if not,
* it didn't come out of our reserved rq pools
/*
* Has to be called with the request spinlock acquired
*/
-static void attempt_merge(request_queue_t *q, struct request *req)
+static void attempt_merge(request_queue_t *q, struct request *req,
+ struct request *next)
{
- struct request *next = blkdev_next_request(req);
-
- /*
- * not a rw command
- */
- if (!(next->flags & REQ_CMD))
+ if (!rq_mergeable(req) || !rq_mergeable(next))
return;
/*
if (req->sector + req->nr_sectors != next->sector)
return;
- /*
- * don't touch NOMERGE rq, or one that has been started by driver
- */
- if (next->flags & (REQ_NOMERGE | REQ_STARTED))
- return;
-
if (rq_data_dir(req) != rq_data_dir(next)
|| req->rq_dev != next->rq_dev
|| req->nr_sectors + next->nr_sectors > q->max_sectors
return;
/*
- * If we are not allowed to merge these requests, then
- * return. If we are allowed to merge, then the count
- * will have been updated to the appropriate number,
- * and we shouldn't do it here too.
+ * If we are allowed to merge, then append bio list
+ * from next to rq and release next. merge_requests_fn
+ * will have updated segment counts, update sector
+ * counts here.
*/
if (q->merge_requests_fn(q, req, next)) {
q->elevator.elevator_merge_req_fn(req, next);
static inline void attempt_back_merge(request_queue_t *q, struct request *rq)
{
- if (&rq->queuelist != q->queue_head.prev)
- attempt_merge(q, rq);
+ struct list_head *next = rq->queuelist.next;
+
+ if (next != &q->queue_head)
+ attempt_merge(q, rq, list_entry_rq(next));
}
static inline void attempt_front_merge(request_queue_t *q, struct request *rq)
struct list_head *prev = rq->queuelist.prev;
if (prev != &q->queue_head)
- attempt_merge(q, blkdev_entry_to_request(prev));
-}
-
-static inline void __blk_attempt_remerge(request_queue_t *q, struct request *rq)
-{
- if (rq->queuelist.next != &q->queue_head)
- attempt_merge(q, rq);
+ attempt_merge(q, list_entry_rq(prev), rq);
}
/**
unsigned long flags;
spin_lock_irqsave(q->queue_lock, flags);
- __blk_attempt_remerge(q, rq);
+ attempt_back_merge(q, rq);
spin_unlock_irqrestore(q->queue_lock, flags);
}
el_ret = elevator->elevator_merge_fn(q, &req, bio);
switch (el_ret) {
case ELEVATOR_BACK_MERGE:
- BUG_ON(req->flags & REQ_STARTED);
- BUG_ON(req->flags & REQ_NOMERGE);
+ BUG_ON(!rq_mergeable(req));
if (!q->back_merge_fn(q, req, bio))
break;
goto out;
case ELEVATOR_FRONT_MERGE:
- BUG_ON(req->flags & REQ_STARTED);
- BUG_ON(req->flags & REQ_NOMERGE);
+ BUG_ON(!rq_mergeable(req));
if (!q->front_merge_fn(q, req, bio))
break;
int minor = MINOR(bio->bi_dev);
request_queue_t *q;
sector_t minorsize = 0;
- int nr_sectors = bio_sectors(bio);
+ int ret, nr_sectors = bio_sectors(bio);
/* Test device or partition size, when known. */
if (blk_size[major])
*/
blk_partition_remap(bio);
- } while (q->make_request_fn(q, bio));
+ ret = q->make_request_fn(q, bio);
+ blk_put_queue(q);
+
+ } while (ret);
}
/*
inline void blk_recalc_rq_sectors(struct request *rq, int nsect)
{
- rq->hard_sector += nsect;
- rq->hard_nr_sectors -= nsect;
- rq->sector = rq->hard_sector;
- rq->nr_sectors = rq->hard_nr_sectors;
+ if (rq->flags & REQ_CMD) {
+ rq->hard_sector += nsect;
+ rq->hard_nr_sectors -= nsect;
+ rq->sector = rq->hard_sector;
+ rq->nr_sectors = rq->hard_nr_sectors;
- rq->current_nr_sectors = bio_iovec(rq->bio)->bv_len >> 9;
- rq->hard_cur_sectors = rq->current_nr_sectors;
+ rq->current_nr_sectors = bio_iovec(rq->bio)->bv_len >> 9;
+ rq->hard_cur_sectors = rq->current_nr_sectors;
- /*
- * if total number of sectors is less than the first segment
- * size, something has gone terribly wrong
- */
- if (rq->nr_sectors < rq->current_nr_sectors) {
- printk("blk: request botched\n");
- rq->nr_sectors = rq->current_nr_sectors;
+ /*
+ * if total number of sectors is less than the first segment
+ * size, something has gone terribly wrong
+ */
+ if (rq->nr_sectors < rq->current_nr_sectors) {
+ printk("blk: request botched\n");
+ rq->nr_sectors = rq->current_nr_sectors;
+ }
}
}
static inline int loop_get_bs(struct loop_device *lo)
{
- int bs = 0;
-
- if (blksize_size[MAJOR(lo->lo_device)])
- bs = blksize_size[MAJOR(lo->lo_device)][MINOR(lo->lo_device)];
- if (!bs)
- bs = BLOCK_SIZE;
-
- return bs;
+ return block_size(lo->lo_device);
}
static inline unsigned long loop_get_iv(struct loop_device *lo,
kdev_t lo_device;
int lo_flags = 0;
int error;
- int bs;
MOD_INC_USE_COUNT;
lo->old_gfp_mask = inode->i_mapping->gfp_mask;
inode->i_mapping->gfp_mask = GFP_NOIO;
- bs = 0;
- if (blksize_size[MAJOR(lo_device)])
- bs = blksize_size[MAJOR(lo_device)][MINOR(lo_device)];
- if (!bs)
- bs = BLOCK_SIZE;
-
- set_blocksize(dev, bs);
+ set_blocksize(dev, block_size(lo_device));
lo->lo_bio = lo->lo_biotail = NULL;
kernel_thread(loop_thread, lo, CLONE_FS | CLONE_FILES | CLONE_SIGHAND);
/* Register a slave for the master */
if (tty->driver.major == PTY_MASTER_MAJOR)
tty_register_devfs(&tty->link->driver,
- DEVFS_FL_CURRENT_OWNER |
- DEVFS_FL_NO_PERSISTENCE | DEVFS_FL_WAIT,
+ DEVFS_FL_CURRENT_OWNER | DEVFS_FL_WAIT,
tty->link->driver.minor_start +
MINOR(tty->device)-tty->driver.minor_start);
retval = 0;
if (raw_devices[minor].inuse++)
goto out;
- /*
- * Don't interfere with mounted devices: we cannot safely set
- * the blocksize on a device which is already mounted.
- */
-
- sector_size = 512;
- if (is_mounted(rdev)) {
- if (blksize_size[MAJOR(rdev)])
- sector_size = blksize_size[MAJOR(rdev)][MINOR(rdev)];
- } else
- sector_size = get_hardsect_size(rdev);
-
- set_blocksize(rdev, sector_size);
+ sector_size = get_hardsect_size(rdev);
raw_devices[minor].sector_size = sector_size;
-
for (sector_bits = 0; !(sector_size & 1); )
sector_size>>=1, sector_bits++;
raw_devices[minor].sector_bits = sector_bits;
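The `sector_bits` loop above is a log2 of the (power-of-two) hardware sector size; extracted as a stand-alone sketch (hypothetical helper name):

```c
#include <assert.h>

/*
 * Same idiom as the raw-device code above: shift a power-of-two
 * sector size right until the low bit is set, counting the shifts.
 */
static int toy_sector_bits(unsigned int sector_size)
{
	int bits = 0;

	while (!(sector_size & 1)) {
		sector_size >>= 1;
		bits++;
	}
	return bits;
}
```

So a 512-byte sector yields 9 bits, matching the usual `>> 9` sector shifts elsewhere in the block layer.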
set_bit(TTY_PTY_LOCK, &tty->flags); /* LOCK THE SLAVE */
minor -= driver->minor_start;
devpts_pty_new(driver->other->name_base + minor, MKDEV(driver->other->major, minor + driver->other->minor_start));
- tty_register_devfs(&pts_driver[major], DEVFS_FL_NO_PERSISTENCE,
+ tty_register_devfs(&pts_driver[major], DEVFS_FL_DEFAULT,
pts_driver[major].minor_start + minor);
noctty = 1;
goto init_dev_done;
#include "hptraid.h"
-static int read_disk_sb (int major, int minor, unsigned char *buffer,int bufsize)
+static int __init read_disk_sb(struct block_device *bdev,
+ struct highpoint_raid_conf *buf)
{
- int ret = -EINVAL;
- struct buffer_head *bh = NULL;
- kdev_t dev = MKDEV(major,minor);
-
- if (blksize_size[major]==NULL) /* device doesn't exist */
- return -EINVAL;
-
+ /* Superblock is at 9*512 bytes */
+ Sector sect;
+ unsigned char *p = read_dev_sector(bdev, 9, &sect);
- /* Superblock is at 4096+412 bytes */
- set_blocksize (dev, 4096);
- bh = bread (dev, 1, 4096);
-
-
- if (bh) {
- memcpy (buffer, bh->b_data, bufsize);
- } else {
- printk(KERN_ERR "hptraid: Error reading superblock.\n");
- goto abort;
+ if (p) {
+ memcpy(buf, p, 512);
+ put_dev_sector(&sect);
+ return 0;
}
- ret = 0;
-abort:
- if (bh)
- brelse (bh);
- return ret;
+ printk(KERN_ERR "hptraid: Error reading superblock.\n");
+ return -1;
}
static unsigned long maxsectors (int major,int minor)
return lba;
}
+static struct highpoint_raid_conf __initdata prom;
static void __init probedisk(int major, int minor,int device)
{
int i;
- struct highpoint_raid_conf *prom;
- static unsigned char block[4096];
- struct block_device *bdev;
-
- if (maxsectors(major,minor)==0)
+ struct block_device *bdev = bdget(MKDEV(major,minor));
+ struct gendisk *gd;
+
+ if (!bdev)
return;
-
- if (read_disk_sb(major,minor,(unsigned char*)&block,sizeof(block)))
- return;
-
- prom = (struct highpoint_raid_conf*)&block[512];
-
- if (prom->magic!= 0x5a7816f0)
- return;
- if (prom->type) {
+
+ if (blkdev_get(bdev,FMODE_READ|FMODE_WRITE,0,BDEV_RAW) < 0)
+ return;
+
+ if (maxsectors(major,minor)==0)
+ goto out;
+
+ if (read_disk_sb(bdev, &prom))
+ goto out;
+
+ if (prom.magic!= 0x5a7816f0)
+ goto out;
+ if (prom.type) {
printk(KERN_INFO "hptraid: only RAID0 is supported currently\n");
- return;
+ goto out;
}
- i = prom->disk_number;
+ i = prom.disk_number;
if (i<0)
- return;
+ goto out;
if (i>8)
- return;
+ goto out;
+
+ raid[device].disk[i].bdev = bdev;
+ /* This is supposed to prevent others from stealing our underlying disks */
+ /* now blank the /proc/partitions table for the wrong partition table,
+ so that scripts don't accidentally mount it and crash the kernel */
+ /* XXX: the 0 is an utter hack --hch */
+ gd=get_gendisk(MKDEV(major, 0));
+ if (gd!=NULL) {
+ int j;
+ for (j=1+(minor<<gd->minor_shift);j<((minor+1)<<gd->minor_shift);j++)
+ gd->part[j].nr_sects=0;
+ }
- bdev = bdget(MKDEV(major,minor));
- if (bdev && blkdev_get(bdev,FMODE_READ|FMODE_WRITE,0,BDEV_RAW) == 0) {
- int j=0;
- struct gendisk *gd;
- raid[device].disk[i].bdev = bdev;
- /* This is supposed to prevent others from stealing our underlying disks */
- /* now blank the /proc/partitions table for the wrong partition table,
- so that scripts don't accidentally mount it and crash the kernel */
- /* XXX: the 0 is an utter hack --hch */
- gd=get_gendisk(MKDEV(major, 0));
- if (gd!=NULL) {
- for (j=1+(minor<<gd->minor_shift);j<((minor+1)<<gd->minor_shift);j++)
- gd->part[j].nr_sects=0;
- }
- }
raid[device].disk[i].device = MKDEV(major,minor);
raid[device].disk[i].sectors = maxsectors(major,minor);
- raid[device].stride = (1<<prom->raid0_shift);
- raid[device].disks = prom->raid_disks;
- raid[device].sectors = prom->total_secs;
-
+ raid[device].stride = (1<<prom.raid0_shift);
+ raid[device].disks = prom.raid_disks;
+ raid[device].sectors = prom.total_secs;
+ return;
+out:
+ blkdev_put(bdev, BDEV_RAW);
}
static void __init fill_cutoff(int device)
#include <asm/io.h>
extern char *ide_xfer_verbose (byte xfer_rate);
+extern char *ide_dmafunc_verbose(ide_dma_action_t dmafunc);
/*
* Maximum number of interfaces per card
static int ide_build_sglist(ide_hwif_t *hwif, struct request *rq)
{
- struct buffer_head *bh;
+ request_queue_t *q = &hwif->drives[DEVICE_NR(rq->rq_dev) & 1].queue;
struct scatterlist *sg = hwif->sg_table;
- int nents = 0;
+ int nents = blk_rq_map_sg(q, rq, sg);
- if (rq->cmd == READ)
+ if (rq->q && nents > rq->nr_phys_segments)
+ printk("icside: received %d segments, built %d\n",
+ rq->nr_phys_segments, nents);
+
+ if (rq_data_dir(rq) == READ)
hwif->sg_dma_direction = PCI_DMA_FROMDEVICE;
else
hwif->sg_dma_direction = PCI_DMA_TODEVICE;
- bh = rq->bh;
- do {
- unsigned char *virt_addr = bh->b_data;
- unsigned int size = bh->b_size;
-
- while ((bh = bh->b_reqnext) != NULL) {
- if ((virt_addr + size) != (unsigned char *)bh->b_data)
- break;
- size += bh->b_size;
- }
- memset(&sg[nents], 0, sizeof(*sg));
- sg[nents].address = virt_addr;
- sg[nents].length = size;
- nents++;
- } while (bh != NULL);
return pci_map_sg(NULL, sg, nents, hwif->sg_dma_direction);
}
pci_unmap_sg(NULL, sg, nents, HWIF(drive)->sg_dma_direction);
}
+/*
+ * Configure the IOMD to give the appropriate timings for the transfer
+ * mode being requested. We take the advice of the ATA standards, and
+ * calculate the cycle time based on the transfer mode, and the EIDE
+ * MW DMA specs that the drive provides in the IDENTIFY command.
+ *
+ * We have the following IOMD DMA modes to choose from:
+ *
+ * Type Active Recovery Cycle
+ * A 250 (250) 312 (550) 562 (800)
+ * B 187 250 437
+ * C 125 (125) 125 (375) 250 (500)
+ * D 62 125 187
+ *
+ * (figures in brackets are actual measured timings)
+ *
+ * However, we also need to take care of the read/write active and
+ * recovery timings:
+ *
+ * Read Write
+ * Mode Active -- Recovery -- Cycle IOMD type
+ * MW0 215 50 215 480 A
+ * MW1 80 50 50 150 C
+ * MW2 70 25 25 120 C
+ */
static int
icside_config_if(ide_drive_t *drive, int xfer_mode)
{
int func = ide_dma_off;
+ int cycle_time = 0, use_dma_info = 0;
switch (xfer_mode) {
- case XFER_MW_DMA_2:
- /*
- * The cycle time is limited to 250ns by the r/w
- * pulse width (90ns), however we should still
- * have a maximum burst transfer rate of 8MB/s.
- */
- drive->drive_data = 250;
- break;
-
- case XFER_MW_DMA_1:
- drive->drive_data = 250;
- break;
+ case XFER_MW_DMA_2: cycle_time = 250; use_dma_info = 1; break;
+ case XFER_MW_DMA_1: cycle_time = 250; use_dma_info = 1; break;
+ case XFER_MW_DMA_0: cycle_time = 480; break;
+ }
- case XFER_MW_DMA_0:
- drive->drive_data = 480;
- break;
+ /*
+ * If we're going to be doing MW_DMA_1 or MW_DMA_2, we should
+ * take care to note the values in the ID...
+ */
+ if (use_dma_info && drive->id->eide_dma_time > cycle_time)
+ cycle_time = drive->id->eide_dma_time;
- default:
- drive->drive_data = 0;
- break;
- }
+ drive->drive_data = cycle_time;
if (!drive->init_speed)
- drive->init_speed = (byte) xfer_mode;
+ drive->init_speed = xfer_mode;
- if (drive->drive_data &&
- ide_config_drive_speed(drive, (byte) xfer_mode) == 0)
+ if (cycle_time && ide_config_drive_speed(drive, xfer_mode) == 0)
func = ide_dma_on;
else
drive->drive_data = 480;
printk("%s: %s selected (peak %dMB/s)\n", drive->name,
ide_xfer_verbose(xfer_mode), 2000 / drive->drive_data);
- drive->current_speed = (byte) xfer_mode;
+ drive->current_speed = xfer_mode;
return func;
}
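The cycle-time selection in `icside_config_if()` can be sketched on its own (hypothetical helper, assuming the MW DMA mode numbering from the comment above): pick the base cycle for the mode, then stretch it to the drive's advertised EIDE MW DMA time when the drive needs a slower cycle.

```c
#include <assert.h>

/*
 * Base cycle times (ns) per MW DMA mode, as in the table above;
 * modes 1 and 2 additionally consult the drive's IDENTIFY data.
 */
static int toy_cycle_time(int mwdma_mode, int eide_dma_time)
{
	int cycle_time = 0, use_dma_info = 0;

	switch (mwdma_mode) {
	case 2: cycle_time = 250; use_dma_info = 1; break;
	case 1: cycle_time = 250; use_dma_info = 1; break;
	case 0: cycle_time = 480; break;
	}

	/* Honour the drive's own timing if it needs a slower cycle. */
	if (use_dma_info && eide_dma_time > cycle_time)
		cycle_time = eide_dma_time;

	return cycle_time;
}
```

This is why a drive reporting a 300 ns `eide_dma_time` ends up clamped to 300 ns even in MW_DMA_2.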
* This is setup to be called as an extern for future support
* to other special driver code.
*/
-static int check_drive_lists(ide_drive_t *drive, int good_bad)
+static int icside_check_drive_lists(ide_drive_t *drive, int good_bad)
{
struct hd_driveid *id = drive->id;
/*
* Consult the list of known "bad" drives
*/
- if (check_drive_lists(drive, 0)) {
+ if (icside_check_drive_lists(drive, 0)) {
func = ide_dma_off;
goto out;
}
/*
* Consult the list of known "good" drives
*/
- if (check_drive_lists(drive, 1)) {
+ if (icside_check_drive_lists(drive, 1)) {
if (id->eide_dma_time > 150)
goto out;
xfer_mode = XFER_MW_DMA_1;
case ide_dma_off_quietly:
case ide_dma_on:
+ /*
+ * We don't need any bouncing. Yes, this looks the
+ * wrong way around, but it is correct.
+ */
+ blk_queue_bounce_limit(&drive->queue, BLK_BOUNCE_ANY);
drive->using_dma = (func == ide_dma_on);
return 0;
case ide_dma_bad_drive:
case ide_dma_good_drive:
- return check_drive_lists(drive, (func == ide_dma_good_drive));
+ return icside_check_drive_lists(drive, (func ==
+ ide_dma_good_drive));
case ide_dma_verbose:
return icside_dma_verbose(drive);
printk(" -- ERROR, unable to allocate DMA table\n");
return 0;
}
+
+int ide_release_dma(ide_hwif_t *hwif)
+{
+ if (hwif->sg_table) {
+ kfree(hwif->sg_table);
+ hwif->sg_table = NULL;
+ }
+ return 1;
+}
#endif
static ide_hwif_t *
#define _IDE_TIMING_H
/*
- * $Id: ide-timing.h,v 1.5 2001/01/15 21:48:56 vojtech Exp $
+ * $Id: ide-timing.h,v 1.6 2001/12/23 22:47:56 vojtech Exp $
*
- * Copyright (c) 1999-2000 Vojtech Pavlik
- *
- * Sponsored by SuSE
+ * Copyright (c) 1999-2001 Vojtech Pavlik
*/
/*
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* Should you need to contact me, the author, you can do so either by
- * e-mail - mail your message to <vojtech@suse.cz>, or by paper mail:
- * Vojtech Pavlik, Ucitelska 1576, Prague 8, 182 00 Czech Republic
+ * e-mail - mail your message to <vojtech@ucw.cz>, or by paper mail:
+ * Vojtech Pavlik, Simunkova 1594, Prague 8, 182 00 Czech Republic
*/
#include <linux/hdreg.h>
-#ifndef XFER_PIO_5
#define XFER_PIO_5 0x0d
#define XFER_UDMA_SLOW 0x4f
-#endif
struct ide_timing {
short mode;
};
/*
- * PIO 0-5, MWDMA 0-2 and UDMA 0-5 timings (in nanoseconds).
+ * PIO 0-5, MWDMA 0-2 and UDMA 0-6 timings (in nanoseconds).
* These were taken from ATA/ATAPI-6 standard, rev 0a, except
- * for PIO 5, which is a nonstandard extension.
+ * for PIO 5, which is a nonstandard extension and UDMA6, which
+ * is currently supported only by Maxtor drives.
*/
static struct ide_timing ide_timing[] = {
+ { XFER_UDMA_6, 0, 0, 0, 0, 0, 0, 0, 15 },
{ XFER_UDMA_5, 0, 0, 0, 0, 0, 0, 0, 20 },
{ XFER_UDMA_4, 0, 0, 0, 0, 0, 0, 0, 30 },
{ XFER_UDMA_3, 0, 0, 0, 0, 0, 0, 0, 45 },
#define EZ(v,unit) ((v)?ENOUGH(v,unit):0)
#define XFER_MODE 0xf0
+#define XFER_UDMA_133 0x48
#define XFER_UDMA_100 0x44
#define XFER_UDMA_66 0x42
#define XFER_UDMA 0x40
if ((map & XFER_UDMA) && (id->field_valid & 4)) { /* Want UDMA and UDMA bitmap valid */
+ if ((map & XFER_UDMA_133) == XFER_UDMA_133)
+ if ((best = (id->dma_ultra & 0x0040) ? XFER_UDMA_6 : 0)) return best;
+
if ((map & XFER_UDMA_100) == XFER_UDMA_100)
if ((best = (id->dma_ultra & 0x0020) ? XFER_UDMA_5 : 0)) return best;
static void ide_timing_quantize(struct ide_timing *t, struct ide_timing *q, int T, int UT)
{
- q->setup = EZ(t->setup, T);
- q->act8b = EZ(t->act8b, T);
- q->rec8b = EZ(t->rec8b, T);
- q->cyc8b = EZ(t->cyc8b, T);
- q->active = EZ(t->active, T);
- q->recover = EZ(t->recover, T);
- q->cycle = EZ(t->cycle, T);
- q->udma = EZ(t->udma, UT);
+ q->setup = EZ(t->setup * 1000, T);
+ q->act8b = EZ(t->act8b * 1000, T);
+ q->rec8b = EZ(t->rec8b * 1000, T);
+ q->cyc8b = EZ(t->cyc8b * 1000, T);
+ q->active = EZ(t->active * 1000, T);
+ q->recover = EZ(t->recover * 1000, T);
+ q->cycle = EZ(t->cycle * 1000, T);
+ q->udma = EZ(t->udma * 1000, UT);
}
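The `* 1000` factors added above convert nanosecond timings to picoseconds before quantizing, because `T` and `UT` are now clock periods expressed in picoseconds. A sketch of the round-up quantization behind `ENOUGH()`/`EZ()` (toy reimplementations under that assumption, not the kernel macros themselves):

```c
#include <assert.h>

/* Round-up division, as the kernel's ENOUGH() macro does. */
static int toy_enough(int v, int unit)
{
	return (v - 1) / unit + 1;
}

/*
 * Quantize a nanosecond timing against a clock period kept in
 * picoseconds -- hence the "* 1000" in the hunk above. A zero
 * (unset) timing stays zero instead of rounding up to one clock.
 */
static int toy_ez(int v_ns, int period_ps)
{
	return v_ns ? toy_enough(v_ns * 1000, period_ps) : 0;
}
```

For a 30000 ps (33 MHz) period, a 70 ns active time quantizes to 3 clocks, never rounding down below the requested timing.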
static void ide_timing_merge(struct ide_timing *a, struct ide_timing *b, struct ide_timing *m, unsigned int what)
#include "pdcraid.h"
-static unsigned long calc_pdcblock_offset (int major,int minor)
+static unsigned long calc_pdcblock_offset(struct block_device *bdev)
{
unsigned long lba = 0;
- kdev_t dev;
- ide_drive_t *ideinfo;
-
- dev = MKDEV(major,minor);
- ideinfo = get_info_ptr (dev);
+ ide_drive_t *ideinfo = get_info_ptr(to_kdev_t(bdev->bd_dev));
+
if (ideinfo==NULL)
return 0;
-
-
+
/* first sector of the last cluster */
if (ideinfo->head==0)
return 0;
}
-static int read_disk_sb (int major, int minor, unsigned char *buffer,int bufsize)
+static int read_disk_sb(struct block_device *bdev, struct promise_raid_conf *p)
{
- int ret = -EINVAL;
- struct buffer_head *bh = NULL;
- kdev_t dev = MKDEV(major,minor);
unsigned long sb_offset;
+ char *buffer;
+ int i;
- if (blksize_size[major]==NULL) /* device doesn't exist */
- return -EINVAL;
-
-
/*
* Calculate the position of the superblock,
* it's at first sector of the last cylinder
*/
- sb_offset = calc_pdcblock_offset(major,minor)/8;
- /* The /8 transforms sectors into 4Kb blocks */
+ sb_offset = calc_pdcblock_offset(bdev);
if (sb_offset==0)
return -1;
-
- set_blocksize (dev, 4096);
- bh = bread (dev, sb_offset, 4096);
-
- if (bh) {
- memcpy (buffer, bh->b_data, bufsize);
- } else {
- printk(KERN_ERR "pdcraid: Error reading superblock.\n");
- goto abort;
+ for (i = 0, buffer = (char*)p; i < 4; i++, buffer += 512) {
+ Sector sect;
+ char *q = read_dev_sector(bdev, sb_offset + i, &sect);
+ if (!q) {
+ printk(KERN_ERR "pdcraid: Error reading superblock.\n");
+ return -1;
+ }
+ memcpy(buffer, q, 512);
+ put_dev_sector(&sect);
}
- ret = 0;
-abort:
- if (bh)
- brelse (bh);
- return ret;
+ return 0;
}
static unsigned int calc_sb_csum (unsigned int* ptr)
static int cookie = 0;
+static struct promise_raid_conf __initdata prom;
static void __init probedisk(int devindex,int device, int raidlevel)
{
int i;
int major, minor;
- struct promise_raid_conf *prom;
- static unsigned char block[4096];
struct block_device *bdev;
if (devlist[devindex].device!=-1) /* already assigned to another array */
major = devlist[devindex].major;
minor = devlist[devindex].minor;
- if (read_disk_sb(major,minor,(unsigned char*)&block,sizeof(block)))
- return;
-
- prom = (struct promise_raid_conf*)&block[512];
-
- /* the checksums must match */
- if (prom->checksum != calc_sb_csum((unsigned int*)prom))
- return;
- if (prom->raid.type!=raidlevel) /* different raidlevel */
+ bdev = bdget(MKDEV(major,minor));
+ if (!bdev)
return;
- if ((cookie!=0) && (cookie != prom->raid.magic_1)) /* different array */
+ if (blkdev_get(bdev, FMODE_READ|FMODE_WRITE, 0, BDEV_RAW) != 0)
return;
+
+ if (read_disk_sb(bdev, &prom))
+ goto out;
+
+ /* the checksums must match */
+ if (prom.checksum != calc_sb_csum((unsigned int*)&prom))
+ goto out;
+ if (prom.raid.type!=raidlevel) /* different raidlevel */
+ goto out;
+
+ if ((cookie!=0) && (cookie != prom.raid.magic_1)) /* different array */
+ goto out;
- cookie = prom->raid.magic_1;
+ cookie = prom.raid.magic_1;
/* This looks evil. But basically, we have to search for our adapternumber
in the arraydefinition, both of which are in the superblock */
- for (i=0;(i<prom->raid.total_disks)&&(i<8);i++) {
- if ( (prom->raid.disk[i].channel== prom->raid.channel) &&
- (prom->raid.disk[i].device == prom->raid.device) ) {
-
- bdev = bdget(MKDEV(major,minor));
- if (bdev && blkdev_get(bdev, FMODE_READ|FMODE_WRITE, 0, BDEV_RAW) == 0) {
- raid[device].disk[i].bdev = bdev;
- }
+ for (i=0;(i<prom.raid.total_disks)&&(i<8);i++) {
+ if ( (prom.raid.disk[i].channel== prom.raid.channel) &&
+ (prom.raid.disk[i].device == prom.raid.device) ) {
+
+ raid[device].disk[i].bdev = bdev;
raid[device].disk[i].device = MKDEV(major,minor);
- raid[device].disk[i].sectors = prom->raid.disk_secs;
- raid[device].stride = (1<<prom->raid.raid0_shift);
- raid[device].disks = prom->raid.total_disks;
- raid[device].sectors = prom->raid.total_secs;
- raid[device].geom.heads = prom->raid.heads+1;
- raid[device].geom.sectors = prom->raid.sectors;
- raid[device].geom.cylinders = prom->raid.cylinders+1;
+ raid[device].disk[i].sectors = prom.raid.disk_secs;
+ raid[device].stride = (1<<prom.raid.raid0_shift);
+ raid[device].disks = prom.raid.total_disks;
+ raid[device].sectors = prom.raid.total_secs;
+ raid[device].geom.heads = prom.raid.heads+1;
+ raid[device].geom.sectors = prom.raid.sectors;
+ raid[device].geom.cylinders = prom.raid.cylinders+1;
devlist[devindex].device=device;
- }
+ return;
+ }
}
-
+out:
+ blkdev_put(bdev, BDEV_RAW);
}
static void __init fill_cutoff(int device)
/*
- * $Id: via82cxxx.c,v 3.29 2001/09/10 10:06:00 vojtech Exp $
+ * $Id: via82cxxx.c,v 3.33 2001/12/23 22:46:12 vojtech Exp $
*
* Copyright (c) 2000-2001 Vojtech Pavlik
*
* Michel Aubry
* Jeff Garzik
* Andre Hedrick
- *
- * Sponsored by SuSE
*/
/*
* VIA IDE driver for Linux. Supports
*
* vt82c576, vt82c586, vt82c586a, vt82c586b, vt82c596a, vt82c596b,
- * vt82c686, vt82c686a, vt82c686b, vt8231, vt8233
+ * vt82c686, vt82c686a, vt82c686b, vt8231, vt8233, vt8233c, vt8233a
*
* southbridges, which can be found in
*
* VIA Apollo Master, VP, VP2, VP2/97, VP3, VPX, VPX/97, MVP3, MVP4, P6, Pro,
* ProII, ProPlus, Pro133, Pro133+, Pro133A, Pro133A Dual, Pro133T, Pro133Z,
* PLE133, PLE133T, Pro266, Pro266T, ProP4X266, PM601, PM133, PN133, PL133T,
- * PX266, PM266, KX133, KT133, KT133A, KLE133, KT266, KX266, KM133, KM133A,
- * KL133, KN133, KM266
+ * PX266, PM266, KX133, KT133, KT133A, KT133E, KLE133, KT266, KX266, KM133,
+ * KM133A, KL133, KN133, KM266
* PC-Chips VXPro, VXPro+, VXTwo, TXPro-III, TXPro-AGP, AGPPro, ViaGra, BXToo,
* BXTel, BXpert
* AMD 640, 640 AGP, 750 IronGate, 760, 760MP
*
* chipsets. Supports
*
- * PIO 0-5, MWDMA 0-2, SWDMA 0-2 and UDMA 0-5
+ * PIO 0-5, MWDMA 0-2, SWDMA 0-2 and UDMA 0-6
*
- * (this includes UDMA33, 66 and 100) modes. UDMA66 and higher modes are
+ * (this includes UDMA33, 66, 100 and 133) modes. UDMA66 and higher modes are
* autoenabled only in case the BIOS has detected a 80 wire cable. To ignore
* the BIOS data and assume the cable is present, use 'ide0=ata66' or
* 'ide1=ata66' on the kernel command line.
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* Should you need to contact me, the author, you can do so either by
- * e-mail - mail your message to <vojtech@suse.cz>, or by paper mail:
- * Vojtech Pavlik, Ucitelska 1576, Prague 8, 182 00 Czech Republic
+ * e-mail - mail your message to <vojtech@ucw.cz>, or by paper mail:
+ * Vojtech Pavlik, Simunkova 1594, Prague 8, 182 00 Czech Republic
*/
#include <linux/config.h>
#define VIA_UDMA_33 0x001
#define VIA_UDMA_66 0x002
#define VIA_UDMA_100 0x003
+#define VIA_UDMA_133 0x004
#define VIA_BAD_PREQ 0x010 /* Crashes if PREQ# till DDACK# set */
#define VIA_BAD_CLK66 0x020 /* 66 MHz clock doesn't work correctly */
#define VIA_SET_FIFO 0x040 /* Needs to have FIFO split set */
#define VIA_NO_UNMASK 0x080 /* Doesn't work with IRQ unmasking on */
+#define VIA_BAD_ID 0x100 /* Has wrong vendor ID (0x1107) */
/*
* VIA SouthBridge chips.
unsigned short flags;
} via_isa_bridges[] = {
#ifdef FUTURE_BRIDGES
- { "vt8237", PCI_DEVICE_ID_VIA_8237, 0x00, 0x2f, VIA_UDMA_100 },
- { "vt8235", PCI_DEVICE_ID_VIA_8235, 0x00, 0x2f, VIA_UDMA_100 },
- { "vt8233c", PCI_DEVICE_ID_VIA_8233C, 0x00, 0x2f, VIA_UDMA_100 },
+ { "vt8237", PCI_DEVICE_ID_VIA_8237, 0x00, 0x2f, VIA_UDMA_133 },
+ { "vt8235", PCI_DEVICE_ID_VIA_8235, 0x00, 0x2f, VIA_UDMA_133 },
#endif
+ { "vt8233a", PCI_DEVICE_ID_VIA_8233A, 0x00, 0x2f, VIA_UDMA_133 },
+ { "vt8233c", PCI_DEVICE_ID_VIA_8233C_0, 0x00, 0x2f, VIA_UDMA_100 },
{ "vt8233", PCI_DEVICE_ID_VIA_8233_0, 0x00, 0x2f, VIA_UDMA_100 },
{ "vt8231", PCI_DEVICE_ID_VIA_8231, 0x00, 0x2f, VIA_UDMA_100 },
{ "vt82c686b", PCI_DEVICE_ID_VIA_82C686, 0x40, 0x4f, VIA_UDMA_100 },
{ "vt82c586a", PCI_DEVICE_ID_VIA_82C586_0, 0x20, 0x2f, VIA_UDMA_33 | VIA_SET_FIFO },
{ "vt82c586", PCI_DEVICE_ID_VIA_82C586_0, 0x00, 0x0f, VIA_UDMA_NONE | VIA_SET_FIFO },
{ "vt82c576", PCI_DEVICE_ID_VIA_82C576, 0x00, 0x2f, VIA_UDMA_NONE | VIA_SET_FIFO | VIA_NO_UNMASK },
+ { "vt82c576", PCI_DEVICE_ID_VIA_82C576, 0x00, 0x2f, VIA_UDMA_NONE | VIA_SET_FIFO | VIA_NO_UNMASK | VIA_BAD_ID },
{ NULL }
};
static unsigned char via_enabled;
static unsigned int via_80w;
static unsigned int via_clock;
-static char *via_dma[] = { "MWDMA16", "UDMA33", "UDMA66", "UDMA100" };
+static char *via_dma[] = { "MWDMA16", "UDMA33", "UDMA66", "UDMA100", "UDMA133" };
/*
* VIA /proc entry.
static int via_get_info(char *buffer, char **addr, off_t offset, int count)
{
- short speed[4], cycle[4], setup[4], active[4], recover[4], den[4],
+ int speed[4], cycle[4], setup[4], active[4], recover[4], den[4],
uen[4], udma[4], umul[4], active8b[4], recover8b[4];
struct pci_dev *dev = bmide_dev;
unsigned int v, u, i;
via_print("----------VIA BusMastering IDE Configuration----------------");
- via_print("Driver Version: 3.29");
+ via_print("Driver Version: 3.33");
via_print("South Bridge: VIA %s", via_config->name);
pci_read_config_byte(isa_dev, PCI_REVISION_ID, &t);
via_print("Highest DMA rate: %s", via_dma[via_config->flags & VIA_UDMA]);
via_print("BM-DMA base: %#x", via_base);
- via_print("PCI clock: %dMHz", via_clock);
+ via_print("PCI clock: %d.%dMHz", via_clock / 1000, via_clock / 100 % 10);
pci_read_config_byte(dev, VIA_MISC_1, &t);
via_print("Master Read Cycle IRDY: %dws", (t & 64) >> 6);
uen[i] = ((u >> ((3 - i) << 3)) & 0x20);
den[i] = (c & ((i & 1) ? 0x40 : 0x20) << ((i & 2) << 2));
- speed[i] = 20 * via_clock / (active[i] + recover[i]);
- cycle[i] = 1000 / via_clock * (active[i] + recover[i]);
+ speed[i] = 2 * via_clock / (active[i] + recover[i]);
+ cycle[i] = 1000000 * (active[i] + recover[i]) / via_clock;
if (!uen[i] || !den[i])
continue;
switch (via_config->flags & VIA_UDMA) {
-
- case VIA_UDMA_100:
- speed[i] = 60 * via_clock / udma[i];
- cycle[i] = 333 / via_clock * udma[i];
+
+ case VIA_UDMA_33:
+ speed[i] = 2 * via_clock / udma[i];
+ cycle[i] = 1000000 * udma[i] / via_clock;
break;
case VIA_UDMA_66:
- speed[i] = 40 * via_clock / (udma[i] * umul[i]);
- cycle[i] = 500 / via_clock * (udma[i] * umul[i]);
+ speed[i] = 4 * via_clock / (udma[i] * umul[i]);
+ cycle[i] = 500000 * (udma[i] * umul[i]) / via_clock;
break;
- case VIA_UDMA_33:
- speed[i] = 20 * via_clock / udma[i];
- cycle[i] = 1000 / via_clock * udma[i];
+ case VIA_UDMA_100:
+ speed[i] = 6 * via_clock / udma[i];
+ cycle[i] = 333333 * udma[i] / via_clock;
+ break;
+
+ case VIA_UDMA_133:
+ speed[i] = 8 * via_clock / udma[i];
+ cycle[i] = 250000 * udma[i] / via_clock;
break;
}
}
via_print_drive("Transfer Mode: ", "%10s", den[i] ? (uen[i] ? "UDMA" : "DMA") : "PIO");
- via_print_drive("Address Setup: ", "%8dns", (1000 / via_clock) * setup[i]);
- via_print_drive("Cmd Active: ", "%8dns", (1000 / via_clock) * active8b[i]);
- via_print_drive("Cmd Recovery: ", "%8dns", (1000 / via_clock) * recover8b[i]);
- via_print_drive("Data Active: ", "%8dns", (1000 / via_clock) * active[i]);
- via_print_drive("Data Recovery: ", "%8dns", (1000 / via_clock) * recover[i]);
+ via_print_drive("Address Setup: ", "%8dns", 1000000 * setup[i] / via_clock);
+ via_print_drive("Cmd Active: ", "%8dns", 1000000 * active8b[i] / via_clock);
+ via_print_drive("Cmd Recovery: ", "%8dns", 1000000 * recover8b[i] / via_clock);
+ via_print_drive("Data Active: ", "%8dns", 1000000 * active[i] / via_clock);
+ via_print_drive("Data Recovery: ", "%8dns", 1000000 * recover[i] / via_clock);
via_print_drive("Cycle Time: ", "%8dns", cycle[i]);
- via_print_drive("Transfer Rate: ", "%4d.%dMB/s", speed[i] / 10, speed[i] % 10);
+ via_print_drive("Transfer Rate: ", "%4d.%dMB/s", speed[i] / 1000, speed[i] / 100 % 10);
return p - buffer; /* hoping it is less than 4K... */
}
case VIA_UDMA_33: t = timing->udma ? (0xe0 | (FIT(timing->udma, 2, 5) - 2)) : 0x03; break;
case VIA_UDMA_66: t = timing->udma ? (0xe8 | (FIT(timing->udma, 2, 9) - 2)) : 0x0f; break;
case VIA_UDMA_100: t = timing->udma ? (0xe0 | (FIT(timing->udma, 2, 9) - 2)) : 0x07; break;
+ case VIA_UDMA_133: t = timing->udma ? (0xe0 | (FIT(timing->udma, 2, 9) - 2)) : 0x07; break;
default: return;
}
{
ide_drive_t *peer = HWIF(drive)->drives + (~drive->dn & 1);
struct ide_timing t, p;
- int T, UT;
+ unsigned int T, UT;
if (speed != XFER_PIO_SLOW && speed != drive->current_speed)
if (ide_config_drive_speed(drive, speed))
printk(KERN_WARNING "ide%d: Drive %d didn't accept speed setting. Oh, well.\n",
drive->dn >> 1, drive->dn & 1);
- T = 1000 / via_clock;
+ T = 1000000000 / via_clock;
switch (via_config->flags & VIA_UDMA) {
case VIA_UDMA_33: UT = T; break;
case VIA_UDMA_66: UT = T/2; break;
case VIA_UDMA_100: UT = T/3; break;
- default: UT = T; break;
+ case VIA_UDMA_133: UT = T/4; break;
+ default: UT = T;
}
ide_timing_compute(drive, speed, &t, T, UT);
XFER_PIO | XFER_EPIO | XFER_SWDMA | XFER_MWDMA |
(via_config->flags & VIA_UDMA ? XFER_UDMA : 0) |
(w80 && (via_config->flags & VIA_UDMA) >= VIA_UDMA_66 ? XFER_UDMA_66 : 0) |
- (w80 && (via_config->flags & VIA_UDMA) >= VIA_UDMA_100 ? XFER_UDMA_100 : 0));
+ (w80 && (via_config->flags & VIA_UDMA) >= VIA_UDMA_100 ? XFER_UDMA_100 : 0) |
+ (w80 && (via_config->flags & VIA_UDMA) >= VIA_UDMA_133 ? XFER_UDMA_133 : 0));
via_set_drive(drive, speed);
*/
for (via_config = via_isa_bridges; via_config->id; via_config++)
- if ((isa = pci_find_device(PCI_VENDOR_ID_VIA, via_config->id, NULL))) {
+ if ((isa = pci_find_device(PCI_VENDOR_ID_VIA +
+ !!(via_config->flags & VIA_BAD_ID), via_config->id, NULL))) {
+
pci_read_config_byte(isa, PCI_REVISION_ID, &t);
if (t >= via_config->rev_min && t <= via_config->rev_max)
break;
}
if (!via_config->id) {
- printk(KERN_WARNING "VP_IDE: Unknown VIA SouthBridge, contact Vojtech Pavlik <vojtech@suse.cz>\n");
+ printk(KERN_WARNING "VP_IDE: Unknown VIA SouthBridge, contact Vojtech Pavlik <vojtech@ucw.cz>\n");
return -ENODEV;
}
switch (via_config->flags & VIA_UDMA) {
- case VIA_UDMA_100:
-
- pci_read_config_dword(dev, VIA_UDMA_TIMING, &u);
- for (i = 24; i >= 0; i -= 8)
- if (((u >> i) & 0x10) || (((u >> i) & 0x20) && (((u >> i) & 7) < 3)))
- via_80w |= (1 << (1 - (i >> 4))); /* BIOS 80-wire bit or UDMA w/ < 50ns/cycle */
- break;
-
case VIA_UDMA_66:
-
pci_read_config_dword(dev, VIA_UDMA_TIMING, &u); /* Enable Clk66 */
pci_write_config_dword(dev, VIA_UDMA_TIMING, u | 0x80008);
for (i = 24; i >= 0; i -= 8)
if (((u >> (i & 16)) & 8) && ((u >> i) & 0x20) && (((u >> i) & 7) < 2))
via_80w |= (1 << (1 - (i >> 4))); /* 2x PCI clock and UDMA w/ < 3T/cycle */
break;
+
+ case VIA_UDMA_100:
+ pci_read_config_dword(dev, VIA_UDMA_TIMING, &u);
+ for (i = 24; i >= 0; i -= 8)
+ if (((u >> i) & 0x10) || (((u >> i) & 0x20) && (((u >> i) & 7) < 4)))
+ via_80w |= (1 << (1 - (i >> 4))); /* BIOS 80-wire bit or UDMA w/ < 60ns/cycle */
+ break;
+
+ case VIA_UDMA_133:
+ pci_read_config_dword(dev, VIA_UDMA_TIMING, &u);
+ for (i = 24; i >= 0; i -= 8)
+ if (((u >> i) & 0x10) || (((u >> i) & 0x20) && (((u >> i) & 7) < 8)))
+ via_80w |= (1 << (1 - (i >> 4))); /* BIOS 80-wire bit or UDMA w/ < 60ns/cycle */
+ break;
+
}
if (via_config->flags & VIA_BAD_CLK66) { /* Disable Clk66 */
* Determine system bus clock.
*/
- via_clock = system_bus_clock();
- if (via_clock < 20 || via_clock > 50) {
+ via_clock = system_bus_clock() * 1000;
+
+ switch (via_clock) {
+ case 33000: via_clock = 33333; break;
+ case 37000: via_clock = 37500; break;
+ case 41000: via_clock = 41666; break;
+ }
+
+ if (via_clock < 20000 || via_clock > 50000) {
printk(KERN_WARNING "VP_IDE: User given PCI clock speed impossible (%d), using 33 MHz instead.\n", via_clock);
- printk(KERN_WARNING "VP_IDE: Use ide0=ata66 if you want to force UDMA66/UDMA100.\n");
+ printk(KERN_WARNING "VP_IDE: Use ide0=ata66 if you want to assume 80-wire cable.\n");
- via_clock = 33;
+ via_clock = 33333;
}
* 04/10/2001 - corrected devfs_register() call in lvm_init_fs()
* 11/04/2001 - don't devfs_register("lvm") as user-space always does it
* 10/05/2001 - show more of PV name in /proc/lvm/global
+ * 16/12/2001 - fix devfs unregister order and prevent duplicate unreg (REG)
*
*/
static void _show_uuid(const char *src, char *b, char *e);
-#if 0
static devfs_handle_t lvm_devfs_handle;
-#endif
static devfs_handle_t vg_devfs_handle[MAX_VG];
static devfs_handle_t ch_devfs_handle[MAX_VG];
static devfs_handle_t lv_devfs_handle[MAX_LV];
void __init lvm_init_fs() {
struct proc_dir_entry *pde;
-/* User-space has already registered this */
-#if 0
+ /* Must create device node. Think about "devfs=only" situation */
lvm_devfs_handle = devfs_register(
0 , "lvm", 0, LVM_CHAR_MAJOR, 0,
S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP,
&lvm_chr_fops, NULL);
-#endif
lvm_proc_dir = create_proc_entry(LVM_DIR, S_IFDIR, &proc_root);
if (lvm_proc_dir) {
}
void lvm_fin_fs() {
-#if 0
devfs_unregister (lvm_devfs_handle);
-#endif
remove_proc_entry(LVM_GLOBAL, lvm_proc_dir);
remove_proc_entry(LVM_VG_SUBDIR, lvm_proc_dir);
int i;
devfs_unregister(ch_devfs_handle[vg_ptr->vg_number]);
- devfs_unregister(vg_devfs_handle[vg_ptr->vg_number]);
+ ch_devfs_handle[vg_ptr->vg_number] = NULL;
/* remove lv's */
for(i = 0; i < vg_ptr->lv_max; i++)
for(i = 0; i < vg_ptr->pv_max; i++)
if(vg_ptr->pv[i]) lvm_fs_remove_pv(vg_ptr, vg_ptr->pv[i]);
+ /* must not remove directory before leaf nodes */
+ devfs_unregister(vg_devfs_handle[vg_ptr->vg_number]);
+ vg_devfs_handle[vg_ptr->vg_number] = NULL;
+
if(vg_ptr->vg_dir_pde) {
remove_proc_entry(LVM_LV_SUBDIR, vg_ptr->vg_dir_pde);
vg_ptr->lv_subdir_pde = NULL;
void lvm_fs_remove_lv(vg_t *vg_ptr, lv_t *lv) {
devfs_unregister(lv_devfs_handle[MINOR(lv->lv_dev)]);
+ lv_devfs_handle[MINOR(lv->lv_dev)] = NULL;
if(vg_ptr->lv_subdir_pde) {
const char *name = _basename(lv->lv_name);
return 1;
}
-inline int lvm_get_blksize(kdev_t dev)
-{
- int correct_size = BLOCK_SIZE, i, major;
-
- major = MAJOR(dev);
- if (blksize_size[major])
- {
- i = blksize_size[major][MINOR(dev)];
- if (i)
- correct_size = i;
- }
- return correct_size;
-}
-
#ifdef DEBUG_SNAPSHOT
static inline void invalidate_snap_cache(unsigned long start, unsigned long nr,
kdev_t dev)
is--;
blksize_snap =
- lvm_get_blksize(lv_snap->lv_block_exception[is].rdev_new);
+ block_size(lv_snap->lv_block_exception[is].rdev_new);
is -= is % (blksize_snap / sizeof(lv_COW_table_disk_t));
memset(lv_COW_table, 0, blksize_snap);
iobuf = lv_snap->lv_iobuf;
- blksize_org = lvm_get_blksize(org_phys_dev);
- blksize_snap = lvm_get_blksize(snap_phys_dev);
+ blksize_org = block_size(org_phys_dev);
+ blksize_snap = block_size(snap_phys_dev);
max_blksize = max(blksize_org, blksize_snap);
min_blksize = min(blksize_org, blksize_snap);
max_sectors = LVM_MAX_SECTORS * (min_blksize>>9);
snap_phys_dev = lv_snap->lv_block_exception[idx].rdev_new;
snap_pe_start = lv_snap->lv_block_exception[idx - (idx % COW_entries_per_pe)].rsector_new - lv_snap->lv_chunk_size;
- blksize_snap = lvm_get_blksize(snap_phys_dev);
+ blksize_snap = block_size(snap_phys_dev);
COW_entries_per_block = blksize_snap / sizeof(lv_COW_table_disk_t);
idx_COW_table = idx % COW_entries_per_pe % COW_entries_per_block;
idx++;
snap_phys_dev = lv_snap->lv_block_exception[idx].rdev_new;
snap_pe_start = lv_snap->lv_block_exception[idx - (idx % COW_entries_per_pe)].rsector_new - lv_snap->lv_chunk_size;
- blksize_snap = lvm_get_blksize(snap_phys_dev);
+ blksize_snap = block_size(snap_phys_dev);
blocks[0] = snap_pe_start >> (blksize_snap >> 10);
} else blocks[0]++;
* (Jens Axboe)
* - Defer writes to an extent that is being moved [JT + AD]
* 28/05/2001 - implemented missing BLKSSZGET ioctl [AD]
+ * 28/12/2001 - buffer_head -> bio
+ * removed huge allocation of a lv_t on stack
+ * (Anders Gustafsson)
*
*/
memset(&bio,0,sizeof(bio));
bio.bi_dev = inode->i_rdev;
- bio.bi_io_vec.bv_len = lvm_get_blksize(bio.bi_dev);
+ bio.bi_size = block_size(bio.bi_dev); /* needed by bio_sectors */
bio.bi_sector = block * bio_sectors(&bio);
bio.bi_rw = READ;
if ((err=lvm_map(&bio)) < 0) {
return 0;
}
-static int lvm_map(struct bio *bh)
+static int lvm_map(struct bio *bi)
{
- int minor = MINOR(bh->bi_dev);
+ int minor = MINOR(bi->bi_dev);
ulong index;
ulong pe_start;
- ulong size = bio_sectors(bh);
- ulong rsector_org = bh->bi_sector;
+ ulong size = bio_sectors(bi);
+ ulong rsector_org = bi->bi_sector;
ulong rsector_map;
kdev_t rdev_map;
vg_t *vg_this = vg[VG_BLK(minor)];
lv_t *lv = vg_this->lv[LV_BLK(minor)];
- int rw = bio_data_dir(bh);
-
+ int rw = bio_rw(bi);
down_read(&lv->lv_lock);
if (!(lv->lv_status & LV_ACTIVE)) {
P_MAP("%s - lvm_map minor: %d *rdev: %s *rsector: %lu size:%lu\n",
lvm_name, minor,
- kdevname(bh->bi_dev),
+ kdevname(bi->bi_dev),
rsector_org, size);
if (rsector_org + size > lv->lv_size) {
* we need to queue this request, because this is in the fast path.
*/
if (rw == WRITE || rw == WRITEA) {
- if(_defer_extent(bh, rw, rdev_map,
+ if(_defer_extent(bi, rw, rdev_map,
rsector_map, vg_this->pe_size)) {
up_read(&lv->lv_lock);
}
out:
- bh->bi_dev = rdev_map;
- bh->bi_sector = rsector_map;
+ bi->bi_dev = rdev_map;
+ bi->bi_sector = rsector_map;
up_read(&lv->lv_lock);
return 1;
bad:
- bio_io_error(bh);
+ bio_io_error(bi);
up_read(&lv->lv_lock);
return -1;
} /* lvm_map() */
{
int ret = 0;
ulong l, ls = 0, p, size;
- lv_t lv;
vg_t *vg_ptr;
lv_t **snap_lv_ptr;
+ lv_t *tmplv;
if ((vg_ptr = kmalloc(sizeof(vg_t),GFP_KERNEL)) == NULL) {
printk(KERN_CRIT
lvm_name, __LINE__);
return -ENOMEM;
}
+
/* get the volume group structure */
if (copy_from_user(vg_ptr, arg, sizeof(vg_t)) != 0) {
P_IOCTL("lvm_do_vg_create ERROR: copy VG ptr %p (%d bytes)\n",
return -EFAULT;
}
+
+
/* VG_CREATE now uses minor number in VG structure */
if (minor == -1) minor = vg_ptr->vg_number;
}
memset(snap_lv_ptr, 0, size);
+ if ((tmplv = kmalloc(sizeof(lv_t),GFP_KERNEL)) == NULL) {
+ printk(KERN_CRIT
+ "%s -- VG_CREATE: kmalloc error LV at line %d\n",
+ lvm_name, __LINE__);
+ vfree(snap_lv_ptr);
+ return -ENOMEM;
+ }
+
/* get the logical volume structures */
vg_ptr->lv_cur = 0;
for (l = 0; l < vg_ptr->lv_max; l++) {
lv_t *lvp;
+
/* user space address */
if ((lvp = vg_ptr->lv[l]) != NULL) {
- if (copy_from_user(&lv, lvp, sizeof(lv_t)) != 0) {
+ if (copy_from_user(tmplv, lvp, sizeof(lv_t)) != 0) {
P_IOCTL("ERROR: copying LV ptr %p (%d bytes)\n",
lvp, sizeof(lv_t));
lvm_do_vg_remove(minor);
+ vfree(snap_lv_ptr);
+ kfree(tmplv);
return -EFAULT;
}
- if ( lv.lv_access & LV_SNAPSHOT) {
+ if ( tmplv->lv_access & LV_SNAPSHOT) {
snap_lv_ptr[ls] = lvp;
vg_ptr->lv[l] = NULL;
ls++;
}
vg_ptr->lv[l] = NULL;
/* only create original logical volumes for now */
- if (lvm_do_lv_create(minor, lv.lv_name, &lv) != 0) {
+ if (lvm_do_lv_create(minor, tmplv->lv_name, tmplv) != 0) {
lvm_do_vg_remove(minor);
+ vfree(snap_lv_ptr);
+ kfree(tmplv);
return -EFAULT;
}
}
in place during first path above */
for (l = 0; l < ls; l++) {
lv_t *lvp = snap_lv_ptr[l];
- if (copy_from_user(&lv, lvp, sizeof(lv_t)) != 0) {
+ if (copy_from_user(tmplv, lvp, sizeof(lv_t)) != 0) {
lvm_do_vg_remove(minor);
+ vfree(snap_lv_ptr);
+ kfree(tmplv);
return -EFAULT;
}
- if (lvm_do_lv_create(minor, lv.lv_name, &lv) != 0) {
+ if (lvm_do_lv_create(minor, tmplv->lv_name, tmplv) != 0) {
lvm_do_vg_remove(minor);
+ vfree(snap_lv_ptr);
+ kfree(tmplv);
return -EFAULT;
}
}
vfree(snap_lv_ptr);
-
+ kfree(tmplv);
vg_count++;
static int read_disk_sb(mdk_rdev_t * rdev)
{
- int ret = -EINVAL;
- struct buffer_head *bh = NULL;
- kdev_t dev = rdev->dev;
- mdp_super_t *sb;
+ struct address_space *mapping = rdev->bdev->bd_inode->i_mapping;
+ struct page *page;
+ char *p;
unsigned long sb_offset;
+ int n = PAGE_CACHE_SIZE / BLOCK_SIZE;
if (!rdev->sb) {
MD_BUG();
- goto abort;
+ return -EINVAL;
}
/*
* Calculate the position of the superblock,
- * it's at the end of the disk
+ * it's at the end of the disk.
+ *
+ * It also happens to be a multiple of 4KB.
*/
sb_offset = calc_dev_sboffset(rdev->dev, rdev->mddev, 1);
rdev->sb_offset = sb_offset;
- fsync_dev(dev);
- set_blocksize (dev, MD_SB_BYTES);
- bh = bread (dev, sb_offset / MD_SB_BLOCKS, MD_SB_BYTES);
-
- if (bh) {
- sb = (mdp_super_t *) bh->b_data;
- memcpy (rdev->sb, sb, MD_SB_BYTES);
- } else {
- printk(NO_SB,partition_name(rdev->dev));
- goto abort;
- }
+ page = read_cache_page(mapping, sb_offset/n,
+ (filler_t *)mapping->a_ops->readpage, NULL);
+ if (IS_ERR(page))
+ goto out;
+ wait_on_page(page);
+ if (!Page_Uptodate(page))
+ goto fail;
+ if (PageError(page))
+ goto fail;
+ p = (char *)page_address(page) + BLOCK_SIZE * (sb_offset % n);
+ memcpy((char*)rdev->sb, p, MD_SB_BYTES);
+ page_cache_release(page);
printk(KERN_INFO " [events: %08lx]\n", (unsigned long)rdev->sb->events_lo);
- ret = 0;
-abort:
- if (bh)
- brelse (bh);
- return ret;
+ return 0;
+
+fail:
+ page_cache_release(page);
+out:
+ printk(NO_SB,partition_name(rdev->dev));
+ return -EINVAL;
}
static unsigned int calc_sb_csum(mdp_super_t * sb)
return NULL;
}
-#define GETBLK_FAILED KERN_ERR \
-"md: getblk failed for device %s\n"
-
static int write_disk_sb(mdk_rdev_t * rdev)
{
- struct buffer_head *bh;
- kdev_t dev;
+ struct address_space *mapping = rdev->bdev->bd_inode->i_mapping;
+ struct page *page;
+ unsigned offs;
+ int error;
+ kdev_t dev = rdev->dev;
unsigned long sb_offset, size;
- mdp_super_t *sb;
if (!rdev->sb) {
MD_BUG();
return 1;
}
- dev = rdev->dev;
sb_offset = calc_dev_sboffset(dev, rdev->mddev, 1);
if (rdev->sb_offset != sb_offset) {
printk(KERN_INFO "%s's sb offset has changed from %ld to %ld, skipping\n",
printk(KERN_INFO "(write) %s's sb offset: %ld\n", partition_name(dev), sb_offset);
fsync_dev(dev);
- set_blocksize(dev, MD_SB_BYTES);
- bh = getblk(dev, sb_offset / MD_SB_BLOCKS, MD_SB_BYTES);
- if (!bh) {
- printk(GETBLK_FAILED, partition_name(dev));
- return 1;
- }
- memset(bh->b_data,0,bh->b_size);
- sb = (mdp_super_t *) bh->b_data;
- memcpy(sb, rdev->sb, MD_SB_BYTES);
-
- mark_buffer_uptodate(bh, 1);
- mark_buffer_dirty(bh);
- ll_rw_block(WRITE, 1, &bh);
- wait_on_buffer(bh);
- brelse(bh);
+ page = grab_cache_page(mapping, sb_offset/(PAGE_CACHE_SIZE/BLOCK_SIZE));
+ offs = (sb_offset % (PAGE_CACHE_SIZE/BLOCK_SIZE)) * BLOCK_SIZE;
+ if (!page)
+ goto fail;
+ error = mapping->a_ops->prepare_write(NULL, page, offs,
+ offs + MD_SB_BYTES);
+ if (error)
+ goto unlock;
+ memcpy((char *)page_address(page) + offs, rdev->sb, MD_SB_BYTES);
+ error = mapping->a_ops->commit_write(NULL, page, offs,
+ offs + MD_SB_BYTES);
+ if (error)
+ goto unlock;
+ UnlockPage(page);
+ wait_on_page(page);
+ page_cache_release(page);
fsync_dev(dev);
skip:
return 0;
+unlock:
+ UnlockPage(page);
+ page_cache_release(page);
+fail:
+ printk(KERN_ERR "md: write_disk_sb failed for device %s\n", partition_name(dev));
+ return 1;
}
-#undef GETBLK_FAILED
static void set_this_disk(mddev_t *mddev, mdk_rdev_t *rdev)
{
return 0;
}
-/*
- * Determine correct block size for this device.
- */
-unsigned int device_bsize (kdev_t dev)
-{
- unsigned int i, correct_size;
-
- correct_size = BLOCK_SIZE;
- if (blksize_size[MAJOR(dev)]) {
- i = blksize_size[MAJOR(dev)][MINOR(dev)];
- if (i)
- correct_size = i;
- }
-
- return correct_size;
-}
-
static int raid5_sync_request (mddev_t *mddev, unsigned long sector_nr)
{
raid5_conf_t *conf = (raid5_conf_t *) mddev->private;
#define CONFIG_MTD_BLKDEV_ERASESIZE 128
#define VERSION "1.1"
extern int *blk_size[];
-extern int *blksize_size[];
/* Info for the block device */
typedef struct mtd_raw_dev_data_s {
DEBUG(1, "blkmtd: devname = %s\n", bdevname(rdev));
blocksize = BLOCK_SIZE;
- if(bs) {
- blocksize = bs;
- } else {
- if (blksize_size[maj] && blksize_size[maj][min]) {
- DEBUG(2, "blkmtd: blksize_size = %d\n", blksize_size[maj][min]);
- blocksize = blksize_size[maj][min];
- }
- }
+ blocksize = bs ? bs : block_size(rdev);
i = blocksize;
blocksize_bits = 0;
while(i != 1) {
{
int i = 0;
- driver_template.module = &__this_module;
- scsi_register_module(MODULE_SCSI_HA, &driver_template);
+ driver_template.module = THIS_MODULE;
+ scsi_register_host(&driver_template);
if (driver_template.present)
scsi_registered = TRUE;
else {
- printk("iph5526: SCSI registeration failed!!!\n");
+ printk("iph5526: SCSI registration failed!\n");
scsi_registered = FALSE;
- scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+ scsi_unregister_host(&driver_template);
}
while(fc[i] != NULL) {
i++;
}
if (scsi_registered == TRUE)
- scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+ scsi_unregister_host(&driver_template);
}
#endif /* MODULE */
*/
do {
if (busy) {
- current->counter = 0;
+ current->time_slice = 0;
schedule();
}
BusLogic_HostAdapter_T *HostAdapter =
(BusLogic_HostAdapter_T *) Disk->device->host->hostdata;
BIOS_DiskParameters_T *DiskParameters = (BIOS_DiskParameters_T *) Parameters;
- struct buffer_head *BufferHead;
+ unsigned char *buf;
if (HostAdapter->ExtendedTranslationEnabled &&
Disk->capacity >= 2*1024*1024 /* 1 GB in 512 byte sectors */)
{
}
DiskParameters->Cylinders =
Disk->capacity / (DiskParameters->Heads * DiskParameters->Sectors);
- /*
- Attempt to read the first 1024 bytes from the disk device.
- */
- BufferHead = bread(MKDEV(MAJOR(Device), MINOR(Device) & ~0x0F), 0, 1024);
- if (BufferHead == NULL) return 0;
+ buf = scsi_bios_ptable(Device);
+ if (buf == NULL) return 0;
/*
If the boot sector partition table flag is valid, search for a partition
table entry whose end_head matches one of the standard BusLogic geometry
translations (64/32, 128/32, or 255/63).
*/
- if (*(unsigned short *) (BufferHead->b_data + 0x1FE) == 0xAA55)
+ if (*(unsigned short *) (buf+64) == 0xAA55)
{
- PartitionTable_T *FirstPartitionEntry =
- (PartitionTable_T *) (BufferHead->b_data + 0x1BE);
+ PartitionTable_T *FirstPartitionEntry = (PartitionTable_T *) buf;
PartitionTable_T *PartitionEntry = FirstPartitionEntry;
int SavedCylinders = DiskParameters->Cylinders, PartitionNumber;
unsigned char PartitionEntryEndHead, PartitionEntryEndSector;
DiskParameters->Heads, DiskParameters->Sectors);
}
}
- brelse(BufferHead);
+ kfree(buf);
return 0;
}
int ret;
int extended;
struct ahc_softc *ahc;
- struct buffer_head *bh;
+ unsigned char *buf;
ahc = *((struct ahc_softc **)disk->device->host->hostdata);
- bh = bread(MKDEV(MAJOR(dev), MINOR(dev) & ~0xf), 0, block_size(dev));
+ buf = scsi_bios_ptable(dev);
- if (bh) {
- ret = scsi_partsize(bh, disk->capacity,
+ if (buf) {
+ ret = scsi_partsize(buf, disk->capacity,
&geom[2], &geom[0], &geom[1]);
- brelse(bh);
+ kfree(buf);
if (ret != -1)
return (ret);
}
{
int heads, sectors, cylinders, ret;
struct aic7xxx_host *p;
- struct buffer_head *bh;
+ unsigned char *buf;
p = (struct aic7xxx_host *) disk->device->host->hostdata;
- bh = bread(MKDEV(MAJOR(dev), MINOR(dev)&~0xf), 0, block_size(dev));
+ buf = scsi_bios_ptable(dev);
- if ( bh )
+ if ( buf )
{
- ret = scsi_partsize(bh, disk->capacity, &geom[2], &geom[0], &geom[1]);
- brelse(bh);
+ ret = scsi_partsize(buf, disk->capacity, &geom[2], &geom[0], &geom[1]);
+ kfree(buf);
if ( ret != -1 )
return(ret);
}
return retval;
}
-int
-scsi_register_device(struct Scsi_Device_Template * sdpnt)
-{
- if(sdpnt->next) panic("Device already registered");
- sdpnt->next = scsi_devicelist;
- scsi_devicelist = sdpnt;
- return 0;
-}
-
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
void scsi_initialize_queue(Scsi_Device * SDpnt, struct Scsi_Host * SHpnt);
-int scsi_register_device(struct Scsi_Device_Template * sdpnt);
-/* These are used by loadable modules */
-extern int scsi_register_module(int, void *);
-extern int scsi_unregister_module(int, void *);
-
-/* The different types of modules that we can load and unload */
-#define MODULE_SCSI_HA 1
-#define MODULE_SCSI_CONST 2
-#define MODULE_SCSI_IOCTL 3
-#define MODULE_SCSI_DEV 4
+/*
+ * Driver registration/unregistration.
+ */
+extern int scsi_register_device(struct Scsi_Device_Template *);
+extern int scsi_unregister_device(struct Scsi_Device_Template *);
+extern int scsi_register_host(Scsi_Host_Template *);
+extern int scsi_unregister_host(Scsi_Host_Template *);
/*
#include "scsi.h"
#include "hosts.h"
#include "sd.h"
-#include "ide-scsi.h"
#include <scsi/sg.h>
#define IDESCSI_DEBUG_LOG 0
return 0;
}
-static Scsi_Host_Template idescsi_template = IDESCSI;
+static Scsi_Host_Template idescsi_template = {
+ module: THIS_MODULE,
+ name: "idescsi",
+ detect: idescsi_detect,
+ release: idescsi_release,
+ info: idescsi_info,
+ ioctl: idescsi_ioctl,
+ queuecommand: idescsi_queue,
+ abort: idescsi_abort,
+ reset: idescsi_reset,
+ bios_param: idescsi_bios,
+ can_queue: 10,
+ this_id: -1,
+ sg_tablesize: 256,
+ cmd_per_lun: 5,
+ use_clustering: DISABLE_CLUSTERING,
+ emulated: 1,
+};
static int __init init_idescsi_module(void)
{
idescsi_init();
- idescsi_template.module = THIS_MODULE;
- scsi_register_module (MODULE_SCSI_HA, &idescsi_template);
+ scsi_register_host(&idescsi_template);
return 0;
}
byte media[] = {TYPE_DISK, TYPE_TAPE, TYPE_PROCESSOR, TYPE_WORM, TYPE_ROM, TYPE_SCANNER, TYPE_MOD, 255};
int i, failed;
- scsi_unregister_module (MODULE_SCSI_HA, &idescsi_template);
+ scsi_unregister_host(&idescsi_template);
for (i = 0; media[i] != 255; i++) {
failed = 0;
while ((drive = ide_scan_devices (media[i], idescsi_driver.name, &idescsi_driver, failed)) != NULL)
+++ /dev/null
-/*
- * linux/drivers/scsi/ide-scsi.h
- *
- * Copyright (C) 1996, 1997 Gadi Oxman <gadio@netvision.net.il>
- */
-
-#ifndef IDESCSI_H
-#define IDESCSI_H
-
-extern int idescsi_detect (Scsi_Host_Template *host_template);
-extern int idescsi_release (struct Scsi_Host *host);
-extern const char *idescsi_info (struct Scsi_Host *host);
-extern int idescsi_ioctl (Scsi_Device *dev, int cmd, void *arg);
-extern int idescsi_queue (Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *));
-extern int idescsi_abort (Scsi_Cmnd *cmd);
-extern int idescsi_reset (Scsi_Cmnd *cmd, unsigned int resetflags);
-extern int idescsi_bios (Disk *disk, kdev_t dev, int *parm);
-
-#define IDESCSI { \
- name: "idescsi", /* name */ \
- detect: idescsi_detect, /* detect */ \
- release: idescsi_release, /* release */ \
- info: idescsi_info, /* info */ \
- ioctl: idescsi_ioctl, /* ioctl */ \
- queuecommand: idescsi_queue, /* queuecommand */ \
- abort: idescsi_abort, /* abort */ \
- reset: idescsi_reset, /* reset */ \
- bios_param: idescsi_bios, /* bios_param */ \
- can_queue: 10, /* can_queue */ \
- this_id: -1, /* this_id */ \
- sg_tablesize: 256, /* sg_tablesize */ \
- cmd_per_lun: 5, /* cmd_per_lun */ \
- use_clustering: DISABLE_CLUSTERING, /* clustering */ \
- emulated: 1 /* emulated */ \
-}
-
-#endif /* IDESCSI_H */
static int
mega_partsize(Disk * disk, kdev_t dev, int *geom)
{
- struct buffer_head *bh;
struct partition *p, *largest = NULL;
int i, largest_cyl;
int heads, cyls, sectors;
int capacity = disk->capacity;
+ unsigned char *buf;
- int ma = MAJOR(dev);
- int mi = (MINOR(dev) & ~0xf);
-
- int block = 1024;
-
- if(blksize_size[ma])
- block = blksize_size[ma][mi];
-
- if(!(bh = bread(MKDEV(ma,mi), 0, block)))
+ if (!(buf = scsi_bios_ptable(dev)))
return -1;
- if( *(unsigned short *)(bh->b_data + 510) == 0xAA55 ) {
+ if( *(unsigned short *)(buf + 64) == 0xAA55 ) {
- for( largest_cyl = -1, p = (struct partition *)(0x1BE + bh->b_data),
+ for( largest_cyl = -1, p = (struct partition *)buf,
i = 0; i < 4; ++i, ++p) {
if (!p->sys_ind) continue;
sectors = largest->end_sector & 0x3f;
if (heads == 0 || sectors == 0) {
- brelse(bh);
+ kfree(buf);
return -1;
}
geom[1] = sectors;
geom[2] = cyls;
- brelse(bh);
+ kfree(buf);
return 0;
}
- brelse(bh);
+ kfree(buf);
return -1;
}
struct Scsi_Device_Template osst_template =
{
+ module: THIS_MODULE,
name: "OnStream tape",
tag: "osst",
scsi_type: TYPE_TAPE,
static int __init init_osst(void)
{
validate_options();
- osst_template.module = THIS_MODULE;
- return scsi_register_module(MODULE_SCSI_DEV, &osst_template);
+ return scsi_register_device(&osst_template);
}
static void __exit exit_osst (void)
int i;
OS_Scsi_Tape * STp;
- scsi_unregister_module(MODULE_SCSI_DEV, &osst_template);
+ scsi_unregister_device(&osst_template);
#ifdef CONFIG_DEVFS_FS
devfs_unregister_chrdev(MAJOR_NR, "osst");
#else
}
aha152x_setup("PCMCIA setup", ints);
- scsi_register_module(MODULE_SCSI_HA, &driver_template);
+ scsi_register_host(&driver_template);
tail = &link->dev;
info->ndev = 0;
return;
}
- scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+ scsi_unregister_host(&driver_template);
link->dev = NULL;
CardServices(ReleaseConfiguration, link->handle);
ints[2] = link->irq.AssignedIRQ;
fdomain_setup("PCMCIA setup", ints);
- scsi_register_module(MODULE_SCSI_HA, &driver_template);
+ scsi_register_host(&driver_template);
tail = &link->dev;
info->ndev = 0;
return;
}
- scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+ scsi_unregister_host(&driver_template);
link->dev = NULL;
CardServices(ReleaseConfiguration, link->handle);
goto cs_failed;
}
- scsi_register_module(MODULE_SCSI_HA, &driver_template);
+ scsi_register_host(&driver_template);
DEBUG(0, "GET_SCSI_INFO\n");
tail = &link->dev;
}
/* Unlink the device chain */
- scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+ scsi_unregister_host(&driver_template);
link->dev = NULL;
if (link->win) {
else
qlogicfas_preset(link->io.BasePort1, link->irq.AssignedIRQ);
- scsi_register_module(MODULE_SCSI_HA, &driver_template);
+ scsi_register_host(&driver_template);
tail = &link->dev;
info->ndev = 0;
return;
}
- scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+ scsi_unregister_host(&driver_template);
link->dev = NULL;
CardServices(ReleaseConfiguration, link->handle);
SCpnt->done(SCpnt);
}
-static int scsi_register_host(Scsi_Host_Template *);
-static int scsi_unregister_host(Scsi_Host_Template *);
-
/*
* Function: scsi_release_commandblocks()
*
* This entry point should be called by a driver if it is trying
* to add a low level scsi driver to the system.
*/
-static int scsi_register_host(Scsi_Host_Template * tpnt)
+int scsi_register_host(Scsi_Host_Template * tpnt)
{
int pcount;
struct Scsi_Host *shpnt;
* Similarly, this entry point should be called by a loadable module if it
* is trying to remove a low level scsi driver from the system.
*/
-static int scsi_unregister_host(Scsi_Host_Template * tpnt)
+int scsi_unregister_host(Scsi_Host_Template * tpnt)
{
int online_status;
int pcount0, pcount;
return -1;
}
-static int scsi_unregister_device(struct Scsi_Device_Template *tpnt);
-
/*
* This entry point should be called by a loadable module if it is trying
* add a high level scsi driver to the system.
*/
-static int scsi_register_device_module(struct Scsi_Device_Template *tpnt)
+int scsi_register_device(struct Scsi_Device_Template *tpnt)
{
Scsi_Device *SDpnt;
struct Scsi_Host *shpnt;
int out_of_space = 0;
+#ifdef CONFIG_KMOD
+ if (scsi_hosts == NULL)
+ request_module("scsi_hostadapter");
+#endif
+
if (tpnt->next)
return 1;
- scsi_register_device(tpnt);
+ tpnt->next = scsi_devicelist;
+ scsi_devicelist = tpnt;
+
/*
* First scan the devices that we know about, and see if we notice them.
*/
return 0;
}
-static int scsi_unregister_device(struct Scsi_Device_Template *tpnt)
+int scsi_unregister_device(struct Scsi_Device_Template *tpnt)
{
Scsi_Device *SDpnt;
struct Scsi_Host *shpnt;
return -1;
}
-
-/* This function should be called by drivers which needs to register
- * with the midlevel scsi system. As of 2.4.0-test9pre3 this is our
- * main device/hosts register function /mathiasen
- */
-int scsi_register_module(int module_type, void *ptr)
-{
- switch (module_type) {
- case MODULE_SCSI_HA:
- return scsi_register_host((Scsi_Host_Template *) ptr);
-
- /* Load upper level device handler of some kind */
- case MODULE_SCSI_DEV:
-#ifdef CONFIG_KMOD
- if (scsi_hosts == NULL)
- request_module("scsi_hostadapter");
-#endif
- return scsi_register_device_module((struct Scsi_Device_Template *) ptr);
- /* The rest of these are not yet implemented */
-
- /* Load constants.o */
- case MODULE_SCSI_CONST:
-
- /* Load specialized ioctl handler for some device. Intended for
- * cdroms that have non-SCSI2 audio command sets. */
- case MODULE_SCSI_IOCTL:
-
- default:
- return 1;
- }
-}
-
-/* Reverse the actions taken above
- */
-int scsi_unregister_module(int module_type, void *ptr)
-{
- int retval = 0;
-
- switch (module_type) {
- case MODULE_SCSI_HA:
- retval = scsi_unregister_host((Scsi_Host_Template *) ptr);
- break;
- case MODULE_SCSI_DEV:
- retval = scsi_unregister_device((struct Scsi_Device_Template *)ptr);
- break;
- /* The rest of these are not yet implemented. */
- case MODULE_SCSI_CONST:
- case MODULE_SCSI_IOCTL:
- break;
- default:;
- }
- return retval;
-}
-
#ifdef CONFIG_PROC_FS
/*
* Function: scsi_dump_status
/*
* Prototypes for functions in scsicam.c
*/
-extern int scsi_partsize(struct buffer_head *bh, unsigned long capacity,
+extern int scsi_partsize(unsigned char *buf, unsigned long capacity,
unsigned int *cyls, unsigned int *hds,
unsigned int *secs);
static int __init init_this_scsi_driver(void)
{
driver_template.module = THIS_MODULE;
- scsi_register_module(MODULE_SCSI_HA, &driver_template);
+ scsi_register_host(&driver_template);
if (driver_template.present)
return 0;
- scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+ scsi_unregister_host(&driver_template);
return -ENODEV;
}
static void __exit exit_this_scsi_driver(void)
{
- scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+ scsi_unregister_host(&driver_template);
}
module_init(init_this_scsi_driver);
* This source file contains the symbol table used by scsi loadable
* modules.
*/
-EXPORT_SYMBOL(scsi_register_module);
-EXPORT_SYMBOL(scsi_unregister_module);
+EXPORT_SYMBOL(scsi_register_device);
+EXPORT_SYMBOL(scsi_unregister_device);
+EXPORT_SYMBOL(scsi_register_host);
+EXPORT_SYMBOL(scsi_unregister_host);
EXPORT_SYMBOL(scsi_register);
EXPORT_SYMBOL(scsi_unregister);
EXPORT_SYMBOL(scsicam_bios_param);
EXPORT_SYMBOL(scsi_partsize);
+EXPORT_SYMBOL(scsi_bios_ptable);
EXPORT_SYMBOL(scsi_allocate_device);
EXPORT_SYMBOL(scsi_do_cmd);
EXPORT_SYMBOL(scsi_command_size);
static int setsize(unsigned long capacity, unsigned int *cyls, unsigned int *hds,
unsigned int *secs);
+unsigned char *scsi_bios_ptable(kdev_t dev)
+{
+ unsigned char *res = kmalloc(66, GFP_KERNEL);
+ kdev_t rdev = MKDEV(MAJOR(dev), MINOR(dev) & ~0xf);
+
+ if (res) {
+ struct buffer_head *bh = bread(rdev, 0, block_size(rdev));
+ if (bh) {
+ memcpy(res, bh->b_data + 0x1be, 66);
+ } else {
+ kfree(res);
+ res = NULL;
+ }
+ }
+ return res;
+}
/*
* Function : int scsicam_bios_param (Disk *disk, int dev, int *ip)
kdev_t dev, /* Device major, minor */
int *ip /* Heads, sectors, cylinders in that order */ )
{
- struct buffer_head *bh;
int ret_code;
int size = disk->capacity;
unsigned long temp_cyl;
+ unsigned char *p = scsi_bios_ptable(dev);
- int ma = MAJOR(dev);
- int mi = (MINOR(dev) & ~0xf);
-
- int block = 1024;
-
- if(blksize_size[ma])
- block = blksize_size[ma][mi];
-
- if (!(bh = bread(MKDEV(ma,mi), 0, block)))
+ if (!p)
return -1;
/* try to infer mapping from partition table */
- ret_code = scsi_partsize(bh, (unsigned long) size, (unsigned int *) ip + 2,
+ ret_code = scsi_partsize(p, (unsigned long) size, (unsigned int *) ip + 2,
(unsigned int *) ip + 0, (unsigned int *) ip + 1);
- brelse(bh);
+ kfree(p);
if (ret_code == -1) {
/* pick some standard mapping with at most 1024 cylinders,
}
/*
- * Function : static int scsi_partsize(struct buffer_head *bh, unsigned long
+ * Function : static int scsi_partsize(unsigned char *buf, unsigned long
* capacity,unsigned int *cyls, unsigned int *hds, unsigned int *secs);
*
* Purpose : to determine the BIOS mapping used to create the partition
*
*/
-int scsi_partsize(struct buffer_head *bh, unsigned long capacity,
+int scsi_partsize(unsigned char *buf, unsigned long capacity,
unsigned int *cyls, unsigned int *hds, unsigned int *secs)
{
- struct partition *p, *largest = NULL;
+ struct partition *p = (struct partition *)buf, *largest = NULL;
int i, largest_cyl;
int cyl, ext_cyl, end_head, end_cyl, end_sector;
unsigned int logical_end, physical_end, ext_physical_end;
- if (*(unsigned short *) (bh->b_data + 510) == 0xAA55) {
- for (largest_cyl = -1, p = (struct partition *)
- (0x1BE + bh->b_data), i = 0; i < 4; ++i, ++p) {
+ if (*(unsigned short *) (buf + 64) == 0xAA55) {
+ for (largest_cyl = -1, i = 0; i < 4; ++i, ++p) {
if (!p->sys_ind)
continue;
#ifdef DEBUG
static int sd_init_command(Scsi_Cmnd *);
static struct Scsi_Device_Template sd_template = {
+ module:THIS_MODULE,
name:"disk",
tag:"sd",
scsi_type:TYPE_DISK,
static int __init init_sd(void)
{
sd_template.module = THIS_MODULE;
- return scsi_register_module(MODULE_SCSI_DEV, &sd_template);
+ return scsi_register_device(&sd_template);
}
static void __exit exit_sd(void)
{
int i;
- scsi_unregister_module(MODULE_SCSI_DEV, &sd_template);
+ scsi_unregister_device(&sd_template);
for (i = 0; i < N_USED_SD_MAJORS; i++)
devfs_unregister_blkdev(SD_MAJOR(i), "sd");
static struct Scsi_Device_Template sg_template =
{
+ module:THIS_MODULE,
tag:"sg",
scsi_type:0xff,
major:SCSI_GENERIC_MAJOR,
static int __init init_sg(void) {
if (def_reserved_size >= 0)
sg_big_buff = def_reserved_size;
- sg_template.module = THIS_MODULE;
- return scsi_register_module(MODULE_SCSI_DEV, &sg_template);
+ return scsi_register_device(&sg_template);
}
static void __exit exit_sg( void)
#ifdef CONFIG_PROC_FS
sg_proc_cleanup();
#endif /* CONFIG_PROC_FS */
- scsi_unregister_module(MODULE_SCSI_DEV, &sg_template);
+ scsi_unregister_device(&sg_template);
devfs_unregister_chrdev(SCSI_GENERIC_MAJOR, "sg");
if(sg_dev_arr != NULL) {
kfree((char *)sg_dev_arr);
static struct Scsi_Device_Template sr_template =
{
+ module:THIS_MODULE,
name:"cdrom",
tag:"sr",
scsi_type:TYPE_ROM,
static int __init init_sr(void)
{
- sr_template.module = THIS_MODULE;
- return scsi_register_module(MODULE_SCSI_DEV, &sr_template);
+ return scsi_register_device(&sr_template);
}
static void __exit exit_sr(void)
{
- scsi_unregister_module(MODULE_SCSI_DEV, &sr_template);
+ scsi_unregister_device(&sr_template);
devfs_unregister_blkdev(MAJOR_NR, "sr");
sr_registered--;
if (scsi_CDs != NULL) {
validate_options();
st_template.module = THIS_MODULE;
- return scsi_register_module(MODULE_SCSI_DEV, &st_template);
+ return scsi_register_device(&st_template);
}
static void __exit exit_st(void)
{
int i;
- scsi_unregister_module(MODULE_SCSI_DEV, &st_template);
+ scsi_unregister_device(&st_template);
devfs_unregister_chrdev(SCSI_TAPE_MAJOR, "st");
st_registered--;
if (scsi_tapes != NULL) {
#include <asm/unaligned.h>
/*
- * Function : static int partsize(struct buffer_head *bh, unsigned long
+ * Function : static int partsize(unsigned char *buf, unsigned long
* capacity,unsigned int *cyls, unsigned int *hds, unsigned int *secs);
*
* Purpose : to determine the BIOS mapping used to create the partition
*
*/
-static int partsize(struct buffer_head *bh, unsigned long capacity,
+static int partsize(unsigned char *buf, unsigned long capacity,
unsigned int *cyls, unsigned int *hds, unsigned int *secs) {
struct partition *p, *largest = NULL;
int i, largest_cyl;
unsigned int logical_end, physical_end, ext_physical_end;
- if (*(unsigned short *) (bh->b_data+510) == 0xAA55) {
- for (largest_cyl = -1, p = (struct partition *)
- (0x1BE + bh->b_data), i = 0; i < 4; ++i, ++p) {
+ if (*(unsigned short *) (buf+64) == 0xAA55) {
+ for (largest_cyl = -1, p = (struct partition *) buf,
+ i = 0; i < 4; ++i, ++p) {
if (!p->sys_ind)
continue;
cyl = p->cyl + ((p->sector & 0xc0) << 2);
{
int heads, sectors, cylinders;
PACB pACB = (PACB) disk->device->host->hostdata;
- struct buffer_head *bh;
int ret_code = -1;
int size = disk->capacity;
+ unsigned char *buf;
- if ((bh = bread(MKDEV(MAJOR(devno), MINOR(devno)&~0xf), 0, 1024)))
+ if ((buf = scsi_bios_ptable(devno)))
{
/* try to infer mapping from partition table */
- ret_code = partsize (bh, (unsigned long) size, (unsigned int *) geom + 2,
+ ret_code = partsize (buf, (unsigned long) size, (unsigned int *) geom + 2,
(unsigned int *) geom + 0, (unsigned int *) geom + 1);
- brelse (bh);
+ kfree (buf);
}
if (ret_code == -1)
{
(struct hpusbscsi *) new->ctempl.proc_dir = new;
new->ctempl.module = THIS_MODULE;
- if (scsi_register_module (MODULE_SCSI_HA, &(new->ctempl)))
+ if (scsi_register_host(&new->ctempl))
goto err_out;
/* adding to list for module unload */
tmp = tmp->next;
o = (struct hpusbscsi *)old;
usb_unlink_urb(&o->controlurb);
- scsi_unregister_module(MODULE_SCSI_HA,&o->ctempl);
+ scsi_unregister_host(&o->ctempl);
kfree(old);
}
}
MTS_DEBUG_GOT_HERE();
- scsi_unregister_module(MODULE_SCSI_HA, &(to_remove->ctempl));
+ scsi_unregister_host(&to_remove->ctempl);
unlock_kernel();
kfree( to_remove );
MTS_DEBUG("registering SCSI module\n");
new_desc->ctempl.module = THIS_MODULE;
- result = scsi_register_module(MODULE_SCSI_HA, &(new_desc->ctempl));
+ result = scsi_register_host(&new_desc->ctempl);
/* Will get hit back in microtek_detect by this func */
if ( result )
{
- MTS_ERROR( "error %d from scsi_register_module! Help!\n",
+ MTS_ERROR( "error %d from scsi_register_host! Help!\n",
(int)result );
/* FIXME: need more cleanup? */
/* now register - our detect function will be called */
ss->htmplt.module = THIS_MODULE;
- scsi_register_module(MODULE_SCSI_HA, &(ss->htmplt));
+ scsi_register_host(&ss->htmplt);
/* lock access to the data structures */
down(&us_list_semaphore);
* interface
*/
for (next = us_list; next; next = next->next) {
- US_DEBUGP("-- calling scsi_unregister_module()\n");
- scsi_unregister_module(MODULE_SCSI_HA, &(next->htmplt));
+ US_DEBUGP("-- calling scsi_unregister_host()\n");
+ scsi_unregister_host(&next->htmplt);
}
/* While there are still structures, free them. Note that we are
goto abort_toobig;
block = __adfs_block_map(inode->i_sb, inode->i_ino, block);
- if (block) {
- bh->b_dev = inode->i_dev;
- bh->b_blocknr = block;
- bh->b_state |= (1UL << BH_Mapped);
- }
+ if (block)
+ map_bh(bh, inode->i_sb, block);
return 0;
}
/* don't support allocation of blocks yet */
ext_bh = affs_get_extblock(inode, ext);
if (IS_ERR(ext_bh))
goto err_ext;
- bh_result->b_blocknr = be32_to_cpu(AFFS_BLOCK(sb, ext_bh, block));
- bh_result->b_dev = inode->i_dev;
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, sb, be32_to_cpu(AFFS_BLOCK(sb, ext_bh, block)));
if (create) {
u32 blocknr = affs_alloc_block(inode, ext_bh->b_blocknr);
struct inode * dir = f->f_dentry->d_inode;
struct buffer_head * bh;
struct bfs_dirent * de;
- kdev_t dev = dir->i_dev;
unsigned int offset;
int block;
if (f->f_pos & (BFS_DIRENT_SIZE-1)) {
printf("Bad f_pos=%08lx for %s:%08lx\n", (unsigned long)f->f_pos,
- bdevname(dev), dir->i_ino);
+ bdevname(dir->i_dev), dir->i_ino);
return -EBADF;
}
struct buffer_head * bh;
struct bfs_dirent * de;
int block, sblock, eblock, off;
- kdev_t dev;
int i;
dprintf("name=%s, namelen=%d\n", name, namelen);
if (namelen > BFS_NAMELEN)
return -ENAMETOOLONG;
- dev = dir->i_dev;
sblock = dir->iu_sblock;
eblock = dir->iu_eblock;
for (block=sblock; block<=eblock; block++) {
mmap: generic_file_mmap,
};
-static int bfs_move_block(unsigned long from, unsigned long to, kdev_t dev)
+static int bfs_move_block(unsigned long from, unsigned long to, struct super_block *sb)
{
struct buffer_head *bh, *new;
- bh = bread(dev, from, BFS_BSIZE);
+ bh = sb_bread(sb, from);
if (!bh)
return -EIO;
- new = getblk(dev, to, BFS_BSIZE);
+ new = sb_getblk(sb, to);
memcpy(new->b_data, bh->b_data, bh->b_size);
mark_buffer_dirty(new);
bforget(bh);
return 0;
}
-static int bfs_move_blocks(kdev_t dev, unsigned long start, unsigned long end,
+static int bfs_move_blocks(struct super_block *sb, unsigned long start, unsigned long end,
unsigned long where)
{
unsigned long i;
dprintf("%08lx-%08lx->%08lx\n", start, end, where);
for (i = start; i <= end; i++)
- if(bfs_move_block(i, where + i, dev)) {
+ if(bfs_move_block(i, where + i, sb)) {
dprintf("failed to move block %08lx -> %08lx\n", i, where + i);
return -EIO;
}
if (!create) {
if (phys <= inode->iu_eblock) {
dprintf("c=%d, b=%08lx, phys=%08lx (granted)\n", create, block, phys);
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, sb, phys);
}
return 0;
}
if (inode->i_size && phys <= inode->iu_eblock) {
dprintf("c=%d, b=%08lx, phys=%08lx (interim block granted)\n",
create, block, phys);
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, sb, phys);
return 0;
}
if (inode->iu_eblock == sb->su_lf_eblk) {
dprintf("c=%d, b=%08lx, phys=%08lx (simple extension)\n",
create, block, phys);
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, sb, phys);
sb->su_freeb -= phys - inode->iu_eblock;
sb->su_lf_eblk = inode->iu_eblock = phys;
mark_inode_dirty(inode);
/* Ok, we have to move this entire file to the next free block */
phys = sb->su_lf_eblk + 1;
if (inode->iu_sblock) { /* if data starts on block 0 then there is no data */
- err = bfs_move_blocks(inode->i_dev, inode->iu_sblock,
+ err = bfs_move_blocks(inode->i_sb, inode->iu_sblock,
inode->iu_eblock, phys);
if (err) {
dprintf("failed to move ino=%08lx -> fs corruption\n", inode->i_ino);
sb->su_freeb -= inode->iu_eblock - inode->iu_sblock + 1 - inode->i_blocks;
mark_inode_dirty(inode);
mark_buffer_dirty(sbh);
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, sb, phys);
out:
unlock_kernel();
return err;
static void bfs_read_inode(struct inode * inode)
{
unsigned long ino = inode->i_ino;
- kdev_t dev = inode->i_dev;
struct bfs_inode * di;
struct buffer_head * bh;
int block, off;
if (ino < BFS_ROOT_INO || ino > inode->i_sb->su_lasti) {
- printf("Bad inode number %s:%08lx\n", bdevname(dev), ino);
+ printf("Bad inode number %s:%08lx\n", bdevname(inode->i_dev), ino);
make_bad_inode(inode);
return;
}
block = (ino - BFS_ROOT_INO)/BFS_INODES_PER_BLOCK + 1;
bh = sb_bread(inode->i_sb, block);
if (!bh) {
- printf("Unable to read inode %s:%08lx\n", bdevname(dev), ino);
+ printf("Unable to read inode %s:%08lx\n", bdevname(inode->i_dev), ino);
make_bad_inode(inode);
return;
}
static void bfs_write_inode(struct inode * inode, int unused)
{
unsigned long ino = inode->i_ino;
- kdev_t dev = inode->i_dev;
struct bfs_inode * di;
struct buffer_head * bh;
int block, off;
if (ino < BFS_ROOT_INO || ino > inode->i_sb->su_lasti) {
- printf("Bad inode number %s:%08lx\n", bdevname(dev), ino);
+ printf("Bad inode number %s:%08lx\n", bdevname(inode->i_dev), ino);
return;
}
block = (ino - BFS_ROOT_INO)/BFS_INODES_PER_BLOCK + 1;
bh = sb_bread(inode->i_sb, block);
if (!bh) {
- printf("Unable to read inode %s:%08lx\n", bdevname(dev), ino);
+ printf("Unable to read inode %s:%08lx\n", bdevname(inode->i_dev), ino);
unlock_kernel();
return;
}
static void bfs_delete_inode(struct inode * inode)
{
unsigned long ino = inode->i_ino;
- kdev_t dev = inode->i_dev;
struct bfs_inode * di;
struct buffer_head * bh;
int block, off;
block = (ino - BFS_ROOT_INO)/BFS_INODES_PER_BLOCK + 1;
bh = sb_bread(s, block);
if (!bh) {
- printf("Unable to read inode %s:%08lx\n", bdevname(dev), ino);
+ printf("Unable to read inode %s:%08lx\n", bdevname(inode->i_dev), ino);
unlock_kernel();
return;
}
static struct super_block * bfs_read_super(struct super_block * s,
void * data, int silent)
{
- kdev_t dev;
struct buffer_head * bh;
struct bfs_super_block * bfs_sb;
struct inode * inode;
int i, imap_len;
- dev = s->s_dev;
sb_set_blocksize(s, BFS_BSIZE);
bh = sb_bread(s, 0);
if (bfs_sb->s_magic != BFS_MAGIC) {
if (!silent)
printf("No BFS filesystem on %s (magic=%08x)\n",
- bdevname(dev), bfs_sb->s_magic);
+ bdevname(s->s_dev), bfs_sb->s_magic);
goto out;
}
if (BFS_UNCLEAN(bfs_sb, s) && !silent)
- printf("%s is unclean, continuing\n", bdevname(dev));
+ printf("%s is unclean, continuing\n", bdevname(s->s_dev));
s->s_magic = BFS_MAGIC;
s->su_bfs_sb = bfs_sb;
else
clear_bit(BIO_UPTODATE, &bio->bi_flags);
- return bio->bi_end_io(bio, nr_sectors);
+ if (bio->bi_end_io)
+ return bio->bi_end_io(bio, nr_sectors);
+
+ return 0;
}
static void __init biovec_init_pool(void)
de = devfs_find_handle (NULL, NULL, i, MINOR (dev),
DEVFS_SPECIAL_BLK, 0);
- if (de) bdops = devfs_get_ops (de);
+ if (de) {
+ bdops = devfs_get_ops (de);
+ devfs_put_ops (de); /* We're running in owner module */
+ }
}
if (bdops == NULL)
return 0;
lock_kernel();
sync_inodes_sb(sb);
- DQUOT_SYNC(dev);
+ DQUOT_SYNC(sb);
lock_super(sb);
if (sb->s_dirt && sb->s_op && sb->s_op->write_super)
sb->s_op->write_super(sb);
lock_kernel();
sync_inodes(dev);
- DQUOT_SYNC(dev);
+ if (dev) {
+ struct super_block *sb = get_super(dev);
+ if (sb) {
+ DQUOT_SYNC(sb);
+ drop_super(sb);
+ }
+ } else
+ DQUOT_SYNC(NULL);
sync_supers(dev);
unlock_kernel();
return 1;
}
-void create_empty_buffers(struct page *page, kdev_t dev, unsigned long blocksize)
+void create_empty_buffers(struct page *page, unsigned long blocksize)
{
struct buffer_head *bh, *head, *tail;
bh = head;
do {
- bh->b_dev = dev;
- bh->b_blocknr = 0;
bh->b_end_io = NULL;
tail = bh;
bh = bh->b_this_page;
BUG();
if (!page->buffers)
- create_empty_buffers(page, inode->i_dev, 1 << inode->i_blkbits);
+ create_empty_buffers(page, 1 << inode->i_blkbits);
head = page->buffers;
block = page->index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
blocksize = 1 << inode->i_blkbits;
if (!page->buffers)
- create_empty_buffers(page, inode->i_dev, blocksize);
+ create_empty_buffers(page, blocksize);
head = page->buffers;
bbits = inode->i_blkbits;
PAGE_BUG(page);
blocksize = 1 << inode->i_blkbits;
if (!page->buffers)
- create_empty_buffers(page, inode->i_dev, blocksize);
+ create_empty_buffers(page, blocksize);
head = page->buffers;
blocks = PAGE_CACHE_SIZE >> inode->i_blkbits;
goto out;
if (!page->buffers)
- create_empty_buffers(page, inode->i_dev, blocksize);
+ create_empty_buffers(page, blocksize);
/* Find the buffer that contains "offset" */
bh = page->buffers;
panic("brw_page: page not locked for I/O");
if (!page->buffers)
- create_empty_buffers(page, dev, size);
+ create_empty_buffers(page, size);
head = bh = page->buffers;
/* Stage 1: lock all the buffers */
do {
lock_buffer(bh);
bh->b_blocknr = *(b++);
+ bh->b_dev = dev;
set_bit(BH_Mapped, &bh->b_state);
set_buffer_async_io(bh);
bh = bh->b_this_page;
Work sponsored by SGI.
v0.92
20000306 Richard Gooch <rgooch@atnf.csiro.au>
- Added DEVFS_FL_NO_PERSISTENCE flag.
+ Added DEVFS_ FL_NO_PERSISTENCE flag.
Removed unnecessary call to <update_devfs_inode_from_entry> in
<devfs_readdir>.
Work sponsored by SGI.
20011122 Richard Gooch <rgooch@atnf.csiro.au>
Use slab cache rather than fixed buffer for devfsd events.
v1.1
+ 20011125 Richard Gooch <rgooch@atnf.csiro.au>
+ Send DEVFSD_NOTIFY_REGISTERED events in <devfs_mk_dir>.
+ 20011127 Richard Gooch <rgooch@atnf.csiro.au>
+ Fixed locking bug in <devfs_d_revalidate_wait> due to typo.
+ Do not send CREATE, CHANGE, ASYNC_OPEN or DELETE events from
+ devfsd or children.
+ v1.2
+ 20011202 Richard Gooch <rgooch@atnf.csiro.au>
+ Fixed bug in <devfsd_read>: was dereferencing freed pointer.
+ v1.3
+ 20011203 Richard Gooch <rgooch@atnf.csiro.au>
+ Fixed bug in <devfsd_close>: was dereferencing freed pointer.
+ Added process group check for devfsd privileges.
+ v1.4
+ 20011204 Richard Gooch <rgooch@atnf.csiro.au>
+ Use SLAB_ATOMIC in <devfsd_notify_de> from <devfs_d_delete>.
+ v1.5
+ 20011211 Richard Gooch <rgooch@atnf.csiro.au>
+ Return old entry in <devfs_mk_dir> for 2.4.x kernels.
+ 20011212 Richard Gooch <rgooch@atnf.csiro.au>
+ Increment refcount on module in <check_disc_changed>.
+ 20011215 Richard Gooch <rgooch@atnf.csiro.au>
+ Created <devfs_get_handle> and exported <devfs_put>.
+ Increment refcount on module in <devfs_get_ops>.
+ Created <devfs_put_ops>.
+ v1.6
+ 20011216 Richard Gooch <rgooch@atnf.csiro.au>
+ Added poisoning to <devfs_put>.
+ Improved debugging messages.
+ v1.7
+ 20011221 Richard Gooch <rgooch@atnf.csiro.au>
+ Corrected (made useful) debugging message in <unregister>.
+ Moved <kmem_cache_create> in <mount_devfs_fs> to <init_devfs_fs>.
+ 20011224 Richard Gooch <rgooch@atnf.csiro.au>
+ Added magic number to guard against scribbling drivers.
+ 20011226 Richard Gooch <rgooch@atnf.csiro.au>
+ Only return old entry in <devfs_mk_dir> if a directory.
+ Defined macros for error and debug messages.
+ v1.8
*/
#include <linux/types.h>
#include <linux/errno.h>
#include <asm/bitops.h>
#include <asm/atomic.h>
-#define DEVFS_VERSION "1.1 (20011122)"
+#define DEVFS_VERSION "1.8 (20011226)"
#define DEVFS_NAME "devfs"
#define FIRST_INODE 1
#define STRING_LENGTH 256
+#define FAKE_BLOCK_SIZE 1024
+#define POISON_PTR ( *(void **) poison_array )
+#define MAGIC_VALUE 0x327db823
#ifndef TRUE
# define TRUE 1
#define OPTION_MOUNT 0x01
#define OPTION_ONLY 0x02
-#define OOPS(format, args...) {printk (format, ## args); \
- printk ("Forcing Oops\n"); \
- BUG();}
+#define PRINTK(format, args...) \
+ {printk (KERN_ERR "%s" format, __FUNCTION__ , ## args);}
+
+#define OOPS(format, args...) \
+ {printk (KERN_CRIT "%s" format, __FUNCTION__ , ## args); \
+ printk ("Forcing Oops\n"); \
+ BUG();}
+
+#ifdef CONFIG_DEVFS_DEBUG
+# define VERIFY_ENTRY(de) \
+ {if ((de) && (de)->magic_number != MAGIC_VALUE) \
+ OOPS ("(%p): bad magic value: %x\n", (de), (de)->magic_number);}
+# define WRITE_ENTRY_MAGIC(de,magic) (de)->magic_number = (magic)
+# define DPRINTK(flag, format, args...) \
+ {if (devfs_debug & flag) \
+ printk (KERN_INFO "%s" format, __FUNCTION__ , ## args);}
+#else
+# define VERIFY_ENTRY(de)
+# define WRITE_ENTRY_MAGIC(de,magic)
+# define DPRINTK(flag, format, args...)
+#endif
+
struct directory_type
{
struct devfs_entry
{
+#ifdef CONFIG_DEVFS_DEBUG
+ unsigned int magic_number;
+#endif
void *info;
atomic_t refcount; /* When this drops to zero, it's unused */
union
struct devfsd_buf_entry *devfsd_last_event;
volatile int devfsd_sleeping;
volatile struct task_struct *devfsd_task;
+ volatile pid_t devfsd_pgrp;
volatile struct file *devfsd_file;
struct devfsd_notify_struct *devfsd_info;
volatile unsigned long devfsd_event_mask;
static unsigned int stat_num_entries;
static unsigned int stat_num_bytes;
#endif
+static unsigned char poison_array[8] =
+ {0x5a, 0x5a, 0x5a, 0x5a, 0x5a, 0x5a, 0x5a, 0x5a};
#ifdef CONFIG_DEVFS_MOUNT
static unsigned int boot_options = OPTION_MOUNT;
static struct devfs_entry *devfs_get (struct devfs_entry *de)
{
+ VERIFY_ENTRY (de);
if (de) atomic_inc (&de->refcount);
return de;
} /* End Function devfs_get */
/**
* devfs_put - Put (release) a reference to a devfs entry.
- * @de: The devfs entry.
+ * @de: The handle to the devfs entry.
*/
-static void devfs_put (struct devfs_entry *de)
+void devfs_put (devfs_handle_t de)
{
if (!de) return;
+ VERIFY_ENTRY (de);
+ if (de->info == POISON_PTR) OOPS ("(%p): poisoned pointer\n", de);
if ( !atomic_dec_and_test (&de->refcount) ) return;
- if (de == root_entry)
- OOPS ("%s: devfs_put(): root entry being freed\n", DEVFS_NAME);
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_FREE)
- printk ("%s: devfs_put(%s): de: %p, parent: %p \"%s\"\n",
- DEVFS_NAME, de->name, de, de->parent,
- de->parent ? de->parent->name : "no parent");
-#endif
+ if (de == root_entry) OOPS ("(%p): root entry being freed\n", de);
+ DPRINTK (DEBUG_FREE, "(%s): de: %p, parent: %p \"%s\"\n",
+ de->name, de, de->parent,
+ de->parent ? de->parent->name : "no parent");
if ( S_ISLNK (de->mode) ) kfree (de->u.symlink.linkname);
if ( ( S_ISCHR (de->mode) || S_ISBLK (de->mode) ) && de->u.fcb.autogen )
{
MKDEV (de->u.fcb.u.device.major,
de->u.fcb.u.device.minor) );
}
+ WRITE_ENTRY_MAGIC (de, 0);
#ifdef CONFIG_DEVFS_DEBUG
spin_lock (&stat_lock);
--stat_num_entries;
if ( S_ISLNK (de->mode) ) stat_num_bytes -= de->u.symlink.length + 1;
spin_unlock (&stat_lock);
#endif
+ de->info = POISON_PTR;
kfree (de);
} /* End Function devfs_put */
if ( !S_ISDIR (dir->mode) )
{
- printk ("%s: search_dir(%s): not a directory\n", DEVFS_NAME,dir->name);
+ PRINTK ("(%s): not a directory\n", dir->name);
return NULL;
}
for (curr = dir->u.dir.first; curr != NULL; curr = curr->next)
if ( name && (namelen < 1) ) namelen = strlen (name);
if ( ( new = kmalloc (sizeof *new + namelen, GFP_KERNEL) ) == NULL )
return NULL;
- memset (new, 0, sizeof *new + namelen);
+ memset (new, 0, sizeof *new + namelen); /* Will set '\0' on name */
new->mode = mode;
if ( S_ISDIR (mode) ) rwlock_init (&new->u.dir.lock);
atomic_set (&new->refcount, 1);
spin_unlock (&counter_lock);
if (name) memcpy (new->name, name, namelen);
new->namelen = namelen;
+ WRITE_ENTRY_MAGIC (new, MAGIC_VALUE);
#ifdef CONFIG_DEVFS_DEBUG
spin_lock (&stat_lock);
++stat_num_entries;
* @de: The devfs entry to append.
* @removable: If TRUE, increment the count of removable devices for %dir.
* @old_de: If an existing entry exists, it will be written here. This may
- * be %NULL.
+ * be %NULL. An implicit devfs_get() is performed on this entry.
*
* Append a devfs entry to a directory's list of children, checking first to
* see if an entry of the same name exists. The directory will be locked.
if (old_de) *old_de = NULL;
if ( !S_ISDIR (dir->mode) )
{
- printk ("%s: append_entry(%s): dir: \"%s\" is not a directory\n",
- DEVFS_NAME, de->name, dir->name);
+ PRINTK ("(%s): dir: \"%s\" is not a directory\n", de->name, dir->name);
devfs_put (de);
return -ENOTDIR;
}
if ( ( *dir = _devfs_make_parent_for_leaf (*dir, name, namelen,
&leaf_pos) ) == NULL )
{
- printk ("%s: prepare_leaf(%s): could not create parent path\n",
- DEVFS_NAME, name);
+ PRINTK ("(%s): could not create parent path\n", name);
return NULL;
}
if ( ( de = _devfs_alloc_entry (name + leaf_pos, namelen - leaf_pos,mode) )
== NULL )
{
- printk ("%s: prepare_leaf(%s): could not allocate entry\n",
- DEVFS_NAME, name);
+ PRINTK ("(%s): could not allocate entry\n", name);
devfs_put (*dir);
return NULL;
}
/**
- * find_by_dev - Find a devfs entry in a directory.
+ * _devfs_find_by_dev - Find a devfs entry in a directory.
* @dir: The directory where to search
* @major: The major number to search for.
* @minor: The minor number to search for.
* @type: The type of special file to search for. This may be either
* %DEVFS_SPECIAL_CHR or %DEVFS_SPECIAL_BLK.
*
- * Returns the devfs_entry pointer on success, else %NULL.
+ * Returns the devfs_entry pointer on success, else %NULL. An implicit
+ * devfs_get() is performed.
*/
-static struct devfs_entry *find_by_dev (struct devfs_entry *dir,
- unsigned int major, unsigned int minor,
- char type)
+static struct devfs_entry *_devfs_find_by_dev (struct devfs_entry *dir,
+ unsigned int major,
+ unsigned int minor, char type)
{
struct devfs_entry *entry, *de;
if (dir == NULL) return NULL;
if ( !S_ISDIR (dir->mode) )
{
- printk ("%s: find_by_dev(): not a directory\n", DEVFS_NAME);
+ PRINTK ("(%p): not a directory\n", dir);
devfs_put (dir);
return NULL;
}
for (entry = dir->u.dir.first; entry != NULL; entry = entry->next)
{
if ( !S_ISDIR (entry->mode) ) continue;
- de = find_by_dev (entry, major, minor, type);
+ de = _devfs_find_by_dev (entry, major, minor, type);
if (de)
{
read_unlock (&dir->u.dir.lock);
read_unlock (&dir->u.dir.lock);
devfs_put (dir);
return NULL;
-} /* End Function find_by_dev */
+} /* End Function _devfs_find_by_dev */
/**
- * find_entry - Find a devfs entry.
+ * _devfs_find_entry - Find a devfs entry.
* @dir: The handle to the parent devfs directory entry. If this is %NULL the
* name is relative to the root of the devfs.
- * @name: The name of the entry. This is ignored if @handle is not %NULL.
- * @namelen: The number of characters in @name, not including a %NULL
- * terminator. If this is 0, then @name must be %NULL-terminated and the
- * length is computed internally.
- * @major: The major number. This is used if @handle and @name are %NULL.
- * @minor: The minor number. This is used if @handle and @name are %NULL.
+ * @name: The name of the entry. This may be %NULL.
+ * @major: The major number. This is used if lookup by @name fails.
+ * @minor: The minor number. This is used if lookup by @name fails.
* NOTE: If @major and @minor are both 0, searching by major and minor
* numbers is disabled.
* @type: The type of special file to search for. This may be either
* %DEVFS_SPECIAL_CHR or %DEVFS_SPECIAL_BLK.
* @traverse_symlink: If %TRUE then symbolic links are traversed.
*
- * Returns the devfs_entry pointer on success, else %NULL.
+ * Returns the devfs_entry pointer on success, else %NULL. An implicit
+ * devfs_get() is performed.
*/
-static struct devfs_entry *find_entry (devfs_handle_t dir,
- const char *name, unsigned int namelen,
- unsigned int major, unsigned int minor,
- char type, int traverse_symlink)
+static struct devfs_entry *_devfs_find_entry (devfs_handle_t dir,
+ const char *name,
+ unsigned int major,
+ unsigned int minor,
+ char type, int traverse_symlink)
{
struct devfs_entry *entry;
if (name != NULL)
{
- if (namelen < 1) namelen = strlen (name);
+ unsigned int namelen = strlen (name);
+
if (name[0] == '/')
{
/* Skip leading pathname component */
if (namelen < 2)
{
- printk ("%s: find_entry(%s): too short\n", DEVFS_NAME, name);
+ PRINTK ("(%s): too short\n", name);
return NULL;
}
for (++name, --namelen; (*name != '/') && (namelen > 0);
++name, --namelen);
if (namelen < 2)
{
- printk ("%s: find_entry(%s): too short\n", DEVFS_NAME, name);
+ PRINTK ("(%s): too short\n", name);
return NULL;
}
++name;
}
/* Have to search by major and minor: slow */
if ( (major == 0) && (minor == 0) ) return NULL;
- return find_by_dev (root_entry, major, minor, type);
-} /* End Function find_entry */
+ return _devfs_find_by_dev (root_entry, major, minor, type);
+} /* End Function _devfs_find_entry */
static struct devfs_entry *get_devfs_entry_from_vfs_inode (struct inode *inode)
{
if (inode == NULL) return NULL;
+ VERIFY_ENTRY ( (struct devfs_entry *) inode->u.generic_ip );
return inode->u.generic_ip;
} /* End Function get_devfs_entry_from_vfs_inode */
{
struct task_struct *p;
- for (p = current; p != &init_task; p = p->p_opptr)
+ if (current == fs_info->devfsd_task) return (TRUE);
+ if (current->pgrp == fs_info->devfsd_pgrp) return (TRUE);
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,1)
+ for (p = current->p_opptr; p != &init_task; p = p->p_opptr)
{
if (p == fs_info->devfsd_task) return (TRUE);
}
+#endif
return (FALSE);
} /* End Function is_devfsd_or_child */
static int devfsd_notify_de (struct devfs_entry *de,
unsigned short type, umode_t mode,
- uid_t uid, gid_t gid, struct fs_info *fs_info)
+ uid_t uid, gid_t gid, struct fs_info *fs_info,
+ int atomic)
{
struct devfsd_buf_entry *entry;
struct devfs_entry *curr;
if ( !( fs_info->devfsd_event_mask & (1 << type) ) ) return (FALSE);
- if ( ( entry = kmem_cache_alloc (devfsd_buf_cache, 0) ) == NULL )
+ if ( ( entry = kmem_cache_alloc (devfsd_buf_cache,
+ atomic ? SLAB_ATOMIC : SLAB_KERNEL) )
+ == NULL )
{
atomic_inc (&fs_info->devfsd_overrun_count);
return (FALSE);
static void devfsd_notify (struct devfs_entry *de,unsigned short type,int wait)
{
if (devfsd_notify_de (de, type, de->mode, current->euid,
- current->egid, &fs_info) && wait)
+ current->egid, &fs_info, 0) && wait)
wait_for_devfsd_finished (&fs_info);
} /* End Function devfsd_notify */
if (name == NULL)
{
- printk ("%s: devfs_register(): NULL name pointer\n", DEVFS_NAME);
+ PRINTK ("(): NULL name pointer\n");
return NULL;
}
if (ops == NULL)
if ( S_ISBLK (mode) ) ops = (void *) get_blkfops (major);
if (ops == NULL)
{
- printk ("%s: devfs_register(%s): NULL ops pointer\n",
- DEVFS_NAME, name);
+ PRINTK ("(%s): NULL ops pointer\n", name);
return NULL;
}
- printk ("%s: devfs_register(%s): NULL ops, got %p from major table\n",
- DEVFS_NAME, name, ops);
+ PRINTK ("(%s): NULL ops, got %p from major table\n", name, ops);
}
if ( S_ISDIR (mode) )
{
- printk("%s: devfs_register(%s): creating directories is not allowed\n",
- DEVFS_NAME, name);
+ PRINTK ("(%s): creating directories is not allowed\n", name);
return NULL;
}
if ( S_ISLNK (mode) )
{
- printk ("%s: devfs_register(%s): creating symlinks is not allowed\n",
- DEVFS_NAME, name);
+ PRINTK ("(%s): creating symlinks is not allowed\n", name);
return NULL;
}
if ( ( S_ISCHR (mode) || S_ISBLK (mode) ) &&
{
if ( ( devnum = devfs_alloc_devnum (devtype) ) == NODEV )
{
- printk ("%s: devfs_register(%s): exhausted %s device numbers\n",
- DEVFS_NAME, name, S_ISCHR (mode) ? "char" : "block");
+ PRINTK ("(%s): exhausted %s device numbers\n",
+ name, S_ISCHR (mode) ? "char" : "block");
return NULL;
}
major = MAJOR (devnum);
}
if ( ( de = _devfs_prepare_leaf (&dir, name, mode) ) == NULL )
{
- printk ("%s: devfs_register(%s): could not prepare leaf\n",
- DEVFS_NAME, name);
+ PRINTK ("(%s): could not prepare leaf\n", name);
if (devnum != NODEV) devfs_dealloc_devnum (devtype, devnum);
return NULL;
}
}
else if ( !S_ISREG (mode) )
{
- printk ("%s: devfs_register(%s): illegal mode: %x\n",
- DEVFS_NAME, name, mode);
+ PRINTK ("(%s): illegal mode: %x\n", name, mode);
devfs_put (de);
devfs_put (dir);
return (NULL);
if ( ( err = _devfs_append_entry (dir, de, de->u.fcb.removable, NULL) )
!= 0 )
{
- printk("%s: devfs_register(%s): could not append to parent, err: %d\n",
- DEVFS_NAME, name, err);
+ PRINTK ("(%s): could not append to parent, err: %d\n", name, err);
devfs_put (dir);
if (devnum != NODEV) devfs_dealloc_devnum (devtype, devnum);
return NULL;
}
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_REGISTER)
- printk ("%s: devfs_register(%s): de: %p dir: %p \"%s\" pp: %p\n",
- DEVFS_NAME, name, de, dir, dir->name, dir->parent);
-#endif
+ DPRINTK (DEBUG_REGISTER, "(%s): de: %p dir: %p \"%s\" pp: %p\n",
+ name, de, dir, dir->name, dir->parent);
devfsd_notify (de, DEVFSD_NOTIFY_REGISTERED, flags & DEVFS_FL_WAIT);
devfs_put (dir);
return de;
/**
- * unregister - Unregister a device entry from it's parent.
+ * _devfs_unregister - Unregister a device entry from its parent.
* @dir: The parent directory.
* @de: The entry to unregister.
*
* unlocked by this function.
*/
-static void unregister (struct devfs_entry *dir, struct devfs_entry *de)
+static void _devfs_unregister (struct devfs_entry *dir, struct devfs_entry *de)
{
int unhooked = _devfs_unhook (de);
write_lock (&de->u.dir.lock);
de->u.dir.no_more_additions = TRUE;
child = de->u.dir.first;
- unregister (de, child);
+ VERIFY_ENTRY (child);
+ _devfs_unregister (de, child);
if (!child) break;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_UNREGISTER)
- printk ("%s: unregister(): child->name: \"%s\" child: %p\n",
- DEVFS_NAME, child->name, child);
-#endif
+ DPRINTK (DEBUG_UNREGISTER, "(%s): child: %p refcount: %d\n",
+ child->name, child, atomic_read (&child->refcount) );
devfs_put (child);
}
-} /* End Function unregister */
+} /* End Function _devfs_unregister */
/**
* devfs_unregister - Unregister a device entry.
* @de: A handle previously created by devfs_register() or returned from
- * devfs_find_handle(). If this is %NULL the routine does nothing.
+ * devfs_get_handle(). If this is %NULL the routine does nothing.
*/
void devfs_unregister (devfs_handle_t de)
{
+ VERIFY_ENTRY (de);
if ( (de == NULL) || (de->parent == NULL) ) return;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_UNREGISTER)
- printk ("%s: devfs_unregister(): de->name: \"%s\" de: %p\n",
- DEVFS_NAME, de->name, de);
-#endif
+ DPRINTK (DEBUG_UNREGISTER, "(%s): de: %p refcount: %d\n",
+ de->name, de, atomic_read (&de->refcount) );
write_lock (&de->parent->u.dir.lock);
- unregister (de->parent, de);
+ _devfs_unregister (de->parent, de);
devfs_put (de);
} /* End Function devfs_unregister */
if (handle != NULL) *handle = NULL;
if (name == NULL)
{
- printk ("%s: devfs_do_symlink(): NULL name pointer\n", DEVFS_NAME);
+ PRINTK ("(): NULL name pointer\n");
return -EINVAL;
}
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_REGISTER)
- printk ("%s: devfs_do_symlink(%s)\n", DEVFS_NAME, name);
-#endif
if (link == NULL)
{
- printk ("%s: devfs_do_symlink(): NULL link pointer\n", DEVFS_NAME);
+ PRINTK ("(%s): NULL link pointer\n", name);
return -EINVAL;
}
linklength = strlen (link);
if ( ( de = _devfs_prepare_leaf (&dir, name, S_IFLNK | S_IRUGO | S_IXUGO) )
== NULL )
{
- printk ("%s: devfs_do_symlink(%s): could not prepare leaf\n",
- DEVFS_NAME, name);
+ PRINTK ("(%s): could not prepare leaf\n", name);
kfree (newlink);
return -ENOTDIR;
}
de->u.symlink.length = linklength;
if ( ( err = _devfs_append_entry (dir, de, FALSE, NULL) ) != 0 )
{
- printk ("%s: devfs_do_symlink(%s): could not append to parent, err: %d\n",
- DEVFS_NAME, name, err);
+ PRINTK ("(%s): could not append to parent, err: %d\n", name, err);
devfs_put (dir);
return err;
}
devfs_handle_t de;
if (handle != NULL) *handle = NULL;
+ DPRINTK (DEBUG_REGISTER, "(%s)\n", name);
err = devfs_do_symlink (dir, name, flags, link, &de, info);
if (err) return err;
if (handle != NULL) *handle = de;
devfs_handle_t devfs_mk_dir (devfs_handle_t dir, const char *name, void *info)
{
int err;
- struct devfs_entry *de;
+ struct devfs_entry *de, *old;
if (name == NULL)
{
- printk ("%s: devfs_mk_dir(): NULL name pointer\n", DEVFS_NAME);
+ PRINTK ("(): NULL name pointer\n");
return NULL;
}
if ( ( de = _devfs_prepare_leaf (&dir, name, MODE_DIR) ) == NULL )
{
- printk ("%s: devfs_mk_dir(%s): could not prepare leaf\n",
- DEVFS_NAME, name);
+ PRINTK ("(%s): could not prepare leaf\n", name);
return NULL;
}
de->info = info;
- if ( ( err = _devfs_append_entry (dir, de, FALSE, NULL) ) != 0 )
+ if ( ( err = _devfs_append_entry (dir, de, FALSE, &old) ) != 0 )
{
- printk ("%s: devfs_mk_dir(%s): could not append to dir: %p \"%s\", err: %d\n",
- DEVFS_NAME, name, dir, dir->name, err);
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,1)
+ if ( old && S_ISDIR (old->mode) )
+ {
+ PRINTK ("(%s): using old entry in dir: %p \"%s\"\n",
+ name, dir, dir->name);
+ old->vfs_created = FALSE;
+ devfs_put (dir);
+ return old;
+ }
+#endif
+ PRINTK ("(%s): could not append to dir: %p \"%s\", err: %d\n",
+ name, dir, dir->name, err);
+ devfs_put (old);
devfs_put (dir);
return NULL;
}
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_REGISTER)
- printk ("%s: devfs_mk_dir(%s): de: %p dir: %p \"%s\"\n",
- DEVFS_NAME, name, de, dir, dir->name);
-#endif
+ DPRINTK (DEBUG_REGISTER, "(%s): de: %p dir: %p \"%s\"\n",
+ name, de, dir, dir->name);
+ devfsd_notify (de, DEVFSD_NOTIFY_REGISTERED, 0);
devfs_put (dir);
return de;
} /* End Function devfs_mk_dir */
/**
- * devfs_find_handle - Find the handle of a devfs entry.
+ * devfs_get_handle - Find the handle of a devfs entry.
* @dir: The handle to the parent devfs directory entry. If this is %NULL the
* name is relative to the root of the devfs.
* @name: The name of the entry.
* traversed. Symlinks pointing out of the devfs namespace will cause a
* failure. Symlink traversal consumes stack space.
*
- * Returns a handle which may later be used in a call to devfs_unregister(),
- * devfs_get_flags(), or devfs_set_flags(). On failure %NULL is returned.
+ * Returns a handle which may later be used in a call to
+ * devfs_unregister(), devfs_get_flags(), or devfs_set_flags(). A
+ * subsequent devfs_put() is required to decrement the refcount.
+ * On failure %NULL is returned.
*/
+devfs_handle_t devfs_get_handle (devfs_handle_t dir, const char *name,
+ unsigned int major, unsigned int minor,
+ char type, int traverse_symlinks)
+{
+ if ( (name != NULL) && (name[0] == '\0') ) name = NULL;
+ return _devfs_find_entry (dir, name, major, minor, type,traverse_symlinks);
+} /* End Function devfs_get_handle */
+
+
+/* Compatibility function. Will be removed sometime in 2.5 */
+
devfs_handle_t devfs_find_handle (devfs_handle_t dir, const char *name,
unsigned int major, unsigned int minor,
char type, int traverse_symlinks)
{
devfs_handle_t de;
- if ( (name != NULL) && (name[0] == '\0') ) name = NULL;
- de = find_entry (dir, name, 0, major, minor, type, traverse_symlinks);
- devfs_put (de); /* FIXME: in 2.5 consider dropping this and require a
- call to devfs_put() */
+ de = devfs_get_handle (dir, name, major, minor, type, traverse_symlinks);
+ devfs_put (de);
return de;
} /* End Function devfs_find_handle */
unsigned int fl = 0;
if (de == NULL) return -EINVAL;
+ VERIFY_ENTRY (de);
if (de->hide) fl |= DEVFS_FL_HIDE;
if ( S_ISCHR (de->mode) || S_ISBLK (de->mode) || S_ISREG (de->mode) )
{
int devfs_set_flags (devfs_handle_t de, unsigned int flags)
{
if (de == NULL) return -EINVAL;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_SET_FLAGS)
- printk ("%s: devfs_set_flags(): de->name: \"%s\"\n",
- DEVFS_NAME, de->name);
-#endif
+ VERIFY_ENTRY (de);
+ DPRINTK (DEBUG_SET_FLAGS, "(%s): flags: %x\n", de->name, flags);
de->hide = (flags & DEVFS_FL_HIDE) ? TRUE : FALSE;
if ( S_ISCHR (de->mode) || S_ISBLK (de->mode) || S_ISREG (de->mode) )
{
unsigned int *minor)
{
if (de == NULL) return -EINVAL;
+ VERIFY_ENTRY (de);
if ( S_ISDIR (de->mode) ) return -EISDIR;
if ( !S_ISCHR (de->mode) && !S_ISBLK (de->mode) ) return -EINVAL;
if (major != NULL) *major = de->u.fcb.u.device.major;
#define NAMEOF(de) ( (de)->mode ? (de)->name : (de)->u.name )
if (de == NULL) return -EINVAL;
+ VERIFY_ENTRY (de);
if (de->namelen >= buflen) return -ENAMETOOLONG; /* Must be first */
path[buflen - 1] = '\0';
if (de->parent == NULL) return buflen - 1; /* Don't prepend root */
* @de: The handle to the device entry.
*
* Returns a pointer to the device operations on success, else NULL.
+ * The use count for the module owning the operations will be incremented.
*/
void *devfs_get_ops (devfs_handle_t de)
{
+ struct module *owner;
+
if (de == NULL) return NULL;
- if ( S_ISCHR (de->mode) || S_ISBLK (de->mode) || S_ISREG (de->mode) )
- return de->u.fcb.ops;
- return NULL;
+ VERIFY_ENTRY (de);
+ if ( !S_ISCHR (de->mode) && !S_ISBLK (de->mode) && !S_ISREG (de->mode) )
+ return NULL;
+ if (de->u.fcb.ops == NULL) return NULL;
+ read_lock (&de->parent->u.dir.lock); /* Prevent module from unloading */
+ if (de->next == de) owner = NULL; /* Ops pointer is already stale */
+ else if ( S_ISCHR (de->mode) || S_ISREG (de->mode) )
+ owner = ( (struct file_operations *) de->u.fcb.ops )->owner;
+ else owner = ( (struct block_device_operations *) de->u.fcb.ops )->owner;
+ if ( (de->next == de) || !try_inc_mod_count (owner) )
+ { /* Entry is already unhooked or module is unloading */
+ read_unlock (&de->parent->u.dir.lock);
+ return NULL;
+ }
+ read_unlock (&de->parent->u.dir.lock); /* Module can continue unloading*/
+ return de->u.fcb.ops;
} /* End Function devfs_get_ops */
+/**
+ * devfs_put_ops - Put the device operations for a devfs entry.
+ * @de: The handle to the device entry.
+ *
+ * The use count for the module owning the operations will be decremented.
+ */
+
+void devfs_put_ops (devfs_handle_t de)
+{
+ struct module *owner;
+
+ if (de == NULL) return;
+ VERIFY_ENTRY (de);
+ if ( !S_ISCHR (de->mode) && !S_ISBLK (de->mode) && !S_ISREG (de->mode) )
+ return;
+ if (de->u.fcb.ops == NULL) return;
+ if ( S_ISCHR (de->mode) || S_ISREG (de->mode) )
+ owner = ( (struct file_operations *) de->u.fcb.ops )->owner;
+ else owner = ( (struct block_device_operations *) de->u.fcb.ops )->owner;
+ if (owner) __MOD_DEC_USE_COUNT (owner);
+} /* End Function devfs_put_ops */
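The devfs_get_ops/devfs_put_ops pair above pins the module that owns an entry's operations so it cannot unload while the ops pointer is in use. A minimal userspace sketch of that pattern (all names here — fake_module, ops_holder, holder_get_ops — are illustrative, not kernel API):

```c
/* Sketch of the get/put pinning pattern: return an ops pointer only if
 * the owning module could be pinned first; the caller must later put it. */
#include <assert.h>
#include <stddef.h>

struct fake_module { int use_count; int unloading; };

struct ops_holder {
    struct fake_module *owner;  /* module providing the ops (NULL = built-in) */
    void *ops;                  /* operations table */
    int unhooked;               /* entry already removed from its directory */
};

/* Mirrors try_inc_mod_count(): refuse the reference if unloading began. */
static int try_get_module(struct fake_module *m)
{
    if (m == NULL) return 1;    /* built-in code: nothing to pin */
    if (m->unloading) return 0;
    m->use_count++;
    return 1;
}

/* devfs_get_ops analogue: NULL means the caller must not touch the ops. */
void *holder_get_ops(struct ops_holder *h)
{
    if (h->ops == NULL || h->unhooked) return NULL;
    if (!try_get_module(h->owner)) return NULL;
    return h->ops;
}

/* devfs_put_ops analogue: drop the pin taken by holder_get_ops(). */
void holder_put_ops(struct ops_holder *h)
{
    if (h->ops && h->owner) h->owner->use_count--;
}
```

Every successful get must be balanced by exactly one put, which is why the patch threads devfs_put_ops through all exit paths of its callers.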
+
+
/**
* devfs_set_file_size - Set the file size for a devfs regular file.
* @de: The handle to the device entry.
int devfs_set_file_size (devfs_handle_t de, unsigned long size)
{
if (de == NULL) return -EINVAL;
+ VERIFY_ENTRY (de);
if ( !S_ISREG (de->mode) ) return -EINVAL;
if (de->u.fcb.u.file.size == size) return 0;
de->u.fcb.u.file.size = size;
void *devfs_get_info (devfs_handle_t de)
{
if (de == NULL) return NULL;
+ VERIFY_ENTRY (de);
return de->info;
} /* End Function devfs_get_info */
int devfs_set_info (devfs_handle_t de, void *info)
{
if (de == NULL) return -EINVAL;
+ VERIFY_ENTRY (de);
de->info = info;
return 0;
} /* End Function devfs_set_info */
devfs_handle_t devfs_get_parent (devfs_handle_t de)
{
if (de == NULL) return NULL;
+ VERIFY_ENTRY (de);
return de->parent;
} /* End Function devfs_get_parent */
devfs_handle_t devfs_get_first_child (devfs_handle_t de)
{
if (de == NULL) return NULL;
+ VERIFY_ENTRY (de);
if ( !S_ISDIR (de->mode) ) return NULL;
return de->u.dir.first;
} /* End Function devfs_get_first_child */
devfs_handle_t devfs_get_next_sibling (devfs_handle_t de)
{
if (de == NULL) return NULL;
+ VERIFY_ENTRY (de);
return de->next;
} /* End Function devfs_get_next_sibling */
void devfs_auto_unregister (devfs_handle_t master, devfs_handle_t slave)
{
if (master == NULL) return;
+ VERIFY_ENTRY (master);
+ VERIFY_ENTRY (slave);
if (master->slave != NULL)
{
/* Because of the dumbness of the layers above, ignore duplicates */
if (master->slave == slave) return;
- printk ("%s: devfs_auto_unregister(): only one slave allowed\n",
- DEVFS_NAME);
- OOPS (" master: \"%s\" old slave: \"%s\" new slave: \"%s\"\n",
- master->name, master->slave->name, slave->name);
+ PRINTK ("(%s): only one slave allowed\n", master->name);
+ OOPS ("(): old slave: \"%s\" new slave: \"%s\"\n",
+ master->slave->name, slave->name);
}
master->slave = slave;
} /* End Function devfs_auto_unregister */
devfs_handle_t devfs_get_unregister_slave (devfs_handle_t master)
{
if (master == NULL) return NULL;
+ VERIFY_ENTRY (master);
return master->slave;
} /* End Function devfs_get_unregister_slave */
const char *devfs_get_name (devfs_handle_t de, unsigned int *namelen)
{
if (de == NULL) return NULL;
+ VERIFY_ENTRY (de);
if (namelen != NULL) *namelen = de->namelen;
return de->name;
} /* End Function devfs_get_name */
__setup("devfs=", devfs_setup);
+EXPORT_SYMBOL(devfs_put);
EXPORT_SYMBOL(devfs_register);
EXPORT_SYMBOL(devfs_unregister);
EXPORT_SYMBOL(devfs_mk_symlink);
EXPORT_SYMBOL(devfs_mk_dir);
+EXPORT_SYMBOL(devfs_get_handle);
EXPORT_SYMBOL(devfs_find_handle);
EXPORT_SYMBOL(devfs_get_flags);
EXPORT_SYMBOL(devfs_set_flags);
buf->parent = parent;
buf->namelen = namelen;
buf->u.name = name;
+ WRITE_ENTRY_MAGIC (buf, MAGIC_VALUE);
if ( !devfsd_notify_de (buf, DEVFSD_NOTIFY_LOOKUP, 0,
- current->euid, current->egid, fs_info) )
+ current->euid, current->egid, fs_info, 0) )
return -ENOENT;
/* Possible success */
return 0;
static int check_disc_changed (struct devfs_entry *de)
{
int tmp;
+ int retval = 0;
kdev_t dev = MKDEV (de->u.fcb.u.device.major, de->u.fcb.u.device.minor);
- struct block_device_operations *bdops = de->u.fcb.ops;
+ struct block_device_operations *bdops;
extern int warn_no_part;
if ( !S_ISBLK (de->mode) ) return 0;
- if (bdops == NULL) return 0;
- if (bdops->check_media_change == NULL) return 0;
- if ( !bdops->check_media_change (dev) ) return 0;
+ bdops = devfs_get_ops (de);
+ if (!bdops) return 0;
+ if (bdops->check_media_change == NULL) goto out;
+ if ( !bdops->check_media_change (dev) ) goto out;
+ retval = 1;
printk ( KERN_DEBUG "VFS: Disk change detected on device %s\n",
kdevname (dev) );
if (invalidate_device(dev, 0))
warn_no_part = 0;
if (bdops->revalidate) bdops->revalidate (dev);
warn_no_part = tmp;
- return 1;
+out:
+ devfs_put_ops (de);
+ return retval;
} /* End Function check_disc_changed */
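The check_disc_changed rework above funnels every exit after the devfs_get_ops call through a single `out:` label so the reference is always dropped. A self-contained sketch of that goto-out cleanup shape (check_changed, get_ops, put_ops are illustrative stand-ins):

```c
/* Sketch: once a reference is acquired, early exits jump to a common
 * label that releases it, instead of returning directly. */
#include <assert.h>
#include <stddef.h>

static int refcount;        /* models the pinned module's use count */
static int media_changed;   /* models bdops->check_media_change()   */

static void *get_ops(void) { refcount++; return &refcount; }
static void put_ops(void)  { refcount--; }

int check_changed(void)
{
    int retval = 0;
    void *ops = get_ops();

    if (!ops) return 0;            /* nothing acquired: plain return is safe */
    if (!media_changed) goto out;  /* early exit still releases the ref */
    retval = 1;                    /* change detected */
out:
    put_ops();
    return retval;
}
```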
if (retval != 0) return retval;
retval = inode_setattr (inode, iattr);
if (retval != 0) return retval;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_CHANGE)
- {
- printk ("%s: notify_change(%d): VFS inode: %p devfs_entry: %p\n",
- DEVFS_NAME, (int) inode->i_ino, inode, de);
- printk ("%s: mode: 0%o uid: %d gid: %d\n",
- DEVFS_NAME, (int) inode->i_mode,
- (int) inode->i_uid, (int) inode->i_gid);
- }
-#endif
+ DPRINTK (DEBUG_I_CHANGE, "(%d): VFS inode: %p devfs_entry: %p\n",
+ (int) inode->i_ino, inode, de);
+ DPRINTK (DEBUG_I_CHANGE, "(): mode: 0%o uid: %d gid: %d\n",
+ (int) inode->i_mode, (int) inode->i_uid, (int) inode->i_gid);
/* Inode is not on hash chains, thus must save permissions here rather
than in a write_inode() method */
if ( ( !S_ISREG (inode->i_mode) && !S_ISCHR (inode->i_mode) &&
de->inode.atime = inode->i_atime;
de->inode.mtime = inode->i_mtime;
de->inode.ctime = inode->i_ctime;
- if ( iattr->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID) )
+ if ( ( iattr->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID) ) &&
+ !is_devfsd_or_child (fs_info) )
devfsd_notify_de (de, DEVFSD_NOTIFY_CHANGE, inode->i_mode,
- inode->i_uid, inode->i_gid, fs_info);
+ inode->i_uid, inode->i_gid, fs_info, 0);
return 0;
} /* End Function devfs_notify_change */
static int devfs_statfs (struct super_block *sb, struct statfs *buf)
{
buf->f_type = DEVFS_SUPER_MAGIC;
- buf->f_bsize = PAGE_SIZE / sizeof (long);
+ buf->f_bsize = FAKE_BLOCK_SIZE;
buf->f_bfree = 0;
buf->f_bavail = 0;
buf->f_ffree = 0;
/**
- * get_vfs_inode - Get a VFS inode.
+ * _devfs_get_vfs_inode - Get a VFS inode.
* @sb: The super block.
* @de: The devfs inode.
* @dentry: The dentry to register with the devfs inode.
* performed if the inode is created.
*/
-static struct inode *get_vfs_inode (struct super_block *sb,
- struct devfs_entry *de,
- struct dentry *dentry)
+static struct inode *_devfs_get_vfs_inode (struct super_block *sb,
+ struct devfs_entry *de,
+ struct dentry *dentry)
{
int is_fcb = FALSE;
struct inode *inode;
if (de->prev == de) return NULL; /* Quick check to see if unhooked */
if ( ( inode = new_inode (sb) ) == NULL )
{
- printk ("%s: get_vfs_inode(%s): new_inode() failed, de: %p\n",
- DEVFS_NAME, de->name, de);
+ PRINTK ("(%s): new_inode() failed, de: %p\n", de->name, de);
return NULL;
}
if (de->parent)
}
inode->u.generic_ip = devfs_get (de);
inode->i_ino = de->inode.ino;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_GET)
- printk ("%s: get_vfs_inode(%d): VFS inode: %p devfs_entry: %p\n",
- DEVFS_NAME, (int) inode->i_ino, inode, de);
-#endif
+ DPRINTK (DEBUG_I_GET, "(%d): VFS inode: %p devfs_entry: %p\n",
+ (int) inode->i_ino, inode, de);
inode->i_blocks = 0;
- inode->i_blksize = 1024;
+ inode->i_blksize = FAKE_BLOCK_SIZE;
inode->i_op = &devfs_iops;
inode->i_fop = &devfs_fops;
inode->i_rdev = NODEV;
if (!inode->i_bdev->bd_op && de->u.fcb.ops)
inode->i_bdev->bd_op = de->u.fcb.ops;
}
- else printk ("%s: get_vfs_inode(%d): no block device from bdget()\n",
- DEVFS_NAME, (int) inode->i_ino);
+ else PRINTK ("(%d): no block device from bdget()\n",(int)inode->i_ino);
is_fcb = TRUE;
}
else if ( S_ISFIFO (de->mode) ) inode->i_fop = &def_fifo_fops;
inode->i_atime = de->inode.atime;
inode->i_mtime = de->inode.mtime;
inode->i_ctime = de->inode.ctime;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_GET)
- printk ("%s: mode: 0%o uid: %d gid: %d\n",
- DEVFS_NAME, (int) inode->i_mode,
- (int) inode->i_uid, (int) inode->i_gid);
-#endif
+ DPRINTK (DEBUG_I_GET, "(): mode: 0%o uid: %d gid: %d\n",
+ (int) inode->i_mode, (int) inode->i_uid, (int) inode->i_gid);
return inode;
-} /* End Function get_vfs_inode */
+} /* End Function _devfs_get_vfs_inode */
/* File operations for device entries follow */
fs_info = inode->i_sb->u.generic_sbp;
parent = get_devfs_entry_from_vfs_inode (file->f_dentry->d_inode);
if ( (long) file->f_pos < 0 ) return -EINVAL;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_F_READDIR)
- printk ("%s: readdir(): fs_info: %p pos: %ld\n", DEVFS_NAME,
- fs_info, (long) file->f_pos);
-#endif
+ DPRINTK (DEBUG_F_READDIR, "(%s): fs_info: %p pos: %ld\n",
+ parent->name, fs_info, (long) file->f_pos);
switch ( (long) file->f_pos )
{
case 0:
inode->i_uid = current->euid;
inode->i_gid = current->egid;
}
- if (df->aopen_notify)
+ if ( df->aopen_notify && !is_devfsd_or_child (fs_info) )
devfsd_notify_de (de, DEVFSD_NOTIFY_ASYNC_OPEN, inode->i_mode,
- current->euid, current->egid, fs_info);
+ current->euid, current->egid, fs_info, 0);
return 0;
} /* End Function devfs_open */
static void devfs_d_release (struct dentry *dentry)
{
-#ifdef CONFIG_DEVFS_DEBUG
- struct inode *inode = dentry->d_inode;
-
- if (devfs_debug & DEBUG_D_RELEASE)
- printk ("%s: d_release(): dentry: %p inode: %p\n",
- DEVFS_NAME, dentry, inode);
-#endif
+ DPRINTK (DEBUG_D_RELEASE, "(%p): inode: %p\n", dentry, dentry->d_inode);
} /* End Function devfs_d_release */
/**
struct devfs_entry *de;
de = get_devfs_entry_from_vfs_inode (inode);
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_D_IPUT)
- printk ("%s: d_iput(): dentry: %p inode: %p de: %p de->dentry: %p\n",
- DEVFS_NAME, dentry, inode, de, de->inode.dentry);
-#endif
+ DPRINTK (DEBUG_D_IPUT,"(%s): dentry: %p inode: %p de: %p de->dentry: %p\n",
+ de->name, dentry, inode, de, de->inode.dentry);
if ( de->inode.dentry && (de->inode.dentry != dentry) )
- OOPS ("%s: d_iput(%s): de: %p dentry: %p de->dentry: %p\n",
- DEVFS_NAME, de->name, de, dentry, de->inode.dentry);
+ OOPS ("(%s): de: %p dentry: %p de->dentry: %p\n",
+ de->name, de, dentry, de->inode.dentry);
de->inode.dentry = NULL;
iput (inode);
devfs_put (de);
/* Unhash dentry if negative (has no inode) */
if (inode == NULL)
{
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_D_DELETE)
- printk ("%s: d_delete(): dropping negative dentry: %p\n",
- DEVFS_NAME, dentry);
-#endif
+ DPRINTK (DEBUG_D_DELETE, "(%p): dropping negative dentry\n", dentry);
return 1;
}
fs_info = inode->i_sb->u.generic_sbp;
de = get_devfs_entry_from_vfs_inode (inode);
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_D_DELETE)
- printk ("%s: d_delete(): dentry: %p inode: %p devfs_entry: %p\n",
- DEVFS_NAME, dentry, inode, de);
-#endif
+ DPRINTK (DEBUG_D_DELETE, "(%p): inode: %p devfs_entry: %p\n",
+ dentry, inode, de);
if (de == NULL) return 0;
if ( !S_ISCHR (de->mode) && !S_ISBLK (de->mode) && !S_ISREG (de->mode) )
return 0;
de->u.fcb.open = FALSE;
if (de->u.fcb.aopen_notify)
devfsd_notify_de (de, DEVFSD_NOTIFY_CLOSE, inode->i_mode,
- current->euid, current->egid, fs_info);
+ current->euid, current->egid, fs_info, 1);
if (!de->u.fcb.auto_owner) return 0;
/* Change the ownership/protection back */
inode->i_mode = (de->mode & S_IFMT) | S_IRUGO | S_IWUGO;
devfs_handle_t parent = get_devfs_entry_from_vfs_inode (dir);
struct inode *inode;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_LOOKUP)
- printk ("%s: d_revalidate(%s): dentry: %p by: \"%s\"\n",
- DEVFS_NAME, dentry->d_name.name, dentry, current->comm);
-#endif
+ DPRINTK (DEBUG_I_LOOKUP, "(%s): dentry: %p by: \"%s\"\n",
+ dentry->d_name.name, dentry, current->comm);
read_lock (&parent->u.dir.lock);
de = _devfs_search_dir (parent, dentry->d_name.name,
dentry->d_name.len);
- read_lock (&parent->u.dir.lock);
+ read_unlock (&parent->u.dir.lock);
if (de == NULL) return 1;
/* Create an inode, now that the driver information is available */
- inode = get_vfs_inode (dir->i_sb, de, dentry);
+ inode = _devfs_get_vfs_inode (dir->i_sb, de, dentry);
devfs_put (de);
if (!inode) return 1;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_LOOKUP)
- printk ("%s: d_revalidate(): new VFS inode(%u): %p devfs_entry: %p\n",
- DEVFS_NAME, de->inode.ino, inode, de);
-#endif
+ DPRINTK (DEBUG_I_LOOKUP, "(%s): new VFS inode(%u): %p de: %p\n",
+ de->name, de->inode.ino, inode, de);
d_instantiate (dentry, inode);
return 1;
}
static struct dentry *devfs_lookup (struct inode *dir, struct dentry *dentry)
{
- struct fs_info *fs_info;
+ struct fs_info *fs_info = dir->i_sb->u.generic_sbp;
struct devfs_entry *parent, *de;
struct inode *inode;
/* Set up the dentry operations before anything else, to ensure cleaning
up on any error */
dentry->d_op = &devfs_dops;
- fs_info = dir->i_sb->u.generic_sbp;
/* First try to get the devfs entry for this directory */
parent = get_devfs_entry_from_vfs_inode (dir);
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_LOOKUP)
- printk ("%s: lookup(%s): dentry: %p parent: %p by: \"%s\"\n",
- DEVFS_NAME, dentry->d_name.name, dentry, parent,current->comm);
-#endif
+ DPRINTK (DEBUG_I_LOOKUP, "(%s): dentry: %p parent: %p by: \"%s\"\n",
+ dentry->d_name.name, dentry, parent, current->comm);
if (parent == NULL) return ERR_PTR (-ENOENT);
read_lock (&parent->u.dir.lock);
de = _devfs_search_dir (parent, dentry->d_name.name, dentry->d_name.len);
d_add (dentry, NULL); /* Open the floodgates */
}
/* Create an inode, now that the driver information is available */
- inode = get_vfs_inode (dir->i_sb, de, dentry);
+ inode = _devfs_get_vfs_inode (dir->i_sb, de, dentry);
devfs_put (de);
if (!inode) return ERR_PTR (-ENOMEM);
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_LOOKUP)
- printk ("%s: lookup(): new VFS inode(%u): %p devfs_entry: %p\n",
- DEVFS_NAME, de->inode.ino, inode, de);
-#endif
+ DPRINTK (DEBUG_I_LOOKUP, "(%s): new VFS inode(%u): %p de: %p\n",
+ de->name, de->inode.ino, inode, de);
d_instantiate (dentry, inode);
if (dentry->d_op == &devfs_wait_dops)
{ /* Unlock directory semaphore, which will release any waiters. They
int unhooked;
struct devfs_entry *de;
struct inode *inode = dentry->d_inode;
+ struct fs_info *fs_info = dir->i_sb->u.generic_sbp;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_UNLINK)
- printk ("%s: unlink(%s)\n", DEVFS_NAME, dentry->d_name.name);
-#endif
de = get_devfs_entry_from_vfs_inode (inode);
+ DPRINTK (DEBUG_I_UNLINK, "(%s): de: %p\n", dentry->d_name.name, de);
if (de == NULL) return -ENOENT;
if (!de->vfs_created) return -EPERM;
write_lock (&de->parent->u.dir.lock);
unhooked = _devfs_unhook (de);
write_unlock (&de->parent->u.dir.lock);
if (!unhooked) return -ENOENT;
- devfsd_notify_de (de, DEVFSD_NOTIFY_DELETE, inode->i_mode,
- inode->i_uid, inode->i_gid, dir->i_sb->u.generic_sbp);
+ if ( !is_devfsd_or_child (fs_info) )
+ devfsd_notify_de (de, DEVFSD_NOTIFY_DELETE, inode->i_mode,
+ inode->i_uid, inode->i_gid, fs_info, 0);
free_dentry (de);
devfs_put (de);
return 0;
const char *symname)
{
int err;
- struct fs_info *fs_info;
+ struct fs_info *fs_info = dir->i_sb->u.generic_sbp;
struct devfs_entry *parent, *de;
struct inode *inode;
- fs_info = dir->i_sb->u.generic_sbp;
/* First try to get the devfs entry for this directory */
parent = get_devfs_entry_from_vfs_inode (dir);
if (parent == NULL) return -ENOENT;
err = devfs_do_symlink (parent, dentry->d_name.name, DEVFS_FL_NONE,
symname, &de, NULL);
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_DISABLED)
- printk ("%s: symlink(): errcode from <devfs_do_symlink>: %d\n",
- DEVFS_NAME, err);
-#endif
+ DPRINTK (DEBUG_DISABLED, "(%s): errcode from <devfs_do_symlink>: %d\n",
+ dentry->d_name.name, err);
if (err < 0) return err;
de->vfs_created = TRUE;
de->inode.uid = current->euid;
de->inode.atime = CURRENT_TIME;
de->inode.mtime = CURRENT_TIME;
de->inode.ctime = CURRENT_TIME;
- if ( ( inode = get_vfs_inode (dir->i_sb, de, dentry) ) == NULL )
+ if ( ( inode = _devfs_get_vfs_inode (dir->i_sb, de, dentry) ) == NULL )
return -ENOMEM;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_DISABLED)
- printk ("%s: symlink(): new VFS inode(%u): %p dentry: %p\n",
- DEVFS_NAME, de->inode.ino, inode, dentry);
-#endif
+ DPRINTK (DEBUG_DISABLED, "(%s): new VFS inode(%u): %p dentry: %p\n",
+ dentry->d_name.name, de->inode.ino, inode, dentry);
d_instantiate (dentry, inode);
- devfsd_notify_de (de, DEVFSD_NOTIFY_CREATE, inode->i_mode,
- inode->i_uid, inode->i_gid, fs_info);
+ if ( !is_devfsd_or_child (fs_info) )
+ devfsd_notify_de (de, DEVFSD_NOTIFY_CREATE, inode->i_mode,
+ inode->i_uid, inode->i_gid, fs_info, 0);
return 0;
} /* End Function devfs_symlink */
static int devfs_mkdir (struct inode *dir, struct dentry *dentry, int mode)
{
int err;
- struct fs_info *fs_info;
+ struct fs_info *fs_info = dir->i_sb->u.generic_sbp;
struct devfs_entry *parent, *de;
struct inode *inode;
mode = (mode & ~S_IFMT) | S_IFDIR; /* VFS doesn't pass S_IFMT part */
- fs_info = dir->i_sb->u.generic_sbp;
parent = get_devfs_entry_from_vfs_inode (dir);
if (parent == NULL) return -ENOENT;
de = _devfs_alloc_entry (dentry->d_name.name, dentry->d_name.len, mode);
de->inode.atime = CURRENT_TIME;
de->inode.mtime = CURRENT_TIME;
de->inode.ctime = CURRENT_TIME;
- if ( ( inode = get_vfs_inode (dir->i_sb, de, dentry) ) == NULL )
+ if ( ( inode = _devfs_get_vfs_inode (dir->i_sb, de, dentry) ) == NULL )
return -ENOMEM;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_DISABLED)
- printk ("%s: mkdir(): new VFS inode(%u): %p dentry: %p\n",
- DEVFS_NAME, de->inode.ino, inode, dentry);
-#endif
+ DPRINTK (DEBUG_DISABLED, "(%s): new VFS inode(%u): %p dentry: %p\n",
+ dentry->d_name.name, de->inode.ino, inode, dentry);
d_instantiate (dentry, inode);
- devfsd_notify_de (de, DEVFSD_NOTIFY_CREATE, inode->i_mode,
- inode->i_uid, inode->i_gid, fs_info);
+ if ( !is_devfsd_or_child (fs_info) )
+ devfsd_notify_de (de, DEVFSD_NOTIFY_CREATE, inode->i_mode,
+ inode->i_uid, inode->i_gid, fs_info, 0);
return 0;
} /* End Function devfs_mkdir */
{
int err = 0;
struct devfs_entry *de;
- struct fs_info *fs_info;
+ struct fs_info *fs_info = dir->i_sb->u.generic_sbp;
struct inode *inode = dentry->d_inode;
if (dir->i_sb->u.generic_sbp != inode->i_sb->u.generic_sbp) return -EINVAL;
- fs_info = dir->i_sb->u.generic_sbp;
de = get_devfs_entry_from_vfs_inode (inode);
if (de == NULL) return -ENOENT;
if ( !S_ISDIR (de->mode) ) return -ENOTDIR;
if ( !_devfs_unhook (de) ) err = -ENOENT;
write_unlock (&de->parent->u.dir.lock);
if (err) return err;
- devfsd_notify_de (de, DEVFSD_NOTIFY_DELETE, inode->i_mode,
- inode->i_uid, inode->i_gid, fs_info);
+ if ( !is_devfsd_or_child (fs_info) )
+ devfsd_notify_de (de, DEVFSD_NOTIFY_DELETE, inode->i_mode,
+ inode->i_uid, inode->i_gid, fs_info, 0);
free_dentry (de);
devfs_put (de);
return 0;
int rdev)
{
int err;
- struct fs_info *fs_info;
+ struct fs_info *fs_info = dir->i_sb->u.generic_sbp;
struct devfs_entry *parent, *de;
struct inode *inode;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_MKNOD)
- printk ("%s: mknod(%s): mode: 0%o dev: %d\n",
- DEVFS_NAME, dentry->d_name.name, mode, rdev);
-#endif
- fs_info = dir->i_sb->u.generic_sbp;
+ DPRINTK (DEBUG_I_MKNOD, "(%s): mode: 0%o dev: %d\n",
+ dentry->d_name.name, mode, rdev);
parent = get_devfs_entry_from_vfs_inode (dir);
if (parent == NULL) return -ENOENT;
de = _devfs_alloc_entry (dentry->d_name.name, dentry->d_name.len, mode);
de->inode.atime = CURRENT_TIME;
de->inode.mtime = CURRENT_TIME;
de->inode.ctime = CURRENT_TIME;
- if ( ( inode = get_vfs_inode (dir->i_sb, de, dentry) ) == NULL )
+ if ( ( inode = _devfs_get_vfs_inode (dir->i_sb, de, dentry) ) == NULL )
return -ENOMEM;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_I_MKNOD)
- printk ("%s: new VFS inode(%u): %p dentry: %p\n",
- DEVFS_NAME, de->inode.ino, inode, dentry);
-#endif
+ DPRINTK (DEBUG_I_MKNOD, ": new VFS inode(%u): %p dentry: %p\n",
+ de->inode.ino, inode, dentry);
d_instantiate (dentry, inode);
- devfsd_notify_de (de, DEVFSD_NOTIFY_CREATE, inode->i_mode,
- inode->i_uid, inode->i_gid, fs_info);
+ if ( !is_devfsd_or_child (fs_info) )
+ devfsd_notify_de (de, DEVFSD_NOTIFY_CREATE, inode->i_mode,
+ inode->i_uid, inode->i_gid, fs_info, 0);
return 0;
} /* End Function devfs_mknod */
sb->s_blocksize_bits = 10;
sb->s_magic = DEVFS_SUPER_MAGIC;
sb->s_op = &devfs_sops;
- if ( ( root_inode = get_vfs_inode (sb, root_entry, NULL) ) == NULL )
+ if ( ( root_inode = _devfs_get_vfs_inode (sb, root_entry, NULL) ) == NULL )
goto out_no_root;
sb->s_root = d_alloc_root (root_inode);
if (!sb->s_root) goto out_no_root;
-#ifdef CONFIG_DEVFS_DEBUG
- if (devfs_debug & DEBUG_S_READ)
- printk ("%s: read super, made devfs ptr: %p\n",
- DEVFS_NAME, sb->u.generic_sbp);
-#endif
+ DPRINTK (DEBUG_S_READ, "(): made devfs ptr: %p\n", sb->u.generic_sbp);
return sb;
out_no_root:
tlen = rpos - *ppos;
if (done)
{
+ devfs_handle_t parent;
+
spin_lock (&fs_info->devfsd_buffer_lock);
fs_info->devfsd_first_event = entry->next;
if (entry->next == NULL) fs_info->devfsd_last_event = NULL;
spin_unlock (&fs_info->devfsd_buffer_lock);
- for (; de != NULL; de = de->parent) devfs_put (de);
+ for (; de != NULL; de = parent)
+ {
+ parent = de->parent;
+ devfs_put (de);
+ }
kmem_cache_free (devfsd_buf_cache, entry);
if (ival > 0) atomic_sub (ival, &fs_info->devfsd_overrun_count);
*ppos = 0;
}
fs_info->devfsd_task = current;
spin_unlock (&lock);
+ fs_info->devfsd_pgrp = (current->pgrp == current->pid) ?
+ current->pgrp : 0;
fs_info->devfsd_file = file;
fs_info->devfsd_info = kmalloc (sizeof *fs_info->devfsd_info,
GFP_KERNEL);
static int devfsd_close (struct inode *inode, struct file *file)
{
- struct devfsd_buf_entry *entry;
+ struct devfsd_buf_entry *entry, *next;
struct fs_info *fs_info = inode->i_sb->u.generic_sbp;
if (fs_info->devfsd_file != file) return 0;
fs_info->devfsd_info = NULL;
}
spin_unlock (&fs_info->devfsd_buffer_lock);
+ fs_info->devfsd_pgrp = 0;
fs_info->devfsd_task = NULL;
wake_up (&fs_info->revalidate_wait_queue);
- for (; entry; entry = entry->next)
+ for (; entry; entry = next)
+ {
+ next = entry->next;
kmem_cache_free (devfsd_buf_cache, entry);
+ }
return 0;
} /* End Function devfsd_close */
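Both the devfsd_read and devfsd_close fixes in this patch are the same bug: the loop advanced via a pointer read from a node that had already been freed. The fix is to copy the successor pointer into a local before freeing. A minimal sketch (struct event and free_event_list are illustrative names):

```c
/* Sketch of the save-next-before-free fix: read ->next into a local,
 * then free the node, then advance. Returns the number of nodes freed. */
#include <assert.h>
#include <stdlib.h>

struct event { struct event *next; };

int free_event_list(struct event *entry)
{
    struct event *next;
    int count = 0;

    for (; entry; entry = next) {
        next = entry->next;   /* must be read before the node is freed */
        free(entry);
        count++;
    }
    return count;
}
```

The broken form, `for (; entry; entry = entry->next) free(entry);`, dereferences freed memory on every iteration after the first.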
printk ("%s: v%s Richard Gooch (rgooch@atnf.csiro.au)\n",
DEVFS_NAME, DEVFS_VERSION);
+ devfsd_buf_cache = kmem_cache_create ("devfsd_event",
+ sizeof (struct devfsd_buf_entry),
+ 0, 0, NULL, NULL);
+ if (!devfsd_buf_cache) OOPS ("(): unable to allocate event slab\n");
#ifdef CONFIG_DEVFS_DEBUG
devfs_debug = devfs_debug_init;
printk ("%s: devfs_debug: 0x%0x\n", DEVFS_NAME, devfs_debug);
#endif
printk ("%s: boot_options: 0x%0x\n", DEVFS_NAME, boot_options);
- devfsd_buf_cache = kmem_cache_create ("devfsd_event",
- sizeof (struct devfsd_buf_entry),
- 0, 0, NULL, NULL);
err = register_filesystem (&devfs_fs_type);
if (!err)
{
return is_enabled(sb_dqopt(sb), type);
}
-static inline int const hashfn(kdev_t dev, unsigned int id, short type)
+static inline int const hashfn(struct super_block *sb, unsigned int id, short type)
{
- return((HASHDEV(dev) ^ id) * (MAXQUOTAS - type)) % NR_DQHASH;
+ return((HASHDEV(sb->s_dev) ^ id) * (MAXQUOTAS - type)) % NR_DQHASH;
}
static inline void insert_dquot_hash(struct dquot *dquot)
{
- struct list_head *head = dquot_hash + hashfn(dquot->dq_dev, dquot->dq_id, dquot->dq_type);
+ struct list_head *head = dquot_hash + hashfn(dquot->dq_sb, dquot->dq_id, dquot->dq_type);
list_add(&dquot->dq_hash, head);
}
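The dquot hunks above rekey the hash from a raw kdev_t to the super_block, so lookups compare `dq_sb` pointers. A simplified userspace sketch of the bucket function (the NR_DQHASH and MAXQUOTAS values and the plain XOR in place of the kernel's HASHDEV macro are illustrative assumptions):

```c
/* Simplified sketch of the reworked dquot hash: the bucket is derived
 * from the super_block's device number, quota id, and quota type. */
#include <assert.h>

#define NR_DQHASH 43   /* illustrative bucket count */
#define MAXQUOTAS 2    /* user and group quotas */

struct super_block { unsigned int s_dev; };

static int hashfn(const struct super_block *sb, unsigned int id, short type)
{
    return ((sb->s_dev ^ id) * (MAXQUOTAS - type)) % NR_DQHASH;
}
```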
INIT_LIST_HEAD(&dquot->dq_hash);
}
-static inline struct dquot *find_dquot(unsigned int hashent, kdev_t dev, unsigned int id, short type)
+static inline struct dquot *find_dquot(unsigned int hashent, struct super_block *sb, unsigned int id, short type)
{
struct list_head *head;
struct dquot *dquot;
for (head = dquot_hash[hashent].next; head != dquot_hash+hashent; head = head->next) {
dquot = list_entry(head, struct dquot, dq_hash);
- if (dquot->dq_dev == dev && dquot->dq_id == id && dquot->dq_type == type)
+ if (dquot->dq_sb == sb && dquot->dq_id == id && dquot->dq_type == type)
return dquot;
}
return NODQUOT;
sizeof(struct dqblk), &offset);
if (ret != sizeof(struct dqblk))
printk(KERN_WARNING "VFS: dquota write failed on dev %s\n",
- kdevname(dquot->dq_dev));
+ kdevname(dquot->dq_sb->s_dev));
set_fs(fs);
up(sem);
}
}
-int sync_dquots(kdev_t dev, short type)
+int sync_dquots(struct super_block *sb, short type)
{
struct list_head *head;
struct dquot *dquot;
restart:
for (head = inuse_list.next; head != &inuse_list; head = head->next) {
dquot = list_entry(head, struct dquot, dq_inuse);
- if (dev && dquot->dq_dev != dev)
+ if (sb && dquot->dq_sb != sb)
continue;
if (type != -1 && dquot->dq_type != type)
continue;
if (!dquot->dq_count) {
printk("VFS: dqput: trying to free free dquot\n");
printk("VFS: device %s, dquot of %s %d\n",
- kdevname(dquot->dq_dev), quotatypes[dquot->dq_type],
+ kdevname(dquot->dq_sb->s_dev),
+ quotatypes[dquot->dq_type],
dquot->dq_id);
return;
}
static struct dquot *dqget(struct super_block *sb, unsigned int id, short type)
{
- unsigned int hashent = hashfn(sb->s_dev, id, type);
+ unsigned int hashent = hashfn(sb, id, type);
struct dquot *dquot, *empty = NODQUOT;
struct quota_mount_options *dqopt = sb_dqopt(sb);
return NODQUOT;
}
- if ((dquot = find_dquot(hashent, sb->s_dev, id, type)) == NODQUOT) {
+ if ((dquot = find_dquot(hashent, sb, id, type)) == NODQUOT) {
if (empty == NODQUOT) {
if ((empty = get_empty_dquot()) == NODQUOT)
schedule(); /* Try to wait for a moment... */
dquot = empty;
dquot->dq_id = id;
dquot->dq_type = type;
- dquot->dq_dev = sb->s_dev;
dquot->dq_sb = sb;
/* hash it first so it can be found */
insert_dquot_hash(dquot);
flags |= SET_QLIMIT;
break;
case Q_SYNC:
- ret = sync_dquots(dev, type);
+ ret = sync_dquots(sb, type);
goto out;
case Q_GETSTATS:
ret = get_stats(addr);
return 0;
}
phys = efs_map_block(inode, iblock);
- if (phys) {
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
- }
+ if (phys)
+ map_bh(bh_result, inode->i_sb, phys);
return 0;
}
/* Simplest case - block found, no allocation needed */
if (!partial) {
got_it:
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = le32_to_cpu(chain[depth-1].key);
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, inode->i_sb, le32_to_cpu(chain[depth-1].key));
/* Clean up and exit */
partial = chain+depth-1; /* the whole chain */
goto cleanup;
if (!partial) {
bh_result->b_state &= ~(1UL << BH_New);
got_it:
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = le32_to_cpu(chain[depth-1].key);
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, inode->i_sb, le32_to_cpu(chain[depth-1].key));
/* Clean up and exit */
partial = chain+depth-1; /* the whole chain */
goto cleanup;
/* bget() all the buffers */
if (order_data) {
if (!page->buffers)
- create_empty_buffers(page,
- inode->i_dev, inode->i_sb->s_blocksize);
+ create_empty_buffers(page, inode->i_sb->s_blocksize);
page_buffers = page->buffers;
walk_page_buffers(handle, page_buffers, 0,
PAGE_CACHE_SIZE, NULL, bget_one);
goto out;
if (!page->buffers)
- create_empty_buffers(page, inode->i_dev, blocksize);
+ create_empty_buffers(page, blocksize);
/* Find the buffer that contains "offset" */
bh = page->buffers;
}
fat_cache = &cache[0];
for (count = 0; count < FAT_CACHE; count++) {
- cache[count].device = 0;
+ cache[count].sb = NULL;
cache[count].next = count == FAT_CACHE-1 ? NULL :
&cache[count+1];
}
return;
spin_lock(&fat_cache_lock);
for (walk = fat_cache; walk; walk = walk->next)
- if (inode->i_dev == walk->device
+ if (inode->i_sb == walk->sb
&& walk->start_cluster == first
&& walk->file_cluster <= cluster
&& walk->file_cluster > *f_clu) {
struct fat_cache *walk;
for (walk = fat_cache; walk; walk = walk->next) {
- if (walk->device)
- printk("<%s,%d>(%d,%d) ", kdevname(walk->device),
+ if (walk->sb)
+ printk("<%s,%d>(%d,%d) ", bdevname(walk->sb->s_dev),
walk->start_cluster, walk->file_cluster,
walk->disk_cluster);
else printk("-- ");
last = NULL;
spin_lock(&fat_cache_lock);
for (walk = fat_cache; walk->next; walk = (last = walk)->next)
- if (inode->i_dev == walk->device
+ if (inode->i_sb == walk->sb
&& walk->start_cluster == first
&& walk->file_cluster == f_clu) {
if (walk->disk_cluster != d_clu) {
spin_unlock(&fat_cache_lock);
return;
}
- walk->device = inode->i_dev;
+ walk->sb = inode->i_sb;
walk->start_cluster = first;
walk->file_cluster = f_clu;
walk->disk_cluster = d_clu;
spin_lock(&fat_cache_lock);
for (walk = fat_cache; walk; walk = walk->next)
- if (walk->device == inode->i_dev
+ if (walk->sb == inode->i_sb
&& walk->start_cluster == first)
- walk->device = 0;
+ walk->sb = NULL;
spin_unlock(&fat_cache_lock);
}
-void fat_cache_inval_dev(kdev_t device)
+void fat_cache_inval_dev(struct super_block *sb)
{
struct fat_cache *walk;
spin_lock(&fat_cache_lock);
for (walk = fat_cache; walk; walk = walk->next)
- if (walk->device == device)
- walk->device = 0;
+ if (walk->sb == sb)
+ walk->sb = NULL;
spin_unlock(&fat_cache_lock);
}
phys = fat_bmap(inode, iblock);
if (phys) {
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, inode->i_sb, phys);
return 0;
}
if (!create)
phys = fat_bmap(inode, iblock);
if (!phys)
BUG();
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
bh_result->b_state |= (1UL << BH_New);
+ map_bh(bh_result, inode->i_sb, phys);
return 0;
}
if (MSDOS_SB(sb)->fat_bits == 32) {
fat_clusters_flush(sb);
}
- fat_cache_inval_dev(sb->s_dev);
+ fat_cache_inval_dev(sb);
set_blocksize (sb->s_dev,BLOCK_SIZE);
if (MSDOS_SB(sb)->nls_disk) {
unload_nls(MSDOS_SB(sb)->nls_disk);
pblock = vxfs_bmap1(ip, iblock);
if (pblock != 0) {
- bp->b_dev = ip->i_dev;
- bp->b_blocknr = pblock;
- bp->b_state |= (1UL << BH_Mapped);
-
+ map_bh(bp, ip->i_sb, pblock);
return 0;
}
phys = hfs_extent_map(HFS_I(inode)->fork, iblock, create);
if (phys) {
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
if (create)
bh_result->b_state |= (1UL << BH_New);
+ map_bh(bh_result, inode->i_sb, phys);
return 0;
}
secno s;
s = hpfs_bmap(inode, iblock);
if (s) {
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = s;
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, inode->i_sb, s);
return 0;
}
if (!create) return 0;
}
inode->i_blocks++;
inode->u.hpfs_i.mmu_private += 512;
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = s;
- bh_result->b_state |= (1UL << BH_Mapped) | (1UL << BH_New);
+ bh_result->b_state |= 1UL << BH_New;
+ map_bh(bh_result, inode->i_sb, s);
return 0;
}
* (0 == error.)
*/
int isofs_get_blocks(struct inode *inode, sector_t iblock,
- struct buffer_head **bh_result, unsigned long nblocks)
+ struct buffer_head **bh, unsigned long nblocks)
{
unsigned long b_off;
unsigned offset, sect_size;
}
}
- if ( *bh_result ) {
- (*bh_result)->b_dev = inode->i_dev;
- (*bh_result)->b_blocknr = firstext + b_off - offset;
- (*bh_result)->b_state |= (1UL << BH_Mapped);
+ if ( *bh ) {
+ map_bh(*bh, inode->i_sb, firstext + b_off - offset);
} else {
- *bh_result = sb_getblk(inode->i_sb, firstext+b_off-offset);
- if ( !*bh_result )
+ *bh = sb_getblk(inode->i_sb, firstext+b_off-offset);
+ if ( !*bh )
goto abort;
}
- bh_result++; /* Next buffer head */
+ bh++; /* Next buffer head */
b_off++; /* Next buffer offset */
nblocks--;
rv++;
}
static inline int get_block(struct inode * inode, sector_t block,
- struct buffer_head *bh_result, int create)
+ struct buffer_head *bh, int create)
{
int err = -EIO;
int offsets[DEPTH];
/* Simplest case - block found, no allocation needed */
if (!partial) {
got_it:
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = block_to_cpu(chain[depth-1].key);
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh, inode->i_sb, block_to_cpu(chain[depth-1].key));
/* Clean up and exit */
partial = chain+depth-1; /* the whole chain */
goto cleanup;
if (splice_branch(inode, chain, partial, left) < 0)
goto changed;
- bh_result->b_state |= (1UL << BH_New);
+ bh->b_state |= (1UL << BH_New);
goto got_it;
changed:
spin_unlock(&dcache_lock);
lock_kernel();
DQUOT_OFF(sb);
- acct_auto_close(sb->s_dev);
+ acct_auto_close(sb);
unlock_kernel();
spin_lock(&dcache_lock);
}
#include <linux/pagemap.h>
+#include <linux/blkdev.h>
/*
* add_gd_partition adds a partition's details to the device's partition description
*/
void add_gd_partition(struct gendisk *hd, int minor, int start, int size);
-typedef struct {struct page *v;} Sector;
-
-unsigned char *read_dev_sector(struct block_device *, unsigned long, Sector *);
-
-static inline void put_dev_sector(Sector p)
-{
- page_cache_release(p.v);
-}
-
extern int warn_no_part;
/* scale priority and nice values from timeslices to -20..20 */
/* to make it look like a "normal" Unix priority/nice value */
- priority = task->counter;
- priority = 20 - (priority * 10 + DEF_COUNTER / 2) / DEF_COUNTER;
+ priority = task->dyn_prio;
nice = task->nice;
read_lock(&tasklist_lock);
phys = qnx4_block_map( inode, iblock );
if ( phys ) {
// logical block is before EOF
- bh->b_dev = inode->i_dev;
- bh->b_blocknr = phys;
- bh->b_state |= (1UL << BH_Mapped);
+ map_bh(bh, inode->i_sb, phys);
} else if ( create ) {
// to be done.
}
}
run_task_queue(&tq_disk);
current->policy |= SCHED_YIELD;
- /*current->counter = 0;*/
+ /* current->dyn_prio = 0; */
schedule();
}
if (repeat_counter > 30000000) {
block. */
/* The function is NOT SCHEDULE-SAFE! */
-struct buffer_head * reiserfs_bread (struct super_block *super, int n_block, int n_size)
+struct buffer_head * reiserfs_bread (struct super_block *super, int n_block)
{
struct buffer_head *result;
PROC_EXP( unsigned int ctx_switches = kstat.context_swtch );
- result = bread (super -> s_dev, n_block, n_size);
+ result = sb_bread(super, n_block);
PROC_INFO_INC( super, breads );
PROC_EXP( if( kstat.context_swtch != ctx_switches )
PROC_INFO_INC( super, bread_miss ) );
actually get the block off of the disk. */
/* The function is NOT SCHEDULE-SAFE! */
-struct buffer_head * reiserfs_getblk (kdev_t n_dev, int n_block, int n_size)
+struct buffer_head * reiserfs_getblk(struct super_block *sb, int n_block)
{
- return getblk (n_dev, n_block, n_size);
+ return sb_getblk(sb, n_block);
}
#ifdef NEW_GET_NEW_BUFFER
blocknr, 1)) == NO_DISK_SPACE )
return NO_DISK_SPACE;
- *pp_s_new_bh = reiserfs_getblk(p_s_sb->s_dev, n_new_blocknumber, p_s_sb->s_blocksize);
+ *pp_s_new_bh = reiserfs_getblk(p_s_sb, n_new_blocknumber);
if ( buffer_uptodate(*pp_s_new_bh) ) {
RFALSE( buffer_dirty(*pp_s_new_bh) || (*pp_s_new_bh)->b_dev == NODEV,
printk("get_new_buffer(%u): counter(%d) too big", current->pid, repeat_counter);
#endif
- current->counter = 0;
+ current->time_slice = 0;
schedule();
}
if ( (n_repeat = reiserfs_new_unf_blocknrs (th, &n_new_blocknumber, p_s_bh->b_blocknr)) == NO_DISK_SPACE )
return NO_DISK_SPACE;
- *pp_s_new_bh = reiserfs_getblk(p_s_sb->s_dev, n_new_blocknumber, p_s_sb->s_blocksize);
+ *pp_s_new_bh = reiserfs_getblk(p_s_sb, n_new_blocknumber);
if (atomic_read (&(*pp_s_new_bh)->b_count) > 1) {
/* Free path buffers to prevent deadlock which can occur in the
situation like : this process holds p_s_path; Block
RFALSE( ! *p_n_blocknr,
"PAP-8135: reiserfs_new_blocknrs failed when got new blocks");
- p_s_new_bh = reiserfs_getblk(p_s_sb->s_dev, *p_n_blocknr, p_s_sb->s_blocksize);
+ p_s_new_bh = reiserfs_getblk(p_s_sb, *p_n_blocknr);
if (atomic_read (&(p_s_new_bh->b_count)) > 1) {
/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
/*
n_child_position = ( p_s_bh == p_s_tb->FL[n_h] ) ? p_s_tb->lkey[n_h] : B_NR_ITEMS (p_s_tb->FL[n_h]);
n_son_number = B_N_CHILD_NUM(p_s_tb->FL[n_h], n_child_position);
- p_s_bh = reiserfs_bread(p_s_sb, n_son_number, p_s_sb->s_blocksize);
+ p_s_bh = reiserfs_bread(p_s_sb, n_son_number);
if (!p_s_bh)
return IO_ERROR;
if ( FILESYSTEM_CHANGED_TB (p_s_tb) ) {
n_child_position = ( p_s_bh == p_s_tb->FR[n_h] ) ? p_s_tb->rkey[n_h] + 1 : 0;
n_son_number = B_N_CHILD_NUM(p_s_tb->FR[n_h], n_child_position);
- p_s_bh = reiserfs_bread(p_s_sb, n_son_number, p_s_sb->s_blocksize);
+ p_s_bh = reiserfs_bread(p_s_sb, n_son_number);
if (!p_s_bh)
return IO_ERROR;
if ( FILESYSTEM_CHANGED_TB (p_s_tb) ) {
static inline void set_block_dev_mapped (struct buffer_head * bh,
b_blocknr_t block, struct inode * inode)
{
- bh->b_dev = inode->i_dev;
- bh->b_blocknr = block;
- bh->b_state |= (1UL << BH_Mapped);
+ map_bh(bh, inode->i_sb, block);
}
blocknr = get_block_num(ind_item, path.pos_in_item) ;
ret = 0 ;
if (blocknr) {
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = blocknr;
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, inode->i_sb, blocknr);
} else if ((args & GET_BLOCK_NO_HOLE)) {
ret = -ENOENT ;
}
finished:
pathrelse (&path);
- bh_result->b_blocknr = 0 ;
- bh_result->b_dev = inode->i_dev;
+ /* I _really_ doubt that you want it. Chris? */
+ map_bh(bh_result, inode->i_sb, 0);
mark_buffer_uptodate (bh_result, 1);
- bh_result->b_state |= (1UL << BH_Mapped);
return 0;
}
// FIXME: do we need this? shouldn't we simply continue?
run_task_queue(&tq_disk);
current->policy |= SCHED_YIELD;
- /*current->counter = 0;*/
+ /*current->time_slice = 0;*/
schedule();
#endif
continue;
for (i = 0; i < bmap_nr; i++)
bitmap[i] = SB_AP_BITMAP(s)[i];
for (i = bmap_nr; i < bmap_nr_new; i++) {
- bitmap[i] = reiserfs_getblk(s->s_dev, i * s->s_blocksize * 8, s->s_blocksize);
+ bitmap[i] = reiserfs_getblk(s, i * s->s_blocksize * 8);
memset(bitmap[i]->b_data, 0, sb->s_blocksize);
reiserfs_test_and_set_le_bit(0, bitmap[i]->b_data);
if (blocknr == 0)
return;
- bh = reiserfs_getblk (s->s_dev, blocknr, s->s_blocksize);
+ bh = reiserfs_getblk (s, blocknr);
if (!buffer_uptodate (bh)) {
ll_rw_block (READA, 1, &bh);
DISK_LEAF_NODE_LEVEL */
) {
int n_block_number = SB_ROOT_BLOCK (p_s_sb),
- expected_level = SB_TREE_HEIGHT (p_s_sb),
- n_block_size = p_s_sb->s_blocksize;
+ expected_level = SB_TREE_HEIGHT (p_s_sb);
struct buffer_head * p_s_bh;
struct path_element * p_s_last_element;
int n_node_level, n_retval;
/* Read the next tree node, and set the last element in the path to
have a pointer to it. */
if ( ! (p_s_bh = p_s_last_element->pe_buffer =
- reiserfs_bread(p_s_sb, n_block_number, n_block_size)) ) {
+ reiserfs_bread(p_s_sb, n_block_number)) ) {
p_s_search_path->path_length --;
pathrelse(p_s_search_path);
return IO_ERROR;
#include <linux/smp_lock.h>
#include <linux/locks.h>
#include <linux/init.h>
+#include <linux/blkdev.h>
#define REISERFS_OLD_BLOCKSIZE 4096
#define REISERFS_SUPER_MAGIC_STRING_OFFSET_NJ 20
labeling scheme currently used will have enough space. Then we
need one block for the super. -Hans */
bmp = (REISERFS_DISK_OFFSET_IN_BYTES / s->s_blocksize) + 1; /* first of bitmap blocks */
- SB_AP_BITMAP (s)[0] = reiserfs_bread (s, bmp, s->s_blocksize);
+ SB_AP_BITMAP (s)[0] = reiserfs_bread (s, bmp);
if(!SB_AP_BITMAP(s)[0])
return 1;
for (i = 1, bmp = dl = s->s_blocksize * 8; i < sb_bmap_nr(rs); i ++) {
- SB_AP_BITMAP (s)[i] = reiserfs_bread (s, bmp, s->s_blocksize);
+ SB_AP_BITMAP (s)[i] = reiserfs_bread (s, bmp);
if (!SB_AP_BITMAP (s)[i])
return 1;
bmp += dl;
memset (SB_AP_BITMAP (s), 0, sizeof (struct buffer_head *) * sb_bmap_nr(rs));
for (i = 0; i < sb_bmap_nr(rs); i ++) {
- SB_AP_BITMAP (s)[i] = reiserfs_bread (s, bmp1 + i, s->s_blocksize);
+ SB_AP_BITMAP (s)[i] = reiserfs_bread (s, bmp1 + i);
if (!SB_AP_BITMAP (s)[i])
return 1;
}
free, SB_FREE_BLOCKS (s));
}
-static int read_super_block (struct super_block * s, int size, int offset)
+static int read_super_block (struct super_block * s, int offset)
{
struct buffer_head * bh;
struct reiserfs_super_block * rs;
- bh = bread (s->s_dev, offset / size, size);
+ bh = sb_bread (s, offset / s->s_blocksize);
if (!bh) {
printk ("read_super_block: "
"bread failed (dev %s, block %d, size %d)\n",
- kdevname (s->s_dev), offset / size, size);
+ kdevname (s->s_dev), offset / s->s_blocksize, s->s_blocksize);
return 1;
}
if (!is_reiserfs_magic_string (rs)) {
printk ("read_super_block: "
"can't find a reiserfs filesystem on (dev %s, block %lu, size %d)\n",
- kdevname(s->s_dev), bh->b_blocknr, size);
+ kdevname(s->s_dev), bh->b_blocknr, s->s_blocksize);
brelse (bh);
return 1;
}
//
// ok, reiserfs signature (old or new) found at the given offset
//
- s->s_blocksize = sb_blocksize(rs);
- s->s_blocksize_bits = 0;
- while ((1 << s->s_blocksize_bits) != s->s_blocksize)
- s->s_blocksize_bits ++;
-
brelse (bh);
- if (s->s_blocksize != size)
- set_blocksize (s->s_dev, s->s_blocksize);
+ sb_set_blocksize (s, sb_blocksize(rs));
- bh = reiserfs_bread (s, offset / s->s_blocksize, s->s_blocksize);
+ bh = reiserfs_bread (s, offset / s->s_blocksize);
if (!bh) {
printk("read_super_block: "
"bread failed (dev %s, block %d, size %d)\n",
- kdevname (s->s_dev), offset / size, size);
+ kdevname (s->s_dev), offset / s->s_blocksize, s->s_blocksize);
return 1;
}
rs = (struct reiserfs_super_block *)bh->b_data;
- if (!is_reiserfs_magic_string (rs) ||
- sb_blocksize(rs) != s->s_blocksize) {
+ if (!is_reiserfs_magic_string (rs) || sb_blocksize(rs) != s->s_blocksize) {
printk ("read_super_block: "
"can't find a reiserfs filesystem on (dev %s, block %lu, size %d)\n",
- kdevname(s->s_dev), bh->b_blocknr, size);
+ kdevname(s->s_dev), bh->b_blocknr, s->s_blocksize);
brelse (bh);
printk ("read_super_block: can't find a reiserfs filesystem on dev %s.\n", kdevname(s->s_dev));
return 1;
{
int size;
struct inode *root_inode;
- kdev_t dev = s->s_dev;
int j;
- extern int *blksize_size[];
struct reiserfs_transaction_handle th ;
int old_format = 0;
unsigned long blocks;
return NULL;
}
- if (blksize_size[MAJOR(dev)] && blksize_size[MAJOR(dev)][MINOR(dev)] != 0) {
- /* as blocksize is set for partition we use it */
- size = blksize_size[MAJOR(dev)][MINOR(dev)];
- } else {
- size = BLOCK_SIZE;
- set_blocksize (s->s_dev, BLOCK_SIZE);
- }
+ size = block_size(s->s_dev);
+ sb_set_blocksize(s, size);
/* read block (64-th 1k block), which can contain reiserfs super block */
- if (read_super_block (s, size, REISERFS_DISK_OFFSET_IN_BYTES)) {
+ if (read_super_block (s, REISERFS_DISK_OFFSET_IN_BYTES)) {
// try old format (undistributed bitmap, super block in 8-th 1k block of a device)
- if (read_super_block (s, size, REISERFS_OLD_DISK_OFFSET_IN_BYTES))
+ sb_set_blocksize(s, size);
+ if (read_super_block (s, REISERFS_OLD_DISK_OFFSET_IN_BYTES))
goto error;
else
old_format = 1;
}
+ s->s_blocksize = size;
s->u.reiserfs_sb.s_mount_state = SB_REISERFS_STATE(s);
s->u.reiserfs_sb.s_mount_state = REISERFS_VALID_FS ;
return -EACCES;
/*flags |= MS_RDONLY;*/
if (flags & MS_RDONLY)
- acct_auto_close(sb->s_dev);
+ acct_auto_close(sb);
shrink_dcache_sb(sb);
fsync_super(sb);
/* If we are remounting RDONLY, make sure there are no rw files open */
struct inode *inode;
struct block_device *bdev;
struct block_device_operations *bdops;
+ devfs_handle_t de;
struct super_block * s;
struct nameidata nd;
struct list_head *p;
goto out;
bd_acquire(inode);
bdev = inode->i_bdev;
- bdops = devfs_get_ops ( devfs_get_handle_from_inode (inode) );
+ de = devfs_get_handle_from_inode (inode);
+ bdops = devfs_get_ops (de); /* Increments module use count */
if (bdops) bdev->bd_op = bdops;
/* Done with lookups, semaphore down */
dev = to_kdev_t(bdev->bd_dev);
if (!(flags & MS_RDONLY))
mode |= FMODE_WRITE;
error = blkdev_get(bdev, mode, 0, BDEV_FS);
+ devfs_put_ops (de); /* Decrement module use count now we're safe */
if (error)
goto out;
check_disk_change(dev);
/* Simplest case - block found, no allocation needed */
if (!partial) {
got_it:
- bh_result->b_dev = sb->s_dev;
- bh_result->b_blocknr = block_to_cpu(sb, chain[depth-1].key);
- bh_result->b_state |= (1UL << BH_Mapped);
+ map_bh(bh_result, sb, block_to_cpu(sb, chain[depth-1].key));
/* Clean up and exit */
partial = chain+depth-1; /* the whole chain */
goto cleanup;
{
phys = udf_block_map(inode, block);
if (phys)
- {
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
- }
+ map_bh(bh_result, inode->i_sb, phys);
return 0;
}
if (!phys)
BUG();
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
if (new)
bh_result->b_state |= (1UL << BH_New);
+ map_bh(bh_result, inode->i_sb, phys);
abort:
unlock_kernel();
return err;
static int ufs_getfrag_block (struct inode *inode, sector_t fragment, struct buffer_head *bh_result, int create)
{
- struct super_block * sb;
- struct ufs_sb_private_info * uspi;
+ struct super_block * sb = inode->i_sb;
+ struct ufs_sb_private_info * uspi = sb->u.ufs_sb.s_uspi;
struct buffer_head * bh;
int ret, err, new;
unsigned long ptr, phys;
- sb = inode->i_sb;
- uspi = sb->u.ufs_sb.s_uspi;
-
if (!create) {
phys = ufs_frag_map(inode, fragment);
- if (phys) {
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
- }
+ if (phys)
+ map_bh(bh_result, sb, phys);
return 0;
}
out:
if (err)
goto abort;
- bh_result->b_dev = inode->i_dev;
- bh_result->b_blocknr = phys;
- bh_result->b_state |= (1UL << BH_Mapped);
if (new)
bh_result->b_state |= (1UL << BH_New);
+ map_bh(bh_result, sb, phys);
abort:
unlock_kernel();
return err;
* Copyright 1995, Russell King.
* Various bits and pieces copyrights include:
* Linus Torvalds (test_bit).
+ * Big endian support: Copyright 2001, Nicolas Pitre
+ * reworked by rmk.
*
* bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
*
#ifdef __KERNEL__
+#include <asm/system.h>
+
#define smp_mb__before_clear_bit() do { } while (0)
#define smp_mb__after_clear_bit() do { } while (0)
/*
- * Function prototypes to keep gcc -Wall happy.
+ * These functions are the basis of our bit ops.
+ * First, the atomic bitops.
+ *
+ * The endian issue for these functions is handled by the macros below.
*/
-extern void set_bit(int nr, volatile void * addr);
+static inline void
+____atomic_set_bit_mask(unsigned int mask, volatile unsigned char *p)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ *p |= mask;
+ local_irq_restore(flags);
+}
+
+static inline void
+____atomic_clear_bit_mask(unsigned int mask, volatile unsigned char *p)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ *p &= ~mask;
+ local_irq_restore(flags);
+}
-static inline void __set_bit(int nr, volatile void *addr)
+static inline void
+____atomic_change_bit_mask(unsigned int mask, volatile unsigned char *p)
{
- ((unsigned char *) addr)[nr >> 3] |= (1U << (nr & 7));
+ unsigned long flags;
+
+ local_irq_save(flags);
+ *p ^= mask;
+ local_irq_restore(flags);
}
-extern void clear_bit(int nr, volatile void * addr);
+static inline int
+____atomic_test_and_set_bit_mask(unsigned int mask, volatile unsigned char *p)
+{
+ unsigned long flags;
+ unsigned int res;
+
+ local_irq_save(flags);
+ res = *p;
+ *p = res | mask;
+ local_irq_restore(flags);
+
+ return res & mask;
+}
+
+static inline int
+____atomic_test_and_clear_bit_mask(unsigned int mask, volatile unsigned char *p)
+{
+ unsigned long flags;
+ unsigned int res;
+
+ local_irq_save(flags);
+ res = *p;
+ *p = res & ~mask;
+ local_irq_restore(flags);
+
+ return res & mask;
+}
-static inline void __clear_bit(int nr, volatile void *addr)
+static inline int
+____atomic_test_and_change_bit_mask(unsigned int mask, volatile unsigned char *p)
{
- ((unsigned char *) addr)[nr >> 3] &= ~(1U << (nr & 7));
+ unsigned long flags;
+ unsigned int res;
+
+ local_irq_save(flags);
+ res = *p;
+ *p = res ^ mask;
+ local_irq_restore(flags);
+
+ return res & mask;
}
-extern void change_bit(int nr, volatile void * addr);
+/*
+ * Now the non-atomic variants. We let the compiler handle all optimisations
+ * for these.
+ */
+static inline void ____nonatomic_set_bit(int nr, volatile void *p)
+{
+ ((unsigned char *) p)[nr >> 3] |= (1U << (nr & 7));
+}
-static inline void __change_bit(int nr, volatile void *addr)
+static inline void ____nonatomic_clear_bit(int nr, volatile void *p)
{
- ((unsigned char *) addr)[nr >> 3] ^= (1U << (nr & 7));
+ ((unsigned char *) p)[nr >> 3] &= ~(1U << (nr & 7));
}
-extern int test_and_set_bit(int nr, volatile void * addr);
+static inline void ____nonatomic_change_bit(int nr, volatile void *p)
+{
+ ((unsigned char *) p)[nr >> 3] ^= (1U << (nr & 7));
+}
-static inline int __test_and_set_bit(int nr, volatile void *addr)
+static inline int ____nonatomic_test_and_set_bit(int nr, volatile void *p)
{
unsigned int mask = 1 << (nr & 7);
unsigned int oldval;
- oldval = ((unsigned char *) addr)[nr >> 3];
- ((unsigned char *) addr)[nr >> 3] = oldval | mask;
+ oldval = ((unsigned char *) p)[nr >> 3];
+ ((unsigned char *) p)[nr >> 3] = oldval | mask;
return oldval & mask;
}
-extern int test_and_clear_bit(int nr, volatile void * addr);
-
-static inline int __test_and_clear_bit(int nr, volatile void *addr)
+static inline int ____nonatomic_test_and_clear_bit(int nr, volatile void *p)
{
unsigned int mask = 1 << (nr & 7);
unsigned int oldval;
- oldval = ((unsigned char *) addr)[nr >> 3];
- ((unsigned char *) addr)[nr >> 3] = oldval & ~mask;
+ oldval = ((unsigned char *) p)[nr >> 3];
+ ((unsigned char *) p)[nr >> 3] = oldval & ~mask;
return oldval & mask;
}
-extern int test_and_change_bit(int nr, volatile void * addr);
-
-static inline int __test_and_change_bit(int nr, volatile void *addr)
+static inline int ____nonatomic_test_and_change_bit(int nr, volatile void *p)
{
unsigned int mask = 1 << (nr & 7);
unsigned int oldval;
- oldval = ((unsigned char *) addr)[nr >> 3];
- ((unsigned char *) addr)[nr >> 3] = oldval ^ mask;
+ oldval = ((unsigned char *) p)[nr >> 3];
+ ((unsigned char *) p)[nr >> 3] = oldval ^ mask;
return oldval & mask;
}
-extern int find_first_zero_bit(void * addr, unsigned size);
-extern int find_next_zero_bit(void * addr, int size, int offset);
-
/*
* This routine doesn't need to be atomic.
*/
-static inline int test_bit(int nr, const void * addr)
+static inline int ____test_bit(int nr, const void * p)
{
- return ((unsigned char *) addr)[nr >> 3] & (1U << (nr & 7));
+ return ((volatile unsigned char *) p)[nr >> 3] & (1U << (nr & 7));
}
+/*
+ * A note about Endian-ness.
+ * -------------------------
+ *
+ * When the ARM is put into big endian mode via the CP15 control register, the processor
+ * merely swaps the order of bytes within words, thus:
+ *
+ * ------------ physical data bus bits -----------
+ * D31 ... D24 D23 ... D16 D15 ... D8 D7 ... D0
+ * little byte 3 byte 2 byte 1 byte 0
+ * big byte 0 byte 1 byte 2 byte 3
+ *
+ * This means that reading a 32-bit word at address 0 returns the same
+ * value irrespective of the endian mode bit.
+ *
+ * Peripheral devices should be connected with the data bus reversed in
+ * "Big Endian" mode. ARM Application Note 61 is applicable, and is
+ * available from http://www.arm.com/.
+ *
+ * The following assumes that the data bus connectivity for big endian
+ * mode has been followed.
+ *
+ * Note that bit 0 is defined to be 32-bit word bit 0, not byte 0 bit 0.
+ */
+
+/*
+ * Little endian assembly bitops. nr = 0 -> byte 0 bit 0.
+ */
+extern void _set_bit_le(int nr, volatile void * p);
+extern void _clear_bit_le(int nr, volatile void * p);
+extern void _change_bit_le(int nr, volatile void * p);
+extern int _test_and_set_bit_le(int nr, volatile void * p);
+extern int _test_and_clear_bit_le(int nr, volatile void * p);
+extern int _test_and_change_bit_le(int nr, volatile void * p);
+extern int _find_first_zero_bit_le(void * p, unsigned size);
+extern int _find_next_zero_bit_le(void * p, int size, int offset);
+
+/*
+ * Big endian assembly bitops. nr = 0 -> byte 3 bit 0.
+ */
+extern void _set_bit_be(int nr, volatile void * p);
+extern void _clear_bit_be(int nr, volatile void * p);
+extern void _change_bit_be(int nr, volatile void * p);
+extern int _test_and_set_bit_be(int nr, volatile void * p);
+extern int _test_and_clear_bit_be(int nr, volatile void * p);
+extern int _test_and_change_bit_be(int nr, volatile void * p);
+extern int _find_first_zero_bit_be(void * p, unsigned size);
+extern int _find_next_zero_bit_be(void * p, int size, int offset);
+
+
+/*
+ * The __* forms of bitops are non-atomic and may be reordered.
+ */
+#define ATOMIC_BITOP_LE(name,nr,p) \
+ (__builtin_constant_p(nr) ? \
+ ____atomic_##name##_mask(1 << ((nr) & 7), \
+ ((unsigned char *)(p)) + ((nr) >> 3)) : \
+ _##name##_le(nr,p))
+
+#define ATOMIC_BITOP_BE(name,nr,p) \
+ (__builtin_constant_p(nr) ? \
+ ____atomic_##name##_mask(1 << ((nr) & 7), \
+ ((unsigned char *)(p)) + (((nr) >> 3) ^ 3)) : \
+ _##name##_be(nr,p))
+
+#define NONATOMIC_BITOP_LE(name,nr,p) \
+ (____nonatomic_##name(nr, p))
+
+#define NONATOMIC_BITOP_BE(name,nr,p) \
+ (____nonatomic_##name(nr ^ 0x18, p))
+
+#ifndef __ARMEB__
+/*
+ * These are the little endian, atomic definitions.
+ */
+#define set_bit(nr,p) ATOMIC_BITOP_LE(set_bit,nr,p)
+#define clear_bit(nr,p) ATOMIC_BITOP_LE(clear_bit,nr,p)
+#define change_bit(nr,p) ATOMIC_BITOP_LE(change_bit,nr,p)
+#define test_and_set_bit(nr,p) ATOMIC_BITOP_LE(test_and_set_bit,nr,p)
+#define test_and_clear_bit(nr,p) ATOMIC_BITOP_LE(test_and_clear_bit,nr,p)
+#define test_and_change_bit(nr,p) ATOMIC_BITOP_LE(test_and_change_bit,nr,p)
+#define test_bit(nr,p) ____test_bit(nr,p)
+#define find_first_zero_bit(p,sz) _find_first_zero_bit_le(p,sz)
+#define find_next_zero_bit(p,sz,off) _find_next_zero_bit_le(p,sz,off)
+
+/*
+ * These are the little endian, non-atomic definitions.
+ */
+#define __set_bit(nr,p) NONATOMIC_BITOP_LE(set_bit,nr,p)
+#define __clear_bit(nr,p) NONATOMIC_BITOP_LE(clear_bit,nr,p)
+#define __change_bit(nr,p) NONATOMIC_BITOP_LE(change_bit,nr,p)
+#define __test_and_set_bit(nr,p) NONATOMIC_BITOP_LE(test_and_set_bit,nr,p)
+#define __test_and_clear_bit(nr,p) NONATOMIC_BITOP_LE(test_and_clear_bit,nr,p)
+#define __test_and_change_bit(nr,p) NONATOMIC_BITOP_LE(test_and_change_bit,nr,p)
+#define __test_bit(nr,p) ____test_bit(nr,p)
+
+#else
+
+/*
+ * These are the big endian, atomic definitions.
+ */
+#define set_bit(nr,p) ATOMIC_BITOP_BE(set_bit,nr,p)
+#define clear_bit(nr,p) ATOMIC_BITOP_BE(clear_bit,nr,p)
+#define change_bit(nr,p) ATOMIC_BITOP_BE(change_bit,nr,p)
+#define test_and_set_bit(nr,p) ATOMIC_BITOP_BE(test_and_set_bit,nr,p)
+#define test_and_clear_bit(nr,p) ATOMIC_BITOP_BE(test_and_clear_bit,nr,p)
+#define test_and_change_bit(nr,p) ATOMIC_BITOP_BE(test_and_change_bit,nr,p)
+#define test_bit(nr,p) ____test_bit((nr) ^ 0x18, p)
+#define find_first_zero_bit(p,sz) _find_first_zero_bit_be(p,sz)
+#define find_next_zero_bit(p,sz,off) _find_next_zero_bit_be(p,sz,off)
+
+/*
+ * These are the big endian, non-atomic definitions.
+ */
+#define __set_bit(nr,p) NONATOMIC_BITOP_BE(set_bit,nr,p)
+#define __clear_bit(nr,p) NONATOMIC_BITOP_BE(clear_bit,nr,p)
+#define __change_bit(nr,p) NONATOMIC_BITOP_BE(change_bit,nr,p)
+#define __test_and_set_bit(nr,p) NONATOMIC_BITOP_BE(test_and_set_bit,nr,p)
+#define __test_and_clear_bit(nr,p) NONATOMIC_BITOP_BE(test_and_clear_bit,nr,p)
+#define __test_and_change_bit(nr,p) NONATOMIC_BITOP_BE(test_and_change_bit,nr,p)
+#define __test_bit(nr,p) ____test_bit((nr) ^ 0x18, p)
+
+#endif
+
/*
* ffz = Find First Zero in word. Undefined if no zero exists,
* so code should check against ~0UL first..
#define hweight16(x) generic_hweight16(x)
#define hweight8(x) generic_hweight8(x)
-#define ext2_set_bit test_and_set_bit
-#define ext2_clear_bit test_and_clear_bit
-#define ext2_test_bit test_bit
-#define ext2_find_first_zero_bit find_first_zero_bit
-#define ext2_find_next_zero_bit find_next_zero_bit
-
-/* Bitmap functions for the minix filesystem. */
-#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
-#define minix_set_bit(nr,addr) set_bit(nr,addr)
-#define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
-#define minix_test_bit(nr,addr) test_bit(nr,addr)
-#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)
+/*
+ * Ext2 is defined to use little-endian byte ordering.
+ * These do not need to be atomic.
+ */
+#define ext2_set_bit(nr,p) NONATOMIC_BITOP_LE(test_and_set_bit,nr,p)
+#define ext2_clear_bit(nr,p) NONATOMIC_BITOP_LE(test_and_clear_bit,nr,p)
+#define ext2_test_bit(nr,p) __test_bit(nr,p)
+#define ext2_find_first_zero_bit(p,sz) _find_first_zero_bit_le(p,sz)
+#define ext2_find_next_zero_bit(p,sz,off) _find_next_zero_bit_le(p,sz,off)
+
+/*
+ * Minix is defined to use little-endian byte ordering.
+ * These do not need to be atomic.
+ */
+#define minix_set_bit(nr,p) NONATOMIC_BITOP_LE(set_bit,nr,p)
+#define minix_test_bit(nr,p) __test_bit(nr,p)
+#define minix_test_and_set_bit(nr,p) NONATOMIC_BITOP_LE(test_and_set_bit,nr,p)
+#define minix_test_and_clear_bit(nr,p) NONATOMIC_BITOP_LE(test_and_clear_bit,nr,p)
+#define minix_find_first_zero_bit(p,sz) _find_first_zero_bit_le(p,sz)
#endif /* __KERNEL__ */
#include <linux/config.h>
#ifdef CONFIG_BSD_PROCESS_ACCT
-extern void acct_auto_close(kdev_t dev);
+extern void acct_auto_close(struct super_block *sb);
extern int acct_process(long exitcode);
#else
#define acct_auto_close(x) do { } while (0)
#endif
#define BIO_MAX_SECTORS 128
+#define BIO_MAX_SIZE (BIO_MAX_SECTORS << 9)
/*
* was unsigned short, but we might as well be ready for > 64kB I/O pages
#include <linux/genhd.h>
#include <linux/tqueue.h>
#include <linux/list.h>
-#include <linux/mm.h>
+#include <linux/pagemap.h>
#include <asm/scatterlist.h>
#define RQ_SCSI_DISCONNECTING 0xffe0
#define QUEUE_FLAG_PLUGGED 0 /* queue is plugged */
-#define QUEUE_FLAG_NOSPLIT 1 /* can process bio over several goes */
-#define QUEUE_FLAG_CLUSTER 2 /* cluster several segments into 1 */
+#define QUEUE_FLAG_CLUSTER 1 /* cluster several segments into 1 */
#define blk_queue_plugged(q) test_bit(QUEUE_FLAG_PLUGGED, &(q)->queue_flags)
#define blk_mark_plugged(q) set_bit(QUEUE_FLAG_PLUGGED, &(q)->queue_flags)
#define rq_data_dir(rq) ((rq)->flags & 1)
+/*
+ * a mergeable request must not have the _NOMERGE or _BARRIER bit set,
+ * nor may it already have been started by the driver.
+ */
+#define rq_mergeable(rq) \
+ (!((rq)->flags & (REQ_NOMERGE | REQ_STARTED | REQ_BARRIER)) \
+ && ((rq)->flags & REQ_CMD))
+
/*
* noop, requests are automagically marked as active/inactive by I/O
* scheduler -- see elv_next_request
extern int block_ioctl(kdev_t, unsigned int, unsigned long);
extern int ll_10byte_cmd_build(request_queue_t *, struct request *);
+/*
+ * get ready for proper ref counting
+ */
+#define blk_put_queue(q) do { } while (0)
+
/*
* Access functions for manipulating queue properties
*/
return retval;
}
+typedef struct { struct page *v; } Sector;
+
+unsigned char *read_dev_sector(struct block_device *, unsigned long, Sector *);
+
+static inline void put_dev_sector(Sector p)
+{
+ page_cache_release(p.v);
+}
+
#endif
binary interface will change */
struct devfsd_notify_struct
-{
+{ /* Use native C types to ensure same types in kernel and user space */
unsigned int type; /* DEVFSD_NOTIFY_* value */
unsigned int mode; /* Mode of the inode or device entry */
unsigned int major; /* Major number of device entry */
*/
#define DEVFS_FL_REMOVABLE 0x010 /* This is a removable media device */
#define DEVFS_FL_WAIT 0x020 /* Wait for devfsd to finish */
-#define DEVFS_FL_NO_PERSISTENCE 0x040 /* Forget changes after unregister */
-#define DEVFS_FL_CURRENT_OWNER 0x080 /* Set initial ownership to current */
+#define DEVFS_FL_CURRENT_OWNER 0x040 /* Set initial ownership to current */
#define DEVFS_FL_DEFAULT DEVFS_FL_NONE
#define UNIQUE_NUMBERSPACE_INITIALISER {SPIN_LOCK_UNLOCKED, 0, 0, 0, NULL}
+extern void devfs_put (devfs_handle_t de);
extern devfs_handle_t devfs_register (devfs_handle_t dir, const char *name,
unsigned int flags,
unsigned int major, unsigned int minor,
devfs_handle_t *handle, void *info);
extern devfs_handle_t devfs_mk_dir (devfs_handle_t dir, const char *name,
void *info);
+extern devfs_handle_t devfs_get_handle (devfs_handle_t dir, const char *name,
+ unsigned int major,unsigned int minor,
+ char type, int traverse_symlinks);
extern devfs_handle_t devfs_find_handle (devfs_handle_t dir, const char *name,
unsigned int major,unsigned int minor,
char type, int traverse_symlinks);
extern devfs_handle_t devfs_get_handle_from_inode (struct inode *inode);
extern int devfs_generate_path (devfs_handle_t de, char *path, int buflen);
extern void *devfs_get_ops (devfs_handle_t de);
+extern void devfs_put_ops (devfs_handle_t de);
extern int devfs_set_file_size (devfs_handle_t de, unsigned long size);
extern void *devfs_get_info (devfs_handle_t de);
extern int devfs_set_info (devfs_handle_t de, void *info);
#define UNIQUE_NUMBERSPACE_INITIALISER {0}
+static inline void devfs_put (devfs_handle_t de)
+{
+ return;
+}
static inline devfs_handle_t devfs_register (devfs_handle_t dir,
const char *name,
unsigned int flags,
{
return NULL;
}
+static inline devfs_handle_t devfs_get_handle (devfs_handle_t dir,
+ const char *name,
+ unsigned int major,
+ unsigned int minor,
+ char type,
+ int traverse_symlinks)
+{
+ return NULL;
+}
static inline devfs_handle_t devfs_find_handle (devfs_handle_t dir,
const char *name,
unsigned int major,
{
return NULL;
}
+static inline void devfs_put_ops (devfs_handle_t de)
+{
+ return;
+}
static inline int devfs_set_file_size (devfs_handle_t de, unsigned long size)
{
return -ENOSYS;
extern int try_to_free_buffers(struct page *, unsigned int);
extern void refile_buffer(struct buffer_head * buf);
-extern void create_empty_buffers(struct page *, kdev_t, unsigned long);
+extern void create_empty_buffers(struct page *, unsigned long);
extern void end_buffer_io_sync(struct buffer_head *bh, int uptodate);
/* reiserfs_writepage needs this */
{
return get_hash_table(sb->s_dev, block, sb->s_blocksize);
}
+static inline void map_bh(struct buffer_head *bh, struct super_block *sb, int block)
+{
+ bh->b_state |= 1 << BH_Mapped;
+ bh->b_dev = sb->s_dev;
+ bh->b_blocknr = block;
+}
extern void wakeup_bdflush(void);
extern void put_unused_buffer_head(struct buffer_head * bh);
extern struct buffer_head * get_unused_buffer_head(int async);
#include <linux/nls.h>
struct fat_cache {
- kdev_t device; /* device number. 0 means unused. */
+ struct super_block *sb; /* fs in question. NULL means unused */
int start_cluster; /* first cluster of the chain. */
int file_cluster; /* cluster number in the file. */
int disk_cluster; /* cluster number on disk. */
int *d_clu);
extern void fat_cache_add(struct inode *inode, int f_clu, int d_clu);
extern void fat_cache_inval_inode(struct inode *inode);
-extern void fat_cache_inval_dev(kdev_t device);
+extern void fat_cache_inval_dev(struct super_block *sb);
extern int fat_get_cluster(struct inode *inode, int cluster);
extern int fat_free(struct inode *inode, int skip);
#define PCI_DEVICE_ID_VIA_8233_7 0x3065
#define PCI_DEVICE_ID_VIA_82C686_6 0x3068
#define PCI_DEVICE_ID_VIA_8233_0 0x3074
-#define PCI_DEVICE_ID_VIA_8622 0x3102
-#define PCI_DEVICE_ID_VIA_8233C_0 0x3109
-#define PCI_DEVICE_ID_VIA_8361 0x3112
#define PCI_DEVICE_ID_VIA_8633_0 0x3091
#define PCI_DEVICE_ID_VIA_8367_0 0x3099
+#define PCI_DEVICE_ID_VIA_8622 0x3102
+#define PCI_DEVICE_ID_VIA_8233C_0 0x3109
+#define PCI_DEVICE_ID_VIA_8361 0x3112
+#define PCI_DEVICE_ID_VIA_8233A 0x3147
#define PCI_DEVICE_ID_VIA_86C100A 0x6100
#define PCI_DEVICE_ID_VIA_8231 0x8231
#define PCI_DEVICE_ID_VIA_8231_4 0x8235
#define PCI_DEVICE_ID_VIA_82C597_1 0x8597
#define PCI_DEVICE_ID_VIA_82C598_1 0x8598
#define PCI_DEVICE_ID_VIA_8601_1 0x8601
-#define PCI_DEVICE_ID_VIA_8505_1 0X8605
+#define PCI_DEVICE_ID_VIA_8505_1 0x8605
#define PCI_DEVICE_ID_VIA_8633_1 0xB091
#define PCI_DEVICE_ID_VIA_8367_1 0xB099
/* fields after this point are cleared when invalidating */
struct super_block *dq_sb; /* superblock this applies to */
unsigned int dq_id; /* ID this applies to (uid, gid) */
- kdev_t dq_dev; /* Device this applies to */
short dq_type; /* Type of quota */
short dq_flags; /* See DQ_* */
unsigned long dq_referenced; /* Number of times this dquot was
extern void dquot_initialize(struct inode *inode, short type);
extern void dquot_drop(struct inode *inode);
extern int quota_off(struct super_block *sb, short type);
-extern int sync_dquots(kdev_t dev, short type);
+extern int sync_dquots(struct super_block *sb, short type);
extern int dquot_alloc_block(struct inode *inode, unsigned long number, char prealloc);
extern int dquot_alloc_inode(const struct inode *inode, unsigned long number);
return 0;
}
-#define DQUOT_SYNC(dev) sync_dquots(dev, -1)
+#define DQUOT_SYNC(sb) sync_dquots(sb, -1)
#define DQUOT_OFF(sb) quota_off(sb, -1)
#else
#define DQUOT_DROP(inode) do { } while(0)
#define DQUOT_ALLOC_INODE(inode) (0)
#define DQUOT_FREE_INODE(inode) do { } while(0)
-#define DQUOT_SYNC(dev) do { } while(0)
+#define DQUOT_SYNC(sb) do { } while(0)
#define DQUOT_OFF(sb) do { } while(0)
#define DQUOT_TRANSFER(inode, iattr) (0)
extern __inline__ int DQUOT_PREALLOC_BLOCK_NODIRTY(struct inode *inode, int nr)
//void decrement_i_read_sync_counter (struct inode * p_s_inode);
-#define block_size(inode) ((inode)->i_sb->s_blocksize)
+#define i_block_size(inode) ((inode)->i_sb->s_blocksize)
#define file_size(inode) ((inode)->i_size)
-#define tail_size(inode) (file_size (inode) & (block_size (inode) - 1))
+#define tail_size(inode) (file_size (inode) & (i_block_size (inode) - 1))
#define tail_has_to_be_packed(inode) (!dont_have_tails ((inode)->i_sb) &&\
-!STORE_TAIL_IN_UNFM(file_size (inode), tail_size(inode), block_size (inode)))
+!STORE_TAIL_IN_UNFM(file_size (inode), tail_size(inode), i_block_size (inode)))
/*
int get_buffer_by_range (struct super_block * p_s_sb, struct key * p_s_range_begin, struct key * p_s_range_end,
/* buffer2.c */
-struct buffer_head * reiserfs_getblk (kdev_t n_dev, int n_block, int n_size);
+struct buffer_head * reiserfs_getblk (struct super_block *super, int n_block);
void wait_buffer_until_released (const struct buffer_head * bh);
-struct buffer_head * reiserfs_bread (struct super_block *super, int n_block,
- int n_size);
+struct buffer_head * reiserfs_bread (struct super_block *super, int n_block);
/* fix_nodes.c */
void * reiserfs_kmalloc (size_t size, int flags, struct super_block * s);
extern void update_process_times(int user);
extern void update_one_process(struct task_struct *p, unsigned long user,
unsigned long system, int cpu);
+extern void expire_task(struct task_struct *p);
#define MAX_SCHEDULE_TIMEOUT LONG_MAX
extern signed long FASTCALL(schedule_timeout(signed long timeout));
* all fields in a single cacheline that are needed for
* the goodness() loop in schedule().
*/
- long counter;
+ long dyn_prio;
long nice;
unsigned long policy;
struct mm_struct *mm;
* that's just fine.)
*/
struct list_head run_list;
+ long time_slice;
unsigned long sleep_time;
+ /* recalculation loop checkpoint */
+ unsigned long rcl_last;
struct task_struct *next_task, *prev_task;
struct mm_struct *active_mm;
*/
#define _STK_LIM (8*1024*1024)
-#define DEF_COUNTER (10*HZ/100) /* 100 ms time slice */
-#define MAX_COUNTER (20*HZ/100)
+#define MAX_DYNPRIO 100
+#define DEF_TSLICE (6 * HZ / 100)
+#define MAX_TSLICE (20 * HZ / 100)
#define DEF_NICE (0)
addr_limit: KERNEL_DS, \
exec_domain: &default_exec_domain, \
lock_depth: -1, \
- counter: DEF_COUNTER, \
+ dyn_prio: 0, \
nice: DEF_NICE, \
policy: SCHED_OTHER, \
mm: NULL, \
active_mm: &init_mm, \
cpus_runnable: -1, \
cpus_allowed: -1, \
- run_list: LIST_HEAD_INIT(tsk.run_list), \
+ run_list: { NULL, NULL }, \
+ rcl_last: 0, \
+ time_slice: DEF_TSLICE, \
next_task: &tsk, \
prev_task: &tsk, \
p_opptr: &tsk, \
/*
* The default limit for the nr of threads is now in
- * /proc/sys/kernel/max-threads.
+ * /proc/sys/kernel/threads-max.
*/
#ifdef CONFIG_SMP
#define SCSICAM_H
#include <linux/kdev_t.h>
extern int scsicam_bios_param (Disk *disk, kdev_t dev, int *ip);
-extern int scsi_partsize(struct buffer_head *bh, unsigned long capacity,
+extern int scsi_partsize(unsigned char *buf, unsigned long capacity,
unsigned int *cyls, unsigned int *hds, unsigned int *secs);
+extern unsigned char *scsi_bios_ptable(kdev_t dev);
#endif /* def SCSICAM_H */
goto out;
}
-void acct_auto_close(kdev_t dev)
+void acct_auto_close(struct super_block *sb)
{
lock_kernel();
- if (acct_file && acct_file->f_dentry->d_inode->i_dev == dev)
+ if (acct_file && acct_file->f_dentry->d_inode->i_sb == sb)
sys_acct(NULL);
unlock_kernel();
}
* timeslices, because any timeslice recovered here
* was given away by the parent in the first place.)
*/
- current->counter += p->counter;
- if (current->counter >= MAX_COUNTER)
- current->counter = MAX_COUNTER;
+ current->time_slice += p->time_slice;
+ if (current->time_slice > MAX_TSLICE)
+ current->time_slice = MAX_TSLICE;
p->pid = 0;
free_task_struct(p);
} else {
* more scheduling fairness. This is only important in the first
* timeslice, on the long run the scheduling behaviour is unchanged.
*/
- p->counter = (current->counter + 1) >> 1;
- current->counter >>= 1;
- if (!current->counter)
+ p->time_slice = (current->time_slice + 1) >> 1;
+ current->time_slice >>= 1;
+ if (!current->time_slice)
current->need_resched = 1;
/*
EXPORT_SYMBOL(ioctl_by_bdev);
EXPORT_SYMBOL(grok_partitions);
EXPORT_SYMBOL(register_disk);
+EXPORT_SYMBOL(read_dev_sector);
EXPORT_SYMBOL(tq_disk);
EXPORT_SYMBOL(init_buffer);
EXPORT_SYMBOL(refile_buffer);
static LIST_HEAD(runqueue_head);
+static unsigned long rcl_curr = 0;
+
/*
* We align per-CPU scheduling data on cacheline boundaries,
* to prevent cacheline ping-pong.
* Don't do any other calculations if the time slice is
* over..
*/
- weight = p->counter;
- if (!weight)
- goto out;
-
+ if (!p->time_slice)
+ return 0;
+
+ weight = p->dyn_prio + 1;
+
#ifdef CONFIG_SMP
/* Give a largish advantage to the same processor... */
/* (this is equivalent to penalizing other processors) */
*/
static inline void add_to_runqueue(struct task_struct * p)
{
+ p->dyn_prio += rcl_curr - p->rcl_last;
+ p->rcl_last = rcl_curr;
+ if (p->dyn_prio > MAX_DYNPRIO)
+ p->dyn_prio = MAX_DYNPRIO;
list_add(&p->run_list, &runqueue_head);
nr_running++;
}
__schedule_tail(prev);
}
+void expire_task(struct task_struct *p)
+{
+ if (!p->time_slice)
+ p->need_resched = 1;
+ else {
+ if (!--p->time_slice) {
+ if (p->dyn_prio > 0) {
+ --p->time_slice;
+ --p->dyn_prio;
+ }
+ p->need_resched = 1;
+ } else if (p->time_slice < -NICE_TO_TICKS(p->nice)) {
+ p->time_slice = 0;
+ p->need_resched = 1;
+ }
+ }
+}
+
/*
* 'schedule()' is the scheduler function. It's a very simple and nice
* scheduler: it's not perfect, but certainly works for most things.
/* move an exhausted RR process to be last.. */
if (unlikely(prev->policy == SCHED_RR))
- if (!prev->counter) {
- prev->counter = NICE_TO_TICKS(prev->nice);
+ if (!prev->time_slice) {
+ prev->time_slice = NICE_TO_TICKS(prev->nice);
move_last_runqueue(prev);
}
switch (prev->state) {
- case TASK_INTERRUPTIBLE:
- if (signal_pending(prev)) {
- prev->state = TASK_RUNNING;
- break;
- }
- default:
- del_from_runqueue(prev);
- case TASK_RUNNING:;
+ case TASK_INTERRUPTIBLE:
+ if (signal_pending(prev)) {
+ prev->state = TASK_RUNNING;
+ break;
+ }
+ default:
+ del_from_runqueue(prev);
+ case TASK_RUNNING:;
}
prev->need_resched = 0;
/* Do we need to re-calculate counters? */
if (unlikely(!c)) {
- struct task_struct *p;
-
- spin_unlock_irq(&runqueue_lock);
- read_lock(&tasklist_lock);
- for_each_task(p)
- p->counter = (p->counter >> 1) + NICE_TO_TICKS(p->nice);
- read_unlock(&tasklist_lock);
- spin_lock_irq(&runqueue_lock);
+ ++rcl_curr;
+ list_for_each(tmp, &runqueue_head) {
+ p = list_entry(tmp, struct task_struct, run_list);
+ p->time_slice = NICE_TO_TICKS(p->nice);
+ p->rcl_last = rcl_curr;
+ }
goto repeat_schedule;
}
nr_pending--;
#endif
if (nr_pending) {
+ struct task_struct *ctsk = current;
/*
* This process can only be rescheduled by us,
* so this is safe without any locking.
*/
- if (current->policy == SCHED_OTHER)
- current->policy |= SCHED_YIELD;
- current->need_resched = 1;
+ if (ctsk->policy == SCHED_OTHER)
+ ctsk->policy |= SCHED_YIELD;
+ ctsk->need_resched = 1;
- spin_lock_irq(&runqueue_lock);
- move_last_runqueue(current);
- spin_unlock_irq(&runqueue_lock);
+ ctsk->time_slice = 0;
+ ++ctsk->dyn_prio;
}
return 0;
}
if (current != &init_task && task_on_runqueue(current)) {
printk("UGH! (%d:%d) was on the runqueue, removing.\n",
- smp_processor_id(), current->pid);
+ smp_processor_id(), current->pid);
del_from_runqueue(current);
}
+ current->dyn_prio = 0;
sched_data->curr = current;
sched_data->last_schedule = get_cycles();
clear_bit(current->processor, &wait_init_idle);
update_one_process(p, user_tick, system, cpu);
if (p->pid) {
- if (--p->counter <= 0) {
- p->counter = 0;
- p->need_resched = 1;
- }
+ expire_task(p);
if (p->nice > 0)
kstat.per_cpu_nice[cpu] += user_tick;
else
static void *page_pool_alloc(int gfp_mask, void *data)
{
- return alloc_page(gfp_mask);
+ int gfp = gfp_mask | (int) data;
+
+ return alloc_page(gfp);
}
static void page_pool_free(void *page, void *data)
if (isa_page_pool)
return 0;
- isa_page_pool = mempool_create(ISA_POOL_SIZE, page_pool_alloc, page_pool_free, NULL);
+ isa_page_pool = mempool_create(ISA_POOL_SIZE, page_pool_alloc, page_pool_free, (void *) __GFP_DMA);
if (!isa_page_pool)
BUG();
int i;
__bio_for_each_segment(tovec, to, i, 0) {
- fromvec = &from->bi_io_vec[i];
+ fromvec = from->bi_io_vec + i;
/*
* not bounced
* free up bounce indirect pages used
*/
__bio_for_each_segment(bvec, bio, i, 0) {
- org_vec = &bio_orig->bi_io_vec[i];
+ org_vec = bio_orig->bi_io_vec + i;
if (bvec->bv_page == org_vec->bv_page)
continue;
if (!bio)
bio = bio_alloc(bio_gfp, (*bio_orig)->bi_vcnt);
- to = &bio->bi_io_vec[i];
+ to = bio->bi_io_vec + i;
to->bv_page = mempool_alloc(pool, gfp);
to->bv_len = from->bv_len;
* all the memory it needs. That way it should be able to
* exit() and clear out its resources quickly...
*/
- p->counter = 5 * HZ;
+ p->time_slice = 2 * MAX_TSLICE;
+ p->dyn_prio = MAX_DYNPRIO + 1;
p->flags |= PF_MEMALLOC | PF_MEMDIE;
/* This process has hardware access, be more careful. */
if (S_ISBLK(swap_inode->i_mode)) {
kdev_t dev = swap_inode->i_rdev;
struct block_device_operations *bdops;
+ devfs_handle_t de;
p->swap_device = dev;
set_blocksize(dev, PAGE_SIZE);
bd_acquire(swap_inode);
bdev = swap_inode->i_bdev;
- bdops = devfs_get_ops(devfs_get_handle_from_inode(swap_inode));
+ de = devfs_get_handle_from_inode(swap_inode);
+ bdops = devfs_get_ops(de); /* Increments module use count */
if (bdops) bdev->bd_op = bdops;
error = blkdev_get(bdev, FMODE_READ|FMODE_WRITE, 0, BDEV_SWAP);
+ devfs_put_ops(de); /* Decrement module use count now we're safe */
if (error)
goto bad_swap_2;
set_blocksize(dev, PAGE_SIZE);