S: L3R 8B2
S: Canada
-N: Andrzej M. Krzysztofowicz
-E: ankry@mif.pg.gda.pl
-D: Some 8-bit XT disk driver and /devfs hacking
-
N: Andreas S. Krebs
E: akrebs@altavista.net
D: CYPRESS CY82C693 chipset IDE, Digital's PC-Alpha 164SX boards
N: Andrzej M. Krzysztofowicz
E: ankry@mif.pg.gda.pl
-D: XT disk driver
+D: Some 8-bit XT disk driver and devfs hacking
D: Aladdin 1533/1543(C) chipset IDE
D: PIIX chipset IDE
S: ul. Matemblewska 1B/10
D: XF86_8514
D: cfdisk (curses based disk partitioning program)
+N: Heinz Mauelshagen
+E: mge@EZ-Darmstadt.Telekom.de
+D: Logical Volume Manager
+S: Bartningstr. 12
+S: 64289 Darmstadt
+S: Germany
+
N: Mike McLagan
E: mike.mclagan@linux.org
W: http://www.invlogic.com/~mmclagan
called on26.o. You must also have a high-level driver for the type
of device that you want to support.
+Logical Volume Manager (LVM) support
+CONFIG_BLK_DEV_LVM
+ This driver lets you combine several hard disks, hard disk partitions,
+ multiple devices or even loop devices (for evaluation purposes) into
+ a volume group. Imagine a volume group as a kind of virtual disk.
+ Logical volumes, which can be thought of as virtual partitions,
+  can be created in the volume group. You can resize volume groups and
+  logical volumes after creation, as your capacity needs change.
+ Logical volumes are accessed as block devices named
+ /dev/VolumeGroupName/LogicalVolumeName.
+
+  For details see /usr/src/linux/Documentation/LVM-HOWTO.
+
+ To get the newest software see <http://linux.msede.com/lvm>.
+
+Logical Volume Manager proc filesystem information
+CONFIG_LVM_PROC_FS
+  If you say Y here, you can access overall Logical Volume Manager,
+  Volume Group, and Logical/Physical Volume information in /proc/lvm.
+
+  To use this option, make sure that "proc filesystem support"
+  (CONFIG_PROC_FS) is enabled as well.
+
Multiple devices driver support
CONFIG_BLK_DEV_MD
This driver lets you combine several hard disk partitions into one
Provide NFSv3 server support (EXPERIMENTAL)
CONFIG_NFSD_V3
  If you would like to include the NFSv3 server as well as the NFSv2
- server, say Y here. File locking, via the NLMv4 protocol, is not
- supported yet. If unsure, say N.
+ server, say Y here. File locking, via the NLMv4 protocol, is also
+ supported. If unsure, say N.
OS/2 HPFS filesystem support
CONFIG_HPFS_FS
--- /dev/null
+Heinz Mauelshagen's LVM (Logical Volume Manager) howto. 02/10/1999
+
+
+Abstract:
+---------
+The LVM adds virtual disk and virtual partition functionality
+to the Linux operating system.
+
+It achieves this by adding an additional layer between the physical peripherals
+and the i/o interface in the kernel.
+
+This allows the concatenation of several disk partitions or total disks
+(so-called physical volumes or PVs) or even multiple devices
+to form a storage pool (so-called Volume Group or VG) with
+allocation units called physical extents (called PE).
+You can think of the volume group as a virtual disk.
+Please see scenario below.
+
+Some or all PEs of this VG can then be allocated to so-called Logical Volumes
+or LVs, in units called logical extents or LEs.
+Each LE is mapped to a corresponding PE.
+LEs and PEs are equal in size.
+Logical volumes are a kind of virtual partition.
+
+
+The LVs can be used through device special files, similar to the familiar
+/dev/sd[a-z]* or /dev/hd[a-z]*, named /dev/VolumeGroupName/LogicalVolumeName.
+
+But going beyond this, you are able to extend or reduce
+VGs _AND_ LVs at runtime!
+
+So...
+If, for example, the capacity of an LV gets too small and the VG containing
+this LV is full, you can add another PV to that VG and simply extend
+the LV afterwards.
+If you reduce or delete an LV you can use the freed capacity for different
+LVs in the same VG.
+
+
+The above scenario looks like this:
+
+    /------------------------------------------\
+    | /--PV 1--\   VG 1    /--PV n--\          |
+    | |-VGDA---|           |-VGDA---|          |
+    | |PE1PE2..|           |PE1PE2..|          |
+    | |        |  ......   |        |          |
+    | |        |           |        |          |
+    | |  /-----------------------\  |          |
+    | |  \-------LV 1------------/  |          |
+    | |   ..PEn|           |   ..PEn|          |
+    | \--------/           \--------/          |
+    \------------------------------------------/
+
+PV 1 could be /dev/sdc1 sized 3GB
+PV n could be /dev/sde1 sized 4GB
+VG 1 could be test_vg
+LV 1 could be /dev/test_vg/test_lv
+VGDA is the volume group descriptor area holding the LVM metadata
+PE1 up to PEn are the physical extents on each disk (partition)
+
+
+
+For installation steps see INSTALL; use insmod(1)/modprobe(1) or
+kmod/kerneld(8) to load the logical volume manager module if you did
+not build it into the kernel.
+
+
+Configuration steps for getting the above scenario:
+
+1. Set the partition system id to 0x8e on /dev/sdc1 and /dev/sde1.
+
+2. do a "pvcreate /dev/sd[ce]1"
+   For testing purposes you can use more than one partition on a disk.
+   In production you should not, because an LV striped across partitions
+   of the same disk would suffer a performance breakdown.
+
+3. do a "vgcreate test_vg /dev/sd[ce]1" to create the new VG named "test_vg"
+ which has the total capacity of both partitions.
+   vgcreate also activates the new volume group (i.e. transfers the metadata
+   into the LVM driver in the kernel) so that LVs can be created in the next step.
+
+4. do a "lvcreate -L1500 -ntest_lv test_vg" to get a 1500MB linear LV named
+   "test_lv" and its block device special "/dev/test_vg/test_lv".
+
+   Or do a "lvcreate -i2 -I4 -l100 -nanother_test_lv test_vg" to get a 100 LE
+   large logical volume with 2 stripes and stripesize 4 KB.
+
+5. For example generate a filesystem in one LV with
+ "mke2fs /dev/test_vg/test_lv" and mount it.
+
+6. extend /dev/test_vg/test_lv to 1600MB with relative size by
+ "lvextend -L+100 /dev/test_vg/test_lv"
+ or with absolute size by
+ "lvextend -L1600 /dev/test_vg/test_lv"
+
+7. reduce /dev/test_vg/test_lv to 900 logical extents with relative extents by
+ "lvreduce -l-700 /dev/test_vg/test_lv"
+ or with absolute extents by
+ "lvreduce -l900 /dev/test_vg/test_lv"
+
+8. rename a VG by deactivating it with
+ "vgchange -an test_vg" # only VGs with _no_ open LVs can be deactivated!
+ "vgrename test_vg whatever"
+ and reactivate it again by
+ "vgchange -ay whatever"
+
+9. rename an LV after closing it by
+ "lvchange -an /dev/whatever/test_lv" # only closed LVs can be deactivated
+ "lvrename /dev/whatever/test_lv /dev/whatever/whatvolume"
+ or by
+ "lvrename whatever test_lv whatvolume"
+ and reactivate it again by
+ "lvchange -ay /dev/whatever/whatvolume"
+
+10. if you have Ted Ts'o's/PowerQuest's resize2fs program, you can
+    resize the ext2 filesystems contained in logical volumes without
+    destroying the data by
+    "e2fsadm -L+100 /dev/test_vg/another_test_lv"
+
Work sponsored by SGI
- Ported to kernel 2.3.46-pre2
+===============================================================================
+Changes for patch v159
+
+Work sponsored by SGI
+
+- Fixed drivers/block/md.c
+ Thanks to Mike Galbraith <mikeg@weiden.de>
+
+- Documentation fixes
+
+- Moved device registration from <lp_init> to <lp_register>
+ Thanks to Tim Waugh <twaugh@redhat.com>
Richard Gooch <rgooch@atnf.csiro.au>
- 5-DEC-1999
+ 18-FEB-2000
This is a list of things to be done for better devfs support in the
Linux kernel. If you'd like to contribute to the devfs, please have a
- ST-RAM device (arch/m68k/atari/stram.c)
+- Raw devices
+
Be sure to turn on the following options:
CONFIG_DECNET (obviously)
- CONFIG_PROCFS (to see what's going on)
+ CONFIG_PROC_FS (to see what's going on)
CONFIG_SYSCTL (for easy configuration)
if you want to try out router support (not properly debugged yet)
L: linux-hams@vger.rutgers.edu
S: Maintained
+LOGICAL VOLUME MANAGER
+P: Heinz Mauelshagen
+M: linux-LVM@EZ-Darmstadt.Telekom.de
+L: linux-LVM@msede.com
+W: http://linux.msede.com/lvm
+S: Maintained
+
HIPPI
P: Jes Sorensen
M: Jes.Sorensen@cern.ch
extern struct hwrpb_struct *hwrpb;
extern void dump_thread(struct pt_regs *, struct user *);
extern int dump_fpu(struct pt_regs *, elf_fpregset_t *);
+extern spinlock_t kernel_flag;
/* these are C runtime functions with special calling conventions: */
extern void __divl (void);
EXPORT_SYMBOL(enable_irq);
EXPORT_SYMBOL(disable_irq);
EXPORT_SYMBOL(disable_irq_nosync);
+EXPORT_SYMBOL(probe_irq_mask);
EXPORT_SYMBOL(screen_info);
EXPORT_SYMBOL(perf_irq);
*/
#ifdef __SMP__
+EXPORT_SYMBOL(kernel_flag);
EXPORT_SYMBOL(synchronize_irq);
EXPORT_SYMBOL(flush_tlb_all);
EXPORT_SYMBOL(flush_tlb_mm);
EXPORT_SYMBOL(flush_tlb_page);
EXPORT_SYMBOL(flush_tlb_range);
+EXPORT_SYMBOL(smp_imb);
EXPORT_SYMBOL(cpu_data);
EXPORT_SYMBOL(__cpu_number_map);
+EXPORT_SYMBOL(smp_num_cpus);
EXPORT_SYMBOL(global_irq_holder);
EXPORT_SYMBOL(__global_cli);
EXPORT_SYMBOL(__global_sti);
unsigned long
probe_irq_on(void)
{
- unsigned int i;
+ int i;
unsigned long delay;
+ unsigned long val;
/* Something may have generated an irq long ago and we want to
flush such a longstanding irq before considering it as spurious. */
spin_lock_irq(&irq_controller_lock);
- for (i = NR_IRQS-1; i > 0; i--)
+ for (i = NR_IRQS-1; i >= 0; i--)
if (!irq_desc[i].action)
- irq_desc[i].handler->startup(i);
+ if(irq_desc[i].handler->startup(i))
+ irq_desc[i].status |= IRQ_PENDING;
spin_unlock_irq(&irq_controller_lock);
/* Wait for longstanding interrupts to trigger. */
if a longstanding irq happened in the previous stage, it may have
masked itself) first, enable any unassigned irqs. */
spin_lock_irq(&irq_controller_lock);
- for (i = NR_IRQS-1; i > 0; i--) {
+ for (i = NR_IRQS-1; i >= 0; i--) {
if (!irq_desc[i].action) {
irq_desc[i].status |= IRQ_AUTODETECT | IRQ_WAITING;
if(irq_desc[i].handler->startup(i))
/*
* Now filter out any obviously spurious interrupts
*/
+ val = 0;
spin_lock_irq(&irq_controller_lock);
for (i=0; i<NR_IRQS; i++) {
unsigned int status = irq_desc[i].status;
if (!(status & IRQ_WAITING)) {
irq_desc[i].status = status & ~IRQ_AUTODETECT;
irq_desc[i].handler->shutdown(i);
+ continue;
}
+
+ if (i < 64)
+ val |= 1 << i;
}
spin_unlock_irq(&irq_controller_lock);
- return 0x12345678;
+ return val;
+}
+
+/*
+ * Return a mask of triggered interrupts (this
+ * can handle only legacy ISA interrupts).
+ */
+unsigned int probe_irq_mask(unsigned long val)
+{
+ int i;
+ unsigned int mask;
+
+ mask = 0;
+ spin_lock_irq(&irq_controller_lock);
+ for (i = 0; i < 16; i++) {
+ unsigned int status = irq_desc[i].status;
+
+ if (!(status & IRQ_AUTODETECT))
+ continue;
+
+ if (!(status & IRQ_WAITING))
+ mask |= 1 << i;
+
+ irq_desc[i].status = status & ~IRQ_AUTODETECT;
+ irq_desc[i].handler->shutdown(i);
+ }
+ spin_unlock_irq(&irq_controller_lock);
+
+ return mask & val;
}
/*
*/
int
-probe_irq_off(unsigned long unused)
+probe_irq_off(unsigned long val)
{
int i, irq_found, nr_irqs;
- if (unused != 0x12345678)
- printk("Bad IRQ probe from %lx\n", (&unused)[-1]);
-
nr_irqs = 0;
irq_found = 0;
spin_lock_irq(&irq_controller_lock);
return 0;
}
+static void
+ipi_imb(void)
+{
+ imb();
+}
+
+void
+smp_imb(void)
+{
+	/* Must wait for other processors to flush their icache before continuing. */
+ if (smp_call_function(ipi_imb, NULL, 1, 1))
+ printk(KERN_CRIT "smp_imb: timed out\n");
+
+ imb();
+}
+
static void
ipi_flush_tlb_all(void *ignored)
{
* Code supporting the DP264 (EV6+TSUNAMI).
*/
+#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/mm.h>
end_clipper_irq
};
-static unsigned long cached_irq_mask = ~0UL;
+static unsigned long cached_irq_mask;
#define TSUNAMI_SET_IRQ_MASK(cpu, value) \
do { \
unsigned long value;
#ifdef CONFIG_SMP
- value = ~mask;
- do_flush_smp_irq_mask(value);
+ do_flush_smp_irq_mask(mask);
#endif
- value = ~mask | (1UL << 55) | 0xffff; /* isa irqs always enabled */
+ value = mask | (1UL << 55) | 0xffff; /* isa irqs always enabled */
do_flush_irq_mask(value);
}
static void
enable_tsunami_irq(unsigned int irq)
{
- cached_irq_mask &= ~(1UL << irq);
+ cached_irq_mask |= 1UL << irq;
dp264_flush_irq_mask(cached_irq_mask);
}
static void
disable_tsunami_irq(unsigned int irq)
{
- cached_irq_mask |= 1UL << irq;
+ cached_irq_mask &= ~(1UL << irq);
dp264_flush_irq_mask(cached_irq_mask);
}
{
unsigned long value;
+ value = mask >> 16;
#ifdef CONFIG_SMP
- value = ~mask >> 16;
do_flush_smp_irq_mask(value);
#endif
- value = (~mask >> 16) | (1UL << 55); /* master ISA enable */
+ value = value | (1UL << 55); /* master ISA enable */
do_flush_irq_mask(value);
}
static void
enable_clipper_irq(unsigned int irq)
{
- cached_irq_mask &= ~(1UL << irq);
+ cached_irq_mask |= 1UL << irq;
clipper_flush_irq_mask(cached_irq_mask);
}
static void
disable_clipper_irq(unsigned int irq)
{
- cached_irq_mask |= 1UL << irq;
+ cached_irq_mask &= ~(1UL << irq);
clipper_flush_irq_mask(cached_irq_mask);
}
continue;
if (i < 16)
continue;
+ /* only irqs between 16 and 47 are tsunami irqs */
+ if (i >= 48)
+ break;
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
irq_desc[i].handler = ops;
}
init_RTC_irq();
init_TSUNAMI_irqs(&tsunami_irq_type);
- dp264_flush_irq_mask(~0UL);
+ dp264_flush_irq_mask(0UL);
}
static void __init
init_RTC_irq();
init_TSUNAMI_irqs(&clipper_irq_type);
- clipper_flush_irq_mask(~0UL);
+ clipper_flush_irq_mask(0UL);
}
void __init paging_init(void)
{
void *zero_page, *bad_page, *bad_table;
- unsigned int zone_size[MAX_NR_ZONES];
+ unsigned long zone_size[MAX_NR_ZONES];
int i;
#ifdef CONFIG_CPU_32
ENTRY(cpu_arm7_flush_cache_entry)
ENTRY(cpu_arm6_flush_icache_area)
ENTRY(cpu_arm7_flush_icache_area)
+ENTRY(cpu_arm6_flush_icache_page)
+ENTRY(cpu_arm7_flush_icache_page)
ENTRY(cpu_arm6_cache_wback_area)
ENTRY(cpu_arm7_cache_wback_area)
ENTRY(cpu_arm6_cache_purge_area)
.word cpu_arm6_cache_purge_area
.word cpu_arm6_flush_tlb_page
.word cpu_arm6_do_idle
+ .word cpu_arm6_flush_icache_page
.size arm6_processor_functions, . - arm6_processor_functions
/*
.word cpu_arm7_cache_purge_area
.word cpu_arm7_flush_tlb_page
.word cpu_arm7_do_idle
+ .word cpu_arm7_flush_icache_page
.size arm7_processor_functions, . - arm7_processor_functions
.type cpu_arm6_info, #object
mcr p15, 0, r0, c7, c5, 0 @ flush I cache
mov pc, lr
+ .align 5
+ENTRY(cpu_sa110_flush_icache_page)
+ENTRY(cpu_sa1100_flush_icache_page)
+ mcr p15, 0, r0, c7, c5, 0 @ flush I cache
+ mov pc, lr
+
/*
* Function: sa110_data_abort ()
* Params : r0 = address of aborted instruction
.word cpu_sa110_cache_purge_area
.word cpu_sa110_flush_tlb_page
.word cpu_sa110_do_idle
+ .word cpu_sa110_flush_icache_page
.size sa110_processor_functions, . - sa110_processor_functions
.type cpu_sa110_info, #object
.word cpu_sa1100_cache_purge_area
.word cpu_sa1100_flush_tlb_page
.word cpu_sa1100_do_idle
+ .word cpu_sa1100_flush_icache_page
.size sa1100_processor_functions, . - sa1100_processor_functions
cpu_sa1100_info:
define_bool CONFIG_X86_USE_3DNOW y
fi
-if [ "$CONFIG_M686" = "y" -a "$CONFIG_PROC_FS" = "y" ]; then
+if [ "$CONFIG_PROC_FS" = "y" ]; then
tristate '/proc/driver/microcode - Intel P6 CPU microcode support' CONFIG_MICROCODE
fi
if (!(status & IRQ_WAITING)) {
irq_desc[i].status = status & ~IRQ_AUTODETECT;
irq_desc[i].handler->shutdown(i);
+ continue;
}
if (i < 32)
*
* 1.0 16 February 2000, Tigran Aivazian <tigran@sco.com>
* Initial release.
+ * 1.01 18 February 2000, Tigran Aivazian <tigran@sco.com>
+ * Added read() support + cleanups.
*/
#include <linux/init.h>
#include <asm/uaccess.h>
#include <asm/processor.h>
-#define MICROCODE_VERSION "1.0"
+#define MICROCODE_VERSION "1.01"
MODULE_DESCRIPTION("CPU (P6) microcode update driver");
MODULE_AUTHOR("Tigran Aivazian <tigran@ocston.org>");
/* VFS interface */
static int microcode_open(struct inode *, struct file *);
static int microcode_release(struct inode *, struct file *);
+static ssize_t microcode_read(struct file *, char *, size_t, loff_t *);
static ssize_t microcode_write(struct file *, const char *, size_t, loff_t *);
/* internal helpers to do the work */
-static void do_microcode_update(void);
+static int do_microcode_update(void);
static void do_update_one(void *);
/*
/* the actual array of microcode blocks, each 2048 bytes */
static struct microcode * microcode = NULL;
static unsigned int microcode_num = 0;
+static char *mc_applied = NULL; /* holds an array of applied microcode blocks */
static struct file_operations microcode_fops = {
+ read: microcode_read,
write: microcode_write,
open: microcode_open,
release: microcode_release,
static int __init microcode_init(void)
{
- /* write-only /proc/driver/microcode file, one day may become read-write.. */
- proc_microcode = create_proc_entry("microcode", S_IWUSR, proc_root_driver);
+ int size;
+
+ proc_microcode = create_proc_entry("microcode", S_IWUSR|S_IRUSR, proc_root_driver);
if (!proc_microcode) {
- printk(KERN_ERR "microcode: can't create /proc/driver/microcode entry\n");
+ printk(KERN_ERR "microcode: can't create /proc/driver/microcode\n");
return -ENOMEM;
}
proc_microcode->ops = µcode_inops;
- printk(KERN_ERR "P6 Microcode Update Driver v%s registered\n", MICROCODE_VERSION);
+ size = smp_num_cpus * sizeof(struct microcode);
+ mc_applied = kmalloc(size, GFP_KERNEL);
+ if (!mc_applied) {
+ remove_proc_entry("microcode", proc_root_driver);
+ printk(KERN_ERR "microcode: can't allocate memory to hold applied microcode\n");
+ return -ENOMEM;
+ }
+ memset(mc_applied, 0, size); /* so that reading from offsets corresponding to failed
+ update makes this obvious */
+ printk(KERN_INFO "P6 Microcode Update Driver v%s registered\n", MICROCODE_VERSION);
return 0;
}
static void __exit microcode_exit(void)
{
remove_proc_entry("microcode", proc_root_driver);
- printk(KERN_ERR "P6 Microcode Update Driver v%s unregistered\n", MICROCODE_VERSION);
+ kfree(mc_applied);
+ printk(KERN_INFO "P6 Microcode Update Driver v%s unregistered\n", MICROCODE_VERSION);
}
module_init(microcode_init);
}
-static void do_microcode_update(void)
+static int do_microcode_update(void)
{
- if (smp_call_function(do_update_one, NULL, 1, 0) != 0)
+ int err;
+
+ if (smp_call_function(do_update_one, &err, 1, 1) != 0)
panic("do_microcode_update(): timed out waiting for other CPUs\n");
- do_update_one(NULL);
+ do_update_one(&err);
+
+ return err;
}
-static void do_update_one(void *unused)
+static void do_update_one(void *arg)
{
+ int *err = (int *)arg;
struct cpuinfo_x86 * c;
unsigned int pf = 0, val[2], rev, sig;
int i, id;
id = smp_processor_id();
c = cpu_data + id;
-
+ *err = 1; /* be pessimistic */
if (c->x86_vendor != X86_VENDOR_INTEL || c->x86 < 6)
return;
" %d (current=%d)\n", id, microcode[i].rev, rev);
} else {
int sum = 0;
- struct microcode *m = µcode[i];
- unsigned int *sump = (unsigned int *)(m+1);
+ struct microcode *m, *mcslot;
+ unsigned int *sump;
+ m = µcode[i];
+ sump = (unsigned int *)(m+1);
while (--sump >= (unsigned int *)m)
sum += *sump;
if (sum != 0) {
wrmsr(0x79, (unsigned int)(m->bits), 0);
__asm__ __volatile__ ("cpuid");
rdmsr(0x8B, val[0], val[1]);
- printk(KERN_ERR "microcode: CPU%d microcode updated "
- "from revision %d to %d\n", id, rev, val[1]);
+ *err = 0;
+ mcslot = (struct microcode *)mc_applied + id;
+ memcpy(mcslot, m, sizeof(struct microcode));
+ printk(KERN_ERR "microcode: CPU%d updated from revision "
+ "%d to %d, date=%08x\n", id, rev, val[1], m->date);
}
break;
}
}
+static ssize_t microcode_read(struct file *file, char *buf, size_t len, loff_t *ppos)
+{
+ size_t fsize = smp_num_cpus * sizeof(struct microcode);
+
+ if (!proc_microcode->size || *ppos >= fsize)
+ return 0; /* EOF */
+ if (*ppos + len > fsize)
+ len = fsize - *ppos;
+ if (copy_to_user(buf, mc_applied + *ppos, len))
+ return -EFAULT;
+ *ppos += len;
+ return len;
+}
+
static ssize_t microcode_write(struct file *file, const char *buf, size_t len, loff_t *ppos)
{
+ int err;
+
if (len % sizeof(struct microcode) != 0) {
printk(KERN_ERR "microcode: can only write in N*%d bytes units\n",
sizeof(struct microcode));
return -EINVAL;
- return -EINVAL;
}
- if (!capable(CAP_SYS_RAWIO))
- return -EPERM;
lock_kernel();
microcode_num = len/sizeof(struct microcode);
microcode = vmalloc(len);
unlock_kernel();
return -EFAULT;
}
- do_microcode_update();
+ err = do_microcode_update();
+ if (err)
+ len = (size_t)err;
+ else
+ proc_microcode->size = smp_num_cpus * sizeof(struct microcode);
vfree(microcode);
unlock_kernel();
return len;
/*
* linux/arch/arm/drivers/net/ether1.c
*
- * (C) Copyright 1996,1997,1998 Russell King
+ * (C) Copyright 1996-2000 Russell King
*
* Acorn ether1 driver (82586 chip)
* for Acorn machines
* TDR now only reports failure when chip reports non-zero
* TDR time-distance.
* 1.05 RMK 31/12/1997 Removed calls to dev_tint for 2.1
+ * 1.06 RMK 10/02/2000 Updated for 2.3.43
*/
#include <linux/module.h>
#define RX_AREA_START 0x05000
#define RX_AREA_END 0x0fc00
-#define tx_done(dev) 0
+static int ether1_open(struct net_device *dev);
+static int ether1_sendpacket(struct sk_buff *skb, struct net_device *dev);
+static void ether1_interrupt(int irq, void *dev_id, struct pt_regs *regs);
+static int ether1_close(struct net_device *dev);
+static struct enet_statistics *ether1_getstats(struct net_device *dev);
+static void ether1_setmulticastlist(struct net_device *dev);
+static void ether1_timeout(struct net_device *dev);
+
/* ------------------------------------------------------------------------- */
-static char *version = "ether1 ethernet driver (c) 1995 Russell King v1.05\n";
+static char *version = "ether1 ethernet driver (c) 2000 Russell King v1.06\n";
#define BUS_16 16
#define BUS_8 8
if (net_debug && version_printed++ == 0)
printk (KERN_INFO "%s", version);
- printk (KERN_INFO "%s: ether1 found [%d, %04lx, %d]", dev->name, priv->bus_type,
- dev->base_addr, dev->irq);
-
request_region (dev->base_addr, 16, "ether1");
request_region (dev->base_addr + 0x800, 4096, "ether1(ram)");
+ printk (KERN_INFO "%s: ether1 at %lx, IRQ%d, ether address ",
+ dev->name, dev->base_addr, dev->irq);
+
for (i = 0; i < 6; i++)
printk (i==0?" %02x":i==5?":%02x\n":":%02x", dev->dev_addr[i]);
return 1;
}
- dev->open = ether1_open;
- dev->stop = ether1_close;
+ dev->open = ether1_open;
+ dev->stop = ether1_close;
dev->hard_start_xmit = ether1_sendpacket;
- dev->get_stats = ether1_getstats;
+ dev->get_stats = ether1_getstats;
dev->set_multicast_list = ether1_setmulticastlist;
+ dev->tx_timeout = ether1_timeout;
+ dev->watchdog_timeo = 5 * HZ / 100;
/* Fill in the fields of the device structure with ethernet values */
ether_setup (dev);
return start;
}
-static void
-ether1_restart (struct net_device *dev, char *reason)
-{
- struct ether1_priv *priv = (struct ether1_priv *)dev->priv;
- priv->stats.tx_errors ++;
-
- if (reason)
- printk (KERN_WARNING "%s: %s - resetting device\n", dev->name, reason);
- else
- printk (" - resetting device\n");
-
- ether1_reset (dev);
-
- dev->start = 0;
- dev->tbusy = 0;
-
- if (ether1_init_for_open (dev))
- printk (KERN_ERR "%s: unable to restart interface\n", dev->name);
-
- dev->start = 1;
-}
-
static int
ether1_open (struct net_device *dev)
{
struct ether1_priv *priv = (struct ether1_priv *)dev->priv;
- if (request_irq (dev->irq, ether1_interrupt, 0, "ether1", dev))
- return -EAGAIN;
-
MOD_INC_USE_COUNT;
+ if (request_irq(dev->irq, ether1_interrupt, 0, "ether1", dev)) {
+ MOD_DEC_USE_COUNT;
+ return -EAGAIN;
+ }
+
memset (&priv->stats, 0, sizeof (struct enet_statistics));
if (ether1_init_for_open (dev)) {
return -EAGAIN;
}
- dev->tbusy = 0;
- dev->interrupt = 0;
- dev->start = 1;
+ netif_start_queue(dev);
return 0;
}
-static int
-ether1_sendpacket (struct sk_buff *skb, struct net_device *dev)
+static void
+ether1_timeout(struct net_device *dev)
{
struct ether1_priv *priv = (struct ether1_priv *)dev->priv;
- if (priv->restart)
- ether1_restart (dev, NULL);
-
- if (dev->tbusy) {
- /*
- * If we get here, some higher level has decided that we are broken.
- * There should really be a "kick me" function call instead.
- */
- int tickssofar = jiffies - dev->trans_start;
+ printk(KERN_WARNING "%s: transmit timeout, network cable problem?\n",
+ dev->name);
+ printk(KERN_WARNING "%s: resetting device\n", dev->name);
- if (tickssofar < 5)
- return 1;
-
- /* Try to restart the adapter. */
- ether1_restart (dev, "transmit timeout, network cable problem?");
- dev->trans_start = jiffies;
- }
+ ether1_reset (dev);
- /*
- * Block a timer-based transmit from overlapping. This could better be
- * done with atomic_swap(1, dev->tbusy), but set_bit() works as well.
- */
- if (test_and_set_bit (0, (void *)&dev->tbusy) != 0)
- printk (KERN_WARNING "%s: transmitter access conflict.\n", dev->name);
- else {
- int len = (ETH_ZLEN < skb->len) ? skb->len : ETH_ZLEN;
- int tmp, tst, nopaddr, txaddr, tbdaddr, dataddr;
- unsigned long flags;
- tx_t tx;
- tbd_t tbd;
- nop_t nop;
-
- /*
- * insert packet followed by a nop
- */
- txaddr = ether1_txalloc (dev, TX_SIZE);
- tbdaddr = ether1_txalloc (dev, TBD_SIZE);
- dataddr = ether1_txalloc (dev, len);
- nopaddr = ether1_txalloc (dev, NOP_SIZE);
-
- tx.tx_status = 0;
- tx.tx_command = CMD_TX | CMD_INTR;
- tx.tx_link = nopaddr;
- tx.tx_tbdoffset = tbdaddr;
- tbd.tbd_opts = TBD_EOL | len;
- tbd.tbd_link = I82586_NULL;
- tbd.tbd_bufl = dataddr;
- tbd.tbd_bufh = 0;
- nop.nop_status = 0;
- nop.nop_command = CMD_NOP;
- nop.nop_link = nopaddr;
+ if (ether1_init_for_open (dev))
+ printk (KERN_ERR "%s: unable to restart interface\n", dev->name);
- save_flags_cli (flags);
- ether1_writebuffer (dev, &tx, txaddr, TX_SIZE);
- ether1_writebuffer (dev, &tbd, tbdaddr, TBD_SIZE);
- ether1_writebuffer (dev, skb->data, dataddr, len);
- ether1_writebuffer (dev, &nop, nopaddr, NOP_SIZE);
- tmp = priv->tx_link;
- priv->tx_link = nopaddr;
+ priv->stats.tx_errors++;
+ netif_wake_queue(dev);
+}
- /* now reset the previous nop pointer */
- ether1_outw (dev, txaddr, tmp, nop_t, nop_link, NORMALIRQS);
+static int
+ether1_sendpacket (struct sk_buff *skb, struct net_device *dev)
+{
+ struct ether1_priv *priv = (struct ether1_priv *)dev->priv;
+ int len = (ETH_ZLEN < skb->len) ? skb->len : ETH_ZLEN;
+ int tmp, tst, nopaddr, txaddr, tbdaddr, dataddr;
+ unsigned long flags;
+ tx_t tx;
+ tbd_t tbd;
+ nop_t nop;
- restore_flags (flags);
+ if (priv->restart) {
+ printk(KERN_WARNING "%s: resetting device\n", dev->name);
- /* handle transmit */
- dev->trans_start = jiffies;
+ ether1_reset(dev);
- /* check to see if we have room for a full sized ether frame */
- tmp = priv->tx_head;
- tst = ether1_txalloc (dev, TX_SIZE + TBD_SIZE + NOP_SIZE + ETH_FRAME_LEN);
- priv->tx_head = tmp;
- if (tst != -1)
- dev->tbusy = 0;
+ if (ether1_init_for_open(dev))
+ printk(KERN_ERR "%s: unable to restart interface\n", dev->name);
}
+
+ /*
+ * insert packet followed by a nop
+ */
+ txaddr = ether1_txalloc (dev, TX_SIZE);
+ tbdaddr = ether1_txalloc (dev, TBD_SIZE);
+ dataddr = ether1_txalloc (dev, len);
+ nopaddr = ether1_txalloc (dev, NOP_SIZE);
+
+ tx.tx_status = 0;
+ tx.tx_command = CMD_TX | CMD_INTR;
+ tx.tx_link = nopaddr;
+ tx.tx_tbdoffset = tbdaddr;
+ tbd.tbd_opts = TBD_EOL | len;
+ tbd.tbd_link = I82586_NULL;
+ tbd.tbd_bufl = dataddr;
+ tbd.tbd_bufh = 0;
+ nop.nop_status = 0;
+ nop.nop_command = CMD_NOP;
+ nop.nop_link = nopaddr;
+
+ save_flags_cli(flags);
+ ether1_writebuffer (dev, &tx, txaddr, TX_SIZE);
+ ether1_writebuffer (dev, &tbd, tbdaddr, TBD_SIZE);
+ ether1_writebuffer (dev, skb->data, dataddr, len);
+ ether1_writebuffer (dev, &nop, nopaddr, NOP_SIZE);
+ tmp = priv->tx_link;
+ priv->tx_link = nopaddr;
+
+ /* now reset the previous nop pointer */
+ ether1_outw (dev, txaddr, tmp, nop_t, nop_link, NORMALIRQS);
+
+ restore_flags(flags);
+
+ /* handle transmit */
+ dev->trans_start = jiffies;
+
+ /* check to see if we have room for a full sized ether frame */
+ tmp = priv->tx_head;
+ tst = ether1_txalloc (dev, TX_SIZE + TBD_SIZE + NOP_SIZE + ETH_FRAME_LEN);
+ priv->tx_head = tmp;
dev_kfree_skb (skb);
+ if (tst == -1)
+ netif_stop_queue(dev);
+
return 0;
}
tst = ether1_txalloc (dev, TX_SIZE + TBD_SIZE + NOP_SIZE + ETH_FRAME_LEN);
priv->tx_head = caddr;
if (tst != -1)
- dev->tbusy = 0;
-
- mark_bh (NET_BH);
+ netif_wake_queue(dev);
}
static void
struct ether1_priv *priv = (struct ether1_priv *)dev->priv;
int status;
- dev->interrupt = 1;
-
status = ether1_inw (dev, SCB_ADDR, scb_t, scb_status, NORMALIRQS);
if (status) {
}
} else
outb (CTRL_ACK, REG_CONTROL);
-
- dev->interrupt = 0;
}
static int
free_irq(dev->irq, dev);
- dev->start = 0;
- dev->tbusy = 0;
-
MOD_DEC_USE_COUNT;
return 0;
static struct ether_dev {
struct expansion_card *ec;
char name[9];
- struct net_device dev;
+ struct net_device dev;
} ether_devs[MAX_ECARDS];
int
unsigned char restart : 1;
};
-static int ether1_open (struct net_device *dev);
-static int ether1_sendpacket (struct sk_buff *skb, struct net_device *dev);
-static void ether1_interrupt (int irq, void *dev_id, struct pt_regs *regs);
-static int ether1_close (struct net_device *dev);
-static struct enet_statistics *ether1_getstats (struct net_device *dev);
-static void ether1_setmulticastlist (struct net_device *dev);
-
#define I82586_NULL (-1)
typedef struct { /* tdr */
* 1.14 RMK 07/01/1998 Added initial code for ETHERB addressing.
* 1.15 RMK 30/04/1999 More fixes to the transmit routine for buggy
* hardware.
+ * 1.16 RMK 10/02/2000 Updated for 2.3.43
*/
-static char *version = "ether3 ethernet driver (c) 1995-1999 R.M.King v1.15\n";
+static char *version = "ether3 ethernet driver (c) 1995-2000 R.M.King v1.16\n";
#include <linux/module.h>
#include <linux/kernel.h>
{ 0xffff, 0xffff }
};
-static void ether3_setmulticastlist(struct net_device *dev);
-static int ether3_rx(struct net_device *dev, struct dev_priv *priv, unsigned int maxcnt);
-static void ether3_tx(struct net_device *dev, struct dev_priv *priv);
+static void ether3_setmulticastlist(struct net_device *dev);
+static int ether3_rx(struct net_device *dev, struct dev_priv *priv, unsigned int maxcnt);
+static void ether3_tx(struct net_device *dev, struct dev_priv *priv);
+static int ether3_probe1 (struct net_device *dev);
+static int ether3_open (struct net_device *dev);
+static int ether3_sendpacket (struct sk_buff *skb, struct net_device *dev);
+static void ether3_interrupt (int irq, void *dev_id, struct pt_regs *regs);
+static int ether3_close (struct net_device *dev);
+static struct enet_statistics *ether3_getstats (struct net_device *dev);
+static void ether3_setmulticastlist (struct net_device *dev);
+static void ether3_timeout(struct net_device *dev);
#define BUS_16 2
#define BUS_8 1
static unsigned version_printed = 0;
struct dev_priv *priv;
unsigned int i, bus_type, error = ENODEV;
+ const char *name = "ether3";
if (net_debug && version_printed++ == 0)
printk(version);
priv = (struct dev_priv *) dev->priv;
memset(priv, 0, sizeof(struct dev_priv));
- request_region(dev->base_addr, 128, "ether3");
+ request_region(dev->base_addr, 128, name);
/* Reset card...
*/
switch (bus_type) {
case BUS_UNKNOWN:
- printk(KERN_ERR "%s: unable to identify podule bus width\n", dev->name);
+ printk(KERN_ERR "%s: unable to identify bus width\n", dev->name);
goto failed;
case BUS_8:
- printk(KERN_ERR "%s: ether3 found, but is an unsupported 8-bit card\n", dev->name);
+ printk(KERN_ERR "%s: %s found, but is an unsupported "
+ "8-bit card\n", dev->name, name);
goto failed;
default:
break;
}
- printk("%s: ether3 found at %lx, IRQ%d, ether address ", dev->name, dev->base_addr, dev->irq);
+ printk("%s: %s at %lx, IRQ%d, ether address ",
+ dev->name, name, dev->base_addr, dev->irq);
for (i = 0; i < 6; i++)
printk(i == 5 ? "%2.2x\n" : "%2.2x:", dev->dev_addr[i]);
- if (!ether3_init_2(dev)) {
- dev->open = ether3_open;
- dev->stop = ether3_close;
- dev->hard_start_xmit = ether3_sendpacket;
- dev->get_stats = ether3_getstats;
- dev->set_multicast_list = ether3_setmulticastlist;
+ if (ether3_init_2(dev))
+ goto failed;
- /* Fill in the fields of the device structure with ethernet values. */
- ether_setup(dev);
+ dev->open = ether3_open;
+ dev->stop = ether3_close;
+ dev->hard_start_xmit = ether3_sendpacket;
+ dev->get_stats = ether3_getstats;
+ dev->set_multicast_list = ether3_setmulticastlist;
+ dev->tx_timeout = ether3_timeout;
+ dev->watchdog_timeo = 5 * HZ / 100;
- return 0;
- }
+ /* Fill in the fields of the device structure with ethernet values. */
+ ether_setup(dev);
+
+ return 0;
failed:
kfree(dev->priv);
return -EAGAIN;
}
- dev->tbusy = 0;
- dev->interrupt = 0;
- dev->start = 1;
-
ether3_init_for_open(dev);
+ netif_start_queue(dev);
+
return 0;
}
{
struct dev_priv *priv = (struct dev_priv *)dev->priv;
- dev->tbusy = 1;
- dev->start = 0;
+ netif_stop_queue(dev);
disable_irq(dev->irq);
ether3_outw(priv->regs.config1 | CFG1_LOCBUFMEM, REG_CONFIG1);
}
+static void
+ether3_timeout(struct net_device *dev)
+{
+ struct dev_priv *priv = (struct dev_priv *)dev->priv;
+ unsigned long flags;
+
+ del_timer(&priv->timer);
+
+ save_flags_cli(flags);
+ printk(KERN_ERR "%s: transmit timed out, network cable problem?\n", dev->name);
+ printk(KERN_ERR "%s: state: { status=%04X cfg1=%04X cfg2=%04X }\n", dev->name,
+ ether3_inw(REG_STATUS), ether3_inw(REG_CONFIG1), ether3_inw(REG_CONFIG2));
+ printk(KERN_ERR "%s: { rpr=%04X rea=%04X tpr=%04X }\n", dev->name,
+ ether3_inw(REG_RECVPTR), ether3_inw(REG_RECVEND), ether3_inw(REG_TRANSMITPTR));
+ printk(KERN_ERR "%s: tx head=%X tx tail=%X\n", dev->name,
+ priv->tx_head, priv->tx_tail);
+ ether3_setbuffer(dev, buffer_read, priv->tx_tail);
+ printk(KERN_ERR "%s: packet status = %08X\n", dev->name, ether3_readlong(dev));
+ restore_flags(flags);
+
+ priv->regs.config2 |= CFG2_CTRLO;
+ priv->stats.tx_errors += 1;
+ ether3_outw(priv->regs.config2, REG_CONFIG2);
+ priv->tx_head = priv->tx_tail = 0;
+
+ netif_wake_queue(dev);
+}
+
/*
* Transmit a packet
*/
ether3_sendpacket(struct sk_buff *skb, struct net_device *dev)
{
struct dev_priv *priv = (struct dev_priv *)dev->priv;
-retry:
- if (!dev->tbusy) {
- /* Block a timer-based transmit from overlapping. This could better be
- * done with atomic_swap(1, dev->tbusy), but set_bit() works as well.
- */
- if (!test_and_set_bit(0, (void *)&dev->tbusy)) {
- unsigned long flags;
- unsigned int length = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN;
- unsigned int ptr, next_ptr;
-
- length = (length + 1) & ~1;
-
- if (priv->broken) {
- dev_kfree_skb(skb);
- priv->stats.tx_dropped ++;
- dev->tbusy = 0;
- return 0;
- }
+ unsigned long flags;
+ unsigned int length = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN;
+ unsigned int ptr, next_ptr;
- next_ptr = (priv->tx_head + 1) & 15;
+ length = (length + 1) & ~1;
- save_flags_cli(flags);
+ if (priv->broken) {
+ dev_kfree_skb(skb);
+ priv->stats.tx_dropped ++;
+ netif_start_queue(dev);
+ return 0;
+ }
- if (priv->tx_tail == next_ptr) {
- restore_flags(flags);
- return 1; /* unable to queue */
- }
+ next_ptr = (priv->tx_head + 1) & 15;
- dev->trans_start = jiffies;
- ptr = 0x600 * priv->tx_head;
- priv->tx_head = next_ptr;
- next_ptr *= 0x600;
+ save_flags_cli(flags);
-#define TXHDR_FLAGS (TXHDR_TRANSMIT|TXHDR_CHAINCONTINUE|TXHDR_DATAFOLLOWS|TXHDR_ENSUCCESS)
+ if (priv->tx_tail == next_ptr) {
+ restore_flags(flags);
+ return 1; /* unable to queue */
+ }
- ether3_setbuffer(dev, buffer_write, next_ptr);
- ether3_writelong(dev, 0);
- ether3_setbuffer(dev, buffer_write, ptr);
- ether3_writelong(dev, 0);
- ether3_writebuffer(dev, skb->data, length);
- ether3_writeword(dev, htons(next_ptr));
- ether3_writeword(dev, TXHDR_CHAINCONTINUE >> 16);
- ether3_setbuffer(dev, buffer_write, ptr);
- ether3_writeword(dev, htons((ptr + length + 4)));
- ether3_writeword(dev, TXHDR_FLAGS >> 16);
- ether3_ledon(dev, priv);
-
- if (!(ether3_inw(REG_STATUS) & STAT_TXON)) {
- ether3_outw(ptr, REG_TRANSMITPTR);
- ether3_outw(priv->regs.command | CMD_TXON, REG_COMMAND);
- }
+ dev->trans_start = jiffies;
+ ptr = 0x600 * priv->tx_head;
+ priv->tx_head = next_ptr;
+ next_ptr *= 0x600;
- next_ptr = (priv->tx_head + 1) & 15;
- if (priv->tx_tail != next_ptr)
- dev->tbusy = 0;
+#define TXHDR_FLAGS (TXHDR_TRANSMIT|TXHDR_CHAINCONTINUE|TXHDR_DATAFOLLOWS|TXHDR_ENSUCCESS)
- restore_flags(flags);
+ ether3_setbuffer(dev, buffer_write, next_ptr);
+ ether3_writelong(dev, 0);
+ ether3_setbuffer(dev, buffer_write, ptr);
+ ether3_writelong(dev, 0);
+ ether3_writebuffer(dev, skb->data, length);
+ ether3_writeword(dev, htons(next_ptr));
+ ether3_writeword(dev, TXHDR_CHAINCONTINUE >> 16);
+ ether3_setbuffer(dev, buffer_write, ptr);
+ ether3_writeword(dev, htons((ptr + length + 4)));
+ ether3_writeword(dev, TXHDR_FLAGS >> 16);
+ ether3_ledon(dev, priv);
- dev_kfree_skb(skb);
+ if (!(ether3_inw(REG_STATUS) & STAT_TXON)) {
+ ether3_outw(ptr, REG_TRANSMITPTR);
+ ether3_outw(priv->regs.command | CMD_TXON, REG_COMMAND);
+ }
- return 0;
- } else {
- printk("%s: transmitter access conflict.\n", dev->name);
- return 1;
- }
- } else {
- /* If we get here, some higher level has decided we are broken.
- * There should really be a "kick me" function call instead.
- */
- int tickssofar = jiffies - dev->trans_start;
- unsigned long flags;
+ next_ptr = (priv->tx_head + 1) & 15;
+ restore_flags(flags);
- if (tickssofar < 5)
- return 1;
- del_timer(&priv->timer);
-
- save_flags_cli(flags);
- printk(KERN_ERR "%s: transmit timed out, network cable problem?\n", dev->name);
- printk(KERN_ERR "%s: state: { status=%04X cfg1=%04X cfg2=%04X }\n", dev->name,
- ether3_inw(REG_STATUS), ether3_inw(REG_CONFIG1), ether3_inw(REG_CONFIG2));
- printk(KERN_ERR "%s: { rpr=%04X rea=%04X tpr=%04X }\n", dev->name,
- ether3_inw(REG_RECVPTR), ether3_inw(REG_RECVEND), ether3_inw(REG_TRANSMITPTR));
- printk(KERN_ERR "%s: tx head=%X tx tail=%X\n", dev->name,
- priv->tx_head, priv->tx_tail);
- ether3_setbuffer(dev, buffer_read, priv->tx_tail);
- printk(KERN_ERR "%s: packet status = %08X\n", dev->name, ether3_readlong(dev));
- restore_flags(flags);
+ dev_kfree_skb(skb);
- dev->tbusy = 0;
- priv->regs.config2 |= CFG2_CTRLO;
- priv->stats.tx_errors += 1;
- ether3_outw(priv->regs.config2, REG_CONFIG2);
- dev->trans_start = jiffies;
- priv->tx_head = priv->tx_tail = 0;
- goto retry;
- }
+ if (priv->tx_tail == next_ptr)
+ netif_stop_queue(dev);
+
+ return 0;
}
static void
priv = (struct dev_priv *)dev->priv;
- dev->interrupt = 1;
-
status = ether3_inw(REG_STATUS);
if (status & STAT_INTRX) {
ether3_tx(dev, priv);
}
- dev->interrupt = 0;
-
#if NET_DEBUG > 1
if(net_debug & DEBUG_INT)
printk("done\n");
if (priv->tx_tail != tx_tail) {
priv->tx_tail = tx_tail;
- dev->tbusy = 0;
- mark_bh(NET_BH); /* Inform upper layers. */
+ netif_wake_queue(dev);
}
}
static struct ether_dev {
struct expansion_card *ec;
char name[9];
- struct net_device dev;
+ struct net_device dev;
} ether_devs[MAX_ECARDS];
int
int broken; /* 0 = ok, 1 = something went wrong */
};
-extern int ether3_probe (struct net_device *dev);
-static int ether3_probe1 (struct net_device *dev);
-static int ether3_open (struct net_device *dev);
-static int ether3_sendpacket (struct sk_buff *skb, struct net_device *dev);
-static void ether3_interrupt (int irq, void *dev_id, struct pt_regs *regs);
-static int ether3_close (struct net_device *dev);
-static struct enet_statistics *ether3_getstats (struct net_device *dev);
-static void ether3_setmulticastlist (struct net_device *dev);
-
#endif
* RMK 1.03 Added support for EtherLan500 cards
* 23-11-1997 RMK 1.04 Added media autodetection
* 16-04-1998 RMK 1.05 Improved media autodetection
+ * 10-02-2000 RMK 1.06 Updated for 2.3.43
*
* Insmod Module Parameters
* ------------------------
MODULE_AUTHOR("Russell King");
MODULE_DESCRIPTION("i3 EtherH driver");
-static char *version = "etherh [500/600/600A] ethernet driver (c) 1998 R.M.King v1.05\n";
+static char *version = "etherh [500/600/600A] ethernet driver (c) 2000 R.M.King v1.06\n";
#define ETHERH500_DATAPORT 0x200 /* MEMC */
#define ETHERH500_NS8390 0x000 /* MEMC */
if (ei_status.dmaing) {
printk ("%s: DMAing conflict in etherh_block_input: "
- " DMAstat %d irqlock %d intr %ld\n", dev->name,
- ei_status.dmaing, ei_status.irqlock, dev->interrupt);
+ " DMAstat %d irqlock %d\n", dev->name,
+ ei_status.dmaing, ei_status.irqlock);
return;
}
if (ei_status.dmaing) {
printk ("%s: DMAing conflict in etherh_block_input: "
- " DMAstat %d irqlock %d intr %ld\n", dev->name,
- ei_status.dmaing, ei_status.irqlock, dev->interrupt);
+ " DMAstat %d irqlock %d\n", dev->name,
+ ei_status.dmaing, ei_status.irqlock);
return;
}
if (ei_status.dmaing) {
printk ("%s: DMAing conflict in etherh_get_header: "
- " DMAstat %d irqlock %d intr %ld\n", dev->name,
- ei_status.dmaing, ei_status.irqlock, dev->interrupt);
+ " DMAstat %d irqlock %d\n", dev->name,
+ ei_status.dmaing, ei_status.irqlock);
return;
}
unsigned int addr, i, reg0, tmp;
const char *dev_type;
const char *if_type;
+ const char *name = "etherh";
addr = dev->base_addr;
switch (dev->mem_end) {
case PROD_I3_ETHERLAN500:
- dev_type = "500 ";
+ dev_type = "500";
break;
case PROD_I3_ETHERLAN600:
- dev_type = "600 ";
+ dev_type = "600";
break;
case PROD_I3_ETHERLAN600A:
- dev_type = "600A ";
+ dev_type = "600A";
break;
default:
dev_type = "";
reg0 = inb (addr);
if (reg0 == 0xff) {
if (net_debug & DEBUG_INIT)
- printk ("%s: etherh error: NS8390 command register wrong\n", dev->name);
+ printk("%s: %s error: NS8390 command register wrong\n",
+ dev->name, name);
return -ENODEV;
}
inb (addr + EN0_COUNTER0);
if (inb (addr + EN0_COUNTER0) != 0) {
if (net_debug & DEBUG_INIT)
- printk ("%s: etherh error: NS8390 not found\n", dev->name);
+ printk("%s: %s error: NS8390 not found\n",
+ dev->name, name);
outb (reg0, addr);
outb (tmp, addr + 13);
return -ENODEV;
}
- if (ethdev_init (dev))
+ if (ethdev_init(dev))
return -ENOMEM;
- request_region (addr, 16, "etherh");
+ request_region(addr, 16, name);
- printk("%s: etherh %sfound at %lx, IRQ%d, ether address ",
- dev->name, dev_type, dev->base_addr, dev->irq);
+ printk("%s: %s %s at %lx, IRQ%d, ether address ",
+ dev->name, name, dev_type, dev->base_addr, dev->irq);
for (i = 0; i < 6; i++)
printk (i == 5 ? "%2.2x " : "%2.2x:", dev->dev_addr[i]);
- ei_status.name = "etherh";
- ei_status.word16 = 1;
- ei_status.tx_start_page = ETHERH_TX_START_PAGE;
- ei_status.rx_start_page = ei_status.tx_start_page + TX_PAGES;
- ei_status.stop_page = ETHERH_STOP_PAGE;
- ei_status.reset_8390 = etherh_reset;
- ei_status.block_input = etherh_block_input;
- ei_status.block_output = etherh_block_output;
- ei_status.get_8390_hdr = etherh_get_header;
- dev->open = etherh_open;
- dev->stop = etherh_close;
+ ei_status.name = name;
+ ei_status.word16 = 1;
+ ei_status.tx_start_page = ETHERH_TX_START_PAGE;
+ ei_status.rx_start_page = ei_status.tx_start_page + TX_PAGES;
+ ei_status.stop_page = ETHERH_STOP_PAGE;
+ ei_status.reset_8390 = etherh_reset;
+ ei_status.block_input = etherh_block_input;
+ ei_status.block_output = etherh_block_output;
+ ei_status.get_8390_hdr = etherh_get_header;
+ dev->open = etherh_open;
+ dev->stop = etherh_close;
/* select 10bT */
ei_status.interface_num = 0;
my_ethers[i] = dev;
if (register_netdev(dev) != 0) {
- printk (KERN_WARNING "No etherh card found at %08lX\n", dev->base_addr);
+ printk(KERN_ERR "No etherh card found at %08lX\n",
+ dev->base_addr);
if (ec[i]) {
ecard_release(ec[i]);
ec[i] = NULL;
endif
endif
+ifeq ($(CONFIG_BLK_DEV_LVM),y)
+L_OBJS += lvm.o lvm-snap.o
+else
+ ifeq ($(CONFIG_BLK_DEV_LVM),m)
+ M_OBJS += lvm-mod.o
+ endif
+endif
+
ifeq ($(CONFIG_BLK_DEV_MD),y)
LX_OBJS += md.o
endif
ifeq ($(CONFIG_MD_RAID5),y)
-LX_OBJS += xor.o
-CFLAGS_xor.o := $(PROFILING) -fomit-frame-pointer
L_OBJS += raid5.o
else
ifeq ($(CONFIG_MD_RAID5),m)
- LX_OBJS += xor.o
- CFLAGS_xor.o := $(PROFILING) -fomit-frame-pointer
M_OBJS += raid5.o
endif
endif
ide-probe-mod.o: ide-probe.o ide-geometry.o
$(LD) $(LD_RFLAG) -r -o $@ ide-probe.o ide-geometry.o
+
+lvm-mod.o: lvm.o lvm-snap.o
+ $(LD) -r -o $@ lvm.o lvm-snap.o
--- /dev/null
+
+This is the Logical Volume Manager driver for Linux.
+
+The tools and library that manage logical volumes can be found
+at <http://linux.msede.com/lvm>.
+
+There you can also obtain the latest driver versions.
+
#include <asm/ecard.h>
#include <asm/io.h>
+extern char *ide_xfer_verbose (byte xfer_rate);
+
/*
* Maximum number of interfaces per card
*/
ide_ioreg_t ide_control_reg = hwif->io_ports[IDE_CONTROL_OFFSET];
ide_ioreg_t region_low = hwif->io_ports[IDE_DATA_OFFSET];
ide_ioreg_t region_high = region_low;
- ide_ioreg_t region_request = 8;
+ unsigned int region_request = 8;
int i;
if (hwif->noprobe)
--- /dev/null
+/*
+ * kernel/lvm-snap.c
+ *
+ * Copyright (C) 2000 Andrea Arcangeli <andrea@suse.de> SuSE
+ *
+ * LVM snapshot driver is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * LVM driver is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with GNU CC; see the file COPYING. If not, write to
+ * the Free Software Foundation, 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/vmalloc.h>
+#include <linux/blkdev.h>
+#include <linux/smp_lock.h>
+#include <linux/types.h>
+#include <linux/iobuf.h>
+#include <linux/lvm.h>
+
+
+static char *lvm_snap_version = "LVM 0.8final (15/02/2000)\n";
+
+extern const char *const lvm_name;
+extern int lvm_blocksizes[];
+
+void lvm_snapshot_release(lv_t *);
+
+#define hashfn(dev,block,mask,chunk_size) \
+ ((HASHDEV(dev)^((block)/(chunk_size))) & (mask))
+
+static inline lv_block_exception_t *
+lvm_find_exception_table(kdev_t org_dev, unsigned long org_start, lv_t * lv)
+{
+ struct list_head * hash_table = lv->lv_snapshot_hash_table, * next;
+ unsigned long mask = lv->lv_snapshot_hash_mask;
+ int chunk_size = lv->lv_chunk_size;
+ lv_block_exception_t * ret;
+ int i = 0;
+
+ hash_table = &hash_table[hashfn(org_dev, org_start, mask, chunk_size)];
+ ret = NULL;
+ for (next = hash_table->next; next != hash_table; next = next->next)
+ {
+ lv_block_exception_t * exception;
+
+ exception = list_entry(next, lv_block_exception_t, hash);
+ if (exception->rsector_org == org_start &&
+ exception->rdev_org == org_dev)
+ {
+ if (i)
+ {
+ /* move the entry to the front of its hash chain (MRU) */
+ list_del(next);
+ list_add(next, hash_table);
+ }
+ ret = exception;
+ break;
+ }
+ i++;
+ }
+ return ret;
+}
+
+static inline void lvm_hash_link(lv_block_exception_t * exception,
+ kdev_t org_dev, unsigned long org_start,
+ lv_t * lv)
+{
+ struct list_head * hash_table = lv->lv_snapshot_hash_table;
+ unsigned long mask = lv->lv_snapshot_hash_mask;
+ int chunk_size = lv->lv_chunk_size;
+
+ hash_table = &hash_table[hashfn(org_dev, org_start, mask, chunk_size)];
+ list_add(&exception->hash, hash_table);
+}
+
+int lvm_snapshot_remap_block(kdev_t * org_dev, unsigned long * org_sector,
+ unsigned long pe_start, lv_t * lv)
+{
+ int ret;
+ unsigned long pe_off, pe_adjustment, __org_start;
+ kdev_t __org_dev;
+ int chunk_size = lv->lv_chunk_size;
+ lv_block_exception_t * exception;
+
+ pe_off = pe_start % chunk_size;
+ pe_adjustment = (*org_sector-pe_off) % chunk_size;
+ __org_start = *org_sector - pe_adjustment;
+ __org_dev = *org_dev;
+
+ ret = 0;
+ exception = lvm_find_exception_table(__org_dev, __org_start, lv);
+ if (exception)
+ {
+ *org_dev = exception->rdev_new;
+ *org_sector = exception->rsector_new + pe_adjustment;
+ ret = 1;
+ }
+ return ret;
+}
+
+static void lvm_drop_snapshot(lv_t * lv_snap, const char * reason)
+{
+ kdev_t last_dev;
+ int i;
+
+ /* no exception storage space available for this snapshot
+ or error on this snapshot --> release it */
+ invalidate_buffers(lv_snap->lv_dev);
+
+ for (i = last_dev = 0; i < lv_snap->lv_remap_ptr; i++) {
+ if ( lv_snap->lv_block_exception[i].rdev_new != last_dev) {
+ last_dev = lv_snap->lv_block_exception[i].rdev_new;
+ invalidate_buffers(last_dev);
+ }
+ }
+
+ lvm_snapshot_release(lv_snap);
+
+ printk(KERN_INFO
+ "%s -- giving up to snapshot %s on %s due %s\n",
+ lvm_name, lv_snap->lv_snapshot_org->lv_name, lv_snap->lv_name,
+ reason);
+}
+
+static inline void lvm_snapshot_prepare_blocks(unsigned long * blocks,
+ unsigned long start,
+ int nr_sectors,
+ int blocksize)
+{
+ int i, sectors_per_block, nr_blocks;
+
+ sectors_per_block = blocksize >> 9;
+ nr_blocks = nr_sectors / sectors_per_block;
+ start /= sectors_per_block;
+
+ for (i = 0; i < nr_blocks; i++)
+ blocks[i] = start++;
+}
+
+static inline int get_blksize(kdev_t dev)
+{
+ int correct_size = BLOCK_SIZE, i, major;
+
+ major = MAJOR(dev);
+ if (blksize_size[major])
+ {
+ i = blksize_size[major][MINOR(dev)];
+ if (i)
+ correct_size = i;
+ }
+ return correct_size;
+}
+
+#ifdef DEBUG_SNAPSHOT
+static inline void invalidate_snap_cache(unsigned long start, unsigned long nr,
+ kdev_t dev)
+{
+ struct buffer_head * bh;
+ int sectors_per_block, i, blksize, minor;
+
+ minor = MINOR(dev);
+ blksize = lvm_blocksizes[minor];
+ sectors_per_block = blksize >> 9;
+ nr /= sectors_per_block;
+ start /= sectors_per_block;
+
+ for (i = 0; i < nr; i++)
+ {
+ bh = get_hash_table(dev, start++, blksize);
+ if (bh)
+ bforget(bh);
+ }
+}
+#endif
+
+/*
+ * copy on write handler for one snapshot logical volume
+ *
+ * read the original blocks and store them on the snapshot volume.
+ * if no exception storage space is free any longer --> release the snapshot.
+ *
+ * this routine gets called for each _first_ write to a physical chunk.
+ */
+int lvm_snapshot_COW(kdev_t org_phys_dev,
+ unsigned long org_phys_sector,
+ unsigned long org_pe_start,
+ unsigned long org_virt_sector,
+ lv_t * lv_snap)
+{
+ const char * reason;
+ unsigned long org_start, snap_start, snap_phys_dev, virt_start, pe_off;
+ int idx = lv_snap->lv_remap_ptr, chunk_size = lv_snap->lv_chunk_size;
+ struct kiobuf * iobuf;
+ unsigned long blocks[KIO_MAX_SECTORS];
+ int blksize_snap, blksize_org, min_blksize, max_blksize;
+ int max_sectors, nr_sectors;
+
+ /* check if we are out of snapshot space */
+ if (idx >= lv_snap->lv_remap_end)
+ goto fail_out_of_space;
+
+ /* calculate physical boundaries of source chunk */
+ pe_off = org_pe_start % chunk_size;
+ org_start = org_phys_sector - ((org_phys_sector-pe_off) % chunk_size);
+ virt_start = org_virt_sector - (org_phys_sector - org_start);
+
+ /* calculate physical boundaries of destination chunk */
+ snap_phys_dev = lv_snap->lv_block_exception[idx].rdev_new;
+ snap_start = lv_snap->lv_block_exception[idx].rsector_new;
+
+#ifdef DEBUG_SNAPSHOT
+ printk(KERN_INFO
+ "%s -- COW: "
+ "org %02d:%02d faulting %lu start %lu, "
+ "snap %02d:%02d start %lu, "
+ "size %d, pe_start %lu pe_off %lu, virt_sec %lu\n",
+ lvm_name,
+ MAJOR(org_phys_dev), MINOR(org_phys_dev), org_phys_sector,
+ org_start,
+ MAJOR(snap_phys_dev), MINOR(snap_phys_dev), snap_start,
+ chunk_size,
+ org_pe_start, pe_off,
+ org_virt_sector);
+#endif
+
+ iobuf = lv_snap->lv_iobuf;
+
+ blksize_org = get_blksize(org_phys_dev);
+ blksize_snap = get_blksize(snap_phys_dev);
+ max_blksize = max(blksize_org, blksize_snap);
+ min_blksize = min(blksize_org, blksize_snap);
+ max_sectors = KIO_MAX_SECTORS * (min_blksize>>9);
+
+ if (chunk_size % (max_blksize>>9))
+ goto fail_blksize;
+
+ while (chunk_size)
+ {
+ nr_sectors = min(chunk_size, max_sectors);
+ chunk_size -= nr_sectors;
+
+ iobuf->length = nr_sectors << 9;
+
+ lvm_snapshot_prepare_blocks(blocks, org_start,
+ nr_sectors, blksize_org);
+ if (brw_kiovec(READ, 1, &iobuf, org_phys_dev,
+ blocks, blksize_org) != (nr_sectors<<9))
+ goto fail_raw_read;
+
+ lvm_snapshot_prepare_blocks(blocks, snap_start,
+ nr_sectors, blksize_snap);
+ if (brw_kiovec(WRITE, 1, &iobuf, snap_phys_dev,
+ blocks, blksize_snap) != (nr_sectors<<9))
+ goto fail_raw_write;
+ }
+
+#ifdef DEBUG_SNAPSHOT
+ /* invalidate the logical snapshot buffer cache */
+ invalidate_snap_cache(virt_start, lv_snap->lv_chunk_size,
+ lv_snap->lv_dev);
+#endif
+
+ /* the original chunk is now stored on the snapshot volume
+ so update the exception table */
+ lv_snap->lv_block_exception[idx].rdev_org = org_phys_dev;
+ lv_snap->lv_block_exception[idx].rsector_org = org_start;
+ lvm_hash_link(lv_snap->lv_block_exception + idx,
+ org_phys_dev, org_start, lv_snap);
+ lv_snap->lv_remap_ptr = idx + 1;
+ return 0;
+
+ /* slow path */
+ out:
+ lvm_drop_snapshot(lv_snap, reason);
+ return 1;
+
+ fail_out_of_space:
+ reason = "out of space";
+ goto out;
+ fail_raw_read:
+ reason = "read error";
+ goto out;
+ fail_raw_write:
+ reason = "write error";
+ goto out;
+ fail_blksize:
+ reason = "blocksize error";
+ goto out;
+}
+
+static int lvm_snapshot_alloc_iobuf_pages(struct kiobuf * iobuf, int sectors)
+{
+ int bytes, nr_pages, err, i;
+
+ bytes = sectors << 9;
+ nr_pages = (bytes + ~PAGE_MASK) >> PAGE_SHIFT;
+ err = expand_kiobuf(iobuf, nr_pages);
+ if (err)
+ goto out;
+
+ err = -ENOMEM;
+ iobuf->locked = 1;
+ iobuf->nr_pages = 0;
+ for (i = 0; i < nr_pages; i++)
+ {
+ struct page * page;
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,3,27)
+ page = alloc_page(GFP_KERNEL);
+ if (!page)
+ goto out;
+#else
+ {
+ unsigned long addr = __get_free_page(GFP_USER);
+ if (!addr)
+ goto out;
+ iobuf->pagelist[i] = addr;
+ page = mem_map + MAP_NR(addr);
+ }
+#endif
+
+ iobuf->maplist[i] = page;
+ /* the page is locked here only so that unmap_kiobuf()
+ can be shared with the failure path */
+#ifndef LockPage
+#define LockPage(map) set_bit(PG_locked, &(map)->flags)
+#endif
+ LockPage(page);
+ iobuf->nr_pages++;
+ }
+ iobuf->offset = 0;
+
+ err = 0;
+ out:
+ return err;
+}
+
+static int calc_max_buckets(void)
+{
+ unsigned long mem;
+
+ mem = num_physpages << PAGE_SHIFT;
+ mem /= 100;
+ mem *= 2;
+ mem /= sizeof(struct list_head);
+
+ return mem;
+}
+
+static int lvm_snapshot_alloc_hash_table(lv_t * lv)
+{
+ int err;
+ unsigned long buckets, max_buckets, size;
+ struct list_head * hash;
+
+ buckets = lv->lv_remap_end;
+ max_buckets = calc_max_buckets();
+ buckets = min(buckets, max_buckets);
+ while (buckets & (buckets-1))
+ buckets &= (buckets-1);
+
+ size = buckets * sizeof(struct list_head);
+
+ err = -ENOMEM;
+ hash = vmalloc(size);
+ lv->lv_snapshot_hash_table = hash;
+
+ if (!hash)
+ goto out;
+
+ lv->lv_snapshot_hash_mask = buckets-1;
+ while (buckets--)
+ INIT_LIST_HEAD(hash+buckets);
+ err = 0;
+ out:
+ return err;
+}
+
+int lvm_snapshot_alloc(lv_t * lv_snap)
+{
+ int err, blocksize, max_sectors;
+
+ err = alloc_kiovec(1, &lv_snap->lv_iobuf);
+ if (err)
+ goto out;
+
+ blocksize = lvm_blocksizes[MINOR(lv_snap->lv_dev)];
+ max_sectors = KIO_MAX_SECTORS << (PAGE_SHIFT-9);
+
+ err = lvm_snapshot_alloc_iobuf_pages(lv_snap->lv_iobuf, max_sectors);
+ if (err)
+ goto out_free_kiovec;
+
+ err = lvm_snapshot_alloc_hash_table(lv_snap);
+ if (err)
+ goto out_free_kiovec;
+ out:
+ return err;
+
+ out_free_kiovec:
+ unmap_kiobuf(lv_snap->lv_iobuf);
+ free_kiovec(1, &lv_snap->lv_iobuf);
+ goto out;
+}
+
+void lvm_snapshot_release(lv_t * lv)
+{
+ if (lv->lv_block_exception)
+ {
+ vfree(lv->lv_block_exception);
+ lv->lv_block_exception = NULL;
+ }
+ if (lv->lv_snapshot_hash_table)
+ {
+ vfree(lv->lv_snapshot_hash_table);
+ lv->lv_snapshot_hash_table = NULL;
+ }
+ if (lv->lv_iobuf)
+ {
+ free_kiovec(1, &lv->lv_iobuf);
+ lv->lv_iobuf = NULL;
+ }
+}
--- /dev/null
+/*
+ * kernel/lvm.c
+ *
+ * Copyright (C) 1997 - 2000 Heinz Mauelshagen, Germany
+ *
+ * February-November 1997
+ * April-May,July-August,November 1998
+ * January-March,May,July,September,October 1999
+ * January,February 2000
+ *
+ *
+ * LVM driver is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * LVM driver is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with GNU CC; see the file COPYING. If not, write to
+ * the Free Software Foundation, 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ *
+ */
+
+/*
+ * Changelog
+ *
+ * 09/11/1997 - added chr ioctls VG_STATUS_GET_COUNT
+ * and VG_STATUS_GET_NAMELIST
+ * 18/01/1998 - changed lvm_chr_open/close lock handling
+ * 30/04/1998 - changed LV_STATUS ioctl to LV_STATUS_BYNAME and
+ * - added LV_STATUS_BYINDEX ioctl
+ * - used lvm_status_byname_req_t and
+ * lvm_status_byindex_req_t vars
+ * 04/05/1998 - added multiple device support
+ * 08/05/1998 - added support to set/clear extendable flag in volume group
+ * 09/05/1998 - changed output of lvm_proc_get_info() because of
+ * support for free (e.g. longer) logical volume names
+ * 12/05/1998 - added spin_locks (thanks to Pascal van Dam
+ * <pascal@ramoth.xs4all.nl>)
+ * 25/05/1998 - fixed handling of locked PEs in lvm_map() and lvm_chr_ioctl()
+ * 26/05/1998 - reactivated verify_area by access_ok
+ * 07/06/1998 - used vmalloc/vfree instead of kmalloc/kfree to go
+ * beyond 128/256 KB max allocation limit per call
+ * - #ifdef blocked spin_lock calls to avoid compile errors
+ * with 2.0.x
+ * 11/06/1998 - another enhancement to spinlock code in lvm_chr_open()
+ * and use of LVM_VERSION_CODE instead of my own macros
+ * (thanks to Michael Marxmeier <mike@msede.com>)
+ * 07/07/1998 - added statistics in lvm_map()
+ * 08/07/1998 - saved statistics in lvm_do_lv_extend_reduce()
+ * 25/07/1998 - used __initfunc macro
+ * 02/08/1998 - changes for official char/block major numbers
+ * 07/08/1998 - avoided init_module() and cleanup_module() to be static
+ * 30/08/1998 - changed VG lv_open counter from sum of LV lv_open counters
+ * to sum of LVs open (no matter how often each is opened)
+ * 01/09/1998 - fixed lvm_gendisk.part[] index error
+ * 07/09/1998 - added copying of lv_current_pe-array
+ * in LV_STATUS_BYINDEX ioctl
+ * 17/11/1998 - added KERN_* levels to printk
+ * 13/01/1999 - fixed LV index bug in lvm_do_lv_create() which hit lvrename
+ * 07/02/1999 - fixed spinlock handling bug in case of LVM_RESET
+ * by moving spinlock code from lvm_chr_open()
+ * to lvm_chr_ioctl()
+ * - added LVM_LOCK_LVM ioctl to lvm_chr_ioctl()
+ * - allowed LVM_RESET and retrieval commands to go ahead;
+ * only other update ioctls are blocked now
+ * - fixed pv->pe to NULL for pv_status
+ * - using lv_req structure in lvm_chr_ioctl() now
+ * - fixed NULL ptr reference bug in lvm_do_lv_extend_reduce()
+ * caused by uncontiguous PV array in lvm_chr_ioctl(VG_REDUCE)
+ * 09/02/1999 - changed BLKRASET and BLKRAGET in lvm_chr_ioctl() to
+ * handle logical volume private read ahead sector
+ * - implemented LV read_ahead handling with lvm_blk_read()
+ * and lvm_blk_write()
+ * 10/02/1999 - implemented 2.[12].* support function lvm_hd_name()
+ * to be used in drivers/block/genhd.c by disk_name()
+ * 12/02/1999 - fixed index bug in lvm_blk_ioctl(), HDIO_GETGEO
+ * - enhanced gendisk insert/remove handling
+ * 16/02/1999 - changed to dynamic block minor number allocation to
+ * have as many as 99 volume groups with 256 logical volumes
+ * as the grand total; this allows having 1 volume group with
+ * up to 256 logical volumes in it
+ * 21/02/1999 - added LV open count information to proc filesystem
+ * - substituted redundant LVM_RESET code by calls
+ * to lvm_do_vg_remove()
+ * 22/02/1999 - used schedule_timeout() to be more responsive
+ * in case of lvm_do_vg_remove() with lots of logical volumes
+ * 19/03/1999 - fixed NULL pointer bug in module_init/lvm_init
+ * 17/05/1999 - used DECLARE_WAIT_QUEUE_HEAD macro (>2.3.0)
+ * - enhanced lvm_hd_name support
+ * 03/07/1999 - avoided use of KERNEL_VERSION macro based ifdefs and
+ * memcpy_tofs/memcpy_fromfs macro redefinitions
+ * 06/07/1999 - corrected reads/writes statistic counter copy in case
+ * of striped logical volume
+ * 28/07/1999 - implemented snapshot logical volumes
+ * - lvm_chr_ioctl
+ * - LV_STATUS_BYINDEX
+ * - LV_STATUS_BYNAME
+ * - lvm_do_lv_create
+ * - lvm_do_lv_remove
+ * - lvm_map
+ * - new lvm_snapshot_remap_block
+ * - new lvm_snapshot_remap_new_block
+ * 08/10/1999 - implemented support for multiple snapshots per
+ * original logical volume
+ * 12/10/1999 - support for 2.3.19
+ * 11/11/1999 - support for 2.3.28
+ * 21/11/1999 - changed lvm_map() interface to buffer_head based
+ * 19/12/1999 - support for 2.3.33
+ * 01/01/2000 - changed locking concept in lvm_map(),
+ * lvm_do_vg_create() and lvm_do_lv_remove()
+ * 15/01/2000 - fixed PV_FLUSH bug in lvm_chr_ioctl()
+ * 24/01/2000 - ported to 2.3.40 including Alan Cox's pointer changes etc.
+ * 29/01/2000 - used kmalloc/kfree again for all small structures
+ * 20/01/2000 - cleaned up lvm_chr_ioctl by moving code
+ * to separate functions
+ * - avoided "/dev/" in proc filesystem output
+ * - avoided inline string functions lvm_strlen etc.
+ * 14/02/2000 - support for 2.3.43
+ * - integrated Andrea Arcangeli's snapshot code
+ *
+ */
+
+
+static char *lvm_version = "LVM version 0.8final by Heinz Mauelshagen (15/02/2000)\n";
+static char *lvm_short_version = "version 0.8final (15/02/2000)";
+
+#define MAJOR_NR LVM_BLK_MAJOR
+#define DEVICE_OFF(device)
+
+#include <linux/config.h>
+#include <linux/version.h>
+
+#ifdef MODVERSIONS
+#undef MODULE
+#define MODULE
+#include <linux/modversions.h>
+#endif
+
+#include <linux/module.h>
+
+#include <linux/kernel.h>
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+
+#include <linux/hdreg.h>
+#include <linux/stat.h>
+#include <linux/fs.h>
+#include <linux/proc_fs.h>
+#include <linux/blkdev.h>
+#include <linux/genhd.h>
+#include <linux/locks.h>
+#include <linux/smp_lock.h>
+#include <asm/ioctl.h>
+#include <asm/segment.h>
+#include <asm/uaccess.h>
+
+#ifdef CONFIG_KERNELD
+#include <linux/kerneld.h>
+#endif
+
+#include <linux/blk.h>
+#include <linux/blkpg.h>
+
+#include <linux/errno.h>
+#include <linux/lvm.h>
+
+#define LVM_CORRECT_READ_AHEAD( a) \
+ if ( a < LVM_MIN_READ_AHEAD || \
+ a > LVM_MAX_READ_AHEAD) a = LVM_MAX_READ_AHEAD;
+
+#ifndef WRITEA
+# define WRITEA WRITE
+#endif
+
+/*
+ * External function prototypes
+ */
+#ifdef MODULE
+int init_module(void);
+void cleanup_module(void);
+#else
+extern int lvm_init(void);
+#endif
+
+static void lvm_dummy_device_request(request_queue_t *);
+#define DEVICE_REQUEST lvm_dummy_device_request
+
+static void lvm_make_request_fn(int, struct buffer_head*);
+
+static int lvm_blk_ioctl(struct inode *, struct file *, uint, ulong);
+static int lvm_blk_open(struct inode *, struct file *);
+
+static ssize_t lvm_blk_read(struct file *, char *, size_t, loff_t *);
+static ssize_t lvm_blk_write(struct file *, const char *, size_t, loff_t *);
+
+static int lvm_chr_open(struct inode *, struct file *);
+
+static int lvm_chr_close(struct inode *, struct file *);
+static int lvm_blk_close(struct inode *, struct file *);
+
+static int lvm_chr_ioctl(struct inode *, struct file *, uint, ulong);
+
+#if defined CONFIG_LVM_PROC_FS && defined CONFIG_PROC_FS
+static int lvm_proc_get_info(char *, char **, off_t, int);
+static int (*lvm_proc_get_info_ptr) (char *, char **, off_t, int) =
+&lvm_proc_get_info;
+#endif
+
+#ifdef LVM_HD_NAME
+void lvm_hd_name(char *, int);
+#endif
+/* End external function prototypes */
+
+
+/*
+ * Internal function prototypes
+ */
+static void lvm_init_vars(void);
+
+/* external snapshot calls */
+int lvm_snapshot_remap_block(kdev_t *, ulong *, ulong, lv_t *);
+int lvm_snapshot_COW(kdev_t, ulong, ulong, ulong, lv_t *);
+int lvm_snapshot_alloc(lv_t *);
+void lvm_snapshot_release(lv_t *);
+
+#ifdef LVM_HD_NAME
+extern void (*lvm_hd_name_ptr) (char *, int);
+#endif
+static int lvm_map(struct buffer_head *, int);
+static int lvm_do_lock_lvm(void);
+static int lvm_do_le_remap(vg_t *, void *);
+static int lvm_do_pe_lock_unlock(vg_t *r, void *);
+static int lvm_do_vg_create(int, void *);
+static int lvm_do_vg_extend(vg_t *, void *);
+static int lvm_do_vg_reduce(vg_t *, void *);
+static int lvm_do_vg_remove(int);
+static int lvm_do_lv_create(int, char *, lv_t *);
+static int lvm_do_lv_remove(int, char *, int);
+static int lvm_do_lv_extend_reduce(int, char *, lv_t *);
+static int lvm_do_lv_status_byname(vg_t *r, void *);
+static int lvm_do_lv_status_byindex(vg_t *, void *arg);
+static int lvm_do_pv_change(vg_t*, void*);
+static int lvm_do_pv_status(vg_t *, void *);
+static void lvm_geninit(struct gendisk *);
+#ifdef LVM_GET_INODE
+static struct inode *lvm_get_inode(int);
+void lvm_clear_inode(struct inode *);
+#endif
+/* END Internal function prototypes */
+
+
+/* volume group descriptor area pointers */
+static vg_t *vg[ABS_MAX_VG];
+static pv_t *pvp = NULL;
+static lv_t *lvp = NULL;
+static pe_t *pep = NULL;
+static pe_t *pep1 = NULL;
+
+
+/* map from block minor number to VG and LV numbers */
+typedef struct {
+ int vg_number;
+ int lv_number;
+} vg_lv_map_t;
+static vg_lv_map_t vg_lv_map[ABS_MAX_LV];
+
+
+/* Request structures (lvm_chr_ioctl()) */
+static pv_change_req_t pv_change_req;
+static pv_flush_req_t pv_flush_req;
+static pv_status_req_t pv_status_req;
+static pe_lock_req_t pe_lock_req;
+static le_remap_req_t le_remap_req;
+static lv_req_t lv_req;
+
+#ifdef LVM_TOTAL_RESET
+static int lvm_reset_spindown = 0;
+#endif
+
+static char pv_name[NAME_LEN];
+/* static char rootvg[NAME_LEN] = { 0, }; */
+static uint lv_open = 0;
+static const char *const lvm_name = LVM_NAME;
+static int lock = 0;
+static int loadtime = 0;
+static uint vg_count = 0;
+static long lvm_chr_open_count = 0;
+static ushort lvm_iop_version = LVM_DRIVER_IOP_VERSION;
+static DECLARE_WAIT_QUEUE_HEAD(lvm_snapshot_wait);
+static DECLARE_WAIT_QUEUE_HEAD(lvm_wait);
+static DECLARE_WAIT_QUEUE_HEAD(lvm_map_wait);
+
+static spinlock_t lvm_lock = SPIN_LOCK_UNLOCKED;
+static spinlock_t lvm_snapshot_lock = SPIN_LOCK_UNLOCKED;
+
+static struct file_operations lvm_chr_fops =
+{
+ open: lvm_chr_open,
+ release: lvm_chr_close,
+ ioctl: lvm_chr_ioctl,
+};
+
+static struct file_operations lvm_blk_fops =
+{
+ open: lvm_blk_open,
+ release: blkdev_close,
+ read: lvm_blk_read,
+ write: lvm_blk_write,
+ ioctl: lvm_blk_ioctl,
+ fsync: block_fsync,
+};
+
+#define BLOCK_DEVICE_OPERATIONS
+/* block device operations structure needed for 2.3.38? and above */
+static struct block_device_operations lvm_blk_dops =
+{
+ open: lvm_blk_open,
+ release: lvm_blk_close,
+ ioctl: lvm_blk_ioctl
+};
+
+/* gendisk structures */
+static struct hd_struct lvm_hd_struct[MAX_LV];
+static int lvm_blocksizes[MAX_LV] =
+{0,};
+static int lvm_size[MAX_LV] =
+{0,};
+static struct gendisk lvm_gendisk =
+{
+ MAJOR_NR, /* major # */
+ LVM_NAME, /* name of major */
+ 0, /* number of times minor is shifted
+ to get real minor */
+ 1, /* maximum partitions per device */
+ lvm_hd_struct, /* partition table */
+ lvm_size, /* device size in blocks, copied
+ to blk_size[] */
+ MAX_LV, /* number of real devices */
+ NULL, /* internal */
+ NULL, /* pointer to next gendisk struct (internal) */
+};
+
+
+#ifdef MODULE
+/*
+ * Module initialization...
+ */
+int init_module(void)
+#else
+/*
+ * Driver initialization...
+ */
+#ifdef __initfunc
+__initfunc(int lvm_init(void))
+#else
+int __init lvm_init(void)
+#endif
+#endif /* #ifdef MODULE */
+{
+ struct gendisk *gendisk_ptr = NULL;
+
+ if (register_chrdev(LVM_CHAR_MAJOR, lvm_name, &lvm_chr_fops) < 0) {
+ printk(KERN_ERR "%s -- register_chrdev failed\n", lvm_name);
+ return -EIO;
+ }
+#ifdef BLOCK_DEVICE_OPERATIONS
+ if (register_blkdev(MAJOR_NR, lvm_name, &lvm_blk_dops) < 0)
+#else
+ if (register_blkdev(MAJOR_NR, lvm_name, &lvm_blk_fops) < 0)
+#endif
+ {
+ printk("%s -- register_blkdev failed\n", lvm_name);
+ if (unregister_chrdev(LVM_CHAR_MAJOR, lvm_name) < 0)
+ printk(KERN_ERR "%s -- unregister_chrdev failed\n", lvm_name);
+ return -EIO;
+ }
+#if defined CONFIG_LVM_PROC_FS && defined CONFIG_PROC_FS
+ create_proc_info_entry(LVM_NAME, S_IFREG | S_IRUGO,
+ &proc_root, lvm_proc_get_info_ptr);
+#endif
+
+ lvm_init_vars();
+ lvm_geninit(&lvm_gendisk);
+
+ /* insert our gendisk at the corresponding major */
+ if (gendisk_head != NULL) {
+ gendisk_ptr = gendisk_head;
+ while (gendisk_ptr->next != NULL &&
+ gendisk_ptr->major > lvm_gendisk.major) {
+ gendisk_ptr = gendisk_ptr->next;
+ }
+ lvm_gendisk.next = gendisk_ptr->next;
+ gendisk_ptr->next = &lvm_gendisk;
+ } else {
+ gendisk_head = &lvm_gendisk;
+ lvm_gendisk.next = NULL;
+ }
+
+#ifdef LVM_HD_NAME
+ /* reference from drivers/block/genhd.c */
+ lvm_hd_name_ptr = lvm_hd_name;
+#endif
+
+ blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST);
+ blk_queue_make_request(BLK_DEFAULT_QUEUE(MAJOR_NR), lvm_make_request_fn);
+ /* optional read root VGDA */
+/*
+ if ( *rootvg != 0) vg_read_with_pv_and_lv ( rootvg, &vg);
+*/
+
+ printk(KERN_INFO
+ "%s%s -- "
+#ifdef MODULE
+ "Module"
+#else
+ "Driver"
+#endif
+ " successfully initialized\n",
+ lvm_version, lvm_name);
+
+ return 0;
+} /* init_module() / lvm_init() */
+
+
+#ifdef MODULE
+/*
+ * Module cleanup...
+ */
+void cleanup_module(void)
+{
+ struct gendisk *gendisk_ptr = NULL, *gendisk_ptr_prev = NULL;
+
+ if (unregister_chrdev(LVM_CHAR_MAJOR, lvm_name) < 0) {
+ printk(KERN_ERR "%s -- unregister_chrdev failed\n", lvm_name);
+ }
+ if (unregister_blkdev(MAJOR_NR, lvm_name) < 0) {
+ printk(KERN_ERR "%s -- unregister_blkdev failed\n", lvm_name);
+ }
+ blk_cleanup_queue(BLK_DEFAULT_QUEUE(MAJOR_NR));
+
+ gendisk_ptr = gendisk_ptr_prev = gendisk_head;
+ while (gendisk_ptr != NULL) {
+ if (gendisk_ptr == &lvm_gendisk)
+ break;
+ gendisk_ptr_prev = gendisk_ptr;
+ gendisk_ptr = gendisk_ptr->next;
+ }
+ /* delete our gendisk from chain */
+ if (gendisk_ptr == &lvm_gendisk)
+ gendisk_ptr_prev->next = gendisk_ptr->next;
+
+ blk_size[MAJOR_NR] = NULL;
+ blksize_size[MAJOR_NR] = NULL;
+
+#if defined CONFIG_LVM_PROC_FS && defined CONFIG_PROC_FS
+ remove_proc_entry(LVM_NAME, &proc_root);
+#endif
+
+#ifdef LVM_HD_NAME
+ /* reference from linux/drivers/block/genhd.c */
+ lvm_hd_name_ptr = NULL;
+#endif
+
+ printk(KERN_INFO "%s -- Module successfully deactivated\n", lvm_name);
+
+ return;
+} /* void cleanup_module() */
+#endif /* #ifdef MODULE */
+
+
+/*
+ * support function to initialize lvm variables
+ */
+#ifdef __initfunc
+__initfunc(void lvm_init_vars(void))
+#else
+void __init lvm_init_vars(void)
+#endif
+{
+ int v;
+
+ loadtime = CURRENT_TIME;
+
+ lvm_lock = lvm_snapshot_lock = SPIN_LOCK_UNLOCKED;
+
+ pe_lock_req.lock = UNLOCK_PE;
+ pe_lock_req.data.lv_dev = \
+ pe_lock_req.data.pv_dev = \
+ pe_lock_req.data.pv_offset = 0;
+
+ /* Initialize VG pointers */
+ for (v = 0; v < ABS_MAX_VG; v++) vg[v] = NULL;
+
+ /* Initialize LV -> VG association */
+ for (v = 0; v < ABS_MAX_LV; v++) {
+ /* index ABS_MAX_VG never used for real VG */
+ vg_lv_map[v].vg_number = ABS_MAX_VG;
+ vg_lv_map[v].lv_number = -1;
+ }
+
+ return;
+} /* lvm_init_vars() */
+
+
+/********************************************************************
+ *
+ * Character device functions
+ *
+ ********************************************************************/
+
+/*
+ * character device open routine
+ */
+static int lvm_chr_open(struct inode *inode,
+ struct file *file)
+{
+ int minor = MINOR(inode->i_rdev);
+
+#ifdef DEBUG
+ printk(KERN_DEBUG
+ "%s -- lvm_chr_open MINOR: %d VG#: %d mode: 0x%X lock: %d\n",
+ lvm_name, minor, VG_CHR(minor), file->f_mode, lock);
+#endif
+
+ /* super user validation */
+ if (!capable(CAP_SYS_ADMIN)) return -EACCES;
+
+ /* Group special file open */
+ if (VG_CHR(minor) > MAX_VG) return -ENXIO;
+
+ MOD_INC_USE_COUNT;
+
+ lvm_chr_open_count++;
+ return 0;
+} /* lvm_chr_open() */
+
+
+/*
+ * character device i/o-control routine
+ *
+ * Only one process at a time may perform a modifying ioctl;
+ * others will block until it completes.
+ *
+ */
+static int lvm_chr_ioctl(struct inode *inode, struct file *file,
+ uint command, ulong a)
+{
+ int minor = MINOR(inode->i_rdev);
+ uint extendable, l, v;
+ void *arg = (void *) a;
+ lv_t lv;
+ vg_t* vg_ptr = vg[VG_CHR(minor)];
+
+ /* otherwise cc will complain about unused variables */
+ (void) lvm_lock;
+
+
+#ifdef DEBUG_IOCTL
+ printk(KERN_DEBUG
+ "%s -- lvm_chr_ioctl: command: 0x%X MINOR: %d "
+ "VG#: %d mode: 0x%X\n",
+ lvm_name, command, minor, VG_CHR(minor), file->f_mode);
+#endif
+
+#ifdef LVM_TOTAL_RESET
+ if (lvm_reset_spindown > 0) return -EACCES;
+#endif
+
+ /* Main command switch */
+ switch (command) {
+ case LVM_LOCK_LVM:
+ /* lock the LVM */
+ return lvm_do_lock_lvm();
+
+ case LVM_GET_IOP_VERSION:
+ /* check lvm version to ensure driver/tools+lib
+ interoperability */
+ if (copy_to_user(arg, &lvm_iop_version, sizeof(ushort)) != 0)
+ return -EFAULT;
+ return 0;
+
+#ifdef LVM_TOTAL_RESET
+ case LVM_RESET:
+ /* lock reset function */
+ lvm_reset_spindown = 1;
+ for (v = 0; v < ABS_MAX_VG; v++) {
+ if (vg[v] != NULL) lvm_do_vg_remove(v);
+ }
+
+#ifdef MODULE
+ while (GET_USE_COUNT(&__this_module) < 1)
+ MOD_INC_USE_COUNT;
+ while (GET_USE_COUNT(&__this_module) > 1)
+ MOD_DEC_USE_COUNT;
+#endif /* MODULE */
+ lock = 0; /* release lock */
+ wake_up_interruptible(&lvm_wait);
+ return 0;
+#endif /* LVM_TOTAL_RESET */
+
+
+ case LE_REMAP:
+ /* remap a logical extent (after moving the physical extent) */
+ return lvm_do_le_remap(vg_ptr,arg);
+
+ case PE_LOCK_UNLOCK:
+ /* lock/unlock i/o to a physical extent to move it to another
+ physical volume (the move itself is done by the user-space pvmove) */
+ return lvm_do_pe_lock_unlock(vg_ptr,arg);
+
+ case VG_CREATE:
+ /* create a VGDA */
+ return lvm_do_vg_create(minor, arg);
+
+ case VG_REMOVE:
+ /* remove an inactive VGDA */
+ return lvm_do_vg_remove(minor);
+
+ case VG_EXTEND:
+ /* extend a volume group */
+ return lvm_do_vg_extend(vg_ptr,arg);
+
+ case VG_REDUCE:
+ /* reduce a volume group */
+ return lvm_do_vg_reduce(vg_ptr,arg);
+
+
+ case VG_SET_EXTENDABLE:
+ /* set/clear extendability flag of volume group */
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(&extendable, arg, sizeof(extendable)) != 0)
+ return -EFAULT;
+
+ if (extendable == VG_EXTENDABLE ||
+ extendable == ~VG_EXTENDABLE) {
+ if (extendable == VG_EXTENDABLE)
+ vg_ptr->vg_status |= VG_EXTENDABLE;
+ else
+ vg_ptr->vg_status &= ~VG_EXTENDABLE;
+ } else return -EINVAL;
+ return 0;
+
+
+ case VG_STATUS:
+ /* get volume group data (only the vg_t struct) */
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_to_user(arg, vg_ptr, sizeof(vg_t)) != 0)
+ return -EFAULT;
+ return 0;
+
+
+ case VG_STATUS_GET_COUNT:
+ /* get volume group count */
+ if (copy_to_user(arg, &vg_count, sizeof(vg_count)) != 0)
+ return -EFAULT;
+ return 0;
+
+
+ case VG_STATUS_GET_NAMELIST:
+ /* get volume group name list */
+ for (l = v = 0; v < ABS_MAX_VG; v++) {
+ if (vg[v] != NULL) {
+ if (copy_to_user(arg + l++ * NAME_LEN,
+ vg[v]->vg_name,
+ NAME_LEN) != 0)
+ return -EFAULT;
+ }
+ }
+ return 0;
+
+
+ case LV_CREATE:
+ case LV_REMOVE:
+ case LV_EXTEND:
+ case LV_REDUCE:
+ /* create, remove, extend or reduce a logical volume */
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(&lv_req, arg, sizeof(lv_req)) != 0)
+ return -EFAULT;
+
+ if (command != LV_REMOVE) {
+ if (copy_from_user(&lv, lv_req.lv, sizeof(lv_t)) != 0)
+ return -EFAULT;
+ }
+ switch (command) {
+ case LV_CREATE:
+ return lvm_do_lv_create(minor, lv_req.lv_name, &lv);
+
+ case LV_REMOVE:
+ return lvm_do_lv_remove(minor, lv_req.lv_name, -1);
+
+ case LV_EXTEND:
+ case LV_REDUCE:
+ return lvm_do_lv_extend_reduce(minor, lv_req.lv_name, &lv);
+ }
+
+
+ case LV_STATUS_BYNAME:
+ /* get status of a logical volume by name */
+ return lvm_do_lv_status_byname(vg_ptr,arg);
+
+ case LV_STATUS_BYINDEX:
+ /* get status of a logical volume by index */
+ return lvm_do_lv_status_byindex(vg_ptr,arg);
+
+ case PV_CHANGE:
+ /* change a physical volume */
+ return lvm_do_pv_change(vg_ptr,arg);
+
+ case PV_STATUS:
+ /* get physical volume data (pv_t structure only) */
+ return lvm_do_pv_status(vg_ptr,arg);
+
+ case PV_FLUSH:
+ /* physical volume buffer flush/invalidate */
+ if (copy_from_user(&pv_flush_req, arg,
+ sizeof(pv_flush_req)) != 0)
+ return -EFAULT;
+
+ fsync_dev(pv_flush_req.pv_dev);
+ invalidate_buffers(pv_flush_req.pv_dev);
+ return 0;
+
+ default:
+ printk(KERN_WARNING
+ "%s -- lvm_chr_ioctl: unknown command %x\n",
+ lvm_name, command);
+ return -EINVAL;
+ }
+
+ return 0;
+} /* lvm_chr_ioctl */
+
+
+/*
+ * character device close routine
+ */
+static int lvm_chr_close(struct inode *inode, struct file *file)
+{
+#ifdef DEBUG
+ int minor = MINOR(inode->i_rdev);
+ printk(KERN_DEBUG
+ "%s -- lvm_chr_close VG#: %d\n", lvm_name, VG_CHR(minor));
+#endif
+
+#ifdef LVM_TOTAL_RESET
+ if (lvm_reset_spindown > 0) {
+ lvm_reset_spindown = 0;
+ lvm_chr_open_count = 1;
+ }
+#endif
+
+ if (lvm_chr_open_count > 0) lvm_chr_open_count--;
+ if (lock == current->pid) {
+ lock = 0; /* release lock */
+ wake_up_interruptible(&lvm_wait);
+ }
+
+#ifdef MODULE
+ if (GET_USE_COUNT(&__this_module) > 0) MOD_DEC_USE_COUNT;
+#endif
+
+ return 0;
+} /* lvm_chr_close() */
+
+
+
+/********************************************************************
+ *
+ * Block device functions
+ *
+ ********************************************************************/
+
+/*
+ * block device open routine
+ */
+static int lvm_blk_open(struct inode *inode, struct file *file)
+{
+ int minor = MINOR(inode->i_rdev);
+ lv_t *lv_ptr;
+ vg_t *vg_ptr = vg[VG_BLK(minor)];
+
+#ifdef DEBUG_LVM_BLK_OPEN
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_open MINOR: %d VG#: %d LV#: %d mode: 0x%X\n",
+ lvm_name, minor, VG_BLK(minor), LV_BLK(minor), file->f_mode);
+#endif
+
+#ifdef LVM_TOTAL_RESET
+ if (lvm_reset_spindown > 0)
+ return -EPERM;
+#endif
+
+ if (vg_ptr != NULL &&
+ (vg_ptr->vg_status & VG_ACTIVE) &&
+ (lv_ptr = vg_ptr->lv[LV_BLK(minor)]) != NULL &&
+ LV_BLK(minor) >= 0 &&
+ LV_BLK(minor) < vg_ptr->lv_max) {
+
+ /* Check parallel LV spindown (LV remove) */
+ if (lv_ptr->lv_status & LV_SPINDOWN) return -EPERM;
+
+ /* Check inactive LV and open for read/write */
+ if (file->f_mode & O_RDWR) {
+ if (!(lv_ptr->lv_status & LV_ACTIVE)) return -EPERM;
+ if (!(lv_ptr->lv_access & LV_WRITE)) return -EACCES;
+ }
+
+#ifdef BLOCK_DEVICE_OPERATIONS
+ file->f_op = &lvm_blk_fops;
+#endif
+
+ /* be sure to increment VG counter */
+ if (lv_ptr->lv_open == 0) vg_ptr->lv_open++;
+ lv_ptr->lv_open++;
+
+ MOD_INC_USE_COUNT;
+
+#ifdef DEBUG_LVM_BLK_OPEN
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_open MINOR: %d VG#: %d LV#: %d size: %d\n",
+ lvm_name, minor, VG_BLK(minor), LV_BLK(minor),
+ lv_ptr->lv_size);
+#endif
+
+ return 0;
+ }
+ return -ENXIO;
+} /* lvm_blk_open() */
+
+
+/*
+ * block device read
+ */
+static ssize_t lvm_blk_read(struct file *file, char *buffer,
+ size_t size, loff_t * offset)
+{
+ int minor = MINOR(file->f_dentry->d_inode->i_rdev);
+
+ read_ahead[MAJOR(file->f_dentry->d_inode->i_rdev)] =
+ vg[VG_BLK(minor)]->lv[LV_BLK(minor)]->lv_read_ahead;
+ return block_read(file, buffer, size, offset);
+}
+
+
+/*
+ * block device write
+ */
+static ssize_t lvm_blk_write(struct file *file, const char *buffer,
+ size_t size, loff_t * offset)
+{
+ int minor = MINOR(file->f_dentry->d_inode->i_rdev);
+
+ read_ahead[MAJOR(file->f_dentry->d_inode->i_rdev)] =
+ vg[VG_BLK(minor)]->lv[LV_BLK(minor)]->lv_read_ahead;
+ return block_write(file, buffer, size, offset);
+}
+
+
+/*
+ * block device i/o-control routine
+ */
+static int lvm_blk_ioctl(struct inode *inode, struct file *file,
+ uint command, ulong a)
+{
+ int minor = MINOR(inode->i_rdev);
+ vg_t *vg_ptr = vg[VG_BLK(minor)];
+ lv_t *lv_ptr = vg_ptr->lv[LV_BLK(minor)];
+ void *arg = (void *) a;
+ struct hd_geometry *hd = (struct hd_geometry *) a;
+
+#ifdef DEBUG_IOCTL
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_ioctl MINOR: %d command: 0x%X arg: %X "
+ "VG#: %d LV#: %d\n",
+ lvm_name, minor, command, (ulong) arg,
+ VG_BLK(minor), LV_BLK(minor));
+#endif
+
+ switch (command) {
+ case BLKGETSIZE:
+ /* return device size */
+#ifdef DEBUG_IOCTL
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_ioctl -- BLKGETSIZE: %u\n",
+ lvm_name, lv_ptr->lv_size);
+#endif
+ copy_to_user((long *) arg, &lv_ptr->lv_size,
+ sizeof(lv_ptr->lv_size));
+ break;
+
+
+ case BLKFLSBUF:
+ /* flush buffer cache */
+ if (!capable(CAP_SYS_ADMIN)) return -EACCES;
+
+#ifdef DEBUG_IOCTL
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_ioctl -- BLKFLSBUF\n", lvm_name);
+#endif
+ fsync_dev(inode->i_rdev);
+ break;
+
+
+ case BLKRASET:
+ /* set read ahead for block device */
+ if (!capable(CAP_SYS_ADMIN)) return -EACCES;
+
+#ifdef DEBUG_IOCTL
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_ioctl -- BLKRASET: %d sectors for %02X:%02X\n",
+ lvm_name, (long) arg, MAJOR(inode->i_rdev), minor);
+#endif
+ if ((long) arg < LVM_MIN_READ_AHEAD ||
+ (long) arg > LVM_MAX_READ_AHEAD)
+ return -EINVAL;
+ lv_ptr->lv_read_ahead = (long) arg;
+ break;
+
+
+ case BLKRAGET:
+ /* get current read ahead setting */
+#ifdef DEBUG_IOCTL
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_ioctl -- BLKRAGET\n", lvm_name);
+#endif
+ copy_to_user((long *) arg, &lv_ptr->lv_read_ahead,
+ sizeof(lv_ptr->lv_read_ahead));
+ break;
+
+
+ case HDIO_GETGEO:
+ /* get disk geometry */
+#ifdef DEBUG_IOCTL
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_ioctl -- HDIO_GETGEO\n", lvm_name);
+#endif
+ if (hd == NULL)
+ return -EINVAL;
+ {
+ unsigned char heads = 64;
+ unsigned char sectors = 32;
+ long start = 0;
+ short cylinders = lv_ptr->lv_size / heads / sectors;
+
+ if (copy_to_user((char *) &hd->heads, &heads,
+ sizeof(heads)) != 0 ||
+ copy_to_user((char *) &hd->sectors, &sectors,
+ sizeof(sectors)) != 0 ||
+ copy_to_user((short *) &hd->cylinders,
+ &cylinders, sizeof(cylinders)) != 0 ||
+ copy_to_user((long *) &hd->start, &start,
+ sizeof(start)) != 0)
+ return -EFAULT;
+#ifdef DEBUG_IOCTL
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_ioctl -- cylinders: %d\n",
+ lvm_name, cylinders);
+#endif
+ }
+ break;
+
+
+ case LV_SET_ACCESS:
+ /* set access flags of a logical volume */
+ if (!capable(CAP_SYS_ADMIN)) return -EACCES;
+ lv_ptr->lv_access = (ulong) arg;
+ break;
+
+
+ case LV_SET_STATUS:
+ /* set status flags of a logical volume */
+ if (!capable(CAP_SYS_ADMIN)) return -EACCES;
+ if (!((ulong) arg & LV_ACTIVE) && lv_ptr->lv_open > 1)
+ return -EPERM;
+ lv_ptr->lv_status = (ulong) arg;
+ break;
+
+
+ case LV_SET_ALLOCATION:
+ /* set allocation flags of a logical volume */
+ if (!capable(CAP_SYS_ADMIN)) return -EACCES;
+ lv_ptr->lv_allocation = (ulong) arg;
+ break;
+
+
+ default:
+ printk(KERN_WARNING
+ "%s -- lvm_blk_ioctl: unknown command %d\n",
+ lvm_name, command);
+ return -EINVAL;
+ }
+
+ return 0;
+} /* lvm_blk_ioctl() */
+
+
+/*
+ * block device close routine
+ */
+static int lvm_blk_close(struct inode *inode, struct file *file)
+{
+ int minor = MINOR(inode->i_rdev);
+ vg_t *vg_ptr = vg[VG_BLK(minor)];
+ lv_t *lv_ptr = vg_ptr->lv[LV_BLK(minor)];
+
+#ifdef DEBUG
+ printk(KERN_DEBUG
+ "%s -- lvm_blk_close MINOR: %d VG#: %d LV#: %d\n",
+ lvm_name, minor, VG_BLK(minor), LV_BLK(minor));
+#endif
+
+ sync_dev(inode->i_rdev);
+ if (lv_ptr->lv_open == 1) vg_ptr->lv_open--;
+ lv_ptr->lv_open--;
+
+ MOD_DEC_USE_COUNT;
+
+ return 0;
+} /* lvm_blk_close() */
+
+
+#if defined CONFIG_LVM_PROC_FS && defined CONFIG_PROC_FS
+/*
+ * Support function for the /proc filesystem
+ */
+#define LVM_PROC_BUF ( i == 0 ? dummy_buf : &buf[sz])
+
+static int lvm_proc_get_info(char *page, char **start, off_t pos, int count)
+{
+ int c, i, l, p, v, vg_counter, pv_counter, lv_counter, lv_open_counter,
+ lv_open_total, pe_t_bytes, lv_block_exception_t_bytes, seconds;
+ static off_t sz;
+ off_t sz_last;
+ char allocation_flag, inactive_flag, rw_flag, stripes_flag;
+ char *lv_name, *pv_name;
+ static char *buf = NULL;
+ static char dummy_buf[160]; /* sized for 2 lines */
+ vg_t *vg_ptr;
+ lv_t *lv_ptr;
+ pv_t *pv_ptr;
+
+
+#ifdef DEBUG_LVM_PROC_GET_INFO
+ printk(KERN_DEBUG
+ "%s - lvm_proc_get_info CALLED pos: %lu count: %d\n",
+ lvm_name, pos, count);
+#endif
+
+ if (pos == 0 || buf == NULL) {
+ sz_last = vg_counter = pv_counter = lv_counter = lv_open_counter = \
+ lv_open_total = pe_t_bytes = lv_block_exception_t_bytes = 0;
+
+ /* search for activity */
+ for (v = 0; v < ABS_MAX_VG; v++) {
+ if ((vg_ptr = vg[v]) != NULL) {
+ vg_counter++;
+ pv_counter += vg_ptr->pv_cur;
+ lv_counter += vg_ptr->lv_cur;
+ if (vg_ptr->lv_cur > 0) {
+ for (l = 0; l < vg[v]->lv_max; l++) {
+ if ((lv_ptr = vg_ptr->lv[l]) != NULL) {
+ pe_t_bytes += lv_ptr->lv_allocated_le;
+ if (lv_ptr->lv_block_exception != NULL)
+ lv_block_exception_t_bytes += lv_ptr->lv_remap_end;
+ if (lv_ptr->lv_open > 0) {
+ lv_open_counter++;
+ lv_open_total += lv_ptr->lv_open;
+ }
+ }
+ }
+ }
+ }
+ }
+ pe_t_bytes *= sizeof(pe_t);
+ lv_block_exception_t_bytes *= sizeof(lv_block_exception_t);
+
+ if (buf != NULL) {
+#ifdef DEBUG_KFREE
+ printk(KERN_DEBUG
+ "%s -- kfree %d\n", lvm_name, __LINE__);
+#endif
+ kfree(buf);
+ buf = NULL;
+ }
+ /* two passes: the first computes the size of the buffer to
+ allocate, the second fills the allocated buffer */
+ for (i = 0; i < 2; i++) {
+ sz = 0;
+ sz += sprintf(LVM_PROC_BUF,
+ "LVM "
+#ifdef MODULE
+ "module"
+#else
+ "driver"
+#endif
+ " %s\n\n"
+ "Total: %d VG%s %d PV%s %d LV%s ",
+ lvm_short_version,
+ vg_counter, vg_counter == 1 ? "" : "s",
+ pv_counter, pv_counter == 1 ? "" : "s",
+ lv_counter, lv_counter == 1 ? "" : "s");
+ sz += sprintf(LVM_PROC_BUF,
+ "(%d LV%s open",
+ lv_open_counter,
+ lv_open_counter == 1 ? "" : "s");
+ if (lv_open_total > 0)
+ sz += sprintf(LVM_PROC_BUF,
+ " %d times)\n",
+ lv_open_total);
+ else
+ sz += sprintf(LVM_PROC_BUF, ")");
+ sz += sprintf(LVM_PROC_BUF,
+ "\nGlobal: %lu bytes malloced IOP version: %d ",
+ vg_counter * sizeof(vg_t) +
+ pv_counter * sizeof(pv_t) +
+ lv_counter * sizeof(lv_t) +
+ pe_t_bytes + lv_block_exception_t_bytes + sz_last,
+ lvm_iop_version);
+
+ seconds = CURRENT_TIME - loadtime;
+ if (seconds < 0)
+ loadtime = CURRENT_TIME + seconds;
+ if (seconds / 86400 > 0) {
+ sz += sprintf(LVM_PROC_BUF, "%d day%s ",
+ seconds / 86400,
+ seconds / 86400 == 0 ||
+ seconds / 86400 > 1 ? "s" : "");
+ }
+ sz += sprintf(LVM_PROC_BUF, "%d:%02d:%02d active\n",
+ (seconds % 86400) / 3600,
+ (seconds % 3600) / 60,
+ seconds % 60);
+
+ if (vg_counter > 0) {
+ for (v = 0; v < ABS_MAX_VG; v++) {
+ /* volume group */
+ if ((vg_ptr = vg[v]) != NULL) {
+ inactive_flag = ' ';
+ if (!(vg_ptr->vg_status & VG_ACTIVE)) inactive_flag = 'I';
+ sz += sprintf(LVM_PROC_BUF,
+ "\nVG: %c%s [%d PV, %d LV/%d open] "
+ " PE Size: %d KB\n"
+ " Usage [KB/PE]: %d /%d total "
+ "%d /%d used %d /%d free",
+ inactive_flag,
+ vg_ptr->vg_name,
+ vg_ptr->pv_cur,
+ vg_ptr->lv_cur,
+ vg_ptr->lv_open,
+ vg_ptr->pe_size >> 1,
+ vg_ptr->pe_size * vg_ptr->pe_total >> 1,
+ vg_ptr->pe_total,
+ vg_ptr->pe_allocated * vg_ptr->pe_size >> 1,
+ vg_ptr->pe_allocated,
+ (vg_ptr->pe_total - vg_ptr->pe_allocated) *
+ vg_ptr->pe_size >> 1,
+ vg_ptr->pe_total - vg_ptr->pe_allocated);
+
+ /* physical volumes */
+ sz += sprintf(LVM_PROC_BUF,
+ "\n PV%s ",
+ vg_ptr->pv_cur == 1 ? ": " : "s:");
+ c = 0;
+ for (p = 0; p < vg_ptr->pv_max; p++) {
+ if ((pv_ptr = vg_ptr->pv[p]) != NULL) {
+ inactive_flag = 'A';
+ if (!(pv_ptr->pv_status & PV_ACTIVE))
+ inactive_flag = 'I';
+ allocation_flag = 'A';
+ if (!(pv_ptr->pv_allocatable & PV_ALLOCATABLE))
+ allocation_flag = 'N';
+ pv_name = strchr(pv_ptr->pv_name+1,'/');
+ if ( pv_name == 0) pv_name = pv_ptr->pv_name;
+ else pv_name++;
+ sz += sprintf(LVM_PROC_BUF,
+ "[%c%c] %-21s %8d /%-6d "
+ "%8d /%-6d %8d /%-6d",
+ inactive_flag,
+ allocation_flag,
+ pv_name,
+ pv_ptr->pe_total *
+ pv_ptr->pe_size >> 1,
+ pv_ptr->pe_total,
+ pv_ptr->pe_allocated *
+ pv_ptr->pe_size >> 1,
+ pv_ptr->pe_allocated,
+ (pv_ptr->pe_total -
+ pv_ptr->pe_allocated) *
+ pv_ptr->pe_size >> 1,
+ pv_ptr->pe_total -
+ pv_ptr->pe_allocated);
+ c++;
+ if (c < vg_ptr->pv_cur)
+ sz += sprintf(LVM_PROC_BUF,
+ "\n ");
+ }
+ }
+
+ /* logical volumes */
+ sz += sprintf(LVM_PROC_BUF,
+ "\n LV%s ",
+ vg_ptr->lv_cur == 1 ? ": " : "s:");
+ c = 0;
+ for (l = 0; l < vg[v]->lv_max; l++) {
+ if ((lv_ptr = vg_ptr->lv[l]) != NULL) {
+ inactive_flag = 'A';
+ if (!(lv_ptr->lv_status & LV_ACTIVE))
+ inactive_flag = 'I';
+ rw_flag = 'R';
+ if (lv_ptr->lv_access & LV_WRITE)
+ rw_flag = 'W';
+ allocation_flag = 'D';
+ if (lv_ptr->lv_allocation & LV_CONTIGUOUS)
+ allocation_flag = 'C';
+ stripes_flag = 'L';
+ if (lv_ptr->lv_stripes > 1)
+ stripes_flag = 'S';
+ sz += sprintf(LVM_PROC_BUF,
+ "[%c%c%c%c",
+ inactive_flag,
+ rw_flag,
+ allocation_flag,
+ stripes_flag);
+ if (lv_ptr->lv_stripes > 1)
+ sz += sprintf(LVM_PROC_BUF, "%-2d",
+ lv_ptr->lv_stripes);
+ else
+ sz += sprintf(LVM_PROC_BUF, " ");
+ lv_name = strrchr(lv_ptr->lv_name, '/');
+ if ( lv_name == 0) lv_name = lv_ptr->lv_name;
+ else lv_name++;
+ sz += sprintf(LVM_PROC_BUF, "] %-25s", lv_name);
+ if (strlen(lv_name) > 25)
+ sz += sprintf(LVM_PROC_BUF,
+ "\n ");
+ sz += sprintf(LVM_PROC_BUF, "%9d /%-6d ",
+ lv_ptr->lv_size >> 1,
+ lv_ptr->lv_size / vg[v]->pe_size);
+
+ if (lv_ptr->lv_open == 0)
+ sz += sprintf(LVM_PROC_BUF, "close");
+ else
+ sz += sprintf(LVM_PROC_BUF, "%dx open",
+ lv_ptr->lv_open);
+ c++;
+ if (c < vg_ptr->lv_cur)
+ sz += sprintf(LVM_PROC_BUF,
+ "\n ");
+ }
+ }
+ if (vg_ptr->lv_cur == 0) sz += sprintf(LVM_PROC_BUF, "none");
+ sz += sprintf(LVM_PROC_BUF, "\n");
+ }
+ }
+ }
+ if (buf == NULL) {
+ if ((buf = vmalloc(sz)) == NULL) {
+ sz = 0;
+ return sprintf(page, "%s - vmalloc error at line %d\n",
+ lvm_name, __LINE__);
+ }
+ }
+ sz_last = sz;
+ }
+ }
+ if (pos > sz - 1) {
+ vfree(buf);
+ buf = NULL;
+ return 0;
+ }
+ *start = &buf[pos];
+ if (sz - pos < count)
+ return sz - pos;
+ else
+ return count;
+} /* lvm_proc_get_info() */
+#endif /* #if defined CONFIG_LVM_PROC_FS && defined CONFIG_PROC_FS */
+
+
+/*
+ * block device support function for /usr/src/linux/drivers/block/ll_rw_blk.c
+ * (see init_module/lvm_init)
+ */
+static int lvm_map(struct buffer_head *bh, int rw)
+{
+ int minor = MINOR(bh->b_dev);
+ int ret = 0;
+ ulong index;
+ ulong pe_start;
+ ulong size = bh->b_size >> 9;
+ ulong rsector_tmp = bh->b_blocknr * size;
+ ulong rsector_sav;
+ kdev_t rdev_tmp = bh->b_dev;
+ kdev_t rdev_sav;
+ lv_t *lv = vg[VG_BLK(minor)]->lv[LV_BLK(minor)];
+
+
+ if (!(lv->lv_status & LV_ACTIVE)) {
+ printk(KERN_ALERT
+ "%s - lvm_map: ll_rw_blk for inactive LV %s\n",
+ lvm_name, lv->lv_name);
+ return -1;
+ }
+/*
+ if ( lv->lv_access & LV_SNAPSHOT)
+ printk ( "%s -- %02d:%02d block: %lu rw: %d\n", lvm_name, MAJOR ( bh->b_dev), MINOR ( bh->b_dev), bh->b_blocknr, rw);
+ */
+
+ /* handle snapshot chunk writes before
+ checking for a writable logical volume */
+ if ((lv->lv_access & LV_SNAPSHOT) &&
+ MAJOR(bh->b_rdev) != 0 &&
+ MAJOR(bh->b_rdev) != MAJOR_NR &&
+ (rw == WRITEA || rw == WRITE))
+ {
+ printk("%s -- doing snapshot write for %02d:%02d[%02d:%02d] "
+ "b_blocknr: %lu b_rsector: %lu\n",
+ lvm_name, MAJOR(bh->b_dev), MINOR(bh->b_dev),
+ MAJOR(bh->b_rdev), MINOR(bh->b_rdev),
+ bh->b_blocknr, bh->b_rsector);
+ return 0;
+ }
+
+ if ((rw == WRITE || rw == WRITEA) &&
+ !(lv->lv_access & LV_WRITE)) {
+ printk(KERN_CRIT
+ "%s - lvm_map: ll_rw_blk write for readonly LV %s\n",
+ lvm_name, lv->lv_name);
+ return -1;
+ }
+#ifdef DEBUG_MAP
+ printk(KERN_DEBUG
+ "%s - lvm_map minor:%d *rdev: %02d:%02d *rsector: %lu "
+ "size:%lu\n",
+ lvm_name, minor,
+ MAJOR(rdev_tmp),
+ MINOR(rdev_tmp),
+ rsector_tmp, size);
+#endif
+
+ if (rsector_tmp + size > lv->lv_size) {
+ printk(KERN_ALERT
+ "%s - lvm_map *rsector: %lu or size: %lu wrong for"
+ " minor: %2d\n", lvm_name, rsector_tmp, size, minor);
+ return -1;
+ }
+ rsector_sav = rsector_tmp;
+ rdev_sav = rdev_tmp;
+
+lvm_second_remap:
+ /* linear mapping */
+ if (lv->lv_stripes < 2) {
+ /* get the index */
+ index = rsector_tmp / vg[VG_BLK(minor)]->pe_size;
+ pe_start = lv->lv_current_pe[index].pe;
+ rsector_tmp = lv->lv_current_pe[index].pe +
+ (rsector_tmp % vg[VG_BLK(minor)]->pe_size);
+ rdev_tmp = lv->lv_current_pe[index].dev;
+
+#ifdef DEBUG_MAP
+ printk(KERN_DEBUG
+ "lv_current_pe[%ld].pe: %ld rdev: %02d:%02d rsector:%ld\n",
+ index,
+ lv->lv_current_pe[index].pe,
+ MAJOR(rdev_tmp),
+ MINOR(rdev_tmp),
+ rsector_tmp);
+#endif
+
+ /* striped mapping */
+ } else {
+ ulong stripe_index;
+ ulong stripe_length;
+
+ stripe_length = vg[VG_BLK(minor)]->pe_size * lv->lv_stripes;
+ stripe_index = (rsector_tmp % stripe_length) / lv->lv_stripesize;
+ index = rsector_tmp / stripe_length +
+ (stripe_index % lv->lv_stripes) *
+ (lv->lv_allocated_le / lv->lv_stripes);
+ pe_start = lv->lv_current_pe[index].pe;
+ rsector_tmp = lv->lv_current_pe[index].pe +
+ (rsector_tmp % stripe_length) -
+ (stripe_index % lv->lv_stripes) * lv->lv_stripesize -
+ stripe_index / lv->lv_stripes *
+ (lv->lv_stripes - 1) * lv->lv_stripesize;
+ rdev_tmp = lv->lv_current_pe[index].dev;
+ }
+
+#ifdef DEBUG_MAP
+ printk(KERN_DEBUG
+ "lv_current_pe[%ld].pe: %ld rdev: %02d:%02d rsector:%ld\n"
+ "stripe_length: %ld stripe_index: %ld\n",
+ index,
+ lv->lv_current_pe[index].pe,
+ MAJOR(rdev_tmp),
+ MINOR(rdev_tmp),
+ rsector_tmp,
+ stripe_length,
+ stripe_index);
+#endif
+
+ /* handle physical extents on the move */
+ if (pe_lock_req.lock == LOCK_PE) {
+ if (rdev_tmp == pe_lock_req.data.pv_dev &&
+ rsector_tmp >= pe_lock_req.data.pv_offset &&
+ rsector_tmp < (pe_lock_req.data.pv_offset +
+ vg[VG_BLK(minor)]->pe_size)) {
+ sleep_on(&lvm_map_wait);
+ rsector_tmp = rsector_sav;
+ rdev_tmp = rdev_sav;
+ goto lvm_second_remap;
+ }
+ }
+ /* statistic */
+ if (rw == WRITE || rw == WRITEA)
+ lv->lv_current_pe[index].writes++;
+ else
+ lv->lv_current_pe[index].reads++;
+
+ /* snapshot volume exception handling on a physical device address basis */
+ if (lv->lv_access & (LV_SNAPSHOT | LV_SNAPSHOT_ORG)) {
+ /* original logical volume */
+ if (lv->lv_access & LV_SNAPSHOT_ORG) {
+ if (rw == WRITE || rw == WRITEA)
+ {
+ lv_t *lv_ptr;
+
+ /* start with the first snapshot and loop through all of them */
+ for (lv_ptr = lv->lv_snapshot_next;
+ lv_ptr != NULL;
+ lv_ptr = lv_ptr->lv_snapshot_next) {
+ down(&lv->lv_snapshot_org->lv_snapshot_sem);
+ /* do we still have exception storage for this snapshot free? */
+ if (lv_ptr->lv_block_exception != NULL) {
+ rdev_sav = rdev_tmp;
+ rsector_sav = rsector_tmp;
+ if (!lvm_snapshot_remap_block(&rdev_tmp,
+ &rsector_tmp,
+ pe_start,
+ lv_ptr)) {
+ /* create a new mapping */
+ ret = lvm_snapshot_COW(rdev_tmp,
+ rsector_tmp,
+ pe_start,
+ rsector_sav,
+ lv_ptr);
+ }
+ rdev_tmp = rdev_sav;
+ rsector_tmp = rsector_sav;
+ }
+ up(&lv->lv_snapshot_org->lv_snapshot_sem);
+ }
+ }
+ } else {
+ /* remap snapshot logical volume */
+ down(&lv->lv_snapshot_sem);
+ if (lv->lv_block_exception != NULL)
+ lvm_snapshot_remap_block(&rdev_tmp, &rsector_tmp, pe_start, lv);
+ up(&lv->lv_snapshot_sem);
+ }
+ }
+ bh->b_rdev = rdev_tmp;
+ bh->b_rsector = rsector_tmp;
+
+ return ret;
+} /* lvm_map() */
+
+
+/*
+ * internal support functions
+ */
+
+#ifdef LVM_HD_NAME
+/*
+ * generate "hard disk" name
+ */
+void lvm_hd_name(char *buf, int minor)
+{
+ int len = 0;
+ lv_t *lv_ptr;
+
+ if (vg[VG_BLK(minor)] == NULL ||
+ (lv_ptr = vg[VG_BLK(minor)]->lv[LV_BLK(minor)]) == NULL)
+ return;
+ len = strlen(lv_ptr->lv_name) - 5;
+ memcpy(buf, &lv_ptr->lv_name[5], len);
+ buf[len] = 0;
+ return;
+}
+#endif
+
+
+/*
+ * this one should never be called...
+ */
+static void lvm_dummy_device_request(request_queue_t * t)
+{
+ printk(KERN_EMERG
+ "%s -- oops, got lvm request for %02d:%02d [sector: %lu]\n",
+ lvm_name,
+ MAJOR(CURRENT->rq_dev),
+ MINOR(CURRENT->rq_dev),
+ CURRENT->sector);
+ return;
+}
+
+
+/*
+ * make request function
+ */
+static void lvm_make_request_fn(int rw, struct buffer_head *bh)
+{
+ lvm_map(bh, rw);
+ if (bh->b_rdev != MD_MAJOR) generic_make_request(rw, bh);
+ return;
+}
+
+
+/********************************************************************
+ *
+ * Character device support functions
+ *
+ ********************************************************************/
+/*
+ * character device support function logical volume manager lock
+ */
+static int lvm_do_lock_lvm(void)
+{
+lock_try_again:
+ spin_lock(&lvm_lock);
+ if (lock != 0 && lock != current->pid) {
+#ifdef DEBUG_IOCTL
+ printk(KERN_INFO "lvm_do_lock_lvm: %s is locked by pid %d ...\n",
+ lvm_name, lock);
+#endif
+ spin_unlock(&lvm_lock);
+ interruptible_sleep_on(&lvm_wait);
+ if (current->sigpending != 0)
+ return -EINTR;
+#ifdef LVM_TOTAL_RESET
+ if (lvm_reset_spindown > 0)
+ return -EACCES;
+#endif
+ goto lock_try_again;
+ }
+ lock = current->pid;
+ spin_unlock(&lvm_lock);
+ return 0;
+} /* lvm_do_lock_lvm */
+
+
+/*
+ * character device support function lock/unlock physical extent
+ */
+static int lvm_do_pe_lock_unlock(vg_t *vg_ptr, void *arg)
+{
+ uint p;
+
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(&pe_lock_req, arg,
+ sizeof(pe_lock_req_t)) != 0) return -EFAULT;
+
+ switch (pe_lock_req.lock) {
+ case LOCK_PE:
+ for (p = 0; p < vg_ptr->pv_max; p++) {
+ if (vg_ptr->pv[p] != NULL &&
+ pe_lock_req.data.pv_dev ==
+ vg_ptr->pv[p]->pv_dev)
+ break;
+ }
+ if (p == vg_ptr->pv_max) return -ENXIO;
+
+ pe_lock_req.lock = UNLOCK_PE;
+ fsync_dev(pe_lock_req.data.lv_dev);
+ pe_lock_req.lock = LOCK_PE;
+ break;
+
+ case UNLOCK_PE:
+ pe_lock_req.lock = UNLOCK_PE;
+ pe_lock_req.data.lv_dev = \
+ pe_lock_req.data.pv_dev = \
+ pe_lock_req.data.pv_offset = 0;
+ wake_up(&lvm_map_wait);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+
+/*
+ * character device support function logical extent remap
+ */
+static int lvm_do_le_remap(vg_t *vg_ptr, void *arg)
+{
+ uint l, le;
+ lv_t *lv_ptr;
+
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(&le_remap_req, arg,
+ sizeof(le_remap_req_t)) != 0)
+ return -EFAULT;
+
+ for (l = 0; l < vg_ptr->lv_max; l++) {
+ lv_ptr = vg_ptr->lv[l];
+ if (lv_ptr != NULL &&
+ strcmp(lv_ptr->lv_name,
+ le_remap_req.lv_name) == 0) {
+ for (le = 0; le < lv_ptr->lv_allocated_le;
+ le++) {
+ if (lv_ptr->lv_current_pe[le].dev ==
+ le_remap_req.old_dev &&
+ lv_ptr->lv_current_pe[le].pe ==
+ le_remap_req.old_pe) {
+ lv_ptr->lv_current_pe[le].dev =
+ le_remap_req.new_dev;
+ lv_ptr->lv_current_pe[le].pe =
+ le_remap_req.new_pe;
+ return 0;
+ }
+ }
+ return -EINVAL;
+ }
+ }
+ return -ENXIO;
+} /* lvm_do_le_remap() */
+
+
+/*
+ * character device support function VGDA create
+ */
+int lvm_do_vg_create(int minor, void *arg)
+{
+ int snaporg_minor = 0;
+ ulong l, p;
+ lv_t lv;
+ vg_t *vg_ptr;
+ pv_t *pv_ptr;
+ lv_t *lv_ptr;
+
+ if (vg[VG_CHR(minor)] != NULL) return -EPERM;
+
+ if ((vg_ptr = kmalloc(sizeof(vg_t),GFP_KERNEL)) == NULL) {
+ printk(KERN_CRIT
+ "%s -- VG_CREATE: kmalloc error VG at line %d\n",
+ lvm_name, __LINE__);
+ return -ENOMEM;
+ }
+ /* get the volume group structure */
+ if (copy_from_user(vg_ptr, arg, sizeof(vg_t)) != 0) {
+ kfree(vg_ptr);
+ return -EFAULT;
+ }
+ /* we are not that active so far... */
+ vg_ptr->vg_status &= ~VG_ACTIVE;
+ vg[VG_CHR(minor)] = vg_ptr;
+
+ vg[VG_CHR(minor)]->pe_allocated = 0;
+ if (vg_ptr->pv_max > ABS_MAX_PV) {
+ printk(KERN_WARNING
+ "%s -- Can't activate VG: ABS_MAX_PV too small\n",
+ lvm_name);
+ kfree(vg_ptr);
+ vg[VG_CHR(minor)] = NULL;
+ return -EPERM;
+ }
+ if (vg_ptr->lv_max > ABS_MAX_LV) {
+ printk(KERN_WARNING
+ "%s -- Can't activate VG: ABS_MAX_LV too small for %u\n",
+ lvm_name, vg_ptr->lv_max);
+ kfree(vg_ptr);
+		vg[VG_CHR(minor)] = NULL;
+ return -EPERM;
+ }
+ /* get the physical volume structures */
+ vg_ptr->pv_act = vg_ptr->pv_cur = 0;
+ for (p = 0; p < vg_ptr->pv_max; p++) {
+ /* user space address */
+ if ((pvp = vg_ptr->pv[p]) != NULL) {
+ pv_ptr = vg_ptr->pv[p] = kmalloc(sizeof(pv_t),GFP_KERNEL);
+ if (pv_ptr == NULL) {
+ printk(KERN_CRIT
+ "%s -- VG_CREATE: kmalloc error PV at line %d\n",
+ lvm_name, __LINE__);
+ lvm_do_vg_remove(minor);
+ return -ENOMEM;
+ }
+ if (copy_from_user(pv_ptr, pvp, sizeof(pv_t)) != 0) {
+ lvm_do_vg_remove(minor);
+ return -EFAULT;
+ }
+			/* We don't need the PE list in kernel space,
+			   as with the LV pe_t list (see below) */
+ pv_ptr->pe = NULL;
+ pv_ptr->pe_allocated = 0;
+ pv_ptr->pv_status = PV_ACTIVE;
+ vg_ptr->pv_act++;
+ vg_ptr->pv_cur++;
+
+#ifdef LVM_GET_INODE
+ /* insert a dummy inode for fs_may_mount */
+ pv_ptr->inode = lvm_get_inode(pv_ptr->pv_dev);
+#endif
+ }
+ }
+
+ /* get the logical volume structures */
+ vg_ptr->lv_cur = 0;
+ for (l = 0; l < vg_ptr->lv_max; l++) {
+ /* user space address */
+ if ((lvp = vg_ptr->lv[l]) != NULL) {
+ if (copy_from_user(&lv, lvp, sizeof(lv_t)) != 0) {
+ lvm_do_vg_remove(minor);
+ return -EFAULT;
+ }
+ vg_ptr->lv[l] = NULL;
+ if (lvm_do_lv_create(minor, lv.lv_name, &lv) != 0) {
+ lvm_do_vg_remove(minor);
+ return -EFAULT;
+ }
+ }
+ }
+
+	/* Second pass to correct snapshot logical volumes which were
+	   not in place during the first pass above */
+ for (l = 0; l < vg_ptr->lv_max; l++) {
+ if ((lv_ptr = vg_ptr->lv[l]) != NULL &&
+ vg_ptr->lv[l]->lv_access & LV_SNAPSHOT) {
+ snaporg_minor = lv_ptr->lv_snapshot_minor;
+ if (vg_ptr->lv[LV_BLK(snaporg_minor)] != NULL) {
+ /* get pointer to original logical volume */
+ lv_ptr = vg_ptr->lv[l]->lv_snapshot_org =
+ vg_ptr->lv[LV_BLK(snaporg_minor)];
+
+ /* set necessary fields of original logical volume */
+ lv_ptr->lv_access |= LV_SNAPSHOT_ORG;
+ lv_ptr->lv_snapshot_minor = 0;
+ lv_ptr->lv_snapshot_org = lv_ptr;
+ lv_ptr->lv_snapshot_prev = NULL;
+
+ /* find last snapshot logical volume in the chain */
+ while (lv_ptr->lv_snapshot_next != NULL)
+ lv_ptr = lv_ptr->lv_snapshot_next;
+
+ /* set back pointer to this last one in our new logical volume */
+ vg_ptr->lv[l]->lv_snapshot_prev = lv_ptr;
+
+ /* last logical volume now points to our new snapshot volume */
+ lv_ptr->lv_snapshot_next = vg_ptr->lv[l];
+
+ /* now point to the new one */
+ lv_ptr = lv_ptr->lv_snapshot_next;
+
+ /* set necessary fields of new snapshot logical volume */
+ lv_ptr->lv_snapshot_next = NULL;
+ lv_ptr->lv_current_pe =
+ vg_ptr->lv[LV_BLK(snaporg_minor)]->lv_current_pe;
+ lv_ptr->lv_allocated_le =
+ vg_ptr->lv[LV_BLK(snaporg_minor)]->lv_allocated_le;
+ lv_ptr->lv_current_le =
+ vg_ptr->lv[LV_BLK(snaporg_minor)]->lv_current_le;
+ lv_ptr->lv_size =
+ vg_ptr->lv[LV_BLK(snaporg_minor)]->lv_size;
+ }
+ }
+ }
+
+ vg_count++;
+
+ /* let's go active */
+ vg_ptr->vg_status |= VG_ACTIVE;
+
+ MOD_INC_USE_COUNT;
+
+ return 0;
+} /* lvm_do_vg_create() */
+
+
+/*
+ * character device support function VGDA extend
+ */
+static int lvm_do_vg_extend(vg_t *vg_ptr, void *arg)
+{
+ uint p;
+ pv_t *pv_ptr;
+
+ if (vg_ptr == NULL) return -ENXIO;
+ if (vg_ptr->pv_cur < vg_ptr->pv_max) {
+ for (p = 0; p < vg_ptr->pv_max; p++) {
+ if (vg_ptr->pv[p] == NULL) {
+ if ((pv_ptr = vg_ptr->pv[p] = kmalloc(sizeof(pv_t),GFP_KERNEL)) == NULL) {
+ printk(KERN_CRIT
+ "%s -- VG_EXTEND: kmalloc error PV at line %d\n",
+ lvm_name, __LINE__);
+ return -ENOMEM;
+ }
+ if (copy_from_user(pv_ptr, arg, sizeof(pv_t)) != 0) {
+ kfree(pv_ptr);
+ vg_ptr->pv[p] = NULL;
+ return -EFAULT;
+ }
+
+ pv_ptr->pv_status = PV_ACTIVE;
+				/* We don't need the PE list in kernel
+				   space, as with the LV pe_t list */
+ pv_ptr->pe = NULL;
+ vg_ptr->pv_cur++;
+ vg_ptr->pv_act++;
+ vg_ptr->pe_total +=
+ pv_ptr->pe_total;
+#ifdef LVM_GET_INODE
+ /* insert a dummy inode for fs_may_mount */
+ pv_ptr->inode = lvm_get_inode(pv_ptr->pv_dev);
+#endif
+ return 0;
+ }
+ }
+ }
+	return -EPERM;
+} /* lvm_do_vg_extend() */
+
+
+/*
+ * character device support function VGDA reduce
+ */
+static int lvm_do_vg_reduce(vg_t *vg_ptr, void *arg)
+{
+ uint p;
+ pv_t *pv_ptr;
+
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(pv_name, arg, sizeof(pv_name)) != 0)
+ return -EFAULT;
+
+ for (p = 0; p < vg_ptr->pv_max; p++) {
+ pv_ptr = vg_ptr->pv[p];
+ if (pv_ptr != NULL &&
+ strcmp(pv_ptr->pv_name,
+ pv_name) == 0) {
+ if (pv_ptr->lv_cur > 0) return -EPERM;
+ vg_ptr->pe_total -=
+ pv_ptr->pe_total;
+ vg_ptr->pv_cur--;
+ vg_ptr->pv_act--;
+#ifdef LVM_GET_INODE
+ lvm_clear_inode(pv_ptr->inode);
+#endif
+ kfree(pv_ptr);
+ /* Make PV pointer array contiguous */
+ for (; p < vg_ptr->pv_max - 1; p++)
+ vg_ptr->pv[p] = vg_ptr->pv[p + 1];
+			vg_ptr->pv[p] = NULL;
+ return 0;
+ }
+ }
+ return -ENXIO;
+} /* lvm_do_vg_reduce */
+
+
+/*
+ * character device support function VGDA remove
+ */
+static int lvm_do_vg_remove(int minor)
+{
+ int i;
+ vg_t *vg_ptr = vg[VG_CHR(minor)];
+ pv_t *pv_ptr;
+
+ if (vg_ptr == NULL) return -ENXIO;
+
+#ifdef LVM_TOTAL_RESET
+ if (vg_ptr->lv_open > 0 && lvm_reset_spindown == 0)
+#else
+ if (vg_ptr->lv_open > 0)
+#endif
+ return -EPERM;
+
+ /* let's go inactive */
+ vg_ptr->vg_status &= ~VG_ACTIVE;
+
+ /* free LVs */
+ /* first free snapshot logical volumes */
+ for (i = 0; i < vg_ptr->lv_max; i++) {
+ if (vg_ptr->lv[i] != NULL &&
+ vg_ptr->lv[i]->lv_access & LV_SNAPSHOT) {
+ lvm_do_lv_remove(minor, NULL, i);
+ current->state = TASK_UNINTERRUPTIBLE;
+ schedule_timeout(1);
+ }
+ }
+ /* then free the rest of the LVs */
+ for (i = 0; i < vg_ptr->lv_max; i++) {
+ if (vg_ptr->lv[i] != NULL) {
+ lvm_do_lv_remove(minor, NULL, i);
+ current->state = TASK_UNINTERRUPTIBLE;
+ schedule_timeout(1);
+ }
+ }
+
+ /* free PVs */
+ for (i = 0; i < vg_ptr->pv_max; i++) {
+ if ((pv_ptr = vg_ptr->pv[i]) != NULL) {
+#ifdef DEBUG_KFREE
+ printk(KERN_DEBUG
+ "%s -- kfree %d\n", lvm_name, __LINE__);
+#endif
+#ifdef LVM_GET_INODE
+ lvm_clear_inode(pv_ptr->inode);
+#endif
+ kfree(pv_ptr);
+ vg[VG_CHR(minor)]->pv[i] = NULL;
+ }
+ }
+
+#ifdef DEBUG_KFREE
+ printk(KERN_DEBUG "%s -- kfree %d\n", lvm_name, __LINE__);
+#endif
+ kfree(vg_ptr);
+ vg[VG_CHR(minor)] = NULL;
+
+ vg_count--;
+
+ MOD_DEC_USE_COUNT;
+
+ return 0;
+} /* lvm_do_vg_remove() */
+
+
+/*
+ * character device support function logical volume create
+ */
+static int lvm_do_lv_create(int minor, char *lv_name, lv_t *lv)
+{
+ int l, le, l_new, p, size;
+ ulong lv_status_save;
+ lv_block_exception_t *lvbe = lv->lv_block_exception;
+ vg_t *vg_ptr = vg[VG_CHR(minor)];
+ lv_t *lv_ptr = NULL;
+
+ if ((pep = lv->lv_current_pe) == NULL) return -EINVAL;
+ if (lv->lv_chunk_size > LVM_SNAPSHOT_MAX_CHUNK)
+ return -EINVAL;
+
+ for (l = 0; l < vg_ptr->lv_max; l++) {
+ if (vg_ptr->lv[l] != NULL &&
+ strcmp(vg_ptr->lv[l]->lv_name, lv_name) == 0)
+ return -EEXIST;
+ }
+
+	/* in case of an lv_remove(), lv_create() pair; e.g. lvrename does this */
+ l_new = -1;
+ if (vg_ptr->lv[lv->lv_number] == NULL)
+ l_new = lv->lv_number;
+ else {
+ for (l = 0; l < vg_ptr->lv_max; l++) {
+ if (vg_ptr->lv[l] == NULL)
+ if (l_new == -1) l_new = l;
+ }
+ }
+ if (l_new == -1) return -EPERM;
+ else l = l_new;
+
+	if ((lv_ptr = kmalloc(sizeof(lv_t),GFP_KERNEL)) == NULL) {
+ printk(KERN_CRIT "%s -- LV_CREATE: kmalloc error LV at line %d\n",
+ lvm_name, __LINE__);
+ return -ENOMEM;
+ }
+ /* copy preloaded LV */
+ memcpy((char *) lv_ptr, (char *) lv, sizeof(lv_t));
+
+ lv_status_save = lv_ptr->lv_status;
+ lv_ptr->lv_status &= ~LV_ACTIVE;
+ lv_ptr->lv_snapshot_org = \
+ lv_ptr->lv_snapshot_prev = \
+ lv_ptr->lv_snapshot_next = NULL;
+ lv_ptr->lv_block_exception = NULL;
+ init_MUTEX(&lv_ptr->lv_snapshot_sem);
+ vg_ptr->lv[l] = lv_ptr;
+
+	/* get the PE structures from user space if this
+	   is not a snapshot logical volume */
+ if (!(lv_ptr->lv_access & LV_SNAPSHOT)) {
+ size = lv_ptr->lv_allocated_le * sizeof(pe_t);
+ if ((lv_ptr->lv_current_pe = vmalloc(size)) == NULL) {
+ printk(KERN_CRIT
+ "%s -- LV_CREATE: vmalloc error LV_CURRENT_PE of %d Byte "
+ "at line %d\n",
+ lvm_name, size, __LINE__);
+#ifdef DEBUG_KFREE
+ printk(KERN_DEBUG "%s -- kfree %d\n", lvm_name, __LINE__);
+#endif
+ kfree(lv_ptr);
+ vg[VG_CHR(minor)]->lv[l] = NULL;
+ return -ENOMEM;
+ }
+ if (copy_from_user(lv_ptr->lv_current_pe, pep, size)) {
+ vfree(lv_ptr->lv_current_pe);
+ kfree(lv_ptr);
+ vg_ptr->lv[l] = NULL;
+ return -EFAULT;
+ }
+ /* correct the PE count in PVs */
+ for (le = 0; le < lv_ptr->lv_allocated_le; le++) {
+ vg_ptr->pe_allocated++;
+ for (p = 0; p < vg_ptr->pv_cur; p++) {
+ if (vg_ptr->pv[p]->pv_dev ==
+ lv_ptr->lv_current_pe[le].dev)
+ vg_ptr->pv[p]->pe_allocated++;
+ }
+ }
+ } else {
+ /* Get snapshot exception data and block list */
+ if (lvbe != NULL) {
+ lv_ptr->lv_snapshot_org =
+ vg_ptr->lv[LV_BLK(lv_ptr->lv_snapshot_minor)];
+ if (lv_ptr->lv_snapshot_org != NULL) {
+ size = lv_ptr->lv_remap_end * sizeof(lv_block_exception_t);
+ if ((lv_ptr->lv_block_exception = vmalloc(size)) == NULL) {
+ printk(KERN_CRIT
+ "%s -- lvm_do_lv_create: vmalloc error LV_BLOCK_EXCEPTION "
+ "of %d byte at line %d\n",
+ lvm_name, size, __LINE__);
+#ifdef DEBUG_KFREE
+ printk(KERN_DEBUG "%s -- kfree %d\n", lvm_name, __LINE__);
+#endif
+ kfree(lv_ptr);
+ vg_ptr->lv[l] = NULL;
+ return -ENOMEM;
+ }
+ if (copy_from_user(lv_ptr->lv_block_exception, lvbe, size)) {
+ vfree(lv_ptr->lv_block_exception);
+ kfree(lv_ptr);
+ vg[VG_CHR(minor)]->lv[l] = NULL;
+ return -EFAULT;
+ }
+ /* get pointer to original logical volume */
+ lv_ptr = lv_ptr->lv_snapshot_org;
+
+ lv_ptr->lv_snapshot_minor = 0;
+ lv_ptr->lv_snapshot_org = lv_ptr;
+ lv_ptr->lv_snapshot_prev = NULL;
+				/* walk through the snapshot list */
+ while (lv_ptr->lv_snapshot_next != NULL)
+ lv_ptr = lv_ptr->lv_snapshot_next;
+ /* now lv_ptr points to the last existing snapshot in the chain */
+ vg_ptr->lv[l]->lv_snapshot_prev = lv_ptr;
+				/* our new one now points back to the previous last in the chain */
+ lv_ptr = vg_ptr->lv[l];
+ /* now lv_ptr points to our new last snapshot logical volume */
+ lv_ptr->lv_snapshot_org = lv_ptr->lv_snapshot_prev->lv_snapshot_org;
+ lv_ptr->lv_snapshot_next = NULL;
+ lv_ptr->lv_current_pe = lv_ptr->lv_snapshot_org->lv_current_pe;
+ lv_ptr->lv_allocated_le = lv_ptr->lv_snapshot_org->lv_allocated_le;
+ lv_ptr->lv_current_le = lv_ptr->lv_snapshot_org->lv_current_le;
+ lv_ptr->lv_size = lv_ptr->lv_snapshot_org->lv_size;
+ lv_ptr->lv_stripes = lv_ptr->lv_snapshot_org->lv_stripes;
+ lv_ptr->lv_stripesize = lv_ptr->lv_snapshot_org->lv_stripesize;
+ {
+ int err = lvm_snapshot_alloc(lv_ptr);
+ if (err)
+ {
+ vfree(lv_ptr->lv_block_exception);
+ kfree(lv_ptr);
+ vg[VG_CHR(minor)]->lv[l] = NULL;
+ return err;
+ }
+ }
+ } else {
+ vfree(lv_ptr->lv_block_exception);
+ kfree(lv_ptr);
+ vg_ptr->lv[l] = NULL;
+ return -EFAULT;
+ }
+ } else {
+ kfree(vg_ptr->lv[l]);
+ vg_ptr->lv[l] = NULL;
+ return -EINVAL;
+ }
+ } /* if ( vg[VG_CHR(minor)]->lv[l]->lv_access & LV_SNAPSHOT) */
+
+ lv_ptr = vg_ptr->lv[l];
+ lvm_gendisk.part[MINOR(lv_ptr->lv_dev)].start_sect = 0;
+ lvm_gendisk.part[MINOR(lv_ptr->lv_dev)].nr_sects = lv_ptr->lv_size;
+ lvm_size[MINOR(lv_ptr->lv_dev)] = lv_ptr->lv_size >> 1;
+ vg_lv_map[MINOR(lv_ptr->lv_dev)].vg_number = vg_ptr->vg_number;
+ vg_lv_map[MINOR(lv_ptr->lv_dev)].lv_number = lv_ptr->lv_number;
+ LVM_CORRECT_READ_AHEAD(lv_ptr->lv_read_ahead);
+ vg_ptr->lv_cur++;
+ lv_ptr->lv_status = lv_status_save;
+
+ /* optionally add our new snapshot LV */
+ if (lv_ptr->lv_access & LV_SNAPSHOT) {
+ /* sync the original logical volume */
+ fsync_dev(lv_ptr->lv_snapshot_org->lv_dev);
+		/* put ourselves into the chain */
+ lv_ptr->lv_snapshot_prev->lv_snapshot_next = lv_ptr;
+ lv_ptr->lv_snapshot_org->lv_access |= LV_SNAPSHOT_ORG;
+ }
+ return 0;
+} /* lvm_do_lv_create() */
+
+
+/*
+ * character device support function logical volume remove
+ */
+static int lvm_do_lv_remove(int minor, char *lv_name, int l)
+{
+ uint le, p;
+ vg_t *vg_ptr = vg[VG_CHR(minor)];
+ lv_t *lv_ptr;
+
+ if (l == -1) {
+ for (l = 0; l < vg_ptr->lv_max; l++) {
+ if (vg_ptr->lv[l] != NULL &&
+ strcmp(vg_ptr->lv[l]->lv_name, lv_name) == 0) {
+ break;
+ }
+ }
+ }
+ if (l == vg_ptr->lv_max) return -ENXIO;
+
+ lv_ptr = vg_ptr->lv[l];
+#ifdef LVM_TOTAL_RESET
+ if (lv_ptr->lv_open > 0 && lvm_reset_spindown == 0)
+#else
+ if (lv_ptr->lv_open > 0)
+#endif
+ return -EBUSY;
+
+ /* check for deletion of snapshot source while
+ snapshot volume still exists */
+ if ((lv_ptr->lv_access & LV_SNAPSHOT_ORG) &&
+ lv_ptr->lv_snapshot_next != NULL)
+ return -EPERM;
+
+ lv_ptr->lv_status |= LV_SPINDOWN;
+
+ /* sync the buffers */
+ fsync_dev(lv_ptr->lv_dev);
+
+ lv_ptr->lv_status &= ~LV_ACTIVE;
+
+ /* invalidate the buffers */
+ invalidate_buffers(lv_ptr->lv_dev);
+
+ /* reset generic hd */
+ lvm_gendisk.part[MINOR(lv_ptr->lv_dev)].start_sect = -1;
+ lvm_gendisk.part[MINOR(lv_ptr->lv_dev)].nr_sects = 0;
+ lvm_size[MINOR(lv_ptr->lv_dev)] = 0;
+
+ /* reset VG/LV mapping */
+ vg_lv_map[MINOR(lv_ptr->lv_dev)].vg_number = ABS_MAX_VG;
+ vg_lv_map[MINOR(lv_ptr->lv_dev)].lv_number = -1;
+
+	/* correct the PE count in PVs if this is not a snapshot logical volume */
+ if (!(lv_ptr->lv_access & LV_SNAPSHOT)) {
+		/* only if this is not a snapshot logical volume, because
+		   we share the lv_current_pe[] structs with the
+		   original logical volume */
+ for (le = 0; le < lv_ptr->lv_allocated_le; le++) {
+ vg_ptr->pe_allocated--;
+ for (p = 0; p < vg_ptr->pv_cur; p++) {
+ if (vg_ptr->pv[p]->pv_dev ==
+ lv_ptr->lv_current_pe[le].dev)
+ vg_ptr->pv[p]->pe_allocated--;
+ }
+ }
+ vfree(lv_ptr->lv_current_pe);
+ /* LV_SNAPSHOT */
+ } else {
+ /* remove this snapshot logical volume from the chain */
+ lv_ptr->lv_snapshot_prev->lv_snapshot_next = lv_ptr->lv_snapshot_next;
+ if (lv_ptr->lv_snapshot_next != NULL) {
+ lv_ptr->lv_snapshot_next->lv_snapshot_prev =
+ lv_ptr->lv_snapshot_prev;
+ }
+ /* no more snapshots? */
+ if (lv_ptr->lv_snapshot_org->lv_snapshot_next == NULL)
+ lv_ptr->lv_snapshot_org->lv_access &= ~LV_SNAPSHOT_ORG;
+ lvm_snapshot_release(lv_ptr);
+ }
+
+#ifdef DEBUG_KFREE
+ printk(KERN_DEBUG "%s -- kfree %d\n", lvm_name, __LINE__);
+#endif
+ kfree(lv_ptr);
+ vg_ptr->lv[l] = NULL;
+ vg_ptr->lv_cur--;
+ return 0;
+} /* lvm_do_lv_remove() */
+
+
+/*
+ * character device support function logical volume extend / reduce
+ */
+static int lvm_do_lv_extend_reduce(int minor, char *lv_name, lv_t *lv)
+{
+ int l, le, p, size, old_allocated_le;
+ uint32_t end, lv_status_save;
+ vg_t *vg_ptr = vg[VG_CHR(minor)];
+ lv_t *lv_ptr;
+ pe_t *pe;
+
+ if ((pep = lv->lv_current_pe) == NULL) return -EINVAL;
+
+ for (l = 0; l < vg_ptr->lv_max; l++) {
+ if (vg_ptr->lv[l] != NULL &&
+ strcmp(vg_ptr->lv[l]->lv_name, lv_name) == 0)
+ break;
+ }
+ if (l == vg_ptr->lv_max) return -ENXIO;
+ lv_ptr = vg_ptr->lv[l];
+
+ /* check for active snapshot */
+ if (lv->lv_access & (LV_SNAPSHOT | LV_SNAPSHOT_ORG)) return -EPERM;
+
+ if ((pe = vmalloc(size = lv->lv_current_le * sizeof(pe_t))) == NULL) {
+ printk(KERN_CRIT
+ "%s -- lvm_do_lv_extend_reduce: vmalloc error LV_CURRENT_PE "
+ "of %d Byte at line %d\n",
+ lvm_name, size, __LINE__);
+ return -ENOMEM;
+ }
+ /* get the PE structures from user space */
+ if (copy_from_user(pe, pep, size)) {
+ vfree(pe);
+ return -EFAULT;
+ }
+
+#ifdef DEBUG
+ printk(KERN_DEBUG
+ "%s -- fsync_dev and "
+ "invalidate_buffers for %s [%s] in %s\n",
+ lvm_name, lv_ptr->lv_name,
+ kdevname(lv_ptr->lv_dev),
+ vg_ptr->vg_name);
+#endif
+
+ lv_ptr->lv_status |= LV_SPINDOWN;
+ fsync_dev(lv_ptr->lv_dev);
+ lv_ptr->lv_status &= ~LV_ACTIVE;
+ invalidate_buffers(lv_ptr->lv_dev);
+
+ /* reduce allocation counters on PV(s) */
+ for (le = 0; le < lv_ptr->lv_allocated_le; le++) {
+ vg_ptr->pe_allocated--;
+ for (p = 0; p < vg_ptr->pv_cur; p++) {
+ if (vg_ptr->pv[p]->pv_dev ==
+ lv_ptr->lv_current_pe[le].dev) {
+ vg_ptr->pv[p]->pe_allocated--;
+ break;
+ }
+ }
+ }
+
+
+ /* save pointer to "old" lv/pe pointer array */
+ pep1 = lv_ptr->lv_current_pe;
+ end = lv_ptr->lv_current_le;
+
+ /* save open counter */
+ lv_open = lv_ptr->lv_open;
+
+ /* save # of old allocated logical extents */
+ old_allocated_le = lv_ptr->lv_allocated_le;
+
+ /* copy preloaded LV */
+ lv_status_save = lv->lv_status;
+ lv->lv_status |= LV_SPINDOWN;
+ lv->lv_status &= ~LV_ACTIVE;
+ memcpy((char *) lv_ptr, (char *) lv, sizeof(lv_t));
+ lv_ptr->lv_current_pe = pe;
+ lv_ptr->lv_open = lv_open;
+
+	/* save available i/o statistic data */
+ /* linear logical volume */
+ if (lv_ptr->lv_stripes < 2) {
+ /* Check what last LE shall be used */
+ if (end > lv_ptr->lv_current_le) end = lv_ptr->lv_current_le;
+ for (le = 0; le < end; le++) {
+ lv_ptr->lv_current_pe[le].reads = pep1[le].reads;
+ lv_ptr->lv_current_pe[le].writes = pep1[le].writes;
+ }
+ /* striped logical volume */
+ } else {
+ uint i, j, source, dest, end, old_stripe_size, new_stripe_size;
+
+ old_stripe_size = old_allocated_le / lv_ptr->lv_stripes;
+ new_stripe_size = lv_ptr->lv_allocated_le / lv_ptr->lv_stripes;
+ end = old_stripe_size;
+ if (end > new_stripe_size) end = new_stripe_size;
+ for (i = source = dest = 0;
+ i < lv_ptr->lv_stripes; i++) {
+ for (j = 0; j < end; j++) {
+ lv_ptr->lv_current_pe[dest + j].reads =
+ pep1[source + j].reads;
+ lv_ptr->lv_current_pe[dest + j].writes =
+ pep1[source + j].writes;
+ }
+ source += old_stripe_size;
+ dest += new_stripe_size;
+ }
+ }
+ vfree(pep1);
+ pep1 = NULL;
+
+
+ /* extend the PE count in PVs */
+ for (le = 0; le < lv_ptr->lv_allocated_le; le++) {
+ vg_ptr->pe_allocated++;
+ for (p = 0; p < vg_ptr->pv_cur; p++) {
+ if (vg_ptr->pv[p]->pv_dev ==
+ vg_ptr->lv[l]->lv_current_pe[le].dev) {
+ vg_ptr->pv[p]->pe_allocated++;
+ break;
+ }
+ }
+ }
+
+ lvm_gendisk.part[MINOR(lv_ptr->lv_dev)].start_sect = 0;
+ lvm_gendisk.part[MINOR(lv_ptr->lv_dev)].nr_sects = lv_ptr->lv_size;
+ lvm_size[MINOR(lv_ptr->lv_dev)] = lv_ptr->lv_size >> 1;
+ /* vg_lv_map array doesn't have to be changed here */
+
+ LVM_CORRECT_READ_AHEAD(lv_ptr->lv_read_ahead);
+ lv_ptr->lv_status = lv_status_save;
+
+ return 0;
+} /* lvm_do_lv_extend_reduce() */
+
+
+/*
+ * character device support function logical volume status by name
+ */
+static int lvm_do_lv_status_byname(vg_t *vg_ptr, void *arg)
+{
+ uint l;
+ ulong size;
+ lv_t lv;
+ lv_t *lv_ptr;
+ lv_status_byname_req_t lv_status_byname_req;
+
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(&lv_status_byname_req, arg,
+ sizeof(lv_status_byname_req_t)) != 0)
+ return -EFAULT;
+
+ if (lv_status_byname_req.lv == NULL) return -EINVAL;
+ if (copy_from_user(&lv, lv_status_byname_req.lv,
+ sizeof(lv_t)) != 0)
+ return -EFAULT;
+
+ for (l = 0; l < vg_ptr->lv_max; l++) {
+ lv_ptr = vg_ptr->lv[l];
+ if (lv_ptr != NULL &&
+ strcmp(lv_ptr->lv_name,
+ lv_status_byname_req.lv_name) == 0) {
+ if (copy_to_user(lv_status_byname_req.lv,
+ lv_ptr,
+ sizeof(lv_t)) != 0)
+ return -EFAULT;
+
+ if (lv.lv_current_pe != NULL) {
+ size = lv_ptr->lv_allocated_le *
+ sizeof(pe_t);
+ if (copy_to_user(lv.lv_current_pe,
+ lv_ptr->lv_current_pe,
+ size) != 0)
+ return -EFAULT;
+ }
+ return 0;
+ }
+ }
+ return -ENXIO;
+} /* lvm_do_lv_status_byname() */
+
+
+/*
+ * character device support function logical volume status by index
+ */
+static int lvm_do_lv_status_byindex(vg_t *vg_ptr,void *arg)
+{
+ ulong size;
+ lv_t lv;
+ lv_t *lv_ptr;
+ lv_status_byindex_req_t lv_status_byindex_req;
+
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(&lv_status_byindex_req, arg,
+ sizeof(lv_status_byindex_req)) != 0)
+ return -EFAULT;
+
+ if ((lvp = lv_status_byindex_req.lv) == NULL)
+ return -EINVAL;
+ if ( ( lv_ptr = vg_ptr->lv[lv_status_byindex_req.lv_index]) == NULL)
+ return -ENXIO;
+
+ if (copy_from_user(&lv, lvp, sizeof(lv_t)) != 0)
+ return -EFAULT;
+
+ if (copy_to_user(lvp, lv_ptr, sizeof(lv_t)) != 0)
+ return -EFAULT;
+
+ if (lv.lv_current_pe != NULL) {
+ size = lv_ptr->lv_allocated_le * sizeof(pe_t);
+ if (copy_to_user(lv.lv_current_pe,
+ lv_ptr->lv_current_pe,
+ size) != 0)
+ return -EFAULT;
+ }
+ return 0;
+} /* lvm_do_lv_status_byindex() */
+
+
+/*
+ * character device support function physical volume change
+ */
+static int lvm_do_pv_change(vg_t *vg_ptr, void *arg)
+{
+ uint p;
+ pv_t *pv_ptr;
+#ifdef LVM_GET_INODE
+ struct inode *inode_sav;
+#endif
+
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(&pv_change_req, arg,
+ sizeof(pv_change_req)) != 0)
+ return -EFAULT;
+
+ for (p = 0; p < vg_ptr->pv_max; p++) {
+ pv_ptr = vg_ptr->pv[p];
+ if (pv_ptr != NULL &&
+ strcmp(pv_ptr->pv_name,
+ pv_change_req.pv_name) == 0) {
+#ifdef LVM_GET_INODE
+ inode_sav = pv_ptr->inode;
+#endif
+ if (copy_from_user(pv_ptr,
+ pv_change_req.pv,
+ sizeof(pv_t)) != 0)
+ return -EFAULT;
+
+			/* We don't need the PE list in kernel
+			   space, as with the LV pe_t list */
+ pv_ptr->pe = NULL;
+#ifdef LVM_GET_INODE
+ pv_ptr->inode = inode_sav;
+#endif
+ return 0;
+ }
+ }
+ return -ENXIO;
+} /* lvm_do_pv_change() */
+
+/*
+ * character device support function get physical volume status
+ */
+static int lvm_do_pv_status(vg_t *vg_ptr, void *arg)
+{
+ uint p;
+ pv_t *pv_ptr;
+
+ if (vg_ptr == NULL) return -ENXIO;
+ if (copy_from_user(&pv_status_req, arg,
+ sizeof(pv_status_req)) != 0)
+ return -EFAULT;
+
+ for (p = 0; p < vg_ptr->pv_max; p++) {
+ pv_ptr = vg_ptr->pv[p];
+ if (pv_ptr != NULL &&
+ strcmp(pv_ptr->pv_name,
+ pv_status_req.pv_name) == 0) {
+ if (copy_to_user(pv_status_req.pv,
+ pv_ptr,
+ sizeof(pv_t)) != 0)
+ return -EFAULT;
+ return 0;
+ }
+ }
+ return -ENXIO;
+} /* lvm_do_pv_status() */
+
+
+/*
+ * support function initialize gendisk variables
+ */
+#ifdef __initfunc
+__initfunc(void lvm_geninit(struct gendisk *lvm_gdisk))
+#else
+void __init
+ lvm_geninit(struct gendisk *lvm_gdisk)
+#endif
+{
+ int i = 0;
+
+#ifdef DEBUG_GENDISK
+ printk(KERN_DEBUG "%s -- lvm_gendisk\n", lvm_name);
+#endif
+
+ for (i = 0; i < MAX_LV; i++) {
+ lvm_gendisk.part[i].start_sect = -1; /* avoid partition check */
+ lvm_size[i] = lvm_gendisk.part[i].nr_sects = 0;
+ lvm_blocksizes[i] = BLOCK_SIZE;
+ }
+
+ blksize_size[MAJOR_NR] = lvm_blocksizes;
+ blk_size[MAJOR_NR] = lvm_size;
+
+ return;
+} /* lvm_gen_init() */
+
+
+#ifdef LVM_GET_INODE
+/*
+ * support function to get an empty inode
+ *
+ * Gets an empty inode to be inserted into the inode hash,
+ * so that a physical volume can't be mounted.
+ * This is analogous to drivers/block/md.c
+ *
+ * Is this the real thing?
+ *
+ */
+struct inode *lvm_get_inode(int dev)
+{
+ struct inode *inode_this = NULL;
+
+ /* Lock the device by inserting a dummy inode. */
+ inode_this = get_empty_inode();
+ inode_this->i_dev = dev;
+ insert_inode_hash(inode_this);
+ return inode_this;
+}
+
+
+/*
+ * support function to clear an inode
+ *
+ */
+void lvm_clear_inode(struct inode *inode)
+{
+#ifdef I_FREEING
+ inode->i_state |= I_FREEING;
+#endif
+ clear_inode(inode);
+ return;
+}
+#endif /* #ifdef LVM_GET_INODE */
return (-1);
}
devfs_handle = devfs_mk_dir (NULL, "md", 0, NULL);
- devfs_register_series (devfs_handle, "%u", MAX_MD_DEV,DEVFS_FL_DEFAULT,
+ devfs_register_series (devfs_handle, "%u",MAX_MD_DEVS,DEVFS_FL_DEFAULT,
MAJOR_NR, 0, S_IFBLK | S_IRUSR | S_IWUSR, 0, 0,
&md_fops, NULL);
unsigned long draw_from = 0, draw_to = 0;
struct vt_struct *vt = (struct vt_struct *)tty->driver_data;
u16 himask, charmask;
+ const unsigned char *orig_buf = NULL;
+ int orig_count;
currcons = vt->vc_num;
if (!vc_cons_allocated(currcons)) {
return 0;
}
- down(&con_buf_sem);
+ orig_buf = buf;
+ orig_count = count;
if (from_user) {
+ down(&con_buf_sem);
+
+again:
if (count > CON_BUF_SIZE)
count = CON_BUF_SIZE;
if (copy_from_user(con_buf, buf, count)) {
spin_unlock_irq(&console_lock);
out:
- up(&con_buf_sem);
+ if (from_user) {
+ /* If the user requested something larger than
+ * the CON_BUF_SIZE, and the tty is not stopped,
+ * keep going.
+ */
+ if ((orig_count > CON_BUF_SIZE) && !tty->stopped) {
+ orig_count -= CON_BUF_SIZE;
+ orig_buf += CON_BUF_SIZE;
+ count = orig_count;
+ buf = orig_buf;
+ goto again;
+ }
+
+ up(&con_buf_sem);
+ }
return n;
#undef FLUSH
static int lp_register(int nr, struct parport *port)
{
+ char name[8];
+
lp_table[nr].dev = parport_register_device(port, "lp",
NULL, NULL, NULL, 0,
(void *) &lp_table[nr]);
if (reset)
lp_reset(nr);
+ sprintf (name, "%d", nr);
+ devfs_register (devfs_handle, name, 0,
+ DEVFS_FL_DEFAULT, LP_MAJOR, nr,
+ S_IFCHR | S_IRUGO | S_IWUGO, 0, 0,
+ &lp_fops, NULL);
+
printk(KERN_INFO "lp%d: using %s (%s).\n", nr, port->name,
(port->irq == PARPORT_IRQ_NONE)?"polling":"interrupt-driven");
}
devfs_handle = devfs_mk_dir (NULL, "printers", 0, NULL);
- if (lp_count) {
- for (i = 0; i < LP_NO; ++i)
- {
- char name[8];
-
- if (!(lp_table[i].flags & LP_EXIST))
- continue; /* skip this entry: it doesn't exist. */
- sprintf (name, "%d", i);
- devfs_register (devfs_handle, name, 0,
- DEVFS_FL_DEFAULT, LP_MAJOR, i,
- S_IFCHR | S_IRUGO | S_IWUGO, 0, 0,
- &lp_fops, NULL);
- }
- }
if (!lp_count) {
printk (KERN_INFO "lp: driver loaded but no devices found\n");
#include <linux/module.h>
#include <linux/init.h>
#include <linux/sched.h>
-#include <linux/init.h>
#include <linux/devfs_fs_kernel.h>
#include <linux/ioctl.h>
#include <linux/parport.h>
#include <linux/config.h>
#include <linux/module.h> /* For EXPORT_SYMBOL */
-#include <linux/config.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
* - making it shorter - scr_readw are macros which expand in PRETTY long code
*/
+#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/major.h>
#include <linux/errno.h>
#include <asm/io.h>
-#define RTL8139_VERSION "0.9.2"
+#define RTL8139_VERSION "0.9.3"
#define RTL8139_MODULE_NAME "8139too"
#define RTL8139_DRIVER_NAME RTL8139_MODULE_NAME " Fast Ethernet driver " RTL8139_VERSION
#define PFX RTL8139_MODULE_NAME ": "
HAS_LNK_CHNG = 0x040000,
};
-#define RTL_IO_SIZE 256
+#define RTL_IO_SIZE 0x80
#define RTL8139_CAPS HAS_CHIP_XCVR|HAS_LNK_CHNG
RTL8139_CB,
SMC1211TX,
/*MPX5030,*/
- SIS900,
- SIS7016,
DELTA8139,
ADDTRON8139,
} chip_t;
{ RTL8139_CB, "RealTek RTL8139B PCI/CardBus"},
{ SMC1211TX, "SMC1211TX EZCard 10/100 (RealTek RTL8139)"},
/* { MPX5030, "Accton MPX5030 (RealTek RTL8139)"},*/
- { SIS900, "SiS 900 (RealTek RTL8139) Fast Ethernet"},
- { SIS7016, "SiS 7016 (RealTek RTL8139) Fast Ethernet"},
{ DELTA8139, "Delta Electronics 8139 10/100BaseTX"},
	{ ADDTRON8139, "Addtron Technology 8139 10/100BaseTX"},
{0,},
{0x10ec, 0x8138, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139_CB },
{0x1113, 0x1211, PCI_ANY_ID, PCI_ANY_ID, 0, 0, SMC1211TX },
/* {0x1113, 0x1211, PCI_ANY_ID, PCI_ANY_ID, 0, 0, MPX5030 },*/
- {0x1039, 0x0900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, SIS900 },
- {0x1039, 0x7016, PCI_ANY_ID, PCI_ANY_ID, 0, 0, SIS7016 },
{0x1500, 0x1360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DELTA8139 },
{0x4033, 0x1360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ADDTRON8139 },
{0,},
/* 'options' is used to pass a transceiver override or full-duplex flag
e.g. "options=16" for FD, "options=32" for 100mbps-only. */
-#if MODULE_SETUP_FIXED
static int full_duplex[] = {-1, -1, -1, -1, -1, -1, -1, -1};
static int options[] = {-1, -1, -1, -1, -1, -1, -1, -1};
-#endif
static int debug = -1; /* The debug level */
/* A few values that may be tweaked. */
#error You must compile this driver with "-O".
#endif
-#include <linux/version.h>
+
+#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>
MODULE_AUTHOR("Donald Becker <becker@cesdis.gsfc.nasa.gov>");
MODULE_DESCRIPTION("Intel i82557/i82558 PCI EtherExpressPro driver");
MODULE_PARM(debug, "i");
-
-#if MODULE_OPTIONS_FIXED
MODULE_PARM(options, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(full_duplex, "1-" __MODULE_STRING(8) "i");
-#endif
-
MODULE_PARM(congenb, "i");
MODULE_PARM(txfifo, "i");
MODULE_PARM(rxfifo, "i");
#define RUN_AT(x) (jiffies + (x))
+/* ACPI power states don't universally work (yet) */
+#ifndef CONFIG_EEPRO100_PM
+#undef pci_set_power_state
+#define pci_set_power_state null_set_power_state
+static inline int null_set_power_state(struct pci_dev *dev, int state)
+{
+ return 0;
+}
+#endif /* CONFIG_EEPRO100_PM */
+
+
+/* compile-time switch to en/disable slow PIO */
+#undef USE_IO
+
+
int speedo_debug = 1;
u16 eeprom[0x100];
int acpi_idle_state = 0, pm, irq;
unsigned long ioaddr;
+ static int card_idx = -1;
static int did_version = 0; /* Already printed version info. */
ioaddr = pci_resource_start (pdev, 1);
#endif
irq = pdev->irq;
+
+ card_idx++;
if (!request_region (pci_resource_start (pdev, 1),
pci_resource_len (pdev, 1),
if (dev->mem_start > 0)
option = dev->mem_start;
-#if MODULE_SETUP_FIXED
else if (card_idx >= 0 && options[card_idx] >= 0)
option = options[card_idx];
-#endif
else
option = 0;
sp->full_duplex = option >= 0 && (option & 0x10) ? 1 : 0;
-#if MODULE_SETUP_FIXED
if (card_idx >= 0) {
if (full_duplex[card_idx] >= 0)
sp->full_duplex = full_duplex[card_idx];
}
-#endif
sp->default_port = option >= 0 ? (option & 0x0f) : 0;
err_out_iounmap:
#ifndef USE_IO
iounmap ((void *)ioaddr);
-#endif
err_out_free_mmio_region:
+#endif
release_mem_region (pci_resource_start (pdev, 0),
pci_resource_len (pdev, 0));
err_out_free_pio_region:
/* Free the original skb. */
if (sp->tx_skbuff[entry]) {
sp->stats.tx_packets++; /* Count only user packets. */
-#if LINUX_VERSION_CODE > 0x20127
sp->stats.tx_bytes += sp->tx_skbuff[entry]->len;
-#endif
pci_unmap_single(sp->pdev,
le32_to_cpu(sp->tx_ring[entry].tx_buf_addr0),
sp->tx_skbuff[entry]->len);
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb);
sp->stats.rx_packets++;
-#if LINUX_VERSION_CODE > 0x20127
sp->stats.rx_bytes += pkt_len;
-#endif
}
entry = (++sp->cur_rx) % RX_RING_SIZE;
}
pci_unmap_single(sp->pdev,
sp->rx_ring_dma[i],
PKT_BUF_SZ + sizeof(struct RxFD));
-#if LINUX_VERSION_CODE < 0x20100
- skb->free = 1;
-#endif
dev_kfree_skb(skb);
}
}
long ioaddr = dev->base_addr;
struct tulip_private *tp = (struct tulip_private *)dev->priv;
- netif_stop_queue (dev);
-
/* Disable interrupts by clearing the interrupt mask. */
outl(0x00000000, ioaddr + CSR7);
/* Stop the chip's Tx and Rx processes. */
if (inl(ioaddr + CSR6) != 0xffffffff)
tp->stats.rx_missed_errors += inl(ioaddr + CSR8) & 0xffff;
- del_timer(&tp->timer);
-
dev->if_port = tp->saved_if_port;
}
printk(KERN_DEBUG "%s: Shutting down ethercard, status was %2.2x.\n",
dev->name, inl(ioaddr + CSR5));
+ netif_stop_queue(dev);
+
if (netif_device_present(dev))
tulip_down(dev);
+ del_timer(&tp->timer);
+
free_irq(dev->irq, dev);
/* Free all the skbuffs in the Rx queue. */
#define DEV_KFREE_SKB(skb) dev_kfree_skb(skb)
#define DEV_KFREE_SKB_IRQ(skb) dev_kfree_skb_irq(skb)
+#define DEV_KFREE_SKB_ANY(skb) dev_kfree_skb_any(skb)
/* function prototypes ******************************************************/
static void FreeResources(struct net_device *dev);
pTxd->pMBuf->len);
/* free message */
- if (in_irq())
- DEV_KFREE_SKB_IRQ(pTxd->pMBuf);
- else
- DEV_KFREE_SKB(pTxd->pMBuf); /* free message */
+ DEV_KFREE_SKB_ANY(pTxd->pMBuf);
pTxPort->TxdRingFree++;
pTxd->TBControl &= ~TX_CTRL_SOFTWARE;
pTxd = pTxd->pNextTxd; /* point behind fragment with EOF */
pFreeMbuf = pMbuf;
do {
pNextMbuf = pFreeMbuf->pNext;
- if (in_irq())
- DEV_KFREE_SKB_IRQ(pFreeMbuf->pOs);
- else
- DEV_KFREE_SKB(pFreeMbuf->pOs);
+ DEV_KFREE_SKB_ANY(pFreeMbuf->pOs);
pFreeMbuf = pNextMbuf;
} while ( pFreeMbuf != NULL );
} /* SkDrvFreeRlmtMbuf */
-/* $Id: sunbmac.c,v 1.16 2000/02/16 10:36:18 davem Exp $
+/* $Id: sunbmac.c,v 1.17 2000/02/17 18:29:04 davem Exp $
* sunbmac.c: Driver for Sparc BigMAC 100baseT ethernet adapters.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@redhat.com)
for (i = 0; i < RX_RING_SIZE; i++) {
if (bp->rx_skbs[i] != NULL) {
- if (in_irq())
- dev_kfree_skb_irq(bp->rx_skbs[i]);
- else
- dev_kfree_skb(bp->rx_skbs[i]);
+ dev_kfree_skb_any(bp->rx_skbs[i]);
bp->rx_skbs[i] = NULL;
}
}
for (i = 0; i < TX_RING_SIZE; i++) {
if (bp->tx_skbs[i] != NULL) {
- if (in_irq())
- dev_kfree_skb_irq(bp->tx_skbs[i]);
- else
- dev_kfree_skb(bp->tx_skbs[i]);
+ dev_kfree_skb_any(bp->tx_skbs[i]);
bp->tx_skbs[i] = NULL;
}
}
-/* $Id: sunhme.c,v 1.90 2000/02/16 10:36:16 davem Exp $
+/* $Id: sunhme.c,v 1.91 2000/02/17 18:29:02 davem Exp $
* sunhme.c: Sparc HME/BigMac 10/100baseT half/full duplex auto switching,
* auto carrier detecting ethernet driver. Also known as the
* "Happy Meal Ethernet" found on SunSwift SBUS cards.
rxd = &hp->happy_block->happy_meal_rxd[i];
dma_addr = hme_read_desc32(hp, &rxd->rx_addr);
hme_dma_unmap(hp, dma_addr, RX_BUF_ALLOC_SIZE);
- if (in_irq())
- dev_kfree_skb_irq(skb);
- else
- dev_kfree_skb(skb);
+ dev_kfree_skb_any(skb);
hp->rx_skbs[i] = NULL;
}
}
txd = &hp->happy_block->happy_meal_txd[i];
dma_addr = hme_read_desc32(hp, &txd->tx_addr);
hme_dma_unmap(hp, dma_addr, skb->len);
- if (in_irq())
- dev_kfree_skb_irq(skb);
- else
- dev_kfree_skb(skb);
+ dev_kfree_skb_any(skb);
hp->tx_skbs[i] = NULL;
}
}
if ( ! priv->phyOnline ) {
TLAN_DBG( TLAN_DEBUG_TX, "TRANSMIT: %s PHY is not ready\n", dev->name );
- if (in_irq())
- dev_kfree_skb_irq(skb);
- else
- dev_kfree_skb(skb);
+ dev_kfree_skb_any(skb);
return 0;
}
CIRC_INC( priv->txTail, TLAN_NUM_TX_LISTS );
- if ( bbuf ) {
- if (in_irq())
- dev_kfree_skb_irq(skb);
- else
- dev_kfree_skb(skb);
- }
+ if ( bbuf )
+ dev_kfree_skb_any(skb);
dev->trans_start = jiffies;
return 0;
list = priv->txList + i;
skb = (struct sk_buff *) list->buffer[9].address;
if ( skb ) {
- if (in_irq())
- dev_kfree_skb_irq( skb );
- else
- dev_kfree_skb( skb );
+ dev_kfree_skb_any( skb );
list->buffer[9].address = 0;
}
}
list = priv->rxList + i;
skb = (struct sk_buff *) list->buffer[9].address;
if ( skb ) {
- if (in_irq())
- dev_kfree_skb_irq( skb );
- else
- dev_kfree_skb( skb );
+ dev_kfree_skb_any( skb );
list->buffer[9].address = 0;
}
}
#error You must compile this driver with "-O".
#endif
-#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/modversions.h>
/* Configuration section **************************************************** */
/* Set the following macro to 1 to reload the ISP2x00's firmware. This is
- version 1.15.37 of the isp2100's firmware and version 2.00.16 of the
+ version 1.17.30 of the isp2100's firmware and version 2.00.40 of the
isp2200's firmware.
*/
/* #define TRACE_ISP 1 */
-#define DEFAULT_LOOP_COUNT 10000000
+#define DEFAULT_LOOP_COUNT 1000000000
/* End Configuration section ************************************************ */
#define QLOGICFC_MAX_LUN 128
#define QLOGICFC_MAX_LOOP_ID 0x7d
+/* The following connection options apply only to the 2200.  I have only
+ * had success with LOOP_ONLY and P2P_ONLY.
+ */
+
+#define LOOP_ONLY 0
+#define P2P_ONLY 1
+#define LOOP_PREFERED 2
+#define P2P_PREFERED 3
+
+#define CONNECTION_PREFERENCE LOOP_ONLY
+
/* adapter_state values */
#define AS_FIRMWARE_DEAD -1
#define AS_LOOP_DOWN 0
hostdata->queued = 0;
/* set up the control block */
hostdata->control_block.version = 0x1;
- hostdata->control_block.firm_opts = 0x000e;
+ hostdata->control_block.firm_opts = 0x800e;
hostdata->control_block.max_frame_len = 2048;
hostdata->control_block.max_iocb = QLOGICFC_REQ_QUEUE_LEN;
hostdata->control_block.exec_throttle = QLOGICFC_CMD_PER_LUN;
hostdata->control_block.req_queue_addr_lo = virt_to_bus_low32(hostdata->req);
hostdata->control_block.req_queue_addr_high = virt_to_bus_high32(hostdata->req);
+
+ hostdata->control_block.add_firm_opts |= CONNECTION_PREFERENCE<<4;
hostdata->adapter_state = AS_LOOP_DOWN;
hostdata->explore_timer.data = 1;
hostdata->host_id = hosts;
/*
- * acm.c Version 0.15
+ * acm.c Version 0.16
*
* Copyright (c) 1999 Armin Fuerst <fuerst@in.tum.de>
* Copyright (c) 1999 Pavel Machek <pavel@suse.cz>
* v0.13 - added termios, added hangup
* v0.14 - sized down struct acm
* v0.15 - fixed flow control again - characters could be lost
+ * v0.16 - added code for modems with swapped data and control interfaces
*/
/*
for (i = 0; i < dev->descriptor.bNumConfigurations; i++) {
cfacm = dev->config + i;
- dbg("probing config %d", cfacm->bConfigurationValue);
- ifcom = cfacm->interface[0].altsetting + 0;
- if (ifcom->bInterfaceClass != 2 || ifcom->bInterfaceSubClass != 2 ||
- ifcom->bInterfaceProtocol != 1 || ifcom->bNumEndpoints != 1)
- continue;
+ dbg("probing config %d", cfacm->bConfigurationValue);
- epctrl = ifcom->endpoint + 0;
- if ((epctrl->bEndpointAddress & 0x80) != 0x80 || (epctrl->bmAttributes & 3) != 3)
+ if (cfacm->bNumInterfaces != 2 ||
+ usb_interface_claimed(cfacm->interface + 0) ||
+ usb_interface_claimed(cfacm->interface + 1))
continue;
+ ifcom = cfacm->interface[0].altsetting + 0;
ifdata = cfacm->interface[1].altsetting + 0;
- if (ifdata->bInterfaceClass != 10 || ifdata->bNumEndpoints != 2)
- continue;
- if (usb_interface_claimed(cfacm->interface + 0) ||
- usb_interface_claimed(cfacm->interface + 1))
+ if (ifdata->bInterfaceClass != 10 || ifdata->bNumEndpoints != 2) {
+ ifcom = cfacm->interface[1].altsetting + 0;
+ ifdata = cfacm->interface[0].altsetting + 0;
+ if (ifdata->bInterfaceClass != 10 || ifdata->bNumEndpoints != 2)
+ continue;
+ }
+
+ if (ifcom->bInterfaceClass != 2 || ifcom->bInterfaceSubClass != 2 ||
+ ifcom->bInterfaceProtocol != 1 || ifcom->bNumEndpoints != 1)
continue;
+ epctrl = ifcom->endpoint + 0;
epread = ifdata->endpoint + 0;
epwrite = ifdata->endpoint + 1;
- if ((epread->bmAttributes & 3) != 2 || (epwrite->bmAttributes & 3) != 2 ||
+ if ((epctrl->bEndpointAddress & 0x80) != 0x80 || (epctrl->bmAttributes & 3) != 3 ||
+ (epread->bmAttributes & 3) != 2 || (epwrite->bmAttributes & 3) != 2 ||
((epread->bEndpointAddress & 0x80) ^ (epwrite->bEndpointAddress & 0x80)) != 0x80)
continue;
}
break;
- case HID_UP_HOTKEY:
+ case HID_UP_CONSUMER: /* USB HUT v1.1, pages 56-62 */
switch (usage->hid & HID_USAGE) {
- case 0x0034: usage->code = KEY_PHONE; break;
- case 0x0036: usage->code = KEY_NOTEPAD; break;
- case 0x008a: usage->code = KEY_MAIL; break;
- case 0x0095: usage->code = KEY_CALENDAR; break;
- case 0x00b7: usage->code = KEY_PRINT; break;
- case 0x00b8: usage->code = KEY_HELP; break;
- case 0x00cd: usage->code = KEY_SOUND; break;
- case 0x00e2: usage->code = KEY_PROG1; break;
- case 0x00e9: usage->code = KEY_PROG2; break;
- case 0x00ea: usage->code = KEY_PROG3; break;
- case 0x018a: usage->code = KEY_WWW; break;
- case 0x0223: usage->code = KEY_FULLSCREEN; break;
- default: usage->code = KEY_UNKNOWN; break;
+ case 0x034: usage->code = KEY_SLEEP; break;
+ case 0x036: usage->code = BTN_MISC; break;
+ case 0x08a: usage->code = KEY_WWW; break;
+ case 0x095: usage->code = KEY_HELP; break;
+
+ case 0x0b4: usage->code = KEY_REWIND; break;
+ case 0x0b5: usage->code = KEY_NEXTSONG; break;
+ case 0x0b6: usage->code = KEY_PREVIOUSSONG; break;
+ case 0x0b7: usage->code = KEY_STOPCD; break;
+ case 0x0b8: usage->code = KEY_EJECTCD; break;
+ case 0x0cd: usage->code = KEY_PLAYPAUSE; break;
+
+ case 0x0e2: usage->code = KEY_MUTE; break;
+ case 0x0e9: usage->code = KEY_VOLUMEUP; break;
+ case 0x0ea: usage->code = KEY_VOLUMEDOWN; break;
+
+ case 0x183: usage->code = KEY_CONFIG; break;
+ case 0x18a: usage->code = KEY_MAIL; break;
+ case 0x192: usage->code = KEY_CALC; break;
+ case 0x194: usage->code = KEY_FILE; break;
+
+ case 0x21a: usage->code = KEY_UNDO; break;
+ case 0x21b: usage->code = KEY_COPY; break;
+ case 0x21c: usage->code = KEY_CUT; break;
+ case 0x21d: usage->code = KEY_PASTE; break;
+
+ case 0x221: usage->code = KEY_FIND; break;
+ case 0x223: usage->code = KEY_HOMEPAGE; break;
+ case 0x224: usage->code = KEY_BACK; break;
+ case 0x225: usage->code = KEY_FORWARD; break;
+ case 0x226: usage->code = KEY_STOP; break;
+ case 0x227: usage->code = KEY_REFRESH; break;
+ case 0x22a: usage->code = KEY_BOOKMARKS; break;
+
+ default: usage->code = KEY_UNKNOWN; break;
}
#define HID_UP_KEYBOARD 0x00070000
#define HID_UP_LED 0x00080000
#define HID_UP_BUTTON 0x00090000
-#define HID_UP_HOTKEY 0x000c0000
+#define HID_UP_CONSUMER 0x000c0000
#define HID_UP_DIGITIZER 0x000d0000
#define HID_UP_PID 0x000f0000
 * This is the local environment. It persists up to the next main-item.
*/
-#define MAX_USAGES 256
+#define MAX_USAGES 512
struct hid_local {
unsigned usage[MAX_USAGES]; /* usage array */
tristate 'Minix fs support' CONFIG_MINIX_FS
tristate 'NTFS filesystem support (read only)' CONFIG_NTFS_FS
-dep_bool ' NTFS write support (DANGEROUS)' CONFIG_NTFS_RW $CONFIG_NTFS_FS $CONFIG_EXPERIMENTAL
+if [ "$CONFIG_NTFS_FS" != "n" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ bool ' NTFS write support (DANGEROUS)' CONFIG_NTFS_RW
+fi
tristate 'OS/2 HPFS filesystem support' CONFIG_HPFS_FS
bool '/proc filesystem support' CONFIG_PROC_FS
-if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
- bool '/dev filesystem support (EXPERIMENTAL)' CONFIG_DEVFS_FS
- if [ "$CONFIG_DEVFS_FS" = "y" ]; then
- bool ' Debug devfs' CONFIG_DEVFS_DEBUG
- fi
-fi
+dep_bool '/dev filesystem support (EXPERIMENTAL)' CONFIG_DEVFS_FS $CONFIG_EXPERIMENTAL
+dep_bool ' Debug devfs' CONFIG_DEVFS_DEBUG $CONFIG_DEVFS_FS
# It compiles as a module for testing only. It should not be used
# as a module in general. If we make this "tristate", a bunch of people
dep_bool '/dev/pts filesystem for Unix98 PTYs' CONFIG_DEVPTS_FS $CONFIG_UNIX98_PTYS
dep_tristate 'QNX4 filesystem support (read only) (EXPERIMENTAL)' CONFIG_QNX4FS_FS $CONFIG_EXPERIMENTAL
-dep_bool ' QNX4FS write support (DANGEROUS)' CONFIG_QNX4FS_RW $CONFIG_QNX4FS_FS
+if [ "$CONFIG_QNX4FS_FS" != "n" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ bool ' QNX4FS write support (DANGEROUS)' CONFIG_QNX4FS_RW
+fi
tristate 'ROM filesystem support' CONFIG_ROMFS_FS
tristate 'Second extended fs support' CONFIG_EXT2_FS
tristate 'System V and Coherent filesystem support' CONFIG_SYSV_FS
-dep_bool ' SYSV filesystem write support (DANGEROUS)' CONFIG_SYSV_FS_WRITE $CONFIG_SYSV_FS $CONFIG_EXPERIMENTAL
+if [ "$CONFIG_SYSV_FS" != "n" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ bool ' SYSV filesystem write support (DANGEROUS)' CONFIG_SYSV_FS_WRITE
+fi
tristate 'UDF filesystem support (read only)' CONFIG_UDF_FS
if [ "$CONFIG_UDF_FS" != "n" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then
- bool ' UDF write support (DANGEROUS)' CONFIG_UDF_RW
+ bool ' UDF write support (DANGEROUS)' CONFIG_UDF_RW
fi
tristate 'UFS filesystem support (read only)' CONFIG_UFS_FS
-dep_bool ' UFS filesystem write support (DANGEROUS)' CONFIG_UFS_FS_WRITE $CONFIG_UFS_FS $CONFIG_EXPERIMENTAL
-
+if [ "$CONFIG_UFS_FS" != "n" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ bool ' UFS filesystem write support (DANGEROUS)' CONFIG_UFS_FS_WRITE
+fi
if [ "$CONFIG_NET" = "y" ]; then
dep_bool ' Root file system on NFS' CONFIG_ROOT_NFS $CONFIG_NFS_FS $CONFIG_IP_PNP
tristate 'NFS server support' CONFIG_NFSD
- dep_bool ' Provide NFSv3 server support (EXPERIMENTAL)' CONFIG_NFSD_V3 $CONFIG_NFSD $CONFIG_EXPERIMENTAL
+  if [ "$CONFIG_NFSD" != "n" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ bool ' Provide NFSv3 server support (EXPERIMENTAL)' CONFIG_NFSD_V3
+ fi
if [ "$CONFIG_NFS_FS" = "y" -o "$CONFIG_NFSD" = "y" ]; then
define_tristate CONFIG_SUNRPC y
define_tristate CONFIG_LOCKD n
fi
fi
+  if [ "$CONFIG_NFSD_V3" = "y" ]; then
+ define_bool CONFIG_LOCKD_V4 y
+ fi
tristate 'SMB filesystem support (to mount WfW shares etc.)' CONFIG_SMB_FS
fi
if [ "$CONFIG_IPX" != "n" -o "$CONFIG_INET" != "n" ]; then
static int do_load_script(struct linux_binprm *bprm,struct pt_regs *regs)
{
- char *cp, *i_name, *i_name_start, *i_arg;
+ char *cp, *i_name, *i_arg;
struct dentry * dentry;
char interp[128];
int retval;
for (cp = bprm->buf+2; (*cp == ' ') || (*cp == '\t'); cp++);
if (*cp == '\0')
return -ENOEXEC; /* No interpreter name found */
- i_name_start = i_name = cp;
+ i_name = cp;
i_arg = 0;
- for ( ; *cp && (*cp != ' ') && (*cp != '\t'); cp++) {
- if (*cp == '/')
- i_name = cp+1;
- }
+ for ( ; *cp && (*cp != ' ') && (*cp != '\t'); cp++)
+		/* nothing */ ;
while ((*cp == ' ') || (*cp == '\t'))
*cp++ = '\0';
if (*cp)
i_arg = cp;
- strcpy (interp, i_name_start);
+ strcpy (interp, i_name);
/*
* OK, we've parsed out the interpreter name and
* (optional) argument.
O_TARGET := lockd.o
O_OBJS := clntlock.o clntproc.o host.o svc.o svclock.o svcshare.o \
svcproc.o svcsubs.o mon.o xdr.o
+
+ifdef CONFIG_LOCKD_V4
+ O_OBJS += xdr4.o svc4proc.o
+endif
+
OX_OBJS := lockd_syms.o
M_OBJS := $(O_TARGET)
EXPORT_SYMBOL(nlmsvc_grace_period);
EXPORT_SYMBOL(nlmsvc_timeout);
+#ifdef CONFIG_LOCKD_V4
+
+/* NLM4 exported symbols */
+EXPORT_SYMBOL(nlm4_rofs);
+EXPORT_SYMBOL(nlm4_stale_fh);
+EXPORT_SYMBOL(nlm4_deadlock);
+EXPORT_SYMBOL(nlm4_failed);
+EXPORT_SYMBOL(nlm4_fbig);
+
+#endif
+
#endif /* CONFIG_MODULES */
static struct svc_version nlmsvc_version3 = {
3, 24, nlmsvc_procedures, NULL
};
-#ifdef CONFIG_NFSD_NFS3
+#ifdef CONFIG_LOCKD_V4
static struct svc_version nlmsvc_version4 = {
4, 24, nlmsvc_procedures4, NULL
};
&nlmsvc_version1,
NULL,
&nlmsvc_version3,
-#ifdef CONFIG_NFSD_NFS3
+#ifdef CONFIG_LOCKD_V4
&nlmsvc_version4,
#endif
};
--- /dev/null
+/*
+ * linux/fs/lockd/svc4proc.c
+ *
+ * Lockd server procedures. We don't implement the NLM_*_RES
+ * procedures because we don't use the async procedures.
+ *
+ * Copyright (C) 1996, Olaf Kirch <okir@monad.swb.de>
+ */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/malloc.h>
+#include <linux/in.h>
+#include <linux/sunrpc/svc.h>
+#include <linux/sunrpc/clnt.h>
+#include <linux/nfsd/nfsd.h>
+#include <linux/lockd/lockd.h>
+#include <linux/lockd/share.h>
+#include <linux/lockd/sm_inter.h>
+
+
+#define NLMDBG_FACILITY NLMDBG_CLIENT
+
+static u32 nlm4svc_callback(struct svc_rqst *, u32, struct nlm_res *);
+static void nlm4svc_callback_exit(struct rpc_task *);
+
+/*
+ * Obtain client and file from arguments
+ */
+static u32
+nlm4svc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_host **hostp, struct nlm_file **filp)
+{
+ struct nlm_host *host = NULL;
+ struct nlm_file *file = NULL;
+ struct nlm_lock *lock = &argp->lock;
+ u32 error = 0;
+
+ /* nfsd callbacks must have been installed for this procedure */
+ if (!nlmsvc_ops)
+ return nlm_lck_denied_nolocks;
+
+ /* Obtain handle for client host */
+ if (rqstp->rq_client == NULL) {
+ printk(KERN_NOTICE
+ "lockd: unauthenticated request from (%08x:%d)\n",
+ ntohl(rqstp->rq_addr.sin_addr.s_addr),
+ ntohs(rqstp->rq_addr.sin_port));
+ return nlm_lck_denied_nolocks;
+ }
+
+ /* Obtain host handle */
+ if (!(host = nlmsvc_lookup_host(rqstp))
+ || (argp->monitor && !host->h_monitored && nsm_monitor(host) < 0))
+ goto no_locks;
+ *hostp = host;
+
+ /* Obtain file pointer. Not used by FREE_ALL call. */
+ if (filp != NULL) {
+ if ((error = nlm_lookup_file(rqstp, &file, &lock->fh)) != 0)
+ goto no_locks;
+ *filp = file;
+
+ /* Set up the missing parts of the file_lock structure */
+ lock->fl.fl_file = &file->f_file;
+ lock->fl.fl_owner = (fl_owner_t) host;
+ }
+
+ return 0;
+
+no_locks:
+ if (host)
+ nlm_release_host(host);
+ if (error)
+ return error;
+ return nlm_lck_denied_nolocks;
+}
+
+/*
+ * NULL: Test for presence of service
+ */
+static int
+nlm4svc_proc_null(struct svc_rqst *rqstp, void *argp, void *resp)
+{
+ dprintk("lockd: NULL called\n");
+ return rpc_success;
+}
+
+/*
+ * TEST: Check for conflicting lock
+ */
+static int
+nlm4svc_proc_test(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_res *resp)
+{
+ struct nlm_host *host;
+ struct nlm_file *file;
+
+ dprintk("lockd: TEST4 called\n");
+ resp->cookie = argp->cookie;
+
+ /* Don't accept test requests during grace period */
+ if (nlmsvc_grace_period) {
+ resp->status = nlm_lck_denied_grace_period;
+ return rpc_success;
+ }
+
+ /* Obtain client and file */
+ if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file)))
+ return rpc_success;
+
+ /* Now check for conflicting locks */
+ resp->status = nlmsvc_testlock(file, &argp->lock, &resp->lock);
+
+ dprintk("lockd: TEST4 status %d\n", ntohl(resp->status));
+ nlm_release_host(host);
+ nlm_release_file(file);
+ return rpc_success;
+}
+
+static int
+nlm4svc_proc_lock(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_res *resp)
+{
+ struct nlm_host *host;
+ struct nlm_file *file;
+
+ dprintk("lockd: LOCK called\n");
+
+ resp->cookie = argp->cookie;
+
+ /* Don't accept new lock requests during grace period */
+ if (nlmsvc_grace_period && !argp->reclaim) {
+ resp->status = nlm_lck_denied_grace_period;
+ return rpc_success;
+ }
+
+ /* Obtain client and file */
+ if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file)))
+ return rpc_success;
+
+#if 0
+ /* If supplied state doesn't match current state, we assume it's
+ * an old request that time-warped somehow. Any error return would
+ * do in this case because it's irrelevant anyway.
+ *
+ * NB: We don't retrieve the remote host's state yet.
+ */
+ if (host->h_nsmstate && host->h_nsmstate != argp->state) {
+ resp->status = nlm_lck_denied_nolocks;
+ } else
+#endif
+
+ /* Now try to lock the file */
+ resp->status = nlmsvc_lock(rqstp, file, &argp->lock,
+ argp->block, &argp->cookie);
+
+ dprintk("lockd: LOCK status %d\n", ntohl(resp->status));
+ nlm_release_host(host);
+ nlm_release_file(file);
+ return rpc_success;
+}
+
+static int
+nlm4svc_proc_cancel(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_res *resp)
+{
+ struct nlm_host *host;
+ struct nlm_file *file;
+
+ dprintk("lockd: CANCEL called\n");
+
+ resp->cookie = argp->cookie;
+
+ /* Don't accept requests during grace period */
+ if (nlmsvc_grace_period) {
+ resp->status = nlm_lck_denied_grace_period;
+ return rpc_success;
+ }
+
+ /* Obtain client and file */
+ if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file)))
+ return rpc_success;
+
+ /* Try to cancel request. */
+ resp->status = nlmsvc_cancel_blocked(file, &argp->lock);
+
+ dprintk("lockd: CANCEL status %d\n", ntohl(resp->status));
+ nlm_release_host(host);
+ nlm_release_file(file);
+ return rpc_success;
+}
+
+/*
+ * UNLOCK: release a lock
+ */
+static int
+nlm4svc_proc_unlock(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_res *resp)
+{
+ struct nlm_host *host;
+ struct nlm_file *file;
+
+ dprintk("lockd: UNLOCK called\n");
+
+ resp->cookie = argp->cookie;
+
+ /* Don't accept new lock requests during grace period */
+ if (nlmsvc_grace_period) {
+ resp->status = nlm_lck_denied_grace_period;
+ return rpc_success;
+ }
+
+ /* Obtain client and file */
+ if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file)))
+ return rpc_success;
+
+ /* Now try to remove the lock */
+ resp->status = nlmsvc_unlock(file, &argp->lock);
+
+ dprintk("lockd: UNLOCK status %d\n", ntohl(resp->status));
+ nlm_release_host(host);
+ nlm_release_file(file);
+ return rpc_success;
+}
+
+/*
+ * GRANTED: A server calls us to tell us that a process's lock request
+ * was granted
+ */
+static int
+nlm4svc_proc_granted(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_res *resp)
+{
+ resp->cookie = argp->cookie;
+
+ dprintk("lockd: GRANTED called\n");
+ resp->status = nlmclnt_grant(&argp->lock);
+ dprintk("lockd: GRANTED status %d\n", ntohl(resp->status));
+ return rpc_success;
+}
+
+/*
+ * `Async' versions of the above service routines. They aren't really,
+ * because we send the callback before the reply proper. I hope this
+ * doesn't break any clients.
+ */
+static int
+nlm4svc_proc_test_msg(struct svc_rqst *rqstp, struct nlm_args *argp,
+ void *resp)
+{
+ struct nlm_res res;
+ u32 stat;
+
+ dprintk("lockd: TEST_MSG called\n");
+
+ if ((stat = nlm4svc_proc_test(rqstp, argp, &res)) == 0)
+ stat = nlm4svc_callback(rqstp, NLMPROC_TEST_RES, &res);
+ return stat;
+}
+
+static int
+nlm4svc_proc_lock_msg(struct svc_rqst *rqstp, struct nlm_args *argp,
+ void *resp)
+{
+ struct nlm_res res;
+ u32 stat;
+
+ dprintk("lockd: LOCK_MSG called\n");
+
+ if ((stat = nlm4svc_proc_lock(rqstp, argp, &res)) == 0)
+ stat = nlm4svc_callback(rqstp, NLMPROC_LOCK_RES, &res);
+ return stat;
+}
+
+static int
+nlm4svc_proc_cancel_msg(struct svc_rqst *rqstp, struct nlm_args *argp,
+ void *resp)
+{
+ struct nlm_res res;
+ u32 stat;
+
+ dprintk("lockd: CANCEL_MSG called\n");
+
+ if ((stat = nlm4svc_proc_cancel(rqstp, argp, &res)) == 0)
+ stat = nlm4svc_callback(rqstp, NLMPROC_CANCEL_RES, &res);
+ return stat;
+}
+
+static int
+nlm4svc_proc_unlock_msg(struct svc_rqst *rqstp, struct nlm_args *argp,
+ void *resp)
+{
+ struct nlm_res res;
+ u32 stat;
+
+ dprintk("lockd: UNLOCK_MSG called\n");
+
+ if ((stat = nlm4svc_proc_unlock(rqstp, argp, &res)) == 0)
+ stat = nlm4svc_callback(rqstp, NLMPROC_UNLOCK_RES, &res);
+ return stat;
+}
+
+static int
+nlm4svc_proc_granted_msg(struct svc_rqst *rqstp, struct nlm_args *argp,
+ void *resp)
+{
+ struct nlm_res res;
+ u32 stat;
+
+ dprintk("lockd: GRANTED_MSG called\n");
+
+ if ((stat = nlm4svc_proc_granted(rqstp, argp, &res)) == 0)
+ stat = nlm4svc_callback(rqstp, NLMPROC_GRANTED_RES, &res);
+ return stat;
+}
+
+/*
+ * SHARE: create a DOS share or alter existing share.
+ */
+static int
+nlm4svc_proc_share(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_res *resp)
+{
+ struct nlm_host *host;
+ struct nlm_file *file;
+
+ dprintk("lockd: SHARE called\n");
+
+ resp->cookie = argp->cookie;
+
+ /* Don't accept new lock requests during grace period */
+ if (nlmsvc_grace_period && !argp->reclaim) {
+ resp->status = nlm_lck_denied_grace_period;
+ return rpc_success;
+ }
+
+ /* Obtain client and file */
+ if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file)))
+ return rpc_success;
+
+ /* Now try to create the share */
+ resp->status = nlmsvc_share_file(host, file, argp);
+
+ dprintk("lockd: SHARE status %d\n", ntohl(resp->status));
+ nlm_release_host(host);
+ nlm_release_file(file);
+ return rpc_success;
+}
+
+/*
+ * UNSHARE: Release a DOS share.
+ */
+static int
+nlm4svc_proc_unshare(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_res *resp)
+{
+ struct nlm_host *host;
+ struct nlm_file *file;
+
+ dprintk("lockd: UNSHARE called\n");
+
+ resp->cookie = argp->cookie;
+
+ /* Don't accept requests during grace period */
+ if (nlmsvc_grace_period) {
+ resp->status = nlm_lck_denied_grace_period;
+ return rpc_success;
+ }
+
+ /* Obtain client and file */
+ if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file)))
+ return rpc_success;
+
+ /* Now try to lock the file */
+ resp->status = nlmsvc_unshare_file(host, file, argp);
+
+ dprintk("lockd: UNSHARE status %d\n", ntohl(resp->status));
+ nlm_release_host(host);
+ nlm_release_file(file);
+ return rpc_success;
+}
+
+/*
+ * NM_LOCK: Create an unmonitored lock
+ */
+static int
+nlm4svc_proc_nm_lock(struct svc_rqst *rqstp, struct nlm_args *argp,
+ struct nlm_res *resp)
+{
+ dprintk("lockd: NM_LOCK called\n");
+
+	argp->monitor = 0;		/* just clear the monitor flag */
+ return nlm4svc_proc_lock(rqstp, argp, resp);
+}
+
+/*
+ * FREE_ALL: Release all locks and shares held by client
+ */
+static int
+nlm4svc_proc_free_all(struct svc_rqst *rqstp, struct nlm_args *argp,
+ void *resp)
+{
+ struct nlm_host *host;
+
+ /* Obtain client */
+ if (nlm4svc_retrieve_args(rqstp, argp, &host, NULL))
+ return rpc_success;
+
+ nlmsvc_free_host_resources(host);
+ nlm_release_host(host);
+ return rpc_success;
+}
+
+/*
+ * SM_NOTIFY: private callback from statd (not part of official NLM proto)
+ */
+static int
+nlm4svc_proc_sm_notify(struct svc_rqst *rqstp, struct nlm_reboot *argp,
+ void *resp)
+{
+ struct sockaddr_in saddr = rqstp->rq_addr;
+ struct nlm_host *host;
+
+ dprintk("lockd: SM_NOTIFY called\n");
+ if (saddr.sin_addr.s_addr != htonl(INADDR_LOOPBACK)
+ || ntohs(saddr.sin_port) >= 1024) {
+ printk(KERN_WARNING
+ "lockd: rejected NSM callback from %08x:%d\n",
+ ntohl(rqstp->rq_addr.sin_addr.s_addr),
+ ntohs(rqstp->rq_addr.sin_port));
+ return rpc_system_err;
+ }
+
+ /* Obtain the host pointer for this NFS server and try to
+ * reclaim all locks we hold on this server.
+ */
+ saddr.sin_addr.s_addr = argp->addr;
+ if ((host = nlm_lookup_host(NULL, &saddr, IPPROTO_UDP, 1)) != NULL) {
+ nlmclnt_recovery(host, argp->state);
+ nlm_release_host(host);
+ }
+
+ /* If we run on an NFS server, delete all locks held by the client */
+ if (nlmsvc_ops != NULL) {
+ struct svc_client *clnt;
+ saddr.sin_addr.s_addr = argp->addr;
+		if ((clnt = nlmsvc_ops->exp_getclient(&saddr)) != NULL
+		 && (host = nlm_lookup_host(clnt, &saddr, 0, 0)) != NULL) {
+			nlmsvc_free_host_resources(host);
+			nlm_release_host(host);
+		}
+ }
+
+ return rpc_success;
+}
+
+/*
+ * This is the generic lockd callback for async RPC calls
+ */
+static u32
+nlm4svc_callback(struct svc_rqst *rqstp, u32 proc, struct nlm_res *resp)
+{
+ struct nlm_host *host;
+ struct nlm_rqst *call;
+
+ if (!(call = nlmclnt_alloc_call()))
+ return rpc_system_err;
+
+ host = nlmclnt_lookup_host(&rqstp->rq_addr,
+ rqstp->rq_prot, rqstp->rq_vers);
+ if (!host) {
+ rpc_free(call);
+ return rpc_system_err;
+ }
+
+ call->a_flags = RPC_TASK_ASYNC;
+ call->a_host = host;
+ memcpy(&call->a_args, resp, sizeof(*resp));
+
+	/* FIXME: this should become nlmsvc_async_call when that code
+	 * gets merged in.  XXX */
+ if (nlmclnt_async_call(call, proc, nlm4svc_callback_exit) < 0)
+ return rpc_system_err;
+
+ return rpc_success;
+}
+
+static void
+nlm4svc_callback_exit(struct rpc_task *task)
+{
+ struct nlm_rqst *call = (struct nlm_rqst *) task->tk_calldata;
+
+ if (task->tk_status < 0) {
+ dprintk("lockd: %4d callback failed (errno = %d)\n",
+ task->tk_pid, -task->tk_status);
+ }
+ nlm_release_host(call->a_host);
+ rpc_free(call);
+}
+
+/*
+ * NLM Server procedures.
+ */
+
+#define nlm4svc_encode_norep nlm4svc_encode_void
+#define nlm4svc_decode_norep nlm4svc_decode_void
+#define nlm4svc_decode_testres nlm4svc_decode_void
+#define nlm4svc_decode_lockres nlm4svc_decode_void
+#define nlm4svc_decode_unlockres nlm4svc_decode_void
+#define nlm4svc_decode_cancelres nlm4svc_decode_void
+#define nlm4svc_decode_grantedres nlm4svc_decode_void
+
+#define nlm4svc_proc_none nlm4svc_proc_null
+#define nlm4svc_proc_test_res nlm4svc_proc_null
+#define nlm4svc_proc_lock_res nlm4svc_proc_null
+#define nlm4svc_proc_cancel_res nlm4svc_proc_null
+#define nlm4svc_proc_unlock_res nlm4svc_proc_null
+#define nlm4svc_proc_granted_res nlm4svc_proc_null
+
+struct nlm_void { int dummy; };
+
+#define PROC(name, xargt, xrest, argt, rest) \
+ { (svc_procfunc) nlm4svc_proc_##name, \
+ (kxdrproc_t) nlm4svc_decode_##xargt, \
+ (kxdrproc_t) nlm4svc_encode_##xrest, \
+ NULL, \
+ sizeof(struct nlm_##argt), \
+ sizeof(struct nlm_##rest), \
+ 0, \
+ 0 \
+ }
+struct svc_procedure nlmsvc_procedures4[] = {
+ PROC(null, void, void, void, void),
+ PROC(test, testargs, testres, args, res),
+ PROC(lock, lockargs, res, args, res),
+ PROC(cancel, cancargs, res, args, res),
+ PROC(unlock, unlockargs, res, args, res),
+ PROC(granted, testargs, res, args, res),
+ PROC(test_msg, testargs, norep, args, void),
+ PROC(lock_msg, lockargs, norep, args, void),
+ PROC(cancel_msg, cancargs, norep, args, void),
+ PROC(unlock_msg, unlockargs, norep, args, void),
+ PROC(granted_msg, testargs, norep, args, void),
+ PROC(test_res, testres, norep, res, void),
+ PROC(lock_res, lockres, norep, res, void),
+ PROC(cancel_res, cancelres, norep, res, void),
+ PROC(unlock_res, unlockres, norep, res, void),
+ PROC(granted_res, grantedres, norep, res, void),
+ PROC(none, void, void, void, void),
+ PROC(none, void, void, void, void),
+ PROC(none, void, void, void, void),
+ PROC(none, void, void, void, void),
+ PROC(share, shareargs, shareres, args, res),
+ PROC(unshare, shareargs, shareres, args, res),
+ PROC(nm_lock, lockargs, res, args, res),
+ PROC(free_all, notify, void, args, void),
+
+ /* statd callback */
+ PROC(sm_notify, reboot, void, reboot, void),
+};
lock->fl.fl_end, lock->fl.fl_type);
for (head = &nlm_blocked; (block = *head); head = &block->b_next) {
fl = &block->b_call.a_args.lock.fl;
- dprintk(" check f=%p pd=%d %ld-%ld ty=%d\n",
+ dprintk("lockd: check f=%p pd=%d %ld-%ld ty=%d cookie=%x\n",
block->b_file, fl->fl_pid, fl->fl_start,
- fl->fl_end, fl->fl_type);
+ fl->fl_end, fl->fl_type,
+ *(unsigned int*)(block->b_call.a_args.cookie.data));
if (block->b_file == file && nlm_compare_locks(fl, &lock->fl)) {
if (remove)
*head = block->b_next;
struct nlm_block *block;
for (block = nlm_blocked; block; block = block->b_next) {
+ dprintk("cookie: head of blocked queue %p, block %p\n",
+ nlm_blocked, block);
if (nlm_cookie_match(&block->b_call.a_args.cookie,cookie))
break;
}
switch(-error) {
case 0:
return nlm_granted;
- case EDEADLK: /* no applicable NLM status */
+ case EDEADLK:
+#ifdef CONFIG_LOCKD_V4
+		return nlm4_deadlock; /* cast_to_nlm() downgrades this to
+				       * nlm_lck_denied_nolocks for NLMv1/v3
+				       * requests */
+#else
+ /* no applicable NLM status */
+#endif
case EAGAIN:
return nlm_lck_denied;
default: /* includes ENOLCK */
unsigned long timeout;
dprintk("lockd: GRANT_MSG RPC callback\n");
+	dprintk("callback: looking for cookie %x\n",
+ *(unsigned int *)(call->a_args.cookie.data));
if (!(block = nlmsvc_find_block(&call->a_args.cookie))) {
dprintk("lockd: no block for cookie %x\n", *(u32 *)(call->a_args.cookie.data));
return;
dprintk("nlmsvc_retry_blocked(%p, when=%ld)\n",
nlm_blocked,
nlm_blocked? nlm_blocked->b_when : 0);
- while ((block = nlm_blocked) && block->b_when < jiffies) {
+ while ((block = nlm_blocked) && block->b_when <= jiffies) {
dprintk("nlmsvc_retry_blocked(%p, when=%ld, done=%d)\n",
block, block->b_when, block->b_done);
if (block->b_done)
static u32 nlmsvc_callback(struct svc_rqst *, u32, struct nlm_res *);
static void nlmsvc_callback_exit(struct rpc_task *);
+#ifdef CONFIG_LOCKD_V4
+static u32
+cast_to_nlm(u32 status, u32 vers)
+{
+
+	if (vers != 4) {
+		switch (ntohl(status)) {
+ case NLM_LCK_GRANTED:
+ case NLM_LCK_DENIED:
+ case NLM_LCK_DENIED_NOLOCKS:
+ case NLM_LCK_BLOCKED:
+ case NLM_LCK_DENIED_GRACE_PERIOD:
+ break;
+ default:
+ status = NLM_LCK_DENIED_NOLOCKS;
+ }
+ }
+
+ return (status);
+}
+#define cast_status(status) (cast_to_nlm(status, rqstp->rq_vers))
+#else
+#define cast_status(status) (status)
+#endif
+
/*
* Obtain client and file from arguments
*/
return rpc_success;
/* Now check for conflicting locks */
- resp->status = nlmsvc_testlock(file, &argp->lock, &resp->lock);
+ resp->status = cast_status(nlmsvc_testlock(file, &argp->lock, &resp->lock));
- dprintk("lockd: TEST status %d\n", ntohl(resp->status));
+ dprintk("lockd: TEST status %d vers %d\n",
+ ntohl(resp->status), rqstp->rq_vers);
nlm_release_host(host);
nlm_release_file(file);
return rpc_success;
#endif
/* Now try to lock the file */
- resp->status = nlmsvc_lock(rqstp, file, &argp->lock,
- argp->block, &argp->cookie);
+ resp->status = cast_status(nlmsvc_lock(rqstp, file, &argp->lock,
+ argp->block, &argp->cookie));
dprintk("lockd: LOCK status %d\n", ntohl(resp->status));
nlm_release_host(host);
return rpc_success;
/* Try to cancel request. */
- resp->status = nlmsvc_cancel_blocked(file, &argp->lock);
+ resp->status = cast_status(nlmsvc_cancel_blocked(file, &argp->lock));
dprintk("lockd: CANCEL status %d\n", ntohl(resp->status));
nlm_release_host(host);
return rpc_success;
/* Now try to remove the lock */
- resp->status = nlmsvc_unlock(file, &argp->lock);
+ resp->status = cast_status(nlmsvc_unlock(file, &argp->lock));
dprintk("lockd: UNLOCK status %d\n", ntohl(resp->status));
nlm_release_host(host);
return rpc_success;
/* Now try to create the share */
- resp->status = nlmsvc_share_file(host, file, argp);
+ resp->status = cast_status(nlmsvc_share_file(host, file, argp));
dprintk("lockd: SHARE status %d\n", ntohl(resp->status));
nlm_release_host(host);
if ((resp->status = nlmsvc_retrieve_args(rqstp, argp, &host, &file)))
return rpc_success;
- /* Now try to lock the file */
- resp->status = nlmsvc_unshare_file(host, file, argp);
+ /* Now try to unshare the file */
+ resp->status = cast_status(nlmsvc_unshare_file(host, file, argp));
dprintk("lockd: UNSHARE status %d\n", ntohl(resp->status));
nlm_release_host(host);
out_free:
kfree(file);
+#ifdef CONFIG_LOCKD_V4
+ if (nfserr == 1)
+ nfserr = nlm4_stale_fh;
+ else
+#endif
nfserr = nlm_lck_denied;
goto out_unlock;
}
nlm_lck_blocked = htonl(NLM_LCK_BLOCKED);
nlm_lck_denied_grace_period = htonl(NLM_LCK_DENIED_GRACE_PERIOD);
+#ifdef CONFIG_LOCKD_V4
+ nlm4_deadlock = htonl(NLM_DEADLCK);
+ nlm4_rofs = htonl(NLM_ROFS);
+ nlm4_stale_fh = htonl(NLM_STALE_FH);
+ nlm4_fbig = htonl(NLM_FBIG);
+ nlm4_failed = htonl(NLM_FAILED);
+#endif
+
inited = 1;
}
{ "nlm_" #proc, \
(kxdrproc_t) nlmclt_encode_##argtype, \
(kxdrproc_t) nlmclt_decode_##restype, \
- MAX(NLM_##argtype##_sz, NLM_##restype##_sz) << 2 \
+ MAX(NLM_##argtype##_sz, NLM_##restype##_sz) << 2, \
+ 0 \
}
static struct rpc_procinfo nlm_procedures[] = {
--- /dev/null
+/*
+ * linux/fs/lockd/xdr4.c
+ *
+ * XDR support for lockd and the lock client.
+ *
+ * Copyright (C) 1995, 1996 Olaf Kirch <okir@monad.swb.de>
+ * Copyright (C) 1999, Trond Myklebust <trond.myklebust@fys.uio.no>
+ */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/utsname.h>
+#include <linux/nfs.h>
+
+#include <linux/sunrpc/xdr.h>
+#include <linux/sunrpc/clnt.h>
+#include <linux/sunrpc/svc.h>
+#include <linux/sunrpc/stats.h>
+#include <linux/lockd/lockd.h>
+#include <linux/lockd/sm_inter.h>
+
+#define NLMDBG_FACILITY NLMDBG_XDR
+#define NLM_MAXSTRLEN 1024
+#define OFFSET_MAX ((off_t)LONG_MAX)
+
+#define QUADLEN(len) (((len) + 3) >> 2)
+
+u32 nlm4_deadlock, nlm4_rofs, nlm4_stale_fh, nlm4_fbig,
+ nlm4_failed;
+
+
+typedef struct nlm_args nlm_args;
+
+static inline off_t
+size_to_off_t(__s64 size)
+{
+ size = (size > (__s64)LONG_MAX) ? (off_t)LONG_MAX : (off_t) size;
+ return (size < (__s64)-LONG_MAX) ? (off_t)-LONG_MAX : (off_t) size;
+}
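The saturation that size_to_off_t() performs above can be exercised stand-alone: NLMv4 carries signed 64-bit offsets, while the VFS lock fields are off_t (long), so out-of-range values clamp to the representable extremes. The sketch below is plain user-space C with our own names, not kernel code:

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Illustrative re-creation of the kernel's size_to_off_t() clamp:
 * a 64-bit NLMv4 offset saturates to [-LONG_MAX, LONG_MAX] so it
 * fits into the VFS's long-sized lock fields. */
static long size_to_off_t_demo(int64_t size)
{
	if (size > (int64_t)LONG_MAX)
		return LONG_MAX;	/* too large: clamp high */
	if (size < (int64_t)-LONG_MAX)
		return -LONG_MAX;	/* too small: clamp low */
	return (long)size;		/* in range: pass through */
}
```

On a 64-bit build only INT64_MIN actually clamps; the point is that the conversion can never wrap.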
+
+/*
+ * XDR functions for basic NLM types
+ */
+static u32 *
+nlm4_decode_cookie(u32 *p, struct nlm_cookie *c)
+{
+ unsigned int len;
+
+ len = ntohl(*p++);
+
+ if(len==0)
+ {
+ c->len=4;
+ memset(c->data, 0, 4); /* hockeypux brain damage */
+ }
+ else if(len<=8)
+ {
+ c->len=len;
+ memcpy(c->data, p, len);
+ p+=(len+3)>>2;
+ }
+ else
+ {
+ printk(KERN_NOTICE
+		       "lockd: bad cookie size %d (max 8 bytes supported)\n", len);
+ return NULL;
+ }
+ return p;
+}
+
+static u32 *
+nlm4_encode_cookie(u32 *p, struct nlm_cookie *c)
+{
+ *p++ = htonl(c->len);
+ memcpy(p, c->data, c->len);
+ p+=(c->len+3)>>2;
+ return p;
+}
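The cookie wire format the two functions above use is a 32-bit big-endian length word followed by the data, padded out to a 4-byte XDR quad boundary. A hedged user-space sketch of the round trip (all names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>	/* htonl / ntohl */

struct demo_cookie {
	unsigned int len;
	unsigned char data[8];	/* NLM cookies are capped at 8 bytes here */
};

static uint32_t *demo_encode_cookie(uint32_t *p, const struct demo_cookie *c)
{
	*p++ = htonl(c->len);		/* length word, network order */
	memcpy(p, c->data, c->len);
	return p + ((c->len + 3) >> 2);	/* advance by whole XDR quads */
}

static uint32_t *demo_decode_cookie(uint32_t *p, struct demo_cookie *c)
{
	unsigned int len = ntohl(*p++);

	if (len > 8)
		return NULL;		/* oversized cookie: reject */
	c->len = len;
	memcpy(c->data, p, len);
	return p + ((len + 3) >> 2);
}
```

A 5-byte cookie therefore occupies three quads on the wire: one length word plus two padded data quads.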
+
+static u32 *
+nlm4_decode_fh(u32 *p, struct nfs_fh *f)
+{
+ memset(f->data, 0, sizeof(f->data));
+#ifdef NFS_MAXFHSIZE
+ f->size = ntohl(*p++);
+ if (f->size > NFS_MAXFHSIZE) {
+ printk(KERN_NOTICE
+ "lockd: bad fhandle size %x (should be %d)\n",
+ f->size, NFS_MAXFHSIZE);
+ return NULL;
+ }
+ memcpy(f->data, p, f->size);
+ return p + XDR_QUADLEN(f->size);
+#else
+ if (ntohl(*p++) != NFS_FHSIZE)
+ return NULL; /* for now, all filehandles are 32 bytes */
+ memcpy(f->data, p, NFS_FHSIZE);
+ return p + XDR_QUADLEN(NFS_FHSIZE);
+#endif
+}
+
+static u32 *
+nlm4_encode_fh(u32 *p, struct nfs_fh *f)
+{
+#ifdef NFS_MAXFHSIZE
+ *p++ = htonl(f->size);
+ memcpy(p, f->data, f->size);
+ return p + XDR_QUADLEN(f->size);
+#else
+ *p++ = htonl(NFS_FHSIZE);
+ memcpy(p, f->data, NFS_FHSIZE);
+ return p + XDR_QUADLEN(NFS_FHSIZE);
+#endif
+}
+
+/*
+ * Encode and decode owner handle
+ */
+static u32 *
+nlm4_decode_oh(u32 *p, struct xdr_netobj *oh)
+{
+ return xdr_decode_netobj(p, oh);
+}
+
+static u32 *
+nlm4_encode_oh(u32 *p, struct xdr_netobj *oh)
+{
+ return xdr_encode_netobj(p, oh);
+}
+
+static u32 *
+nlm4_decode_lock(u32 *p, struct nlm_lock *lock)
+{
+ struct file_lock *fl = &lock->fl;
+ __s64 len, start, end;
+ int tmp;
+
+ if (!(p = xdr_decode_string(p, &lock->caller, &tmp, NLM_MAXSTRLEN))
+ || !(p = nlm4_decode_fh(p, &lock->fh))
+ || !(p = nlm4_decode_oh(p, &lock->oh)))
+ return NULL;
+
+ memset(fl, 0, sizeof(*fl));
+ fl->fl_owner = current->files;
+ fl->fl_pid = ntohl(*p++);
+ fl->fl_flags = FL_POSIX;
+ fl->fl_type = F_RDLCK; /* as good as anything else */
+ p = xdr_decode_hyper(p, &start);
+ p = xdr_decode_hyper(p, &len);
+ end = start + len - 1;
+
+ fl->fl_start = size_to_off_t(start);
+ fl->fl_end = size_to_off_t(end);
+
+ if (len == 0 || fl->fl_end < 0)
+ fl->fl_end = OFFSET_MAX;
+ return p;
+}
+
+/*
+ * Encode a lock as part of an NLM call
+ */
+static u32 *
+nlm4_encode_lock(u32 *p, struct nlm_lock *lock)
+{
+ struct file_lock *fl = &lock->fl;
+
+ if (!(p = xdr_encode_string(p, lock->caller))
+ || !(p = nlm4_encode_fh(p, &lock->fh))
+ || !(p = nlm4_encode_oh(p, &lock->oh)))
+ return NULL;
+
+ *p++ = htonl(fl->fl_pid);
+ p = xdr_encode_hyper(p, fl->fl_start);
+ if (fl->fl_end == OFFSET_MAX)
+ p = xdr_encode_hyper(p, 0);
+ else
+ p = xdr_encode_hyper(p, fl->fl_end - fl->fl_start + 1);
+
+ return p;
+}
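The range convention nlm4_encode_lock() follows can be isolated: the VFS stores a lock as an inclusive [fl_start, fl_end] pair, while NLMv4 transmits (start, length) with length 0 meaning "to end of file". A small sketch of that conversion, using INT64_MAX as a stand-in for the kernel's OFFSET_MAX sentinel (names are ours):

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_OFFSET_MAX INT64_MAX	/* stand-in for OFFSET_MAX */

/* Convert an inclusive [start, end] lock range to the NLMv4 wire
 * length: 0 encodes "the whole remainder of the file". */
static int64_t demo_range_to_len(int64_t start, int64_t end)
{
	if (end == DEMO_OFFSET_MAX)
		return 0;		/* lock to EOF */
	return end - start + 1;		/* inclusive range -> byte count */
}
```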
+
+/*
+ * Encode result of a TEST/TEST_MSG call
+ */
+static u32 *
+nlm4_encode_testres(u32 *p, struct nlm_res *resp)
+{
+ dprintk("xdr: before encode_testres (p %p resp %p)\n", p, resp);
+ if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+ return 0;
+ *p++ = resp->status;
+
+ if (resp->status == nlm_lck_denied) {
+ struct file_lock *fl = &resp->lock.fl;
+
+ *p++ = (fl->fl_type == F_RDLCK)? xdr_zero : xdr_one;
+ *p++ = htonl(fl->fl_pid);
+
+ /* Encode owner handle. */
+ if (!(p = xdr_encode_netobj(p, &resp->lock.oh)))
+ return 0;
+
+ p = xdr_encode_hyper(p, fl->fl_start);
+ if (fl->fl_end == OFFSET_MAX)
+ p = xdr_encode_hyper(p, 0);
+ else
+ p = xdr_encode_hyper(p, fl->fl_end - fl->fl_start + 1);
+		dprintk("xdr: encode_testres (status %d pid %d type %d start %ld end %ld)\n",
+			resp->status, fl->fl_pid, fl->fl_type,
+			fl->fl_start, fl->fl_end);
+ }
+
+ dprintk("xdr: after encode_testres (p %p resp %p)\n", p, resp);
+ return p;
+}
+
+
+/*
+ * Check buffer bounds after decoding arguments
+ */
+static int
+xdr_argsize_check(struct svc_rqst *rqstp, u32 *p)
+{
+ struct svc_buf *buf = &rqstp->rq_argbuf;
+
+ return p - buf->base <= buf->buflen;
+}
+
+static int
+xdr_ressize_check(struct svc_rqst *rqstp, u32 *p)
+{
+ struct svc_buf *buf = &rqstp->rq_resbuf;
+
+ buf->len = p - buf->base;
+ return (buf->len <= buf->buflen);
+}
+
+/*
+ * First, the server side XDR functions
+ */
+int
+nlm4svc_decode_testargs(struct svc_rqst *rqstp, u32 *p, nlm_args *argp)
+{
+ u32 exclusive;
+
+ if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+ return 0;
+
+ exclusive = ntohl(*p++);
+ if (!(p = nlm4_decode_lock(p, &argp->lock)))
+ return 0;
+ if (exclusive)
+ argp->lock.fl.fl_type = F_WRLCK;
+
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_encode_testres(struct svc_rqst *rqstp, u32 *p, struct nlm_res *resp)
+{
+ if (!(p = nlm4_encode_testres(p, resp)))
+ return 0;
+ return xdr_ressize_check(rqstp, p);
+}
+
+int
+nlm4svc_decode_lockargs(struct svc_rqst *rqstp, u32 *p, nlm_args *argp)
+{
+ u32 exclusive;
+
+ if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+ return 0;
+ argp->block = ntohl(*p++);
+ exclusive = ntohl(*p++);
+ if (!(p = nlm4_decode_lock(p, &argp->lock)))
+ return 0;
+ if (exclusive)
+ argp->lock.fl.fl_type = F_WRLCK;
+ argp->reclaim = ntohl(*p++);
+ argp->state = ntohl(*p++);
+ argp->monitor = 1; /* monitor client by default */
+
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_decode_cancargs(struct svc_rqst *rqstp, u32 *p, nlm_args *argp)
+{
+ u32 exclusive;
+
+ if (!(p = nlm4_decode_cookie(p, &argp->cookie)))
+ return 0;
+ argp->block = ntohl(*p++);
+ exclusive = ntohl(*p++);
+ if (!(p = nlm4_decode_lock(p, &argp->lock)))
+ return 0;
+ if (exclusive)
+ argp->lock.fl.fl_type = F_WRLCK;
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_decode_unlockargs(struct svc_rqst *rqstp, u32 *p, nlm_args *argp)
+{
+ if (!(p = nlm4_decode_cookie(p, &argp->cookie))
+ || !(p = nlm4_decode_lock(p, &argp->lock)))
+ return 0;
+ argp->lock.fl.fl_type = F_UNLCK;
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_decode_shareargs(struct svc_rqst *rqstp, u32 *p, nlm_args *argp)
+{
+ struct nlm_lock *lock = &argp->lock;
+ int len;
+
+ memset(lock, 0, sizeof(*lock));
+ lock->fl.fl_pid = ~(u32) 0;
+
+ if (!(p = nlm4_decode_cookie(p, &argp->cookie))
+ || !(p = xdr_decode_string(p, &lock->caller, &len, NLM_MAXSTRLEN))
+ || !(p = nlm4_decode_fh(p, &lock->fh))
+ || !(p = nlm4_decode_oh(p, &lock->oh)))
+ return 0;
+ argp->fsm_mode = ntohl(*p++);
+ argp->fsm_access = ntohl(*p++);
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_encode_shareres(struct svc_rqst *rqstp, u32 *p, struct nlm_res *resp)
+{
+ if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+ return 0;
+ *p++ = resp->status;
+ *p++ = xdr_zero; /* sequence argument */
+ return xdr_ressize_check(rqstp, p);
+}
+
+int
+nlm4svc_encode_res(struct svc_rqst *rqstp, u32 *p, struct nlm_res *resp)
+{
+ if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+ return 0;
+ *p++ = resp->status;
+ return xdr_ressize_check(rqstp, p);
+}
+
+int
+nlm4svc_decode_notify(struct svc_rqst *rqstp, u32 *p, struct nlm_args *argp)
+{
+ struct nlm_lock *lock = &argp->lock;
+ int len;
+
+ if (!(p = xdr_decode_string(p, &lock->caller, &len, NLM_MAXSTRLEN)))
+ return 0;
+ argp->state = ntohl(*p++);
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_decode_reboot(struct svc_rqst *rqstp, u32 *p, struct nlm_reboot *argp)
+{
+ if (!(p = xdr_decode_string(p, &argp->mon, &argp->len, SM_MAXSTRLEN)))
+ return 0;
+ argp->state = ntohl(*p++);
+ argp->addr = ntohl(*p++);
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_decode_res(struct svc_rqst *rqstp, u32 *p, struct nlm_res *resp)
+{
+ if (!(p = nlm4_decode_cookie(p, &resp->cookie)))
+ return 0;
+ resp->status = ntohl(*p++);
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_decode_void(struct svc_rqst *rqstp, u32 *p, void *dummy)
+{
+ return xdr_argsize_check(rqstp, p);
+}
+
+int
+nlm4svc_encode_void(struct svc_rqst *rqstp, u32 *p, void *dummy)
+{
+ return xdr_ressize_check(rqstp, p);
+}
+
+/*
+ * Now, the client side XDR functions
+ */
+static int
+nlm4clt_encode_void(struct rpc_rqst *req, u32 *p, void *ptr)
+{
+ req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+ return 0;
+}
+
+static int
+nlm4clt_decode_void(struct rpc_rqst *req, u32 *p, void *ptr)
+{
+ return 0;
+}
+
+static int
+nlm4clt_encode_testargs(struct rpc_rqst *req, u32 *p, nlm_args *argp)
+{
+ struct nlm_lock *lock = &argp->lock;
+
+ if (!(p = nlm4_encode_cookie(p, &argp->cookie)))
+ return -EIO;
+ *p++ = (lock->fl.fl_type == F_WRLCK)? xdr_one : xdr_zero;
+ if (!(p = nlm4_encode_lock(p, lock)))
+ return -EIO;
+ req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+ return 0;
+}
+
+static int
+nlm4clt_decode_testres(struct rpc_rqst *req, u32 *p, struct nlm_res *resp)
+{
+ if (!(p = nlm4_decode_cookie(p, &resp->cookie)))
+ return -EIO;
+ resp->status = ntohl(*p++);
+ if (resp->status == NLM_LCK_DENIED) {
+ struct file_lock *fl = &resp->lock.fl;
+ u32 excl;
+ s64 start, end, len;
+
+ memset(&resp->lock, 0, sizeof(resp->lock));
+ excl = ntohl(*p++);
+ fl->fl_pid = ntohl(*p++);
+ if (!(p = nlm4_decode_oh(p, &resp->lock.oh)))
+ return -EIO;
+
+ fl->fl_flags = FL_POSIX;
+ fl->fl_type = excl? F_WRLCK : F_RDLCK;
+ p = xdr_decode_hyper(p, &start);
+ p = xdr_decode_hyper(p, &len);
+ end = start + len - 1;
+
+ fl->fl_start = size_to_off_t(start);
+ fl->fl_end = size_to_off_t(end);
+ if (len == 0 || fl->fl_end < 0)
+ fl->fl_end = OFFSET_MAX;
+ }
+ return 0;
+}
+
+
+static int
+nlm4clt_encode_lockargs(struct rpc_rqst *req, u32 *p, nlm_args *argp)
+{
+ struct nlm_lock *lock = &argp->lock;
+
+ if (!(p = nlm4_encode_cookie(p, &argp->cookie)))
+ return -EIO;
+ *p++ = argp->block? xdr_one : xdr_zero;
+ *p++ = (lock->fl.fl_type == F_WRLCK)? xdr_one : xdr_zero;
+ if (!(p = nlm4_encode_lock(p, lock)))
+ return -EIO;
+ *p++ = argp->reclaim? xdr_one : xdr_zero;
+ *p++ = htonl(argp->state);
+ req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+ return 0;
+}
+
+static int
+nlm4clt_encode_cancargs(struct rpc_rqst *req, u32 *p, nlm_args *argp)
+{
+ struct nlm_lock *lock = &argp->lock;
+
+ if (!(p = nlm4_encode_cookie(p, &argp->cookie)))
+ return -EIO;
+ *p++ = argp->block? xdr_one : xdr_zero;
+ *p++ = (lock->fl.fl_type == F_WRLCK)? xdr_one : xdr_zero;
+ if (!(p = nlm4_encode_lock(p, lock)))
+ return -EIO;
+ req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+ return 0;
+}
+
+static int
+nlm4clt_encode_unlockargs(struct rpc_rqst *req, u32 *p, nlm_args *argp)
+{
+ struct nlm_lock *lock = &argp->lock;
+
+ if (!(p = nlm4_encode_cookie(p, &argp->cookie)))
+ return -EIO;
+ if (!(p = nlm4_encode_lock(p, lock)))
+ return -EIO;
+ req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+ return 0;
+}
+
+static int
+nlm4clt_encode_res(struct rpc_rqst *req, u32 *p, struct nlm_res *resp)
+{
+ if (!(p = nlm4_encode_cookie(p, &resp->cookie)))
+ return -EIO;
+ *p++ = resp->status;
+ req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+ return 0;
+}
+
+static int
+nlm4clt_encode_testres(struct rpc_rqst *req, u32 *p, struct nlm_res *resp)
+{
+ if (!(p = nlm4_encode_testres(p, resp)))
+ return -EIO;
+ req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+ return 0;
+}
+
+static int
+nlm4clt_decode_res(struct rpc_rqst *req, u32 *p, struct nlm_res *resp)
+{
+ if (!(p = nlm4_decode_cookie(p, &resp->cookie)))
+ return -EIO;
+ resp->status = ntohl(*p++);
+ return 0;
+}
+
+/*
+ * Buffer requirements for NLM
+ */
+#define NLM4_void_sz 0
+#define NLM4_cookie_sz 3 /* 1 len , 2 data */
+#define NLM4_caller_sz 1+XDR_QUADLEN(NLM_MAXSTRLEN)
+#define NLM4_netobj_sz 1+XDR_QUADLEN(XDR_MAX_NETOBJ)
+/* #define NLM4_owner_sz 1+XDR_QUADLEN(NLM4_MAXOWNER) */
+#define NLM4_fhandle_sz 1+XDR_QUADLEN(NFS3_FHSIZE)
+#define NLM4_lock_sz 5+NLM4_caller_sz+NLM4_netobj_sz+NLM4_fhandle_sz
+#define NLM4_holder_sz 6+NLM4_netobj_sz
+
+#define NLM4_testargs_sz NLM4_cookie_sz+1+NLM4_lock_sz
+#define NLM4_lockargs_sz NLM4_cookie_sz+4+NLM4_lock_sz
+#define NLM4_cancargs_sz NLM4_cookie_sz+2+NLM4_lock_sz
+#define NLM4_unlockargs_sz NLM4_cookie_sz+NLM4_lock_sz
+
+#define NLM4_testres_sz NLM4_cookie_sz+1+NLM4_holder_sz
+#define NLM4_res_sz NLM4_cookie_sz+1
+#define NLM4_norep_sz 0
+
+#ifndef MAX
+# define MAX(a,b) (((a) > (b))? (a) : (b))
+#endif
+
+/*
+ * For NLM, a void procedure really returns nothing
+ */
+#define nlm4clt_decode_norep NULL
+
+#define PROC(proc, argtype, restype) \
+ { "nlm4_" #proc, \
+ (kxdrproc_t) nlm4clt_encode_##argtype, \
+ (kxdrproc_t) nlm4clt_decode_##restype, \
+ MAX(NLM4_##argtype##_sz, NLM4_##restype##_sz) << 2, \
+ 0 \
+ }
+
+static struct rpc_procinfo nlm4_procedures[] = {
+ PROC(null, void, void),
+ PROC(test, testargs, testres),
+ PROC(lock, lockargs, res),
+ PROC(canc, cancargs, res),
+ PROC(unlock, unlockargs, res),
+ PROC(granted, testargs, res),
+ PROC(test_msg, testargs, norep),
+ PROC(lock_msg, lockargs, norep),
+ PROC(canc_msg, cancargs, norep),
+ PROC(unlock_msg, unlockargs, norep),
+ PROC(granted_msg, testargs, norep),
+ PROC(test_res, testres, norep),
+ PROC(lock_res, res, norep),
+ PROC(canc_res, res, norep),
+ PROC(unlock_res, res, norep),
+ PROC(granted_res, res, norep),
+ PROC(undef, void, void),
+ PROC(undef, void, void),
+ PROC(undef, void, void),
+ PROC(undef, void, void),
+#ifdef NLMCLNT_SUPPORT_SHARES
+ PROC(share, shareargs, shareres),
+ PROC(unshare, shareargs, shareres),
+ PROC(nm_lock, lockargs, res),
+ PROC(free_all, notify, void),
+#else
+ PROC(undef, void, void),
+ PROC(undef, void, void),
+ PROC(undef, void, void),
+ PROC(undef, void, void),
+#endif
+};
+
+struct rpc_version nlm_version4 = {
+ 4, 24, nlm4_procedures,
+};
if (!nfserr)
dget(filp->f_dentry);
fh_put(&fh);
- return nfserr;
+ /* nlm and nfsd don't share error codes.
+ * we invent: 0 = no error
+ * 1 = stale file handle
+ * 2 = other error
+ */
+ if (nfserr == 0)
+ return 0;
+ else if (nfserr == nfserr_stale)
+ return 1;
+ else return 2;
}
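The comment above describes an ad-hoc convention: lockd and nfsd use disjoint status spaces, so the hunk collapses nfsd errors into three private codes. A minimal sketch of that mapping (the names and the sample nfsd error values are illustrative only):

```c
#include <assert.h>

enum { LOCKD_OK = 0, LOCKD_STALE = 1, LOCKD_OTHER = 2 };

/* Collapse an nfsd status into the three codes lockd understands:
 * 0 = no error, 1 = stale file handle, 2 = anything else. */
static int map_nfserr(int nfserr, int nfserr_stale)
{
	if (nfserr == 0)
		return LOCKD_OK;
	if (nfserr == nfserr_stale)
		return LOCKD_STALE;
	return LOCKD_OTHER;
}
```

The caller can then turn LOCKD_STALE into nlm4_stale_fh for NLMv4 clients, as the CONFIG_LOCKD_V4 hunk earlier does.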
static void
*p++ = htonl((u32) nfsd_ruid(rqstp, inode->i_uid));
*p++ = htonl((u32) nfsd_rgid(rqstp, inode->i_gid));
if (S_ISLNK(inode->i_mode) && inode->i_size > NFS3_MAXPATHLEN) {
- p = enc64(p, (u64) NFS3_MAXPATHLEN);
+ p = xdr_encode_hyper(p, (u64) NFS3_MAXPATHLEN);
} else {
- p = enc64(p, (u64) inode->i_size);
+ p = xdr_encode_hyper(p, (u64) inode->i_size);
}
- p = enc64(p, ((u64)inode->i_blocks) << 9);
+ p = xdr_encode_hyper(p, ((u64)inode->i_blocks) << 9);
*p++ = htonl((u32) MAJOR(inode->i_rdev));
*p++ = htonl((u32) MINOR(inode->i_rdev));
- p = enc64(p, (u64) inode->i_dev);
- p = enc64(p, (u64) inode->i_ino);
+ p = xdr_encode_hyper(p, (u64) inode->i_dev);
+ p = xdr_encode_hyper(p, (u64) inode->i_ino);
p = encode_time3(p, inode->i_atime);
p = encode_time3(p, inode->i_mtime);
p = encode_time3(p, inode->i_ctime);
*p++ = htonl((u32) nfsd_ruid(rqstp, fhp->fh_post_uid));
*p++ = htonl((u32) nfsd_rgid(rqstp, fhp->fh_post_gid));
if (S_ISLNK(fhp->fh_post_mode) && fhp->fh_post_size > NFS3_MAXPATHLEN) {
- p = enc64(p, (u64) NFS3_MAXPATHLEN);
+ p = xdr_encode_hyper(p, (u64) NFS3_MAXPATHLEN);
} else {
- p = enc64(p, (u64) fhp->fh_post_size);
+ p = xdr_encode_hyper(p, (u64) fhp->fh_post_size);
}
- p = enc64(p, ((u64)fhp->fh_post_blocks) << 9);
+ p = xdr_encode_hyper(p, ((u64)fhp->fh_post_blocks) << 9);
*p++ = htonl((u32) MAJOR(fhp->fh_post_rdev));
*p++ = htonl((u32) MINOR(fhp->fh_post_rdev));
- p = enc64(p, (u64) inode->i_dev);
- p = enc64(p, (u64) inode->i_ino);
+ p = xdr_encode_hyper(p, (u64) inode->i_dev);
+ p = xdr_encode_hyper(p, (u64) inode->i_ino);
p = encode_time3(p, fhp->fh_post_atime);
p = encode_time3(p, fhp->fh_post_mtime);
p = encode_time3(p, fhp->fh_post_ctime);
if (dentry && dentry->d_inode && fhp->fh_post_saved) {
if (fhp->fh_pre_saved) {
*p++ = xdr_one;
- p = enc64(p, (u64) fhp->fh_pre_size);
+ p = xdr_encode_hyper(p, (u64) fhp->fh_pre_size);
p = encode_time3(p, fhp->fh_pre_mtime);
p = encode_time3(p, fhp->fh_pre_ctime);
} else {
int buflen, slen, elen;
if (cd->offset)
- enc64(cd->offset, (u64) offset);
+ xdr_encode_hyper(cd->offset, (u64) offset);
/* nfsd_readdir calls us with name == 0 when it wants us to
* set the last offset entry. */
return -EINVAL;
}
*p++ = xdr_one; /* mark entry present */
- p = enc64(p, ino); /* file id */
+ p = xdr_encode_hyper(p, ino); /* file id */
#ifdef XDR_ENCODE_STRING_TAKES_LENGTH
p = xdr_encode_string(p, name, namlen); /* name length & name */
#else
p[slen - 1] = 0; /* don't leak kernel data */
cd->offset = p; /* remember pointer */
- p = enc64(p, NFS_OFFSET_MAX); /* offset of next entry */
+ p = xdr_encode_hyper(p, NFS_OFFSET_MAX); /* offset of next entry */
/* throw in readdirplus baggage */
if (plus) {
*p++ = xdr_zero; /* no post_op_attr */
if (resp->status == 0) {
- p = enc64(p, bs * s->f_blocks); /* total bytes */
- p = enc64(p, bs * s->f_bfree); /* free bytes */
- p = enc64(p, bs * s->f_bavail); /* user available bytes */
- p = enc64(p, s->f_files); /* total inodes */
- p = enc64(p, s->f_ffree); /* free inodes */
- p = enc64(p, s->f_ffree); /* user available inodes */
+ p = xdr_encode_hyper(p, bs * s->f_blocks); /* total bytes */
+ p = xdr_encode_hyper(p, bs * s->f_bfree); /* free bytes */
+ p = xdr_encode_hyper(p, bs * s->f_bavail); /* user available bytes */
+ p = xdr_encode_hyper(p, s->f_files); /* total inodes */
+ p = xdr_encode_hyper(p, s->f_ffree); /* free inodes */
+ p = xdr_encode_hyper(p, s->f_ffree); /* user available inodes */
*p++ = htonl(resp->invarsec); /* mean unchanged time */
}
return xdr_ressize_check(rqstp, p);
*p++ = htonl(resp->f_wtpref);
*p++ = htonl(resp->f_wtmult);
*p++ = htonl(resp->f_dtpref);
- p = enc64(p, resp->f_maxfilesize);
+ p = xdr_encode_hyper(p, resp->f_maxfilesize);
*p++ = xdr_one;
*p++ = xdr_zero;
*p++ = htonl(resp->f_properties);
if (cd.offset) {
#ifdef CONFIG_NFSD_V3
if (rqstp->rq_vers == 3)
- (void)enc64(cd.offset, file.f_pos);
+ (void)xdr_encode_hyper(cd.offset, file.f_pos);
else
#endif /* CONFIG_NFSD_V3 */
*cd.offset = htonl(file.f_pos);
#define flush_cache_range(mm, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
-#define flush_icache_range(start, end) do { } while (0)
+/*
+ * The icache is not coherent with the dcache on alpha, thus before
+ * running self modified code like kernel modules we must always run
+ * an imb().
+ */
+#ifndef __SMP__
+#define flush_icache_range(start, end) imb()
+#else
+#define flush_icache_range(start, end) smp_imb()
+extern void smp_imb(void);
+#endif
#define flush_icache_page(vma, page) do { } while (0)
/*
flush_tlb_current(current->mm);
}
+/*
+ * Flush a specified range of user mapping page tables
+ * from TLB.
+ * Although Alpha uses VPTE caches, this can be a nop, as Alpha does
+ * not have finegrained tlb flushing, so it will flush VPTE stuff
+ * during next flush_tlb_range.
+ */
+static inline void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+}
+
#ifndef __SMP__
/*
* Flush everything (kernel mapping may also have
flush_tlb_mm(mm);
}
-/*
- * Flush a specified range of user mapping page tables
- * from TLB.
- * Although Alpha uses VPTE caches, this can be a nop, as Alpha does
- * not have finegrained tlb flushing, so it will flush VPTE stuff
- * during next flush_tlb_range.
- */
-static inline void flush_tlb_pgtables(struct mm_struct *mm,
- unsigned long start, unsigned long end)
-{
-}
-
#else /* __SMP__ */
extern void flush_tlb_all(void);
* Idle the processor
*/
int (*_do_idle)(void);
+ /*
+ * flush I cache for a page
+ */
+ void (*_flush_icache_page)(unsigned long address);
} processor;
extern const struct processor arm6_processor_functions;
#define cpu_flush_icache_area(start,end) processor._flush_icache_area(start,end)
#define cpu_cache_wback_area(start,end) processor._cache_wback_area(start,end)
#define cpu_cache_purge_area(start,end) processor._cache_purge_area(start,end)
+#define cpu_flush_icache_page(virt) processor._flush_icache_page(virt)
#define cpu_switch_mm(pgd,tsk) cpu_set_pgd(__virt_to_phys((unsigned long)(pgd)))
#define cpu_flush_icache_area cpu_fn(CPU_NAME,_flush_icache_area)
#define cpu_cache_wback_area cpu_fn(CPU_NAME,_cache_wback_area)
#define cpu_cache_purge_area cpu_fn(CPU_NAME,_cache_purge_area)
+#define cpu_flush_icache_page cpu_fn(CPU_NAME,_flush_icache_page)
#ifndef __ASSEMBLY__
extern void cpu_flush_icache_area(unsigned long start, unsigned long size);
extern void cpu_cache_wback_area(unsigned long start, unsigned long end);
extern void cpu_cache_purge_area(unsigned long start, unsigned long end);
+extern void cpu_flush_icache_page(unsigned long virt);
#define cpu_switch_mm(pgd,tsk) cpu_set_pgd(__virt_to_phys((unsigned long)(pgd)))
#define flush_cache_range(mm,start,end) do { } while (0)
#define flush_cache_page(vma,vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
+#define flush_icache_page(vma,page) do { } while (0)
#define flush_icache_range(start,end) do { } while (0)
/*
+#include <asm/mman.h>
+
/*
* Cache flushing...
*/
#define flush_icache_range(_start,_end) \
cpu_flush_icache_area((_start), (_end) - (_start))
+#define flush_icache_page(vma,pg) \
+ do { \
+ if ((vma)->vm_flags & PROT_EXEC) \
+ cpu_flush_icache_page(page_address(pg)); \
+ } while (0)
+
/*
* We don't have a MEMC chip...
*/
#ifndef __ASM_ARM_UNALIGNED_H
#define __ASM_ARM_UNALIGNED_H
+#include <linux/types.h>
+
#define get_unaligned(ptr) \
((__typeof__(*(ptr)))__get_unaligned_size((ptr), sizeof(*(ptr))))
#define KEY_RECORD 167
#define KEY_REWIND 168
#define KEY_PHONE 169
-#define KEY_CALENDAR 170
-#define KEY_NOTEPAD 171
-#define KEY_PROG3 172
-#define KEY_PRINT 173
-#define KEY_SOUND 174
-#define KEY_FULLSCREEN 175
+#define KEY_CONFIG 171
+#define KEY_HOMEPAGE 172
+#define KEY_REFRESH 173
+#define KEY_EXIT 174
+#define KEY_MOVE 175
#define KEY_UNKNOWN 180
#define NLMDBG_CLNTSUBS 0x0020
#define NLMDBG_SVCSUBS 0x0040
#define NLMDBG_HOSTCACHE 0x0080
+#define NLMDBG_XDR 0x0100
#define NLMDBG_ALL 0x7fff
#include <linux/nfsd/nfsfh.h>
#include <linux/lockd/bind.h>
#include <linux/lockd/xdr.h>
+#ifdef CONFIG_LOCKD_V4
+#include <linux/lockd/xdr4.h>
+#endif
#include <linux/lockd/debug.h>
/*
*/
extern struct rpc_program nlm_program;
extern struct svc_procedure nlmsvc_procedures[];
+#ifdef CONFIG_LOCKD_V4
+extern struct svc_procedure nlmsvc_procedures4[];
+#endif
extern unsigned long nlmsvc_grace_period;
extern unsigned long nlmsvc_timeout;
/* Return states for NLM */
enum {
- NLM_LCK_GRANTED = 0,
- NLM_LCK_DENIED,
- NLM_LCK_DENIED_NOLOCKS,
- NLM_LCK_BLOCKED,
- NLM_LCK_DENIED_GRACE_PERIOD,
+ NLM_LCK_GRANTED = 0,
+ NLM_LCK_DENIED = 1,
+ NLM_LCK_DENIED_NOLOCKS = 2,
+ NLM_LCK_BLOCKED = 3,
+ NLM_LCK_DENIED_GRACE_PERIOD = 4,
+#ifdef CONFIG_LOCKD_V4
+ NLM_DEADLCK = 5,
+ NLM_ROFS = 6,
+ NLM_STALE_FH = 7,
+ NLM_FBIG = 8,
+ NLM_FAILED = 9,
+#endif
};
#define NLM_PROGRAM 100021
--- /dev/null
+/*
+ * linux/include/linux/lockd/xdr4.h
+ *
+ * XDR types for the NLM version 4 protocol
+ *
+ * Copyright (C) 1996 Olaf Kirch <okir@monad.swb.de>
+ */
+
+#ifndef LOCKD_XDR4_H
+#define LOCKD_XDR4_H
+
+#include <linux/fs.h>
+#include <linux/nfs.h>
+#include <linux/sunrpc/xdr.h>
+#include <linux/lockd/xdr.h>
+
+/* error codes new to NLMv4 */
+extern u32 nlm4_deadlock, nlm4_rofs, nlm4_stale_fh, nlm4_fbig, nlm4_failed;
+
+
+int nlm4svc_decode_testargs(struct svc_rqst *, u32 *, struct nlm_args *);
+int nlm4svc_encode_testres(struct svc_rqst *, u32 *, struct nlm_res *);
+int nlm4svc_decode_lockargs(struct svc_rqst *, u32 *, struct nlm_args *);
+int nlm4svc_decode_cancargs(struct svc_rqst *, u32 *, struct nlm_args *);
+int nlm4svc_decode_unlockargs(struct svc_rqst *, u32 *, struct nlm_args *);
+int nlm4svc_encode_res(struct svc_rqst *, u32 *, struct nlm_res *);
+int nlm4svc_decode_res(struct svc_rqst *, u32 *, struct nlm_res *);
+int nlm4svc_encode_void(struct svc_rqst *, u32 *, void *);
+int nlm4svc_decode_void(struct svc_rqst *, u32 *, void *);
+int nlm4svc_decode_shareargs(struct svc_rqst *, u32 *, struct nlm_args *);
+int nlm4svc_encode_shareres(struct svc_rqst *, u32 *, struct nlm_res *);
+int nlm4svc_decode_notify(struct svc_rqst *, u32 *, struct nlm_args *);
+int nlm4svc_decode_reboot(struct svc_rqst *, u32 *, struct nlm_reboot *);
+/*
+int nlmclt_encode_testargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+int nlmclt_encode_lockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+int nlmclt_encode_cancargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+int nlmclt_encode_unlockargs(struct rpc_rqst *, u32 *, struct nlm_args *);
+ */
+
+#endif /* LOCKD_XDR4_H */
--- /dev/null
+/*
+ * kernel/lvm.h
+ *
+ * Copyright (C) 1997 - 2000 Heinz Mauelshagen, Germany
+ *
+ * February-November 1997
+ * May-July 1998
+ * January-March, July, September, October, December 1999
+ * January 2000
+ *
+ * lvm is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * lvm is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with GNU CC; see the file COPYING. If not, write to
+ * the Free Software Foundation, 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ *
+ */
+
+/*
+ * Changelog
+ *
+ * 10/10/1997 - beginning of new structure creation
+ * 12/05/1998 - incorporated structures from lvm_v1.h and deleted lvm_v1.h
+ * 07/06/1998 - avoided LVM_KMALLOC_MAX define by using vmalloc/vfree
+ * instead of kmalloc/kfree
+ * 01/07/1998 - fixed wrong LVM_MAX_SIZE
+ * 07/07/1998 - extended pe_t structure by ios member (for statistic)
+ * 02/08/1998 - changes for official char/block major numbers
+ * 07/08/1998 - avoided init_module() and cleanup_module() to be static
+ * 29/08/1998 - separated core and disk structure type definitions
+ * 01/09/1998 - merged kernel integration version (mike)
+ * 20/01/1999 - added LVM_PE_DISK_OFFSET macro for use in
+ * vg_read_with_pv_and_lv(), pv_move_pe(), pv_show_pe_text()...
+ * 18/02/1999 - added definition of time_disk_t structure;
+ *              keeps time stamps on disk for nonatomic writes (future)
+ * 15/03/1999 - corrected LV() and VG() macro definition to use argument
+ * instead of minor
+ * 03/07/1999 - define for genhd.c name handling
+ * 23/07/1999 - implemented snapshot part
+ * 08/12/1999 - changed LVM_LV_SIZE_MAX macro to reflect current 1TB limit
+ * 01/01/2000 - extended lv_v2 core structure by wait_queue member
+ * 12/02/2000 - integrated Andrea Arcangeli's snapshot work
+ *
+ */
+
+
+#ifndef _LVM_H_INCLUDE
+#define _LVM_H_INCLUDE
+
+#define _LVM_H_VERSION "LVM 0.8final (15/2/2000)"
+
+/*
+ * preprocessor definitions
+ */
+/* if you like emergency reset code in the driver */
+#define LVM_TOTAL_RESET
+
+#define LVM_GET_INODE
+#define LVM_HD_NAME
+
+/* lots of debugging output (see driver source)
+ #define DEBUG_LVM_GET_INFO
+ #define DEBUG
+ #define DEBUG_MAP
+ #define DEBUG_MAP_SIZE
+ #define DEBUG_IOCTL
+ #define DEBUG_READ
+ #define DEBUG_GENDISK
+ #define DEBUG_VG_CREATE
+ #define DEBUG_LVM_BLK_OPEN
+ #define DEBUG_KFREE
+ */
+
+#include <linux/version.h>
+
+#ifndef __KERNEL__
+#define __KERNEL__
+#include <linux/kdev_t.h>
+#undef __KERNEL__
+#else
+#include <linux/kdev_t.h>
+#endif
+
+#include <linux/major.h>
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 3, 0)
+#include <linux/spinlock.h>
+#else
+#include <asm/spinlock.h>
+#endif
+
+#include <asm/semaphore.h>
+#include <asm/page.h>
+
+#if !defined ( LVM_BLK_MAJOR) || !defined ( LVM_CHAR_MAJOR)
+#error Bad include/linux/major.h - LVM MAJOR undefined
+#endif
+
+
+#define LVM_STRUCT_VERSION 1 /* structure version */
+
+#ifndef min
+#define min(a,b) (((a)<(b))?(a):(b))
+#endif
+#ifndef max
+#define max(a,b) (((a)>(b))?(a):(b))
+#endif
+
+/* set the default structure version */
+#if ( LVM_STRUCT_VERSION == 1)
+#define pv_t pv_v1_t
+#define lv_t lv_v2_t
+#define vg_t vg_v1_t
+#define pv_disk_t pv_disk_v1_t
+#define lv_disk_t lv_disk_v1_t
+#define vg_disk_t vg_disk_v1_t
+#define lv_exception_t lv_v2_exception_t
+#endif
+
+
+/*
+ * i/o protocol version
+ *
+ * defined here for the driver and defined separately in the
+ * user-land LVM parts
+ *
+ */
+#define LVM_DRIVER_IOP_VERSION 6
+
+#define LVM_NAME "lvm"
+
+/*
+ * VG/LV indexing macros
+ */
+/* character minor maps directly to volume group */
+#define VG_CHR(a) ( a)
+
+/* block minor indexes into a volume group/logical volume indirection table */
+#define VG_BLK(a) ( vg_lv_map[a].vg_number)
+#define LV_BLK(a) ( vg_lv_map[a].lv_number)
+
+/*
+ * absolute limits for VGs, PVs per VG and LVs per VG
+ */
+#define ABS_MAX_VG 99
+#define ABS_MAX_PV 256
+#define ABS_MAX_LV 256 /* caused by 8 bit minor */
+
+#define MAX_VG ABS_MAX_VG
+#define MAX_LV ABS_MAX_LV
+#define MAX_PV ABS_MAX_PV
+
+#if ( MAX_VG > ABS_MAX_VG)
+#undef MAX_VG
+#define MAX_VG ABS_MAX_VG
+#endif
+
+#if ( MAX_LV > ABS_MAX_LV)
+#undef MAX_LV
+#define MAX_LV ABS_MAX_LV
+#endif
+
+
+/*
+ * VGDA: default disk spaces and offsets
+ *
+ * there's space after the structures for later extensions.
+ *
+ * offset what size
+ * --------------- ---------------------------------- ------------
+ * 0 physical volume structure ~500 byte
+ *
+ * 1K volume group structure ~200 byte
+ *
+ * 5K time stamp structure ~
+ *
+ * 6K namelist of physical volumes 128 byte each
+ *
+ * 6k + n * 128byte n logical volume structures ~300 byte each
+ *
+ * + m * 328byte m physical extent alloc. structs 4 byte each
+ *
+ * End of disk - first physical extent typically 4 megabyte
+ * PE total *
+ * PE size
+ *
+ *
+ */
+
+/* DON'T TOUCH THESE !!! */
+/* base of PV structure in disk partition */
+#define LVM_PV_DISK_BASE 0L
+
+/* size reserved for PV structure on disk */
+#define LVM_PV_DISK_SIZE 1024L
+
+/* base of VG structure in disk partition */
+#define LVM_VG_DISK_BASE LVM_PV_DISK_SIZE
+
+/* size reserved for VG structure */
+#define LVM_VG_DISK_SIZE ( 9 * 512L)
+
+/* size reserved for timekeeping */
+#define LVM_TIMESTAMP_DISK_BASE ( LVM_VG_DISK_BASE + LVM_VG_DISK_SIZE)
+#define LVM_TIMESTAMP_DISK_SIZE 512L /* reserved for timekeeping */
+
+/* name list of physical volumes on disk */
+#define LVM_PV_NAMELIST_DISK_BASE ( LVM_TIMESTAMP_DISK_BASE + \
+ LVM_TIMESTAMP_DISK_SIZE)
+
+/* now for the dynamically calculated parts of the VGDA */
+#define LVM_LV_DISK_OFFSET(a, b) ( (a)->lv_on_disk.base + sizeof ( lv_t) * b)
+#define LVM_DISK_SIZE(pv) ( (pv)->pe_on_disk.base + \
+ (pv)->pe_on_disk.size)
+#define LVM_PE_DISK_OFFSET(pe, pv) ( pe * pv->pe_size + \
+ ( LVM_DISK_SIZE ( pv) / SECTOR_SIZE))
+#define LVM_PE_ON_DISK_BASE(pv) \
+ { int rest; \
+ pv->pe_on_disk.base = pv->lv_on_disk.base + pv->lv_on_disk.size; \
+ if ( ( rest = pv->pe_on_disk.base % SECTOR_SIZE) != 0) \
+ pv->pe_on_disk.base += ( SECTOR_SIZE - rest); \
+ }
+/* END default disk spaces and offsets for PVs */
+
+
+/*
+ * LVM_PE_T_MAX corresponds to:
+ *
+ * 8KB PE size can map a ~512 MB logical volume at the cost of 1MB memory,
+ *
+ * 128MB PE size can map a 8TB logical volume at the same cost of memory.
+ *
+ * Default PE size of 4 MB gives a maximum logical volume size of 256 GB.
+ *
+ * Maximum PE size of 16GB gives a maximum logical volume size of 1024 TB.
+ *
+ * AFAIK, current kernels limit this to 1 TB.
+ *
+ * Should be a sufficient spectrum ;*)
+ */
+
+/* This is the usable size of disk_pe_t.le_num !!! v v */
+#define LVM_PE_T_MAX ( ( 1 << ( sizeof ( uint16_t) * 8)) - 2)
+
+#define LVM_LV_SIZE_MAX(a) ( ( long long) LVM_PE_T_MAX * (a)->pe_size > \
+                             ( long long) 2*1024*1024*1024 ? \
+                             ( long long) 2*1024*1024*1024 : \
+                             ( long long) LVM_PE_T_MAX * (a)->pe_size)
+#define LVM_MIN_PE_SIZE ( 8L * 2) /* 8 KB in sectors */
+#define LVM_MAX_PE_SIZE ( 16L * 1024L * 1024L * 2) /* 16GB in sectors */
+#define LVM_DEFAULT_PE_SIZE ( 4096L * 2) /* 4 MB in sectors */
+#define LVM_DEFAULT_STRIPE_SIZE 16L /* 16 KB */
+#define LVM_MIN_STRIPE_SIZE ( PAGE_SIZE>>9) /* PAGESIZE in sectors */
+#define LVM_MAX_STRIPE_SIZE ( 512L * 2) /* 512 KB in sectors */
+#define LVM_MAX_STRIPES 128 /* max # of stripes */
+#define LVM_MAX_SIZE ( 1024LU * 1024 * 1024 * 2) /* 1TB[sectors] */
+#define LVM_MAX_MIRRORS 2 /* future use */
+#define LVM_MIN_READ_AHEAD 2 /* minimum read ahead sectors */
+#define LVM_MAX_READ_AHEAD 120 /* maximum read ahead sectors */
+#define LVM_MAX_LV_IO_TIMEOUT 60 /* seconds I/O timeout (future use) */
+#define LVM_PARTITION 0xfe /* LVM partition id */
+#define LVM_NEW_PARTITION 0x8e /* new LVM partition id (10/09/1999) */
+#define LVM_PE_SIZE_PV_SIZE_REL 5 /* max relation PV size and PE size */
+
+#define LVM_SNAPSHOT_MAX_CHUNK 1024 /* 1024 KB */
+#define LVM_SNAPSHOT_DEF_CHUNK 64 /* 64 KB */
+#define LVM_SNAPSHOT_MIN_CHUNK 1 /* 1 KB */
+
+#define UNDEF -1
+#define FALSE 0
+#define TRUE 1
+
+
+/*
+ * ioctls
+ */
+/* volume group */
+#define VG_CREATE _IOW ( 0xfe, 0x00, 1)
+#define VG_REMOVE _IOW ( 0xfe, 0x01, 1)
+
+#define VG_EXTEND _IOW ( 0xfe, 0x03, 1)
+#define VG_REDUCE _IOW ( 0xfe, 0x04, 1)
+
+#define VG_STATUS _IOWR ( 0xfe, 0x05, 1)
+#define VG_STATUS_GET_COUNT _IOWR ( 0xfe, 0x06, 1)
+#define VG_STATUS_GET_NAMELIST _IOWR ( 0xfe, 0x07, 1)
+
+#define VG_SET_EXTENDABLE _IOW ( 0xfe, 0x08, 1)
+
+
+/* logical volume */
+#define LV_CREATE _IOW ( 0xfe, 0x20, 1)
+#define LV_REMOVE _IOW ( 0xfe, 0x21, 1)
+
+#define LV_ACTIVATE _IO ( 0xfe, 0x22)
+#define LV_DEACTIVATE _IO ( 0xfe, 0x23)
+
+#define LV_EXTEND _IOW ( 0xfe, 0x24, 1)
+#define LV_REDUCE _IOW ( 0xfe, 0x25, 1)
+
+#define LV_STATUS_BYNAME _IOWR ( 0xfe, 0x26, 1)
+#define LV_STATUS_BYINDEX _IOWR ( 0xfe, 0x27, 1)
+
+#define LV_SET_ACCESS _IOW ( 0xfe, 0x28, 1)
+#define LV_SET_ALLOCATION _IOW ( 0xfe, 0x29, 1)
+#define LV_SET_STATUS _IOW ( 0xfe, 0x2a, 1)
+
+#define LE_REMAP _IOW ( 0xfe, 0x2b, 1)
+
+
+/* physical volume */
+#define PV_STATUS _IOWR ( 0xfe, 0x40, 1)
+#define PV_CHANGE _IOWR ( 0xfe, 0x41, 1)
+#define PV_FLUSH _IOW ( 0xfe, 0x42, 1)
+
+/* physical extent */
+#define PE_LOCK_UNLOCK _IOW ( 0xfe, 0x50, 1)
+
+/* i/o protocol version */
+#define LVM_GET_IOP_VERSION _IOR ( 0xfe, 0x98, 1)
+
+#ifdef LVM_TOTAL_RESET
+/* special reset function for testing purposes */
+#define LVM_RESET _IO ( 0xfe, 0x99)
+#endif
+
+/* lock the logical volume manager */
+#define LVM_LOCK_LVM _IO ( 0xfe, 0x100)
+/* END ioctls */
+
+
+/*
+ * Status flags
+ */
+/* volume group */
+#define VG_ACTIVE 0x01 /* vg_status */
+#define VG_EXPORTED 0x02 /* " */
+#define VG_EXTENDABLE 0x04 /* " */
+
+#define VG_READ 0x01 /* vg_access */
+#define VG_WRITE 0x02 /* " */
+
+/* logical volume */
+#define LV_ACTIVE 0x01 /* lv_status */
+#define LV_SPINDOWN 0x02 /* " */
+
+#define LV_READ 0x01 /* lv_access */
+#define LV_WRITE 0x02 /* " */
+#define LV_SNAPSHOT 0x04 /* " */
+#define LV_SNAPSHOT_ORG 0x08 /* " */
+
+#define LV_BADBLOCK_ON 0x01 /* lv_badblock */
+
+#define LV_STRICT 0x01 /* lv_allocation */
+#define LV_CONTIGUOUS 0x02 /* " */
+
+/* physical volume */
+#define PV_ACTIVE 0x01 /* pv_status */
+#define PV_ALLOCATABLE 0x02 /* pv_allocatable */
+
+
+/*
+ * Structure definitions core/disk follow
+ *
+ * conditional conversion takes place on big endian architectures
+ * in functions * pv_copy_*(), vg_copy_*() and lv_copy_*()
+ *
+ */
+
+#define NAME_LEN 128 /* don't change!!! */
+#define UUID_LEN 16 /* don't change!!! */
+
+/* remap physical sector/rdev pairs */
+typedef struct
+{
+ struct list_head hash;
+ ulong rsector_org;
+ kdev_t rdev_org;
+ ulong rsector_new;
+ kdev_t rdev_new;
+} lv_block_exception_t;
+
+
+/* disk stored pe information */
+typedef struct
+ {
+ uint16_t lv_num;
+ uint16_t le_num;
+ }
+disk_pe_t;
+
+/* disk stored PV, VG, LV and PE size and offset information */
+typedef struct
+ {
+ uint32_t base;
+ uint32_t size;
+ }
+lvm_disk_data_t;
+
+
+/*
+ * Structure Physical Volume (PV) Version 1
+ */
+
+/* core */
+typedef struct
+ {
+ uint8_t id[2]; /* Identifier */
+ uint16_t version; /* HM lvm version */
+ lvm_disk_data_t pv_on_disk;
+ lvm_disk_data_t vg_on_disk;
+ lvm_disk_data_t pv_namelist_on_disk;
+ lvm_disk_data_t lv_on_disk;
+ lvm_disk_data_t pe_on_disk;
+ uint8_t pv_name[NAME_LEN];
+ uint8_t vg_name[NAME_LEN];
+ uint8_t system_id[NAME_LEN]; /* for vgexport/vgimport */
+ kdev_t pv_dev;
+ uint32_t pv_number;
+ uint32_t pv_status;
+ uint32_t pv_allocatable;
+ uint32_t pv_size; /* HM */
+ uint32_t lv_cur;
+ uint32_t pe_size;
+ uint32_t pe_total;
+ uint32_t pe_allocated;
+ uint32_t pe_stale; /* for future use */
+
+ disk_pe_t *pe; /* HM */
+ struct inode *inode; /* HM */
+ }
+pv_v1_t;
+
+/* disk */
+typedef struct
+ {
+ uint8_t id[2]; /* Identifier */
+ uint16_t version; /* HM lvm version */
+ lvm_disk_data_t pv_on_disk;
+ lvm_disk_data_t vg_on_disk;
+ lvm_disk_data_t pv_namelist_on_disk;
+ lvm_disk_data_t lv_on_disk;
+ lvm_disk_data_t pe_on_disk;
+ uint8_t pv_name[NAME_LEN];
+ uint8_t vg_name[NAME_LEN];
+ uint8_t system_id[NAME_LEN]; /* for vgexport/vgimport */
+ uint32_t pv_major;
+ uint32_t pv_number;
+ uint32_t pv_status;
+ uint32_t pv_allocatable;
+ uint32_t pv_size; /* HM */
+ uint32_t lv_cur;
+ uint32_t pe_size;
+ uint32_t pe_total;
+ uint32_t pe_allocated;
+ }
+pv_disk_v1_t;
+
+
+/*
+ * Structure Physical Volume (PV) Version 2 (future!)
+ */
+
+typedef struct
+ {
+ uint8_t id[2]; /* Identifier */
+ uint16_t version; /* HM lvm version */
+ lvm_disk_data_t pv_on_disk;
+ lvm_disk_data_t vg_on_disk;
+ lvm_disk_data_t pv_uuid_on_disk;
+ lvm_disk_data_t lv_on_disk;
+ lvm_disk_data_t pe_on_disk;
+ uint8_t pv_name[NAME_LEN];
+ uint8_t vg_name[NAME_LEN];
+ uint8_t system_id[NAME_LEN]; /* for vgexport/vgimport */
+ kdev_t pv_dev;
+ uint32_t pv_number;
+ uint32_t pv_status;
+ uint32_t pv_allocatable;
+ uint32_t pv_size; /* HM */
+ uint32_t lv_cur;
+ uint32_t pe_size;
+ uint32_t pe_total;
+ uint32_t pe_allocated;
+ uint32_t pe_stale; /* for future use */
+ disk_pe_t *pe; /* HM */
+ struct inode *inode; /* HM */
+ /* delta to version 1 starts here */
+ uint8_t pv_uuid[UUID_LEN];
+ uint32_t pv_atime; /* PV access time */
+ uint32_t pv_ctime; /* PV creation time */
+ uint32_t pv_mtime; /* PV modification time */
+ }
+pv_v2_t;
+
+
+/*
+ * Structures for Logical Volume (LV)
+ */
+
+/* core PE information */
+typedef struct
+ {
+ kdev_t dev;
+ uint32_t pe; /* to be changed if > 2TB */
+ uint32_t reads;
+ uint32_t writes;
+ }
+pe_t;
+
+typedef struct
+ {
+ uint8_t lv_name[NAME_LEN];
+ kdev_t old_dev;
+ kdev_t new_dev;
+ ulong old_pe;
+ ulong new_pe;
+ }
+le_remap_req_t;
+
+
+
+/*
+ * Structure Logical Volume (LV) Version 1
+ */
+
+/* disk */
+typedef struct
+ {
+ uint8_t lv_name[NAME_LEN];
+ uint8_t vg_name[NAME_LEN];
+ uint32_t lv_access;
+ uint32_t lv_status;
+ uint32_t lv_open; /* HM */
+ uint32_t lv_dev; /* HM */
+ uint32_t lv_number; /* HM */
+ uint32_t lv_mirror_copies; /* for future use */
+ uint32_t lv_recovery; /* " */
+ uint32_t lv_schedule; /* " */
+ uint32_t lv_size;
+ uint32_t dummy;
+ uint32_t lv_current_le; /* for future use */
+ uint32_t lv_allocated_le;
+ uint32_t lv_stripes;
+ uint32_t lv_stripesize;
+ uint32_t lv_badblock; /* for future use */
+ uint32_t lv_allocation;
+ uint32_t lv_io_timeout; /* for future use */
+ uint32_t lv_read_ahead; /* HM, for future use */
+ }
+lv_disk_v1_t;
+
+
+/*
+ * Structure Logical Volume (LV) Version 2
+ */
+
+/* core */
+typedef struct lv_v2
+ {
+ uint8_t lv_name[NAME_LEN];
+ uint8_t vg_name[NAME_LEN];
+ uint32_t lv_access;
+ uint32_t lv_status;
+ uint32_t lv_open; /* HM */
+ kdev_t lv_dev; /* HM */
+ uint32_t lv_number; /* HM */
+ uint32_t lv_mirror_copies; /* for future use */
+ uint32_t lv_recovery; /* " */
+ uint32_t lv_schedule; /* " */
+ uint32_t lv_size;
+ pe_t *lv_current_pe; /* HM */
+ uint32_t lv_current_le; /* for future use */
+ uint32_t lv_allocated_le;
+ uint32_t lv_stripes;
+ uint32_t lv_stripesize;
+ uint32_t lv_badblock; /* for future use */
+ uint32_t lv_allocation;
+ uint32_t lv_io_timeout; /* for future use */
+ uint32_t lv_read_ahead;
+
+ /* delta to version 1 starts here */
+ struct lv_v2 *lv_snapshot_org;
+ struct lv_v2 *lv_snapshot_prev;
+ struct lv_v2 *lv_snapshot_next;
+ lv_block_exception_t *lv_block_exception;
+ uint32_t lv_remap_ptr;
+ uint32_t lv_remap_end;
+ uint32_t lv_chunk_size;
+ uint32_t lv_snapshot_minor;
+ struct kiobuf * lv_iobuf;
+ struct semaphore lv_snapshot_sem;
+ struct list_head * lv_snapshot_hash_table;
+ unsigned long lv_snapshot_hash_mask;
+} lv_v2_t;
+
+/* disk */
+typedef struct
+ {
+ uint8_t lv_name[NAME_LEN];
+ uint8_t vg_name[NAME_LEN];
+ uint32_t lv_access;
+ uint32_t lv_status;
+ uint32_t lv_open; /* HM */
+ uint32_t lv_dev; /* HM */
+ uint32_t lv_number; /* HM */
+ uint32_t lv_mirror_copies; /* for future use */
+ uint32_t lv_recovery; /* " */
+ uint32_t lv_schedule; /* " */
+ uint32_t lv_size;
+ uint32_t dummy;
+ uint32_t lv_current_le; /* for future use */
+ uint32_t lv_allocated_le;
+ uint32_t lv_stripes;
+ uint32_t lv_stripesize;
+ uint32_t lv_badblock; /* for future use */
+ uint32_t lv_allocation;
+ uint32_t lv_io_timeout; /* for future use */
+ uint32_t lv_read_ahead; /* HM, for future use */
+ }
+lv_disk_v2_t;
+
+
+/*
+ * Structure Volume Group (VG) Version 1
+ */
+
+typedef struct
+ {
+ uint8_t vg_name[NAME_LEN]; /* volume group name */
+ uint32_t vg_number; /* volume group number */
+ uint32_t vg_access; /* read/write */
+ uint32_t vg_status; /* active or not */
+ uint32_t lv_max; /* maximum logical volumes */
+ uint32_t lv_cur; /* current logical volumes */
+ uint32_t lv_open; /* open logical volumes */
+ uint32_t pv_max; /* maximum physical volumes */
+ uint32_t pv_cur; /* current physical volumes FU */
+ uint32_t pv_act; /* active physical volumes */
+ uint32_t dummy; /* was obsolete max_pe_per_pv */
+ uint32_t vgda; /* volume group descriptor arrays FU */
+ uint32_t pe_size; /* physical extent size in sectors */
+ uint32_t pe_total; /* total of physical extents */
+ uint32_t pe_allocated; /* allocated physical extents */
+ uint32_t pvg_total; /* physical volume groups FU */
+ struct proc_dir_entry *proc;
+ pv_t *pv[ABS_MAX_PV + 1]; /* physical volume struct pointers */
+ lv_t *lv[ABS_MAX_LV + 1]; /* logical volume struct pointers */
+ }
+vg_v1_t;
+
+typedef struct
+ {
+ uint8_t vg_name[NAME_LEN]; /* volume group name */
+ uint32_t vg_number; /* volume group number */
+ uint32_t vg_access; /* read/write */
+ uint32_t vg_status; /* active or not */
+ uint32_t lv_max; /* maximum logical volumes */
+ uint32_t lv_cur; /* current logical volumes */
+ uint32_t lv_open; /* open logical volumes */
+ uint32_t pv_max; /* maximum physical volumes */
+ uint32_t pv_cur; /* current physical volumes FU */
+ uint32_t pv_act; /* active physical volumes */
+ uint32_t dummy;
+ uint32_t vgda; /* volume group descriptor arrays FU */
+ uint32_t pe_size; /* physical extent size in sectors */
+ uint32_t pe_total; /* total of physical extents */
+ uint32_t pe_allocated; /* allocated physical extents */
+ uint32_t pvg_total; /* physical volume groups FU */
+ }
+vg_disk_v1_t;
+
+/*
+ * Structure Volume Group (VG) Version 2
+ */
+
+typedef struct
+ {
+ uint8_t vg_name[NAME_LEN]; /* volume group name */
+ uint32_t vg_number; /* volume group number */
+ uint32_t vg_access; /* read/write */
+ uint32_t vg_status; /* active or not */
+ uint32_t lv_max; /* maximum logical volumes */
+ uint32_t lv_cur; /* current logical volumes */
+ uint32_t lv_open; /* open logical volumes */
+ uint32_t pv_max; /* maximum physical volumes */
+ uint32_t pv_cur; /* current physical volumes FU */
+ uint32_t pv_act; /* future: active physical volumes */
+ uint32_t max_pe_per_pv; /* OBSOLETE maximum PE/PV */
+ uint32_t vgda; /* volume group descriptor arrays FU */
+ uint32_t pe_size; /* physical extent size in sectors */
+ uint32_t pe_total; /* total of physical extents */
+ uint32_t pe_allocated; /* allocated physical extents */
+ uint32_t pvg_total; /* physical volume groups FU */
+ struct proc_dir_entry *proc;
+ pv_t *pv[ABS_MAX_PV + 1]; /* physical volume struct pointers */
+ lv_t *lv[ABS_MAX_LV + 1]; /* logical volume struct pointers */
+ /* delta to version 1 starts here */
+ uint8_t vg_uuid[UUID_LEN]; /* volume group UUID */
+ time_t vg_atime; /* VG access time */
+ time_t vg_ctime; /* VG creation time */
+ time_t vg_mtime; /* VG modification time */
+ }
+vg_v2_t;
+
+
+/*
+ * Timekeeping structure on disk (0.7 feature)
+ *
+ * Holds several timestamps for the start/stop times of
+ * non-atomic VGDA disk i/o operations
+ *
+ */
+
+typedef struct
+ {
+ uint32_t seconds; /* seconds since the epoch */
+ uint32_t jiffies; /* micro timer */
+ }
+lvm_time_t;
+
+#define TIMESTAMP_ID_SIZE 2
+typedef struct
+ {
+ uint8_t id[TIMESTAMP_ID_SIZE]; /* Identifier */
+ lvm_time_t pv_vg_lv_pe_io_begin;
+ lvm_time_t pv_vg_lv_pe_io_end;
+ lvm_time_t pv_io_begin;
+ lvm_time_t pv_io_end;
+ lvm_time_t vg_io_begin;
+ lvm_time_t vg_io_end;
+ lvm_time_t lv_io_begin;
+ lvm_time_t lv_io_end;
+ lvm_time_t pe_io_begin;
+ lvm_time_t pe_io_end;
+ lvm_time_t pe_move_io_begin;
+ lvm_time_t pe_move_io_end;
+ uint8_t dummy[LVM_TIMESTAMP_DISK_SIZE -
+ TIMESTAMP_ID_SIZE -
+ 12 * sizeof (lvm_time_t)];
+ /* ATTENTION ^^ */
+ }
+timestamp_disk_t;
+
+/* same on disk and in core so far */
+typedef timestamp_disk_t timestamp_t;
+
+/* function identifiers for timestamp actions */
+typedef enum
+ {
+ PV_VG_LV_PE_IO_BEGIN,
+ PV_VG_LV_PE_IO_END,
+ PV_IO_BEGIN,
+ PV_IO_END,
+ VG_IO_BEGIN,
+ VG_IO_END,
+ LV_IO_BEGIN,
+ LV_IO_END,
+ PE_IO_BEGIN,
+ PE_IO_END,
+ PE_MOVE_IO_BEGIN,
+ PE_MOVE_IO_END
+ }
+ts_fct_id_t;
+
+
+/*
+ * Request structures for ioctls
+ */
+
+/* Request structure PV_STATUS */
+typedef struct
+ {
+ char pv_name[NAME_LEN];
+ pv_t *pv;
+ }
+pv_status_req_t, pv_change_req_t;
+
+/* Request structure PV_FLUSH */
+typedef struct
+ {
+ char pv_name[NAME_LEN];
+ kdev_t pv_dev;
+ }
+pv_flush_req_t;
+
+
+/* Request structure PE_MOVE */
+typedef struct
+ {
+ enum
+ {
+ LOCK_PE, UNLOCK_PE
+ }
+ lock;
+ struct
+ {
+ kdev_t lv_dev;
+ kdev_t pv_dev;
+ uint32_t pv_offset;
+ }
+ data;
+ }
+pe_lock_req_t;
+
+
+/* Request structure LV_STATUS_BYNAME */
+typedef struct
+ {
+ char lv_name[NAME_LEN];
+ lv_t *lv;
+ }
+lv_status_byname_req_t, lv_req_t;
+
+/* Request structure LV_STATUS_BYINDEX */
+typedef struct
+ {
+ ulong lv_index;
+ lv_t *lv;
+ }
+lv_status_byindex_req_t;
+
+#endif /* #ifndef _LVM_H_INCLUDE */
#define IDE4_MAJOR 56
#define IDE5_MAJOR 57
+#define LVM_BLK_MAJOR 58 /* Logical Volume Manager */
+
#define SCSI_DISK1_MAJOR 65
#define SCSI_DISK2_MAJOR 66
#define SCSI_DISK3_MAJOR 67
#define SCSI_DISK7_MAJOR 71
-#define LVM_BLK_MAJOR 58 /* Logical Volume Manager */
-
#define COMPAQ_SMART2_MAJOR 72
#define COMPAQ_SMART2_MAJOR1 73
#define COMPAQ_SMART2_MAJOR2 74
#define COMPAQ_SMART2_MAJOR6 78
#define COMPAQ_SMART2_MAJOR7 79
-#define LVM_BLK_MAJOR 58 /* Logical Volume Manager */
-
#define SPECIALIX_NORMAL_MAJOR 75
#define SPECIALIX_CALLOUT_MAJOR 76
#define DASD_MAJOR 94 /* Official assignations from Peter */
-#define LVM_CHAR_MAJOR 109 /* Logical Volume Manager */
-
#define MDISK_MAJOR 95 /* Official assignations from Peter */
#define I2O_MAJOR 80 /* 80->87 */
#define PHONE_MAJOR 100
+#define LVM_CHAR_MAJOR 109 /* Logical Volume Manager */
+
#define RTF_MAJOR 150
#define RAW_MAJOR 162
netif_wake_queue(dev);
}
+/* Use this variant when it is known for sure that it
+ * is executing from interrupt context.
+ */
extern __inline__ void dev_kfree_skb_irq(struct sk_buff *skb)
{
if (atomic_dec_and_test(&skb->users)) {
}
}
+/* Use this variant in places where it could be invoked
+ * either from interrupt or non-interrupt context.
+ */
+extern __inline__ void dev_kfree_skb_any(struct sk_buff *skb)
+{
+ if (in_irq())
+ dev_kfree_skb_irq(skb);
+ else
+ dev_kfree_skb(skb);
+}
#define HAVE_NETIF_RX 1
extern void netif_rx(struct sk_buff *skb);
int nfs3svc_encode_entry_plus(struct readdir_cd *, const char *name,
int namlen, off_t offset, ino_t ino);
-#ifdef __KERNEL__
-
-/*
- * This is needed in nfs_readdir for encoding NFS3 directory cookies.
- */
-static inline u32 *
-enc64(u32 *p, u64 val)
-{
- *p++ = htonl(val >> 32);
- *p++ = htonl(val & 0xffffffff);
- return p;
-}
-
-#endif /* __KERNEL__ */
#endif /* _LINUX_NFSD_XDR3_H */
* INIT_TASK is used to set up the first task table, touch at
* your own risk!. Base=0, limit=0x1fffff (=2MB)
*/
-#define INIT_TASK(name) \
-/* state etc */ { 0,0,0,KERNEL_DS,&default_exec_domain,0, \
-/* avg_slice */ 0, -1, \
-/* counter */ DEF_PRIORITY,DEF_PRIORITY,SCHED_OTHER, \
-/* mm */ NULL, &init_mm, \
-/* has_cpu */ 0,0, \
-/* run_list */ LIST_HEAD_INIT(init_task.run_list), \
-/* next_task */ &init_task,&init_task, \
-/* last_proc */ 0, \
-/* binfmt */ NULL, \
-/* ec,brk... */ 0,0,0,0,0,0, \
-/* pid etc.. */ 0,0,0,0,0, \
-/* proc links*/ &init_task,&init_task,NULL,NULL,NULL, \
-/* pidhash */ NULL, NULL, \
-/* chld wait */ __WAIT_QUEUE_HEAD_INITIALIZER(name.wait_chldexit), NULL, \
-/* timeout */ 0,0,0,0,0,0,0, \
-/* timer */ { NULL, NULL, 0, 0, it_real_fn }, \
-/* utime */ {0,0,0,0},0, \
-/* per CPU times */ {0, }, {0, }, \
-/* flt */ 0,0,0,0,0,0, \
-/* swp */ 0, \
-/* process credentials */ \
-/* uid etc */ 0,0,0,0,0,0,0,0, \
-/* suppl grps*/ 0, {0,}, \
-/* caps */ CAP_INIT_EFF_SET,CAP_INIT_INH_SET,CAP_FULL_SET, \
-/* user */ NULL, \
-/* rlimits */ INIT_RLIMITS, \
-/* math */ 0, \
-/* comm */ "swapper", \
-/* fs info */ 0,NULL, \
-/* ipc */ NULL, NULL, \
-/* thread */ INIT_THREAD, \
-/* fs */ &init_fs, \
-/* files */ &init_files, \
-/* signals */ SPIN_LOCK_UNLOCKED, &init_signals, {{0}}, {{0}}, NULL, &init_task.sigqueue, 0, 0, \
-/* exec cts */ 0,0, \
-/* exit_sem */ __MUTEX_INITIALIZER(name.exit_sem), \
+#define INIT_TASK(tsk) \
+{ \
+ state: 0, \
+ flags: 0, \
+ sigpending: 0, \
+ addr_limit: KERNEL_DS, \
+ exec_domain: &default_exec_domain, \
+ lock_depth: -1, \
+ counter: DEF_PRIORITY, \
+ priority: DEF_PRIORITY, \
+ policy: SCHED_OTHER, \
+ mm: NULL, \
+ active_mm: &init_mm, \
+ run_list: LIST_HEAD_INIT(tsk.run_list), \
+ next_task: &tsk, \
+ prev_task: &tsk, \
+ p_opptr: &tsk, \
+ p_pptr: &tsk, \
+ wait_chldexit: __WAIT_QUEUE_HEAD_INITIALIZER(tsk.wait_chldexit),\
+ real_timer: { \
+ function: it_real_fn \
+ }, \
+ cap_effective: CAP_INIT_EFF_SET, \
+ cap_inheritable: CAP_INIT_INH_SET, \
+ cap_permitted: CAP_FULL_SET, \
+ rlim: INIT_RLIMITS, \
+ comm: "swapper", \
+ thread: INIT_THREAD, \
+ fs: &init_fs, \
+ files: &init_files, \
+ sigmask_lock: SPIN_LOCK_UNLOCKED, \
+ sig: &init_signals, \
+ signal: {{0}}, \
+ blocked: {{0}}, \
+ sigqueue: NULL, \
+ sigqueue_tail: &tsk.sigqueue, \
+ exit_sem: __MUTEX_INITIALIZER(tsk.exit_sem) \
}
+
#ifndef INIT_TASK_SIZE
# define INIT_TASK_SIZE 2048*sizeof(long)
#endif
/*
* include/linux/sunrpc/xdr.h
*
- * Copyright (C) 1995, 1996 Olaf Kirch <okir@monad.swb.de>
+ * Copyright (C) 1995-1997 Olaf Kirch <okir@monad.swb.de>
*/
#ifndef _SUNRPC_XDR_H_
u32 * xdr_decode_netobj(u32 *p, struct xdr_netobj *);
u32 * xdr_decode_netobj_fixed(u32 *p, void *obj, unsigned int len);
+/*
+ * Encode/decode 64-bit quantities (NFSv3 support)
+ */
+static inline u32 *
+xdr_encode_hyper(u32 *p, __u64 val)
+{
+ *p++ = htonl(val >> 32);
+ *p++ = htonl(val & 0xFFFFFFFF);
+ return p;
+}
+
+static inline u32 *
+xdr_decode_hyper(u32 *p, __u64 *valp)
+{
+ *valp = ((__u64) ntohl(*p++)) << 32;
+ *valp |= ntohl(*p++);
+ return p;
+}
+
/*
* Adjust iovec to reflect end of xdr'ed data (RPC client XDR)
*/