N: Torben Mathiasen
E: torben.mathiasen@compaq.com
-E: tmm@image.dk
+E: torben@kernel.dk
W: http://tlan.kernel.dk
D: ThunderLAN maintainer
D: ThunderLAN updates and other kernel fixes.
S: Germany
N: Joerg Reuter
-E: jreuter@poboxes.com
-W: http://poboxes.com/jreuter/
-W: http://qsl.net/dl1bke/
+E: jreuter@yaina.de
+W: http://yaina.de/jreuter/
+W: http://www.qsl.net/dl1bke/
D: Generic Z8530 driver, AX.25 DAMA slave implementation
D: Several AX.25 hacks
Internet:
=========
-1. ftp://ftp.ccac.rwth-aachen.de/pub/jr/z8530drv-utils-3.0-1.tar.gz
+1. ftp://ftp.ccac.rwth-aachen.de/pub/jr/z8530drv-utils_3.0-3.tar.gz
-2. ftp://ftp.pspt.fi/pub/ham/linux/ax25/z8530drv-utils-3.0-1.tar.gz
-
-3. ftp://ftp.ucsd.edu/hamradio/packet/tcpip/incoming/z8530drv-utils-3.0.tar.gz
- If you can't find it there, try .../tcpip/linux/z8530drv-utils-3.0.tar.gz
+2. ftp://ftp.pspt.fi/pub/ham/linux/ax25/z8530drv-utils_3.0-3.tar.gz
Please note that the information in this document may be hopelessly outdated.
A new version of the documentation, along with links to other important
Linux Kernel AX.25 documentation and programs, is available on
-http://www.rat.de/jr
+http://yaina.de/jreuter
-----------------------------------------------------------------------------
********************************************************************
- (c) 1993,1998 by Joerg Reuter DL1BKE <jreuter@poboxes.com>
+ (c) 1993,2000 by Joerg Reuter DL1BKE <jreuter@yaina.de>
portions (c) 1993 Guido ten Dolle PE1NNZ
in the Linux standard distribution and their support.
Joerg Reuter ampr-net: dl1bke@db0pra.ampr.org
- AX-25 : DL1BKE @ DB0ACH.#NRW.DEU.EU
- Internet: jreuter@poboxes.com
- WWW : http://www.rat.de/jr/
+ AX-25 : DL1BKE @ DB0ABH.#BAY.DEU.EU
+ Internet: jreuter@yaina.de
+ WWW : http://yaina.de/jreuter
free_irq() - Release Device Ownership
A device driver may call free_irq() to release ownership of a previously
-aquired device.
+acquired device.
void free_irq( unsigned int irq,
void *dev_id);
from different people about how locking and synchronization is done
in the Linux vm code.
-vmlist_access_lock/vmlist_modify_lock
+page_table_lock
--------------------------------------
Page stealers pick processes out of the process pool and scan for
of the victim mm, a mm_count inc and a mmdrop are done in swap_out().
Page stealers hold kernel_lock to protect against a bunch of races.
The vma list of the victim mm is also scanned by the stealer,
-and the vmlist_lock is used to preserve list sanity against the
+and the page_table_lock is used to preserve list sanity against the
process adding/deleting to the list. This also guarantees existence
of the vma. Vma existence is not guaranteed once try_to_swap_out()
-drops the vmlist lock. To guarantee the existence of the underlying
+drops the page_table_lock. To guarantee the existence of the underlying
file structure, a get_file is done before the swapout() method is
invoked. The page passed into swapout() is guaranteed not to be reused
for a different purpose because the page reference count due to being
(ie all vm system calls and faults), and from ptrace, swapin due to
swap deletion etc.
2. To modify the vmlist (add/delete or change fields in an element),
-you must also hold vmlist_modify_lock, to guard against page stealers
+you must also hold page_table_lock, to guard against page stealers
scanning the list.
3. To scan the vmlist (find_vma()), you must either
a. grab mmap_sem, which should be done by all cases except
page stealer.
or
- b. grab vmlist_access_lock, only done by page stealer.
-4. While holding the vmlist_modify_lock, you must be able to guarantee
+ b. grab page_table_lock, only done by page stealer.
+4. While holding the page_table_lock, you must be able to guarantee
that no code path will lead to page stealing. A better guarantee is
to claim non-sleepability, which ensures that you are not sleeping
for a lock whose holder might in turn be doing page stealing.
-5. You must be able to guarantee that while holding vmlist_modify_lock
-or vmlist_access_lock of mm A, you will not try to get either lock
+5. You must be able to guarantee that while holding the page_table_lock
+of mm A, you will not try to get the lock
for mm B.
The caveats are:
The update of mmap_cache is racy (page stealer can race with other code
that invokes find_vma with mmap_sem held), but that is okay, since it
is a hint. This can be fixed, if desired, by having find_vma grab the
-vmlist lock.
+page_table_lock.
Code paths that add/delete elements from the vmlist chain are
expand_stack(), it is hard to come up with a destructive scenario without
having the vmlist protection in this case.
-The vmlist lock nests with the inode i_shared_lock and the kmem cache
+The page_table_lock nests with the inode i_shared_lock and the kmem cache
c_spinlock spinlocks. This is okay, since code that holds i_shared_lock
never asks for memory, and the kmem code asks for pages after dropping
-c_spinlock. The vmlist lock also nests with pagecache_lock and
+c_spinlock. The page_table_lock also nests with pagecache_lock and
pagemap_lru_lock spinlocks, and no code asks for memory with these locks
held.
-The vmlist lock is grabbed while holding the kernel_lock spinning monitor.
+The page_table_lock is grabbed while holding the kernel_lock spinning monitor.
-The vmlist lock can be a sleeping or spin lock. In either case, care
-must be taken that it is not held on entry to the driver methods, since
-those methods might sleep or ask for memory, causing deadlocks.
-
-The current implementation of the vmlist lock uses the page_table_lock,
-which is also the spinlock that page stealers use to protect changes to
-the victim process' ptes. Thus we have a reduction in the total number
-of locks.
+The page_table_lock is a spin lock.
swap_list_lock/swap_device_lock
-------------------------------
DAMA SLAVE for AX.25
P: Joerg Reuter
-M: jreuter@poboxes.com
-W: http://poboxes.com/jreuter/
-W: http://qsl.net/dl1bke/
+M: jreuter@yaina.de
+W: http://yaina.de/jreuter/
+W: http://www.qsl.net/dl1bke/
L: linux-hams@vger.kernel.org
S: Maintained
FILE LOCKING (flock() and fcntl()/lockf())
P: Matthew Wilcox
-M: willy@thepuffingroup.com
-L: linux-kernel@vger.kernel.org
+M: matthew@wil.cx
+L: linux-fsdevel@vger.kernel.org
S: Maintained
FPU EMULATOR
TLAN NETWORK DRIVER
P: Torben Mathiasen
M: torben.mathiasen@compaq.com
-M: tmm@image.dk
+M: torben@kernel.dk
L: tlan@vuser.vu.union.edu
L: linux-net@vger.kernel.org
W: http://tlan.kernel.dk
Z8530 DRIVER FOR AX.25
P: Joerg Reuter
-M: jreuter@poboxes.com
-W: http://poboxes.com/jreuter/
-W: http://qsl.net/dl1bke/
+M: jreuter@yaina.de
+W: http://yaina.de/jreuter/
+W: http://www.qsl.net/dl1bke/
L: linux-hams@vger.kernel.org
S: Maintained
}
static struct console srmcons = {
- "srm0",
- srm_console_write,
- NULL,
- srm_console_device,
- srm_console_wait_key,
- NULL,
- srm_console_setup,
- CON_PRINTBUFFER | CON_ENABLED, /* fake it out */
- -1,
- 0,
- NULL
+ name: "srm0",
+ write: srm_console_write,
+ device: srm_console_device,
+ wait_key: srm_console_wait_key,
+ setup: srm_console_setup,
+ flags: CON_PRINTBUFFER | CON_ENABLED, /* fake it out */
+ index: -1,
};
#else
/*
* Semaphores are implemented using a two-way counter:
* The "count" variable is decremented for each process
- * that tries to aquire the semaphore, while the "sleeping"
- * variable is a count of such aquires.
+ * that tries to acquire the semaphore, while the "sleeping"
+ * variable is a count of such acquires.
*
* Notably, the inline "up()" and "down()" functions can
* efficiently test if they need to do any extra work (up
while (atomic_read(&sem->count) < 0) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
if (atomic_read(&sem->count) >= 0)
- break; /* we must attempt to aquire or bias the lock */
+ break; /* we must attempt to acquire or bias the lock */
schedule();
}
outb %al, $0x60
call empty_8042
+ movb $02, %al # "fast A20" version
+ outb %al, $0x92 # some chips have only this
+
# wait until a20 really *is* enabled; it can take a fair amount of
# time on certain systems; Toshiba Tecras are known to have this
# problem. The memory location used here (0x200) is the int 0x80
* FPU lazy state save handling.
*/
-void save_fpu( struct task_struct *tsk )
-{
- if ( HAVE_FXSR ) {
- asm volatile( "fxsave %0 ; fwait"
- : "=m" (tsk->thread.i387.fxsave) );
- } else {
- asm volatile( "fnsave %0 ; fwait"
- : "=m" (tsk->thread.i387.fsave) );
- }
- tsk->flags &= ~PF_USEDFPU;
- stts();
-}
-
void save_init_fpu( struct task_struct *tsk )
{
if ( HAVE_FXSR ) {
/*
* Semaphores are implemented using a two-way counter:
* The "count" variable is decremented for each process
- * that tries to aquire the semaphore, while the "sleeping"
- * variable is a count of such aquires.
+ * that tries to acquire the semaphore, while the "sleeping"
+ * variable is a count of such acquires.
*
* Notably, the inline "up()" and "down()" functions can
* efficiently test if they need to do any extra work (up
while (atomic_read(&sem->count) < 0) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
if (atomic_read(&sem->count) >= 0)
- break; /* we must attempt to aquire or bias the lock */
+ break; /* we must attempt to acquire or bias the lock */
schedule();
}
* Forward port AMD Duron errata T13 from 2.2.17pre
* Dave Jones <davej@suse.de>, August 2000
*
+ * Forward port lots of fixes/improvements from 2.2.18pre
+ * Cyrix III, Pentium IV support.
+ * Dave Jones <davej@suse.de>, October 2000
+ *
*/
/*
extern char _text, _etext, _edata, _end;
extern unsigned long cpu_khz;
+static int disable_x86_serial_nr __initdata = 1;
+
/*
* This is set up by the setup-routine at boot-time
*/
{
unsigned int n, dummy, *v;
- /*
- * Actually we must have cpuid or we could never have
- * figured out that this was AMD/Cyrix/Transmeta
- * from the vendor info :-).
- */
-
cpuid(0x80000000, &n, &dummy, &dummy, &dummy);
if (n < 0x80000004)
return 0;
return 1;
}
+
+static void __init display_cacheinfo(struct cpuinfo_x86 *c)
+{
+ unsigned int n, dummy, ecx, edx;
+
+ cpuid(0x80000000, &n, &dummy, &dummy, &dummy);
+
+ if (n >= 0x80000005) {
+ cpuid(0x80000005, &dummy, &dummy, &ecx, &edx);
+ printk("CPU: L1 I Cache: %dK L1 D Cache: %dK (%d bytes/line)\n",
+ edx>>24, ecx>>24, edx&0xFF);
+ c->x86_cache_size=(ecx>>24)+(edx>>24);
+ }
+
+ if (n < 0x80000006) /* Cyrix just has large L1. */
+ return;
+
+ cpuid(0x80000006, &dummy, &dummy, &ecx, &edx);
+ c->x86_cache_size = ecx >>16;
+
+ /* AMD errata T13 (order #21922) */
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
+ boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 3 &&
+ boot_cpu_data.x86_mask == 0)
+ {
+ c->x86_cache_size = 64;
+ }
+ printk("CPU: L2 Cache: %dK\n", ecx>>16);
+}
+
+
static int __init amd_model(struct cpuinfo_x86 *c)
{
u32 l, h;
unsigned long flags;
- unsigned int n, dummy, ecx, edx;
int mbytes = max_mapnr >> (20-PAGE_SHIFT);
int r=get_model_name(c);
- /*
- * Set MTRR capability flag if appropriate
- */
- if(boot_cpu_data.x86 == 5) {
- if((boot_cpu_data.x86_model == 13) ||
- (boot_cpu_data.x86_model == 9) ||
- ((boot_cpu_data.x86_model == 8) &&
- (boot_cpu_data.x86_mask >= 8)))
- c->x86_capability |= X86_FEATURE_MTRR;
- }
-
- /*
- * Now do the cache operations.
- */
switch(c->x86)
{
case 5:
if(mbytes>4092)
mbytes=4092;
+
rdmsr(0xC0000082, l, h);
if((l&0xFFFF0000)==0)
{
printk(KERN_INFO "Enabling new style K6 write allocation for %d Mb\n",
mbytes);
}
+
+ /* Set MTRR capability flag if appropriate */
+ if((boot_cpu_data.x86_model == 13) ||
+ (boot_cpu_data.x86_model == 9) ||
+ ((boot_cpu_data.x86_model == 8) &&
+ (boot_cpu_data.x86_mask >= 8)))
+ c->x86_capability |= X86_FEATURE_MTRR;
break;
}
+
break;
+
case 6: /* An Athlon/Duron. We can trust the BIOS probably */
break;
}
- cpuid(0x80000000, &n, &dummy, &dummy, &dummy);
- if (n >= 0x80000005) {
- cpuid(0x80000005, &dummy, &dummy, &ecx, &edx);
- printk("CPU: L1 I Cache: %dK L1 D Cache: %dK (%d bytes/line)\n",
- edx>>24, ecx>>24, edx&0xFF);
- c->x86_cache_size=(ecx>>24)+(edx>>24);
- }
-
- /* AMD errata T13 (order #21922) */
- if (boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 3 &&
- boot_cpu_data.x86_mask == 0)
- {
- c->x86_cache_size = 64;
- printk("CPU: L2 Cache: 64K\n");
- } else {
- if (n >= 0x80000006) {
- cpuid(0x80000006, &dummy, &dummy, &ecx, &edx);
- printk("CPU: L2 Cache: %dK\n", ecx>>16);
- c->x86_cache_size=(ecx>>16);
- }
- }
-
+ display_cacheinfo(c);
return r;
}
bug to do with 'hlt'. I've not seen any boards using VSA2
and X doesn't seem to support it either so who cares 8).
VSA1 we work around however.
-
*/
-
+
printk(KERN_INFO "Working around Cyrix MediaGX virtual DMA bugs.\n");
isa_dma_bridge_buggy = 2;
#endif
u32 lo,hi,newlo;
u32 aa,bb,cc,dd;
- switch(c->x86_model) {
- case 4:
- name="C6";
- fcr_set=ECX8|DSMC|EDCTLB|EMMX|ERETSTK;
- fcr_clr=DPDC;
- printk("Disabling bugged TSC.\n");
- c->x86_capability &= ~X86_FEATURE_TSC;
- break;
- case 8:
- switch(c->x86_mask) {
- default:
- name="2";
- break;
- case 7 ... 9:
- name="2A";
- break;
- case 10 ... 15:
- name="2B";
+ switch (c->x86) {
+
+ case 5:
+ switch(c->x86_model) {
+ case 4:
+ name="C6";
+ fcr_set=ECX8|DSMC|EDCTLB|EMMX|ERETSTK;
+ fcr_clr=DPDC;
+ printk("Disabling bugged TSC.\n");
+ c->x86_capability &= ~X86_FEATURE_TSC;
+ break;
+ case 8:
+ switch(c->x86_mask) {
+ default:
+ name="2";
+ break;
+ case 7 ... 9:
+ name="2A";
+ break;
+ case 10 ... 15:
+ name="2B";
+ break;
+ }
+ fcr_set=ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK|E2MMX|EAMD3D;
+ fcr_clr=DPDC;
+ break;
+ case 9:
+ name="3";
+ fcr_set=ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK|E2MMX|EAMD3D;
+ fcr_clr=DPDC;
+ break;
+ case 10:
+ name="4";
+ /* no info on the WC4 yet */
+ break;
+ default:
+ name="??";
+ }
+
+ /* get FCR */
+ rdmsr(0x107, lo, hi);
+
+ newlo=(lo|fcr_set) & (~fcr_clr);
+
+ if (newlo!=lo) {
+ printk("Centaur FCR was 0x%X now 0x%X\n", lo, newlo );
+ wrmsr(0x107, newlo, hi );
+ } else {
+ printk("Centaur FCR is 0x%X\n",lo);
+ }
+ /* Emulate MTRRs using Centaur's MCR. */
+ c->x86_capability |= X86_FEATURE_MTRR;
+ /* Report CX8 */
+ c->x86_capability |= X86_FEATURE_CX8;
+ /* Set 3DNow! on Winchip 2 and above. */
+ if (c->x86_model >=8)
+ c->x86_capability |= X86_FEATURE_AMD3D;
+ /* See if we can find out some more. */
+ cpuid(0x80000000,&aa,&bb,&cc,&dd);
+ if (aa>=0x80000005) { /* Yes, we can. */
+ cpuid(0x80000005,&aa,&bb,&cc,&dd);
+ /* Add L1 data and code cache sizes. */
+ c->x86_cache_size = (cc>>24)+(dd>>24);
+ }
+ sprintf( c->x86_model_id, "WinChip %s", name );
break;
- }
- fcr_set=ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK|E2MMX|EAMD3D;
- fcr_clr=DPDC;
- break;
- case 9:
- name="3";
- fcr_set=ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK|E2MMX|EAMD3D;
- fcr_clr=DPDC;
- break;
- case 10:
- name="4";
- /* no info on the WC4 yet */
- break;
- default:
- name="??";
- }
- /* get FCR */
- rdmsr(0x107, lo, hi);
+ case 6:
+ switch (c->x86_model) {
+ case 6: /* Cyrix III */
+ rdmsr (0x1107, lo, hi);
+ lo |= (1<<1 | 1<<7); /* Report CX8 & enable PGE */
+ wrmsr (0x1107, lo, hi);
- newlo=(lo|fcr_set) & (~fcr_clr);
+ c->x86_capability |= X86_FEATURE_CX8;
+ rdmsr (0x80000001, lo, hi);
+ if (hi & (1<<31))
+ c->x86_capability |= X86_FEATURE_AMD3D;
- if (newlo!=lo) {
- printk("Centaur FCR was 0x%X now 0x%X\n", lo, newlo );
- wrmsr(0x107, newlo, hi );
- } else {
- printk("Centaur FCR is 0x%X\n",lo);
+ get_model_name(c);
+ display_cacheinfo(c);
+ break;
+ }
+ break;
}
- /* Emulate MTRRs using Centaur's MCR. */
- c->x86_capability |= X86_FEATURE_MTRR;
- /* Report CX8 */
- c->x86_capability |= X86_FEATURE_CX8;
- /* Set 3DNow! on Winchip 2 and above. */
- if (c->x86_model >=8)
- c->x86_capability |= X86_FEATURE_AMD3D;
- /* See if we can find out some more. */
- cpuid(0x80000000,&aa,&bb,&cc,&dd);
- if (aa>=0x80000005) { /* Yes, we can. */
- cpuid(0x80000005,&aa,&bb,&cc,&dd);
- /* Add L1 data and code cache sizes. */
- c->x86_cache_size = (cc>>24)+(dd>>24);
- }
- sprintf( c->x86_model_id, "WinChip %s", name );
}
+
static void __init transmeta_model(struct cpuinfo_x86 *c)
{
- unsigned int cap_mask, uk, max, dummy, n, ecx, edx;
+ unsigned int cap_mask, uk, max, dummy;
unsigned int cms_rev1, cms_rev2;
unsigned int cpu_rev, cpu_freq, cpu_flags;
char cpu_info[65];
get_model_name(c); /* Same as AMD/Cyrix */
+ display_cacheinfo(c);
/* Print CMS and CPU revision */
cpuid(0x80860000, &max, &dummy, &dummy, &dummy);
wrmsr(0x80860004, ~0, uk);
cpuid(0x00000001, &dummy, &dummy, &dummy, &c->x86_capability);
wrmsr(0x80860004, cap_mask, uk);
-
-
- /* L1/L2 cache */
- cpuid(0x80000000, &n, &dummy, &dummy, &dummy);
-
- if (n >= 0x80000005) {
- cpuid(0x80000005, &dummy, &dummy, &ecx, &edx);
- printk("CPU: L1 I Cache: %dK L1 D Cache: %dK\n",
- ecx>>24, edx>>24);
- c->x86_cache_size=(ecx>>24)+(edx>>24);
- }
- if (n >= 0x80000006) {
- cpuid(0x80000006, &dummy, &dummy, &ecx, &edx);
- printk("CPU: L2 Cache: %dK\n", ecx>>16);
- c->x86_cache_size=(ecx>>16);
- }
}
* to have CPUID. (Thanks to Herbert Oppmann)
*/
-static int deep_magic_nexgen_probe(void)
+static int __init deep_magic_nexgen_probe(void)
{
int ret;
return ret;
}
-static void squash_the_stupid_serial_number(struct cpuinfo_x86 *c)
+static void __init squash_the_stupid_serial_number(struct cpuinfo_x86 *c)
{
- if(c->x86_capability&(1<<18)) {
+ if(c->x86_capability&(X86_FEATURE_PN) && disable_x86_serial_nr) {
/* Disable processor serial number */
unsigned long lo,hi;
rdmsr(0x119,lo,hi);
}
}
+
+int __init x86_serial_nr_setup(char *s)
+{
+ disable_x86_serial_nr = 0;
+ return 1;
+}
+__setup("serialnumber", x86_serial_nr_setup);
+
+
void __init identify_cpu(struct cpuinfo_x86 *c)
{
int i=0;
char *p = NULL;
+ extern void mcheck_init(void);
c->loops_per_sec = loops_per_sec;
c->x86_cache_size = -1;
return;
case X86_VENDOR_INTEL:
-
+
squash_the_stupid_serial_number(c);
-
+ mcheck_init();
+
if (c->cpuid_level > 1) {
/* supports eax=2 call */
int edx, dummy;
}
}
+ /* Pentium IV. */
+ if (c->x86 == 15) {
+ c->x86 = 6;
+ get_model_name(c);
+ goto name_decoded;
+ }
+
/* Names for the Pentium II/Celeron processors
detectable only by also checking the cache size.
Dixon is NOT a Celeron. */
squash_the_stupid_serial_number(c);
return;
}
-
+
/* may be changed in the switch so needs to be after */
if(c->x86_vendor == X86_VENDOR_NEXGEN)
static kdev_t simcons_console_device (struct console *);
struct console hpsim_cons = {
- "simcons",
- simcons_write, /* write */
- NULL, /* read */
- simcons_console_device, /* device */
- simcons_wait_key, /* wait_key */
- NULL, /* unblank */
- simcons_init, /* setup */
- CON_PRINTBUFFER, /* flags */
- -1, /* index */
- 0, /* cflag */
- NULL /* next */
+ name: "simcons",
+ write: simcons_write,
+ device: simcons_console_device,
+ wait_key: simcons_wait_key,
+ setup: simcons_init,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
static int
* Goutham Rao: <goutham.rao@intel.com>
* Skip non-WB memory and ignore empty memory ranges.
*/
+#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/types.h>
// 00/03/29 cfleck Added code to save INIT handoff state in pt_regs format, switch to temp kstack,
// switch modes, jump to C INIT handler
//
+#include <linux/config.h>
+
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/mca_asm.h>
* - as of 2.2.9/2.2.12, the following values are still wrong
* PAL_VM_SUMMARY: key & rid sizes
*/
+#include <linux/config.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/init.h>
/*
* Semaphores are implemented using a two-way counter: The "count"
- * variable is decremented for each process that tries to aquire the
+ * variable is decremented for each process that tries to acquire the
* semaphore, while the "sleepers" variable is a count of such
- * aquires.
+ * acquires.
*
* Notably, the inline "up()" and "down()" functions can efficiently
* test if they need to do any extra work (up needs to do something
}
/*
- * This gets called if we failed to aquire the lock and we are not
+ * This gets called if we failed to acquire the lock and we are not
* biased to acquire the lock. We undo the decrement that was
* done earlier, go to sleep, and then attempt to re-acquire the
* lock afterwards.
while (sem->count < 0) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
if (sem->count >= 0)
- break; /* we must attempt to aquire or bias the lock */
+ break; /* we must attempt to acquire or bias the lock */
schedule();
}
* Application processor startup code, moved from smp.c to better support kernel profile
*/
-#include <linux/config.h>
-
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/init.h>
#endif
static struct console amiga_console_driver = {
- "debug",
- NULL, /* write */
- NULL, /* read */
- NULL, /* device */
- amiga_wait_key, /* wait_key */
- NULL, /* unblank */
- NULL, /* setup */
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "debug",
+ wait_key: amiga_wait_key,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
#ifdef CONFIG_MAGIC_SYSRQ
int atari_SCC_reset_done = 0;
static struct console atari_console_driver = {
- "debug",
- NULL, /* write */
- NULL, /* read */
- NULL, /* device */
- NULL, /* wait_key */
- NULL, /* unblank */
- NULL, /* setup */
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "debug",
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
while (atomic_read(&sem->count) < 0) {
set_task_state(current, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
if (atomic_read(&sem->count) >= 0)
- break; /* we must attempt to aquire or bias the lock */
+ break; /* we must attempt to acquire or bias the lock */
schedule();
}
static int scc_port = -1;
static struct console mac_console_driver = {
- "debug",
- NULL, /* write */
- NULL, /* read */
- NULL, /* device */
- NULL, /* wait_key */
- NULL, /* unblank */
- NULL, /* setup */
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "debug",
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
static int q40_wait_key(struct console *co){return 0;}
static struct console q40_console_driver = {
- "debug",
- NULL, /* write */
- NULL, /* read */
- NULL, /* device */
- q40_wait_key, /* wait_key */
- NULL, /* unblank */
- NULL, /* setup */
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "debug",
+ wait_key: q40_wait_key,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
}
static struct console sercons = {
- "ttyS",
- serial_console_write,
- NULL,
- serial_console_device,
- serial_console_wait_key,
- NULL,
- serial_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: serial_console_write,
+ device: serial_console_device,
+ wait_key: serial_console_wait_key,
+ setup: serial_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
static struct console sercons =
{
- "ttyS",
- prom_console_write,
- NULL,
- prom_console_device,
- prom_console_wait_key,
- NULL,
- prom_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: prom_console_write,
+ device: prom_console_device,
+ wait_key: prom_console_wait_key,
+ setup: prom_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
static struct console sercons =
{
- "ttyS",
- prom_console_write,
- NULL,
- prom_console_device,
- prom_console_wait_key,
- NULL,
- prom_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: prom_console_write,
+ device: prom_console_device,
+ wait_key: prom_console_wait_key,
+ setup: prom_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
static struct console sercons =
{
- "ttyS",
- prom_console_write,
- NULL,
- prom_console_device,
- prom_console_wait_key,
- NULL,
- prom_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: prom_console_write,
+ device: prom_console_device,
+ wait_key: prom_console_wait_key,
+ setup: prom_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
static struct console sercons = {
- "ttyS",
- serial_console_write,
- NULL,
- serial_console_device,
- serial_console_wait_key,
- NULL,
- serial_console_setup,
- CON_PRINTBUFFER,
- CONFIG_SERIAL_CONSOLE_PORT,
- 0,
- NULL
+ name: "ttyS",
+ write: serial_console_write,
+ device: serial_console_device,
+ wait_key: serial_console_wait_key,
+ setup: serial_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: CONFIG_SERIAL_CONSOLE_PORT,
};
/*
static struct console sercons = {
- "ttyS",
- serial_console_write,
- NULL,
- serial_console_device,
- serial_console_wait_key,
- NULL,
- serial_console_setup,
- CON_PRINTBUFFER,
- CONFIG_SERIAL_CONSOLE_PORT,
- 0,
- NULL
+ name: "ttyS",
+ write: serial_console_write,
+ device: serial_console_device,
+ wait_key: serial_console_wait_key,
+ setup: serial_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: CONFIG_SERIAL_CONSOLE_PORT,
};
/*
#endif
static struct console amiga_console_driver = {
- "debug",
- NULL, /* write */
- NULL, /* read */
- NULL, /* device */
- amiga_wait_key, /* wait_key */
- NULL, /* unblank */
- NULL, /* setup */
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "debug",
+ wait_key: amiga_wait_key,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
#ifdef CONFIG_MAGIC_SYSRQ
/*
* Semaphores are implemented using a two-way counter:
* The "count" variable is decremented for each process
- * that tries to aquire the semaphore, while the "sleeping"
- * variable is a count of such aquires.
+ * that tries to acquire the semaphore, while the "sleeping"
+ * variable is a count of such acquires.
*
* Notably, the inline "up()" and "down()" functions can
* efficiently test if they need to do any extra work (up
while (atomic_read(&sem->count) < 0) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
if (atomic_read(&sem->count) >= 0)
- break; /* we must attempt to aquire or bias the lock */
+ break; /* we must attempt to acquire or bias the lock */
schedule();
}
.align 2
tlb_miss_load:
- mov.l 2f, $r0
- mov.l @$r0, $r6
- mov $r15, $r4
- mov.l 1f, $r0
- jmp @$r0
+ bra call_dpf
mov #0, $r5
.align 2
tlb_miss_store:
- mov.l 2f, $r0
- mov.l @$r0, $r6
- mov $r15, $r4
- mov.l 1f, $r0
- jmp @$r0
+ bra call_dpf
mov #1, $r5
.align 2
initial_page_write:
- mov.l 2f, $r0
- mov.l @$r0, $r6
- mov $r15, $r4
- mov.l 1f, $r0
- jmp @$r0
+ bra call_dpf
mov #1, $r5
.align 2
tlb_protection_violation_load:
- mov.l 2f, $r0
- mov.l @$r0, $r6
- mov $r15, $r4
- mov.l 1f, $r0
- jmp @$r0
+ bra call_dpf
mov #0, $r5
.align 2
tlb_protection_violation_store:
- mov.l 2f, $r0
- mov.l @$r0, $r6
- mov $r15, $r4
+ bra call_dpf
+ mov #1, $r5
+
+call_dpf:
mov.l 1f, $r0
+ mov $r5, $r8
+ mov.l @$r0, $r6
+ mov $r6, $r9
+ mov.l 2f, $r0
+ sts $pr, $r10
+ jsr @$r0
+ mov $r15, $r4
+ !
+ tst #0xff, $r0
+ bf/s 0f
+ lds $r10, $pr
+ rts
+ nop
+0: STI()
+ mov.l 3f, $r0
+ mov $r9, $r6
+ mov $r8, $r5
jmp @$r0
- mov #1, $r5
+ mov $r15, $r4
.align 2
-1: .long SYMBOL_NAME(__do_page_fault)
-2: .long MMU_TEA
+1: .long MMU_TEA
+2: .long SYMBOL_NAME(__do_page_fault)
+3: .long SYMBOL_NAME(do_page_fault)
#if defined(CONFIG_DEBUG_KERNEL_WITH_GDB_STUB) || defined(CONFIG_SH_STANDARD_BIOS)
.align 2
while (atomic_read(&sem->count) < 0) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
if (atomic_read(&sem->count) >= 0)
- break; /* we must attempt to aquire or bias the lock */
+ break; /* we must attempt to acquire or bias the lock */
schedule();
}
}
static struct console sh_console = {
- "bios",
- sh_console_write,
- NULL,
- sh_console_device,
- sh_console_wait_key,
- NULL,
- sh_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "bios",
+ write: sh_console_write,
+ device: sh_console_device,
+ wait_key: sh_console_wait_key,
+ setup: sh_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
void sh_console_init(void)
goto no_context;
}
-static int __do_page_fault1(struct pt_regs *regs, unsigned long writeaccess,
- unsigned long address)
+/*
+ * Called with interrupts disabled.
+ */
+asmlinkage int __do_page_fault(struct pt_regs *regs, unsigned long writeaccess,
+ unsigned long address)
{
pgd_t *dir;
pmd_t *pmd;
pte_t entry;
if (address >= VMALLOC_START && address < VMALLOC_END)
- /* We can change the implementation of P3 area pte entries.
- set_pgdir and such. */
dir = pgd_offset_k(address);
else
dir = pgd_offset(current->mm, address);
return 0;
}
-/*
- * Called with interrupt disabled.
- */
-asmlinkage void __do_page_fault(struct pt_regs *regs, unsigned long writeaccess,
- unsigned long address)
-{
- /*
- * XXX: Could you please implement this (calling __do_page_fault1)
- * in assembler language in entry.S?
- */
- if (__do_page_fault1(regs, writeaccess, address) == 0)
- /* Done. */
- return;
- sti();
- do_page_fault(regs, writeaccess, address);
-}
-
void update_mmu_cache(struct vm_area_struct * vma,
unsigned long address, pte_t pte)
{
unsigned long phys_addr, unsigned long flags)
{
unsigned long end;
+ pgprot_t pgprot = __pgprot(_PAGE_PRESENT | _PAGE_RW |
+ _PAGE_DIRTY | _PAGE_ACCESSED |
+ _PAGE_HW_SHARED | _PAGE_FLAGS_HARD | flags);
address &= ~PMD_MASK;
end = address + size;
do {
if (!pte_none(*pte))
printk("remap_area_pte: page already exists\n");
- set_pte(pte, mk_pte_phys(phys_addr, __pgprot(_PAGE_PRESENT | _PAGE_RW |
- _PAGE_DIRTY | _PAGE_ACCESSED | flags)));
+ set_pte(pte, mk_pte_phys(phys_addr, pgprot));
address += PAGE_SIZE;
phys_addr += PAGE_SIZE;
pte++;
}
static int remap_area_pages(unsigned long address, unsigned long phys_addr,
- unsigned long size, unsigned long flags)
+ unsigned long size, unsigned long flags)
{
pgd_t * dir;
unsigned long end = address + size;
phys_addr -= address;
- dir = pgd_offset(&init_mm, address);
+ dir = pgd_offset_k(address);
flush_cache_all();
while (address < end) {
pmd_t *pmd = pmd_alloc_kernel(dir, address);
if (!pmd)
return -ENOMEM;
if (remap_area_pmd(pmd, address, end - address,
- phys_addr + address, flags))
+ phys_addr + address, flags))
return -ENOMEM;
- set_pgdir(address, *dir);
address = (address + PGDIR_SIZE) & PGDIR_MASK;
dir++;
}
-/* $Id: ioport.c,v 1.39 2000/06/20 01:10:00 anton Exp $
+/* $Id: ioport.c,v 1.40 2000/10/10 09:44:46 anton Exp $
* ioport.c: Simple io mapping allocator.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
unsigned long plen;
plen = res->end - res->start + 1;
+ plen = (plen + PAGE_SIZE-1) & PAGE_MASK;
while (plen != 0) {
plen -= PAGE_SIZE;
(*_sparc_unmapioaddr)(res->start + plen);
return;
}
- if (((unsigned long)p & (PAGE_MASK-1)) != 0) {
+ if (((unsigned long)p & (PAGE_SIZE-1)) != 0) {
printk("sbus_free_consistent: unaligned va %p\n", p);
return;
}
if ((res = kmalloc(sizeof(struct resource), GFP_KERNEL)) == NULL) {
free_pages(va, order);
- printk("sbus_alloc_consistent: no core\n");
+ printk("pci_alloc_consistent: no core\n");
return NULL;
}
memset((char*)res, 0, sizeof(struct resource));
if ((res = _sparc_find_resource(&_sparc_dvma,
(unsigned long)p)) == NULL) {
- printk("sbus_free_consistent: cannot free %p\n", p);
+ printk("pci_free_consistent: cannot free %p\n", p);
return;
}
- if (((unsigned long)p & (PAGE_MASK-1)) != 0) {
- printk("sbus_free_consistent: unaligned va %p\n", p);
+ if (((unsigned long)p & (PAGE_SIZE-1)) != 0) {
+ printk("pci_free_consistent: unaligned va %p\n", p);
return;
}
n = (n + PAGE_SIZE-1) & PAGE_MASK;
if ((res->end-res->start)+1 != n) {
- printk("sbus_free_consistent: region 0x%lx asked 0x%lx\n",
+ printk("pci_free_consistent: region 0x%lx asked 0x%lx\n",
(long)((res->end-res->start)+1), (long)n);
return;
}
while (sem->count < 0) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
if (sem->count >= 0)
- break; /* we must attempt to aquire or bias the lock */
+ break; /* we must attempt to acquire or bias the lock */
schedule();
}
}
static struct console prom_console = {
- "PROM", prom_cons_write, 0, 0, 0, 0, 0, CON_PRINTBUFFER, 0, 0, 0
+ name: "PROM",
+ write: prom_cons_write,
+ flags: CON_PRINTBUFFER,
};
#endif
while (sem->count < 0) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
if (sem->count >= 0)
- break; /* we must attempt to aquire or bias the lock */
+ break; /* we must attempt to acquire or bias the lock */
schedule();
}
}
static struct console prom_console = {
- "prom",
- prom_console_write,
- NULL,
- NULL,
- NULL,
- NULL,
- NULL,
- CON_CONSDEV | CON_ENABLED,
- -1,
- 0,
- NULL
+ name: "prom",
+ write: prom_console_write,
+ flags: CON_CONSDEV | CON_ENABLED,
+ index: -1,
};
#define PROM_TRUE -1
#ifdef PROM_DEBUG_CONSOLE
static struct console prom_debug_console = {
- "debug",
- prom_console_write,
- NULL,
- NULL,
- NULL,
- NULL,
- NULL,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "debug",
+ write: prom_console_write,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
#endif
* on the list.
*
* This is called with interrupts off and no requests on the queue.
- * (and with the request spinlock aquired)
+ * (and with the request spinlock acquired)
*/
static void generic_plug_device(request_queue_t *q, kdev_t dev)
{
/*
* add-request adds a request to the linked list.
- * It disables interrupts (aquires the request spinlock) so that it can muck
+ * It disables interrupts (acquires the request spinlock) so that it can muck
* with the request-lists in peace. Thus it should be called with no spinlocks
* held.
*
}
/*
- * Has to be called with the request spinlock aquired
+ * Has to be called with the request spinlock acquired
*/
static void attempt_merge(request_queue_t * q,
struct request *req,
#
-# Makefile for PARIDE
+# Makefile for Parallel port IDE device drivers.
#
-# Note! Dependencies are done automagically by 'make dep', which also
-# removes any old dependencies. DON'T put your own dependencies here
-# unless it's something special (ie not a .c file).
+# 7 October 2000, Bartlomiej Zolnierkiewicz <bkz@linux-ide.org>
+# Rewritten to use lists instead of if-statements.
#
-# Note 2! The CFLAGS definitions are now inherited from the
-# parent makes..
-
-SUB_DIRS :=
-MOD_SUB_DIRS := $(SUB_DIRS)
-ALL_SUB_DIRS := $(SUB_DIRS)
L_TARGET := paride.a
-MX_OBJS :=
-LX_OBJS :=
-MI_OBJS :=
-MIX_OBJS :=
-
-ifeq ($(CONFIG_PARIDE),y)
- LX_OBJS += paride.o
-else
- ifeq ($(CONFIG_PARIDE),m)
- MX_OBJS += paride.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_PD),y)
- LX_OBJS += pd.o
-else
- ifeq ($(CONFIG_PARIDE_PD),m)
- M_OBJS += pd.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_PCD),y)
- LX_OBJS += pcd.o
-else
- ifeq ($(CONFIG_PARIDE_PCD),m)
- M_OBJS += pcd.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_PF),y)
- LX_OBJS += pf.o
-else
- ifeq ($(CONFIG_PARIDE_PF),m)
- M_OBJS += pf.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_PT),y)
- LX_OBJS += pt.o
-else
- ifeq ($(CONFIG_PARIDE_PT),m)
- M_OBJS += pt.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_PG),y)
- LX_OBJS += pg.o
-else
- ifeq ($(CONFIG_PARIDE_PG),m)
- M_OBJS += pg.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_ATEN),y)
- LX_OBJS += aten.o
-else
- ifeq ($(CONFIG_PARIDE_ATEN),m)
- M_OBJS += aten.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_BPCK),y)
- LX_OBJS += bpck.o
-else
- ifeq ($(CONFIG_PARIDE_BPCK),m)
- M_OBJS += bpck.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_COMM),y)
- LX_OBJS += comm.o
-else
- ifeq ($(CONFIG_PARIDE_COMM),m)
- M_OBJS += comm.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_DSTR),y)
- LX_OBJS += dstr.o
-else
- ifeq ($(CONFIG_PARIDE_DSTR),m)
- M_OBJS += dstr.o
- endif
-endif
-ifeq ($(CONFIG_PARIDE_KBIC),y)
- LX_OBJS += kbic.o
-else
- ifeq ($(CONFIG_PARIDE_KBIC),m)
- M_OBJS += kbic.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_EPAT),y)
- LX_OBJS += epat.o
-else
- ifeq ($(CONFIG_PARIDE_EPAT),m)
- M_OBJS += epat.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_EPIA),y)
- LX_OBJS += epia.o
-else
- ifeq ($(CONFIG_PARIDE_EPIA),m)
- M_OBJS += epia.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_FIT2),y)
- LX_OBJS += fit2.o
-else
- ifeq ($(CONFIG_PARIDE_FIT2),m)
- M_OBJS += fit2.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_FIT3),y)
- LX_OBJS += fit3.o
-else
- ifeq ($(CONFIG_PARIDE_FIT3),m)
- M_OBJS += fit3.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_FRPW),y)
- LX_OBJS += frpw.o
-else
- ifeq ($(CONFIG_PARIDE_FRPW),m)
- M_OBJS += frpw.o
- endif
-endif
-
-
-ifeq ($(CONFIG_PARIDE_FRIQ),y)
- LX_OBJS += friq.o
-else
- ifeq ($(CONFIG_PARIDE_FRIQ),m)
- M_OBJS += friq.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_ON20),y)
- LX_OBJS += on20.o
-else
- ifeq ($(CONFIG_PARIDE_ON20),m)
- M_OBJS += on20.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_ON26),y)
- LX_OBJS += on26.o
-else
- ifeq ($(CONFIG_PARIDE_ON26),m)
- M_OBJS += on26.o
- endif
-endif
-
-ifeq ($(CONFIG_PARIDE_KTTI),y)
- LX_OBJS += ktti.o
-else
- ifeq ($(CONFIG_PARIDE_KTTI),m)
- M_OBJS += ktti.o
- endif
-endif
+obj-$(CONFIG_PARIDE) += paride.o
+obj-$(CONFIG_PARIDE_PD) += pd.o
+obj-$(CONFIG_PARIDE_PCD) += pcd.o
+obj-$(CONFIG_PARIDE_PF) += pf.o
+obj-$(CONFIG_PARIDE_PT) += pt.o
+obj-$(CONFIG_PARIDE_PG) += pg.o
+obj-$(CONFIG_PARIDE_ATEN) += aten.o
+obj-$(CONFIG_PARIDE_BPCK) += bpck.o
+obj-$(CONFIG_PARIDE_COMM) += comm.o
+obj-$(CONFIG_PARIDE_DSTR) += dstr.o
+obj-$(CONFIG_PARIDE_KBIC) += kbic.o
+obj-$(CONFIG_PARIDE_EPAT) += epat.o
+obj-$(CONFIG_PARIDE_EPIA) += epia.o
+obj-$(CONFIG_PARIDE_FIT2) += fit2.o
+obj-$(CONFIG_PARIDE_FIT3) += fit3.o
+obj-$(CONFIG_PARIDE_FRPW) += frpw.o
+obj-$(CONFIG_PARIDE_FRIQ) += friq.o
+obj-$(CONFIG_PARIDE_ON20) += on20.o
+obj-$(CONFIG_PARIDE_ON26) += on26.o
+obj-$(CONFIG_PARIDE_KTTI) += ktti.o
+
+L_OBJS := $(obj-y)
+M_OBJS := $(obj-m)
include $(TOPDIR)/Rules.make
-
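The Makefile rewrite above replaces the nested `ifeq` blocks with lists keyed on each CONFIG_ variable. A minimal demo of the idiom, assuming GNU make is available (all names are illustrative): `obj-$(CONFIG_FOO)` expands to `obj-y`, `obj-m`, or the ignored `obj-` depending on how CONFIG_FOO is set.

```shell
# Build a tiny Makefile that mimics the list-style pattern.
cat > demo.mk <<'EOF'
CONFIG_PD  := y
CONFIG_PCD := m
# CONFIG_PT left unset -> pt.o lands on the unused "obj-" list

obj-$(CONFIG_PD)  += pd.o
obj-$(CONFIG_PCD) += pcd.o
obj-$(CONFIG_PT)  += pt.o

L_OBJS := $(obj-y)
M_OBJS := $(obj-m)

all: ; @echo "L_OBJS=$(L_OBJS) M_OBJS=$(M_OBJS)"
EOF
make -s -f demo.mk    # -> L_OBJS=pd.o M_OBJS=pcd.o
```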
}
struct console vt_console_driver = {
- "tty",
- vt_console_print,
- NULL,
- vt_console_device,
- keyboard_wait_for_keypress,
- unblank_screen,
- NULL,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "tty",
+ write: vt_console_print,
+ device: vt_console_device,
+ wait_key: keyboard_wait_for_keypress,
+ unblank: unblank_screen,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
#endif
}
static struct console dz_sercons = {
- "ttyS",
- dz_console_print,
- NULL,
- dz_console_device,
- dz_console_wait_key,
- NULL,
- dz_console_setup,
- CON_CONSDEV | CON_PRINTBUFFER,
- CONSOLE_LINE,
- 0,
- NULL
+ name: "ttyS",
+ write: dz_console_print,
+ device: dz_console_device,
+ wait_key: dz_console_wait_key,
+ setup: dz_console_setup,
+ flags: CON_CONSDEV | CON_PRINTBUFFER,
+ index: CONSOLE_LINE,
};
void __init dz_serial_console_init(void)
Hardware driver for Intel i810 Random Number Generator (RNG)
Copyright 2000 Jeff Garzik <jgarzik@mandrakesoft.com>
+ Copyright 2000 Philipp Rumpf <prumpf@tux.org>
Driver Web site: http://gtf.org/garzik/drivers/i810_rng/
This will slow things down but guarantee that bad data is
never passed upstream.
+ * FIXME: module unload is racy. To fix this, struct ctl_table
+ needs an owner member a la struct file_operations.
+
* Since the RNG is accessed from a timer as well as normal
kernel code, but not from interrupts, we use spin_lock_bh
in regular code, and spin_lock in the timer function, to
* Convert numeric globals to unsigned
* Module unload cleanup
+ Version 0.9.1:
+ * Support i815 chipsets too (Matt Sottek)
+ * Fix reference counting when statically compiled (prumpf)
+ * Rewrite rng_dev_read (prumpf)
+ * Make module races less likely (prumpf)
+ * Small miscellaneous bug fixes (prumpf)
+ * Use pci table for PCI id list
+
*/
/*
* core module and version information
*/
-#define RNG_VERSION "0.9.0"
+#define RNG_VERSION "0.9.1"
#define RNG_MODULE_NAME "i810_rng"
#define RNG_DRIVER_NAME RNG_MODULE_NAME " hardware driver " RNG_VERSION
#define PFX RNG_MODULE_NAME ": "
#endif
-/*
- * misc helper macros
- */
-#define arraysize(x) (sizeof(x)/sizeof(*(x)))
-
/*
* prototypes
*/
/*
- * rng_enable - enable or disable the RNG and internal timer
+ * rng_enable - enable or disable the RNG hardware
*/
static int rng_enable (int enable)
{
hw_status = rng_hwstatus ();
if (enable) {
- rng_hw_enabled = 1;
+ rng_hw_enabled++;
MOD_INC_USE_COUNT;
} else {
-#ifndef __alpha__
- if (GET_USE_COUNT (THIS_MODULE) > 0)
+ if (rng_hw_enabled) {
+ rng_hw_enabled--;
MOD_DEC_USE_COUNT;
- if (GET_USE_COUNT (THIS_MODULE) == 0)
- rng_hw_enabled = 0;
-#endif
+ }
}
if (rng_hw_enabled && ((hw_status & RNG_ENABLED) == 0)) {
else if (action == 2)
printk (KERN_INFO PFX "RNG h/w disabled\n");
- if ((!!enable) != (!!(new_status & RNG_ENABLED))) {
+ /* too bad C doesn't have ^^ */
+ if ((!enable) != (!(new_status & RNG_ENABLED))) {
printk (KERN_ERR PFX "Unable to %sable the RNG\n",
enable ? "en" : "dis");
rc = -EIO;
DPRINTK ("ENTER\n");
+ MOD_INC_USE_COUNT;
spin_lock_bh (&rng_lock);
rng_enabled_sysctl = enabled_save = rng_timer_enabled;
spin_unlock_bh (&rng_lock);
spin_unlock_bh (&rng_lock);
}
+ /* This needs to be in a higher layer */
+ MOD_DEC_USE_COUNT;
+
DPRINTK ("EXIT, returning 0\n");
return 0;
}
}
-static ssize_t rng_dev_read (struct file *filp, char * buf, size_t size,
- loff_t *offp)
+static ssize_t rng_dev_read (struct file *filp, char *buf, size_t size,
+ loff_t * offp)
{
- int have_data, copied = 0;
- u8 data=0;
- u8 *page;
-
- if (size < 1)
- return 0;
-
- page = (unsigned char *) get_free_page (GFP_KERNEL);
- if (!page)
- return -ENOMEM;
-
-read_loop:
- /* using the fact that read() can return >0 but
- * less than the requested amount, we simply
- * read up to PAGE_SIZE or buffer size, whichever
- * is smaller, and return that data.
- */
- if ((copied == size) || (copied == PAGE_SIZE)) {
- size_t tmpsize = (copied == size) ? size : PAGE_SIZE;
- int rc = copy_to_user (buf, page, tmpsize);
- free_page ((long)page);
- if (rc) return rc;
- return tmpsize;
- }
+ int have_data;
+ u8 data = 0;
+ ssize_t ret = 0;
- spin_lock_bh (&rng_lock);
+ while (size) {
+ spin_lock_bh (&rng_lock);
- have_data = 0;
- if (rng_data_present ()) {
- data = rng_data_read ();
- have_data = 1;
- }
+ have_data = 0;
+ if (rng_data_present ()) {
+ data = rng_data_read ();
+ have_data = 1;
+ }
- spin_unlock_bh (&rng_lock);
+ spin_unlock_bh (&rng_lock);
- if (have_data) {
- page[copied] = data;
- copied++;
- } else {
- if (filp->f_flags & O_NONBLOCK) {
- free_page ((long)page);
- return -EAGAIN;
+ if (have_data) {
+ if (put_user (data, buf++)) {
+ ret = ret ? : -EFAULT;
+ break;
+ }
+ size--;
+ ret++;
}
- }
- if (current->need_resched)
- schedule ();
+ if (current->need_resched)
+ schedule ();
+
+ if (signal_pending (current))
+ return ret ? : -ERESTARTSYS;
- if (signal_pending (current)) {
- free_page ((long)page);
- return -ERESTARTSYS;
+ if (filp->f_flags & O_NONBLOCK)
+ return ret ? : -EAGAIN;
}
- goto read_loop;
+ return ret;
}
* want to register another driver on the same PCI id.
*/
const static struct pci_device_id rng_pci_tbl[] __initdata = {
- { 0x8086, 0x2418, PCI_ANY_ID, PCI_ANY_ID, },
- { 0x8086, 0x2428, PCI_ANY_ID, PCI_ANY_ID, },
+ { 0x8086, 0x2418, PCI_ANY_ID, PCI_ANY_ID, },
+ { 0x8086, 0x2428, PCI_ANY_ID, PCI_ANY_ID, },
+ { 0x8086, 0x1130, PCI_ANY_ID, PCI_ANY_ID, },
{ 0, },
};
MODULE_DEVICE_TABLE (pci, rng_pci_tbl);
init_MUTEX (&rng_open_sem);
init_waitqueue_head (&rng_open_wait);
- pdev = pci_find_device (0x8086, 0x2418, NULL);
- if (!pdev)
- pdev = pci_find_device (0x8086, 0x2428, NULL);
- if (!pdev)
- return -ENODEV;
+ pci_for_each_dev(pdev) {
+ if (pci_match_device (rng_pci_tbl, pdev) != NULL)
+ goto match;
+ }
+
+ DPRINTK ("EXIT, returning -ENODEV\n");
+ return -ENODEV;
+match:
rc = rng_init_one (pdev);
if (rc)
return rc;
* 4.11.1 (http://csrc.nist.gov/fips/fips1401.htm)
* The Monobit, Poker, Runs, and Long Runs tests are implemented below.
* This test is run at periodic intervals to verify
-* data is sufficently random. If the tests are failed the RNG module
+* data is sufficiently random. If the tests are failed the RNG module
* will no longer submit data to the entropy pool, but the tests will
* continue to run at the given interval. If at a later time the RNG
* passes all tests it will be re-enabled for the next period.
* disable the RNG, we will just leave it disabled for the period of
* time until the tests are rerun and passed.
*
-* For argument sake I tested /proc/urandom with these tests and it
+* For argument's sake I tested /dev/urandom with these tests and it
* took 142,095 tries before I got a failure, and urandom isn't as
* random as random :)
*/
int j;
static int last_bit = 0;
- DPRINTK ("ENTER, rng_data = %d\n", rng_data & 0xFF);
+ DPRINTK ("ENTER, rng_data = %d\n", rng_data);
poker[rng_data >> 4]++;
poker[rng_data & 15]++;
rng_trusted = rng_test;
/* finally, clear out FIPS variables for start of next run */
- memset (&poker, 0, sizeof (poker));
- memset (&runs, 0, sizeof (runs));
+ memset (poker, 0, sizeof (poker));
+ memset (runs, 0, sizeof (runs));
ones = 0;
rlength = -1;
current_bit = 0;
DPRINTK ("EXIT\n");
}
-
}
static struct console lpcons = {
- "lp",
- lp_console_write,
- NULL,
- lp_console_device,
- NULL,
- NULL,
- NULL,
- CON_PRINTBUFFER,
- 0,
- 0,
- NULL
+ name: "lp",
+ write: lp_console_write,
+ device: lp_console_device,
+ flags: CON_PRINTBUFFER,
};
#endif /* console on line printer */
}
static struct console sercons = {
- "ttyS",
- serial_console_write,
- NULL,
- serial_console_device,
- serial_console_wait_key,
- NULL,
- serial_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: serial_console_write,
+ device: serial_console_device,
+ wait_key: serial_console_wait_key,
+ setup: serial_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
static struct console sercons = {
- "ttyS",
- serial167_console_write,
- NULL,
- serial167_console_device,
- serial167_console_wait_key,
- NULL,
- serial167_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: serial167_console_write,
+ device: serial167_console_device,
+ wait_key: serial167_console_wait_key,
+ setup: serial167_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
static struct console rs285_cons =
{
- SERIAL_21285_NAME,
- rs285_console_write,
- NULL,
- rs285_console_device,
- rs285_console_wait_key,
- NULL,
- rs285_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: SERIAL_21285_NAME,
+ write: rs285_console_write,
+ device: rs285_console_device,
+ wait_key: rs285_console_wait_key,
+ setup: rs285_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
void __init rs285_console_init(void)
#endif
device: ambauart_console_device,
wait_key: ambauart_console_wait_key,
- unblank: NULL,
setup: ambauart_console_setup,
flags: CON_PRINTBUFFER,
index: -1,
}
static struct console sercons = {
- "ttySC",
- serial_console_write,
- NULL,
- serial_console_device,
- serial_console_wait_key,
- NULL,
- serial_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttySC",
+ write: serial_console_write,
+ device: serial_console_device,
+ wait_key: serial_console_wait_key,
+ setup: serial_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
static struct console sercons = {
- "ttyS",
- scc_console_write,
- NULL,
- scc_console_device,
- scc_console_wait_key,
- NULL,
- scc_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: scc_console_write,
+ device: scc_console_device,
+ wait_key: scc_console_wait_key,
+ setup: scc_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
- * $Id: capifs.c,v 1.9 2000/08/20 07:30:13 keil Exp $
+ * $Id: capifs.c,v 1.10 2000/10/12 10:12:35 calle Exp $
*
* (c) Copyright 2000 by Carsten Paeth (calle@calle.de)
*
* Heavily based on devpts filesystem from H. Peter Anvin
*
* $Log: capifs.c,v $
+ * Revision 1.10 2000/10/12 10:12:35 calle
+ * Bugfix: second iput(inode) on umount destroys a foreign inode.
+ *
* Revision 1.9 2000/08/20 07:30:13 keil
* changes for 2.4
*
MODULE_AUTHOR("Carsten Paeth <calle@calle.de>");
-static char *revision = "$Revision: 1.9 $";
+static char *revision = "$Revision: 1.10 $";
struct capifs_ncci {
struct inode *inode;
continue;
if (np->inode) {
inode = np->inode;
+ np->inode = 0;
np->used = 0;
inode->i_nlink--;
iput(inode);
request_region(card->iobase + PCI9050_USER_IO, 1, "HYSDN");
ergo_stopcard(card); /* disable interrupts */
if (request_irq(card->irq, ergo_interrupt, SA_SHIRQ, "HYSDN", card)) {
- ergo_releasehardware(card); /* return the aquired hardware */
+ ergo_releasehardware(card); /* return the acquired hardware */
return (-1);
}
/* success, now setup the function pointers */
/*
* No interrupt could be used
*/
- pr_debug("Failed to aquire an IRQ line\n");
+ pr_debug("Failed to acquire an IRQ line\n");
continue;
}
}
static struct console sercons = {
- "ttyS",
- serial_console_write,
- NULL,
- serial_console_device,
- serial_console_wait_key,
- NULL,
- serial_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: serial_console_write,
+ device: serial_console_device,
+ wait_key: serial_console_wait_key,
+ setup: serial_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
#
# 19971130 Moved the amateur radio related network drivers from
# drivers/net/ to drivers/hamradio for easier maintenance.
-# Joerg Reuter DL1BKE <jreuter@poboxes.com>
+# Joerg Reuter DL1BKE <jreuter@yaina.de>
#
# 20000806 Rewritten to use lists instead of if-statements.
# Christoph Hellwig <hch@caldera.de>
unregister_netdev(&bpq->axdev);
}
-MODULE_AUTHOR("Joerg Reuter DL1BKE <jreuter@poboxes.com>");
+MODULE_AUTHOR("Joerg Reuter DL1BKE <jreuter@yaina.de>");
MODULE_DESCRIPTION("Transmit and receive AX.25 packets over Ethernet");
module_init(bpq_init_driver);
module_exit(bpq_cleanup_driver);
vy 73,
Joerg Reuter ampr-net: dl1bke@db0pra.ampr.org
- AX-25 : DL1BKE @ DB0ACH.#NRW.DEU.EU
- Internet: jreuter@poboxes.com
- www : http://poboxes.com/jreuter/
+ AX-25 : DL1BKE @ DB0ABH.#BAY.DEU.EU
+ Internet: jreuter@yaina.de
+ www : http://yaina.de/jreuter
*/
/* ----------------------------------------------------------------------- */
scc_net_procfs_remove();
}
-MODULE_AUTHOR("Joerg Reuter <jreuter@poboxes.com>");
+MODULE_AUTHOR("Joerg Reuter <jreuter@yaina.de>");
MODULE_DESCRIPTION("AX.25 Device Driver for Z8530 based HDLC cards");
-MODULE_SUPPORTED_DEVICE("scc");
+MODULE_SUPPORTED_DEVICE("Z8530 based SCC cards for Amateur Radio");
module_init(scc_init_driver);
module_exit(scc_cleanup_driver);
if (fdx && !(lp->options & PORT_ASEL) && full_duplex[card_idx])
lp->options |= PORT_FD;
- lp->a = *a;
if (a == NULL) {
printk(KERN_ERR "pcnet32: No access methods\n");
return -ENODEV;
}
+ lp->a = *a;
/* detect special T1/E1 WAN card by checking for MAC address */
if (dev->dev_addr[0] == 0x00 && dev->dev_addr[1] == 0xe0 && dev->dev_addr[2] == 0x75)
* dev - pointer to device information
*
* Functional Description:
- * This function aquires the driver lock and only calls
+ * This function acquires the driver lock and only calls
* skfp_ctl_set_multicast_list_wo_lock then.
* This routine follows a fairly simple algorithm for setting the
* adapter filters and CAM:
* - Increased tx_timeout because of auto-neg.
* - Adjusted timers for forced speeds.
*
+ * v1.12 Oct 12, 2000 - Minor fixes (memleak, init, etc.)
+ *
*******************************************************************************/
/* For removing EISA devices */
-static struct net_device *TLan_Eisa_Devices = NULL;
+static struct net_device *TLan_Eisa_Devices;
-static int TLanDevicesInstalled = 0;
+static int TLanDevicesInstalled;
/* Force speed, duplex and aui settings */
-static int aui = 0;
-static int duplex = 0;
-static int speed = 0;
+static int aui;
+static int duplex;
+static int speed;
MODULE_AUTHOR("Maintainer: Torben Mathiasen <torben.mathiasen@compaq.com>");
MODULE_DESCRIPTION("Driver for TI ThunderLAN based ethernet PCI adapters");
#undef MONITOR
/* Turn on debugging. See linux/Documentation/networking/tlan.txt for details */
-static int debug = 0;
+static int debug;
-static int bbuf = 0;
+static int bbuf;
static u8 *TLanPadBuffer;
static char TLanSignature[] = "TLAN";
-static const char *tlan_banner = "ThunderLAN driver v1.11\n";
-static int tlan_have_pci = 0;
-static int tlan_have_eisa = 0;
+static const char *tlan_banner = "ThunderLAN driver v1.12\n";
+static int tlan_have_pci;
+static int tlan_have_eisa;
const char *media[] = {
"10BaseT-HD ", "10BaseT-FD ","100baseTx-HD ",
};
MODULE_DEVICE_TABLE(pci, tlan_pci_tbl);
-static int TLan_EisaProbe( void );
+static void TLan_EisaProbe( void );
static void TLan_Eisa_Cleanup( void );
static int TLan_Init( struct net_device * );
static int TLan_Open( struct net_device *dev );
**************************************************************/
-static void __exit tlan_remove_one( struct pci_dev *pdev)
+static void __devexit tlan_remove_one( struct pci_dev *pdev)
{
struct net_device *dev = pdev->driver_data;
TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
static int __init tlan_probe(void)
{
- int rc;
static int pad_allocated = 0;
printk(KERN_INFO "%s", tlan_banner);
/* Use new style PCI probing. Now the kernel will
do most of this for us */
- rc = pci_module_init(&tlan_driver);
+ pci_module_init(&tlan_driver);
TLAN_DBG(TLAN_DEBUG_PROBE, "Starting EISA Probe....\n");
- rc = TLan_EisaProbe();
+ TLan_EisaProbe();
printk(KERN_INFO "TLAN: %d device%s installed, PCI: %d EISA: %d\n",
TLanDevicesInstalled, TLanDevicesInstalled == 1 ? "" : "s",
*
**************************************************************/
-static int __init TLan_probe1(struct pci_dev *pdev,
+static int __devinit TLan_probe1(struct pci_dev *pdev,
long ioaddr, int irq, int rev, const struct pci_device_id *ent )
{
}
priv = dev->priv;
- memset(priv, 0, sizeof(TLanPrivateInfo));
-
dev->base_addr = ioaddr;
dev->irq = irq;
/* Is this a PCI device? */
if (pdev) {
priv->adapter = &board_info[ent->driver_data];
- if (pci_enable_device(pdev))
+ if (pci_enable_device(pdev)) {
+ unregister_netdev(dev);
+ kfree(dev);
return -1;
+ }
pci_read_config_byte ( pdev, PCI_REVISION_ID, &pci_rev);
priv->adapterRev = pci_rev;
pci_set_master(pdev);
*
*************************************************************/
-static int __init TLan_EisaProbe (void)
+static void __init TLan_EisaProbe (void)
{
long ioaddr;
int rc = -ENODEV;
if (!EISA_bus) {
TLAN_DBG(TLAN_DEBUG_PROBE, "No EISA bus present\n");
- return 0;
+ return;
}
/* Loop through all slots of the EISA bus */
}
- return rc;
-
} /* TLan_EisaProbe */
if ( err ) {
printk(KERN_ERR "TLAN: Cannot open %s because IRQ %d is already in use.\n", dev->name, dev->irq );
MOD_DEC_USE_COUNT;
- return -EAGAIN;
+ return err;
}
init_timer(&priv->timer);
static void TLan_tx_timeout(struct net_device *dev)
{
- //TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
TLAN_DBG( TLAN_DEBUG_GNRL, "%s: Transmit timed out.\n", dev->name);
TLan_ReadAndClearStats( dev, TLAN_IGNORE );
TLan_ResetAdapter( dev );
dev->trans_start = jiffies;
- netif_start_queue( dev );
+ netif_wake_queue( dev );
}
*
* ******************************************************************/
-void TLan_PhyMonitor( struct net_device *data )
+void TLan_PhyMonitor( struct net_device *dev )
{
- struct net_device *dev = (struct net_device *)data;
TLanPrivateInfo *priv = (TLanPrivateInfo *) dev->priv;
u16 phy;
u16 phy_status;
#endif
/* Prevent reentrancy. We need to do that because we may have
- * multiple interrupt handler running concurently.
- * It is safe because wv_splhi() disable interrupts before aquiring
+ * multiple interrupt handlers running concurrently.
+ * It is safe because wv_splhi() disables interrupts before acquiring
* the spinlock. */
spin_lock(&lp->spinlock);
#
-# Makefile for the kernel miscellaneous drivers.
+# Makefile for the kernel Parallel port device drivers.
#
-# Note! Dependencies are done automagically by 'make dep', which also
-# removes any old dependencies. DON'T put your own dependencies here
-# unless it's something special (ie not a .c file).
+# Note! Parport is the Borg. We have assimilated some other
+# drivers in the `char', `net' and `scsi' directories,
+# but left them there to allay suspicion.
#
-# Note 2! The CFLAGS definitions are now inherited from the
-# parent makes..
+# 7 October 2000, Bartlomiej Zolnierkiewicz <bkz@linux-ide.org>
+# Rewritten to use lists instead of if-statements.
#
-# Note 3! Parport is the Borg. We have assimilated some other
-# drivers in the `char', `net' and `scsi' directories, but left them
-# there to allay suspicion.
-
-SUB_DIRS :=
-MOD_SUB_DIRS := $(SUB_DIRS)
-ALL_SUB_DIRS := $(SUB_DIRS)
L_TARGET := parport.a
-MX_OBJS :=
-LX_OBJS :=
-MI_OBJS :=
-MIX_OBJS :=
-ifeq ($(CONFIG_PARPORT),y)
- L_OBJS += share.o ieee1284.o ieee1284_ops.o procfs.o
+export-objs := init.o parport_pc.o
- ifeq ($(CONFIG_PARPORT_1284),y)
- L_OBJS += daisy.o probe.o
- endif
+list-multi := parport.o
+parport-objs := share.o ieee1284.o ieee1284_ops.o init.o procfs.o
- ifeq ($(CONFIG_PARPORT_PC),y)
- LX_OBJS += parport_pc.o
- else
- ifeq ($(CONFIG_PARPORT_PC),m)
- MX_OBJS += parport_pc.o
- endif
- endif
- ifeq ($(CONFIG_PARPORT_AMIGA),y)
- LX_OBJS += parport_amiga.o
- else
- ifeq ($(CONFIG_PARPORT_AMIGA),m)
- M_OBJS += parport_amiga.o
- endif
- endif
- ifeq ($(CONFIG_PARPORT_MFC3),y)
- LX_OBJS += parport_mfc3.o
- else
- ifeq ($(CONFIG_PARPORT_MFC3),m)
- M_OBJS += parport_mfc3.o
- endif
- endif
- ifeq ($(CONFIG_PARPORT_ATARI),y)
- LX_OBJS += parport_atari.o
- else
- ifeq ($(CONFIG_PARPORT_ATARI),m)
- M_OBJS += parport_atari.o
- endif
- endif
- ifeq ($(CONFIG_PARPORT_SUNBPP),y)
- LX_OBJS += parport_sunbpp.o
- else
- ifeq ($(CONFIG_PARPORT_SUNBPP),m)
- MX_OBJS += parport_sunbpp.o
- endif
- endif
- LX_OBJS += init.o
-else
- ifeq ($(CONFIG_PARPORT),m)
- MI_OBJS += share.o ieee1284.o ieee1284_ops.o
- ifeq ($(CONFIG_PARPORT_1284),y)
- MI_OBJS += daisy.o probe.o
- endif
- ifneq ($(CONFIG_PROC_FS),n)
- MI_OBJS += procfs.o
- endif
- MIX_OBJS += init.o
- M_OBJS += parport.o
- endif
- ifeq ($(CONFIG_PARPORT_PC),m)
- MX_OBJS += parport_pc.o
- endif
- ifeq ($(CONFIG_PARPORT_AMIGA),m)
- M_OBJS += parport_amiga.o
- endif
- ifeq ($(CONFIG_PARPORT_MFC3),m)
- M_OBJS += parport_mfc3.o
- endif
- ifeq ($(CONFIG_PARPORT_ATARI),m)
- M_OBJS += parport_atari.o
- endif
- ifeq ($(CONFIG_PARPORT_SUNBPP),m)
- M_OBJS += parport_sunbpp.o
- endif
+ifeq ($(CONFIG_PARPORT_1284),y)
+ parport-objs += daisy.o probe.o
endif
+obj-$(CONFIG_PARPORT) += parport.o
+obj-$(CONFIG_PARPORT_PC) += parport_pc.o
+obj-$(CONFIG_PARPORT_AMIGA) += parport_amiga.o
+obj-$(CONFIG_PARPORT_MFC3) += parport_mfc3.o
+obj-$(CONFIG_PARPORT_ATARI) += parport_atari.o
+obj-$(CONFIG_PARPORT_SUNBPP) += parport_sunbpp.o
+
+# Extract lists of the multi-part drivers.
+# The 'int-*' lists are the intermediate files used to build the multi's.
+multi-y := $(filter $(list-multi), $(obj-y))
+multi-m := $(filter $(list-multi), $(obj-m))
+int-y := $(sort $(foreach m, $(multi-y), $($(basename $(m))-objs)))
+int-m := $(sort $(foreach m, $(multi-m), $($(basename $(m))-objs)))
+
+# Take multi-part drivers out of obj-y and put components in.
+obj-y := $(filter-out $(list-multi), $(obj-y)) $(int-y)
+
+# Translate to Rules.make lists.
+L_OBJS := $(filter-out $(export-objs), $(obj-y))
+LX_OBJS := $(filter $(export-objs), $(obj-y))
+M_OBJS := $(sort $(filter-out $(export-objs), $(obj-m)))
+MX_OBJS := $(sort $(filter $(export-objs), $(obj-m)))
+MI_OBJS := $(sort $(filter-out $(export-objs), $(int-m)))
+MIX_OBJS := $(sort $(filter $(export-objs), $(int-m)))
+
include $(TOPDIR)/Rules.make
-# Special rule to build the composite parport.o module
-parport.o: $(MI_OBJS) $(MIX_OBJS)
- $(LD) $(LD_RFLAG) -r -o $@ $(MI_OBJS) $(MIX_OBJS)
+parport.o: $(parport-objs)
+ $(LD) -r -o $@ $(parport-objs)
/*
* Find the extent of a PCI decode..
*/
-static u32 pci_size(u32 base, u32 mask)
+static u32 pci_size(u32 base, unsigned long mask)
{
u32 size = mask & base; /* Find the significant bits */
size = size & ~(size-1); /* Get the lowest of them to find the decode size */
PCI_COMMAND_WAIT);
/* MAGIC NUMBERS! Fixme */
- config_writeb(socket, PCI_CACHE_LINE_SIZE, 32);
+ config_writeb(socket, PCI_CACHE_LINE_SIZE, L1_CACHE_BYTES / 4);
config_writeb(socket, PCI_LATENCY_TIMER, 168);
config_writeb(socket, PCI_SEC_LATENCY_TIMER, 176);
config_writeb(socket, PCI_PRIMARY_BUS, dev->bus->number);
* The console structure for the 3215 console
*/
static struct console con3215 = {
- "tty3215",
- con3215_write,
- NULL,
- con3215_device,
- NULL,
- con3215_unblank,
- con3215_consetup,
- CON_PRINTBUFFER,
- 0,
- 0,
- NULL
+ name: "tty3215",
+ write: con3215_write,
+ device: con3215_device,
+ unblank: con3215_unblank,
+ setup: con3215_consetup,
+ flags: CON_PRINTBUFFER,
};
#endif
struct console hwc_console =
{
- hwc_console_name,
- hwc_console_write,
- NULL,
- hwc_console_device,
- NULL,
- NULL,
- NULL,
- CON_PRINTBUFFER,
- 0,
- 0,
- NULL
+ name: hwc_console_name,
+ write: hwc_console_write,
+ device: hwc_console_device,
+ flags: CON_PRINTBUFFER,
};
void
#
-# Makefile for the linux kernel.
+# Makefile for the kernel SPARC audio drivers.
#
-# Note! Dependencies are done automagically by 'make dep', which also
-# removes any old dependencies. DON'T put your own dependencies here
-# unless it's something special (ie not a .c file).
-#
-# Note 2! The CFLAGS definitions are now in the main makefile...
-
-#
-# sbus audio drivers
+# 7 October 2000, Bartlomiej Zolnierkiewicz <bkz@linux-ide.org>
+# Rewritten to use lists instead of if-statements.
#
O_TARGET := sparcaudio.o
-O_OBJS :=
-M_OBJS :=
-M :=
-MM :=
-
-ifeq ($(CONFIG_SPARCAUDIO),y)
-M=y
-else
- ifeq ($(CONFIG_SPARCAUDIO),m)
- MM=y
- endif
-endif
-
-ifeq ($(CONFIG_SPARCAUDIO_AMD7930),y)
-M=y
-OX_OBJS += amd7930.o
-else
- ifeq ($(CONFIG_SPARCAUDIO_AMD7930),m)
- MM=y
- MX_OBJS += amd7930.o
- endif
-endif
-
-ifeq ($(CONFIG_SPARCAUDIO_CS4231),y)
-M=y
-O_OBJS += cs4231.o
-else
- ifeq ($(CONFIG_SPARCAUDIO_CS4231),m)
- MM=y
- M_OBJS += cs4231.o
- endif
-endif
-ifeq ($(CONFIG_SPARCAUDIO_DBRI),y)
-M=y
-OX_OBJS += dbri.o
-else
- ifeq ($(CONFIG_SPARCAUDIO_DBRI),m)
- MM=y
- MX_OBJS += dbri.o
- endif
-endif
+export-objs := audio.o amd7930.o dbri.o
-ifeq ($(CONFIG_SPARCAUDIO_DUMMY),y)
-M=y
-O_OBJS += dmy.o
-else
- ifeq ($(CONFIG_SPARCAUDIO_DUMMY),m)
- MM=y
- M_OBJS += dmy.o
- endif
-endif
+obj-$(CONFIG_SPARCAUDIO) += audio.o
+obj-$(CONFIG_SPARCAUDIO_AMD7930) += amd7930.o
+obj-$(CONFIG_SPARCAUDIO_CS4231) += cs4231.o
+obj-$(CONFIG_SPARCAUDIO_DBRI) += dbri.o
+obj-$(CONFIG_SPARCAUDIO_DUMMY) += dmy.o
-ifdef M
-OX_OBJS += audio.o
-else
- ifdef MM
- MX_OBJS += audio.o
- endif
-endif
+O_OBJS := $(filter-out $(export-objs), $(obj-y))
+OX_OBJS := $(filter $(export-objs), $(obj-y))
+M_OBJS := $(sort $(filter-out $(export-objs), $(obj-m)))
+MX_OBJS := $(sort $(filter $(export-objs), $(obj-m)))
include $(TOPDIR)/Rules.make
-/* $Id: dbri.h,v 1.12 1999/09/21 14:37:34 davem Exp $
+/* $Id: dbri.h,v 1.13 2000/10/13 00:34:24 uzi Exp $
* drivers/sbus/audio/cs4231.h
*
* Copyright (C) 1997 Rudolf Koenig (rfkoenig@immd4.informatik.uni-erlangen.de)
#define REG9 0x24UL /* Interrupt Queue Pointer */
#define DBRI_NO_CMDS 64
-#define DBRI_NO_INTS 2
+#define DBRI_NO_INTS 1 /* Note: the value of this define was
+ * originally 2. The DMA ring buffer used
+ * to store interrupts is currently broken;
+ * this is a temporary workaround until
+ * the ring buffer is fixed.
+ */
#define DBRI_INT_BLK 64
#define DBRI_NO_DESCS 64
#
-# Makefile for the linux kernel.
+# Makefile for the kernel miscellaneous SPARC device drivers.
#
-# Note! Dependencies are done automagically by 'make dep', which also
-# removes any old dependencies. DON'T put your own dependencies here
-# unless it's something special (ie not a .c file).
-#
-# Note 2! The CFLAGS definitions are now in the main makefile...
-
# Dave Redman Frame Buffer tuning support.
-# OK this is kind of ugly but it does allow drivers to be added fairly
-# easily. and you can even choose what sort of support you want.
+#
+# 7 October 2000, Bartlomiej Zolnierkiewicz <bkz@linux-ide.org>
+# Rewritten to use lists instead of if-statements.
+#
O_TARGET := sunchar.o
O_OBJS := ${O_OBJ} sunkbd.o sunkbdmap.o sunmouse.o sunserial.o zs.o
-M_OBJS :=
-ifeq ($(ARCH),sparc64)
+vfc-objs := vfc_dev.o vfc_i2c.o
ifeq ($(CONFIG_PCI),y)
-
-OX_OBJS += su.o
-O_OBJS += pcikbd.o
-
-ifeq ($(CONFIG_SAB82532),y)
-O_OBJS += sab82532.o
-else
- ifeq ($(CONFIG_SAB82532),m)
- M_OBJS += sab82532.o
- endif
-endif
-
-ifeq ($(CONFIG_ENVCTRL),y)
-O_OBJS += envctrl.o
-else
- ifeq ($(CONFIG_ENVCTRL),m)
- M_OBJS += envctrl.o
- endif
-endif
-
-ifeq ($(CONFIG_DISPLAY7SEG),y)
-O_OBJS += display7seg.o
-else
- ifeq ($(CONFIG_DISPLAY7SEG),m)
- M_OBJS += display7seg.o
- endif
-endif
-
-endif # eq($(CONFIG_PCI,y)
-
-ifeq ($(CONFIG_OBP_FLASH),y)
-O_OBJS += flash.o
-else
- ifeq ($(CONFIG_OBP_FLASH),m)
- M_OBJS += flash.o
- endif
+OX_OBJS += su.o
+O_OBJS += pcikbd.o
endif
-else # !eq($(ARCH),sparc64)
+ifeq ($(ARCH),sparc64)
ifeq ($(CONFIG_PCI),y)
-OX_OBJS += su.o
-O_OBJS += pcikbd.o
-endif
-
-endif # !eq($(ARCH),sparc64)
-
-ifeq ($(CONFIG_SUN_OPENPROMIO),y)
-O_OBJS += openprom.o
-else
- ifeq ($(CONFIG_SUN_OPENPROMIO),m)
- M_OBJS += openprom.o
- endif
+obj-$(CONFIG_SAB82532) += sab82532.o
+obj-$(CONFIG_ENVCTRL) += envctrl.o
+obj-$(CONFIG_DISPLAY7SEG) += display7seg.o
endif
-ifeq ($(CONFIG_SUN_MOSTEK_RTC),y)
-O_OBJS += rtc.o
-else
- ifeq ($(CONFIG_SUN_MOSTEK_RTC),m)
- M_OBJS += rtc.o
- endif
+obj-$(CONFIG_OBP_FLASH) += flash.o
endif
-ifeq ($(CONFIG_SUN_BPP),y)
-O_OBJS += bpp.o
-else
- ifeq ($(CONFIG_SUN_BPP),m)
- M_OBJS += bpp.o
- endif
-endif
-
-ifeq ($(CONFIG_SUN_VIDEOPIX),y)
-O_OBJS += vfc.o
-else
- ifeq ($(CONFIG_SUN_VIDEOPIX),m)
- M_OBJS += vfc.o
- endif
-endif
+obj-$(CONFIG_SUN_OPENPROMIO) += openprom.o
+obj-$(CONFIG_SUN_MOSTEK_RTC) += rtc.o
+obj-$(CONFIG_SUN_BPP) += bpp.o
+obj-$(CONFIG_SUN_VIDEOPIX) += vfc.o
+obj-$(CONFIG_SUN_AURORA) += aurora.o
+obj-$(CONFIG_TADPOLE_TS102_UCTRL) += uctrl.o
+obj-$(CONFIG_SUN_JSFLASH) += jsflash.o
-ifeq ($(CONFIG_SUN_AURORA),y)
-O_OBJS += aurora.o
-else
- ifeq ($(CONFIG_SUN_AURORA),m)
- M_OBJS += aurora.o
- endif
-endif
-
-ifeq ($(CONFIG_TADPOLE_TS102_UCTRL),y)
-O_OBJS += uctrl.o
-else
- ifeq ($(CONFIG_TADPOLE_TS102_UCTRL),m)
- M_OBJS += uctrl.o
- endif
-endif
-
-ifeq ($(CONFIG_SUN_JSFLASH),y)
-O_OBJS += jsflash.o
-endif
-ifeq ($(CONFIG_SUN_JSFLASH),m)
-M_OBJS += jsflash.o
-endif
+O_OBJS := $(obj-y)
+M_OBJS := $(obj-m)
include $(TOPDIR)/Rules.make
sunkbdmap.o: sunkeymap.c
-vfc.o: vfc_dev.o vfc_i2c.o
- $(LD) -r -o vfc.o vfc_dev.o vfc_i2c.o
+vfc.o: $(vfc-objs)
+ $(LD) -r -o $@ $(vfc-objs)
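The rewrite above replaces per-option ifeq blocks with the list idiom: each config variable expands to `y`, `m`, or nothing, so `obj-$(CONFIG_FOO)` appends the object to `obj-y`, `obj-m`, or a discarded `obj-` list. A minimal sketch of the idiom, with config values hardcoded for illustration (in a real build they come from the kernel configuration system):

```make
# Hardcoded for illustration; normally set by the config system.
CONFIG_FOO := y
CONFIG_BAR := m

obj-y :=
obj-m :=

# Each line lands on obj-y, obj-m, or obj- (discarded),
# depending on how the config variable expands.
obj-$(CONFIG_FOO) += foo.o
obj-$(CONFIG_BAR) += bar.o
obj-$(CONFIG_BAZ) += baz.o

O_OBJS := $(obj-y)   # built-in objects: foo.o
M_OBJS := $(obj-m)   # module objects:   bar.o
```

One line per driver replaces seven lines of nested ifeq/else, which is why the list form scales better as drivers are added.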
}
static struct console sab82532_console = {
- "ttyS",
- sab82532_console_write,
- NULL,
- sab82532_console_device,
- sab82532_console_wait_key,
- NULL,
- sab82532_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: sab82532_console_write,
+ device: sab82532_console_device,
+ wait_key: sab82532_console_wait_key,
+ setup: sab82532_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
int __init sab82532_console_init(void)
}
static struct console sercons = {
- "ttyS",
- serial_console_write,
- NULL,
- serial_console_device,
- serial_console_wait_key,
- NULL,
- serial_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: serial_console_write,
+ device: serial_console_device,
+ wait_key: serial_console_wait_key,
+ setup: serial_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
int su_console_registered = 0;
}
static struct console zs_console = {
- "ttyS",
- zs_console_write,
- NULL,
- zs_console_device,
- zs_console_wait_key,
- NULL,
- zs_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: zs_console_write,
+ device: zs_console_device,
+ wait_key: zs_console_wait_key,
+ setup: zs_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
static int __init zs_console_init(void)
/*
* setup controller to generate interrupts depending
- * on current state (lock has to be aquired)
+ * on current state (lock has to be acquired)
*
*/
static int setup_expected_interrupts(struct Scsi_Host *shpnt)
* Description:
* Detect the cables that are present on the 2940-UWPro cards
*
- * NOTE: This functions assumes the SEEPROM will have already been aquired
+ * NOTE: This function assumes the SEEPROM will have already been acquired
* prior to invocation of this function.
*-F*************************************************************************/
static void
* Description:
* Detect the cables that are present on aic787x class controller chips
*
- * NOTE: This functions assumes the SEEPROM will have already been aquired
+ * NOTE: This function assumes the SEEPROM will have already been acquired
* prior to invocation of this function.
*-F*************************************************************************/
static void
* Description:
* Detect the termination settings present on ultra2 class controllers
*
- * NOTE: This functions assumes the SEEPROM will have already been aquired
+ * NOTE: This function assumes the SEEPROM will have already been acquired
* prior to invocation of this function.
*-F*************************************************************************/
static void
}
/*
- * Aimmrently the the disk->capacity attribute is off by 1 sector
+ * Apparently the disk->capacity attribute is off by 1 sector
* for all disk drives. We add the one here, but it should really
* be done in sd.c. Even if it gets fixed there, this will still
* work.
ip[0] = 0xff;
ip[1] = 0x3f;
ip[2] = (disk->capacity + 1) / (ip[0] * ip[1]);
- if (ip[2] > 1023)
- ip[2] = 1023;
}
return 0;
}
(scsi_memory_upper_value - scsi_init_memory_start) / 1024);
#endif
- /* Remove it from the linked list and /proc */
- if (tpnt->present) {
+ /*
+ * Remove it from the linked list and /proc if all
+	 * hosts were successfully removed (i.e. present == 0)
+ */
+ if (!tpnt->present) {
Scsi_Host_Template **SHTp = &scsi_hosts;
Scsi_Host_Template *SHT;
}
static struct console sgi_console_driver = {
- "ttyS",
- zs_console_write, /* write */
- NULL, /* read */
- zs_console_device, /* device */
- zs_console_wait_key, /* wait_key */
- NULL, /* unblank */
- zs_console_setup, /* setup */
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: zs_console_write,
+ device: zs_console_device,
+ wait_key: zs_console_wait_key,
+ setup: zs_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
}
static struct console sercons = {
- "ttyS",
- serial_console_write,
- NULL,
- serial_console_device,
- serial_console_wait_key,
- NULL,
- serial_console_setup,
- CON_PRINTBUFFER,
- -1,
- 0,
- NULL
+ name: "ttyS",
+ write: serial_console_write,
+ device: serial_console_device,
+ wait_key: serial_console_wait_key,
+ setup: serial_console_setup,
+ flags: CON_PRINTBUFFER,
+ index: -1,
};
/*
}
spin_lock_irqsave(&as->lock, flags);
}
- if (u->dma.count >= u->dma.dmasize && !u->dma.mapped)
+ if (u->dma.count >= u->dma.dmasize && !u->dma.mapped) {
+ spin_unlock_irqrestore(&as->lock, flags);
return 0;
+ }
u->flags |= FLG_RUNNING;
if (!(u->flags & FLG_URB0RUNNING)) {
urb = &u->durb[0].urb;
}
spin_lock_irqsave(&as->lock, flags);
}
- if (u->dma.count <= 0 && !u->dma.mapped)
+ if (u->dma.count <= 0 && !u->dma.mapped) {
+ spin_unlock_irqrestore(&as->lock, flags);
return 0;
+ }
u->flags |= FLG_RUNNING;
if (!(u->flags & FLG_URB0RUNNING)) {
urb = &u->durb[0].urb;
init_waitqueue_head(&as->usbin.dma.wait);
init_waitqueue_head(&as->usbout.dma.wait);
spin_lock_init(&as->lock);
+ spin_lock_init(&as->usbin.durb[0].urb.lock);
+ spin_lock_init(&as->usbin.durb[1].urb.lock);
+ spin_lock_init(&as->usbin.surb[0].urb.lock);
+ spin_lock_init(&as->usbin.surb[1].urb.lock);
+ spin_lock_init(&as->usbout.durb[0].urb.lock);
+ spin_lock_init(&as->usbout.durb[1].urb.lock);
+ spin_lock_init(&as->usbout.surb[0].urb.lock);
+ spin_lock_init(&as->usbout.surb[1].urb.lock);
as->state = s;
as->usbin.interface = asifin;
as->usbout.interface = asifout;
mdc800->camera_request_ready=0;
retval=0;
+ mdc800->irq_urb->dev = mdc800->dev;
if (usb_submit_urb (mdc800->irq_urb))
{
err ("request USB irq fails (submit_retval=%i urb_status=%i).",retval, mdc800->irq_urb->status);
mdc800->out_ptr=0;
/* Download -> Request new bytes */
+ mdc800->download_urb->dev = mdc800->dev;
if (usb_submit_urb (mdc800->download_urb))
{
err ("Can't submit download urb (status=%i)",mdc800->download_urb->status);
mdc800->state=WORKING;
memcpy (mdc800->write_urb->transfer_buffer, mdc800->in,8);
+ mdc800->write_urb->dev = mdc800->dev;
if (usb_submit_urb (mdc800->write_urb))
{
err ("submitting write urb fails (status=%i)", mdc800->write_urb->status);
#include <linux/usb.h>
-static const char *version = __FILE__ ": v0.4.13 2000/10/09 (C) 1999-2000 Petko Manolov (petkan@dce.bg)";
+static const char *version = __FILE__ ": v0.4.13 2000/10/13 (C) 1999-2000 Petko Manolov (petkan@dce.bg)";
#define PEGASUS_USE_INTR
__u8 node_id[6];
get_node_id(pegasus, node_id);
+ set_registers( pegasus, EthID, sizeof(node_id), node_id );
memcpy( pegasus->net->dev_addr, node_id, sizeof(node_id) );
}
netif_wake_queue( pegasus->net );
}
-
+#ifdef PEGASUS_USE_INTR
static void intr_callback( struct urb *urb )
{
pegasus_t *pegasus = urb->context;
info("intr status %d", urb->status);
}
}
-
+#endif
static void pegasus_tx_timeout( struct net_device *net )
{
init_MUTEX( &pegasus-> ctrl_sem );
init_waitqueue_head( &pegasus->ctrl_wait );
- net = init_etherdev(0, 0);
+ net = init_etherdev( NULL, 0 );
+ if ( !net ) {
+ kfree( pegasus );
+ return NULL;
+ }
+
pegasus->usb = dev;
pegasus->net = net;
net->priv = pegasus;
net->get_stats = pegasus_netdev_stats;
net->mtu = PEGASUS_MTU;
- set_ethernet_addr( pegasus );
- register_netdev( net );
-
pegasus->features = usb_dev_id[dev_indx].private;
if ( reset_mac(pegasus) ) {
err("can't reset MAC");
return NULL;
}
+ set_ethernet_addr( pegasus );
+
if ( pegasus->features & PEGASUS_II ) {
info( "setup Pegasus II specific registers" );
setup_pegasus_II( pegasus );
}
-
+
pegasus->phy = mii_phy_probe( pegasus );
if ( !pegasus->phy ) {
warn( "can't locate MII phy, using default" );
pegasus->phy = 1;
}
-
info( "%s: %s", net->name, usb_dev_id[dev_indx].name );
return pegasus;
return;
}
+ s->bulkurb->dev = s->usbdev;
ret=usb_submit_urb(s->bulkurb);
if(ret && ret!=-EBUSY) {
err("plusb_int_complete: usb_submit_urb failed");
char name[128];
struct input_dev dev;
struct urb irq;
+ struct usb_device *my_usb_device; // for resubmitting my urb
int open;
};
if (mouse->open++)
return 0;
+ mouse->irq.dev = mouse->my_usb_device;
if (usb_submit_urb(&mouse->irq))
return -EIO;
kfree(buf);
+ mouse->my_usb_device = dev;
FILL_INT_URB(&mouse->irq, dev, pipe, mouse->data, maxp > 8 ? 8 : maxp,
usb_mouse_irq, mouse, endpoint->bInterval);
struct wacom {
signed char data[10];
struct input_dev dev;
+ struct usb_device *usbdev;
struct urb irq;
struct wacom_features *features;
int tool;
if (wacom->open++)
return 0;
+ wacom->irq.dev = wacom->usbdev;
if (usb_submit_urb(&wacom->irq))
return -EIO;
wacom->dev.idvendor = dev->descriptor.idVendor;
wacom->dev.idproduct = dev->descriptor.idProduct;
wacom->dev.idversion = dev->descriptor.bcdDevice;
+ wacom->usbdev = dev;
FILL_INT_URB(&wacom->irq, dev, usb_rcvintpipe(dev, endpoint->bEndpointAddress),
wacom->data, wacom->features->pktlen, wacom->features->irq, wacom, endpoint->bInterval);
   These are two special cases. Normally the device driver issues
   a sync on the device (without waiting for I/O completion) and
- then an invalidate_buffers call that doesn't trashes dirty buffers. */
+ then an invalidate_buffers call that doesn't trash dirty buffers. */
void __invalidate_buffers(kdev_t dev, int destroy_dirty_buffers)
{
int i, nlist, slept;
__remove_from_queues(bh);
put_last_free(bh);
}
+ /* else complain loudly? */
+
write_unlock(&hash_table_lock);
if (slept)
goto out;
blocks = (filesize - pos) >> (9+index);
- if (blocks < (read_ahead[MAJOR(dev)] >> index))
- blocks = read_ahead[MAJOR(dev)] >> index;
if (blocks > NBUF)
blocks = NBUF;
-/* if (blocks) printk("breada (new) %d blocks\n",blocks); */
-
bhlist[0] = bh;
j = 1;
for(i=1; i<blocks; i++) {
static struct super_block * coda_read_super(struct super_block *sb,
void *data, int silent)
{
- struct inode *psdev = 0, *root = 0;
+ struct inode *root = 0;
struct coda_sb_info *sbi = NULL;
struct venus_comm *vc = NULL;
ViceFid fid;
mpnt->vm_pgoff = 0;
mpnt->vm_file = NULL;
mpnt->vm_private_data = (void *) 0;
- vmlist_modify_lock(current->mm);
+	spin_lock(&current->mm->page_table_lock);
insert_vm_struct(current->mm, mpnt);
- vmlist_modify_unlock(current->mm);
+	spin_unlock(&current->mm->page_table_lock);
current->mm->total_vm = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT;
}
kdev_t dev = s->s_dev;
struct buffer_head *bh;
- /* vvvv - workaround for the breada bug */
- if (!ahead || secno + ahead + (read_ahead[MAJOR(dev)] >> 9) >= s->s_hpfs_fs_size)
+ if (!ahead || secno + ahead >= s->s_hpfs_fs_size)
*bhp = bh = bread(dev, secno, 512);
else *bhp = bh = breada(dev, secno, 512, 0, (ahead + 1) << 9);
if (bh != NULL)
goto bail;
}
- /* vvvv - workaround for the breada bug */
- if (!ahead || secno + 4 + ahead + (read_ahead[MAJOR(dev)] >> 9) > s->s_hpfs_fs_size)
+ if (!ahead || secno + 4 + ahead > s->s_hpfs_fs_size)
qbh->bh[0] = bh = bread(dev, secno, 512);
else qbh->bh[0] = bh = breada(dev, secno, 512, 0, (ahead + 4) << 9);
if (!bh)
/* Append to list of blocked */
nlmsvc_insert_block(block, NLM_NEVER);
- if (list_empty(&block->b_call.a_args.lock.fl.fl_list)) {
+ if (list_empty(&block->b_call.a_args.lock.fl.fl_block)) {
/* Now add block to block list of the conflicting lock
if we haven't done so. */
dprintk("lockd: blocking on this lock.\n");
if (!list_empty(&fl->fl_block))
panic("Attempting to free lock with active block list");
- if (!list_empty(&fl->fl_link) || !list_empty(&fl->fl_list))
+ if (!list_empty(&fl->fl_link))
panic("Attempting to free lock on active lock list");
kmem_cache_free(filelock_cache, fl);
{
INIT_LIST_HEAD(&fl->fl_link);
INIT_LIST_HEAD(&fl->fl_block);
- INIT_LIST_HEAD(&fl->fl_list);
init_waitqueue_head(&fl->fl_wait);
fl->fl_next = NULL;
fl->fl_fasync = NULL;
*/
static void locks_delete_block(struct file_lock *waiter)
{
- list_del(&waiter->fl_list);
- INIT_LIST_HEAD(&waiter->fl_list);
+ list_del(&waiter->fl_block);
+ INIT_LIST_HEAD(&waiter->fl_block);
list_del(&waiter->fl_link);
INIT_LIST_HEAD(&waiter->fl_link);
waiter->fl_next = NULL;
static void locks_insert_block(struct file_lock *blocker,
struct file_lock *waiter)
{
- if (!list_empty(&waiter->fl_list)) {
+ if (!list_empty(&waiter->fl_block)) {
printk(KERN_ERR "locks_insert_block: removing duplicated lock "
"(pid=%d %Ld-%Ld type=%d)\n", waiter->fl_pid,
waiter->fl_start, waiter->fl_end, waiter->fl_type);
locks_delete_block(waiter);
}
- list_add_tail(&waiter->fl_list, &blocker->fl_block);
+ list_add_tail(&waiter->fl_block, &blocker->fl_block);
waiter->fl_next = blocker;
list_add(&waiter->fl_link, &blocked_list);
}
static void locks_wake_up_blocks(struct file_lock *blocker, unsigned int wait)
{
while (!list_empty(&blocker->fl_block)) {
- struct file_lock *waiter = list_entry(blocker->fl_block.next, struct file_lock, fl_list);
+ struct file_lock *waiter = list_entry(blocker->fl_block.next, struct file_lock, fl_block);
/* N.B. Is it possible for the notify function to block?? */
if (waiter->fl_notify)
waiter->fl_notify(waiter);
caller_pid = caller_fl->fl_pid;
blocked_owner = block_fl->fl_owner;
blocked_pid = block_fl->fl_pid;
- tmp = blocked_list.next;
next_task:
if (caller_owner == blocked_owner && caller_pid == blocked_pid)
posix_unblock_lock(struct file_lock *waiter)
{
acquire_fl_sem();
- if (!list_empty(&waiter->fl_list)) {
+ if (!list_empty(&waiter->fl_block)) {
locks_delete_block(waiter);
wake_up(&waiter->fl_wait);
}
list_for_each(btmp, &fl->fl_block) {
struct file_lock *bfl = list_entry(btmp,
- struct file_lock, fl_list);
+ struct file_lock, fl_block);
lock_get_status(q, bfl, i, " ->");
move_lock_status(&q, &pos, offset);
/* if there's no valid primary partition, assume that no Atari
format partition table (there's no reliable magic or the like
:-() */
+ brelse(bh);
return 0;
}
poll_wait(filp, PIPE_WAIT(*inode), wait);
- /* Reading only -- no need for aquiring the semaphore. */
+ /* Reading only -- no need for acquiring the semaphore. */
mask = POLLIN | POLLRDNORM;
if (PIPE_EMPTY(*inode))
mask = POLLOUT | POLLWRNORM;
sz = len;
ext_start = NULL;
}
- break;
+ goto stop0;
}
}
- if (charbuf[chi] == '.')
- break;
}
+stop0:;
if (ext_start == name - 1) {
sz = len;
ext_start = NULL;
if (chl == 0)
break;
for (chi = 0; chi < chl; chi++)
- if (!strchr(skip_chars, charbuf[chi]))
- break;
+ if (!strchr(skip_chars, charbuf[chi])) {
+ goto stop1;
+ }
name_start++;
}
+stop1:;
if (name_start != ext_start) {
sz = ext_start - name;
ext_start++;
/*
* FPU lazy state save handling...
*/
-extern void save_fpu( struct task_struct *tsk );
extern void save_init_fpu( struct task_struct *tsk );
extern void restore_fpu( struct task_struct *tsk );
#define unlazy_fpu( tsk ) do { \
if ( tsk->flags & PF_USEDFPU ) \
- save_fpu( tsk ); \
+ save_init_fpu( tsk ); \
} while (0)
#define clear_fpu( tsk ) do { \
#define X86_FEATURE_CMOV 0x00008000 /* CMOV instruction (FCMOVCC and FCOMI too if FPU present) */
#define X86_FEATURE_PAT 0x00010000 /* Page Attribute Table */
#define X86_FEATURE_PSE36 0x00020000 /* 36-bit PSEs */
-#define X86_FEATURE_18 0x00040000
+#define X86_FEATURE_PN 0x00040000
#define X86_FEATURE_19 0x00080000
#define X86_FEATURE_20 0x00100000
#define X86_FEATURE_21 0x00200000
-#ifdef CONFIG_ACPI_KERNEL_CONFIG
/*
* acpikcfg.h - ACPI based Kernel Configuration Manager External Interfaces
*
* Copyright (C) 2000 J.I. Lee <jung-ik.lee@intel.com>
*/
+#include <linux/config.h>
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
u32 __init acpi_cf_init (void * rsdp);
u32 __init acpi_cf_terminate (void );
* Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
*/
-#include <linux/config.h>
-
#include <linux/types.h>
#include <asm/ptrace.h>
#ifndef _ASM_IA64_PCI_H
#define _ASM_IA64_PCI_H
-#include <linux/config.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>
#include <asm/bitops.h>
#include <asm/mmu_context.h>
-#include <asm/processor.h>
#include <asm/system.h>
/*
* traditional two-level paging, page table allocation routines:
*/
-extern __inline__ pmd_t *get_pmd_fast(void)
+static __inline__ pmd_t *get_pmd_fast(void)
{
return (pmd_t *)0;
}
-extern __inline__ void free_pmd_fast(pmd_t *pmd) { }
-extern __inline__ void free_pmd_slow(pmd_t *pmd) { }
+static __inline__ void free_pmd_fast(pmd_t *pmd) { }
+static __inline__ void free_pmd_slow(pmd_t *pmd) { }
-extern inline pmd_t * pmd_alloc(pgd_t *pgd, unsigned long address)
+static __inline__ pmd_t * pmd_alloc(pgd_t *pgd, unsigned long address)
{
if (!pgd)
BUG();
#include <asm/processor.h>
#include <linux/threads.h>
+#include <linux/slab.h>
#define pgd_quicklist (current_cpu_data.pgd_quick)
#define pmd_quicklist ((unsigned long *)0)
* if any.
*/
-extern __inline__ pgd_t *get_pgd_slow(void)
+static __inline__ pgd_t *get_pgd_slow(void)
{
- pgd_t *ret = (pgd_t *)__get_free_page(GFP_KERNEL);
+ unsigned int pgd_size = (USER_PTRS_PER_PGD * sizeof(pgd_t));
+ pgd_t *ret = (pgd_t *)kmalloc(pgd_size, GFP_KERNEL);
+
+ if (ret)
+ memset(ret, 0, pgd_size);
- if (ret) {
- memset(ret, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
- memcpy(ret + USER_PTRS_PER_PGD, swapper_pg_dir + USER_PTRS_PER_PGD, (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
- }
return ret;
}
-extern __inline__ pgd_t *get_pgd_fast(void)
+static __inline__ pgd_t *get_pgd_fast(void)
{
unsigned long *ret;
return (pgd_t *)ret;
}
-extern __inline__ void free_pgd_fast(pgd_t *pgd)
+static __inline__ void free_pgd_fast(pgd_t *pgd)
{
*(unsigned long *)pgd = (unsigned long) pgd_quicklist;
pgd_quicklist = (unsigned long *) pgd;
pgtable_cache_size++;
}
-extern __inline__ void free_pgd_slow(pgd_t *pgd)
+static __inline__ void free_pgd_slow(pgd_t *pgd)
{
- free_page((unsigned long)pgd);
+ kfree(pgd);
}
extern pte_t *get_pte_slow(pmd_t *pmd, unsigned long address_preadjusted);
extern pte_t *get_pte_kernel_slow(pmd_t *pmd, unsigned long address_preadjusted);
-extern __inline__ pte_t *get_pte_fast(void)
+static __inline__ pte_t *get_pte_fast(void)
{
unsigned long *ret;
return (pte_t *)ret;
}
-extern __inline__ void free_pte_fast(pte_t *pte)
+static __inline__ void free_pte_fast(pte_t *pte)
{
*(unsigned long *)pte = (unsigned long) pte_quicklist;
pte_quicklist = (unsigned long *) pte;
pgtable_cache_size++;
}
-extern __inline__ void free_pte_slow(pte_t *pte)
+static __inline__ void free_pte_slow(pte_t *pte)
{
free_page((unsigned long)pte);
}
#define pgd_free(pgd) free_pgd_slow(pgd)
#define pgd_alloc() get_pgd_fast()
-extern inline pte_t * pte_alloc_kernel(pmd_t * pmd, unsigned long address)
+static __inline__ pte_t * pte_alloc_kernel(pmd_t * pmd, unsigned long address)
{
if (!pmd)
BUG();
return (pte_t *) pmd_page(*pmd) + address;
}
-extern inline pte_t * pte_alloc(pmd_t * pmd, unsigned long address)
+static __inline__ pte_t * pte_alloc(pmd_t * pmd, unsigned long address)
{
address = (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
* allocating and freeing a pmd is trivial: the 1-entry pmd is
* inside the pgd, so has no extra memory associated with it.
*/
-extern inline void pmd_free(pmd_t * pmd)
+static __inline__ void pmd_free(pmd_t * pmd)
{
}
extern int do_check_pgt_cache(int, int);
-extern inline void set_pgdir(unsigned long address, pgd_t entry)
-{
- struct task_struct * p;
- pgd_t *pgd;
-
- read_lock(&tasklist_lock);
- for_each_task(p) {
- if (!p->mm)
- continue;
- *pgd_offset(p->mm,address) = entry;
- }
- read_unlock(&tasklist_lock);
- for (pgd = (pgd_t *)pgd_quicklist; pgd; pgd = (pgd_t *)*(unsigned long *)pgd)
- pgd[address >> PGDIR_SHIFT] = entry;
-}
-
/*
* TLB flushing:
*
extern void flush_tlb_range(struct mm_struct *mm, unsigned long start,
unsigned long end);
extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
-extern inline void flush_tlb_pgtables(struct mm_struct *mm,
- unsigned long start, unsigned long end)
+
+static __inline__ void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
{
}
-/* $Id: pgalloc.h,v 1.9 2000/08/01 04:53:58 anton Exp $ */
+/* $Id: pgalloc.h,v 1.10 2000/10/13 01:40:26 davem Exp $ */
#ifndef _SPARC_PGALLOC_H
#define _SPARC_PGALLOC_H
#define pgd_free(pgd) BTFIXUP_CALL(pgd_free)(pgd)
#define pgd_alloc() BTFIXUP_CALL(pgd_alloc)()
-BTFIXUPDEF_CALL(void, set_pgdir, unsigned long, pgd_t)
+#error Anton, you need to do set_pgdir things now as on ix86, see i386/mm/fault.c
#define set_pgdir(address,entry) BTFIXUP_CALL(set_pgdir)(address,entry)
extern int do_check_pgt_cache(int, int);
-/* Nothing to do on sparc64 :) */
-#define set_pgdir(address, entry) do { } while(0)
-
#endif /* _SPARC64_PGALLOC_H */
int coda_permission(struct inode *inode, int mask);
int coda_revalidate_inode(struct dentry *);
int coda_notify_change(struct dentry *, struct iattr *);
-int coda_pioctl(struct inode * inode, struct file * filp,
- unsigned int cmd, unsigned long arg);
/* global variables */
extern int coda_debug;
struct file_lock {
struct file_lock *fl_next; /* singly linked list for this inode */
struct list_head fl_link; /* doubly linked list of all locks */
- struct list_head fl_block; /* circular list of blocked processes */
- struct list_head fl_list; /* block list member */
+ struct list_head fl_block; /* circular list of blocked processes */
fl_owner_t fl_owner;
unsigned int fl_pid;
wait_queue_head_t fl_wait;
#define free_page(addr) free_pages((addr),0)
extern void show_free_areas(void);
-extern void show_free_areas_node(int nid);
+extern void show_free_areas_node(pg_data_t *pgdat);
extern void clear_page_tables(struct mm_struct *, unsigned long, int);
#define pgcache_under_min() (atomic_read(&page_cache_size) * 100 < \
page_cache.min_percent * num_physpages)
-#define vmlist_access_lock(mm) spin_lock(&mm->page_table_lock)
-#define vmlist_access_unlock(mm) spin_unlock(&mm->page_table_lock)
-#define vmlist_modify_lock(mm) vmlist_access_lock(mm)
-#define vmlist_modify_unlock(mm) vmlist_access_unlock(mm)
-
#endif /* __KERNEL__ */
#endif
* prototypes for the discontig memory code.
*/
struct page;
-extern void show_free_areas_core(int);
+extern void show_free_areas_core(pg_data_t *pgdat);
extern void free_area_init_core(int nid, pg_data_t *pgdat, struct page **gmap,
unsigned long *zones_size, unsigned long paddr, unsigned long *zholes_size,
struct page *pmap);
-#ifndef CONFIG_DISCONTIGMEM
-
extern pg_data_t contig_page_data;
+#ifndef CONFIG_DISCONTIGMEM
+
#define NODE_DATA(nid) (&contig_page_data)
#define NODE_MEM_MAP(nid) mem_map
* Client authentication handle
*/
#define RPC_CREDCACHE_NR 8
+#define RPC_CREDCACHE_MASK (RPC_CREDCACHE_NR - 1)
struct rpc_auth {
struct rpc_cred * au_credcache[RPC_CREDCACHE_NR];
unsigned long au_expire; /* cache expiry interval */
* get the semaphore and if this process wants to reduce some
* semaphore value we simply wake it up without doing the
* operation. So it has to try to get it later. Thus e.g. the
- * running process may reaquire the semaphore during the current
+ * running process may reacquire the semaphore during the current
* time slice. If it only waits for zero or increases the semaphore,
* we do the operation in advance and wake it up.
* 2) It did not wake up all zero waiting processes. We try to do
* NOTE! we rely on the previous spin_lock to
* lock interrupts for us! We can only be called with
* "sigmask_lock" held, and the local interrupt must
- * have been disabled when that got aquired!
+ * have been disabled when that got acquired!
*
* No need to set need_resched since signal event passing
* goes through ->blocked
/*
* For SMP, we need to re-test the user struct counter
- * after having aquired the spinlock. This allows us to do
+ * after having acquired the spinlock. This allows us to do
* the common case (not freeing anything) without having
* any locking.
*/
get_file(n->vm_file);
if (n->vm_ops && n->vm_ops->open)
n->vm_ops->open(n);
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_pgoff += (end - vma->vm_start) >> PAGE_SHIFT;
vma->vm_start = end;
insert_vm_struct(current->mm, n);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
get_file(n->vm_file);
if (n->vm_ops && n->vm_ops->open)
n->vm_ops->open(n);
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_end = start;
insert_vm_struct(current->mm, n);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
vma->vm_ops->open(left);
vma->vm_ops->open(right);
}
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_pgoff += (start - vma->vm_start) >> PAGE_SHIFT;
vma->vm_start = start;
vma->vm_end = end;
vma->vm_raend = 0;
insert_vm_struct(current->mm, left);
insert_vm_struct(current->mm, right);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
static inline int mlock_fixup_all(struct vm_area_struct * vma, int newflags)
{
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_flags = newflags;
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
get_file(n->vm_file);
if (n->vm_ops && n->vm_ops->open)
n->vm_ops->open(n);
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_pgoff += (end - vma->vm_start) >> PAGE_SHIFT;
vma->vm_start = end;
insert_vm_struct(current->mm, n);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
get_file(n->vm_file);
if (n->vm_ops && n->vm_ops->open)
n->vm_ops->open(n);
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_end = start;
insert_vm_struct(current->mm, n);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
vma->vm_ops->open(left);
vma->vm_ops->open(right);
}
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_pgoff += (start - vma->vm_start) >> PAGE_SHIFT;
vma->vm_start = start;
vma->vm_end = end;
vma->vm_raend = 0;
insert_vm_struct(current->mm, left);
insert_vm_struct(current->mm, right);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
break;
}
}
- vmlist_modify_lock(current->mm);
+	spin_lock(&current->mm->page_table_lock);
merge_segments(current->mm, start, end);
- vmlist_modify_unlock(current->mm);
+	spin_unlock(&current->mm->page_table_lock);
return error;
}
if (error)
break;
}
- vmlist_modify_lock(current->mm);
+	spin_lock(&current->mm->page_table_lock);
merge_segments(current->mm, 0, TASK_SIZE);
- vmlist_modify_unlock(current->mm);
+	spin_unlock(&current->mm->page_table_lock);
return error;
}
*/
flags = vma->vm_flags;
addr = vma->vm_start; /* can addr have changed?? */
- vmlist_modify_lock(mm);
+ spin_lock(&mm->page_table_lock);
insert_vm_struct(mm, vma);
if (correct_wcount)
atomic_inc(&file->f_dentry->d_inode->i_writecount);
merge_segments(mm, vma->vm_start, vma->vm_end);
- vmlist_modify_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
mm->total_vm += len >> PAGE_SHIFT;
if (flags & VM_LOCKED) {
/* Work out to one of the ends. */
if (end == area->vm_end) {
area->vm_end = addr;
- vmlist_modify_lock(mm);
+ spin_lock(&mm->page_table_lock);
} else if (addr == area->vm_start) {
area->vm_pgoff += (end - area->vm_start) >> PAGE_SHIFT;
area->vm_start = end;
- vmlist_modify_lock(mm);
+ spin_lock(&mm->page_table_lock);
} else {
/* Unmapping a hole: area->vm_start < addr <= end < area->vm_end */
/* Add end mapping -- leave beginning for below */
if (mpnt->vm_ops && mpnt->vm_ops->open)
mpnt->vm_ops->open(mpnt);
area->vm_end = addr; /* Truncate area */
- vmlist_modify_lock(mm);
+ spin_lock(&mm->page_table_lock);
insert_vm_struct(mm, mpnt);
}
insert_vm_struct(mm, area);
- vmlist_modify_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
return extra;
}
npp = (prev ? &prev->vm_next : &mm->mmap);
free = NULL;
- vmlist_modify_lock(mm);
+ spin_lock(&mm->page_table_lock);
for ( ; mpnt && mpnt->vm_start < addr+len; mpnt = *npp) {
*npp = mpnt->vm_next;
mpnt->vm_next = free;
avl_remove(mpnt, &mm->mmap_avl);
}
mm->mmap_cache = NULL; /* Kill the cache. */
- vmlist_modify_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
/* Ok - we have the memory areas we should free on the 'free' list,
* so release them, and unmap the page range..
flags = vma->vm_flags;
addr = vma->vm_start;
- vmlist_modify_lock(mm);
+ spin_lock(&mm->page_table_lock);
insert_vm_struct(mm, vma);
merge_segments(mm, vma->vm_start, vma->vm_end);
- vmlist_modify_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
mm->total_vm += len >> PAGE_SHIFT;
if (flags & VM_LOCKED) {
struct vm_area_struct * mpnt;
release_segments(mm);
- vmlist_modify_lock(mm);
+ spin_lock(&mm->page_table_lock);
mpnt = mm->mmap;
mm->mmap = mm->mmap_avl = mm->mmap_cache = NULL;
- vmlist_modify_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
mm->rss = 0;
mm->total_vm = 0;
mm->locked_vm = 0;
if (mpnt->vm_ops && mpnt->vm_ops->close) {
mpnt->vm_pgoff += (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT;
mpnt->vm_start = mpnt->vm_end;
- vmlist_modify_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
mpnt->vm_ops->close(mpnt);
- vmlist_modify_lock(mm);
+ spin_lock(&mm->page_table_lock);
}
mm->map_count--;
remove_shared_vm_struct(mpnt);
static inline int mprotect_fixup_all(struct vm_area_struct * vma,
int newflags, pgprot_t prot)
{
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_flags = newflags;
vma->vm_page_prot = prot;
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
get_file(n->vm_file);
if (n->vm_ops && n->vm_ops->open)
n->vm_ops->open(n);
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_pgoff += (end - vma->vm_start) >> PAGE_SHIFT;
vma->vm_start = end;
insert_vm_struct(current->mm, n);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
get_file(n->vm_file);
if (n->vm_ops && n->vm_ops->open)
n->vm_ops->open(n);
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_end = start;
insert_vm_struct(current->mm, n);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
vma->vm_ops->open(left);
vma->vm_ops->open(right);
}
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_pgoff += (start - vma->vm_start) >> PAGE_SHIFT;
vma->vm_start = start;
vma->vm_end = end;
vma->vm_page_prot = prot;
insert_vm_struct(current->mm, left);
insert_vm_struct(current->mm, right);
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
return 0;
}
break;
}
}
- vmlist_modify_lock(current->mm);
+	spin_lock(&current->mm->page_table_lock);
merge_segments(current->mm, start, end);
- vmlist_modify_unlock(current->mm);
+	spin_unlock(&current->mm->page_table_lock);
out:
	up(&current->mm->mmap_sem);
return error;
get_file(new_vma->vm_file);
if (new_vma->vm_ops && new_vma->vm_ops->open)
new_vma->vm_ops->open(new_vma);
- vmlist_modify_lock(current->mm);
+ spin_lock(&current->mm->page_table_lock);
insert_vm_struct(current->mm, new_vma);
merge_segments(current->mm, new_vma->vm_start, new_vma->vm_end);
- vmlist_modify_unlock(current->mm);
+ spin_unlock(&current->mm->page_table_lock);
do_munmap(current->mm, addr, old_len);
current->mm->total_vm += new_len >> PAGE_SHIFT;
if (new_vma->vm_flags & VM_LOCKED) {
/* can we just expand the current mapping? */
if (max_addr - addr >= new_len) {
int pages = (new_len - old_len) >> PAGE_SHIFT;
- vmlist_modify_lock(vma->vm_mm);
+ spin_lock(&vma->vm_mm->page_table_lock);
vma->vm_end = addr + new_len;
- vmlist_modify_unlock(vma->vm_mm);
+ spin_unlock(&vma->vm_mm->page_table_lock);
current->mm->total_vm += pages;
if (vma->vm_flags & VM_LOCKED) {
current->mm->locked_vm += pages;
int numnodes = 1; /* Initialized for UMA platforms */
-#ifndef CONFIG_DISCONTIGMEM
-
static bootmem_data_t contig_bootmem_data;
pg_data_t contig_page_data = { bdata: &contig_bootmem_data };
+#ifndef CONFIG_DISCONTIGMEM
+
/*
* This is meant to be invoked by platforms whose physical memory starts
* at a considerably higher value than 0. Examples are Super-H, ARM, m68k.
unsigned long *zones_size, unsigned long zone_start_paddr,
unsigned long *zholes_size)
{
- free_area_init_core(0, NODE_DATA(0), &mem_map, zones_size,
+ free_area_init_core(0, &contig_page_data, &mem_map, zones_size,
zone_start_paddr, zholes_size, pmap);
}
struct page * alloc_pages_node(int nid, int gfp_mask, unsigned long order)
{
+#ifdef CONFIG_NUMA
return __alloc_pages(NODE_DATA(nid)->node_zonelists + gfp_mask, order);
+#else
+ return alloc_pages(gfp_mask, order);
+#endif
}
#ifdef CONFIG_DISCONTIGMEM
static spinlock_t node_lock = SPIN_LOCK_UNLOCKED;
-void show_free_areas_node(int nid)
+void show_free_areas_node(pg_data_t *pgdat)
{
unsigned long flags;
spin_lock_irqsave(&node_lock, flags);
- printk("Memory information for node %d:\n", nid);
- show_free_areas_core(nid);
+ show_free_areas_core(pgdat);
spin_unlock_irqrestore(&node_lock, flags);
}
memset(pgdat->valid_addr_bitmap, 0, size);
}
+static struct page * alloc_pages_pgdat(pg_data_t *pgdat, int gfp_mask,
+ unsigned long order)
+{
+ return __alloc_pages(pgdat->node_zonelists + gfp_mask, order);
+}
+
/*
* This can be refined. Currently, tries to do round robin, instead
* should do concentric circle search, starting from current node.
struct page * alloc_pages(int gfp_mask, unsigned long order)
{
struct page *ret = 0;
- int startnode, tnode;
+ pg_data_t *start, *temp;
#ifndef CONFIG_NUMA
unsigned long flags;
- static int nextnid = 0;
+ static pg_data_t *next = 0;
#endif
if (order >= MAX_ORDER)
return NULL;
#ifdef CONFIG_NUMA
- tnode = numa_node_id();
+ temp = NODE_DATA(numa_node_id());
#else
spin_lock_irqsave(&node_lock, flags);
- tnode = nextnid;
- nextnid++;
- if (nextnid == numnodes)
- nextnid = 0;
+ if (!next) next = pgdat_list;
+ temp = next;
+ next = next->node_next;
spin_unlock_irqrestore(&node_lock, flags);
#endif
- startnode = tnode;
- while (tnode < numnodes) {
- if ((ret = alloc_pages_node(tnode++, gfp_mask, order)))
+ start = temp;
+ while (temp) {
+ if ((ret = alloc_pages_pgdat(temp, gfp_mask, order)))
return(ret);
+ temp = temp->node_next;
}
- tnode = 0;
- while (tnode != startnode) {
- if ((ret = alloc_pages_node(tnode++, gfp_mask, order)))
+ temp = pgdat_list;
+ while (temp != start) {
+ if ((ret = alloc_pages_pgdat(temp, gfp_mask, order)))
return(ret);
+ temp = temp->node_next;
}
return(0);
}
#include <linux/pagemap.h>
#include <linux/bootmem.h>
-/* Use NUMNODES instead of numnodes for better code inside kernel APIs */
-#ifndef CONFIG_DISCONTIGMEM
-#define NUMNODES 1
-#else
-#define NUMNODES numnodes
-#endif
-
int nr_swap_pages;
int nr_active_pages;
int nr_inactive_dirty_pages;
{
unsigned int sum;
zone_t *zone;
- int i;
+ pg_data_t *pgdat = pgdat_list;
sum = 0;
- for (i = 0; i < NUMNODES; i++)
- for (zone = NODE_DATA(i)->node_zones; zone < NODE_DATA(i)->node_zones + MAX_NR_ZONES; zone++)
+ while (pgdat) {
+ for (zone = pgdat->node_zones; zone < pgdat->node_zones + MAX_NR_ZONES; zone++)
sum += zone->free_pages;
+ pgdat = pgdat->node_next;
+ }
return sum;
}
{
unsigned int sum;
zone_t *zone;
- int i;
+ pg_data_t *pgdat = pgdat_list;
sum = 0;
- for (i = 0; i < NUMNODES; i++)
- for (zone = NODE_DATA(i)->node_zones; zone < NODE_DATA(i)->node_zones + MAX_NR_ZONES; zone++)
+ while (pgdat) {
+ for (zone = pgdat->node_zones; zone < pgdat->node_zones + MAX_NR_ZONES; zone++)
sum += zone->inactive_clean_pages;
+ pgdat = pgdat->node_next;
+ }
return sum;
}
#if CONFIG_HIGHMEM
unsigned int nr_free_highpages (void)
{
- int i;
+ pg_data_t *pgdat = pgdat_list;
unsigned int pages = 0;
- for (i = 0; i < NUMNODES; i++)
- pages += NODE_DATA(i)->node_zones[ZONE_HIGHMEM].free_pages;
+ while (pgdat) {
+ pages += pgdat->node_zones[ZONE_HIGHMEM].free_pages;
+ pgdat = pgdat->node_next;
+ }
return pages;
}
#endif
* We also calculate the percentage fragmentation. We do this by counting the
* memory on each free list with the exception of the first item on the list.
*/
-void show_free_areas_core(int nid)
+void show_free_areas_core(pg_data_t *pgdat)
{
unsigned long order;
unsigned type;
for (type = 0; type < MAX_NR_ZONES; type++) {
struct list_head *head, *curr;
- zone_t *zone = NODE_DATA(nid)->node_zones + type;
+ zone_t *zone = pgdat->node_zones + type;
unsigned long nr, total, flags;
total = 0;
void show_free_areas(void)
{
- show_free_areas_core(0);
+ show_free_areas_core(pgdat_list);
}
/*
void __init free_area_init(unsigned long *zones_size)
{
- free_area_init_core(0, NODE_DATA(0), &mem_map, zones_size, 0, 0, 0);
+ free_area_init_core(0, &contig_page_data, &mem_map, zones_size, 0, 0, 0);
}
static int __init setup_mem_frac(char *str)
*/
if (!mm)
return;
- vmlist_access_lock(mm);
+ spin_lock(&mm->page_table_lock);
for (vma = mm->mmap; vma; vma = vma->vm_next) {
pgd_t * pgd = pgd_offset(mm, vma->vm_start);
unuse_vma(vma, pgd, entry, page);
}
- vmlist_access_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
return;
}
pte_clear(page_table);
mm->rss--;
flush_tlb_page(vma, address);
- vmlist_access_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
error = swapout(page, file);
UnlockPage(page);
if (file) fput(file);
mm->rss--;
set_pte(page_table, swp_entry_to_pte(entry));
flush_tlb_page(vma, address);
- vmlist_access_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
/* OK, do a physical asynchronous write to swap. */
rw_swap_page(WRITE, page, 0);
* Find the proper vm-area after freezing the vma chain
* and ptes.
*/
- vmlist_access_lock(mm);
+ spin_lock(&mm->page_table_lock);
vma = find_vma(mm, address);
if (vma) {
if (address < vma->vm_start)
mm->swap_cnt = 0;
out_unlock:
- vmlist_access_unlock(mm);
+ spin_unlock(&mm->page_table_lock);
/* We didn't find anything for the process */
return 0;
#
# 19971130 Now in an own category to make correct compilation of the
# AX.25 stuff easier...
-# Joerg Reuter DL1BKE <jreuter@poboxes.com>
+# Joerg Reuter DL1BKE <jreuter@yaina.de>
# 19980129 Moved to net/ax25/Config.in, sourcing device drivers.
mainmenu_option next_comment
*/
if (sk->zapped) {
/* check if we can remove this feature. It is broken. */
- printk(KERN_WARNING "ax25_connect(): %s uses autobind, please contact jreuter@poboxes.com\n",
+ printk(KERN_WARNING "ax25_connect(): %s uses autobind, please contact jreuter@yaina.de\n",
current->comm);
if ((err = ax25_rt_autobind(sk->protinfo.ax25, &fsa->fsa_ax25.sax25_call)) < 0)
return err;
ax25_dama_on(ax25);
/* according to DK4EG's spec we are required to
- * send a RR RESPONSE FINAL NR=0. Please mail
- * <jreuter@poboxes.com> if this causes problems
- * with the TheNetNode DAMA Master implementation.
- */
+ * send a RR RESPONSE FINAL NR=0.
+ */
ax25_std_enquiry_response(ax25);
break;
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/errno.h>
-#include <linux/config.h>
#include <linux/init.h>
#include <net/dst.h>
#include <net/arp.h>
/*
* Perform a file control on a socket file descriptor.
*
- * Doesn't aquire a fd lock, because no network fcntl
+ * Doesn't acquire a fd lock, because no network fcntl
* function sleeps currently.
*/
{
int nr;
- nr = (cred->cr_uid % RPC_CREDCACHE_NR);
+ nr = (cred->cr_uid & RPC_CREDCACHE_MASK);
spin_lock(&rpc_credcache_lock);
cred->cr_next = auth->au_credcache[nr];
auth->au_credcache[nr] = cred;
int nr = 0;
if (!(taskflags & RPC_TASK_ROOTCREDS))
- nr = current->uid % RPC_CREDCACHE_NR;
+ nr = current->uid & RPC_CREDCACHE_MASK;
if (time_before(auth->au_nextgc, jiffies))
rpcauth_gc_credcache(auth);
struct rpc_cred **q, *cr;
int nr;
- nr = (cred->cr_uid % RPC_CREDCACHE_NR);
+ nr = (cred->cr_uid & RPC_CREDCACHE_MASK);
spin_lock(&rpc_credcache_lock);
q = &auth->au_credcache[nr];
while ((cr = *q) != NULL) {
return NULL;
cred->cr_count = 0;
cred->cr_flags = RPCAUTH_CRED_UPTODATE;
+ cred->cr_uid = current->uid;
return cred;
}