This is a Logical Link Layer protocol used for X.25 connections over
Ethernet, using ordinary Ethernet cards.
+
+Frame Diverter (EXPERIMENTAL)
+CONFIG_NET_DIVERT
+  The Frame Diverter allows you to divert packets arriving from the
+  network that are not addressed to the interface receiving them
+  (which must be in promiscuous mode). For example, a Linux box set
+  up as an Ethernet bridge with the Frame Diverter enabled can do
+  some *really* transparent www caching using a Squid proxy.
+
+  This is very useful when you don't want to change your router's
+  configuration (or if you simply don't have access to it).
+
+  The other possible uses of diverting Ethernet frames are numerous:
+ - reroute smtp traffic to another interface
+ - traffic-shape certain network streams
+ - transparently proxy smtp connections
+ - etc...
+
+  For more information, please refer to:
+ http://www.freshmeat.net/projects/etherdivert
+ http://perso.wanadoo.fr/magpie/EtherDivert.html
+
+  If unsure, say N.
+
802.1d Ethernet Bridging
CONFIG_BRIDGE
If you say Y here, then your Linux box will be able to act as an
say Y and read the Ethernet-HOWTO, available from
http://www.linuxdoc.org/docs.html#howto .
+ This driver also works for the following NE2000 clone cards:
+ RealTek RTL-8029 Winbond 89C940 Compex RL2000 KTI ET32P2
+ NetVin NV5000SC Via 86C926 SureCom NE34 Winbond
+ Holtek HT80232 Holtek HT80229
+
This driver is also available as a module ( = code which can be
inserted in and removed from the running kernel whenever you want).
The module will be called ne2k-pci.o. If you want to compile it as a
read the Ethernet-HOWTO, available from
http://www.linuxdoc.org/docs.html#howto .
+  Note: the 8029 is an NE2000 PCI clone; you can use the NE2K-PCI driver.
+
If you want to compile this driver as a module ( = code which can be
inserted in and removed from the running kernel whenever you want),
say M here and read Documentation/modules.txt. This is recommended.
The module will be called usb-ohci.o. If you want to compile it
as a module, say M here and read Documentation/modules.txt.
-USB Human Interface Device (HID) support
+USB Human Interface Device (full HID) support
CONFIG_USB_HID
- Say Y here if you want to connect keyboards, mice, joysticks,
- graphic tablets, or any other HID based devices to your
- computer via USB. More information is available:
- Documentation/usb/input.txt.
+ Say Y here if you want full HID support to connect keyboards,
+ mice, joysticks, graphic tablets, or any other HID based devices
+ to your computer via USB. You can't use this driver and the
+ HIDBP (Boot Protocol) keyboard and mouse drivers at the same time.
+ More information is available: Documentation/usb/input.txt.
If unsure, say Y.
The module will be called hid.o. If you want to compile it as a
module, say M here and read Documentation/modules.txt.
-USB HIDBP Keyboard support
+USB HIDBP Keyboard (basic) support
CONFIG_USB_KBD
Say Y here if you don't want to use the generic HID driver for your
USB keyboard and prefer to use the keyboard in its limited Boot
- Protocol mode. This driver is much smaller than the HID one.
+ Protocol mode instead. This driver is much smaller than the HID one.
This code is also available as a module ( = code which can be
inserted in and removed from the running kernel whenever you want).
If unsure, say N.
-USB HIDBP Mouse support
+USB HIDBP Mouse (basic) support
CONFIG_USB_MOUSE
Say Y here if you don't want to use the generic HID driver for your
USB mouse and prefer to use the mouse in its limited Boot Protocol
- mode. This driver is much smaller than the HID one.
+ mode instead. This driver is much smaller than the HID one.
This code is also available as a module ( = code which can be
inserted in and removed from the running kernel whenever you want).
another UltraSPARC-IIi-cEngine boardset with a 7-segment display,
you should say N to this option.
+IA-64 system type
+CONFIG_IA64_GENERIC
+ This selects the system type of your hardware. A "generic" kernel
+ will run on any supported IA-64 system. However, if you configure
+ a kernel for your specific system, it will be faster and smaller.
+
+ To find out what type of IA-64 system you have, you may want to
+ check the IA-64 Linux web site at http://www.linux-ia64.org/.
+ As of the time of this writing, most hardware is DIG compliant,
+ so the "DIG-compliant" option is usually the right choice.
+
+ HP-simulator For the HP simulator (http://software.hp.com/ia64linux/).
+ SN1-simulator For the SGI SN1 simulator.
+ DIG-compliant For DIG ("Developer's Interface Guide") compliant system.
+
+ If you don't know what to do, choose "generic".
+
+Kernel page size
+CONFIG_IA64_PAGE_SIZE_4KB
+
+ This lets you select the page size of the kernel. For best IA-64
+ performance, a page size of 8KB or 16KB is recommended. For best
+ IA-32 compatibility, a page size of 4KB should be selected (the vast
+ majority of IA-32 binaries work perfectly fine with a larger page
+  size). For Itanium systems, do NOT choose a page size larger than
+ 16KB.
+
+ 4KB For best IA-32 compatibility
+ 8KB For best IA-64 performance
+ 16KB For best IA-64 performance
+ 64KB Not for Itanium.
+
+ If you don't know what to do, choose 8KB.
+
+Enable Itanium A-step specific code
+CONFIG_ITANIUM_ASTEP_SPECIFIC
+ Select this option to build a kernel for an Itanium prototype system
+ with an A-step CPU. You have an A-step CPU if the "revision" field in
+ /proc/cpuinfo is 0.
+
+Enable Itanium A1-step specific code
+CONFIG_ITANIUM_A1_SPECIFIC
+ Select this option to build a kernel for an Itanium prototype system
+ with an A1-step CPU. If you don't know whether you have an A1-step CPU,
+ you probably don't and you can answer "no" here.
+
+Enable Itanium B-step specific code
+CONFIG_ITANIUM_BSTEP_SPECIFIC
+ Select this option to build a kernel for an Itanium prototype system
+ with a B-step CPU. You have a B-step CPU if the "revision" field in
+ /proc/cpuinfo has a value in the range from 1 to 4.
+
+Enable Itanium B0-step specific code
+CONFIG_ITANIUM_B0_SPECIFIC
+  Select this option to build a kernel for an Itanium prototype system
+ with a B0-step CPU. You have a B0-step CPU if the "revision" field in
+ /proc/cpuinfo is 1.
+
+Force interrupt redirection
+CONFIG_IA64_HAVE_IRQREDIR
+ Select this option if you know that your system has the ability to
+ redirect interrupts to different CPUs. Select N here if you're
+ unsure.
+
+Enable use of global TLB purge instruction (ptc.g)
+CONFIG_ITANIUM_PTCG
+ Say Y here if you want the kernel to use the IA-64 "ptc.g"
+ instruction to flush the TLB on all CPUs. Select N here if
+ you're unsure.
+
+Enable SoftSDV hacks
+CONFIG_IA64_SOFTSDV_HACKS
+ Say Y here to enable hacks to make the kernel work on the Intel
+ SoftSDV simulator. Select N here if you're unsure.
+
+Enable AzusA hacks
+CONFIG_IA64_AZUSA_HACKS
+ Say Y here to enable hacks to make the kernel work on the NEC
+ AzusA platform. Select N here if you're unsure.
+
+Force socket buffers below 4GB?
+CONFIG_SKB_BELOW_4GB
+ Most of today's network interface cards (NICs) support DMA to
+ the low 32 bits of the address space only. On machines with
+  more than 4GB of memory, this can cause the system to slow
+ down if there is no I/O TLB hardware. Turning this option on
+ avoids the slow-down by forcing socket buffers to be allocated
+ from memory below 4GB. The downside is that your system could
+ run out of memory below 4GB before all memory has been used up.
+ If you're unsure how to answer this question, answer Y.
+
+Enable IA-64 Machine Check Abort
+CONFIG_IA64_MCA
+ Say Y here to enable machine check support for IA-64. If you're
+ unsure, answer Y.
+
+Performance monitor support
+CONFIG_PERFMON
+ Selects whether support for the IA-64 performance monitor hardware
+ is included in the kernel. This makes some kernel data-structures a
+ little bigger and slows down execution a bit, but it is still
+ usually a good idea to turn this on. If you're unsure, say N.
+
+/proc/pal support
+CONFIG_IA64_PALINFO
+ If you say Y here, you are able to get PAL (Processor Abstraction
+ Layer) information in /proc/pal. This contains useful information
+  about the processors in your system, such as cache and TLB sizes
+ and the PAL firmware version in use.
+
+  To use this option, you must also enable "/proc file system
+  support" (CONFIG_PROC_FS).
+
#
# A couple of things I keep forgetting:
# capitalize: AppleTalk, Ethernet, DOS, DMA, FAT, FTP, Internet,
user space shared/writable mappings of this page potentially
exist, this routine is called.
+ NOTE: This routine need only be called for page cache pages
+ which can potentially ever be mapped into the address
+ space of a user process. So for example, VFS layer code
+ handling vfs symlinks in the page cache need not call
+ this interface at all.
+
The phrase "kernel writes to a page cache page" means,
specifically, that the kernel executes store instructions
that dirty data in that page at the page->virtual mapping
- Linux kernel release 2.3.xx for the IA-64 Platform
+ Linux kernel release 2.4.xx for the IA-64 Platform
- These are the release notes for Linux version 2.3 for IA-64
+ These are the release notes for Linux version 2.4 for IA-64
platform. This document provides information specific to IA-64
ONLY, to get additional information about the Linux kernel also
read the original Linux README provided with the kernel.
IA-64 SPECIFICS
- - Security related issues:
-
- o mmap needs to check whether mapping would overlap with the
- address-space hole in a region or whether the mapping would be
- across regions. In both cases, mmap should fail.
-
- o ptrace is a huge security hole right now as it does not reject
- writing to security sensitive bits (such as the PSR!).
-
- General issues:
- o Kernel modules aren't supported yet.
-
- o For non-RT signals, siginfo isn't passed through from the kernel
- to the point where the signal is actually delivered. Also, we
- should make sure the siginfo data is compliant with the UNIX
- ABI.
-
o Hardly any performance tuning has been done. Obvious targets
- include the library routines (memcpy, IP checksum, etc.). Less
+ include the library routines (IP checksum, etc.). Less
obvious targets include making sure we don't flush the TLB
- needlessly, etc. Also, the TLB handlers should probably try to
- do a speculative load from the virtually mapped linear page
- table and only if that fails fall back on walking the page table
- tree.
+ needlessly, etc.
- o Discontiguous large memory support; memory above 4GB will be
- discontiguous since the 4GB-64MB is reserved for firmware and I/O
- space.
-
- o Correct mapping for PAL runtime code; PAL code needs to be
- mapped by a TR.
-
- o Make current IRQ/IOSAPIC handling closer to IA32 such as,
- disable/enable interrupts, use of INPROGRESS flag etc.
-
- o clone system call implementation; needs to setup proper backing
- store
-
o SMP locks cleanup/optimization
- o IA32 support. Currently experimental. It mostly works but
- there are problems with some dynamically loaded programs.
+ o IA32 support. Currently experimental. It mostly works.
Example for ids parameter initialization:
static struct isapnp_device_id device_ids[] __devinitdata = {
- { ISAPNP_DEVICE_SINGLE('E','S','S', 0x0968, 'E','S','S', 0x0968), }
+ { ISAPNP_DEVICE_SINGLE('E','S','S', 0x0968, 'E','S','S', 0x0968), },
{ ISAPNP_DEVICE_SINGLE_END, }
};
-MODULE_DEVICE_TABLE(isapnp, &device_ids);
+MODULE_DEVICE_TABLE(isapnp, device_ids);
ISA PnP configuration
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 0
-EXTRAVERSION = -test9
+EXTRAVERSION = -test10
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
DRIVERS-$(CONFIG_PCMCIA_NETCARD) += drivers/net/pcmcia/pcmcia_net.o
DRIVERS-$(CONFIG_PCMCIA_CHRDEV) += drivers/char/pcmcia/pcmcia_char.o
DRIVERS-$(CONFIG_DIO) += drivers/dio/dio.a
-DRIVERS-$(CONFIG_SBUS) += drivers/sbus/sbus.a
+DRIVERS-$(CONFIG_SBUS) += drivers/sbus/sbus_all.o
DRIVERS-$(CONFIG_ZORRO) += drivers/zorro/zorro.a
DRIVERS-$(CONFIG_FC4) += drivers/fc4/fc4.a
DRIVERS-$(CONFIG_ALL_PPC) += drivers/macintosh/macintosh.o
endif
ifdef CONFIG_MWINCHIPC6
-CFLAGS += $(shell if $(CC) -march=i686 -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-march=i686"; fi)
+CFLAGS += $(shell if $(CC) -march=i586 -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-march=i586"; fi)
endif
ifdef CONFIG_MWINCHIP2
-CFLAGS += $(shell if $(CC) -march=i686 -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-march=i686"; fi)
+CFLAGS += $(shell if $(CC) -march=i586 -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-march=i586"; fi)
endif
ifdef CONFIG_MWINCHIP3D
-CFLAGS += $(shell if $(CC) -march=i686 -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-march=i686"; fi)
+CFLAGS += $(shell if $(CC) -march=i586 -S -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-march=i586"; fi)
endif
HEAD := arch/i386/kernel/head.o arch/i386/kernel/init_task.o
__asm__("movl %%cr2,%0":"=r" (address));
tsk = current;
+
+ /*
+ * We fault-in kernel-space virtual memory on-demand. The
+ * 'reference' page table is init_mm.pgd.
+ *
+ * NOTE! We MUST NOT take any locks for this case. We may
+ * be in an interrupt or a critical region, and should
+ * only copy the information from the master page table,
+ * nothing more.
+ */
+ if (address >= TASK_SIZE)
+ goto vmalloc_fault;
+
mm = tsk->mm;
info.si_code = SEGV_MAPERR;
bad_area:
up(&mm->mmap_sem);
+bad_area_nosemaphore:
/* User mode accesses just cause a SIGSEGV */
if (error_code & 4) {
tsk->thread.cr2 = address;
/* Kernel mode? Handle exceptions or die */
if (!(error_code & 4))
goto no_context;
+ return;
+
+vmalloc_fault:
+ {
+ /*
+ * Synchronize this task's top level page-table
+ * with the 'reference' page table.
+ */
+ int offset = __pgd_offset(address);
+ pgd_t *pgd, *pgd_k;
+ pmd_t *pmd, *pmd_k;
+
+ pgd = tsk->active_mm->pgd + offset;
+ pgd_k = init_mm.pgd + offset;
+
+ if (!pgd_present(*pgd)) {
+ if (!pgd_present(*pgd_k))
+ goto bad_area_nosemaphore;
+ set_pgd(pgd, *pgd_k);
+ return;
+ }
+
+ pmd = pmd_offset(pgd, address);
+ pmd_k = pmd_offset(pgd_k, address);
+
+ if (pmd_present(*pmd) || !pmd_present(*pmd_k))
+ goto bad_area_nosemaphore;
+ set_pmd(pmd, *pmd_k);
+ return;
+ }
}
if (remap_area_pmd(pmd, address, end - address,
phys_addr + address, flags))
return -ENOMEM;
- set_pgdir(address, *dir);
address = (address + PGDIR_SIZE) & PGDIR_MASK;
dir++;
} while (address && (address < end));
export AWK
LINKFLAGS = -static -T arch/$(ARCH)/vmlinux.lds
-AFLAGS += -Wa,-x
+AFLAGS += -Wa,-x
+AFLAGS_KERNEL := -mconstant-gp
EXTRA =
CFLAGS := $(CFLAGS) -pipe $(EXTRA) -Wa,-x -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
-funwind-tables
CFLAGS_KERNEL := -mconstant-gp
+ifeq ($(CONFIG_ITANIUM_ASTEP_SPECIFIC),y)
+ CFLAGS += -ma-step
+endif
+
ifdef CONFIG_IA64_GENERIC
CORE_FILES := arch/$(ARCH)/hp/hp.a \
arch/$(ARCH)/sn/sn.a \
$(CORE_FILES)
endif
-ifdef CONFIG_IA64_SGI_SN1_SIM
+ifdef CONFIG_IA64_SGI_SN1
+CFLAGS := $(CFLAGS) -DSN -I. -DBRINGUP -DDIRECT_L1_CONSOLE \
+ -DNUMA_BASE -DSIMULATED_KLGRAPH -DNUMA_MIGR_CONTROL \
+ -DLITTLE_ENDIAN -DREAL_HARDWARE -DLANGUAGE_C=1 \
+ -D_LANGUAGE_C=1
SUBDIRS := arch/$(ARCH)/sn/sn1 \
arch/$(ARCH)/sn \
+ arch/$(ARCH)/sn/io \
+ arch/$(ARCH)/sn/fprom \
$(SUBDIRS)
CORE_FILES := arch/$(ARCH)/sn/sn.a \
+ arch/$(ARCH)/sn/io/sgiio.o\
$(CORE_FILES)
endif
void
enter_virtual_mode (unsigned long new_psr)
{
+ long tmp;
+
+ asm volatile ("movl %0=1f" : "=r"(tmp));
asm volatile ("mov cr.ipsr=%0" :: "r"(new_psr));
- asm volatile ("mov cr.iip=%0" :: "r"(&&target));
+ asm volatile ("mov cr.iip=%0" :: "r"(tmp));
asm volatile ("mov cr.ifs=r0");
- asm volatile ("rfi;;"); /* must be last insn in an insn group */
-
- target:
+ asm volatile ("rfi;;");
+ asm volatile ("1:");
}
-
#define MAX_ARGS 32
void
char *kpath, *args;
long arglen = 0;
- asm volatile ("movl gp=__gp" ::: "memory");
+ asm volatile ("movl gp=__gp;;" ::: "memory");
asm volatile ("mov sp=%0" :: "r"(stack) : "memory");
asm volatile ("bsw.1;;");
#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
"generic CONFIG_IA64_GENERIC \
DIG-compliant CONFIG_IA64_DIG \
HP-simulator CONFIG_IA64_HP_SIM \
- SN1-simulator CONFIG_IA64_SGI_SN1_SIM" generic
+ SGI-SN1 CONFIG_IA64_SGI_SN1" generic
choice 'Kernel page size' \
"4KB CONFIG_IA64_PAGE_SIZE_4KB \
bool ' Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
bool ' Enable AzusA hacks' CONFIG_IA64_AZUSA_HACKS
bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA
+ bool ' Force socket buffers below 4GB?' CONFIG_SKB_BELOW_4GB
+
+ bool ' ACPI kernel configuration manager (EXPERIMENTAL)' CONFIG_ACPI_KERNEL_CONFIG
+ if [ "$CONFIG_ACPI_KERNEL_CONFIG" = "y" ]; then
+ define_bool CONFIG_PM y
+ define_bool CONFIG_ACPI y
+ define_bool CONFIG_ACPI_INTERPRETER y
+ fi
fi
-if [ "$CONFIG_IA64_SGI_SN1_SIM" = "y" ]; then
- define_bool CONFIG_NUMA y
- define_bool CONFIG_IA64_SOFTSDV_HACKS y
+if [ "$CONFIG_IA64_SGI_SN1" = "y" ]; then
+ bool ' Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
+ bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC
+ if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" = "y" ]; then
+ bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIFIC
+ fi
+ bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM n
+	bool '  Enable SGI hack for version 1.0 synergy bugs' CONFIG_IA64_SGI_SYNERGY_1_0_HACKS n
+ define_bool CONFIG_DEVFS_DEBUG y
+ define_bool CONFIG_DEVFS_FS y
+ define_bool CONFIG_IA64_BRL_EMU y
+ define_bool CONFIG_IA64_MCA y
+ define_bool CONFIG_IA64_SGI_IO y
+ define_bool CONFIG_ITANIUM y
fi
define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kcore.
fi # !HP_SIM
+#
+# input before char - char/joystick depends on it. As does USB.
+#
+source drivers/input/Config.in
source drivers/char/Config.in
#source drivers/misc/Config.in
endmenu
source drivers/usb/Config.in
-source drivers/input/Config.in
fi # !HP_SIM
bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
bool 'Enable new unwind support' CONFIG_IA64_NEW_UNWIND
+bool 'Disable VHPT' CONFIG_DISABLE_VHPT
endmenu
#include <asm/ptrace.h>
#include <asm/system.h>
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+# include <asm/acpikcfg.h>
+#endif
+
#undef DEBUG_IRQ_ROUTING
static spinlock_t iosapic_lock = SPIN_LOCK_UNLOCKED;
{
struct hw_interrupt_type *irq_type;
struct pci_vector_struct *vectors;
- int i, irq;
+ int i, irq, num_pci_vectors;
if (irqbase == 0)
/*
* Map the PCI Interrupt data into the ACPI IOSAPIC data using
* the info that the bootstrap loader passed to us.
*/
+# ifdef CONFIG_ACPI_KERNEL_CONFIG
+ acpi_cf_get_pci_vectors(&vectors, &num_pci_vectors);
+# else
ia64_boot_param.pci_vectors = (__u64) __va(ia64_boot_param.pci_vectors);
vectors = (struct pci_vector_struct *) ia64_boot_param.pci_vectors;
- for (i = 0; i < ia64_boot_param.num_pci_vectors; i++) {
+ num_pci_vectors = ia64_boot_param.num_pci_vectors;
+# endif
+ for (i = 0; i < num_pci_vectors; i++) {
irq = vectors[i].irq;
if (irq < 16)
irq = isa_irq_to_vector(irq);
iosapic_trigger(irq) = IO_SAPIC_LEVEL;
iosapic_polarity(irq) = IO_SAPIC_POL_LOW;
-#ifdef DEBUG_IRQ_ROUTING
+# ifdef DEBUG_IRQ_ROUTING
printk("PCI: BUS %d Slot %x Pin %x IRQ %02x --> Vector %02x IOSAPIC Pin %d\n",
vectors[i].bus, vectors[i].pci_id>>16, vectors[i].pin, vectors[i].irq,
irq, iosapic_pin(irq));
-#endif
+# endif
}
#endif /* CONFIG_IA64_SOFTSDV_HACKS */
unsigned int ver, v;
int l, max_pin;
- ver = iosapic_version(iosapic->address);
+ ver = iosapic_version((unsigned long) ioremap(iosapic->address, 0));
max_pin = (ver >> 16) & 0xff;
printk("IOSAPIC Version %x.%x: address 0x%lx IRQs 0x%x - 0x%x\n",
#
.S.s:
- $(CPP) $(AFLAGS) -o $*.s $<
+ $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -o $*.s $<
.S.o:
- $(CC) $(AFLAGS) -c -o $*.o $<
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $*.o $<
all: ia32.o
pte_t * pte;
if (page_count(page) != 1)
- printk("mem_map disagrees with %p at %08lx\n", page, address);
+ printk("mem_map disagrees with %p at %08lx\n", (void *) page, address);
pgd = pgd_offset(tsk->mm, address);
pmd = pmd_alloc(pgd, address);
if (!pmd) {
: "r" ((ulong)IA32_FCR_DEFAULT));
__asm__("mov ar.fir = r0");
__asm__("mov ar.fdr = r0");
+ __asm__("mov %0=ar.k0 ;;" : "=r" (current->thread.old_iob));
+ __asm__("mov ar.k0=%0 ;;" :: "r"(IA32_IOBASE));
/* TSS */
__asm__("mov ar.k1 = %0"
: /* no outputs */
err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
- if (_NSIG_WORDS > 1) {
+ if (_IA32_NSIG_WORDS > 1) {
err |= __copy_to_user(frame->extramask, &set->sig[1],
sizeof(frame->extramask));
}
#if 0
printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
- current->comm, current->pid, sig, frame, regs->cr_iip, frame->pretcode);
+ current->comm, current->pid, sig, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
return 1;
#if 0
printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
- current->comm, current->pid, frame, regs->cr_iip, frame->pretcode);
+ current->comm, current->pid, (void *) frame, regs->cr_iip, frame->pretcode);
#endif
return 1;
thread->csd = csd;
thread->ssd = ssd;
thread->tssd = tssd;
+ asm ("mov ar.k0=%0 ;;" :: "r"(thread->old_iob));
}
void
"mov ar.k1=%7"
:: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr),
"r"(csd), "r"(ssd), "r"(tssd));
+ asm ("mov %0=ar.k0 ;;" : "=r"(thread->old_iob));
+ asm ("mov ar.k0=%0 ;;" :: "r"(IA32_IOBASE));
}
/*
n = 0;
do {
err = get_user(addr, (int *)A(arg));
- if (IS_ERR(err))
+ if (err)
return err;
if (ap) { /* no access_ok needed, we allocated */
err = __put_user((char *)A(addr), ap++);
- if (IS_ERR(err))
+ if (err)
return err;
}
arg += sizeof(unsigned int);
{
struct pt_regs *regs = (struct pt_regs *)&stack;
char **av, **ae;
- int na, ne, r, len;
+ int na, ne, len;
+ long r;
na = nargs(argv, NULL);
- if (IS_ERR(na))
+ if (na < 0)
return(na);
ne = nargs(envp, NULL);
- if (IS_ERR(ne))
+ if (ne < 0)
return(ne);
len = (na + ne + 2) * sizeof(*av);
/*
return (long)av;
ae = av + na + 1;
r = __put_user(0, (av + na));
- if (IS_ERR(r))
+ if (r)
goto out;
r = __put_user(0, (ae + ne));
- if (IS_ERR(r))
+ if (r)
goto out;
r = nargs(argv, av);
- if (IS_ERR(r))
+ if (r < 0)
goto out;
r = nargs(envp, ae);
- if (IS_ERR(r))
+ if (r < 0)
goto out;
r = sys_execve(filename, av, ae, regs);
- if (IS_ERR(r))
+ if (r < 0)
out:
sys_munmap((unsigned long) av, len);
return(r);
error = do_mmap(file, addr, len, prot, flags, poff);
up(¤t->mm->mmap_sem);
- if (!IS_ERR(error))
+ if (!IS_ERR((void *) error))
error += offset - poff;
} else {
down(¤t->mm->mmap_sem);
}
static int
-fillonedir32 (void * __buf, const char * name, int namlen, off_t offset, ino_t ino)
+fillonedir32 (void * __buf, const char * name, int namlen, off_t offset, ino_t ino,
+ unsigned int d_type)
{
struct readdir32_callback * buf = (struct readdir32_callback *) __buf;
struct old_linux32_dirent * dirent;
return(sys_ni_syscall());
}
+/*
+ * The IA64 maps 4 I/O ports for each 4K page
+ */
+#define IOLEN ((65536 / 4) * 4096)
+
+asmlinkage long
+sys_iopl (int level, long arg1, long arg2, long arg3)
+{
+ extern unsigned long ia64_iobase;
+ int fd;
+ struct file * file;
+ unsigned int old;
+ unsigned long addr;
+ mm_segment_t old_fs = get_fs ();
+
+ if (level != 3)
+ return(-EINVAL);
+ /* Trying to gain more privileges? */
+ __asm__ __volatile__("mov %0=ar.eflag ;;" : "=r"(old));
+ if (level > ((old >> 12) & 3)) {
+ if (!capable(CAP_SYS_RAWIO))
+ return -EPERM;
+ }
+ set_fs(KERNEL_DS);
+ fd = sys_open("/dev/mem", O_SYNC | O_RDWR, 0);
+ set_fs(old_fs);
+ if (fd < 0)
+ return fd;
+ file = fget(fd);
+ if (file == NULL) {
+ sys_close(fd);
+ return(-EFAULT);
+ }
+
+ down(¤t->mm->mmap_sem);
+ lock_kernel();
+
+ addr = do_mmap_pgoff(file, IA32_IOBASE,
+ IOLEN, PROT_READ|PROT_WRITE, MAP_SHARED,
+ (ia64_iobase & ~PAGE_OFFSET) >> PAGE_SHIFT);
+
+ unlock_kernel();
+ up(¤t->mm->mmap_sem);
+
+ if (addr >= 0) {
+ __asm__ __volatile__("mov ar.k0=%0 ;;" :: "r"(addr));
+ old = (old & ~0x3000) | (level << 12);
+ __asm__ __volatile__("mov ar.eflag=%0 ;;" :: "r"(old));
+ }
+
+ fput(file);
+ sys_close(fd);
+ return 0;
+}
+
+asmlinkage long
+sys_ioperm (unsigned long from, unsigned long num, int on)
+{
+
+ /*
+ * Since IA64 doesn't have permission bits we'd have to go to
+ * a lot of trouble to simulate them in software. There's
+ * no point, only trusted programs can make this call so we'll
+ * just turn it into an iopl call and let the process have
+ * access to all I/O ports.
+ *
+ * XXX proper ioperm() support should be emulated by
+ * manipulating the page protections...
+ */
+ return(sys_iopl(3, 0, 0, 0));
+}
+
#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */
/* In order to reduce some races, while at the same time doing additional
#
.S.s:
- $(CPP) $(AFLAGS) -o $*.s $<
+ $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -o $*.s $<
.S.o:
- $(CC) $(AFLAGS) -c -o $*.o $<
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $*.o $<
all: kernel.o head.o init_task.o
obj-$(CONFIG_IA64_GENERIC) += machvec.o
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_PCI) += pci.o
-obj-$(CONFIG_SMP) += smp.o
+obj-$(CONFIG_SMP) += smp.o smpboot.o
obj-$(CONFIG_IA64_MCA) += mca.o mca_asm.o
obj-$(CONFIG_IA64_BRL_EMU) += brl_emu.o
#include <asm/iosapic.h>
#include <asm/machvec.h>
#include <asm/page.h>
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+# include <asm/acpikcfg.h>
+#endif
#undef ACPI_DEBUG /* Guess what this does? */
-#ifdef CONFIG_SMP
-extern struct smp_boot_data smp;
-#endif
-
/* These are ugly but will be reclaimed by the kernel */
-int __initdata available_cpus = 0;
-int __initdata total_cpus = 0;
+int __initdata available_cpus;
+int __initdata total_cpus;
-void (*pm_idle) (void);
+void (*pm_idle)(void);
/*
* Identify usable CPU's and remember them for SMP bringup later.
add = 0;
}
+#ifdef CONFIG_SMP
+ smp_boot_data.cpu_phys_id[total_cpus] = -1;
+#endif
if (add) {
printk("Available.\n");
available_cpus++;
#ifdef CONFIG_SMP
-# if LARGE_CPU_ID_OK
- smp.cpu_map[total_cpus] = (lsapic->id << 8) | lsapic->eid;
-# else
- smp.cpu_map[total_cpus] = lsapic->id;
-# endif
-#endif
+ smp_boot_data.cpu_phys_id[total_cpus] = (lsapic->id << 8) | lsapic->eid;
+#endif /* CONFIG_SMP */
}
-
total_cpus++;
}
break;
}
-#if 1/*def ACPI_DEBUG*/
+# ifdef ACPI_DEBUG
printk("Legacy ISA IRQ %x -> IA64 Vector %x IOSAPIC Pin %x Active %s %s Trigger\n",
legacy->isa_irq, vector, iosapic_pin(vector),
((iosapic_polarity(vector) == IO_SAPIC_POL_LOW) ? "Low" : "High"),
((iosapic_trigger(vector) == IO_SAPIC_LEVEL) ? "Level" : "Edge"));
-#endif /* ACPI_DEBUG */
-
+# endif /* ACPI_DEBUG */
#endif /* CONFIG_IA64_IRQ_ACPI */
}
/* Base address of IPI Message Block */
ipi_base_addr = (unsigned long) ioremap(msapic->interrupt_block, 0);
-#ifdef CONFIG_SMP
- memset(&smp, -1, sizeof(smp));
-#endif
-
p = (char *) (msapic + 1);
end = p + (msapic->header.length - sizeof(acpi_sapic_t));
printk("ACPI: %.6s %.8s %d.%d\n", rsdt->header.oem_id, rsdt->header.oem_table_id,
rsdt->header.oem_revision >> 16, rsdt->header.oem_revision & 0xffff);
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+ acpi_cf_init(rsdp);
+#endif
+
tables = (rsdt->header.length - sizeof(acpi_desc_table_hdr_t)) / 8;
for (i = 0; i < tables; i++) {
hdrp = (acpi_desc_table_hdr_t *) __va(rsdt->entry_ptrs[i]);
acpi_parse_msapic((acpi_sapic_t *) hdrp);
}
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+ acpi_cf_terminate();
+#endif
+
#ifdef CONFIG_SMP
if (available_cpus == 0) {
printk("ACPI: Found 0 CPUS; assuming 1\n");
available_cpus = 1; /* We've got at least one of these, no? */
}
- smp.cpu_count = available_cpus;
+ smp_boot_data.cpu_count = available_cpus;
#endif
return 1;
}
#else
# if defined (CONFIG_IA64_HP_SIM)
return "hpsim";
-# elif defined (CONFIG_IA64_SGI_SN1_SIM)
+# elif defined (CONFIG_IA64_SGI_SN1)
return "sn1";
# elif defined (CONFIG_IA64_DIG)
return "dig";
# error Unknown platform. Fix acpi.c.
# endif
#endif
-}
+}
md->phys_addr);
continue;
}
- mask = ~((1 << _PAGE_SIZE_4M)-1); /* XXX should be dynamic? */
+ /*
+ * We must use the same page size as the one used
+ * for the kernel region when we map the PAL code.
+ * This way, we avoid overlapping TRs if code is
+ * executed nearby. The Alt I-TLB installs 256MB
+ * page sizes as defined for region 7.
+ *
+ * XXX Fixme: should be dynamic here (for page size)
+ */
+ mask = ~((1 << _PAGE_SIZE_256M)-1);
vaddr = PAGE_OFFSET + md->phys_addr;
- printk(__FUNCTION__": mapping PAL code [0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
- md->phys_addr, md->phys_addr + (md->num_pages << 12),
- vaddr & mask, (vaddr & mask) + 4*1024*1024);
+ /*
+ * We must check that the PAL mapping won't overlap
+ * with the kernel mapping on ITR1.
+ *
+ * PAL code is guaranteed to be aligned on a power of 2
+ * between 4k and 256KB.
+ * Also from the documentation, it seems like there is an
+ * implicit guarantee that you will need only ONE ITR to
+ * map it. This implies that the PAL code is always aligned
+ * on its size, i.e., the closest matching page size supported
+ * by the TLB. Therefore PAL code is guaranteed never to cross
+	 * a 256MB boundary unless it is bigger than 256MB (very unlikely!).
+ * So for now the following test is enough to determine whether
+ * or not we need a dedicated ITR for the PAL code.
+ */
+ if ((vaddr & mask) == (PAGE_OFFSET & mask)) {
+ printk(__FUNCTION__ " : no need to install ITR for PAL Code\n");
+ continue;
+ }
+
+ printk("CPU %d: mapping PAL code [0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
+ smp_processor_id(), md->phys_addr, md->phys_addr + (md->num_pages << 12),
+ vaddr & mask, (vaddr & mask) + 256*1024*1024);
/*
* Cannot write to CRx with PSR.ic=1
* ITR0/DTR0: used for kernel code/data
* ITR1/DTR1: used by HP simulator
* ITR2/DTR2: map PAL code
- * ITR3/DTR3: used to map PAL calls buffer
*/
ia64_itr(0x1, 2, vaddr & mask,
pte_val(mk_pte_phys(md->phys_addr,
__pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RX))),
- _PAGE_SIZE_4M);
+ _PAGE_SIZE_256M);
local_irq_restore(flags);
ia64_srlz_i ();
}
#endif
efi_map_pal_code();
+
+#ifndef CONFIG_IA64_SOFTSDV_HACKS
+ /*
+ * (Some) SoftSDVs seem to have a problem with this call.
+ * Since it's mostly a performance optimization, just don't do
+ * it for now... --davidm 99/12/6
+ */
+ efi_enter_virtual_mode();
+#endif
+
}
void
mov r13=in0 // set "current" pointer
;;
DO_LOAD_SWITCH_STACK( )
+#ifdef CONFIG_SMP
+ sync.i // ensure "fc"s done by this CPU are visible on other CPUs
+#endif
br.ret.sptk.few rp
END(ia64_switch_to)
data8 sys_setpriority
data8 sys_statfs
data8 sys_fstatfs
- data8 sys_ioperm // 1105
+ data8 ia64_ni_syscall
data8 sys_semget
data8 sys_semop
data8 sys_semctl
#define MB (1024*1024UL)
-#define NUM_MEM_DESCS 3
+#define NUM_MEM_DESCS 2
static char fw_mem[( sizeof(efi_system_table_t)
+ sizeof(efi_runtime_services_t)
md->num_pages = (1*MB) >> 12; /* 1MB (in 4KB pages) */
md->attribute = EFI_MEMORY_WB;
+#if 0
+ /*
+ * XXX bootmem is broken for now... (remember to NUM_MEM_DESCS
+ * if you re-enable this!)
+ */
+
/* descriptor for high memory (>4GB): */
md = &efi_memmap[2];
md->type = EFI_CONVENTIONAL_MEMORY;
md->virt_addr = 0;
md->num_pages = (32*MB) >> 12; /* 32MB (in 4KB pages) */
md->attribute = EFI_MEMORY_WB;
+#endif
bp = id(ZERO_PAGE_ADDR);
bp->efi_systab = __pa(&fw_mem);
* be implemented more efficiently (for example, __switch_to()
* always sets the psr.dfh bit of the task it is switching to).
*/
- addl r12=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r2
+ addl r12=IA64_STK_OFFSET-IA64_PT_REGS_SIZE-16,r2
addl r2=IA64_RBS_OFFSET,r2 // initialize the RSE
mov ar.rsc=r0 // place RSE in enforced lazy mode
;;
EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL_NOVERS(memcpy);
EXPORT_SYMBOL(memmove);
+EXPORT_SYMBOL(memscan);
EXPORT_SYMBOL(strcat);
EXPORT_SYMBOL(strchr);
EXPORT_SYMBOL(strcmp);
EXPORT_SYMBOL(strncat);
EXPORT_SYMBOL(strncmp);
EXPORT_SYMBOL(strncpy);
+EXPORT_SYMBOL(strnlen);
+EXPORT_SYMBOL(strrchr);
EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strtok);
#include <linux/in6.h>
#include <asm/checksum.h>
+/* not coded yet?? EXPORT_SYMBOL(csum_ipv6_magic); */
EXPORT_SYMBOL(csum_partial_copy_nocheck);
+EXPORT_SYMBOL(csum_tcpudp_magic);
+EXPORT_SYMBOL(ip_compute_csum);
+EXPORT_SYMBOL(ip_fast_csum);
+
+#include <asm/io.h>
+EXPORT_SYMBOL(__ia64_memcpy_fromio);
+EXPORT_SYMBOL(__ia64_memcpy_toio);
+EXPORT_SYMBOL(__ia64_memset_c_io);
#include <asm/irq.h>
EXPORT_SYMBOL(enable_irq);
EXPORT_SYMBOL(disable_irq);
+EXPORT_SYMBOL(disable_irq_nosync);
+
+#include <asm/page.h>
+EXPORT_SYMBOL(clear_page);
+
+#include <asm/pci.h>
+EXPORT_SYMBOL(pci_dma_sync_sg);
+EXPORT_SYMBOL(pci_dma_sync_single);
+EXPORT_SYMBOL(pci_map_sg);
+EXPORT_SYMBOL(pci_map_single);
+EXPORT_SYMBOL(pci_unmap_sg);
+EXPORT_SYMBOL(pci_unmap_single);
#include <asm/processor.h>
EXPORT_SYMBOL(cpu_data);
EXPORT_SYMBOL(kernel_thread);
+#include <asm/system.h>
+#ifdef CONFIG_IA64_DEBUG_IRQ
+EXPORT_SYMBOL(last_cli_ip);
+#endif
+
#ifdef CONFIG_SMP
+
+#include <asm/current.h>
#include <asm/hardirq.h>
EXPORT_SYMBOL(synchronize_irq);
+#include <asm/smp.h>
+EXPORT_SYMBOL(smp_call_function);
+
+#include <linux/smp.h>
+EXPORT_SYMBOL(smp_num_cpus);
+
#include <asm/smplock.h>
EXPORT_SYMBOL(kernel_flag);
-#include <asm/system.h>
+/* #include <asm/system.h> */
EXPORT_SYMBOL(__global_sti);
EXPORT_SYMBOL(__global_cli);
EXPORT_SYMBOL(__global_save_flags);
#include <asm/uaccess.h>
EXPORT_SYMBOL(__copy_user);
+EXPORT_SYMBOL(__do_clear_user);
#include <asm/unistd.h>
EXPORT_SYMBOL(__ia64_syscall);
/* from arch/ia64/lib */
+extern void __divsi3(void);
+extern void __udivsi3(void);
+extern void __modsi3(void);
+extern void __umodsi3(void);
extern void __divdi3(void);
extern void __udivdi3(void);
extern void __moddi3(void);
extern void __umoddi3(void);
+EXPORT_SYMBOL_NOVERS(__divsi3);
+EXPORT_SYMBOL_NOVERS(__udivsi3);
+EXPORT_SYMBOL_NOVERS(__modsi3);
+EXPORT_SYMBOL_NOVERS(__umodsi3);
EXPORT_SYMBOL_NOVERS(__divdi3);
EXPORT_SYMBOL_NOVERS(__udivdi3);
EXPORT_SYMBOL_NOVERS(__moddi3);
EXPORT_SYMBOL_NOVERS(__umoddi3);
+
+extern unsigned long ia64_iobase;
+EXPORT_SYMBOL(ia64_iobase);
desc->depth--;
break;
case 0:
- printk("enable_irq() unbalanced from %p\n",
- __builtin_return_address(0));
+ printk("enable_irq() unbalanced from %p\n", (void *) __builtin_return_address(0));
}
spin_unlock_irqrestore(&desc->lock, flags);
}
spinlock_t ivr_read_lock;
#endif
-unsigned long ipi_base_addr = IPI_DEFAULT_BASE_ADDR; /* default base addr of IPI table */
+/* default base addr of IPI table */
+unsigned long ipi_base_addr = (__IA64_UNCACHED_OFFSET | IPI_DEFAULT_BASE_ADDR);
/*
* Legacy IRQ to IA-64 vector translation table. Any vector not in
{
unsigned long ipi_addr;
unsigned long ipi_data;
+ unsigned long phys_cpu_id;
#ifdef CONFIG_ITANIUM_A1_SPECIFIC
unsigned long flags;
#endif
-# define EID 0
+
+#ifdef CONFIG_SMP
+ phys_cpu_id = cpu_physical_id(cpu);
+#else
+ phys_cpu_id = (ia64_get_lid() >> 16) & 0xffff;
+#endif
+
+ /*
+	 * the cpu number is encoded as an 8-bit ID and an 8-bit EID
+ */
ipi_data = (delivery_mode << 8) | (vector & 0xff);
- ipi_addr = ipi_base_addr | ((cpu << 8 | EID) << 4) | ((redirect & 1) << 3);
+ ipi_addr = ipi_base_addr | (phys_cpu_id << 4) | ((redirect & 1) << 3);
#ifdef CONFIG_ITANIUM_A1_SPECIFIC
spin_lock_irqsave(&ivr_read_lock, flags);
* Copyright (C) 1998-2000 Hewlett-Packard Co
* Copyright (C) 1998, 1999 Stephane Eranian <eranian@hpl.hp.com>
* Copyright (C) 1998-2000 David Mosberger <davidm@hpl.hp.com>
+ *
+ * 00/08/23 Asit Mallick <asit.k.mallick@intel.com> TLB handling for SMP
*/
/*
* This file defines the interrupt vector table used by the CPU.
(p7) cmp.eq p6,p7=r17,r0 // was L1 entry NULL?
dep r17=r18,r17,3,(PAGE_SHIFT-3) // compute address of L2 page table entry
;;
-(p7) ld8 r17=[r17] // fetch the L2 entry (may be 0)
+(p7) ld8 r20=[r17] // fetch the L2 entry (may be 0)
shr.u r19=r16,PAGE_SHIFT // shift L3 index into position
;;
-(p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL?
- dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
+(p7) cmp.eq.or.andcm p6,p7=r20,r0 // was L2 entry NULL?
+ dep r21=r19,r20,3,(PAGE_SHIFT-3) // compute address of L3 page table entry
;;
-(p7) ld8 r18=[r17] // read the L3 PTE
+(p7) ld8 r18=[r21] // read the L3 PTE
mov r19=cr.isr // cr.isr bit 0 tells us if this is an insn miss
;;
(p7) tbit.z p6,p7=r18,0 // page present bit cleared?
- mov r21=cr.iha // get the VHPT address that caused the TLB miss
+ mov r22=cr.iha // get the VHPT address that caused the TLB miss
;; // avoid RAW on p7
(p7) tbit.nz.unc p10,p11=r19,32 // is it an instruction TLB miss?
- dep r17=0,r17,0,PAGE_SHIFT // clear low bits to get page address
+ dep r23=0,r20,0,PAGE_SHIFT // clear low bits to get page address
;;
(p10) itc.i r18 // insert the instruction TLB entry
(p11) itc.d r18 // insert the data TLB entry
(p6) br.spnt.few page_fault // handle bad address/page not present (page fault)
- mov cr.ifa=r21
+ mov cr.ifa=r22
// Now compute and insert the TLB entry for the virtual page table.
// We never execute in a page table page so there is no need to set
// the exception deferral bit.
- adds r16=__DIRTY_BITS_NO_ED|_PAGE_PL_0|_PAGE_AR_RW,r17
+ adds r24=__DIRTY_BITS_NO_ED|_PAGE_PL_0|_PAGE_AR_RW,r23
+ ;;
+(p7) itc.d r24
+ ;;
+#ifdef CONFIG_SMP
+ //
+ // Re-check L2 and L3 pagetable. If they changed, we may have received
+ // a ptc.g between reading the pagetable and the "itc". If so,
+ // flush the entry we inserted and retry.
+ //
+ ld8 r25=[r21] // read L3 PTE again
+ ld8 r26=[r17] // read L2 entry again
+ ;;
+	cmp.ne p6,p7=r26,r20		// did the L2 entry change?
+ mov r27=PAGE_SHIFT<<2
+ ;;
+(p6) ptc.l r22,r27 // purge PTE page translation
+(p7)	cmp.ne.or.andcm p6,p7=r25,r18	// did the L3 PTE change?
;;
-(p7) itc.d r16
+(p6) ptc.l r16,r27 // purge translation
+#endif
+
mov pr=r31,-1 // restore predicate registers
rfi
* The speculative access will fail if there is no TLB entry
* for the L3 page table page we're trying to access.
*/
- mov r16=cr.iha // get virtual address of L3 PTE
+ mov r16=cr.ifa // get virtual address
+ mov r19=cr.iha // get virtual address of L3 PTE
;;
- ld8.s r16=[r16] // try to read L3 PTE
+ ld8.s r17=[r19] // try to read L3 PTE
mov r31=pr // save predicates
;;
- tnat.nz p6,p0=r16 // did read succeed?
+ tnat.nz p6,p0=r17 // did read succeed?
(p6) br.cond.spnt.many 1f
;;
- itc.i r16
+ itc.i r17
+ ;;
+#ifdef CONFIG_SMP
+ ld8.s r18=[r19] // try to read L3 PTE again and see if same
+ mov r20=PAGE_SHIFT<<2 // setup page size for purge
;;
+ cmp.eq p6,p7=r17,r18
+ ;;
+(p7) ptc.l r16,r20
+#endif
mov pr=r31,-1
rfi
-1: mov r16=cr.ifa // get address that caused the TLB miss
- ;;
- rsm psr.dt // use physical addressing for data
+#ifdef CONFIG_DISABLE_VHPT
+itlb_fault:
+#endif
+1: rsm psr.dt // use physical addressing for data
mov r19=ar.k7 // get page table base address
shl r21=r16,3 // shift bit 60 into sign bit
shr.u r17=r16,61 // get the region number into r17
(p7) itc.i r18 // insert the instruction TLB entry
(p6) br.spnt.few page_fault // handle bad address/page not present (page fault)
;;
+#ifdef CONFIG_SMP
+ ld8 r19=[r17] // re-read the PTE and check if same
+ ;;
+ cmp.eq p6,p7=r18,r19
+ mov r20=PAGE_SHIFT<<2
+ ;;
+(p7)	ptc.l r16,r20			// PTE changed, purge translation
+#endif
+
mov pr=r31,-1 // restore predicate registers
rfi
* The speculative access will fail if there is no TLB entry
* for the L3 page table page we're trying to access.
*/
- mov r16=cr.iha // get virtual address of L3 PTE
+ mov r16=cr.ifa // get virtual address
+ mov r19=cr.iha // get virtual address of L3 PTE
;;
- ld8.s r16=[r16] // try to read L3 PTE
+ ld8.s r17=[r19] // try to read L3 PTE
mov r31=pr // save predicates
;;
- tnat.nz p6,p0=r16 // did read succeed?
+ tnat.nz p6,p0=r17 // did read succeed?
(p6) br.cond.spnt.many 1f
;;
- itc.d r16
+ itc.d r17
+ ;;
+#ifdef CONFIG_SMP
+ ld8.s r18=[r19] // try to read L3 PTE again and see if same
+ mov r20=PAGE_SHIFT<<2 // setup page size for purge
;;
+ cmp.eq p6,p7=r17,r18
+ ;;
+(p7) ptc.l r16,r20
+#endif
mov pr=r31,-1
rfi
-1: mov r16=cr.ifa // get address that caused the TLB miss
- ;;
- rsm psr.dt // use physical addressing for data
+#ifdef CONFIG_DISABLE_VHPT
+dtlb_fault:
+#endif
+1: rsm psr.dt // use physical addressing for data
mov r19=ar.k7 // get page table base address
shl r21=r16,3 // shift bit 60 into sign bit
shr.u r17=r16,61 // get the region number into r17
(p7)	itc.d r18			// insert the data TLB entry
(p6) br.spnt.few page_fault // handle bad address/page not present (page fault)
;;
+#ifdef CONFIG_SMP
+ ld8 r19=[r17] // re-read the PTE and check if same
+ ;;
+ cmp.eq p6,p7=r18,r19
+ mov r20=PAGE_SHIFT<<2
+ ;;
+(p7)	ptc.l r16,r20			// PTE changed, purge translation
+#endif
mov pr=r31,-1 // restore predicate registers
rfi
/////////////////////////////////////////////////////////////////////////////////////////
// 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
mov r16=cr.ifa // get address that caused the TLB miss
+#ifdef CONFIG_DISABLE_VHPT
+ mov r31=pr
+ ;;
+ shr.u r21=r16,61 // get the region number into r21
+ ;;
+ cmp.gt p6,p0=6,r21 // user mode
+(p6) br.cond.dptk.many itlb_fault
+ ;;
+ mov pr=r31,-1
+#endif
movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RX
;;
shr.u r18=r16,57 // move address bit 61 to bit 4
movl r17=__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RW
mov r20=cr.isr
mov r21=cr.ipsr
- mov r19=pr
+ mov r31=pr
+ ;;
+#ifdef CONFIG_DISABLE_VHPT
+	shr.u r22=r16,61		// get the region number into r22
;;
+ cmp.gt p8,p0=6,r22 // user mode
+(p8) br.cond.dptk.many dtlb_fault
+#endif
tbit.nz p6,p7=r20,IA64_ISR_SP_BIT // is speculation bit on?
shr.u r18=r16,57 // move address bit 61 to bit 4
dep r16=0,r16,IA64_MAX_PHYS_BITS,(64-IA64_MAX_PHYS_BITS) // clear ed & reserved bits
(p6) mov cr.ipsr=r21
;;
(p7) itc.d r16 // insert the TLB entry
- mov pr=r19,-1
+ mov pr=r31,-1
rfi
;;
// a nested TLB miss hit where we look up the physical address of the L3 PTE
// and then continue at label 1 below.
//
+#ifndef CONFIG_SMP
mov r16=cr.ifa // get the address that caused the fault
movl r30=1f // load continuation point in case of nested fault
;;
;;
st8 [r17]=r18 // store back updated PTE
itc.d r18 // install updated PTE
+#else
+ mov r16=cr.ifa // get the address that caused the fault
+ movl r30=1f // load continuation point in case of nested fault
+ ;;
+ thash r17=r16 // compute virtual address of L3 PTE
+ mov r28=ar.ccv // save ar.ccv
+ mov r29=b0 // save b0 in case of nested fault
+ mov r27=pr
+ ;;
+1: ld8 r18=[r17]
+ ;; // avoid RAW on r18
+ mov ar.ccv=r18 // set compare value for cmpxchg
+ or r25=_PAGE_D,r18 // set the dirty bit
+ ;;
+ cmpxchg8.acq r26=[r17],r25,ar.ccv
+ mov r24=PAGE_SHIFT<<2
+ ;;
+ cmp.eq p6,p7=r26,r18
+ ;;
+(p6) itc.d r25 // install updated PTE
+ ;;
+ ld8 r18=[r17] // read PTE again
+ ;;
+	cmp.eq p6,p7=r18,r25		// is it the same as the newly installed PTE?
+ ;;
+(p7) ptc.l r16,r24
+ mov b0=r29 // restore b0
+ mov ar.ccv=r28
+ mov pr=r27,-1
+#endif
rfi
.align 1024
(p6) mov r16=r18 // if so, use cr.iip instead of cr.ifa
mov pr=r31,-1
#endif /* CONFIG_ITANIUM */
+
+#ifndef CONFIG_SMP
movl r30=1f // load continuation point in case of nested fault
;;
thash r17=r16 // compute virtual address of L3 PTE
;;
st8 [r17]=r18 // store back updated PTE
itc.i r18 // install updated PTE
+#else
+ movl r30=1f // load continuation point in case of nested fault
+ ;;
+ thash r17=r16 // compute virtual address of L3 PTE
+ mov r28=ar.ccv // save ar.ccv
+	mov r29=b0			// save b0 in case of nested fault
+ mov r27=pr
+ ;;
+1: ld8 r18=[r17]
+#if defined(CONFIG_IA32_SUPPORT) && \
+ (defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) || defined(CONFIG_ITANIUM_B0_SPECIFIC))
+ //
+ // Erratum 85 (Access bit fault could be reported before page not present fault)
+	// If the PTE indicates the page is not present, then just turn this into a
+ // page fault.
+ //
+ ;;
+ tbit.nz p6,p0=r18,0 // page present bit set?
+(p6) br.cond.sptk 1f
+ ;; // avoid WAW on p6
+ mov pr=r27,-1
+ br.cond.sptk page_fault // page wasn't present
+1:
+#else
+ ;; // avoid RAW on r18
+#endif
+ mov ar.ccv=r18 // set compare value for cmpxchg
+ or r25=_PAGE_A,r18 // set the accessed bit
+ ;;
+ cmpxchg8.acq r26=[r17],r25,ar.ccv
+ mov r24=PAGE_SHIFT<<2
+ ;;
+ cmp.eq p6,p7=r26,r18
+ ;;
+(p6) itc.i r25 // install updated PTE
+ ;;
+ ld8 r18=[r17] // read PTE again
+ ;;
+	cmp.eq p6,p7=r18,r25		// is it the same as the newly installed PTE?
+ ;;
+(p7) ptc.l r16,r24
+ mov b0=r29 // restore b0
+ mov ar.ccv=r28
+ mov pr=r27,-1
+#endif
rfi
.align 1024
/////////////////////////////////////////////////////////////////////////////////////////
// 0x2800 Entry 10 (size 64 bundles) Data Access-bit (15,55)
// Like Entry 8, except for data access
+#ifndef CONFIG_SMP
mov r16=cr.ifa // get the address that caused the fault
movl r30=1f // load continuation point in case of nested fault
;;
;;
st8 [r17]=r18 // store back updated PTE
itc.d r18 // install updated PTE
+#else
+ mov r16=cr.ifa // get the address that caused the fault
+ movl r30=1f // load continuation point in case of nested fault
+ ;;
+ thash r17=r16 // compute virtual address of L3 PTE
+ mov r28=ar.ccv // save ar.ccv
+ mov r29=b0 // save b0 in case of nested fault
+ mov r27=pr
+ ;;
+1: ld8 r18=[r17]
+ ;; // avoid RAW on r18
+ mov ar.ccv=r18 // set compare value for cmpxchg
+	or r25=_PAGE_A,r18		// set the accessed bit
+ ;;
+ cmpxchg8.acq r26=[r17],r25,ar.ccv
+ mov r24=PAGE_SHIFT<<2
+ ;;
+ cmp.eq p6,p7=r26,r18
+ ;;
+(p6) itc.d r25 // install updated PTE
+ ;;
+ ld8 r18=[r17] // read PTE again
+ ;;
+	cmp.eq p6,p7=r18,r25		// is it the same as the newly installed PTE?
+ ;;
+(p7) ptc.l r16,r24
+ mov b0=r29 // restore b0
+ mov ar.ccv=r28
+ mov pr=r27,-1
+#endif
rfi
.align 1024
u64 ia64_mca_bspstore[1024];
u64 ia64_init_stack[INIT_TASK_SIZE] __attribute__((aligned(16)));
-#if defined(SAL_MPINIT_WORKAROUND) && !defined(CONFIG_SMP)
-int bootstrap_processor = -1;
-#endif
-
static void ia64_mca_cmc_vector_setup(int enable,
int_vector_t cmc_vector);
static void ia64_mca_wakeup_ipi_wait(void);
IA64_MCA_DEBUG("ia64_mca_init : begin\n");
-#if defined(SAL_MPINIT_WORKAROUND) && !defined(CONFIG_SMP)
- /* XXX -- workaround for SAL bug for running on MP system, but UP kernel */
-
- bootstrap_processor = hard_smp_processor_id();
-#endif
-
/* Clear the Rendez checkin flag for all cpus */
for(i = 0 ; i < IA64_MAXCPUS; i++)
ia64_mc_info.imi_rendez_checkin[i] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE;
IA64_MCA_DEBUG("ia64_mca_init : correctable mca vector setup done\n");
ia64_mc_info.imi_mca_handler = __pa(ia64_os_mca_dispatch);
- ia64_mc_info.imi_mca_handler_size =
- __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch);
+ /*
+ * XXX - disable SAL checksum by setting size to 0; should be
+ * __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch);
+ */
+ ia64_mc_info.imi_mca_handler_size = 0;
/* Register the os mca handler with SAL */
if (ia64_sal_set_vectors(SAL_VECTOR_OS_MCA,
ia64_mc_info.imi_mca_handler,
IA64_MCA_DEBUG("ia64_mca_init : registered os mca handler with SAL\n");
+ /*
+	 * XXX - disable SAL checksum by setting size to 0; should be
+ * IA64_INIT_HANDLER_SIZE
+ */
ia64_mc_info.imi_monarch_init_handler = __pa(mon_init_ptr->fp);
- ia64_mc_info.imi_monarch_init_handler_size = IA64_INIT_HANDLER_SIZE;
+ ia64_mc_info.imi_monarch_init_handler_size = 0;
ia64_mc_info.imi_slave_init_handler = __pa(slave_init_ptr->fp);
- ia64_mc_info.imi_slave_init_handler_size = IA64_INIT_HANDLER_SIZE;
+ ia64_mc_info.imi_slave_init_handler_size = 0;
IA64_MCA_DEBUG("ia64_mca_init : os init handler at %lx\n",ia64_mc_info.imi_monarch_init_handler);
int cpu;
/* Clear the Rendez checkin flag for all cpus */
- for(cpu = 0 ; cpu < IA64_MAXCPUS; cpu++)
+ for(cpu = 0 ; cpu < smp_num_cpus; cpu++)
if (ia64_mc_info.imi_rendez_checkin[cpu] == IA64_MCA_RENDEZ_CHECKIN_DONE)
ia64_mca_wakeup(cpu);
void
ia64_mca_rendez_int_handler(int rendez_irq, void *arg, struct pt_regs *ptregs)
{
- int flags;
+ int flags, cpu = 0;
/* Mask all interrupts */
save_and_cli(flags);
- ia64_mc_info.imi_rendez_checkin[ia64_get_cpuid(0)] = IA64_MCA_RENDEZ_CHECKIN_DONE;
+#ifdef CONFIG_SMP
+ cpu = cpu_logical_id(hard_smp_processor_id());
+#endif
+ ia64_mc_info.imi_rendez_checkin[cpu] = IA64_MCA_RENDEZ_CHECKIN_DONE;
/* Register with the SAL monarch that the slave has
* reached SAL
*/
.proc ia64_monarch_init_handler
ia64_monarch_init_handler:
-#if defined(SAL_MPINIT_WORKAROUND)
+#if defined(CONFIG_SMP) && defined(SAL_MPINIT_WORKAROUND)
//
// work around SAL bug that sends all processors to monarch entry
//
- .global bootstrap_processor
-
- movl r21=24
- movl r20=16
mov r17=cr.lid
- movl r18=bootstrap_processor
+ movl r18=__cpu_physical_id
;;
- dep r18=0,r18,61,3 // convert bsp to physical address
+ dep r18=0,r18,61,3 // convert to physical address
;;
- shr r19=r17,r20
- shr r22=r17,r21
+ shr.u r17=r17,16
ld4 r18=[r18] // get the BSP ID
;;
- and r19=0xf, r19
- and r22=0xf, r22
- ;;
- shl r19=r19,8 // get them in the right order
- ;;
- or r22=r22,r19 // combine EID and LID
+ dep r17=0,r17,16,48
;;
- cmp.eq p6,p7=r22,r18 // Am I the BSP ?
-(p7) br.cond.spnt slave_init_spin_me
+ cmp4.ne p6,p0=r17,r18 // Am I the BSP ?
+(p6) br.cond.spnt slave_init_spin_me
;;
#endif
#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs,) STOPS
#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs, mov r15=r19) STOPS
#define SAVE_MIN DO_SAVE_MIN(mov rCRIFS=r0,) STOPS
-
-#ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
-# define STOPS nop.i 0x0;; nop.i 0x0;; nop.i 0x0;;
-#else
-# define STOPS
-#endif
-
-#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs,) STOPS
-#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover;; mov rCRIFS=cr.ifs, mov r15=r19) STOPS
-#define SAVE_MIN DO_SAVE_MIN(mov rCRIFS=r0,) STOPS
*
* in0 Pointer to struct ia64_pal_retval
* in1 Index of PAL service
- * in2 - in4 Remaning PAL arguments
+ * in2 - in4 Remaining PAL arguments
+ * in5 1 ==> clear psr.ic, 0 ==> don't clear psr.ic
*
*/
GLOBAL_ENTRY(ia64_pal_call_static)
}
;;
ld8 loc2 = [loc2] // loc2 <- entry point
- mov r30 = in2
- mov r31 = in3
+ tbit.nz p6,p7 = in5, 0
+ adds r8 = 1f-1b,r8
;;
mov loc3 = psr
mov loc0 = rp
UNW(.body)
- adds r8 = 1f-1b,r8
- ;;
- rsm psr.i
+ mov r30 = in2
+
+(p6) rsm psr.i | psr.ic
+ mov r31 = in3
mov b7 = loc2
+
+(p7) rsm psr.i
+ ;;
+(p6) srlz.i
mov rp = r8
- ;;
br.cond.sptk.few b7
1: mov psr.l = loc3
mov ar.pfs = loc1
* Copyright (C) 2000 Stephane Eranian <eranian@hpl.hp.com>
*
* 05/26/2000 S.Eranian initial release
+ * 08/21/2000 S.Eranian updated to July 2000 PAL specs
*
* ISSUES:
- * - because of some PAL bugs, some calls return invalid results or
- * are empty for now.
- * - remove hack to avoid problem with <= 256M RAM for itr.
+ * - as of 2.2.9/2.2.12, the following values are still wrong
+ * PAL_VM_SUMMARY: key & rid sizes
*/
-#include <linux/config.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/init.h>
#define RSE_HINTS_COUNT (sizeof(rse_hints)/sizeof(const char *))
/*
- * The current revision of the Volume 2 of
+ * The current revision of the Volume 2 (July 2000) of
* IA-64 Architecture Software Developer's Manual is wrong.
* Table 4-10 has invalid information concerning the ma field:
* Correct table is:
"NaTPage" /* 111 */
};
-
-
-/*
- * Allocate a buffer suitable for calling PAL code in Virtual mode
- *
- * The documentation (PAL2.6) allows DTLB misses on the buffer. So
- * using the TC is enough, no need to pin the entry.
- *
- * We allocate a kernel-sized page (at least 4KB). This is enough to
- * hold any possible reply.
- */
-static inline void *
-get_palcall_buffer(void)
-{
- void *tmp;
-
- tmp = (void *)__get_free_page(GFP_KERNEL);
- if (tmp == 0) {
- printk(KERN_ERR __FUNCTION__" : can't get a buffer page\n");
- }
- return tmp;
-}
-
-/*
- * Free a palcall buffer allocated with the previous call
- */
-static inline void
-free_palcall_buffer(void *addr)
-{
- __free_page(addr);
-}
-
/*
* Take a 64bit vector and produces a string such that
* if bit n is set then 2^n in clear text is generated. The adjustment
{
s64 status;
char *p = page;
- pal_power_mgmt_info_u_t *halt_info;
+ u64 halt_info_buffer[8];
+ pal_power_mgmt_info_u_t *halt_info =(pal_power_mgmt_info_u_t *)halt_info_buffer;
int i;
- halt_info = get_palcall_buffer();
- if (halt_info == 0) return 0;
-
status = ia64_pal_halt_info(halt_info);
- if (status != 0) {
- free_palcall_buffer(halt_info);
- return 0;
- }
+ if (status != 0) return 0;
for (i=0; i < 8 ; i++ ) {
if (halt_info[i].pal_power_mgmt_info_s.im == 1) {
p += sprintf(p,"Power level %d: not implemented\n",i);
}
}
-
- free_palcall_buffer(halt_info);
-
return p - page;
}
"RSE load/store hints : %ld (%s)\n",
phys_stacked,
hints.ph_data,
- hints.ph_data < RSE_HINTS_COUNT ? rse_hints[hints.ph_data]: "(??)");
+ hints.ph_data < RSE_HINTS_COUNT ? rse_hints[hints.ph_data]: "(\?\?)");
if (ia64_pal_debug_info(&iregs, &dregs)) return 0;
"Enable Half Transfer",
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
- NULL, NULL, NULL, NULL, NULL, NULL,
+ NULL, NULL, NULL, NULL,
+ "Enable Cache Line Repl. Exclusive",
+ "Enable Cache Line Repl. Shared",
"Disable Transaction Queuing",
"Disable Reponse Error Checking",
"Disable Bus Error Checking",
perfmon_info(char *page)
{
char *p = page;
- u64 *pm_buffer;
+ u64 pm_buffer[16];
pal_perf_mon_info_u_t pm_info;
- pm_buffer = (u64 *)get_palcall_buffer();
- if (pm_buffer == 0) return 0;
-
- if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0) {
- free_palcall_buffer(pm_buffer);
- return 0;
- }
+ if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0) return 0;
#ifdef IA64_PAL_PERF_MON_INFO_BUG
/*
p += sprintf(p, "\n");
- free_palcall_buffer(pm_buffer);
-
return p - page;
}
io_tlb_index = 0;
io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
- printk("Placing software IO TLB between 0x%p - 0x%p\n", io_tlb_start, io_tlb_end);
+ printk("Placing software IO TLB between 0x%p - 0x%p\n",
+ (void *) io_tlb_start, (void *) io_tlb_end);
}
/*
* Once the device is given the dma address, the device owns this memory
* until either pci_unmap_single or pci_dma_sync_single is performed.
*/
-extern inline dma_addr_t
+dma_addr_t
pci_map_single (struct pci_dev *hwdev, void *ptr, size_t size, int direction)
{
if (direction == PCI_DMA_NONE)
* After this call, reads by the cpu to the buffer are guarenteed to see
* whatever the device wrote there.
*/
-extern inline void
+void
pci_unmap_single (struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size, int direction)
{
if (direction == PCI_DMA_NONE)
* Device ownership issues as mentioned above for pci_map_single are
* the same here.
*/
-extern inline int
+int
pci_map_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
{
if (direction == PCI_DMA_NONE)
* Again, cpu read rules concerning calls here are the same as for
* pci_unmap_single() above.
*/
-extern inline void
+void
pci_unmap_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
{
if (direction == PCI_DMA_NONE)
* next point you give the PCI dma address back to the card, the
* device again owns the buffer.
*/
-extern inline void
+void
pci_dma_sync_single (struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size, int direction)
{
if (direction == PCI_DMA_NONE)
* The same as pci_dma_sync_single but for a scatter-gather list,
* same rules and usage.
*/
-extern inline void
+void
pci_dma_sync_sg (struct pci_dev *hwdev, struct scatterlist *sg, int nelems, int direction)
{
if (direction == PCI_DMA_NONE)
#include <linux/config.h>
#include <linux/kernel.h>
+#include <linux/init.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/smp_lock.h>
+#include <linux/proc_fs.h>
+#include <linux/ptrace.h>
#include <asm/errno.h>
#include <asm/hw_irq.h>
#include <asm/processor.h>
#include <asm/system.h>
#include <asm/uaccess.h>
+#include <asm/pal.h>
/* Long blurb on how this works:
* We set dcr.pp, psr.pp, and the appropriate pmc control values with
#ifdef CONFIG_PERFMON
#define MAX_PERF_COUNTER 4 /* true for Itanium, at least */
+#define PMU_FIRST_COUNTER 4 /* first generic counter */
+
#define WRITE_PMCS_AND_START 0xa0
#define WRITE_PMCS 0xa1
#define READ_PMDS 0xa2
#define STOP_PMCS 0xa3
-#define IA64_COUNTER_MASK 0xffffffffffffff6fL
-#define PERF_OVFL_VAL 0xffffffffL
-volatile int used_by_system;
-struct perfmon_counter {
- unsigned long data;
- unsigned long counter_num;
-};
+/*
+ * this structure needs to be enhanced
+ */
+typedef struct {
+ unsigned long pmu_reg_data; /* generic PMD register */
+ unsigned long pmu_reg_num; /* which register number */
+} perfmon_reg_t;
+
+/*
+ * This structure is initialized at boot time and contains
+ * a description of the PMU's main characteristics as indicated
+ * by PAL
+ */
+typedef struct {
+ unsigned long perf_ovfl_val; /* overflow value for generic counters */
+ unsigned long max_pmc; /* highest PMC */
+ unsigned long max_pmd; /* highest PMD */
+ unsigned long max_counters; /* number of generic counter pairs (PMC/PMD) */
+} pmu_config_t;
+
+/* XXX will go static when ptrace() is cleaned */
+unsigned long perf_ovfl_val; /* overflow value for generic counters */
+
+static pmu_config_t pmu_conf;
+/*
+ * could optimize to avoid cache conflicts in SMP
+ */
unsigned long pmds[NR_CPUS][MAX_PERF_COUNTER];
asmlinkage unsigned long
-sys_perfmonctl (int cmd1, int cmd2, void *ptr)
+sys_perfmonctl (int cmd, int count, void *ptr, long arg4, long arg5, long arg6, long arg7, long arg8, long stack)
{
- struct perfmon_counter tmp, *cptr = ptr;
- unsigned long cnum, dcr, flags;
- struct perf_counter;
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ perfmon_reg_t tmp, *cptr = ptr;
+ unsigned long cnum;
int i;
- switch (cmd1) {
+ switch (cmd) {
case WRITE_PMCS: /* Writes to PMC's and clears PMDs */
case WRITE_PMCS_AND_START: /* Also starts counting */
- if (cmd2 <= 0 || cmd2 > MAX_PERF_COUNTER - used_by_system)
- return -EINVAL;
-
- if (!access_ok(VERIFY_READ, cptr, sizeof(struct perf_counter)*cmd2))
+		if (!access_ok(VERIFY_READ, cptr, sizeof(perfmon_reg_t)*count))
return -EFAULT;
- current->thread.flags |= IA64_THREAD_PM_VALID;
-		for (i = 0; i < cmd2; i++, cptr++) {
+		for (i = 0; i < count; i++, cptr++) {
copy_from_user(&tmp, cptr, sizeof(tmp));
- /* XXX need to check validity of counter_num and perhaps data!! */
- if (tmp.counter_num < 4
- || tmp.counter_num >= 4 + MAX_PERF_COUNTER - used_by_system)
- return -EFAULT;
-
- ia64_set_pmc(tmp.counter_num, tmp.data);
- ia64_set_pmd(tmp.counter_num, 0);
- pmds[smp_processor_id()][tmp.counter_num - 4] = 0;
+
+ /* XXX need to check validity of pmu_reg_num and perhaps data!! */
+
+ if (tmp.pmu_reg_num > pmu_conf.max_pmc || tmp.pmu_reg_num == 0) return -EFAULT;
+
+ ia64_set_pmc(tmp.pmu_reg_num, tmp.pmu_reg_data);
+
+ /* to go away */
+ if (tmp.pmu_reg_num >= PMU_FIRST_COUNTER && tmp.pmu_reg_num < PMU_FIRST_COUNTER+pmu_conf.max_counters) {
+ ia64_set_pmd(tmp.pmu_reg_num, 0);
+ pmds[smp_processor_id()][tmp.pmu_reg_num - PMU_FIRST_COUNTER] = 0;
+
+ printk(__FUNCTION__" setting PMC/PMD[%ld] es=0x%lx pmd[%ld]=%lx\n", tmp.pmu_reg_num, (tmp.pmu_reg_data>>8) & 0x7f, tmp.pmu_reg_num, ia64_get_pmd(tmp.pmu_reg_num));
+ } else
+ printk(__FUNCTION__" setting PMC[%ld]=0x%lx\n", tmp.pmu_reg_num, tmp.pmu_reg_data);
}
- if (cmd1 == WRITE_PMCS_AND_START) {
+ if (cmd == WRITE_PMCS_AND_START) {
+#if 0
+/* irrelevant with user monitors */
local_irq_save(flags);
+
dcr = ia64_get_dcr();
dcr |= IA64_DCR_PP;
ia64_set_dcr(dcr);
+
local_irq_restore(flags);
+#endif
+
ia64_set_pmc(0, 0);
+
+ /* will start monitoring right after rfi */
+ ia64_psr(regs)->up = 1;
}
+ /*
+ * mark the state as valid.
+ * this will trigger save/restore at context switch
+ */
+ current->thread.flags |= IA64_THREAD_PM_VALID;
break;
case READ_PMDS:
- if (cmd2 <= 0 || cmd2 > MAX_PERF_COUNTER - used_by_system)
+ if (count <= 0 || count > MAX_PERF_COUNTER)
return -EINVAL;
- if (!access_ok(VERIFY_WRITE, cptr, sizeof(struct perf_counter)*cmd2))
+		if (!access_ok(VERIFY_WRITE, cptr, sizeof(perfmon_reg_t)*count))
return -EFAULT;
/* This looks shady, but IMHO this will work fine. This is
* with the interrupt handler. See explanation in the
* following comment.
*/
-
+#if 0
+/* irrelevant with user monitors */
local_irq_save(flags);
__asm__ __volatile__("rsm psr.pp\n");
dcr = ia64_get_dcr();
dcr &= ~IA64_DCR_PP;
ia64_set_dcr(dcr);
local_irq_restore(flags);
-
+#endif
/*
* We cannot write to pmc[0] to stop counting here, as
* that particular instruction might cause an overflow
* when we re-enabled interrupts. When I muck with dcr,
* is the irq_save/restore needed?
*/
- for (i = 0, cnum = 4;i < cmd2; i++, cnum++, cptr++) {
- tmp.data = (pmds[smp_processor_id()][i]
- + (ia64_get_pmd(cnum) & PERF_OVFL_VAL));
- tmp.counter_num = cnum;
- if (copy_to_user(cptr, &tmp, sizeof(tmp)))
- return -EFAULT;
- //put_user(pmd, &cptr->data);
+
+
+ /* XXX: This needs to change to read more than just the counters */
+ for (i = 0, cnum = PMU_FIRST_COUNTER;i < count; i++, cnum++, cptr++) {
+
+ tmp.pmu_reg_data = (pmds[smp_processor_id()][i]
+ + (ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val));
+
+ tmp.pmu_reg_num = cnum;
+
+ if (copy_to_user(cptr, &tmp, sizeof(tmp))) return -EFAULT;
}
+#if 0
+/* irrelevant with user monitors */
local_irq_save(flags);
__asm__ __volatile__("ssm psr.pp");
dcr = ia64_get_dcr();
dcr |= IA64_DCR_PP;
ia64_set_dcr(dcr);
local_irq_restore(flags);
+#endif
break;
case STOP_PMCS:
ia64_set_pmc(0, 1);
ia64_srlz_d();
- for (i = 0; i < MAX_PERF_COUNTER - used_by_system; ++i)
+ for (i = 0; i < MAX_PERF_COUNTER; ++i)
ia64_set_pmc(4+i, 0);
- if (!used_by_system) {
- local_irq_save(flags);
- dcr = ia64_get_dcr();
- dcr &= ~IA64_DCR_PP;
- ia64_set_dcr(dcr);
- local_irq_restore(flags);
- }
+#if 0
+/* irrelevant with user monitors */
+ local_irq_save(flags);
+ dcr = ia64_get_dcr();
+ dcr &= ~IA64_DCR_PP;
+ ia64_set_dcr(dcr);
+ local_irq_restore(flags);
+ ia64_psr(regs)->up = 0;
+#endif
+
current->thread.flags &= ~(IA64_THREAD_PM_VALID);
+
break;
default:
unsigned long mask, i, cnum, val;
mask = ia64_get_pmc(0) >> 4;
- for (i = 0, cnum = 4; i < MAX_PERF_COUNTER - used_by_system; cnum++, i++, mask >>= 1) {
- val = 0;
+ for (i = 0, cnum = PMU_FIRST_COUNTER ; i < pmu_conf.max_counters; cnum++, i++, mask >>= 1) {
+
+
+ val = mask & 0x1 ? pmu_conf.perf_ovfl_val + 1 : 0;
+
if (mask & 0x1)
- val += PERF_OVFL_VAL + 1;
+ printk(__FUNCTION__ " PMD%ld overflowed pmd=%lx pmod=%lx\n", cnum, ia64_get_pmd(cnum), pmds[smp_processor_id()][i]);
+
/* since we got an interrupt, might as well clear every pmd. */
- val += ia64_get_pmd(cnum) & PERF_OVFL_VAL;
+ val += ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val;
+
+ printk(__FUNCTION__ " adding val=%lx to pmod[%ld]=%lx \n", val, i, pmds[smp_processor_id()][i]);
+
pmds[smp_processor_id()][i] += val;
+
ia64_set_pmd(cnum, 0);
}
}
name: "perfmon"
};
-void
+static int
+perfmon_proc_info(char *page)
+{
+ char *p = page;
+ u64 pmc0 = ia64_get_pmc(0);
+
+ p += sprintf(p, "PMC[0]=%lx\n", pmc0);
+
+ return p - page;
+}
+
+static int
+perfmon_read_entry(char *page, char **start, off_t off, int count, int *eof, void *data)
+{
+ int len = perfmon_proc_info(page);
+
+ if (len <= off+count) *eof = 1;
+
+ *start = page + off;
+ len -= off;
+
+ if (len>count) len = count;
+ if (len<0) len = 0;
+
+ return len;
+}
+
+static struct proc_dir_entry *perfmon_dir;
+
+void __init
perfmon_init (void)
{
+ pal_perf_mon_info_u_t pm_info;
+ u64 pm_buffer[16];
+ s64 status;
+
irq_desc[PERFMON_IRQ].status |= IRQ_PER_CPU;
irq_desc[PERFMON_IRQ].handler = &irq_type_ia64_sapic;
setup_irq(PERFMON_IRQ, &perfmon_irqaction);
ia64_set_pmv(PERFMON_IRQ);
ia64_srlz_d();
- printk("Initialized perfmon vector to %u\n",PERFMON_IRQ);
+
+ printk("perfmon: Initialized vector to %u\n",PERFMON_IRQ);
+
+ if ((status=ia64_pal_perf_mon_info(pm_buffer, &pm_info)) != 0) {
+ printk(__FUNCTION__ " pal call failed (%ld)\n", status);
+ return;
+ }
+ pmu_conf.perf_ovfl_val = perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1;
+
+ /* XXX need to use PAL instead */
+ pmu_conf.max_pmc = 13;
+ pmu_conf.max_pmd = 17;
+ pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic;
+
+ printk("perfmon: Counters are %d bits\n", pm_info.pal_perf_mon_info_s.width);
+ printk("perfmon: Maximum counter value 0x%lx\n", pmu_conf.perf_ovfl_val);
+
+ /*
+ * for now here for debug purposes
+ */
+ perfmon_dir = create_proc_read_entry ("perfmon", 0, 0, perfmon_read_entry, NULL);
}
void
ia64_set_pmc(0, 1);
ia64_srlz_d();
- for (i=0; i< IA64_NUM_PM_REGS - used_by_system ; i++) {
- t->pmd[i] = ia64_get_pmd(4+i);
+ /*
+	 * XXX: this will need to be extended beyond just counters
+ */
+ for (i=0; i< IA64_NUM_PM_REGS; i++) {
+ t->pmd[i] = ia64_get_pmd(4+i);
t->pmod[i] = pmds[smp_processor_id()][i];
- t->pmc[i] = ia64_get_pmc(4+i);
+ t->pmc[i] = ia64_get_pmc(4+i);
}
}
{
int i;
- for (i=0; i< IA64_NUM_PM_REGS - used_by_system ; i++) {
+ /*
+	 * XXX: this will need to be extended beyond just counters
+ */
+ for (i=0; i< IA64_NUM_PM_REGS ; i++) {
ia64_set_pmd(4+i, t->pmd[i]);
pmds[smp_processor_id()][i] = t->pmod[i];
ia64_set_pmc(4+i, t->pmc[i]);
#else /* !CONFIG_PERFMON */
asmlinkage unsigned long
-sys_perfmonctl (int cmd1, int cmd2, void *ptr)
+sys_perfmonctl (int cmd, int count, void *ptr)
{
return -ENOSYS;
}
* call behavior where scratch registers are preserved across
* system calls (unless used by the system call itself).
*/
-# define THREAD_FLAGS_TO_CLEAR (IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID)
+# define THREAD_FLAGS_TO_CLEAR (IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID \
+ | IA64_THREAD_PM_VALID)
# define THREAD_FLAGS_TO_SET 0
p->thread.flags = ((current->thread.flags & ~THREAD_FLAGS_TO_CLEAR)
| THREAD_FLAGS_TO_SET);
if (ia64_peek(pt, current, addr, &val) == 0)
access_process_vm(current, addr, &val, sizeof(val), 1);
+ /*
+ * coredump format:
+ * r0-r31
+ * NaT bits (for r0-r31; bit N == 1 iff rN is a NaT)
+ * predicate registers (p0-p63)
+ * b0-b7
+ * ip cfm user-mask
+ * ar.rsc ar.bsp ar.bspstore ar.rnat
+ * ar.ccv ar.unat ar.fpsr ar.pfs ar.lc ar.ec
+ */
+
/* r0 is zero */
for (i = 1, mask = (1UL << i); i < 32; ++i) {
unw_get_gr(info, i, &dst[i], &nat);
void
do_dump_fpu (struct unw_frame_info *info, void *arg)
{
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
elf_fpreg_t *dst = arg;
int i;
for (i = 2; i < 32; ++i)
unw_get_fr(info, i, dst + i);
- if ((fpu_owner == current) || (current->thread.flags & IA64_THREAD_FPH_VALID)) {
- ia64_sync_fph(current);
+ ia64_flush_fph(current);
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0)
memcpy(dst + 32, current->thread.fph, 96*16);
- }
}
#endif /* CONFIG_IA64_NEW_UNWIND */
unw_init_running(do_dump_fpu, dst);
#else
struct switch_stack *sw = ((struct switch_stack *) pt) - 1;
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
memset(dst, 0, sizeof (dst)); /* don't leak any "random" bits */
dst[8] = pt->f8; dst[9] = pt->f9;
memcpy(dst + 10, &sw->f10, 22*16); /* f10-f31 are contiguous */
- if ((fpu_owner == current) || (current->thread.flags & IA64_THREAD_FPH_VALID)) {
- if (fpu_owner == current) {
- __ia64_save_fpu(current->thread.fph);
- }
+ ia64_flush_fph(current);
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0)
memcpy(dst + 32, current->thread.fph, 96*16);
- }
#endif
return 1; /* f0-f31 are always valid so we always return 1 */
}
kernel_thread (int (*fn)(void *), void *arg, unsigned long flags)
{
struct task_struct *parent = current;
- int result;
+ int result, tid;
- clone(flags | CLONE_VM, 0);
+ tid = clone(flags | CLONE_VM, 0);
if (parent != current) {
result = (*fn)(arg);
_exit(result);
}
- return 0; /* parent: just return */
+ return tid;
}
/*
/* drop floating-point and debug-register state if it exists: */
current->thread.flags &= ~(IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID);
- if (ia64_get_fpu_owner() == current) {
+#ifndef CONFIG_SMP
+ if (ia64_get_fpu_owner() == current)
ia64_set_fpu_owner(0);
- }
+#endif
}
/*
void
exit_thread (void)
{
- if (ia64_get_fpu_owner() == current) {
+#ifndef CONFIG_SMP
+ if (ia64_get_fpu_owner() == current)
ia64_set_fpu_owner(0);
+#endif
+#ifdef CONFIG_PERFMON
+ /* stop monitoring */
+ if ((current->thread.flags & IA64_THREAD_PM_VALID) != 0) {
+		/*
+		 * We cannot rely on switch_to() to save the PMU
+		 * context for the last time. There is a possible race
+		 * condition in SMP mode between the child and the
+		 * parent. By explicitly saving the PMU context here
+		 * we guarantee that there is no race. This call also
+		 * stops monitoring.
+		 */
+		ia64_save_pm_regs(&current->thread);
+ /*
+ * make sure that switch_to() will not save context again
+ */
+ current->thread.flags &= ~IA64_THREAD_PM_VALID;
}
+#endif
}
unsigned long
ret = 0;
} else {
if ((unsigned long) laddr >= (unsigned long) high_memory) {
- printk("yikes: trying to access long at %p\n", laddr);
+ printk("yikes: trying to access long at %p\n",
+ (void *) laddr);
return -EIO;
}
ret = *laddr;
}
/*
- * Ensure the state in child->thread.fph is up-to-date.
+ * Write f32-f127 back to task->thread.fph if it has been modified.
*/
-void
-ia64_sync_fph (struct task_struct *child)
+inline void
+ia64_flush_fph (struct task_struct *task)
{
- if (ia64_psr(ia64_task_regs(child))->mfh && ia64_get_fpu_owner() == child) {
- ia64_psr(ia64_task_regs(child))->mfh = 0;
- ia64_set_fpu_owner(0);
- ia64_save_fpu(&child->thread.fph[0]);
- child->thread.flags |= IA64_THREAD_FPH_VALID;
+ struct ia64_psr *psr = ia64_psr(ia64_task_regs(task));
+#ifdef CONFIG_SMP
+ struct task_struct *fpu_owner = current;
+#else
+ struct task_struct *fpu_owner = ia64_get_fpu_owner();
+#endif
+
+ if (task == fpu_owner && psr->mfh) {
+ psr->mfh = 0;
+ ia64_save_fpu(&task->thread.fph[0]);
+ task->thread.flags |= IA64_THREAD_FPH_VALID;
}
- if (!(child->thread.flags & IA64_THREAD_FPH_VALID)) {
- memset(&child->thread.fph, 0, sizeof(child->thread.fph));
- child->thread.flags |= IA64_THREAD_FPH_VALID;
+}
+
+/*
+ * Sync the fph state of the task so that it can be manipulated
+ * through thread.fph. If necessary, f32-f127 are written back to
+ * thread.fph or, if the fph state hasn't been used before, thread.fph
+ * is cleared to zeroes. Also, access to f32-f127 is disabled to
+ * ensure that the task picks up the state from thread.fph when it
+ * executes again.
+ */
+void
+ia64_sync_fph (struct task_struct *task)
+{
+ struct ia64_psr *psr = ia64_psr(ia64_task_regs(task));
+
+ ia64_flush_fph(task);
+ if (!(task->thread.flags & IA64_THREAD_FPH_VALID)) {
+ task->thread.flags |= IA64_THREAD_FPH_VALID;
+ memset(&task->thread.fph, 0, sizeof(task->thread.fph));
}
+#ifndef CONFIG_SMP
+ if (ia64_get_fpu_owner() == task)
+ ia64_set_fpu_owner(0);
+#endif
+ psr->dfh = 1;
}
#ifdef CONFIG_IA64_NEW_UNWIND
struct switch_stack *sw;
struct unw_frame_info info;
struct pt_regs *pt;
+ unsigned long pmd_tmp;
pt = ia64_task_regs(child);
sw = (struct switch_stack *) (child->thread.ksp + 16);
if (addr < PT_F127 + 16) {
/* accessing fph */
- ia64_sync_fph(child);
+ if (write_access)
+ ia64_sync_fph(child);
+ else
+ ia64_flush_fph(child);
ptr = (unsigned long *) ((unsigned long) &child->thread.fph + addr);
} else if (addr >= PT_F10 && addr < PT_F15 + 16) {
/* scratch registers untouched by kernel (saved in switch_stack) */
case PT_B1: case PT_B2: case PT_B3: case PT_B4: case PT_B5:
return unw_access_br(&info, (addr - PT_B1)/8 + 1, data, write_access);
+ case PT_AR_EC:
+ return unw_access_ar(&info, UNW_AR_EC, data, write_access);
+
case PT_AR_LC:
return unw_access_ar(&info, UNW_AR_LC, data, write_access);
addr);
return -1;
}
- } else {
+ } else
+#ifdef CONFIG_PERFMON
+ if (addr < PT_PMD)
+#endif
+ {
/* access debug registers */
if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
ptr += regnum;
}
+#ifdef CONFIG_PERFMON
+ else {
+ /*
+ * XXX: will eventually move back to perfmonctl()
+ */
+ unsigned long pmd = (addr - PT_PMD) >> 3;
+ extern unsigned long perf_ovfl_val;
+
+ /* we just use ptrace to read */
+ if (write_access) return -1;
+
+ if (pmd > 3) {
+ printk("ptrace: rejecting access to PMD[%ld] address 0x%lx\n", pmd, addr);
+ return -1;
+ }
+
+		/*
+		 * We always need to mask the upper 32 bits of the pmd because their value is undefined
+		 */
+ pmd_tmp = child->thread.pmod[pmd]+(child->thread.pmd[pmd]& perf_ovfl_val);
+
+ /*printk(__FUNCTION__" child=%d reading pmd[%ld]=%lx\n", child->pid, pmd, pmd_tmp);*/
+
+ ptr = &pmd_tmp;
+ }
+#endif
if (write_access)
*ptr = *data;
else
static int
access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
{
- unsigned long *ptr, *rbs, *bspstore, ndirty, regnum;
+ unsigned long *ptr = NULL, *rbs, *bspstore, ndirty, regnum;
struct switch_stack *sw;
+ unsigned long pmd_tmp;
struct pt_regs *pt;
if ((addr & 0x7) != 0)
if (addr < PT_F127+16) {
/* accessing fph */
- ia64_sync_fph(child);
+ if (write_access)
+ ia64_sync_fph(child);
+ else
+ ia64_flush_fph(child);
ptr = (unsigned long *) ((unsigned long) &child->thread.fph + addr);
} else if (addr < PT_F9+16) {
/* accessing switch_stack or pt_regs: */
*data = (pt->cr_ipsr & IPSR_READ_MASK);
return 0;
+ case PT_AR_EC:
+ if (write_access)
+ sw->ar_pfs = (((*data & 0x3f) << 52)
+ | (sw->ar_pfs & ~(0x3fUL << 52)));
+ else
+ *data = (sw->ar_pfs >> 52) & 0x3f;
+ break;
+
case PT_R1: case PT_R2: case PT_R3:
case PT_R4: case PT_R5: case PT_R6: case PT_R7:
case PT_R8: case PT_R9: case PT_R10: case PT_R11:
/* disallow accessing anything else... */
return -1;
}
- } else {
+ } else
+#ifdef CONFIG_PERFMON
+ if (addr < PT_PMD)
+#endif
+ {
+
/* access debug registers */
if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
ptr += regnum;
}
+#ifdef CONFIG_PERFMON
+ else {
+ /*
+ * XXX: will eventually move back to perfmonctl()
+ */
+ unsigned long pmd = (addr - PT_PMD) >> 3;
+ extern unsigned long perf_ovfl_val;
+
+ /* we just use ptrace to read */
+ if (write_access) return -1;
+
+ if (pmd > 3) {
+ printk("ptrace: rejecting access to PMD[%ld] address 0x%lx\n", pmd, addr);
+ return -1;
+ }
+
+		/*
+		 * We always need to mask the upper 32 bits of the pmd because their value is undefined
+		 */
+ pmd_tmp = child->thread.pmod[pmd]+(child->thread.pmd[pmd]& perf_ovfl_val);
+
+ /*printk(__FUNCTION__" child=%d reading pmd[%ld]=%lx\n", child->pid, pmd, pmd_tmp);*/
+
+ ptr = &pmd_tmp;
+ }
+#endif
+
if (write_access)
*ptr = *data;
else
ret = -ESRCH;
if (!(child->ptrace & PT_PTRACED))
goto out_tsk;
+
if (child->state != TASK_STOPPED) {
- if (request != PTRACE_KILL)
+ if (request != PTRACE_KILL && request != PTRACE_PEEKUSR)
goto out_tsk;
}
+
if (child->p_pptr != current)
goto out_tsk;
}
ia64_sal_handler ia64_sal = (ia64_sal_handler) default_handler;
+ia64_sal_desc_ptc_t *ia64_ptc_domain_info;
const char *
ia64_sal_strerror (long status)
ia64_sal_handler_init(__va(ep->sal_proc), __va(ep->gp));
break;
+ case SAL_DESC_PTC:
+ ia64_ptc_domain_info = (ia64_sal_desc_ptc_t *)p;
+ break;
+
case SAL_DESC_AP_WAKEUP:
#ifdef CONFIG_SMP
{
#include <asm/system.h>
#include <asm/efi.h>
#include <asm/mca.h>
+#include <asm/smp.h>
#ifdef CONFIG_BLK_DEV_RAM
# include <linux/blk.h>
extern char _end;
-/* cpu_data[bootstrap_processor] is data for the bootstrap processor: */
+/* cpu_data[0] is data for the bootstrap processor: */
struct cpuinfo_ia64 cpu_data[NR_CPUS];
unsigned long ia64_cycles_per_usec;
volatile unsigned long cpu_online_map;
#endif
+unsigned long ia64_iobase; /* virtual address for I/O accesses */
+
#define COMMAND_LINE_SIZE 512
char saved_command_line[COMMAND_LINE_SIZE]; /* used in proc filesystem */
void __init
setup_arch (char **cmdline_p)
{
+ extern unsigned long ia64_iobase;
unsigned long max_pfn, bootmap_start, bootmap_size;
unw_init();
if (initrd_start >= PAGE_OFFSET)
printk("Warning: boot loader passed virtual address "
"for initrd, please upgrade the loader\n");
- } else
+ else
#endif
/*
* The loader ONLY passes physical addresses
ia64_sal_init(efi.sal_systab);
#ifdef CONFIG_SMP
- bootstrap_processor = hard_smp_processor_id();
- current->processor = bootstrap_processor;
+ current->processor = 0;
+ cpu_physical_id(0) = hard_smp_processor_id();
#endif
+ /*
+ * Set `iobase' to the appropriate address in region 6
+ * (uncached access range)
+ */
+ __asm__ ("mov %0=ar.k0;;" : "=r"(ia64_iobase));
+ ia64_iobase = __IA64_UNCACHED_OFFSET | (ia64_iobase & ~PAGE_OFFSET);
+
cpu_init(); /* initialize the bootstrap CPU */
#ifdef CONFIG_IA64_GENERIC
int
get_cpuinfo (char *buffer)
{
+#ifdef CONFIG_SMP
+# define lps c->loops_per_sec
+#else
+# define lps loops_per_sec
+#endif
char family[32], model[32], features[128], *cp, *p = buffer;
struct cpuinfo_ia64 *c;
unsigned long mask;
features,
c->ppn, c->number, c->proc_freq / 1000000, c->proc_freq % 1000000,
c->itc_freq / 1000000, c->itc_freq % 1000000,
- loops_per_sec() / 500000, (loops_per_sec() / 5000) % 100);
+ lps / 500000, (lps / 5000) % 100);
}
return p - buffer;
}
#endif
phys_addr_size = vm1.pal_vm_info_1_s.phys_add_size;
}
- printk("processor implements %lu virtual and %lu physical address bits\n",
- impl_va_msb + 1, phys_addr_size);
+ printk("CPU %d: %lu virtual and %lu physical address bits\n",
+ smp_processor_id(), impl_va_msb + 1, phys_addr_size);
c->unimpl_va_mask = ~((7L<<61) | ((1L << (impl_va_msb + 1)) - 1));
c->unimpl_pa_mask = ~((1L<<63) | ((1L << phys_addr_size) - 1));
* do NOT defer TLB misses, page-not-present, access bit, or
* debug faults but kernel code should not rely on any
* particular setting of these bits.
- */
ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX | IA64_DCR_PP);
+ */
+	ia64_set_dcr(IA64_DCR_DR | IA64_DCR_DK | IA64_DCR_DX);
+#ifndef CONFIG_SMP
ia64_set_fpu_owner(0); /* initialize ar.k5 */
+#endif
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
ia64_put_nat_bits(&scr->pt, &scr->sw, nat); /* restore the original scratch NaT bits */
#endif
- if (flags & IA64_SC_FLAG_FPH_VALID) {
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
+ if ((flags & IA64_SC_FLAG_FPH_VALID) != 0) {
+ struct ia64_psr *psr = ia64_psr(&scr->pt);
__copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16);
- if (fpu_owner == current) {
+ if (!psr->dfh) {
+ psr->mfh = 0;
__ia64_load_fpu(current->thread.fph);
}
}
goto give_sigsegv;
sigdelsetmask(&set, ~_BLOCKABLE);
+
	spin_lock_irq(&current->sigmask_lock);
-	current->blocked = set;
-	recalc_sigpending(current);
+	{
+		current->blocked = set;
+		recalc_sigpending(current);
+	}
	spin_unlock_irq(&current->sigmask_lock);
if (restore_sigcontext(sc, scr))
static long
setup_sigcontext (struct sigcontext *sc, sigset_t *mask, struct sigscratch *scr)
{
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
unsigned long flags = 0, ifs, nat;
long err;
/* if cr_ifs isn't valid, we got here through a syscall */
flags |= IA64_SC_FLAG_IN_SYSCALL;
}
- if ((fpu_owner == current) || (current->thread.flags & IA64_THREAD_FPH_VALID)) {
+ ia64_flush_fph(current);
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID)) {
flags |= IA64_SC_FLAG_FPH_VALID;
- if (fpu_owner == current) {
- __ia64_save_fpu(current->thread.fph);
- }
__copy_to_user(&sc->sc_fr[32], current->thread.fph, 96*16);
}
if (!(ka->sa.sa_flags & SA_NODEFER)) {
	spin_lock_irq(&current->sigmask_lock);
-	sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
-	sigaddset(&current->blocked, sig);
-	recalc_sigpending(current);
+	{
+		sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+		sigaddset(&current->blocked, sig);
+		recalc_sigpending(current);
+	}
	spin_unlock_irq(&current->sigmask_lock);
}
return 1;
*
* Lots of stuff stolen from arch/alpha/kernel/smp.c
*
+ * 00/09/11 David Mosberger <davidm@hpl.hp.com> Do loops_per_sec calibration on each CPU.
+ * 00/08/23 Asit Mallick <asit.k.mallick@intel.com> fixed logical processor id
* 00/03/31 Rohit Seth <rohit.seth@intel.com> Fixes for Bootstrap Processor & cpu_online_map
* now gets done here (instead of setup.c)
* 99/10/05 davidm Update to bring it in sync with new command-line processing scheme.
#include <asm/bitops.h>
#include <asm/current.h>
#include <asm/delay.h>
+#include <asm/efi.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/system.h>
#include <asm/unistd.h>
+extern void __init calibrate_delay(void);
extern int cpu_idle(void * unused);
-extern void _start(void);
extern void machine_halt(void);
+extern void start_ap(void);
extern int cpu_now_booting; /* Used by head.S to find idle task */
extern volatile unsigned long cpu_online_map; /* Bitmap of available cpu's */
extern struct cpuinfo_ia64 cpu_data[NR_CPUS]; /* Duh... */
+struct smp_boot_data smp_boot_data __initdata;
+
spinlock_t kernel_flag = SPIN_LOCK_UNLOCKED;
-struct smp_boot_data smp __initdata = { 0, };
-char no_int_routing __initdata = 0;
+char __initdata no_int_routing;
unsigned char smp_int_redirect; /* are INT and IPI redirectable by the chipset? */
-volatile int __cpu_number_map[NR_CPUS] = { -1, }; /* SAPIC ID -> Logical ID */
-volatile int __cpu_logical_map[NR_CPUS] = { -1, }; /* logical ID -> SAPIC ID */
+volatile int __cpu_physical_id[NR_CPUS] = { -1, }; /* Logical ID -> SAPIC ID */
int smp_num_cpus = 1;
-int bootstrap_processor = -1; /* SAPIC ID of BSP */
-int smp_threads_ready = 0; /* Set when the idlers are all forked */
-cycles_t cacheflush_time = 0;
+volatile int smp_threads_ready; /* Set when the idlers are all forked */
+cycles_t cacheflush_time;
unsigned long ap_wakeup_vector = -1; /* External Int to use to wakeup AP's */
+
+static volatile unsigned long cpu_callin_map;
+static volatile int smp_commenced;
+
static int max_cpus = -1; /* Command line */
static unsigned long ipi_op[NR_CPUS];
struct smp_call_struct {
static inline int
pointer_lock(void *lock, void *data, int retry)
{
+ volatile long *ptr = lock;
again:
if (cmpxchg_acq((void **) lock, 0, data) == 0)
return 0;
if (!retry)
return -EBUSY;
- while (*(void **) lock)
+ while (*ptr)
;
goto again;
send_IPI_allbutself(int op)
{
int i;
- int cpu_id = 0;
for (i = 0; i < smp_num_cpus; i++) {
- cpu_id = __cpu_logical_map[i];
- if (cpu_id != smp_processor_id())
- send_IPI_single(cpu_id, op);
+ if (i != smp_processor_id())
+ send_IPI_single(i, op);
}
}
int i;
for (i = 0; i < smp_num_cpus; i++)
- send_IPI_single(__cpu_logical_map[i], op);
+ send_IPI_single(i, op);
}
static inline void
smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int retry, int wait)
{
struct smp_call_struct data;
- long timeout;
+ unsigned long timeout;
int cpus = 1;
if (cpuid == smp_processor_id()) {
smp_call_function (void (*func) (void *info), void *info, int retry, int wait)
{
struct smp_call_struct data;
- long timeout;
+ unsigned long timeout;
int cpus = smp_num_cpus - 1;
if (cpus == 0)
if (--data->prof_counter <= 0) {
data->prof_counter = data->prof_multiplier;
- /*
- * update_process_times() expects us to have done irq_enter().
- * Besides, if we don't timer interrupts ignore the global
- * interrupt lock, which is the WrongThing (tm) to do.
- */
- irq_enter(cpu, 0);
update_process_times(user);
- irq_exit(cpu, 0);
}
}
-static inline void __init
-smp_calibrate_delay(int cpuid)
-{
- struct cpuinfo_ia64 *c = &cpu_data[cpuid];
-#if 0
- unsigned long old = loops_per_sec;
- extern void calibrate_delay(void);
-
- loops_per_sec = 0;
- calibrate_delay();
- c->loops_per_sec = loops_per_sec;
- loops_per_sec = old;
-#else
- c->loops_per_sec = loops_per_sec;
-#endif
-}
-
-/*
- * SAL shoves the AP's here when we start them. Physical mode, no kernel TR,
- * no RRs set, better than even chance that psr is bogus. Fix all that and
- * call _start. In effect, pretend to be lilo.
- *
- * Stolen from lilo_start.c. Thanks David!
- */
-void
-start_ap(void)
-{
- unsigned long flags;
-
- /*
- * Install a translation register that identity maps the
- * kernel's 256MB page(s).
- */
- ia64_clear_ic(flags);
- ia64_set_rr( 0, (0x1000 << 8) | (_PAGE_SIZE_1M << 2));
- ia64_set_rr(PAGE_OFFSET, (ia64_rid(0, PAGE_OFFSET) << 8) | (_PAGE_SIZE_256M << 2));
- ia64_srlz_d();
- ia64_itr(0x3, 1, PAGE_OFFSET,
- pte_val(mk_pte_phys(0, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
- _PAGE_SIZE_256M);
- ia64_srlz_i();
-
- flags = (IA64_PSR_IT | IA64_PSR_IC | IA64_PSR_DT | IA64_PSR_RT | IA64_PSR_DFH |
- IA64_PSR_BN);
-
- asm volatile ("movl r8 = 1f\n"
- ";;\n"
- "mov cr.ipsr=%0\n"
- "mov cr.iip=r8\n"
- "mov cr.ifs=r0\n"
- ";;\n"
- "rfi;;"
- "1:\n"
- "movl r1 = __gp" :: "r"(flags) : "r8");
- _start();
-}
-
/*
* AP's start using C here.
*/
void __init
-smp_callin(void)
+smp_callin (void)
{
extern void ia64_rid_init(void);
extern void ia64_init_itm(void);
#ifdef CONFIG_PERFMON
extern void perfmon_init_percpu(void);
#endif
+ int cpu = smp_processor_id();
- efi_map_pal_code();
+ if (test_and_set_bit(cpu, &cpu_online_map)) {
+ printk("CPU#%d already initialized!\n", cpu);
+ machine_halt();
+ }
+ efi_map_pal_code();
cpu_init();
- smp_setup_percpu_timer(smp_processor_id());
+ smp_setup_percpu_timer(cpu);
/* setup the CPU local timer tick */
ia64_init_itm();
ia64_set_lrr0(0, 1);
ia64_set_lrr1(0, 1);
- if (test_and_set_bit(smp_processor_id(), &cpu_online_map)) {
- printk("CPU#%d already initialized!\n", smp_processor_id());
- machine_halt();
- }
- while (!smp_threads_ready)
- mb();
-
local_irq_enable(); /* Interrupts have been off until now */
- smp_calibrate_delay(smp_processor_id());
- printk("SMP: CPU %d starting idle loop\n", smp_processor_id());
+
+ calibrate_delay();
+ my_cpu_data.loops_per_sec = loops_per_sec;
+
+ /* allow the master to continue */
+ set_bit(cpu, &cpu_callin_map);
+
+ /* finally, wait for the BP to finish initialization: */
+ while (!smp_commenced);
cpu_idle(NULL);
}
}
/*
- * Bring one cpu online.
- *
- * NB: cpuid is the CPU BUS-LOCAL ID, not the entire SAPIC ID. See asm/smp.h.
+ * Bring one cpu online. Return 0 if this fails for any reason.
*/
static int __init
-smp_boot_one_cpu(int cpuid, int cpunum)
+smp_boot_one_cpu(int cpu)
{
struct task_struct *idle;
+ int cpu_phys_id = cpu_physical_id(cpu);
long timeout;
/*
* Sheesh . . .
*/
if (fork_by_hand() < 0)
- panic("failed fork for CPU %d", cpuid);
+ panic("failed fork for CPU 0x%x", cpu_phys_id);
/*
* We remove it from the pidhash and the runqueue
* once we got the process:
*/
idle = init_task.prev_task;
if (!idle)
- panic("No idle process for CPU %d", cpuid);
- init_tasks[cpunum] = idle;
+ panic("No idle process for CPU 0x%x", cpu_phys_id);
+ init_tasks[cpu] = idle;
del_from_runqueue(idle);
unhash_process(idle);
/* Schedule the first task manually. */
- idle->processor = cpuid;
+ idle->processor = cpu;
idle->has_cpu = 1;
/* Let _start know what logical CPU we're booting (offset into init_tasks[] */
- cpu_now_booting = cpunum;
-
+ cpu_now_booting = cpu;
+
/* Kick the AP in the butt */
- ipi_send(cpuid, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
- ia64_srlz_i();
- mb();
+ ipi_send(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0);
- /*
- * OK, wait a bit for that CPU to finish staggering about. smp_callin() will
- * call cpu_init() which will set a bit for this AP. When that bit flips, the AP
- * is waiting for smp_threads_ready to be 1 and we can move on.
- */
+ /* wait up to 10s for the AP to start */
for (timeout = 0; timeout < 100000; timeout++) {
- if (test_bit(cpuid, &cpu_online_map))
- goto alive;
+ if (test_bit(cpu, &cpu_callin_map))
+ return 1;
udelay(100);
- barrier();
}
- printk(KERN_ERR "SMP: Processor %d is stuck.\n", cpuid);
+ printk(KERN_ERR "SMP: Processor 0x%x is stuck.\n", cpu_phys_id);
return 0;
-
-alive:
- /* Remember the AP data */
- __cpu_number_map[cpuid] = cpunum;
- __cpu_logical_map[cpunum] = cpuid;
- return 1;
}
unsigned long bogosum;
/* Take care of some initial bookkeeping. */
- memset(&__cpu_number_map, -1, sizeof(__cpu_number_map));
- memset(&__cpu_logical_map, -1, sizeof(__cpu_logical_map));
+ memset(&__cpu_physical_id, -1, sizeof(__cpu_physical_id));
memset(&ipi_op, 0, sizeof(ipi_op));
- /* Setup BSP mappings */
- __cpu_number_map[bootstrap_processor] = 0;
- __cpu_logical_map[0] = bootstrap_processor;
+ /* Setup BP mappings */
+ __cpu_physical_id[0] = hard_smp_processor_id();
- smp_calibrate_delay(smp_processor_id());
+	/* on the BP, the kernel already called calibrate_delay() in init/main.c */
+ my_cpu_data.loops_per_sec = loops_per_sec;
#if 0
smp_tune_scheduling();
#endif
- smp_setup_percpu_timer(bootstrap_processor);
+ smp_setup_percpu_timer(0);
- if (test_and_set_bit(bootstrap_processor, &cpu_online_map)) {
+ if (test_and_set_bit(0, &cpu_online_map)) {
printk("CPU#%d already initialized!\n", smp_processor_id());
machine_halt();
}
if (max_cpus != -1)
printk("Limiting CPUs to %d\n", max_cpus);
- if (smp.cpu_count > 1) {
+ if (smp_boot_data.cpu_count > 1) {
printk(KERN_INFO "SMP: starting up secondaries.\n");
- for (i = 0; i < NR_CPUS; i++) {
- if (smp.cpu_map[i] == -1 ||
- smp.cpu_map[i] == bootstrap_processor)
+ for (i = 0; i < smp_boot_data.cpu_count; i++) {
+ /* skip performance restricted and bootstrap cpu: */
+ if (smp_boot_data.cpu_phys_id[i] == -1
+ || smp_boot_data.cpu_phys_id[i] == hard_smp_processor_id())
continue;
- if (smp_boot_one_cpu(smp.cpu_map[i], cpu_count) == 0)
- continue;
+ cpu_physical_id(cpu_count) = smp_boot_data.cpu_phys_id[i];
+ if (!smp_boot_one_cpu(cpu_count))
+ continue; /* failed */
cpu_count++; /* Count good CPUs only... */
/*
}
/*
- * Called from main.c by each AP.
+ * Called when the BP is just about to fire off init.
*/
void __init
smp_commence(void)
{
- mb();
-}
-
-/*
- * Not used; part of the i386 bringup
- */
-void __init
-initialize_secondary(void)
-{
+ smp_commenced = 1;
}
int __init
*
* Setup of the IPI irq handler is done in irq.c:init_IRQ_SMP().
*
- * So this just gets the BSP SAPIC ID and print's it out. Dull, huh?
- *
- * Not anymore. This also registers the AP OS_MC_REDVEZ address with SAL.
+ * This also registers the AP OS_MC_RENDEZ address with SAL.
*/
void __init
init_smp_config(void)
} *ap_startup;
long sal_ret;
- /* Grab the BSP ID */
- bootstrap_processor = hard_smp_processor_id();
-
/* Tell SAL where to drop the AP's. */
ap_startup = (struct fptr *) start_ap;
sal_ret = ia64_sal_set_vectors(SAL_VECTOR_OS_BOOT_RENDEZ,
-unsigned long cpu_online_map;
+/*
+ * SMP Support
+ *
+ * Application processor startup code, moved from smp.c to better support kernel profiling
+ */
+
+#include <linux/config.h>
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+#include <linux/kernel_stat.h>
+#include <linux/mm.h>
+#include <linux/delay.h>
+
+#include <asm/atomic.h>
+#include <asm/bitops.h>
+#include <asm/current.h>
+#include <asm/delay.h>
+#include <asm/efi.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/sal.h>
+#include <asm/system.h>
+#include <asm/unistd.h>
+
+/*
+ * SAL shoves the AP's here when we start them. Physical mode, no kernel TR,
+ * no RRs set, better than even chance that psr is bogus. Fix all that and
+ * call _start. In effect, pretend to be lilo.
+ *
+ * Stolen from lilo_start.c. Thanks David!
+ */
+void
+start_ap(void)
+{
+ extern void _start (void);
+ unsigned long flags;
+
+ /*
+ * Install a translation register that identity maps the
+ * kernel's 256MB page(s).
+ */
+ ia64_clear_ic(flags);
+ ia64_set_rr( 0, (0x1000 << 8) | (_PAGE_SIZE_1M << 2));
+ ia64_set_rr(PAGE_OFFSET, (ia64_rid(0, PAGE_OFFSET) << 8) | (_PAGE_SIZE_256M << 2));
+ ia64_srlz_d();
+ ia64_itr(0x3, 1, PAGE_OFFSET,
+ pte_val(mk_pte_phys(0, __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RWX))),
+ _PAGE_SIZE_256M);
+ ia64_srlz_i();
+
+ flags = (IA64_PSR_IT | IA64_PSR_IC | IA64_PSR_DT | IA64_PSR_RT | IA64_PSR_DFH |
+ IA64_PSR_BN);
+
+ asm volatile ("movl r8 = 1f\n"
+ ";;\n"
+ "mov cr.ipsr=%0\n"
+ "mov cr.iip=r8\n"
+ "mov cr.ifs=r0\n"
+ ";;\n"
+ "rfi;;"
+ "1:\n"
+ "movl r1 = __gp" :: "r"(flags) : "r8");
+ _start();
+}
+
struct pt_regs *regs = (struct pt_regs *) &stack;
addr = do_mmap2(addr, len, prot, flags, fd, pgoff);
- if (!IS_ERR(addr))
+ if (!IS_ERR((void *) addr))
regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
return addr;
}
return -EINVAL;
addr = do_mmap2(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
- if (!IS_ERR(addr))
+ if (!IS_ERR((void *) addr))
regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
return addr;
}
-asmlinkage long
-sys_ioperm (unsigned long from, unsigned long num, int on)
-{
- printk(KERN_ERR "sys_ioperm(from=%lx, num=%lx, on=%d)\n", from, num, on);
- return -EIO;
-}
-
-asmlinkage long
-sys_iopl (int level, long arg1, long arg2, long arg3)
-{
- printk(KERN_ERR "sys_iopl(level=%d)!\n", level);
- return -ENOSYS;
-}
-
asmlinkage long
sys_vm86 (long arg0, long arg1, long arg2, long arg3)
{
unsigned long addr;
addr = sys_create_module (name_user, size);
- if (!IS_ERR(addr))
+ if (!IS_ERR((void *) addr))
regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */
return addr;
}
#ifdef CONFIG_SMP
smp_do_timer(regs);
- if (smp_processor_id() == bootstrap_processor)
+ if (smp_processor_id() == 0)
do_timer(regs);
#else
do_timer(regs);
itc_freq = (platform_base_freq*itc_ratio.num)/itc_ratio.den;
itm.delta = itc_freq / HZ;
- printk("timer: CPU %d base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
+ printk("CPU %d: base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
smp_processor_id(),
platform_base_freq / 1000000, (platform_base_freq / 1000) % 1000,
itc_ratio.num, itc_ratio.den, itc_freq / 1000000, (itc_freq / 1000) % 1000);
}
/*
- * disabled_fp_fault() is called when a user-level process attempts to
- * access one of the registers f32..f127 while it doesn't own the
+ * disabled_fph_fault() is called when a user-level process attempts
+ * to access one of the registers f32..f127 when it doesn't own the
* fp-high register partition. When this happens, we save the current
* fph partition in the task_struct of the fpu-owner (if necessary)
* and then load the fp-high partition of the current task (if
- * necessary).
+ * necessary). Note that the kernel has access to fph by the time we
+ * get here, as the IVT's "Disabled FP-Register" handler takes care of
+ * clearing psr.dfh.
*/
static inline void
disabled_fph_fault (struct pt_regs *regs)
{
- struct task_struct *fpu_owner = ia64_get_fpu_owner();
+ struct ia64_psr *psr = ia64_psr(regs);
- /* first, clear psr.dfh and psr.mfh: */
- regs->cr_ipsr &= ~(IA64_PSR_DFH | IA64_PSR_MFH);
- if (fpu_owner != current) {
- ia64_set_fpu_owner(current);
+ /* first, grant user-level access to fph partition: */
+ psr->dfh = 0;
+#ifndef CONFIG_SMP
+ {
+ struct task_struct *fpu_owner = ia64_get_fpu_owner();
- if (fpu_owner && ia64_psr(ia64_task_regs(fpu_owner))->mfh) {
- ia64_psr(ia64_task_regs(fpu_owner))->mfh = 0;
- fpu_owner->thread.flags |= IA64_THREAD_FPH_VALID;
- __ia64_save_fpu(fpu_owner->thread.fph);
- }
- if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) {
- __ia64_load_fpu(current->thread.fph);
- } else {
- __ia64_init_fpu();
- /*
- * Set mfh because the state in thread.fph does not match
- * the state in the fph partition.
- */
- ia64_psr(regs)->mfh = 1;
- }
+ if (fpu_owner == current)
+ return;
+
+ if (fpu_owner)
+ ia64_flush_fph(fpu_owner);
+
+ ia64_set_fpu_owner(current);
+ }
+#endif /* !CONFIG_SMP */
+ if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) {
+ __ia64_load_fpu(current->thread.fph);
+ psr->mfh = 0;
+ } else {
+ __ia64_init_fpu();
+ /*
+ * Set mfh because the state in thread.fph does not match the state in
+ * the fph partition.
+ */
+ psr->mfh = 1;
}
}
* kernel, so set those bits in the mask and set the low volatile
* pointer to point to these registers.
*/
- fp_state.bitmask_low64 = 0xffc0; /* bit6..bit15 */
#ifndef FPSWA_BUG
-	fp_state.fp_state_low_volatile = &regs->f6;
+ fp_state.bitmask_low64 = 0x3c0; /* bit 6..9 */
+	fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6;
#else
+ fp_state.bitmask_low64 = 0xffc0; /* bit6..bit15 */
f6_15[0] = regs->f6;
f6_15[1] = regs->f7;
f6_15[2] = regs->f8;
f6_15[3] = regs->f9;
- __asm__ ("stf.spill %0=f10" : "=m"(f6_15[4]));
- __asm__ ("stf.spill %0=f11" : "=m"(f6_15[5]));
- __asm__ ("stf.spill %0=f12" : "=m"(f6_15[6]));
- __asm__ ("stf.spill %0=f13" : "=m"(f6_15[7]));
- __asm__ ("stf.spill %0=f14" : "=m"(f6_15[8]));
- __asm__ ("stf.spill %0=f15" : "=m"(f6_15[9]));
+ __asm__ ("stf.spill %0=f10%P0" : "=m"(f6_15[4]));
+ __asm__ ("stf.spill %0=f11%P0" : "=m"(f6_15[5]));
+ __asm__ ("stf.spill %0=f12%P0" : "=m"(f6_15[6]));
+ __asm__ ("stf.spill %0=f13%P0" : "=m"(f6_15[7]));
+ __asm__ ("stf.spill %0=f14%P0" : "=m"(f6_15[8]));
+ __asm__ ("stf.spill %0=f15%P0" : "=m"(f6_15[9]));
fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) f6_15;
#endif
/*
(unsigned long *) isr, (unsigned long *) pr,
(unsigned long *) ifs, &fp_state);
#ifdef FPSWA_BUG
- __asm__ ("ldf.fill f10=%0" :: "m"(f6_15[4]));
- __asm__ ("ldf.fill f11=%0" :: "m"(f6_15[5]));
- __asm__ ("ldf.fill f12=%0" :: "m"(f6_15[6]));
- __asm__ ("ldf.fill f13=%0" :: "m"(f6_15[7]));
- __asm__ ("ldf.fill f14=%0" :: "m"(f6_15[8]));
- __asm__ ("ldf.fill f15=%0" :: "m"(f6_15[9]));
+ __asm__ ("ldf.fill f10=%0%P0" :: "m"(f6_15[4]));
+ __asm__ ("ldf.fill f11=%0%P0" :: "m"(f6_15[5]));
+ __asm__ ("ldf.fill f12=%0%P0" :: "m"(f6_15[6]));
+ __asm__ ("ldf.fill f13=%0%P0" :: "m"(f6_15[7]));
+ __asm__ ("ldf.fill f14=%0%P0" :: "m"(f6_15[8]));
+ __asm__ ("ldf.fill f15=%0%P0" :: "m"(f6_15[9]));
regs->f6 = f6_15[0];
regs->f7 = f6_15[1];
regs->f8 = f6_15[2];
bspstore = (unsigned long *)regs->ar_bspstore;
DPRINT(("rse_slot_num=0x%lx\n",ia64_rse_slot_num((unsigned long *)sw->ar_bspstore)));
- DPRINT(("kbs=%p nlocals=%ld\n", kbs, nlocals));
+ DPRINT(("kbs=%p nlocals=%ld\n", (void *) kbs, nlocals));
DPRINT(("bspstore next rnat slot %p\n",
- ia64_rse_rnat_addr((unsigned long *)sw->ar_bspstore)));
+ (void *) ia64_rse_rnat_addr((unsigned long *)sw->ar_bspstore)));
DPRINT(("on_kbs=%ld rnats=%ld\n",
on_kbs, ((sw->ar_bspstore-(unsigned long)kbs)>>3) - on_kbs));
addr = slot = ia64_rse_skip_regs(bsp, r1 - 32);
DPRINT(("ubs_end=%p bsp=%p addr=%p slot=0x%lx\n",
- ubs_end, bsp, addr, ia64_rse_slot_num(addr)));
+ (void *) ubs_end, (void *) bsp, (void *) addr, ia64_rse_slot_num(addr)));
ia64_poke(regs, current, (unsigned long)addr, val);
ia64_peek(regs, current, (unsigned long)addr, &rnats);
DPRINT(("rnat @%p = 0x%lx nat=%d rnatval=%lx\n",
- addr, rnats, nat, rnats &ia64_rse_slot_num(slot)));
+ (void *) addr, rnats, nat, rnats &ia64_rse_slot_num(slot)));
if (nat) {
rnats |= __IA64_UL(1) << ia64_rse_slot_num(slot);
}
ia64_poke(regs, current, (unsigned long)addr, rnats);
- DPRINT(("rnat changed to @%p = 0x%lx\n", addr, rnats));
+ DPRINT(("rnat changed to @%p = 0x%lx\n", (void *) addr, rnats));
}
addr = slot = ia64_rse_skip_regs(bsp, r1 - 32);
DPRINT(("ubs_end=%p bsp=%p addr=%p slot=0x%lx\n",
- ubs_end, bsp, addr, ia64_rse_slot_num(addr)));
+ (void *) ubs_end, (void *) bsp, (void *) addr, ia64_rse_slot_num(addr)));
ia64_peek(regs, current, (unsigned long)addr, val);
addr = ia64_rse_rnat_addr(addr);
ia64_peek(regs, current, (unsigned long)addr, &rnats);
- DPRINT(("rnat @%p = 0x%lx\n", addr, rnats));
+ DPRINT(("rnat @%p = 0x%lx\n", (void *) addr, rnats));
if (nat)
*nat = rnats >> ia64_rse_slot_num(slot) & 0x1;
 * UNAT bit_pos = GR[r3]{8:3} from EAS-2.4
*/
bitmask = __IA64_UL(1) << (addr >> 3 & 0x3f);
- DPRINT(("*0x%lx=0x%lx NaT=%d prev_unat @%p=%lx\n", addr, val, nat, unat, *unat));
+ DPRINT(("*0x%lx=0x%lx NaT=%d prev_unat @%p=%lx\n", addr, val, nat, (void *) unat, *unat));
if (nat) {
*unat |= bitmask;
} else {
*unat &= ~bitmask;
}
- DPRINT(("*0x%lx=0x%lx NaT=%d new unat: %p=%lx\n", addr, val, nat, unat,*unat));
+ DPRINT(("*0x%lx=0x%lx NaT=%d new unat: %p=%lx\n", addr, val, nat, (void *) unat,*unat));
}
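The UNAT update above relies on the IA-64 convention that every 8-byte spill slot maps to one bit of the UNAT collection register, with the bit index taken from address bits 8:3. The following is an illustrative stand-alone sketch (plain C, not kernel code; the helper names are invented for the example):

```c
#include <assert.h>

/* Each 8-byte-aligned spill slot maps to one UNAT bit; the bit index is
 * (addr >> 3) & 0x3f, so a 512-byte window of slots shares one 64-bit
 * UNAT word. */
static unsigned long unat_bit(unsigned long addr)
{
	return 1UL << ((addr >> 3) & 0x3f);
}

/* Record (or clear) the NaT bit for the slot at ADDR in *UNAT. */
static void set_nat(unsigned long *unat, unsigned long addr, int nat)
{
	if (nat)
		*unat |= unat_bit(addr);
	else
		*unat &= ~unat_bit(addr);
}
```

This mirrors the bitmask computation in the hunk above: setting a NaT ORs the mask in, clearing it ANDs the complement.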
#define IA64_FPH_OFFS(r) (r - IA64_FIRST_ROTATING_FR)
unsigned long addr;
/*
- * From EAS-2.5: FPDisableFault has higher priority than
- * Unaligned Fault. Thus, when we get here, we know the partition is
- * enabled.
+ * From EAS-2.5: FPDisableFault has higher priority than Unaligned
+ * Fault. Thus, when we get here, we know the partition is enabled.
+ * To update f32-f127, there are three choices:
+ *
+ * (1) save f32-f127 to thread.fph and update the values there
+ * (2) use a gigantic switch statement to directly access the registers
+ * (3) generate code on the fly to update the desired register
*
- * The registers [32-127] are ususally saved in the tss. When get here,
- * they are NECESSARILY live because they are only saved explicitely.
- * We have 3 ways of updating the values: force a save of the range
- * in tss, use a gigantic switch/case statement or generate code on the
- * fly to store to the right register.
- * For now, we are using the (slow) save/restore way.
+ * For now, we are using approach (1).
*/
if (regnum >= IA64_FIRST_ROTATING_FR) {
ia64_sync_fph(current);
* let's do it for safety.
*/
regs->cr_ipsr |= IA64_PSR_MFL;
-
}
}
* Unaligned Fault. Thus, when we get here, we know the partition is
* enabled.
*
- * When regnum > 31, the register is still live and
- * we need to force a save to the tss to get access to it.
- * See discussion in setfpreg() for reasons and other ways of doing this.
+ * When regnum > 31, the register is still live and we need to force a save
+ * to current->thread.fph to get access to it. See discussion in setfpreg()
+ * for reasons and other ways of doing this.
*/
if (regnum >= IA64_FIRST_ROTATING_FR) {
- ia64_sync_fph(current);
+ ia64_flush_fph(current);
*fpval = current->thread.fph[IA64_FPH_OFFS(regnum)];
} else {
/*
/*
* XXX fixme
*
- * A possible optimization would be to drop fpr_final
- * and directly use the storage from the saved context i.e.,
- * the actual final destination (pt_regs, switch_stack or tss).
+ * A possible optimization would be to drop fpr_final and directly
+ * use the storage from the saved context i.e., the actual final
+ * destination (pt_regs, switch_stack or thread structure).
*/
setfpreg(ld->r1, &fpr_final[0], regs);
setfpreg(ld->imm, &fpr_final[1], regs);
/*
* XXX fixme
*
- * A possible optimization would be to drop fpr_final
- * and directly use the storage from the saved context i.e.,
- * the actual final destination (pt_regs, switch_stack or tss).
+ * A possible optimization would be to drop fpr_final and directly
+ * use the storage from the saved context i.e., the actual final
+ * destination (pt_regs, switch_stack or thread structure).
*/
setfpreg(ld->r1, &fpr_final, regs);
}
* check for updates on any loads
*/
if (ld->op == 0x7 || ld->m)
- emulate_load_updates(ld->op == 0x7 ? UPD_IMMEDIATE: UPD_REG,
- ld, regs, ifa);
-
+ emulate_load_updates(ld->op == 0x7 ? UPD_IMMEDIATE: UPD_REG, ld, regs, ifa);
/*
* invalidate ALAT entry in case of advanced floating point loads
#define UNW_STATS 0 /* WARNING: this disabled interrupts for long time-spans!! */
#if UNW_DEBUG
- static long unw_debug_level = 1;
+ static long unw_debug_level = 255;
# define debug(level,format...) if (unw_debug_level > level) printk(format)
# define dprintk(format...) printk(format)
# define inline
struct unw_table kernel_table;
/* hash table that maps instruction pointer to script index: */
- unw_hash_index_t hash[UNW_HASH_SIZE];
+ unsigned short hash[UNW_HASH_SIZE];
/* script cache: */
struct unw_script cache[UNW_CACHE_SIZE];
UNW_REG_UNAT, UNW_REG_LC, UNW_REG_FPSR, UNW_REG_PRI_UNAT_GR
},
preg_index: {
- struct_offset(struct unw_frame_info, pri_unat)/8, /* PRI_UNAT_GR */
- struct_offset(struct unw_frame_info, pri_unat)/8, /* PRI_UNAT_MEM */
- struct_offset(struct unw_frame_info, pbsp)/8,
- struct_offset(struct unw_frame_info, bspstore)/8,
- struct_offset(struct unw_frame_info, pfs)/8,
- struct_offset(struct unw_frame_info, rnat)/8,
+ struct_offset(struct unw_frame_info, pri_unat_loc)/8, /* PRI_UNAT_GR */
+ struct_offset(struct unw_frame_info, pri_unat_loc)/8, /* PRI_UNAT_MEM */
+ struct_offset(struct unw_frame_info, bsp_loc)/8,
+ struct_offset(struct unw_frame_info, bspstore_loc)/8,
+ struct_offset(struct unw_frame_info, pfs_loc)/8,
+ struct_offset(struct unw_frame_info, rnat_loc)/8,
struct_offset(struct unw_frame_info, psp)/8,
- struct_offset(struct unw_frame_info, rp)/8,
+ struct_offset(struct unw_frame_info, rp_loc)/8,
struct_offset(struct unw_frame_info, r4)/8,
struct_offset(struct unw_frame_info, r5)/8,
struct_offset(struct unw_frame_info, r6)/8,
struct_offset(struct unw_frame_info, r7)/8,
- struct_offset(struct unw_frame_info, unat)/8,
- struct_offset(struct unw_frame_info, pr)/8,
- struct_offset(struct unw_frame_info, lc)/8,
- struct_offset(struct unw_frame_info, fpsr)/8,
- struct_offset(struct unw_frame_info, b1)/8,
- struct_offset(struct unw_frame_info, b2)/8,
- struct_offset(struct unw_frame_info, b3)/8,
- struct_offset(struct unw_frame_info, b4)/8,
- struct_offset(struct unw_frame_info, b5)/8,
- struct_offset(struct unw_frame_info, f2)/8,
- struct_offset(struct unw_frame_info, f3)/8,
- struct_offset(struct unw_frame_info, f4)/8,
- struct_offset(struct unw_frame_info, f5)/8,
- struct_offset(struct unw_frame_info, fr[16 - 16])/8,
- struct_offset(struct unw_frame_info, fr[17 - 16])/8,
- struct_offset(struct unw_frame_info, fr[18 - 16])/8,
- struct_offset(struct unw_frame_info, fr[19 - 16])/8,
- struct_offset(struct unw_frame_info, fr[20 - 16])/8,
- struct_offset(struct unw_frame_info, fr[21 - 16])/8,
- struct_offset(struct unw_frame_info, fr[22 - 16])/8,
- struct_offset(struct unw_frame_info, fr[23 - 16])/8,
- struct_offset(struct unw_frame_info, fr[24 - 16])/8,
- struct_offset(struct unw_frame_info, fr[25 - 16])/8,
- struct_offset(struct unw_frame_info, fr[26 - 16])/8,
- struct_offset(struct unw_frame_info, fr[27 - 16])/8,
- struct_offset(struct unw_frame_info, fr[28 - 16])/8,
- struct_offset(struct unw_frame_info, fr[29 - 16])/8,
- struct_offset(struct unw_frame_info, fr[30 - 16])/8,
- struct_offset(struct unw_frame_info, fr[31 - 16])/8,
+ struct_offset(struct unw_frame_info, unat_loc)/8,
+ struct_offset(struct unw_frame_info, pr_loc)/8,
+ struct_offset(struct unw_frame_info, lc_loc)/8,
+ struct_offset(struct unw_frame_info, fpsr_loc)/8,
+ struct_offset(struct unw_frame_info, b1_loc)/8,
+ struct_offset(struct unw_frame_info, b2_loc)/8,
+ struct_offset(struct unw_frame_info, b3_loc)/8,
+ struct_offset(struct unw_frame_info, b4_loc)/8,
+ struct_offset(struct unw_frame_info, b5_loc)/8,
+ struct_offset(struct unw_frame_info, f2_loc)/8,
+ struct_offset(struct unw_frame_info, f3_loc)/8,
+ struct_offset(struct unw_frame_info, f4_loc)/8,
+ struct_offset(struct unw_frame_info, f5_loc)/8,
+ struct_offset(struct unw_frame_info, fr_loc[16 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[17 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[18 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[19 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[20 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[21 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[22 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[23 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[24 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[25 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[26 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[27 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[28 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[29 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[30 - 16])/8,
+ struct_offset(struct unw_frame_info, fr_loc[31 - 16])/8,
},
hash : { [0 ... UNW_HASH_SIZE - 1] = -1 },
#if UNW_DEBUG
\f
/* Unwind accessors. */
+/*
+ * Returns offset of rREG in struct pt_regs.
+ */
+static inline unsigned long
+pt_regs_off (unsigned long reg)
+{
+	unsigned long off = 0;
+
+ if (reg >= 1 && reg <= 3)
+ off = struct_offset(struct pt_regs, r1) + 8*(reg - 1);
+ else if (reg <= 11)
+ off = struct_offset(struct pt_regs, r8) + 8*(reg - 8);
+ else if (reg <= 15)
+ off = struct_offset(struct pt_regs, r12) + 8*(reg - 12);
+ else if (reg <= 31)
+ off = struct_offset(struct pt_regs, r16) + 8*(reg - 16);
+ else
+ dprintk("unwind: bad scratch reg r%lu\n", reg);
+ return off;
+}
+
int
unw_access_gr (struct unw_frame_info *info, int regnum, unsigned long *val, char *nat, int write)
{
}
/* fall through */
case UNW_NAT_NONE:
+ dummy_nat = 0;
nat_addr = &dummy_nat;
break;
- case UNW_NAT_SCRATCH:
- if (info->pri_unat)
- nat_addr = info->pri_unat;
- else
- nat_addr = &info->sw->caller_unat;
- case UNW_NAT_PRI_UNAT:
+ case UNW_NAT_MEMSTK:
nat_mask = (1UL << ((long) addr & 0x1f8)/8);
break;
- case UNW_NAT_STACKED:
+ case UNW_NAT_REGSTK:
nat_addr = ia64_rse_rnat_addr(addr);
if ((unsigned long) addr < info->regstk.limit
|| (unsigned long) addr >= info->regstk.top)
{
- dprintk("unwind: 0x%p outside of regstk "
- "[0x%lx-0x%lx)\n", addr,
- info->regstk.limit, info->regstk.top);
+ dprintk("unwind: %p outside of regstk "
+ "[0x%lx-0x%lx)\n", (void *) addr,
+ info->regstk.limit,
+ info->regstk.top);
return -1;
}
if ((unsigned long) nat_addr >= info->regstk.top)
pt = (struct pt_regs *) info->psp - 1;
else
pt = (struct pt_regs *) info->sp - 1;
- if (regnum <= 3)
- addr = &pt->r1 + (regnum - 1);
- else if (regnum <= 11)
- addr = &pt->r8 + (regnum - 8);
- else if (regnum <= 15)
- addr = &pt->r12 + (regnum - 12);
- else
- addr = &pt->r16 + (regnum - 16);
- if (info->pri_unat)
- nat_addr = info->pri_unat;
+ addr = (unsigned long *) ((long) pt + pt_regs_off(regnum));
+ if (info->pri_unat_loc)
+ nat_addr = info->pri_unat_loc;
else
- nat_addr = &info->sw->caller_unat;
+ nat_addr = &info->sw->ar_unat;
nat_mask = (1UL << ((long) addr & 0x1f8)/8);
}
} else {
if (write) {
*addr = *val;
- *nat_addr = (*nat_addr & ~nat_mask) | nat_mask;
+ if (*nat)
+ *nat_addr |= nat_mask;
+ else
+ *nat_addr &= ~nat_mask;
} else {
*val = *addr;
*nat = (*nat_addr & nat_mask) != 0;
/* preserved: */
case 1: case 2: case 3: case 4: case 5:
- addr = *(&info->b1 + (regnum - 1));
+ addr = *(&info->b1_loc + (regnum - 1));
if (!addr)
addr = &info->sw->b1 + (regnum - 1);
break;
pt = (struct pt_regs *) info->sp - 1;
if (regnum <= 5) {
- addr = *(&info->f2 + (regnum - 2));
+ addr = *(&info->f2_loc + (regnum - 2));
if (!addr)
addr = &info->sw->f2 + (regnum - 2);
} else if (regnum <= 15) {
else
addr = &info->sw->f10 + (regnum - 10);
} else if (regnum <= 31) {
- addr = info->fr[regnum - 16];
+ addr = info->fr_loc[regnum - 16];
if (!addr)
addr = &info->sw->f16 + (regnum - 16);
} else {
struct task_struct *t = info->task;
- ia64_sync_fph(t);
+ if (write)
+ ia64_sync_fph(t);
+ else
+ ia64_flush_fph(t);
addr = t->thread.fph + (regnum - 32);
}
switch (regnum) {
case UNW_AR_BSP:
- addr = info->pbsp;
+ addr = info->bsp_loc;
if (!addr)
addr = &info->sw->ar_bspstore;
break;
case UNW_AR_BSPSTORE:
- addr = info->bspstore;
+ addr = info->bspstore_loc;
if (!addr)
addr = &info->sw->ar_bspstore;
break;
case UNW_AR_PFS:
- addr = info->pfs;
+ addr = info->pfs_loc;
if (!addr)
addr = &info->sw->ar_pfs;
break;
case UNW_AR_RNAT:
- addr = info->rnat;
+ addr = info->rnat_loc;
if (!addr)
addr = &info->sw->ar_rnat;
break;
case UNW_AR_UNAT:
- addr = info->unat;
+ addr = info->unat_loc;
if (!addr)
addr = &info->sw->ar_unat;
break;
case UNW_AR_LC:
- addr = info->lc;
+ addr = info->lc_loc;
if (!addr)
addr = &info->sw->ar_lc;
break;
case UNW_AR_EC:
- if (!info->cfm)
+ if (!info->cfm_loc)
return -1;
if (write)
- *info->cfm = (*info->cfm & ~(0x3fUL << 52)) | ((*val & 0x3f) << 52);
+ *info->cfm_loc =
+ (*info->cfm_loc & ~(0x3fUL << 52)) | ((*val & 0x3f) << 52);
else
- *val = (*info->cfm >> 52) & 0x3f;
+ *val = (*info->cfm_loc >> 52) & 0x3f;
return 0;
case UNW_AR_FPSR:
- addr = info->fpsr;
+ addr = info->fpsr_loc;
if (!addr)
addr = &info->sw->ar_fpsr;
break;
{
unsigned long *addr;
- addr = info->pr;
+ addr = info->pr_loc;
if (!addr)
addr = &info->sw->pr;
int i;
/*
- * First, resolve implicit register save locations
- * (see Section "11.4.2.3 Rules for Using Unwind
- * Descriptors", rule 3):
+ * First, resolve implicit register save locations (see Section "11.4.2.3 Rules
+ * for Using Unwind Descriptors", rule 3):
*/
for (i = 0; i < (int) sizeof(unw.save_order)/sizeof(unw.save_order[0]); ++i) {
reg = sr->curr.reg + unw.save_order[i];
static inline unw_hash_index_t
hash (unsigned long ip)
{
-# define magic 0x9e3779b97f4a7c16 /* (sqrt(5)/2-1)*2^64 */
+# define magic	0x9e3779b97f4a7c16	/* based on ((sqrt(5)-1)/2)*2^64 */
return (ip >> 4)*magic >> (64 - UNW_LOG_HASH_SIZE);
}
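The script-cache hash above is classic multiplicative (Fibonacci) hashing: multiply by a constant derived from the golden ratio and keep the top bits. A self-contained sketch, assuming a 128-entry table (UNW_LOG_HASH_SIZE = 7 here; the real value lives in the unwinder's private header):

```c
#include <assert.h>

#define LOG_HASH_SIZE 7	/* assumed table size: 1 << 7 = 128 entries */

/* Multiplicative hash of an instruction pointer: IA-64 instructions are
 * bundle-aligned, so the low 4 bits carry no information and are dropped
 * before the multiply; the top LOG_HASH_SIZE bits of the product become
 * the table index. */
static unsigned long hash_ip(unsigned long ip)
{
	const unsigned long magic = 0x9e3779b97f4a7c16UL;

	return ((ip >> 4) * magic) >> (64 - LOG_HASH_SIZE);
}
```

Taking the *top* bits of the product is what makes the multiplicative scheme mix well; the low bits of a multiply are much less uniformly distributed.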
static inline long
-cache_match (struct unw_script *script, unsigned long ip, unsigned long pr_val)
+cache_match (struct unw_script *script, unsigned long ip, unsigned long pr)
{
read_lock(&script->lock);
- if ((ip) == (script)->ip && (((pr_val) ^ (script)->pr_val) & (script)->pr_mask) == 0)
+ if (ip == script->ip && ((pr ^ script->pr_val) & script->pr_mask) == 0)
/* keep the read lock... */
return 1;
read_unlock(&script->lock);
script_lookup (struct unw_frame_info *info)
{
struct unw_script *script = unw.cache + info->hint;
- unsigned long ip, pr_val;
+ unsigned short index;
+ unsigned long ip, pr;
STAT(++unw.stat.cache.lookups);
ip = info->ip;
- pr_val = info->pr_val;
+ pr = info->pr;
- if (cache_match(script, ip, pr_val)) {
+ if (cache_match(script, ip, pr)) {
STAT(++unw.stat.cache.hinted_hits);
return script;
}
- script = unw.cache + unw.hash[hash(ip)];
+ index = unw.hash[hash(ip)];
+ if (index >= UNW_CACHE_SIZE)
+ return 0;
+
+ script = unw.cache + index;
while (1) {
- if (cache_match(script, ip, pr_val)) {
+ if (cache_match(script, ip, pr)) {
/* update hint; no locking required as single-word writes are atomic */
STAT(++unw.stat.cache.normal_hits);
unw.cache[info->prev_script].hint = script - unw.cache;
script_new (unsigned long ip)
{
struct unw_script *script, *prev, *tmp;
+ unw_hash_index_t index;
unsigned long flags;
- unsigned char index;
unsigned short head;
STAT(++unw.stat.script.news);
unw.lru_tail = head;
/* remove the old script from the hash table (if it's there): */
- index = hash(script->ip);
- tmp = unw.cache + unw.hash[index];
- prev = 0;
- while (1) {
- if (tmp == script) {
- if (prev)
- prev->coll_chain = tmp->coll_chain;
- else
- unw.hash[index] = tmp->coll_chain;
- break;
- } else
- prev = tmp;
- if (tmp->coll_chain >= UNW_CACHE_SIZE)
+ if (script->ip) {
+ index = hash(script->ip);
+ tmp = unw.cache + unw.hash[index];
+ prev = 0;
+ while (1) {
+ if (tmp == script) {
+ if (prev)
+ prev->coll_chain = tmp->coll_chain;
+ else
+ unw.hash[index] = tmp->coll_chain;
+ break;
+ } else
+ prev = tmp;
+ if (tmp->coll_chain >= UNW_CACHE_SIZE)
/* old script wasn't in the hash-table */
- break;
- tmp = unw.cache + tmp->coll_chain;
+ break;
+ tmp = unw.cache + tmp->coll_chain;
+ }
}
/* enter new script in the hash table */
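The hunk above walks a collision chain stored as array indices (rather than pointers) and unlinks the recycled script. A simplified stand-in for that loop, with invented names and a small fixed cache size:

```c
#include <assert.h>

#define CACHE_SIZE 32
#define NIL CACHE_SIZE	/* indices >= CACHE_SIZE mean "end of chain" */

struct entry {
	unsigned short coll_chain;	/* index of next entry on the chain */
};

static struct entry cache[CACHE_SIZE];
static unsigned short hash_head;	/* stand-in for unw.hash[index] */

/* Walk the collision chain and unlink CACHE[VICTIM], if present.  The
 * head of the chain is an index in hash_head, exactly as the unwinder
 * keeps unw.hash[] entries as indices into unw.cache[]. */
static void unlink_entry(unsigned short victim)
{
	struct entry *tmp = &cache[hash_head], *prev = 0;

	while (1) {
		if (tmp == &cache[victim]) {
			if (prev)
				prev->coll_chain = tmp->coll_chain;
			else
				hash_head = tmp->coll_chain;
			break;
		}
		prev = tmp;
		if (tmp->coll_chain >= CACHE_SIZE)
			break;	/* victim wasn't on this chain */
		tmp = &cache[tmp->coll_chain];
	}
}
```

The out-of-range sentinel (any index >= CACHE_SIZE) is why the patch also widens the index type: an `unsigned char` cannot represent "not in the table" once the cache grows past 255 entries.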
struct unw_reg_info *r = sr->curr.reg + i;
enum unw_insn_opcode opc;
struct unw_insn insn;
- unsigned long val;
+ unsigned long val = 0;
switch (r->where) {
case UNW_WHERE_GR:
if (r->val >= 32) {
/* register got spilled to a stacked register */
opc = UNW_INSN_SETNAT_TYPE;
- val = UNW_NAT_STACKED;
- } else {
+ val = UNW_NAT_REGSTK;
+ } else
/* register got spilled to a scratch register */
- opc = UNW_INSN_SETNAT_TYPE;
- val = UNW_NAT_SCRATCH;
- }
+ opc = UNW_INSN_SETNAT_MEMSTK;
break;
case UNW_WHERE_FR:
case UNW_WHERE_PSPREL:
case UNW_WHERE_SPREL:
- opc = UNW_INSN_SETNAT_PRI_UNAT;
- val = 0;
+ opc = UNW_INSN_SETNAT_MEMSTK;
break;
default:
}
val = unw.preg_index[UNW_REG_R4 + (rval - 4)];
} else {
- opc = UNW_INSN_LOAD_SPREL;
- val = -sizeof(struct pt_regs);
- if (rval >= 1 && rval <= 3)
- val += struct_offset(struct pt_regs, r1) + 8*(rval - 1);
- else if (rval <= 11)
- val += struct_offset(struct pt_regs, r8) + 8*(rval - 8);
- else if (rval <= 15)
- val += struct_offset(struct pt_regs, r12) + 8*(rval - 12);
- else if (rval <= 31)
- val += struct_offset(struct pt_regs, r16) + 8*(rval - 16);
- else
- dprintk("unwind: bad scratch reg r%lu\n", rval);
+ opc = UNW_INSN_ADD_SP;
+ val = -sizeof(struct pt_regs) + pt_regs_off(rval);
}
break;
else if (rval >= 16 && rval <= 31)
val = unw.preg_index[UNW_REG_F16 + (rval - 16)];
else {
- opc = UNW_INSN_LOAD_SPREL;
+ opc = UNW_INSN_ADD_SP;
val = -sizeof(struct pt_regs);
if (rval <= 9)
val += struct_offset(struct pt_regs, f6) + 16*(rval - 6);
if (rval >= 1 && rval <= 5)
val = unw.preg_index[UNW_REG_B1 + (rval - 1)];
else {
- opc = UNW_INSN_LOAD_SPREL;
+ opc = UNW_INSN_ADD_SP;
val = -sizeof(struct pt_regs);
if (rval == 0)
val += struct_offset(struct pt_regs, b0);
break;
case UNW_WHERE_SPREL:
- opc = UNW_INSN_LOAD_SPREL;
+ opc = UNW_INSN_ADD_SP;
break;
case UNW_WHERE_PSPREL:
- opc = UNW_INSN_LOAD_PSPREL;
+ opc = UNW_INSN_ADD_PSP;
break;
default:
script_emit(script, insn);
if (need_nat_info)
emit_nat_info(sr, i, script);
+
+ if (i == UNW_REG_PSP) {
+ /*
+ * info->psp must contain the _value_ of the previous
+			 * sp, not its save location. We get this by
+ * dereferencing the value we just stored in
+ * info->psp:
+ */
+ insn.opc = UNW_INSN_LOAD;
+ insn.dst = insn.val = unw.preg_index[UNW_REG_PSP];
+ script_emit(script, insn);
+ }
}
static inline struct unw_table_entry *
memset(&sr, 0, sizeof(sr));
for (r = sr.curr.reg; r < sr.curr.reg + UNW_NUM_REGS; ++r)
r->when = UNW_WHEN_NEVER;
- sr.pr_val = info->pr_val;
+ sr.pr_val = info->pr;
script = script_new(ip);
if (!script) {
}
#if UNW_DEBUG
- printk ("unwind: state record for func 0x%lx, t=%u:\n",
- table->segment_base + e->start_offset, sr.when_target);
+ printk("unwind: state record for func 0x%lx, t=%u:\n",
+ table->segment_base + e->start_offset, sr.when_target);
for (r = sr.curr.reg; r < sr.curr.reg + UNW_NUM_REGS; ++r) {
if (r->where != UNW_WHERE_NONE || r->when != UNW_WHEN_NEVER) {
printk(" %s <- ", unw.preg_name[r - sr.curr.reg]);
break;
default: printk("BADWHERE(%d)", r->where); break;
}
- printk ("\t\t%d\n", r->when);
+ printk("\t\t%d\n", r->when);
}
}
#endif
/* translate state record into unwinder instructions: */
- if (sr.curr.reg[UNW_REG_PSP].where == UNW_WHERE_NONE
- && sr.when_target > sr.curr.reg[UNW_REG_PSP].when && sr.curr.reg[UNW_REG_PSP].val != 0)
- {
+ /*
+ * First, set psp if we're dealing with a fixed-size frame;
+ * subsequent instructions may depend on this value.
+ */
+ if (sr.when_target > sr.curr.reg[UNW_REG_PSP].when
+ && (sr.curr.reg[UNW_REG_PSP].where == UNW_WHERE_NONE)
+ && sr.curr.reg[UNW_REG_PSP].val != 0) {
/* new psp is sp plus frame size */
insn.opc = UNW_INSN_ADD;
- insn.dst = unw.preg_index[UNW_REG_PSP];
- insn.val = sr.curr.reg[UNW_REG_PSP].val;
+ insn.dst = struct_offset(struct unw_frame_info, psp)/8;
+ insn.val = sr.curr.reg[UNW_REG_PSP].val; /* frame size */
script_emit(script, insn);
}
val);
break;
- case UNW_INSN_LOAD_PSPREL:
+ case UNW_INSN_ADD_PSP:
s[dst] = state->psp + val;
break;
- case UNW_INSN_LOAD_SPREL:
+ case UNW_INSN_ADD_SP:
s[dst] = state->sp + val;
break;
- case UNW_INSN_SETNAT_PRI_UNAT:
- if (!state->pri_unat)
- state->pri_unat = &state->sw->caller_unat;
- s[dst+1] = ((*state->pri_unat - s[dst]) << 32) | UNW_NAT_PRI_UNAT;
+ case UNW_INSN_SETNAT_MEMSTK:
+ if (!state->pri_unat_loc)
+ state->pri_unat_loc = &state->sw->ar_unat;
+		/* register offset is a multiple of 8, so the least-significant 3 bits (type) are 0 */
+ s[dst+1] = (*state->pri_unat_loc - s[dst]) | UNW_NAT_MEMSTK;
break;
case UNW_INSN_SETNAT_TYPE:
s[dst+1] = val;
break;
+
+ case UNW_INSN_LOAD:
+#if UNW_DEBUG
+ if ((s[val] & (my_cpu_data.unimpl_va_mask | 0x7)) || s[val] < TASK_SIZE) {
+ debug(1, "unwind: rejecting bad psp=0x%lx\n", s[val]);
+ break;
+ }
+#endif
+ s[dst] = *(unsigned long *) s[val];
+ break;
}
}
STAT(unw.stat.script.run_time += ia64_get_itc() - start);
lazy_init:
off = unw.sw_off[val];
s[val] = (unsigned long) state->sw + off;
- if (off >= struct_offset (struct unw_frame_info, r4)
- && off <= struct_offset (struct unw_frame_info, r7))
+ if (off >= struct_offset(struct switch_stack, r4)
+ && off <= struct_offset(struct switch_stack, r7))
/*
- * We're initializing a general register: init NaT info, too. Note that we
- * rely on the fact that call_unat is the first field in struct switch_stack:
+ * We're initializing a general register: init NaT info, too. Note that
+ * the offset is a multiple of 8 which gives us the 3 bits needed for
+ * the type field.
*/
- s[val+1] = (-off << 32) | UNW_NAT_PRI_UNAT;
+ s[val+1] = (struct_offset(struct switch_stack, ar_unat) - off) | UNW_NAT_MEMSTK;
goto redo;
}
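Both SETNAT_MEMSTK above and the lazy-init path exploit the same encoding trick: the offset between two 8-byte-aligned save locations is itself a multiple of 8, so its low 3 bits are free to carry the NaT type tag. A hypothetical sketch of that packing (names invented for illustration):

```c
#include <assert.h>

enum nat_type { NAT_NONE, NAT_VAL, NAT_MEMSTK, NAT_REGSTK };

/* Pack an 8-byte-aligned word offset and a NaT type into one word:
 * the offset occupies the high bits, the type the low 3 bits. */
static unsigned long pack(long off, enum nat_type t)
{
	assert((off & 7) == 0);		/* offset must be 8-byte aligned */
	return (unsigned long) off | (unsigned long) t;
}

static enum nat_type unpack_type(unsigned long word)
{
	return (enum nat_type) (word & 7);
}

static long unpack_off(unsigned long word)
{
	return (long) (word & ~7UL);
}
```

This is why the patch can drop the separate shifted encoding (`(... << 32) | type`): alignment alone guarantees the two fields never collide.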
int have_write_lock = 0;
struct unw_script *scr;
- if ((info->ip & (my_cpu_data.unimpl_va_mask | 0xf)) || rgn_index(info->ip) != RGN_KERNEL)
- {
+ if ((info->ip & (my_cpu_data.unimpl_va_mask | 0xf)) || info->ip < TASK_SIZE) {
/* don't let obviously bad addresses pollute the cache */
debug(1, "unwind: rejecting bad ip=0x%lx\n", info->ip);
- info->rp = 0;
+ info->rp_loc = 0;
return -1;
}
prev_bsp = info->bsp;
/* restore the ip */
- if (!info->rp) {
+ if (!info->rp_loc) {
debug(1, "unwind: failed to locate return link (ip=0x%lx)!\n", info->ip);
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
return -1;
}
- ip = info->ip = *info->rp;
+ ip = info->ip = *info->rp_loc;
if (ip < GATE_ADDR + PAGE_SIZE) {
/*
* We don't have unwind info for the gate page, so we consider that part
}
/* restore the cfm: */
- if (!info->pfs) {
+ if (!info->pfs_loc) {
dprintk("unwind: failed to locate ar.pfs!\n");
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
return -1;
}
- info->cfm = info->pfs;
+ info->cfm_loc = info->pfs_loc;
/* restore the bsp: */
- pr = info->pr_val;
+ pr = info->pr;
num_regs = 0;
if ((info->flags & UNW_FLAG_INTERRUPT_FRAME)) {
if ((pr & (1UL << pNonSys)) != 0)
- num_regs = *info->cfm & 0x7f; /* size of frame */
- info->pfs =
+ num_regs = *info->cfm_loc & 0x7f; /* size of frame */
+ info->pfs_loc =
(unsigned long *) (info->sp + 16 + struct_offset(struct pt_regs, ar_pfs));
} else
- num_regs = (*info->cfm >> 7) & 0x7f; /* size of locals */
+ num_regs = (*info->cfm_loc >> 7) & 0x7f; /* size of locals */
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -num_regs);
if (info->bsp < info->regstk.limit || info->bsp > info->regstk.top) {
dprintk("unwind: bsp (0x%lx) out of range [0x%lx-0x%lx]\n",
info->sp = info->psp;
if (info->sp < info->memstk.top || info->sp > info->memstk.limit) {
dprintk("unwind: sp (0x%lx) out of range [0x%lx-0x%lx]\n",
- info->sp, info->regstk.top, info->regstk.limit);
+ info->sp, info->memstk.top, info->memstk.limit);
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
return -1;
}
return -1;
}
+ /* as we unwind, the saved ar.unat becomes the primary unat: */
+ info->pri_unat_loc = info->unat_loc;
+
/* finally, restore the predicates: */
- unw_get_pr(info, &info->pr_val);
+ unw_get_pr(info, &info->pr);
retval = find_save_locs(info);
STAT(unw.stat.api.unwind_time += ia64_get_itc() - start; local_irq_restore(flags));
info->task = t;
info->sw = sw;
info->sp = info->psp = (unsigned long) (sw + 1) - 16;
- info->cfm = &sw->ar_pfs;
- sol = (*info->cfm >> 7) & 0x7f;
+ info->cfm_loc = &sw->ar_pfs;
+ sol = (*info->cfm_loc >> 7) & 0x7f;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
info->ip = sw->b0;
- info->pr_val = sw->pr;
+ info->pr = sw->pr;
find_save_locs(info);
STAT(unw.stat.api.init_time += ia64_get_itc() - start; local_irq_restore(flags));
info->regstk.top = top;
info->sw = sw;
info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
- info->cfm = &sw->ar_pfs;
+ info->cfm_loc = &sw->ar_pfs;
info->ip = sw->b0;
#endif
}
info->regstk.top = top;
info->sw = sw;
info->bsp = (unsigned long) ia64_rse_skip_regs(bsp, -sof);
- info->cfm = ®s->cr_ifs;
+ info->cfm_loc = ®s->cr_ifs;
info->ip = regs->cr_iip;
#endif
}
int
unw_unwind (struct unw_frame_info *info)
{
- unsigned long sol, cfm = *info->cfm;
+ unsigned long sol, cfm = *info->cfm_loc;
int is_nat;
sol = (cfm >> 7) & 0x7f; /* size of locals */
	/* reject obviously bad addresses */
return -1;
- info->cfm = ia64_rse_skip_regs((unsigned long *) info->bsp, sol - 1);
+ info->cfm_loc = ia64_rse_skip_regs((unsigned long *) info->bsp, sol - 1);
cfm = read_reg(info, sol - 1, &is_nat);
if (is_nat)
return -1;
if (prevt->next == table)
break;
if (!prevt) {
- dprintk("unwind: failed to find unwind table %p\n", table);
+ dprintk("unwind: failed to find unwind table %p\n", (void *) table);
spin_unlock_irqrestore(&unw.lock, flags);
return;
}
for (i = UNW_REG_F16, off = SW(F16); i <= UNW_REG_F31; ++i, off += 16)
unw.sw_off[unw.preg_index[i]] = off;
- unw.cache[0].coll_chain = -1;
- for (i = 1; i < UNW_CACHE_SIZE; ++i) {
- unw.cache[i].lru_chain = (i - 1);
+ for (i = 0; i < UNW_CACHE_SIZE; ++i) {
+ if (i > 0)
+ unw.cache[i].lru_chain = (i - 1);
unw.cache[i].coll_chain = -1;
unw.cache[i].lock = RW_LOCK_UNLOCKED;
}
enum unw_nat_type {
UNW_NAT_NONE, /* NaT not represented */
UNW_NAT_VAL, /* NaT represented by NaT value (fp reg) */
- UNW_NAT_PRI_UNAT, /* NaT value is in unat word at offset OFF */
- UNW_NAT_SCRATCH, /* NaT value is in scratch.pri_unat */
- UNW_NAT_STACKED /* NaT is in rnat */
+ UNW_NAT_MEMSTK, /* NaT value is in unat word at offset OFF */
+ UNW_NAT_REGSTK /* NaT is in rnat */
};
enum unw_insn_opcode {
UNW_INSN_ADD, /* s[dst] += val */
+ UNW_INSN_ADD_PSP, /* s[dst] = (s.psp + val) */
+ UNW_INSN_ADD_SP, /* s[dst] = (s.sp + val) */
UNW_INSN_MOVE, /* s[dst] = s[val] */
UNW_INSN_MOVE2, /* s[dst] = s[val]; s[dst+1] = s[val+1] */
UNW_INSN_MOVE_STACKED, /* s[dst] = ia64_rse_skip(*s.bsp, val) */
- UNW_INSN_LOAD_PSPREL, /* s[dst] = *(*s.psp + 8*val) */
- UNW_INSN_LOAD_SPREL, /* s[dst] = *(*s.sp + 8*val) */
- UNW_INSN_SETNAT_PRI_UNAT, /* s[dst+1].nat.type = PRI_UNAT;
+ UNW_INSN_SETNAT_MEMSTK, /* s[dst+1].nat.type = MEMSTK;
s[dst+1].nat.off = *s.pri_unat - s[dst] */
- UNW_INSN_SETNAT_TYPE /* s[dst+1].nat.type = val */
+ UNW_INSN_SETNAT_TYPE, /* s[dst+1].nat.type = val */
+ UNW_INSN_LOAD /* s[dst] = *s[val] */
};
struct unw_insn {
#
.S.o:
- $(CC) $(AFLAGS) -c $< -o $@
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c $< -o $@
L_TARGET = lib.a
-L_OBJS = __divdi3.o __udivdi3.o __moddi3.o __umoddi3.o \
- checksum.o clear_page.o csum_partial_copy.o copy_page.o \
- copy_user.o clear_user.o memcpy.o memset.o strncpy_from_user.o \
- strlen.o strlen_user.o strnlen_user.o \
+L_OBJS = __divsi3.o __udivsi3.o __modsi3.o __umodsi3.o \
+ __divdi3.o __udivdi3.o __moddi3.o __umoddi3.o \
+ checksum.o clear_page.o csum_partial_copy.o copy_page.o \
+ copy_user.o clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o \
flush.o do_csum.o
+ifneq ($(CONFIG_ITANIUM_ASTEP_SPECIFIC),y)
+ L_OBJS += memcpy.o memset.o strlen.o
+endif
+
LX_OBJS = io.o
-IGNORE_FLAGS_OBJS = __divdi3.o __udivdi3.o __moddi3.o __umoddi3.o
+IGNORE_FLAGS_OBJS = __divsi3.o __udivsi3.o __modsi3.o __umodsi3.o \
+ __divdi3.o __udivdi3.o __moddi3.o __umoddi3.o
-include $(TOPDIR)/Rules.make
+$(L_TARGET):
+
+__divdi3.o: idiv64.S
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $@ $<
+
+__udivdi3.o: idiv64.S
+	$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -DUNSIGNED -c -o $@ $<
-__divdi3.o: idiv.S
- $(CC) $(AFLAGS) -c -o $@ $<
+__moddi3.o: idiv64.S
+	$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -DMODULO -c -o $@ $<
-__udivdi3.o: idiv.S
- $(CC) $(AFLAGS) -c -DUNSIGNED -c -o $@ $<
+__umoddi3.o: idiv64.S
+	$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -DMODULO -DUNSIGNED -c -o $@ $<
-__moddi3.o: idiv.S
- $(CC) $(AFLAGS) -c -DMODULO -c -o $@ $<
+__divsi3.o: idiv32.S
+ $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $@ $<
-__umoddi3.o: idiv.S
- $(CC) $(AFLAGS) -c -DMODULO -DUNSIGNED -c -o $@ $<
+__udivsi3.o: idiv32.S
+	$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -DUNSIGNED -c -o $@ $<
+
+__modsi3.o: idiv32.S
+	$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -DMODULO -c -o $@ $<
+
+__umodsi3.o: idiv32.S
+	$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -DMODULO -DUNSIGNED -c -o $@ $<
+
+include $(TOPDIR)/Rules.make
+++ /dev/null
-/*
- * Integer division routine.
- *
- * Copyright (C) 1999-2000 Hewlett-Packard Co
- * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
- */
-
-#include <asm/asmmacro.h>
-
-/*
- * Compute a 64-bit unsigned integer quotient.
- *
- * Use reciprocal approximation and Newton-Raphson iteration to compute the
- * quotient. frcpa gives 8.6 significant bits, so we need 3 iterations
- * to get more than the 64 bits of precision that we need for DImode.
- *
- * Must use max precision for the reciprocal computations to get 64 bits of
- * precision.
- *
- * r32 holds the dividend. r33 holds the divisor.
- */
-
-#ifdef MODULO
-# define OP mod
-#else
-# define OP div
-#endif
-
-#ifdef UNSIGNED
-# define SGN u
-# define INT_TO_FP(a,b) fcvt.xuf.s1 a=b
-# define FP_TO_INT(a,b) fcvt.fxu.trunc.s1 a=b
-#else
-# define SGN
-# define INT_TO_FP(a,b) fcvt.xf a=b
-# define FP_TO_INT(a,b) fcvt.fx.trunc.s1 a=b
-#endif
-
-#define PASTE1(a,b) a##b
-#define PASTE(a,b) PASTE1(a,b)
-#define NAME PASTE(PASTE(__,SGN),PASTE(OP,di3))
-
-GLOBAL_ENTRY(NAME)
- UNW(.prologue)
- .regstk 2,0,0,0
- // Transfer inputs to FP registers.
- setf.sig f8 = in0
- setf.sig f9 = in1
- UNW(.fframe 16)
- UNW(.save.f 0x20)
- stf.spill [sp] = f17,-16
-
- // Convert the inputs to FP, to avoid FP software-assist faults.
- INT_TO_FP(f8, f8)
- ;;
-
- UNW(.save.f 0x10)
- stf.spill [sp] = f16
- UNW(.body)
- INT_TO_FP(f9, f9)
- ;;
- frcpa.s1 f17, p6 = f8, f9 // y = frcpa(b)
- ;;
- /*
- * This is the magic algorithm described in Section 8.6.2 of "IA-64
- * and Elementary Functions" by Peter Markstein; HP Professional Books
- * (http://www.hp.com/go/retailbooks/)
- */
-(p6) fmpy.s1 f7 = f8, f17 // q = a*y
-(p6) fnma.s1 f6 = f9, f17, f1 // e = -b*y + 1
- ;;
-(p6) fma.s1 f16 = f7, f6, f7 // q1 = q*e + q
-(p6) fmpy.s1 f7 = f6, f6 // e1 = e*e
- ;;
-(p6) fma.s1 f16 = f16, f7, f16 // q2 = q1*e1 + q1
-(p6) fma.s1 f6 = f17, f6, f17 // y1 = y*e + y
- ;;
-(p6) fma.s1 f6 = f6, f7, f6 // y2 = y1*e1 + y1
-(p6) fnma.s1 f7 = f9, f16, f8 // r = -b*q2 + a
- ;;
-(p6) fma.s1 f17 = f7, f6, f16 // q3 = r*y2 + q2
- ;;
-#ifdef MODULO
- FP_TO_INT(f17, f17) // round quotient to an unsigned integer
- ;;
- INT_TO_FP(f17, f17) // renormalize
- ;;
- fnma.s1 f17 = f17, f9, f8 // compute remainder
- ;;
-#endif
- UNW(.restore sp)
- ldf.fill f16 = [sp], 16
- FP_TO_INT(f8, f17) // round result to an (unsigned) integer
- ;;
- ldf.fill f17 = [sp]
- getf.sig r8 = f8 // transfer result to result register
- br.ret.sptk rp
-END(NAME)
--- /dev/null
+/*
+ * Copyright (C) 2000 Hewlett-Packard Co
+ * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 32-bit integer division.
+ *
+ * This code is based on the application note entitled "Divide, Square Root
+ * and Remainder Algorithms for the IA-64 Architecture". This document
+ * is available as Intel document number 248725-002 or via the web at
+ * http://developer.intel.com/software/opensource/numerics/
+ *
+ * For more details on the theory behind these algorithms, see "IA-64
+ * and Elementary Functions" by Peter Markstein; HP Professional Books
+ * (http://www.hp.com/go/retailbooks/)
+ */
+
+#include <asm/asmmacro.h>
+
+#ifdef MODULO
+# define OP mod
+#else
+# define OP div
+#endif
+
+#ifdef UNSIGNED
+# define SGN u
+# define EXTEND zxt4
+# define INT_TO_FP(a,b) fcvt.xuf.s1 a=b
+# define FP_TO_INT(a,b) fcvt.fxu.trunc.s1 a=b
+#else
+# define SGN
+# define EXTEND sxt4
+# define INT_TO_FP(a,b) fcvt.xf a=b
+# define FP_TO_INT(a,b) fcvt.fx.trunc.s1 a=b
+#endif
+
+#define PASTE1(a,b) a##b
+#define PASTE(a,b) PASTE1(a,b)
+#define NAME PASTE(PASTE(__,SGN),PASTE(OP,si3))
+
+GLOBAL_ENTRY(NAME)
+ .regstk 2,0,0,0
+ // Transfer inputs to FP registers.
+ mov r2 = 0xffdd // r2 = -34 + 65535 (fp reg format bias)
+ EXTEND in0 = in0 // in0 = a
+ EXTEND in1 = in1 // in1 = b
+ ;;
+ setf.sig f8 = in0
+ setf.sig f9 = in1
+#ifdef MODULO
+ sub in1 = r0, in1 // in1 = -b
+#endif
+ ;;
+ // Convert the inputs to FP, to avoid FP software-assist faults.
+ INT_TO_FP(f8, f8)
+ INT_TO_FP(f9, f9)
+ ;;
+ setf.exp f7 = r2 // f7 = 2^-34
+ frcpa.s1 f6, p6 = f8, f9 // y0 = frcpa(b)
+ ;;
+(p6) fmpy.s1 f8 = f8, f6 // q0 = a*y0
+(p6) fnma.s1 f6 = f9, f6, f1 // e0 = -b*y0 + 1
+ ;;
+#ifdef MODULO
+ setf.sig f9 = in1 // f9 = -b
+#endif
+(p6) fma.s1 f8 = f6, f8, f8 // q1 = e0*q0 + q0
+(p6) fma.s1 f6 = f6, f6, f7 // e1 = e0*e0 + 2^-34
+ ;;
+#ifdef MODULO
+ setf.sig f7 = in0
+#endif
+(p6) fma.s1 f6 = f6, f8, f8 // q2 = e1*q1 + q1
+ ;;
+ FP_TO_INT(f6, f6) // q = trunc(q2)
+ ;;
+#ifdef MODULO
+ xma.l f6 = f6, f9, f7 // r = q*(-b) + a
+ ;;
+#endif
+ getf.sig r8 = f6 // transfer result to result register
+ br.ret.sptk rp
+END(NAME)
--- /dev/null
+/*
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 64-bit integer division.
+ *
+ * This code is based on the application note entitled "Divide, Square Root
+ * and Remainder Algorithms for the IA-64 Architecture". This document
+ * is available as Intel document number 248725-002 or via the web at
+ * http://developer.intel.com/software/opensource/numerics/
+ *
+ * For more details on the theory behind these algorithms, see "IA-64
+ * and Elementary Functions" by Peter Markstein; HP Professional Books
+ * (http://www.hp.com/go/retailbooks/)
+ */
+
+#include <asm/asmmacro.h>
+
+#ifdef MODULO
+# define OP mod
+#else
+# define OP div
+#endif
+
+#ifdef UNSIGNED
+# define SGN u
+# define INT_TO_FP(a,b) fcvt.xuf.s1 a=b
+# define FP_TO_INT(a,b) fcvt.fxu.trunc.s1 a=b
+#else
+# define SGN
+# define INT_TO_FP(a,b) fcvt.xf a=b
+# define FP_TO_INT(a,b) fcvt.fx.trunc.s1 a=b
+#endif
+
+#define PASTE1(a,b) a##b
+#define PASTE(a,b) PASTE1(a,b)
+#define NAME PASTE(PASTE(__,SGN),PASTE(OP,di3))
+
+GLOBAL_ENTRY(NAME)
+ UNW(.prologue)
+ .regstk 2,0,0,0
+ // Transfer inputs to FP registers.
+ setf.sig f8 = in0
+ setf.sig f9 = in1
+ UNW(.fframe 16)
+ UNW(.save.f 0x20)
+ stf.spill [sp] = f17,-16
+
+ // Convert the inputs to FP, to avoid FP software-assist faults.
+ INT_TO_FP(f8, f8)
+ ;;
+
+ UNW(.save.f 0x10)
+ stf.spill [sp] = f16
+ UNW(.body)
+ INT_TO_FP(f9, f9)
+ ;;
+ frcpa.s1 f17, p6 = f8, f9 // y0 = frcpa(b)
+ ;;
+(p6) fmpy.s1 f7 = f8, f17 // q0 = a*y0
+(p6) fnma.s1 f6 = f9, f17, f1 // e0 = -b*y0 + 1
+ ;;
+(p6) fma.s1 f16 = f7, f6, f7 // q1 = q0*e0 + q0
+(p6) fmpy.s1 f7 = f6, f6 // e1 = e0*e0
+ ;;
+#ifdef MODULO
+ sub in1 = r0, in1 // in1 = -b
+#endif
+(p6) fma.s1 f16 = f16, f7, f16 // q2 = q1*e1 + q1
+(p6) fma.s1 f6 = f17, f6, f17 // y1 = y0*e0 + y0
+ ;;
+(p6) fma.s1 f6 = f6, f7, f6 // y2 = y1*e1 + y1
+(p6) fnma.s1 f7 = f9, f16, f8 // r = -b*q2 + a
+ ;;
+#ifdef MODULO
+ setf.sig f8 = in0 // f8 = a
+ setf.sig f9 = in1 // f9 = -b
+#endif
+(p6) fma.s1 f17 = f7, f6, f16 // q3 = r*y2 + q2
+ ;;
+ UNW(.restore sp)
+ ldf.fill f16 = [sp], 16
+ FP_TO_INT(f17, f17) // q = trunc(q3)
+ ;;
+#ifdef MODULO
+ xma.l f17 = f17, f9, f8 // r = q*(-b) + a
+ ;;
+#endif
+ getf.sig r8 = f17 // transfer result to result register
+ ldf.fill f17 = [sp]
+ br.ret.sptk rp
+END(NAME)
-#include <linux/module.h>
#include <linux/types.h>
#include <asm/io.h>
}
}
-EXPORT_SYMBOL(__ia64_memcpy_fromio);
-EXPORT_SYMBOL(__ia64_memcpy_toio);
-EXPORT_SYMBOL(__ia64_memset_c_io);
panic("mm/init: overlap between virtually mapped linear page table and "
"mapped kernel space!");
pta = POW2(61) - POW2(impl_va_msb);
+#ifndef CONFIG_DISABLE_VHPT
/*
* Set the (virtually mapped linear) page table address. Bit
* 8 selects between the short and long format, bits 2-7 the
* enabled.
*/
ia64_set_pta(pta | (0<<8) | ((3*(PAGE_SHIFT-3)+3)<<2) | 1);
+#else
+ ia64_set_pta(pta | (0<<8) | ((3*(PAGE_SHIFT-3)+3)<<2) | 0);
+#endif
}
/*
/* install the gate page in the global page table: */
put_gate_page(virt_to_page(__start_gate_section), GATE_ADDR);
-#ifndef CONFIG_IA64_SOFTSDV_HACKS
- /*
- * (Some) SoftSDVs seem to have a problem with this call.
- * Since it's mostly a performance optimization, just don't do
- * it for now... --davidm 99/12/6
- */
- efi_enter_virtual_mode();
-#endif
-
#ifdef CONFIG_IA32_SUPPORT
ia32_gdt_init();
#endif
endif
-.PHONY: all modules
+.PHONY: all modules modules_install
-# $Id: config.in,v 1.102 2000/08/23 05:59:28 davem Exp $
+# $Id: config.in,v 1.104 2000/10/04 09:01:38 anton Exp $
# For a description of the syntax of this configuration file,
# see Documentation/kbuild/config-language.txt.
#
# bool ' LVM information in proc filesystem' CONFIG_LVM_PROC_FS Y
#fi
-include drivers/md/Config.in
+source drivers/md/Config.in
tristate 'RAM disk support' CONFIG_BLK_DEV_RAM
if [ "$CONFIG_BLK_DEV_RAM" = "y" -o "$CONFIG_BLK_DEV_RAM" = "m" ]; then
-/* $Id: ebus.c,v 1.10 2000/06/20 01:10:00 anton Exp $
+/* $Id: ebus.c,v 1.11 2000/10/10 01:07:38 davem Exp $
* ebus.c: PCI to EBus bridge device.
*
* Copyright (C) 1997 Eddie C. Dost (ecd@skynet.be)
#ifdef CONFIG_SUN_OPENPROMIO
extern int openprom_init(void);
#endif
-#ifdef CONFIG_SPARCAUDIO
-extern int sparcaudio_init(void);
-#endif
#ifdef CONFIG_SUN_AUXIO
extern void auxio_probe(void);
#endif
openprom_init();
#endif
-#ifdef CONFIG_SPARCAUDIO
- sparcaudio_init();
-#endif
#ifdef CONFIG_SUN_BPP
bpp_init();
#endif
-/* $Id: process.c,v 1.153 2000/09/06 00:45:01 davem Exp $
+/* $Id: process.c,v 1.154 2000/10/05 06:12:57 anton Exp $
* linux/arch/sparc/kernel/process.c
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
void smp_show_backtrace_all_cpus(void)
{
xc0((smpfunc_t) show_backtrace);
+ show_backtrace();
}
#endif
void smp_flush_cache_all(void)
{
xc0((smpfunc_t) BTFIXUP_CALL(local_flush_cache_all));
+ local_flush_cache_all();
}
void smp_flush_tlb_all(void)
{
xc0((smpfunc_t) BTFIXUP_CALL(local_flush_tlb_all));
+ local_flush_tlb_all();
}
void smp_flush_cache_mm(struct mm_struct *mm)
{
if(mm->context != NO_CONTEXT) {
- if(mm->cpu_vm_mask == (1 << smp_processor_id()))
- local_flush_cache_mm(mm);
- else
+ if(mm->cpu_vm_mask != (1 << smp_processor_id()))
xc1((smpfunc_t) BTFIXUP_CALL(local_flush_cache_mm), (unsigned long) mm);
+ local_flush_cache_mm(mm);
}
}
void smp_flush_tlb_mm(struct mm_struct *mm)
{
if(mm->context != NO_CONTEXT) {
- if(mm->cpu_vm_mask == (1 << smp_processor_id())) {
- local_flush_tlb_mm(mm);
- } else {
+ if(mm->cpu_vm_mask != (1 << smp_processor_id())) {
xc1((smpfunc_t) BTFIXUP_CALL(local_flush_tlb_mm), (unsigned long) mm);
if(atomic_read(&mm->mm_users) == 1 && current->active_mm == mm)
mm->cpu_vm_mask = (1 << smp_processor_id());
}
+ local_flush_tlb_mm(mm);
}
}
unsigned long end)
{
if(mm->context != NO_CONTEXT) {
- if(mm->cpu_vm_mask == (1 << smp_processor_id()))
- local_flush_cache_range(mm, start, end);
- else
+ if(mm->cpu_vm_mask != (1 << smp_processor_id()))
xc3((smpfunc_t) BTFIXUP_CALL(local_flush_cache_range), (unsigned long) mm, start, end);
+ local_flush_cache_range(mm, start, end);
}
}
unsigned long end)
{
if(mm->context != NO_CONTEXT) {
- if(mm->cpu_vm_mask == (1 << smp_processor_id()))
- local_flush_tlb_range(mm, start, end);
- else
+ if(mm->cpu_vm_mask != (1 << smp_processor_id()))
xc3((smpfunc_t) BTFIXUP_CALL(local_flush_tlb_range), (unsigned long) mm, start, end);
+ local_flush_tlb_range(mm, start, end);
}
}
struct mm_struct *mm = vma->vm_mm;
if(mm->context != NO_CONTEXT) {
- if(mm->cpu_vm_mask == (1 << smp_processor_id()))
- local_flush_cache_page(vma, page);
- else
+ if(mm->cpu_vm_mask != (1 << smp_processor_id()))
xc2((smpfunc_t) BTFIXUP_CALL(local_flush_cache_page), (unsigned long) vma, page);
+ local_flush_cache_page(vma, page);
}
}
struct mm_struct *mm = vma->vm_mm;
if(mm->context != NO_CONTEXT) {
- if(mm->cpu_vm_mask == (1 << smp_processor_id()))
- local_flush_tlb_page(vma, page);
- else
+ if(mm->cpu_vm_mask != (1 << smp_processor_id()))
xc2((smpfunc_t) BTFIXUP_CALL(local_flush_tlb_page), (unsigned long) vma, page);
+ local_flush_tlb_page(vma, page);
}
}
*/
#if 1
xc1((smpfunc_t) BTFIXUP_CALL(local_flush_page_to_ram), page);
-#else
- local_flush_page_to_ram(page);
#endif
+ local_flush_page_to_ram(page);
}
void smp_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr)
{
- if(mm->cpu_vm_mask == (1 << smp_processor_id()))
- local_flush_sig_insns(mm, insn_addr);
- else
+ if(mm->cpu_vm_mask != (1 << smp_processor_id()))
xc2((smpfunc_t) BTFIXUP_CALL(local_flush_sig_insns), (unsigned long) mm, insn_addr);
+ local_flush_sig_insns(mm, insn_addr);
}
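The rewritten sparc flush helpers above all share one shape: the predicate now only decides whether a cross-call goes out to the other CPUs, and the local flush runs unconditionally afterwards instead of being duplicated in an `else` branch. A user-space model of that control flow (`flush_mm_model`, the counters, and the masks are illustrative, not kernel API):

```c
#include <assert.h>

static int remote_flushes;   /* stands in for the xc1(...) cross-call */
static int local_flushes;    /* stands in for local_flush_*_mm(...)   */

/* cpu_vm_mask: bitmask of CPUs that have touched this mm.
 * self: bitmask with only the current CPU's bit set. */
static void flush_mm_model(unsigned int cpu_vm_mask, unsigned int self)
{
    if (cpu_vm_mask != self)
        remote_flushes++;    /* other CPUs share the mm: cross-call */
    local_flushes++;         /* the local flush always happens      */
}
```

This also explains the matching cross_call() change: the callers now run the local copy themselves, so the "first, run local copy" code inside cross_call() is deleted rather than moved.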
/* Reschedule call back. */
-/* $Id: sparc-stub.c,v 1.26 1999/12/27 06:08:34 anton Exp $
+/* $Id: sparc-stub.c,v 1.27 2000/10/03 07:28:49 anton Exp $
* sparc-stub.c: KGDB support for the Linux kernel.
*
* Modifications to run under Linux
}
}
- /* First, run local copy. */
- func(arg1, arg2, arg3, arg4, arg5);
-
{
register int i;
}
spin_unlock_irqrestore(&cross_call_lock, flags);
- } else {
- func(arg1, arg2, arg3, arg4, arg5); /* Just need to run local copy. */
}
}
}
}
- /* First, run local copy. */
- func(arg1, arg2, arg3, arg4, arg5);
-
{
register int i;
}
spin_unlock_irqrestore(&cross_call_lock, flags);
- } else {
- func(arg1, arg2, arg3, arg4, arg5); /* Just need to run local copy. */
}
}
-# $Id: config.in,v 1.121 2000/08/23 05:59:28 davem Exp $
+# $Id: config.in,v 1.125 2000/10/10 01:05:53 davem Exp $
# For a description of the syntax of this configuration file,
# see the Configure script.
#
tristate 'Loopback device support' CONFIG_BLK_DEV_LOOP
dep_tristate 'Network block device support' CONFIG_BLK_DEV_NBD $CONFIG_NET
-#tristate 'Logical volume manager (LVM) support' CONFIG_BLK_DEV_LVM
-#if [ "$CONFIG_BLK_DEV_LVM" != "n" ]; then
-# bool ' LVM information in proc filesystem' CONFIG_LVM_PROC_FS
-#fi
-
-include drivers/md/Config.in
+source drivers/md/Config.in
tristate 'RAM disk support' CONFIG_BLK_DEV_RAM
if [ "$CONFIG_BLK_DEV_RAM" = "y" -o "$CONFIG_BLK_DEV_RAM" = "m" ]; then
CONFIG_BLK_DEV_FD=y
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_NBD=m
-CONFIG_BLK_DEV_MD=m
-CONFIG_MD_LINEAR=m
-CONFIG_MD_RAID0=m
-CONFIG_MD_RAID1=m
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+# CONFIG_BLK_DEV_MD is not set
+# CONFIG_MD_LINEAR is not set
+# CONFIG_MD_RAID0 is not set
+# CONFIG_MD_RAID1 is not set
+# CONFIG_MD_RAID5 is not set
+# CONFIG_BLK_DEV_LVM is not set
+# CONFIG_LVM_PROC_FS is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_BLK_DEV_INITRD is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_LLC is not set
+# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_NET_FASTROUTE is not set
# CONFIG_BLK_DEV_SIS5513 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
-# CONFIG_VIA82CXXX_TUNING is not set
# CONFIG_IDE_CHIPSETS is not set
CONFIG_IDEDMA_AUTO=y
# CONFIG_IDEDMA_IVB is not set
-/* $Id: pci.c,v 1.17 2000/09/05 06:49:44 anton Exp $
+/* $Id: pci.c,v 1.18 2000/10/03 11:31:42 anton Exp $
* pci.c: UltraSparc PCI controller support.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@redhat.com)
-/* $Id: sys_sparc32.c,v 1.164 2000/09/14 10:42:47 davem Exp $
+/* $Id: sys_sparc32.c,v 1.165 2000/10/10 04:47:31 davem Exp $
* sys_sparc32.c: Conversion between 32bit and 64bit native syscalls.
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
/*
* count32() counts the number of arguments/envelopes
*/
-static int count32(u32 * argv)
+static int count32(u32 * argv, int max)
{
int i = 0;
u32 p; int error;
error = get_user(p,argv);
- if (error) return error;
- if (!p) break;
- argv++; i++;
+ if (error)
+ return error;
+ if (!p)
+ break;
+ argv++;
+ if (++i > max)
+ return -E2BIG;
}
}
return i;
bprm.sh_bang = 0;
bprm.loader = 0;
bprm.exec = 0;
- if ((bprm.argc = count32(argv)) < 0) {
+ if ((bprm.argc = count32(argv, bprm.p / sizeof(u32))) < 0) {
allow_write_access(file);
fput(file);
return bprm.argc;
}
- if ((bprm.envc = count32(envp)) < 0) {
+ if ((bprm.envc = count32(envp, bprm.p / sizeof(u32))) < 0) {
allow_write_access(file);
fput(file);
return bprm.envc;
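The new `max` parameter caps how far count32() will walk an unterminated argv before giving up with -E2BIG; the bound passed in, `bprm.p / sizeof(u32)`, is the most pointers that could fit in the argument pages anyway. A user-space model of the bounded loop (`count_bounded` and the local errno value are illustrative stand-ins):

```c
#include <assert.h>
#include <stddef.h>

#define MODEL_E2BIG 7   /* stand-in for the kernel's E2BIG errno */

/* Count NULL-terminated entries, refusing to scan past max. */
static int count_bounded(const char **argv, int max)
{
    int i = 0;

    while (argv[i]) {
        if (++i > max)
            return -MODEL_E2BIG;
    }
    return i;
}
```

Without the bound, a malicious 32-bit execve() could hand the kernel an argv with no terminating NULL and keep it counting indefinitely.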
if (nodma)
return (PIO_MODE);
- if (((u_int) buffer & 0xFFFF0000) != (((u_int) buffer + count) & 0xFFFF0000)) {
+ if (((unsigned long) buffer & 0xFFFF0000) != (((unsigned long) buffer + count) & 0xFFFF0000)) {
#ifdef DEBUG_OTHER
printk("xd_setup_dma: using PIO, transfer overlaps 64k boundary\n");
#endif /* DEBUG_OTHER */
disable_dma(xd_dma);
clear_dma_ff(xd_dma);
set_dma_mode(xd_dma,mode);
- set_dma_addr(xd_dma,(u_int) buffer);
+ set_dma_addr(xd_dma, (unsigned long) buffer);
set_dma_count(xd_dma,count);
release_dma_lock(f);
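The xd.c hunks swap `u_int` for `unsigned long` in the pointer casts: on a 64-bit kernel a pointer no longer fits in 32 bits, so the old cast could truncate the buffer address before the 64 KiB boundary test. A sketch of that test with the wider cast (`crosses_64k` is an illustrative name; the assumption is the usual Linux convention that `unsigned long` is pointer-sized):

```c
#include <assert.h>

/* ISA DMA cannot cross a 64 KiB boundary; compare the upper bits
 * of the start and end addresses, as xd_setup_dma does. */
static int crosses_64k(unsigned long buffer, unsigned int count)
{
    return (buffer & 0xFFFF0000UL) != ((buffer + count) & 0xFFFF0000UL);
}
```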
{
current->state = TASK_INTERRUPTIBLE;
schedule_timeout((ms * HZ + 1000 - HZ) / 1000);
- current->state = TASK_RUNNING;
}
static int dtlk_readable(void)
#ifdef CONFIG_I2C
extern int i2c_init_all(void);
#endif
-#ifdef CONFIG_SPARCAUDIO
-extern int sparcaudio_init(void);
-#endif
#ifdef CONFIG_ISDN
int isdn_init(void);
#endif
* 1.10a Andrea Arcangeli: Alpha updates
* 1.10b Andrew Morton: SMP lock fix
* 1.10c Cesar Barros: SMP locking fixes and cleanup
+ * 1.10d Paul Gortmaker: delete paranoia check in rtc_exit
*/
-#define RTC_VERSION "1.10c"
+#define RTC_VERSION "1.10d"
#define RTC_IO_EXTENT 0x10 /* Only really two ports, but... */
static void __exit rtc_exit (void)
{
- /* interrupts and maybe timer disabled at this point by rtc_release */
- /* FIXME: Maybe??? */
-
- if (rtc_status & RTC_TIMER_ON) {
- spin_lock_irq (&rtc_lock);
- rtc_status &= ~RTC_TIMER_ON;
- del_timer(&rtc_irq_timer);
- spin_unlock_irq (&rtc_lock);
-
- printk(KERN_WARNING "rtc_exit(), and timer still running.\n");
- }
-
remove_proc_entry ("driver/rtc", NULL);
misc_deregister(&rtc_dev);
}
}
- set_current_state(TASK_RUNNING);
exit:
if (debug_level >= DEBUG_LEVEL_INFO)
printk("%s(%d):mgsl_wait_until_sent(%s) exit\n",
bool ' Support for non-IEEE1394 local ports' CONFIG_IEEE1394_PCILYNX_PORTS
fi
- dep_tristate 'Adaptec AIC-5800 (AHA-89xx) support' CONFIG_IEEE1394_AIC5800 $CONFIG_IEEE1394
+# This driver is now unsupported:
+# dep_tristate 'Adaptec AIC-5800 (AHA-89xx) support' CONFIG_IEEE1394_AIC5800 $CONFIG_IEEE1394
dep_tristate 'OHCI (Open Host Controller Interface) support' CONFIG_IEEE1394_OHCI1394 $CONFIG_IEEE1394
dep_tristate 'Video1394 support' CONFIG_IEEE1394_VIDEO1394 $CONFIG_IEEE1394_OHCI1394
/*
+ * +++ THIS DRIVER IS ORPHANED AND UNSUPPORTED +++
+ *
* aic5800.c - Adaptec AIC-5800 PCI-IEEE1394 chip driver
* Copyright (C)1999 Emanuel Pirker <epirker@edu.uni-klu.ac.at>
*
return -ENXIO;
for (i = 0; netcard_portlist[i]; i++)
- {
- int ioaddr = netcard_portlist[i];
- if (check_region(ioaddr, EL1_IO_EXTENT))
- continue;
- if (el1_probe1(dev, ioaddr) == 0)
+ if (el1_probe1(dev, netcard_portlist[i]) == 0)
return 0;
- }
return -ENODEV;
}
else if (base_addr != 0)
return -ENXIO; /* Don't probe at all. */
- for (i = 0; netcard_portlist[i]; i++) {
- int ioaddr = netcard_portlist[i];
- if (check_region(ioaddr, EL16_IO_EXTENT))
- continue;
- if (el16_probe1(dev, ioaddr) == 0)
+ for (i = 0; netcard_portlist[i]; i++)
+ if (el16_probe1(dev, netcard_portlist[i]) == 0)
return 0;
- }
return -ENODEV;
}
static int __init el16_probe1(struct net_device *dev, int ioaddr)
{
static unsigned char init_ID_done = 0, version_printed = 0;
- int i, irq, irqval;
+ int i, irq, irqval, retval;
struct net_local *lp;
if (init_ID_done == 0) {
init_ID_done = 1;
}
- if (inb(ioaddr) == '*' && inb(ioaddr+1) == '3'
- && inb(ioaddr+2) == 'C' && inb(ioaddr+3) == 'O')
- ;
- else
+ if (!request_region(ioaddr, EL16_IO_EXTENT, "3c507"))
return -ENODEV;
+ if ((inb(ioaddr) != '*') || (inb(ioaddr + 1) != '3') ||
+ (inb(ioaddr + 2) != 'C') || (inb(ioaddr + 3) != 'O')) {
+ retval = -ENODEV;
+ goto out;
+ }
+
/* Allocate a new 'dev' if needed. */
if (dev == NULL)
- dev = init_etherdev(0, 0);
+ if (!(dev = init_etherdev(0, 0))) {
+ retval = -ENOMEM;
+ goto out;
+ }
if (net_debug && version_printed++ == 0)
printk(version);
irqval = request_irq(irq, &el16_interrupt, 0, "3c507", dev);
if (irqval) {
printk ("unable to get IRQ %d (irqval=%d).\n", irq, irqval);
- return -EAGAIN;
+ retval = -EAGAIN;
+ goto out;
}
/* We've committed to using the board, and can start filling in *dev. */
- request_region(ioaddr, EL16_IO_EXTENT, "3c507");
dev->base_addr = ioaddr;
outb(0x01, ioaddr + MISC_CTRL);
/* Initialize the device structure. */
lp = dev->priv = kmalloc(sizeof(struct net_local), GFP_KERNEL);
- if (dev->priv == NULL)
- return -ENOMEM;
+ if (dev->priv == NULL) {
+ retval = -ENOMEM;
+ goto out;
+ }
memset(dev->priv, 0, sizeof(struct net_local));
spin_lock_init(&lp->lock);
dev->flags&=~IFF_MULTICAST; /* Multicast doesn't work */
return 0;
+out:
+ release_region(ioaddr, EL16_IO_EXTENT);
+ return retval;
}
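The reworked el16_probe1() claims the I/O region before touching the ports and funnels every later failure through a single `out:` label that releases it, the standard acquire-then-unwind shape. A toy model of that flow (the `fake_*` helpers and error value are stand-ins, not kernel calls):

```c
#include <assert.h>

static int region_claimed;

static int  fake_request_region(void)  { region_claimed = 1; return 1; }
static void fake_release_region(void)  { region_claimed = 0; }

#define MODEL_ENODEV 19

/* Claim the region first; on any later failure, release it on the
 * way out instead of leaking the reservation. */
static int probe_model(int signature_ok)
{
    int retval;

    if (!fake_request_region())
        return -MODEL_ENODEV;        /* region already busy */

    if (!signature_ok) {
        retval = -MODEL_ENODEV;      /* no '*3CO' signature */
        goto out;
    }
    return 0;                        /* success: keep the region */
out:
    fake_release_region();
    return retval;
}
```

The old code did it the other way around, reading the signature from an unclaimed region and only calling request_region() after committing to the board, which races against other drivers probing the same ports.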
static int el16_open(struct net_device *dev)
tristate ' Mylex EISA LNE390A/B support (EXPERIMENTAL)' CONFIG_LNE390
fi
dep_tristate ' National Semiconductor DP83810 series PCI Ethernet support' CONFIG_NATSEMI $CONFIG_PCI
- dep_tristate ' PCI NE2000 support' CONFIG_NE2K_PCI $CONFIG_PCI
+ dep_tristate ' PCI NE2000 and clones support (see help)' CONFIG_NE2K_PCI $CONFIG_PCI
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
tristate ' Novell/Eagle/Microdyne NE3210 EISA support (EXPERIMENTAL)' CONFIG_NE3210
fi
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/netlink.h>
+#include <net/divert.h>
#define NEXT_DEV NULL
{
struct devprobe *p = plist;
unsigned long base_addr = dev->base_addr;
+#ifdef CONFIG_NET_DIVERT
+ int ret;
+#endif /* CONFIG_NET_DIVERT */
while (p->probe != NULL) {
- if (base_addr && p->probe(dev) == 0) /* probe given addr */
+ if (base_addr && p->probe(dev) == 0) { /* probe given addr */
+#ifdef CONFIG_NET_DIVERT
+ ret = alloc_divert_blk(dev);
+ if (ret)
+ return ret;
+#endif /* CONFIG_NET_DIVERT */
return 0;
- else if (p->status == 0) { /* has autoprobe failed yet? */
+ } else if (p->status == 0) { /* has autoprobe failed yet? */
p->status = p->probe(dev); /* no, try autoprobe */
- if (p->status == 0)
+ if (p->status == 0) {
+#ifdef CONFIG_NET_DIVERT
+ ret = alloc_divert_blk(dev);
+ if (ret)
+ return ret;
+#endif /* CONFIG_NET_DIVERT */
return 0;
+ }
}
p++;
}
return -ENXIO;
for (i = 0; netcard_portlist[i]; i++) {
- int ioaddr = netcard_portlist[i];
- if (check_region(ioaddr, NETCARD_IO_EXTENT))
- continue;
- if (cs89x0_probe1(dev, ioaddr) == 0)
+ if (cs89x0_probe1(dev, netcard_portlist[i]) == 0)
return 0;
}
printk(KERN_WARNING "cs89x0: no cs8900 or cs8920 detected. Be sure to disable PnP with SETUP\n");
/* Initialize the device structure. */
if (dev->priv == NULL) {
dev->priv = kmalloc(sizeof(struct net_local), GFP_KERNEL);
- if (dev->priv == 0)
- {
+ if (dev->priv == 0) {
retval = -ENOMEM;
goto out;
}
memset(lp, 0, sizeof(*lp));
spin_lock_init(&lp->lock);
#if !defined(MODULE) && (ALLOW_DMA != 0)
- if (g_cs89x0_dma)
- {
+ if (g_cs89x0_dma) {
lp->use_dma = 1;
lp->dma = g_cs89x0_dma;
lp->dmasize = 16; /* Could make this an option... */
}
lp = (struct net_local *)dev->priv;
+ /* Grab the region so we can find another board if autoIRQ fails. */
+ if (!request_region(ioaddr, NETCARD_IO_EXTENT, "cs89x0")) {
+ retval = -ENODEV;
+ goto out1;
+ }
+
/* if they give us an odd I/O address, then do ONE write to
the address port, to get it back to address zero, where we
expect to find the EISA signature word. An IO with a base of 0x3
will skip the test for the ADD_PORT. */
if (ioaddr & 1) {
if ((ioaddr & 2) != 2)
- if ((inw((ioaddr & ~3)+ ADD_PORT) & ADD_MASK) != ADD_SIG)
- return -ENODEV;
+ if ((inw((ioaddr & ~3)+ ADD_PORT) & ADD_MASK) != ADD_SIG) {
+ retval = -ENODEV;
+ goto out2;
+ }
ioaddr &= ~3;
outw(PP_ChipID, ioaddr + ADD_PORT);
}
- if (inw(ioaddr + DATA_PORT) != CHIP_EISA_ID_SIG)
- {
+ if (inw(ioaddr + DATA_PORT) != CHIP_EISA_ID_SIG) {
retval = -ENODEV;
- goto out1;
+ goto out2;
}
/* Fill in the 'dev' fields. */
printk(" IRQ %d", dev->irq);
#if ALLOW_DMA
- if (lp->use_dma)
- {
+ if (lp->use_dma) {
get_dma_channel(dev);
printk(", DMA %d", dev->dma);
}
printk("%s%02x", i ? ":" : "", dev->dev_addr[i]);
}
- /* Grab the region so we can find another board if autoIRQ fails. */
-
- /*
- * FIXME: we should check this, but really the isapnp stuff should have given
- * us a free region. Sort this out when the isapnp is sorted out
- */
- request_region(ioaddr, NETCARD_IO_EXTENT,"cs89x0");
-
dev->open = net_open;
dev->stop = net_close;
dev->tx_timeout = net_timeout;
if (net_debug)
printk("cs89x0_probe1() successful\n");
return 0;
+out2:
+ release_region(ioaddr, NETCARD_IO_EXTENT);
out1:
kfree(dev->priv);
dev->priv = 0;
{
struct net_local *lp = (struct net_local *)dev->priv;
- if (lp->use_dma)
- {
- if ((lp->isa_config & ANY_ISA_DMA) == 0)
- {
+ if (lp->use_dma) {
+ if ((lp->isa_config & ANY_ISA_DMA) == 0) {
if (net_debug > 3)
printk("set_dma_cfg(): no DMA\n");
return;
}
- if (lp->isa_config & ISA_RxDMA)
- {
+ if (lp->isa_config & ISA_RxDMA) {
lp->curr_rx_cfg |= RX_DMA_ONLY;
if (net_debug > 3)
printk("set_dma_cfg(): RX_DMA_ONLY\n");
- }
- else
- {
+ } else {
lp->curr_rx_cfg |= AUTO_RX_DMA; /* not that we support it... */
if (net_debug > 3)
printk("set_dma_cfg(): AUTO_RX_DMA\n");
{
int retval = 0;
struct net_local *lp = (struct net_local *)dev->priv;
- if (lp->use_dma)
- {
+ if (lp->use_dma) {
if (lp->isa_config & ANY_ISA_DMA)
retval |= RESET_RX_DMA; /* Reset the DMA pointer */
if (lp->isa_config & DMA_BURST)
status = bp[0] + (bp[1]<<8);
length = bp[2] + (bp[3]<<8);
bp += 4;
- if (net_debug > 5)
- {
+ if (net_debug > 5) {
printk( "%s: receiving DMA packet at %lx, status %x, length %x\n",
dev->name, (unsigned long)bp, status, length);
}
if (bp >= lp->end_dma_buff) bp -= lp->dmasize*1024;
lp->rx_dma_ptr = bp;
- if (net_debug > 3)
- {
+ if (net_debug > 3) {
printk( "%s: received %d byte DMA packet of type %x\n",
dev->name, length,
(skb->data[ETH_ALEN+ETH_ALEN] << 8) | skb->data[ETH_ALEN+ETH_ALEN+1]);
}
#if ALLOW_DMA
- if (lp->use_dma)
- {
+ if (lp->use_dma) {
if (lp->isa_config & ANY_ISA_DMA) {
unsigned long flags;
lp->dma_buff = (unsigned char *)__get_dma_pages(GFP_KERNEL,
printk(KERN_ERR "%s: cannot get %dK memory for DMA\n", dev->name, lp->dmasize);
goto release_irq;
}
- if (net_debug > 1)
- {
+ if (net_debug > 1) {
printk( "%s: dma %lx %lx\n",
dev->name,
(unsigned long)lp->dma_buff,
{
struct net_local *lp = (struct net_local *)dev->priv;
- if (net_debug > 3)
- {
+ if (net_debug > 3) {
printk("%s: sent %d byte packet of type %x\n",
dev->name, skb->len,
(skb->data[ETH_ALEN+ETH_ALEN] << 8) | skb->data[ETH_ALEN+ETH_ALEN+1]);
writeword(dev, TX_LEN_PORT, skb->len);
/* Test to see if the chip has allocated memory for the packet */
- if ((readreg(dev, PP_BusST) & READY_FOR_TX_NOW) == 0)
- {
+ if ((readreg(dev, PP_BusST) & READY_FOR_TX_NOW) == 0) {
/*
* Gasp! It hasn't. But that shouldn't happen since
* we're waiting for TxOk, so return 1 and requeue this packet.
TX_LOST_CRS |
TX_SQE_ERROR |
TX_LATE_COL |
- TX_16_COL)) != TX_OK)
- {
+ TX_16_COL)) != TX_OK) {
if ((status & TX_OK) == 0) lp->stats.tx_errors++;
if (status & TX_LOST_CRS) lp->stats.tx_carrier_errors++;
if (status & TX_SQE_ERROR) lp->stats.tx_heartbeat_errors++;
if (length & 1)
skb->data[length-1] = inw(ioaddr + RX_FRAME_PORT);
- if (net_debug > 3)
- {
+ if (net_debug > 3) {
printk( "%s: received %d byte packet of type %x\n",
dev->name, length,
(skb->data[ETH_ALEN+ETH_ALEN] << 8) | skb->data[ETH_ALEN+ETH_ALEN+1]);
#if ALLOW_DMA
static void release_dma_buff(struct net_local *lp)
{
- if (lp->dma_buff)
- {
+ if (lp->dma_buff) {
free_pages((unsigned long)(lp->dma_buff), (lp->dmasize * 1024) / PAGE_SIZE);
lp->dma_buff = 0;
}
if (netif_running(dev))
return -EBUSY;
- if (net_debug)
- {
+ if (net_debug) {
printk("%s: Setting MAC address to ", dev->name);
for (i = 0; i < 6; i++)
printk(" %2.2x", dev->dev_addr[i] = ((unsigned char *)addr)[i]);
dev_cs89x0.init = cs89x0_probe;
dev_cs89x0.priv = kmalloc(sizeof(struct net_local), GFP_KERNEL);
- if (dev_cs89x0.priv == 0)
- {
+ if (dev_cs89x0.priv == 0) {
printk(KERN_ERR "cs89x0.c: Out of memory.\n");
return -ENOMEM;
}
lp = (struct net_local *)dev_cs89x0.priv;
#if ALLOW_DMA
- if (use_dma)
- {
+ if (use_dma) {
lp->use_dma = use_dma;
lp->dma = dma;
lp->dmasize = dmasize;
if (duplex==-1)
lp->auto_neg_cnf = AUTO_NEG_ENABLE;
- if (io == 0) {
+ if (io == 0) {
printk(KERN_ERR "cs89x0.c: Module autoprobing not allowed.\n");
printk(KERN_ERR "cs89x0.c: Append io=0xNNN\n");
return -EPERM;
* PPPoE --- PPP over Ethernet (RFC 2516)
*
*
- * Version: 0.6.2
+ * Version: 0.6.3
*
* 030700 : Fixed connect logic to allow for disconnect.
* 270700 : Fixed potential SMP problems; we must protect against
* guards against sock_put not actually freeing the sk
* in pppoe_release.
*
+ * 051000 : Initialization cleanup
+ *
* Author: Michal Ostrowski <mostrows@styx.uwaterloo.ca>
* Contributors:
* Arnaldo Carvalho de Melo <acme@conectiva.com.br>
int err = register_pppox_proto(PX_PROTO_OE, &pppoe_proto);
if (err == 0) {
- printk(KERN_INFO "Registered PPPoE v0.5\n");
+ printk(KERN_INFO "Registered PPPoE v0.6.3\n");
dev_add_pack(&pppoes_ptype);
register_netdevice_notifier(&pppoe_notifier);
return err;
}
-
-#ifdef MODULE
-MODULE_PARM(debug, "i");
-int init_module(void)
-{
- return pppoe_init();
-}
-
-void cleanup_module(void)
+void __exit pppoe_exit(void)
{
unregister_pppox_proto(PX_PROTO_OE);
dev_remove_pack(&pppoes_ptype);
proc_net_remove("pppoe");
}
-#else
-
-int pppoe_proto_init(struct net_proto *np)
-{
- return pppoe_init();
-}
-
-#endif
+module_init(pppoe_init);
+module_exit(pppoe_exit);
* PPPoE --- PPP over Ethernet (RFC 2516)
*
*
- * Version: 0.5.0
+ * Version: 0.5.1
*
* Author: Michal Ostrowski <mostrows@styx.uwaterloo.ca>
*
+ * 051000 : Initialization cleanup
+ *
* License:
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
extern int pppoe_init (void);
-#ifdef MODULE
-int init_module(void)
-#else
-int __init pppox_proto_init(struct net_proto *pro)
-#endif
+int __init pppox_init(void)
{
int err = 0;
return err;
}
-#ifdef MODULE
-
-MODULE_PARM(debug, "i");
-
-void cleanup_module(void)
+void __exit pppox_exit(void)
{
sock_unregister(PF_PPPOX);
}
-#endif
+module_init(pppox_init);
+module_exit(pppox_exit);
-/* $Id: sunlance.c,v 1.103 2000/08/12 19:23:38 anton Exp $
+/* $Id: sunlance.c,v 1.104 2000/09/18 05:48:42 davem Exp $
* lance.c: Linux/Sparc/Lance driver
*
* Written 1995, 1996 by Miguel de Icaza
return IORESOURCE_MEM;
}
+/*
+ * Find the extent of a PCI decode.
+ */
+static u32 pci_size(u32 base, u32 mask)
+{
+ u32 size = mask & base; /* Find the significant bits */
+ size = size & ~(size-1); /* Get the lowest of them to find the decode size */
+ return size-1; /* extent = size - 1 */
+}
+
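The pci_size() helper above implements the usual BAR sizing recipe: after all-ones is written to a base address register, the address bits the device does not decode read back as zero, and isolating the lowest surviving bit gives the decode size. The same arithmetic in user space (the read-back values below are made-up examples for a 4 KiB memory BAR and a 256-byte I/O BAR):

```c
#include <assert.h>
#include <stdint.h>

/* Same body as the kernel helper: `base` is the value read back
 * after writing all-ones, `mask` strips the BAR's flag bits. */
static uint32_t pci_size(uint32_t base, uint32_t mask)
{
    uint32_t size = mask & base;   /* find the significant bits    */
    size = size & ~(size - 1);     /* lowest set bit = decode size */
    return size - 1;               /* extent = size - 1            */
}
```

This replaces the old `sz = ~(sz & MASK)` expression, which overstates the extent whenever the device hardwires high address bits.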
static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom)
{
unsigned int pos, reg, next;
l = 0;
if ((l & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_MEMORY) {
res->start = l & PCI_BASE_ADDRESS_MEM_MASK;
- sz = ~(sz & PCI_BASE_ADDRESS_MEM_MASK);
+ sz = pci_size(sz, PCI_BASE_ADDRESS_MEM_MASK);
} else {
res->start = l & PCI_BASE_ADDRESS_IO_MASK;
- sz = ~(sz & PCI_BASE_ADDRESS_IO_MASK) & 0xffff;
+ sz = pci_size(sz, PCI_BASE_ADDRESS_IO_MASK & 0xffff);
}
res->end = res->start + (unsigned long) sz;
res->flags |= (l & 0xf) | pci_calc_resource_flags(l);
res->flags = (l & PCI_ROM_ADDRESS_ENABLE) |
IORESOURCE_MEM | IORESOURCE_PREFETCH | IORESOURCE_READONLY | IORESOURCE_CACHEABLE;
res->start = l & PCI_ROM_ADDRESS_MASK;
- sz = ~(sz & PCI_ROM_ADDRESS_MASK);
+ sz = pci_size(sz, PCI_ROM_ADDRESS_MASK);
res->end = res->start + (unsigned long) sz;
}
res->name = dev->name;
MOD_SUB_DIRS := $(SUB_DIRS) char audio
ALL_SUB_DIRS := $(SUB_DIRS) char audio
-L_OBJS := sbus.o dvma.o
-L_TARGET := sbus.a
+O_OBJS := sbus.o dvma.o
+O_TARGET := sbus_all.o
# Character devices for SBUS-based machines.
#
ifeq ($(CONFIG_SBUSCHAR),y)
SUB_DIRS += char
-L_OBJS += char/sunchar.o
+O_OBJS += char/sunchar.o
endif
# Audio devices for SBUS-based machines.
#
ifeq ($(CONFIG_SPARCAUDIO),y)
SUB_DIRS += audio
-L_OBJS += audio/sparcaudio.o
+O_OBJS += audio/sparcaudio.o
endif
include $(TOPDIR)/Rules.make
-/* $Id: audio.c,v 1.54 2000/07/13 08:06:40 davem Exp $
+/* $Id: audio.c,v 1.55 2000/10/10 01:07:39 davem Exp $
* drivers/sbus/audio/audio.c
*
* Copyright 1996 Thomas K. Dyas (tdyas@noc.rutgers.edu)
return 0;
}
-#if defined (LINUX_VERSION_CODE) && LINUX_VERSION_CODE < 0x20100
-static struct symbol_table sparcaudio_syms = {
-#include <linux/symtab_begin.h>
- X(register_sparcaudio_driver),
- X(unregister_sparcaudio_driver),
- X(sparcaudio_output_done),
- X(sparcaudio_input_done),
-#include <linux/symtab_end.h>
-};
-#else
EXPORT_SYMBOL(register_sparcaudio_driver);
EXPORT_SYMBOL(unregister_sparcaudio_driver);
EXPORT_SYMBOL(sparcaudio_output_done);
EXPORT_SYMBOL(sparcaudio_input_done);
-#endif
-#ifdef MODULE
-int init_module(void)
-#else
-int __init sparcaudio_init(void)
-#endif
+static int __init sparcaudio_init(void)
{
-#if defined (LINUX_VERSION_CODE) && LINUX_VERSION_CODE < 0x20100
- /* Export symbols for use by the low-level drivers. */
- register_symtab(&sparcaudio_syms);
-#endif
-
/* Register our character device driver with the VFS. */
if (devfs_register_chrdev(SOUND_MAJOR, "sparcaudio", &sparcaudio_fops))
return -EIO;
return 0;
}
-#ifdef MODULE
-void cleanup_module(void)
+static void __exit sparcaudio_exit(void)
{
devfs_unregister_chrdev(SOUND_MAJOR, "sparcaudio");
devfs_unregister (devfs_handle);
}
-#endif
+
+module_init(sparcaudio_init)
+module_exit(sparcaudio_exit)
/*
* Code from Linux Streams, Copyright 1995 by
-/* $Id: esp.c,v 1.96 2000/08/24 03:51:26 davem Exp $
+/* $Id: esp.c,v 1.97 2000/09/19 01:29:27 davem Exp $
* esp.c: EnhancedScsiProcessor Sun SCSI driver code.
*
* Copyright (C) 1995, 1998 David S. Miller (davem@caip.rutgers.edu)
* Next, walk the list, and fill in the addresses and sizes of
* each segment.
*/
- memset(sgpnt, 0, SCpnt->use_sg * sizeof(struct scatterlist));
+ memset(sgpnt, 0, SCpnt->sglist_len);
SCpnt->request_buffer = (char *) sgpnt;
SCpnt->request_bufflen = 0;
bhprev = NULL;
+
/*
* ac97_codec.c: Generic AC97 mixer/modem module
*
char *name;
int (*init) (struct ac97_codec *codec);
} ac97_codec_ids[] = {
- {0x414B4D00, "Asahi Kasei AK4540" , NULL},
+ {0x414B4D00, "Asahi Kasei AK4540 rev 0", NULL},
+ {0x414B4D01, "Asahi Kasei AK4540 rev 1", NULL},
{0x41445340, "Analog Devices AD1881" , NULL},
{0x41445360, "Analog Devices AD1885" , enable_eapd},
{0x43525900, "Cirrus Logic CS4297" , NULL},
}
if (codec->name == NULL)
codec->name = "Unknown";
- printk(KERN_INFO "ac97_codec: AC97 %s codec, vendor id1: 0x%04x, "
- "id2: 0x%04x (%s)\n", audio ? "Audio" : (modem ? "Modem" : ""),
+ printk(KERN_INFO "ac97_codec: AC97 %s codec, id: 0x%04x:"
+ "0x%04x (%s)\n", audio ? "Audio" : (modem ? "Modem" : ""),
id1, id2, codec->name);
return ac97_init_mixer(codec);
if [ "$CONFIG_INPUT" = "n" ]; then
comment ' Input core support is needed for USB HID'
else
- dep_tristate ' USB Human Interface Device (HID) support' CONFIG_USB_HID $CONFIG_USB $CONFIG_INPUT
+ dep_tristate ' USB Human Interface Device (full HID) support' CONFIG_USB_HID $CONFIG_USB $CONFIG_INPUT
if [ "$CONFIG_USB_HID" != "y" ]; then
- dep_tristate ' USB HIDBP Keyboard support' CONFIG_USB_KBD $CONFIG_USB $CONFIG_INPUT
- dep_tristate ' USB HIDBP Mouse support' CONFIG_USB_MOUSE $CONFIG_USB $CONFIG_INPUT
+ dep_tristate ' USB HIDBP Keyboard (basic) support' CONFIG_USB_KBD $CONFIG_USB $CONFIG_INPUT
+ dep_tristate ' USB HIDBP Mouse (basic) support' CONFIG_USB_MOUSE $CONFIG_USB $CONFIG_INPUT
fi
dep_tristate ' Wacom Intuos/Graphire tablet support' CONFIG_USB_WACOM $CONFIG_USB $CONFIG_INPUT
fi
mask = 0;
printk(KERN_ERR "usbin_completed: panic: unknown URB\n");
}
+ urb->dev = as->state->usbdev;
spin_lock_irqsave(&as->lock, flags);
if (!usbin_retire_desc(u, urb) &&
u->flags & FLG_RUNNING &&
mask = 0;
printk(KERN_ERR "usbin_sync_completed: panic: unknown URB\n");
}
+ urb->dev = as->state->usbdev;
spin_lock_irqsave(&as->lock, flags);
if (!usbin_sync_retire_desc(u, urb) &&
u->flags & FLG_RUNNING &&
mask = 0;
printk(KERN_ERR "usbout_completed: panic: unknown URB\n");
}
+ urb->dev = as->state->usbdev;
spin_lock_irqsave(&as->lock, flags);
if (!usbout_retire_desc(u, urb) &&
u->flags & FLG_RUNNING &&
mask = 0;
printk(KERN_ERR "usbout_sync_completed: panic: unknown URB\n");
}
+ urb->dev = as->state->usbdev;
spin_lock_irqsave(&as->lock, flags);
if (!usbout_sync_retire_desc(u, urb) &&
u->flags & FLG_RUNNING &&
* Setup: parse used options
*/
-__initfunc(void sun3fb_setup(char *options))
+void __init sun3fb_setup(char *options)
{
char *p;
/*
* Initialisation
*/
-__initfunc(static void sun3fb_init_fb(int fbtype, unsigned long addr))
+static void __init sun3fb_init_fb(int fbtype, unsigned long addr)
{
static struct linux_sbus_device sdb;
struct fb_fix_screeninfo *fix;
}
-__initfunc(int sun3fb_init(void))
+int __init sun3fb_init(void)
{
extern int con_is_present(void);
unsigned long addr;
printk("RR: RE (%x)\n", inode->i_ino);
#endif
if (buffer) kfree(buffer);
- return -1;
+ return 0;
default:
break;
}
goto getlen_out;
/* Check whether the mmaps could change if we sleep */
- volatile_task = (task != current || atomic_read(&mm->mm_users) > 1);
+ volatile_task = (task != current || atomic_read(&mm->mm_users) > 2);
/* decode f_pos */
lineno = *ppos >> MAPS_LINE_SHIFT;
extern int do_check_pgt_cache(int, int);
-extern inline void set_pgdir(unsigned long address, pgd_t entry)
-{
- struct task_struct * p;
- pgd_t *pgd;
-#ifdef CONFIG_SMP
- int i;
-#endif
-
- read_lock(&tasklist_lock);
- for_each_task(p) {
- if (!p->mm)
- continue;
- *pgd_offset(p->mm,address) = entry;
- }
- read_unlock(&tasklist_lock);
-#ifndef CONFIG_SMP
- for (pgd = (pgd_t *)pgd_quicklist; pgd; pgd = (pgd_t *)*(unsigned long *)pgd)
- pgd[address >> PGDIR_SHIFT] = entry;
-#else
- /* To pgd_alloc/pgd_free, one holds master kernel lock and so does our callee, so we can
- modify pgd caches of other CPUs as well. -jj */
- for (i = 0; i < NR_CPUS; i++)
- for (pgd = (pgd_t *)cpu_data[i].pgd_quick; pgd; pgd = (pgd_t *)*(unsigned long *)pgd)
- pgd[address >> PGDIR_SHIFT] = entry;
-#endif
-}
-
/*
* TLB flushing:
*
#define LSAPIC_PERFORMANCE_RESTRICTED (1<<1)
#define LSAPIC_PRESENT (1<<2)
-typedef struct {
+typedef struct acpi_entry_lsapic {
u8 type;
u8 length;
u16 acpi_processor_id;
--- /dev/null
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+/*
+ * acpikcfg.h - ACPI based Kernel Configuration Manager External Interfaces
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee <jung-ik.lee@intel.com>
+ */
+
+
+u32 __init acpi_cf_init (void * rsdp);
+u32 __init acpi_cf_terminate (void );
+
+u32 __init
+acpi_cf_get_pci_vectors (
+ struct pci_vector_struct **vectors,
+ int *num_pci_vectors
+ );
+
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+void __init
+acpi_cf_print_pci_vectors (
+ struct pci_vector_struct *vectors,
+ int num_pci_vectors
+ );
+#endif
+#endif /* CONFIG_ACPI_KERNEL_CONFIG */
#include <asm/system.h>
-/*
- * Make sure gcc doesn't try to be clever and move things around
- * on us. We need to use _exactly_ the address the user gave us,
- * not some alias that contains the same information.
- */
-#define __atomic_fool_gcc(x) (*(volatile struct { int a[100]; } *)x)
-
/*
 * On IA-64, counter must always be volatile to ensure that that the
 * On IA-64, counter must always be volatile to ensure that the
* memory accesses are ordered.
* bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
*/
-extern __inline__ void
+static __inline__ void
set_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
} while (cmpxchg_acq(m, old, new) != old);
}
-extern __inline__ void
+/*
+ * clear_bit() doesn't provide any barrier for the compiler.
+ */
+#define smp_mb__before_clear_bit() smp_mb()
+#define smp_mb__after_clear_bit() smp_mb()
+static __inline__ void
clear_bit (int nr, volatile void *addr)
{
__u32 mask, old, new;
} while (cmpxchg_acq(m, old, new) != old);
}
-extern __inline__ void
+static __inline__ void
change_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
} while (cmpxchg_acq(m, old, new) != old);
}
-extern __inline__ int
+static __inline__ int
test_and_set_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
return (old & bit) != 0;
}
-extern __inline__ int
+static __inline__ int
test_and_clear_bit (int nr, volatile void *addr)
{
__u32 mask, old, new;
return (old & ~mask) != 0;
}
-extern __inline__ int
+static __inline__ int
test_and_change_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
return (old & bit) != 0;
}
-extern __inline__ int
+static __inline__ int
test_bit (int nr, volatile void *addr)
{
return 1 & (((const volatile __u32 *) addr)[nr >> 5] >> (nr & 31));
* ffz = Find First Zero in word. Undefined if no zero exists,
 * so code should check against ~0UL first.
*/
-extern inline unsigned long
+static inline unsigned long
ffz (unsigned long x)
{
unsigned long result;
* hweightN: returns the hamming weight (i.e. the number
* of bits set) of a N-bit word
*/
-extern __inline__ unsigned long
+static __inline__ unsigned long
hweight64 (unsigned long x)
{
unsigned long result;
/*
* Find next zero bit in a bitmap reasonably efficiently..
*/
-extern inline int
+static inline int
find_next_zero_bit (void *addr, unsigned long size, unsigned long offset)
{
unsigned long *p = ((unsigned long *) addr) + (offset >> 6);
#include <asm/processor.h>
-extern __inline__ void
+static __inline__ void
ia64_set_itm (unsigned long val)
{
__asm__ __volatile__("mov cr.itm=%0;; srlz.d;;" :: "r"(val) : "memory");
}
-extern __inline__ unsigned long
+static __inline__ unsigned long
ia64_get_itm (void)
{
unsigned long result;
return result;
}
-extern __inline__ void
+static __inline__ void
ia64_set_itv (unsigned char vector, unsigned char masked)
{
if (masked > 1)
:: "r"((masked << 16) | vector) : "memory");
}
-extern __inline__ void
+static __inline__ void
ia64_set_itc (unsigned long val)
{
__asm__ __volatile__("mov ar.itc=%0;; srlz.d;;" :: "r"(val) : "memory");
}
-extern __inline__ unsigned long
+static __inline__ unsigned long
ia64_get_itc (void)
{
unsigned long result;
return result;
}
-extern __inline__ void
+static __inline__ void
__delay (unsigned long loops)
{
unsigned long saved_ar_lc;
__asm__ __volatile__("mov ar.lc=%0" :: "r"(saved_ar_lc));
}
-extern __inline__ void
+static __inline__ void
udelay (unsigned long usecs)
{
#ifdef CONFIG_IA64_SOFTSDV_HACKS
efi_reset_system_t *reset_system;
} efi;
-extern inline int
+static inline int
efi_guidcmp (efi_guid_t left, efi_guid_t right)
{
return memcmp(&left, &right, sizeof (efi_guid_t));
/*
* This is mostly compatible with Linux/x86.
*
- * Copyright (C) 1998, 1999 Hewlett-Packard Co
- * Copyright (C) 1998, 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
*/
/*
pid_t l_pid;
};
+#ifdef __KERNEL__
+# define flock64 flock
+#endif
+
#define F_LINUX_SPECIFIC_BASE 1024
#endif /* _ASM_IA64_FCNTL_H */
# define hardirq_trylock(cpu) (local_irq_count(cpu) == 0)
# define hardirq_endlock(cpu) do { } while (0)
-# define irq_enter(cpu, irq) (++local_irq_count(cpu))
-# define irq_exit(cpu, irq) (--local_irq_count(cpu))
+# define irq_enter(cpu, irq) (local_irq_count(cpu)++)
+# define irq_exit(cpu, irq) (local_irq_count(cpu)--)
# define synchronize_irq() barrier()
#else
static inline void irq_enter(int cpu, int irq)
{
- ++local_irq_count(cpu);
+ local_irq_count(cpu)++;
while (test_bit(0,&global_irq_lock)) {
/* nothing */;
static inline void irq_exit(int cpu, int irq)
{
- --local_irq_count(cpu);
+ local_irq_count(cpu)--;
}
static inline int hardirq_trylock(int cpu)
static inline void
hw_resend_irq (struct hw_interrupt_type *h, unsigned int vector)
{
- int my_cpu_id;
-
-#ifdef CONFIG_SMP
- my_cpu_id = smp_processor_id();
-#else
- __u64 lid;
-
- __asm__ ("mov %0=cr.lid" : "=r"(lid));
- my_cpu_id = (lid >> 24) & 0xff; /* extract id (ignore eid) */
-#endif
- ipi_send(my_cpu_id, vector, IA64_IPI_DM_INT, 0);
+ ipi_send(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
}
#endif /* _ASM_IA64_HW_IRQ_H */
(granularity << IA32_SEG_G) | \
(((base >> 24) & 0xFF) << IA32_SEG_HIGH_BASE))
+#define IA32_IOBASE 0x2000000000000000 /* Virtual address for I/O space */
+
#define IA32_CR0 0x80000001 /* Enable PG and PE bits */
#define IA32_CR4 0 /* No architectural extensions */
*/
#define __ia64_mf_a() __asm__ __volatile__ ("mf.a" ::: "memory")
-extern inline const unsigned long
+static inline const unsigned long
__ia64_get_io_port_base (void)
{
- unsigned long addr;
+ extern unsigned long ia64_iobase;
- __asm__ ("mov %0=ar.k0;;" : "=r"(addr));
- return __IA64_UNCACHED_OFFSET | addr;
+ return ia64_iobase;
}
-extern inline void*
+static inline void*
__ia64_mk_io_addr (unsigned long port)
{
const unsigned long io_base = __ia64_get_io_port_base();
* order. --davidm 99/12/07
*/
-extern inline unsigned int
+static inline unsigned int
__inb (unsigned long port)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
return ret;
}
-extern inline unsigned int
+static inline unsigned int
__inw (unsigned long port)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
return ret;
}
-extern inline unsigned int
+static inline unsigned int
__inl (unsigned long port)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
return ret;
}
-extern inline void
+static inline void
__insb (unsigned long port, void *dst, unsigned long count)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
return;
}
-extern inline void
+static inline void
__insw (unsigned long port, void *dst, unsigned long count)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
return;
}
-extern inline void
+static inline void
__insl (unsigned long port, void *dst, unsigned long count)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
return;
}
-extern inline void
+static inline void
__outb (unsigned char val, unsigned long port)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
__ia64_mf_a();
}
-extern inline void
+static inline void
__outw (unsigned short val, unsigned long port)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
__ia64_mf_a();
}
-extern inline void
+static inline void
__outl (unsigned int val, unsigned long port)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
__ia64_mf_a();
}
-extern inline void
+static inline void
__outsb (unsigned long port, const void *src, unsigned long count)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
return;
}
-extern inline void
+static inline void
__outsw (unsigned long port, const void *src, unsigned long count)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
return;
}
-extern inline void
+static inline void
__outsl (unsigned long port, void *src, unsigned long count)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
/*
* The address passed to these functions are ioremap()ped already.
*/
-extern inline unsigned char
+static inline unsigned char
__readb (void *addr)
{
return *(volatile unsigned char *)addr;
}
-extern inline unsigned short
+static inline unsigned short
__readw (void *addr)
{
return *(volatile unsigned short *)addr;
}
-extern inline unsigned int
+static inline unsigned int
__readl (void *addr)
{
return *(volatile unsigned int *) addr;
}
-extern inline unsigned long
+static inline unsigned long
__readq (void *addr)
{
return *(volatile unsigned long *) addr;
}
-extern inline void
+static inline void
__writeb (unsigned char val, void *addr)
{
*(volatile unsigned char *) addr = val;
}
-extern inline void
+static inline void
__writew (unsigned short val, void *addr)
{
*(volatile unsigned short *) addr = val;
}
-extern inline void
+static inline void
__writel (unsigned int val, void *addr)
{
*(volatile unsigned int *) addr = val;
}
-extern inline void
+static inline void
__writeq (unsigned long val, void *addr)
{
*(volatile unsigned long *) addr = val;
{
}
-extern inline unsigned long
+static inline unsigned long
ia64_rid (unsigned long context, unsigned long region_addr)
{
# ifdef CONFIG_IA64_TLB_CHECKS_REGION_NUMBER
# endif
}
-extern inline void
+static inline void
get_new_mmu_context (struct mm_struct *mm)
{
spin_lock(&ia64_ctx.lock);
}
-extern inline void
+static inline void
get_mmu_context (struct mm_struct *mm)
{
/* check if our ASN is of an older generation and thus invalid: */
get_new_mmu_context(mm);
}
-extern inline int
+static inline int
init_new_context (struct task_struct *p, struct mm_struct *mm)
{
mm->context = 0;
return 0;
}
-extern inline void
+static inline void
destroy_context (struct mm_struct *mm)
{
/* Nothing to do. */
}
-extern inline void
+static inline void
reload_context (struct mm_struct *mm)
{
unsigned long rid;
--- /dev/null
+#ifndef _ASM_IA64_MODULE_H
+#define _ASM_IA64_MODULE_H
+/*
+ * This file contains the ia64 architecture specific module code.
+ *
+ * Copyright (C) 2000 Intel Corporation.
+ * Copyright (C) 2000 Mike Stephens <mike.stephens@intel.com>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <asm/unwind.h>
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) ia64_module_unmap(x)
+#define module_arch_init(x) ia64_module_init(x)
+
+/*
+ * This must match in size and layout the data created by
+ * modutils/obj/obj-ia64.c
+ */
+struct archdata {
+ const char *unw_table;
+ const char *segment_base;
+ const char *unw_start;
+ const char *unw_end;
+ const char *gp;
+};
+
+/*
+ * functions to add/remove a module's unwind info when
+ * it is loaded or unloaded.
+ */
+static inline int
+ia64_module_init(struct module *mod)
+{
+#ifdef CONFIG_IA64_NEW_UNWIND
+ struct archdata *archdata;
+
+ if (!mod_member_present(mod, archdata_start) || !mod->archdata_start)
+ return 0;
+ archdata = (struct archdata *)(mod->archdata_start);
+
+ /*
+ * Make sure the unwind pointers are sane.
+ */
+
+ if (archdata->unw_table)
+ {
+ printk(KERN_ERR "arch_init_module: archdata->unw_table must be zero.\n");
+ return 1;
+ }
+ if (!mod_bound(archdata->gp, 0, mod))
+ {
+ printk(KERN_ERR "arch_init_module: archdata->gp out of bounds.\n");
+ return 1;
+ }
+ if (!mod_bound(archdata->unw_start, 0, mod))
+ {
+ printk(KERN_ERR "arch_init_module: archdata->unw_start out of bounds.\n");
+ return 1;
+ }
+ if (!mod_bound(archdata->unw_end, 0, mod))
+ {
+ printk(KERN_ERR "arch_init_module: archdata->unw_end out of bounds.\n");
+ return 1;
+ }
+ if (!mod_bound(archdata->segment_base, 0, mod))
+ {
+ printk(KERN_ERR "arch_init_module: archdata->segment_base out of bounds.\n");
+ return 1;
+ }
+
+ /*
+ * Pointers are reasonable, add the module unwind table
+ */
+ archdata->unw_table = unw_add_unwind_table(mod->name, archdata->segment_base,
+ (unsigned long) archdata->gp,
+ (unsigned long) archdata->unw_start,
+ (unsigned long) archdata->unw_end);
+#endif /* CONFIG_IA64_NEW_UNWIND */
+ return 0;
+}
+
+static inline void
+ia64_module_unmap(void * addr)
+{
+#ifdef CONFIG_IA64_NEW_UNWIND
+ struct module *mod = (struct module *) addr;
+ struct archdata *archdata;
+
+ /*
+ * Before freeing the module memory remove the unwind table entry
+ */
+ if (mod_member_present(mod, archdata_start) && mod->archdata_start)
+ {
+ archdata = (struct archdata *)(mod->archdata_start);
+
+ if (archdata->unw_table != NULL)
+ unw_remove_unwind_table(archdata->unw_table);
+ }
+#endif /* CONFIG_IA64_NEW_UNWIND */
+
+ vfree(addr);
+}
+
+#endif /* _ASM_IA64_MODULE_H */
#define PT_PTRACED_BIT 0
#define PT_TRACESYS_BIT 1
-#define IA64_TASK_SIZE 2864 /* 0xb30 */
+#define IA64_TASK_SIZE 3328 /* 0xd00 */
#define IA64_PT_REGS_SIZE 400 /* 0x190 */
#define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */
#define IA64_SIGINFO_SIZE 128 /* 0x80 */
#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */
#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */
#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */
-#define IA64_TASK_THREAD_OFFSET 896 /* 0x380 */
-#define IA64_TASK_THREAD_KSP_OFFSET 896 /* 0x380 */
-#define IA64_TASK_THREAD_SIGMASK_OFFSET 2744 /* 0xab8 */
+#define IA64_TASK_THREAD_OFFSET 1424 /* 0x590 */
+#define IA64_TASK_THREAD_KSP_OFFSET 1424 /* 0x590 */
+#define IA64_TASK_THREAD_SIGMASK_OFFSET 3184 /* 0xc70 */
#define IA64_TASK_PID_OFFSET 188 /* 0xbc */
#define IA64_TASK_MM_OFFSET 88 /* 0x58 */
#define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */
#ifdef CONFIG_IA64_GENERIC
# include <asm/machvec.h>
# define virt_to_page(kaddr) (mem_map + platform_map_nr(kaddr))
-#elif defined (CONFIG_IA64_SN_SN1_SIM)
+#elif defined (CONFIG_IA64_SN_SN1)
# define virt_to_page(kaddr) (mem_map + MAP_NR_SN1(kaddr))
#else
# define virt_to_page(kaddr) (mem_map + MAP_NR_DENSE(kaddr))
#endif
#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
-# endif /* __KERNEL__ */
-
typedef union ia64_va {
struct {
unsigned long off : 61; /* intra-region offset */
#define BUG() do { printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); *(int *)0=0; } while (0)
#define PAGE_BUG(page) do { BUG(); } while (0)
-extern __inline__ int
+static __inline__ int
get_order (unsigned long size)
{
double d = size - 1;
return order;
}
+# endif /* __KERNEL__ */
#endif /* !ASSEMBLY */
#define PAGE_OFFSET 0xe000000000000000
#define PAL_CACHE_PROT_INFO 38 /* get i/d cache protection info */
#define PAL_REGISTER_INFO 39 /* return AR and CR register information*/
#define PAL_SHUTDOWN 40 /* enter processor shutdown state */
+#define PAL_PREFETCH_VISIBILITY 41
#define PAL_COPY_PAL 256 /* relocate PAL procedures and PAL PMI */
#define PAL_HALT_INFO 257 /* return the low power capabilities of processor */
* (generally 0) MUST be passed. Reserved parameters are not optional
* parameters.
*/
-extern struct ia64_pal_retval ia64_pal_call_static (u64, u64, u64, u64);
-extern struct ia64_pal_retval ia64_pal_call_stacked (u64, u64, u64, u64);
-extern struct ia64_pal_retval ia64_pal_call_phys_static (u64, u64, u64, u64);
-extern struct ia64_pal_retval ia64_pal_call_phys_stacked (u64, u64, u64, u64);
+extern struct ia64_pal_retval ia64_pal_call_static (u64, u64, u64, u64, u64);
+extern struct ia64_pal_retval ia64_pal_call_stacked (u64, u64, u64, u64);
+extern struct ia64_pal_retval ia64_pal_call_phys_static (u64, u64, u64, u64);
+extern struct ia64_pal_retval ia64_pal_call_phys_stacked (u64, u64, u64, u64);
-#define PAL_CALL(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_static(a0, a1, a2, a3)
-#define PAL_CALL_STK(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_stacked(a0, a1, a2, a3)
-#define PAL_CALL_PHYS(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_phys_static(a0, a1, a2, a3)
-#define PAL_CALL_PHYS_STK(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_phys_stacked(a0, a1, a2, a3)
+#define PAL_CALL(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_static(a0, a1, a2, a3, 0)
+#define PAL_CALL_IC_OFF(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_static(a0, a1, a2, a3, 1)
+#define PAL_CALL_STK(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_stacked(a0, a1, a2, a3)
+#define PAL_CALL_PHYS(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_phys_static(a0, a1, a2, a3)
+#define PAL_CALL_PHYS_STK(iprv,a0,a1,a2,a3) iprv = ia64_pal_call_phys_stacked(a0, a1, a2, a3)
typedef int (*ia64_pal_handler) (u64, ...);
extern ia64_pal_handler ia64_pal;
extern void pal_bus_features_print (u64);
/* Provide information about configurable processor bus features */
-extern inline s64
+static inline s64
ia64_pal_bus_get_features (pal_bus_features_u_t *features_avail,
pal_bus_features_u_t *features_status,
pal_bus_features_u_t *features_control)
}
/* Enables/disables specific processor bus features */
-extern inline s64
+static inline s64
ia64_pal_bus_set_features (pal_bus_features_u_t feature_select)
{
struct ia64_pal_retval iprv;
}
/* Get detailed cache information */
-extern inline s64
+static inline s64
ia64_pal_cache_config_info (u64 cache_level, u64 cache_type, pal_cache_config_info_t *conf)
{
struct ia64_pal_retval iprv;
}
/* Get detailed cache protection information */
-extern inline s64
+static inline s64
ia64_pal_cache_prot_info (u64 cache_level, u64 cache_type, pal_cache_protection_info_t *prot)
{
struct ia64_pal_retval iprv;
* Flush the processor instruction or data caches. *PROGRESS must be
 * initialized to zero before calling this for the first time.
*/
-extern inline s64
+static inline s64
ia64_pal_cache_flush (u64 cache_type, u64 invalidate, u64 *progress)
{
struct ia64_pal_retval iprv;
- PAL_CALL(iprv, PAL_CACHE_FLUSH, cache_type, invalidate, *progress);
+ PAL_CALL_IC_OFF(iprv, PAL_CACHE_FLUSH, cache_type, invalidate, *progress);
*progress = iprv.v1;
return iprv.status;
}
/* Initialize the processor controlled caches */
-extern inline s64
+static inline s64
ia64_pal_cache_init (u64 level, u64 cache_type, u64 restrict)
{
struct ia64_pal_retval iprv;
* processor controlled cache to known values without the availability
* of backing memory.
*/
-extern inline s64
+static inline s64
ia64_pal_cache_line_init (u64 physical_addr, u64 data_value)
{
struct ia64_pal_retval iprv;
/* Read the data and tag of a processor controlled cache line for diags */
-extern inline s64
+static inline s64
ia64_pal_cache_read (pal_cache_line_id_u_t line_id, u64 physical_addr)
{
struct ia64_pal_retval iprv;
}
/* Return summary information about the hierarchy of caches controlled by the processor */
-extern inline s64
+static inline s64
ia64_pal_cache_summary (u64 *cache_levels, u64 *unique_caches)
{
struct ia64_pal_retval iprv;
}
/* Write the data and tag of a processor-controlled cache line for diags */
-extern inline s64
+static inline s64
ia64_pal_cache_write (pal_cache_line_id_u_t line_id, u64 physical_addr, u64 data)
{
struct ia64_pal_retval iprv;
/* Return the parameters needed to copy relocatable PAL procedures from ROM to memory */
-extern inline s64
+static inline s64
ia64_pal_copy_info (u64 copy_type, u64 num_procs, u64 num_iopics,
u64 *buffer_size, u64 *buffer_align)
{
}
/* Copy relocatable PAL procedures from ROM to memory */
-extern inline s64
+static inline s64
ia64_pal_copy_pal (u64 target_addr, u64 alloc_size, u64 processor, u64 *pal_proc_offset)
{
struct ia64_pal_retval iprv;
}
/* Return the number of instruction and data debug register pairs */
-extern inline s64
+static inline s64
ia64_pal_debug_info (u64 *inst_regs, u64 *data_regs)
{
struct ia64_pal_retval iprv;
#ifdef TBD
/* Switch from IA64-system environment to IA-32 system environment */
-extern inline s64
+static inline s64
ia64_pal_enter_ia32_env (ia32_env1, ia32_env2, ia32_env3)
{
struct ia64_pal_retval iprv;
#endif
/* Get unique geographical address of this processor on its bus */
-extern inline s64
+static inline s64
ia64_pal_fixed_addr (u64 *global_unique_addr)
{
struct ia64_pal_retval iprv;
}
/* Get base frequency of the platform if generated by the processor */
-extern inline s64
+static inline s64
ia64_pal_freq_base (u64 *platform_base_freq)
{
struct ia64_pal_retval iprv;
* Get the ratios for processor frequency, bus frequency and interval timer to
 * the base frequency of the platform
*/
-extern inline s64
+static inline s64
ia64_pal_freq_ratios (struct pal_freq_ratio *proc_ratio, struct pal_freq_ratio *bus_ratio,
struct pal_freq_ratio *itc_ratio)
{
* power states where prefetching and execution are suspended and cache and
* TLB coherency is not maintained.
*/
-extern inline s64
+static inline s64
ia64_pal_halt (u64 halt_state)
{
struct ia64_pal_retval iprv;
} pal_power_mgmt_info_u_t;
/* Return information about processor's optional power management capabilities. */
-extern inline s64
+static inline s64
ia64_pal_halt_info (pal_power_mgmt_info_u_t *power_buf)
{
struct ia64_pal_retval iprv;
/* Cause the processor to enter LIGHT HALT state, where prefetching and execution are
* suspended, but cache and TLB coherency is maintained.
*/
-extern inline s64
+static inline s64
ia64_pal_halt_light (void)
{
struct ia64_pal_retval iprv;
* the error logging registers to be written. This procedure also checks the pending
* machine check bit and pending INIT bit and reports their states.
*/
-extern inline s64
+static inline s64
ia64_pal_mc_clear_log (u64 *pending_vector)
{
struct ia64_pal_retval iprv;
/* Ensure that all outstanding transactions in a processor are completed or that any
 * MCA due to these outstanding transactions is taken.
*/
-extern inline s64
+static inline s64
ia64_pal_mc_drain (void)
{
struct ia64_pal_retval iprv;
}
/* Return the machine check dynamic processor state */
-extern inline s64
+static inline s64
ia64_pal_mc_dynamic_state (u64 offset, u64 *size, u64 *pds)
{
struct ia64_pal_retval iprv;
}
/* Return processor machine check information */
-extern inline s64
+static inline s64
ia64_pal_mc_error_info (u64 info_index, u64 type_index, u64 *size, u64 *error_info)
{
struct ia64_pal_retval iprv;
/* Inform PALE_CHECK whether a machine check is expected so that PALE_CHECK will not
* attempt to correct any expected machine checks.
*/
-extern inline s64
+static inline s64
ia64_pal_mc_expected (u64 expected, u64 *previous)
{
struct ia64_pal_retval iprv;
* minimal processor state in the event of a machine check or initialization
* event.
*/
-extern inline s64
+static inline s64
ia64_pal_mc_register_mem (u64 physical_addr)
{
struct ia64_pal_retval iprv;
/* Restore minimal architectural processor state, set CMC interrupt if necessary
* and resume execution
*/
-extern inline s64
+static inline s64
ia64_pal_mc_resume (u64 set_cmci, u64 save_ptr)
{
struct ia64_pal_retval iprv;
}
/* Return the memory attributes implemented by the processor */
-extern inline s64
+static inline s64
ia64_pal_mem_attrib (u64 *mem_attrib)
{
struct ia64_pal_retval iprv;
/* Return the amount of memory needed for second phase of processor
* self-test and the required alignment of memory.
*/
-extern inline s64
+static inline s64
ia64_pal_mem_for_test (u64 *bytes_needed, u64 *alignment)
{
struct ia64_pal_retval iprv;
/* Return the performance monitor information about what can be counted
* and how to configure the monitors to count the desired events.
*/
-extern inline s64
+static inline s64
ia64_pal_perf_mon_info (u64 *pm_buffer, pal_perf_mon_info_u_t *pm_info)
{
struct ia64_pal_retval iprv;
/* Specifies the physical address of the processor interrupt block
* and I/O port space.
*/
-extern inline s64
+static inline s64
ia64_pal_platform_addr (u64 type, u64 physical_addr)
{
struct ia64_pal_retval iprv;
}
/* Set the SAL PMI entrypoint in memory */
-extern inline s64
+static inline s64
ia64_pal_pmi_entrypoint (u64 sal_pmi_entry_addr)
{
struct ia64_pal_retval iprv;
struct pal_features_s;
/* Provide information about configurable processor features */
-extern inline s64
+static inline s64
ia64_pal_proc_get_features (u64 *features_avail,
u64 *features_status,
u64 *features_control)
}
/* Enable/disable processor dependent features */
-extern inline s64
+static inline s64
ia64_pal_proc_set_features (u64 feature_select)
{
struct ia64_pal_retval iprv;
/* Return the information required for the architected loop used to purge
* (initialize) the entire TC
*/
-extern inline s64
+static inline s64
ia64_get_ptce (ia64_ptce_info_t *ptce)
{
struct ia64_pal_retval iprv;
}
/* Return info about implemented application and control registers. */
-extern inline s64
+static inline s64
ia64_pal_register_info (u64 info_request, u64 *reg_info_1, u64 *reg_info_2)
{
struct ia64_pal_retval iprv;
/* Return information about the register stack and RSE for this processor
* implementation.
*/
-extern inline s64
+static inline s64
ia64_pal_rse_info (u64 *num_phys_stacked, pal_hints_u_t *hints)
{
struct ia64_pal_retval iprv;
* suspended, but cause cache and TLB coherency to be maintained.
* This is usually called in IA-32 mode.
*/
-extern inline s64
+static inline s64
ia64_pal_shutdown (void)
{
struct ia64_pal_retval iprv;
}
/* Perform the second phase of processor self-test. */
-extern inline s64
+static inline s64
ia64_pal_test_proc (u64 test_addr, u64 test_size, u64 attributes, u64 *self_test_state)
{
struct ia64_pal_retval iprv;
/* Return PAL version information */
-extern inline s64
+static inline s64
ia64_pal_version (pal_version_u_t *pal_min_version, pal_version_u_t *pal_cur_version)
{
struct ia64_pal_retval iprv;
/* Return information about the virtual memory characteristics of the processor
* implementation.
*/
-extern inline s64
+static inline s64
ia64_pal_vm_info (u64 tc_level, u64 tc_type, pal_tc_info_u_t *tc_info, u64 *tc_pages)
{
struct ia64_pal_retval iprv;
/* Get page size information about the virtual memory characteristics of the processor
* implementation.
*/
-extern inline s64
+static inline s64
ia64_pal_vm_page_size (u64 *tr_pages, u64 *vw_pages)
{
struct ia64_pal_retval iprv;
/* Get summary information about the virtual memory characteristics of the processor
* implementation.
*/
-extern inline s64
+static inline s64
ia64_pal_vm_summary (pal_vm_info_1_u_t *vm_info_1, pal_vm_info_2_u_t *vm_info_2)
{
struct ia64_pal_retval iprv;
} pal_tr_valid_u_t;
/* Read a translation register */
-extern inline s64
+static inline s64
ia64_pal_tr_read (u64 reg_num, u64 tr_type, u64 *tr_buffer, pal_tr_valid_u_t *tr_valid)
{
struct ia64_pal_retval iprv;
return iprv.status;
}
+static inline s64
+ia64_pal_prefetch_visibility (void)
+{
+ struct ia64_pal_retval iprv;
+ PAL_CALL(iprv, PAL_PREFETCH_VISIBILITY, 0, 0, 0);
+ return iprv.status;
+}
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_IA64_PAL_H */
* Yeah, simulating stuff is slow, so let us catch some breath between
* timer interrupts...
*/
-# define HZ 20
+# define HZ 32
#else
# define HZ 1024
#endif
--- /dev/null
+/*
+ * parport.h: platform-specific PC-style parport initialisation
+ *
+ * Copyright (C) 1999, 2000 Tim Waugh <tim@cyberelk.demon.co.uk>
+ *
+ * This file should only be included by drivers/parport/parport_pc.c.
+ */
+
+#ifndef _ASM_IA64_PARPORT_H
+#define _ASM_IA64_PARPORT_H 1
+
+static int __devinit parport_pc_find_isa_ports (int autoirq, int autodma);
+
+static int __devinit
+parport_pc_find_nonpci_ports (int autoirq, int autodma)
+{
+ return parport_pc_find_isa_ports(autoirq, autodma);
+}
+
+#endif /* _ASM_IA64_PARPORT_H */
#ifndef _ASM_IA64_PCI_H
#define _ASM_IA64_PCI_H
+#include <linux/config.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>
struct pci_dev;
-extern inline void pcibios_set_master(struct pci_dev *dev)
+static inline void pcibios_set_master(struct pci_dev *dev)
{
/* No special bus mastering setup handling */
}
-extern inline void pcibios_penalize_isa_irq(int irq)
+static inline void pcibios_penalize_isa_irq(int irq)
{
/* We don't do dynamic PCI IRQ allocation */
}
* only drive the low 24-bits during PCI bus mastering, then
* you would pass 0x00ffffff as the mask to this function.
*/
-extern inline int
+static inline int
pci_dma_supported(struct pci_dev *hwdev, dma_addr_t mask)
{
return 1;
#define pte_quicklist (my_cpu_data.pte_quick)
#define pgtable_cache_size (my_cpu_data.pgtable_cache_sz)
-extern __inline__ pgd_t*
+static __inline__ pgd_t*
get_pgd_slow (void)
{
pgd_t *ret = (pgd_t *)__get_free_page(GFP_KERNEL);
return ret;
}
-extern __inline__ pgd_t*
+static __inline__ pgd_t*
get_pgd_fast (void)
{
unsigned long *ret = pgd_quicklist;
return (pgd_t *)ret;
}
-extern __inline__ pgd_t*
+static __inline__ pgd_t*
pgd_alloc (void)
{
pgd_t *pgd;
return pgd;
}
-extern __inline__ void
+static __inline__ void
free_pgd_fast (pgd_t *pgd)
{
*(unsigned long *)pgd = (unsigned long) pgd_quicklist;
++pgtable_cache_size;
}
-extern __inline__ pmd_t *
+static __inline__ pmd_t *
get_pmd_slow (void)
{
pmd_t *pmd = (pmd_t *) __get_free_page(GFP_KERNEL);
return pmd;
}
-extern __inline__ pmd_t *
+static __inline__ pmd_t *
get_pmd_fast (void)
{
unsigned long *ret = (unsigned long *)pmd_quicklist;
return (pmd_t *)ret;
}
-extern __inline__ void
+static __inline__ void
free_pmd_fast (pmd_t *pmd)
{
*(unsigned long *)pmd = (unsigned long) pmd_quicklist;
++pgtable_cache_size;
}
-extern __inline__ void
+static __inline__ void
free_pmd_slow (pmd_t *pmd)
{
free_page((unsigned long)pmd);
extern pte_t *get_pte_slow (pmd_t *pmd, unsigned long address_preadjusted);
-extern __inline__ pte_t *
+static __inline__ pte_t *
get_pte_fast (void)
{
unsigned long *ret = (unsigned long *)pte_quicklist;
return (pte_t *)ret;
}
-extern __inline__ void
+static __inline__ void
free_pte_fast (pte_t *pte)
{
*(unsigned long *)pte = (unsigned long) pte_quicklist;
extern void __handle_bad_pgd (pgd_t *pgd);
extern void __handle_bad_pmd (pmd_t *pmd);
-extern __inline__ pte_t*
+static __inline__ pte_t*
pte_alloc (pmd_t *pmd, unsigned long vmaddr)
{
unsigned long offset;
return (pte_t *) pmd_page(*pmd) + offset;
}
-extern __inline__ pmd_t*
+static __inline__ pmd_t*
pmd_alloc (pgd_t *pgd, unsigned long vmaddr)
{
unsigned long offset;
/*
* Flush a specified user mapping
*/
-extern __inline__ void
+static __inline__ void
flush_tlb_mm (struct mm_struct *mm)
{
if (mm) {
/*
* This file contains the functions and defines necessary to modify and use
- * the ia-64 page table tree.
+ * the IA-64 page table tree.
*
- * This hopefully works with any (fixed) ia-64 page-size, as defined
+ * This hopefully works with any (fixed) IA-64 page-size, as defined
* in <asm/page.h> (currently 8192).
*
* Copyright (C) 1998-2000 Hewlett-Packard Co
#define IA64_MAX_PHYS_BITS 50 /* max. number of physical address bits (architected) */
-/* Is ADDR a valid kernel address? */
-#define kern_addr_valid(addr) ((addr) >= TASK_SIZE)
-
-/* Is ADDR a valid physical address? */
-#define phys_addr_valid(addr) (((addr) & my_cpu_data.unimpl_pa_mask) == 0)
-
/*
* First, define the various bits in a PTE. Note that the PTE format
 * matches the VHPT short format, the first doubleword of the VHPT long
#include <asm/bitops.h>
#include <asm/mmu_context.h>
+#include <asm/processor.h>
#include <asm/system.h>
/*
* Given a pointer to an mem_map[] entry, return the kernel virtual
* address corresponding to that page.
*/
-#define page_address(page) ((void *) (PAGE_OFFSET + (((page) - mem_map) << PAGE_SHIFT)))
+#define page_address(page) ((page)->virtual)
/*
* Now for some cache flushing routines. This is the kind of stuff
ia64_flush_icache_page((unsigned long) page_address(pg)); \
} while (0)
+/* Quick test to see if ADDR is a (potentially) valid physical address. */
+static __inline__ long
+ia64_phys_addr_valid (unsigned long addr)
+{
+ return (addr & (my_cpu_data.unimpl_pa_mask)) == 0;
+}
+
+/*
+ * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
+ * memory. For the return value to be meaningful, ADDR must be >=
+ * PAGE_OFFSET. This operation can be relatively expensive (e.g.,
+ * require a hash-, or multi-level tree-lookup or something of that
+ * sort) but it guarantees to return TRUE only if accessing the page
+ * at that address does not cause an error. Note that there may be
+ * addresses for which kern_addr_valid() returns FALSE even though an
+ * access would not cause an error (e.g., this is typically true for
+ * memory mapped I/O regions).
+ *
+ * XXX Need to implement this for IA-64.
+ */
+#define kern_addr_valid(addr) (1)
+
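The mask test performed by ia64_phys_addr_valid() above can be exercised on its own; the sketch below hard-codes a 50-bit mask as an assumption mirroring IA64_MAX_PHYS_BITS (on real hardware the unimplemented-bit mask comes from PAL via my_cpu_data.unimpl_pa_mask):

```c
#include <stdint.h>

/* Assumption: 50 implemented physical-address bits, as suggested by
 * IA64_MAX_PHYS_BITS; any higher bit set makes the address invalid. */
#define UNIMPL_PA_MASK (~((UINT64_C(1) << 50) - 1))

static long phys_addr_valid(uint64_t addr)
{
	/* Valid iff no unimplemented address bits are set. */
	return (addr & UNIMPL_PA_MASK) == 0;
}
```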
/*
* Now come the defines and routines to manage and access the three-level
* page table.
#define pmd_set(pmdp, ptep) (pmd_val(*(pmdp)) = __pa(ptep))
#define pmd_none(pmd) (!pmd_val(pmd))
-#define pmd_bad(pmd) (!phys_addr_valid(pmd_val(pmd)))
+#define pmd_bad(pmd) (!ia64_phys_addr_valid(pmd_val(pmd)))
#define pmd_present(pmd) (pmd_val(pmd) != 0UL)
#define pmd_clear(pmdp) (pmd_val(*(pmdp)) = 0UL)
#define pmd_page(pmd) ((unsigned long) __va(pmd_val(pmd) & _PFN_MASK))
#define pgd_set(pgdp, pmdp) (pgd_val(*(pgdp)) = __pa(pmdp))
#define pgd_none(pgd) (!pgd_val(pgd))
-#define pgd_bad(pgd) (!phys_addr_valid(pgd_val(pgd)))
+#define pgd_bad(pgd) (!ia64_phys_addr_valid(pgd_val(pgd)))
#define pgd_present(pgd) (pgd_val(pgd) != 0UL)
#define pgd_clear(pgdp) (pgd_val(*(pgdp)) = 0UL)
#define pgd_page(pgd) ((unsigned long) __va(pgd_val(pgd) & _PFN_MASK))
/*
* Return the region index for virtual address ADDRESS.
*/
-extern __inline__ unsigned long
+static __inline__ unsigned long
rgn_index (unsigned long address)
{
ia64_va a;
/*
* Return the region offset for virtual address ADDRESS.
*/
-extern __inline__ unsigned long
+static __inline__ unsigned long
rgn_offset (unsigned long address)
{
ia64_va a;
#define RGN_SIZE (1UL << 61)
#define RGN_KERNEL 7
-extern __inline__ unsigned long
+static __inline__ unsigned long
pgd_index (unsigned long address)
{
unsigned long region = address >> 61;
/* The offset in the 1-level directory is given by the 3 region bits
(61..63) and the seven level-1 bits (33-39). */
-extern __inline__ pgd_t*
+static __inline__ pgd_t*
pgd_offset (struct mm_struct *mm, unsigned long address)
{
return mm->pgd + pgd_index(address);
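Per the comment above, the one-level directory index concatenates the three region bits (61..63) with the seven level-1 bits (33..39). A standalone sketch of that bit arithmetic; the shift of 33 is an assumption tied to the default 8KB page size:

```c
#include <stdint.h>

#define PGDIR_SHIFT 33	/* assumed: level-1 index starts at bit 33 for 8KB pages */

static unsigned long pgd_index(uint64_t address)
{
	unsigned long region = address >> 61;			/* bits 61..63 */
	unsigned long l1 = (address >> PGDIR_SHIFT) & 0x7f;	/* bits 33..39 */
	return (region << 7) | l1;
}
```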
#define pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define module_map vmalloc
-#define module_unmap vfree
-
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define PageSkip(page) (0)
* ZERO_PAGE is a global shared page that is always zero: used
* for zero-mapped memory areas etc..
*/
-extern unsigned long empty_zero_page[1024];
+extern unsigned long empty_zero_page[PAGE_SIZE/sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
# endif /* !__ASSEMBLY__ */
#define my_cpu_data cpu_data[smp_processor_id()]
#ifdef CONFIG_SMP
-# define loops_per_sec() my_cpu_data.loops_per_sec
+# define ia64_loops_per_sec() my_cpu_data.loops_per_sec
#else
-# define loops_per_sec() loops_per_sec
+# define ia64_loops_per_sec() loops_per_sec
#endif
extern struct cpuinfo_ia64 cpu_data[NR_CPUS];
__u64 csd; /* IA32 code selector descriptor */
__u64 ssd; /* IA32 stack selector descriptor */
__u64 tssd; /* IA32 TSS descriptor */
+ __u64 old_iob; /* old IOBase value */
union {
__u64 sigmask; /* aligned mask for sigsuspend scall */
} un;
-# define INIT_THREAD_IA32 , 0, 0, 0x17800000037fULL, 0, 0, 0, 0, 0, {0}
+# define INIT_THREAD_IA32 , 0, 0, 0x17800000037fULL, 0, 0, 0, 0, 0, 0, {0}
#else
# define INIT_THREAD_IA32
#endif /* CONFIG_IA32_SUPPORT */
#define start_thread(regs,new_ip,new_sp) do { \
set_fs(USER_DS); \
+ ia64_psr(regs)->dfh = 1; /* disable fph */ \
+ ia64_psr(regs)->mfh = 0; /* clear mfh */ \
ia64_psr(regs)->cpl = 3; /* set user mode */ \
ia64_psr(regs)->ri = 0; /* clear return slot number */ \
ia64_psr(regs)->is = 0; /* IA-64 instruction set */ \
/* Return stack pointer of blocked task TSK. */
#define KSTK_ESP(tsk) ((tsk)->thread.ksp)
+#ifndef CONFIG_SMP
+
static inline struct task_struct *
ia64_get_fpu_owner (void)
{
__asm__ __volatile__ ("mov ar.k5=%0" :: "r"(t));
}
+#endif /* !CONFIG_SMP */
+
extern void __ia64_init_fpu (void);
extern void __ia64_save_fpu (struct ia64_fpreg *fph);
extern void __ia64_load_fpu (struct ia64_fpreg *fph);
ia64_fph_disable();
}
-extern inline void
+static inline void
ia64_fc (void *addr)
{
__asm__ __volatile__ ("fc %0" :: "r"(addr) : "memory");
}
-extern inline void
+static inline void
ia64_sync_i (void)
{
__asm__ __volatile__ (";; sync.i" ::: "memory");
}
-extern inline void
+static inline void
ia64_srlz_i (void)
{
__asm__ __volatile__ (";; srlz.i ;;" ::: "memory");
}
-extern inline void
+static inline void
ia64_srlz_d (void)
{
__asm__ __volatile__ (";; srlz.d" ::: "memory");
}
-extern inline __u64
+static inline __u64
ia64_get_rr (__u64 reg_bits)
{
__u64 r;
return r;
}
-extern inline void
+static inline void
ia64_set_rr (__u64 reg_bits, __u64 rr_val)
{
__asm__ __volatile__ ("mov rr[%0]=%1" :: "r"(reg_bits), "r"(rr_val) : "memory");
}
-extern inline __u64
+static inline __u64
ia64_get_dcr (void)
{
__u64 r;
return r;
}
-extern inline void
+static inline void
ia64_set_dcr (__u64 val)
{
__asm__ __volatile__ ("mov cr.dcr=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern inline __u64
+static inline __u64
ia64_get_lid (void)
{
__u64 r;
return r;
}
-extern inline void
+static inline void
ia64_invala (void)
{
__asm__ __volatile__ ("invala" ::: "memory");
* Insert a translation into an instruction and/or data translation
* register.
*/
-extern inline void
+static inline void
ia64_itr (__u64 target_mask, __u64 tr_num,
__u64 vmaddr, __u64 pte,
__u64 log_page_size)
* Insert a translation into the instruction and/or data translation
* cache.
*/
-extern inline void
+static inline void
ia64_itc (__u64 target_mask, __u64 vmaddr, __u64 pte,
__u64 log_page_size)
{
* Purge a range of addresses from instruction and/or data translation
* register(s).
*/
-extern inline void
+static inline void
ia64_ptr (__u64 target_mask, __u64 vmaddr, __u64 log_size)
{
if (target_mask & 0x1)
}
/* Set the interrupt vector address. The address must be suitably aligned (32KB). */
-extern inline void
+static inline void
ia64_set_iva (void *ivt_addr)
{
__asm__ __volatile__ ("mov cr.iva=%0;; srlz.i;;" :: "r"(ivt_addr) : "memory");
}
/* Set the page table address and control bits. */
-extern inline void
+static inline void
ia64_set_pta (__u64 pta)
{
/* Note: srlz.i implies srlz.d */
__asm__ __volatile__ ("mov cr.pta=%0;; srlz.i;;" :: "r"(pta) : "memory");
}
-extern inline __u64
+static inline __u64
ia64_get_cpuid (__u64 regnum)
{
__u64 r;
return r;
}
-extern inline void
+static inline void
ia64_eoi (void)
{
__asm__ ("mov cr.eoi=r0;; srlz.d;;" ::: "memory");
}
-extern __inline__ void
+static inline void
ia64_set_lrr0 (__u8 vector, __u8 masked)
{
if (masked > 1)
}
-extern __inline__ void
+static inline void
ia64_set_lrr1 (__u8 vector, __u8 masked)
{
if (masked > 1)
:: "r"((masked << 16) | vector) : "memory");
}
-extern __inline__ void
+static inline void
ia64_set_pmv (__u64 val)
{
__asm__ __volatile__ ("mov cr.pmv=%0" :: "r"(val) : "memory");
}
-extern __inline__ __u64
+static inline __u64
ia64_get_pmc (__u64 regnum)
{
__u64 retval;
return retval;
}
-extern __inline__ void
+static inline void
ia64_set_pmc (__u64 regnum, __u64 value)
{
__asm__ __volatile__ ("mov pmc[%0]=%1" :: "r"(regnum), "r"(value));
}
-extern __inline__ __u64
+static inline __u64
ia64_get_pmd (__u64 regnum)
{
__u64 retval;
return retval;
}
-extern __inline__ void
+static inline void
ia64_set_pmd (__u64 regnum, __u64 value)
{
__asm__ __volatile__ ("mov pmd[%0]=%1" :: "r"(regnum), "r"(value));
* Given the address to which a spill occurred, return the unat bit
* number that corresponds to this address.
*/
-extern inline __u64
+static inline __u64
ia64_unat_pos (void *spill_addr)
{
return ((__u64) spill_addr >> 3) & 0x3f;
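The UNAT bit-position computation is self-contained and easy to verify in isolation (a minimal sketch, using a plain integer address for testability):

```c
#include <stdint.h>

/* Each spilled 64-bit register occupies 8 bytes; address bits 3..8
 * select which of the 64 UNAT bits tracks that spill slot. */
static uint64_t ia64_unat_pos(uint64_t spill_addr)
{
	return (spill_addr >> 3) & 0x3f;
}
```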
* Set the NaT bit of an integer register which was spilled at address
* SPILL_ADDR. UNAT is the mask to be updated.
*/
-extern inline void
+static inline void
ia64_set_unat (__u64 *unat, void *spill_addr, unsigned long nat)
{
__u64 bit = ia64_unat_pos(spill_addr);
* Return saved PC of a blocked thread.
* Note that the only way T can block is through a call to schedule() -> switch_to().
*/
-extern inline unsigned long
+static inline unsigned long
thread_saved_pc (struct thread_struct *t)
{
struct unw_frame_info info;
/*
* Set the correctable machine check vector register
*/
-extern __inline__ void
+static inline void
ia64_set_cmcv (__u64 val)
{
__asm__ __volatile__ ("mov cr.cmcv=%0" :: "r"(val) : "memory");
/*
* Read the correctable machine check vector register
*/
-extern __inline__ __u64
+static inline __u64
ia64_get_cmcv (void)
{
__u64 val;
return val;
}
-extern inline __u64
+static inline __u64
ia64_get_ivr (void)
{
__u64 r;
return r;
}
-extern inline void
+static inline void
ia64_set_tpr (__u64 val)
{
__asm__ __volatile__ ("mov cr.tpr=%0" :: "r"(val));
}
-extern inline __u64
+static inline __u64
ia64_get_tpr (void)
{
__u64 r;
return r;
}
-extern __inline__ void
+static inline void
ia64_set_irr0 (__u64 val)
{
__asm__ __volatile__("mov cr.irr0=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern __inline__ __u64
+static inline __u64
ia64_get_irr0 (void)
{
__u64 val;
- __asm__ ("mov %0=cr.irr0" : "=r"(val));
+ /* this is volatile because irr may change unbeknownst to gcc... */
+ __asm__ __volatile__("mov %0=cr.irr0" : "=r"(val));
return val;
}
-extern __inline__ void
+static inline void
ia64_set_irr1 (__u64 val)
{
__asm__ __volatile__("mov cr.irr1=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern __inline__ __u64
+static inline __u64
ia64_get_irr1 (void)
{
__u64 val;
- __asm__ ("mov %0=cr.irr1" : "=r"(val));
+ /* this is volatile because irr may change unbeknownst to gcc... */
+ __asm__ __volatile__("mov %0=cr.irr1" : "=r"(val));
return val;
}
-extern __inline__ void
+static inline void
ia64_set_irr2 (__u64 val)
{
__asm__ __volatile__("mov cr.irr2=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern __inline__ __u64
+static inline __u64
ia64_get_irr2 (void)
{
__u64 val;
- __asm__ ("mov %0=cr.irr2" : "=r"(val));
+ /* this is volatile because irr may change unbeknownst to gcc... */
+ __asm__ __volatile__("mov %0=cr.irr2" : "=r"(val));
return val;
}
-extern __inline__ void
+static inline void
ia64_set_irr3 (__u64 val)
{
__asm__ __volatile__("mov cr.irr3=%0;;" :: "r"(val) : "memory");
ia64_srlz_d();
}
-extern __inline__ __u64
+static inline __u64
ia64_get_irr3 (void)
{
__u64 val;
- __asm__ ("mov %0=cr.irr3" : "=r"(val));
+ /* this is volatile because irr may change unbeknownst to gcc... */
+ __asm__ __volatile__("mov %0=cr.irr3" : "=r"(val));
return val;
}
-extern __inline__ __u64
+static inline __u64
ia64_get_gp(void)
{
__u64 val;
#define ia64_rotl(w,n) ia64_rotr((w),(64)-(n))
-extern __inline__ __u64
+static inline __u64
ia64_thash (__u64 addr)
{
__u64 result;
extern void show_regs (struct pt_regs *);
extern long ia64_peek (struct pt_regs *, struct task_struct *, unsigned long addr, long *val);
extern long ia64_poke (struct pt_regs *, struct task_struct *, unsigned long addr, long val);
+ extern void ia64_flush_fph (struct task_struct *t);
extern void ia64_sync_fph (struct task_struct *t);
#ifdef CONFIG_IA64_NEW_UNWIND
* unsigned long dbr[8];
* unsigned long rsvd2[504];
* unsigned long ibr[8];
+ * unsigned long rsvd3[504];
+ * unsigned long pmd[4];
* }
*/
#define PT_B4 0x07f0
#define PT_B5 0x07f8
+#define PT_AR_EC 0x0800
#define PT_AR_LC 0x0808
/* pt_regs */
#define PT_DBR 0x2000 /* data breakpoint registers */
#define PT_IBR 0x3000 /* instruction breakpoint registers */
+#define PT_PMD 0x4000 /* performance monitoring counters */
#endif /* _ASM_IA64_PTRACE_OFFSETS_H */
*/
#include <linux/config.h>
+#include <linux/spinlock.h>
#include <asm/pal.h>
#include <asm/system.h>
char reserved2[8];
};
-struct ia64_sal_desc_ptc {
+typedef struct ia64_sal_desc_ptc {
char type;
char reserved1[3];
unsigned int num_domains; /* # of coherence domains */
- long domain_info; /* physical address of domain info table */
-};
+ s64 domain_info; /* physical address of domain info table */
+} ia64_sal_desc_ptc_t;
+
+typedef struct ia64_sal_ptc_domain_info {
+ unsigned long proc_count; /* number of processors in domain */
+ long proc_list; /* physical address of LID array */
+} ia64_sal_ptc_domain_info_t;
+
+typedef struct ia64_sal_ptc_domain_proc_entry {
+ unsigned char id; /* id of processor */
+ unsigned char eid; /* eid of processor */
+} ia64_sal_ptc_domain_proc_entry_t;
#define IA64_SAL_AP_EXTERNAL_INT 0
};
extern ia64_sal_handler ia64_sal;
+extern struct ia64_sal_desc_ptc *ia64_ptc_domain_info;
extern const char *ia64_sal_strerror (long status);
extern void ia64_sal_init (struct ia64_sal_systab *sal_systab);
* Now define a couple of inline functions for improved type checking
* and convenience.
*/
-extern inline long
+static inline long
ia64_sal_freq_base (unsigned long which, unsigned long *ticks_per_second,
unsigned long *drift_info)
{
}
/* Flush all the processor and platform level instruction and/or data caches */
-extern inline s64
+static inline s64
ia64_sal_cache_flush (u64 cache_type)
{
struct ia64_sal_retval isrv;
/* Initialize all the processor and platform level instruction and data caches */
-extern inline s64
+static inline s64
ia64_sal_cache_init (void)
{
struct ia64_sal_retval isrv;
/* Clear the processor and platform information logged by SAL with respect to the
 * machine state at the time of MCAs, INITs or CMCs
*/
-extern inline s64
+static inline s64
ia64_sal_clear_state_info (u64 sal_info_type, u64 sal_info_sub_type)
{
struct ia64_sal_retval isrv;
/* Get the processor and platform information logged by SAL with respect to the machine
* state at the time of the MCAs, INITs or CMCs.
*/
-extern inline u64
+static inline u64
ia64_sal_get_state_info (u64 sal_info_type, u64 sal_info_sub_type, u64 *sal_info)
{
struct ia64_sal_retval isrv;
/* Get the maximum size of the information logged by SAL with respect to the machine
* state at the time of MCAs, INITs or CMCs
*/
-extern inline u64
+static inline u64
ia64_sal_get_state_info_size (u64 sal_info_type, u64 sal_info_sub_type)
{
struct ia64_sal_retval isrv;
/* Causes the processor to go into a spin loop within SAL where SAL awaits a wakeup
* from the monarch processor.
*/
-extern inline s64
+static inline s64
ia64_sal_mc_rendez (void)
{
struct ia64_sal_retval isrv;
* the machine check rendezvous sequence as well as the mechanism to wake up the
* non-monarch processor at the end of machine check processing.
*/
-extern inline s64
+static inline s64
ia64_sal_mc_set_params (u64 param_type, u64 i_or_m, u64 i_or_m_val, u64 timeout)
{
struct ia64_sal_retval isrv;
}
/* Read from PCI configuration space */
-extern inline s64
+static inline s64
ia64_sal_pci_config_read (u64 pci_config_addr, u64 size, u64 *value)
{
struct ia64_sal_retval isrv;
}
/* Write to PCI configuration space */
-extern inline s64
+static inline s64
ia64_sal_pci_config_write (u64 pci_config_addr, u64 size, u64 value)
{
struct ia64_sal_retval isrv;
* Register physical addresses of locations needed by SAL when SAL
* procedures are invoked in virtual mode.
*/
-extern inline s64
+static inline s64
ia64_sal_register_physical_addr (u64 phys_entry, u64 phys_addr)
{
struct ia64_sal_retval isrv;
* or entry points where SAL will pass control for the specified event. These event
 * handlers are for the boot rendezvous, MCAs and INIT scenarios.
*/
-extern inline s64
+static inline s64
ia64_sal_set_vectors (u64 vector_type,
u64 handler_addr1, u64 gp1, u64 handler_len1,
u64 handler_addr2, u64 gp2, u64 handler_len2)
return isrv.status;
}
/* Update the contents of PAL block in the non-volatile storage device */
-extern inline s64
+static inline s64
ia64_sal_update_pal (u64 param_buf, u64 scratch_buf, u64 scratch_buf_size,
u64 *error_code, u64 *scratch_buf_size_needed)
{
#define DECLARE_MUTEX(name) __DECLARE_SEMAPHORE_GENERIC(name, 1)
#define DECLARE_MUTEX_LOCKED(name) __DECLARE_SEMAPHORE_GENERIC(name, 0)
-extern inline void
+static inline void
sema_init (struct semaphore *sem, int val)
{
*sem = (struct semaphore) __SEMAPHORE_INITIALIZER(*sem, val);
* Atomically decrement the semaphore's count. If it goes negative,
* block the calling thread in the TASK_UNINTERRUPTIBLE state.
*/
-extern inline void
+static inline void
down (struct semaphore *sem)
{
#if WAITQUEUE_DEBUG
* Atomically decrement the semaphore's count. If it goes negative,
* block the calling thread in the TASK_INTERRUPTIBLE state.
*/
-extern inline int
+static inline int
down_interruptible (struct semaphore * sem)
{
int ret = 0;
return ret;
}
-extern inline int
+static inline int
down_trylock (struct semaphore *sem)
{
int ret = 0;
return ret;
}
-extern inline void
+static inline void
up (struct semaphore * sem)
{
#if WAITQUEUE_DEBUG
extern void __down_write_failed (struct rw_semaphore *sem, long count);
extern void __rwsem_wake (struct rw_semaphore *sem, long count);
-extern inline void
+static inline void
init_rwsem (struct rw_semaphore *sem)
{
sem->count = RW_LOCK_BIAS;
#endif
}
-extern inline void
+static inline void
down_read (struct rw_semaphore *sem)
{
long count;
#endif
}
-extern inline void
+static inline void
down_write (struct rw_semaphore *sem)
{
long old_count, new_count;
* case is when there was a writer waiting, and we've
* bumped the count to 0: we must wake the writer up.
*/
-extern inline void
+static inline void
__up_read (struct rw_semaphore *sem)
{
long count;
* Releasing the writer is easy -- just release it and
* wake up any sleepers.
*/
-extern inline void
+static inline void
__up_write (struct rw_semaphore *sem)
{
long old_count, new_count;
__rwsem_wake(sem, new_count);
}
-extern inline void
+static inline void
up_read (struct rw_semaphore *sem)
{
#if WAITQUEUE_DEBUG
__up_read(sem);
}
-extern inline void
+static inline void
up_write (struct rw_semaphore *sem)
{
#if WAITQUEUE_DEBUG
#ifdef __KERNEL__
#include <linux/string.h>
-extern inline void copy_siginfo(siginfo_t *to, siginfo_t *from)
+static inline void
+copy_siginfo (siginfo_t *to, siginfo_t *from)
{
if (from->si_code < 0)
memcpy(to, from, sizeof(siginfo_t));
#define smp_processor_id() (current->processor)
-struct smp_boot_data {
+extern struct smp_boot_data {
int cpu_count;
- int cpu_map[NR_CPUS];
-};
+ int cpu_phys_id[NR_CPUS];
+} smp_boot_data __initdata;
extern unsigned long cpu_present_map;
extern unsigned long cpu_online_map;
extern unsigned long ipi_base_addr;
extern int bootstrap_processor;
-extern volatile int __cpu_number_map[NR_CPUS];
-extern volatile int __cpu_logical_map[NR_CPUS];
+extern volatile int __cpu_physical_id[NR_CPUS];
extern unsigned char smp_int_redirect;
extern char no_int_routing;
-
-#define cpu_number_map(i) __cpu_number_map[i]
-#define cpu_logical_map(i) __cpu_logical_map[i]
+extern int smp_num_cpus;
+
+#define cpu_physical_id(i) __cpu_physical_id[i]
+#define cpu_number_map(i) (i)
+#define cpu_logical_map(i) (i)
extern unsigned long ap_wakeup_vector;
+/*
+ * Function to map hard smp processor id to logical id. Slow, so
+ * don't use this in performance-critical code.
+ */
+static inline int
+cpu_logical_id (int cpuid)
+{
+ int i;
+
+ for (i=0; i<smp_num_cpus; i++) {
+ if (cpu_physical_id(i) == cpuid)
+ break;
+ }
+ return i;
+}
+
/*
* XTP control functions:
* min_xtp : route all interrupts to this CPU
* max_xtp : never deliver interrupts to this CPU.
*/
-extern __inline void
+static inline void
min_xtp(void)
{
if (smp_int_redirect & SMP_IRQ_REDIRECTION)
writeb(0x00, ipi_base_addr | XTP_OFFSET); /* XTP to min */
}
-extern __inline void
+static inline void
normal_xtp(void)
{
if (smp_int_redirect & SMP_IRQ_REDIRECTION)
writeb(0x08, ipi_base_addr | XTP_OFFSET); /* XTP normal */
}
-extern __inline void
+static inline void
max_xtp(void)
{
if (smp_int_redirect & SMP_IRQ_REDIRECTION)
writeb(0x0f, ipi_base_addr | XTP_OFFSET); /* Set XTP to max */
}
-extern __inline__ unsigned int
+static inline unsigned int
hard_smp_processor_id(void)
{
struct {
__asm__ ("mov %0=cr.lid" : "=r" (lid));
-#ifdef LARGE_CPU_ID_OK
- return lid.eid << 8 | lid.id;
-#else
- if (((lid.id << 8) | lid.eid) > NR_CPUS)
- printk("WARNING: SMP ID %d > NR_CPUS\n", (lid.id << 8) | lid.eid);
- return lid.id;
-#endif
+ return lid.id << 8 | lid.eid;
}
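hard_smp_processor_id() now returns the full 16-bit value formed from the two cr.lid byte fields rather than truncating to lid.id; the packing itself is just (a trivial sketch):

```c
#include <stdint.h>

/* Flatten the two cr.lid byte fields into one 16-bit SMP processor id,
 * with the id in the high byte and the eid in the low byte. */
static unsigned int pack_smp_id(uint8_t id, uint8_t eid)
{
	return ((unsigned int) id << 8) | eid;
}
```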
#define NO_PROC_ID (-1)
* Streamlined test_and_set_bit(0, (x)). We use test-and-test-and-set
* rather than a simple xchg to avoid writing the cache-line when
* there is contention.
- *
- * XXX Fix me: instead of preserving ar.pfs, we should just mark it
- * XXX as "clobbered". Unfortunately, the Mar 2000 release of the compiler
- * XXX doesn't let us do that. The August release fixes that.
*/
-#define spin_lock(x) \
-{ \
+#define spin_lock(x) \
+{ \
+ register char *addr __asm__ ("r31") = (char *) &(x)->lock; \
+ \
+ __asm__ __volatile__ ( \
+ "mov r30=1\n" \
+ "mov ar.ccv=r0\n" \
+ ";;\n" \
+ IA64_SEMFIX"cmpxchg1.acq r30=[%0],r30,ar.ccv\n" \
+ ";;\n" \
+ "cmp.ne p15,p0=r30,r0\n" \
+ "(p15) br.call.spnt.few b7=ia64_spinlock_contention\n" \
+ ";;\n" \
+ "1:\n" /* force a new bundle */ \
+ :: "r"(addr) \
+ : "ar.ccv", "ar.pfs", "b7", "p15", "r28", "r29", "r30", "memory"); \
+}
+
+#define spin_trylock(x) \
+({ \
register char *addr __asm__ ("r31") = (char *) &(x)->lock; \
- long saved_pfs; \
+ register long result; \
\
__asm__ __volatile__ ( \
"mov r30=1\n" \
"mov ar.ccv=r0\n" \
";;\n" \
- IA64_SEMFIX"cmpxchg1.acq r30=[%1],r30,ar.ccv\n" \
- ";;\n" \
- "cmp.ne p15,p0=r30,r0\n" \
- "mov %0=ar.pfs\n" \
- "(p15) br.call.spnt.few b7=ia64_spinlock_contention\n" \
- ";;\n" \
- "1: (p15) mov ar.pfs=%0;;\n" /* force a new bundle */ \
- : "=&r"(saved_pfs) : "r"(addr) \
- : "p15", "r28", "r29", "r30", "memory"); \
-}
-
-#define spin_trylock(x) \
-({ \
- register char *addr __asm__ ("r31") = (char *) &(x)->lock; \
- register long result; \
- \
- __asm__ __volatile__ ( \
- "mov r30=1\n" \
- "mov ar.ccv=r0\n" \
- ";;\n" \
- IA64_SEMFIX"cmpxchg1.acq %0=[%1],r30,ar.ccv\n" \
- : "=r"(result) : "r"(addr) : "r30", "memory"); \
- (result == 0); \
+ IA64_SEMFIX"cmpxchg1.acq %0=[%1],r30,ar.ccv\n" \
+ : "=r"(result) : "r"(addr) : "ar.ccv", "r30", "memory"); \
+ (result == 0); \
})
#define spin_is_locked(x) ((x)->lock != 0)
-#define spin_unlock(x) ({((spinlock_t *) x)->lock = 0;})
-#define spin_unlock_wait(x) ({ while ((x)->lock); })
+#define spin_unlock(x) do {((spinlock_t *) x)->lock = 0;} while (0)
+#define spin_unlock_wait(x) do {} while ((x)->lock)
#else /* !NEW_LOCK */
"mov r29 = 1\n" \
";;\n" \
"1:\n" \
- "ld4 r2 = %0\n" \
+ "ld4 r2 = [%0]\n" \
";;\n" \
"cmp4.eq p0,p7 = r0,r2\n" \
"(p7) br.cond.spnt.few 1b \n" \
- IA64_SEMFIX"cmpxchg4.acq r2 = %0, r29, ar.ccv\n" \
+ IA64_SEMFIX"cmpxchg4.acq r2 = [%0], r29, ar.ccv\n" \
";;\n" \
"cmp4.eq p0,p7 = r0, r2\n" \
"(p7) br.cond.spnt.few 1b\n" \
";;\n" \
- :: "m" __atomic_fool_gcc((x)) : "r2", "r29", "memory")
+ :: "r"(&(x)->lock) : "r2", "r29", "memory")
#define spin_is_locked(x) ((x)->lock != 0)
-#define spin_unlock(x) ({((spinlock_t *) x)->lock = 0; barrier();})
+#define spin_unlock(x) do {((spinlock_t *) x)->lock = 0; barrier(); } while (0)
#define spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0)
-#define spin_unlock_wait(x) ({ do { barrier(); } while ((x)->lock); })
+#define spin_unlock_wait(x) do { barrier(); } while ((x)->lock)
#endif /* !NEW_LOCK */
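The test-and-test-and-set idea behind spin_lock() above -- spin on plain loads, only attempt the atomic acquire when the lock looks free -- can be sketched portably with C11 atomics. This is a simplified illustration of the technique, not the kernel's actual implementation:

```c
#include <stdatomic.h>

typedef struct { atomic_int lock; } spinlock_t;

static void spin_lock(spinlock_t *x)
{
	for (;;) {
		/* Test first with plain loads so contended CPUs spin
		 * read-only and do not keep dirtying the cache line. */
		while (atomic_load_explicit(&x->lock, memory_order_relaxed))
			;
		/* Then try the atomic acquire (the cmpxchg1.acq above). */
		int expected = 0;
		if (atomic_compare_exchange_weak_explicit(&x->lock,
				&expected, 1,
				memory_order_acquire, memory_order_relaxed))
			return;
	}
}

static int spin_trylock(spinlock_t *x)
{
	int expected = 0;
	return atomic_compare_exchange_strong_explicit(&x->lock, &expected, 1,
			memory_order_acquire, memory_order_relaxed);
}

static void spin_unlock(spinlock_t *x)
{
	atomic_store_explicit(&x->lock, 0, memory_order_release);
}
```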
} rwlock_t;
#define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0 }
-#define read_lock(rw) \
-do { \
- int tmp = 0; \
- __asm__ __volatile__ ("1:\t"IA64_SEMFIX"fetchadd4.acq %0 = %1, 1\n" \
- ";;\n" \
- "tbit.nz p6,p0 = %0, 31\n" \
- "(p6) br.cond.sptk.few 2f\n" \
- ".section .text.lock,\"ax\"\n" \
- "2:\t"IA64_SEMFIX"fetchadd4.rel %0 = %1, -1\n" \
- ";;\n" \
- "3:\tld4.acq %0 = %1\n" \
- ";;\n" \
- "tbit.nz p6,p0 = %0, 31\n" \
- "(p6) br.cond.sptk.few 3b\n" \
- "br.cond.sptk.few 1b\n" \
- ";;\n" \
- ".previous\n" \
- : "=r" (tmp), "=m" (__atomic_fool_gcc(rw)) \
- :: "memory"); \
+#define read_lock(rw) \
+do { \
+ int tmp = 0; \
+ __asm__ __volatile__ ("1:\t"IA64_SEMFIX"fetchadd4.acq %0 = [%1], 1\n" \
+ ";;\n" \
+ "tbit.nz p6,p0 = %0, 31\n" \
+ "(p6) br.cond.sptk.few 2f\n" \
+ ".section .text.lock,\"ax\"\n" \
+ "2:\t"IA64_SEMFIX"fetchadd4.rel %0 = [%1], -1\n" \
+ ";;\n" \
+ "3:\tld4.acq %0 = [%1]\n" \
+ ";;\n" \
+ "tbit.nz p6,p0 = %0, 31\n" \
+ "(p6) br.cond.sptk.few 3b\n" \
+ "br.cond.sptk.few 1b\n" \
+ ";;\n" \
+ ".previous\n" \
+ : "=&r" (tmp) \
+ : "r" (rw): "memory"); \
} while(0)
-#define read_unlock(rw) \
-do { \
- int tmp = 0; \
- __asm__ __volatile__ (IA64_SEMFIX"fetchadd4.rel %0 = %1, -1\n" \
- : "=r" (tmp) \
- : "m" (__atomic_fool_gcc(rw)) \
- : "memory"); \
+#define read_unlock(rw) \
+do { \
+ int tmp = 0; \
+ __asm__ __volatile__ (IA64_SEMFIX"fetchadd4.rel %0 = [%1], -1\n" \
+ : "=r" (tmp) \
+ : "r" (rw) \
+ : "memory"); \
} while(0)
-#define write_lock(rw) \
-do { \
- do { \
- while ((rw)->write_lock); \
- } while (test_and_set_bit(31, (rw))); \
- while ((rw)->read_counter); \
- barrier(); \
-} while (0)
+#define write_lock(rw) \
+do { \
+ __asm__ __volatile__ ( \
+ "mov ar.ccv = r0\n" \
+ "movl r29 = 0x80000000\n" \
+ ";;\n" \
+ "1:\n" \
+ "ld4 r2 = [%0]\n" \
+ ";;\n" \
+ "cmp4.eq p0,p7 = r0,r2\n" \
+ "(p7) br.cond.spnt.few 1b \n" \
+ IA64_SEMFIX"cmpxchg4.acq r2 = [%0], r29, ar.ccv\n" \
+ ";;\n" \
+ "cmp4.eq p0,p7 = r0, r2\n" \
+ "(p7) br.cond.spnt.few 1b\n" \
+ ";;\n" \
+ :: "r"(rw) : "r2", "r29", "memory"); \
+} while(0)
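The rwlock scheme above keeps the reader count in the low 31 bits and uses bit 31 as the writer flag: readers optimistically fetchadd the count and back out if a writer is present, while a writer cmpxchges the whole word from 0. A simplified C11 sketch of the same encoding (an illustration, not the kernel code):

```c
#include <stdatomic.h>

#define WRITER_BIT 0x80000000u

typedef struct { atomic_uint count; } rwlock_t;

static void read_lock(rwlock_t *rw)
{
	for (;;) {
		/* fetchadd4.acq: optimistically bump the reader count. */
		unsigned old = atomic_fetch_add_explicit(&rw->count, 1,
				memory_order_acquire);
		if (!(old & WRITER_BIT))
			return;			/* no writer held it */
		/* A writer is active: back out (fetchadd4.rel) and wait. */
		atomic_fetch_sub_explicit(&rw->count, 1, memory_order_release);
		while (atomic_load_explicit(&rw->count, memory_order_relaxed)
		       & WRITER_BIT)
			;
	}
}

static void read_unlock(rwlock_t *rw)
{
	atomic_fetch_sub_explicit(&rw->count, 1, memory_order_release);
}

static void write_lock(rwlock_t *rw)
{
	unsigned expected = 0;
	/* cmpxchg4.acq: a writer may only enter when the count is 0. */
	while (!atomic_compare_exchange_weak_explicit(&rw->count, &expected,
			WRITER_BIT, memory_order_acquire, memory_order_relaxed))
		expected = 0;
}

static void write_unlock(rwlock_t *rw)
{
	atomic_store_explicit(&rw->count, 0, memory_order_release);
}
```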
/*
 * clear_bit() has "acq" semantics; we really need "rel" semantics,
* Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
*/
+#include <linux/config.h> /* remove this once we remove the A-step workaround... */
+#ifndef CONFIG_ITANIUM_ASTEP_SPECIFIC
+
#define __HAVE_ARCH_STRLEN 1 /* see arch/ia64/lib/strlen.S */
#define __HAVE_ARCH_MEMSET 1 /* see arch/ia64/lib/memset.S */
#define __HAVE_ARCH_MEMCPY 1 /* see arch/ia64/lib/memcpy.S */
extern void *memset (void *, int, __kernel_size_t);
extern void *memcpy (void *, const void *, __kernel_size_t);
+#endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
+
#endif /* _ASM_IA64_STRING_H */
#ifndef __ASSEMBLY__
+#include <linux/kernel.h>
#include <linux/types.h>
struct pci_vector_struct {
__u64 initrd_size;
} ia64_boot_param;
-extern inline void
+static inline void
ia64_insn_group_barrier (void)
{
__asm__ __volatile__ (";;" ::: "memory");
#define rmb() mb()
#define wmb() mb()
+#ifdef CONFIG_SMP
+# define smp_mb() mb()
+# define smp_rmb() rmb()
+# define smp_wmb() wmb()
+#else
+# define smp_mb() barrier()
+# define smp_rmb() barrier()
+# define smp_wmb() barrier()
+#endif
+
/*
* XXX check on these---I suspect what Linus really wants here is
* acquire vs release semantics but we can't discuss this stuff with
({ \
switch (sz) { \
case 4: \
- __asm__ __volatile__ (IA64_SEMFIX"fetchadd4.rel %0=%1,%3" \
- : "=r"(tmp), "=m"(__atomic_fool_gcc(v)) \
- : "m" (__atomic_fool_gcc(v)), "i"(n)); \
+ __asm__ __volatile__ (IA64_SEMFIX"fetchadd4.rel %0=[%1],%2" \
+ : "=r"(tmp) : "r"(v), "i"(n) : "memory"); \
break; \
\
case 8: \
- __asm__ __volatile__ (IA64_SEMFIX"fetchadd8.rel %0=%1,%3" \
- : "=r"(tmp), "=m"(__atomic_fool_gcc(v)) \
- : "m" (__atomic_fool_gcc(v)), "i"(n)); \
+ __asm__ __volatile__ (IA64_SEMFIX"fetchadd8.rel %0=[%1],%2" \
+ : "=r"(tmp) : "r"(v), "i"(n) : "memory"); \
break; \
\
default: \
switch (size) {
case 1:
- __asm__ __volatile (IA64_SEMFIX"xchg1 %0=%1,%2" : "=r" (result)
- : "m" (*(char *) ptr), "r" (x) : "memory");
+ __asm__ __volatile (IA64_SEMFIX"xchg1 %0=[%1],%2" : "=r" (result)
+ : "r" (ptr), "r" (x) : "memory");
return result;
case 2:
- __asm__ __volatile (IA64_SEMFIX"xchg2 %0=%1,%2" : "=r" (result)
- : "m" (*(short *) ptr), "r" (x) : "memory");
+ __asm__ __volatile (IA64_SEMFIX"xchg2 %0=[%1],%2" : "=r" (result)
+ : "r" (ptr), "r" (x) : "memory");
return result;
case 4:
- __asm__ __volatile (IA64_SEMFIX"xchg4 %0=%1,%2" : "=r" (result)
- : "m" (*(int *) ptr), "r" (x) : "memory");
+ __asm__ __volatile (IA64_SEMFIX"xchg4 %0=[%1],%2" : "=r" (result)
+ : "r" (ptr), "r" (x) : "memory");
return result;
case 8:
- __asm__ __volatile (IA64_SEMFIX"xchg8 %0=%1,%2" : "=r" (result)
- : "m" (*(long *) ptr), "r" (x) : "memory");
+ __asm__ __volatile (IA64_SEMFIX"xchg8 %0=[%1],%2" : "=r" (result)
+ : "r" (ptr), "r" (x) : "memory");
return result;
}
__xchg_called_with_bad_pointer();
*/
extern long __cmpxchg_called_with_bad_pointer(void);
-struct __xchg_dummy { unsigned long a[100]; };
-#define __xg(x) (*(struct __xchg_dummy *)(x))
-
#define ia64_cmpxchg(sem,ptr,old,new,size) \
({ \
__typeof__(ptr) _p_ = (ptr); \
__asm__ __volatile__ ("mov ar.ccv=%0;;" :: "rO"(_o_)); \
switch (size) { \
case 1: \
- __asm__ __volatile__ (IA64_SEMFIX"cmpxchg1."sem" %0=%2,%3,ar.ccv" \
- : "=r"(_r_), "=m"(__xg(_p_)) \
- : "m"(__xg(_p_)), "r"(_n_)); \
+ __asm__ __volatile__ (IA64_SEMFIX"cmpxchg1."sem" %0=[%1],%2,ar.ccv" \
+ : "=r"(_r_) : "r"(_p_), "r"(_n_) : "memory"); \
break; \
\
case 2: \
- __asm__ __volatile__ (IA64_SEMFIX"cmpxchg2."sem" %0=%2,%3,ar.ccv" \
- : "=r"(_r_), "=m"(__xg(_p_)) \
- : "m"(__xg(_p_)), "r"(_n_)); \
+ __asm__ __volatile__ (IA64_SEMFIX"cmpxchg2."sem" %0=[%1],%2,ar.ccv" \
+ : "=r"(_r_) : "r"(_p_), "r"(_n_) : "memory"); \
break; \
\
case 4: \
- __asm__ __volatile__ (IA64_SEMFIX"cmpxchg4."sem" %0=%2,%3,ar.ccv" \
- : "=r"(_r_), "=m"(__xg(_p_)) \
- : "m"(__xg(_p_)), "r"(_n_)); \
+ __asm__ __volatile__ (IA64_SEMFIX"cmpxchg4."sem" %0=[%1],%2,ar.ccv" \
+ : "=r"(_r_) : "r"(_p_), "r"(_n_) : "memory"); \
break; \
\
case 8: \
- __asm__ __volatile__ (IA64_SEMFIX"cmpxchg8."sem" %0=%2,%3,ar.ccv" \
- : "=r"(_r_), "=m"(__xg(_p_)) \
- : "m"(__xg(_p_)), "r"(_n_)); \
+ __asm__ __volatile__ (IA64_SEMFIX"cmpxchg8."sem" %0=[%1],%2,ar.ccv" \
+ : "=r"(_r_) : "r"(_p_), "r"(_n_) : "memory"); \
break; \
\
default: \
if (((next)->thread.flags & (IA64_THREAD_DBG_VALID|IA64_THREAD_PM_VALID)) \
|| IS_IA32_PROCESS(ia64_task_regs(next))) \
ia64_load_extra(next); \
- ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \
(last) = ia64_switch_to((next)); \
} while (0)
#ifdef CONFIG_SMP
/*
* In the SMP case, we save the fph state when context-switching
- * away from a thread that owned and modified fph. This way, when
- * the thread gets scheduled on another CPU, the CPU can pick up the
- * state frm task->thread.fph, avoiding the complication of having
- * to fetch the latest fph state from another CPU. If the thread
- * happens to be rescheduled on the same CPU later on and nobody
- * else has touched the FPU in the meantime, the thread will fault
- * upon the first access to fph but since the state in fph is still
- * valid, no other overheads are incurred. In other words, CPU
- * affinity is a Good Thing.
+ * away from a thread that modified fph. This way, when the thread
+ * gets scheduled on another CPU, the CPU can pick up the state from
+ * task->thread.fph, avoiding the complication of having to fetch
+ * the latest fph state from another CPU.
*/
-# define switch_to(prev,next,last) do { \
- if (ia64_get_fpu_owner() == (prev) && ia64_psr(ia64_task_regs(prev))->mfh) { \
- ia64_psr(ia64_task_regs(prev))->mfh = 0; \
- (prev)->thread.flags |= IA64_THREAD_FPH_VALID; \
- __ia64_save_fpu((prev)->thread.fph); \
- } \
- __switch_to(prev,next,last); \
+# define switch_to(prev,next,last) do { \
+ if (ia64_psr(ia64_task_regs(prev))->mfh) { \
+ ia64_psr(ia64_task_regs(prev))->mfh = 0; \
+ (prev)->thread.flags |= IA64_THREAD_FPH_VALID; \
+ __ia64_save_fpu((prev)->thread.fph); \
+ } \
+ ia64_psr(ia64_task_regs(prev))->dfh = 1; \
+ __switch_to(prev,next,last); \
} while (0)
#else
-# define switch_to(prev,next,last) __switch_to(prev,next,last)
+# define switch_to(prev,next,last) do { \
+ ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \
+ __switch_to(prev,next,last); \
+} while (0)
#endif
#endif /* __KERNEL__ */
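The SMP `switch_to` above saves fph eagerly only when `mfh` ("modified fph") is set, then sets `dfh` so the next fph access faults and can lazily restore the state. A tiny state-machine sketch of that decision (a pure userspace simulation, not kernel code; the struct and function names here are illustrative only):

```c
#include <assert.h>

struct thr { int mfh, dfh, fph_valid; };

/* Simulate the SMP-side bookkeeping: save on switch-out iff the
 * outgoing thread modified fph; always disable fph for the next
 * access so a fault can lazily restore it later. */
void sim_switch_out(struct thr *prev, int *saves)
{
	if (prev->mfh) {
		prev->mfh = 0;
		prev->fph_valid = 1;	/* IA64_THREAD_FPH_VALID */
		(*saves)++;		/* stands in for __ia64_save_fpu() */
	}
	prev->dfh = 1;			/* next fph access will fault */
}
```

A thread that never touches fph again is switched out repeatedly without any further saves, which is the point of the `mfh` test.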
#define __access_ok(addr,size,segment) (((unsigned long) (addr)) <= (segment).seg)
#define access_ok(type,addr,size) __access_ok((addr),(size),get_fs())
-extern inline int
+static inline int
verify_area (int type, const void *addr, unsigned long size)
{
return access_ok(type,addr,size) ? 0 : -EFAULT;
#define __m(x) (*(struct __large_struct *)(x))
#define __get_user_64(addr) \
- __asm__ ("\n1:\tld8 %0=%2\t// %0 and %1 get overwritten by exception handler\n" \
+ __asm__ ("\n1:\tld8 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
"2:\n" \
"\t.section __ex_table,\"a\"\n" \
"\t\tdata4 @gprel(1b)\n" \
: "m"(__m(addr)), "1"(__gu_err));
#define __get_user_32(addr) \
- __asm__ ("\n1:\tld4 %0=%2\t// %0 and %1 get overwritten by exception handler\n" \
+ __asm__ ("\n1:\tld4 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
"2:\n" \
"\t.section __ex_table,\"a\"\n" \
"\t\tdata4 @gprel(1b)\n" \
: "m"(__m(addr)), "1"(__gu_err));
#define __get_user_16(addr) \
- __asm__ ("\n1:\tld2 %0=%2\t// %0 and %1 get overwritten by exception handler\n" \
+ __asm__ ("\n1:\tld2 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
"2:\n" \
"\t.section __ex_table,\"a\"\n" \
"\t\tdata4 @gprel(1b)\n" \
: "m"(__m(addr)), "1"(__gu_err));
#define __get_user_8(addr) \
- __asm__ ("\n1:\tld1 %0=%2\t// %0 and %1 get overwritten by exception handler\n" \
+ __asm__ ("\n1:\tld1 %0=%2%P2\t// %0 and %1 get overwritten by exception handler\n" \
"2:\n" \
"\t.section __ex_table,\"a\"\n" \
"\t\tdata4 @gprel(1b)\n" \
*/
#define __put_user_64(x,addr) \
__asm__ __volatile__ ( \
- "\n1:\tst8 %1=%r2\t// %0 gets overwritten by exception handler\n" \
+ "\n1:\tst8 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
"2:\n" \
"\t.section __ex_table,\"a\"\n" \
"\t\tdata4 @gprel(1b)\n" \
#define __put_user_32(x,addr) \
__asm__ __volatile__ ( \
- "\n1:\tst4 %1=%r2\t// %0 gets overwritten by exception handler\n" \
+ "\n1:\tst4 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
"2:\n" \
"\t.section __ex_table,\"a\"\n" \
"\t\tdata4 @gprel(1b)\n" \
#define __put_user_16(x,addr) \
__asm__ __volatile__ ( \
- "\n1:\tst2 %1=%r2\t// %0 gets overwritten by exception handler\n" \
+ "\n1:\tst2 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
"2:\n" \
"\t.section __ex_table,\"a\"\n" \
"\t\tdata4 @gprel(1b)\n" \
#define __put_user_8(x,addr) \
__asm__ __volatile__ ( \
- "\n1:\tst1 %1=%r2\t// %0 gets overwritten by exception handler\n" \
+ "\n1:\tst1 %1=%r2%P1\t// %0 gets overwritten by exception handler\n" \
"2:\n" \
"\t.section __ex_table,\"a\"\n" \
"\t\tdata4 @gprel(1b)\n" \
struct __una_u32 { __u32 x __attribute__((packed)); };
struct __una_u16 { __u16 x __attribute__((packed)); };
-extern inline unsigned long
+static inline unsigned long
__uldq (const unsigned long * r11)
{
const struct __una_u64 *ptr = (const struct __una_u64 *) r11;
return ptr->x;
}
-extern inline unsigned long
+static inline unsigned long
__uldl (const unsigned int * r11)
{
const struct __una_u32 *ptr = (const struct __una_u32 *) r11;
return ptr->x;
}
-extern inline unsigned long
+static inline unsigned long
__uldw (const unsigned short * r11)
{
const struct __una_u16 *ptr = (const struct __una_u16 *) r11;
return ptr->x;
}
-extern inline void
+static inline void
__ustq (unsigned long r5, unsigned long * r11)
{
struct __una_u64 *ptr = (struct __una_u64 *) r11;
ptr->x = r5;
}
-extern inline void
+static inline void
__ustl (unsigned long r5, unsigned int * r11)
{
struct __una_u32 *ptr = (struct __una_u32 *) r11;
ptr->x = r5;
}
-extern inline void
+static inline void
__ustw (unsigned long r5, unsigned short * r11)
{
struct __una_u16 *ptr = (struct __una_u16 *) r11;
#define __NR_setpriority 1102
#define __NR_statfs 1103
#define __NR_fstatfs 1104
-#define __NR_ioperm 1105
+/* unused; used to be __NR_ioperm */
#define __NR_semget 1106
#define __NR_semop 1107
#define __NR_semctl 1108
unsigned int flags;
short hint;
short prev_script;
- unsigned long bsp;
- unsigned long sp; /* stack pointer */
- unsigned long psp; /* previous sp */
- unsigned long ip; /* instruction pointer */
- unsigned long pr_val; /* current predicates */
- unsigned long *cfm;
+
+ /* current frame info: */
+ unsigned long bsp; /* backing store pointer value */
+ unsigned long sp; /* stack pointer value */
+ unsigned long psp; /* previous sp value */
+ unsigned long ip; /* instruction pointer value */
+ unsigned long pr; /* current predicate values */
+ unsigned long *cfm_loc; /* cfm save location (or NULL) */
struct task_struct *task;
struct switch_stack *sw;
/* preserved state: */
- unsigned long *pbsp; /* previous bsp */
- unsigned long *bspstore;
- unsigned long *pfs;
- unsigned long *rnat;
- unsigned long *rp;
- unsigned long *pri_unat;
- unsigned long *unat;
- unsigned long *pr;
- unsigned long *lc;
- unsigned long *fpsr;
+ unsigned long *bsp_loc; /* previous bsp save location */
+ unsigned long *bspstore_loc;
+ unsigned long *pfs_loc;
+ unsigned long *rnat_loc;
+ unsigned long *rp_loc;
+ unsigned long *pri_unat_loc;
+ unsigned long *unat_loc;
+ unsigned long *pr_loc;
+ unsigned long *lc_loc;
+ unsigned long *fpsr_loc;
struct unw_ireg {
unsigned long *loc;
struct unw_ireg_nat {
- int type : 3; /* enum unw_nat_type */
- signed int off; /* NaT word is at loc+nat.off */
+ long type : 3; /* enum unw_nat_type */
+ signed long off : 61; /* NaT word is at loc+nat.off */
} nat;
} r4, r5, r6, r7;
- unsigned long *b1, *b2, *b3, *b4, *b5;
- struct ia64_fpreg *f2, *f3, *f4, *f5, *fr[16];
+ unsigned long *b1_loc, *b2_loc, *b3_loc, *b4_loc, *b5_loc;
+ struct ia64_fpreg *f2_loc, *f3_loc, *f4_loc, *f5_loc, *fr_loc[16];
};
/*
*/
extern int unw_unwind_to_user (struct unw_frame_info *info);
-#define unw_get_ip(info,vp) ({*(vp) = (info)->ip; 0;})
-#define unw_get_sp(info,vp) ({*(vp) = (unsigned long) (info)->sp; 0;})
-#define unw_get_psp(info,vp) ({*(vp) = (unsigned long) (info)->psp; 0;})
-#define unw_get_bsp(info,vp) ({*(vp) = (unsigned long) (info)->bsp; 0;})
-#define unw_get_cfm(info,vp) ({*(vp) = *(info)->cfm; 0;})
-#define unw_set_cfm(info,val) ({*(info)->cfm = (val); 0;})
+#define unw_is_intr_frame(info) (((info)->flags & UNW_FLAG_INTERRUPT_FRAME) != 0)
+
+static inline unsigned long
+unw_get_ip (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = (info)->ip;
+ return 0;
+}
+
+static inline unsigned long
+unw_get_sp (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = (info)->sp;
+ return 0;
+}
+
+static inline unsigned long
+unw_get_psp (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = (info)->psp;
+ return 0;
+}
+
+static inline unsigned long
+unw_get_bsp (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = (info)->bsp;
+ return 0;
+}
+
+static inline unsigned long
+unw_get_cfm (struct unw_frame_info *info, unsigned long *valp)
+{
+ *valp = *(info)->cfm_loc;
+ return 0;
+}
+
+static inline unsigned long
+unw_set_cfm (struct unw_frame_info *info, unsigned long val)
+{
+ *(info)->cfm_loc = val;
+ return 0;
+}
static inline int
unw_get_rp (struct unw_frame_info *info, unsigned long *val)
{
- if (!info->rp)
+ if (!info->rp_loc)
return -1;
- *val = *info->rp;
+ *val = *info->rp_loc;
return 0;
}
extern int unregister_sparcaudio_driver(struct sparcaudio_driver *, int);
extern void sparcaudio_output_done(struct sparcaudio_driver *, int);
extern void sparcaudio_input_done(struct sparcaudio_driver *, int);
-extern int sparcaudio_init(void);
extern int amd7930_init(void);
extern int cs4231_init(void);
extern int dbri_init(void);
-/* $Id: fcntl.h,v 1.14 2000/08/12 20:49:49 jj Exp $ */
+/* $Id: fcntl.h,v 1.15 2000/09/23 02:09:21 davem Exp $ */
#ifndef _SPARC_FCNTL_H
#define _SPARC_FCNTL_H
-/* $Id: resource.h,v 1.11 1999/12/15 17:51:59 jj Exp $
+/* $Id: resource.h,v 1.12 2000/09/23 02:09:21 davem Exp $
* resource.h: Resource definitions.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
unsigned long arg3, unsigned long arg4, unsigned long arg5)
{ smp_cross_call(func, arg1, arg2, arg3, arg4, arg5); }
+extern __inline__ int smp_call_function(void (*func)(void *info), void *info, int nonatomic, int wait)
+{
+ xc1((smpfunc_t)func, (unsigned long)info);
+ return 0;
+}
+
extern __volatile__ int __cpu_number_map[NR_CPUS];
extern __volatile__ int __cpu_logical_map[NR_CPUS];
extern unsigned long smp_proc_in_lock[NR_CPUS];
-/* $Id: atomic.h,v 1.20 2000/03/16 16:44:44 davem Exp $
+/* $Id: atomic.h,v 1.21 2000/10/03 07:28:56 anton Exp $
* atomic.h: Thankfully the V9 is at least reasonable for this
* stuff.
*
extern int unregister_sparcaudio_driver(struct sparcaudio_driver *, int);
extern void sparcaudio_output_done(struct sparcaudio_driver *, int);
extern void sparcaudio_input_done(struct sparcaudio_driver *, int);
-extern int sparcaudio_init(void);
extern int amd7930_init(void);
extern int cs4231_init(void);
-/* $Id: fcntl.h,v 1.10 2000/08/12 20:49:49 jj Exp $ */
+/* $Id: fcntl.h,v 1.11 2000/09/23 02:09:21 davem Exp $ */
#ifndef _SPARC64_FCNTL_H
#define _SPARC64_FCNTL_H
-/* $Id: resource.h,v 1.7 1999/12/15 17:52:08 jj Exp $
+/* $Id: resource.h,v 1.8 2000/09/23 02:09:21 davem Exp $
* resource.h: Resource definitions.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
-/* $Id: spitfire.h,v 1.9 1998/04/28 08:23:33 davem Exp $
+/* $Id: spitfire.h,v 1.10 2000/10/06 13:10:29 anton Exp $
* spitfire.h: SpitFire/BlackBird/Cheetah inline MMU operations.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
__asm__ __volatile__("stxa %0, [%1] %2"
: /* No outputs */
: "r" (tag), "r" (addr), "i" (ASI_DCACHE_TAG));
+ membar("#Sync");
}
/* The instruction cache lines are flushed with this, but note that
#include <linux/if.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
+#include <net/divert.h>
#include <asm/atomic.h>
#include <asm/cache.h>
(TC use only - dev_queue_xmit
returns this as NET_XMIT_SUCCESS) */
+/* Backlog congestion levels */
+#define NET_RX_SUCCESS 0 /* keep 'em coming, baby */
+#define NET_RX_CN_LOW 1 /* storm alert, just in case */
+#define NET_RX_CN_MOD 2 /* Storm on its way! */
+#define NET_RX_CN_HIGH 5 /* The storm is here */
+#define NET_RX_DROP -1 /* packet dropped */
+
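The congestion codes above are what `netif_rx()` now returns, so drivers can react to backlog pressure. A minimal sketch of a caller using that feedback (the helper name and budget values are hypothetical, not part of this patch):

```c
/* Congestion codes as defined in netdevice.h by this patch. */
#define NET_RX_SUCCESS	0
#define NET_RX_CN_LOW	1
#define NET_RX_CN_MOD	2
#define NET_RX_CN_HIGH	5
#define NET_RX_DROP	(-1)

/* Hypothetical driver helper: how many packets to pull from the
 * RX ring before yielding, given the last netif_rx() feedback. */
int rx_budget_for(int feedback)
{
	switch (feedback) {
	case NET_RX_SUCCESS:	return 64;	/* no congestion: full budget */
	case NET_RX_CN_LOW:	return 32;
	case NET_RX_CN_MOD:	return 16;
	case NET_RX_CN_HIGH:	return 4;	/* back off hard */
	default:		return 0;	/* NET_RX_DROP: stop for now */
	}
}
```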
#define net_xmit_errno(e) ((e) != NET_XMIT_CN ? -ENOBUFS : 0)
#endif
rwlock_t fastpath_lock;
struct dst_entry *fastpath[NETDEV_FASTROUTE_HMASK+1];
#endif
+#ifdef CONFIG_NET_DIVERT
+	/* this is initialized by each interface type's init routine */
+ struct divert_blk *divert;
+#endif /* CONFIG_NET_DIVERT */
};
struct softnet_data
{
int throttle;
+ int cng_level;
+ int avg_blog;
struct sk_buff_head input_pkt_queue;
struct net_device *output_queue;
struct sk_buff *completion_queue;
extern void net_call_rx_atomic(void (*fn)(void));
#define HAVE_NETIF_RX 1
-extern void netif_rx(struct sk_buff *skb);
+extern int netif_rx(struct sk_buff *skb);
extern int dev_ioctl(unsigned int cmd, void *);
extern int dev_change_flags(struct net_device *, unsigned);
extern void dev_queue_xmit_nit(struct sk_buff *skb, struct net_device *dev);
extern void netdev_unregister_fc(int bit);
extern int netdev_max_backlog;
extern unsigned long netdev_fc_xoff;
+extern atomic_t netdev_dropping;
extern int netdev_set_master(struct net_device *dev, struct net_device *master);
#ifdef CONFIG_NET_FASTROUTE
extern int netdev_fastroute;
#define PCI_DEVICE_ID_APPLE_HYDRA 0x000e
#define PCI_DEVICE_ID_APPLE_UNINORTH 0x0020
+#define PCI_VENDOR_ID_YAMAHA 0x1073
+#define PCI_DEVICE_ID_YAMAHA_724 0x0004
+#define PCI_DEVICE_ID_YAMAHA_724F 0x000d
+#define PCI_DEVICE_ID_YAMAHA_740 0x000a
+#define PCI_DEVICE_ID_YAMAHA_740C 0x000c
+#define PCI_DEVICE_ID_YAMAHA_744 0x0010
+#define PCI_DEVICE_ID_YAMAHA_754 0x0012
+
#define PCI_VENDOR_ID_NEXGEN 0x1074
#define PCI_DEVICE_ID_NEXGEN_82C501 0x4e78
#define SIOCGIFTXQLEN 0x8942 /* Get the tx queue length */
#define SIOCSIFTXQLEN 0x8943 /* Set the tx queue length */
+#define SIOCGIFDIVERT 0x8944 /* Frame diversion support */
+#define SIOCSIFDIVERT 0x8945 /* Set frame diversion options */
+
/* ARP cache control calls. */
/* 0x8950 - 0x8952 * obsolete calls, don't re-use */
NET_CORE_MSG_BURST=9,
NET_CORE_OPTMEM_MAX=10,
NET_CORE_HOT_LIST_LENGTH=11,
- NET_CORE_DIVERT_VERSION=12
+ NET_CORE_DIVERT_VERSION=12,
+ NET_CORE_NO_CONG_THRESH=13,
+ NET_CORE_NO_CONG=14,
+ NET_CORE_LO_CONG=15,
+ NET_CORE_MOD_CONG=16
};
/* /proc/sys/net/ethernet */
--- /dev/null
+/*
+ * Frame Diversion, Benoit Locher <Benoit.Locher@skf.com>
+ *
+ * Changes:
+ * 06/09/2000 BL: initial version
+ *
+ */
+
+#ifndef _LINUX_DIVERT_H
+#define _LINUX_DIVERT_H
+
+#define MAX_DIVERT_PORTS 8 /* Max number of ports to divert (tcp, udp) */
+
+/* Divertable protocols */
+#define DIVERT_PROTO_NONE 0x0000
+#define DIVERT_PROTO_IP 0x0001
+#define DIVERT_PROTO_ICMP 0x0002
+#define DIVERT_PROTO_TCP 0x0004
+#define DIVERT_PROTO_UDP 0x0008
+
+#ifdef __KERNEL__
+ #define S16 s16
+ #define U16 u16
+ #define S32 s32
+ #define U32 u32
+ #define S64 s64
+ #define U64 u64
+#else
+ #define S16 __s16
+ #define U16 __u16
+ #define S32 __s32
+ #define U32 __u32
+ #define S64 __s64
+ #define U64 __u64
+#endif
+
+/*
+ * This is an Ethernet Frame Diverter option block
+ */
+struct divert_blk
+{
+ int divert; /* are we active */
+ unsigned int protos; /* protocols */
+ U16 tcp_dst[MAX_DIVERT_PORTS]; /* specific tcp dst ports to divert */
+ U16 tcp_src[MAX_DIVERT_PORTS]; /* specific tcp src ports to divert */
+ U16 udp_dst[MAX_DIVERT_PORTS]; /* specific udp dst ports to divert */
+ U16 udp_src[MAX_DIVERT_PORTS]; /* specific udp src ports to divert */
+};
+
+/*
+ * Diversion control block, for configuration with the userspace tool
+ * divert
+ */
+
+typedef union _divert_cf_arg
+{
+ S16 int16;
+ U16 uint16;
+ S32 int32;
+ U32 uint32;
+ S64 int64;
+ U64 uint64;
+ void *ptr;
+} divert_cf_arg;
+
+
+struct divert_cf
+{
+ int cmd; /* Command */
+ divert_cf_arg arg1,
+ arg2,
+ arg3;
+ int dev_index; /* device index (eth0=0, etc...) */
+};
+
+
+/* Diversion commands */
+#define DIVCMD_DIVERT 1 /* ENABLE/DISABLE diversion */
+#define DIVCMD_IP 2 /* ENABLE/DISABLE whole IP diversion */
+#define DIVCMD_TCP 3 /* ENABLE/DISABLE whole TCP diversion */
+#define DIVCMD_TCPDST 4 /* ADD/REMOVE TCP DST port for diversion */
+#define DIVCMD_TCPSRC 5 /* ADD/REMOVE TCP SRC port for diversion */
+#define DIVCMD_UDP 6 /* ENABLE/DISABLE whole UDP diversion */
+#define DIVCMD_UDPDST 7 /* ADD/REMOVE UDP DST port for diversion */
+#define DIVCMD_UDPSRC 8 /* ADD/REMOVE UDP SRC port for diversion */
+#define DIVCMD_ICMP 9 /* ENABLE/DISABLE whole ICMP diversion */
+#define DIVCMD_GETSTATUS 10 /* GET the status of the diverter */
+#define DIVCMD_RESET 11 /* Reset the diverter on the specified dev */
+#define DIVCMD_GETVERSION 12 /* Retrieve the diverter code version (char[32]) */
+
+/* General syntax of the commands:
+ *
+ * DIVCMD_xxxxxx(arg1, arg2, arg3, dev_index)
+ *
+ * SIOCSIFDIVERT:
+ * DIVCMD_DIVERT(DIVARG1_ENABLE|DIVARG1_DISABLE, , , ifindex)
+ * DIVCMD_IP(DIVARG1_ENABLE|DIVARG1_DISABLE, , , ifindex)
+ * DIVCMD_TCP(DIVARG1_ENABLE|DIVARG1_DISABLE, , , ifindex)
+ * DIVCMD_TCPDST(DIVARG1_ADD|DIVARG1_REMOVE, port, , ifindex)
+ * DIVCMD_TCPSRC(DIVARG1_ADD|DIVARG1_REMOVE, port, , ifindex)
+ * DIVCMD_UDP(DIVARG1_ENABLE|DIVARG1_DISABLE, , , ifindex)
+ * DIVCMD_UDPDST(DIVARG1_ADD|DIVARG1_REMOVE, port, , ifindex)
+ * DIVCMD_UDPSRC(DIVARG1_ADD|DIVARG1_REMOVE, port, , ifindex)
+ * DIVCMD_ICMP(DIVARG1_ENABLE|DIVARG1_DISABLE, , , ifindex)
+ * DIVCMD_RESET(, , , ifindex)
+ *
+ * SIOCGIFDIVERT:
+ * DIVCMD_GETSTATUS(divert_blk, , , ifindex)
+ * DIVCMD_GETVERSION(string[32])
+ */
+
+
+/* Possible values for arg1 */
+#define DIVARG1_ENABLE 0 /* ENABLE something */
+#define DIVARG1_DISABLE 1 /* DISABLE something */
+#define DIVARG1_ADD 2 /* ADD something */
+#define DIVARG1_REMOVE 3 /* REMOVE something */
+
+
+#ifdef __KERNEL__
+
+/* diverter functions */
+#include <linux/skbuff.h>
+int alloc_divert_blk(struct net_device *);
+void free_divert_blk(struct net_device *);
+int divert_ioctl(unsigned int cmd, struct divert_cf *arg);
+void divert_frame(struct sk_buff *skb);
+
+#endif /* __KERNEL__ */
+
+#endif /* _LINUX_DIVERT_H */
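The syntax comment above shows `arg1` carrying the ADD/ENABLE selector and `arg2` the port for the per-port commands. A userspace-side sketch of assembling such a request, following that convention (the types are mirrored locally for a self-contained illustration, and the actual `ioctl(fd, SIOCSIFDIVERT, &cf)` call is omitted):

```c
#include <string.h>

/* Local mirrors of the header's types, for illustration only. */
typedef union {
	short s16v; unsigned short u16v;
	int s32v; unsigned int u32v;
	void *ptr;
} divert_cf_arg;

struct divert_cf {
	int cmd;
	divert_cf_arg arg1, arg2, arg3;
	int dev_index;
};

#define DIVCMD_TCPDST	4	/* ADD/REMOVE TCP DST port for diversion */
#define DIVARG1_ADD	2	/* ADD something */

/* Build the request that adds a TCP destination port to the
 * diverted set on the given interface index. */
struct divert_cf make_add_tcp_dst(unsigned short port, int ifindex)
{
	struct divert_cf cf;

	memset(&cf, 0, sizeof(cf));
	cf.cmd = DIVCMD_TCPDST;		/* which command */
	cf.arg1.u32v = DIVARG1_ADD;	/* ADD (vs. REMOVE) */
	cf.arg2.u16v = port;		/* port goes in arg2 */
	cf.dev_index = ifindex;		/* eth0 = 0, etc. */
	return cf;
}
```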
struct sock *chain;
} __attribute__((__aligned__(8)));
-extern int tcp_ehash_size;
-extern struct tcp_ehash_bucket *tcp_ehash;
-
/* This is for listening sockets, thus all sockets which possess wildcards. */
#define TCP_LHTABLE_SIZE 32 /* Yes, really, this is all you need. */
-/* tcp_ipv4.c: These need to be shared by v4 and v6 because the lookup
- * and hashing code needs to work with different AF's yet
- * the port space is shared.
- */
-extern struct sock *tcp_listening_hash[TCP_LHTABLE_SIZE];
-extern rwlock_t tcp_lhash_lock;
-extern atomic_t tcp_lhash_users;
-extern wait_queue_head_t tcp_lhash_wait;
-
/* There are a few simple rules, which allow for local port reuse by
* an application. In essence:
*
struct tcp_bind_bucket *chain;
};
-extern struct tcp_bind_hashbucket *tcp_bhash;
-extern int tcp_bhash_size;
-extern spinlock_t tcp_portalloc_lock;
+extern struct tcp_hashinfo {
+ /* This is for sockets with full identity only. Sockets here will
+ * always be without wildcards and will have the following invariant:
+ *
+ * TCP_ESTABLISHED <= sk->state < TCP_CLOSE
+ *
+ * First half of the table is for sockets not in TIME_WAIT, second half
+ * is for TIME_WAIT sockets only.
+ */
+ struct tcp_ehash_bucket *__tcp_ehash;
+
+ /* Ok, let's try this, I give up, we do need a local binding
+ * TCP hash as well as the others for fast bind/connect.
+ */
+ struct tcp_bind_hashbucket *__tcp_bhash;
+
+ int __tcp_bhash_size;
+ int __tcp_ehash_size;
+
+ /* All sockets in TCP_LISTEN state will be in here. This is the only
+ * table where wildcard'd TCP sockets can exist. Hash function here
+ * is just local port number.
+ */
+ struct sock *__tcp_listening_hash[TCP_LHTABLE_SIZE];
+
+ /* All the above members are written once at bootup and
+ * never written again _or_ are predominantly read-access.
+ *
+ * Now align to a new cache line as all the following members
+ * are often dirty.
+ */
+ rwlock_t __tcp_lhash_lock
+ __attribute__((__aligned__(SMP_CACHE_BYTES)));
+ atomic_t __tcp_lhash_users;
+ wait_queue_head_t __tcp_lhash_wait;
+ spinlock_t __tcp_portalloc_lock;
+} tcp_hashinfo;
+
+#define tcp_ehash (tcp_hashinfo.__tcp_ehash)
+#define tcp_bhash (tcp_hashinfo.__tcp_bhash)
+#define tcp_ehash_size (tcp_hashinfo.__tcp_ehash_size)
+#define tcp_bhash_size (tcp_hashinfo.__tcp_bhash_size)
+#define tcp_listening_hash (tcp_hashinfo.__tcp_listening_hash)
+#define tcp_lhash_lock (tcp_hashinfo.__tcp_lhash_lock)
+#define tcp_lhash_users (tcp_hashinfo.__tcp_lhash_users)
+#define tcp_lhash_wait (tcp_hashinfo.__tcp_lhash_wait)
+#define tcp_portalloc_lock (tcp_hashinfo.__tcp_portalloc_lock)
extern kmem_cache_t *tcp_bucket_cachep;
extern struct tcp_bind_bucket *tcp_bucket_create(struct tcp_bind_hashbucket *head,
struct vm_area_struct * mpnt;
release_segments(mm);
- mpnt = mm->mmap;
vmlist_modify_lock(mm);
+ mpnt = mm->mmap;
mm->mmap = mm->mmap_avl = mm->mmap_cache = NULL;
vmlist_modify_unlock(mm);
mm->rss = 0;
flush_cache_all();
do {
pmd_t *pmd;
- pgd_t olddir = *dir;
pmd = pmd_alloc_kernel(dir, address);
if (!pmd)
return -ENOMEM;
+
if (alloc_area_pmd(pmd, address, end - address, gfp_mask, prot))
return -ENOMEM;
- if (pgd_val(olddir) != pgd_val(*dir))
- set_pgdir(address, *dir);
+
address = (address + PGDIR_SIZE) & PGDIR_MASK;
dir++;
} while (address && (address < end));
return NULL;
}
area = get_vm_area(size, VM_ALLOC);
- if (!area) {
- BUG();
+ if (!area)
return NULL;
- }
addr = area->addr;
if (vmalloc_area_pages(VMALLOC_VMADDR(addr), size, gfp_mask, prot)) {
vfree(addr);
- BUG();
return NULL;
}
return addr;
tristate 'CCITT X.25 Packet Layer (EXPERIMENTAL)' CONFIG_X25
tristate 'LAPB Data Link Driver (EXPERIMENTAL)' CONFIG_LAPB
bool '802.2 LLC (EXPERIMENTAL)' CONFIG_LLC
+ bool 'Frame Diverter (EXPERIMENTAL)' CONFIG_NET_DIVERT
# if [ "$CONFIG_LLC" = "y" ]; then
# bool ' Netbeui (EXPERIMENTAL)' CONFIG_NETBEUI
# fi
* Authors:
* Lennert Buytenhek <buytenh@gnu.org>
*
- * $Id: br_if.c,v 1.3 2000/05/05 02:17:17 davem Exp $
+ * $Id: br_if.c,v 1.4 2000/10/05 01:58:16 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
*/
#include <linux/kernel.h>
+#include <linux/if_arp.h>
#include <linux/if_bridge.h>
#include <linux/inetdevice.h>
#include <linux/rtnetlink.h>
if (dev->br_port != NULL)
return -EBUSY;
- if (dev->flags & IFF_LOOPBACK)
+ if (dev->flags & IFF_LOOPBACK || dev->type != ARPHRD_ETHER)
return -EINVAL;
dev_hold(dev);
* Authors:
* Lennert Buytenhek <buytenh@gnu.org>
*
- * $Id: br_ioctl.c,v 1.2 2000/02/21 15:51:34 davem Exp $
+ * $Id: br_ioctl.c,v 1.3 2000/10/05 01:58:16 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
p.hold_timer_value = br_timer_get_residue(&pt->hold_timer);
if (copy_to_user((void *)arg0, &p, sizeof(p)))
- return -EINVAL;
+ return -EFAULT;
return 0;
}
OX_OBJS += netfilter.o
endif
+ifeq ($(CONFIG_NET_DIVERT),y)
+O_OBJS += dv.o
+endif
+
endif
ifdef CONFIG_NET_PROFILE
* Paul Rusty Russell : SIOCSIFNAME
* Pekka Riikonen : Netdev boot-time settings code
* Andrew Morton : Make unregister_netdevice wait indefinitely on dev->refcnt
+ * J Hadi Salim : - Backlog queue sampling
+ * - netif_rx() feedback
*/
#include <asm/uaccess.h>
#include <linux/proc_fs.h>
#include <linux/stat.h>
#include <linux/if_bridge.h>
+#include <net/divert.h>
#include <net/dst.h>
#include <net/pkt_sched.h>
#include <net/profile.h>
extern int plip_init(void);
#endif
+/* This define, if set, will randomly drop a packet when congestion
+ * is more than moderate. It helps fairness in the multi-interface
+ * case when one of them is a hog, but it kills performance for the
+ * single-interface case, so it is off by default.
+ */
+#undef RAND_LIE
+
+/* Setting this will sample the queue lengths and thus congestion
+ * via a timer instead of as each packet is received.
+ */
+#undef OFFLINE_SAMPLE
+
NET_PROFILE_DEFINE(dev_queue_xmit)
NET_PROFILE_DEFINE(softnet_process)
static struct packet_type *ptype_base[16]; /* 16 way hashed list */
static struct packet_type *ptype_all = NULL; /* Taps */
+#ifdef OFFLINE_SAMPLE
+static void sample_queue(unsigned long dummy);
+static struct timer_list samp_timer = { function: sample_queue };
+#endif
+
/*
* Our notifier list
*/
=======================================================================*/
int netdev_max_backlog = 300;
+/* These numbers are selected based on intuition and some
+ * experimentation; if you have a more scientific way of doing
+ * this, please go ahead and fix things.
+ */
+int no_cong_thresh = 10;
+int no_cong = 20;
+int lo_cong = 100;
+int mod_cong = 290;
struct netif_rx_stats netdev_rx_stat[NR_CPUS];
#ifdef CONFIG_NET_HW_FLOWCONTROL
-static atomic_t netdev_dropping = ATOMIC_INIT(0);
+atomic_t netdev_dropping = ATOMIC_INIT(0);
static unsigned long netdev_fc_mask = 1;
unsigned long netdev_fc_xoff = 0;
spinlock_t netdev_fc_lock = SPIN_LOCK_UNLOCKED;
}
#endif
+static void get_sample_stats(int cpu)
+{
+#ifdef RAND_LIE
+ unsigned long rd;
+ int rq;
+#endif
+ int blog = softnet_data[cpu].input_pkt_queue.qlen;
+ int avg_blog = softnet_data[cpu].avg_blog;
+
+	avg_blog = (avg_blog >> 1) + (blog >> 1);
+
+ if (avg_blog > mod_cong) {
+ /* Above moderate congestion levels. */
+ softnet_data[cpu].cng_level = NET_RX_CN_HIGH;
+#ifdef RAND_LIE
+ rd = net_random();
+ rq = rd % netdev_max_backlog;
+ if (rq < avg_blog) /* unlucky bastard */
+ softnet_data[cpu].cng_level = NET_RX_DROP;
+#endif
+ } else if (avg_blog > lo_cong) {
+ softnet_data[cpu].cng_level = NET_RX_CN_MOD;
+#ifdef RAND_LIE
+ rd = net_random();
+ rq = rd % netdev_max_backlog;
+ if (rq < avg_blog) /* unlucky bastard */
+ softnet_data[cpu].cng_level = NET_RX_CN_HIGH;
+#endif
+ } else if (avg_blog > no_cong)
+ softnet_data[cpu].cng_level = NET_RX_CN_LOW;
+ else /* no congestion */
+ softnet_data[cpu].cng_level = NET_RX_SUCCESS;
+
+ softnet_data[cpu].avg_blog = avg_blog;
+}
+
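`get_sample_stats()` smooths the backlog with a half-old, half-new moving average and maps the result onto the thresholds above. The same arithmetic in isolation, using the patch's default thresholds (no_cong = 20, lo_cong = 100, mod_cong = 290) and ignoring the optional RAND_LIE drop:

```c
#define NET_RX_SUCCESS	0
#define NET_RX_CN_LOW	1
#define NET_RX_CN_MOD	2
#define NET_RX_CN_HIGH	5

/* Exponentially weighted average: halve the old value, add half
 * the current queue length. */
int update_avg_blog(int avg_blog, int qlen)
{
	return (avg_blog >> 1) + (qlen >> 1);
}

/* Map the averaged backlog onto a congestion level using the
 * default thresholds from this patch. */
int cng_level_for(int avg_blog)
{
	if (avg_blog > 290)
		return NET_RX_CN_HIGH;
	if (avg_blog > 100)
		return NET_RX_CN_MOD;
	if (avg_blog > 20)
		return NET_RX_CN_LOW;
	return NET_RX_SUCCESS;		/* no congestion */
}
```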
+#ifdef OFFLINE_SAMPLE
+static void sample_queue(unsigned long dummy)
+{
+/* 10 ms or 1 ms -- I don't care -- JHS */
+ int next_tick = 1;
+ int cpu = smp_processor_id();
+
+ get_sample_stats(cpu);
+ next_tick += jiffies;
+ mod_timer(&samp_timer, next_tick);
+}
+#endif
+
+
/**
* netif_rx - post buffer to the network code
* @skb: buffer to post
* the upper (protocol) levels to process. It always succeeds. The buffer
* may be dropped during processing for congestion control or by the
* protocol layers.
+ *
+ * return values:
+ * NET_RX_SUCCESS (no congestion)
+ * NET_RX_CN_LOW (low congestion)
+ * NET_RX_CN_MOD (moderate congestion)
+ * NET_RX_CN_HIGH (high congestion)
+ * NET_RX_DROP (packet was dropped)
+ *
+ *
*/
-void netif_rx(struct sk_buff *skb)
+int netif_rx(struct sk_buff *skb)
{
int this_cpu = smp_processor_id();
struct softnet_data *queue;
__skb_queue_tail(&queue->input_pkt_queue,skb);
__cpu_raise_softirq(this_cpu, NET_RX_SOFTIRQ);
local_irq_restore(flags);
- return;
+#ifndef OFFLINE_SAMPLE
+ get_sample_stats(this_cpu);
+#endif
+ return softnet_data[this_cpu].cng_level;
}
if (queue->throttle) {
local_irq_restore(flags);
kfree_skb(skb);
+ return NET_RX_DROP;
}
/* Deliver skb to an old protocol, which is not threaded well
}
+#ifdef CONFIG_NET_DIVERT
+static inline void handle_diverter(struct sk_buff *skb)
+{
+ /* if diversion is supported on device, then divert */
+ if (skb->dev->divert && skb->dev->divert->divert)
+ divert_frame(skb);
+}
+#endif /* CONFIG_NET_DIVERT */
+
+
static void net_rx_action(struct softirq_action *h)
{
int this_cpu = smp_processor_id();
}
}
+#ifdef CONFIG_NET_DIVERT
+ if (skb->dev->divert && skb->dev->divert->divert)
+ handle_diverter(skb);
+#endif /* CONFIG_NET_DIVERT */
+
+
#if defined(CONFIG_BRIDGE) || defined(CONFIG_BRIDGE_MODULE)
if (skb->dev->br_port != NULL &&
br_handle_frame_hook != NULL) {
if (bugdet-- < 0 || jiffies - start_time > 1)
goto softnet_break;
+
+#ifdef CONFIG_NET_HW_FLOWCONTROL
+	if (queue->throttle && queue->input_pkt_queue.qlen < no_cong_thresh) {
+ if (atomic_dec_and_test(&netdev_dropping)) {
+ queue->throttle = 0;
+ netdev_wakeup();
+ goto softnet_break;
+ }
+ }
+#endif
+
}
br_read_unlock(BR_NETPROTO_LOCK);
int register_netdevice(struct net_device *dev)
{
struct net_device *d, **dp;
+#ifdef CONFIG_NET_DIVERT
+ int ret;
+#endif
spin_lock_init(&dev->queue_lock);
spin_lock_init(&dev->xmit_lock);
dev_hold(dev);
write_unlock_bh(&dev_base_lock);
+#ifdef CONFIG_NET_DIVERT
+ ret = alloc_divert_blk(dev);
+ if (ret)
+ return ret;
+#endif /* CONFIG_NET_DIVERT */
+
/*
* Default initial state at registry is that the
* device is present.
dev->deadbeaf = 0;
write_unlock_bh(&dev_base_lock);
+#ifdef CONFIG_NET_DIVERT
+ ret = alloc_divert_blk(dev);
+ if (ret)
+ return ret;
+#endif /* CONFIG_NET_DIVERT */
+
/* Notify protocols, that a new device appeared. */
notifier_call_chain(&netdev_chain, NETDEV_REGISTER, dev);
/* Notifier chain MUST detach us from master device. */
BUG_TRAP(dev->master==NULL);
+#ifdef CONFIG_NET_DIVERT
+ free_divert_blk(dev);
+#endif
+
if (dev->new_style) {
#ifdef NET_REFCNT_DEBUG
if (atomic_read(&dev->refcnt) != 1)
extern void net_device_init(void);
extern void ip_auto_config(void);
+#ifdef CONFIG_NET_DIVERT
+extern void dv_init(void);
+#endif /* CONFIG_NET_DIVERT */
int __init net_dev_init(void)
{
pktsched_init();
#endif
+#ifdef CONFIG_NET_DIVERT
+ dv_init();
+#endif /* CONFIG_NET_DIVERT */
+
/*
* Initialise the packet receive queues.
*/
queue = &softnet_data[i];
skb_queue_head_init(&queue->input_pkt_queue);
queue->throttle = 0;
+ queue->cng_level = 0;
+ queue->avg_blog = 10; /* arbitrary non-zero */
queue->completion_queue = NULL;
}
NET_PROFILE_REGISTER(dev_queue_xmit);
NET_PROFILE_REGISTER(softnet_process);
#endif
+
+#ifdef OFFLINE_SAMPLE
+ samp_timer.expires = jiffies + (10 * HZ);
+ add_timer(&samp_timer);
+#endif
+
/*
* Add the devices.
* If the call to dev->init fails, the dev is removed
--- /dev/null
+/*
+ * INET An implementation of the TCP/IP protocol suite for the LINUX
+ * operating system. INET is implemented using the BSD Socket
+ * interface as the means of communication with the user level.
+ *
+ * Generic frame diversion
+ *
+ * Version: @(#)dv.c 0.41 09/09/2000
+ *
+ * Authors:
+ * Benoit LOCHER: initial integration within the kernel with support for ethernet
+ * Dave Miller: improvements to the code (correctness, performance and source files)
+ *
+ */
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/socket.h>
+#include <linux/in.h>
+#include <linux/inet.h>
+#include <linux/ip.h>
+#include <linux/udp.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/errno.h>
+#include <linux/config.h>
+#include <linux/init.h>
+#include <net/dst.h>
+#include <net/arp.h>
+#include <net/sock.h>
+#include <net/ipv6.h>
+#include <net/ip.h>
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/checksum.h>
+#include <net/divert.h>
+#include <linux/sockios.h>
+
+const char sysctl_divert_version[32]="0.46"; /* Current version */
+
+int __init dv_init(void)
+{
+ printk(KERN_INFO "NET4: Frame Diverter %s\n", sysctl_divert_version);
+ return 0;
+}
+
+/*
+ * Allocate a divert_blk for a device. This must be an ethernet nic.
+ */
+int alloc_divert_blk(struct net_device *dev)
+{
+ int alloc_size = (sizeof(struct divert_blk) + 3) & ~3;
+
+ if (!strncmp(dev->name, "eth", 3)) {
+ printk(KERN_DEBUG "divert: allocating divert_blk for %s\n",
+ dev->name);
+
+ dev->divert = (struct divert_blk *)
+ kmalloc(alloc_size, GFP_KERNEL);
+ if (dev->divert == NULL) {
+ printk(KERN_DEBUG "divert: unable to allocate divert_blk for %s\n",
+ dev->name);
+ return -ENOMEM;
+ } else {
+ memset(dev->divert, 0, sizeof(struct divert_blk));
+ }
+ } else {
+ printk(KERN_DEBUG "divert: not allocating divert_blk for non-ethernet device %s\n",
+ dev->name);
+
+ dev->divert = NULL;
+ }
+ return 0;
+}
+
+/*
+ * Free a divert_blk allocated by the above function, if it was
+ * allocated on that device.
+ */
+void free_divert_blk(struct net_device *dev)
+{
+ if (dev->divert) {
+ kfree(dev->divert);
+ dev->divert=NULL;
+ printk(KERN_DEBUG "divert: freeing divert_blk for %s\n",
+ dev->name);
+ } else {
+ printk(KERN_DEBUG "divert: no divert_blk to free, %s not ethernet\n",
+ dev->name);
+ }
+}
+
+/*
+ * Adds a tcp/udp (source or dest) port to an array
+ */
+int add_port(u16 ports[], u16 port)
+{
+ int i;
+
+ if (port == 0)
+ return -EINVAL;
+
+ /* Storing directly in network format for performance,
+ * thanks Dave :)
+ */
+ port = htons(port);
+
+ for (i = 0; i < MAX_DIVERT_PORTS; i++) {
+ if (ports[i] == port)
+ return -EALREADY;
+ }
+
+ for (i = 0; i < MAX_DIVERT_PORTS; i++) {
+ if (ports[i] == 0) {
+ ports[i] = port;
+ return 0;
+ }
+ }
+
+ return -ENOBUFS;
+}
+
+/*
+ * Removes a tcp/udp (source or dest) port from an array
+ */
+int remove_port(u16 ports[], u16 port)
+{
+ int i;
+
+ if (port == 0)
+ return -EINVAL;
+
+ /* Storing directly in network format for performance,
+ * thanks Dave!
+ */
+ port = htons(port);
+
+ for (i = 0; i < MAX_DIVERT_PORTS; i++) {
+ if (ports[i] == port) {
+ ports[i] = 0;
+ return 0;
+ }
+ }
+
+ return -EINVAL;
+}
+
+/* Some basic sanity checks on the arguments passed to divert_ioctl() */
+int check_args(struct divert_cf *div_cf, struct net_device **dev)
+{
+ char devname[32];
+
+ if (dev == NULL)
+ return -EFAULT;
+
+ /* GETVERSION: all other args are unused */
+ if (div_cf->cmd == DIVCMD_GETVERSION)
+ return 0;
+
+ /* Network device index should reasonably be between 0 and 1000 :) */
+ if (div_cf->dev_index < 0 || div_cf->dev_index > 1000)
+ return -EINVAL;
+
+ /* Let's try to find the ifname */
+ sprintf(devname, "eth%d", div_cf->dev_index);
+ *dev = dev_get_by_name(devname);
+
+ /* dev should NOT be null */
+ if (*dev == NULL)
+ return -EINVAL;
+
+ /* user issuing the ioctl must be a super one :) */
+ if (!suser())
+ return -EPERM;
+
+ /* Device must have a divert_blk member NOT null */
+ if ((*dev)->divert == NULL)
+ return -EFAULT;
+
+ return 0;
+}
+
+/*
+ * control function of the diverter
+ */
+#define DVDBG(a) \
+ printk(KERN_DEBUG "divert_ioctl() line %d %s\n", __LINE__, (a))
+
+int divert_ioctl(unsigned int cmd, struct divert_cf *arg)
+{
+ struct divert_cf div_cf;
+ struct divert_blk *div_blk;
+ struct net_device *dev;
+ int ret;
+
+ switch (cmd) {
+ case SIOCGIFDIVERT:
+ DVDBG("SIOCGIFDIVERT, copy_from_user");
+ if (copy_from_user(&div_cf, arg, sizeof(struct divert_cf)))
+ return -EFAULT;
+ DVDBG("before check_args");
+ ret = check_args(&div_cf, &dev);
+ if (ret)
+ return ret;
+ DVDBG("after check_args");
+ div_blk = dev->divert;
+
+ DVDBG("before switch()");
+ switch (div_cf.cmd) {
+ case DIVCMD_GETSTATUS:
+ /* Now, just give the user the raw divert block
+ * for him to play with :)
+ */
+ if (copy_to_user(div_cf.arg1.ptr, dev->divert,
+ sizeof(struct divert_blk)))
+ return -EFAULT;
+ break;
+
+ case DIVCMD_GETVERSION:
+ DVDBG("GETVERSION: checking ptr");
+ if (div_cf.arg1.ptr == NULL)
+ return -EINVAL;
+ DVDBG("GETVERSION: copying data to userland");
+ if (copy_to_user(div_cf.arg1.ptr,
+ sysctl_divert_version, 32))
+ return -EFAULT;
+ DVDBG("GETVERSION: data copied");
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case SIOCSIFDIVERT:
+ if (copy_from_user(&div_cf, arg, sizeof(struct divert_cf)))
+ return -EFAULT;
+
+ ret = check_args(&div_cf, &dev);
+ if (ret)
+ return ret;
+
+ div_blk = dev->divert;
+
+ switch(div_cf.cmd) {
+ case DIVCMD_RESET:
+ div_blk->divert = 0;
+ div_blk->protos = DIVERT_PROTO_NONE;
+ memset(div_blk->tcp_dst, 0,
+ MAX_DIVERT_PORTS * sizeof(u16));
+ memset(div_blk->tcp_src, 0,
+ MAX_DIVERT_PORTS * sizeof(u16));
+ memset(div_blk->udp_dst, 0,
+ MAX_DIVERT_PORTS * sizeof(u16));
+ memset(div_blk->udp_src, 0,
+ MAX_DIVERT_PORTS * sizeof(u16));
+ return 0;
+
+ case DIVCMD_DIVERT:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ENABLE:
+ if (div_blk->divert)
+ return -EALREADY;
+ div_blk->divert = 1;
+ break;
+
+ case DIVARG1_DISABLE:
+ if (!div_blk->divert)
+ return -EALREADY;
+ div_blk->divert = 0;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case DIVCMD_IP:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ENABLE:
+ if (div_blk->protos & DIVERT_PROTO_IP)
+ return -EALREADY;
+ div_blk->protos |= DIVERT_PROTO_IP;
+ break;
+
+ case DIVARG1_DISABLE:
+ if (!(div_blk->protos & DIVERT_PROTO_IP))
+ return -EALREADY;
+ div_blk->protos &= ~DIVERT_PROTO_IP;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case DIVCMD_TCP:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ENABLE:
+ if (div_blk->protos & DIVERT_PROTO_TCP)
+ return -EALREADY;
+ div_blk->protos |= DIVERT_PROTO_TCP;
+ break;
+
+ case DIVARG1_DISABLE:
+ if (!(div_blk->protos & DIVERT_PROTO_TCP))
+ return -EALREADY;
+ div_blk->protos &= ~DIVERT_PROTO_TCP;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case DIVCMD_TCPDST:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ADD:
+ return add_port(div_blk->tcp_dst,
+ div_cf.arg2.uint16);
+
+ case DIVARG1_REMOVE:
+ return remove_port(div_blk->tcp_dst,
+ div_cf.arg2.uint16);
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case DIVCMD_TCPSRC:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ADD:
+ return add_port(div_blk->tcp_src,
+ div_cf.arg2.uint16);
+
+ case DIVARG1_REMOVE:
+ return remove_port(div_blk->tcp_src,
+ div_cf.arg2.uint16);
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case DIVCMD_UDP:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ENABLE:
+ if (div_blk->protos & DIVERT_PROTO_UDP)
+ return -EALREADY;
+ div_blk->protos |= DIVERT_PROTO_UDP;
+ break;
+
+ case DIVARG1_DISABLE:
+ if (!(div_blk->protos & DIVERT_PROTO_UDP))
+ return -EALREADY;
+ div_blk->protos &= ~DIVERT_PROTO_UDP;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case DIVCMD_UDPDST:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ADD:
+ return add_port(div_blk->udp_dst,
+ div_cf.arg2.uint16);
+
+ case DIVARG1_REMOVE:
+ return remove_port(div_blk->udp_dst,
+ div_cf.arg2.uint16);
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case DIVCMD_UDPSRC:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ADD:
+ return add_port(div_blk->udp_src,
+ div_cf.arg2.uint16);
+
+ case DIVARG1_REMOVE:
+ return remove_port(div_blk->udp_src,
+ div_cf.arg2.uint16);
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ case DIVCMD_ICMP:
+ switch(div_cf.arg1.int32) {
+ case DIVARG1_ENABLE:
+ if (div_blk->protos & DIVERT_PROTO_ICMP)
+ return -EALREADY;
+ div_blk->protos |= DIVERT_PROTO_ICMP;
+ break;
+
+ case DIVARG1_DISABLE:
+ if (!(div_blk->protos & DIVERT_PROTO_ICMP))
+ return -EALREADY;
+ div_blk->protos &= ~DIVERT_PROTO_ICMP;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+
+/*
+ * Check if packet should have its dest mac address set to the box itself
+ * for diversion
+ */
+
+#define ETH_DIVERT_FRAME(skb) \
+ do { \
+ memcpy(skb->mac.ethernet, skb->dev->dev_addr, ETH_ALEN); \
+ skb->pkt_type = PACKET_HOST; \
+ } while (0)
+
+void divert_frame(struct sk_buff *skb)
+{
+ struct ethhdr *eth = skb->mac.ethernet;
+ struct iphdr *iph;
+ struct tcphdr *tcph;
+ struct udphdr *udph;
+ struct divert_blk *divert = skb->dev->divert;
+ int i, src, dst;
+ unsigned char *skb_data_end = skb->data + skb->len;
+
+ /* Packet is already aimed at us, return */
+ if (!memcmp(eth, skb->dev->dev_addr, ETH_ALEN))
+ return;
+
+ /* proto is not IP, do nothing */
+ if (eth->h_proto != htons(ETH_P_IP))
+ return;
+
+ /* Divert all IP frames ? */
+ if (divert->protos & DIVERT_PROTO_IP) {
+ ETH_DIVERT_FRAME(skb);
+ return;
+ }
+
+ /* Check for possible (maliciously) malformed IP frame (thanks Dave) */
+ iph = (struct iphdr *) skb->data;
+ if (((iph->ihl<<2)+(unsigned char*)(iph)) >= skb_data_end) {
+ printk(KERN_INFO "divert: malformed IP packet!\n");
+ return;
+ }
+
+ switch (iph->protocol) {
+ /* Divert all ICMP frames ? */
+ case IPPROTO_ICMP:
+ if (divert->protos & DIVERT_PROTO_ICMP) {
+ ETH_DIVERT_FRAME(skb);
+ return;
+ }
+ break;
+
+ /* Divert all TCP frames ? */
+ case IPPROTO_TCP:
+ if (divert->protos & DIVERT_PROTO_TCP) {
+ ETH_DIVERT_FRAME(skb);
+ return;
+ }
+
+ /* Check for a possible (maliciously) malformed TCP
+ * header (thanks Dave)
+ */
+ tcph = (struct tcphdr *)
+ (((unsigned char *)iph) + (iph->ihl<<2));
+ if (((unsigned char *)(tcph+1)) >= skb_data_end) {
+ printk(KERN_INFO "divert: malformed TCP packet!\n");
+ return;
+ }
+
+ /* Divert some tcp dst/src ports only ?*/
+ for (i = 0; i < MAX_DIVERT_PORTS; i++) {
+ dst = divert->tcp_dst[i];
+ src = divert->tcp_src[i];
+ if ((dst && dst == tcph->dest) ||
+ (src && src == tcph->source)) {
+ ETH_DIVERT_FRAME(skb);
+ return;
+ }
+ }
+ break;
+
+ /* Divert all UDP frames ? */
+ case IPPROTO_UDP:
+ if (divert->protos & DIVERT_PROTO_UDP) {
+ ETH_DIVERT_FRAME(skb);
+ return;
+ }
+
+ /* Check for a possible (maliciously) malformed UDP
+ * header (thanks Dave)
+ */
+ udph = (struct udphdr *)
+ (((unsigned char *)iph) + (iph->ihl<<2));
+ if (((unsigned char *)(udph+1)) >= skb_data_end) {
+ printk(KERN_INFO
+ "divert: malformed UDP packet!\n");
+ return;
+ }
+
+ /* Divert some udp dst/src ports only ? */
+ for (i = 0; i < MAX_DIVERT_PORTS; i++) {
+ dst = divert->udp_dst[i];
+ src = divert->udp_src[i];
+ if ((dst && dst == udph->dest) ||
+ (src && src == udph->source)) {
+ ETH_DIVERT_FRAME(skb);
+ return;
+ }
+ }
+ break;
+ }
+
+ return;
+}
+
#ifdef CONFIG_SYSCTL
extern int netdev_max_backlog;
+extern int no_cong_thresh;
+extern int no_cong;
+extern int lo_cong;
+extern int mod_cong;
extern int netdev_fastroute;
extern int net_msg_cost;
extern int net_msg_burst;
extern int sysctl_optmem_max;
extern int sysctl_hot_list_len;
+#ifdef CONFIG_NET_DIVERT
+extern char sysctl_divert_version[];
+#endif /* CONFIG_NET_DIVERT */
+
ctl_table core_table[] = {
#ifdef CONFIG_NET
{NET_CORE_WMEM_MAX, "wmem_max",
{NET_CORE_MAX_BACKLOG, "netdev_max_backlog",
&netdev_max_backlog, sizeof(int), 0644, NULL,
&proc_dointvec},
+ {NET_CORE_NO_CONG_THRESH, "no_cong_thresh",
+ &no_cong_thresh, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_CORE_NO_CONG, "no_cong",
+ &no_cong, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_CORE_LO_CONG, "lo_cong",
+ &lo_cong, sizeof(int), 0644, NULL,
+ &proc_dointvec},
+ {NET_CORE_MOD_CONG, "mod_cong",
+ &mod_cong, sizeof(int), 0644, NULL,
+ &proc_dointvec},
#ifdef CONFIG_NET_FASTROUTE
{NET_CORE_FASTROUTE, "netdev_fastroute",
&netdev_fastroute, sizeof(int), 0644, NULL,
{NET_CORE_HOT_LIST_LENGTH, "hot_list_length",
&sysctl_hot_list_len, sizeof(int), 0644, NULL,
&proc_dointvec},
+#ifdef CONFIG_NET_DIVERT
+ {NET_CORE_DIVERT_VERSION, "divert_version",
+ (void *)sysctl_divert_version, 32, 0444, NULL,
+ &proc_dostring},
+#endif /* CONFIG_NET_DIVERT */
#endif /* CONFIG_NET */
{ 0 }
};
*
* PF_INET protocol family socket handler.
*
- * Version: $Id: af_inet.c,v 1.114 2000/09/18 05:59:48 davem Exp $
+ * Version: $Id: af_inet.c,v 1.115 2000/10/06 10:37:47 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
#ifdef CONFIG_KMOD
#include <linux/kmod.h>
#endif
+#ifdef CONFIG_NET_DIVERT
+#include <net/divert.h>
+#endif /* CONFIG_NET_DIVERT */
#if defined(CONFIG_NET_RADIO) || defined(CONFIG_NET_PCMCIA_RADIO)
#include <linux/wireless.h> /* Note : will define WIRELESS_EXT */
#endif /* CONFIG_NET_RADIO || CONFIG_NET_PCMCIA_RADIO */
if (br_ioctl_hook != NULL)
return br_ioctl_hook(arg);
#endif
 return -ENOPKG;
+
+ case SIOCGIFDIVERT:
+ case SIOCSIFDIVERT:
+#ifdef CONFIG_NET_DIVERT
+ return(divert_ioctl(cmd, (struct divert_cf *) arg));
+#else
+ return -ENOPKG;
+#endif /* CONFIG_NET_DIVERT */
case SIOCADDDLCI:
/* linux/net/inet/arp.c
*
- * Version: $Id: arp.c,v 1.88 2000/08/02 06:05:02 davem Exp $
+ * Version: $Id: arp.c,v 1.90 2000/10/04 09:20:56 anton Exp $
*
* Copyright (C) 1994 by Florian La Roche
*
*
* This source is covered by the GNU GPL, the same as all kernel sources.
*
- * Version: $Id: inetpeer.c,v 1.2 2000/05/03 06:37:06 davem Exp $
+ * Version: $Id: inetpeer.c,v 1.3 2000/10/03 07:29:00 anton Exp $
*
* Authors: Andrey V. Savochkin <saw@msu.ru>
*/
*
* INET protocol dispatch tables.
*
- * Version: $Id: protocol.c,v 1.11 2000/02/22 23:54:26 davem Exp $
+ * Version: $Id: protocol.c,v 1.12 2000/10/03 07:29:00 anton Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
*
* ROUTE - implementation of the IP router.
*
- * Version: $Id: route.c,v 1.90 2000/08/31 23:39:12 davem Exp $
+ * Version: $Id: route.c,v 1.91 2000/10/03 07:29:00 anton Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp.c,v 1.174 2000/09/18 05:59:48 davem Exp $
+ * Version: $Id: tcp.c,v 1.176 2000/10/06 22:45:41 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
skb = NULL;
if (tcp_memory_free(sk))
- skb = tcp_alloc_skb(sk, tmp, GFP_KERNEL);
+ skb = tcp_alloc_skb(sk, tmp, sk->allocation);
if (skb == NULL) {
/* If we didn't get any memory, we need to sleep. */
set_bit(SOCK_ASYNC_NOSPACE, &sk->socket->flags);
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_ipv4.c,v 1.213 2000/09/18 05:59:48 davem Exp $
+ * Version: $Id: tcp_ipv4.c,v 1.216 2000/10/10 03:58:56 davem Exp $
*
* IPv4 specific functions
*
void tcp_v4_send_check(struct sock *sk, struct tcphdr *th, int len,
struct sk_buff *skb);
-/* This is for sockets with full identity only. Sockets here will always
- * be without wildcards and will have the following invariant:
- * TCP_ESTABLISHED <= sk->state < TCP_CLOSE
- *
- * First half of the table is for sockets not in TIME_WAIT, second half
- * is for TIME_WAIT sockets only.
- */
-struct tcp_ehash_bucket *tcp_ehash;
-
-/* Ok, let's try this, I give up, we do need a local binding
- * TCP hash as well as the others for fast bind/connect.
- */
-struct tcp_bind_hashbucket *tcp_bhash;
-
-int tcp_bhash_size;
-int tcp_ehash_size;
-
-/* All sockets in TCP_LISTEN state will be in here. This is the only table
- * where wildcard'd TCP sockets can exist. Hash function here is just local
- * port number.
- */
-struct sock *tcp_listening_hash[TCP_LHTABLE_SIZE];
-char __tcp_clean_cacheline_pad[(SMP_CACHE_BYTES -
- (((sizeof(void *) * (TCP_LHTABLE_SIZE + 2)) +
- (sizeof(int) * 2)) % SMP_CACHE_BYTES))] = { 0, };
-
-rwlock_t tcp_lhash_lock = RW_LOCK_UNLOCKED;
-atomic_t tcp_lhash_users = ATOMIC_INIT(0);
-DECLARE_WAIT_QUEUE_HEAD(tcp_lhash_wait);
-
-spinlock_t tcp_portalloc_lock = SPIN_LOCK_UNLOCKED;
+struct tcp_hashinfo __cacheline_aligned tcp_hashinfo = {
+ __tcp_lhash_lock: RW_LOCK_UNLOCKED,
+ __tcp_lhash_users: ATOMIC_INIT(0),
+ __tcp_lhash_wait:
+ __WAIT_QUEUE_HEAD_INITIALIZER(tcp_hashinfo.__tcp_lhash_wait),
+ __tcp_portalloc_lock: SPIN_LOCK_UNLOCKED
+};
/*
* This array holds the first and last local port number.
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_timer.c,v 1.79 2000/08/11 00:13:36 davem Exp $
+ * Version: $Id: tcp_timer.c,v 1.80 2000/10/03 07:29:01 anton Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
*
* The User Datagram Protocol (UDP).
*
- * Version: $Id: udp.c,v 1.87 2000/09/20 02:11:34 davem Exp $
+ * Version: $Id: udp.c,v 1.89 2000/10/03 07:29:01 anton Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
if (flags & MSG_ERRQUEUE)
return ip_recv_error(sk, msg, len);
-
- retry:
/*
* From here the generic datagram does a lot of the work. Come
* the finished NET3, it will do _ALL_ the work!
csum_copy_err:
UDP_INC_STATS_BH(UdpInErrors);
- if (flags&(MSG_PEEK|MSG_DONTWAIT)) {
- struct sk_buff *skb2;
-
+ /* Clear queue. */
+ if (flags&MSG_PEEK) {
+ int clear = 0;
spin_lock_irq(&sk->receive_queue.lock);
- skb2 = skb_peek(&sk->receive_queue);
- if ((flags & MSG_PEEK) && skb == skb2) {
+ if (skb == skb_peek(&sk->receive_queue)) {
__skb_unlink(skb, &sk->receive_queue);
+ clear = 1;
}
spin_unlock_irq(&sk->receive_queue.lock);
- skb_free_datagram(sk, skb);
- if ((flags & MSG_DONTWAIT) && !skb2)
- return -EAGAIN;
- } else
- skb_free_datagram(sk, skb);
- goto retry;
+ if (clear)
+ kfree_skb(skb);
+ }
+
+ skb_free_datagram(sk, skb);
+
+ return -EAGAIN;
}
int udp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
* Various kernel-resident INET utility functions; mainly
* for format conversion and debugging output.
*
- * Version: $Id: utils.c,v 1.7 1999/06/09 10:11:05 davem Exp $
+ * Version: $Id: utils.c,v 1.8 2000/10/03 07:29:01 anton Exp $
*
* Author: Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
*
*
* PF_INET6 protocol dispatch tables.
*
- * Version: $Id: protocol.c,v 1.8 2000/02/22 23:54:29 davem Exp $
+ * Version: $Id: protocol.c,v 1.9 2000/10/03 07:29:01 anton Exp $
*
* Authors: Pedro Roque <roque@di.fc.ul.pt>
*
* Pedro Roque <roque@di.fc.ul.pt>
* Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
*
- * $Id: sit.c,v 1.43 2000/08/25 02:15:47 davem Exp $
+ * $Id: sit.c,v 1.44 2000/10/10 04:36:50 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
}
if (i==100)
goto failed;
- memcpy(parms->name, dev->name, IFNAMSIZ);
+ memcpy(nt->parms.name, dev->name, IFNAMSIZ);
}
if (register_netdevice(dev) < 0)
goto failed;
#include <net/scm.h>
#include <linux/if_bridge.h>
#include <linux/random.h>
+#ifdef CONFIG_NET_DIVERT
+#include <net/divert.h>
+#endif /* CONFIG_NET_DIVERT */
#ifdef CONFIG_NET
extern __u32 sysctl_wmem_max;
#endif
#endif
+#ifdef CONFIG_NET_DIVERT
+EXPORT_SYMBOL(alloc_divert_blk);
+EXPORT_SYMBOL(free_divert_blk);
+EXPORT_SYMBOL(divert_ioctl);
+#endif /* CONFIG_NET_DIVERT */
+
#ifdef CONFIG_INET
/* Internet layer registration */
EXPORT_SYMBOL(inetdev_lock);
EXPORT_SYMBOL(inet_sock_release);
/* Socket demultiplexing. */
-EXPORT_SYMBOL(tcp_ehash);
-EXPORT_SYMBOL(tcp_ehash_size);
-EXPORT_SYMBOL(tcp_listening_hash);
-EXPORT_SYMBOL(tcp_lhash_lock);
-EXPORT_SYMBOL(tcp_lhash_users);
-EXPORT_SYMBOL(tcp_lhash_wait);
+EXPORT_SYMBOL(tcp_hashinfo);
EXPORT_SYMBOL(tcp_listen_wlock);
-EXPORT_SYMBOL(tcp_bhash);
-EXPORT_SYMBOL(tcp_bhash_size);
-EXPORT_SYMBOL(tcp_portalloc_lock);
EXPORT_SYMBOL(udp_hash);
EXPORT_SYMBOL(udp_hash_lock);
EXPORT_SYMBOL(dev_ioctl);
EXPORT_SYMBOL(dev_queue_xmit);
#ifdef CONFIG_NET_HW_FLOWCONTROL
+EXPORT_SYMBOL(netdev_dropping);
EXPORT_SYMBOL(netdev_register_fc);
EXPORT_SYMBOL(netdev_unregister_fc);
EXPORT_SYMBOL(netdev_fc_xoff);
*
* PACKET - implements raw packet sockets.
*
- * Version: $Id: af_packet.c,v 1.42 2000/08/29 03:44:56 davem Exp $
+ * Version: $Id: af_packet.c,v 1.43 2000/10/06 10:37:47 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
#include <linux/init.h>
#include <linux/if_bridge.h>
+#ifdef CONFIG_NET_DIVERT
+#include <net/divert.h>
+#endif /* CONFIG_NET_DIVERT */
+
#ifdef CONFIG_INET
#include <net/inet_common.h>
#endif
#endif
#endif
 return -ENOPKG;
+
+ case SIOCGIFDIVERT:
+ case SIOCSIFDIVERT:
+#ifdef CONFIG_NET_DIVERT
+ return(divert_ioctl(cmd, (struct divert_cf *) arg));
+#else
+ return -ENOPKG;
+#endif /* CONFIG_NET_DIVERT */
#ifdef CONFIG_INET
#endif
-#ifdef CONFIG_PPPOE
-#include <linux/if_pppox.h>
-#endif
-
/*
* Protocol Table
*/
#ifdef CONFIG_IRDA
{ "IrDA", irda_proto_init }, /* IrDA protocols */
-#endif
-#ifdef CONFIG_PPPOE
- { "PPPoX", pppox_proto_init }, /* PPP over Ethernet */
#endif
{ NULL, NULL } /* End marker */
};