S: USA
N: Stefan Reinauer
-E: stepan@home.culture.mipt.ru
-W: http://home.culture.mipt.ru/~stepan
-D: Modularized affs and ufs. Minor fixes.
-S: Rebmannsweg 34h
-S: 79539 Loerrach
+E: stepan@linux.de
+W: http://www.freiburg.linux.de/~stepan/
+D: Modularization of some filesystems
+D: /proc/sound, minor fixes
+S: Schlossbergring 9
+S: 79098 Freiburg
S: Germany
N: Joerg Reuter
--- /dev/null
+ ARM Linux 2.1.78
+ ================
+
+ ** The ARM support contained within is NOT complete - it will not build. **
+ ** If you want to build it, then please obtain a full copy of the ARM **
+ ** patches from ftp://ftp.arm.uk.linux.org/pub/armlinux/kernel-sources/v2.1 **
+
+ Since this is a development kernel, it will not be as stable as the 2.0
+ series, and it can cause very nasty problems (e.g. trashing your hard disk).
+ When running one of these kernels, I advise you to back up the complete
+ contents of all your hard disks.
+
+Contributors
+------------
+
+ Here is a list of people actively working on the project (If you
+ wish to be added to the list, please email me):
+
+ Name: Russell King
+ Mail: linux@arm.uk.linux.org
+ Desc: Original developer of ARM Linux, project co-ordinator.
+
+ Name: Dave Gilbert
+ Mail: linux@treblig.org
+ Desc: A3/4/5xx floppy and hard disk code maintainer.
+
+ Name: Philip Blundell
+ Mail: Philip.Blundell@pobox.com
+ Desc: Architecture and processor selection during make config.
+
+Todo list
+---------
+
+ This is the list of changes to be done (roughly prioritised):
+
+ * fully test new A5000 & older MEMC translation code
+ * fully test new AcornSCSI driver.
+ * reply to email ;)
+
+Bugs
+----
+
+ Bugs fixed in this version (2.1.76):
+
+ Modules believed to be buggy (please report your successes/failures):
+
+ * AcornSCSI is believed to occasionally corrupt hard drives.
+ * All NCR5380-based SCSI devices [Cumana I, Oak, EcoSCSI] are slow,
+ and may not allow write access.
+ * A5000 and older machine kernel builds may not be as stable as they were.
+
+ Notes
+ =====
+
+Compilation of kernel
+---------------------
+
+ In order to compile ARM Linux, you will need a compiler capable of
+ generating ARM ELF code with GNU extensions. GCC 2.7.2.2 is known to work.
+
+ To build ARM Linux natively, you shouldn't have to alter the ARCH = line in
+ the top-level Makefile. However, if you don't have the ARM Linux ELF tools
+ installed as the default, then you should change the CROSS_COMPILE line as
+ detailed below.
+
+ If you wish to cross-compile, then alter the following lines in the
+ top-level Makefile:
+
+ ARCH = <whatever>
+ with
+ ARCH = arm
+
+ and
+
+ CROSS_COMPILE=
+ to
+ CROSS_COMPILE=<your-path-to-your-compiler-without-gcc>
+ eg.
+ CROSS_COMPILE=/usr/src/bin/arm/arm-linuxelf-
+
+ Do a 'make config', followed by 'make dep', and finally 'make all' to
+ build the kernel (vmlinux). A compressed image can be built by doing
+ a 'make zImage' instead of 'make all'.
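Put together, a cross-build session amounts to the commands below. This is only a sketch: the toolchain path is the illustrative example from above (not a required location), and the make commands are echoed rather than executed, since a real build needs the ARM toolchain installed.

```shell
# Variables as set by the top-level Makefile edits described above.
ARCH=arm
CROSS_COMPILE=/usr/src/bin/arm/arm-linuxelf-

# Echo the build sequence: configure, dependencies, then compressed image.
for target in config dep zImage; do
    echo "make ARCH=$ARCH CROSS_COMPILE=$CROSS_COMPILE $target"
done
```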
+
+Bug reports etc
+---------------
+
+ Please send patches, bug reports and code for the ARM Linux project
+ to linux@arm.uk.linux.org. Patches will not be included into future
+ kernels unless they come to me (or the relevant person concerned).
+
+ When sending bug reports, please ensure that they contain all relevant
+ information, e.g. the kernel messages that were printed before/during
+ the problem, what you were doing, etc.
+
+ For patches, please include some explanation as to what the patch does
+ and why (if relevant).
+
+Modules
+-------
+
+ Although modularisation is supported (and required for the FP emulator),
+ each module loaded on an arm2/arm250/arm3 machine will take up memory to
+ the next 32k boundary due to the page size, so modularisation on these
+ machines may not really be worth it.
+
+ However, arm6 and up machines allow modules to take multiples of 4k, and
+ as such Acorn RiscPCs and other architectures using these processors can
+ make good use of modularisation.
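To illustrate the difference, the allocation rounding can be computed as follows (the module size here is a hypothetical example, not a measured figure):

```shell
# Round a module's size up to the next page boundary and report the waste.
# Pages are 32k on arm2/arm250/arm3 (MEMC) and 4k on arm6 and later.
module_size=41000                       # hypothetical module, roughly 40k

round_up() {                            # $1 = size, $2 = page size
    echo $(( (($1 + $2 - 1) / $2) * $2 ))
}

memc_alloc=$(round_up $module_size 32768)
mmu_alloc=$(round_up $module_size 4096)

echo "32k pages: $memc_alloc bytes allocated, $((memc_alloc - module_size)) wasted"
echo " 4k pages: $mmu_alloc bytes allocated, $((mmu_alloc - module_size)) wasted"
```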
+
+ADFS Image files
+----------------
+
+ You can access image files on your ADFS partitions by mounting the ADFS
+ partition, and then using the loopback device driver. You must have
+ losetup installed.
+
+ Please note that the PCEmulator DOS partitions have a partition table at
+ the start, and as such, you will have to give '-o offset' to losetup.
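A session might look like the following sketch. The device names, mount points, image file name, and the partition-table size are all illustrative assumptions, not values taken from a real PCEmulator image; the commands are echoed rather than run, since they need root and real devices.

```shell
# Compute the byte offset of the DOS filesystem past the partition table.
offset_sectors=4                        # hypothetical partition-table size
offset_bytes=$(( offset_sectors * 512 ))

echo "mount -t adfs /dev/hda1 /mnt/adfs"
echo "losetup -o $offset_bytes /dev/loop0 /mnt/adfs/dosdisc"
echo "mount -t msdos /dev/loop0 /mnt/dos"
```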
+
+Kernel initialisation abort codes
+---------------------------------
+
+ When the kernel is unable to boot, it will, if possible, display a colour
+ at the top of the screen. The colours have the following significance
+ when run in a 16 colour mode with the default palette:
+
+ Stripes of White,Red,Yellow,Green:
+ Kernel does not support the processor architecture detected.
+
+Request to developers
+---------------------
+
+ When writing device drivers which include a separate assembler file, please
+ keep it with the C file rather than in the arch/arm/lib directory. This
+ allows the driver to be compiled as a loadable module without requiring
+ half the code to be needlessly compiled into the kernel image.
+
+ In general, try to avoid using assembler unless it is really necessary. It
+ makes drivers much harder to port to other hardware.
+
+ST506 hard drives
+-----------------
+
+ The ST506 hard drive controllers seem to be working fine (if a little
+ slowly). At the moment they will only work off the controllers on an
+ A4x0's motherboard, but for it to work off a Podule just requires
+ someone with a podule to add the addresses for the IRQ mask and the
+ HDC base to the source.
+
+ As of 31/3/96 it works with two drives (you should have the ADFS
+ *configure harddrive set to 2). I've got an internal 20MB and a great
+ big external 5.25" FH 64MB drive (who could ever want more :-) ).
+
+ I've just got 240K/s off it (a dd with bs=128k); that's about half of what
+ RiscOS gets, but it's a heck of a lot better than the 50K/s I was getting
+ last week :-)
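The quoted figure can be reproduced from a timed dd run; the elapsed time below is a hypothetical value chosen to match the 240K/s result, not a logged measurement.

```shell
# Rate from e.g. 'time dd if=/dev/hda of=/dev/null bs=128k count=32':
bytes=$(( 32 * 128 * 1024 ))            # 32 blocks of 128k = 4MB read
seconds=17                              # hypothetical elapsed time
rate_kps=$(( bytes / 1024 / seconds ))  # integer K/s
echo "${rate_kps}K/s"
```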
+
+ Known bug: Drive data errors can cause a hang; including cases where
+ the controller has fixed the error using ECC. (Possibly ONLY
+ in that case...hmm).
+
+
+1772 Floppy
+-----------
+ This also seems to work OK, but hasn't been stressed much lately. It
+ hasn't got any disc change detection code in it at the moment, which
+ could be a bit of a problem! Suggestions on the correct way to do this
+ are welcome.
--- /dev/null
+#
+# arch/arm/Makefile
+#
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies. Remember to have actions
+# for "archclean" and "archdep" for cleaning up and making dependencies for
+# this architecture.
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1995, 1996 by Russell King
+
+ifeq ($(CONFIG_CPU_ARM2),y)
+PROCESSOR = armo
+ifeq ($(CONFIG_BINUTILS_NEW),y)
+CFLAGS_PROC += -mcpu=arm2
+ASFLAGS_PROC += -m2
+else
+CFLAGS_PROC += -m2
+ASFLAGS_PROC += -m2
+endif
+endif
+
+ifeq ($(CONFIG_CPU_ARM3),y)
+PROCESSOR = armo
+ifeq ($(CONFIG_BINUTILS_NEW),y)
+CFLAGS_PROC += -mcpu=arm3
+ASFLAGS_PROC += -m3
+else
+CFLAGS_PROC += -m3
+ASFLAGS_PROC += -m3
+endif
+endif
+
+ifeq ($(CONFIG_CPU_ARM6),y)
+PROCESSOR = armv
+ifeq ($(CONFIG_BINUTILS_NEW),y)
+CFLAGS_PROC += -mcpu=arm6
+ASFLAGS_PROC += -m6
+else
+CFLAGS_PROC += -m6
+ASFLAGS_PROC += -m6
+endif
+endif
+
+ifeq ($(CONFIG_CPU_SA110),y)
+PROCESSOR = armv
+ifeq ($(CONFIG_BINUTILS_NEW),y)
+CFLAGS_PROC += -mcpu=strongarm110
+ASFLAGS_PROC += -m6
+else
+CFLAGS_PROC += -m6
+ASFLAGS_PROC += -m6
+endif
+endif
+
+# Processor Architecture
+# CFLAGS_PROC - processor dependent CFLAGS
+# PROCESSOR - processor type
+# TEXTADDR - Uncompressed kernel link text address
+# ZTEXTADDR - Compressed kernel link text address
+# ZRELADDR - Compressed kernel relocating address (point at which uncompressed kernel is loaded).
+#
+
+HEAD := arch/arm/kernel/head-$(PROCESSOR).o arch/arm/kernel/init_task.o
+COMPRESSED_HEAD = head.o
+
+ifeq ($(PROCESSOR),armo)
+ifeq ($(CONFIG_BINUTILS_NEW),y)
+CFLAGS_PROC += -mapcs-26 -mshort-load-bytes
+endif
+TEXTADDR = 0x02080000
+ZTEXTADDR = 0x01800000
+ZRELADDR = 0x02080000
+endif
+
+ifeq ($(CONFIG_ARCH_A5K),y)
+MACHINE = a5k
+COMPRESSED_EXTRA = $(TOPDIR)/arch/arm/lib/ll_char_wr.o
+endif
+
+ifeq ($(CONFIG_ARCH_ARC),y)
+MACHINE = arc
+COMPRESSED_EXTRA = $(TOPDIR)/arch/arm/lib/ll_char_wr.o
+endif
+
+ifeq ($(PROCESSOR),armv)
+ifeq ($(CONFIG_BINUTILS_NEW),y)
+CFLAGS_PROC += -mapcs-32 -mshort-load-bytes
+endif
+TEXTADDR = 0xC0008000
+endif
+
+ifeq ($(CONFIG_ARCH_RPC),y)
+MACHINE = rpc
+COMPRESSED_EXTRA = $(TOPDIR)/arch/arm/lib/ll_char_wr.o
+ZTEXTADDR = 0x10008000
+ZRELADDR = 0x10008000
+endif
+
+ifeq ($(CONFIG_ARCH_EBSA110),y)
+MACHINE = ebsa110
+ZTEXTADDR = 0x00008000
+ZRELADDR = 0x00008000
+endif
+
+ifeq ($(CONFIG_ARCH_NEXUSPCI),y)
+MACHINE = nexuspci
+TEXTADDR = 0xc0000000
+ZTEXTADDR = 0x40200000
+ZRELADDR = 0x40000000
+COMPRESSED_EXTRA = $(TOPDIR)/arch/arm/lib/ll_char_wr_scc.o
+COMPRESSED_HEAD = head-nexuspci.o
+endif
+
+OBJDUMP = $(CROSS_COMPILE)objdump
+PERL = perl
+LD = $(CROSS_COMPILE)ld -m elf_arm
+CPP = $(CC) -E
+OBJCOPY = $(CROSS_COMPILE)objcopy -O binary -R .note -R .comment -S
+ARCHCC := $(word 1,$(CC))
+GCCLIB := `$(ARCHCC) $(CFLAGS_PROC) --print-libgcc-file-name`
+GCCARCH := -B/usr/src/bin/arm/arm-linuxelf-
+HOSTCFLAGS := $(CFLAGS:-fomit-frame-pointer=)
+ifeq ($(CONFIG_FRAME_POINTER),y)
+CFLAGS := $(CFLAGS:-fomit-frame-pointer=)
+endif
+CFLAGS := $(CFLAGS_PROC) $(CFLAGS) -pipe
+ASFLAGS := $(ASFLAGS_PROC) $(ASFLAGS) -D__ASSEMBLY__
+LINKFLAGS = -T $(TOPDIR)/arch/arm/vmlinux.lds -e stext -Ttext $(TEXTADDR)
+ZLINKFLAGS = -Ttext $(ZTEXTADDR)
+
+SUBDIRS := $(SUBDIRS:drivers=) arch/arm/lib arch/arm/kernel arch/arm/mm arch/arm/drivers
+CORE_FILES := arch/arm/kernel/kernel.o arch/arm/mm/mm.o $(CORE_FILES)
+LIBS := arch/arm/lib/lib.a $(LIBS) $(GCCLIB)
+
+DRIVERS := arch/arm/drivers/block/block.a \
+ arch/arm/drivers/char/char.a \
+ drivers/misc/misc.a \
+ arch/arm/drivers/net/net.a
+
+ifeq ($(CONFIG_SCSI),y)
+DRIVERS := $(DRIVERS) arch/arm/drivers/scsi/scsi.a
+endif
+
+ifneq ($(CONFIG_CD_NO_IDESCSI)$(CONFIG_BLK_DEV_IDECD)$(CONFIG_BLK_DEV_SR),)
+DRIVERS := $(DRIVERS) drivers/cdrom/cdrom.a
+endif
+
+ifeq ($(CONFIG_SOUND),y)
+DRIVERS := $(DRIVERS) arch/arm/drivers/sound/sound.a
+endif
+
+symlinks::
+ $(RM) include/asm-arm/arch include/asm-arm/proc
+ (cd include/asm-arm; ln -sf arch-$(MACHINE) arch; ln -sf proc-$(PROCESSOR) proc)
+
+mrproper::
+ rm -f include/asm-arm/arch include/asm-arm/proc
+ @$(MAKE) -C arch/$(ARCH)/drivers mrproper
+
+arch/arm/kernel: dummy
+ $(MAKE) linuxsubdirs SUBDIRS=arch/arm/kernel
+
+arch/arm/mm: dummy
+ $(MAKE) linuxsubdirs SUBDIRS=arch/arm/mm
+
+MAKEBOOT = $(MAKE) -C arch/$(ARCH)/boot
+
+zImage: vmlinux
+ @$(MAKEBOOT) zImage
+
+zinstall: vmlinux
+ @$(MAKEBOOT) zinstall
+
+Image: vmlinux
+ @$(MAKEBOOT) Image
+
+install: vmlinux
+ @$(MAKEBOOT) install
+
+# My testing targets (that short circuit a few dependencies)
+#
+zImg:; @$(MAKEBOOT) zImage
+Img:; @$(MAKEBOOT) Image
+i:; @$(MAKEBOOT) install
+zi:; @$(MAKEBOOT) zinstall
+
+archclean:
+ @$(MAKEBOOT) clean
+ @$(MAKE) -C arch/arm/lib clean
+
+archdep:
+ @$(MAKEBOOT) dep
--- /dev/null
+#
+# arch/arm/boot/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1995, 1996 Russell King
+#
+
+SYSTEM =$(TOPDIR)/vmlinux
+
+Image: $(CONFIGURE) $(SYSTEM)
+ $(OBJCOPY) $(SYSTEM) $@
+
+zImage: $(CONFIGURE) compressed/vmlinux
+ $(OBJCOPY) compressed/vmlinux $@
+
+compressed/vmlinux: $(TOPDIR)/vmlinux dep
+ @$(MAKE) -C compressed vmlinux
+
+install: $(CONFIGURE) Image
+ sh ./install.sh $(VERSION).$(PATCHLEVEL).$(SUBLEVEL) Image $(TOPDIR)/System.map "$(INSTALL_PATH)"
+
+zinstall: $(CONFIGURE) zImage
+ sh ./install.sh $(VERSION).$(PATCHLEVEL).$(SUBLEVEL) zImage $(TOPDIR)/System.map "$(INSTALL_PATH)"
+
+tools/build: tools/build.c
+ $(HOSTCC) $(HOSTCFLAGS) -o $@ $< -I$(TOPDIR)/include
+
+clean:
+ rm -f Image zImage tools/build
+ @$(MAKE) -C compressed clean
+
+dep:
--- /dev/null
+#
+# linux/arch/arm/boot/compressed/Makefile
+#
+# create a compressed vmlinux image from the original vmlinux
+#
+# With this config, max compressed image size = 640k
+# Uncompressed image size = 1.3M (text+data)
+
+SYSTEM =$(TOPDIR)/vmlinux
+HEAD =$(COMPRESSED_HEAD)
+OBJS =$(HEAD) misc.o $(COMPRESSED_EXTRA)
+CFLAGS =-O2 -DSTDC_HEADERS $(CFLAGS_PROC)
+ARFLAGS =rc
+
+all: vmlinux
+
+vmlinux: piggy.o $(OBJS)
+ $(LD) $(ZLINKFLAGS) -o vmlinux $(OBJS) piggy.o
+
+$(HEAD): $(HEAD:.o=.S)
+ $(CC) -traditional -DLOADADDR=$(ZRELADDR) -c $(HEAD:.o=.S)
+
+piggy.o: $(SYSTEM)
+ tmppiggy=_tmp_$$$$piggy; \
+ rm -f $$tmppiggy $$tmppiggy.gz $$tmppiggy.lnk; \
+ $(OBJCOPY) $(SYSTEM) $$tmppiggy; \
+ gzip -f -9 < $$tmppiggy > $$tmppiggy.gz; \
+ echo "SECTIONS { .data : { input_len = .; LONG(input_data_end - input_data) input_data = .; *(.data) input_data_end = .; }}" > $$tmppiggy.lnk; \
+ $(LD) -m elf_arm -r -o piggy.o -b binary $$tmppiggy.gz -b elf32-arm -T $$tmppiggy.lnk; \
+ rm -f $$tmppiggy $$tmppiggy.gz $$tmppiggy.lnk;
+
+clean:; rm -f vmlinux core
+
--- /dev/null
+#
+# linux/arch/arm/boot/compressed/Makefile
+#
+# create a compressed vmlinux image from the original vmlinux
+#
+
+COMPRESSED_EXTRA=../../lib/ll_char_wr.o
+OBJECTS=misc-debug.o $(COMPRESSED_EXTRA)
+
+CFLAGS=-D__KERNEL__ -O2 -DSTDC_HEADERS -DSTANDALONE_DEBUG -Wall -I../../../../include -c
+
+test-gzip: piggy.o $(OBJECTS)
+ $(CC) -o $@ $(OBJECTS) piggy.o
+
+misc-debug.o: misc.c
+ $(CC) $(CFLAGS) -o $@ misc.c
--- /dev/null
+/*
+ * linux/arch/arm/boot/compressed/head-nexuspci.S
+ *
+ * Copyright (C) 1996 Philip Blundell
+ */
+
+#define ARM_CP p15
+#define ARM610_REG_CONTROL cr1
+#define ARM_REG_ZERO cr0
+
+ .text
+
+start: b skip1
+ b go_uncompress
+ b go_uncompress
+ b go_uncompress
+ b go_uncompress
+ b go_uncompress
+ b go_uncompress
+ b go_uncompress
+ b go_uncompress
+ b go_uncompress
+skip1: mov sp, #0x40000000
+ add sp, sp, #0x200000
+ mov r2, #0x20000000
+ mov r1, #0x1a
+ str r1, [r2]
+
+ MOV r0, #0x30
+ MCR ARM_CP, 0, r0, ARM610_REG_CONTROL, ARM_REG_ZERO
+
+ mov r2, #0x10000000
+
+ mov r1, #42
+ strb r1, [r2, #8]
+
+ mov r1, #48
+ strb r1, [r2, #8]
+
+ mov r1, #16
+ strb r1, [r2, #8]
+
+ mov r1, #0x93
+ strb r1, [r2, #0]
+ mov r1, #0x17
+ strb r1, [r2, #0]
+
+ mov r1, #0xbb
+ strb r1, [r2, #0x4]
+
+ mov r1, #0x78
+ strb r1, [r2, #0x10]
+
+ mov r1, #160
+ strb r1, [r2, #0x8]
+
+ mov r1, #5
+ strb r1, [r2, #0x8]
+
+ mov r0, #0x50
+ bl _ll_write_char
+
+ mov r4, #0x40000000
+ mov r1, #0x00200000
+ add r4, r4, r1
+copylp:
+ ldr r3, [r1]
+ str r3, [r4, r1]
+ subs r1, r1, #4
+ bne copylp
+
+ add pc, r4, #0x28
+
+
+/*
+ * Uncompress the kernel
+ */
+go_uncompress:
+ mov r0, #0x40000000
+ add r0, r0, #0x300000
+ bl _decompress_kernel
+
+ mov r0, #0x40000000
+ add r1, r0, #0x300000
+ mov r2, #0x100000
+
+clp2: ldr r3, [r1, r2]
+ str r3, [r0, r2]
+ subs r2, r2, #4
+ bne clp2
+
+ mov r2, #0x40000000
+ mov r0, #0
+ mov r1, #3
+ add pc, r2, #0x20 @ call via EXEC entry
--- /dev/null
+/*
+ * linux/arch/arm/boot/compressed/head.S
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+#include <linux/linkage.h>
+
+ .text
+/*
+ * sort out different calling conventions
+ */
+ .align
+ .globl _start
+_start:
+start: mov r0, r0
+ mov r0, r0
+ mov r0, r0
+ mov r0, r0
+ mov r0, r0
+ mov r0, r0
+ mov r0, r0
+ mov r0, r0
+ teq r0, #0
+ beq 2f
+ mov r4, #0x02000000
+ add r4, r4, #0x7C000
+ mov r3, #0x4000
+ sub r3, r3, #4
+1: ldmia r0!, {r5 - r12}
+ stmia r4!, {r5 - r12}
+ subs r3, r3, #32
+ bpl 1b
+2: adr r2, LC0
+ ldmia r2, {r2, r3, r4, r5, r6, sp}
+ add r2, r2, #3
+ add r3, r3, #3
+ add sp, sp, #3
+ bic r2, r2, #3
+ bic r3, r3, #3
+ bic sp, sp, #3
+ adr r7, start
+ sub r6, r7, r6
+/*
+ * Relocate pointers
+ */
+ add r2, r2, r6
+ add r3, r3, r6
+ add r5, r5, r6
+ add sp, sp, r6
+/*
+ * Clear zero-init
+ */
+ mov r6, #0
+1: str r6, [r2], #4
+ cmp r2, r3
+ blt 1b
+ str r1, [r5] @ save architecture
+/*
+ * Uncompress the kernel
+ */
+ mov r1, #0x8000
+ add r2, r2, r1, lsl #1 @ Add 64k for malloc
+ sub r1, r1, #1
+ add r2, r2, r1
+ bic r5, r2, r1 @ decompress kernel to after end of the compressed image
+ mov r0, r5
+ bl SYMBOL_NAME(decompress_kernel)
+ add r0, r0, #7
+ bic r2, r0, #7
+/*
+ * Now move the kernel to the correct location (r5 -> r4, len r0)
+ */
+ mov r0, r4 @ r0 = start of real kernel
+ mov r1, r5 @ r1 = start of kernel image
+ add r3, r5, r2 @ r3 = end of kernel
+ adr r4, movecode
+ adr r5, movecodeend
+1: ldmia r4!, {r6 - r12, lr}
+ stmia r3!, {r6 - r12, lr}
+ cmp r4, r5
+ blt 1b
+ mrc p15, 0, r5, c0, c0
+ eor r5, r5, #0x44 << 24
+ eor r5, r5, #0x01 << 16
+ eor r5, r5, #0xa1 << 8
+ movs r5, r5, lsr #4
+ mov r5, #0
+ mcreq p15, 0, r5, c7, c5, 0 @ flush I cache
+ ldr r5, LC0 + 12 @ get architecture
+ ldr r5, [r5]
+ add pc, r1, r2 @ Call move code
+
+/*
+ * r0 = length, r1 = to, r2 = from
+ */
+movecode: add r3, r1, r2
+ mov r4, r0
+1: ldmia r1!, {r6 - r12, lr}
+ stmia r0!, {r6 - r12, lr}
+ cmp r1, r3
+ blt 1b
+ mrc p15, 0, r0, c0, c0
+ eor r0, r0, #0x44 << 24
+ eor r0, r0, #0x01 << 16
+ eor r0, r0, #0xa1 << 8
+ movs r0, r0, lsr #4
+ mov r0, #0
+ mcreq p15, 0, r0, c7, c5, 0 @ flush I cache
+ mov r1, r5 @ call kernel correctly
+ mov pc, r4 @ call via EXEC entry
+movecodeend:
+
+LC0: .word SYMBOL_NAME(_edata)
+ .word SYMBOL_NAME(_end)
+ .word LOADADDR
+ .word SYMBOL_NAME(architecture)
+ .word start
+ .word SYMBOL_NAME(user_stack)+4096
+ .align
+
+ .bss
+SYMBOL_NAME(architecture):
+ .space 4
+ .align
--- /dev/null
+/*
+ * misc.c
+ *
+ * This is a collection of several routines from gzip-1.0.3
+ * adapted for Linux.
+ *
+ * malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
+ *
+ * Modified for ARM Linux by Russell King
+ */
+
+#include <asm/uaccess.h>
+#include <asm/arch/uncompress.h>
+#include <asm/proc/uncompress.h>
+
+#ifdef STANDALONE_DEBUG
+#define puts printf
+#endif
+
+#define __ptr_t void *
+
+/*
+ * Optimised C version of memzero for the ARM.
+ */
+extern __inline__ __ptr_t __memzero (__ptr_t s, size_t n)
+{
+ union { void *vp; unsigned long *ulp; unsigned char *ucp; } u;
+ int i;
+
+ u.vp = s;
+
+ for (i = n >> 5; i > 0; i--) {
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ }
+
+ if (n & 1 << 4) {
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ }
+
+ if (n & 1 << 3) {
+ *u.ulp++ = 0;
+ *u.ulp++ = 0;
+ }
+
+ if (n & 1 << 2)
+ *u.ulp++ = 0;
+
+ if (n & 1 << 1) {
+ *u.ucp++ = 0;
+ *u.ucp++ = 0;
+ }
+
+ if (n & 1)
+ *u.ucp++ = 0;
+ return s;
+}
+
+#define memzero(s,n) __memzero(s,n)
+
+extern __inline__ __ptr_t memcpy(__ptr_t __dest, __const __ptr_t __src,
+ size_t __n)
+{
+ int i = 0;
+ unsigned char *d = (unsigned char *)__dest, *s = (unsigned char *)__src;
+
+ for (i = __n >> 3; i > 0; i--) {
+ *d++ = *s++;
+ *d++ = *s++;
+ *d++ = *s++;
+ *d++ = *s++;
+ *d++ = *s++;
+ *d++ = *s++;
+ *d++ = *s++;
+ *d++ = *s++;
+ }
+
+ if (__n & 1 << 2) {
+ *d++ = *s++;
+ *d++ = *s++;
+ *d++ = *s++;
+ *d++ = *s++;
+ }
+
+ if (__n & 1 << 1) {
+ *d++ = *s++;
+ *d++ = *s++;
+ }
+
+ if (__n & 1)
+ *d++ = *s++;
+
+ return __dest;
+}
+
+/*
+ * gzip declarations
+ */
+#define OF(args) args
+#define STATIC static
+
+typedef unsigned char uch;
+typedef unsigned short ush;
+typedef unsigned long ulg;
+
+#define WSIZE 0x8000 /* Window size must be at least 32k, */
+ /* and a power of two */
+
+static uch *inbuf; /* input buffer */
+static uch window[WSIZE]; /* Sliding window buffer */
+
+static unsigned insize; /* valid bytes in inbuf */
+static unsigned inptr; /* index of next byte to be processed in inbuf */
+static unsigned outcnt; /* bytes in output buffer */
+
+/* gzip flag byte */
+#define ASCII_FLAG 0x01 /* bit 0 set: file probably ascii text */
+#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gzip file */
+#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
+#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
+#define COMMENT 0x10 /* bit 4 set: file comment present */
+#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
+#define RESERVED 0xC0 /* bit 6,7: reserved */
+
+#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
+
+/* Diagnostic functions */
+#ifdef DEBUG
+# define Assert(cond,msg) {if(!(cond)) error(msg);}
+# define Trace(x) fprintf x
+# define Tracev(x) {if (verbose) fprintf x ;}
+# define Tracevv(x) {if (verbose>1) fprintf x ;}
+# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
+# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
+#else
+# define Assert(cond,msg)
+# define Trace(x)
+# define Tracev(x)
+# define Tracevv(x)
+# define Tracec(c,x)
+# define Tracecv(c,x)
+#endif
+
+static int fill_inbuf(void);
+static void flush_window(void);
+static void error(char *m);
+static void gzip_mark(void **);
+static void gzip_release(void **);
+
+extern char input_data[];
+extern int input_len;
+
+static uch *output_data;
+static ulg output_ptr;
+static ulg bytes_out = 0;
+
+static void *malloc(int size);
+static void free(void *where);
+
+static void puts(const char *);
+
+extern int end;
+static ulg free_mem_ptr;
+static ulg free_mem_ptr_end;
+
+#define HEAP_SIZE 0x2000
+
+#include "../../../../lib/inflate.c"
+
+#ifndef STANDALONE_DEBUG
+static void *malloc(int size)
+{
+ void *p;
+
+ if (size < 0) error("Malloc error\n");
+ if (free_mem_ptr <= 0) error("Memory error\n");
+
+ free_mem_ptr = (free_mem_ptr + 3) & ~3; /* Align */
+
+ p = (void *)free_mem_ptr;
+ free_mem_ptr += size;
+
+ if (free_mem_ptr >= free_mem_ptr_end)
+ error("Out of memory");
+ return p;
+}
+
+static void free(void *where)
+{ /* gzip_mark & gzip_release do the free */
+}
+
+static void gzip_mark(void **ptr)
+{
+ *ptr = (void *) free_mem_ptr;
+}
+
+static void gzip_release(void **ptr)
+{
+ free_mem_ptr = (long) *ptr;
+}
+#else
+static void gzip_mark(void **ptr)
+{
+}
+
+static void gzip_release(void **ptr)
+{
+}
+#endif
+
+/* ===========================================================================
+ * Fill the input buffer. This is called only when the buffer is empty
+ * and at least one byte is really needed.
+ */
+int fill_inbuf()
+{
+ if (insize != 0)
+ error("ran out of input data\n");
+
+ inbuf = input_data;
+ insize = input_len;
+ inptr = 1;
+ return inbuf[0];
+}
+
+/* ===========================================================================
+ * Write the output window window[0..outcnt-1] and update crc and bytes_out.
+ * (Used for the decompressed data only.)
+ */
+void flush_window()
+{
+ ulg c = crc;
+ unsigned n;
+ uch *in, *out, ch;
+
+ in = window;
+ out = &output_data[output_ptr];
+ for (n = 0; n < outcnt; n++) {
+ ch = *out++ = *in++;
+ c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
+ }
+ crc = c;
+ bytes_out += (ulg)outcnt;
+ output_ptr += (ulg)outcnt;
+ outcnt = 0;
+}
+
+static void error(char *x)
+{
+ puts("\n\n");
+ puts(x);
+ puts("\n\n -- System halted");
+
+ while(1); /* Halt */
+}
+
+#define STACK_SIZE (4096)
+
+ulg user_stack [STACK_SIZE];
+
+#ifndef STANDALONE_DEBUG
+
+ulg decompress_kernel(ulg output_start)
+{
+ free_mem_ptr = (ulg)&end;
+ free_mem_ptr_end = output_start;
+
+ proc_decomp_setup ();
+ arch_decomp_setup ();
+
+ output_data = (uch *)output_start; /* Points to kernel start */
+
+ makecrc();
+ puts("Uncompressing Linux...");
+ gunzip();
+ puts("done.\nNow booting the kernel\n");
+ return output_ptr;
+}
+#else
+
+char output_buffer[1500*1024];
+
+int main()
+{
+ output_data = (uch *)output_buffer;
+
+ makecrc();
+ puts("Uncompressing Linux...");
+ gunzip();
+ puts("done.\n");
+ return 0;
+}
+#endif
+
--- /dev/null
+#!/bin/sh
+#
+# arch/arm/boot/install.sh
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1995 by Linus Torvalds
+#
+# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
+# Adapted from code in arch/i386/boot/install.sh by Russell King
+#
+# "make install" script for arm architecture
+#
+# Arguments:
+# $1 - kernel version
+# $2 - kernel image file
+# $3 - kernel map file
+# $4 - default install path (blank if root directory)
+#
+
+# User may have a custom install script
+
+if [ -x /sbin/installkernel ]; then
+ exec /sbin/installkernel "$@"
+fi
+
+if [ "$2" = "zImage" ]; then
+# Compressed install
+ echo "Installing compressed kernel"
+ if [ -f $4/vmlinuz-$1 ]; then
+ mv $4/vmlinuz-$1 $4/vmlinuz.old
+ fi
+
+ if [ -f $4/System.map-$1 ]; then
+ mv $4/System.map-$1 $4/System.old
+ fi
+
+ cat $2 > $4/vmlinuz-$1
+ cp $3 $4/System.map-$1
+else
+# Normal install
+ echo "Installing normal kernel"
+ if [ -f $4/vmlinux-$1 ]; then
+ mv $4/vmlinux-$1 $4/vmlinux.old
+ fi
+
+ if [ -f $4/System.map ]; then
+ mv $4/System.map $4/System.old
+ fi
+
+ cat $2 > $4/vmlinux-$1
+ cp $3 $4/System.map
+fi
+
+if [ -x /sbin/loadmap ]; then
+ /sbin/loadmap --rdev /dev/ima
+else
+ echo "You have to install it yourself"
+fi
--- /dev/null
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <a.out.h>
+
+typedef unsigned char byte;
+typedef unsigned short word;
+typedef unsigned long u32;
+
+void die(const char * str, ...)
+{
+ va_list args;
+ va_start(args, str);
+ vfprintf(stderr, str, args);
+ fputc('\n', stderr);
+ exit (1);
+}
+
+int main(int argc, char **argv)
+{
+ void *data;
+ struct exec ex;
+ FILE *f;
+ int totlen;
+
+ if (argc < 2) {
+ fprintf(stderr, "Usage: build kernel-name\n");
+ exit(1);
+ }
+
+ f = fopen(argv[1], "rb");
+ if (!f)
+ die("Unable to open `%s': %m", argv[1]);
+
+ fread(&ex, 1, sizeof(ex), f);
+
+ if(N_MAGIC(ex) == ZMAGIC) {
+ fseek(f, 4096, SEEK_SET);
+ totlen = ex.a_text + ex.a_data;
+ } else
+ if(N_MAGIC(ex) == QMAGIC) {
+ unsigned long my_header;
+
+ fseek(f, 4, SEEK_SET);
+
+ my_header = 0xea000006;
+
+ fwrite(&my_header, 4, 1, stdout);
+
+ totlen = ex.a_text + ex.a_data - 4;
+ } else {
+ fprintf(stderr, "Unacceptable a.out header on kernel\n");
+ fclose(f);
+ exit(1);
+ }
+
+ fprintf(stderr, "Kernel is %dk (%dk text, %dk data, %dk bss)\n",
+ (ex.a_text + ex.a_data + ex.a_bss)/1024,
+ ex.a_text/1024, ex.a_data/1024, ex.a_bss/1024);
+
+ data = malloc(totlen);
+ fread(data, 1, totlen, f);
+ fwrite(data, 1, totlen, stdout);
+
+ free(data);
+ fclose(f);
+ fflush(stdout);
+ return 0;
+}
--- /dev/null
+#
+# For a description of the syntax of this configuration file,
+# see the Configure script.
+#
+mainmenu_name "Linux Kernel Configuration"
+
+define_bool CONFIG_ARM y
+
+mainmenu_option next_comment
+comment 'Code maturity level options'
+bool 'Prompt for development and/or incomplete code/drivers' CONFIG_EXPERIMENTAL
+endmenu
+
+mainmenu_option next_comment
+comment 'Loadable module support'
+bool 'Enable loadable module support' CONFIG_MODULES
+if [ "$CONFIG_MODULES" = "y" ]; then
+ bool 'Set version information on all symbols for modules' CONFIG_MODVERSIONS
+ bool 'Kernel daemon support (e.g. autoload of modules)' CONFIG_KERNELD
+fi
+endmenu
+
+mainmenu_option next_comment
+comment 'General setup'
+choice 'ARM system type' \
+ "Archimedes CONFIG_ARCH_ARC \
+ A5000 CONFIG_ARCH_A5K \
+ RiscPC CONFIG_ARCH_RPC \
+ EBSA-110 CONFIG_ARCH_EBSA110 \
+ NexusPCI CONFIG_ARCH_NEXUSPCI" RiscPC
+if [ "$CONFIG_ARCH_ARC" = "y" -o "$CONFIG_ARCH_A5K" = "y" -o "$CONFIG_ARCH_RPC" = "y" ]; then
+ define_bool CONFIG_ARCH_ACORN y
+else
+ define_bool CONFIG_ARCH_ACORN n
+fi
+if [ "$CONFIG_ARCH_NEXUSPCI" = "y" ]; then
+ define_bool CONFIG_PCI y
+else
+ define_bool CONFIG_PCI n
+fi
+if [ "$CONFIG_ARCH_NEXUSPCI" = "y" -o "$CONFIG_ARCH_EBSA110" = "y" ]; then
+ define_bool CONFIG_CPU_SA110 y
+else
+ if [ "$CONFIG_ARCH_A5K" = "y" ]; then
+ define_bool CONFIG_CPU_ARM3 y
+ else
+ choice 'ARM cpu type' \
+ "ARM2 CONFIG_CPU_ARM2 \
+ ARM3 CONFIG_CPU_ARM3 \
+ ARM6/7 CONFIG_CPU_ARM6 \
+ StrongARM CONFIG_CPU_SA110" StrongARM
+ fi
+fi
+bool 'Compile kernel with frame pointer (for useful debugging)' CONFIG_FRAME_POINTER
+bool 'Use new compilation options (for GCC 2.8)' CONFIG_BINUTILS_NEW
+bool 'Debug kernel errors' CONFIG_DEBUG_ERRORS
+bool 'Networking support' CONFIG_NET
+bool 'System V IPC' CONFIG_SYSVIPC
+bool 'Sysctl support' CONFIG_SYSCTL
+tristate 'Kernel support for a.out binaries' CONFIG_BINFMT_AOUT
+tristate 'Kernel support for ELF binaries' CONFIG_BINFMT_ELF
+if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+# tristate 'Kernel support for JAVA binaries' CONFIG_BINFMT_JAVA
+ define_bool CONFIG_BINFMT_JAVA n
+fi
+tristate 'Parallel port support' CONFIG_PARPORT
+if [ "$CONFIG_PARPORT" != "n" ]; then
+ dep_tristate ' PC-style hardware' CONFIG_PARPORT_PC $CONFIG_PARPORT
+fi
+endmenu
+
+source arch/arm/drivers/block/Config.in
+
+if [ "$CONFIG_NET" = "y" ]; then
+ source net/Config.in
+fi
+
+mainmenu_option next_comment
+comment 'SCSI support'
+
+tristate 'SCSI support?' CONFIG_SCSI
+
+if [ "$CONFIG_SCSI" != "n" ]; then
+ source arch/arm/drivers/scsi/Config.in
+fi
+endmenu
+
+if [ "$CONFIG_NET" = "y" ]; then
+ mainmenu_option next_comment
+ comment 'Network device support'
+
+ bool 'Network device support?' CONFIG_NETDEVICES
+ if [ "$CONFIG_NETDEVICES" = "y" ]; then
+ source arch/arm/drivers/net/Config.in
+ fi
+ endmenu
+fi
+
+# mainmenu_option next_comment
+# comment 'ISDN subsystem'
+#
+# tristate 'ISDN support' CONFIG_ISDN
+# if [ "$CONFIG_ISDN" != "n" ]; then
+# source drivers/isdn/Config.in
+# fi
+# endmenu
+
+# Conditionally compile in the Uniform CD-ROM driver
+if [ "$CONFIG_BLK_DEV_IDECD" = "y" -o "$CONFIG_BLK_DEV_SR" = "y" ]; then
+ define_bool CONFIG_CDROM y
+else
+ if [ "$CONFIG_BLK_DEV_IDECD" = "m" -o "$CONFIG_BLK_DEV_SR" = "m" ]; then
+ define_bool CONFIG_CDROM m
+ else
+ define_bool CONFIG_CDROM n
+ fi
+fi
+
+source fs/Config.in
+
+source fs/nls/Config.in
+
+source arch/arm/drivers/char/Config.in
+
+if [ "$CONFIG_ARCH_ACORN" = "y" ]; then
+ mainmenu_option next_comment
+ comment 'Sound'
+
+ tristate 'Sound support' CONFIG_SOUND
+ if [ "$CONFIG_SOUND" != "n" ]; then
+ source arch/arm/drivers/sound/Config.in
+ fi
+ endmenu
+fi
+
+mainmenu_option next_comment
+comment 'Kernel hacking'
+
+#bool 'Debug kmalloc/kfree' CONFIG_DEBUG_MALLOC
+bool 'Kernel profiling support' CONFIG_PROFILE
+if [ "$CONFIG_PROFILE" = "y" ]; then
+ int ' Profile shift count' CONFIG_PROFILE_SHIFT 2
+fi
+bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
+endmenu
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_ARM=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODVERSIONS=y
+CONFIG_KERNELD=y
+
+#
+# General setup
+#
+# CONFIG_ARCH_ARC is not set
+# CONFIG_ARCH_A5K is not set
+CONFIG_ARCH_RPC=y
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_NEXUSPCI is not set
+CONFIG_ARCH_ACORN=y
+# CONFIG_PCI is not set
+# CONFIG_CPU_ARM2 is not set
+# CONFIG_CPU_ARM3 is not set
+# CONFIG_CPU_ARM6 is not set
+CONFIG_CPU_SA110=y
+CONFIG_FRAME_POINTER=y
+# CONFIG_BINUTILS_NEW is not set
+CONFIG_NET=y
+CONFIG_SYSVIPC=y
+CONFIG_SYSCTL=y
+CONFIG_BINFMT_AOUT=y
+CONFIG_BINFMT_ELF=m
+# CONFIG_BINFMT_JAVA is not set
+CONFIG_PARPORT=y
+CONFIG_PARPORT_PC=y
+
+#
+# Floppy, IDE, and other block devices
+#
+CONFIG_BLK_DEV_FD=y
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+# CONFIG_BLK_DEV_HD_IDE is not set
+CONFIG_BLK_DEV_IDEDISK=y
+CONFIG_BLK_DEV_IDECD=y
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_BLK_DEV_IDESCSI is not set
+# CONFIG_BLK_DEV_IDE_PCMCIA is not set
+CONFIG_BLK_DEV_IDE_CARDS=y
+CONFIG_BLK_DEV_IDE_ICSIDE=y
+# CONFIG_BLK_DEV_IDE_RAPIDE is not set
+# CONFIG_BLK_DEV_XD is not set
+
+#
+# Additional Block Devices
+#
+CONFIG_BLK_DEV_LOOP=m
+# CONFIG_BLK_DEV_MD is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_PARIDE_PARPORT=y
+# CONFIG_PARIDE is not set
+CONFIG_BLK_DEV_PART=y
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# Networking options
+#
+CONFIG_PACKET=m
+# CONFIG_NETLINK is not set
+# CONFIG_FIREWALL is not set
+# CONFIG_NET_ALIAS is not set
+# CONFIG_FILTER is not set
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+# CONFIG_IP_PNP is not set
+# CONFIG_IP_ACCT is not set
+# CONFIG_IP_MASQUERADE is not set
+# CONFIG_IP_ROUTER is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_ALIAS is not set
+# CONFIG_SYN_COOKIES is not set
+
+#
+# (it is safe to leave these untouched)
+#
+# CONFIG_INET_RARP is not set
+CONFIG_IP_NOSR=y
+# CONFIG_SKB_LARGE is not set
+# CONFIG_IPV6 is not set
+
+#
+#
+#
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_LLC is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_CPU_IS_SLOW is not set
+# CONFIG_NET_SCHED is not set
+
+#
+# SCSI support
+#
+CONFIG_SCSI=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+# CONFIG_CHR_DEV_ST is not set
+CONFIG_BLK_DEV_SR=y
+# CONFIG_BLK_DEV_SR_VENDOR is not set
+# CONFIG_CHR_DEV_SG is not set
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+# CONFIG_SCSI_MULTI_LUN is not set
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+
+#
+# SCSI low-level drivers
+#
+CONFIG_SCSI_ACORNSCSI_3=m
+CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE=y
+CONFIG_SCSI_ACORNSCSI_SYNC=y
+CONFIG_SCSI_CUMANA_2=m
+CONFIG_SCSI_POWERTECSCSI=m
+
+#
+# The following drives are not fully supported
+#
+CONFIG_SCSI_CUMANA_1=m
+CONFIG_SCSI_ECOSCSI=m
+CONFIG_SCSI_OAK1=m
+CONFIG_SCSI_PPA=m
+CONFIG_SCSI_PPA_HAVE_PEDANTIC=2
+
+#
+# Network device support
+#
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_EQUALIZER is not set
+CONFIG_PPP=m
+
+#
+# CCP compressors for PPP are only built as modules.
+#
+# CONFIG_SLIP is not set
+CONFIG_ETHER1=m
+CONFIG_ETHER3=m
+CONFIG_ETHERH=m
+CONFIG_CDROM=y
+
+#
+# Filesystems
+#
+# CONFIG_QUOTA is not set
+# CONFIG_MINIX_FS is not set
+CONFIG_EXT2_FS=y
+CONFIG_ISO9660_FS=y
+CONFIG_JOLIET=y
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=y
+# CONFIG_UMSDOS_FS is not set
+CONFIG_VFAT_FS=y
+CONFIG_PROC_FS=y
+CONFIG_NFS_FS=y
+CONFIG_NFSD=y
+CONFIG_SUNRPC=y
+CONFIG_LOCKD=y
+# CONFIG_CODA_FS is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_NTFS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_UFS_FS is not set
+CONFIG_ADFS_FS=y
+# CONFIG_MAC_PARTITION is not set
+CONFIG_NLS=y
+
+#
+# Native Language Support
+#
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_1 is not set
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_KOI8_R is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_SERIAL=y
+# CONFIG_SERIAL_CONSOLE is not set
+# CONFIG_SERIAL_EXTENDED is not set
+CONFIG_ATOMWIDE_SERIAL=y
+CONFIG_DUALSP_SERIAL=y
+CONFIG_MOUSE=y
+CONFIG_PRINTER=m
+CONFIG_PRINTER_READBACK=y
+# CONFIG_UMISC is not set
+# CONFIG_WATCHDOG is not set
+CONFIG_RPCMOUSE=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+# CONFIG_VIDC is not set
+# CONFIG_AUDIO is not set
+# DSP_BUFFSIZE is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_PROFILE is not set
+CONFIG_MAGIC_SYSRQ=y
--- /dev/null
+#
+# Makefile for the linux kernel.
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+
+HEAD_OBJ = head-$(PROCESSOR).o
+ENTRY_OBJ = entry-$(PROCESSOR).o
+
+O_TARGET := kernel.o
+O_OBJS := $(ENTRY_OBJ) ioport.o irq.o process.o ptrace.o signal.o sys_arm.o time.o traps.o
+
+all: kernel.o $(HEAD_OBJ) init_task.o
+
+ifeq ($(CONFIG_MODULES),y)
+OX_OBJS = armksyms.o
+else
+O_OBJS += armksyms.o
+endif
+
+ifdef CONFIG_ARCH_ACORN
+ O_OBJS += setup.o ecard.o iic.o dma.o
+ ifdef CONFIG_ARCH_ARC
+ O_OBJS += oldlatches.o
+ endif
+endif
+
+ifeq ($(MACHINE),ebsa110)
+ O_OBJS += setup-ebsa110.o dma.o
+endif
+
+ifeq ($(MACHINE),nexuspci)
+ O_OBJS += setup-ebsa110.o
+endif
+
+$(HEAD_OBJ): $(HEAD_OBJ:.o=.S)
+ $(CC) -D__ASSEMBLY__ -traditional -c $(HEAD_OBJ:.o=.S) -o $@
+
+include $(TOPDIR)/Rules.make
+
+$(ENTRY_OBJ:.o=.S): ../lib/constants.h
+
+.PHONY: ../lib/constants.h
+
+../lib/constants.h:
+ $(MAKE) -C ../lib constants.h
--- /dev/null
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/user.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+
+#include <asm/ecard.h>
+#include <asm/io.h>
+#include <asm/delay.h>
+#include <asm/dma.h>
+#include <asm/pgtable.h>
+#include <asm/uaccess.h>
+
+extern void dump_thread(struct pt_regs *, struct user *);
+extern int dump_fpu(struct pt_regs *, struct user_fp_struct *);
+
+/*
+ * libgcc functions - functions that are used internally by the
+ * compiler... (the prototypes are not correct, but that doesn't
+ * matter since they are not versioned).
+ */
+extern void __gcc_bcmp(void);
+extern void __ashldi3(void);
+extern void __ashrdi3(void);
+extern void __cmpdi2(void);
+extern void __divdi3(void);
+extern void __divsi3(void);
+extern void __lshrdi3(void);
+extern void __moddi3(void);
+extern void __modsi3(void);
+extern void __muldi3(void);
+extern void __negdi2(void);
+extern void __ucmpdi2(void);
+extern void __udivdi3(void);
+extern void __udivmoddi4(void);
+extern void __udivsi3(void);
+extern void __umoddi3(void);
+extern void __umodsi3(void);
+
+extern void inswb(unsigned int port, void *to, int len);
+extern void outswb(unsigned int port, const void *to, int len);
+
+/*
+ * floating point math emulator support.
+ * These will not change. If they do, then a new version
+ * of the emulator will have to be compiled...
+ * fp_current is never actually dereferenced - it is just
+ * used as a pointer to pass back for send_sig().
+ */
+extern void (*fp_save)(unsigned char *);
+extern void (*fp_restore)(unsigned char *);
+extern void fp_setup(void);
+extern void fpreturn(void);
+extern void fpundefinstr(void);
+extern void fp_enter(void);
+extern void fp_printk(void);
+extern struct task_struct *fp_current;
+extern void fp_send_sig(int);
+
+/* platform dependent support */
+EXPORT_SYMBOL(dump_thread);
+EXPORT_SYMBOL(dump_fpu);
+EXPORT_SYMBOL(udelay);
+EXPORT_SYMBOL(dma_str);
+EXPORT_SYMBOL(xchg_str);
+
+/* expansion card support */
+#ifdef CONFIG_ARCH_ACORN
+EXPORT_SYMBOL(ecard_startfind);
+EXPORT_SYMBOL(ecard_find);
+EXPORT_SYMBOL(ecard_readchunk);
+EXPORT_SYMBOL(ecard_address);
+#endif
+
+/* processor dependencies */
+EXPORT_SYMBOL(processor);
+
+/* io */
+EXPORT_SYMBOL(outswb);
+EXPORT_SYMBOL(outsw);
+EXPORT_SYMBOL(inswb);
+EXPORT_SYMBOL(insw);
+
+#ifdef CONFIG_ARCH_RPC
+EXPORT_SYMBOL(drambank);
+#endif
+
+/* dma */
+EXPORT_SYMBOL(enable_dma);
+EXPORT_SYMBOL(set_dma_mode);
+EXPORT_SYMBOL(set_dma_addr);
+EXPORT_SYMBOL(set_dma_count);
+EXPORT_SYMBOL(get_dma_residue);
+
+/*
+ * floating point math emulator support.
+ * These symbols will never change their calling convention...
+ */
+EXPORT_SYMBOL_NOVERS(fpreturn);
+EXPORT_SYMBOL_NOVERS(fpundefinstr);
+EXPORT_SYMBOL_NOVERS(fp_enter);
+EXPORT_SYMBOL_NOVERS(fp_save);
+EXPORT_SYMBOL_NOVERS(fp_restore);
+EXPORT_SYMBOL_NOVERS(fp_setup);
+
+const char __kstrtab_fp_printk[] __attribute__((section(".kstrtab"))) = __MODULE_STRING(fp_printk);
+const struct module_symbol __ksymtab_fp_printk __attribute__((section("__ksymtab"))) =
+{ (unsigned long)&printk, __kstrtab_fp_printk };
+
+const char __kstrtab_fp_send_sig[] __attribute__((section(".kstrtab"))) = __MODULE_STRING(fp_send_sig);
+const struct module_symbol __ksymtab_fp_send_sig __attribute__((section("__ksymtab"))) =
+{ (unsigned long)&send_sig, __kstrtab_fp_send_sig };
+
+/* EXPORT_SYMBOL_NOVERS(fp_current); */
+
+ /*
+ * string / mem functions
+ */
+EXPORT_SYMBOL_NOVERS(strcpy);
+EXPORT_SYMBOL_NOVERS(strncpy);
+EXPORT_SYMBOL_NOVERS(strcat);
+EXPORT_SYMBOL_NOVERS(strncat);
+EXPORT_SYMBOL_NOVERS(strcmp);
+EXPORT_SYMBOL_NOVERS(strncmp);
+EXPORT_SYMBOL_NOVERS(strchr);
+EXPORT_SYMBOL_NOVERS(strlen);
+EXPORT_SYMBOL_NOVERS(strnlen);
+EXPORT_SYMBOL_NOVERS(strspn);
+EXPORT_SYMBOL_NOVERS(strpbrk);
+EXPORT_SYMBOL_NOVERS(strtok);
+EXPORT_SYMBOL_NOVERS(strrchr);
+EXPORT_SYMBOL_NOVERS(memset);
+EXPORT_SYMBOL_NOVERS(memcpy);
+EXPORT_SYMBOL_NOVERS(memmove);
+EXPORT_SYMBOL_NOVERS(memcmp);
+EXPORT_SYMBOL_NOVERS(memscan);
+EXPORT_SYMBOL_NOVERS(memzero);
+
+ /* user mem (segment) */
+#if defined(CONFIG_CPU_ARM6) || defined(CONFIG_CPU_SA110)
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
+EXPORT_SYMBOL(__arch_clear_user);
+EXPORT_SYMBOL(__arch_strlen_user);
+#elif defined(CONFIG_CPU_ARM2) || defined(CONFIG_CPU_ARM3)
+EXPORT_SYMBOL(uaccess_kernel);
+EXPORT_SYMBOL(uaccess_user);
+#endif
+
+ /* gcc lib functions */
+EXPORT_SYMBOL_NOVERS(__gcc_bcmp);
+EXPORT_SYMBOL_NOVERS(__ashldi3);
+EXPORT_SYMBOL_NOVERS(__ashrdi3);
+EXPORT_SYMBOL_NOVERS(__cmpdi2);
+EXPORT_SYMBOL_NOVERS(__divdi3);
+EXPORT_SYMBOL_NOVERS(__divsi3);
+EXPORT_SYMBOL_NOVERS(__lshrdi3);
+EXPORT_SYMBOL_NOVERS(__moddi3);
+EXPORT_SYMBOL_NOVERS(__modsi3);
+EXPORT_SYMBOL_NOVERS(__muldi3);
+EXPORT_SYMBOL_NOVERS(__negdi2);
+EXPORT_SYMBOL_NOVERS(__ucmpdi2);
+EXPORT_SYMBOL_NOVERS(__udivdi3);
+EXPORT_SYMBOL_NOVERS(__udivmoddi4);
+EXPORT_SYMBOL_NOVERS(__udivsi3);
+EXPORT_SYMBOL_NOVERS(__umoddi3);
+EXPORT_SYMBOL_NOVERS(__umodsi3);
+
+ /* bitops */
+EXPORT_SYMBOL(set_bit);
+EXPORT_SYMBOL(test_and_set_bit);
+EXPORT_SYMBOL(clear_bit);
+EXPORT_SYMBOL(test_and_clear_bit);
+EXPORT_SYMBOL(change_bit);
+EXPORT_SYMBOL(test_and_change_bit);
+EXPORT_SYMBOL(find_first_zero_bit);
+EXPORT_SYMBOL(find_next_zero_bit);
--- /dev/null
+/*
+ * linux/arch/arm/lib/calls.h
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#ifndef NR_SYSCALLS
+#define NR_syscalls 256
+#define NR_SYSCALLS 182
+#else
+
+/* 0 */ .long SYMBOL_NAME(sys_setup)
+ .long SYMBOL_NAME(sys_exit)
+ .long SYMBOL_NAME(sys_fork_wrapper)
+ .long SYMBOL_NAME(sys_read)
+ .long SYMBOL_NAME(sys_write)
+/* 5 */ .long SYMBOL_NAME(sys_open)
+ .long SYMBOL_NAME(sys_close)
+ .long SYMBOL_NAME(sys_waitpid)
+ .long SYMBOL_NAME(sys_creat)
+ .long SYMBOL_NAME(sys_link)
+/* 10 */ .long SYMBOL_NAME(sys_unlink)
+ .long SYMBOL_NAME(sys_execve_wrapper)
+ .long SYMBOL_NAME(sys_chdir)
+ .long SYMBOL_NAME(sys_time)
+ .long SYMBOL_NAME(sys_mknod)
+/* 15 */ .long SYMBOL_NAME(sys_chmod)
+ .long SYMBOL_NAME(sys_chown)
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_break */
+ .long SYMBOL_NAME(sys_stat)
+ .long SYMBOL_NAME(sys_lseek)
+/* 20 */ .long SYMBOL_NAME(sys_getpid)
+ .long SYMBOL_NAME(sys_mount_wrapper)
+ .long SYMBOL_NAME(sys_umount)
+ .long SYMBOL_NAME(sys_setuid)
+ .long SYMBOL_NAME(sys_getuid)
+/* 25 */ .long SYMBOL_NAME(sys_stime)
+ .long SYMBOL_NAME(sys_ptrace)
+ .long SYMBOL_NAME(sys_alarm)
+ .long SYMBOL_NAME(sys_fstat)
+ .long SYMBOL_NAME(sys_pause)
+/* 30 */ .long SYMBOL_NAME(sys_utime)
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_stty */
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_getty */
+ .long SYMBOL_NAME(sys_access)
+ .long SYMBOL_NAME(sys_nice)
+/* 35 */ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_ftime */
+ .long SYMBOL_NAME(sys_sync)
+ .long SYMBOL_NAME(sys_kill)
+ .long SYMBOL_NAME(sys_rename)
+ .long SYMBOL_NAME(sys_mkdir)
+/* 40 */ .long SYMBOL_NAME(sys_rmdir)
+ .long SYMBOL_NAME(sys_dup)
+ .long SYMBOL_NAME(sys_pipe)
+ .long SYMBOL_NAME(sys_times)
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_prof */
+/* 45 */ .long SYMBOL_NAME(sys_brk)
+ .long SYMBOL_NAME(sys_setgid)
+ .long SYMBOL_NAME(sys_getgid)
+ .long SYMBOL_NAME(sys_signal)
+ .long SYMBOL_NAME(sys_geteuid)
+/* 50 */ .long SYMBOL_NAME(sys_getegid)
+ .long SYMBOL_NAME(sys_acct)
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_phys */
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_lock */
+ .long SYMBOL_NAME(sys_ioctl)
+/* 55 */ .long SYMBOL_NAME(sys_fcntl)
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_mpx */
+ .long SYMBOL_NAME(sys_setpgid)
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_ulimit */
+ .long SYMBOL_NAME(sys_olduname)
+/* 60 */ .long SYMBOL_NAME(sys_umask)
+ .long SYMBOL_NAME(sys_chroot)
+ .long SYMBOL_NAME(sys_ustat)
+ .long SYMBOL_NAME(sys_dup2)
+ .long SYMBOL_NAME(sys_getppid)
+/* 65 */ .long SYMBOL_NAME(sys_getpgrp)
+ .long SYMBOL_NAME(sys_setsid)
+ .long SYMBOL_NAME(sys_sigaction)
+ .long SYMBOL_NAME(sys_sgetmask)
+ .long SYMBOL_NAME(sys_ssetmask)
+/* 70 */ .long SYMBOL_NAME(sys_setreuid)
+ .long SYMBOL_NAME(sys_setregid)
+ .long SYMBOL_NAME(sys_sigsuspend_wrapper)
+ .long SYMBOL_NAME(sys_sigpending)
+ .long SYMBOL_NAME(sys_sethostname)
+/* 75 */ .long SYMBOL_NAME(sys_setrlimit)
+ .long SYMBOL_NAME(sys_getrlimit)
+ .long SYMBOL_NAME(sys_getrusage)
+ .long SYMBOL_NAME(sys_gettimeofday)
+ .long SYMBOL_NAME(sys_settimeofday)
+/* 80 */ .long SYMBOL_NAME(sys_getgroups)
+ .long SYMBOL_NAME(sys_setgroups)
+ .long SYMBOL_NAME(old_select)
+ .long SYMBOL_NAME(sys_symlink)
+ .long SYMBOL_NAME(sys_lstat)
+/* 85 */ .long SYMBOL_NAME(sys_readlink)
+ .long SYMBOL_NAME(sys_uselib)
+ .long SYMBOL_NAME(sys_swapon)
+ .long SYMBOL_NAME(sys_reboot)
+ .long SYMBOL_NAME(old_readdir)
+/* 90 */ .long SYMBOL_NAME(old_mmap)
+ .long SYMBOL_NAME(sys_munmap)
+ .long SYMBOL_NAME(sys_truncate)
+ .long SYMBOL_NAME(sys_ftruncate)
+ .long SYMBOL_NAME(sys_fchmod)
+/* 95 */ .long SYMBOL_NAME(sys_fchown)
+ .long SYMBOL_NAME(sys_getpriority)
+ .long SYMBOL_NAME(sys_setpriority)
+ .long SYMBOL_NAME(sys_ni_syscall) /* was sys_profil */
+ .long SYMBOL_NAME(sys_statfs)
+/* 100 */ .long SYMBOL_NAME(sys_fstatfs)
+ .long SYMBOL_NAME(sys_ni_syscall) /* .long _sys_ioperm */
+ .long SYMBOL_NAME(sys_socketcall)
+ .long SYMBOL_NAME(sys_syslog)
+ .long SYMBOL_NAME(sys_setitimer)
+/* 105 */ .long SYMBOL_NAME(sys_getitimer)
+ .long SYMBOL_NAME(sys_newstat)
+ .long SYMBOL_NAME(sys_newlstat)
+ .long SYMBOL_NAME(sys_newfstat)
+ .long SYMBOL_NAME(sys_uname)
+/* 110 */ .long SYMBOL_NAME(sys_iopl)
+ .long SYMBOL_NAME(sys_vhangup)
+ .long SYMBOL_NAME(sys_idle)
+ .long SYMBOL_NAME(sys_syscall) /* call a syscall */
+ .long SYMBOL_NAME(sys_wait4)
+/* 115 */ .long SYMBOL_NAME(sys_swapoff)
+ .long SYMBOL_NAME(sys_sysinfo)
+ .long SYMBOL_NAME(sys_ipc)
+ .long SYMBOL_NAME(sys_fsync)
+ .long SYMBOL_NAME(sys_sigreturn_wrapper)
+/* 120 */ .long SYMBOL_NAME(sys_clone_wrapper)
+ .long SYMBOL_NAME(sys_setdomainname)
+ .long SYMBOL_NAME(sys_newuname)
+ .long SYMBOL_NAME(sys_ni_syscall) /* .long SYMBOL_NAME(sys_modify_ldt) */
+ .long SYMBOL_NAME(sys_adjtimex)
+/* 125 */ .long SYMBOL_NAME(sys_mprotect)
+ .long SYMBOL_NAME(sys_sigprocmask)
+ .long SYMBOL_NAME(sys_create_module)
+ .long SYMBOL_NAME(sys_init_module)
+ .long SYMBOL_NAME(sys_delete_module)
+/* 130 */ .long SYMBOL_NAME(sys_get_kernel_syms)
+ .long SYMBOL_NAME(sys_quotactl)
+ .long SYMBOL_NAME(sys_getpgid)
+ .long SYMBOL_NAME(sys_fchdir)
+ .long SYMBOL_NAME(sys_bdflush)
+/* 135 */ .long SYMBOL_NAME(sys_sysfs)
+ .long SYMBOL_NAME(sys_personality)
+ .long SYMBOL_NAME(sys_ni_syscall) /* .long _sys_afs_syscall */
+ .long SYMBOL_NAME(sys_setfsuid)
+ .long SYMBOL_NAME(sys_setfsgid)
+/* 140 */ .long SYMBOL_NAME(sys_llseek_wrapper)
+ .long SYMBOL_NAME(sys_getdents)
+ .long SYMBOL_NAME(sys_select)
+ .long SYMBOL_NAME(sys_flock)
+ .long SYMBOL_NAME(sys_msync)
+/* 145 */ .long SYMBOL_NAME(sys_readv)
+ .long SYMBOL_NAME(sys_writev)
+ .long SYMBOL_NAME(sys_getsid)
+ .long SYMBOL_NAME(sys_ni_syscall)
+ .long SYMBOL_NAME(sys_ni_syscall)
+/* 150 */ .long SYMBOL_NAME(sys_mlock)
+ .long SYMBOL_NAME(sys_munlock)
+ .long SYMBOL_NAME(sys_mlockall)
+ .long SYMBOL_NAME(sys_munlockall)
+ .long SYMBOL_NAME(sys_sched_setparam)
+/* 155 */ .long SYMBOL_NAME(sys_sched_getparam)
+ .long SYMBOL_NAME(sys_sched_setscheduler)
+ .long SYMBOL_NAME(sys_sched_getscheduler)
+ .long SYMBOL_NAME(sys_sched_yield)
+ .long SYMBOL_NAME(sys_sched_get_priority_max)
+/* 160 */ .long SYMBOL_NAME(sys_sched_get_priority_min)
+ .long SYMBOL_NAME(sys_sched_rr_get_interval)
+ .long SYMBOL_NAME(sys_nanosleep)
+ .long SYMBOL_NAME(sys_mremap)
+ .long SYMBOL_NAME(sys_setresuid)
+/* 165 */ .long SYMBOL_NAME(sys_getresuid)
+ .long SYMBOL_NAME(sys_ni_syscall)
+ .long SYMBOL_NAME(sys_query_module)
+ .long SYMBOL_NAME(sys_poll)
+ .long SYMBOL_NAME(sys_nfsservctl)
+/* 170 */ .long SYMBOL_NAME(sys_setresgid)
+ .long SYMBOL_NAME(sys_getresgid)
+ .long SYMBOL_NAME(sys_prctl)
+ .long SYMBOL_NAME(sys_rt_sigreturn_wrapper)
+ .long SYMBOL_NAME(sys_rt_sigaction)
+/* 175 */ .long SYMBOL_NAME(sys_rt_sigprocmask)
+ .long SYMBOL_NAME(sys_rt_sigpending)
+ .long SYMBOL_NAME(sys_rt_sigtimedwait)
+ .long SYMBOL_NAME(sys_rt_sigqueueinfo)
+ .long SYMBOL_NAME(sys_rt_sigsuspend_wrapper)
+/* 180 */ .long SYMBOL_NAME(sys_pread)
+ .long SYMBOL_NAME(sys_pwrite)
+ .space (NR_syscalls - 182) * 4
+#endif
--- /dev/null
+/*
+ * linux/arch/arm/kernel/dma.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/malloc.h>
+#include <linux/mman.h>
+
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/irq.h>
+#include <asm/hardware.h>
+#include <asm/io.h>
+#define KERNEL_ARCH_DMA
+#include <asm/dma.h>
+
+static unsigned long dma_address[8];
+static unsigned long dma_count[8];
+static char dma_direction[8] = { -1, -1, -1, -1, -1, -1, -1, -1 };
+
+#if defined(CONFIG_ARCH_A5K) || defined(CONFIG_ARCH_RPC)
+#define DMA_PCIO
+#endif
+#if defined(CONFIG_ARCH_ARC) && defined(CONFIG_BLK_DEV_FD)
+#define DMA_OLD
+#endif
+
+void enable_dma (unsigned int dmanr)
+{
+ switch (dmanr) {
+#ifdef DMA_PCIO
+ case 2: {
+ void *fiqhandler_start;
+ unsigned int fiqhandler_length;
+ extern void floppy_fiqsetup (unsigned long len, unsigned long addr,
+ unsigned long port);
+ switch (dma_direction[dmanr]) {
+ case 1: {
+ extern unsigned char floppy_fiqin_start, floppy_fiqin_end;
+ fiqhandler_start = &floppy_fiqin_start;
+ fiqhandler_length = &floppy_fiqin_end - &floppy_fiqin_start;
+ break;
+ }
+ case 0: {
+ extern unsigned char floppy_fiqout_start, floppy_fiqout_end;
+ fiqhandler_start = &floppy_fiqout_start;
+ fiqhandler_length = &floppy_fiqout_end - &floppy_fiqout_start;
+ break;
+ }
+ default:
+ printk ("enable_dma: dma%d not initialised\n", dmanr);
+ return;
+ }
+ memcpy ((void *)0x1c, fiqhandler_start, fiqhandler_length);
+ flush_page_to_ram(0);
+ floppy_fiqsetup (dma_count[dmanr], dma_address[dmanr], (int)PCIO_FLOPPYDMABASE);
+ enable_irq (64);
+ return;
+ }
+#endif
+#ifdef DMA_OLD
+ case 0: { /* Data DMA */
+ switch (dma_direction[dmanr]) {
+ case 1: /* read */
+ {
+ extern unsigned char fdc1772_dma_read, fdc1772_dma_read_end;
+ extern void fdc1772_setupdma(unsigned int count,unsigned int addr);
+ unsigned long flags;
+#ifdef DEBUG
+ printk("enable_dma fdc1772 data read\n");
+#endif
+ save_flags(flags);
+ cliIF();
+
+ memcpy ((void *)0x1c, (void *)&fdc1772_dma_read,
+ &fdc1772_dma_read_end - &fdc1772_dma_read);
+ fdc1772_setupdma(dma_count[dmanr],dma_address[dmanr]); /* Sets data pointer up */
+ enable_irq (64);
+ restore_flags(flags);
+ }
+ break;
+
+ case 0: /* write */
+ {
+ extern unsigned char fdc1772_dma_write, fdc1772_dma_write_end;
+ extern void fdc1772_setupdma(unsigned int count,unsigned int addr);
+ unsigned long flags;
+
+#ifdef DEBUG
+ printk("enable_dma fdc1772 data write\n");
+#endif
+ save_flags(flags);
+ cliIF();
+ memcpy ((void *)0x1c, (void *)&fdc1772_dma_write,
+ &fdc1772_dma_write_end - &fdc1772_dma_write);
+ fdc1772_setupdma(dma_count[dmanr],dma_address[dmanr]); /* Sets data pointer up */
+ enable_irq (64);
+
+ restore_flags(flags);
+ }
+ break;
+ default:
+ printk ("enable_dma: dma%d not initialised\n", dmanr);
+ return;
+ }
+ }
+ break;
+
+ case 1: { /* Command end FIQ - actually just sets a flag */
+ /* Need to build a branch at the FIQ address */
+ extern void fdc1772_comendhandler(void);
+ unsigned long flags;
+
+ /*printk("enable_dma fdc1772 command end FIQ\n");*/
+ save_flags(flags);
+ cliIF();
+
+ *((unsigned int *)0x1c)=0xea000000 | (((unsigned int)fdc1772_comendhandler-(0x1c+8))/4); /* B fdc1772_comendhandler */
+
+ restore_flags(flags);
+ }
+ break;
+#endif
+ case DMA_0:
+ case DMA_1:
+ case DMA_2:
+ case DMA_3:
+ case DMA_S0:
+ case DMA_S1:
+ arch_enable_dma (dmanr - DMA_0);
+ break;
+
+ default:
+ printk ("enable_dma: dma %d not supported\n", dmanr);
+ }
+}
+
+void set_dma_mode (unsigned int dmanr, char mode)
+{
+ if (dmanr < 8) {
+ if (mode == DMA_MODE_READ)
+ dma_direction[dmanr] = 1;
+ else if (mode == DMA_MODE_WRITE)
+ dma_direction[dmanr] = 0;
+ else
+ printk ("set_dma_mode: dma%d: invalid mode %02X not supported\n",
+ dmanr, mode);
+ } else if (dmanr < MAX_DMA_CHANNELS)
+ arch_set_dma_mode (dmanr - DMA_0, mode);
+ else
+ printk ("set_dma_mode: dma %d not supported\n", dmanr);
+}
+
+void set_dma_addr (unsigned int dmanr, unsigned int addr)
+{
+ if (dmanr < 8)
+ dma_address[dmanr] = (unsigned long)addr;
+ else if (dmanr < MAX_DMA_CHANNELS)
+ arch_set_dma_addr (dmanr - DMA_0, addr);
+ else
+ printk ("set_dma_addr: dma %d not supported\n", dmanr);
+}
+
+void set_dma_count (unsigned int dmanr, unsigned int count)
+{
+ if (dmanr < 8)
+ dma_count[dmanr] = (unsigned long)count;
+ else if (dmanr < MAX_DMA_CHANNELS)
+ arch_set_dma_count (dmanr - DMA_0, count);
+ else
+ printk ("set_dma_count: dma %d not supported\n", dmanr);
+}
+
+int get_dma_residue (unsigned int dmanr)
+{
+ if (dmanr < 8) {
+ switch (dmanr) {
+#if defined(CONFIG_ARCH_A5K) || defined(CONFIG_ARCH_RPC)
+ case 2: {
+ extern int floppy_fiqresidual (void);
+ return floppy_fiqresidual ();
+ }
+#endif
+#if defined(CONFIG_ARCH_ARC) && defined(CONFIG_BLK_DEV_FD)
+ case 0: {
+ extern unsigned int fdc1772_bytestogo;
+ return fdc1772_bytestogo;
+ }
+#endif
+ default:
+ return -1;
+ }
+ } else if (dmanr < MAX_DMA_CHANNELS)
+ return arch_dma_count (dmanr - DMA_0);
+ return -1;
+}
--- /dev/null
+/*
+ * linux/arch/arm/kernel/ecard.c
+ *
+ * Find all installed expansion cards, and handle interrupts from them.
+ *
+ * Copyright 1995,1996,1997 Russell King
+ *
+ * Created from information from Acorn's RISC OS 3 PRMs
+ *
+ * 08-Dec-1996 RMK Added code for the 9th expansion card - the ether podule slot.
+ * 06-May-1997 RMK Added blacklist for cards whose loader doesn't work.
+ * 12-Sep-1997 RMK Created new handling of interrupt enables/disables - cards can
+ * now register their own routine to control interrupts (recommended).
+ * 29-Sep-1997 RMK Expansion card interrupt hardware was not being re-enabled on
+ * reset from Linux, causing cards not to respond under RISC OS
+ * without a hard reset.
+ */
+
+#define ECARD_C
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/malloc.h>
+
+#include <asm/irq-no.h>
+#include <asm/ecard.h>
+#include <asm/irq.h>
+#include <asm/io.h>
+#include <asm/hardware.h>
+#include <asm/arch/irq.h>
+
+#ifdef CONFIG_ARCH_ARC
+#include <asm/arch/oldlatches.h>
+#else
+#define oldlatch_init()
+#endif
+
+#define BLACKLIST_NAME(m,p,s) { m, p, NULL, s }
+#define BLACKLIST_LOADER(m,p,l) { m, p, l, NULL }
+#define BLACKLIST_NOLOADER(m,p) { m, p, noloader, blacklisted_str }
+#define BUS_ADDR(x) ((((unsigned long)(x)) << 2) + IO_BASE)
+
+extern unsigned long atomwide_serial_loader[], oak_scsi_loader[], noloader[];
+static const char blacklisted_str[] = "*loader blacklisted - not 32-bit compliant*";
+
+static const struct expcard_blacklist {
+ unsigned short manufacturer;
+ unsigned short product;
+ const loader_t loader;
+ const char *type;
+} blacklist[] = {
+/* Cards without names */
+ BLACKLIST_NAME(MANU_ACORN, PROD_ACORN_ETHER1, "Acorn Ether1"),
+
+/* Cards with corrected loader */
+ BLACKLIST_LOADER(MANU_ATOMWIDE, PROD_ATOMWIDE_3PSERIAL, atomwide_serial_loader),
+ BLACKLIST_LOADER(MANU_OAK, PROD_OAK_SCSI, oak_scsi_loader),
+
+/* Unsupported cards with no loader */
+BLACKLIST_NOLOADER(MANU_ALSYSTEMS, PROD_ALSYS_SCSIATAPI),
+BLACKLIST_NOLOADER(MANU_MCS, PROD_MCS_CONNECT32)
+};
+
+extern int setup_arm_irq(int, struct irqaction *);
+
+/*
+ * from linux/arch/arm/kernel/irq.c
+ */
+extern void do_ecard_IRQ(int irq, struct pt_regs *);
+
+static ecard_t expcard[MAX_ECARDS];
+static signed char irqno_to_expcard[16];
+static unsigned int ecard_numcards, ecard_numirqcards;
+static unsigned int have_expmask;
+static unsigned long kmem;
+
+static void ecard_def_irq_enable (ecard_t *ec, int irqnr)
+{
+#ifdef HAS_EXPMASK
+ if (irqnr < 4 && have_expmask) {
+ have_expmask |= 1 << irqnr;
+ EXPMASK_ENABLE = have_expmask;
+ }
+#endif
+}
+
+static void ecard_def_irq_disable (ecard_t *ec, int irqnr)
+{
+#ifdef HAS_EXPMASK
+ if (irqnr < 4 && have_expmask) {
+ have_expmask &= ~(1 << irqnr);
+ EXPMASK_ENABLE = have_expmask;
+ }
+#endif
+}
+
+static void ecard_def_fiq_enable (ecard_t *ec, int fiqnr)
+{
+ panic ("ecard_def_fiq_enable called - impossible");
+}
+
+static void ecard_def_fiq_disable (ecard_t *ec, int fiqnr)
+{
+ panic ("ecard_def_fiq_disable called - impossible");
+}
+
+static expansioncard_ops_t ecard_default_ops = {
+ ecard_def_irq_enable,
+ ecard_def_irq_disable,
+ ecard_def_fiq_enable,
+ ecard_def_fiq_disable
+};
+
+/*
+ * Enable and disable interrupts from expansion cards.
+ * (interrupts are disabled for these functions).
+ *
+ * They are not meant to be called directly, but via enable/disable_irq.
+ */
+void ecard_enableirq (unsigned int irqnr)
+{
+ if (irqnr < MAX_ECARDS && irqno_to_expcard[irqnr] != -1) {
+ ecard_t *ec = expcard + irqno_to_expcard[irqnr];
+
+ if (!ec->ops)
+ ec->ops = &ecard_default_ops;
+
+ if (ec->claimed && ec->ops->irqenable)
+ ec->ops->irqenable (ec, irqnr);
+ else
+ printk (KERN_ERR "ecard: rejecting request to "
+ "enable IRQs for %d\n", irqnr);
+ }
+}
+
+void ecard_disableirq (unsigned int irqnr)
+{
+ if (irqnr < MAX_ECARDS && irqno_to_expcard[irqnr] != -1) {
+ ecard_t *ec = expcard + irqno_to_expcard[irqnr];
+
+ if (!ec->ops)
+ ec->ops = &ecard_default_ops;
+
+ if (ec->ops && ec->ops->irqdisable)
+ ec->ops->irqdisable (ec, irqnr);
+ }
+}
+
+void ecard_enablefiq (unsigned int fiqnr)
+{
+ if (fiqnr < MAX_ECARDS && irqno_to_expcard[fiqnr] != -1) {
+ ecard_t *ec = expcard + irqno_to_expcard[fiqnr];
+
+ if (!ec->ops)
+ ec->ops = &ecard_default_ops;
+
+ if (ec->claimed && ec->ops->fiqenable)
+ ec->ops->fiqenable (ec, fiqnr);
+ else
+ printk (KERN_ERR "ecard: rejecting request to "
+ "enable FIQs for %d\n", fiqnr);
+ }
+}
+
+void ecard_disablefiq (unsigned int fiqnr)
+{
+ if (fiqnr < MAX_ECARDS && irqno_to_expcard[fiqnr] != -1) {
+ ecard_t *ec = expcard + irqno_to_expcard[fiqnr];
+
+ if (!ec->ops)
+ ec->ops = &ecard_default_ops;
+
+ if (ec->ops->fiqdisable)
+ ec->ops->fiqdisable (ec, fiqnr);
+ }
+}
+
+static void *ecard_malloc(int len)
+{
+ unsigned long r;
+
+ len = (len + 3) & ~3;
+
+ if (kmem) {
+ r = kmem;
+ kmem += len;
+ return (void *)r;
+ } else
+ return kmalloc(len, GFP_KERNEL);
+}
+
+static void ecard_irq_noexpmask(int intr_no, void *dev_id, struct pt_regs *regs)
+{
+ const int num_cards = ecard_numirqcards;
+ int i, called = 0;
+
+ mask_irq (IRQ_EXPANSIONCARD);
+ for (i = 0; i < num_cards; i++) {
+ if (expcard[i].claimed && expcard[i].irq &&
+ (!expcard[i].irqmask ||
+ expcard[i].irqaddr[0] & expcard[i].irqmask)) {
+ do_ecard_IRQ(expcard[i].irq, regs);
+ called ++;
+ }
+ }
+ cli ();
+ unmask_irq (IRQ_EXPANSIONCARD);
+ if (called == 0)
+ printk (KERN_WARNING "Wild interrupt from backplane?\n");
+}
+
+#ifdef HAS_EXPMASK
+static unsigned char priority_masks[] =
+{
+ 0xf0, 0xf1, 0xf3, 0xf7, 0xff, 0xff, 0xff, 0xff
+};
+
+static unsigned char first_set[] =
+{
+ 0x00, 0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00,
+ 0x03, 0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00
+};
+
+static void ecard_irq_expmask (int intr_no, void *dev_id, struct pt_regs *regs)
+{
+ const unsigned int statusmask = 15;
+ unsigned int status;
+
+ status = EXPMASK_STATUS & statusmask;
+ if (status) {
+ unsigned int irqno;
+ ecard_t *ec;
+again:
+ irqno = first_set[status];
+ ec = expcard + irqno_to_expcard[irqno];
+ if (ec->claimed) {
+ unsigned int oldexpmask;
+ /*
+ * This ugly code is so that we can operate a prioritising system:
+ * Card 0 highest priority
+ * Card 1
+ * Card 2
+ * Card 3 lowest priority
+ * Serial cards should go in 0/1, ethernet/SCSI in 2/3,
+ * otherwise you will lose serial data at high speeds!
+ */
+ oldexpmask = have_expmask;
+ EXPMASK_ENABLE = (have_expmask &= priority_masks[irqno]);
+ sti ();
+ do_ecard_IRQ (ec->irq, regs);
+ cli ();
+ EXPMASK_ENABLE = have_expmask = oldexpmask;
+ status = EXPMASK_STATUS & statusmask;
+ if (status)
+ goto again;
+ } else {
+ printk (KERN_WARNING "card%d: interrupt from unclaimed card???\n", irqno);
+ EXPMASK_ENABLE = (have_expmask &= ~(1 << irqno));
+ }
+ } else
+ printk (KERN_WARNING "Wild interrupt from backplane (masks)\n");
+}
+
+static int ecard_checkirqhw (void)
+{
+ int found;
+
+ EXPMASK_ENABLE = 0x00;
+ EXPMASK_STATUS = 0xff;
+ found = ((EXPMASK_STATUS & 15) == 0);
+ EXPMASK_ENABLE = 0xff;
+
+ return found;
+}
+#endif
+
+static void ecard_readbytes (void *addr, ecard_t *ec, int off, int len, int useld)
+{
+ extern int ecard_loader_read(int off, volatile unsigned int pa, loader_t loader);
+ unsigned char *a = (unsigned char *)addr;
+
+ if (ec->slot_no == 8) {
+ static unsigned int lowaddress;
+ unsigned int laddr, haddr;
+ unsigned char byte = 0; /* keep gcc quiet */
+
+ laddr = off & 4095; /* number of bytes to read from offset + base addr */
+ haddr = off >> 12; /* offset into card from base addr */
+
+ if (haddr > 256)
+ return;
+
+ /*
+ * If we require a low address or address 0, then reset, and start again...
+ */
+ if (!off || lowaddress > laddr) {
+ outb (0, ec->podaddr);
+ lowaddress = 0;
+ }
+ while (lowaddress <= laddr) {
+ byte = inb (ec->podaddr + haddr);
+ lowaddress += 1;
+ }
+ while (len--) {
+ *a++ = byte;
+ if (len) {
+ byte = inb (ec->podaddr + haddr);
+ lowaddress += 1;
+ }
+ }
+ } else {
+ if (!useld || !ec->loader) {
+ while(len--)
+ *a++ = inb(ec->podaddr + (off++));
+ } else {
+ while(len--) {
+ *(unsigned long *)0x108 = 0; /* hack for some loaders!!! */
+ *a++ = ecard_loader_read(off++, BUS_ADDR(ec->podaddr), ec->loader);
+ }
+ }
+ }
+}
+
+/*
+ * This is called to reset the loaders for each expansion card on reboot.
+ *
+ * This is required to make sure that the card is in the correct state
+ * that RiscOS expects it to be.
+ */
+void ecard_reset (int card)
+{
+ extern int ecard_loader_reset (volatile unsigned int pa, loader_t loader);
+
+ if (card >= ecard_numcards)
+ return;
+
+ if (card < 0) {
+ for (card = 0; card < ecard_numcards; card++)
+ if (expcard[card].loader)
+ ecard_loader_reset (BUS_ADDR(expcard[card].podaddr),
+ expcard[card].loader);
+ } else
+ if (expcard[card].loader)
+ ecard_loader_reset (BUS_ADDR(expcard[card].podaddr),
+ expcard[card].loader);
+
+#ifdef HAS_EXPMASK
+ if (have_expmask) {
+ have_expmask |= ~0;
+ EXPMASK_ENABLE = have_expmask;
+ }
+#endif
+}
+
+static unsigned int ecard_startcard;
+
+void ecard_startfind (void)
+{
+ ecard_startcard = 0;
+}
+
+ecard_t *ecard_find (int cld, const card_ids *cids)
+{
+ int card;
+ if (!cids) {
+ for (card = ecard_startcard; card < ecard_numcards; card++)
+ if (!expcard[card].claimed &&
+ ((expcard[card].cld.ecld ^ cld) & 0x78) == 0)
+ break;
+ } else {
+ for (card = ecard_startcard; card < ecard_numcards; card++) {
+ unsigned int manufacturer, product;
+ int i;
+
+ if (expcard[card].claimed)
+ continue;
+
+ manufacturer = expcard[card].cld.manufacturer;
+ product = expcard[card].cld.product;
+
+ for (i = 0; cids[i].manufacturer != 65535; i++)
+ if (manufacturer == cids[i].manufacturer &&
+ product == cids[i].product)
+ break;
+
+ if (cids[i].manufacturer != 65535)
+ break;
+ }
+ }
+ ecard_startcard = card + 1;
+ return card < ecard_numcards ? &expcard[card] : NULL;
+}
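The ID-list walk in `ecard_find()` relies on a sentinel entry with `manufacturer == 65535` to terminate the table. A standalone sketch of that lookup, with a simplified stand-in type for the kernel's `card_ids` and made-up IDs for illustration:

```c
#include <assert.h>

/* Simplified stand-in for the kernel's card_ids table entry. */
struct card_id { unsigned int manufacturer, product; };

/* Example table; the IDs are invented.  The 65535 entry is the
 * sentinel that terminates the walk, as in ecard_find(). */
static const struct card_id example_ids[] = {
    { 0x0011, 0x00a4 },
    { 0xffff, 0xffff }
};

/* Return 1 if (manufacturer, product) appears in the sentinel-terminated
 * table, mirroring the inner loop of ecard_find(). */
static int card_id_matches(const struct card_id *ids,
                           unsigned int manufacturer, unsigned int product)
{
    int i;

    for (i = 0; ids[i].manufacturer != 65535; i++)
        if (ids[i].manufacturer == manufacturer &&
            ids[i].product == product)
            return 1;
    return 0;
}
```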
+
+int ecard_readchunk (struct in_chunk_dir *cd, ecard_t *ec, int id, int num)
+{
+ struct ex_chunk_dir excd;
+ int index = 16;
+ int useld = 0;
+
+ while(1) {
+ ecard_readbytes(&excd, ec, index, 8, useld);
+ index += 8;
+ if (c_id(&excd) == 0) {
+ if (!useld && ec->loader) {
+ useld = 1;
+ index = 0;
+ continue;
+ }
+ return 0;
+ }
+ if (c_id(&excd) == 0xf0) { /* link */
+ index = c_start(&excd);
+ continue;
+ }
+ if (c_id(&excd) == 0x80) { /* loader */
+ if (!ec->loader) {
+ ec->loader = (loader_t)ecard_malloc(c_len(&excd));
+ ecard_readbytes(ec->loader, ec, (int)c_start(&excd), c_len(&excd), useld);
+ }
+ continue;
+ }
+ if (c_id(&excd) == id && num-- == 0)
+ break;
+ }
+
+ if (c_id(&excd) & 0x80) {
+ switch (c_id(&excd) & 0x70) {
+ case 0x70:
+ ecard_readbytes((unsigned char *)excd.d.string, ec,
+ (int)c_start(&excd), c_len(&excd), useld);
+ break;
+ case 0x00:
+ break;
+ }
+ }
+ cd->start_offset = c_start(&excd);
+ memcpy (cd->d.string, excd.d.string, 256);
+ return 1;
+}
+
+unsigned int ecard_address (ecard_t *ec, card_type_t memc, card_speed_t speed)
+{
+ switch (ec->slot_no) {
+ case 0:
+ case 1:
+ case 2:
+ case 3:
+ return (memc ? MEMCECIO_BASE : IOCECIO_BASE + (speed << 17)) + (ec->slot_no << 12);
+#ifdef IOCEC4IO_BASE
+ case 4:
+ case 5:
+ case 6:
+ case 7:
+ return (memc ? 0 : IOCEC4IO_BASE + (speed << 17)) + ((ec->slot_no - 4) << 12);
+#endif
+#ifdef MEMCEC8IO_BASE
+ case 8:
+ return MEMCEC8IO_BASE;
+#endif
+ }
+ return 0;
+}
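For IOC slots 0-3 the address returned above is `base + (speed << 17) + (slot << 12)`: each slot owns a 4K window, and each access speed selects a separate 128K region. A sketch with an assumed base constant (the real `IOCECIO_BASE` value comes from the hardware headers, not from here):

```c
#include <assert.h>

/* Assumed base for illustration only -- the kernel takes IOCECIO_BASE
 * from its hardware headers. */
#define EXAMPLE_IOCECIO_BASE 0x03240000u

/* Mirror of the slot 0-3 IOC case in ecard_address():
 * speed selects a 128K region, slot a 4K window within it. */
static unsigned int example_ioc_address(unsigned int slot, unsigned int speed)
{
    return EXAMPLE_IOCECIO_BASE + (speed << 17) + (slot << 12);
}
```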
+
+/*
+ * Probe for an expansion card.
+ *
+ * If bit 1 of the first byte of the card is set,
+ * then the card does not exist.
+ */
+static int ecard_probe (int card, int freeslot)
+{
+ ecard_t *ec = expcard + freeslot;
+ struct ex_ecld excld;
+ const char *card_desc = NULL;
+ int i;
+
+ irqno_to_expcard[card] = -1;
+
+ ec->slot_no = card;
+ if ((ec->podaddr = ecard_address (ec, 0, ECARD_SYNC)) == 0)
+ return 0;
+
+ excld.r_ecld = 2;
+ ecard_readbytes (&excld, ec, 0, 16, 0);
+ if (excld.r_ecld & 2)
+ return 0;
+
+ irqno_to_expcard[card] = freeslot;
+
+ ec->irq = -1;
+ ec->fiq = -1;
+ ec->cld.ecld = e_ecld(&excld);
+ ec->cld.manufacturer = e_manu(&excld);
+ ec->cld.product = e_prod(&excld);
+ ec->cld.country = e_country(&excld);
+ ec->cld.fiqmask = e_fiqmask(&excld);
+ ec->cld.irqmask = e_irqmask(&excld);
+ ec->cld.fiqaddr = e_fiqaddr(&excld);
+ ec->cld.irqaddr = e_irqaddr(&excld);
+ ec->fiqaddr =
+ ec->irqaddr = (unsigned char *)BUS_ADDR(ec->podaddr);
+ ec->fiqmask = 4;
+ ec->irqmask = 1;
+ ec->ops = &ecard_default_ops;
+
+ for (i = 0; i < sizeof (blacklist) / sizeof (*blacklist); i++)
+ if (blacklist[i].manufacturer == ec->cld.manufacturer &&
+ blacklist[i].product == ec->cld.product) {
+ ec->loader = blacklist[i].loader;
+ card_desc = blacklist[i].type;
+ break;
+ }
+
+ if (card != 8) {
+ ec->irq = 32 + card;
+#if 0
+ ec->fiq = 96 + card;
+#endif
+ } else {
+ ec->irq = 11;
+ ec->fiq = -1;
+ }
+
+ if ((ec->cld.ecld & 0x78) == 0) {
+ struct in_chunk_dir incd;
+ printk ("\n %d: [%04X:%04X] ", card, ec->cld.manufacturer, ec->cld.product);
+ if (e_is (&excld)) {
+ ec->fiqmask = e_fiqmask (&excld);
+ ec->irqmask = e_irqmask (&excld);
+ ec->fiqaddr += e_fiqaddr (&excld);
+ ec->irqaddr += e_irqaddr (&excld);
+ }
+ if (!card_desc && e_cd (&excld) && ecard_readchunk (&incd, ec, 0xf5, 0))
+ card_desc = incd.d.string;
+ if (card_desc)
+ printk ("%s", card_desc);
+ else
+ printk ("*Unknown*");
+ } else
+ printk("\n %d: Simple card %d\n", card, (ec->cld.ecld >> 3) & 15);
+ return 1;
+}
+
+static struct irqaction irqexpansioncard = { ecard_irq_noexpmask, SA_INTERRUPT, 0, "expansion cards", NULL, NULL };
+
+/*
+ * Initialise the expansion card system.
+ * Locate all hardware - interrupt management and
+ * actual cards.
+ */
+unsigned long ecard_init(unsigned long start_mem)
+{
+ int i, nc = 0;
+
+ kmem = (start_mem | 3) & ~3;
+ memset (expcard, 0, sizeof (expcard));
+
+#ifdef HAS_EXPMASK
+ if (ecard_checkirqhw()) {
+ printk (KERN_DEBUG "Expansion card interrupt management hardware found\n");
+ irqexpansioncard.handler = ecard_irq_expmask;
+ have_expmask = -1;
+ }
+#endif
+ printk("Installed expansion cards:");
+
+ /*
+ * First of all, probe all cards on the expansion card interrupt line
+ */
+ for (i = 0; i < 4; i++)
+ if (ecard_probe (i, nc))
+ nc += 1;
+ else
+ have_expmask &= ~(1<<i);
+
+ ecard_numirqcards = nc;
+
+ /*
+ * Now probe other cards with different interrupt lines
+ */
+#ifdef MEMCEC8IO_BASE
+ if (ecard_probe (8, nc))
+ nc += 1;
+#endif
+ printk("\n");
+ ecard_numcards = nc;
+
+ if (nc && setup_arm_irq(IRQ_EXPANSIONCARD, &irqexpansioncard)) {
+ printk ("Could not allocate interrupt for expansion cards\n");
+ return kmem;
+ }
+
+#ifdef HAS_EXPMASK
+ if (nc && have_expmask)
+ EXPMASK_ENABLE = have_expmask;
+#endif
+ oldlatch_init ();
+ start_mem = kmem;
+ kmem = 0;
+ return start_mem;
+}
--- /dev/null
+/*
+ * linux/arch/arm/kernel/entry-armo.S
+ *
+ * Copyright (C) 1995,1996,1997,1998 Russell King.
+ *
+ * Low-level vector interface routines
+ *
+ * Design issues:
+ * - We have several modes that each vector can be called from,
+ * each with its own set of registers. On entry to any vector,
+ * we *must* save the registers used in *that* mode.
+ *
+ * - This code must be as fast as possible.
+ *
+ * There are a few restrictions on the vectors:
+ * - the SWI vector cannot be called from *any* non-user mode
+ *
+ * - the FP emulator is *never* called from *any* non-user mode undefined
+ * instruction.
+ *
+ * Ok, so this file may be a mess, but it's as efficient as possible while
+ * adhering to the above criteria.
+ */
+#include <linux/autoconf.h>
+#include <linux/linkage.h>
+
+#include <asm/assembler.h>
+#include <asm/errno.h>
+#include <asm/hardware.h>
+
+#include "../lib/constants.h"
+
+ .text
+
+@ Offsets into task structure
+@ ---------------------------
+@
+#define STATE 0
+#define COUNTER 4
+#define PRIORITY 8
+#define FLAGS 12
+#define SIGPENDING 16
+
+#define PF_TRACESYS 0x20
+
+@ Bad Abort numbers
+@ -----------------
+@
+#define BAD_PREFETCH 0
+#define BAD_DATA 1
+#define BAD_ADDREXCPTN 2
+#define BAD_IRQ 3
+#define BAD_UNDEFINSTR 4
+
+@ OS version number used in SWIs
+@ RISC OS is 0
+@ RISC iX is 8
+@
+#define OS_NUMBER 9
+
+@
+@ Stack format (ensured by USER_* and SVC_*)
+@
+#define S_OLD_R0 64
+#define S_PSR 60
+#define S_PC 60
+#define S_LR 56
+#define S_SP 52
+#define S_IP 48
+#define S_FP 44
+#define S_R10 40
+#define S_R9 36
+#define S_R8 32
+#define S_R7 28
+#define S_R6 24
+#define S_R5 20
+#define S_R4 16
+#define S_R3 12
+#define S_R2 8
+#define S_R1 4
+#define S_R0 0
+
+#ifdef IOC_BASE
+/* IOC / IOMD based hardware */
+ .equ ioc_base_high, IOC_BASE & 0xff000000
+ .equ ioc_base_low, IOC_BASE & 0x00ff0000
+ .macro disable_fiq
+ mov r12, #ioc_base_high
+ .if ioc_base_low
+ orr r12, r12, #ioc_base_low
+ .endif
+ strb r12, [r12, #0x38] @ Disable FIQ register
+ .endm
+
+ .macro get_irqnr_and_base, irqnr, base
+ mov r4, #ioc_base_high @ point at IOC
+ .if ioc_base_low
+ orr r4, r4, #ioc_base_low
+ .endif
+ ldrb \irqnr, [r4, #0x24] @ get high priority first
+ adr \base, irq_prio_h
+ teq \irqnr, #0
+ ldreqb \irqnr, [r4, #0x14] @ get low priority
+ adreq \base, irq_prio_l
+ .endm
+
+/*
+ * Interrupt table (incorporates priority)
+ */
+ .macro irq_prio_table
+irq_prio_l: .byte 0, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 4, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+irq_prio_h: .byte 0, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 12, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .endm
+#else
+#error Unknown architecture
+#endif
+
+/*=============================================================================
+ * For entry-common.S
+ */
+
+ .macro save_user_regs
+ str r0, [sp, #-4]!
+ str lr, [sp, #-4]!
+ sub sp, sp, #15*4
+ stmia sp, {r0 - lr}^
+ mov r0, r0
+ .endm
+
+ .macro restore_user_regs
+ ldmia sp, {r0 - lr}^
+ mov r0, r0
+ add sp, sp, #15*4
+ ldr lr, [sp], #8
+ movs pc, lr
+ .endm
+
+ .macro mask_pc, rd, rm
+ bic \rd, \rm, #PCMASK
+ .endm
+
+ .macro arm700_bug_check, instr, temp
+ .endm
+
+ .macro enable_irqs, temp
+ teqp pc, #0x00000003
+ .endm
+
+ .macro initialise_traps_extra
+ .endm
+
+ .macro get_current_task, rd
+ mov \rd, sp, lsr #13
+ mov \rd, \rd, lsl #13
+ .endm
+
+ /*
+ * Like adr, but force SVC mode (if required)
+ */
+ .macro adrsvc, cond, reg, label
+ adr\cond \reg, \label
+ orr\cond \reg, \reg, #3
+ .endm
+
+#if 0
+/*
+ * Uncomment these if you wish to get more debugging info about data aborts.
+ */
+#define FAULT_CODE_LDRSTRPOST 0x80
+#define FAULT_CODE_LDRSTRPRE 0x40
+#define FAULT_CODE_LDRSTRREG 0x20
+#define FAULT_CODE_LDMSTM 0x10
+#define FAULT_CODE_LDCSTC 0x08
+#endif
+#define FAULT_CODE_PREFETCH 0x04
+#define FAULT_CODE_WRITE 0x02
+#define FAULT_CODE_USER 0x01
+
+
+#define SVC_SAVE_ALL \
+ str sp, [sp, #-16]! ;\
+ str lr, [sp, #8] ;\
+ str lr, [sp, #4] ;\
+ stmfd sp!, {r0 - r12} ;\
+ mov r0, #-1 ;\
+ str r0, [sp, #S_OLD_R0] ;\
+ mov fp, #0
+
+#define SVC_IRQ_SAVE_ALL \
+ str sp, [sp, #-16]! ;\
+ str lr, [sp, #4] ;\
+ ldr lr, .LCirq ;\
+ ldr lr, [lr] ;\
+ str lr, [sp, #8] ;\
+ stmfd sp!, {r0 - r12} ;\
+ mov r0, #-1 ;\
+ str r0, [sp, #S_OLD_R0] ;\
+ mov fp, #0
+
+#define USER_RESTORE_ALL \
+ ldmia sp, {r0 - lr}^ ;\
+ mov r0, r0 ;\
+ add sp, sp, #15*4 ;\
+ ldr lr, [sp], #8 ;\
+ movs pc, lr
+
+#define SVC_RESTORE_ALL \
+ ldmfd sp, {r0 - pc}^
+
+/*=============================================================================
+ * Undefined FIQs
+ *-----------------------------------------------------------------------------
+ */
+_unexp_fiq: ldr sp, .LCfiq
+ mov r12, #IOC_BASE
+ strb r12, [r12, #0x38] @ Disable FIQ register
+ teqp pc, #0x0c000003
+ mov r0, r0
+ stmfd sp!, {r0 - r3, ip, lr}
+ adr r0, Lfiqmsg
+ bl SYMBOL_NAME(printk)
+ ldmfd sp!, {r0 - r3, ip, lr}
+ teqp pc, #0x0c000001
+ mov r0, r0
+ movs pc, lr
+
+Lfiqmsg: .ascii "*** Unexpected FIQ\n\0"
+ .align
+
+.LCfiq: .word __temp_fiq
+.LCirq: .word __temp_irq
+
+/*=============================================================================
+ * Undefined instruction handler
+ *-----------------------------------------------------------------------------
+ * Handles floating point instructions
+ */
+vector_undefinstr:
+ tst lr,#3
+ bne __und_svc
+ save_user_regs
+ mov fp, #0
+ teqp pc, #I_BIT | MODE_SVC
+.Lbug_undef:
+ adr r1, .LC2
+ ldmia r1, {r1, r4}
+ ldr r1, [r1]
+ get_current_task r2
+ teq r1, r2
+ stmnefd sp!, {ip, lr}
+ blne SYMBOL_NAME(math_state_restore)
+ ldmnefd sp!, {ip, lr}
+ ldr pc, [r4] @ Call FP module USR entry point
+
+ .globl SYMBOL_NAME(fpundefinstr)
+SYMBOL_NAME(fpundefinstr): @ Called by FP module on undefined instr
+SYMBOL_NAME(fpundefinstrsvc):
+ mov r0, lr
+ mov r1, sp
+ teqp pc, #MODE_SVC
+ bl SYMBOL_NAME(do_undefinstr)
+ b ret_from_exception @ Normal FP exit
+
+__und_svc: SVC_SAVE_ALL @ Non-user mode
+ mask_pc r0, lr
+ and r2, lr, #3
+ sub r0, r0, #4
+ mov r1, sp
+ bl SYMBOL_NAME(do_undefinstr)
+ SVC_RESTORE_ALL
+
+.LC2: .word SYMBOL_NAME(last_task_used_math)
+ .word SYMBOL_NAME(fp_enter)
+
+/*=============================================================================
+ * Prefetch abort handler
+ *-----------------------------------------------------------------------------
+ */
+
+vector_prefetch:
+ sub lr, lr, #4
+ tst lr, #3
+ bne __pabt_invalid
+ save_user_regs
+ teqp pc, #0x00000003 @ NOT a problem - doesn't change mode
+ mask_pc r0, lr @ Address of abort
+ mov r1, #FAULT_CODE_PREFETCH|FAULT_CODE_USER @ Error code
+ mov r2, sp @ Tasks registers
+ bl SYMBOL_NAME(do_PrefetchAbort)
+ teq r0, #0 @ If non-zero, we believe this abort..
+ bne ret_from_sys_call
+#ifdef DEBUG_UNDEF
+ adr r0, t
+ bl SYMBOL_NAME(printk)
+#endif
+ ldr lr, [sp,#S_PC] @ program to test this on. I think it's
+ b .Lbug_undef @ broken at the moment though!)
+
+__pabt_invalid: SVC_SAVE_ALL
+ mov r0, sp @ Prefetch aborts are definitely *not*
+ mov r1, #BAD_PREFETCH @ allowed in non-user modes. We can't
+ and r2, lr, #3 @ recover from this problem.
+ b SYMBOL_NAME(bad_mode)
+
+#ifdef DEBUG_UNDEF
+t: .ascii "*** undef ***\r\n\0"
+ .align
+#endif
+
+/*=============================================================================
+ * Address exception handler
+ *-----------------------------------------------------------------------------
+ * These aren't too critical.
+ * (they're not supposed to happen).
+ * In order to debug the reason for address exceptions in non-user modes,
+ * we have to obtain all the registers so that we can see what's going on.
+ */
+
+vector_addrexcptn:
+ sub lr, lr, #8
+ tst lr, #3
+ bne Laddrexcptn_not_user
+ save_user_regs
+ teq pc, #0x00000003
+ mask_pc r0, lr @ Point to instruction
+ mov r1, sp @ Point to registers
+ mov r2, #0x400
+ mov lr, pc
+ bl SYMBOL_NAME(do_excpt)
+ b ret_from_exception
+
+Laddrexcptn_not_user:
+ SVC_SAVE_ALL
+ and r2, lr, #3
+ teq r2, #3
+ bne Laddrexcptn_illegal_mode
+ teqp pc, #0x00000003 @ NOT a problem - doesn't change mode
+ mask_pc r0, lr
+ mov r1, sp
+ orr r2, r2, #0x400
+ bl SYMBOL_NAME(do_excpt)
+ ldmia sp, {r0 - lr} @ I can't remember the reason I changed this...
+ add sp, sp, #15*4
+ movs pc, lr
+
+Laddrexcptn_illegal_mode:
+ mov r0, sp
+ str lr, [sp, #-4]!
+ orr r1, r2, #0x0c000000
+ teqp r1, #0 @ change into mode (wont be user mode)
+ mov r0, r0
+ mov r1, r8 @ Any register from r8 - r14 can be banked
+ mov r2, r9
+ mov r3, r10
+ mov r4, r11
+ mov r5, r12
+ mov r6, r13
+ mov r7, r14
+ teqp pc, #0x04000003 @ back to svc
+ mov r0, r0
+ stmfd sp!, {r1-r7}
+ ldmia r0, {r0-r7}
+ stmfd sp!, {r0-r7}
+ mov r0, sp
+ mov r1, #BAD_ADDREXCPTN
+ b SYMBOL_NAME(bad_mode)
+
+/*=============================================================================
+ * Interrupt (IRQ) handler
+ *-----------------------------------------------------------------------------
+ * Note: if in user mode, then *no* kernel routine is running, so we don't have
+ * to save svc lr
+ * (r13 points to irq temp save area)
+ */
+
+vector_IRQ: ldr r13, .LCirq @ I'll leave this one in just in case...
+ sub lr, lr, #4
+ str lr, [r13]
+ tst lr, #3
+ bne __irq_svc
+ teqp pc, #0x08000003
+ mov r0, r0
+ ldr lr, .LCirq
+ ldr lr, [lr]
+ save_user_regs
+
+1: get_irqnr_and_base r6, r5
+ teq r6, #0
+ ldrneb r0, [r5, r6] @ get IRQ number
+ movne r1, sp
+ @
+ @ routine called with r0 = irq number, r1 = struct pt_regs *
+ @
+ adr lr, 1b
+ orr lr, lr, #3 @ Force SVC
+ bne do_IRQ
+ b ret_with_reschedule
+
+ irq_prio_table
+
+__irq_svc: teqp pc, #0x08000003
+ mov r0, r0
+ SVC_IRQ_SAVE_ALL
+ and r2, lr, #3
+ teq r2, #3
+ bne __irq_invalid
+1: get_irqnr_and_base r6, r5
+ teq r6, #0
+ ldrneb r0, [r5, r6] @ get IRQ number
+ movne r1, sp
+ @
+ @ routine called with r0 = irq number, r1 = struct pt_regs *
+ @
+ adr lr, 1b
+ orr lr, lr, #3 @ Force SVC
+ bne do_IRQ @ Returns to 1b
+ SVC_RESTORE_ALL
+
+__irq_invalid: mov r0, sp
+ mov r1, #BAD_IRQ
+ b SYMBOL_NAME(bad_mode)
+
+/*=============================================================================
+ * Data abort handler code
+ *-----------------------------------------------------------------------------
+ *
+ * This handles both exceptions from user and SVC modes, computes the address
+ * range of the problem, and does any correction that is required. It then
+ * calls the kernel data abort routine.
+ *
+ * This is where I wish that the ARM would tell you which address aborted.
+ */
+
+vector_data: sub lr, lr, #8 @ Correct lr
+ tst lr, #3
+ bne Ldata_not_user
+ save_user_regs
+ teqp pc, #0x00000003 @ NOT a problem - doesn't change mode
+ mask_pc r0, lr
+ mov r2, #FAULT_CODE_USER
+ bl Ldata_do
+ b ret_from_exception
+
+Ldata_not_user:
+ SVC_SAVE_ALL
+ and r2, lr, #3
+ teq r2, #3
+ bne Ldata_illegal_mode
+ tst lr, #0x08000000
+ teqeqp pc, #0x00000003 @ NOT a problem - doesn't change mode
+ mask_pc r0, lr
+ mov r2, #0
+ bl Ldata_do
+ SVC_RESTORE_ALL
+
+Ldata_illegal_mode:
+ mov r0, sp
+ mov r1, #BAD_DATA
+ b SYMBOL_NAME(bad_mode)
+
+Ldata_do: mov r3, sp
+ ldr r4, [r0] @ Get instruction
+ tst r4, #1 << 20 @ Check to see if it is a write instruction
+ orreq r2, r2, #FAULT_CODE_WRITE @ Indicate write instruction
+ mov r1, r4, lsr #22 @ Now branch to the relevant processing routine
+ and r1, r1, #15 << 2
+ add pc, pc, r1
+ movs pc, lr
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_ldrstr_post @ ldr rd, [rn], #m
+ b Ldata_ldrstr_numindex @ ldr rd, [rn, #m] @ RegVal
+ b Ldata_ldrstr_post @ ldr rd, [rn], rm
+ b Ldata_ldrstr_regindex @ ldr rd, [rn, rm]
+ b Ldata_ldmstm @ ldm*a rn, <rlist>
+ b Ldata_ldmstm @ ldm*b rn, <rlist>
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_ldrstr_post @ ldc rd, [rn], #m @ Same as ldr rd, [rn], #m
+ b Ldata_ldcstc_pre @ ldc rd, [rn, #m]
+ b Ldata_unknown
+Ldata_unknown: @ Part of jumptable
+ ldr r3, [sp, #15 * 4]
+ str r3, [sp, #-4]!
+ mov r1, r1, lsr #2
+ mov r2, r0
+ mov r3, r4
+ adr r0, Ltt
+ bl SYMBOL_NAME(printk)
+Llpxx: b Llpxx
+
+Ltt: .ascii "Unknown data abort code %d [pc=%p, *pc=%p]\nLR=%p\0"
+ .align
+
+Ldata_ldrstr_post:
+ mov r0, r4, lsr #14 @ Get Rn
+ and r0, r0, #15 << 2 @ Mask out reg.
+ teq r0, #15 << 2
+ ldr r0, [r3, r0] @ Get register
+ biceq r0, r0, #PCMASK
+ mov r1, r0
+#ifdef FAULT_CODE_LDRSTRPOST
+ orr r2, r2, #FAULT_CODE_LDRSTRPOST
+#endif
+ b SYMBOL_NAME(do_DataAbort)
+
+Ldata_ldrstr_numindex:
+ mov r0, r4, lsr #14 @ Get Rn
+ and r0, r0, #15 << 2 @ Mask out reg.
+ teq r0, #15 << 2
+ ldr r0, [r3, r0] @ Get register
+ biceq r0, r0, #PCMASK
+ mov r1, r4, lsl #20
+ tst r4, #1 << 23
+ addne r0, r0, r1, lsr #20
+ subeq r0, r0, r1, lsr #20
+ mov r1, r0
+#ifdef FAULT_CODE_LDRSTRPRE
+ orr r2, r2, #FAULT_CODE_LDRSTRPRE
+#endif
+ b SYMBOL_NAME(do_DataAbort)
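In C terms, `Ldata_ldrstr_numindex` recovers the faulting address of an immediate pre-indexed LDR/STR: the `lsl #20`/`lsr #20` pair isolates the 12-bit offset in bits 0-11, and bit 23 (the U bit) chooses add or subtract. A sketch of just that arithmetic (the PC-masking special case is left out):

```c
#include <assert.h>

/* Recover the address computed by an immediate pre-indexed LDR/STR,
 * given the instruction word and the value of its base register Rn.
 * Bits 0-11 hold the offset; bit 23 (U) selects add vs subtract. */
static unsigned int abort_addr_numindex(unsigned int instr, unsigned int rn)
{
    unsigned int offset = instr & 0xfff;   /* == (instr << 20) >> 20 */

    return (instr & (1u << 23)) ? rn + offset : rn - offset;
}
```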
+
+Ldata_ldrstr_regindex:
+ mov r0, r4, lsr #14 @ Get Rn
+ and r0, r0, #15 << 2 @ Mask out reg.
+ teq r0, #15 << 2
+ ldr r0, [r3, r0] @ Get register
+ biceq r0, r0, #PCMASK
+ and r7, r4, #15
+ teq r7, #15 @ Check for PC
+ ldr r7, [r3, r7, lsl #2] @ Get Rm
+ biceq r7, r7, #PCMASK
+ and r8, r4, #0x60 @ Get shift types
+ mov r9, r4, lsr #7 @ Get shift amount
+ and r9, r9, #31
+ teq r8, #0
+ moveq r7, r7, lsl r9
+ teq r8, #0x20 @ LSR shift
+ moveq r7, r7, lsr r9
+ teq r8, #0x40 @ ASR shift
+ moveq r7, r7, asr r9
+ teq r8, #0x60 @ ROR shift
+ moveq r7, r7, ror r9
+ tst r4, #1 << 23
+ addne r0, r0, r7
+ subeq r0, r0, r7 @ Apply correction
+ mov r1, r0
+#ifdef FAULT_CODE_LDRSTRREG
+ orr r2, r2, #FAULT_CODE_LDRSTRREG
+#endif
+ b SYMBOL_NAME(do_DataAbort)
+
+Ldata_ldmstm:
+ mov r7, #0x11
+ orr r7, r7, r7, lsl #8
+ and r0, r4, r7
+ and r1, r4, r7, lsl #1
+ add r0, r0, r1, lsr #1
+ and r1, r4, r7, lsl #2
+ add r0, r0, r1, lsr #2
+ and r1, r4, r7, lsl #3
+ add r0, r0, r1, lsr #3
+ add r0, r0, r0, lsr #8
+ add r0, r0, r0, lsr #4
+ and r7, r0, #15 @ r7 = no. of registers to transfer.
+ mov r5, r4, lsr #14 @ Get Rn
+ and r5, r5, #15 << 2
+ ldr r0, [r3, r5] @ Get reg
+ eor r6, r4, r4, lsl #2
+ tst r6, #1 << 23 @ Check inc/dec ^ writeback
+ rsbeq r7, r7, #0
+ add r7, r0, r7, lsl #2 @ Do correction (signed)
+ subne r1, r7, #1
+ subeq r1, r0, #1
+ moveq r0, r7
+ tst r4, #1 << 21 @ Check writeback
+ strne r7, [r3, r5]
+ eor r6, r4, r4, lsl #1
+ tst r6, #1 << 24 @ Check Pre/Post ^ inc/dec
+ addeq r0, r0, #4
+ addeq r1, r1, #4
+ teq r5, #15*4 @ CHECK FOR PC
+ biceq r1, r1, #PCMASK
+ biceq r0, r0, #PCMASK
+#ifdef FAULT_CODE_LDMSTM
+ orr r2, r2, #FAULT_CODE_LDMSTM
+#endif
+ b SYMBOL_NAME(do_DataAbort)
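The opening of `Ldata_ldmstm` is a SWAR population count: `r7 = 0x1111` masks one bit per nibble, the four masked adds build per-nibble bit counts, and the two shifted adds fold those counts into the low nibble, yielding the number of registers in the LDM/STM list. The same trick in C (note that, as in the assembly, a full 16-register list wraps to 0 after the final `& 15`):

```c
#include <assert.h>

/* SWAR bit count over the low 16 bits of an LDM/STM instruction word,
 * following the register-counting prologue of Ldata_ldmstm step for step. */
static unsigned int ldmstm_reg_count(unsigned int instr)
{
    unsigned int mask = 0x1111;            /* one bit per nibble */
    unsigned int n;

    n  = instr & mask;
    n += (instr & (mask << 1)) >> 1;
    n += (instr & (mask << 2)) >> 2;
    n += (instr & (mask << 3)) >> 3;       /* n now holds per-nibble counts */
    n += n >> 8;                           /* fold upper byte down */
    n += n >> 4;                           /* fold upper nibble down */
    return n & 15;
}
```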
+
+Ldata_ldcstc_pre:
+ mov r0, r4, lsr #14 @ Get Rn
+ and r0, r0, #15 << 2 @ Mask out reg.
+ teq r0, #15 << 2
+ ldr r0, [r3, r0] @ Get register
+ biceq r0, r0, #PCMASK
+ mov r1, r4, lsl #24 @ Get offset
+ tst r4, #1 << 23
+ addne r0, r0, r1, lsr #24
+ subeq r0, r0, r1, lsr #24
+ mov r1, r0
+#ifdef FAULT_CODE_LDCSTC
+ orr r2, r2, #FAULT_CODE_LDCSTC
+#endif
+ b SYMBOL_NAME(do_DataAbort)
+
+#include "entry-common.S"
+
+ .data
+
+__temp_irq: .word 0 @ saved lr_irq
+__temp_fiq: .space 128
--- /dev/null
+/*
+ * linux/arch/arm/kernel/entry-armv.S
+ *
+ * Copyright (C) 1996,1997,1998 Russell King.
+ * ARM700 fix by Matthew Godbolt (linux-user@willothewisp.demon.co.uk)
+ *
+ * Low-level vector interface routines
+ *
+ * Note: there is a StrongARM bug in the STMIA rn, {regs}^ instruction that causes
+ * it to save wrong values... Be aware!
+ */
+#include <linux/autoconf.h>
+#include <linux/linkage.h>
+
+#include <asm/assembler.h>
+#include <asm/errno.h>
+#include <asm/hardware.h>
+
+#include "../lib/constants.h"
+
+ .text
+
+@ Offsets into task structure
+@ ---------------------------
+@
+#define STATE 0
+#define COUNTER 4
+#define PRIORITY 8
+#define FLAGS 12
+#define SIGPENDING 16
+
+#define PF_TRACESYS 0x20
+
+@ Bad Abort numbers
+@ -----------------
+@
+#define BAD_PREFETCH 0
+#define BAD_DATA 1
+#define BAD_ADDREXCPTN 2
+#define BAD_IRQ 3
+#define BAD_UNDEFINSTR 4
+
+@ OS version number used in SWIs
+@ RISC OS is 0
+@ RISC iX is 8
+@
+#define OS_NUMBER 9
+
+@
+@ Stack format (ensured by USER_* and SVC_*)
+@
+#define S_FRAME_SIZE 72
+#define S_OLD_R0 68
+#define S_PSR 64
+#define S_PC 60
+#define S_LR 56
+#define S_SP 52
+#define S_IP 48
+#define S_FP 44
+#define S_R10 40
+#define S_R9 36
+#define S_R8 32
+#define S_R7 28
+#define S_R6 24
+#define S_R5 20
+#define S_R4 16
+#define S_R3 12
+#define S_R2 8
+#define S_R1 4
+#define S_R0 0
+
+#ifdef IOC_BASE
+/* IOC / IOMD based hardware */
+ .equ ioc_base_high, IOC_BASE & 0xff000000
+ .equ ioc_base_low, IOC_BASE & 0x00ff0000
+ .macro disable_fiq
+ mov r12, #ioc_base_high
+ .if ioc_base_low
+ orr r12, r12, #ioc_base_low
+ .endif
+ strb r12, [r12, #0x38] @ Disable FIQ register
+ .endm
+
+ .macro get_irqnr_and_base, irqnr, base
+ mov r4, #ioc_base_high @ point at IOC
+ .if ioc_base_low
+ orr r4, r4, #ioc_base_low
+ .endif
+ ldrb \irqnr, [r4, #0x24] @ get high priority first
+ adr \base, irq_prio_h
+ teq \irqnr, #0
+#ifdef IOMD_BASE
+ ldreqb \irqnr, [r4, #0x1f4] @ get dma
+ adreq \base, irq_prio_d
+ teqeq \irqnr, #0
+#endif
+ ldreqb \irqnr, [r4, #0x14] @ get low priority
+ adreq \base, irq_prio_l
+ .endm
+
+/*
+ * Interrupt table (incorporates priority)
+ */
+ .macro irq_prio_table
+irq_prio_l: .byte 0, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 4, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+#ifdef IOMD_BASE
+irq_prio_d: .byte 0,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 20,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 23,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 23,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+#endif
+irq_prio_h: .byte 0, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 12, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .endm
+
+#elif defined(CONFIG_ARCH_EBSA110)
+
+ .macro disable_fiq
+ .endm
+
+ .macro get_irqnr_and_base, irqnr, base
+ mov r4, #0xf3000000
+ ldrb \irqnr, [r4] @ get interrupts
+ adr \base, irq_prio_ebsa110
+ .endm
+
+ .macro irq_prio_table
+irq_prio_ebsa110:
+ .byte 0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 1, 1, 2, 2, 2, 2
+ .byte 4, 4, 1, 1, 2, 2, 2, 2, 3, 3, 1, 1, 2, 2, 2, 2
+ .byte 5, 5, 1, 1, 2, 2, 2, 2, 3, 3, 1, 1, 2, 2, 2, 2
+ .byte 5, 5, 1, 1, 2, 2, 2, 2, 3, 3, 1, 1, 2, 2, 2, 2
+
+ .byte 6, 6, 6, 6, 2, 2, 2, 2, 3, 3, 6, 6, 2, 2, 2, 2
+ .byte 6, 6, 6, 6, 2, 2, 2, 2, 3, 3, 6, 6, 2, 2, 2, 2
+ .byte 6, 6, 6, 6, 2, 2, 2, 2, 3, 3, 6, 6, 2, 2, 2, 2
+ .byte 6, 6, 6, 6, 2, 2, 2, 2, 3, 3, 6, 6, 2, 2, 2, 2
+
+ .byte 7, 0, 1, 1, 2, 2, 2, 2, 3, 3, 1, 1, 2, 2, 2, 2
+ .byte 4, 4, 1, 1, 2, 2, 2, 2, 3, 3, 1, 1, 2, 2, 2, 2
+ .byte 5, 5, 1, 1, 2, 2, 2, 2, 3, 3, 1, 1, 2, 2, 2, 2
+ .byte 5, 5, 1, 1, 2, 2, 2, 2, 3, 3, 1, 1, 2, 2, 2, 2
+
+ .byte 6, 6, 6, 6, 2, 2, 2, 2, 3, 3, 6, 6, 2, 2, 2, 2
+ .byte 6, 6, 6, 6, 2, 2, 2, 2, 3, 3, 6, 6, 2, 2, 2, 2
+ .byte 6, 6, 6, 6, 2, 2, 2, 2, 3, 3, 6, 6, 2, 2, 2, 2
+ .byte 6, 6, 6, 6, 2, 2, 2, 2, 3, 3, 6, 6, 2, 2, 2, 2
+ .endm
+
+#else
+#error Unknown architecture
+#endif
+
+/*============================================================================
+ * For entry-common.S
+ */
+
+ .macro save_user_regs
+ sub sp, sp, #S_FRAME_SIZE
+ stmia sp, {r0 - r12} @ Calling r0 - r12
+ add r8, sp, #S_PC
+ stmdb r8, {sp, lr}^ @ Calling sp, lr
+ mov r7, r0
+ mrs r6, spsr
+ mov r5, lr
+ stmia r8, {r5, r6, r7} @ Save calling PC, CPSR, OLD_R0
+ .endm
+
+ .macro restore_user_regs
+ mrs r0, cpsr @ disable IRQs
+ orr r0, r0, #I_BIT
+ msr cpsr, r0
+ ldr r0, [sp, #S_PSR] @ Get calling cpsr
+ msr spsr, r0 @ save in spsr_svc
+ ldmia sp, {r0 - lr}^ @ Get calling r0 - lr
+ mov r0, r0
+ add sp, sp, #S_PC
+ ldr lr, [sp], #S_FRAME_SIZE - S_PC @ Get PC and jump over PC, PSR, OLD_R0
+ movs pc, lr @ return & move spsr_svc into cpsr
+ .endm
+
+ .macro mask_pc, rd, rm
+ .endm
+
+ .macro arm700_bug_check, instr, temp
+ and \temp, \instr, #0x0f000000 @ check for SWI
+ teq \temp, #0x0f000000
+ bne .Larm700bug
+ .endm
+
+ .macro enable_irqs, temp
+ mrs \temp, cpsr
+ bic \temp, \temp, #I_BIT
+ msr cpsr, \temp
+ .endm
+
+ .macro initialise_traps_extra
+ mrs r0, cpsr
+ bic r0, r0, #31
+ orr r0, r0, #0xd3
+ msr cpsr, r0
+ .endm
+
+
+.Larm700bug: str lr, [r8]
+ ldr r0, [sp, #S_PSR] @ Get calling cpsr
+ msr spsr, r0
+ ldmia sp, {r0 - lr}^ @ Get calling r0 - lr
+ mov r0, r0
+ add sp, sp, #S_PC
+ ldr lr, [sp], #S_FRAME_SIZE - S_PC @ Get PC and jump over PC, PSR, OLD_R0
+ movs pc, lr
+
+
+ .macro get_current_task, rd
+ mov \rd, sp, lsr #13
+ mov \rd, \rd, lsl #13
+ .endm
+
+ /*
+ * Like adr, but force SVC mode (if required)
+ */
+ .macro adrsvc, cond, reg, label
+ adr\cond \reg, \label
+ .endm
+
+/*=============================================================================
+ * Undefined FIQs
+ *-----------------------------------------------------------------------------
+ * Enter in FIQ mode, spsr = ANY CPSR, lr = ANY PC
+ * MUST PRESERVE SVC SPSR, but need to switch to SVC mode to show our msg.
+ * Basically to switch modes, we *HAVE* to clobber one register... brain
+ * damage alert! I don't think that we can execute any code in here in any
+ * other mode than FIQ... Ok you can switch to another mode, but you can't
+ * get out of that mode without clobbering one register.
+ */
+_unexp_fiq: disable_fiq
+ subs pc, lr, #4
+
+/*=============================================================================
+ * Interrupt entry dispatcher
+ *-----------------------------------------------------------------------------
+ * Enter in IRQ mode, spsr = SVC/USR CPSR, lr = SVC/USR PC
+ */
+vector_IRQ: @
+ @ save mode specific registers
+ @
+ ldr r13, .LCirq
+ sub lr, lr, #4
+ str lr, [r13] @ save lr_IRQ
+ mrs lr, spsr
+ str lr, [r13, #4] @ save spsr_IRQ
+ @
+ @ now branch to the relevant MODE handling routine
+ @
+ mrs sp, cpsr @ switch to SVC mode
+ bic sp, sp, #31
+ orr sp, sp, #0x13
+ msr spsr, sp
+ and lr, lr, #15
+ cmp lr, #4
+ addlts pc, pc, lr, lsl #2 @ Changes mode and branches
+ b __irq_invalid @ 4 - 15
+ b __irq_usr @ 0 (USR_26 / USR_32)
+ b __irq_invalid @ 1 (FIQ_26 / FIQ_32)
+ b __irq_invalid @ 2 (IRQ_26 / IRQ_32)
+ b __irq_svc @ 3 (SVC_26 / SVC_32)
+/*
+ *------------------------------------------------------------------------------------------------
+ * Undef instr entry dispatcher - dispatches it to the correct handler for the processor mode
+ *------------------------------------------------------------------------------------------------
+ * Enter in UND mode, spsr = SVC/USR CPSR, lr = SVC/USR PC
+ */
+.LCirq: .word __temp_irq
+.LCund: .word __temp_und
+.LCabt: .word __temp_abt
+
+vector_undefinstr:
+ @
+ @ save mode specific registers
+ @
+ ldr r13, [pc, #.LCund - . - 8]
+ str lr, [r13]
+ mrs lr, spsr
+ str lr, [r13, #4]
+ @
+ @ now branch to the relevant MODE handling routine
+ @
+ mrs sp, cpsr
+ bic sp, sp, #31
+ orr sp, sp, #0x13
+ msr spsr, sp
+ and lr, lr, #15
+ cmp lr, #4
+ addlts pc, pc, lr, lsl #2 @ Changes mode and branches
+ b __und_invalid @ 4 - 15
+ b __und_usr @ 0 (USR_26 / USR_32)
+ b __und_invalid @ 1 (FIQ_26 / FIQ_32)
+ b __und_invalid @ 2 (IRQ_26 / IRQ_32)
+ b __und_svc @ 3 (SVC_26 / SVC_32)
+/*
+ *------------------------------------------------------------------------------------------------
+ * Prefetch abort dispatcher - dispatches it to the correct handler for the processor mode
+ *------------------------------------------------------------------------------------------------
+ * Enter in ABT mode, spsr = USR CPSR, lr = USR PC
+ */
+vector_prefetch:
+ @
+ @ save mode specific registers
+ @
+ sub lr, lr, #4
+ ldr r13, .LCabt
+ str lr, [r13]
+ mrs lr, spsr
+ str lr, [r13, #4]
+ @
+ @ now branch to the relevant MODE handling routine
+ @
+ mrs sp, cpsr
+ bic sp, sp, #31
+ orr sp, sp, #0x13
+ msr spsr, sp
+ and lr, lr, #15
+ cmp lr, #4
+ addlts pc, pc, lr, lsl #2 @ Changes mode and branches
+ b __pabt_invalid @ 4 - 15
+ b __pabt_usr @ 0 (USR_26 / USR_32)
+ b __pabt_invalid @ 1 (FIQ_26 / FIQ_32)
+ b __pabt_invalid @ 2 (IRQ_26 / IRQ_32)
+ b __pabt_invalid @ 3 (SVC_26 / SVC_32)
+/*
+ *------------------------------------------------------------------------------------------------
+ * Data abort dispatcher - dispatches it to the correct handler for the processor mode
+ *------------------------------------------------------------------------------------------------
+ * Enter in ABT mode, spsr = USR CPSR, lr = USR PC
+ */
+vector_data: @
+ @ save mode specific registers
+ @
+ sub lr, lr, #8
+ ldr r13, .LCabt
+ str lr, [r13]
+ mrs lr, spsr
+ str lr, [r13, #4]
+ @
+ @ now branch to the relevant MODE handling routine
+ @
+ mrs sp, cpsr
+ bic sp, sp, #31
+ orr sp, sp, #0x13
+ msr spsr, sp
+ and lr, lr, #15
+ cmp lr, #4
+ addlts pc, pc, lr, lsl #2 @ Changes mode & branches
+ b __dabt_invalid @ 4 - 15
+ b __dabt_usr @ 0 (USR_26 / USR_32)
+ b __dabt_invalid @ 1 (FIQ_26 / FIQ_32)
+ b __dabt_invalid @ 2 (IRQ_26 / IRQ_32)
+ b __dabt_svc @ 3 (SVC_26 / SVC_32)
+
+/*=============================================================================
+ * Undefined instruction handler
+ *-----------------------------------------------------------------------------
+ * Handles floating point instructions
+ */
+__und_usr: sub sp, sp, #S_FRAME_SIZE @ Allocate frame size in one go
+ stmia sp, {r0 - r12} @ Save r0 - r12
+ add r8, sp, #S_PC
+ stmdb r8, {sp, lr}^ @ Save sp_usr, lr_usr
+ ldr r4, .LCund
+ ldmia r4, {r5 - r7}
+ stmia r8, {r5 - r7} @ Save USR pc, cpsr, old_r0
+ mov fp, #0
+
+ adr r1, .LC2
+ ldmia r1, {r1, r4}
+ ldr r1, [r1]
+ get_current_task r2
+ teq r1, r2
+ blne SYMBOL_NAME(math_state_restore)
+ adrsvc al, r9, SYMBOL_NAME(fpreturn)
+ adrsvc al, lr, SYMBOL_NAME(fpundefinstr)
+ ldr pc, [r4] @ Call FP module USR entry point
+
+ .globl SYMBOL_NAME(fpundefinstr)
+SYMBOL_NAME(fpundefinstr): @ Called by FP module on undefined instr
+ mov r0, lr
+ mov r1, sp
+ mrs r4, cpsr @ Enable interrupts
+ bic r4, r4, #I_BIT
+ msr cpsr, r4
+ bl SYMBOL_NAME(do_undefinstr)
+ b ret_from_exception @ Normal FP exit
+
+__und_svc: sub sp, sp, #S_FRAME_SIZE
+ stmia sp, {r0 - r12} @ save r0 - r12
+ mov r6, lr
+ ldr r7, .LCund
+ ldmia r7, {r7 - r9}
+ add r5, sp, #S_FRAME_SIZE
+ add r4, sp, #S_SP
+ stmia r4, {r5 - r9} @ save sp_SVC, lr_SVC, pc, cpsr, old_r0
+
+ adr r1, .LC2
+ ldmia r1, {r1, r4}
+ ldr r1, [r1]
+ mov r2, sp, lsr #13
+ mov r2, r2, lsl #13
+ teq r1, r2
+ blne SYMBOL_NAME(math_state_restore)
+ adrsvc al, r9, SYMBOL_NAME(fpreturnsvc)
+ adrsvc al, lr, SYMBOL_NAME(fpundefinstrsvc)
+ ldr pc, [r4] @ Call FP module SVC entry point
+
+ .globl SYMBOL_NAME(fpundefinstrsvc)
+SYMBOL_NAME(fpundefinstrsvc):
+ mov r0, r5 @ unsigned long pc
+ mov r1, sp @ struct pt_regs *regs
+ bl SYMBOL_NAME(do_undefinstr)
+
+ .globl SYMBOL_NAME(fpreturnsvc)
+SYMBOL_NAME(fpreturnsvc):
+ ldr lr, [sp, #S_PSR] @ Get SVC cpsr
+ msr spsr, lr
+ ldmia sp, {r0 - pc}^ @ Restore SVC registers
+
+.LC2: .word SYMBOL_NAME(last_task_used_math)
+ .word SYMBOL_NAME(fp_enter)
+
+__und_invalid: sub sp, sp, #S_FRAME_SIZE
+ stmia sp, {r0 - lr}
+ mov r7, r0
+ ldr r4, .LCund
+ ldmia r4, {r5, r6} @ Get UND/IRQ/FIQ/ABT pc, cpsr
+ add r4, sp, #S_PC
+ stmia r4, {r5, r6, r7} @ Save UND/IRQ/FIQ/ABT pc, cpsr, old_r0
+ mov r0, sp @ struct pt_regs *regs
+ mov r1, #BAD_UNDEFINSTR @ int reason
+ and r2, r6, #31 @ int mode
+ b SYMBOL_NAME(bad_mode) @ Does not ever return...
+/*=============================================================================
+ * Prefetch abort handler
+ *-----------------------------------------------------------------------------
+ */
+pabtmsg: .ascii "Pabt: %08lX\n\0"
+ .align
+__pabt_usr: sub sp, sp, #S_FRAME_SIZE @ Allocate frame size in one go
+ stmia sp, {r0 - r12} @ Save r0 - r12
+ add r8, sp, #S_PC
+ stmdb r8, {sp, lr}^ @ Save sp_usr lr_usr
+ ldr r4, .LCabt
+ ldmia r4, {r5 - r7} @ Get USR pc, cpsr
+ stmia r8, {r5 - r7} @ Save USR pc, cpsr, old_r0
+
+ mrs r7, cpsr @ Enable interrupts if they were
+ bic r7, r7, #I_BIT @ previously
+ msr cpsr, r7
+ mov r0, r5 @ address (pc)
+ mov r1, sp @ regs
+ bl SYMBOL_NAME(do_PrefetchAbort) @ call abort handler
+ teq r0, #0 @ Does this still apply???
+ bne ret_from_exception @ Return from exception
+#ifdef DEBUG_UNDEF
+ adr r0, t
+ bl SYMBOL_NAME(printk)
+#endif
+ mov r0, r5
+ mov r1, sp
+ and r2, r6, #31
+ bl SYMBOL_NAME(do_undefinstr)
+ ldr lr, [sp, #S_PSR] @ Get USR cpsr
+ msr spsr, lr
+ ldmia sp, {r0 - pc}^ @ Restore USR registers
+
+__pabt_invalid: sub sp, sp, #S_FRAME_SIZE @ Allocate frame size in one go
+ stmia sp, {r0 - lr} @ Save XXX r0 - lr
+ mov r7, r0 @ OLD R0
+ ldr r4, .LCabt
+ ldmia r4, {r5 - r7} @ Get XXX pc, cpsr
+ add r4, sp, #S_PC
+ stmia r4, {r5 - r7} @ Save XXX pc, cpsr, old_r0
+ mov r0, sp @ Prefetch aborts are definitely *not*
+ mov r1, #BAD_PREFETCH @ allowed in non-user modes. We can't
+ and r2, r6, #31 @ recover from this problem.
+ b SYMBOL_NAME(bad_mode)
+
+#ifdef DEBUG_UNDEF
+t: .ascii "*** undef ***\r\n\0"
+ .align
+#endif
+
+/*=============================================================================
+ * Address exception handler
+ *-----------------------------------------------------------------------------
+ * These aren't too critical.
+ * (they're not supposed to happen, and won't happen in 32-bit mode).
+ */
+
+vector_addrexcptn:
+ b vector_addrexcptn
+
+/*=============================================================================
+ * Interrupt (IRQ) handler
+ *-----------------------------------------------------------------------------
+ */
+__irq_usr: sub sp, sp, #S_FRAME_SIZE
+ stmia sp, {r0 - r12} @ save r0 - r12
+ add r8, sp, #S_PC
+ stmdb r8, {sp, lr}^
+ ldr r4, .LCirq
+ ldmia r4, {r5 - r7} @ get saved PC, SPSR
+ stmia r8, {r5 - r7} @ save pc, psr, old_r0
+1: get_irqnr_and_base r6, r5
+ teq r6, #0
+ ldrneb r0, [r5, r6] @ get IRQ number
+ movne r1, sp
+ @
+ @ routine called with r0 = irq number, r1 = struct pt_regs *
+ @
+ adrsvc ne, lr, 1b
+ bne do_IRQ
+ b ret_with_reschedule
+
+ irq_prio_table
+
+__irq_svc: sub sp, sp, #S_FRAME_SIZE
+ stmia sp, {r0 - r12} @ save r0 - r12
+ mov r6, lr
+ ldr r7, .LCirq
+ ldmia r7, {r7 - r9}
+ add r5, sp, #S_FRAME_SIZE
+ add r4, sp, #S_SP
+ stmia r4, {r5, r6, r7, r8, r9} @ save sp_SVC, lr_SVC, pc, cpsr, old_r0
+1: get_irqnr_and_base r6, r5
+ teq r6, #0
+ ldrneb r0, [r5, r6] @ get IRQ number
+ movne r1, sp
+ @
+ @ routine called with r0 = irq number, r1 = struct pt_regs *
+ @
+ adrsvc ne, lr, 1b
+ bne do_IRQ
+ ldr r0, [sp, #S_PSR]
+ msr spsr, r0
+ ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
+
+__irq_invalid: sub sp, sp, #S_FRAME_SIZE @ Allocate space on stack for frame
+ stmfd sp, {r0 - lr} @ Save r0 - lr
+ mov r7, #-1
+ ldr r4, .LCirq
+ ldmia r4, {r5, r6} @ get saved pc, psr
+ add r4, sp, #S_PC
+ stmia r4, {r5, r6, r7}
+ mov fp, #0
+ mov r0, sp
+ mov r1, #BAD_IRQ
+ b SYMBOL_NAME(bad_mode)
+
+/*=============================================================================
+ * Data abort handler code
+ *-----------------------------------------------------------------------------
+ */
+.LCprocfns: .word SYMBOL_NAME(processor)
+
+__dabt_usr: sub sp, sp, #S_FRAME_SIZE @ Allocate frame size in one go
+ stmia sp, {r0 - r12} @ save r0 - r12
+ add r3, sp, #S_PC
+ stmdb r3, {sp, lr}^
+ ldr r0, .LCabt
+ ldmia r0, {r0 - r2} @ Get USR pc, cpsr
+ stmia r3, {r0 - r2} @ Save USR pc, cpsr, old_r0
+ mov fp, #0
+ mrs r2, cpsr @ Enable interrupts if they were
+ bic r2, r2, #I_BIT @ previously
+ msr cpsr, r2
+ ldr r2, .LCprocfns
+ mov lr, pc
+ ldr pc, [r2, #8] @ call processor specific code
+ mov r3, sp
+ bl SYMBOL_NAME(do_DataAbort)
+ b ret_from_sys_call
+
+__dabt_svc: sub sp, sp, #S_FRAME_SIZE
+ stmia sp, {r0 - r12} @ save r0 - r12
+ ldr r2, .LCabt
+ add r0, sp, #S_FRAME_SIZE
+ add r5, sp, #S_SP
+ mov r1, lr
+ ldmia r2, {r2 - r4} @ get pc, cpsr
+ stmia r5, {r0 - r4} @ save sp_SVC, lr_SVC, pc, cpsr, old_r0
+ tst r3, #I_BIT
+ mrseq r0, cpsr @ Enable interrupts if they were
+ biceq r0, r0, #I_BIT @ previously
+ msreq cpsr, r0
+ mov r0, r2
+ ldr r2, .LCprocfns
+ mov lr, pc
+ ldr pc, [r2, #8] @ call processor specific code
+ mov r3, sp
+ bl SYMBOL_NAME(do_DataAbort)
+ ldr r0, [sp, #S_PSR]
+ msr spsr, r0
+ ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
+
+__dabt_invalid: sub sp, sp, #S_FRAME_SIZE
+ stmia sp, {r0 - lr} @ Save SVC r0 - lr [lr *should* be intact]
+ mov r7, r0
+ ldr r4, .LCabt
+ ldmia r4, {r5, r6} @ Get SVC pc, cpsr
+ add r4, sp, #S_PC
+ stmia r4, {r5, r6, r7} @ Save SVC pc, cpsr, old_r0
+ mov r0, sp
+ mov r1, #BAD_DATA
+ and r2, r6, #31
+ b SYMBOL_NAME(bad_mode)
+
+
+#include "entry-common.S"
+
+ .data
+
+__temp_irq: .word 0 @ saved lr_irq
+ .word 0 @ saved spsr_irq
+ .word -1 @ old_r0
+__temp_und: .word 0 @ Saved lr_und
+ .word 0 @ Saved spsr_und
+ .word -1 @ old_r0
+__temp_abt: .word 0 @ Saved lr_abt
+ .word 0 @ Saved spsr_abt
+ .word -1 @ old_r0
--- /dev/null
+/*
+ *=============================================================================
+ * Low-level interface code
+ *-----------------------------------------------------------------------------
+ * Trap initialisation
+ *-----------------------------------------------------------------------------
+ *
+ * Note - FIQ code has changed. The default is a couple of words in 0x1c, 0x20
+ * that call _unexp_fiq. However, we now copy the FIQ routine to 0x1c (removes
+ * some excess cycles).
+ *
+ * What we need to put into 0-0x1c are ldrs to branch to 0xC0000000
+ * (the kernel).
+ * 0x1c onwards is reserved for FIQ, so I think that I will allocate 0xe0
+ * onwards for the actual address to jump to.
+ */
+/*
+ * these go into 0x00
+ */
+.Lbranches: swi SYS_ERROR0
+ ldr pc, .Lbranches + 0xe4
+ ldr pc, .Lbranches + 0xe8
+ ldr pc, .Lbranches + 0xec
+ ldr pc, .Lbranches + 0xf0
+ ldr pc, .Lbranches + 0xf4
+ ldr pc, .Lbranches + 0xf8
+ ldr pc, .Lbranches + 0xfc
+/*
+ * this is put into 0xe4 and above
+ */
+.Ljump_addresses:
+ .word vector_undefinstr @ 0xe4
+ .word vector_swi @ 0xe8
+ .word vector_prefetch @ 0xec
+ .word vector_data @ 0xf0
+ .word vector_addrexcptn @ 0xf4
+ .word vector_IRQ @ 0xf8
+ .word _unexp_fiq @ 0xfc
+/*
+ * initialise the trap system
+ */
+ENTRY(trap_init)
+ stmfd sp!, {r4 - r7, lr}
+ initialise_traps_extra
+ mov r0, #0xe4
+ adr r1, .Ljump_addresses
+ ldmia r1, {r1 - r6}
+ stmia r0, {r1 - r6}
+ mov r0, #0
+ adr r1, .Lbranches
+ ldmia r1, {r1 - r7}
+ stmia r0, {r1 - r7}
+ LOADREGS(fd, sp!, {r4 - r7, pc})
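trap_init above copies the branch stubs to address 0 and the handler addresses to 0xe4 with ldmia/stmia pairs. Modelled on a host with the vector page as a plain array (a sketch of the layout, not kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Host-side model of trap_init: the "ldr pc, ..." branch stubs go to
 * offset 0x00, the words they load the handler addresses from go to
 * 0xe4.  The real code does this with ldmia/stmia instead of memcpy. */
static uint32_t vector_page[0x100 / 4];

static void model_trap_init(const uint32_t *stubs, size_t nstubs,
                            const uint32_t *handlers, size_t nhandlers)
{
    memcpy(&vector_page[0x00 / 4], stubs, nstubs * sizeof(uint32_t));
    memcpy(&vector_page[0xe4 / 4], handlers, nhandlers * sizeof(uint32_t));
}
```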
+
+/*=============================================================================
+ * SWI handler
+ *-----------------------------------------------------------------------------
+ *
+ * We now handle sys-call tracing, and the errno in the task structure.
+ * Still have a problem with >4 arguments for functions. There are only
+ * a couple of functions in the code that take 5 arguments, so I'm not
+ * too worried.
+ */
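The number decode at the top of vector_swi (mask off the SWI op-code, XOR out the OS number, range-check against NR_SYSCALLS) can be sketched in C. The constant values here are assumptions for illustration; the real ones come from the kernel headers.

```c
#include <assert.h>
#include <stdint.h>

#define OS_NUMBER   0x9u    /* assumed value; Linux/ARM SWI base */
#define NR_SYSCALLS 256u    /* assumed table size */

/* Returns 1 and stores the syscall index if the SWI is an ordinary
 * Unix call; returns 0 for the private/foreign-OS paths (labels 2-4). */
static int decode_swi(uint32_t insn, uint32_t *nr)
{
    uint32_t n = insn & 0x00ffffffu;  /* bic: mask off SWI op-code   */
    n ^= OS_NUMBER << 20;             /* eor: check/remove OS number */
    *nr = n;
    return n < NR_SYSCALLS;           /* cmp/bcs: upper limit check  */
}
```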
+
+#include "calls.S"
+
+vector_swi: save_user_regs
+ mov fp, #0
+ mask_pc lr, lr
+ ldr r6, [lr, #-4]! @ get SWI instruction
+ arm700_bug_check r6, r7
+ enable_irqs r7
+
+ bic r6, r6, #0xff000000 @ mask off SWI op-code
+ eor r6, r6, #OS_NUMBER<<20 @ check OS number
+ cmp r6, #NR_SYSCALLS @ check upper syscall limit
+ bcs 2f
+
+ get_current_task r5
+ ldr ip, [r5, #FLAGS] @ check for syscall tracing
+ tst ip, #PF_TRACESYS
+ bne 1f
+
+ adr ip, SYMBOL_NAME(sys_call_table)
+ str r4, [sp, #-4]! @ new style: (r0 = arg1, r5 = arg5)
+ mov lr, pc
+ ldr pc, [ip, r6, lsl #2] @ call sys routine
+ add sp, sp, #4
+ str r0, [sp, #S_R0] @ returned r0
+ b ret_from_sys_call
+
+1: ldr r7, [sp, #S_IP] @ save old IP
+ mov r0, #0
+ str r0, [sp, #S_IP] @ trace entry [IP = 0]
+ bl SYMBOL_NAME(syscall_trace)
+ str r7, [sp, #S_IP]
+ ldmia sp, {r0 - r3} @ have to reload r0 - r3
+ adr ip, SYMBOL_NAME(sys_call_table)
+ str r4, [sp, #-4]! @ new style: (r0 = arg1, r5 = arg5)
+ mov lr, pc
+ ldr pc, [ip, r6, lsl #2] @ call sys routine
+ add sp, sp, #4
+ str r0, [sp, #S_R0] @ returned r0
+ mov r0, #1
+ str r0, [sp, #S_IP] @ trace exit [IP = 1]
+ bl SYMBOL_NAME(syscall_trace)
+ str r7, [sp, #S_IP]
+ b ret_from_sys_call
+
+2: tst r6, #0x00f00000 @ is it a Unix SWI?
+ bne 3f
+ cmp r6, #(KSWI_SYS_BASE - KSWI_BASE)
+ bcc 4f @ not private func
+ bic r0, r6, #0x000f0000
+ mov r1, sp
+ bl SYMBOL_NAME(arm_syscall)
+ b ret_from_sys_call
+
+3: eor r0, r6, #OS_NUMBER<<20 @ Put OS number back
+ mov r1, sp
+ bl SYMBOL_NAME(deferred)
+ ldmfd sp, {r0 - r3}
+ b ret_from_sys_call
+
+4: bl SYMBOL_NAME(sys_ni_syscall)
+ str r0, [sp, #0] @ returned r0
+ b ret_from_sys_call
+
+@ r0 = syscall number
+@ r1 = syscall r0
+@ r5 = syscall r4
+@ ip = syscall table
+SYMBOL_NAME(sys_syscall):
+ mov r6, r0
+ eor r6, r6, #OS_NUMBER << 20
+ cmp r6, #NR_SYSCALLS @ check range
+ movgt r0, #-ENOSYS
+ movgt pc, lr
+ add sp, sp, #4 @ skip the saved copy of our r4
+ ldmib sp, {r0 - r4} @ get our args
+ str r4, [sp, #-4]! @ Put our arg on the stack
+ ldr pc, [ip, r6, lsl #2]
+
+ENTRY(sys_call_table)
+#include "calls.S"
+
+/*============================================================================
+ * Special system call wrappers
+ */
+sys_fork_wrapper:
+ add r0, sp, #4
+ b SYMBOL_NAME(sys_fork)
+
+sys_execve_wrapper:
+ add r3, sp, #4
+ b SYMBOL_NAME(sys_execve)
+
+sys_mount_wrapper:
+ mov r6, lr
+ add r5, sp, #4
+ str r5, [sp]
+ str r4, [sp, #-4]!
+ bl SYMBOL_NAME(sys_compat_mount)
+ add sp, sp, #4
+ RETINSTR(mov,pc,r6)
+
+sys_clone_wrapper:
+ add r2, sp, #4
+ b SYMBOL_NAME(sys_clone)
+
+sys_llseek_wrapper:
+ mov r6, lr
+ add r5, sp, #4
+ str r5, [sp]
+ str r4, [sp, #-4]!
+ bl SYMBOL_NAME(sys_compat_llseek)
+ add sp, sp, #4
+ RETINSTR(mov,pc,r6)
+
+sys_sigsuspend_wrapper:
+ add r3, sp, #4
+ b SYMBOL_NAME(sys_sigsuspend)
+
+sys_rt_sigsuspend_wrapper:
+ add r2, sp, #4
+ b SYMBOL_NAME(sys_rt_sigsuspend)
+
+sys_sigreturn_wrapper:
+ add r0, sp, #4
+ b SYMBOL_NAME(sys_sigreturn)
+
+sys_rt_sigreturn_wrapper:
+ add r0, sp, #4
+ b SYMBOL_NAME(sys_rt_sigreturn)
+
+/*============================================================================
+ * All exits to user mode from the kernel go through this code.
+ */
+
+ .globl ret_from_sys_call
+
+ .globl SYMBOL_NAME(fpreturn)
+SYMBOL_NAME(fpreturn):
+ret_from_exception:
+ adr r0, 1f
+ ldmia r0, {r0, r1}
+ ldr r0, [r0]
+ ldr r1, [r1]
+ tst r0, r1
+ blne SYMBOL_NAME(do_bottom_half)
+ret_from_intr: ldr r0, [sp, #S_PSR]
+ tst r0, #3
+ beq ret_with_reschedule
+ b ret_from_all
+
+ret_signal: mov r1, sp
+ adrsvc al, lr, ret_from_all
+ b SYMBOL_NAME(do_signal)
+
+2: bl SYMBOL_NAME(schedule)
+
+ret_from_sys_call:
+ adr r0, 1f
+ ldmia r0, {r0, r1}
+ ldr r0, [r0]
+ ldr r1, [r1]
+ tst r0, r1
+ adrsvc ne, lr, ret_from_intr
+ bne SYMBOL_NAME(do_bottom_half)
+
+ret_with_reschedule:
+ ldr r0, 1f + 8
+ ldr r0, [r0]
+ teq r0, #0
+ bne 2b
+
+ get_current_task r1
+ ldr r1, [r1, #SIGPENDING]
+ teq r1, #0
+ bne ret_signal
+
+ret_from_all: restore_user_regs
+
+1: .word SYMBOL_NAME(bh_mask)
+ .word SYMBOL_NAME(bh_active)
+ .word SYMBOL_NAME(need_resched)
+
+/*============================================================================
+ * FP support
+ */
+
+1: .word SYMBOL_NAME(fp_save)
+ .word SYMBOL_NAME(fp_restore)
+
+.Lfpnull: mov pc, lr
+
+
+/*
+ * Function to call when switching tasks to save FP state
+ */
+ENTRY(fpe_save)
+ ldr r1, 1b
+ ldr pc, [r1]
+
+/*
+ * Function to call when switching tasks to restore FP state
+ */
+ENTRY(fpe_restore)
+ ldr r1, 1b + 4
+ ldr pc, [r1]
+
+
+ .data
+
+ENTRY(fp_enter)
+ .word SYMBOL_NAME(fpundefinstr)
+ .word SYMBOL_NAME(fpundefinstrsvc)
+
+ENTRY(fp_save)
+ .word .Lfpnull
+ENTRY(fp_restore)
+ .word .Lfpnull
+
--- /dev/null
+/*
+ * linux/arch/arm/kernel/head.S
+ *
+ * Copyright (C) 1994, 1995, 1996, 1997 Russell King
+ *
+ * 26-bit kernel startup code
+ */
+#include <linux/config.h>
+#include <linux/linkage.h>
+
+ .text
+ .align
+/*
+ * Entry point.
+ */
+ENTRY(stext)
+ENTRY(_stext)
+__entry: cmp pc, #0x02000000
+ ldrlt pc, LC1 @ if 0x01800000, call at 0x02080000
+ teq r0, #0 @ Check for old calling method
+ blne Loldparams @ Move page if old
+ adr r5, LC0
+ ldmia r5, {r5, r6, sl, sp} @ Setup stack
+ mov r4, #0
+1: cmp r5, sl @ Clear BSS
+ strcc r4, [r5], #4
+ bcc 1b
+ mov r0, #0xea000000 @ Point undef instr to continuation
+ adr r5, Lcontinue - 12
+ orr r5, r0, r5, lsr #2
+ str r5, [r4, #4]
+ mov r2, r4
+ ldr r5, Larm2_id
+ swp r0, r0, [r2] @ check for swp (ARM2 can't)
+ ldr r5, Larm250_id
+ mrc 15, 0, r0, c0, c0 @ check for CP#15 (ARM250 can't)
+ mov r5, r0 @ Use processor ID if we do have CP#15
+Lcontinue: str r5, [r6]
+ mov r5, #0xeb000000 @ Point undef instr vector to itself
+ sub r5, r5, #2
+ str r5, [r4, #4]
+ mov fp, #0
+ b SYMBOL_NAME(start_kernel)
+
+LC1: .word SYMBOL_NAME(_stext)
+LC0: .word SYMBOL_NAME(_edata)
+ .word SYMBOL_NAME(arm_id)
+ .word SYMBOL_NAME(_end)
+ .word SYMBOL_NAME(init_task_union)+8192
+Larm2_id: .long 0x41560200
+Larm250_id: .long 0x41560250
+ .align
+
+Loldparams: mov r4, #0x02000000
+ add r3, r4, #0x00080000
+ add r4, r4, #0x0007c000
+1: ldmia r0!, {r5 - r12}
+ stmia r4!, {r5 - r12}
+ cmp r4, r3
+ blt 1b
+ movs pc, lr
+
+ .align 13
+ENTRY(this_must_match_init_task)
--- /dev/null
+/*
+ * linux/arch/arm/kernel/head32.S
+ *
+ * Copyright (C) 1994, 1995, 1996, 1997 Russell King
+ *
+ * Kernel 32 bit startup code for ARM6 / ARM7 / StrongARM
+ */
+#include <linux/config.h>
+#include <linux/linkage.h>
+ .text
+ .align
+
+ .globl SYMBOL_NAME(swapper_pg_dir)
+ .equ SYMBOL_NAME(swapper_pg_dir), 0xc0004000
+
+ .globl __stext
+/*
+ * Entry point and restart point. Entry *must* be called with r0 == 0,
+ * MMU off.
+ *
+ * r1 = 0 -> ebsa (Ram @ 0x00000000)
+ * r1 = 1 -> RPC (Ram @ 0x10000000)
+ * r1 = 2 -> ebsit (???)
+ * r1 = 3 -> nexuspci
+ */
+ENTRY(stext)
+ENTRY(_stext)
+__entry:
+ teq r0, #0 @ check for illegal entry...
+ bne .Lerror @ loop indefinitely
+ cmp r1, #4 @ Unknown machine architecture
+ bge .Lerror
+@
+@ First thing to do is to get the page tables set up so that we can call the kernel
+@ in the correct place. This is relocatable code...
+@
+ mrc p15, 0, r9, c0, c0 @ get Processor ID
+@
+@ Read processor ID register (CP#15, CR0).
+@ NOTE: ARM2 & ARM250 cause an undefined instruction exception...
+@ Values are:
+@ XX01XXXX = ARMv4 architecture (StrongARM)
+@ XX00XXXX = ARMv3 architecture
+@ 4156061X = ARM 610
+@ 4156030X = ARM 3
+@ 4156025X = ARM 250
+@ 4156020X = ARM 2
+@
+ adr r10, .LCProcTypes
+1: ldmia r10!, {r5, r6, r8} @ Get Set, Mask, MMU Flags
+ teq r5, #0 @ End of list?
+ beq .Lerror
+ eor r5, r5, r9
+ tst r5, r6
+ addne r10, r10, #8
+ bne 1b
+
+ adr r4, .LCMachTypes
+ add r4, r4, r1, lsl #4
+ ldmia r4, {r4, r5, r6} @ r4 = page dir in physical ram
+
+ mov r0, r4
+ mov r1, #0
+ add r2, r0, #0x4000
+1: str r1, [r0], #4 @ Clear page table
+ teq r0, r2
+ bne 1b
+@
+@ Add enough entries to allow the kernel to be called.
+@ It will sort out the real mapping in paging_init
+@
+ add r0, r4, #0x3000
+ mov r1, #0x0000000c @ SECT_CACHEABLE | SECT_BUFFERABLE
+ orr r1, r1, r8
+ add r1, r1, r5
+ str r1, [r0], #4
+ add r1, r1, #1 << 20
+ str r1, [r0], #4
+ add r1, r1, #1 << 20
+@
+@ Map in IO space
+@
+ add r0, r4, #0x3800
+ orr r1, r6, r8
+ add r2, r0, #0x0800
+1: str r1, [r0], #4
+ add r1, r1, #1 << 20
+ teq r0, r2
+ bne 1b
+@
+@ Map in screen at 0x02000000 & SCREEN2_BASE
+@
+ teq r5, #0
+ addne r0, r4, #0x80 @ 02000000
+ movne r1, #0x02000000
+ orrne r1, r1, r8
+ strne r1, [r0]
+ addne r0, r4, #0x3600 @ d8000000
+ strne r1, [r0]
+@
+@ The following should work on both v3 and v4 implementations
+@
+ mov lr, pc
+ mov pc, r10 @ Call processor flush (returns ctrl reg)
+ adr r5, __entry
+ sub r10, r10, r5 @ Make r10 PIC
+ ldr lr, .Lbranch
+ mcr p15, 0, r0, c1, c0 @ Enable MMU & caches. In 3 instructions
+ @ we lose this page!
+ mov pc, lr
+
+.Lerror: mov r0, #0x02000000
+ mov r1, #0x11
+ orr r1, r1, r1, lsl #8
+ orr r1, r1, r1, lsl #16
+ str r1, [r0], #4
+ str r1, [r0], #4
+ str r1, [r0], #4
+ str r1, [r0], #4
+ b .Lerror
+
+.Lbranch: .long .Lalready_done_mmap @ Real address of routine
+
+ @ EBSA (pg dir phys, phys ram start, phys i/o)
+.LCMachTypes: .long SYMBOL_NAME(swapper_pg_dir) - 0xc0000000 @ Address of page tables (physical)
+ .long 0 @ Address of RAM
+ .long 0xe0000000 @ I/O address
+ .long 0
+
+ @ RPC
+ .long SYMBOL_NAME(swapper_pg_dir) - 0xc0000000 + 0x10000000
+ .long 0x10000000
+ .long 0x03000000
+ .long 0
+
+ @ EBSIT ???
+ .long SYMBOL_NAME(swapper_pg_dir) - 0xc0000000
+ .long 0
+ .long 0xe0000000
+ .long 0
+
+ @ NexusPCI
+ .long SYMBOL_NAME(swapper_pg_dir) - 0xc0000000 + 0x40000000
+ .long 0x40000000
+ .long 0x10000000
+ .long 0
+
+.LCProcTypes: @ ARM6 / 610
+ .long 0x41560600
+ .long 0xffffff00
+ .long 0x00000c12
+ b .Larmv3_flush_early @ arm v3 flush & ctrl early setup
+ mov pc, lr
+
+ @ ARM7 / 710
+ .long 0x41007000
+ .long 0xfffff000
+ .long 0x00000c12
+ b .Larmv3_flush_late @ arm v3 flush & ctrl late setup
+ mov pc, lr
+
+ @ StrongARM
+ .long 0x4401a100
+ .long 0xfffffff0
+ .long 0x00000c02
+ b .Larmv4_flush_early
+ b .Lsa_fastclock
+
+ .long 0
+
+.LC0: .long SYMBOL_NAME(_edata)
+ .long SYMBOL_NAME(arm_id)
+ .long SYMBOL_NAME(_end)
+ .long SYMBOL_NAME(init_task_union)+8192
+ .align
+
+.Larmv3_flush_early:
+ mov r0, #0
+ mcr p15, 0, r0, c7, c0 @ flush caches on v3
+ mcr p15, 0, r0, c5, c0 @ flush TLBs on v3
+ mcr p15, 0, r4, c2, c0 @ load page table pointer
+ mov r0, #0x1f @ Domains 0, 1 = client
+ mcr p15, 0, r0, c3, c0 @ load domain access register
+ mov r0, #0x3d @ ....S..DPWC.M
+ orr r0, r0, #0x100
+ mov pc, lr
+
+.Larmv3_flush_late:
+ mov r0, #0
+ mcr p15, 0, r0, c7, c0 @ flush caches on v3
+ mcr p15, 0, r0, c5, c0 @ flush TLBs on v3
+ mcr p15, 0, r4, c2, c0 @ load page table pointer
+ mov r0, #0x1f @ Domains 0, 1 = client
+ mcr p15, 0, r0, c3, c0 @ load domain access register
+ mov r0, #0x7d @ ....S.LDPWC.M
+ orr r0, r0, #0x100
+ mov pc, lr
+
+.Larmv4_flush_early:
+ mov r0, #0
+ mcr p15, 0, r0, c7, c7 @ flush I,D caches on v4
+ mcr p15, 0, r0, c7, c10, 4 @ drain write buffer on v4
+ mcr p15, 0, r0, c8, c7 @ flush I,D TLBs on v4
+ mcr p15, 0, r4, c2, c0 @ load page table pointer
+ mov r0, #0x1f @ Domains 0, 1 = client
+ mcr p15, 0, r0, c3, c0 @ load domain access register
+ mrc p15, 0, r0, c1, c0 @ get control register v4
+ bic r0, r0, #0x0e00
+ bic r0, r0, #0x0002
+ orr r0, r0, #0x003d @ I...S..DPWC.M
+ orr r0, r0, #0x1100 @ v4 supports separate I cache
+ mov pc, lr
+
+.Lsa_fastclock: mcr p15, 0, r4, c15, c1, 2 @ Enable clock switching
+ mov pc, lr
+
+.Lalready_done_mmap:
+ adr r5, __entry @ Add base back in
+ add r10, r10, r5
+ adr r5, .LC0
+ ldmia r5, {r5, r6, r8, sp} @ Setup stack
+ mov r4, #0
+1: cmp r5, r8 @ Clear BSS
+ strcc r4, [r5],#4
+ bcc 1b
+
+ str r9, [r6] @ Save processor ID
+ mov lr, pc
+ add pc, r10, #4 @ Call post-processor init
+ mov fp, #0
+ b SYMBOL_NAME(start_kernel)
+
+#if 1
+/*
+ * Useful debugging routines
+ */
+ .globl _printhex8
+_printhex8: mov r1, #8
+ b printhex
+
+ .globl _printhex4
+_printhex4: mov r1, #4
+ b printhex
+
+ .globl _printhex2
+_printhex2: mov r1, #2
+printhex: ldr r2, =hexbuf
+ add r3, r2, r1
+ mov r1, #0
+ strb r1, [r3]
+1: and r1, r0, #15
+ mov r0, r0, lsr #4
+ cmp r1, #10
+ addlt r1, r1, #'0'
+ addge r1, r1, #'a' - 10
+ strb r1, [r3, #-1]!
+ teq r3, r2
+ bne 1b
+ mov r0, r2
+
+ .globl _printascii
+_printascii:
+#ifdef CONFIG_ARCH_RPC
+ mov r3, #0xe0000000
+ orr r3, r3, #0x00010000
+ orr r3, r3, #0x00000fe0
+#else
+ mov r3, #0xf0000000
+ orr r3, r3, #0x0be0
+#endif
+ b 3f
+1: ldrb r2, [r3, #0x18]
+ tst r2, #0x10
+ beq 1b
+ strb r1, [r3]
+2: ldrb r2, [r3, #0x14]
+ and r2, r2, #0x60
+ teq r2, #0x60
+ bne 2b
+ teq r1, #'\n'
+ moveq r1, #'\r'
+ beq 1b
+3: teq r0, #0
+ ldrneb r1, [r0], #1
+ teqne r1, #0
+ bne 1b
+ mov pc, lr
+
+ .ltorg
+
+ .globl _printch
+_printch:
+#ifdef CONFIG_ARCH_RPC
+ mov r3, #0xe0000000
+ orr r3, r3, #0x00010000
+ orr r3, r3, #0x00000fe0
+#else
+ mov r3, #0xf0000000
+ orr r3, r3, #0x0be0
+#endif
+ mov r1, r0
+ mov r0, #0
+ b 1b
+
+ .bss
+hexbuf: .space 16
+
+#endif
+
+ .text
+ .align 13
+ENTRY(this_must_match_init_task)
--- /dev/null
+/*
+ * linux/arch/arm/kernel/iic.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ *
+ * IIC is used to get the current time from the CMOS rtc.
+ */
+
+#include <asm/system.h>
+#include <asm/delay.h>
+#include <asm/io.h>
+#include <asm/hardware.h>
+
+/*
+ * If the delay loop has been calibrated, use that;
+ * otherwise use IOC timer 1.
+ */
+static void iic_delay (void)
+{
+ extern unsigned long loops_per_sec;
+ if (loops_per_sec != (1 << 12)) {
+ udelay(10);
+ return;
+ } else {
+ unsigned long flags;
+ save_flags_cli(flags);
+
+ outb(254, IOC_T1LTCHL);
+ outb(255, IOC_T1LTCHH);
+ outb(0, IOC_T1GO);
+ outb(1<<6, IOC_IRQCLRA); /* clear T1 irq */
+ outb(4, IOC_T1LTCHL);
+ outb(0, IOC_T1LTCHH);
+ outb(0, IOC_T1GO);
+ while ((inb(IOC_IRQSTATA) & (1<<6)) == 0);
+ restore_flags(flags);
+ }
+}
+
+static inline void iic_start (void)
+{
+ unsigned char out;
+
+ out = inb(IOC_CONTROL) | 0xc2;
+
+ outb(out, IOC_CONTROL);
+ iic_delay();
+
+ outb(out ^ 1, IOC_CONTROL);
+ iic_delay();
+}
+
+static inline void iic_stop (void)
+{
+ unsigned char out;
+
+ out = inb(IOC_CONTROL) | 0xc3;
+
+ iic_delay();
+ outb(out ^ 1, IOC_CONTROL);
+
+ iic_delay();
+ outb(out, IOC_CONTROL);
+}
+
+static int iic_sendbyte (unsigned char b)
+{
+ unsigned char out, in;
+ int i;
+
+ out = (inb(IOC_CONTROL) & 0xfc) | 0xc0;
+
+ outb(out, IOC_CONTROL);
+ for (i = 7; i >= 0; i--) {
+ unsigned char c;
+ c = out | ((b & (1 << i)) ? 1 : 0);
+
+ outb(c, IOC_CONTROL);
+ iic_delay();
+
+ outb(c | 2, IOC_CONTROL);
+ iic_delay();
+
+ outb(c, IOC_CONTROL);
+ }
+ outb(out | 1, IOC_CONTROL);
+ iic_delay();
+
+ outb(out | 3, IOC_CONTROL);
+ iic_delay();
+
+ in = inb(IOC_CONTROL) & 1;
+
+ outb(out | 1, IOC_CONTROL);
+ iic_delay();
+
+ outb(out, IOC_CONTROL);
+ iic_delay();
+
+ if(in) {
+ printk("No acknowledge from RTC\n");
+ return 1;
+ } else
+ return 0;
+}
+
+static unsigned char iic_recvbyte (void)
+{
+ unsigned char out, in;
+ int i;
+
+ out = (inb(IOC_CONTROL) & 0xfc) | 0xc0;
+
+ outb(out, IOC_CONTROL);
+ in = 0;
+ for (i = 7; i >= 0; i--) {
+ outb(out | 1, IOC_CONTROL);
+ iic_delay();
+ outb(out | 3, IOC_CONTROL);
+ iic_delay();
+ in = (in << 1) | (inb(IOC_CONTROL) & 1);
+ outb(out | 1, IOC_CONTROL);
+ iic_delay();
+ }
+ outb(out, IOC_CONTROL);
+ iic_delay();
+ outb(out | 2, IOC_CONTROL);
+ iic_delay();
+
+ return in;
+}
+
+void iic_control (unsigned char addr, unsigned char loc, unsigned char *buf, int len)
+{
+ iic_start();
+
+ if (iic_sendbyte(addr & 0xfe))
+ goto error;
+
+ if (iic_sendbyte(loc))
+ goto error;
+
+ if (addr & 1) {
+ int i;
+
+ for (i = 0; i < len; i++)
+ if (iic_sendbyte (buf[i]))
+ break;
+ } else {
+ int i;
+
+ iic_stop();
+ iic_start();
+ iic_sendbyte(addr|1);
+ for (i = 0; i < len; i++)
+ buf[i] = iic_recvbyte ();
+ }
+error:
+ iic_stop();
+}
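iic_control above follows the usual I2C register-access pattern: address the device for a write (low address bit clear) to send the register number, then for a read restart the bus and re-address with the low bit set. The address arithmetic in isolation:

```c
#include <assert.h>

/* The low bit of the I2C address byte is the direction flag:
 * 0 = write, 1 = read.  This is what "addr & 0xfe" and "addr | 1"
 * do in iic_control(). */
static unsigned char i2c_addr_write(unsigned char addr) { return addr & 0xfe; }
static unsigned char i2c_addr_read(unsigned char addr)  { return addr | 1; }
```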
--- /dev/null
+#include <linux/mm.h>
+#include <linux/sched.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+
+static struct vm_area_struct init_mmap = INIT_MMAP;
+static struct fs_struct init_fs = INIT_FS;
+static struct files_struct init_files = INIT_FILES;
+static struct signal_struct init_signals = INIT_SIGNALS;
+struct mm_struct init_mm = INIT_MM;
+
+/*
+ * Initial task structure.
+ *
+ * We need to make sure that this is 8192-byte aligned due to the
+ * way process stacks are handled. This is done by making sure
+ * the linker maps this in the .text segment right after head.S,
+ * and making head.S ensure the proper alignment.
+ *
+ * The things we do for performance..
+ */
+union task_union init_task_union __attribute__((__section__(".text"))) = { INIT_TASK };
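The 8192-byte alignment is what lets entry.S recover the current task from any kernel stack pointer with a two-instruction shift pair (the `mov r2, sp, lsr #13` / `mov r2, r2, lsl #13` sequence in __und_svc). A host-side sketch of the same rounding:

```c
#include <assert.h>
#include <stdint.h>

/* Round a kernel stack pointer down to its 8 KB task_union base --
 * the C equivalent of the lsr #13 / lsl #13 pair in entry.S. */
static uintptr_t task_from_sp(uintptr_t sp)
{
    return sp & ~(uintptr_t)0x1fff;
}
```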
--- /dev/null
+/*
+ * linux/arch/arm/kernel/ioport.c
+ *
+ * This contains the io-permission bitmap code - written by obz, with changes
+ * by Linus.
+ *
+ * Modifications for ARM processor Copyright (C) 1995, 1996 Russell King
+ */
+
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/ioport.h>
+
+/* Set EXTENT bits starting at BASE in BITMAP to NEW_VALUE. */
+asmlinkage void set_bitmap(unsigned long *bitmap, short base, short extent, int new_value)
+{
+ int mask;
+ unsigned long *bitmap_base = bitmap + (base >> 5);
+ unsigned short low_index = base & 0x1f;
+ int length = low_index + extent;
+
+ if (low_index != 0) {
+ mask = (~0 << low_index);
+ if (length < 32)
+ mask &= ~(~0 << length);
+ if (new_value)
+ *bitmap_base++ |= mask;
+ else
+ *bitmap_base++ &= ~mask;
+ length -= 32;
+ }
+
+ mask = (new_value ? ~0 : 0);
+ while (length >= 32) {
+ *bitmap_base++ = mask;
+ length -= 32;
+ }
+
+ if (length > 0) {
+ mask = ~(~0 << length);
+ if (new_value)
+ *bitmap_base++ |= mask;
+ else
+ *bitmap_base++ &= ~mask;
+ }
+}
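A standalone copy of the same masking logic (using uint32_t words so it behaves identically on 64-bit hosts) shows how a run of bits that straddles a word boundary touches both words:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of set_bitmap()'s arithmetic: a leading partial word, a run
 * of full words, and a trailing partial word. */
static void set_bits(uint32_t *bitmap, int base, int extent, int on)
{
    unsigned mask;
    uint32_t *p = bitmap + (base >> 5);
    int low = base & 0x1f;
    int length = low + extent;

    if (low != 0) {                       /* leading partial word */
        mask = ~0u << low;
        if (length < 32)
            mask &= ~(~0u << length);
        if (on) *p++ |= mask; else *p++ &= ~mask;
        length -= 32;
    }
    while (length >= 32) {                /* full words */
        *p++ = on ? ~0u : 0u;
        length -= 32;
    }
    if (length > 0) {                     /* trailing partial word */
        mask = ~(~0u << length);
        if (on) *p |= mask; else *p &= ~mask;
    }
}
```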
+
+/*
+ * this changes the io permissions bitmap in the current task.
+ */
+asmlinkage int sys_ioperm(unsigned long from, unsigned long num, int turn_on)
+{
+ if (from + num <= from)
+ return -EINVAL;
+#ifndef __arm__
+ if (from + num > IO_BITMAP_SIZE*32)
+ return -EINVAL;
+#endif
+ if (!suser())
+ return -EPERM;
+
+#ifdef IODEBUG
+ printk("io: from=%d num=%d %s\n", from, num, (turn_on ? "on" : "off"));
+#endif
+#ifndef __arm__
+ set_bitmap((unsigned long *)current->tss.io_bitmap, from, num, !turn_on);
+#endif
+ return 0;
+}
+
+unsigned int *stack;
+
+/*
+ * sys_iopl has to be used when you want to access the IO ports
+ * beyond the 0x3ff range: to get the full 65536 ports bitmapped
+ * you'd need 8kB of bitmaps/process, which is a bit excessive.
+ *
+ * Here we just change the eflags value on the stack: we allow
+ * only the super-user to do it. This depends on the stack-layout
+ * on system-call entry - see also fork() and the signal handling
+ * code.
+ */
+asmlinkage int sys_iopl(long ebx,long ecx,long edx,
+ long esi, long edi, long ebp, long eax, long ds,
+ long es, long fs, long gs, long orig_eax,
+ long eip,long cs,long eflags,long esp,long ss)
+{
+ unsigned int level = ebx;
+
+ if (level > 3)
+ return -EINVAL;
+ if (!suser())
+ return -EPERM;
+ *(&eflags) = (eflags & 0xffffcfff) | (level << 12);
+ return 0;
+}
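The eflags manipulation in sys_iopl (inherited from the i386 version) isolates the two-bit IOPL field at bits 12-13; a sketch of just that masking:

```c
#include <assert.h>
#include <stdint.h>

/* Replace the IOPL field (bits 12-13) of an eflags value with the
 * requested level -- the "(eflags & 0xffffcfff) | (level << 12)"
 * expression from sys_iopl(). */
static uint32_t set_iopl(uint32_t eflags, unsigned level)
{
    return (eflags & 0xffffcfffu) | (level << 12);
}
```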
--- /dev/null
+/*
+ * linux/arch/arm/kernel/irq.c
+ *
+ * Copyright (C) 1992 Linus Torvalds
+ * Modifications for ARM processor Copyright (C) 1995, 1996 Russell King.
+ *
+ * This file contains the code used by various IRQ handling routines:
+ * asking for different IRQ's should be done through these routines
+ * instead of just grabbing them. Thus setups with different IRQ numbers
+ * shouldn't result in any weird surprises, and installing new handlers
+ * should be easier.
+ */
+
+/*
+ * IRQ's are in fact implemented a bit like signal handlers for the kernel.
+ * Naturally it's not a 1:1 relation, but there are similarities.
+ */
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/kernel_stat.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/timex.h>
+#include <linux/malloc.h>
+#include <linux/random.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/init.h>
+
+#include <asm/io.h>
+#include <asm/system.h>
+#include <asm/hardware.h>
+#include <asm/irq-no.h>
+#include <asm/arch/irq.h>
+
+#ifdef __SMP_PROF__
+extern volatile unsigned long smp_local_timer_ticks[1+NR_CPUS];
+#endif
+
+unsigned int local_irq_count[NR_CPUS];
+#ifdef __SMP__
+atomic_t __arm_bh_counter;
+#else
+int __arm_bh_counter;
+#endif
+
+spinlock_t irq_controller_lock;
+
+#ifndef __SMP__
+#define irq_enter(cpu, irq) (++local_irq_count[cpu])
+#define irq_exit(cpu, irq) (--local_irq_count[cpu])
+#else
+#error SMP not supported
+#endif
+
+void disable_irq(unsigned int irq_nr)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&irq_controller_lock, flags);
+#ifdef cliIF
+ cliIF();
+#endif
+ mask_irq(irq_nr);
+ spin_unlock_irqrestore(&irq_controller_lock, flags);
+}
+
+void enable_irq(unsigned int irq_nr)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&irq_controller_lock, flags);
+#ifdef cliIF
+ cliIF();
+#endif
+ unmask_irq(irq_nr);
+ spin_unlock_irqrestore(&irq_controller_lock, flags);
+}
+
+struct irqaction *irq_action[NR_IRQS];
+
+/*
+ * Bitmask indicating valid interrupt numbers
+ */
+unsigned long validirqs[NR_IRQS / 32] = {
+ 0x003fffff, 0x000001ff, 0x000000ff, 0x00000000
+};
+
+int get_irq_list(char *buf)
+{
+ int i;
+ struct irqaction * action;
+ char *p = buf;
+
+ for (i = 0 ; i < NR_IRQS ; i++) {
+ action = irq_action[i];
+ if (!action)
+ continue;
+ p += sprintf(p, "%3d: %10u %s",
+ i, kstat.interrupts[i], action->name);
+ for (action = action->next; action; action = action->next) {
+ p += sprintf(p, ", %s", action->name);
+ }
+ *p++ = '\n';
+ }
+ return p - buf;
+}
+
+/*
+ * do_IRQ handles all normal device IRQ's
+ */
+asmlinkage void do_IRQ(int irq, struct pt_regs * regs)
+{
+ struct irqaction * action;
+ int status, cpu;
+
+#if defined(HAS_IOMD) || defined(HAS_IOC)
+ if (irq != IRQ_EXPANSIONCARD)
+#endif
+ {
+ spin_lock(&irq_controller_lock);
+ mask_and_ack_irq(irq);
+ spin_unlock(&irq_controller_lock);
+ }
+
+ cpu = smp_processor_id();
+ irq_enter(cpu, irq);
+ kstat.interrupts[irq]++;
+
+ /* Return with this interrupt masked if no action */
+ status = 0;
+ action = *(irq + irq_action);
+ if (action) {
+ if (!(action->flags & SA_INTERRUPT))
+ __sti();
+
+ do {
+ status |= action->flags;
+ action->handler(irq, action->dev_id, regs);
+ action = action->next;
+ } while (action);
+ if (status & SA_SAMPLE_RANDOM)
+ add_interrupt_randomness(irq);
+ __cli();
+#if defined(HAS_IOMD) || defined(HAS_IOC)
+ if (irq != IRQ_KEYBOARDTX && irq != IRQ_EXPANSIONCARD)
+#endif
+ {
+ spin_lock(&irq_controller_lock);
+ unmask_irq(irq);
+ spin_unlock(&irq_controller_lock);
+ }
+ }
+
+ irq_exit(cpu, irq);
+ /*
+ * This should be conditional: we should really get
+ * a return code from the irq handler to tell us
+ * whether the handler wants us to do software bottom
+ * half handling or not..
+ *
+ * ** IMPORTANT NOTE: do_bottom_half() ENABLES IRQS!!! **
+ * ** WE MUST DISABLE THEM AGAIN, ELSE IDE DISKS GO **
+ * ** AWOL **
+ */
+ if (1) {
+ if (bh_active & bh_mask)
+ do_bottom_half();
+ __cli();
+ }
+}
+
+#if defined(HAS_IOMD) || defined(HAS_IOC)
+void do_ecard_IRQ(int irq, struct pt_regs *regs)
+{
+ struct irqaction * action;
+
+ action = *(irq + irq_action);
+ if (action) {
+ do {
+ action->handler(irq, action->dev_id, regs);
+ action = action->next;
+ } while (action);
+ } else {
+ spin_lock(&irq_controller_lock);
+ mask_irq (irq);
+ spin_unlock(&irq_controller_lock);
+ }
+}
+#endif
+
+int setup_arm_irq(int irq, struct irqaction * new)
+{
+ int shared = 0;
+ struct irqaction *old, **p;
+ unsigned long flags;
+
+ p = irq_action + irq;
+ if ((old = *p) != NULL) {
+ /* Can't share interrupts unless both agree to */
+ if (!(old->flags & new->flags & SA_SHIRQ))
+ return -EBUSY;
+
+ /* add new interrupt at end of irq queue */
+ do {
+ p = &old->next;
+ old = *p;
+ } while (old);
+ shared = 1;
+ }
+
+ if (new->flags & SA_SAMPLE_RANDOM)
+ rand_initialize_irq(irq);
+
+ save_flags_cli(flags);
+ *p = new;
+
+ if (!shared) {
+ spin_lock(&irq_controller_lock);
+ unmask_irq(irq);
+ spin_unlock(&irq_controller_lock);
+ }
+ restore_flags(flags);
+ return 0;
+}
+
+/*
+ * Using "struct sigaction" is slightly silly, but there
+ * are historical reasons and it works well, so..
+ */
+int request_irq(unsigned int irq, void (*handler)(int, void *, struct pt_regs *),
+ unsigned long irq_flags, const char * devname, void *dev_id)
+{
+ int retval;
+ struct irqaction *action;
+
+ if (irq >= NR_IRQS || !(validirqs[irq >> 5] & (1 << (irq & 31))))
+ return -EINVAL;
+ if (!handler)
+ return -EINVAL;
+
+ action = (struct irqaction *)kmalloc(sizeof(struct irqaction), GFP_KERNEL);
+ if (!action)
+ return -ENOMEM;
+
+ action->handler = handler;
+ action->flags = irq_flags;
+ action->mask = 0;
+ action->name = devname;
+ action->next = NULL;
+ action->dev_id = dev_id;
+
+ retval = setup_arm_irq(irq, action);
+
+ if (retval)
+ kfree(action);
+ return retval;
+}
+
+void free_irq(unsigned int irq, void *dev_id)
+{
+ struct irqaction * action, **p;
+ unsigned long flags;
+
+ if (irq >= NR_IRQS || !(validirqs[irq >> 5] & (1 << (irq & 31)))) {
+ printk(KERN_ERR "Trying to free IRQ%d\n",irq);
+#ifdef CONFIG_DEBUG_ERRORS
+ __backtrace();
+#endif
+ return;
+ }
+ for (p = irq + irq_action; (action = *p) != NULL; p = &action->next) {
+ if (action->dev_id != dev_id)
+ continue;
+
+ /* Found it - now free it */
+ save_flags_cli (flags);
+ *p = action->next;
+ restore_flags (flags);
+ kfree(action);
+ return;
+ }
+ printk(KERN_ERR "Trying to free free IRQ%d\n",irq);
+#ifdef CONFIG_DEBUG_ERRORS
+ __backtrace();
+#endif
+}
+
+unsigned long probe_irq_on (void)
+{
+ unsigned int i, irqs = 0;
+ unsigned long delay;
+
+ /* first snaffle up any unassigned irqs */
+ for (i = 15; i > 0; i--) {
+ if (!irq_action[i]) {
+ enable_irq(i);
+ irqs |= 1 << i;
+ }
+ }
+
+ /* wait for spurious interrupts to mask themselves out again */
+ for (delay = jiffies + HZ/10; delay > jiffies; )
+ /* min 100ms delay */;
+
+ /* now filter out any obviously spurious interrupts */
+ return irqs & get_enabled_irqs();
+}
+
+int probe_irq_off (unsigned long irqs)
+{
+ unsigned int i;
+
+ irqs &= ~get_enabled_irqs();
+ if (!irqs)
+ return 0;
+ i = ffz (~irqs);
+ if (irqs != (irqs & (1 << i)))
+ i = -i;
+ return i;
+}
+
+__initfunc(void init_IRQ(void))
+{
+ irq_init_irq();
+}
--- /dev/null
+/* Support for the latches on the old Archimedes which control the floppy,
+ * hard disc and printer
+ *
+ * (c) David Alan Gilbert 1995/1996
+ */
+#include <linux/kernel.h>
+
+#include <asm/io.h>
+#include <asm/hardware.h>
+
+#ifdef LATCHAADDR
+/*
+ * They are static so that everyone who accesses them has to go through here
+ */
+static unsigned char LatchACopy;
+
+/* newval=(oldval & ~mask)|newdata */
+void oldlatch_aupdate(unsigned char mask,unsigned char newdata)
+{
+ LatchACopy=(LatchACopy & ~mask)|newdata;
+ outb(LatchACopy, LATCHAADDR);
+#ifdef DEBUG
+ printk("oldlatch_A:0x%02x\n",LatchACopy);
+#endif
+
+}
+#endif
+
+#ifdef LATCHBADDR
+static unsigned char LatchBCopy;
+
+/* newval=(oldval & ~mask)|newdata */
+void oldlatch_bupdate(unsigned char mask,unsigned char newdata)
+{
+ LatchBCopy=(LatchBCopy & ~mask)|newdata;
+ outb(LatchBCopy, LATCHBADDR);
+#ifdef DEBUG
+ printk("oldlatch_B:0x%02x\n",LatchBCopy);
+#endif
+}
+#endif
+
+void oldlatch_init(void)
+{
+ printk("oldlatch: init\n");
+#ifdef LATCHAADDR
+ oldlatch_aupdate(0xff,0xff);
+#endif
+#ifdef LATCHBADDR
+ oldlatch_bupdate(0xff,0x8); /* That's no FDC reset... */
+#endif
+ return ;
+}
--- /dev/null
+/*
+ * linux/arch/arm/kernel/process.c
+ *
+ * Copyright (C) 1996 Russell King - Converted to ARM.
+ * Original Copyright (C) 1995 Linus Torvalds
+ */
+
+/*
+ * This file handles the architecture-dependent parts of process handling..
+ */
+
+#define __KERNEL_SYSCALLS__
+#include <stdarg.h>
+
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/ptrace.h>
+#include <linux/malloc.h>
+#include <linux/vmalloc.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/interrupt.h>
+#include <linux/config.h>
+#include <linux/unistd.h>
+#include <linux/delay.h>
+#include <linux/smp.h>
+#include <linux/reboot.h>
+#include <linux/init.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/io.h>
+
+extern void fpe_save(struct fp_soft_struct *);
+extern char *processor_modes[];
+
+asmlinkage void ret_from_sys_call(void) __asm__("ret_from_sys_call");
+
+static int hlt_counter=0;
+
+void disable_hlt(void)
+{
+ hlt_counter++;
+}
+
+void enable_hlt(void)
+{
+ hlt_counter--;
+}
+
+/*
+ * The idle loop on an arm..
+ */
+asmlinkage int sys_idle(void)
+{
+ int ret = -EPERM;
+
+ lock_kernel();
+ if (current->pid != 0)
+ goto out;
+ /* endless idle loop with no priority at all */
+ current->priority = -100;
+ for (;;)
+ {
+ if (!hlt_counter && !need_resched)
+ proc_idle ();
+ run_task_queue(&tq_scheduler);
+ schedule();
+ }
+ ret = 0;
+out:
+ unlock_kernel();
+ return ret;
+}
+
+__initfunc(void reboot_setup(char *str, int *ints))
+{
+}
+
+/*
+ * This routine reboots the machine by resetting the expansion cards via
+ * their loaders, turning off the processor cache (if ARM3), copying the
+ * first instruction of the ROM to 0, and executing it there.
+ */
+void machine_restart(char * __unused)
+{
+ proc_hard_reset ();
+ arch_hard_reset ();
+}
+
+void machine_halt(void)
+{
+}
+
+void machine_power_off(void)
+{
+}
+
+
+void show_regs(struct pt_regs * regs)
+{
+ unsigned long flags;
+
+ flags = condition_codes(regs);
+
+ printk("\n"
+ "pc : [<%08lx>]\n"
+ "lr : [<%08lx>]\n"
+ "sp : %08lx ip : %08lx fp : %08lx\n",
+ instruction_pointer(regs),
+ regs->ARM_lr, regs->ARM_sp,
+ regs->ARM_ip, regs->ARM_fp);
+ printk( "r10: %08lx r9 : %08lx r8 : %08lx\n",
+ regs->ARM_r10, regs->ARM_r9,
+ regs->ARM_r8);
+ printk( "r7 : %08lx r6 : %08lx r5 : %08lx r4 : %08lx\n",
+ regs->ARM_r7, regs->ARM_r6,
+ regs->ARM_r5, regs->ARM_r4);
+ printk( "r3 : %08lx r2 : %08lx r1 : %08lx r0 : %08lx\n",
+ regs->ARM_r3, regs->ARM_r2,
+ regs->ARM_r1, regs->ARM_r0);
+ printk("Flags: %c%c%c%c",
+ flags & CC_N_BIT ? 'N' : 'n',
+ flags & CC_Z_BIT ? 'Z' : 'z',
+ flags & CC_C_BIT ? 'C' : 'c',
+ flags & CC_V_BIT ? 'V' : 'v');
+ printk(" IRQs %s FIQs %s Mode %s\n",
+ interrupts_enabled(regs) ? "on" : "off",
+ fast_interrupts_enabled(regs) ? "on" : "off",
+ processor_modes[processor_mode(regs)]);
+#if defined(CONFIG_CPU_ARM6) || defined(CONFIG_CPU_SA110)
+{ int ctrl, transbase, dac;
+ __asm__ (
+" mrc p15, 0, %0, c1, c0\n"
+" mrc p15, 0, %1, c2, c0\n"
+" mrc p15, 0, %2, c3, c0\n"
+ : "=r" (ctrl), "=r" (transbase), "=r" (dac));
+ printk("Control: %04X Table: %08X DAC: %08X",
+ ctrl, transbase, dac);
+ }
+#endif
+ printk ("Segment %s\n", get_fs() == get_ds() ? "kernel" : "user");
+}
+
+/*
+ * Free current thread data structures etc..
+ */
+void exit_thread(void)
+{
+ if (last_task_used_math == current)
+ last_task_used_math = NULL;
+}
+
+void flush_thread(void)
+{
+ int i;
+
+ for (i = 0; i < 8; i++)
+ current->debugreg[i] = 0;
+ if (last_task_used_math == current)
+ last_task_used_math = NULL;
+ current->used_math = 0;
+ current->flags &= ~PF_USEDFPU;
+}
+
+void release_thread(struct task_struct *dead_task)
+{
+}
+
+int copy_thread(int nr, unsigned long clone_flags, unsigned long esp,
+ struct task_struct * p, struct pt_regs * regs)
+{
+ struct pt_regs * childregs;
+ struct context_save_struct * save;
+
+ childregs = ((struct pt_regs *)((unsigned long)p + 8192)) - 1;
+ *childregs = *regs;
+ childregs->ARM_r0 = 0;
+
+ save = ((struct context_save_struct *)(childregs)) - 1;
+ copy_thread_css (save);
+ p->tss.save = save;
+ /*
+ * Save current math state in p->tss.fpe_save if not already there.
+ */
+ if (last_task_used_math == current)
+ fpe_save (&p->tss.fpstate.soft);
+
+ return 0;
+}
+
+/*
+ * fill in the fpe structure for a core dump...
+ */
+int dump_fpu (struct pt_regs *regs, struct user_fp *fp)
+{
+ int fpvalid = 0;
+
+ if (current->used_math) {
+ if (last_task_used_math == current)
+ fpe_save (&current->tss.fpstate.soft);
+
+ memcpy (fp, &current->tss.fpstate.soft, sizeof (*fp));
+ fpvalid = 1;
+ }
+
+ return fpvalid;
+}
+
+/*
+ * fill in the user structure for a core dump..
+ */
+void dump_thread(struct pt_regs * regs, struct user * dump)
+{
+ int i;
+
+ dump->magic = CMAGIC;
+ dump->start_code = current->mm->start_code;
+ dump->start_stack = regs->ARM_sp & ~(PAGE_SIZE - 1);
+
+ dump->u_tsize = (current->mm->end_code - current->mm->start_code) >> PAGE_SHIFT;
+ dump->u_dsize = (current->mm->brk - current->mm->start_data + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ dump->u_ssize = 0;
+
+ for (i = 0; i < 8; i++)
+ dump->u_debugreg[i] = current->debugreg[i];
+
+ if (dump->start_stack < 0x04000000)
+ dump->u_ssize = (0x04000000 - dump->start_stack) >> PAGE_SHIFT;
+
+ dump->regs = *regs;
+ dump->u_fpvalid = dump_fpu (regs, &dump->u_fp);
+}
--- /dev/null
+/* ptrace.c */
+/* By Ross Biro 1/23/92 */
+/* edited by Linus Torvalds */
+/* edited for ARM by Russell King */
+
+#include <linux/head.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/errno.h>
+#include <linux/ptrace.h>
+#include <linux/user.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+
+/*
+ * This does not yet catch signals sent when the child dies;
+ * that is handled in exit.c or in signal.c.
+ */
+
+/*
+ * Breakpoint SWI instruction: SWI &9F0001
+ */
+#define BREAKINST 0xef9f0001
+
+/* change a pid into a task struct. */
+static inline struct task_struct * get_task(int pid)
+{
+ int i;
+
+ for (i = 1; i < NR_TASKS; i++) {
+ if (task[i] != NULL && (task[i]->pid == pid))
+ return task[i];
+ }
+ return NULL;
+}
+
+/*
+ * This routine gets a word from the process's privileged stack.
+ * The offset is the distance from the base address stored in the TSS.
+ * It assumes that all the privileged stacks are in our data space.
+ */
+static inline long get_stack_long(struct task_struct *task, int offset)
+{
+ unsigned char *stack;
+
+ stack = (unsigned char *)((unsigned long)task + 8192 - sizeof(struct pt_regs));
+ stack += offset << 2;
+ return *(unsigned long *)stack;
+}
+
+/*
+ * This routine puts a word onto the process's privileged stack.
+ * The offset is the distance from the base address stored in the TSS.
+ * It assumes that all the privileged stacks are in our data space.
+ */
+static inline long put_stack_long(struct task_struct *task, int offset,
+ unsigned long data)
+{
+ unsigned char *stack;
+
+ stack = (unsigned char *)((unsigned long)task + 8192 - sizeof(struct pt_regs));
+ stack += offset << 2;
+ *(unsigned long *) stack = data;
+ return 0;
+}
+
+/*
+ * This routine gets a long from any process space by following the page
+ * tables. NOTE! You should check that the long isn't on a page boundary,
+ * and that it is in the task area before calling this: this routine does
+ * no checking.
+ */
+static unsigned long get_long(struct task_struct * tsk,
+ struct vm_area_struct * vma, unsigned long addr)
+{
+ pgd_t *pgdir;
+ pmd_t *pgmiddle;
+ pte_t *pgtable;
+ unsigned long page;
+
+repeat:
+ pgdir = pgd_offset(vma->vm_mm, addr);
+ if (pgd_none(*pgdir)) {
+ handle_mm_fault(tsk, vma, addr, 0);
+ goto repeat;
+ }
+ if (pgd_bad(*pgdir)) {
+ printk("ptrace: bad page directory %08lx\n", pgd_val(*pgdir));
+ pgd_clear(pgdir);
+ return 0;
+ }
+ pgmiddle = pmd_offset(pgdir, addr);
+ if (pmd_none(*pgmiddle)) {
+ handle_mm_fault(tsk, vma, addr, 0);
+ goto repeat;
+ }
+ if (pmd_bad(*pgmiddle)) {
+ printk("ptrace: bad page middle %08lx\n", pmd_val(*pgmiddle));
+ pmd_clear(pgmiddle);
+ return 0;
+ }
+ pgtable = pte_offset(pgmiddle, addr);
+ if (!pte_present(*pgtable)) {
+ handle_mm_fault(tsk, vma, addr, 0);
+ goto repeat;
+ }
+ page = pte_page(*pgtable);
+
+ if(MAP_NR(page) >= max_mapnr)
+ return 0;
+ page += addr & ~PAGE_MASK;
+ return *(unsigned long *)page;
+}
+
+/*
+ * This routine puts a long into any process space by following the page
+ * tables. NOTE! You should check that the long isn't on a page boundary,
+ * and that it is in the task area before calling this: this routine does
+ * no checking.
+ *
+ * Now keeps R/W state of the page so that a text page stays readonly
+ * even if a debugger scribbles breakpoints into it. -M.U-
+ */
+static void put_long(struct task_struct * tsk, struct vm_area_struct * vma, unsigned long addr,
+ unsigned long data)
+{
+ pgd_t *pgdir;
+ pmd_t *pgmiddle;
+ pte_t *pgtable;
+ unsigned long page;
+
+repeat:
+ pgdir = pgd_offset(vma->vm_mm, addr);
+ if (!pgd_present(*pgdir)) {
+ handle_mm_fault(tsk, vma, addr, 1);
+ goto repeat;
+ }
+ if (pgd_bad(*pgdir)) {
+ printk("ptrace: bad page directory %08lx\n", pgd_val(*pgdir));
+ pgd_clear(pgdir);
+ return;
+ }
+ pgmiddle = pmd_offset(pgdir, addr);
+ if (pmd_none(*pgmiddle)) {
+ handle_mm_fault(tsk, vma, addr, 1);
+ goto repeat;
+ }
+ if (pmd_bad(*pgmiddle)) {
+ printk("ptrace: bad page middle %08lx\n", pmd_val(*pgmiddle));
+ pmd_clear(pgmiddle);
+ return;
+ }
+ pgtable = pte_offset(pgmiddle, addr);
+ if (!pte_present(*pgtable)) {
+ handle_mm_fault(tsk, vma, addr, 1);
+ goto repeat;
+ }
+ page = pte_page(*pgtable);
+ if (!pte_write(*pgtable)) {
+ handle_mm_fault(tsk, vma, addr, 1);
+ goto repeat;
+ }
+
+ if (MAP_NR(page) < max_mapnr) {
+ page += addr & ~PAGE_MASK;
+ *(unsigned long *)page = data;
+ __flush_entry_to_ram(page);
+ }
+ set_pte(pgtable, pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
+ flush_tlb();
+}
+
+static struct vm_area_struct * find_extend_vma(struct task_struct * tsk, unsigned long addr)
+{
+ struct vm_area_struct * vma;
+
+ addr &= PAGE_MASK;
+ vma = find_vma(tsk->mm,addr);
+ if (!vma)
+ return NULL;
+ if (vma->vm_start <= addr)
+ return vma;
+ if (!(vma->vm_flags & VM_GROWSDOWN))
+ return NULL;
+ if (vma->vm_end - addr > tsk->rlim[RLIMIT_STACK].rlim_cur)
+ return NULL;
+ vma->vm_offset -= vma->vm_start - addr;
+ vma->vm_start = addr;
+ return vma;
+}
+
+/*
+ * This routine checks the page boundaries, and that the offset is
+ * within the task area. It then calls get_long() to read a long.
+ */
+static int read_long(struct task_struct * tsk, unsigned long addr,
+ unsigned long * result)
+{
+ struct vm_area_struct * vma = find_extend_vma(tsk, addr);
+
+ if (!vma)
+ return -EIO;
+ if ((addr & ~PAGE_MASK) > PAGE_SIZE-sizeof(long)) {
+ unsigned long low,high;
+ struct vm_area_struct * vma_high = vma;
+
+ if (addr + sizeof(long) >= vma->vm_end) {
+ vma_high = vma->vm_next;
+ if (!vma_high || vma_high->vm_start != vma->vm_end)
+ return -EIO;
+ }
+ low = get_long(tsk, vma, addr & ~(sizeof(long)-1));
+ high = get_long(tsk, vma_high, (addr+sizeof(long)) & ~(sizeof(long)-1));
+ switch (addr & (sizeof(long)-1)) {
+ case 1:
+ low >>= 8;
+ low |= high << 24;
+ break;
+ case 2:
+ low >>= 16;
+ low |= high << 16;
+ break;
+ case 3:
+ low >>= 24;
+ low |= high << 8;
+ break;
+ }
+ *result = low;
+ } else
+ *result = get_long(tsk, vma, addr);
+ return 0;
+}
+
+/*
+ * This routine checks the page boundaries, and that the offset is
+ * within the task area. It then calls put_long() to write a long.
+ */
+static int write_long(struct task_struct * tsk, unsigned long addr,
+ unsigned long data)
+{
+ struct vm_area_struct * vma = find_extend_vma(tsk, addr);
+
+ if (!vma)
+ return -EIO;
+ if ((addr & ~PAGE_MASK) > PAGE_SIZE-sizeof(long)) {
+ unsigned long low,high;
+ struct vm_area_struct * vma_high = vma;
+
+ if (addr + sizeof(long) >= vma->vm_end) {
+ vma_high = vma->vm_next;
+ if (!vma_high || vma_high->vm_start != vma->vm_end)
+ return -EIO;
+ }
+ low = get_long(tsk, vma, addr & ~(sizeof(long)-1));
+ high = get_long(tsk, vma_high, (addr+sizeof(long)) & ~(sizeof(long)-1));
+ switch (addr & (sizeof(long)-1)) {
+ case 0: /* shouldn't happen, but safety first */
+ low = data;
+ break;
+ case 1:
+ low &= 0x000000ff;
+ low |= data << 8;
+ high &= ~0xff;
+ high |= data >> 24;
+ break;
+ case 2:
+ low &= 0x0000ffff;
+ low |= data << 16;
+ high &= ~0xffff;
+ high |= data >> 16;
+ break;
+ case 3:
+ low &= 0x00ffffff;
+ low |= data << 24;
+ high &= ~0xffffff;
+ high |= data >> 8;
+ break;
+ }
+ put_long(tsk, vma, addr & ~(sizeof(long)-1),low);
+ put_long(tsk, vma_high, (addr+sizeof(long)) & ~(sizeof(long)-1),high);
+ } else
+ put_long(tsk, vma, addr, data);
+ return 0;
+}
+
+/*
+ * Get value of register `rn' (in the instruction)
+ */
+static unsigned long ptrace_getrn (struct task_struct *child, unsigned long insn)
+{
+ unsigned int reg = (insn >> 16) & 15;
+ unsigned long val;
+
+ if (reg == 15)
+ val = pc_pointer (get_stack_long (child, reg));
+ else
+ val = get_stack_long (child, reg);
+
+printk ("r%02d=%08lX ", reg, val);
+ return val;
+}
+
+/*
+ * Get value of operand 2 (in an ALU instruction)
+ */
+static unsigned long ptrace_getaluop2 (struct task_struct *child, unsigned long insn)
+{
+ unsigned long val;
+ int shift;
+ int type;
+
+printk ("op2=");
+ if (insn & 1 << 25) {
+ val = insn & 255;
+ shift = (insn >> 8) & 15;
+ type = 3;
+printk ("(imm)");
+ } else {
+ val = get_stack_long (child, insn & 15);
+
+ if (insn & (1 << 4))
+ shift = (int)get_stack_long (child, (insn >> 8) & 15);
+ else
+ shift = (insn >> 7) & 31;
+
+ type = (insn >> 5) & 3;
+printk ("(r%02ld)", insn & 15);
+ }
+printk ("sh%dx%d", type, shift);
+ switch (type) {
+ case 0: val <<= shift; break;
+ case 1: val >>= shift; break;
+ case 2:
+ val = (((signed long)val) >> shift);
+ break;
+ case 3:
+ __asm__ __volatile__("mov %0, %0, ror %1" : "=r" (val) : "0" (val), "r" (shift));
+ break;
+ }
+printk ("=%08lX ", val);
+ return val;
+}
+
+/*
+ * Get value of operand 2 (in a LDR instruction)
+ */
+static unsigned long ptrace_getldrop2 (struct task_struct *child, unsigned long insn)
+{
+ unsigned long val;
+ int shift;
+ int type;
+
+ val = get_stack_long (child, insn & 15);
+ shift = (insn >> 7) & 31;
+ type = (insn >> 5) & 3;
+
+printk ("op2=r%02ldsh%dx%d", insn & 15, shift, type);
+ switch (type) {
+ case 0: val <<= shift; break;
+ case 1: val >>= shift; break;
+ case 2:
+ val = (((signed long)val) >> shift);
+ break;
+ case 3:
+ __asm__ __volatile__("mov %0, %0, ror %1" : "=r" (val) : "0" (val), "r" (shift));
+ break;
+ }
+printk ("=%08lX ", val);
+ return val;
+}
+#undef pc_pointer
+#define pc_pointer(x) ((x) & 0x03fffffc)
+int ptrace_set_bpt (struct task_struct *child)
+{
+ unsigned long insn, pc, alt;
+ int i, nsaved = 0, res;
+
+ pc = pc_pointer (get_stack_long (child, 15/*REG_PC*/));
+
+ res = read_long (child, pc, &insn);
+ if (res < 0)
+ return res;
+
+ child->debugreg[nsaved++] = alt = pc + 4;
+printk ("ptrace_set_bpt: insn=%08lX pc=%08lX ", insn, pc);
+ switch (insn & 0x0e100000) {
+ case 0x00000000:
+ case 0x00100000:
+ case 0x02000000:
+ case 0x02100000: /* data processing */
+ printk ("data ");
+ switch (insn & 0x01e0f000) {
+ case 0x0000f000:
+ alt = ptrace_getrn(child, insn) & ptrace_getaluop2(child, insn);
+ break;
+ case 0x0020f000:
+ alt = ptrace_getrn(child, insn) ^ ptrace_getaluop2(child, insn);
+ break;
+ case 0x0040f000:
+ alt = ptrace_getrn(child, insn) - ptrace_getaluop2(child, insn);
+ break;
+ case 0x0060f000:
+ alt = ptrace_getaluop2(child, insn) - ptrace_getrn(child, insn);
+ break;
+ case 0x0080f000:
+ alt = ptrace_getrn(child, insn) + ptrace_getaluop2(child, insn);
+ break;
+ case 0x00a0f000:
+ alt = ptrace_getrn(child, insn) + ptrace_getaluop2(child, insn) +
+ (get_stack_long (child, 16/*REG_PSR*/) & CC_C_BIT ? 1 : 0);
+ break;
+ case 0x00c0f000:
+ alt = ptrace_getrn(child, insn) - ptrace_getaluop2(child, insn) +
+ (get_stack_long (child, 16/*REG_PSR*/) & CC_C_BIT ? 1 : 0);
+ break;
+ case 0x00e0f000:
+ alt = ptrace_getaluop2(child, insn) - ptrace_getrn(child, insn) +
+ (get_stack_long (child, 16/*REG_PSR*/) & CC_C_BIT ? 1 : 0);
+ break;
+ case 0x0180f000:
+ alt = ptrace_getrn(child, insn) | ptrace_getaluop2(child, insn);
+ break;
+ case 0x01a0f000:
+ alt = ptrace_getaluop2(child, insn);
+ break;
+ case 0x01c0f000:
+ alt = ptrace_getrn(child, insn) & ~ptrace_getaluop2(child, insn);
+ break;
+ case 0x01e0f000:
+ alt = ~ptrace_getaluop2(child, insn);
+ break;
+ }
+ break;
+
+ case 0x04100000: /* ldr */
+ if ((insn & 0xf000) == 0xf000) {
+printk ("ldr ");
+ alt = ptrace_getrn(child, insn);
+ if (insn & 1 << 24) {
+ if (insn & 1 << 23)
+ alt += ptrace_getldrop2 (child, insn);
+ else
+ alt -= ptrace_getldrop2 (child, insn);
+ }
+ if (read_long (child, alt, &alt) < 0)
+ alt = pc + 4; /* not valid */
+ else
+ alt = pc_pointer (alt);
+ }
+ break;
+
+ case 0x06100000: /* ldr imm */
+ if ((insn & 0xf000) == 0xf000) {
+printk ("ldrimm ");
+ alt = ptrace_getrn(child, insn);
+ if (insn & 1 << 24) {
+ if (insn & 1 << 23)
+ alt += insn & 0xfff;
+ else
+ alt -= insn & 0xfff;
+ }
+ if (read_long (child, alt, &alt) < 0)
+ alt = pc + 4; /* not valid */
+ else
+ alt = pc_pointer (alt);
+ }
+ break;
+
+ case 0x08100000: /* ldm */
+ if (insn & (1 << 15)) {
+ unsigned long base;
+ int nr_regs;
+printk ("ldm ");
+
+ if (insn & (1 << 23)) {
+ nr_regs = insn & 65535;
+
+ nr_regs = (nr_regs & 0x5555) + ((nr_regs & 0xaaaa) >> 1);
+ nr_regs = (nr_regs & 0x3333) + ((nr_regs & 0xcccc) >> 2);
+ nr_regs = (nr_regs & 0x0707) + ((nr_regs & 0x7070) >> 4);
+ nr_regs = (nr_regs & 0x000f) + ((nr_regs & 0x0f00) >> 8);
+ nr_regs <<= 2;
+
+ if (!(insn & (1 << 24)))
+ nr_regs -= 4;
+ } else {
+ if (insn & (1 << 24))
+ nr_regs = -4;
+ else
+ nr_regs = 0;
+ }
+
+ base = ptrace_getrn (child, insn);
+
+ if (read_long (child, base + nr_regs, &alt) < 0)
+ alt = pc + 4; /* not valid */
+ else
+ alt = pc_pointer (alt);
+ break;
+ }
+ break;
+
+ case 0x0a000000:
+ case 0x0a100000: { /* bl or b */
+ signed long displ;
+printk ("b/bl ");
+ /* It's a branch/branch link: instead of trying to
+ * figure out whether the branch will be taken or not,
+ * we'll put a breakpoint at either location. This is
+ * simpler, more reliable, and probably not a whole lot
+ * slower than the alternative approach of emulating the
+ * branch.
+ */
+ displ = (insn & 0x00ffffff) << 8;
+ displ = (displ >> 6) + 8;
+ if (displ != 0 && displ != 4)
+ alt = pc + displ;
+ }
+ break;
+ }
+printk ("=%08lX\n", alt);
+ if (alt != pc + 4)
+ child->debugreg[nsaved++] = alt;
+
+ for (i = 0; i < nsaved; i++) {
+ res = read_long (child, child->debugreg[i], &insn);
+ if (res >= 0) {
+ child->debugreg[i + 2] = insn;
+ res = write_long (child, child->debugreg[i], BREAKINST);
+ }
+ if (res < 0) {
+ child->debugreg[4] = 0;
+ return res;
+ }
+ }
+ child->debugreg[4] = nsaved;
+ return 0;
+}
+
+/* Ensure no single-step breakpoint is pending. Returns non-zero
+ * value if child was being single-stepped.
+ */
+int ptrace_cancel_bpt (struct task_struct *child)
+{
+ int i, nsaved = child->debugreg[4];
+
+ child->debugreg[4] = 0;
+
+ if (nsaved > 2) {
+ printk ("ptrace_cancel_bpt: bogus nsaved: %d!\n", nsaved);
+ nsaved = 2;
+ }
+ for (i = 0; i < nsaved; i++)
+ write_long (child, child->debugreg[i], child->debugreg[i + 2]);
+ return nsaved != 0;
+}
+
+asmlinkage int sys_ptrace(long request, long pid, long addr, long data)
+{
+ struct task_struct *child;
+ int ret;
+
+ lock_kernel();
+ ret = -EPERM;
+ if (request == PTRACE_TRACEME) {
+ /* are we already being traced? */
+ if (current->flags & PF_PTRACED)
+ goto out;
+ /* set the ptrace bit in the process flags. */
+ current->flags |= PF_PTRACED;
+ ret = 0;
+ goto out;
+ }
+ if (pid == 1) /* you may not mess with init */
+ goto out;
+ ret = -ESRCH;
+ if (!(child = get_task(pid)))
+ goto out;
+ ret = -EPERM;
+ if (request == PTRACE_ATTACH) {
+ if (child == current)
+ goto out;
+ if ((!child->dumpable ||
+ (current->uid != child->euid) ||
+ (current->uid != child->suid) ||
+ (current->uid != child->uid) ||
+ (current->gid != child->egid) ||
+ (current->gid != child->sgid) ||
+ (current->gid != child->gid)) && !suser())
+ goto out;
+ /* the same process cannot be attached many times */
+ if (child->flags & PF_PTRACED)
+ goto out;
+ child->flags |= PF_PTRACED;
+ if (child->p_pptr != current) {
+ REMOVE_LINKS(child);
+ child->p_pptr = current;
+ SET_LINKS(child);
+ }
+ send_sig(SIGSTOP, child, 1);
+ ret = 0;
+ goto out;
+ }
+ ret = -ESRCH;
+ if (!(child->flags & PF_PTRACED))
+ goto out;
+ if (child->state != TASK_STOPPED) {
+ if (request != PTRACE_KILL)
+ goto out;
+ }
+ if (child->p_pptr != current)
+ goto out;
+
+ switch (request) {
+ case PTRACE_PEEKTEXT: /* read word at location addr. */
+ case PTRACE_PEEKDATA: {
+ unsigned long tmp;
+
+ ret = read_long(child, addr, &tmp);
+ if (ret >= 0)
+ ret = put_user(tmp, (unsigned long *)data);
+ goto out;
+ }
+
+ case PTRACE_PEEKUSR: { /* read the word at location addr in the USER area. */
+ unsigned long tmp;
+
+ ret = -EIO;
+ if ((addr & 3) || addr < 0 || addr >= sizeof(struct user))
+ goto out;
+
+ tmp = 0; /* Default return condition */
+ if (addr < sizeof (struct pt_regs))
+ tmp = get_stack_long(child, (int)addr >> 2);
+ ret = put_user(tmp, (unsigned long *)data);
+ goto out;
+ }
+
+ case PTRACE_POKETEXT: /* write the word at location addr. */
+ case PTRACE_POKEDATA:
+ ret = write_long(child,addr,data);
+ goto out;
+
+ case PTRACE_POKEUSR: /* write the word at location addr in the USER area */
+ ret = -EIO;
+ if ((addr & 3) || addr < 0 || addr >= sizeof(struct user))
+ goto out;
+
+ if (addr < sizeof (struct pt_regs))
+ ret = put_stack_long(child, (int)addr >> 2, data);
+ goto out;
+
+ case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
+ case PTRACE_CONT: /* restart after signal. */
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ goto out;
+ if (request == PTRACE_SYSCALL)
+ child->flags |= PF_TRACESYS;
+ else
+ child->flags &= ~PF_TRACESYS;
+ child->exit_code = data;
+ wake_up_process (child);
+ /* make sure single-step breakpoint is gone. */
+ ptrace_cancel_bpt (child);
+ ret = 0;
+ goto out;
+
+ /* make the child exit. Best I can do is send it a sigkill.
+ * perhaps it should be put in the status that it wants to
+ * exit.
+ */
+ case PTRACE_KILL:
+ ret = 0;
+ if (child->state == TASK_ZOMBIE) /* already dead */
+ goto out;
+ wake_up_process (child);
+ child->exit_code = SIGKILL;
+ /* make sure single-step breakpoint is gone. */
+ ptrace_cancel_bpt (child);
+ goto out;
+
+ case PTRACE_SINGLESTEP: /* execute single instruction. */
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ goto out;
+ child->debugreg[4] = -1;
+ child->flags &= ~PF_TRACESYS;
+ wake_up_process(child);
+ child->exit_code = data;
+ /* give it a chance to run. */
+ ret = 0;
+ goto out;
+
+ case PTRACE_DETACH: /* detach a process that was attached. */
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ goto out;
+ child->flags &= ~(PF_PTRACED|PF_TRACESYS);
+ wake_up_process (child);
+ child->exit_code = data;
+ REMOVE_LINKS(child);
+ child->p_pptr = child->p_opptr;
+ SET_LINKS(child);
+ /* make sure single-step breakpoint is gone. */
+ ptrace_cancel_bpt (child);
+ ret = 0;
+ goto out;
+
+ default:
+ ret = -EIO;
+ goto out;
+ }
+out:
+ unlock_kernel();
+ return ret;
+}
+
+asmlinkage void syscall_trace(void)
+{
+ if ((current->flags & (PF_PTRACED|PF_TRACESYS))
+ != (PF_PTRACED|PF_TRACESYS))
+ return;
+ current->exit_code = SIGTRAP;
+ current->state = TASK_STOPPED;
+ notify_parent(current, SIGCHLD);
+ schedule();
+ /*
+ * this isn't the same as continuing with a signal, but it will do
+ * for normal use. strace only continues with a signal if the
+ * stopping signal is not SIGTRAP. -brl
+ */
+ if (current->exit_code) {
+ send_sig(current->exit_code, current, 1);
+ current->exit_code = 0;
+ }
+}
--- /dev/null
+/*
+ * linux/arch/arm/kernel/setup-sa.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+/*
+ * This file obtains various parameters about the system that the kernel
+ * is running on.
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/ptrace.h>
+#include <linux/malloc.h>
+#include <linux/ldt.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/delay.h>
+#include <linux/major.h>
+#include <linux/utsname.h>
+
+#include <asm/segment.h>
+#include <asm/system.h>
+#include <asm/hardware.h>
+#include <asm/pgtable.h>
+
+#ifndef CONFIG_CMDLINE
+#define CONFIG_CMDLINE "root=nfs rw console=ttyS1,38400n8"
+#endif
+#define MEM_SIZE (16*1024*1024)
+
+#define COMMAND_LINE_SIZE 256
+
+unsigned char aux_device_present;
+unsigned long arm_id;
+extern int root_mountflags;
+extern int _etext, _edata, _end;
+
+#ifdef CONFIG_BLK_DEV_RAM
+extern int rd_doload; /* 1 = load ramdisk, 0 = don't load */
+extern int rd_prompt; /* 1 = prompt for ramdisk, 0 = don't prompt */
+extern int rd_image_start; /* starting block # of image */
+
+static inline void setup_ramdisk (void)
+{
+ rd_image_start = 0;
+ rd_prompt = 1;
+ rd_doload = 1;
+}
+#else
+#define setup_ramdisk()
+#endif
+
+static char default_command_line[] = CONFIG_CMDLINE;
+static char command_line[COMMAND_LINE_SIZE] = { 0, };
+ char saved_command_line[COMMAND_LINE_SIZE];
+
+struct processor processor;
+extern const struct processor sa110_processor_functions;
+
+void setup_arch(char **cmdline_p,
+ unsigned long * memory_start_p, unsigned long * memory_end_p)
+{
+ unsigned long memory_start, memory_end;
+ char c = ' ', *to = command_line, *from;
+ int len = 0;
+
+ memory_start = (unsigned long)&_end;
+ memory_end = 0xc0000000 + MEM_SIZE;
+ from = default_command_line;
+
+ processor = sa110_processor_functions;
+ processor._proc_init ();
+
+ ROOT_DEV = 0x00ff;
+ setup_ramdisk();
+
+ init_task.mm->start_code = TASK_SIZE;
+ init_task.mm->end_code = TASK_SIZE + (unsigned long) &_etext;
+ init_task.mm->end_data = TASK_SIZE + (unsigned long) &_edata;
+ init_task.mm->brk = TASK_SIZE + (unsigned long) &_end;
+
+ /* Save unparsed command line copy for /proc/cmdline */
+ memcpy(saved_command_line, from, COMMAND_LINE_SIZE);
+ saved_command_line[COMMAND_LINE_SIZE-1] = '\0';
+
+ for (;;) {
+ if (c == ' ' &&
+ from[0] == 'm' &&
+ from[1] == 'e' &&
+ from[2] == 'm' &&
+ from[3] == '=') {
+ memory_end = simple_strtoul(from+4, &from, 0);
+ if ( *from == 'K' || *from == 'k' ) {
+ memory_end = memory_end << 10;
+ from++;
+ } else if ( *from == 'M' || *from == 'm' ) {
+ memory_end = memory_end << 20;
+ from++;
+ }
+ memory_end = memory_end + PAGE_OFFSET;
+ }
+ c = *from++;
+ if (!c)
+ break;
+ if (COMMAND_LINE_SIZE <= ++len)
+ break;
+ *to++ = c;
+ }
+
+ *to = '\0';
+ *cmdline_p = command_line;
+ *memory_start_p = memory_start;
+ *memory_end_p = memory_end;
+ strcpy (system_utsname.machine, "sa110");
+}
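The `mem=` option parsed in setup_arch() above takes a number with an optional K/M suffix. As a rough user-space sketch of the same suffix handling (the helper name is invented for illustration; the kernel version additionally adds PAGE_OFFSET to form the end-of-memory address):

```c
#include <stdlib.h>

/* Illustrative sketch of the kernel's mem= suffix handling: parse a
 * size such as "16M" or "4096k" into bytes. */
static unsigned long parse_mem_size(const char *arg)
{
	char *end;
	unsigned long size = strtoul(arg, &end, 0);

	if (*end == 'K' || *end == 'k')
		size <<= 10;		/* kilobytes to bytes */
	else if (*end == 'M' || *end == 'm')
		size <<= 20;		/* megabytes to bytes */

	return size;
}
```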
+
+int get_cpuinfo(char * buffer)
+{
+ int len;
+
+ len = sprintf (buffer, "CPU:\n"
+ "Type\t\t: %s\n"
+ "Revision\t: %d\n"
+ "Manufacturer\t: %s\n"
+ "32bit modes\t: %s\n"
+ "BogoMips\t: %lu.%02lu\n",
+ "sa110",
+ (int)arm_id & 15,
+ "DEC",
+ "yes",
+ (loops_per_sec+2500) / 500000,
+ ((loops_per_sec+2500) / 5000) % 100);
+ return len;
+}
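get_cpuinfo() above derives the BogoMips figure from loops_per_sec, one BogoMip being 500000 loops per second, rounded to two fractional digits. Pulled out into a standalone sketch (the helper name is invented for illustration):

```c
/* Sketch of the BogoMips formatting arithmetic used by get_cpuinfo():
 * split loops_per_sec into an integer part and two rounded fractional
 * digits, where one BogoMip corresponds to 500000 loops per second. */
static void bogomips_parts(unsigned long loops_per_sec,
			   unsigned long *whole, unsigned long *frac)
{
	*whole = (loops_per_sec + 2500) / 500000;
	*frac  = ((loops_per_sec + 2500) / 5000) % 100;
}
```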
--- /dev/null
+/*
+ * linux/arch/arm/kernel/setup.c
+ *
+ * Copyright (C) 1995, 1996, 1997 Russell King
+ */
+
+/*
+ * This file obtains various parameters about the system that the kernel
+ * is running on.
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/ptrace.h>
+#include <linux/malloc.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/delay.h>
+#include <linux/major.h>
+#include <linux/utsname.h>
+#include <linux/blk.h>
+
+#include <asm/segment.h>
+#include <asm/system.h>
+#include <asm/hardware.h>
+#include <asm/pgtable.h>
+#include <asm/arch/mmu.h>
+#include <asm/procinfo.h>
+#include <asm/io.h>
+#include <asm/setup.h>
+
+struct drive_info_struct { char dummy[32]; } drive_info;
+struct screen_info screen_info;
+struct processor processor;
+unsigned char aux_device_present;
+
+extern const struct processor arm2_processor_functions;
+extern const struct processor arm250_processor_functions;
+extern const struct processor arm3_processor_functions;
+extern const struct processor arm6_processor_functions;
+extern const struct processor arm7_processor_functions;
+extern const struct processor sa110_processor_functions;
+
+struct armversions armidlist[] = {
+#if defined(CONFIG_CPU_ARM2) || defined(CONFIG_CPU_ARM3)
+ { 0x41560200, 0xfffffff0, F_MEMC , "ARM/VLSI", "arm2" , &arm2_processor_functions },
+ { 0x41560250, 0xfffffff0, F_MEMC , "ARM/VLSI", "arm250" , &arm250_processor_functions },
+ { 0x41560300, 0xfffffff0, F_MEMC|F_CACHE, "ARM/VLSI", "arm3" , &arm3_processor_functions },
+#endif
+#if defined(CONFIG_CPU_ARM6) || defined(CONFIG_CPU_SA110)
+ { 0x41560600, 0xfffffff0, F_MMU|F_32BIT , "ARM/VLSI", "arm6" , &arm6_processor_functions },
+ { 0x41560610, 0xfffffff0, F_MMU|F_32BIT , "ARM/VLSI", "arm610" , &arm6_processor_functions },
+ { 0x41007000, 0xffffff00, F_MMU|F_32BIT , "ARM/VLSI", "arm7" , &arm7_processor_functions },
+ { 0x41007100, 0xffffff00, F_MMU|F_32BIT , "ARM/VLSI", "arm710" , &arm7_processor_functions },
+ { 0x4401a100, 0xfffffff0, F_MMU|F_32BIT , "DEC", "sa110" , &sa110_processor_functions },
+#endif
+ { 0x00000000, 0x00000000, 0 , "***", "*unknown*" , NULL }
+};
+
+static struct param_struct *params = (struct param_struct *)PARAMS_BASE;
+
+unsigned long arm_id;
+unsigned int vram_half_sam;
+int armidindex;
+int ioebpresent;
+int memc_ctrl_reg;
+int number_ide_drives;
+int number_mfm_drives;
+
+extern int bytes_per_char_h;
+extern int bytes_per_char_v;
+extern int root_mountflags;
+extern int _etext, _edata, _end;
+extern unsigned long real_end_mem;
+
+/*-------------------------------------------------------------------------
+ * Early initialisation routines for various configurable items in the
+ * kernel. Each one either supplies a setup_ function, or defines this
+ * symbol to be empty if not configured.
+ */
+
+/*
+ * Risc-PC specific initialisation
+ */
+#ifdef CONFIG_ARCH_RPC
+
+extern void init_dram_banks(struct param_struct *params);
+
+static void setup_rpc (struct param_struct *params)
+{
+ init_dram_banks(params);
+
+ switch (params->u1.s.pages_in_vram) {
+ case 256:
+ vram_half_sam = 1024;
+ break;
+ case 512:
+ default:
+ vram_half_sam = 2048;
+ }
+
+ /*
+ * Set ROM speed to maximum
+ */
+ outb (0x1d, IOMD_ROMCR0);
+}
+#else
+#define setup_rpc(x)
+#endif
+
+/*
+ * ram disk
+ */
+#ifdef CONFIG_BLK_DEV_RAM
+extern int rd_doload; /* 1 = load ramdisk, 0 = don't load */
+extern int rd_prompt; /* 1 = prompt for ramdisk, 0 = don't prompt */
+extern int rd_image_start; /* starting block # of image */
+
+static void setup_ramdisk (struct param_struct *params)
+{
+ rd_image_start = params->u1.s.rd_start;
+ rd_prompt = (params->u1.s.flags & FLAG_RDPROMPT) == 0;
+ rd_doload = (params->u1.s.flags & FLAG_RDLOAD) == 0;
+}
+#else
+#define setup_ramdisk(p)
+#endif
+
+/*
+ * initial ram disk
+ */
+#ifdef CONFIG_BLK_DEV_INITRD
+static void setup_initrd (struct param_struct *params, unsigned long memory_end)
+{
+ initrd_start = params->u1.s.initrd_start;
+ initrd_end = params->u1.s.initrd_start + params->u1.s.initrd_size;
+
+ if (initrd_end > memory_end) {
+ printk ("initrd extends beyond end of memory "
+ "(0x%08lx > 0x%08lx) - disabling initrd\n",
+ initrd_end, memory_end);
+ initrd_start = 0;
+ }
+}
+#else
+#define setup_initrd(p,m)
+#endif
+
+static inline void check_ioeb_present(void)
+{
+ if (((*IOEB_BASE) & 15) == 5)
+ armidlist[armidindex].features |= F_IOEB;
+}
+
+static void get_processor_type (void)
+{
+ for (armidindex = 0; ; armidindex ++)
+ if (!((armidlist[armidindex].id ^ arm_id) &
+ armidlist[armidindex].mask))
+ break;
+
+ if (armidlist[armidindex].id == 0) {
+ int i;
+
+ for (i = 0; i < 3200; i++)
+ ((unsigned long *)SCREEN2_BASE)[i] = 0x77113322;
+
+ while (1);
+ }
+ processor = *armidlist[armidindex].proc;
+}
+
+#define COMMAND_LINE_SIZE 256
+
+static char command_line[COMMAND_LINE_SIZE] = { 0, };
+ char saved_command_line[COMMAND_LINE_SIZE];
+
+void setup_arch(char **cmdline_p,
+ unsigned long * memory_start_p, unsigned long * memory_end_p)
+{
+ static unsigned char smptrap;
+ unsigned long memory_start, memory_end;
+ char c = ' ', *to = command_line, *from;
+ int len = 0;
+
+ if (smptrap == 1)
+ return;
+ smptrap = 1;
+
+ get_processor_type ();
+ check_ioeb_present ();
+ processor._proc_init ();
+
+ bytes_per_char_h = params->u1.s.bytes_per_char_h;
+ bytes_per_char_v = params->u1.s.bytes_per_char_v;
+ from = params->commandline;
+ ROOT_DEV = to_kdev_t (params->u1.s.rootdev);
+ ORIG_X = params->u1.s.video_x;
+ ORIG_Y = params->u1.s.video_y;
+ ORIG_VIDEO_COLS = params->u1.s.video_num_cols;
+ ORIG_VIDEO_LINES = params->u1.s.video_num_rows;
+ memc_ctrl_reg = params->u1.s.memc_control_reg;
+ number_ide_drives = (params->u1.s.adfsdrives >> 6) & 3;
+ number_mfm_drives = (params->u1.s.adfsdrives >> 3) & 3;
+
+ setup_rpc (params);
+ setup_ramdisk (params);
+
+ if (!(params->u1.s.flags & FLAG_READONLY))
+ root_mountflags &= ~MS_RDONLY;
+
+ memory_start = MAPTOPHYS((unsigned long)&_end);
+ memory_end = GET_MEMORY_END(params);
+
+ init_task.mm->start_code = TASK_SIZE;
+ init_task.mm->end_code = TASK_SIZE + (unsigned long) &_etext;
+ init_task.mm->end_data = TASK_SIZE + (unsigned long) &_edata;
+ init_task.mm->brk = TASK_SIZE + (unsigned long) &_end;
+
+ /* Save unparsed command line copy for /proc/cmdline */
+ memcpy(saved_command_line, from, COMMAND_LINE_SIZE);
+ saved_command_line[COMMAND_LINE_SIZE-1] = '\0';
+
+ for (;;) {
+ if (c == ' ' &&
+ from[0] == 'm' &&
+ from[1] == 'e' &&
+ from[2] == 'm' &&
+ from[3] == '=') {
+ memory_end = simple_strtoul(from+4, &from, 0);
+ if (*from == 'K' || *from == 'k') {
+ memory_end = memory_end << 10;
+ from++;
+ } else if (*from == 'M' || *from == 'm') {
+ memory_end = memory_end << 20;
+ from++;
+ }
+ memory_end = memory_end + PAGE_OFFSET;
+ }
+ c = *from++;
+ if (!c)
+ break;
+ if (COMMAND_LINE_SIZE <= ++len)
+ break;
+ *to++ = c;
+ }
+
+ *to = '\0';
+ *cmdline_p = command_line;
+ *memory_start_p = memory_start;
+ *memory_end_p = memory_end;
+
+ setup_initrd (params, memory_end);
+
+ strcpy (system_utsname.machine, armidlist[armidindex].name);
+}
+
+#define ISSET(bit) (armidlist[armidindex].features & bit)
+
+int get_cpuinfo(char * buffer)
+{
+ int len;
+
+ len = sprintf (buffer, "CPU:\n"
+ "Type\t\t: %s\n"
+ "Revision\t: %d\n"
+ "Manufacturer\t: %s\n"
+ "32bit modes\t: %s\n"
+ "BogoMips\t: %lu.%02lu\n",
+ armidlist[armidindex].name,
+ (int)arm_id & 15,
+ armidlist[armidindex].manu,
+ ISSET (F_32BIT) ? "yes" : "no",
+ (loops_per_sec+2500) / 500000,
+ ((loops_per_sec+2500) / 5000) % 100);
+ len += sprintf (buffer + len,
+ "\nHardware:\n"
+ "Mem System\t: %s\n"
+ "IOEB\t\t: %s\n",
+ ISSET(F_MEMC) ? "MEMC" :
+ ISSET(F_MMU) ? "MMU" : "*unknown*",
+ ISSET(F_IOEB) ? "present" : "absent"
+ );
+ return len;
+}
--- /dev/null
+/*
+ * linux/arch/arm/kernel/signal.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/errno.h>
+#include <linux/wait.h>
+#include <linux/ptrace.h>
+#include <linux/unistd.h>
+#include <linux/stddef.h>
+
+#include <asm/ucontext.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+
+#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+#define SWI_SYS_SIGRETURN (0xef000000|(__NR_sigreturn))
+#define SWI_SYS_RT_SIGRETURN (0xef000000|(__NR_rt_sigreturn))
+
+asmlinkage int sys_wait4(pid_t pid, unsigned long * stat_addr,
+ int options, unsigned long *ru);
+asmlinkage int do_signal(sigset_t *oldset, struct pt_regs * regs);
+extern int ptrace_cancel_bpt (struct task_struct *);
+extern int ptrace_set_bpt (struct task_struct *);
+
+/*
+ * atomically swap in the new signal mask, and wait for a signal.
+ */
+asmlinkage int sys_sigsuspend(int restart, unsigned long oldmask, old_sigset_t mask, struct pt_regs *regs)
+{
+
+ sigset_t saveset;
+
+ mask &= _BLOCKABLE;
+ spin_lock_irq(&current->sigmask_lock);
+ saveset = current->blocked;
+ siginitset(&current->blocked, mask);
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+ regs->ARM_r0 = -EINTR;
+
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ if (do_signal(&saveset, regs))
+ return regs->ARM_r0;
+ }
+}
+
+asmlinkage int
+sys_rt_sigsuspend(sigset_t *unewset, size_t sigsetsize, struct pt_regs *regs)
+{
+ sigset_t saveset, newset;
+
+ /* XXX: Don't preclude handling different sized sigset_t's. */
+ if (sigsetsize != sizeof(sigset_t))
+ return -EINVAL;
+
+ if (copy_from_user(&newset, unewset, sizeof(newset)))
+ return -EFAULT;
+ sigdelsetmask(&newset, ~_BLOCKABLE);
+
+ spin_lock_irq(&current->sigmask_lock);
+ saveset = current->blocked;
+ current->blocked = newset;
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+ regs->ARM_r0 = -EINTR;
+
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ if (do_signal(&saveset, regs))
+ return regs->ARM_r0;
+ }
+}
+
+asmlinkage int
+sys_sigaction(int sig, const struct old_sigaction *act,
+ struct old_sigaction *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+
+ if (act) {
+ old_sigset_t mask;
+ if (verify_area(VERIFY_READ, act, sizeof(*act)) ||
+ __get_user(new_ka.sa.sa_handler, &act->sa_handler) ||
+ __get_user(new_ka.sa.sa_restorer, &act->sa_restorer))
+ return -EFAULT;
+ __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ __get_user(mask, &act->sa_mask);
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ if (verify_area(VERIFY_WRITE, oact, sizeof(*oact)) ||
+ __put_user(old_ka.sa.sa_handler, &oact->sa_handler) ||
+ __put_user(old_ka.sa.sa_restorer, &oact->sa_restorer))
+ return -EFAULT;
+ __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
+/*
+ * Do a signal return; undo the signal stack.
+ */
+struct sigframe
+{
+ struct sigcontext sc;
+ unsigned long extramask[_NSIG_WORDS-1];
+ unsigned long retcode;
+};
+
+struct rt_sigframe
+{
+ struct siginfo *pinfo;
+ void *puc;
+ struct siginfo info;
+ struct ucontext uc;
+ unsigned long retcode;
+};
+
+static int
+restore_sigcontext(struct pt_regs *regs, struct sigcontext *sc)
+{
+ __get_user(regs->ARM_r0, &sc->arm_r0);
+ __get_user(regs->ARM_r1, &sc->arm_r1);
+ __get_user(regs->ARM_r2, &sc->arm_r2);
+ __get_user(regs->ARM_r3, &sc->arm_r3);
+ __get_user(regs->ARM_r4, &sc->arm_r4);
+ __get_user(regs->ARM_r5, &sc->arm_r5);
+ __get_user(regs->ARM_r6, &sc->arm_r6);
+ __get_user(regs->ARM_r7, &sc->arm_r7);
+ __get_user(regs->ARM_r8, &sc->arm_r8);
+ __get_user(regs->ARM_r9, &sc->arm_r9);
+ __get_user(regs->ARM_r10, &sc->arm_r10);
+ __get_user(regs->ARM_fp, &sc->arm_fp);
+ __get_user(regs->ARM_ip, &sc->arm_ip);
+ __get_user(regs->ARM_sp, &sc->arm_sp);
+ __get_user(regs->ARM_lr, &sc->arm_lr);
+ __get_user(regs->ARM_pc, &sc->arm_pc); /* security! */
+#if defined(CONFIG_CPU_ARM6) || defined(CONFIG_CPU_SA110)
+ __get_user(regs->ARM_cpsr, &sc->arm_cpsr); /* security! */
+#endif
+
+ /* send SIGTRAP if we're single-stepping */
+ if (ptrace_cancel_bpt (current))
+ send_sig (SIGTRAP, current, 1);
+
+ return regs->ARM_r0;
+}
+
+asmlinkage int sys_sigreturn(struct pt_regs *regs)
+{
+ struct sigframe *frame;
+ sigset_t set;
+
+ frame = (struct sigframe *)regs->ARM_sp;
+
+ if (verify_area(VERIFY_READ, frame, sizeof (*frame)))
+ goto badframe;
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+ || (_NSIG_WORDS > 1
+ && __copy_from_user(&set.sig[1], &frame->extramask,
+ sizeof(frame->extramask))))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sigmask_lock);
+ current->blocked = set;
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ return restore_sigcontext(regs, &frame->sc);
+
+badframe:
+ lock_kernel();
+ do_exit(SIGSEGV);
+}
+
+asmlinkage int sys_rt_sigreturn(struct pt_regs *regs)
+{
+ struct rt_sigframe *frame;
+ sigset_t set;
+
+ frame = (struct rt_sigframe *)regs->ARM_sp;
+
+ if (verify_area(VERIFY_READ, frame, sizeof (*frame)))
+ goto badframe;
+ if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sigmask_lock);
+ current->blocked = set;
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ return restore_sigcontext(regs, &frame->uc.uc_mcontext);
+
+badframe:
+ lock_kernel();
+ do_exit(SIGSEGV);
+}
+
+static void
+setup_sigcontext(struct sigcontext *sc, /*struct _fpstate *fpstate,*/
+ struct pt_regs *regs, unsigned long mask)
+{
+ __put_user (regs->ARM_r0, &sc->arm_r0);
+ __put_user (regs->ARM_r1, &sc->arm_r1);
+ __put_user (regs->ARM_r2, &sc->arm_r2);
+ __put_user (regs->ARM_r3, &sc->arm_r3);
+ __put_user (regs->ARM_r4, &sc->arm_r4);
+ __put_user (regs->ARM_r5, &sc->arm_r5);
+ __put_user (regs->ARM_r6, &sc->arm_r6);
+ __put_user (regs->ARM_r7, &sc->arm_r7);
+ __put_user (regs->ARM_r8, &sc->arm_r8);
+ __put_user (regs->ARM_r9, &sc->arm_r9);
+ __put_user (regs->ARM_r10, &sc->arm_r10);
+ __put_user (regs->ARM_fp, &sc->arm_fp);
+ __put_user (regs->ARM_ip, &sc->arm_ip);
+ __put_user (regs->ARM_sp, &sc->arm_sp);
+ __put_user (regs->ARM_lr, &sc->arm_lr);
+ __put_user (regs->ARM_pc, &sc->arm_pc); /* security! */
+#if defined(CONFIG_CPU_ARM6) || defined(CONFIG_CPU_SA110)
+ __put_user (regs->ARM_cpsr, &sc->arm_cpsr); /* security! */
+#endif
+
+ __put_user (current->tss.trap_no, &sc->trap_no);
+ __put_user (current->tss.error_code, &sc->error_code);
+ __put_user (mask, &sc->oldmask);
+}
+
+static void setup_frame(int sig, struct k_sigaction *ka,
+ sigset_t *set, struct pt_regs *regs)
+{
+ struct sigframe *frame;
+ unsigned long retcode;
+
+ frame = (struct sigframe *)regs->ARM_sp - 1;
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof (*frame)))
+ goto segv_and_exit;
+
+ setup_sigcontext(&frame->sc, /*&frame->fpstate,*/ regs, set->sig[0]);
+
+ if (_NSIG_WORDS > 1) {
+ __copy_to_user(frame->extramask, &set->sig[1],
+ sizeof(frame->extramask));
+ }
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ retcode = (unsigned long)ka->sa.sa_restorer; /* security! */
+ } else {
+ retcode = (unsigned long)&frame->retcode;
+ __put_user(SWI_SYS_SIGRETURN, &frame->retcode);
+ __flush_entry_to_ram (&frame->retcode);
+ }
+
+ if (current->exec_domain && current->exec_domain->signal_invmap && sig < 32)
+ regs->ARM_r0 = current->exec_domain->signal_invmap[sig];
+ else
+ regs->ARM_r0 = sig;
+ regs->ARM_sp = (unsigned long)frame;
+ regs->ARM_lr = retcode;
+ regs->ARM_pc = (unsigned long)ka->sa.sa_handler; /* security! */
+ return;
+
+segv_and_exit:
+ lock_kernel();
+ do_exit (SIGSEGV);
+}
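When no SA_RESTORER stub is supplied, setup_frame() above plants a one-instruction return trampoline on the user stack. The encoding it relies on (SWI_SYS_SIGRETURN at the top of this file) is simply the ARM SWI opcode with the syscall number in the low 24 bits; a sketch, using an illustrative syscall number:

```c
/* Sketch of the SWI_SYS_SIGRETURN encoding: an ARM SWI instruction is
 * 0xef000000 with the syscall number in its low 24 bits.  The number
 * used in the test (119) is illustrative only; the real __NR_sigreturn
 * value comes from the kernel's unistd.h. */
#define SWI_OPCODE	0xef000000UL

static unsigned long encode_swi(unsigned long nr)
{
	return SWI_OPCODE | (nr & 0x00ffffffUL);
}
```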
+
+static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs *regs)
+{
+ struct rt_sigframe *frame;
+ unsigned long retcode;
+
+ frame = (struct rt_sigframe *)regs->ARM_sp - 1;
+ if (!access_ok(VERIFY_WRITE, frame, sizeof (*frame)))
+ goto segv_and_exit;
+
+ __put_user(&frame->info, &frame->pinfo);
+ __put_user(&frame->uc, &frame->puc);
+ __copy_to_user(&frame->info, info, sizeof(*info));
+
+ /* Clear all the bits of the ucontext we don't use. */
+ __clear_user(&frame->uc, offsetof(struct ucontext, uc_mcontext));
+
+ setup_sigcontext(&frame->uc.uc_mcontext, /*&frame->fpstate,*/
+ regs, set->sig[0]);
+ __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ retcode = (unsigned long)ka->sa.sa_restorer; /* security! */
+ } else {
+ retcode = (unsigned long)&frame->retcode;
+ __put_user(SWI_SYS_RT_SIGRETURN, &frame->retcode);
+ __flush_entry_to_ram (&frame->retcode);
+ }
+
+ if (current->exec_domain && current->exec_domain->signal_invmap && sig < 32)
+ regs->ARM_r0 = current->exec_domain->signal_invmap[sig];
+ else
+ regs->ARM_r0 = sig;
+ regs->ARM_sp = (unsigned long)frame;
+ regs->ARM_lr = retcode;
+ regs->ARM_pc = (unsigned long)ka->sa.sa_handler; /* security! */
+ return;
+
+segv_and_exit:
+ lock_kernel();
+ do_exit (SIGSEGV);
+}
+
+/*
+ * OK, we're invoking a handler
+ */
+static void
+handle_signal(unsigned long sig, struct k_sigaction *ka,
+ siginfo_t *info, sigset_t *oldset, struct pt_regs * regs)
+{
+ /* Set up the stack frame */
+ if (ka->sa.sa_flags & SA_SIGINFO)
+ setup_rt_frame(sig, ka, info, oldset, regs);
+ else
+ setup_frame(sig, ka, oldset, regs);
+
+ if (ka->sa.sa_flags & SA_ONESHOT)
+ ka->sa.sa_handler = SIG_DFL;
+
+ if (!(ka->sa.sa_flags & SA_NODEFER)) {
+ spin_lock_irq(&current->sigmask_lock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ sigaddset(&current->blocked,sig);
+ recalc_sigpending(current);
+ spin_unlock_irq(&current->sigmask_lock);
+ }
+}
+
+/*
+ * Note that 'init' is a special process: it doesn't get signals it doesn't
+ * want to handle. Thus you cannot kill init, even with a SIGKILL, even
+ * by mistake.
+ *
+ * Note that we go through the signals twice: once to check the signals that
+ * the kernel can handle, and then we build all the user-level signal handling
+ * stack-frames in one go after that.
+ */
+asmlinkage int do_signal(sigset_t *oldset, struct pt_regs *regs)
+{
+ unsigned long instr, *pc = (unsigned long *)(instruction_pointer(regs)-4);
+ struct k_sigaction *ka;
+ siginfo_t info;
+ int single_stepping, swi_instr;
+
+ if (!oldset)
+ oldset = &current->blocked;
+
+ single_stepping = ptrace_cancel_bpt (current);
+ swi_instr = (!get_user (instr, pc) && (instr & 0x0f000000) == 0x0f000000);
+
+ for (;;) {
+ unsigned long signr;
+
+ spin_lock_irq (&current->sigmask_lock);
+ signr = dequeue_signal(&current->blocked, &info);
+ spin_unlock_irq (&current->sigmask_lock);
+
+ if (!signr)
+ break;
+
+ if ((current->flags & PF_PTRACED) && signr != SIGKILL) {
+ /* Let the debugger run. */
+ current->exit_code = signr;
+ current->state = TASK_STOPPED;
+ notify_parent(current, SIGCHLD);
+ schedule();
+ single_stepping |= ptrace_cancel_bpt (current);
+
+ /* We're back. Did the debugger cancel the sig? */
+ if (!(signr = current->exit_code))
+ continue;
+ current->exit_code = 0;
+
+ /* The debugger continued. Ignore SIGSTOP. */
+ if (signr == SIGSTOP)
+ continue;
+
+ /* Update the siginfo structure. Is this good? */
+ if (signr != info.si_signo) {
+ info.si_signo = signr;
+ info.si_errno = 0;
+ info.si_code = SI_USER;
+ info.si_pid = current->p_pptr->pid;
+ info.si_uid = current->p_pptr->uid;
+ }
+
+ /* If the (new) signal is now blocked, requeue it. */
+ if (sigismember(&current->blocked, signr)) {
+ send_sig_info(signr, &info, current);
+ continue;
+ }
+ }
+
+ ka = &current->sig->action[signr-1];
+ if (ka->sa.sa_handler == SIG_IGN) {
+ if (signr != SIGCHLD)
+ continue;
+ /* Check for SIGCHLD: it's special. */
+ while (sys_wait4(-1, NULL, WNOHANG, NULL) > 0)
+ /* nothing */;
+ continue;
+ }
+
+ if (ka->sa.sa_handler == SIG_DFL) {
+ int exit_code = signr;
+
+ /* Init gets no signals it doesn't want. */
+ if (current->pid == 1)
+ continue;
+
+ switch (signr) {
+ case SIGCONT: case SIGCHLD: case SIGWINCH:
+ continue;
+
+ case SIGTSTP: case SIGTTIN: case SIGTTOU:
+ if (is_orphaned_pgrp(current->pgrp))
+ continue;
+ /* FALLTHRU */
+
+ case SIGSTOP:
+ current->state = TASK_STOPPED;
+ current->exit_code = signr;
+ if (!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags & SA_NOCLDSTOP))
+ notify_parent(current, SIGCHLD);
+ schedule();
+ continue;
+
+ case SIGQUIT: case SIGILL: case SIGTRAP:
+ case SIGABRT: case SIGFPE: case SIGSEGV:
+ lock_kernel();
+ if (current->binfmt
+ && current->binfmt->core_dump
+ && current->binfmt->core_dump(signr, regs))
+ exit_code |= 0x80;
+ unlock_kernel();
+ /* FALLTHRU */
+
+ default:
+ lock_kernel();
+ sigaddset(&current->signal, signr);
+ current->flags |= PF_SIGNALED;
+ do_exit(exit_code);
+ /* NOTREACHED */
+ }
+ }
+
+ /* Are we from a system call? */
+ if (swi_instr) {
+ switch (regs->ARM_r0) {
+ case -ERESTARTNOHAND:
+ regs->ARM_r0 = -EINTR;
+ break;
+
+ case -ERESTARTSYS:
+ if (!(ka->sa.sa_flags & SA_RESTART)) {
+ regs->ARM_r0 = -EINTR;
+ break;
+ }
+ /* fallthrough */
+ case -ERESTARTNOINTR:
+ regs->ARM_r0 = regs->ARM_ORIG_r0;
+ regs->ARM_pc -= 4;
+ }
+ }
+ /* Whee! Actually deliver the signal. */
+ handle_signal(signr, ka, &info, oldset, regs);
+ if (single_stepping)
+ ptrace_set_bpt (current);
+ return 1;
+ }
+
+ if (swi_instr &&
+ (regs->ARM_r0 == -ERESTARTNOHAND ||
+ regs->ARM_r0 == -ERESTARTSYS ||
+ regs->ARM_r0 == -ERESTARTNOINTR)) {
+ regs->ARM_r0 = regs->ARM_ORIG_r0;
+ regs->ARM_pc -= 4;
+ }
+ if (single_stepping)
+ ptrace_set_bpt (current);
+ return 0;
+}
--- /dev/null
+/*
+ * linux/arch/arm/kernel/sys_arm.c
+ *
+ * Copyright (C) People who wrote linux/arch/i386/kernel/sys_i386.c
+ * Copyright (C) 1995, 1996 Russell King.
+ *
+ * This file contains various random system calls that
+ * have a non-standard calling sequence on the Linux/arm
+ * platform.
+ */
+
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/sem.h>
+#include <linux/msg.h>
+#include <linux/shm.h>
+#include <linux/stat.h>
+#include <linux/mman.h>
+#include <linux/file.h>
+#include <linux/utsname.h>
+
+#include <asm/uaccess.h>
+#include <asm/ipc.h>
+
+/*
+ * Constant strings used in inlined functions in header files
+ */
+/* proc/system.h */
+const char xchg_str[] = "xchg";
+/* arch/dma.h */
+const char dma_str[] = "%s: dma %d not supported\n";
+
+/*
+ * sys_pipe() is the normal C calling standard for creating
+ * a pipe. It's not the way unix traditionally does this, though.
+ */
+asmlinkage int sys_pipe(unsigned long * fildes)
+{
+ int fd[2];
+ int error;
+
+ lock_kernel();
+ error = do_pipe(fd);
+ unlock_kernel();
+ if (!error) {
+ if (copy_to_user(fildes, fd, 2*sizeof(int)))
+ error = -EFAULT;
+ }
+ return error;
+}
+
+/*
+ * Perform the select(nd, in, out, ex, tv) and mmap() system
+ * calls. ARM Linux used not to be able to handle more than
+ * four system call parameters, so these system calls used a memory
+ * block for parameter passing.
+ */
+
+struct mmap_arg_struct {
+ unsigned long addr;
+ unsigned long len;
+ unsigned long prot;
+ unsigned long flags;
+ unsigned long fd;
+ unsigned long offset;
+};
+
+asmlinkage int old_mmap(struct mmap_arg_struct *arg)
+{
+ int error = -EFAULT;
+ struct file * file = NULL;
+ struct mmap_arg_struct a;
+
+ lock_kernel();
+ if (copy_from_user(&a, arg, sizeof(a)))
+ goto out;
+ if (!(a.flags & MAP_ANONYMOUS)) {
+ error = -EBADF;
+ if (a.fd >= NR_OPEN || !(file = current->files->fd[a.fd]))
+ goto out;
+ }
+ a.flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ error = do_mmap(file, a.addr, a.len, a.prot, a.flags, a.offset);
+out:
+ unlock_kernel();
+ return error;
+}
+
+
+extern asmlinkage int sys_select(int, fd_set *, fd_set *, fd_set *, struct timeval *);
+
+struct sel_arg_struct {
+ unsigned long n;
+ fd_set *inp, *outp, *exp;
+ struct timeval *tvp;
+};
+
+asmlinkage int old_select(struct sel_arg_struct *arg)
+{
+ struct sel_arg_struct a;
+
+ if (copy_from_user(&a, arg, sizeof(a)))
+ return -EFAULT;
+ /* sys_select() does the appropriate kernel locking */
+ return sys_select(a.n, a.inp, a.outp, a.exp, a.tvp);
+}
+
+/*
+ * sys_ipc() is the de-multiplexer for the SysV IPC calls..
+ *
+ * This is really horribly ugly.
+ */
+asmlinkage int sys_ipc (uint call, int first, int second, int third, void *ptr, long fifth)
+{
+ int version, ret;
+
+ lock_kernel();
+ version = call >> 16; /* hack for backward compatibility */
+ call &= 0xffff;
+
+ if (call <= SEMCTL)
+ switch (call) {
+ case SEMOP:
+ ret = sys_semop (first, (struct sembuf *)ptr, second);
+ goto out;
+ case SEMGET:
+ ret = sys_semget (first, second, third);
+ goto out;
+ case SEMCTL: {
+ union semun fourth;
+ ret = -EINVAL;
+ if (!ptr)
+ goto out;
+ ret = -EFAULT;
+ if (get_user(fourth.__pad, (void **) ptr))
+ goto out;
+ ret = sys_semctl (first, second, third, fourth);
+ goto out;
+ }
+ default:
+ ret = -EINVAL;
+ goto out;
+ }
+ if (call <= MSGCTL)
+ switch (call) {
+ case MSGSND:
+ ret = sys_msgsnd (first, (struct msgbuf *) ptr,
+ second, third);
+ goto out;
+ case MSGRCV:
+ switch (version) {
+ case 0: {
+ struct ipc_kludge tmp;
+ ret = -EINVAL;
+ if (!ptr)
+ goto out;
+ ret = -EFAULT;
+ if (copy_from_user(&tmp,(struct ipc_kludge *) ptr,
+ sizeof (tmp)))
+ goto out;
+ ret = sys_msgrcv (first, tmp.msgp, second, tmp.msgtyp, third);
+ goto out;
+ }
+ case 1: default:
+ ret = sys_msgrcv (first, (struct msgbuf *) ptr, second, fifth, third);
+ goto out;
+ }
+ case MSGGET:
+ ret = sys_msgget ((key_t) first, second);
+ goto out;
+ case MSGCTL:
+ ret = sys_msgctl (first, second, (struct msqid_ds *) ptr);
+ goto out;
+ default:
+ ret = -EINVAL;
+ goto out;
+ }
+ if (call <= SHMCTL)
+ switch (call) {
+ case SHMAT:
+ switch (version) {
+ case 0: default: {
+ ulong raddr;
+ ret = sys_shmat (first, (char *) ptr, second, &raddr);
+ if (ret)
+ goto out;
+ ret = put_user (raddr, (ulong *) third);
+ goto out;
+ }
+ case 1: /* iBCS2 emulator entry point */
+ ret = -EINVAL;
+ if (!segment_eq(get_fs(), get_ds()))
+ goto out;
+ ret = sys_shmat (first, (char *) ptr, second, (ulong *) third);
+ goto out;
+ }
+ case SHMDT:
+ ret = sys_shmdt ((char *)ptr);
+ goto out;
+ case SHMGET:
+ ret = sys_shmget (first, second, third);
+ goto out;
+ case SHMCTL:
+ ret = sys_shmctl (first, second, (struct shmid_ds *) ptr);
+ goto out;
+ default:
+ ret = -EINVAL;
+ goto out;
+ }
+ else
+ ret = -EINVAL;
+out:
+ unlock_kernel();
+ return ret;
+}
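sys_ipc() above demultiplexes all the SysV IPC calls from a single entry point, with the backward-compatibility version packed into the top 16 bits of the call word. The unpacking step in isolation (the helper name is invented for illustration):

```c
/* Sketch of sys_ipc()'s call-word unpacking: the backward-compatibility
 * version lives in the top 16 bits, the operation in the bottom 16. */
static void split_ipc_call(unsigned int call,
			   unsigned int *version, unsigned int *op)
{
	*version = call >> 16;	/* hack for backward compatibility */
	*op = call & 0xffff;
}
```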
+
+/* Fork a new task - this creates a new program thread.
+ * This is called indirectly via a small wrapper
+ */
+asmlinkage int sys_fork(struct pt_regs *regs)
+{
+ int ret;
+
+ lock_kernel();
+ ret = do_fork(SIGCHLD, regs->ARM_sp, regs);
+ unlock_kernel();
+
+ return ret;
+}
+
+/* Clone a task - this clones the calling program thread.
+ * This is called indirectly via a small wrapper
+ */
+asmlinkage int sys_clone(unsigned long clone_flags, unsigned long newsp, struct pt_regs *regs)
+{
+ int ret;
+
+ lock_kernel();
+ if (!newsp)
+ newsp = regs->ARM_sp;
+
+ ret = do_fork(clone_flags, newsp, regs);
+ unlock_kernel();
+ return ret;
+}
+
+/* sys_execve() executes a new program.
+ * This is called indirectly via a small wrapper
+ */
+asmlinkage int sys_execve(char *filenamei, char **argv, char **envp, struct pt_regs *regs)
+{
+ int error;
+ char * filename;
+
+ lock_kernel();
+ filename = getname(filenamei);
+ error = PTR_ERR(filename);
+ if (IS_ERR(filename))
+ goto out;
+ error = do_execve(filename, argv, envp, regs);
+ putname(filename);
+out:
+ unlock_kernel();
+ return error;
+}
+
+/*
+ * Detect the old function calling standard
+ */
+static inline unsigned long old_calling_standard (struct pt_regs *regs)
+{
+ unsigned long instr, *pcv = (unsigned long *)(instruction_pointer(regs) - 8);
+ return (!get_user (instr, pcv) && instr == 0xe1a0300d);
+}
+
+/* Compatibility functions - we used to pass 5 parameters as r0, r1, r2, *r3, *(r3+4)
+ * We now use r0 - r4, and return an error if the old style calling standard is used.
+ * Eventually these functions will disappear.
+ */
+asmlinkage int
+sys_compat_llseek (unsigned int fd, unsigned long offset_high, unsigned long offset_low,
+ loff_t *result, unsigned int origin, struct pt_regs *regs)
+{
+ extern int sys_llseek (unsigned int, unsigned long, unsigned long, loff_t *, unsigned int);
+
+ if (old_calling_standard (regs)) {
+ printk (KERN_NOTICE "%s (%d): unsupported llseek call standard\n",
+ current->comm, current->pid);
+ return -EINVAL;
+ }
+ return sys_llseek (fd, offset_high, offset_low, result, origin);
+}
+
+asmlinkage int
+sys_compat_mount (char *devname, char *dirname, char *type, unsigned long flags, void *data,
+ struct pt_regs *regs)
+{
+ extern int sys_mount (char *, char *, char *, unsigned long, void *);
+
+ if (old_calling_standard (regs)) {
+ printk (KERN_NOTICE "%s (%d): unsupported mount call standard\n",
+ current->comm, current->pid);
+ return -EINVAL;
+ }
+ return sys_mount (devname, dirname, type, flags, data);
+}
+
+asmlinkage int sys_uname (struct old_utsname * name)
+{
+ static int warned = 0;
+
+ if (warned == 0) {
+ warned ++;
+ printk (KERN_NOTICE "%s (%d): obsolete uname call\n",
+ current->comm, current->pid);
+ }
+
+ if (name && !copy_to_user (name, &system_utsname, sizeof (*name)))
+ return 0;
+ return -EFAULT;
+}
+
+asmlinkage int sys_olduname(struct oldold_utsname * name)
+{
+ int error;
+ static int warned = 0;
+
+ if (warned == 0) {
+ warned ++;
+ printk (KERN_NOTICE "%s (%d): obsolete olduname call\n",
+ current->comm, current->pid);
+ }
+
+ if (!name)
+ return -EFAULT;
+
+ if (!access_ok(VERIFY_WRITE,name,sizeof(struct oldold_utsname)))
+ return -EFAULT;
+
+ error = __copy_to_user(&name->sysname,&system_utsname.sysname,__OLD_UTS_LEN);
+ error -= __put_user(0,name->sysname+__OLD_UTS_LEN);
+ error -= __copy_to_user(&name->nodename,&system_utsname.nodename,__OLD_UTS_LEN);
+ error -= __put_user(0,name->nodename+__OLD_UTS_LEN);
+ error -= __copy_to_user(&name->release,&system_utsname.release,__OLD_UTS_LEN);
+ error -= __put_user(0,name->release+__OLD_UTS_LEN);
+ error -= __copy_to_user(&name->version,&system_utsname.version,__OLD_UTS_LEN);
+ error -= __put_user(0,name->version+__OLD_UTS_LEN);
+ error -= __copy_to_user(&name->machine,&system_utsname.machine,__OLD_UTS_LEN);
+ error -= __put_user(0,name->machine+__OLD_UTS_LEN);
+ error = error ? -EFAULT : 0;
+
+ return error;
+}
+
+asmlinkage int sys_pause(void)
+{
+ static int warned = 0;
+
+ if (warned == 0) {
+ warned ++;
+ printk (KERN_NOTICE "%s (%d): obsolete pause call\n",
+ current->comm, current->pid);
+ }
+
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ return -ERESTARTNOHAND;
+}
+
--- /dev/null
+/*
+ * linux/arch/arm/kernel/time.c
+ *
+ * Copyright (C) 1991, 1992, 1995 Linus Torvalds
+ * Modifications for ARM (C) 1994, 1995, 1996, 1997 Russell King
+ *
+ * This file contains the ARM-specific time handling details:
+ * reading the RTC at bootup, etc...
+ *
+ * 1994-07-02 Alan Modra
+ * fixed set_rtc_mmss, fixed time.year for >= 2000, new mktime
+ * 1997-09-10 Updated NTP code according to technical memorandum Jan '96
+ * "A Kernel Model for Precision Timekeeping" by Dave Mills
+ */
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/smp.h>
+
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/delay.h>
+
+#include <linux/timex.h>
+#include <asm/irq-no.h>
+#include <asm/hardware.h>
+
+extern int setup_arm_irq(int, struct irqaction *);
+extern volatile unsigned long lost_ticks;
+
+/* change this if you have some constant time drift */
+#define USECS_PER_JIFFY (1000000/HZ)
+
+#ifndef BCD_TO_BIN
+#define BCD_TO_BIN(val) ((val)=((val)&15) + ((val)>>4)*10)
+#endif
+
+#ifndef BIN_TO_BCD
+#define BIN_TO_BCD(val) ((val)=(((val)/10)<<4) + (val)%10)
+#endif
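The two macros above do the binary/BCD conversion that RTC register fields need. As a quick sanity check, here is a userspace transcription (the function names are ours, chosen for testing outside the kernel, not kernel identifiers):

```c
#include <assert.h>

/* Userspace transcriptions of the BCD_TO_BIN/BIN_TO_BCD macros above;
 * the names are ours, for testing outside the kernel. */
static unsigned int bcd_to_bin(unsigned int val)
{
	return (val & 15) + (val >> 4) * 10;
}

static unsigned int bin_to_bcd(unsigned int val)
{
	return ((val / 10) << 4) + val % 10;
}
```

For any RTC field value in 0..99 the two conversions are inverses of each other, e.g. `bcd_to_bin(0x59) == 59`.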
+
+/* Converts Gregorian date to seconds since 1970-01-01 00:00:00.
+ * Assumes input in normal date format, i.e. 1980-12-31 23:59:59
+ * => year=1980, mon=12, day=31, hour=23, min=59, sec=59.
+ *
+ * [For the Julian calendar (which was used in Russia before 1917,
+ * Britain & colonies before 1752, anywhere else before 1582,
+ * and is still in use by some communities) leave out the
+ * -year/100+year/400 terms, and add 10.]
+ *
+ * This algorithm was first published by Gauss (I think).
+ *
+ * WARNING: this function will overflow on 2106-02-07 06:28:16 on
+ * machines where long is 32-bit! (However, as time_t is signed, we
+ * will already get problems at other places on 2038-01-19 03:14:08)
+ */
+static inline unsigned long mktime(unsigned int year, unsigned int mon,
+ unsigned int day, unsigned int hour,
+ unsigned int min, unsigned int sec)
+{
+ if (0 >= (int) (mon -= 2)) { /* 1..12 -> 11,12,1..10 */
+ mon += 12; /* Puts Feb last since it has leap day */
+ year -= 1;
+ }
+ return (((
+ (unsigned long)(year/4 - year/100 + year/400 + 367*mon/12 + day) +
+ year*365 - 719499
+ )*24 + hour /* now have hours */
+ )*60 + min /* now have minutes */
+ )*60 + sec; /* finally seconds */
+}
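The mktime() above can be checked against known epoch values. A minimal userspace copy (the `_ref` name is ours) of the same arithmetic, useful for convincing yourself the day-count term is right:

```c
/* Userspace copy of the kernel mktime() above (Gauss's day-count
 * algorithm); behaves identically for dates from 1970 onwards. */
static unsigned long mktime_ref(unsigned int year, unsigned int mon,
				unsigned int day, unsigned int hour,
				unsigned int min, unsigned int sec)
{
	if (0 >= (int) (mon -= 2)) {	/* 1..12 -> 11,12,1..10 */
		mon += 12;		/* Put Feb last since it has the leap day */
		year -= 1;
	}
	return (((
	    (unsigned long)(year/4 - year/100 + year/400 + 367*mon/12 + day) +
	      year*365 - 719499
	    )*24 + hour
	   )*60 + min
	  )*60 + sec;
}
```

The 719499 constant cancels the day count for 1970-01-01, so the epoch itself maps to zero, and the example date from the comment (1980-12-31 23:59:59) maps to 347155199.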
+
+#include <asm/arch/time.h>
+
+static unsigned long do_gettimeoffset(void)
+{
+ return gettimeoffset ();
+}
+
+void do_gettimeofday(struct timeval *tv)
+{
+ unsigned long flags;
+
+ save_flags_cli (flags);
+ *tv = xtime;
+ tv->tv_usec += do_gettimeoffset();
+
+ /*
+ * xtime is atomically updated in timer_bh. lost_ticks is
+	 * nonzero if the timer bottom half hasn't executed yet.
+ */
+ if (lost_ticks)
+ tv->tv_usec += USECS_PER_JIFFY;
+
+ restore_flags(flags);
+
+ if (tv->tv_usec >= 1000000) {
+ tv->tv_usec -= 1000000;
+ tv->tv_sec++;
+ }
+}
+
+void do_settimeofday(struct timeval *tv)
+{
+ cli ();
+ /* This is revolting. We need to set the xtime.tv_usec
+	 * correctly. However, the value in this location is
+	 * the value at the last tick.
+ * Discover what correction gettimeofday
+ * would have done, and then undo it!
+ */
+ tv->tv_usec -= do_gettimeoffset();
+
+ if (tv->tv_usec < 0) {
+ tv->tv_usec += 1000000;
+ tv->tv_sec--;
+ }
+
+ xtime = *tv;
+ time_state = TIME_BAD;
+ time_maxerror = MAXPHASE;
+ time_esterror = MAXPHASE;
+ sti ();
+}
+
+/*
+ * timer_interrupt() needs to keep up the real-time clock,
+ * as well as call the "do_timer()" routine every clocktick.
+ */
+static void timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ if (reset_timer ())
+ do_timer(regs);
+
+ update_rtc ();
+}
+
+static struct irqaction irqtimer0 = { timer_interrupt, 0, 0, "timer", NULL, NULL};
+
+void time_init(void)
+{
+ xtime.tv_sec = setup_timer();
+ xtime.tv_usec = 0;
+
+ setup_arm_irq(IRQ_TIMER0, &irqtimer0);
+}
--- /dev/null
+/*
+ * linux/arch/arm/kernel/traps.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ * Fragments that appear the same as linux/arch/i386/kernel/traps.c (C) Linus Torvalds
+ */
+
+/*
+ * 'traps.c' handles hardware exceptions after we have saved some state in
+ * 'linux/arch/arm/lib/traps.S'. Mostly a debugging aid, but will probably
+ * kill the offending process.
+ */
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+
+#include <asm/system.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/spinlock.h>
+#include <asm/atomic.h>
+#include <asm/pgtable.h>
+
+extern void fpe_save(struct fp_soft_struct *);
+extern void fpe_restore(struct fp_soft_struct *);
+extern void die_if_kernel(char *str, struct pt_regs *regs, int err, int ret);
+extern void c_backtrace (unsigned long fp, int pmode);
+extern int ptrace_cancel_bpt (struct task_struct *);
+
+char *processor_modes[]=
+{ "USER_26", "FIQ_26" , "IRQ_26" , "SVC_26" , "UK4_26" , "UK5_26" , "UK6_26" , "UK7_26" ,
+ "UK8_26" , "UK9_26" , "UK10_26", "UK11_26", "UK12_26", "UK13_26", "UK14_26", "UK15_26",
+ "USER_32", "FIQ_32" , "IRQ_32" , "SVC_32" , "UK4_32" , "UK5_32" , "UK6_32" , "ABT_32" ,
+ "UK8_32" , "UK9_32" , "UK10_32", "UND_32" , "UK12_32", "UK13_32", "UK14_32", "SYS_32"
+};
+
+static char *handler[]= { "prefetch abort", "data abort", "address exception", "interrupt" };
+
+static inline void console_verbose(void)
+{
+ extern int console_loglevel;
+ console_loglevel = 15;
+}
+
+int kstack_depth_to_print = 200;
+
+static int verify_stack_pointer (unsigned long stackptr, int size)
+{
+#if defined(CONFIG_CPU_ARM2) || defined(CONFIG_CPU_ARM3)
+ if (stackptr < 0x02048000 || stackptr + size > 0x03000000)
+ return -EFAULT;
+#else
+ if (stackptr < 0xc0000000 || stackptr + size > (unsigned long)high_memory)
+ return -EFAULT;
+#endif
+ return 0;
+}
+
+static void dump_stack (unsigned long *start, unsigned long *end, int offset, int max)
+{
+ unsigned long *p;
+ int i;
+
+ for (p = start + offset, i = 0; i < max && p < end; i++, p++) {
+ if (i && (i & 7) == 0)
+ printk ("\n ");
+ printk ("%08lx ", *p);
+ }
+ printk ("\n");
+}
+
+/*
+ * These constants are for searching for possible module text
+ * segments. VMALLOC_OFFSET comes from mm/vmalloc.c; MODULE_RANGE is
+ * a guess of how much space is likely to be vmalloced.
+ */
+#define VMALLOC_OFFSET (8*1024*1024)
+#define MODULE_RANGE (8*1024*1024)
+
+static void dump_instr (unsigned long pc)
+{
+ unsigned long module_start, module_end;
+ int pmin = -2, pmax = 3, ok = 0;
+ extern char start_kernel, _etext;
+
+ module_start = VMALLOC_START;
+ module_end = module_start + MODULE_RANGE;
+
+ if ((pc >= (unsigned long) &start_kernel) &&
+ (pc <= (unsigned long) &_etext)) {
+ if (pc + pmin < (unsigned long) &start_kernel)
+ pmin = ((unsigned long) &start_kernel) - pc;
+ if (pc + pmax > (unsigned long) &_etext)
+ pmax = ((unsigned long) &_etext) - pc;
+ ok = 1;
+ } else if (pc >= module_start && pc <= module_end) {
+ if (pc + pmin < module_start)
+ pmin = module_start - pc;
+ if (pc + pmax > module_end)
+ pmax = module_end - pc;
+ ok = 1;
+ }
+ printk ("Code: ");
+ if (ok) {
+ int i;
+ for (i = pmin; i < pmax; i++)
+ printk("%08lx ", ((unsigned long *)pc)[i]);
+ printk ("\n");
+ } else
+ printk ("pc not in code space\n");
+}
+
+/*
+ * This function is protected against kernel-mode re-entrancy. If it
+ * is re-entered it will hang the system since we can't guarantee in
+ * this case that any of the functions that it calls are safe any more.
+ * Even the panic function could be a problem, but we'll give it a go.
+ */
+void die_if_kernel(char *str, struct pt_regs *regs, int err, int ret)
+{
+ static int died = 0;
+ unsigned long cstack, sstack, frameptr;
+
+ if (user_mode(regs))
+ return;
+
+ switch (died) {
+ case 2:
+ while (1);
+ case 1:
+ died ++;
+ panic ("die_if_kernel re-entered. Major kernel corruption. Please reboot me!");
+ break;
+ case 0:
+ died ++;
+ break;
+ }
+
+ console_verbose ();
+ printk ("Internal error: %s: %x\n", str, err);
+ printk ("CPU: %d", smp_processor_id());
+ show_regs (regs);
+ printk ("Process %s (pid: %d, stackpage=%08lx)\nStack: ",
+ current->comm, current->pid, 4096+(unsigned long)current);
+
+ cstack = (unsigned long)(regs + 1);
+ sstack = 4096+(unsigned long)current;
+
+ if (*(unsigned long *)sstack != STACK_MAGIC)
+ printk ("*** corrupted stack page\n ");
+
+ if (verify_stack_pointer (cstack, 4))
+ printk ("%08lx invalid kernel stack pointer\n", cstack);
+ else if(cstack > sstack + 4096)
+ printk("(sp overflow)\n");
+ else if(cstack < sstack)
+ printk("(sp underflow)\n");
+ else
+ dump_stack ((unsigned long *)sstack, (unsigned long *)sstack + 1024,
+ cstack - sstack, kstack_depth_to_print);
+
+ frameptr = regs->ARM_fp;
+ if (frameptr) {
+ if (verify_stack_pointer (frameptr, 4))
+ printk ("Backtrace: invalid frame pointer\n");
+ else {
+ printk("Backtrace: \n");
+ c_backtrace (frameptr, processor_mode(regs));
+ }
+ }
+
+ dump_instr (instruction_pointer(regs));
+ died = 0;
+ if (ret != -1)
+ do_exit (ret);
+ else {
+ cli ();
+ while (1);
+ }
+}
+
+void bad_user_access_alignment (const void *ptr)
+{
+ void *pc;
+ __asm__("mov %0, lr\n": "=r" (pc));
+ printk (KERN_ERR "bad_user_access_alignment called: ptr = %p, pc = %p\n", ptr, pc);
+ current->tss.error_code = 0;
+ current->tss.trap_no = 11;
+ force_sig (SIGBUS, current);
+/* die_if_kernel("Oops - bad user access alignment", regs, mode, SIGBUS);*/
+}
+
+asmlinkage void do_undefinstr (int address, struct pt_regs *regs, int mode)
+{
+ current->tss.error_code = 0;
+ current->tss.trap_no = 6;
+ force_sig (SIGILL, current);
+ die_if_kernel("Oops - undefined instruction", regs, mode, SIGILL);
+}
+
+asmlinkage void do_excpt (int address, struct pt_regs *regs, int mode)
+{
+ current->tss.error_code = 0;
+ current->tss.trap_no = 11;
+ force_sig (SIGBUS, current);
+ die_if_kernel("Oops - address exception", regs, mode, SIGBUS);
+}
+
+asmlinkage void do_unexp_fiq (struct pt_regs *regs)
+{
+#ifndef CONFIG_IGNORE_FIQ
+ printk ("Hmm. Unexpected FIQ received, but trying to continue\n");
+ printk ("You may have a hardware problem...\n");
+#endif
+}
+
+asmlinkage void bad_mode(struct pt_regs *regs, int reason, int proc_mode)
+{
+ printk (KERN_CRIT "Bad mode in %s handler detected: mode %s\n",
+ handler[reason],
+ processor_modes[proc_mode]);
+ die_if_kernel ("Oops", regs, 0, -1);
+}
+
+/*
+ * 'math_state_restore()' saves the current math information in the
+ * old math state array, and gets the new ones from the current task.
+ *
+ * We no longer save/restore the math state on every context switch
+ * any more. We only do this now if it actually gets used.
+ */
+asmlinkage void math_state_restore (void)
+{
+ if (last_task_used_math == current)
+ return;
+ if (last_task_used_math)
+ /*
+ * Save current fp state into last_task_used_math->tss.fpe_save
+ */
+ fpe_save (&last_task_used_math->tss.fpstate.soft);
+ last_task_used_math = current;
+ if (current->used_math) {
+ /*
+ * Restore current fp state from current->tss.fpe_save
+ */
+		fpe_restore (&current->tss.fpstate.soft);
+ } else {
+ /*
+ * initialise fp state
+ */
+ fpe_restore (&init_task.tss.fpstate.soft);
+ current->used_math = 1;
+ }
+}
+
+asmlinkage void arm_syscall (int no, struct pt_regs *regs)
+{
+ switch (no) {
+ case 0: /* branch through 0 */
+ printk ("[%d] %s: branch through zero\n", current->pid, current->comm);
+ force_sig (SIGILL, current);
+ if (user_mode(regs)) {
+ show_regs (regs);
+ c_backtrace (regs->ARM_fp, processor_mode(regs));
+ }
+ die_if_kernel ("Oops", regs, 0, SIGILL);
+ break;
+
+ case 1: /* SWI_BREAK_POINT */
+ regs->ARM_pc -= 4; /* Decrement PC by one instruction */
+ ptrace_cancel_bpt (current);
+ force_sig (SIGTRAP, current);
+ break;
+
+ default:
+ printk ("[%d] %s: arm syscall %d\n", current->pid, current->comm, no);
+ force_sig (SIGILL, current);
+ if (user_mode(regs)) {
+ show_regs (regs);
+ c_backtrace (regs->ARM_fp, processor_mode(regs));
+ }
+ die_if_kernel ("Oops", regs, no, SIGILL);
+ break;
+ }
+}
+
+asmlinkage void deferred(int n, struct pt_regs *regs)
+{
+ printk ("[%d] %s: old system call %X\n", current->pid, current->comm, n);
+ show_regs (regs);
+ force_sig (SIGILL, current);
+}
+
+asmlinkage void arm_malalignedptr(const char *str, void *pc, volatile void *ptr)
+{
+ printk ("Mal-aligned pointer in %s: %p (PC=%p)\n", str, ptr, pc);
+}
+
+asmlinkage void arm_invalidptr (const char *function, int size)
+{
+ printk ("Invalid pointer size in %s (PC=%p) size %d\n",
+ function, __builtin_return_address(0), size);
+}
--- /dev/null
+#
+# linux/arch/arm/lib/Makefile
+#
+# Copyright (C) 1995-1998 Russell King
+#
+
+L_TARGET := lib.a
+L_OBJS := backtrace.o bitops.o delay.o fp_support.o \
+ loaders.o memcpy.o memfastset.o system.o string.o uaccess.o
+
+ifeq ($(PROCESSOR),armo)
+ L_OBJS += uaccess-armo.o
+endif
+
+ifdef CONFIG_INET
+ L_OBJS += checksum.o
+endif
+
+ifdef CONFIG_ARCH_ACORN
+ L_OBJS += ll_char_wr.o io-acorn.o
+ ifdef CONFIG_ARCH_A5K
+ L_OBJS += floppydma.o
+ endif
+ ifdef CONFIG_ARCH_RPC
+ L_OBJS += floppydma.o
+ endif
+endif
+
+ifdef CONFIG_ARCH_EBSA110
+ L_OBJS += io-ebsa110.o
+endif
+
+include $(TOPDIR)/Rules.make
+
+constants.h: getconstants
+ ./getconstants > constants.h
+
+getconstants: getconstants.c getconstants.h
+ $(HOSTCC) -D__KERNEL__ -o getconstants getconstants.c
+
+getconstants.h: getconsdata.c
+ $(CC) $(CFLAGS) -c getconsdata.c
+ $(PERL) extractinfo.perl $(OBJDUMP) > $@
+
+%.o: %.S
+ifndef CONFIG_BINUTILS_NEW
+ $(CC) $(CFLAGS) -D__ASSEMBLY__ -E $< | tr ';$$' '\n#' > ..tmp.$<.s
+ $(CC) $(CFLAGS:-pipe=) -c -o $@ ..tmp.$<.s
+ $(RM) ..tmp.$<.s
+else
+ $(CC) $(CFLAGS) -D__ASSEMBLY__ -c -o $@ $<
+endif
+
+clean:
+ $(RM) getconstants constants.h getconstants.h
--- /dev/null
+/*
+ * linux/arch/arm/lib/backtrace.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+ .text
+
+@ fp is 0 or stack frame
+
+#define frame r4
+#define next r5
+#define save r6
+#define mask r7
+#define offset r8
+
+ENTRY(__backtrace)
+ mov r1, #0x10
+ mov r0, fp
+
+ENTRY(c_backtrace)
+ stmfd sp!, {r4 - r8, lr} @ Save an extra register so we have a location...
+ tst r1, #0x10 @ 26 or 32-bit?
+ moveq mask, #0xfc000003
+ movne mask, #0
+ tst mask, r0
+ movne r0, #0
+ movs frame, r0
+1: moveq r0, #-2
+ LOADREGS(eqfd, sp!, {r4 - r8, pc})
+
+2: stmfd sp!, {pc} @ calculate offset of PC in STMIA instruction
+ ldr r0, [sp], #4
+ adr r1, 2b - 4
+ sub offset, r0, r1
+
+3: tst frame, mask @ Check for address exceptions...
+ bne 1b
+
+ ldmda frame, {r0, r1, r2, r3} @ fp, sp, lr, pc
+ mov next, r0
+
+ sub save, r3, offset @ Correct PC for prefetching
+ bic save, save, mask
+ adr r0, .Lfe
+ mov r1, save
+ bic r2, r2, mask
+ bl SYMBOL_NAME(printk)
+
+ sub r0, frame, #16
+ ldr r1, [save, #4]
+ mov r3, r1, lsr #10
+ ldr r2, .Ldsi+4
+ teq r3, r2 @ Check for stmia sp!, {args}
+ addeq save, save, #4 @ next instruction
+ bleq .Ldumpstm
+
+ ldr r1, [save, #4] @ Get 'stmia sp!, {rlist, fp, ip, lr, pc}' instruction
+ mov r3, r1, lsr #10
+ ldr r2, .Ldsi
+ teq r3, r2
+ bleq .Ldumpstm
+
+ teq frame, next
+ movne frame, next
+ teqne frame, #0
+ bne 3b
+ LOADREGS(fd, sp!, {r4 - r8, pc})
+
+
+#define instr r4
+#define reg r5
+#define stack r6
+
+.Ldumpstm: stmfd sp!, {instr, reg, stack, lr}
+ mov stack, r0
+ mov instr, r1
+ mov reg, #9
+
+1: mov r3, #1
+ tst instr, r3, lsl reg
+ beq 2f
+ ldr r2, [stack], #-4
+ mov r1, reg
+ adr r0, .Lfp
+ bl SYMBOL_NAME(printk)
+2: subs reg, reg, #1
+ bpl 1b
+
+ mov r0, stack
+ LOADREGS(fd, sp!, {instr, reg, stack, pc})
+
+.Lfe: .ascii "Function entered at [<%p>] from [<%p>]\n"
+ .byte 0
+.Lfp: .ascii " r%d = %p\n"
+ .byte 0
+ .align
+.Ldsi: .word 0x00e92dd8 >> 2
+ .word 0x00e92d00 >> 2
--- /dev/null
+/*
+ * linux/arch/arm/lib/bitops.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+ .text
+
+@ Purpose : Function to set a bit
+@ Prototype: int set_bit(int bit,int *addr)
+
+ENTRY(set_bit)
+ and r2, r0, #7
+ mov r3, #1
+ mov r3, r3, lsl r2
+ SAVEIRQS(ip)
+ DISABLEIRQS(ip)
+ ldrb r2, [r1, r0, lsr #3]
+ orr r2, r2, r3
+ strb r2, [r1, r0, lsr #3]
+ RESTOREIRQS(ip)
+ RETINSTR(mov,pc,lr)
+
+ENTRY(test_and_set_bit)
+ add r1, r1, r0, lsr #3 @ Get byte offset
+ and r3, r0, #7 @ Get bit offset
+ mov r0, #1
+ SAVEIRQS(ip)
+ DISABLEIRQS(ip)
+ ldrb r2, [r1]
+ tst r2, r0, lsl r3
+ orr r2, r2, r0, lsl r3
+ moveq r0, #0
+ strb r2, [r1]
+ RESTOREIRQS(ip)
+ RETINSTR(mov,pc,lr)
+
+@ Purpose : Function to clear a bit
+@ Prototype: int clear_bit(int bit,int *addr)
+
+ENTRY(clear_bit)
+ and r2, r0, #7
+ mov r3, #1
+ mov r3, r3, lsl r2
+ SAVEIRQS(ip)
+ DISABLEIRQS(ip)
+ ldrb r2, [r1, r0, lsr #3]
+ bic r2, r2, r3
+ strb r2, [r1, r0, lsr #3]
+ RESTOREIRQS(ip)
+ RETINSTR(mov,pc,lr)
+
+ENTRY(test_and_clear_bit)
+ add r1, r1, r0, lsr #3 @ Get byte offset
+ and r3, r0, #7 @ Get bit offset
+ mov r0, #1
+ SAVEIRQS(ip)
+ DISABLEIRQS(ip)
+ ldrb r2, [r1]
+ tst r2, r0, lsl r3
+ bic r2, r2, r0, lsl r3
+ moveq r0, #0
+ strb r2, [r1]
+ RESTOREIRQS(ip)
+ RETINSTR(mov,pc,lr)
+
+/* Purpose : Function to change a bit
+ * Prototype: int change_bit(int bit,int *addr)
+ */
+ENTRY(change_bit)
+ and r2, r0, #7
+ mov r3, #1
+ mov r3, r3, lsl r2
+ SAVEIRQS(ip)
+ DISABLEIRQS(ip)
+ ldrb r2, [r1, r0, lsr #3]
+ eor r2, r2, r3
+ strb r2, [r1, r0, lsr #3]
+ RESTOREIRQS(ip)
+ RETINSTR(mov,pc,lr)
+
+ENTRY(test_and_change_bit)
+ add r1, r1, r0, lsr #3
+ and r3, r0, #7
+ mov r0, #1
+ SAVEIRQS(ip)
+ DISABLEIRQS(ip)
+ ldrb r2, [r1]
+ tst r2, r0, lsl r3
+ eor r2, r2, r0, lsl r3
+ moveq r0, #0
+ strb r2, [r1]
+ RESTOREIRQS(ip)
+ RETINSTR(mov,pc,lr)
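All five routines above share one pattern: compute a byte address from the bit number, build a one-bit mask, then do a read-modify-write on that byte with IRQs masked. A plain-C model of two of them (the SAVEIRQS/DISABLEIRQS/RESTOREIRQS bracketing is deliberately omitted, so this sketch is not atomic; the `_ref` names are ours):

```c
/* C model of the byte-wise test_and_set/test_and_clear routines above.
 * The real routines disable IRQs around the read-modify-write; that
 * part is omitted here, so these are NOT interrupt-safe. */
static int test_and_set_bit_ref(int bit, unsigned char *addr)
{
	unsigned char *p = addr + (bit >> 3);	/* byte offset */
	unsigned char mask = 1 << (bit & 7);	/* bit offset */
	int old = (*p & mask) != 0;

	*p |= mask;
	return old;
}

static int test_and_clear_bit_ref(int bit, unsigned char *addr)
{
	unsigned char *p = addr + (bit >> 3);
	unsigned char mask = 1 << (bit & 7);
	int old = (*p & mask) != 0;

	*p &= ~mask;
	return old;
}
```

As in the assembly (the `tst`/`moveq r0, #0` pair), the return value reports whether the bit was set before the operation.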
+
+@ Purpose : Find a 'zero' bit
+@ Prototype: int find_first_zero_bit(char *addr,int maxbit);
+
+ENTRY(find_first_zero_bit)
+ mov r2, #0 @ Initialise bit position
+Lfindzbit1lp: ldrb r3, [r0, r2, lsr #3] @ Check byte, if 0xFF, then all bits set
+ teq r3, #0xFF
+ bne Lfoundzbit
+ add r2, r2, #8
+ cmp r2, r1 @ Check to see if we have come to the end
+ bcc Lfindzbit1lp
+ add r0, r1, #1 @ Make sure that we flag an error
+ RETINSTR(mov,pc,lr)
+Lfoundzbit: tst r3, #1 @ Check individual bits
+ moveq r0, r2
+ RETINSTR(moveq,pc,lr)
+ tst r3, #2
+ addeq r0, r2, #1
+ RETINSTR(moveq,pc,lr)
+ tst r3, #4
+ addeq r0, r2, #2
+ RETINSTR(moveq,pc,lr)
+ tst r3, #8
+ addeq r0, r2, #3
+ RETINSTR(moveq,pc,lr)
+ tst r3, #16
+ addeq r0, r2, #4
+ RETINSTR(moveq,pc,lr)
+ tst r3, #32
+ addeq r0, r2, #5
+ RETINSTR(moveq,pc,lr)
+ tst r3, #64
+ addeq r0, r2, #6
+ RETINSTR(moveq,pc,lr)
+ add r0, r2, #7
+ RETINSTR(mov,pc,lr)
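A plain-C model of the byte-wise search above. The failure return of maxbit + 1 mirrors the `add r0, r1, #1` path, and, like the assembly, the inner scan can report a bit at or beyond maxbit when the final byte is only partially covered (the `_ref` name is ours):

```c
/* C model of find_first_zero_bit above: scan a byte at a time,
 * skip all-ones bytes, return maxbit + 1 when every bit is set. */
static int find_first_zero_bit_ref(const unsigned char *addr, int maxbit)
{
	int bit;

	for (bit = 0; bit < maxbit; bit += 8) {
		unsigned char b = addr[bit >> 3];
		int i;

		if (b == 0xFF)		/* all bits set - next byte */
			continue;
		for (i = 0; i < 8; i++)	/* unrolled tst chain in the asm */
			if (!(b & (1 << i)))
				return bit + i;
	}
	return maxbit + 1;		/* flag an error, as the asm does */
}
```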
+
+@ Purpose : Find next 'zero' bit
+@ Prototype: int find_next_zero_bit(char *addr,int maxbit,int offset)
+
+ENTRY(find_next_zero_bit)
+ tst r2, #7
+ beq Lfindzbit1lp @ If new byte, goto old routine
+ ldrb r3, [r0, r2, lsr#3]
+	orr	r3, r3, #0xFF00		@ Set top bits so we won't get confused
+ stmfd sp!, {r4}
+ and r4, r2, #7
+ mov r3, r3, lsr r4 @ Shift right by no. of bits
+ ldmfd sp!, {r4}
+ and r3, r3, #0xFF
+ teq r3, #0xFF
+ orreq r2, r2, #7
+ addeq r2, r2, #1
+ beq Lfindzbit1lp @ If all bits are set, goto old routine
+ b Lfoundzbit
--- /dev/null
+/*
+ * linux/arch/arm/lib/iputils.S
+ *
+ * Copyright (C) 1995, 1996, 1997, 1998 Russell King
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/errno.h>
+
+ .text
+
+/* Function: __u32 csum_partial(const char *src, int len, __u32)
+ * Params : r0 = buffer, r1 = len, r2 = checksum
+ * Returns : r0 = new checksum
+ */
+
+ENTRY(csum_partial)
+ tst r0, #2
+ beq 1f
+ subs r1, r1, #2
+ addmi r1, r1, #2
+ bmi 3f
+ bic r0, r0, #3
+ ldr r3, [r0], #4
+ adds r2, r2, r3, lsr #16
+ adcs r2, r2, #0
+1: adds r2, r2, #0
+ bics ip, r1, #31
+ beq 3f
+ stmfd sp!, {r4 - r6}
+2: ldmia r0!, {r3 - r6}
+ adcs r2, r2, r3
+ adcs r2, r2, r4
+ adcs r2, r2, r5
+ adcs r2, r2, r6
+ ldmia r0!, {r3 - r6}
+ adcs r2, r2, r3
+ adcs r2, r2, r4
+ adcs r2, r2, r5
+ adcs r2, r2, r6
+ sub ip, ip, #32
+ teq ip, #0
+ bne 2b
+ adcs r2, r2, #0
+ ldmfd sp!, {r4 - r6}
+3: ands ip, r1, #0x1c
+ beq 5f
+4: ldr r3, [r0], #4
+ adcs r2, r2, r3
+ sub ip, ip, #4
+ teq ip, #0
+ bne 4b
+ adcs r2, r2, #0
+5: ands ip, r1, #3
+ moveq r0, r2
+ RETINSTR(moveq,pc,lr)
+ mov ip, ip, lsl #3
+ rsb ip, ip, #32
+ ldr r3, [r0]
+ mov r3, r3, lsl ip
+ adds r2, r2, r3, lsr ip
+ adc r0, r2, #0
+ RETINSTR(mov,pc,lr)
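The loop above performs a 16-bit ones'-complement accumulation four 32-bit words at a time, folding carries back in with the adcs chains. A byte-wise model in plain C (the `_ref` name is ours; its 32-bit intermediate can differ from the assembly's word-at-a-time sum, but both fold down to the same 16-bit checksum, and the assembly's unaligned-pointer fix-ups are not reproduced):

```c
#include <stdint.h>

/* Byte-wise model of the ones'-complement accumulation that
 * csum_partial performs with 32-bit adds plus carry folding.
 * Assumes a little-endian buffer, as on ARM. */
static uint32_t csum_partial_ref(const uint8_t *buf, int len, uint32_t sum)
{
	uint64_t acc = sum;

	while (len > 1) {		/* sum 16-bit little-endian units */
		acc += (uint32_t)buf[0] | ((uint32_t)buf[1] << 8);
		buf += 2;
		len -= 2;
	}
	if (len)			/* trailing odd byte */
		acc += buf[0];
	while (acc >> 32)		/* end-around carry fold */
		acc = (acc & 0xffffffff) + (acc >> 32);
	return (uint32_t)acc;
}
```

Because ones'-complement addition is associative, summing a buffer in two halves and chaining the partial sums gives the same result as one pass, which is what lets the kernel checksum packets fragment by fragment.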
+
+/* Function: __u32 csum_partial_copy_from_user (const char *src, char *dst, int len, __u32 sum, int *err_ptr)
+ * Params : r0 = src, r1 = dst, r2 = len, r3 = sum, [sp, #0] = &err
+ * Returns : r0 = checksum, [[sp, #0], #0] = 0 or -EFAULT
+ */
+
+#define USER_LDR(instr...) \
+9999: instr; \
+ .section __ex_table, "a"; \
+ .align 3; \
+ .long 9999b, 6001f; \
+ .previous;
+
+ENTRY(csum_partial_copy_from_user)
+ mov ip, sp
+ stmfd sp!, {r4 - r8, fp, ip, lr, pc}
+ sub fp, ip, #4
+ cmp r2, #4
+ blt .too_small_user
+ tst r1, #2 @ Test destination alignment
+ beq .dst_aligned_user
+	subs	r2, r2, #2		@ We don't know if SRC is aligned...
+USER_LDR( ldrbt ip, [r0], #1)
+USER_LDR( ldrbt r8, [r0], #1)
+ orr ip, ip, r8, lsl #8
+ adds r3, r3, ip
+ adcs r3, r3, #0
+ strb ip, [r1], #1
+ mov ip, ip, lsr #8
+ strb ip, [r1], #1 @ Destination now aligned
+.dst_aligned_user:
+ tst r0, #3
+ bne .src_not_aligned_user
+ adds r3, r3, #0
+ bics ip, r2, #15 @ Routine for src & dst aligned
+ beq 2f
+1:
+USER_LDR( ldrt r4, [r0], #4)
+USER_LDR( ldrt r5, [r0], #4)
+USER_LDR( ldrt r6, [r0], #4)
+USER_LDR( ldrt r7, [r0], #4)
+ stmia r1!, {r4, r5, r6, r7}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ adcs r3, r3, r6
+ adcs r3, r3, r7
+ sub ip, ip, #16
+ teq ip, #0
+ bne 1b
+2: ands ip, r2, #12
+ beq 4f
+ tst ip, #8
+ beq 3f
+USER_LDR( ldrt r4, [r0], #4)
+USER_LDR( ldrt r5, [r0], #4)
+ stmia r1!, {r4, r5}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ tst ip, #4
+ beq 4f
+3:
+USER_LDR( ldrt r4, [r0], #4)
+ str r4, [r1], #4
+ adcs r3, r3, r4
+4: ands r2, r2, #3
+ adceq r0, r3, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+USER_LDR( ldrt r4, [r0], #4)
+ tst r2, #2
+ beq .exit
+ adcs r3, r3, r4, lsl #16
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+.exit: tst r2, #1
+ strneb r4, [r1], #1
+ andne r4, r4, #255
+ adcnes r3, r3, r4
+ adcs r0, r3, #0
+ LOADREGS(ea,fp,{r4 - r8, fp, sp, pc})
+
+.too_small_user:
+ teq r2, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ cmp r2, #2
+ blt .too_small_user1
+USER_LDR( ldrbt ip, [r0], #1)
+USER_LDR( ldrbt r8, [r0], #1)
+ orr ip, ip, r8, lsl #8
+ adds r3, r3, ip
+ strb ip, [r1], #1
+ strb r8, [r1], #1
+ tst r2, #1
+.too_small_user1:
+USER_LDR( ldrnebt ip, [r0], #1)
+ strneb ip, [r1], #1
+ adcnes r3, r3, ip
+ adcs r0, r3, #0
+ LOADREGS(ea,fp,{r4 - r8, fp, sp, pc})
+
+.src_not_aligned_user:
+ cmp r2, #4
+ blt .too_small_user
+ and ip, r0, #3
+ bic r0, r0, #3
+USER_LDR( ldrt r4, [r0], #4)
+ cmp ip, #2
+ beq .src2_aligned_user
+ bhi .src3_aligned_user
+ mov r4, r4, lsr #8
+ adds r3, r3, #0
+ bics ip, r2, #15
+ beq 2f
+1:
+USER_LDR( ldrt r5, [r0], #4)
+USER_LDR( ldrt r6, [r0], #4)
+USER_LDR( ldrt r7, [r0], #4)
+USER_LDR( ldrt r8, [r0], #4)
+ orr r4, r4, r5, lsl #24
+ mov r5, r5, lsr #8
+ orr r5, r5, r6, lsl #24
+ mov r6, r6, lsr #8
+ orr r6, r6, r7, lsl #24
+ mov r7, r7, lsr #8
+ orr r7, r7, r8, lsl #24
+ stmia r1!, {r4, r5, r6, r7}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ adcs r3, r3, r6
+ adcs r3, r3, r7
+ mov r4, r8, lsr #8
+ sub ip, ip, #16
+ teq ip, #0
+ bne 1b
+2: ands ip, r2, #12
+ beq 4f
+ tst ip, #8
+ beq 3f
+USER_LDR( ldrt r5, [r0], #4)
+USER_LDR( ldrt r6, [r0], #4)
+ orr r4, r4, r5, lsl #24
+ mov r5, r5, lsr #8
+ orr r5, r5, r6, lsl #24
+ stmia r1!, {r4, r5}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ mov r4, r6, lsr #8
+ tst ip, #4
+ beq 4f
+3:
+USER_LDR( ldrt r5, [r0], #4)
+ orr r4, r4, r5, lsl #24
+ str r4, [r1], #4
+ adcs r3, r3, r4
+ mov r4, r5, lsr #8
+4: ands r2, r2, #3
+ adceq r0, r3, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ tst r2, #2
+ beq .exit
+ adcs r3, r3, r4, lsl #16
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ b .exit
+
+.src2_aligned_user:
+ mov r4, r4, lsr #16
+ adds r3, r3, #0
+ bics ip, r2, #15
+ beq 2f
+1:
+USER_LDR( ldrt r5, [r0], #4)
+USER_LDR( ldrt r6, [r0], #4)
+USER_LDR( ldrt r7, [r0], #4)
+USER_LDR( ldrt r8, [r0], #4)
+ orr r4, r4, r5, lsl #16
+ mov r5, r5, lsr #16
+ orr r5, r5, r6, lsl #16
+ mov r6, r6, lsr #16
+ orr r6, r6, r7, lsl #16
+ mov r7, r7, lsr #16
+ orr r7, r7, r8, lsl #16
+ stmia r1!, {r4, r5, r6, r7}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ adcs r3, r3, r6
+ adcs r3, r3, r7
+ mov r4, r8, lsr #16
+ sub ip, ip, #16
+ teq ip, #0
+ bne 1b
+2: ands ip, r2, #12
+ beq 4f
+ tst ip, #8
+ beq 3f
+USER_LDR( ldrt r5, [r0], #4)
+USER_LDR( ldrt r6, [r0], #4)
+ orr r4, r4, r5, lsl #16
+ mov r5, r5, lsr #16
+ orr r5, r5, r6, lsl #16
+ stmia r1!, {r4, r5}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ mov r4, r6, lsr #16
+ tst ip, #4
+ beq 4f
+3:
+USER_LDR( ldrt r5, [r0], #4)
+ orr r4, r4, r5, lsl #16
+ str r4, [r1], #4
+ adcs r3, r3, r4
+ mov r4, r5, lsr #16
+4: ands r2, r2, #3
+ adceq r0, r3, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ tst r2, #2
+ beq .exit
+ adcs r3, r3, r4, lsl #16
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ strb r4, [r1], #1
+USER_LDR( ldrb r4, [r0], #1)
+ b .exit
+
+.src3_aligned_user:
+ mov r4, r4, lsr #24
+ adds r3, r3, #0
+ bics ip, r2, #15
+ beq 2f
+1:
+USER_LDR( ldrt r5, [r0], #4)
+USER_LDR( ldrt r6, [r0], #4)
+USER_LDR( ldrt r7, [r0], #4)
+USER_LDR( ldrt r8, [r0], #4)
+ orr r4, r4, r5, lsl #8
+ mov r5, r5, lsr #24
+ orr r5, r5, r6, lsl #8
+ mov r6, r6, lsr #24
+ orr r6, r6, r7, lsl #8
+ mov r7, r7, lsr #24
+ orr r7, r7, r8, lsl #8
+ stmia r1!, {r4, r5, r6, r7}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ adcs r3, r3, r6
+ adcs r3, r3, r7
+ mov r4, r8, lsr #24
+ sub ip, ip, #16
+ teq ip, #0
+ bne 1b
+2: ands ip, r2, #12
+ beq 4f
+ tst ip, #8
+ beq 3f
+USER_LDR( ldrt r5, [r0], #4)
+USER_LDR( ldrt r6, [r0], #4)
+ orr r4, r4, r5, lsl #8
+ mov r5, r5, lsr #24
+ orr r5, r5, r6, lsl #8
+ stmia r1!, {r4, r5}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ mov r4, r6, lsr #24
+ tst ip, #4
+ beq 4f
+3:
+USER_LDR( ldrt r5, [r0], #4)
+ orr r4, r4, r5, lsl #8
+ str r4, [r1], #4
+ adcs r3, r3, r4
+ mov r4, r5, lsr #24
+4: ands r2, r2, #3
+ adceq r0, r3, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ tst r2, #2
+ beq .exit
+ adcs r3, r3, r4, lsl #16
+ strb r4, [r1], #1
+USER_LDR( ldrt r4, [r0], #4)
+ strb r4, [r1], #1
+ adcs r3, r3, r4, lsl #24
+ mov r4, r4, lsr #8
+ b .exit
+
+ .section .fixup,"ax"
+ .align 4
+6001: mov r4, #-EFAULT
+ ldr r5, [sp, #4*8]
+ str r4, [r5]
+ LOADREGS(ea,fp,{r4 - r8, fp, sp, pc})
+
+/* Function: __u32 csum_partial_copy (const char *src, char *dst, int len, __u32 sum)
+ * Params : r0 = src, r1 = dst, r2 = len, r3 = checksum
+ * Returns : r0 = new checksum
+ */
+ENTRY(csum_partial_copy)
+ mov ip, sp
+ stmfd sp!, {r4 - r8, fp, ip, lr, pc}
+ sub fp, ip, #4
+ cmp r2, #4
+ blt Ltoo_small
+ tst r1, #2 @ Test destination alignment
+ beq Ldst_aligned
+	subs	r2, r2, #2		@ We don't know if SRC is aligned...
+ ldrb ip, [r0], #1
+ ldrb r8, [r0], #1
+ orr ip, ip, r8, lsl #8
+ adds r3, r3, ip
+ adcs r3, r3, #0
+ strb ip, [r1], #1
+ mov ip, ip, lsr #8
+ strb ip, [r1], #1 @ Destination now aligned
+Ldst_aligned: tst r0, #3
+ bne Lsrc_not_aligned
+ adds r3, r3, #0
+ bics ip, r2, #15 @ Routine for src & dst aligned
+ beq 3f
+1: ldmia r0!, {r4, r5, r6, r7}
+ stmia r1!, {r4, r5, r6, r7}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ adcs r3, r3, r6
+ adcs r3, r3, r7
+ sub ip, ip, #16
+ teq ip, #0
+ bne 1b
+3: ands ip, r2, #12
+ beq 5f
+ tst ip, #8
+ beq 4f
+ ldmia r0!, {r4, r5}
+ stmia r1!, {r4, r5}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ tst ip, #4
+ beq 5f
+4: ldr r4, [r0], #4
+ str r4, [r1], #4
+ adcs r3, r3, r4
+5: ands r2, r2, #3
+ adceq r0, r3, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ ldr r4, [r0], #4
+ tst r2, #2
+ beq Lexit
+ adcs r3, r3, r4, lsl #16
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ b Lexit
+
+Ltoo_small: teq r2, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ cmp r2, #2
+ blt Ltoo_small1
+ ldrb ip, [r0], #1
+ ldrb r8, [r0], #1
+ orr ip, ip, r8, lsl #8
+ adds r3, r3, ip
+ strb ip, [r1], #1
+ strb r8, [r1], #1
+Lexit: tst r2, #1
+Ltoo_small1: ldrneb ip, [r0], #1
+ strneb ip, [r1], #1
+ adcnes r3, r3, ip
+ adcs r0, r3, #0
+ LOADREGS(ea,fp,{r4 - r8, fp, sp, pc})
+
+Lsrc_not_aligned:
+ cmp r2, #4
+ blt Ltoo_small
+ and ip, r0, #3
+ bic r0, r0, #3
+ ldr r4, [r0], #4
+ cmp ip, #2
+ beq Lsrc2_aligned
+ bhi Lsrc3_aligned
+ mov r4, r4, lsr #8
+ adds r3, r3, #0
+ bics ip, r2, #15
+ beq 2f
+1: ldmia r0!, {r5, r6, r7, r8}
+ orr r4, r4, r5, lsl #24
+ mov r5, r5, lsr #8
+ orr r5, r5, r6, lsl #24
+ mov r6, r6, lsr #8
+ orr r6, r6, r7, lsl #24
+ mov r7, r7, lsr #8
+ orr r7, r7, r8, lsl #24
+ stmia r1!, {r4, r5, r6, r7}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ adcs r3, r3, r6
+ adcs r3, r3, r7
+ mov r4, r8, lsr #8
+ sub ip, ip, #16
+ teq ip, #0
+ bne 1b
+2: ands ip, r2, #12
+ beq 4f
+ tst ip, #8
+ beq 3f
+ ldmia r0!, {r5, r6}
+ orr r4, r4, r5, lsl #24
+ mov r5, r5, lsr #8
+ orr r5, r5, r6, lsl #24
+ stmia r1!, {r4, r5}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ mov r4, r6, lsr #8
+ tst ip, #4
+ beq 4f
+3: ldr r5, [r0], #4
+ orr r4, r4, r5, lsl #24
+ str r4, [r1], #4
+ adcs r3, r3, r4
+ mov r4, r5, lsr #8
+4: ands r2, r2, #3
+ adceq r0, r3, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ tst r2, #2
+ beq Lexit
+ adcs r3, r3, r4, lsl #16
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ b Lexit
+
+Lsrc2_aligned: mov r4, r4, lsr #16
+ adds r3, r3, #0
+ bics ip, r2, #15
+ beq 2f
+1: ldmia r0!, {r5, r6, r7, r8}
+ orr r4, r4, r5, lsl #16
+ mov r5, r5, lsr #16
+ orr r5, r5, r6, lsl #16
+ mov r6, r6, lsr #16
+ orr r6, r6, r7, lsl #16
+ mov r7, r7, lsr #16
+ orr r7, r7, r8, lsl #16
+ stmia r1!, {r4, r5, r6, r7}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ adcs r3, r3, r6
+ adcs r3, r3, r7
+ mov r4, r8, lsr #16
+ sub ip, ip, #16
+ teq ip, #0
+ bne 1b
+2: ands ip, r2, #12
+ beq 4f
+ tst ip, #8
+ beq 3f
+ ldmia r0!, {r5, r6}
+ orr r4, r4, r5, lsl #16
+ mov r5, r5, lsr #16
+ orr r5, r5, r6, lsl #16
+ stmia r1!, {r4, r5}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ mov r4, r6, lsr #16
+ tst ip, #4
+ beq 4f
+3: ldr r5, [r0], #4
+ orr r4, r4, r5, lsl #16
+ str r4, [r1], #4
+ adcs r3, r3, r4
+ mov r4, r5, lsr #16
+4: ands r2, r2, #3
+ adceq r0, r3, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ tst r2, #2
+ beq Lexit
+ adcs r3, r3, r4, lsl #16
+ strb r4, [r1], #1
+ mov r4, r4, lsr #8
+ strb r4, [r1], #1
+ ldrb r4, [r0], #1
+ b Lexit
+
+Lsrc3_aligned: mov r4, r4, lsr #24
+ adds r3, r3, #0
+ bics ip, r2, #15
+ beq 2f
+1: ldmia r0!, {r5, r6, r7, r8}
+ orr r4, r4, r5, lsl #8
+ mov r5, r5, lsr #24
+ orr r5, r5, r6, lsl #8
+ mov r6, r6, lsr #24
+ orr r6, r6, r7, lsl #8
+ mov r7, r7, lsr #24
+ orr r7, r7, r8, lsl #8
+ stmia r1!, {r4, r5, r6, r7}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ adcs r3, r3, r6
+ adcs r3, r3, r7
+ mov r4, r8, lsr #24
+ sub ip, ip, #16
+ teq ip, #0
+ bne 1b
+2: ands ip, r2, #12
+ beq 4f
+ tst ip, #8
+ beq 3f
+ ldmia r0!, {r5, r6}
+ orr r4, r4, r5, lsl #8
+ mov r5, r5, lsr #24
+ orr r5, r5, r6, lsl #8
+ stmia r1!, {r4, r5}
+ adcs r3, r3, r4
+ adcs r3, r3, r5
+ mov r4, r6, lsr #24
+ tst ip, #4
+ beq 4f
+3: ldr r5, [r0], #4
+ orr r4, r4, r5, lsl #8
+ str r4, [r1], #4
+ adcs r3, r3, r4
+ mov r4, r5, lsr #24
+4: ands r2, r2, #3
+ adceq r0, r3, #0
+ LOADREGS(eqea,fp,{r4 - r8, fp, sp, pc})
+ tst r2, #2
+ beq Lexit
+ adcs r3, r3, r4, lsl #16
+ strb r4, [r1], #1
+ ldr r4, [r0], #4
+ strb r4, [r1], #1
+ adcs r3, r3, r4, lsl #24
+ mov r4, r4, lsr #8
+ b Lexit
--- /dev/null
+/*
+ * linux/arch/arm/lib/delay.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+ .text
+
+LC0: .word SYMBOL_NAME(loops_per_sec)
+
+ENTRY(udelay)
+ mov r2, #0x1000
+ orr r2, r2, #0x00c6
+ mul r1, r0, r2
+ ldr r2, LC0
+ ldr r2, [r2]
+ mov r1, r1, lsr #11
+ mov r2, r2, lsr #11
+ mul r0, r1, r2
+ movs r0, r0, lsr #10
+ RETINSTR(moveq,pc,lr)
+
+@ Delay routine
+ENTRY(__delay)
+ subs r0, r0, #1
+ RETINSTR(movcc,pc,lr)
+ subs r0, r0, #1
+ RETINSTR(movcc,pc,lr)
+ subs r0, r0, #1
+ RETINSTR(movcc,pc,lr)
+ subs r0, r0, #1
+ RETINSTR(movcc,pc,lr)
+ subs r0, r0, #1
+ RETINSTR(movcc,pc,lr)
+ subs r0, r0, #1
+ RETINSTR(movcc,pc,lr)
+ subs r0, r0, #1
+ RETINSTR(movcc,pc,lr)
+ subs r0, r0, #1
+ bcs SYMBOL_NAME(__delay)
+ RETINSTR(mov,pc,lr)
+
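The udelay routine above scales the microsecond count by 0x10c6 (approximately 2^32 / 1,000,000) and then by loops_per_sec, with the three right shifts (11 + 11 + 10 = 32) completing the division. A rough C model of that fixed-point arithmetic, as a sketch only (loops_per_sec is passed in here rather than read from the kernel variable of the same name):

```c
#include <stdint.h>

/* Model of udelay's fixed-point math: 0x10c6 ~= 2^32 / 1e6, and the
 * shifts 11 + 11 + 10 = 32 finish the division, so the result is
 * approximately usecs * loops_per_sec / 1,000,000. */
static uint32_t udelay_loops(uint32_t usecs, uint32_t loops_per_sec)
{
    uint32_t scaled = usecs * 0x10c6u;   /* low 32 bits of usecs * 2^32/1e6 */
    return ((scaled >> 11) * (loops_per_sec >> 11)) >> 10;
}
```
The truncation in each shift loses a little precision, which is fine for a busy-wait delay that only needs to be "at least this long" to within a few percent.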
--- /dev/null
+#!/usr/bin/perl
+
+$OBJDUMP=$ARGV[0];
+
+sub swapdata {
+ local ($num) = @_;
+
+ return substr($num, 6, 2).substr($num, 4, 2).substr ($num, 2, 2).substr ($num, 0, 2);
+}
+
+open (DATA, $OBJDUMP.' --full-contents --section=.data getconsdata.o | grep \'^ 00\' |') ||
+ die ("Can't objdump!");
+while (<DATA>) {
+ ($addr, $data0, $data1, $data2, $data3) = split (' ');
+ $dat[hex($addr)] = hex(&swapdata($data0));
+ $dat[hex($addr)+4] = hex(&swapdata($data1));
+ $dat[hex($addr)+8] = hex(&swapdata($data2));
+ $dat[hex($addr)+12] = hex(&swapdata($data3));
+}
+close (DATA);
+
+open (DATA, $OBJDUMP.' --syms getconsdata.o |') || die ("Can't objdump!");
+while (<DATA>) {
+ /elf32/ && ( $elf = 1 );
+ /a.out/ && ( $aout = 1 );
+ next if ($aout && ! / 07 /);
+ next if ($elf && ! (/^00...... g/ && /.data/));
+ next if (!$aout && !$elf);
+
+ ($addr, $flags, $sect, $a1, $a2, $a3, $name) = split (' ') if $aout;
+ $nam[hex($addr)] = substr($name, 1) if $aout;
+ if ($elf) {
+ chomp;
+ $addr = substr ($_, 0, 8);
+ $name = substr ($_, 32);
+ $nam[hex($addr)] = $name;
+ }
+}
+close (DATA);
+
+print "/*\n * *** THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT! ***\n */\n";
+for ($i = 0; $i < hex($addr)+12; $i ++) {
+ print "unsigned long $nam[$i] = $dat[$i];\n" if $dat[$i];
+ print "#define __HAS_$nam[$i]\n" if $dat[$i];
+}
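The `swapdata` helper above undoes objdump's byte-stream rendering of little-endian words: a 32-bit value in .data is printed with its bytes reversed, so swapping the four hex byte-pairs recovers the native value. A small C sketch of the same transformation (names are illustrative, not from the kernel):

```c
#include <string.h>

/* Reverse the four hex byte-pairs of an 8-digit dump word, e.g.
 * "38000000" (little-endian byte stream) -> "00000038" (0x38 == 56). */
static void swapdata(const char *in, char *out /* at least 9 bytes */)
{
    for (int i = 0; i < 4; i++)
        memcpy(out + 2 * i, in + 2 * (3 - i), 2);
    out[8] = '\0';
}
```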
--- /dev/null
+/*
+ * linux/arch/arm/lib/floppydma.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+ .text
+
+ .global SYMBOL_NAME(floppy_fiqin_end)
+ENTRY(floppy_fiqin_start)
+ subs r9, r9, #1
+ ldrgtb r12, [r11, #-4]
+ ldrleb r12, [r11], #0
+ strb r12, [r10], #1
+ subs pc, lr, #4
+SYMBOL_NAME(floppy_fiqin_end):
+
+ .global SYMBOL_NAME(floppy_fiqout_end)
+ENTRY(floppy_fiqout_start)
+ subs r9, r9, #1
+ ldrgeb r12, [r10], #1
+ movlt r12, #0
+ strleb r12, [r11], #0
+ subles pc, lr, #4
+ strb r12, [r11, #-4]
+ subs pc, lr, #4
+SYMBOL_NAME(floppy_fiqout_end):
+
+@ Params:
+@ r0 = length
+@ r1 = address
+@ r2 = floppy port
+@ Puts these into R9_fiq, R10_fiq, R11_fiq
+ENTRY(floppy_fiqsetup)
+ mov ip, sp
+ stmfd sp!, {fp, ip, lr, pc}
+ sub fp, ip, #4
+ MODE(r3,ip,I_BIT|F_BIT|DEFAULT_FIQ) @ disable FIQs, IRQs, FIQ mode
+ mov r0, r0
+ mov r9, r0
+ mov r10, r1
+ mov r11, r2
+ RESTOREMODE(r3) @ back to normal
+ mov r0, r0
+ LOADREGS(ea,fp,{fp, sp, pc})
+
+ENTRY(floppy_fiqresidual)
+ mov ip, sp
+ stmfd sp!, {fp, ip, lr, pc}
+ sub fp, ip, #4
+ MODE(r3,ip,I_BIT|F_BIT|DEFAULT_FIQ) @ disable FIQs, IRQs, FIQ mode
+ mov r0, r0
+ mov r0, r9
+ RESTOREMODE(r3)
+ mov r0, r0
+ LOADREGS(ea,fp,{fp, sp, pc})
--- /dev/null
+/*
+ * linux/arch/arm/lib/fp_support.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#include <linux/sched.h>
+#include <linux/linkage.h>
+
+extern void (*fp_save)(struct fp_soft_struct *);
+
+asmlinkage void fp_setup(void)
+{
+ struct task_struct *p;
+
+ p = &init_task;
+ do {
+ fp_save(&p->tss.fpstate.soft);
+ p = p->next_task;
+ }
+ while (p != &init_task);
+}
--- /dev/null
+/*
+ * linux/arch/arm/lib/getconsdata.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/unistd.h>
+#include <asm/pgtable.h>
+#include <asm/uaccess.h>
+
+#define OFF_TSK(n) (unsigned long)&(((struct task_struct *)0)->n)
+#define OFF_MM(n) (unsigned long)&(((struct mm_struct *)0)->n)
+
+#ifdef KERNEL_DOMAIN
+unsigned long kernel_domain = KERNEL_DOMAIN;
+#endif
+#ifdef USER_DOMAIN
+unsigned long user_domain = USER_DOMAIN;
+#endif
+unsigned long addr_limit = OFF_TSK(addr_limit);
+unsigned long tss_memmap = OFF_TSK(tss.memmap);
+unsigned long mm = OFF_TSK(mm);
+unsigned long pgd = OFF_MM(pgd);
+unsigned long tss_save = OFF_TSK(tss.save);
+unsigned long tss_fpesave = OFF_TSK(tss.fpstate.soft.save);
+#if defined(CONFIG_CPU_ARM2) || defined(CONFIG_CPU_ARM3)
+unsigned long tss_memcmap = OFF_TSK(tss.memcmap);
+#endif
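The OFF_TSK/OFF_MM macros above use the classic null-pointer trick for computing a member's byte offset: cast 0 to a struct pointer and take the member's address. A self-contained sketch with a made-up struct (standard C spells the same idea `offsetof()`):

```c
#include <stddef.h>

/* Made-up layout for illustration only; not the kernel's task_struct. */
struct example {
    long a;
    long b[4];
    long c;
};

/* Same idiom as OFF_TSK/OFF_MM: the "address" of a member of a struct
 * at address 0 is that member's offset within the struct. */
#define OFF(member) ((unsigned long)&(((struct example *)0)->member))
```
Strictly speaking this dereference of a null pointer is undefined behaviour in ISO C, which is why later compilers provide `offsetof()` as a builtin, but it is exactly how these constants were generated here.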
--- /dev/null
+/*
+ * linux/arch/arm/lib/getconstants.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#include <linux/mm.h>
+#include <asm/pgtable.h>
+#include <stdio.h>
+#include <linux/unistd.h>
+
+void printdef(char *def, int no)
+{
+ printf("#define %s\t%d\n", def, no);
+}
+
+#include "getconstants.h"
+
+int main()
+{
+ printf("/*\n * constants.h generated by getconstants\n * DO NOT EDIT!\n */\n");
+
+ printf("#define _current\t_%s\n", "current_set");
+
+#ifdef _PAGE_PRESENT
+ printdef("PAGE_PRESENT", _PAGE_PRESENT);
+#endif
+#ifdef _PAGE_RW
+ printdef("PAGE_RW", _PAGE_RW);
+#endif
+#ifdef _PAGE_USER
+ printdef("PAGE_USER", _PAGE_USER);
+#endif
+#ifdef _PAGE_ACCESSED
+ printdef("PAGE_ACCESSED", _PAGE_ACCESSED);
+#endif
+#ifdef _PAGE_DIRTY
+ printdef("PAGE_DIRTY", _PAGE_DIRTY);
+#endif
+#ifdef _PAGE_READONLY
+ printdef("PAGE_READONLY", _PAGE_READONLY);
+#endif
+#ifdef _PAGE_NOT_USER
+ printdef("PAGE_NOT_USER", _PAGE_NOT_USER);
+#endif
+#ifdef _PAGE_OLD
+ printdef("PAGE_OLD", _PAGE_OLD);
+#endif
+#ifdef _PAGE_CLEAN
+ printdef("PAGE_CLEAN", _PAGE_CLEAN);
+#endif
+ printdef("TSS_MEMMAP", (int)tss_memmap);
+ printdef("TSS_SAVE", (int)tss_save);
+#ifdef __HAS_tss_memcmap
+ printdef("TSS_MEMCMAP", (int)tss_memcmap);
+#endif
+#ifdef __HAS_addr_limit
+ printdef("ADDR_LIMIT", (int)addr_limit);
+#endif
+#ifdef __HAS_kernel_domain
+ printdef("KERNEL_DOMAIN", kernel_domain);
+#endif
+#ifdef __HAS_user_domain
+ printdef("USER_DOMAIN", user_domain);
+#endif
+ printdef("TSS_FPESAVE", (int)tss_fpesave);
+ printdef("MM", (int)mm);
+ printdef("PGD", (int)pgd);
+
+ printf("#define KSWI_BASE 0x900000\n");
+ printf("#define KSWI_SYS_BASE 0x9F0000\n");
+ printf("#define SYS_ERROR0 0x9F0000\n");
+ return 0;
+}
--- /dev/null
+/*
+ * *** THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT! ***
+ */
+unsigned long addr_limit = 56;
+#define __HAS_addr_limit
+unsigned long tss_memmap = 640;
+#define __HAS_tss_memmap
+unsigned long mm = 1676;
+#define __HAS_mm
+unsigned long pgd = 8;
+#define __HAS_pgd
+unsigned long tss_save = 636;
+#define __HAS_tss_save
+unsigned long tss_fpesave = 492;
+#define __HAS_tss_fpesave
+unsigned long tss_memcmap = 644;
+#define __HAS_tss_memcmap
--- /dev/null
+/*
+ * linux/arch/arm/lib/io.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#include <linux/autoconf.h>
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/hardware.h>
+
+ .text
+ .align
+
+#define OUT(reg) \
+ mov r8, reg, lsl $16 ;\
+ orr r8, r8, r8, lsr $16 ;\
+ str r8, [r3, r0, lsl $2] ;\
+ mov r8, reg, lsr $16 ;\
+ orr r8, r8, r8, lsl $16 ;\
+ str r8, [r3, r0, lsl $2]
+
+#define IN(reg) \
+ ldr reg, [r0] ;\
+ and reg, reg, ip ;\
+ ldr lr, [r0] ;\
+ orr reg, reg, lr, lsl $16
+
+ .equ pcio_base_high, PCIO_BASE & 0xff000000
+ .equ pcio_base_low, PCIO_BASE & 0x00ff0000
+ .equ io_base_high, IO_BASE & 0xff000000
+ .equ io_base_low, IO_BASE & 0x00ff0000
+
+ .equ addr_io_diff_hi, pcio_base_high - io_base_high
+ .equ addr_io_diff_lo, pcio_base_low - io_base_low
+
+ .macro addr reg, off
+ tst \off, #0x80000000
+ .if addr_io_diff_hi
+ movne \reg, #IO_BASE
+ moveq \reg, #pcio_base_high
+ .if pcio_base_low
+ addeq \reg, \reg, #pcio_base_low
+ .endif
+ .else
+ mov \reg, #IO_BASE
+ addeq \reg, \reg, #addr_io_diff_lo
+ .endif
+ .endm
+
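The `addr` macro above picks an I/O window from the top bit of the port number: bit 31 set selects IO_BASE, clear selects PCIO_BASE. A C model of that selection, with hypothetical base values standing in for the real PCIO_BASE/IO_BASE constants from <asm/hardware.h>:

```c
/* Hypothetical stand-ins for the platform's I/O window bases. */
#define EXAMPLE_IO_BASE   0xe0000000UL
#define EXAMPLE_PCIO_BASE 0xe0010000UL

/* Bit 31 of the port number selects between the two mapped windows,
 * mirroring the tst #0x80000000 / movne / moveq in the macro. */
static unsigned long io_window(unsigned long port)
{
    return (port & 0x80000000UL) ? EXAMPLE_IO_BASE : EXAMPLE_PCIO_BASE;
}
```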
+@ Purpose: read a block of data from a hardware register to memory.
+@ Proto : insw(int from_port, void *to, int len_in_words);
+@ Proto : inswb(int from_port, void *to, int len_in_bytes);
+@ Notes : increments to
+
+ENTRY(insw)
+ mov r2, r2, lsl#1
+ENTRY(inswb)
+ mov ip, sp
+ stmfd sp!, {r4 - r10, fp, ip, lr, pc}
+ sub fp, ip, #4
+ addr r3, r0
+ add r0, r3, r0, lsl #2
+ tst r1, #3
+ beq Linswok
+ tst r1, #1
+ bne Linsw_notaligned
+ cmp r2, #1
+ ldrge r4, [r0]
+ strgeb r4, [r1], #1
+ movgt r4, r4, LSR#8
+ strgtb r4, [r1], #1
+ ldmleea fp, {r4 - r10, fp, sp, pc}^
+ sub r2, r2, #2
+Linswok: mov ip, #0xFF
+ orr ip, ip, ip, lsl #8
+Linswlp: subs r2, r2, #64
+ bmi Linsw_toosmall
+ IN(r3)
+ IN(r4)
+ IN(r5)
+ IN(r6)
+ IN(r7)
+ IN(r8)
+ IN(r9)
+ IN(r10)
+ stmia r1!, {r3 - r10}
+ IN(r3)
+ IN(r4)
+ IN(r5)
+ IN(r6)
+ IN(r7)
+ IN(r8)
+ IN(r9)
+ IN(r10)
+ stmia r1!, {r3 - r10}
+ bne Linswlp
+ LOADREGS(ea, fp, {r4 - r10, fp, sp, pc})
+Linsw_toosmall:
+ adds r2, r2, #32
+ bmi Linsw_toosmall2
+Linsw2lp: IN(r3)
+ IN(r4)
+ IN(r5)
+ IN(r6)
+ IN(r7)
+ IN(r8)
+ IN(r9)
+ IN(r10)
+ stmia r1!, {r3 - r10}
+ LOADREGS(eqea, fp, {r4 - r10, fp, sp, pc})
+ b Linsw_notaligned
+Linsw_toosmall2:
+ add r2, r2, #32
+Linsw_notaligned:
+ cmp r2, #1
+ LOADREGS(ltea, fp, {r4 - r10, fp, sp, pc})
+ ldr r4, [r0]
+ strb r4, [r1], #1
+ movgt r4, r4, LSR#8
+ strgtb r4, [r1], #1
+ subs r2, r2, #2
+ bgt Linsw_notaligned
+ LOADREGS(ea, fp, {r4 - r10, fp, sp, pc})
+
+@ Purpose: write a block of data from memory to a hardware register.
+@ Proto : outsw(int to_reg, void *from, int len_in_words);
+@ Proto : outswb(int to_reg, void *from, int len_in_bytes);
+@ Notes : increments from
+
+ENTRY(outsw)
+ mov r2, r2, LSL#1
+ENTRY(outswb)
+ mov ip, sp
+ stmfd sp!, {r4 - r8, fp, ip, lr, pc}
+ sub fp, ip, #4
+ addr r3, r0
+ tst r1, #2
+ beq 1f
+ ldr r4, [r1], #2
+ mov r4, r4, lsl #16
+ orr r4, r4, r4, lsr #16
+ str r4, [r3, r0, lsl #2]
+ subs r2, r2, #2
+ LOADREGS(eqea, fp, {r4 - r8, fp, sp, pc})
+1: subs r2, r2, #32
+ blt 2f
+ ldmia r1!, {r4, r5, r6, r7}
+ OUT(r4)
+ OUT(r5)
+ OUT(r6)
+ OUT(r7)
+ ldmia r1!, {r4, r5, r6, r7}
+ OUT(r4)
+ OUT(r5)
+ OUT(r6)
+ OUT(r7)
+ bne 1b
+ LOADREGS(ea, fp, {r4 - r8, fp, sp, pc})
+2: adds r2, r2, #32
+ LOADREGS(eqea, fp, {r4 - r8, fp, sp, pc})
+3: ldr r4, [r1],#2
+ mov r4, r4, lsl#16
+ orr r4, r4, r4, lsr#16
+ str r4, [r3, r0, lsl#2]
+ subs r2, r2, #2
+ bgt 3b
+ LOADREGS(ea, fp, {r4 - r8, fp, sp, pc})
+
+@ Purpose: write a memc register
+@ Proto : void memc_write(int register, int value);
+@ Returns: nothing
+
+#if defined(CONFIG_CPU_ARM2) || defined(CONFIG_CPU_ARM3)
+ENTRY(memc_write)
+ cmp r0, #7
+ RETINSTR(movgt,pc,lr)
+ mov r0, r0, lsl #17
+ mov r1, r1, lsl #15
+ mov r1, r1, lsr #17
+ orr r0, r0, r1, lsl #2
+ add r0, r0, #0x03600000
+ strb r0, [r0]
+ RETINSTR(mov,pc,lr)
+#define CPSR2SPSR(rt)
+#else
+#define CPSR2SPSR(rt) \
+ mrs rt, cpsr; \
+ msr spsr, rt
+#endif
+
+@ Purpose: call an expansion card loader to read bytes.
+@ Proto : char ecard_loader_read(int offset, char *card_base, char *loader);
+@ Returns: byte read
+
+ENTRY(ecard_loader_read)
+ stmfd sp!, {r4 - r12, lr}
+ mov r11, r1
+ mov r1, r0
+ CPSR2SPSR(r0)
+ mov lr, pc
+ mov pc, r2
+ LOADREGS(fd, sp!, {r4 - r12, pc})
+
+@ Purpose: call an expansion card loader to reset the card
+@ Proto : void ecard_loader_reset(int card_base, char *loader);
+@ Returns: nothing
+
+ENTRY(ecard_loader_reset)
+ stmfd sp!, {r4 - r12, lr}
+ mov r11, r0
+ CPSR2SPSR(r0)
+ mov lr, pc
+ add pc, r1, #8
+ LOADREGS(fd, sp!, {r4 - r12, pc})
--- /dev/null
+/*
+ * linux/arch/arm/lib/io-ebsa.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+ .text
+ .align
+
+#define OUT(reg) \
+ mov r8, reg, lsl $16 ;\
+ orr r8, r8, r8, lsr $16 ;\
+ str r8, [r3, r0, lsl $2] ;\
+ mov r8, reg, lsr $16 ;\
+ orr r8, r8, r8, lsl $16 ;\
+ str r8, [r3, r0, lsl $2]
+
+#define IN(reg) \
+ ldr reg, [r0] ;\
+ and reg, reg, ip ;\
+ ldr lr, [r0] ;\
+ orr reg, reg, lr, lsl $16
+
+@ Purpose: read a block of data from a hardware register to memory.
+@ Proto : insw(int from_port, void *to, int len_in_words);
+@ Proto : inswb(int from_port, void *to, int len_in_bytes);
+@ Notes : increments to
+
+ENTRY(insw)
+ mov r2, r2, lsl#1
+ENTRY(inswb)
+ mov ip, sp
+ stmfd sp!, {r4 - r10, fp, ip, lr, pc}
+ sub fp, ip, #4
+ cmp r0, #0x00c00000
+ movge r3, #0
+ movlt r3, #0xf0000000
+ add r0, r3, r0, lsl #2
+ tst r1, #3
+ beq Linswok
+ tst r1, #1
+ bne Linsw_notaligned
+ cmp r2, #1
+ ldrge r4, [r0]
+ strgeb r4, [r1], #1
+ movgt r4, r4, LSR#8
+ strgtb r4, [r1], #1
+ ldmleea fp, {r4 - r10, fp, sp, pc}^
+ sub r2, r2, #2
+Linswok: mov ip, #0xFF
+ orr ip, ip, ip, lsl #8
+Linswlp: subs r2, r2, #64
+ bmi Linsw_toosmall
+ IN(r3)
+ IN(r4)
+ IN(r5)
+ IN(r6)
+ IN(r7)
+ IN(r8)
+ IN(r9)
+ IN(r10)
+ stmia r1!, {r3 - r10}
+ IN(r3)
+ IN(r4)
+ IN(r5)
+ IN(r6)
+ IN(r7)
+ IN(r8)
+ IN(r9)
+ IN(r10)
+ stmia r1!, {r3 - r10}
+ bne Linswlp
+ LOADREGS(ea, fp, {r4 - r10, fp, sp, pc})
+Linsw_toosmall:
+ adds r2, r2, #32
+ bmi Linsw_toosmall2
+Linsw2lp: IN(r3)
+ IN(r4)
+ IN(r5)
+ IN(r6)
+ IN(r7)
+ IN(r8)
+ IN(r9)
+ IN(r10)
+ stmia r1!, {r3 - r10}
+ LOADREGS(eqea, fp, {r4 - r10, fp, sp, pc})
+ b Linsw_notaligned
+Linsw_toosmall2:
+ add r2, r2, #32
+Linsw_notaligned:
+ cmp r2, #1
+ LOADREGS(ltea, fp, {r4 - r10, fp, sp, pc})
+ ldr r4, [r0]
+ strb r4, [r1], #1
+ movgt r4, r4, LSR#8
+ strgtb r4, [r1], #1
+ subs r2, r2, #2
+ bgt Linsw_notaligned
+ LOADREGS(ea, fp, {r4 - r10, fp, sp, pc})
+
+@ Purpose: write a block of data from memory to a hardware register.
+@ Proto : outsw(int to_reg, void *from, int len_in_words);
+@ Proto : outswb(int to_reg, void *from, int len_in_bytes);
+@ Notes : increments from
+
+ENTRY(outsw)
+ mov r2, r2, LSL#1
+ENTRY(outswb)
+ mov ip, sp
+ stmfd sp!, {r4 - r8, fp, ip, lr, pc}
+ sub fp, ip, #4
+ cmp r0, #0x00c00000
+ movge r3, #0
+ movlt r3, #0xf0000000
+ tst r1, #2
+ beq Loutsw32lp
+ ldr r4, [r1], #2
+ mov r4, r4, lsl #16
+ orr r4, r4, r4, lsr #16
+ str r4, [r3, r0, lsl #2]
+ sub r2, r2, #2
+ teq r2, #0
+ LOADREGS(eqea, fp, {r4 - r8, fp, sp, pc})
+Loutsw32lp: subs r2,r2,#32
+ blt Loutsw_toosmall
+ ldmia r1!,{r4,r5,r6,r7}
+ OUT(r4)
+ OUT(r5)
+ OUT(r6)
+ OUT(r7)
+ ldmia r1!,{r4,r5,r6,r7}
+ OUT(r4)
+ OUT(r5)
+ OUT(r6)
+ OUT(r7)
+ LOADREGS(eqea, fp, {r4 - r8, fp, sp, pc})
+ b Loutsw32lp
+Loutsw_toosmall:
+ adds r2,r2,#32
+ LOADREGS(eqea, fp, {r4 - r8, fp, sp, pc})
+Llpx: ldr r4,[r1],#2
+ mov r4,r4,LSL#16
+ orr r4,r4,r4,LSR#16
+ str r4,[r3,r0,LSL#2]
+ subs r2,r2,#2
+ bgt Llpx
+ LOADREGS(ea, fp, {r4 - r8, fp, sp, pc})
+
--- /dev/null
+/*
+ * linux/arch/arm/lib/ll_char_wr.S
+ *
+ * Copyright (C) 1995, 1996 Russell King.
+ *
+ * Speedups & 1bpp code (C) 1996 Philip Blundell & Russell King.
+ *
+ * 10-04-96 RMK Various cleanups & reduced register usage.
+ */
+
+@ Regs: [] = corruptible
+@ {} = used
+@ () = don't use
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+ .text
+
+#define BOLD 0x01
+#define ITALIC 0x02
+#define UNDERLINE 0x04
+#define FLASH 0x08
+#define INVERSE 0x10
+
+LC0: .word SYMBOL_NAME(bytes_per_char_h)
+ .word SYMBOL_NAME(video_size_row)
+ .word SYMBOL_NAME(cmap_80)
+ .word SYMBOL_NAME(con_charconvtable)
+
+ENTRY(ll_write_char)
+ stmfd sp!, {r4 - r7, lr}
+@
+@ Smashable regs: {r0 - r3}, [r4 - r7], (r8 - fp), [ip], (sp), [lr], (pc)
+@
+ eor ip, r1, #UNDERLINE << 24
+/*
+ * calculate colours
+ */
+ tst r1, #INVERSE << 24
+ moveq r2, r1, lsr #8
+ moveq r3, r1, lsr #16
+ movne r2, r1, lsr #16
+ movne r3, r1, lsr #8
+ and r3, r3, #255
+ and r2, r2, #255
+/*
+ * calculate offset into character table
+ */
+ and r1, r1, #255
+ mov r1, r1, lsl #3
+/*
+ * calculate offset required for each row [maybe I should make this an argument
+ * to this fn. Have to see what the register usage is like in the calling routines.]
+ */
+ adr r4, LC0
+ ldmia r4, {r4, r5, r6, lr}
+ ldr r4, [r4]
+ ldr r5, [r5]
+/*
+ * Go to resolution-dependent routine...
+ */
+ cmp r4, #4
+ blt Lrow1bpp
+ eor r2, r3, r2 @ Create eor mask to change colour from bg
+ orr r3, r3, r3, lsl #8 @ to fg.
+ orr r3, r3, r3, lsl #16
+ add r0, r0, r5, lsl #3 @ Move to bottom of character
+ add r1, r1, #7
+ ldrb r7, [r6, r1]
+ tst ip, #UNDERLINE << 24
+ eoreq r7, r7, #255
+ teq r4, #8
+ beq Lrow8bpplp
+@
+@ Smashable regs: {r0 - r3}, [r4], {r5 - r7}, (r8 - fp), [ip], (sp), {lr}, (pc)
+@
+ orr r3, r3, r3, lsl #4
+Lrow4bpplp: ldr r7, [lr, r7, lsl #2]
+ mul r7, r2, r7
+ tst r1, #7 @ avoid using r7 directly after
+ eor ip, r3, r7
+ str ip, [r0, -r5]!
+ LOADREGS(eqfd, sp!, {r4 - r7, pc})
+ sub r1, r1, #1
+ ldrb r7, [r6, r1]
+ ldr r7, [lr, r7, lsl #2]
+ mul r7, r2, r7
+ tst r1, #7 @ avoid using r7 directly after
+ eor ip, r3, r7
+ str ip, [r0, -r5]!
+ subne r1, r1, #1
+ ldrneb r7, [r6, r1]
+ bne Lrow4bpplp
+ LOADREGS(fd, sp!, {r4 - r7, pc})
+
+@
+@ Smashable regs: {r0 - r3}, [r4], {r5 - r7}, (r8 - fp), [ip], (sp), {lr}, (pc)
+@
+Lrow8bpplp: mov ip, r7, lsr #4
+ ldr ip, [lr, ip, lsl #2]
+ mul r4, r2, ip
+ and ip, r7, #15
+ eor r4, r3, r4
+ ldr ip, [lr, ip, lsl #2]
+ mul ip, r2, ip
+ tst r1, #7
+ eor ip, r3, ip
+ sub r0, r0, r5
+ stmia r0, {r4, ip}
+ LOADREGS(eqfd, sp!, {r4 - r7, pc})
+ sub r1, r1, #1
+ ldrb r7, [r6, r1]
+ mov ip, r7, lsr #4
+ ldr ip, [lr, ip, lsl #2]
+ mul r4, r2, ip
+ and ip, r7, #15
+ eor r4, r3, r4
+ ldr ip, [lr, ip, lsl #2]
+ mul ip, r2, ip
+ tst r1, #7
+ eor ip, r3, ip
+ sub r0, r0, r5
+ stmia r0, {r4, ip}
+ subne r1, r1, #1
+ ldrneb r7, [r6, r1]
+ bne Lrow8bpplp
+ LOADREGS(fd, sp!, {r4 - r7, pc})
+
+@
+@ Smashable regs: {r0 - r3}, [r4], {r5, r6}, [r7], (r8 - fp), [ip], (sp), [lr], (pc)
+@
+Lrow1bpp: add r6, r6, r1
+ ldmia r6, {r4, r7}
+ tst ip, #INVERSE << 24
+ mvnne r4, r4
+ mvnne r7, r7
+ strb r4, [r0], r5
+ mov r4, r4, lsr #8
+ strb r4, [r0], r5
+ mov r4, r4, lsr #8
+ strb r4, [r0], r5
+ mov r4, r4, lsr #8
+ strb r4, [r0], r5
+ strb r7, [r0], r5
+ mov r7, r7, lsr #8
+ strb r7, [r0], r5
+ mov r7, r7, lsr #8
+ strb r7, [r0], r5
+ mov r7, r7, lsr #8
+ tst ip, #UNDERLINE << 24
+ mvneq r7, r7
+ strb r7, [r0], r5
+ LOADREGS(fd, sp!, {r4 - r7, pc})
+
+ .bss
+ENTRY(con_charconvtable)
+ .space 1024
--- /dev/null
+/*
+ * linux/arch/arm/lib/loaders.S
+ *
+ * This file contains the ROM loaders for buggy cards
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+/*
+ * Oak SCSI
+ */
+
+ENTRY(oak_scsi_loader)
+ b Loak_scsi_read
+ .word 0
+Loak_scsi_reset: bic r10, r11, #0x00ff0000
+ ldr r2, [r10]
+ RETINSTR(mov,pc,lr)
+
+Loak_scsi_read: mov r2, r1, lsr #3
+ and r2, r2, #15 << 9
+ bic r10, r11, #0x00ff0000
+ ldr r2, [r10, r2]
+ mov r2, r1, lsl #20
+ ldrb r0, [r11, r2, lsr #18]
+ ldr r2, [r10]
+ RETINSTR(mov,pc,lr)
+
+ENTRY(atomwide_serial_loader)
+ b Latomwide_serial_read
+ .word 0
+Latomwide_serial_reset: mov r2, #0x3c00
+ strb r2, [r11, r2]
+ RETINSTR(mov,pc,lr)
+
+Latomwide_serial_read: cmp r1, #0x8000
+ RETINSTR(movhi,pc,lr)
+ add r0, r1, #0x800
+ mov r0, r0, lsr #11
+ mov r3, #0x3c00
+ strb r0, [r11, r3]
+ mov r2, r1, lsl #21
+ ldrb r0, [r11, r2, lsr #19]
+ strb r2, [r11, r3]
+ RETINSTR(mov,pc,lr)
+
+/*
+ * Cards we don't know about yet
+ */
+ENTRY(noloader)
+ mov r0, r0
+ mov r0, #0
+ RETINSTR(mov,pc,lr)
--- /dev/null
+/*
+ * linux/arch/arm/lib/segment.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ * Except memcpy/memmove routine.
+ */
+
+#include <asm/assembler.h>
+#include <linux/linkage.h>
+
+ .text
+#define ENTER \
+ mov ip,sp ;\
+ stmfd sp!,{r4-r9,fp,ip,lr,pc} ;\
+ sub fp,ip,#4
+
+#define EXIT \
+ LOADREGS(ea, fp, {r4 - r9, fp, sp, pc})
+
+#define EXITEQ \
+ LOADREGS(eqea, fp, {r4 - r9, fp, sp, pc})
+
+# Prototype: void memcpy(void *to,const void *from,unsigned long n);
+# ARM3: can't use memcopy here!
+
+ENTRY(memcpy)
+ENTRY(memmove)
+ ENTER
+ cmp r1, r0
+ bcc 19f
+ subs r2, r2, #4
+ blt 6f
+ ands ip, r0, #3
+ bne 7f
+ ands ip, r1, #3
+ bne 8f
+
+1: subs r2, r2, #8
+ blt 5f
+ subs r2, r2, #0x14
+ blt 3f
+2: ldmia r1!,{r3 - r9, ip}
+ stmia r0!,{r3 - r9, ip}
+ subs r2, r2, #32
+ bge 2b
+ cmn r2, #16
+ ldmgeia r1!, {r3 - r6}
+ stmgeia r0!, {r3 - r6}
+ subge r2, r2, #0x10
+3: adds r2, r2, #0x14
+4: ldmgeia r1!, {r3 - r5}
+ stmgeia r0!, {r3 - r5}
+ subges r2, r2, #12
+ bge 4b
+5: adds r2, r2, #8
+ blt 6f
+ subs r2, r2, #4
+ ldrlt r3, [r1], #4
+ strlt r3, [r0], #4
+ ldmgeia r1!, {r3, r4}
+ stmgeia r0!, {r3, r4}
+ subge r2, r2, #4
+
+6: adds r2, r2, #4
+ EXITEQ
+ cmp r2, #2
+ ldrb r3, [r1], #1
+ strb r3, [r0], #1
+ ldrgeb r3, [r1], #1
+ strgeb r3, [r0], #1
+ ldrgtb r3, [r1], #1
+ strgtb r3, [r0], #1
+ EXIT
+
+7: rsb ip, ip, #4
+ cmp ip, #2
+ ldrb r3, [r1], #1
+ strb r3, [r0], #1
+ ldrgeb r3, [r1], #1
+ strgeb r3, [r0], #1
+ ldrgtb r3, [r1], #1
+ strgtb r3, [r0], #1
+ subs r2, r2, ip
+ blt 6b
+ ands ip, r1, #3
+ beq 1b
+8: bic r1, r1, #3
+ ldr r7, [r1], #4
+ cmp ip, #2
+ bgt 15f
+ beq 11f
+ cmp r2, #12
+ blt 10f
+ sub r2, r2, #12
+9: mov r3, r7, lsr #8
+ ldmia r1!, {r4 - r7}
+ orr r3, r3, r4, lsl #24
+ mov r4, r4, lsr #8
+ orr r4, r4, r5, lsl #24
+ mov r5, r5, lsr #8
+ orr r5, r5, r6, lsl #24
+ mov r6, r6, lsr #8
+ orr r6, r6, r7, lsl #24
+ stmia r0!, {r3 - r6}
+ subs r2, r2, #16
+ bge 9b
+ adds r2, r2, #12
+ blt 1b
+10: mov r3, r7, lsr #8
+ ldr r7, [r1], #4
+ orr r3, r3, r7, lsl #24
+ str r3, [r0], #4
+ subs r2, r2, #4
+ bge 10b
+ sub r1, r1, #3
+ b 6b
+
+11: cmp r2, #12
+ blt 13f
+ sub r2, r2, #12
+12: mov r3, r7, lsr #16
+ ldmia r1!, {r4 - r7}
+ orr r3, r3, r4, lsl #16
+ mov r4, r4, lsr #16
+ orr r4, r4, r5, lsl #16
+ mov r5, r5, lsr #16
+ orr r5, r5, r6, lsl #16
+ mov r6, r6, lsr #16
+ orr r6, r6, r7,LSL#16
+ stmia r0!, {r3 - r6}
+ subs r2, r2, #16
+ bge 12b
+ adds r2, r2, #12
+ blt 14f
+13: mov r3, r7, lsr #16
+ ldr r7, [r1], #4
+ orr r3, r3, r7, lsl #16
+ str r3, [r0], #4
+ subs r2, r2, #4
+ bge 13b
+14: sub r1, r1, #2
+ b 6b
+
+15: cmp r2, #12
+ blt 17f
+ sub r2, r2, #12
+16: mov r3, r7, lsr #24
+ ldmia r1!,{r4 - r7}
+ orr r3, r3, r4, lsl #8
+ mov r4, r4, lsr #24
+ orr r4, r4, r5, lsl #8
+ mov r5, r5, lsr #24
+ orr r5, r5, r6, lsl #8
+ mov r6, r6, lsr #24
+ orr r6, r6, r7, lsl #8
+ stmia r0!, {r3 - r6}
+ subs r2, r2, #16
+ bge 16b
+ adds r2, r2, #12
+ blt 18f
+17: mov r3, r7, lsr #24
+ ldr r7, [r1], #4
+ orr r3, r3, r7, lsl#8
+ str r3, [r0], #4
+ subs r2, r2, #4
+ bge 17b
+18: sub r1, r1, #1
+ b 6b
+
+
+19: add r1, r1, r2
+ add r0, r0, r2
+ subs r2, r2, #4
+ blt 24f
+ ands ip, r0, #3
+ bne 25f
+ ands ip, r1, #3
+ bne 26f
+
+20: subs r2, r2, #8
+ blt 23f
+ subs r2, r2, #0x14
+ blt 22f
+21: ldmdb r1!, {r3 - r9, ip}
+ stmdb r0!, {r3 - r9, ip}
+ subs r2, r2, #32
+ bge 21b
+22: cmn r2, #16
+ ldmgedb r1!, {r3 - r6}
+ stmgedb r0!, {r3 - r6}
+ subge r2, r2, #16
+ adds r2, r2, #20
+ ldmgedb r1!, {r3 - r5}
+ stmgedb r0!, {r3 - r5}
+ subge r2, r2, #12
+23: adds r2, r2, #8
+ blt 24f
+ subs r2, r2, #4
+ ldrlt r3, [r1, #-4]!
+ strlt r3, [r0, #-4]!
+ ldmgedb r1!, {r3, r4}
+ stmgedb r0!, {r3, r4}
+ subge r2, r2, #4
+
+24: adds r2, r2, #4
+ EXITEQ
+ cmp r2, #2
+ ldrb r3, [r1, #-1]!
+ strb r3, [r0, #-1]!
+ ldrgeb r3, [r1, #-1]!
+ strgeb r3, [r0, #-1]!
+ ldrgtb r3, [r1, #-1]!
+ strgtb r3, [r0, #-1]!
+ EXIT
+
+25: cmp ip, #2
+ ldrb r3, [r1, #-1]!
+ strb r3, [r0, #-1]!
+ ldrgeb r3, [r1, #-1]!
+ strgeb r3, [r0, #-1]!
+ ldrgtb r3, [r1, #-1]!
+ strgtb r3, [r0, #-1]!
+ subs r2, r2, ip
+ blt 24b
+ ands ip, r1, #3
+ beq 20b
+
+26: bic r1, r1, #3
+ ldr r3, [r1], #0
+ cmp ip, #2
+ blt 34f
+ beq 30f
+ cmp r2, #12
+ blt 28f
+ sub r2, r2, #12
+27: mov r7, r3, lsl #8
+ ldmdb r1!, {r3, r4, r5, r6}
+ orr r7, r7, r6, lsr #24
+ mov r6, r6, lsl #8
+ orr r6, r6, r5, lsr #24
+ mov r5, r5, lsl #8
+ orr r5, r5, r4, lsr #24
+ mov r4, r4, lsl #8
+ orr r4, r4, r3, lsr #24
+ stmdb r0!, {r4, r5, r6, r7}
+ subs r2, r2, #16
+ bge 27b
+ adds r2, r2, #12
+ blt 29f
+28: mov ip, r3, lsl #8
+ ldr r3, [r1, #-4]!
+ orr ip, ip, r3, lsr #24
+ str ip, [r0, #-4]!
+ subs r2, r2, #4
+ bge 28b
+29: add r1, r1, #3
+ b 24b
+
+30: cmp r2, #12
+ blt 32f
+ sub r2, r2, #12
+31: mov r7, r3, lsl #16
+ ldmdb r1!, {r3, r4, r5, r6}
+ orr r7, r7, r6, lsr #16
+ mov r6, r6, lsl #16
+ orr r6, r6, r5, lsr #16
+ mov r5, r5, lsl #16
+ orr r5, r5, r4, lsr #16
+ mov r4, r4, lsl #16
+ orr r4, r4, r3, lsr #16
+ stmdb r0!, {r4, r5, r6, r7}
+ subs r2, r2, #16
+ bge 31b
+ adds r2, r2, #12
+ blt 33f
+32: mov ip, r3, lsl #16
+ ldr r3, [r1, #-4]!
+ orr ip, ip, r3, lsr #16
+ str ip, [r0, #-4]!
+ subs r2, r2, #4
+ bge 32b
+33: add r1, r1, #2
+ b 24b
+
+34: cmp r2, #12
+ blt 36f
+ sub r2, r2, #12
+35: mov r7, r3, lsl #24
+ ldmdb r1!, {r3, r4, r5, r6}
+ orr r7, r7, r6, lsr #8
+ mov r6, r6, lsl #24
+ orr r6, r6, r5, lsr #8
+ mov r5, r5, lsl #24
+ orr r5, r5, r4, lsr #8
+ mov r4, r4, lsl #24
+ orr r4, r4, r3, lsr #8
+ stmdb r0!, {r4, r5, r6, r7}
+ subs r2, r2, #16
+ bge 35b
+ adds r2, r2, #12
+ blt 37f
+36: mov ip, r3, lsl #24
+ ldr r3, [r1, #-4]!
+ orr ip, ip, r3, lsr #8
+ str ip, [r0, #-4]!
+ subs r2, r2, #4
+ bge 36b
+37: add r1, r1, #1
+ b 24b
+
+ .align
+
--- /dev/null
+/*
+ * linux/arch/arm/lib/memfastset.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+ .text
+@ Prototype: void memsetl (unsigned long *d, unsigned long c, size_t n);
+
+ENTRY(memsetl)
+ stmfd sp!, {lr}
+ cmp r2, #16
+ blt 5f
+ mov r3, r1
+ mov ip, r1
+ mov lr, r1
+ subs r2, r2, #32
+ bmi 2f
+1: stmia r0!, {r1, r3, ip, lr}
+ stmia r0!, {r1, r3, ip, lr}
+ LOADREGS(eqfd, sp!, {pc})
+ subs r2, r2, #32
+ bpl 1b
+2: adds r2, r2, #16
+ bmi 4f
+3: stmia r0!, {r1, r3, ip, lr}
+ LOADREGS(eqfd, sp!, {pc})
+ subs r2, r2, #16
+ bpl 3b
+4: add r2, r2, #16
+5: subs r2, r2, #4
+ strge r1, [r0], #4
+ bgt 5b
+ LOADREGS(fd, sp!, {pc})
--- /dev/null
+/*
+ * linux/arch/arm/lib/string.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+ .text
+# Prototype: void memzero(void *ptr, size_t n);
+
+@ r0 = pointer, r1 = length
+ .global memzero
+memzero: stmfd sp!, {lr}
+ mov r2, #0
+ mov r3, #0
+ mov ip, #0
+ mov lr, #0
+1: subs r1, r1, #4*8
+ stmgeia r0!, {r2, r3, ip, lr}
+ stmgeia r0!, {r2, r3, ip, lr}
+ bgt 1b
+ LOADREGS(fd, sp!, {pc})
+
+ .global __page_memcpy
+__page_memcpy: stmfd sp!, {r4, r5, lr}
+1: subs r2, r2, #4*8
+ ldmgeia r1!, {r3, r4, r5, ip}
+ stmgeia r0!, {r3, r4, r5, ip}
+ ldmgeia r1!, {r3, r4, r5, ip}
+ stmgeia r0!, {r3, r4, r5, ip}
+ bgt 1b
+ LOADREGS(fd, sp!, {r4, r5, pc})
+
+ .global memset
+memset: mov r3, r0
+ cmp r2, #16
+ blt 6f
+ ands ip, r3, #3
+ beq 1f
+ cmp ip, #2
+ strltb r1, [r3], #1 @ Align destination
+ strleb r1, [r3], #1
+ strb r1, [r3], #1
+ rsb ip, ip, #4
+ sub r2, r2, ip
+1: orr r1, r1, r1, lsl #8
+ orr r1, r1, r1, lsl #16
+ cmp r2, #256
+ blt 4f
+ stmfd sp!, {r4, r5, lr}
+ mov r4, r1
+ mov r5, r1
+ mov lr, r1
+ mov ip, r2, lsr #6
+ sub r2, r2, ip, lsl #6
+2: stmia r3!, {r1, r4, r5, lr} @ 64 bytes at a time.
+ stmia r3!, {r1, r4, r5, lr}
+ stmia r3!, {r1, r4, r5, lr}
+ stmia r3!, {r1, r4, r5, lr}
+ subs ip, ip, #1
+ bne 2b
+ teq r2, #0
+ LOADREGS(eqfd, sp!, {r4, r5, pc}) @ Now <64 bytes to go.
+ tst r2, #32
+ stmneia r3!, {r1, r4, r5, lr}
+ stmneia r3!, {r1, r4, r5, lr}
+ tst r2, #16
+ stmneia r3!, {r1, r4, r5, lr}
+ ldmia sp!, {r4, r5}
+3: tst r2, #8
+ stmneia r3!, {r1, lr}
+ tst r2, #4
+ strne r1, [r3], #4
+ tst r2, #2
+ strneb r1, [r3], #1
+ strneb r1, [r3], #1
+ tst r2, #1
+ strneb r1, [r3], #1
+ LOADREGS(fd, sp!, {pc})
+
+4: movs ip, r2, lsr #3
+ beq 3b
+ sub r2, r2, ip, lsl #3
+ stmfd sp!, {lr}
+ mov lr, r1
+ subs ip, ip, #4
+5: stmgeia r3!, {r1, lr}
+ stmgeia r3!, {r1, lr}
+ stmgeia r3!, {r1, lr}
+ stmgeia r3!, {r1, lr}
+ subges ip, ip, #4
+ bge 5b
+ tst ip, #2
+ stmneia r3!, {r1, lr}
+ stmneia r3!, {r1, lr}
+ tst ip, #1
+ stmneia r3!, {r1, lr}
+ teq r2, #0
+ LOADREGS(eqfd, sp!, {pc})
+ b 3b
+
+6: subs r2, r2, #1
+ strgeb r1, [r3], #1
+ bgt 6b
+ RETINSTR(mov, pc, lr)
+
+# Prototype: char *strrchr(const char *s, char c);
+ENTRY(strrchr)
+ stmfd sp!, {lr}
+ mov r3, #0
+1: ldrb r2, [r0], #1
+ teq r2, r1
+ moveq r3, r0
+ teq r2, #0
+ bne 1b
+ mov r0, r3
+ LOADREGS(fd, sp!, {pc})
+
+ENTRY(strchr)
+ stmfd sp!,{lr}
+ mov r3, #0
+1: ldrb r2, [r0], #1
+ teq r2, r1
+ teqne r2, #0
+ bne 1b
+ teq r2, #0
+ moveq r0, #0
+ subne r0, r0, #1
+ LOADREGS(fd, sp!, {pc})
+
+ENTRY(memchr)
+ stmfd sp!, {lr}
+1: ldrb r3, [r0], #1
+ teq r3, r1
+ beq 2f
+ subs r2, r2, #1
+ bpl 1b
+2: movne r0, #0
+ subeq r0, r0, #1
+ LOADREGS(fd, sp!, {pc})
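The memset routine in this file follows a standard three-phase strategy: align the destination to a word boundary, replicate the fill byte into a 32-bit word (the two `orr ... lsl` instructions), store word-at-a-time in bursts, then finish the sub-word tail. A simplified C model of that strategy, not the kernel code itself:

```c
#include <stddef.h>
#include <stdint.h>

static void *memset_sketch(void *dst, int c, size_t n)
{
    unsigned char *p = dst;
    uint32_t word = (uint8_t)c;
    word |= word << 8;                  /* bb -> bbbb: replicate the byte */
    word |= word << 16;                 /* into all four byte lanes       */

    while (n && ((uintptr_t)p & 3)) {   /* align to a word boundary */
        *p++ = (unsigned char)c;
        n--;
    }
    while (n >= 4) {                    /* aligned word-at-a-time fill */
        *(uint32_t *)p = word;
        p += 4;
        n -= 4;
    }
    while (n--)                         /* sub-word tail */
        *p++ = (unsigned char)c;
    return dst;
}
```
The assembly amortizes further by storing four registers per stmia (up to 64 bytes per loop), but the phase structure is the same.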
--- /dev/null
+/*
+ * linux/arch/arm/lib/system.S
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ *
+ * 07/06/96: Now support tasks running in SVC mode.
+ */
+#include <linux/linkage.h>
+#include <linux/config.h>
+#include <asm/assembler.h>
+
+ .text
+
+ENTRY(abort)
+ adr r0, .abort_msg
+ mov r1, lr
+ b SYMBOL_NAME(panic)
+
+.abort_msg: .ascii "Eek! Got to an abort() from %p! "
+ .ascii "(Please report to rmk@ecs.soton.ac.uk)\n\0"
+ .align
--- /dev/null
+#include <stdio.h>
+#include <string.h>
+
+char buffer[1036];
+char buffer2[1036];
+
+int main ()
+{
+ char *p;
+ int i, o, o2, l;
+
+ printf ("Testing memset\n");
+ for (l = 1; l < 1020; l ++) {
+ for (o = 0; o < 4; o++) {
+ p = buffer + o + 4;
+ for (i = 0; i < l + 12; i++)
+ buffer[i] = 0x55;
+
+ memset (p, 0xaa, l);
+
+ for (i = 0; i < l; i++)
+ if (p[i] != 0xaa)
+ printf ("Error: %p+%d\n", (void *)p, i);
+ if (p[-1] != 0x55 || p[-2] != 0x55 || p[-3] != 0x55 || p[-4] != 0x55)
+ printf ("Error before %p\n", (void *)p);
+ if (p[l] != 0x55 || p[l+1] != 0x55 || p[l+2] != 0x55 || p[l+3] != 0x55)
+ printf ("Error at end: %p: %02X %02X %02X %02X\n", p+l, p[l], p[l+1], p[l+2], p[l+3]);
+ }
+ }
+
+ printf ("Testing memcpy s > d\n");
+ for (l = 1; l < 1020; l++) {
+ for (o = 0; o < 4; o++) {
+ for (o2 = 0; o2 < 4; o2++) {
+ char *d, *s;
+
+ for (i = 0; i < l + 12; i++)
+ buffer[i] = (i & 0x3f) + 0x40;
+ for (i = 0; i < 1036; i++)
+ buffer2[i] = 0;
+
+ s = buffer + o;
+ d = buffer2 + o2 + 4;
+
+ memcpy (d, s, l);
+
+ for (i = 0; i < l; i++)
+ if (s[i] != d[i])
+ printf ("Error at %p+%d -> %p+%d (%02X != %02X)\n", (void *)s, i, (void *)d, i, s[i], d[i]);
+ if (d[-1] || d[-2] || d[-3] || d[-4])
+ printf ("Error before %p\n", (void *)d);
+ if (d[l] || d[l+1] || d[l+2] || d[l+3])
+ printf ("Error after %p\n", (void *)(d + l));
+ }
+ }
+ }
+
+ printf ("Testing memcpy s < d\n");
+ for (l = 1; l < 1020; l++) {
+ for (o = 0; o < 4; o++) {
+ for (o2 = 0; o2 < 4; o2++) {
+ char *d, *s;
+
+ for (i = 0; i < l + 12; i++)
+ buffer2[i] = (i & 0x3f) + 0x40;
+ for (i = 0; i < 1036; i++)
+ buffer[i] = 0;
+
+ s = buffer2 + o;
+ d = buffer + o2 + 4;
+
+ memcpy (d, s, l);
+
+ for (i = 0; i < l; i++)
+ if (s[i] != d[i])
+ printf ("Error at %p+%d -> %p+%d (%02X != %02X)\n", (void *)s, i, (void *)d, i, s[i], d[i]);
+ if (d[-1] || d[-2] || d[-3] || d[-4])
+ printf ("Error before %p\n", (void *)d);
+ if (d[l] || d[l+1] || d[l+2] || d[l+3])
+ printf ("Error after %p\n", (void *)(d + l));
+ }
+ }
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * arch/arm/lib/uaccess-armo.S
+ *
+ * Copyright (C) 1998 Russell King
+ *
+ * Note! Some code fragments found in here have a special calling
+ * convention - they are not APCS compliant!
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+ .text
+
+#define USER(x...) \
+9999: x; \
+ .section __ex_table,"a"; \
+ .align 3; \
+ .long 9999b,9001f; \
+ .previous
+
+ .globl SYMBOL_NAME(uaccess_user)
+SYMBOL_NAME(uaccess_user):
+ .word uaccess_user_put_byte
+ .word uaccess_user_get_byte
+ .word uaccess_user_put_half
+ .word uaccess_user_get_half
+ .word uaccess_user_put_word
+ .word uaccess_user_get_word
+ .word __arch_copy_from_user
+ .word __arch_copy_to_user
+ .word __arch_clear_user
+ .word __arch_strncpy_from_user
+ .word __arch_strlen_user
+
+
+@ In : r0 = x, r1 = addr, r2 = error
+@ Out: r2 = error
+uaccess_user_put_byte:
+ stmfd sp!, {lr}
+USER( strbt r0, [r1])
+ ldmfd sp!, {pc}^
+
+@ In : r0 = x, r1 = addr, r2 = error
+@ Out: r2 = error
+uaccess_user_put_half:
+ stmfd sp!, {lr}
+USER( strbt r0, [r1], #1)
+ mov r0, r0, lsr #8
+USER( strbt r0, [r1])
+ ldmfd sp!, {pc}^
+
+@ In : r0 = x, r1 = addr, r2 = error
+@ Out: r2 = error
+uaccess_user_put_word:
+ stmfd sp!, {lr}
+USER( strt r0, [r1])
+ ldmfd sp!, {pc}^
+
+9001: mov r2, #-EFAULT
+ ldmfd sp!, {pc}^
+
+@ In : r0 = addr, r1 = error
+@ Out: r0 = x, r1 = error
+uaccess_user_get_byte:
+ stmfd sp!, {lr}
+USER( ldrbt r0, [r0])
+ ldmfd sp!, {pc}^
+
+@ In : r0 = addr, r1 = error
+@ Out: r0 = x, r1 = error
+uaccess_user_get_half:
+ stmfd sp!, {lr}
+USER( ldrt r0, [r0])
+ mov r0, r0, lsl #16
+ mov r0, r0, lsr #16
+ ldmfd sp!, {pc}^
+
+@ In : r0 = addr, r1 = error
+@ Out: r0 = x, r1 = error
+uaccess_user_get_word:
+ stmfd sp!, {lr}
+USER( ldrt r0, [r0])
+ ldmfd sp!, {pc}^
+
+9001: mov r1, #-EFAULT
+ ldmfd sp!, {pc}^
+
+
+
+ .globl SYMBOL_NAME(uaccess_kernel)
+SYMBOL_NAME(uaccess_kernel):
+ .word uaccess_kernel_put_byte
+ .word uaccess_kernel_get_byte
+ .word uaccess_kernel_put_half
+ .word uaccess_kernel_get_half
+ .word uaccess_kernel_put_word
+ .word uaccess_kernel_get_word
+ .word uaccess_kernel_copy
+ .word uaccess_kernel_copy
+ .word uaccess_kernel_clear
+ .word uaccess_kernel_strncpy_from
+ .word uaccess_kernel_strlen
+
+@ In : r0 = x, r1 = addr, r2 = error
+@ Out: r2 = error
+uaccess_kernel_put_byte:
+ stmfd sp!, {lr}
+ strb r0, [r1]
+ ldmfd sp!, {pc}^
+
+@ In : r0 = x, r1 = addr, r2 = error
+@ Out: r2 = error
+uaccess_kernel_put_half:
+ stmfd sp!, {lr}
+ strb r0, [r1]
+ mov r0, r0, lsr #8
+ strb r0, [r1, #1]
+ ldmfd sp!, {pc}^
+
+@ In : r0 = x, r1 = addr, r2 = error
+@ Out: r2 = error
+uaccess_kernel_put_word:
+ stmfd sp!, {lr}
+ str r0, [r1]
+ ldmfd sp!, {pc}^
+
+@ In : r0 = addr, r1 = error
+@ Out: r0 = x, r1 = error
+uaccess_kernel_get_byte:
+ stmfd sp!, {lr}
+ ldrb r0, [r0]
+ ldmfd sp!, {pc}^
+
+@ In : r0 = addr, r1 = error
+@ Out: r0 = x, r1 = error
+uaccess_kernel_get_half:
+ stmfd sp!, {lr}
+ ldr r0, [r0]
+ mov r0, r0, lsl #16
+ mov r0, r0, lsr #16
+ ldmfd sp!, {pc}^
+
+@ In : r0 = addr, r1 = error
+@ Out: r0 = x, r1 = error
+uaccess_kernel_get_word:
+ stmfd sp!, {lr}
+ ldr r0, [r0]
+ ldmfd sp!, {pc}^
+
+
+/* Prototype: int uaccess_kernel_copy(void *to, const char *from, size_t n)
+ * Purpose : copy a block to kernel memory from kernel memory
+ * Params : to - kernel memory
+ * : from - kernel memory
+ * : n - number of bytes to copy
+ * Returns : Number of bytes NOT copied.
+ */
+uaccess_kernel_copy:
+ stmfd sp!, {lr}
+ bl SYMBOL_NAME(memcpy)
+ mov r0, #0
+ ldmfd sp!, {pc}^
+
+/* Prototype: int uaccess_kernel_clear(void *addr, size_t sz)
+ * Purpose : clear some kernel memory
+ * Params : addr - kernel memory address to clear
+ * : sz - number of bytes to clear
+ * Returns : number of bytes NOT cleared
+ */
+uaccess_kernel_clear:
+ stmfd sp!, {lr}
+ mov r2, #0
+ cmp r1, #4
+ blt 2f
+ ands ip, r0, #3
+ beq 1f
+ cmp ip, #1
+ strb r2, [r0], #1
+ strleb r2, [r0], #1
+ strltb r2, [r0], #1
+ rsb ip, ip, #4
+ sub r1, r1, ip @ 7 6 5 4 3 2 1
+1: subs r1, r1, #8 @ -1 -2 -3 -4 -5 -6 -7
+ bmi 2f
+ str r2, [r0], #4
+ str r2, [r0], #4
+ b 1b
+2: adds r1, r1, #4 @ 3 2 1 0 -1 -2 -3
+ strpl r2, [r0], #4
+ tst r1, #2 @ 1x 1x 0x 0x 1x 1x 0x
+ strneb r2, [r0], #1
+ strneb r2, [r0], #1
+ tst r1, #1 @ x1 x0 x1 x0 x1 x0 x1
+ strneb r2, [r0], #1
+ mov r0, #0
+ ldmfd sp!, {pc}^
+
+/* Prototype: size_t uaccess_kernel_strncpy_from(char *dst, char *src, size_t len)
+ * Purpose : copy a string from kernel memory to kernel memory
+ * Params : dst - kernel memory destination
+ * : src - kernel memory source
+ * : len - maximum length of string
+ * Returns : number of characters copied
+ */
+uaccess_kernel_strncpy_from:
+ stmfd sp!, {lr}
+ mov ip, r2
+1: subs r2, r2, #1
+ bmi 2f
+ ldrb r3, [r1], #1
+ strb r3, [r0], #1
+ teq r3, #0
+ bne 1b
+2: subs r0, ip, r2
+ ldmfd sp!, {pc}^
+
+/* Prototype: int uaccess_kernel_strlen(char *str)
+ * Purpose : get length of a string in kernel memory
+ * Params : str - address of string in kernel memory
+ * Returns : length of string *including terminator*, or zero on error
+ */
+uaccess_kernel_strlen:
+ stmfd sp!, {lr}
+ mov r2, r0
+1: ldrb r1, [r0], #1
+ teq r1, #0
+ bne 1b
+ sub r0, r0, r2
+ ldmfd sp!, {pc}^
+
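uaccess_user and uaccess_kernel above are parallel tables of accessor routines; switching address space amounts to swapping which table the generic get/put helpers dispatch through. A minimal user-space C sketch of that dispatch pattern — the names, the struct layout and the -14 (-EFAULT) value are illustrative, not the real kernel interface:

```c
#include <assert.h>
#include <stddef.h>

/* A vtable of memory accessors, analogous to uaccess_user/uaccess_kernel:
 * swapping current_ops (as set_fs() would) makes the same call sites reach
 * checked or unchecked accessors.  Names are hypothetical. */
struct uaccess_ops {
    int (*put_byte)(unsigned char x, unsigned char *addr);
    int (*get_byte)(const unsigned char *addr, unsigned char *x);
};

static int checked_put(unsigned char x, unsigned char *addr)
{
    if (!addr)
        return -14;     /* stand-in for -EFAULT on a bad "user" pointer */
    *addr = x;
    return 0;
}

static int checked_get(const unsigned char *addr, unsigned char *x)
{
    if (!addr)
        return -14;
    *x = *addr;
    return 0;
}

static const struct uaccess_ops user_ops = { checked_put, checked_get };
/* set_fs() would repoint this at a kernel_ops table */
static const struct uaccess_ops *current_ops = &user_ops;

static int put_user_byte(unsigned char x, unsigned char *addr)
{
    return current_ops->put_byte(x, addr);  /* one indirection per access */
}
```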
--- /dev/null
+/*
+ * linux/arch/arm/lib/uaccess.S
+ *
+ * Copyright (C) 1995, 1996,1997,1998 Russell King
+ *
+ * Routines to block copy data to/from user memory
+ * These are highly optimised both for the 4k page size
+ * and for various alignments.
+ */
+#include <linux/autoconf.h>
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/errno.h>
+
+ .text
+
+#define USER(x...) \
+9999: x; \
+ .section __ex_table,"a"; \
+ .align 3; \
+ .long 9999b,9001f; \
+ .previous
+
+#define PAGE_SHIFT 12
+
+/* Prototype: unsigned long __arch_copy_to_user(void *to, const char *from, size_t n)
+ * Purpose : copy a block to user memory from kernel memory
+ * Params : to - user memory
+ * : from - kernel memory
+ * : n - number of bytes to copy
+ * Returns : Number of bytes NOT copied.
+ */
+
+.c2u_dest_not_aligned:
+ rsb ip, ip, #4
+ cmp ip, #2
+ ldrb r3, [r1], #1
+USER( strbt r3, [r0], #1) // May fault
+ ldrgeb r3, [r1], #1
+USER( strgebt r3, [r0], #1) // May fault
+ ldrgtb r3, [r1], #1
+USER( strgtbt r3, [r0], #1) // May fault
+ sub r2, r2, ip
+ b .c2u_dest_aligned
+
+ENTRY(__arch_copy_to_user)
+ stmfd sp!, {r2, r4 - r7, lr}
+ cmp r2, #4
+ blt .c2u_not_enough
+ ands ip, r0, #3
+ bne .c2u_dest_not_aligned
+.c2u_dest_aligned:
+
+ ands ip, r1, #3
+ bne .c2u_src_not_aligned
+/*
+ * Seeing as there has to be at least 8 bytes to copy, we can
+ * copy one word, and force a user-mode page fault...
+ */
+
+.c2u_0fupi: subs r2, r2, #4
+ addmi ip, r2, #4
+ bmi .c2u_0nowords
+ ldr r3, [r1], #4
+USER( strt r3, [r0], #4) // May fault
+ mov ip, r0, lsl #32 - PAGE_SHIFT // On each page, use a ld/st??t instruction
+ rsb ip, ip, #0
+ movs ip, ip, lsr #32 - PAGE_SHIFT
+ beq .c2u_0fupi
+/*
+ * ip = max no. of bytes to copy before needing another "strt" insn
+ */
+ cmp r2, ip
+ movlt ip, r2
+ sub r2, r2, ip
+ subs ip, ip, #32
+ blt .c2u_0rem8lp
+
+.c2u_0cpy8lp: ldmia r1!, {r3 - r6}
+ stmia r0!, {r3 - r6} // Shouldn't fault
+ ldmia r1!, {r3 - r6}
+ stmia r0!, {r3 - r6} // Shouldn't fault
+ subs ip, ip, #32
+ bpl .c2u_0cpy8lp
+.c2u_0rem8lp: cmn ip, #16
+ ldmgeia r1!, {r3 - r6}
+ stmgeia r0!, {r3 - r6} // Shouldn't fault
+ tst ip, #8
+ ldmneia r1!, {r3 - r4}
+ stmneia r0!, {r3 - r4} // Shouldn't fault
+ tst ip, #4
+ ldrne r3, [r1], #4
+ strnet r3, [r0], #4 // Shouldn't fault
+ ands ip, ip, #3
+ beq .c2u_0fupi
+.c2u_0nowords: teq ip, #0
+ beq .c2u_finished
+.c2u_nowords: cmp ip, #2
+ ldrb r3, [r1], #1
+USER( strbt r3, [r0], #1) // May fault
+ ldrgeb r3, [r1], #1
+USER( strgebt r3, [r0], #1) // May fault
+ ldrgtb r3, [r1], #1
+USER( strgtbt r3, [r0], #1) // May fault
+ b .c2u_finished
+
+.c2u_not_enough:
+ movs ip, r2
+ bne .c2u_nowords
+.c2u_finished: mov r0, #0
+ LOADREGS(fd,sp!,{r2, r4 - r7, pc})
+
+.c2u_src_not_aligned:
+ bic r1, r1, #3
+ ldr r7, [r1], #4
+ cmp ip, #2
+ bgt .c2u_3fupi
+ beq .c2u_2fupi
+.c2u_1fupi: subs r2, r2, #4
+ addmi ip, r2, #4
+ bmi .c2u_1nowords
+ mov r3, r7, lsr #8
+ ldr r7, [r1], #4
+ orr r3, r3, r7, lsl #24
+USER( strt r3, [r0], #4) // May fault
+ mov ip, r0, lsl #32 - PAGE_SHIFT
+ rsb ip, ip, #0
+ movs ip, ip, lsr #32 - PAGE_SHIFT
+ beq .c2u_1fupi
+ cmp r2, ip
+ movlt ip, r2
+ sub r2, r2, ip
+ subs ip, ip, #16
+ blt .c2u_1rem8lp
+
+.c2u_1cpy8lp: mov r3, r7, lsr #8
+ ldmia r1!, {r4 - r7}
+ orr r3, r3, r4, lsl #24
+ mov r4, r4, lsr #8
+ orr r4, r4, r5, lsl #24
+ mov r5, r5, lsr #8
+ orr r5, r5, r6, lsl #24
+ mov r6, r6, lsr #8
+ orr r6, r6, r7, lsl #24
+ stmia r0!, {r3 - r6} // Shouldn't fault
+ subs ip, ip, #16
+ bpl .c2u_1cpy8lp
+.c2u_1rem8lp: tst ip, #8
+ movne r3, r7, lsr #8
+ ldmneia r1!, {r4, r7}
+ orrne r3, r3, r4, lsl #24
+ movne r4, r4, lsr #8
+ orrne r4, r4, r7, lsl #24
+ stmneia r0!, {r3 - r4} // Shouldn't fault
+ tst ip, #4
+ movne r3, r7, lsr #8
+ ldrne r7, [r1], #4
+ orrne r3, r3, r7, lsl #24
+ strnet r3, [r0], #4 // Shouldn't fault
+ ands ip, ip, #3
+ beq .c2u_1fupi
+.c2u_1nowords: mov r3, r7, lsr #8
+ teq ip, #0
+ beq .c2u_finished
+ cmp ip, #2
+USER( strbt r3, [r0], #1) // May fault
+ movge r3, r3, lsr #8
+USER( strgebt r3, [r0], #1) // May fault
+ movgt r3, r3, lsr #8
+USER( strgtbt r3, [r0], #1) // May fault
+ b .c2u_finished
+
+.c2u_2fupi: subs r2, r2, #4
+ addmi ip, r2, #4
+ bmi .c2u_2nowords
+ mov r3, r7, lsr #16
+ ldr r7, [r1], #4
+ orr r3, r3, r7, lsl #16
+USER( strt r3, [r0], #4) // May fault
+ mov ip, r0, lsl #32 - PAGE_SHIFT
+ rsb ip, ip, #0
+ movs ip, ip, lsr #32 - PAGE_SHIFT
+ beq .c2u_2fupi
+ cmp r2, ip
+ movlt ip, r2
+ sub r2, r2, ip
+ subs ip, ip, #16
+ blt .c2u_2rem8lp
+
+.c2u_2cpy8lp: mov r3, r7, lsr #16
+ ldmia r1!, {r4 - r7}
+ orr r3, r3, r4, lsl #16
+ mov r4, r4, lsr #16
+ orr r4, r4, r5, lsl #16
+ mov r5, r5, lsr #16
+ orr r5, r5, r6, lsl #16
+ mov r6, r6, lsr #16
+ orr r6, r6, r7, lsl #16
+ stmia r0!, {r3 - r6} // Shouldn't fault
+ subs ip, ip, #16
+ bpl .c2u_2cpy8lp
+.c2u_2rem8lp: tst ip, #8
+ movne r3, r7, lsr #16
+ ldmneia r1!, {r4, r7}
+ orrne r3, r3, r4, lsl #16
+ movne r4, r4, lsr #16
+ orrne r4, r4, r7, lsl #16
+ stmneia r0!, {r3 - r4} // Shouldn't fault
+ tst ip, #4
+ movne r3, r7, lsr #16
+ ldrne r7, [r1], #4
+ orrne r3, r3, r7, lsl #16
+ strnet r3, [r0], #4 // Shouldn't fault
+ ands ip, ip, #3
+ beq .c2u_2fupi
+.c2u_2nowords: mov r3, r7, lsr #16
+ teq ip, #0
+ beq .c2u_finished
+ cmp ip, #2
+USER( strbt r3, [r0], #1) // May fault
+ movge r3, r3, lsr #8
+USER( strgebt r3, [r0], #1) // May fault
+ ldrgtb r3, [r1], #0
+USER( strgtbt r3, [r0], #1) // May fault
+ b .c2u_finished
+
+.c2u_3fupi: subs r2, r2, #4
+ addmi ip, r2, #4
+ bmi .c2u_3nowords
+ mov r3, r7, lsr #24
+ ldr r7, [r1], #4
+ orr r3, r3, r7, lsl #8
+USER( strt r3, [r0], #4) // May fault
+ mov ip, r0, lsl #32 - PAGE_SHIFT
+ rsb ip, ip, #0
+ movs ip, ip, lsr #32 - PAGE_SHIFT
+ beq .c2u_3fupi
+ cmp r2, ip
+ movlt ip, r2
+ sub r2, r2, ip
+ subs ip, ip, #16
+ blt .c2u_3rem8lp
+
+.c2u_3cpy8lp: mov r3, r7, lsr #24
+ ldmia r1!, {r4 - r7}
+ orr r3, r3, r4, lsl #8
+ mov r4, r4, lsr #24
+ orr r4, r4, r5, lsl #8
+ mov r5, r5, lsr #24
+ orr r5, r5, r6, lsl #8
+ mov r6, r6, lsr #24
+ orr r6, r6, r7, lsl #8
+ stmia r0!, {r3 - r6} // Shouldn't fault
+ subs ip, ip, #16
+ bpl .c2u_3cpy8lp
+.c2u_3rem8lp: tst ip, #8
+ movne r3, r7, lsr #24
+ ldmneia r1!, {r4, r7}
+ orrne r3, r3, r4, lsl #8
+ movne r4, r4, lsr #24
+ orrne r4, r4, r7, lsl #8
+ stmneia r0!, {r3 - r4} // Shouldn't fault
+ tst ip, #4
+ movne r3, r7, lsr #24
+ ldrne r7, [r1], #4
+ orrne r3, r3, r7, lsl #8
+ strnet r3, [r0], #4 // Shouldn't fault
+ ands ip, ip, #3
+ beq .c2u_3fupi
+.c2u_3nowords: mov r3, r7, lsr #24
+ teq ip, #0
+ beq .c2u_finished
+ cmp ip, #2
+USER( strbt r3, [r0], #1) // May fault
+ ldrge r3, [r1], #0
+USER( strgebt r3, [r0], #1) // May fault
+ movgt r3, r3, lsr #8
+USER( strgtbt r3, [r0], #1) // May fault
+ b .c2u_finished
+
+ .section .fixup,"ax"
+ .align 0
+9001: LOADREGS(fd,sp!, {r0, r4 - r7, pc})
+ .previous
+
+
+
+/* Prototype: unsigned long __arch_copy_from_user(void *to,const void *from,unsigned long n);
+ * Purpose : copy a block from user memory to kernel memory
+ * Params : to - kernel memory
+ * : from - user memory
+ * : n - number of bytes to copy
+ * Returns : Number of bytes NOT copied.
+ */
+.cfu_dest_not_aligned:
+ rsb ip, ip, #4
+ cmp ip, #2
+USER( ldrbt r3, [r1], #1) // May fault
+ strb r3, [r0], #1
+USER( ldrgebt r3, [r1], #1) // May fault
+ strgeb r3, [r0], #1
+USER( ldrgtbt r3, [r1], #1) // May fault
+ strgtb r3, [r0], #1
+ sub r2, r2, ip
+ b .cfu_dest_aligned
+
+ENTRY(__arch_copy_from_user)
+ stmfd sp!, {r2, r4 - r7, lr}
+ cmp r2, #4
+ blt .cfu_not_enough
+ ands ip, r0, #3
+ bne .cfu_dest_not_aligned
+.cfu_dest_aligned:
+ ands ip, r1, #3
+ bne .cfu_src_not_aligned
+/*
+ * Seeing as there has to be at least 8 bytes to copy, we can
+ * copy one word, and force a user-mode page fault...
+ */
+
+.cfu_0fupi: subs r2, r2, #4
+ addmi ip, r2, #4
+ bmi .cfu_0nowords
+USER( ldrt r3, [r1], #4)
+ str r3, [r0], #4
+ mov ip, r1, lsl #32 - PAGE_SHIFT // On each page, use a ld/st??t instruction
+ rsb ip, ip, #0
+ movs ip, ip, lsr #32 - PAGE_SHIFT
+ beq .cfu_0fupi
+/*
+ * ip = max no. of bytes to copy before needing another "ldrt" insn
+ */
+ cmp r2, ip
+ movlt ip, r2
+ sub r2, r2, ip
+ subs ip, ip, #32
+ blt .cfu_0rem8lp
+
+.cfu_0cpy8lp: ldmia r1!, {r3 - r6} // Shouldn't fault
+ stmia r0!, {r3 - r6}
+ ldmia r1!, {r3 - r6} // Shouldn't fault
+ stmia r0!, {r3 - r6}
+ subs ip, ip, #32
+ bpl .cfu_0cpy8lp
+.cfu_0rem8lp: cmn ip, #16
+ ldmgeia r1!, {r3 - r6} // Shouldn't fault
+ stmgeia r0!, {r3 - r6}
+ tst ip, #8
+ ldmneia r1!, {r3 - r4} // Shouldn't fault
+ stmneia r0!, {r3 - r4}
+ tst ip, #4
+ ldrnet r3, [r1], #4 // Shouldn't fault
+ strne r3, [r0], #4
+ ands ip, ip, #3
+ beq .cfu_0fupi
+.cfu_0nowords: teq ip, #0
+ beq .cfu_finished
+.cfu_nowords: cmp ip, #2
+USER( ldrbt r3, [r1], #1) // May fault
+ strb r3, [r0], #1
+USER( ldrgebt r3, [r1], #1) // May fault
+ strgeb r3, [r0], #1
+USER( ldrgtbt r3, [r1], #1) // May fault
+ strgtb r3, [r0], #1
+ b .cfu_finished
+
+.cfu_not_enough:
+ movs ip, r2
+ bne .cfu_nowords
+.cfu_finished: mov r0, #0
+ LOADREGS(fd,sp!,{r2, r4 - r7, pc})
+
+.cfu_src_not_aligned:
+ bic r1, r1, #3
+USER( ldrt r7, [r1], #4) // May fault
+ cmp ip, #2
+ bgt .cfu_3fupi
+ beq .cfu_2fupi
+.cfu_1fupi: subs r2, r2, #4
+ addmi ip, r2, #4
+ bmi .cfu_1nowords
+ mov r3, r7, lsr #8
+USER( ldrt r7, [r1], #4) // May fault
+ orr r3, r3, r7, lsl #24
+ str r3, [r0], #4
+ mov ip, r1, lsl #32 - PAGE_SHIFT
+ rsb ip, ip, #0
+ movs ip, ip, lsr #32 - PAGE_SHIFT
+ beq .cfu_1fupi
+ cmp r2, ip
+ movlt ip, r2
+ sub r2, r2, ip
+ subs ip, ip, #16
+ blt .cfu_1rem8lp
+
+.cfu_1cpy8lp: mov r3, r7, lsr #8
+ ldmia r1!, {r4 - r7} // Shouldn't fault
+ orr r3, r3, r4, lsl #24
+ mov r4, r4, lsr #8
+ orr r4, r4, r5, lsl #24
+ mov r5, r5, lsr #8
+ orr r5, r5, r6, lsl #24
+ mov r6, r6, lsr #8
+ orr r6, r6, r7, lsl #24
+ stmia r0!, {r3 - r6}
+ subs ip, ip, #16
+ bpl .cfu_1cpy8lp
+.cfu_1rem8lp: tst ip, #8
+ movne r3, r7, lsr #8
+ ldmneia r1!, {r4, r7} // Shouldn't fault
+ orrne r3, r3, r4, lsl #24
+ movne r4, r4, lsr #8
+ orrne r4, r4, r7, lsl #24
+ stmneia r0!, {r3 - r4}
+ tst ip, #4
+ movne r3, r7, lsr #8
+USER( ldrnet r7, [r1], #4) // May fault
+ orrne r3, r3, r7, lsl #24
+ strne r3, [r0], #4
+ ands ip, ip, #3
+ beq .cfu_1fupi
+.cfu_1nowords: mov r3, r7, lsr #8
+ teq ip, #0
+ beq .cfu_finished
+ cmp ip, #2
+ strb r3, [r0], #1
+ movge r3, r3, lsr #8
+ strgeb r3, [r0], #1
+ movgt r3, r3, lsr #8
+ strgtb r3, [r0], #1
+ b .cfu_finished
+
+.cfu_2fupi: subs r2, r2, #4
+ addmi ip, r2, #4
+ bmi .cfu_2nowords
+ mov r3, r7, lsr #16
+USER( ldrt r7, [r1], #4) // May fault
+ orr r3, r3, r7, lsl #16
+ str r3, [r0], #4
+ mov ip, r1, lsl #32 - PAGE_SHIFT
+ rsb ip, ip, #0
+ movs ip, ip, lsr #32 - PAGE_SHIFT
+ beq .cfu_2fupi
+ cmp r2, ip
+ movlt ip, r2
+ sub r2, r2, ip
+ subs ip, ip, #16
+ blt .cfu_2rem8lp
+
+.cfu_2cpy8lp: mov r3, r7, lsr #16
+ ldmia r1!, {r4 - r7} // Shouldn't fault
+ orr r3, r3, r4, lsl #16
+ mov r4, r4, lsr #16
+ orr r4, r4, r5, lsl #16
+ mov r5, r5, lsr #16
+ orr r5, r5, r6, lsl #16
+ mov r6, r6, lsr #16
+ orr r6, r6, r7, lsl #16
+ stmia r0!, {r3 - r6}
+ subs ip, ip, #16
+ bpl .cfu_2cpy8lp
+.cfu_2rem8lp: tst ip, #8
+ movne r3, r7, lsr #16
+ ldmneia r1!, {r4, r7} // Shouldn't fault
+ orrne r3, r3, r4, lsl #16
+ movne r4, r4, lsr #16
+ orrne r4, r4, r7, lsl #16
+ stmneia r0!, {r3 - r4}
+ tst ip, #4
+ movne r3, r7, lsr #16
+USER( ldrnet r7, [r1], #4) // May fault
+ orrne r3, r3, r7, lsl #16
+ strne r3, [r0], #4
+ ands ip, ip, #3
+ beq .cfu_2fupi
+.cfu_2nowords: mov r3, r7, lsr #16
+ teq ip, #0
+ beq .cfu_finished
+ cmp ip, #2
+ strb r3, [r0], #1
+ movge r3, r3, lsr #8
+ strgeb r3, [r0], #1
+USER( ldrgtbt r3, [r1], #0) // May fault
+ strgtb r3, [r0], #1
+ b .cfu_finished
+
+.cfu_3fupi: subs r2, r2, #4
+ addmi ip, r2, #4
+ bmi .cfu_3nowords
+ mov r3, r7, lsr #24
+USER( ldrt r7, [r1], #4) // May fault
+ orr r3, r3, r7, lsl #8
+ str r3, [r0], #4
+ mov ip, r1, lsl #32 - PAGE_SHIFT
+ rsb ip, ip, #0
+ movs ip, ip, lsr #32 - PAGE_SHIFT
+ beq .cfu_3fupi
+ cmp r2, ip
+ movlt ip, r2
+ sub r2, r2, ip
+ subs ip, ip, #16
+ blt .cfu_3rem8lp
+
+.cfu_3cpy8lp: mov r3, r7, lsr #24
+ ldmia r1!, {r4 - r7} // Shouldn't fault
+ orr r3, r3, r4, lsl #8
+ mov r4, r4, lsr #24
+ orr r4, r4, r5, lsl #8
+ mov r5, r5, lsr #24
+ orr r5, r5, r6, lsl #8
+ mov r6, r6, lsr #24
+ orr r6, r6, r7, lsl #8
+ stmia r0!, {r3 - r6}
+ subs ip, ip, #16
+ bpl .cfu_3cpy8lp
+.cfu_3rem8lp: tst ip, #8
+ movne r3, r7, lsr #24
+ ldmneia r1!, {r4, r7} // Shouldn't fault
+ orrne r3, r3, r4, lsl #8
+ movne r4, r4, lsr #24
+ orrne r4, r4, r7, lsl #8
+ stmneia r0!, {r3 - r4}
+ tst ip, #4
+ movne r3, r7, lsr #24
+USER( ldrnet r7, [r1], #4) // May fault
+ orrne r3, r3, r7, lsl #8
+ strne r3, [r0], #4
+ ands ip, ip, #3
+ beq .cfu_3fupi
+.cfu_3nowords: mov r3, r7, lsr #24
+ teq ip, #0
+ beq .cfu_finished
+ cmp ip, #2
+ strb r3, [r0], #1
+USER( ldrget r3, [r1], #0) // May fault
+ strgeb r3, [r0], #1
+ movgt r3, r3, lsr #8
+ strgtb r3, [r0], #1
+ b .cfu_finished
+
+ .section .fixup,"ax"
+ .align 0
+9001: LOADREGS(fd,sp!, {r0, r4 - r7, pc})
+ .previous
+
+/* Prototype: int __arch_clear_user(void *addr, size_t sz)
+ * Purpose : clear some user memory
+ * Params : addr - user memory address to clear
+ * : sz - number of bytes to clear
+ * Returns : number of bytes NOT cleared
+ */
+ENTRY(__arch_clear_user)
+ stmfd sp!, {r1, lr}
+ mov r2, #0
+ cmp r1, #4
+ blt 2f
+ ands ip, r0, #3
+ beq 1f
+ cmp ip, #1
+USER( strbt r2, [r0], #1)
+USER( strlebt r2, [r0], #1)
+USER( strltbt r2, [r0], #1)
+ rsb ip, ip, #4
+ sub r1, r1, ip @ 7 6 5 4 3 2 1
+1: subs r1, r1, #8 @ -1 -2 -3 -4 -5 -6 -7
+USER( strplt r2, [r0], #4)
+USER( strplt r2, [r0], #4)
+ bpl 1b
+2: adds r1, r1, #4 @ 3 2 1 0 -1 -2 -3
+USER( strplt r2, [r0], #4)
+ tst r1, #2 @ 1x 1x 0x 0x 1x 1x 0x
+USER( strnebt r2, [r0], #1)
+USER( strnebt r2, [r0], #1)
+ tst r1, #1 @ x1 x0 x1 x0 x1 x0 x1
+USER( strnebt r2, [r0], #1)
+ mov r0, #0
+ LOADREGS(fd,sp!, {r1, pc})
+
+ .section .fixup,"ax"
+ .align 0
+9001: LOADREGS(fd,sp!, {r0, pc})
+ .previous
+
+/* Prototype: int __arch_strlen_user(char *str)
+ * Purpose : get length of a string in user memory
+ * Params : str - address of string in user memory
+ * Returns : length of string *including terminator*, or zero on error
+ */
+ENTRY(__arch_strlen_user)
+ stmfd sp!, {lr}
+ mov r2, r0
+1:
+USER( ldrbt r1, [r0], #1)
+ teq r1, #0
+ bne 1b
+ sub r0, r0, r2
+ LOADREGS(fd,sp!, {pc})
+
+ .section .fixup,"ax"
+ .align 0
+9001: mov r0, #0
+ LOADREGS(fd,sp!,{pc})
+ .previous
+
+/* Prototype: size_t __arch_strncpy_from_user(char *dst, char *src, size_t len)
+ * Purpose : copy a string from user memory to kernel memory
+ * Params : dst - kernel memory destination
+ * : src - user memory source
+ * : len - maximum length of string
+ * Returns : number of characters copied, or -EFAULT on access fault
+ */
+ENTRY(__arch_strncpy_from_user)
+ stmfd sp!, {lr}
+ mov ip, r2
+1: subs r2, r2, #1
+ bmi 2f
+USER( ldrbt r3, [r1], #1)
+ strb r3, [r0], #1
+ teq r3, #0
+ bne 1b
+2: subs r0, ip, r2
+ LOADREGS(fd,sp!, {pc})
+
+ .section .fixup,"ax"
+ .align 0
+9001: mov r0, #-EFAULT
+ LOADREGS(fd,sp!, {pc})
+ .previous
+
+ .align
+
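The .c2u_*fupi/.cfu_*fupi paths above handle a source that is k bytes past word alignment by holding the previously loaded word and forming each output word as (prev >> 8k) | (next << 8(4-k)), so only aligned word loads ever touch memory. A hedged C restatement of that merge (assumes 32-bit little-endian words, as the ARM code does):

```c
#include <stdint.h>

/* Merge-copy 'nwords' 32-bit words from a source that is 'k' bytes past
 * word alignment (k = 1..3), using only aligned word loads -- the same
 * shift/or scheme as the .c2u_1fupi/.c2u_2fupi/.c2u_3fupi loops.
 * Little-endian byte order is assumed throughout. */
static void copy_misaligned_words(uint32_t *dst, const uint32_t *src_aligned,
                                  int k, int nwords)
{
    uint32_t prev = *src_aligned++;     /* word straddling the start */
    int i;

    for (i = 0; i < nwords; i++) {
        uint32_t next = *src_aligned++;
        /* high bytes of prev become low bytes of the output word,
         * low bytes of next fill the rest */
        dst[i] = (prev >> (8 * k)) | (next << (8 * (4 - k)));
        prev = next;
    }
}
```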
--- /dev/null
+#
+# Makefile for the linux arm-specific parts of the memory manager.
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+# Note 2! The CFLAGS definition is now in the main makefile...
+
+O_TARGET := mm.o
+O_OBJS := init.o extable.o fault-$(PROCESSOR).o mm-$(MACHINE).o
+
+ifeq ($(PROCESSOR),armo)
+ O_OBJS += proc-arm2,3.o
+endif
+
+ifeq ($(PROCESSOR),armv)
+ O_OBJS += small_page.o proc-arm6,7.o proc-sa110.o
+endif
+
+include $(TOPDIR)/Rules.make
+
+proc-arm2,3.o: ../lib/constants.h
+proc-arm6,7.o: ../lib/constants.h
+proc-sa110.o: ../lib/constants.h
+
+.PHONY: ../lib/constants.h
+../lib/constants.h:
+ @$(MAKE) -C ../lib constants.h
+
+%.o: %.S
+ifndef CONFIG_BINUTILS_NEW
+ $(CC) $(CFLAGS) -D__ASSEMBLY__ -E $< | tr ';$$' '\n#' > ..tmp.s
+ $(CC) $(CFLAGS:-pipe=) -c -o $@ ..tmp.s
+ $(RM) ..tmp.s
+endif
--- /dev/null
+/*
+ * linux/arch/arm/mm/extable.c
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <asm/uaccess.h>
+
+extern const struct exception_table_entry __start___ex_table[];
+extern const struct exception_table_entry __stop___ex_table[];
+
+static inline unsigned long
+search_one_table(const struct exception_table_entry *first,
+ const struct exception_table_entry *last,
+ unsigned long value)
+{
+ while (first <= last) {
+ const struct exception_table_entry *mid;
+ long diff;
+
+ mid = (last - first) / 2 + first;
+ diff = mid->insn - value;
+ if (diff == 0)
+ return mid->fixup;
+ else if (diff < 0)
+ first = mid+1;
+ else
+ last = mid-1;
+ }
+ return 0;
+}
+
+unsigned long
+search_exception_table(unsigned long addr)
+{
+ unsigned long ret;
+
+#ifndef CONFIG_MODULES
+ /* There is only the kernel to search. */
+ ret = search_one_table(__start___ex_table, __stop___ex_table-1, addr);
+ if (ret) return ret;
+#else
+ /* The kernel is the last "module" -- no need to treat it special. */
+ struct module *mp;
+ for (mp = module_list; mp != NULL; mp = mp->next) {
+ if (mp->ex_table_start == NULL)
+ continue;
+ ret = search_one_table(mp->ex_table_start,
+ mp->ex_table_end - 1, addr);
+ if (ret) return ret;
+ }
+#endif
+
+ return 0;
+}
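search_one_table() above is a plain binary search over exception entries sorted by faulting instruction address, returning the fixup address on an exact hit and 0 otherwise. The same logic lifted out of the kernel so it can be exercised standalone (struct name is illustrative, not the real header):

```c
/* Mirrors search_one_table() in extable.c: entries sorted by insn,
 * exact match returns the fixup, anything else returns 0. */
struct ex_entry { unsigned long insn, fixup; };

static unsigned long search_table(const struct ex_entry *first,
                                  const struct ex_entry *last,
                                  unsigned long value)
{
    while (first <= last) {
        const struct ex_entry *mid = first + (last - first) / 2;

        if (mid->insn == value)
            return mid->fixup;      /* faulting insn has a handler */
        else if (mid->insn < value)
            first = mid + 1;
        else
            last = mid - 1;
    }
    return 0;                       /* no fixup registered */
}
```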
--- /dev/null
+/*
+ * linux/arch/arm/mm/fault-armo.c
+ *
+ * Copyright (C) 1995 Linus Torvalds
+ * Modifications for ARM processor (c) 1995, 1996 Russell King
+ */
+
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/head.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+
+#include <asm/system.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+
+#define FAULT_CODE_FORCECOW 0x80
+#define FAULT_CODE_PREFETCH 0x04
+#define FAULT_CODE_WRITE 0x02
+#define FAULT_CODE_USER 0x01
+
+extern void die_if_kernel(char *msg, struct pt_regs *regs, unsigned int err, unsigned int ret);
+
+static void kernel_page_fault (unsigned long addr, int mode, struct pt_regs *regs,
+ struct task_struct *tsk, struct mm_struct *mm)
+{
+ /*
+ * Oops. The kernel tried to access some bad page. We'll have to
+ * terminate things with extreme prejudice.
+ */
+ pgd_t *pgd;
+ if (addr < PAGE_SIZE)
+ printk (KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+ else
+ printk (KERN_ALERT "Unable to handle kernel paging request");
+ printk (" at virtual address %08lx\n", addr);
+ printk (KERN_ALERT "current->tss.memmap = %08lX\n", tsk->tss.memmap);
+ pgd = pgd_offset (mm, addr);
+ printk (KERN_ALERT "*pgd = %08lx", pgd_val (*pgd));
+ if (!pgd_none (*pgd)) {
+ pmd_t *pmd;
+ pmd = pmd_offset (pgd, addr);
+ printk (", *pmd = %08lx", pmd_val (*pmd));
+ if (!pmd_none (*pmd))
+ printk (", *pte = %08lx", pte_val (*pte_offset (pmd, addr)));
+ }
+ printk ("\n");
+ die_if_kernel ("Oops", regs, mode, SIGKILL);
+ do_exit (SIGKILL);
+}
+
+static void
+handle_dataabort (unsigned long addr, int mode, struct pt_regs *regs)
+{
+ struct task_struct *tsk;
+ struct mm_struct *mm;
+ struct vm_area_struct *vma;
+ unsigned long fixup;
+
+ lock_kernel();
+ tsk = current;
+ mm = tsk->mm;
+
+ down(&mm->mmap_sem);
+ vma = find_vma (mm, addr);
+ if (!vma)
+ goto bad_area;
+ if (addr >= vma->vm_start)
+ goto good_area;
+ if (!(vma->vm_flags & VM_GROWSDOWN) || expand_stack (vma, addr))
+ goto bad_area;
+
+ /*
+ * Ok, we have a good vm_area for this memory access, so
+ * we can handle it..
+ */
+good_area:
+ if (!(mode & FAULT_CODE_WRITE)) { /* read or exec? */
+ if (!(vma->vm_flags & (VM_READ|VM_EXEC)))
+ goto bad_area;
+ } else {
+ if (!(vma->vm_flags & VM_WRITE))
+ goto bad_area;
+ }
+ handle_mm_fault (tsk, vma, addr, mode & (FAULT_CODE_WRITE|FAULT_CODE_FORCECOW));
+ up(&mm->mmap_sem);
+ goto out;
+
+ /*
+ * Something tried to access memory that isn't in our memory map..
+ * Fix it, but check if it's kernel or user first..
+ */
+bad_area:
+ up(&mm->mmap_sem);
+ if (mode & FAULT_CODE_USER) {
+ extern int console_loglevel;
+ cli();
+ tsk->tss.error_code = mode;
+ tsk->tss.trap_no = 14;
+ console_loglevel = 9;
+ printk ("%s: memory violation at pc=0x%08lx, lr=0x%08lx (bad address=0x%08lx, code %d)\n",
+ tsk->comm, regs->ARM_pc, regs->ARM_lr, addr, mode);
+//#ifdef DEBUG
+ show_regs (regs);
+ c_backtrace (regs->ARM_fp, 0);
+//#endif
+ force_sig(SIGSEGV, tsk);
+ while (1);
+ goto out;
+ }
+
+ /* Are we prepared to handle this kernel fault? */
+ if ((fixup = search_exception_table(regs->ARM_pc)) != 0) {
+ printk(KERN_DEBUG "%s: Exception at [<%lx>] addr=%lx (fixup: %lx)\n",
+ tsk->comm, regs->ARM_pc, addr, fixup);
+ regs->ARM_pc = fixup;
+ goto out;
+ }
+
+
+ kernel_page_fault (addr, mode, regs, tsk, mm);
+out:
+ unlock_kernel();
+}
+
+/*
+ * Handle a data abort. Note that we have to handle a range of addresses
+ * on ARM2/3 for ldm. If both pages are zero-mapped, then we have to force
+ * a copy-on-write
+ */
+asmlinkage void
+do_DataAbort (unsigned long min_addr, unsigned long max_addr, int mode, struct pt_regs *regs)
+{
+ handle_dataabort (min_addr, mode, regs);
+
+ if ((min_addr ^ max_addr) >> PAGE_SHIFT)
+ handle_dataabort (max_addr, mode | FAULT_CODE_FORCECOW, regs);
+}
+
+asmlinkage int
+do_PrefetchAbort (unsigned long addr, int mode, struct pt_regs *regs)
+{
+#if 0
+ if (the memc mapping for this page exists - can check now...) {
+ printk ("Page in, but got abort (undefined instruction?)\n");
+ return 0;
+ }
+#endif
+ handle_dataabort (addr, mode, regs);
+ return 1;
+}
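do_DataAbort only takes the second, FORCECOW pass when min_addr and max_addr land on different pages; `(min_addr ^ max_addr) >> PAGE_SHIFT` is nonzero exactly when the two addresses differ in their page-number bits. A quick standalone check of that identity:

```c
#define PAGE_SHIFT 12   /* 4k pages, as in the ARM code above */

/* Nonzero iff a and b lie on different pages: XOR clears every bit the
 * addresses share, so bits at or above PAGE_SHIFT survive only when the
 * page numbers differ.  This is the test do_DataAbort uses to decide
 * whether an ARM2/3 ldm spans two pages. */
static int crosses_page(unsigned long a, unsigned long b)
{
    return ((a ^ b) >> PAGE_SHIFT) != 0;
}
```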
--- /dev/null
+/*
+ * linux/arch/arm/mm/fault-armv.c
+ *
+ * Copyright (C) 1995 Linus Torvalds
+ * Modifications for ARM processor (c) 1995, 1996 Russell King
+ */
+
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/head.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+
+#include <asm/system.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+
+#define FAULT_CODE_READ 0x02
+#define FAULT_CODE_USER 0x01
+
+extern void die_if_kernel(char *msg, struct pt_regs *regs, unsigned int err, unsigned int ret);
+
+static void kernel_page_fault (unsigned long addr, int mode, struct pt_regs *regs,
+ struct task_struct *tsk, struct mm_struct *mm)
+{
+ /*
+ * Oops. The kernel tried to access some bad page. We'll have to
+ * terminate things with extreme prejudice.
+ */
+ pgd_t *pgd;
+ if (addr < PAGE_SIZE)
+ printk (KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+ else
+ printk (KERN_ALERT "Unable to handle kernel paging request");
+ printk (" at virtual address %08lx\n", addr);
+ printk (KERN_ALERT "current->tss.memmap = %08lX\n", tsk->tss.memmap);
+ pgd = pgd_offset (mm, addr);
+ printk (KERN_ALERT "*pgd = %08lx", pgd_val (*pgd));
+ if (!pgd_none (*pgd)) {
+ pmd_t *pmd;
+ pmd = pmd_offset (pgd, addr);
+ printk (", *pmd = %08lx", pmd_val (*pmd));
+ if (!pmd_none (*pmd))
+ printk (", *pte = %08lx", pte_val (*pte_offset (pmd, addr)));
+ }
+ printk ("\n");
+ die_if_kernel ("Oops", regs, mode, SIGKILL);
+ do_exit (SIGKILL);
+}
+
+static void page_fault (unsigned long addr, int mode, struct pt_regs *regs)
+{
+ struct task_struct *tsk;
+ struct mm_struct *mm;
+ struct vm_area_struct *vma;
+ unsigned long fixup;
+
+ lock_kernel();
+ tsk = current;
+ mm = tsk->mm;
+
+ down(&mm->mmap_sem);
+ vma = find_vma (mm, addr);
+ if (!vma)
+ goto bad_area;
+ if (vma->vm_start <= addr)
+ goto good_area;
+ if (!(vma->vm_flags & VM_GROWSDOWN) || expand_stack (vma, addr))
+ goto bad_area;
+
+ /*
+ * Ok, we have a good vm_area for this memory access, so
+ * we can handle it..
+ */
+good_area:
+ if (mode & FAULT_CODE_READ) { /* read? */
+ if (!(vma->vm_flags & (VM_READ|VM_EXEC)))
+ goto bad_area;
+ } else {
+ if (!(vma->vm_flags & VM_WRITE))
+ goto bad_area;
+ }
+ handle_mm_fault (tsk, vma, addr & PAGE_MASK, !(mode & FAULT_CODE_READ));
+ up(&mm->mmap_sem);
+ goto out;
+
+ /*
+ * Something tried to access memory that isn't in our memory map..
+ * Fix it, but check if it's kernel or user first..
+ */
+bad_area:
+ up(&mm->mmap_sem);
+ if (mode & FAULT_CODE_USER) {
+ tsk->tss.error_code = mode;
+ tsk->tss.trap_no = 14;
+ printk ("%s: memory violation at pc=0x%08lx, lr=0x%08lx (bad address=0x%08lx, code %d)\n",
+ tsk->comm, regs->ARM_pc, regs->ARM_lr, addr, mode);
+#ifdef DEBUG
+ show_regs (regs);
+ c_backtrace (regs->ARM_fp, regs->ARM_cpsr);
+#endif
+ force_sig(SIGSEGV, tsk);
+ goto out;
+ }
+
+ /* Are we prepared to handle this kernel fault? */
+ if ((fixup = search_exception_table(regs->ARM_pc)) != 0) {
+ printk(KERN_DEBUG "%s: Exception at [<%lx>] addr=%lx (fixup: %lx)\n",
+ tsk->comm, regs->ARM_pc, addr, fixup);
+ regs->ARM_pc = fixup;
+ goto out;
+ }
+
+ kernel_page_fault (addr, mode, regs, tsk, mm);
+out:
+ unlock_kernel();
+}
+
+/*
+ * Handle a data abort. Note that we have to handle a range of addresses
+ * on ARM2/3 for ldm. If both pages are zero-mapped, then we have to force
+ * a copy-on-write
+ */
+asmlinkage void
+do_DataAbort (unsigned long addr, int fsr, int error_code, struct pt_regs *regs)
+{
+ if (user_mode(regs))
+ error_code |= FAULT_CODE_USER;
+
+#define DIE(signr,nam)\
+ force_sig(signr, current);\
+ die_if_kernel(nam, regs, fsr, signr);\
+ break;
+
+ switch (fsr & 15) {
+ case 2:
+ DIE(SIGKILL, "Terminal exception")
+ case 0:
+ DIE(SIGSEGV, "Vector exception")
+ case 1:
+ case 3:
+ DIE(SIGBUS, "Alignment exception")
+ case 12:
+ case 14:
+ DIE(SIGBUS, "External abort on translation")
+ case 9:
+ case 11:
+ DIE(SIGSEGV, "Domain fault")
+ case 13:/* permission fault on section */
+#ifndef DEBUG
+ {
+ unsigned int i, j, a;
+ static int count = 2;
+ if (count-- == 0) while (1);
+ a = regs->ARM_sp;
+ for (j = 0; j < 10; j++) {
+ printk ("%08x: ", a);
+ for (i = 0; i < 8; i += 1, a += 4)
+ printk ("%08lx ", *(unsigned long *)a);
+ printk ("\n");
+ }
+ }
+#endif
+ DIE(SIGSEGV, "Permission fault")
+
+ case 15:/* permission fault on page */
+ case 5: /* page-table entry descriptor fault */
+ case 7: /* first-level descriptor fault */
+ page_fault (addr, error_code, regs);
+ break;
+ case 4:
+ case 6:
+ DIE(SIGBUS, "External abort on linefetch")
+ case 8:
+ case 10:
+ DIE(SIGBUS, "External abort on non-linefetch")
+ }
+}
+
+asmlinkage int
+do_PrefetchAbort (unsigned long addr, struct pt_regs *regs)
+{
+#if 0
+ /* does this still apply ? */
+ if (the memc mapping for this page exists - can check now...) {
+ printk ("Page in, but got abort (undefined instruction?)\n");
+ return 0;
+ }
+#endif
+ page_fault (addr, FAULT_CODE_USER|FAULT_CODE_READ, regs);
+ return 1;
+}
+
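The switch in do_DataAbort above dispatches on the low four bits of the fault status register; only codes 5 and 7 (descriptor/translation faults) and 15 (page permission fault) reach page_fault(), while everything else dies with a signal. A compact restatement of that classification, for reference:

```c
/* Classify (fsr & 15) the way do_DataAbort's switch does: 1 means the
 * code is handled by page_fault(), 0 means the process gets a signal.
 * Codes 5/7 are translation faults, 15 a page permission fault; note
 * that 13 (section permission fault) is NOT a page fault here. */
static int fsr_is_page_fault(int fsr)
{
    switch (fsr & 15) {
    case 5:     /* page-table entry descriptor fault  */
    case 7:     /* first-level descriptor fault       */
    case 15:    /* permission fault on page           */
        return 1;
    default:
        return 0;
    }
}
```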
--- /dev/null
+/*
+ * linux/arch/arm/mm/init.c
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#include <linux/config.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/head.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/smp.h>
+#ifdef CONFIG_BLK_DEV_INITRD
+#include <linux/blk.h>
+#endif
+
+#include <asm/system.h>
+#include <asm/segment.h>
+#include <asm/pgtable.h>
+#include <asm/dma.h>
+#include <asm/hardware.h>
+#include <asm/proc/mm-init.h>
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+const char bad_pmd_string[] = "Bad pmd in pte_alloc: %08lx\n";
+extern char _etext, _stext, _edata, __bss_start, _end;
+extern char __init_begin, __init_end;
+
+/*
+ * BAD_PAGE is the page that is used for page faults when linux
+ * is out-of-memory. Older versions of linux just did a
+ * do_exit(), but using this instead means there is less risk
+ * for a process dying in kernel mode, possibly leaving a inode
+ * unused etc..
+ *
+ * BAD_PAGETABLE is the accompanying page-table: it is initialized
+ * to point to BAD_PAGE entries.
+ *
+ * ZERO_PAGE is a special page that is used for zero-initialized
+ * data and COW.
+ */
+#if PTRS_PER_PTE != 1
+unsigned long *empty_bad_page_table;
+
+pte_t *__bad_pagetable(void)
+{
+ int i;
+ pte_t bad_page;
+
+ bad_page = BAD_PAGE;
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ empty_bad_page_table[i] = (unsigned long)pte_val(bad_page);
+ return (pte_t *) empty_bad_page_table;
+}
+#endif
+
+unsigned long *empty_zero_page;
+unsigned long *empty_bad_page;
+
+pte_t __bad_page(void)
+{
+ memzero (empty_bad_page, PAGE_SIZE);
+ return pte_nocache(pte_mkdirty(mk_pte((unsigned long) empty_bad_page, PAGE_SHARED)));
+}
+
+void show_mem(void)
+{
+ extern void show_net_buffers(void);
+ int i,free = 0,total = 0,reserved = 0;
+ int shared = 0;
+
+ printk("Mem-info:\n");
+ show_free_areas();
+ printk("Free swap: %6dkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ i = MAP_NR(high_memory);
+ while (i-- > 0) {
+ total++;
+ if (PageReserved(mem_map+i))
+ reserved++;
+ else if (!atomic_read(&mem_map[i].count))
+ free++;
+ else
+ shared += atomic_read(&mem_map[i].count) - 1;
+ }
+ printk("%d pages of RAM\n",total);
+ printk("%d free pages\n",free);
+ printk("%d reserved pages\n",reserved);
+ printk("%d pages shared\n",shared);
+ show_buffers();
+#ifdef CONFIG_NET
+ show_net_buffers();
+#endif
+}
+
+/*
+ * paging_init() sets up the page tables...
+ */
+unsigned long paging_init(unsigned long start_mem, unsigned long end_mem)
+{
+ extern unsigned long free_area_init(unsigned long, unsigned long);
+
+ start_mem = PAGE_ALIGN(start_mem);
+ empty_zero_page = (unsigned long *)start_mem;
+ start_mem += PAGE_SIZE;
+ empty_bad_page = (unsigned long *)start_mem;
+ start_mem += PAGE_SIZE;
+#if PTRS_PER_PTE != 1
+ empty_bad_page_table = (unsigned long *)start_mem;
+ start_mem += PTRS_PER_PTE * sizeof (void *);
+#endif
+ memzero (empty_zero_page, PAGE_SIZE);
+ start_mem = setup_pagetables (start_mem, end_mem);
+
+ flush_tlb_all ();
+ update_mm_cache_all ();
+
+ return free_area_init (start_mem, end_mem);
+}
+
+/*
+ * mem_init() marks the free areas in the mem_map and tells us how much
+ * memory is free. This is done after various parts of the system have
+ * claimed their memory after the kernel image.
+ */
+void mem_init(unsigned long start_mem, unsigned long end_mem)
+{
+ extern void sound_init(void);
+ int codepages = 0;
+ int reservedpages = 0;
+ int datapages = 0;
+ int initpages = 0;
+ unsigned long tmp;
+
+ end_mem &= PAGE_MASK;
+ high_memory = (void *)end_mem;
+ max_mapnr = num_physpages = MAP_NR(end_mem);
+
+ /* mark usable pages in the mem_map[] */
+ mark_usable_memory_areas(&start_mem, end_mem);
+
+ for (tmp = PAGE_OFFSET; tmp < end_mem ; tmp += PAGE_SIZE) {
+ if (PageReserved(mem_map+MAP_NR(tmp))) {
+ if (tmp >= KERNTOPHYS(_stext) &&
+ tmp < KERNTOPHYS(_edata)) {
+ if (tmp < KERNTOPHYS(_etext))
+ codepages++;
+ else
+ datapages++;
+ } else if (tmp >= KERNTOPHYS(__init_begin)
+ && tmp < KERNTOPHYS(__init_end))
+ initpages++;
+ else if (tmp >= KERNTOPHYS(__bss_start)
+ && tmp < (unsigned long) start_mem)
+ datapages++;
+ else
+ reservedpages++;
+ continue;
+ }
+ atomic_set(&mem_map[MAP_NR(tmp)].count, 1);
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (!initrd_start || (tmp < initrd_start || tmp >= initrd_end))
+#endif
+ free_page(tmp);
+ }
+ printk ("Memory: %luk/%luk available (%dk kernel code, %dk reserved, %dk data, %dk init)\n",
+ (unsigned long) nr_free_pages << (PAGE_SHIFT-10),
+ max_mapnr << (PAGE_SHIFT-10),
+ codepages << (PAGE_SHIFT-10),
+ reservedpages << (PAGE_SHIFT-10),
+ datapages << (PAGE_SHIFT-10),
+ initpages << (PAGE_SHIFT-10));
+}
+
+void free_initmem (void)
+{
+ unsigned long addr;
+
+ addr = (unsigned long)(&__init_begin);
+ for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
+ mem_map[MAP_NR(addr)].flags &= ~(1 << PG_reserved);
+ atomic_set(&mem_map[MAP_NR(addr)].count, 1);
+ free_page(addr);
+ }
+ printk ("Freeing unused kernel memory: %dk freed\n", (&__init_end - &__init_begin) >> 10);
+}
+
+void si_meminfo(struct sysinfo *val)
+{
+ int i;
+
+ i = MAP_NR(high_memory);
+ val->totalram = 0;
+ val->sharedram = 0;
+ val->freeram = nr_free_pages << PAGE_SHIFT;
+ val->bufferram = buffermem;
+ while (i-- > 0) {
+ if (PageReserved(mem_map+i))
+ continue;
+ val->totalram++;
+ if (!atomic_read(&mem_map[i].count))
+ continue;
+ val->sharedram += atomic_read(&mem_map[i].count) - 1;
+ }
+ val->totalram <<= PAGE_SHIFT;
+ val->sharedram <<= PAGE_SHIFT;
+}
+
--- /dev/null
+/*
+ * arch/arm/mm/mm-a5k.c
+ *
+ * Extra MM routines for the Archimedes architecture
+ *
+ * Copyright (C) 1998 Russell King
+ */
--- /dev/null
+/*
+ * arch/arm/mm/mm-arc.c
+ *
+ * Extra MM routines for the Archimedes architecture
+ *
+ * Copyright (C) 1998 Russell King
+ */
--- /dev/null
+/*
+ * arch/arm/mm/mm-ebsa110.c
+ *
+ * Extra MM routines for the EBSA-110 architecture
+ *
+ * Copyright (C) 1998 Russell King
+ */
--- /dev/null
+/*
+ * arch/arm/mm/mm-nexuspci.c
+ *
+ * Extra MM routines for the NexusPCI architecture
+ *
+ * Copyright (C) 1998 Russell King
+ */
--- /dev/null
+/*
+ * arch/arm/mm/mm-rpc.c
+ *
+ * Extra MM routines for RiscPC architecture
+ *
+ * Copyright (C) 1998 Russell King
+ */
+
+#include <asm/setup.h>
+
+#define NR_DRAM_BANKS 4
+#define NR_VRAM_BANKS 1
+
+#define NR_BANKS (NR_DRAM_BANKS + NR_VRAM_BANKS)
+
+#define FIRST_BANK 0
+#define FIRST_DRAM_BANK 0
+#define FIRST_VRAM_BANK NR_DRAM_BANKS
+
+#define BANK_SHIFT 26
+#define FIRST_DRAM_ADDR 0x10000000
+
+#define PHYS_TO_BANK(x) (((x) >> BANK_SHIFT) & (NR_DRAM_BANKS - 1))
+#define BANK_TO_PHYS(x) ((FIRST_DRAM_ADDR) + \
+ (((x) - FIRST_DRAM_BANK) << BANK_SHIFT))
+
+struct ram_bank {
+ unsigned int virt_addr; /* virtual address of the *end* of this bank + 1 */
+ signed int phys_offset; /* virtual - physical offset for this bank */
+};
+
+static struct ram_bank rambank[NR_BANKS];
+
+/*
+ * Return the physical (0x10000000 -> 0x20000000) address of
+ * the virtual (0xc0000000 -> 0xd0000000) address
+ */
+unsigned long __virt_to_phys(unsigned long vpage)
+{
+ unsigned int bank = FIRST_BANK;
+
+ while (bank < NR_BANKS && vpage >= rambank[bank].virt_addr)
+ bank ++;
+
+ return vpage - rambank[bank].phys_offset;
+}
+
+/*
+ * Return the virtual (0xc0000000 -> 0xd0000000) address of
+ * the physical (0x10000000 -> 0x20000000) address
+ */
+unsigned long __phys_to_virt(unsigned long phys)
+{
+ unsigned int bank;
+
+ if (phys >= FIRST_DRAM_ADDR)
+ bank = PHYS_TO_BANK(phys);
+ else
+ bank = FIRST_VRAM_BANK;
+
+ return phys + rambank[bank].phys_offset;
+}
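The two translation routines above amount to a per-bank offset lookup: each bank records its virtual end address and the virtual-minus-physical offset to add or subtract. The following standalone C sketch shows the same arithmetic in isolation, using the constants defined above; `demo_virt_to_phys`, `demo_phys_to_virt`, `setup_demo_banks`, and the two hypothetical 16 MB banks are illustrative assumptions, not part of the kernel source.

```c
#include <assert.h>

/* Constants mirroring the definitions above (PAGE_OFFSET assumed 0xc0000000) */
#define PAGE_OFFSET     0xc0000000UL
#define FIRST_DRAM_ADDR 0x10000000UL
#define BANK_SHIFT      26
#define NR_DRAM_BANKS   4

struct demo_bank {
    unsigned long virt_addr;   /* one past the bank's last virtual address */
    unsigned long phys_offset; /* virtual - physical offset for this bank */
};

static struct demo_bank demo_rambank[NR_DRAM_BANKS];

/* Populate two hypothetical 16 MB banks the way init_dram_banks() would */
static void setup_demo_banks(void)
{
    unsigned long bytes = 0;
    unsigned int bank;

    for (bank = 0; bank < 2; bank++) {
        unsigned long bank_phys = FIRST_DRAM_ADDR +
            ((unsigned long)bank << BANK_SHIFT);
        demo_rambank[bank].phys_offset = PAGE_OFFSET + bytes - bank_phys;
        bytes += 16UL << 20;                 /* 16 MB in this bank */
        demo_rambank[bank].virt_addr = PAGE_OFFSET + bytes;
    }
}

static unsigned long demo_virt_to_phys(unsigned long v)
{
    unsigned int bank = 0;

    /* Find the first bank whose virtual end lies beyond this address */
    while (bank < 2 && v >= demo_rambank[bank].virt_addr)
        bank++;
    return v - demo_rambank[bank].phys_offset;
}

static unsigned long demo_phys_to_virt(unsigned long p)
{
    /* Physical banks are spaced 1 << BANK_SHIFT apart */
    unsigned int bank = (p >> BANK_SHIFT) & (NR_DRAM_BANKS - 1);

    return p + demo_rambank[bank].phys_offset;
}
```

With the first bank at physical 0x10000000 and the second at 0x14000000, virtual addresses within the first 16 MB map to bank 0 and the next 16 MB to bank 1, and the two conversions are inverses of each other.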
+
+void init_dram_banks(struct param_struct *params)
+{
+ unsigned int bank;
+ unsigned int bytes = 0;
+
+ for (bank = FIRST_DRAM_BANK; bank < NR_DRAM_BANKS; bank++) {
+ rambank[bank].phys_offset = PAGE_OFFSET + bytes
+ - BANK_TO_PHYS(bank);
+
+ bytes += params->u1.s.pages_in_bank[bank - FIRST_DRAM_BANK] * PAGE_SIZE;
+
+ rambank[bank].virt_addr = PAGE_OFFSET + bytes;
+ }
+
+ rambank[FIRST_VRAM_BANK].phys_offset = 0xd6000000;
+ rambank[FIRST_VRAM_BANK].virt_addr = 0xd8000000;
+}
--- /dev/null
+/*
+ * linux/arch/arm/mm/arm2,3.S: MMU functions for ARM2,3
+ *
+ * (C) 1997 Russell King
+ *
+ * These are the low level assembler for performing cache
+ * and memory functions on ARM2, ARM250 and ARM3 processors.
+ */
+#include <linux/linkage.h>
+
+#include <asm/assembler.h>
+#include "../lib/constants.h"
+
+/*
+ * Code common to all processors - MEMC specific not processor
+ * specific!
+ */
+
+LC1: .word SYMBOL_NAME(page_nr)
+/*
+ * Function: arm2_3_update_map (struct task_struct *tsk)
+ *
+ * Params : tsk Task structure to be updated
+ *
+ * Purpose : Re-generate memc maps for task from its pseudo page tables
+ */
+_arm2_3_update_map:
+ mov ip, sp
+ stmfd sp!, {r4 - r6, fp, ip, lr, pc}
+ sub fp, ip, #4
+ add r1, r0, #TSS_MEMCMAP
+ ldr r2, LC1
+ ldr r2, [r2]
+ mov r3, #0x03f00000
+ orr r3, r3, #0x00000f00
+ orr r4, r3, #1
+ orr r5, r3, #2
+ orr r6, r3, #3
+1: stmia r1!, {r3, r4, r5, r6} @ Default mapping (null mapping)
+ add r3, r3, #4
+ add r4, r4, #4
+ add r5, r5, #4
+ add r6, r6, #4
+ stmia r1!, {r3, r4, r5, r6} @ Default mapping (null mapping)
+ add r3, r3, #4
+ add r4, r4, #4
+ add r5, r5, #4
+ add r6, r6, #4
+ subs r2, r2, #8
+ bhi 1b
+
+ adr r2, Lphystomemc32 @ r2 = conversion table to logical page number
+ ldr r4, [r0, #TSS_MEMMAP] @ r4 = active mem map
+ add r5, r4, #32 << 2 @ r5 = end of active mem map
+ add r0, r0, #TSS_MEMCMAP @ r0 = memc map
+
+ mov r6, #0
+2: ldmia r4!, {r1, r3}
+ tst r1, #PAGE_PRESENT
+ blne update_map_pgd
+ add r6, r6, #32 << 2
+ tst r3, #PAGE_PRESENT
+ blne update_map_pgd3
+ add r6, r6, #32 << 2
+ cmp r4, r5
+ blt 2b
+ ldmea fp, {r4 - r6, fp, sp, pc}^
+
+@ r0,r2,r3,r4,r5 = preserve
+@ r1,ip = available
+@ r0 = memc map
+@ r1 = pgd entry
+@ r2 = conversion table
+@ r6 = logical page no << 2
+
+update_map_pgd3:
+ mov r1, r3
+update_map_pgd: stmfd sp!, {r3, r4, r5, lr}
+ bic r4, r1, #3 @ r4 = page table
+ sub r5, r6, #1 << 2
+ add ip, r4, #32 << 2 @ ip = end of page table
+
+1: ldr r1, [r4], #4 @ get entry
+ add r5, r5, #1 << 2
+ tst r1, #PAGE_PRESENT @ page present?
+ blne Lconvertmemc @ yes
+ ldr r1, [r4], #4 @ get entry
+ add r5, r5, #1 << 2
+ tst r1, #PAGE_PRESENT @ page present?
+ blne Lconvertmemc @ yes
+ ldr r1, [r4], #4 @ get entry
+ add r5, r5, #1 << 2
+ tst r1, #PAGE_PRESENT @ page present?
+ blne Lconvertmemc @ yes
+ ldr r1, [r4], #4 @ get entry
+ add r5, r5, #1 << 2
+ tst r1, #PAGE_PRESENT @ page present?
+ blne Lconvertmemc @ yes
+ cmp r4, ip
+ blt 1b
+ ldmfd sp!, {r3, r4, r5, pc}^
+
+Lconvertmemc: mov r3, r1, lsr #13 @
+ and r3, r3, #0x3fc @ Convert to memc physical page no
+ ldr r3, [r2, r3] @
+
+ tst r1, #PAGE_OLD|PAGE_NOT_USER @ check for MEMC read
+ biceq r3, r3, #0x200 @
+ tsteq r1, #PAGE_READONLY|PAGE_CLEAN @ check for MEMC write
+ biceq r3, r3, #0x300 @
+
+ orr r3, r3, r5, lsl #13
+ and r1, r5, #0x01800000 >> 13
+ orr r3, r3, r1
+
+ and r1, r3, #255
+ str r3, [r0, r1, lsl #2]
+ movs pc, lr
+
+/*
+ * Function: arm2_3_update_cache (struct task_struct *tsk, unsigned long addr, pte_t pte)
+ * Params : tsk Task to update
+ * address Address of fault.
+ * pte New PTE at address
+ * Purpose : Update the mapping for this address.
+ * Notes : does the ARM3 run faster if you don't use the result in the next instruction?
+ */
+_arm2_3_update_cache:
+ tst r2, #PAGE_PRESENT
+ moveqs pc, lr
+ mov r3, r2, lsr #13 @ Physical page no.
+ adr ip, Lphystomemc32 @ Convert to logical page number
+ and r3, r3, #0x3fc
+ mov r1, r1, lsr #15
+ ldr r3, [ip, r3] @ Convert to memc phys page no.
+ tst r2, #PAGE_OLD|PAGE_NOT_USER
+ biceq r3, r3, #0x200
+ tsteq r2, #PAGE_READONLY|PAGE_CLEAN
+ biceq r3, r3, #0x300
+ mov ip, sp, lsr #13
+ orr r3, r3, r1, lsl #15
+ mov ip, ip, lsl #13
+ and r1, r1, #0x300
+ teq ip, r0
+ orr r3, r3, r1, lsl #2
+ add r0, r0, #TSS_MEMCMAP
+ and r2, r3, #255
+ streqb r3, [r3]
+ str r3, [r0, r2, lsl #2]
+ movs pc, lr
+
+#define PCD(a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, aa, ab, ac, ad, ae, af) \
+ .long a0| 0x03800300; .long a1| 0x03800300;\
+ .long a2| 0x03800300; .long a3| 0x03800300;\
+ .long a4| 0x03800300; .long a5| 0x03800300;\
+ .long a6| 0x03800300; .long a7| 0x03800300;\
+ .long a8| 0x03800300; .long a9| 0x03800300;\
+ .long aa| 0x03800300; .long ab| 0x03800300;\
+ .long ac| 0x03800300; .long ad| 0x03800300;\
+ .long ae| 0x03800300; .long af| 0x03800300
+
+@ Table to map from page number to vidc page number
+Lphystomemc32: PCD(0x00,0x08,0x10,0x18,0x20,0x28,0x30,0x38,0x40,0x48,0x50,0x58,0x60,0x68,0x70,0x78)
+ PCD(0x01,0x09,0x11,0x19,0x21,0x29,0x31,0x39,0x41,0x49,0x51,0x59,0x61,0x69,0x71,0x79)
+ PCD(0x04,0x0C,0x14,0x1C,0x24,0x2C,0x34,0x3C,0x44,0x4C,0x54,0x5C,0x64,0x6C,0x74,0x7C)
+ PCD(0x05,0x0D,0x15,0x1D,0x25,0x2D,0x35,0x3D,0x45,0x4D,0x55,0x5D,0x65,0x6D,0x75,0x7D)
+ PCD(0x02,0x0A,0x12,0x1A,0x22,0x2A,0x32,0x3A,0x42,0x4A,0x52,0x5A,0x62,0x6A,0x72,0x7A)
+ PCD(0x03,0x0B,0x13,0x1B,0x23,0x2B,0x33,0x3B,0x43,0x4B,0x53,0x5B,0x63,0x6B,0x73,0x7B)
+ PCD(0x06,0x0E,0x16,0x1E,0x26,0x2E,0x36,0x3E,0x46,0x4E,0x56,0x5E,0x66,0x6E,0x76,0x7E)
+ PCD(0x07,0x0F,0x17,0x1F,0x27,0x2F,0x37,0x3F,0x47,0x4F,0x57,0x5F,0x67,0x6F,0x77,0x7F)
+ PCD(0x80,0x88,0x90,0x98,0xA0,0xA8,0xB0,0xB8,0xC0,0xC8,0xD0,0xD8,0xE0,0xE8,0xF0,0xF8)
+ PCD(0x81,0x89,0x91,0x99,0xA1,0xA9,0xB1,0xB9,0xC1,0xC9,0xD1,0xD9,0xE1,0xE9,0xF1,0xF9)
+ PCD(0x84,0x8C,0x94,0x9C,0xA4,0xAC,0xB4,0xBC,0xC4,0xCC,0xD4,0xDC,0xE4,0xEC,0xF4,0xFC)
+ PCD(0x85,0x8D,0x95,0x9D,0xA5,0xAD,0xB5,0xBD,0xC5,0xCD,0xD5,0xDD,0xE5,0xED,0xF5,0xFD)
+ PCD(0x82,0x8A,0x92,0x9A,0xA2,0xAA,0xB2,0xBA,0xC2,0xCA,0xD2,0xDA,0xE2,0xEA,0xF2,0xFA)
+ PCD(0x83,0x8B,0x93,0x9B,0xA3,0xAB,0xB3,0xBB,0xC3,0xCB,0xD3,0xDB,0xE3,0xEB,0xF3,0xFB)
+ PCD(0x86,0x8E,0x96,0x9E,0xA6,0xAE,0xB6,0xBE,0xC6,0xCE,0xD6,0xDE,0xE6,0xEE,0xF6,0xFE)
+ PCD(0x87,0x8F,0x97,0x9F,0xA7,0xAF,0xB7,0xBF,0xC7,0xCF,0xD7,0xDF,0xE7,0xEF,0xF7,0xFF)
+
+/*
+ * Function: arm2_3_data_abort ()
+ *
+ * Params : r0 = address of aborted instruction
+ *
+ * Purpose :
+ *
+ * Returns : r0 = address of abort
+ * : r1 = FSR
+ * : r2 != 0 if writing
+ */
+
+_arm2_3_data_abort:
+ movs pc, lr
+
+_arm2_3_check_bugs:
+ movs pc, lr
+
+/*
+ * Processor specific - ARM2
+ */
+
+LC0: .word SYMBOL_NAME(page_nr)
+/*
+ * Function: arm2_switch_to (struct task_struct *prev, struct task_struct *next)
+ *
+ * Params : prev Old task structure
+ * : next New task structure for process to run
+ *
+ * Purpose : Perform a task switch, saving the old process's state, and restoring
+ * the new.
+ *
+ * Notes : We don't fiddle with the FP registers here - we postpone this until
+ * the new task actually uses FP. This way, we don't swap FP for tasks
+ * that do not require it.
+ */
+_arm2_switch_to:
+ stmfd sp!, {r4 - r9, fp, lr} @ Store most regs on stack
+ str sp, [r0, #TSS_SAVE] @ Save sp_SVC
+ ldr sp, [r1, #TSS_SAVE] @ Get saved sp_SVC
+ mov r4, r1
+ add r0, r1, #TSS_MEMCMAP @ Remap MEMC
+ ldr r1, LC0
+ ldr r1, [r1]
+1: ldmia r0!, {r2, r3, r5, r6}
+ strb r2, [r2]
+ strb r3, [r3]
+ strb r5, [r5]
+ strb r6, [r6]
+ ldmia r0!, {r2, r3, r5, r6}
+ strb r2, [r2]
+ strb r3, [r3]
+ strb r5, [r5]
+ strb r6, [r6]
+ subs r1, r1, #8
+ bhi 1b
+ ldmfd sp!, {r4 - r9, fp, pc}^ @ Load all regs saved previously
+
+/*
+ * Function: arm2_remap_memc (struct task_struct *tsk)
+ *
+ * Params : tsk Task structure specifying the new mapping structure
+ *
+ * Purpose : remap MEMC tables
+ */
+_arm2_remap_memc:
+ stmfd sp!, {lr}
+ add r0, r0, #TSS_MEMCMAP
+ ldr r1, LC0
+ ldr r1, [r1]
+1: ldmia r0!, {r2, r3, ip, lr}
+ strb r2, [r2]
+ strb r3, [r3]
+ strb ip, [ip]
+ strb lr, [lr]
+ ldmia r0!, {r2, r3, ip, lr}
+ strb r2, [r2]
+ strb r3, [r3]
+ strb ip, [ip]
+ strb lr, [lr]
+ subs r1, r1, #8
+ bhi 1b
+ ldmfd sp!, {pc}^
+
+/*
+ * Function: arm2_xchg_1 (int new, volatile void *ptr)
+ *
+ * Params : new New value to store at...
+ * : ptr pointer to byte-wide location
+ *
+ * Purpose : Performs an exchange operation
+ *
+ * Returns : Original byte data at 'ptr'
+ *
+ * Notes : This will have to be changed if we ever use multi-processing using these
+ * processors, but that is very unlikely...
+ */
+_arm2_xchg_1: mov r2, pc
+ orr r2, r2, #I_BIT
+ teqp r2, #0
+ ldrb r2, [r1]
+ strb r0, [r1]
+ mov r0, r2
+ movs pc, lr
+
+/*
+ * Function: arm2_xchg_4 (int new, volatile void *ptr)
+ *
+ * Params : new New value to store at...
+ * : ptr pointer to word-wide location
+ *
+ * Purpose : Performs an exchange operation
+ *
+ * Returns : Original word data at 'ptr'
+ *
+ * Notes : This will have to be changed if we ever use multi-processing using these
+ * processors, but that is very unlikely...
+ */
+_arm2_xchg_4: mov r2, pc
+ orr r2, r2, #I_BIT
+ teqp r2, #0
+ ldr r2, [r1]
+ str r0, [r1]
+ mov r0, r2
+/*
+ * fall through
+ */
+/*
+ * Function: arm2_proc_init (void)
+ * : arm2_proc_fin (void)
+ *
+ * Purpose : Initialise / finalise processor specifics (none required)
+ */
+_arm2_proc_init:
+_arm2_proc_fin: movs pc, lr
+/*
+ * Function: arm3_switch_to (struct task_struct *prev, struct task_struct *next)
+ *
+ * Params : prev Old task structure
+ * : next New task structure for process to run
+ *
+ * Purpose : Perform a task switch, saving the old process's state, and restoring
+ * the new.
+ *
+ * Notes : We don't fiddle with the FP registers here - we postpone this until
+ * the new task actually uses FP. This way, we don't swap FP for tasks
+ * that do not require it.
+ */
+_arm3_switch_to:
+ stmfd sp!, {r4 - r9, fp, lr} @ Store most regs on stack
+ str sp, [r0, #TSS_SAVE] @ Save sp_SVC
+ ldr sp, [r1, #TSS_SAVE] @ Get saved sp_SVC
+ mov r4, r1
+ add r0, r1, #TSS_MEMCMAP @ Remap MEMC
+ ldr r1, LC0
+ ldr r1, [r1]
+1: ldmia r0!, {r2, r3, r5, r6}
+ strb r2, [r2]
+ strb r3, [r3]
+ strb r5, [r5]
+ strb r6, [r6]
+ ldmia r0!, {r2, r3, r5, r6}
+ strb r2, [r2]
+ strb r3, [r3]
+ strb r5, [r5]
+ strb r6, [r6]
+ subs r1, r1, #8
+ bhi 1b
+ mcr p15, 0, r0, c1, c0, 0 @ flush cache
+ ldmfd sp!, {r4 - r9, fp, pc}^ @ Load all regs saved previously
+/*
+ * Function: arm3_remap_memc (struct task_struct *tsk)
+ *
+ * Params : tsk Task structure specifying the new mapping structure
+ *
+ * Purpose : remap MEMC tables
+ */
+_arm3_remap_memc:
+ stmfd sp!, {lr}
+ add r0, r0, #TSS_MEMCMAP
+ ldr r1, LC0
+ ldr r1, [r1]
+1: ldmia r0!, {r2, r3, ip, lr}
+ strb r2, [r2]
+ strb r3, [r3]
+ strb ip, [ip]
+ strb lr, [lr]
+ ldmia r0!, {r2, r3, ip, lr}
+ strb r2, [r2]
+ strb r3, [r3]
+ strb ip, [ip]
+ strb lr, [lr]
+ subs r1, r1, #8
+ bhi 1b
+ mcr p15, 0, r0, c1, c0, 0 @ flush cache
+ ldmfd sp!, {pc}^
+
+/*
+ * Function: arm3_proc_init (void)
+ *
+ * Purpose : Initialise the cache control registers
+ */
+_arm3_proc_init:
+ mov r0, #0x001f0000
+ orr r0, r0, #0x0000ff00
+ orr r0, r0, #0x000000ff
+ mcr p15, 0, r0, c3, c0
+ mcr p15, 0, r0, c4, c0
+ mov r0, #0
+ mcr p15, 0, r0, c5, c0
+ mov r0, #3
+ mcr p15, 0, r0, c1, c0
+ mcr p15, 0, r0, c2, c0
+ movs pc, lr
+
+/*
+ * Function: arm3_proc_fin (void)
+ *
+ * Purpose : Finalise processor (disable caches)
+ */
+_arm3_proc_fin: mov r0, #2
+ mcr p15, 0, r0, c2, c0
+ movs pc, lr
+
+/*
+ * Function: arm3_xchg_1 (int new, volatile void *ptr)
+ *
+ * Params : new New value to store at...
+ * : ptr pointer to byte-wide location
+ *
+ * Purpose : Performs an exchange operation
+ *
+ * Returns : Original byte data at 'ptr'
+ */
+_arm3_xchg_1: swpb r0, r0, [r1]
+ movs pc, lr
+
+/*
+ * Function: arm3_xchg_4 (int new, volatile void *ptr)
+ *
+ * Params : new New value to store at...
+ * : ptr pointer to word-wide location
+ *
+ * Purpose : Performs an exchange operation
+ *
+ * Returns : Original word data at 'ptr'
+ */
+_arm3_xchg_4: swp r0, r0, [r1]
+ movs pc, lr
+
+
+/*
+ * Purpose : Function pointers used to access above functions - all calls
+ * come through these
+ */
+_arm2_name:
+ .ascii "arm2\0"
+ .align
+
+ .globl SYMBOL_NAME(arm2_processor_functions)
+SYMBOL_NAME(arm2_processor_functions):
+ .word _arm2_name @ 0
+ .word _arm2_switch_to @ 4
+ .word _arm2_3_data_abort @ 8
+ .word _arm2_3_check_bugs @ 12
+ .word _arm2_proc_init @ 16
+ .word _arm2_proc_fin @ 20
+
+ .word _arm2_remap_memc @ 24
+ .word _arm2_3_update_map @ 28
+ .word _arm2_3_update_cache @ 32
+ .word _arm2_xchg_1 @ 36
+ .word SYMBOL_NAME(abort) @ 40
+ .word _arm2_xchg_4 @ 44
+
+_arm250_name:
+ .ascii "arm250\0"
+ .align
+
+ .globl SYMBOL_NAME(arm250_processor_functions)
+SYMBOL_NAME(arm250_processor_functions):
+ .word _arm250_name @ 0
+ .word _arm2_switch_to @ 4
+ .word _arm2_3_data_abort @ 8
+ .word _arm2_3_check_bugs @ 12
+ .word _arm2_proc_init @ 16
+ .word _arm2_proc_fin @ 20
+
+ .word _arm2_remap_memc @ 24
+ .word _arm2_3_update_map @ 28
+ .word _arm2_3_update_cache @ 32
+ .word _arm3_xchg_1 @ 36
+ .word SYMBOL_NAME(abort) @ 40
+ .word _arm3_xchg_4 @ 44
+
+_arm3_name:
+ .ascii "arm3\0"
+ .align
+
+ .globl SYMBOL_NAME(arm3_processor_functions)
+SYMBOL_NAME(arm3_processor_functions):
+ .word _arm3_name @ 0
+ .word _arm3_switch_to @ 4
+ .word _arm2_3_data_abort @ 8
+ .word _arm2_3_check_bugs @ 12
+ .word _arm3_proc_init @ 16
+ .word _arm3_proc_fin @ 20
+
+ .word _arm3_remap_memc @ 24
+ .word _arm2_3_update_map @ 28
+ .word _arm2_3_update_cache @ 32
+ .word _arm3_xchg_1 @ 36
+ .word SYMBOL_NAME(abort) @ 40
+ .word _arm3_xchg_4 @ 44
+
--- /dev/null
+/*
+ * linux/arch/arm/mm/arm6.S: MMU functions for ARM6
+ *
+ * (C) 1997 Russell King
+ *
+ * These are the low level assembler for performing cache and TLB
+ * functions on the ARM6 & ARM7.
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include "../lib/constants.h"
+
+/*
+ * Function: arm6_7_flush_cache_all (void)
+ * : arm6_7_flush_cache_page (unsigned long address, int size, int flags)
+ *
+ * Params : address Area start address
+ * : size size of area
+ * : flags b0 = I cache as well
+ *
+ * Purpose : Flush all cache lines
+ */
+_arm6_7_flush_cache:
+ mov r0, #0
+ mcr p15, 0, r0, c7, c0, 0 @ flush cache
+_arm6_7_null:
+ mov pc, lr
+
+/*
+ * Function: arm6_7_flush_tlb_all (void)
+ *
+ * Purpose : flush all TLB entries in all caches
+ */
+_arm6_7_flush_tlb_all:
+ mov r0, #0
+ mcr p15, 0, r0, c5, c0, 0 @ flush TLB
+ mov pc, lr
+
+/*
+ * Function: arm6_7_flush_tlb_area (unsigned long address, int end, int flags)
+ *
+ * Params : address Area start address
+ * : end Area end address
+ * : flags b0 = I cache as well
+ *
+ * Purpose : flush TLB entries for this area of memory
+ */
+_arm6_7_flush_tlb_area:
+1: mcr p15, 0, r0, c6, c0, 0 @ flush TLB
+ add r0, r0, #4096
+ cmp r0, r1
+ blt 1b
+ mov pc, lr
+
+@LC0: .word _current
+/*
+ * Function: arm6_7_switch_to (struct task_struct *prev, struct task_struct *next)
+ *
+ * Params : prev Old task structure
+ * : next New task structure for process to run
+ *
+ * Purpose : Perform a task switch, saving the old process's state, and restoring
+ * the new.
+ *
+ * Notes : We don't fiddle with the FP registers here - we postpone this until
+ * the new task actually uses FP. This way, we don't swap FP for tasks
+ * that do not require it.
+ */
+_arm6_7_switch_to:
+ stmfd sp!, {r4 - r9, fp, lr} @ Store most regs on stack
+ mrs ip, cpsr
+ stmfd sp!, {ip} @ Save cpsr_SVC
+ str sp, [r0, #TSS_SAVE] @ Save sp_SVC
+ ldr sp, [r1, #TSS_SAVE] @ Get saved sp_SVC
+ ldr r0, [r1, #ADDR_LIMIT]
+ teq r0, #0
+ moveq r0, #KERNEL_DOMAIN
+ movne r0, #USER_DOMAIN
+ mcr p15, 0, r0, c3, c0 @ Set domain reg
+ ldr r0, [r1, #TSS_MEMMAP] @ Page table pointer
+ mov r1, #0
+ mcr p15, 0, r1, c7, c0, 0 @ flush cache
+ mcr p15, 0, r0, c2, c0, 0 @ update page table ptr
+ mcr p15, 0, r1, c5, c0, 0 @ flush TLBs
+ ldmfd sp!, {ip}
+ msr spsr, ip @ Save task's CPSR into SPSR for this return
+ ldmfd sp!, {r4 - r9, fp, pc}^ @ Load all regs saved previously
+
+/*
+ * Function: arm6_7_data_abort ()
+ *
+ * Params : r0 = address of aborted instruction
+ *
+ * Purpose : obtain information about current aborted instruction
+ *
+ * Returns : r0 = address of abort
+ * : r1 = FSR
+ * : r2 != 0 if writing
+ * : sp = pointer to registers
+ */
+
+Lukabttxt: .ascii "Unknown data abort code %d [pc=%p, *pc=%p] LR=%p\0"
+ .align
+
+msg: .ascii "DA*%p=%p\n\0"
+ .align
+
+_arm6_data_abort:
+ ldr r4, [r0] @ read instruction causing problem
+ mov r2, r4, lsr #19 @ r2 b1 = L
+ and r1, r4, #15 << 24
+ add pc, pc, r1, lsr #22 @ Now branch to the relevant processing routine
+ movs pc, lr
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_earlyldrpost @ ldr rd, [rn], #m
+ b Ldata_simple @ ldr rd, [rn, #m] @ RegVal
+ b Ldata_earlyldrpost @ ldr rd, [rn], rm
+ b Ldata_simple @ ldr rd, [rn, rm]
+ b Ldata_ldmstm @ ldm*a rn, <rlist>
+ b Ldata_ldmstm @ ldm*b rn, <rlist>
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_simple @ ldc rd, [rn], #m @ Same as ldr rd, [rn], #m
+ b Ldata_simple @ ldc rd, [rn, #m]
+ b Ldata_unknown
+ b Ldata_unknown
+Ldata_unknown: @ Part of jumptable
+ ldr r3, [sp, #15 * 4] @ Get PC
+ str r3, [sp, #-4]!
+ mov r1, r1, lsr #2
+ mov r3, r4
+ mov r2, r0
+ adr r0, Lukabttxt
+ bl SYMBOL_NAME(panic)
+Lstop: b Lstop
+
+_arm7_data_abort:
+ ldr r4, [r0] @ read instruction causing problem
+ mov r2, r4, lsr #19 @ r2 b1 = L
+ and r1, r4, #15 << 24
+ add pc, pc, r1, lsr #22 @ Now branch to the relevant processing routine
+ movs pc, lr
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_lateldrpostconst @ ldr rd, [rn], #m
+ b Ldata_lateldrpreconst @ ldr rd, [rn, #m] @ RegVal
+ b Ldata_lateldrpostreg @ ldr rd, [rn], rm
+ b Ldata_lateldrprereg @ ldr rd, [rn, rm]
+ b Ldata_ldmstm @ ldm*a rn, <rlist>
+ b Ldata_ldmstm @ ldm*b rn, <rlist>
+ b Ldata_unknown
+ b Ldata_unknown
+ b Ldata_simple @ ldc rd, [rn], #m @ Same as ldr rd, [rn], #m
+ b Ldata_simple @ ldc rd, [rn, #m]
+ b Ldata_unknown
+ b Ldata_unknown
+
+Ldata_ldmstm: tst r4, #1 << 21 @ check writeback bit
+ beq Ldata_simple
+
+ mov r7, #0x11
+ orr r7, r7, r7, lsl #8
+ and r0, r4, r7
+ and r1, r4, r7, lsl #1
+ add r0, r0, r1, lsr #1
+ and r1, r4, r7, lsl #2
+ add r0, r0, r1, lsr #2
+ and r1, r4, r7, lsl #3
+ add r0, r0, r1, lsr #3
+ add r0, r0, r0, lsr #8
+ add r0, r0, r0, lsr #4
+ and r7, r0, #15 @ r7 = no. of registers to transfer.
+ and r5, r4, #15 << 16 @ Get Rn
+ ldr r0, [sp, r5, lsr #14] @ Get register
+ eor r6, r4, r4, lsl #2
+ tst r6, #1 << 23 @ Check inc/dec ^ writeback
+ rsbeq r7, r7, #0
+ add r7, r0, r7, lsl #2 @ Do correction (signed)
+ str r7, [sp, r5, lsr #14] @ Put register
+
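The writeback correction in Ldata_ldmstm above needs the number of registers named in the LDM/STM register list (bits 15:0 of the instruction), and the masked-add sequence with the 0x1111 constant is a parallel bit count: each step folds one bit of every nibble into a per-nibble sum, then the nibbles are folded together. A C sketch of the same trick (`reglist_count` is a hypothetical name for illustration):

```c
#include <assert.h>

/* Count the registers in an LDM/STM register list (insn bits 15:0),
   mirroring the masked-add sequence in the assembler above */
static unsigned int reglist_count(unsigned int insn)
{
    unsigned int r;

    r  =  insn & 0x1111;        /* bit 0 of every nibble */
    r += (insn & 0x2222) >> 1;  /* + bit 1 of every nibble */
    r += (insn & 0x4444) >> 2;  /* + bit 2 */
    r += (insn & 0x8888) >> 3;  /* each nibble now holds its own count (0-4) */
    r += r >> 8;                /* fold the high byte into the low byte */
    r += r >> 4;                /* fold the high nibble into the low one */
    return r & 15;              /* total register count */
}
```

Because the masks only cover bits 15:0, the condition and opcode fields of the instruction are ignored automatically, so the raw instruction word can be passed in unmasked.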
+Ldata_simple: and r2, r2, #2 @ check read/write bit
+ mrc p15, 0, r0, c6, c0, 0 @ get FAR
+ mrc p15, 0, r1, c5, c0, 0 @ get FSR
+ and r1, r1, #15
+ mov pc, lr
+
+Ldata_earlyldrpost:
+ tst r2, #4
+ and r2, r2, #2 @ check read/write bit
+ orrne r2, r2, #1 @ T bit
+ mrc p15, 0, r0, c6, c0, 0 @ get FAR
+ mrc p15, 0, r1, c5, c0, 0 @ get FSR
+ and r1, r1, #15
+ mov pc, lr
+
+Ldata_lateldrpostconst:
+ movs r1, r4, lsl #20 @ Get offset
+ beq Ldata_earlyldrpost @ if offset is zero, no effect
+ and r5, r4, #15 << 16 @ Get Rn
+ ldr r0, [sp, r5, lsr #14]
+ tst r4, #1 << 23 @ U bit
+ subne r0, r0, r1, lsr #20
+ addeq r0, r0, r1, lsr #20
+ str r0, [sp, r5, lsr #14] @ Put register
+ b Ldata_earlyldrpost
+
+Ldata_lateldrpreconst:
+ tst r4, #1 << 21 @ check writeback bit
+ movnes r1, r4, lsl #20 @ Get offset
+ beq Ldata_simple
+ and r5, r4, #15 << 16 @ Get Rn
+ ldr r0, [sp, r5, lsr #14]
+ tst r4, #1 << 23 @ U bit
+ subne r0, r0, r1, lsr #20
+ addeq r0, r0, r1, lsr #20
+ str r0, [sp, r5, lsr #14] @ Put register
+ b Ldata_simple
+
+Ldata_lateldrpostreg:
+ and r5, r4, #15
+ ldr r1, [sp, r5, lsl #2] @ Get Rm
+ mov r3, r4, lsr #7
+ ands r3, r3, #31
+ and r6, r4, #0x70
+ orreq r6, r6, #8
+ add pc, pc, r6
+ mov r0, r0
+
+ mov r1, r1, lsl r3 @ 0: LSL #!0
+ b 1f
+ b 1f @ 1: LSL #0
+ mov r0, r0
+ b 1f @ 2: MUL?
+ mov r0, r0
+ b 1f @ 3: MUL?
+ mov r0, r0
+ mov r1, r1, lsr r3 @ 4: LSR #!0
+ b 1f
+ mov r1, r1, lsr #32 @ 5: LSR #32
+ b 1f
+ b 1f @ 6: MUL?
+ mov r0, r0
+ b 1f @ 7: MUL?
+ mov r0, r0
+ mov r1, r1, asr r3 @ 8: ASR #!0
+ b 1f
+ mov r1, r1, asr #32 @ 9: ASR #32
+ b 1f
+ b 1f @ A: MUL?
+ mov r0, r0
+ b 1f @ B: MUL?
+ mov r0, r0
+ mov r1, r1, ror r3 @ C: ROR #!0
+ b 1f
+ mov r1, r1, rrx @ D: RRX
+ b 1f
+ mov r0, r0 @ E: MUL?
+ mov r0, r0
+ mov r0, r0 @ F: MUL?
+
+
+1: and r5, r4, #15 << 16 @ Get Rn
+ ldr r0, [sp, r5, lsr #14]
+ tst r4, #1 << 23 @ U bit
+ subne r0, r0, r1
+ addeq r0, r0, r1
+ str r0, [sp, r5, lsr #14] @ Put register
+ b Ldata_earlyldrpost
+
+Ldata_lateldrprereg:
+ tst r4, #1 << 21 @ check writeback bit
+ beq Ldata_simple
+ and r5, r4, #15
+ ldr r1, [sp, r5, lsl #2] @ Get Rm
+ mov r3, r4, lsr #7
+ ands r3, r3, #31
+ and r6, r4, #0x70
+ orreq r6, r6, #8
+ add pc, pc, r6
+ mov r0, r0
+
+ mov r1, r1, lsl r3 @ 0: LSL #!0
+ b 1f
+ b 1f @ 1: LSL #0
+ mov r0, r0
+ b 1f @ 2: MUL?
+ mov r0, r0
+ b 1f @ 3: MUL?
+ mov r0, r0
+ mov r1, r1, lsr r3 @ 4: LSR #!0
+ b 1f
+ mov r1, r1, lsr #32 @ 5: LSR #32
+ b 1f
+ b 1f @ 6: MUL?
+ mov r0, r0
+ b 1f @ 7: MUL?
+ mov r0, r0
+ mov r1, r1, asr r3 @ 8: ASR #!0
+ b 1f
+ mov r1, r1, asr #32 @ 9: ASR #32
+ b 1f
+ b 1f @ A: MUL?
+ mov r0, r0
+ b 1f @ B: MUL?
+ mov r0, r0
+ mov r1, r1, ror r3 @ C: ROR #!0
+ b 1f
+ mov r1, r1, rrx @ D: RRX
+ b 1f
+ mov r0, r0 @ E: MUL?
+ mov r0, r0
+ mov r0, r0 @ F: MUL?
+
+
+1: and r5, r4, #15 << 16 @ Get Rn
+ ldr r0, [sp, r5, lsr #14]
+ tst r4, #1 << 23 @ U bit
+ subne r0, r0, r1
+ addeq r0, r0, r1
+ str r0, [sp, r5, lsr #14] @ Put register
+ b Ldata_simple
+
+/*
+ * Function: arm6_7_check_bugs (void)
+ * : arm6_7_proc_init (void)
+ * : arm6_7_proc_fin (void)
+ *
+ * Notes : This processor does not require these
+ */
+_arm6_7_check_bugs:
+ mrs ip, cpsr
+ bic ip, ip, #F_BIT
+ msr cpsr, ip
+_arm6_7_proc_init:
+_arm6_7_proc_fin:
+ mov pc, lr
+
+/*
+ * Function: arm6_set_pmd ()
+ *
+ * Params : r0 = Address to set
+ * : r1 = value to set
+ *
+ * Purpose : Set a PMD and flush it out of any WB cache
+ */
+_arm6_set_pmd: and r2, r1, #3
+ teq r2, #2
+ andeq r2, r1, #8
+ orreq r1, r1, r2, lsl #1 @ Updatable = Cachable
+ teq r2, #1
+ orreq r1, r1, #16 @ Updatable = 1 if Page table
+ str r1, [r0]
+ mov pc, lr
+
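The conditional sequence in _arm6_set_pmd above encodes two rules for the ARM6 "updatable" bit of a level-1 descriptor: for a section descriptor the updatable bit mirrors the cacheable bit, and for a coarse page-table descriptor it is always set. A C sketch of the same fixup (`arm6_fixup_pmd` is a hypothetical name; bit positions follow the assembler above):

```c
#include <assert.h>

/* Mirror of _arm6_set_pmd: derive the ARM6 updatable (U, bit 4) bit
   from the low bits of a level-1 descriptor value */
static unsigned long arm6_fixup_pmd(unsigned long val)
{
    switch (val & 3) {
    case 2:                        /* section descriptor */
        val |= (val & 8) << 1;     /* U (bit 4) = cacheable (bit 3) */
        break;
    case 1:                        /* coarse page-table descriptor */
        val |= 16;                 /* U is always set */
        break;
    }
    return val;                    /* fault and fine-page entries unchanged */
}
```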
+/*
+ * Function: arm7_set_pmd ()
+ *
+ * Params : r0 = Address to set
+ * : r1 = value to set
+ *
+ * Purpose : Set a PMD and flush it out of any WB cache
+ */
+_arm7_set_pmd: orr r1, r1, #16 @ Updatable bit is always set on ARM7
+ str r1, [r0]
+ mov pc, lr
+
+/*
+ * Function: _arm6_7_reset
+ *
+ * Notes : This sets up everything for a reset
+ */
+_arm6_7_reset: mrs r1, cpsr
+ orr r1, r1, #F_BIT|I_BIT
+ msr cpsr, r1
+ mov r0, #0
+ mcr p15, 0, r0, c7, c0, 0 @ flush cache
+ mcr p15, 0, r0, c5, c0, 0 @ flush TLB
+ mov r1, #F_BIT | I_BIT | 3
+ mov pc, lr
+
+/*
+ * Purpose : Function pointers used to access above functions - all calls
+ * come through these
+ */
+_arm6_name: .ascii "arm6\0"
+ .align
+
+ENTRY(arm6_processor_functions)
+ .word _arm6_name @ 0
+ .word _arm6_7_switch_to @ 4
+ .word _arm6_data_abort @ 8
+ .word _arm6_7_check_bugs @ 12
+ .word _arm6_7_proc_init @ 16
+ .word _arm6_7_proc_fin @ 20
+
+ .word _arm6_7_flush_cache @ 24
+ .word _arm6_7_flush_cache @ 28
+ .word _arm6_7_flush_cache @ 32
+ .word _arm6_7_null @ 36
+ .word _arm6_7_flush_cache @ 40
+ .word _arm6_7_flush_tlb_all @ 44
+ .word _arm6_7_flush_tlb_area @ 48
+ .word _arm6_set_pmd @ 52
+ .word _arm6_7_reset @ 56
+ .word _arm6_7_flush_cache @ 60
+
+/*
+ * Purpose : Function pointers used to access above functions - all calls
+ * come through these
+ */
+_arm7_name: .ascii "arm7\0"
+ .align
+
+ENTRY(arm7_processor_functions)
+ .word _arm7_name @ 0
+ .word _arm6_7_switch_to @ 4
+ .word _arm7_data_abort @ 8
+ .word _arm6_7_check_bugs @ 12
+ .word _arm6_7_proc_init @ 16
+ .word _arm6_7_proc_fin @ 20
+
+ .word _arm6_7_flush_cache @ 24
+ .word _arm6_7_flush_cache @ 28
+ .word _arm6_7_flush_cache @ 32
+ .word _arm6_7_null @ 36
+ .word _arm6_7_flush_cache @ 40
+ .word _arm6_7_flush_tlb_all @ 44
+ .word _arm6_7_flush_tlb_area @ 48
+ .word _arm7_set_pmd @ 52
+ .word _arm6_7_reset @ 56
+ .word _arm6_7_flush_cache @ 60
+
--- /dev/null
+/*
+ * linux/arch/arm/mm/sa110.S: MMU functions for SA110
+ *
+ * (C) 1997 Russell King
+ *
+ * These are the low level assembler for performing cache and TLB
+ * functions on the SA-110.
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include "../lib/constants.h"
+
+ .data
+Lclean_switch: .long 0
+ .text
+
+/*
+ * Function: sa110_flush_cache_all (void)
+ *
+ * Purpose : Flush all cache lines
+ */
+ .align 5
+_sa110_flush_cache_all: @ preserves r0
+ ldr r3, =Lclean_switch
+ ldr r2, [r3]
+ ands r2, r2, #1
+ eor r2, r2, #1
+ str r2, [r3]
+ ldr ip, =0xdf000000
+ addne ip, ip, #32768
+ add r1, ip, #16384 @ only necessary for 16k
+1: ldr r2, [ip], #32
+ teq r1, ip
+ bne 1b
+ mov ip, #0
+ mcr p15, 0, ip, c7, c5, 0 @ flush I cache
+ mcr p15, 0, ip, c7, c10, 4 @ drain WB
+ mov pc, lr
+
+/*
+ * Function: sa110_flush_cache_area (unsigned long address, int end, int flags)
+ *
+ * Params : address Area start address
+ * : end Area end address
+ * : flags b0 = I cache as well
+ *
+ * Purpose : clean & flush all cache lines associated with this area of memory
+ */
+ .align 5
+_sa110_flush_cache_area:
+ sub r3, r1, r0
+ cmp r3, #32768
+ bgt _sa110_flush_cache_all
+1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
+ mcr p15, 0, r0, c7, c6, 1 @ flush D entry
+ add r0, r0, #32
+ mcr p15, 0, r0, c7, c10, 1 @ clean D entry
+ mcr p15, 0, r0, c7, c6, 1 @ flush D entry
+ add r0, r0, #32
+ cmp r0, r1
+ blt 1b
+ tst r2, #1
+ movne r0, #0
+ mcrne p15, 0, r0, c7, c5, 0 @ flush I cache
+ mov pc, lr
+
+/*
+ * Function: sa110_flush_cache_entry (unsigned long address)
+ *
+ * Params : address Address of cache line to flush
+ *
+ * Purpose : clean & flush an entry
+ */
+ .align 5
+_sa110_flush_cache_entry:
+ mov r1, #0
+ mcr p15, 0, r0, c7, c10, 1 @ clean D entry
+ mcr p15, 0, r1, c7, c10, 4 @ drain WB
+ mcr p15, 0, r1, c7, c5, 0 @ flush I cache
+ mov pc, lr
+
+/*
+ * Function: sa110_flush_cache_pte (unsigned long address)
+ *
+ * Params : address Address of cache line to clean
+ *
+ * Purpose : Ensure that physical memory reflects cache at this location
+ * for page table purposes.
+ */
+_sa110_flush_cache_pte:
+ mcr p15, 0, r0, c7, c10, 1 @ clean D entry (drain is done by TLB fns)
+ mov pc, lr
+
+/*
+ * Function: sa110_flush_ram_page (unsigned long page)
+ *
+ * Params : page Page start address
+ *
+ * Purpose : clean & flush all cache lines associated with this page
+ */
+ .align 5
+_sa110_flush_ram_page:
+ mov r1, #4096
+1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
+ mcr p15, 0, r0, c7, c6, 1 @ flush D entry
+ add r0, r0, #32
+ mcr p15, 0, r0, c7, c10, 1 @ clean D entry
+ mcr p15, 0, r0, c7, c6, 1 @ flush D entry
+ add r0, r0, #32
+ subs r1, r1, #64
+ bne 1b
+ mov r0, #0
+ mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mcr p15, 0, r0, c7, c5, 0 @ flush I cache
+ mov pc, lr
+
+/*
+ * Function: sa110_flush_tlb_all (void)
+ *
+ * Purpose : flush all TLB entries in all caches
+ */
+ .align 5
+_sa110_flush_tlb_all:
+ mov r0, #0
+ mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mcr p15, 0, r0, c8, c7, 0 @ flush I & D tlbs
+ mov pc, lr
+
+/*
+ * Function: sa110_flush_tlb_area (unsigned long address, int end, int flags)
+ *
+ * Params : address Area start address
+ * : end Area end address
+ * : flags b0 = I cache as well
+ *
+ * Purpose : flush the TLB entries covering this area of memory
+ */
+ .align 5
+_sa110_flush_tlb_area:
+ mov r3, #0
+ mcr p15, 0, r3, c7, c10, 4 @ drain WB
+1: cmp r0, r1
+ mcrlt p15, 0, r0, c8, c6, 1 @ flush D TLB entry
+ addlt r0, r0, #4096
+ cmp r0, r1
+ mcrlt p15, 0, r0, c8, c6, 1 @ flush D TLB entry
+ addlt r0, r0, #4096
+ blt 1b
+ tst r2, #1
+ mcrne p15, 0, r3, c8, c5, 0 @ flush I TLB
+ mov pc, lr
+
+ .align 5
+_sa110_flush_icache_area:
+ mov r3, #0
+1: mcr p15, 0, r0, c7, c10, 1 @ Clean D entry
+ add r0, r0, #32
+ cmp r0, r1
+ blt 1b
+ mcr p15, 0, r0, c7, c5, 0 @ flush I cache
+ mov pc, lr
+
+@LC0: .word _current
+/*
+ * Function: sa110_switch_to (struct task_struct *prev, struct task_struct *next)
+ *
+ * Params : prev Old task structure
+ * : next New task structure for process to run
+ *
+ * Purpose : Perform a task switch, saving the old process's state, and restoring
+ * the new.
+ *
+ * Notes : We don't fiddle with the FP registers here - we postpone this until
+ * the new task actually uses FP. This way, we don't swap FP for tasks
+ * that do not require it.
+ */
+ .align 5
+_sa110_switch_to:
+ stmfd sp!, {r4 - r9, fp, lr} @ Store most regs on stack
+ mrs ip, cpsr
+ stmfd sp!, {ip} @ Save cpsr_SVC
+ str sp, [r0, #TSS_SAVE] @ Save sp_SVC
+ ldr sp, [r1, #TSS_SAVE] @ Get saved sp_SVC
+ ldr r0, [r1, #ADDR_LIMIT]
+ teq r0, #0
+ moveq r0, #KERNEL_DOMAIN
+ movne r0, #USER_DOMAIN
+ mcr p15, 0, r0, c3, c0 @ Set segment
+ ldr r0, [r1, #TSS_MEMMAP] @ Page table pointer
+ ldr r3, =Lclean_switch
+ ldr r2, [r3]
+ ands r2, r2, #1
+ eor r2, r2, #1
+ str r2, [r3]
+ ldr r2, =0xdf000000
+ addne r2, r2, #32768
+ add r1, r2, #16384 @ only necessary for 16k
+1: ldr r3, [r2], #32
+ teq r1, r2
+ bne 1b
+ mov r1, #0
+ mcr p15, 0, r1, c7, c5, 0 @ flush I cache
+ mcr p15, 0, r1, c7, c10, 4 @ drain WB
+ mcr p15, 0, r0, c2, c0, 0 @ load page table pointer
+ mcr p15, 0, r1, c8, c7, 0 @ flush TLBs
+ ldmfd sp!, {ip}
+ msr spsr, ip @ Save task's CPSR into SPSR for this return
+ ldmfd sp!, {r4 - r9, fp, pc}^ @ Load all regs saved previously
+
+/*
+ * Function: sa110_data_abort ()
+ *
+ * Params : r0 = address of aborted instruction
+ *
+ * Purpose : obtain information about current aborted instruction
+ *
+ * Returns : r0 = address of abort
+ * : r1 = FSR
+ * : r2 != 0 if writing
+ */
+ .align 5
+_sa110_data_abort:
+ ldr r2, [r0] @ read instruction causing problem
+ mrc p15, 0, r0, c6, c0, 0 @ get FAR
+ mov r2, r2, lsr #19 @ b1 = L
+ and r3, r2, #0x69 << 2
+ and r2, r2, #2
+// teq r3, #0x21 << 2
+// orreq r2, r2, #1 @ b0 = {LD,ST}RT
+ mrc p15, 0, r1, c5, c0, 0 @ get FSR
+ and r1, r1, #255
+ mov pc, lr
+
+/*
+ * Function: sa110_set_pmd ()
+ *
+ * Params : r0 = Address to set
+ * : r1 = value to set
+ *
+ * Purpose : Set a PMD and flush it out of any WB cache
+ */
+ .align 5
+_sa110_set_pmd: str r1, [r0]
+ mcr p15, 0, r0, c7, c10, 1 @ clean D entry (drain is done by TLB fns)
+ mov pc, lr
+
+/*
+ * Function: sa110_check_bugs (void)
+ * : sa110_proc_init (void)
+ * : sa110_proc_fin (void)
+ *
+ * Notes : This processor does not require these
+ */
+_sa110_check_bugs:
+ mrs ip, cpsr
+ bic ip, ip, #F_BIT
+ msr cpsr, ip
+_sa110_proc_init:
+_sa110_proc_fin:
+ mov pc, lr
+
+/*
+ * Function: sa110_reset
+ *
+ * Notes : This sets up everything for a reset
+ */
+_sa110_reset: mrs r1, cpsr
+ orr r1, r1, #F_BIT | I_BIT
+ msr cpsr, r1
+ stmfd sp!, {r1, lr}
+ bl _sa110_flush_cache_all
+ bl _sa110_flush_tlb_all
+ mcr p15, 0, ip, c7, c7, 0 @ flush I,D caches
+ mrc p15, 0, r0, c1, c0, 0 @ ctrl register
+ bic r0, r0, #0x1800
+ bic r0, r0, #0x000f
+ ldmfd sp!, {r1, pc}
+/*
+ * Purpose : Function pointers used to access above functions - all calls
+ * come through these
+ */
+_sa110_name: .ascii "sa110\0"
+ .align
+
+ENTRY(sa110_processor_functions)
+ .word _sa110_name @ 0
+ .word _sa110_switch_to @ 4
+ .word _sa110_data_abort @ 8
+ .word _sa110_check_bugs @ 12
+ .word _sa110_proc_init @ 16
+ .word _sa110_proc_fin @ 20
+
+ .word _sa110_flush_cache_all @ 24
+ .word _sa110_flush_cache_area @ 28
+ .word _sa110_flush_cache_entry @ 32
+ .word _sa110_flush_cache_pte @ 36
+ .word _sa110_flush_ram_page @ 40
+ .word _sa110_flush_tlb_all @ 44
+ .word _sa110_flush_tlb_area @ 48
+
+ .word _sa110_set_pmd @ 52
+ .word _sa110_reset @ 56
+ .word _sa110_flush_icache_area @ 60
--- /dev/null
+/*
+ * linux/arch/arm/mm/small_page.c
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * Changelog:
+ * 26/01/1996 RMK Cleaned up various areas to make things a little more generic
+ */
+
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/head.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/smp.h>
+
+#define SMALL_ALLOC_SHIFT (10)
+#define SMALL_ALLOC_SIZE (1 << SMALL_ALLOC_SHIFT)
+#define NR_BLOCKS (PAGE_SIZE / SMALL_ALLOC_SIZE)
+
+#if NR_BLOCKS != 4
+#error I only support 4 blocks per page!
+#endif
+
+#define USED(pg) ((atomic_read(&(pg)->count) >> 8) & 15)
+#define SET_USED(pg,off) (atomic_read(&(pg)->count) |= 256 << off)
+#define CLEAR_USED(pg,off) (atomic_read(&(pg)->count) &= ~(256 << off))
+#define IS_FREE(pg,off) (!(atomic_read(&(pg)->count) & (256 << off)))
+#define PAGE_PTR(page,block) ((struct free_small_page *)((page) + \
+ ((block) << SMALL_ALLOC_SHIFT)))
+
+struct free_small_page {
+ unsigned long next;
+ unsigned long prev;
+};
+
+/*
+ * To handle allocating small pages, we use the main get_free_page routine,
+ * and split the page up into 4. The page is marked in mem_map as reserved,
+ * so it can't be free'd by free_page. The count field is used to keep track
+ * of which sections of this page are allocated.
+ */
+static unsigned long small_page_ptr;
+
+static unsigned char offsets[1<<NR_BLOCKS] = {
+ 0, /* 0000 */
+ 1, /* 0001 */
+ 0, /* 0010 */
+ 2, /* 0011 */
+ 0, /* 0100 */
+ 1, /* 0101 */
+ 0, /* 0110 */
+ 3, /* 0111 */
+ 0, /* 1000 */
+ 1, /* 1001 */
+ 0, /* 1010 */
+ 2, /* 1011 */
+ 0, /* 1100 */
+ 1, /* 1101 */
+ 0, /* 1110 */
+ 4 /* 1111 */
+};
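The offsets[] table above is a branch-free lookup: indexed by the 4-bit used-mask returned by USED(), it yields the index of the first free 1 KB block, or 4 when all four blocks are taken. A minimal userspace sketch of the same lookup (the helper name is ours, not the kernel's):

```c
#include <assert.h>

#define NR_BLOCKS 4

/* Index of the first clear bit in a 4-bit used-mask; NR_BLOCKS (4)
 * means the page is completely used.  This mirrors what the offsets[]
 * table in small_page.c encodes as a flat array. */
static int first_free_block(unsigned mask)
{
	int i;

	for (i = 0; i < NR_BLOCKS; i++)
		if (!(mask & (1u << i)))
			return i;
	return NR_BLOCKS;	/* all four 1K blocks in use */
}
```

For example, first_free_block(0x3) is 2, matching the table's entry for mask 0011.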
+
+static inline void clear_page_links(unsigned long page)
+{
+ struct free_small_page *fsp;
+ int i;
+
+ for (i = 0; i < NR_BLOCKS; i++) {
+ fsp = PAGE_PTR(page, i);
+ fsp->next = fsp->prev = 0;
+ }
+}
+
+static inline void set_page_links_prev(unsigned long page, unsigned long prev)
+{
+ struct free_small_page *fsp;
+ unsigned int mask;
+ int i;
+
+ if (!page)
+ return;
+
+ mask = USED(&mem_map[MAP_NR(page)]);
+ for (i = 0; i < NR_BLOCKS; i++) {
+ if (mask & (1 << i))
+ continue;
+ fsp = PAGE_PTR(page, i);
+ fsp->prev = prev;
+ }
+}
+
+static inline void set_page_links_next(unsigned long page, unsigned long next)
+{
+ struct free_small_page *fsp;
+ unsigned int mask;
+ int i;
+
+ if (!page)
+ return;
+
+ mask = USED(&mem_map[MAP_NR(page)]);
+ for (i = 0; i < NR_BLOCKS; i++) {
+ if (mask & (1 << i))
+ continue;
+ fsp = PAGE_PTR(page, i);
+ fsp->next = next;
+ }
+}
+
+unsigned long get_small_page(int priority)
+{
+ struct free_small_page *fsp;
+ unsigned long new_page;
+ unsigned long flags;
+ struct page *page;
+ int offset;
+
+ save_flags(flags);
+ if (!small_page_ptr)
+ goto need_new_page;
+ cli();
+again:
+ page = mem_map + MAP_NR(small_page_ptr);
+ offset = offsets[USED(page)];
+ SET_USED(page, offset);
+ new_page = (unsigned long)PAGE_PTR(small_page_ptr, offset);
+ if (USED(page) == 15) {
+ fsp = (struct free_small_page *)new_page;
+ set_page_links_prev (fsp->next, 0);
+ small_page_ptr = fsp->next;
+ }
+ restore_flags(flags);
+ return new_page;
+
+need_new_page:
+ new_page = __get_free_page(priority);
+ if (!small_page_ptr) {
+ if (new_page) {
+ set_bit (PG_reserved, &mem_map[MAP_NR(new_page)].flags);
+ clear_page_links (new_page);
+ cli();
+ small_page_ptr = new_page;
+ goto again;
+ }
+ restore_flags(flags);
+ return 0;
+ }
+ free_page(new_page);
+ cli();
+ goto again;
+}
+
+void free_small_page(unsigned long spage)
+{
+ struct free_small_page *ofsp, *cfsp;
+ unsigned long flags;
+ struct page *page;
+ int offset, oldoffset;
+
+ offset = (spage >> SMALL_ALLOC_SHIFT) & (NR_BLOCKS - 1);
+ spage -= offset << SMALL_ALLOC_SHIFT;
+
+ page = mem_map + MAP_NR(spage);
+ if (!PageReserved(page) || !USED(page)) {
+ printk ("Trying to free non-small page from %p\n", __builtin_return_address(0));
+ return;
+ }
+ if (IS_FREE(page, offset)) {
+ printk ("Trying to free free small page from %p\n", __builtin_return_address(0));
+ return;
+ }
+ save_flags_cli (flags);
+ oldoffset = offsets[USED(page)];
+ CLEAR_USED(page, offset);
+ ofsp = PAGE_PTR(spage, oldoffset);
+ cfsp = PAGE_PTR(spage, offset);
+
+ if (oldoffset == NR_BLOCKS) { /* going from totally used to mostly used */
+ cfsp->prev = 0;
+ cfsp->next = small_page_ptr;
+ set_page_links_prev (small_page_ptr, spage);
+ small_page_ptr = spage;
+ } else if (!USED(page)) {
+ set_page_links_prev (ofsp->next, ofsp->prev);
+ set_page_links_next (ofsp->prev, ofsp->next);
+ if (spage == small_page_ptr)
+ small_page_ptr = ofsp->next;
+ clear_bit (PG_reserved, &page->flags);
+ restore_flags(flags);
+ free_page (spage);
+ } else
+ *cfsp = *ofsp;
+ restore_flags(flags);
+}
--- /dev/null
+/* ld script to make ARM Linux kernel
+ * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
+ */
+OUTPUT_FORMAT("elf32-arm", "elf32-arm", "elf32-arm")
+OUTPUT_ARCH(arm)
+ENTRY(_start)
+SECTIONS
+{
+ _text = .; /* Text and read-only data */
+ .text : {
+ *(.text)
+ *(.fixup)
+ *(.gnu.warning)
+ } = 0x9090
+ .text.lock : { *(.text.lock) } /* out-of-line lock text */
+ .rodata : { *(.rodata) }
+ .kstrtab : { *(.kstrtab) }
+
+ . = ALIGN(16); /* Exception table */
+ __start___ex_table = .;
+ __ex_table : { *(__ex_table) }
+ __stop___ex_table = .;
+
+ __start___ksymtab = .; /* Kernel symbol table */
+ __ksymtab : { *(__ksymtab) }
+ __stop___ksymtab = .;
+
+ _etext = .; /* End of text section */
+
+ .data : { /* Data */
+ *(.data)
+ CONSTRUCTORS
+ }
+
+ _edata = .; /* End of data section */
+
+ . = ALIGN(4096); /* Init code and data */
+ __init_begin = .;
+ .text.init : { *(.text.init) }
+ .data.init : { *(.data.init) }
+ . = ALIGN(4096);
+ __init_end = .;
+
+ __bss_start = .; /* BSS */
+ .bss : {
+ *(.bss)
+ }
+ _end = . ;
+
+ /* Stabs debugging sections. */
+ .stab 0 : { *(.stab) }
+ .stabstr 0 : { *(.stabstr) }
+ .stab.excl 0 : { *(.stab.excl) }
+ .stab.exclstr 0 : { *(.stab.exclstr) }
+ .stab.index 0 : { *(.stab.index) }
+ .stab.indexstr 0 : { *(.stab.indexstr) }
+ .comment 0 : { *(.comment) }
+}
fi
bool 'MCA support' CONFIG_MCA
bool 'System V IPC' CONFIG_SYSVIPC
+bool 'BSD Process Accounting' CONFIG_BSD_PROCESS_ACCT
bool 'Sysctl support' CONFIG_SYSCTL
tristate 'Kernel support for a.out binaries' CONFIG_BINFMT_AOUT
tristate 'Kernel support for ELF binaries' CONFIG_BINFMT_ELF
CONFIG_PCI_DIRECT=y
# CONFIG_MCA is not set
CONFIG_SYSVIPC=y
+# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
CONFIG_BINFMT_AOUT=y
CONFIG_BINFMT_ELF=y
# CONFIG_WATCHDOG is not set
# CONFIG_RTC is not set
# CONFIG_VIDEO_DEV is not set
-# CONFIG_VIDEO_BT848 is not set
-# CONFIG_VIDEO_PMS is not set
# CONFIG_NVRAM is not set
# CONFIG_JOYSTICK is not set
# CONFIG_MISC_RADIO is not set
O_OBJS += mca.o
endif
+
ifdef SMP
-O_OBJS += smp.o trampoline.o
+O_OBJS += io_apic.o smp.o trampoline.o
head.o: head.S $(TOPDIR)/include/linux/tasks.h
$(CC) -D__ASSEMBLY__ -D__SMP__ -traditional -c $*.S -o $*.o
--- /dev/null
+/*
+ * Intel IO-APIC support for multi-pentium hosts.
+ *
+ * (c) 1997 Ingo Molnar, Hajnalka Szabo
+ *
+ * Many thanks to Stig Venaas for trying out countless experimental
+ * patches and reporting/debugging problems patiently!
+ */
+
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/kernel_stat.h>
+#include <linux/delay.h>
+#include <linux/mc146818rtc.h>
+#include <asm/i82489.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <asm/pgtable.h>
+#include <asm/bitops.h>
+#include <asm/pgtable.h>
+#include <asm/smp.h>
+#include <asm/io.h>
+
+#include "irq.h"
+
+#define IO_APIC_BASE 0xfec00000
+
+/*
+ * volatile is justified in this case, it might change
+ * spontaneously, GCC should not cache it
+ */
+volatile unsigned int * io_apic_reg = NULL;
+
+/*
+ * The structure of the IO-APIC:
+ */
+struct IO_APIC_reg_00 {
+ __u32 __reserved_2 : 24,
+ ID : 4,
+ __reserved_1 : 4;
+} __attribute__ ((packed));
+
+struct IO_APIC_reg_01 {
+ __u32 version : 8,
+ __reserved_2 : 8,
+ entries : 8,
+ __reserved_1 : 8;
+} __attribute__ ((packed));
+
+struct IO_APIC_reg_02 {
+ __u32 __reserved_2 : 24,
+ arbitration : 4,
+ __reserved_1 : 4;
+} __attribute__ ((packed));
+
+struct IO_APIC_route_entry {
+ __u32 vector : 8,
+ delivery_mode : 3, /* 000: FIXED
+ * 001: lowest prio
+ * 111: ExtInt
+ */
+ dest_mode : 1, /* 0: physical, 1: logical */
+ delivery_status : 1,
+ polarity : 1,
+ irr : 1,
+ trigger : 1, /* 0: edge, 1: level */
+ mask : 1, /* 0: enabled, 1: disabled */
+ __reserved_2 : 15;
+
+ union { struct { __u32
+ __reserved_1 : 24,
+ physical_dest : 4,
+ __reserved_2 : 4;
+ } physical;
+
+ struct { __u32
+ __reserved_1 : 24,
+ logical_dest : 8;
+ } logical;
+ } dest;
+
+} __attribute__ ((packed));
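The redirection entry above is a 64-bit register pair that the driver reads and writes as two 32-bit words (IO-APIC registers 0x10+2*irq and 0x11+2*irq). A userspace mirror of the layout follows; bit-field packing is compiler-dependent (the word-0 check assumes GCC-style little-endian allocation), so treat this as an illustration of the intended encoding, not a guarantee:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Userspace mirror of the 64-bit IO-APIC redirection entry; field
 * order follows the kernel struct.  Layout is compiler-specific. */
struct route_entry {
	uint32_t vector          : 8,
	         delivery_mode   : 3,
	         dest_mode       : 1,
	         delivery_status : 1,
	         polarity        : 1,
	         irr             : 1,
	         trigger         : 1,	/* bit 15 */
	         mask            : 1,	/* bit 16 */
	         reserved2       : 15;
	uint32_t reserved1       : 24,
	         logical_dest    : 8;
} __attribute__ ((packed));

/* First 32-bit word of an entry with only the mask bit set. */
static uint32_t masked_entry_word0(void)
{
	struct route_entry e;
	uint32_t w;

	memset(&e, 0, sizeof(e));
	e.mask = 1;
	memcpy(&w, &e, sizeof(w));
	return w;
}
```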
+
+#define UNEXPECTED_IO_APIC() \
+ { \
+ printk(" WARNING: unexpected IO-APIC, please mail\n"); \
+ printk(" to linux-smp@vger.rutgers.edu\n"); \
+ }
+
+int nr_ioapic_registers = 0; /* # of IRQ routing registers */
+int mp_irq_entries = 0; /* # of MP IRQ source entries */
+struct mpc_config_intsrc mp_irqs[MAX_IRQ_SOURCES];
+ /* MP IRQ source entries */
+
+unsigned int io_apic_read (unsigned int reg)
+{
+ *io_apic_reg = reg;
+ return *(io_apic_reg+4);
+}
+
+void io_apic_write (unsigned int reg, unsigned int value)
+{
+ *io_apic_reg = reg;
+ *(io_apic_reg+4) = value;
+}
+
+void enable_IO_APIC_irq (int irq)
+{
+ struct IO_APIC_route_entry entry;
+
+ /*
+ * Enable it in the IO-APIC irq-routing table:
+ */
+ *(((int *)&entry)+0) = io_apic_read(0x10+irq*2);
+ entry.mask = 0;
+ io_apic_write(0x10+2*irq, *(((int *)&entry)+0));
+}
+
+void disable_IO_APIC_irq (int irq)
+{
+ struct IO_APIC_route_entry entry;
+
+ /*
+ * Disable it in the IO-APIC irq-routing table:
+ */
+ *(((int *)&entry)+0) = io_apic_read(0x10+irq*2);
+ entry.mask = 1;
+ io_apic_write(0x10+2*irq, *(((int *)&entry)+0));
+}
+
+void clear_IO_APIC_irq (int irq)
+{
+ struct IO_APIC_route_entry entry;
+
+ /*
+ * Disable it in the IO-APIC irq-routing table:
+ */
+ memset(&entry, 0, sizeof(entry));
+ entry.mask = 1;
+ io_apic_write(0x10+2*irq, *(((int *)&entry)+0));
+ io_apic_write(0x11+2*irq, *(((int *)&entry)+1));
+}
+
+/*
+ * support for broken MP BIOSes, enables hand-redirection of PIRQ0-3 to
+ * specific CPU-side IRQs.
+ */
+
+#define MAX_PIRQS 4
+int pirq_entries [MAX_PIRQS];
+
+void ioapic_pirq_setup(char *str, int *ints)
+{
+ int i, max;
+
+ for (i=0; i<MAX_PIRQS; i++)
+ pirq_entries[i]=-1;
+
+ if (!ints)
+ printk("PIRQ redirection SETUP, trusting MP-BIOS.\n");
+ else {
+ printk("PIRQ redirection SETUP, working around broken MP-BIOS.\n");
+ max = MAX_PIRQS;
+ if (ints[0] < MAX_PIRQS)
+ max = ints[0];
+
+ for (i=0; i < max; i++) {
+ printk("... PIRQ%d -> IRQ %d\n", i, ints[i+1]);
+ /*
+ * PIRQs are mapped upside down, usually.
+ */
+ pirq_entries[MAX_PIRQS-i-1]=ints[i+1];
+ }
+ }
+}
+
+int find_irq_entry(int pin)
+{
+ int i;
+
+ for (i=mp_irq_entries-1; i>=0; i--) {
+ if (mp_irqs[i].mpc_dstirq == pin)
+ return i;
+ }
+ return -1;
+}
+
+void setup_IO_APIC_irqs (void)
+{
+ struct IO_APIC_route_entry entry;
+ int i, idx, bus, irq, first_notcon=1;
+
+ printk("init IO_APIC IRQs\n");
+
+ for (i=0; i<nr_ioapic_registers; i++) {
+
+ /*
+ * add it to the IO-APIC irq-routing table:
+ */
+ memset(&entry,0,sizeof(entry));
+
+ entry.delivery_mode = 1; /* lowest prio */
+ entry.dest_mode = 1; /* logical delivery */
+ entry.mask = 0; /* enable IRQ */
+ entry.dest.logical.logical_dest = 0xff; /* all CPUs */
+
+ idx = find_irq_entry(i);
+ if (idx == -1) {
+ if (first_notcon) {
+ printk(" IO-APIC pin %d", i);
+ first_notcon=0;
+ } else
+ printk(", %d", i);
+ continue;
+ }
+ bus = mp_irqs[idx].mpc_srcbus;
+
+ switch (mp_bus_id_to_type[bus])
+ {
+ case MP_BUS_ISA: /* ISA pin */
+ {
+ irq = mp_irqs[idx].mpc_srcbusirq;
+ break;
+ }
+ case MP_BUS_PCI: /* PCI pin */
+ {
+ irq = mp_irqs[idx].mpc_srcbusirq >> 2;
+ if (irq>=16)
+ printk("WARNING: MP BIOS says PIRQ%d is redirected to %d, suspicious.\n",idx-16, irq);
+ break;
+ }
+ default:
+ {
+ printk("unknown bus type %d.\n",bus);
+ irq = 0;
+ break;
+ }
+ }
+
+ /*
+ * PCI IRQ redirection. Yes, limits are hardcoded.
+ */
+ if ((i>=16) && (i<=19)) {
+ if (pirq_entries[i-16] != -1) {
+ if (!pirq_entries[i-16]) {
+ printk("disabling PIRQ%d\n", i-16);
+ } else {
+ irq = pirq_entries[i-16];
+ printk("using PIRQ%d -> IRQ %d\n",
+ i-16, irq);
+ }
+ }
+ }
+
+ if (!IO_APIC_IRQ(irq))
+ continue;
+
+ entry.vector = IO_APIC_GATE_OFFSET + (irq<<3);
+
+ /*
+ * Determine IRQ line polarity (high active or low active):
+ */
+ switch (mp_irqs[idx].mpc_irqflag & 3)
+ {
+ case 0: /* conforms, ie. bus-type dependent polarity */
+ {
+ switch (mp_bus_id_to_type[bus])
+ {
+ case MP_BUS_ISA: /* ISA pin */
+ {
+ entry.polarity = 0;
+ break;
+ }
+ case MP_BUS_PCI: /* PCI pin */
+ {
+ entry.polarity = 1;
+ break;
+ }
+ default:
+ {
+ printk("broken BIOS!!\n");
+ break;
+ }
+ }
+ break;
+ }
+ case 1: /* high active */
+ {
+ entry.polarity = 0;
+ break;
+ }
+ case 2: /* reserved */
+ {
+ printk("broken BIOS!!\n");
+ break;
+ }
+ case 3: /* low active */
+ {
+ entry.polarity = 1;
+ break;
+ }
+ }
+
+ /*
+ * Determine IRQ trigger mode (edge or level sensitive):
+ */
+ switch ((mp_irqs[idx].mpc_irqflag>>2) & 3)
+ {
+ case 0: /* conforms, ie. bus-type dependent */
+ {
+ switch (mp_bus_id_to_type[bus])
+ {
+ case MP_BUS_ISA: /* ISA pin, edge */
+ {
+ entry.trigger = 0;
+ break;
+ }
+ case MP_BUS_PCI: /* PCI pin, level */
+ {
+ entry.trigger = 1;
+ break;
+ }
+ default:
+ {
+ printk("broken BIOS!!\n");
+ break;
+ }
+ }
+ break;
+ }
+ case 1: /* edge */
+ {
+ entry.trigger = 0;
+ break;
+ }
+ case 2: /* reserved */
+ {
+ printk("broken BIOS!!\n");
+ break;
+ }
+ case 3: /* level */
+ {
+ entry.trigger = 1;
+ break;
+ }
+ }
+
+ io_apic_write(0x10+2*i, *(((int *)&entry)+0));
+ io_apic_write(0x11+2*i, *(((int *)&entry)+1));
+ }
+
+ if (!first_notcon)
+ printk(" not connected.\n");
+}
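The two switch statements in setup_IO_APIC_irqs() decode mpc_irqflag: bits 0-1 give the polarity, bits 2-3 the trigger mode, with 'conforms' (0) falling back to the bus default (ISA: active high, edge; PCI: active low, level). A compact sketch of the same decode, with helper names that are ours rather than the kernel's:

```c
#include <assert.h>

#define BUS_ISA 0
#define BUS_PCI 1

/* Polarity from MP-table irqflag bits 0-1: 0 = active high,
 * 1 = active low.  'Conforms' uses the bus default, as above. */
static int irq_polarity(int irqflag, int bus_type)
{
	switch (irqflag & 3) {
	case 0:  return bus_type == BUS_PCI;	/* ISA: high, PCI: low */
	case 1:  return 0;			/* high active */
	case 3:  return 1;			/* low active */
	default: return 0;			/* 2 is reserved */
	}
}

/* Trigger from bits 2-3: 0 = edge, 1 = level. */
static int irq_trigger(int irqflag, int bus_type)
{
	switch ((irqflag >> 2) & 3) {
	case 0:  return bus_type == BUS_PCI;	/* ISA: edge, PCI: level */
	case 1:  return 0;			/* edge */
	case 3:  return 1;			/* level */
	default: return 0;			/* 2 is reserved */
	}
}
```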
+
+void setup_IO_APIC_irq_ISA_default (int irq)
+{
+ struct IO_APIC_route_entry entry;
+
+ /*
+ * add it to the IO-APIC irq-routing table:
+ */
+ memset(&entry,0,sizeof(entry));
+
+ entry.delivery_mode = 1; /* lowest prio */
+ entry.dest_mode = 1; /* logical delivery */
+ entry.mask = 1; /* keep IRQ masked for now */
+ entry.dest.logical.logical_dest = 0xff; /* all CPUs */
+
+ entry.vector = IO_APIC_GATE_OFFSET + (irq<<3);
+
+ entry.polarity=0;
+ entry.trigger=0;
+
+ io_apic_write(0x10+2*irq, *(((int *)&entry)+0));
+ io_apic_write(0x11+2*irq, *(((int *)&entry)+1));
+}
+
+void setup_IO_APIC_irq (int irq)
+{
+}
+
+void print_IO_APIC (void)
+{
+ int i;
+ struct IO_APIC_reg_00 reg_00;
+ struct IO_APIC_reg_01 reg_01;
+ struct IO_APIC_reg_02 reg_02;
+
+ *(int *)&reg_00 = io_apic_read(0);
+ *(int *)&reg_01 = io_apic_read(1);
+ *(int *)&reg_02 = io_apic_read(2);
+
+ /*
+ * We are a bit conservative about what we expect, we have to
+ * know about every HW change ASAP ...
+ */
+ printk("testing the IO APIC.......................\n");
+
+ printk(".... register #00: %08X\n", *(int *)&reg_00);
+ printk("....... : physical APIC id: %02X\n", reg_00.ID);
+ if (reg_00.__reserved_1 || reg_00.__reserved_2)
+ UNEXPECTED_IO_APIC();
+
+ printk(".... register #01: %08X\n", *(int *)&reg_01);
+ printk("....... : max redirection entries: %04X\n", reg_01.entries);
+ if ( (reg_01.entries != 0x0f) && /* ISA-only Neptune boards */
+ (reg_01.entries != 0x17) /* ISA+PCI boards */
+ )
+ UNEXPECTED_IO_APIC();
+ if (reg_01.entries == 0x0f)
+ printk("....... [IO-APIC cannot route PCI PIRQ 0-3]\n");
+
+ printk("....... : IO APIC version: %04X\n", reg_01.version);
+ if ( (reg_01.version != 0x10) && /* oldest IO-APICs */
+ (reg_01.version != 0x11) /* my IO-APIC */
+ )
+ UNEXPECTED_IO_APIC();
+ if (reg_01.__reserved_1 || reg_01.__reserved_2)
+ UNEXPECTED_IO_APIC();
+
+ printk(".... register #02: %08X\n", *(int *)&reg_02);
+ printk("....... : arbitration: %02X\n", reg_02.arbitration);
+ if (reg_02.__reserved_1 || reg_02.__reserved_2)
+ UNEXPECTED_IO_APIC();
+
+ printk(".... IRQ redirection table:\n");
+
+ printk(" NR Log Phy ");
+ printk("Mask Trig IRR Pol Stat Dest Deli Vect: \n");
+
+ for (i=0; i<=reg_01.entries; i++) {
+ struct IO_APIC_route_entry entry;
+
+ *(((int *)&entry)+0) = io_apic_read(0x10+i*2);
+ *(((int *)&entry)+1) = io_apic_read(0x11+i*2);
+
+ printk(" %02x %03X %02X ",
+ i,
+ entry.dest.logical.logical_dest,
+ entry.dest.physical.physical_dest
+ );
+
+ printk("%1d %1d %1d %1d %1d %1d %1d %02X\n",
+ entry.mask,
+ entry.trigger,
+ entry.irr,
+ entry.polarity,
+ entry.delivery_status,
+ entry.dest_mode,
+ entry.delivery_mode,
+ entry.vector
+ );
+ }
+
+ printk(".................................... done.\n");
+
+ return;
+}
+
+void init_sym_mode (void)
+{
+ printk("enabling Symmetric IO mode ... ");
+ outb (0x70, 0x22);
+ outb (0x01, 0x23);
+ printk("...done.\n");
+}
+
+void setup_IO_APIC (void)
+{
+ int i;
+ /*
+ * Map the IO APIC into kernel space
+ */
+
+ printk("mapping IO APIC from standard address.\n");
+ io_apic_reg = ioremap_nocache(IO_APIC_BASE,4096);
+ printk("new virtual address: %p.\n",io_apic_reg);
+
+ init_sym_mode();
+ {
+ struct IO_APIC_reg_01 reg_01;
+
+ *(int *)&reg_01 = io_apic_read(1);
+ nr_ioapic_registers = reg_01.entries+1;
+ }
+
+ init_IO_APIC_traps();
+
+ /*
+ * do not trust the IO-APIC being empty at bootup
+ */
+ for (i=0; i<nr_ioapic_registers; i++)
+ clear_IO_APIC_irq (i);
+
+#if DEBUG_1
+ for (i=0; i<16; i++)
+ if (IO_APIC_IRQ(i))
+ setup_IO_APIC_irq_ISA_default (i);
+#endif
+
+ setup_IO_APIC_irqs ();
+
+ printk("nr of MP irq sources: %d.\n", mp_irq_entries);
+ printk("nr of IOAPIC registers: %d.\n", nr_ioapic_registers);
+ print_IO_APIC();
+}
+
#include <linux/malloc.h>
#include <linux/random.h>
#include <linux/smp.h>
+#include <linux/tasks.h>
#include <linux/smp_lock.h>
#include <linux/init.h>
#include <asm/bitops.h>
#include <asm/smp.h>
#include <asm/pgtable.h>
+#include <asm/delay.h>
#include "irq.h"
-#ifdef __SMP_PROF__
-extern volatile unsigned long smp_local_timer_ticks[1+NR_CPUS];
+/*
+ * I had a lockup scenario where a tight loop doing
+ * spin_unlock()/spin_lock() on CPU#1 was racing with
+ * spin_lock() on CPU#0. CPU#0 should have noticed spin_unlock(), but
+ * apparently the spin_unlock() information did not make it
+ * through to CPU#0 ... nasty, is this by design, do we have to limit
+ * 'memory update oscillation frequency' artificially like here?
+ *
+ * Such 'high frequency update' races can be avoided by careful design, but
+ * some of our major constructs like spinlocks use similar techniques,
+ * it would be nice to clarify this issue. Set this define to 0 if you
+ * want to check whether your system freezes. I suspect the delay done
+ * by SYNC_OTHER_CORES() is in correlation with 'snooping latency', but
+ * I thought that such things are guaranteed by design, since we use
+ * the 'LOCK' prefix.
+ */
+#define SUSPECTED_CPU_OR_CHIPSET_BUG_WORKAROUND 1
+
+#if SUSPECTED_CPU_OR_CHIPSET_BUG_WORKAROUND
+# define SYNC_OTHER_CORES(x) udelay(x+1)
+#else
+/*
+ * We have to allow irqs to arrive between __sti and __cli
+ */
+# define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop")
#endif
unsigned int local_irq_count[NR_CPUS];
int __intel_bh_counter;
#endif
-#ifdef __SMP_PROF__
-static unsigned int int_count[NR_CPUS][NR_IRQS] = {{0},};
-#endif
-
atomic_t nmi_counter;
/*
- * This contains the irq mask for both irq controllers
+ * About the IO-APIC: its architecture is 'merged' into our
+ * current irq architecture, seamlessly (I hope). It is only
+ * visible through 8 more hardware interrupt lines, but otherwise
+ * drivers are unaffected. The main code is believed to be
+ * NR_IRQS-safe (nothing assumes we have only 16 irq lines
+ * anymore), but there might be some places left ...
+ */
+
+/*
+ * This contains the irq mask for both 8259A irq controllers,
+ * and on SMP the extended IO-APIC IRQs 16-23. The IO-APIC
+ * uses this mask too, in probe_irq*().
+ *
+ * (0x0000ffff for NR_IRQS==16, 0x00ffffff for NR_IRQS=24)
*/
-static unsigned int cached_irq_mask = 0xffff;
+static unsigned int cached_irq_mask = (1<<NR_IRQS)-1;
-#define cached_21 (((char *)(&cached_irq_mask))[0])
-#define cached_A1 (((char *)(&cached_irq_mask))[1])
+#define cached_21 ((cached_irq_mask | io_apic_irqs) & 0xff)
+#define cached_A1 (((cached_irq_mask | io_apic_irqs) >> 8) & 0xff)
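The new cached_21/cached_A1 macros first OR in io_apic_irqs, so any line routed through the IO-APIC stays permanently masked at the 8259As, and then split the combined mask into the master (port 0x21) and slave (port 0xA1) mask bytes. A small userspace sketch of that split (function names are ours):

```c
#include <assert.h>

/* Master 8259A mask byte: combined mask ORed with the IO-APIC routed
 * lines, low 8 bits.  Mirrors the cached_21 macro. */
static unsigned pic_master_mask(unsigned cached_mask, unsigned io_apic_irqs)
{
	return (cached_mask | io_apic_irqs) & 0xff;
}

/* Slave 8259A mask byte: same combination, bits 8-15.
 * Mirrors the cached_A1 macro. */
static unsigned pic_slave_mask(unsigned cached_mask, unsigned io_apic_irqs)
{
	return ((cached_mask | io_apic_irqs) >> 8) & 0xff;
}
```

With io_apic_irqs = ~((1<<0)|(1<<2)|(1<<13)), as defined below for SMP, only IRQ0, the cascade and IRQ13 can ever be unmasked at the PICs.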
spinlock_t irq_controller_lock;
+static int irq_events [NR_IRQS] = { -1, };
+static int disabled_irq [NR_IRQS] = { 0, };
+#ifdef __SMP__
+static int irq_owner [NR_IRQS] = { NO_PROC_ID, };
+#endif
+
/*
- * This is always called from an interrupt context
- * with local interrupts disabled. Don't worry about
- * irq-safe locks.
+ * Not all IRQs can be routed through the IO-APIC, eg. on certain (older)
+ * boards the timer interrupt and sometimes the keyboard interrupt is
+ * not connected to any IO-APIC pin, it's fed to the CPU ExtInt IRQ line
+ * directly.
*
- * Note that we always ack the primary irq controller,
- * even if the interrupt came from the secondary, as
- * the primary will still have routed it. Oh, the joys
- * of PC hardware.
+ * Any '1' bit in this mask means the IRQ is routed through the IO-APIC.
+ * This 'mixed mode' IRQ handling costs us one more branch in do_IRQ,
+ * but we have _much_ higher compatibility and robustness this way.
*/
-static inline void mask_and_ack_irq(int irq_nr)
+
+#ifndef __SMP__
+ static const unsigned int io_apic_irqs = 0;
+#else
+ /*
+ * the timer interrupt is not connected to the IO-APIC on all boards
+ * (mine is such ;), and since it is not performance critical anyway,
+ * we route it through the INTA pin and win lots of design simplicity.
+ * Ditto the obsolete EISA dma chaining irq. All other interrupts are
+ * routed through the IO-APIC, distributed amongst all CPUs, dependent
+ * on irq traffic and CPU load.
+ */
+ const unsigned int io_apic_irqs = ~((1<<0)|(1<<2)|(1<<13));
+#endif
+
+static inline int ack_irq(int irq)
{
+ /*
+ * The IO-APIC part will be moved to assembly, nested
+ * interrupts will be ~5 instructions from entry to iret ...
+ */
+ int should_handle_irq = 0;
+ int cpu = smp_processor_id();
+
+ /*
+ * We always call this with local irqs disabled
+ */
spin_lock(&irq_controller_lock);
- cached_irq_mask |= 1 << irq_nr;
- if (irq_nr & 8) {
- inb(0xA1); /* DUMMY */
- outb(cached_A1,0xA1);
- outb(0x62,0x20); /* Specific EOI to cascade */
- outb(0x20,0xA0);
- } else {
- inb(0x21); /* DUMMY */
- outb(cached_21,0x21);
- outb(0x20,0x20);
+
+ if (!irq_events[irq]++ && !disabled_irq[irq]) {
+ should_handle_irq = 1;
+#ifdef __SMP__
+ irq_owner[irq] = cpu;
+#endif
+ hardirq_enter(cpu);
+ }
+
+ if (IO_APIC_IRQ(irq))
+ ack_APIC_irq ();
+ else {
+ /*
+ * 8259-triggered INTA-cycle interrupt
+ */
+ if (should_handle_irq)
+ mask_irq(irq);
+
+ if (irq & 8) {
+ inb(0xA1); /* DUMMY */
+ outb(0x62,0x20); /* Specific EOI to cascade */
+ outb(0x20,0xA0);
+ } else {
+ inb(0x21); /* DUMMY */
+ outb(0x20,0x20);
+ }
}
+
spin_unlock(&irq_controller_lock);
+
+ return (should_handle_irq);
}
-static inline void set_irq_mask(int irq_nr)
+void set_8259A_irq_mask(int irq)
{
- if (irq_nr & 8) {
+ if (irq >= 16) {
+ printk ("HUH #3 (%d)?\n", irq);
+ return;
+ }
+ if (irq & 8) {
outb(cached_A1,0xA1);
} else {
outb(cached_21,0x21);
* These have to be protected by the spinlock
* before being called.
*/
-static inline void mask_irq(unsigned int irq_nr)
+void mask_irq(unsigned int irq)
{
- cached_irq_mask |= 1 << irq_nr;
- set_irq_mask(irq_nr);
+ if (IO_APIC_IRQ(irq))
+ disable_IO_APIC_irq(irq);
+ else {
+ cached_irq_mask |= 1 << irq;
+ set_8259A_irq_mask(irq);
+ }
}
-static inline void unmask_irq(unsigned int irq_nr)
+void unmask_irq(unsigned int irq)
{
- cached_irq_mask &= ~(1 << irq_nr);
- set_irq_mask(irq_nr);
+ if (IO_APIC_IRQ(irq))
+ enable_IO_APIC_irq(irq);
+ else {
+ cached_irq_mask &= ~(1 << irq);
+ set_8259A_irq_mask(irq);
+ }
}
-void disable_irq(unsigned int irq_nr)
-{
- unsigned long flags;
+/*
+ * This builds up the IRQ handler stubs using some ugly macros in irq.h
+ *
+ * These macros create the low-level assembly IRQ routines that save
+ * register context and call do_IRQ(). do_IRQ() then does all the
+ * operations that are needed to keep the AT (or SMP IOAPIC)
+ * interrupt-controller happy.
+ */
- spin_lock_irqsave(&irq_controller_lock, flags);
- mask_irq(irq_nr);
- spin_unlock_irqrestore(&irq_controller_lock, flags);
- synchronize_irq();
-}
-void enable_irq(unsigned int irq_nr)
-{
- unsigned long flags;
+BUILD_COMMON_IRQ()
+/*
+ * ISA PIC or IO-APIC triggered (INTA-cycle or APIC) interrupts:
+ */
+BUILD_IRQ(0) BUILD_IRQ(1) BUILD_IRQ(2) BUILD_IRQ(3)
+BUILD_IRQ(4) BUILD_IRQ(5) BUILD_IRQ(6) BUILD_IRQ(7)
+BUILD_IRQ(8) BUILD_IRQ(9) BUILD_IRQ(10) BUILD_IRQ(11)
+BUILD_IRQ(12) BUILD_IRQ(13) BUILD_IRQ(14) BUILD_IRQ(15)
- spin_lock_irqsave(&irq_controller_lock, flags);
- unmask_irq(irq_nr);
- spin_unlock_irqrestore(&irq_controller_lock, flags);
-}
+#ifdef __SMP__
/*
- * This builds up the IRQ handler stubs using some ugly macros in irq.h
+ * The IO-APIC (present only in SMP boards) has 8 more hardware

+ * interrupt pins, for all of them we define an IRQ vector:
*
- * These macros create the low-level assembly IRQ routines that do all
- * the operations that are needed to keep the AT interrupt-controller
- * happy. They are also written to be fast - and to disable interrupts
- * as little as humanly possible.
+ * raw PCI interrupts 0-3, basically these are the ones used
+ * heavily:
*/
+BUILD_IRQ(16) BUILD_IRQ(17) BUILD_IRQ(18) BUILD_IRQ(19)
-#if NR_IRQS != 16
-#error make irq stub building NR_IRQS dependent and remove me.
-#endif
+/*
+ * [FIXME: anyone with 2 separate PCI buses and 2 IO-APICs,
+ * please speak up and request experimental patches.
+ * --mingo ]
+ */
-BUILD_COMMON_IRQ()
-BUILD_IRQ(FIRST,0,0x01)
-BUILD_IRQ(FIRST,1,0x02)
-BUILD_IRQ(FIRST,2,0x04)
-BUILD_IRQ(FIRST,3,0x08)
-BUILD_IRQ(FIRST,4,0x10)
-BUILD_IRQ(FIRST,5,0x20)
-BUILD_IRQ(FIRST,6,0x40)
-BUILD_IRQ(FIRST,7,0x80)
-BUILD_IRQ(SECOND,8,0x01)
-BUILD_IRQ(SECOND,9,0x02)
-BUILD_IRQ(SECOND,10,0x04)
-BUILD_IRQ(SECOND,11,0x08)
-BUILD_IRQ(SECOND,12,0x10)
-BUILD_IRQ(SECOND,13,0x20)
-BUILD_IRQ(SECOND,14,0x40)
-BUILD_IRQ(SECOND,15,0x80)
+/*
+ * MIRQ (motherboard IRQ) interrupts 0-1:
+ */
+BUILD_IRQ(20) BUILD_IRQ(21)
-#ifdef __SMP__
+/*
+ * 'undefined' general purpose interrupt.
+ */
+BUILD_IRQ(22)
+/*
+ * optionally rerouted SMI interrupt:
+ */
+BUILD_IRQ(23)
+
+/*
+ * The following vectors are part of the Linux architecture, there
+ * is no hardware IRQ pin equivalent for them, they are triggered
+ * through the ICC by us (IPIs), via smp_message_pass():
+ */
BUILD_SMP_INTERRUPT(reschedule_interrupt)
BUILD_SMP_INTERRUPT(invalidate_interrupt)
BUILD_SMP_INTERRUPT(stop_cpu_interrupt)
+
+/*
+ * every Pentium local APIC has two 'local interrupts', with a
+ * soft-definable vector attached to each; one is a timer
+ * interrupt, the other is error counter overflow. Linux uses
+ * the local APIC timer interrupt to get a much simpler SMP time
+ * architecture:
+ */
BUILD_SMP_TIMER_INTERRUPT(apic_timer_interrupt)
+
#endif
-static void (*interrupt[17])(void) = {
+static void (*interrupt[NR_IRQS])(void) = {
IRQ0_interrupt, IRQ1_interrupt, IRQ2_interrupt, IRQ3_interrupt,
IRQ4_interrupt, IRQ5_interrupt, IRQ6_interrupt, IRQ7_interrupt,
IRQ8_interrupt, IRQ9_interrupt, IRQ10_interrupt, IRQ11_interrupt,
- IRQ12_interrupt, IRQ13_interrupt, IRQ14_interrupt, IRQ15_interrupt
+ IRQ12_interrupt, IRQ13_interrupt, IRQ14_interrupt, IRQ15_interrupt
+#ifdef __SMP__
+ ,IRQ16_interrupt, IRQ17_interrupt, IRQ18_interrupt, IRQ19_interrupt,
+ IRQ20_interrupt, IRQ21_interrupt, IRQ22_interrupt, IRQ23_interrupt
+#endif
};
/*
*/
static struct irqaction irq2 = { no_action, 0, 0, "cascade", NULL, NULL};
-static struct irqaction *irq_action[16] = {
+static struct irqaction *irq_action[NR_IRQS] = {
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL
+#ifdef __SMP__
+ ,NULL, NULL, NULL, NULL,
+ NULL, NULL, NULL, NULL
+#endif
};
int get_irq_list(char *buf)
{
- int i;
+ int i, j;
struct irqaction * action;
char *p = buf;
+ p += sprintf(p, " ");
+ for (j=0; j<smp_num_cpus; j++)
+ p += sprintf(p, "CPU%d ",j);
+ *p++ = '\n';
+
for (i = 0 ; i < NR_IRQS ; i++) {
action = irq_action[i];
if (!action)
continue;
- p += sprintf(p, "%3d: %10u %s",
- i, kstat.interrupts[i], action->name);
+ p += sprintf(p, "%3d: ",i);
+#ifndef __SMP__
+ p += sprintf(p, "%10u ", kstat.interrupts[0][i]);
+#else
+ for (j=0; j<smp_num_cpus; j++)
+ p += sprintf(p, "%10u ",
+ kstat.interrupts[cpu_logical_map[j]][i]);
+#endif
+ if (IO_APIC_IRQ(i))
+ p += sprintf(p, " IO-APIC ");
+ else
+ p += sprintf(p, " XT PIC ");
+ p += sprintf(p, " %s", action->name);
+
for (action=action->next; action; action = action->next) {
p += sprintf(p, ", %s", action->name);
}
*p++ = '\n';
}
p += sprintf(p, "NMI: %10u\n", atomic_read(&nmi_counter));
-#ifdef __SMP_PROF__
+#ifdef __SMP__
p += sprintf(p, "IPI: %10lu\n", ipi_count);
#endif
return p - buf;
}
-#ifdef __SMP_PROF__
-
-extern unsigned int prof_multiplier[NR_CPUS];
-extern unsigned int prof_counter[NR_CPUS];
-
-int get_smp_prof_list(char *buf) {
- int i,j, len = 0;
- struct irqaction * action;
- unsigned long sum_spins = 0;
- unsigned long sum_spins_syscall = 0;
- unsigned long sum_spins_sys_idle = 0;
- unsigned long sum_smp_idle_count = 0;
- unsigned long sum_local_timer_ticks = 0;
-
- for (i=0;i<smp_num_cpus;i++) {
- int cpunum = cpu_logical_map[i];
- sum_spins+=smp_spins[cpunum];
- sum_spins_syscall+=smp_spins_syscall[cpunum];
- sum_spins_sys_idle+=smp_spins_sys_idle[cpunum];
- sum_smp_idle_count+=smp_idle_count[cpunum];
- sum_local_timer_ticks+=smp_local_timer_ticks[cpunum];
- }
-
- len += sprintf(buf+len,"CPUS: %10i \n", smp_num_cpus);
- len += sprintf(buf+len," SUM ");
- for (i=0;i<smp_num_cpus;i++)
- len += sprintf(buf+len," P%1d ",cpu_logical_map[i]);
- len += sprintf(buf+len,"\n");
- for (i = 0 ; i < NR_IRQS ; i++) {
- action = *(i + irq_action);
- if (!action || !action->handler)
- continue;
- len += sprintf(buf+len, "%3d: %10d ",
- i, kstat.interrupts[i]);
- for (j=0;j<smp_num_cpus;j++)
- len+=sprintf(buf+len, "%10d ",
- int_count[cpu_logical_map[j]][i]);
- len += sprintf(buf+len, " %s", action->name);
- for (action=action->next; action; action = action->next) {
- len += sprintf(buf+len, ", %s", action->name);
- }
- len += sprintf(buf+len, "\n");
- }
- len+=sprintf(buf+len, "LCK: %10lu",
- sum_spins);
-
- for (i=0;i<smp_num_cpus;i++)
- len+=sprintf(buf+len," %10lu",smp_spins[cpu_logical_map[i]]);
-
- len +=sprintf(buf+len," spins from int\n");
-
- len+=sprintf(buf+len, "LCK: %10lu",
- sum_spins_syscall);
-
- for (i=0;i<smp_num_cpus;i++)
- len+=sprintf(buf+len," %10lu",smp_spins_syscall[cpu_logical_map[i]]);
-
- len +=sprintf(buf+len," spins from syscall\n");
-
- len+=sprintf(buf+len, "LCK: %10lu",
- sum_spins_sys_idle);
-
- for (i=0;i<smp_num_cpus;i++)
- len+=sprintf(buf+len," %10lu",smp_spins_sys_idle[cpu_logical_map[i]]);
-
- len +=sprintf(buf+len," spins from sysidle\n");
- len+=sprintf(buf+len,"IDLE %10lu",sum_smp_idle_count);
-
- for (i=0;i<smp_num_cpus;i++)
- len+=sprintf(buf+len," %10lu",smp_idle_count[cpu_logical_map[i]]);
-
- len +=sprintf(buf+len," idle ticks\n");
-
- len+=sprintf(buf+len,"TICK %10lu",sum_local_timer_ticks);
- for (i=0;i<smp_num_cpus;i++)
- len+=sprintf(buf+len," %10lu",smp_local_timer_ticks[cpu_logical_map[i]]);
-
- len +=sprintf(buf+len," local APIC timer ticks\n");
-
- len+=sprintf(buf+len,"MULT: ");
- for (i=0;i<smp_num_cpus;i++)
- len+=sprintf(buf+len," %10u",prof_multiplier[cpu_logical_map[i]]);
- len +=sprintf(buf+len," profiling multiplier\n");
-
- len+=sprintf(buf+len,"COUNT: ");
- for (i=0;i<smp_num_cpus;i++)
- len+=sprintf(buf+len," %10u",prof_counter[cpu_logical_map[i]]);
-
- len +=sprintf(buf+len," profiling counter\n");
-
- len+=sprintf(buf+len, "IPI: %10lu received\n",
- ipi_count);
-
- return len;
-}
-#endif
-
-
/*
* Global interrupt locks for SMP. Allow interrupts to come in on any
* CPU, yet make cli/sti act globally to protect critical regions..
* and other fun things.
*/
atomic_sub(local_count, &global_irq_count);
+ global_irq_holder = NO_PROC_ID;
global_irq_lock = 0;
/*
* their things before trying to get the lock again.
*/
for (;;) {
+ atomic_add(local_count, &global_irq_count);
+ __sti();
+ SYNC_OTHER_CORES(cpu);
+ __cli();
+ atomic_sub(local_count, &global_irq_count);
+ SYNC_OTHER_CORES(cpu);
check_smp_invalidate(cpu);
if (atomic_read(&global_irq_count))
continue;
break;
}
atomic_add(local_count, &global_irq_count);
+ global_irq_holder = cpu;
}
}
* are no interrupts that are executing on another
* CPU we need to call this function.
*
+ * We have to give pending interrupts a chance to
+ * arrive (ie. let them get as far as hard_irq_enter()),
+ * even if they are arriving to another CPU.
+ *
* On UP this is a no-op.
+ *
+ * UPDATE: this method is not quite safe, as it won't
+ * catch irq handlers polling for the irq lock bit
+ * in __global_cli():get_interrupt_lock():wait_on_irq().
+ * drivers should rather use disable_irq()/enable_irq()
+ * and/or synchronize_one_irq()
*/
void synchronize_irq(void)
{
- int cpu = smp_processor_id();
- int local_count = local_irq_count[cpu];
+ int local_count = local_irq_count[smp_processor_id()];
- /* Do we need to wait? */
if (local_count != atomic_read(&global_irq_count)) {
- /* The stupid way to do this */
+ int i;
+
+ /* The very stupid way to do this */
+ for (i=0; i<NR_IRQS; i++) {
+ disable_irq(i);
+ enable_irq(i);
+ }
cli();
sti();
}
}
}
+void synchronize_one_irq(unsigned int irq)
+{
+ int cpu = smp_processor_id(), owner;
+ int local_count = local_irq_count[cpu];
+ unsigned long flags;
+
+ __save_flags(flags);
+ __cli();
+ release_irqlock(cpu);
+ atomic_sub(local_count, &global_irq_count);
+
+repeat:
+ spin_lock(&irq_controller_lock);
+ owner = irq_owner[irq];
+ spin_unlock(&irq_controller_lock);
+
+ if ((owner != NO_PROC_ID) && (owner != cpu)) {
+ atomic_add(local_count, &global_irq_count);
+ __sti();
+ SYNC_OTHER_CORES(cpu);
+ __cli();
+ atomic_sub(local_count, &global_irq_count);
+ SYNC_OTHER_CORES(cpu);
+ goto repeat;
+ }
+
+ if (!disabled_irq[irq])
+ printk("\n...WHAT??.#1...\n");
+
+ atomic_add(local_count, &global_irq_count);
+ __restore_flags(flags);
+}
+
#endif
-/*
- * do_IRQ handles all normal device IRQ's (the special
- * SMP cross-CPU interrupts have their own specific
- * handlers).
- */
-asmlinkage void do_IRQ(struct pt_regs regs)
+static void handle_IRQ_event(int irq, struct pt_regs * regs)
{
- int irq = regs.orig_eax & 0xff;
struct irqaction * action;
- int status, cpu;
-
- /*
- * mask and ack quickly, we don't want the irq controller
- * thinking we're snobs just because some other CPU has
- * disabled global interrupts (we have already done the
- * INT_ACK cycles, it's too late to try to pretend to the
- * controller that we aren't taking the interrupt).
- */
- mask_and_ack_irq(irq);
+ int status, cpu = smp_processor_id();
- cpu = smp_processor_id();
- irq_enter(cpu, irq);
- kstat.interrupts[irq]++;
+again:
+#ifdef __SMP__
+ while (test_bit(0,&global_irq_lock)) mb();
+#endif
- /* Return with this interrupt masked if no action */
+ kstat.interrupts[cpu][irq]++;
status = 0;
action = *(irq + irq_action);
+
if (action) {
+#if 0
if (!(action->flags & SA_INTERRUPT))
__sti();
+#endif
do {
status |= action->flags;
- action->handler(irq, action->dev_id, &regs);
+ action->handler(irq, action->dev_id, regs);
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
add_interrupt_randomness(irq);
__cli();
- spin_lock(&irq_controller_lock);
- unmask_irq(irq);
+ }
+
+ spin_lock(&irq_controller_lock);
+
+#ifdef __SMP__
+ release_irqlock(cpu);
+#endif
+
+ if ((--irq_events[irq]) && (!disabled_irq[irq])) {
spin_unlock(&irq_controller_lock);
+ goto again;
}
+#ifdef __SMP__
+ /* FIXME: move this into hardirq.h */
+ irq_owner[irq] = NO_PROC_ID;
+#endif
+ hardirq_exit(cpu);
+
+ spin_unlock(&irq_controller_lock);
+}
+
+
+/*
+ * disable/enable_irq() wait for all irq contexts to finish
+ * executing. Also it's recursive.
+ */
+void disable_irq(unsigned int irq)
+{
+#ifdef __SMP__
+ int cpu = smp_processor_id();
+#endif
+ unsigned long f, flags;
+
+ save_flags(flags);
+ __save_flags(f);
+ __cli();
+ spin_lock(&irq_controller_lock);
+
+ disabled_irq[irq]++;
+
+#ifdef __SMP__
+ /*
+ * We have to wait for all irq handlers belonging to this IRQ
+ * vector to finish executing.
+ */
+ if ((irq_owner[irq] == NO_PROC_ID) || (irq_owner[irq] == cpu) ||
+ (disabled_irq[irq] > 1)) {
+
+ spin_unlock(&irq_controller_lock);
+ __restore_flags(f);
+ restore_flags(flags);
+ if (disabled_irq[irq] > 100)
+ printk("disable_irq(%d), infinite recursion!\n",irq);
+ return;
+ }
+#endif
+
+ spin_unlock(&irq_controller_lock);
+
+#ifdef __SMP__
+ synchronize_one_irq(irq);
+#endif
+
+ __restore_flags(f);
+ restore_flags(flags);
+}
+
+void enable_irq(unsigned int irq)
+{
+ unsigned long flags;
+ int cpu = smp_processor_id();
+
+ spin_lock_irqsave(&irq_controller_lock,flags);
+
+ if (!disabled_irq[irq]) {
+ spin_unlock_irqrestore(&irq_controller_lock,flags);
+ printk("more enable_irq(%d)'s than disable_irq(%d)'s!!",irq,irq);
+ return;
+ }
+
+ disabled_irq[irq]--;
+
+#ifndef __SMP__
+ if (disabled_irq[irq]) {
+ spin_unlock_irqrestore(&irq_controller_lock,flags);
+ return;
+ }
+#else
+ if (disabled_irq[irq] || (irq_owner[irq] != NO_PROC_ID)) {
+ spin_unlock_irqrestore(&irq_controller_lock,flags);
+ return;
+ }
+#endif
+
+ /*
+ * Nobody is executing this irq handler currently, lets check
+ * whether we have outstanding events to be handled.
+ */
+
+ if (irq_events[irq]) {
+ struct pt_regs regs;
+
+#ifdef __SMP__
+ irq_owner[irq] = cpu;
+#endif
+ hardirq_enter(cpu);
+#ifdef __SMP__
+ release_irqlock(cpu);
+#endif
+ spin_unlock(&irq_controller_lock);
+
+ handle_IRQ_event(irq,&regs);
+ __restore_flags(flags);
+ return;
+ }
+ spin_unlock_irqrestore(&irq_controller_lock,flags);
+}
+
+/*
+ * do_IRQ handles all normal device IRQ's (the special
+ * SMP cross-CPU interrupts have their own specific
+ * handlers).
+ *
+ * the biggest change on SMP is the fact that we no longer mask
+ * interrupts in hardware; please believe me, this is unavoidable,
+ * the hardware is largely message-oriented. I tried to force our
+ * state-driven irq handling scheme onto the IO-APIC, but to no avail.
+ *
+ * so we soft-disable interrupts via 'event counters'; the first 'incl'
+ * will do the IRQ handling. This also has the nice side effect of
+ * increased overlapping ... I saw no driver problem so far.
+ */
+asmlinkage void do_IRQ(struct pt_regs regs)
+{
+ /*
+ * We ack quickly, we don't want the irq controller
+ * thinking we're snobs just because some other CPU has
+ * disabled global interrupts (we have already done the
+ * INT_ACK cycles, it's too late to try to pretend to the
+ * controller that we aren't taking the interrupt).
+ *
+ * 0 return value means that this irq is already being
+ * handled by some other CPU. (or is disabled)
+ */
+ int irq = regs.orig_eax & 0xff;
+
+/*
+ printk("<%d>",irq);
+ */
+ if (!ack_irq(irq))
+ return;
+
+ handle_IRQ_event(irq,&regs);
+
+ unmask_irq(irq);
- irq_exit(cpu, irq);
/*
* This should be conditional: we should really get
* a return code from the irq handler to tell us
if (!shared) {
spin_lock(&irq_controller_lock);
+ if (IO_APIC_IRQ(irq)) {
+ /*
+ * First disable it in the 8259A:
+ */
+ cached_irq_mask |= 1 << irq;
+ if (irq < 16)
+ set_8259A_irq_mask(irq);
+ setup_IO_APIC_irq(irq);
+ }
unmask_irq(irq);
spin_unlock(&irq_controller_lock);
}
int retval;
struct irqaction * action;
- if (irq > 15)
+ if (irq >= NR_IRQS)
return -EINVAL;
if (!handler)
return -EINVAL;
- action = (struct irqaction *)kmalloc(sizeof(struct irqaction), GFP_KERNEL);
+ action = (struct irqaction *)
+ kmalloc(sizeof(struct irqaction), GFP_KERNEL);
if (!action)
return -ENOMEM;
struct irqaction * action, **p;
unsigned long flags;
- if (irq > 15) {
+ if (irq >= NR_IRQS) {
printk("Trying to free IRQ%d\n",irq);
return;
}
printk("Trying to free free IRQ%d\n",irq);
}
+/*
+ * probing is always single threaded [FIXME: is this true?]
+ */
+static unsigned int probe_irqs[NR_CPUS][NR_IRQS];
+
unsigned long probe_irq_on (void)
{
- unsigned int i, irqs = 0;
+ unsigned int i, j, irqs = 0;
unsigned long delay;
- /* first, enable any unassigned irqs */
- for (i = 15; i > 0; i--) {
+ /*
+ * save current irq counts
+ */
+ memcpy(probe_irqs,kstat.interrupts,NR_CPUS*NR_IRQS*sizeof(int));
+
+ /*
+ * first, enable any unassigned irqs
+ */
+ for (i = NR_IRQS-1; i > 0; i--) {
if (!irq_action[i]) {
- enable_irq(i);
+ spin_lock(&irq_controller_lock);
+ unmask_irq(i);
irqs |= (1 << i);
+ spin_unlock(&irq_controller_lock);
}
}
- /* wait for spurious interrupts to mask themselves out again */
+ /*
+ * wait for spurious interrupts to increase counters
+ */
for (delay = jiffies + HZ/10; delay > jiffies; )
- /* about 100ms delay */;
+ /* about 100ms delay */ synchronize_irq();
+
+ /*
+ * now filter out any obviously spurious interrupts
+ */
+ for (i=0; i<NR_IRQS; i++)
+ for (j=0; j<NR_CPUS; j++)
+ if (kstat.interrupts[j][i] != probe_irqs[j][i])
+ irqs &= ~(1<<i);
- /* now filter out any obviously spurious interrupts */
- return irqs & ~cached_irq_mask;
+ return irqs;
}
int probe_irq_off (unsigned long irqs)
{
- unsigned int i;
+ int i,j, irq_found = -1;
-#ifdef DEBUG
- printk("probe_irq_off: irqs=0x%04lx irqmask=0x%04x\n", irqs, cached_irq_mask);
-#endif
- irqs &= cached_irq_mask;
- if (!irqs)
- return 0;
- i = ffz(~irqs);
- if (irqs != (irqs & (1 << i)))
- i = -i;
- return i;
+ for (i=0; i<NR_IRQS; i++) {
+ int sum = 0;
+ for (j=0; j<NR_CPUS; j++) {
+ sum += kstat.interrupts[j][i];
+ sum -= probe_irqs[j][i];
+ }
+ if (sum && (irqs & (1<<i))) {
+ if (irq_found != -1) {
+ irq_found = -irq_found;
+ goto out;
+ } else
+ irq_found = i;
+ }
+ }
+ if (irq_found == -1)
+ irq_found = 0;
+out:
+ return irq_found;
+}
+
+void init_IO_APIC_traps(void)
+{
+ int i;
+ /*
+ * NOTE! The local APIC isn't very good at handling
+ * multiple interrupts at the same interrupt level.
+ * As the interrupt level is determined by taking the
+ * vector number and shifting that right by 4, we
+ * want to spread these out a bit so that they don't
+ * all fall in the same interrupt level
+ *
+ * also, we've got to be careful not to trash gate
+ * 0x80, because int 0x80 is hm, kindof importantish ;)
+ */
+ for (i = 0; i < NR_IRQS ; i++)
+ if (IO_APIC_GATE_OFFSET+(i<<3) <= 0xfe) /* HACK */ {
+ if (IO_APIC_IRQ(i)) {
+ /*
+ * First disable it in the 8259A:
+ */
+ cached_irq_mask |= 1 << i;
+ if (i < 16)
+ set_8259A_irq_mask(i);
+ setup_IO_APIC_irq(i);
+ }
+ }
}
__initfunc(void init_IRQ(void))
outb_p(LATCH & 0xff , 0x40); /* LSB */
outb(LATCH >> 8 , 0x40); /* MSB */
- for (i = 0; i < NR_IRQS ; i++)
+ printk("INIT IRQ\n");
+ for (i=0; i<NR_IRQS; i++) {
+ irq_events[i] = 0;
+#ifdef __SMP__
+ irq_owner[i] = NO_PROC_ID;
+#endif
+ disabled_irq[i] = 0;
+ }
+ /*
+ * 16 old-style INTA-cycle interrupt gates:
+ */
+ for (i = 0; i < 16; i++)
set_intr_gate(0x20+i,interrupt[i]);
#ifdef __SMP__
- /*
- * NOTE! The local APIC isn't very good at handling
- * multiple interrupts at the same interrupt level.
- * As the interrupt level is determined by taking the
- * vector number and shifting that right by 4, we
- * want to spread these out a bit so that they don't
- * all fall in the same interrupt level
- */
+
+ for (i = 0; i < NR_IRQS ; i++)
+ if (IO_APIC_GATE_OFFSET+(i<<3) <= 0xfe) /* hack -- mingo */
+ set_intr_gate(IO_APIC_GATE_OFFSET+(i<<3),interrupt[i]);
/*
 * The reschedule interrupt slowly changes its functionality,
* [ It has to be here .. it doesn't work if you put
* it down the bottom - assembler explodes 8) ]
*/
- /* IRQ '16' (trap 0x30) - IPI for rescheduling */
- set_intr_gate(0x20+i, reschedule_interrupt);
+ /* IPI for rescheduling */
+ set_intr_gate(0x30, reschedule_interrupt);
- /* IRQ '17' (trap 0x31) - IPI for invalidation */
- set_intr_gate(0x21+i, invalidate_interrupt);
+ /* IPI for invalidation */
+ set_intr_gate(0x31, invalidate_interrupt);
- /* IRQ '18' (trap 0x40) - IPI for CPU halt */
- set_intr_gate(0x30+i, stop_cpu_interrupt);
+ /* IPI for CPU halt */
+ set_intr_gate(0x40, stop_cpu_interrupt);
+
+ /* self generated IPI for local APIC timer */
+ set_intr_gate(0x41, apic_timer_interrupt);
- /* IRQ '19' (trap 0x41) - self generated IPI for local APIC timer */
- set_intr_gate(0x31+i, apic_timer_interrupt);
#endif
request_region(0x20,0x20,"pic1");
request_region(0xa0,0x20,"pic2");
setup_x86_irq(2, &irq2);
setup_x86_irq(13, &irq13);
}
+
* Interrupt entry/exit code at both C and assembly level
*/
+#define IO_APIC_GATE_OFFSET 0x51
+
+void mask_irq(unsigned int irq_nr);
+void unmask_irq(unsigned int irq_nr);
+void enable_IO_APIC_irq (int irq);
+void disable_IO_APIC_irq (int irq);
+void set_8259A_irq_mask(int irq_nr);
+void setup_IO_APIC_irq (int irq);
+void ack_APIC_irq (void);
+void setup_IO_APIC (void);
+void init_IO_APIC_traps(void);
+
+extern const unsigned int io_apic_irqs;
+#define IO_APIC_IRQ(x) ((1<<(x)) & io_apic_irqs)
+
+#define MAX_IRQ_SOURCES 128
+#define MAX_MP_BUSSES 32
+enum mp_bustype {
+ MP_BUS_ISA,
+ MP_BUS_PCI
+};
+extern int mp_bus_id_to_type [MAX_MP_BUSSES];
+
+extern spinlock_t irq_controller_lock; /*
+ * Protects both the 8259 and the
+ * IO-APIC
+ */
+
#ifdef __SMP__
static inline void irq_enter(int cpu, int irq)
"pushl $ret_from_intr\n\t" \
"jmp "SYMBOL_NAME_STR(do_IRQ));
-#define BUILD_IRQ(chip,nr,mask) \
+#define BUILD_IRQ(nr) \
asmlinkage void IRQ_NAME(nr); \
__asm__( \
"\n"__ALIGN_STR"\n" \
static const char *x86_cap_flags[] = {
"fpu", "vme", "de", "pse", "tsc", "msr", "pae", "mce",
"cx8", "apic", "10", "sep", "mtrr", "pge", "mca", "cmov",
- "16", "17", "18", "19", "20", "21", "22", "mmx",
- "24", "25", "26", "27", "28", "29", "30", "31"
+ "fcmov", "17", "18", "19", "20", "21", "22", "mmx",
+ "cxmmx", "25", "26", "27", "28", "29", "30", "amd3d"
};
struct cpuinfo_x86 *c = cpu_data;
int i, n;
volatile unsigned long syscall_count=0; /* Number of times the processor holds the syscall lock */
volatile unsigned long ipi_count; /* Number of IPI's delivered */
-#ifdef __SMP_PROF__
-volatile unsigned long smp_spins[NR_CPUS]={0}; /* Count interrupt spins */
-volatile unsigned long smp_spins_syscall[NR_CPUS]={0}; /* Count syscall spins */
-volatile unsigned long smp_spins_syscall_cur[NR_CPUS]={0};/* Count spins for the actual syscall */
-volatile unsigned long smp_spins_sys_idle[NR_CPUS]={0}; /* Count spins for sys_idle */
-volatile unsigned long smp_idle_count[1+NR_CPUS]={0,}; /* Count idle ticks */
-
-/* Count local APIC timer ticks */
-volatile unsigned long smp_local_timer_ticks[1+NR_CPUS]={0,};
-
-#endif
-#if defined (__SMP_PROF__)
-volatile unsigned long smp_idle_map=0; /* Map for idle processors */
-#endif
volatile unsigned long smp_proc_in_lock[NR_CPUS] = {0,};/* for computing process time */
volatile int smp_process_available=0;
const char lk_lockmsg[] = "lock from interrupt context at %p\n";
+int mp_bus_id_to_type [MAX_MP_BUSSES] = { -1, };
+extern int mp_irq_entries;
+extern struct mpc_config_intsrc mp_irqs [MAX_IRQ_SOURCES];
-/*#define SMP_DEBUG*/
+/* #define SMP_DEBUG */
#ifdef SMP_DEBUG
#define SMP_PRINTK(x) printk x
max_cpus = 0;
}
-static inline void ack_APIC_irq (void)
+void ack_APIC_irq (void)
{
/* Clear the IPI */
SMP_PRINTK(("Bus #%d is %s\n",
m->mpc_busid,
str));
+ if ((strncmp(m->mpc_bustype,"ISA",3) == 0) ||
+ (strncmp(m->mpc_bustype,"EISA",4) == 0))
+ mp_bus_id_to_type[m->mpc_busid] =
+ MP_BUS_ISA;
+ else
+ if (strncmp(m->mpc_bustype,"PCI",3) == 0)
+ mp_bus_id_to_type[m->mpc_busid] =
+ MP_BUS_PCI;
mpt+=sizeof(*m);
count+=sizeof(*m);
break;
struct mpc_config_intsrc *m=
(struct mpc_config_intsrc *)mpt;
+ mp_irqs [mp_irq_entries] = *m;
+ if (++mp_irq_entries == MAX_IRQ_SOURCES) {
+ printk("Max irq sources exceeded!!\n");
+ printk("Skipping remaining sources.\n");
+ --mp_irq_entries;
+ }
+
+printk(" Itype:%d Iflag:%d srcbus:%d srcbusI:%d dstapic:%d dstI:%d.\n",
+ m->mpc_irqtype,
+ m->mpc_irqflag,
+ m->mpc_srcbus,
+ m->mpc_srcbusirq,
+ m->mpc_dstapic,
+ m->mpc_dstirq);
+
mpt+=sizeof(*m);
count+=sizeof(*m);
break;
/*
* Set up our APIC timer.
*/
- setup_APIC_clock ();
+ setup_APIC_clock();
sti();
/*
else
system=1;
- irq_enter(cpu, 0);
+ irq_enter(cpu, 0);
if (p->pid) {
update_one_process(p, 1, user, system, cpu);
kstat.cpu_system += system;
kstat.per_cpu_system[cpu] += system;
- } else {
-#ifdef __SMP_PROF__
- if (test_bit(cpu,&smp_idle_map))
- smp_idle_count[cpu]++;
-#endif
}
prof_counter[cpu]=prof_multiplier[cpu];
-
irq_exit(cpu, 0);
}
-#ifdef __SMP_PROF__
- smp_local_timer_ticks[cpu]++;
-#endif
/*
* We take the 'long' return path, and there every subsystem
 * grabs the appropriate locks (kernel lock/ irq lock).
* This looks silly, but we actually do need to wait
* for the global interrupt lock.
*/
+ printk("huh, this is used, where???\n");
irq_enter(cpu, 0);
need_resched = 1;
irq_exit(cpu, 0);
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
+#include <asm/hardirq.h>
extern void die_if_kernel(const char *,struct pt_regs *,long);
/* get the address */
__asm__("movl %%cr2,%0":"=r" (address));
+ if (local_irq_count[smp_processor_id()])
+ die_if_kernel("page fault from irq handler",regs,error_code);
lock_kernel();
tsk = current;
mm = tsk->mm;
* If we're interrupted we keep this page and our place in it
* since we validly hold it and it's reserved for us.
*/
- pageptr = __get_free_pages(GFP_ATOMIC, 0, 0 );
+ pageptr = __get_free_pages(GFP_ATOMIC, 0);
if ( !pageptr )
goto retry;
static unsigned long apmmu_alloc_kernel_stack(struct task_struct *tsk)
{
- unsigned long kstk = __get_free_pages(GFP_KERNEL, 1, 0);
+ unsigned long kstk = __get_free_pages(GFP_KERNEL, 1);
if(!kstk)
kstk = (unsigned long) vmalloc(PAGE_SIZE << 1);
MSC_OUT(MSC_SQRAM + i * 8, -1);
if (!qof_base) {
- qof_base = (struct qof_elt *) __get_free_pages(GFP_ATOMIC, QOF_ORDER, 0);
+ qof_base = (struct qof_elt *) __get_free_pages(GFP_ATOMIC, QOF_ORDER);
for (i = MAP_NR(qof_base); i <= MAP_NR(((char*)qof_base)+QOF_SIZE-1);++i)
set_bit(PG_reserved, &mem_map[i].flags);
}
if (!system_ringbuf.ringbuf) {
system_ringbuf.ringbuf =
- (void *)__get_free_pages(GFP_ATOMIC,SYSTEM_RINGBUF_ORDER,0);
+ (void *)__get_free_pages(GFP_ATOMIC,SYSTEM_RINGBUF_ORDER);
for (i=MAP_NR(system_ringbuf.ringbuf);
i<=MAP_NR(system_ringbuf.ringbuf+SYSTEM_RINGBUF_SIZE-1);i++)
set_bit(PG_reserved, &mem_map[i].flags);
if (!dummy_ringbuf.ringbuf) {
dummy_ringbuf.ringbuf =
- (void *)__get_free_pages(GFP_ATOMIC,DUMMY_RINGBUF_ORDER,0);
+ (void *)__get_free_pages(GFP_ATOMIC,DUMMY_RINGBUF_ORDER);
for (i=MAP_NR(dummy_ringbuf.ringbuf);
i<=MAP_NR(dummy_ringbuf.ringbuf+DUMMY_RINGBUF_SIZE-1);i++)
set_bit(PG_reserved, &mem_map[i].flags);
*/
struct task_struct *srmmu_alloc_task_struct(void)
{
- return (struct task_struct *) __get_free_pages(GFP_KERNEL, 1, 0);
+ return (struct task_struct *) __get_free_pages(GFP_KERNEL, 1);
}
static void srmmu_free_task_struct(struct task_struct *tsk)
unsigned long addr, pages;
int entry;
- pages = __get_free_pages(GFP_KERNEL, 1, 0);
+ pages = __get_free_pages(GFP_KERNEL, 1);
if(!pages)
return (struct task_struct *) 0;
((dvma_pages_current_offset + len) > (1 << 16))) {
struct linux_sbus *sbus;
unsigned long *iopte;
- unsigned long newpages = __get_free_pages(GFP_KERNEL, 3, 0);
+ unsigned long newpages = __get_free_pages(GFP_KERNEL, 3);
int i;
if(!newpages)
/* preallocate some ringbuffers */
for (i=0;i<RBUF_RESERVED;i++) {
- if (!(rb_ptr = (char *)__get_free_pages(GFP_ATOMIC,RBUF_RESERVED_ORDER,0))) {
+ if (!(rb_ptr = (char *)__get_free_pages(GFP_ATOMIC,RBUF_RESERVED_ORDER))) {
printk("failed to preallocate ringbuf %d\n",i);
return;
}
}
if (!rb_ptr) {
- rb_ptr = (char *)__get_free_pages(GFP_USER,order,0);
+ rb_ptr = (char *)__get_free_pages(GFP_USER,order);
if (!rb_ptr) return -ENOMEM;
for (i = MAP_NR(rb_ptr); i <= MAP_NR(rb_ptr+rb_size-1); i++) {
return -EBUSY;
}
- if (!(acsi_buffer = (char *)__get_free_pages(GFP_KERNEL,
- ACSI_BUFFER_ORDER, 1))) {
+ if (!(acsi_buffer = (char *)__get_free_pages(GFP_KERNEL | GFP_DMA,
+ ACSI_BUFFER_ORDER))) {
printk( KERN_ERR "Unable to get ACSI ST-Ram buffer.\n" );
unregister_blkdev( MAJOR_NR, "ad" );
return -ENOMEM;
memset (raid_conf, 0, sizeof (*raid_conf));
raid_conf->mddev = mddev;
- if ((raid_conf->stripe_hashtbl = (struct stripe_head **) __get_free_pages(GFP_ATOMIC, HASH_PAGES_ORDER, 0)) == NULL)
+ if ((raid_conf->stripe_hashtbl = (struct stripe_head **) __get_free_pages(GFP_ATOMIC, HASH_PAGES_ORDER)) == NULL)
goto abort;
memset(raid_conf->stripe_hashtbl, 0, HASH_PAGES * PAGE_SIZE);
bool 'Tadpole ANA H8 Support' CONFIG_H8
fi
tristate 'Video For Linux' CONFIG_VIDEO_DEV
-dep_tristate 'BT848 Video For Linux' CONFIG_VIDEO_BT848 $CONFIG_VIDEO_DEV
-if [ "$CONFIG_PARPORT" != "n" ]; then
- dep_tristate 'Quickcam BW Video For Linux' CONFIG_VIDEO_BWQCAM $CONFIG_VIDEO_DEV
- dep_tristate 'Colour QuickCam Video For Linux (EXPERIMENTAL)' CONFIG_VIDEO_CQCAM $CONFIG_VIDEO_DEV
+if [ "$CONFIG_VIDEO_DEV" != n ]; then
+ dep_tristate 'BT848 Video For Linux' CONFIG_VIDEO_BT848 $CONFIG_VIDEO_DEV
+ if [ "$CONFIG_PARPORT" != "n" ]; then
+ dep_tristate 'Quickcam BW Video For Linux' CONFIG_VIDEO_BWQCAM $CONFIG_VIDEO_DEV
+ dep_tristate 'Colour QuickCam Video For Linux (EXPERIMENTAL)' CONFIG_VIDEO_CQCAM $CONFIG_VIDEO_DEV
+ fi
+ dep_tristate 'Mediavision Pro Movie Studio Video For Linux' CONFIG_VIDEO_PMS $CONFIG_VIDEO_DEV
fi
-dep_tristate 'Mediavision Pro Movie Studio Video For Linux' CONFIG_VIDEO_PMS $CONFIG_VIDEO_DEV
tristate '/dev/nvram support' CONFIG_NVRAM
tristate 'PC joystick support' CONFIG_JOYSTICK
bool 'Radio Device Support' CONFIG_MISC_RADIO
* but how do I then check device minor number?
* Do I need this function at all???
*/
-#ifdef 0
+#if 0
static int dsp56k_select(struct inode *inode, struct file *file, int sel_type,
select_table *wait)
{
printk(KERN_DEBUG "BufPoolAdd bp %x\n", bp);
#endif
- ptr = (struct Pages *) __get_free_pages(priority, bp->pageorder, 0);
+ ptr = (struct Pages *) __get_free_pages(priority, bp->pageorder);
if (!ptr) {
printk(KERN_WARNING "BufPoolAdd couldn't get pages!\n");
return (-1);
}
#endif
-#ifdef 0
+#if 0
if(!at_least_one)
{
int i;
flag |= 0xe2000000;
tp->tx_ring[entry].length = skb->len | flag;
+ tp->stats.tx_bytes += skb->len;
tp->tx_ring[entry].status = 0x80000000; /* Pass ownership to the chip. */
tp->cur_tx++;
/* Trigger an immediate transmit demand. */
/*
Issue the Test Command Complete Interrupt commands.
*/
- InitialInterruptCount = kstat.interrupts[HostAdapter->IRQ_Channel];
+
+ InitialInterruptCount = 0;
+ for (i=0; i<NR_CPUS; i++)
+ InitialInterruptCount += kstat.interrupts[i][HostAdapter->IRQ_Channel];
for (i = 0; i < TestCount; i++)
BusLogic_Command(HostAdapter, BusLogic_TestCommandCompleteInterrupt,
NULL, 0, NULL, 0);
- FinalInterruptCount = kstat.interrupts[HostAdapter->IRQ_Channel];
+ FinalInterruptCount = 0;
+ for (i=0; i<NR_CPUS; i++)
+ FinalInterruptCount += kstat.interrupts[i][HostAdapter->IRQ_Channel];
/*
Verify that BusLogic_InterruptHandler was called at least TestCount
times. Shared IRQ Channels could cause more than TestCount interrupts to
* This needs to be attached to task[0] instead.
*/
- siginitsetinv(&current->blocked, SHUTDOWN_SIGS);
+ sigfillset(&current->blocked);
current->fs->umask = 0;
/*
int probe_adlib(struct address_info *hw_config)
{
-
if (check_region(hw_config->io_base, 4)) {
DDB(printk("opl3.c: I/O port %x already in use\n", hw_config->io_base));
return 0;
/* else
printk(KERN_DEBUG"/dev/dsp%d: No coprocessor for this device\n", dev); */
return -ENXIO;
- } else switch (cmd) {
- case SNDCTL_DSP_SYNC:
- if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
- return 0;
- if (audio_devs[dev]->dmap_out->fragment_size == 0)
+ }
+ else switch (cmd)
+ {
+ case SNDCTL_DSP_SYNC:
+ if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
+ return 0;
+ if (audio_devs[dev]->dmap_out->fragment_size == 0)
+ return 0;
+ sync_output(dev);
+ DMAbuf_sync(dev);
+ DMAbuf_reset(dev);
return 0;
- sync_output(dev);
- DMAbuf_sync(dev);
- DMAbuf_reset(dev);
- return 0;
- case SNDCTL_DSP_POST:
- if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
+ case SNDCTL_DSP_POST:
+ if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
+ return 0;
+ if (audio_devs[dev]->dmap_out->fragment_size == 0)
+ return 0;
+ audio_devs[dev]->dmap_out->flags |= DMA_POST | DMA_DIRTY;
+ sync_output(dev);
+ dma_ioctl(dev, SNDCTL_DSP_POST, (caddr_t) 0);
return 0;
- if (audio_devs[dev]->dmap_out->fragment_size == 0)
- return 0;
- audio_devs[dev]->dmap_out->flags |= DMA_POST | DMA_DIRTY;
- sync_output(dev);
- dma_ioctl(dev, SNDCTL_DSP_POST, (caddr_t) 0);
- return 0;
-
- case SNDCTL_DSP_RESET:
- audio_mode[dev] = AM_NONE;
- DMAbuf_reset(dev);
- return 0;
-
- case SNDCTL_DSP_GETFMTS:
- val = audio_devs[dev]->format_mask;
- return __put_user(val, (int *)arg);
-
- case SNDCTL_DSP_SETFMT:
- if (__get_user(val, (int *)arg))
- return -EFAULT;
- val = set_format(dev, val);
- return __put_user(val, (int *)arg);
-
- case SNDCTL_DSP_GETISPACE:
- if (!(audio_devs[dev]->open_mode & OPEN_READ))
+
+ case SNDCTL_DSP_RESET:
+ audio_mode[dev] = AM_NONE;
+ DMAbuf_reset(dev);
return 0;
- if ((audio_mode[dev] & AM_WRITE) && !(audio_devs[dev]->flags & DMA_DUPLEX))
- return -EBUSY;
- return dma_ioctl(dev, cmd, arg);
-
- case SNDCTL_DSP_GETOSPACE:
- if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
- return -EPERM;
- if ((audio_mode[dev] & AM_READ) && !(audio_devs[dev]->flags & DMA_DUPLEX))
- return -EBUSY;
- return dma_ioctl(dev, cmd, arg);
+
+ case SNDCTL_DSP_GETFMTS:
+ val = audio_devs[dev]->format_mask;
+ break;
+
+ case SNDCTL_DSP_SETFMT:
+ if (get_user(val, (int *)arg))
+ return -EFAULT;
+ val = set_format(dev, val);
+ break;
+
+ case SNDCTL_DSP_GETISPACE:
+ if (!(audio_devs[dev]->open_mode & OPEN_READ))
+ return 0;
+ if ((audio_mode[dev] & AM_WRITE) && !(audio_devs[dev]->flags & DMA_DUPLEX))
+ return -EBUSY;
+ return dma_ioctl(dev, cmd, arg);
+
+ case SNDCTL_DSP_GETOSPACE:
+ if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
+ return -EPERM;
+ if ((audio_mode[dev] & AM_READ) && !(audio_devs[dev]->flags & DMA_DUPLEX))
+ return -EBUSY;
+ return dma_ioctl(dev, cmd, arg);
- case SNDCTL_DSP_NONBLOCK:
- dev_nblock[dev] = 1;
- return 0;
-
- case SNDCTL_DSP_GETCAPS:
- info = 1 | DSP_CAP_MMAP; /* Revision level of this ioctl() */
- if (audio_devs[dev]->flags & DMA_DUPLEX &&
- audio_devs[dev]->open_mode == OPEN_READWRITE)
- info |= DSP_CAP_DUPLEX;
- if (audio_devs[dev]->coproc)
- info |= DSP_CAP_COPROC;
- if (audio_devs[dev]->d->local_qlen) /* Device has hidden buffers */
- info |= DSP_CAP_BATCH;
- if (audio_devs[dev]->d->trigger) /* Supports SETTRIGGER */
- info |= DSP_CAP_TRIGGER;
- return __put_user(info, (int *)arg);
+ case SNDCTL_DSP_NONBLOCK:
+ dev_nblock[dev] = 1;
+ return 0;
+
+ case SNDCTL_DSP_GETCAPS:
+ info = 1 | DSP_CAP_MMAP; /* Revision level of this ioctl() */
+ if (audio_devs[dev]->flags & DMA_DUPLEX &&
+ audio_devs[dev]->open_mode == OPEN_READWRITE)
+ info |= DSP_CAP_DUPLEX;
+ if (audio_devs[dev]->coproc)
+ info |= DSP_CAP_COPROC;
+ if (audio_devs[dev]->d->local_qlen) /* Device has hidden buffers */
+ info |= DSP_CAP_BATCH;
+ if (audio_devs[dev]->d->trigger) /* Supports SETTRIGGER */
+ info |= DSP_CAP_TRIGGER;
+ break;
- case SOUND_PCM_WRITE_RATE:
- if (__get_user(val, (int *)arg))
- return -EFAULT;
- val = audio_devs[dev]->d->set_speed(dev, val);
- return __put_user(val, (int *)arg);
-
- case SOUND_PCM_READ_RATE:
- val = audio_devs[dev]->d->set_speed(dev, 0);
- return __put_user(val, (int *)arg);
-
- case SNDCTL_DSP_STEREO:
- if (__get_user(val, (int *)arg))
- return -EFAULT;
- if (val > 1 || val < 0)
- return -EINVAL;
- val = audio_devs[dev]->d->set_channels(dev, val + 1) - 1;
- return __put_user(val, (int *)arg);
-
- case SOUND_PCM_WRITE_CHANNELS:
- if (__get_user(val, (int *)arg))
- return -EFAULT;
- val = audio_devs[dev]->d->set_channels(dev, val);
- return __put_user(val, (int *)arg);
-
- case SOUND_PCM_READ_CHANNELS:
- val = audio_devs[dev]->d->set_channels(dev, 0);
- return __put_user(val, (int *)arg);
+ case SOUND_PCM_WRITE_RATE:
+ if (get_user(val, (int *)arg))
+ return -EFAULT;
+ val = audio_devs[dev]->d->set_speed(dev, val);
+ break;
+
+ case SOUND_PCM_READ_RATE:
+ val = audio_devs[dev]->d->set_speed(dev, 0);
+ break;
+
+ case SNDCTL_DSP_STEREO:
+ if (get_user(val, (int *)arg))
+ return -EFAULT;
+ if (val > 1 || val < 0)
+ return -EINVAL;
+ val = audio_devs[dev]->d->set_channels(dev, val + 1) - 1;
+ break;
+
+ case SOUND_PCM_WRITE_CHANNELS:
+ if (get_user(val, (int *)arg))
+ return -EFAULT;
+ val = audio_devs[dev]->d->set_channels(dev, val);
+ break;
+
+ case SOUND_PCM_READ_CHANNELS:
+ val = audio_devs[dev]->d->set_channels(dev, 0);
+ break;
- case SOUND_PCM_READ_BITS:
- val = audio_devs[dev]->d->set_bits(dev, 0);
- return __put_user(val, (int *)arg);
-
- case SNDCTL_DSP_SETDUPLEX:
- if (audio_devs[dev]->open_mode != OPEN_READWRITE)
- return -EPERM;
- return (audio_devs[dev]->flags & DMA_DUPLEX) ? 0 : -EIO;
-
- case SNDCTL_DSP_PROFILE:
- if (__get_user(val, (int *)arg))
- return -EFAULT;
- if (audio_devs[dev]->open_mode & OPEN_WRITE)
- audio_devs[dev]->dmap_out->applic_profile = val;
- if (audio_devs[dev]->open_mode & OPEN_READ)
- audio_devs[dev]->dmap_in->applic_profile = val;
- return 0;
+ case SOUND_PCM_READ_BITS:
+ val = audio_devs[dev]->d->set_bits(dev, 0);
+ break;
+
+ case SNDCTL_DSP_SETDUPLEX:
+ if (audio_devs[dev]->open_mode != OPEN_READWRITE)
+ return -EPERM;
+ return (audio_devs[dev]->flags & DMA_DUPLEX) ? 0 : -EIO;
+
+ case SNDCTL_DSP_PROFILE:
+ if (get_user(val, (int *)arg))
+ return -EFAULT;
+ if (audio_devs[dev]->open_mode & OPEN_WRITE)
+ audio_devs[dev]->dmap_out->applic_profile = val;
+ if (audio_devs[dev]->open_mode & OPEN_READ)
+ audio_devs[dev]->dmap_in->applic_profile = val;
+ return 0;
- case SNDCTL_DSP_GETODELAY:
- dmap = audio_devs[dev]->dmap_out;
- if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
- return -EINVAL;
- if (!(dmap->flags & DMA_ALLOC_DONE))
- return __put_user(0, (int *)arg);
+ case SNDCTL_DSP_GETODELAY:
+ dmap = audio_devs[dev]->dmap_out;
+ if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
+ return -EINVAL;
+ if (!(dmap->flags & DMA_ALLOC_DONE))
+ {
+ val=0;
+ break;
+ }
- save_flags (flags);
- cli();
- /* Compute number of bytes that have been played */
- count = DMAbuf_get_buffer_pointer (dev, dmap, DMODE_OUTPUT);
- if (count < dmap->fragment_size && dmap->qhead != 0)
- count += dmap->bytes_in_use; /* Pointer wrap not handled yet */
- count += dmap->byte_counter;
+ save_flags (flags);
+ cli();
+ /* Compute number of bytes that have been played */
+ count = DMAbuf_get_buffer_pointer (dev, dmap, DMODE_OUTPUT);
+ if (count < dmap->fragment_size && dmap->qhead != 0)
+ count += dmap->bytes_in_use; /* Pointer wrap not handled yet */
+ count += dmap->byte_counter;
- /* Substract current count from the number of bytes written by app */
- count = dmap->user_counter - count;
- if (count < 0)
- count = 0;
- restore_flags (flags);
- return __put_user(count, (int *)arg);
+ /* Subtract the current count from the number of bytes written by the app */
+ count = dmap->user_counter - count;
+ if (count < 0)
+ count = 0;
+ restore_flags (flags);
+ val = count;
+ break;
- default:
- return dma_ioctl(dev, cmd, arg);
+ default:
+ return dma_ioctl(dev, cmd, arg);
}
+ return put_user(val, (int *)arg);
}
void audio_init_devices(void)
static int dma_subdivide(int dev, struct dma_buffparms *dmap, int fact)
{
- if (fact == 0) {
+ if (fact == 0)
+ {
fact = dmap->subdivision;
if (fact == 0)
fact = 1;
int fact, ret, changed, bits, count, err;
unsigned long flags;
- switch (cmd) {
- case SNDCTL_DSP_SUBDIVIDE:
- ret = 0;
- if (__get_user(fact, (int *)arg))
- return -EFAULT;
- if (audio_devs[dev]->open_mode & OPEN_WRITE)
- ret = dma_subdivide(dev, dmap_out, fact);
- if (ret < 0)
- return ret;
- if (audio_devs[dev]->open_mode != OPEN_WRITE ||
- (audio_devs[dev]->flags & DMA_DUPLEX &&
- audio_devs[dev]->open_mode & OPEN_READ))
- ret = dma_subdivide(dev, dmap_in, fact);
- if (ret < 0)
- return ret;
- return __put_user(ret, (int *)arg);
-
- case SNDCTL_DSP_GETISPACE:
- case SNDCTL_DSP_GETOSPACE:
- dmap = dmap_out;
- if (cmd == SNDCTL_DSP_GETISPACE && !(audio_devs[dev]->open_mode & OPEN_READ))
- return -EINVAL;
- if (cmd == SNDCTL_DSP_GETOSPACE && !(audio_devs[dev]->open_mode & OPEN_WRITE))
- return -EINVAL;
- if (cmd == SNDCTL_DSP_GETISPACE && audio_devs[dev]->flags & DMA_DUPLEX)
- dmap = dmap_in;
- if (dmap->mapping_flags & DMA_MAP_MAPPED)
- return -EINVAL;
- if (!(dmap->flags & DMA_ALLOC_DONE))
- reorganize_buffers(dev, dmap, (cmd == SNDCTL_DSP_GETISPACE));
- info.fragstotal = dmap->nbufs;
- if (cmd == SNDCTL_DSP_GETISPACE)
- info.fragments = dmap->qlen;
- else {
- if (!DMAbuf_space_in_queue(dev))
- info.fragments = 0;
- else {
- info.fragments = DMAbuf_space_in_queue(dev);
- if (audio_devs[dev]->d->local_qlen) {
- int tmp = audio_devs[dev]->d->local_qlen(dev);
- if (tmp && info.fragments)
- tmp--; /*
- * This buffer has been counted twice
- */
- info.fragments -= tmp;
+ switch (cmd)
+ {
+ case SNDCTL_DSP_SUBDIVIDE:
+ ret = 0;
+ if (get_user(fact, (int *)arg))
+ return -EFAULT;
+ if (audio_devs[dev]->open_mode & OPEN_WRITE)
+ ret = dma_subdivide(dev, dmap_out, fact);
+ if (ret < 0)
+ return ret;
+ if (audio_devs[dev]->open_mode != OPEN_WRITE ||
+ (audio_devs[dev]->flags & DMA_DUPLEX &&
+ audio_devs[dev]->open_mode & OPEN_READ))
+ ret = dma_subdivide(dev, dmap_in, fact);
+ if (ret < 0)
+ return ret;
+ break;
+
+ case SNDCTL_DSP_GETISPACE:
+ case SNDCTL_DSP_GETOSPACE:
+ dmap = dmap_out;
+ if (cmd == SNDCTL_DSP_GETISPACE && !(audio_devs[dev]->open_mode & OPEN_READ))
+ return -EINVAL;
+ if (cmd == SNDCTL_DSP_GETOSPACE && !(audio_devs[dev]->open_mode & OPEN_WRITE))
+ return -EINVAL;
+ if (cmd == SNDCTL_DSP_GETISPACE && audio_devs[dev]->flags & DMA_DUPLEX)
+ dmap = dmap_in;
+ if (dmap->mapping_flags & DMA_MAP_MAPPED)
+ return -EINVAL;
+ if (!(dmap->flags & DMA_ALLOC_DONE))
+ reorganize_buffers(dev, dmap, (cmd == SNDCTL_DSP_GETISPACE));
+ info.fragstotal = dmap->nbufs;
+ if (cmd == SNDCTL_DSP_GETISPACE)
+ info.fragments = dmap->qlen;
+ else
+ {
+ if (!DMAbuf_space_in_queue(dev))
+ info.fragments = 0;
+ else
+ {
+ info.fragments = DMAbuf_space_in_queue(dev);
+ if (audio_devs[dev]->d->local_qlen)
+ {
+ int tmp = audio_devs[dev]->d->local_qlen(dev);
+ if (tmp && info.fragments)
+ tmp--; /*
+ * This buffer has been counted twice
+ */
+ info.fragments -= tmp;
+ }
}
}
- }
- if (info.fragments < 0)
+ if (info.fragments < 0)
info.fragments = 0;
- else if (info.fragments > dmap->nbufs)
- info.fragments = dmap->nbufs;
+ else if (info.fragments > dmap->nbufs)
+ info.fragments = dmap->nbufs;
- info.fragsize = dmap->fragment_size;
- info.bytes = info.fragments * dmap->fragment_size;
+ info.fragsize = dmap->fragment_size;
+ info.bytes = info.fragments * dmap->fragment_size;
- if (cmd == SNDCTL_DSP_GETISPACE && dmap->qlen)
- info.bytes -= dmap->counts[dmap->qhead];
- else {
- info.fragments = info.bytes / dmap->fragment_size;
- info.bytes -= dmap->user_counter % dmap->fragment_size;
- }
- return __copy_to_user(arg, &info, sizeof(info));
-
- case SNDCTL_DSP_SETTRIGGER:
- if (__get_user(bits, (int *)arg))
- return -EFAULT;
- bits &= audio_devs[dev]->open_mode;
- if (audio_devs[dev]->d->trigger == NULL)
- return -EINVAL;
- if (!(audio_devs[dev]->flags & DMA_DUPLEX) && (bits & PCM_ENABLE_INPUT) &&
- (bits & PCM_ENABLE_OUTPUT))
- return -EINVAL;
- save_flags(flags);
- cli();
- changed = audio_devs[dev]->enable_bits ^ bits;
- if ((changed & bits) & PCM_ENABLE_INPUT && audio_devs[dev]->go) {
- reorganize_buffers(dev, dmap_in, 1);
- if ((err = audio_devs[dev]->d->prepare_for_input(dev,
+ if (cmd == SNDCTL_DSP_GETISPACE && dmap->qlen)
+ info.bytes -= dmap->counts[dmap->qhead];
+ else
+ {
+ info.fragments = info.bytes / dmap->fragment_size;
+ info.bytes -= dmap->user_counter % dmap->fragment_size;
+ }
+ return copy_to_user(arg, &info, sizeof(info));
+
+ case SNDCTL_DSP_SETTRIGGER:
+ if (get_user(bits, (int *)arg))
+ return -EFAULT;
+ bits &= audio_devs[dev]->open_mode;
+ if (audio_devs[dev]->d->trigger == NULL)
+ return -EINVAL;
+ if (!(audio_devs[dev]->flags & DMA_DUPLEX) && (bits & PCM_ENABLE_INPUT) &&
+ (bits & PCM_ENABLE_OUTPUT))
+ return -EINVAL;
+ save_flags(flags);
+ cli();
+ changed = audio_devs[dev]->enable_bits ^ bits;
+ if ((changed & bits) & PCM_ENABLE_INPUT && audio_devs[dev]->go)
+ {
+ reorganize_buffers(dev, dmap_in, 1);
+ if ((err = audio_devs[dev]->d->prepare_for_input(dev,
dmap_in->fragment_size, dmap_in->nbufs)) < 0)
- return -err;
- dmap_in->dma_mode = DMODE_INPUT;
+ return -err;
+ dmap_in->dma_mode = DMODE_INPUT;
+ audio_devs[dev]->enable_bits = bits;
+ DMAbuf_activate_recording(dev, dmap_in);
+ }
+ if ((changed & bits) & PCM_ENABLE_OUTPUT &&
+ (dmap_out->mapping_flags & DMA_MAP_MAPPED || dmap_out->qlen > 0) &&
+ audio_devs[dev]->go)
+ {
+ if (!(dmap_out->flags & DMA_ALLOC_DONE))
+ reorganize_buffers(dev, dmap_out, 0);
+ dmap_out->dma_mode = DMODE_OUTPUT;
+ audio_devs[dev]->enable_bits = bits;
+ dmap_out->counts[dmap_out->qhead] = dmap_out->fragment_size;
+ DMAbuf_launch_output(dev, dmap_out);
+ }
audio_devs[dev]->enable_bits = bits;
- DMAbuf_activate_recording(dev, dmap_in);
- }
- if ((changed & bits) & PCM_ENABLE_OUTPUT &&
- (dmap_out->mapping_flags & DMA_MAP_MAPPED || dmap_out->qlen > 0) &&
- audio_devs[dev]->go) {
+ if (changed && audio_devs[dev]->d->trigger)
+ audio_devs[dev]->d->trigger(dev, bits * audio_devs[dev]->go);
+ restore_flags(flags);
+ /* Falls through... */
+
+ case SNDCTL_DSP_GETTRIGGER:
+ ret = audio_devs[dev]->enable_bits;
+ break;
+
+ case SNDCTL_DSP_SETSYNCRO:
+ if (!audio_devs[dev]->d->trigger)
+ return -EINVAL;
+ audio_devs[dev]->d->trigger(dev, 0);
+ audio_devs[dev]->go = 0;
+ return 0;
+
+ case SNDCTL_DSP_GETIPTR:
+ if (!(audio_devs[dev]->open_mode & OPEN_READ))
+ return -EINVAL;
+ save_flags(flags);
+ cli();
+ cinfo.bytes = dmap_in->byte_counter;
+ cinfo.ptr = DMAbuf_get_buffer_pointer(dev, dmap_in, DMODE_INPUT) & ~3;
+ if (cinfo.ptr < dmap_in->fragment_size && dmap_in->qtail != 0)
+ cinfo.bytes += dmap_in->bytes_in_use; /* Pointer wrap not handled yet */
+ cinfo.blocks = dmap_in->qlen;
+ cinfo.bytes += cinfo.ptr;
+ if (dmap_in->mapping_flags & DMA_MAP_MAPPED)
+ dmap_in->qlen = 0; /* Reset interrupt counter */
+ restore_flags(flags);
+ return copy_to_user(arg, &cinfo, sizeof(cinfo));
+
+ case SNDCTL_DSP_GETOPTR:
+ if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
+ return -EINVAL;
+
+ save_flags(flags);
+ cli();
+ cinfo.bytes = dmap_out->byte_counter;
+ cinfo.ptr = DMAbuf_get_buffer_pointer(dev, dmap_out, DMODE_OUTPUT) & ~3;
+ if (cinfo.ptr < dmap_out->fragment_size && dmap_out->qhead != 0)
+ cinfo.bytes += dmap_out->bytes_in_use; /* Pointer wrap not handled yet */
+ cinfo.blocks = dmap_out->qlen;
+ cinfo.bytes += cinfo.ptr;
+ if (dmap_out->mapping_flags & DMA_MAP_MAPPED)
+ dmap_out->qlen = 0; /* Reset interrupt counter */
+ restore_flags(flags);
+ return copy_to_user(arg, &cinfo, sizeof(cinfo));
+
+ case SNDCTL_DSP_GETODELAY:
+ if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
+ return -EINVAL;
if (!(dmap_out->flags & DMA_ALLOC_DONE))
- reorganize_buffers(dev, dmap_out, 0);
- dmap_out->dma_mode = DMODE_OUTPUT;
- audio_devs[dev]->enable_bits = bits;
- dmap_out->counts[dmap_out->qhead] = dmap_out->fragment_size;
- DMAbuf_launch_output(dev, dmap_out);
- }
- audio_devs[dev]->enable_bits = bits;
- if (changed && audio_devs[dev]->d->trigger)
- audio_devs[dev]->d->trigger(dev, bits * audio_devs[dev]->go);
- restore_flags(flags);
- /* Falls through... */
-
- case SNDCTL_DSP_GETTRIGGER:
- ret = audio_devs[dev]->enable_bits;
- return __put_user(ret, (int *)arg);
-
- case SNDCTL_DSP_SETSYNCRO:
- if (!audio_devs[dev]->d->trigger)
- return -EINVAL;
- audio_devs[dev]->d->trigger(dev, 0);
- audio_devs[dev]->go = 0;
- return 0;
-
- case SNDCTL_DSP_GETIPTR:
- if (!(audio_devs[dev]->open_mode & OPEN_READ))
- return -EINVAL;
- save_flags(flags);
- cli();
- cinfo.bytes = dmap_in->byte_counter;
- cinfo.ptr = DMAbuf_get_buffer_pointer(dev, dmap_in, DMODE_INPUT) & ~3;
- if (cinfo.ptr < dmap_in->fragment_size && dmap_in->qtail != 0)
- cinfo.bytes += dmap_in->bytes_in_use; /* Pointer wrap not handled yet */
- cinfo.blocks = dmap_in->qlen;
- cinfo.bytes += cinfo.ptr;
- if (dmap_in->mapping_flags & DMA_MAP_MAPPED)
- dmap_in->qlen = 0; /* Reset interrupt counter */
- restore_flags(flags);
- return __copy_to_user(arg, &cinfo, sizeof(cinfo));
-
- case SNDCTL_DSP_GETOPTR:
- if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
- return -EINVAL;
-
- save_flags(flags);
- cli();
- cinfo.bytes = dmap_out->byte_counter;
- cinfo.ptr = DMAbuf_get_buffer_pointer(dev, dmap_out, DMODE_OUTPUT) & ~3;
- if (cinfo.ptr < dmap_out->fragment_size && dmap_out->qhead != 0)
- cinfo.bytes += dmap_out->bytes_in_use; /* Pointer wrap not handled yet */
- cinfo.blocks = dmap_out->qlen;
- cinfo.bytes += cinfo.ptr;
- if (dmap_out->mapping_flags & DMA_MAP_MAPPED)
- dmap_out->qlen = 0; /* Reset interrupt counter */
- restore_flags(flags);
- return __copy_to_user(arg, &cinfo, sizeof(cinfo));
-
- case SNDCTL_DSP_GETODELAY:
- if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
- return -EINVAL;
- if (!(dmap_out->flags & DMA_ALLOC_DONE))
- return __put_user(0, (int *)arg);
- save_flags(flags);
- cli();
- /* Compute number of bytes that have been played */
- count = DMAbuf_get_buffer_pointer (dev, dmap_out, DMODE_OUTPUT);
- if (count < dmap_out->fragment_size && dmap_out->qhead != 0)
- count += dmap_out->bytes_in_use; /* Pointer wrap not handled yet */
- count += dmap_out->byte_counter;
- /* Substract current count from the number of bytes written by app */
- count = dmap_out->user_counter - count;
- if (count < 0)
- count = 0;
- restore_flags (flags);
- return __put_user(count, (int *)arg);
-
- case SNDCTL_DSP_POST:
- if (audio_devs[dev]->dmap_out->qlen > 0)
- if (!(audio_devs[dev]->dmap_out->flags & DMA_ACTIVE))
- DMAbuf_launch_output(dev, audio_devs[dev]->dmap_out);
- return 0;
-
- case SNDCTL_DSP_GETBLKSIZE:
- dmap = dmap_out;
- if (audio_devs[dev]->open_mode & OPEN_WRITE)
- reorganize_buffers(dev, dmap_out, (audio_devs[dev]->open_mode == OPEN_READ));
- if (audio_devs[dev]->open_mode == OPEN_READ ||
- (audio_devs[dev]->flags & DMA_DUPLEX &&
- audio_devs[dev]->open_mode & OPEN_READ))
- reorganize_buffers(dev, dmap_in, (audio_devs[dev]->open_mode == OPEN_READ));
- if (audio_devs[dev]->open_mode == OPEN_READ)
- dmap = dmap_in;
- ret = dmap->fragment_size;
- return __put_user(ret, (int *)arg);
-
- case SNDCTL_DSP_SETFRAGMENT:
- ret = 0;
- if (__get_user(fact, (int *)arg))
- return -EFAULT;
- if (audio_devs[dev]->open_mode & OPEN_WRITE)
- ret = dma_set_fragment(dev, dmap_out, fact);
- if (ret < 0)
- return ret;
- if (audio_devs[dev]->open_mode == OPEN_READ ||
- (audio_devs[dev]->flags & DMA_DUPLEX &&
- audio_devs[dev]->open_mode & OPEN_READ))
- ret = dma_set_fragment(dev, dmap_in, fact);
- if (ret < 0)
- return ret;
- if (!arg) /* don't know what this is good for, but preserve old semantics */
+ {
+ ret=0;
+ break;
+ }
+ save_flags(flags);
+ cli();
+ /* Compute number of bytes that have been played */
+ count = DMAbuf_get_buffer_pointer (dev, dmap_out, DMODE_OUTPUT);
+ if (count < dmap_out->fragment_size && dmap_out->qhead != 0)
+ count += dmap_out->bytes_in_use; /* Pointer wrap not handled yet */
+ count += dmap_out->byte_counter;
+ /* Subtract the current count from the number of bytes written by the app */
+ count = dmap_out->user_counter - count;
+ if (count < 0)
+ count = 0;
+ restore_flags (flags);
+ ret = count;
+ break;
+
+ case SNDCTL_DSP_POST:
+ if (audio_devs[dev]->dmap_out->qlen > 0)
+ if (!(audio_devs[dev]->dmap_out->flags & DMA_ACTIVE))
+ DMAbuf_launch_output(dev, audio_devs[dev]->dmap_out);
return 0;
- return __put_user(ret, (int *)arg);
- default:
- if (!audio_devs[dev]->d->ioctl)
- return -EINVAL;
- return audio_devs[dev]->d->ioctl(dev, cmd, arg);
+ case SNDCTL_DSP_GETBLKSIZE:
+ dmap = dmap_out;
+ if (audio_devs[dev]->open_mode & OPEN_WRITE)
+ reorganize_buffers(dev, dmap_out, (audio_devs[dev]->open_mode == OPEN_READ));
+ if (audio_devs[dev]->open_mode == OPEN_READ ||
+ (audio_devs[dev]->flags & DMA_DUPLEX &&
+ audio_devs[dev]->open_mode & OPEN_READ))
+ reorganize_buffers(dev, dmap_in, (audio_devs[dev]->open_mode == OPEN_READ));
+ if (audio_devs[dev]->open_mode == OPEN_READ)
+ dmap = dmap_in;
+ ret = dmap->fragment_size;
+ break;
+
+ case SNDCTL_DSP_SETFRAGMENT:
+ ret = 0;
+ if (get_user(fact, (int *)arg))
+ return -EFAULT;
+ if (audio_devs[dev]->open_mode & OPEN_WRITE)
+ ret = dma_set_fragment(dev, dmap_out, fact);
+ if (ret < 0)
+ return ret;
+ if (audio_devs[dev]->open_mode == OPEN_READ ||
+ (audio_devs[dev]->flags & DMA_DUPLEX &&
+ audio_devs[dev]->open_mode & OPEN_READ))
+ ret = dma_set_fragment(dev, dmap_in, fact);
+ if (ret < 0)
+ return ret;
+ if (!arg) /* don't know what this is good for, but preserve old semantics */
+ return 0;
+ break;
+
+ default:
+ if (!audio_devs[dev]->d->ioctl)
+ return -EINVAL;
+ return audio_devs[dev]->d->ioctl(dev, cmd, arg);
}
+ return put_user(ret, (int *)arg);
}
#define KEY_PORT 0x279 /* Same as LPT1 status port */
#define CSN_NUM 0x99 /* Just a random number */
-static void
-CS_OUT(unsigned char a)
+static void CS_OUT(unsigned char a)
{
- outb((a), KEY_PORT);
+ outb(a, KEY_PORT);
}
+
#define CS_OUT2(a, b) {CS_OUT(a);CS_OUT(b);}
#define CS_OUT3(a, b, c) {CS_OUT(a);CS_OUT(b);CS_OUT(c);}
-static int mpu_base = 0, mpu_irq = 0;
-static int mpu_detected = 0;
+static int mpu_base = 0, mpu_irq = 0;
+static int mpu_detected = 0;
-int
-probe_cs4232_mpu(struct address_info *hw_config)
+int probe_cs4232_mpu(struct address_info *hw_config)
{
-/*
- * Just write down the config values.
- */
+ /*
+ * Just write down the config values.
+ */
mpu_base = hw_config->io_base;
mpu_irq = hw_config->irq;
return 1;
}
-void
-attach_cs4232_mpu(struct address_info *hw_config)
+void attach_cs4232_mpu(struct address_info *hw_config)
{
+ /* Nothing needs doing */
}
static unsigned char crystal_key[] = /* A 32 byte magic key sequence */
0x09, 0x84, 0x42, 0xa1, 0xd0, 0x68, 0x34, 0x1a
};
-int
-probe_cs4232(struct address_info *hw_config)
+int probe_cs4232(struct address_info *hw_config)
{
- int i, n;
- int base = hw_config->io_base, irq = hw_config->irq;
- int dma1 = hw_config->dma, dma2 = hw_config->dma2;
+ int i, n;
+ int base = hw_config->io_base, irq = hw_config->irq;
+ int dma1 = hw_config->dma, dma2 = hw_config->dma2;
+ unsigned long tlimit;
static struct wait_queue *cs_sleeper = NULL;
- static volatile struct snd_wait cs_sleep_flag =
- {0};
-
+ static volatile struct snd_wait cs_sleep_flag = {
+ 0
+ };
-/*
- * Verify that the I/O port range is free.
- */
+ /*
+ * Verify that the I/O port range is free.
+ */
if (check_region(base, 4))
- {
- printk("cs4232.c: I/O port 0x%03x not free\n", base);
- return 0;
- }
+ {
+ printk(KERN_ERR "cs4232.c: I/O port 0x%03x not free\n", base);
+ return 0;
+ }
if (ad1848_detect(hw_config->io_base, NULL, hw_config->osp))
return 1; /* The card is already active */
-/*
- * This version of the driver doesn't use the PnP method when configuring
- * the card but a simplified method defined by Crystal. This means that
- * just one CS4232 compatible device can exist on the system. Also this
- * method conflicts with possible PnP support in the OS. For this reason
- * driver is just a temporary kludge.
- */
+ /*
+ * This version of the driver doesn't use the PnP method to configure
+ * the card, but a simplified method defined by Crystal. This means that
+ * just one CS4232-compatible device can exist on the system. This
+ * method also conflicts with possible PnP support in the OS, so the
+ * driver is just a temporary kludge.
+ */
-/*
- * Repeat initialization few times since it doesn't always succeed in
- * first time.
- */
+ /*
+ * Repeat the initialization a few times, since it doesn't always
+ * succeed on the first attempt.
+ */
for (n = 0; n < 4; n++)
- {
- cs_sleep_flag.opts = WK_NONE;
-/*
- * Wake up the card by sending a 32 byte Crystal key to the key port.
- */
- for (i = 0; i < 32; i++)
- CS_OUT(crystal_key[i]);
-
-
- {
- unsigned long tlimit;
-
- if (HZ / 10)
- current->timeout = tlimit = jiffies + (HZ / 10);
- else
- tlimit = (unsigned long) -1;
- cs_sleep_flag.opts = WK_SLEEP;
- interruptible_sleep_on(&cs_sleeper);
- if (!(cs_sleep_flag.opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- cs_sleep_flag.opts |= WK_TIMEOUT;
- }
- cs_sleep_flag.opts &= ~WK_SLEEP;
- }; /* Delay */
-
-/*
- * Now set the CSN (Card Select Number).
- */
-
- CS_OUT2(0x06, CSN_NUM);
-
-
-/*
- * Then set some config bytes. First logical device 0
- */
-
- CS_OUT2(0x15, 0x00); /* Select logical device 0 (WSS/SB/FM) */
- CS_OUT3(0x47, (base >> 8) & 0xff, base & 0xff); /* WSS base */
-
- if (check_region(0x388, 4)) /* Not free */
- CS_OUT3(0x48, 0x00, 0x00) /* FM base off */
- else
- CS_OUT3(0x48, 0x03, 0x88); /* FM base 0x388 */
-
- CS_OUT3(0x42, 0x00, 0x00); /* SB base off */
- CS_OUT2(0x22, irq); /* SB+WSS IRQ */
- CS_OUT2(0x2a, dma1); /* SB+WSS DMA */
-
- if (dma2 != -1)
- CS_OUT2(0x25, dma2) /* WSS DMA2 */
- else
- CS_OUT2(0x25, 4); /* No WSS DMA2 */
-
- CS_OUT2(0x33, 0x01); /* Activate logical dev 0 */
-
-
- {
- unsigned long tlimit;
-
- if (HZ / 10)
- current->timeout = tlimit = jiffies + (HZ / 10);
- else
- tlimit = (unsigned long) -1;
- cs_sleep_flag.opts = WK_SLEEP;
- interruptible_sleep_on(&cs_sleeper);
- if (!(cs_sleep_flag.opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- cs_sleep_flag.opts |= WK_TIMEOUT;
- }
- cs_sleep_flag.opts &= ~WK_SLEEP;
- }; /* Delay */
-
-/*
- * Initialize logical device 3 (MPU)
- */
+ {
+ cs_sleep_flag.opts = WK_NONE;
+
+ /*
+ * Wake up the card by sending a 32 byte Crystal key to the key port.
+ */
+
+ for (i = 0; i < 32; i++)
+ CS_OUT(crystal_key[i]);
+
+ current->timeout = tlimit = jiffies + (HZ / 10);
+ cs_sleep_flag.opts = WK_SLEEP;
+ interruptible_sleep_on(&cs_sleeper);
+ if (!(cs_sleep_flag.opts & WK_WAKEUP))
+ {
+ if (jiffies >= tlimit)
+ cs_sleep_flag.opts |= WK_TIMEOUT;
+ }
+ cs_sleep_flag.opts &= ~WK_SLEEP;
+
+ /*
+ * Now set the CSN (Card Select Number).
+ */
+
+ CS_OUT2(0x06, CSN_NUM);
+
+ /*
+ * Then set some config bytes. First logical device 0
+ */
+
+ CS_OUT2(0x15, 0x00); /* Select logical device 0 (WSS/SB/FM) */
+ CS_OUT3(0x47, (base >> 8) & 0xff, base & 0xff); /* WSS base */
+
+ if (check_region(0x388, 4)) /* Not free */
+ CS_OUT3(0x48, 0x00, 0x00) /* FM base off */
+ else
+ CS_OUT3(0x48, 0x03, 0x88); /* FM base 0x388 */
+
+ CS_OUT3(0x42, 0x00, 0x00); /* SB base off */
+ CS_OUT2(0x22, irq); /* SB+WSS IRQ */
+ CS_OUT2(0x2a, dma1); /* SB+WSS DMA */
+
+ if (dma2 != -1)
+ CS_OUT2(0x25, dma2) /* WSS DMA2 */
+ else
+ CS_OUT2(0x25, 4); /* No WSS DMA2 */
+
+ CS_OUT2(0x33, 0x01); /* Activate logical dev 0 */
+
+ current->timeout = tlimit = jiffies + (HZ / 10);
+ cs_sleep_flag.opts = WK_SLEEP;
+ interruptible_sleep_on(&cs_sleeper);
+ if (!(cs_sleep_flag.opts & WK_WAKEUP))
+ {
+ if (jiffies >= tlimit)
+ cs_sleep_flag.opts |= WK_TIMEOUT;
+ }
+ cs_sleep_flag.opts &= ~WK_SLEEP;
+
+ /*
+ * Initialize logical device 3 (MPU)
+ */
#if defined(CONFIG_UART401) && defined(CONFIG_MIDI)
- if (mpu_base != 0 && mpu_irq != 0)
- {
- CS_OUT2(0x15, 0x03); /* Select logical device 3 (MPU) */
- CS_OUT3(0x47, (mpu_base >> 8) & 0xff, mpu_base & 0xff); /* MPU base */
- CS_OUT2(0x22, mpu_irq); /* MPU IRQ */
- CS_OUT2(0x33, 0x01); /* Activate logical dev 3 */
- }
+ if (mpu_base != 0 && mpu_irq != 0)
+ {
+ CS_OUT2(0x15, 0x03); /* Select logical device 3 (MPU) */
+ CS_OUT3(0x47, (mpu_base >> 8) & 0xff, mpu_base & 0xff); /* MPU base */
+ CS_OUT2(0x22, mpu_irq); /* MPU IRQ */
+ CS_OUT2(0x33, 0x01); /* Activate logical dev 3 */
+ }
#endif
-/*
- * Finally activate the chip
- */
- CS_OUT(0x79);
-
-
- {
- unsigned long tlimit;
-
- if (HZ / 5)
- current->timeout = tlimit = jiffies + (HZ / 5);
- else
- tlimit = (unsigned long) -1;
- cs_sleep_flag.opts = WK_SLEEP;
- interruptible_sleep_on(&cs_sleeper);
- if (!(cs_sleep_flag.opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- cs_sleep_flag.opts |= WK_TIMEOUT;
- }
- cs_sleep_flag.opts &= ~WK_SLEEP;
- }; /* Delay */
-
-/*
- * Then try to detect the codec part of the chip
- */
-
- if (ad1848_detect(hw_config->io_base, NULL, hw_config->osp))
- return 1;
-
-
- {
- unsigned long tlimit;
-
- if (HZ)
- current->timeout = tlimit = jiffies + (HZ);
- else
- tlimit = (unsigned long) -1;
- cs_sleep_flag.opts = WK_SLEEP;
- interruptible_sleep_on(&cs_sleeper);
- if (!(cs_sleep_flag.opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- cs_sleep_flag.opts |= WK_TIMEOUT;
- }
- cs_sleep_flag.opts &= ~WK_SLEEP;
- }; /* Longer delay */
- }
-
+ /*
+ * Finally activate the chip
+ */
+
+ CS_OUT(0x79);
+
+ current->timeout = tlimit = jiffies + (HZ / 5);
+ cs_sleep_flag.opts = WK_SLEEP;
+ interruptible_sleep_on(&cs_sleeper);
+ if (!(cs_sleep_flag.opts & WK_WAKEUP))
+ {
+ if (jiffies >= tlimit)
+ cs_sleep_flag.opts |= WK_TIMEOUT;
+ }
+ cs_sleep_flag.opts &= ~WK_SLEEP;
+
+ /*
+ * Then try to detect the codec part of the chip
+ */
+
+ if (ad1848_detect(hw_config->io_base, NULL, hw_config->osp))
+ return 1;
+
+ current->timeout = tlimit = jiffies + HZ;
+ cs_sleep_flag.opts = WK_SLEEP;
+ interruptible_sleep_on(&cs_sleeper);
+ if (!(cs_sleep_flag.opts & WK_WAKEUP))
+ {
+ if (jiffies >= tlimit)
+ cs_sleep_flag.opts |= WK_TIMEOUT;
+ }
+ cs_sleep_flag.opts &= ~WK_SLEEP;
+ }
return 0;
}
-void
-attach_cs4232(struct address_info *hw_config)
+void attach_cs4232(struct address_info *hw_config)
{
- int base = hw_config->io_base, irq = hw_config->irq;
- int dma1 = hw_config->dma, dma2 = hw_config->dma2;
- int old_num_mixers = num_mixers;
+ int base = hw_config->io_base, irq = hw_config->irq;
+ int dma1 = hw_config->dma, dma2 = hw_config->dma2;
+ int old_num_mixers = num_mixers;
if (dma2 == -1)
dma2 = dma1;
hw_config->osp);
if (num_mixers > old_num_mixers)
- { /* Assume the mixer map is as suggested in the CS4232 databook */
- AD1848_REROUTE(SOUND_MIXER_LINE1, SOUND_MIXER_LINE);
- AD1848_REROUTE(SOUND_MIXER_LINE2, SOUND_MIXER_CD);
- AD1848_REROUTE(SOUND_MIXER_LINE3, SOUND_MIXER_SYNTH); /* FM synth */
- }
+ {
+ /* Assume the mixer map is as suggested in the CS4232 databook */
+ AD1848_REROUTE(SOUND_MIXER_LINE1, SOUND_MIXER_LINE);
+ AD1848_REROUTE(SOUND_MIXER_LINE2, SOUND_MIXER_CD);
+ AD1848_REROUTE(SOUND_MIXER_LINE3, SOUND_MIXER_SYNTH); /* FM synth */
+ }
#if defined(CONFIG_UART401) && defined(CONFIG_MIDI)
if (mpu_base != 0 && mpu_irq != 0)
- {
- static struct address_info hw_config2 =
- {0}; /* Ensure it's initialized */
-
- hw_config2.io_base = mpu_base;
- hw_config2.irq = mpu_irq;
- hw_config2.dma = -1;
- hw_config2.dma2 = -1;
- hw_config2.always_detect = 0;
- hw_config2.name = NULL;
- hw_config2.driver_use_1 = 0;
- hw_config2.driver_use_2 = 0;
- hw_config2.card_subtype = 0;
-
- if (probe_uart401(&hw_config2))
- {
- mpu_detected = 1;
- attach_uart401(&hw_config2);
- } else
- {
- mpu_base = mpu_irq = 0;
- }
- hw_config->slots[1] = hw_config2.slots[1];
- }
+ {
+ static struct address_info hw_config2 = {
+ 0
+ }; /* Ensure it's initialized */
+
+ hw_config2.io_base = mpu_base;
+ hw_config2.irq = mpu_irq;
+ hw_config2.dma = -1;
+ hw_config2.dma2 = -1;
+ hw_config2.always_detect = 0;
+ hw_config2.name = NULL;
+ hw_config2.driver_use_1 = 0;
+ hw_config2.driver_use_2 = 0;
+ hw_config2.card_subtype = 0;
+
+ if (probe_uart401(&hw_config2))
+ {
+ mpu_detected = 1;
+ attach_uart401(&hw_config2);
+ }
+ else
+ {
+ mpu_base = mpu_irq = 0;
+ }
+ hw_config->slots[1] = hw_config2.slots[1];
+ }
#endif
}
-void
-unload_cs4232(struct address_info *hw_config)
+void unload_cs4232(struct address_info *hw_config)
{
- int base = hw_config->io_base, irq = hw_config->irq;
- int dma1 = hw_config->dma, dma2 = hw_config->dma2;
+ int base = hw_config->io_base, irq = hw_config->irq;
+ int dma1 = hw_config->dma, dma2 = hw_config->dma2;
if (dma2 == -1)
dma2 = dma1;
sound_unload_audiodev(hw_config->slots[0]);
#if defined(CONFIG_UART401) && defined(CONFIG_MIDI)
if (mpu_base != 0 && mpu_irq != 0 && mpu_detected)
- {
- static struct address_info hw_config2 =
- {0}; /* Ensure it's initialized */
-
- hw_config2.io_base = mpu_base;
- hw_config2.irq = mpu_irq;
- hw_config2.dma = -1;
- hw_config2.dma2 = -1;
- hw_config2.always_detect = 0;
- hw_config2.name = NULL;
- hw_config2.driver_use_1 = 0;
- hw_config2.driver_use_2 = 0;
- hw_config2.card_subtype = 0;
- hw_config2.slots[1] = hw_config->slots[1];
-
- unload_uart401(&hw_config2);
- }
+ {
+ static struct address_info hw_config2 =
+ {
+ 0
+ }; /* Ensure it's initialized */
+
+ hw_config2.io_base = mpu_base;
+ hw_config2.irq = mpu_irq;
+ hw_config2.dma = -1;
+ hw_config2.dma2 = -1;
+ hw_config2.always_detect = 0;
+ hw_config2.name = NULL;
+ hw_config2.driver_use_1 = 0;
+ hw_config2.driver_use_2 = 0;
+ hw_config2.card_subtype = 0;
+ hw_config2.slots[1] = hw_config->slots[1];
+
+ unload_uart401(&hw_config2);
+ }
#endif
}
-void
-unload_cs4232_mpu(struct address_info *hw_config)
+void unload_cs4232_mpu(struct address_info *hw_config)
{
/* Not required. Handled by cs4232_unload */
}
int dma = -1;
int dma2 = -1;
+MODULE_PARM(io,"i");
+MODULE_PARM(irq,"i");
+MODULE_PARM(dma,"i");
+MODULE_PARM(dma2,"i");
+
struct address_info cfg;
/*
- * Install a CS4232 based card. Need to have ad1848 and mpu401
- * loaded ready.
+ * Install a CS4232 based card. Need to have ad1848 and mpu401
+ * loaded ready.
*/
int
init_module(void)
{
if (io == -1 || irq == -1 || dma == -1 || dma2 == -1)
- {
- printk(KERN_ERR "cs4232: dma, dma2, irq and io must be set.\n");
- return -EINVAL;
- }
+ {
+ printk(KERN_ERR "cs4232: dma, dma2, irq and io must be set.\n");
+ return -EINVAL;
+ }
cfg.io_base = io;
cfg.irq = irq;
cfg.dma = dma;
return 0;
}
-void
-cleanup_module(void)
+void cleanup_module(void)
{
unload_cs4232_mpu(&cfg);
unload_cs4232(&cfg);
SOUND_LOCK_END;
}
-#endif
#endif
+#endif
*
* Device call tables.
*/
+
/*
* Copyright (C) by Hannu Savolainen 1993-1997
*
* Version 2 (June 1991). See the "COPYING" file distributed with this software
* for more info.
*/
+
#include <linux/config.h>
#define _DEV_TABLE_C_
#include "sound_config.h"
-int sb_be_quiet = 0;
-int softoss_dev = 0;
-
-int sound_started = 0;
-int sndtable_get_cardcount(void);
+int sb_be_quiet = 0;
+int softoss_dev = 0;
+int sound_started = 0;
+int sndtable_get_cardcount(void);
-int
-snd_find_driver(int type)
+int snd_find_driver(int type)
{
- int i, n = num_sound_drivers;
+ int i, n = num_sound_drivers;
for (i = 0; i < n; i++)
if (sound_drivers[i].card_type == type)
return i;
return -1;
}
-static void
-start_services(void)
+static void start_services(void)
{
- int soundcards_installed;
+ int soundcards_installed;
#ifdef FIXME
if (!(soundcards_installed = sndtable_get_cardcount()))
#ifdef CONFIG_AUDIO
if (num_audiodevs) /* Audio devices present */
- {
- int dev;
-
- for (dev = 0; dev < num_audiodevs; dev++)
- {
- }
- audio_init_devices();
+ {
+ int dev;
+ for (dev = 0; dev < num_audiodevs; dev++)
+ {
+ }
+ audio_init_devices();
}
#endif
static void
start_cards(void)
{
- int i, n = num_sound_cards;
- int drv;
+ int i, n = num_sound_cards;
+ int drv;
sound_started = 1;
if (trace_init)
- printk("Sound initialization started\n");
+ printk(KERN_DEBUG "Sound initialization started\n");
#ifdef CONFIG_LOWLEVEL_SOUND
{
- extern void sound_preinit_lowlevel_drivers(void);
-
+ extern void sound_preinit_lowlevel_drivers(void);
sound_preinit_lowlevel_drivers();
}
#endif
num_sound_cards = i + 1;
for (i = 0; i < n && snd_installed_cards[i].card_type; i++)
+ {
if (snd_installed_cards[i].enabled)
- {
- snd_installed_cards[i].for_driver_use = NULL;
-
- if ((drv = snd_find_driver(snd_installed_cards[i].card_type)) == -1)
- {
- snd_installed_cards[i].enabled = 0; /*
- * Mark as not detected
- */
- continue;
- }
- snd_installed_cards[i].config.card_subtype =
- sound_drivers[drv].card_subtype;
-
- if (sound_drivers[drv].probe(&snd_installed_cards[i].config))
- {
-
- sound_drivers[drv].attach(&snd_installed_cards[i].config);
-
- } else
- snd_installed_cards[i].enabled = 0; /*
+ {
+ snd_installed_cards[i].for_driver_use = NULL;
+
+ if ((drv = snd_find_driver(snd_installed_cards[i].card_type)) == -1)
+ {
+ snd_installed_cards[i].enabled = 0; /*
* Mark as not detected
*/
- }
+ continue;
+ }
+ snd_installed_cards[i].config.card_subtype =
+ sound_drivers[drv].card_subtype;
+
+ if (sound_drivers[drv].probe(&snd_installed_cards[i].config))
+ sound_drivers[drv].attach(&snd_installed_cards[i].config);
+ else
+ snd_installed_cards[i].enabled = 0; /*
+ * Mark as not detected
+ */
+ }
+ }
#ifdef CONFIG_LOWLEVEL_SOUND
{
extern void sound_init_lowlevel_drivers(void);
-
sound_init_lowlevel_drivers();
}
#endif
-
if (trace_init)
- printk("Sound initialization complete\n");
+ printk(KERN_DEBUG "Sound initialization complete\n");
}
-void
-sndtable_init(void)
+void sndtable_init(void)
{
start_cards();
}
-void
-sound_unload_drivers(void)
+void sound_unload_drivers(void)
{
- int i, n = num_sound_cards;
- int drv;
+ int i, n = num_sound_cards;
+ int drv;
if (!sound_started)
return;
if (trace_init)
- printk("Sound unload started\n");
+ printk(KERN_DEBUG "Sound unload started\n");
for (i = 0; i < n && snd_installed_cards[i].card_type; i++)
+ {
if (snd_installed_cards[i].enabled)
- {
- if ((drv = snd_find_driver(snd_installed_cards[i].card_type)) != -1)
- {
- if (sound_drivers[drv].unload)
- {
- sound_drivers[drv].unload(&snd_installed_cards[i].config);
- snd_installed_cards[i].enabled = 0;
- }
- }
- }
+ {
+ if ((drv = snd_find_driver(snd_installed_cards[i].card_type)) != -1)
+ {
+ if (sound_drivers[drv].unload)
+ {
+ sound_drivers[drv].unload(&snd_installed_cards[i].config);
+ snd_installed_cards[i].enabled = 0;
+ }
+ }
+ }
+ }
+
for (i=0;i<num_audiodevs;i++)
DMAbuf_deinit(i);
if (trace_init)
- printk("Sound unload complete\n");
+ printk(KERN_DEBUG "Sound unload complete\n");
}
-void
-sound_unload_driver(int type)
+void sound_unload_driver(int type)
{
- int i, drv = -1, n = num_sound_cards;
+ int i, drv = -1, n = num_sound_cards;
DEB(printk("unload driver %d: ", type));
for (i = 0; i < n && snd_installed_cards[i].card_type; i++)
+ {
if (snd_installed_cards[i].card_type == type)
- {
- if (snd_installed_cards[i].enabled)
- {
- if ((drv = snd_find_driver(type)) != -1)
- {
- DEB(printk(" card %d", i));
- if (sound_drivers[drv].unload)
- {
- sound_drivers[drv].unload(&snd_installed_cards[i].config);
- snd_installed_cards[i].enabled = 0;
- }
- }
- }
- }
+ {
+ if (snd_installed_cards[i].enabled)
+ {
+ if ((drv = snd_find_driver(type)) != -1)
+ {
+ DEB(printk(" card %d", i));
+ if (sound_drivers[drv].unload)
+ {
+ sound_drivers[drv].unload(&snd_installed_cards[i].config);
+ snd_installed_cards[i].enabled = 0;
+ }
+ }
+ }
+ }
+ }
DEB(printk("\n"));
}
-int
-sndtable_probe(int unit, struct address_info *hw_config)
+int sndtable_probe(int unit, struct address_info *hw_config)
{
int sel = -1;
- DEB(printk("sndtable_probe(%d)\n", unit));
+ DEB(printk(KERN_DEBUG "sndtable_probe(%d)\n", unit));
if (!unit)
return 1;
if (sel == -1 && num_sound_cards < max_sound_cards)
- {
- int i;
-
- i = sel = (num_sound_cards++);
-
- snd_installed_cards[sel].card_type = unit;
- snd_installed_cards[sel].enabled = 1;
- }
+ {
+ int i;
+ i = sel = (num_sound_cards++);
+ snd_installed_cards[sel].card_type = unit;
+ snd_installed_cards[sel].enabled = 1;
+ }
if (sel != -1)
- {
- int drv;
-
- snd_installed_cards[sel].for_driver_use = NULL;
- snd_installed_cards[sel].config.io_base = hw_config->io_base;
- snd_installed_cards[sel].config.irq = hw_config->irq;
- snd_installed_cards[sel].config.dma = hw_config->dma;
- snd_installed_cards[sel].config.dma2 = hw_config->dma2;
- snd_installed_cards[sel].config.name = hw_config->name;
- snd_installed_cards[sel].config.always_detect = hw_config->always_detect;
- snd_installed_cards[sel].config.driver_use_1 = hw_config->driver_use_1;
- snd_installed_cards[sel].config.driver_use_2 = hw_config->driver_use_2;
- snd_installed_cards[sel].config.card_subtype = hw_config->card_subtype;
-
- if ((drv = snd_find_driver(snd_installed_cards[sel].card_type)) == -1)
- {
- snd_installed_cards[sel].enabled = 0;
- DEB(printk("Failed to find driver\n"));
- return 0;
- }
- DEB(printk("Driver name '%s'\n", sound_drivers[drv].name));
-
- hw_config->card_subtype =
- snd_installed_cards[sel].config.card_subtype =
- sound_drivers[drv].card_subtype;
-
- if (sound_drivers[drv].probe(hw_config))
- {
- DEB(printk("Hardware probed OK\n"));
- return 1;
- }
- DEB(printk("Failed to find hardware\n"));
- snd_installed_cards[sel].enabled = 0; /*
+ {
+ int drv;
+
+ snd_installed_cards[sel].for_driver_use = NULL;
+ snd_installed_cards[sel].config.io_base = hw_config->io_base;
+ snd_installed_cards[sel].config.irq = hw_config->irq;
+ snd_installed_cards[sel].config.dma = hw_config->dma;
+ snd_installed_cards[sel].config.dma2 = hw_config->dma2;
+ snd_installed_cards[sel].config.name = hw_config->name;
+ snd_installed_cards[sel].config.always_detect = hw_config->always_detect;
+ snd_installed_cards[sel].config.driver_use_1 = hw_config->driver_use_1;
+ snd_installed_cards[sel].config.driver_use_2 = hw_config->driver_use_2;
+ snd_installed_cards[sel].config.card_subtype = hw_config->card_subtype;
+
+ if ((drv = snd_find_driver(snd_installed_cards[sel].card_type)) == -1)
+ {
+ snd_installed_cards[sel].enabled = 0;
+ DEB(printk(KERN_DEBUG "Failed to find driver\n"));
+ return 0;
+ }
+ DEB(printk(KERN_DEBUG "Driver name '%s'\n", sound_drivers[drv].name));
+
+ hw_config->card_subtype = snd_installed_cards[sel].config.card_subtype = sound_drivers[drv].card_subtype;
+
+ if (sound_drivers[drv].probe(hw_config))
+ {
+ DEB(printk(KERN_DEBUG "Hardware probed OK\n"));
+ return 1;
+ }
+ DEB(printk(KERN_DEBUG "Failed to find hardware\n"));
+ snd_installed_cards[sel].enabled = 0; /*
* Mark as not detected
*/
- return 0;
- }
+ return 0;
+ }
return 0;
}
-int
-sndtable_init_card(int unit, struct address_info *hw_config)
+int sndtable_init_card(int unit, struct address_info *hw_config)
{
- int i, n = num_sound_cards;
+ int i, n = num_sound_cards;
DEB(printk("sndtable_init_card(%d) entered\n", unit));
if (!unit)
- {
- sndtable_init();
- return 1;
- }
+ {
+ sndtable_init();
+ return 1;
+ }
for (i = 0; i < n && snd_installed_cards[i].card_type; i++)
+ {
if (snd_installed_cards[i].card_type == unit)
- {
- int drv;
-
- snd_installed_cards[i].config.io_base = hw_config->io_base;
- snd_installed_cards[i].config.irq = hw_config->irq;
- snd_installed_cards[i].config.dma = hw_config->dma;
- snd_installed_cards[i].config.dma2 = hw_config->dma2;
- snd_installed_cards[i].config.name = hw_config->name;
- snd_installed_cards[i].config.always_detect = hw_config->always_detect;
- snd_installed_cards[i].config.driver_use_1 = hw_config->driver_use_1;
- snd_installed_cards[i].config.driver_use_2 = hw_config->driver_use_2;
- snd_installed_cards[i].config.card_subtype = hw_config->card_subtype;
-
- if ((drv = snd_find_driver(snd_installed_cards[i].card_type)) == -1)
- snd_installed_cards[i].enabled = 0; /*
+ {
+ int drv;
+
+ snd_installed_cards[i].config.io_base = hw_config->io_base;
+ snd_installed_cards[i].config.irq = hw_config->irq;
+ snd_installed_cards[i].config.dma = hw_config->dma;
+ snd_installed_cards[i].config.dma2 = hw_config->dma2;
+ snd_installed_cards[i].config.name = hw_config->name;
+ snd_installed_cards[i].config.always_detect = hw_config->always_detect;
+ snd_installed_cards[i].config.driver_use_1 = hw_config->driver_use_1;
+ snd_installed_cards[i].config.driver_use_2 = hw_config->driver_use_2;
+ snd_installed_cards[i].config.card_subtype = hw_config->card_subtype;
+
+ if ((drv = snd_find_driver(snd_installed_cards[i].card_type)) == -1)
+ snd_installed_cards[i].enabled = 0; /*
* Mark as not detected
*/
- else
- {
-
- DEB(printk("Located card - calling attach routine\n"));
- sound_drivers[drv].attach(hw_config);
-
- DEB(printk("attach routine finished\n"));
- }
- start_services();
- return 1;
- }
+ else
+ {
+ DEB(printk(KERN_DEBUG "Located card - calling attach routine\n"));
+ sound_drivers[drv].attach(hw_config);
+
+ DEB(printk(KERN_DEBUG "attach routine finished\n"));
+ }
+ start_services();
+ return 1;
+ }
+ }
DEB(printk("sndtable_init_card: No card defined with type=%d, num cards: %d\n", unit, num_sound_cards));
return 0;
}
-int
-sndtable_get_cardcount(void)
+int sndtable_get_cardcount(void)
{
return num_audiodevs + num_mixers + num_synths + num_midis;
}
-int
-sndtable_identify_card(char *name)
+int sndtable_identify_card(char *name)
{
- int i, n = num_sound_drivers;
+ int i, n = num_sound_drivers;
if (name == NULL)
return 0;
for (i = 0; i < n; i++)
+ {
if (sound_drivers[i].driver_id != NULL)
- {
- char *id = sound_drivers[i].driver_id;
- int j;
-
- for (j = 0; j < 80 && name[j] == id[j]; j++)
- if (id[j] == 0 && name[j] == 0) /* Match */
- return sound_drivers[i].card_type;
- }
+ {
+ char *id = sound_drivers[i].driver_id;
+ int j;
+
+ for (j = 0; j < 80 && name[j] == id[j]; j++)
+ if (id[j] == 0 && name[j] == 0) /* Match */
+ return sound_drivers[i].card_type;
+ }
+ }
return 0;
}
-void
-sound_setup(char *str, int *ints)
+void sound_setup(char *str, int *ints)
{
- int i, n = num_sound_cards;
+ int i, n = num_sound_cards;
/*
- * First disable all drivers
+ * First disable all drivers
*/
for (i = 0; i < n && snd_installed_cards[i].card_type; i++)
snd_installed_cards[i].enabled = 0;
if (ints[0] == 0 || ints[1] == 0)
return;
/*
- * Then enable them one by time
+ * Then enable them one by time
*/
for (i = 1; i <= ints[0]; i++)
- {
- int card_type, ioaddr, irq, dma, dma2,
- ptr, j;
- unsigned int val;
-
- val = (unsigned int) ints[i];
-
- card_type = (val & 0x0ff00000) >> 20;
-
- if (card_type > 127)
- {
- /*
- * Add any future extensions here
- */
- return;
- }
- ioaddr = (val & 0x000fff00) >> 8;
- irq = (val & 0x000000f0) >> 4;
- dma = (val & 0x0000000f);
- dma2 = (val & 0xf0000000) >> 28;
-
- ptr = -1;
- for (j = 0; j < n && ptr == -1; j++)
- if (snd_installed_cards[j].card_type == card_type &&
- !snd_installed_cards[j].enabled) /*
+ {
+ int card_type, ioaddr, irq, dma, dma2, ptr, j;
+ unsigned int val;
+
+ val = (unsigned int) ints[i];
+ card_type = (val & 0x0ff00000) >> 20;
+
+ if (card_type > 127)
+ {
+ /*
+ * Add any future extensions here
+ */
+ return;
+ }
+ ioaddr = (val & 0x000fff00) >> 8;
+ irq = (val & 0x000000f0) >> 4;
+ dma = (val & 0x0000000f);
+ dma2 = (val & 0xf0000000) >> 28;
+
+ ptr = -1;
+ for (j = 0; j < n && ptr == -1; j++)
+ {
+ if (snd_installed_cards[j].card_type == card_type &&
+ !snd_installed_cards[j].enabled) /*
* Not already found
*/
- ptr = j;
-
- if (ptr == -1)
- printk("Sound: Invalid setup parameter 0x%08x\n", val);
- else
- {
- snd_installed_cards[ptr].enabled = 1;
- snd_installed_cards[ptr].config.io_base = ioaddr;
- snd_installed_cards[ptr].config.irq = irq;
- snd_installed_cards[ptr].config.dma = dma;
- snd_installed_cards[ptr].config.dma2 = dma2;
- snd_installed_cards[ptr].config.name = NULL;
- snd_installed_cards[ptr].config.always_detect = 0;
- snd_installed_cards[ptr].config.driver_use_1 = 0;
- snd_installed_cards[ptr].config.driver_use_2 = 0;
- snd_installed_cards[ptr].config.card_subtype = 0;
- }
- }
+ ptr = j;
+ }
+
+ if (ptr == -1)
+ printk(KERN_ERR "Sound: Invalid setup parameter 0x%08x\n", val);
+ else
+ {
+ snd_installed_cards[ptr].enabled = 1;
+ snd_installed_cards[ptr].config.io_base = ioaddr;
+ snd_installed_cards[ptr].config.irq = irq;
+ snd_installed_cards[ptr].config.dma = dma;
+ snd_installed_cards[ptr].config.dma2 = dma2;
+ snd_installed_cards[ptr].config.name = NULL;
+ snd_installed_cards[ptr].config.always_detect = 0;
+ snd_installed_cards[ptr].config.driver_use_1 = 0;
+ snd_installed_cards[ptr].config.driver_use_2 = 0;
+ snd_installed_cards[ptr].config.card_subtype = 0;
+ }
+ }
}
-struct address_info
- *
-sound_getconf(int card_type)
+struct address_info *sound_getconf(int card_type)
{
- int j, ptr;
- int n = num_sound_cards;
+ int j, ptr;
+ int n = num_sound_cards;
ptr = -1;
for (j = 0; j < n && ptr == -1 && snd_installed_cards[j].card_type; j++)
+ {
if (snd_installed_cards[j].card_type == card_type)
ptr = j;
-
+ }
if (ptr == -1)
return (struct address_info *) NULL;
-int
-sound_install_audiodrv(int vers,
- char *name,
- struct audio_driver *driver,
- int driver_size,
- int flags,
- unsigned int format_mask,
- void *devc,
- int dma1,
- int dma2)
+int sound_install_audiodrv(int vers, char *name, struct audio_driver *driver,
+ int driver_size, int flags, unsigned int format_mask,
+ void *devc, int dma1, int dma2)
{
#ifdef CONFIG_AUDIO
struct audio_driver *d;
struct audio_operations *op;
- int l, num;
+ int l, num;
- if (vers != AUDIO_DRIVER_VERSION ||
- driver_size > sizeof(struct audio_driver))
- {
- printk(KERN_ERR "Sound: Incompatible audio driver for %s\n", name);
- return -(EINVAL);
- }
+ if (vers != AUDIO_DRIVER_VERSION || driver_size > sizeof(struct audio_driver))
+ {
+ printk(KERN_ERR "Sound: Incompatible audio driver for %s\n", name);
+ return -EINVAL;
+ }
num = sound_alloc_audiodev();
if (num == -1)
- {
- printk(KERN_ERR "sound: Too many audio drivers\n");
- return -(EBUSY);
- }
+ {
+ printk(KERN_ERR "sound: Too many audio drivers\n");
+ return -EBUSY;
+ }
d = (struct audio_driver *) (sound_mem_blocks[sound_nblocks] = vmalloc(sizeof(struct audio_driver)));
sound_mem_sizes[sound_nblocks] = sizeof(struct audio_driver);
if (sound_nblocks < 1024)
- sound_nblocks++;;
+ sound_nblocks++;
op = (struct audio_operations *) (sound_mem_blocks[sound_nblocks] = vmalloc(sizeof(struct audio_operations)));
sound_mem_sizes[sound_nblocks] = sizeof(struct audio_operations);
if (sound_nblocks < 1024)
- sound_nblocks++;;
+ sound_nblocks++;
if (d == NULL || op == NULL)
- {
- printk(KERN_ERR "Sound: Can't allocate driver for (%s)\n", name);
- sound_unload_audiodev(num);
- return -(ENOMEM);
- }
+ {
+ printk(KERN_ERR "Sound: Can't allocate driver for (%s)\n", name);
+ sound_unload_audiodev(num);
+ return -ENOMEM;
+ }
memset((char *) op, 0, sizeof(struct audio_operations));
if (driver_size < sizeof(struct audio_driver))
- memset((char *) d, 0, sizeof(struct audio_driver));
+ memset((char *) d, 0, sizeof(struct audio_driver));
memcpy((char *) d, (char *) driver, driver_size);
op->d = d;
-
l = strlen(name) + 1;
if (l > sizeof(op->name))
l = sizeof(op->name);
op->format_mask = format_mask;
op->devc = devc;
-/*
- * Hardcoded defaults
- */
+ /*
+ * Hardcoded defaults
+ */
audio_devs[num] = op;
DMAbuf_init(num, dma1, dma2);
#endif
}
-int
-sound_install_mixer(int vers,
- char *name,
- struct mixer_operations *driver,
- int driver_size,
- void *devc)
+int sound_install_mixer(int vers, char *name, struct mixer_operations *driver,
+ int driver_size, void *devc)
{
struct mixer_operations *op;
- int l;
+ int l;
- int n = sound_alloc_mixerdev();
+ int n = sound_alloc_mixerdev();
if (n == -1)
- {
- printk(KERN_ERR "Sound: Too many mixer drivers\n");
- return -(EBUSY);
- }
+ {
+ printk(KERN_ERR "Sound: Too many mixer drivers\n");
+ return -EBUSY;
+ }
if (vers != MIXER_DRIVER_VERSION ||
- driver_size > sizeof(struct mixer_operations))
- {
- printk(KERN_ERR "Sound: Incompatible mixer driver for %s\n", name);
- return -(EINVAL);
- }
+ driver_size > sizeof(struct mixer_operations))
+ {
+ printk(KERN_ERR "Sound: Incompatible mixer driver for %s\n", name);
+ return -EINVAL;
+ }
+
+ /* FIXME: This leaks a mixer_operations struct every time it's called
+ until you unload sound! */
+
op = (struct mixer_operations *) (sound_mem_blocks[sound_nblocks] = vmalloc(sizeof(struct mixer_operations)));
sound_mem_sizes[sound_nblocks] = sizeof(struct mixer_operations);
if (sound_nblocks < 1024)
- sound_nblocks++;;
+ sound_nblocks++;
if (op == NULL)
- {
- printk(KERN_ERR "Sound: Can't allocate mixer driver for (%s)\n", name);
- return -(ENOMEM);
- }
+ {
+ printk(KERN_ERR "Sound: Can't allocate mixer driver for (%s)\n", name);
+ return -ENOMEM;
+ }
memset((char *) op, 0, sizeof(struct mixer_operations));
-
memcpy((char *) op, (char *) driver, driver_size);
l = strlen(name) + 1;
return n;
}
-void
-sound_unload_audiodev(int dev)
+void sound_unload_audiodev(int dev)
{
if (dev != -1)
audio_devs[dev] = NULL;
}
-int
-sound_alloc_audiodev(void)
+int sound_alloc_audiodev(void)
{
- int i;
+ int i;
for (i = 0; i < MAX_AUDIO_DEV; i++)
- {
- if (audio_devs[i] == NULL)
- {
- if (i >= num_audiodevs)
- num_audiodevs = i + 1;
- return i;
- }
- }
+ {
+ if (audio_devs[i] == NULL)
+ {
+ if (i >= num_audiodevs)
+ num_audiodevs = i + 1;
+ return i;
+ }
+ }
return -1;
}
-int
-sound_alloc_mididev(void)
+int sound_alloc_mididev(void)
{
- int i;
+ int i;
for (i = 0; i < MAX_MIDI_DEV; i++)
- {
- if (midi_devs[i] == NULL)
- {
- if (i >= num_midis)
- num_midis++;
- return i;
- }
- }
-
+ {
+ if (midi_devs[i] == NULL)
+ {
+ if (i >= num_midis)
+ num_midis++;
+ return i;
+ }
+ }
return -1;
}
-int
-sound_alloc_synthdev(void)
+int sound_alloc_synthdev(void)
{
- int i;
+ int i;
for (i = 0; i < MAX_SYNTH_DEV; i++)
- {
- if (synth_devs[i] == NULL)
- {
- if (i >= num_synths)
- num_synths++;
- return i;
- }
- }
+ {
+ if (synth_devs[i] == NULL)
+ {
+ if (i >= num_synths)
+ num_synths++;
+ return i;
+ }
+ }
return -1;
}
-int
-sound_alloc_mixerdev(void)
+int sound_alloc_mixerdev(void)
{
- int i;
+ int i;
for (i = 0; i < MAX_MIXER_DEV; i++)
- {
- if (mixer_devs[i] == NULL)
- {
- if (i >= num_mixers)
- num_mixers++;
- return i;
- }
- }
+ {
+ if (mixer_devs[i] == NULL)
+ {
+ if (i >= num_mixers)
+ num_mixers++;
+ return i;
+ }
+ }
return -1;
}
-int
-sound_alloc_timerdev(void)
+int sound_alloc_timerdev(void)
{
- int i;
+ int i;
for (i = 0; i < MAX_TIMER_DEV; i++)
- {
- if (sound_timer_devs[i] == NULL)
- {
- if (i >= num_sound_timers)
- num_sound_timers++;
- return i;
- }
- }
+ {
+ if (sound_timer_devs[i] == NULL)
+ {
+ if (i >= num_sound_timers)
+ num_sound_timers++;
+ return i;
+ }
+ }
return -1;
}
-void
-sound_unload_mixerdev(int dev)
+void sound_unload_mixerdev(int dev)
{
if (dev != -1)
mixer_devs[dev] = NULL;
}
-void
-sound_unload_mididev(int dev)
+void sound_unload_mididev(int dev)
{
#ifdef CONFIG_MIDI
if (dev != -1)
#endif
}
-void
-sound_unload_synthdev(int dev)
+void sound_unload_synthdev(int dev)
{
if (dev != -1)
synth_devs[dev] = NULL;
}
-void
-sound_unload_timerdev(int dev)
+void sound_unload_timerdev(int dev)
{
if (dev != -1)
sound_timer_devs[dev] = NULL;
}
+
* dev_table.h
*
* Global definitions for device call tables
- */
-/*
+ *
+ *
* Copyright (C) by Hannu Savolainen 1993-1997
*
* OSS/Free for Linux is distributed under the GNU GENERAL PUBLIC LICENSE (GPL)
* Sound card numbers 27 to 999. (1 to 26 are defined in soundcard.h)
* Numbers 1000 to N are reserved for driver's internal use.
*/
+
#define SNDCARD_DESKPROXL 27 /* Compaq Deskpro XL */
#define SNDCARD_SBPNP 29
#define SNDCARD_OPL3SA1 38
extern int sound_started;
-struct driver_info {
+struct driver_info
+{
char *driver_id;
int card_subtype; /* Driver specific. Usually 0 */
int card_type; /* From soundcard.h */
void (*unload) (struct address_info *hw_config);
};
-struct card_info {
+struct card_info
+{
int card_type; /* Link (search key) to the driver list */
struct address_info config;
int enabled;
#define DMODE_OUTPUT PCM_ENABLE_OUTPUT
#define DMODE_INPUT PCM_ENABLE_INPUT
-struct dma_buffparms {
+struct dma_buffparms
+{
int dma_mode; /* DMODE_INPUT, DMODE_OUTPUT or DMODE_NONE */
int closing;
* Structure for use with various microcontrollers and DSP processors
* in the recent soundcards.
*/
-typedef struct coproc_operations {
- char name[64];
- int (*open) (void *devc, int sub_device);
- void (*close) (void *devc, int sub_device);
- int (*ioctl) (void *devc, unsigned int cmd, caddr_t arg, int local);
- void (*reset) (void *devc);
+typedef struct coproc_operations
+{
+ char name[64];
+ int (*open) (void *devc, int sub_device);
+ void (*close) (void *devc, int sub_device);
+ int (*ioctl) (void *devc, unsigned int cmd, caddr_t arg, int local);
+ void (*reset) (void *devc);
- void *devc; /* Driver specific info */
- } coproc_operations;
+ void *devc; /* Driver specific info */
+} coproc_operations;
-struct audio_driver {
+struct audio_driver
+{
int (*open) (int dev, int mode);
void (*close) (int dev);
void (*output_block) (int dev, unsigned long buf,
void (*preprocess_read)(int dev); /* Device spesific preprocessing for read data */
};
-struct audio_operations {
+struct audio_operations
+{
char name[128];
int flags;
#define NOTHING_SPECIAL 0x00
int *load_mixer_volumes(char *name, int *levels, int present);
-struct mixer_operations {
+struct mixer_operations
+{
char id[16];
char name[64];
int (*ioctl) (int dev, unsigned int cmd, caddr_t arg);
int modify_counter;
};
-struct synth_operations {
+struct synth_operations
+{
char *id; /* Unique identifier (ASCII) max 29 char */
struct synth_info *info;
int midi_dev;
int sysex_ptr;
};
-struct midi_input_info { /* MIDI input scanner variables */
+struct midi_input_info
+{
+ /* MIDI input scanner variables */
#define MI_MAX 10
- int m_busy;
- unsigned char m_buf[MI_MAX];
- unsigned char m_prev_status; /* For running status */
- int m_ptr;
+ int m_busy;
+ unsigned char m_buf[MI_MAX];
+ unsigned char m_prev_status; /* For running status */
+ int m_ptr;
#define MST_INIT 0
#define MST_DATA 1
#define MST_SYSEX 2
- int m_state;
- int m_left;
- };
+ int m_state;
+ int m_left;
+};
-struct midi_operations {
+struct midi_operations
+{
struct midi_info info;
struct synth_operations *converter;
struct midi_input_info in_info;
void *devc;
};
-struct sound_lowlev_timer {
- int dev;
- int priority;
- unsigned int (*tmr_start)(int dev, unsigned int usecs);
- void (*tmr_disable)(int dev);
- void (*tmr_restart)(int dev);
- };
+struct sound_lowlev_timer
+{
+ int dev;
+ int priority;
+ unsigned int (*tmr_start)(int dev, unsigned int usecs);
+ void (*tmr_disable)(int dev);
+ void (*tmr_restart)(int dev);
+};
-struct sound_timer_operations {
+struct sound_timer_operations
+{
struct sound_timer_info info;
int priority;
int devlink;
#ifdef _DEV_TABLE_C_
- struct audio_operations *audio_devs[MAX_AUDIO_DEV] = {NULL}; int num_audiodevs = 0;
- struct mixer_operations *mixer_devs[MAX_MIXER_DEV] = {NULL}; int num_mixers = 0;
- struct synth_operations *synth_devs[MAX_SYNTH_DEV+MAX_MIDI_DEV] = {NULL}; int num_synths = 0;
- struct midi_operations *midi_devs[MAX_MIDI_DEV] = {NULL}; int num_midis = 0;
+struct audio_operations *audio_devs[MAX_AUDIO_DEV] = {NULL}; int num_audiodevs = 0;
+struct mixer_operations *mixer_devs[MAX_MIXER_DEV] = {NULL}; int num_mixers = 0;
+struct synth_operations *synth_devs[MAX_SYNTH_DEV+MAX_MIDI_DEV] = {NULL}; int num_synths = 0;
+struct midi_operations *midi_devs[MAX_MIDI_DEV] = {NULL}; int num_midis = 0;
#if defined(CONFIG_SEQUENCER) && !defined(EXCLUDE_TIMERS) && !defined(VMIDI)
- extern struct sound_timer_operations default_sound_timer;
- struct sound_timer_operations *sound_timer_devs[MAX_TIMER_DEV] =
- {&default_sound_timer, NULL};
- int num_sound_timers = 1;
+extern struct sound_timer_operations default_sound_timer;
+struct sound_timer_operations *sound_timer_devs[MAX_TIMER_DEV] = {
+ &default_sound_timer, NULL
+};
+int num_sound_timers = 1;
#else
- struct sound_timer_operations *sound_timer_devs[MAX_TIMER_DEV] =
- {NULL};
- int num_sound_timers = 0;
+struct sound_timer_operations *sound_timer_devs[MAX_TIMER_DEV] = {
+ NULL
+};
+int num_sound_timers = 0;
#endif
/*
* List of low level drivers compiled into the kernel.
*/
- struct driver_info sound_drivers[] = {
+struct driver_info sound_drivers[] =
+{
#if defined(CONFIG_PSS) && !defined(CONFIG_PSS_MODULE)
- {"PSS", 0, SNDCARD_PSS, "Echo Personal Sound System PSS (ESC614)", attach_pss, probe_pss, unload_pss},
- {"PSSMPU", 0, SNDCARD_PSS_MPU, "PSS-MPU", attach_pss_mpu, probe_pss_mpu, unload_pss_mpu},
- {"PSSMSS", 0, SNDCARD_PSS_MSS, "PSS-MSS", attach_pss_mss, probe_pss_mss, unload_pss_mss},
+ {"PSS", 0, SNDCARD_PSS, "Echo Personal Sound System PSS (ESC614)", attach_pss, probe_pss, unload_pss},
+ {"PSSMPU", 0, SNDCARD_PSS_MPU, "PSS-MPU", attach_pss_mpu, probe_pss_mpu, unload_pss_mpu},
+ {"PSSMSS", 0, SNDCARD_PSS_MSS, "PSS-MSS", attach_pss_mss, probe_pss_mss, unload_pss_mss},
#endif
#if defined(CONFIG_GUS) && !defined(CONFIG_GUS_MODULE)
#ifdef CONFIG_GUS16
- {"GUS16", 0, SNDCARD_GUS16, "Ultrasound 16-bit opt.", attach_gus_db16, probe_gus_db16, unload_gus_db16},
+ {"GUS16", 0, SNDCARD_GUS16, "Ultrasound 16-bit opt.", attach_gus_db16, probe_gus_db16, unload_gus_db16},
#endif
#ifdef CONFIG_GUSHW
- {"GUS", 0, SNDCARD_GUS, "Gravis Ultrasound", attach_gus_card, probe_gus, unload_gus},
- {"GUSPNP", 1, SNDCARD_GUSPNP, "GUS PnP", attach_gus_card, probe_gus, unload_gus},
+ {"GUS", 0, SNDCARD_GUS, "Gravis Ultrasound", attach_gus_card, probe_gus, unload_gus},
+ {"GUSPNP", 1, SNDCARD_GUSPNP, "GUS PnP", attach_gus_card, probe_gus, unload_gus},
#endif
#endif
#if defined(CONFIG_MSS) && !defined(CONFIG_MSS_MODULE)
- {"MSS", 0, SNDCARD_MSS, "MS Sound System", attach_ms_sound, probe_ms_sound, unload_ms_sound},
+ {"MSS", 0, SNDCARD_MSS, "MS Sound System", attach_ms_sound, probe_ms_sound, unload_ms_sound},
/* Compaq Deskpro XL */
- {"DESKPROXL", 2, SNDCARD_DESKPROXL, "Compaq Deskpro XL", attach_ms_sound, probe_ms_sound, unload_ms_sound},
+ {"DESKPROXL", 2, SNDCARD_DESKPROXL, "Compaq Deskpro XL", attach_ms_sound, probe_ms_sound, unload_ms_sound},
#endif
#ifdef CONFIG_MAD16
- {"MAD16", 0, SNDCARD_MAD16, "MAD16/Mozart (MSS)", attach_mad16, probe_mad16, unload_mad16},
- {"MAD16MPU", 0, SNDCARD_MAD16_MPU, "MAD16/Mozart (MPU)", attach_mad16_mpu, probe_mad16_mpu, unload_mad16_mpu},
+ {"MAD16", 0, SNDCARD_MAD16, "MAD16/Mozart (MSS)", attach_mad16, probe_mad16, unload_mad16},
+ {"MAD16MPU", 0, SNDCARD_MAD16_MPU, "MAD16/Mozart (MPU)", attach_mad16_mpu, probe_mad16_mpu, unload_mad16_mpu},
#endif
#ifdef CONFIG_CS4232
- {"CS4232", 0, SNDCARD_CS4232, "CS4232", attach_cs4232, probe_cs4232, unload_cs4232},
- {"CS4232MPU", 0, SNDCARD_CS4232_MPU, "CS4232 MIDI", attach_cs4232_mpu, probe_cs4232_mpu, unload_cs4232_mpu},
+ {"CS4232", 0, SNDCARD_CS4232, "CS4232", attach_cs4232, probe_cs4232, unload_cs4232},
+ {"CS4232MPU", 0, SNDCARD_CS4232_MPU, "CS4232 MIDI", attach_cs4232_mpu, probe_cs4232_mpu, unload_cs4232_mpu},
#endif
#if defined(CONFIG_YM3812) && !defined(CONFIG_YM3812_MODULE)
- {"OPL3", 0, SNDCARD_ADLIB, "OPL-2/OPL-3 FM", attach_adlib_card, probe_adlib, unload_adlib},
+ {"OPL3", 0, SNDCARD_ADLIB, "OPL-2/OPL-3 FM", attach_adlib_card, probe_adlib, unload_adlib},
#endif
#if defined(CONFIG_PAS) && !defined(CONFIG_PAS_MODULE)
- {"PAS16", 0, SNDCARD_PAS, "ProAudioSpectrum", attach_pas_card, probe_pas, unload_pas},
+ {"PAS16", 0, SNDCARD_PAS, "ProAudioSpectrum", attach_pas_card, probe_pas, unload_pas},
#endif
#if (defined(CONFIG_MPU401) || defined(CONFIG_MPU_EMU)) && defined(CONFIG_MIDI) && !defined(CONFIG_MPU401_MODULE)
- {"MPU401", 0, SNDCARD_MPU401,"Roland MPU-401", attach_mpu401, probe_mpu401, unload_mpu401},
+ {"MPU401", 0, SNDCARD_MPU401,"Roland MPU-401", attach_mpu401, probe_mpu401, unload_mpu401},
#endif
#if defined(CONFIG_UART401) && defined(CONFIG_MIDI) && !defined(CONFIG_UART401_MODULE)
{"UART401", 0, SNDCARD_UART401,"MPU-401 (UART)",
attach_uart401, probe_uart401, unload_uart401},
#endif
#if defined(CONFIG_MAUI) && !defined(CONFIG_MAUI_MODULE)
- {"MAUI", 0, SNDCARD_MAUI,"TB Maui", attach_maui, probe_maui, unload_maui},
+ {"MAUI", 0, SNDCARD_MAUI,"TB Maui", attach_maui, probe_maui, unload_maui},
#endif
#if defined(CONFIG_UART6850) && defined(CONFIG_MIDI) && !defined(CONFIG_UART6850_MODULE)
- {"MIDI6850", 0, SNDCARD_UART6850,"6860 UART Midi", attach_uart6850, probe_uart6850, unload_uart6850},
+ {"MIDI6850", 0, SNDCARD_UART6850,"6860 UART Midi", attach_uart6850, probe_uart6850, unload_uart6850},
#endif
#if defined(CONFIG_SBDSP) && !defined(CONFIG_SBDSP_MODULE)
- {"SBLAST", 0, SNDCARD_SB, "Sound Blaster", attach_sb_card, probe_sb, unload_sb},
- {"SBPNP", 6, SNDCARD_SBPNP, "Sound Blaster PnP", attach_sb_card, probe_sb, unload_sb},
+ {"SBLAST", 0, SNDCARD_SB, "Sound Blaster", attach_sb_card, probe_sb, unload_sb},
+ {"SBPNP", 6, SNDCARD_SBPNP, "Sound Blaster PnP", attach_sb_card, probe_sb, unload_sb},
-# ifdef CONFIG_MIDI
- {"SBMPU", 0, SNDCARD_SB16MIDI,"SB MPU-401", attach_sbmpu, probe_sbmpu, unload_sbmpu},
-# endif
+#ifdef CONFIG_MIDI
+ {"SBMPU", 0, SNDCARD_SB16MIDI,"SB MPU-401", attach_sbmpu, probe_sbmpu, unload_sbmpu},
+#endif
#endif
#ifdef CONFIG_SSCAPEHW
- {"SSCAPE", 0, SNDCARD_SSCAPE, "Ensoniq SoundScape", attach_sscape, probe_sscape, unload_sscape},
- {"SSCAPEMSS", 0, SNDCARD_SSCAPE_MSS, "MS Sound System (SoundScape)", attach_ss_ms_sound, probe_ss_ms_sound, unload_ss_ms_sound},
+ {"SSCAPE", 0, SNDCARD_SSCAPE, "Ensoniq SoundScape", attach_sscape, probe_sscape, unload_sscape},
+ {"SSCAPEMSS", 0, SNDCARD_SSCAPE_MSS, "MS Sound System (SoundScape)", attach_ss_ms_sound, probe_ss_ms_sound, unload_ss_ms_sound},
#endif
#ifdef CONFIG_OPL3SA1
#endif
#if defined (CONFIG_TRIX) && !defined(CONFIG_TRIX_MODULE)
- {"TRXPRO", 0, SNDCARD_TRXPRO, "MediaTrix AudioTrix Pro", attach_trix_wss, probe_trix_wss, unload_trix_wss},
- {"TRXPROSB", 0, SNDCARD_TRXPRO_SB, "AudioTrix (SB mode)", attach_trix_sb, probe_trix_sb, unload_trix_sb},
- {"TRXPROMPU", 0, SNDCARD_TRXPRO_MPU, "AudioTrix MIDI", attach_trix_mpu, probe_trix_mpu, unload_trix_mpu},
+ {"TRXPRO", 0, SNDCARD_TRXPRO, "MediaTrix AudioTrix Pro", attach_trix_wss, probe_trix_wss, unload_trix_wss},
+ {"TRXPROSB", 0, SNDCARD_TRXPRO_SB, "AudioTrix (SB mode)", attach_trix_sb, probe_trix_sb, unload_trix_sb},
+ {"TRXPROMPU", 0, SNDCARD_TRXPRO_MPU, "AudioTrix MIDI", attach_trix_mpu, probe_trix_mpu, unload_trix_mpu},
#endif
#if defined(CONFIG_SOFTOSS) && !defined(CONFIG_SOFTOSS_MODULE)
- {"SOFTSYN", 0, SNDCARD_SOFTOSS, "SoftOSS Virtual Wave Table",
+ {"SOFTSYN", 0, SNDCARD_SOFTOSS, "SoftOSS Virtual Wave Table",
attach_softsyn_card, probe_softsyn, unload_softsyn},
#endif
#if defined(CONFIG_VMIDI) && defined(CONFIG_MIDI) && !defined(CONFIG_VMIDI_MODULE)
- {"VMIDI", 0, SNDCARD_VMIDI,"Loopback MIDI Device", attach_v_midi, probe_v_midi, unload_v_midi},
+ {"VMIDI", 0, SNDCARD_VMIDI,"Loopback MIDI Device", attach_v_midi, probe_v_midi, unload_v_midi},
#endif
+ {NULL, 0, 0, "*?*", NULL, NULL, NULL}
+};
-
-
-
- {NULL, 0, 0, "*?*", NULL, NULL, NULL}
- };
-
- int num_sound_drivers =
- sizeof(sound_drivers) / sizeof (struct driver_info);
+int num_sound_drivers = sizeof(sound_drivers) / sizeof (struct driver_info);
#ifndef FULL_SOUND
+
/*
* List of devices actually configured in the system.
*
* Note! The detection order is significant. Don't change it.
*/
- struct card_info snd_installed_cards[] = {
+struct card_info snd_installed_cards[] =
+{
#ifdef CONFIG_PSS
- {SNDCARD_PSS, {PSS_BASE, 0, -1, -1}, SND_DEFAULT_ENABLE},
-# ifdef PSS_MPU_BASE
- {SNDCARD_PSS_MPU, {PSS_MPU_BASE, PSS_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
-# endif
-# ifdef PSS_MSS_BASE
- {SNDCARD_PSS_MSS, {PSS_MSS_BASE, PSS_MSS_IRQ, PSS_MSS_DMA, -1}, SND_DEFAULT_ENABLE},
-# endif
+ {SNDCARD_PSS, {PSS_BASE, 0, -1, -1}, SND_DEFAULT_ENABLE},
+#ifdef PSS_MPU_BASE
+ {SNDCARD_PSS_MPU, {PSS_MPU_BASE, PSS_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+#endif
+#ifdef PSS_MSS_BASE
+ {SNDCARD_PSS_MSS, {PSS_MSS_BASE, PSS_MSS_IRQ, PSS_MSS_DMA, -1}, SND_DEFAULT_ENABLE},
+#endif
#endif
#ifdef CONFIG_TRIX
#ifndef TRIX_DMA2
#define TRIX_DMA2 TRIX_DMA
#endif
- {SNDCARD_TRXPRO, {TRIX_BASE, TRIX_IRQ, TRIX_DMA, TRIX_DMA2}, SND_DEFAULT_ENABLE},
-# ifdef TRIX_SB_BASE
- {SNDCARD_TRXPRO_SB, {TRIX_SB_BASE, TRIX_SB_IRQ, TRIX_SB_DMA, -1}, SND_DEFAULT_ENABLE},
-# endif
-# ifdef TRIX_MPU_BASE
- {SNDCARD_TRXPRO_MPU, {TRIX_MPU_BASE, TRIX_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
-# endif
+ {SNDCARD_TRXPRO, {TRIX_BASE, TRIX_IRQ, TRIX_DMA, TRIX_DMA2}, SND_DEFAULT_ENABLE},
+#ifdef TRIX_SB_BASE
+ {SNDCARD_TRXPRO_SB, {TRIX_SB_BASE, TRIX_SB_IRQ, TRIX_SB_DMA, -1}, SND_DEFAULT_ENABLE},
+#endif
+#ifdef TRIX_MPU_BASE
+ {SNDCARD_TRXPRO_MPU, {TRIX_MPU_BASE, TRIX_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+#endif
#endif
#ifdef CONFIG_OPL3SA1
- {SNDCARD_OPL3SA1, {OPL3SA1_BASE, OPL3SA1_IRQ, OPL3SA1_DMA, OPL3SA1_DMA2}, SND_DEFAULT_ENABLE},
-# ifdef OPL3SA1_MPU_BASE
- {SNDCARD_OPL3SA1_MPU, {OPL3SA1_MPU_BASE, OPL3SA1_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
-# endif
+ {SNDCARD_OPL3SA1, {OPL3SA1_BASE, OPL3SA1_IRQ, OPL3SA1_DMA, OPL3SA1_DMA2}, SND_DEFAULT_ENABLE},
+#ifdef OPL3SA1_MPU_BASE
+ {SNDCARD_OPL3SA1_MPU, {OPL3SA1_MPU_BASE, OPL3SA1_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+#endif
#endif
#ifdef CONFIG_SOFTOSS
- {SNDCARD_SOFTOSS, {0, 0, -1, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_SOFTOSS, {0, 0, -1, -1}, SND_DEFAULT_ENABLE},
#endif
#ifdef CONFIG_SSCAPE
- {SNDCARD_SSCAPE, {SSCAPE_BASE, SSCAPE_IRQ, SSCAPE_DMA, -1}, SND_DEFAULT_ENABLE},
- {SNDCARD_SSCAPE_MSS, {SSCAPE_MSS_BASE, SSCAPE_MSS_IRQ, SSCAPE_DMA, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_SSCAPE, {SSCAPE_BASE, SSCAPE_IRQ, SSCAPE_DMA, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_SSCAPE_MSS, {SSCAPE_MSS_BASE, SSCAPE_MSS_IRQ, SSCAPE_DMA, -1}, SND_DEFAULT_ENABLE},
#endif
#ifdef CONFIG_MAD16
#ifndef MAD16_DMA2
#define MAD16_DMA2 MAD16_DMA
#endif
- {SNDCARD_MAD16, {MAD16_BASE, MAD16_IRQ, MAD16_DMA, MAD16_DMA2}, SND_DEFAULT_ENABLE},
-# ifdef MAD16_MPU_BASE
- {SNDCARD_MAD16_MPU, {MAD16_MPU_BASE, MAD16_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
-# endif
+ {SNDCARD_MAD16, {MAD16_BASE, MAD16_IRQ, MAD16_DMA, MAD16_DMA2}, SND_DEFAULT_ENABLE},
+#ifdef MAD16_MPU_BASE
+ {SNDCARD_MAD16_MPU, {MAD16_MPU_BASE, MAD16_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+#endif
#endif
#ifdef CONFIG_CS4232
#ifndef CS4232_DMA2
#define CS4232_DMA2 CS4232_DMA
#endif
-# ifdef CS4232_MPU_BASE
- {SNDCARD_CS4232_MPU, {CS4232_MPU_BASE, CS4232_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
-# endif
- {SNDCARD_CS4232, {CS4232_BASE, CS4232_IRQ, CS4232_DMA, CS4232_DMA2}, SND_DEFAULT_ENABLE},
+#ifdef CS4232_MPU_BASE
+ {SNDCARD_CS4232_MPU, {CS4232_MPU_BASE, CS4232_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+#endif
+ {SNDCARD_CS4232, {CS4232_BASE, CS4232_IRQ, CS4232_DMA, CS4232_DMA2}, SND_DEFAULT_ENABLE},
#endif
#ifdef CONFIG_MSS
-# ifndef MSS_DMA2
-# define MSS_DMA2 -1
-# endif
+#ifndef MSS_DMA2
+#define MSS_DMA2 -1
+#endif
-# ifdef DESKPROXL
- {SNDCARD_DESKPROXL, {MSS_BASE, MSS_IRQ, MSS_DMA, MSS_DMA2}, SND_DEFAULT_ENABLE},
-# else
- {SNDCARD_MSS, {MSS_BASE, MSS_IRQ, MSS_DMA, MSS_DMA2}, SND_DEFAULT_ENABLE},
-# endif
-# ifdef MSS2_BASE
- {SNDCARD_MSS, {MSS2_BASE, MSS2_IRQ, MSS2_DMA, MSS2_DMA2}, SND_DEFAULT_ENABLE},
-# endif
+#ifdef DESKPROXL
+ {SNDCARD_DESKPROXL, {MSS_BASE, MSS_IRQ, MSS_DMA, MSS_DMA2}, SND_DEFAULT_ENABLE},
+#else
+ {SNDCARD_MSS, {MSS_BASE, MSS_IRQ, MSS_DMA, MSS_DMA2}, SND_DEFAULT_ENABLE},
+#endif
+#ifdef MSS2_BASE
+ {SNDCARD_MSS, {MSS2_BASE, MSS2_IRQ, MSS2_DMA, MSS2_DMA2}, SND_DEFAULT_ENABLE},
+#endif
#endif
#ifdef CONFIG_PAS
- {SNDCARD_PAS, {PAS_BASE, PAS_IRQ, PAS_DMA, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_PAS, {PAS_BASE, PAS_IRQ, PAS_DMA, -1}, SND_DEFAULT_ENABLE},
#endif
#ifdef CONFIG_SB
-# ifndef SBC_DMA
-# define SBC_DMA 1
-# endif
-# ifndef SB_DMA2
-# define SB_DMA2 -1
-# endif
- {SNDCARD_SB, {SBC_BASE, SBC_IRQ, SBC_DMA, SB_DMA2}, SND_DEFAULT_ENABLE},
-# ifdef SB2_BASE
- {SNDCARD_SB, {SB2_BASE, SB2_IRQ, SB2_DMA, SB2_DMA2}, SND_DEFAULT_ENABLE},
-# endif
+#ifndef SBC_DMA
+#define SBC_DMA 1
+#endif
+#ifndef SB_DMA2
+#define SB_DMA2 -1
+#endif
+ {SNDCARD_SB, {SBC_BASE, SBC_IRQ, SBC_DMA, SB_DMA2}, SND_DEFAULT_ENABLE},
+#ifdef SB2_BASE
+ {SNDCARD_SB, {SB2_BASE, SB2_IRQ, SB2_DMA, SB2_DMA2}, SND_DEFAULT_ENABLE},
+#endif
#endif
#if defined(CONFIG_MAUI)
- {SNDCARD_MAUI, {MAUI_BASE, MAUI_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_MAUI, {MAUI_BASE, MAUI_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
#endif
#if defined(CONFIG_MPU401) && defined(CONFIG_MIDI)
- {SNDCARD_MPU401, {MPU_BASE, MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_MPU401, {MPU_BASE, MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
#ifdef MPU2_BASE
- {SNDCARD_MPU401, {MPU2_BASE, MPU2_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_MPU401, {MPU2_BASE, MPU2_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
#endif
#ifdef MPU3_BASE
- {SNDCARD_MPU401, {MPU3_BASE, MPU2_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+	{SNDCARD_MPU401, {MPU3_BASE, MPU3_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
#endif
#endif
#if defined(CONFIG_UART6850) && defined(CONFIG_MIDI)
- {SNDCARD_UART6850, {U6850_BASE, U6850_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_UART6850, {U6850_BASE, U6850_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
#endif
#if defined(CONFIG_SB)
#if defined(CONFIG_MIDI) && defined(SB_MPU_BASE)
- {SNDCARD_SB16MIDI,{SB_MPU_BASE, SB_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_SB16MIDI,{SB_MPU_BASE, SB_MPU_IRQ, 0, -1}, SND_DEFAULT_ENABLE},
#endif
#endif
#define GUS_DMA2 GUS_DMA
#endif
#ifdef CONFIG_GUS16
- {SNDCARD_GUS16, {GUS16_BASE, GUS16_IRQ, GUS16_DMA, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_GUS16, {GUS16_BASE, GUS16_IRQ, GUS16_DMA, -1}, SND_DEFAULT_ENABLE},
#endif
- {SNDCARD_GUS, {GUS_BASE, GUS_IRQ, GUS_DMA, GUS_DMA2}, SND_DEFAULT_ENABLE},
+ {SNDCARD_GUS, {GUS_BASE, GUS_IRQ, GUS_DMA, GUS_DMA2}, SND_DEFAULT_ENABLE},
#endif
#if defined(CONFIG_YM3812)
- {SNDCARD_ADLIB, {FM_MONO, 0, 0, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_ADLIB, {FM_MONO, 0, 0, -1}, SND_DEFAULT_ENABLE},
#endif
#if defined(CONFIG_VMIDI) && defined(CONFIG_MIDI)
- {SNDCARD_VMIDI, {0, 0, 0, -1}, SND_DEFAULT_ENABLE},
+ {SNDCARD_VMIDI, {0, 0, 0, -1}, SND_DEFAULT_ENABLE},
#endif
- {0, {0}, 0}
- };
+ {0, {0}, 0}
+};
- int num_sound_cards =
- sizeof(snd_installed_cards) / sizeof (struct card_info);
- static int max_sound_cards =
- sizeof(snd_installed_cards) / sizeof (struct card_info);
+int num_sound_cards = sizeof(snd_installed_cards) / sizeof (struct card_info);
+static int max_sound_cards = sizeof(snd_installed_cards) / sizeof (struct card_info);
#else
- int num_sound_cards = 0;
- struct card_info snd_installed_cards[20] = {{0}};
- static int max_sound_cards = 20;
+int num_sound_cards = 0;
+struct card_info snd_installed_cards[20] = {{0}};
+static int max_sound_cards = 20;
#endif
#if defined(MODULE) || (!defined(linux) && !defined(_AIX))
- int trace_init = 0;
-# else
- int trace_init = 1;
-# endif
+int trace_init = 0;
+#else
+int trace_init = 1;
+#endif
#else
- extern struct audio_operations * audio_devs[MAX_AUDIO_DEV]; extern int num_audiodevs;
- extern struct mixer_operations * mixer_devs[MAX_MIXER_DEV]; extern int num_mixers;
- extern struct synth_operations * synth_devs[MAX_SYNTH_DEV+MAX_MIDI_DEV]; extern int num_synths;
- extern struct midi_operations * midi_devs[MAX_MIDI_DEV]; extern int num_midis;
- extern struct sound_timer_operations * sound_timer_devs[MAX_TIMER_DEV]; extern int num_sound_timers;
-
- extern struct driver_info sound_drivers[];
- extern int num_sound_drivers;
- extern struct card_info snd_installed_cards[];
- extern int num_sound_cards;
-
- extern int trace_init;
+extern struct audio_operations * audio_devs[MAX_AUDIO_DEV]; extern int num_audiodevs;
+extern struct mixer_operations * mixer_devs[MAX_MIXER_DEV]; extern int num_mixers;
+extern struct synth_operations * synth_devs[MAX_SYNTH_DEV+MAX_MIDI_DEV]; extern int num_synths;
+extern struct midi_operations * midi_devs[MAX_MIDI_DEV]; extern int num_midis;
+extern struct sound_timer_operations * sound_timer_devs[MAX_TIMER_DEV]; extern int num_sound_timers;
+
+extern struct driver_info sound_drivers[];
+extern int num_sound_drivers;
+extern struct card_info snd_installed_cards[];
+extern int num_sound_cards;
+
+extern int trace_init;
#endif /* _DEV_TABLE_C_ */
-
void sndtable_init(void);
int sndtable_get_cardcount (void);
struct address_info *sound_getconf(int card_type);
int sndtable_init_card (int unit, struct address_info *hw_config);
int sndtable_start_card (int unit, struct address_info *hw_config);
void sound_timer_init (struct sound_lowlev_timer *t, char *name);
-int sound_start_dma ( int dev, struct dma_buffparms *dmap, int chan,
- unsigned long physaddr,
- int count, int dma_mode, int autoinit);
+int sound_start_dma(int dev, struct dma_buffparms *dmap, int chan,
+ unsigned long physaddr, int count, int dma_mode, int autoinit);
void sound_dma_intr (int dev, struct dma_buffparms *dmap, int chan);
#define AUDIO_DRIVER_VERSION 2
#define MIXER_DRIVER_VERSION 2
-int sound_install_audiodrv(int vers,
- char *name,
- struct audio_driver *driver,
- int driver_size,
- int flags,
- unsigned int format_mask,
- void *devc,
- int dma1,
- int dma2);
-int sound_install_mixer(int vers,
- char *name,
- struct mixer_operations *driver,
- int driver_size,
- void *devc);
+int sound_install_audiodrv(int vers, char *name, struct audio_driver *driver,
+ int driver_size, int flags, unsigned int format_mask,
+ void *devc, int dma1, int dma2);
+int sound_install_mixer(int vers, char *name, struct mixer_operations *driver,
+ int driver_size, void *devc);
void sound_unload_audiodev(int dev);
void sound_unload_mixerdev(int dev);
#if defined(CONFIG_AUDIO) || defined(CONFIG_GUSHW)
-static struct wait_queue *in_sleeper[MAX_AUDIO_DEV] =
-{NULL};
-static volatile struct snd_wait in_sleep_flag[MAX_AUDIO_DEV] =
-{
- {0}};
-static struct wait_queue *out_sleeper[MAX_AUDIO_DEV] =
-{NULL};
-static volatile struct snd_wait out_sleep_flag[MAX_AUDIO_DEV] =
-{
- {0}};
+static struct wait_queue *in_sleeper[MAX_AUDIO_DEV] = {
+ NULL
+};
+
+static volatile struct snd_wait in_sleep_flag[MAX_AUDIO_DEV] = {
+ {0}
+};
+
+static struct wait_queue *out_sleeper[MAX_AUDIO_DEV] = {
+ NULL
+};
-static int ndmaps = 0;
+static volatile struct snd_wait out_sleep_flag[MAX_AUDIO_DEV] = {
+ {0}
+};
+
+static int ndmaps = 0;
#define MAX_DMAP (MAX_AUDIO_DEV*2)
-static struct dma_buffparms dmaps[MAX_DMAP] =
-{
- {0}};
+static struct dma_buffparms dmaps[MAX_DMAP] = {
+ {0}
+};
-static void dma_reset_output(int dev);
-static void dma_reset_input(int dev);
-static int local_start_dma(int dev, unsigned long physaddr, int count, int dma_mode);
+static void dma_reset_output(int dev);
+static void dma_reset_input(int dev);
+static int local_start_dma(int dev, unsigned long physaddr, int count, int dma_mode);
-static void
-dma_init_buffers(int dev, struct dma_buffparms *dmap)
+static void dma_init_buffers(int dev, struct dma_buffparms *dmap)
{
-
dmap->qlen = dmap->qhead = dmap->qtail = dmap->user_counter = 0;
dmap->byte_counter = 0;
dmap->max_byte_counter = 8000 * 60 * 60;
dmap->flags = DMA_BUSY; /* Other flags off */
}
-static int
-open_dmap(int dev, int mode, struct dma_buffparms *dmap, int chan)
+static int open_dmap(int dev, int mode, struct dma_buffparms *dmap, int chan)
{
+ int err;
+
if (dmap->flags & DMA_BUSY)
return -EBUSY;
+ if ((err = sound_alloc_dmap(dev, dmap, chan)) < 0)
+ return err;
+ if (dmap->raw_buf == NULL)
{
- int err;
-
- if ((err = sound_alloc_dmap(dev, dmap, chan)) < 0)
- return err;
+ printk(KERN_WARNING "Sound: DMA buffers not available\n");
+ return -ENOSPC; /* Memory allocation failed during boot */
}
-
- if (dmap->raw_buf == NULL)
- {
- printk("Sound: DMA buffers not available\n");
- return -ENOSPC; /* Memory allocation failed during boot */
- }
if (sound_open_dma(chan, audio_devs[dev]->name))
- {
- printk("Unable to grab(2) DMA%d for the audio driver\n", chan);
- return -EBUSY;
- }
+ {
+ printk(KERN_WARNING "Unable to grab(2) DMA%d for the audio driver\n", chan);
+ return -EBUSY;
+ }
dma_init_buffers(dev, dmap);
dmap->open_mode = mode;
dmap->subdivision = dmap->underrun_count = 0;
if (dmap->dma_mode & DMODE_OUTPUT)
- {
- out_sleep_flag[dev].opts = WK_NONE;
- } else
- {
- in_sleep_flag[dev].opts = WK_NONE;
- }
-
+ out_sleep_flag[dev].opts = WK_NONE;
+ else
+ in_sleep_flag[dev].opts = WK_NONE;
return 0;
}
-static void
-close_dmap(int dev, struct dma_buffparms *dmap, int chan)
+static void close_dmap(int dev, struct dma_buffparms *dmap, int chan)
{
sound_close_dma(chan);
if (dmap->flags & DMA_BUSY)
dmap->dma_mode = DMODE_NONE;
dmap->flags &= ~DMA_BUSY;
-
disable_dma(dmap->dma);
}
return r;
}
-static void
-check_driver(struct audio_driver *d)
+static void check_driver(struct audio_driver *d)
{
if (d->set_speed == NULL)
d->set_speed = default_set_speed;
d->set_channels = default_set_channels;
}
-int
-DMAbuf_open(int dev, int mode)
+int DMAbuf_open(int dev, int mode)
{
- int retval;
+ int retval;
struct dma_buffparms *dmap_in = NULL;
struct dma_buffparms *dmap_out = NULL;
if (dev >= num_audiodevs || audio_devs[dev] == NULL)
- {
- return -ENXIO;
- }
+ return -ENXIO;
+
if (!audio_devs[dev])
- {
return -ENXIO;
- }
+
if (!(audio_devs[dev]->flags & DMA_DUPLEX))
- {
- audio_devs[dev]->dmap_in = audio_devs[dev]->dmap_out;
- audio_devs[dev]->dmap_in->dma = audio_devs[dev]->dmap_out->dma;
- }
+ {
+ audio_devs[dev]->dmap_in = audio_devs[dev]->dmap_out;
+ audio_devs[dev]->dmap_in->dma = audio_devs[dev]->dmap_out->dma;
+ }
check_driver(audio_devs[dev]->d);
if ((retval = audio_devs[dev]->d->open(dev, mode)) < 0)
audio_devs[dev]->flags &= ~DMA_DUPLEX;
if (mode & OPEN_WRITE)
- {
- if ((retval = open_dmap(dev, mode, dmap_out, audio_devs[dev]->dmap_out->dma)) < 0)
- {
- audio_devs[dev]->d->close(dev);
- return retval;
- }
- }
+ {
+ if ((retval = open_dmap(dev, mode, dmap_out, audio_devs[dev]->dmap_out->dma)) < 0)
+ {
+ audio_devs[dev]->d->close(dev);
+ return retval;
+ }
+ }
audio_devs[dev]->enable_bits = mode;
if (mode == OPEN_READ || (mode != OPEN_WRITE &&
- audio_devs[dev]->flags & DMA_DUPLEX))
- {
- if ((retval = open_dmap(dev, mode, dmap_in, audio_devs[dev]->dmap_in->dma)) < 0)
- {
- audio_devs[dev]->d->close(dev);
-
- if (mode & OPEN_WRITE)
- {
- close_dmap(dev, dmap_out, audio_devs[dev]->dmap_out->dma);
- }
- return retval;
- }
- }
+ audio_devs[dev]->flags & DMA_DUPLEX))
+ {
+ if ((retval = open_dmap(dev, mode, dmap_in, audio_devs[dev]->dmap_in->dma)) < 0)
+ {
+ audio_devs[dev]->d->close(dev);
+ if (mode & OPEN_WRITE)
+ {
+ close_dmap(dev, dmap_out, audio_devs[dev]->dmap_out->dma);
+ }
+ return retval;
+ }
+ }
audio_devs[dev]->open_mode = mode;
audio_devs[dev]->go = 1;
audio_devs[dev]->d->set_speed(dev, DSP_DEFAULT_SPEED);
if (audio_devs[dev]->dmap_out->dma_mode == DMODE_OUTPUT)
- {
- memset(audio_devs[dev]->dmap_out->raw_buf,
- audio_devs[dev]->dmap_out->neutral_byte,
- audio_devs[dev]->dmap_out->bytes_in_use);
- }
+ {
+ memset(audio_devs[dev]->dmap_out->raw_buf,
+ audio_devs[dev]->dmap_out->neutral_byte,
+ audio_devs[dev]->dmap_out->bytes_in_use);
+ }
return 0;
}
-void
-DMAbuf_reset(int dev)
+void DMAbuf_reset(int dev)
{
if (audio_devs[dev]->open_mode & OPEN_WRITE)
dma_reset_output(dev);
dma_reset_input(dev);
}
-static void
-dma_reset_output(int dev)
+static void dma_reset_output(int dev)
{
- unsigned long flags;
- int tmout;
-
+ unsigned long flags;
+ int tmout;
struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
-
if (!(dmap->flags & DMA_STARTED)) /* DMA is not active */
return;
-/*
- * First wait until the current fragment has been played completely
- */
+ /*
+ * First wait until the current fragment has been played completely
+ */
save_flags(flags);
cli();
- tmout =
- (dmap->fragment_size * HZ) / dmap->data_rate;
+ tmout = (dmap->fragment_size * HZ) / dmap->data_rate;
tmout += HZ / 5; /* Some safety distance */
-
if (tmout < (HZ / 2))
tmout = HZ / 2;
if (tmout > 20 * HZ)
if (!signal_pending(current)
&& audio_devs[dev]->dmap_out->qlen
&& audio_devs[dev]->dmap_out->underrun_count == 0)
- {
-
- {
- unsigned long tlimit;
-
- if (tmout)
- current->timeout = tlimit = jiffies + (tmout);
- else
- tlimit = (unsigned long) -1;
- out_sleep_flag[dev].opts = WK_SLEEP;
- interruptible_sleep_on(&out_sleeper[dev]);
- if (!(out_sleep_flag[dev].opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- out_sleep_flag[dev].opts |= WK_TIMEOUT;
- }
- out_sleep_flag[dev].opts &= ~WK_SLEEP;
- };
- }
+ {
+ unsigned long tlimit;
+
+ if (tmout)
+ current->timeout = tlimit = jiffies + (tmout);
+ else
+ tlimit = (unsigned long) -1;
+ out_sleep_flag[dev].opts = WK_SLEEP;
+ interruptible_sleep_on(&out_sleeper[dev]);
+ if (!(out_sleep_flag[dev].opts & WK_WAKEUP))
+ {
+ if (jiffies >= tlimit)
+ out_sleep_flag[dev].opts |= WK_TIMEOUT;
+ }
+ out_sleep_flag[dev].opts &= ~WK_SLEEP;
+ }
audio_devs[dev]->dmap_out->flags &= ~(DMA_SYNCING | DMA_ACTIVE);
-/*
- * Finally shut the device off
- */
+ /*
+ * Finally shut the device off
+ */
if (!(audio_devs[dev]->flags & DMA_DUPLEX) ||
- !audio_devs[dev]->d->halt_output)
+ !audio_devs[dev]->d->halt_output)
audio_devs[dev]->d->halt_io(dev);
else
audio_devs[dev]->d->halt_output(dev);
dmap->qlen = dmap->qhead = dmap->qtail = dmap->user_counter = 0;
}
-static void
-dma_reset_input(int dev)
+static void dma_reset_input(int dev)
{
- unsigned long flags;
+ unsigned long flags;
struct dma_buffparms *dmap = audio_devs[dev]->dmap_in;
save_flags(flags);
cli();
if (!(audio_devs[dev]->flags & DMA_DUPLEX) ||
- !audio_devs[dev]->d->halt_input)
+ !audio_devs[dev]->d->halt_input)
audio_devs[dev]->d->halt_io(dev);
else
audio_devs[dev]->d->halt_input(dev);
reorganize_buffers(dev, audio_devs[dev]->dmap_in, 1);
}
-void
-DMAbuf_launch_output(int dev, struct dma_buffparms *dmap)
+void DMAbuf_launch_output(int dev, struct dma_buffparms *dmap)
{
if (!((audio_devs[dev]->enable_bits * audio_devs[dev]->go) & PCM_ENABLE_OUTPUT))
return; /* Don't start DMA yet */
-
dmap->dma_mode = DMODE_OUTPUT;
if (!(dmap->flags & DMA_ACTIVE) || !(audio_devs[dev]->flags & DMA_AUTOMODE) || dmap->flags & DMA_NODMA)
- {
- if (!(dmap->flags & DMA_STARTED))
- {
- reorganize_buffers(dev, dmap, 0);
-
- if (audio_devs[dev]->d->prepare_for_output(dev,
- dmap->fragment_size, dmap->nbufs))
- return;
-
- if (!(dmap->flags & DMA_NODMA))
- {
- local_start_dma(dev, dmap->raw_buf_phys, dmap->bytes_in_use,
- DMA_MODE_WRITE);
- }
- dmap->flags |= DMA_STARTED;
- }
- if (dmap->counts[dmap->qhead] == 0)
- dmap->counts[dmap->qhead] = dmap->fragment_size;
-
- dmap->dma_mode = DMODE_OUTPUT;
- audio_devs[dev]->d->output_block(dev, dmap->raw_buf_phys +
- dmap->qhead * dmap->fragment_size,
- dmap->counts[dmap->qhead], 1);
- if (audio_devs[dev]->d->trigger)
- audio_devs[dev]->d->trigger(dev,
- audio_devs[dev]->enable_bits * audio_devs[dev]->go);
- }
+ {
+ if (!(dmap->flags & DMA_STARTED))
+ {
+ reorganize_buffers(dev, dmap, 0);
+ if (audio_devs[dev]->d->prepare_for_output(dev,
+ dmap->fragment_size, dmap->nbufs))
+ return;
+ if (!(dmap->flags & DMA_NODMA))
+				local_start_dma(dev, dmap->raw_buf_phys, dmap->bytes_in_use, DMA_MODE_WRITE);
+ dmap->flags |= DMA_STARTED;
+ }
+ if (dmap->counts[dmap->qhead] == 0)
+ dmap->counts[dmap->qhead] = dmap->fragment_size;
+
+ dmap->dma_mode = DMODE_OUTPUT;
+ audio_devs[dev]->d->output_block(dev, dmap->raw_buf_phys + dmap->qhead * dmap->fragment_size,
+ dmap->counts[dmap->qhead], 1);
+ if (audio_devs[dev]->d->trigger)
+			audio_devs[dev]->d->trigger(dev, audio_devs[dev]->enable_bits * audio_devs[dev]->go);
+ }
dmap->flags |= DMA_ACTIVE;
}
-int
-DMAbuf_sync(int dev)
+int DMAbuf_sync(int dev)
{
- unsigned long flags;
- int tmout, n = 0;
+ unsigned long flags;
+ int tmout, n = 0;
if (!audio_devs[dev]->go && (!audio_devs[dev]->enable_bits & PCM_ENABLE_OUTPUT))
return 0;
if (audio_devs[dev]->dmap_out->dma_mode == DMODE_OUTPUT)
- {
-
- struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
-
- save_flags(flags);
- cli();
-
- tmout =
- (dmap->fragment_size * HZ) / dmap->data_rate;
-
- tmout += HZ / 5; /* Some safety distance */
-
- if (tmout < (HZ / 2))
- tmout = HZ / 2;
- if (tmout > 20 * HZ)
- tmout = 20 * HZ;
-
- ;
- if (dmap->qlen > 0)
- if (!(dmap->flags & DMA_ACTIVE))
- DMAbuf_launch_output(dev, dmap);
- ;
-
- audio_devs[dev]->dmap_out->flags |= DMA_SYNCING;
-
- audio_devs[dev]->dmap_out->underrun_count = 0;
- while (!signal_pending(current)
- && n++ <= audio_devs[dev]->dmap_out->nbufs
- && audio_devs[dev]->dmap_out->qlen
- && audio_devs[dev]->dmap_out->underrun_count == 0)
- {
-
- {
- unsigned long tlimit;
-
- if (tmout)
- current->timeout = tlimit = jiffies + (tmout);
- else
- tlimit = (unsigned long) -1;
- out_sleep_flag[dev].opts = WK_SLEEP;
- interruptible_sleep_on(&out_sleeper[dev]);
- if (!(out_sleep_flag[dev].opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- out_sleep_flag[dev].opts |= WK_TIMEOUT;
- }
- out_sleep_flag[dev].opts &= ~WK_SLEEP;
- };
- if ((out_sleep_flag[dev].opts & WK_TIMEOUT))
- {
- audio_devs[dev]->dmap_out->flags &= ~DMA_SYNCING;
- restore_flags(flags);
- return audio_devs[dev]->dmap_out->qlen;
- }
- }
- audio_devs[dev]->dmap_out->flags &= ~(DMA_SYNCING | DMA_ACTIVE);
- restore_flags(flags);
- /*
- * Some devices such as GUS have huge amount of on board RAM for the
- * audio data. We have to wait until the device has finished playing.
- */
-
- save_flags(flags);
- cli();
- if (audio_devs[dev]->d->local_qlen) /* Device has hidden buffers */
- {
- while (!signal_pending(current)
- && audio_devs[dev]->d->local_qlen(dev))
- {
-
- {
- unsigned long tlimit;
-
- if (tmout)
- current->timeout = tlimit = jiffies + (tmout);
- else
- tlimit = (unsigned long) -1;
- out_sleep_flag[dev].opts = WK_SLEEP;
- interruptible_sleep_on(&out_sleeper[dev]);
- if (!(out_sleep_flag[dev].opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- out_sleep_flag[dev].opts |= WK_TIMEOUT;
- }
- out_sleep_flag[dev].opts &= ~WK_SLEEP;
- };
- }
- }
- restore_flags(flags);
- }
+ {
+ struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
+ save_flags(flags);
+ cli();
+
+ tmout = (dmap->fragment_size * HZ) / dmap->data_rate;
+ tmout += HZ / 5; /* Some safety distance */
+
+ if (tmout < (HZ / 2))
+ tmout = HZ / 2;
+ if (tmout > 20 * HZ)
+ tmout = 20 * HZ;
+
+ if (dmap->qlen > 0)
+ if (!(dmap->flags & DMA_ACTIVE))
+ DMAbuf_launch_output(dev, dmap);
+
+ audio_devs[dev]->dmap_out->flags |= DMA_SYNCING;
+ audio_devs[dev]->dmap_out->underrun_count = 0;
+		while (!signal_pending(current) && n++ <= audio_devs[dev]->dmap_out->nbufs
+		       && audio_devs[dev]->dmap_out->qlen
+		       && audio_devs[dev]->dmap_out->underrun_count == 0)
+ {
+ unsigned long tlimit;
+
+ if (tmout)
+ current->timeout = tlimit = jiffies + (tmout);
+ else
+ tlimit = (unsigned long) -1;
+ out_sleep_flag[dev].opts = WK_SLEEP;
+ interruptible_sleep_on(&out_sleeper[dev]);
+ if (!(out_sleep_flag[dev].opts & WK_WAKEUP))
+ {
+ if (jiffies >= tlimit)
+ out_sleep_flag[dev].opts |= WK_TIMEOUT;
+ }
+ out_sleep_flag[dev].opts &= ~WK_SLEEP;
+
+ if ((out_sleep_flag[dev].opts & WK_TIMEOUT))
+ {
+ audio_devs[dev]->dmap_out->flags &= ~DMA_SYNCING;
+ restore_flags(flags);
+ return audio_devs[dev]->dmap_out->qlen;
+ }
+ }
+ audio_devs[dev]->dmap_out->flags &= ~(DMA_SYNCING | DMA_ACTIVE);
+ restore_flags(flags);
+
+ /*
+		 * Some devices such as GUS have a huge amount of on-board RAM for the
+ * audio data. We have to wait until the device has finished playing.
+ */
+
+ save_flags(flags);
+ cli();
+ if (audio_devs[dev]->d->local_qlen) /* Device has hidden buffers */
+ {
+ while (!signal_pending(current) && audio_devs[dev]->d->local_qlen(dev))
+ {
+ unsigned long tlimit;
+
+ if (tmout)
+ current->timeout = tlimit = jiffies + (tmout);
+ else
+ tlimit = (unsigned long) -1;
+ out_sleep_flag[dev].opts = WK_SLEEP;
+ interruptible_sleep_on(&out_sleeper[dev]);
+ if (!(out_sleep_flag[dev].opts & WK_WAKEUP))
+				{
+					if (jiffies >= tlimit)
+						out_sleep_flag[dev].opts |= WK_TIMEOUT;
+				}
+ out_sleep_flag[dev].opts &= ~WK_SLEEP;
+ }
+ }
+ restore_flags(flags);
+ }
audio_devs[dev]->dmap_out->dma_mode = DMODE_NONE;
return audio_devs[dev]->dmap_out->qlen;
}
-int
-DMAbuf_release(int dev, int mode)
+int DMAbuf_release(int dev, int mode)
{
unsigned long flags;
if (!(audio_devs[dev]->dmap_in->mapping_flags & DMA_MAP_MAPPED))
if (!signal_pending(current)
&& (audio_devs[dev]->dmap_out->dma_mode == DMODE_OUTPUT))
- {
- DMAbuf_sync(dev);
- }
+ {
+ DMAbuf_sync(dev);
+ }
if (audio_devs[dev]->dmap_out->dma_mode == DMODE_OUTPUT)
- {
- memset(audio_devs[dev]->dmap_out->raw_buf,
- audio_devs[dev]->dmap_out->neutral_byte,
- audio_devs[dev]->dmap_out->bytes_in_use);
- }
+ {
+		memset(audio_devs[dev]->dmap_out->raw_buf,
+		       audio_devs[dev]->dmap_out->neutral_byte,
+		       audio_devs[dev]->dmap_out->bytes_in_use);
+ }
save_flags(flags);
cli();
if (audio_devs[dev]->open_mode == OPEN_READ ||
(audio_devs[dev]->open_mode != OPEN_WRITE &&
audio_devs[dev]->flags & DMA_DUPLEX))
+ {
close_dmap(dev, audio_devs[dev]->dmap_in, audio_devs[dev]->dmap_in->dma);
+ }
audio_devs[dev]->open_mode = 0;
-
restore_flags(flags);
-
return 0;
}
-int
-DMAbuf_activate_recording(int dev, struct dma_buffparms *dmap)
+int DMAbuf_activate_recording(int dev, struct dma_buffparms *dmap)
{
if (!(audio_devs[dev]->open_mode & OPEN_READ))
return 0;
return 0;
if (dmap->dma_mode == DMODE_OUTPUT) /* Direction change */
- {
- DMAbuf_sync(dev);
- DMAbuf_reset(dev);
- dmap->dma_mode = DMODE_NONE;
- }
+ {
+ DMAbuf_sync(dev);
+ DMAbuf_reset(dev);
+ dmap->dma_mode = DMODE_NONE;
+ }
if (!dmap->dma_mode)
- {
- int err;
-
- reorganize_buffers(dev, dmap, 1);
- if ((err = audio_devs[dev]->d->prepare_for_input(dev,
- dmap->fragment_size, dmap->nbufs)) < 0)
- {
- return err;
- }
- dmap->dma_mode = DMODE_INPUT;
- }
+ {
+ int err;
+
+ reorganize_buffers(dev, dmap, 1);
+ if ((err = audio_devs[dev]->d->prepare_for_input(dev,
+ dmap->fragment_size, dmap->nbufs)) < 0)
+ return err;
+ dmap->dma_mode = DMODE_INPUT;
+ }
if (!(dmap->flags & DMA_ACTIVE))
- {
- if (dmap->needs_reorg)
- reorganize_buffers(dev, dmap, 0);
- local_start_dma(dev, dmap->raw_buf_phys, dmap->bytes_in_use,
- DMA_MODE_READ);
- audio_devs[dev]->d->start_input(dev, dmap->raw_buf_phys +
- dmap->qtail * dmap->fragment_size,
- dmap->fragment_size, 0);
- dmap->flags |= DMA_ACTIVE;
- if (audio_devs[dev]->d->trigger)
- audio_devs[dev]->d->trigger(dev,
- audio_devs[dev]->enable_bits * audio_devs[dev]->go);
- }
+ {
+ if (dmap->needs_reorg)
+ reorganize_buffers(dev, dmap, 0);
+ local_start_dma(dev, dmap->raw_buf_phys, dmap->bytes_in_use, DMA_MODE_READ);
+ audio_devs[dev]->d->start_input(dev, dmap->raw_buf_phys +
+ dmap->qtail * dmap->fragment_size,
+ dmap->fragment_size, 0);
+ dmap->flags |= DMA_ACTIVE;
+ if (audio_devs[dev]->d->trigger)
+ audio_devs[dev]->d->trigger(dev, audio_devs[dev]->enable_bits * audio_devs[dev]->go);
+ }
return 0;
}
-int
-DMAbuf_getrdbuffer(int dev, char **buf, int *len, int dontblock)
+int DMAbuf_getrdbuffer(int dev, char **buf, int *len, int dontblock)
{
- unsigned long flags;
- int err = 0, n = 0;
+ unsigned long flags;
+ int err = 0, n = 0;
struct dma_buffparms *dmap = audio_devs[dev]->dmap_in;
if (!(audio_devs[dev]->open_mode & OPEN_READ))
save_flags(flags);
cli();
if (audio_devs[dev]->dmap_in->mapping_flags & DMA_MAP_MAPPED)
- {
- printk("Sound: Can't read from mmapped device (1)\n");
+ {
+/* printk(KERN_WARNING "Sound: Can't read from mmapped device (1)\n");*/
restore_flags(flags);
return -EINVAL;
- } else
- while (dmap->qlen <= 0 && n++ < 10)
- {
- int tmout;
-
- if (!(audio_devs[dev]->enable_bits & PCM_ENABLE_INPUT) ||
- !audio_devs[dev]->go)
- {
- restore_flags(flags);
- return -EAGAIN;
- }
- if ((err = DMAbuf_activate_recording(dev, dmap)) < 0)
- {
- restore_flags(flags);
- return err;
- }
- /* Wait for the next block */
-
- if (dontblock)
- {
- restore_flags(flags);
- return -EAGAIN;
- }
- if (!audio_devs[dev]->go)
- tmout = 0;
- else
- {
- tmout =
- (dmap->fragment_size * HZ) / dmap->data_rate;
-
- tmout += HZ / 5; /* Some safety distance */
-
- if (tmout < (HZ / 2))
- tmout = HZ / 2;
- if (tmout > 20 * HZ)
- tmout = 20 * HZ;
- }
-
-
- {
- unsigned long tlimit;
-
- if (tmout)
- current->timeout = tlimit = jiffies + (tmout);
- else
- tlimit = (unsigned long) -1;
- in_sleep_flag[dev].opts = WK_SLEEP;
- interruptible_sleep_on(&in_sleeper[dev]);
- if (!(in_sleep_flag[dev].opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- in_sleep_flag[dev].opts |= WK_TIMEOUT;
- }
- in_sleep_flag[dev].opts &= ~WK_SLEEP;
- };
- if ((in_sleep_flag[dev].opts & WK_TIMEOUT))
- {
- err = -EIO;
- printk("Sound: DMA (input) timed out - IRQ/DRQ config error?\n");
- dma_reset_input(dev);
- ;
- } else
- err = -EINTR;
- }
+ }
+ else while (dmap->qlen <= 0 && n++ < 10)
+ {
+ int tmout;
+ unsigned long tlimit;
+
+ if (!(audio_devs[dev]->enable_bits & PCM_ENABLE_INPUT) || !audio_devs[dev]->go)
+ {
+ restore_flags(flags);
+ return -EAGAIN;
+ }
+ if ((err = DMAbuf_activate_recording(dev, dmap)) < 0)
+ {
+ restore_flags(flags);
+ return err;
+ }
+ /* Wait for the next block */
+
+ if (dontblock)
+ {
+ restore_flags(flags);
+ return -EAGAIN;
+ }
+ if (!audio_devs[dev]->go)
+ tmout = 0;
+ else
+ {
+ tmout = (dmap->fragment_size * HZ) / dmap->data_rate;
+ tmout += HZ / 5; /* Some safety distance */
+
+ if (tmout < (HZ / 2))
+ tmout = HZ / 2;
+ if (tmout > 20 * HZ)
+ tmout = 20 * HZ;
+ }
+
+ if (tmout)
+ current->timeout = tlimit = jiffies + (tmout);
+ else
+ tlimit = (unsigned long) -1;
+ in_sleep_flag[dev].opts = WK_SLEEP;
+ interruptible_sleep_on(&in_sleeper[dev]);
+ if (!(in_sleep_flag[dev].opts & WK_WAKEUP))
+ {
+ if (jiffies >= tlimit)
+ in_sleep_flag[dev].opts |= WK_TIMEOUT;
+ }
+ in_sleep_flag[dev].opts &= ~WK_SLEEP;
+
+ if ((in_sleep_flag[dev].opts & WK_TIMEOUT))
+ {
+ /* FIXME: include device name */
+ err = -EIO;
+ printk(KERN_WARNING "Sound: DMA (input) timed out - IRQ/DRQ config error?\n");
+ dma_reset_input(dev);
+ }
+ else
+ err = -EINTR;
+ }
restore_flags(flags);
if (dmap->qlen <= 0)
- {
- if (err == 0)
- err = -EINTR;
- return err;
- }
+ {
+ if (err == 0)
+ err = -EINTR;
+ return err;
+ }
*buf = &dmap->raw_buf[dmap->qhead * dmap->fragment_size + dmap->counts[dmap->qhead]];
*len = dmap->fragment_size - dmap->counts[dmap->qhead];
return dmap->qhead;
}
-int
-DMAbuf_rmchars(int dev, int buff_no, int c)
+int DMAbuf_rmchars(int dev, int buff_no, int c)
{
struct dma_buffparms *dmap = audio_devs[dev]->dmap_in;
-
- int p = dmap->counts[dmap->qhead] + c;
+ int p = dmap->counts[dmap->qhead] + c;
if (dmap->mapping_flags & DMA_MAP_MAPPED)
- {
- printk("Sound: Can't read from mmapped device (2)\n");
- return -EINVAL;
- } else if (dmap->qlen <= 0)
+ {
+/* printk("Sound: Can't read from mmapped device (2)\n");*/
+ return -EINVAL;
+ }
+ else if (dmap->qlen <= 0)
return -EIO;
else if (p >= dmap->fragment_size)
- { /* This buffer is completely empty */
- dmap->counts[dmap->qhead] = 0;
- dmap->qlen--;
- dmap->qhead = (dmap->qhead + 1) % dmap->nbufs;
- } else
- dmap->counts[dmap->qhead] = p;
+ { /* This buffer is completely empty */
+ dmap->counts[dmap->qhead] = 0;
+ dmap->qlen--;
+ dmap->qhead = (dmap->qhead + 1) % dmap->nbufs;
+ }
+ else dmap->counts[dmap->qhead] = p;
return 0;
}
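The fragment bookkeeping in DMAbuf_rmchars above is a plain ring buffer. A minimal userspace sketch of the same consume logic (struct and names are illustrative, not from the driver):

```c
#include <assert.h>

#define NBUFS 4
#define FRAG_SIZE 1024

struct ring {
	int qhead, qlen;	/* oldest fragment index and fragment count */
	int counts[NBUFS];	/* bytes already consumed per fragment */
};

/* Consume c bytes from the oldest fragment; advance qhead when the
 * fragment empties.  Mirrors the qhead/counts arithmetic above. */
static int ring_rmchars(struct ring *r, int c)
{
	int p = r->counts[r->qhead] + c;

	if (r->qlen <= 0)
		return -1;		/* nothing queued (driver returns -EIO) */
	if (p >= FRAG_SIZE) {		/* this fragment is completely consumed */
		r->counts[r->qhead] = 0;
		r->qlen--;
		r->qhead = (r->qhead + 1) % NBUFS;
	} else
		r->counts[r->qhead] = p;
	return 0;
}
```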
-int
-DMAbuf_get_buffer_pointer(int dev, struct dma_buffparms *dmap, int direction)
+int DMAbuf_get_buffer_pointer(int dev, struct dma_buffparms *dmap, int direction)
{
-/*
- * Try to approximate the active byte position of the DMA pointer within the
- * buffer area as well as possible.
- */
- int pos;
- unsigned long flags;
+ /*
+ * Try to approximate the active byte position of the DMA pointer within the
+ * buffer area as well as possible.
+ */
+
+ int pos;
+ unsigned long flags;
save_flags(flags);
cli();
if (!(dmap->flags & DMA_ACTIVE))
pos = 0;
else
- {
- int chan = dmap->dma;
-
- clear_dma_ff(chan);
- disable_dma(dmap->dma);
- pos = get_dma_residue(chan);
- pos = dmap->bytes_in_use - pos;
-
- if (!(dmap->mapping_flags & DMA_MAP_MAPPED))
- if (direction == DMODE_OUTPUT)
- {
- if (dmap->qhead == 0)
- if (pos > dmap->fragment_size)
- pos = 0;
- } else
- {
- if (dmap->qtail == 0)
- if (pos > dmap->fragment_size)
- pos = 0;
- }
- if (pos < 0)
- pos = 0;
- if (pos >= dmap->bytes_in_use)
- pos = 0;
- enable_dma(dmap->dma);
- }
+ {
+ int chan = dmap->dma;
+ clear_dma_ff(chan);
+ disable_dma(dmap->dma);
+ pos = get_dma_residue(chan);
+ pos = dmap->bytes_in_use - pos;
+
+ if (!(dmap->mapping_flags & DMA_MAP_MAPPED))
+ {
+ if (direction == DMODE_OUTPUT)
+ {
+ if (dmap->qhead == 0)
+ if (pos > dmap->fragment_size)
+ pos = 0;
+ }
+ else
+ {
+ if (dmap->qtail == 0)
+ if (pos > dmap->fragment_size)
+ pos = 0;
+ }
+ }
+ if (pos < 0)
+ pos = 0;
+ if (pos >= dmap->bytes_in_use)
+ pos = 0;
+ enable_dma(dmap->dma);
+ }
restore_flags(flags);
/* printk( "%04x ", pos); */
return pos;
}
/*
- * DMAbuf_start_devices() is called by the /dev/music driver to start
- * one or more audio devices at desired moment.
+ * DMAbuf_start_devices() is called by the /dev/music driver to start
+ * one or more audio devices at the desired moment.
*/
-static void
-DMAbuf_start_device(int dev)
+
+static void DMAbuf_start_device(int dev)
{
if (audio_devs[dev]->open_mode != 0)
+ {
if (!audio_devs[dev]->go)
- {
- /* OK to start the device */
- audio_devs[dev]->go = 1;
-
- if (audio_devs[dev]->d->trigger)
- audio_devs[dev]->d->trigger(dev,
- audio_devs[dev]->enable_bits * audio_devs[dev]->go);
- }
+ {
+ /* OK to start the device */
+ audio_devs[dev]->go = 1;
+
+ if (audio_devs[dev]->d->trigger)
+ audio_devs[dev]->d->trigger(dev, audio_devs[dev]->enable_bits * audio_devs[dev]->go);
+ }
+ }
}
-void
-DMAbuf_start_devices(unsigned int devmask)
+void DMAbuf_start_devices(unsigned int devmask)
{
- int dev;
+ int dev;
for (dev = 0; dev < num_audiodevs; dev++)
if ((devmask & (1 << dev)) && audio_devs[dev] != NULL)
DMAbuf_start_device(dev);
}
-int
-DMAbuf_space_in_queue(int dev)
+int DMAbuf_space_in_queue(int dev)
{
- int len, max, tmp;
+ int len, max, tmp;
struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
-
- int lim = dmap->nbufs;
-
+ int lim = dmap->nbufs;
if (lim < 2)
lim = 2;
/*
- * Verify that there are no more pending buffers than the limit
- * defined by the process.
+ * Verify that there are no more pending buffers than the limit
+ * defined by the process.
*/
max = dmap->max_fragments;
len = dmap->qlen;
if (audio_devs[dev]->d->local_qlen)
- {
- tmp = audio_devs[dev]->d->local_qlen(dev);
- if (tmp && len)
- tmp--; /*
- * This buffer has been counted twice
- */
- len += tmp;
- }
+ {
+ tmp = audio_devs[dev]->d->local_qlen(dev);
+ if (tmp && len)
+ tmp--; /*
+ * This buffer has been counted twice
+ */
+ len += tmp;
+ }
if (dmap->byte_counter % dmap->fragment_size) /* There is a partial fragment */
len = len + 1;
return max - len;
}
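The accounting in DMAbuf_space_in_queue above is: fragments queued, plus any device-local queue (one buffer of which is double-counted when both are nonzero), plus a partially filled fragment, subtracted from the per-process limit. A sketch of that arithmetic with illustrative parameter names:

```c
#include <assert.h>

/* Fragments still available to the writer: max_fragments minus what is
 * pending, following the len/tmp adjustments in DMAbuf_space_in_queue. */
static int space_in_queue(int max_fragments, int qlen, int local_qlen,
			  int partial_bytes)
{
	int len = qlen;

	if (local_qlen && len)
		local_qlen--;		/* this buffer has been counted twice */
	len += local_qlen;
	if (partial_bytes)
		len++;			/* there is a partial fragment */
	return max_fragments - len;
}
```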
-static int
-output_sleep(int dev, int dontblock)
+static int output_sleep(int dev, int dontblock)
{
- int tmout;
- int err = 0;
+ int tmout;
+ int err = 0;
struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
+ unsigned long tlimit;
if (dontblock)
- {
- return -EAGAIN;
- }
+ return -EAGAIN;
if (!(audio_devs[dev]->enable_bits & PCM_ENABLE_OUTPUT))
- {
- return -EAGAIN;
- }
+ return -EAGAIN;
+
/*
* Wait for free space
*/
+
if (!audio_devs[dev]->go || dmap->flags & DMA_NOTIMEOUT)
tmout = 0;
else
- {
- tmout =
- (dmap->fragment_size * HZ) / dmap->data_rate;
-
- tmout += HZ / 5; /* Some safety distance */
+ {
+ tmout = (dmap->fragment_size * HZ) / dmap->data_rate;
+ tmout += HZ / 5; /* Some safety distance */
- if (tmout < (HZ / 2))
- tmout = HZ / 2;
- if (tmout > 20 * HZ)
- tmout = 20 * HZ;
- }
+ if (tmout < (HZ / 2))
+ tmout = HZ / 2;
+ if (tmout > 20 * HZ)
+ tmout = 20 * HZ;
+ }
if (signal_pending(current))
return -EIO;
-
+ if (tmout)
+ current->timeout = tlimit = jiffies + (tmout);
+ else
+ tlimit = (unsigned long) -1;
+ out_sleep_flag[dev].opts = WK_SLEEP;
+ interruptible_sleep_on(&out_sleeper[dev]);
+ if (!(out_sleep_flag[dev].opts & WK_WAKEUP))
{
- unsigned long tlimit;
+ if (jiffies >= tlimit)
+ out_sleep_flag[dev].opts |= WK_TIMEOUT;
+ }
+ out_sleep_flag[dev].opts &= ~WK_SLEEP;
- if (tmout)
- current->timeout = tlimit = jiffies + (tmout);
- else
- tlimit = (unsigned long) -1;
- out_sleep_flag[dev].opts = WK_SLEEP;
- interruptible_sleep_on(&out_sleeper[dev]);
- if (!(out_sleep_flag[dev].opts & WK_WAKEUP))
- {
- if (jiffies >= tlimit)
- out_sleep_flag[dev].opts |= WK_TIMEOUT;
- }
- out_sleep_flag[dev].opts &= ~WK_SLEEP;
- };
if ((out_sleep_flag[dev].opts & WK_TIMEOUT))
- {
- printk("Sound: DMA (output) timed out - IRQ/DRQ config error?\n");
- ;
- dma_reset_output(dev);
- } else if (signal_pending(current))
- {
- err = -EINTR;
- }
+ {
+ printk(KERN_WARNING "Sound: DMA (output) timed out - IRQ/DRQ config error?\n");
+ dma_reset_output(dev);
+ }
+ else if (signal_pending(current))
+ err = -EINTR;
return err;
}
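The sleep timeout in output_sleep (and in the matching input path earlier) is derived from the fragment size and data rate, padded, then clamped. A standalone sketch of that heuristic, assuming HZ = 100 as on i386 kernels of this era:

```c
#include <assert.h>

#define HZ 100	/* timer tick rate, assumed for illustration */

/* Jiffies to wait for one fragment to drain, plus a safety margin,
 * clamped to [HZ/2, 20*HZ] as in output_sleep(). */
static int dma_timeout(int fragment_size, int data_rate)
{
	int tmout = (fragment_size * HZ) / data_rate;

	tmout += HZ / 5;		/* some safety distance */
	if (tmout < HZ / 2)
		tmout = HZ / 2;
	if (tmout > 20 * HZ)
		tmout = 20 * HZ;
	return tmout;
}
```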
-static int
-find_output_space(int dev, char **buf, int *size)
+static int find_output_space(int dev, char **buf, int *size)
{
struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
- unsigned long flags;
- unsigned long active_offs;
- long len, offs;
- int maxfrags;
- int occupied_bytes = (dmap->user_counter % dmap->fragment_size);
+ unsigned long flags;
+ unsigned long active_offs;
+ long len, offs;
+ int maxfrags;
+ int occupied_bytes = (dmap->user_counter % dmap->fragment_size);
*buf = dmap->raw_buf;
if (!(maxfrags = DMAbuf_space_in_queue(dev)) && !occupied_bytes)
- {
- return 0;
- }
+ return 0;
save_flags(flags);
cli();
offs = (dmap->user_counter % dmap->bytes_in_use) & ~SAMPLE_ROUNDUP;
if (offs < 0 || offs >= dmap->bytes_in_use)
- {
- printk("OSS: Got unexpected offs %ld. Giving up.\n", offs);
- printk("Counter = %ld, bytes=%d\n", dmap->user_counter, dmap->bytes_in_use);
- return 0;
- }
+ {
+ printk(KERN_ERR "Sound: Got unexpected offs %ld. Giving up.\n", offs);
+ printk("Counter = %ld, bytes=%d\n", dmap->user_counter, dmap->bytes_in_use);
+ return 0;
+ }
*buf = dmap->raw_buf + offs;
len = active_offs + dmap->bytes_in_use - dmap->user_counter; /* Number of unused bytes in buffer */
if ((offs + len) > dmap->bytes_in_use)
- {
- len = dmap->bytes_in_use - offs;
- }
+ len = dmap->bytes_in_use - offs;
if (len < 0)
- {
- restore_flags(flags);
- return 0;
- }
+ {
+ restore_flags(flags);
+ return 0;
+ }
if (len > ((maxfrags * dmap->fragment_size) - occupied_bytes))
- {
- len = (maxfrags * dmap->fragment_size) - occupied_bytes;
- }
+ len = (maxfrags * dmap->fragment_size) - occupied_bytes;
+
*size = len & ~SAMPLE_ROUNDUP;
restore_flags(flags);
return (*size > 0);
}
-int
-DMAbuf_getwrbuffer(int dev, char **buf, int *size, int dontblock)
+int DMAbuf_getwrbuffer(int dev, char **buf, int *size, int dontblock)
{
- unsigned long flags;
- int err = -EIO;
+ unsigned long flags;
+ int err = -EIO;
struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
if (dmap->needs_reorg)
reorganize_buffers(dev, dmap, 0);
if (dmap->mapping_flags & DMA_MAP_MAPPED)
- {
- printk("Sound: Can't write to mmapped device (3)\n");
- return -EINVAL;
- }
+ {
+/* printk(KERN_DEBUG "Sound: Can't write to mmapped device (3)\n");*/
+ return -EINVAL;
+ }
if (dmap->dma_mode == DMODE_INPUT) /* Direction change */
- {
+ {
DMAbuf_reset(dev);
dmap->dma_mode = DMODE_NONE;
- }
+ }
dmap->dma_mode = DMODE_OUTPUT;
save_flags(flags);
cli();
while (find_output_space(dev, buf, size) <= 0)
- {
- if ((err = output_sleep(dev, dontblock)) < 0)
- {
- restore_flags(flags);
- return err;
- }
- }
+ {
+ if ((err = output_sleep(dev, dontblock)) < 0)
+ {
+ restore_flags(flags);
+ return err;
+ }
+ }
restore_flags(flags);
return 0;
}
-int
-DMAbuf_move_wrpointer(int dev, int l)
+int DMAbuf_move_wrpointer(int dev, int l)
{
struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
- unsigned long ptr = (dmap->user_counter / dmap->fragment_size)
- * dmap->fragment_size;
-
- unsigned long end_ptr, p;
- int post = (dmap->flags & DMA_POST);
-
- ;
+ unsigned long ptr = (dmap->user_counter / dmap->fragment_size) * dmap->fragment_size;
+ unsigned long end_ptr, p;
+ int post = (dmap->flags & DMA_POST);
dmap->flags &= ~DMA_POST;
-
dmap->cfrag = -1;
-
dmap->user_counter += l;
dmap->flags |= DMA_DIRTY;
if (dmap->user_counter >= dmap->max_byte_counter)
- { /* Wrap the byte counters */
- long decr = dmap->user_counter;
+ { /* Wrap the byte counters */
+ long decr = dmap->user_counter;
- dmap->user_counter = (dmap->user_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
- decr -= dmap->user_counter;
- dmap->byte_counter -= decr;
- }
+ dmap->user_counter = (dmap->user_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
+ decr -= dmap->user_counter;
+ dmap->byte_counter -= decr;
+ }
end_ptr = (dmap->user_counter / dmap->fragment_size) * dmap->fragment_size;
p = (dmap->user_counter - 1) % dmap->bytes_in_use;
/* Update the fragment based bookkeeping too */
while (ptr < end_ptr)
- {
- dmap->counts[dmap->qtail] = dmap->fragment_size;
- dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
- dmap->qlen++;
- ptr += dmap->fragment_size;
- }
+ {
+ dmap->counts[dmap->qtail] = dmap->fragment_size;
+ dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
+ dmap->qlen++;
+ ptr += dmap->fragment_size;
+ }
dmap->counts[dmap->qtail] = dmap->user_counter - ptr;
-/*
- * Let the low level driver to perform some postprocessing to
- * the written data.
- */
+ /*
+ * Let the low level driver perform some postprocessing on
+ * the written data.
+ */
if (audio_devs[dev]->d->postprocess_write)
audio_devs[dev]->d->postprocess_write(dev);
if (!(dmap->flags & DMA_ACTIVE))
+ {
if (dmap->qlen > 1 ||
(dmap->qlen > 0 && (post || dmap->qlen >= dmap->nbufs - 1)))
- {
DMAbuf_launch_output(dev, dmap);
- };
+ }
return 0;
}
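The byte-counter wrap in DMAbuf_move_wrpointer (and repeated in both interrupt handlers) folds one counter back into a window of one buffer length while shrinking its companion by the same amount, so their difference is preserved. A hedged sketch of just that arithmetic:

```c
#include <assert.h>

/* Fold cnt back into [bytes_in_use, 2*bytes_in_use) and decrement the
 * companion counter by the same delta, as the driver does when a
 * counter passes max_byte_counter. */
static void wrap_counters(long *cnt, long *other, int bytes_in_use)
{
	long decr = *cnt;

	*cnt = (*cnt % bytes_in_use) + bytes_in_use;
	decr -= *cnt;
	*other -= decr;
}
```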
-int
-DMAbuf_start_dma(int dev, unsigned long physaddr, int count, int dma_mode)
+int DMAbuf_start_dma(int dev, unsigned long physaddr, int count, int dma_mode)
{
int chan;
struct dma_buffparms *dmap;
if (dma_mode == DMA_MODE_WRITE)
- {
- chan = audio_devs[dev]->dmap_out->dma;
- dmap = audio_devs[dev]->dmap_out;
- } else
- {
- chan = audio_devs[dev]->dmap_in->dma;
- dmap = audio_devs[dev]->dmap_in;
- }
+ {
+ chan = audio_devs[dev]->dmap_out->dma;
+ dmap = audio_devs[dev]->dmap_out;
+ }
+ else
+ {
+ chan = audio_devs[dev]->dmap_in->dma;
+ dmap = audio_devs[dev]->dmap_in;
+ }
if (dmap->raw_buf == NULL)
- {
- printk("sound: DMA buffer(1) == NULL\n");
- printk("Device %d, chn=%s\n", dev, (dmap == audio_devs[dev]->dmap_out) ? "out" : "in");
- return 0;
- }
+ {
+ printk(KERN_ERR "sound: DMA buffer(1) == NULL\n");
+ printk("Device %d, chn=%s\n", dev, (dmap == audio_devs[dev]->dmap_out) ? "out" : "in");
+ return 0;
+ }
if (chan < 0)
return 0;
return count;
}
-static int
-local_start_dma(int dev, unsigned long physaddr, int count, int dma_mode)
+static int local_start_dma(int dev, unsigned long physaddr, int count, int dma_mode)
{
- int chan;
+ int chan;
struct dma_buffparms *dmap;
if (dma_mode == DMA_MODE_WRITE)
- {
- chan = audio_devs[dev]->dmap_out->dma;
- dmap = audio_devs[dev]->dmap_out;
- } else
- {
- chan = audio_devs[dev]->dmap_in->dma;
- dmap = audio_devs[dev]->dmap_in;
- }
+ {
+ chan = audio_devs[dev]->dmap_out->dma;
+ dmap = audio_devs[dev]->dmap_out;
+ }
+ else
+ {
+ chan = audio_devs[dev]->dmap_in->dma;
+ dmap = audio_devs[dev]->dmap_in;
+ }
if (dmap->raw_buf == NULL)
- {
- printk("sound: DMA buffer(2) == NULL\n");
- printk("Device %d, chn=%s\n", dev, (dmap == audio_devs[dev]->dmap_out) ? "out" : "in");
- return 0;
- }
+ {
+ printk(KERN_ERR "sound: DMA buffer(2) == NULL\n");
+ printk(KERN_ERR "Device %d, chn=%s\n", dev, (dmap == audio_devs[dev]->dmap_out) ? "out" : "in");
+ return 0;
+ }
if (dmap->flags & DMA_NODMA)
- {
- return 1;
- }
+ {
+ return 1;
+ }
if (chan < 0)
return 0;
return count;
}
-static void
-finish_output_interrupt(int dev, struct dma_buffparms *dmap)
+static void finish_output_interrupt(int dev, struct dma_buffparms *dmap)
{
- unsigned long flags;
+ unsigned long flags;
if (dmap->audio_callback != NULL)
dmap->audio_callback(dev, dmap->callback_parm);
save_flags(flags);
cli();
if ((out_sleep_flag[dev].opts & WK_SLEEP))
- {
- {
- out_sleep_flag[dev].opts = WK_WAKEUP;
- wake_up(&out_sleeper[dev]);
- };
- }
+ {
+ out_sleep_flag[dev].opts = WK_WAKEUP;
+ wake_up(&out_sleeper[dev]);
+ }
restore_flags(flags);
}
-static void
-do_outputintr(int dev, int dummy)
+static void do_outputintr(int dev, int dummy)
{
- unsigned long flags;
+ unsigned long flags;
struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
- int this_fragment;
+ int this_fragment;
#ifdef OS_DMA_INTR
if (audio_devs[dev]->dmap_out->dma >= 0)
#endif
if (dmap->raw_buf == NULL)
- {
- printk("Sound: Fatal error. Audio interrupt (%d) after freeing buffers.\n", dev);
- return;
- }
+ {
+ printk(KERN_ERR "Sound: Error. Audio interrupt (%d) after freeing buffers.\n", dev);
+ return;
+ }
if (dmap->mapping_flags & DMA_MAP_MAPPED) /* Virtual memory mapped access */
- {
- /* mmapped access */
- dmap->qhead = (dmap->qhead + 1) % dmap->nbufs;
- if (dmap->qhead == 0) /* Wrapped */
- {
- dmap->byte_counter += dmap->bytes_in_use;
- if (dmap->byte_counter >= dmap->max_byte_counter) /* Overflow */
- {
- long decr = dmap->byte_counter;
-
- dmap->byte_counter = (dmap->byte_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
- decr -= dmap->byte_counter;
- dmap->user_counter -= decr;
- }
- }
- dmap->qlen++; /* Yes increment it (don't decrement) */
- if (!(audio_devs[dev]->flags & DMA_AUTOMODE))
- dmap->flags &= ~DMA_ACTIVE;
- dmap->counts[dmap->qhead] = dmap->fragment_size;
-
- DMAbuf_launch_output(dev, dmap);
- finish_output_interrupt(dev, dmap);
- return;
- }
+ {
+ /* mmapped access */
+ dmap->qhead = (dmap->qhead + 1) % dmap->nbufs;
+ if (dmap->qhead == 0) /* Wrapped */
+ {
+ dmap->byte_counter += dmap->bytes_in_use;
+ if (dmap->byte_counter >= dmap->max_byte_counter) /* Overflow */
+ {
+ long decr = dmap->byte_counter;
+
+ dmap->byte_counter = (dmap->byte_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
+ decr -= dmap->byte_counter;
+ dmap->user_counter -= decr;
+ }
+ }
+ dmap->qlen++; /* Yes increment it (don't decrement) */
+ if (!(audio_devs[dev]->flags & DMA_AUTOMODE))
+ dmap->flags &= ~DMA_ACTIVE;
+ dmap->counts[dmap->qhead] = dmap->fragment_size;
+
+ DMAbuf_launch_output(dev, dmap);
+ finish_output_interrupt(dev, dmap);
+ return;
+ }
save_flags(flags);
cli();
dmap->qhead = (dmap->qhead + 1) % dmap->nbufs;
if (dmap->qhead == 0) /* Wrapped */
- {
- dmap->byte_counter += dmap->bytes_in_use;
- if (dmap->byte_counter >= dmap->max_byte_counter) /* Overflow */
- {
- long decr = dmap->byte_counter;
-
- dmap->byte_counter = (dmap->byte_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
- decr -= dmap->byte_counter;
- dmap->user_counter -= decr;
- }
- }
+ {
+ dmap->byte_counter += dmap->bytes_in_use;
+ if (dmap->byte_counter >= dmap->max_byte_counter) /* Overflow */
+ {
+ long decr = dmap->byte_counter;
+
+ dmap->byte_counter = (dmap->byte_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
+ decr -= dmap->byte_counter;
+ dmap->user_counter -= decr;
+ }
+ }
if (!(audio_devs[dev]->flags & DMA_AUTOMODE))
dmap->flags &= ~DMA_ACTIVE;
while (dmap->qlen <= 0)
- {
- dmap->underrun_count++;
-
- dmap->qlen++;
- if (dmap->flags & DMA_DIRTY && dmap->applic_profile != APF_CPUINTENS)
- {
- dmap->flags &= ~DMA_DIRTY;
- memset(audio_devs[dev]->dmap_out->raw_buf,
- audio_devs[dev]->dmap_out->neutral_byte,
- audio_devs[dev]->dmap_out->buffsize);
- }
- dmap->user_counter += dmap->fragment_size;
- dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
- }
+ {
+ dmap->underrun_count++;
+ dmap->qlen++;
+ if (dmap->flags & DMA_DIRTY && dmap->applic_profile != APF_CPUINTENS)
+ {
+ dmap->flags &= ~DMA_DIRTY;
+ memset(audio_devs[dev]->dmap_out->raw_buf,
+ audio_devs[dev]->dmap_out->neutral_byte,
+ audio_devs[dev]->dmap_out->buffsize);
+ }
+ dmap->user_counter += dmap->fragment_size;
+ dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
+ }
if (dmap->qlen > 0)
DMAbuf_launch_output(dev, dmap);
finish_output_interrupt(dev, dmap);
}
-void
-DMAbuf_outputintr(int dev, int notify_only)
+void DMAbuf_outputintr(int dev, int notify_only)
{
- unsigned long flags;
+ unsigned long flags;
struct dma_buffparms *dmap = audio_devs[dev]->dmap_out;
save_flags(flags);
cli();
if (!(dmap->flags & DMA_NODMA))
- {
- int chan = dmap->dma, pos, n;
-
- clear_dma_ff(chan);
- disable_dma(dmap->dma);
- pos = dmap->bytes_in_use - get_dma_residue(chan);
- enable_dma(dmap->dma);
-
- pos = pos / dmap->fragment_size; /* Actual qhead */
- if (pos < 0 || pos >= dmap->nbufs)
- pos = 0;
-
- n = 0;
- while (dmap->qhead != pos && n++ < dmap->nbufs)
- {
- do_outputintr(dev, notify_only);
- }
- } else
+ {
+ int chan = dmap->dma, pos, n;
+ clear_dma_ff(chan);
+ disable_dma(dmap->dma);
+ pos = dmap->bytes_in_use - get_dma_residue(chan);
+ enable_dma(dmap->dma);
+ pos = pos / dmap->fragment_size; /* Actual qhead */
+ if (pos < 0 || pos >= dmap->nbufs)
+ pos = 0;
+ n = 0;
+ while (dmap->qhead != pos && n++ < dmap->nbufs)
+ do_outputintr(dev, notify_only);
+ }
+ else
do_outputintr(dev, notify_only);
restore_flags(flags);
}
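DMAbuf_outputintr derives the hardware's current fragment from the DMA residue and replays do_outputintr until qhead catches up. The position computation alone, as an illustrative sketch:

```c
#include <assert.h>

/* Fragment the DMA engine is currently in, from the channel residue;
 * out-of-range values fall back to 0 as in DMAbuf_outputintr(). */
static int dma_pos_fragment(int bytes_in_use, int residue,
			    int fragment_size, int nbufs)
{
	int pos = (bytes_in_use - residue) / fragment_size;

	if (pos < 0 || pos >= nbufs)
		pos = 0;
	return pos;
}
```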
-static void
-do_inputintr(int dev)
+static void do_inputintr(int dev)
{
struct dma_buffparms *dmap = audio_devs[dev]->dmap_in;
- unsigned long flags;
+ unsigned long flags;
#ifdef OS_DMA_INTR
if (audio_devs[dev]->dmap_in->dma >= 0)
#endif
if (dmap->raw_buf == NULL)
- {
- printk("Sound: Fatal error. Audio interrupt after freeing buffers.\n");
- return;
- }
+ {
+ printk(KERN_ERR "Sound: Fatal error. Audio interrupt after freeing buffers.\n");
+ return;
+ }
if (dmap->mapping_flags & DMA_MAP_MAPPED)
- {
- dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
- if (dmap->qtail == 0) /* Wrapped */
- {
- dmap->byte_counter += dmap->bytes_in_use;
- if (dmap->byte_counter >= dmap->max_byte_counter) /* Overflow */
- {
- long decr = dmap->byte_counter;
-
- dmap->byte_counter = (dmap->byte_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
- decr -= dmap->byte_counter;
- dmap->user_counter -= decr;
- }
- }
- dmap->qlen++;
-
- if (!(audio_devs[dev]->flags & DMA_AUTOMODE))
- {
- if (dmap->needs_reorg)
- reorganize_buffers(dev, dmap, 0);
- local_start_dma(dev, dmap->raw_buf_phys, dmap->bytes_in_use,
- DMA_MODE_READ);
- audio_devs[dev]->d->start_input(dev, dmap->raw_buf_phys +
- dmap->qtail * dmap->fragment_size,
- dmap->fragment_size, 1);
- if (audio_devs[dev]->d->trigger)
- audio_devs[dev]->d->trigger(dev,
- audio_devs[dev]->enable_bits * audio_devs[dev]->go);
- }
- dmap->flags |= DMA_ACTIVE;
- } else if (dmap->qlen >= (dmap->nbufs - 1))
- {
- printk("Sound: Recording overrun\n");
- dmap->underrun_count++;
-
- /* Just throw away the oldest fragment but keep the engine running */
- dmap->qhead = (dmap->qhead + 1) % dmap->nbufs;
- dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
- } else if (dmap->qlen >= 0 && dmap->qlen < dmap->nbufs)
- {
- dmap->qlen++;
- dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
- if (dmap->qtail == 0) /* Wrapped */
- {
- dmap->byte_counter += dmap->bytes_in_use;
- if (dmap->byte_counter >= dmap->max_byte_counter) /* Overflow */
- {
- long decr = dmap->byte_counter;
-
- dmap->byte_counter = (dmap->byte_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
- decr -= dmap->byte_counter;
- dmap->user_counter -= decr;
- }
- }
- }
+ {
+ dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
+ if (dmap->qtail == 0) /* Wrapped */
+ {
+ dmap->byte_counter += dmap->bytes_in_use;
+ if (dmap->byte_counter >= dmap->max_byte_counter) /* Overflow */
+ {
+ long decr = dmap->byte_counter;
+
+ dmap->byte_counter = (dmap->byte_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
+ decr -= dmap->byte_counter;
+ dmap->user_counter -= decr;
+ }
+ }
+ dmap->qlen++;
+
+ if (!(audio_devs[dev]->flags & DMA_AUTOMODE))
+ {
+ if (dmap->needs_reorg)
+ reorganize_buffers(dev, dmap, 0);
+ local_start_dma(dev, dmap->raw_buf_phys, dmap->bytes_in_use, DMA_MODE_READ);
+ audio_devs[dev]->d->start_input(dev, dmap->raw_buf_phys +
+ dmap->qtail * dmap->fragment_size,
+ dmap->fragment_size, 1);
+ if (audio_devs[dev]->d->trigger)
+ audio_devs[dev]->d->trigger(dev, audio_devs[dev]->enable_bits * audio_devs[dev]->go);
+ }
+ dmap->flags |= DMA_ACTIVE;
+ }
+ else if (dmap->qlen >= (dmap->nbufs - 1))
+ {
+ printk(KERN_WARNING "Sound: Recording overrun\n");
+ dmap->underrun_count++;
+
+ /* Just throw away the oldest fragment but keep the engine running */
+ dmap->qhead = (dmap->qhead + 1) % dmap->nbufs;
+ dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
+ }
+ else if (dmap->qlen >= 0 && dmap->qlen < dmap->nbufs)
+ {
+ dmap->qlen++;
+ dmap->qtail = (dmap->qtail + 1) % dmap->nbufs;
+ if (dmap->qtail == 0) /* Wrapped */
+ {
+ dmap->byte_counter += dmap->bytes_in_use;
+ if (dmap->byte_counter >= dmap->max_byte_counter) /* Overflow */
+ {
+ long decr = dmap->byte_counter;
+
+ dmap->byte_counter = (dmap->byte_counter % dmap->bytes_in_use) + dmap->bytes_in_use;
+ decr -= dmap->byte_counter;
+ dmap->user_counter -= decr;
+ }
+ }
+ }
if (!(audio_devs[dev]->flags & DMA_AUTOMODE) || dmap->flags & DMA_NODMA)
- {
- local_start_dma(dev, dmap->raw_buf_phys, dmap->bytes_in_use,
- DMA_MODE_READ);
- audio_devs[dev]->d->start_input(dev, dmap->raw_buf_phys +
- dmap->qtail * dmap->fragment_size,
- dmap->fragment_size, 1);
- if (audio_devs[dev]->d->trigger)
- audio_devs[dev]->d->trigger(dev,
- audio_devs[dev]->enable_bits * audio_devs[dev]->go);
- }
+ {
+ local_start_dma(dev, dmap->raw_buf_phys, dmap->bytes_in_use, DMA_MODE_READ);
+ audio_devs[dev]->d->start_input(dev, dmap->raw_buf_phys + dmap->qtail * dmap->fragment_size, dmap->fragment_size, 1);
+ if (audio_devs[dev]->d->trigger)
+ audio_devs[dev]->d->trigger(dev, audio_devs[dev]->enable_bits * audio_devs[dev]->go);
+ }
dmap->flags |= DMA_ACTIVE;
save_flags(flags);
cli();
if (dmap->qlen > 0)
+ {
if ((in_sleep_flag[dev].opts & WK_SLEEP))
- {
- {
- in_sleep_flag[dev].opts = WK_WAKEUP;
- wake_up(&in_sleeper[dev]);
- };
- }
+ {
+ in_sleep_flag[dev].opts = WK_WAKEUP;
+ wake_up(&in_sleeper[dev]);
+ }
+ }
restore_flags(flags);
}
-void
-DMAbuf_inputintr(int dev)
+void DMAbuf_inputintr(int dev)
{
struct dma_buffparms *dmap = audio_devs[dev]->dmap_in;
- unsigned long flags;
+ unsigned long flags;
save_flags(flags);
cli();
if (!(dmap->flags & DMA_NODMA))
- {
- int chan = dmap->dma, pos, n;
-
- clear_dma_ff(chan);
- disable_dma(dmap->dma);
- pos = dmap->bytes_in_use - get_dma_residue(chan);
- enable_dma(dmap->dma);
-
- pos = pos / dmap->fragment_size; /* Actual qhead */
- if (pos < 0 || pos >= dmap->nbufs)
- pos = 0;
-
- n = 0;
- while (dmap->qtail != pos && ++n < dmap->nbufs)
- {
- do_inputintr(dev);
- }
- } else
+ {
+ int chan = dmap->dma, pos, n;
+
+ clear_dma_ff(chan);
+ disable_dma(dmap->dma);
+ pos = dmap->bytes_in_use - get_dma_residue(chan);
+ enable_dma(dmap->dma);
+
+ pos = pos / dmap->fragment_size; /* Actual qhead */
+ if (pos < 0 || pos >= dmap->nbufs)
+ pos = 0;
+
+ n = 0;
+ while (dmap->qtail != pos && ++n < dmap->nbufs)
+ do_inputintr(dev);
+ }
+ else
do_inputintr(dev);
restore_flags(flags);
}
-int
-DMAbuf_open_dma(int dev)
+int DMAbuf_open_dma(int dev)
{
-/*
- * NOTE! This routine opens only the primary DMA channel (output).
- */
+ /*
+ * NOTE! This routine opens only the primary DMA channel (output).
+ */
- int chan = audio_devs[dev]->dmap_out->dma;
- int err;
+ int chan = audio_devs[dev]->dmap_out->dma;
+ int err;
if ((err = open_dmap(dev, OPEN_READWRITE, audio_devs[dev]->dmap_out, chan)) < 0)
- {
- return -EBUSY;
- }
+ return -EBUSY;
dma_init_buffers(dev, audio_devs[dev]->dmap_out);
out_sleep_flag[dev].opts = WK_NONE;
audio_devs[dev]->dmap_out->flags |= DMA_ALLOC_DONE;
audio_devs[dev]->dmap_out->fragment_size = audio_devs[dev]->dmap_out->buffsize;
if (chan >= 0)
- {
- unsigned long flags;
+ {
+ unsigned long flags;
- save_flags(flags);
- cli();
- disable_dma(audio_devs[dev]->dmap_out->dma);
- clear_dma_ff(chan);
- restore_flags(flags);
- }
+ save_flags(flags);
+ cli();
+ disable_dma(audio_devs[dev]->dmap_out->dma);
+ clear_dma_ff(chan);
+ restore_flags(flags);
+ }
return 0;
}
-void
-DMAbuf_close_dma(int dev)
+void DMAbuf_close_dma(int dev)
{
close_dmap(dev, audio_devs[dev]->dmap_out, audio_devs[dev]->dmap_out->dma);
}
-void
-DMAbuf_init(int dev, int dma1, int dma2)
+void DMAbuf_init(int dev, int dma1, int dma2)
{
/*
- * NOTE! This routine could be called several times.
+ * NOTE! This routine could be called several times.
*/
if (audio_devs[dev] && audio_devs[dev]->dmap_out == NULL)
- {
- if (audio_devs[dev]->d == NULL)
- panic("OSS: audio_devs[%d]->d == NULL\n", dev);
-
- if (audio_devs[dev]->parent_dev)
- { /* Use DMA map of the parent dev */
- int parent = audio_devs[dev]->parent_dev - 1;
-
- audio_devs[dev]->dmap_out = audio_devs[parent]->dmap_out;
- audio_devs[dev]->dmap_in = audio_devs[parent]->dmap_in;
- } else
- {
- audio_devs[dev]->dmap_out =
- audio_devs[dev]->dmap_in =
- &dmaps[ndmaps++];
- audio_devs[dev]->dmap_out->dma = dma1;
-
- if (audio_devs[dev]->flags & DMA_DUPLEX)
- {
- audio_devs[dev]->dmap_in =
- &dmaps[ndmaps++];
- audio_devs[dev]->dmap_in->dma = dma2;
- }
- }
- }
+ {
+ if (audio_devs[dev]->d == NULL)
+ panic("OSS: audio_devs[%d]->d == NULL\n", dev);
+
+ if (audio_devs[dev]->parent_dev)
+ { /* Use DMA map of the parent dev */
+ int parent = audio_devs[dev]->parent_dev - 1;
+
+ audio_devs[dev]->dmap_out = audio_devs[parent]->dmap_out;
+ audio_devs[dev]->dmap_in = audio_devs[parent]->dmap_in;
+ }
+ else
+ {
+ audio_devs[dev]->dmap_out = audio_devs[dev]->dmap_in = &dmaps[ndmaps++];
+ audio_devs[dev]->dmap_out->dma = dma1;
+
+ if (audio_devs[dev]->flags & DMA_DUPLEX)
+ {
+ audio_devs[dev]->dmap_in = &dmaps[ndmaps++];
+ audio_devs[dev]->dmap_in->dma = dma2;
+ }
+ }
+ }
}
-int
-DMAbuf_select(int dev, struct fileinfo *file, int sel_type, poll_table * wait)
+int DMAbuf_select(int dev, struct fileinfo *file, int sel_type, poll_table * wait)
{
struct dma_buffparms *dmap;
unsigned long flags;
switch (sel_type)
- {
- case SEL_IN:
- if (!(audio_devs[dev]->open_mode & OPEN_READ))
- return 0;
-
- dmap = audio_devs[dev]->dmap_in;
-
- if (dmap->mapping_flags & DMA_MAP_MAPPED)
- {
- if (dmap->qlen)
- return 1;
-
- save_flags(flags);
- cli();
-
- in_sleep_flag[dev].opts = WK_SLEEP;
- poll_wait(&in_sleeper[dev], wait);
- restore_flags(flags);
- return 0;
- }
- if (dmap->dma_mode != DMODE_INPUT)
- {
- if (dmap->dma_mode == DMODE_NONE &&
- audio_devs[dev]->enable_bits & PCM_ENABLE_INPUT &&
- !dmap->qlen &&
- audio_devs[dev]->go)
- {
- unsigned long flags;
-
- save_flags(flags);
- cli();
- DMAbuf_activate_recording(dev, dmap);
- restore_flags(flags);
- }
- return 0;
- }
- if (!dmap->qlen)
- {
- save_flags(flags);
- cli();
-
- in_sleep_flag[dev].opts = WK_SLEEP;
- poll_wait(&in_sleeper[dev], wait);
- restore_flags(flags);
- return 0;
- }
- return 1;
- break;
-
- case SEL_OUT:
- dmap = audio_devs[dev]->dmap_out;
-
- if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
- return 0;
-
- if (dmap->mapping_flags & DMA_MAP_MAPPED)
- {
- if (dmap->qlen)
- return 1;
-
- save_flags(flags);
- cli();
-
- out_sleep_flag[dev].opts = WK_SLEEP;
- poll_wait(&out_sleeper[dev], wait);
- restore_flags(flags);
- return 0;
- }
- if (dmap->dma_mode == DMODE_INPUT)
- {
- return 0;
- }
- if (dmap->dma_mode == DMODE_NONE)
- {
- return 1;
- }
- if (!DMAbuf_space_in_queue(dev))
- {
- save_flags(flags);
- cli();
-
- out_sleep_flag[dev].opts = WK_SLEEP;
- poll_wait(&out_sleeper[dev], wait);
- restore_flags(flags);
- return 0;
- }
- return 1;
- break;
-
- case SEL_EX:
- return 0;
- }
+ {
+ case SEL_IN:
+ if (!(audio_devs[dev]->open_mode & OPEN_READ))
+ return 0;
+ dmap = audio_devs[dev]->dmap_in;
+
+ if (dmap->mapping_flags & DMA_MAP_MAPPED)
+ {
+ if (dmap->qlen)
+ return 1;
+ save_flags(flags);
+ cli();
+ in_sleep_flag[dev].opts = WK_SLEEP;
+ poll_wait(&in_sleeper[dev], wait);
+ restore_flags(flags);
+ return 0;
+ }
+ if (dmap->dma_mode != DMODE_INPUT)
+ {
+ if (dmap->dma_mode == DMODE_NONE &&
+ audio_devs[dev]->enable_bits & PCM_ENABLE_INPUT &&
+ !dmap->qlen && audio_devs[dev]->go)
+ {
+ unsigned long flags;
+ save_flags(flags);
+ cli();
+ DMAbuf_activate_recording(dev, dmap);
+ restore_flags(flags);
+ }
+ return 0;
+ }
+ if (!dmap->qlen)
+ {
+ save_flags(flags);
+ cli();
+
+ in_sleep_flag[dev].opts = WK_SLEEP;
+ poll_wait(&in_sleeper[dev], wait);
+ restore_flags(flags);
+ return 0;
+ }
+ return 1;
+
+ case SEL_OUT:
+ dmap = audio_devs[dev]->dmap_out;
+
+ if (!(audio_devs[dev]->open_mode & OPEN_WRITE))
+ return 0;
+
+ if (dmap->mapping_flags & DMA_MAP_MAPPED)
+ {
+ if (dmap->qlen)
+ return 1;
+
+ save_flags(flags);
+ cli();
+
+ out_sleep_flag[dev].opts = WK_SLEEP;
+ poll_wait(&out_sleeper[dev], wait);
+ restore_flags(flags);
+ return 0;
+ }
+
+ if (dmap->dma_mode == DMODE_INPUT)
+ return 0;
+
+ if (dmap->dma_mode == DMODE_NONE)
+ return 1;
+
+ if (!DMAbuf_space_in_queue(dev))
+ {
+ save_flags(flags);
+ cli();
+
+ out_sleep_flag[dev].opts = WK_SLEEP;
+ poll_wait(&out_sleeper[dev], wait);
+ restore_flags(flags);
+ return 0;
+ }
+ return 1;
+
+ case SEL_EX:
+ return 0;
+ }
return 0;
}
-void
-DMAbuf_deinit(int dev)
+void DMAbuf_deinit(int dev)
{
/* This routine is called when driver is being unloaded */
#ifdef RUNTIME_DMA_ALLOC
if (hw_config->slots[4] != -1)
sound_unload_audiodev(hw_config->slots[4]);
if (hw_config->slots[5] != -1)
- sound_unload_mixerdev(hw_config->slots[4]);
+ sound_unload_mixerdev(hw_config->slots[5]);
if(samples)
vfree(samples);
extern void mix_write(unsigned char data, int ioaddr);
-unsigned charpas_read(int ioaddr)
+unsigned char pas_read(int ioaddr)
{
return inb(ioaddr ^ translate_code);
}
*/
/*
* Thomas Sailer : ioctl code reworked (vmalloc/vfree removed)
- * integrated sound_switch.c and made /proc/sound (equals to /dev/sndstat,
+ * integrated sound_switch.c
+ * Stefan Reinauer : integrated /proc/sound (equals to /dev/sndstat,
* which should disappear in the near future)
*/
#include <linux/config.h>
dmap->buffsize = PAGE_SIZE * (1 << sz);
- if ((start_addr = (char *) __get_free_pages(GFP_ATOMIC, sz, MAX_DMA_ADDRESS)) == NULL)
+ start_addr = (char *) __get_free_pages(GFP_ATOMIC | GFP_DMA, sz);
+ if (start_addr == NULL)
dmap->buffsize /= 2;
}
bool 'SMD disklabel (Sun partition tables) support' CONFIG_SMD_DISKLABEL
bool 'Solaris (x86) partition table support' CONFIG_SOLARIS_X86_PARTITION
fi
+if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
+ tristate 'ADFS filesystem support (read only) (EXPERIMENTAL)' CONFIG_ADFS_FS
+fi
bool 'Macintosh partition map support' CONFIG_MAC_PARTITION
endmenu
endif
endif
+ifeq ($(CONFIG_ADFS_FS),y)
+SUB_DIRS += adfs
+else
+ ifeq ($(CONFIG_ADFS_FS),m)
+ MOD_SUB_DIRS += adfs
+ endif
+endif
+
ifeq ($(CONFIG_BINFMT_ELF),y)
BINFMTS += binfmt_elf.o
else
--- /dev/null
+#
+# Makefile for the linux adfs-filesystem routines.
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+# Note 2! The CFLAGS definitions are now in the main makefile...
+
+O_TARGET := adfs.o
+O_OBJS := dir.o file.o inode.o map.o namei.o super.o
+M_OBJS := $(O_TARGET)
+
+include $(TOPDIR)/Rules.make
--- /dev/null
+/*
+ * linux/fs/adfs/dir.c
+ *
+ * Copyright (C) 1997 Russell King
+ */
+
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/adfs_fs.h>
+#include <linux/sched.h>
+#include <linux/stat.h>
+
+static ssize_t adfs_dirread (struct file *filp, char *buf,
+ size_t siz, loff_t *ppos)
+{
+ return -EISDIR;
+}
+
+static int adfs_readdir (struct file *, void *, filldir_t);
+
+static struct file_operations adfs_dir_operations = {
+ NULL, /* lseek - default */
+ adfs_dirread, /* read */
+ NULL, /* write - bad */
+ adfs_readdir, /* readdir */
+ NULL, /* select - default */
+ NULL, /* ioctl */
+ NULL, /* mmap */
+ NULL, /* no special open code */
+ NULL, /* no special release code */
+ file_fsync, /* fsync */
+ NULL, /* fasync */
+ NULL, /* check_media_change */
+ NULL /* revalidate */
+};
+
+/*
+ * only lookup is implemented here; the filesystem is read-only,
+ * so everything else uses the defaults
+ */
+struct inode_operations adfs_dir_inode_operations = {
+ &adfs_dir_operations, /* default directory file-ops */
+ NULL, /* create */
+ adfs_lookup, /* lookup */
+ NULL, /* link */
+ NULL, /* unlink */
+ NULL, /* symlink */
+ NULL, /* mkdir */
+ NULL, /* rmdir */
+ NULL, /* mknod */
+ NULL, /* rename */
+ NULL, /* read link */
+ NULL, /* follow link */
+ NULL, /* read page */
+ NULL, /* write page */
+ NULL, /* bmap */
+ NULL, /* truncate */
+ NULL, /* permission */
+ NULL /* smap */
+};
+
+unsigned int adfs_val (unsigned char *p, int len)
+{
+ unsigned int val = 0;
+
+ switch (len) {
+ case 4:
+ val |= p[3] << 24;
+ case 3:
+ val |= p[2] << 16;
+ case 2:
+ val |= p[1] << 8;
+ default:
+ val |= p[0];
+ }
+ return val;
+}
+
+static unsigned int adfs_time (unsigned int load, unsigned int exec)
+{
+ unsigned int high, low;
+
+ high = ((load << 24) | (exec >> 8)) - 0x336e996a;
+ low = exec & 255;
+
+ /* 65537 = h256,l1
+ * (h256 % 100) = 56 h256 / 100 = 2
+ * 56 << 8 = 14336 2 * 256 = 512
+ * + l1 = 14337
+ * / 100 = 143
+ * + 512 = 655
+ */
+ return (((high % 100) << 8) + low) / 100 + (high / 100 << 8);
+}
+
+int adfs_readname (char *buf, char *ptr, int maxlen)
+{
+ int size = 0;
+ while (*ptr >= ' ' && maxlen--) {
+ switch (*ptr) {
+ case '/':
+ *buf++ = '.';
+ break;
+ default:
+ *buf++ = *ptr;
+ break;
+ }
+ ptr++;
+ size ++;
+ }
+ *buf = '\0';
+ return size;
+}
+
+int adfs_dir_read_parent (struct inode *inode, struct buffer_head **bhp)
+{
+ struct super_block *sb;
+ int i, size;
+
+ if (!inode)
+ return 0;
+
+ sb = inode->i_sb;
+
+ size = 2048 >> sb->s_blocksize_bits;
+
+ for (i = 0; i < size; i++) {
+ int block;
+
+ block = adfs_parent_bmap (inode, i);
+ if (block)
+ bhp[i] = bread (sb->s_dev, block, sb->s_blocksize);
+ else
+ adfs_error (sb, "adfs_dir_read_parent",
+ "directory %lu with a hole at offset %d", inode->i_ino, i);
+ if (!block || !bhp[i]) {
+ int j;
+ for (j = i - 1; j >= 0; j--)
+ brelse (bhp[j]);
+ return 0;
+ }
+ }
+ return i;
+}
+
+int adfs_dir_read (struct inode *inode, struct buffer_head **bhp)
+{
+ struct super_block *sb;
+ int i, size;
+
+ if (!inode || !S_ISDIR(inode->i_mode))
+ return 0;
+
+ sb = inode->i_sb;
+
+ size = inode->i_size >> sb->s_blocksize_bits;
+
+ for (i = 0; i < size; i++) {
+ int block;
+
+ block = adfs_bmap (inode, i);
+ if (block)
+ bhp[i] = bread (sb->s_dev, block, sb->s_blocksize);
+ else
+ adfs_error (sb, "adfs_dir_read",
+ "directory %lX,%lX with a hole at offset %d",
+ inode->i_ino, inode->u.adfs_i.file_id, i);
+ if (!block || !bhp[i]) {
+ int j;
+ for (j = i - 1; j >= 0; j--)
+ brelse (bhp[j]);
+ return 0;
+ }
+ }
+ return i;
+}
+
+int adfs_dir_check (struct inode *inode, struct buffer_head **bhp, int buffers, union adfs_dirtail *dtp)
+{
+ struct adfs_dirheader dh;
+ union adfs_dirtail dt;
+
+ memcpy (&dh, bhp[0]->b_data, sizeof (dh));
+ memcpy (&dt, bhp[3]->b_data + 471, sizeof(dt));
+
+ if (memcmp (&dh.startmasseq, &dt.new.endmasseq, 5) ||
+ (memcmp (&dh.startname, "Nick", 4) &&
+ memcmp (&dh.startname, "Hugo", 4))) {
+ adfs_error (inode->i_sb, "adfs_check_dir",
+ "corrupted directory inode %lX,%lX",
+ inode->i_ino, inode->u.adfs_i.file_id);
+ return 1;
+ }
+ if (dtp)
+ *dtp = dt;
+ return 0;
+}
+
+void adfs_dir_free (struct buffer_head **bhp, int buffers)
+{
+ int i;
+
+ for (i = buffers - 1; i >= 0; i--)
+ brelse (bhp[i]);
+}
+
+int adfs_dir_get (struct super_block *sb, struct buffer_head **bhp,
+ int buffers, int pos, unsigned long parent_object_id,
+ struct adfs_idir_entry *ide)
+{
+ struct adfs_direntry de;
+ int thissize, buffer, offset;
+
+ offset = pos & (sb->s_blocksize - 1);
+ buffer = pos >> sb->s_blocksize_bits;
+
+ if (buffer > buffers)
+ return 0;
+
+ thissize = sb->s_blocksize - offset;
+ if (thissize > 26)
+ thissize = 26;
+
+ memcpy (&de, bhp[buffer]->b_data + offset, thissize);
+ if (thissize != 26)
+ memcpy (((char *)&de) + thissize, bhp[buffer + 1]->b_data, 26 - thissize);
+
+ if (!de.dirobname[0])
+ return 0;
+
+ ide->name_len = adfs_readname (ide->name, de.dirobname, ADFS_NAME_LEN);
+ ide->inode_no = adfs_inode_generate (parent_object_id, pos);
+ ide->file_id = adfs_val (de.dirinddiscadd, 3);
+ ide->size = adfs_val (de.dirlen, 4);
+ ide->mode = de.newdiratts;
+ ide->mtime = adfs_time (adfs_val (de.dirload, 4), adfs_val (de.direxec, 4));
+ ide->filetype = (adfs_val (de.dirload, 4) >> 8) & 0xfff;
+ return 1;
+}
+
+int adfs_dir_find_entry (struct super_block *sb, struct buffer_head **bhp,
+ int buffers, unsigned int pos,
+ struct adfs_idir_entry *ide)
+{
+ struct adfs_direntry de;
+ int offset, buffer, thissize;
+
+ offset = pos & (sb->s_blocksize - 1);
+ buffer = pos >> sb->s_blocksize_bits;
+
+ if (buffer > buffers)
+ return 0;
+
+ thissize = sb->s_blocksize - offset;
+ if (thissize > 26)
+ thissize = 26;
+
+ memcpy (&de, bhp[buffer]->b_data + offset, thissize);
+ if (thissize != 26)
+ memcpy (((char *)&de) + thissize, bhp[buffer + 1]->b_data, 26 - thissize);
+
+ if (!de.dirobname[0])
+ return 0;
+
+ ide->name_len = adfs_readname (ide->name, de.dirobname, ADFS_NAME_LEN);
+ ide->size = adfs_val (de.dirlen, 4);
+ ide->mode = de.newdiratts;
+ ide->file_id = adfs_val (de.dirinddiscadd, 3);
+ ide->mtime = adfs_time (adfs_val (de.dirload, 4), adfs_val (de.direxec, 4));
+ ide->filetype = (adfs_val (de.dirload, 4) >> 8) & 0xfff;
+ return 1;
+}
+
+static int adfs_readdir (struct file *filp, void *dirent, filldir_t filldir)
+{
+ struct inode *inode = filp->f_dentry->d_inode;
+ struct super_block *sb;
+ struct buffer_head *bh[4];
+ union adfs_dirtail dt;
+ unsigned long parent_object_id, dir_object_id;
+ int buffers, pos;
+
+ if (!inode || !S_ISDIR(inode->i_mode))
+ return -EBADF;
+ sb = inode->i_sb;
+
+ if (filp->f_pos > ADFS_NUM_DIR_ENTRIES + 2)
+ return -ENOENT;
+
+ if (!(buffers = adfs_dir_read (inode, bh))) {
+ adfs_error (sb, "adfs_readdir", "unable to read directory");
+ return -EINVAL;
+ }
+
+ if (adfs_dir_check (inode, bh, buffers, &dt)) {
+ adfs_dir_free (bh, buffers);
+ return -ENOENT;
+ }
+
+ parent_object_id = adfs_val (dt.new.dirparent, 3);
+ dir_object_id = adfs_inode_objid (inode);
+
+ if (filp->f_pos < 2) {
+ if (filp->f_pos < 1) {
+ if (filldir (dirent, ".", 1, 0, inode->i_ino) < 0)
+ return 0;
+ filp->f_pos ++;
+ }
+ if (filldir (dirent, "..", 2, 1,
+ adfs_inode_generate (parent_object_id, 0)) < 0)
+ return 0;
+ filp->f_pos ++;
+ }
+
+ pos = 5 + (filp->f_pos - 2) * 26;
+ while (filp->f_pos < 79) {
+ struct adfs_idir_entry ide;
+
+ if (!adfs_dir_get (sb, bh, buffers, pos, dir_object_id, &ide))
+ break;
+
+ if (filldir (dirent, ide.name, ide.name_len, filp->f_pos, ide.inode_no) < 0)
+ return 0;
+ filp->f_pos ++;
+ pos += 26;
+ }
+ adfs_dir_free (bh, buffers);
+ return 0;
+}
--- /dev/null
+/*
+ * linux/fs/adfs/file.c
+ *
+ * Copyright (C) 1997 Russell King
+ * from:
+ *
+ * linux/fs/ext2/file.c
+ *
+ * Copyright (C) 1992, 1993, 1994, 1995
+ * Remy Card (card@masi.ibp.fr)
+ * Laboratoire MASI - Institut Blaise Pascal
+ * Universite Pierre et Marie Curie (Paris VI)
+ *
+ * from
+ *
+ * linux/fs/minix/file.c
+ *
+ * Copyright (C) 1991, 1992 Linus Torvalds
+ *
+ * adfs regular file handling primitives
+ */
+
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/ext2_fs.h>
+#include <linux/fcntl.h>
+#include <linux/sched.h>
+#include <linux/stat.h>
+
+/*
+ * We have mostly NULL's here: the current defaults are ok for
+ * the adfs filesystem.
+ */
+static struct file_operations adfs_file_operations = {
+ NULL, /* lseek - default */
+ generic_file_read, /* read */
+ NULL, /* write */
+ NULL, /* readdir - bad */
+ NULL, /* select - default */
+ NULL, /* ioctl */
+ generic_file_mmap, /* mmap */
+ NULL, /* open - not special */
+ NULL, /* release */
+ file_fsync, /* fsync */
+ NULL, /* fasync */
+ NULL, /* check_media_change */
+ NULL /* revalidate */
+};
+
+struct inode_operations adfs_file_inode_operations = {
+ &adfs_file_operations, /* default file operations */
+ NULL, /* create */
+ NULL, /* lookup */
+ NULL, /* link */
+ NULL, /* unlink */
+ NULL, /* symlink */
+ NULL, /* mkdir */
+ NULL, /* rmdir */
+ NULL, /* mknod */
+ NULL, /* rename */
+ NULL, /* readlink */
+ NULL, /* follow_link */
+ generic_readpage, /* readpage */
+ NULL, /* writepage */
+ adfs_bmap, /* bmap */
+ NULL, /* truncate */
+ NULL, /* permission */
+ NULL /* smap */
+};
--- /dev/null
+/*
+ * linux/fs/adfs/inode.c
+ *
+ * Copyright (C) 1997 Russell King
+ */
+
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/adfs_fs.h>
+#include <linux/sched.h>
+#include <linux/stat.h>
+#include <linux/string.h>
+#include <linux/locks.h>
+#include <linux/mm.h>
+
+/*
+ * Old inode numbers:
+ *  bit 30 - 16  FragID of parent object
+ *  bit 15       0 => bits 14 - 0 hold the FragID of the object
+ *               1 => bits 14 - 0 hold the offset into the parent FragID
+ *
+ * New inode numbers:
+ *  Inode = FragID of parent (14) + Frag offset (8) + (index into directory + 1) (8)
+ */
+#define inode_frag(ino) ((ino) >> 8)
+#define inode_idx(ino) ((ino) & 0xff)
+#define inode_dirindex(idx) (((idx) & 0xff) * 26 - 21)
+
+#define frag_id(x) (((x) >> 8) & 0x7fff)
+#define off(x) (((x) & 0xff) ? ((x) & 0xff) - 1 : 0)
+
+static inline int adfs_inode_validate_no (struct super_block *sb, unsigned int inode_no)
+{
+ unsigned long max_frag_id;
+
+ max_frag_id = sb->u.adfs_sb.s_map_size * sb->u.adfs_sb.s_ids_per_zone;
+
+ return (inode_no & 0x800000ff) ||
+ (frag_id (inode_frag (inode_no)) > max_frag_id) ||
+ (frag_id (inode_frag (inode_no)) < 2);
+}
+
+int adfs_inode_validate (struct inode *inode)
+{
+ struct super_block *sb = inode->i_sb;
+
+ return adfs_inode_validate_no (sb, inode->i_ino & 0xffffff00) ||
+ adfs_inode_validate_no (sb, inode->u.adfs_i.file_id << 8);
+}
+
+unsigned long adfs_inode_generate (unsigned long parent_id, int diridx)
+{
+ if (!parent_id)
+ return -1;
+
+ if (diridx)
+ diridx = (diridx + 21) / 26;
+
+ return (parent_id << 8) | diridx;
+}
+
+unsigned long adfs_inode_objid (struct inode *inode)
+{
+ if (adfs_inode_validate (inode)) {
+ adfs_error (inode->i_sb, "adfs_inode_objid",
+ "bad inode number: %lu (%X,%X)",
+ inode->i_ino, inode->i_ino, inode->u.adfs_i.file_id);
+ return 0;
+ }
+
+ return inode->u.adfs_i.file_id;
+}
+
+unsigned int adfs_bmap (struct inode *inode, int block)
+{
+ struct super_block *sb = inode->i_sb;
+ unsigned int blk;
+
+ if (adfs_inode_validate (inode)) {
+ adfs_error (sb, "adfs_bmap",
+ "bad inode number: %lu (%X,%X)",
+ inode->i_ino, inode->i_ino, inode->u.adfs_i.file_id);
+ return 0;
+ }
+
+ if (frag_id(inode->u.adfs_i.file_id) == ADFS_ROOT_FRAG)
+ blk = sb->u.adfs_sb.s_map_block + off(inode_frag (inode->i_ino)) + block;
+ else
+ blk = adfs_map_lookup (sb, frag_id(inode->u.adfs_i.file_id),
+ off (inode->u.adfs_i.file_id) + block);
+ return blk;
+}
+
+unsigned int adfs_parent_bmap (struct inode *inode, int block)
+{
+ struct super_block *sb = inode->i_sb;
+ unsigned int blk, fragment;
+
+ if (adfs_inode_validate_no (sb, inode->i_ino & 0xffffff00)) {
+ adfs_error (sb, "adfs_parent_bmap",
+ "bad inode number: %lu (%X,%X)",
+ inode->i_ino, inode->i_ino, inode->u.adfs_i.file_id);
+ return 0;
+ }
+
+ fragment = inode_frag (inode->i_ino);
+ if (frag_id (fragment) == ADFS_ROOT_FRAG)
+ blk = sb->u.adfs_sb.s_map_block + off (fragment) + block;
+ else
+ blk = adfs_map_lookup (sb, frag_id (fragment), off (fragment) + block);
+ return blk;
+}
+
+static int adfs_atts2mode (unsigned char mode, unsigned int filetype)
+{
+ int omode = 0;
+
+ if (filetype == 0xfc0 /* LinkFS */) {
+ omode = S_IFLNK|S_IRUSR|S_IWUSR|S_IXUSR|
+ S_IRGRP|S_IWGRP|S_IXGRP|
+ S_IROTH|S_IWOTH|S_IXOTH;
+ } else {
+ if (mode & ADFS_NDA_DIRECTORY)
+ omode |= S_IFDIR|S_IRUSR|S_IXUSR|S_IXGRP|S_IXOTH;
+ else
+ omode |= S_IFREG;
+ if (mode & ADFS_NDA_OWNER_READ) {
+ omode |= S_IRUSR;
+ if (filetype == 0xfe6 /* UnixExec */)
+ omode |= S_IXUSR;
+ }
+ if (mode & ADFS_NDA_OWNER_WRITE)
+ omode |= S_IWUSR;
+ if (mode & ADFS_NDA_PUBLIC_READ) {
+ omode |= S_IRGRP | S_IROTH;
+ if (filetype == 0xfe6)
+ omode |= S_IXGRP | S_IXOTH;
+ }
+ if (mode & ADFS_NDA_PUBLIC_WRITE)
+ omode |= S_IWGRP | S_IWOTH;
+ }
+ return omode;
+}
+
+void adfs_read_inode (struct inode *inode)
+{
+ struct super_block *sb;
+ struct buffer_head *bh[4];
+ struct adfs_idir_entry ide;
+ int buffers;
+
+ sb = inode->i_sb;
+ inode->i_uid = 0;
+ inode->i_gid = 0;
+ inode->i_version = ++event;
+
+ if (adfs_inode_validate_no (sb, inode->i_ino & 0xffffff00)) {
+ adfs_error (sb, "adfs_read_inode",
+ "bad inode number: %lu", inode->i_ino);
+ goto bad;
+ }
+
+ if (frag_id(inode_frag (inode->i_ino)) == ADFS_ROOT_FRAG &&
+ inode_idx (inode->i_ino) == 0) {
+ /* root dir */
+ inode->i_mode = S_IRWXUGO | S_IFDIR;
+ inode->i_nlink = 2;
+ inode->i_size = ADFS_NEWDIR_SIZE;
+ inode->i_blksize = PAGE_SIZE;
+ inode->i_blocks = inode->i_size / sb->s_blocksize;
+ inode->i_mtime =
+ inode->i_atime =
+ inode->i_ctime = 0;
+ inode->u.adfs_i.file_id = inode_frag (inode->i_ino);
+ } else {
+ if (!(buffers = adfs_dir_read_parent (inode, bh)))
+ goto bad;
+
+ if (adfs_dir_check (inode, bh, buffers, NULL)) {
+ adfs_dir_free (bh, buffers);
+ goto bad;
+ }
+
+ if (!adfs_dir_find_entry (sb, bh, buffers, inode_dirindex (inode->i_ino), &ide)) {
+ adfs_dir_free (bh, buffers);
+ goto bad;
+ }
+ adfs_dir_free (bh, buffers);
+ inode->i_mode = adfs_atts2mode (ide.mode, ide.filetype);
+ inode->i_nlink = 2;
+ inode->i_size = ide.size;
+ inode->i_blksize = PAGE_SIZE;
+ inode->i_blocks = (inode->i_size + sb->s_blocksize - 1) >> sb->s_blocksize_bits;
+ inode->i_mtime =
+ inode->i_atime =
+ inode->i_ctime = ide.mtime;
+ inode->u.adfs_i.file_id = ide.file_id;
+ }
+
+ if (S_ISDIR(inode->i_mode))
+ inode->i_op = &adfs_dir_inode_operations;
+ else if (S_ISREG(inode->i_mode))
+ inode->i_op = &adfs_file_inode_operations;
+ return;
+
+bad:
+ inode->i_mode = 0;
+ inode->i_nlink = 1;
+ inode->i_size = 0;
+ inode->i_blksize = 0;
+ inode->i_blocks = 0;
+ inode->i_mtime =
+ inode->i_atime =
+ inode->i_ctime = 0;
+ inode->i_op = NULL;
+}
--- /dev/null
+/*
+ * linux/fs/adfs/map.c
+ *
+ * Copyright (C) 1997 Russell King
+ */
+
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/adfs_fs.h>
+
+static inline unsigned int
+adfs_convert_map_to_sector (const struct super_block *sb, unsigned int mapoff)
+{
+ if (sb->u.adfs_sb.s_map2blk >= 0)
+ mapoff <<= sb->u.adfs_sb.s_map2blk;
+ else
+ mapoff >>= -sb->u.adfs_sb.s_map2blk;
+ return mapoff;
+}
+
+static inline unsigned int
+adfs_convert_sector_to_map (const struct super_block *sb, unsigned int secoff)
+{
+ if (sb->u.adfs_sb.s_map2blk >= 0)
+ secoff >>= sb->u.adfs_sb.s_map2blk;
+ else
+ secoff <<= -sb->u.adfs_sb.s_map2blk;
+ return secoff;
+}
+
+static int lookup_zone (struct super_block *sb, int zone, int frag_id, int *offset)
+{
+ unsigned int mapptr, idlen, mapsize;
+ unsigned long *map;
+
+ map = ((unsigned long *)sb->u.adfs_sb.s_map[zone]->b_data) + 1;
+ zone =
+ mapptr = zone == 0 ? (ADFS_DR_SIZE << 3) : 0;
+ idlen = sb->u.adfs_sb.s_idlen;
+ mapsize = sb->u.adfs_sb.s_zonesize;
+
+ do {
+ unsigned long v1, v2;
+ unsigned int start;
+
+ v1 = map[mapptr>>5];
+ v2 = map[(mapptr>>5)+1];
+
+ v1 = (v1 >> (mapptr & 31)) | (v2 << (32 - (mapptr & 31)));
+ start = mapptr;
+ mapptr += idlen;
+
+ v2 = map[mapptr >> 5] >> (mapptr & 31);
+ if (!v2) {
+ mapptr = (mapptr + 32) & ~31;
+ for (; (v2 = map[mapptr >> 5]) == 0 && mapptr < mapsize; mapptr += 32);
+ }
+ for (; (v2 & 255) == 0; v2 >>= 8, mapptr += 8);
+ for (; (v2 & 1) == 0; v2 >>= 1, mapptr += 1);
+ mapptr += 1;
+
+ if ((v1 & ((1 << idlen) - 1)) == frag_id) {
+ int length = mapptr - start;
+ if (*offset >= length)
+ *offset -= length;
+ else
+ return start + *offset - zone;
+ }
+ } while (mapptr < mapsize);
+ return -1;
+}
+
+int adfs_map_lookup (struct super_block *sb, int frag_id, int offset)
+{
+ unsigned int start_zone, zone, max_zone, mapoff, secoff;
+
+ zone = start_zone = frag_id / sb->u.adfs_sb.s_ids_per_zone;
+ max_zone = sb->u.adfs_sb.s_map_size;
+
+ if (start_zone >= max_zone) {
+ adfs_error (sb, "adfs_map_lookup", "fragment %X is invalid (zone = %d, max = %d)",
+ frag_id, start_zone, max_zone);
+ return 0;
+ }
+
+ /* Convert sector offset to map offset */
+ mapoff = adfs_convert_sector_to_map (sb, offset);
+ /* Calculate sector offset into map block */
+ secoff = offset - adfs_convert_map_to_sector (sb, mapoff);
+
+ do {
+ int result = lookup_zone (sb, zone, frag_id, &mapoff);
+
+ if (result != -1) {
+ result += zone ? (zone * sb->u.adfs_sb.s_zonesize) - (ADFS_DR_SIZE << 3): 0;
+ return adfs_convert_map_to_sector (sb, result) + secoff;
+ }
+
+ zone ++;
+ if (zone >= max_zone)
+ zone = 0;
+
+ } while (zone != start_zone);
+
+ adfs_error (sb, "adfs_map_lookup", "fragment %X at offset %d not found in map (start zone %d)",
+ frag_id, offset, start_zone);
+ return 0;
+}
--- /dev/null
+/*
+ * linux/fs/adfs/namei.c
+ *
+ * Copyright (C) 1997 Russell King
+ */
+
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/adfs_fs.h>
+#include <linux/fcntl.h>
+#include <linux/sched.h>
+#include <linux/stat.h>
+#include <linux/string.h>
+#include <linux/locks.h>
+
+/*
+ * NOTE! unlike strncmp, adfs_match returns 1 for success, 0 for failure
+ */
+static int adfs_match (int len, const char * const name, struct adfs_idir_entry *de)
+{
+ int i;
+
+ if (!de || len > ADFS_NAME_LEN)
+ return 0;
+ /*
+ * "" means "." ---> so paths like "/usr/lib//libc.a" work
+ */
+ if (!len && de->name_len == 1 && de->name[0] == '.' &&
+ de->name[1] == '\0')
+ return 1;
+ if (len != de->name_len)
+ return 0;
+
+ for (i = 0; i < len; i++)
+ if ((de->name[i] ^ name[i]) & 0x5f)
+ return 0;
+ return 1;
+}
+
+static int adfs_find_entry (struct inode *dir, const char * const name, int namelen,
+ struct adfs_idir_entry *ide)
+{
+ struct super_block *sb;
+ struct buffer_head *bh[4];
+ union adfs_dirtail dt;
+ unsigned long parent_object_id, dir_object_id;
+ int buffers, pos;
+
+ if (!dir || !S_ISDIR(dir->i_mode))
+ return 0;
+
+ sb = dir->i_sb;
+
+ if (adfs_inode_validate (dir)) {
+ adfs_error (sb, "adfs_find_entry",
+ "invalid inode number: %lu", dir->i_ino);
+ return 0;
+ }
+
+ if (namelen > ADFS_NAME_LEN)
+ return 0;
+
+ if (!(buffers = adfs_dir_read (dir, bh))) {
+ adfs_error (sb, "adfs_find_entry", "unable to read directory");
+ return 0;
+ }
+
+ if (adfs_dir_check (dir, bh, buffers, &dt)) {
+ adfs_dir_free (bh, buffers);
+ return 0;
+ }
+
+ parent_object_id = adfs_val (dt.new.dirparent, 3);
+ dir_object_id = adfs_inode_objid (dir);
+
+ if (namelen == 2 && name[0] == '.' && name[1] == '.') {
+ ide->name_len = 2;
+ ide->name[0] = ide->name[1] = '.';
+ ide->name[2] = '\0';
+ ide->inode_no = adfs_inode_generate (parent_object_id, 0);
+ adfs_dir_free (bh, buffers);
+ return 1;
+ }
+
+ pos = 5;
+
+ do {
+ if (!adfs_dir_get (sb, bh, buffers, pos, dir_object_id, ide))
+ break;
+
+ if (adfs_match (namelen, name, ide)) {
+ adfs_dir_free (bh, buffers);
+ return pos;
+ }
+ pos += 26;
+ } while (1);
+ adfs_dir_free (bh, buffers);
+ return 0;
+}
+
+int adfs_lookup (struct inode *dir, struct dentry *dentry)
+{
+ struct inode *inode = NULL;
+ struct adfs_idir_entry de;
+ unsigned long ino;
+
+ if (dentry->d_name.len > ADFS_NAME_LEN)
+ return -ENAMETOOLONG;
+
+ if (adfs_find_entry (dir, dentry->d_name.name, dentry->d_name.len, &de)) {
+ ino = de.inode_no;
+ inode = iget (dir->i_sb, ino);
+
+ if (!inode)
+ return -EACCES;
+ }
+ d_add(dentry, inode);
+ return 0;
+}
--- /dev/null
+/*
+ * linux/fs/adfs/super.c
+ *
+ * Copyright (C) 1997 Russell King
+ */
+
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/adfs_fs.h>
+#include <linux/malloc.h>
+#include <linux/sched.h>
+#include <linux/stat.h>
+#include <linux/string.h>
+#include <linux/locks.h>
+#include <linux/init.h>
+
+#include <asm/bitops.h>
+#include <asm/uaccess.h>
+#include <asm/system.h>
+
+#include <stdarg.h>
+
+static void adfs_put_super (struct super_block *sb);
+static int adfs_statfs (struct super_block *sb, struct statfs *buf, int bufsiz);
+void adfs_read_inode (struct inode *inode);
+
+void adfs_error (struct super_block *sb, const char *function, const char *fmt, ...)
+{
+ char error_buf[128];
+ va_list args;
+
+ va_start (args, fmt);
+ vsprintf (error_buf, fmt, args);
+ va_end (args);
+
+ printk (KERN_CRIT "ADFS-fs error (device %s)%s%s: %s\n",
+ kdevname (sb->s_dev), function ? ": " : "",
+ function ? function : "", error_buf);
+}
+
+unsigned char adfs_calccrosscheck (struct super_block *sb, char *map)
+{
+ unsigned int v0, v1, v2, v3;
+ int i;
+
+ v0 = v1 = v2 = v3 = 0;
+ for (i = sb->s_blocksize - 4; i; i -= 4) {
+ v0 += map[i] + (v3 >> 8);
+ v3 &= 0xff;
+ v1 += map[i + 1] + (v0 >> 8);
+ v0 &= 0xff;
+ v2 += map[i + 2] + (v1 >> 8);
+ v1 &= 0xff;
+ v3 += map[i + 3] + (v2 >> 8);
+ v2 &= 0xff;
+ }
+ v0 += v3 >> 8;
+ v1 += map[1] + (v0 >> 8);
+ v2 += map[2] + (v1 >> 8);
+ v3 += map[3] + (v2 >> 8);
+
+ return v0 ^ v1 ^ v2 ^ v3;
+}
+
+static int adfs_checkmap (struct super_block *sb)
+{
+ unsigned char crosscheck = 0, zonecheck = 1;
+ int i;
+
+ for (i = 0; i < sb->u.adfs_sb.s_map_size; i++) {
+ char *map;
+
+ map = sb->u.adfs_sb.s_map[i]->b_data;
+ if (adfs_calccrosscheck (sb, map) != map[0]) {
+ adfs_error (sb, "adfs_checkmap", "zone %d fails zonecheck", i);
+ zonecheck = 0;
+ }
+ crosscheck ^= map[3];
+ }
+ if (crosscheck != 0xff)
+ adfs_error (sb, "adfs_checkmap", "crosscheck != 0xff");
+ return crosscheck == 0xff && zonecheck;
+}
+
+static struct super_operations adfs_sops = {
+ adfs_read_inode,
+ NULL,
+ NULL,
+ NULL,
+ NULL,
+ adfs_put_super,
+ NULL,
+ adfs_statfs,
+ NULL
+};
+
+static void adfs_put_super (struct super_block *sb)
+{
+ int i;
+ lock_super (sb);
+ sb->s_dev = 0;
+ for (i = 0; i < sb->u.adfs_sb.s_map_size; i++)
+ brelse (sb->u.adfs_sb.s_map[i]);
+ kfree (sb->u.adfs_sb.s_map);
+ brelse (sb->u.adfs_sb.s_sbh);
+ unlock_super (sb);
+ MOD_DEC_USE_COUNT;
+}
+
+struct super_block *adfs_read_super (struct super_block *sb, void *data, int silent)
+{
+ struct adfs_discrecord *dr;
+ struct buffer_head *bh;
+ unsigned char *b_data;
+ kdev_t dev = sb->s_dev;
+ int i, j;
+
+ MOD_INC_USE_COUNT;
+ lock_super (sb);
+ set_blocksize (dev, BLOCK_SIZE);
+ if (!(bh = bread (dev, ADFS_DISCRECORD / BLOCK_SIZE, BLOCK_SIZE))) {
+ unlock_super (sb);
+ adfs_error (sb, NULL, "unable to read superblock");
+ MOD_DEC_USE_COUNT;
+ return NULL;
+ }
+
+ b_data = bh->b_data + (ADFS_DISCRECORD % BLOCK_SIZE);
+
+ if (adfs_checkbblk (b_data)) {
+ if (!silent)
+ printk ("VFS: Can't find an adfs filesystem on dev "
+ "%s.\n", kdevname(dev));
+failed_mount:
+ unlock_super (sb);
+ if (bh)
+ brelse (bh);
+ MOD_DEC_USE_COUNT;
+ return NULL;
+ }
+ dr = (struct adfs_discrecord *)(b_data + ADFS_DR_OFFSET);
+
+ sb->s_blocksize_bits = dr->log2secsize;
+ sb->s_blocksize = 1 << sb->s_blocksize_bits;
+ if (sb->s_blocksize != BLOCK_SIZE &&
+ (sb->s_blocksize == 512 || sb->s_blocksize == 1024 ||
+ sb->s_blocksize == 2048 || sb->s_blocksize == 4096)) {
+
+ brelse (bh);
+ set_blocksize (dev, sb->s_blocksize);
+ bh = bread (dev, ADFS_DISCRECORD / sb->s_blocksize, sb->s_blocksize);
+ if (!bh) {
+ adfs_error (sb, NULL, "couldn't read superblock on "
+ "2nd try.");
+ goto failed_mount;
+ }
+ b_data = bh->b_data + (ADFS_DISCRECORD % sb->s_blocksize);
+ if (adfs_checkbblk (b_data)) {
+ adfs_error (sb, NULL, "disc record mismatch, very weird!");
+ goto failed_mount;
+ }
+ dr = (struct adfs_discrecord *)(b_data + ADFS_DR_OFFSET);
+ }
+ if (sb->s_blocksize != bh->b_size) {
+ if (!silent)
+ printk (KERN_ERR "VFS: Unsupported blocksize on dev "
+ "%s.\n", kdevname (dev));
+ goto failed_mount;
+ }
+ /* blocksize on this device should now be set to the adfs log2secsize */
+
+ sb->u.adfs_sb.s_sbh = bh;
+ sb->u.adfs_sb.s_dr = dr;
+
+ /* s_zone_size = size of 1 zone (1 sector) * bits_in_byte - zone_spare =>
+ * number of map bits in a zone
+ */
+ sb->u.adfs_sb.s_zone_size = (8 << dr->log2secsize) - dr->zone_spare;
+
+ /* s_ids_per_zone = bit size of 1 zone / min. length of fragment block =>
+ * number of ids in one zone
+ */
+ sb->u.adfs_sb.s_ids_per_zone = sb->u.adfs_sb.s_zone_size / (dr->idlen + 1);
+
+ /* s_idlen = length of 1 id */
+ sb->u.adfs_sb.s_idlen = dr->idlen;
+
+ /* map size (in sectors) = number of zones */
+ sb->u.adfs_sb.s_map_size = dr->nzones;
+
+ /* zonesize = size of sector - zonespare */
+ sb->u.adfs_sb.s_zonesize = (sb->s_blocksize << 3) - dr->zone_spare;
+
+ /* map start (in sectors) = start of zone (number of zones) / 2 */
+ sb->u.adfs_sb.s_map_block = (dr->nzones >> 1) * sb->u.adfs_sb.s_zone_size -
+ ((dr->nzones > 1) ? 8 * ADFS_DR_SIZE : 0);
+
+ /* (signed) number of bits to shift left a map address to a sector address */
+ sb->u.adfs_sb.s_map2blk = dr->log2bpmb - dr->log2secsize;
+
+ if (sb->u.adfs_sb.s_map2blk >= 0)
+ sb->u.adfs_sb.s_map_block <<= sb->u.adfs_sb.s_map2blk;
+ else
+ sb->u.adfs_sb.s_map_block >>= -sb->u.adfs_sb.s_map2blk;
+
+ printk (KERN_DEBUG "ADFS: zone size %d, IDs per zone %d, map address %X size %d sectors\n",
+ sb->u.adfs_sb.s_zone_size, sb->u.adfs_sb.s_ids_per_zone,
+ sb->u.adfs_sb.s_map_block, sb->u.adfs_sb.s_map_size);
+ printk (KERN_DEBUG "ADFS: sector size %d, map bit size %d\n",
+ 1 << dr->log2secsize, 1 << dr->log2bpmb);
+
+ sb->s_magic = ADFS_SUPER_MAGIC;
+ sb->s_flags |= MS_RDONLY; /* we don't support writing yet */
+
+ sb->u.adfs_sb.s_map = kmalloc (sb->u.adfs_sb.s_map_size *
+ sizeof (struct buffer_head *), GFP_KERNEL);
+ if (sb->u.adfs_sb.s_map == NULL) {
+ adfs_error (sb, NULL, "not enough memory");
+ goto failed_mount;
+ }
+
+ for (i = 0; i < sb->u.adfs_sb.s_map_size; i++) {
+ sb->u.adfs_sb.s_map[i] = bread (dev,
+ sb->u.adfs_sb.s_map_block + i,
+ sb->s_blocksize);
+ if (!sb->u.adfs_sb.s_map[i]) {
+ for (j = 0; j < i; j++)
+ brelse (sb->u.adfs_sb.s_map[j]);
+ kfree (sb->u.adfs_sb.s_map);
+ adfs_error (sb, NULL, "unable to read map");
+ goto failed_mount;
+ }
+ }
+ if (!adfs_checkmap (sb)) {
+ for (i = 0; i < sb->u.adfs_sb.s_map_size; i++)
+ brelse (sb->u.adfs_sb.s_map[i]);
+ adfs_error (sb, NULL, "map corrupted");
+ goto failed_mount;
+ }
+
+ dr = (struct adfs_discrecord *)(sb->u.adfs_sb.s_map[0]->b_data + 4);
+ unlock_super (sb);
+
+ /*
+ * set up enough so that it can read an inode
+ */
+ sb->s_op = &adfs_sops;
+ sb->u.adfs_sb.s_root = adfs_inode_generate (dr->root, 0);
+ sb->s_root = d_alloc_root(iget(sb, sb->u.adfs_sb.s_root), NULL);
+
+ if (!sb->s_root) {
+ sb->s_dev = 0;
+ for (i = 0; i < sb->u.adfs_sb.s_map_size; i++)
+ brelse (sb->u.adfs_sb.s_map[i]);
+ brelse (bh);
+ adfs_error (sb, NULL, "get root inode failed");
+ MOD_DEC_USE_COUNT;
+ return NULL;
+ }
+ return sb;
+}
+
+static int adfs_statfs (struct super_block *sb, struct statfs *buf, int bufsiz)
+{
+ struct statfs tmp;
+ const unsigned int nidlen = sb->u.adfs_sb.s_idlen + 1;
+
+ tmp.f_type = ADFS_SUPER_MAGIC;
+ tmp.f_bsize = sb->s_blocksize;
+ tmp.f_blocks = (sb->u.adfs_sb.s_dr->disc_size) >> (sb->s_blocksize_bits);
+ tmp.f_files = tmp.f_blocks >> nidlen;
+ {
+ unsigned int i, j = 0;
+ const unsigned mask = (1 << (nidlen - 1)) - 1;
+ for (i = 0; i < sb->u.adfs_sb.s_map_size; i++) {
+ const char *map = sb->u.adfs_sb.s_map[i]->b_data;
+ unsigned freelink, mapindex = 24;
+ j -= nidlen;
+ do {
+ unsigned char k, l, m;
+ unsigned off = (mapindex - nidlen) >> 3;
+ unsigned rem;
+ const unsigned boff = mapindex & 7;
+
+ /* get next freelink */
+
+ k = map[off++];
+ l = map[off++];
+ m = map[off++];
+ freelink = (m << 16) | (l << 8) | k;
+ rem = freelink >> (boff + nidlen - 1);
+ freelink = (freelink >> boff) & mask;
+ mapindex += freelink;
+
+ /* find its length and add it to running total */
+
+ while (rem == 0) {
+ j += 8;
+ rem = map[off++];
+ }
+ if ((rem & 0xff) == 0) j+=8, rem>>=8;
+ if ((rem & 0xf) == 0) j+=4, rem>>=4;
+ if ((rem & 0x3) == 0) j+=2, rem>>=2;
+ if ((rem & 0x1) == 0) j+=1;
+ j += nidlen - boff;
+ if (freelink <= nidlen) break;
+ } while (mapindex < 8 * sb->s_blocksize);
+ if (mapindex > 8 * sb->s_blocksize)
+ adfs_error (sb, NULL, "oversized free fragment");
+ else if (freelink)
+ adfs_error (sb, NULL, "undersized free fragment");
+ }
+ tmp.f_bfree = tmp.f_bavail = j <<
+ (sb->u.adfs_sb.s_dr->log2bpmb - sb->s_blocksize_bits);
+ }
+ tmp.f_ffree = tmp.f_bfree >> nidlen;
+ tmp.f_namelen = ADFS_NAME_LEN;
+ return copy_to_user (buf, &tmp, bufsiz) ? -EFAULT : 0;
+}
+
+static struct file_system_type adfs_fs_type = {
+ "adfs", FS_REQUIRES_DEV, adfs_read_super, NULL
+};
+
+__initfunc(int init_adfs_fs (void))
+{
+ return register_filesystem (&adfs_fs_type);
+}
+
+#ifdef MODULE
+int init_module (void)
+{
+ int status;
+
+ if ((status = init_adfs_fs()) == 0)
+ register_symtab(0);
+ return status;
+}
+
+void cleanup_module (void)
+{
+ unregister_filesystem (&adfs_fs_type);
+}
+#endif
init_autofs_fs();
#endif
+#ifdef CONFIG_ADFS_FS
+ init_adfs_fs();
+#endif
+
#ifdef CONFIG_NLS
init_nls();
#endif
result = 0;
io_error:
- if (refresh)
+ /* Note: we don't refresh if the call returned error */
+ if (refresh && result >= 0)
nfs_refresh_inode(inode, &rqst.ra_fattr);
+ /* N.B. Use nfs_unlock_page here? */
clear_bit(PG_locked, &page->flags);
wake_up(&page->wait);
return result;
{
struct nfs_rreq *req = (struct nfs_rreq *) task->tk_calldata;
struct page *page = req->ra_page;
+ unsigned long address = page_address(page);
int result = task->tk_status;
static int succ = 0, fail = 0;
dprintk("NFS: %4d received callback for page %lx, result %d\n",
- task->tk_pid, page_address(page), result);
+ task->tk_pid, address, result);
if (result >= 0) {
result = req->ra_res.count;
if (result < PAGE_SIZE) {
- memset((char *) page_address(page) + result, 0,
- PAGE_SIZE - result);
+ memset((char *) address + result, 0, PAGE_SIZE - result);
}
nfs_refresh_inode(req->ra_inode, &req->ra_fattr);
set_bit(PG_uptodate, &page->flags);
fail++;
dprintk("NFS: %d successful reads, %d failures\n", succ, fail);
}
+ /* N.B. Use nfs_unlock_page here? */
clear_bit(PG_locked, &page->flags);
wake_up(&page->wait);
- free_page(page_address(page));
+ free_page(address);
rpc_release_task(task);
kfree(req);
nfs_readpage_async(struct dentry *dentry, struct inode *inode,
struct page *page)
{
+ unsigned long address = page_address(page);
struct nfs_rreq *req;
- int result, flags;
+ int result = -1, flags;
dprintk("NFS: nfs_readpage_async(%p)\n", page);
- flags = RPC_TASK_ASYNC | (IS_SWAPFILE(inode)? NFS_RPC_SWAPFLAGS : 0);
+ if (NFS_CONGESTED(inode))
+ goto out_defer;
- if (NFS_CONGESTED(inode)
- || !(req = (struct nfs_rreq *) rpc_allocate(flags, sizeof(*req)))) {
- dprintk("NFS: deferring async READ request.\n");
- return -1;
- }
+ /* N.B. Do we need to test? Never called for swapfile inode */
+ flags = RPC_TASK_ASYNC | (IS_SWAPFILE(inode)? NFS_RPC_SWAPFLAGS : 0);
+ req = (struct nfs_rreq *) rpc_allocate(flags, sizeof(*req));
+ if (!req)
+ goto out_defer;
/* Initialize request */
+ /* N.B. Will the dentry remain valid for life of request? */
nfs_readreq_setup(req, NFS_FH(dentry), page->offset,
- (void *) page_address(page), PAGE_SIZE);
+ (void *) address, PAGE_SIZE);
req->ra_inode = inode;
- req->ra_page = page;
+ req->ra_page = page; /* count has been incremented by caller */
/* Start the async call */
dprintk("NFS: executing async READ request.\n");
result = rpc_do_call(NFS_CLIENT(inode), NFSPROC_READ,
&req->ra_args, &req->ra_res, flags,
nfs_readpage_result, req);
+ if (result < 0)
+ goto out_free;
+ result = 0;
+out:
+ return result;
- if (result >= 0) {
- atomic_inc(&page->count);
- return 0;
- }
-
+out_defer:
+ dprintk("NFS: deferring async READ request.\n");
+ goto out;
+out_free:
dprintk("NFS: failed to enqueue async READ request.\n");
kfree(req);
- return -1;
+ goto out;
}
/*
nfs_readpage(struct dentry *dentry, struct page *page)
{
struct inode *inode = dentry->d_inode;
- unsigned long address;
int error = -1;
- dprintk("NFS: nfs_readpage %08lx\n", page_address(page));
+ dprintk("NFS: nfs_readpage (%p %ld@%ld)\n",
+ page, PAGE_SIZE, page->offset);
set_bit(PG_locked, &page->flags);
- address = page_address(page);
atomic_inc(&page->count);
if (!IS_SWAPFILE(inode) && !PageError(page) &&
NFS_SERVER(inode)->rsize >= PAGE_SIZE)
error = nfs_readpage_async(dentry, inode, page);
- if (error < 0) /* couldn't enqueue */
+ if (error < 0) { /* couldn't enqueue */
error = nfs_readpage_sync(dentry, inode, page);
- if (error < 0 && IS_SWAPFILE(inode))
- printk("Aiee.. nfs swap-in of page failed!\n");
- free_page(address);
+ if (error < 0 && IS_SWAPFILE(inode))
+ printk("Aiee.. nfs swap-in of page failed!\n");
+ free_page(page_address(page));
+ }
return error;
}
req->wb_flags |= NFS_WRITE_LOCKED;
rpc_wake_up_task(&req->wb_task);
- dprintk("nfs: wake up task %d (flags %x)\n",
+ dprintk("NFS: wake up task %d (flags %x)\n",
req->wb_task.tk_pid, req->wb_flags);
}
} while (count);
io_error:
- /* N.B. do we want to refresh if there was an error?? (fattr valid?) */
- if (refresh) {
+ /* Note: we don't refresh if the call failed (fattr invalid) */
+ if (refresh && result >= 0) {
/* See comments in nfs_wback_result */
/* N.B. I don't think this is right -- sync writes in order */
if (fattr.size < inode->i_size)
nfs_updatepage(struct dentry *dentry, struct page *page, const char *buffer,
unsigned long offset, unsigned int count, int sync)
{
- struct inode *inode = dentry->d_inode;
+ struct inode *inode = dentry->d_inode;
+ u8 *page_addr = (u8 *) page_address(page);
struct nfs_wreq *req;
int status = 0, page_locked = 1;
- u8 *page_addr;
dprintk("NFS: nfs_updatepage(%s/%s %d@%ld, sync=%d)\n",
dentry->d_parent->d_name.name, dentry->d_name.name,
count, page->offset+offset, sync);
set_bit(PG_locked, &page->flags);
- page_addr = (u8 *) page_address(page);
-
- /* If wsize is smaller than page size, update and write
- * page synchronously.
- */
- if (NFS_SERVER(inode)->wsize < PAGE_SIZE) {
- copy_from_user(page_addr + offset, buffer, count);
- return nfs_writepage_sync(dentry, inode, page, offset, count);
- }
/*
* Try to find a corresponding request on the writeback queue.
*/
if ((req = find_write_request(inode, page)) != NULL) {
if (update_write_request(req, offset, count)) {
+ /* N.B. check for a fault here and cancel the req */
copy_from_user(page_addr + offset, buffer, count);
goto updated;
}
return 0;
}
+ /* Copy data to page buffer. */
+ status = -EFAULT;
+ if (copy_from_user(page_addr + offset, buffer, count))
+ goto done;
+
+ /* If wsize is smaller than page size, update and write
+ * page synchronously.
+ */
+ if (NFS_SERVER(inode)->wsize < PAGE_SIZE)
+ return nfs_writepage_sync(dentry, inode, page, offset, count);
+
/* Create the write request. */
status = -ENOBUFS;
req = create_write_request(dentry, inode, page, offset, count);
if (!req)
goto done;
- /* Copy data to page buffer. */
- /* N.B. should check for fault here ... */
- copy_from_user(page_addr + offset, buffer, count);
-
/* Schedule request */
page_locked = schedule_write_request(req, sync);
if ((count = nfs_write_error(inode)) < 0)
status = count;
}
- } else
+ } else {
+ if (status < 0) {
+			printk("NFS: %s/%s write failed, clearing bit\n",
+				dentry->d_parent->d_name.name, dentry->d_name.name);
+ clear_bit(PG_uptodate, &page->flags);
+ }
nfs_unlock_page(page);
+ }
}
dprintk("NFS: nfs_updatepage returns %d (isize %ld)\n",
req->wb_task.tk_pid,
req->wb_inode->i_dev, req->wb_inode->i_ino,
req->wb_page->offset, req->wb_flags);
- if (!WB_INPROGRESS(req)) {
- rqoffset = req->wb_page->offset + req->wb_offset;
- rqend = rqoffset + req->wb_bytes;
-
- if (rqoffset < end && offset < rqend
+ rqoffset = req->wb_page->offset + req->wb_offset;
+ rqend = rqoffset + req->wb_bytes;
+
+ if (rqoffset < end && offset < rqend
&& (pid == 0 || req->wb_pid == pid)) {
- if (!WB_HAVELOCK(req)) {
+ if (!WB_INPROGRESS(req) && !WB_HAVELOCK(req)) {
#ifdef NFS_DEBUG_VERBOSE
printk("nfs_flush: flushing inode=%ld, %d @ %lu\n",
req->wb_inode->i_ino, req->wb_bytes, rqoffset);
#endif
- nfs_flush_request(req);
- }
- last = req;
+ nfs_flush_request(req);
}
+ last = req;
}
if (invalidate)
req->wb_flags |= NFS_WRITE_INVALIDATE;
struct page *page = req->wb_page;
struct dentry *dentry = req->wb_dentry;
- dprintk("NFS: %4d nfs_wback_lock (status %d flags %x)\n",
- task->tk_pid, task->tk_status, req->wb_flags);
+ dprintk("NFS: %4d nfs_wback_lock (%s/%s, status=%d flags=%x)\n",
+ task->tk_pid, dentry->d_parent->d_name.name,
+ dentry->d_name.name, task->tk_status, req->wb_flags);
if (!WB_HAVELOCK(req))
req->wb_flags |= NFS_WRITE_WANTLOCK;
- if (WB_WANTLOCK(req) && test_and_set_bit(PG_locked, &page->flags)) {
- printk("NFS: page already locked in writeback_lock!\n");
- task->tk_timeout = 2 * HZ;
- rpc_sleep_on(&write_queue, task, NULL, NULL);
- return;
- }
- task->tk_status = 0;
+ if (WB_WANTLOCK(req) && test_and_set_bit(PG_locked, &page->flags))
+ goto out_locked;
req->wb_flags &= ~NFS_WRITE_WANTLOCK;
req->wb_flags |= NFS_WRITE_LOCKED;
+ task->tk_status = 0;
if (req->wb_args == 0) {
size_t size = sizeof(struct nfs_writeargs)
+ sizeof(struct nfs_fattr);
void *ptr;
- if (!(ptr = kmalloc(size, GFP_KERNEL))) {
- task->tk_timeout = HZ;
- rpc_sleep_on(&write_queue, task, NULL, NULL);
- return;
- }
+ if (!(ptr = kmalloc(size, GFP_KERNEL)))
+ goto out_no_args;
req->wb_args = (struct nfs_writeargs *) ptr;
req->wb_fattr = (struct nfs_fattr *) (req->wb_args + 1);
}
rpc_call_setup(task, NFSPROC_WRITE, req->wb_args, req->wb_fattr, 0);
req->wb_flags |= NFS_WRITE_INPROGRESS;
+ return;
+
+out_locked:
+ printk("NFS: page already locked in writeback_lock!\n");
+ task->tk_timeout = 2 * HZ;
+ rpc_sleep_on(&write_queue, task, NULL, NULL);
+ return;
+out_no_args:
+ printk("NFS: can't alloc args, sleeping\n");
+ task->tk_timeout = HZ;
+ rpc_sleep_on(&write_queue, task, NULL, NULL);
+ return;
}
/*
struct page *page = req->wb_page;
int status = task->tk_status;
- dprintk("NFS: %4d nfs_wback_result (status %d)\n",
- task->tk_pid, status);
+ dprintk("NFS: %4d nfs_wback_result (%s/%s, status=%d, flags=%x)\n",
+ task->tk_pid, req->wb_dentry->d_parent->d_name.name,
+ req->wb_dentry->d_name.name, status, req->wb_flags);
+ /* Set the WRITE_COMPLETE flag, but leave INPROGRESS set */
+ req->wb_flags |= NFS_WRITE_COMPLETE;
if (status < 0) {
/*
* An error occurred. Report the error back to the
static int get_kstat(char * buffer)
{
- int i, len;
+ int i, j, len;
unsigned sum = 0;
extern unsigned long total_forks;
unsigned long ticks;
ticks = jiffies * smp_num_cpus;
+#ifndef __SMP__
for (i = 0 ; i < NR_IRQS ; i++)
- sum += kstat.interrupts[i];
+ sum += kstat.interrupts[0][i];
+#else
+ for (j = 0 ; j < smp_num_cpus ; j++)
+ for (i = 0 ; i < NR_IRQS ; i++)
+ sum += kstat.interrupts[cpu_logical_map[j]][i];
+#endif
+
#ifdef __SMP__
len = sprintf(buffer,
"cpu %u %u %u %lu\n",
kstat.pswpin,
kstat.pswpout,
sum);
- for (i = 0 ; i < NR_IRQS ; i++)
- len += sprintf(buffer + len, " %u", kstat.interrupts[i]);
+ for (i = 0 ; i < NR_IRQS ; i++) {
+#ifndef __SMP__
+ len += sprintf(buffer + len, " %u", kstat.interrupts[0][i]);
+#else
+ int sum=0;
+
+ for (j = 0 ; j < smp_num_cpus ; j++)
+ sum += kstat.interrupts[cpu_logical_map[j]][i];
+ len += sprintf(buffer + len, " %u", sum);
+#endif
+ }
len += sprintf(buffer + len,
"\nctxt %u\n"
"btime %lu\n"
extern int get_rtc_status (char *);
extern int get_locks_status (char *, char **, off_t, int);
extern int get_swaparea_info (char *);
-#ifdef __SMP_PROF__
-extern int get_smp_prof_list(char *);
-#endif
#ifdef CONFIG_ZORRO
extern int zorro_get_list(char *);
#endif
#ifdef CONFIG_BLK_DEV_MD
case PROC_MD:
return get_md_status(page);
-#endif
-#ifdef __SMP_PROF__
- case PROC_SMP_PROF:
- return get_smp_prof_list(page);
#endif
case PROC_CMDLINE:
return get_cmdline(page);
unsigned long pages;
if ((1 << alloced) * PAGE_SIZE < (n + 2) * sizeof(openpromfs_node)) {
- pages = __get_free_pages (GFP_KERNEL, alloced + 1, 0);
+ pages = __get_free_pages (GFP_KERNEL, alloced + 1);
if (!pages)
return -1;
if (!romvec->pv_romvers)
return RET(ENODEV);
#endif
- nodes = (openpromfs_node *)__get_free_pages(GFP_KERNEL, 0, 0);
+ nodes = (openpromfs_node *)__get_free_pages(GFP_KERNEL, 0);
if (!nodes) {
printk (KERN_WARNING "/proc/openprom: can't get free page\n");
return RET(EIO);
S_IFREG | S_IRUGO, 1, 0, 0,
0, &proc_array_inode_operations
};
-#ifdef __SMP_PROF__
-static struct proc_dir_entry proc_root_smp = {
- PROC_SMP_PROF, 3,"smp",
- S_IFREG | S_IRUGO, 1, 0, 0,
- 0, &proc_array_inode_operations
-};
-#endif
static struct proc_dir_entry proc_root_filesystems = {
PROC_FILESYSTEMS, 11,"filesystems",
S_IFREG | S_IRUGO, 1, 0, 0,
proc_register(&proc_root, &proc_root_stat);
proc_register(&proc_root, &proc_root_devices);
proc_register(&proc_root, &proc_root_interrupts);
-#ifdef __SMP_PROF__
- proc_register(&proc_root, &proc_root_smp);
-#endif
proc_register(&proc_root, &proc_root_filesystems);
proc_register(&proc_root, &proc_root_dma);
proc_register(&proc_root, &proc_root_ioports);
--- /dev/null
+#ifndef __ARM_A_OUT_H__
+#define __ARM_A_OUT_H__
+
+struct exec
+{
+ unsigned long a_info; /* Use macros N_MAGIC, etc for access */
+ unsigned a_text; /* length of text, in bytes */
+ unsigned a_data; /* length of data, in bytes */
+ unsigned a_bss; /* length of uninitialized data area for file, in bytes */
+ unsigned a_syms; /* length of symbol table data in file, in bytes */
+ unsigned a_entry; /* start address */
+ unsigned a_trsize; /* length of relocation info for text, in bytes */
+ unsigned a_drsize; /* length of relocation info for data, in bytes */
+};
+
+/*
+ * This is always the same
+ */
+#define N_TXTADDR(a) (0x00008000)
+
+#define N_TRSIZE(a) ((a).a_trsize)
+#define N_DRSIZE(a) ((a).a_drsize)
+#define N_SYMSIZE(a) ((a).a_syms)
+
+#define M_ARM 103
+
+#include <asm/arch/a.out.h>
+#endif /* __ARM_A_OUT_H__ */
--- /dev/null
+/*
+ * arcaudio.h
+ *
+ */
+
+#ifndef _LINUX_ARCAUDIO_H
+#define _LINUX_ARCAUDIO_H
+
+#define ARCAUDIO_MAXCHANNELS 8
+
+enum ch_type
+{
+ ARCAUDIO_NONE, /* No sound (muted) */
+ ARCAUDIO_8BITSIGNED, /* signed 8 bits per sample */
+ ARCAUDIO_8BITUNSIGNED, /* unsigned 8 bits per sample */
+ ARCAUDIO_16BITSIGNED, /* signed 16 bits per sample (little endian) */
+ ARCAUDIO_16BITUNSIGNED, /* unsigned 16 bits per sample (little endian) */
+ ARCAUDIO_LOG /* Vidc Log */
+};
+
+/*
+ * Global information
+ */
+struct arcaudio
+{
+ int sample_rate; /* sample rate (Hz) */
+ int num_channels; /* number of channels */
+ int volume; /* overall system volume */
+};
+
+/*
+ * Per channel information
+ */
+struct arcaudio_channel
+{
+ int stereo_position; /* Channel position */
+ int channel_volume; /* Channel volume */
+ enum ch_type channel_type; /* Type of channel */
+ int buffer_size; /* Size of channel buffer */
+};
+
+/* IOCTLS */
+#define ARCAUDIO_GETINFO 0x6101
+#define ARCAUDIO_SETINFO 0x6102
+#define ARCAUDIO_GETCHANNELINFO 0x6111
+#define ARCAUDIO_SETCHANNELINFO 0x6112
+#define ARCAUDIO_GETOPTS 0x61f0
+#define ARCAUDIO_SETOPTS 0x61f1
+#define ARCAUDIO_OPTSPKR (1<<0)
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/a.out.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_ARCH_A_OUT_H
+#define __ASM_ARCH_A_OUT_H
+
+#ifdef __KERNEL__
+#define STACK_TOP (0x01a00000)
+#define LIBRARY_START_TEXT (0x00c00000)
+#endif
+
+#endif
+
--- /dev/null
+#ifndef __ASM_ARCH_DMA_H
+#define __ASM_ARCH_DMA_H
+
+#define MAX_DMA_ADDRESS 0x03000000
+
+#ifdef KERNEL_ARCH_DMA
+
+static inline void arch_disable_dma (int dmanr)
+{
+ printk (dma_str, "arch_disable_dma", dmanr);
+}
+
+static inline void arch_enable_dma (int dmanr)
+{
+ printk (dma_str, "arch_enable_dma", dmanr);
+}
+
+static inline void arch_set_dma_addr (int dmanr, unsigned int addr)
+{
+ printk (dma_str, "arch_set_dma_addr", dmanr);
+}
+
+static inline void arch_set_dma_count (int dmanr, unsigned int count)
+{
+ printk (dma_str, "arch_set_dma_count", dmanr);
+}
+
+static inline void arch_set_dma_mode (int dmanr, char mode)
+{
+ printk (dma_str, "arch_set_dma_mode", dmanr);
+}
+
+static inline int arch_dma_count (int dmanr)
+{
+ printk (dma_str, "arch_dma_count", dmanr);
+ return 0;
+}
+
+#endif
+
+/* enable/disable a specific DMA channel */
+extern void enable_dma(unsigned int dmanr);
+
+static __inline__ void disable_dma(unsigned int dmanr)
+{
+ switch(dmanr) {
+ case 2: disable_irq(64); break;
+ default: printk (dma_str, "disable_dma", dmanr); break;
+ }
+}
+
+/* Clear the 'DMA Pointer Flip Flop'.
+ * Write 0 for LSB/MSB, 1 for MSB/LSB access.
+ * Use this once to initialize the FF to a known state.
+ * After that, keep track of it. :-)
+ * --- In order to do that, the DMA routines below should ---
+ * --- only be used while interrupts are disabled! ---
+ */
+static __inline__ void clear_dma_ff(unsigned int dmanr)
+{
+ switch(dmanr) {
+ case 2: break;
+ default: printk (dma_str, "clear_dma_ff", dmanr); break;
+ }
+}
+
+/* set mode (above) for a specific DMA channel */
+extern void set_dma_mode(unsigned int dmanr, char mode);
+
+/* Set only the page register bits of the transfer address.
+ * This is used for successive transfers when we know the contents of
+ * the lower 16 bits of the DMA current address register, but a 64k boundary
+ * may have been crossed.
+ */
+static __inline__ void set_dma_page(unsigned int dmanr, char pagenr)
+{
+ printk (dma_str, "set_dma_page", dmanr);
+}
+
+
+/* Set transfer address & page bits for specific DMA channel.
+ * Assumes dma flipflop is clear.
+ */
+extern void set_dma_addr(unsigned int dmanr, unsigned int addr);
+
+/* Set transfer size for a specific DMA channel.
+ */
+extern void set_dma_count(unsigned int dmanr, unsigned int count);
+
+/* Get DMA residue count. After a DMA transfer, this
+ * should return zero. Reading this while a DMA transfer is
+ * still in progress will return unpredictable results.
+ * If called before the channel has been used, it may return 1.
+ * Otherwise, it returns the number of _bytes_ left to transfer.
+ *
+ * Assumes DMA flip-flop is clear.
+ */
+extern int get_dma_residue(unsigned int dmanr);
+
+#endif /* __ASM_ARCH_DMA_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/hardware.h
+ *
+ * Copyright (C) 1996 Russell King.
+ *
+ * This file contains the hardware definitions of the A5000 series machines.
+ */
+
+#ifndef __ASM_ARCH_HARDWARE_H
+#define __ASM_ARCH_HARDWARE_H
+
+/*
+ * What hardware must be present
+ */
+#define HAS_IOC
+#define HAS_PCIO
+#define HAS_MEMC
+#define HAS_MEMC1A
+#define HAS_VIDC
+
+/*
+ * Optional hardware
+ */
+#define HAS_EXPMASK
+
+#ifndef __ASSEMBLER__
+
+/*
+ * for use with inb/outb
+ */
+#define VIDC_BASE 0x80100000
+#define IOCEC4IO_BASE 0x8009c000
+#define IOCECIO_BASE 0x80090000
+#define IOC_BASE 0x80080000
+#define MEMCECIO_BASE 0x80000000
+
+/*
+ * IO definitions
+ */
+#define EXPMASK_BASE ((volatile unsigned char *)0x03360000)
+#define IOEB_BASE ((volatile unsigned char *)0x03350050)
+#define PCIO_FLOPPYDMABASE ((volatile unsigned char *)0x0302a000)
+#define PCIO_BASE 0x03010000
+
+/*
+ * Mapping areas
+ */
+#define IO_END 0x03ffffff
+#define IO_BASE 0x03000000
+#define IO_SIZE (IO_END - IO_BASE)
+#define IO_START 0x03000000
+
+/*
+ * Screen mapping information
+ */
+#define SCREEN2_END 0x02078000
+#define SCREEN2_BASE 0x02000000
+#define SCREEN1_END SCREEN2_BASE
+#define SCREEN1_BASE 0x01f88000
+#define SCREEN_START 0x02000000
+
+/*
+ * RAM definitions
+ */
+#define MAPTOPHYS(a) (((unsigned long)a & 0x007fffff) + PAGE_OFFSET)
+#define KERNTOPHYS(a) ((((unsigned long)(&a)) & 0x007fffff) + PAGE_OFFSET)
+#define GET_MEMORY_END(p) (PAGE_OFFSET + (p->u1.s.page_size) * (p->u1.s.nr_pages))
+#define PARAMS_BASE (PAGE_OFFSET + 0x7c000)
+#define KERNEL_BASE (PAGE_OFFSET + 0x80000)
+
+#else
+
+#define IOEB_BASE 0x03350050
+#define IOC_BASE 0x03200000
+#define PCIO_FLOPPYDMABASE 0x0302a000
+#define PCIO_BASE 0x03010000
+#define IO_BASE 0x03000000
+
+#endif
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/ide.h
+ *
+ * Copyright (c) 1997 Russell King
+ */
+
+static __inline__ int
+ide_default_irq(ide_ioreg_t base)
+{
+ if (base == 0x1f0)
+ return 11;
+ return 0;
+}
+
+static __inline__ ide_ioreg_t
+ide_default_io_base(int index)
+{
+ if (index == 0)
+ return 0x1f0;
+ return 0;
+}
+
+static __inline__ int
+ide_default_stepping(int index)
+{
+ return 0;
+}
+
+static __inline__ void
+ide_init_hwif_ports (ide_ioreg_t *p, ide_ioreg_t base, int stepping, int *irq)
+{
+ ide_ioreg_t port = base;
+ ide_ioreg_t ctrl = base + 0x206;
+ int i;
+
+ i = 8;
+ while (i--) {
+ *p++ = port;
+ port += 1 << stepping;
+ }
+ *p++ = ctrl;
+ if (irq != NULL)
+ *irq = 0;
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/io.h
+ *
+ * Copyright (C) 1997 Russell King
+ *
+ * Modifications:
+ * 06-Dec-1997 RMK Created.
+ */
+#ifndef __ASM_ARM_ARCH_IO_H
+#define __ASM_ARM_ARCH_IO_H
+
+/*
+ * Virtual view <-> DMA view memory address translations
+ * virt_to_bus: Used to translate the virtual address to an
+ * address suitable to be passed to set_dma_addr
+ * bus_to_virt: Used to convert an address for DMA operations
+ * to an address that the kernel can use.
+ */
+#define virt_to_bus(x) ((unsigned long)(x))
+#define bus_to_virt(x) ((void *)(x))
+
+/*
+ * This architecture does not require any delayed IO, and
+ * has the constant-optimised IO
+ */
+#undef ARCH_IO_DELAY
+
+/*
+ * We use two different types of addressing - PC style addresses, and ARM
+ * addresses. PC style accesses the PC hardware with the normal PC IO
+ * addresses, eg 0x3f8 for serial#1. ARM addresses are 0x80000000+
+ * and are translated to the start of IO. Note that all addresses are
+ * shifted left!
+ */
+#define __PORT_PCIO(x) (!((x) & 0x80000000))
+
+/*
+ * Dynamic IO functions - let the compiler
+ * optimize the expressions
+ */
+extern __inline__ void __outb (unsigned int value, unsigned int port)
+{
+ unsigned long temp;
+ __asm__ __volatile__(
+ "tst %2, #0x80000000\n\t"
+ "mov %0, %4\n\t"
+ "addeq %0, %0, %3\n\t"
+ "strb %1, [%0, %2, lsl #2]"
+ : "=&r" (temp)
+ : "r" (value), "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE)
+ : "cc");
+}
+
+extern __inline__ void __outw (unsigned int value, unsigned int port)
+{
+ unsigned long temp;
+ __asm__ __volatile__(
+ "tst %2, #0x80000000\n\t"
+ "mov %0, %4\n\t"
+ "addeq %0, %0, %3\n\t"
+ "str %1, [%0, %2, lsl #2]"
+ : "=&r" (temp)
+ : "r" (value|value<<16), "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE)
+ : "cc");
+}
+
+extern __inline__ void __outl (unsigned int value, unsigned int port)
+{
+ unsigned long temp;
+ __asm__ __volatile__(
+ "tst %2, #0x80000000\n\t"
+ "mov %0, %4\n\t"
+ "addeq %0, %0, %3\n\t"
+ "str %1, [%0, %2, lsl #2]"
+ : "=&r" (temp)
+ : "r" (value), "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE)
+ : "cc");
+}
+
+#define DECLARE_DYN_IN(sz,fnsuffix,instr) \
+extern __inline__ unsigned sz __in##fnsuffix (unsigned int port) \
+{ \
+ unsigned long temp, value; \
+ __asm__ __volatile__( \
+ "tst %2, #0x80000000\n\t" \
+ "mov %0, %4\n\t" \
+ "addeq %0, %0, %3\n\t" \
+ "ldr" ##instr## " %1, [%0, %2, lsl #2]" \
+ : "=&r" (temp), "=r" (value) \
+ : "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE) \
+ : "cc"); \
+ return (unsigned sz)value; \
+}
+
+extern __inline__ unsigned int __ioaddr (unsigned int port)
+{
+ if (__PORT_PCIO(port))
+ return (unsigned int)(PCIO_BASE + (port << 2));
+ else
+ return (unsigned int)(IO_BASE + (port << 2));
+}
+
+#define DECLARE_IO(sz,fnsuffix,instr) \
+ DECLARE_DYN_IN(sz,fnsuffix,instr)
+
+DECLARE_IO(char,b,"b")
+DECLARE_IO(short,w,"")
+DECLARE_IO(long,l,"")
+
+#undef DECLARE_IO
+#undef DECLARE_DYN_IN
+
+/*
+ * Constant address IO functions
+ *
+ * These have to be macros for the 'J' constraint to work -
+ * +/-4096 immediate operand.
+ */
+#define __outbc(value,port) \
+({ \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "strb %0, [%1, %2]" \
+ : : "r" (value), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "strb %0, [%1, %2]" \
+ : : "r" (value), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inbc(port) \
+({ \
+ unsigned char result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldrb %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldrb %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result; \
+})
+
+#define __outwc(value,port) \
+({ \
+ unsigned long v = value; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v|v<<16), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v|v<<16), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inwc(port) \
+({ \
+ unsigned short result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result & 0xffff; \
+})
+
+#define __outlc(v,p) __outwc((v),(p))
+
+#define __inlc(port) \
+({ \
+ unsigned long result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result; \
+})
+
+#define __ioaddrc(port) \
+({ \
+ unsigned long addr; \
+ if (__PORT_PCIO((port))) \
+ addr = PCIO_BASE + ((port) << 2); \
+ else \
+ addr = IO_BASE + ((port) << 2); \
+ addr; \
+})
+
+/*
+ * Translated address IO functions
+ *
+ * IO address has already been translated to a virtual address
+ */
+#define outb_t(v,p) \
+ (*(volatile unsigned char *)(p) = (v))
+
+#define inb_t(p) \
+ (*(volatile unsigned char *)(p))
+
+#define outl_t(v,p) \
+ (*(volatile unsigned long *)(p) = (v))
+
+#define inl_t(p) \
+ (*(volatile unsigned long *)(p))
+
+#endif
--- /dev/null
+/*
+ * include/asm-arm/arch-a5k/irq.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * Changelog:
+ * 24-09-1996 RMK Created
+ * 10-10-1996 RMK Brought up to date with arch-sa110eval
+ * 22-10-1996 RMK Changed interrupt numbers & uses new inb/outb macros
+ * 11-01-1998 RMK Added mask_and_ack_irq
+ */
+
+#define BUILD_IRQ(s,n,m) \
+ void IRQ##n##_interrupt(void); \
+ void fast_IRQ##n##_interrupt(void); \
+ void bad_IRQ##n##_interrupt(void); \
+ void probe_IRQ##n##_interrupt(void);
+
+/*
+ * The timer is a special interrupt
+ */
+#define IRQ5_interrupt timer_IRQ_interrupt
+
+#define IRQ_INTERRUPT(n) IRQ##n##_interrupt
+#define FAST_INTERRUPT(n) fast_IRQ##n##_interrupt
+#define BAD_INTERRUPT(n) bad_IRQ##n##_interrupt
+#define PROBE_INTERRUPT(n) probe_IRQ##n##_interrupt
+
+#define X(x) (x)|0x01, (x)|0x02, (x)|0x04, (x)|0x08, (x)|0x10, (x)|0x20, (x)|0x40, (x)|0x80
+#define Z(x) (x), (x), (x), (x), (x), (x), (x), (x)
+
+static __inline__ void mask_and_ack_irq(unsigned int irq)
+{
+ static const int addrmasks[] = {
+ X((IOC_IRQMASKA - IOC_BASE)<<18 | (1 << 15)),
+ X((IOC_IRQMASKB - IOC_BASE)<<18),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ X((IOC_FIQMASK - IOC_BASE)<<18),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0)
+ };
+ unsigned int temp1, temp2;
+
+ __asm__ __volatile__(
+" ldr %1, [%5, %3, lsl #2]\n"
+" teq %1, #0\n"
+" beq 2f\n"
+" ldrb %0, [%2, %1, lsr #16]\n"
+" bic %0, %0, %1\n"
+" strb %0, [%2, %1, lsr #16]\n"
+" tst %1, #0x8000\n" /* do we need an IRQ clear? */
+" strneb %1, [%2, %4]\n"
+"2:"
+ : "=&r" (temp1), "=&r" (temp2)
+ : "r" (ioaddr(IOC_BASE)), "r" (irq),
+ "I" ((IOC_IRQCLRA - IOC_BASE) << 2), "r" (addrmasks));
+}
+
+#undef X
+#undef Z
+
+static __inline__ void mask_irq(unsigned int irq)
+{
+ extern void ecard_disableirq (unsigned int);
+ extern void ecard_disablefiq (unsigned int);
+ unsigned char mask = 1 << (irq & 7);
+
+ switch (irq >> 3) {
+ case 0:
+ outb(inb(IOC_IRQMASKA) & ~mask, IOC_IRQMASKA);
+ break;
+ case 1:
+ outb(inb(IOC_IRQMASKB) & ~mask, IOC_IRQMASKB);
+ break;
+ case 4:
+ ecard_disableirq (irq & 7);
+ break;
+ case 8:
+ outb(inb(IOC_FIQMASK) & ~mask, IOC_FIQMASK);
+ break;
+ case 12:
+ ecard_disablefiq (irq & 7);
+ }
+}
+
+static __inline__ void unmask_irq(unsigned int irq)
+{
+ extern void ecard_enableirq (unsigned int);
+ extern void ecard_enablefiq (unsigned int);
+ unsigned char mask = 1 << (irq & 7);
+
+ switch (irq >> 3) {
+ case 0:
+ outb(inb(IOC_IRQMASKA) | mask, IOC_IRQMASKA);
+ break;
+ case 1:
+ outb(inb(IOC_IRQMASKB) | mask, IOC_IRQMASKB);
+ break;
+ case 4:
+ ecard_enableirq (irq & 7);
+ break;
+ case 8:
+ outb(inb(IOC_FIQMASK) | mask, IOC_FIQMASK);
+ break;
+ case 12:
+ ecard_enablefiq (irq & 7);
+ }
+}
+
+static __inline__ unsigned long get_enabled_irqs(void)
+{
+ return inb(IOC_IRQMASKA) | inb(IOC_IRQMASKB) << 8;
+}
+
+static __inline__ void irq_init_irq(void)
+{
+ outb(0, IOC_IRQMASKA);
+ outb(0, IOC_IRQMASKB);
+ outb(0, IOC_FIQMASK);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/irqs.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#define IRQ_PRINTER 0
+#define IRQ_BATLOW 1
+#define IRQ_FLOPPYINDEX 2
+#define IRQ_VSYNCPULSE 3
+#define IRQ_POWERON 4
+#define IRQ_TIMER0 5
+#define IRQ_TIMER1 6
+#define IRQ_IMMEDIATE 7
+#define IRQ_EXPCARDFIQ 8
+#define IRQ_SOUNDCHANGE 9
+#define IRQ_SERIALPORT 10
+#define IRQ_HARDDISK 11
+#define IRQ_FLOPPYDISK 12
+#define IRQ_EXPANSIONCARD 13
+#define IRQ_KEYBOARDTX 14
+#define IRQ_KEYBOARDRX 15
+
+#define FIQ_FLOPPYDATA 0
+#define FIQ_ECONET 2
+#define FIQ_SERIALPORT 4
+#define FIQ_EXPANSIONCARD 6
+#define FIQ_FORCE 7
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/mmu.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 22-11-1996 RMK Created
+ */
+#ifndef __ASM_ARCH_MMU_H
+#define __ASM_ARCH_MMU_H
+
+#define __virt_to_phys(vpage) vpage
+#define __phys_to_virt(ppage) ppage
+
+#endif
--- /dev/null
+/*
+ * Dummy oldlatches.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifdef __need_oldlatches
+#error "Old latches not present in this (a5k) machine"
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/processor.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 10-09-1996 RMK Created
+ */
+
+#ifndef __ASM_ARCH_PROCESSOR_H
+#define __ASM_ARCH_PROCESSOR_H
+
+/*
+ * Bus types
+ */
+#define EISA_bus 0
+#define EISA_bus__is_a_macro /* for versions in ksyms.c */
+#define MCA_bus 0
+#define MCA_bus__is_a_macro /* for versions in ksyms.c */
+
+/*
+ * User space: 26MB
+ */
+#define TASK_SIZE (0x01a00000UL)
+
+/* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+
+#define INIT_MMAP \
+{ &init_mm, 0, 0x02000000, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, NULL, &init_mm.mmap }
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/serial.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 15-10-1996 RMK Created
+ */
+#ifndef __ASM_ARCH_SERIAL_H
+#define __ASM_ARCH_SERIAL_H
+
+/*
+ * This assumes you have a 1.8432 MHz clock for your UART.
+ *
+ * It'd be nice if someone built a serial card with a 24.576 MHz
+ * clock, since the 16550A is capable of handling a top speed of 1.5
+ * megabits/second; but this requires the faster clock.
+ */
+#define BASE_BAUD (1843200 / 16)
+
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+
+ /* UART CLK PORT IRQ FLAGS */
+#define RS_UARTS \
+ { 0, BASE_BAUD, 0x3F8, 10, STD_COM_FLAGS }, /* ttyS0 */ \
+ { 0, BASE_BAUD, 0x2F8, 10, STD_COM_FLAGS }, /* ttyS1 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS2 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS3 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS4 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS5 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS6 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS7 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS8 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS9 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS10 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS11 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS12 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS13 */
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/shmparam.h
+ *
+ * Copyright (c) 1996 Russell King.
+ */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/system.h
+ *
+ * Copyright (c) 1996 Russell King
+ */
+#ifndef __ASM_ARCH_SYSTEM_H
+#define __ASM_ARCH_SYSTEM_H
+
+extern __inline__ void arch_hard_reset (void)
+{
+ extern void ecard_reset (int card);
+
+ /*
+ * Reset all expansion cards.
+ */
+ ecard_reset (-1);
+
+ /*
+ * copy branch instruction to reset location and call it
+ */
+ *(unsigned long *)0 = *(unsigned long *)0x03800000;
+ ((void(*)(void))0)();
+
+ /*
+ * If that didn't work, loop endlessly
+ */
+ while (1);
+}
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/time.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 24-Sep-1996 RMK Created
+ * 10-Oct-1996 RMK Brought up to date with arch-sa110eval
+ * 04-Dec-1997 RMK Updated for new arch/arm/time.c
+ */
+
+extern __inline__ unsigned long gettimeoffset (void)
+{
+ unsigned int count1, count2, status1, status2;
+ unsigned long offset = 0;
+
+ status1 = inb(IOC_IRQREQA);
+ barrier ();
+ outb (0, IOC_T0LATCH);
+ barrier ();
+ count1 = inb(IOC_T0CNTL) | (inb(IOC_T0CNTH) << 8);
+ barrier ();
+ status2 = inb(IOC_IRQREQA);
+ barrier ();
+ outb (0, IOC_T0LATCH);
+ barrier ();
+ count2 = inb(IOC_T0CNTL) | (inb(IOC_T0CNTH) << 8);
+
+ if (count2 < count1) {
+ /*
+ * This means that we haven't just had an interrupt
+ * while reading into status2.
+ */
+ if (status2 & (1 << 5))
+ offset = tick;
+ count1 = count2;
+ } else if (count2 > count1) {
+ /*
+ * We have just had another interrupt while reading
+ * status2.
+ */
+ offset += tick;
+ count1 = count2;
+ }
+
+ count1 = LATCH - count1;
+ /*
+ * count1 = number of clock ticks since last interrupt
+ */
+ offset += count1 * tick / LATCH;
+ return offset;
+}
+
+/*
+ * No need to reset the timer at every irq
+ */
+#define reset_timer() 1
+
+/*
+ * Updating of the RTC. We don't currently write the time to the
+ * CMOS clock.
+ */
+#define update_rtc()
+
+/*
+ * Set up timer interrupt, and return the current time in seconds.
+ */
+extern __inline__ unsigned long setup_timer (void)
+{
+ extern int iic_control (unsigned char, int, char *, int);
+ unsigned int year, mon, day, hour, min, sec;
+ char buf[8];
+
+ outb(LATCH & 255, IOC_T0LTCHL);
+ outb(LATCH >> 8, IOC_T0LTCHH);
+ outb(0, IOC_T0GO);
+
+ iic_control (0xa0, 0xc0, buf, 1);
+ year = buf[0];
+ if ((year += 1900) < 1970)
+ year += 100;
+
+ iic_control (0xa0, 2, buf, 5);
+ mon = buf[4] & 0x1f;
+ day = buf[3] & 0x3f;
+ hour = buf[2];
+ min = buf[1];
+ sec = buf[0];
+ BCD_TO_BIN(mon);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(hour);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(sec);
+
+ return mktime(year, mon, day, hour, min, sec);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/timex.h
+ *
+ * A5000 architecture timex specifications
+ *
+ * Copyright (C) 1997, 1998 Russell King
+ */
+
+/*
+ * On the A5000, the clock ticks at 2MHz.
+ */
+#define CLOCK_TICK_RATE 2000000
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/uncompress.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+#define VIDMEM ((char *)0x02000000)
+
+#include "../arch/arm/drivers/char/font.h"
+
+int video_num_columns, video_num_lines, video_size_row;
+int white, bytes_per_char_h;
+extern unsigned long con_charconvtable[256];
+
+struct param_struct {
+ unsigned long page_size;
+ unsigned long nr_pages;
+ unsigned long ramdisk_size;
+ unsigned long mountrootrdonly;
+ unsigned long rootdev;
+ unsigned long video_num_cols;
+ unsigned long video_num_rows;
+ unsigned long video_x;
+ unsigned long video_y;
+ unsigned long memc_control_reg;
+ unsigned char sounddefault;
+ unsigned char adfsdrives;
+ unsigned char bytes_per_char_h;
+ unsigned char bytes_per_char_v;
+ unsigned long unused[256/4-11];
+};
+
+static struct param_struct *params = (struct param_struct *)0x0207c000;
+
+/*
+ * This does not append a newline
+ */
+static void puts(const char *s)
+{
+ extern void ll_write_char(char *, unsigned long);
+ int x,y;
+ unsigned char c;
+ char *ptr;
+
+ x = params->video_x;
+ y = params->video_y;
+
+ while ( ( c = *(unsigned char *)s++ ) != '\0' ) {
+ if ( c == '\n' ) {
+ x = 0;
+ if ( ++y >= video_num_lines ) {
+ y--;
+ }
+ } else {
+ ptr = VIDMEM + ((y*video_num_columns*params->bytes_per_char_v+x)*bytes_per_char_h);
+ ll_write_char(ptr, c|(white<<8));
+ if ( ++x >= video_num_columns ) {
+ x = 0;
+ if ( ++y >= video_num_lines ) {
+ y--;
+ }
+ }
+ }
+ }
+
+ params->video_x = x;
+ params->video_y = y;
+}
+
+static void error(char *x);
+
+/*
+ * Setup for decompression
+ */
+static void arch_decomp_setup(void)
+{
+ int i;
+
+ video_num_lines = params->video_num_rows;
+ video_num_columns = params->video_num_cols;
+ bytes_per_char_h = params->bytes_per_char_h;
+ video_size_row = video_num_columns * bytes_per_char_h;
+ if (bytes_per_char_h == 4)
+ for (i = 0; i < 256; i++)
+ con_charconvtable[i] =
+ (i & 128 ? 1 << 0 : 0) |
+ (i & 64 ? 1 << 4 : 0) |
+ (i & 32 ? 1 << 8 : 0) |
+ (i & 16 ? 1 << 12 : 0) |
+ (i & 8 ? 1 << 16 : 0) |
+ (i & 4 ? 1 << 20 : 0) |
+ (i & 2 ? 1 << 24 : 0) |
+ (i & 1 ? 1 << 28 : 0);
+ else
+ for (i = 0; i < 16; i++)
+ con_charconvtable[i] =
+ (i & 8 ? 1 << 0 : 0) |
+ (i & 4 ? 1 << 8 : 0) |
+ (i & 2 ? 1 << 16 : 0) |
+ (i & 1 ? 1 << 24 : 0);
+
+ white = bytes_per_char_h == 8 ? 0xfc : 7;
+
+ if (params->nr_pages * params->page_size < 4096*1024) error("<4M of mem\n");
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/a.out.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_ARCH_A_OUT_H
+#define __ASM_ARCH_A_OUT_H
+
+#ifdef __KERNEL__
+#define STACK_TOP (0x01a00000)
+#define LIBRARY_START_TEXT (0x00c00000)
+#endif
+
+#endif
+
--- /dev/null
+#ifndef __ASM_ARCH_DMA_H
+#define __ASM_ARCH_DMA_H
+
+#define MAX_DMA_ADDRESS 0x03000000
+
+#ifdef KERNEL_ARCH_DMA
+
+static inline void arch_disable_dma (int dmanr)
+{
+ printk (dma_str, "arch_disable_dma", dmanr);
+}
+
+static inline void arch_enable_dma (int dmanr)
+{
+ printk (dma_str, "arch_enable_dma", dmanr);
+}
+
+static inline void arch_set_dma_addr (int dmanr, unsigned int addr)
+{
+ printk (dma_str, "arch_set_dma_addr", dmanr);
+}
+
+static inline void arch_set_dma_count (int dmanr, unsigned int count)
+{
+ printk (dma_str, "arch_set_dma_count", dmanr);
+}
+
+static inline void arch_set_dma_mode (int dmanr, char mode)
+{
+ printk (dma_str, "arch_set_dma_mode", dmanr);
+}
+
+static inline int arch_dma_count (int dmanr)
+{
+ printk (dma_str, "arch_dma_count", dmanr);
+ return 0;
+}
+
+#endif
+
+/* enable/disable a specific DMA channel */
+extern void enable_dma(unsigned int dmanr);
+
+static __inline__ void disable_dma(unsigned int dmanr)
+{
+ switch(dmanr) {
+ case 0: disable_irq(64); break;
+ case 1: break;
+ default: printk (dma_str, "disable_dma", dmanr); break;
+ }
+}
+
+/* Clear the 'DMA Pointer Flip Flop'.
+ * Write 0 for LSB/MSB, 1 for MSB/LSB access.
+ * Use this once to initialize the FF to a known state.
+ * After that, keep track of it. :-)
+ * --- In order to do that, the DMA routines below should ---
+ * --- only be used while interrupts are disabled! ---
+ */
+#define clear_dma_ff(dmanr)
+
+/* set mode (above) for a specific DMA channel */
+extern void set_dma_mode(unsigned int dmanr, char mode);
+
+/* Set only the page register bits of the transfer address.
+ * This is used for successive transfers when we know the contents of
+ * the lower 16 bits of the DMA current address register, but a 64k boundary
+ * may have been crossed.
+ */
+static __inline__ void set_dma_page(unsigned int dmanr, char pagenr)
+{
+ printk (dma_str, "set_dma_page", dmanr);
+}
+
+
+/* Set transfer address & page bits for specific DMA channel.
+ * Assumes dma flipflop is clear.
+ */
+extern void set_dma_addr(unsigned int dmanr, unsigned int addr);
+
+/* Set transfer size for a specific DMA channel.
+ */
+extern void set_dma_count(unsigned int dmanr, unsigned int count);
+
+/* Get DMA residue count. After a DMA transfer, this
+ * should return zero. Reading this while a DMA transfer is
+ * still in progress will return unpredictable results.
+ * If called before the channel has been used, it may return 1.
+ * Otherwise, it returns the number of _bytes_ left to transfer.
+ *
+ * Assumes DMA flip-flop is clear.
+ */
+extern int get_dma_residue(unsigned int dmanr);
+
+#endif /* __ASM_ARCH_DMA_H */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/hardware.h
+ *
+ * Copyright (C) 1996 Russell King.
+ *
+ * This file contains the hardware definitions of the A3/4/5xx series machines.
+ */
+
+#ifndef __ASM_ARCH_HARDWARE_H
+#define __ASM_ARCH_HARDWARE_H
+
+/*
+ * What hardware must be present
+ */
+#define HAS_IOC
+#define HAS_MEMC
+#define HAS_MEMC1A
+#define HAS_VIDC
+
+/*
+ * Optional hardware
+ */
+#define HAS_EXPMASK
+
+#ifndef __ASSEMBLER__
+
+/*
+ * for use with inb/outb
+ */
+#define VIDC_BASE 0x80100000
+#define IOCEC4IO_BASE 0x8009c000
+#define LATCHAADDR 0x80094010
+#define LATCHBADDR 0x80094006
+#define IOCECIO_BASE 0x80090000
+#define IOC_BASE 0x80080000
+#define MEMCECIO_BASE 0x80000000
+
+/*
+ * IO definitions
+ */
+#define EXPMASK_BASE ((volatile unsigned char *)0x03360000)
+#define IOEB_BASE ((volatile unsigned char *)0x03350050)
+#define PCIO_FLOPPYDMABASE ((volatile unsigned char *)0x0302a000)
+#define PCIO_BASE 0x03010000
+
+/*
+ * Mapping areas
+ */
+#define IO_END 0x03ffffff
+#define IO_BASE 0x03000000
+#define IO_SIZE (IO_END - IO_BASE)
+#define IO_START 0x03000000
+
+/*
+ * Screen mapping information
+ */
+#define SCREEN2_END 0x02078000
+#define SCREEN2_BASE 0x02000000
+#define SCREEN1_END SCREEN2_BASE
+#define SCREEN1_BASE 0x01f88000
+#define SCREEN_START 0x02000000
+
+/*
+ * RAM definitions
+ */
+#define MAPTOPHYS(a) (((unsigned long)a & 0x007fffff) + PAGE_OFFSET)
+#define KERNTOPHYS(a) ((((unsigned long)(&a)) & 0x007fffff) + PAGE_OFFSET)
+#define GET_MEMORY_END(p) (PAGE_OFFSET + (p->u1.s.page_size) * (p->u1.s.nr_pages))
+#define PARAMS_BASE (PAGE_OFFSET + 0x7c000)
+#define KERNEL_BASE (PAGE_OFFSET + 0x80000)
+
+#else
+
+#define IOEB_BASE 0x03350050
+#define IOC_BASE 0x03200000
+#define PCIO_FLOPPYDMABASE 0x0302a000
+#define PCIO_BASE 0x03010000
+#define IO_BASE 0x03000000
+
+#endif
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/ide.h
+ *
+ * Copyright (c) 1997,1998 Russell King
+ */
+
+static __inline__ int
+ide_default_irq(ide_ioreg_t base)
+{
+ return 0;
+}
+
+static __inline__ ide_ioreg_t
+ide_default_io_base(int index)
+{
+ return 0;
+}
+
+static __inline__ int
+ide_default_stepping(int index)
+{
+ return 0;
+}
+
+static __inline__ void
+ide_init_hwif_ports (ide_ioreg_t *p, ide_ioreg_t base, int stepping, int *irq)
+{
+ ide_ioreg_t port = base;
+ ide_ioreg_t ctrl = base + 0x206;
+ int i;
+
+ i = 8;
+ while (i--) {
+ *p++ = port;
+ port += 1 << stepping;
+ }
+ *p++ = ctrl;
+ if (irq != NULL)
+ *irq = 0;
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/io.h
+ *
+ * Copyright (C) 1997 Russell King
+ *
+ * Modifications:
+ * 06-Dec-1997 RMK Created.
+ */
+#ifndef __ASM_ARM_ARCH_IO_H
+#define __ASM_ARM_ARCH_IO_H
+
+/*
+ * Virtual view <-> DMA view memory address translations
+ * virt_to_bus: Used to translate the virtual address to an
+ * address suitable to be passed to set_dma_addr
+ * bus_to_virt: Used to convert an address for DMA operations
+ * to an address that the kernel can use.
+ */
+#define virt_to_bus(x) ((unsigned long)(x))
+#define bus_to_virt(x) ((void *)(x))
+
+/*
+ * This architecture does not require any delayed IO, and
+ * has the constant-optimised IO
+ */
+#undef ARCH_IO_DELAY
+
+/*
+ * We use two different types of addressing - PC style addresses, and ARM
+ * addresses. PC style accesses the PC hardware with the normal PC IO
+ * addresses, eg 0x3f8 for serial#1. ARM addresses are 0x80000000+
+ * and are translated to the start of IO. Note that all addresses are
+ * shifted left!
+ */
+#define __PORT_PCIO(x) (!((x) & 0x80000000))
+
+/*
+ * Dynamic IO functions - let the compiler
+ * optimize the expressions
+ */
+extern __inline__ void __outb (unsigned int value, unsigned int port)
+{
+ unsigned long temp;
+ __asm__ __volatile__(
+ "tst %2, #0x80000000\n\t"
+ "mov %0, %4\n\t"
+ "addeq %0, %0, %3\n\t"
+ "strb %1, [%0, %2, lsl #2]"
+ : "=&r" (temp)
+ : "r" (value), "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE)
+ : "cc");
+}
+
+extern __inline__ void __outw (unsigned int value, unsigned int port)
+{
+ unsigned long temp;
+ __asm__ __volatile__(
+ "tst %2, #0x80000000\n\t"
+ "mov %0, %4\n\t"
+ "addeq %0, %0, %3\n\t"
+ "str %1, [%0, %2, lsl #2]"
+ : "=&r" (temp)
+ : "r" (value|value<<16), "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE)
+ : "cc");
+}
+
+extern __inline__ void __outl (unsigned int value, unsigned int port)
+{
+ unsigned long temp;
+ __asm__ __volatile__(
+ "tst %2, #0x80000000\n\t"
+ "mov %0, %4\n\t"
+ "addeq %0, %0, %3\n\t"
+ "str %1, [%0, %2, lsl #2]"
+ : "=&r" (temp)
+ : "r" (value), "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE)
+ : "cc");
+}
+
+#define DECLARE_DYN_IN(sz,fnsuffix,instr) \
+extern __inline__ unsigned sz __in##fnsuffix (unsigned int port) \
+{ \
+ unsigned long temp, value; \
+ __asm__ __volatile__( \
+ "tst %2, #0x80000000\n\t" \
+ "mov %0, %4\n\t" \
+ "addeq %0, %0, %3\n\t" \
+ "ldr" ##instr## " %1, [%0, %2, lsl #2]" \
+ : "=&r" (temp), "=r" (value) \
+ : "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE) \
+ : "cc"); \
+ return (unsigned sz)value; \
+}
+
+extern __inline__ unsigned int __ioaddr (unsigned int port)
+{
+ if (__PORT_PCIO(port))
+ return (unsigned int)(PCIO_BASE + (port << 2));
+ else
+ return (unsigned int)(IO_BASE + (port << 2));
+}
+
+#define DECLARE_IO(sz,fnsuffix,instr) \
+ DECLARE_DYN_IN(sz,fnsuffix,instr)
+
+DECLARE_IO(char,b,"b")
+DECLARE_IO(short,w,"")
+DECLARE_IO(long,l,"")
+
+#undef DECLARE_IO
+#undef DECLARE_DYN_IN
+
+/*
+ * Constant address IO functions
+ *
+ * These have to be macros for the 'J' constraint to work -
+ * +/-4096 immediate operand.
+ */
+#define __outbc(value,port) \
+({ \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "strb %0, [%1, %2]" \
+ : : "r" (value), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "strb %0, [%1, %2]" \
+ : : "r" (value), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inbc(port) \
+({ \
+ unsigned char result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldrb %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldrb %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result; \
+})
+
+#define __outwc(value,port) \
+({ \
+ unsigned long v = value; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v|v<<16), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v|v<<16), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inwc(port) \
+({ \
+ unsigned short result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result & 0xffff; \
+})
+
+#define __outlc(v,p) __outwc((v),(p))
+
+#define __inlc(port) \
+({ \
+ unsigned long result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result; \
+})
+
+#define __ioaddrc(port) \
+({ \
+ unsigned long addr; \
+ if (__PORT_PCIO((port))) \
+ addr = PCIO_BASE + ((port) << 2); \
+ else \
+ addr = IO_BASE + ((port) << 2); \
+ addr; \
+})
+
+/*
+ * Translated address IO functions
+ *
+ * IO address has already been translated to a virtual address
+ */
+#define outb_t(v,p) \
+ (*(volatile unsigned char *)(p) = (v))
+
+#define inb_t(p) \
+ (*(volatile unsigned char *)(p))
+
+#define outl_t(v,p) \
+ (*(volatile unsigned long *)(p) = (v))
+
+#define inl_t(p) \
+ (*(volatile unsigned long *)(p))
+
+#endif
--- /dev/null
+/*
+ * include/asm-arm/arch-arc/irq.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * Changelog:
+ * 24-09-1996 RMK Created
+ * 10-10-1996 RMK Brought up to date with arch-sa110eval
+ * 05-11-1996 RMK Changed interrupt numbers & uses new inb/outb macros
+ * 11-01-1998 RMK Added mask_and_ack_irq
+ */
+
+#define BUILD_IRQ(s,n,m) \
+ void IRQ##n##_interrupt(void); \
+ void fast_IRQ##n##_interrupt(void); \
+ void bad_IRQ##n##_interrupt(void); \
+ void probe_IRQ##n##_interrupt(void);
+
+/*
+ * The timer is a special interrupt
+ */
+#define IRQ5_interrupt timer_IRQ_interrupt
+
+#define IRQ_INTERRUPT(n) IRQ##n##_interrupt
+#define FAST_INTERRUPT(n) fast_IRQ##n##_interrupt
+#define BAD_INTERRUPT(n) bad_IRQ##n##_interrupt
+#define PROBE_INTERRUPT(n) probe_IRQ##n##_interrupt
+
+#define X(x) (x)|0x01, (x)|0x02, (x)|0x04, (x)|0x08, (x)|0x10, (x)|0x20, (x)|0x40, (x)|0x80
+#define Z(x) (x), (x), (x), (x), (x), (x), (x), (x)
+
+static __inline__ void mask_and_ack_irq(unsigned int irq)
+{
+ static const int addrmasks[] = {
+ X((IOC_IRQMASKA - IOC_BASE)<<18 | (1 << 15)),
+ X((IOC_IRQMASKB - IOC_BASE)<<18),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ X((IOC_FIQMASK - IOC_BASE)<<18),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0)
+ };
+ unsigned int temp1, temp2;
+
+ __asm__ __volatile__(
+" ldr %1, [%5, %3, lsl #2]\n"
+" teq %1, #0\n"
+" beq 2f\n"
+" ldrb %0, [%2, %1, lsr #16]\n"
+" bic %0, %0, %1\n"
+" strb %0, [%2, %1, lsr #16]\n"
+" tst %1, #0x8000\n" /* do we need an IRQ clear? */
+" strneb %1, [%2, %4]\n"
+"2:"
+ : "=&r" (temp1), "=&r" (temp2)
+ : "r" (ioaddr(IOC_BASE)), "r" (irq),
+ "I" ((IOC_IRQCLRA - IOC_BASE) << 2), "r" (addrmasks));
+}
+
+#undef X
+#undef Z
+
+static __inline__ void mask_irq(unsigned int irq)
+{
+ extern void ecard_disableirq (unsigned int);
+ extern void ecard_disablefiq (unsigned int);
+ unsigned char mask = 1 << (irq & 7);
+
+ switch (irq >> 3) {
+ case 0:
+ outb(inb(IOC_IRQMASKA) & ~mask, IOC_IRQMASKA);
+ break;
+ case 1:
+ outb(inb(IOC_IRQMASKB) & ~mask, IOC_IRQMASKB);
+ break;
+ case 4:
+ ecard_disableirq (irq & 7);
+ break;
+ case 8:
+ outb(inb(IOC_FIQMASK) & ~mask, IOC_FIQMASK);
+ break;
+ case 12:
+ ecard_disablefiq (irq & 7);
+ }
+}
+
+static __inline__ void unmask_irq(unsigned int irq)
+{
+ extern void ecard_enableirq (unsigned int);
+ extern void ecard_enablefiq (unsigned int);
+ unsigned char mask = 1 << (irq & 7);
+
+ switch (irq >> 3) {
+ case 0:
+ outb(inb(IOC_IRQMASKA) | mask, IOC_IRQMASKA);
+ break;
+ case 1:
+ outb(inb(IOC_IRQMASKB) | mask, IOC_IRQMASKB);
+ break;
+ case 4:
+ ecard_enableirq (irq & 7);
+ break;
+ case 8:
+ outb(inb(IOC_FIQMASK) | mask, IOC_FIQMASK);
+ break;
+ case 12:
+ ecard_enablefiq (irq & 7);
+ }
+}
+
+static __inline__ unsigned long get_enabled_irqs(void)
+{
+ return inb(IOC_IRQMASKA) | inb(IOC_IRQMASKB) << 8;
+}
+
+static __inline__ void irq_init_irq(void)
+{
+ outb(0, IOC_IRQMASKA);
+ outb(0, IOC_IRQMASKB);
+ outb(0, IOC_FIQMASK);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/irqs.h
+ *
+ * Copyright (C) 1996 Russell King, Dave Gilbert (gilbertd@cs.man.ac.uk)
+ */
+
+#define IRQ_PRINTERBUSY 0
+#define IRQ_SERIALRING 1
+#define IRQ_PRINTERACK 2
+#define IRQ_VSYNCPULSE 3
+#define IRQ_POWERON 4
+#define IRQ_TIMER0 5
+#define IRQ_TIMER1 6
+#define IRQ_IMMEDIATE 7
+#define IRQ_EXPCARDFIQ 8
+#define IRQ_SOUNDCHANGE 9
+#define IRQ_SERIALPORT 10
+#define IRQ_HARDDISK 11
+#define IRQ_FLOPPYCHANGED 12
+#define IRQ_EXPANSIONCARD 13
+#define IRQ_KEYBOARDTX 14
+#define IRQ_KEYBOARDRX 15
+
+#define FIQ_FLOPPYDATA 0
+#define FIQ_FLOPPYIRQ 1
+#define FIQ_ECONET 2
+#define FIQ_EXPANSIONCARD 6
+#define FIQ_FORCE 7
+
+#define FIQ_FD1772 FIQ_FLOPPYIRQ
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/mmu.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 22-11-1996 RMK Created
+ */
+#ifndef __ASM_ARCH_MMU_H
+#define __ASM_ARCH_MMU_H
+
+#define __virt_to_phys(vpage) vpage
+#define __phys_to_virt(ppage) ppage
+
+#endif
--- /dev/null
+#ifndef _ASM_ARM_ARCHARC_OLDLATCH_H
+#define _ASM_ARM_ARCHARC_OLDLATCH_H
+
+#define LATCHA_FDSEL0 (1<<0)
+#define LATCHA_FDSEL1 (1<<1)
+#define LATCHA_FDSEL2 (1<<2)
+#define LATCHA_FDSEL3 (1<<3)
+#define LATCHA_FDSELALL (0xf)
+#define LATCHA_SIDESEL (1<<4)
+#define LATCHA_MOTOR (1<<5)
+#define LATCHA_INUSE (1<<6)
+#define LATCHA_CHANGERST (1<<7)
+
+#define LATCHB_FDCDENSITY (1<<1)
+#define LATCHB_FDCRESET (1<<3)
+#define LATCHB_PRINTSTROBE (1<<4)
+
+/* newval=(oldval & mask)|newdata */
+void oldlatch_bupdate(unsigned char mask,unsigned char newdata);
+
+/* newval=(oldval & mask)|newdata */
+void oldlatch_aupdate(unsigned char mask,unsigned char newdata);
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/processor.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 10-09-1996 RMK Created
+ */
+
+#ifndef __ASM_ARCH_PROCESSOR_H
+#define __ASM_ARCH_PROCESSOR_H
+
+/*
+ * Bus types
+ */
+#define EISA_bus 0
+#define EISA_bus__is_a_macro /* for versions in ksyms.c */
+#define MCA_bus 0
+#define MCA_bus__is_a_macro /* for versions in ksyms.c */
+
+/*
+ * User space: 26MB
+ */
+#define TASK_SIZE (0x01a00000UL)
+
+/* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+
+#define INIT_MMAP \
+{ &init_mm, 0, 0x02000000, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, NULL, &init_mm.mmap }
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/serial.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 15-10-1996 RMK Created
+ */
+#ifndef __ASM_ARCH_SERIAL_H
+#define __ASM_ARCH_SERIAL_H
+
+/*
+ * This assumes you have a 1.8432 MHz clock for your UART.
+ *
+ * It'd be nice if someone built a serial card with a 24.576 MHz
+ * clock, since the 16550A is capable of handling a top speed of 1.5
+ * megabits/second; but this requires the faster clock.
+ */
+#define BASE_BAUD (1843200 / 16)
+
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+
+ /* UART CLK PORT IRQ FLAGS */
+#define RS_UARTS \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS0 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS1 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS2 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS3 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS4 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS5 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS6 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS7 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS8 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS9 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS10 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS11 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS12 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS13 */
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/shmparam.h
+ *
+ * Copyright (c) 1996 Russell King.
+ */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/system.h
+ *
+ * Copyright (c) 1996 Russell King and Dave Gilbert
+ */
+#ifndef __ASM_ARCH_SYSTEM_H
+#define __ASM_ARCH_SYSTEM_H
+
+#define cliIF() \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+" mov %0, pc\n" \
+" orr %0, %0, #0x0c000000\n" \
+" teqp %0, #0\n" \
+ : "=r" (temp) \
+ : ); \
+ } while(0)
+
+extern __inline__ void arch_hard_reset (void)
+{
+ extern void ecard_reset (int card);
+
+ /*
+ * Reset all expansion cards.
+ */
+ ecard_reset (-1);
+
+ /*
+ * copy branch instruction to reset location and call it
+ */
+ *(unsigned long *)0 = *(unsigned long *)0x03800000;
+ ((void(*)(void))0)();
+
+ /*
+ * If that didn't work, loop endlessly
+ */
+ while (1);
+}
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/time.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 24-Sep-1996 RMK Created
+ * 10-Oct-1996 RMK Brought up to date with arch-sa110eval
+ * 04-Dec-1997 RMK Updated for new arch/arm/time.c
+ */
+
+extern __inline__ unsigned long gettimeoffset (void)
+{
+ unsigned int count1, count2, status1, status2;
+ unsigned long offset = 0;
+
+ status1 = inb(IOC_IRQREQA);
+ barrier ();
+ outb (0, IOC_T0LATCH);
+ barrier ();
+ count1 = inb(IOC_T0CNTL) | (inb(IOC_T0CNTH) << 8);
+ barrier ();
+ status2 = inb(IOC_IRQREQA);
+ barrier ();
+ outb (0, IOC_T0LATCH);
+ barrier ();
+ count2 = inb(IOC_T0CNTL) | (inb(IOC_T0CNTH) << 8);
+
+ if (count2 < count1) {
+ /*
+ * This means that we haven't just had an interrupt
+ * while reading into status2.
+ */
+ if (status2 & (1 << 5))
+ offset = tick;
+ count1 = count2;
+ } else if (count2 > count1) {
+ /*
+ * We have just had another interrupt while reading
+ * status2.
+ */
+ offset += tick;
+ count1 = count2;
+ }
+
+ count1 = LATCH - count1;
+ /*
+ * count1 = number of clock ticks since last interrupt
+ */
+ offset += count1 * tick / LATCH;
+ return offset;
+}
+
+/*
+ * No need to reset the timer at every irq
+ */
+#define reset_timer() 1
+
+/*
+ * Updating of the RTC. We don't currently write the time to the
+ * CMOS clock.
+ */
+#define update_rtc()
+
+/*
+ * Set up timer interrupt, and return the current time in seconds.
+ */
+extern __inline__ unsigned long setup_timer (void)
+{
+ extern int iic_control (unsigned char, int, char *, int);
+ unsigned int year, mon, day, hour, min, sec;
+ char buf[8];
+
+ outb(LATCH & 255, IOC_T0LTCHL);
+ outb(LATCH >> 8, IOC_T0LTCHH);
+ outb(0, IOC_T0GO);
+
+ iic_control (0xa0, 0xc0, buf, 1);
+ year = buf[0];
+ if ((year += 1900) < 1970)
+ year += 100;
+
+ iic_control (0xa0, 2, buf, 5);
+ mon = buf[4] & 0x1f;
+ day = buf[3] & 0x3f;
+ hour = buf[2];
+ min = buf[1];
+ sec = buf[0];
+ BCD_TO_BIN(mon);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(hour);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(sec);
+
+ return mktime(year, mon, day, hour, min, sec);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/timex.h
+ *
+ * Archimedes architecture timex specifications
+ *
+ * Copyright (C) 1997, 1998 Russell King
+ */
+
+/*
+ * On the Archimedes, the clock ticks at 2MHz.
+ */
+#define CLOCK_TICK_RATE 2000000
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-arc/uncompress.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+#define VIDMEM ((char *)0x02000000)
+
+#include "../arch/arm/drivers/char/font.h"
+
+int video_num_columns, video_num_lines, video_size_row;
+int white, bytes_per_char_h;
+extern unsigned long con_charconvtable[256];
+
+struct param_struct {
+ unsigned long page_size;
+ unsigned long nr_pages;
+ unsigned long ramdisk_size;
+ unsigned long mountrootrdonly;
+ unsigned long rootdev;
+ unsigned long video_num_cols;
+ unsigned long video_num_rows;
+ unsigned long video_x;
+ unsigned long video_y;
+ unsigned long memc_control_reg;
+ unsigned char sounddefault;
+ unsigned char adfsdrives;
+ unsigned char bytes_per_char_h;
+ unsigned char bytes_per_char_v;
+ unsigned long unused[256/4-11];
+};
+
+static struct param_struct *params = (struct param_struct *)0x0207c000;
+
+/*
+ * This does not append a newline
+ */
+static void puts(const char *s)
+{
+ extern void ll_write_char(char *, unsigned long);
+ int x,y;
+ unsigned char c;
+ char *ptr;
+
+ x = params->video_x;
+ y = params->video_y;
+
+ while ( ( c = *(unsigned char *)s++ ) != '\0' ) {
+ if ( c == '\n' ) {
+ x = 0;
+ if ( ++y >= video_num_lines ) {
+ y--;
+ }
+ } else {
+ ptr = VIDMEM + ((y*video_num_columns*params->bytes_per_char_v+x)*bytes_per_char_h);
+ ll_write_char(ptr, c|(white<<8));
+ if ( ++x >= video_num_columns ) {
+ x = 0;
+ if ( ++y >= video_num_lines ) {
+ y--;
+ }
+ }
+ }
+ }
+
+ params->video_x = x;
+ params->video_y = y;
+}
+
+static void error(char *x);
+
+/*
+ * Setup for decompression
+ */
+static void arch_decomp_setup(void)
+{
+ int i;
+
+ video_num_lines = params->video_num_rows;
+ video_num_columns = params->video_num_cols;
+ bytes_per_char_h = params->bytes_per_char_h;
+ video_size_row = video_num_columns * bytes_per_char_h;
+ if (bytes_per_char_h == 4)
+ for (i = 0; i < 256; i++)
+ con_charconvtable[i] =
+ (i & 128 ? 1 << 0 : 0) |
+ (i & 64 ? 1 << 4 : 0) |
+ (i & 32 ? 1 << 8 : 0) |
+ (i & 16 ? 1 << 12 : 0) |
+ (i & 8 ? 1 << 16 : 0) |
+ (i & 4 ? 1 << 20 : 0) |
+ (i & 2 ? 1 << 24 : 0) |
+ (i & 1 ? 1 << 28 : 0);
+ else
+ for (i = 0; i < 16; i++)
+ con_charconvtable[i] =
+ (i & 8 ? 1 << 0 : 0) |
+ (i & 4 ? 1 << 8 : 0) |
+ (i & 2 ? 1 << 16 : 0) |
+ (i & 1 ? 1 << 24 : 0);
+
+ white = bytes_per_char_h == 8 ? 0xfc : 7;
+
+ if (params->nr_pages * params->page_size < 4096*1024) error("<4M of mem\n");
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/a.out.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_ARCH_A_OUT_H
+#define __ASM_ARCH_A_OUT_H
+
+#ifdef __KERNEL__
+#define STACK_TOP (0xc0000000)
+#define LIBRARY_START_TEXT (0x00c00000)
+#endif
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/dma.h
+ *
+ * Architecture DMA routines
+ *
+ * Copyright (C) 1997, 1998 Russell King
+ */
+#ifndef __ASM_ARCH_DMA_H
+#define __ASM_ARCH_DMA_H
+
+#ifdef KERNEL_ARCH_DMA
+
+static inline void arch_disable_dma (int dmanr)
+{
+ printk (dma_str, "arch_disable_dma", dmanr);
+}
+
+static inline void arch_enable_dma (int dmanr)
+{
+ printk (dma_str, "arch_enable_dma", dmanr);
+}
+
+static inline void arch_set_dma_addr (int dmanr, unsigned int addr)
+{
+ printk (dma_str, "arch_set_dma_addr", dmanr);
+}
+
+static inline void arch_set_dma_count (int dmanr, unsigned int count)
+{
+ printk (dma_str, "arch_set_dma_count", dmanr);
+}
+
+static inline void arch_set_dma_mode (int dmanr, char mode)
+{
+ printk (dma_str, "arch_set_dma_mode", dmanr);
+}
+
+static inline int arch_dma_count (int dmanr)
+{
+ printk (dma_str, "arch_dma_count", dmanr);
+ return 0;
+}
+
+#endif
+
+/* enable/disable a specific DMA channel */
+extern void enable_dma(unsigned int dmanr);
+
+static __inline__ void disable_dma(unsigned int dmanr)
+{
+ printk (dma_str, "disable_dma", dmanr);
+}
+
+/* Clear the 'DMA Pointer Flip Flop'.
+ * Write 0 for LSB/MSB, 1 for MSB/LSB access.
+ * Use this once to initialize the FF to a known state.
+ * After that, keep track of it. :-)
+ * --- In order to do that, the DMA routines below should ---
+ * --- only be used while interrupts are disabled! ---
+ */
+static __inline__ void clear_dma_ff(unsigned int dmanr)
+{
+ printk (dma_str, "clear_dma_ff", dmanr);
+}
+
+/* set mode (above) for a specific DMA channel */
+extern void set_dma_mode(unsigned int dmanr, char mode);
+
+/* Set only the page register bits of the transfer address.
+ * This is used for successive transfers when we know the contents of
+ * the lower 16 bits of the DMA current address register, but a 64k boundary
+ * may have been crossed.
+ */
+static __inline__ void set_dma_page(unsigned int dmanr, char pagenr)
+{
+ printk (dma_str, "set_dma_page", dmanr);
+}
+
+
+/* Set transfer address & page bits for specific DMA channel.
+ * Assumes dma flipflop is clear.
+ */
+extern void set_dma_addr(unsigned int dmanr, unsigned int addr);
+
+/* Set transfer size for a specific DMA channel.
+ */
+extern void set_dma_count(unsigned int dmanr, unsigned int count);
+
+/* Get DMA residue count. After a DMA transfer, this
+ * should return zero. Reading this while a DMA transfer is
+ * still in progress will return unpredictable results.
+ * If called before the channel has been used, it may return 1.
+ * Otherwise, it returns the number of _bytes_ left to transfer.
+ *
+ * Assumes DMA flip-flop is clear.
+ */
+extern int get_dma_residue(unsigned int dmanr);
+
+#endif /* __ASM_ARCH_DMA_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/hardware.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King.
+ *
+ * This file contains the hardware definitions of the EBSA-110.
+ */
+
+#ifndef __ASM_ARCH_HARDWARE_H
+#define __ASM_ARCH_HARDWARE_H
+
+/*
+ * What hardware must be present
+ */
+#define HAS_PCIO
+
+#ifndef __ASSEMBLER__
+
+/*
+ * IO definitions
+ */
+#define PIT_CTRL ((volatile unsigned char *)0xf200000d)
+#define PIT_T2 ((volatile unsigned char *)0xf2000009)
+#define PIT_T1 ((volatile unsigned char *)0xf2000005)
+#define PIT_T0 ((volatile unsigned char *)0xf2000001)
+#define PCIO_BASE 0xf0000000
+
+/*
+ * Mapping areas
+ */
+#define IO_END 0xffffffff
+#define IO_BASE 0xe0000000
+#define IO_SIZE (IO_END - IO_BASE)
+#define IO_START 0xe0000000
+
+/*
+ * RAM definitions
+ */
+#define MAPTOPHYS(a) ((unsigned long)(a) - PAGE_OFFSET)
+#define KERNTOPHYS(a) ((unsigned long)(&a))
+#define KERNEL_BASE (0xc0008000)
+
+#else
+
+#define PCIO_BASE 0xf0000000
+#define IO_BASE 0
+
+#endif
+#endif
+
--- /dev/null
+/* no ide */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/io.h
+ *
+ * Copyright (C) 1997,1998 Russell King
+ *
+ * Modifications:
+ * 06-Dec-1997 RMK Created.
+ */
+#ifndef __ASM_ARM_ARCH_IO_H
+#define __ASM_ARM_ARCH_IO_H
+
+/*
+ * Virtual view <-> DMA view memory address translations
+ * virt_to_bus: Used to translate the virtual address to an
+ * address suitable to be passed to set_dma_addr
+ * bus_to_virt: Used to convert an address for DMA operations
+ * to an address that the kernel can use.
+ */
+#define virt_to_bus(x) ((unsigned long)(x))
+#define bus_to_virt(x) ((void *)(x))
+
+/*
+ * This architecture does not require any delayed IO, and
+ * has the constant-optimised IO
+ */
+#undef ARCH_IO_DELAY
+
+/*
+ * We use two different types of addressing - PC style addresses, and ARM
+ * addresses. PC style accesses the PC hardware with the normal PC IO
+ * addresses, eg 0x3f8 for serial#1. ARM addresses are 0x80000000+
+ * and are translated to the start of IO. Note that all addresses are
+ * shifted left!
+ */
+#define __PORT_PCIO(x) (!((x) & 0x80000000))
+
+/*
+ * Dynamic IO functions - let the compiler
+ * optimize the expressions
+ */
+#define DECLARE_DYN_OUT(fnsuffix,instr) \
+extern __inline__ void __out##fnsuffix (unsigned int value, unsigned int port) \
+{ \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+ "tst %2, #0x80000000\n\t" \
+ "mov %0, %4\n\t" \
+ "addeq %0, %0, %3\n\t" \
+ "str" ##instr## " %1, [%0, %2, lsl #2]" \
+ : "=&r" (temp) \
+ : "r" (value), "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE) \
+ : "cc"); \
+}
+
+#define DECLARE_DYN_IN(sz,fnsuffix,instr) \
+extern __inline__ unsigned sz __in##fnsuffix (unsigned int port) \
+{ \
+ unsigned long temp, value; \
+ __asm__ __volatile__( \
+ "tst %2, #0x80000000\n\t" \
+ "mov %0, %4\n\t" \
+ "addeq %0, %0, %3\n\t" \
+ "ldr" ##instr## " %1, [%0, %2, lsl #2]" \
+ : "=&r" (temp), "=r" (value) \
+ : "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE) \
+ : "cc"); \
+ return (unsigned sz)value; \
+}
+
+extern __inline__ unsigned int __ioaddr (unsigned int port)
+{
+ if (__PORT_PCIO(port))
+ return (unsigned int)(PCIO_BASE + (port << 2));
+ else
+ return (unsigned int)(IO_BASE + (port << 2));
+}
+
+#define DECLARE_IO(sz,fnsuffix,instr) \
+ DECLARE_DYN_OUT(fnsuffix,instr) \
+ DECLARE_DYN_IN(sz,fnsuffix,instr)
+
+DECLARE_IO(char,b,"b")
+DECLARE_IO(short,w,"")
+DECLARE_IO(long,l,"")
+
+#undef DECLARE_IO
+#undef DECLARE_DYN_OUT
+#undef DECLARE_DYN_IN
+
+/*
+ * Constant address IO functions
+ *
+ * These have to be macros for the 'J' constraint to work -
+ * +/-4096 immediate operand.
+ */
+#define __outbc(value,port) \
+({ \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "strb %0, [%1, %2]" \
+ : : "r" (value), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "strb %0, [%1, %2]" \
+ : : "r" (value), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inbc(port) \
+({ \
+ unsigned char result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldrb %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldrb %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result; \
+})
+
+#define __outwc(value,port) \
+({ \
+ unsigned long v = value; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v|v<<16), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v|v<<16), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inwc(port) \
+({ \
+ unsigned short result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result & 0xffff; \
+})
+
+#define __outlc(v,p) __outwc((v),(p))
+
+#define __inlc(port) \
+({ \
+ unsigned long result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result; \
+})
+
+#define __ioaddrc(port) \
+({ \
+ unsigned long addr; \
+ if (__PORT_PCIO((port))) \
+ addr = PCIO_BASE + ((port) << 2); \
+ else \
+ addr = IO_BASE + ((port) << 2); \
+ addr; \
+})
+
+/*
+ * Translated address IO functions
+ *
+ * IO address has already been translated to a virtual address
+ */
+#define outb_t(v,p) \
+ (*(volatile unsigned char *)(p) = (v))
+
+#define inb_t(p) \
+ (*(volatile unsigned char *)(p))
+
+#define outl_t(v,p) \
+ (*(volatile unsigned long *)(p) = (v))
+
+#define inl_t(p) \
+ (*(volatile unsigned long *)(p))
+
+#endif
--- /dev/null
+/*
+ * include/asm-arm/arch-ebsa110/irq.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+
+#define IRQ_MCLR ((volatile unsigned char *)0xf3000000)
+#define IRQ_MSET ((volatile unsigned char *)0xf2c00000)
+#define IRQ_MASK ((volatile unsigned char *)0xf2c00000)
+
+static __inline__ void mask_and_ack_irq(unsigned int irq)
+{
+ if (irq < 8)
+ *IRQ_MCLR = 1 << irq;
+}
+
+static __inline__ void mask_irq(unsigned int irq)
+{
+ if (irq < 8)
+ *IRQ_MCLR = 1 << irq;
+}
+
+static __inline__ void unmask_irq(unsigned int irq)
+{
+ if (irq < 8)
+ *IRQ_MSET = 1 << irq;
+}
+
+static __inline__ unsigned long get_enabled_irqs(void)
+{
+ return 0;
+}
+
+static __inline__ void irq_init_irq(void)
+{
+ unsigned long flags;
+
+ save_flags_cli (flags);
+ *IRQ_MCLR = 0xff;
+ *IRQ_MSET = 0x55;
+ *IRQ_MSET = 0x00;
+ if (*IRQ_MASK != 0x55)
+ while (1);
+ *IRQ_MCLR = 0xff; /* clear all interrupt enables */
+ restore_flags (flags);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/irqs.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#define IRQ_PRINTER 0
+#define IRQ_COM1 1
+#define IRQ_COM2 2
+#define IRQ_ETHERNET 3
+#define IRQ_TIMER0 4
+#define IRQ_TIMER1 5
+#define IRQ_PCMCIA 6
+#define IRQ_IMMEDIATE 7
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/mm-init.h
+ *
+ * Copyright (C) 1997,1998 Russell King
+ *
+ * Description of the initial memory map for EBSA-110
+ */
+
+static init_mem_map_t init_mem_map[] = {
+ INIT_MEM_MAP_SENTINEL
+};
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/mmap.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+
+/*
+ * Use SRAM for cache flushing
+ */
+#define SAFE_ADDR 0x40000000
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/mmu.h
+ *
+ * Copyright (c) 1996,1997,1998 Russell King.
+ *
+ * Changelog:
+ * 20-10-1996 RMK Created
+ * 31-12-1997 RMK Fixed definitions to reduce warnings
+ */
+#ifndef __ASM_ARCH_MMU_H
+#define __ASM_ARCH_MMU_H
+
+/*
+ * On ebsa, the dram is contiguous
+ */
+#define __virt_to_phys(vpage) ((vpage) - PAGE_OFFSET)
+#define __phys_to_virt(ppage) ((ppage) + PAGE_OFFSET)
+
+#endif
--- /dev/null
+/*
+ * Dummy oldlatches.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifdef __need_oldlatches
+#error "Old latches not present in this (rpc) machine"
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/processor.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+
+#ifndef __ASM_ARCH_PROCESSOR_H
+#define __ASM_ARCH_PROCESSOR_H
+
+/*
+ * Bus types
+ */
+#define EISA_bus 0
+#define EISA_bus__is_a_macro /* for versions in ksyms.c */
+#define MCA_bus 0
+#define MCA_bus__is_a_macro /* for versions in ksyms.c */
+
+/*
+ * User space: 3GB
+ */
+#define TASK_SIZE (0xc0000000UL)
+
+/* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+
+#define INIT_MMAP \
+{ &init_mm, 0xc0000000, 0xc2000000, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, NULL, &init_mm.mmap }
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/serial.h
+ *
+ * Copyright (c) 1996,1997,1998 Russell King.
+ *
+ * Changelog:
+ * 15-10-1996 RMK Created
+ */
+#ifndef __ASM_ARCH_SERIAL_H
+#define __ASM_ARCH_SERIAL_H
+
+/*
+ * This assumes you have a 1.8432 MHz clock for your UART.
+ *
+ * It'd be nice if someone built a serial card with a 24.576 MHz
+ * clock, since the 16550A is capable of handling a top speed of 1.5
+ * megabits/second; but this requires the faster clock.
+ */
+#define BASE_BAUD (1843200 / 16)
+
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+
+ /* UART CLK PORT IRQ FLAGS */
+#define RS_UARTS \
+ { 0, BASE_BAUD, 0x3F8, 1, STD_COM_FLAGS }, /* ttyS0 */ \
+ { 0, BASE_BAUD, 0x2F8, 2, STD_COM_FLAGS }, /* ttyS1 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS2 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS3 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS4 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS5 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS6 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS7 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS8 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS9 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS10 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS11 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS12 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS13 */
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/shmparam.h
+ *
+ * Copyright (c) 1996 Russell King.
+ */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/system.h
+ *
+ * Copyright (c) 1996,1997,1998 Russell King.
+ */
+#ifndef __ASM_ARCH_SYSTEM_H
+#define __ASM_ARCH_SYSTEM_H
+
+extern __inline__ void arch_hard_reset (void)
+{
+ /*
+ * loop endlessly
+ */
+ cli();
+ while (1);
+}
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/time.h
+ *
+ * Copyright (c) 1996,1997,1998 Russell King.
+ *
+ * No real time clock on the evaluation board!
+ *
+ * Changelog:
+ * 10-Oct-1996 RMK Created
+ * 04-Dec-1997 RMK Updated for new arch/arm/time.c
+ */
+
+#define MCLK_47_8
+
+#if defined(MCLK_42_3)
+#define PIT1_COUNT 0xecbe
+#elif defined(MCLK_47_8)
+/*
+ * This should be 0x10AE1, but that doesn't exactly fit.
+ * We run the timer interrupt at 5ms, and then divide it by
+ * two in software... This is so that the user processes
+ * see exactly the same model whichever ARM processor they're
+ * running on.
+ */
+#define PIT1_COUNT 0x8570
+#define DIVISOR 2
+#endif
+
+extern __inline__ unsigned long gettimeoffset (void)
+{
+ return 0;
+}
+
+#ifndef DIVISOR
+extern __inline__ int reset_timer (void)
+{
+ *PIT_T1 = (PIT1_COUNT) & 0xff;
+ *PIT_T1 = (PIT1_COUNT) >> 8;
+ return 1;
+}
+#else
+extern __inline__ int reset_timer (void)
+{
+ static unsigned int divisor;
+ static int count = 50;
+
+ *PIT_T1 = (PIT1_COUNT) & 0xff;
+ *PIT_T1 = (PIT1_COUNT) >> 8;
+
+ if (--count == 0) {
+ count = 50;
+ *(volatile unsigned char *)0xf2400000 ^= 128;
+ }
+
+ if (divisor == 0) {
+ divisor = DIVISOR - 1;
+ return 1;
+ }
+ divisor -= 1;
+ return 0;
+}
+#endif
+
+/*
+ * We don't have a RTC to update!
+ */
+#define update_rtc()
+
+/*
+ * Set up timer interrupt, and return the current time in seconds.
+ */
+extern __inline__ unsigned long setup_timer (void)
+{
+ /*
+ * Timer 1, mode 0, 16-bit, autoreload
+ */
+ *PIT_CTRL = 0x70;
+ /*
+ * Refresh counter clocked at 47.8MHz/7 = 146.4ns
+ * We want centi-second interrupts
+ */
+ reset_timer ();
+ /*
+ * Default the date to 1 Jan 1970 0:0:0
+ * You will have to run a time daemon to set the
+ * clock correctly at bootup
+ */
+ return mktime(1970, 1, 1, 0, 0, 0);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/timex.h
+ *
+ * EBSA110 architecture timex specifications
+ *
+ * Copyright (C) 1997, 1998 Russell King
+ */
+
+/*
+ * On the EBSA, the clock ticks at weird rates.
+ * This is therefore not used to calculate the
+ * divisor.
+ */
+//#define CLOCK_TICK_RATE 2000000
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/uncompress.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+
+/*
+ * This does not append a newline
+ */
+static void puts(const char *s)
+{
+ __asm__ __volatile__("
+ ldrb %0, [%2], #1
+ teq %0, #0
+ beq 3f
+1: strb %0, [%3]
+2: ldrb %1, [%3, #0x14]
+ and %1, %1, #0x60
+ teq %1, #0x60
+ bne 2b
+ teq %0, #'\n'
+ moveq %0, #'\r'
+ beq 1b
+ ldrb %0, [%2], #1
+ teq %0, #0
+ bne 1b
+3: " : : "r" (0), "r" (0), "r" (s), "r" (0xf0000be0) : "cc");
+}
+
+/*
+ * nothing to do
+ */
+#define arch_decomp_setup()
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa/a.out.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_ARCH_A_OUT_H
+#define __ASM_ARCH_A_OUT_H
+
+#ifdef __KERNEL__
+#define STACK_TOP (0xc0000000)
+#define LIBRARY_START_TEXT (0x00c00000)
+#endif
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/dma.h
+ *
+ * Architecture DMA routines
+ *
+ * Copyright (C) 1997,1998 Russell King
+ */
+#ifndef __ASM_ARCH_DMA_H
+#define __ASM_ARCH_DMA_H
+
+#ifdef KERNEL_ARCH_DMA
+
+static inline void arch_disable_dma (int dmanr)
+{
+ printk (dma_str, "arch_disable_dma", dmanr);
+}
+
+static inline void arch_enable_dma (int dmanr)
+{
+ printk (dma_str, "arch_enable_dma", dmanr);
+}
+
+static inline void arch_set_dma_addr (int dmanr, unsigned int addr)
+{
+ printk (dma_str, "arch_set_dma_addr", dmanr);
+}
+
+static inline void arch_set_dma_count (int dmanr, unsigned int count)
+{
+ printk (dma_str, "arch_set_dma_count", dmanr);
+}
+
+static inline void arch_set_dma_mode (int dmanr, char mode)
+{
+ printk (dma_str, "arch_set_dma_mode", dmanr);
+}
+
+static inline int arch_dma_count (int dmanr)
+{
+ printk (dma_str, "arch_dma_count", dmanr);
+ return 0;
+}
+
+#endif
+
+/* enable/disable a specific DMA channel */
+extern void enable_dma(unsigned int dmanr);
+
+static __inline__ void disable_dma(unsigned int dmanr)
+{
+ printk (dma_str, "disable_dma", dmanr);
+}
+
+/* Clear the 'DMA Pointer Flip Flop'.
+ * Write 0 for LSB/MSB, 1 for MSB/LSB access.
+ * Use this once to initialize the FF to a known state.
+ * After that, keep track of it. :-)
+ * --- In order to do that, the DMA routines below should ---
+ * --- only be used while interrupts are disabled! ---
+ */
+static __inline__ void clear_dma_ff(unsigned int dmanr)
+{
+ printk (dma_str, "clear_dma_ff", dmanr);
+}
+
+/* set mode (above) for a specific DMA channel */
+extern void set_dma_mode(unsigned int dmanr, char mode);
+
+/* Set only the page register bits of the transfer address.
+ * This is used for successive transfers when we know the contents of
+ * the lower 16 bits of the DMA current address register, but a 64k boundary
+ * may have been crossed.
+ */
+static __inline__ void set_dma_page(unsigned int dmanr, char pagenr)
+{
+ printk (dma_str, "set_dma_page", dmanr);
+}
+
+
+/* Set transfer address & page bits for specific DMA channel.
+ * Assumes dma flipflop is clear.
+ */
+extern void set_dma_addr(unsigned int dmanr, unsigned int addr);
+
+/* Set transfer size for a specific DMA channel.
+ */
+extern void set_dma_count(unsigned int dmanr, unsigned int count);
+
+/* Get DMA residue count. After a DMA transfer, this
+ * should return zero. Reading this while a DMA transfer is
+ * still in progress will return unpredictable results.
+ * If called before the channel has been used, it may return 1.
+ * Otherwise, it returns the number of _bytes_ left to transfer.
+ *
+ * Assumes DMA flip-flop is clear.
+ */
+extern int get_dma_residue(unsigned int dmanr);
+
+#endif /* _ASM_ARCH_DMA_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-nexuspci/hardware.h
+ *
+ * Copyright (C) 1997 Philip Blundell
+ *
+ * This file contains the hardware definitions of the Nexus PCI card.
+ */
+
+#ifndef __ASM_ARCH_HARDWARE_H
+#define __ASM_ARCH_HARDWARE_H
+
+/*
+ * What hardware must be present
+ */
+
+#ifndef __ASSEMBLER__
+
+/*
+ * Mapping areas
+ */
+#define IO_END 0xffffffff
+#define IO_BASE 0xd0000000
+#define IO_SIZE (IO_END - IO_BASE)
+#define IO_START 0xd0000000
+
+/*
+ * RAM definitions
+ */
+#define RAM_BASE 0x40000000
+#define MAPTOPHYS(a) ((unsigned long)(a) - PAGE_OFFSET + RAM_BASE)
+#define KERNTOPHYS(a) ((unsigned long)(&a))
+#define KERNEL_BASE (0xc0008000)
+
+#else
+
+#define IO_BASE 0
+
+#endif
+#endif
+
--- /dev/null
+/*
+ * include/asm-arm/arch-ebsa110/irq.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+
+#define IRQ_MCLR ((volatile unsigned char *)0xf3000000)
+#define IRQ_MSET ((volatile unsigned char *)0xf2c00000)
+#define IRQ_MASK ((volatile unsigned char *)0xf2c00000)
+
+static __inline__ void mask_and_ack_irq(unsigned int irq)
+{
+ if (irq < 8)
+ *IRQ_MCLR = 1 << irq;
+}
+
+static __inline__ void mask_irq(unsigned int irq)
+{
+ if (irq < 8)
+ *IRQ_MCLR = 1 << irq;
+}
+
+static __inline__ void unmask_irq(unsigned int irq)
+{
+ if (irq < 8)
+ *IRQ_MSET = 1 << irq;
+}
+
+static __inline__ unsigned long get_enabled_irqs(void)
+{
+ return 0;
+}
+
+static __inline__ void irq_init_irq(void)
+{
+ unsigned long flags;
+
+ save_flags_cli (flags);
+ *IRQ_MCLR = 0xff;
+ *IRQ_MSET = 0x55;
+ *IRQ_MSET = 0x00;
+ if (*IRQ_MASK != 0x55)
+ while (1);
+ *IRQ_MCLR = 0xff; /* clear all interrupt enables */
+ restore_flags (flags);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-nexuspci/irqs.h
+ *
+ * Copyright (C) 1997 Philip Blundell
+ */
+
+#define IRQ_DUART 0
+#define IRQ_TIMER0 0 /* timer is part of the DUART */
+#define IRQ_PLX 1
+#define IRQ_PCI_D 2
+#define IRQ_PCI_C 3
+#define IRQ_PCI_B 4
+#define IRQ_PCI_A 5
+#define IRQ_SYSERR 6 /* must ask JB about this one */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/mmap.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+
+/*
+ * Use SRAM for cache flushing
+ */
+#define SAFE_ADDR 0x40000000
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-nexuspci/mmu.h
+ *
+ * Copyright (c) 1997 Philip Blundell.
+ *
+ */
+#ifndef __ASM_ARCH_MMU_H
+#define __ASM_ARCH_MMU_H
+
+/*
+ * On NexusPCI, the dram is contiguous
+ */
+#define __virt_to_phys(vpage) ((vpage) - PAGE_OFFSET + 0x40000000)
+#define __phys_to_virt(ppage) ((ppage) + PAGE_OFFSET - 0x40000000)
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/processor.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+
+#ifndef __ASM_ARCH_PROCESSOR_H
+#define __ASM_ARCH_PROCESSOR_H
+
+/*
+ * Bus types
+ */
+#define EISA_bus 0
+#define EISA_bus__is_a_macro /* for versions in ksyms.c */
+#define MCA_bus 0
+#define MCA_bus__is_a_macro /* for versions in ksyms.c */
+
+/*
+ * User space: 3GB
+ */
+#define TASK_SIZE (0xc0000000UL)
+
+/* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+
+#define INIT_MMAP \
+{ &init_mm, 0xc0000000, 0xc2000000, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, NULL, &init_mm.mmap }
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/serial.h
+ *
+ * Copyright (c) 1996,1997,1998 Russell King.
+ *
+ * Changelog:
+ * 15-10-1996 RMK Created
+ */
+#ifndef __ASM_ARCH_SERIAL_H
+#define __ASM_ARCH_SERIAL_H
+
+/*
+ * This assumes you have a 1.8432 MHz clock for your UART.
+ *
+ * It'd be nice if someone built a serial card with a 24.576 MHz
+ * clock, since the 16550A is capable of handling a top speed of 1.5
+ * megabits/second; but this requires the faster clock.
+ */
+#define BASE_BAUD (1843200 / 16)
+
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+
+ /* UART CLK PORT IRQ FLAGS */
+#define RS_UARTS \
+ { 0, BASE_BAUD, 0x3F8, 1, STD_COM_FLAGS }, /* ttyS0 */ \
+ { 0, BASE_BAUD, 0x2F8, 2, STD_COM_FLAGS }, /* ttyS1 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS2 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS3 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS4 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS5 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS6 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS7 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS8 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS9 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS10 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS11 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS12 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS13 */
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/shmparam.h
+ *
+ * Copyright (c) 1996 Russell King.
+ */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/system.h
+ *
+ * Copyright (c) 1996,1997,1998 Russell King.
+ */
+#ifndef __ASM_ARCH_SYSTEM_H
+#define __ASM_ARCH_SYSTEM_H
+
+extern __inline__ void arch_hard_reset (void)
+{
+ /*
+ * loop endlessly
+ */
+ cli();
+ while (1);
+}
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-nexuspci/time.h
+ *
+ * Copyright (c) 1997 Phil Blundell.
+ *
+ * Nexus PCI card has no real-time clock.
+ *
+ */
+
+extern __inline__ unsigned long gettimeoffset (void)
+{
+ return 0;
+}
+
+extern __inline__ int reset_timer (void)
+{
+ return 0;
+}
+
+extern __inline__ unsigned long setup_timer (void)
+{
+ reset_timer ();
+ /*
+ * Default the date to 1 Jan 1970 0:0:0
+ * You will have to run a time daemon to set the
+ * clock correctly at bootup
+ */
+ return mktime(1970, 1, 1, 0, 0, 0);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa110/uncompress.h
+ *
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+
+/*
+ * This does not append a newline
+ */
+static void puts(const char *s)
+{
+}
+
+/*
+ * nothing to do
+ */
+#define arch_decomp_setup()
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/a.out.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_ARCH_A_OUT_H
+#define __ASM_ARCH_A_OUT_H
+
+#ifdef __KERNEL__
+#define STACK_TOP (0xc0000000)
+#define LIBRARY_START_TEXT (0x00c00000)
+#endif
+
+#endif
+
--- /dev/null
+#ifndef __ASM_ARCH_DMA_H
+#define __ASM_ARCH_DMA_H
+
+#define MAX_DMA_ADDRESS 0xd0000000
+
+#ifdef KERNEL_ARCH_DMA
+
+static unsigned char arch_dma_setup;
+unsigned char arch_dma_ctrl[8];
+unsigned long arch_dma_addr[8];
+unsigned long arch_dma_cnt[8];
+
+static inline void arch_enable_dma(int dmanr)
+{
+ if (!(arch_dma_setup & (1 << dmanr))) {
+ arch_dma_setup |= 1 << dmanr;
+/* dma_interrupt (16 + dmanr);*/
+ }
+ arch_dma_ctrl[dmanr] |= DMA_CR_E;
+ switch (dmanr) {
+ case 0: outb (arch_dma_ctrl[0], IOMD_IO0CR); break;
+ case 1: outb (arch_dma_ctrl[1], IOMD_IO1CR); break;
+ case 2: outb (arch_dma_ctrl[2], IOMD_IO2CR); break;
+ case 3: outb (arch_dma_ctrl[3], IOMD_IO3CR); break;
+ case 4: outb (arch_dma_ctrl[4], IOMD_SD0CR); break;
+ case 5: outb (arch_dma_ctrl[5], IOMD_SD1CR); break;
+ }
+}
+
+static inline void arch_disable_dma(int dmanr)
+{
+ arch_dma_ctrl[dmanr] &= ~DMA_CR_E;
+ switch (dmanr) {
+ case 0: outb (arch_dma_ctrl[0], IOMD_IO0CR); break;
+ case 1: outb (arch_dma_ctrl[1], IOMD_IO1CR); break;
+ case 2: outb (arch_dma_ctrl[2], IOMD_IO2CR); break;
+ case 3: outb (arch_dma_ctrl[3], IOMD_IO3CR); break;
+ case 4: outb (arch_dma_ctrl[4], IOMD_SD0CR); break;
+ case 5: outb (arch_dma_ctrl[5], IOMD_SD1CR); break;
+ }
+}
+
+static inline void arch_set_dma_addr(int dmanr, unsigned int addr)
+{
+	arch_dma_setup &= ~(1 << dmanr);
+ arch_dma_addr[dmanr] = addr;
+}
+
+static inline void arch_set_dma_count(int dmanr, unsigned int count)
+{
+	arch_dma_setup &= ~(1 << dmanr);
+ arch_dma_cnt[dmanr] = count;
+}
+
+static inline void arch_set_dma_mode(int dmanr, char mode)
+{
+ switch (mode) {
+ case DMA_MODE_READ:
+ arch_dma_ctrl[dmanr] |= DMA_CR_D;
+ break;
+ case DMA_MODE_WRITE:
+ arch_dma_ctrl[dmanr] &= ~DMA_CR_D;
+ break;
+ }
+}
+
+static inline int arch_dma_count (int dmanr)
+{
+ return arch_dma_cnt[dmanr];
+}
+#endif
+
+/* enable/disable a specific DMA channel */
+extern void enable_dma(unsigned int dmanr);
+
+static __inline__ void disable_dma(unsigned int dmanr)
+{
+ switch(dmanr) {
+ case 1: break;
+ case 2: disable_irq(64); break;
+ default: printk(dma_str, "disable_dma", dmanr); break;
+ }
+}
+
+/* Clear the 'DMA Pointer Flip Flop'.
+ * Write 0 for LSB/MSB, 1 for MSB/LSB access.
+ * Use this once to initialize the FF to a known state.
+ * After that, keep track of it. :-)
+ * --- In order to do that, the DMA routines below should ---
+ * --- only be used while interrupts are disabled! ---
+ */
+#define clear_dma_ff(dmanr)
+
+/* set mode (above) for a specific DMA channel */
+extern void set_dma_mode(unsigned int dmanr, char mode);
+
+/* Set only the page register bits of the transfer address.
+ * This is used for successive transfers when we know the contents of
+ * the lower 16 bits of the DMA current address register, but a 64k boundary
+ * may have been crossed.
+ */
+static __inline__ void set_dma_page(unsigned int dmanr, char pagenr)
+{
+ printk (dma_str, "set_dma_page", dmanr);
+}
+
+
+/* Set transfer address & page bits for specific DMA channel.
+ * Assumes dma flipflop is clear.
+ */
+extern void set_dma_addr(unsigned int dmanr, unsigned int addr);
+
+/* Set transfer size for a specific DMA channel.
+ */
+extern void set_dma_count(unsigned int dmanr, unsigned int count);
+
+/* Get DMA residue count. After a DMA transfer, this
+ * should return zero. Reading this while a DMA transfer is
+ * still in progress will return unpredictable results.
+ * If called before the channel has been used, it may return 1.
+ * Otherwise, it returns the number of _bytes_ left to transfer.
+ *
+ * Assumes DMA flip-flop is clear.
+ */
+extern int get_dma_residue(unsigned int dmanr);
+
+#endif /* _ASM_ARCH_DMA_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/hardware.h
+ *
+ * Copyright (C) 1996 Russell King.
+ *
+ * This file contains the hardware definitions of the RiscPC series machines.
+ */
+
+#ifndef __ASM_ARCH_HARDWARE_H
+#define __ASM_ARCH_HARDWARE_H
+
+/*
+ * What hardware must be present
+ */
+#define HAS_IOMD
+#define HAS_PCIO
+#define HAS_VIDC20
+
+/*
+ * Optional hardware
+ */
+#define HAS_EXPMASK
+
+/*
+ * Physical definitions
+ */
+#define RAM_START 0x10000000
+#define IO_START 0x03000000
+#define SCREEN_START 0x02000000 /* VRAM */
+
+#ifndef __ASSEMBLER__
+
+/*
+ * for use with inb/outb
+ */
+#define VIDC_AUDIO_BASE 0x80140000
+#define VIDC_BASE 0x80100000
+#define IOCEC4IO_BASE 0x8009c000
+#define IOCECIO_BASE 0x80090000
+#define IOMD_BASE 0x80080000
+#define MEMCEC8IO_BASE 0x8000ac00
+#define MEMCECIO_BASE 0x80000000
+
+/*
+ * IO definitions
+ */
+#define EXPMASK_BASE ((volatile unsigned char *)0xe0360000)
+#define IOEB_BASE ((volatile unsigned char *)0xe0350050)
+#define IOC_BASE ((volatile unsigned char *)0xe0200000)
+#define PCIO_FLOPPYDMABASE ((volatile unsigned char *)0xe002a000)
+#define PCIO_BASE 0xe0010000
+
+/*
+ * Mapping areas
+ */
+#define IO_END 0xe0ffffff
+#define IO_BASE 0xe0000000
+#define IO_SIZE (IO_END - IO_BASE)
+
+/*
+ * Screen mapping information
+ */
+#define SCREEN2_END 0xe0000000
+#define SCREEN2_BASE 0xd8000000
+#define SCREEN1_END SCREEN2_BASE
+#define SCREEN1_BASE 0xd0000000
+
+/*
+ * Offsets from RAM base
+ */
+#define PARAMS_OFFSET 0x0100
+#define KERNEL_OFFSET 0x8000
+
+/*
+ * RAM definitions
+ */
+#define MAPTOPHYS(x) (x)
+#define KERNTOPHYS(x) ((unsigned long)(&x))
+#define GET_MEMORY_END(p) (PAGE_OFFSET + p->u1.s.page_size * \
+ (p->u1.s.pages_in_bank[0] + \
+ p->u1.s.pages_in_bank[1] + \
+ p->u1.s.pages_in_bank[2] + \
+ p->u1.s.pages_in_bank[3]))
+
+#define KERNEL_BASE (PAGE_OFFSET + KERNEL_OFFSET)
+#define PARAMS_BASE (PAGE_OFFSET + PARAMS_OFFSET)
+#define Z_PARAMS_BASE (RAM_START + PARAMS_OFFSET)
+
+#else
+
+#define VIDC_SND_BASE 0xe0500000
+#define VIDC_BASE 0xe0400000
+#define IOMD_BASE 0xe0200000
+#define IOC_BASE 0xe0200000
+#define PCIO_FLOPPYDMABASE 0xe002a000
+#define PCIO_BASE 0xe0010000
+#define IO_BASE 0xe0000000
+
+#endif
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/ide.h
+ *
+ * Copyright (c) 1997 Russell King
+ */
+
+static __inline__ int
+ide_default_irq(ide_ioreg_t base)
+{
+ if (base == 0x1f0)
+ return 9;
+ return 0;
+}
+
+static __inline__ ide_ioreg_t
+ide_default_io_base(int index)
+{
+ if (index == 0)
+ return 0x1f0;
+ return 0;
+}
+
+static __inline__ int
+ide_default_stepping(int index)
+{
+ return 0;
+}
+
+static __inline__ void
+ide_init_hwif_ports (ide_ioreg_t *p, ide_ioreg_t base, int stepping, int *irq)
+{
+ ide_ioreg_t port = base;
+ ide_ioreg_t ctrl = base + 0x206;
+ int i;
+
+ i = 8;
+ while (i--) {
+ *p++ = port;
+ port += 1 << stepping;
+ }
+ *p++ = ctrl;
+ if (irq != NULL)
+		*irq = 0;
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/io.h
+ *
+ * Copyright (C) 1997 Russell King
+ *
+ * Modifications:
+ * 06-Dec-1997 RMK Created.
+ */
+#ifndef __ASM_ARM_ARCH_IO_H
+#define __ASM_ARM_ARCH_IO_H
+
+/*
+ * Virtual view <-> DMA view memory address translations
+ * virt_to_bus: Used to translate the virtual address to an
+ * address suitable to be passed to set_dma_addr
+ * bus_to_virt: Used to convert an address for DMA operations
+ * to an address that the kernel can use.
+ */
+#define virt_to_bus(x) ((unsigned long)(x))
+#define bus_to_virt(x) ((void *)(x))
+
+/*
+ * This architecture does not require any delayed IO, and
+ * has the constant-optimised IO
+ */
+#undef ARCH_IO_DELAY
+
+/*
+ * We use two different types of addressing - PC style addresses, and ARM
+ * addresses. PC style accesses the PC hardware with the normal PC IO
+ * addresses, eg 0x3f8 for serial#1. ARM addresses are 0x80000000+
+ * and are translated to the start of IO. Note that all addresses are
+ * shifted left!
+ */
+#define __PORT_PCIO(x) (!((x) & 0x80000000))
+
+/*
+ * Dynamic IO functions - let the compiler
+ * optimize the expressions
+ */
+#define DECLARE_DYN_OUT(fnsuffix,instr) \
+extern __inline__ void __out##fnsuffix (unsigned int value, unsigned int port) \
+{ \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+ "tst %2, #0x80000000\n\t" \
+ "mov %0, %4\n\t" \
+ "addeq %0, %0, %3\n\t" \
+ "str" ##instr## " %1, [%0, %2, lsl #2]" \
+ : "=&r" (temp) \
+ : "r" (value), "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE) \
+ : "cc"); \
+}
+
+#define DECLARE_DYN_IN(sz,fnsuffix,instr) \
+extern __inline__ unsigned sz __in##fnsuffix (unsigned int port) \
+{ \
+ unsigned long temp, value; \
+ __asm__ __volatile__( \
+ "tst %2, #0x80000000\n\t" \
+ "mov %0, %4\n\t" \
+ "addeq %0, %0, %3\n\t" \
+ "ldr" ##instr## " %1, [%0, %2, lsl #2]" \
+ : "=&r" (temp), "=r" (value) \
+ : "r" (port), "Ir" (PCIO_BASE - IO_BASE), "Ir" (IO_BASE) \
+ : "cc"); \
+ return (unsigned sz)value; \
+}
+
+extern __inline__ unsigned int __ioaddr (unsigned int port)
+{
+	if (__PORT_PCIO(port))
+		return (unsigned int)(PCIO_BASE + (port << 2));
+	else
+		return (unsigned int)(IO_BASE + (port << 2));
+}
+
+#define DECLARE_IO(sz,fnsuffix,instr) \
+ DECLARE_DYN_OUT(fnsuffix,instr) \
+ DECLARE_DYN_IN(sz,fnsuffix,instr)
+
+DECLARE_IO(char,b,"b")
+DECLARE_IO(short,w,"")
+DECLARE_IO(long,l,"")
+
+#undef DECLARE_IO
+#undef DECLARE_DYN_OUT
+#undef DECLARE_DYN_IN
+
+/*
+ * Constant address IO functions
+ *
+ * These have to be macros for the 'J' constraint to work -
+ * +/-4096 immediate operand.
+ */
+#define __outbc(value,port) \
+({ \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "strb %0, [%1, %2]" \
+ : : "r" (value), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "strb %0, [%1, %2]" \
+ : : "r" (value), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inbc(port) \
+({ \
+ unsigned char result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldrb %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldrb %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result; \
+})
+
+#define __outwc(value,port) \
+({ \
+ unsigned long v = value; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v|v<<16), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v|v<<16), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inwc(port) \
+({ \
+ unsigned short result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result & 0xffff; \
+})
+
+#define __outlc(value,port) \
+({ \
+ unsigned long v = value; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v), "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "str %0, [%1, %2]" \
+ : : "r" (v), "r" (IO_BASE), "r" ((port) << 2)); \
+})
+
+#define __inlc(port) \
+({ \
+ unsigned long result; \
+ if (__PORT_PCIO((port))) \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "Jr" ((port) << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldr %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" ((port) << 2)); \
+ result; \
+})
+
+#define __ioaddrc(port) \
+({ \
+ unsigned long addr; \
+ if (__PORT_PCIO((port))) \
+ addr = PCIO_BASE + ((port) << 2); \
+ else \
+ addr = IO_BASE + ((port) << 2); \
+ addr; \
+})
+
+/*
+ * Translated address IO functions
+ *
+ * IO address has already been translated to a virtual address
+ */
+#define outb_t(v,p) \
+ (*(volatile unsigned char *)(p) = (v))
+
+#define inb_t(p) \
+ (*(volatile unsigned char *)(p))
+
+#define outl_t(v,p) \
+ (*(volatile unsigned long *)(p) = (v))
+
+#define inl_t(p) \
+ (*(volatile unsigned long *)(p))
+
+#endif
--- /dev/null
+/*
+ * include/asm-arm/arch-rpc/irq.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * Changelog:
+ * 10-10-1996 RMK Brought up to date with arch-sa110eval
+ */
+
+#define BUILD_IRQ(s,n,m) \
+ void IRQ##n##_interrupt(void); \
+ void fast_IRQ##n##_interrupt(void); \
+ void bad_IRQ##n##_interrupt(void); \
+ void probe_IRQ##n##_interrupt(void);
+
+/*
+ * The timer is a special interrupt
+ */
+#define IRQ5_interrupt timer_IRQ_interrupt
+
+#define IRQ_INTERRUPT(n) IRQ##n##_interrupt
+#define FAST_INTERRUPT(n) fast_IRQ##n##_interrupt
+#define BAD_INTERRUPT(n) bad_IRQ##n##_interrupt
+#define PROBE_INTERRUPT(n) probe_IRQ##n##_interrupt
+
+#define X(x) (x)|0x01, (x)|0x02, (x)|0x04, (x)|0x08, (x)|0x10, (x)|0x20, (x)|0x40, (x)|0x80
+#define Z(x) (x), (x), (x), (x), (x), (x), (x), (x)
+
+static __inline__ void mask_and_ack_irq(unsigned int irq)
+{
+ static const int addrmasks[] = {
+ X((IOMD_IRQMASKA - IOMD_BASE)<<18 | (1 << 15)),
+ X((IOMD_IRQMASKB - IOMD_BASE)<<18),
+ X((IOMD_DMAMASK - IOMD_BASE)<<18),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ X((IOMD_FIQMASK - IOMD_BASE)<<18),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0),
+ Z(0)
+ };
+ unsigned int temp1, temp2;
+
+ __asm__ __volatile__(
+" ldr %1, [%5, %3, lsl #2]\n"
+" teq %1, #0\n"
+" beq 2f\n"
+" ldrb %0, [%2, %1, lsr #16]\n"
+" bic %0, %0, %1\n"
+" strb %0, [%2, %1, lsr #16]\n"
+" tst %1, #0x8000\n" /* do we need an IRQ clear? */
+" strneb %1, [%2, %4]\n"
+"2:"
+ : "=&r" (temp1), "=&r" (temp2)
+ : "r" (ioaddr(IOMD_BASE)), "r" (irq),
+ "I" ((IOMD_IRQCLRA - IOMD_BASE) << 2), "r" (addrmasks));
+}
+
+#undef X
+#undef Z
+
+static __inline__ void mask_irq(unsigned int irq)
+{
+ extern void ecard_disableirq (unsigned int);
+ extern void ecard_disablefiq (unsigned int);
+ unsigned char mask = 1 << (irq & 7);
+
+ switch (irq >> 3) {
+ case 0:
+ outb(inb(IOMD_IRQMASKA) & ~mask, IOMD_IRQMASKA);
+ break;
+ case 1:
+ outb(inb(IOMD_IRQMASKB) & ~mask, IOMD_IRQMASKB);
+ break;
+ case 2:
+ outb(inb(IOMD_DMAMASK) & ~mask, IOMD_DMAMASK);
+ break;
+ case 4:
+ ecard_disableirq (irq & 7);
+ break;
+ case 8:
+ outb(inb(IOMD_FIQMASK) & ~mask, IOMD_FIQMASK);
+ break;
+ case 12:
+ ecard_disablefiq (irq & 7);
+ }
+}
+
+static __inline__ void unmask_irq(unsigned int irq)
+{
+ extern void ecard_enableirq (unsigned int);
+ extern void ecard_enablefiq (unsigned int);
+ unsigned char mask = 1 << (irq & 7);
+
+ switch (irq >> 3) {
+ case 0:
+ outb(inb(IOMD_IRQMASKA) | mask, IOMD_IRQMASKA);
+ break;
+ case 1:
+ outb(inb(IOMD_IRQMASKB) | mask, IOMD_IRQMASKB);
+ break;
+ case 2:
+ outb(inb(IOMD_DMAMASK) | mask, IOMD_DMAMASK);
+ break;
+ case 4:
+ ecard_enableirq (irq & 7);
+ break;
+ case 8:
+ outb(inb(IOMD_FIQMASK) | mask, IOMD_FIQMASK);
+ break;
+ case 12:
+ ecard_enablefiq (irq & 7);
+ }
+}
+
+static __inline__ unsigned long get_enabled_irqs(void)
+{
+ return inb(IOMD_IRQMASKA) | inb(IOMD_IRQMASKB) << 8 | inb(IOMD_DMAMASK) << 16;
+}
+
+static __inline__ void irq_init_irq(void)
+{
+ outb(0, IOMD_IRQMASKA);
+ outb(0, IOMD_IRQMASKB);
+ outb(0, IOMD_FIQMASK);
+ outb(0, IOMD_DMAMASK);
+ outb(0, IOMD_IO0CR);
+ outb(0, IOMD_IO1CR);
+ outb(0, IOMD_IO2CR);
+ outb(0, IOMD_IO3CR);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/irqs.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#define IRQ_PRINTER 0
+#define IRQ_BATLOW 1
+#define IRQ_FLOPPYINDEX 2
+#define IRQ_VSYNCPULSE 3
+#define IRQ_POWERON 4
+#define IRQ_TIMER0 5
+#define IRQ_TIMER1 6
+#define IRQ_IMMEDIATE 7
+#define IRQ_EXPCARDFIQ 8
+#define IRQ_SOUNDCHANGE 9
+#define IRQ_SERIALPORT 10
+#define IRQ_HARDDISK 11
+#define IRQ_FLOPPYDISK 12
+#define IRQ_EXPANSIONCARD 13
+#define IRQ_KEYBOARDTX 14
+#define IRQ_KEYBOARDRX 15
+
+#define FIQ_FLOPPYDATA 0
+#define FIQ_ECONET 2
+#define FIQ_SERIALPORT 4
+#define FIQ_EXPANSIONCARD 6
+#define FIQ_FORCE 7
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/mmap.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#define HAVE_MAP_VID_MEM
+#define SAFE_ADDR 0x00000000 /* ROM */
+
+unsigned long map_screen_mem(unsigned long log_start, unsigned long kmem, int update)
+{
+ static int updated = 0;
+ unsigned long address;
+ pgd_t *pgd;
+
+ if (updated)
+ return 0;
+ updated = update;
+
+ address = SCREEN_START | PMD_TYPE_SECT | PMD_DOMAIN(DOMAIN_KERNEL) | PMD_SECT_AP_WRITE;
+ pgd = swapper_pg_dir + (SCREEN2_BASE >> PGDIR_SHIFT);
+ pgd_val(pgd[0]) = address;
+ pgd_val(pgd[1]) = address + (1 << PGDIR_SHIFT);
+
+ if (update) {
+ unsigned long pgtable = PAGE_ALIGN(kmem), *p;
+ int i;
+
+ memzero ((void *)pgtable, 4096);
+
+ pgd_val(pgd[-2]) = virt_to_phys(pgtable) | PMD_TYPE_TABLE | PMD_DOMAIN(DOMAIN_KERNEL);
+ pgd_val(pgd[-1]) = virt_to_phys(pgtable + PTRS_PER_PTE*4) | PMD_TYPE_TABLE | PMD_DOMAIN(DOMAIN_KERNEL);
+ p = (unsigned long *)pgtable;
+
+ i = PTRS_PER_PTE * 2 - ((SCREEN1_END - log_start) >> PAGE_SHIFT);
+ address = SCREEN_START | PTE_TYPE_SMALL | PTE_AP_WRITE;
+
+ while (i < PTRS_PER_PTE * 2) {
+ p[i++] = address;
+ address += PAGE_SIZE;
+ }
+
+ flush_page_to_ram(pgtable);
+
+ kmem = pgtable + PAGE_SIZE;
+ }
+ return kmem;
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/mmu.h
+ *
+ * Copyright (c) 1996,1997,1998 Russell King.
+ *
+ * Changelog:
+ * 20-10-1996 RMK Created
+ * 31-12-1997 RMK Fixed definitions to reduce warnings
+ * 11-01-1998 RMK Uninlined to reduce hits on cache
+ */
+#ifndef __ASM_ARCH_MMU_H
+#define __ASM_ARCH_MMU_H
+
+extern unsigned long __virt_to_phys(unsigned long vpage);
+extern unsigned long __phys_to_virt(unsigned long ppage);
+
+#endif
--- /dev/null
+/*
+ * Dummy oldlatches.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifdef __need_oldlatches
+#error "Old latches not present in this (rpc) machine"
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/processor.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 10-09-1996 RMK Created
+ */
+
+#ifndef __ASM_ARCH_PROCESSOR_H
+#define __ASM_ARCH_PROCESSOR_H
+
+/*
+ * Bus types
+ */
+#define EISA_bus 0
+#define EISA_bus__is_a_macro /* for versions in ksyms.c */
+#define MCA_bus 0
+#define MCA_bus__is_a_macro /* for versions in ksyms.c */
+
+/*
+ * User space: 3GB
+ */
+#define TASK_SIZE (0xc0000000UL)
+
+/* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+
+#define INIT_MMAP \
+{ &init_mm, 0xc0000000, 0xc2000000, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, NULL, &init_mm.mmap }
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/serial.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 15-10-1996 RMK Created
+ */
+#ifndef __ASM_ARCH_SERIAL_H
+#define __ASM_ARCH_SERIAL_H
+
+/*
+ * This assumes you have a 1.8432 MHz clock for your UART.
+ *
+ * It'd be nice if someone built a serial card with a 24.576 MHz
+ * clock, since the 16550A is capable of handling a top speed of 1.5
+ * megabits/second; but this requires the faster clock.
+ */
+#define BASE_BAUD (1843200 / 16)
+
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+
+ /* UART CLK PORT IRQ FLAGS */
+#define RS_UARTS \
+ { 0, BASE_BAUD, 0x3F8, 10, STD_COM_FLAGS }, /* ttyS0 */ \
+ { 0, BASE_BAUD, 0x2F8, 10, STD_COM_FLAGS }, /* ttyS1 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS2 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS3 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS4 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS5 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS6 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS7 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS8 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS9 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS10 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS11 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS12 */ \
+ { 0, BASE_BAUD, 0 , 0, STD_COM_FLAGS }, /* ttyS13 */
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/shmparam.h
+ *
+ * Copyright (c) 1996 Russell King.
+ */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/system.h
+ *
+ * Copyright (c) 1996 Russell King
+ */
+#ifndef __ASM_ARCH_SYSTEM_H
+#define __ASM_ARCH_SYSTEM_H
+
+#include <asm/proc-fns.h>
+
+#define arch_hard_reset() { \
+ extern void ecard_reset (int card); \
+ outb (0, IOMD_ROMCR0); \
+ ecard_reset (-1); \
+ cli(); \
+ __asm__ __volatile__("msr spsr, r1;" \
+ "mcr p15, 0, %0, c1, c0, 0;" \
+ "movs pc, #0" \
+ : \
+ : "r" (processor.u.armv3v4.reset())); \
+ }
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/time.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 24-Sep-1996 RMK Created
+ * 10-Oct-1996 RMK Brought up to date with arch-sa110eval
+ * 04-Dec-1997 RMK Updated for new arch/arm/time.c
+ */
+
+extern __inline__ unsigned long gettimeoffset (void)
+{
+ unsigned long offset = 0;
+ unsigned int count1, count2, status1, status2;
+
+ status1 = inb(IOMD_IRQREQA);
+ barrier ();
+ outb(0, IOMD_T0LATCH);
+ barrier ();
+ count1 = inb(IOMD_T0CNTL) | (inb(IOMD_T0CNTH) << 8);
+ barrier ();
+ status2 = inb(IOMD_IRQREQA);
+ barrier ();
+ outb(0, IOMD_T0LATCH);
+ barrier ();
+ count2 = inb(IOMD_T0CNTL) | (inb(IOMD_T0CNTH) << 8);
+
+ if (count2 < count1) {
+ /*
+ * This means that we haven't just had an interrupt
+ * while reading into status2.
+ */
+ if (status2 & (1 << 5))
+ offset = tick;
+ count1 = count2;
+ } else if (count2 > count1) {
+ /*
+ * We have just had another interrupt while reading
+ * status2.
+ */
+ offset += tick;
+ count1 = count2;
+ }
+
+ count1 = LATCH - count1;
+ /*
+ * count1 = number of clock ticks since last interrupt
+ */
+ offset += count1 * tick / LATCH;
+ return offset;
+}
+
+/*
+ * No need to reset the timer at every irq
+ */
+#define reset_timer() 1
+
+/*
+ * Updating of the RTC. We don't currently write the time to the
+ * CMOS clock.
+ */
+#define update_rtc()
+
+/*
+ * Set up timer interrupt, and return the current time in seconds.
+ */
+extern __inline__ unsigned long setup_timer (void)
+{
+ extern int iic_control (unsigned char, int, char *, int);
+ unsigned int year, mon, day, hour, min, sec;
+ char buf[8];
+
+ outb(LATCH & 255, IOMD_T0LTCHL);
+ outb(LATCH >> 8, IOMD_T0LTCHH);
+ outb(0, IOMD_T0GO);
+
+ iic_control (0xa0, 0xc0, buf, 1);
+ year = buf[0];
+ if ((year += 1900) < 1970)
+ year += 100;
+
+ iic_control (0xa0, 2, buf, 5);
+ mon = buf[4] & 0x1f;
+ day = buf[3] & 0x3f;
+ hour = buf[2];
+ min = buf[1];
+ sec = buf[0];
+ BCD_TO_BIN(mon);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(hour);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(sec);
+
+ return mktime(year, mon, day, hour, min, sec);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-rpc/timex.h
+ *
+ * RiscPC architecture timex specifications
+ *
+ * Copyright (C) 1997, 1998 Russell King
+ */
+
+/*
+ * On the RiscPC, the clock ticks at 2MHz.
+ */
+#define CLOCK_TICK_RATE 2000000
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-a5k/uncompress.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+#define VIDMEM ((char *)SCREEN_START)
+
+#include "../arch/arm/drivers/char/font.h"
+#include <asm/hardware.h>
+#include <asm/io.h>
+
+int video_num_columns, video_num_lines, video_size_row;
+int white, bytes_per_char_h;
+extern unsigned long con_charconvtable[256];
+
+struct param_struct {
+ unsigned long page_size;
+ unsigned long nr_pages;
+ unsigned long ramdisk_size;
+ unsigned long mountrootrdonly;
+ unsigned long rootdev;
+ unsigned long video_num_cols;
+ unsigned long video_num_rows;
+ unsigned long video_x;
+ unsigned long video_y;
+ unsigned long memc_control_reg;
+ unsigned char sounddefault;
+ unsigned char adfsdrives;
+ unsigned char bytes_per_char_h;
+ unsigned char bytes_per_char_v;
+ unsigned long unused[256/4-11];
+};
+
+static const unsigned long palette_4[16] = {
+ 0x00000000,
+ 0x000000cc, /* Red */
+ 0x0000cc00, /* Green */
+ 0x0000cccc, /* Yellow */
+ 0x00cc0000, /* Blue */
+ 0x00cc00cc, /* Magenta */
+ 0x00cccc00, /* Cyan */
+ 0x00cccccc, /* White */
+ 0x00000000,
+ 0x000000ff,
+ 0x0000ff00,
+ 0x0000ffff,
+ 0x00ff0000,
+ 0x00ff00ff,
+ 0x00ffff00,
+ 0x00ffffff
+};
+
+#define palette_setpixel(p) *(unsigned long *)(IO_START+0x00400000) = 0x10000000|((p) & 255)
+#define palette_write(v) *(unsigned long *)(IO_START+0x00400000) = 0x00000000|((v) & 0x00ffffff)
+
+static struct param_struct * const params = (struct param_struct *)Z_PARAMS_BASE;
+
+#ifndef STANDALONE_DEBUG
+/*
+ * This does not append a newline
+ */
+static void puts(const char *s)
+{
+ extern void ll_write_char(char *, unsigned long);
+ int x,y;
+ unsigned char c;
+ char *ptr;
+
+ x = params->video_x;
+ y = params->video_y;
+
+ while ( ( c = *(unsigned char *)s++ ) != '\0' ) {
+ if ( c == '\n' ) {
+ x = 0;
+ if ( ++y >= video_num_lines ) {
+ y--;
+ }
+ } else {
+ ptr = VIDMEM + ((y*video_num_columns*params->bytes_per_char_v+x)*bytes_per_char_h);
+ ll_write_char(ptr, c|(white<<8));
+ if ( ++x >= video_num_columns ) {
+ x = 0;
+ if ( ++y >= video_num_lines ) {
+ y--;
+ }
+ }
+ }
+ }
+
+ params->video_x = x;
+ params->video_y = y;
+}
+
+static void error(char *x);
+
+/*
+ * Setup for decompression
+ */
+static void arch_decomp_setup(void)
+{
+ int i;
+
+ video_num_lines = params->video_num_rows;
+ video_num_columns = params->video_num_cols;
+ bytes_per_char_h = params->bytes_per_char_h;
+ video_size_row = video_num_columns * bytes_per_char_h;
+ if (bytes_per_char_h == 4)
+ for (i = 0; i < 256; i++)
+ con_charconvtable[i] =
+ (i & 128 ? 1 << 0 : 0) |
+ (i & 64 ? 1 << 4 : 0) |
+ (i & 32 ? 1 << 8 : 0) |
+ (i & 16 ? 1 << 12 : 0) |
+ (i & 8 ? 1 << 16 : 0) |
+ (i & 4 ? 1 << 20 : 0) |
+ (i & 2 ? 1 << 24 : 0) |
+ (i & 1 ? 1 << 28 : 0);
+ else
+ for (i = 0; i < 16; i++)
+ con_charconvtable[i] =
+ (i & 8 ? 1 << 0 : 0) |
+ (i & 4 ? 1 << 8 : 0) |
+ (i & 2 ? 1 << 16 : 0) |
+ (i & 1 ? 1 << 24 : 0);
+
+
+ palette_setpixel(0);
+ if (bytes_per_char_h == 1) {
+ palette_write (0);
+ palette_write (0x00ffffff);
+ for (i = 2; i < 256; i++)
+ palette_write (0);
+ white = 1;
+ } else {
+ for (i = 0; i < 256; i++)
+ palette_write (i < 16 ? palette_4[i] : 0);
+ white = 7;
+ }
+
+ if (params->nr_pages * params->page_size < 4096*1024) error("<4M of mem\n");
+}
+#endif
+
--- /dev/null
+/*
+ * linux/asm/assembler.h
+ *
+ * This file contains arm architecture specific defines
+ * for the different processors.
+ *
+ * Do not include any C declarations in this file - it is included by
+ * assembler source.
+ */
+
+/*
+ * LOADREGS: multiple register load (ldm) with pc in register list
+ * (takes account of ARM6 not using ^)
+ *
+ * RETINSTR: return instruction: adds the 's' in at the end of the
+ * instruction if this is not an ARM6
+ *
+ * SAVEIRQS: save IRQ state (not required on ARM2/ARM3 - done
+ * implicitly)
+ *
+ * RESTOREIRQS: restore IRQ state (not required on ARM2/ARM3 - done
+ * implicitly with ldm ... ^ or movs)
+ *
+ * These next two need thinking about - can't easily use stack... (see system.S)
+ * DISABLEIRQS: disable IRQS in SVC mode
+ *
+ * ENABLEIRQS: enable IRQS in SVC mode
+ *
+ * USERMODE: switch to USER mode
+ *
+ * SVCMODE: switch to SVC mode
+ */
+
+#include <asm/proc/assembler.h>
--- /dev/null
+/*
+ * linux/include/asm-arm/atomic.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 27-06-1996 RMK Created
+ * 13-04-1997 RMK Made functions atomic!
+ * 07-12-1997 RMK Upgraded for v2.1.
+ */
+#ifndef __ASM_ARM_ATOMIC_H
+#define __ASM_ARM_ATOMIC_H
+
+#include <asm/system.h>
+
+#ifdef __SMP__
+#error SMP not supported
+#endif
+
+typedef struct { int counter; } atomic_t;
+
+#define ATOMIC_INIT(i) { (i) }
+
+#define atomic_read(v) ((v)->counter)
+#define atomic_set(v,i) (((v)->counter) = (i))
+
+static __inline__ void atomic_add(int i, volatile atomic_t *v)
+{
+ unsigned long flags;
+
+ save_flags_cli (flags);
+ v->counter += i;
+ restore_flags (flags);
+}
+
+static __inline__ void atomic_sub(int i, volatile atomic_t *v)
+{
+ unsigned long flags;
+
+ save_flags_cli (flags);
+ v->counter -= i;
+ restore_flags (flags);
+}
+
+static __inline__ void atomic_inc(volatile atomic_t *v)
+{
+ unsigned long flags;
+
+ save_flags_cli (flags);
+ v->counter += 1;
+ restore_flags (flags);
+}
+
+static __inline__ void atomic_dec(volatile atomic_t *v)
+{
+ unsigned long flags;
+
+ save_flags_cli (flags);
+ v->counter -= 1;
+ restore_flags (flags);
+}
+
+static __inline__ int atomic_dec_and_test(volatile atomic_t *v)
+{
+ unsigned long flags;
+ int result;
+
+ save_flags_cli (flags);
+ v->counter -= 1;
+ result = (v->counter == 0);
+ restore_flags (flags);
+
+ return result;
+}
+
+static __inline__ void atomic_clear_mask(unsigned long mask, unsigned long *addr)
+{
+ unsigned long flags;
+
+ save_flags_cli (flags);
+ *addr &= ~mask;
+ restore_flags (flags);
+}
+
+#endif
--- /dev/null
+#ifndef __ASM_ARM_BITOPS_H
+#define __ASM_ARM_BITOPS_H
+
+/*
+ * Copyright 1995, Russell King.
+ * Various bits and pieces copyrights include:
+ * Linus Torvalds (test_bit).
+ */
+
+/*
+ * These should be done with inline assembly.
+ * All bit operations return 0 if the bit
+ * was cleared before the operation and != 0 if it was not.
+ *
+ * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
+ */
+
+/*
+ * Function prototypes to keep gcc -Wall happy
+ */
+extern void set_bit(int nr, volatile void * addr);
+extern void clear_bit(int nr, volatile void * addr);
+extern void change_bit(int nr, volatile void * addr);
+extern int test_and_set_bit(int nr, volatile void * addr);
+extern int test_and_clear_bit(int nr, volatile void * addr);
+extern int test_and_change_bit(int nr, volatile void * addr);
+extern int find_first_zero_bit(void * addr, unsigned size);
+extern int find_next_zero_bit(void * addr, int size, int offset);
+
+/*
+ * This routine doesn't need to be atomic.
+ */
+extern __inline__ int test_bit(int nr, const void * addr)
+{
+ return ((unsigned char *) addr)[nr >> 3] & (1U << (nr & 7));
+}
+
+/*
+ * ffz = Find First Zero in word. Undefined if no zero exists,
+ * so code should check against ~0UL first..
+ */
+extern __inline__ unsigned long ffz(unsigned long word)
+{
+ int k;
+
+ word = ~word;
+ k = 31;
+ if (word & 0x0000ffff) { k -= 16; word <<= 16; }
+ if (word & 0x00ff0000) { k -= 8; word <<= 8; }
+ if (word & 0x0f000000) { k -= 4; word <<= 4; }
+ if (word & 0x30000000) { k -= 2; word <<= 2; }
+ if (word & 0x40000000) { k -= 1; }
+ return k;
+}
+
+#ifdef __KERNEL__
+
+#define ext2_set_bit test_and_set_bit
+#define ext2_clear_bit test_and_clear_bit
+#define ext2_test_bit test_bit
+#define ext2_find_first_zero_bit find_first_zero_bit
+#define ext2_find_next_zero_bit find_next_zero_bit
+
+/* Bitmap functions for the minix filesystem. */
+#define minix_set_bit(nr,addr) test_and_set_bit(nr,addr)
+#define minix_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_bit(nr,addr) test_bit(nr,addr)
+#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_ARM_BITOPS_H */
--- /dev/null
+/*
+ * include/asm-arm/bugs.h
+ *
+ * Copyright (C) 1995 Russell King
+ */
+#ifndef __ASM_BUGS_H
+#define __ASM_BUGS_H
+
+#include <asm/proc-fns.h>
+
+#define check_bugs() processor._check_bugs()
+
+#endif
--- /dev/null
+#ifndef __ASM_ARM_BYTEORDER_H
+#define __ASM_ARM_BYTEORDER_H
+
+#include <asm/types.h>
+
+#ifdef __GNUC__
+
+static __inline__ __const__ __u32 ___arch__swab32(__u32 x)
+{
+ unsigned long xx;
+ __asm__("eor\t%1, %0, %0, ror #16\n\t"
+ "bic\t%1, %1, #0xff0000\n\t"
+ "mov\t%0, %0, ror #8\n\t"
+ "eor\t%0, %0, %1, lsr #8\n\t"
+ : "=r" (x), "=&r" (xx)
+ : "0" (x));
+ return x;
+}
+
+static __inline__ __const__ __u16 ___arch__swab16(__u16 x)
+{
+ __asm__("eor\t%0, %0, %0, lsr #8\n\t"
+ "eor\t%0, %0, %0, lsl #8\n\t"
+ "bic\t%0, %0, #0xff0000\n\t"
+ "eor\t%0, %0, %0, lsr #8\n\t"
+ : "=r" (x)
+ : "0" (x));
+ return x;
+}
+
+#define __arch__swab32(x) ___arch__swab32(x)
+#define __arch__swab16(x) ___arch__swab16(x)
+
+#endif /* __GNUC__ */
+
+#include <linux/byteorder/little_endian.h>
+
+#endif
+
--- /dev/null
+/*
+ * include/asm-arm/cache.h
+ */
+#ifndef __ASMARM_CACHE_H
+#define __ASMARM_CACHE_H
+
+#define L1_CACHE_BYTES 32
+#define L1_CACHE_ALIGN(x) (((x)+(L1_CACHE_BYTES-1))&~(L1_CACHE_BYTES-1))
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/checksum.h
+ *
+ * IP checksum routines
+ *
+ * Copyright (C) Original authors of ../asm-i386/checksum.h
+ * Copyright (C) 1996,1997,1998 Russell King
+ */
+#ifndef __ASM_ARM_CHECKSUM_H
+#define __ASM_ARM_CHECKSUM_H
+
+#ifndef __ASM_ARM_SEGMENT_H
+#include <asm/segment.h>
+#endif
+
+/*
+ * computes the checksum of a memory block at buff, length len,
+ * and adds in "sum" (32-bit)
+ *
+ * returns a 32-bit number suitable for feeding into itself
+ * or csum_tcpudp_magic
+ *
+ * this function must be called with even lengths, except
+ * for the last fragment, which may be odd
+ *
+ * it's best to have buff aligned on a 32-bit boundary
+ */
+unsigned int csum_partial(const unsigned char * buff, int len, unsigned int sum);
+
+/*
+ * the same as csum_partial, but copies from src while it
+ * checksums, and handles user-space pointer exceptions correctly, when needed.
+ *
+ * here even more important to align src and dst on a 32-bit (or even
+ * better 64-bit) boundary
+ */
+
+extern
+unsigned int csum_partial_copy_from_user (const char *src, char *dst, int len, int sum, int *err_ptr);
+
+/*
+ * This combination is currently not used, but possible:
+ */
+extern
+unsigned int csum_partial_copy_to_user (const char *src, char *dst, int len, int sum, int *err_ptr);
+
+/*
+ * These are the old (and unsafe) way of doing checksums, a warning message will be
+ * printed if they are used and an exception occurs.
+ *
+ * these functions should go away after some time.
+ */
+#define csum_partial_copy_fromuser csum_partial_copy
+unsigned int csum_partial_copy(const char *src, char *dst, int len, int sum);
+
+/*
+ * This is a version of ip_compute_csum() optimized for IP headers,
+ * which always checksum on 4 octet boundaries.
+ *
+ * Converted and optimised for ARM by R. M. King.
+ *
+ * Note: the order that the LDM registers are loaded with respect to
+ * the adc's doesn't matter.
+ */
+static inline unsigned short ip_fast_csum(unsigned char * iph,
+ unsigned int ihl) {
+ unsigned int sum, tmp1;
+
+ __asm__ __volatile__("
+ sub %2, %2, #5
+ ldr %0, [%1], #4
+ ldr %3, [%1], #4
+ adds %0, %0, %3
+ ldr %3, [%1], #4
+ adcs %0, %0, %3
+ ldr %3, [%1], #4
+ adcs %0, %0, %3
+1: ldr %3, [%1], #4
+ adcs %0, %0, %3
+ tst %2, #15
+ subne %2, %2, #1
+ bne 1b
+ adc %0, %0, #0
+ adds %0, %0, %0, lsl #16
+ addcs %0, %0, #0x10000
+ mvn %0, %0
+ mov %0, %0, lsr #16
+ "
+ : "=&r" (sum), "=&r" (iph), "=&r" (ihl), "=&r" (tmp1)
+ : "1" (iph), "2" (ihl));
+ return(sum);
+}
+
+/*
+ * computes the checksum of the TCP/UDP pseudo-header
+ * returns a 16-bit checksum, already complemented
+ */
+static inline unsigned short int csum_tcpudp_magic(unsigned long saddr,
+ unsigned long daddr,
+ unsigned short len,
+ unsigned short proto,
+ unsigned int sum) {
+ __asm__ __volatile__("
+ adds %0, %0, %1
+ adcs %0, %0, %4
+ adcs %0, %0, %5
+ adc %0, %0, #0
+ adds %0, %0, %0, lsl #16
+ addcs %0, %0, #0x10000
+ mvn %0, %0
+ mov %0, %0, lsr #16
+ "
+ : "=&r" (sum), "=&r" (saddr)
+ : "0" (daddr), "1"(saddr), "r"((ntohs(len)<<16)+proto*256), "r"(sum));
+ return((unsigned short)sum);
+}
+
+/*
+ * Fold a partial checksum without adding pseudo headers
+ */
+static inline unsigned int csum_fold(unsigned int sum)
+{
+ __asm__ __volatile__("
+ adds %0, %0, %0, lsl #16
+ addcs %0, %0, #0x10000
+ mvn %0, %0
+ mov %0, %0, lsr #16
+ "
+ : "=r" (sum)
+ : "0" (sum));
+ return sum;
+}
+
+
+/*
+ * this routine is used for miscellaneous IP-like checksums, mainly
+ * in icmp.c
+ */
+
+static inline unsigned short ip_compute_csum(unsigned char * buff, int len) {
+ unsigned int sum;
+
+ __asm__ __volatile__("
+ adds %0, %0, %0, lsl #16
+ addcs %0, %0, #0x10000
+ mvn %0, %0
+ mov %0, %0, lsr #16
+ "
+ : "=r"(sum)
+ : "0" (csum_partial(buff, len, 0)));
+ return(sum);
+}
+
+#endif
--- /dev/null
+#ifndef _ASMARM_CURRENT_H
+#define _ASMARM_CURRENT_H
+
+static inline unsigned long get_sp(void)
+{
+ unsigned long sp;
+ __asm__ ("mov %0,sp" : "=r" (sp));
+ return sp;
+}
+
+static inline struct task_struct *get_current(void)
+{
+ struct task_struct *ts;
+ __asm__ __volatile__("
+ bic %0, sp, #0x1f00
+ bic %0, %0, #0x00ff
+ " : "=r" (ts));
+ return ts;
+}
+
+#define current (get_current())
+
+#endif /* _ASMARM_CURRENT_H */
--- /dev/null
+#ifndef __ASM_ARM_DELAY_H
+#define __ASM_ARM_DELAY_H
+
+/*
+ * Copyright (C) 1995 Russell King
+ *
+ * Delay routines, using a pre-computed "loops_per_second" value.
+ */
+
+extern void __delay(int loops);
+
+/*
+ * division by multiplication: you don't have to worry about
+ * loss of precision.
+ *
+ * Use only for very small delays ( < 1 msec). Should probably use a
+ * lookup table, really, as the multiplications take much too long with
+ * short delays. This is a "reasonable" implementation, though (and the
+ * first constant multiplication gets optimized away if the delay is
+ * a constant)
+ */
+extern void udelay(unsigned long usecs);
+
+extern __inline__ unsigned long muldiv(unsigned long a, unsigned long b, unsigned long c)
+{
+ return a * b / c;
+}
+
+
+
+#endif /* __ASM_ARM_DELAY_H */
+
--- /dev/null
+#ifndef __ASM_ARM_DMA_H
+#define __ASM_ARM_DMA_H
+
+#include <asm/irq.h>
+
+#define MAX_DMA_CHANNELS 14
+#define DMA_0 8
+#define DMA_1 9
+#define DMA_2 10
+#define DMA_3 11
+#define DMA_S0 12
+#define DMA_S1 13
+
+#define DMA_MODE_READ 0x44
+#define DMA_MODE_WRITE 0x48
+
+extern const char dma_str[];
+
+#include <asm/arch/dma.h>
+
+/* These are in kernel/dma.c: */
+/* reserve a DMA channel */
+extern int request_dma(unsigned int dmanr, const char * device_id);
+/* release it again */
+extern void free_dma(unsigned int dmanr);
+
+#endif /* __ASM_ARM_DMA_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/ecard.h
+ *
+ * definitions for expansion cards
+ *
+ * This is a new system as of Linux 1.2.3
+ *
+ * Changelog:
+ * 11-12-1996 RMK Further minor improvements
+ * 12-09-1997 RMK Added interrupt enable/disable for card level
+ *
+ * Reference: Acorn's RISC OS 3 Programmer's Reference Manuals.
+ */
+
+#ifndef __ASM_ECARD_H
+#define __ASM_ECARD_H
+
+/*
+ * Currently understood cards
+ * Manufacturer Product ID
+ */
+#define MANU_ACORN 0x0000
+#define PROD_ACORN_SCSI 0x0002
+#define PROD_ACORN_ETHER1 0x0003
+#define PROD_ACORN_MFM 0x000b
+
+#define MANU_ANT2 0x0011
+#define PROD_ANT_ETHER3 0x00a4
+
+#define MANU_ATOMWIDE 0x0017
+#define PROD_ATOMWIDE_3PSERIAL 0x0090
+
+#define MANU_OAK 0x0021
+#define PROD_OAK_SCSI 0x0058
+
+#define MANU_MORLEY 0x002b
+#define PROD_MORLEY_SCSI_UNCACHED 0x0067
+
+#define MANU_CUMANA 0x003a
+#define PROD_CUMANA_SCSI_1 0x00a0
+#define PROD_CUMANA_SCSI_2 0x003a
+
+#define MANU_ICS 0x003c
+#define PROD_ICS_IDE 0x00ae
+
+#define MANU_SERPORT 0x003f
+#define PROD_SERPORT_DSPORT 0x00b9
+
+#define MANU_I3 0x0046
+#define PROD_I3_ETHERLAN500 0x00d4
+#define PROD_I3_ETHERLAN600 0x00ec
+#define PROD_I3_ETHERLAN600A 0x011e
+
+#define MANU_ANT 0x0053
+#define PROD_ANT_ETHERB 0x00e4
+
+#define MANU_ALSYSTEMS 0x005b
+#define PROD_ALSYS_SCSIATAPI 0x0107
+
+#define MANU_MCS 0x0063
+#define PROD_MCS_CONNECT32 0x0125
+
+
+
+#ifdef ECARD_C
+#define CONST
+#else
+#define CONST const
+#endif
+
+#define MAX_ECARDS 8
+
+/* Type of card's address space */
+typedef enum {
+ ECARD_IOC = 0,
+ ECARD_MEMC = 1
+} card_type_t;
+
+/* Speed of card for ECARD_IOC address space */
+typedef enum {
+ ECARD_SLOW = 0,
+ ECARD_MEDIUM = 1,
+ ECARD_FAST = 2,
+ ECARD_SYNC = 3
+} card_speed_t;
+
+/* Card ID structure */
+typedef struct {
+ unsigned short manufacturer;
+ unsigned short product;
+} card_ids;
+
+/* External view of card ID information */
+struct in_ecld {
+ unsigned short product;
+ unsigned short manufacturer;
+ unsigned char ecld;
+ unsigned char country;
+ unsigned char fiqmask;
+ unsigned char irqmask;
+ unsigned long fiqaddr;
+ unsigned long irqaddr;
+};
+
+typedef struct expansion_card ecard_t;
+
+/* Card handler routines */
+typedef struct {
+ void (*irqenable)(ecard_t *ec, int irqnr);
+ void (*irqdisable)(ecard_t *ec, int irqnr);
+ void (*fiqenable)(ecard_t *ec, int fiqnr);
+ void (*fiqdisable)(ecard_t *ec, int fiqnr);
+} expansioncard_ops_t;
+
+typedef unsigned long *loader_t;
+
+/*
+ * This contains all the info needed on an expansion card
+ */
+struct expansion_card {
+ /* Public data */
+ volatile unsigned char *irqaddr; /* address of IRQ register */
+ volatile unsigned char *fiqaddr; /* address of FIQ register */
+ unsigned char irqmask; /* IRQ mask */
+ unsigned char fiqmask; /* FIQ mask */
+ unsigned char claimed; /* Card claimed? */
+ CONST unsigned char slot_no; /* Slot number */
+ CONST unsigned char irq; /* IRQ number (for request_irq) */
+ CONST unsigned char fiq; /* FIQ number (for request_irq) */
+ CONST unsigned short unused;
+ CONST struct in_ecld cld; /* Card Identification */
+ void *irq_data; /* Data for use for IRQ by card */
+ void *fiq_data; /* Data for use for FIQ by card */
+ expansioncard_ops_t *ops; /* Enable/Disable Ops for card */
+
+ /* Private internal data */
+ CONST unsigned int podaddr; /* Base Linux address for card */
+ CONST loader_t loader; /* loader program */
+};
+
+struct in_chunk_dir {
+ unsigned int start_offset;
+ union {
+ unsigned char string[256];
+ unsigned char data[1];
+ } d;
+};
+
+/*
+ * ecard_claim: claim an expansion card entry
+ */
+#define ecard_claim(ec) ((ec)->claimed = 1)
+
+/*
+ * ecard_release: release an expansion card entry
+ */
+#define ecard_release(ec) ((ec)->claimed = 0)
+
+/*
+ * Start finding cards from the top of the list
+ */
+extern void ecard_startfind (void);
+
+/*
+ * Find an expansion card with the correct cld, product and manufacturer code
+ */
+extern struct expansion_card *ecard_find (int cld, const card_ids *ids);
+
+/*
+ * Read a chunk from an expansion card
+ * cd : where to put read data
+ * ec : expansion card info struct
+ * id : id number to find
+ * num: (n+1)'th id to find.
+ */
+extern int ecard_readchunk (struct in_chunk_dir *cd, struct expansion_card *ec, int id, int num);
+
+/*
+ * Obtain the address of a card
+ */
+extern unsigned int ecard_address (struct expansion_card *ec, card_type_t card_type, card_speed_t speed);
+
+#ifdef ECARD_C
+/* Definitions internal to ecard.c - for its use only!
+ *
+ * External expansion card header as read from the card
+ */
+struct ex_ecld {
+ unsigned char r_ecld;
+ unsigned char r_reserved[2];
+ unsigned char r_product[2];
+ unsigned char r_manufacturer[2];
+ unsigned char r_country;
+ long r_fiqs;
+ long r_irqs;
+#define e_ecld(x) ((x)->r_ecld)
+#define e_cd(x) ((x)->r_reserved[0] & 1)
+#define e_is(x) ((x)->r_reserved[0] & 2)
+#define e_w(x) (((x)->r_reserved[0] & 12)>>2)
+#define e_prod(x) ((x)->r_product[0]|((x)->r_product[1]<<8))
+#define e_manu(x) ((x)->r_manufacturer[0]|((x)->r_manufacturer[1]<<8))
+#define e_country(x) ((x)->r_country)
+#define e_fiqmask(x) ((x)->r_fiqs & 0xff)
+#define e_fiqaddr(x) ((x)->r_fiqs >> 8)
+#define e_irqmask(x) ((x)->r_irqs & 0xff)
+#define e_irqaddr(x) ((x)->r_irqs >> 8)
+};
+
+/*
+ * Chunk directory entry as read from the card
+ */
+struct ex_chunk_dir {
+ unsigned char r_id;
+ unsigned char r_len[3];
+ unsigned long r_start;
+ union {
+ char string[256];
+ char data[1];
+ } d;
+#define c_id(x) ((x)->r_id)
+#define c_len(x) ((x)->r_len[0]|((x)->r_len[1]<<8)|((x)->r_len[2]<<16))
+#define c_start(x) ((x)->r_start)
+};
+
+#endif
+
+#endif
--- /dev/null
+#ifndef __ASMARM_ELF_H
+#define __ASMARM_ELF_H
+
+/*
+ * ELF register definitions..
+ */
+
+#include <asm/ptrace.h>
+
+typedef unsigned long elf_greg_t;
+
+#define EM_ARM 40
+
+#define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t))
+typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+
+typedef struct { void *null; } elf_fpregset_t;
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ( ((x) == EM_ARM) )
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_CLASS ELFCLASS32
+#define ELF_DATA ELFDATA2LSB
+#define ELF_ARCH EM_ARM
+
+#define USE_ELF_CORE_DUMP
+#define ELF_EXEC_PAGESIZE 32768
+
+/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
+ use of this is to invoke "./ld.so someprog" to test out a new version of
+ the loader. We need to make sure that it is out of the way of the program
+ that it will "exec", and that there is sufficient room for the brk. */
+
+#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
+
+#define R_ARM_NONE (0)
+#define R_ARM_32 (1) /* => ld 32 */
+#define R_ARM_PC26 (2) /* => ld b/bl branches */
+#define R_ARM_PC32 (3)
+#define R_ARM_GOT32 (4) /* -> object relocation into GOT */
+#define R_ARM_PLT32 (5)
+#define R_ARM_COPY (6) /* => dlink copy object */
+#define R_ARM_GLOB_DAT (7) /* => dlink 32bit absolute address for .got */
+#define R_ARM_JUMP_SLOT (8) /* => dlink 32bit absolute address for .got.plt */
+#define R_ARM_RELATIVE (9) /* => ld resolved 32bit absolute address requiring load address adjustment */
+#define R_ARM_GOTOFF (10) /* => ld calculates offset of data from base of GOT */
+#define R_ARM_GOTPC (11) /* => ld 32-bit relative offset */
+
+#endif
--- /dev/null
+#ifndef _ARM_ERRNO_H
+#define _ARM_ERRNO_H
+
+#define EPERM 1 /* Operation not permitted */
+#define ENOENT 2 /* No such file or directory */
+#define ESRCH 3 /* No such process */
+#define EINTR 4 /* Interrupted system call */
+#define EIO 5 /* I/O error */
+#define ENXIO 6 /* No such device or address */
+#define E2BIG 7 /* Arg list too long */
+#define ENOEXEC 8 /* Exec format error */
+#define EBADF 9 /* Bad file number */
+#define ECHILD 10 /* No child processes */
+#define EAGAIN 11 /* Try again */
+#define ENOMEM 12 /* Out of memory */
+#define EACCES 13 /* Permission denied */
+#define EFAULT 14 /* Bad address */
+#define ENOTBLK 15 /* Block device required */
+#define EBUSY 16 /* Device or resource busy */
+#define EEXIST 17 /* File exists */
+#define EXDEV 18 /* Cross-device link */
+#define ENODEV 19 /* No such device */
+#define ENOTDIR 20 /* Not a directory */
+#define EISDIR 21 /* Is a directory */
+#define EINVAL 22 /* Invalid argument */
+#define ENFILE 23 /* File table overflow */
+#define EMFILE 24 /* Too many open files */
+#define ENOTTY 25 /* Not a typewriter */
+#define ETXTBSY 26 /* Text file busy */
+#define EFBIG 27 /* File too large */
+#define ENOSPC 28 /* No space left on device */
+#define ESPIPE 29 /* Illegal seek */
+#define EROFS 30 /* Read-only file system */
+#define EMLINK 31 /* Too many links */
+#define EPIPE 32 /* Broken pipe */
+#define EDOM 33 /* Math argument out of domain of func */
+#define ERANGE 34 /* Math result not representable */
+#define EDEADLK 35 /* Resource deadlock would occur */
+#define ENAMETOOLONG 36 /* File name too long */
+#define ENOLCK 37 /* No record locks available */
+#define ENOSYS 38 /* Function not implemented */
+#define ENOTEMPTY 39 /* Directory not empty */
+#define ELOOP 40 /* Too many symbolic links encountered */
+#define EWOULDBLOCK EAGAIN /* Operation would block */
+#define ENOMSG 42 /* No message of desired type */
+#define EIDRM 43 /* Identifier removed */
+#define ECHRNG 44 /* Channel number out of range */
+#define EL2NSYNC 45 /* Level 2 not synchronized */
+#define EL3HLT 46 /* Level 3 halted */
+#define EL3RST 47 /* Level 3 reset */
+#define ELNRNG 48 /* Link number out of range */
+#define EUNATCH 49 /* Protocol driver not attached */
+#define ENOCSI 50 /* No CSI structure available */
+#define EL2HLT 51 /* Level 2 halted */
+#define EBADE 52 /* Invalid exchange */
+#define EBADR 53 /* Invalid request descriptor */
+#define EXFULL 54 /* Exchange full */
+#define ENOANO 55 /* No anode */
+#define EBADRQC 56 /* Invalid request code */
+#define EBADSLT 57 /* Invalid slot */
+
+#define EDEADLOCK EDEADLK
+
+#define EBFONT 59 /* Bad font file format */
+#define ENOSTR 60 /* Device not a stream */
+#define ENODATA 61 /* No data available */
+#define ETIME 62 /* Timer expired */
+#define ENOSR 63 /* Out of streams resources */
+#define ENONET 64 /* Machine is not on the network */
+#define ENOPKG 65 /* Package not installed */
+#define EREMOTE 66 /* Object is remote */
+#define ENOLINK 67 /* Link has been severed */
+#define EADV 68 /* Advertise error */
+#define ESRMNT 69 /* Srmount error */
+#define ECOMM 70 /* Communication error on send */
+#define EPROTO 71 /* Protocol error */
+#define EMULTIHOP 72 /* Multihop attempted */
+#define EDOTDOT 73 /* RFS specific error */
+#define EBADMSG 74 /* Not a data message */
+#define EOVERFLOW 75 /* Value too large for defined data type */
+#define ENOTUNIQ 76 /* Name not unique on network */
+#define EBADFD 77 /* File descriptor in bad state */
+#define EREMCHG 78 /* Remote address changed */
+#define ELIBACC 79 /* Can not access a needed shared library */
+#define ELIBBAD 80 /* Accessing a corrupted shared library */
+#define ELIBSCN 81 /* .lib section in a.out corrupted */
+#define ELIBMAX 82 /* Attempting to link in too many shared libraries */
+#define ELIBEXEC 83 /* Cannot exec a shared library directly */
+#define EILSEQ 84 /* Illegal byte sequence */
+#define ERESTART 85 /* Interrupted system call should be restarted */
+#define ESTRPIPE 86 /* Streams pipe error */
+#define EUSERS 87 /* Too many users */
+#define ENOTSOCK 88 /* Socket operation on non-socket */
+#define EDESTADDRREQ 89 /* Destination address required */
+#define EMSGSIZE 90 /* Message too long */
+#define EPROTOTYPE 91 /* Protocol wrong type for socket */
+#define ENOPROTOOPT 92 /* Protocol not available */
+#define EPROTONOSUPPORT 93 /* Protocol not supported */
+#define ESOCKTNOSUPPORT 94 /* Socket type not supported */
+#define EOPNOTSUPP 95 /* Operation not supported on transport endpoint */
+#define EPFNOSUPPORT 96 /* Protocol family not supported */
+#define EAFNOSUPPORT 97 /* Address family not supported by protocol */
+#define EADDRINUSE 98 /* Address already in use */
+#define EADDRNOTAVAIL 99 /* Cannot assign requested address */
+#define ENETDOWN 100 /* Network is down */
+#define ENETUNREACH 101 /* Network is unreachable */
+#define ENETRESET 102 /* Network dropped connection because of reset */
+#define ECONNABORTED 103 /* Software caused connection abort */
+#define ECONNRESET 104 /* Connection reset by peer */
+#define ENOBUFS 105 /* No buffer space available */
+#define EISCONN 106 /* Transport endpoint is already connected */
+#define ENOTCONN 107 /* Transport endpoint is not connected */
+#define ESHUTDOWN 108 /* Cannot send after transport endpoint shutdown */
+#define ETOOMANYREFS 109 /* Too many references: cannot splice */
+#define ETIMEDOUT 110 /* Connection timed out */
+#define ECONNREFUSED 111 /* Connection refused */
+#define EHOSTDOWN 112 /* Host is down */
+#define EHOSTUNREACH 113 /* No route to host */
+#define EALREADY 114 /* Operation already in progress */
+#define EINPROGRESS 115 /* Operation now in progress */
+#define ESTALE 116 /* Stale NFS file handle */
+#define EUCLEAN 117 /* Structure needs cleaning */
+#define ENOTNAM 118 /* Not a XENIX named type file */
+#define ENAVAIL 119 /* No XENIX semaphores available */
+#define EISNAM 120 /* Is a named type file */
+#define EREMOTEIO 121 /* Remote I/O error */
+#define EDQUOT 122 /* Quota exceeded */
+
+#define ENOMEDIUM 123 /* No medium found */
+#define EMEDIUMTYPE 124 /* Wrong medium type */
+
+#endif
--- /dev/null
+#ifndef _ARM_FCNTL_H
+#define _ARM_FCNTL_H
+
+/* open/fcntl - O_SYNC is only implemented on block devices and on files
+ located on an ext2 file system */
+#define O_ACCMODE 0003
+#define O_RDONLY 00
+#define O_WRONLY 01
+#define O_RDWR 02
+#define O_CREAT 0100 /* not fcntl */
+#define O_EXCL 0200 /* not fcntl */
+#define O_NOCTTY 0400 /* not fcntl */
+#define O_TRUNC 01000 /* not fcntl */
+#define O_APPEND 02000
+#define O_NONBLOCK 04000
+#define O_NDELAY O_NONBLOCK
+#define O_SYNC 010000
+#define FASYNC 020000 /* fcntl, for BSD compatibility */
+
+#define F_DUPFD 0 /* dup */
+#define F_GETFD 1 /* get f_flags */
+#define F_SETFD 2 /* set f_flags */
+#define F_GETFL 3 /* more flags (cloexec) */
+#define F_SETFL 4
+#define F_GETLK 5
+#define F_SETLK 6
+#define F_SETLKW 7
+
+#define F_SETOWN 8 /* for sockets. */
+#define F_GETOWN 9 /* for sockets. */
+
+/* for F_[GET|SET]FL */
+#define FD_CLOEXEC 1 /* actually anything with low bit set goes */
+
+/* for posix fcntl() and lockf() */
+#define F_RDLCK 0
+#define F_WRLCK 1
+#define F_UNLCK 2
+
+/* for old implementation of bsd flock () */
+#define F_EXLCK 4 /* or 3 */
+#define F_SHLCK 8 /* or 4 */
+
+/* operations for bsd flock(), also used by the kernel implementation */
+#define LOCK_SH 1 /* shared lock */
+#define LOCK_EX 2 /* exclusive lock */
+#define LOCK_NB 4 /* or'd with one of the above to prevent
+ blocking */
+#define LOCK_UN 8 /* remove lock */
+
+struct flock {
+ short l_type;
+ short l_whence;
+ off_t l_start;
+ off_t l_len;
+ pid_t l_pid;
+};
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/floppy.h
+ *
+ * (C) 1996 Russell King
+ */
+#ifndef __ASM_ARM_FLOPPY_H
+#define __ASM_ARM_FLOPPY_H
+#if 0
+#include <asm/arch/floppy.h>
+#endif
+
+#define fd_outb(val,port) outb((val),(port))
+#define fd_inb(port) inb((port))
+#define fd_request_irq() request_irq(IRQ_FLOPPYDISK,floppy_interrupt,\
+ SA_INTERRUPT|SA_SAMPLE_RANDOM,"floppy",NULL)
+#define fd_free_irq() free_irq(IRQ_FLOPPYDISK,NULL)
+#define fd_disable_irq() disable_irq(IRQ_FLOPPYDISK)
+#define fd_enable_irq() enable_irq(IRQ_FLOPPYDISK)
+
+#define fd_request_dma() request_dma(FLOPPY_DMA,"floppy")
+#define fd_free_dma() free_dma(FLOPPY_DMA)
+#define fd_disable_dma() disable_dma(FLOPPY_DMA)
+#define fd_enable_dma() enable_dma(FLOPPY_DMA)
+#define fd_clear_dma_ff() clear_dma_ff(FLOPPY_DMA)
+#define fd_set_dma_mode(mode) set_dma_mode(FLOPPY_DMA, (mode))
+#define fd_set_dma_addr(addr) set_dma_addr(FLOPPY_DMA, virt_to_bus((addr)))
+#define fd_set_dma_count(len) set_dma_count(FLOPPY_DMA, (len))
+#define fd_cacheflush(addr,sz)
+
+/* Floppy_selects is the list of DOR's to select drive fd
+ *
+ * On initialisation, the floppy list is scanned, and the drives allocated
+ * in the order that they are found. This is done by seeking the drive
+ * to a non-zero track, and then restoring it to track 0. If an error occurs,
+ * then there is no floppy drive present. [to be put back in again]
+ */
+static unsigned char floppy_selects[2][4] =
+{
+ { 0x10, 0x21, 0x23, 0x33 },
+ { 0x10, 0x21, 0x23, 0x33 }
+};
+
+#define fd_setdor(dor) \
+do { \
+ int new_dor = (dor); \
+ if (new_dor & 0xf0) \
+ fd_outb((new_dor & 0x0c) | floppy_selects[fdc][new_dor & 3], FD_DOR); \
+ else \
+ fd_outb((new_dor & 0x0c), FD_DOR); \
+} while (0)
+
+/*
+ * Someday, we'll automatically detect which drives are present...
+ */
+extern __inline__ void fd_scandrives (void)
+{
+#if 0
+ int floppy, drive_count;
+
+ fd_disable_irq();
+ raw_cmd = &default_raw_cmd;
+ raw_cmd->flags = FD_RAW_SPIN | FD_RAW_NEED_SEEK;
+ raw_cmd->track = 0;
+ raw_cmd->rate = ?;
+ drive_count = 0;
+ for (floppy = 0; floppy < 4; floppy ++) {
+ current_drive = drive_count;
+ /*
+ * Turn on floppy motor
+ */
+ if (start_motor(redo_fd_request))
+ continue;
+ /*
+ * Set up FDC
+ */
+ fdc_specify();
+ /*
+ * Tell FDC to recalibrate
+ */
+ output_byte(FD_RECALIBRATE);
+ LAST_OUT(UNIT(floppy));
+ /* wait for command to complete */
+ if (!successful) {
+ int i;
+ for (i = drive_count; i < 3; i++)
+ floppy_selects[fdc][i] = floppy_selects[fdc][i + 1];
+ floppy_selects[fdc][3] = 0;
+ floppy -= 1;
+ } else
+ drive_count++;
+ }
+#else
+ floppy_selects[0][0] = 0x10;
+ floppy_selects[0][1] = 0x21;
+ floppy_selects[0][2] = 0x23;
+ floppy_selects[0][3] = 0x33;
+#endif
+}
+
+#define FDC1 (0x3f0)
+static int FDC2 = -1;
+
+#define FLOPPY0_TYPE 4
+#define FLOPPY1_TYPE 4
+
+#define N_FDC 1
+#define N_DRIVE 8
+
+#define FLOPPY_MOTOR_MASK 0xf0
+
+#define CROSS_64KB(a,s) (0)
+
+#endif
--- /dev/null
+#ifndef __ASM_HARDIRQ_H
+#define __ASM_HARDIRQ_H
+
+#include <linux/tasks.h>
+
+extern unsigned int local_irq_count[NR_CPUS];
+#define in_interrupt() (local_irq_count[smp_processor_id()] != 0)
+
+#ifndef __SMP__
+
+#define hardirq_trylock(cpu) (local_irq_count[cpu] == 0)
+#define hardirq_endlock(cpu) do { } while (0)
+
+#define hardirq_enter(cpu) (local_irq_count[cpu]++)
+#define hardirq_exit(cpu) (local_irq_count[cpu]--)
+
+#define synchronize_irq() do { } while (0)
+
+#else
+#error SMP not supported
+#endif /* __SMP__ */
+
+#endif /* __ASM_HARDIRQ_H */
--- /dev/null
+/*
+ * linux/include/asm-arm/hardware.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * Common hardware definitions
+ */
+
+#ifndef __ASM_HARDWARE_H
+#define __ASM_HARDWARE_H
+
+#include <asm/arch/hardware.h>
+
+/*
+ * Use these macros to read/write the IOC. All it does is perform the actual
+ * read/write.
+ */
+#ifdef HAS_IOC
+#ifndef __ASSEMBLER__
+#define __IOC(offset) (IOC_BASE + (offset >> 2))
+#else
+#define __IOC(offset) offset
+#endif
+
+#define IOC_CONTROL __IOC(0x00)
+#define IOC_KARTTX __IOC(0x04)
+#define IOC_KARTRX __IOC(0x04)
+
+#define IOC_IRQSTATA __IOC(0x10)
+#define IOC_IRQREQA __IOC(0x14)
+#define IOC_IRQCLRA __IOC(0x14)
+#define IOC_IRQMASKA __IOC(0x18)
+
+#define IOC_IRQSTATB __IOC(0x20)
+#define IOC_IRQREQB __IOC(0x24)
+#define IOC_IRQMASKB __IOC(0x28)
+
+#define IOC_FIQSTAT __IOC(0x30)
+#define IOC_FIQREQ __IOC(0x34)
+#define IOC_FIQMASK __IOC(0x38)
+
+#define IOC_T0CNTL __IOC(0x40)
+#define IOC_T0LTCHL __IOC(0x40)
+#define IOC_T0CNTH __IOC(0x44)
+#define IOC_T0LTCHH __IOC(0x44)
+#define IOC_T0GO __IOC(0x48)
+#define IOC_T0LATCH __IOC(0x4c)
+
+#define IOC_T1CNTL __IOC(0x50)
+#define IOC_T1LTCHL __IOC(0x50)
+#define IOC_T1CNTH __IOC(0x54)
+#define IOC_T1LTCHH __IOC(0x54)
+#define IOC_T1GO __IOC(0x58)
+#define IOC_T1LATCH __IOC(0x5c)
+
+#define IOC_T2CNTL __IOC(0x60)
+#define IOC_T2LTCHL __IOC(0x60)
+#define IOC_T2CNTH __IOC(0x64)
+#define IOC_T2LTCHH __IOC(0x64)
+#define IOC_T2GO __IOC(0x68)
+#define IOC_T2LATCH __IOC(0x6c)
+
+#define IOC_T3CNTL __IOC(0x70)
+#define IOC_T3LTCHL __IOC(0x70)
+#define IOC_T3CNTH __IOC(0x74)
+#define IOC_T3LTCHH __IOC(0x74)
+#define IOC_T3GO __IOC(0x78)
+#define IOC_T3LATCH __IOC(0x7c)
+#endif
+
+#ifdef HAS_MEMC
+#define VDMA_ALIGNMENT PAGE_SIZE
+#define VDMA_XFERSIZE 16
+#define VDMA_INIT 0
+#define VDMA_START 1
+#define VDMA_END 2
+
+#define video_set_dma(start,end,offset) \
+do { \
+ memc_write (VDMA_START, (start >> 2)); \
+ memc_write (VDMA_END, (end - VDMA_XFERSIZE) >> 2); \
+ memc_write (VDMA_INIT, (offset >> 2)); \
+} while (0)
+#endif
+
+#ifdef HAS_IOMD
+#ifndef __ASSEMBLER__
+#define __IOMD(offset) (IOMD_BASE + (offset >> 2))
+#else
+#define __IOMD(offset) offset
+#endif
+
+#define IOMD_CONTROL __IOMD(0x000)
+#define IOMD_KARTTX __IOMD(0x004)
+#define IOMD_KARTRX __IOMD(0x004)
+#define IOMD_KCTRL __IOMD(0x008)
+
+#define IOMD_IRQSTATA __IOMD(0x010)
+#define IOMD_IRQREQA __IOMD(0x014)
+#define IOMD_IRQCLRA __IOMD(0x014)
+#define IOMD_IRQMASKA __IOMD(0x018)
+
+#define IOMD_IRQSTATB __IOMD(0x020)
+#define IOMD_IRQREQB __IOMD(0x024)
+#define IOMD_IRQMASKB __IOMD(0x028)
+
+#define IOMD_FIQSTAT __IOMD(0x030)
+#define IOMD_FIQREQ __IOMD(0x034)
+#define IOMD_FIQMASK __IOMD(0x038)
+
+#define IOMD_T0CNTL __IOMD(0x040)
+#define IOMD_T0LTCHL __IOMD(0x040)
+#define IOMD_T0CNTH __IOMD(0x044)
+#define IOMD_T0LTCHH __IOMD(0x044)
+#define IOMD_T0GO __IOMD(0x048)
+#define IOMD_T0LATCH __IOMD(0x04c)
+
+#define IOMD_T1CNTL __IOMD(0x050)
+#define IOMD_T1LTCHL __IOMD(0x050)
+#define IOMD_T1CNTH __IOMD(0x054)
+#define IOMD_T1LTCHH __IOMD(0x054)
+#define IOMD_T1GO __IOMD(0x058)
+#define IOMD_T1LATCH __IOMD(0x05c)
+
+#define IOMD_ROMCR0 __IOMD(0x080)
+#define IOMD_ROMCR1 __IOMD(0x084)
+#define IOMD_DRAMCR __IOMD(0x088)
+#define IOMD_VREFCR __IOMD(0x08C)
+
+#define IOMD_FSIZE __IOMD(0x090)
+#define IOMD_ID0 __IOMD(0x094)
+#define IOMD_ID1 __IOMD(0x098)
+#define IOMD_VERSION __IOMD(0x09C)
+
+#define IOMD_MOUSEX __IOMD(0x0A0)
+#define IOMD_MOUSEY __IOMD(0x0A4)
+
+#define IOMD_DMATCR __IOMD(0x0C0)
+#define IOMD_IOTCR __IOMD(0x0C4)
+#define IOMD_ECTCR __IOMD(0x0C8)
+#define IOMD_DMAEXT __IOMD(0x0CC)
+
+#define IOMD_IO0CURA __IOMD(0x100)
+#define IOMD_IO0ENDA __IOMD(0x104)
+#define IOMD_IO0CURB __IOMD(0x108)
+#define IOMD_IO0ENDB __IOMD(0x10C)
+#define IOMD_IO0CR __IOMD(0x110)
+#define IOMD_IO0ST __IOMD(0x114)
+
+#define IOMD_IO1CURA __IOMD(0x120)
+#define IOMD_IO1ENDA __IOMD(0x124)
+#define IOMD_IO1CURB __IOMD(0x128)
+#define IOMD_IO1ENDB __IOMD(0x12C)
+#define IOMD_IO1CR __IOMD(0x130)
+#define IOMD_IO1ST __IOMD(0x134)
+
+#define IOMD_IO2CURA __IOMD(0x140)
+#define IOMD_IO2ENDA __IOMD(0x144)
+#define IOMD_IO2CURB __IOMD(0x148)
+#define IOMD_IO2ENDB __IOMD(0x14C)
+#define IOMD_IO2CR __IOMD(0x150)
+#define IOMD_IO2ST __IOMD(0x154)
+
+#define IOMD_IO3CURA __IOMD(0x160)
+#define IOMD_IO3ENDA __IOMD(0x164)
+#define IOMD_IO3CURB __IOMD(0x168)
+#define IOMD_IO3ENDB __IOMD(0x16C)
+#define IOMD_IO3CR __IOMD(0x170)
+#define IOMD_IO3ST __IOMD(0x174)
+
+#define IOMD_SD0CURA __IOMD(0x180)
+#define IOMD_SD0ENDA __IOMD(0x184)
+#define IOMD_SD0CURB __IOMD(0x188)
+#define IOMD_SD0ENDB __IOMD(0x18C)
+#define IOMD_SD0CR __IOMD(0x190)
+#define IOMD_SD0ST __IOMD(0x194)
+
+#define IOMD_SD1CURA __IOMD(0x1A0)
+#define IOMD_SD1ENDA __IOMD(0x1A4)
+#define IOMD_SD1CURB __IOMD(0x1A8)
+#define IOMD_SD1ENDB __IOMD(0x1AC)
+#define IOMD_SD1CR __IOMD(0x1B0)
+#define IOMD_SD1ST __IOMD(0x1B4)
+
+#define IOMD_CURSCUR __IOMD(0x1C0)
+#define IOMD_CURSINIT __IOMD(0x1C4)
+
+#define IOMD_VIDCUR __IOMD(0x1D0)
+#define IOMD_VIDEND __IOMD(0x1D4)
+#define IOMD_VIDSTART __IOMD(0x1D8)
+#define IOMD_VIDINIT __IOMD(0x1DC)
+#define IOMD_VIDCR __IOMD(0x1E0)
+
+#define IOMD_DMASTAT __IOMD(0x1F0)
+#define IOMD_DMAREQ __IOMD(0x1F4)
+#define IOMD_DMAMASK __IOMD(0x1F8)
+
+#define DMA_CR_C 0x80
+#define DMA_CR_D 0x40
+#define DMA_CR_E 0x20
+
+#define DMA_ST_OFL 4
+#define DMA_ST_INT 2
+#define DMA_ST_AB 1
+/*
+ * IOC compatibility
+ */
+#define IOC_CONTROL IOMD_CONTROL
+#define IOC_IRQSTATA IOMD_IRQSTATA
+#define IOC_IRQREQA IOMD_IRQREQA
+#define IOC_IRQCLRA IOMD_IRQCLRA
+#define IOC_IRQMASKA IOMD_IRQMASKA
+
+#define IOC_IRQSTATB IOMD_IRQSTATB
+#define IOC_IRQREQB IOMD_IRQREQB
+#define IOC_IRQMASKB IOMD_IRQMASKB
+
+#define IOC_FIQSTAT IOMD_FIQSTAT
+#define IOC_FIQREQ IOMD_FIQREQ
+#define IOC_FIQMASK IOMD_FIQMASK
+
+#define IOC_T0CNTL IOMD_T0CNTL
+#define IOC_T0LTCHL IOMD_T0LTCHL
+#define IOC_T0CNTH IOMD_T0CNTH
+#define IOC_T0LTCHH IOMD_T0LTCHH
+#define IOC_T0GO IOMD_T0GO
+#define IOC_T0LATCH IOMD_T0LATCH
+
+#define IOC_T1CNTL IOMD_T1CNTL
+#define IOC_T1LTCHL IOMD_T1LTCHL
+#define IOC_T1CNTH IOMD_T1CNTH
+#define IOC_T1LTCHH IOMD_T1LTCHH
+#define IOC_T1GO IOMD_T1GO
+#define IOC_T1LATCH IOMD_T1LATCH
+
+/*
+ * DMA (MEMC) compatibility
+ */
+#define HALF_SAM vram_half_sam
+#define VDMA_ALIGNMENT (HALF_SAM * 2)
+#define VDMA_XFERSIZE (HALF_SAM)
+#define VDMA_INIT IOMD_VIDINIT
+#define VDMA_START IOMD_VIDSTART
+#define VDMA_END IOMD_VIDEND
+
+#ifndef __ASSEMBLER__
+extern unsigned int vram_half_sam;
+#define video_set_dma(start,end,offset) \
+do { \
+ outl (SCREEN_START + start, VDMA_START); \
+ outl (SCREEN_START + end - VDMA_XFERSIZE, VDMA_END); \
+ if (offset >= end - VDMA_XFERSIZE) \
+ offset |= 0x40000000; \
+ outl (SCREEN_START + offset, VDMA_INIT); \
+} while (0)
+#endif
+#endif
+
+#ifdef HAS_EXPMASK
+#ifndef __ASSEMBLER__
+#define __EXPMASK(offset) (((volatile unsigned char *)EXPMASK_BASE)[offset])
+#else
+#define __EXPMASK(offset) offset
+#endif
+
+#define EXPMASK_STATUS __EXPMASK(0x00)
+#define EXPMASK_ENABLE __EXPMASK(0x04)
+
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/ide.h
+ *
+ * Copyright (C) 1994-1996 Linus Torvalds & authors
+ */
+
+/*
+ * This file contains the ARM architecture specific IDE code.
+ */
+
+#ifndef __ASMARM_IDE_H
+#define __ASMARM_IDE_H
+
+#ifdef __KERNEL__
+
+typedef unsigned long ide_ioreg_t;
+
+#ifndef MAX_HWIFS
+#define MAX_HWIFS 4
+#endif
+
+#define ide_sti() sti()
+
+#include <asm/arch/ide.h>
+
+typedef union {
+ unsigned all : 8; /* all of the bits together */
+ struct {
+ unsigned head : 4; /* always zeros here */
+ unsigned unit : 1; /* drive select number, 0 or 1 */
+ unsigned bit5 : 1; /* always 1 */
+ unsigned lba : 1; /* using LBA instead of CHS */
+ unsigned bit7 : 1; /* always 1 */
+ } b;
+ } select_t;
+
+static __inline__ int ide_request_irq(unsigned int irq, void (*handler)(int, void *, struct pt_regs *),
+ unsigned long flags, const char *device, void *dev_id)
+{
+ return request_irq(irq, handler, flags, device, dev_id);
+}
+
+static __inline__ void ide_free_irq(unsigned int irq, void *dev_id)
+{
+ free_irq(irq, dev_id);
+}
+
+static __inline__ int ide_check_region (ide_ioreg_t from, unsigned int extent)
+{
+ return check_region(from, extent);
+}
+
+static __inline__ void ide_request_region (ide_ioreg_t from, unsigned int extent, const char *name)
+{
+ request_region(from, extent, name);
+}
+
+static __inline__ void ide_release_region (ide_ioreg_t from, unsigned int extent)
+{
+ release_region(from, extent);
+}
+
+/*
+ * The following are not needed for the non-m68k ports
+ */
+static __inline__ int ide_ack_intr (ide_ioreg_t status_port, ide_ioreg_t irq_port)
+{
+ return(1);
+}
+
+static __inline__ void ide_fix_driveid(struct hd_driveid *id)
+{
+}
+
+static __inline__ void ide_release_lock (int *ide_lock)
+{
+}
+
+static __inline__ void ide_get_lock (int *ide_lock, void (*handler)(int, void *, struct pt_regs *), void *data)
+{
+}
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASMARM_IDE_H */
--- /dev/null
+#ifndef _ASMARM_INIT_H
+#define _ASMARM_INIT_H
+
+#if 0
+#define __init __attribute__ ((__section__ (".text.init")))
+#define __initdata __attribute__ ((__section__ (".data.init")))
+#define __initfunc(__arginit) \
+ __arginit __init; \
+ __arginit
+/* For assembly routines */
+#define __INIT .section ".text.init",@alloc,@execinstr
+#define __FINIT .previous
+#define __INITDATA .section ".data.init",@alloc,@write
+#else
+#define __init
+#define __initdata
+#define __initfunc(__arginit) __arginit
+/* For assembly routines */
+#define __INIT
+#define __FINIT
+#define __INITDATA
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/io.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * Modifications:
+ * 16-Sep-1996 RMK Inlined the inx/outx functions & optimised for both
+ * constant addresses and variable addresses.
+ * 04-Dec-1997 RMK Moved a lot of this stuff to the new architecture
+ * specific IO header files.
+ */
+#ifndef __ASM_ARM_IO_H
+#define __ASM_ARM_IO_H
+
+#include <asm/hardware.h>
+#include <asm/arch/mmu.h>
+#include <asm/arch/io.h>
+
+/* unsigned long virt_to_phys(void *x) */
+#define virt_to_phys(x) __virt_to_phys((unsigned long)(x))
+
+/* void *phys_to_virt(unsigned long x) */
+#define phys_to_virt(x) ((void *)(__phys_to_virt((unsigned long)(x))))
+
+/*
+ * These macros actually build the multi-value IO function prototypes
+ */
+#define __OUTS(s,i,x) extern void outs##s(unsigned int port, const void *from, int len);
+#define __INS(s,i,x) extern void ins##s(unsigned int port, void *to, int len);
+
+#define __IO(s,i,x) \
+ __OUTS(s,i,x) \
+ __INS(s,i,x)
+
+__IO(b,"b",char)
+__IO(w,"h",short)
+__IO(l,"",long)
+
+/*
+ * Note that due to the way __builtin_constant_p() works, you
+ * - can't use it inside an inline function (it will never be true)
+ * - don't have to worry about side effects within the __builtin..
+ */
+#ifdef __outbc
+#define outb(val,port) \
+ (__builtin_constant_p((port)) ? __outbc((val),(port)) : __outb((val),(port)))
+#else
+#define outb(val,port) __outb((val),(port))
+#endif
+
+#ifdef __outwc
+#define outw(val,port) \
+ (__builtin_constant_p((port)) ? __outwc((val),(port)) : __outw((val),(port)))
+#else
+#define outw(val,port) __outw((val),(port))
+#endif
+
+#ifdef __outlc
+#define outl(val,port) \
+ (__builtin_constant_p((port)) ? __outlc((val),(port)) : __outl((val),(port)))
+#else
+#define outl(val,port) __outl((val),(port))
+#endif
+
+#ifdef __inbc
+#define inb(port) \
+ (__builtin_constant_p((port)) ? __inbc((port)) : __inb((port)))
+#else
+#define inb(port) __inb((port))
+#endif
+
+#ifdef __inwc
+#define inw(port) \
+ (__builtin_constant_p((port)) ? __inwc((port)) : __inw((port)))
+#else
+#define inw(port) __inw((port))
+#endif
+
+#ifdef __inlc
+#define inl(port) \
+ (__builtin_constant_p((port)) ? __inlc((port)) : __inl((port)))
+#else
+#define inl(port) __inl((port))
+#endif
+
+/*
+ * This macro will give you the translated IO address for this particular
+ * architecture, which can be used with the out_t... functions.
+ */
+#define ioaddr(port) \
+ (__builtin_constant_p((port)) ? __ioaddrc((port)) : __ioaddr((port)))
+
+#ifndef ARCH_IO_DELAY
+/*
+ * This architecture does not require any delayed IO.
+ * It is handled in the hardware.
+ */
+#define outb_p(val,port) outb((val),(port))
+#define outw_p(val,port) outw((val),(port))
+#define outl_p(val,port) outl((val),(port))
+#define inb_p(port) inb((port))
+#define inw_p(port) inw((port))
+#define inl_p(port) inl((port))
+#define outsb_p(port,from,len) outsb(port,from,len)
+#define outsw_p(port,from,len) outsw(port,from,len)
+#define outsl_p(port,from,len) outsl(port,from,len)
+#define insb_p(port,to,len) insb(port,to,len)
+#define insw_p(port,to,len) insw(port,to,len)
+#define insl_p(port,to,len) insl(port,to,len)
+
+#else
+
+/*
+ * We have to delay the IO...
+ */
+#ifdef __outbc_p
+#define outb_p(val,port) \
+ (__builtin_constant_p((port)) ? __outbc_p((val),(port)) : __outb_p((val),(port)))
+#else
+#define outb_p(val,port) __outb_p((val),(port))
+#endif
+
+#ifdef __outwc_p
+#define outw_p(val,port) \
+ (__builtin_constant_p((port)) ? __outwc_p((val),(port)) : __outw_p((val),(port)))
+#else
+#define outw_p(val,port) __outw_p((val),(port))
+#endif
+
+#ifdef __outlc_p
+#define outl_p(val,port) \
+ (__builtin_constant_p((port)) ? __outlc_p((val),(port)) : __outl_p((val),(port)))
+#else
+#define outl_p(val,port) __outl_p((val),(port))
+#endif
+
+#ifdef __inbc_p
+#define inb_p(port) \
+ (__builtin_constant_p((port)) ? __inbc_p((port)) : __inb_p((port)))
+#else
+#define inb_p(port) __inb_p((port))
+#endif
+
+#ifdef __inwc_p
+#define inw_p(port) \
+ (__builtin_constant_p((port)) ? __inwc_p((port)) : __inw_p((port)))
+#else
+#define inw_p(port) __inw_p((port))
+#endif
+
+#ifdef __inlc_p
+#define inl_p(port) \
+ (__builtin_constant_p((port)) ? __inlc_p((port)) : __inl_p((port)))
+#else
+#define inl_p(port) __inl_p((port))
+#endif
+
+#endif
+
+#undef ARCH_IO_DELAY
+#undef ARCH_IO_CONSTANT
+
+/*
+ * Leftovers...
+ */
+#if 0
+#define __outwc(value,port) \
+({ \
+ if (port < 256) \
+ __asm__ __volatile__( \
+ "strh %0, [%1, %2]" \
+ : : "r" (value), "r" (PCIO_BASE), "J" (port << 2)); \
+ else if (__PORT_PCIO(port)) \
+ __asm__ __volatile__( \
+ "strh %0, [%1, %2]" \
+ : : "r" (value), "r" (PCIO_BASE), "r" (port << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "strh %0, [%1, %2]" \
+ : : "r" (value), "r" (IO_BASE), "r" (port << 2)); \
+})
+
+#define __inwc(port) \
+({ \
+ unsigned short result; \
+ if (port < 256) \
+ __asm__ __volatile__( \
+ "ldrh %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "J" (port << 2)); \
+ else \
+ if (__PORT_PCIO(port)) \
+ __asm__ __volatile__( \
+ "ldrh %0, [%1, %2]" \
+ : "=r" (result) : "r" (PCIO_BASE), "r" (port << 2)); \
+ else \
+ __asm__ __volatile__( \
+ "ldrh %0, [%1, %2]" \
+ : "=r" (result) : "r" (IO_BASE), "r" (port << 2)); \
+ result; \
+})
+#endif
+#endif
+
--- /dev/null
+/* $Id: ioctl.h,v 1.5 1993/07/19 21:53:50 root Exp root $
+ *
+ * linux/ioctl.h for Linux by H.H. Bergman.
+ */
+
+#ifndef _ASMARM_IOCTL_H
+#define _ASMARM_IOCTL_H
+
+/* ioctl command encoding: 32 bits total, command in lower 16 bits,
+ * size of the parameter structure in the lower 14 bits of the
+ * upper 16 bits.
+ * Encoding the size of the parameter structure in the ioctl request
+ * is useful for catching programs compiled with old versions
+ * and to avoid overwriting user space outside the user buffer area.
+ * The highest 2 bits are reserved for indicating the ``access mode''.
+ * NOTE: This limits the max parameter size to 16kB - 1!
+ */
+
+/*
+ * The following is for compatibility across the various Linux
+ * platforms. The i386 ioctl numbering scheme doesn't really enforce
+ * a type field. De facto, however, the top 8 bits of the lower 16
+ * bits are indeed used as a type field, so we might just as well make
+ * this explicit here. Please be sure to use the decoding macros
+ * below from now on.
+ */
+#define _IOC_NRBITS 8
+#define _IOC_TYPEBITS 8
+#define _IOC_SIZEBITS 14
+#define _IOC_DIRBITS 2
+
+#define _IOC_NRMASK ((1 << _IOC_NRBITS)-1)
+#define _IOC_TYPEMASK ((1 << _IOC_TYPEBITS)-1)
+#define _IOC_SIZEMASK ((1 << _IOC_SIZEBITS)-1)
+#define _IOC_DIRMASK ((1 << _IOC_DIRBITS)-1)
+
+#define _IOC_NRSHIFT 0
+#define _IOC_TYPESHIFT (_IOC_NRSHIFT+_IOC_NRBITS)
+#define _IOC_SIZESHIFT (_IOC_TYPESHIFT+_IOC_TYPEBITS)
+#define _IOC_DIRSHIFT (_IOC_SIZESHIFT+_IOC_SIZEBITS)
+
+/*
+ * Direction bits.
+ */
+#define _IOC_NONE 0U
+#define _IOC_WRITE 1U
+#define _IOC_READ 2U
+
+#define _IOC(dir,type,nr,size) \
+ (((dir) << _IOC_DIRSHIFT) | \
+ ((type) << _IOC_TYPESHIFT) | \
+ ((nr) << _IOC_NRSHIFT) | \
+ ((size) << _IOC_SIZESHIFT))
+
+/* used to create numbers */
+#define _IO(type,nr) _IOC(_IOC_NONE,(type),(nr),0)
+#define _IOR(type,nr,size) _IOC(_IOC_READ,(type),(nr),sizeof(size))
+#define _IOW(type,nr,size) _IOC(_IOC_WRITE,(type),(nr),sizeof(size))
+#define _IOWR(type,nr,size) _IOC(_IOC_READ|_IOC_WRITE,(type),(nr),sizeof(size))
+
+/* used to decode ioctl numbers.. */
+#define _IOC_DIR(nr) (((nr) >> _IOC_DIRSHIFT) & _IOC_DIRMASK)
+#define _IOC_TYPE(nr) (((nr) >> _IOC_TYPESHIFT) & _IOC_TYPEMASK)
+#define _IOC_NR(nr) (((nr) >> _IOC_NRSHIFT) & _IOC_NRMASK)
+#define _IOC_SIZE(nr) (((nr) >> _IOC_SIZESHIFT) & _IOC_SIZEMASK)
+
+/* ...and for the drivers/sound files... */
+
+#define IOC_IN (_IOC_WRITE << _IOC_DIRSHIFT)
+#define IOC_OUT (_IOC_READ << _IOC_DIRSHIFT)
+#define IOC_INOUT ((_IOC_WRITE|_IOC_READ) << _IOC_DIRSHIFT)
+#define IOCSIZE_MASK (_IOC_SIZEMASK << _IOC_SIZESHIFT)
+#define IOCSIZE_SHIFT (_IOC_SIZESHIFT)
+
+#endif /* _ASMARM_IOCTL_H */
--- /dev/null
+#ifndef __ASM_ARM_IOCTLS_H
+#define __ASM_ARM_IOCTLS_H
+
+#include <asm/ioctl.h>
+
+/* 0x54 is just a magic number to make these relatively unique ('T') */
+
+#define TCGETS 0x5401
+#define TCSETS 0x5402
+#define TCSETSW 0x5403
+#define TCSETSF 0x5404
+#define TCGETA 0x5405
+#define TCSETA 0x5406
+#define TCSETAW 0x5407
+#define TCSETAF 0x5408
+#define TCSBRK 0x5409
+#define TCXONC 0x540A
+#define TCFLSH 0x540B
+#define TIOCEXCL 0x540C
+#define TIOCNXCL 0x540D
+#define TIOCSCTTY 0x540E
+#define TIOCGPGRP 0x540F
+#define TIOCSPGRP 0x5410
+#define TIOCOUTQ 0x5411
+#define TIOCSTI 0x5412
+#define TIOCGWINSZ 0x5413
+#define TIOCSWINSZ 0x5414
+#define TIOCMGET 0x5415
+#define TIOCMBIS 0x5416
+#define TIOCMBIC 0x5417
+#define TIOCMSET 0x5418
+#define TIOCGSOFTCAR 0x5419
+#define TIOCSSOFTCAR 0x541A
+#define FIONREAD 0x541B
+#define TIOCINQ FIONREAD
+#define TIOCLINUX 0x541C
+#define TIOCCONS 0x541D
+#define TIOCGSERIAL 0x541E
+#define TIOCSSERIAL 0x541F
+#define TIOCPKT 0x5420
+#define FIONBIO 0x5421
+#define TIOCNOTTY 0x5422
+#define TIOCSETD 0x5423
+#define TIOCGETD 0x5424
+#define TCSBRKP 0x5425 /* Needed for POSIX tcsendbreak() */
+#define TIOCTTYGSTRUCT 0x5426 /* For debugging only */
+#define TIOCSBRK 0x5427 /* BSD compatibility */
+#define TIOCCBRK 0x5428 /* BSD compatibility */
+#define TIOCGSID 0x5429 /* Return the session ID of FD */
+
+#define FIONCLEX 0x5450 /* these numbers need to be adjusted. */
+#define FIOCLEX 0x5451
+#define FIOASYNC 0x5452
+#define TIOCSERCONFIG 0x5453
+#define TIOCSERGWILD 0x5454
+#define TIOCSERSWILD 0x5455
+#define TIOCGLCKTRMIOS 0x5456
+#define TIOCSLCKTRMIOS 0x5457
+#define TIOCSERGSTRUCT 0x5458 /* For debugging only */
+#define TIOCSERGETLSR 0x5459 /* Get line status register */
+#define TIOCSERGETMULTI 0x545A /* Get multiport config */
+#define TIOCSERSETMULTI 0x545B /* Set multiport config */
+
+#define TIOCMIWAIT 0x545C /* wait for a change on serial input line(s) */
+#define TIOCGICOUNT 0x545D /* read serial port inline interrupt counts */
+
+/* Used for packet mode */
+#define TIOCPKT_DATA 0
+#define TIOCPKT_FLUSHREAD 1
+#define TIOCPKT_FLUSHWRITE 2
+#define TIOCPKT_STOP 4
+#define TIOCPKT_START 8
+#define TIOCPKT_NOSTOP 16
+#define TIOCPKT_DOSTOP 32
+
+#define TIOCSER_TEMT 0x01 /* Transmitter physically empty */
+
+#endif
--- /dev/null
+#ifndef __ASMARM_IPC_H
+#define __ASMARM_IPC_H
+
+/*
+ * These are used to wrap system calls on ARM.
+ *
+ * See arch/arm/kernel/sys-arm.c for ugly details..
+ */
+struct ipc_kludge {
+ struct msgbuf *msgp;
+ long msgtyp;
+};
+
+#define SEMOP 1
+#define SEMGET 2
+#define SEMCTL 3
+#define MSGSND 11
+#define MSGRCV 12
+#define MSGGET 13
+#define MSGCTL 14
+#define SHMAT 21
+#define SHMDT 22
+#define SHMGET 23
+#define SHMCTL 24
+
+#define IPCCALL(version,op) ((version)<<16 | (op))
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/irq-no.h
+ *
+ * Machine independent interrupt numbers
+ */
+
+#include <asm/arch/irqs.h>
+
+#ifndef NR_IRQS
+#define NR_IRQS 128
+#endif
--- /dev/null
+#ifndef __ASM_ARM_IRQ_H
+#define __ASM_ARM_IRQ_H
+
+#include <asm/irq-no.h>
+
+extern void disable_irq(unsigned int);
+extern void enable_irq(unsigned int);
+
+#define __STR(x) #x
+#define STR(x) __STR(x)
+
+#endif
+
--- /dev/null
+#ifndef __ASM_PIPE_H
+#define __ASM_PIPE_H
+
+#ifndef PAGE_SIZE
+#include <asm/page.h>
+#endif
+
+#define PIPE_BUF PAGE_SIZE
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/mm-init.h
+ *
+ * Copyright (C) 1997,1998 Russell King
+ *
+ * Contained within are structures to describe how to set up the
+ * initial memory map. It includes both a processor-specific header
+ * for parsing these structures, and an architecture-specific header
+ * to fill out the structures.
+ */
+#ifndef __ASM_MM_INIT_H
+#define __ASM_MM_INIT_H
+
+typedef enum {
+	/* physical address is absolute */
+	init_mem_map_absolute,
+	/* physical address is relative to start_mem
+	 * as passed in paging_init
+	 */
+	init_mem_map_relative_start_mem
+} init_memmap_type_t;
+
+typedef struct {
+ init_memmap_type_t type;
+ unsigned long physical_address;
+ unsigned long virtual_address;
+ unsigned long size;
+} init_memmap_t;
+
+#define INIT_MEM_MAP_SENTINEL { init_mem_map_absolute, 0, 0, 0 }
+#define INIT_MEM_MAP_ABSOLUTE(p,l,s) { init_mem_map_absolute,p,l,s }
+#define INIT_MEM_MAP_RELATIVE(o,l,s) { init_mem_map_relative_start_mem,o,l,s }
+
+/*
+ * Within this file, initialise an array of init_memmap_t's
+ * to describe your initial memory mapping structure.
+ */
+#include <asm/arch/mm-init.h>
+
+/*
+ * Contained within this file is code to read the array
+ * of init_memmap_t's created above.
+ */
+#include <asm/proc/mm-init.h>
+
+#endif
--- /dev/null
+#ifndef __ARM_MMAN_H__
+#define __ARM_MMAN_H__
+
+#define PROT_READ 0x1 /* page can be read */
+#define PROT_WRITE 0x2 /* page can be written */
+#define PROT_EXEC 0x4 /* page can be executed */
+#define PROT_NONE 0x0 /* page can not be accessed */
+
+#define MAP_SHARED 0x01 /* Share changes */
+#define MAP_PRIVATE 0x02 /* Changes are private */
+#define MAP_TYPE 0x0f /* Mask for type of mapping */
+#define MAP_FIXED 0x10 /* Interpret addr exactly */
+#define MAP_ANONYMOUS 0x20 /* don't use a file */
+
+#define MAP_GROWSDOWN 0x0100 /* stack-like segment */
+#define MAP_DENYWRITE 0x0800 /* ETXTBSY */
+#define MAP_EXECUTABLE 0x1000 /* mark it as an executable */
+#define MAP_LOCKED 0x2000 /* pages are locked */
+#define MAP_NORESERVE 0x4000 /* don't check for reservations */
+
+#define MS_ASYNC 1 /* sync memory asynchronously */
+#define MS_INVALIDATE 2 /* invalidate the caches */
+#define MS_SYNC 4 /* synchronous memory sync */
+
+#define MCL_CURRENT 1 /* lock all current mappings */
+#define MCL_FUTURE 2 /* lock all future mappings */
+
+/* compatibility flags */
+#define MAP_ANON MAP_ANONYMOUS
+#define MAP_FILE 0
+
+#endif /* __ARM_MMAN_H__ */
--- /dev/null
+/*
+ * linux/include/asm-arm/mmu_context.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 27-06-1996 RMK Created
+ */
+#ifndef __ASM_ARM_MMU_CONTEXT_H
+#define __ASM_ARM_MMU_CONTEXT_H
+
+#define get_mmu_context(x) do { } while (0)
+
+#define init_new_context(mm) do { } while(0)
+#define destroy_context(mm) do { } while(0)
+
+#endif
--- /dev/null
+/* $Id: namei.h,v 1.1 1996/12/13 14:48:21 jj Exp $
+ * linux/include/asm-arm/namei.h
+ *
+ * Included from linux/fs/namei.c
+ */
+
+#ifndef __ASMARM_NAMEI_H
+#define __ASMARM_NAMEI_H
+
+/* This dummy routine may be changed to something useful
+ * for /usr/gnemul/ emulation stuff.
+ * Look at asm-sparc/namei.h for details.
+ */
+
+#define __prefix_namei(retrieve_mode, name, base, buf, res_dir, res_inode, \
+ last_name, last_entry, last_error) 1
+
+#endif /* __ASMARM_NAMEI_H */
--- /dev/null
+#ifndef _ASMARM_PAGE_H
+#define _ASMARM_PAGE_H
+
+#include <asm/arch/mmu.h>
+#include <asm/proc/page.h>
+
+#ifdef __KERNEL__
+
+#define clear_page(page) memzero((void *)(page), PAGE_SIZE)
+#define copy_page(to,from) memcpy((void *)(to), (void *)(from), PAGE_SIZE)
+
+#endif
+
+/* unsigned long __pa(void *x) */
+#define __pa(x) __virt_to_phys((unsigned long)(x))
+
+/* void *__va(unsigned long x) */
+#define __va(x) ((void *)(__phys_to_virt((unsigned long)(x))))
+
+#endif
--- /dev/null
+#include <asm/proc/param.h>
--- /dev/null
+#ifndef _ASMARM_PGTABLE_H
+#define _ASMARM_PGTABLE_H
+
+#include <asm/proc-fns.h>
+#include <asm/proc/pgtable.h>
+
+#define module_map vmalloc
+#define module_unmap vfree
+
+#endif /* _ASMARM_PGTABLE_H */
--- /dev/null
+#ifndef __ASMARM_POLL_H
+#define __ASMARM_POLL_H
+
+/* These are specified by iBCS2 */
+#define POLLIN 0x0001
+#define POLLPRI 0x0002
+#define POLLOUT 0x0004
+#define POLLERR 0x0008
+#define POLLHUP 0x0010
+#define POLLNVAL 0x0020
+
+/* The rest seem to be more-or-less nonstandard. Check them! */
+#define POLLRDNORM 0x0040
+#define POLLRDBAND 0x0080
+#define POLLWRNORM 0x0100
+#define POLLWRBAND 0x0200
+#define POLLMSG 0x0400
+
+struct pollfd {
+ int fd;
+ short events;
+ short revents;
+};
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/posix_types.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 27-06-1996 RMK Created
+ */
+#ifndef __ARCH_ARM_POSIX_TYPES_H
+#define __ARCH_ARM_POSIX_TYPES_H
+
+/*
+ * This file is generally used by user-level software, so you need to
+ * be a little careful about namespace pollution etc. Also, we cannot
+ * assume GCC is being used.
+ */
+
+typedef unsigned short __kernel_dev_t;
+typedef unsigned long __kernel_ino_t;
+typedef unsigned short __kernel_mode_t;
+typedef unsigned short __kernel_nlink_t;
+typedef long __kernel_off_t;
+typedef int __kernel_pid_t;
+typedef unsigned short __kernel_ipc_pid_t;
+typedef unsigned short __kernel_uid_t;
+typedef unsigned short __kernel_gid_t;
+typedef unsigned int __kernel_size_t;
+typedef int __kernel_ssize_t;
+typedef int __kernel_ptrdiff_t;
+typedef long __kernel_time_t;
+typedef long __kernel_suseconds_t;
+typedef long __kernel_clock_t;
+typedef int __kernel_daddr_t;
+typedef char * __kernel_caddr_t;
+
+#ifdef __GNUC__
+typedef long long __kernel_loff_t;
+#endif
+
+typedef struct {
+ int val[2];
+} __kernel_fsid_t;
+
+#undef __FD_SET
+#define __FD_SET(fd, fdsetp) \
+ (((fd_set *)fdsetp)->fds_bits[fd >> 5] |= (1<<(fd & 31)))
+
+#undef __FD_CLR
+#define __FD_CLR(fd, fdsetp) \
+ (((fd_set *)fdsetp)->fds_bits[fd >> 5] &= ~(1<<(fd & 31)))
+
+#undef __FD_ISSET
+#define __FD_ISSET(fd, fdsetp) \
+ ((((fd_set *)fdsetp)->fds_bits[fd >> 5] & (1<<(fd & 31))) != 0)
+
+#undef __FD_ZERO
+#define __FD_ZERO(fdsetp) \
+ (memset (fdsetp, 0, sizeof (*(fd_set *)fdsetp)))
+
+#endif
--- /dev/null
+/*
+ * linux/asm-arm/proc-armo/assembler.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * This file contains arm architecture specific defines
+ * for the different processors
+ */
+
+/*
+ * LOADREGS: multiple register load (ldm) with pc in register list
+ * (takes account of ARM6 not using ^)
+ *
+ * RETINSTR: return instruction: adds the 's' at the end of the
+ * instruction if this is not an ARM6
+ *
+ * SAVEIRQS: save IRQ state (not required on ARM2/ARM3 - done
+ * implicitly)
+ *
+ * RESTOREIRQS: restore IRQ state (not required on ARM2/ARM3 - done
+ * implicitly with ldm ... ^ or movs).
+ *
+ * These next two need thinking about - can't easily use stack... (see system.S)
+ * DISABLEIRQS: disable IRQS in SVC mode
+ *
+ * ENABLEIRQS: enable IRQS in SVC mode
+ *
+ * USERMODE: switch to USER mode
+ *
+ * SVCMODE: switch to SVC mode
+ */
+
+#define N_BIT (1 << 31)
+#define Z_BIT (1 << 30)
+#define C_BIT (1 << 29)
+#define V_BIT (1 << 28)
+
+#define PCMASK 0xfc000003
+
+#ifdef __ASSEMBLER__
+
+#define I_BIT (1 << 27)
+#define F_BIT (1 << 26)
+
+#define MODE_USR 0
+#define MODE_FIQ 1
+#define MODE_IRQ 2
+#define MODE_SVC 3
+
+#define DEFAULT_FIQ MODE_FIQ
+
+#define LOADREGS(cond, base, reglist...)\
+ ldm##cond base,reglist^
+
+#define RETINSTR(instr, regs...)\
+ instr##s regs
+
+#define MODENOP\
+ mov r0, r0
+
+#define MODE(savereg,tmpreg,mode) \
+ mov savereg, pc; \
+ bic tmpreg, savereg, $0x0c000003; \
+ orr tmpreg, tmpreg, $mode; \
+ teqp tmpreg, $0
+
+#define RESTOREMODE(savereg) \
+ teqp savereg, $0
+
+#define SAVEIRQS(tmpreg)
+
+#define RESTOREIRQS(tmpreg)
+
+#define DISABLEIRQS(tmpreg)\
+ teqp pc, $0x08000003
+
+#define ENABLEIRQS(tmpreg)\
+ teqp pc, $0x00000003
+
+#define USERMODE(tmpreg)\
+ teqp pc, $0x00000000;\
+ mov r0, r0
+
+#define SVCMODE(tmpreg)\
+ teqp pc, $0x00000003;\
+ mov r0, r0
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/mmap.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * This contains the code to set up the memory map on an ARM2/ARM250/ARM3
+ * machine. This is both processor & architecture specific, and requires
+ * some more work to get it to fit into our separate processor and
+ * architecture structure.
+ */
+
+static unsigned long phys_screen_end;
+int page_nr;
+
+#define setup_processor_functions()
+
+/*
+ * This routine needs more work to make it dynamically release/allocate mem!
+ */
+unsigned long map_screen_mem(unsigned long log_start, unsigned long kmem, int update)
+{
+ static int updated = 0;
+ unsigned long address = SCREEN_START, i;
+ pgd_t *pg_dir;
+ pmd_t *pm_dir;
+ pte_t *pt_entry;
+
+ if (updated)
+ return 0;
+ updated = update;
+
+ pg_dir = swapper_pg_dir + (SCREEN1_BASE >> PGDIR_SHIFT);
+ pm_dir = pmd_offset(pg_dir, SCREEN1_BASE);
+ pt_entry = pte_offset(pm_dir, SCREEN1_BASE);
+
+ for (i = SCREEN1_BASE; i < SCREEN1_END; i += PAGE_SIZE) {
+ if (i >= log_start) {
+ *pt_entry = mk_pte(address, __pgprot(_PAGE_PRESENT));
+ address += PAGE_SIZE;
+ } else
+ *pt_entry = mk_pte(0, __pgprot(0));
+ pt_entry++;
+ }
+ phys_screen_end = address;
+ if (update)
+ flush_tlb_all ();
+ return kmem;
+}
+
+static inline unsigned long setup_pagetables(unsigned long start_mem, unsigned long end_mem)
+{
+ unsigned long address;
+ unsigned int spi;
+
+ page_nr = MAP_NR(end_mem);
+
+ /* Allocate zero page */
+ address = PAGE_OFFSET + 480*1024;
+ for (spi = 0; spi < 32768 >> PAGE_SHIFT; spi++) {
+ pgd_val(swapper_pg_dir[spi]) = pte_val(mk_pte(address, PAGE_READONLY));
+ address += PAGE_SIZE;
+ }
+
+ while (spi < (PAGE_OFFSET >> PGDIR_SHIFT))
+ pgd_val(swapper_pg_dir[spi++]) = 0;
+
+ map_screen_mem (SCREEN1_END - 480*1024, 0, 0);
+ return start_mem;
+}
+
+static inline void mark_usable_memory_areas(unsigned long *start_mem, unsigned long end_mem)
+{
+ unsigned long smem = PAGE_ALIGN(*start_mem);
+
+ while (smem < end_mem) {
+ clear_bit(PG_reserved, &mem_map[MAP_NR(smem)].flags);
+ smem += PAGE_SIZE;
+ }
+
+ for (smem = phys_screen_end; smem < SCREEN2_END; smem += PAGE_SIZE)
+ clear_bit(PG_reserved, &mem_map[MAP_NR(smem)].flags);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/mm-init.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * This contains the code to set up the memory map on an ARM2/ARM250/ARM3
+ * machine. This is both processor & architecture specific, and requires
+ * some more work to get it to fit into our separate processor and
+ * architecture structure.
+ */
+
+static unsigned long phys_screen_end;
+int page_nr;
+
+#define setup_processor_functions()
+#define PTE_SIZE (PTRS_PER_PTE * BYTES_PER_PTR)
+
+static inline void setup_swapper_dir (int index, pte_t *ptep)
+{
+ set_pmd (pmd_offset (swapper_pg_dir + index, 0), mk_pmd (ptep));
+}
+
+/*
+ * This routine needs more work to make it dynamically release/allocate mem!
+ */
+unsigned long map_screen_mem(unsigned long log_start, unsigned long kmem, int update)
+{
+ static int updated = 0;
+
+ if (updated)
+ return 0;
+
+ updated = update;
+
+ if (update) {
+ unsigned long address = log_start, offset;
+ pgd_t *pgdp;
+
+ kmem = (kmem + 3) & ~3;
+
+ pgdp = pgd_offset (&init_mm, address); /* +31 */
+ offset = SCREEN_START;
+ while (address < SCREEN1_END) {
+ unsigned long addr_pmd, end_pmd;
+ pmd_t *pmdp;
+
+ /* if (pgd_none (*pgdp)) alloc pmd */
+ pmdp = pmd_offset (pgdp, address); /* +0 */
+ addr_pmd = address & ~PGDIR_MASK; /* 088000 */
+ end_pmd = addr_pmd + SCREEN1_END - address; /* 100000 */
+ if (end_pmd > PGDIR_SIZE)
+ end_pmd = PGDIR_SIZE;
+
+ do {
+ unsigned long addr_pte, end_pte;
+ pte_t *ptep;
+
+ if (pmd_none (*pmdp)) {
+ pte_t *new_pte = (pte_t *)kmem;
+ kmem += PTRS_PER_PTE * BYTES_PER_PTR;
+ memzero (new_pte, PTRS_PER_PTE * BYTES_PER_PTR);
+ set_pmd (pmdp, mk_pmd(new_pte));
+ }
+
+ ptep = pte_offset (pmdp, addr_pmd); /* +11 */
+ addr_pte = addr_pmd & ~PMD_MASK; /* 088000 */
+ end_pte = addr_pte + end_pmd - addr_pmd; /* 100000 */
+ if (end_pte > PMD_SIZE)
+ end_pte = PMD_SIZE;
+
+ do {
+ set_pte (ptep, mk_pte(offset, PAGE_KERNEL));
+ addr_pte += PAGE_SIZE;
+ offset += PAGE_SIZE;
+ ptep++;
+ } while (addr_pte < end_pte);
+
+ pmdp++;
+ addr_pmd = (addr_pmd + PMD_SIZE) & PMD_MASK;
+ } while (addr_pmd < end_pmd);
+
+ address = (address + PGDIR_SIZE) & PGDIR_MASK;
+ pgdp ++;
+ }
+
+ phys_screen_end = offset;
+ flush_tlb_all ();
+ update_mm_cache_all ();
+ }
+ return kmem;
+}
+
+static inline unsigned long setup_pagetables(unsigned long start_mem, unsigned long end_mem)
+{
+ unsigned int i;
+ union {unsigned long l; pte_t *pte; } u;
+
+ page_nr = MAP_NR(end_mem);
+
+ /* map in pages for (0x0000 - 0x8000) */
+ u.l = ((start_mem + (PTE_SIZE-1)) & ~(PTE_SIZE-1));
+ start_mem = u.l + PTE_SIZE;
+ memzero (u.pte, PTE_SIZE);
+ u.pte[0] = mk_pte(PAGE_OFFSET + 491520, PAGE_READONLY);
+ setup_swapper_dir (0, u.pte);
+
+ for (i = 1; i < PTRS_PER_PGD; i++)
+ pgd_val(swapper_pg_dir[i]) = 0;
+
+ /* now map screen mem in */
+ phys_screen_end = SCREEN2_END;
+ map_screen_mem (SCREEN1_END - 480*1024, 0, 0);
+
+ return start_mem;
+}
+
+static inline void mark_usable_memory_areas(unsigned long *start_mem, unsigned long end_mem)
+{
+ unsigned long smem;
+
+ *start_mem = smem = PAGE_ALIGN(*start_mem);
+
+ while (smem < end_mem) {
+ clear_bit(PG_reserved, &mem_map[MAP_NR(smem)].flags);
+ smem += PAGE_SIZE;
+ }
+
+ for (smem = phys_screen_end; smem < SCREEN2_END; smem += PAGE_SIZE)
+ clear_bit(PG_reserved, &mem_map[MAP_NR(smem)].flags);
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/page.h
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#ifndef __ASM_PROC_PAGE_H
+#define __ASM_PROC_PAGE_H
+
+/* PAGE_SHIFT determines the page size */
+#define PAGE_SHIFT 15
+#define PAGE_SIZE (1UL << PAGE_SHIFT)
+#define PAGE_MASK (~(PAGE_SIZE-1))
+
+#ifdef __KERNEL__
+
+#define STRICT_MM_TYPECHECKS
+
+#ifdef STRICT_MM_TYPECHECKS
+/*
+ * These are used to make use of C type-checking..
+ */
+typedef struct { unsigned long pte; } pte_t;
+typedef struct { unsigned long pmd; } pmd_t;
+typedef struct { unsigned long pgd; } pgd_t;
+typedef struct { unsigned long pgprot; } pgprot_t;
+
+#define pte_val(x) ((x).pte)
+#define pmd_val(x) ((x).pmd)
+#define pgd_val(x) ((x).pgd)
+#define pgprot_val(x) ((x).pgprot)
+
+#define __pte(x) ((pte_t) { (x) } )
+#define __pmd(x) ((pmd_t) { (x) } )
+#define __pgd(x) ((pgd_t) { (x) } )
+#define __pgprot(x) ((pgprot_t) { (x) } )
+
+#else
+/*
+ * .. while these make it easier on the compiler
+ */
+typedef unsigned long pte_t;
+typedef unsigned long pmd_t;
+typedef unsigned long pgd_t;
+typedef unsigned long pgprot_t;
+
+#define pte_val(x) (x)
+#define pmd_val(x) (x)
+#define pgd_val(x) (x)
+#define pgprot_val(x) (x)
+
+#define __pte(x) (x)
+#define __pmd(x) (x)
+#define __pgd(x) (x)
+#define __pgprot(x) (x)
+
+#endif
+
+/* to align the pointer to the (next) page boundary */
+#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
+
+/* This handles the memory map.. */
+#define PAGE_OFFSET 0x02000000
+#define MAP_NR(addr) (((unsigned long)(addr) - PAGE_OFFSET) >> PAGE_SHIFT)
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_PROC_PAGE_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/param.h
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#ifndef __ASM_PROC_PARAM_H
+#define __ASM_PROC_PARAM_H
+
+#ifndef HZ
+#define HZ 100
+#endif
+
+#define EXEC_PAGESIZE 32768
+
+#ifndef NGROUPS
+#define NGROUPS 32
+#endif
+
+#ifndef NOGROUP
+#define NOGROUP (-1)
+#endif
+
+#define MAXHOSTNAMELEN 64 /* max length of hostname */
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/pgtable.h
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+#ifndef __ASM_PROC_PGTABLE_H
+#define __ASM_PROC_PGTABLE_H
+
+#include <asm/arch/mmu.h>
+
+#define LIBRARY_TEXT_START 0x0c000000
+
+/*
+ * Cache flushing...
+ */
+#define flush_cache_all() do { } while (0)
+#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_range(mm,start,end) do { } while (0)
+#define flush_cache_page(vma,vmaddr) do { } while (0)
+#define flush_page_to_ram(page) do { } while (0)
+
+/*
+ * TLB flushing:
+ *
+ * - flush_tlb() flushes the current mm struct TLBs
+ * - flush_tlb_all() flushes all processes TLBs
+ * - flush_tlb_mm(mm) flushes the specified mm context TLB's
+ * - flush_tlb_page(vma, vmaddr) flushes one page
+ * - flush_tlb_range(mm, start, end) flushes a range of pages
+ */
+
+#define flush_tlb() flush_tlb_mm(current->mm)
+
+extern __inline__ void flush_tlb_all(void)
+{
+ struct task_struct *p;
+
+ p = &init_task;
+ do {
+ processor.u.armv2._update_map(p);
+ p = p->next_task;
+ } while (p != &init_task);
+
+ processor.u.armv2._remap_memc (current);
+}
+
+extern __inline__ void flush_tlb_mm(struct mm_struct *mm)
+{
+ struct task_struct *p;
+
+ p = &init_task;
+ do {
+ if (p->mm == mm)
+ processor.u.armv2._update_map(p);
+ p = p->next_task;
+ } while (p != &init_task);
+
+ if (current->mm == mm)
+ processor.u.armv2._remap_memc (current);
+}
+
+#define flush_tlb_range(mm, start, end) flush_tlb_mm(mm)
+#define flush_tlb_page(vma, vmaddr) flush_tlb_mm(vma->vm_mm)
+
+#define __flush_entry_to_ram(entry)
+
+/* Certain architectures need to do special things when pte's
+ * within a page table are directly modified. Thus, the following
+ * hook is made available.
+ */
+#define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
+
+/* PMD_SHIFT determines the size of the area a second-level page table can map */
+#define PMD_SHIFT PAGE_SHIFT
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+
+/* PGDIR_SHIFT determines what a third-level page table entry can map */
+#define PGDIR_SHIFT PAGE_SHIFT
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+/*
+ * entries per page directory level: the arm3 is one-level, so
+ * we don't really have any PMD or PTE directory physically.
+ */
+#define PTRS_PER_PTE 1
+#define PTRS_PER_PMD 1
+#define PTRS_PER_PGD 1024
+
+/* Just any arbitrary offset to the start of the vmalloc VM area: the
+ * current 8MB value just means that there will be an 8MB "hole" after the
+ * physical memory until the kernel virtual memory starts. That means that
+ * any out-of-bounds memory accesses will hopefully be caught.
+ * The vmalloc() routines leave a hole of 4kB between each vmalloced
+ * area for the same reason. ;)
+ */
+#define VMALLOC_START 0x01a00000
+#define VMALLOC_VMADDR(x) ((unsigned long)(x))
+
+#define _PAGE_PRESENT 0x001
+#define _PAGE_RW 0x002
+#define _PAGE_USER 0x004
+#define _PAGE_PCD 0x010
+#define _PAGE_ACCESSED 0x020
+#define _PAGE_DIRTY 0x040
+
+#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_DIRTY)
+#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+
+#define PAGE_NONE __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED)
+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED)
+#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED)
+
+/*
+ * The arm can't do page protection for execute, and considers execute the same as read.
+ * Also, write permissions imply read permissions. This is the closest we can get..
+ */
+#define __P000 PAGE_NONE
+#define __P001 PAGE_READONLY
+#define __P010 PAGE_COPY
+#define __P011 PAGE_COPY
+#define __P100 PAGE_READONLY
+#define __P101 PAGE_READONLY
+#define __P110 PAGE_COPY
+#define __P111 PAGE_COPY
+
+#define __S000 PAGE_NONE
+#define __S001 PAGE_READONLY
+#define __S010 PAGE_SHARED
+#define __S011 PAGE_SHARED
+#define __S100 PAGE_READONLY
+#define __S101 PAGE_READONLY
+#define __S110 PAGE_SHARED
+#define __S111 PAGE_SHARED
+
+#undef TEST_VERIFY_AREA
+
+/*
+ * BAD_PAGE is used for a bogus page.
+ *
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern pte_t __bad_page(void);
+extern unsigned long *empty_zero_page;
+
+#define BAD_PAGE __bad_page()
+#define ZERO_PAGE ((unsigned long) empty_zero_page)
+
+/* number of bits that fit into a memory pointer */
+#define BYTES_PER_PTR (sizeof(unsigned long))
+#define BITS_PER_PTR (8*BYTES_PER_PTR)
+
+/* to align the pointer to a pointer address */
+#define PTR_MASK (~(sizeof(void*)-1))
+
+/* sizeof(void*)==1<<SIZEOF_PTR_LOG2 */
+#define SIZEOF_PTR_LOG2 2
+
+/* to find an entry in a page-table */
+#define PAGE_PTR(address) \
+((unsigned long)(address)>>(PAGE_SHIFT-SIZEOF_PTR_LOG2)&PTR_MASK&~PAGE_MASK)
+
+/* to set the page-dir */
+#define SET_PAGE_DIR(tsk,pgdir) \
+do { \
+ tsk->tss.memmap = (unsigned long)pgdir; \
+ processor.u.armv2._update_map(tsk); \
+ if ((tsk) == current) \
+ processor.u.armv2._remap_memc (current); \
+} while (0)
+
+extern unsigned long physical_start;
+extern unsigned long physical_end;
+
+extern inline int pte_none(pte_t pte) { return !pte_val(pte); }
+extern inline int pte_present(pte_t pte) { return pte_val(pte) & _PAGE_PRESENT; }
+extern inline void pte_clear(pte_t *ptep) { pte_val(*ptep) = 0; }
+
+extern inline int pmd_none(pmd_t pmd) { return 0; }
+extern inline int pmd_bad(pmd_t pmd) { return 0; }
+extern inline int pmd_present(pmd_t pmd) { return 1; }
+extern inline void pmd_clear(pmd_t * pmdp) { }
+
+/*
+ * The "pgd_xxx()" functions here are trivial for a folded two-level
+ * setup: the pgd is never bad, and a pmd always exists (as it's folded
+ * into the pgd entry)
+ */
+extern inline int pgd_none(pgd_t pgd) { return 0; }
+extern inline int pgd_bad(pgd_t pgd) { return 0; }
+extern inline int pgd_present(pgd_t pgd) { return 1; }
+extern inline void pgd_clear(pgd_t * pgdp) { }
+
+/*
+ * The following only work if pte_present() is true.
+ * Undefined behaviour if not..
+ */
+extern inline int pte_read(pte_t pte) { return pte_val(pte) & _PAGE_USER; }
+extern inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_RW; }
+extern inline int pte_exec(pte_t pte) { return pte_val(pte) & _PAGE_USER; }
+extern inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
+extern inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
+#define pte_cacheable(pte) 1
+
+extern inline pte_t pte_nocache(pte_t pte) { return pte; }
+extern inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) &= ~_PAGE_RW; return pte; }
+extern inline pte_t pte_rdprotect(pte_t pte) { pte_val(pte) &= ~_PAGE_USER; return pte; }
+extern inline pte_t pte_exprotect(pte_t pte) { pte_val(pte) &= ~_PAGE_USER; return pte; }
+extern inline pte_t pte_mkclean(pte_t pte) { pte_val(pte) &= ~_PAGE_DIRTY; return pte; }
+extern inline pte_t pte_mkold(pte_t pte) { pte_val(pte) &= ~_PAGE_ACCESSED; return pte; }
+extern inline pte_t pte_mkwrite(pte_t pte) { pte_val(pte) |= _PAGE_RW; return pte; }
+extern inline pte_t pte_mkread(pte_t pte) { pte_val(pte) |= _PAGE_USER; return pte; }
+extern inline pte_t pte_mkexec(pte_t pte) { pte_val(pte) |= _PAGE_USER; return pte; }
+extern inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= _PAGE_DIRTY; return pte; }
+extern inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= _PAGE_ACCESSED; return pte; }
+
+/*
+ * Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ */
+extern inline pte_t mk_pte(unsigned long page, pgprot_t pgprot)
+{ pte_t pte; pte_val(pte) = virt_to_phys(page) | pgprot_val(pgprot); return pte; }
+
+extern inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{ pte_val(pte) = (pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot); return pte; }
+
+extern inline unsigned long pte_page(pte_t pte)
+{ return phys_to_virt(pte_val(pte) & PAGE_MASK); }
+
+extern inline unsigned long pmd_page(pmd_t pmd)
+{ return phys_to_virt(pmd_val(pmd) & PAGE_MASK); }
+
+/* to find an entry in a page-table-directory */
+extern inline pgd_t * pgd_offset(struct mm_struct * mm, unsigned long address)
+{
+ return mm->pgd + (address >> PGDIR_SHIFT);
+}
+
+/* Find an entry in the second-level page table.. */
+#define pmd_offset(dir, address) ((pmd_t *)(dir))
+
+/* Find an entry in the third-level page table.. */
+#define pte_offset(dir, address) ((pte_t *)(dir))
+
+/*
+ * Allocate and free page tables. The xxx_kernel() versions are
+ * used to allocate a kernel page table - this turns on ASN bits
+ * if any.
+ */
+extern inline void pte_free_kernel(pte_t * pte)
+{
+ pte_val(*pte) = 0;
+}
+
+extern inline pte_t * pte_alloc_kernel(pmd_t *pmd, unsigned long address)
+{
+ return (pte_t *) pmd;
+}
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+#define pmd_free_kernel(pmdp)
+#define pmd_alloc_kernel(pgd,address) ((pmd_t *)(pgd))
+
+#define pte_free(ptep)
+#define pte_alloc(pmd,address) ((pte_t *)(pmd))
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+#define pmd_free(pmd)
+#define pmd_alloc(pgd,address) ((pmd_t *)(pgd))
+
+extern inline void pgd_free(pgd_t * pgd)
+{
+ extern void kfree(void *);
+ kfree((void *)pgd);
+}
+
+extern inline pgd_t * pgd_alloc(void)
+{
+ pgd_t *pgd;
+ extern void *kmalloc(unsigned int, int);
+
+ pgd = (pgd_t *) kmalloc(PTRS_PER_PGD * BYTES_PER_PTR, GFP_KERNEL);
+ if (pgd)
+ memset(pgd, 0, PTRS_PER_PGD * BYTES_PER_PTR);
+ return pgd;
+}
+
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+#define update_mmu_cache(vma,address,pte) processor.u.armv2._update_mmu_cache(vma,address,pte)
+
+#define SWP_TYPE(entry) (((entry) >> 1) & 0x7f)
+#define SWP_OFFSET(entry) ((entry) >> 8)
+#define SWP_ENTRY(type,offset) (((type) << 1) | ((offset) << 8))
+
+#endif /* __ASM_PROC_PGTABLE_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/pgtable.h
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ * Modified 18/19-Oct-1997 for two-level page table
+ */
+#ifndef __ASM_PROC_PGTABLE_H
+#define __ASM_PROC_PGTABLE_H
+
+#include <asm/arch/mmu.h>
+#include <linux/slab.h>
+
+#define LIBRARY_TEXT_START 0x0c000000
+
+/*
+ * Cache flushing...
+ */
+#define flush_cache_all() do { } while (0)
+#define flush_cache_mm(mm) do { } while (0)
+#define flush_cache_range(mm,start,end) do { } while (0)
+#define flush_cache_page(vma,vmaddr) do { } while (0)
+#define flush_page_to_ram(page) do { } while (0)
+#define flush_icache_range(start,end) do { } while (0)
+
+/*
+ * TLB flushing:
+ *
+ * - flush_tlb() flushes the current mm struct TLBs
+ * - flush_tlb_all() flushes all processes TLBs
+ * - flush_tlb_mm(mm) flushes the specified mm context TLB's
+ * - flush_tlb_page(vma, vmaddr) flushes one page
+ * - flush_tlb_range(mm, start, end) flushes a range of pages
+ */
+#define flush_tlb() do { } while (0)
+#define flush_tlb_all() do { } while (0)
+#define flush_tlb_mm(mm) do { } while (0)
+#define flush_tlb_range(mm, start, end) do { } while (0)
+#define flush_tlb_page(vma, vmaddr) do { } while (0)
+
+/*
+ * We have a mem map cache...
+ */
+extern __inline__ void update_mm_cache_all(void)
+{
+ struct task_struct *p;
+
+ p = &init_task;
+ do {
+ processor.u.armv2._update_map(p);
+ p = p->next_task;
+ } while (p != &init_task);
+
+ processor.u.armv2._remap_memc (current);
+}
+
+extern __inline__ void update_mm_cache_task(struct task_struct *tsk)
+{
+ processor.u.armv2._update_map(tsk);
+
+ if (tsk == current)
+ processor.u.armv2._remap_memc (tsk);
+}
+
+extern __inline__ void update_mm_cache_mm(struct mm_struct *mm)
+{
+ struct task_struct *p;
+
+ p = &init_task;
+ do {
+ if (p->mm == mm)
+ processor.u.armv2._update_map(p);
+ p = p->next_task;
+ } while (p != &init_task);
+
+ if (current->mm == mm)
+ processor.u.armv2._remap_memc (current);
+}
+
+extern __inline__ void update_mm_cache_mm_addr(struct mm_struct *mm, unsigned long addr, pte_t pte)
+{
+ struct task_struct *p;
+
+ p = &init_task;
+ do {
+ if (p->mm == mm)
+ processor.u.armv2._update_mmu_cache(p, addr, pte);
+ p = p->next_task;
+ } while (p != &init_task);
+
+ if (current->mm == mm)
+ processor.u.armv2._remap_memc (current);
+}
+
+#define __flush_entry_to_ram(entry)
+
+/* Certain architectures need to do special things when pte's
+ * within a page table are directly modified. Thus, the following
+ * hook is made available.
+ */
+/* PMD_SHIFT determines the size of the area a second-level page table can map */
+#define PMD_SHIFT 20
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+
+/* PGDIR_SHIFT determines what a third-level page table entry can map */
+#define PGDIR_SHIFT 20
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+/*
+ * entries per page directory level: the arm3 is one-level, so
+ * we don't really have any PMD or PTE directory physically.
+ *
+ * 18-Oct-1997 RMK Now two-level (32x32)
+ */
+#define PTRS_PER_PTE 32
+#define PTRS_PER_PMD 1
+#define PTRS_PER_PGD 32
+
+/* Just any arbitrary offset to the start of the vmalloc VM area: the
+ * current 8MB value just means that there will be an 8MB "hole" after the
+ * physical memory until the kernel virtual memory starts. That means that
+ * any out-of-bounds memory accesses will hopefully be caught.
+ * The vmalloc() routines leave a hole of 4kB between each vmalloced
+ * area for the same reason. ;)
+ */
+#define VMALLOC_START 0x01a00000
+#define VMALLOC_VMADDR(x) ((unsigned long)(x))
+
+#define _PAGE_PRESENT 0x01
+#define _PAGE_READONLY 0x02
+#define _PAGE_NOT_USER 0x04
+#define _PAGE_OLD 0x08
+#define _PAGE_CLEAN 0x10
+
+#define _PAGE_TABLE (_PAGE_PRESENT)
+#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_OLD | _PAGE_CLEAN)
+
+/* -- present -- -- !dirty -- --- !write --- ---- !user --- */
+#define PAGE_NONE __pgprot(_PAGE_PRESENT | _PAGE_CLEAN | _PAGE_READONLY | _PAGE_NOT_USER)
+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_CLEAN )
+#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_CLEAN | _PAGE_READONLY )
+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_CLEAN | _PAGE_READONLY )
+#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_NOT_USER)
+
+/*
+ * The arm can't do page protection for execute, and considers execute the same as read.
+ * Also, write permissions imply read permissions. This is the closest we can get..
+ */
+#define __P000 PAGE_NONE
+#define __P001 PAGE_READONLY
+#define __P010 PAGE_COPY
+#define __P011 PAGE_COPY
+#define __P100 PAGE_READONLY
+#define __P101 PAGE_READONLY
+#define __P110 PAGE_COPY
+#define __P111 PAGE_COPY
+
+#define __S000 PAGE_NONE
+#define __S001 PAGE_READONLY
+#define __S010 PAGE_SHARED
+#define __S011 PAGE_SHARED
+#define __S100 PAGE_READONLY
+#define __S101 PAGE_READONLY
+#define __S110 PAGE_SHARED
+#define __S111 PAGE_SHARED
+
+#undef TEST_VERIFY_AREA
+
+extern unsigned long *empty_zero_page;
+
+/*
+ * BAD_PAGETABLE is used when we need a bogus page-table, while
+ * BAD_PAGE is used for a bogus page.
+ *
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern pte_t __bad_page(void);
+extern pte_t *__bad_pagetable(void);
+
+#define BAD_PAGETABLE __bad_pagetable()
+#define BAD_PAGE __bad_page()
+#define ZERO_PAGE ((unsigned long) empty_zero_page)
+
+/* number of bits that fit into a memory pointer */
+#define BYTES_PER_PTR (sizeof(unsigned long))
+#define BITS_PER_PTR (8*BYTES_PER_PTR)
+
+/* to align the pointer to a pointer address */
+#define PTR_MASK (~(sizeof(void*)-1))
+
+/* sizeof(void*)==1<<SIZEOF_PTR_LOG2 */
+#define SIZEOF_PTR_LOG2 2
+
+/* to find an entry in a page-table */
+#define PAGE_PTR(address) \
+((unsigned long)(address)>>(PAGE_SHIFT-SIZEOF_PTR_LOG2)&PTR_MASK&~PAGE_MASK)
+
+/* to set the page-dir */
+#define SET_PAGE_DIR(tsk,pgdir) \
+do { \
+ tsk->tss.memmap = (unsigned long)pgdir; \
+ processor.u.armv2._update_map(tsk); \
+ if ((tsk) == current) \
+ processor.u.armv2._remap_memc (current); \
+} while (0)
+
+extern unsigned long physical_start;
+extern unsigned long physical_end;
+
+#define pte_none(pte) (!pte_val(pte))
+#define pte_present(pte) (pte_val(pte) & _PAGE_PRESENT)
+#define pte_clear(ptep) set_pte((ptep), __pte(0))
+
+#define pmd_none(pmd) (!pmd_val(pmd))
+#define pmd_bad(pmd) ((pmd_val(pmd) & 0xfc000002))
+#define pmd_present(pmd) (pmd_val(pmd) & _PAGE_PRESENT)
+#define pmd_clear(pmdp) set_pmd(pmdp, __pmd(0))
+
+/*
+ * The "pgd_xxx()" functions here are trivial for a folded two-level
+ * setup: the pgd is never bad, and a pmd always exists (as it's folded
+ * into the pgd entry)
+ */
+#define pgd_none(pgd) (0)
+#define pgd_bad(pgd) (0)
+#define pgd_present(pgd) (1)
+#define pgd_clear(pgdp)
+
+/*
+ * The following only work if pte_present() is true.
+ * Undefined behaviour if not..
+ */
+extern inline int pte_read(pte_t pte) { return !(pte_val(pte) & _PAGE_NOT_USER); }
+extern inline int pte_write(pte_t pte) { return !(pte_val(pte) & _PAGE_READONLY); }
+extern inline int pte_exec(pte_t pte) { return !(pte_val(pte) & _PAGE_NOT_USER); }
+extern inline int pte_dirty(pte_t pte) { return !(pte_val(pte) & _PAGE_CLEAN); }
+extern inline int pte_young(pte_t pte) { return !(pte_val(pte) & _PAGE_OLD); }
+#define pte_cacheable(pte) 1
+
+extern inline pte_t pte_nocache(pte_t pte) { return pte; }
+extern inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) |= _PAGE_READONLY; return pte; }
+extern inline pte_t pte_rdprotect(pte_t pte) { pte_val(pte) |= _PAGE_NOT_USER; return pte; }
+extern inline pte_t pte_exprotect(pte_t pte) { pte_val(pte) |= _PAGE_NOT_USER; return pte; }
+extern inline pte_t pte_mkclean(pte_t pte) { pte_val(pte) |= _PAGE_CLEAN; return pte; }
+extern inline pte_t pte_mkold(pte_t pte) { pte_val(pte) |= _PAGE_OLD; return pte; }
+
+extern inline pte_t pte_mkwrite(pte_t pte) { pte_val(pte) &= ~_PAGE_READONLY; return pte; }
+extern inline pte_t pte_mkread(pte_t pte) { pte_val(pte) &= ~_PAGE_NOT_USER; return pte; }
+extern inline pte_t pte_mkexec(pte_t pte) { pte_val(pte) &= ~_PAGE_NOT_USER; return pte; }
+extern inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) &= ~_PAGE_CLEAN; return pte; }
+extern inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) &= ~_PAGE_OLD; return pte; }
+
+/*
+ * Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ */
+extern __inline__ pte_t mk_pte(unsigned long page, pgprot_t pgprot)
+{
+ pte_t pte;
+ pte_val(pte) = __virt_to_phys(page) | pgprot_val(pgprot);
+ return pte;
+}
+
+/* This takes a physical page address that is used by the remapping functions */
+extern __inline__ pte_t mk_pte_phys(unsigned long physpage, pgprot_t pgprot)
+{
+ pte_t pte;
+ pte_val(pte) = physpage + pgprot_val(pgprot);
+ return pte;
+}
+
+extern __inline__ pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+ pte_val(pte) = (pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot);
+ return pte;
+}
+
+#define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
+
+extern __inline__ unsigned long pte_page(pte_t pte)
+{
+ return __phys_to_virt(pte_val(pte) & PAGE_MASK);
+}
+
+extern __inline__ pmd_t mk_pmd (pte_t *ptep)
+{
+ pmd_t pmd;
+ pmd_val(pmd) = __virt_to_phys((unsigned long)ptep) | _PAGE_TABLE;
+ return pmd;
+}
+
+#define set_pmd(pmdp,pmd) ((*(pmdp)) = (pmd))
+
+extern __inline__ unsigned long pmd_page(pmd_t pmd)
+{
+ return __phys_to_virt(pmd_val(pmd) & ~_PAGE_TABLE);
+}
+
+/* to find an entry in a kernel page-table-directory */
+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
+/* to find an entry in a page-table-directory */
+extern __inline__ pgd_t * pgd_offset(struct mm_struct * mm, unsigned long address)
+{
+ return mm->pgd + (address >> PGDIR_SHIFT);
+}
+
+/* Find an entry in the second-level page table.. */
+#define pmd_offset(dir, address) ((pmd_t *)(dir))
+
+/* Find an entry in the third-level page table.. */
+extern __inline__ pte_t * pte_offset(pmd_t *dir, unsigned long address)
+{
+ return (pte_t *)pmd_page(*dir) + ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1));
+}
+
+/*
+ * Allocate and free page tables. The xxx_kernel() versions are
+ * used to allocate a kernel page table - this turns on ASN bits
+ * if any.
+ */
+#define pte_free_kernel(pte) pte_free((pte))
+#define pte_alloc_kernel(pmd,address) pte_alloc((pmd),(address))
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+#define pmd_free_kernel(pmdp)
+#define pmd_alloc_kernel(pgd,address) ((pmd_t *)(pgd))
+
+extern __inline__ void pte_free(pte_t * pte)
+{
+ kfree (pte);
+}
+
+extern const char bad_pmd_string[];
+
+extern __inline__ pte_t *pte_alloc(pmd_t * pmd, unsigned long address)
+{
+ address = (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
+
+ if (pmd_none (*pmd)) {
+ pte_t *page = (pte_t *) kmalloc (PTRS_PER_PTE * BYTES_PER_PTR, GFP_KERNEL);
+ if (pmd_none (*pmd)) {
+ if (page) {
+ memzero (page, PTRS_PER_PTE * BYTES_PER_PTR);
+ set_pmd(pmd, mk_pmd(page));
+ return page + address;
+ }
+ set_pmd (pmd, mk_pmd (BAD_PAGETABLE));
+ return NULL;
+ }
+ kfree (page);
+ }
+ if (pmd_bad (*pmd)) {
+ printk(bad_pmd_string, pmd_val(*pmd));
+ set_pmd (pmd, mk_pmd (BAD_PAGETABLE));
+ return NULL;
+ }
+ return (pte_t *) pmd_page(*pmd) + address;
+}
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+#define pmd_free(pmd)
+#define pmd_alloc(pgd,address) ((pmd_t *)(pgd))
+
+/*
+ * Free a page directory. Takes the virtual address.
+ */
+extern __inline__ void pgd_free(pgd_t * pgd)
+{
+ kfree ((void *)pgd);
+}
+
+/*
+ * Allocate a new page directory. Return the virtual address of it.
+ */
+extern __inline__ pgd_t * pgd_alloc(void)
+{
+ pgd_t *pgd;
+
+ pgd = (pgd_t *) kmalloc(PTRS_PER_PGD * BYTES_PER_PTR, GFP_KERNEL);
+ if (pgd)
+ memzero (pgd, PTRS_PER_PGD * BYTES_PER_PTR);
+ return pgd;
+}
+
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+#define update_mmu_cache(vma,address,pte)
+
+#define SWP_TYPE(entry) (((entry) >> 1) & 0x7f)
+#define SWP_OFFSET(entry) ((entry) >> 8)
+#define SWP_ENTRY(type,offset) (((type) << 1) | ((offset) << 8))
+
+#endif /* __ASM_PROC_PAGE_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/processor.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 27-06-1996 RMK Created
+ * 10-10-1996 RMK Brought up to date with SA110
+ * 26-09-1996 RMK Added 'EXTRA_THREAD_STRUCT*'
+ * 28-09-1996 RMK Moved start_thread into the processor dependencies
+ * 11-01-1998 RMK Added new uaccess_t
+ */
+#ifndef __ASM_PROC_PROCESSOR_H
+#define __ASM_PROC_PROCESSOR_H
+
+#ifdef __KERNEL__
+
+#include <asm/assembler.h>
+#include <linux/string.h>
+
+#define KERNEL_STACK_SIZE 4096
+
+/*
+ * on arm2,3 wp does not work
+ */
+#define wp_works_ok 0
+#define wp_works_ok__is_a_macro /* for versions in ksyms.c */
+
+struct context_save_struct {
+ unsigned long r4;
+ unsigned long r5;
+ unsigned long r6;
+ unsigned long r7;
+ unsigned long r8;
+ unsigned long r9;
+ unsigned long fp;
+ unsigned long pc;
+};
+
+typedef struct {
+ void (*put_byte)(void); /* Special calling convention */
+ void (*get_byte)(void); /* Special calling convention */
+ void (*put_half)(void); /* Special calling convention */
+ void (*get_half)(void); /* Special calling convention */
+ void (*put_word)(void); /* Special calling convention */
+ void (*get_word)(void); /* Special calling convention */
+ unsigned long (*copy_from_user)(void *to, const void *from, unsigned long sz);
+ unsigned long (*copy_to_user)(void *to, const void *from, unsigned long sz);
+ unsigned long (*clear_user)(void *addr, unsigned long sz);
+ unsigned long (*strncpy_from_user)(char *to, const char *from, unsigned long sz);
+ unsigned long (*strlen_user)(const char *s);
+} uaccess_t;
+
+extern uaccess_t uaccess_user, uaccess_kernel;
+
+#define EXTRA_THREAD_STRUCT \
+ uaccess_t *uaccess; /* User access functions*/ \
+ struct context_save_struct *save; \
+ unsigned long memmap; \
+ unsigned long memcmap[256];
+
+#define EXTRA_THREAD_STRUCT_INIT \
+ &uaccess_kernel, \
+ 0, \
+ (unsigned long) swapper_pg_dir, \
+ { 0, }
+
+DECLARE_THREAD_STRUCT;
+
+/*
+ * Return saved PC of a blocked thread.
+ */
+extern __inline__ unsigned long thread_saved_pc (struct thread_struct *t)
+{
+ if (t->save)
+ return t->save->pc & ~PCMASK;
+ else
+ return 0;
+}
+
+extern __inline__ unsigned long get_css_fp (struct thread_struct *t)
+{
+ if (t->save)
+ return t->save->fp;
+ else
+ return 0;
+}
+
+asmlinkage void ret_from_sys_call(void) __asm__("ret_from_sys_call");
+
+extern __inline__ void copy_thread_css (struct context_save_struct *save)
+{
+ save->r4 =
+ save->r5 =
+ save->r6 =
+ save->r7 =
+ save->r8 =
+ save->r9 =
+ save->fp = 0;
+ save->pc = ((unsigned long)ret_from_sys_call) | SVC26_MODE;
+}
+
+#define start_thread(regs,pc,sp) \
+({ \
+ unsigned long *stack = (unsigned long *)sp; \
+ set_fs(USER_DS); \
+ memzero(regs->uregs, sizeof (regs->uregs)); \
+ regs->ARM_pc = pc; /* pc */ \
+ regs->ARM_sp = sp; /* sp */ \
+ regs->ARM_r2 = stack[2]; /* r2 (envp) */ \
+ regs->ARM_r1 = stack[1]; /* r1 (argv) */ \
+ regs->ARM_r0 = stack[0]; /* r0 (argc) */ \
+ flush_tlb_mm(current->mm); \
+})
+
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/ptrace.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_PROC_PTRACE_H
+#define __ASM_PROC_PTRACE_H
+
+/* this struct defines the way the registers are stored on the
+ stack during a system call. */
+
+struct pt_regs {
+ long uregs[17];
+};
+
+#define ARM_pc uregs[15]
+#define ARM_lr uregs[14]
+#define ARM_sp uregs[13]
+#define ARM_ip uregs[12]
+#define ARM_fp uregs[11]
+#define ARM_r10 uregs[10]
+#define ARM_r9 uregs[9]
+#define ARM_r8 uregs[8]
+#define ARM_r7 uregs[7]
+#define ARM_r6 uregs[6]
+#define ARM_r5 uregs[5]
+#define ARM_r4 uregs[4]
+#define ARM_r3 uregs[3]
+#define ARM_r2 uregs[2]
+#define ARM_r1 uregs[1]
+#define ARM_r0 uregs[0]
+#define ARM_ORIG_r0 uregs[16] /* -1 */
+
+#define USR26_MODE 0x00
+#define FIQ26_MODE 0x01
+#define IRQ26_MODE 0x02
+#define SVC26_MODE 0x03
+#define MODE_MASK 0x03
+#define F_BIT (1 << 26)
+#define I_BIT (1 << 27)
+#define CC_V_BIT (1 << 28)
+#define CC_C_BIT (1 << 29)
+#define CC_Z_BIT (1 << 30)
+#define CC_N_BIT (1 << 31)
+
+#define user_mode(regs) \
+ (((regs)->ARM_pc & MODE_MASK) == USR26_MODE)
+
+#define processor_mode(regs) \
+ ((regs)->ARM_pc & MODE_MASK)
+
+#define interrupts_enabled(regs) \
+ (!((regs)->ARM_pc & I_BIT))
+
+#define fast_interrupts_enabled(regs) \
+ (!((regs)->ARM_pc & F_BIT))
+
+#define condition_codes(regs) \
+ ((regs)->ARM_pc & (CC_V_BIT|CC_C_BIT|CC_Z_BIT|CC_N_BIT))
+
+#define instruction_pointer(regs) ((regs)->ARM_pc & 0x03fffffc)
+#define pc_pointer(v) ((v) & 0x03fffffc)
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/semaphore.h
+ */
+#ifndef __ASM_PROC_SEMAPHORE_H
+#define __ASM_PROC_SEMAPHORE_H
+
+/*
+ * This is ugly, but we want the default case to fall through.
+ * "__down" is the actual routine that waits...
+ */
+extern inline void down(struct semaphore * sem)
+{
+ __asm__ __volatile__ ("
+ @ atomic down operation
+ mov r0, pc
+ orr r1, r0, #0x08000000
+ and r0, r0, #0x0c000003
+ teqp r1, #0
+ ldr r1, [%0]
+ subs r1, r1, #1
+ str r1, [%0]
+ mov r1, pc, lsr #28
+ teqp r0, r1, lsl #28
+ movmi r0, %0
+ blmi " SYMBOL_NAME_STR(__down)
+ : : "r" (sem) : "r0", "r1", "r2", "r3", "ip", "lr", "cc");
+}
+
+/*
+ * This is ugly, but we want the default case to fall through.
+ * "__down_interruptible" is the actual routine that waits...
+ */
+extern inline int down_interruptible (struct semaphore * sem)
+{
+ int result;
+ __asm__ __volatile__ ("
+ @ atomic down operation
+ mov r0, pc
+ orr r1, r0, #0x08000000
+ and r0, r0, #0x0c000003
+ teqp r1, #0
+ ldr r1, [%1]
+ subs r1, r1, #1
+ str r1, [%1]
+ mov r1, pc, lsr #28
+ orrmi r0, r0, #0x80000000 @ set N
+ teqp r0, r1, lsl #28
+ movmi r0, %1
+ movpl r0, #0
+ blmi " SYMBOL_NAME_STR(__down_interruptible) "
+ mov %0, r0"
+ : "=r" (result)
+ : "r" (sem)
+ : "r0", "r1", "r2", "r3", "ip", "lr", "cc");
+ return result;
+}
+
+/*
+ * Note! This is subtle. We jump to wake people up only if
+ * the semaphore was negative (== somebody was waiting on it).
+ * The default case (no contention) will result in NO
+ * jumps for both down() and up().
+ */
+extern inline void up(struct semaphore * sem)
+{
+ __asm__ __volatile__ ("
+ @ atomic up operation
+ mov r0, pc
+ orr r1, r0, #0x08000000
+ and r0, r0, #0x0c000003
+ teqp r1, #0
+ ldr r1, [%0]
+ adds r1, r1, #1
+ str r1, [%0]
+ mov r1, pc, lsr #28
+ orrls r0, r0, #0x80000000 @ set N
+ teqp r0, r1, lsl #28
+ movmi r0, %0
+ blmi " SYMBOL_NAME_STR(__up)
+ : : "r" (sem) : "r0", "r1", "r2", "r3", "ip", "lr", "cc");
+}
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/shmparam.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * definitions for the shared process memory on the ARM3
+ */
+
+#ifndef __ASM_PROC_SHMPARAM_H
+#define __ASM_PROC_SHMPARAM_H
+
+#ifndef SHM_RANGE_START
+#define SHM_RANGE_START 0x00a00000
+#define SHM_RANGE_END 0x00c00000
+#define SHMMAX 0x003fa000
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/system.h
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#ifndef __ASM_PROC_SYSTEM_H
+#define __ASM_PROC_SYSTEM_H
+
+extern const char xchg_str[];
+
+#include <asm/proc-fns.h>
+
+extern __inline__ unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
+{
+ switch (size) {
+ case 1: return processor.u.armv2._xchg_1(x, ptr);
+ case 2: return processor.u.armv2._xchg_2(x, ptr);
+ case 4: return processor.u.armv2._xchg_4(x, ptr);
+ default: arm_invalidptr(xchg_str, size);
+	}
+	return x;
+}
+
+/*
+ * We need to turn the caches off before calling the reset vector - RiscOS
+ * messes up if we don't
+ */
+#define proc_hard_reset() processor._proc_fin()
+
+/*
+ * This processor does not idle
+ */
+#define proc_idle()
+
+/*
+ * A couple of speedups for the ARM
+ */
+
+/*
+ * Save the current interrupt enable state & disable IRQs
+ */
+#define __save_flags_cli(x) \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+" mov %0, pc\n" \
+" orr %1, %0, #0x08000000\n" \
+" and %0, %0, #0x0c000000\n" \
+" teqp %1, #0\n" \
+ : "=r" (x), "=r" (temp) \
+ : \
+ : "memory"); \
+ } while (0)
+
+/*
+ * Enable IRQs
+ */
+#define __sti() \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+" mov %0, pc\n" \
+" bic %0, %0, #0x08000000\n" \
+" teqp %0, #0\n" \
+ : "=r" (temp) \
+ : \
+ : "memory"); \
+ } while(0)
+
+/*
+ * Disable IRQs
+ */
+#define __cli() \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+" mov %0, pc\n" \
+" orr %0, %0, #0x08000000\n" \
+" teqp %0, #0\n" \
+ : "=r" (temp) \
+ : \
+ : "memory"); \
+ } while(0)
+
+/*
+ * save current IRQ & FIQ state
+ */
+#define __save_flags(x) \
+ do { \
+ __asm__ __volatile__( \
+" mov %0, pc\n" \
+" and %0, %0, #0x0c000000\n" \
+ : "=r" (x)); \
+ } while (0)
+
+/*
+ * restore saved IRQ & FIQ state
+ */
+#define __restore_flags(x) \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+" mov %0, pc\n" \
+" bic %0, %0, #0x0c000000\n" \
+" orr %0, %0, %1\n" \
+" teqp %0, #0\n" \
+ : "=r" (temp) \
+ : "r" (x) \
+ : "memory"); \
+ } while (0)
+
+#ifdef __SMP__
+#error SMP not supported
+#else
+
+#define cli() __cli()
+#define sti() __sti()
+#define save_flags(x) __save_flags(x)
+#define restore_flags(x) __restore_flags(x)
+#define save_flags_cli(x) __save_flags_cli(x)
+
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/segment.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+/*
+ * The fs functions are implemented manually on the ARM2 and ARM3
+ * architectures.
+ * Use the *_user functions to access user memory; faults behave as
+ * though the user were accessing the memory.
+ * Use set_fs(get_ds()) and then the *_user functions to allow them to
+ * access kernel memory.
+ */
+
+/*
+ * These are the values used to represent the user `fs' and the kernel `ds'
+ */
+#define KERNEL_DS 0x03000000
+#define USER_DS 0x02000000
+
+#define get_ds() (KERNEL_DS)
+#define get_fs() (current->addr_limit)
+#define segment_eq(a,b) ((a) == (b))
+
+extern uaccess_t uaccess_user, uaccess_kernel;
+
+extern __inline__ void set_fs (mm_segment_t fs)
+{
+ current->addr_limit = fs;
+ current->tss.uaccess = fs == USER_DS ? &uaccess_user : &uaccess_kernel;
+}
+
+#define __range_ok(addr,size) ({ \
+ unsigned long flag, sum; \
+ __asm__ __volatile__("adds %1, %2, %3; cmpls %1, %0; movls %0, #0" \
+ : "=&r" (flag), "=&r" (sum) \
+ : "r" (addr), "Ir" (size), "0" (current->addr_limit) \
+ : "cc"); \
+ flag; })
+
+#define __addr_ok(addr) ({ \
+ unsigned long flag; \
+ __asm__ __volatile__("cmp %2, %0; movlo %0, #0" \
+ : "=&r" (flag) \
+ : "0" (current->addr_limit), "r" (addr) \
+ : "cc"); \
+ (flag == 0); })
+
+#define access_ok(type,addr,size) (__range_ok(addr,size) == 0)
+
+#define __put_user_asm_byte(x,addr,err) \
+ __asm__ __volatile__( \
+ " mov r0, %1\n" \
+ " mov r1, %2\n" \
+ " mov r2, %0\n" \
+ " mov lr, pc\n" \
+ " mov pc, %3\n" \
+ " mov %0, r2\n" \
+ : "=r" (err) \
+ : "r" (x), "r" (addr), "r" (current->tss.uaccess->put_byte), \
+ "0" (err) \
+ : "r0", "r1", "r2", "lr")
+
+#define __put_user_asm_half(x,addr,err) \
+ __asm__ __volatile__( \
+ " mov r0, %1\n" \
+ " mov r1, %2\n" \
+ " mov r2, %0\n" \
+ " mov lr, pc\n" \
+ " mov pc, %3\n" \
+ " mov %0, r2\n" \
+ : "=r" (err) \
+ : "r" (x), "r" (addr), "r" (current->tss.uaccess->put_half), \
+ "0" (err) \
+ : "r0", "r1", "r2", "lr")
+
+#define __put_user_asm_word(x,addr,err) \
+ __asm__ __volatile__( \
+ " mov r0, %1\n" \
+ " mov r1, %2\n" \
+ " mov r2, %0\n" \
+ " mov lr, pc\n" \
+ " mov pc, %3\n" \
+ " mov %0, r2\n" \
+ : "=r" (err) \
+ : "r" (x), "r" (addr), "r" (current->tss.uaccess->put_word), \
+ "0" (err) \
+ : "r0", "r1", "r2", "lr")
+
+#define __get_user_asm_byte(x,addr,err) \
+ __asm__ __volatile__( \
+ " mov r0, %2\n" \
+ " mov r1, %0\n" \
+ " mov lr, pc\n" \
+ " mov pc, %3\n" \
+ " mov %0, r1\n" \
+ " mov %1, r0\n" \
+ : "=r" (err), "=r" (x) \
+ : "r" (addr), "r" (current->tss.uaccess->get_byte), "0" (err) \
+ : "r0", "r1", "r2", "lr")
+
+#define __get_user_asm_half(x,addr,err) \
+ __asm__ __volatile__( \
+ " mov r0, %2\n" \
+ " mov r1, %0\n" \
+ " mov lr, pc\n" \
+ " mov pc, %3\n" \
+ " mov %0, r1\n" \
+ " mov %1, r0\n" \
+ : "=r" (err), "=r" (x) \
+ : "r" (addr), "r" (current->tss.uaccess->get_half), "0" (err) \
+ : "r0", "r1", "r2", "lr")
+
+#define __get_user_asm_word(x,addr,err) \
+ __asm__ __volatile__( \
+ " mov r0, %2\n" \
+ " mov r1, %0\n" \
+ " mov lr, pc\n" \
+ " mov pc, %3\n" \
+ " mov %0, r1\n" \
+ " mov %1, r0\n" \
+ : "=r" (err), "=r" (x) \
+ : "r" (addr), "r" (current->tss.uaccess->get_word), "0" (err) \
+ : "r0", "r1", "r2", "lr")
+
+#define __do_copy_from_user(to,from,n) \
+ (n) = current->tss.uaccess->copy_from_user((to),(from),(n))
+
+#define __do_copy_to_user(to,from,n) \
+ (n) = current->tss.uaccess->copy_to_user((to),(from),(n))
+
+#define __do_clear_user(addr,sz) \
+ (sz) = current->tss.uaccess->clear_user((addr),(sz))
+
+#define __do_strncpy_from_user(dst,src,count,res) \
+ (res) = current->tss.uaccess->strncpy_from_user(dst,src,count)
+
+#define __do_strlen_user(s,res) \
+ (res) = current->tss.uaccess->strlen_user(s)
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armo/uncompress.h
+ *
+ * (c) 1997 Russell King
+ */
+
+#define proc_decomp_setup()
--- /dev/null
+/*
+ * linux/asm-arm/proc-armv/assembler.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * This file contains arm architecture specific defines
+ * for the different processors
+ */
+
+/*
+ * LOADREGS: multiple register load (ldm) with pc in register list
+ * (takes account of ARM6 not using ^)
+ *
+ * RETINSTR: return instruction: adds the 's' in at the end of the
+ * instruction if this is not an ARM6
+ *
+ * SAVEIRQS: save IRQ state (not required on ARM2/ARM3 - done
+ * implicitly)
+ *
+ * RESTOREIRQS: restore IRQ state (not required on ARM2/ARM3 - done
+ * implicitly with ldm ... ^ or movs)
+ *
+ * These next two need thinking about - can't easily use stack... (see system.S)
+ * DISABLEIRQS: disable IRQS in SVC mode
+ *
+ * ENABLEIRQS: enable IRQS in SVC mode
+ *
+ * USERMODE: switch to USER mode
+ *
+ * SVCMODE: switch to SVC mode
+ */
+
+#define N_BIT (1 << 31)
+#define Z_BIT (1 << 30)
+#define C_BIT (1 << 29)
+#define V_BIT (1 << 28)
+
+#define PCMASK 0
+
+#ifdef __ASSEMBLER__
+
+#define I_BIT (1 << 7)
+#define F_BIT (1 << 6)
+
+#define MODE_FIQ26 0x01
+#define MODE_FIQ32 0x11
+
+#define DEFAULT_FIQ MODE_FIQ32
+
+#define LOADREGS(cond, base, reglist...)\
+ ldm##cond base,reglist
+
+#define RETINSTR(instr, regs...)\
+ instr regs
+
+#define MODENOP
+
+#define MODE(savereg,tmpreg,mode) \
+ mrs savereg, cpsr; \
+ bic tmpreg, savereg, $0x1f; \
+ orr tmpreg, tmpreg, $mode; \
+ msr cpsr, tmpreg
+
+#define RESTOREMODE(savereg) \
+ msr cpsr, savereg
+
+#define SAVEIRQS(tmpreg)\
+ mrs tmpreg, cpsr; \
+ str tmpreg, [sp, $-4]!
+
+#define RESTOREIRQS(tmpreg)\
+ ldr tmpreg, [sp], $4; \
+ msr cpsr, tmpreg
+
+#define DISABLEIRQS(tmpreg)\
+ mrs tmpreg , cpsr; \
+ orr tmpreg , tmpreg , $I_BIT; \
+ msr cpsr, tmpreg
+
+#define ENABLEIRQS(tmpreg)\
+ mrs tmpreg , cpsr; \
+ bic tmpreg , tmpreg , $I_BIT; \
+ msr cpsr, tmpreg
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/mm-init.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * This contains the code to set up the memory map on an ARM v3 or v4 machine.
+ * This is both processor & architecture specific, and requires some
+ * more work to get it to fit into our separate processor and architecture
+ * structure.
+ */
+
+/*
+ * On ebsa, we want the memory map set up so:
+ *
+ * PHYS VIRT
+ * 00000000 00000000 Zero page
+ * 000003ff 000003ff Zero page end
+ * 00000000 c0000000 Kernel and all physical memory
+ * 01ffffff c1ffffff End of physical (32MB)
+ * e0000000 e0000000 IO start
+ * ffffffff ffffffff IO end
+ *
+ * On rpc, we want:
+ *
+ * PHYS VIRT
+ * 10000000 00000000 Zero page
+ * 100003ff 000003ff Zero page end
+ * 10000000 c0000000 Kernel and all physical memory
+ * 1fffffff cfffffff End of physical (32MB)
+ * 02000000 d?000000 Screen memory (first image)
+ * 02000000 d8000000 Screen memory (second image)
+ * 00000000 df000000 StrongARM cache invalidation area
+ * 03000000 e0000000 IO start
+ * 03ffffff e0ffffff IO end
+ *
+ * We set it up using the section page table entries.
+ */
+
+#include <asm/arch/mmap.h>
+#include <asm/pgtable.h>
+
+#define V2P(x) virt_to_phys(x)
+#define PTE_SIZE (PTRS_PER_PTE * 4)
+
+#define PMD_SECT (PMD_TYPE_SECT | PMD_DOMAIN(DOMAIN_KERNEL) | PMD_SECT_CACHEABLE)
+
+static inline void setup_swapper_dir (int index, unsigned long entry)
+{
+ pmd_t pmd;
+
+ pmd_val(pmd) = entry;
+ set_pmd (pmd_offset (swapper_pg_dir + index, 0), pmd);
+}
+
+static inline unsigned long setup_pagetables(unsigned long start_mem, unsigned long end_mem)
+{
+ unsigned long address;
+ unsigned int spi;
+ union { unsigned long l; unsigned long *p; } u;
+
+ /* map in zero page */
+ u.l = ((start_mem + (PTE_SIZE-1)) & ~(PTE_SIZE-1));
+ start_mem = u.l + PTE_SIZE;
+ memzero (u.p, PTE_SIZE);
+ *u.p = V2P(PAGE_OFFSET) | PTE_CACHEABLE | PTE_TYPE_SMALL;
+ setup_swapper_dir (0, V2P(u.l) | PMD_TYPE_TABLE | PMD_DOMAIN(DOMAIN_USER));
+
+ for (spi = 1; spi < (PAGE_OFFSET >> PGDIR_SHIFT); spi++)
+ pgd_val(swapper_pg_dir[spi]) = 0;
+
+ /* map in physical ram & kernel */
+ address = PAGE_OFFSET;
+ while (spi < end_mem >> PGDIR_SHIFT) {
+ setup_swapper_dir (spi++,
+ V2P(address) | PMD_SECT |
+ PMD_SECT_BUFFERABLE | PMD_SECT_AP_WRITE);
+ address += PGDIR_SIZE;
+ }
+ while (spi < PTRS_PER_PGD)
+ pgd_val(swapper_pg_dir[spi++]) = 0;
+
+ /*
+ * An area to invalidate the cache
+ */
+ setup_swapper_dir (0xdf0, SAFE_ADDR | PMD_SECT | PMD_SECT_AP_READ);
+
+ /* map in IO */
+ address = IO_START;
+ spi = IO_BASE >> PGDIR_SHIFT;
+ pgd_val(swapper_pg_dir[spi-1]) = 0xc0000000 | PMD_TYPE_SECT |
+ PMD_DOMAIN(DOMAIN_KERNEL) | PMD_SECT_AP_WRITE;
+ while (address < IO_START + IO_SIZE && address) {
+ pgd_val(swapper_pg_dir[spi++]) = address |
+ PMD_TYPE_SECT | PMD_DOMAIN(DOMAIN_IO) |
+ PMD_SECT_AP_WRITE;
+ address += PGDIR_SIZE;
+ }
+
+#ifdef HAVE_MAP_VID_MEM
+ map_screen_mem(0, 0, 0);
+#endif
+
+ flush_cache_all();
+ return start_mem;
+}
+
+static inline void mark_usable_memory_areas(unsigned long *start_mem, unsigned long end_mem)
+{
+ unsigned long smem;
+
+ *start_mem = smem = PAGE_ALIGN(*start_mem);
+
+ while (smem < end_mem) {
+ clear_bit(PG_reserved, &mem_map[MAP_NR(smem)].flags);
+ smem += PAGE_SIZE;
+ }
+}
+
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/page.h
+ *
+ * Copyright (C) 1995, 1996 Russell King
+ */
+
+#ifndef __ASM_PROC_PAGE_H
+#define __ASM_PROC_PAGE_H
+
+/* PAGE_SHIFT determines the page size */
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1UL << PAGE_SHIFT)
+#define PAGE_MASK (~(PAGE_SIZE-1))
+
+#ifdef __KERNEL__
+
+#define STRICT_MM_TYPECHECKS
+
+#ifdef STRICT_MM_TYPECHECKS
+/*
+ * These are used to make use of C type-checking..
+ */
+typedef struct { unsigned long pte; } pte_t;
+typedef struct { unsigned long pmd; } pmd_t;
+typedef struct { unsigned long pgd; } pgd_t;
+typedef struct { unsigned long pgprot; } pgprot_t;
+
+#define pte_val(x) ((x).pte)
+#define pmd_val(x) ((x).pmd)
+#define pgd_val(x) ((x).pgd)
+#define pgprot_val(x) ((x).pgprot)
+
+#define __pte(x) ((pte_t) { (x) } )
+#define __pmd(x) ((pmd_t) { (x) } )
+#define __pgd(x) ((pgd_t) { (x) } )
+#define __pgprot(x) ((pgprot_t) { (x) } )
+
+#else
+/*
+ * .. while these make it easier on the compiler
+ */
+typedef unsigned long pte_t;
+typedef unsigned long pmd_t;
+typedef unsigned long pgd_t;
+typedef unsigned long pgprot_t;
+
+#define pte_val(x) (x)
+#define pmd_val(x) (x)
+#define pgd_val(x) (x)
+#define pgprot_val(x) (x)
+
+#define __pte(x) (x)
+#define __pmd(x) (x)
+#define __pgd(x) (x)
+#define __pgprot(x) (x)
+
+#endif
+
+/* to align the pointer to the (next) page boundary */
+#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
+
+/* This handles the memory map.. */
+#define PAGE_OFFSET 0xc0000000
+#define MAP_NR(addr) (((unsigned long)(addr) - PAGE_OFFSET) >> PAGE_SHIFT)
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_PROC_PAGE_H */
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/param.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_PROC_PARAM_H
+#define __ASM_PROC_PARAM_H
+
+#ifndef HZ
+#define HZ 100
+#endif
+
+#define EXEC_PAGESIZE 4096
+
+#ifndef NGROUPS
+#define NGROUPS 32
+#endif
+
+#ifndef NOGROUP
+#define NOGROUP (-1)
+#endif
+
+#define MAXHOSTNAMELEN 64 /* max length of hostname */
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/pgtable.h
+ *
+ * Copyright (C) 1995, 1996, 1997 Russell King
+ *
+ * 12-01-1997 RMK Altered flushing routines to use function pointers;
+ * it is now possible to combine ARM6, ARM7 and StrongARM versions.
+ */
+#ifndef __ASM_PROC_PGTABLE_H
+#define __ASM_PROC_PGTABLE_H
+
+#include <asm/arch/mmu.h>
+
+#define LIBRARY_TEXT_START 0x0c000000
+
+/*
+ * Cache flushing...
+ */
+#define flush_cache_all() \
+ processor.u.armv3v4._flush_cache_all()
+
+#define flush_cache_mm(_mm) \
+ do { \
+ if ((_mm) == current->mm) \
+ processor.u.armv3v4._flush_cache_all(); \
+ } while (0)
+
+#define flush_cache_range(_mm,_start,_end) \
+ do { \
+ if ((_mm) == current->mm) \
+ processor.u.armv3v4._flush_cache_area \
+ ((_start), (_end), 1); \
+ } while (0)
+
+#define flush_cache_page(_vma,_vmaddr) \
+ do { \
+ if ((_vma)->vm_mm == current->mm) \
+ processor.u.armv3v4._flush_cache_area \
+ ((_vmaddr), (_vmaddr) + PAGE_SIZE, \
+ ((_vma)->vm_flags & VM_EXEC) ? 1 : 0); \
+ } while (0)
+
+#define flush_icache_range(_start,_end) \
+ processor.u.armv3v4._flush_icache_area((_start), (_end))
+
+/*
+ * We don't have a mem map cache...
+ */
+#define update_mm_cache_all() do { } while (0)
+#define update_mm_cache_task(tsk) do { } while (0)
+#define update_mm_cache_mm(mm) do { } while (0)
+#define update_mm_cache_mm_addr(mm,addr,pte) do { } while (0)
+
+/*
+ * This flushes back any buffered write data. We have to clean and flush the entries
+ * in the cache for this page. Is it necessary to invalidate the I-cache?
+ */
+#define flush_page_to_ram(_page) \
+ processor.u.armv3v4._flush_ram_page ((_page) & PAGE_MASK);
+
+/*
+ * Make the page uncacheable (must flush page beforehand).
+ */
+#define uncache_page(_page) \
+ processor.u.armv3v4._flush_ram_page ((_page) & PAGE_MASK);
+
+/*
+ * TLB flushing:
+ *
+ * - flush_tlb() flushes the current mm struct TLBs
+ * - flush_tlb_all() flushes all processes TLBs
+ * - flush_tlb_mm(mm) flushes the specified mm context TLB's
+ * - flush_tlb_page(vma, vmaddr) flushes one page
+ * - flush_tlb_range(mm, start, end) flushes a range of pages
+ *
+ * GCC uses conditional instructions, and expects the assembler code to do so as well.
+ *
+ * We drain the write buffer in here to ensure that the page tables in ram
+ * are really up to date. It is more efficient to do this here...
+ */
+#define flush_tlb() flush_tlb_all()
+
+#define flush_tlb_all() \
+ processor.u.armv3v4._flush_tlb_all()
+
+#define flush_tlb_mm(_mm) \
+ do { \
+ if ((_mm) == current->mm) \
+ processor.u.armv3v4._flush_tlb_all(); \
+ } while (0)
+
+#define flush_tlb_range(_mm,_start,_end) \
+ do { \
+ if ((_mm) == current->mm) \
+ processor.u.armv3v4._flush_tlb_area \
+ ((_start), (_end), 1); \
+ } while (0)
+
+#define flush_tlb_page(_vma,_vmaddr) \
+ do { \
+ if ((_vma)->vm_mm == current->mm) \
+ processor.u.armv3v4._flush_tlb_area \
+ ((_vmaddr), (_vmaddr) + PAGE_SIZE, \
+ ((_vma)->vm_flags & VM_EXEC) ? 1 : 0); \
+ } while (0)
+
+/*
+ * Since the page tables are in cached memory, we need to flush the dirty
+ * data cached entries back before we flush the tlb... This is also useful
+ * to flush out the SWI instruction for signal handlers...
+ */
+#define __flush_entry_to_ram(entry) \
+ processor.u.armv3v4._flush_cache_entry((unsigned long)(entry))
+
+#define __flush_pte_to_ram(entry) \
+ processor.u.armv3v4._flush_cache_pte((unsigned long)(entry))
+
+/* PMD_SHIFT determines the size of the area a second-level page table can map */
+#define PMD_SHIFT 20
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+
+/* PGDIR_SHIFT determines what a third-level page table entry can map */
+#define PGDIR_SHIFT 20
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+/*
+ * entries per page directory level: the sa110 is two-level, so
+ * we don't really have any PMD directory physically.
+ */
+#define PTRS_PER_PTE 256
+#define PTRS_PER_PMD 1
+#define PTRS_PER_PGD 4096
+
+/* Just any arbitrary offset to the start of the vmalloc VM area: the
+ * current 8MB value just means that there will be an 8MB "hole" after the
+ * physical memory until the kernel virtual memory starts. That means that
+ * any out-of-bounds memory accesses will hopefully be caught.
+ * The vmalloc() routines leave a hole of 4kB between each vmalloced
+ * area for the same reason. ;)
+ */
+#define VMALLOC_OFFSET (8*1024*1024)
+#define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
+#define VMALLOC_VMADDR(x) ((unsigned long)(x))
+
+/* PMD types (actually level 1 descriptor) */
+#define PMD_TYPE_MASK 0x0003
+#define PMD_TYPE_FAULT 0x0000
+#define PMD_TYPE_TABLE 0x0001
+#define PMD_TYPE_SECT 0x0002
+#define PMD_UPDATABLE 0x0010
+#define PMD_SECT_CACHEABLE 0x0008
+#define PMD_SECT_BUFFERABLE 0x0004
+#define PMD_SECT_AP_WRITE 0x0400
+#define PMD_SECT_AP_READ 0x0800
+#define PMD_DOMAIN(x) ((x) << 5)
+
+/* PTE types (actually level 2 descriptor) */
+#define PTE_TYPE_MASK 0x0003
+#define PTE_TYPE_FAULT 0x0000
+#define PTE_TYPE_LARGE 0x0001
+#define PTE_TYPE_SMALL 0x0002
+#define PTE_AP_READ 0x0aa0
+#define PTE_AP_WRITE 0x0550
+#define PTE_CACHEABLE 0x0008
+#define PTE_BUFFERABLE 0x0004
+
+/* Domains */
+#define DOMAIN_USER 0
+#define DOMAIN_KERNEL 1
+#define DOMAIN_TABLE 1
+#define DOMAIN_IO 2
+
+#define _PAGE_CHG_MASK (0xfffff00c | PTE_TYPE_MASK)
+
+/*
+ * We define the bits in the page tables as follows:
+ * PTE_BUFFERABLE page is dirty
+ * PTE_AP_WRITE page is writable
+ * PTE_AP_READ page is young (clearing this causes faults on any access)
+ *
+ * Any page that is mapped in is assumed to be readable...
+ */
+#define PAGE_NONE __pgprot(PTE_TYPE_SMALL)
+#define PAGE_SHARED __pgprot(PTE_TYPE_SMALL | PTE_CACHEABLE | PTE_AP_READ | PTE_AP_WRITE)
+#define PAGE_COPY __pgprot(PTE_TYPE_SMALL | PTE_CACHEABLE | PTE_AP_READ)
+#define PAGE_READONLY __pgprot(PTE_TYPE_SMALL | PTE_CACHEABLE | PTE_AP_READ)
+#define PAGE_KERNEL __pgprot(PTE_TYPE_SMALL | PTE_CACHEABLE | PTE_BUFFERABLE | PTE_AP_WRITE)
+
+#define _PAGE_USER_TABLE (PMD_TYPE_TABLE | PMD_DOMAIN(DOMAIN_USER))
+#define _PAGE_KERNEL_TABLE (PMD_TYPE_TABLE | PMD_DOMAIN(DOMAIN_KERNEL))
+
+/*
+ * The ARM can't do page protection for execute, and considers execute the
+ * same as read. Also, write permissions imply read permissions. This is the
+ * closest we can get..
+ */
+#define __P000 PAGE_NONE
+#define __P001 PAGE_READONLY
+#define __P010 PAGE_COPY
+#define __P011 PAGE_COPY
+#define __P100 PAGE_READONLY
+#define __P101 PAGE_READONLY
+#define __P110 PAGE_COPY
+#define __P111 PAGE_COPY
+
+#define __S000 PAGE_NONE
+#define __S001 PAGE_READONLY
+#define __S010 PAGE_SHARED
+#define __S011 PAGE_SHARED
+#define __S100 PAGE_READONLY
+#define __S101 PAGE_READONLY
+#define __S110 PAGE_SHARED
+#define __S111 PAGE_SHARED
+
+#undef TEST_VERIFY_AREA
+
+/*
+ * BAD_PAGETABLE is used when we need a bogus page-table, while
+ * BAD_PAGE is used for a bogus page.
+ *
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern pte_t __bad_page(void);
+extern pte_t * __bad_pagetable(void);
+extern unsigned long *empty_zero_page;
+
+#define BAD_PAGETABLE __bad_pagetable()
+#define BAD_PAGE __bad_page()
+#define ZERO_PAGE ((unsigned long) empty_zero_page)
+
+/* number of bits that fit into a memory pointer */
+#define BYTES_PER_PTR (sizeof(unsigned long))
+#define BITS_PER_PTR (8*BYTES_PER_PTR)
+
+/* to align the pointer to a pointer address */
+#define PTR_MASK (~(sizeof(void*)-1))
+
+/* sizeof(void*)==1<<SIZEOF_PTR_LOG2 */
+#define SIZEOF_PTR_LOG2 2
+
+/* to find an entry in a page-table */
+#define PAGE_PTR(address) \
+((unsigned long)(address)>>(PAGE_SHIFT-SIZEOF_PTR_LOG2)&PTR_MASK&~PAGE_MASK)
+
+/* to set the page-dir */
+#define SET_PAGE_DIR(tsk,pgdir) \
+do { \
+ tsk->tss.memmap = __virt_to_phys(pgdir); \
+ if ((tsk) == current) \
+ __asm__ __volatile__( \
+ "mcr%? p15, 0, %0, c2, c0, 0\n" \
+ : : "r" (tsk->tss.memmap)); \
+} while (0)
+
+extern __inline__ int pte_none(pte_t pte)
+{
+ return !pte_val(pte);
+}
+
+#define pte_clear(ptep) set_pte(ptep, __pte(0))
+
+extern __inline__ int pte_present(pte_t pte)
+{
+ switch (pte_val(pte) & PTE_TYPE_MASK) {
+ case PTE_TYPE_LARGE:
+ case PTE_TYPE_SMALL:
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+extern __inline__ int pmd_none(pmd_t pmd)
+{
+ return !pmd_val(pmd);
+}
+
+#define pmd_clear(pmdp) set_pmd(pmdp, __pmd(0))
+
+extern __inline__ int pmd_bad(pmd_t pmd)
+{
+ switch (pmd_val(pmd) & PMD_TYPE_MASK) {
+ case PMD_TYPE_FAULT:
+ case PMD_TYPE_TABLE:
+ return 0;
+ default:
+ return 1;
+ }
+}
+
+extern __inline__ int pmd_present(pmd_t pmd)
+{
+ switch (pmd_val(pmd) & PMD_TYPE_MASK) {
+ case PMD_TYPE_TABLE:
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+/*
+ * The "pgd_xxx()" functions here are trivial for a folded two-level
+ * setup: the pgd is never bad, and a pmd always exists (as it's folded
+ * into the pgd entry)
+ */
+#define pgd_none(pgd) (0)
+#define pgd_bad(pgd) (0)
+#define pgd_present(pgd) (1)
+#define pgd_clear(pgdp)
+
+/*
+ * The following only work if pte_present() is true.
+ * Undefined behaviour if not..
+ */
+#define pte_read(pte) (1)
+#define pte_exec(pte) (1)
+
+extern __inline__ int pte_write(pte_t pte)
+{
+ return pte_val(pte) & PTE_AP_WRITE;
+}
+
+extern __inline__ int pte_cacheable(pte_t pte)
+{
+ return pte_val(pte) & PTE_CACHEABLE;
+}
+
+extern __inline__ int pte_dirty(pte_t pte)
+{
+ return pte_val(pte) & PTE_BUFFERABLE;
+}
+
+extern __inline__ int pte_young(pte_t pte)
+{
+ return pte_val(pte) & PTE_AP_READ;
+}
+
+extern __inline__ pte_t pte_wrprotect(pte_t pte)
+{
+ pte_val(pte) &= ~PTE_AP_WRITE;
+ return pte;
+}
+
+extern __inline__ pte_t pte_nocache(pte_t pte)
+{
+ pte_val(pte) &= ~PTE_CACHEABLE;
+ return pte;
+}
+
+extern __inline__ pte_t pte_mkclean(pte_t pte)
+{
+ pte_val(pte) &= ~PTE_BUFFERABLE;
+ return pte;
+}
+
+extern __inline__ pte_t pte_mkold(pte_t pte)
+{
+ pte_val(pte) &= ~PTE_AP_READ;
+ return pte;
+}
+
+extern __inline__ pte_t pte_mkwrite(pte_t pte)
+{
+ pte_val(pte) |= PTE_AP_WRITE;
+ return pte;
+}
+
+extern __inline__ pte_t pte_mkdirty(pte_t pte)
+{
+ pte_val(pte) |= PTE_BUFFERABLE;
+ return pte;
+}
+
+extern __inline__ pte_t pte_mkyoung(pte_t pte)
+{
+ pte_val(pte) |= PTE_AP_READ;
+ return pte;
+}
+
+/*
+ * The following are unable to be implemented on this MMU
+ */
+#if 0
+extern __inline__ pte_t pte_rdprotect(pte_t pte)
+{
+ pte_val(pte) &= ~(PTE_CACHEABLE|PTE_AP_READ);
+ return pte;
+}
+
+extern __inline__ pte_t pte_exprotect(pte_t pte)
+{
+ pte_val(pte) &= ~(PTE_CACHEABLE|PTE_AP_READ);
+ return pte;
+}
+
+extern __inline__ pte_t pte_mkread(pte_t pte)
+{
+ pte_val(pte) |= PTE_CACHEABLE;
+ return pte;
+}
+
+extern __inline__ pte_t pte_mkexec(pte_t pte)
+{
+ pte_val(pte) |= PTE_CACHEABLE;
+ return pte;
+}
+#endif
+
+/*
+ * Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ */
+extern __inline__ pte_t mk_pte(unsigned long page, pgprot_t pgprot)
+{
+ pte_t pte;
+ pte_val(pte) = __virt_to_phys(page) | pgprot_val(pgprot);
+ return pte;
+}
+
+/* This takes a physical page address that is used by the remapping functions */
+extern __inline__ pte_t mk_pte_phys(unsigned long physpage, pgprot_t pgprot)
+{
+ pte_t pte;
+ pte_val(pte) = physpage + pgprot_val(pgprot);
+ return pte;
+}
+
+extern __inline__ pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+ pte_val(pte) = (pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot);
+ return pte;
+}
+
+extern __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
+{
+ *pteptr = pteval;
+ __flush_pte_to_ram(pteptr);
+}
+
+extern __inline__ unsigned long pte_page(pte_t pte)
+{
+ return (unsigned long)phys_to_virt(pte_val(pte) & PAGE_MASK);
+}
+
+extern __inline__ pmd_t mk_user_pmd(pte_t *ptep)
+{
+ pmd_t pmd;
+ pmd_val(pmd) = __virt_to_phys((unsigned long)ptep) | _PAGE_USER_TABLE;
+ return pmd;
+}
+
+extern __inline__ pmd_t mk_kernel_pmd(pte_t *ptep)
+{
+ pmd_t pmd;
+ pmd_val(pmd) = __virt_to_phys((unsigned long)ptep) | _PAGE_KERNEL_TABLE;
+ return pmd;
+}
+
+#if 1
+#define set_pmd(pmdp,pmd) processor.u.armv3v4._set_pmd(pmdp,pmd)
+#else
+extern __inline__ void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+ *pmdp = pmd;
+ __flush_pte_to_ram(pmdp);
+}
+#endif
+
+extern __inline__ unsigned long pmd_page(pmd_t pmd)
+{
+ return (unsigned long)phys_to_virt(pmd_val(pmd) & 0xfffffc00);
+}
+
+/* to find an entry in a kernel page-table-directory */
+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
+/* to find an entry in a page-table-directory */
+extern __inline__ pgd_t * pgd_offset(struct mm_struct * mm, unsigned long address)
+{
+ return mm->pgd + (address >> PGDIR_SHIFT);
+}
+
+/* Find an entry in the second-level page table.. */
+#define pmd_offset(dir, address) ((pmd_t *)(dir))
+
+/* Find an entry in the third-level page table.. */
+extern __inline__ pte_t * pte_offset(pmd_t * dir, unsigned long address)
+{
+ return (pte_t *) pmd_page(*dir) + ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1));
+}
+
+extern unsigned long get_small_page(int priority);
+extern void free_small_page(unsigned long page);
+
+/*
+ * Allocate and free page tables. The xxx_kernel() versions are
+ * used to allocate a kernel page table - this turns on ASN bits
+ * if any.
+ */
+extern __inline__ void pte_free_kernel(pte_t * pte)
+{
+ free_small_page((unsigned long) pte);
+}
+
+extern const char bad_pmd_string[];
+
+extern __inline__ pte_t * pte_alloc_kernel(pmd_t *pmd, unsigned long address)
+{
+ address = (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
+ if (pmd_none(*pmd)) {
+ pte_t *page = (pte_t *) get_small_page(GFP_KERNEL);
+ if (pmd_none(*pmd)) {
+ if (page) {
+ memzero (page, PTRS_PER_PTE * BYTES_PER_PTR);
+ set_pmd(pmd, mk_kernel_pmd(page));
+ return page + address;
+ }
+ set_pmd(pmd, mk_kernel_pmd(BAD_PAGETABLE));
+ return NULL;
+ }
+ free_small_page((unsigned long) page);
+ }
+ if (pmd_bad(*pmd)) {
+ printk(bad_pmd_string, pmd_val(*pmd));
+ set_pmd(pmd, mk_kernel_pmd(BAD_PAGETABLE));
+ return NULL;
+ }
+ return (pte_t *) pmd_page(*pmd) + address;
+}
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+#define pmd_free_kernel(pmdp) pmd_val(*(pmdp)) = 0;
+#define pmd_alloc_kernel(pgdp, address) ((pmd_t *)(pgdp))
+
+extern __inline__ void pte_free(pte_t * pte)
+{
+ free_small_page((unsigned long) pte);
+}
+
+extern __inline__ pte_t * pte_alloc(pmd_t * pmd, unsigned long address)
+{
+ address = (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
+
+ if (pmd_none(*pmd)) {
+ pte_t *page = (pte_t *) get_small_page(GFP_KERNEL);
+ if (pmd_none(*pmd)) {
+ if (page) {
+ memzero (page, PTRS_PER_PTE * BYTES_PER_PTR);
+ set_pmd(pmd, mk_user_pmd(page));
+ return page + address;
+ }
+ set_pmd(pmd, mk_user_pmd(BAD_PAGETABLE));
+ return NULL;
+ }
+ free_small_page ((unsigned long) page);
+ }
+ if (pmd_bad(*pmd)) {
+ printk(bad_pmd_string, pmd_val(*pmd));
+ set_pmd(pmd, mk_user_pmd(BAD_PAGETABLE));
+ return NULL;
+ }
+ return (pte_t *) pmd_page(*pmd) + address;
+}
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+#define pmd_free(pmdp) pmd_val(*(pmdp)) = 0;
+#define pmd_alloc(pgdp, address) ((pmd_t *)(pgdp))
+
+/*
+ * Free a page directory. Takes the virtual address.
+ */
+extern __inline__ void pgd_free(pgd_t * pgd)
+{
+ free_pages((unsigned long) pgd, 2);
+}
+
+/*
+ * Allocate a new page directory. Return the virtual address of it.
+ */
+extern __inline__ pgd_t * pgd_alloc(void)
+{
+ unsigned long pgd;
+
+ /*
+ * need to get a 16k page for level 1
+ */
+ pgd = __get_free_pages(GFP_KERNEL,2,0);
+ if (pgd)
+ memzero ((void *)pgd, PTRS_PER_PGD * BYTES_PER_PTR);
+ return (pgd_t *)pgd;
+}
+
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+/*
+ * The sa110 doesn't have any external MMU info: the kernel page
+ * tables contain all the necessary information.
+ */
+extern __inline__ void update_mmu_cache(struct vm_area_struct * vma,
+ unsigned long address, pte_t pte)
+{
+}
+
+#define SWP_TYPE(entry) (((entry) >> 2) & 0x7f)
+#define SWP_OFFSET(entry) ((entry) >> 9)
+#define SWP_ENTRY(type,offset) (((type) << 2) | ((offset) << 9))
+
+#endif /* __ASM_PROC_PAGE_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/processor.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 20-09-1996 RMK Created
+ * 26-09-1996 RMK Added 'EXTRA_THREAD_STRUCT*'
+ * 28-09-1996 RMK Moved start_thread into the processor dependencies
+ */
+#ifndef __ASM_PROC_PROCESSOR_H
+#define __ASM_PROC_PROCESSOR_H
+
+#ifdef __KERNEL__
+
+#define KERNEL_STACK_SIZE PAGE_SIZE
+
+/*
+ * on arm2,3 wp does not work
+ */
+#define wp_works_ok 0
+#define wp_works_ok__is_a_macro /* for versions in ksyms.c */
+
+struct context_save_struct {
+ unsigned long cpsr;
+ unsigned long r4;
+ unsigned long r5;
+ unsigned long r6;
+ unsigned long r7;
+ unsigned long r8;
+ unsigned long r9;
+ unsigned long fp;
+ unsigned long pc;
+};
+
+#define EXTRA_THREAD_STRUCT \
+ struct context_save_struct *save; \
+ unsigned long memmap;
+
+#define EXTRA_THREAD_STRUCT_INIT \
+ 0, \
+ ((unsigned long) swapper_pg_dir) - PAGE_OFFSET
+
+DECLARE_THREAD_STRUCT;
+
+/*
+ * Return saved PC of a blocked thread.
+ */
+extern __inline__ unsigned long thread_saved_pc (struct thread_struct *t)
+{
+ if (t->save)
+ return t->save->pc;
+ else
+ return 0;
+}
+
+extern __inline__ unsigned long get_css_fp (struct thread_struct *t)
+{
+ if (t->save)
+ return t->save->fp;
+ else
+ return 0;
+}
+
+asmlinkage void ret_from_sys_call(void) __asm__ ("ret_from_sys_call");
+
+extern __inline__ void copy_thread_css (struct context_save_struct *save)
+{
+ save->cpsr = SVC_MODE;
+ save->r4 =
+ save->r5 =
+ save->r6 =
+ save->r7 =
+ save->r8 =
+ save->r9 =
+ save->fp = 0;
+ save->pc = (unsigned long) ret_from_sys_call;
+}
+
+#define start_thread(regs,pc,sp) \
+({ \
+ unsigned long *stack = (unsigned long *)sp; \
+ set_fs(USER_DS); \
+ memzero(regs->uregs, sizeof(regs->uregs)); \
+ regs->ARM_cpsr = sp <= 0x04000000 ? USR26_MODE : USR_MODE; \
+ regs->ARM_pc = pc; /* pc */ \
+ regs->ARM_sp = sp; /* sp */ \
+ regs->ARM_r2 = stack[2]; /* r2 (envp) */ \
+ regs->ARM_r1 = stack[1]; /* r1 (argv) */ \
+ regs->ARM_r0 = stack[0]; /* r0 (argc) */ \
+})
+
+/* Allocation and freeing of basic task resources. */
+/*
+ * NOTE! The task struct and the stack go together
+ */
+#define alloc_task_struct() \
+ ((struct task_struct *) __get_free_pages(GFP_KERNEL,1,0))
+#define free_task_struct(p) free_pages((unsigned long)(p),1)
+
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/ptrace.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_PROC_PTRACE_H
+#define __ASM_PROC_PTRACE_H
+
+/* this struct defines the way the registers are stored on the
+ stack during a system call. */
+
+struct pt_regs {
+ long uregs[18];
+};
+
+#define ARM_cpsr uregs[16]
+#define ARM_pc uregs[15]
+#define ARM_lr uregs[14]
+#define ARM_sp uregs[13]
+#define ARM_ip uregs[12]
+#define ARM_fp uregs[11]
+#define ARM_r10 uregs[10]
+#define ARM_r9 uregs[9]
+#define ARM_r8 uregs[8]
+#define ARM_r7 uregs[7]
+#define ARM_r6 uregs[6]
+#define ARM_r5 uregs[5]
+#define ARM_r4 uregs[4]
+#define ARM_r3 uregs[3]
+#define ARM_r2 uregs[2]
+#define ARM_r1 uregs[1]
+#define ARM_r0 uregs[0]
+#define ARM_ORIG_r0 uregs[17] /* -1 */
+
+#define USR26_MODE 0x00
+#define FIQ26_MODE 0x01
+#define IRQ26_MODE 0x02
+#define SVC26_MODE 0x03
+#define USR_MODE 0x10
+#define FIQ_MODE 0x11
+#define IRQ_MODE 0x12
+#define SVC_MODE 0x13
+#define ABT_MODE 0x17
+#define UND_MODE 0x1b
+#define SYSTEM_MODE 0x1f
+#define MODE_MASK 0x1f
+#define F_BIT 0x40
+#define I_BIT 0x80
+#define CC_V_BIT (1 << 28)
+#define CC_C_BIT (1 << 29)
+#define CC_Z_BIT (1 << 30)
+#define CC_N_BIT (1 << 31)
+
+#define user_mode(regs) \
+ ((((regs)->ARM_cpsr & MODE_MASK) == USR_MODE) || \
+ (((regs)->ARM_cpsr & MODE_MASK) == USR26_MODE))
+
+#define processor_mode(regs) \
+ ((regs)->ARM_cpsr & MODE_MASK)
+
+#define interrupts_enabled(regs) \
+ (!((regs)->ARM_cpsr & I_BIT))
+
+#define fast_interrupts_enabled(regs) \
+ (!((regs)->ARM_cpsr & F_BIT))
+
+#define condition_codes(regs) \
+ ((regs)->ARM_cpsr & (CC_V_BIT|CC_C_BIT|CC_Z_BIT|CC_N_BIT))
+
+#define instruction_pointer(regs) ((regs)->ARM_pc)
+#define pc_pointer(v) (v)
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/asm-arm/semaphore.h
+ */
+#ifndef __ASM_PROC_SEMAPHORE_H
+#define __ASM_PROC_SEMAPHORE_H
+
+/*
+ * This is ugly, but we want the default case to fall through.
+ * "__down" is the actual routine that waits...
+ */
+extern inline void down(struct semaphore * sem)
+{
+ __asm__ __volatile__ ("
+ @ atomic down operation
+ mrs r0, cpsr
+ orr r1, r0, #128 @ disable IRQs
+ bic r0, r0, #0x80000000 @ clear N
+ msr cpsr, r1
+ ldr r1, [%0]
+ subs r1, r1, #1
+ str r1, [%0]
+ orrmi r0, r0, #0x80000000 @ set N
+ msr cpsr, r0
+ movmi r0, %0
+ blmi " SYMBOL_NAME_STR(__down)
+ : : "r" (sem) : "r0", "r1", "r2", "r3", "ip", "lr", "cc");
+}
+
+/*
+ * This is ugly, but we want the default case to fall through.
+ * "__down_interruptible" is the actual routine that waits...
+ */
+extern inline int down_interruptible (struct semaphore * sem)
+{
+ int result;
+ __asm__ __volatile__ ("
+ @ atomic down operation
+ mrs r0, cpsr
+ orr r1, r0, #128 @ disable IRQs
+ bic r0, r0, #0x80000000 @ clear N
+ msr cpsr, r1
+ ldr r1, [%1]
+ subs r1, r1, #1
+ str r1, [%1]
+ orrmi r0, r0, #0x80000000 @ set N
+ msr cpsr, r0
+ movmi r0, %1
+ movpl r0, #0
+ blmi " SYMBOL_NAME_STR(__down_interruptible) "
+ mov %0, r0"
+ : "=r" (result)
+ : "r" (sem)
+ : "r0", "r1", "r2", "r3", "ip", "lr", "cc");
+ return result;
+}
+
+/*
+ * Note! This is subtle. We jump to wake people up only if
+ * the semaphore was negative (== somebody was waiting on it).
+ * The default case (no contention) will result in NO
+ * jumps for both down() and up().
+ */
+extern inline void up(struct semaphore * sem)
+{
+ __asm__ __volatile__ ("
+ @ atomic up operation
+ mrs r0, cpsr
+ orr r1, r0, #128 @ disable IRQs
+ bic r0, r0, #0x80000000 @ clear N
+ msr cpsr, r1
+ ldr r1, [%0]
+ adds r1, r1, #1
+ str r1, [%0]
+ orrls r0, r0, #0x80000000 @ set N
+ msr cpsr, r0
+ movmi r0, %0
+ blmi " SYMBOL_NAME_STR(__up)
+ : : "r" (sem) : "r0", "r1", "r2", "r3", "ip", "lr", "cc");
+}
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/shmparam.h
+ *
+ * Copyright (C) 1996 Russell King
+ *
+ * definitions for the shared process memory on ARM v3 or v4
+ * processors
+ */
+
+#ifndef __ASM_PROC_SHMPARAM_H
+#define __ASM_PROC_SHMPARAM_H
+
+#ifndef SHM_RANGE_START
+#define SHM_RANGE_START 0x50000000
+#define SHM_RANGE_END 0x60000000
+#define SHMMAX 0x01000000
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/system.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_PROC_SYSTEM_H
+#define __ASM_PROC_SYSTEM_H
+
+extern const char xchg_str[];
+
+extern __inline__ unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
+{
+ switch (size) {
+ case 1: __asm__ __volatile__ ("swpb %0, %1, [%2]" : "=r" (x) : "r" (x), "r" (ptr) : "memory");
+ break;
+ case 2: abort ();
+ case 4: __asm__ __volatile__ ("swp %0, %1, [%2]" : "=r" (x) : "r" (x), "r" (ptr) : "memory");
+ break;
+ default: arm_invalidptr(xchg_str, size);
+ }
+ return x;
+}
+
+/*
+ * This processor does not need anything special before reset,
+ * but RPC may do...
+ */
+extern __inline__ void proc_hard_reset(void)
+{
+}
+
+/*
+ * We can wait for an interrupt...
+ */
+#if 0
+#define proc_idle() \
+ do { \
+ __asm__ __volatile__( \
+" mcr p15, 0, %0, c15, c8, 2" \
+ : : "r" (0)); \
+ } while (0)
+#else
+#define proc_idle()
+#endif
+/*
+ * A couple of speedups for the ARM
+ */
+
+/*
+ * Save the current interrupt enable state & disable IRQs
+ */
+#define __save_flags_cli(x) \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+ "mrs %1, cpsr\n" \
+" and %0, %1, #192\n" \
+" orr %1, %1, #128\n" \
+" msr cpsr, %1" \
+ : "=r" (x), "=r" (temp) \
+ : \
+ : "memory"); \
+ } while (0)
+
+/*
+ * Enable IRQs
+ */
+#define __sti() \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+ "mrs %0, cpsr\n" \
+" bic %0, %0, #128\n" \
+" msr cpsr, %0" \
+ : "=r" (temp) \
+ : \
+ : "memory"); \
+ } while(0)
+
+/*
+ * Disable IRQs
+ */
+#define __cli() \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+ "mrs %0, cpsr\n" \
+" orr %0, %0, #128\n" \
+" msr cpsr, %0" \
+ : "=r" (temp) \
+ : \
+ : "memory"); \
+ } while(0)
+
+/*
+ * save current IRQ & FIQ state
+ */
+#define __save_flags(x) \
+ do { \
+ __asm__ __volatile__( \
+ "mrs %0, cpsr\n" \
+" and %0, %0, #192" \
+ : "=r" (x) \
+ : \
+ : "memory"); \
+ } while (0)
+
+/*
+ * restore saved IRQ & FIQ state
+ */
+#define __restore_flags(x) \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+ "mrs %0, cpsr\n" \
+" bic %0, %0, #192\n" \
+" orr %0, %0, %1\n" \
+" msr cpsr, %0" \
+ : "=r" (temp) \
+ : "r" (x) \
+ : "memory"); \
+ } while (0)
+
+#ifdef __SMP__
+#error SMP not supported
+#else
+
+#define cli() __cli()
+#define sti() __sti()
+#define save_flags(x) __save_flags(x)
+#define restore_flags(x) __restore_flags(x)
+#define save_flags_cli(x) __save_flags_cli(x)
+
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/uaccess.h
+ */
+
+/*
+ * The fs functions are implemented on the ARMV3 and V4 architectures
+ * using the domain register.
+ *
+ * DOMAIN_IO - domain 2 includes all IO only
+ * DOMAIN_KERNEL - domain 1 includes all kernel memory only
+ * DOMAIN_USER - domain 0 includes all user memory only
+ */
+
+#define DOMAIN_CLIENT 1
+#define DOMAIN_MANAGER 3
+
+#define DOMAIN_USER_CLIENT ((DOMAIN_CLIENT) << 0)
+#define DOMAIN_USER_MANAGER ((DOMAIN_MANAGER) << 0)
+
+#define DOMAIN_KERNEL_CLIENT ((DOMAIN_CLIENT) << 2)
+#define DOMAIN_KERNEL_MANAGER ((DOMAIN_MANAGER) << 2)
+
+#define DOMAIN_IO_CLIENT ((DOMAIN_CLIENT) << 4)
+#define DOMAIN_IO_MANAGER ((DOMAIN_MANAGER) << 4)
+
+/*
+ * When we want to access kernel memory in the *_user functions,
+ * we change the domain register to KERNEL_DS, thus allowing
+ * unrestricted access
+ */
+#define KERNEL_DOMAIN (DOMAIN_USER_CLIENT | DOMAIN_KERNEL_MANAGER | DOMAIN_IO_CLIENT)
+#define USER_DOMAIN (DOMAIN_USER_CLIENT | DOMAIN_KERNEL_CLIENT | DOMAIN_IO_CLIENT)
+
+/*
+ * Note that this is actually 0x1,0000,0000
+ */
+#define KERNEL_DS 0x00000000
+#define USER_DS 0xc0000000
+
+#define get_ds() (KERNEL_DS)
+#define get_fs() (current->addr_limit)
+
+#define segment_eq(a,b) ((a) == (b))
+
+extern __inline__ void set_fs (mm_segment_t fs)
+{
+ current->addr_limit = fs;
+
+ __asm__ __volatile__("mcr p15, 0, %0, c3, c0" :
+ : "r" (fs ? USER_DOMAIN : KERNEL_DOMAIN));
+}
+
+/*
+ * a + s <= 2^32 -> C = 0 || Z = 0 (LS)
+ * (a + s) <= l -> C = 0 || Z = 0 (LS)
+ */
+#define __range_ok(addr,size) ({ \
+ unsigned long flag, sum; \
+ __asm__ __volatile__("adds %1, %2, %3; cmpls %1, %0; movls %0, #0" \
+ : "=&r" (flag), "=&r" (sum) \
+ : "r" (addr), "Ir" (size), "0" (current->addr_limit) \
+ : "cc"); \
+ flag; })
+
+#define __addr_ok(addr) ({ \
+ unsigned long flag; \
+ __asm__ __volatile__("cmp %2, %0; movlo %0, #0" \
+ : "=&r" (flag) \
+ : "0" (current->addr_limit), "r" (addr) \
+ : "cc"); \
+ (flag == 0); })
+
+#define access_ok(type,addr,size) (__range_ok(addr,size) == 0)
+
+#define __put_user_asm_byte(x,addr,err) \
+ __asm__ __volatile__( \
+ "1: strbt %1,[%2],#0\n" \
+ "2:\n" \
+ " .section .fixup,\"ax\"\n" \
+ " .align 2\n" \
+ "3: mvn %0, %3\n" \
+ " b 2b\n" \
+ " .previous\n" \
+ " .section __ex_table,\"a\"\n" \
+ " .align 3\n" \
+ " .long 1b, 3b\n" \
+ " .previous" \
+ : "=r" (err) \
+ : "r" (x), "r" (addr), "i" (EFAULT), "0" (err))
+
+#define __put_user_asm_half(x,addr,err) \
+({ \
+ unsigned long __temp = (unsigned long)(x); \
+ __asm__ __volatile__( \
+ "1: strbt %1,[%3],#0\n" \
+ "2: strbt %2,[%4],#0\n" \
+ "3:\n" \
+ " .section .fixup,\"ax\"\n" \
+ " .align 2\n" \
+ "4: mvn %0, %5\n" \
+ " b 3b\n" \
+ " .previous\n" \
+ " .section __ex_table,\"a\"\n" \
+ " .align 3\n" \
+ " .long 1b, 4b\n" \
+ " .long 2b, 4b\n" \
+ " .previous" \
+ : "=r" (err) \
+ : "r" (__temp), "r" (__temp >> 8), \
+ "r" (addr), "r" ((int)(addr) + 1), \
+ "i" (EFAULT), "0" (err)); \
+})
+
+#define __put_user_asm_word(x,addr,err) \
+ __asm__ __volatile__( \
+ "1: strt %1,[%2],#0\n" \
+ "2:\n" \
+ " .section .fixup,\"ax\"\n" \
+ " .align 2\n" \
+ "3: mvn %0, %3\n" \
+ " b 2b\n" \
+ " .previous\n" \
+ " .section __ex_table,\"a\"\n" \
+ " .align 3\n" \
+ " .long 1b, 3b\n" \
+ " .previous" \
+ : "=r" (err) \
+ : "r" (x), "r" (addr), "i" (EFAULT), "0" (err))
+
+#define __get_user_asm_byte(x,addr,err) \
+ __asm__ __volatile__( \
+ "1: ldrbt %1,[%2],#0\n" \
+ "2:\n" \
+ " .section .fixup,\"ax\"\n" \
+ " .align 2\n" \
+ "3: mvn %0, %3\n" \
+ " b 2b\n" \
+ " .previous\n" \
+ " .section __ex_table,\"a\"\n" \
+ " .align 3\n" \
+ " .long 1b, 3b\n" \
+ " .previous" \
+ : "=r" (err), "=r" (x) \
+ : "r" (addr), "i" (EFAULT), "0" (err))
+
+#define __get_user_asm_half(x,addr,err) \
+({ \
+ unsigned long __temp; \
+ __asm__ __volatile__( \
+ "1: ldrbt %1,[%3],#0\n" \
+ "2: ldrbt %2,[%4],#0\n" \
+ " orr %1, %1, %2, lsl #8\n" \
+ "3:\n" \
+ " .section .fixup,\"ax\"\n" \
+ " .align 2\n" \
+ "4: mvn %0, %5\n" \
+ " b 3b\n" \
+ " .previous\n" \
+ " .section __ex_table,\"a\"\n" \
+ " .align 3\n" \
+ " .long 1b, 4b\n" \
+ " .long 2b, 4b\n" \
+ " .previous" \
+ : "=r" (err), "=r" (x), "=&r" (__temp) \
+ : "r" (addr), "r" ((int)(addr) + 1), \
+ "i" (EFAULT), "0" (err)); \
+})
+
+
+#define __get_user_asm_word(x,addr,err) \
+ __asm__ __volatile__( \
+ "1: ldrt %1,[%2],#0\n" \
+ "2:\n" \
+ " .section .fixup,\"ax\"\n" \
+ " .align 2\n" \
+ "3: mvn %0, %3\n" \
+ " b 2b\n" \
+ " .previous\n" \
+ " .section __ex_table,\"a\"\n" \
+ " .align 3\n" \
+ " .long 1b, 3b\n" \
+ " .previous" \
+ : "=r" (err), "=r" (x) \
+ : "r" (addr), "i" (EFAULT), "0" (err))
+
+extern unsigned long __arch_copy_from_user(void *to, const void *from, unsigned long n);
+#define __do_copy_from_user(to,from,n) \
+ (n) = __arch_copy_from_user(to,from,n)
+
+extern unsigned long __arch_copy_to_user(void *to, const void *from, unsigned long n);
+#define __do_copy_to_user(to,from,n) \
+ (n) = __arch_copy_to_user(to,from,n)
+
+extern unsigned long __arch_clear_user(void *addr, unsigned long n);
+#define __do_clear_user(addr,sz) \
+ (sz) = __arch_clear_user(addr,sz)
+
+extern unsigned long __arch_strncpy_from_user(char *to, const char *from, unsigned long count);
+#define __do_strncpy_from_user(dst,src,count,res) \
+ (res) = __arch_strncpy_from_user(dst,src,count)
+
+extern unsigned long __arch_strlen_user(const char *s);
+#define __do_strlen_user(s,res) \
+ (res) = __arch_strlen_user(s)
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-armv/uncompress.h
+ *
+ * (c) 1997 Russell King
+ */
+
+static inline void proc_decomp_setup (void)
+{
+ __asm__ __volatile__("
+ mrc p15, 0, r0, c0, c0
+ eor r0, r0, #0x44 << 24
+ eor r0, r0, #0x01 << 16
+ eor r0, r0, #0xA1 << 8
+ movs r0, r0, lsr #4
+ mcreq p15, 0, r0, c7, c5, 0 @ flush I cache
+ mrceq p15, 0, r0, c1, c0
+ orreq r0, r0, #1 << 12
+ mcreq p15, 0, r0, c1, c0 @ enable I cache
+ mov r0, #0
+ mcreq p15, 0, r0, c15, c1, 2 @ enable clock switching
+ " : : : "r0", "cc", "memory");
+}
--- /dev/null
+/*
+ * linux/include/asm-arm/proc-fns.h
+ *
+ * Copyright (C) 1997 Russell King
+ */
+#ifndef __ASM_PROCFNS_H
+#define __ASM_PROCFNS_H
+
+#include <asm/page.h>
+
+#ifdef __KERNEL__
+/*
+ * Don't change this structure
+ */
+extern struct processor {
+ const char *name;
+ /* MISC
+ *
+ * flush caches for task switch
+ */
+ void (*_switch_to)(void *prev, void *next);
+ /*
+ * get data abort address/flags
+ */
+ void (*_data_abort)(unsigned long pc);
+ /*
+ * check for any bugs
+ */
+ void (*_check_bugs)(void);
+ /*
+ * Set up any processor specifics
+ */
+ void (*_proc_init)(void);
+ /*
+ * Disable any processor specifics
+ */
+ void (*_proc_fin)(void);
+ /*
+ * Processor architecture specific
+ */
+ union {
+ struct {
+ /* CACHE
+ *
+ * flush all caches
+ */
+ void (*_flush_cache_all)(void);
+ /*
+ * flush a specific page or pages
+ */
+ void (*_flush_cache_area)(unsigned long address, unsigned long end, int flags);
+ /*
+ * flush cache entry for an address
+ */
+ void (*_flush_cache_entry)(unsigned long address);
+ /*
+ * flush a virtual address used for a page table
+ * note D-cache only!
+ */
+ void (*_flush_cache_pte)(unsigned long address);
+ /*
+ * flush a page to RAM
+ */
+ void (*_flush_ram_page)(unsigned long page);
+ /* TLB
+ *
+ * flush all TLBs
+ */
+ void (*_flush_tlb_all)(void);
+ /*
+ * flush a specific TLB
+ */
+ void (*_flush_tlb_area)(unsigned long address, unsigned long end, int flags);
+ /*
+ * Set a PMD (handling IMP bit 4)
+ */
+ void (*_set_pmd)(pmd_t *pmdp, pmd_t pmd);
+ /*
+ * Special stuff for a reset
+ */
+ unsigned long (*reset)(void);
+ /*
+ * flush an icached page
+ */
+ void (*_flush_icache_area)(unsigned long start, unsigned long end);
+ } armv3v4;
+ struct {
+ /* MEMC
+ *
+ * remap memc tables
+ */
+ void (*_remap_memc)(void *tsk);
+ /*
+ * update task's idea of mmap
+ */
+ void (*_update_map)(void *tsk);
+ /*
+ * update task's idea after abort
+ */
+ void (*_update_mmu_cache)(void *vma, unsigned long addr, pte_t pte);
+ /* XCHG
+ */
+ unsigned long (*_xchg_1)(unsigned long x, volatile void *ptr);
+ unsigned long (*_xchg_2)(unsigned long x, volatile void *ptr);
+ unsigned long (*_xchg_4)(unsigned long x, volatile void *ptr);
+ } armv2;
+ } u;
+} processor;
+#endif
+#endif
+
--- /dev/null
+/*
+ * include/asm-arm/processor.h
+ *
+ * Copyright (C) 1995 Russell King
+ */
+
+#ifndef __ASM_ARM_PROCESSOR_H
+#define __ASM_ARM_PROCESSOR_H
+
+struct fp_hard_struct {
+ unsigned int save[140/4]; /* as yet undefined */
+};
+
+struct fp_soft_struct {
+ unsigned int save[140/4]; /* undefined information */
+};
+
+union fp_state {
+ struct fp_hard_struct hard;
+ struct fp_soft_struct soft;
+};
+
+typedef unsigned long mm_segment_t; /* domain register */
+
+#define DECLARE_THREAD_STRUCT \
+struct thread_struct { \
+ unsigned long address; /* Address of fault */ \
+ unsigned long trap_no; /* Trap number */ \
+ unsigned long error_code; /* Error code of trap */ \
+ union fp_state fpstate; /* FPE save state */ \
+ EXTRA_THREAD_STRUCT \
+}
+
+#include <asm/arch/processor.h>
+#include <asm/proc/processor.h>
+
+#define INIT_TSS { \
+ 0, \
+ 0, \
+ 0, \
+ { { { 0, }, }, }, \
+ EXTRA_THREAD_STRUCT_INIT \
+}
+
+/* Free all resources held by a thread. */
+extern void release_thread(struct task_struct *);
+
+#define init_task (init_task_union.task)
+#define init_stack (init_task_union.stack)
+
+#endif /* __ASM_ARM_PROCESSOR_H */
--- /dev/null
+/*
+ * linux/include/asm-arm/procinfo.h
+ *
+ * Copyright (C) 1996 Russell King
+ */
+
+#ifndef __ASM_PROCINFO_H
+#define __ASM_PROCINFO_H
+
+#include <asm/proc-fns.h>
+
+#define F_MEMC (1<<0)
+#define F_MMU (1<<1)
+#define F_32BIT (1<<2)
+#define F_CACHE (1<<3)
+#define F_IOEB (1<<31)
+
+#ifndef __ASSEMBLER__
+
+struct armversions {
+ unsigned long id;
+ unsigned long mask;
+ unsigned long features;
+ const char *manu;
+ const char *name;
+ const struct processor *proc;
+};
+
+#endif
+
+#endif
+
--- /dev/null
+#ifndef __ASM_ARM_PTRACE_H
+#define __ASM_ARM_PTRACE_H
+
+#include <asm/proc/ptrace.h>
+
+#ifdef __KERNEL__
+extern void show_regs(struct pt_regs *);
+#endif
+
+#endif
+
--- /dev/null
+#ifndef _ARM_RESOURCE_H
+#define _ARM_RESOURCE_H
+
+/*
+ * Resource limits
+ */
+
+#define RLIMIT_CPU 0 /* CPU time in ms */
+#define RLIMIT_FSIZE 1 /* Maximum filesize */
+#define RLIMIT_DATA 2 /* max data size */
+#define RLIMIT_STACK 3 /* max stack size */
+#define RLIMIT_CORE 4 /* max core file size */
+#define RLIMIT_RSS 5 /* max resident set size */
+#define RLIMIT_NPROC 6 /* max number of processes */
+#define RLIMIT_NOFILE 7 /* max number of open files */
+#define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */
+#define RLIMIT_AS 9 /* address space limit */
+
+#define RLIM_NLIMITS 10
+
+#ifdef __KERNEL__
+
+#define INIT_RLIMITS \
+{ \
+ { LONG_MAX, LONG_MAX }, \
+ { LONG_MAX, LONG_MAX }, \
+ { LONG_MAX, LONG_MAX }, \
+ { _STK_LIM, _STK_LIM }, \
+ { 0, LONG_MAX }, \
+ { LONG_MAX, LONG_MAX }, \
+ { MAX_TASKS_PER_USER, MAX_TASKS_PER_USER }, \
+ { NR_OPEN, NR_OPEN }, \
+ { LONG_MAX, LONG_MAX }, \
+ { LONG_MAX, LONG_MAX }, \
+}
+
+#endif /* __KERNEL__ */
+
+#endif
--- /dev/null
+#ifndef _ASMARM_SCATTERLIST_H
+#define _ASMARM_SCATTERLIST_H
+
+struct scatterlist {
+ char * address; /* Location data is to be transferred to */
+ char * alt_address; /* Location of actual if address is a
+ * dma indirect buffer. NULL otherwise */
+ unsigned int length;
+};
+
+#define ISA_DMA_THRESHOLD (0xffffffff)
+
+#endif /* _ASMARM_SCATTERLIST_H */
--- /dev/null
+#ifndef __ASM_ARM_SEGMENT_H
+#define __ASM_ARM_SEGMENT_H
+
+#define __KERNEL_CS 0x0
+#define __KERNEL_DS 0x0
+
+#define __USER_CS 0x1
+#define __USER_DS 0x1
+
+#endif /* __ASM_ARM_SEGMENT_H */
+
--- /dev/null
+/*
+ * linux/include/asm-arm/semaphore.h
+ */
+#ifndef __ASM_ARM_SEMAPHORE_H
+#define __ASM_ARM_SEMAPHORE_H
+
+#include <linux/linkage.h>
+#include <asm/system.h>
+#include <asm/atomic.h>
+
+struct semaphore {
+ atomic_t count;
+ int waking;
+ struct wait_queue * wait;
+};
+
+#define MUTEX ((struct semaphore) { ATOMIC_INIT(1), 0, NULL })
+#define MUTEX_LOCKED ((struct semaphore) { ATOMIC_INIT(0), 0, NULL })
+
+asmlinkage void __down_failed (void /* special register calling convention */);
+asmlinkage int __down_failed_interruptible (void /* special register calling convention */);
+asmlinkage void __up_wakeup (void /* special register calling convention */);
+
+extern void __down(struct semaphore * sem);
+extern void __up(struct semaphore * sem);
+
+#define sema_init(sem, val) atomic_set(&((sem)->count), (val))
+
+/*
+ * These two _must_ execute atomically wrt each other.
+ *
+ * This is trivially done with load_locked/store_cond,
+ * but on the ARM we need an external synchronizer.
+ * Currently this is just the global interrupt lock,
+ * bah. Go for a smaller spinlock some day.
+ *
+ * (On the other hand this shouldn't be in any critical
+ * path, so..)
+ */
+static inline void wake_one_more(struct semaphore * sem)
+{
+ unsigned long flags;
+
+ save_flags(flags);
+ cli();
+ sem->waking++;
+ restore_flags(flags);
+}
+
+static inline int waking_non_zero(struct semaphore *sem)
+{
+ unsigned long flags;
+ int ret = 0;
+
+ save_flags(flags);
+ cli();
+ if (sem->waking > 0) {
+ sem->waking--;
+ ret = 1;
+ }
+ restore_flags(flags);
+ return ret;
+}
+
+#include <asm/proc/semaphore.h>
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/serial.h
+ *
+ * Copyright (c) 1996 Russell King.
+ *
+ * Changelog:
+ * 15-10-1996 RMK Created
+ */
+
+#ifndef __ASM_SERIAL_H
+#define __ASM_SERIAL_H
+
+#include <asm/arch/serial.h>
+
+#endif
--- /dev/null
+/*
+ * include/asm/setup.h
+ *
+ * Structure passed to kernel to tell it about the hardware it's running on
+ *
+ * Copyright (C) 1997,1998 Russell King
+ */
+#ifndef __ASMARM_SETUP_H
+#define __ASMARM_SETUP_H
+
+struct param_struct {
+ union {
+ struct {
+ unsigned long page_size; /* 0 */
+ unsigned long nr_pages; /* 4 */
+ unsigned long ramdisk_size; /* 8 */
+ unsigned long flags; /* 12 */
+#define FLAG_READONLY 1
+#define FLAG_RDLOAD 4
+#define FLAG_RDPROMPT 8
+ unsigned long rootdev; /* 16 */
+ unsigned long video_num_cols; /* 20 */
+ unsigned long video_num_rows; /* 24 */
+ unsigned long video_x; /* 28 */
+ unsigned long video_y; /* 32 */
+ unsigned long memc_control_reg; /* 36 */
+ unsigned char sounddefault; /* 40 */
+ unsigned char adfsdrives; /* 41 */
+ unsigned char bytes_per_char_h; /* 42 */
+ unsigned char bytes_per_char_v; /* 43 */
+ unsigned long pages_in_bank[4]; /* 44 */
+ unsigned long pages_in_vram; /* 60 */
+ unsigned long initrd_start; /* 64 */
+ unsigned long initrd_size; /* 68 */
+ unsigned long rd_start; /* 72 */
+ } s;
+ char unused[256];
+ } u1;
+ union {
+ char paths[8][128];
+ struct {
+ unsigned long magic;
+ char n[1024 - sizeof(unsigned long)];
+ } s;
+ } u2;
+ char commandline[256];
+};
+
+#endif
--- /dev/null
+#ifndef _ASMARM_SHMPARAM_H
+#define _ASMARM_SHMPARAM_H
+
+/*
+ * Include the machine specific shm parameters before the processor
+ * dependent parameters so that the machine parameters can override
+ * the processor parameters
+ */
+#include <asm/arch/shmparam.h>
+#include <asm/proc/shmparam.h>
+
+/*
+ * Format of a swap-entry for shared memory pages currently out in
+ * swap space (see also mm/swap.c).
+ *
+ * SWP_TYPE = SHM_SWP_TYPE
+ * SWP_OFFSET is used as follows:
+ *
+ * bits 0..6 : id of shared memory segment page belongs to (SHM_ID)
+ * bits 7..21: index of page within shared memory segment (SHM_IDX)
+ * (actually fewer bits get used since SHMMAX is so low)
+ */
+
+/*
+ * Keep _SHM_ID_BITS as low as possible since SHMMNI depends on it and
+ * there is a static array of size SHMMNI.
+ */
+#define _SHM_ID_BITS 7
+#define SHM_ID_MASK ((1<<_SHM_ID_BITS)-1)
+
+#define SHM_IDX_SHIFT (_SHM_ID_BITS)
+#define _SHM_IDX_BITS 15
+#define SHM_IDX_MASK ((1<<_SHM_IDX_BITS)-1)
+
+/*
+ * _SHM_ID_BITS + _SHM_IDX_BITS must be <= 24 and
+ * SHMMAX <= (PAGE_SIZE << _SHM_IDX_BITS).
+ */
+
+#define SHMMIN 1 /* min shared seg size (bytes); really PAGE_SIZE */
+#define SHMMNI (1<<_SHM_ID_BITS) /* max num of segs system wide */
+#define SHMALL /* max shm system wide (pages) */ \
+ (1<<(_SHM_IDX_BITS+_SHM_ID_BITS))
+#define SHMLBA PAGE_SIZE /* attach addr a multiple of this */
+#define SHMSEG SHMMNI /* max shared segs per process */
+
+#endif /* _ASMARM_SHMPARAM_H */
--- /dev/null
+#ifndef _ASMARM_SIGCONTEXT_H
+#define _ASMARM_SIGCONTEXT_H
+
+/*
+ * Signal context structure - contains all info to do with the state
+ * before the signal handler was invoked. Note: only add new entries
+ * to the end of the structure.
+ */
+struct sigcontext {
+ unsigned long trap_no;
+ unsigned long error_code;
+ unsigned long oldmask;
+ unsigned long arm_r0;
+ unsigned long arm_r1;
+ unsigned long arm_r2;
+ unsigned long arm_r3;
+ unsigned long arm_r4;
+ unsigned long arm_r5;
+ unsigned long arm_r6;
+ unsigned long arm_r7;
+ unsigned long arm_r8;
+ unsigned long arm_r9;
+ unsigned long arm_r10;
+ unsigned long arm_fp;
+ unsigned long arm_ip;
+ unsigned long arm_sp;
+ unsigned long arm_lr;
+ unsigned long arm_pc;
+ unsigned long arm_cpsr;
+};
+
+
+#endif
--- /dev/null
+#ifndef _ASMARM_SIGINFO_H
+#define _ASMARM_SIGINFO_H
+
+#include <linux/types.h>
+
+/* XXX: This structure was copied from the Alpha; is there an iBCS version? */
+
+typedef union sigval {
+ int sival_int;
+ void *sival_ptr;
+} sigval_t;
+
+#define SI_MAX_SIZE 128
+#define SI_PAD_SIZE ((SI_MAX_SIZE/sizeof(int)) - 3)
+
+typedef struct siginfo {
+ int si_signo;
+ int si_errno;
+ int si_code;
+
+ union {
+ int _pad[SI_PAD_SIZE];
+
+ /* kill() */
+ struct {
+ pid_t _pid; /* sender's pid */
+ uid_t _uid; /* sender's uid */
+ } _kill;
+
+ /* POSIX.1b timers */
+ struct {
+ unsigned int _timer1;
+ unsigned int _timer2;
+ } _timer;
+
+ /* POSIX.1b signals */
+ struct {
+ pid_t _pid; /* sender's pid */
+ uid_t _uid; /* sender's uid */
+ sigval_t _sigval;
+ } _rt;
+
+ /* SIGCHLD */
+ struct {
+ pid_t _pid; /* which child */
+ int _status; /* exit code */
+ clock_t _utime;
+ clock_t _stime;
+ } _sigchld;
+
+ /* SIGILL, SIGFPE, SIGSEGV, SIGBUS */
+ struct {
+ void *_addr; /* faulting insn/memory ref. */
+ } _sigfault;
+
+ /* SIGPOLL */
+ struct {
+ int _band; /* POLL_IN, POLL_OUT, POLL_MSG */
+ int _fd;
+ } _sigpoll;
+ } _sifields;
+} siginfo_t;
+
+/*
+ * How these fields are to be accessed.
+ */
+#define si_pid _sifields._kill._pid
+#define si_uid _sifields._kill._uid
+#define si_status _sifields._sigchld._status
+#define si_utime _sifields._sigchld._utime
+#define si_stime _sifields._sigchld._stime
+#define si_value _sifields._rt._sigval
+#define si_int _sifields._rt._sigval.sival_int
+#define si_ptr _sifields._rt._sigval.sival_ptr
+#define si_addr _sifields._sigfault._addr
+#define si_band _sifields._sigpoll._band
+#define si_fd _sifields._sigpoll._fd
+
+/*
+ * si_code values
+ * Digital reserves positive values for kernel-generated signals.
+ */
+#define SI_USER 0 /* sent by kill, sigsend, raise */
+#define SI_KERNEL 0x80 /* sent by the kernel from somewhere */
+#define SI_QUEUE -1 /* sent by sigqueue */
+#define SI_TIMER -2 /* sent by timer expiration */
+#define SI_MESGQ -3 /* sent by real time mesq state change */
+#define SI_ASYNCIO -4 /* sent by AIO completion */
+
+#define SI_FROMUSER(siptr) ((siptr)->si_code <= 0)
+#define SI_FROMKERNEL(siptr) ((siptr)->si_code > 0)
+
+/*
+ * SIGILL si_codes
+ */
+#define ILL_ILLOPC 1 /* illegal opcode */
+#define ILL_ILLOPN 2 /* illegal operand */
+#define ILL_ILLADR 3 /* illegal addressing mode */
+#define ILL_ILLTRP 4 /* illegal trap */
+#define ILL_PRVOPC 5 /* privileged opcode */
+#define ILL_PRVREG 6 /* privileged register */
+#define ILL_COPROC 7 /* coprocessor error */
+#define ILL_BADSTK 8 /* internal stack error */
+#define NSIGILL 8
+
+/*
+ * SIGFPE si_codes
+ */
+#define FPE_INTDIV 1 /* integer divide by zero */
+#define FPE_INTOVF 2 /* integer overflow */
+#define FPE_FLTDIV 3 /* floating point divide by zero */
+#define FPE_FLTOVF 4 /* floating point overflow */
+#define FPE_FLTUND 5 /* floating point underflow */
+#define FPE_FLTRES 6 /* floating point inexact result */
+#define FPE_FLTINV 7 /* floating point invalid operation */
+#define FPE_FLTSUB 8 /* subscript out of range */
+#define NSIGFPE 8
+
+/*
+ * SIGSEGV si_codes
+ */
+#define SEGV_MAPERR 1 /* address not mapped to object */
+#define SEGV_ACCERR 2 /* invalid permissions for mapped object */
+#define NSIGSEGV 2
+
+/*
+ * SIGBUS si_codes
+ */
+#define BUS_ADRALN 1 /* invalid address alignment */
+#define BUS_ADRERR 2 /* non-existent physical address */
+#define BUS_OBJERR 3 /* object specific hardware error */
+#define NSIGBUS 3
+
+/*
+ * SIGTRAP si_codes
+ */
+#define TRAP_BRKPT 1 /* process breakpoint */
+#define TRAP_TRACE 2 /* process trace trap */
+#define NSIGTRAP 2
+
+/*
+ * SIGCHLD si_codes
+ */
+#define CLD_EXITED 1 /* child has exited */
+#define CLD_KILLED 2 /* child was killed */
+#define CLD_DUMPED 3 /* child terminated abnormally */
+#define CLD_TRAPPED 4 /* traced child has trapped */
+#define CLD_STOPPED 5 /* child has stopped */
+#define CLD_CONTINUED 6 /* stopped child has continued */
+#define NSIGCHLD 6
+
+/*
+ * SIGPOLL si_codes
+ */
+#define POLL_IN 1 /* data input available */
+#define POLL_OUT 2 /* output buffers available */
+#define POLL_MSG 3 /* input message available */
+#define POLL_ERR 4 /* i/o error */
+#define POLL_PRI 5 /* high priority input available */
+#define POLL_HUP 6 /* device disconnected */
+#define NSIGPOLL 6
+
+/*
+ * sigevent definitions
+ *
+ * It seems likely that SIGEV_THREAD will have to be handled from
+ * userspace, libpthread transmuting it to SIGEV_SIGNAL, which the
+ * thread manager then catches and does the appropriate nonsense.
+ * However, everything is written out here so as to not get lost.
+ */
+#define SIGEV_SIGNAL 0 /* notify via signal */
+#define SIGEV_NONE 1 /* other notification: meaningless */
+#define SIGEV_THREAD 2 /* deliver via thread creation */
+
+#define SIGEV_MAX_SIZE 64
+#define SIGEV_PAD_SIZE ((SIGEV_MAX_SIZE/sizeof(int)) - 3)
+
+typedef struct sigevent {
+ sigval_t sigev_value;
+ int sigev_signo;
+ int sigev_notify;
+ union {
+ int _pad[SIGEV_PAD_SIZE];
+
+ struct {
+ void (*_function)(sigval_t);
+ void *_attribute; /* really pthread_attr_t */
+ } _sigev_thread;
+ } _sigev_un;
+} sigevent_t;
+
+#define sigev_notify_function _sigev_un._sigev_thread._function
+#define sigev_notify_attributes _sigev_un._sigev_thread._attribute
+
+#endif
--- /dev/null
+#ifndef _ASMARM_SIGNAL_H
+#define _ASMARM_SIGNAL_H
+
+#include <linux/types.h>
+
+/* Avoid too many header ordering problems. */
+struct siginfo;
+
+/* Most things should be clean enough to redefine this at will, if care
+ is taken to make libc match. */
+
+#define _NSIG 64
+#define _NSIG_BPW 32
+#define _NSIG_WORDS (_NSIG / _NSIG_BPW)
+
+typedef unsigned long old_sigset_t; /* at least 32 bits */
+
+typedef struct {
+ unsigned long sig[_NSIG_WORDS];
+} sigset_t;
+
+#define SIGHUP 1
+#define SIGINT 2
+#define SIGQUIT 3
+#define SIGILL 4
+#define SIGTRAP 5
+#define SIGABRT 6
+#define SIGIOT 6
+#define SIGBUS 7
+#define SIGFPE 8
+#define SIGKILL 9
+#define SIGUSR1 10
+#define SIGSEGV 11
+#define SIGUSR2 12
+#define SIGPIPE 13
+#define SIGALRM 14
+#define SIGTERM 15
+#define SIGSTKFLT 16
+#define SIGCHLD 17
+#define SIGCONT 18
+#define SIGSTOP 19
+#define SIGTSTP 20
+#define SIGTTIN 21
+#define SIGTTOU 22
+#define SIGURG 23
+#define SIGXCPU 24
+#define SIGXFSZ 25
+#define SIGVTALRM 26
+#define SIGPROF 27
+#define SIGWINCH 28
+#define SIGIO 29
+#define SIGPOLL SIGIO
+/*
+#define SIGLOST 29
+*/
+#define SIGPWR 30
+#define SIGUNUSED 31
+
+/* These should not be considered constants from userland. */
+#define SIGRTMIN 32
+#define SIGRTMAX (_NSIG-1)
+
+/*
+ * SA_FLAGS values:
+ *
+ * SA_ONSTACK is not currently supported, but will allow sigaltstack(2).
+ * SA_INTERRUPT is a no-op, kept for historical reasons; use the SA_RESTART
+ * flag to get restarting signals (which were the default long ago).
+ * SA_NOCLDSTOP turns off SIGCHLD when children stop.
+ * SA_RESETHAND clears the handler when the signal is delivered.
+ * SA_NOCLDWAIT, on SIGCHLD, inhibits zombies.
+ * SA_NODEFER prevents the current signal from being masked in the handler.
+ *
+ * SA_ONESHOT and SA_NOMASK are the historical Linux names for the Single
+ * Unix names RESETHAND and NODEFER respectively.
+ */
+#define SA_NOCLDSTOP 0x00000001
+#define SA_NOCLDWAIT 0x00000002 /* not supported yet */
+#define SA_SIGINFO 0x00000004
+#define SA_ONSTACK 0x08000000
+#define SA_RESTART 0x10000000
+#define SA_NODEFER 0x40000000
+#define SA_RESETHAND 0x80000000
+
+#define SA_NOMASK SA_NODEFER
+#define SA_ONESHOT SA_RESETHAND
+#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
+
+#define SA_RESTORER 0x04000000
+
+#ifdef __KERNEL__
+
+/*
+ * These values of sa_flags are used only by the kernel as part of the
+ * irq handling routines.
+ *
+ * SA_INTERRUPT is also used by the irq handling routines.
+ * SA_SHIRQ is for shared interrupt support on PCI and EISA.
+ */
+#define SA_PROBE SA_ONESHOT
+#define SA_SAMPLE_RANDOM SA_RESTART
+#define SA_SHIRQ 0x04000000
+#endif
+
+#define SIG_BLOCK 0 /* for blocking signals */
+#define SIG_UNBLOCK 1 /* for unblocking signals */
+#define SIG_SETMASK 2 /* for setting the signal mask */
+
+/* Type of a signal handler. */
+typedef void (*__sighandler_t)(int);
+
+#define SIG_DFL ((__sighandler_t)0) /* default signal handling */
+#define SIG_IGN ((__sighandler_t)1) /* ignore signal */
+#define SIG_ERR ((__sighandler_t)-1) /* error return from signal */
+
+struct old_sigaction {
+ __sighandler_t sa_handler;
+ old_sigset_t sa_mask;
+ unsigned long sa_flags;
+ void (*sa_restorer)(void);
+};
+
+struct sigaction {
+ __sighandler_t sa_handler;
+ unsigned long sa_flags;
+ void (*sa_restorer)(void);
+ sigset_t sa_mask; /* mask last for extensibility */
+};
+
+struct k_sigaction {
+ struct sigaction sa;
+};
+
+typedef struct sigaltstack {
+ void *ss_sp;
+ int ss_flags;
+ size_t ss_size;
+} stack_t;
+
+#ifdef __KERNEL__
+#include <asm/sigcontext.h>
+
+#define sigmask(sig) (1UL << ((sig) - 1))
+
+#endif
+
+#endif
--- /dev/null
+#ifndef __ASM_SMP_H
+#define __ASM_SMP_H
+
+#ifdef __SMP__
+#error SMP not supported
+#endif
+#endif
--- /dev/null
+#ifndef __ASM_ARM_SMPLOCK_H
+#define __ASM_ARM_SMPLOCK_H
+
+#define __STR(x) #x
+
+#ifndef __SMP__
+
+#define lock_kernel() do { } while(0)
+#define unlock_kernel() do { } while(0)
+#define release_kernel_lock(task, cpu, depth) ((depth) = 1)
+#define reacquire_kernel_lock(task, cpu, depth) do { } while(0)
+
+#else
+#error SMP not supported
+#endif /* __SMP__ */
+
+#endif /* __ASM_ARM_SMPLOCK_H */
--- /dev/null
+#ifndef _ASMARM_SOCKET_H
+#define _ASMARM_SOCKET_H
+
+#include <asm/sockios.h>
+
+/* For setsockoptions(2) */
+#define SOL_SOCKET 1
+
+#define SO_DEBUG 1
+#define SO_REUSEADDR 2
+#define SO_TYPE 3
+#define SO_ERROR 4
+#define SO_DONTROUTE 5
+#define SO_BROADCAST 6
+#define SO_SNDBUF 7
+#define SO_RCVBUF 8
+#define SO_KEEPALIVE 9
+#define SO_OOBINLINE 10
+#define SO_NO_CHECK 11
+#define SO_PRIORITY 12
+#define SO_LINGER 13
+#define SO_BSDCOMPAT 14
+/* To add: #define SO_REUSEPORT 15 */
+#define SO_PASSCRED 16
+#define SO_PEERCRED 17
+#define SO_RCVLOWAT 18
+#define SO_SNDLOWAT 19
+#define SO_RCVTIMEO 20
+#define SO_SNDTIMEO 21
+
+#define SO_BINDTODEVICE 25
+
+#endif /* _ASMARM_SOCKET_H */
--- /dev/null
+#ifndef __ARCH_ARM_SOCKIOS_H
+#define __ARCH_ARM_SOCKIOS_H
+
+/* Socket-level I/O control calls. */
+#define FIOSETOWN 0x8901
+#define SIOCSPGRP 0x8902
+#define FIOGETOWN 0x8903
+#define SIOCGPGRP 0x8904
+#define SIOCATMARK 0x8905
+#define SIOCGSTAMP 0x8906 /* Get stamp */
+
+#endif
--- /dev/null
+#ifndef __ASM_SOFTIRQ_H
+#define __ASM_SOFTIRQ_H
+
+#include <asm/atomic.h>
+#include <asm/hardirq.h>
+
+#define get_active_bhs() (bh_mask & bh_active)
+#define clear_active_bhs(x) atomic_clear_mask((int)(x),&bh_active)
+
+extern inline void init_bh(int nr, void (*routine)(void))
+{
+ bh_base[nr] = routine;
+ bh_mask_count[nr] = 0;
+ bh_mask |= 1 << nr;
+}
+
+extern inline void remove_bh(int nr)
+{
+ bh_base[nr] = NULL;
+ bh_mask &= ~(1 << nr);
+}
+
+extern inline void mark_bh(int nr)
+{
+ set_bit(nr, &bh_active);
+}
+
+/*
+ * These use a mask count to correctly handle
+ * nested disable/enable calls
+ */
+extern inline void disable_bh(int nr)
+{
+ bh_mask &= ~(1 << nr);
+ bh_mask_count[nr]++;
+}
+
+extern inline void enable_bh(int nr)
+{
+ if (!--bh_mask_count[nr])
+ bh_mask |= 1 << nr;
+}
+
+#ifdef __SMP__
+#error SMP not supported
+#else
+
+extern int __arm_bh_counter;
+
+extern inline void start_bh_atomic(void)
+{
+ __arm_bh_counter++;
+ barrier();
+}
+
+extern inline void end_bh_atomic(void)
+{
+ barrier();
+ __arm_bh_counter--;
+}
+
+/* These are for the irq's testing the lock */
+#define softirq_trylock() (__arm_bh_counter ? 0 : (__arm_bh_counter=1))
+#define softirq_endlock() (__arm_bh_counter = 0)
+
+#endif /* SMP */
+
+#endif /* __ASM_SOFTIRQ_H */
--- /dev/null
+#ifndef __ASM_SPINLOCK_H
+#define __ASM_SPINLOCK_H
+
+#ifndef __SMP__
+
+/*
+ * Your basic spinlocks, allowing only a single CPU anywhere
+ */
+typedef struct { } spinlock_t;
+#define SPIN_LOCK_UNLOCKED { }
+
+#define spin_lock_init(lock) do { } while(0)
+#define spin_lock(lock) do { } while(0)
+#define spin_trylock(lock) (1) /* must be an expression; always succeeds on UP */
+#define spin_unlock_wait(lock) do { } while(0)
+#define spin_unlock(lock) do { } while(0)
+#define spin_lock_irq(lock) cli()
+#define spin_unlock_irq(lock) sti()
+
+#define spin_lock_irqsave(lock, flags) \
+ do { save_flags(flags); cli(); } while (0)
+#define spin_unlock_irqrestore(lock, flags) \
+ restore_flags(flags)
+
+/*
+ * Read-write spinlocks, allowing multiple readers
+ * but only one writer.
+ *
+ * NOTE! it is quite common to have readers in interrupts
+ * but no interrupt writers. For those circumstances we
+ * can "mix" irq-safe locks - any writer needs to get a
+ * irq-safe write-lock, but readers can get non-irqsafe
+ * read-locks.
+ */
+typedef struct { } rwlock_t;
+#define RW_LOCK_UNLOCKED { }
+
+#define read_lock(lock) do { } while(0)
+#define read_unlock(lock) do { } while(0)
+#define write_lock(lock) do { } while(0)
+#define write_unlock(lock) do { } while(0)
+#define read_lock_irq(lock) cli()
+#define read_unlock_irq(lock) sti()
+#define write_lock_irq(lock) cli()
+#define write_unlock_irq(lock) sti()
+
+#define read_lock_irqsave(lock, flags) \
+ do { save_flags(flags); cli(); } while (0)
+#define read_unlock_irqrestore(lock, flags) \
+ restore_flags(flags)
+#define write_lock_irqsave(lock, flags) \
+ do { save_flags(flags); cli(); } while (0)
+#define write_unlock_irqrestore(lock, flags) \
+ restore_flags(flags)
+
+#else
+#error ARM architecture does not support spin locks
+#endif /* SMP */
+#endif /* __ASM_SPINLOCK_H */
--- /dev/null
+#ifndef _ASMARM_STAT_H
+#define _ASMARM_STAT_H
+
+struct __old_kernel_stat {
+ unsigned short st_dev;
+ unsigned short st_ino;
+ unsigned short st_mode;
+ unsigned short st_nlink;
+ unsigned short st_uid;
+ unsigned short st_gid;
+ unsigned short st_rdev;
+ unsigned long st_size;
+ unsigned long st_atime;
+ unsigned long st_mtime;
+ unsigned long st_ctime;
+};
+
+struct stat {
+ unsigned short st_dev;
+ unsigned short __pad1;
+ unsigned long st_ino;
+ unsigned short st_mode;
+ unsigned short st_nlink;
+ unsigned short st_uid;
+ unsigned short st_gid;
+ unsigned short st_rdev;
+ unsigned short __pad2;
+ unsigned long st_size;
+ unsigned long st_blksize;
+ unsigned long st_blocks;
+ unsigned long st_atime;
+ unsigned long __unused1;
+ unsigned long st_mtime;
+ unsigned long __unused2;
+ unsigned long st_ctime;
+ unsigned long __unused3;
+ unsigned long __unused4;
+ unsigned long __unused5;
+};
+
+#endif
--- /dev/null
+#ifndef _ASMARM_STATFS_H
+#define _ASMARM_STATFS_H
+
+#ifndef __KERNEL_STRICT_NAMES
+
+#include <linux/types.h>
+
+typedef __kernel_fsid_t fsid_t;
+
+#endif
+
+struct statfs {
+ long f_type;
+ long f_bsize;
+ long f_blocks;
+ long f_bfree;
+ long f_bavail;
+ long f_files;
+ long f_ffree;
+ __kernel_fsid_t f_fsid;
+ long f_namelen;
+ long f_spare[6];
+};
+
+#endif
--- /dev/null
+#ifndef __ASM_ARM_STRING_H
+#define __ASM_ARM_STRING_H
+
+/*
+ * inline versions, hmm...
+ */
+
+#define __HAVE_ARCH_STRRCHR
+extern char * strrchr(const char * s, int c);
+
+#define __HAVE_ARCH_STRCHR
+extern char * strchr(const char * s, int c);
+
+#define __HAVE_ARCH_MEMCPY
+#define __HAVE_ARCH_MEMMOVE
+#define __HAVE_ARCH_MEMSET
+
+#define __HAVE_ARCH_MEMZERO
+extern void memzero(void *ptr, int n);
+
+extern void memsetl (unsigned long *, unsigned long, int n);
+
+#endif
+
--- /dev/null
+#ifndef __ASM_ARM_SYSTEM_H
+#define __ASM_ARM_SYSTEM_H
+
+#include <linux/kernel.h>
+#include <asm/proc-fns.h>
+
+extern void arm_malalignedptr(const char *, void *, volatile void *);
+extern void arm_invalidptr(const char *, int);
+
+#define xchg(ptr,x) \
+ ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+
+#define tas(ptr) (xchg((ptr),1))
+
+/*
+ * switch_to(prev, next) should switch from task `prev' to `next'
+ * `prev' will never be the same as `next'.
+ *
+ * `next' and `prev' should be struct task_struct pointers, but that type
+ * isn't always defined at this point
+ */
+#define switch_to(prev,next) processor._switch_to(prev,next)
+
+/*
+ * Include processor dependent parts
+ */
+#include <asm/proc/system.h>
+#include <asm/arch/system.h>
+
+#define mb() __asm__ __volatile__ ("" : : : "memory")
+#define nop() __asm__ __volatile__("mov r0,r0\n\t")
+
+#endif
+
--- /dev/null
+#ifndef __ASM_ARM_TERMBITS_H
+#define __ASM_ARM_TERMBITS_H
+
+#include <linux/posix_types.h>
+
+typedef unsigned char cc_t;
+typedef unsigned int speed_t;
+typedef unsigned int tcflag_t;
+
+#define NCCS 19
+struct termios {
+ tcflag_t c_iflag; /* input mode flags */
+ tcflag_t c_oflag; /* output mode flags */
+ tcflag_t c_cflag; /* control mode flags */
+ tcflag_t c_lflag; /* local mode flags */
+ cc_t c_line; /* line discipline */
+ cc_t c_cc[NCCS]; /* control characters */
+};
+
+/* c_cc characters */
+#define VINTR 0
+#define VQUIT 1
+#define VERASE 2
+#define VKILL 3
+#define VEOF 4
+#define VTIME 5
+#define VMIN 6
+#define VSWTC 7
+#define VSTART 8
+#define VSTOP 9
+#define VSUSP 10
+#define VEOL 11
+#define VREPRINT 12
+#define VDISCARD 13
+#define VWERASE 14
+#define VLNEXT 15
+#define VEOL2 16
+
+/* c_iflag bits */
+#define IGNBRK 0000001
+#define BRKINT 0000002
+#define IGNPAR 0000004
+#define PARMRK 0000010
+#define INPCK 0000020
+#define ISTRIP 0000040
+#define INLCR 0000100
+#define IGNCR 0000200
+#define ICRNL 0000400
+#define IUCLC 0001000
+#define IXON 0002000
+#define IXANY 0004000
+#define IXOFF 0010000
+#define IMAXBEL 0020000
+
+/* c_oflag bits */
+#define OPOST 0000001
+#define OLCUC 0000002
+#define ONLCR 0000004
+#define OCRNL 0000010
+#define ONOCR 0000020
+#define ONLRET 0000040
+#define OFILL 0000100
+#define OFDEL 0000200
+#define NLDLY 0000400
+#define NL0 0000000
+#define NL1 0000400
+#define CRDLY 0003000
+#define CR0 0000000
+#define CR1 0001000
+#define CR2 0002000
+#define CR3 0003000
+#define TABDLY 0014000
+#define TAB0 0000000
+#define TAB1 0004000
+#define TAB2 0010000
+#define TAB3 0014000
+#define XTABS 0014000
+#define BSDLY 0020000
+#define BS0 0000000
+#define BS1 0020000
+#define VTDLY 0040000
+#define VT0 0000000
+#define VT1 0040000
+#define FFDLY 0100000
+#define FF0 0000000
+#define FF1 0100000
+
+/* c_cflag bit meaning */
+#define CBAUD 0010017
+#define B0 0000000 /* hang up */
+#define B50 0000001
+#define B75 0000002
+#define B110 0000003
+#define B134 0000004
+#define B150 0000005
+#define B200 0000006
+#define B300 0000007
+#define B600 0000010
+#define B1200 0000011
+#define B1800 0000012
+#define B2400 0000013
+#define B4800 0000014
+#define B9600 0000015
+#define B19200 0000016
+#define B38400 0000017
+#define EXTA B19200
+#define EXTB B38400
+#define CSIZE 0000060
+#define CS5 0000000
+#define CS6 0000020
+#define CS7 0000040
+#define CS8 0000060
+#define CSTOPB 0000100
+#define CREAD 0000200
+#define PARENB 0000400
+#define PARODD 0001000
+#define HUPCL 0002000
+#define CLOCAL 0004000
+#define CBAUDEX 0010000
+#define B57600 0010001
+#define B115200 0010002
+#define B230400 0010003
+#define B460800 0010004
+#define CIBAUD 002003600000 /* input baud rate (not used) */
+#define CMSPAR 010000000000 /* mark or space (stick) parity */
+#define CRTSCTS 020000000000 /* flow control */
+
+/* c_lflag bits */
+#define ISIG 0000001
+#define ICANON 0000002
+#define XCASE 0000004
+#define ECHO 0000010
+#define ECHOE 0000020
+#define ECHOK 0000040
+#define ECHONL 0000100
+#define NOFLSH 0000200
+#define TOSTOP 0000400
+#define ECHOCTL 0001000
+#define ECHOPRT 0002000
+#define ECHOKE 0004000
+#define FLUSHO 0010000
+#define PENDIN 0040000
+#define IEXTEN 0100000
+
+/* tcflow() and TCXONC use these */
+#define TCOOFF 0
+#define TCOON 1
+#define TCIOFF 2
+#define TCION 3
+
+/* tcflush() and TCFLSH use these */
+#define TCIFLUSH 0
+#define TCOFLUSH 1
+#define TCIOFLUSH 2
+
+/* tcsetattr uses these */
+#define TCSANOW 0
+#define TCSADRAIN 1
+#define TCSAFLUSH 2
+
+#endif /* __ASM_ARM_TERMBITS_H */
--- /dev/null
+#ifndef __ASM_ARM_TERMIOS_H
+#define __ASM_ARM_TERMIOS_H
+
+#include <asm/termbits.h>
+#include <asm/ioctls.h>
+
+struct winsize {
+ unsigned short ws_row;
+ unsigned short ws_col;
+ unsigned short ws_xpixel;
+ unsigned short ws_ypixel;
+};
+
+#define NCC 8
+struct termio {
+ unsigned short c_iflag; /* input mode flags */
+ unsigned short c_oflag; /* output mode flags */
+ unsigned short c_cflag; /* control mode flags */
+ unsigned short c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[NCC]; /* control characters */
+};
+
+#ifdef __KERNEL__
+/* intr=^C quit=^| erase=del kill=^U
+ eof=^D vtime=\0 vmin=\1 sxtc=\0
+ start=^Q stop=^S susp=^Z eol=\0
+ reprint=^R discard=^O werase=^W lnext=^V
+ eol2=\0
+*/
+#define INIT_C_CC "\003\034\177\025\004\0\1\0\021\023\032\0\022\017\027\026\0"
+#endif
+
+/* modem lines */
+#define TIOCM_LE 0x001
+#define TIOCM_DTR 0x002
+#define TIOCM_RTS 0x004
+#define TIOCM_ST 0x008
+#define TIOCM_SR 0x010
+#define TIOCM_CTS 0x020
+#define TIOCM_CAR 0x040
+#define TIOCM_RNG 0x080
+#define TIOCM_DSR 0x100
+#define TIOCM_CD TIOCM_CAR
+#define TIOCM_RI TIOCM_RNG
+
+/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
+
+/* line disciplines */
+#define N_TTY 0
+#define N_SLIP 1
+#define N_MOUSE 2
+#define N_PPP 3
+#define N_STRIP 4
+#define N_AX25 5
+#define N_X25 6 /* X.25 async */
+
+#ifdef __KERNEL__
+
+#include <linux/string.h>
+
+/*
+ * Translate a "termio" structure into a "termios". Ugh.
+ */
+#define SET_LOW_TERMIOS_BITS(termios, termio, x) { \
+ unsigned short __tmp; \
+ get_user(__tmp,&(termio)->x); \
+ *(unsigned short *) &(termios)->x = __tmp; \
+}
+
+#define user_termio_to_kernel_termios(termios, termio) \
+({ \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_iflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_oflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_cflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_lflag); \
+ copy_from_user((termios)->c_cc, (termio)->c_cc, NCC); \
+})
+
+/*
+ * Translate a "termios" structure into a "termio". Ugh.
+ */
+#define kernel_termios_to_user_termio(termio, termios) \
+({ \
+ put_user((termios)->c_iflag, &(termio)->c_iflag); \
+ put_user((termios)->c_oflag, &(termio)->c_oflag); \
+ put_user((termios)->c_cflag, &(termio)->c_cflag); \
+ put_user((termios)->c_lflag, &(termio)->c_lflag); \
+ put_user((termios)->c_line, &(termio)->c_line); \
+ copy_to_user((termio)->c_cc, (termios)->c_cc, NCC); \
+})
+
+#define user_termios_to_kernel_termios(k, u) copy_from_user(k, u, sizeof(struct termios))
+#define kernel_termios_to_user_termios(u, k) copy_to_user(u, k, sizeof(struct termios))
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_ARM_TERMIOS_H */
--- /dev/null
+/*
+ * linux/include/asm-arm/timex.h
+ *
+ * Architecture Specific TIME specifications
+ *
+ * Copyright (C) 1997,1998 Russell King
+ */
+#ifndef _ASMARM_TIMEX_H
+#define _ASMARM_TIMEX_H
+
+#include <asm/arch/timex.h>
+
+#endif
--- /dev/null
+#ifndef __ASM_ARM_TYPES_H
+#define __ASM_ARM_TYPES_H
+
+typedef unsigned short umode_t;
+
+/*
+ * __xx is ok: it doesn't pollute the POSIX namespace. Use these in the
+ * header files exported to user space
+ */
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8;
+
+typedef __signed__ short __s16;
+typedef unsigned short __u16;
+
+typedef __signed__ int __s32;
+typedef unsigned int __u32;
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+typedef __signed__ long long __s64;
+typedef unsigned long long __u64;
+#endif
+
+/*
+ * These aren't exported outside the kernel to avoid name space clashes
+ */
+#ifdef __KERNEL__
+
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+typedef signed long long s64;
+typedef unsigned long long u64;
+
+#endif /* __KERNEL__ */
+
+#endif
+
--- /dev/null
+#ifndef _ASMARM_UACCESS_H
+#define _ASMARM_UACCESS_H
+
+/*
+ * User space memory access functions
+ */
+#include <linux/sched.h>
+
+#define VERIFY_READ 0
+#define VERIFY_WRITE 1
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue. No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path. This means when everything is well,
+ * we don't even have to jump over them. Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+
+struct exception_table_entry
+{
+ unsigned long insn, fixup;
+};
+
+/* Returns the fixup address, or 0 if no matching entry is found. */
+extern unsigned long search_exception_table(unsigned long);
+
+#include <asm/proc/uaccess.h>
+
+extern inline int verify_area(int type, const void * addr, unsigned long size)
+{
+ return access_ok(type,addr,size) ? 0 : -EFAULT;
+}
+
+/*
+ * Single-value transfer routines. They automatically use the right
+ * size if we just have the right pointer type.
+ *
+ * The "__xxx" versions of the user access functions do not verify the
+ * address space - it must have been done previously with a separate
+ * "access_ok()" call.
+ *
+ * The "xxx_ret" versions return the constant specified in the third
+ * argument if something bad happens.
+ */
+#define get_user(x,p) __get_user_check((x),(p),sizeof(*(p)))
+#define __get_user(x,p) __get_user_nocheck((x),(p),sizeof(*(p)))
+#define get_user_ret(x,p,r) ({ if (get_user(x,p)) return r; })
+#define __get_user_ret(x,p,r) ({ if (__get_user(x,p)) return r; })
+
+#define put_user(x,p) __put_user_check((__typeof(*(p)))(x),(p),sizeof(*(p)))
+#define __put_user(x,p) __put_user_nocheck((__typeof(*(p)))(x),(p),sizeof(*(p)))
+#define put_user_ret(x,p,r) ({ if (put_user(x,p)) return r; })
+#define __put_user_ret(x,p,r) ({ if (__put_user(x,p)) return r; })
+
+static __inline__ unsigned long copy_from_user(void *to, const void *from, unsigned long n)
+{
+ if (access_ok(VERIFY_READ, from, n))
+ __do_copy_from_user(to, from, n);
+ return n;
+}
+
+static __inline__ unsigned long __copy_from_user(void *to, const void *from, unsigned long n)
+{
+ __do_copy_from_user(to, from, n);
+ return n;
+}
+
+#define copy_from_user_ret(t,f,n,r) \
+ ({ if (copy_from_user(t,f,n)) return r; })
+
+static __inline__ unsigned long copy_to_user(void *to, const void *from, unsigned long n)
+{
+ if (access_ok(VERIFY_WRITE, to, n))
+ __do_copy_to_user(to, from, n);
+ return n;
+}
+
+static __inline__ unsigned long __copy_to_user(void *to, const void *from, unsigned long n)
+{
+ __do_copy_to_user(to, from, n);
+ return n;
+}
+
+#define copy_to_user_ret(t,f,n,r) \
+ ({ if (copy_to_user(t,f,n)) return r; })
+
+static __inline__ unsigned long clear_user (void *to, unsigned long n)
+{
+ if (access_ok(VERIFY_WRITE, to, n))
+ __do_clear_user(to, n);
+ return n;
+}
+
+static __inline__ unsigned long __clear_user (void *to, unsigned long n)
+{
+ __do_clear_user(to, n);
+ return n;
+}
+
+static __inline__ long strncpy_from_user (char *dst, const char *src, long count)
+{
+ long res = -EFAULT;
+ if (access_ok(VERIFY_READ, src, 1))
+ __do_strncpy_from_user(dst, src, count, res);
+ return res;
+}
+
+static __inline__ long __strncpy_from_user (char *dst, const char *src, long count)
+{
+ long res;
+ __do_strncpy_from_user(dst, src, count, res);
+ return res;
+}
+
+extern __inline__ long strlen_user (const char *s)
+{
+ unsigned long res = 0;
+
+ if (__addr_ok(s))
+ __do_strlen_user (s, res);
+
+ return res;
+}
+
+/*
+ * These are the work horses of the get/put_user functions
+ */
+#define __get_user_check(x,ptr,size) \
+({ \
+ long __gu_err = -EFAULT, __gu_val = 0; \
+ const __typeof__(*(ptr)) *__gu_addr = (ptr); \
+ if (access_ok(VERIFY_READ,__gu_addr,size)) \
+ __get_user_size(__gu_val,__gu_addr,(size),__gu_err); \
+ (x) = (__typeof__(*(ptr)))__gu_val; \
+ __gu_err; \
+})
+
+#define __get_user_nocheck(x,ptr,size) \
+({ \
+ long __gu_err = 0, __gu_val = 0; \
+ __get_user_size(__gu_val,(ptr),(size),__gu_err); \
+ (x) = (__typeof__(*(ptr)))__gu_val; \
+ __gu_err; \
+})
+
+#define __put_user_check(x,ptr,size) \
+({ \
+ long __pu_err = -EFAULT; \
+ __typeof__(*(ptr)) *__pu_addr = (ptr); \
+ if (access_ok(VERIFY_WRITE,__pu_addr,size)) \
+ __put_user_size((x),__pu_addr,(size),__pu_err); \
+ __pu_err; \
+})
+
+#define __put_user_nocheck(x,ptr,size) \
+({ \
+ long __pu_err = 0; \
+ __put_user_size((x),(ptr),(size),__pu_err); \
+ __pu_err; \
+})
+
+extern long __get_user_bad(void);
+
+#define __get_user_size(x,ptr,size,retval) \
+do { \
+ retval = 0; \
+ switch (size) { \
+ case 1: __get_user_asm_byte(x,ptr,retval); break; \
+ case 2: __get_user_asm_half(x,ptr,retval); break; \
+ case 4: __get_user_asm_word(x,ptr,retval); break; \
+ default: (x) = __get_user_bad(); \
+ } \
+} while (0)
+
+extern long __put_user_bad(void);
+
+#define __put_user_size(x,ptr,size,retval) \
+do { \
+ retval = 0; \
+ switch (size) { \
+ case 1: __put_user_asm_byte(x,ptr,retval); break; \
+ case 2: __put_user_asm_half(x,ptr,retval); break; \
+ case 4: __put_user_asm_word(x,ptr,retval); break; \
+ default: __put_user_bad(); \
+ } \
+} while (0)
+
+#endif /* _ASMARM_UACCESS_H */
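The helpers above follow the kernel convention that `copy_from_user`/`copy_to_user` return the number of bytes that could *not* be copied, so zero means complete success. A hypothetical user-space model of that convention (the names and the `range_ok` flag standing in for `access_ok()` are illustrative, not from the header):

```c
#include <assert.h>
#include <string.h>

/* Model of the uaccess return convention: 0 on success, otherwise the
 * count of bytes left uncopied.  range_ok stands in for access_ok(). */
static unsigned long model_copy_user(void *to, const void *from,
                                     unsigned long n, int range_ok)
{
	if (range_ok) {
		memcpy(to, from, n);	/* __do_copy_* analogue */
		n = 0;			/* all bytes transferred */
	}
	return n;			/* bytes left uncopied */
}
```

Callers therefore test the result against zero rather than against -EFAULT.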
--- /dev/null
+#ifndef _ASMARM_UCONTEXT_H
+#define _ASMARM_UCONTEXT_H
+
+struct ucontext {
+ unsigned long uc_flags;
+ struct ucontext *uc_link;
+ stack_t uc_stack;
+ struct sigcontext uc_mcontext;
+ sigset_t uc_sigmask; /* mask last for extensibility */
+};
+
+#endif /* !_ASMARM_UCONTEXT_H */
--- /dev/null
+#ifndef __ARM_UNALIGNED_H
+#define __ARM_UNALIGNED_H
+
+#define get_unaligned(ptr) \
+ ((__typeof__(*(ptr)))__get_unaligned((ptr), sizeof(*(ptr))))
+
+#define put_unaligned(val, ptr) \
+ __put_unaligned((unsigned long)(val), (ptr), sizeof(*(ptr)))
+
+extern void bad_unaligned_access_length (void);
+
+extern inline unsigned long __get_unaligned(const void *ptr, size_t size)
+{
+ unsigned long val;
+ switch (size) {
+ case 1:
+ val = *(const unsigned char *)ptr;
+ break;
+
+ case 2:
+ val = ((const unsigned char *)ptr)[0] | (((const unsigned char *)ptr)[1] << 8);
+ break;
+
+ case 4:
+ val = ((const unsigned char *)ptr)[0] | (((const unsigned char *)ptr)[1] << 8) |
+ (((const unsigned char *)ptr)[2] << 16) | (((const unsigned char *)ptr)[3] << 24);
+ break;
+
+ default:
+ bad_unaligned_access_length ();
+ }
+ return val;
+}
+
+extern inline void __put_unaligned(unsigned long val, void *ptr, size_t size)
+{
+ switch (size) {
+ case 1:
+ *(unsigned char *)ptr = val;
+ break;
+
+ case 2:
+ ((unsigned char *)ptr)[0] = val;
+ ((unsigned char *)ptr)[1] = val >> 8;
+ break;
+
+ case 4:
+ ((unsigned char *)ptr)[0] = val;
+ ((unsigned char *)ptr)[1] = val >> 8;
+ ((unsigned char *)ptr)[2] = val >> 16;
+ ((unsigned char *)ptr)[3] = val >> 24;
+ break;
+
+ default:
+ bad_unaligned_access_length ();
+ }
+}
+
+#endif
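The accessors above avoid misaligned loads and stores by assembling a little-endian value one byte at a time. A self-contained user-space sketch of the same idea (the `get_unaligned_le`/`put_unaligned_le` names are hypothetical, chosen to avoid clashing with the header's macros):

```c
#include <assert.h>
#include <stddef.h>

/* Assemble a little-endian value byte by byte so the CPU never performs
 * a misaligned word access. */
static unsigned long get_unaligned_le(const void *ptr, size_t size)
{
	const unsigned char *p = ptr;
	unsigned long val = 0;
	size_t i;

	for (i = 0; i < size; i++)
		val |= (unsigned long)p[i] << (8 * i);
	return val;
}

/* Scatter a value into memory one byte at a time, lowest byte first. */
static void put_unaligned_le(unsigned long val, void *ptr, size_t size)
{
	unsigned char *p = ptr;
	size_t i;

	for (i = 0; i < size; i++)
		p[i] = (val >> (8 * i)) & 0xff;
}
```

A round trip through an odd address in a byte buffer exercises exactly the case a plain word load would fault (or silently rotate) on ARM.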
--- /dev/null
+#ifndef __ASM_ARM_UNISTD_H
+#define __ASM_ARM_UNISTD_H
+
+#define __NR_SYSCALL_BASE 0x900000
+
+/*
+ * This file contains the system call numbers.
+ */
+
+#define __NR_setup (__NR_SYSCALL_BASE+ 0) /* used only by init, to get system going */
+#define __NR_exit (__NR_SYSCALL_BASE+ 1)
+#define __NR_fork (__NR_SYSCALL_BASE+ 2)
+#define __NR_read (__NR_SYSCALL_BASE+ 3)
+#define __NR_write (__NR_SYSCALL_BASE+ 4)
+#define __NR_open (__NR_SYSCALL_BASE+ 5)
+#define __NR_close (__NR_SYSCALL_BASE+ 6)
+#define __NR_waitpid (__NR_SYSCALL_BASE+ 7)
+#define __NR_creat (__NR_SYSCALL_BASE+ 8)
+#define __NR_link (__NR_SYSCALL_BASE+ 9)
+#define __NR_unlink (__NR_SYSCALL_BASE+ 10)
+#define __NR_execve (__NR_SYSCALL_BASE+ 11)
+#define __NR_chdir (__NR_SYSCALL_BASE+ 12)
+#define __NR_time (__NR_SYSCALL_BASE+ 13)
+#define __NR_mknod (__NR_SYSCALL_BASE+ 14)
+#define __NR_chmod (__NR_SYSCALL_BASE+ 15)
+#define __NR_chown (__NR_SYSCALL_BASE+ 16)
+#define __NR_break (__NR_SYSCALL_BASE+ 17)
+#define __NR_oldstat (__NR_SYSCALL_BASE+ 18)
+#define __NR_lseek (__NR_SYSCALL_BASE+ 19)
+#define __NR_getpid (__NR_SYSCALL_BASE+ 20)
+#define __NR_mount (__NR_SYSCALL_BASE+ 21)
+#define __NR_umount (__NR_SYSCALL_BASE+ 22)
+#define __NR_setuid (__NR_SYSCALL_BASE+ 23)
+#define __NR_getuid (__NR_SYSCALL_BASE+ 24)
+#define __NR_stime (__NR_SYSCALL_BASE+ 25)
+#define __NR_ptrace (__NR_SYSCALL_BASE+ 26)
+#define __NR_alarm (__NR_SYSCALL_BASE+ 27)
+#define __NR_oldfstat (__NR_SYSCALL_BASE+ 28)
+#define __NR_pause (__NR_SYSCALL_BASE+ 29)
+#define __NR_utime (__NR_SYSCALL_BASE+ 30)
+#define __NR_stty (__NR_SYSCALL_BASE+ 31)
+#define __NR_gtty (__NR_SYSCALL_BASE+ 32)
+#define __NR_access (__NR_SYSCALL_BASE+ 33)
+#define __NR_nice (__NR_SYSCALL_BASE+ 34)
+#define __NR_ftime (__NR_SYSCALL_BASE+ 35)
+#define __NR_sync (__NR_SYSCALL_BASE+ 36)
+#define __NR_kill (__NR_SYSCALL_BASE+ 37)
+#define __NR_rename (__NR_SYSCALL_BASE+ 38)
+#define __NR_mkdir (__NR_SYSCALL_BASE+ 39)
+#define __NR_rmdir (__NR_SYSCALL_BASE+ 40)
+#define __NR_dup (__NR_SYSCALL_BASE+ 41)
+#define __NR_pipe (__NR_SYSCALL_BASE+ 42)
+#define __NR_times (__NR_SYSCALL_BASE+ 43)
+#define __NR_prof (__NR_SYSCALL_BASE+ 44)
+#define __NR_brk (__NR_SYSCALL_BASE+ 45)
+#define __NR_setgid (__NR_SYSCALL_BASE+ 46)
+#define __NR_getgid (__NR_SYSCALL_BASE+ 47)
+#define __NR_signal (__NR_SYSCALL_BASE+ 48)
+#define __NR_geteuid (__NR_SYSCALL_BASE+ 49)
+#define __NR_getegid (__NR_SYSCALL_BASE+ 50)
+#define __NR_acct (__NR_SYSCALL_BASE+ 51)
+#define __NR_phys (__NR_SYSCALL_BASE+ 52)
+#define __NR_lock (__NR_SYSCALL_BASE+ 53)
+#define __NR_ioctl (__NR_SYSCALL_BASE+ 54)
+#define __NR_fcntl (__NR_SYSCALL_BASE+ 55)
+#define __NR_mpx (__NR_SYSCALL_BASE+ 56)
+#define __NR_setpgid (__NR_SYSCALL_BASE+ 57)
+#define __NR_ulimit (__NR_SYSCALL_BASE+ 58)
+#define __NR_oldolduname (__NR_SYSCALL_BASE+ 59)
+#define __NR_umask (__NR_SYSCALL_BASE+ 60)
+#define __NR_chroot (__NR_SYSCALL_BASE+ 61)
+#define __NR_ustat (__NR_SYSCALL_BASE+ 62)
+#define __NR_dup2 (__NR_SYSCALL_BASE+ 63)
+#define __NR_getppid (__NR_SYSCALL_BASE+ 64)
+#define __NR_getpgrp (__NR_SYSCALL_BASE+ 65)
+#define __NR_setsid (__NR_SYSCALL_BASE+ 66)
+#define __NR_sigaction (__NR_SYSCALL_BASE+ 67)
+#define __NR_sgetmask (__NR_SYSCALL_BASE+ 68)
+#define __NR_ssetmask (__NR_SYSCALL_BASE+ 69)
+#define __NR_setreuid (__NR_SYSCALL_BASE+ 70)
+#define __NR_setregid (__NR_SYSCALL_BASE+ 71)
+#define __NR_sigsuspend (__NR_SYSCALL_BASE+ 72)
+#define __NR_sigpending (__NR_SYSCALL_BASE+ 73)
+#define __NR_sethostname (__NR_SYSCALL_BASE+ 74)
+#define __NR_setrlimit (__NR_SYSCALL_BASE+ 75)
+#define __NR_getrlimit (__NR_SYSCALL_BASE+ 76)
+#define __NR_getrusage (__NR_SYSCALL_BASE+ 77)
+#define __NR_gettimeofday (__NR_SYSCALL_BASE+ 78)
+#define __NR_settimeofday (__NR_SYSCALL_BASE+ 79)
+#define __NR_getgroups (__NR_SYSCALL_BASE+ 80)
+#define __NR_setgroups (__NR_SYSCALL_BASE+ 81)
+#define __NR_select (__NR_SYSCALL_BASE+ 82)
+#define __NR_symlink (__NR_SYSCALL_BASE+ 83)
+#define __NR_oldlstat (__NR_SYSCALL_BASE+ 84)
+#define __NR_readlink (__NR_SYSCALL_BASE+ 85)
+#define __NR_uselib (__NR_SYSCALL_BASE+ 86)
+#define __NR_swapon (__NR_SYSCALL_BASE+ 87)
+#define __NR_reboot (__NR_SYSCALL_BASE+ 88)
+#define __NR_readdir (__NR_SYSCALL_BASE+ 89)
+#define __NR_mmap (__NR_SYSCALL_BASE+ 90)
+#define __NR_munmap (__NR_SYSCALL_BASE+ 91)
+#define __NR_truncate (__NR_SYSCALL_BASE+ 92)
+#define __NR_ftruncate (__NR_SYSCALL_BASE+ 93)
+#define __NR_fchmod (__NR_SYSCALL_BASE+ 94)
+#define __NR_fchown (__NR_SYSCALL_BASE+ 95)
+#define __NR_getpriority (__NR_SYSCALL_BASE+ 96)
+#define __NR_setpriority (__NR_SYSCALL_BASE+ 97)
+#define __NR_profil (__NR_SYSCALL_BASE+ 98)
+#define __NR_statfs (__NR_SYSCALL_BASE+ 99)
+#define __NR_fstatfs (__NR_SYSCALL_BASE+100)
+#define __NR_ioperm (__NR_SYSCALL_BASE+101)
+#define __NR_socketcall (__NR_SYSCALL_BASE+102)
+#define __NR_syslog (__NR_SYSCALL_BASE+103)
+#define __NR_setitimer (__NR_SYSCALL_BASE+104)
+#define __NR_getitimer (__NR_SYSCALL_BASE+105)
+#define __NR_stat (__NR_SYSCALL_BASE+106)
+#define __NR_lstat (__NR_SYSCALL_BASE+107)
+#define __NR_fstat (__NR_SYSCALL_BASE+108)
+#define __NR_olduname (__NR_SYSCALL_BASE+109)
+#define __NR_iopl (__NR_SYSCALL_BASE+110)
+#define __NR_vhangup (__NR_SYSCALL_BASE+111)
+#define __NR_idle (__NR_SYSCALL_BASE+112)
+#define __NR_syscall (__NR_SYSCALL_BASE+113) /* syscall to call a syscall! */
+#define __NR_wait4 (__NR_SYSCALL_BASE+114)
+#define __NR_swapoff (__NR_SYSCALL_BASE+115)
+#define __NR_sysinfo (__NR_SYSCALL_BASE+116)
+#define __NR_ipc (__NR_SYSCALL_BASE+117)
+#define __NR_fsync (__NR_SYSCALL_BASE+118)
+#define __NR_sigreturn (__NR_SYSCALL_BASE+119)
+#define __NR_clone (__NR_SYSCALL_BASE+120)
+#define __NR_setdomainname (__NR_SYSCALL_BASE+121)
+#define __NR_uname (__NR_SYSCALL_BASE+122)
+#define __NR_modify_ldt (__NR_SYSCALL_BASE+123)
+#define __NR_adjtimex (__NR_SYSCALL_BASE+124)
+#define __NR_mprotect (__NR_SYSCALL_BASE+125)
+#define __NR_sigprocmask (__NR_SYSCALL_BASE+126)
+#define __NR_create_module (__NR_SYSCALL_BASE+127)
+#define __NR_init_module (__NR_SYSCALL_BASE+128)
+#define __NR_delete_module (__NR_SYSCALL_BASE+129)
+#define __NR_get_kernel_syms (__NR_SYSCALL_BASE+130)
+#define __NR_quotactl (__NR_SYSCALL_BASE+131)
+#define __NR_getpgid (__NR_SYSCALL_BASE+132)
+#define __NR_fchdir (__NR_SYSCALL_BASE+133)
+#define __NR_bdflush (__NR_SYSCALL_BASE+134)
+#define __NR_sysfs (__NR_SYSCALL_BASE+135)
+#define __NR_personality (__NR_SYSCALL_BASE+136)
+#define __NR_afs_syscall (__NR_SYSCALL_BASE+137) /* Syscall for Andrew File System */
+#define __NR_setfsuid (__NR_SYSCALL_BASE+138)
+#define __NR_setfsgid (__NR_SYSCALL_BASE+139)
+#define __NR__llseek (__NR_SYSCALL_BASE+140)
+#define __NR_getdents (__NR_SYSCALL_BASE+141)
+#define __NR__newselect (__NR_SYSCALL_BASE+142)
+#define __NR_flock (__NR_SYSCALL_BASE+143)
+#define __NR_msync (__NR_SYSCALL_BASE+144)
+#define __NR_readv (__NR_SYSCALL_BASE+145)
+#define __NR_writev (__NR_SYSCALL_BASE+146)
+#define __NR_getsid (__NR_SYSCALL_BASE+147)
+#define __NR_fdatasync (__NR_SYSCALL_BASE+148)
+#define __NR__sysctl (__NR_SYSCALL_BASE+149)
+#define __NR_mlock (__NR_SYSCALL_BASE+150)
+#define __NR_munlock (__NR_SYSCALL_BASE+151)
+#define __NR_mlockall (__NR_SYSCALL_BASE+152)
+#define __NR_munlockall (__NR_SYSCALL_BASE+153)
+#define __NR_sched_setparam (__NR_SYSCALL_BASE+154)
+#define __NR_sched_getparam (__NR_SYSCALL_BASE+155)
+#define __NR_sched_setscheduler (__NR_SYSCALL_BASE+156)
+#define __NR_sched_getscheduler (__NR_SYSCALL_BASE+157)
+#define __NR_sched_yield (__NR_SYSCALL_BASE+158)
+#define __NR_sched_get_priority_max (__NR_SYSCALL_BASE+159)
+#define __NR_sched_get_priority_min (__NR_SYSCALL_BASE+160)
+#define __NR_sched_rr_get_interval (__NR_SYSCALL_BASE+161)
+#define __NR_nanosleep (__NR_SYSCALL_BASE+162)
+#define __NR_mremap (__NR_SYSCALL_BASE+163)
+#define __NR_setresuid (__NR_SYSCALL_BASE+164)
+#define __NR_getresuid (__NR_SYSCALL_BASE+165)
+#define __NR_vm86 (__NR_SYSCALL_BASE+166)
+#define __NR_query_module (__NR_SYSCALL_BASE+167)
+#define __NR_poll (__NR_SYSCALL_BASE+168)
+#define __NR_nfsservctl (__NR_SYSCALL_BASE+169)
+#define __NR_setresgid (__NR_SYSCALL_BASE+170)
+#define __NR_getresgid (__NR_SYSCALL_BASE+171)
+#define __NR_prctl (__NR_SYSCALL_BASE+172)
+#define __NR_rt_sigreturn (__NR_SYSCALL_BASE+173)
+#define __NR_rt_sigaction (__NR_SYSCALL_BASE+174)
+#define __NR_rt_sigprocmask (__NR_SYSCALL_BASE+175)
+#define __NR_rt_sigpending (__NR_SYSCALL_BASE+176)
+#define __NR_rt_sigtimedwait (__NR_SYSCALL_BASE+177)
+#define __NR_rt_sigqueueinfo (__NR_SYSCALL_BASE+178)
+#define __NR_rt_sigsuspend (__NR_SYSCALL_BASE+179)
+#define __NR_pread (__NR_SYSCALL_BASE+180)
+#define __NR_pwrite (__NR_SYSCALL_BASE+181)
+
+#define __sys2(x) #x
+#define __sys1(x) __sys2(x)
+
+#ifndef __syscall
+#define __syscall(name) "swi\t" __sys1(__NR_##name) "\n\t"
+#endif
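The two-level `__sys1`/`__sys2` pair is the standard stringification trick: `__sys1` expands its argument first (so `__NR_exit` becomes its numeric definition) before `__sys2` applies `#` to produce the string embedded in the `swi` instruction. A minimal user-space sketch (macro values copied from the header; the test strings are illustrative):

```c
#include <assert.h>
#include <string.h>

#define __sys2(x) #x
#define __sys1(x) __sys2(x)

#define __NR_SYSCALL_BASE	0x900000
#define __NR_exit		(__NR_SYSCALL_BASE+ 1)

/* __sys2 stringifies its argument verbatim; __sys1 expands it first,
 * which is why the extra level of indirection is needed. */
```

Without the extra level, `#x` would yield the literal text `"__NR_exit"` rather than the syscall number the SWI encoding needs.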
+
+#define __syscall_return(type, res) \
+do { \
+ if ((unsigned long)(res) >= (unsigned long)(-125)) { \
+ errno = -(res); \
+ res = -1; \
+ } \
+ return (type) (res); \
+} while (0)
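`__syscall_return` implements the libc convention for this kernel: the result register carries either a valid return value or a negative errno, and any value in the window [-125, -1] is folded into `errno` with -1 returned to the caller. A hypothetical user-space model (`fake_errno` and the function name are illustrative):

```c
#include <assert.h>

static int fake_errno;

/* Model of __syscall_return: negative values down to -125 are errno
 * codes; anything else passes through as a legitimate result. */
static long syscall_return(long res)
{
	if ((unsigned long)res >= (unsigned long)(-125)) {
		fake_errno = -res;
		return -1;
	}
	return res;
}
```

The unsigned comparison is the whole trick: casting -125 to unsigned long puts the errno window at the very top of the address range, so valid large results (e.g. mmap addresses) fall below it and pass through untouched.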
+
+#define _syscall0(type,name) \
+type name(void) { \
+ long __res; \
+ __asm__ __volatile__ ( \
+ __syscall(name) \
+ "mov %0,r0" \
+ :"=r" (__res) : : "r0","lr"); \
+ __syscall_return(type,__res); \
+}
+
+#define _syscall1(type,name,type1,arg1) \
+type name(type1 arg1) { \
+ long __res; \
+ __asm__ __volatile__ ( \
+ "mov\tr0,%1\n\t" \
+ __syscall(name) \
+ "mov %0,r0" \
+ : "=r" (__res) \
+ : "r" ((long)(arg1)) \
+ : "r0","lr"); \
+ __syscall_return(type,__res); \
+}
+
+#define _syscall2(type,name,type1,arg1,type2,arg2) \
+type name(type1 arg1,type2 arg2) { \
+ long __res; \
+ __asm__ __volatile__ ( \
+ "mov\tr0,%1\n\t" \
+ "mov\tr1,%2\n\t" \
+ __syscall(name) \
+ "mov\t%0,r0" \
+ : "=r" (__res) \
+ : "r" ((long)(arg1)),"r" ((long)(arg2)) \
+ : "r0","r1","lr"); \
+ __syscall_return(type,__res); \
+}
+
+
+#define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
+type name(type1 arg1,type2 arg2,type3 arg3) { \
+ long __res; \
+ __asm__ __volatile__ ( \
+ "mov\tr0,%1\n\t" \
+ "mov\tr1,%2\n\t" \
+ "mov\tr2,%3\n\t" \
+ __syscall(name) \
+ "mov\t%0,r0" \
+ : "=r" (__res) \
+ : "r" ((long)(arg1)),"r" ((long)(arg2)),"r" ((long)(arg3)) \
+ : "r0","r1","r2","lr"); \
+ __syscall_return(type,__res); \
+}
+
+
+#define _syscall4(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4) \
+type name(type1 arg1, type2 arg2, type3 arg3, type4 arg4) { \
+ long __res; \
+ __asm__ __volatile__ ( \
+ "mov\tr0,%1\n\t" \
+ "mov\tr1,%2\n\t" \
+ "mov\tr2,%3\n\t" \
+ "mov\tr3,%4\n\t" \
+ __syscall(name) \
+ "mov\t%0,r0" \
+ : "=r" (__res) \
+ : "r" ((long)(arg1)),"r" ((long)(arg2)),"r" ((long)(arg3)),"r" ((long)(arg4)) \
+ : "r0","r1","r2","r3","lr"); \
+ __syscall_return(type,__res); \
+}
+
+
+#define _syscall5(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4,type5,arg5) \
+type name(type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5) { \
+ long __res; \
+ __asm__ __volatile__ ( \
+ "mov\tr0,%1\n\t" \
+ "mov\tr1,%2\n\t" \
+ "mov\tr2,%3\n\t" \
+ "mov\tr3,%4\n\t" \
+ "mov\tr4,%5\n\t" \
+ __syscall(name) \
+ "mov\t%0,r0" \
+ : "=r" (__res) \
+ : "r" ((long)(arg1)),"r" ((long)(arg2)),"r" ((long)(arg3)),"r" ((long)(arg4)), \
+ "r" ((long)(arg5)) \
+ : "r0","r1","r2","r3","r4","lr"); \
+ __syscall_return(type,__res); \
+}
+
+#ifdef __KERNEL_SYSCALLS__
+
+/*
+ * we need this inline - forking from kernel space will result
+ * in NO COPY ON WRITE (!!!), until an execve is executed. This
+ * is no problem for anything but the stack. This is handled by not letting
+ * main() use the stack at all after fork(). Thus, no function
+ * calls - which means inline code for fork too, as otherwise we
+ * would use the stack upon exit from 'fork()'.
+ *
+ * Actually only pause and fork are needed inline, so that there
+ * won't be any messing with the stack from main(), but we define
+ * some others too.
+ */
+#define __NR__exit __NR_exit
+static inline _syscall0(int,idle);
+static inline _syscall0(int,fork);
+static inline _syscall2(int,clone,unsigned long,flags,char *,esp);
+static inline _syscall0(int,pause);
+static inline _syscall1(int,setup,int,magic);
+static inline _syscall0(int,sync);
+static inline _syscall0(pid_t,setsid);
+static inline _syscall3(int,write,int,fd,const char *,buf,off_t,count);
+static inline _syscall3(int,read,int,fd,char *,buf,off_t,count);
+static inline _syscall1(int,dup,int,fd);
+static inline _syscall3(int,execve,const char *,file,char **,argv,char **,envp);
+static inline _syscall3(int,open,const char *,file,int,flag,int,mode);
+static inline _syscall1(int,close,int,fd);
+static inline _syscall1(int,_exit,int,exitcode);
+static inline _syscall3(pid_t,waitpid,pid_t,pid,int *,wait_stat,int,options);
+
+static inline pid_t wait(int * wait_stat)
+{
+ return waitpid(-1,wait_stat,0);
+}
+
+
+
+/*
+ * This is the mechanism for creating a new kernel thread.
+ *
+ * NOTE! Only a kernel-only process (i.e. the swapper or direct descendants
+ * who haven't done an "execve()") should use this: it will work within
+ * a system call from a "real" process, but the process memory space will
+ * not be freed until both the parent and the child have exited.
+ */
+static inline pid_t kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
+{
+ long retval;
+
+ __asm__ __volatile__("
+ mov r0,%1
+ mov r1,%2
+ "__syscall(clone)"
+ teq r0, #0
+ bne 1f
+ mov r0,%4
+ mov lr, pc
+ mov pc, %3
+ "__syscall(exit)"
+1: mov %0,r0"
+ : "=r" (retval)
+ : "Ir" (flags | CLONE_VM), "Ir" (NULL), "r" (fn), "Ir" (arg)
+ : "r0","r1","r2","r3","lr");
+
+ return retval;
+}
+
+#endif
+
+#endif /* __ASM_ARM_UNISTD_H */
+
+
+
--- /dev/null
+#ifndef _ARM_USER_H
+#define _ARM_USER_H
+
+#include <asm/page.h>
+#include <linux/ptrace.h>
+/* Core file format: The core file is written in such a way that gdb
+ can understand it and provide useful information to the user (under
+ linux we use the 'trad-core' bfd). There are quite a number of
+ obstacles to being able to view the contents of the floating point
+ registers, and until these are solved you will not be able to view the
+ contents of them. Actually, you can read in the core file and look at
+ the contents of the user struct to find out what the floating point
+ registers contain.
+ The actual file contents are as follows:
+ UPAGE: 1 page consisting of a user struct that tells gdb what is present
+ in the file. Directly after this is a copy of the task_struct, which
+ is currently not used by gdb, but it may come in useful at some point.
+ All of the registers are stored as part of the upage. The upage should
+ always be only one page.
+ DATA: The data area is stored. We use current->end_text to
+ current->brk to pick up all of the user variables, plus any memory
+ that may have been malloced. No attempt is made to determine if a page
+ is demand-zero or if a page is totally unused; we just cover the entire
+ range. All of the addresses are rounded in such a way that an integral
+ number of pages is written.
+ STACK: We need the stack information in order to get a meaningful
+ backtrace. We need to write the data from (esp) to
+ current->start_stack, so we round each of these off in order to be able
+ to write an integer number of pages.
+ The minimum core file size is 3 pages, or 12288 bytes.
+*/
+
+struct user_fp {
+ struct fp_reg {
+ unsigned int sign1:1;
+ unsigned int unused:15;
+ unsigned int sign2:1;
+ unsigned int exponent:14;
+ unsigned int j:1;
+ unsigned int mantissa1:31;
+ unsigned int mantissa0:32;
+ } fpregs[8];
+ unsigned int fpsr:32;
+ unsigned int fpcr:32;
+};
+
+/* When the kernel dumps core, it starts by dumping the user struct -
+ this will be used by gdb to figure out where the data and stack segments
+ are within the file, and what virtual addresses to use. */
+struct user{
+/* We start with the registers, to mimic the way that "memory" is returned
+ from the ptrace(3,...) function. */
+ struct pt_regs regs; /* Where the registers are actually stored */
+/* ptrace does not yet supply these. Someday.... */
+ int u_fpvalid; /* True if math co-processor being used. */
+ /* for this mess. Not yet used. */
+/* The rest of this junk is to help gdb figure out what goes where */
+ unsigned long int u_tsize; /* Text segment size (pages). */
+ unsigned long int u_dsize; /* Data segment size (pages). */
+ unsigned long int u_ssize; /* Stack segment size (pages). */
+ unsigned long start_code; /* Starting virtual address of text. */
+ unsigned long start_stack; /* Starting virtual address of stack area.
+ This is actually the bottom of the stack,
+ the top of the stack is always found in the
+ esp register. */
+ long int signal; /* Signal that caused the core dump. */
+ int reserved; /* No longer used */
+ struct pt_regs * u_ar0; /* Used by gdb to help find the values for */
+ /* the registers. */
+ unsigned long magic; /* To uniquely identify a core file */
+ char u_comm[32]; /* User command that was responsible */
+ int u_debugreg[8];
+ struct user_fp u_fp; /* FP state */
+ struct user_fp_struct * u_fp0;/* Used by gdb to help find the values for */
+ /* the FP registers. */
+};
+#define NBPG PAGE_SIZE
+#define UPAGES 1
+#define HOST_TEXT_START_ADDR (u.start_code)
+#define HOST_STACK_END_ADDR (u.start_stack + u.u_ssize * NBPG)
+
+#endif /* _ARM_USER_H */
--- /dev/null
+#ifndef _ASMARM_VT_H
+#define _ASMARM_VT_H
+
+#define VT_GETSCRINFO 0x56FD /* get screen info */
+#define VT_GETPALETTE 0x56FE /* get palette */
+#define VT_SETPALETTE 0x56FF /* set palette */
+
+#endif /* _ASMARM_VT_H */
#define hardirq_exit(cpu) (local_irq_count[cpu]--)
#define synchronize_irq() do { } while (0)
+#define synchronize_one_irq(x) do { } while (0)
#else
}
extern void synchronize_irq(void);
+extern void synchronize_one_irq(unsigned int irq);
#endif /* __SMP__ */
/*
* linux/include/asm/irq.h
*
- * (C) 1992, 1993 Linus Torvalds
+ * (C) 1992, 1993 Linus Torvalds, (C) 1997 Ingo Molnar
*
- * IRQ/IPI changes taken from work by Thomas Radke <tomsoft@informatik.tu-chemnitz.de>
+ * IRQ/IPI changes taken from work by Thomas Radke
+ * <tomsoft@informatik.tu-chemnitz.de>
*/
+#ifndef __SMP__
#define NR_IRQS 16
+#else
+#define NR_IRQS 24
+#endif
#define TIMER_IRQ 0
{
bh_mask &= ~(1 << nr);
bh_mask_count[nr]++;
+ synchronize_irq();
}
extern inline void enable_bh(int nr)
#define access_ok(type,addr,size) ( (__range_ok(addr,size) == 0) && \
((type) == VERIFY_READ || boot_cpu_data.wp_works_ok || \
+ segment_eq(get_fs(),KERNEL_DS) || \
__verify_write((void *)(addr),(size))))
#endif /* CPU */
/* Allocation and freeing of basic task resources. */
#define alloc_task_struct() \
- ((struct task_struct *) __get_free_pages(GFP_KERNEL,1,0))
+ ((struct task_struct *) __get_free_pages(GFP_KERNEL,1))
#define free_task_struct(p) free_pages((unsigned long)(p),1)
#define init_task (init_task_union.task)
* NOTE! The task struct and the stack go together
*/
#define alloc_task_struct() \
- ((struct task_struct *) __get_free_pages(GFP_KERNEL,1,0))
+ ((struct task_struct *) __get_free_pages(GFP_KERNEL,1))
#define free_task_struct(p) free_pages((unsigned long)(p),1)
#define init_task (init_task_union.task)
* NOTE! The task struct and the stack go together
*/
#define alloc_task_struct() \
- ((struct task_struct *) __get_free_pages(GFP_KERNEL,1,0))
+ ((struct task_struct *) __get_free_pages(GFP_KERNEL,1))
#define free_task_struct(p) free_pages((unsigned long)(p),1)
/* in process.c - for early bootup debug -- Cort */
#ifdef __KERNEL__
/* Allocation and freeing of task_struct and kernel stack. */
-#define alloc_task_struct() ((struct task_struct *)__get_free_pages(GFP_KERNEL, 1, 0))
+#define alloc_task_struct() ((struct task_struct *)__get_free_pages(GFP_KERNEL, 1))
#define free_task_struct(tsk) free_pages((unsigned long)(tsk),1)
#define init_task (init_task_union.task)
--- /dev/null
+#ifndef _ADFS_FS_H
+#define _ADFS_FS_H
+
+#include <linux/types.h>
+/*
+ * Structures of data on the disk
+ */
+
+/*
+ * Disc Record at disc address 0xc00
+ */
+struct adfs_discrecord {
+ unsigned char log2secsize;
+ unsigned char secspertrack;
+ unsigned char heads;
+ unsigned char density;
+ unsigned char idlen;
+ unsigned char log2bpmb;
+ unsigned char skew;
+ unsigned char bootoption;
+ unsigned char lowsector;
+ unsigned char nzones;
+ unsigned short zone_spare;
+ unsigned long root;
+ unsigned long disc_size;
+ unsigned short disc_id;
+ unsigned char disc_name[10];
+ unsigned long disc_type;
+ unsigned long disc_size_high;
+ unsigned char log2sharesize:4;
+ unsigned char unused:4;
+ unsigned char big_flag:1;
+};
+
+#define ADFS_DISCRECORD (0xc00)
+#define ADFS_DR_OFFSET (0x1c0)
+#define ADFS_DR_SIZE 60
+#define ADFS_SUPER_MAGIC 0xadf5
+#define ADFS_FREE_FRAG 0
+#define ADFS_BAD_FRAG 1
+#define ADFS_ROOT_FRAG 2
+
+/*
+ * Directory header
+ */
+struct adfs_dirheader {
+ unsigned char startmasseq;
+ unsigned char startname[4];
+};
+
+#define ADFS_NEWDIR_SIZE 2048
+#define ADFS_OLDDIR_SIZE 1024
+#define ADFS_NUM_DIR_ENTRIES 77
+
+/*
+ * Directory entries
+ */
+struct adfs_direntry {
+ char dirobname[10];
+#define ADFS_NAME_LEN 10
+ __u8 dirload[4];
+ __u8 direxec[4];
+ __u8 dirlen[4];
+ __u8 dirinddiscadd[3];
+ __u8 newdiratts;
+#define ADFS_NDA_OWNER_READ (1 << 0)
+#define ADFS_NDA_OWNER_WRITE (1 << 1)
+#define ADFS_NDA_LOCKED (1 << 2)
+#define ADFS_NDA_DIRECTORY (1 << 3)
+#define ADFS_NDA_EXECUTE (1 << 4)
+#define ADFS_NDA_PUBLIC_READ (1 << 5)
+#define ADFS_NDA_PUBLIC_WRITE (1 << 6)
+};
+
+#define ADFS_MAX_NAME_LEN 255
+struct adfs_idir_entry {
+ __u32 inode_no; /* Address */
+ __u32 file_id; /* file id */
+ __u32 name_len; /* name length */
+ __u32 size; /* size */
+ __u32 mtime; /* modification time */
+ __u32 filetype; /* RiscOS file type */
+ __u8 mode; /* internal mode */
+ char name[ADFS_MAX_NAME_LEN]; /* file name */
+};
+
+/*
+ * Directory tail
+ */
+union adfs_dirtail {
+ struct {
+ unsigned char dirlastmask;
+ char dirname[10];
+ unsigned char dirparent[3];
+ char dirtitle[19];
+ unsigned char reserved[14];
+ unsigned char endmasseq;
+ unsigned char endname[4];
+ unsigned char dircheckbyte;
+ } old;
+ struct {
+ unsigned char dirlastmask;
+ unsigned char reserved[2];
+ unsigned char dirparent[3];
+ char dirtitle[19];
+ char dirname[10];
+ unsigned char endmasseq;
+ unsigned char endname[4];
+ unsigned char dircheckbyte;
+ } new;
+};
+
+#ifdef __KERNEL__
+
+
+/*
+ * Calculate the boot block checksum on an ADFS drive. Note that this will
+ * appear to be correct if the sector contains all zeros, so also check that
+ * the disk size is non-zero!!!
+ */
+
+extern inline int adfs_checkbblk(unsigned char *ptr)
+{
+ unsigned int result = 0;
+ unsigned char *p = ptr + 511;
+
+ do {
+ result = (result & 0xff) + (result >> 8);
+ result = result + *--p;
+ } while (p != ptr);
+
+ return (result & 0xff) != ptr[511];
+}
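The boot-block check is an 8-bit sum with end-around carry over bytes 510 down to 0, compared against the checksum byte stored at offset 511. A user-space sketch of the intended algorithm (the `_sketch` name is illustrative; it takes the same 512-byte sector as the in-kernel version):

```c
#include <assert.h>

/* 8-bit end-around-carry sum over bytes 510..0 of a 512-byte sector,
 * checked against the checksum byte at offset 511.  Returns 0 when the
 * sector checks out. */
static int adfs_checkbblk_sketch(const unsigned char *ptr)
{
	unsigned int result = 0;
	const unsigned char *p = ptr + 511;

	do {
		result = (result & 0xff) + (result >> 8);
		result = result + *--p;
	} while (p != ptr);

	return (result & 0xff) != ptr[511];
}
```

Note the false positive the comment warns about: an all-zero sector sums to zero and matches a zero checksum byte, which is why callers must also verify that the disc size field is non-zero.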
+
+/* dir.c */
+extern unsigned int adfs_val (unsigned char *p, int len);
+extern int adfs_dir_read_parent (struct inode *inode, struct buffer_head **bhp);
+extern int adfs_dir_read (struct inode *inode, struct buffer_head **bhp);
+extern int adfs_dir_check (struct inode *inode, struct buffer_head **bhp,
+ int buffers, union adfs_dirtail *dtp);
+extern void adfs_dir_free (struct buffer_head **bhp, int buffers);
+extern int adfs_dir_get (struct super_block *sb, struct buffer_head **bhp,
+ int buffers, int pos, unsigned long parent_object_id,
+ struct adfs_idir_entry *ide);
+extern int adfs_dir_find_entry (struct super_block *sb, struct buffer_head **bhp,
+ int buffers, unsigned int index,
+ struct adfs_idir_entry *ide);
+
+/* inode.c */
+extern int adfs_inode_validate (struct inode *inode);
+extern unsigned long adfs_inode_generate (unsigned long parent_id, int diridx);
+extern unsigned long adfs_inode_objid (struct inode *inode);
+extern unsigned int adfs_parent_bmap (struct inode *inode, int block);
+extern unsigned int adfs_bmap (struct inode *inode, int block);
+extern void adfs_read_inode (struct inode *inode);
+
+/* map.c */
+extern int adfs_map_lookup (struct super_block *sb, int frag_id, int offset);
+
+/* namei.c */
+extern int adfs_lookup (struct inode *dir, struct dentry *dentry);
+
+/* super.c */
+extern int init_adfs_fs (void);
+extern void adfs_error (struct super_block *, const char *, const char *, ...);
+
+/*
+ * Inodes and file operations
+ */
+
+/* dir.c */
+extern struct inode_operations adfs_dir_inode_operations;
+
+/* file.c */
+extern struct inode_operations adfs_file_inode_operations;
+#endif
+
+#endif
--- /dev/null
+#ifndef _ADFS_FS_H
+#define _ADFS_FS_H
+
+#include <linux/types.h>
+/*
+ * Structures of data on the disk
+ */
+
+/*
+ * Disc Record at disc address 0xc00
+ */
+struct adfs_discrecord {
+ unsigned char log2secsize;
+ unsigned char secspertrack;
+ unsigned char heads;
+ unsigned char density;
+ unsigned char idlen;
+ unsigned char log2bpmb;
+ unsigned char skew;
+ unsigned char bootoption;
+ unsigned char lowsector;
+ unsigned char nzones;
+ unsigned short zone_spare;
+ unsigned long root;
+ unsigned long disc_size;
+ unsigned short disc_id;
+ unsigned char disc_name[10];
+ unsigned long disc_type;
+ unsigned long disc_size_high;
+ unsigned char log2sharesize:4;
+ unsigned char unused:4;
+ unsigned char big_flag:1;
+};
+
+#define ADFS_DISCRECORD (0xc00)
+#define ADFS_DR_OFFSET (0x1c0)
+#define ADFS_DR_SIZE 60
+#define ADFS_SUPER_MAGIC 0xadf5
+#define ADFS_FREE_FRAG 0
+#define ADFS_BAD_FRAG 1
+#define ADFS_ROOT_FRAG 2
+
+/*
+ * Directory header
+ */
+struct adfs_dirheader {
+ unsigned char startmasseq;
+ unsigned char startname[4];
+};
+
+#define ADFS_NEWDIR_SIZE 2048
+#define ADFS_OLDDIR_SIZE 1024
+#define ADFS_NUM_DIR_ENTRIES 77
+
+/*
+ * Directory entries
+ */
+struct adfs_direntry {
+ char dirobname[10];
+#define ADFS_NAME_LEN 10
+ __u8 dirload[4];
+ __u8 direxec[4];
+ __u8 dirlen[4];
+ __u8 dirinddiscadd[3];
+ __u8 newdiratts;
+#define ADFS_NDA_OWNER_READ (1 << 0)
+#define ADFS_NDA_OWNER_WRITE (1 << 1)
+#define ADFS_NDA_LOCKED (1 << 2)
+#define ADFS_NDA_DIRECTORY (1 << 3)
+#define ADFS_NDA_EXECUTE (1 << 4)
+#define ADFS_NDA_PUBLIC_READ (1 << 5)
+#define ADFS_NDA_PUBLIC_WRITE (1 << 6)
+};
+
+#define ADFS_MAX_NAME_LEN 255
+struct adfs_idir_entry {
+ __u32 inode_no; /* Address */
+ __u32 file_id; /* file id */
+ __u32 name_len; /* name length */
+ __u32 size; /* size */
+ __u32 mtime; /* modification time */
+ __u32 filetype; /* RiscOS file type */
+ __u8 mode; /* internal mode */
+ char name[ADFS_MAX_NAME_LEN]; /* file name */
+};
+
+/*
+ * Directory tail
+ */
+union adfs_dirtail {
+ struct {
+ unsigned char dirlastmask;
+ char dirname[10];
+ unsigned char dirparent[3];
+ char dirtitle[19];
+ unsigned char reserved[14];
+ unsigned char endmasseq;
+ unsigned char endname[4];
+ unsigned char dircheckbyte;
+ } old;
+ struct {
+ unsigned char dirlastmask;
+ unsigned char reserved[2];
+ unsigned char dirparent[3];
+ char dirtitle[19];
+ char dirname[10];
+ unsigned char endmasseq;
+ unsigned char endname[4];
+ unsigned char dircheckbyte;
+ } new;
+};
+
+#ifdef __KERNEL__
+/*
+ * Calculate the boot block checksum on an ADFS drive. Note that this will
+ * appear to be correct if the sector contains all zeros, so also check that
+ * the disk size is non-zero!!!
+ */
+extern inline int adfs_checkbblk (unsigned char *ptr)
+{
+ unsigned int result = 0;
+ unsigned char *p = ptr + 511;
+
+ do {
+ result = (result & 0xff) + (result >> 8);
+ result = result + *--p;
+ } while (p != ptr);
+
+ return (result & 0xff) != ptr[511];
+}
+
+/* dir.c */
+extern unsigned int adfs_val (unsigned char *p, int len);
+extern int adfs_dir_read_parent (struct inode *inode, struct buffer_head **bhp);
+extern int adfs_dir_read (struct inode *inode, struct buffer_head **bhp);
+extern int adfs_dir_check (struct inode *inode, struct buffer_head **bhp,
+ int buffers, union adfs_dirtail *dtp);
+extern void adfs_dir_free (struct buffer_head **bhp, int buffers);
+extern int adfs_dir_get (struct super_block *sb, struct buffer_head **bhp,
+ int buffers, int pos, unsigned long parent_object_id,
+ struct adfs_idir_entry *ide);
+extern int adfs_dir_find_entry (struct super_block *sb, struct buffer_head **bhp,
+ int buffers, unsigned int index,
+ struct adfs_idir_entry *ide);
+
+/* inode.c */
+extern int adfs_inode_validate (struct inode *inode);
+extern unsigned long adfs_inode_generate (unsigned long parent_id, int diridx);
+extern unsigned long adfs_inode_objid (struct inode *inode);
+extern unsigned int adfs_parent_bmap (struct inode *inode, int block);
+extern unsigned int adfs_bmap (struct inode *inode, int block);
+extern void adfs_read_inode (struct inode *inode);
+
+/* map.c */
+extern int adfs_map_lookup (struct super_block *sb, int frag_id, int offset);
+
+/* namei.c */
+extern int adfs_lookup (struct inode * dir, const char * name, int len,
+ struct inode ** result);
+
+/* super.c */
+extern int init_adfs_fs (void);
+extern void adfs_error (struct super_block *, const char *, const char *, ...);
+
+/*
+ * Inodes and file operations
+ */
+
+/* dir.c */
+extern struct inode_operations adfs_dir_inode_operations;
+
+/* file.c */
+extern struct inode_operations adfs_file_inode_operations;
+#endif
+
+#endif
+
--- /dev/null
+/*
+ * linux/include/linux/adfs_fs_i.h
+ *
+ * Copyright (C) 1997 Russell King
+ */
+
+#ifndef _ADFS_FS_I
+#define _ADFS_FS_I
+
+/*
+ * adfs file system inode data in memory
+ */
+struct adfs_inode_info {
+ unsigned long file_id; /* id of fragments containing actual data */
+};
+
+#endif
--- /dev/null
+/*
+ * linux/include/linux/adfs_fs_sb.h
+ *
+ * Copyright (C) 1997 Russell King
+ */
+
+#ifndef _ADFS_FS_SB
+#define _ADFS_FS_SB
+
+#include <linux/adfs_fs.h>
+
+/*
+ * adfs file system superblock data in memory
+ */
+struct adfs_sb_info {
+ struct buffer_head *s_sbh; /* buffer head containing disc record */
+ struct adfs_discrecord *s_dr; /* pointer to disc record in s_sbh */
+ __u16 s_zone_size; /* size of a map zone in bits */
+ __u16 s_ids_per_zone; /* max. no. of ids in one zone */
+ __u32 s_idlen; /* length of ID in map */
+ __u32 s_map_size; /* size of a map */
+ __u32 s_zonesize; /* zone size (in map bits) */
+ __u32 s_map_block; /* block address of map */
+ struct buffer_head **s_map; /* bh list containing map */
+ __u32 s_root; /* root disc address */
+ __s8 s_map2blk; /* shift left by this for map->sector */
+};
+
+#endif
};
extern void register_console(struct console *);
+extern int unregister_console(struct console *);
extern struct console *console_drivers;
#endif /* linux/console.h */
* Drive parameters (user modifiable)
*/
struct floppy_drive_params {
- char cmos; /* cmos type */
+ signed char cmos; /* cmos type */
/* Spec2 is (HLD<<1 | ND), where HLD is head load time (1=2ms, 2=4 ms
* etc) and ND is set means no DMA. Hardcoded to 6 (HLD=6ms, use DMA).
#include <linux/romfs_fs_i.h>
#include <linux/smb_fs_i.h>
#include <linux/hfs_fs_i.h>
+#include <linux/adfs_fs_i.h>
/*
* Attribute flags. These should be or-ed together to figure out what
struct romfs_inode_info romfs_i;
struct smb_inode_info smbfs_i;
struct hfs_inode_info hfs_i;
+ struct adfs_inode_info adfs_i;
struct socket socket_i;
void *generic_ip;
} u;
#include <linux/romfs_fs_sb.h>
#include <linux/smb_fs_sb.h>
#include <linux/hfs_fs_sb.h>
+#include <linux/adfs_fs_sb.h>
struct super_block {
kdev_t s_dev;
struct romfs_sb_info romfs_sb;
struct smb_sb_info smbfs_sb;
struct hfs_sb_info hfs_sb;
+ struct adfs_sb_info adfs_sb;
void *generic_sbp;
} u;
};
unsigned char __data[0];
};
+#ifdef __KERNEL__
+#define optlength(opt) (sizeof(struct ip_options) + opt->optlen)
+#endif
+
struct iphdr {
#if defined(__LITTLE_ENDIAN_BITFIELD)
__u8 ihl:4,
#define _LINUX_KERNEL_STAT_H
#include <asm/irq.h>
+#include <linux/smp.h>
#include <linux/tasks.h>
/*
unsigned int dk_drive_wblk[DK_NDRIVE];
unsigned int pgpgin, pgpgout;
unsigned int pswpin, pswpout;
- unsigned int interrupts[NR_IRQS];
+ unsigned int interrupts[NR_CPUS][NR_IRQS];
unsigned int ipackets, opackets;
unsigned int ierrors, oerrors;
unsigned int collisions;
#define SYMBOL_NAME_LABEL(X) X/**/:
#endif
+#ifdef __arm__
+#define __ALIGN .align 0
+#define __ALIGN_STR ".align 0"
+#else
#ifdef __mc68000__
#define __ALIGN .align 4
#define __ALIGN_STR ".align 4"
#define __ALIGN_STR ".align 16,0x90"
#endif /* __i486__/__i586__ */
#endif /* __mc68000__ */
+#endif /* __arm__ */
#ifdef __ASSEMBLY__
#define NFS_WRITE_CANCELLED 0x0004 /* has been cancelled */
#define NFS_WRITE_UNCOMMITTED 0x0008 /* written but uncommitted (NFSv3) */
#define NFS_WRITE_INVALIDATE 0x0010 /* invalidate after write */
-#define NFS_WRITE_INPROGRESS 0x0020 /* RPC call in progress */
+#define NFS_WRITE_INPROGRESS 0x0100 /* RPC call in progress */
+#define NFS_WRITE_COMPLETE 0x0200 /* RPC call completed */
#define WB_WANTLOCK(req) ((req)->wb_flags & NFS_WRITE_WANTLOCK)
#define WB_HAVELOCK(req) ((req)->wb_flags & NFS_WRITE_LOCKED)
#define WB_UNCOMMITTED(req) ((req)->wb_flags & NFS_WRITE_UNCOMMITTED)
#define WB_INVALIDATE(req) ((req)->wb_flags & NFS_WRITE_INVALIDATE)
#define WB_INPROGRESS(req) ((req)->wb_flags & NFS_WRITE_INPROGRESS)
+#define WB_COMPLETE(req) ((req)->wb_flags & NFS_WRITE_COMPLETE)
/*
* linux/fs/nfs/proc.c
PROC_KSYMS,
PROC_DMA,
PROC_IOPORTS,
-#ifdef __SMP_PROF__
- PROC_SMP_PROF,
-#endif
PROC_PROFILE, /* whether enabled or not */
PROC_CMDLINE,
PROC_SYS,
} mac;
struct dst_entry *dst;
+
+#if (defined(__alpha__) || defined(__sparc64__)) && (defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE))
+ char cb[48]; /* sorry. 64bit pointers have a price */
+#else
char cb[32];
+#endif
__u32 seq; /* TCP sequence number */
__u32 end_seq; /* seq [+ fin] [+ syn] + datalen */
return list;
}
+extern __inline__ struct sk_buff *skb_peek_tail(struct sk_buff_head *list_)
+{
+ struct sk_buff *list = ((struct sk_buff *)list_)->prev;
+ if (list == (struct sk_buff *)list_)
+ list = NULL;
+ return list;
+}
+
/*
* Return the length of an sk_buff queue
*/
restore_flags(flags);
}
+/* XXX: could use a more streamlined implementation */
+extern __inline__ struct sk_buff *__skb_dequeue_tail(struct sk_buff_head *list)
+{
+ struct sk_buff *skb = skb_peek_tail(list);
+ if (skb)
+ __skb_unlink(skb, list);
+ return skb;
+}
+
+extern __inline__ struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list)
+{
+ unsigned long flags;
+ struct sk_buff *result;
+
+ save_flags(flags);
+ cli();
+ result = __skb_dequeue_tail(list);
+ restore_flags(flags);
+ return result;
+}
+
+
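The skb_peek_tail and __skb_dequeue_tail helpers added above treat the sk_buff_head as a sentinel node of a circular doubly linked list: the tail is simply head->prev, and an empty list is one whose prev points back at the sentinel. A minimal sketch of that pattern with a hypothetical node type (not the kernel's sk_buff):

```c
#include <assert.h>
#include <stddef.h>

/* Circular doubly linked list with a sentinel head, as used by
 * sk_buff queues: the tail is head->prev, and the list is empty
 * when head->prev points back at the sentinel itself. */
struct link {
	struct link *next, *prev;
};

static struct link *peek_tail(struct link *head)
{
	return head->prev == head ? NULL : head->prev;
}

static struct link *dequeue_tail(struct link *head)
{
	struct link *n = peek_tail(head);

	if (n) {		/* unlink n from the ring */
		n->prev->next = n->next;
		n->next->prev = n->prev;
		n->next = n->prev = n;
	}
	return n;
}
```

The kernel's locked variant wraps exactly this under save_flags/cli, since these queues can be touched from interrupt context.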
extern const char skb_put_errstr[];
extern const char skb_push_errstr[];
#ifdef __SMP__
#include <asm/smp.h>
-
+
+/*
+ * main IPI interface, handles INIT, TLB flush, STOP, etc.:
+ */
extern void smp_message_pass(int target, int msg, unsigned long data, int wait);
-extern void smp_boot_cpus(void); /* Boot processor call to load the other CPU's */
-extern void smp_callin(void); /* Processor call in. Must hold processors until .. */
-extern void smp_commence(void); /* Multiprocessors may now schedule */
-extern int smp_num_cpus;
-extern int smp_threads_ready; /* True once the per process idle is forked */
-#ifdef __SMP_PROF__
-extern volatile unsigned long smp_spins[NR_CPUS]; /* count of interrupt spins */
-extern volatile unsigned long smp_spins_sys_idle[]; /* count of idle spins */
-extern volatile unsigned long smp_spins_syscall[]; /* count of syscall spins */
-extern volatile unsigned long smp_spins_syscall_cur[]; /* count of syscall spins for the current
- call */
-extern volatile unsigned long smp_idle_count[1+NR_CPUS];/* count idle ticks */
-extern volatile unsigned long smp_idle_map; /* map with idle cpus */
-#else
-extern volatile unsigned long smp_spins;
-#endif
+/*
+ * Boot processor call to load the other CPU's
+ */
+extern void smp_boot_cpus(void);
+
+/*
+ * Processor call in. Must hold processors until ..
+ */
+extern void smp_callin(void);
+
+/*
+ * Multiprocessors may now schedule
+ */
+extern void smp_commence(void);
+
+/*
+ * True once the per process idle is forked
+ */
+extern int smp_threads_ready;
+
+extern int smp_num_cpus;
extern volatile unsigned long smp_msg_data;
extern volatile int smp_src_cpu;
extern volatile int smp_msg_id;
-#define MSG_ALL_BUT_SELF 0x8000 /* Assume <32768 CPU's */
+#define MSG_ALL_BUT_SELF 0x8000 /* Assume <32768 CPU's */
#define MSG_ALL 0x8001
-#define MSG_INVALIDATE_TLB 0x0001 /* Remote processor TLB invalidate */
-#define MSG_STOP_CPU 0x0002 /* Sent to shut down slave CPU's when rebooting */
-#define MSG_RESCHEDULE 0x0003 /* Reschedule request from master CPU */
+#define MSG_INVALIDATE_TLB 0x0001 /* Remote processor TLB invalidate */
+#define MSG_STOP_CPU 0x0002 /* Sent to shut down slave CPU's
+ * when rebooting
+ */
+#define MSG_RESCHEDULE 0x0003 /* Reschedule request from master CPU */
#else
#define BASE_ACK_SIZE (NETHDR_SIZE + MAX_HEADER + 15)
#define MAX_ACK_SIZE (NETHDR_SIZE + sizeof(struct tcphdr) + MAX_HEADER + 15)
#define MAX_RESET_SIZE (NETHDR_SIZE + sizeof(struct tcphdr) + MAX_HEADER + 15)
+#define MAX_TCPHEADER_SIZE (NETHDR_SIZE + sizeof(struct tcphdr) + 20 + MAX_HEADER + 15)
#define MAX_WINDOW 32767 /* Never offer a window over 32767 without using
window scaling (not yet supported). Some poor
struct sockaddr *uaddr,
int addr_len);
-
/* From syncookies.c */
extern struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb,
struct ip_options *opt);
extern void tcp_write_wakeup(struct sock *);
extern void tcp_send_fin(struct sock *sk);
extern int tcp_send_synack(struct sock *);
-extern int tcp_send_skb(struct sock *, struct sk_buff *);
+extern void tcp_send_skb(struct sock *, struct sk_buff *);
extern void tcp_send_ack(struct sock *sk);
extern void tcp_send_delayed_ack(struct sock *sk, int max_timeout);
}
}
+static __inline__ void tcp_build_options(__u32 *ptr, struct tcp_opt *tp)
+{
+ /* FIXME: We will still need to do SACK here. */
+ if (tp->tstamp_ok) {
+ *ptr = ntohl((TCPOPT_NOP << 24)
+ | (TCPOPT_NOP << 16)
+ | (TCPOPT_TIMESTAMP << 8)
+ | TCPOLEN_TIMESTAMP);
+ /* rest filled in by tcp_update_options */
+ }
+}
+
+static __inline__ void tcp_update_options(__u32 *ptr, struct tcp_opt *tp)
+{
+ /* FIXME: We will still need to do SACK here. */
+ if (tp->tstamp_ok) {
+ *++ptr = htonl(jiffies);
+ *++ptr = htonl(tp->ts_recent);
+ }
+}
+
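tcp_build_options above packs two NOP pad bytes, the TIMESTAMP option kind, and its length into a single 32-bit word, so the two timestamp values that follow land on 4-byte boundaries. A userspace sketch of that word layout (constants as defined by RFC 1323; shown in host order for clarity, where the kernel code byte-swaps it onto the wire):

```c
#include <assert.h>
#include <stdint.h>

/* RFC 1323 option constants, as used by tcp_build_options. */
#define TCPOPT_NOP        1
#define TCPOPT_TIMESTAMP  8
#define TCPOLEN_TIMESTAMP 10

/* Build the leading option word: NOP, NOP, kind, length.
 * The two 32-bit timestamp values follow this word. */
static uint32_t build_tstamp_word(void)
{
	return (TCPOPT_NOP << 24)
	     | (TCPOPT_NOP << 16)
	     | (TCPOPT_TIMESTAMP << 8)
	     | TCPOLEN_TIMESTAMP;
}
```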
+/*
+ * These routines build a generic TCP header.
+ * They also build the RFC1323 Timestamp option, but don't fill in
+ * the actual timestamp (you need to call tcp_update_options for that).
+ * They can't (unfortunately) do SACK as well.
+ * XXX: pass tp instead of sk here.
+ */
+
+static inline void tcp_build_header_data(struct tcphdr *th, struct sock *sk, int push)
+{
+ struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
+
+ memcpy(th,(void *) &(sk->dummy_th), sizeof(*th));
+ th->seq = htonl(sk->write_seq);
+ if (!push)
+ th->psh = 1;
+ tcp_build_options((__u32*)(th+1), tp);
+}
+
+static inline void tcp_build_header(struct tcphdr *th, struct sock *sk)
+{
+ struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
+
+ memcpy(th,(void *) &(sk->dummy_th), sizeof(*th));
+ th->seq = htonl(sk->write_seq);
+ th->ack_seq = htonl(tp->last_ack_sent = tp->rcv_nxt);
+ th->window = htons(tcp_select_window(sk));
+ tcp_build_options((__u32 *)(th+1), tp);
+}
+
/*
* Construct a tcp options header for a SYN or SYN_ACK packet.
* If this is every changed make sure to change the definition of
extern void dquot_init(void);
extern void smp_setup(char *str, int *ints);
+extern void ioapic_pirq_setup(char *str, int *ints);
extern void no_scroll(char *str, int *ints);
extern void swap_setup(char *str, int *ints);
extern void buff_setup(char *str, int *ints);
#ifdef __SMP__
{ "nosmp", smp_setup },
{ "maxcpus=", smp_setup },
+ { "pirq=", ioapic_pirq_setup },
#endif
#ifdef CONFIG_BLK_DEV_RAM
{ "ramdisk_start=", ramdisk_start_setup },
#else
+extern void setup_IO_APIC(void);
+
/*
* Multiprocessor idle thread is in arch/...
*/
memory_start = paging_init(memory_start,memory_end);
trap_init();
init_IRQ();
+ memory_start = console_init(memory_start,memory_end);
sched_init();
time_init();
parse_options(command_line);
#if defined(CONFIG_PCI) && defined(CONFIG_PCI_CONSOLE)
memory_start = pci_init(memory_start,memory_end);
#endif
+#if HACK
memory_start = console_init(memory_start,memory_end);
+#endif
#if defined(CONFIG_PCI) && !defined(CONFIG_PCI_CONSOLE)
memory_start = pci_init(memory_start,memory_end);
#endif
printk("POSIX conformance testing by UNIFIX\n");
#ifdef __SMP__
smp_init();
+ setup_IO_APIC();
#endif
#ifdef CONFIG_SYSCTL
sysctl_init();
#include <linux/swap.h>
#include <linux/ctype.h>
#include <linux/file.h>
+#include <linux/console.h>
extern unsigned char aux_device_present, kbd_read_mask;
/* binfmt_aout */
EXPORT_SYMBOL(get_write_access);
EXPORT_SYMBOL(put_write_access);
+
+/* dynamic registering of consoles */
+EXPORT_SYMBOL(register_console);
+EXPORT_SYMBOL(unregister_console);
return error;
}
+spinlock_t console_lock = SPIN_LOCK_UNLOCKED;
asmlinkage int printk(const char *fmt, ...)
{
__save_flags(flags);
__cli();
+ spin_lock(&console_lock);
va_start(args, fmt);
i = vsprintf(buf + 3, fmt, args); /* hopefully i < sizeof(buf)-4 */
buf_end = buf + 3 + i;
if (line_feed)
msg_level = -1;
}
+ spin_unlock(&console_lock);
__restore_flags(flags);
- wake_up_interruptible(&log_wait);
+/* wake_up_interruptible(&log_wait);*/
return i;
}
}
}
+
+int unregister_console(struct console * console)
+{
+ struct console *a,*b;
+
+ if (!console_drivers)
+ return 1;
+ if (console_drivers == console) {
+ console_drivers = console->next;
+ return 0;
+ }
+ for (a = console_drivers->next, b = console_drivers;
+ a; b = a, a = b->next) {
+ if (a == console) {
+ b->next = a->next;
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
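unregister_console above removes a node from the singly linked console_drivers list by special-casing the head and then walking with a trailing pointer one node behind the cursor. The same removal pattern in a standalone sketch (hypothetical node type, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

struct node {
	struct node *next;
};

/* Remove n from the singly linked list at *head, mirroring the
 * unregister_console walk: b trails one node behind the cursor a.
 * Returns 0 on success, 1 if n was not on the list. */
static int list_remove(struct node **head, struct node *n)
{
	struct node *a, *b;

	if (!*head)
		return 1;
	if (*head == n) {
		*head = n->next;
		return 0;
	}
	for (b = *head, a = (*head)->next; a; b = a, a = b->next) {
		if (a == n) {
			b->next = a->next;
			return 0;
		}
	}
	return 1;
}
```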
/*
* Write a message to a certain tty, not just the console. This is used for
* messages that need to be redirected to a specific tty.
if (!(page = __find_page(inode, pgpos, *hash))) {
if (!page_cache) {
page_cache = __get_free_page(GFP_KERNEL);
- if (!page_cache) {
- status = -ENOMEM;
- break;
- }
- continue;
+ if (page_cache)
+ continue;
+ status = -ENOMEM;
+ break;
}
page = mem_map + MAP_NR(page_cache);
add_to_page_cache(page, inode, pgpos, hash);
}
/*
- * WSH 06/05/97: restructured slightly to make sure we release
- * the page on an error exit. Removed explicit setting of
- * PG_locked, as that's handled below the i_op->xxx interface.
+ * Note: setting of the PG_locked bit is handled
+ * below the i_op->xxx interface.
*/
didread = 0;
page_wait:
wait_on_page(page);
+ if (PageUptodate(page))
+ goto do_update_page;
/*
- * If the page is not uptodate, and we're writing less
+ * The page is not up-to-date ... if we're writing less
* than a full page of data, we may have to read it first.
- * However, don't bother with reading the page when it's
- * after the current end of file.
+ * But if the page is past the current end of file, we must
+ * clear it before updating.
*/
- if (!PageUptodate(page)) {
- if (bytes < PAGE_SIZE && pgpos < inode->i_size) {
- status = -EIO; /* two tries ... error out */
- if (didread < 2)
- status = inode->i_op->readpage(dentry,
- page);
+ if (bytes < PAGE_SIZE) {
+ if (pgpos < inode->i_size) {
+ status = -EIO;
+ if (didread >= 2)
+ goto done_with_page;
+ status = inode->i_op->readpage(dentry, page);
if (status < 0)
goto done_with_page;
didread++;
goto page_wait;
+ } else {
+ /* Must clear for partial writes */
+ memset((void *) page_address(page), 0,
+ PAGE_SIZE);
}
- set_bit(PG_uptodate, &page->flags);
}
+ /*
+ * N.B. We should defer setting PG_uptodate at least until
+ * the data is copied. A failure in i_op->updatepage() could
+ * leave the page with garbage data.
+ */
+ set_bit(PG_uptodate, &page->flags);
+do_update_page:
/* Alright, the page is there. Now update it. */
status = inode->i_op->updatepage(dentry, page, buf,
offset, bytes, sync);
if (page_cache)
free_page(page_cache);
- if (written)
- return written;
- return status;
+ return written ? written : status;
}
/*
{
struct page * page;
struct page ** hash;
- unsigned long page_cache;
+ unsigned long page_cache = 0;
hash = page_hash(inode, offset);
page = __find_page(inode, offset, *hash);
add_to_page_cache(page, inode, offset, hash);
}
if (atomic_read(&page->count) != 2)
- printk("get_cached_page: page count=%d\n",
+ printk(KERN_ERR "get_cached_page: page count=%d\n",
atomic_read(&page->count));
if (test_bit(PG_locked, &page->flags))
- printk("get_cached_page: page already locked!\n");
+ printk(KERN_ERR "get_cached_page: page already locked!\n");
set_bit(PG_locked, &page->flags);
+ page_cache = page_address(page);
out:
- return page_address(page);
+ return page_cache;
}
/*
ifr->ifr_ifindex = dev->ifindex;
return 0;
+ case SIOCGIFTXQLEN:
+ ifr->ifr_qlen = dev->tx_queue_len;
+ return 0;
+
+ case SIOCSIFTXQLEN:
+ if (ifr->ifr_qlen < 2 || ifr->ifr_qlen > 1024)
+ return -EINVAL;
+ dev->tx_queue_len = ifr->ifr_qlen;
+ return 0;
+
/*
* Unknown or private ioctl
*/
case SIOCGIFSLAVE:
case SIOCGIFMAP:
case SIOCGIFINDEX:
+ case SIOCGIFTXQLEN:
ret = dev_ifsioc(&ifr, cmd);
if (!ret) {
#ifdef CONFIG_NET_ALIAS
case SIOCADDMULTI:
case SIOCDELMULTI:
case SIOCSIFHWBROADCAST:
+ case SIOCSIFTXQLEN:
if (!suser())
return -EPERM;
rtnl_lock();
*
* The Internet Protocol (IP) output module.
*
- * Version: $Id: ip_output.c,v 1.44 1997/12/27 20:41:14 kuznet Exp $
+ * Version: $Id: ip_output.c,v 1.45 1998/01/15 22:06:35 freitag Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
/*
- * Queues a packet to be sent, and starts the transmitter
- * if necessary. if free = 1 then we free the block after
- * transmit, otherwise we don't. If free==2 we not only
- * free the block but also don't assign a new ip seq number.
- * This routine also needs to put in the total length,
- * and compute the checksum
+ * Queues a packet to be sent, and starts the transmitter if necessary.
+ * This routine also needs to put in the total length and compute the
+ * checksum
*/
void ip_queue_xmit(struct sk_buff *skb)
iph->tot_len = htons(tot_len);
iph->id = htons(ip_id_count++);
- if (rt->u.dst.obsolete)
- goto check_route;
-after_check_route:
+ if (rt->u.dst.obsolete) {
+ /* Ugly... ugly... but what can I do?
+ Essentially it is "ip_reroute_output" function. --ANK
+ */
+ struct rtable *nrt;
+ if (ip_route_output(&nrt, rt->key.dst, rt->key.src, rt->key.tos,
+ sk?sk->bound_dev_if:0))
+ goto drop;
+ skb->dst = &nrt->u.dst;
+ ip_rt_put(rt);
+ rt = nrt;
+ }
+
dev = rt->u.dst.dev;
- if (call_out_firewall(PF_INET, dev, iph, NULL,&skb) < FW_ACCEPT) {
- kfree_skb(skb, FREE_WRITE);
- return;
- }
+ if (call_out_firewall(PF_INET, dev, iph, NULL,&skb) < FW_ACCEPT)
+ goto drop;
#ifdef CONFIG_NET_SECURITY
/*
ip_send_check(iph);
if (call_out_firewall(PF_SECURITY, NULL, NULL, (void *) 4, &skb)<FW_ACCEPT)
- {
- kfree_skb(skb, FREE_WRITE);
- return;
- }
+ goto drop;
iph = skb->nh.iph;
/* don't update tot_len, as the dev->mtu is already decreased */
if (tot_len > rt->u.dst.pmtu)
goto fragment;
+#ifndef CONFIG_NET_SECURITY
/*
* Add an IP checksum
*/
ip_send_check(iph);
+#endif
if (sk)
skb->priority = sk->priority;
skb->dst->output(skb);
return;
-check_route:
- /* Ugly... ugly... but what can I do?
-
- Essentially it is "ip_reroute_output" function. --ANK
- */
- {
- struct rtable *nrt;
- if (ip_route_output(&nrt, rt->key.dst, rt->key.src, rt->key.tos, sk?sk->bound_dev_if:0)) {
- kfree_skb(skb, 0);
- return;
- }
- skb->dst = &nrt->u.dst;
- ip_rt_put(rt);
- rt = nrt;
- }
- goto after_check_route;
-
fragment:
if ((iph->frag_off & htons(IP_DF)))
{
printk(KERN_DEBUG "sending pkt_too_big to self\n");
icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
htonl(rt->u.dst.pmtu));
-
- kfree_skb(skb, FREE_WRITE);
- return;
+ goto drop;
}
ip_fragment(skb, skb->dst->output);
+ return;
+
+drop:
+ kfree_skb(skb, FREE_WRITE);
}
/*
mf = 0;
/*
- * Can't fragment raw packets
+ * Don't fragment packets for path mtu discovery.
*/
- if (offset > 0 && df)
+ if (offset > 0 && df) {
return(-EMSGSIZE);
+ }
/*
* Lock the device lists.
/*
* Process BOOTP extension.
*/
-__initfunc(static void ic_do_bootp_ext(u8 *ext))
+__initfunc(static void ic_do_bootp_ext(struct bootp_pkt *b, u8 *ext))
{
#ifdef IPCONFIG_DEBUG
u8 *c;
opt = ext;
ext += ext[1] + 2;
if (ext <= end)
- ic_do_bootp_ext(opt);
+ ic_do_bootp_ext(b, opt);
}
}
}
/*
* sysctl_net_ipv4.c: sysctl interface to net IPV4 subsystem.
*
- * $Id: sysctl_net_ipv4.c,v 1.23 1997/12/13 21:52:57 kuznet Exp $
+ * $Id: sysctl_net_ipv4.c,v 1.25 1998/01/15 22:40:57 freitag Exp $
*
* Begun April 1, 1996, Mike Shaver.
* Added /proc/sys/net/ipv4 directory entry (empty =) ). [MS]
extern int sysctl_tcp_max_ka_probes;
extern int sysctl_tcp_retries1;
extern int sysctl_tcp_retries2;
-extern int sysctl_tcp_max_delay_acks;
extern int sysctl_tcp_fin_timeout;
extern int sysctl_tcp_syncookies;
extern int sysctl_tcp_syn_retries;
&sysctl_intvec, NULL, NULL, &tcp_retr1_max},
{NET_IPV4_TCP_RETRIES2, "tcp_retries2",
&sysctl_tcp_retries2, sizeof(int), 0644, NULL, &proc_dointvec},
- {NET_IPV4_TCP_MAX_DELAY_ACKS, "tcp_max_delay_acks",
- &sysctl_tcp_max_delay_acks, sizeof(int), 0644, NULL, &proc_dointvec},
{NET_IPV4_TCP_FIN_TIMEOUT, "tcp_fin_timeout",
&sysctl_tcp_fin_timeout, sizeof(int), 0644, NULL,
&proc_dointvec_jiffies},
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp.c,v 1.76 1997/12/30 19:43:17 kuznet Exp $
+ * Version: $Id: tcp.c,v 1.77 1998/01/15 22:40:18 freitag Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
/*
- * Walk down the receive queue counting readable data until we hit the
- * end or we find a gap in the received data queue (ie a frame missing
- * that needs sending to us).
+ * Walk down the receive queue counting readable data.
*/
static int tcp_readable(struct sock *sk)
/* Do until a push or until we are out of data. */
do {
/* Found a hole, so stop here. */
- if (before(counted, skb->seq))
+ if (before(counted, skb->seq)) /* should not happen */
break;
/* Length - header but start from where we are up to
mask = POLLERR;
/* Connected? */
if ((1 << sk->state) & ~(TCPF_SYN_SENT|TCPF_SYN_RECV)) {
+ int space;
+
if (sk->shutdown & RCV_SHUTDOWN)
mask |= POLLHUP;
-
+
if ((tp->rcv_nxt != sk->copied_seq) &&
(sk->urg_seq != sk->copied_seq ||
tp->rcv_nxt != sk->copied_seq+1 ||
sk->urginline || !sk->urg_data))
mask |= POLLIN | POLLRDNORM;
- /* FIXME: this assumed sk->mtu is correctly maintained.
- * I see no evidence this is the case. -- erics
- */
- if (!(sk->shutdown & SEND_SHUTDOWN) &&
- (sock_wspace(sk) >= sk->mtu+128+sk->prot->max_header))
+#if 1 /* This needs benchmarking and real world tests */
+ space = sk->dst_cache->pmtu + 128;
+ if (space < 2048) /* XXX */
+ space = 2048;
+#else /* 2.0 way */
+ /* More than half of the socket queue free? */
+ space = atomic_read(&sk->wmem_alloc) / 2;
+#endif
+ /* Always wake the user up when an error occurred */
+ if (sock_wspace(sk) >= space)
mask |= POLLOUT | POLLWRNORM;
-
if (sk->urg_data)
- mask |= POLLPRI;
+ mask |= POLLPRI;
}
return mask;
}
return put_user(amount, (int *)arg);
}
default:
- return(-ENOIOCTLCMD);
+ return(-EINVAL);
};
}
-
-/*
- * This routine builds a generic TCP header.
- * It also builds in the RFC1323 Timestamp.
- * It can't (unfortunately) do SACK as well.
- */
-
-extern __inline void tcp_build_header(struct tcphdr *th, struct sock *sk, int push)
-{
- struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
-
- memcpy(th,(void *) &(sk->dummy_th), sizeof(*th));
- th->seq = htonl(sk->write_seq);
- th->psh =(push == 0) ? 1 : 0;
- th->ack_seq = htonl(tp->rcv_nxt);
- th->window = htons(tcp_select_window(sk));
-
- /* FIXME: could use the inline found in tcp_output.c as well.
- * Probably that means we should move these up to an include file. --erics
- */
- if (tp->tstamp_ok) {
- __u32 *ptr = (__u32 *)(th+1);
- *ptr++ = ntohl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16)
- | (TCPOPT_TIMESTAMP << 8) | TCPOLEN_TIMESTAMP);
- /* FIXME: Not sure it's worth setting these here already, but I'm
- * also not sure we replace them on all paths later. --erics
- */
- *ptr++ = jiffies;
- *ptr++ = tp->ts_recent;
- }
-}
-
/*
* Wait for a socket to get into the connected state
*/
skb_put(skb,tp->tcp_header_len);
seglen -= copy;
- tcp_build_header(skb->h.th, sk, seglen || iovlen);
+ tcp_build_header_data(skb->h.th, sk, seglen || iovlen);
/* FIXME: still need to think about SACK options here. */
if (flags & MSG_OOB) {
static void cleanup_rbuf(struct sock *sk)
{
struct sk_buff *skb;
-
+ struct tcp_opt *tp;
+
/* NOTE! The socket must be locked, so that we don't get
* a messed-up receive queue.
*/
SOCK_DEBUG(sk, "sk->rspace = %lu\n", sock_rspace(sk));
+ tp = &(sk->tp_pinfo.af_tcp);
+
/* We send a ACK if the sender is blocked
* else let tcp_data deal with the acking policy.
*/
- if (sk->tp_pinfo.af_tcp.delayed_acks) {
- struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
+ if (tp->delayed_acks) {
__u32 rcv_wnd;
/* FIXME: double check this rule, then check against
struct sock *newsk = NULL;
int error;
+ lock_sock(sk);
+
/* We need to make sure that this socket is listening,
* and that it has something pending.
*/
error = EINVAL;
if (sk->state != TCP_LISTEN)
- goto no_listen;
-
- lock_sock(sk);
+ goto out;
+ /* Find already established connection */
req = tcp_find_established(tp, &prev);
- if (req) {
-got_new_connect:
- tcp_synq_unlink(tp, req, prev);
- newsk = req->sk;
- tcp_openreq_free(req);
- sk->ack_backlog--;
- /* FIXME: need to check here if socket has already
- * an soft_err or err set.
- * We have two options here then: reply (this behaviour matches
- * Solaris) or return the error to the application (old Linux)
- */
- error = 0;
-out:
- release_sock(sk);
-no_listen:
- sk->err = error;
- return newsk;
+ if (!req) {
+ /* If this is a non-blocking socket, don't sleep */
+ error = EAGAIN;
+ if (flags & O_NONBLOCK)
+ goto out;
+
+ error = ERESTARTSYS;
+ req = wait_for_connect(sk, &prev);
+ if (!req)
+ goto out;
+ error = 0;
}
- error = EAGAIN;
- if (flags & O_NONBLOCK)
- goto out;
- req = wait_for_connect(sk, &prev);
- if (req)
- goto got_new_connect;
- error = ERESTARTSYS;
- goto out;
+ tcp_synq_unlink(tp, req, prev);
+ newsk = req->sk;
+ tcp_openreq_free(req);
+ sk->ack_backlog--; /* XXX */
+
+ /* FIXME: need to check here if newsk already has
+ * a soft_err or err set.
+ * We have two options here then: reply (this behaviour matches
+ * Solaris) or return the error to the application (old Linux)
+ */
+ error = 0;
+ out:
+ release_sock(sk);
+ sk->err = error;
+ return newsk;
}
/*
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_input.c,v 1.65 1997/12/13 21:52:58 kuznet Exp $
+ * Version: $Id: tcp_input.c,v 1.66 1998/01/15 22:40:29 freitag Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
* next packet on ack of previous packet.
* Andi Kleen : Moved open_request checking here
* and process RSTs for open_requests.
+ * Andi Kleen : Better prune_queue, and other fixes.
*/
#include <linux/config.h>
int sysctl_tcp_timestamps;
int sysctl_tcp_window_scaling;
int sysctl_tcp_syncookies = SYNC_INIT;
-int sysctl_tcp_max_delay_acks = MAX_DELAY_ACK;
int sysctl_tcp_stdurg;
static tcp_sys_cong_ctl_t tcp_sys_cong_ctl_f = &tcp_cong_avoid_vanj;
*/
static __inline__ int tcp_fast_parse_options(struct tcphdr *th, struct tcp_opt *tp)
{
+ /* If we didn't send out any options ignore them all */
if (tp->tcp_header_len == sizeof(struct tcphdr))
return 0;
if (th->doff == sizeof(struct tcphdr)>>2) {
if (after(skb->end_seq, ack))
break;
+#if 0
SOCK_DEBUG(sk, "removing seg %x-%x from retransmit queue\n",
skb->seq, skb->end_seq);
+#endif
acked = FLAG_DATA_ACKED;
struct sk_buff *skb;
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
- /* FIXME: out_of_order_queue is a strong tcp_opt candidate... -DaveM */
while ((skb = skb_peek(&sk->out_of_order_queue))) {
if (after(skb->seq, tp->rcv_nxt))
break;
if (!after(skb->end_seq, tp->rcv_nxt)) {
- SOCK_DEBUG(sk, "ofo packet was allready received \n");
+ SOCK_DEBUG(sk, "ofo packet was already received\n");
skb_unlink(skb);
kfree_skb(skb, FREE_READ);
continue;
*/
if (skb->seq == tp->rcv_nxt) {
/* Ok. In sequence. */
-queue_and_out:
+ queue_and_out:
dst_confirm(sk->dst_cache);
skb_queue_tail(&sk->receive_queue, skb);
tp->rcv_nxt = skb->end_seq;
return;
}
- /* Not in sequence, either a retransmit or some packet got lost. */
+ /* An old packet, either a retransmit or some packet got lost. */
if (!after(skb->end_seq, tp->rcv_nxt)) {
/* A retransmit, 2nd most common case. Force an immediate ack. */
SOCK_DEBUG(sk, "retransmit received: seq %X\n", skb->seq);
- tp->delayed_acks = sysctl_tcp_max_delay_acks;
+ tp->delayed_acks = MAX_DELAY_ACK;
kfree_skb(skb, FREE_READ);
return;
}
}
/* Ok. This is an out_of_order segment, force an ack. */
- tp->delayed_acks = sysctl_tcp_max_delay_acks;
+ tp->delayed_acks = MAX_DELAY_ACK;
/* Disable header prediction. */
tp->pred_flags = 0;
}
}
-static __inline__ void tcp_ack_snd_check(struct sock *sk)
+/*
+ * Check if sending an ack is needed.
+ */
+static __inline__ void __tcp_ack_snd_check(struct sock *sk)
{
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
* - we don't have a window update to send
* - must send at least every 2 full sized packets
*/
- if (tp->delayed_acks == 0) {
- /* We sent a data segment already. */
- return;
- }
- if (tp->delayed_acks >= sysctl_tcp_max_delay_acks || tcp_raise_window(sk))
+ if (tp->delayed_acks >= MAX_DELAY_ACK || tcp_raise_window(sk))
tcp_send_ack(sk);
else
tcp_send_delayed_ack(sk, HZ/2);
}
+static __inline__ void tcp_ack_snd_check(struct sock *sk)
+{
+ struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
+ if (tp->delayed_acks == 0) {
+ /* We sent a data segment already. */
+ return;
+ }
+ __tcp_ack_snd_check(sk);
+}
+
+
/*
* This routine is only called when we have urgent data
* signalled. Its the 'slow' part of tcp_urg. It could be
}
}
+/*
+ * Clean first the out_of_order queue, then the receive queue until
+ * the socket is in its memory limits again.
+ */
static void prune_queue(struct sock *sk)
{
+ struct tcp_opt *tp;
struct sk_buff * skb;
- /* Clean the out_of_order queue. */
- while ((skb = skb_dequeue(&sk->out_of_order_queue)))
+ SOCK_DEBUG(sk, "prune_queue: c=%x\n", sk->copied_seq);
+
+ /* First clean the out_of_order queue. */
+ /* Start at the end, because the least useful packets are
+ * probably there (crossing fingers).
+ */
+ while ((skb = skb_dequeue_tail(&sk->out_of_order_queue))) {
kfree_skb(skb, FREE_READ);
+ if (atomic_read(&sk->rmem_alloc) <= sk->rcvbuf)
+ return;
+ }
+
+ tp = &sk->tp_pinfo.af_tcp;
+
+ /* Now continue with the receive queue if that wasn't enough */
+ while ((skb = skb_peek_tail(&sk->receive_queue))) {
+ /* Never remove packets that have been already acked */
+ if (before(skb->end_seq, tp->last_ack_sent+1)) {
+ printk(KERN_DEBUG "prune_queue: hit acked data c=%x,%x,%x\n",
+ sk->copied_seq, skb->end_seq, tp->last_ack_sent);
+ break;
+ }
+ skb_unlink(skb);
+ tp->rcv_nxt = skb->seq;
+ kfree_skb(skb, FREE_READ);
+ if (atomic_read(&sk->rmem_alloc) <= sk->rcvbuf)
+ break;
+ }
}
int tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
if (tcp_paws_discard(tp)) {
if (!th->rst) {
tcp_send_ack(sk);
- kfree_skb(skb, FREE_READ);
- return 0;
+ goto discard;
}
}
tcp_replace_ts_recent(tp,skb->end_seq);
if (len <= th->doff*4) {
/* Bulk data transfer: sender */
if (len == th->doff*4) {
- tcp_ack(sk, th, skb->seq, skb->ack_seq, len);
+ tcp_ack(sk, th, skb->seq, skb->ack_seq, len);
+ kfree_skb(skb, FREE_READ);
tcp_data_snd_check(sk);
+ return 0;
+ } else { /* Header too small */
+ tcp_statistics.TcpInErrs++;
+ goto discard;
}
-
- tcp_statistics.TcpInErrs++;
- kfree_skb(skb, FREE_READ);
- return 0;
} else if (skb->ack_seq == tp->snd_una) {
/* Bulk data transfer: receiver */
- skb_pull(skb,th->doff*4);
+ if (atomic_read(&sk->rmem_alloc) > sk->rcvbuf)
+ goto discard;
+ skb_pull(skb,th->doff*4);
+
/* DO NOT notify forward progress here.
* It saves dozen of CPU instructions in fast path. --ANK
*/
sk->data_ready(sk, 0);
tcp_delack_estimator(tp);
+#if 1 /* This checks for required window updates too. */
+ tp->delayed_acks++;
+ __tcp_ack_snd_check(sk);
+#else
if (tp->delayed_acks++ == 0)
tcp_send_delayed_ack(sk, HZ/2);
else
tcp_send_ack(sk);
+#endif
return 0;
}
}
tp->rcv_wup, tp->rcv_wnd);
}
tcp_send_ack(sk);
- kfree_skb(skb, FREE_READ);
- return 0;
+ goto discard;
}
}
if(th->rst) {
tcp_reset(sk,skb);
- kfree_skb(skb, FREE_READ);
- return 0;
+ goto discard;
}
-
+
if(th->ack)
tcp_ack(sk, th, skb->seq, skb->ack_seq, len);
(void) tcp_fin(skb, sk, th);
tcp_data_snd_check(sk);
- tcp_ack_snd_check(sk);
- /* If our receive queue has grown past its limits,
- * try to prune away duplicates etc..
- */
+ /* If our receive queue has grown past its limits shrink it */
if (atomic_read(&sk->rmem_alloc) > sk->rcvbuf)
prune_queue(sk);
- if (!queued)
+ tcp_ack_snd_check(sk);
+
+ if (!queued) {
+ discard:
kfree_skb(skb, FREE_READ);
+ }
return 0;
}
}
}
- case TCP_ESTABLISHED:
+ case TCP_ESTABLISHED:
queued = tcp_data(skb, sk, len);
+
+ /* This can only happen when MTU+skbheader > rcvbuf */
+ if (atomic_read(&sk->rmem_alloc) > sk->rcvbuf)
+ prune_queue(sk);
break;
}
{
int val = sysctl_tcp_cong_avoidance;
int retv;
+ static tcp_sys_cong_ctl_t tab[] = {
+ tcp_cong_avoid_vanj,
+ tcp_cong_avoid_vegas
+ };
retv = proc_dointvec(ctl, write, filp, buffer, lenp);
if (write) {
- switch (sysctl_tcp_cong_avoidance) {
- case 0:
- tcp_sys_cong_ctl_f = &tcp_cong_avoid_vanj;
- break;
- case 1:
- tcp_sys_cong_ctl_f = &tcp_cong_avoid_vegas;
- break;
- default:
+ if ((unsigned)sysctl_tcp_cong_avoidance > 1) {
retv = -EINVAL;
sysctl_tcp_cong_avoidance = val;
- };
+ } else {
+ tcp_sys_cong_ctl_f = tab[sysctl_tcp_cong_avoidance];
+ }
}
-
return retv;
}
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_ipv4.c,v 1.77 1997/12/13 21:53:00 kuznet Exp $
+ * Version: $Id: tcp_ipv4.c,v 1.79 1998/01/15 22:40:47 freitag Exp $
*
* IPv4 specific functions
*
* Added new listen sematics (ifdefed by
* NEW_LISTEN for now)
* Juan Jose Ciarlante: ip_dynaddr bits
+ * Andi Kleen: various fixes.
*/
#include <linux/config.h>
* dropped. This is the new "fast" path mtu
* discovery.
*/
- if (!sk->sock_readers)
+ if (!sk->sock_readers) {
+ lock_sock(sk);
tcp_simple_retransmit(sk);
+ release_sock(sk);
+ } /* else let the usual retransmit timer handle it */
}
}
}
* it's just the icmp type << 8 | icmp code. After adjustment
* header points to the first 8 bytes of the tcp header. We need
* to find the appropriate port.
+ *
+ * The locking strategy used here is very "optimistic". When
+ * someone else accesses the socket, the ICMP is just dropped,
+ * and for some paths there is no check at all.
+ * A more general error queue for deferred error handling
+ * would probably be better.
*/
void tcp_v4_err(struct sk_buff *skb, unsigned char *dp, int len)
switch (type) {
case ICMP_SOURCE_QUENCH:
+#ifndef OLD_SOURCE_QUENCH /* This is deprecated */
tp->snd_ssthresh = max(tp->snd_cwnd >> 1, 2);
tp->snd_cwnd = tp->snd_ssthresh;
tp->high_seq = tp->snd_nxt;
+#endif
return;
case ICMP_PARAMETERPROB:
sk->err=EPROTO;
- sk->error_report(sk);
+ sk->error_report(sk); /* This isn't serialized on SMP! */
break;
case ICMP_DEST_UNREACH:
if (code == ICMP_FRAG_NEEDED) { /* PMTU discovery (RFC1191) */
*/
return;
}
-
+
if (!th->syn && !th->ack)
return;
req = tcp_v4_search_req(tp, iph, th, &prev);
}
if(icmp_err_convert[code].fatal || opening) {
+ /* This code isn't serialized with the socket code */
sk->err = icmp_err_convert[code].errno;
if (opening) {
tcp_statistics.TcpAttemptFails++;
static void tcp_v4_or_free(struct open_request *req)
{
if(!req->sk && req->af.v4_req.opt)
- kfree_s(req->af.v4_req.opt,
- sizeof(struct ip_options) + req->af.v4_req.opt->optlen);
+ kfree_s(req->af.v4_req.opt, optlength(req->af.v4_req.opt));
}
static inline void syn_flood_warning(struct sk_buff *skb)
}
}
+/*
+ * Save and compile IPv4 options into the open_request if needed.
+ */
+static inline struct ip_options *
+tcp_v4_save_options(struct sock *sk, struct sk_buff *skb,
+ struct ip_options *opt)
+{
+ struct ip_options *dopt = NULL;
+
+ if (opt && opt->optlen) {
+ int opt_size = optlength(opt);
+ dopt = kmalloc(opt_size, GFP_ATOMIC);
+ if (dopt) {
+ if (ip_options_echo(dopt, skb)) {
+ kfree_s(dopt, opt_size);
+ dopt = NULL;
+ }
+ }
+ }
+ return dopt;
+}
+
int sysctl_max_syn_backlog = 1024;
int sysctl_tcp_syn_taildrop = 1;
int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb, void *ptr,
__u32 isn)
{
- struct ip_options *opt = (struct ip_options *) ptr;
struct tcp_opt tp;
struct open_request *req;
struct tcphdr *th = skb->h.th;
req->snt_isn = isn;
- /* IPv4 options */
- req->af.v4_req.opt = NULL;
+ req->af.v4_req.opt = tcp_v4_save_options(sk, skb, ptr);
- if (opt && opt->optlen) {
- int opt_size = sizeof(struct ip_options) + opt->optlen;
-
- req->af.v4_req.opt = kmalloc(opt_size, GFP_ATOMIC);
- if (req->af.v4_req.opt) {
- if (ip_options_echo(req->af.v4_req.opt, skb)) {
- kfree_s(req->af.v4_req.opt, opt_size);
- req->af.v4_req.opt = NULL;
- }
- }
- }
req->class = &or_ipv4;
req->retrans = 0;
req->sk = NULL;
tcp_v4_send_synack(sk, req);
if (want_cookie) {
- if (req->af.v4_req.opt)
- kfree(req->af.v4_req.opt);
+ if (req->af.v4_req.opt)
+ kfree(req->af.v4_req.opt);
+ tcp_v4_or_free(req);
tcp_openreq_free(req);
- } else {
+ } else {
req->expires = jiffies + TCP_TIMEOUT_INIT;
tcp_inc_slow_timer(TCP_SLT_SYNACK);
tcp_synq_queue(&sk->tp_pinfo.af_tcp, req);
}
sk->data_ready(sk, 0);
-exit:
return 0;
dead:
SOCK_DEBUG(sk, "Reset on %p: Connect on dead socket.\n",sk);
tcp_statistics.TcpAttemptFails++;
- return -ENOTCONN;
+ return -ENOTCONN; /* send reset */
+
error:
tcp_statistics.TcpAttemptFails++;
- goto exit;
+ return 0;
}
struct sock * tcp_v4_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
/* Or else we die! -DaveM */
newsk->sklist_next = NULL;
- newsk->opt = req->af.v4_req.opt;
+ newsk->opt = req->af.v4_req.opt;
skb_queue_head_init(&newsk->write_queue);
skb_queue_head_init(&newsk->receive_queue);
if (sk->filter)
{
if (sk_filter(skb, sk->filter_data, sk->filter))
- return -EPERM; /* Toss packet */
+ goto discard;
}
#endif /* CONFIG_FILTER */
- skb_set_owner_r(skb, sk);
-
/*
* socket locking is here for SMP purposes as backlog rcv
* is currently called with bh processing disabled.
*/
lock_sock(sk);
+ /*
+ * This doesn't check if the socket has enough room for the packet.
+ * Either process the packet _without_ queueing it and then free it,
+ * or do the check later.
+ */
+ skb_set_owner_r(skb, sk);
+
if (sk->state == TCP_ESTABLISHED) { /* Fast path */
if (tcp_rcv_established(sk, skb, skb->h.th, skb->len))
goto reset;
sk = nsk;
}
- if (tcp_rcv_state_process(sk, skb, skb->h.th,
- &(IPCB(skb)->opt), skb->len))
+ if (tcp_rcv_state_process(sk, skb, skb->h.th, &(IPCB(skb)->opt), skb->len))
goto reset;
release_sock(sk);
return 0;
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_output.c,v 1.50 1997/10/15 19:13:02 freitag Exp $
+ * Version: $Id: tcp_output.c,v 1.51 1998/01/15 22:40:39 freitag Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
tp->retransmits == 0);
}
-static __inline__ void tcp_build_options(__u32 *ptr, struct tcp_opt *tp)
-{
- /* FIXME: We will still need to do SACK here. */
- if (tp->tstamp_ok) {
- *ptr++ = ntohl((TCPOPT_NOP << 24)
- | (TCPOPT_NOP << 16)
- | (TCPOPT_TIMESTAMP << 8)
- | TCPOLEN_TIMESTAMP);
- /* WARNING: If HZ is ever larger than 1000 on some system,
- * then we will be violating RFC1323 here because our timestamps
- * will be moving too fast.
- * FIXME: code TCP so it uses at most ~ 1000 ticks a second?
- * (I notice alpha is 1024 ticks now). -- erics
- */
- *ptr++ = htonl(jiffies);
- *ptr = htonl(tp->ts_recent);
- }
-}
-
-static __inline__ void tcp_update_options(__u32 *ptr, struct tcp_opt *tp)
-{
- /* FIXME: We will still need to do SACK here. */
- if (tp->tstamp_ok) {
- *++ptr = htonl(jiffies);
- *++ptr = htonl(tp->ts_recent);
- }
-}
-
/*
* This is the main buffer sending routine. We queue the buffer
* having checked it is sane seeming.
*/
-int tcp_send_skb(struct sock *sk, struct sk_buff *skb)
+void tcp_send_skb(struct sock *sk, struct sk_buff *skb)
{
struct tcphdr * th = skb->h.th;
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
"(skb = %p, data = %p, th = %p, len = %u)\n",
skb, skb->data, th, skb->len);
kfree_skb(skb, FREE_WRITE);
- return 0;
+ return;
}
/* If we have queued a header size packet.. (these crash a few
if(!th->syn && !th->fin) {
printk(KERN_DEBUG "tcp_send_skb: attempt to queue a bogon.\n");
kfree_skb(skb,FREE_WRITE);
- return 0;
+ return;
}
}
struct sk_buff * buff;
/* This is going straight out. */
- tp->last_ack_sent = th->ack_seq = htonl(tp->rcv_nxt);
+ tp->last_ack_sent = tp->rcv_nxt;
+ th->ack_seq = htonl(tp->rcv_nxt);
th->window = htons(tcp_select_window(sk));
tcp_update_options((__u32 *)(th+1),tp);
if (!tcp_timer_is_set(sk, TIME_RETRANS))
tcp_reset_xmit_timer(sk, TIME_RETRANS, tp->rto);
- return 0;
+ return;
}
queue:
tp->pending = TIME_PROBE0;
tcp_reset_xmit_timer(sk, TIME_PROBE0, tp->rto);
}
- return 0;
+ return;
}
/*
{
struct tcp_opt *tp = &sk->tp_pinfo.af_tcp;
int mss = sk->mss;
- long free_space = sock_rspace(sk)/2;
+ long free_space = sock_rspace(sk) / 2;
long window, cur_win;
if (tp->window_clamp) {
break;
}
- SOCK_DEBUG(sk, "retransmit sending\n");
+ SOCK_DEBUG(sk, "retransmit sending seq=%x\n", skb->seq);
/* Update ack and window. */
tp->last_ack_sent = th->ack_seq = htonl(tp->rcv_nxt);
 	/* The FIN can only be transmitted after the data. */
skb_queue_tail(&sk->write_queue, buff);
if (tp->send_head == NULL) {
+ /* FIXME: BUG! We need to check whether the FIN fits into the
+ * window here. If not, we need to do window probing (sick, but true).
+ */
struct sk_buff *skb1;
tp->packets_out++;
/* Swap the send and the receive. */
th->window = ntohs(tcp_select_window(sk));
th->seq = ntohl(tp->snd_nxt);
- tp->last_ack_sent = th->ack_seq = ntohl(tp->rcv_nxt);
+ tp->last_ack_sent = tp->rcv_nxt;
+ th->ack_seq = htonl(tp->rcv_nxt);
/* Fill in the packet and send it. */
tp->af_specific->send_check(sk, th, tp->tcp_header_len, buff);
+#if 0
SOCK_DEBUG(sk, "\rtcp_send_ack: seq %x ack %x\n",
tp->snd_nxt, tp->rcv_nxt);
+#endif
tp->af_specific->queue_xmit(buff);
tcp_statistics.TcpOutSegs++;
{
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
- if (sk->zapped)
- return; /* After a valid reset we can send no more. */
-
tcp_write_wakeup(sk);
tp->pending = TIME_PROBE0;
tp->backoff++;
EXPORT_SYMBOL(xrlim_allow);
#endif
+#ifdef CONFIG_RTNETLINK
+EXPORT_SYMBOL(rtnetlink_links);
+EXPORT_SYMBOL(__rta_fill);
+EXPORT_SYMBOL(rtnetlink_dump_ifinfo);
+EXPORT_SYMBOL(netlink_set_err);
+EXPORT_SYMBOL(netlink_broadcast);
+EXPORT_SYMBOL(rtnl_wlockct);
+EXPORT_SYMBOL(rtnl);
+EXPORT_SYMBOL(neigh_delete);
+EXPORT_SYMBOL(neigh_add);
+EXPORT_SYMBOL(neigh_dump_info);
+#endif
+
#ifdef CONFIG_PACKET_MODULE
EXPORT_SYMBOL(dev_set_allmulti);
EXPORT_SYMBOL(dev_set_promiscuity);
EXPORT_SYMBOL(sklist_remove_socket);
EXPORT_SYMBOL(rtnl_wait);
EXPORT_SYMBOL(rtnl_rlockct);
-#ifdef CONFIG_RTNETLINK
-EXPORT_SYMBOL(rtnl);
-EXPORT_SYMBOL(rtnl_wlockct);
-#endif
#endif
#if defined(CONFIG_IPV6_MODULE) || defined(CONFIG_PACKET_MODULE)
*/
static struct rpc_wait_queue childq = RPC_INIT_WAITQ("childq");
+/*
+ * RPC tasks sit here while waiting for conditions to improve.
+ */
+static struct rpc_wait_queue delay_queue = RPC_INIT_WAITQ("delayq");
+
/*
* All RPC tasks are linked into this list
*/
void
rpc_delay(struct rpc_task *task, unsigned long delay)
{
- static struct rpc_wait_queue delay_queue;
-
task->tk_timeout = delay;
rpc_sleep_on(&delay_queue, task, NULL, __rpc_atrun);
}
static int executing = 0;
int incr = RPC_IS_ASYNC(task)? 1 : 0;
- if (incr && (executing || rpc_inhibit)) {
- printk("RPC: rpc_execute called recursively!\n");
- return;
+ if (incr) {
+ if (rpc_inhibit) {
+ printk("RPC: execution inhibited!\n");
+ return;
+ }
+ if (executing)
			printk("RPC: %d tasks already executing\n", executing);
}
+
executing += incr;
__rpc_execute(task);
executing -= incr;
save_flags(oldflags); cli();
rpc_make_runnable(child);
restore_flags(oldflags);
+ /* N.B. Is it possible for the child to have already finished? */
rpc_sleep_on(&childq, task, func, NULL);
}
struct rpc_task **q, *rovr;
dprintk("RPC: killing all tasks for client %p\n", clnt);
+ /* N.B. Why bother to inhibit? Nothing blocks here ... */
rpc_inhibit++;
for (q = &all_tasks; (rovr = *q); q = &rovr->tk_next_task) {
if (!clnt || rovr->tk_client == clnt) {
if (!t)
return;
- printk("-pid- proc flgs status -client- --rqstp- -timeout "
+ printk("-pid- proc flgs status -client- -prog- --rqstp- -timeout "
"-rpcwait -action- --exit--\n");
for (; t; t = next) {
next = t->tk_next_task;
- printk("%05d %04d %04x %06d %8p %8p %08ld %8p %8p %8p\n",
+ printk("%05d %04d %04x %06d %8p %6d %8p %08ld %8s %8p %8p\n",
t->tk_pid, t->tk_proc, t->tk_flags, t->tk_status,
- t->tk_client, t->tk_rqstp, t->tk_timeout,
- t->tk_rpcwait, t->tk_action, t->tk_exit);
+ t->tk_client, t->tk_client->cl_prog,
+ t->tk_rqstp, t->tk_timeout,
+ t->tk_rpcwait ? rpc_qname(t->tk_rpcwait) : " <NULL> ",
+ t->tk_action, t->tk_exit);
if (!(t->tk_flags & RPC_TASK_NFSWRITE))
continue;
{
struct svc_sock *svsk;
- disable_bh(NET_BH);
+ start_bh_atomic();
if ((svsk = serv->sv_sockets) != NULL)
rpc_remove_list(&serv->sv_sockets, svsk);
- enable_bh(NET_BH);
+ end_bh_atomic();
if (svsk) {
dprintk("svc: socket %p dequeued\n", svsk->sk_sk);
static inline void
svc_sock_received(struct svc_sock *svsk, int count)
{
- disable_bh(NET_BH);
+ start_bh_atomic();
if ((svsk->sk_data -= count) < 0) {
printk(KERN_NOTICE "svc: sk_data negative!\n");
svsk->sk_data = 0;
svsk->sk_sk);
svc_sock_enqueue(svsk);
}
- enable_bh(NET_BH);
+ end_bh_atomic();
}
/*
static inline void
svc_sock_accepted(struct svc_sock *svsk)
{
- disable_bh(NET_BH);
+ start_bh_atomic();
svsk->sk_busy = 0;
svsk->sk_conn--;
if (svsk->sk_conn || svsk->sk_data || svsk->sk_close) {
svsk->sk_sk);
svc_sock_enqueue(svsk);
}
- enable_bh(NET_BH);
+ end_bh_atomic();
}
/*
if (signalled())
return -EINTR;
- disable_bh(NET_BH);
+ start_bh_atomic();
if ((svsk = svc_sock_dequeue(serv)) != NULL) {
- enable_bh(NET_BH);
+ end_bh_atomic();
rqstp->rq_sock = svsk;
svsk->sk_inuse++; /* N.B. where is this decremented? */
} else {
*/
current->state = TASK_INTERRUPTIBLE;
add_wait_queue(&rqstp->rq_wait, &wait);
- enable_bh(NET_BH);
+ end_bh_atomic();
schedule();
if (!(svsk = rqstp->rq_sock)) {
task->tk_pid, status, xprt->connected);
task->tk_timeout = 60 * HZ;
- disable_bh(NET_BH);
+ start_bh_atomic();
if (!xprt->connected) {
rpc_sleep_on(&xprt->reconn, task,
xprt_reconn_status, xprt_reconn_timeout);
- enable_bh(NET_BH);
+ end_bh_atomic();
return;
}
- enable_bh(NET_BH);
+ end_bh_atomic();
}
xprt->connecting = 0;
/* For fast networks/servers we have to put the request on
* the pending list now:
*/
- disable_bh(NET_BH);
+ start_bh_atomic();
rpc_add_wait_queue(&xprt->pending, task);
task->tk_callback = NULL;
- enable_bh(NET_BH);
+ end_bh_atomic();
/* Continue transmitting the packet/record. We must be careful
* to cope with writespace callbacks arriving _after_ we have
task->tk_pid, xprt->snd_buf.io_len,
req->rq_slen);
task->tk_status = 0;
- disable_bh(NET_BH);
+ start_bh_atomic();
if (!xprt->write_space) {
/* Remove from pending */
rpc_remove_wait_queue(task);
rpc_sleep_on(&xprt->sending, task,
xprt_transmit_status, NULL);
- enable_bh(NET_BH);
+ end_bh_atomic();
return;
}
- enable_bh(NET_BH);
+ end_bh_atomic();
}
}
*/
task->tk_timeout = req->rq_timeout.to_current;
- disable_bh(NET_BH);
+ start_bh_atomic();
if (!req->rq_gotit) {
rpc_sleep_on(&xprt->pending, task,
xprt_receive_status, xprt_timer);
}
- enable_bh(NET_BH);
+ end_bh_atomic();
dprintk("RPC: %4d xprt_receive returns %d\n",
task->tk_pid, task->tk_status);
dprintk("RPC: %4d release request %p\n", task->tk_pid, req);
/* remove slot from queue of pending */
- disable_bh(NET_BH);
+ start_bh_atomic();
if (task->tk_rpcwait) {
printk("RPC: task of released request still queued!\n");
#ifdef RPC_DEBUG
rpc_del_timer(task);
rpc_remove_wait_queue(task);
}
- enable_bh(NET_BH);
+ end_bh_atomic();
/* Decrease congestion value. If congestion threshold is not yet
* reached, pass on the request slot.
# successful. If it is, then all of the "*.orig" files are removed.
#
# Nick Holloway <Nick.Holloway@alfie.demon.co.uk>, 2nd January 1995.
+#
+# Added support for handling multiple types of compression. This includes
+# gzip, bzip, bzip2, zip, compress, and plaintext.
+#
+# Adam Sulmicki <adam@cfar.umd.edu>, 1st January 1997.
# Set directories from arguments, or use defaults.
sourcedir=${1-/usr/src/linux}
while :
do
SUBLEVEL=`expr $SUBLEVEL + 1`
- patch=patch-$VERSION.$PATCHLEVEL.$SUBLEVEL.gz
- if [ ! -r $patchdir/$patch ]
- then
- break
+ patch=patch-$VERSION.$PATCHLEVEL.$SUBLEVEL
+ if [ -r $patchdir/${patch}.gz ]; then
+ ext=".gz"
+ name="gzip"
+ uncomp="gunzip -dc"
+ elif [ -r $patchdir/${patch}.bz ]; then
+ ext=".bz"
+ name="bzip"
+ uncomp="bunzip -dc"
+ elif [ -r $patchdir/${patch}.bz2 ]; then
+ ext=".bz2"
+ name="bzip2"
+ uncomp="bunzip2 -dc"
+ elif [ -r $patchdir/${patch}.zip ]; then
+ ext=".zip"
+ name="zip"
+ uncomp="unzip -p"
+ elif [ -r $patchdir/${patch}.Z ]; then
+ ext=".Z"
+ name="uncompress"
+ uncomp="uncompress -c"
+ elif [ -r $patchdir/${patch} ]; then
+ ext=""
+ name="plaintext"
+ uncomp="cat"
+ else
+ break
fi
- echo -n "Applying $patch... "
- if gunzip -dc $patchdir/$patch | patch -p1 -s -N -E -d $sourcedir
+ echo -n "Applying ${patch} (${name})... "
+ if $uncomp ${patchdir}/${patch}${ext} | patch -p1 -s -N -E -d $sourcedir
then
- echo "done."
+ echo "done."
else
echo "failed. Clean up yourself."
break