S: 95170 Deuil La Barre
S: France
+N: Aristeu Sergio Rozanski Filho
+E: aris@conectiva.com.br
+D: Support for EtherExpress 10 ISA (i82595) in eepro driver
+S: Rua Tocantins, 89 - Cristo Rei
+S: 80050-430 - Curitiba - Parana
+S: Brazil
+
N: Alessandro Rubini
E: rubini@ipvvis.unipv.it
D: the gpm mouse server and kernel support for it
IBM ServeRAID Support
CONFIG_SCSI_IPS
This is support for the IBM ServeRAID hardware RAID controllers.
- Consult the SCSI-HOWTO, available via anonymous FTP from
- ftp://metalab.unc.edu/pub/Linux/docs/HOWTO, and the file
- README.ips in drivers/scsi for more information. If this driver
- does not work correctly without modification please contact the
- author by email at ipslinux@us.ibm.com.
+ See http://www.developer.ibm.com/welcome/netfinity/serveraid.html
+ for more information. If this driver does not work correctly
+ without modification please contact the author by email at
+ ipslinux@us.ibm.com.
BusLogic SCSI support
CONFIG_SCSI_BUSLOGIC
EtherExpress PRO support
CONFIG_EEXPRESS_PRO
- If you have a network (Ethernet) card of this type, say Y. Note
- however that the EtherExpress PRO/100 Ethernet card has its own
- separate driver. Please read the Ethernet-HOWTO, available via FTP
+ If you have a network (Ethernet) card of this type, say Y. This
+  driver supports Intel i82595{FX,TX}-based boards. Note, however,
+ that the EtherExpress PRO/100 Ethernet card has its own separate
+ driver. Please read the Ethernet-HOWTO, available via FTP
(user: anonymous) in ftp://metalab.unc.edu/pub/Linux/docs/HOWTO.
This driver is also available as a module ( = code which can be
FBA devices
CONFIG_DASD_FBA
- FBA devices are currently unsupported.
+  FBA devices include, for example, the virtual disk in storage under VM/ESA.
+
+Diag access to CMS formatted minidisk
+CONFIG_DASD_MDSK
+  By using this access method you can access any disk supported by VM/ESA.
+  You have to format the disk under CMS and then specify the parameter
+  dasd_force_diag=<devno> on the kernel parameter line.
Compaq SMART2 support
CONFIG_BLK_CPQ_DA
Traffic Shaper For Linux
-This is the current ALPHA release of the traffic shaper for Linux. It works
+This is the current BETA release of the traffic shaper for Linux. It works
within the following limits:
o Minimum shaping speed is currently about 9600 baud (it can only
mrouted tunnels via a traffic shaper to control bandwidth usage.
The shaper is device/route based. This makes it very easy to use
-with any setup BUT less flexible. You may well want to combine this patch
-with Mike McLagan <mmclagan@linux.org>'s patch to allow routes to be
-specified by source/destination pairs.
+with any setup BUT less flexible. You may need to use iproute2 to set up
+multiple route tables to get the flexibility.
There is no "borrowing" or "sharing" scheme. This is a simple
-traffic limiter. I'd like to implement Van Jacobson and Sally Floyd's CBQ
-architecture into Linux one day (maybe in 2.1 sometime) and do this with
-style.
+traffic limiter. Linux 2.2 implements Van Jacobson and Sally Floyd's CBQ
+architecture; this is the preferred solution. The shaper is intended for
+simple or backward-compatible setups.
Alan
People keep asking about the WDT watchdog timer hardware: The phone contacts
for Industrial Computer Source are:
-US: 619 677 0877 (sales) 0895 (fax)
-UK: 01243 533900
-France (1) 69.18.74.30
+Industrial Computer Source
+http://www.indcompsrc.com
+ICS Advent, San Diego
+6260 Sequence Dr.
+San Diego, CA 92121-4371
+Phone (858) 677-0877
+FAX: (858) 677-0895
+
+ICS Advent Europe, UK
+Oving Road
+Chichester,
+West Sussex,
+PO19 4ET, UK
+Phone: 00.44.1243.533900
-Industrial Computer Source
-9950 Barnes Canyon Road
-San Diego, CA
-
-http://www.industry.net/indcompsrc
and please mention Linux when enquiring.
IBM ServeRAID RAID DRIVER
P: Keith Mitchell
M: ipslinux@us.ibm.com
-W: http://www.developer.ibm.com/welcome/netfinity/serveraid_beta.html
+W: http://www.developer.ibm.com/welcome/netfinity/serveraid.html
S: Supported
IDE DRIVER [GENERAL]
extern void __divqu (void);
extern void __remqu (void);
+EXPORT_SYMBOL(init_mm);
+
EXPORT_SYMBOL(alpha_mv);
EXPORT_SYMBOL(enable_irq);
EXPORT_SYMBOL(disable_irq);
return -EBUSY;
while (*(void **)lock)
- schedule();
+ barrier();
goto again;
}
unsigned int i;
unsigned long delay;
+ /*
+ * something may have generated an irq long ago and we want to
+ * flush such a longstanding irq before considering it as spurious.
+ */
+ spin_lock_irq(&irq_controller_lock);
+ for (i = NR_IRQS-1; i > 0; i--)
+ if (!irq_desc[i].action)
+ irq_desc[i].handler->startup(i);
+ spin_unlock_irq(&irq_controller_lock);
+
+ /* Wait for longstanding interrupts to trigger. */
+ for (delay = jiffies + HZ/50; time_after(delay, jiffies); )
+ /* about 20ms delay */ synchronize_irq();
+
/*
- * first, enable any unassigned irqs
+ * enable any unassigned irqs
+	 * (we must start them up again here because if a longstanding irq
+ * happened in the previous stage, it may have masked itself)
*/
spin_lock_irq(&irq_controller_lock);
for (i = NR_IRQS-1; i > 0; i--) {
}
printk("\nCall Trace: ");
if (!esp || (esp & 3))
- printk("<INVALID ESP!>");
+		printk("Bad ESP value.");
else {
stack = (unsigned long *) esp;
i = 1;
}
printk("\nCode: ");
if (!regs->eip || regs->eip==-1)
- printk("<INVALID EIP!>");
+ printk("Bad EIP value.");
else {
for(i=0;i<20;i++)
printk("%02x ", ((unsigned char *)regs->eip)[i]);
MAKEBOOT = $(MAKE) -C arch/$(ARCH)/boot
+MAKESILO = $(MAKE) -C arch/$(ARCH)/tools/silo
+
+MAKEDASDFMT = $(MAKE) -C arch/$(ARCH)/tools/dasdfmt
+
silo:
- @$(MAKEBOOT) silo
+ @$(MAKESILO) silo
dasdfmt:
- @$(MAKEBOOT) dasdfmt
+ @$(MAKEDASDFMT) dasdfmt
image: vmlinux
@$(MAKEBOOT) image
$(OBJCOPY) -O binary $< $@
image: $(CONFIGURE) $(TOPDIR)/vmlinux \
- iplfba.boot ipleckd.boot
+ iplfba.boot ipleckd.boot ipldump.boot
$(OBJCOPY) -O binary $(TOPDIR)/vmlinux image
$(NM) $(TOPDIR)/vmlinux | grep -v '\(compiled\)\|\( [aU] \)\|\(\.\)\|\(LASH[RL]DI\)' | sort > $(TOPDIR)/System.map
dep:
clean:
- rm -f image listing iplfba.boot ipleckd.boot
+ rm -f image listing iplfba.boot ipleckd.boot ipldump.boot
--- /dev/null
+/*
+ * arch/s390/boot/ipldump.S
+ *
+ * S390 version
+ * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com),
+ *
+ * Tape dump IPL record. Put it on a tape and IPL from it; it will
+ * write a dump of real storage to the tape, following the IPL record.
+ */
+
+#include <linux/config.h>
+#include <asm/setup.h>
+#include <asm/lowcore.h>
+
+#define IPL_BS 1024
+ .org 0
+ .long 0x00080000,0x80000000+_start # The first 24 bytes are loaded
+ .long 0x07000000,0x60000001 # by ipl to addresses 0-23.
+ .long 0x02000000,0x20000000+IPL_BS # (a PSW and two CCWs).
+ .long 0x00000000,0x00000000
+ .long 0x00000000,0x00000000 # svc old psw
+ .long 0x00000000,0x00000000 # program check old psw
+ .long 0x00000000,0x00000000 # machine check old psw
+ .long 0x00000000,0x00000000 # io old psw
+ .long 0x00000000,0x00000000
+ .long 0x00000000,0x00000000
+ .long 0x00000000,0x00000000
+ .long 0x000a0000,0x00000058 # external new psw
+ .long 0x000a0000,0x00000060 # svc new psw
+ .long 0x000a0000,0x00000068 # program check new psw
+ .long 0x000a0000,0x00000070 # machine check new psw
+ .long 0x00080000,0x80000000+.Lioint # io new psw
+
+ .org 0x100
+ .globl _start
+_start:
+ l %r1,0xb8 # load ipl subchannel number
+#
+# find out memory size
+#
+ mvc 104(8,0),.Lpcmem0 # setup program check handler
+ slr %r3,%r3
+ lhi %r2,1
+ sll %r2,20
+.Lloop0:
+ l %r0,0(%r3) # test page
+ ar %r3,%r2 # add 1M
+        jnm    .Lloop0                 # r3 < 0x80000000 -> loop
+.Lchkmem0:
+ n %r3,.L4malign0 # align to multiples of 4M
+ st %r3,.Lmemsize # store memory size
+.Lmemok:
+
+#
+# first write a tape mark
+#
+ bras %r14,.Ltapemark
+#
+# write real storage to tape
+#
+ slr %r2,%r2 # start at address 0
+        bras   %r14,.Lwriter           # write storage to tape
+#
+# write another tape mark
+#
+ bras %r14,.Ltapemark
+#
+# everything written, stop processor
+#
+ lpsw .Lstopped
+#
+# subroutine for writing to tape
+# Parameters:
+# R1 = device number
+# R2 = start address
+# R3 = length
+.Lwriter:
+ st %r14,.Lldret
+ la %r12,.Lorbread # r12 = address of orb
+ la %r5,.Lirb # r5 = address of irb
+ st %r2,.Lccwwrite+4 # initialize CCW data addresses
+ lctl %c6,%c6,.Lcr6
+ slr %r2,%r2
+.Lldlp:
+ lhi %r6,3 # 3 retries
+.Lssch:
+ ssch 0(%r12) # write chunk of IPL_BS bytes
+ jnz .Llderr
+.Lw4end:
+ bras %r14,.Lwait4io
+ tm 8(%r5),0x82 # do we have a problem ?
+ jnz .Lrecov
+ l %r0,.Lccwwrite+4 # update CCW data addresses
+ ahi %r0,IPL_BS
+ st %r0,.Lccwwrite+4
+ clr %r0,%r3 # enough ?
+ jl .Lldlp
+.Ldone:
+ l %r14,.Lldret
+ br %r14 # r2 contains the total size
+.Lrecov:
+ bras %r14,.Lsense # do the sensing
+ brct %r6,.Lssch # dec. retry count & branch
+ j .Llderr
+.Ltapemark:
+ st %r14,.Lldret
+ la %r12,.Lorbmark # r12 = address of orb
+ la %r5,.Lirb # r5 = address of irb
+ lctl %c6,%c6,.Lcr6
+ ssch 0(%r12) # write a tape mark
+ jnz .Llderr
+ bras %r14,.Lwait4io
+ l %r14,.Lldret
+ br %r14
+#
+# Sense subroutine
+#
+.Lsense:
+ st %r14,.Lsnsret
+ la %r7,.Lorbsense
+ ssch 0(%r7) # start sense command
+ jnz .Llderr
+ bras %r14,.Lwait4io
+ l %r14,.Lsnsret
+ tm 8(%r5),0x82 # do we have a problem ?
+ jnz .Llderr
+ br %r14
+#
+# Wait for interrupt subroutine
+#
+.Lwait4io:
+ lpsw .Lwaitpsw
+.Lioint:
+ c %r1,0xb8 # compare subchannel number
+ jne .Lwait4io
+ tsch 0(%r5)
+ slr %r0,%r0
+ tm 8(%r5),0x82 # do we have a problem ?
+ jnz .Lwtexit
+ tm 8(%r5),0x04 # got device end ?
+ jz .Lwait4io
+.Lwtexit:
+ br %r14
+.Llderr:
+ lpsw .Lcrash
+
+ .align 8
+.Lorbread:
+ .long 0x00000000,0x0080ff00,.Lccwwrite
+ .align 8
+.Lorbsense:
+ .long 0x00000000,0x0080ff00,.Lccwsense
+ .align 8
+.Lorbmark:
+ .long 0x00000000,0x0080ff00,.Lccwmark
+ .align 8
+.Lccwwrite:
+ .long 0x01200000+IPL_BS,0x00000000
+.Lccwsense:
+ .long 0x04200001,0x00000000
+.Lccwmark:
+ .long 0x1f200001,0x00000000
+.Lwaitpsw:
+ .long 0x020a0000,0x80000000+.Lioint
+
+.Lirb: .long 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
+.Lcr6: .long 0xff000000
+ .align 8
+.Lcrash:.long 0x000a0000,0x00000000
+.Lstopped: .long 0x000a0000,0x00001234
+.Lpcmem0:.long 0x00080000,0x80000000 + .Lchkmem0
+.L4malign0:.long 0xffc00000
+.Lmemsize:.long 0
+.Lldret:.long 0
+.Lsnsret: .long 0
+
+ .org IPL_BS
+
        lr     %r15,%r4                # save number of blocks
slr %r7,%r7
icm %r7,3,.Lrdcdata+14 # load heads to r7
- clc .Lrdcdata+3(2),.L3390
- jne .L010 # 3380 or 3390 ?
- lhi %r6,12 # setup r6 correct!
- j .L011
-.L010:
- clc .Lrdcdata+3(2),.L9343
- jne .L013
lhi %r6,9
- j .L011
-.L013:
+ clc .Lrdcdata+3(2),.L9343
+ je .L011
lhi %r6,10
+ clc .Lrdcdata+3(2),.L3380
+ je .L011
+ lhi %r6,12
+ clc .Lrdcdata+3(2),.L3390
+ je .L011
+ bras %r14,.Ldisab
.L011:
# loop for nbl times
.Lrdloop:
.L3390:
.word 0x3390
.L9343:
- .word 0x9343
+ .word 0x934a
+.L3380:
+ .word 0x3380
.Lnull:
.long 0x00000000,0x00000000
+ .align 4
.Lrdcdata:
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
endmenu
source drivers/s390/Config.in
+comment 'Character devices'
+bool 'Unix98 PTY support' CONFIG_UNIX98_PTYS
+if [ "$CONFIG_UNIX98_PTYS" = "y" ]; then
+ int 'Maximum number of Unix98 PTYs in use (0-2048)' CONFIG_UNIX98_PTY_COUNT 256
+fi
if [ "$CONFIG_NET" = "y" ]; then
source net/Config.in
#
CONFIG_MODULES=y
# CONFIG_MODVERSIONS is not set
-# CONFIG_KMOD is not set
+CONFIG_KMOD=y
#
# General setup
# CONFIG_BLK_DEV_MD is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_MDISK=y
-# CONFIG_MDISK_SYNC is not set
+CONFIG_BLK_DEV_XPRAM=y
+# CONFIG_MDISK is not set
CONFIG_DASD=y
CONFIG_DASD_ECKD=y
+CONFIG_DASD_FBA=y
+CONFIG_DASD_MDSK=y
#
# S/390 Network device support
CONFIG_3215_CONSOLE=y
CONFIG_HWC=y
CONFIG_HWC_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_UNIX98_PTY_COUNT=256
#
# Networking options
# CONFIG_NTFS_FS is not set
# CONFIG_HPFS_FS is not set
CONFIG_PROC_FS=y
+# CONFIG_DEVPTS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_EXT2_FS=y
#
# CONFIG_CODA_FS is not set
CONFIG_NFS_FS=y
-CONFIG_NFSD=y
-# CONFIG_NFSD_SUN is not set
+# CONFIG_NFSD is not set
CONFIG_SUNRPC=y
CONFIG_LOCKD=y
# CONFIG_SMB_FS is not set
O_TARGET := kernel.o
O_OBJS := lowcore.o entry.o bitmap.o traps.o time.o process.o irq.o \
setup.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o \
- s390fpu.o s390io.o reipl.o
+ s390fpu.o s390io.o reipl.o debug.o s390_ext.o s390dyn.o \
+ s390mach.o
OX_OBJS := s390_ksyms.o
MX_OBJS :=
*/
#include <linux/stddef.h>
+#include <linux/kernel.h>
#include <asm/string.h>
#include <asm/ebcdic.h>
}
}
-int sys_msgcp(char *str)
-{
- char buffer[256];
-
- sprintf(buffer, "MSG * %s", str);
- cpcmd(buffer, NULL, 0);
- cpcmd("STOP", NULL, 0);
-}
-
--- /dev/null
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/malloc.h>
+#include <asm/ebcdic.h>
+
+#ifdef MODULE
+#include <linux/module.h>
+#endif
+
+#include "debug.h"
+
+debug_info_t debug_areas[MAX_DEBUG_AREAS];
+debug_info_t *free_area = 0;
+static int initialized = 0;
+
+static spinlock_t debug_lock = SPIN_LOCK_UNLOCKED;
+
+debug_info_t *
+debug_register (char *name, int page_order, int nr_areas)
+{
+ debug_info_t *rc = 0;
+ int i;
+ long flags;
+
+ if ( ! initialized ){
+ debug_init();
+ initialized = 1;
+ }
+ if (!free_area)
+ {
+ printk (KERN_WARNING "No free debug area\n");
+ return NULL;
+ }
+ spin_lock_irqsave (&debug_lock, flags);
+ rc = free_area;
+ free_area = *((debug_info_t **) rc);
+
+ rc->areas = (debug_entry_t **) kmalloc (nr_areas *
+ sizeof (debug_entry_t *),
+ GFP_ATOMIC);
+ if (!rc->areas)
+ {
+ goto noareas;
+ }
+
+ for (i = 0; i < nr_areas; i++)
+ {
+ rc->areas[i] = (debug_entry_t *) __get_free_pages (GFP_ATOMIC,
+ page_order);
+ if (!rc->areas[i])
+ {
+ for (i--; i >= 0; i--)
+ {
+ free_pages ((unsigned long) rc->areas[i], page_order);
+ }
+ goto nopages;
+ }
+ }
+
+ rc->page_order = page_order;
+ rc->nr_areas = nr_areas;
+ rc->name = kmalloc (strlen (name) + 1, GFP_ATOMIC);
+ strncpy (rc->name, name, strlen (name));
+ rc->name[strlen (name)] = 0;
+  rc->active_entry = kmalloc (nr_areas * sizeof (int), GFP_ATOMIC);
+ memset(rc->active_entry, 0, nr_areas * sizeof(int));
+ printk (KERN_INFO "reserved %d areas of %d pages for debugging %s\n",
+ nr_areas, 1 << page_order, name);
+ goto exit;
+
+nopages:
+noareas:
+ free_area = rc;
+exit:
+ spin_unlock_irqrestore (&debug_lock, flags);
+ return rc;
+}
+
+void
+debug_unregister (debug_info_t * id, char *name)
+{
+ int i = id->nr_areas;
+ long flags;
+ spin_lock_irqsave (&debug_lock, flags);
+ printk (KERN_INFO "freeing debug area %p named '%s'\n", id, name);
+ if (strncmp (name, id->name, strlen (name)))
+ {
+ printk (KERN_ERR "name '%s' does not match against '%s'\n",
+ name, id->name);
+ }
+ for (i--; i >= 0; i--)
+ {
+ free_pages ((unsigned long) id->areas[i], id->page_order);
+ }
+ kfree (id->areas);
+ kfree (id->name);
+ *((debug_info_t **) id) = free_area;
+ free_area = id;
+ spin_unlock_irqrestore (&debug_lock, flags);
+ return;
+}
+
+static inline void
+proceed_active_entry (debug_info_t * id)
+{
+ id->active_entry[id->active_area] =
+    (id->active_entry[id->active_area] + 1) %
+ ((PAGE_SIZE / sizeof (debug_entry_t)) << (id->page_order));
+}
+
+static inline void
+proceed_active_area (debug_info_t * id)
+{
+  id->active_area = (id->active_area + 1) % id->nr_areas;
+}
+
+static inline debug_entry_t *
+get_active_entry (debug_info_t * id)
+{
+ return &id->areas[id->active_area][id->active_entry[id->active_area]];
+}
+
+static inline debug_entry_t *
+debug_common ( debug_info_t * id )
+{
+ debug_entry_t * active;
+ proceed_active_entry (id);
+ active = get_active_entry (id);
+ STCK (active->id.stck);
+ active->id.stck = active->id.stck >> 4;
+ active->id.fields.cpuid = smp_processor_id ();
+ active->caller = __builtin_return_address (0);
+ return active;
+}
+
+void
+debug_event (debug_info_t * id, int level, unsigned int tag)
+{
+ long flags;
+ debug_entry_t *active;
+ if (!id)
+ {
+ return;
+ }
+ if (level < id->level)
+ {
+ return;
+ }
+ spin_lock_irqsave (&id->lock, flags);
+ active = debug_common(id);
+ active->tag.tag = tag;
+ spin_unlock_irqrestore (&id->lock, flags);
+ return;
+}
+
+void
+debug_text_event (debug_info_t * id, int level, char tag[4])
+{
+ long flags;
+ debug_entry_t *active;
+ if (!id)
+ {
+ return;
+ }
+ if (level < id->level)
+ {
+ return;
+ }
+ spin_lock_irqsave (&id->lock, flags);
+ active = debug_common(id);
+ strncpy ( active->tag.text, tag, 4);
+ ASCEBC (active->tag.text, 4 );
+ spin_unlock_irqrestore (&id->lock, flags);
+ return;
+}
+
+void
+debug_exception (debug_info_t * id, int level, unsigned int tag)
+{
+ long flags;
+ debug_entry_t *active;
+ if (!id)
+ {
+ return;
+ }
+ if (level < id->level)
+ {
+ return;
+ }
+ spin_lock_irqsave (&id->lock, flags);
+ active = debug_common(id);
+ active->tag.tag = tag;
+ proceed_active_area (id);
+ spin_unlock_irqrestore (&id->lock, flags);
+
+ return;
+}
+
+void
+debug_text_exception (debug_info_t * id, int level, char tag[4])
+{
+ long flags;
+ debug_entry_t *active;
+ if (!id)
+ {
+ return;
+ }
+ if (level < id->level)
+ {
+ return;
+ }
+ spin_lock_irqsave (&id->lock, flags);
+ active = debug_common(id);
+ strncpy ( active->tag.text, tag, 4);
+ ASCEBC (active->tag.text, 4 );
+ proceed_active_area (id);
+ spin_unlock_irqrestore (&id->lock, flags);
+ return;
+}
+
+int
+debug_init (void)
+{
+ int rc = 0;
+ int i;
+ for (i = 0; i < MAX_DEBUG_AREAS - 1; i++)
+ {
+ *(debug_info_t **) (&debug_areas[i]) =
+ (debug_info_t *) (&debug_areas[i + 1]);
+ }
+ *(debug_info_t **) (&debug_areas[i]) = (debug_info_t *) NULL;
+ free_area = &(debug_areas[0]);
+ printk (KERN_INFO "%d areas reserved for debugging information\n",
+ MAX_DEBUG_AREAS);
+ return rc;
+}
+
+#ifdef MODULE
+int
+init_module (void)
+{
+ int rc = 0;
+ rc = debug_init ();
+ if (rc)
+ {
+ printk (KERN_INFO "An error occurred with debug_init\n");
+ }
+
+ { /* test section */
+ debug_info_t *a[4];
+ printk (KERN_INFO "registering 1, %p\n", a[0] =
+ debug_register ("debug1", 1, 1));
+ printk (KERN_INFO "registering 2, %p\n", a[1] =
+ debug_register ("debug2", 1, 2));
+ printk (KERN_INFO "registering 3, %p\n", a[2] =
+ debug_register ("debug3", 2, 1));
+ printk (KERN_INFO "registering 4, %p\n", a[3] =
+ debug_register ("debug4", 2, 2));
+ debug_unregister (a[0], "debug1");
+    debug_unregister (a[1], "debug2");
+ printk (KERN_INFO "registering 1, %p\n", a[0] =
+ debug_register ("debug5", 1, 1));
+ printk (KERN_INFO "registering 2, %p\n", a[1] =
+ debug_register ("debug6", 1, 2));
+    debug_unregister (a[2], "debug3");
+ debug_unregister (a[3], "debug4");
+ debug_unregister (a[0], "debug5");
+ debug_unregister (a[1], "debug6");
+ }
+ return rc;
+}
+
+void
+cleanup_module (void)
+{
+
+ return;
+}
+
+#endif /* MODULE */
--- /dev/null
+
+#ifndef DEBUG_H
+#define DEBUG_H
+
+#include <asm/spinlock.h>
+
+#define MAX_DEBUG_AREAS 16
+
+#define STCK(x) asm volatile ("STCK %0":"=m" (x))
+
+typedef struct
+{
+ union
+ {
+ struct
+ {
+ unsigned long long cpuid:4;
+ unsigned long long clock:60;
+ }
+ fields;
+ unsigned long long stck;
+ }
+ id;
+ void *caller;
+ union
+ {
+ unsigned long tag;
+ char text[4];
+ }
+ tag;
+}
+debug_entry_t;
+
+typedef struct
+{
+ char *name;
+ int level;
+ int nr_areas;
+ int page_order;
+ debug_entry_t **areas;
+ int active_area;
+ int *active_entry;
+ spinlock_t lock;
+}
+debug_info_t;
+
+int debug_init (void);
+debug_info_t *debug_register (char *name, int page_order, int nr_areas);
+void debug_unregister (debug_info_t * id, char *name);
+void debug_event (debug_info_t * id, int level, unsigned int tag);
+void debug_text_event (debug_info_t * id, int level, char tag[4]);
+void debug_exception (debug_info_t * id, int level, unsigned int tag);
+void debug_text_exception (debug_info_t * id, int level, char tag[4]);
+
+#endif
/*
- * EBCDIC 037 conversion table:
+ * ASCII (IBM PC 437) -> EBCDIC 500
+ */
+__u8 _ascebc_500[256] =
+{
+ /*00 NUL SOH STX ETX EOT ENQ ACK BEL */
+ 0x00, 0x01, 0x02, 0x03, 0x37, 0x2D, 0x2E, 0x2F,
+ /*08 BS HT LF VT FF CR SO SI */
+ /* ->NL */
+ 0x16, 0x05, 0x15, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
+ /*10 DLE DC1 DC2 DC3 DC4 NAK SYN ETB */
+ 0x10, 0x11, 0x12, 0x13, 0x3C, 0x3D, 0x32, 0x26,
+ /*18 CAN EM SUB ESC FS GS RS US */
+ /* ->IGS ->IRS ->IUS */
+ 0x18, 0x19, 0x3F, 0x27, 0x22, 0x1D, 0x1E, 0x1F,
+ /*20 SP ! " # $ % & ' */
+ 0x40, 0x4F, 0x7F, 0x7B, 0x5B, 0x6C, 0x50, 0x7D,
+ /*28 ( ) * + , - . / */
+ 0x4D, 0x5D, 0x5C, 0x4E, 0x6B, 0x60, 0x4B, 0x61,
+ /*30 0 1 2 3 4 5 6 7 */
+ 0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7,
+ /*38 8 9 : ; < = > ? */
+ 0xF8, 0xF9, 0x7A, 0x5E, 0x4C, 0x7E, 0x6E, 0x6F,
+ /*40 @ A B C D E F G */
+ 0x7C, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7,
+ /*48 H I J K L M N O */
+ 0xC8, 0xC9, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6,
+ /*50 P Q R S T U V W */
+ 0xD7, 0xD8, 0xD9, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6,
+ /*58 X Y Z [ \ ] ^ _ */
+ 0xE7, 0xE8, 0xE9, 0x4A, 0xE0, 0x5A, 0x5F, 0x6D,
+ /*60 ` a b c d e f g */
+ 0x79, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87,
+ /*68 h i j k l m n o */
+ 0x88, 0x89, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96,
+ /*70 p q r s t u v w */
+ 0x97, 0x98, 0x99, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6,
+ /*78 x y z { | } ~ DL */
+ 0xA7, 0xA8, 0xA9, 0xC0, 0xBB, 0xD0, 0xA1, 0x07,
+ /*80*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*88*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*90*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*98*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*A0*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*A8*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*B0*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*B8*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*C0*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*C8*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*D0*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*D8*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*E0 sz */
+ 0x3F, 0x59, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*E8*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*F0*/
+ 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
+ /*F8*/
+ 0x90, 0x3F, 0x3F, 0x3F, 0x3F, 0xEA, 0x3F, 0xFF
+};
+
+/*
+ * EBCDIC 500 -> ASCII (IBM PC 437)
+ */
+__u8 _ebcasc_500[256] =
+{
+ /* 0x00 NUL SOH STX ETX *SEL HT *RNL DEL */
+ 0x00, 0x01, 0x02, 0x03, 0x07, 0x09, 0x07, 0x7F,
+ /* 0x08 -GE -SPS -RPT VT FF CR SO SI */
+ 0x07, 0x07, 0x07, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
+ /* 0x10 DLE DC1 DC2 DC3 -RES -NL BS -POC
+ -ENP ->LF */
+ 0x10, 0x11, 0x12, 0x13, 0x07, 0x0A, 0x08, 0x07,
+ /* 0x18 CAN EM -UBS -CU1 -IFS -IGS -IRS -ITB
+ -IUS */
+ 0x18, 0x19, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
+ /* 0x20 -DS -SOS FS -WUS -BYP LF ETB ESC
+ -INP */
+ 0x07, 0x07, 0x1C, 0x07, 0x07, 0x0A, 0x17, 0x1B,
+ /* 0x28 -SA -SFE -SM -CSP -MFA ENQ ACK BEL
+ -SW */
+ 0x07, 0x07, 0x07, 0x07, 0x07, 0x05, 0x06, 0x07,
+ /* 0x30 ---- ---- SYN -IR -PP -TRN -NBS EOT */
+ 0x07, 0x07, 0x16, 0x07, 0x07, 0x07, 0x07, 0x04,
+ /* 0x38 -SBS -IT -RFF -CU3 DC4 NAK ---- SUB */
+ 0x07, 0x07, 0x07, 0x07, 0x14, 0x15, 0x07, 0x1A,
+	/* 0x40  SP  RSP              ä               ---- */
+ 0x20, 0xFF, 0x83, 0x84, 0x85, 0xA0, 0x07, 0x86,
+ /* 0x48 [ . < ( + ! */
+ 0x87, 0xA4, 0x5B, 0x2E, 0x3C, 0x28, 0x2B, 0x21,
+ /* 0x50 & ---- */
+ 0x26, 0x82, 0x88, 0x89, 0x8A, 0xA1, 0x8C, 0x07,
+	/* 0x58  ß    ]    $    *    )    ;    ^ */
+ 0x8D, 0xE1, 0x5D, 0x24, 0x2A, 0x29, 0x3B, 0x5E,
+	/* 0x60  -    /         ----  Ä    ----  ----  ---- */
+ 0x2D, 0x2F, 0x07, 0x8E, 0x07, 0x07, 0x07, 0x8F,
+ /* 0x68 ---- , % _ > ? */
+ 0x80, 0xA5, 0x07, 0x2C, 0x25, 0x5F, 0x3E, 0x3F,
+ /* 0x70 ---- ---- ---- ---- ---- ---- ---- */
+ 0x07, 0x90, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
+ /* 0x78 * ` : # @ ' = " */
+ 0x70, 0x60, 0x3A, 0x23, 0x40, 0x27, 0x3D, 0x22,
+ /* 0x80 * a b c d e f g */
+ 0x07, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
+ /* 0x88 h i ---- ---- ---- */
+ 0x68, 0x69, 0xAE, 0xAF, 0x07, 0x07, 0x07, 0xF1,
+	/* 0x90  °    j    k    l    m    n    o    p */
+ 0xF8, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F, 0x70,
+ /* 0x98 q r ---- ---- */
+ 0x71, 0x72, 0xA6, 0xA7, 0x91, 0x07, 0x92, 0x07,
+ /* 0xA0 ~ s t u v w x */
+ 0xE6, 0x7E, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78,
+ /* 0xA8 y z ---- ---- ---- ---- */
+ 0x79, 0x7A, 0xAD, 0xAB, 0x07, 0x07, 0x07, 0x07,
+	/* 0xB0       ----            §    ---- */
+ 0x9B, 0x9C, 0x9D, 0xFA, 0x07, 0x07, 0x07, 0xAC,
+ /* 0xB8 ---- | ---- ---- ---- ---- */
+ 0xAB, 0x07, 0xAA, 0x7C, 0x07, 0x07, 0x07, 0x07,
+ /* 0xC0 { A B C D E F G */
+ 0x7B, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
+	/* 0xC8  H    I    ----  ö    ---- */
+ 0x48, 0x49, 0x07, 0x93, 0x94, 0x95, 0xA2, 0x07,
+ /* 0xD0 } J K L M N O P */
+ 0x7D, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F, 0x50,
+	/* 0xD8  Q    R    ----  ü */
+ 0x51, 0x52, 0x07, 0x96, 0x81, 0x97, 0xA3, 0x98,
+ /* 0xE0 \ S T U V W X */
+ 0x5C, 0xF6, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58,
+	/* 0xE8  Y    Z    ----  Ö    ----  ----  ---- */
+ 0x59, 0x5A, 0xFD, 0x07, 0x99, 0x07, 0x07, 0x07,
+ /* 0xF0 0 1 2 3 4 5 6 7 */
+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+	/* 0xF8  8    9    ----  ----  Ü    ----  ----  ---- */
+ 0x38, 0x39, 0x07, 0x07, 0x9A, 0x07, 0x07, 0x07
+};
+
+
+/*
+ * EBCDIC 037/500 conversion table:
* from upper to lower case
*/
__u8 _ebc_tolower[256] =
/*
- * EBCDIC 037 conversion table:
+ * EBCDIC 037/500 conversion table:
* from lower to upper case
*/
__u8 _ebc_toupper[256] =
enable = 0x03
daton = 0x04
+/*
+ * Base Address of this Module --- saved in __LC_ENTRY_BASE
+ */
+ .globl entry_base
+entry_base:
+
+#define BASED(name) name-entry_base(%r13)
#if 0
/* some code left lying around in case we need a
* R15 - kernel stack pointer
*/
-#define SAVE_ALL1(psworg) \
- st %r15,__LC_SAVE_AREA ; \
+#define SAVE_ALL(psworg) \
+ stm %r13,%r15,__LC_SAVE_AREA ; \
+ stam %a2,%a4,__LC_SAVE_AREA+12 ; \
+ basr %r13,0 ; /* temp base pointer */ \
+ l %r13,.Lentry_base-.(%r13) ; /* load &entry_base to %r13 */ \
tm psworg+1,0x01 ; /* test problem state bit */ \
- jz 0f ; /* skip stack setup save */ \
+ bz BASED(.+12) ; /* skip stack & access regs setup */ \
l %r15,__LC_KERNEL_STACK ; /* problem state -> load ksp */ \
-0: ahi %r15,-SP_SIZE ; /* make room for registers & psw */ \
- srl %r15,3 ; \
- sll %r15,3 ; /* align stack pointer to 8 */ \
- stm %r0,%r14,SP_R0(%r15) ; /* store gprs 0-14 to kernel stack */ \
+ lam %a2,%a4,BASED(.Lc_ac) ; /* set ac.reg. 2 to primary space */ \
+ /* and access reg. 4 to home space */ \
+0: s %r15,BASED(.Lc_spsize); /* make room for registers & psw */ \
+ n %r15,BASED(.Lc0xfffffff8) ; /* align stack pointer to 8 */ \
+ stm %r0,%r12,SP_R0(%r15) ; /* store gprs 0-12 to kernel stack */ \
st %r2,SP_ORIG_R2(%r15) ; /* store original content of gpr 2 */ \
- mvc SP_RF(4,%r15),__LC_SAVE_AREA ; /* move R15 to stack */ \
+ mvc SP_RD(12,%r15),__LC_SAVE_AREA ; /* move R13-R15 to stack */ \
stam %a0,%a15,SP_AREGS(%r15) ; /* store access registers to kst. */ \
+ mvc SP_AREGS+8(12,%r15),__LC_SAVE_AREA+12 ; /* store ac. regs */ \
mvc SP_PSW(8,%r15),psworg ; /* move user PSW to stack */ \
- xc 0(7,%r15),0(%r15) ; /* clear back chain & trap ind. */ \
- mvi SP_TRAP+3(%r15),psworg ; /* store trap indication in pt_regs */ \
- slr %r0,%r0 ; \
- sar %a2,%r0 ; /* set ac.reg. 2 to primary space */ \
- lhi %r0,1 ; \
- sar %a4,%r0 ; /* set access reg. 4 to home space */
-
-#define RESTORE_ALL1 \
- mvc 0x50(8,0),SP_PSW(%r15) ; /* move user PSW to lowcore */ \
+ la %r0,psworg ; /* store trap indication */ \
+ st %r0,SP_TRAP(%r15) ; \
+ xc 0(4,%r15),0(%r15) ; /* clear back chain */
+
+#define RESTORE_ALL \
+ mvc __LC_RETURN_PSW(8,0),SP_PSW(%r15) ; /* move user PSW to lowcore */ \
lam %a0,%a15,SP_AREGS(%r15) ; /* load the access registers */ \
lm %r0,%r15,SP_R0(%r15) ; /* load gprs 0-15 of user */ \
- ni 0x51(0),0xfd ; /* clear wait state bit */ \
- lpsw 0x50 /* back to caller */
-
-#if CONFIG_REMOTE_DEBUG
-#define SAVE_ALL(psworg) \
- SAVE_ALL1(psworg) \
- tm psworg+1,0x01 ; /* test problem state bit */ \
- jz 0f ; /* skip stack setup save */ \
- stctl %c0,%c15,SP_CRREGS(%r15) ; /* save control regs for remote debugging */ \
-0:
-
-#define RESTORE_ALL \
- tm SP_PSW+1(%r15),0x01 ; /* test problem state bit */ \
- jz 0f ; /* skip cr restore */ \
- lctl %c0,%c15,SP_CRREGS(%r15) ; /* store control regs for remote debugging */ \
-0: RESTORE_ALL1
-#else
-
-#define SAVE_ALL(psworg) \
- SAVE_ALL1(psworg)
-
-#define RESTORE_ALL \
- RESTORE_ALL1
-#endif
+ ni __LC_RETURN_PSW+1(0),0xfd ; /* clear wait state bit */ \
+ lpsw __LC_RETURN_PSW /* back to caller */
#define GET_CURRENT /* load pointer to task_struct to R9 */ \
- lhi %r9,-8192 ; \
- nr %r9,15
-
+ lr %r9,%r15 ; \
+ n %r9,BASED(.Lc0xffffe000)
/*
* Scheduler resume function, called by switch_to
*/
.globl resume
resume:
+ basr %r1,0 # setup base pointer
+resume_base:
l %r4,_TSS_PTREGS(%r3)
tm SP_PSW-SP_PTREGS(%r4),0x40 # is the new process using per ?
- jz RES_DN1 # if not we're fine
+ bz resume_noper-resume_base(%r1) # if not we're fine
stctl %r9,%r11,24(%r15) # We are using per stuff
clc _TSS_PER(12,%r3),24(%r15)
- je RES_DN1 # we got away without bashing TLB's
+ be resume_noper-resume_base(%r1) # we got away w/o bashing TLB's
lctl %c9,%c11,_TSS_PER(%r3) # Nope we didn't
-RES_DN1:
+resume_noper:
stm %r6,%r15,24(%r15) # store resume registers of prev task
st %r15,_TSS_KSP(%r2) # store kernel stack ptr to prev->tss.ksp
- lhi %r0,-8192
- nr %r0,%r15
+ lr %r0,%r15
+ n %r0,.Lc0xffffe000-resume_base(%r1)
l %r15,_TSS_KSP(%r3) # load kernel stack ptr from next->tss.ksp
- lhi %r1,8191
+ l %r1,.Lc8191-resume_base(%r1)
or %r1,%r15
- ahi %r1,1
+ la %r1,1(%r1)
st %r1,__LC_KERNEL_STACK # __LC_KERNEL_STACK = new kernel stack
stam %a2,%a2,_TSS_AR2(%r2) # store kernel access reg. 2
stam %a4,%a4,_TSS_AR4(%r2) # store kernel access reg. 4
* are executed with interrupts enabled.
*/
-sysc_lit:
- sysc_bhmask: .long bh_mask
- sysc_bhactive: .long bh_active
- sysc_do_signal: .long do_signal
- sysc_do_bottom_half:.long do_bottom_half
- sysc_schedule: .long schedule
- sysc_trace: .long syscall_trace
-#ifdef __SMP__
- sysc_schedtail: .long schedule_tail
-#endif
- sysc_clone: .long sys_clone
- sysc_fork: .long sys_fork
- sysc_vfork: .long sys_vfork
- sysc_sigreturn: .long sys_sigreturn
- sysc_rt_sigreturn: .long sys_rt_sigreturn
- sysc_execve: .long sys_execve
- sysc_sigsuspend: .long sys_sigsuspend
- sysc_rt_sigsuspend: .long sys_rt_sigsuspend
-
.globl system_call
system_call:
SAVE_ALL(0x20)
- XC SP_SVC_STEP(4,%r15),SP_SVC_STEP(%r15)
+ xc SP_SVC_STEP(4,%r15),SP_SVC_STEP(%r15)
pgm_system_call:
- basr %r13,0
- ahi %r13,sysc_lit-. # setup base pointer R13 to sysc_lit
slr %r8,%r8 # gpr 8 is call save (-> tracesys)
ic %r8,0x8B # get svc number from lowcore
stosm 24(%r15),0x03 # reenable interrupts
GET_CURRENT # load pointer to task_struct to R9
sll %r8,2
- l %r8,sys_call_table-sysc_lit(8,%r13) # get address of system call
+ l %r8,sys_call_table-entry_base(8,%r13) # get address of system call
tm flags+3(%r9),0x20 # PF_TRACESYS
- jnz sysc_tracesys
+ bnz BASED(sysc_tracesys)
basr %r14,%r8 # call sys_xxxx
st %r2,SP_R2(%r15) # store return value (change R2 on stack)
# ATTENTION: check sys_execve_glue before
sysc_return:
GET_CURRENT # load pointer to task_struct to R9
tm SP_PSW+1(%r15),0x01 # returning to user ?
- jno sysc_leave # no-> skip bottom half, resched & signal
+ bno BASED(sysc_leave) # no-> skip bottom half, resched & signal
#
# check, if bottom-half has to be done
#
- l %r1,sysc_bhmask-sysc_lit(%r13)
+ l %r1,BASED(.Lbhmask)
l %r0,0(%r1)
- l %r1,sysc_bhactive-sysc_lit(%r13)
+ l %r1,BASED(.Lbhactive)
n %r0,0(%r1)
- jnz sysc_handle_bottom_half
+ bnz BASED(sysc_handle_bottom_half)
#
# check, if reschedule is needed
#
sysc_return_bh:
icm %r0,15,need_resched(%r9) # get need_resched from task_struct
- jnz sysc_reschedule
+ bnz BASED(sysc_reschedule)
icm %r0,15,sigpending(%r9) # get sigpending from task_struct
- jnz sysc_signal_return
+ bnz BASED(sysc_signal_return)
sysc_leave:
	icm     %r0,15,SP_SVC_STEP(%r15) # get svc single-step flag from stack
- jnz pgm_svcret
+ bnz BASED(pgm_svcret)
stnsm 24(%r15),disable # disable I/O and ext. interrupts
RESTORE_ALL
sysc_signal_return:
la %r2,SP_PTREGS(%r15) # load pt_regs
sr %r3,%r3 # clear *oldset
- l %r1,sysc_do_signal-sysc_lit(%r13)
- la %r14,sysc_leave-sysc_lit(%r13)
+ l %r1,BASED(.Ldo_signal)
+ la %r14,BASED(sysc_leave)
br %r1 # return point is sysc_leave
#
# call trace before and after sys_call
#
sysc_tracesys:
- l %r1,sysc_trace-sysc_lit(%r13)
- lhi %r2,-ENOSYS
+ l %r1,BASED(.Ltrace)
+ l %r2,BASED(.Lc_ENOSYS)
st %r2,SP_R2(%r15) # give sysc_trace an -ENOSYS retval
basr %r14,%r1
lm %r3,%r6,SP_R3(%r15)
l %r2,SP_ORIG_R2(%r15)
basr %r14,%r8 # call sys_xxx
st %r2,SP_R2(%r15) # store return value
- l %r1,sysc_trace-sysc_lit(%r13)
- la %r14,sysc_return-sysc_lit(%r13)
+ l %r1,BASED(.Ltrace)
+ la %r14,BASED(sysc_return)
br %r1 # return point is sysc_return
# is zero
#
sysc_handle_bottom_half:
- l %r1,sysc_do_bottom_half-sysc_lit(%r13)
- la %r14,sysc_return_bh-sysc_lit(%r13)
+ l %r1,BASED(.Ldo_bottom_half)
+ la %r14,BASED(sysc_return_bh)
br %r1 # call do_bottom_half
#
# call schedule with sysc_return as return-address
#
sysc_reschedule:
- l %r1,sysc_schedule-sysc_lit(%r13)
- la %r14,sysc_return-sysc_lit(%r13)
+ l %r1,BASED(.Lschedule)
+ la %r14,BASED(sysc_return)
br %r1 # call scheduler, return to sysc_return
#
.globl ret_from_fork
ret_from_fork:
basr %r13,0
- ahi %r13,sysc_lit-. # setup base pointer R13 to $SYSCDAT
+ l %r13,.Lentry_base-.(%r13) # setup base pointer to &entry_base
GET_CURRENT # load pointer to task_struct to R9
stosm 24(%r15),0x03 # reenable interrupts
sr %r0,%r0 # child returns 0
st %r0,SP_R2(%r15) # store return value (change R2 on stack)
#ifdef __SMP__
- l %r1,sysc_schedtail-sysc_lit(%r13)
- la %r14,sysc_return-sysc_lit(%r13)
+ l %r1,BASED(.Lschedtail)
+ la %r14,BASED(sysc_return)
br %r1 # call schedule_tail, return to sysc_return
#else
- j sysc_return
+ b BASED(sysc_return)
#endif
#
#
sys_clone_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
- l %r1,sysc_clone-sysc_lit(%r13)
+ l %r1,BASED(.Lclone)
br %r1 # branch to sys_clone
sys_fork_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
- l %r1,sysc_fork-sysc_lit(%r13)
+ l %r1,BASED(.Lfork)
br %r1 # branch to sys_fork
sys_vfork_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
- l %r1,sysc_vfork-sysc_lit(%r13)
+ l %r1,BASED(.Lvfork)
br %r1 # branch to sys_vfork
sys_execve_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
- l %r1,sysc_execve-sysc_lit(%r13)
+ l %r1,BASED(.Lexecve)
lr %r12,%r14 # save return address
basr %r14,%r1 # call sys_execve
ltr %r2,%r2 # check if execve failed
sys_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
- l %r1,sysc_sigreturn-sysc_lit(%r13)
+ l %r1,BASED(.Lsigreturn)
br %r1 # branch to sys_sigreturn
sys_rt_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
- l %r1,sysc_rt_sigreturn-sysc_lit(%r13)
+ l %r1,BASED(.Lrt_sigreturn)
br %r1 # branch to sys_sigreturn
#
lr %r4,%r3 # move history1 parameter
lr %r3,%r2 # move history0 parameter
la %r2,SP_PTREGS(%r15) # load pt_regs as first parameter
- l %r1,sysc_sigsuspend-sysc_lit(%r13)
+ l %r1,BASED(.Lsigsuspend)
la %r14,4(%r14) # skip store of return value
br %r1 # branch to sys_sigsuspend
lr %r4,%r3 # move sigsetsize parameter
lr %r3,%r2 # move unewset parameter
la %r2,SP_PTREGS(%r15) # load pt_regs as first parameter
- l %r1,sysc_rt_sigsuspend-sysc_lit(%r13)
+ l %r1,BASED(.Lrt_sigsuspend)
la %r14,4(%r14) # skip store of return value
br %r1 # branch to sys_rt_sigsuspend
.long sys_write
.long sys_open /* 5 */
.long sys_close
- .long sys_waitpid
+ .long sys_ni_syscall /* old waitpid syscall holder */
.long sys_creat
.long sys_link
.long sys_unlink /* 10 */
.long sys_chmod /* 15 */
.long sys_lchown
.long sys_ni_syscall /* old break syscall holder */
- .long sys_stat
+ .long sys_ni_syscall /* old stat syscall holder */
.long sys_lseek
.long sys_getpid /* 20 */
.long sys_mount
.long sys_stime /* 25 */
.long sys_ptrace
.long sys_alarm
- .long sys_fstat
+ .long sys_ni_syscall /* old fstat syscall holder */
.long sys_pause
.long sys_utime /* 30 */
.long sys_ni_syscall /* old stty syscall holder */
.long sys_ni_syscall /* old mpx syscall holder */
.long sys_setpgid
.long sys_ni_syscall /* old ulimit syscall holder */
- .long sys_olduname
+ .long sys_ni_syscall /* old uname syscall holder */
.long sys_umask /* 60 */
.long sys_chroot
.long sys_ustat
.long sys_getpgrp /* 65 */
.long sys_setsid
.long sys_sigaction
- .long sys_sgetmask
- .long sys_ssetmask
+ .long sys_ni_syscall /* old sgetmask syscall holder */
+ .long sys_ni_syscall /* old ssetmask syscall holder */
.long sys_setreuid /* 70 */
.long sys_setregid
.long sys_sigsuspend_glue
.long sys_settimeofday
.long sys_getgroups /* 80 */
.long sys_setgroups
- .long old_select
+ .long sys_ni_syscall /* old select syscall holder */
.long sys_symlink
- .long sys_lstat
+ .long sys_ni_syscall /* old lstat syscall holder */
.long sys_readlink /* 85 */
.long sys_uselib
.long sys_swapon
.long sys_newstat
.long sys_newlstat
.long sys_newfstat
- .long sys_uname
+ .long sys_ni_syscall /* old uname syscall holder */
.long sys_ni_syscall /* 110 */ /* iopl for i386 */
.long sys_vhangup
.long sys_idle
.long sys_ni_syscall /* streams1 */
.long sys_ni_syscall /* streams2 */
.long sys_vfork_glue /* 190 */
- .rept 254-190
+ .rept 255-190
.long sys_ni_syscall
.endr
- .long sys_msgcp /* 255 */
/*
* Program check handler routine
*/
-pgm_lit:
- pgm_handle_per: .long handle_per_exception
- pgm_jump_table: .long pgm_check_table
- pgm_sysc_ret: .long sysc_return
- pgm_sysc_lit: .long sysc_lit
- pgm_do_signal: .long do_signal
-
.globl pgm_check_handler
pgm_check_handler:
/*
* we just ignore the PER event (FIXME: is there anything we have to do
* for LPSW?).
*/
+
+ stm %r13,%r15,__LC_SAVE_AREA
+ stam %a2,%a4,__LC_SAVE_AREA+12
+ basr %r13,0 # temp base pointer
+ l %r13,.Lentry_base-.(%r13)# load &entry_base to %r13
tm __LC_PGM_INT_CODE+1,0x80 # check whether we got a per exception
- jz pgm_sv # skip if not
+ bz BASED(pgm_sv) # skip if not
tm __LC_PGM_OLD_PSW,0x40 # test if per event recording is on
- jnz pgm_sv # skip if it is
+ bnz BASED(pgm_sv) # skip if it is
# ok its one of the special cases, now we need to find out which one
clc __LC_PGM_OLD_PSW(8),__LC_SVC_NEW_PSW
- je pgm_svcper
+ be BASED(pgm_svcper)
# no interesting special case, ignore PER event
+ lm %r13,%r15,__LC_SAVE_AREA
lpsw 0x28
# it was a single stepped SVC that is causing all the trouble
pgm_svcper:
- SAVE_ALL(0x20)
+ tm 0x21,0x01 # test problem state bit
+ bz BASED(.+12) # skip stack & access regs setup
+ l %r15,__LC_KERNEL_STACK # problem state -> load ksp
+ lam %a2,%a4,BASED(.Lc_ac) # set ac.reg. 2 to primary space
+ # and access reg. 4 to home space
+ s %r15,BASED(.Lc_spsize) # make room for registers & psw
+ n %r15,BASED(.Lc0xfffffff8) # align stack pointer to 8
+ stm %r0,%r12,SP_R0(%r15) # store gprs 0-12 to kernel stack
+ st %r2,SP_ORIG_R2(%r15) # store original content of gpr 2
+ mvc SP_RD(12,%r15),__LC_SAVE_AREA # move R13-R15 to stack
+ stam %a0,%a15,SP_AREGS(%r15) # store access registers to kst.
+ mvc SP_AREGS+8(12,%r15),__LC_SAVE_AREA+12 # store ac. regs
+ mvc SP_PSW(8,%r15),0x20 # move user PSW to stack
+ la %r0,0x20 # store trap indication
+ st %r0,SP_TRAP(%r15)
+ xc 0(4,%r15),0(%r15) # clear back chain
mvi SP_SVC_STEP(%r15),1 # make SP_SVC_STEP nonzero
mvc SP_PGM_OLD_ILC(4,%r15),__LC_PGM_ILC # save program check information
- j pgm_system_call # now do the svc
+ b BASED(pgm_system_call) # now do the svc
pgm_svcret:
- mvc __LC_PGM_ILC(4),SP_PGM_OLD_ILC(%r15) # restore program check info
- lhi %r0,0x28
- st %r0,SP_TRAP(%r15) # set new trap indicator
- j pgm_no_sv
+ mvi SP_TRAP+3(%r15),0x28 # set trap indication back to pgm_chk
+ lh %r7,SP_PGM_OLD_ILC(%r15) # get ilc from stack
+ xc SP_SVC_STEP(4,%r15),SP_SVC_STEP(%r15)
+ b BASED(pgm_no_sv)
pgm_sv:
- SAVE_ALL(0x28)
-pgm_no_sv:
- XC SP_SVC_STEP(4,%r15),SP_SVC_STEP(%r15)
- basr %r13,0
- ahi %r13,pgm_lit-. # setup base pointer R13 to $PGMDAT
+ tm 0x29,0x01 # test problem state bit
+ bz BASED(.+12) # skip stack & access regs setup
+ l %r15,__LC_KERNEL_STACK # problem state -> load ksp
+ lam %a2,%a4,BASED(.Lc_ac) # set ac.reg. 2 to primary space
+ # and access reg. 4 to home space
+ s %r15,BASED(.Lc_spsize) # make room for registers & psw
+ n %r15,BASED(.Lc0xfffffff8) # align stack pointer to 8
+ stm %r0,%r12,SP_R0(%r15) # store gprs 0-12 to kernel stack
+ st %r2,SP_ORIG_R2(%r15) # store original content of gpr 2
+ mvc SP_RD(12,%r15),__LC_SAVE_AREA # move R13-R15 to stack
+ stam %a0,%a15,SP_AREGS(%r15) # store access registers to kst.
+ mvc SP_AREGS+8(12,%r15),__LC_SAVE_AREA+12 # store ac. regs
+ mvc SP_PSW(8,%r15),0x28 # move user PSW to stack
+ la %r0,0x28 # store trap indication
+ st %r0,SP_TRAP(%r15)
+ xc 0(4,%r15),0(%r15) # clear back chain
+ xc SP_SVC_STEP(4,%r15),SP_SVC_STEP(%r15)
lh %r7,__LC_PGM_ILC # load instruction length
+pgm_no_sv:
lh %r8,__LC_PGM_INT_CODE # N.B. saved int code used later KEEP it
stosm 24(%r15),0x03 # reenable interrupts
lr %r3,%r8
- lhi %r0,0x7f
+ la %r0,0x7f
nr %r3,%r0 # clear per-event-bit
- je pgm_dn # none of Martins exceptions occured bypass
- l %r9,pgm_jump_table-pgm_lit(%r13)
+        be    BASED(pgm_dn)         # none of Martin's exceptions occurred, bypass
+ l %r9,BASED(.Ljump_table)
sll %r3,2
l %r9,0(%r3,%r9) # load address of handler routine
la %r2,SP_PTREGS(%r15) # address of register-save area
srl %r3,2
- chi %r3,0x4 # protection-exception ?
- jne pgm_go # if not,
+ cl %r3,BASED(.Lc4) # protection-exception ?
+ bne BASED(pgm_go) # if not,
l %r5,SP_PSW+4(15) # load psw addr
        sr    %r5,%r7               # subtract ilc from psw
st %r5,SP_PSW+4(15) # store corrected psw addr
pgm_go: basr %r14,%r9 # branch to interrupt-handler
-pgm_dn: lhi %r0,0x80
+pgm_dn: la %r0,0x80
nr %r8,%r0 # check for per exception
- je pgm_return
+ be BASED(pgm_return)
la %r2,SP_PTREGS(15) # address of register-save area
- l %r9,pgm_handle_per-pgm_lit(%r13) # load adr. of per handler
- l %r14,pgm_sysc_ret-pgm_lit(%r13) # load adr. of system return
- l %r13,pgm_sysc_lit-pgm_lit(%r13)
+ l %r9,BASED(.Lhandle_per) # load adr. of per handler
+ la %r14,BASED(sysc_return) # load adr. of system return
br %r9 # branch to handle_per_exception
#
# the backend code is the same as for sys-call
#
pgm_return:
- l %r14,pgm_sysc_ret-pgm_lit(%r13)
- l %r13,pgm_sysc_lit-pgm_lit(%r13)
- br %r14
-
-default_trap_handler:
- .globl default_trap_handler
- lpsw 112
+ b BASED(sysc_return)
/*
* IO interrupt handler routine
*/
-io_lit:
- io_do_IRQ: .long do_IRQ
- io_schedule: .long schedule
- io_do_signal: .long do_signal
- io_bhmask: .long bh_mask
- io_bhactive: .long bh_active
- io_do_bottom_half:.long do_bottom_half
-
.globl io_int_handler
io_int_handler:
SAVE_ALL(0x38)
- basr %r13,0
- ahi %r13,io_lit-. # setup base pointer R13 to $IODAT
la %r2,SP_PTREGS(%r15) # address of register-save area
sr %r3,%r3
icm %r3,%r3,__LC_SUBCHANNEL_NR # load subchannel nr & extend to int
        l     %r4,__LC_IO_INT_PARM  # load interruption parm
- l %r9,io_do_IRQ-io_lit(%r13) # load address of do_IRQ
+ l %r9,BASED(.Ldo_IRQ) # load address of do_IRQ
basr %r14,%r9 # branch to standard irq handler
io_return:
GET_CURRENT # load pointer to task_struct to R9
tm SP_PSW+1(%r15),0x01 # returning to user ?
- jz io_leave # no-> skip resched & signal
+ bz BASED(io_leave) # no-> skip resched & signal
stosm 24(%r15),0x03 # reenable interrupts
#
# check, if bottom-half has to be done
#
- l %r1,io_bhmask-io_lit(%r13)
+ l %r1,BASED(.Lbhmask)
l %r0,0(%r1)
- l %r1,io_bhactive-io_lit(%r13)
+ l %r1,BASED(.Lbhactive)
n %r0,0(%r1)
- jnz io_handle_bottom_half
+ bnz BASED(io_handle_bottom_half)
io_return_bh:
#
# check, if reschedule is needed
#
icm %r0,15,need_resched(%r9) # get need_resched from task_struct
- jnz io_reschedule
+ bnz BASED(io_reschedule)
icm %r0,15,sigpending(%r9) # get sigpending from task_struct
- jnz io_signal_return
+ bnz BASED(io_signal_return)
io_leave:
stnsm 24(%r15),disable # disable I/O and ext. interrupts
RESTORE_ALL
# is zero
#
io_handle_bottom_half:
- l %r1,io_do_bottom_half-io_lit(%r13)
- la %r14,io_return_bh-io_lit(%r13)
+ l %r1,BASED(.Ldo_bottom_half)
+ la %r14,BASED(io_return_bh)
br %r1 # call do_bottom_half
#
# call schedule with io_return as return-address
#
io_reschedule:
- l %r1,io_schedule-io_lit(%r13)
- la %r14,io_return-io_lit(%r13)
+ l %r1,BASED(.Lschedule)
+ la %r14,BASED(io_return)
br %r1 # call scheduler, return to io_return
#
io_signal_return:
la %r2,SP_PTREGS(%r15) # load pt_regs
sr %r3,%r3 # clear *oldset
- l %r1,io_do_signal-io_lit(%r13)
- la %r14,io_leave-io_lit(%r13)
+ l %r1,BASED(.Ldo_signal)
+ la %r14,BASED(io_leave)
br %r1 # return point is io_leave
/*
* External interrupt handler routine
*/
-ext_lit:
- ext_timer_int: .long do_timer_interrupt
-#ifdef __SMP__
- ext_call_int: .long do_ext_call_interrupt
-#endif
-#ifdef CONFIG_HWC
- ext_hwc_int: .long do_hwc_interrupt
-#endif
-#ifdef CONFIG_MDISK
- ext_mdisk_int: .long do_mdisk_interrupt
-#endif
-#ifdef CONFIG_IUCV
- ext_iucv_int: .long do_iucv_interrupt
-#endif
- ext_io_lit: .long io_lit
- ext_io_return: .long io_return
-
.globl ext_int_handler
ext_int_handler:
SAVE_ALL(0x18)
- basr %r13,0
- ahi %r13,ext_lit-. # setup base pointer R13 to $EXTDAT
la %r2,SP_PTREGS(%r15) # address of register-save area
lh %r3,__LC_EXT_INT_CODE # error code
-#ifdef __SMP__
- chi %r3,0x1202 # EXTERNAL_CALL
- jne ext_no_extcall
- l %r9,ext_call_int-ext_lit(%r13) # load ext_call_interrupt
- l %r14,ext_io_return-ext_lit(%r13)
- l %r13,ext_io_lit-ext_lit(%r13)
- br %r9 # branch to ext call handler
-ext_no_extcall:
-#endif
- chi %r3,0x1004 # CPU_TIMER
- jne ext_no_timer
- l %r9,ext_timer_int-ext_lit(%r13) # load timer_interrupt
- l %r14,ext_io_return-ext_lit(%r13)
- l %r13,ext_io_lit-ext_lit(%r13)
+ lr %r1,%r3 # calculate index
+ srl %r1,8 # = (code + (code >> 8)) & 0xff
+ alr %r1,%r3
+ n %r1,BASED(.Lc0xff)
+ sll %r1,2
+ l %r9,BASED(.Lext_hash)
+ l %r9,0(%r1,%r9) # get first list entry for hash value
+ ltr %r9,%r9 # == NULL ?
+ bz BASED(io_return) # yes, nothing to do, exit
+ext_int_loop:
+ ch %r3,8(%r9) # compare external interrupt code
+ be BASED(ext_int_found)
+ icm %r9,15,0(%r9) # next list entry
+ bnz BASED(ext_int_loop)
+ b BASED(io_return)
+ext_int_found:
+ l %r9,4(%r9) # get handler address
+ la %r14,BASED(io_return)
br %r9 # branch to ext call handler
-ext_no_timer:
-#ifdef CONFIG_HWC
- chi %r3,0x2401 # HWC interrupt
- jne ext_no_hwc
- l %r9,ext_hwc_int-ext_lit(%r13) # load addr. of hwc routine
- l %r14,ext_io_return-ext_lit(%r13)
- l %r13,ext_io_lit-ext_lit(%r13)
- br %r9 # branch to ext call handler
-ext_no_hwc:
-#endif
-#ifdef CONFIG_MDISK
- chi %r3,0x2603 # diag 250 (VM) interrupt
- jne ext_no_mdisk
- l %r9,ext_mdisk_int-ext_lit(%r13)
- l %r14,ext_io_return-ext_lit(%r13)
- l %r13,ext_io_lit-ext_lit(%r13)
- br %r9 # branch to ext call handler
-ext_no_mdisk:
-#endif
-#ifdef CONFIG_IUCV
- chi %r3,0x4000 # diag 250 (VM) interrupt
- jne ext_no_iucv
- l %r9,ext_iucv_int-ext_lit(%r13)
- l %r14,ext_io_return-ext_lit(%r13)
- l %r13,ext_io_lit-ext_lit(%r13)
- br %r9 # branch to ext call handler
-ext_no_iucv:
-#endif
-
- l %r14,ext_io_return-ext_lit(%r13)
- l %r13,ext_io_lit-ext_lit(%r13)
- br %r14 # use backend code of io_int_handler
/*
* Machine check handler routines
*/
-mcck_lit:
- mcck_crw_pending: .long do_crw_pending
-
.globl mcck_int_handler
mcck_int_handler:
SAVE_ALL(0x30)
- basr %r13,0
- ahi %r13,mcck_lit-. # setup base pointer R13 to $MCCKDAT
- tm __LC_MCCK_CODE+1,0x40
- jno mcck_no_crw
- l %r1,mcck_crw_pending-mcck_lit(%r13)
- basr %r14,%r1 # call do_crw_pending
-mcck_no_crw:
+ l %r1,BASED(.Ls390_mcck)
+ basr %r14,%r1 # call machine check handler
mcck_return:
RESTORE_ALL
lam %a0,%a15,__LC_AREGS_SAVE_AREA
stosm 0(%r15),daton # now we can turn dat on
lm %r6,%r15,24(%r15) # load registers from clone
- bras %r14,restart_go
- .long start_secondary
-restart_go:
- l %r14,0(%r14)
+ basr %r14,0
+ l %r14,restart_addr-.(%r14)
br %r14 # branch to start_secondary
+restart_addr:
+ .long start_secondary
#else
/*
* If we do not run with SMP enabled, let the new CPU crash ...
#endif
+/*
+ * Integer constants
+ */
+ .align 4
+.Lc0xfffffff8: .long -8 # to align stack pointer to 8
+.Lc0xffffe000: .long -8192 # to round stack pointer to &task_struct
+.Lc8191: .long 8191
+.Lc_spsize: .long SP_SIZE
+.Lc_ac: .long 0,0,1
+.Lc_ENOSYS: .long -ENOSYS
+.Lc4: .long 4
+.Lc0x1202: .long 0x1202
+.Lc0x1004: .long 0x1004
+.Lc0x2401: .long 0x2401
+.Lc0x4000: .long 0x4000
+.Lc0xff: .long 0xff
+/*
+ * Symbol constants
+ */
+.Lbhactive: .long bh_active
+.Lbhmask: .long bh_mask
+.Ls390_mcck: .long s390_do_machine_check
+.Ldo_IRQ: .long do_IRQ
+.Ldo_bottom_half:
+ .long do_bottom_half
+.Ldo_signal: .long do_signal
+.Lentry_base: .long entry_base
+.Lext_hash: .long ext_int_hash
+.Lhandle_per: .long handle_per_exception
+.Ljump_table: .long pgm_check_table
+.Lschedule: .long schedule
+.Lclone: .long sys_clone
+.Lexecve: .long sys_execve
+.Lfork: .long sys_fork
+.Lrt_sigreturn:.long sys_rt_sigreturn
+.Lrt_sigsuspend:
+ .long sys_rt_sigsuspend
+.Lsigreturn: .long sys_sigreturn
+.Lsigsuspend: .long sys_sigsuspend
+.Ltrace: .long syscall_trace
+.Lvfork: .long sys_vfork
-
-
-
+#ifdef __SMP__
+.Lschedtail: .long schedule_tail
+#endif
.org 0x10000
.globl start
start: basr %r13,0 # get base
-.LPG1: lctl %c1,%c1,.Lpstd-.LPG1(%r13) # load pstd
- lctl %c7,%c7,.Lpstd-.LPG1(%r13) # load sstd
- lctl %c13,%c13,.Lpstd-.LPG1(%r13) # load hstd
- lctl %c0,%c0,.Lcr0-.LPG1(%r13) # set CR0
+.LPG1: lctl %c0,%c15,.Lctl-.LPG1(%r13) # load all control registers
l %r12,.Lparm1-.LPG1(%r13) # pointer to parameter area
#
adbr %f0,%f2 # test IEEE add instruction
oi MACHINE_FLAGS+3-PARMAREA(%r12),2 # set IEEE fpu flag
.Lchkfpu:
+#
+# find out if we have the CSP instruction
+#
+ mvc 104(8,0),.Lpccsp-.LPG1(%r13) # setup program check handler
+ la %r0,0
+ lr %r1,%r0
+ la %r2,.Lflt0-.LPG1(%r13)
+ csp %r0,%r2 # Test CSP instruction
+ oi MACHINE_FLAGS+3-PARMAREA(%r12),8 # set CSP flag
+.Lchkcsp:
lpsw .Lentry-.LPG1(13) # jump to _stext in primary-space,
# virtual and never return ...
.align 8
.Lentry:.long 0x04080000,0x80000000 + _stext
-.Lpstd: .long .Lpgd+0x7F # segment-table
-.Lcr0: .long 0x04b50002
+.Lctl: .long 0x04b50002 # cr0: various things
+ .long .Lpgd+0x7f # cr1: primary space segment table
+ .long 0 # cr2: access register translation
+ .long 0 # cr3: instruction authorization
+ .long 0 # cr4: instruction authorization
+ .long 0 # cr5: various things
+ .long 0 # cr6: I/O interrupts
+ .long .Lpgd+0x7f # cr7: secondary space segment table
+ .long 0 # cr8: access registers translation
+ .long 0 # cr9: tracing off
+ .long 0 # cr10: tracing off
+ .long 0 # cr11: tracing off
+ .long 0 # cr12: tracing off
+ .long .Lpgd+0x7f # cr13: home space segment table
+ .long 0xc0000000 # cr14: machine check handling off
+ .long 0 # cr15: linkage stack operations
.Lpcmem:.long 0x00080000,0x80000000 + .Lchkmem
.Lpcfpu:.long 0x00080000,0x80000000 + .Lchkfpu
+.Lpccsp:.long 0x00080000,0x80000000 + .Lchkcsp
.Lflt0: .double 0
.Lparm1:.long PARMAREA
.L4malign:.long 0xffc00000
jo .-4 # branch back, if not finish
# check control registers
stctl %c0,%c15,0(%r15)
- l %r0,0(%r15)
- o %r0,.Lcr0or-.LPG2(%r13) # enable sigp external ints.
- st %r0,0(%r15)
+ oc 2(1,%r15),.Locbits+5-.LPG2(%r13) # enable sigp external ints.
+        oc    0(1,%r15),.Locbits+4-.LPG2(%r13) # low address protection
lctl %c0,%c15,0(%r15)
#
.Linittu: .long init_task_union
.Lbss_bgn: .long __bss_start
.Lbss_end: .long _end
-.Lcr0or: .long 0x00002000
+.Locbits: .long 0x01020408,0x10204080
.align 4
.Laregs: .long 0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0
.align 8
if ( ioinfo[irq] == INVALID_STORAGE_AREA )
return( -ENODEV);
- action = ioinfo[irq]->irq_desc.action;
+ action = (struct irqaction *) ioinfo[irq]->irq_desc.action;
if (action)
{
{
}
-unsigned long __init init_IRQ(unsigned long memory)
+__initfunc(unsigned long init_IRQ( unsigned long memory))
{
return s390_init_IRQ( memory);
}
void *dev_id)
{
return( s390_request_irq( irq, handler, irqflags, devname, dev_id ) );
+
}
#define __KERNEL_SYSCALLS__
#include <stdarg.h>
+#include <linux/config.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/kernel.h>
do_reipl: basr %r13,0
.Lpg0: lpsw .Lnewpsw-.Lpg0(%r13)
.Lpg1: lctl %c6,%c6,.Lall-.Lpg0(%r13)
+ stctl %c0,%c0,.Lctlsave-.Lpg0(%r13)
+ ni .Lctlsave-.Lpg0(%r13),0xef
+ lctl %c0,%c0,.Lctlsave-.Lpg0(%r13)
lr %r1,%r2
mvc __LC_PGM_NEW_PSW(8,0),.Lpcnew-.Lpg0(%r13)
stsch .Lschib-.Lpg0(%r13)
.Ldisab: st %r14,.Ldispsw+4-.Lpg0(%r13)
lpsw .Ldispsw-.Lpg0(%r13)
.align 8
-.Lall: .long 0xff000000;
+.Lall: .long 0xff000000
.Lnull: .long 0x00000000
+.Lctlsave: .long 0x00000000
.align 8
.Lnewpsw: .long 0x00080000,0x80000000+.Lpg1
.Lpcnew: .long 0x00080000,0x80000000+.Lecs
--- /dev/null
+/*
+ * arch/s390/kernel/s390_ext.c
+ *
+ * S390 version
+ * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Holger Smolinski (Holger.Smolinski@de.ibm.com),
+ * Martin Schwidefsky (schwidefsky@de.ibm.com)
+ */
+
+#include <linux/kernel.h>
+#include <linux/malloc.h>
+#include <asm/lowcore.h>
+#include <asm/s390_ext.h>
+
+/*
+ * Simple hash strategy: index = ((code >> 8) + code) & 0xff;
+ * ext_int_hash[index] is the start of the list for all external interrupts
+ * that hash to this index. With the current set of external interrupts
+ * (0x1202 external call, 0x1004 cpu timer, 0x2401 hwc console and 0x4000
+ * iucv) this is always the first element.
+ * iucv) each list has at most two entries (0x1202 and 0x1004 collide).
+ext_int_info_t *ext_int_hash[256] = { 0, };
+ext_int_info_t ext_int_info_timer;
+ext_int_info_t ext_int_info_hwc;
+
+int register_external_interrupt(__u16 code, ext_int_handler_t handler) {
+ ext_int_info_t *p;
+ int index;
+
+ index = (code + (code >> 8)) & 0xff;
+ p = ext_int_hash[index];
+ while (p != NULL) {
+ if (p->code == code)
+ return -EBUSY;
+ p = p->next;
+ }
+ if (code == 0x1004) /* time_init is done before kmalloc works :-/ */
+ p = &ext_int_info_timer;
+	else if (code == 0x2401) /* hwc_init is also done too early */
+ p = &ext_int_info_hwc;
+ else
+ p = (ext_int_info_t *)
+ kmalloc(sizeof(ext_int_info_t), GFP_ATOMIC);
+ if (p == NULL)
+ return -ENOMEM;
+ p->code = code;
+ p->handler = handler;
+ p->next = ext_int_hash[index];
+ ext_int_hash[index] = p;
+ return 0;
+}
+
+int unregister_external_interrupt(__u16 code, ext_int_handler_t handler) {
+ ext_int_info_t *p, *q;
+ int index;
+
+ index = (code + (code >> 8)) & 0xff;
+ q = NULL;
+ p = ext_int_hash[index];
+ while (p != NULL) {
+ if (p->code == code && p->handler == handler)
+ break;
+ q = p;
+ p = p->next;
+ }
+ if (p == NULL)
+ return -ENOENT;
+ if (q != NULL)
+ q->next = p->next;
+ else
+ ext_int_hash[index] = p->next;
+ if (code != 0x1004 && code != 0x2401)
+ kfree(p);
+ return 0;
+}
+
+
#include <asm/irq.h>
#include <asm/string.h>
#include <asm/checksum.h>
+#include <asm/s390_ext.h>
+#if CONFIG_CHANDEV
+#include <asm/chandev.h>
+#endif
+#if CONFIG_IP_MULTICAST
+#include <net/arp.h>
+#endif
+
/*
* I/O subsystem
EXPORT_SYMBOL(get_irq_first);
EXPORT_SYMBOL(get_irq_next);
+/*
+ * External interrupts
+ */
+EXPORT_SYMBOL(register_external_interrupt);
+EXPORT_SYMBOL(unregister_external_interrupt);
+
/*
* memory management
*/
#endif
EXPORT_SYMBOL(kernel_thread);
EXPORT_SYMBOL(csum_fold);
-
+#if CONFIG_CHANDEV
+EXPORT_SYMBOL(chandev_register_and_probe);
+EXPORT_SYMBOL(chandev_unregister);
+EXPORT_SYMBOL(chandev_initdevice);
+EXPORT_SYMBOL(chandev_initnetdevice);
+#endif
+#if CONFIG_IP_MULTICAST
+/* Required for lcs gigabit ethernet multicast support */
+EXPORT_SYMBOL(arp_mc_map);
+#endif
--- /dev/null
+/*
+ * arch/s390/kernel/s390dyn.c
+ * S/390 dynamic device attachment
+ *
+ * S390 version
+ * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Ingo Adlung (adlung@de.ibm.com)
+ */
+
+#include <linux/init.h>
+#include <linux/smp_lock.h>
+
+#include <asm/irq.h>
+#include <asm/s390io.h>
+#include <asm/s390dyn.h>
+
+static devreg_t *devreg_anchor = NULL;
+static spinlock_t dyn_lock = SPIN_LOCK_UNLOCKED;
+
+int s390_device_register( devreg_t *drinfo )
+{
+ unsigned long flags;
+
+ int ret = 0;
+ devreg_t *pdevreg = devreg_anchor;
+
+ if ( drinfo == NULL )
+ return( -EINVAL );
+
+ spin_lock_irqsave( &dyn_lock, flags );
+
+ while ( (pdevreg != NULL) && (ret ==0) )
+ {
+ if ( pdevreg == drinfo )
+ {
+ ret = -EINVAL;
+ }
+ else
+ {
+ if ( ( (pdevreg->flag & DEVREG_TYPE_DEVNO)
+ && (pdevreg->ci.devno ) )
+ && ( (drinfo->flag & DEVREG_TYPE_DEVNO )
+ && (drinfo->ci.devno ) ) )
+ {
+ ret = -EBUSY;
+ }
+ else if ( (pdevreg->flag & DEVREG_EXACT_MATCH)
+ && (drinfo->flag & DEVREG_EXACT_MATCH ) )
+ {
+				if ( !memcmp( &pdevreg->ci.hc, &drinfo->ci.hc, 6) )
+ ret = -EBUSY;
+ }
+ else if ( (pdevreg->flag & DEVREG_MATCH_DEV_TYPE)
+ && (drinfo->flag & DEVREG_MATCH_DEV_TYPE ) )
+ {
+ if ( (pdevreg->ci.hc.dtype == drinfo->ci.hc.dtype)
+ && (pdevreg->ci.hc.dmode == drinfo->ci.hc.dmode) )
+ ret = -EBUSY;
+ }
+ else if ( (pdevreg->flag & DEVREG_MATCH_CU_TYPE)
+ && (drinfo->flag & DEVREG_MATCH_CU_TYPE ) )
+ {
+ if ( (pdevreg->ci.hc.ctype == drinfo->ci.hc.ctype)
+ && (pdevreg->ci.hc.cmode == drinfo->ci.hc.cmode) )
+ ret = -EBUSY;
+ }
+ else if ( (pdevreg->flag & DEVREG_NO_CU_INFO)
+ && (drinfo->flag & DEVREG_NO_CU_INFO ) )
+ {
+ if ( (pdevreg->ci.hnc.dtype == drinfo->ci.hnc.dtype)
+ && (pdevreg->ci.hnc.dmode == drinfo->ci.hnc.dmode) )
+ ret = -EBUSY;
+ }
+
+ pdevreg = pdevreg->next;
+
+ } /* endif */
+
+ } /* endwhile */
+
+ /*
+ * only enqueue if no collision was found ...
+ */
+ if ( ret == 0 )
+ {
+ drinfo->next = devreg_anchor;
+ drinfo->prev = NULL;
+
+		if ( devreg_anchor != NULL )
+		{
+			devreg_anchor->prev = drinfo;
+
+		} /* endif */
+
+		devreg_anchor = drinfo;
+
+	} /* endif */
+
+ spin_unlock_irqrestore( &dyn_lock, flags );
+
+ return( ret);
+}
+
+
+int s390_device_unregister( devreg_t *dreg )
+{
+ unsigned long flags;
+
+ int ret = -EINVAL;
+ devreg_t *pdevreg = devreg_anchor;
+
+ if ( dreg == NULL )
+ return( -EINVAL );
+
+ spin_lock_irqsave( &dyn_lock, flags );
+
+ while ( (pdevreg != NULL )
+ && ( ret != 0 ) )
+ {
+ if ( pdevreg == dreg )
+ {
+ devreg_t *dprev = pdevreg->prev;
+ devreg_t *dnext = pdevreg->next;
+
+ if ( (dprev != NULL) && (dnext != NULL) )
+ {
+ dnext->prev = dprev;
+ dprev->next = dnext;
+ }
+ if ( (dprev != NULL) && (dnext == NULL) )
+ {
+ dprev->next = NULL;
+ }
+ if ( (dprev == NULL) && (dnext != NULL) )
+ {
+ dnext->prev = NULL;
+
+ } /* else */
+
+ ret = 0;
+ }
+ else
+ {
+ pdevreg = pdevreg->next;
+
+ } /* endif */
+
+ } /* endwhile */
+
+ spin_unlock_irqrestore( &dyn_lock, flags );
+
+ return( ret);
+}
+
+
+devreg_t * s390_search_devreg( ioinfo_t *ioinfo )
+{
+ unsigned long flags;
+
+ devreg_t *pdevreg = devreg_anchor;
+
+ if ( ioinfo == NULL )
+ return( NULL );
+
+ spin_lock_irqsave( &dyn_lock, flags );
+
+ while ( pdevreg != NULL )
+ {
+ if ( (pdevreg->flag & DEVREG_TYPE_DEVNO )
+ && (ioinfo->ui.flags.dval == 1 )
+ && (ioinfo->devno == pdevreg->ci.devno) )
+ {
+ break;
+ }
+ else if ( pdevreg->flag & DEVREG_EXACT_MATCH )
+ {
+			if ( !memcmp( &pdevreg->ci.hc,
+			              &ioinfo->senseid.cu_type, 6 ) )
+ break;
+ }
+ else if ( pdevreg->flag & DEVREG_MATCH_DEV_TYPE )
+ {
+ if ( (pdevreg->ci.hc.dtype == ioinfo->senseid.dev_type )
+ && (pdevreg->ci.hc.dmode == ioinfo->senseid.dev_model) )
+ break;
+ }
+ else if ( pdevreg->flag & DEVREG_MATCH_CU_TYPE )
+ {
+ if ( (pdevreg->ci.hc.ctype == ioinfo->senseid.cu_type )
+ && (pdevreg->ci.hc.cmode == ioinfo->senseid.cu_model) )
+ break;
+ }
+ else if ( pdevreg->flag & DEVREG_NO_CU_INFO )
+ {
+ if ( (pdevreg->ci.hnc.dtype == ioinfo->senseid.dev_type )
+ && (pdevreg->ci.hnc.dmode == ioinfo->senseid.dev_model) )
+ break;
+ }
+
+ pdevreg = pdevreg->next;
+
+ } /* endwhile */
+
+ spin_unlock_irqrestore( &dyn_lock, flags );
+
+ return( pdevreg);
+}
+
*
* S390 version
* Copyright (C) 1999, 2000 IBM Deutschland Entwicklung GmbH,
- IBM Corporation
+ * IBM Corporation
* Author(s): Ingo Adlung (adlung@de.ibm.com)
*/
#include <asm/smp.h>
#include <asm/pgtable.h>
#include <asm/delay.h>
+#include <asm/processor.h>
#include <asm/lowcore.h>
#include <asm/s390io.h>
+#include <asm/s390dyn.h>
+#include <asm/s390mach.h>
#undef CONFIG_DEBUG_IO
+#define CONFIG_DEBUG_CRW
#define REIPL_DEVID_MAGIC 0x87654321
-struct irqaction init_IRQ_action;
+struct s390_irqaction init_IRQ_action;
unsigned int highest_subchannel;
ioinfo_t *ioinfo_head = NULL;
ioinfo_t *ioinfo_tail = NULL;
ioinfo_t *ioinfo[__MAX_SUBCHANNELS] = {
[0 ... (__MAX_SUBCHANNELS-1)] = INVALID_STORAGE_AREA
};
-spinlock_t sync_isc; // synchronous irq processing lock
-psw_t io_sync_wait; // wait PSW for sync IO, prot. by sync_isc
-psw_t io_new_psw; // save I/O new PSW, prot. by sync_isc
-int cons_dev = -1; // identify console device
-int init_IRQ_complete = 0;
-schib_t init_schib;
+
+static spinlock_t sync_isc = SPIN_LOCK_UNLOCKED;
+ // synchronous irq processing lock
+static psw_t io_sync_wait; // wait PSW for sync IO, prot. by sync_isc
+static psw_t io_new_psw; // save I/O new PSW, prot. by sync_isc
+static int cons_dev = -1; // identify console device
+static int init_IRQ_complete = 0;
+static schib_t init_schib;
+static irb_t init_irb;
+static __u64 irq_IPL_TOD;
/*
* Dummy controller type for unused interrupts
};
static void init_IRQ_handler( int irq, void *dev_id, struct pt_regs *regs);
-static int s390_setup_irq(unsigned int irq, struct irqaction * new);
+static int s390_setup_irq(unsigned int irq, struct s390_irqaction * new);
static void s390_process_subchannels( void);
-static void s390_device_recognition( void);
-static int s390_validate_subchannel( int irq);
-static int s390_SenseID( int irq, senseid_t *sid);
+static void s390_device_recognition_all( void);
+static void s390_device_recognition_irq( int irq);
+static int s390_validate_subchannel( int irq, int enable);
+static int s390_SenseID( int irq, senseid_t *sid, __u8 lpm);
+static int s390_SetPGID( int irq, __u8 lpm, pgid_t *pgid);
+static int s390_SensePGID( int irq, __u8 lpm, pgid_t *pgid);
static int s390_process_IRQ( unsigned int irq );
+static int disable_subchannel( unsigned int irq);
+
+int s390_DevicePathVerification( int irq, __u8 domask );
extern int do_none(unsigned int irq, int cpu, struct pt_regs * regs);
extern int enable_none(unsigned int irq);
// fix me ! must be removed with 2.3.x and follow-up releases
//
static void * alloc_bootmem( unsigned long size);
+static int free_bootmem( unsigned long buffer, unsigned long size);
static unsigned long memory_start = 0;
void s390_displayhex(char *str,void *ptr,s32 cnt);
void s390_displayhex(char *str,void *ptr,s32 cnt)
{
s32 cnt1,cnt2,maxcnt2;
- u32 *currptr=(u32 *)ptr;
+ u32 *currptr=(__u32 *)ptr;
printk("\n%s\n",str);
for(cnt1=0;cnt1<cnt;cnt1+=16)
{
- printk("%08X ",(u32)currptr);
+ printk("%08X ",(__u32)currptr);
maxcnt2=cnt-cnt1;
if(maxcnt2>16)
maxcnt2=16;
}
}
-int s390_request_irq( unsigned int irq,
- void (*handler)(int, void *, struct pt_regs *),
+
+int s390_request_irq_special( int irq,
+ io_handler_func_t io_handler,
+ not_oper_handler_func_t not_oper_handler,
unsigned long irqflags,
const char *devname,
void *dev_id)
{
int retval;
- struct irqaction *action;
+ struct s390_irqaction *action;
if (irq >= __MAX_SUBCHANNELS)
return -EINVAL;
- if ( !handler || !dev_id )
+ if ( !io_handler || !dev_id )
return -EINVAL;
/*
*/
if ( init_IRQ_complete )
{
- action = (struct irqaction *)
- kmalloc(sizeof(struct irqaction), GFP_KERNEL);
+ action = (struct s390_irqaction *)
+ kmalloc( sizeof(struct s390_irqaction),
+ GFP_KERNEL);
}
else
{
} /* endif */
- action->handler = handler;
+ action->handler = io_handler;
action->flags = irqflags;
- action->mask = 0;
action->name = devname;
- action->next = NULL;
action->dev_id = dev_id;
retval = s390_setup_irq(irq, action);
- if ( retval && init_IRQ_complete )
+ if ( init_IRQ_complete )
+ {
+ if ( !retval )
+ {
+ s390_DevicePathVerification( irq, 0 );
+ }
+ else
{
kfree(action);
} /* endif */
+ } /* endif */
+
+ if ( retval == 0 )
+ {
+ ioinfo[irq]->ui.flags.newreq = 1;
+ ioinfo[irq]->nopfunc = not_oper_handler;
+ }
+
return retval;
}
+
+int s390_request_irq( unsigned int irq,
+ void (*handler)(int, void *, struct pt_regs *),
+ unsigned long irqflags,
+ const char *devname,
+ void *dev_id)
+{
+ int ret;
+
+ ret = s390_request_irq_special( irq,
+ (io_handler_func_t)handler,
+ NULL,
+ irqflags,
+ devname,
+ dev_id);
+
+ if ( ret == 0 )
+ {
+ ioinfo[irq]->ui.flags.newreq = 0;
+
+ } /* endif */
+
+ return( ret);
+}
+
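The registration contract introduced above can be sketched as a small stand-alone model: s390_request_irq_special() flags the request as new-style (newreq = 1) and records the not-oper callback, while the compatibility wrapper s390_request_irq() goes through the same path and clears the flag again. All names below are illustrative stand-ins, not the kernel symbols.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical user-space model of the registration contract.
 * fake_* names are stand-ins, not kernel structures. */
struct fake_ioinfo {
	int newreq;			/* 1 = registered via the special interface */
	void (*nopfunc)(int irq);	/* not-operational callback, may be NULL */
};

static struct fake_ioinfo fake_ioinfo_tab[4];

static int fake_request_irq_special(int irq, void (*nop)(int))
{
	if (irq < 0 || irq >= 4)
		return -1;		/* mimics the -EINVAL range check */
	fake_ioinfo_tab[irq].newreq = 1;
	fake_ioinfo_tab[irq].nopfunc = nop;
	return 0;
}

static int fake_request_irq(int irq)
{
	int ret = fake_request_irq_special(irq, NULL);

	if (ret == 0)
		fake_ioinfo_tab[irq].newreq = 0;	/* old-style caller */
	return ret;
}
```

The point of the newreq flag is that later path-verification code can tell whether a not-oper handler exists without inspecting the function pointer directly.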
void s390_free_irq(unsigned int irq, void *dev_id)
{
- unsigned int flags;
+ unsigned long flags;
int ret;
unsigned int count = 0;
do
{
- ret = ioinfo[irq]->irq_desc.handler->disable(irq);
+ ret = disable_subchannel( irq);
count++;
- if ( count == 3 )
+ if ( ret == -EBUSY )
+ {
+ int iret;
+
+			/*
+			 * kill it !
+			 * We first try a synchronous request and,
+			 * if the subchannel stays busy, terminate
+			 * the current I/O by an async request:
+			 * twice with halt, then with clear.
+			 */
+ if ( count < 2 )
+ {
+ iret = halt_IO( irq,
+ 0xC8C1D3E3,
+ DOIO_WAIT_FOR_INTERRUPT );
+
+ if ( iret == -EBUSY )
+ {
+ halt_IO( irq, 0xC8C1D3E3, 0);
+ s390irq_spin_unlock_irqrestore( irq, flags);
+ tod_wait( 200000 ); /* 200 ms */
+ s390irq_spin_lock_irqsave( irq, flags);
+
+ } /* endif */
+ }
+ else
+ {
+ iret = clear_IO( irq,
+ 0x40C3D3D9,
+ DOIO_WAIT_FOR_INTERRUPT );
+
+ if ( iret == -EBUSY )
+ {
+ clear_IO( irq, 0xC8C1D3E3, 0);
+ s390irq_spin_unlock_irqrestore( irq, flags);
+ tod_wait( 1000000 ); /* 1000 ms */
+ s390irq_spin_lock_irqsave( irq, flags);
+
+ } /* endif */
+
+ } /* endif */
+
+ if ( count == 2 )
+ {
+ /* give it a very last try ... */
+ disable_subchannel( irq);
+
+ if ( ioinfo[irq]->ui.flags.busy )
{
printk( KERN_CRIT"free_irq(%04X) "
"- device %04X busy, retry "
irq,
ioinfo[irq]->devstat.devno);
+ } /* endif */
+
+ break; /* sigh, let's give up ... */
+
+ } /* endif */
+
} /* endif */
} while ( ret == -EBUSY );
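The escalation policy in the retry loop above reduces to a small decision table: the first round tries HALT SUBCHANNEL (synchronously, then asynchronously), the second round tries CLEAR SUBCHANNEL the same way, and after the clear round we give up. The sketch below models only that policy outside the kernel; the fake_* names are hypothetical, not kernel symbols.

```c
#include <assert.h>

/* Stand-alone model of the free_irq() termination escalation. */
enum fake_term { FAKE_HALT, FAKE_CLEAR };

/* pick the termination instruction for the given retry round */
static enum fake_term fake_pick_termination(int count)
{
	return (count < 2) ? FAKE_HALT : FAKE_CLEAR;
}

/* count == 2 is the "very last try"; after it we stop retrying */
static int fake_keep_retrying(int count)
{
	return count < 2;
}
```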
ioinfo[irq]->irq_desc.action = NULL;
ioinfo[irq]->ui.flags.ready = 0;
-
- ioinfo[irq]->irq_desc.handler->enable = &enable_none;
- ioinfo[irq]->irq_desc.handler->disable = &disable_none;
-
+ ioinfo[irq]->irq_desc.handler->enable = enable_none;
+ ioinfo[irq]->irq_desc.handler->disable = disable_none;
ioinfo[irq]->ui.flags.unready = 0; /* deregister ended */
+ ioinfo[irq]->nopfunc = NULL;
+
s390irq_spin_unlock_irqrestore( irq, flags);
}
else
{
ioinfo[irq]->schib.pmcw.ena = 1;
+ ioinfo[irq]->schib.pmcw.isc = 3;
do
{
*/
ioinfo[irq]->ui.flags.s_pend = 1;
-
s390_process_IRQ( irq );
-
ioinfo[irq]->ui.flags.s_pend = 0;
ret = -EIO; /* might be overwritten */
retry--;
break;
+ case 2:
+ tod_wait(100); /* allow for recovery */
+ ret = -EBUSY;
+ retry--;
+ break;
+
case 3:
ioinfo[irq]->ui.flags.oper = 0;
ret = -ENODEV;
s390_process_IRQ( irq );
ioinfo[irq]->ui.flags.s_pend = 0;
- ret = -EBUSY; /* might be overwritten */
+ ret = -EIO; /* might be overwritten */
/* ... on re-driving the */
/* ... msch() call */
retry--;
"device %04X received !\n",
irq,
ioinfo[irq]->devstat.devno);
- ret = -ENODEV; // never reached
+ ret = -EBUSY;
break;
case 3 :
}
-
-int s390_setup_irq(unsigned int irq, struct irqaction * new)
+int s390_setup_irq( unsigned int irq, struct s390_irqaction * new)
{
unsigned long flags;
int rc = 0;
{
ioinfo[irq]->irq_desc.action = new;
ioinfo[irq]->irq_desc.status = 0;
- ioinfo[irq]->irq_desc.handler->enable = &enable_subchannel;
- ioinfo[irq]->irq_desc.handler->disable = &disable_subchannel;
- ioinfo[irq]->irq_desc.handler->handle = &handle_IRQ_event;
+ ioinfo[irq]->irq_desc.handler->enable = enable_subchannel;
+ ioinfo[irq]->irq_desc.handler->disable = disable_subchannel;
+ ioinfo[irq]->irq_desc.handler->handle = handle_IRQ_event;
ioinfo[irq]->ui.flags.ready = 1;
return( ret );
}
+static int free_bootmem( unsigned long buffer, unsigned long size)
+{
+ int ret = 0;
+
+ /*
+ * We don't have buffer management, thus a free
+ * must follow the matching alloc.
+ */
+ if ( buffer == (memory_start - size) )
+ memory_start -= size;
+ else
+ ret = -EINVAL;
+
+ return( ret );
+}
+
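Because free_bootmem() has no buffer management, it accepts a free only for the most recent allocation (strict LIFO order); anything else is rejected with -EINVAL. A user-space sketch of the same bump-allocator rule, with hypothetical fake_* names and -1 standing in for -EINVAL:

```c
#include <assert.h>

/* Illustrative model of the strict-LIFO bump allocator above. */
static unsigned long fake_memory_start = 0x1000;

static unsigned long fake_alloc_bootmem(unsigned long size)
{
	unsigned long buffer = fake_memory_start;

	fake_memory_start += size;
	return buffer;
}

static int fake_free_bootmem(unsigned long buffer, unsigned long size)
{
	if (buffer != (fake_memory_start - size))
		return -1;	/* out-of-order free: rejected */

	fake_memory_start -= size;
	return 0;
}
```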
unsigned long s390_init_IRQ( unsigned long memstart)
{
unsigned long flags; /* PSW flags */
atomic_set(&S390_lowcore.local_bh_count,0);
atomic_set(&S390_lowcore.local_irq_count,0);
+ asm volatile ("STCK %0" : "=m" (irq_IPL_TOD));
+
/*
* As we don't know about the calling environment
* we assure running disabled. Before leaving the
cr6 = 0x10000000;
asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
- s390_device_recognition();
+ s390_device_recognition_all();
init_IRQ_complete = 1;
/*
* The flag usage is mutal exclusive ...
*/
- if ( (flag & DOIO_RETURN_CHAN_END)
+ if ( (flag & DOIO_EARLY_NOTIFICATION)
&& (flag & DOIO_REPORT_ALL ) )
{
return( -EINVAL );
} /* endif */
- memset( &(ioinfo[irq]->orb), '\0', sizeof( orb_t) );
-
/*
* setup ORB
*/
+ ioinfo[irq]->orb.intparm = (__u32)&ioinfo[irq]->u_intparm;
ioinfo[irq]->orb.fmt = 1;
ioinfo[irq]->orb.pfch = !(flag & DOIO_DENY_PREFETCH);
}
else
{
- ioinfo[irq]->orb.lpm = ioinfo[irq]->schib.pmcw.pam;
+ ioinfo[irq]->orb.lpm = ioinfo[irq]->opm;
} /* endif */
} /* endif */
+ if ( flag & DOIO_DONT_CALL_INTHDLR )
+ {
+ ioinfo[irq]->ui.flags.repnone = 1;
+
+ } /* endif */
+
/*
* Issue "Start subchannel" and process condition code
*/
'\0', sizeof( irb_t) );
} /* endif */
+ memset( &ioinfo[irq]->devstat.ii.irb,
+ '\0',
+ sizeof( irb_t) );
+
/*
* initialize device status information
*/
* or if we are to return all interrupt info.
* Default is to call IRQ handler at secondary status only
*/
- if ( flag & DOIO_RETURN_CHAN_END )
+ if ( flag & DOIO_EARLY_NOTIFICATION )
{
ioinfo[irq]->ui.flags.fast = 1;
}
} /* endif */
- if ( flag & DOIO_VALID_LPM )
- {
- ioinfo[irq]->lpm = lpm; /* specific path */
- }
- else
- {
- ioinfo[irq]->lpm = 0xff; /* any path */
-
- } /* endif */
+ ioinfo[irq]->ulpm = ioinfo[irq]->orb.lpm;
/*
* If synchronous I/O processing is requested, we have
*/
if ( flag & DOIO_WAIT_FOR_INTERRUPT )
{
- int io_sub;
+ int io_sub = -1;
__u32 io_parm;
psw_t io_new_psw;
int ccode;
+ uint64_t time_start;
+ uint64_t time_curr;
int ready = 0;
struct _lowcore *lc = NULL;
+ int do_retry = 1;
/*
* We shouldn't perform a TPI loop, waiting for an
break;
} /* endswitch */
- io_sync_wait.addr = (unsigned long) &&io_wakeup
- | 0x80000000L;
+ io_sync_wait.addr = FIX_PSW(&&io_wakeup);
/*
* Martin didn't like modifying the new PSW, now we take
*/
*(__u32 *)__LC_SYNC_IO_WORD = 1;
+ asm volatile ("STCK %0" : "=m" (time_start));
+
+ time_start = time_start >> 32;
+
+ do
+ {
+ if ( flag & DOIO_TIMEOUT )
+ {
+ tpi_info_t tpi_info;
+
do
{
+ if ( tpi(&tpi_info) == 1 )
+ {
+ io_sub = tpi_info.irq;
+ break;
+ }
+ else
+ {
+ tod_wait(100); /* usecs */
+ asm volatile ("STCK %0" : "=m" (time_curr));
+
+ if ( ((time_curr >> 32) - time_start ) >= 3 )
+ do_retry = 0;
+
+ } /* endif */
+ } while ( do_retry );
+ }
+ else
+ {
asm volatile ( "lpsw %0" : : "m" (io_sync_wait) );
+
io_wakeup:
- io_parm = *(__u32 *)__LC_IO_INT_PARM;
io_sub = (__u32)*(__u16 *)__LC_SUBCHANNEL_NR;
+ } /* endif */
+
+ if ( do_retry )
ready = s390_process_IRQ( io_sub );
- } while ( !((io_sub == irq) && (ready == 1)) );
+ /*
+ * surrender when retry count's exceeded ...
+ */
+ } while ( !( ( io_sub == irq )
+ && ( ready == 1 ))
+ && do_retry );
*(__u32 *)__LC_SYNC_IO_WORD = 0;
+ if ( !do_retry )
+ ret = -ETIMEDOUT;
+
} /* endif */
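The timeout comparison works because STCK stores the TOD clock with bit 51 ticking every microsecond, so one unit of the upper 32-bit word corresponds to 2^20 microseconds, about 1.05 seconds; requiring a difference of at least 3 therefore times out after roughly 2 to 3 seconds. A self-contained check of that arithmetic (illustrative names only):

```c
#include <assert.h>
#include <stdint.h>

/* One microsecond advances TOD bit 51, i.e. adds 1 << 12 to the 64-bit
 * value, so the upper 32-bit word advances once every 2^20 microseconds
 * (about 1.048576 s).  fake_timed_out() mirrors the comparison above. */
static int fake_timed_out(uint64_t start_tod, uint64_t curr_tod)
{
	return ((curr_tod >> 32) - (start_tod >> 32)) >= 3;
}
```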
break;
case 1 : /* status pending */
- ioinfo[irq]->devstat.flag |= DEVSTAT_STATUS_PENDING;
+ ioinfo[irq]->devstat.flag = DEVSTAT_START_FUNCTION
+ | DEVSTAT_STATUS_PENDING;
/*
* initialize the device driver specific devstat irb area
ioinfo[irq]->ui.flags.s_pend = 0;
ioinfo[irq]->ui.flags.busy = 0;
ioinfo[irq]->ui.flags.doio = 0;
+
ioinfo[irq]->ui.flags.repall = 0;
ioinfo[irq]->ui.flags.w4final = 0;
*/
if ( ioinfo[irq]->devstat.ii.irb.scsw.cc == 3 )
{
+ if ( flag & DOIO_VALID_LPM )
+ {
+ ioinfo[irq]->opm &= ~(ioinfo[irq]->devstat.ii.irb.esw.esw1.lpum);
+ }
+ else
+ {
+ ioinfo[irq]->opm = 0;
+
+ } /* endif */
+
+ if ( ioinfo[irq]->opm == 0 )
+ {
ret = -ENODEV;
- ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
ioinfo[irq]->ui.flags.oper = 0;
+ }
+ else
+ {
+ ret = -EIO;
+
+ } /* endif */
+
+ ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
-#if CONFIG_DEBUG_IO
+#ifdef CONFIG_DEBUG_IO
{
char buffer[80];
((devstat_t *)(ioinfo[irq]->irq_desc.action->dev_id))->ii.sense.data,
((devstat_t *)(ioinfo[irq]->irq_desc.action->dev_id))->rescnt);
- }
+ } /* endif */
}
#endif
}
ret = -EBUSY;
break;
- default: /* device not operational */
+ default: /* device/path not operational */
+
+ if ( flag & DOIO_VALID_LPM )
+ {
+ ioinfo[irq]->opm &= ~lpm;
+ }
+ else
+ {
+ ioinfo[irq]->opm = 0;
- ret = -ENODEV;
- ioinfo[irq]->ui.flags.oper = 0;
+ } /* endif */
+ if ( ioinfo[irq]->opm == 0 )
+ {
+ ioinfo[irq]->ui.flags.oper = 0;
ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
+ } /* endif */
+
+ ret = -ENODEV;
+
memcpy( ioinfo[irq]->irq_desc.action->dev_id,
&(ioinfo[irq]->devstat),
sizeof( devstat_t) );
-#if CONFIG_DEBUG_IO
+#ifdef CONFIG_DEBUG_IO
{
char buffer[80];
} /* endswitch */
- if ( ( flag & DOIO_WAIT_FOR_INTERRUPT )
- && ( sync_isc_locked ) )
+ if ( sync_isc_locked )
{
- disable_cpu_sync_isc( irq );
+ int iret;
+ int retry = 5;
+ int halt = 0;
- spin_unlock_irqrestore( &sync_isc, psw_flags);
+ do
+ {
+ iret = disable_cpu_sync_isc( irq );
+ retry--;
+
+ /* try stopping it ... */
+ if ( (iret == -EBUSY) && !halt )
+ {
+ halt_IO( irq, 0x00004711, 0 );
+ halt = 1;
+
+ } /* endif */
+
+ tod_wait( 100);
+
+ } while ( retry && (iret == -EBUSY ) );
sync_isc_locked = 0; // local setting
ioinfo[irq]->ui.flags.syncio = 0; // global setting
+ spin_unlock_irqrestore( &sync_isc, psw_flags);
+
+ } /* endif */
+
+ if ( flag & DOIO_DONT_CALL_INTHDLR )
+ {
+ ioinfo[irq]->ui.flags.repnone = 0;
+
} /* endif */
return( ret);
return( -ENODEV);
}
- /* handler registered ? */
- if ( !ioinfo[irq]->ui.flags.ready )
+ /* handler registered ? or free_irq() in process already ? */
+ if ( !ioinfo[irq]->ui.flags.ready || ioinfo[irq]->ui.flags.unready )
{
return( -ENODEV );
*/
int halt_IO( int irq,
unsigned long user_intparm,
- unsigned int flag) /* possible DOIO_WAIT_FOR_INTERRUPT */
+ unsigned long flag) /* possible DOIO_WAIT_FOR_INTERRUPT */
{
int ret;
int ccode;
/*
* We don't allow for halt_io with a sync do_IO() requests pending.
*/
- else if ( ioinfo[irq]->ui.flags.syncio )
+ else if ( ioinfo[irq]->ui.flags.syncio
+ && (flag & DOIO_WAIT_FOR_INTERRUPT))
{
ret = -EBUSY;
}
break;
} /* endswitch */
- io_sync_wait.addr = (unsigned long)&&hio_wakeup
- | 0x80000000L;
+ io_sync_wait.addr = FIX_PSW(&&hio_wakeup);
/*
* Martin didn't like modifying the new PSW, now we take
} /* endswitch */
- if ( ( flag & DOIO_WAIT_FOR_INTERRUPT )
- && ( sync_isc_locked ) )
+ if ( sync_isc_locked )
{
+ disable_cpu_sync_isc( irq );
+
sync_isc_locked = 0; // local setting
ioinfo[irq]->ui.flags.syncio = 0; // global setting
- disable_cpu_sync_isc( irq );
-
spin_unlock_irqrestore( &sync_isc, psw_flags);
} /* endif */
return( ret );
}
-
/*
- * do_IRQ() handles all normal I/O device IRQ's (the special
- * SMP cross-CPU interrupts have their own specific
- * handlers).
- *
- * Returns: 0 - no ending status received, no further action taken
- * 1 - interrupt handler was called with ending status
+ * Note: The "intparm" parameter is not used by the clear_IO() function
+ * itself, as no ORB is built for the CSCH instruction. However,
+ * it allows the device interrupt handler to associate the upcoming
+ * interrupt with the clear_IO() request.
*/
-asmlinkage void do_IRQ( struct pt_regs regs,
- unsigned int irq,
- __u32 s390_intparm )
+int clear_IO( int irq,
+ unsigned long user_intparm,
+ unsigned long flag) /* possible DOIO_WAIT_FOR_INTERRUPT */
{
-#ifdef CONFIG_FAST_IRQ
+ int ret;
int ccode;
- tpi_info_t tpi_info;
- int new_irq;
-#endif
- int use_irq = irq;
-// __u32 use_intparm = s390_intparm;
+ unsigned long psw_flags;
+
+ int sync_isc_locked = 0;
+
+ if ( irq > highest_subchannel || irq < 0 )
+ {
+ ret = -ENODEV;
+ }
- //
- // fix me !!!
- //
- // We need to schedule device recognition, the interrupt stays
- // pending. We need to dynamically allocate an ioinfo structure.
- //
if ( ioinfo[irq] == INVALID_STORAGE_AREA )
{
- return;
+ return( -ENODEV);
}
/*
- * take fast exit if CPU is in sync. I/O state
- *
- * Note: we have to turn off the WAIT bit and re-disable
- * interrupts prior to return as this was the initial
- * entry condition to synchronous I/O.
+	 * we only allow for clear_IO if the device has an I/O handler associated
*/
- if ( *(__u32 *)__LC_SYNC_IO_WORD )
+ else if ( !ioinfo[irq]->ui.flags.ready )
{
- regs.psw.mask &= ~(_PSW_WAIT_MASK_BIT | _PSW_IO_MASK_BIT);
-
- return;
-
- } /* endif */
-
- s390irq_spin_lock(use_irq);
-
-#ifdef CONFIG_FAST_IRQ
- do {
-#endif /* CONFIG_FAST_IRQ */
-
- s390_process_IRQ( use_irq );
-
-#ifdef CONFIG_FAST_IRQ
-
+ ret = -ENODEV;
+ }
/*
- * more interrupts pending ?
+	 * we ignore the clear_IO() request if ending_status was received but
+ * a SENSE operation is waiting for completion.
*/
- ccode = tpi( &tpi_info );
-
- if ( ! ccode )
- break; // no, leave ...
-
- new_irq = tpi_info.irq;
-// use_intparm = tpi_info.intparm;
-
+ else if ( ioinfo[irq]->ui.flags.w4sense )
+ {
+ ret = 0;
+ }
/*
- * if the interrupt is for a different irq we
- * release the current irq lock and obtain
- * a new one ...
+	 * We don't allow for clear_IO() with sync do_IO() requests pending.
+ * Concurrent I/O is possible in SMP environments only, but the
+ * sync. I/O request can be gated to one CPU at a time only.
*/
- if ( new_irq != use_irq )
+ else if ( ioinfo[irq]->ui.flags.syncio )
{
- s390irq_spin_unlock(use_irq);
- use_irq = new_irq;
- s390irq_spin_lock(use_irq);
-
- } /* endif */
-
- } while ( 1 );
-
-#endif /* CONFIG_FAST_IRQ */
-
- s390irq_spin_unlock(use_irq);
-
- return;
+ ret = -EBUSY;
}
-
+ else
+ {
/*
- * s390_process_IRQ() handles status pending situations and interrupts
- *
- * Called by : do_IRQ() - for "real" interrupts
- * s390_start_IO, halt_IO()
- * - status pending cond. after SSCH, or HSCH
- * disable_subchannel() - status pending conditions (after MSCH)
- *
- * Returns: 0 - no ending status received, no further action taken
- * 1 - interrupt handler was called with ending status
+ * If sync processing was requested we lock the sync ISC,
+ * modify the device to present interrupts for this ISC only
+ * and switch the CPU to handle this ISC + the console ISC
+ * exclusively.
*/
-int s390_process_IRQ( unsigned int irq )
+ if ( flag & DOIO_WAIT_FOR_INTERRUPT )
{
- int ccode; /* condition code from tsch() operation */
- int irb_cc; /* condition code from irb */
- int sdevstat; /* effective struct devstat size to copy */
- unsigned int fctl; /* function control */
- unsigned int stctl; /* status control */
- unsigned int actl; /* activity control */
- struct irqaction *action;
- struct pt_regs regs; /* for interface compatibility only */
-
- int issense = 0;
- int ending_status = 0;
- int allow4handler = 1;
- int chnchk = 0;
-#if 0
- int cpu = smp_processor_id();
+ //
+ // check whether we run recursively (sense processing)
+ //
+ if ( !ioinfo[irq]->ui.flags.syncio )
+ {
+ spin_lock_irqsave( &sync_isc, psw_flags);
- kstat.irqs[cpu][irq]++;
-#endif
- action = ioinfo[irq]->irq_desc.action;
+ ret = enable_cpu_sync_isc( irq);
- /*
- * It might be possible that a device was not-oper. at the time
- * of free_irq() processing. This means the handler is no longer
- * available when the device possibly becomes ready again. In
- * this case we perform delayed disable_subchannel() processing.
- */
- if ( action == NULL )
+ if ( ret )
{
- if ( !ioinfo[irq]->ui.flags.d_disable )
+ spin_unlock_irqrestore( &sync_isc,
+ psw_flags);
+ return( ret);
+ }
+ else
{
- printk( KERN_CRIT"s390_process_IRQ(%04X) "
- "- no interrupt handler registered"
- "for device %04X !\n",
- irq,
- ioinfo[irq]->devstat.devno);
+ sync_isc_locked = 1; // local
+ ioinfo[irq]->ui.flags.syncio = 1; // global
+
+ } /* endif */
} /* endif */
} /* endif */
/*
- * retrieve the i/o interrupt information (irb),
- * update the device specific status information
- * and possibly call the interrupt handler.
- *
- * Note 1: At this time we don't process the resulting
- * condition code (ccode) from tsch(), although
- * we probably should.
- *
- * Note 2: Here we will have to check for channel
- * check conditions and call a channel check
- * handler.
- *
- * Note 3: If a start function was issued, the interruption
- * parameter relates to it. If a halt function was
- * issued for an idle device, the intparm must not
- * be taken from lowcore, but from the devstat area.
+	 * Issue "Clear subchannel" and process condition code
*/
- ccode = tsch( irq, &(ioinfo[irq]->devstat.ii.irb) );
+ ccode = csch( irq );
- //
- // We must only accumulate the status if initiated by do_IO() or halt_IO()
- //
- if ( ioinfo[irq]->ui.flags.busy )
+ switch ( ccode ) {
+ case 0:
+
+ ioinfo[irq]->ui.flags.haltio = 1;
+
+ if ( !ioinfo[irq]->ui.flags.doio )
{
- ioinfo[irq]->devstat.dstat |= ioinfo[irq]->devstat.ii.irb.scsw.dstat;
- ioinfo[irq]->devstat.cstat |= ioinfo[irq]->devstat.ii.irb.scsw.cstat;
+ ioinfo[irq]->ui.flags.busy = 1;
+ ioinfo[irq]->u_intparm = user_intparm;
+ ioinfo[irq]->devstat.cstat = 0;
+ ioinfo[irq]->devstat.dstat = 0;
+ ioinfo[irq]->devstat.lpum = 0;
+ ioinfo[irq]->devstat.flag = DEVSTAT_CLEAR_FUNCTION;
+ ioinfo[irq]->devstat.scnt = 0;
+
}
else
{
- ioinfo[irq]->devstat.dstat = ioinfo[irq]->devstat.ii.irb.scsw.dstat;
- ioinfo[irq]->devstat.cstat = ioinfo[irq]->devstat.ii.irb.scsw.cstat;
-
- ioinfo[irq]->devstat.flag = 0; // reset status flags
+ ioinfo[irq]->devstat.flag |= DEVSTAT_CLEAR_FUNCTION;
} /* endif */
- ioinfo[irq]->devstat.lpum = ioinfo[irq]->devstat.ii.irb.esw.esw1.lpum;
-
- if ( ioinfo[irq]->ui.flags.busy)
+ /*
+ * If synchronous I/O processing is requested, we have
+ * to wait for the corresponding interrupt to occur by
+ * polling the interrupt condition. However, as multiple
+ * interrupts may be outstanding, we must not just wait
+ * for the first interrupt, but must poll until ours
+ * pops up.
+ */
+ if ( flag & DOIO_WAIT_FOR_INTERRUPT )
{
- ioinfo[irq]->devstat.intparm = ioinfo[irq]->u_intparm;
+ int io_sub;
+ __u32 io_parm;
+ psw_t io_new_psw;
+ int ccode;
- } /* endif */
+ int ready = 0;
+ struct _lowcore *lc = NULL;
/*
- * reset device-busy bit if no longer set in irb
+ * We shouldn't perform a TPI loop, waiting for
+ * an interrupt to occur, but should load a
+ * WAIT PSW instead. Otherwise we may keep the
+ * channel subsystem busy, not able to present
+ * the interrupt. When our sync. interrupt
+ * arrived we reset the I/O old PSW to its
+ * original value.
*/
- if ( (ioinfo[irq]->devstat.dstat & DEV_STAT_BUSY )
- && ((ioinfo[irq]->devstat.ii.irb.scsw.dstat & DEV_STAT_BUSY) == 0))
- {
- ioinfo[irq]->devstat.dstat &= ~DEV_STAT_BUSY;
+ memcpy( &io_new_psw,
+ &lc->io_new_psw,
+ sizeof(psw_t));
- } /* endif */
+ ccode = iac();
+
+ switch (ccode) {
+ case 0: // primary-space
+ io_sync_wait.mask = _IO_PSW_MASK
+ | _PSW_PRIM_SPACE_MODE
+ | _PSW_IO_WAIT;
+ break;
+ case 1: // secondary-space
+ io_sync_wait.mask = _IO_PSW_MASK
+ | _PSW_SEC_SPACE_MODE
+ | _PSW_IO_WAIT;
+ break;
+ case 2: // access-register
+ io_sync_wait.mask = _IO_PSW_MASK
+ | _PSW_ACC_REG_MODE
+ | _PSW_IO_WAIT;
+ break;
+ case 3: // home-space
+ io_sync_wait.mask = _IO_PSW_MASK
+ | _PSW_HOME_SPACE_MODE
+ | _PSW_IO_WAIT;
+ break;
+ default:
+			panic( "clear_IO() : unexpected "
+ "address-space-control %d\n",
+ ccode);
+ break;
+ } /* endswitch */
+
+ io_sync_wait.addr = FIX_PSW(&&cio_wakeup);
/*
- * Save residual count and CCW information in case primary and
- * secondary status are presented with different interrupts.
+ * Martin didn't like modifying the new PSW, now we take
+ * a fast exit in do_IRQ() instead
*/
- if ( ioinfo[irq]->devstat.ii.irb.scsw.stctl & SCSW_STCTL_PRIM_STATUS )
- {
- ioinfo[irq]->devstat.rescnt = ioinfo[irq]->devstat.ii.irb.scsw.count;
-
-#if CONFIG_DEBUG_IO
- if ( irq != cons_dev )
- printk( "s390_process_IRQ( %04X ) : "
- "residual count from irb after tsch() %d\n",
- irq, ioinfo[irq]->devstat.rescnt );
-#endif
- } /* endif */
+ *(__u32 *)__LC_SYNC_IO_WORD = 1;
- if ( ioinfo[irq]->devstat.ii.irb.scsw.cpa != 0 )
+ do
{
- ioinfo[irq]->devstat.cpa = ioinfo[irq]->devstat.ii.irb.scsw.cpa;
- } /* endif */
+ asm volatile ( "lpsw %0" : : "m" (io_sync_wait) );
+cio_wakeup:
+ io_parm = *(__u32 *)__LC_IO_INT_PARM;
+ io_sub = (__u32)*(__u16 *)__LC_SUBCHANNEL_NR;
- irb_cc = ioinfo[irq]->devstat.ii.irb.scsw.cc;
+ ready = s390_process_IRQ( io_sub );
- //
- // check for any kind of channel or interface control check but don't
- // issue the message for the console device
- //
- if ( (ioinfo[irq]->devstat.ii.irb.scsw.cstat
- & ( SCHN_STAT_CHN_DATA_CHK
- | SCHN_STAT_CHN_CTRL_CHK
- | SCHN_STAT_INTF_CTRL_CHK ) )
- && (irq != cons_dev ) )
- {
- printk( "Channel-Check or Interface-Control-Check "
- "received\n"
- " ... device %04X on subchannel %04X, dev_stat "
- ": %02X sch_stat : %02X\n",
- ioinfo[irq]->devstat.devno,
- irq,
- ioinfo[irq]->devstat.dstat,
- ioinfo[irq]->devstat.cstat);
+ } while ( !((io_sub == irq) && (ready == 1)) );
- chnchk = 1;
+ *(__u32 *)__LC_SYNC_IO_WORD = 0;
} /* endif */
- issense = ioinfo[irq]->devstat.ii.irb.esw.esw0.erw.cons;
-
- if ( issense )
- {
- ioinfo[irq]->devstat.scnt =
- ioinfo[irq]->devstat.ii.irb.esw.esw0.erw.scnt;
- ioinfo[irq]->devstat.flag |=
- DEVSTAT_FLAG_SENSE_AVAIL;
+ ret = 0;
+ break;
- sdevstat = sizeof( devstat_t);
-
-#if CONFIG_DEBUG_IO
- if ( irq != cons_dev )
- printk( "s390_process_IRQ( %04X ) : "
- "concurrent sense bytes avail %d\n",
- irq, ioinfo[irq]->devstat.scnt );
-#endif
- }
- else
- {
- /* don't copy the sense data area ! */
- sdevstat = sizeof( devstat_t) - SENSE_MAX_COUNT;
-
- } /* endif */
+ case 1 : /* status pending */
- switch ( irb_cc ) {
- case 1: /* status pending */
+ ioinfo[irq]->devstat.flag |= DEVSTAT_STATUS_PENDING;
- ioinfo[irq]->devstat.flag |= DEVSTAT_STATUS_PENDING;
+ /*
+ * initialize the device driver specific devstat irb area
+ */
+ memset( &((devstat_t *) ioinfo[irq]->irq_desc.action->dev_id)->ii.irb,
+ '\0', sizeof( irb_t) );
- case 0: /* normal i/o interruption */
+ /*
+ * Let the common interrupt handler process the pending
+ * status. However, we must avoid calling the user
+ * action handler, as it won't be prepared to handle
+		 * a pending status during clear_IO() processing inline.
+ * This also implies that s390_process_IRQ must
+ * terminate synchronously - especially if device
+ * sensing is required.
+ */
+ ioinfo[irq]->ui.flags.s_pend = 1;
+ ioinfo[irq]->ui.flags.busy = 1;
+ ioinfo[irq]->ui.flags.doio = 1;
- fctl = ioinfo[irq]->devstat.ii.irb.scsw.fctl;
- stctl = ioinfo[irq]->devstat.ii.irb.scsw.stctl;
- actl = ioinfo[irq]->devstat.ii.irb.scsw.actl;
+ s390_process_IRQ( irq );
+
+ ioinfo[irq]->ui.flags.s_pend = 0;
+ ioinfo[irq]->ui.flags.busy = 0;
+ ioinfo[irq]->ui.flags.doio = 0;
+ ioinfo[irq]->ui.flags.repall = 0;
+ ioinfo[irq]->ui.flags.w4final = 0;
- if ( chnchk && (ioinfo[irq]->senseid.cu_type == 0x3088))
+ ioinfo[irq]->devstat.flag |= DEVSTAT_FINAL_STATUS;
+
+ /*
+ * In multipath mode a condition code 3 implies the last
+ * path has gone, except we have previously restricted
+ * the I/O to a particular path. A condition code 1
+		 * (0 won't occur) results in return code EIO, as does
+		 * a 3 with another path than the one used (i.e. the
+		 * path available mask is non-zero).
+ */
+ if ( ioinfo[irq]->devstat.ii.irb.scsw.cc == 3 )
+ {
+ ret = -ENODEV;
+ ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
+ ioinfo[irq]->ui.flags.oper = 0;
+ }
+ else
+ {
+ ret = -EIO;
+ ioinfo[irq]->devstat.flag &= ~DEVSTAT_NOT_OPER;
+ ioinfo[irq]->ui.flags.oper = 1;
+
+ } /* endif */
+
+ break;
+
+ case 2 : /* busy */
+
+ ret = -EBUSY;
+ break;
+
+ default: /* device not operational */
+
+ ret = -ENODEV;
+ break;
+
+ } /* endswitch */
+
+ if ( sync_isc_locked )
{
- char buffer[80];
+ disable_cpu_sync_isc( irq );
- sprintf( buffer, "s390_process_IRQ(%04X) - irb for "
- "device %04X after channel check\n",
- irq,
- ioinfo[irq]->devstat.devno );
+ sync_isc_locked = 0; // local setting
+ ioinfo[irq]->ui.flags.syncio = 0; // global setting
+
+ spin_unlock_irqrestore( &sync_isc, psw_flags);
- s390_displayhex( buffer,
- &(ioinfo[irq]->devstat.ii.irb) ,
- sizeof(irb_t));
} /* endif */
- ioinfo[irq]->stctl |= stctl;
+ } /* endif */
+
+ return( ret );
+}
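The condition-code handling in clear_IO() above boils down to a small mapping: cc 0 starts the clear function and eventually returns 0, cc 1 (status pending) is processed inline and yields -EIO, or -ENODEV if the last path is gone, cc 2 yields -EBUSY, and cc 3 yields -ENODEV. The sketch below models only that mapping; the fake_* names and FAKE_* errno stand-ins are hypothetical.

```c
#include <assert.h>

/* Stand-in errno values (numerically matching Linux, but local here). */
#define FAKE_EIO     5
#define FAKE_EBUSY  16
#define FAKE_ENODEV 19

/* Illustrative mapping from the CSCH condition code to the return value;
 * dev_gone models the cc == 1 case where cc 3 shows up in the irb. */
static int fake_clear_io_ret(int ccode, int dev_gone)
{
	switch (ccode) {
	case 0:
		return 0;				/* clear started */
	case 1:
		return dev_gone ? -FAKE_ENODEV : -FAKE_EIO;
	case 2:
		return -FAKE_EBUSY;			/* busy */
	default:
		return -FAKE_ENODEV;			/* not operational */
	}
}
```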
- ending_status = ( stctl & SCSW_STCTL_SEC_STATUS )
- || ( stctl == (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND) )
- || ( (fctl == SCSW_FCTL_HALT_FUNC) && (stctl == SCSW_STCTL_STATUS_PEND) );
/*
- * Check for unsolicited interrupts - for debug purposes only
- *
- * We only consider an interrupt as unsolicited, if the device was not
- * actively in use (busy) and an interrupt other than an ALERT status
- * was received.
+ * do_IRQ() handles all normal I/O device IRQ's (the special
+ * SMP cross-CPU interrupts have their own specific
+ * handlers).
*
- * Note: We must not issue a message to the console, if the
- * unsolicited interrupt applies to the console device
- * itself !
+ * Returns: 0 - no ending status received, no further action taken
+ * 1 - interrupt handler was called with ending status
*/
-#if CONFIG_DEBUG_IO
- if ( ( irq != cons_dev )
- && !( stctl & SCSW_STCTL_ALERT_STATUS )
- && ( ioinfo[irq]->ui.flags.busy == 0 ) )
+asmlinkage void do_IRQ( struct pt_regs regs,
+ unsigned int irq,
+ __u32 s390_intparm )
{
- char buffer[80];
-
- printk( "Unsolicited interrupt received for device %04X on subchannel %04X\n"
- " ... device status : %02X subchannel status : %02X\n",
- ioinfo[irq]->devstat.devno,
- irq,
- ioinfo[irq]->devstat.dstat,
- ioinfo[irq]->devstat.cstat);
-
- sprintf( buffer, "s390_process_IRQ(%04X) - irb for "
- "device %04X, ending_status %d\n",
- irq,
- ioinfo[irq]->devstat.devno,
- ending_status);
+#ifdef CONFIG_FAST_IRQ
+ int ccode;
+ tpi_info_t tpi_info;
+ int new_irq;
+#endif
+ int use_irq = irq;
- s390_displayhex( buffer,
- &(ioinfo[irq]->devstat.ii.irb) ,
- sizeof(irb_t));
+ //
+ // fix me !!!
+ //
+ // We need to schedule device recognition, the interrupt stays
+ // pending. We need to dynamically allocate an ioinfo structure.
+ //
+ if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ {
+ return; /* this keeps the device boxed ... */
+ }
- } /* endif */
-#endif
/*
- * Check whether we must issue a SENSE CCW ourselves if there is no
- * concurrent sense facility installed for the subchannel.
+ * take fast exit if CPU is in sync. I/O state
*
- * Note: We should check for ioinfo[irq]->ui.flags.consns but VM
- * violates the ESA/390 architecture and doesn't present an
- * operand exception for virtual devices without concurrent
- * sense facility available/supported when enabling the
- * concurrent sense facility.
+ * Note: we have to turn off the WAIT bit and re-disable
+ * interrupts prior to return as this was the initial
+ * entry condition to synchronous I/O.
*/
- if ( ( ( ioinfo[irq]->devstat.ii.irb.scsw.dstat & DEV_STAT_UNIT_CHECK )
- && ( !issense ) )
- || ( ioinfo[irq]->ui.flags.delsense && ending_status ) )
+ if ( *(__u32 *)__LC_SYNC_IO_WORD )
{
- int ret_io;
- ccw1_t *s_ccw = &ioinfo[irq]->senseccw;
- unsigned long s_flag = 0;
+ regs.psw.mask &= ~(_PSW_WAIT_MASK_BIT | _PSW_IO_MASK_BIT);
+
+ return;
+
+ } /* endif */
+
+ s390irq_spin_lock(use_irq);
+
+#ifdef CONFIG_FAST_IRQ
+ do {
+#endif /* CONFIG_FAST_IRQ */
+
+ s390_process_IRQ( use_irq );
+
+#ifdef CONFIG_FAST_IRQ
- if (ending_status)
- {
/*
- * We copy the current status information into the device driver
- * status area. Then we can use the local devstat area for device
- * sensing. When finally calling the IRQ handler we must not overlay
- * the original device status but copy the sense data only.
+ * more interrupts pending ?
*/
- memcpy( ioinfo[irq]->irq_desc.action->dev_id,
- &(ioinfo[irq]->devstat),
- sizeof( devstat_t) );
+ ccode = tpi( &tpi_info );
- s_ccw->cmd_code = CCW_CMD_BASIC_SENSE;
- s_ccw->cda = (char *)virt_to_phys( ioinfo[irq]->devstat.ii.sense.data);
- s_ccw->count = SENSE_MAX_COUNT;
- s_ccw->flags = CCW_FLAG_SLI;
+ if ( ! ccode )
+ break; // no, leave ...
+
+ new_irq = tpi_info.irq;
/*
- * If free_irq() or a sync do_IO/s390_start_IO() is in
- * process we have to sense synchronously
+ * if the interrupt is for a different irq we
+ * release the current irq lock and obtain
+ * a new one ...
*/
- if ( ioinfo[irq]->ui.flags.unready || ioinfo[irq]->ui.flags.syncio )
+ if ( new_irq != use_irq )
{
- s_flag = DOIO_WAIT_FOR_INTERRUPT;
+ s390irq_spin_unlock(use_irq);
+ use_irq = new_irq;
+ s390irq_spin_lock(use_irq);
+
+ } /* endif */
+
+ } while ( 1 );
+
+#endif /* CONFIG_FAST_IRQ */
+
+ s390irq_spin_unlock(use_irq);
+
+ return;
+ }
+
+ /*
+ * s390_process_IRQ() handles status pending situations and interrupts
+ *
+ * Called by : do_IRQ() - for "real" interrupts
+ * s390_start_IO, halt_IO()
+ * - status pending cond. after SSCH, or HSCH
+ * disable_subchannel() - status pending conditions (after MSCH)
+ *
+ * Returns: 0 - no ending status received, no further action taken
+ * 1 - interrupt handler was called with ending status
+ */
+int s390_process_IRQ( unsigned int irq )
+{
+ int ccode; /* cond code from tsch() operation */
+ int irb_cc; /* cond code from irb */
+ int sdevstat; /* struct devstat size to copy */
+ unsigned int fctl; /* function control */
+ unsigned int stctl; /* status control */
+ unsigned int actl; /* activity control */
+ struct s390_irqaction *action;
+ struct pt_regs regs; /* for interface compatibility only */
+
+ int issense = 0;
+ int ending_status = 0;
+ int allow4handler = 1;
+ int chnchk = 0;
+#if 0
+ int cpu = smp_processor_id();
+
+ kstat.irqs[cpu][irq]++;
+#endif
+
+ if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ {
+ /* we can't properly process the interrupt ... */
+ tsch( irq, &init_irb );
+ return( 1 );
+ }
+ else
+ {
+ action = ioinfo[irq]->irq_desc.action;
+
+ } /* endif */
+
+#ifdef CONFIG_DEBUG_IO
+ /*
+ * It might be possible that a device was not-oper. at the time
+ * of free_irq() processing. This means the handler is no longer
+ * available when the device possibly becomes ready again. In
+ * this case we perform delayed disable_subchannel() processing.
+ */
+ if ( action == NULL )
+ {
+ if ( !ioinfo[irq]->ui.flags.d_disable )
+ {
+ printk( KERN_CRIT"s390_process_IRQ(%04X) "
+ "- no interrupt handler registered "
+ "for device %04X !\n",
+ irq,
+ ioinfo[irq]->devstat.devno);
+
+ } /* endif */
+ } /* endif */
+#endif
+
+ /*
+ * retrieve the i/o interrupt information (irb),
+ * update the device specific status information
+ * and possibly call the interrupt handler.
+ *
+ * Note 1: At this time we don't process the resulting
+ * condition code (ccode) from tsch(), although
+ * we probably should.
+ *
+ * Note 2: Here we will have to check for channel
+ * check conditions and call a channel check
+ * handler.
+ *
+ * Note 3: If a start function was issued, the interruption
+ * parameter relates to it. If a halt function was
+ * issued for an idle device, the intparm must not
+ * be taken from lowcore, but from the devstat area.
+ */
+ ccode = tsch( irq, &(ioinfo[irq]->devstat.ii.irb) );
+
+ //
+ // We must only accumulate the status if initiated by do_IO() or halt_IO()
+ //
+ if ( ioinfo[irq]->ui.flags.busy )
+ {
+ ioinfo[irq]->devstat.dstat |= ioinfo[irq]->devstat.ii.irb.scsw.dstat;
+ ioinfo[irq]->devstat.cstat |= ioinfo[irq]->devstat.ii.irb.scsw.cstat;
+ }
+ else
+ {
+ ioinfo[irq]->devstat.dstat = ioinfo[irq]->devstat.ii.irb.scsw.dstat;
+ ioinfo[irq]->devstat.cstat = ioinfo[irq]->devstat.ii.irb.scsw.cstat;
+
+ ioinfo[irq]->devstat.flag = 0; // reset status flags
+
+ } /* endif */
+
+ ioinfo[irq]->devstat.lpum = ioinfo[irq]->devstat.ii.irb.esw.esw1.lpum;
+
+ if ( ioinfo[irq]->ui.flags.busy)
+ {
+ ioinfo[irq]->devstat.intparm = ioinfo[irq]->u_intparm;
+
+ } /* endif */
+
+ /*
+ * reset device-busy bit if no longer set in irb
+ */
+ if ( (ioinfo[irq]->devstat.dstat & DEV_STAT_BUSY )
+ && ((ioinfo[irq]->devstat.ii.irb.scsw.dstat & DEV_STAT_BUSY) == 0))
+ {
+ ioinfo[irq]->devstat.dstat &= ~DEV_STAT_BUSY;
+
+ } /* endif */
+
+ /*
+ * Save residual count and CCW information in case primary and
+ * secondary status are presented with different interrupts.
+ */
+ if ( ioinfo[irq]->devstat.ii.irb.scsw.stctl & SCSW_STCTL_PRIM_STATUS )
+ {
+ ioinfo[irq]->devstat.rescnt = ioinfo[irq]->devstat.ii.irb.scsw.count;
+
+#ifdef CONFIG_DEBUG_IO
+ if ( irq != cons_dev )
+ printk( "s390_process_IRQ( %04X ) : "
+ "residual count from irb after tsch() %d\n",
+ irq, ioinfo[irq]->devstat.rescnt );
+#endif
+ } /* endif */
+
+ if ( ioinfo[irq]->devstat.ii.irb.scsw.cpa != 0 )
+ {
+ ioinfo[irq]->devstat.cpa = ioinfo[irq]->devstat.ii.irb.scsw.cpa;
+
+ } /* endif */
+
+ irb_cc = ioinfo[irq]->devstat.ii.irb.scsw.cc;
+
+ //
+ // check for any kind of channel or interface control check but don't
+ // issue the message for the console device
+ //
+ if ( (ioinfo[irq]->devstat.ii.irb.scsw.cstat
+ & ( SCHN_STAT_CHN_DATA_CHK
+ | SCHN_STAT_CHN_CTRL_CHK
+ | SCHN_STAT_INTF_CTRL_CHK ) )
+ && (irq != cons_dev ) )
+ {
+ printk( "Channel-Check or Interface-Control-Check "
+ "received\n"
+ " ... device %04X on subchannel %04X, dev_stat "
+ ": %02X sch_stat : %02X\n",
+ ioinfo[irq]->devstat.devno,
+ irq,
+ ioinfo[irq]->devstat.dstat,
+ ioinfo[irq]->devstat.cstat);
+
+ chnchk = 1;
+
+ } /* endif */
+
+ issense = ioinfo[irq]->devstat.ii.irb.esw.esw0.erw.cons;
+
+ if ( issense )
+ {
+ ioinfo[irq]->devstat.scnt =
+ ioinfo[irq]->devstat.ii.irb.esw.esw0.erw.scnt;
+ ioinfo[irq]->devstat.flag |=
+ DEVSTAT_FLAG_SENSE_AVAIL;
+
+ sdevstat = sizeof( devstat_t);
+
+#ifdef CONFIG_DEBUG_IO
+ if ( irq != cons_dev )
+ printk( "s390_process_IRQ( %04X ) : "
+ "concurrent sense bytes avail %d\n",
+ irq, ioinfo[irq]->devstat.scnt );
+#endif
+ }
+ else
+ {
+ /* don't copy the sense data area ! */
+ sdevstat = sizeof( devstat_t) - SENSE_MAX_COUNT;
+
+ } /* endif */
+
+ switch ( irb_cc ) {
+ case 1: /* status pending */
+
+ ioinfo[irq]->devstat.flag |= DEVSTAT_STATUS_PENDING;
+
+      /* fall through: a pending status is processed like a normal interruption */
+
+ case 0: /* normal i/o interruption */
+
+ fctl = ioinfo[irq]->devstat.ii.irb.scsw.fctl;
+ stctl = ioinfo[irq]->devstat.ii.irb.scsw.stctl;
+ actl = ioinfo[irq]->devstat.ii.irb.scsw.actl;
+
+ if ( chnchk && (ioinfo[irq]->senseid.cu_type == 0x3088))
+ {
+ char buffer[80];
+
+ sprintf( buffer, "s390_process_IRQ(%04X) - irb for "
+ "device %04X after channel check\n",
+ irq,
+ ioinfo[irq]->devstat.devno );
+
+ s390_displayhex( buffer,
+ &(ioinfo[irq]->devstat.ii.irb) ,
+ sizeof(irb_t));
+ } /* endif */
+
+ ioinfo[irq]->stctl |= stctl;
+
+ ending_status = ( stctl & SCSW_STCTL_SEC_STATUS )
+ || ( stctl == (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND) )
+ || ( (fctl == SCSW_FCTL_HALT_FUNC) && (stctl == SCSW_STCTL_STATUS_PEND) )
+ || ( (fctl == SCSW_FCTL_CLEAR_FUNC) && (stctl == SCSW_STCTL_STATUS_PEND) );
+
+ /*
+ * Check for unsolicited interrupts - for debug purposes only
+ *
+ * We only consider an interrupt as unsolicited, if the device was not
+ * actively in use (busy) and an interrupt other than an ALERT status
+ * was received.
+ *
+ * Note: We must not issue a message to the console, if the
+ * unsolicited interrupt applies to the console device
+ * itself !
+ */
+#ifdef CONFIG_DEBUG_IO
+ if ( ( irq != cons_dev )
+ && !( stctl & SCSW_STCTL_ALERT_STATUS )
+ && ( ioinfo[irq]->ui.flags.busy == 0 ) )
+ {
+ char buffer[80];
+
+ printk( "Unsolicited interrupt received for device %04X on subchannel %04X\n"
+ " ... device status : %02X subchannel status : %02X\n",
+ ioinfo[irq]->devstat.devno,
+ irq,
+ ioinfo[irq]->devstat.dstat,
+ ioinfo[irq]->devstat.cstat);
+
+ sprintf( buffer, "s390_process_IRQ(%04X) - irb for "
+ "device %04X, ending_status %d\n",
+ irq,
+ ioinfo[irq]->devstat.devno,
+ ending_status);
+
+ s390_displayhex( buffer,
+ &(ioinfo[irq]->devstat.ii.irb) ,
+ sizeof(irb_t));
+
+ } /* endif */
+
+#endif
+ /*
+ * take fast exit if no handler is available
+ */
+ if ( !action )
+ return( ending_status );
+
+ /*
+ * Check whether we must issue a SENSE CCW ourselves if there is no
+ * concurrent sense facility installed for the subchannel.
+ *
+ * Note: We should check for ioinfo[irq]->ui.flags.consns but VM
+ * violates the ESA/390 architecture and doesn't present an
+ * operand exception for virtual devices without concurrent
+ * sense facility available/supported when enabling the
+ * concurrent sense facility.
+ */
+ if ( ( ( ioinfo[irq]->devstat.ii.irb.scsw.dstat & DEV_STAT_UNIT_CHECK )
+ && ( !issense ) )
+ || ( ioinfo[irq]->ui.flags.delsense && ending_status ) )
+ {
+ int ret_io;
+ ccw1_t *s_ccw = &ioinfo[irq]->senseccw;
+ unsigned long s_flag = 0;
+
+ if ( ending_status )
+ {
+ /*
+ * We copy the current status information into the device driver
+ * status area. Then we can use the local devstat area for device
+ * sensing. When finally calling the IRQ handler we must not overlay
+ * the original device status but copy the sense data only.
+ */
+ memcpy( action->dev_id,
+ &(ioinfo[irq]->devstat),
+ sizeof( devstat_t) );
+
+ s_ccw->cmd_code = CCW_CMD_BASIC_SENSE;
+ s_ccw->cda = (char *)virt_to_phys( ioinfo[irq]->devstat.ii.sense.data);
+ s_ccw->count = SENSE_MAX_COUNT;
+ s_ccw->flags = CCW_FLAG_SLI;
+
+ /*
+ * If free_irq() or a sync do_IO/s390_start_IO() is in
+ * process we have to sense synchronously
+ */
+ if ( ioinfo[irq]->ui.flags.unready || ioinfo[irq]->ui.flags.syncio )
+ {
+ s_flag = DOIO_WAIT_FOR_INTERRUPT;
+
+ } /* endif */
+
+ /*
+ * Reset status info
+ *
+ * It does not matter whether this is a sync. or async.
+ * SENSE request, but we have to assure we don't call
+ * the irq handler now, but keep the irq in busy state.
+ * In sync. mode s390_process_IRQ() is called recursively,
+ * while in async. mode we re-enter do_IRQ() with the
+ * next interrupt.
+ *
+ * Note : this may be a delayed sense request !
+ */
+ allow4handler = 0;
+
+ ioinfo[irq]->ui.flags.fast = 0;
+ ioinfo[irq]->ui.flags.repall = 0;
+ ioinfo[irq]->ui.flags.w4final = 0;
+ ioinfo[irq]->ui.flags.delsense = 0;
+
+ ioinfo[irq]->devstat.cstat = 0;
+ ioinfo[irq]->devstat.dstat = 0;
+ ioinfo[irq]->devstat.rescnt = SENSE_MAX_COUNT;
+
+ ioinfo[irq]->ui.flags.w4sense = 1;
+
+ ret_io = s390_start_IO( irq,
+ s_ccw,
+ 0xE2C5D5E2, // = SENSe
+ 0, // n/a
+ s_flag);
+ }
+ else
+ {
+ /*
+             * we received a Unit Check but we have no final
+ * status yet, therefore we must delay the SENSE
+ * processing. However, we must not report this
+ * intermediate status to the device interrupt
+ * handler.
+ */
+ ioinfo[irq]->ui.flags.fast = 0;
+ ioinfo[irq]->ui.flags.repall = 0;
+
+ ioinfo[irq]->ui.flags.delsense = 1;
+ allow4handler = 0;
+
+ } /* endif */
+
+ } /* endif */
+
+ /*
+      * we allow for the device action handler if:
+ * - we received ending status
+ * - the action handler requested to see all interrupts
+ * - we received a PCI
+ * - fast notification was requested (primary status)
+      *  - unsolicited interrupts
+ *
+ */
+ if ( allow4handler )
+ {
+ allow4handler = ending_status
+ || ( ioinfo[irq]->ui.flags.repall )
+ || ( ioinfo[irq]->devstat.ii.irb.scsw.cstat & SCHN_STAT_PCI )
+ || ( (ioinfo[irq]->ui.flags.fast ) && (stctl & SCSW_STCTL_PRIM_STATUS) )
+ || ( ioinfo[irq]->ui.flags.oper == 0 );
+
+ } /* endif */
+
+ /*
+ * We used to copy the device status information right before
+ * calling the device action handler. However, in status
+ * pending situations during do_IO() or halt_IO(), as well as
+ * enable_subchannel/disable_subchannel processing we must
+ * synchronously return the status information and must not
+ * call the device action handler.
+ *
+ */
+ if ( allow4handler )
+ {
+ /*
+ * if we were waiting for sense data we copy the sense
+ * bytes only as the original status information was
+ * saved prior to sense already.
+ */
+ if ( ioinfo[irq]->ui.flags.w4sense )
+ {
+ int sense_count = SENSE_MAX_COUNT-ioinfo[irq]->devstat.rescnt;
+
+#ifdef CONFIG_DEBUG_IO
+ if ( irq != cons_dev )
+ printk( "s390_process_IRQ( %04X ) : "
+ "BASIC SENSE bytes avail %d\n",
+ irq, sense_count );
+#endif
+ ioinfo[irq]->ui.flags.w4sense = 0;
+ ((devstat_t *)(action->dev_id))->flag |= DEVSTAT_FLAG_SENSE_AVAIL;
+ ((devstat_t *)(action->dev_id))->scnt = sense_count;
+
+ if ( sense_count >= 0 )
+ {
+ memcpy( ((devstat_t *)(action->dev_id))->ii.sense.data,
+ &(ioinfo[irq]->devstat.ii.sense.data),
+ sense_count);
+ }
+ else
+ {
+#if 1
+ panic( "s390_process_IRQ(%04x) encountered "
+ "negative sense count\n",
+ irq);
+#else
+ printk( KERN_CRIT"s390_process_IRQ(%04x) encountered "
+ "negative sense count\n",
+ irq);
+#endif
+ } /* endif */
+ }
+ else
+ {
+ memcpy( action->dev_id, &(ioinfo[irq]->devstat), sdevstat );
+
+ } /* endif */
+
+ } /* endif */
+
+ /*
+ * for status pending situations other than deferred interrupt
+ * conditions detected by s390_process_IRQ() itself we must not
+ * call the handler. This will synchronously be reported back
+ * to the caller instead, e.g. when detected during do_IO().
+ */
+ if ( ioinfo[irq]->ui.flags.s_pend
+ || ioinfo[irq]->ui.flags.unready
+ || ioinfo[irq]->ui.flags.repnone )
+ {
+ if ( ending_status )
+ {
+
+ ioinfo[irq]->ui.flags.busy = 0;
+ ioinfo[irq]->ui.flags.doio = 0;
+ ioinfo[irq]->ui.flags.haltio = 0;
+ ioinfo[irq]->ui.flags.fast = 0;
+ ioinfo[irq]->ui.flags.repall = 0;
+ ioinfo[irq]->ui.flags.w4final = 0;
+
+ ioinfo[irq]->devstat.flag |= DEVSTAT_FINAL_STATUS;
+ action->dev_id->flag |= DEVSTAT_FINAL_STATUS;
+
+ } /* endif */
+
+ allow4handler = 0;
+
+ } /* endif */
+
+ /*
+ * Call device action handler if applicable
+ */
+ if ( allow4handler )
+ {
+
+ /*
+ * We only reset the busy condition when we are sure that no further
+ * interrupt is pending for the current I/O request (ending_status).
+ */
+ if ( ending_status || !ioinfo[irq]->ui.flags.oper )
+ {
+ ioinfo[irq]->ui.flags.oper = 1; /* dev IS oper */
+
+ ioinfo[irq]->ui.flags.busy = 0;
+ ioinfo[irq]->ui.flags.doio = 0;
+ ioinfo[irq]->ui.flags.haltio = 0;
+ ioinfo[irq]->ui.flags.fast = 0;
+ ioinfo[irq]->ui.flags.repall = 0;
+ ioinfo[irq]->ui.flags.w4final = 0;
+
+ ioinfo[irq]->devstat.flag |= DEVSTAT_FINAL_STATUS;
+ ((devstat_t *)(action->dev_id))->flag |= DEVSTAT_FINAL_STATUS;
+
+ if ( ioinfo[irq]->ui.flags.newreq )
+ {
+ action->handler( irq, ioinfo[irq]->u_intparm );
+ }
+ else
+ {
+ ((io_handler_func1_t)action->handler)( irq, action->dev_id, ®s );
+
+ } /* endif */
+
+ //
+            // reset intparm after final status or we will badly present unsolicited
+            // interrupts with an intparm value possibly no longer valid.
+ //
+ ioinfo[irq]->devstat.intparm = 0;
+
+ //
+ // Was there anything queued ? Start the pending channel program
+ // if there is one.
+ //
+ if ( ioinfo[irq]->ui.flags.doio_q )
+ {
+ int ret;
+
+ ret = s390_start_IO( irq,
+ ioinfo[irq]->qcpa,
+ ioinfo[irq]->qintparm,
+ ioinfo[irq]->qlpm,
+ ioinfo[irq]->qflag);
+
+ ioinfo[irq]->ui.flags.doio_q = 0;
+
+ /*
+ * If s390_start_IO() failed call the device's interrupt
+ * handler, the IRQ related devstat area was setup by
+ * s390_start_IO() accordingly already (status pending
+ * condition).
+ */
+ if ( ret )
+ {
+ if ( ioinfo[irq]->ui.flags.newreq )
+ {
+ action->handler( irq, ioinfo[irq]->u_intparm );
+ }
+ else
+ {
+ ((io_handler_func1_t)action->handler)( irq, action->dev_id, ®s );
+
+ } /* endif */
+
+ } /* endif */
+
+ } /* endif */
+
+ }
+ else
+ {
+ ioinfo[irq]->ui.flags.w4final = 1;
+
+ if ( ioinfo[irq]->ui.flags.newreq )
+ {
+ action->handler( irq, ioinfo[irq]->u_intparm );
+ }
+ else
+ {
+ ((io_handler_func1_t)action->handler)( irq, action->dev_id, ®s );
+
+ } /* endif */
+
+ } /* endif */
+
+ } /* endif */
+
+ break;
+
+ case 3: /* device/path not operational */
+
+ ioinfo[irq]->ui.flags.busy = 0;
+ ioinfo[irq]->ui.flags.doio = 0;
+ ioinfo[irq]->ui.flags.haltio = 0;
+
+ ioinfo[irq]->devstat.cstat = 0;
+ ioinfo[irq]->devstat.dstat = 0;
+
+ if ( ioinfo[irq]->ulpm != ioinfo[irq]->opm )
+ {
+ /*
+ * either it was the only path or it was restricted ...
+ */
+ ioinfo[irq]->opm &= ~(ioinfo[irq]->devstat.ii.irb.esw.esw1.lpum);
+ }
+ else
+ {
+ ioinfo[irq]->opm = 0;
+
+ } /* endif */
+
+ if ( ioinfo[irq]->opm == 0 )
+ {
+ ioinfo[irq]->ui.flags.oper = 0;
+
+ } /* endif */
+
+ ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
+ ioinfo[irq]->devstat.flag |= DEVSTAT_FINAL_STATUS;
+
+ /*
+ * When we find a device "not oper" we save the status
+ * information into the device status area and call the
+ * device specific interrupt handler.
+ *
+ * Note: currently we don't have any way to reenable
+ * the device unless an unsolicited interrupt
+ * is presented. We don't check for spurious
+ * interrupts on "not oper" conditions.
+ */
+
+ if ( ( ioinfo[irq]->ui.flags.fast )
+ && ( ioinfo[irq]->ui.flags.w4final ) )
+ {
+ /*
+ * If a new request was queued already, we have
+ * to simulate the "not oper" status for the
+ * queued request by switching the "intparm" value
+ * and notify the interrupt handler.
+ */
+ if ( ioinfo[irq]->ui.flags.doio_q )
+ {
+ ioinfo[irq]->devstat.intparm = ioinfo[irq]->qintparm;
+
+ } /* endif */
+
+ } /* endif */
+
+ ioinfo[irq]->ui.flags.fast = 0;
+ ioinfo[irq]->ui.flags.repall = 0;
+ ioinfo[irq]->ui.flags.w4final = 0;
+
+ /*
+ * take fast exit if no handler is available
+ */
+ if ( !action )
+ return( ending_status );
+
+ memcpy( action->dev_id, &(ioinfo[irq]->devstat), sdevstat );
+
+ ioinfo[irq]->devstat.intparm = 0;
+
+ if ( !ioinfo[irq]->ui.flags.s_pend )
+ {
+ if ( ioinfo[irq]->ui.flags.newreq )
+ {
+ action->handler( irq, ioinfo[irq]->u_intparm );
+ }
+ else
+ {
+ ((io_handler_func1_t)action->handler)( irq, action->dev_id, ®s );
+
+ } /* endif */
+
+ } /* endif */
+
+ ending_status = 1;
+
+ break;
+
+ } /* endswitch */
+
+ return( ending_status );
+}
+
+/*
+ * Set the special I/O-interruption subclass 7 for the
+ * device specified by parameter irq. Only a single
+ * device can be operated on this special isc at any
+ * time. This function makes it possible to check for
+ * special device interrupts in disabled state,
+ * without having to delay I/O processing (by queueing)
+ * for non-console devices.
+ *
+ * Setting of this isc is done by set_cons_dev(), while
+ * reset_cons_dev() resets this isc and re-enables the
+ * default isc 3 for this device. wait_cons_dev() allows
+ * one to actively wait on an interrupt for this device
+ * in disabled state. When the interrupt condition is
+ * encountered, wait_cons_dev() calls do_IRQ() to have
+ * the console device driver process the interrupt.
+ */
+int set_cons_dev( int irq )
+{
+ int ccode;
+ unsigned long cr6 __attribute__ ((aligned (8)));
+ int rc = 0;
+
+ if ( cons_dev != -1 )
+ {
+ rc = -EBUSY;
+ }
+ else if ( (irq > highest_subchannel) || (irq < 0) )
+ {
+ rc = -ENODEV;
+ }
+ else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ {
+ return( -ENODEV);
+ }
+ else
+ {
+ /*
+ * modify the indicated console device to operate
+       * on special console interrupt subclass 7
+ */
+ ccode = stsch( irq, &(ioinfo[irq]->schib) );
+
+ if (ccode)
+ {
+ rc = -ENODEV;
+ ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
+ }
+ else
+ {
+ ioinfo[irq]->schib.pmcw.isc = 7;
+
+ ccode = msch( irq, &(ioinfo[irq]->schib) );
+
+ if (ccode)
+ {
+ rc = -EIO;
+ }
+ else
+ {
+ cons_dev = irq;
+
+ /*
+             * enable console I/O-interrupt subclass 7
+ */
+ asm volatile ("STCTL 6,6,%0": "=m" (cr6));
+ cr6 |= 0x01000000;
+ asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+
+ } /* endif */
+
+ } /* endif */
+
+ } /* endif */
+
+ return( rc);
+}
+
+int reset_cons_dev( int irq)
+{
+ int rc = 0;
+ int ccode;
+ long cr6 __attribute__ ((aligned (8)));
+
+ if ( cons_dev != -1 )
+ {
+ rc = -EBUSY;
+ }
+ else if ( (irq > highest_subchannel) || (irq < 0) )
+ {
+ rc = -ENODEV;
+ }
+ else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ {
+ return( -ENODEV);
+ }
+ else
+ {
+ /*
+ * reset the indicated console device to operate
+       * on default console interrupt subclass 3
+ */
+ ccode = stsch( irq, &(ioinfo[irq]->schib) );
+
+ if (ccode)
+ {
+ rc = -ENODEV;
+ ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
+ }
+ else
+ {
+
+ ioinfo[irq]->schib.pmcw.isc = 3;
+
+ ccode = msch( irq, &(ioinfo[irq]->schib) );
+
+ if (ccode)
+ {
+ rc = -EIO;
+ }
+ else
+ {
+ cons_dev = -1;
+
+ /*
+             * disable special console I/O-interrupt subclass 7
+ */
+ asm volatile ("STCTL 6,6,%0": "=m" (cr6));
+ cr6 &= 0xFEFFFFFF;
+ asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+
+ } /* endif */
+
+ } /* endif */
+
+ } /* endif */
+
+ return( rc);
+}
+
+int wait_cons_dev( int irq )
+{
+ int rc = 0;
+ long save_cr6;
+
+ if ( irq == cons_dev )
+ {
+
+ /*
+ * before entering the spinlock we may already have
+ * processed the interrupt on a different CPU ...
+ */
+ if ( ioinfo[irq]->ui.flags.busy == 1 )
+ {
+ long cr6 __attribute__ ((aligned (8)));
+
+ /*
+ * disable all, but isc 7 (console device)
+ */
+ asm volatile ("STCTL 6,6,%0": "=m" (cr6));
+ save_cr6 = cr6;
+ cr6 &= 0x01FFFFFF;
+ asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+
+ do {
+ tpi_info_t tpi_info;
+ if (tpi(&tpi_info) == 1) {
+ s390_process_IRQ( tpi_info.irq );
+ } else {
+ s390irq_spin_unlock(irq);
+ tod_wait(100);
+ s390irq_spin_lock(irq);
+ }
+ eieio();
+ } while (ioinfo[irq]->ui.flags.busy == 1);
+
+ /*
+ * restore previous isc value
+ */
+ asm volatile ("STCTL 6,6,%0": "=m" (cr6));
+ cr6 = save_cr6;
+ asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+
+ } /* endif */
+
+ }
+ else
+ {
+ rc = EINVAL;
+
+ } /* endif */
+
+
+ return(rc);
+}
+
+
+int enable_cpu_sync_isc( int irq )
+{
+ int ccode;
+ long cr6 __attribute__ ((aligned (8)));
+
+ int count = 0;
+ int rc = 0;
+
+ if ( irq <= highest_subchannel && ioinfo[irq] != INVALID_STORAGE_AREA )
+ {
+ ccode = stsch( irq, &(ioinfo[irq]->schib) );
+
+ if ( !ccode )
+ {
+ ioinfo[irq]->schib.pmcw.isc = 5;
+
+ do
+ {
+ ccode = msch( irq, &(ioinfo[irq]->schib) );
+
+ if (ccode == 0 )
+ {
+ /*
+ * enable interrupt subclass in CPU
+ */
+ asm volatile ("STCTL 6,6,%0": "=m" (cr6));
+ cr6 |= 0x04000000; // enable sync isc 5
+ cr6 &= 0xEFFFFFFF; // disable standard isc 3
+ asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+ }
+ else if (ccode == 3)
+ {
+ rc = -ENODEV; // device not-oper - very unlikely
+
+ }
+ else if (ccode == 2)
+ {
+ rc = -EBUSY; // device busy - should not happen
+
+ }
+ else if (ccode == 1)
+ {
+ //
+ // process pending status
+ //
+ ioinfo[irq]->ui.flags.s_pend = 1;
+
+ s390_process_IRQ( irq );
+
+ ioinfo[irq]->ui.flags.s_pend = 0;
+
+ count++;
+
+ } /* endif */
+
+ } while ( ccode == 1 && count < 3 );
+
+ if ( count == 3)
+ {
+ rc = -EIO;
+
+ } /* endif */
+ }
+ else
+ {
+ rc = -ENODEV; // device is not-operational
+
+ } /* endif */
+ }
+ else
+ {
+ rc = -EINVAL;
+
+ } /* endif */
+
+ return( rc);
+}
+
+int disable_cpu_sync_isc( int irq)
+{
+ int rc = 0;
+ int retry = 5;
+ int ccode;
+ long cr6 __attribute__ ((aligned (8)));
+
+ if ( irq <= highest_subchannel && ioinfo[irq] != INVALID_STORAGE_AREA )
+ {
+ ccode = stsch( irq, &(ioinfo[irq]->schib) );
+
+ ioinfo[irq]->schib.pmcw.isc = 3;
+
+ do {
+
+ ccode = msch( irq, &(ioinfo[irq]->schib) );
+
+ switch ( ccode ) {
+ case 0:
+ /*
+ * disable interrupt subclass in CPU
+ */
+ asm volatile ("STCTL 6,6,%0": "=m" (cr6));
+ cr6 &= 0xFBFFFFFF; // disable sync isc 5
+ cr6 |= 0x10000000; // enable standard isc 3
+ asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+ break;
+ case 1:
+ ioinfo[irq]->ui.flags.s_pend = 1;
+ s390_process_IRQ( irq );
+ ioinfo[irq]->ui.flags.s_pend = 0;
+ retry--;
+ rc = -EIO;
+ break;
+ case 2:
+ rc = -EBUSY;
+ break;
+ default:
+ rc = -ENODEV;
+ break;
+ } /* endswitch */
+
+ } while ( retry && (ccode ==1) );
+ }
+ else
+ {
+ rc = -EINVAL;
+
+ } /* endif */
+
+ return( rc);
+}
+
+//
+// Input :
+// devno - device number
+// ps - pointer to sense ID data area
+//
+// Output : none
+//
+void VM_virtual_device_info( unsigned int devno,
+ senseid_t *ps )
+{
+ diag210_t diag_data;
+ int ccode;
+
+ int error = 0;
+
+ diag_data.vrdcdvno = devno;
+ diag_data.vrdclen = sizeof( diag210_t);
+ ccode = diag210( (diag210_t *)virt_to_phys( &diag_data ) );
+ ps->reserved = 0xff;
+
+ switch (diag_data.vrdcvcla) {
+ case 0x80:
+
+ switch (diag_data.vrdcvtyp) {
+ case 00:
+
+ ps->cu_type = 0x3215;
+
+ break;
+
+ default:
+
+ error = 1;
+
+ break;
+
+ } /* endswitch */
+
+ break;
+
+ case 0x40:
+
+ switch (diag_data.vrdcvtyp) {
+ case 0xC0:
+
+ ps->cu_type = 0x5080;
+
+ break;
+
+ case 0x80:
+
+ ps->cu_type = 0x2250;
+
+ break;
+
+ case 0x04:
+
+ ps->cu_type = 0x3277;
+
+ break;
+
+ case 0x01:
+
+ ps->cu_type = 0x3278;
+
+ break;
+
+ default:
+
+ error = 1;
+
+ break;
+
+ } /* endswitch */
+
+ break;
+
+ case 0x20:
+
+ switch (diag_data.vrdcvtyp) {
+ case 0x84:
+
+ ps->cu_type = 0x3505;
+
+ break;
+
+ case 0x82:
+
+ ps->cu_type = 0x2540;
+
+ break;
+
+ case 0x81:
+
+ ps->cu_type = 0x2501;
+
+ break;
+
+ default:
+
+ error = 1;
+
+ break;
+
+ } /* endswitch */
+
+ break;
+
+ case 0x10:
+
+ switch (diag_data.vrdcvtyp) {
+ case 0x84:
+
+ ps->cu_type = 0x3525;
+
+ break;
+
+ case 0x82:
+
+ ps->cu_type = 0x2540;
+
+ break;
+
+ case 0x4F:
+ case 0x4E:
+ case 0x48:
+
+ ps->cu_type = 0x3820;
+
+ break;
+
+ case 0x4D:
+ case 0x49:
+ case 0x45:
+
+ ps->cu_type = 0x3800;
+
+ break;
+
+ case 0x4B:
+
+ ps->cu_type = 0x4248;
+
+ break;
+
+ case 0x4A:
+
+ ps->cu_type = 0x4245;
+
+ break;
+
+ case 0x47:
+
+ ps->cu_type = 0x3262;
+
+ break;
+
+ case 0x43:
+
+ ps->cu_type = 0x3203;
+
+ break;
+
+ case 0x42:
+
+ ps->cu_type = 0x3211;
- } /* endif */
+ break;
- /*
- * Reset status info
- *
- * It does not matter whether this is a sync. or async.
- * SENSE request, but we have to assure we don't call
- * the irq handler now, but keep the irq in busy state.
- * In sync. mode s390_process_IRQ() is called recursively,
- * while in async. mode we re-enter do_IRQ() with the
- * next interrupt.
- *
- * Note : this may be a delayed sense request !
- */
- allow4handler = 0;
+ case 0x41:
- ioinfo[irq]->ui.flags.fast = 0;
- ioinfo[irq]->ui.flags.repall = 0;
- ioinfo[irq]->ui.flags.w4final = 0;
- ioinfo[irq]->ui.flags.delsense = 0;
+ ps->cu_type = 0x1403;
- ioinfo[irq]->devstat.cstat = 0;
- ioinfo[irq]->devstat.dstat = 0;
- ioinfo[irq]->devstat.rescnt = SENSE_MAX_COUNT;
+ break;
- ioinfo[irq]->ui.flags.w4sense = 1;
-
- ret_io = s390_start_IO( irq,
- s_ccw,
- 0xE2C5D5E2, // = SENSe
- 0, // n/a
- s_flag);
- }
- else
- {
- /*
- * we received an Unit Check but we have no final
- * status yet, therefore we must delay the SENSE
- * processing. However, we must not report this
- * intermediate status to the device interrupt
- * handler.
- */
- ioinfo[irq]->ui.flags.fast = 0;
- ioinfo[irq]->ui.flags.repall = 0;
+ default:
- ioinfo[irq]->ui.flags.delsense = 1;
- allow4handler = 0;
+ error = 1;
- } /* endif */
+ break;
- } /* endif */
+ } /* endswitch */
- /*
- * we allow for the device action handler if .
- * - we received ending status
- * - the action handler requested to see all interrupts
- * - we received a PCI
- * - fast notification was requested (primary status)
- * - unsollicited interrupts
- *
- */
- if ( allow4handler )
- {
- allow4handler = ending_status
- || ( ioinfo[irq]->ui.flags.repall )
- || ( ioinfo[irq]->devstat.ii.irb.scsw.cstat & SCHN_STAT_PCI )
- || ( (ioinfo[irq]->ui.flags.fast ) && (stctl & SCSW_STCTL_PRIM_STATUS) )
- || ( ioinfo[irq]->ui.flags.oper == 0 );
+ break;
- } /* endif */
+ case 0x08:
- /*
- * We used to copy the device status information right before
- * calling the device action handler. However, in status
- * pending situations during do_IO() or halt_IO(), as well as
- * enable_subchannel/disable_subchannel processing we must
- * synchronously return the status information and must not
- * call the device action handler.
- *
- */
- if ( allow4handler )
- {
- /*
- * if we were waiting for sense data we copy the sense
- * bytes only as the original status information was
- * saved prior to sense already.
- */
- if ( ioinfo[irq]->ui.flags.w4sense )
- {
- int sense_count = SENSE_MAX_COUNT-ioinfo[irq]->devstat.rescnt;
+ switch (diag_data.vrdcvtyp) {
+ case 0x82:
-#if CONFIG_DEBUG_IO
- if ( irq != cons_dev )
- printk( "s390_process_IRQ( %04X ) : "
- "BASIC SENSE bytes avail %d\n",
- irq, sense_count );
-#endif
- ioinfo[irq]->ui.flags.w4sense = 0;
- ((devstat_t *)(action->dev_id))->flag |= DEVSTAT_FLAG_SENSE_AVAIL;
- ((devstat_t *)(action->dev_id))->scnt = sense_count;
+ ps->cu_type = 0x3422;
- if (sense_count >= 0)
- {
- memcpy( ((devstat_t *)(action->dev_id))->ii.sense.data,
- &(ioinfo[irq]->devstat.ii.sense.data),
- sense_count);
- }
- else
- {
-#if 1
- panic( "s390_process_IRQ(%04x) encountered "
- "negative sense count\n",
- irq);
-#else
- printk( KERN_CRIT"s390_process_IRQ(%04x) encountered "
- "negative sense count\n",
- irq);
-#endif
- } /* endif */
- }
- else
- {
- memcpy( action->dev_id, &(ioinfo[irq]->devstat), sdevstat );
+ break;
- } /* endif */
+ case 0x81:
- } /* endif */
+ ps->cu_type = 0x3490;
- /*
- * for status pending situations other than deferred interrupt
- * conditions detected by s390_process_IRQ() itself we must not
- * call the handler. This will synchronously be reported back
- * to the caller instead, e.g. when detected during do_IO().
- */
- if ( ioinfo[irq]->ui.flags.s_pend )
- allow4handler = 0;
+ break;
- /*
- * Call device action handler if applicable
- */
- if ( allow4handler )
- {
+ case 0x10:
- /*
- * We only reset the busy condition when we are sure that no further
- * interrupt is pending for the current I/O request (ending_status).
- */
- if ( ending_status || !ioinfo[irq]->ui.flags.oper )
- {
- ioinfo[irq]->ui.flags.oper = 1; /* dev IS oper */
+ ps->cu_type = 0x3420;
- ioinfo[irq]->ui.flags.busy = 0;
- ioinfo[irq]->ui.flags.doio = 0;
- ioinfo[irq]->ui.flags.haltio = 0;
- ioinfo[irq]->ui.flags.fast = 0;
- ioinfo[irq]->ui.flags.repall = 0;
- ioinfo[irq]->ui.flags.w4final = 0;
+ break;
- ioinfo[irq]->devstat.flag |= DEVSTAT_FINAL_STATUS;
- ((devstat_t *)(action->dev_id))->flag |= DEVSTAT_FINAL_STATUS;
+ case 0x02:
- action->handler( irq, action->dev_id, ®s);
+ ps->cu_type = 0x3430;
- //
- // reset intparm after final status or we will badly present unsolicited
- // interrupts with a intparm value possibly no longer valid.
- //
- ioinfo[irq]->devstat.intparm = 0;
+ break;
- //
- // Was there anything queued ? Start the pending channel program
- // if there is one.
- //
- if ( ioinfo[irq]->ui.flags.doio_q )
- {
- int ret;
+ case 0x01:
- ret = s390_start_IO( irq,
- ioinfo[irq]->qcpa,
- ioinfo[irq]->qintparm,
- ioinfo[irq]->qlpm,
- ioinfo[irq]->qflag);
+ ps->cu_type = 0x3480;
- ioinfo[irq]->ui.flags.doio_q = 0;
+ break;
- /*
- * If s390_start_IO() failed call the device's interrupt
- * handler, the IRQ related devstat area was setup by
- * s390_start_IO() accordingly already (status pending
- * condition).
- */
- if ( ret )
- {
- action->handler( irq, action->dev_id, ®s);
+ case 0x42:
- } /* endif */
+ ps->cu_type = 0x3424;
- } /* endif */
+ break;
- }
- else
- {
- ioinfo[irq]->ui.flags.w4final = 1;
- action->handler( irq, action->dev_id, ®s);
+ case 0x44:
- } /* endif */
+ ps->cu_type = 0x9348;
- } /* endif */
+ break;
- break;
+ default:
- case 3: /* device not operational */
+ error = 1;
- ioinfo[irq]->ui.flags.oper = 0;
+ break;
- ioinfo[irq]->ui.flags.busy = 0;
- ioinfo[irq]->ui.flags.doio = 0;
- ioinfo[irq]->ui.flags.haltio = 0;
+ } /* endswitch */
- ioinfo[irq]->devstat.cstat = 0;
- ioinfo[irq]->devstat.dstat = 0;
- ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
- ioinfo[irq]->devstat.flag |= DEVSTAT_FINAL_STATUS;
+ break;
- /*
- * When we find a device "not oper" we save the status
- * information into the device status area and call the
- * device specific interrupt handler.
- *
- * Note: currently we don't have any way to reenable
- * the device unless an unsolicited interrupt
- * is presented. We don't check for spurious
- * interrupts on "not oper" conditions.
- */
+ case 02: /* special device class ... */
- if ( ( ioinfo[irq]->ui.flags.fast )
- && ( ioinfo[irq]->ui.flags.w4final ) )
- {
- /*
- * If a new request was queued already, we have
- * to simulate the "not oper" status for the
- * queued request by switching the "intparm" value
- * and notify the interrupt handler.
- */
- if ( ioinfo[irq]->ui.flags.doio_q )
- {
- ioinfo[irq]->devstat.intparm = ioinfo[irq]->qintparm;
+ switch (diag_data.vrdcvtyp) {
+ case 0x20: /* OSA */
- } /* endif */
+ ps->cu_type = 0x3088;
+ ps->cu_model = 0x60;
- } /* endif */
+ break;
- ioinfo[irq]->ui.flags.fast = 0;
- ioinfo[irq]->ui.flags.repall = 0;
- ioinfo[irq]->ui.flags.w4final = 0;
+ default:
- memcpy( action->dev_id, &(ioinfo[irq]->devstat), sdevstat );
+ error = 1;
+ break;
- ioinfo[irq]->devstat.intparm = 0;
+ } /* endswitch */
- if ( !ioinfo[irq]->ui.flags.s_pend )
- action->handler( irq, action->dev_id, ®s);
+ break;
- ending_status = 1;
+ default:
+
+ error = 1;
break;
} /* endswitch */
- return( ending_status );
-}
-
-/*
- * Set the special i/o-interruption sublass 7 for the
- * device specified by parameter irq. There can only
- * be a single device been operated on this special
- * isc. This function is aimed being able to check
- * on special device interrupts in disabled state,
- * without having to delay I/O processing (by queueing)
- * for non-console devices.
+ if ( error )
+   {
+      printk( "DIAG X'210' for device %04X returned (cc = %d): vdev class : %02X, "
+ "vdev type : %04X \n ... rdev class : %02X, rdev type : %04X, rdev model: %02X\n",
+ devno,
+ ccode,
+ diag_data.vrdcvcla,
+ diag_data.vrdcvtyp,
+ diag_data.vrdcrccl,
+ diag_data.vrdccrty,
+ diag_data.vrdccrmd );
+
+ } /* endif */
+
+}
+
+/*
+ * This routine returns the characteristics for the device
+ * specified. Some old devices might not provide the necessary
+ * command code information during SenseID processing. In this
+ * case the function returns -EINVAL. Otherwise the function
+ * allocates a device specific data buffer and provides the
+ * device characteristics together with the buffer size. It is
+ * the caller's responsibility to release the kernel memory if
+ * no longer needed. In case of persistent I/O problems -EBUSY
+ * is returned.
*
- * Setting of this isc is done by set_cons_dev(), while
- * reset_cons_dev() resets this isc and re-enables the
- * default isc3 for this device. wait_cons_dev() allows
- * to actively wait on an interrupt for this device in
- * disabed state. When the interrupt condition is
- * encountered, wait_cons_dev(9 calls do_IRQ() to have
- * the console device driver processing the interrupt.
+ * The function may be called enabled or disabled. However, the
+ * caller must have locked the irq it is requesting data for.
+ *
+ * Note : It would have been nice to collect this information
+ * during init_IRQ() processing but this is not possible
+ *
+ *  a) without statically pre-allocating fixed-size buffers,
+ *     as virtual memory management isn't available yet.
+ *
+ *  b) without unnecessarily increasing system startup time
+ *     by evaluating devices that may never be used at all.
*/
-int set_cons_dev( int irq )
+int read_dev_chars( int irq, void **buffer, int length )
{
- int ccode;
- unsigned long cr6 __attribute__ ((aligned (8)));
- int rc = 0;
+ unsigned int flags;
+ ccw1_t *rdc_ccw;
+ devstat_t devstat;
+ char *rdc_buf;
+ int devflag;
- if ( cons_dev != -1 )
+ int ret = 0;
+ int emulated = 0;
+ int retry = 5;
+
+ if ( !buffer || !length )
{
- rc = -EBUSY;
- }
- else if ( (irq > highest_subchannel) || (irq < 0) )
+ return( -EINVAL );
+
+ } /* endif */
+
+ if ( (irq > highest_subchannel) || (irq < 0 ) )
{
- rc = -ENODEV;
+ return( -ENODEV );
+
}
else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
{
return( -ENODEV);
}
- else
- {
- /*
- * modify the indicated console device to operate
- * on special console interrupt sublass 7
- */
- ccode = stsch( irq, &(ioinfo[irq]->schib) );
- if (ccode)
- {
- rc = -ENODEV;
- ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
- }
- else
+ if ( ioinfo[irq]->ui.flags.oper == 0 )
{
- ioinfo[irq]->schib.pmcw.isc = 7;
-
- ccode = msch( irq, &(ioinfo[irq]->schib) );
+ return( -ENODEV );
- if (ccode)
- {
- rc = -EIO;
- }
- else
- {
- cons_dev = irq;
+ } /* endif */
/*
- * enable console I/O-interrupt sublass 7
+    * Before playing around with irq locks we should make sure
+    * we are running disabled on (just) our CPU. Sync. I/O
+    * requests also require us to run disabled.
+ *
+ * Note : as no global lock is required, we must not use
+ * cli(), but __cli() instead.
*/
- asm volatile ("STCTL 6,6,%0": "=m" (cr6));
- cr6 |= 0x01000000;
- asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+ __save_flags(flags);
+ __cli();
- } /* endif */
+ rdc_ccw = &ioinfo[irq]->senseccw;
- } /* endif */
+ if ( !ioinfo[irq]->ui.flags.ready )
+ {
+ ret = request_irq( irq,
+ init_IRQ_handler,
+ 0, "RDC", &devstat );
- } /* endif */
+ if ( !ret )
+ {
+ emulated = 1;
- return( rc);
-}
+ } /* endif */
-int reset_cons_dev( int irq)
-{
- int rc = 0;
- int ccode;
- long cr6 __attribute__ ((aligned (8)));
+ } /* endif */
- if ( cons_dev != -1 )
- {
- rc = -EBUSY;
- }
- else if ( (irq > highest_subchannel) || (irq < 0) )
+ if ( !ret )
{
- rc = -ENODEV;
- }
- else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ if ( ! *buffer )
{
- return( -ENODEV);
+ rdc_buf = kmalloc( length, GFP_KERNEL);
}
else
{
- /*
- * reset the indicated console device to operate
- * on default console interrupt sublass 3
- */
- ccode = stsch( irq, &(ioinfo[irq]->schib) );
+ rdc_buf = *buffer;
- if (ccode)
+ } /* endif */
+
+ if ( !rdc_buf )
{
- rc = -ENODEV;
- ioinfo[irq]->devstat.flag |= DEVSTAT_NOT_OPER;
+ ret = -ENOMEM;
}
else
{
+ do
+ {
+ rdc_ccw->cmd_code = CCW_CMD_RDC;
+ rdc_ccw->cda = (char *)virt_to_phys( rdc_buf );
+ rdc_ccw->count = length;
+ rdc_ccw->flags = CCW_FLAG_SLI;
+
+ memset( (devstat_t *)(ioinfo[irq]->irq_desc.action->dev_id),
+ '\0',
+ sizeof( devstat_t));
+
+ ret = s390_start_IO( irq,
+ rdc_ccw,
+ 0x00524443, // RDC
+ 0, // n/a
+ DOIO_WAIT_FOR_INTERRUPT
+ | DOIO_DONT_CALL_INTHDLR );
+ retry--;
+ devflag = ((devstat_t *)(ioinfo[irq]->irq_desc.action->dev_id))->flag;
- ioinfo[irq]->schib.pmcw.isc = 3;
+ } while ( ( retry )
+ && ( ret || (devflag & DEVSTAT_STATUS_PENDING) ) );
- ccode = msch( irq, &(ioinfo[irq]->schib) );
+ } /* endif */
- if (ccode)
- {
- rc = -EIO;
- }
- else
+ if ( !retry )
{
- cons_dev = -1;
+ ret = -EBUSY;
+
+ } /* endif */
+
+ __restore_flags(flags);
/*
- * disable special console I/O-interrupt sublass 7
+ * on success we update the user input parms
*/
- asm volatile ("STCTL 6,6,%0": "=m" (cr6));
- cr6 &= 0xFEFFFFFF;
- asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+ if ( !ret )
+ {
+ *buffer = rdc_buf;
} /* endif */
+ if ( emulated )
+ {
+ free_irq( irq, &devstat);
+
} /* endif */
} /* endif */
- return( rc);
+ return( ret );
}
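The new read_dev_chars() interface has a small buffer contract worth spelling out: if *buffer is NULL the routine allocates storage itself, otherwise it uses the caller's buffer, and the output parameter is only updated on success. A self-contained user-space sketch of that contract (fake_read_dev_chars and the 0xC4 fill pattern are invented stand-ins, not part of the driver):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for the read_dev_chars() buffer contract:
 * allocate when *buffer is NULL, otherwise use the caller's buffer.
 * The fixed 0xC4 fill pattern merely stands in for real RDC data. */
static int fake_read_dev_chars(void **buffer, int length)
{
    char *buf;

    if (!buffer || !length)
        return -1;                      /* -EINVAL in the driver */

    buf = *buffer ? (char *)*buffer : (char *)malloc(length);
    if (!buf)
        return -2;                      /* -ENOMEM */

    memset(buf, 0xC4, length);          /* pretend channel I/O happened */
    *buffer = buf;                      /* update caller parm on success */
    return 0;
}
```

Callers that pass NULL therefore own the returned buffer and must free it themselves, matching the note in the original read_dev_chars() description.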
-int wait_cons_dev( int irq )
-{
- int rc = 0;
- long save_cr6;
-
- if ( irq == cons_dev )
- {
-
/*
- * before entering the spinlock we may already have
- * processed the interrupt on a different CPU ...
+ * Read Configuration data
*/
- if ( ioinfo[irq]->ui.flags.busy == 1 )
+int read_conf_data( int irq, void **buffer, int *length, __u8 lpm )
{
- long cr6 __attribute__ ((aligned (8)));
+ unsigned long flags;
+ int ciw_cnt;
- /*
- * disable all, but isc 7 (console device)
- */
- asm volatile ("STCTL 6,6,%0": "=m" (cr6));
- save_cr6 = cr6;
- cr6 &= 0x01FFFFFF;
- asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+ int found = 0; // RCD CIW found
+ int ret = 0; // return code
- do {
- tpi_info_t tpi_info;
- if (tpi(&tpi_info) == 1) {
- s390_process_IRQ( tpi_info.irq );
- } else {
- s390irq_spin_unlock(irq);
- tod_wait(100);
- s390irq_spin_lock(irq);
+ if ( (irq > highest_subchannel) || (irq < 0 ) )
+ {
+ return( -ENODEV );
}
- eieio();
- } while (ioinfo[irq]->ui.flags.busy == 1);
+ else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ {
+ return( -ENODEV);
+ }
+ else if ( !buffer || !length )
+ {
+ return( -EINVAL);
+ }
+ else if ( ioinfo[irq]->ui.flags.oper == 0 )
+ {
+ return( -ENODEV );
+ }
+ else if ( ioinfo[irq]->ui.flags.esid == 0 )
+ {
+ return( -EOPNOTSUPP );
+
+ } /* endif */
/*
- * restore previous isc value
+ * scan for RCD command in extended SenseID data
*/
- asm volatile ("STCTL 6,6,%0": "=m" (cr6));
- cr6 = save_cr6;
- asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
- } /* endif */
-
- }
- else
+ for ( ciw_cnt = 0; (found == 0) && (ciw_cnt < 62); ciw_cnt++ )
{
- rc = EINVAL;
+ if ( ioinfo[irq]->senseid.ciw[ciw_cnt].ct == CIW_TYPE_RCD )
+ {
+ /*
+ * paranoia check ...
+ */
+ if ( ioinfo[irq]->senseid.ciw[ciw_cnt].cmd != 0 )
+ {
+ found = 1;
} /* endif */
+ break;
+ } /* endif */
- return(rc);
-}
-
+ } /* endfor */
-int enable_cpu_sync_isc( int irq )
+ if ( found )
{
- int ccode;
- long cr6 __attribute__ ((aligned (8)));
+ devstat_t devstat; /* inline device status area */
+ devstat_t *pdevstat;
+ int ioflags;
- int count = 0;
- int rc = 0;
+ ccw1_t *rcd_ccw = &ioinfo[irq]->senseccw;
+ char *rcd_buf = NULL;
+ int emulated = 0; /* no I/O handler installed */
+ int retry = 5; /* retry count */
- if ( irq <= highest_subchannel && ioinfo[irq] != INVALID_STORAGE_AREA )
- {
- ccode = stsch( irq, &(ioinfo[irq]->schib) );
+ __save_flags(flags);
+ __cli();
- if ( !ccode )
+ if ( !ioinfo[irq]->ui.flags.ready )
{
- ioinfo[irq]->schib.pmcw.isc = 5;
+ pdevstat = &devstat;
+ ret = request_irq( irq,
+ init_IRQ_handler,
+ 0, "RCD", pdevstat );
- do
+ if ( !ret )
{
- ccode = msch( irq, &(ioinfo[irq]->schib) );
-
- if (ccode == 0 )
- {
- /*
- * enable interrupt subclass in CPU
- */
- asm volatile ("STCTL 6,6,%0": "=m" (cr6));
- cr6 |= 0x04000000; // enable sync isc 5
- cr6 &= 0xEFFFFFFF; // disable standard isc 3
- asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
- }
- else if (ccode == 3)
- {
- rc = -ENODEV; // device not-oper - very unlikely
+ emulated = 1;
+ } /* endif */
}
- else if (ccode == 2)
+ else
{
- rc = -EBUSY; // device busy - should not happen
+ pdevstat = ioinfo[irq]->irq_desc.action->dev_id;
- }
- else if (ccode == 1)
- {
- //
- // process pending status
- //
- ioinfo[irq]->ui.flags.s_pend = 1;
+ } /* endif */
- s390_process_IRQ( irq );
+ if ( !ret )
+ {
+ if ( init_IRQ_complete )
+ {
+ rcd_buf = kmalloc( ioinfo[irq]->senseid.ciw[ciw_cnt].count,
+ GFP_KERNEL);
+ }
+ else
+ {
+ rcd_buf = alloc_bootmem( ioinfo[irq]->senseid.ciw[ciw_cnt].count);
- ioinfo[irq]->ui.flags.s_pend = 0;
+ } /* endif */
- count++;
+ if ( rcd_buf == NULL )
+ {
+ ret = -ENOMEM;
} /* endif */
- } while ( ccode == 1 && count < 3 );
+ if ( !ret )
+ {
+ memset( rcd_buf,
+ '\0',
+ ioinfo[irq]->senseid.ciw[ciw_cnt].count);
- if ( count == 3)
+ do
{
- rc = -EIO;
+ rcd_ccw->cmd_code = ioinfo[irq]->senseid.ciw[ciw_cnt].cmd;
+ rcd_ccw->cda = (char *)virt_to_phys( rcd_buf );
+ rcd_ccw->count = ioinfo[irq]->senseid.ciw[ciw_cnt].count;
+ rcd_ccw->flags = CCW_FLAG_SLI;
- } /* endif */
+ memset( pdevstat, '\0', sizeof( devstat_t));
+
+ if ( lpm )
+ {
+ ioflags = DOIO_WAIT_FOR_INTERRUPT
+ | DOIO_VALID_LPM
+ | DOIO_DONT_CALL_INTHDLR;
}
else
{
- rc = -ENODEV; // device is not-operational
+ ioflags = DOIO_WAIT_FOR_INTERRUPT
+ | DOIO_DONT_CALL_INTHDLR;
} /* endif */
+
+ ret = s390_start_IO( irq,
+ rcd_ccw,
+ 0x00524344, // == RCD
+ lpm,
+ ioflags );
+
+ switch ( ret ) {
+ case 0 :
+ case -EIO :
+
+ if ( !(pdevstat->flag & ( DEVSTAT_STATUS_PENDING
+ | DEVSTAT_NOT_OPER
+ | DEVSTAT_FLAG_SENSE_AVAIL ) ) )
+ {
+ retry = 0; // we got it ...
}
else
{
- rc = -EINVAL;
+ retry--; // try again ...
} /* endif */
- return( rc);
-}
+ break;
-int disable_cpu_sync_isc( int irq)
-{
- int rc = 0;
- int ccode;
- long cr6 __attribute__ ((aligned (8)));
+ default : // -EBUSY, -ENODEV, ???
+ retry = 0;
- if ( irq <= highest_subchannel && ioinfo[irq] != INVALID_STORAGE_AREA )
- {
- ccode = stsch( irq, &(ioinfo[irq]->schib) );
+ } /* endswitch */
- ioinfo[irq]->schib.pmcw.isc = 3;
+ } while ( retry );
+
+ } /* endif */
- ccode = msch( irq, &(ioinfo[irq]->schib) );
+ __restore_flags( flags );
- if (ccode)
+ } /* endif */
+
+ /*
+ * on success we update the user input parms
+ */
+ if ( ret == 0 )
+ {
+ *length = ioinfo[irq]->senseid.ciw[ciw_cnt].count;
+ *buffer = rcd_buf;
+ }
+ else
{
- rc = -EIO;
+ if ( rcd_buf != NULL )
+ {
+ if ( init_IRQ_complete )
+ {
+ kfree( rcd_buf );
}
else
{
+ free_bootmem( (unsigned long)rcd_buf,
+ ioinfo[irq]->senseid.ciw[ciw_cnt].count);
- /*
- * enable interrupt subclass in CPU
- */
- asm volatile ("STCTL 6,6,%0": "=m" (cr6));
- cr6 &= 0xFBFFFFFF; // disable sync isc 5
- cr6 |= 0x10000000; // enable standard isc 3
- asm volatile ("LCTL 6,6,%0":: "m" (cr6):"memory");
+ } /* endif */
+
+ } /* endif */
+
+ *buffer = NULL;
+ *length = 0;
} /* endif */
+ if ( emulated )
+ free_irq( irq, pdevstat);
}
else
{
- rc = -EINVAL;
+ ret = -EOPNOTSUPP;
} /* endif */
- return( rc);
+ return( ret );
+
}
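The CIW scan in read_conf_data() walks the extended SenseID data looking for an RCD entry, breaking at the first RCD-typed CIW and accepting it only when its command code is non-zero (the "paranoia check"). A standalone sketch of that scan, with a simplified fake_ciw struct and an invented CIW_RCD type value standing in for the real senseid layout and CIW_TYPE_RCD:

```c
#define CIW_RCD 2  /* hypothetical type value standing in for CIW_TYPE_RCD */

struct fake_ciw {
    unsigned char ct;   /* command information word type */
    unsigned char cmd;  /* command code, 0 == unusable */
};

/* Returns the index of the first RCD CIW with a usable command code,
 * or -1 when none is found.  Like read_conf_data(), it stops at the
 * first RCD entry even when its command code turns out to be zero. */
static int find_rcd_ciw(const struct fake_ciw *ciw, int n)
{
    for (int i = 0; i < n; i++) {
        if (ciw[i].ct == CIW_RCD)
            return (ciw[i].cmd != 0) ? i : -1;  /* break either way */
    }
    return -1;
}
```

A zero command code in an RCD CIW thus downgrades the whole lookup to "not found", which the caller maps to -EOPNOTSUPP.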
-//
-// Input :
-// devno - device number
-// ps - pointer to sense ID data area
-//
-// Output : none
-//
-void VM_virtual_device_info( unsigned int devno,
- senseid_t *ps )
+int get_dev_info( int irq, dev_info_t * pdi)
{
- diag210_t diag_data;
- int ccode;
-
- int error = 0;
-
- diag_data.vrdcdvno = devno;
- diag_data.vrdclen = sizeof( diag210_t);
- ccode = diag210( (diag210_t *)virt_to_phys( &diag_data ) );
- ps->reserved = 0xff;
-
- switch (diag_data.vrdcvcla) {
- case 0x80:
+ return( get_dev_info_by_irq( irq, pdi));
+}
- switch (diag_data.vrdcvtyp) {
- case 00:
- ps->cu_type = 0x3215;
+static int __inline__ get_next_available_irq( ioinfo_t *pi)
+{
+ int ret_val;
+ while ( TRUE )
+ {
+ if ( pi->ui.flags.oper )
+ {
+ ret_val = pi->irq;
break;
+ }
+ else
+ {
+ pi = pi->next;
- default:
-
- error = 1;
-
+ //
+ // leave at end of list unconditionally
+ //
+ if ( pi == NULL )
+ {
+ ret_val = -ENODEV;
break;
+ }
- } /* endswitch */
+ } /* endif */
- break;
+ } /* endwhile */
- case 0x40:
+ return ret_val;
+}
- switch (diag_data.vrdcvtyp) {
- case 0xC0:
- ps->cu_type = 0x5080;
+int get_irq_first( void )
+{
+ int ret_irq;
- break;
+ if ( ioinfo_head )
+ {
+ if ( ioinfo_head->ui.flags.oper )
+ {
+ ret_irq = ioinfo_head->irq;
+ }
+ else if ( ioinfo_head->next )
+ {
+ ret_irq = get_next_available_irq( ioinfo_head->next );
- case 0x80:
+ }
+ else
+ {
+ ret_irq = -ENODEV;
- ps->cu_type = 0x2250;
+ } /* endif */
+ }
+ else
+ {
+ ret_irq = -ENODEV;
- break;
+ } /* endif */
- case 0x04:
+ return ret_irq;
+}
- ps->cu_type = 0x3277;
+int get_irq_next( int irq )
+{
+ int ret_irq;
- break;
+ if ( ioinfo[irq] != INVALID_STORAGE_AREA )
+ {
+ if ( ioinfo[irq]->next )
+ {
+ if ( ioinfo[irq]->next->ui.flags.oper )
+ {
+ ret_irq = ioinfo[irq]->next->irq;
+ }
+ else
+ {
+ ret_irq = get_next_available_irq( ioinfo[irq]->next );
- case 0x01:
+ } /* endif */
+ }
+ else
+ {
+ ret_irq = -ENODEV;
- ps->cu_type = 0x3278;
+ } /* endif */
+ }
+ else
+ {
+ ret_irq = -EINVAL;
- break;
+ } /* endif */
- default:
+ return ret_irq;
+}
- error = 1;
+int get_dev_info_by_irq( int irq, dev_info_t *pdi)
+{
- break;
+ if ( irq > highest_subchannel || irq < 0 )
+ {
+ return -ENODEV;
+ }
+ else if ( pdi == NULL )
+ {
+ return -EINVAL;
+ }
+ else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ {
+ return( -ENODEV);
+ }
+ else
+ {
+ pdi->devno = ioinfo[irq]->schib.pmcw.dev;
+ pdi->irq = irq;
- } /* endswitch */
+ if ( ioinfo[irq]->ui.flags.oper )
+ {
+ pdi->status = 0;
+ memcpy( &(pdi->sid_data),
+ &ioinfo[irq]->senseid,
+ sizeof( senseid_t));
+ }
+ else
+ {
+ pdi->status = DEVSTAT_NOT_OPER;
+ memset( &(pdi->sid_data),
+ '\0',
+ sizeof( senseid_t));
+ pdi->sid_data.cu_type = 0xFFFF;
- break;
+ } /* endif */
- case 0x20:
+ if ( ioinfo[irq]->ui.flags.ready )
+ pdi->status |= DEVSTAT_DEVICE_OWNED;
- switch (diag_data.vrdcvtyp) {
- case 0x84:
+ return 0;
- ps->cu_type = 0x3505;
+ } /* endif */
- break;
+}
- case 0x82:
- ps->cu_type = 0x2540;
+int get_dev_info_by_devno( unsigned int devno, dev_info_t *pdi)
+{
+ int i;
+ int rc = -ENODEV;
- break;
+ if ( devno > 0x0000ffff )
+ {
+ return -ENODEV;
+ }
+ else if ( pdi == NULL )
+ {
+ return -EINVAL;
+ }
+ else
+ {
- case 0x81:
+ for ( i=0; i <= highest_subchannel; i++ )
+ {
- ps->cu_type = 0x2501;
+ if ( ioinfo[i] != INVALID_STORAGE_AREA
+ && ioinfo[i]->schib.pmcw.dev == devno )
+ {
+ if ( ioinfo[i]->ui.flags.oper )
+ {
+ pdi->status = 0;
+ pdi->irq = i;
+ pdi->devno = devno;
- break;
+ memcpy( &(pdi->sid_data),
+ &ioinfo[i]->senseid,
+ sizeof( senseid_t));
+ }
+ else
+ {
+ pdi->status = DEVSTAT_NOT_OPER;
+ pdi->irq = i;
+ pdi->devno = devno;
- default:
+ memset( &(pdi->sid_data), '\0', sizeof( senseid_t));
+ pdi->sid_data.cu_type = 0xFFFF;
- error = 1;
+ } /* endif */
+
+ if ( ioinfo[i]->ui.flags.ready )
+ pdi->status |= DEVSTAT_DEVICE_OWNED;
+ rc = 0; /* found */
break;
- } /* endswitch */
+ } /* endif */
- break;
+ } /* endfor */
- case 0x10:
+ return( rc);
+
+ } /* endif */
- switch (diag_data.vrdcvtyp) {
- case 0x84:
+}
- ps->cu_type = 0x3525;
+int get_irq_by_devno( unsigned int devno )
+{
+ int i;
+ int rc = -1;
- break;
+ if ( devno <= 0x0000ffff )
+ {
+ for ( i=0; i <= highest_subchannel; i++ )
+ {
+ if ( (ioinfo[i] != INVALID_STORAGE_AREA )
+ && (ioinfo[i]->schib.pmcw.dev == devno)
+ && (ioinfo[i]->schib.pmcw.dnv == 1 ) )
+ {
+ rc = i;
+ break;
- case 0x82:
+ } /* endif */
- ps->cu_type = 0x2540;
+ } /* endfor */
- break;
+ } /* endif */
- case 0x4F:
- case 0x4E:
- case 0x48:
+ return( rc);
+}
- ps->cu_type = 0x3820;
+unsigned int get_devno_by_irq( int irq )
+{
- break;
+ if ( ( irq > highest_subchannel )
+ || ( irq < 0 )
+ || ( ioinfo[irq] == INVALID_STORAGE_AREA ) )
+ {
+ return -1;
- case 0x4D:
- case 0x49:
- case 0x45:
+ } /* endif */
- ps->cu_type = 0x3800;
+ /*
+ * we don't need to check whether the device is operational,
+ * as the initial STSCH will always present the device
+ * number defined by the IOCDS regardless of whether the
+ * device exists or not. However, there could be subchannels
+ * defined whose device number isn't valid ...
+ */
+ if ( ioinfo[irq]->schib.pmcw.dnv )
+ return( ioinfo[irq]->schib.pmcw.dev );
+ else
+ return -1;
+}
- break;
+/*
+ * s390_device_recognition_irq
+ *
+ * Used for individual device recognition. Issues the device
+ * independent SenseID command to obtain information about the
+ * device type.
+ *
+ */
+void s390_device_recognition_irq( int irq )
+{
+ int ret;
+ unsigned long psw_flags;
- case 0x4B:
+ /*
+ * We issue the SenseID command on I/O subchannels we think are
+ * operational only.
+ */
+ if ( ( ioinfo[irq] != INVALID_STORAGE_AREA )
+ && ( ioinfo[irq]->schib.pmcw.st == 0 )
+ && ( ioinfo[irq]->ui.flags.oper == 1 ) )
+ {
+ int irq_ret;
+ devstat_t devstat;
- ps->cu_type = 0x4248;
+ irq_ret = request_irq( irq,
+ init_IRQ_handler,
+ 0,
+ "INIT",
+ &devstat);
- break;
+ if ( !irq_ret )
+ {
+ /*
+ * avoid sync processing (STSCH/MSCH) for every
+ * single I/O during boot (IPL) processing.
+ */
+ spin_lock_irqsave( &sync_isc, psw_flags);
- case 0x4A:
+ ret = enable_cpu_sync_isc( irq);
- ps->cu_type = 0x4245;
+ if ( ret )
+ {
+ spin_unlock_irqrestore( &sync_isc, psw_flags);
+ }
+ else
+ {
+ ioinfo[irq]->ui.flags.syncio = 1; // global
- break;
+ memset( &ioinfo[irq]->senseid, '\0', sizeof( senseid_t));
- case 0x47:
+ s390_SenseID( irq, &ioinfo[irq]->senseid, 0xff );
+#if 0 /* FIXME */
+ /*
+ * We initially check the configuration data for
+ * those devices with more than a single path
+ */
+ if ( ioinfo[irq]->schib.pmcw.pim != 0x80 )
+ {
+ char *prcd;
+ int lrcd;
- ps->cu_type = 0x3262;
+ ret = read_conf_data( irq, (void **)&prcd, &lrcd, 0 );
- break;
+ if ( !ret ) // on success only ...
+ {
+#ifdef CONFIG_DEBUG_IO
+ char buffer[80];
+
+ sprintf( buffer,
+ "RCD for device(%04X)/"
+ "subchannel(%04X) returns :\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq );
+
+ s390_displayhex( buffer, prcd, lrcd );
+#endif
+ if ( init_IRQ_complete )
+ {
+ kfree( prcd );
+ }
+ else
+ {
+ free_bootmem( (unsigned long)prcd, lrcd );
- case 0x43:
+ } /* endif */
- ps->cu_type = 0x3203;
+ } /* endif */
- break;
+ } /* endif */
+#endif
+ s390_DevicePathVerification( irq, 0 );
- case 0x42:
+ disable_cpu_sync_isc( irq );
- ps->cu_type = 0x3211;
+ ioinfo[irq]->ui.flags.syncio = 0; // global
- break;
+ spin_unlock_irqrestore( &sync_isc, psw_flags);
- case 0x41:
+ } /* endif */
- ps->cu_type = 0x1403;
+ free_irq( irq, &devstat );
- break;
+ } /* endif */
- default:
+ } /* endif */
- error = 1;
+}
- break;
+/*
+ * s390_device_recognition_all
+ *
+ * Used for system wide device recognition.
+ *
+ */
+void s390_device_recognition_all( void)
+{
+ int irq = 0; /* let's start with subchannel 0 ... */
- } /* endswitch */
+ do
+ {
+ s390_device_recognition_irq( irq );
- break;
+ irq ++;
- case 0x08:
+ } while ( irq <= highest_subchannel );
- switch (diag_data.vrdcvtyp) {
- case 0x82:
+}
- ps->cu_type = 0x3422;
- break;
+/*
+ * s390_process_subchannels
+ *
+ * Determines all subchannels available to the system.
+ *
+ */
+void s390_process_subchannels( void)
+{
+ int ret;
+ int irq = 0; /* Evaluate all subchannels starting with 0 ... */
- case 0x81:
+ do
+ {
+ ret = s390_validate_subchannel( irq, 0);
- ps->cu_type = 0x3490;
+ irq++;
- break;
+ } while ( (ret != -ENXIO) && (irq < __MAX_SUBCHANNELS) );
- case 0x10:
+ highest_subchannel = --irq;
- ps->cu_type = 0x3420;
+ printk( "\nHighest subchannel number detected: %u\n",
+ highest_subchannel);
+}
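The probing loop in s390_process_subchannels() scans subchannel numbers upward until the first not-operational one (-ENXIO), then records the last number reached. As a hedged, self-contained illustration of the same shape — fake_validate, FAKE_ENXIO, MAX_SUBCH and the cut-off of 42 are all invented for the example:

```c
#define MAX_SUBCH  65536
#define FAKE_ENXIO (-6)

/* Invented stand-in: subchannels 0..41 "exist", everything above
 * reports not-existing, mirroring the first ccode==3 cut-off. */
static int fake_validate(int irq)
{
    return (irq < 42) ? 0 : FAKE_ENXIO;
}

/* Same loop shape as s390_process_subchannels(): probe until the
 * first FAKE_ENXIO, then step back to the last irq probed. */
static int probe_highest_subchannel(void)
{
    int ret, irq = 0;

    do {
        ret = fake_validate(irq);
        irq++;
    } while (ret != FAKE_ENXIO && irq < MAX_SUBCH);

    return --irq;
}
```

Note that, as in the original, the recorded value is the subchannel on which the scan stopped, one past the last operational one.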
- break;
+/*
+ * s390_validate_subchannel()
+ *
+ * Process the subchannel for the requested irq. Returns 0 for valid
+ * subchannels, otherwise a negative error code.
+ */
+int s390_validate_subchannel( int irq, int enable )
+{
- case 0x02:
+ int retry; /* retry count for status pending conditions */
+ int ccode; /* condition code for stsch() only */
+ int ccode2; /* condition code for other I/O routines */
+ schib_t *p_schib;
+ int ret;
- ps->cu_type = 0x3430;
+ /*
+ * The first subchannel that is not-operational (ccode==3)
+ * indicates that there aren't any more devices available.
+ */
+ if ( ( init_IRQ_complete )
+ && ( ioinfo[irq] != INVALID_STORAGE_AREA ) )
+ {
+ p_schib = &ioinfo[irq]->schib;
+ }
+ else
+ {
+ p_schib = &init_schib;
- break;
+ } /* endif */
- case 0x01:
+ /*
+ * If we knew the device before, we assume the worst case ...
+ */
+ if ( ioinfo[irq] != INVALID_STORAGE_AREA )
+ {
+ ioinfo[irq]->ui.flags.oper = 0;
+ ioinfo[irq]->ui.flags.dval = 0;
- ps->cu_type = 0x3480;
+ } /* endif */
- break;
+ ccode = stsch( irq, p_schib);
- case 0x42:
+ if ( !ccode )
+ {
+ /*
+ * ... just being curious, we check for non-I/O subchannels
+ */
+ if ( p_schib->pmcw.st )
+ {
+ printk( "Subchannel %04X reports "
+ "non-I/O subchannel type %04X\n",
+ irq,
+ p_schib->pmcw.st);
- ps->cu_type = 0x3424;
+ if ( ioinfo[irq] != INVALID_STORAGE_AREA )
+ ioinfo[irq]->ui.flags.oper = 0;
- break;
+ } /* endif */
- case 0x44:
+ if ( p_schib->pmcw.dnv )
+ {
+ if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ {
- ps->cu_type = 0x9348;
+ if ( !init_IRQ_complete )
+ {
+ ioinfo[irq] =
+ (ioinfo_t *)alloc_bootmem( sizeof(ioinfo_t));
+ }
+ else
+ {
+ ioinfo[irq] =
+ (ioinfo_t *)kmalloc( sizeof(ioinfo_t),
+ GFP_KERNEL );
- break;
+ } /* endif */
- default:
+ memset( ioinfo[irq], '\0', sizeof( ioinfo_t));
+ memcpy( &ioinfo[irq]->schib,
+ &init_schib,
+ sizeof( schib_t));
+ ioinfo[irq]->irq_desc.status = IRQ_DISABLED;
+ ioinfo[irq]->irq_desc.handler = &no_irq_type;
- error = 1;
+ /*
+ * We have to insert the new ioinfo element
+ * into the linked list, either at its head,
+ * at its tail, or somewhere in between.
+ */
+ if ( ioinfo_head == NULL ) /* first element */
+ {
+ ioinfo_head = ioinfo[irq];
+ ioinfo_tail = ioinfo[irq];
+ }
+ else if ( irq < ioinfo_head->irq ) /* new head */
+ {
+ ioinfo[irq]->next = ioinfo_head;
+ ioinfo_head->prev = ioinfo[irq];
+ ioinfo_head = ioinfo[irq];
+ }
+ else if ( irq > ioinfo_tail->irq ) /* new tail */
+ {
+ ioinfo_tail->next = ioinfo[irq];
+ ioinfo[irq]->prev = ioinfo_tail;
+ ioinfo_tail = ioinfo[irq];
+ }
+ else /* insert element */
+ {
+ ioinfo_t *pi = ioinfo_head;
+ do
+ {
+ if ( irq < pi->next->irq )
+ {
+ ioinfo[irq]->next = pi->next;
+ ioinfo[irq]->prev = pi;
+ pi->next->prev = ioinfo[irq];
+ pi->next = ioinfo[irq];
break;
- } /* endswitch */
-
- break;
+ } /* endif */
- default:
+ pi = pi->next;
- error = 1;
+ } while ( 1 );
- break;
+ } /* endif */
- } /* endswitch */
+ } /* endif */
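The head/tail/insert logic above maintains the ioinfo chain as a doubly linked list sorted by irq. The same three cases can be sketched in isolation (struct node, list_head and list_tail are names invented for the example):

```c
#include <stdlib.h>

/* Minimal stand-in for the ioinfo list: nodes kept sorted by irq. */
struct node {
    int irq;
    struct node *next, *prev;
};

static struct node *list_head, *list_tail;

/* Mirrors the three cases in s390_validate_subchannel(): new head,
 * new tail, or insertion between two existing elements. */
static void insert_sorted(struct node *n)
{
    if (list_head == NULL) {                 /* first element */
        list_head = list_tail = n;
    } else if (n->irq < list_head->irq) {    /* new head */
        n->next = list_head;
        list_head->prev = n;
        list_head = n;
    } else if (n->irq > list_tail->irq) {    /* new tail */
        list_tail->next = n;
        n->prev = list_tail;
        list_tail = n;
    } else {                                 /* insert in between */
        struct node *p = list_head;
        while (n->irq >= p->next->irq)
            p = p->next;
        n->next = p->next;
        n->prev = p;
        p->next->prev = n;
        p->next = n;
    }
}
```

The middle-insertion loop is guaranteed to terminate because a node that is neither a new head nor a new tail must fit before some existing successor.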
- if ( error )
- {printk( "DIAG X'210' for device %04X returned (cc = %d): vdev class : %02X, "
- "vdev type : %04X \n ... rdev class : %02X, rdev type : %04X, rdev model: %02X\n",
- devno,
- ccode,
- diag_data.vrdcvcla,
- diag_data.vrdcvtyp,
- diag_data.vrdcrccl,
- diag_data.vrdccrty,
- diag_data.vrdccrmd );
+ // initialize some values ...
+ ioinfo[irq]->ui.flags.pgid_supp = 1;
- } /* endif */
+ ioinfo[irq]->opm = ioinfo[irq]->schib.pmcw.pim
+ & ioinfo[irq]->schib.pmcw.pam
+ & ioinfo[irq]->schib.pmcw.pom;
-}
+ printk( "Detected device %04X on subchannel %04X"
+ " - PIM = %02X, PAM = %02X, POM = %02X\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq,
+ ioinfo[irq]->schib.pmcw.pim,
+ ioinfo[irq]->schib.pmcw.pam,
+ ioinfo[irq]->schib.pmcw.pom);
/*
- * This routine returns the characteristics for the device
- * specified. Some old devices might not provide the necessary
- * command code information during SenseID processing. In this
- * case the function returns -EINVAL. Otherwise the function
- * allocates a decice specific data buffer and provides the
- * device characteristics together with the buffer size. Its
- * the callers responability to release the kernel memory if
- * not longer needed. In case of persistent I/O problems -EBUSY
- * is returned.
- *
- * The function may be called enabled or disabled. However, the
- * caller must have locked the irq it is requesting data for.
- *
- * Note : It would have been nice to collect this information
- * during init_IRQ() processing but this is not possible
- *
- * a) without statically pre-allocation fixed size buffers
- * as virtual memory management isn't available yet.
- *
- * b) without unnecessarily increase system startup by
- * evaluating devices eventually not used at all.
+ * initialize ioinfo structure
+ */
+ ioinfo[irq]->irq = irq;
+ ioinfo[irq]->nopfunc = NULL;
+ ioinfo[irq]->ui.flags.busy = 0;
+ ioinfo[irq]->ui.flags.ready = 0;
+ ioinfo[irq]->ui.flags.dval = 1;
+ ioinfo[irq]->devstat.intparm = 0;
+ ioinfo[irq]->devstat.devno = ioinfo[irq]->schib.pmcw.dev;
+ ioinfo[irq]->devno = ioinfo[irq]->schib.pmcw.dev;
+
+ /*
+ * We should have at least one CHPID ...
*/
-int read_dev_chars( int irq, void **buffer, int length )
+ if ( ioinfo[irq]->opm )
{
- unsigned int flags;
- ccw1_t *rdc_ccw;
- devstat_t devstat;
- char *rdc_buf;
- int devflag;
-
- int ret = 0;
- int emulated = 0;
- int retry = 5;
-
- if ( !buffer || !length )
+ /*
+ * We now have to initially ...
+ * ... set "interruption subclass"
+ * ... enable "concurrent sense"
+ * ... enable "multipath mode" if more than one
+ * CHPID is available. This is done regardless of
+ * whether multiple paths are available for us.
+ *
+ * Note : we don't enable the device here, this is temporarily
+ * done during device sensing below.
+ */
+ ioinfo[irq]->schib.pmcw.isc = 3; /* could be something else */
+ ioinfo[irq]->schib.pmcw.csense = 1; /* concurrent sense */
+ ioinfo[irq]->schib.pmcw.ena = enable;
+ ioinfo[irq]->schib.pmcw.intparm =
+ ioinfo[irq]->schib.pmcw.dev;
+
+ if ( ( ioinfo[irq]->opm != 0x80 )
+ && ( ioinfo[irq]->opm != 0x40 )
+ && ( ioinfo[irq]->opm != 0x20 )
+ && ( ioinfo[irq]->opm != 0x10 )
+ && ( ioinfo[irq]->opm != 0x08 )
+ && ( ioinfo[irq]->opm != 0x04 )
+ && ( ioinfo[irq]->opm != 0x02 )
+ && ( ioinfo[irq]->opm != 0x01 ) )
{
- return( -EINVAL );
+ ioinfo[irq]->schib.pmcw.mp = 1; /* multipath mode */
} /* endif */
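The chain of eight != comparisons above simply tests whether opm has more than one bit set, i.e. more than one channel path available. An equivalent compact check — shown only as an illustration of what the chain computes, not as a suggested change to the patch — is the classic `x & (x - 1)` idiom:

```c
/* Returns 1 when more than one path mask bit is set, matching the
 * eight explicit != comparisons against 0x80 ... 0x01 above. */
static int more_than_one_path(unsigned char opm)
{
    /* opm & (opm - 1) clears the lowest set bit; anything left over
     * means at least two bits were set.  opm == 0 means no path at
     * all, which the surrounding code rejects before reaching here. */
    return opm != 0 && (opm & (opm - 1)) != 0;
}
```

With exactly one path bit set the subchannel is left in single-path mode; with two or more, multipath mode (pmcw.mp) is enabled.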
- if ( (irq > highest_subchannel) || (irq < 0 ) )
- {
- return( -ENODEV );
+ retry = 5;
- }
- else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ do
{
- return( -ENODEV);
- }
-
- if ( ioinfo[irq]->ui.flags.oper == 0 )
- {
- return( -ENODEV );
+ ccode2 = msch_err( irq, &ioinfo[irq]->schib);
+
+ switch (ccode2) {
+ case 0: // successful completion
+ //
+ // concurrent sense facility available ...
+ //
+ ioinfo[irq]->ui.flags.oper = 1;
+ ioinfo[irq]->ui.flags.consns = 1;
+ ret = 0;
+ break;
- } /* endif */
+ case 1: // status pending
+ //
+ // How can we have a pending status when
+ // the device is disabled for interrupts ?
+ // Anyway, process it ...
+ //
+ ioinfo[irq]->ui.flags.s_pend = 1;
+ s390_process_IRQ( irq);
+ ioinfo[irq]->ui.flags.s_pend = 0;
+ retry--;
+ ret = -EIO;
+ break;
+ case 2: // busy
/*
- * Before playing around with irq locks we should assure
- * running disabled on (just) our CPU. Sync. I/O requests
- * also require to run disabled.
- *
- * Note : as no global lock is required, we must not use
- * cli(), but __cli() instead.
+ * we mark it not-oper as we can't
+ * properly operate it !
*/
- __save_flags(flags);
- __cli();
+ ioinfo[irq]->ui.flags.oper = 0;
+ tod_wait( 100); /* allow for recovery */
+ retry--;
+ ret = -EBUSY;
+ break;
- rdc_ccw = &ioinfo[irq]->senseccw;
+ case 3: // not operational
+ ioinfo[irq]->ui.flags.oper = 0;
+ retry = 0;
+ ret = -ENODEV;
+ break;
- if ( !ioinfo[irq]->ui.flags.ready )
- {
- ret = request_irq( irq,
- init_IRQ_handler,
- 0, "RDC", &devstat );
+ default:
+#define PGMCHK_OPERAND_EXC 0x15
- if ( !ret )
+ if ( (ccode2 & PGMCHK_OPERAND_EXC) == PGMCHK_OPERAND_EXC )
{
- emulated = 1;
-
- } /* endif */
+ /*
+ * re-issue the modify subchannel without trying to
+ * enable the concurrent sense facility
+ */
+ ioinfo[irq]->schib.pmcw.csense = 0;
- } /* endif */
+ ccode2 = msch_err( irq, &ioinfo[irq]->schib);
- if ( !ret )
- {
- if ( ! *buffer )
+ if ( ccode2 != 0 )
{
- rdc_buf = kmalloc( length, GFP_KERNEL);
+ printk( " ... msch() (2) failed with CC = %X\n",
+ ccode2 );
+ ioinfo[irq]->ui.flags.oper = 0;
+ ret = -EIO;
}
else
{
- rdc_buf = *buffer;
+ ioinfo[irq]->ui.flags.oper = 1;
+ ioinfo[irq]->ui.flags.consns = 0;
+ ret = 0;
} /* endif */
-
- if ( !rdc_buf )
- {
- ret = -ENOMEM;
}
else
{
- do
- {
- rdc_ccw->cmd_code = CCW_CMD_RDC;
- rdc_ccw->cda = (char *)virt_to_phys( rdc_buf );
- rdc_ccw->count = length;
- rdc_ccw->flags = CCW_FLAG_SLI;
-
- ret = s390_start_IO( irq,
- rdc_ccw,
- 0x00524443, // RDC
- 0, // n/a
- DOIO_WAIT_FOR_INTERRUPT );
- retry--;
- devflag = ((devstat_t *)(ioinfo[irq]->irq_desc.action->dev_id))->flag;
-
- } while ( ( retry )
- && ( ret || (devflag & DEVSTAT_STATUS_PENDING) ) );
+ printk( " ... msch() (1) failed with CC = %X\n",
+ ccode2);
+ ioinfo[irq]->ui.flags.oper = 0;
+ ret = -EIO;
} /* endif */
- if ( !retry )
- {
- ret = -EBUSY;
+ retry = 0;
+ break;
- } /* endif */
+ } /* endswitch */
- __restore_flags(flags);
+ } while ( ccode2 && retry );
- /*
- * on success we update the user input parms
- */
- if ( !ret )
+ if ( (ccode2 != 0) && (ccode2 != 3) && (!retry) )
{
- *buffer = rdc_buf;
+ printk( " ... msch() retry count for "
+ "subchannel %04X exceeded, CC = %d\n",
+ irq,
+ ccode2);
} /* endif */
+ }
+ else
+ {
+ /* no path available ... */
+ ioinfo[irq]->ui.flags.oper = 0;
+ ret = -ENODEV;
- if ( emulated )
+ } /* endif */
+ }
+ else
{
- free_irq( irq, &devstat);
+ ret = -ENODEV;
} /* endif */
+ }
+ else
+ {
+
+ ret = -ENXIO;
} /* endif */
}
/*
- * Read Configuration data
+ * s390_SenseID
+ *
+ * Try to obtain the 'control unit'/'device type' information
+ * associated with the subchannel.
+ *
+ * The function is primarily meant to be called without irq
+ * action handler in place. However, it also allows for
+ * use with an action handler in place. If there is already
+ * an action handler registered assure it can handle the
+ * s390_SenseID() related device interrupts - interruption
+ * parameter used is 0x00E2C9C4 ( SID ).
*/
-int read_conf_data( int irq, void **buffer, int *length )
+int s390_SenseID( int irq, senseid_t *sid, __u8 lpm )
{
- int found = 0;
- int ciw_cnt = 0;
- unsigned int flags;
-
- int ret = 0;
+ ccw1_t sense_ccw[2]; /* ccw area for SenseID command */
+ senseid_t isid; /* internal sid */
+ devstat_t devstat; /* required by request_irq() */
+ __u8 pathmask; /* calculated path mask */
+ __u8 domask; /* path mask to use */
+ int inlreq; /* inline request_irq() */
+ int irq_ret; /* return code */
+ devstat_t *pdevstat; /* ptr to devstat in use */
+ int retry; /* retry count */
+ int io_retry; /* retry indicator */
+
+ senseid_t *psid = sid;/* start with the external buffer */
+ int sbuffer = 0; /* switch SID data buffer */
if ( (irq > highest_subchannel) || (irq < 0 ) )
{
return( -ENODEV );
+
}
else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
{
} /* endif */
- /*
- * scan for RCD command in extended SenseID data
- */
- for ( ; (found == 0) && (ciw_cnt < 62); ciw_cnt++ )
- {
- if ( ioinfo[irq]->senseid.ciw[ciw_cnt].ct == CIW_TYPE_RCD )
- {
- found = 1;
- break;
- } /* endif */
-
- } /* endfor */
-
- if ( found )
- {
- ccw1_t *rcd_ccw = &ioinfo[irq]->senseccw;
- devstat_t devstat;
- char *rcd_buf;
- int devflag;
-
- int emulated = 0;
- int retry = 5;
-
- __save_flags(flags);
- __cli();
-
if ( !ioinfo[irq]->ui.flags.ready )
{
- ret = request_irq( irq,
- init_IRQ_handler,
- 0, "RCD", &devstat );
-
- if ( !ret )
- {
- emulated = 1;
-
- } /* endif */
-
- } /* endif */
-
- if ( !ret )
- {
- rcd_buf = kmalloc( ioinfo[irq]->senseid.ciw[ciw_cnt].count,
- GFP_KERNEL);
-
- do
- {
- rcd_ccw->cmd_code = ioinfo[irq]->senseid.ciw[ciw_cnt].cmd;
- rcd_ccw->cda = (char *)virt_to_phys( rcd_buf );
- rcd_ccw->count = ioinfo[irq]->senseid.ciw[ciw_cnt].count;
- rcd_ccw->flags = CCW_FLAG_SLI;
-
- ret = s390_start_IO( irq,
- rcd_ccw,
- 0x00524344, // == RCD
- 0, // n/a
- DOIO_WAIT_FOR_INTERRUPT );
-
- retry--;
-
- devflag = ((devstat_t *)(ioinfo[irq]->irq_desc.action->dev_id))->flag;
-
- } while ( ( retry )
- && ( ret || (devflag & DEVSTAT_STATUS_PENDING) ) );
-
- if ( !retry )
- ret = -EBUSY;
-
- __restore_flags(flags);
- } /* endif */
+ pdevstat = &devstat;
/*
- * on success we update the user input parms
+ * Perform SENSE ID command processing. We have to request device
+ * ownership and provide a dummy I/O handler. We issue sync. I/O
+ * requests and evaluate the devstat area on return, therefore
+ * we don't need a real I/O handler in place.
*/
- if ( !ret )
- {
- *length = ioinfo[irq]->senseid.ciw[ciw_cnt].count;
- *buffer = rcd_buf;
-
- } /* endif */
+ irq_ret = request_irq( irq, init_IRQ_handler, 0, "SID", &devstat);
- if ( emulated )
- free_irq( irq, &devstat);
+ if ( irq_ret == 0 )
+ inlreq = 1;
}
else
{
- ret = -EINVAL;
+ inlreq = 0;
+ irq_ret = 0;
+ pdevstat = ioinfo[irq]->irq_desc.action->dev_id;
} /* endif */
- return( ret );
-
-}
-
-int get_dev_info( int irq, dev_info_t *pdi)
+ if ( irq_ret == 0 )
{
- return( get_dev_info_by_irq( irq, pdi) );
-}
+ int i;
+ s390irq_spin_lock( irq);
-static int __inline__ get_next_available_irq( ioinfo_t *pi)
+ // more than one path installed ?
+ if ( ioinfo[irq]->schib.pmcw.pim != 0x80 )
{
- int ret_val;
-
- while ( TRUE )
- {
- if ( pi->ui.flags.oper )
- {
- ret_val = pi->irq;
- break;
+ sense_ccw[0].cmd_code = CCW_CMD_SUSPEND_RECONN;
+ sense_ccw[0].cda = 0;
+ sense_ccw[0].count = 0;
+ sense_ccw[0].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
+
+ sense_ccw[1].cmd_code = CCW_CMD_SENSE_ID;
+ sense_ccw[1].cda = (char *)virt_to_phys( psid );
+ sense_ccw[1].count = sizeof( senseid_t);
+ sense_ccw[1].flags = CCW_FLAG_SLI;
}
else
{
- pi = pi->next;
-
- //
- // leave at end of list unconditionally
- //
- if ( pi == NULL )
- {
- ret_val = -ENODEV;
- break;
- }
+ sense_ccw[0].cmd_code = CCW_CMD_SENSE_ID;
+ sense_ccw[0].cda = (char *)virt_to_phys( psid );
+ sense_ccw[0].count = sizeof( senseid_t);
+ sense_ccw[0].flags = CCW_FLAG_SLI;
} /* endif */
- } /* endwhile */
-
- return ret_val;
-}
+ for ( i = 0 ; (i < 8) ; i++ )
+ {
+ pathmask = 0x80 >> i;
+ domask = ioinfo[irq]->opm & pathmask;
-int get_irq_first( void )
-{
- int ret_irq;
+ if ( lpm )
+ domask &= lpm;
- if ( ioinfo_head )
- {
- if ( ioinfo_head->ui.flags.oper )
- {
- ret_irq = ioinfo_head->irq;
- }
- else if ( ioinfo_head->next )
+ if ( domask )
{
- ret_irq = get_next_available_irq( ioinfo_head->next );
+ psid->cu_type = 0xFFFF; /* initialize fields ... */
+ psid->cu_model = 0;
+ psid->dev_type = 0;
+ psid->dev_model = 0;
- }
- else
- {
- ret_irq = -ENODEV;
+ retry = 5; /* retry count */
+ io_retry = 1; /* enable retries */
- } /* endif */
- }
- else
+ /*
+ * We now issue a SenseID request. In case of BUSY,
+ * STATUS PENDING or non-CMD_REJECT error conditions
+ * we run simple retries.
+ */
+ do
{
- ret_irq = -ENODEV;
-
- } /* endif */
+ memset( pdevstat, '\0', sizeof( devstat_t) );
- return ret_irq;
-}
+ irq_ret = s390_start_IO( irq,
+ sense_ccw,
+ 0x00E2C9C4, // == SID
+ domask,
+ DOIO_WAIT_FOR_INTERRUPT
+ | DOIO_TIMEOUT
+ | DOIO_VALID_LPM
+ | DOIO_DONT_CALL_INTHDLR );
-int get_irq_next( int irq )
-{
- int ret_irq;
+ //
+ // The OSA_E FE card possibly causes -ETIMEDOUT
+ // conditions, as the SenseID may stay start
+ // pending. This will cause start_IO() to finally
+ // halt the operation, which we should retry. If
+ // the halt fails this may cause -EBUSY; we simply
+ // retry and eventually clean up with free_irq().
+ //
- if ( ioinfo[irq] != INVALID_STORAGE_AREA )
+ if ( psid->cu_type == 0xFFFF )
{
- if ( ioinfo[irq]->next )
+ if ( pdevstat->flag & DEVSTAT_STATUS_PENDING )
{
- if ( ioinfo[irq]->next->ui.flags.oper )
+#ifdef CONFIG_DEBUG_IO
+ printk( "SenseID : device %04X on "
+ "Subchannel %04X "
+ "reports pending status, "
+ "retry : %d\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq,
+ retry);
+#endif
+ } /* endif */
+
+ if ( pdevstat->flag & DEVSTAT_FLAG_SENSE_AVAIL )
{
- ret_irq = ioinfo[irq]->next->irq;
- }
- else
+ /*
+ * if the device doesn't support the SenseID
+ * command further retries wouldn't help ...
+ */
+ if ( pdevstat->ii.sense.data[0]
+ & (SNS0_CMD_REJECT | SNS0_INTERVENTION_REQ) )
{
- ret_irq = get_next_available_irq( ioinfo[irq]->next );
-
- } /* endif */
+#ifdef CONFIG_DEBUG_IO
+ printk( "SenseID : device %04X on "
+ "Subchannel %04X "
+ "reports cmd reject or "
+ "intervention required\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq);
+#endif
+ io_retry = 0;
}
+#ifdef CONFIG_DEBUG_IO
else
{
- ret_irq = -ENODEV;
+ printk( "SenseID : UC on "
+ "dev %04X, "
+ "retry %d, "
+ "lpum %02X, "
+ "cnt %02d, "
+ "sns :"
+ " %02X%02X%02X%02X "
+ "%02X%02X%02X%02X ...\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ retry,
+ pdevstat->lpum,
+ pdevstat->scnt,
+ pdevstat->ii.sense.data[0],
+ pdevstat->ii.sense.data[1],
+ pdevstat->ii.sense.data[2],
+ pdevstat->ii.sense.data[3],
+ pdevstat->ii.sense.data[4],
+ pdevstat->ii.sense.data[5],
+ pdevstat->ii.sense.data[6],
+ pdevstat->ii.sense.data[7]);
} /* endif */
+#endif
}
- else
+ else if ( ( pdevstat->flag & DEVSTAT_NOT_OPER )
+ || ( irq_ret != -ENODEV ) )
{
- ret_irq = -EINVAL;
+#ifdef CONFIG_DEBUG_IO
+ printk( "SenseID : path %02X for "
+ "device %04X on "
+ "subchannel %04X "
+ "is 'not operational'\n",
+ domask,
+ ioinfo[irq]->schib.pmcw.dev,
+ irq);
+#endif
- } /* endif */
+ io_retry = 0;
+ ioinfo[irq]->opm &= ~domask;
- return ret_irq;
}
+#ifdef CONFIG_DEBUG_IO
+ else if ( (pdevstat->flag !=
+ ( DEVSTAT_START_FUNCTION
+ | DEVSTAT_FINAL_STATUS ) )
+ && !(pdevstat->flag &
+ DEVSTAT_STATUS_PENDING ) )
+ {
+ printk( "SenseID : start_IO() for "
+ "device %04X on "
+ "subchannel %04X "
+ "returns %d, retry %d, "
+ "status %04X\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq,
+ irq_ret,
+ retry,
+ pdevstat->flag);
-int get_dev_info_by_irq( int irq, dev_info_t *pdi)
-{
-
- if ( irq > highest_subchannel || irq < 0 )
- {
- return -ENODEV;
- }
- else if ( pdi == NULL )
- {
- return -EINVAL;
+ } /* endif */
+#endif
}
- else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
+ else // we got it ...
{
- return( -ENODEV);
- }
- else
+ if ( !sbuffer ) // switch buffers
{
- pdi->devno = ioinfo[irq]->schib.pmcw.dev;
- pdi->irq = irq;
+ /*
+ * we report back the
+ * first hit only
+ */
+ psid = &isid;
- if ( ioinfo[irq]->ui.flags.oper )
+ if ( ioinfo[irq]->schib.pmcw.pim != 0x80 )
{
- pdi->status = 0;
- memcpy( &(pdi->sid_data),
- &ioinfo[irq]->senseid,
- sizeof( senseid_t));
+ sense_ccw[1].cda = (char *)virt_to_phys( psid );
}
else
{
- pdi->status = DEVSTAT_NOT_OPER;
- memcpy( &(pdi->sid_data),
- '\0',
- sizeof( senseid_t));
- pdi->sid_data.cu_type = 0xFFFF;
+ sense_ccw[0].cda = (char *)virt_to_phys( psid );
} /* endif */
- if ( ioinfo[irq]->ui.flags.ready )
- pdi->status |= DEVSTAT_DEVICE_OWNED;
+ /*
+ * if just the very first
+ * path was requested to
+ * be sensed, disable
+ * further scans.
+ */
+ if ( !lpm )
+ lpm = domask;
- return 0;
+ sbuffer = 1;
} /* endif */
-}
-
-
-int get_dev_info_by_devno( unsigned int devno, dev_info_t *pdi)
+ if ( pdevstat->rescnt < (sizeof( senseid_t) - 8) )
{
- int i;
- int rc = -ENODEV;
+ ioinfo[irq]->ui.flags.esid = 1;
- if ( devno > 0x0000ffff )
- {
- return -ENODEV;
- }
- else if ( pdi == NULL )
- {
- return -EINVAL;
- }
- else
- {
+ } /* endif */
- for ( i=0; i <= highest_subchannel; i++ )
- {
+ io_retry = 0;
- if ( ioinfo[i] != INVALID_STORAGE_AREA
- && ioinfo[i]->schib.pmcw.dev == devno )
- {
- if ( ioinfo[i]->ui.flags.oper )
- {
- pdi->status = 0;
- pdi->irq = i;
- pdi->devno = devno;
+ } /* endif */
- memcpy( &(pdi->sid_data),
- &ioinfo[i]->senseid,
- sizeof( senseid_t));
- }
- else
+ if ( io_retry )
{
- pdi->status = DEVSTAT_NOT_OPER;
- pdi->irq = i;
- pdi->devno = devno;
+ retry--;
- memcpy( &(pdi->sid_data), '\0', sizeof( senseid_t));
- pdi->sid_data.cu_type = 0xFFFF;
+ if ( retry == 0 )
+ {
+ io_retry = 0;
} /* endif */
- if ( ioinfo[i]->ui.flags.ready )
- pdi->status |= DEVSTAT_DEVICE_OWNED;
+ } /* endif */
- rc = 0; /* found */
- break;
+ } while ( (io_retry) );
- } /* endif */
+ } /* endif - domask */
} /* endfor */
- return( rc);
+ s390irq_spin_unlock( irq);
- } /* endif */
+ /*
+ * If we installed the irq action handler we have to
+ * release it too.
+ */
+ if ( inlreq )
+ free_irq( irq, pdevstat);
-}
+ /*
+ * if running under VM, check there as well ... perhaps we should
+ * do this only after a command reject, but it doesn't harm
+ */
+ if ( ( sid->cu_type == 0xFFFF )
+ && ( MACHINE_IS_VM ) )
+ {
+ VM_virtual_device_info( ioinfo[irq]->schib.pmcw.dev,
+ sid );
+ } /* endif */
-int get_irq_by_devno( unsigned int devno )
+ if ( sid->cu_type == 0xFFFF )
{
- int i;
- int rc = -1;
+ /*
+ * SenseID CU-type of 0xffff indicates that no device
+ * information could be retrieved (pre-init value).
+ *
+ * If we couldn't identify the device type we
+ * consider the device "not operational".
+ */
+#ifdef CONFIG_DEBUG_IO
+ printk( "SenseID : unknown device %04X on subchannel %04X\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq);
+#endif
+ ioinfo[irq]->ui.flags.oper = 0;
- if ( devno <= 0x0000ffff )
+ } /* endif */
+
+ /*
+ * Issue device info message if unit was operational.
+ */
+ if ( ioinfo[irq]->ui.flags.oper )
{
- for ( i=0; i <= highest_subchannel; i++ )
+ if ( sid->dev_type != 0 )
{
- if ( (ioinfo[i] != INVALID_STORAGE_AREA )
- && (ioinfo[i]->schib.pmcw.dev == devno)
- && (ioinfo[i]->schib.pmcw.dnv == 1 ) )
+ printk( "SenseID : device %04X reports: CU Type/Mod = %04X/%02X,"
+ " Dev Type/Mod = %04X/%02X\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ sid->cu_type,
+ sid->cu_model,
+ sid->dev_type,
+ sid->dev_model);
+ }
+ else
{
- rc = i;
- break;
+ printk( "SenseID : device %04X reports:"
+ " Dev Type/Mod = %04X/%02X\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ sid->cu_type,
+ sid->cu_model);
} /* endif */
- } /* endfor */
-
} /* endif */
- return( rc);
-}
-
-unsigned int get_devno_by_irq( int irq )
-{
-
- if ( ( irq > highest_subchannel )
- || ( irq < 0 )
- || ( ioinfo[irq] == INVALID_STORAGE_AREA ) )
- {
- return -1;
+ if ( ioinfo[irq]->ui.flags.oper )
+ irq_ret = 0;
+ else
+ irq_ret = -ENODEV;
} /* endif */
- /*
- * we don't need to check for the device be operational
- * as the initial STSCH will always present the device
- * number defined by the IOCDS regardless of the device
- * existing or not. However, there could be subchannels
- * defined who's device number isn't valid ...
- */
- if ( ioinfo[irq]->schib.pmcw.dnv )
- return( ioinfo[irq]->schib.pmcw.dev );
- else
- return -1;
+ return( irq_ret );
}
-/*
- * s390_device_recognition
- *
- * Used for system wide device recognition. Issues the device
- * independant SenseID command to obtain info the device type.
- *
- */
-void s390_device_recognition( void)
+static int __inline__ s390_SetMultiPath( int irq )
{
+ int cc;
- int irq = 0; /* let's start with subchannel 0 ... */
+ cc = stsch( irq, &ioinfo[irq]->schib );
- do
- {
- /*
- * We issue the SenseID command on I/O subchannels we think are
- * operational only.
- */
- if ( ( ioinfo[irq] != INVALID_STORAGE_AREA )
- && ( ioinfo[irq]->schib.pmcw.st == 0 )
- && ( ioinfo[irq]->ui.flags.oper == 1 ) )
+ if ( !cc )
{
- s390_SenseID( irq, &ioinfo[irq]->senseid );
-
- } /* endif */
+ ioinfo[irq]->schib.pmcw.mp = 1; /* multipath mode */
- irq ++;
+ cc = msch( irq, &ioinfo[irq]->schib );
- } while ( irq <= highest_subchannel );
+ } /* endif */
+ return( cc);
}
-
/*
- * s390_search_devices
+ * Device Path Verification
*
- * Determines all subchannels available to the system.
+ * Path verification is accomplished by checking which paths (CHPIDs) are
+ * available. Further, a path group ID is set, if possible in multipath
+ * mode, otherwise in single path mode.
*
*/
-void s390_process_subchannels( void)
+int s390_DevicePathVerification( int irq, __u8 usermask )
{
- int isValid;
- int irq = 0; /* Evaluate all subchannels starting with 0 ... */
+#if 1
+ int ccode;
+ __u8 pathmask;
+ __u8 domask;
- do
- {
- isValid = s390_validate_subchannel( irq);
+ int ret = 0;
- irq++;
+ if ( ioinfo[irq]->ui.flags.pgid_supp == 0 )
+ {
+ return( 0); // just exit ...
- } while ( isValid && irq < __MAX_SUBCHANNELS );
+ } /* endif */
- highest_subchannel = --irq;
+ ccode = stsch( irq, &(ioinfo[irq]->schib) );
- printk( "\nHighest subchannel number detected: %u\n",
- highest_subchannel);
+ if ( ccode )
+ {
+ ret = -ENODEV;
}
-
+ else if ( ioinfo[irq]->schib.pmcw.pim == 0x80 )
+ {
/*
- * s390_validate_subchannel()
- *
- * Process the subchannel for the requested irq. Returns 1 for valid
- * subchannels, otherwise 0.
+ * no error, just not required for single path only devices
*/
-int s390_validate_subchannel( int irq )
+ ioinfo[irq]->ui.flags.pgid_supp = 0;
+ ret = 0;
+ }
+ else
{
+ int i;
+ pgid_t pgid;
+ __u8 dev_path;
+ int first = 1;
- int retry; /* retry count for status pending conditions */
- int ccode; /* condition code for stsch() only */
- int ccode2; /* condition code for other I/O routines */
- schib_t *p_schib;
+ ioinfo[irq]->opm = ioinfo[irq]->schib.pmcw.pim
+ & ioinfo[irq]->schib.pmcw.pam
+ & ioinfo[irq]->schib.pmcw.pom;
- /*
- * The first subchannel that is not-operational (ccode==3)
- * indicates that there aren't any more devices available.
- */
- if ( ( init_IRQ_complete )
- && ( ioinfo[irq] != INVALID_STORAGE_AREA ) )
+ if ( usermask )
{
- p_schib = &ioinfo[irq]->schib;
+ dev_path = usermask;
}
else
{
- p_schib = &init_schib;
+ dev_path = ioinfo[irq]->opm;
} /* endif */
- ccode = stsch( irq, p_schib);
-
- if ( ccode == 0)
- {
/*
- * ... just being curious we check for non I/O subchannels
+ * let's build a path group ID if we don't have one yet
*/
- if ( p_schib->pmcw.st )
+ if ( ioinfo[irq]->ui.flags.pgid == 0)
{
- printk( "Subchannel %04X reports "
- "non-I/O subchannel type %04X\n",
- irq,
- p_schib->pmcw.st);
+ ioinfo[irq]->pgid.cpu_addr = *(__u16 *)__LC_CPUADDR;
+ ioinfo[irq]->pgid.cpu_id = ((cpuid_t *)__LC_CPUID)->ident;
+ ioinfo[irq]->pgid.cpu_model = ((cpuid_t *)__LC_CPUID)->machine;
+ ioinfo[irq]->pgid.tod_high = *(__u32 *)&irq_IPL_TOD;
- if ( ioinfo[irq] != INVALID_STORAGE_AREA )
- ioinfo[irq]->ui.flags.oper = 0;
+ ioinfo[irq]->ui.flags.pgid = 1;
} /* endif */
- if ( p_schib->pmcw.dnv )
- {
- if ( ioinfo[irq] == INVALID_STORAGE_AREA )
- {
+ memcpy( &pgid, &ioinfo[irq]->pgid, sizeof(pgid_t));
- if ( !init_IRQ_complete )
+ for ( i = 0; i < 8 && !ret ; i++)
{
- ioinfo[irq] =
- (ioinfo_t *)alloc_bootmem( sizeof(ioinfo_t));
- }
- else
+ pathmask = 0x80 >> i;
+
+ domask = dev_path & pathmask;
+
+ if ( domask )
{
- ioinfo[irq] =
- (ioinfo_t *)kmalloc( sizeof(ioinfo_t),
- GFP_KERNEL );
+ ret = s390_SetPGID( irq, domask, &pgid );
- } /* endif */
+ /*
+ * For the *first* path we are prepared
+ * for recovery
+ *
+ * - If we fail setting the PGID we assume it's
+ * already using a different PGID (VM), so
+ * we try to sense it.
+ */
+ if ( ret == -EOPNOTSUPP && first )
+ {
+ *(int *)&pgid = 0;
- memset( ioinfo[irq], '\0', sizeof( ioinfo_t));
- memcpy( &ioinfo[irq]->schib,
- &init_schib,
- sizeof( schib_t));
- ioinfo[irq]->irq_desc.status = IRQ_DISABLED;
- ioinfo[irq]->irq_desc.handler = &no_irq_type;
+ ret = s390_SensePGID( irq, domask, &pgid);
+ first = 0;
+ if ( ret == 0 )
+ {
/*
- * We have to insert the new ioinfo element
- * into the linked list, either at its head,
- * its tail or insert it.
+ * Check whether we retrieved
+ * a reasonable PGID ...
*/
- if ( ioinfo_head == NULL ) /* first element */
+ if ( pgid.inf.ps.state1 == SNID_STATE1_GROUPED )
{
- ioinfo_head = ioinfo[irq];
- ioinfo_tail = ioinfo[irq];
+ memcpy( &(ioinfo[irq]->pgid),
+ &pgid,
+ sizeof(pgid_t) );
}
- else if (irq < ioinfo_head->irq) /* new head */
+ else // ungrouped or garbage ...
{
- ioinfo[irq]->next = ioinfo_head;
- ioinfo_head->prev = ioinfo[irq];
- ioinfo_head = ioinfo[irq];
+ ret = -EOPNOTSUPP;
+
+ } /* endif */
}
- else if (irq > ioinfo_tail->irq) /* new tail */
+ else
{
- ioinfo_tail->next = ioinfo[irq];
- ioinfo[irq]->prev = ioinfo_tail;
- ioinfo_tail = ioinfo[irq];
+ ioinfo[irq]->ui.flags.pgid_supp = 0;
+
+#ifdef CONFIG_DEBUG_IO
+ printk( "PathVerification(%04X) "
+ "- Device %04X doesn't "
+ " support path grouping\n",
+ irq,
+ ioinfo[irq]->schib.pmcw.dev);
+#endif
+
+ } /* endif */
}
- else /* insert element */
+ else if ( ret )
{
- ioinfo_t *pi = ioinfo_head;
- do
- {
- if ( irq < pi->next->irq )
- {
- ioinfo[irq]->next = pi->next;
- ioinfo[irq]->prev = pi;
- pi->next->prev = ioinfo[irq];
- pi->next = ioinfo[irq];
- break;
+#ifdef CONFIG_DEBUG_IO
+ printk( "PathVerification(%04X) "
+ "- Device %04X doesn't "
+ " support path grouping\n",
+ irq,
+ ioinfo[irq]->schib.pmcw.dev);
- } /* endif */
+#endif
- pi = pi->next;
+ ioinfo[irq]->ui.flags.pgid_supp = 0;
- } while ( 1 );
+ } /* endif */
} /* endif */
+ } /* endfor */
+
} /* endif */
- printk( "Detected device %04X on subchannel %04X"
- " - PIM = %02X, PAM = %02X, POM = %02X\n",
- ioinfo[irq]->schib.pmcw.dev,
- irq,
- ioinfo[irq]->schib.pmcw.pim,
- ioinfo[irq]->schib.pmcw.pam,
- ioinfo[irq]->schib.pmcw.pom);
+ return ret;
+#else
+ return 0;
+#endif
+}
/*
- * We now have to initially ...
- * ... set "interruption sublass"
- * ... enable "concurrent sense"
- * ... enable "multipath mode" if more than
- * one CHPID is available
+ * s390_SetPGID
+ *
+ * Set Path Group ID
*
- * Note : we don't enable the device here, this is temporarily
- * done during device sensing below.
*/
- ioinfo[irq]->schib.pmcw.isc = 3; /* could be smth. else */
- ioinfo[irq]->schib.pmcw.csense = 1; /* concurrent sense */
- ioinfo[irq]->schib.pmcw.ena = 0; /* force disable it */
- ioinfo[irq]->schib.pmcw.intparm =
- ioinfo[irq]->schib.pmcw.dev;
+int s390_SetPGID( int irq, __u8 lpm, pgid_t *pgid )
+ {
+ ccw1_t spid_ccw[2]; /* ccw area for SPID command */
+ devstat_t devstat; /* required by request_irq() */
+ devstat_t *pdevstat = &devstat;
+
+ int irq_ret = 0; /* return code */
+ int retry = 5; /* retry count */
+ int inlreq = 0; /* inline request_irq() */
+ int mpath = 1; /* try multi-path first */
+
+ if ( (irq > highest_subchannel) || (irq < 0 ) )
+ {
+ return( -ENODEV );
- if ( ( ioinfo[irq]->schib.pmcw.pim != 0 )
- && ( ioinfo[irq]->schib.pmcw.pim != 0x80 ) )
+ }
+ else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
{
- ioinfo[irq]->schib.pmcw.mp = 1; /* multipath mode */
+ return( -ENODEV);
- } /* endif */
+ } /* endif */
- /*
- * initialize ioinfo structure
- */
- ioinfo[irq]->irq = irq;
- ioinfo[irq]->ui.flags.busy = 0;
- ioinfo[irq]->ui.flags.ready = 0;
- ioinfo[irq]->ui.flags.oper = 1;
- ioinfo[irq]->devstat.intparm = 0;
- ioinfo[irq]->devstat.devno = ioinfo[irq]->schib.pmcw.dev;
+ if ( ioinfo[irq]->ui.flags.oper == 0 )
+ {
+ return( -ENODEV );
- retry = 5;
+ } /* endif */
- do
- {
- ccode2 = msch_err( irq, &ioinfo[irq]->schib);
+ if ( !ioinfo[irq]->ui.flags.ready )
+ {
+ /*
+ * Perform SET PGID command processing. We have to request device
+ * ownership and provide a dummy I/O handler. We issue sync. I/O
+ * requests and evaluate the devstat area on return therefore
+ * we don't need a real I/O handler in place.
+ */
+ irq_ret = request_irq( irq,
+ init_IRQ_handler,
+ 0,
+ "SPID",
+ pdevstat);
- switch (ccode2) {
- case 0: // successful completion
- //
- // concurrent sense facility available ...
- //
- ioinfo[irq]->ui.flags.consns = 1;
- break;
+ if ( irq_ret == 0 )
+ inlreq = 1;
+ }
+ else
+ {
+ pdevstat = ioinfo[irq]->irq_desc.action->dev_id;
- case 1: // status pending
- //
- // How can we have a pending status as device is
- // disabled for interrupts ? Anyway, clear it ...
- //
- tsch( irq, &(ioinfo[irq]->devstat.ii.irb) );
- retry--;
- break;
+ } /* endif */
- case 2: // busy
- retry--;
- break;
+ if ( irq_ret == 0 )
+ {
+ s390irq_spin_lock( irq);
- case 3: // not operational
- ioinfo[irq]->ui.flags.oper = 0;
- retry = 0;
- break;
+ spid_ccw[0].cmd_code = 0x5B; /* suspend multipath reconnect */
+ spid_ccw[0].cda = 0;
+ spid_ccw[0].count = 0;
+ spid_ccw[0].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
- default:
-#define PGMCHK_OPERAND_EXC 0x15
+ spid_ccw[1].cmd_code = CCW_CMD_SET_PGID;
+ spid_ccw[1].cda = (char *)virt_to_phys( pgid );
+ spid_ccw[1].count = sizeof( pgid_t);
+ spid_ccw[1].flags = CCW_FLAG_SLI;
+
+ pgid->inf.fc = SPID_FUNC_MULTI_PATH | SPID_FUNC_ESTABLISH;
- if ( (ccode2 & PGMCHK_OPERAND_EXC) == PGMCHK_OPERAND_EXC )
- {
/*
- * re-issue the modify subchannel without trying to
- * enable the concurrent sense facility
+ * We now issue a SetPGID request. In case of BUSY
+ * or STATUS PENDING conditions we retry 5 times.
*/
- ioinfo[irq]->schib.pmcw.csense = 0;
+ do
+ {
+ memset( pdevstat, '\0', sizeof( devstat_t) );
- ccode2 = msch_err( irq, &ioinfo[irq]->schib);
+ irq_ret = s390_start_IO( irq,
+ spid_ccw,
+ 0xE2D7C9C4, // == SPID
+ lpm, // n/a
+ DOIO_WAIT_FOR_INTERRUPT
+ | DOIO_VALID_LPM
+ | DOIO_DONT_CALL_INTHDLR );
+
+ if ( !irq_ret )
+ {
+ if ( pdevstat->flag & DEVSTAT_STATUS_PENDING )
+ {
+#ifdef CONFIG_DEBUG_IO
+ printk( "SPID - Device %04X "
+ "on Subchannel %04X "
+ "reports pending status, "
+ "retry : %d\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq,
+ retry);
+#endif
+ } /* endif */
- if ( ccode2 != 0 )
+ if ( pdevstat->flag == ( DEVSTAT_START_FUNCTION
+ | DEVSTAT_FINAL_STATUS ) )
{
- printk( " ... modify subchannel (2) failed with CC = %X\n",
- ccode2 );
- ioinfo[irq]->ui.flags.oper = 0;
+ retry = 0; // successfully set ...
+ }
+ else if ( pdevstat->flag & DEVSTAT_FLAG_SENSE_AVAIL )
+ {
+ /*
+ * If the device doesn't support the
+ * Sense Path Group ID command
+ * further retries wouldn't help ...
+ */
+ if ( pdevstat->ii.sense.data[0] & SNS0_CMD_REJECT )
+ {
+ if ( mpath )
+ {
+ pgid->inf.fc = SPID_FUNC_SINGLE_PATH
+ | SPID_FUNC_ESTABLISH;
+ mpath = 0;
+ retry--;
}
else
{
- ioinfo[irq]->ui.flags.consns = 0;
+ irq_ret = -EOPNOTSUPP;
+ retry = 0;
} /* endif */
}
+#ifdef CONFIG_DEBUG_IO
else
{
- printk( " ... modify subchannel (1) failed with CC = %X\n",
- ccode2);
- ioinfo[irq]->ui.flags.oper = 0;
+ printk( "SPID - device %04X,"
+ " unit check,"
+ " retry %d, cnt %02d,"
+ " sns :"
+ " %02X%02X%02X%02X %02X%02X%02X%02X ...\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ retry,
+ pdevstat->scnt,
+ pdevstat->ii.sense.data[0],
+ pdevstat->ii.sense.data[1],
+ pdevstat->ii.sense.data[2],
+ pdevstat->ii.sense.data[3],
+ pdevstat->ii.sense.data[4],
+ pdevstat->ii.sense.data[5],
+ pdevstat->ii.sense.data[6],
+ pdevstat->ii.sense.data[7]);
} /* endif */
+#endif
+ }
+ else if ( pdevstat->flag & DEVSTAT_NOT_OPER )
+ {
+ printk( "SPID - Device %04X "
+ "on Subchannel %04X "
+ "became 'not operational'\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq);
retry = 0;
- break;
-
- } /* endswitch */
-
- } while ( ccode2 && retry );
- if ( (ccode2 < 3) && (!retry) )
+ } /* endif */
+ }
+ else if ( irq_ret != -ENODEV )
{
- printk( " ... msch() retry count for "
- "subchannel %04X exceeded, CC = %d\n",
- irq,
- ccode2);
+ retry--;
+ }
+ else
+ {
+ retry = 0;
} /* endif */
- } /* endif */
+ } while ( retry > 0 );
- } /* endif */
+ s390irq_spin_unlock( irq);
/*
- * indicate whether the subchannel is valid
+ * If we installed the irq action handler we have to
+ * release it too.
*/
- if ( ccode == 3)
- return(0);
- else
- return(1);
+ if ( inlreq )
+ free_irq( irq, pdevstat);
+
+ } /* endif */
+
+ return( irq_ret );
}
+
/*
- * s390_SenseID
+ * s390_SensePGID
*
- * Try to obtain the 'control unit'/'device type' information
- * associated with the subchannel.
+ * Sense Path Group ID
*
- * The function is primarily meant to be called without irq
- * action handler in place. However, it also allows for
- * use with an action handler in place. If there is already
- * an action handler registered assure it can handle the
- * s390_SenseID() related device interrupts - interruption
- * parameter used is 0x00E2C9C4 ( SID ).
*/
-int s390_SenseID( int irq, senseid_t *sid )
+int s390_SensePGID( int irq, __u8 lpm, pgid_t *pgid )
{
- ccw1_t sense_ccw; /* ccw area for SenseID command */
+ ccw1_t snid_ccw; /* ccw area for SNID command */
devstat_t devstat; /* required by request_irq() */
+ devstat_t *pdevstat = &devstat;
int irq_ret = 0; /* return code */
int retry = 5; /* retry count */
else if ( ioinfo[irq] == INVALID_STORAGE_AREA )
{
return( -ENODEV);
+
} /* endif */
if ( ioinfo[irq]->ui.flags.oper == 0 )
* requests and evaluate the devstat area on return therefore
* we don't need a real I/O handler in place.
*/
- irq_ret = request_irq( irq, init_IRQ_handler, 0, "SID", &devstat);
+ irq_ret = request_irq( irq,
+ init_IRQ_handler,
+ 0,
+ "SNID",
+ pdevstat);
if ( irq_ret == 0 )
inlreq = 1;
+ }
+ else
+ {
+ pdevstat = ioinfo[irq]->irq_desc.action->dev_id;
+
} /* endif */
if ( irq_ret == 0 )
{
s390irq_spin_lock( irq);
- sense_ccw.cmd_code = CCW_CMD_SENSE_ID;
- sense_ccw.cda = (char *)virt_to_phys( sid );
- sense_ccw.count = sizeof( senseid_t);
- sense_ccw.flags = CCW_FLAG_SLI;
-
- ioinfo[irq]->senseid.cu_type = 0xFFFF; /* initialize fields ... */
- ioinfo[irq]->senseid.cu_model = 0;
- ioinfo[irq]->senseid.dev_type = 0;
- ioinfo[irq]->senseid.dev_model = 0;
+ snid_ccw.cmd_code = CCW_CMD_SENSE_PGID;
+ snid_ccw.cda = (char *)virt_to_phys( pgid );
+ snid_ccw.count = sizeof( pgid_t);
+ snid_ccw.flags = CCW_FLAG_SLI;
/*
* We now issue a SenseID request. In case of BUSY
*/
do
{
- memset( &devstat, '\0', sizeof( devstat_t) );
+ memset( pdevstat, '\0', sizeof( devstat_t) );
irq_ret = s390_start_IO( irq,
- &sense_ccw,
- 0x00E2C9C4, // == SID
- 0, // n/a
- DOIO_WAIT_FOR_INTERRUPT );
-
- if ( sid->cu_type == 0xFFFF )
- {
- if ( devstat.flag & DEVSTAT_STATUS_PENDING )
+ &snid_ccw,
+ 0xE2D5C9C4, // == SNID
+ lpm, // n/a
+ DOIO_WAIT_FOR_INTERRUPT
+ | DOIO_VALID_LPM
+ | DOIO_DONT_CALL_INTHDLR );
+
+ if ( irq_ret == 0 )
{
-#if CONFIG_DEBUG_IO
- printk( "Device %04X on Subchannel %04X "
- "reports pending status, retry : %d\n",
- ioinfo[irq]->schib.pmcw.dev,
- irq,
- retry);
-#endif
- } /* endif */
-
- if ( devstat.flag & DEVSTAT_FLAG_SENSE_AVAIL )
+ if ( pdevstat->flag & DEVSTAT_FLAG_SENSE_AVAIL )
{
/*
- * if the device doesn't support the SenseID
- * command further retries wouldn't help ...
+ * If the device doesn't support the
+ * Sense Path Group ID command
+ * further retries wouldn't help ...
*/
- if ( devstat.ii.sense.data[0] == SNS0_CMD_REJECT )
+ if ( pdevstat->ii.sense.data[0] & SNS0_CMD_REJECT )
{
retry = 0;
+ irq_ret = -EOPNOTSUPP;
}
-#if CONFIG_DEBUG_IO
else
{
- printk( "Device %04X,"
- " UC/SenseID,"
+#ifdef CONFIG_DEBUG_IO
+ printk( "SNID - device %04X,"
+ " unit check,"
+ " flag %04X, "
" retry %d, cnt %02d,"
" sns :"
" %02X%02X%02X%02X %02X%02X%02X%02X ...\n",
ioinfo[irq]->schib.pmcw.dev,
+ pdevstat->flag,
retry,
- devstat.scnt,
- devstat.ii.sense.data[0],
- devstat.ii.sense.data[1],
- devstat.ii.sense.data[2],
- devstat.ii.sense.data[3],
- devstat.ii.sense.data[4],
- devstat.ii.sense.data[5],
- devstat.ii.sense.data[6],
- devstat.ii.sense.data[7]);
+ pdevstat->scnt,
+ pdevstat->ii.sense.data[0],
+ pdevstat->ii.sense.data[1],
+ pdevstat->ii.sense.data[2],
+ pdevstat->ii.sense.data[3],
+ pdevstat->ii.sense.data[4],
+ pdevstat->ii.sense.data[5],
+ pdevstat->ii.sense.data[6],
+ pdevstat->ii.sense.data[7]);
- } /* endif */
#endif
+ retry--;
+
+ } /* endif */
}
- else if ( devstat.flag & DEVSTAT_NOT_OPER )
+ else if ( pdevstat->flag & DEVSTAT_NOT_OPER )
{
- printk( "Device %04X on Subchannel %04X "
+ printk( "SNID - Device %04X "
+ "on Subchannel %04X "
"became 'not operational'\n",
ioinfo[irq]->schib.pmcw.dev,
irq);
retry = 0;
- } /* endif */
}
- else // we got it ...
+ else
{
- retry = 0;
+ retry = 0; // success ...
} /* endif */
+ }
+ else if ( irq_ret != -ENODEV ) // -EIO, or -EBUSY
+ {
+#ifdef CONFIG_DEBUG_IO
+ if ( pdevstat->flag & DEVSTAT_STATUS_PENDING )
+ {
+ printk( "SNID - Device %04X "
+ "on Subchannel %04X "
+ "reports pending status, "
+ "retry : %d\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq,
+ retry);
+ } /* endif */
+#endif
+ printk( "SNID - device %04X,"
+ " start_io() reports rc : %d, retrying ...\n",
+ ioinfo[irq]->schib.pmcw.dev,
+ irq_ret);
retry--;
+ }
+ else // -ENODEV ...
+ {
+ retry = 0;
+
+ } /* endif */
} while ( retry > 0 );
* release it too.
*/
if ( inlreq )
- free_irq( irq, &devstat);
+ free_irq( irq, pdevstat);
+
+ } /* endif */
+
+ return( irq_ret );
+}
/*
- * if running under VM check there ... perhaps we should do
- * only if we suffered a command reject, but it doesn't harm
+ * s390_do_crw_pending
+ *
+ * Called by the machine check handler to process CRW pending
+ * conditions. It may be a single CRW, or CRWs may be chained.
+ *
+ * Note : we currently process CRWs for subchannel source only
*/
- if ( ( sid->cu_type == 0xFFFF )
- && ( MACHINE_IS_VM ) )
+void s390_do_crw_pending( crwe_t *pcrwe )
{
- VM_virtual_device_info( ioinfo[irq]->schib.pmcw.dev,
- sid );
- } /* endif */
+ int irq;
+ int dev_oper = 0;
+ int dev_no = -1;
+ int lock = 0;
- if ( sid->cu_type == 0xFFFF )
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : starting ...\n");
+#endif
+
+ while ( pcrwe != NULL )
{
+ switch ( pcrwe->crw.rsc ) {
+ case CRW_RSC_SCH :
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : source is "
+ "subchannel\n");
+#endif
+ irq = pcrwe->crw.rsid;
+
/*
- * SenseID CU-type of 0xffff indicates that no device
- * information could be retrieved (pre-init value).
- *
- * If we can't couldn't identify the device type we
- * consider the device "not operational".
+ * If the device isn't known yet
+ * we can't lock it ...
*/
- printk( "Unknown device %04X on subchannel %04X\n",
- ioinfo[irq]->schib.pmcw.dev,
- irq);
- ioinfo[irq]->ui.flags.oper = 0;
+ if ( ioinfo[irq] != INVALID_STORAGE_AREA )
+ {
+ s390irq_spin_lock( irq );
+ lock = 1;
+
+ dev_oper = ioinfo[irq]->ui.flags.oper;
+
+ if ( ioinfo[irq]->ui.flags.dval )
+ dev_no = ioinfo[irq]->devno;
} /* endif */
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : subchannel validation - start ...\n");
+#endif
+ s390_validate_subchannel( irq, 0 );
+
+ if ( irq > highest_subchannel )
+ highest_subchannel = irq;
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : subchannel validation - done\n");
+#endif
/*
- * Issue device info message if unit was operational .
+ * After the validate processing
+ * the ioinfo control block
+ * should be allocated ...
*/
- if ( ioinfo[irq]->ui.flags.oper )
+ if ( lock )
{
- if ( sid->dev_type != 0 )
+ s390irq_spin_unlock( irq );
+
+ } /* endif */
+
+#ifdef CONFIG_DEBUG_CRW
+ if ( ioinfo[irq] != INVALID_STORAGE_AREA )
{
- printk( "Device %04X reports: CU Type/Mod = %04X/%02X,"
- " Dev Type/Mod = %04X/%02X\n",
- ioinfo[irq]->schib.pmcw.dev,
- sid->cu_type,
- sid->cu_model,
- sid->dev_type,
- sid->dev_model);
+ printk( "do_crw_pending : ioinfo at %08X\n",
+ (unsigned)ioinfo[irq]);
+
+ } /* endif */
+#endif
+
+ if ( ioinfo[irq] != INVALID_STORAGE_AREA )
+ {
+ if ( ioinfo[irq]->ui.flags.oper == 0 )
+ {
+ /*
+ * If the device has gone
+ * call not oper handler
+ */
+ if ( ( dev_oper == 1 )
+ && ( ioinfo[irq]->nopfunc != NULL ) )
+ {
+ free_irq( irq,
+ ioinfo[irq]->irq_desc.action->dev_id );
+ ioinfo[irq]->nopfunc( irq,
+ DEVSTAT_DEVICE_GONE );
+
+ } /* endif */
}
else
{
- printk( "Device %04X reports:"
- " Dev Type/Mod = %04X/%02X\n",
- ioinfo[irq]->schib.pmcw.dev,
- sid->cu_type,
- sid->cu_model);
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : device "
+ "recognition - start ...\n");
+#endif
+ s390_device_recognition_irq( irq );
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : device "
+ "recognition - done\n");
+#endif
+
+ /*
+ * the device became operational
+ */
+ if ( dev_oper == 0 )
+ {
+ devreg_t *pdevreg;
+
+ pdevreg = s390_search_devreg( ioinfo[irq] );
+
+ if ( pdevreg != NULL )
+ {
+ if ( pdevreg->oper_func != NULL )
+ pdevreg->oper_func( irq, pdevreg );
} /* endif */
+ }
+ /*
+ * ... it is and was operational, but
+ * the devno may have changed
+ */
+ else if ( ioinfo[irq]->devno != dev_no )
+ {
+ ioinfo[irq]->nopfunc( irq,
+ DEVSTAT_REVALIDATE );
} /* endif */
- if ( ioinfo[irq]->ui.flags.oper )
- irq_ret = 0;
- else
- irq_ret = -ENODEV;
+ } /* endif */
} /* endif */
- return( irq_ret );
-}
+ break;
-void do_crw_pending(void)
-{
+ case CRW_RSC_MONITOR :
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : source is "
+ "monitoring facility\n");
+#endif
+ break;
+
+ case CRW_RSC_CPATH :
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : source is "
+ "channel path\n");
+#endif
+ break;
+
+ case CRW_RSC_CONFIG :
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : source is "
+ "configuration-alert facility\n");
+#endif
+ break;
+
+ case CRW_RSC_CSS :
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : source is "
+ "channel path\n");
+#endif
+ break;
+
+ default :
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : unknown source\n");
+#endif
+ break;
+
+ } /* endswitch */
+
+ pcrwe = pcrwe->crwe_next;
+
+ } /* endwhile */
+
+#ifdef CONFIG_DEBUG_CRW
+ printk( "do_crw_pending : done\n");
+#endif
+
+ return;
}
+
/* added by Holger Smolinski for reipl support in reipl.S */
void
reipl ( int sch )
--- /dev/null
+/*
+ * arch/s390/kernel/s390mach.c
+ * S/390 machine check handler,
+ * currently only channel-reports are supported
+ *
+ * S390 version
+ * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Ingo Adlung (adlung@de.ibm.com)
+ */
+
+#include <linux/init.h>
+#include <linux/malloc.h>
+#include <linux/smp.h>
+
+#include <asm/irq.h>
+#include <asm/lowcore.h>
+#include <asm/semaphore.h>
+#include <asm/spinlock.h>
+#include <asm/s390io.h>
+#include <asm/s390dyn.h>
+#include <asm/s390mach.h>
+
+#define S390_MACHCHK_DEBUG
+
+static int __init s390_machine_check_handler( void * parm );
+static void s390_enqueue_mchchk( mache_t *mchchk );
+static mache_t *s390_dequeue_mchchk( void );
+static void s390_enqueue_free_mchchk( mache_t *mchchk );
+static mache_t *s390_dequeue_free_mchchk( void );
+static int s390_collect_crw_info( void );
+
+static struct semaphore s_sem[2];
+
+static mache_t *mchchk_queue_head = NULL;
+static mache_t *mchchk_queue_tail = NULL;
+static mache_t *mchchk_queue_free = NULL;
+static crwe_t *crw_buffer_anchor = NULL;
+static spinlock_t mchchk_queue_lock = SPIN_LOCK_UNLOCKED;
+static spinlock_t crw_queue_lock = SPIN_LOCK_UNLOCKED;
+
+static inline void init_MUTEX (struct semaphore *sem)
+{
+ sema_init(sem, 1);
+}
+
+static inline void init_MUTEX_LOCKED (struct semaphore *sem)
+{
+ sema_init(sem, 0);
+}
+
+/*
+ * s390_init_machine_check
+ *
+ * initialize machine check handling
+ */
+void s390_init_machine_check( void )
+{
+ crwe_t *pcrwe; /* CRW buffer element pointer */
+ mache_t *pmache; /* machine check element pointer */
+
+ init_MUTEX_LOCKED( &s_sem[0] );
+ init_MUTEX_LOCKED( &s_sem[1] );
+
+ pcrwe = kmalloc( MAX_CRW_PENDING * sizeof( crwe_t), GFP_KERNEL);
+
+ if ( pcrwe )
+ {
+ int i;
+
+ crw_buffer_anchor = pcrwe;
+
+ for ( i=0; i < MAX_CRW_PENDING; i++)
+ {
+ pcrwe->crwe_next = (crwe_t *)((unsigned long)pcrwe + sizeof(crwe_t));
+ pcrwe = pcrwe->crwe_next;
+
+ } /* endfor */
+
+ pcrwe->crwe_next = NULL;
+
+ }
+ else
+ {
+ panic( "s390_init_machine_check : unable to obtain memory\n");
+
+ } /* endif */
+
+ pmache = kmalloc( MAX_MACH_PENDING * sizeof( mache_t), GFP_KERNEL);
+
+ if ( pmache )
+ {
+ int i;
+
+ for ( i=0; i < MAX_MACH_PENDING; i++)
+ {
+ s390_enqueue_free_mchchk( pmache );
+ pmache = (mache_t *)((unsigned long)pmache + sizeof(mache_t));
+
+ } /* endfor */
+ }
+ else
+ {
+ panic( "s390_init_machine_check : unable to obtain memory\n");
+
+ } /* endif */
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "init_mach : starting machine check handler\n");
+#endif
+
+ kernel_thread( s390_machine_check_handler, s_sem, 0);
+
+ /*
+ * wait for the machine check handler to be ready
+ */
+#ifdef S390_MACHCHK_DEBUG
+ printk( "init_mach : waiting for machine check handler coming up ... \n");
+#endif
+
+ down( &s_sem[0]);
+
+ smp_ctl_clear_bit( 14, 25 ); // disable damage MCH
+#if 1
+ smp_ctl_set_bit( 14, 28 ); // enable channel report MCH
+#endif
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "init_mach : machine check buffer : head = %08X\n",
+ (unsigned)&mchchk_queue_head);
+ printk( "init_mach : machine check buffer : tail = %08X\n",
+ (unsigned)&mchchk_queue_tail);
+ printk( "init_mach : machine check buffer : free = %08X\n",
+ (unsigned)&mchchk_queue_free);
+ printk( "init_mach : CRW entry buffer anchor = %08X\n",
+ (unsigned)&crw_buffer_anchor);
+ printk( "init_mach : machine check handler ready\n");
+#endif
+
+ return;
+}
+
+/*
+ * s390_do_machine_check
+ *
+ * machine check pre-processor, collecting the machine check info,
+ * queueing it and posting the machine check handler for processing.
+ */
+void __init s390_do_machine_check( void )
+{
+ int crw_count;
+ mcic_t mcic;
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "s390_do_machine_check : starting ...\n");
+#endif
+
+ memcpy( &mcic,
+ &S390_lowcore.mcck_interuption_code,
+ sizeof(__u64));
+
+ if ( mcic.mcc.mcd.cp ) // CRW pending ?
+ {
+ crw_count = s390_collect_crw_info();
+
+ if ( crw_count )
+ {
+ up( &s_sem[1] );
+
+ } /* endif */
+
+ } /* endif */
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "s390_do_machine_check : done \n");
+#endif
+
+ return;
+}
+
+/*
+ * s390_machine_check_handler
+ *
+ * machine check handler, dequeueing machine check entries
+ * and processing them
+ */
+static int __init s390_machine_check_handler( void *parm)
+{
+ struct semaphore *sem = parm;
+ int flags;
+ mache_t *pmache;
+
+ int found = 0;
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "mach_handler : up\n");
+#endif
+
+ up( &sem[0] );
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "mach_handler : ready\n");
+#endif
+
+ do {
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "mach_handler : waiting for wakeup\n");
+#endif
+
+ down_interruptible( &sem[1] );
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "\nmach_handler : wakeup ... \n");
+#endif
+
+ __save_flags( flags );
+ __cli();
+
+ pmache = s390_dequeue_mchchk();
+
+ if ( pmache )
+ {
+ found = 1;
+
+ if ( pmache->mcic.mcc.mcd.cp )
+ {
+ crwe_t *pcrwe_n;
+ crwe_t *pcrwe_h;
+
+ s390_do_crw_pending( pmache->mc.crwe );
+
+ pcrwe_h = pmache->mc.crwe;
+ pcrwe_n = pmache->mc.crwe->crwe_next;
+
+ pmache->mcic.mcc.mcd.cp = 0;
+ pmache->mc.crwe = NULL;
+
+ spin_lock( &crw_queue_lock);
+
+ while ( pcrwe_h )
+ {
+ pcrwe_h->crwe_next = crw_buffer_anchor;
+ crw_buffer_anchor = pcrwe_h;
+ pcrwe_h = pcrwe_n;
+
+ if ( pcrwe_h != NULL )
+ pcrwe_n = pcrwe_h->crwe_next;
+
+ } /* endwhile */
+
+ spin_unlock( &crw_queue_lock);
+
+ } /* endif */
+
+ s390_enqueue_free_mchchk( pmache );
+ }
+ else
+ {
+ found = 0;
+
+ // unconditional surrender ...
+#ifdef S390_MACHCHK_DEBUG
+ printk( "mach_handler : terminated \n");
+#endif
+
+ } /* endif */
+
+ __restore_flags( flags );
+
+ } while ( found );
+
+ return( 0);
+}
+
+/*
+ * s390_dequeue_mchchk
+ *
+ * Dequeue an entry from the machine check queue
+ *
+ * Note : The queue elements provide for a double linked list.
+ * We dequeue entries from the tail, and enqueue entries to
+ * the head.
+ *
+ */
+static mache_t *s390_dequeue_mchchk( void )
+{
+ mache_t *qe;
+
+ spin_lock( &mchchk_queue_lock );
+
+ qe = mchchk_queue_tail;
+
+ if ( qe != NULL )
+ {
+ mchchk_queue_tail = qe->prev;
+
+ if ( mchchk_queue_tail != NULL )
+ {
+ mchchk_queue_tail->next = NULL;
+ }
+ else
+ {
+ mchchk_queue_head = NULL;
+
+ } /* endif */
+
+ } /* endif */
+
+ spin_unlock( &mchchk_queue_lock );
+
+ return qe;
+}
+
+/*
+ * s390_enqueue_mchchk
+ *
+ * Enqueue an entry to the machine check queue.
+ *
+ * Note : The queue elements provide for a double linked list.
+ * We enqueue entries to the head, and dequeue entries from
+ * the tail.
+ *
+ */
+static void s390_enqueue_mchchk( mache_t *pmache )
+{
+ spin_lock( &mchchk_queue_lock );
+
+ if ( pmache != NULL )
+ {
+
+ if ( mchchk_queue_head == NULL ) /* first element */
+ {
+ pmache->next = NULL;
+ pmache->prev = NULL;
+
+ mchchk_queue_head = pmache;
+ mchchk_queue_tail = pmache;
+ }
+ else /* new head */
+ {
+ pmache->prev = NULL;
+ pmache->next = mchchk_queue_head;
+
+ mchchk_queue_head->prev = pmache;
+ mchchk_queue_head = pmache;
+
+ } /* endif */
+
+ } /* endif */
+
+ spin_unlock( &mchchk_queue_lock );
+
+ return;
+}
+
+
+/*
+ * s390_enqueue_free_mchchk
+ *
+ * Enqueue a free entry to the free queue.
+ *
+ * Note : While the queue elements provide for a double linked list,
+ * the free queue entries are only concatenated by means of a
+ * single linked list (forward concatenation).
+ *
+ */
+static void s390_enqueue_free_mchchk( mache_t *pmache )
+{
+ if ( pmache != NULL)
+ {
+ memset( pmache, '\0', sizeof( mache_t ));
+
+ spin_lock( &mchchk_queue_lock );
+
+ pmache->next = mchchk_queue_free;
+
+ mchchk_queue_free = pmache;
+
+ spin_unlock( &mchchk_queue_lock );
+
+ } /* endif */
+
+ return;
+}
+
+/*
+ * s390_dequeue_free_mchchk
+ *
+ * Dequeue an entry from the free queue.
+ *
+ * Note : While the queue elements provide for a double linked list,
+ * the free queue entries are only concatenated by means of a
+ * single linked list (forward concatenation).
+ *
+ */
+static mache_t *s390_dequeue_free_mchchk( void )
+{
+ mache_t *qe;
+
+ spin_lock( &mchchk_queue_lock );
+
+ qe = mchchk_queue_free;
+
+ if ( qe != NULL )
+ {
+ mchchk_queue_free = qe->next;
+
+ } /* endif */
+
+ spin_unlock( &mchchk_queue_lock );
+
+ return qe;
+}
+
+/*
+ * s390_collect_crw_info
+ *
+ * Retrieve CRWs. If a CRW was found a machine check element
+ * is dequeued from the free chain, filled and enqueued to
+ * be processed.
+ *
+ * The function returns the number of CRWs found.
+ *
+ * Note : We must always be called disabled ...
+ */
+static int s390_collect_crw_info( void )
+{
+ crw_t tcrw; /* temporarily holds a CRW */
+ int ccode; /* condition code from stcrw() */
+ crwe_t *pcrwe; /* pointer to CRW buffer entry */
+
+ mache_t *pmache = NULL; /* ptr to mchchk entry */
+ int chain = 0; /* indicate chaining */
+ crwe_t *pccrw = NULL; /* ptr to current CRW buffer entry */
+ int count = 0; /* CRW count */
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "crw_info : looking for CRWs ...\n");
+#endif
+
+ do
+ {
+ ccode = stcrw( (__u32 *)&tcrw);
+
+ if ( ccode == 0 )
+ {
+ count++;
+
+#ifdef S390_MACHCHK_DEBUG
+ printk( "crw_info : CRW reports "
+ "slct=%d, oflw=%d, chn=%d, "
+ "rsc=%X, anc=%d, erc=%X, "
+ "rsid=%X\n",
+ tcrw.slct,
+ tcrw.oflw,
+ tcrw.chn,
+ tcrw.rsc,
+ tcrw.anc,
+ tcrw.erc,
+ tcrw.rsid );
+#endif
+
+ /*
+ * Dequeue a CRW entry from the free chain
+ * and process it ...
+ */
+ spin_lock( &crw_queue_lock );
+
+ pcrwe = crw_buffer_anchor;
+
+ if ( pcrwe == NULL )
+ {
+ spin_unlock( &crw_queue_lock );
+
+ printk( KERN_CRIT"crw_info : "
+ "no CRW buffer entries available\n");
+ break;
+
+ } /* endif */
+
+ crw_buffer_anchor = pcrwe->crwe_next;
+ pcrwe->crwe_next = NULL;
+
+ spin_unlock( &crw_queue_lock );
+
+ memcpy( &(pcrwe->crw), &tcrw, sizeof(crw_t));
+
+ /*
+ * If it is the first CRW, chain it to the mchchk
+ * buffer entry, otherwise to the last CRW entry.
+ */
+ if ( chain == 0 )
+ {
+ pmache = s390_dequeue_free_mchchk();
+
+ if ( pmache != NULL )
+ {
+ memset( pmache, '\0', sizeof(mache_t));
+
+ pmache->mcic.mcc.mcd.cp = 1;
+ pmache->mc.crwe = pcrwe;
+ pccrw = pcrwe;
+
+ }
+ else
+ {
+ panic( "crw_info : "
+ "unable to dequeue "
+ "free mchchk buffer");
+
+ } /* endif */
+ }
+ else
+ {
+ pccrw->crwe_next = pcrwe;
+ pccrw = pcrwe;
+
+ } /* endif */
+
+ if ( pccrw->crw.chn )
+ {
+#ifdef S390_MACHCHK_DEBUG
+ printk( "crw_info : "
+ "chained CRWs pending ...\n\n");
+#endif
+ chain = 1;
+ }
+ else
+ {
+ chain = 0;
+
+ /*
+ * We can enqueue the mchchk buffer if
+ * there aren't more CRWs chained.
+ */
+ s390_enqueue_mchchk( pmache);
+
+ } /* endif */
+
+ } /* endif */
+
+ } while ( ccode == 0 );
+
+ return( count );
+}
+
current->used_math = 0;
}
+/*
+ * VM halt and poweroff setup routines
+ */
+char vmhalt_cmd[128] = "";
+char vmpoff_cmd[128] = "";
+
+static inline void strncpy_skip_quote(char *dst, char *src, int n)
+{
+ int sx, dx;
+
+ dx = 0;
+ for (sx = 0; src[sx] != 0; sx++) {
+ if (src[sx] == '"') continue;
+ dst[dx++] = src[sx];
+ if (dx >= n) break;
+ }
+}
+
+__initfunc(void vmhalt_setup(char *str, char *ints))
+{
+ strncpy_skip_quote(vmhalt_cmd, str, 127);
+ vmhalt_cmd[127] = 0;
+ return;
+}
+
+__initfunc(void vmpoff_setup(char *str, char *ints))
+{
+ strncpy_skip_quote(vmpoff_cmd, str, 127);
+ vmpoff_cmd[127] = 0;
+ return;
+}
+
/*
* Reboot, halt and power_off routines for non SMP.
*/
+
#ifndef __SMP__
void machine_restart(char * __unused)
{
reipl(S390_lowcore.ipl_device);
-#if 0
- if (MACHINE_IS_VM) {
- cpcmd("IPL", NULL, 0);
- } else {
- /* FIXME: how to reipl ? */
- disabled_wait(2);
- }
-#endif
}
void machine_halt(void)
{
- if (MACHINE_IS_VM) {
- cpcmd("IPL 200 STOP", NULL, 0);
- } else {
+ if (MACHINE_IS_VM && strlen(vmhalt_cmd) > 0)
+ cpcmd(vmhalt_cmd, NULL, 0);
disabled_wait(0);
}
-}
void machine_power_off(void)
{
- if (MACHINE_IS_VM) {
- cpcmd("IPL CMS", NULL, 0);
- } else {
+ if (MACHINE_IS_VM && strlen(vmpoff_cmd) > 0)
+ cpcmd(vmpoff_cmd, NULL, 0);
disabled_wait(0);
}
-}
#endif
/*
* memory size
*/
if (c == ' ' && strncmp(from, "mem=", 4) == 0) {
- if (to != command_line) to--;
memory_end = simple_strtoul(from+4, &from, 0);
if ( *from == 'K' || *from == 'k' ) {
memory_end = memory_end << 10;
from++;
}
}
+ /*
+ * "ipldelay=XXX[sSmM]" waits for the specified time
+ */
if (c == ' ' && strncmp(from, "ipldelay=", 9) == 0) {
unsigned long delay;
- if (to != command_line) to--;
delay = simple_strtoul(from+9, &from, 0);
if (*from == 's' || *from == 'S') {
delay = delay*1000000;
/*
* Atomically swap in the new signal mask, and wait for a signal.
*/
-asmlinkage int sys_sigsuspend(struct pt_regs * regs,int history0, int history1, old_sigset_t mask)
+asmlinkage int
+sys_sigsuspend(struct pt_regs * regs,int history0, int history1, old_sigset_t mask)
{
sigset_t saveset;
spin_unlock_irq(&current->sigmask_lock);
regs->gprs[2] = -EINTR;
- while (1)
- {
+ while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
if (do_signal(regs, &saveset))
}
}
-asmlinkage int sys_rt_sigsuspend(struct pt_regs * regs,sigset_t *unewset, size_t sigsetsize)
+asmlinkage int
+sys_rt_sigsuspend(struct pt_regs * regs,sigset_t *unewset, size_t sigsetsize)
{
sigset_t saveset, newset;
static void setup_frame(int sig, struct k_sigaction *ka,
sigset_t *set, struct pt_regs * regs)
{
+ sigframe *frame;
- if(!setup_frame_common(sig,ka,set,regs,sizeof(sigframe),
- (S390_SYSCALL_OPCODE|__NR_sigreturn)))
+ if((frame=setup_frame_common(sig,ka,set,regs,sizeof(sigframe),
+ (S390_SYSCALL_OPCODE|__NR_sigreturn)))==0)
goto give_sigsegv;
#if DEBUG_SIG
printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
current->comm, current->pid, frame, regs->eip, frame->pretcode);
#endif
+ /* Martin wants this for pthreads */
+ regs->gprs[3] = (addr_t)&frame->sc;
return;
give_sigsegv:
err |= __put_user(sas_ss_flags(orig_sp),
&frame->uc.uc_stack.ss_flags);
err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
- regs->gprs[3] = (u32)&frame->info;
- regs->gprs[4] = (u32)&frame->uc;
+ regs->gprs[3] = (addr_t)&frame->info;
+ regs->gprs[4] = (addr_t)&frame->uc;
if (err)
goto give_sigsegv;
#include <linux/mm.h>
#include <asm/pgtable.h>
#include <asm/string.h>
+#include <asm/s390_ext.h>
#include "cpcmd.h"
#include <asm/irq.h>
/*
* Reboot, halt and power_off routines for SMP.
*/
+extern char vmhalt_cmd[];
+extern char vmpoff_cmd[];
+extern void reipl(int ipl_device);
+
void do_machine_restart(void)
{
smp_send_stop();
reipl(S390_lowcore.ipl_device);
-#if 0
- if (MACHINE_IS_VM) {
- cpcmd("IPL", NULL, 0);
- } else {
- /* FIXME: how to reipl ? */
- disabled_wait(2);
- }
-#endif
}
void machine_restart(char * __unused)
void do_machine_halt(void)
{
smp_send_stop();
- if (MACHINE_IS_VM) {
- cpcmd("IPL CMS", NULL, 0);
- } else {
+ if (MACHINE_IS_VM && strlen(vmhalt_cmd) > 0)
+ cpcmd(vmhalt_cmd, NULL, 0);
disabled_wait(0);
}
-}
void machine_halt(void)
{
void do_machine_power_off(void)
{
smp_send_stop();
- if (MACHINE_IS_VM) {
- cpcmd("IPL CMS", NULL, 0);
- } else {
+ if (MACHINE_IS_VM && strlen(vmpoff_cmd) > 0)
+ cpcmd(vmpoff_cmd, NULL, 0);
disabled_wait(0);
}
-}
void machine_power_off(void)
{
* cpus are handled.
*/
-void do_ext_call_interrupt(__u16 source_cpu_addr)
+void do_ext_call_interrupt(struct pt_regs *regs, __u16 source_cpu_addr)
{
ec_ext_call *ec, *next;
int bits;
return; /* no command signals */
/* Make a fifo out of the lifo */
- next = ec;
+ next = ec->next;
ec->next = NULL;
while (next != NULL) {
ec_ext_call *tmp = next->next;
atomic_set(&ec->status,ec_done);
return;
}
+ case ec_ptlb:
+ atomic_set(&ec->status, ec_executing);
+ __flush_tlb();
+ atomic_set(&ec->status, ec_done);
+ return;
default:
}
ec = ec->next;
* Activate a secondary processor.
*/
extern void init_100hz_timer(void);
+extern void cpu_init (void);
int __init start_secondary(void *cpuvoid)
{
int curr_cpu;
int i;
+ /* request the 0x1202 external interrupt */
+ if (register_external_interrupt(0x1202, do_ext_call_interrupt) != 0)
+ panic("Couldn't request external interrupt 0x1202");
smp_count_cpus();
memset(lowcore_ptr,0,sizeof(lowcore_ptr));
* This is really horribly ugly.
*/
asmlinkage int sys_ipc (uint call, int first, int second,
- int third, void *ptr, long fifth)
+ int third, void *ptr)
{
+ struct ipc_kludge tmp;
int ret;
switch (call) {
second, third);
break;
case MSGRCV:
- return sys_msgrcv (first,
- (struct msgbuf *) ptr,
- second, fifth, third);
+ if (!ptr)
+ return -EINVAL;
+ if (copy_from_user (&tmp, (struct ipc_kludge *) ptr,
+ sizeof (struct ipc_kludge)))
+ return -EFAULT;
+ return sys_msgrcv (first, tmp.msgp,
+ second, tmp.msgtyp, third);
case MSGGET:
return sys_msgget ((key_t) first, second);
case MSGCTL:
#include <asm/uaccess.h>
#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/s390_ext.h>
#include <linux/mc146818rtc.h>
#include <linux/timex.h>
extern __u16 boot_cpu_addr;
#endif
-void do_timer_interrupt(struct pt_regs *regs,int error_code)
+void do_timer_interrupt(struct pt_regs *regs, __u16 error_code)
{
unsigned long flags;
printk("time_init: TOD clock stopped/non-operational\n");
break;
}
+ /* request the 0x1004 external interrupt */
+ if (register_external_interrupt(0x1004, do_timer_interrupt) != 0)
+ panic("Couldn't request external interrupts 0x1004");
init_100hz_timer();
init_timer_cc = S390_lowcore.jiffy_timer_cc;
init_timer_cc -= 0x8126d60e46000000LL -
return(FALSE);
}
+asmlinkage void default_trap_handler(struct pt_regs * regs, long error_code)
+{
+ if (check_for_fixup(regs) == 0) {
+ current->tss.error_code = error_code;
+ current->tss.trap_no = error_code;
+ force_sig(SIGSEGV, current);
+ die("Unknown program exception",regs,error_code);
+ }
+}
DO_ERROR(2, SIGILL, "privileged operation", privileged_op, current)
DO_ERROR(3, SIGILL, "execute exception", execute_exception, current)
LR 1,2 # don't touch address in R2
LTR 4,4
JZ strncpy_exit # 0 bytes -> nothing to do
- AHI 4,-1
SR 0,0
- BASR 5,0
strncpy_loop:
ICM 0,1,0(3) # ICM sets the cc, IC does not
LA 3,1(0,3)
STC 0,0(0,1)
LA 1,1(0,1)
JZ strncpy_exit # ICM inserted a 0x00
- BCTR 4,5 # R4 -= 1, jump to strncpy_loop if >= 0
+ BRCT 4,strncpy_loop # R4 -= 1, jump to strncpy_loop if > 0
strncpy_exit:
BR 14
address = S390_lowcore.trans_exc_code&0x7ffff000;
- if (atomic_read(&S390_lowcore.local_irq_count))
- die("page fault from irq handler",regs,error_code);
-
tsk = current;
mm = tsk->mm;
+ if (atomic_read(&S390_lowcore.local_irq_count))
+ die("page fault from irq handler",regs,error_code);
+
down(&mm->mmap_sem);
vma = find_vma(mm, address);
printk("code should be 4, 10 or 11 (%lX) \n",error_code&0xFF);
goto bad_area;
}
- handle_mm_fault(tsk, vma, address, write);
+
+ /*
+ * If for any reason at all we couldn't handle the fault,
+ * make sure we exit gracefully rather than endlessly redo
+ * the fault.
+ */
+survive:
+ {
+ int fault = handle_mm_fault(tsk, vma, address, write);
+ if (!fault)
+ goto do_sigbus;
+ if (fault < 0)
+ goto out_of_memory;
+ }
up(&mm->mmap_sem);
return;
return;
}
+ no_context:
/* Are we prepared to handle this kernel fault? */
-
if ((fixup = search_exception_table(regs->psw.addr)) != 0) {
regs->psw.addr = fixup;
return;
* need to define, which information is useful here
*/
- lock_kernel();
die("Oops", regs, error_code);
do_exit(SIGKILL);
- unlock_kernel();
-}
+
/*
+ * We ran out of memory, or some other thing happened to us that made
+ * us unable to handle the page fault gracefully.
+ */
+out_of_memory:
+ if (tsk->pid == 1)
{
- char c;
- int i,j;
- char *addr;
- addr = ((char*) psw_addr)-0x20;
- for (i=0;i<16;i++) {
- if (i == 2)
- printk("\n");
- printk ("%08X: ",(unsigned long) addr);
- for (j=0;j<4;j++) {
- printk("%08X ",*(unsigned long*)addr);
- addr += 4;
- }
- addr -=0x10;
- printk(" | ");
- for (j=0;j<16;j++) {
- printk("%c",(c=*addr++) < 0x20 ? '.' : c );
+ tsk->policy |= SCHED_YIELD;
+ schedule();
+ goto survive;
}
-
- printk("\n");
- }
- printk("\n");
+ up(&mm->mmap_sem);
+ if (psw_mask & PSW_PROBLEM_STATE)
+ {
+ printk("VM: killing process %s\n", tsk->comm);
+ do_exit(SIGKILL);
}
-
-*/
-
-
-
-
-
+ goto no_context;
+
+do_sigbus:
+ up(&mm->mmap_sem);
+
+ /*
+ * Send a sigbus, regardless of whether we were in kernel
+ * or user mode.
+ */
+ tsk->tss.prot_addr = address;
+ tsk->tss.error_code = error_code;
+ tsk->tss.trap_no = 14;
+ force_sig(SIGBUS, tsk);
+
+ /* Kernel mode? Handle exceptions or die */
+ if (!(psw_mask & PSW_PROBLEM_STATE))
+ goto no_context;
+}
tmp = start_mem;
while (tmp < end_mem) {
- if ((tmp & 0x3ff000) == 0 && test_access(tmp) == 0) {
+ if (tmp && (tmp & 0x3ff000) == 0 &&
+ test_access(tmp) == 0) {
int i;
printk("4M Segment %lX not available\n",tmp);
for (i = 0;i<0x400;i++) {
atomic_set(&mem_map[MAP_NR(addr)].count, 1);
free_page(addr);
}
- printk ("Freeing unused kernel memory: %ldk freed\n", (&__init_end - &__init_begin) >> 10);
+ printk ("Freeing unused kernel memory: %dk freed\n", (&__init_end - &__init_begin) >> 10);
}
void si_meminfo(struct sysinfo *val)
--- /dev/null
+all: dasdfmt
+
+dasdfmt: dasdfmt.c
+ $(CROSS_COMPILE)gcc -o $@ $^
+ $(STRIP) $@
+
+clean:
+ rm -f dasdfmt
+
#include <string.h>
#include <dirent.h>
#include <mntent.h>
-#include "../../../drivers/s390/block/dasd.h" /* uses DASD_PARTN_BITS */
#define __KERNEL__ /* we want to use kdev_t and not have to define it */
#include <linux/kdev_t.h>
#undef __KERNEL__
+#include <linux/fs.h>
+#include <linux/dasd.h>
+
#define EXIT_MISUSE 1
#define EXIT_BUSY 2
#define TEMPFILENAME "/tmp/ddfXXXXXX"
#define TEMPFILENAMECHARS 8 /* 8 characters are fixed in all temp filenames */
-#define IOCTL_COMMAND 'D' << 8
#define SLASHDEV "/dev/"
#define PROC_DASD_DEVICES "/proc/dasd/devices"
+#define PROC_MOUNTS "/proc/mounts" /* _PATH_MOUNTED is /etc/mtab - maybe bad */
+#define PROC_SWAPS "/proc/swaps"
#define DASD_DRIVER_NAME "dasd"
#define PROC_LINE_LENGTH 80
#define ERR_LENGTH 80
ERRMSG_EXIT(EXIT_MISUSE,"%s: " str " " \
"is in invalid format\n",prog_name);}
-typedef struct {
- int start_unit;
- int stop_unit;
- int blksize;
-} format_data_t;
-
-char prog_name[]="dasd_format";
+char *prog_name;/*="dasdfmt";*/
char tempfilename[]=TEMPFILENAME;
void
exit_usage(int exitcode)
{
- printf("Usage: %s [-htvyV] [-b blocksize] <range> <diskspec>\n\n",
+ printf("Usage: %s [-htvyV] [-b <blocksize>] [<range>] <diskspec>\n\n",
prog_name);
- printf(" where <range> is either\n");
- printf(" -s start_track -e end_track\n");
+ printf(" -t means testmode\n");
+ printf(" -v means verbose mode\n");
+ printf(" -V means print version\n");
+ printf(" <blocksize> has to be a power of 2 and at least 512\n");
+ printf(" <range> is either\n");
+ printf(" -s <start_track> -e <end_track>\n");
printf(" or\n");
- printf(" -r start_track-end_track\n");
+ printf(" -r <start_track>-<end_track>\n");
printf(" and <diskspec> is either\n");
- printf(" -f /dev/ddX\n");
+ printf(" -f /dev/dasdX\n");
printf(" or\n");
printf(" -n <s390-devnr>\n");
exit(exitcode);
/*
* first, check filesystems
*/
- if (!(f = fopen(_PATH_MOUNTED, "r")))
- ERRMSG_EXIT(EXIT_FAILURE, "%s: %s\n", _PATH_MOUNTED,
+ if (!(f = fopen(PROC_MOUNTS, "r")))
+ ERRMSG_EXIT(EXIT_FAILURE, "%s: %s\n", PROC_MOUNTS,
strerror(errno));
while ((ment = getmntent(f))) {
if (stat(ment->mnt_fsname, &stbuf) == 0)
/*
* second, check active swap spaces
*/
- if (!(f = fopen("/proc/swaps", "r")))
- ERRMSG_EXIT(EXIT_FAILURE, "/proc/swaps: %s", strerror(errno));
+ if (!(f = fopen(PROC_SWAPS, "r")))
+ ERRMSG_EXIT(EXIT_FAILURE, PROC_SWAPS ": %s", strerror(errno));
/*
* skip header line
*/
if (!withoutprompt) {
printf("\n--->> ATTENTION! <<---\n");
printf("All data in the specified range of that " \
- "device will be lost.\nType yes to continue" \
- ", no will leave the disk untouched: ");
+ "device will be lost.\nType \"yes\" to " \
+ "continue, no will leave the disk untouched: ");
fgets(inp_buffer,sizeof(inp_buffer),stdin);
if (strcasecmp(inp_buffer,"yes") &&
strcasecmp(inp_buffer,"yes\n")) {
if ( !( (withoutprompt)&&(verbosity<1) ))
printf("Formatting the device. This may take a " \
"while (get yourself a coffee).\n");
- rc=ioctl(fd,IOCTL_COMMAND,format_params);
+ rc=ioctl(fd,BIODASDFORMAT,format_params);
if (rc)
ERRMSG_EXIT(EXIT_FAILURE,"%s: the dasd driver " \
"returned with the following error " \
"message:\n%s\n",prog_name,strerror(errno));
printf("Finished formatting the device.\n");
+ printf("Rereading the partition table... ");
+ rc=ioctl(fd,BLKRRPART,NULL);
+ if (rc) {
+ ERRMSG("%s: error re-reading the partition " \
+ "table: %s.\n",prog_name,strerror(errno));
+ } else printf("done.\n");
+
break;
}
int devfile_specified,devno_specified,range_specified;
/******************* initialization ********************/
+ prog_name=argv[0];
endptr=NULL;
str=check_param(CHECK_ALL,format_params);
if (str!=NULL) ERRMSG_EXIT(EXIT_MISUSE,"%s: %s\n",prog_name,str);
- /*************** issue the real command *****************/
+ /******* issue the real command and reread part table *******/
do_format_dasd(dev_name,format_params,testmode,verbosity,
withoutprompt);
--- /dev/null
+CROSS_COMPILE = s390-
+
+all: hwc_cntl_key
+
+hwc_cntl_key: hwc_cntl_key.c
+ $(CROSS_COMPILE)gcc -o $@ $^
+ $(STRIP) $@
+
+clean:
+ rm -f hwc_cntl_key
+
--- /dev/null
+/*
+ * small application to set the string that will be used as CNTL-C
+ * employing a HWC terminal ioctl command
+ *
+ * returns: number of written or read characters
+ *
+ * Copyright (C) 2000 IBM Corporation
+ * Author(s): Martin Peschke <peschke@fh-brandenburg.de>
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <sys/ioctl.h>	/* for ioctl(2), used below */
+
+/* everything about the HWC terminal driver ioctl-commands */
+#include "../../../../drivers/s390/char/sclp_tty.h"
+
+/* standard input, should be our HWC tty */
+#define DESCRIPTOR 0
+
+int main(int argc, char *argv[], char *env[])
+{
+ unsigned char buf[HWC_TTY_MAX_CNTL_SIZE];
+
+ if (argc >= 2) {
+ if (strcmp(argv[1], "c") == 0 ||
+ strcmp(argv[1], "C") == 0 ||
+ strcmp(argv[1], "INTR_CHAR") == 0) {
+ if (argc == 2) {
+ ioctl(DESCRIPTOR, TIOCHWCTTYGINTRC, buf);
+ printf("%s\n", buf);
+ return strlen(buf);
+ } else return ioctl(DESCRIPTOR, TIOCHWCTTYSINTRC, argv[2]);
+// currently not yet implemented in HWC terminal driver
+#if 0
+ } else if (strcmp(argv[1], "d") == 0 ||
+ strcmp(argv[1], "D") == 0 ||
+ strcmp(argv[1], "EOF_CHAR") == 0) {
+ if (argc == 2) {
+ ioctl(DESCRIPTOR, TIOCHWCTTYGEOFC, buf);
+ printf("%s\n", buf);
+ return strlen(buf);
+ } else return ioctl(DESCRIPTOR, TIOCHWCTTYSEOFC, argv[2]);
+ } else if (strcmp(argv[1], "z") == 0 ||
+ strcmp(argv[1], "Z") == 0 ||
+ strcmp(argv[1], "SUSP_CHAR") == 0) {
+ if (argc == 2) {
+ ioctl(DESCRIPTOR, TIOCHWCTTYGSUSPC, buf);
+ printf("%s\n", buf);
+ return strlen(buf);
+ } else return ioctl(DESCRIPTOR, TIOCHWCTTYSSUSPC, argv[2]);
+ } else if (strcmp(argv[1], "n") == 0 ||
+ strcmp(argv[1], "N") == 0 ||
+ strcmp(argv[1], "NEW_LINE") == 0) {
+ if (argc == 2) {
+ ioctl(DESCRIPTOR, TIOCHWCTTYGNL, buf);
+ printf("%s\n", buf);
+ return strlen(buf);
+ } else return ioctl(DESCRIPTOR, TIOCHWCTTYSNL, argv[2]);
+#endif
+ }
+ }
+
+ printf("usage: hwc_cntl_key <control-key> [<new string>]\n");
+ printf(" <control-key> ::= \"c\" | \"C\" | \"INTR_CHAR\" |\n");
+ printf(" \"d\" | \"D\" | \"EOF_CHAR\" |\n");
+ printf(" \"z\" | \"Z\" | \"SUSP_CHAR\" |\n");
+ printf(" \"n\" | \"N\" | \"NEW_LINE\"\n");
+ return -1;
+}
all: silo
+silo.o: silo.c
+ $(CROSS_COMPILE)gcc -c -o silo.o -O2 silo.c
+
+cfg.o: cfg.c
+ $(CROSS_COMPILE)gcc -c -o cfg.o -O2 cfg.c
+
silo: silo.o cfg.o
$(CROSS_COMPILE)gcc -o $@ $^
+ $(STRIP) $@
clean:
rm -f *.o silo
--- /dev/null
+.TH SILO 8 "Thu Feb 17 2000"\r
+.UC 4\r
+.SH NAME\r
+silo \- preparing a DASD to become an IPL device\r
+.SH SYNOPSIS\r
+\fBsilo\fR -d \fIipldevice\fR [-hV?] [-t[\fI#\fR]] [-v[\fI#\fR]]\r
+ [-F \fIconfig-file\fR] [-b \fIbootsector\fR]\r
+ [-f \fIimage\fR] [-p \fIparameterfile\fR] [-B \fIbootmap\fR]
+.SH DESCRIPTION\r
+\fBsilo\fR makes a DASD an IPLable volume. All files needed for IPL must \r
+reside on that volume, namely the \fIimage\fR, the \fIparameter file\fR and\r
+the bootmap.
+Only one IPLable image per volume is supported. Currently we require an ECKD\r
+type DASD with a blocksize of at least 2048 bytes to IPL. By default silo\r
+does \fBnot\fR modify anything on your disk, but prints out its actions.\r
+\r
+\fBWARNING\fR: Incautious usage of \fBsilo\fR can leave your system in a \r
+state that is not IPLable!\r
+\r
+There are some defaults for the most common parameters compiled into the\r
+binary. You can override these defaults using /etc/silo.conf\r
+or another config file specified by \fB-F\fR \fIconfig-file\fR. All values\r
+set by defaults or the config file can be overridden using the command line\r
+options of silo.
+\r
+The config file recognizes the following statements:
+.TP\r
+\fBipldevice\fR=\fIdevicenode\fR\r
+sets the ipldevice to devicenode. The device node specified must be the node\r
+of the 'full' device and not that of a partition.
+\r
+.TP\r
+\fBappend\fR=\fIlist of parameters\fR\r
+sets additional parameters to be added to the parmfile. These parameters are\r
+added to any parmfile specified on the command line. The old parameter file\r
+is preserved and a new one is created with a temporary name.\r
+\r
+.TP\r
+\fBimage\fR=\fIimage\fR\r
+sets the name of the image to be IPLed from that volume. The default name\r
+is \fI./image\fR.
+\r
+.TP\r
+\fBbootsect\fR=\fIbootsect\fR\r
+sets the name of the bootsector to be used as IPL record for that volume.\r
+The default name is \fI/boot/ipleckd.boot\fR.
+\r
+.TP\r
+\fBmap\fR=\fIbootmap\fR\r
+sets the name of the bootmap to hold the map information needed during IPL.\r
+The default name is \fI./boot.map\fR. In testonly mode this name is replaced\r
+by a temporary name.
+.TP\r
+\fBparmfile\fR=\fIparameter file\fR\r
+sets the name of the parameter file holding the kernel parameters to be used\r
+during setup of the kernel. The default name is \fI./parmfile\fR.
+\r
+.TP\r
+\fBramdisk\fR=\fIramdisk image\fR\r
+optionally sets the name of a ramdisk image to be used as an initial ramdisk.\r
+\r
+.TP\r
+\fBroot\fR=\fIdevice node\fR\r
+sets the device holding the root device of the IPLed system.\r
+\r
+.TP\r
+\fBreadonly\fR
+sets the flag to mount thedevice holding the root device of the IPLed system.\r
+in readonly mode, before the final mount is done by /etc/fstab.
+\r
+.TP\r
+\fBverbose\fR=\fIlevel\fR\r
+sets the level of verbosity to \fIlevel\fR.\r
+\r
+.TP\r
+\fBtestlevel\fR=\fIlevel\fR\r
+decreases the testing level (from 2) by \fIlevel\fR.\r
+\r
+.SH OPTIONS\r
+.TP\r
+\fB-t\fR [\fI#\fR]\r
+decreases the testing level by one, or by \fI#\fR. By default the testing\r
+level is set to 2, which means that no modifications are made to the disk.\r
+A testing level of 1 means, that a bootmap is generated with a temporary\r
+filename, but the IPL records of the disk are not modified. Only with a\r
+testing level of 0 or below is the disk really made IPLable.
+\r
+.TP\r
+\fB-v\fR [\fI#\fR]\r
+Increases verbosity, or sets verbosity to \fI#\fR, respectively.\r
+\r
+.TP\r
+\fB-V\fR \r
+Print version number and exit.\r
+\r
+.SH FILES\r
+.TP\r
+\fI/etc/silo.conf\fR the default configuration file.
+\fI/boot/ipleckd.boot\fR the default bootsector for ECKD devices.
+\fI/boot/iplfba.boot\fR the bootsector for FBA devices.
+\fI./boot.map\fR the default name of the bootmap.
+\fI./image\fR the default name of the kernel image.
+\fI./parmfile\fR the default name of the parameter file.
+\fI/tmp/silodev\fR a device node which is created temporarily.
+\r
+.SH BUGS\r
+.TP\r
+IPL from FBA disks is not yet supported.
+.TP\r
+When \fBsilo\fR aborts, it does not clean up its temporary files.
+.TP\r
+\fBsilo\fR must be run in a directory residing on the device you want to IPL.
+\r
+.SH AUTHOR\r
+.nf\r
+This man-page was written by Holger Smolinski <Holger.Smolinski@de.ibm.com>
+.fi\r
*
* S390 version
* Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
- * Author(s): Holger Smolinski <linux390@de.ibm.com>
*
+ * Report bugs to: <linux390@de.ibm.com>
+ *
+ * Author(s): Holger Smolinski <Holger.Smolinski@de.ibm.com>
* Fritz Elfert <felfert@to.com> contributed support for
* /etc/silo.conf based on Intel's lilo
*/
{ cft_strg, "root", NULL, NULL,NULL },
{ cft_flag, "readonly", NULL, NULL,NULL },
{ cft_strg, "verbose", NULL, NULL,NULL },
+ { cft_strg, "testlevel", NULL, NULL,NULL },
{ cft_end, NULL, NULL, NULL,NULL }
};
/* end */
#define SILO_CFG "/etc/silo.conf"
+#define SILO_IMAGE "./image"
+#define SILO_BOOTMAP "./boot.map"
+#define SILO_PARMFILE "./parmfile"
+#define SILO_BOOTSECT "/boot/ipleckd.boot"
#define PRINT_LEVEL(x,y...) if ( silo_options.verbosity >= x ) printf(y)
#define ERROR_LEVEL(x,y...) if ( silo_options.verbosity >= x ) fprintf(stderr,y)
struct silo_options
{
short int verbosity;
- struct
- {
- unsigned char testonly;
- }
- flags;
+ short int testlevel;
char *image;
char *ipldevice;
char *parmfile;
char *ramdisk;
char *bootsect;
char *conffile;
+ char *bootmap;
}
silo_options =
{
1, /* verbosity */
- {
- 0, /* testonly */
- }
- ,
- "./image", /* image */
+ 2, /* testlevel */
+ SILO_IMAGE, /* image */
NULL, /* ipldevice */
- NULL, /* parmfile */
+ SILO_PARMFILE, /* parmfile */
NULL, /* initrd */
- "ipleckd.boot", /* bootsector */
- SILO_CFG /* silo.conf file */
+ SILO_BOOTSECT, /* bootsector */
+ SILO_CFG, /* silo.conf file */
+ SILO_BOOTMAP, /* boot.map */
};
struct blockdesc
printf ("-p parmfile : set parameter file to parmfile\n");
printf ("-b bootsect : set bootsector to bootsect\n");
printf ("Additional options\n");
+ printf ("-B bootmap:\n");
printf ("-v: increase verbosity level\n");
printf ("-v#: set verbosity level to #\n");
- printf ("-t: toggle testonly flag\n");
+ printf ("-t: decrease testing level\n");
printf ("-h: print this message\n");
printf ("-?: print this message\n");
printf ("-V: print version\n");
int
read_cfg(struct silo_options *o)
{
+ char *tmp;
if (access(o->conffile, R_OK) && (errno == ENOENT))
return 0;
/* If errno != ENOENT, let cfg_open report an error */
cfg_open(o->conffile);
cfg_parse(cf_options);
- o->ipldevice = cfg_get_strg(cf_options, "ipldevice");
- o->image = cfg_get_strg(cf_options, "image");
- o->parmfile = cfg_get_strg(cf_options, "parmfile");
+ tmp = cfg_get_strg(cf_options, "ipldevice");
+ if ( ! o->ipldevice && tmp )
+ o->ipldevice = tmp;
+ tmp = cfg_get_strg(cf_options, "image");
+ if ( ! strncmp(o-> image,SILO_IMAGE,strlen(SILO_IMAGE)) && tmp )
+ o->image = tmp;
+ tmp = cfg_get_strg(cf_options, "parmfile");
+ if ( !strncmp(o->parmfile,SILO_PARMFILE,strlen(SILO_PARMFILE)) && tmp)
+ o->parmfile = tmp;
+ if ( ! o -> ramdisk )
o->ramdisk = cfg_get_strg(cf_options, "ramdisk");
- o->bootsect = cfg_get_strg(cf_options, "bootsect");
- if (cfg_get_strg(cf_options, "verbose")) {
+ tmp = cfg_get_strg(cf_options, "bootsect");
+ if ( !strncmp(o->bootsect, SILO_BOOTSECT, strlen(SILO_BOOTSECT)) && tmp )
+ o->bootsect = tmp;
+ tmp = cfg_get_strg(cf_options, "map");
+ if ( !strncmp(o->bootmap, SILO_BOOTMAP, strlen(SILO_BOOTMAP)) && tmp )
+ o->bootmap = tmp;
+ tmp = cfg_get_strg(cf_options, "verbose");
+ if ( tmp ) {
unsigned short v;
- sscanf (cfg_get_strg(cf_options, "verbose"), "%hu", &v);
+ sscanf (tmp, "%hu", &v);
o->verbosity = v;
}
+ tmp = cfg_get_strg(cf_options, "testlevel");
+ if ( tmp ) {
+ unsigned short t;
+ sscanf (tmp, "%hu", &t);
+ o->testlevel += t;
+ }
return 1;
}
char *
-gen_tmpparm()
+gen_tmpparm( char *pfile )
{
char *append = cfg_get_strg(cf_options, "append");
char *root = cfg_get_strg(cf_options, "root");
int ro = cfg_get_flag(cf_options, "readonly");
- FILE *f;
+ FILE *f,*of;
char *fn;
+ int c; /* int, so EOF from fgetc() is representable */
char *tmpdir=NULL,*save=NULL;
if (!append && !root && !ro)
- return NULL;
-
- tmpdir=getenv("TMPDIR");
- if (tmpdir) {
- NTRY( save=(char*)malloc(strlen(tmpdir)));
- NTRY( strcpy(save,tmpdir));
- }
- ITRY( setenv("TMPDIR",".",1));
+ return pfile;
+ of = fopen(pfile, "r");
+ if ( of ) {
NTRY( fn = tempnam(NULL,"parm."));
- NTRY( f = fopen(fn, "w"));
+ } else {
+ fn = pfile;
+ }
+ NTRY( f = fopen(fn, "a+"));
+ if ( of ) {
+ while ( (c = fgetc(of)) != EOF )
+ fputc(c, f);
+ }
if (root)
fprintf(f, "root=%s ", root);
if (ro)
fprintf(f, "%s", append);
fprintf(f, "\n");
fclose(f);
- if ( save )
- ITRY( setenv("TMPDIR",save,1));
+ if ( of )
+ fclose(of);
+ printf ("tempfile is %s\n",fn);
return strdup(fn);
}
int rc = 0;
int oc;
- read_cfg(o);
- while ((oc = getopt (argc, argv, "Vf:F:d:p:r:b:B:h?v::t")) != -1)
+ while ((oc = getopt (argc, argv, "Vf:F:d:p:r:b:B:h?v::t::")) != -1)
{
switch (oc)
{
case 'V':
printf("silo version: %s\n",SILO_VERSION);
exit(0);
- case 't':
- TOGGLE (o->flags.testonly);
- PRINT_LEVEL (1, "Testonly flag is now %sactive\n", o->flags.testonly ? "" : "in");
- break;
case 'v':
{
unsigned short v;
PRINT_LEVEL (1, "Verbosity value is now %hu\n", o->verbosity);
break;
}
+ case 't':
+ {
+ unsigned short t;
+ if (optarg && sscanf (optarg, "%hu", &t))
+ o->testlevel -= t;
+ else
+ o->testlevel--;
+ PRINT_LEVEL (1, "Testlevel is now %d\n", o->testlevel);
+ break;
+ }
case 'h':
case '?':
usage ();
case 'b':
GETARG (o->bootsect);
break;
+ case 'B':
+ GETARG (o->bootmap);
+ break;
default:
rc = EINVAL;
break;
}
}
-
+ read_cfg(o);
return rc;
}
struct stat st;
int bs = 1024;
int l;
+
ITRY (stat (name, &dst));
if (S_ISREG (dst.st_mode))
{
int crc = 0;
if (!o->ipldevice || !o->image || !o->bootsect)
{
+ if (!o->ipldevice)
+ fprintf(stderr,"ipldevice\n");
+ if (!o->image)
+ fprintf(stderr,"image\n");
+ if (!o->bootsect)
+ fprintf(stderr,"bootsect\n");
+
usage ();
exit (1);
}
- PRINT_LEVEL (1, "IPL device is: '%s'", o->ipldevice);
+ PRINT_LEVEL (1, "Testlevel is set to %d\n",o->testlevel);
+ PRINT_LEVEL (1, "IPL device is: '%s'", o->ipldevice);
ITRY (dev = verify_device (o->ipldevice));
PRINT_LEVEL (2, "...ok...(%d/%d)", (unsigned short) MAJOR (dev), (unsigned short) MINOR (dev));
PRINT_LEVEL (1, "\n");
PRINT_LEVEL (1, "...ok...");
PRINT_LEVEL (0, "\n");
+ if ( o->testlevel > 0 &&
+ !strncmp( o->bootmap, SILO_BOOTMAP, strlen(SILO_BOOTMAP) )) {
+ NTRY( o->bootmap = tempnam(NULL,"boot."));
+ }
+ PRINT_LEVEL (0, "bootmap is set to: '%s'", o->bootmap);
+ if ( access ( o->bootmap, R_OK | W_OK ) == -1 ) {
+ if ( errno == ENOENT ) {
+ ITRY (creat ( o->bootmap, 0644 ));
+ } else {
+ PRINT_LEVEL(1,"Cannot access bootmap file '%s': %s\n",o->bootmap,
+ strerror(errno));
+ }
+ }
+ ITRY (verify_file (o->bootmap, dev));
+ PRINT_LEVEL (1, "...ok...");
+ PRINT_LEVEL (0, "\n");
+
PRINT_LEVEL (0, "Kernel image is: '%s'", o->image);
ITRY (verify_file (o->image, dev));
PRINT_LEVEL (1, "...ok...");
PRINT_LEVEL (0, "\n");
- if (o->parmfile)
- {
- PRINT_LEVEL (0, "parameterfile is: '%s'", o->parmfile);
+ PRINT_LEVEL (0, "original parameterfile is: '%s'", o->parmfile);
+ ITRY (verify_file (o->parmfile, dev));
+ PRINT_LEVEL (1, "...ok...");
+ o->parmfile = gen_tmpparm(o->parmfile);
+ PRINT_LEVEL (0, "final parameterfile is: '%s'", o->parmfile);
ITRY (verify_file (o->parmfile, dev));
PRINT_LEVEL (1, "...ok...");
PRINT_LEVEL (0, "\n");
- }
if (o->ramdisk)
{
}
int
-write_bootsect (char *ipldevice, char *bootsect, struct blocklist *blklst)
+write_bootsect (struct silo_options *o, struct blocklist *blklst)
{
int i;
int s_fd, d_fd, b_fd, bd_fd;
struct stat s_st, d_st, b_st;
int rc=0;
int bs, boots;
- char *mapname;
char *tmpdev;
char buffer[4096]={0,};
- ITRY (d_fd = open (ipldevice, O_RDWR | O_SYNC));
+ ITRY (d_fd = open (o->ipldevice, O_RDWR | O_SYNC));
ITRY (fstat (d_fd, &d_st));
- if (!(mapname = cfg_get_strg(cf_options, "map")))
- mapname = "boot.map";
- ITRY (s_fd = open (mapname, O_RDWR | O_TRUNC | O_CREAT | O_SYNC));
- ITRY (verify_file (bootsect, d_st.st_rdev));
+ ITRY (s_fd = open (o->bootmap, O_RDWR | O_TRUNC | O_CREAT | O_SYNC));
+ ITRY (verify_file (o->bootsect, d_st.st_rdev));
for (i = 0; i < blklst->ix; i++)
{
int offset = blklst->blk[i].off;
int addrct = blklst->blk[i].addr | (blklst->blk[i].ct & 0xff);
PRINT_LEVEL (1, "ix %i: offset: %06x count: %02x address: 0x%08x\n", i, offset, blklst->blk[i].ct & 0xff, blklst->blk[i].addr);
+ if ( o->testlevel <= 1 ) {
NTRY (write (s_fd, &offset, sizeof (int)));
NTRY (write (s_fd, &addrct, sizeof (int)));
}
+ }
ITRY (ioctl (s_fd,FIGETBSZ, &bs));
- ITRY (stat (mapname, &s_st));
+ ITRY (stat (o->bootmap, &s_st));
if (s_st.st_size > bs )
{
- ERROR_LEVEL (0,"%s is larger than one block\n", mapname);
+ ERROR_LEVEL (0,"%s is larger than one block\n", o->bootmap);
rc = -1;
errno = EINVAL;
}
close(s_fd);
ITRY (unlink(tmpdev));
/* Now patch the bootsector */
- ITRY (b_fd = open (bootsect, O_RDONLY));
+ ITRY (b_fd = open (o->bootsect, O_RDONLY));
NTRY (read (b_fd, buffer, 4096));
memset (buffer + 0xe0, 0, 8);
*(int *) (buffer + 0xe0) = boots;
- if ( ! silo_options.flags.testonly ) {
+ if ( o -> testlevel <= 0 ) {
NTRY (write (d_fd, buffer, 4096));
NTRY (write (d_fd, buffer, 4096));
}
do_silo (struct silo_options *o)
{
int rc = 0;
- char *tmp_parmfile = NULL;
int device_fd;
int image_fd;
{
ITRY (add_file_to_blocklist (o->parmfile, &blklist, 0x00008000));
}
- else
- {
- if ((tmp_parmfile = gen_tmpparm()))
- ITRY (add_file_to_blocklist (tmp_parmfile, &blklist, 0x00008000));
- }
if (o->ramdisk)
{
ITRY (add_file_to_blocklist (o->ramdisk, &blklist, 0x00800000));
}
- ITRY (write_bootsect (o->ipldevice, o->bootsect, &blklist));
- if (tmp_parmfile)
- ITRY (remove (tmp_parmfile));
-
+ ITRY (write_bootsect (o, &blklist));
return rc;
}
main (int argct, char *argv[])
{
int rc = 0;
+ char *save=NULL;
+ char *tmpdir=getenv("TMPDIR");
+ if (tmpdir) {
+ NTRY( save=(char*)malloc(strlen(tmpdir)+1));
+ NTRY( strncpy(save,tmpdir,strlen(tmpdir)+1));
+ }
+ ITRY( setenv("TMPDIR",".",1));
ITRY (parse_options (&silo_options, argct, argv));
ITRY (verify_options (&silo_options));
+ if ( silo_options.testlevel > 0 ) {
+ printf ("WARNING: silo does not modify your volume. Use -t2 to change IPL records\n");
+ }
ITRY (do_silo (&silo_options));
+ if ( save )
+ ITRY( setenv("TMPDIR",save,1));
return rc;
}
-ipldevice = /dev/dasd00
+ipldevice = /dev/dasda
image = /boot/image
bootsect = /boot/ipleckd.boot
map = /boot/boot.map
root = /dev/dasd01
readonly
-append = "dasd=200-20f"
+append = "dasd=200-20f noinitrd"
case HDIO_SET_NICE:
case BLKROSET:
case BLKROGET:
+ case BLKELVGET:
+ case BLKELVSET:
/* 0x02 -- Floppy ioctls */
case FDMSGON:
unsigned long first_sector, first_size, this_sector, this_size;
int mask = (1 << hd->minor_shift) - 1;
int i;
+ int loopct = 0; /* number of links followed
+ without finding a data partition */
first_sector = hd->part[MINOR(dev)].start_sect;
first_size = hd->part[MINOR(dev)].nr_sects;
this_sector = first_sector;
while (1) {
+ if (++loopct > 100)
+ return;
if ((current_minor & mask) == 0)
return;
if (!(bh = bread(dev,0,get_ptable_blocksize(dev))))
current_minor++;
if ((current_minor & mask) == 0)
goto done;
+ loopct = 0;
}
/*
* Next, process the (first) extended partition, if present.
struct buffer_head *bh;
struct solaris_x86_vtoc *v;
struct solaris_x86_slice *s;
+ int mask = (1 << hd->minor_shift) - 1;
int i;
if(!(bh = bread(dev, 0, get_ptable_blocksize(dev))))
return;
}
for(i=0; i<SOLARIS_X86_NUMSLICE; i++) {
+ if ((current_minor & mask) == 0)
+ break;
+
s = &v->v_slice[i];
if (s->s_size == 0)
continue;
+
printk(" [s%d]", i);
/* solaris partitions are relative to current MS-DOS
* one but add_partition starts relative to sector
#endif /* CONFIG_ULTRIX_PARTITION */
+#ifdef CONFIG_ARCH_S390
+#include <asm/ebcdic.h>
+#include "../s390/block/dasd_types.h"
+
+dasd_information_t **dasd_information = NULL;
+
+typedef enum {
+ ibm_partition_none = 0,
+ ibm_partition_lnx1 = 1,
+ ibm_partition_vol1 = 3,
+ ibm_partition_cms1 = 4
+} ibm_partition_t;
+
+static ibm_partition_t
+get_partition_type ( char * type )
+{
+ static char lnx[5]="LNX1";
+ static char vol[5]="VOL1";
+ static char cms[5]="CMS1";
+ if ( ! strncmp ( lnx, "LNX1",4 ) ) {
+ ASCEBC(lnx,4);
+ ASCEBC(vol,4);
+ ASCEBC(cms,4);
+ }
+ if ( ! strncmp (type,lnx,4) ||
+ ! strncmp (type,"LNX1",4) )
+ return ibm_partition_lnx1;
+ if ( ! strncmp (type,vol,4) )
+ return ibm_partition_vol1;
+ if ( ! strncmp (type,cms,4) )
+ return ibm_partition_cms1;
+ return ibm_partition_none;
+}
+
+int
+ibm_partition (struct gendisk *hd, kdev_t dev, int first_sector)
+{
+ struct buffer_head *bh;
+ ibm_partition_t partition_type;
+ char type[5] = {0,};
+ char name[7] = {0,};
+ int di = MINOR(dev) >> hd->minor_shift;
+ if ( ! get_ptable_blocksize(dev) )
+ return 0;
+ if ( ! dasd_information )
+ return 0;
+ if ( ( bh = bread( dev,
+ dasd_information[di]->sizes.label_block,
+ dasd_information[di]->sizes.bp_sector ) ) != NULL ) {
+ strncpy ( type,bh -> b_data, 4);
+ strncpy ( name,bh -> b_data + 4, 6);
+ } else {
+ return 0;
+ }
+ if ( (*(char *)bh -> b_data) & 0x80 ) {
+ EBCASC(name,6);
+ }
+ switch ( partition_type = get_partition_type(type) ) {
+ case ibm_partition_lnx1:
+ printk ( "(LNX1)/%6s:",name);
+ add_partition( hd, MINOR(dev) + 1,
+ (dasd_information[di]->sizes.label_block + 1) <<
+ dasd_information[di]->sizes.s2b_shift,
+ (dasd_information [di]->sizes.blocks -
+ dasd_information[di]->sizes.label_block - 1) <<
+ dasd_information[di]->sizes.s2b_shift,0 );
+ break;
+ case ibm_partition_vol1:
+ printk ( "(VOL1)/%6s:",name);
+ break;
+ case ibm_partition_cms1:
+ printk ( "(CMS1)/%6s:",name);
+ if (* (((long *)bh->b_data) + 13) == 0) {
+ /* disk holds a CMS filesystem */
+ add_partition( hd, MINOR(dev) + 1,
+ (dasd_information [di]->sizes.label_block + 1) <<
+ dasd_information [di]->sizes.s2b_shift,
+ (dasd_information [di]->sizes.blocks -
+ dasd_information [di]->sizes.label_block) <<
+ dasd_information [di]->sizes.s2b_shift,0 );
+ printk ("(CMS)");
+ } else {
+ /* disk is reserved minidisk */
+ long *label=(long*)bh->b_data;
+ int offset = label[13];
+ int size = (label[7]-1-label[13])*(label[3]>>9);
+ add_partition( hd, MINOR(dev) + 1,
+ offset << dasd_information [di]->sizes.s2b_shift,
+ size<<dasd_information [di]->sizes.s2b_shift,0 );
+ printk ("(MDSK)");
+ }
+ break;
+ case ibm_partition_none:
+ printk ( "(nonl)/ :");
+ add_partition( hd, MINOR(dev) + 1,
+ (dasd_information [di]->sizes.label_block + 1) <<
+ dasd_information [di]->sizes.s2b_shift,
+ (dasd_information [di]->sizes.blocks -
+ dasd_information [di]->sizes.label_block - 1) <<
+ dasd_information [di]->sizes.s2b_shift,0 );
+ break;
+ }
+ printk ( "\n" );
+ bforget(bh);
+ return 1;
+}
+#endif
+
static void check_partition(struct gendisk *hd, kdev_t dev)
{
static int first_time = 1;
#ifdef CONFIG_ULTRIX_PARTITION
if(ultrix_partition(hd, dev, first_sector))
return;
+#endif
+#ifdef CONFIG_ARCH_S390
+ if (ibm_partition (hd, dev, first_sector))
+ return;
#endif
printk(" unknown partition table\n");
}
* ACER50 (and others?) require the full spec length mode sense
* page capabilities size, but older drives break.
*/
- if (!(drive->id && !strcmp(drive->id->model,"ATAPI CD ROM DRIVE 50X MAX")))
- size -= 4;
+ if (drive->id) {
+ if (!(!strcmp(drive->id->model, "ATAPI CD ROM DRIVE 50X MAX") ||
+ !strcmp(drive->id->model, "WPI CDS-32X")))
+ size -= sizeof(cap->pad);
+ }
/* we have to cheat a little here. the packet will eventually
* be queued with ide_cdrom_packet(), which extracts the
drive->nice1 = (arg >> IDE_NICE_1) & 1;
return 0;
+ case BLKELVGET:
+ case BLKELVSET:
+ return blkelv_ioctl(inode->i_rdev, cmd, arg);
+
RO_IOCTLS(inode->i_rdev, arg);
default:
*
* Copyright (C) 1991, 1992 Linus Torvalds
* Copyright (C) 1994, Karl Keyte: Added support for disk statistics
+ * Elevator latency, (C) 2000 Andrea Arcangeli <andrea@suse.de> SuSE
*/
/*
#include <asm/system.h>
#include <asm/io.h>
+#include <asm/uaccess.h>
#include <linux/blk.h>
#include <linux/module.h>
/*
* used to wait on when there are no free requests
*/
-struct wait_queue * wait_for_request = NULL;
+struct wait_queue * wait_for_request;
/* This specifies how many sectors to read ahead on the disk. */
-int read_ahead[MAX_BLKDEV] = {0, };
+int read_ahead[MAX_BLKDEV];
/* blk_dev_struct is:
* *request_fn
*
* if (!blk_size[MAJOR]) then no minor size checking is done.
*/
-int * blk_size[MAX_BLKDEV] = { NULL, NULL, };
+int * blk_size[MAX_BLKDEV];
/*
* blksize_size contains the size of all block-devices:
*
* if (!blksize_size[MAJOR]) then 1024 bytes is assumed.
*/
-int * blksize_size[MAX_BLKDEV] = { NULL, NULL, };
+int * blksize_size[MAX_BLKDEV];
/*
* hardsect_size contains the size of the hardware sector of a device.
* This is currently set by some scsi devices and read by the msdos fs driver.
* Other uses may appear later.
*/
-int * hardsect_size[MAX_BLKDEV] = { NULL, NULL, };
+int * hardsect_size[MAX_BLKDEV];
/*
* The following tunes the read-ahead algorithm in mm/filemap.c
*/
-int * max_readahead[MAX_BLKDEV] = { NULL, NULL, };
+int * max_readahead[MAX_BLKDEV];
/*
* Max number of sectors per request
*/
-int * max_sectors[MAX_BLKDEV] = { NULL, NULL, };
+int * max_sectors[MAX_BLKDEV];
/*
* Max number of segments per request
*/
-int * max_segments[MAX_BLKDEV] = { NULL, NULL, };
+int * max_segments[MAX_BLKDEV];
static inline int get_max_sectors(kdev_t dev)
{
return &blk_dev[major].current_request;
}
+static inline int get_request_latency(elevator_t * elevator, int rw)
+{
+ int latency;
+
+ latency = elevator->read_latency;
+ if (rw != READ)
+ latency = elevator->write_latency;
+
+ return latency;
+}
+
/*
* remove the plug and let it rip..
*/
printk(KERN_ERR "drive_stat_acct: cmd not R/W?\n");
}
+static int blkelvget_ioctl(elevator_t * elevator, blkelv_ioctl_arg_t * arg)
+{
+ int ret;
+ blkelv_ioctl_arg_t output;
+
+ output.queue_ID = elevator->queue_ID;
+ output.read_latency = elevator->read_latency;
+ output.write_latency = elevator->write_latency;
+ output.max_bomb_segments = elevator->max_bomb_segments;
+
+ ret = -EFAULT;
+ if (copy_to_user(arg, &output, sizeof(blkelv_ioctl_arg_t)))
+ goto out;
+ ret = 0;
+ out:
+ return ret;
+}
+
+static int blkelvset_ioctl(elevator_t * elevator, const blkelv_ioctl_arg_t * arg)
+{
+ blkelv_ioctl_arg_t input;
+ int ret;
+
+ ret = -EFAULT;
+ if (copy_from_user(&input, arg, sizeof(blkelv_ioctl_arg_t)))
+ goto out;
+
+ ret = -EINVAL;
+ if (input.read_latency < 0)
+ goto out;
+ if (input.write_latency < 0)
+ goto out;
+ if (input.max_bomb_segments <= 0)
+ goto out;
+
+ elevator->read_latency = input.read_latency;
+ elevator->write_latency = input.write_latency;
+ elevator->max_bomb_segments = input.max_bomb_segments;
+
+ ret = 0;
+ out:
+ return ret;
+}
+
+int blkelv_ioctl(kdev_t dev, unsigned long cmd, unsigned long arg)
+{
+ elevator_t * elevator = &blk_dev[MAJOR(dev)].elevator;
+ blkelv_ioctl_arg_t * __arg = (blkelv_ioctl_arg_t *) arg;
+
+ switch (cmd) {
+ case BLKELVGET:
+ return blkelvget_ioctl(elevator, __arg);
+ case BLKELVSET:
+ return blkelvset_ioctl(elevator, __arg);
+ }
+	return -EINVAL;
+}
+
+static inline int seek_to_not_starving_chunk(struct request ** req, int * lat)
+{
+ struct request * tmp = *req;
+ int found = 0, pos = 0;
+ int last_pos = 0, __lat = *lat;
+
+ do {
+ if (tmp->elevator_latency <= 0)
+ {
+ *req = tmp;
+ found = 1;
+ last_pos = pos;
+ if (last_pos >= __lat)
+ break;
+ }
+ pos += tmp->nr_segments;
+ } while ((tmp = tmp->next));
+ *lat -= last_pos;
+
+ return found;
+}
+
+#define CASE_COALESCE_BUT_FIRST_REQUEST_MAYBE_BUSY \
+ case IDE0_MAJOR: /* same as HD_MAJOR */ \
+ case IDE1_MAJOR: \
+ case FLOPPY_MAJOR: \
+ case IDE2_MAJOR: \
+ case IDE3_MAJOR: \
+ case IDE4_MAJOR: \
+ case IDE5_MAJOR: \
+ case ACSI_MAJOR: \
+ case MFM_ACORN_MAJOR: \
+ case MDISK_MAJOR: \
+ case DASD_MAJOR:
+#define CASE_COALESCE_ALSO_FIRST_REQUEST \
+ case SCSI_DISK0_MAJOR: \
+ case SCSI_DISK1_MAJOR: \
+ case SCSI_DISK2_MAJOR: \
+ case SCSI_DISK3_MAJOR: \
+ case SCSI_DISK4_MAJOR: \
+ case SCSI_DISK5_MAJOR: \
+ case SCSI_DISK6_MAJOR: \
+ case SCSI_DISK7_MAJOR: \
+ case SCSI_CDROM_MAJOR: \
+ case DAC960_MAJOR+0: \
+ case DAC960_MAJOR+1: \
+ case DAC960_MAJOR+2: \
+ case DAC960_MAJOR+3: \
+ case DAC960_MAJOR+4: \
+ case DAC960_MAJOR+5: \
+ case DAC960_MAJOR+6: \
+ case DAC960_MAJOR+7: \
+ case COMPAQ_SMART2_MAJOR+0: \
+ case COMPAQ_SMART2_MAJOR+1: \
+ case COMPAQ_SMART2_MAJOR+2: \
+ case COMPAQ_SMART2_MAJOR+3: \
+ case COMPAQ_SMART2_MAJOR+4: \
+ case COMPAQ_SMART2_MAJOR+5: \
+ case COMPAQ_SMART2_MAJOR+6: \
+ case COMPAQ_SMART2_MAJOR+7:
+
+#define elevator_starve_rest_of_queue(req) \
+do { \
+ struct request * tmp = (req); \
+ for ((tmp) = (tmp)->next; (tmp); (tmp) = (tmp)->next) \
+ (tmp)->elevator_latency--; \
+} while (0)
+
+static inline void elevator_queue(struct request * req,
+ struct request * tmp,
+ int latency,
+ struct blk_dev_struct * dev,
+ struct request ** queue_head)
+{
+ struct request * __tmp;
+ int starving, __latency;
+
+ starving = seek_to_not_starving_chunk(&tmp, &latency);
+ __tmp = tmp;
+ __latency = latency;
+
+ for (;; tmp = tmp->next)
+ {
+ if ((latency -= tmp->nr_segments) <= 0)
+ {
+ tmp = __tmp;
+ latency = __latency - tmp->nr_segments;
+
+ if (starving)
+ break;
+
+ switch (MAJOR(req->rq_dev))
+ {
+ CASE_COALESCE_BUT_FIRST_REQUEST_MAYBE_BUSY
+ if (tmp == dev->current_request)
+ default:
+ goto link;
+ CASE_COALESCE_ALSO_FIRST_REQUEST
+ }
+
+ latency += tmp->nr_segments;
+ req->next = tmp;
+ *queue_head = req;
+ goto after_link;
+ }
+
+ if (!tmp->next)
+ break;
+
+ {
+ const int after_current = IN_ORDER(tmp,req);
+ const int before_next = IN_ORDER(req,tmp->next);
+
+ if (!IN_ORDER(tmp,tmp->next)) {
+ if (after_current || before_next)
+ break;
+ } else {
+ if (after_current && before_next)
+ break;
+ }
+ }
+ }
+
+ link:
+ req->next = tmp->next;
+ tmp->next = req;
+
+ after_link:
+ req->elevator_latency = latency;
+
+ elevator_starve_rest_of_queue(req);
+}
+
/*
* add-request adds a request to the linked list.
* It disables interrupts (aquires the request spinlock) so that it can muck
short disk_index;
unsigned long flags;
int queue_new_request = 0;
+ int latency;
switch (major) {
case DAC960_MAJOR+0:
break;
}
- req->next = NULL;
+ latency = get_request_latency(&dev->elevator, req->cmd);
/*
* We use the goto to reduce locking complexity
if (req->bh)
mark_buffer_clean(req->bh);
if (!(tmp = *current_request)) {
+ req->next = NULL;
+ req->elevator_latency = latency;
*current_request = req;
if (dev->current_request != &dev->plug)
queue_new_request = 1;
goto out;
}
- for ( ; tmp->next ; tmp = tmp->next) {
- const int after_current = IN_ORDER(tmp,req);
- const int before_next = IN_ORDER(req,tmp->next);
-
- if (!IN_ORDER(tmp,tmp->next)) {
- if (after_current || before_next)
- break;
- } else {
- if (after_current && before_next)
- break;
- }
- }
- req->next = tmp->next;
- tmp->next = req;
+ elevator_queue(req, tmp, latency, dev, current_request);
/* for SCSI devices, call request_fn unconditionally */
if (scsi_blk_major(major) ||
total_segments--;
if (total_segments > max_segments)
return;
+ if (next->elevator_latency < req->elevator_latency)
+ req->elevator_latency = next->elevator_latency;
req->bhtail->b_reqnext = next->bh;
req->bhtail = next->bhtail;
req->nr_sectors += next->nr_sectors;
wake_up (&wait_for_request);
}
+#define read_pendings(req) \
+({ \
+ int __ret = 0; \
+ struct request * tmp = (req); \
+ do { \
+ if (tmp->cmd == READ) \
+ { \
+ __ret = 1; \
+ break; \
+ } \
+ tmp = tmp->next; \
+ } while (tmp); \
+ __ret; \
+})
+
void make_request(int major, int rw, struct buffer_head * bh)
{
unsigned int sector, count;
- struct request * req;
+ struct request * req, * prev;
int rw_ahead, max_req, max_sectors, max_segments;
unsigned long flags;
+ int latency, starving;
count = bh->b_size >> 9;
sector = bh->b_rsector;
max_sectors = get_max_sectors(bh->b_rdev);
max_segments = get_max_segments(bh->b_rdev);
+ latency = get_request_latency(&blk_dev[major].elevator, rw);
+
/*
* Now we acquire the request spinlock, we have to be mega careful
* not to schedule or do something nonatomic
major != DDV_MAJOR && major != NBD_MAJOR)
plug_device(blk_dev + major); /* is atomic */
} else switch (major) {
- case IDE0_MAJOR: /* same as HD_MAJOR */
- case IDE1_MAJOR:
- case FLOPPY_MAJOR:
- case IDE2_MAJOR:
- case IDE3_MAJOR:
- case IDE4_MAJOR:
- case IDE5_MAJOR:
- case ACSI_MAJOR:
- case MFM_ACORN_MAJOR:
- case MDISK_MAJOR:
- case DASD_MAJOR:
+ CASE_COALESCE_BUT_FIRST_REQUEST_MAYBE_BUSY
/*
* The scsi disk and cdrom drivers completely remove the request
* from the queue when they start processing an entry. For this
* entry may be busy being processed and we thus can't change it.
*/
if (req == blk_dev[major].current_request)
- req = req->next;
- if (!req)
- break;
+ {
+ if (!(req = req->next))
+ break;
+ latency -= req->nr_segments;
+ }
/* fall through */
+ CASE_COALESCE_ALSO_FIRST_REQUEST
- case SCSI_DISK0_MAJOR:
- case SCSI_DISK1_MAJOR:
- case SCSI_DISK2_MAJOR:
- case SCSI_DISK3_MAJOR:
- case SCSI_DISK4_MAJOR:
- case SCSI_DISK5_MAJOR:
- case SCSI_DISK6_MAJOR:
- case SCSI_DISK7_MAJOR:
- case SCSI_CDROM_MAJOR:
- case DAC960_MAJOR+0:
- case DAC960_MAJOR+1:
- case DAC960_MAJOR+2:
- case DAC960_MAJOR+3:
- case DAC960_MAJOR+4:
- case DAC960_MAJOR+5:
- case DAC960_MAJOR+6:
- case DAC960_MAJOR+7:
- case COMPAQ_SMART2_MAJOR+0:
- case COMPAQ_SMART2_MAJOR+1:
- case COMPAQ_SMART2_MAJOR+2:
- case COMPAQ_SMART2_MAJOR+3:
- case COMPAQ_SMART2_MAJOR+4:
- case COMPAQ_SMART2_MAJOR+5:
- case COMPAQ_SMART2_MAJOR+6:
- case COMPAQ_SMART2_MAJOR+7:
+ /* avoid write-bombs so as not to hurt interactivity of reads */
+ if (rw != READ && read_pendings(req))
+ max_segments = blk_dev[major].elevator.max_bomb_segments;
+ starving = seek_to_not_starving_chunk(&req, &latency);
+ prev = NULL;
do {
if (req->sem)
continue;
continue;
/* Can we add it to the end of this request? */
if (req->sector + req->nr_sectors == sector) {
+ if (latency - req->nr_segments < 0)
+ break;
if (req->bhtail->b_data + req->bhtail->b_size
!= bh->b_data) {
if (req->nr_segments < max_segments)
req->nr_segments++;
- else continue;
+ else break;
}
req->bhtail->b_reqnext = bh;
req->bhtail = bh;
req->nr_sectors += count;
+
+ /* latency stuff */
+ if ((latency -= req->nr_segments) < req->elevator_latency)
+ req->elevator_latency = latency;
+ elevator_starve_rest_of_queue(req);
+
/* Can we now merge this req with the next? */
attempt_merge(req, max_sectors, max_segments);
/* or to the beginning? */
} else if (req->sector - count == sector) {
+ if (!prev && starving)
+ break;
if (bh->b_data + bh->b_size
!= req->bh->b_data) {
if (req->nr_segments < max_segments)
req->nr_segments++;
- else continue;
+ else break;
}
bh->b_reqnext = req->bh;
req->bh = bh;
req->current_nr_sectors = count;
req->sector = sector;
req->nr_sectors += count;
+
+ /* latency stuff */
+ if (latency < --req->elevator_latency)
+ req->elevator_latency = latency;
+ elevator_starve_rest_of_queue(req);
+
+ if (prev)
+ attempt_merge(prev, max_sectors, max_segments);
} else
continue;
spin_unlock_irqrestore(&io_request_lock,flags);
return;
- } while ((req = req->next) != NULL);
+ } while (prev = req,
+ (latency -= req->nr_segments) >= 0 && (req = req->next) != NULL);
}
/* find an unused request. */
req->sem = NULL;
req->bh = bh;
req->bhtail = bh;
- req->next = NULL;
add_request(major+blk_dev,req);
return;
{
struct request * req;
struct blk_dev_struct *dev;
+ static unsigned int queue_ID;
for (dev = blk_dev + MAX_BLKDEV; dev-- != blk_dev;) {
dev->request_fn = NULL;
dev->plug_tq.sync = 0;
dev->plug_tq.routine = &unplug_device;
dev->plug_tq.data = dev;
+ dev->elevator = ELEVATOR_DEFAULTS;
+ dev->elevator.queue_ID = queue_ID++;
}
req = all_requests + NR_REQUEST;
while (--req >= all_requests) {
req->rq_status = RQ_INACTIVE;
- req->next = NULL;
}
memset(ro_bits,0,sizeof(ro_bits));
memset(max_readahead, 0, sizeof(max_readahead));
#endif
#ifdef CONFIG_DASD
dasd_init();
+#endif
+#ifdef CONFIG_BLK_DEV_XPRAM
+ xpram_init();
#endif
return 0;
};
EXPORT_SYMBOL(io_request_lock);
EXPORT_SYMBOL(end_that_request_first);
EXPORT_SYMBOL(end_that_request_last);
+EXPORT_SYMBOL(blkelv_ioctl);
factor = min = 1 << FACTOR_SHIFT(FACTOR((md_dev+minor)));
+ md_blocksizes[minor] <<= FACTOR_SHIFT(FACTOR((md_dev+minor)));
+
for (i=0; i<md_dev[minor].nb_dev; i++)
if (md_dev[minor].devices[i].size<min)
{
#include <linux/module.h>
#include <linux/config.h>
+#include <linux/kmod.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/ioport.h>
s = protocol; e = s+1;
+ if (!protocols[0])
+ request_module ("paride_protocol");
+
if (autoprobe) {
s = 0;
e = MAX_PROTOS;
#undef BLOCKMOVE
#define Z_WAKE
static char rcsid[] =
-"$Revision: 2.3.2.5 $$Date: 2000/01/19 14:35:33 $";
+"$Revision: 2.3.2.6 $$Date: 2000/05/05 13:56:05 $";
/*
* linux/drivers/char/cyclades.c
*
- * This file contains the driver for the Cyclades Cyclom-Y multiport
+ * This file contains the driver for the Cyclades async multiport
* serial boards.
*
* Initially written by Randolph Bentson <bentson@grieg.seaslug.org>.
*
* This version supports shared IRQ's (only for PCI boards).
*
- * This module exports the following rs232 io functions:
- * int cy_init(void);
- * int cy_open(struct tty_struct *tty, struct file *filp);
- * and the following functions for modularization.
- * int init_module(void);
- * void cleanup_module(void);
- *
* $Log: cyclades.c,v $
+ * Revision 2.3.2.6 2000/05/05 13:56:05 ivan
+ * Driver now reports physical instead of virtual memory addresses.
+ * Masks were added to some Cyclades-Z read accesses.
+ * Implemented workaround for PLX9050 bug that would cause a system lockup
+ * in certain systems, depending on the MMIO addresses allocated to the
+ * board.
+ * Changed the Tx interrupt programming in the CD1400 chips to boost up
+ * performance (Cyclom-Y only).
+ * Did some code "housekeeping".
+ *
* Revision 2.3.2.5 2000/01/19 14:35:33 ivan
* Fixed bug in cy_set_termios on CRTSCTS flag turnoff.
*
#ifndef SERIAL_XMIT_SIZE
#define SERIAL_XMIT_SIZE (MIN(PAGE_SIZE, 4096))
#endif
-#define WAKEUP_CHARS (SERIAL_XMIT_SIZE-256)
+#define WAKEUP_CHARS 256
#define STD_COM_FLAGS (0)
static struct tty_driver cy_serial_driver, cy_callout_driver;
static int serial_refcount;
-#ifndef CONFIG_COBALT_27
+#if defined(__i386__) || defined(__alpha__)
/* This is the address lookup table. The driver will probe for
Cyclom-Y/ISA boards at all addresses in here. If you want the
driver to probe addresses at a different address, add it to
MODULE_PARM(irq, "1-" __MODULE_STRING(NR_CARDS) "i");
#endif
-#endif /* CONFIG_COBALT_27 */
+#endif /* (__i386__) || (__alpha__) */
/* This is the per-card data structure containing address, irq, number of
channels, etc. This driver supports a maximum of NR_CARDS cards.
static void cy_start(struct tty_struct *);
static void set_line_char(struct cyclades_port *);
static int cyz_issue_cmd(struct cyclades_card *, uclong, ucchar, uclong);
-#ifndef CONFIG_COBALT_27
+#if defined(__i386__) || defined(__alpha__)
static unsigned detect_isa_irq (volatile ucchar *);
-#endif /* CONFIG_COBALT_27 */
-#ifdef CYCLOM_SHOW_STATUS
-static void show_status(int);
-#endif
+#endif /* (__i386__) || (__alpha__) */
static int cyclades_get_proc_info(char *, char **, off_t , int , int *, void *);
(tty->ldisc.write_wakeup)(tty);
}
wake_up_interruptible(&tty->write_wait);
- wake_up_interruptible(&tty->poll_wait);
+ wake_up_interruptible(&tty->poll_wait);
}
#ifdef Z_WAKE
if (test_and_clear_bit(Cy_EVENT_SHUTDOWN_WAKEUP, &info->event)) {
return(0);
} /* cyy_issue_cmd */
-#ifndef CONFIG_COBALT_27 /* ISA interrupt detection code */
+#if defined(__i386__) || defined(__alpha__)
+/* ISA interrupt detection code */
static unsigned
detect_isa_irq (volatile ucchar *address)
{
cy_writeb((u_long)address + (CyCAR<<index), 0);
cy_writeb((u_long)address + (CySRER<<index),
- cy_readb(address + (CySRER<<index)) | CyTxMpty);
+ cy_readb(address + (CySRER<<index)) | CyTxRdy);
restore_flags(flags);
/* Wait ... */
save_car = cy_readb(address + (CyCAR<<index));
cy_writeb((u_long)address + (CyCAR<<index), (save_xir & 0x3));
cy_writeb((u_long)address + (CySRER<<index),
- cy_readb(address + (CySRER<<index)) & ~CyTxMpty);
+ cy_readb(address + (CySRER<<index)) & ~CyTxRdy);
cy_writeb((u_long)address + (CyTIR<<index), (save_xir & 0x3f));
cy_writeb((u_long)address + (CyCAR<<index), (save_car));
cy_writeb((u_long)address + (Cy_ClrIntr<<index), 0);
return (irq > 0)? irq : 0;
}
-#endif /* CONFIG_COBALT_27 */
+#endif /* (__i386__) || (__alpha__) */
/* The real interrupt service routine is called
whenever the card wants its hand held--chars
/* validate the port# (as configured and open) */
if( (i < 0) || (NR_PORTS <= i) ){
cy_writeb((u_long)base_addr+(CySRER<<index),
- cy_readb(base_addr+(CySRER<<index)) & ~CyTxMpty);
+ cy_readb(base_addr+(CySRER<<index)) & ~CyTxRdy);
goto txend;
}
info = &cy_port[i];
info->last_active = jiffies;
if(info->tty == 0){
cy_writeb((u_long)base_addr+(CySRER<<index),
- cy_readb(base_addr+(CySRER<<index)) & ~CyTxMpty);
+ cy_readb(base_addr+(CySRER<<index)) & ~CyTxRdy);
goto txdone;
}
if (!info->xmit_cnt){
cy_writeb((u_long)base_addr+(CySRER<<index),
cy_readb(base_addr+(CySRER<<index)) &
- ~CyTxMpty);
+ ~CyTxRdy);
goto txdone;
}
if (info->xmit_buf == 0){
cy_writeb((u_long)base_addr+(CySRER<<index),
cy_readb(base_addr+(CySRER<<index)) &
- ~CyTxMpty);
+ ~CyTxRdy);
goto txdone;
}
if (info->tty->stopped || info->tty->hw_stopped){
cy_writeb((u_long)base_addr+(CySRER<<index),
cy_readb(base_addr+(CySRER<<index)) &
- ~CyTxMpty);
+ ~CyTxRdy);
goto txdone;
}
/* Because the Embedded Transmit Commands have
info->tty->hw_stopped = 0;
cy_writeb((u_long)base_addr+(CySRER<<index),
cy_readb(base_addr+(CySRER<<index)) |
- CyTxMpty);
+ CyTxRdy);
cy_sched_event(info,
Cy_EVENT_WRITE_WAKEUP);
}
info->tty->hw_stopped = 1;
cy_writeb((u_long)base_addr+(CySRER<<index),
cy_readb(base_addr+(CySRER<<index)) &
- ~CyTxMpty);
+ ~CyTxRdy);
}
}
}
return (-1);
}
zfw_ctrl = (struct ZFW_CTRL *)
- (cinfo->base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ (cinfo->base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &zfw_ctrl->board_ctrl;
loc_doorbell = cy_readl(&((struct RUNTIME_9060 *)
return (-1);
}
zfw_ctrl = (struct ZFW_CTRL *)
- (cinfo->base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ (cinfo->base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &zfw_ctrl->board_ctrl;
index = 0;
firm_id = (struct FIRM_ID *)(cinfo->base_addr + ID_ADDRESS);
zfw_ctrl = (struct ZFW_CTRL *)
- (cinfo->base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ (cinfo->base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &(zfw_ctrl->board_ctrl);
fw_ver = cy_readl(&board_ctrl->fw_version);
hw_ver = cy_readl(&((struct RUNTIME_9060 *)(cinfo->ctl_addr))->mail_box_0);
firm_id = (struct FIRM_ID *)(cinfo->base_addr + ID_ADDRESS);
zfw_ctrl = (struct ZFW_CTRL *)
- (cinfo->base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ (cinfo->base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &(zfw_ctrl->board_ctrl);
/* Skip first polling cycle to avoid racing conditions with the FW */
return -ENODEV;
}
- zfw_ctrl =
- (struct ZFW_CTRL *)
- (cy_card[card].base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ zfw_ctrl = (struct ZFW_CTRL *)
+ (cy_card[card].base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &zfw_ctrl->board_ctrl;
ch_ctrl = zfw_ctrl->ch_ctrl;
CY_LOCK(info, flags);
cy_writeb((u_long)base_addr+(CyCAR<<index), channel);
cy_writeb((u_long)base_addr+(CySRER<<index),
- cy_readb(base_addr+(CySRER<<index)) | CyTxMpty);
+ cy_readb(base_addr+(CySRER<<index)) | CyTxRdy);
CY_UNLOCK(info, flags);
} else {
#ifdef CONFIG_CYZ_INTR
return;
}
- zfw_ctrl =
- (struct ZFW_CTRL *)
- (cy_card[card].base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ zfw_ctrl = (struct ZFW_CTRL *)
+ (cy_card[card].base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &(zfw_ctrl->board_ctrl);
ch_ctrl = zfw_ctrl->ch_ctrl;
return -EINVAL;
}
- zfw_ctrl =
- (struct ZFW_CTRL *)
- (base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ zfw_ctrl = (struct ZFW_CTRL *)
+ (base_addr + (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &zfw_ctrl->board_ctrl;
ch_ctrl = zfw_ctrl->ch_ctrl;
index = cy_card[card].bus_index;
base_addr = (unsigned char *)
(cy_card[card].base_addr + (cy_chip_offset[chip]<<index));
- while (cy_readb(base_addr+(CySRER<<index)) & CyTxMpty) {
+ while (cy_readb(base_addr+(CySRER<<index)) & CyTxRdy) {
#ifdef CY_DEBUG_WAIT_UNTIL_SENT
printk("Not clean (jiff=%lu)...", jiffies);
#endif
unsigned char *base_addr = (unsigned char *)
cy_card[info->card].base_addr;
struct FIRM_ID *firm_id = (struct FIRM_ID *) (base_addr + ID_ADDRESS);
- struct ZFW_CTRL *zfw_ctrl =
- (struct ZFW_CTRL *) (base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ struct ZFW_CTRL *zfw_ctrl = (struct ZFW_CTRL *)
+ (base_addr + (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
struct CH_CTRL *ch_ctrl = zfw_ctrl->ch_ctrl;
int channel = info->line - cy_card[info->card].first_line;
int retval;
volatile uclong tx_put, tx_get, tx_bufsize;
firm_id = (struct FIRM_ID *)(cy_card[card].base_addr + ID_ADDRESS);
- zfw_ctrl = (struct ZFW_CTRL *) (cy_card[card].base_addr +
- cy_readl(&firm_id->zfwctrl_addr));
+ zfw_ctrl = (struct ZFW_CTRL *)
+ (cy_card[card].base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
ch_ctrl = &(zfw_ctrl->ch_ctrl[channel]);
buf_ctrl = &(zfw_ctrl->buf_ctrl[channel]);
return;
}
- zfw_ctrl = (struct ZFW_CTRL *)
- (cy_card[card].base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ zfw_ctrl = (struct ZFW_CTRL *)
+ (cy_card[card].base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &zfw_ctrl->board_ctrl;
ch_ctrl = &(zfw_ctrl->ch_ctrl[channel]);
buf_ctrl = &zfw_ctrl->buf_ctrl[channel];
firm_id = (struct FIRM_ID *)
(cy_card[card].base_addr + ID_ADDRESS);
if (ISZLOADED(cy_card[card])) {
- zfw_ctrl = (struct ZFW_CTRL *)
- (cy_card[card].base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ zfw_ctrl = (struct ZFW_CTRL *)
+ (cy_card[card].base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &zfw_ctrl->board_ctrl;
ch_ctrl = zfw_ctrl->ch_ctrl;
lstatus = cy_readl(&ch_ctrl[channel].rs_status);
firm_id = (struct FIRM_ID *)
(cy_card[card].base_addr + ID_ADDRESS);
if (ISZLOADED(cy_card[card])) {
- zfw_ctrl = (struct ZFW_CTRL *)
- (cy_card[card].base_addr + cy_readl(&firm_id->zfwctrl_addr));
+ zfw_ctrl = (struct ZFW_CTRL *)
+ (cy_card[card].base_addr +
+ (cy_readl(&firm_id->zfwctrl_addr) & 0xfffff));
board_ctrl = &zfw_ctrl->board_ctrl;
ch_ctrl = zfw_ctrl->ch_ctrl;
cy_writeb((u_long)base_addr+(CyCAR<<index),
(u_char)(channel & 0x0003)); /* index channel */
cy_writeb((u_long)base_addr+(CySRER<<index),
- cy_readb(base_addr+(CySRER<<index)) & ~CyTxMpty);
+ cy_readb(base_addr+(CySRER<<index)) & ~CyTxRdy);
CY_UNLOCK(info, flags);
} else {
// Nothing to do!
cy_writeb((u_long)base_addr+(CyCAR<<index),
(u_char)(channel & 0x0003)); /* index channel */
cy_writeb((u_long)base_addr+(CySRER<<index),
- cy_readb(base_addr+(CySRER<<index)) | CyTxMpty);
+ cy_readb(base_addr+(CySRER<<index)) | CyTxRdy);
CY_UNLOCK(info, flags);
} else {
// Nothing to do!
return chip_number;
} /* cyy_init_card */
-#ifndef CONFIG_COBALT_27
/*
* ---------------------------------------------------------------------
* cy_detect_isa() - Probe for Cyclom-Y/ISA boards.
__initfunc(static int
cy_detect_isa(void))
{
+#if defined(__i386__) || defined(__alpha__)
unsigned short cy_isa_irq,nboard;
volatile ucchar *cy_isa_address;
unsigned short i,j,cy_isa_nchan;
}
/* probe for CD1400... */
-
#if !defined(__alpha__)
cy_isa_address = ioremap((ulong)cy_isa_address, CyISA_Ywin);
#endif
cy_next_channel += cy_isa_nchan;
}
return(nboard);
-
+#else
+ return(0);
+#endif /* (__i386__) || (__alpha__) */
} /* cy_detect_isa */
-#endif /* CONFIG_COBALT_27 */
static void
plx_init(uclong addr, uclong initctl)
struct pci_dev *pdev = NULL;
unsigned char cyy_rev_id;
unsigned char cy_pci_irq = 0;
- uclong cy_pci_addr0, cy_pci_addr1, cy_pci_addr2;
+ uclong cy_pci_phys0, cy_pci_phys2;
+ uclong cy_pci_addr0, cy_pci_addr2;
unsigned short i,j,cy_pci_nchan, plx_ver;
unsigned short device_id,dev_index = 0;
uclong mailbox;
uclong Ze_addr0[NR_CARDS], Ze_addr2[NR_CARDS], ZeIndex = 0;
+ uclong Ze_phys0[NR_CARDS], Ze_phys2[NR_CARDS];
unsigned char Ze_irq[NR_CARDS];
if(pci_present() == 0) { /* PCI bus not present */
/* read PCI configuration area */
cy_pci_irq = pdev->irq;
- cy_pci_addr0 = pdev->base_address[0];
- cy_pci_addr1 = pdev->base_address[1];
- cy_pci_addr2 = pdev->base_address[2];
- pci_read_config_byte(pdev, PCI_REVISION_ID, &cyy_rev_id);
+ cy_pci_phys0 = pdev->base_address[0];
+ cy_pci_phys2 = pdev->base_address[2];
+ pci_read_config_byte(pdev, PCI_REVISION_ID, &cyy_rev_id);
device_id &= ~PCI_DEVICE_ID_MASK;
printk("rev_id=%d) IRQ%d\n",
cyy_rev_id, (int)cy_pci_irq);
printk("Cyclom-Y/PCI:found winaddr=0x%lx ctladdr=0x%lx\n",
- (ulong)cy_pci_addr2, (ulong)cy_pci_addr0);
+ (ulong)cy_pci_phys2, (ulong)cy_pci_phys0);
#endif
- cy_pci_addr0 &= PCI_BASE_ADDRESS_MEM_MASK;
- cy_pci_addr2 &= PCI_BASE_ADDRESS_MEM_MASK;
+ cy_pci_phys0 &= PCI_BASE_ADDRESS_MEM_MASK;
+ cy_pci_phys2 &= PCI_BASE_ADDRESS_MEM_MASK;
- if (cy_pci_addr2 & ~PCI_BASE_ADDRESS_IO_MASK) {
+ if (cy_pci_phys2 & ~PCI_BASE_ADDRESS_IO_MASK) {
printk(" Warning: PCI I/O bit incorrectly set. "
"Ignoring it...\n");
- cy_pci_addr2 &= PCI_BASE_ADDRESS_IO_MASK;
+ cy_pci_phys2 &= PCI_BASE_ADDRESS_IO_MASK;
}
#if defined(__alpha__)
printk("rev_id=%d) IRQ%d\n",
cyy_rev_id, (int)cy_pci_irq);
printk("Cyclom-Y/PCI:found winaddr=0x%lx ctladdr=0x%lx\n",
- (ulong)cy_pci_addr2, (ulong)cy_pci_addr0);
+ (ulong)cy_pci_phys2, (ulong)cy_pci_phys0);
printk("Cyclom-Y/PCI not supported for low addresses in "
"Alpha systems.\n");
i--;
continue;
}
#else
- cy_pci_addr0 = (ulong)ioremap(cy_pci_addr0, CyPCI_Yctl);
- cy_pci_addr2 = (ulong)ioremap(cy_pci_addr2, CyPCI_Ywin);
+ cy_pci_addr0 = (ulong)ioremap(cy_pci_phys0, CyPCI_Yctl);
+ cy_pci_addr2 = (ulong)ioremap(cy_pci_phys2, CyPCI_Ywin);
#endif
#ifdef CY_PCI_DEBUG
if(cy_pci_nchan == 0) {
printk("Cyclom-Y PCI host card with ");
printk("no Serial-Modules at 0x%lx.\n",
- (ulong) cy_pci_addr2);
+ (ulong) cy_pci_phys2);
i--;
continue;
}
if((cy_next_channel+cy_pci_nchan) > NR_PORTS) {
printk("Cyclom-Y/PCI found at 0x%lx ",
- (ulong) cy_pci_addr2);
+ (ulong) cy_pci_phys2);
printk("but no channels are available.\n");
printk("Change NR_PORTS in cyclades.c and recompile kernel.\n");
return(i);
}
if (j == NR_CARDS) { /* no more cy_cards available */
printk("Cyclom-Y/PCI found at 0x%lx ",
- (ulong) cy_pci_addr2);
+ (ulong) cy_pci_phys2);
printk("but no more cards can be used.\n");
printk("Change NR_CARDS in cyclades.c and recompile kernel.\n");
return(i);
SA_SHIRQ, "Cyclom-Y", &cy_card[j]))
{
printk("Cyclom-Y/PCI found at 0x%lx ",
- (ulong) cy_pci_addr2);
+ (ulong) cy_pci_phys2);
printk("but could not allocate IRQ%d.\n",
cy_pci_irq);
return(i);
}
/* set cy_card */
+ cy_card[j].base_phys = (ulong)cy_pci_phys2;
+ cy_card[j].ctl_phys = (ulong)cy_pci_phys0;
cy_card[j].base_addr = (ulong)cy_pci_addr2;
cy_card[j].ctl_addr = (ulong)cy_pci_addr0;
cy_card[j].irq = (int) cy_pci_irq;
switch (plx_ver) {
case PLX_9050:
- plx_init(cy_pci_addr0, 0x50);
-
- cy_writew(cy_pci_addr0+0x4c,
- cy_readw(cy_pci_addr0+0x4c)|0x0040);
+ cy_writeb(cy_pci_addr0+0x4c, 0x43);
break;
case PLX_9060:
/* print message */
printk("Cyclom-Y/PCI #%d: 0x%lx-0x%lx, IRQ%d, ",
j+1,
- (ulong)cy_pci_addr2,
- (ulong)(cy_pci_addr2 + CyPCI_Ywin - 1),
+ (ulong)cy_pci_phys2,
+ (ulong)(cy_pci_phys2 + CyPCI_Ywin - 1),
(int)cy_pci_irq);
printk("%d channels starting from port %d.\n",
cy_pci_nchan, cy_next_channel);
printk("rev_id=%d) IRQ%d\n",
cyy_rev_id, (int)cy_pci_irq);
printk("Cyclades-Z/PCI: found winaddr=0x%lx ctladdr=0x%lx\n",
- (ulong)cy_pci_addr2, (ulong)cy_pci_addr0);
+ (ulong)cy_pci_phys2, (ulong)cy_pci_phys0);
printk("Cyclades-Z/PCI not supported for low addresses\n");
break;
}else if (device_id == PCI_DEVICE_ID_CYCLOM_Z_Hi){
printk("rev_id=%d) IRQ%d\n",
cyy_rev_id, (int)cy_pci_irq);
printk("Cyclades-Z/PCI: found winaddr=0x%lx ctladdr=0x%lx\n",
- (ulong)cy_pci_addr2, (ulong)cy_pci_addr0);
+ (ulong)cy_pci_phys2, (ulong)cy_pci_phys0);
#endif
- cy_pci_addr0 &= PCI_BASE_ADDRESS_MEM_MASK;
+ cy_pci_phys0 &= PCI_BASE_ADDRESS_MEM_MASK;
+ cy_pci_phys2 &= PCI_BASE_ADDRESS_MEM_MASK;
+
+ if (cy_pci_phys2 & ~PCI_BASE_ADDRESS_IO_MASK) {
+ printk(" Warning: PCI I/O bit incorrectly set. "
+ "Ignoring it...\n");
+ cy_pci_phys2 &= PCI_BASE_ADDRESS_IO_MASK;
+ }
#if !defined(__alpha__)
- cy_pci_addr0 = (ulong)ioremap(cy_pci_addr0, CyPCI_Zctl);
+ cy_pci_addr0 = (ulong)ioremap(cy_pci_phys0, CyPCI_Zctl);
#endif
/* Disable interrupts on the PLX before resetting it */
mailbox = (uclong)cy_readl(&((struct RUNTIME_9060 *)
cy_pci_addr0)->mail_box_0);
- cy_pci_addr2 &= PCI_BASE_ADDRESS_MEM_MASK;
-
- if (cy_pci_addr2 & ~PCI_BASE_ADDRESS_IO_MASK) {
- printk(" Warning: PCI I/O bit incorrectly set. "
- "Ignoring it...\n");
- cy_pci_addr2 &= PCI_BASE_ADDRESS_IO_MASK;
- }
if (mailbox == ZE_V1) {
#if !defined(__alpha__)
- cy_pci_addr2 = (ulong)ioremap(cy_pci_addr2, CyPCI_Ze_win);
+ cy_pci_addr2 = (ulong)ioremap(cy_pci_phys2, CyPCI_Ze_win);
#endif
if (ZeIndex == NR_CARDS) {
printk("Cyclades-Ze/PCI found at 0x%lx ",
- (ulong)cy_pci_addr2);
+ (ulong)cy_pci_phys2);
printk("but no more cards can be used.\n");
printk("Change NR_CARDS in cyclades.c and recompile kernel.\n");
} else {
+ Ze_phys0[ZeIndex] = cy_pci_phys0;
+ Ze_phys2[ZeIndex] = cy_pci_phys2;
Ze_addr0[ZeIndex] = cy_pci_addr0;
Ze_addr2[ZeIndex] = cy_pci_addr2;
Ze_irq[ZeIndex] = cy_pci_irq;
continue;
} else {
#if !defined(__alpha__)
- cy_pci_addr2 = (ulong)ioremap(cy_pci_addr2, CyPCI_Zwin);
+ cy_pci_addr2 = (ulong)ioremap(cy_pci_phys2, CyPCI_Zwin);
#endif
}
if((cy_next_channel+cy_pci_nchan) > NR_PORTS) {
printk("Cyclades-8Zo/PCI found at 0x%lx ",
- (ulong)cy_pci_addr2);
+ (ulong)cy_pci_phys2);
printk("but no channels are available.\n");
printk("Change NR_PORTS in cyclades.c and recompile kernel.\n");
return(i);
}
if (j == NR_CARDS) { /* no more cy_cards available */
printk("Cyclades-8Zo/PCI found at 0x%lx ",
- (ulong)cy_pci_addr2);
+ (ulong)cy_pci_phys2);
printk("but no more cards can be used.\n");
printk("Change NR_CARDS in cyclades.c and recompile kernel.\n");
return(i);
if(request_irq(cy_pci_irq, cyz_interrupt,
SA_SHIRQ, "Cyclades-Z", &cy_card[j]))
{
- printk("Could not allocate IRQ%d ",
+ printk("Cyclades-8Zo/PCI found at 0x%lx ",
+ (ulong) cy_pci_phys2);
+ printk("but could not allocate IRQ%d.\n",
cy_pci_irq);
- printk("for Cyclades-8Zo/PCI at 0x%lx.\n",
- (ulong)cy_pci_addr2);
return(i);
}
}
/* set cy_card */
+ cy_card[j].base_phys = cy_pci_phys2;
+ cy_card[j].ctl_phys = cy_pci_phys0;
cy_card[j].base_addr = cy_pci_addr2;
cy_card[j].ctl_addr = cy_pci_addr0;
cy_card[j].irq = (int) cy_pci_irq;
/* don't report IRQ if board is no IRQ */
if( (cy_pci_irq != 0) && (cy_pci_irq != 255) )
printk("Cyclades-8Zo/PCI #%d: 0x%lx-0x%lx, IRQ%d, ",
- j+1,(ulong)cy_pci_addr2,
- (ulong)(cy_pci_addr2 + CyPCI_Zwin - 1),
+ j+1,(ulong)cy_pci_phys2,
+ (ulong)(cy_pci_phys2 + CyPCI_Zwin - 1),
(int)cy_pci_irq);
else
#endif /* CONFIG_CYZ_INTR */
printk("Cyclades-8Zo/PCI #%d: 0x%lx-0x%lx, ",
- j+1,(ulong)cy_pci_addr2,
- (ulong)(cy_pci_addr2 + CyPCI_Zwin - 1));
+ j+1,(ulong)cy_pci_phys2,
+ (ulong)(cy_pci_phys2 + CyPCI_Zwin - 1));
printk("%d channels starting from port %d.\n",
cy_pci_nchan,cy_next_channel);
}
for (; ZeIndex != 0 && i < NR_CARDS; i++) {
+ cy_pci_phys0 = Ze_phys0[0];
+ cy_pci_phys2 = Ze_phys2[0];
cy_pci_addr0 = Ze_addr0[0];
cy_pci_addr2 = Ze_addr2[0];
cy_pci_irq = Ze_irq[0];
for (j = 0 ; j < ZeIndex-1 ; j++) {
+ Ze_phys0[j] = Ze_phys0[j+1];
+ Ze_phys2[j] = Ze_phys2[j+1];
Ze_addr0[j] = Ze_addr0[j+1];
Ze_addr2[j] = Ze_addr2[j+1];
Ze_irq[j] = Ze_irq[j+1];
if((cy_next_channel+cy_pci_nchan) > NR_PORTS) {
printk("Cyclades-Ze/PCI found at 0x%lx ",
- (ulong)cy_pci_addr2);
+ (ulong)cy_pci_phys2);
printk("but no channels are available.\n");
printk("Change NR_PORTS in cyclades.c and recompile kernel.\n");
return(i);
}
if (j == NR_CARDS) { /* no more cy_cards available */
printk("Cyclades-Ze/PCI found at 0x%lx ",
- (ulong)cy_pci_addr2);
+ (ulong)cy_pci_phys2);
printk("but no more cards can be used.\n");
printk("Change NR_CARDS in cyclades.c and recompile kernel.\n");
return(i);
if(request_irq(cy_pci_irq, cyz_interrupt,
SA_SHIRQ, "Cyclades-Z", &cy_card[j]))
{
- printk("Could not allocate IRQ%d ",
+ printk("Cyclades-Ze/PCI found at 0x%lx ",
+ (ulong) cy_pci_phys2);
+ printk("but could not allocate IRQ%d.\n",
cy_pci_irq);
- printk("for Cyclades-Ze/PCI at 0x%lx.\n",
- (ulong) cy_pci_addr2);
return(i);
}
}
#endif /* CONFIG_CYZ_INTR */
/* set cy_card */
+ cy_card[j].base_phys = cy_pci_phys2;
+ cy_card[j].ctl_phys = cy_pci_phys0;
cy_card[j].base_addr = cy_pci_addr2;
cy_card[j].ctl_addr = cy_pci_addr0;
cy_card[j].irq = (int) cy_pci_irq;
/* don't report IRQ if board is no IRQ */
if( (cy_pci_irq != 0) && (cy_pci_irq != 255) )
printk("Cyclades-Ze/PCI #%d: 0x%lx-0x%lx, IRQ%d, ",
- j+1,(ulong)cy_pci_addr2,
- (ulong)(cy_pci_addr2 + CyPCI_Ze_win - 1),
+ j+1,(ulong)cy_pci_phys2,
+ (ulong)(cy_pci_phys2 + CyPCI_Ze_win - 1),
(int)cy_pci_irq);
else
#endif /* CONFIG_CYZ_INTR */
printk("Cyclades-Ze/PCI #%d: 0x%lx-0x%lx, ",
- j+1,(ulong)cy_pci_addr2,
- (ulong)(cy_pci_addr2 + CyPCI_Ze_win - 1));
+ j+1,(ulong)cy_pci_phys2,
+ (ulong)(cy_pci_phys2 + CyPCI_Ze_win - 1));
printk("%d channels starting from port %d.\n",
cy_pci_nchan,cy_next_channel);
}
if (ZeIndex != 0) {
printk("Cyclades-Ze/PCI found at 0x%x ",
- (unsigned int) Ze_addr2[0]);
+ (unsigned int) Ze_phys2[0]);
printk("but no more cards can be used.\n");
printk("Change NR_CARDS in cyclades.c and recompile kernel.\n");
}
availability of cy_card and cy_port data structures and updating
the cy_next_channel. */
-#ifndef CONFIG_COBALT_27
/* look for isa boards */
cy_isa_nboard = cy_detect_isa();
-#endif /* CONFIG_COBALT_27 */
/* look for pci boards */
cy_pci_nboard = cy_detect_pci();
info->icount.frame = info->icount.parity = 0;
info->icount.overrun = info->icount.brk = 0;
chip_number = (port - cinfo->first_line) / 4;
- if ((info->chip_rev = cy_readb(cinfo->base_addr +
- (cy_chip_offset[chip_number]<<index) +
- (CyGFRCR<<index))) >= CD1400_REV_J) {
+ if ((info->chip_rev =
+ cy_readb(cinfo->base_addr +
+ (cy_chip_offset[chip_number]<<index) +
+ (CyGFRCR<<index))) >= CD1400_REV_J) {
/* It is a CD1400 rev. J or later */
info->tbpr = baud_bpr_60[13]; /* Tx BPR */
info->tco = baud_co_60[13]; /* Tx CO */
tmp_buf = NULL;
}
} /* cleanup_module */
-#else
+#else /* MODULE */
/* called by linux/init/main.c to parse command line options */
void
cy_setup(char *str, int *ints)
{
-#ifndef CONFIG_COBALT_27
+#if defined(__i386__) || defined(__alpha__)
int i, j;
for (i = 0 ; i < NR_ISA_ADDRS ; i++) {
cy_isa_addresses[i++] = (unsigned char *)(ints[j]);
}
}
-#endif /* CONFIG_COBALT_27 */
-
+#endif /* (__i386__) || (__alpha__) */
} /* cy_setup */
-#endif
-
-
-#ifdef CYCLOM_SHOW_STATUS
-static void
-show_status(int line_num)
-{
- unsigned char *base_addr;
- int card,chip,channel,index;
- struct cyclades_port * info;
- unsigned long flags;
-
- info = &cy_port[line_num];
- card = info->card;
- index = cy_card[card].bus_index;
- channel = (info->line) - (cy_card[card].first_line);
- chip = channel>>2;
- channel &= 0x03;
- printk(" card %d, chip %d, channel %d\n", card, chip, channel);/**/
-
- printk(" cy_card\n");
- printk(" irq base_addr num_chips first_line = %d %lx %d %d\n",
- cy_card[card].irq, (long)cy_card[card].base_addr,
- cy_card[card].num_chips, cy_card[card].first_line);
-
- printk(" cy_port\n");
- printk(" card line flags = %d %d %x\n",
- info->card, info->line, info->flags);
- printk(" *tty read_status_mask timeout xmit_fifo_size ",
- printk("= %lx %x %x %x\n",
- (long)info->tty, info->read_status_mask,
- info->timeout, info->xmit_fifo_size);
- printk(" cor1,cor2,cor3,cor4,cor5 = %x %x %x %x %x\n",
- info->cor1, info->cor2, info->cor3, info->cor4, info->cor5);
- printk(" tbpr,tco,rbpr,rco = %d %d %d %d\n",
- info->tbpr, info->tco, info->rbpr, info->rco);
- printk(" close_delay event count = %d %d %d\n",
- info->close_delay, info->event, info->count);
- printk(" x_char blocked_open = %x %x\n",
- info->x_char, info->blocked_open);
- printk(" session pgrp open_wait = %lx %lx %lx\n",
- info->session, info->pgrp, (long)info->open_wait);
-
- CY_LOCK(info, flags);
-
- base_addr = (unsigned char*)
- (cy_card[card].base_addr
- + (cy_chip_offset[chip]<<index));
-
-/* Global Registers */
-
- printk(" CyGFRCR %x\n", cy_readb(base_addr + CyGFRCR<<index));
- printk(" CyCAR %x\n", cy_readb(base_addr + CyCAR<<index));
- printk(" CyGCR %x\n", cy_readb(base_addr + CyGCR<<index));
- printk(" CySVRR %x\n", cy_readb(base_addr + CySVRR<<index));
- printk(" CyRICR %x\n", cy_readb(base_addr + CyRICR<<index));
- printk(" CyTICR %x\n", cy_readb(base_addr + CyTICR<<index));
- printk(" CyMICR %x\n", cy_readb(base_addr + CyMICR<<index));
- printk(" CyRIR %x\n", cy_readb(base_addr + CyRIR<<index));
- printk(" CyTIR %x\n", cy_readb(base_addr + CyTIR<<index));
- printk(" CyMIR %x\n", cy_readb(base_addr + CyMIR<<index));
- printk(" CyPPR %x\n", cy_readb(base_addr + CyPPR<<index));
-
- cy_writeb(base_addr + CyCAR<<index, (u_char)channel);
-
-/* Virtual Registers */
-
- printk(" CyRIVR %x\n", cy_readb(base_addr + CyRIVR<<index));
- printk(" CyTIVR %x\n", cy_readb(base_addr + CyTIVR<<index));
- printk(" CyMIVR %x\n", cy_readb(base_addr + CyMIVR<<index));
- printk(" CyMISR %x\n", cy_readb(base_addr + CyMISR<<index));
-
-/* Channel Registers */
-
- printk(" CyCCR %x\n", cy_readb(base_addr + CyCCR<<index));
- printk(" CySRER %x\n", cy_readb(base_addr + CySRER<<index));
- printk(" CyCOR1 %x\n", cy_readb(base_addr + CyCOR1<<index));
- printk(" CyCOR2 %x\n", cy_readb(base_addr + CyCOR2<<index));
- printk(" CyCOR3 %x\n", cy_readb(base_addr + CyCOR3<<index));
- printk(" CyCOR4 %x\n", cy_readb(base_addr + CyCOR4<<index));
- printk(" CyCOR5 %x\n", cy_readb(base_addr + CyCOR5<<index));
- printk(" CyCCSR %x\n", cy_readb(base_addr + CyCCSR<<index));
- printk(" CyRDCR %x\n", cy_readb(base_addr + CyRDCR<<index));
- printk(" CySCHR1 %x\n", cy_readb(base_addr + CySCHR1<<index));
- printk(" CySCHR2 %x\n", cy_readb(base_addr + CySCHR2<<index));
- printk(" CySCHR3 %x\n", cy_readb(base_addr + CySCHR3<<index));
- printk(" CySCHR4 %x\n", cy_readb(base_addr + CySCHR4<<index));
- printk(" CySCRL %x\n", cy_readb(base_addr + CySCRL<<index));
- printk(" CySCRH %x\n", cy_readb(base_addr + CySCRH<<index));
- printk(" CyLNC %x\n", cy_readb(base_addr + CyLNC<<index));
- printk(" CyMCOR1 %x\n", cy_readb(base_addr + CyMCOR1<<index));
- printk(" CyMCOR2 %x\n", cy_readb(base_addr + CyMCOR2<<index));
- printk(" CyRTPR %x\n", cy_readb(base_addr + CyRTPR<<index));
- printk(" CyMSVR1 %x\n", cy_readb(base_addr + CyMSVR1<<index));
- printk(" CyMSVR2 %x\n", cy_readb(base_addr + CyMSVR2<<index));
- printk(" CyRBPR %x\n", cy_readb(base_addr + CyRBPR<<index));
- printk(" CyRCOR %x\n", cy_readb(base_addr + CyRCOR<<index));
- printk(" CyTBPR %x\n", cy_readb(base_addr + CyTBPR<<index));
- printk(" CyTCOR %x\n", cy_readb(base_addr + CyTCOR<<index));
-
- CY_UNLOCK(info, flags);
-} /* show_status */
-#endif
+#endif /* MODULE */
* Al Longyear <longyear@netcom.com>, Paul Mackerras <Paul.Mackerras@cs.anu.edu.au>
*
* Original release 01/11/99
- * ==FILEDATE 19990901==
+ * ==FILEDATE 20000515==
*
* This code is released under the GNU General Public License (GPL)
*
*/
#define HDLC_MAGIC 0x239e
-#define HDLC_VERSION "1.11"
+#define HDLC_VERSION "1.15"
#include <linux/version.h>
#include <linux/config.h>
#include <linux/malloc.h>
#include <linux/tty.h>
#include <linux/errno.h>
-#include <linux/sched.h> /* to get the struct task_struct */
#include <linux/string.h> /* used in new tty drivers */
#include <linux/signal.h> /* used in new tty drivers */
#include <asm/system.h>
#if LINUX_VERSION_CODE < VERSION(2,1,0)
#define __init
typedef int spinlock_t;
+#define spin_lock_init(a)
#define spin_lock_irqsave(a,b) {save_flags((b));cli();}
#define spin_unlock_irqrestore(a,b) {restore_flags((b));}
#define spin_lock(a)
register int actual;
unsigned long flags;
N_HDLC_BUF *tbuf;
-
+
if (debuglevel >= DEBUG_LEVEL_INFO)
printk("%s(%d)n_hdlc_send_frames() called\n",__FILE__,__LINE__);
+ check_again:
save_flags(flags);
cli ();
return;
}
n_hdlc->tbusy = 1;
+ n_hdlc->woke_up = 0;
restore_flags(flags);
/* get current transmit buffer or get new transmit */
__FILE__,__LINE__,tbuf,tbuf->count);
/* Send the next block of data to device */
- n_hdlc->woke_up = 0;
tty->flags |= (1 << TTY_DO_WRITE_WAKEUP);
actual = tty->driver.write(tty, 0, tbuf->buf, tbuf->count);
/* wait up sleeping writers */
wake_up_interruptible(&n_hdlc->write_wait);
+ wake_up_interruptible(&n_hdlc->poll_wait);
/* get next pending transmit buffer */
tbuf = n_hdlc_buf_get(&n_hdlc->tx_buf_list);
__FILE__,__LINE__,tbuf);
/* buffer not accepted by driver */
-
- /* check if wake up code called since last write call */
- if (n_hdlc->woke_up)
- continue;
-
/* set this buffer as pending buffer */
n_hdlc->tbuf = tbuf;
break;
n_hdlc->tbusy = 0;
restore_flags(flags);
+ if (n_hdlc->woke_up)
+ goto check_again;
+
if (debuglevel >= DEBUG_LEVEL_INFO)
printk("%s(%d)n_hdlc_send_frames() exit\n",__FILE__,__LINE__);
tty->flags &= ~(1 << TTY_DO_WRITE_WAKEUP);
return;
}
-
- if (!n_hdlc->tbuf)
- tty->flags &= ~(1 << TTY_DO_WRITE_WAKEUP);
- else
- n_hdlc_send_frames (n_hdlc, tty);
+
+ n_hdlc_send_frames (n_hdlc, tty);
} /* end of n_hdlc_tty_wakeup() */
wake_up_interruptible (&n_hdlc->read_wait);
wake_up_interruptible (&n_hdlc->poll_wait);
if (n_hdlc->tty->fasync != NULL)
+#if LINUX_VERSION_CODE < VERSION(2,3,0)
kill_fasync (n_hdlc->tty->fasync, SIGIO);
-
+#else
+ kill_fasync (n_hdlc->tty->fasync, SIGIO, POLL_IN);
+#endif
} /* end of n_hdlc_tty_receive() */
/* n_hdlc_tty_read()
count = maxframe;
}
+ add_wait_queue(&n_hdlc->write_wait, &wait);
+ set_current_state(TASK_INTERRUPTIBLE);
+
/* Allocate transmit buffer */
- tbuf = n_hdlc_buf_get(&n_hdlc->tx_free_buf_list);
- if (!tbuf) {
- /* sleep until transmit buffer available */
- add_wait_queue(&n_hdlc->write_wait, &wait);
- while (!tbuf) {
- set_current_state(TASK_INTERRUPTIBLE);
- schedule();
-
- n_hdlc = tty2n_hdlc (tty);
- if (!n_hdlc || n_hdlc->magic != HDLC_MAGIC ||
- tty != n_hdlc->tty) {
- printk("n_hdlc_tty_write: %p invalid after wait!\n", n_hdlc);
- error = -EIO;
- break;
- }
+ /* sleep until transmit buffer available */
+ while (!(tbuf = n_hdlc_buf_get(&n_hdlc->tx_free_buf_list))) {
+ schedule();
- if (signal_pending(current)) {
- error = -EINTR;
- break;
- }
+ n_hdlc = tty2n_hdlc (tty);
+ if (!n_hdlc || n_hdlc->magic != HDLC_MAGIC ||
+ tty != n_hdlc->tty) {
+ printk("n_hdlc_tty_write: %p invalid after wait!\n", n_hdlc);
+ error = -EIO;
+ break;
+ }
- tbuf = n_hdlc_buf_get(&n_hdlc->tx_free_buf_list);
+ if (signal_pending(current)) {
+ error = -EINTR;
+ break;
}
- set_current_state(TASK_RUNNING);
- remove_wait_queue(&n_hdlc->write_wait, &wait);
}
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&n_hdlc->write_wait, &wait);
+
if (!error) {
/* Retrieve the user's buffer */
COPY_FROM_USER (error, tbuf->buf, data, count);
if (n_hdlc && n_hdlc->magic == HDLC_MAGIC && tty == n_hdlc->tty) {
/* queue current process into any wait queue that */
/* may awaken in the future (read and write) */
+#if LINUX_VERSION_CODE < VERSION(2,1,89)
+ poll_wait(&n_hdlc->poll_wait, wait);
+#else
poll_wait(filp, &n_hdlc->poll_wait, wait);
+#endif
/* set bits for operations that wont block */
if(n_hdlc->rx_buf_list.head)
mask |= POLLIN | POLLRDNORM; /* readable */
void n_hdlc_buf_list_init(N_HDLC_BUF_LIST *list)
{
memset(list,0,sizeof(N_HDLC_BUF_LIST));
-
+ spin_lock_init(&list->spinlock);
} /* end of n_hdlc_buf_list_init() */
/* n_hdlc_buf_put()
#ifdef CONFIG_3215
extern long con3215_init(long, long);
#endif /* CONFIG_3215 */
+#ifdef CONFIG_HWC_CONSOLE
+extern long hwc_console_init(long);
+#endif
#ifndef MIN
#define MIN(a,b) ((a) < (b) ? (a) : (b))
II. IOP Access
Access to the I2O subsystem is provided through the device file named
-/dev/i2octl. This file is a character file with major number 10 and minor
+/dev/i2o/ctl. This file is a character file with major number 10 and minor
number 166. It can be created through the following command:
- mknod /dev/i2octl c 10 166
+ mknod /dev/i2o/ctl c 10 166
III. Determining the IOP Count
In the process of determining this. Current idea is to have use
the select() interface to allow user apps to periodically poll
- the /dev/i2octl device for events. When select() notifies the user
+ the /dev/i2o/ctl device for events. When select() notifies the user
that an event is available, the user would call read() to retrieve
a list of all the events that are pending for the specific device.
{ { 0, -1 }, } },
{ PCI_VENDOR_ID_AFAVLAB, PCI_DEVICE_ID_AFAVLAB_TK9902, 1,
{ { 0, 1 }, } },
+ { PCI_VENDOR_ID_TIMEDIA, PCI_DEVICE_ID_TIMEDIA_1889, 1,
+ { { 2, -1 }, } },
{ 0, }
};
tristate 'DEPCA, DE10x, DE200, DE201, DE202, DE422 support' CONFIG_DEPCA
tristate 'EtherWORKS 3 (DE203, DE204, DE205) support' CONFIG_EWRK3
tristate 'EtherExpress 16 support' CONFIG_EEXPRESS
- tristate 'EtherExpressPro support' CONFIG_EEXPRESS_PRO
+ tristate 'EtherExpressPro/EtherExpress 10 (i82595) support' CONFIG_EEXPRESS_PRO
tristate 'FMV-181/182/183/184 support' CONFIG_FMV18X
tristate 'HP PCLAN+ (27247B and 27252A) support' CONFIG_HPLAN_PLUS
tristate 'HP PCLAN (27245 and other 27xxx series) support' CONFIG_HPLAN
struct device *dev=(struct device *)d;
struct comx_channel *ch=dev->priv;
struct syncppp_data *spch=ch->LINE_privdata;
- struct sppp *sp = &((struct ppp_device *)dev)->sppp;
+ struct sppp *sp = (struct sppp *)sppp_of(dev);
if(!(ch->line_status & PROTO_UP) &&
(sp->pp_link_state==SPPP_LINK_UP)) {
static int syncppp_init(struct device *dev)
{
struct comx_channel *ch = dev->priv;
- struct ppp_device *pppdev = (struct ppp_device*)dev;
+ struct ppp_device *pppdev = (struct ppp_device *)ch->if_ptr;
ch->LINE_privdata = kmalloc(sizeof(struct syncppp_data), GFP_KERNEL);
+ pppdev->dev = dev;
sppp_attach(pppdev);
if(ch->protocol == &hdlc_protocol) {
debug_file->write_proc = &comx_write_proc;
debug_file->nlink = 1;
- /* struct ppp_device is a bit larger then struct device and the
- syncppp driver needs it */
- if ((dev = kmalloc(sizeof(struct ppp_device), GFP_KERNEL)) == NULL) {
+ if ((dev = kmalloc(sizeof(struct device), GFP_KERNEL)) == NULL) {
return -ENOMEM;
}
- memset(dev, 0, sizeof(struct ppp_device));
+ memset(dev, 0, sizeof(struct device));
dev->name = (char *)new_dir->name;
dev->init = comx_init_dev;
return -EIO;
}
ch=dev->priv;
+ if((ch->if_ptr = (void *)kmalloc(sizeof(struct ppp_device),
+ GFP_KERNEL)) == NULL) {
+ return -ENOMEM;
+ }
+ memset(ch->if_ptr, 0, sizeof(struct ppp_device));
ch->debug_file = debug_file;
ch->procdir = new_dir;
new_dir->data = dev;
};
struct comx_channel {
+ void *if_ptr; // General purpose pointer
struct device *dev; // Where we belong to
struct device *twin; // On dual-port cards
struct proc_dir_entry *procdir; // the directory
/* Per-channel data structure */
struct channel_data {
+ void *if_ptr; /* General purpose pointer (used by SPPP) */
int usage; /* Usage count; >0 for chrdev, -1 for netdev */
int num; /* Number of the channel */
struct cosa_data *cosa; /* Pointer to the per-card structure */
static void sppp_channel_init(struct channel_data *chan)
{
struct device *d;
+ chan->if_ptr = &chan->pppdev;
+ chan->pppdev.dev = kmalloc(sizeof(struct device), GFP_KERNEL);
sppp_attach(&chan->pppdev);
- d=&chan->pppdev.dev;
+ d=chan->pppdev.dev;
d->name = chan->name;
d->base_addr = chan->cosa->datareg;
d->irq = chan->cosa->irq;
dev_init_buffers(d);
if (register_netdev(d) == -1) {
printk(KERN_WARNING "%s: register_netdev failed.\n", d->name);
- sppp_detach(&chan->pppdev.dev);
+ sppp_detach(chan->pppdev.dev);
return;
}
}
static void sppp_channel_delete(struct channel_data *chan)
{
- sppp_detach(&chan->pppdev.dev);
- unregister_netdev(&chan->pppdev.dev);
+ sppp_detach(chan->pppdev.dev);
+ unregister_netdev(chan->pppdev.dev);
}
chan->stats.rx_dropped++;
return NULL;
}
- chan->pppdev.dev.trans_start = jiffies;
+ chan->pppdev.dev->trans_start = jiffies;
return skb_put(chan->rx_skb, size);
}
return 0;
}
chan->rx_skb->protocol = htons(ETH_P_WAN_PPP);
- chan->rx_skb->dev = &chan->pppdev.dev;
+ chan->rx_skb->dev = chan->pppdev.dev;
chan->rx_skb->mac.raw = chan->rx_skb->data;
chan->stats.rx_packets++;
chan->stats.rx_bytes += chan->cosa->rxsize;
netif_rx(chan->rx_skb);
chan->rx_skb = 0;
- chan->pppdev.dev.trans_start = jiffies;
+ chan->pppdev.dev->trans_start = jiffies;
return 0;
}
chan->tx_skb = 0;
chan->stats.tx_packets++;
chan->stats.tx_bytes += size;
- chan->pppdev.dev.tbusy = 0;
+ chan->pppdev.dev->tbusy = 0;
mark_bh(NET_BH);
return 1;
}
This is a compatibility hardware problem.
Versions:
+ 0.12 added support for 82595FX EtherExpress 10 based cards
+ (aris <aris@conectiva.com.br>, 04/26/2000)
0.11e some tweaks about multiple cards support (PdP, jul/aug 1999)
0.11d added __initdata, __initfunc stuff; call spin_lock_init
in eepro_probe1. Replaced "eepro" by dev->name. Augmented
*/
static const char *version =
- "eepro.c: v0.11d 08/12/1998 dupuis@lei.ucl.ac.be\n";
+ "eepro.c: v0.12 04/26/2000 aris@conectiva.com.br\n";
#include <linux/module.h>
/* First, a few definitions that the brave might change. */
/* A zero-terminated list of I/O addresses to be probed. */
static unsigned int eepro_portlist[] compat_init_data =
- { 0x300, 0x210, 0x240, 0x280, 0x2C0, 0x200, 0x320, 0x340, 0x360, 0};
+#ifdef PnPWakeup
+ { 0x210, 0x300,
+#else
+ { 0x300, 0x210,
+#endif
+ 0x240, 0x280, 0x2C0, 0x200, 0x320, 0x340, 0x360, 0};
/* note: 0x300 is default, the 595FX supports ALL IO Ports
from 0x000 to 0x3F0, some of which are reserved in PCs */
#define LAN595TX 1
#define LAN595FX 2
+/* global flag remembering whether the board is an EtherExpress 10 */
+static unsigned char etherexpress10 = 0;
+
/* Information that need to be kept for each board. */
struct eepro_local {
struct enet_statistics stats;
buffer (transmit-buffer = 32K - receive-buffer).
*/
+
+/* This section can now be used by both board families: the older boards
+ * and the ee10. The ee10 places the tx buffer before the rx buffer;
+ * the older boards use the inverse layout. (aris)
+ */
#define RAM_SIZE 0x8000
+
#define RCV_HEADER 8
-#define RCV_RAM 0x6000 /* 24KB default for RCV buffer */
-#define RCV_LOWER_LIMIT 0x00 /* 0x0000 */
-/* #define RCV_UPPER_LIMIT ((RCV_RAM - 2) >> 8) */ /* 0x5ffe */
-#define RCV_UPPER_LIMIT (((rcv_ram) - 2) >> 8)
-/* #define XMT_RAM (RAM_SIZE - RCV_RAM) */ /* 8KB for XMT buffer */
-#define XMT_RAM (RAM_SIZE - (rcv_ram)) /* 8KB for XMT buffer */
-/* #define XMT_LOWER_LIMIT (RCV_RAM >> 8) */ /* 0x6000 */
-#define XMT_LOWER_LIMIT ((rcv_ram) >> 8)
-#define XMT_UPPER_LIMIT ((RAM_SIZE - 2) >> 8) /* 0x7ffe */
-#define XMT_HEADER 8
+#define RCV_DEFAULT_RAM 0x6000
+#define RCV_RAM rcv_ram
+
+static unsigned rcv_ram = RCV_DEFAULT_RAM;
+
+#define XMT_HEADER 8
+#define XMT_RAM (RAM_SIZE - RCV_RAM)
+
+#define XMT_START ((rcv_start + RCV_RAM) % RAM_SIZE)
+
+#define RCV_LOWER_LIMIT (rcv_start >> 8)
+#define RCV_UPPER_LIMIT (((rcv_start + RCV_RAM) - 2) >> 8)
+#define XMT_LOWER_LIMIT (XMT_START >> 8)
+#define XMT_UPPER_LIMIT (((XMT_START + XMT_RAM) - 2) >> 8)
+
+#define RCV_START_PRO 0x00
+#define RCV_START_10 XMT_RAM
+ /* default to the old (PRO) driver layout */
+static unsigned rcv_start = RCV_START_PRO;
#define RCV_DONE 0x0008
#define RX_OK 0x2000
#define IO_32_BIT 0x10
#define RCV_BAR 0x04 /* The following are word (16-bit) registers */
#define RCV_STOP 0x06
-#define XMT_BAR 0x0a
+
+#define XMT_BAR_PRO 0x0a
+#define XMT_BAR_10 0x0b
+static unsigned xmt_bar = XMT_BAR_PRO;
+
#define HOST_ADDRESS_REG 0x0c
#define IO_PORT 0x0e
#define IO_PORT_32_BIT 0x0c
#define INT_NO_REG 0x02
#define RCV_LOWER_LIMIT_REG 0x08
#define RCV_UPPER_LIMIT_REG 0x09
-#define XMT_LOWER_LIMIT_REG 0x0a
-#define XMT_UPPER_LIMIT_REG 0x0b
+
+#define XMT_LOWER_LIMIT_REG_PRO 0x0a
+#define XMT_UPPER_LIMIT_REG_PRO 0x0b
+#define XMT_LOWER_LIMIT_REG_10 0x0b
+#define XMT_UPPER_LIMIT_REG_10 0x0a
+static unsigned xmt_lower_limit_reg = XMT_LOWER_LIMIT_REG_PRO;
+static unsigned xmt_upper_limit_reg = XMT_UPPER_LIMIT_REG_PRO;
/* Bank 2 registers */
#define XMT_Chain_Int 0x20 /* Interrupt at the end of the transmit chain */
#define I_ADD_REG4 0x08
#define I_ADD_REG5 0x09
-#define EEPROM_REG 0x0a
+#define EEPROM_REG_PRO 0x0a
+#define EEPROM_REG_10 0x0b
+static unsigned eeprom_reg = EEPROM_REG_PRO;
+
#define EESK 0x01
#define EECS 0x02
#define EEDI 0x04
(detachable devices only).
*/
#ifdef HAVE_DEVLIST
-/* Support for an alternate probe manager, which will eliminate the
- boilerplate below. */
+ /* Support for an alternate probe manager, which will eliminate the
+ boilerplate below. */
struct netdev_entry netcard_drv =
-{"eepro", eepro_probe1, EEPRO_IO_EXTENT, eepro_portlist};
+ {"eepro", eepro_probe1, EEPRO_IO_EXTENT, eepro_portlist};
#else
compat_init_func(int eepro_probe(struct device *dev))
{
}
#endif
-void printEEPROMInfo(short ioaddr)
+static void printEEPROMInfo(short ioaddr)
{
unsigned short Word;
int i,j;
id=inb(ioaddr + ID_REG);
- printk(KERN_DEBUG " id: %#x ",id);
- printk(" io: %#x ",ioaddr);
-
if (((id) & ID_REG_MASK) == ID_REG_SIG) {
/* We seem to have the 82595 signature, let's
play with its counter (last 2 bits of
register 2 of bank 0) to be sure. */
-
+
counter = (id & R_ROBIN_BITS);
if (((id=inb(ioaddr+ID_REG)) & R_ROBIN_BITS) ==
(counter + 0x40)) {
/* Yes, the 82595 has been found */
+ printk(KERN_DEBUG " id: %#x ",id);
+ printk(" io: %#x ",ioaddr);
/* Now, get the ethernet hardware address from
the EEPROM */
station_addr[0] = read_eeprom(ioaddr, 2);
+
+			/* FIXME - find another way to know that we've found
+			 * an EtherExpress 10
+			 */
+ if (station_addr[0] == 0x0000 ||
+ station_addr[0] == 0xffff) {
+ etherexpress10 = 1;
+ eeprom_reg = EEPROM_REG_10;
+ rcv_start = RCV_START_10;
+ xmt_lower_limit_reg = XMT_LOWER_LIMIT_REG_10;
+ xmt_upper_limit_reg = XMT_UPPER_LIMIT_REG_10;
+
+ station_addr[0] = read_eeprom(ioaddr, 2);
+ }
+
station_addr[1] = read_eeprom(ioaddr, 3);
station_addr[2] = read_eeprom(ioaddr, 4);
/* Check the station address for the manufacturer's code */
if (net_debug>3)
printEEPROMInfo(ioaddr);
-
- if (read_eeprom(ioaddr,7)== ee_FX_INT2IRQ) { /* int to IRQ Mask */
+
+ if (etherexpress10) {
+ eepro = 2;
+ printk("%s: Intel EtherExpress 10 ISA\n at %#x,",
+ dev->name, ioaddr);
+ } else if (read_eeprom(ioaddr,7)== ee_FX_INT2IRQ) {
+ /* int to IRQ Mask */
eepro = 2;
printk("%s: Intel EtherExpress Pro/10+ ISA\n at %#x,",
dev->name, ioaddr);
} else
if (station_addr[2] == 0x00aa) {
eepro = 1;
- printk("%s: Intel EtherExpress Pro/10 ISA at %#x,",
+ printk("%s: Intel EtherExpress Pro/10 ISA at %#x,",
dev->name, ioaddr);
}
else {
printk("%c%02x", i ? ':' : ' ', dev->dev_addr[i]);
}
+ dev->mem_start = (RCV_LOWER_LIMIT << 8);
+
if ((dev->mem_end & 0x3f) < 3 || /* RX buffer must be more than 3K */
(dev->mem_end & 0x3f) > 29) /* and less than 29K */
- dev->mem_end = RCV_RAM; /* or it will be set to 24K */
- else dev->mem_end = 1024*dev->mem_end; /* Maybe I should shift << 10 */
+ dev->mem_end = (RCV_UPPER_LIMIT << 8);
+ else {
+ dev->mem_end = (dev->mem_end * 1024) +
+ (RCV_LOWER_LIMIT << 8);
+ rcv_ram = dev->mem_end - (RCV_LOWER_LIMIT << 8);
+ }
- /* From now on, dev->mem_end contains the actual size of rx buffer */
+ /* From now on, dev->mem_end - dev->mem_start contains
+ * the actual size of rx buffer
+ */
if (net_debug > 3)
- printk(", %dK RCV buffer", (int)(dev->mem_end)/1024);
-
-
+ printk(", %dK RCV buffer",
+ (int)(dev->mem_end - dev->mem_start)/1024);
+
/* ............... */
if (GetBit( read_eeprom(ioaddr, 5),ee_BNC_TPE))
return ENODEV;
} else
- if (dev->irq==2) dev->irq = 9;
-
- else if (dev->irq == 2)
- dev->irq = 9;
+ if (dev->irq==2)
+ dev->irq = 9;
}
if (dev->irq > 2) {
}
else printk(", %s.\n", ifmap[dev->if_port]);
- if ((dev->mem_start & 0xf) > 0) /* I don't know if this is */
- net_debug = dev->mem_start & 7; /* still useful or not */
-
if (net_debug > 3) {
i = read_eeprom(ioaddr, 5);
if (i & 0x2000) /* bit 13 of EEPROM word 5 */
{
unsigned short temp_reg, old8, old9;
int irqMask;
- int i, ioaddr = dev->base_addr, rcv_ram = dev->mem_end;
+ int i, ioaddr = dev->base_addr;
+
struct eepro_local *lp = (struct eepro_local *)dev->priv;
if (net_debug > 3)
/* Initialize the 82595. */
outb(BANK2_SELECT, ioaddr); /* be CAREFUL, BANK 2 now */
- temp_reg = inb(ioaddr + EEPROM_REG);
+ temp_reg = inb(ioaddr + eeprom_reg);
lp->stepping = temp_reg >> 5; /* Get the stepping number of the 595 */
printk(KERN_DEBUG "The stepping of the 82595 is %d\n", lp->stepping);
if (temp_reg & 0x10) /* Check the TurnOff Enable bit */
- outb(temp_reg & 0xef, ioaddr + EEPROM_REG);
- for (i=0; i < 6; i++)
+ outb(temp_reg & 0xef, ioaddr + eeprom_reg);
+ for (i=0; i < 6; i++)
outb(dev->dev_addr[i] , ioaddr + I_ADD_REG0 + i);
temp_reg = inb(ioaddr + REG1); /* Setup Transmit Chaining */
/* Initialize the RCV and XMT upper and lower limits */
outb(RCV_LOWER_LIMIT, ioaddr + RCV_LOWER_LIMIT_REG);
outb(RCV_UPPER_LIMIT, ioaddr + RCV_UPPER_LIMIT_REG);
- outb(XMT_LOWER_LIMIT, ioaddr + XMT_LOWER_LIMIT_REG);
- outb(XMT_UPPER_LIMIT, ioaddr + XMT_UPPER_LIMIT_REG);
+ outb(XMT_LOWER_LIMIT, ioaddr + xmt_lower_limit_reg);
+ outb(XMT_UPPER_LIMIT, ioaddr + xmt_upper_limit_reg);
/* Enable the interrupt line. */
temp_reg = inb(ioaddr + REG1);
outb(ALL_MASK, ioaddr + STATUS_REG);
/* Initialize RCV */
- outw(RCV_LOWER_LIMIT << 8, ioaddr + RCV_BAR);
- lp->rx_start = (RCV_LOWER_LIMIT << 8) ;
- outw((RCV_UPPER_LIMIT << 8) | 0xfe, ioaddr + RCV_STOP);
+ outw((RCV_LOWER_LIMIT << 8), ioaddr + RCV_BAR);
+ lp->rx_start = (RCV_LOWER_LIMIT << 8);
+ outw(((RCV_UPPER_LIMIT << 8) | 0xfe), ioaddr + RCV_STOP);
/* Initialize XMT */
- outw(XMT_LOWER_LIMIT << 8, ioaddr + XMT_BAR);
+ outw((XMT_LOWER_LIMIT << 8), ioaddr + xmt_bar);
/* Check for the i82595TX and i82595FX */
old8 = inb(ioaddr + 8);
SLOW_DOWN;
SLOW_DOWN;
- lp->tx_start = lp->tx_end = XMT_LOWER_LIMIT << 8; /* or = RCV_RAM */
+ lp->tx_start = lp->tx_end = (XMT_LOWER_LIMIT << 8);
lp->tx_last = 0;
dev->tbusy = 0;
{
struct eepro_local *lp = (struct eepro_local *)dev->priv;
int ioaddr = dev->base_addr;
- int rcv_ram = dev->mem_end;
#if defined (LINUX_VERSION_CODE) && LINUX_VERSION_CODE > 0x20155
unsigned long flags;
if (tickssofar < 40)
return 1;
+ /* let's disable interrupts so we can avoid confusion on SMP
+ */
+ outb(ALL_MASK, ioaddr + INT_MASK_REG);
+
/* if (net_debug > 1) */
printk(KERN_ERR "%s: transmit timed out, %s?\n", dev->name,
"network cable problem");
SLOW_DOWN;
/* Do I also need to flush the transmit buffers here? YES? */
- lp->tx_start = lp->tx_end = rcv_ram;
+ lp->tx_start = lp->tx_end = (XMT_LOWER_LIMIT << 8);
lp->tx_last = 0;
dev->tbusy=0;
dev->trans_start = jiffies;
- outb(RCV_ENABLE_CMD, ioaddr);
+ /* re-enabling all interrupts */
+ outb(ALL_MASK & ~(RX_MASK | TX_MASK), ioaddr + INT_MASK_REG);
+
+ outb(RCV_ENABLE_CMD, ioaddr);
}
-#if defined (LINUX_VERSION_CODE) && LINUX_VERSION_CODE < 0x20155
- /* If some higher layer thinks we've missed an tx-done interrupt
- we are passed NULL. Caution: dev_tint() handles the cli()/sti()
- itself. */
- /* if (skb == NULL) {
- dev_tint(dev);
- return 0;
- }*/
- /* according to A. Cox, this is obsolete since 1.0 */
-#endif
-
#if defined (LINUX_VERSION_CODE) && LINUX_VERSION_CODE > 0x20155
spin_lock_irqsave(&lp->lock, flags);
#endif
} else {
short length = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN;
unsigned char *buf = skb->data;
+ int discard = lp->stats.tx_dropped;
#if defined (LINUX_VERSION_CODE) && LINUX_VERSION_CODE > 0x20155
lp->stats.tx_bytes+=skb->len;
#endif
hardware_send_packet(dev, buf, length);
+
+ if (lp->stats.tx_dropped != discard)
+ return 1;
+
dev->trans_start = jiffies;
}
ioaddr = dev->base_addr;
- do {
- status = inb(ioaddr + STATUS_REG);
-
+ while (((status = inb(ioaddr + STATUS_REG)) & 0x06) && (boguscount--))
+ {
+ switch (status & (RX_INT | TX_INT)) {
+ case (RX_INT | TX_INT):
+ outb (RX_INT | TX_INT, ioaddr + STATUS_REG);
+ break;
+ case RX_INT:
+ outb (RX_INT, ioaddr + STATUS_REG);
+ break;
+ case TX_INT:
+ outb (TX_INT, ioaddr + STATUS_REG);
+ break;
+ }
if (status & RX_INT) {
if (net_debug > 4)
- printk(KERN_DEBUG "%s: packet received interrupt.\n", dev->name);
+ printk(KERN_DEBUG "%s: packet received interrupt.\n", dev->name);
- /* Acknowledge the RX_INT */
- outb(RX_INT, ioaddr + STATUS_REG);
/* Get the received packets */
eepro_rx(dev);
}
-
- else if (status & TX_INT) {
+ if (status & TX_INT) {
if (net_debug > 4)
- printk(KERN_DEBUG "%s: packet transmit interrupt.\n", dev->name);
-
- /* Acknowledge the TX_INT */
- outb(TX_INT, ioaddr + STATUS_REG);
+ printk(KERN_DEBUG "%s: packet transmit interrupt.\n", dev->name);
/* Process the status of transmitted packets */
eepro_transmit_interrupt(dev);
}
+ }
- } while ((boguscount-- > 0) && (status & 0x06));
-
- dev->interrupt = 0;
+ dev->interrupt = 0;
if (net_debug > 5)
printk(KERN_DEBUG "%s: exiting eepro_interrupt routine.\n", dev->name);
{
struct eepro_local *lp = (struct eepro_local *)dev->priv;
int ioaddr = dev->base_addr;
- int rcv_ram = dev->mem_end;
short temp_reg;
dev->tbusy = 1;
/* Flush the Tx and disable Rx. */
outb(STOP_RCV_CMD, ioaddr);
- lp->tx_start = lp->tx_end = rcv_ram ;
+ lp->tx_start = lp->tx_end = (XMT_LOWER_LIMIT << 8);
lp->tx_last = 0;
/* Mask all the interrupts. */
outw(eaddrs[0], ioaddr + IO_PORT);
outw(eaddrs[1], ioaddr + IO_PORT);
outw(eaddrs[2], ioaddr + IO_PORT);
- outw(lp->tx_end, ioaddr + XMT_BAR);
+ outw(lp->tx_end, ioaddr + xmt_bar);
outb(MC_SETUP, ioaddr);
/* Update the transmit queue */
{
int i;
unsigned short retval = 0;
- short ee_addr = ioaddr + EEPROM_REG;
+ short ee_addr = ioaddr + eeprom_reg;
int read_cmd = location | EE_READ_CMD;
short ctrl_val = EECS ;
+	/* XXXX - this is not the final version. We must test this on
+	 * boards other than the eepro10. I don't think it will cause
+	 * other boards to fail. (aris)
+	 */
+ if (etherexpress10) {
+ outb(BANK1_SELECT, ioaddr);
+ outb(0x00, ioaddr + STATUS_REG);
+ }
+
outb(BANK2_SELECT, ioaddr);
outb(ctrl_val, ee_addr);
{
struct eepro_local *lp = (struct eepro_local *)dev->priv;
short ioaddr = dev->base_addr;
- int rcv_ram = dev->mem_end;
unsigned status, tx_available, last, end, boguscount = 100;
if (net_debug > 5)
if (dev->interrupt == 1) {
/* Enable RX and TX interrupts */
- outb(ALL_MASK & ~(RX_MASK | TX_MASK), ioaddr + INT_MASK_REG);
+ outb(ALL_MASK & ~(RX_MASK | TX_MASK), ioaddr + INT_MASK_REG);
continue;
}
last = lp->tx_end;
end = last + (((length + 3) >> 1) << 1) + XMT_HEADER;
- if (end >= RAM_SIZE) { /* the transmit buffer is wrapped around */
-
- if ((RAM_SIZE - last) <= XMT_HEADER) {
+ if (end >= (XMT_UPPER_LIMIT << 8)) { /* the transmit buffer is wrapped around */
+ if (((XMT_UPPER_LIMIT << 8) - last) <= XMT_HEADER) {
/* Arrrr!!!, must keep the xmt header together,
several days were lost to chase this one down. */
- last = rcv_ram;
+ last = (XMT_LOWER_LIMIT << 8);
end = last + (((length + 3) >> 1) << 1) + XMT_HEADER;
}
-
- else end = rcv_ram + (end - RAM_SIZE);
+ else end = (XMT_LOWER_LIMIT << 8) + (end - XMT_RAM);
}
outw(last, ioaddr + HOST_ADDRESS_REG);
- outw(XMT_CMD, ioaddr + IO_PORT);
+ outw(XMT_CMD, ioaddr + IO_PORT);
outw(0, ioaddr + IO_PORT);
outw(end, ioaddr + IO_PORT);
outw(length, ioaddr + IO_PORT);
/* A dummy read to flush the DRAM write pipeline */
status = inw(ioaddr + IO_PORT);
- if (lp->tx_start == lp->tx_end) {
- outw(last, ioaddr + XMT_BAR);
- outb(XMT_CMD, ioaddr);
+ if (lp->tx_start == lp->tx_end) {
+ outw(last, ioaddr + xmt_bar);
+ outb(XMT_CMD, ioaddr);
lp->tx_start = last; /* I don't like to change tx_start here */
}
else {
dev->tbusy = 0;
}
+	/* we now serialize tx: tbusy won't be cleared until
+	 * the tx interrupt
+	 */
+ if (etherexpress10)
+ dev->tbusy = 1;
+
/* Enable RX and TX interrupts */
outb(ALL_MASK & ~(RX_MASK | TX_MASK), ioaddr + INT_MASK_REG);
return;
}
+ outb(ALL_MASK & ~(RX_MASK | TX_MASK), ioaddr + INT_MASK_REG);
dev->tbusy = 1;
+
if (net_debug > 5)
printk(KERN_DEBUG "%s: exiting hardware_send_packet routine.\n", dev->name);
}
eepro_rx(struct device *dev)
{
struct eepro_local *lp = (struct eepro_local *)dev->priv;
- short ioaddr = dev->base_addr, rcv_ram = dev->mem_end;
+ short ioaddr = dev->base_addr;
short boguscount = 20;
- short rcv_car = lp->rx_start;
+ unsigned rcv_car = lp->rx_start;
unsigned rcv_event, rcv_status, rcv_next_frame, rcv_size;
if (net_debug > 5)
printk(KERN_DEBUG "%s: entering eepro_rx routine.\n", dev->name);
+ /* disabling all interrupts */
+ outb(ALL_MASK, ioaddr + STATUS_REG);
+
/* Set the read pointer to the start of the RCV */
outw(rcv_car, ioaddr + HOST_ADDRESS_REG);
}
if (rcv_car == 0)
- rcv_car = (RCV_UPPER_LIMIT << 8) | 0xff;
+ rcv_car = ((RCV_UPPER_LIMIT << 8) | 0xff);
outw(rcv_car - 1, ioaddr + RCV_STOP);
if (net_debug > 5)
printk(KERN_DEBUG "%s: exiting eepro_rx routine.\n", dev->name);
+
+ outb(ALL_MASK & ~(RX_MASK | TX_MASK), ioaddr + INT_MASK_REG);
}
static void
struct eepro_local *lp = (struct eepro_local *)dev->priv;
short ioaddr = dev->base_addr;
short boguscount = 20;
- short xmt_status;
+ unsigned xmt_status;
/*
if (dev->tbusy == 0) {
dev->name);
}
*/
+ while (lp->tx_start != lp->tx_end && boguscount) {
- while (lp->tx_start != lp->tx_end) {
-
- outw(lp->tx_start, ioaddr + HOST_ADDRESS_REG);
+ outw(lp->tx_start, ioaddr + HOST_ADDRESS_REG);
xmt_status = inw(ioaddr+IO_PORT);
-
- if ((xmt_status & TX_DONE_BIT) == 0) break;
+
+ if ((xmt_status & TX_DONE_BIT) == 0) {
+ udelay(40);
+ boguscount--;
+ continue;
+ }
xmt_status = inw(ioaddr+IO_PORT);
lp->tx_start = inw(ioaddr+IO_PORT);
+ if (etherexpress10) {
+ lp->tx_start = (XMT_LOWER_LIMIT << 8);
+ lp->tx_end = lp->tx_start;
+
+ /* yeah, black magic :( */
+ outb(BANK0_SELECT, ioaddr);
+ outb(ALL_MASK & ~(RX_MASK | TX_MASK), ioaddr + INT_MASK_REG);
+
+ outb(RCV_DISABLE_CMD, ioaddr);
+ outb(RCV_ENABLE_CMD, ioaddr);
+ }
+
+		/* tbusy is cleared here for both normal and ee10 cards
+		 */
dev->tbusy = 0;
+
mark_bh(NET_BH);
-
- if (xmt_status & 0x2000)
+
+ if (xmt_status & 0x2000) {
lp->stats.tx_packets++;
+ }
else {
lp->stats.tx_errors++;
- if (xmt_status & 0x0400)
+ if (xmt_status & 0x0400) {
lp->stats.tx_carrier_errors++;
- printk("%s: XMT status = %#x\n",
- dev->name, xmt_status);
- printk(KERN_DEBUG "%s: XMT status = %#x\n",
- dev->name, xmt_status);
+ printk(KERN_DEBUG "%s: carrier error\n",
+ dev->name);
+ printk(KERN_DEBUG "%s: XMT status = %#x\n",
+ dev->name, xmt_status);
+ }
+			else {
+				printk(KERN_DEBUG "%s: XMT status = %#x\n",
+					dev->name, xmt_status);
+			}
+
+ if (etherexpress10) {
+ /* Try to restart the adaptor. */
+ outb(SEL_RESET_CMD, ioaddr);
+
+ /* We are supposed to wait for 2 us after a SEL_RESET */
+ SLOW_DOWN;
+ SLOW_DOWN;
+
+ /* first enable interrupts */
+ outb(BANK0_SELECT, ioaddr);
+ outb(ALL_MASK & ~(RX_INT | TX_INT), ioaddr + STATUS_REG);
+
+ outb(RCV_ENABLE_CMD, ioaddr);
+ }
}
if (xmt_status & 0x000f) {
if ((xmt_status & 0x0040) == 0x0) {
lp->stats.tx_heartbeat_errors++;
}
-
- if (--boguscount == 0)
- break;
+ boguscount--;
}
}
static char devicename[MAX_EEPRO][9];
static struct device dev_eepro[MAX_EEPRO];
-static int io[MAX_EEPRO] = {
-#ifdef PnPWakeup
- 0x210, /*: default for PnP enabled FX chips */
-#else
- 0x200, /* Why? */
-#endif
- [1 ... MAX_EEPRO - 1] = -1 };
+static int io[MAX_EEPRO];
static int irq[MAX_EEPRO] = { [0 ... MAX_EEPRO-1] = 0 };
static int mem[MAX_EEPRO] = { /* Size of the rx buffer in KB */
- [0 ... MAX_EEPRO-1] = RCV_RAM/1024
+ [0 ... MAX_EEPRO-1] = RCV_DEFAULT_RAM/1024
};
+static int autodetect;
static int n_eepro = 0;
/* For linux 2.1.xx */
MODULE_PARM(io, "1-" __MODULE_STRING(MAX_EEPRO) "i");
MODULE_PARM(irq, "1-" __MODULE_STRING(MAX_EEPRO) "i");
MODULE_PARM(mem, "1-" __MODULE_STRING(MAX_EEPRO) "i");
+MODULE_PARM(autodetect, "1-" __MODULE_STRING(1) "i");
#endif
int
init_module(void)
{
- if (io[0] == 0)
- printk("eepro_init_module: You should not use auto-probing with insmod!\n");
-
- while (n_eepro < MAX_EEPRO && io[n_eepro] >= 0) {
+ int i;
+ if (io[0] == 0 && autodetect == 0) {
+		printk("eepro_init_module: Probing is very dangerous on ISA boards!\n");
+		printk("eepro_init_module: Please add \"autodetect=1\" to force probing\n");
+ return 1;
+ }
+ else if (autodetect) {
+ /* if autodetect is set then we must force detection */
+ io[0] = 0;
+
+ printk("eepro_init_module: Auto-detecting boards (May God protect us...)\n");
+ }
+
+ for (i = 0; i < MAX_EEPRO; i++) {
struct device *d = &dev_eepro[n_eepro];
d->name = devicename[n_eepro]; /* inserted by drivers/net/net_init.c */
d->mem_end = mem[n_eepro];
- d->base_addr = io[n_eepro];
+ d->base_addr = io[0];
d->irq = irq[n_eepro];
d->init = eepro_probe;
if (register_netdev(d) == 0)
n_eepro++;
- else
- break;
}
return n_eepro ? 0 : -ENODEV;
struct sv11_device
{
+ void *if_ptr; /* General purpose pointer (used by SPPP) */
struct z8530_dev sync;
struct ppp_device netdev;
char name[16];
goto fail3;
memset(sv, 0, sizeof(*sv));
+ sv->if_ptr=&sv->netdev;
+
+ sv->netdev.dev=(struct device *)kmalloc(sizeof(struct device), GFP_KERNEL);
+ if(!sv->netdev.dev)
+ goto fail2;
dev=&sv->sync;
if(request_irq(irq, &z8530_interrupt, SA_INTERRUPT, "Hostess SV/11", dev)<0)
{
printk(KERN_WARNING "hostess: IRQ %d already in use.\n", irq);
- goto fail2;
+ goto fail1;
}
dev->irq=irq;
dev->chanA.private=sv;
- dev->chanA.netdevice=&sv->netdev.dev;
+ dev->chanA.netdevice=sv->netdev.dev;
dev->chanA.dev=dev;
dev->chanB.dev=dev;
dev->name=sv->name;
free_dma(dev->chanA.txdma);
fail:
free_irq(irq, dev);
+fail1:
+ kfree(sv->netdev.dev);
fail2:
kfree(sv);
fail3:
static void sv11_shutdown(struct sv11_device *dev)
{
- sppp_detach(&dev->netdev.dev);
+ sppp_detach(dev->netdev.dev);
z8530_shutdown(&dev->sync);
- unregister_netdev(&dev->netdev.dev);
+ unregister_netdev(dev->netdev.dev);
free_irq(dev->sync.irq, dev);
if(dma)
{
* Changes by Jochen Friedrich to enable RFC1469 Option 2 multicasting
* i.e. using functional address C0 00 00 04 00 00 to transmit and
* receive multicast packets.
+ *
+ * Changes by Mike Sullivan (based on an original sram patch by Dave Grothe)
+ * to support windowing into on-adapter shared ram.
+ * i.e. Use LANAID to setup a PnP configuration with 16K RAM. Paging
+ * will shift this 16K window over the entire available shared RAM.
*/
/* change the define of IBMTR_DEBUG_MESSAGES to a nonzero value
#define NO_AUTODETECT 1
#undef NO_AUTODETECT
-#undef ENABLE_PAGING
+/* #undef ENABLE_PAGING */
+#define ENABLE_PAGING 1
#define FALSE 0
static char *version =
"ibmtr.c: v1.3.57 8/ 7/94 Peter De Schrijver and Mark Swanson\n"
" v2.1.125 10/20/98 Paul Norton <pnorton@ieee.org>\n"
-" v2.2.0 12/30/98 Joel Sloan <jjs@c-me.com>\n";
+" v2.2.0 12/30/98 Joel Sloan <jjs@c-me.com>\n"
+" v2.2.1 02/08/00 Mike Sullivan <sullivam@us.ibm.com>\n";
static char pcchannelid[] = {
0x05, 0x00, 0x04, 0x09,
ti->mapped_ram_size = ti->avail_shared_ram;
} else {
#ifdef ENABLE_PAGING
- unsigned char pg_size;
+ unsigned char pg_size=0;
#endif
#if !TR_NEWFORMAT
pg_size=64; /* 32KB page size */
break;
case 0xc:
- ti->page_mask=(ti->mapped_ram_size==32) ? 0xc0 : 0;
- ti->page_mask=(ti->mapped_ram_size==64) ? 0x80 : 0;
- DPRINTK("Dual size shared RAM page (code=0xC), don't support it!\n");
- /* nb/dwm: I did this because RRR (3,2) bits are documented as
- R/O and I can't find how to select which page size
- Also, the above conditional statement sequence is invalid
- as page_mask will always be set by the second stmt */
- kfree_s(ti, sizeof(struct tok_info));
- return -ENODEV;
+ switch (ti->mapped_ram_size) {
+ case 32:
+ ti->page_mask=0xc0;
+ pg_size=32;
+ break;
+ case 64:
+ ti->page_mask=0x80;
+ pg_size=64;
+ break;
+ }
break;
default:
DPRINTK("Unknown shared ram paging info %01X\n",ti->shared_ram_paging);
return -ENODEV;
break;
}
+
+	  if (ibmtr_debug_trace & TRC_INIT)
+	    DPRINTK("Shared RAM paging code: "
+		    "%02X mapped RAM size: %dK shared RAM size: %dK page mask: %02X\n",
+		    ti->shared_ram_paging, ti->mapped_ram_size/2, ti->avail_shared_ram/2, ti->page_mask);
+
if (ti->page_mask) {
if (pg_size > ti->mapped_ram_size) {
DPRINTK("Page size (%d) > mapped ram window (%d), can't page.\n",
- pg_size, ti->mapped_ram_size);
+ pg_size/2, ti->mapped_ram_size/2);
ti->page_mask = 0; /* reset paging */
- } else {
- ti->mapped_ram_size=ti->avail_shared_ram;
- DPRINTK("Shared RAM paging enabled. Page size : %uK\n",
- ((ti->page_mask^ 0xff)+1)>>2);
}
+ } else if (pg_size > ti->mapped_ram_size) {
+ DPRINTK("Page size (%d) > mapped ram window (%d), can't page.\n",
+ pg_size/2, ti->mapped_ram_size/2);
+ }
+
#endif
}
/* finish figuring the shared RAM address */
DPRINTK("Hardware address : %02X:%02X:%02X:%02X:%02X:%02X\n",
dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
+ if (ti->page_mask)
+ DPRINTK("Shared RAM paging enabled. Page size: %uK Shared Ram size %dK\n",
+ ((ti->page_mask ^ 0xff)+1)>>2,ti->avail_shared_ram/2);
+ else
+ DPRINTK("Shared RAM paging disabled. ti->page_mask %x\n",ti->page_mask);
#endif
/* Calculate the maximum DHB we can use */
- switch (ti->mapped_ram_size) {
+ if (!ti->page_mask) {
+ ti->avail_shared_ram=ti->mapped_ram_size;
+ }
+ switch (ti->avail_shared_ram) {
case 16 : /* 8KB shared RAM */
ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 1032;
ti->rbuf_cnt16 = 2;
break;
case 32 : /* 16KB shared RAM */
- ti->dhb_size4mb = MIN(ti->dhb_size4mb, 4464);
+ ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 520;
ti->rbuf_cnt4 = 9;
- ti->dhb_size16mb = MIN(ti->dhb_size16mb, 4096);
+ ti->dhb_size16mb = MIN(ti->dhb_size16mb, 2048);
ti->rbuf_len16 = 1032; /* 1024 usable */
ti->rbuf_cnt16 = 4;
break;
case 64 : /* 32KB shared RAM */
- ti->dhb_size4mb = MIN(ti->dhb_size4mb, 4464);
+ ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 1032;
ti->rbuf_cnt4 = 6;
- ti->dhb_size16mb = MIN(ti->dhb_size16mb, 10240);
+ ti->dhb_size16mb = MIN(ti->dhb_size16mb, 2048);
ti->rbuf_len16 = 1032;
ti->rbuf_cnt16 = 10;
break;
case 127 : /* 63KB shared RAM */
- ti->dhb_size4mb = MIN(ti->dhb_size4mb, 4464);
+ ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 1032;
ti->rbuf_cnt4 = 6;
- ti->dhb_size16mb = MIN(ti->dhb_size16mb, 16384);
+ ti->dhb_size16mb = MIN(ti->dhb_size16mb, 2048);
ti->rbuf_len16 = 1032;
ti->rbuf_cnt16 = 16;
break;
case 128 : /* 64KB shared RAM */
- ti->dhb_size4mb = MIN(ti->dhb_size4mb, 4464);
+ ti->dhb_size4mb = MIN(ti->dhb_size4mb, 2048);
ti->rbuf_len4 = 1032;
ti->rbuf_cnt4 = 6;
- ti->dhb_size16mb = MIN(ti->dhb_size16mb, 17960);
+ ti->dhb_size16mb = MIN(ti->dhb_size16mb, 2048);
ti->rbuf_len16 = 1032;
ti->rbuf_cnt16 = 18;
break;
'E' -- 8kb 'D' -- 16kb
'C' -- 32kb 'A' -- 64KB
'B' - 64KB less 512 bytes at top
- (WARNING ... must zero top bytes in INIT */
+ (WARNING ... must zero top bytes in INIT */
avail_sram_code=0xf-readb(adapt_info->mmio + AIPAVAILSHRAM);
if (avail_sram_code)
{
struct tok_info *ti=(struct tok_info *)dev->priv;
+ SET_PAGE(ti->srb_page);
ti->open_status = CLOSED;
dev->init = tok_init_card;
address[3] |= mclist->dmi_addr[5];
mclist = mclist->next;
}
- SET_PAGE(ti->srb);
+ SET_PAGE(ti->srb_page);
for (i=0; i<sizeof(struct srb_set_funct_addr); i++)
writeb(0, ti->srb+i);
struct tok_info *ti=(struct tok_info *) dev->priv;
+ SET_PAGE(ti->srb_page);
writeb(DIR_CLOSE_ADAPTER,
ti->srb + offsetof(struct srb_close_adapter, command));
writeb(CMD_IN_SRB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
sleep_on(&ti->wait_for_tok_int);
+ SET_PAGE(ti->srb_page);
if (readb(ti->srb + offsetof(struct srb_close_adapter, ret_code)))
DPRINTK("close adapter failed: %02X\n",
(int)readb(ti->srb + offsetof(struct srb_close_adapter, ret_code)));
unsigned char status;
struct tok_info *ti;
struct device *dev;
+#ifdef ENABLE_PAGING
+ unsigned char save_srpr;
+#endif
dev = dev_id;
#if TR_VERBOSE
#endif
ti = (struct tok_info *) dev->priv;
spin_lock(&(ti->lock));
+#ifdef ENABLE_PAGING
+ save_srpr=readb(ti->mmio+ACA_OFFSET+ACA_RW+SRPR_EVEN);
+#endif
/* Disable interrupts till processing is finished */
dev->interrupt=1;
if (status == 0xFF)
{
DPRINTK("PCMCIA card removed.\n");
- spin_unlock(&(ti->lock));
- dev->interrupt = 0;
- return;
+ dev->interrupt = 0;
+ goto return_point;
}
/* Check ISRP EVEN too. */
if ( readb (ti->mmio + ACA_OFFSET + ACA_RW + ISRP_EVEN) == 0xFF)
{
DPRINTK("PCMCIA card removed.\n");
- spin_unlock(&(ti->lock));
dev->interrupt = 0;
- return;
+ goto return_point;
}
#endif
int i;
__u32 check_reason;
+ __u8 check_reason_page=0;
- check_reason=ti->mmio + ntohs(readw(ti->sram + ACA_OFFSET + ACA_RW +WWCR_EVEN));
+ check_reason=ntohs(readw(ti->sram + ACA_OFFSET + ACA_RW +WWCR_EVEN));
+ if (ti->page_mask) {
+ check_reason_page=(check_reason>>8) & ti->page_mask;
+ check_reason &= ~(ti->page_mask << 8);
+ }
+ check_reason += ti->sram;
+ SET_PAGE(check_reason_page);
DPRINTK("Adapter check interrupt\n");
DPRINTK("8 reason bytes follow: ");
/* SRB, ASB, ARB or SSB response */
if (status & SRB_RESP_INT) { /* SRB response */
+ SET_PAGE(ti->srb_page);
+#if TR_VERBOSE
+ DPRINTK("SRB resp: cmd=%02X rsp=%02X\n",
+ readb(ti->srb),
+ readb(ti->srb + offsetof(struct srb_xmit, ret_code)));
+#endif
switch(readb(ti->srb)) { /* SRB command check */
unsigned char open_ret_code;
__u16 open_error_code;
- ti->srb=ti->sram+ntohs(readw(ti->init_srb +offsetof(struct srb_open_response, srb_addr)));
- ti->ssb=ti->sram+ntohs(readw(ti->init_srb +offsetof(struct srb_open_response, ssb_addr)));
- ti->arb=ti->sram+ntohs(readw(ti->init_srb +offsetof(struct srb_open_response, arb_addr)));
- ti->asb=ti->sram+ntohs(readw(ti->init_srb +offsetof(struct srb_open_response, asb_addr)));
+ ti->srb=ntohs(readw(ti->init_srb +offsetof(struct srb_open_response, srb_addr)));
+ ti->ssb=ntohs(readw(ti->init_srb +offsetof(struct srb_open_response, ssb_addr)));
+ ti->arb=ntohs(readw(ti->init_srb +offsetof(struct srb_open_response, arb_addr)));
+ ti->asb=ntohs(readw(ti->init_srb +offsetof(struct srb_open_response, asb_addr)));
+ if (ti->page_mask) {
+ ti->srb_page=(ti->srb>>8) & ti->page_mask;
+ ti->srb &= ~(ti->page_mask<<8);
+ ti->ssb_page=(ti->ssb>>8) & ti->page_mask;
+ ti->ssb &= ~(ti->page_mask<<8);
+ ti->arb_page=(ti->arb>>8) & ti->page_mask;
+ ti->arb &= ~(ti->page_mask<<8);
+ ti->asb_page=(ti->asb>>8) & ti->page_mask;
+ ti->asb &= ~(ti->page_mask<<8);
+ }
+ ti->srb+=ti->sram;
+ ti->ssb+=ti->sram;
+ ti->arb+=ti->sram;
+ ti->asb+=ti->sram;
+
ti->current_skb=NULL;
open_ret_code = readb(ti->init_srb +offsetof(struct srb_open_response, ret_code));
} /* SRB response */
if (status & ASB_FREE_INT) { /* ASB response */
+ SET_PAGE(ti->asb_page);
+#if TR_VERBOSE
+ DPRINTK("ASB resp: cmd=%02X\n", readb(ti->asb));
+#endif
switch(readb(ti->asb)) { /* ASB command check */
} /* ASB response */
if (status & ARB_CMD_INT) { /* ARB response */
+ SET_PAGE(ti->arb_page);
+#if TR_VERBOSE
+ DPRINTK("ARB resp: cmd=%02X rsp=%02X\n",
+ readb(ti->arb),
+ readb(ti->arb + offsetof(struct arb_dlc_status, status)));
+#endif
switch (readb(ti->arb)) { /* ARB command check */
if (status & SSB_RESP_INT) { /* SSB response */
unsigned char retcode;
+ SET_PAGE(ti->ssb_page);
+#if TR_VERBOSE
+ DPRINTK("SSB resp: cmd=%02X rsp=%02X\n",
+ readb(ti->ssb), readb(ti->ssb+2));
+#endif
switch (readb(ti->ssb)) { /* SSB command check */
case XMIT_DIR_FRAME:
case XMIT_XID_CMD:
DPRINTK("xmit xid ret_code: %02X\n", (int)readb(ti->ssb+2));
+ break;
default:
DPRINTK("Unknown command %02X in ssb\n", (int)readb(ti->ssb));
DPRINTK("Unexpected interrupt from tr adapter\n");
}
+#ifdef PCMCIA
+ return_point:
+#endif
+#ifdef ENABLE_PAGING
+ writeb(save_srpr, ti->mmio+ACA_OFFSET+ACA_RW+SRPR_EVEN);
+#endif
+
spin_unlock(&(ti->lock));
}
writeb(ti->sram_base, ti->mmio + ACA_OFFSET + ACA_RW + RRR_EVEN);
ti->sram=((__u32)ti->sram_base << 12);
}
- ti->init_srb=ti->sram
- +ntohs((unsigned short)readw(ti->mmio+ ACA_OFFSET + WRBR_EVEN));
- SET_PAGE(ntohs((unsigned short)readw(ti->mmio+ACA_OFFSET + WRBR_EVEN)));
+ ti->init_srb=ntohs((unsigned short)readw(ti->mmio+ ACA_OFFSET + WRBR_EVEN));
+ if (ti->page_mask) {
+ ti->init_srb_page=(ti->init_srb>>8)&ti->page_mask;
+ ti->init_srb &= ~(ti->page_mask<<8);
+ }
+ ti->init_srb+=ti->sram;
+
+ if (ti->avail_shared_ram == 127) {
+ int i;
+ int last_512=0xfe00;
+ if (ti->page_mask) {
+ last_512 &= ~(ti->page_mask<<8);
+ }
+		/* initialize high section of ram (if necessary) */
+ SET_PAGE(0xc0);
+ for (i=0; i<512; i++) {
+ writeb(0,ti->sram+last_512+i);
+ }
+ }
+ SET_PAGE(ti->init_srb_page);
dev->mem_start = ti->sram;
dev->mem_end = ti->sram + (ti->mapped_ram_size<<9) - 1;
#if TR_VERBOSE
{
int i;
- DPRINTK("init_srb(%p):", ti->init_srb);
+ DPRINTK("init_srb(%lx):", (long)ti->init_srb);
for (i=0;i<17;i++) printk("%02X ", (int)readb(ti->init_srb+i));
printk("\n");
}
/* Reset adapter */
dev->tbusy=1; /* nothing can be done before reset and open completed */
-#ifdef ENABLE_PAGING
- if(ti->page_mask)
- writeb(SRPR_ENABLE_PAGING, ti->mmio + ACA_OFFSET + ACA_RW + SRPR_EVEN);
-#endif
writeb(~INT_ENABLE, ti->mmio + ACA_OFFSET + ACA_RESET + ISRP_EVEN);
outb(0, PIOaddr+ADAPTRESET);
for (i=jiffies+TR_RESET_INTERVAL; time_before_eq(jiffies, i);); /* wait 50ms */
outb(0,PIOaddr+ADAPTRESETREL);
+#ifdef ENABLE_PAGING
+ if(ti->page_mask)
+ writeb(SRPR_ENABLE_PAGING, ti->mmio + ACA_OFFSET + ACA_RW + SRPR_EVEN);
+#endif
#if !TR_NEWFORMAT
DPRINTK("card reset\n");
int i;
struct tok_info *ti=(struct tok_info *) dev->priv;
- SET_PAGE(ti->srb);
+ SET_PAGE(ti->srb_page);
for (i=0; i<sizeof(struct dlc_open_sap); i++)
writeb(0, ti->srb+i);
ti->init_srb + offsetof(struct dir_open_adapter, num_dhb));
writeb(DLC_MAX_SAP,
ti->init_srb + offsetof(struct dir_open_adapter, dlc_max_sap));
- writeb(DLC_MAX_STA,
+ writeb(DLC_MAX_STA,
ti->init_srb + offsetof(struct dir_open_adapter, dlc_max_sta));
ti->srb=ti->init_srb; /* We use this one in the interrupt handler */
+ ti->srb_page=ti->init_srb_page;
+	DPRINTK("Opened adapter: Xmit bfrs: %d X %d, Rcv bfrs: %d X %d\n",
+ readb(ti->init_srb+offsetof(struct dir_open_adapter,num_dhb)),
+ ntohs(readw(ti->init_srb+offsetof(struct dir_open_adapter,dhb_length))),
+ ntohs(readw(ti->init_srb+offsetof(struct dir_open_adapter,num_rcv_buf))),
+ ntohs(readw(ti->init_srb+offsetof(struct dir_open_adapter,rcv_buf_len))) );
writeb(INT_ENABLE, ti->mmio + ACA_OFFSET + ACA_SET + ISRP_EVEN);
writeb(CMD_IN_SRB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
unsigned char xmit_command;
int i;
struct trllc *llc;
+ struct srb_xmit xsrb;
+ __u8 dhb_page=0;
+ __u8 llc_ssap;
+
+ SET_PAGE(ti->asb_page);
if (readb(ti->asb + offsetof(struct asb_xmit_resp, ret_code))!=0xFF)
DPRINTK("ASB not free !!!\n");
providing a shared memory address for us
to stuff with data. Here we compute the
effective address where we will place data.*/
- dhb=ti->sram
- +ntohs(readw(ti->arb + offsetof(struct arb_xmit_req, dhb_address)));
+ SET_PAGE(ti->arb_page);
+ dhb=ntohs(readw(ti->arb + offsetof(struct arb_xmit_req, dhb_address)));
+ if (ti->page_mask) {
+ dhb_page=(dhb >> 8) & ti->page_mask;
+ dhb &= ~(ti->page_mask << 8);
+ }
+ dhb+=ti->sram;
/* Figure out the size of the 802.5 header */
if (!(trhdr->saddr[0] & 0x80)) /* RIF present? */
llc = (struct trllc *)(ti->current_skb->data + hdr_len);
- xmit_command = readb(ti->srb + offsetof(struct srb_xmit, command));
+ llc_ssap=llc->ssap;
+ SET_PAGE(ti->srb_page);
+ memcpy_fromio(&xsrb, ti->srb, sizeof(xsrb));
+ SET_PAGE(ti->asb_page);
+ xmit_command=xsrb.command;
writeb(xmit_command, ti->asb + offsetof(struct asb_xmit_resp, command));
- writew(readb(ti->srb + offsetof(struct srb_xmit, station_id)),
+ writew(xsrb.station_id,
ti->asb + offsetof(struct asb_xmit_resp, station_id));
- writeb(llc->ssap, ti->asb + offsetof(struct asb_xmit_resp, rsap_value));
- writeb(readb(ti->srb + offsetof(struct srb_xmit, cmd_corr)),
+ writeb(llc_ssap, ti->asb + offsetof(struct asb_xmit_resp, rsap_value));
+ writeb(xsrb.cmd_corr,
ti->asb + offsetof(struct asb_xmit_resp, cmd_corr));
writeb(0, ti->asb + offsetof(struct asb_xmit_resp, ret_code));
writew(htons(0x11),
ti->asb + offsetof(struct asb_xmit_resp, frame_length));
writeb(0x0e, ti->asb + offsetof(struct asb_xmit_resp, hdr_length));
+ SET_PAGE(dhb_page);
writeb(AC, dhb);
writeb(LLC_FRAME, dhb+1);
{
struct tok_info *ti=(struct tok_info *) dev->priv;
__u32 rbuffer, rbufdata;
+ __u8 rbuffer_page=0;
__u32 llc;
unsigned char *data;
unsigned int rbuffer_len, lan_hdr_len, hdr_len, ip_len, length;
int IPv4_p = 0;
unsigned int chksum = 0;
struct iphdr *iph;
+ struct arb_rec_req rarb;
- rbuffer=(ti->sram
- +ntohs(readw(ti->arb + offsetof(struct arb_rec_req, rec_buf_addr))))+2;
+ SET_PAGE(ti->arb_page);
+ memcpy_fromio(&rarb, ti->arb, sizeof(rarb));
+ rbuffer=ntohs(rarb.rec_buf_addr)+2;
+ if (ti->page_mask) {
+ rbuffer_page=(rbuffer >> 8) & ti->page_mask;
+ rbuffer &= ~(ti->page_mask<<8);
+ }
+ rbuffer += ti->sram;
+
+ SET_PAGE(ti->asb_page);
if(readb(ti->asb + offsetof(struct asb_rec, ret_code))!=0xFF)
DPRINTK("ASB not free !!!\n");
writeb(REC_DATA,
ti->asb + offsetof(struct asb_rec, command));
- writew(readw(ti->arb + offsetof(struct arb_rec_req, station_id)),
+ writew(rarb.station_id,
ti->asb + offsetof(struct asb_rec, station_id));
- writew(readw(ti->arb + offsetof(struct arb_rec_req, rec_buf_addr)),
+ writew(rarb.rec_buf_addr,
ti->asb + offsetof(struct asb_rec, rec_buf_addr));
- lan_hdr_len=readb(ti->arb + offsetof(struct arb_rec_req, lan_hdr_len));
+ lan_hdr_len=rarb.lan_hdr_len;
hdr_len = lan_hdr_len + sizeof(struct trllc) + sizeof(struct iphdr);
+ SET_PAGE(rbuffer_page);
llc=(rbuffer + offsetof(struct rec_buf, data) + lan_hdr_len);
#if TR_VERBOSE
DPRINTK("offsetof data: %02X lan_hdr_len: %02X\n",
(unsigned int)offsetof(struct rec_buf,data), (unsigned int)lan_hdr_len);
- DPRINTK("llc: %08X rec_buf_addr: %04X ti->sram: %p\n", llc,
- ntohs(readw(ti->arb + offsetof(struct arb_rec_req, rec_buf_addr))),
- ti->sram);
+ DPRINTK("llc: %08X rec_buf_addr: %04X ti->sram: %lx\n", llc,
+ ntohs(rarb.rec_buf_addr),
+ (long)ti->sram);
DPRINTK("dsap: %02X, ssap: %02X, llc: %02X, protid: %02X%02X%02X, "
"ethertype: %04X\n",
(int)readb(llc + offsetof(struct trllc, dsap)),
(int)readw(llc + offsetof(struct trllc, ethertype)));
#endif
if (readb(llc + offsetof(struct trllc, llc))!=UI_CMD) {
+ SET_PAGE(ti->asb_page);
writeb(DATA_LOST, ti->asb + offsetof(struct asb_rec, ret_code));
ti->tr_stats.rx_dropped++;
writeb(RESP_IN_ASB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
return;
}
- length = ntohs(readw(ti->arb+offsetof(struct arb_rec_req, frame_len)));
- if ((readb(llc + offsetof(struct trllc, dsap))==EXTENDED_SAP) &&
+ length = ntohs(rarb.frame_len);
+ if ((readb(llc + offsetof(struct trllc, dsap))==EXTENDED_SAP) &&
(readb(llc + offsetof(struct trllc, ssap))==EXTENDED_SAP) &&
(length>=hdr_len)) {
IPv4_p = 1;
}
#endif
- skb_size = length-lan_hdr_len+sizeof(struct trh_hdr)+sizeof(struct trllc);
+ skb_size = length;
if (!(skb=dev_alloc_skb(skb_size))) {
DPRINTK("out of memory. frame dropped.\n");
break;
length -= rbuffer_len;
data += rbuffer_len;
+ if (ti->page_mask) {
+ rbuffer_page=(rbuffer>>8) & ti->page_mask;
+ rbuffer &= ~(ti->page_mask << 8);
+ }
rbuffer += ti->sram;
+ SET_PAGE(rbuffer_page);
rbuffer_len = ntohs(readw(rbuffer + offsetof(struct rec_buf, buf_len)));
rbufdata = rbuffer + offsetof(struct rec_buf, data);
}
+ SET_PAGE(ti->asb_page);
writeb(0, ti->asb + offsetof(struct asb_rec, ret_code));
writeb(RESP_IN_ASB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
/* Save skb; we'll need it when the adapter asks for the data */
ti->current_skb=skb;
+ SET_PAGE(ti->srb_page);
writeb(XMIT_UI_FRAME, ti->srb + offsetof(struct srb_xmit, command));
writew(ti->exsap_station_id, ti->srb
+offsetof(struct srb_xmit, station_id));
ti=(struct tok_info *) dev->priv;
ti->readlog_pending = 0;
+ SET_PAGE(ti->srb_page);
writeb(DIR_READ_LOG, ti->srb);
writeb(INT_ENABLE, ti->mmio + ACA_OFFSET + ACA_SET + ISRP_EVEN);
writeb(CMD_IN_SRB, ti->mmio + ACA_OFFSET + ACA_SET + ISRA_ODD);
#define TCR_ODD 0x0D
#define TVR_EVEN 0x0E /* Timer value registers - even and odd */
#define TVR_ODD 0x0F
-#define SRPR_EVEN 0x10 /* Shared RAM paging registers - even and odd */
+#define SRPR_EVEN 0x18 /* Shared RAM paging registers - even and odd */
#define SRPR_ENABLE_PAGING 0xc0
-#define SRPR_ODD 0x11 /* Not used. */
+#define SRPR_ODD 0x19 /* Not used. */
#define TOKREAD 0x60
#define TOKOR 0x40
#define TOKAND 0x20
#define ACA_RW 0x00
#ifdef ENABLE_PAGING
-#define SET_PAGE(x) (writeb(((x>>8)&ti.page_mask), \
- ti->mmio + ACA_OFFSET + ACA_RW + SRPR_EVEN))
+#define SET_PAGE(x) (writeb((x), \
+ ti->mmio + ACA_OFFSET + ACA_RW + SRPR_EVEN))
#else
#define SET_PAGE(x)
#endif
__u32 ssb; /* System Status Block address */
__u32 arb; /* Adapter Request Block address */
__u32 asb; /* Adapter Status Block address */
+ __u8 init_srb_page;
+ __u8 srb_page;
+ __u8 ssb_page;
+ __u8 arb_page;
+ __u8 asb_page;
unsigned short exsap_station_id;
unsigned short global_int_enable;
struct sk_buff *current_skb;
struct slvl_device
{
+ void *if_ptr; /* General purpose pointer (used by SPPP) */
struct z8530_channel *chan;
struct ppp_device netdev;
char name[16];
memset(b, 0, sizeof(*sv));
b->dev[0].chan = &b->board.chanA;
+ b->dev[0].if_ptr = &b->dev[0].netdev;
+ b->dev[0].netdev.dev=(struct device *)
+ kmalloc(sizeof(struct device), GFP_KERNEL);
+ if(!b->dev[0].netdev.dev)
+ goto fail2;
+
b->dev[1].chan = &b->board.chanB;
+ b->dev[1].if_ptr = &b->dev[1].netdev;
+ b->dev[1].netdev.dev=(struct device *)
+ kmalloc(sizeof(struct device), GFP_KERNEL);
+ if(!b->dev[1].netdev.dev)
+ goto fail1_0;
dev=&b->board;
if(request_irq(irq, &z8530_interrupt, SA_INTERRUPT, "SeaLevel", dev)<0)
{
printk(KERN_WARNING "sealevel: IRQ %d already in use.\n", irq);
- goto fail2;
+ goto fail1_1;
}
dev->irq=irq;
dev->chanA.private=&b->dev[0];
dev->chanB.private=&b->dev[1];
- dev->chanA.netdevice=&b->dev[0].netdev.dev;
- dev->chanB.netdevice=&b->dev[1].netdev.dev;
+ dev->chanA.netdevice=b->dev[0].netdev.dev;
+ dev->chanB.netdevice=b->dev[1].netdev.dev;
dev->chanA.dev=dev;
dev->chanB.dev=dev;
dev->name=b->dev[0].name;
free_dma(dev->chanA.txdma);
fail:
free_irq(irq, dev);
+fail1_1:
+ kfree(b->dev[1].netdev.dev);
+fail1_0:
+ kfree(b->dev[0].netdev.dev);
fail2:
kfree(b);
fail3:
for(u=0; u<2; u++)
{
- sppp_detach(&b->dev[u].netdev.dev);
- unregister_netdev(&b->dev[u].netdev.dev);
+ sppp_detach(b->dev[u].netdev.dev);
+ unregister_netdev(b->dev[u].netdev.dev);
}
free_irq(b->board.irq, &b->board);
. Fixed bug reported by Gardner Buchanan in
. smc_enable, with outw instead of outb
. 03/06/96 Erik Stahlman Added hardware multicast from Peter Cammaert
+ . 04/14/00 Heiko Pruessing (SMA Regelsysteme) Fixed bug in chip memory
+ . allocation
----------------------------------------------------------------------------*/
static const char *version =
- "smc9194.c:v0.12 03/06/96 by Erik Stahlman (erik@vt.edu)\n";
+ "smc9194.c:v0.13 04/14/00 by Erik Stahlman (erik@vt.edu)\n";
#ifdef MODULE
#include <linux/module.h>
length = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN;
+
/*
- . the MMU wants the number of pages to be the number of 256 bytes
- . 'pages', minus 1 ( since a packet can't ever have 0 pages :) )
+ ** The MMU wants the number of pages to be the number of 256 bytes
+ ** 'pages', minus 1 ( since a packet can't ever have 0 pages :) )
+ **
+	 ** The allocation size is the data length + 6 (for the additional status
+	 ** words, length and control). If the length is odd, the last data byte
+	 ** is carried in the control byte.
*/
- numPages = length / 256;
+ numPages = ((length & 0xfffe) + 6) / 256;
if (numPages > 7 ) {
printk(CARDNAME": Far too big packet error. \n");
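The corrected page count above can be checked in isolation: the MMU allocation covers the data length rounded down to an even byte count, plus 6 bytes of status/length/control overhead, divided into 256-byte units. A small sketch restating that expression (the helper name is illustrative, not part of the driver):

```c
#include <assert.h>

/* Restates `numPages = ((length & 0xfffe) + 6) / 256;` from the patch:
 * even-rounded data length plus 6 overhead bytes, in 256-byte units. */
static unsigned smc_alloc_pages(unsigned length)
{
	return ((length & 0xfffeu) + 6u) / 256u;
}
```

For a full-size 1514-byte Ethernet frame this yields 5, comfortably under the `numPages > 7` sanity check that follows.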
static void if_down(struct device *dev)
{
- struct sppp *sp = &((struct ppp_device *)dev)->sppp;
+ struct sppp *sp = (struct sppp *)sppp_of(dev);
- sp->pp_link_state=SPPP_LINK_DOWN;
+ sp->pp_link_state = SPPP_LINK_DOWN;
}
/*
void sppp_input (struct device *dev, struct sk_buff *skb)
{
struct ppp_header *h;
- struct sppp *sp = &((struct ppp_device *)dev)->sppp;
+ struct sppp *sp = (struct sppp *)sppp_of(dev);
skb->dev=dev;
skb->mac.raw=skb->data;
static int sppp_hard_header(struct sk_buff *skb, struct device *dev, __u16 type,
void *daddr, void *saddr, unsigned int len)
{
- struct sppp *sp = &((struct ppp_device *)dev)->sppp;
+ struct sppp *sp = (struct sppp *)sppp_of(dev);
struct ppp_header *h;
skb_push(skb,sizeof(struct ppp_header));
h=(struct ppp_header *)skb->data;
int sppp_close (struct device *dev)
{
- struct sppp *sp = &((struct ppp_device *)dev)->sppp;
+ struct sppp *sp = (struct sppp *)sppp_of(dev);
sp->pp_link_state = SPPP_LINK_DOWN;
sp->lcp.state = LCP_STATE_CLOSED;
sp->ipcp.state = IPCP_STATE_CLOSED;
int sppp_open (struct device *dev)
{
- struct sppp *sp = &((struct ppp_device *)dev)->sppp;
+ struct sppp *sp = (struct sppp *)sppp_of(dev);
sppp_close(dev);
if (!(sp->pp_flags & PP_CISCO)) {
sppp_lcp_open (sp);
int sppp_reopen (struct device *dev)
{
- struct sppp *sp = &((struct ppp_device *)dev)->sppp;
+ struct sppp *sp = (struct sppp *)sppp_of(dev);
sppp_close(dev);
if (!(sp->pp_flags & PP_CISCO))
{
int sppp_do_ioctl(struct device *dev, struct ifreq *ifr, int cmd)
{
- struct sppp *sp = &((struct ppp_device *)dev)->sppp;
+ struct sppp *sp = (struct sppp *)sppp_of(dev);
if(dev->flags&IFF_UP)
return -EBUSY;
void sppp_attach(struct ppp_device *pd)
{
- struct device *dev=&pd->dev;
+ struct device *dev = pd->dev;
struct sppp *sp = &pd->sppp;
/* Initialize keepalive handler. */
void sppp_detach (struct device *dev)
{
- struct sppp **q, *p, *sp = &((struct ppp_device *)dev)->sppp;
-
+ struct sppp **q, *p, *sp = (struct sppp *)sppp_of(dev);
/* Remove the entry from the keepalive list. */
for (q = &spppq; (p = *q); q = &p->pp_next)
dev_add_pack(&sppp_packet_type);
}
+EXPORT_SYMBOL(sync_ppp_init);
+
#ifdef MODULE
int init_module(void)
struct ppp_device
{
- struct device dev; /* Network device */
+ struct device *dev; /* Network device pointer */
struct sppp sppp; /* Synchronous PPP */
};
+#define sppp_of(dev) \
+ (&((struct ppp_device *)(*(unsigned long *)((dev)->priv)))->sppp)
+
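The `sppp_of()` macro above recovers the `struct sppp` through two levels of indirection: `dev->priv` points at a structure whose *first* member (`if_ptr` in `struct slvl_device`) holds the `struct ppp_device` pointer, so one extra dereference lands on it. A stand-alone sketch with simplified stand-in types (all `_demo` names here are illustrative, not the kernel's):

```c
#include <assert.h>

struct sppp_demo       { int pp_flags; };
struct ppp_device_demo { void *dev; struct sppp_demo sppp; };
struct slvl_demo       { struct ppp_device_demo *if_ptr; }; /* first member */
struct device_demo     { void *priv; };

/* Mirrors sppp_of(): priv -> wrapper whose first member is the
 * ppp_device pointer; dereference it, then take &->sppp. */
static struct sppp_demo *sppp_of_demo(struct device_demo *dev)
{
	return &(*(struct ppp_device_demo **)dev->priv)->sppp;
}

static int sppp_of_demo_works(void)
{
	struct ppp_device_demo pd = { 0 };
	struct slvl_demo wrap = { .if_ptr = &pd };
	struct device_demo dev = { .priv = &wrap };
	return sppp_of_demo(&dev) == &pd.sppp;
}
```

This only works because `if_ptr` is the first member of the wrapper, which is why the patch adds it at the top of `struct slvl_device`.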
#define PP_KEEPALIVE 0x01 /* use keepalive protocol */
#define PP_CISCO 0x02 /* use Cisco protocol instead of PPP */
#define PP_TIMO 0x04 /* cp_timeout routine active */
int sppp_open (struct device *dev);
int sppp_reopen (struct device *dev);
int sppp_close (struct device *dev);
+void sync_ppp_init (void);
#endif
#define SPPPIOCCISCO (SIOCDEVPRIVATE)
if [ "$CONFIG_BLK_DEV_RAM" = "y" ]; then
bool ' Initial RAM disk (initrd) support' CONFIG_BLK_DEV_INITRD
fi
+tristate 'XPRAM disk support' CONFIG_BLK_DEV_XPRAM
bool 'Support for VM minidisk (VM only)' CONFIG_MDISK
if [ "$CONFIG_MDISK" = "y" ]; then
if [ "$CONFIG_DASD" != "n" ]; then
comment 'DASD disciplines'
bool ' Support for ECKD Disks' CONFIG_DASD_ECKD
+ bool ' Support for FBA Disks' CONFIG_DASD_FBA
+# bool ' Support for CKD Disks (unsupported)' CONFIG_DASD_CKD
+ if [ "$CONFIG_MDISK" = "n" ]; then
+ bool ' Support for DIAG access to CMS formatted Disks' CONFIG_DASD_MDSK
+ fi
fi
#menu_option next_comment
+# comment 'S/390-SCSI support'
+# tristate 'S/390-SCSI support' CONFIG_SCSI
#endmenu
if [ "$CONFIG_NET" = "y" ]; then
mainmenu_option next_comment
comment 'S/390 Network device support'
-
bool 'Network device support' CONFIG_NETDEVICES
if [ "$CONFIG_NETDEVICES" = "y" ]; then
menu_option next_comment
SUBDIRS := $(SUBDIRS) arch/s390/drivers/block arch/s390/drivers/char \
arch/s390/drivers/misc arch/s390/drivers/net
-MOD_SUB_DIRS += ./net
+MOD_SUB_DIRS += ./net ./block
O_OBJS := block/s390-block.o \
char/s390-char.o \
O_TARGET := s390-block.o
O_OBJS :=
M_OBJS :=
+D_OBJS :=
ifeq ($(CONFIG_DASD),y)
- O_OBJS += dasd.o dasd_ccwstuff.o
+ O_OBJS += dasd.o dasd_ccwstuff.o dasd_erp.o dasd_setup.o
ifeq ($(CONFIG_PROC_FS),y)
O_OBJS += dasd_proc.o dasd_profile.o
endif
ifeq ($(CONFIG_DASD_ECKD),y)
- O_OBJS += dasd_eckd.o
+ O_OBJS += dasd_eckd.o dasd_eckd_erp.o dasd_3990_erp.o dasd_9343_erp.o
endif
+ ifeq ($(CONFIG_DASD_FBA),y)
+ O_OBJS += dasd_fba.o
+ endif
+ ifeq ($(CONFIG_DASD_MDSK),y)
+ O_OBJS += dasd_mdsk.o
+ endif
+# ifeq ($(CONFIG_DASD_CKD),y)
+# O_OBJS += dasd_ckd.o
+# endif
+endif
+
+ifeq ($(CONFIG_DASD),m)
+ M_OBJS += dasd_mod.o
+ D_OBJS += dasd.o dasd_ccwstuff.o dasd_erp.o dasd_setup.o
+ ifeq ($(CONFIG_PROC_FS),y)
+ D_OBJS += dasd_proc.o dasd_profile.o
+ endif
+ ifeq ($(CONFIG_DASD_ECKD),y)
+ D_OBJS += dasd_eckd.o dasd_eckd_erp.o dasd_3990_erp.o dasd_9343_erp.o
+ endif
+ ifeq ($(CONFIG_DASD_FBA),y)
+ D_OBJS += dasd_fba.o
+ endif
+ ifeq ($(CONFIG_DASD_MDSK),y)
+ D_OBJS += dasd_mdsk.o
+ endif
+# ifeq ($(CONFIG_DASD_CKD),y)
+# D_OBJS += dasd_ckd.o
+# endif
endif
ifeq ($(CONFIG_MDISK),y)
O_OBJS += mdisk.o
endif
+ifeq ($(CONFIG_BLK_DEV_XPRAM),y)
+ O_OBJS += xpram.o
+else
+ ifeq ($(CONFIG_BLK_DEV_XPRAM),m)
+ M_OBJS += xpram.o
+ endif
+endif
+
dasd_mod.o: $(D_OBJS)
- $(LD) $(LD_RFLAG) -r -o $@ $+
+ $(LD) $(LD_RFLAG) -r -o $@ $(D_OBJS)
include $(TOPDIR)/Rules.make
/*
* File...........: linux/drivers/s390/block/dasd.c
* Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
- * : Utz Bacher <utz.bacher@de.ibm.com>
* Bugreports.to..: <Linux390@de.ibm.com>
* (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 1999,2000
*/
#include <linux/stddef.h>
#include <linux/kernel.h>
-#ifdef MODULE
-#include <linux/module.h>
-#endif /* MODULE */
-
#include <linux/tqueue.h>
#include <linux/timer.h>
#include <linux/malloc.h>
#include <asm/uaccess.h>
#include <asm/irq.h>
+#include <asm/s390_ext.h>
-#include "dasd.h"
+#include <linux/dasd.h>
#include <linux/blk.h>
+#include "dasd_erp.h"
#include "dasd_types.h"
#include "dasd_ccwstuff.h"
-#define PRINTK_HEADER DASD_NAME":"
+#include "../../../arch/s390/kernel/debug.h"
-#define CCW_READ_DEVICE_CHARACTERISTICS 0x64
+#define PRINTK_HEADER DASD_NAME":"
#define DASD_SSCH_RETRIES 2
(( info -> sid_data.cu_type ct ) && ( info -> sid_data.cu_model cm )) && \
(( info -> sid_data.dev_type dt ) && ( info -> sid_data.dev_model dm )) )
+#ifdef MODULE
+#include <linux/module.h>
+
+char *dasd[DASD_MAX_DEVICES] =
+{NULL,};
+#ifdef CONFIG_DASD_MDSK
+char *dasd_force_mdsk[DASD_MAX_DEVICES] =
+{NULL,};
+#endif
+
+kdev_t ROOT_DEV;
+
+EXPORT_NO_SYMBOLS;
+MODULE_AUTHOR ("Holger Smolinski <Holger.Smolinski@de.ibm.com>");
+MODULE_DESCRIPTION ("Linux on S/390 DASD device driver, Copyright 2000 IBM Corporation");
+MODULE_PARM (dasd, "1-" __MODULE_STRING (DASD_MAX_DEVICES) "s");
+#ifdef CONFIG_DASD_MDSK
+MODULE_PARM (dasd_force_mdsk, "1-" __MODULE_STRING (DASD_MAX_DEVICES) "s");
+#endif
+#endif
+
/* Prototypes for the functions called from external */
-static ssize_t dasd_read (struct file *, char *, size_t, loff_t *);
-static ssize_t dasd_write (struct file *, const char *, size_t, loff_t *);
-static int dasd_ioctl (struct inode *, struct file *, unsigned int, unsigned long);
-static int dasd_open (struct inode *, struct file *);
-static int dasd_fsync (struct file *, struct dentry *);
-static int dasd_release (struct inode *, struct file *);
+void dasd_partn_detect (int di);
+int devindex_from_devno (int devno);
+int dasd_is_accessible (int devno);
+void dasd_add_devno_to_ranges (int devno);
+
+#ifdef CONFIG_DASD_MDSK
+extern int dasd_force_mdsk_flag[DASD_MAX_DEVICES];
+extern void do_dasd_mdsk_interrupt (struct pt_regs *regs, __u16 code);
+extern int dasd_parse_module_params (void);
+extern void (**ext_mdisk_int) (void);
+#endif
+
void dasd_debug (unsigned long tag);
void dasd_profile_add (cqr_t *cqr);
void dasd_proc_init (void);
-static struct file_operations dasd_file_operations;
+static int dasd_format (int dev, format_data_t * fdata);
+
+static struct file_operations dasd_device_operations;
spinlock_t dasd_lock; /* general purpose lock for the dasd driver */
/* All asynchronous I/O should waint on this wait_queue */
struct wait_queue *dasd_waitq = NULL;
-static int dasd_autodetect = 1;
-static int dasd_devno[DASD_MAX_DEVICES] =
-{0,};
-static int dasd_count = 0;
-
extern dasd_chanq_t *cq_head;
+extern int dasd_probeonly;
-static int
-dasd_get_hexdigit (char c)
-{
- if ((c >= '0') && (c <= '9'))
- return c - '0';
- if ((c >= 'a') && (c <= 'f'))
- return c + 10 - 'a';
- if ((c >= 'A') && (c <= 'F'))
- return c + 10 - 'A';
- return -1;
-}
+debug_info_t *dasd_debug_info;
-/* sets the string pointer after the next comma */
-static void
-dasd_scan_for_next_comma (char **strptr)
-{
- while (((**strptr) != ',') && ((**strptr)++))
- (*strptr)++;
+extern dasd_information_t **dasd_information;
- /* set the position AFTER the comma */
- if (**strptr == ',')
- (*strptr)++;
-}
+dasd_information_t *dasd_info[DASD_MAX_DEVICES] =
+{NULL,};
+static struct hd_struct dd_hdstruct[DASD_MAX_DEVICES << PARTN_BITS];
+static int dasd_blks[256] =
+{0,};
+static int dasd_secsize[256] =
+{0,};
+static int dasd_blksize[256] =
+{0,};
+static int dasd_maxsecs[256] =
+{0,};
-/*sets the string pointer after the next comma, if a parse error occured */
-static int
-dasd_get_next_int (char **strptr)
+void
+dasd_geninit (struct gendisk *dd)
{
- int j, i = -1; /* for cosmetic reasons first -1, then 0 */
- if (isxdigit (**strptr)) {
- for (i = 0; isxdigit (**strptr);) {
- i <<= 4;
- j = dasd_get_hexdigit (**strptr);
- if (j == -1) {
- PRINT_ERR ("no integer: skipping range.\n");
- dasd_scan_for_next_comma (strptr);
- i = -1;
- break;
- }
- i += j;
- (*strptr)++;
- if (i > 0xffff) {
- PRINT_ERR (" value too big, skipping range.\n");
- dasd_scan_for_next_comma (strptr);
- i = -1;
- break;
- }
- }
- }
- return i;
}
-static inline int
-devindex_from_devno (int devno)
+struct gendisk dd_gendisk =
{
- int i;
- for (i = 0; i < dasd_count; i++) {
- if (dasd_devno[i] == devno)
- return i;
- }
- if (dasd_autodetect) {
- if (dasd_count < DASD_MAX_DEVICES) {
- dasd_devno[dasd_count] = devno;
- return dasd_count++;
- }
- return -EOVERFLOW;
- }
- return -ENODEV;
-}
+ major:MAJOR_NR, /* Major number */
+ major_name:"dasd", /* Major name */
+ minor_shift:PARTN_BITS, /* Bits to shift to get real from partn */
+ max_p:1 << PARTN_BITS, /* Number of partitions per real */
+ max_nr:0, /* number */
+ init:dasd_geninit,
+ part:dd_hdstruct, /* hd struct */
+ sizes:dasd_blks, /* sizes in blocks */
+ nr_real:0,
+ real_devices:NULL, /* internal */
+ next:NULL /* next */
+};
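The `dd_gendisk` initializer above uses GCC's old `field: value` designated-initializer syntax; C99 later standardized the equivalent `.field = value` form. A toy struct showing that both forms produce the same object (names here are illustrative, not the real gendisk):

```c
#include <assert.h>
#include <string.h>

struct demo_disk { int major; const char *major_name; int minor_shift; };

/* obsolete GNU form, as used by dd_gendisk in the patch */
static struct demo_disk gnu_form =
	{ major: 94, major_name: "dasd", minor_shift: 2 };

/* standard C99 form */
static struct demo_disk c99_form =
	{ .major = 94, .major_name = "dasd", .minor_shift = 2 };
```

GCC still accepts the colon form for compatibility, but new code should prefer the C99 spelling.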
-/* returns 1, if dasd_no is in the specified ranges, otherwise 0 */
-static inline int
-dasd_is_accessible (int devno)
-{
- return (devindex_from_devno (devno) >= 0);
-}
+static atomic_t bh_scheduled = ATOMIC_INIT (0);
-/* dasd_insert_range skips ranges, if the start or the end is -1 */
-static void
-dasd_insert_range (int start, int end)
+void
+dasd_schedule_bh (void (*func) (void))
{
- int curr;
- FUNCTION_ENTRY ("dasd_insert_range");
- if (dasd_count >= DASD_MAX_DEVICES) {
- PRINT_ERR (" too many devices specified, ignoring some.\n");
- FUNCTION_EXIT ("dasd_insert_range");
- return;
- }
- if ((start == -1) || (end == -1)) {
- PRINT_ERR ("invalid format of parameter, skipping range\n");
- FUNCTION_EXIT ("dasd_insert_range");
+ static struct tq_struct dasd_tq =
+ {0,};
+ /* Protect against rescheduling, when already running */
+ if (atomic_compare_and_swap (0, 1, &bh_scheduled))
return;
- }
- if (end < start) {
- PRINT_ERR (" ignoring range from %x to %x - start value " \
- "must be less than end value.\n", start, end);
- FUNCTION_EXIT ("dasd_insert_range");
+	dasd_tq.routine = (void (*) (void *)) func;
+ queue_task (&dasd_tq, &tq_immediate);
+ mark_bh (IMMEDIATE_BH);
return;
}
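`dasd_schedule_bh()` above guards against double-queuing with an atomic 0→1 compare-and-swap: only the caller that wins the swap queues the task, and the flag is re-armed once the bottom half has run. The same pattern in miniature, using C11 atomics instead of the kernel's `atomic_compare_and_swap()` (names here are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int demo_bh_scheduled;   /* 0 = idle, 1 = already queued */

/* Only the first caller wins the 0 -> 1 swap; later callers see the
 * flag already set and must not queue the handler again. */
static bool demo_try_schedule(void)
{
	int expected = 0;
	return atomic_compare_exchange_strong(&demo_bh_scheduled,
					      &expected, 1);
}

/* Called by the handler when it finishes, re-arming the guard. */
static void demo_bh_done(void)
{
	atomic_store(&demo_bh_scheduled, 0);
}
```

The kernel variant returns the *old* value (nonzero meaning "already scheduled"), so its success test is inverted relative to this sketch.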
-/* concurrent execution would be critical, but will not occur here */
- for (curr = start; curr <= end; curr++) {
- if (dasd_is_accessible (curr)) {
- PRINT_WARN (" %x is already in list as device %d\n",
- curr, devindex_from_devno (curr));
- }
- dasd_devno[dasd_count] = curr;
- dasd_count++;
- if (dasd_count >= DASD_MAX_DEVICES) {
- PRINT_ERR (" too many devices specified, ignoring some.\n");
- break;
- }
- }
- PRINT_INFO (" added dasd range from %x to %x.\n",
- start, dasd_devno[dasd_count - 1]);
-
- FUNCTION_EXIT ("dasd_insert_range");
-}
-
-void
-dasd_setup (char *str, int *ints)
-{
- int devno, devno2;
-
- FUNCTION_ENTRY ("dasd_setup");
- dasd_autodetect = 0;
- while (*str && *str != 1) {
- if (!isxdigit (*str)) {
- str++; /* to avoid looping on two commas */
- PRINT_ERR (" kernel parameter in invalid format.\n");
- continue;
- }
- devno = dasd_get_next_int (&str);
-
- /* range was skipped? -> scan for comma has been done */
- if (devno == -1)
- continue;
-
- if (*str == ',') {
- str++;
- dasd_insert_range (devno, devno);
- continue;
- }
- if (*str == '-') {
- str++;
- devno2 = dasd_get_next_int (&str);
- if (devno2 == -1) {
- PRINT_ERR (" invalid character in " \
- "kernel parameters.");
- } else {
- dasd_insert_range (devno, devno2);
- }
- dasd_scan_for_next_comma (&str);
- continue;
- }
- if (*str == 0) {
- dasd_insert_range (devno, devno);
- break;
- }
- PRINT_ERR (" unexpected character in kernel parameter, " \
- "skipping range.\n");
- }
- FUNCTION_EXIT ("dasd_setup");
-}
-
-static void
-dd_geninit (struct gendisk *ignored)
-{
- FUNCTION_ENTRY ("dd_geninit");
- FUNCTION_EXIT ("dd_geninit");
-}
-
-static struct hd_struct dd_hdstruct[DASD_MAX_DEVICES << PARTN_BITS];
-static int dd_blocksizes[DASD_MAX_DEVICES << PARTN_BITS];
-
-struct gendisk dd_gendisk =
-{
- MAJOR_NR, /* Major number */
- "dd", /* Major name */
- PARTN_BITS, /* Bits to shift to get real from partn */
- 1 << PARTN_BITS, /* Number of partitions per real */
- DASD_MAX_DEVICES, /* maximum number of real */
- dd_geninit, /* init function */
- dd_hdstruct, /* hd struct */
- dd_blocksizes, /* block sizes */
- 0, /* number */
- NULL, /* internal */
- NULL /* next */
-
-};
void
sleep_done (struct semaphore *sem)
#ifdef CONFIG_DASD_ECKD
extern dasd_operations_t dasd_eckd_operations;
#endif /* CONFIG_DASD_ECKD */
+#ifdef CONFIG_DASD_FBA
+extern dasd_operations_t dasd_fba_operations;
+#endif /* CONFIG_DASD_FBA */
+#ifdef CONFIG_DASD_MDSK
+extern dasd_operations_t dasd_mdsk_operations;
+#endif /* CONFIG_DASD_MDSK */
dasd_operations_t *dasd_disciplines[] =
{
#ifdef CONFIG_DASD_ECKD
- &dasd_eckd_operations
+ &dasd_eckd_operations,
#endif /* CONFIG_DASD_ECKD */
+#ifdef CONFIG_DASD_FBA
+ &dasd_fba_operations,
+#endif /* CONFIG_DASD_FBA */
+#ifdef CONFIG_DASD_MDSK
+ &dasd_mdsk_operations,
+#endif /* CONFIG_DASD_MDSK */
+#ifdef CONFIG_DASD_CKD
+ &dasd_ckd_operations,
+#endif /* CONFIG_DASD_CKD */
+ NULL
};
char *dasd_name[] =
{
#ifdef CONFIG_DASD_ECKD
- "ECKD"
+ "ECKD",
#endif /* CONFIG_DASD_ECKD */
+#ifdef CONFIG_DASD_FBA
+ "FBA",
+#endif /* CONFIG_DASD_FBA */
+#ifdef CONFIG_DASD_MDSK
+ "MDSK",
+#endif /* CONFIG_DASD_MDSK */
+#ifdef CONFIG_DASD_CKD
+ "CKD",
+#endif /* CONFIG_DASD_CKD */
+ "END"
};
-dasd_information_t *dasd_info[DASD_MAX_DEVICES] =
-{NULL,};
-
-static int dasd_blks[256] =
-{0,};
-static int dasd_secsize[256] =
-{0,};
-static int dasd_blksize[256] =
-{0,};
-static int dasd_maxsecs[256] =
-{0,};
-
-void
-fill_sizes (int di)
-{
- int rc;
- int minor;
- rc = dasd_disciplines[dasd_info[di]->type]->fill_sizes (di);
- switch (rc) {
- case -EMEDIUMTYPE:
- dasd_info[di]->flags |= DASD_NOT_FORMATTED;
- break;
- }
- PRINT_INFO ("%ld kB <- 'soft'-block: %d, hardsect %d Bytes\n",
- dasd_info[di]->sizes.kbytes,
- dasd_info[di]->sizes.bp_block,
- dasd_info[di]->sizes.bp_sector);
- switch (dasd_info[di]->type) {
-#ifdef CONFIG_DASD_ECKD
- case dasd_eckd:
- dasd_info[di]->sizes.first_sector =
- 3 << dasd_info[di]->sizes.s2b_shift;
- break;
-#endif /* CONFIG_DASD_ECKD */
- default:
- INTERNAL_CHECK ("Unknown dasd type %d\n", dasd_info[di]->type);
- }
- minor = di << PARTN_BITS;
- dasd_blks[minor] = dasd_info[di]->sizes.kbytes;
- dasd_secsize[minor] = dasd_info[di]->sizes.bp_sector;
- dasd_blksize[minor] = dasd_info[di]->sizes.bp_block;
- dasd_maxsecs[minor] = 252<<dasd_info[di]->sizes.s2b_shift;
- dasd_blks[minor + 1] = dasd_info[di]->sizes.kbytes -
- (dasd_info[di]->sizes.first_sector >> 1);
- dasd_secsize[minor + 1] = dasd_info[di]->sizes.bp_sector;
- dasd_blksize[minor + 1] = dasd_info[di]->sizes.bp_block;
- dasd_maxsecs[minor+1] = 252<<dasd_info[di]->sizes.s2b_shift;
-}
-
-int
-dasd_format (int dev, format_data_t * fdata)
-{
- int rc;
- int devindex = DEVICE_NR (dev);
- PRINT_INFO ("Format called with devno %x\n", dev);
- if (MINOR (dev) & (0xff >> (8 - PARTN_BITS))) {
- PRINT_WARN ("Can't format partition! minor %x %x\n",
- MINOR (dev), 0xff >> (8 - PARTN_BITS));
- return -EINVAL;
- }
- down (&dasd_info[devindex]->sem);
- if (dasd_info[devindex]->open_count == 1) {
- rc = dasd_disciplines[dasd_info[devindex]->type]->
- dasd_format (devindex, fdata);
- if (rc) {
- PRINT_WARN ("Formatting failed rc=%d\n", rc);
- }
- } else {
- PRINT_WARN ("device is open! %d\n", dasd_info[devindex]->open_count);
- rc = -EINVAL;
- }
- if (!rc) {
- fill_sizes (devindex);
- dasd_info[devindex]->flags &= ~DASD_NOT_FORMATTED;
- } else {
- dasd_info[devindex]->flags |= DASD_NOT_FORMATTED;
- }
- up (&dasd_info[devindex]->sem);
- return rc;
-}
-
-static inline int
-do_dasd_ioctl (struct inode *inp, unsigned int no, unsigned long data)
+static int
+do_dasd_ioctl (struct inode *inp, /* unsigned */ int no, unsigned long data)
{
int rc;
int di;
switch (no) {
case BLKGETSIZE:{ /* Return device size */
- unsigned long blocks;
- if (inp->i_rdev & 0x01) {
- blocks = (dev->sizes.blocks - 3) <<
- dev->sizes.s2b_shift;
- } else {
- blocks = dev->sizes.kbytes << dev->sizes.s2b_shift;
- }
- rc = copy_to_user ((long *) data, &blocks, sizeof (long));
+ int blocks = dasd_blks[MINOR (inp->i_rdev)] << 1;
+ rc = copy_to_user ((long *) data,
+ &blocks,
+ sizeof (long));
break;
}
case BLKFLSBUF:{
break;
}
case BLKRRPART:{
- INTERNAL_CHECK ("BLKRPART not implemented%s", "");
- rc = -EINVAL;
+ dasd_partn_detect (di);
+ rc = 0;
+ break;
+ }
+ case BIODASDRLB:{
+ rc = copy_to_user ((int *) data,
+ &dasd_info[di]->sizes.label_block,
+ sizeof (int));
+ break;
+ }
+ case BLKGETBSZ:{
+ rc = copy_to_user ((int *) data,
+ &dasd_info[di]->sizes.bp_block,
+ sizeof (int));
break;
}
case HDIO_GETGEO:{
- INTERNAL_CHECK ("HDIO_GETGEO not implemented%s", "");
- rc = -EINVAL;
+ struct hd_geometry geo;
+ dasd_disciplines[dev->type]->fill_geometry (di, &geo);
+ rc = copy_to_user ((struct hd_geometry *) data, &geo,
+ sizeof (struct hd_geometry));
break;
}
RO_IOCTLS (inp->i_rdev, data);
-
case BIODASDRSID:{
rc = copy_to_user ((void *) data,
&(dev->info.sid_data),
int xlt;
rc = copy_from_user (&xlt, (void *) data,
sizeof (int));
+#if 0
PRINT_INFO("Xlating %d to",xlt);
+#endif
if (rc)
break;
- if (MINOR (inp->i_rdev) & 1)
- offset = 3;
+ offset = dd_gendisk.part[MINOR (inp->i_rdev)].start_sect >>
+ dev->sizes.s2b_shift;
xlt += offset;
+#if 0
printk(" %d \n",xlt);
+#endif
rc = copy_to_user ((void *) data, &xlt,
sizeof (int));
break;
case BIODASDFORMAT:{
/* fdata == NULL is a valid arg to dasd_format ! */
format_data_t *fdata = NULL;
+ PRINT_WARN ("called format ioctl\n");
if (data) {
fdata = kmalloc (sizeof (format_data_t),
GFP_ATOMIC);
break;
}
default:
+		PRINT_WARN ("unknown ioctl number 0x%08x\n", no);
rc = -EINVAL;
break;
}
wake_up (&dasd_waitq);
}
+
+int
+dasd_watch_volume (int di)
+{
+ int rc = 0;
+
+ return rc;
+}
+
+void
+dasd_watcher (void)
+{
+ int i = 0;
+ int rc;
+ do {
+ for ( i = 0; i < DASD_MAX_DEVICES; i++ ) {
+ if ( dasd_info [i] ) {
+ rc = dasd_watch_volume ( i );
+ }
+ }
+ interruptible_sleep_on(&dasd_waitq);
+ } while(1);
+}
+
int
-dasd_unregister_dasd (int irq, dasd_type_t dt, dev_info_t * info)
+dasd_unregister_dasd (int di)
{
int rc = 0;
- FUNCTION_ENTRY ("dasd_unregister_dasd");
- INTERNAL_CHECK ("dasd_unregister_dasd not implemented%s\n", "");
- FUNCTION_EXIT ("dasd_unregister_dasd");
+ int minor;
+ int i;
+
+ minor = di << PARTN_BITS;
+ if (!dasd_info[di]) { /* devindex is not free */
+ INTERNAL_CHECK ("trying to free unallocated device %d\n", di);
+ return -ENODEV;
+ }
+ /* delete all that partition stuff */
+ for (i = 0; i < (1 << PARTN_BITS); i++) {
+ dasd_blks[minor] = 0;
+ dasd_secsize[minor + i] = 0;
+ dasd_blksize[minor + i] = 0;
+ dasd_maxsecs[minor + i] = 0;
+ }
+	/* reset DASD to unknown status */
+ atomic_set (&dasd_info[di]->status, DASD_INFO_STATUS_UNKNOWN);
+
+ free_irq (dasd_info[di]->info.irq, &(dasd_info[di]->dev_status));
+ if (dasd_info[di]->rdc_data)
+ kfree (dasd_info[di]->rdc_data);
+	PRINT_INFO ("%04X deleted from list of valid DASDs\n",
+		    dasd_info[di]->info.devno);
+	kfree (dasd_info[di]);
return rc;
}
check_type (dev_info_t * info)
{
dasd_type_t type = dasd_none;
+ int di;
FUNCTION_ENTRY ("check_type");
+ di = devindex_from_devno (info->devno);
+
+#ifdef CONFIG_DASD_MDSK
+ if (MACHINE_IS_VM && dasd_force_mdsk_flag[di] == 1) {
+ type = dasd_mdsk;
+ } else
+#endif /* CONFIG_DASD_MDSK */
+
#ifdef CONFIG_DASD_ECKD
if (MATCH (info, == 0x3990, ||1, == 0x3390, ||1) ||
MATCH (info, == 0x9343, ||1, == 0x9345, ||1) ||
type = dasd_eckd;
} else
#endif /* CONFIG_DASD_ECKD */
+#ifdef CONFIG_DASD_FBA
+ if (MATCH (info, == 0x6310, ||1, == 0x9336, ||1)) {
+ type = dasd_fba;
+ } else
+#endif /* CONFIG_DASD_FBA */
+#ifdef CONFIG_DASD_MDSK
+ if (MACHINE_IS_VM) {
+ type = dasd_mdsk;
+ } else
+#endif /* CONFIG_DASD_MDSK */
{
type = dasd_none;
}
+
FUNCTION_EXIT ("check_type");
return type;
}
static int
dasd_read_characteristics (dasd_information_t * info)
{
- int rc;
+ int rc = 0;
int ct = 0;
dev_info_t *di;
dasd_type_t dt;
#ifdef CONFIG_DASD_ECKD
case dasd_eckd:
ct = 64;
+ rc = read_dev_chars (info->info.irq,
+ (void *) &(info->rdc_data), ct);
break;
#endif /* CONFIG_DASD_ECKD */
+#ifdef CONFIG_DASD_FBA
+ case dasd_fba:
+ ct = 32;
+ rc = read_dev_chars (info->info.irq,
+ (void *) &(info->rdc_data), ct);
+ break;
+#endif /* CONFIG_DASD_FBA */
+#ifdef CONFIG_DASD_MDSK
+ case dasd_mdsk:
+ ct = 0;
+ break;
+#endif /* CONFIG_DASD_FBA */
default:
INTERNAL_ERROR ("don't know dasd type %d\n", dt);
}
- rc = read_dev_chars (info->info.irq,
- (void *) &(info->rdc_data), ct);
if (rc) {
PRINT_WARN ("RDC resulted in rc=%d\n", rc);
}
}
/* How many sectors must be in a request to dequeue it ? */
-#define QUEUE_BLOCKS 100
+#define QUEUE_BLOCKS 25
#define QUEUE_SECTORS (QUEUE_BLOCKS << dasd_info[di]->sizes.s2b_shift)
/* How often to retry an I/O before raising an error */
#define DASD_MAX_RETRIES 5
-static atomic_t bh_scheduled = ATOMIC_INIT (0);
-
-static inline void
-schedule_bh (void (*func) (void))
-{
- static struct tq_struct dasd_tq =
- {0,};
- /* Protect against rescheduling, when already running */
- if (atomic_compare_and_swap (0, 1, &bh_scheduled))
- return;
- dasd_tq.routine = (void *) (void *) func;
- queue_task (&dasd_tq, &tq_immediate);
- mark_bh (IMMEDIATE_BH);
- return;
-}
-
static inline
cqr_t *
dasd_cqr_from_req (struct request *req)
if (!info)
return NULL;
/* if applicable relocate block */
- if (MINOR (req->rq_dev) & 0x1) {
- req->sector += info->sizes.first_sector;
+ if (MINOR (req->rq_dev) & ((1 << PARTN_BITS) - 1)) {
+ req->sector +=
+ dd_gendisk.part[MINOR (req->rq_dev)].start_sect;
}
/* Now check for consistency */
if (!req->nr_sectors) {
#ifdef DASD_PROFILE
asm volatile ("STCK %0":"=m" (cqr->buildclk));
#endif /* DASD_PROFILE */
- if (atomic_compare_and_swap (CQR_STATUS_EMPTY,
- CQR_STATUS_FILLED,
- &cqr->status)) {
- PRINT_WARN ("cqr from req stat changed %d\n",
- atomic_read (&cqr->status));
- }
+ ACS (cqr->status, CQR_STATUS_EMPTY, CQR_STATUS_FILLED);
}
return cqr;
}
int retries = DASD_SSCH_RETRIES;
int di, irq;
- dasd_debug (cqr); /* cqr */
+ dasd_debug ((unsigned long) cqr); /* cqr */
if (!cqr) {
PRINT_WARN ("(start_IO) no cqr passed\n");
return -EINVAL;
}
- if (cqr->magic != DASD_MAGIC) {
- PRINT_WARN ("(start_IO) magic number mismatch\n");
- return -EINVAL;
+#ifdef CONFIG_DASD_MDSK
+ if (cqr->magic == MDSK_MAGIC) {
+ return dasd_mdsk_start_IO (cqr);
}
- if (atomic_compare_and_swap (CQR_STATUS_QUEUED,
- CQR_STATUS_IN_IO,
- &cqr->status)) {
- PRINT_WARN ("start_IO: status changed %d\n",
- atomic_read (&cqr->status));
- atomic_set (&cqr->status, CQR_STATUS_ERROR);
+#endif /* CONFIG_DASD_MDSK */
+ if (cqr->magic != DASD_MAGIC && cqr->magic != ERP_MAGIC) {
+ PRINT_ERR ("(start_IO) magic number mismatch\n");
return -EINVAL;
}
+ ACS (cqr->status, CQR_STATUS_QUEUED, CQR_STATUS_IN_IO);
di = cqr->devindex;
irq = dasd_info[di]->info.irq;
do {
cqr, dasd_info[di]->info.devno, retries);
break;
case -EBUSY: /* set up timer, try later */
+
PRINT_WARN ("cqr %p: 0x%04x busy, %d retries left\n",
cqr, dasd_info[di]->info.devno, retries);
break;
default:
+
			PRINT_WARN ("cqr %p: 0x%04x rc=%d, %d retries left\n",
				    cqr, dasd_info[di]->info.devno, rc,
				    retries);
}
} while (rc && --retries);
if (rc) {
- if (atomic_compare_and_swap (CQR_STATUS_IN_IO,
- CQR_STATUS_ERROR,
- &cqr->status)) {
- PRINT_WARN ("start_IO:(done) status changed %d\n",
- atomic_read (&cqr->status));
- atomic_set (&cqr->status, CQR_STATUS_ERROR);
- }
+ ACS (cqr->status, CQR_STATUS_IN_IO, CQR_STATUS_ERROR);
}
return rc;
}
dasd_dump_sense (devstat_t * stat)
{
int sl, sct;
+	if (!(stat->flag & DEVSTAT_FLAG_SENSE_AVAIL)) {
+ PRINT_INFO ("I/O status w/o sense data\n");
+ } else {
printk (KERN_INFO PRINTK_HEADER
- "-------------------I/O resulted in unit check:-----------\n");
+ "-------------------I/O result:-----------\n");
for (sl = 0; sl < 4; sl++) {
printk (KERN_INFO PRINTK_HEADER "Sense:");
for (sct = 0; sct < 8; sct++) {
printk ("\n");
}
}
+}
+
+int
+register_dasd_last (int di)
+{
+ int rc = 0;
+ int minor;
+ int i;
+
+ rc = dasd_disciplines[dasd_info[di]->type]->fill_sizes_last (di);
+ if (!rc) {
+ ACS (dasd_info[di]->status,
+ DASD_INFO_STATUS_DETECTED, DASD_INFO_STATUS_FORMATTED);
+ } else { /* -EMEDIUMTYPE: */
+ ACS (dasd_info[di]->status,
+ DASD_INFO_STATUS_DETECTED, DASD_INFO_STATUS_ANALYSED);
+ }
+ PRINT_INFO ("%04X (dasd%c):%ld kB <- block: %d on sector %d B\n",
+ dasd_info[di]->info.devno,
+ 'a' + di,
+ dasd_info[di]->sizes.kbytes,
+ dasd_info[di]->sizes.bp_block,
+ dasd_info[di]->sizes.bp_sector);
+ minor = di << PARTN_BITS;
+ dasd_blks[minor] = dasd_info[di]->sizes.kbytes;
+ for (i = 0; i < (1 << PARTN_BITS); i++) {
+ dasd_secsize[minor + i] = dasd_info[di]->sizes.bp_sector;
+ dasd_blksize[minor + i] = dasd_info[di]->sizes.bp_block;
+ dasd_maxsecs[minor + i] = 252 << dasd_info[di]->sizes.s2b_shift;
+ }
+ return rc;
+}
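For reference, the minor-number layout used by register_dasd_last above reserves the low PARTN_BITS of each minor for the partition number, so device index di owns minors di << PARTN_BITS through (di << PARTN_BITS) + (1 << PARTN_BITS) - 1. A standalone sketch of that mapping, using the DASD_PARTN_BITS value of 2 from dasd.h (helper names here are illustrative, not part of the driver):

```c
#define DASD_PARTN_BITS 2	/* as defined in dasd.h */

/* First minor number belonging to device index di; the low
 * PARTN_BITS of a minor select the partition on that device. */
static int dasd_first_minor (int di)
{
	return di << DASD_PARTN_BITS;
}

/* Recover the device index that owns a given minor number. */
static int dasd_devindex_of_minor (int minor)
{
	return minor >> DASD_PARTN_BITS;
}
```

With DASD_PARTN_BITS = 2, device index 1 covers minors 4 through 7: minor 4 addresses the whole device, minors 5 to 7 its partitions.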
+
+void
+dasd_partn_detect (int di)
+{
+ int minor = di << PARTN_BITS;
+ while (atomic_read (&dasd_info[di]->status) !=
+ DASD_INFO_STATUS_FORMATTED) {
+ interruptible_sleep_on(&dasd_info[di]->wait_q);
+ }
+ dd_gendisk.part[minor].nr_sects = dasd_info[di]->sizes.kbytes << 1;
+ resetup_one_dev (&dd_gendisk, di);
+}
void
dasd_do_chanq (void)
{
dasd_chanq_t *qp = NULL;
- cqr_t *cqr;
+ cqr_t *cqr, *next;
long flags;
int irq;
int tasks;
+
atomic_set (&bh_scheduled, 0);
dasd_debug (0xc4c40000); /* DD */
- while ((tasks = atomic_read(&chanq_tasks)) != 0) {
-/* initialization and wraparound */
- if (qp == NULL) {
- dasd_debug (0xc4c46df0); /* DD_0 */
- qp = cq_head;
- if (!qp) {
- dasd_debug (0xc4c46ff1); /* DD?1 */
- dasd_debug (tasks);
- PRINT_ERR("Mismatch of NULL queue pointer and "
- "still %d chanq_tasks to do!!\n"
- "Please send output of /proc/dasd/debug "
- "to Linux390@de.ibm.com\n", tasks);
- atomic_set(&chanq_tasks,0);
- break;
- }
- }
+ for (qp = cq_head; qp != NULL;) {
/* Get first request */
- dasd_debug (qp); /* qp */
+ dasd_debug ((unsigned long) qp);
cqr = (cqr_t *) (qp->head);
/* empty queue -> dequeue and proceed */
if (!cqr) {
}
/* process all requests on that queue */
do {
- cqr_t *next;
+ next = NULL;
dasd_debug ((unsigned long) cqr); /* cqr */
- if (cqr->magic != DASD_MAGIC) {
+ if (cqr->magic != DASD_MAGIC &&
+ cqr->magic != MDSK_MAGIC &&
+ cqr->magic != ERP_MAGIC) {
dasd_debug (0xc4c46ff2); /* DD?2 */
panic ( PRINTK_HEADER "do_cq:"
"magic mismatch %p -> %x\n",
switch (atomic_read (&cqr->status)) {
case CQR_STATUS_IN_IO:
dasd_debug (0xc4c4c9d6); /* DDIO */
- cqr = NULL;
break;
case CQR_STATUS_QUEUED:
dasd_debug (0xc4c4e2e3); /* DDST */
- if (dasd_start_IO (cqr) == 0) {
- atomic_dec (&chanq_tasks);
- cqr = NULL;
+ if (dasd_start_IO (cqr) != 0) {
+ PRINT_WARN("start_io failed\n");
}
break;
- case CQR_STATUS_ERROR:
+ case CQR_STATUS_ERROR:{
dasd_debug (0xc4c4c5d9); /* DDER */
- dasd_dump_sense (cqr->dstat);
if ( ++ cqr->retries < 2 ) {
atomic_set (&cqr->status,
CQR_STATUS_QUEUED);
- dasd_debug (0xc4c4e2e3); /* DDST */
+ dasd_debug (0xc4c4e2e3);
if (dasd_start_IO (cqr) == 0) {
- atomic_dec (&chanq_tasks);
- cqr = NULL;
+ atomic_dec (&qp->
+ dirty_requests);
+ break;
}
+ }
+ ACS (cqr->status,
+ CQR_STATUS_ERROR,
+ CQR_STATUS_FAILED);
+ break;
+ }
+ case CQR_STATUS_ERP_PEND:{
+				/* This case is entered when an interrupt
+				   ended with an error condition */
+ dasd_erp_action_t erp_action;
+ erp_t *erp = NULL;
+
+ if ( cqr -> magic != ERP_MAGIC ) {
+ erp = request_er ();
+ if (erp == NULL) {
+ PRINT_WARN ("No memory for ERP%s\n", "");
+ break;
+ }
+ memset (erp, 0, sizeof (erp_t));
+ erp->cqr.magic = ERP_MAGIC;
+ erp->cqr.int4cqr = cqr;
+ erp->cqr.devindex= cqr->devindex;
+ erp_action = dasd_erp_action (cqr);
+ if (erp_action) {
+ PRINT_WARN ("Taking ERP action %p\n", erp_action);
+ erp_action (erp);
+ }
+ dasd_chanq_enq_head(qp, (cqr_t *) erp);
+ next = (cqr_t *) erp;
} else {
- atomic_set (&cqr->status,
+ PRINT_WARN("ERP_ACTION failed\n");
+ ACS (cqr->status,
+ CQR_STATUS_ERP_PEND,
CQR_STATUS_FAILED);
}
break;
- case CQR_STATUS_DONE:
+ }
+ case CQR_STATUS_ERP_ACTIVE:
+ break;
+ case CQR_STATUS_DONE:{
next = cqr->next;
- dasd_debug (0xc4c49692); /* DDok */
+ if (cqr->magic == DASD_MAGIC) {
+ dasd_debug (0xc4c49692);
+ } else if (cqr->magic == ERP_MAGIC) {
+ dasd_erp_action_t erp_postaction;
+ erp_t *erp = (erp_t *) cqr;
+ erp_postaction =
+ dasd_erp_postaction (erp);
+ if (erp_postaction)
+ erp_postaction (erp);
+ atomic_dec (&qp->dirty_requests);
+ } else if (cqr->magic == MDSK_MAGIC) {
+ } else {
+ PRINT_WARN ("unknown magic%s\n", "");
+ }
dasd_end_cqr (cqr, 1);
- atomic_dec (&chanq_tasks);
- cqr = next;
break;
- case CQR_STATUS_FAILED:
+ }
+ case CQR_STATUS_FAILED: {
next = cqr->next;
- dasd_debug (0xc4c47a7a); /* DD:: */
+ if (cqr->magic == DASD_MAGIC) {
+ dasd_debug (0xc4c49692);
+ } else if (cqr->magic == ERP_MAGIC) {
+ dasd_erp_action_t erp_postaction;
+ erp_t *erp = (erp_t *) cqr;
+ erp_postaction =
+ dasd_erp_postaction (erp);
+ if (erp_postaction)
+ erp_postaction (erp);
+ } else if (cqr->magic == MDSK_MAGIC) {
+ } else {
+ PRINT_WARN ("unknown magic%s\n", "");
+ }
dasd_end_cqr (cqr, 0);
- atomic_dec (&chanq_tasks);
- cqr = next;
+ atomic_dec (&qp->dirty_requests);
break;
+ }
default:
PRINT_WARN ("unknown cqrstatus\n");
- cqr = NULL;
}
s390irq_spin_unlock_irqrestore (irq, flags);
- } while (cqr);
+ } while ((cqr = next) != NULL);
qp = qp->next_q;
}
spin_lock (&io_request_lock);
void
do_dasd_request (void)
{
- struct request *req;
- struct request *prev;
- struct request *next;
- char broken[DASD_MAX_DEVICES] = {0,};
- char busy[DASD_MAX_DEVICES] = {0,};
- int di;
+ struct request *req, *next, *prev;
cqr_t *cqr;
- long caller;
dasd_chanq_t *q;
long flags;
- int irq;
+ int di, irq;
+ int broken, busy;
dasd_debug (0xc4d90000); /* DR */
- dasd_debug (__builtin_return_address(0)); /* calleraddres */
+ dasd_debug ((unsigned long) __builtin_return_address (0));
prev = NULL;
for (req = CURRENT; req != NULL; req = next) {
next = req->next;
irq = dasd_info[di]->info.irq;
s390irq_spin_lock_irqsave (irq, flags);
q = &dasd_info[di]->queue;
- busy[di] = busy[di]||(atomic_read(&q->flags)&DASD_CHANQ_BUSY);
- if ( !busy[di] ||
- ((!broken[di]) && (req->nr_sectors >= QUEUE_SECTORS))) {
-#if 0
- if ( q -> queued_requests < QUEUE_THRESHOLD ) {
-#endif
+ busy = atomic_read (&q->flags) & DASD_CHANQ_BUSY;
+ broken = atomic_read (&q->flags) & DASD_REQUEST_Q_BROKEN;
+ if (!busy ||
+ (!broken &&
+ (req->nr_sectors >= QUEUE_SECTORS))) {
if (prev) {
prev->next = next;
} else {
}
dasd_debug ((unsigned long) cqr); /* cqr */
dasd_chanq_enq (q, cqr);
- if (!(atomic_read (&q->flags) &
- DASD_CHANQ_ACTIVE)) {
+ if (!(atomic_read (&q->flags) & DASD_CHANQ_ACTIVE)) {
cql_enq_head (q);
}
- if (!busy[di]) {
- dasd_debug (0xc4d9e2e3); /* DRST */
- if (dasd_start_IO (cqr) != 0) {
- atomic_inc (&chanq_tasks);
- schedule_bh (dasd_do_chanq);
- busy[di] = 1;
+ if (!busy) {
+ atomic_clear_mask (DASD_REQUEST_Q_BROKEN,
+ &q->flags);
+ if ( atomic_read (&q->dirty_requests) == 0 ) {
+ if (dasd_start_IO (cqr) == 0) {
+ } else {
+ dasd_schedule_bh (dasd_do_chanq);
}
}
-#if 0
}
-#endif
} else {
dasd_debug (0xc4d9c2d9); /* DRBR */
- broken[di] = 1;
+ atomic_set_mask (DASD_REQUEST_Q_BROKEN, &q->flags);
prev = req;
}
cont:
int ip;
cqr_t *cqr;
int done_fast_io = 0;
+ dasd_era_t era;
+ static int counter = 0;
dasd_debug (0xc4c80000); /* DH */
- if (!stat)
+ if (!stat) {
		PRINT_ERR ("handler called without devstat\n");
+ return;
+ }
ip = stat->intparm;
dasd_debug (ip); /* intparm */
- switch (ip) { /* filter special intparms... */
- case 0x00000000: /* no intparm: unsolicited interrupt */
+ if (!ip) { /* no intparm: unsolicited interrupt */
dasd_debug (0xc4c8a489); /* DHui */
- PRINT_INFO ("Unsolicited interrupt on device %04X\n",
+ PRINT_INFO ("%04X caught unsolicited interrupt\n",
stat->devno);
- dasd_dump_sense (stat);
return;
- default:
+ }
if (ip & 0x80000001) {
dasd_debug (0xc4c8a489); /* DHui */
- PRINT_INFO ("Spurious interrupt %08x on device %04X\n",
- ip, stat->devno);
+ PRINT_INFO ("%04X caught spurious interrupt with parm %08x\n",
+ stat->devno, ip);
return;
}
cqr = (cqr_t *) ip;
- if (cqr->magic != DASD_MAGIC) {
- dasd_debug (0xc4c86ff1); /* DH?1 */
- PRINT_ERR ("handler:magic mismatch on %p %08x\n",
- cqr, cqr->magic);
- return;
- }
+ if (cqr->magic == DASD_MAGIC || cqr->magic == ERP_MAGIC) {
asm volatile ("STCK %0":"=m" (cqr->stopclk));
- if (stat->cstat == 0x00 && stat->dstat == 0x0c) {
+ if ((stat->cstat == 0x00 &&
+ stat->dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END)) ||
+ ((era = dasd_erp_examine (cqr, stat)) == dasd_era_none)) {
dasd_debug (0xc4c89692); /* DHok */
- if (atomic_compare_and_swap (CQR_STATUS_IN_IO,
- CQR_STATUS_DONE,
- &cqr->status)) {
- PRINT_WARN ("handler: cqrstat changed%d\n",
- atomic_read (&cqr->status));
+#if 0
+ if ( counter < 20 || cqr -> magic == ERP_MAGIC) {
+ counter ++;
+#endif
+ ACS (cqr->status, CQR_STATUS_IN_IO, CQR_STATUS_DONE);
+#if 0
+ } else {
+ counter=0;
+ PRINT_WARN ("Faking I/O error\n");
+ ACS (cqr->status, CQR_STATUS_IN_IO, CQR_STATUS_ERP_PEND);
+ atomic_inc (&dasd_info[cqr->devindex]->
+ queue.dirty_requests);
+ }
+#endif
+ if (atomic_read (&dasd_info[cqr->devindex]->status) ==
+ DASD_INFO_STATUS_DETECTED) {
+ register_dasd_last (cqr->devindex);
+ if ( dasd_info[cqr->devindex]->wait_q ) {
+ wake_up( &dasd_info[cqr->devindex]->
+ wait_q);
+ }
}
- if (cqr->next) {
+ if (cqr->next &&
+ (atomic_read (&cqr->next->status) ==
+ CQR_STATUS_QUEUED)) {
dasd_debug (0xc4c8e2e3); /* DHST */
- if (dasd_start_IO (cqr->next) == 0)
+ if (dasd_start_IO (cqr->next) == 0) {
done_fast_io = 1;
+ } else {
}
- break;
}
+ } else { /* only visited in case of error ! */
dasd_debug (0xc4c8c5d9); /* DHER */
+ dasd_dump_sense (stat);
if (!cqr->dstat)
cqr->dstat = kmalloc (sizeof (devstat_t),
GFP_ATOMIC);
if (cqr->dstat) {
memcpy (cqr->dstat, stat, sizeof (devstat_t));
} else {
- PRINT_ERR ("no memory for dtstat\n");
+ PRINT_ERR ("no memory for dstat\n");
}
+ atomic_inc (&dasd_info[cqr->devindex]->
+ queue.dirty_requests);
/* errorprocessing */
- atomic_set (&cqr->status, CQR_STATUS_ERROR);
+ if (era == dasd_era_fatal) {
+ PRINT_WARN ("ERP returned fatal error\n");
+ ACS (cqr->status,
+ CQR_STATUS_IN_IO, CQR_STATUS_FAILED);
+ } else {
+ ACS (cqr->status,
+ CQR_STATUS_IN_IO, CQR_STATUS_ERP_PEND);
+ }
}
if (done_fast_io == 0)
atomic_clear_mask (DASD_CHANQ_BUSY,
dasd_wakeup ();
} else if (! (cqr->options & DOIO_WAIT_FOR_INTERRUPT) ){
dasd_debug (0xc4c8a293); /* DHsl */
- atomic_inc (&chanq_tasks);
- schedule_bh (dasd_do_chanq);
+ dasd_schedule_bh (dasd_do_chanq);
} else {
dasd_debug (0x64686f6f); /* DH_g */
dasd_debug (cqr->flags); /* DH_g */
}
+ } else {
+ dasd_debug (0xc4c86ff1); /* DH?1 */
+ PRINT_ERR ("handler:magic mismatch on %p %08x\n",
+ cqr, cqr->magic);
+ return;
+ }
dasd_debug (0xc4c86d6d); /* DHwu */
}
+static int
+dasd_format (int dev, format_data_t * fdata)
+{
+ int rc;
+ int devindex = DEVICE_NR (dev);
+ dasd_chanq_t *q;
+ cqr_t *cqr;
+ int irq;
+ long flags;
+ PRINT_INFO ("%04X called format on %x\n",
+ dasd_info[devindex]->info.devno, dev);
+ if (MINOR (dev) & (0xff >> (8 - PARTN_BITS))) {
+ PRINT_WARN ("Can't format partition! minor %x %x\n",
+ MINOR (dev), 0xff >> (8 - PARTN_BITS));
+ return -EINVAL;
+ }
+ down (&dasd_info[devindex]->sem);
+ atomic_set (&dasd_info[devindex]->status,
+ DASD_INFO_STATUS_UNKNOWN);
+ if (dasd_info[devindex]->open_count == 1) {
+ rc = dasd_disciplines[dasd_info[devindex]->type]->
+ dasd_format (devindex, fdata);
+ if (rc) {
+ PRINT_WARN ("Formatting failed rc=%d\n", rc);
+ up (&dasd_info[devindex]->sem);
+ return rc;
+ }
+ } else {
+ PRINT_WARN ("device is open! %d\n", dasd_info[devindex]->open_count);
+ up (&dasd_info[devindex]->sem);
+ return -EINVAL;
+ }
+#if DASD_PARANOIA > 1
+ if (!dasd_disciplines[dasd_info[devindex]->type]->fill_sizes_first) {
+ INTERNAL_CHECK ("No fill_sizes for dt=%d\n", dasd_info[devindex]->type);
+ } else
+#endif /* DASD_PARANOIA */
+ {
+ ACS (dasd_info[devindex]->status,
+ DASD_INFO_STATUS_UNKNOWN, DASD_INFO_STATUS_DETECTED);
+ irq = dasd_info[devindex]->info.irq;
+		PRINT_INFO ("%04X reaccessing, irq %x, index %d\n",
+ get_devno_by_irq (irq), irq, devindex);
+ s390irq_spin_lock_irqsave (irq, flags);
+ q = &dasd_info[devindex]->queue;
+ cqr = dasd_disciplines[dasd_info[devindex]->type]->
+ fill_sizes_first (devindex);
+ dasd_chanq_enq (q, cqr);
+ if (!(atomic_read (&q->flags) & DASD_CHANQ_ACTIVE)) {
+ cql_enq_head (q);
+ }
+ dasd_schedule_bh (dasd_do_chanq);
+ s390irq_spin_unlock_irqrestore (irq, flags);
+ }
+ up (&dasd_info[devindex]->sem);
+ return rc;
+}
+
static int
register_dasd (int irq, dasd_type_t dt, dev_info_t * info)
{
int rc = 0;
int di;
+ unsigned long flags;
+ dasd_chanq_t *q;
+ cqr_t *cqr;
static spinlock_t register_lock = SPIN_LOCK_UNLOCKED;
	spin_lock (&register_lock);
FUNCTION_ENTRY ("register_dasd");
memcpy (&(dasd_info[di]->info), info, sizeof (dev_info_t));
spin_lock_init (&dasd_info[di]->queue.f_lock);
spin_lock_init (&dasd_info[di]->queue.q_lock);
- atomic_set (&dasd_info[di]->queue.flags, 0);
dasd_info[di]->type = dt;
dasd_info[di]->irq = irq;
dasd_info[di]->sem = MUTEX;
+#ifdef CONFIG_DASD_MDSK
+ if (dt == dasd_mdsk) {
+ dasd_info[di]->rdc_data = kmalloc (
+ sizeof (dasd_characteristics_t), GFP_ATOMIC);
+ if (!dasd_info[di]->rdc_data) {
+ PRINT_WARN ("No memory for char on irq %d\n", irq);
+ goto unalloc;
+ }
+ dasd_info[di]->rdc_data->mdsk.dev_nr = dasd_info[di]->
+ info.devno;
+ dasd_info[di]->rdc_data->mdsk.rdc_len =
+ sizeof (dasd_mdsk_characteristics_t);
+ } else
+#endif /* CONFIG_DASD_MDSK */
rc = dasd_read_characteristics (dasd_info[di]);
if (rc) {
PRINT_WARN ("RDC returned error %d\n", rc);
rc = -ENODEV;
goto unalloc;
}
+#ifdef CONFIG_DASD_MDSK
+ if (dt == dasd_mdsk) {
+
+ } else
+#endif /* CONFIG_DASD_MDSK */
rc = request_irq (irq, dasd_handler, 0, "dasd",
&(dasd_info[di]->dev_status));
+ ACS (dasd_info[di]->status,
+ DASD_INFO_STATUS_UNKNOWN, DASD_INFO_STATUS_DETECTED);
if (rc) {
#if DASD_PARANOIA > 0
printk (KERN_WARNING PRINTK_HEADER
"Cannot register irq %d, rc=%d\n",
irq, rc);
-#endif /* DASD_DEBUG */
+#endif /* DASD_PARANOIA */
rc = -ENODEV;
goto unalloc;
}
#if DASD_PARANOIA > 1
- if (!dasd_disciplines[dt]->fill_sizes) {
+ if (!dasd_disciplines[dt]->fill_sizes_first) {
INTERNAL_CHECK ("No fill_sizes for dt=%d\n", dt);
goto unregister;
}
#endif /* DASD_PARANOIA */
- fill_sizes (di);
+ irq = dasd_info[di]->info.irq;
+ PRINT_INFO ("%04X trying to access, irq %x, index %d\n",
+ get_devno_by_irq (irq), irq, di);
+ s390irq_spin_lock_irqsave (irq, flags);
+ q = &dasd_info[di]->queue;
+ cqr = dasd_disciplines[dt]->fill_sizes_first (di);
+ dasd_chanq_enq (q, cqr);
+ if (!(atomic_read (&q->flags) & DASD_CHANQ_ACTIVE)) {
+ cql_enq_head (q);
+ }
+ if (dasd_start_IO (cqr) != 0) {
+ dasd_schedule_bh (dasd_do_chanq);
+ }
+ s390irq_spin_unlock_irqrestore (irq, flags);
+
goto exit;
unregister:
free_irq (irq, &(dasd_info[di]->dev_status));
unalloc:
kfree (dasd_info[di]);
+ dasd_info[di] = NULL;
exit:
	spin_unlock (&register_lock);
FUNCTION_EXIT ("register_dasd");
dev_info_t info;
dasd_type_t dt;
- FUNCTION_ENTRY ("probe_for_dasd");
-
rc = get_dev_info_by_irq (irq, &info);
+
if (rc == -ENODEV) { /* end of device list */
return rc;
+ } else if ((info.status & DEVSTAT_DEVICE_OWNED)) {
+ return -EBUSY;
+ } else if ((info.status & DEVSTAT_NOT_OPER)) {
+ return -ENODEV;
}
#if DASD_PARANOIA > 2
- if (rc) {
+ else {
INTERNAL_CHECK ("unknown rc %d of get_dev_info", rc);
return rc;
}
#endif /* DASD_PARANOIA */
- if ((info.status & DEVSTAT_NOT_OPER)) {
+
+ dt = check_type (&info); /* make a first guess */
+
+ if (dt == dasd_none) {
return -ENODEV;
}
- dt = check_type (&info);
- switch (dt) {
-#ifdef CONFIG_DASD_ECKD
- case dasd_eckd:
-#endif /* CONFIG_DASD_ECKD */
- FUNCTION_CONTROL ("Probing devno %d...\n", info.devno);
if (!dasd_is_accessible (info.devno)) {
- FUNCTION_CONTROL ("out of range...skip%s\n", "");
return -ENODEV;
}
- if (dasd_disciplines[dt]->ck_devinfo) {
- rc = dasd_disciplines[dt]->ck_devinfo (&info);
- }
-#if DASD_PARANOIA > 1
- else {
+ if (!dasd_disciplines[dt]->ck_devinfo) {
INTERNAL_ERROR ("no ck_devinfo function%s\n", "");
return -ENODEV;
}
-#endif /* DASD_PARANOIA */
- if (rc == -ENODEV) {
+ rc = dasd_disciplines[dt]->ck_devinfo (&info);
+ if (rc) {
return rc;
}
-#if DASD_PARANOIA > 2
- if (rc) {
- INTERNAL_CHECK ("unknown error rc=%d\n", rc);
+ if (dasd_probeonly) {
+ PRINT_INFO ("%04X not enabled due to probeonly mode\n",
+ info.devno);
+ dasd_add_devno_to_ranges (info.devno);
return -ENODEV;
- }
-#endif /* DASD_PARANOIA */
+ } else {
rc = register_dasd (irq, dt, &info);
+ }
if (rc) {
- PRINT_INFO ("devno %x not enabled as minor %d due to errors\n",
- info.devno,
- devindex_from_devno (info.devno) <<
- PARTN_BITS);
+ PRINT_WARN ("%04X not enabled due to errors\n",
+ info.devno);
} else {
- PRINT_INFO ("devno %x added as minor %d (%s)\n",
+ PRINT_INFO ("%04X is (dasd%c) minor %d (%s)\n",
info.devno,
+ 'a' + devindex_from_devno (info.devno),
devindex_from_devno (info.devno) << PARTN_BITS,
dasd_name[dt]);
}
- case dasd_none:
- break;
- default:
- PRINT_DEBUG ("unknown device type\n");
- break;
- }
- FUNCTION_EXIT ("probe_for_dasd");
+
return rc;
}
int rc = 0;
FUNCTION_ENTRY ("register_major");
- rc = register_blkdev (major, DASD_NAME, &dasd_file_operations);
+ rc = register_blkdev (major, DASD_NAME, &dasd_device_operations);
#if DASD_PARANOIA > 1
if (rc) {
PRINT_WARN ("registering major -> rc=%d aborting... \n", rc);
modifier is to make sure, that they are only called via the kernel's methods
*/
-static ssize_t
-dasd_read (struct file *filp, char *b, size_t s, loff_t * o)
-{
- ssize_t rc;
- FUNCTION_ENTRY ("dasd_read");
- rc = block_read (filp, b, s, o);
- FUNCTION_EXIT ("dasd_read");
- return rc;
-}
-static ssize_t
-dasd_write (struct file *filp, const char *b, size_t s, loff_t * o)
-{
- ssize_t rc;
- FUNCTION_ENTRY ("dasd_write");
- rc = block_write (filp, b, s, o);
- FUNCTION_EXIT ("dasd_write");
- return rc;
-}
static int
dasd_ioctl (struct inode *inp, struct file *filp,
unsigned int no, unsigned long data)
return rc;
}
-static int
-dasd_fsync (struct file *filp, struct dentry *d)
-{
- int rc = 0;
- FUNCTION_ENTRY ("dasd_fsync");
- if (!filp) {
- return -EINVAL;
- }
- rc = block_fsync (filp, d);
- FUNCTION_EXIT ("dasd_fsync");
- return rc;
-}
-
static int
dasd_release (struct inode *inp, struct file *filp)
{
}
static struct
-file_operations dasd_file_operations =
+file_operations dasd_device_operations =
{
- NULL, /* loff_t(*llseek)(struct file *,loff_t,int); */
- dasd_read,
- dasd_write,
- NULL, /* int(*readdir)(struct file *,void *,filldir_t); */
- NULL, /* u int(*poll)(struct file *,struct poll_table_struct *); */
- dasd_ioctl,
- NULL, /* int (*mmap) (struct file *, struct vm_area_struct *); */
- dasd_open,
- NULL, /* int (*flush) (struct file *) */
- dasd_release,
- dasd_fsync,
- NULL, /* int (*fasync) (int, struct file *, int);
- int (*check_media_change) (kdev_t dev);
- int (*revalidate) (kdev_t dev);
- int (*lock) (struct file *, int, struct file_lock *); */
+ read:block_read,
+ write:block_write,
+ fsync:block_fsync,
+ ioctl:dasd_ioctl,
+ open:dasd_open,
+ release:dasd_release,
};
int
int rc = 0;
int i;
- FUNCTION_ENTRY ("dasd_init");
PRINT_INFO ("initializing...\n");
- atomic_set (&chanq_tasks, 0);
atomic_set (&bh_scheduled, 0);
spin_lock_init (&dasd_lock);
+#ifdef CONFIG_DASD_MDSK
+ /*
+ * enable service-signal external interruptions,
+ * Control Register 0 bit 22 := 1
+ * (besides PSW bit 7 must be set to 1 somewhere for external
+ * interruptions)
+ */
+ ctl_set_bit (0, 9);
+ register_external_interrupt (0x2603, do_dasd_mdsk_interrupt);
+#endif
+ dasd_debug_info = debug_register ("dasd", 1, 4);
/* First register to the major number */
+
rc = register_major (MAJOR_NR);
#if DASD_PARANOIA > 1
if (rc) {
PRINT_WARN ("registering major_nr returned rc=%d\n", rc);
return rc;
}
-#endif /* DASD_PARANOIA */
- read_ahead[MAJOR_NR] = 8;
+#endif /* DASD_PARANOIA */
+	read_ahead[MAJOR_NR] = 8;
blk_size[MAJOR_NR] = dasd_blks;
hardsect_size[MAJOR_NR] = dasd_secsize;
blksize_size[MAJOR_NR] = dasd_blksize;
dasd_proc_init ();
#endif /* CONFIG_PROC_FS */
/* Now scan the device list for DASDs */
- FUNCTION_CONTROL ("entering detection loop%s\n", "");
for (i = 0; i < NR_IRQS; i++) {
int irc; /* Internal return code */
- LOOP_CONTROL ("Probing irq %d...\n", i);
irc = probe_for_dasd (i);
switch (irc) {
case 0:
- LOOP_CONTROL ("Added DASD%s\n", "");
break;
case -ENODEV:
- LOOP_CONTROL ("No DASD%s\n", "");
+ case -EBUSY:
break;
case -EMEDIUMTYPE:
PRINT_WARN ("DASD not formatted%s\n", "");
break;
}
}
- FUNCTION_CONTROL ("detection loop completed%s\n", "");
+ FUNCTION_CONTROL ("detection loop completed %s partn check...\n", "");
/* Finally do the genhd stuff */
-#if 0 /* 2 b done */
dd_gendisk.next = gendisk_head;
gendisk_head = &dd_gendisk;
- for (i = 0; i < DASD_MAXDEVICES; i++) {
- LOOP_CONTROL ("Setting partitions of DASD %d\n", i);
- resetup_one_dev (&dd_gendisk, i);
+ dasd_information = dasd_info; /* to enable genhd to know about DASD */
+ tod_wait (1000000);
+
+ /* wait on root filesystem before detecting partitions */
+ if (MAJOR (ROOT_DEV) == DASD_MAJOR) {
+ int count = 10;
+ i = DEVICE_NR (ROOT_DEV);
+ if (dasd_info[i] == NULL) {
+ panic ("root device not accessible\n");
+ }
+ while ((atomic_read (&dasd_info[i]->status) !=
+ DASD_INFO_STATUS_FORMATTED) &&
+ count ) {
+ PRINT_INFO ("Waiting on root volume...%d seconds left\n", count);
+ tod_wait (1000000);
+ count--;
+ }
+ if (count == 0) {
+ panic ("Waiting on root volume...giving up!\n");
+ }
+ }
+ for (i = 0; i < DASD_MAX_DEVICES; i++) {
+ if (dasd_info[i]) {
+ if (atomic_read (&dasd_info[i]->status) ==
+ DASD_INFO_STATUS_FORMATTED) {
+ dasd_partn_detect (i);
+ } else { /* start kernel thread for devices not ready now */
+ kernel_thread (dasd_partn_detect, (void *) i,
+ CLONE_FS | CLONE_FILES | CLONE_SIGHAND);
+ }
+ }
}
-#endif /* 0 */
- FUNCTION_EXIT ("dasd_init");
return rc;
}
#ifdef MODULE
+
int
init_module (void)
{
int rc = 0;
- FUNCTION_ENTRY ("init_module");
- PRINT_INFO ("trying to load module\n");
+ PRINT_INFO ("Initializing module\n");
+ rc = dasd_parse_module_params ();
+ if (rc == 0) {
+ PRINT_INFO ("module parameters parsed successfully\n");
+ } else {
+ PRINT_WARN ("parsing parameters returned rc=%d\n", rc);
+ }
rc = dasd_init ();
if (rc == 0) {
- PRINT_INFO ("module loaded successfully\n");
+ PRINT_INFO ("module initialized successfully\n");
} else {
- PRINT_WARN ("warning: Module load returned rc=%d\n", rc);
+ PRINT_WARN ("initializing module returned rc=%d\n", rc);
}
FUNCTION_EXIT ("init_module");
return rc;
cleanup_module (void)
{
int rc = 0;
+	int i;
+	struct gendisk *genhd = gendisk_head, *prev = NULL;
- FUNCTION_ENTRY ("cleanup_module");
PRINT_INFO ("trying to unload module \n");
- /* FIXME: replace by proper unload functionality */
- INTERNAL_ERROR ("Modules not yet implemented %s", "");
+ /* unregister gendisk stuff */
+ for (genhd = gendisk_head; genhd; prev = genhd, genhd = genhd->next) {
+		if (genhd == &dd_gendisk) {
+ if (prev)
+ prev->next = genhd->next;
+ else {
+ gendisk_head = genhd->next;
+ }
+ break;
+ }
+ }
+ /* unregister devices */
+	for (i = 0; i < DASD_MAX_DEVICES; i++) {
+ if (dasd_info[i])
+ dasd_unregister_dasd (i);
+ }
if (rc == 0) {
PRINT_INFO ("module unloaded successfully\n");
} else {
PRINT_WARN ("module unloaded with errors\n");
}
- FUNCTION_EXIT ("cleanup_module");
}
#endif /* MODULE */
+++ /dev/null
-
-#ifndef DASD_H
-#define DASD_H
-
-/* First of all the external stuff */
-#include <linux/ioctl.h>
-#include <linux/major.h>
-
-#define IOCTL_LETTER 'D'
-#define BIODASDFORMAT _IO(IOCTL_LETTER,0) /* Format the volume or an extent */
-#define BIODASDDISABLE _IO(IOCTL_LETTER,1) /* Disable the volume (for Linux) */
-#define BIODASDENABLE _IO(IOCTL_LETTER,2) /* Enable the volume (for Linux) */
-/* Stuff for reading and writing the Label-Area to/from user space */
-#define BIODASDGTVLBL _IOR(IOCTL_LETTER,3,dasd_volume_label_t)
-#define BIODASDSTVLBL _IOW(IOCTL_LETTER,4,dasd_volume_label_t)
-#define BIODASDRWTB _IOWR(IOCTL_LETTER,5,int)
-#define BIODASDRSID _IOR(IOCTL_LETTER,6,senseid_t)
-
-typedef
-union {
- char bytes[512];
- struct {
- /* 80 Bytes of Label data */
- char identifier[4]; /* e.g. "LNX1", "VOL1" or "CMS1" */
- char label[6]; /* Given by user */
- char security;
- char vtoc[5]; /* Null in "LNX1"-labelled partitions */
- char reserved0[5];
- long ci_size;
- long blk_per_ci;
- long lab_per_ci;
- char reserved1[4];
- char owner[0xe];
- char no_part;
- char reserved2[0x1c];
- /* 16 Byte of some information on the dasd */
- short blocksize;
- char nopart;
- char unused;
- long unused2[3];
- /* 7*10 = 70 Bytes of partition data */
- struct {
- char type;
- long start;
- long size;
- char unused;
- } part[7];
- } __attribute__ ((packed)) label;
-} dasd_volume_label_t;
-
-typedef union {
- struct {
- unsigned long no;
- unsigned int ct;
- } __attribute__ ((packed)) input;
- struct {
- unsigned long noct;
- } __attribute__ ((packed)) output;
-} __attribute__ ((packed)) dasd_xlate_t;
-
-void dasd_setup (char *, int *);
-int dasd_init (void);
-#ifdef MODULE
-int init_module (void);
-void cleanup_module (void);
-#endif /* MODULE */
-
-/* Definitions for blk.h */
-/* #define DASD_MAGIC 0x44415344 is ascii-"DASD" */
-/* #define dasd_MAGIC 0x64617364; is ascii-"DASD" */
-#define DASD_MAGIC 0xC4C1E2C4 /* is ebcdic-"DASD" */
-#define dasd_MAGIC 0x8481A284 /* is ebcdic-"DASD" */
-#define DASD_NAME "dasd"
-#define DASD_PARTN_BITS 2
-#define DASD_MAX_DEVICES (256>>DASD_PARTN_BITS)
-
-#define MAJOR_NR DASD_MAJOR
-#define PARTN_BITS DASD_PARTN_BITS
-
-#ifdef __KERNEL__
-/* Now lets turn to the internal sbtuff */
-
-/*
- define the debug levels:
- - 0 No debugging output to console or syslog
- - 1 Log internal errors to syslog, ignore check conditions
- - 2 Log internal errors and check conditions to syslog
- - 3 Log internal errors to console, log check conditions to syslog
- - 4 Log internal errors and check conditions to console
- - 5 panic on internal errors, log check conditions to console
- - 6 panic on both, internal errors and check conditions
- */
-#define DASD_DEBUG 4
-
-#define DASD_PROFILE
-/*
- define the level of paranoia
- - 0 quite sure, that things are going right
- - 1 sanity checking, only to avoid panics
- - 2 normal sanity checking
- - 3 extensive sanity checks
- - 4 exhaustive debug messages
- */
-#define DASD_PARANOIA 2
-
-/*
- define the depth of flow control, which is logged as a check condition
- - 0 No flow control messages
- - 1 Entry of functions logged like check condition
- - 2 Entry and exit of functions logged like check conditions
- - 3 Internal structure broken down
- - 4 unrolling of loops,...
- */
-#define DASD_FLOW_CONTROL 0
-
-#if DASD_DEBUG > 0
-#define PRINT_DEBUG(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
-#define PRINT_INFO(x...) printk ( KERN_INFO PRINTK_HEADER x )
-#define PRINT_WARN(x...) printk ( KERN_WARNING PRINTK_HEADER x )
-#define PRINT_ERR(x...) printk ( KERN_ERR PRINTK_HEADER x )
-#define PRINT_FATAL(x...) panic ( PRINTK_HEADER x )
-#else
-#define PRINT_DEBUG(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
-#define PRINT_INFO(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
-#define PRINT_WARN(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
-#define PRINT_ERR(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
-#define PRINT_FATAL(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
-#endif /* DASD_DEBUG */
-
-#define INTERNAL_ERRMSG(x,y...) \
-"Internal error: in file " __FILE__ " line: %d: " x, __LINE__, y
-#define INTERNAL_CHKMSG(x,y...) \
-"Inconsistency: in file " __FILE__ " line: %d: " x, __LINE__, y
-#define INTERNAL_FLWMSG(x,y...) \
-"Flow control: file " __FILE__ " line: %d: " x, __LINE__, y
-
-#if DASD_DEBUG > 4
-#define INTERNAL_ERROR(x...) PRINT_FATAL ( INTERNAL_ERRMSG ( x ) )
-#elif DASD_DEBUG > 2
-#define INTERNAL_ERROR(x...) PRINT_ERR ( INTERNAL_ERRMSG ( x ) )
-#elif DASD_DEBUG > 0
-#define INTERNAL_ERROR(x...) PRINT_WARN ( INTERNAL_ERRMSG ( x ) )
-#else
-#define INTERNAL_ERROR(x...)
-#endif /* DASD_DEBUG */
-
-#if DASD_DEBUG > 5
-#define INTERNAL_CHECK(x...) PRINT_FATAL ( INTERNAL_CHKMSG ( x ) )
-#elif DASD_DEBUG > 3
-#define INTERNAL_CHECK(x...) PRINT_ERR ( INTERNAL_CHKMSG ( x ) )
-#elif DASD_DEBUG > 1
-#define INTERNAL_CHECK(x...) PRINT_WARN ( INTERNAL_CHKMSG ( x ) )
-#else
-#define INTERNAL_CHECK(x...)
-#endif /* DASD_DEBUG */
-
-#if DASD_DEBUG > 3
-#define INTERNAL_FLOW(x...) PRINT_ERR ( INTERNAL_FLWMSG ( x ) )
-#elif DASD_DEBUG > 2
-#define INTERNAL_FLOW(x...) PRINT_WARN ( INTERNAL_FLWMSG ( x ) )
-#else
-#define INTERNAL_FLOW(x...)
-#endif /* DASD_DEBUG */
-
-#if DASD_FLOW_CONTROL > 0
-#define FUNCTION_ENTRY(x) INTERNAL_FLOW( x "entered %s\n","" );
-#else
-#define FUNCTION_ENTRY(x)
-#endif /* DASD_FLOW_CONTROL */
-
-#if DASD_FLOW_CONTROL > 1
-#define FUNCTION_EXIT(x) INTERNAL_FLOW( x "exited %s\n","" );
-#else
-#define FUNCTION_EXIT(x)
-#endif /* DASD_FLOW_CONTROL */
-
-#if DASD_FLOW_CONTROL > 2
-#define FUNCTION_CONTROL(x...) INTERNAL_FLOW( x );
-#else
-#define FUNCTION_CONTROL(x...)
-#endif /* DASD_FLOW_CONTROL */
-
-#if DASD_FLOW_CONTROL > 3
-#define LOOP_CONTROL(x...) INTERNAL_FLOW( x );
-#else
-#define LOOP_CONTROL(x...)
-#endif /* DASD_FLOW_CONTROL */
-
-#define DASD_DO_IO_SLEEP 0x01
-#define DASD_DO_IO_NOLOCK 0x02
-#define DASD_DO_IO_NODEC 0x04
-
-#define DASD_NOT_FORMATTED 0x01
-
-extern struct wait_queue *dasd_waitq;
-
-#undef DEBUG_DASD_MALLOC
-#ifdef DEBUG_DASD_MALLOC
-void *b;
-#define kmalloc(x...) (PRINT_INFO(" kmalloc %p\n",b=kmalloc(x)),b)
-#define kfree(x) PRINT_INFO(" kfree %p\n",x);kfree(x)
-#define get_free_page(x...) (PRINT_INFO(" gfp %p\n",b=get_free_page(x)),b)
-#define __get_free_pages(x...) (PRINT_INFO(" gfps %p\n",b=__get_free_pages(x)),b)
-#endif /* DEBUG_DASD_MALLOC */
-
-#endif /* __KERNEL__ */
-#endif /* DASD_H */
-
-/*
- * Overrides for Emacs so that we follow Linus's tabbing style.
- * Emacs will notice this stuff at the end of the file and automatically
- * adjust the settings for this buffer only. This must remain at the end
- * of the file.
- * ---------------------------------------------------------------------------
- * Local variables:
- * c-indent-level: 4
- * c-brace-imaginary-offset: 0
- * c-brace-offset: -4
- * c-argdecl-indent: 4
- * c-label-offset: -4
- * c-continued-statement-offset: 4
- * c-continued-brace-offset: 0
- * indent-tabs-mode: nil
- * tab-width: 8
- * End:
- */
--- /dev/null
+/*
+ * File...........: linux/drivers/s390/block/dasd_3990_erp.c
+ * Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
+ * Horst Hummel <Horst.Hummel@de.ibm.com>
+ * Bugreports.to..: <Linux390@de.ibm.com>
+ * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+ */
+
+#include <linux/dasd.h>
+#include "dasd_erp.h"
+
+#define PRINTK_HEADER "dasd_erp(3990)"
+
+/*
+ * DASD_3990_ERP_EXAMINE_32
+ *
+ * DESCRIPTION
+ * Checks only for fatal/no/recoverable error.
+ * A detailed examination of the sense data is done later outside
+ * the interrupt handler.
+ *
+ * RETURN VALUES
+ * dasd_era_none no error
+ * dasd_era_fatal for all fatal (unrecoverable errors)
+ * dasd_era_recover for recoverable others.
+ */
+dasd_era_t
+dasd_3990_erp_examine_32 (char *sense)
+{
+
+ switch (sense[25]) {
+ case 0x00:
+ return dasd_era_none;
+ case 0x01:
+ return dasd_era_fatal;
+ default:
+ return dasd_era_recover;
+ }
+
+} /* end dasd_3990_erp_examine_32 */
+
+/*
+ * DASD_3990_ERP_EXAMINE_24
+ *
+ * DESCRIPTION
+ * Checks only for fatal (unrecoverable) error.
+ * A detailed examination of the sense data is done later outside
+ * the interrupt handler.
+ *
+ * Each bit configuration leading to action code 2 (exit with
+ * programming error or unusual condition indication) or action
+ * code 10 (disabled interface) is handled as a fatal error.
+ *
+ * All other configurations are handled as recoverable errors.
+ *
+ * RETURN VALUES
+ * dasd_era_fatal for all fatal (unrecoverable errors)
+ * dasd_era_recover for all others.
+ */
+dasd_era_t
+dasd_3990_erp_examine_24 (char *sense)
+{
+
+	/* check for 'Command Reject', which is always a fatal error */
+ if (sense[0] & 0x80) {
+ return dasd_era_fatal;
+ }
+ /* check for 'Invalid Track Format' */
+ if (sense[1] & 0x40) {
+ return dasd_era_fatal;
+ }
+ /* check for 'No Record Found' */
+ if (sense[1] & 0x08) {
+ return dasd_era_fatal;
+ }
+ /* return recoverable for all others */
+ return dasd_era_recover;
+
+} /* END dasd_3990_erp_examine_24 */
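The 24-byte screening above reduces to three bit tests on the first two sense bytes. A minimal userland sketch of that classification (the `era` enum and `examine_24` name are stand-ins for illustration, not driver API):

```c
#include <assert.h>

/* Sense-byte screening as in dasd_3990_erp_examine_24:
 * sense[0] bit 0x80 = Command Reject,
 * sense[1] bit 0x40 = Invalid Track Format,
 * sense[1] bit 0x08 = No Record Found.
 * Any of these is fatal; everything else is a recovery candidate. */
enum era { ERA_FATAL = -1, ERA_NONE = 0, ERA_RECOVER = 2 };

static enum era examine_24(const unsigned char *sense)
{
	if (sense[0] & 0x80)	/* Command Reject */
		return ERA_FATAL;
	if (sense[1] & 0x40)	/* Invalid Track Format */
		return ERA_FATAL;
	if (sense[1] & 0x08)	/* No Record Found */
		return ERA_FATAL;
	return ERA_RECOVER;	/* everything else: try recovery */
}
```

The detailed sense analysis is deliberately deferred; only the fatal/recoverable split must happen in interrupt context.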
+
+/*
+ * DASD_3990_ERP_EXAMINE
+ *
+ * DESCRIPTION
+ * Checks only for fatal/no/recover error.
+ * A detailed examination of the sense data is done later outside
+ * the interrupt handler.
+ *
+ * The logic is based on the 'IBM 3990 Storage Control Reference' manual
+ * 'Chapter 7. Error Recovery Procedures'.
+ *
+ * RETURN VALUES
+ * dasd_era_none no error
+ * dasd_era_fatal for all fatal (unrecoverable errors)
+ * dasd_era_recover for all others.
+ */
+dasd_era_t
+dasd_3990_erp_examine (cqr_t * cqr, devstat_t * stat)
+{
+
+ char *sense = stat->ii.sense.data;
+
+ /* check for successful execution first */
+ if (stat->cstat == 0x00 &&
+ stat->dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+ return dasd_era_none;
+
+ /* distinguish between 24 and 32 byte sense data */
+ if (sense[27] & 0x80) {
+
+ /* examine the 32 byte sense data */
+ return dasd_3990_erp_examine_32 (sense);
+
+ } else {
+
+ /* examine the 24 byte sense data */
+ return dasd_3990_erp_examine_24 (sense);
+
+ } /* end distinguish between 24 and 32 byte sense data */
+
+} /* END dasd_3990_erp_examine */
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-indent-level: 4
+ * c-brace-imaginary-offset: 0
+ * c-brace-offset: -4
+ * c-argdecl-indent: 4
+ * c-label-offset: -4
+ * c-continued-statement-offset: 4
+ * c-continued-brace-offset: 0
+ * indent-tabs-mode: nil
+ * tab-width: 8
+ * End:
+ */
--- /dev/null
+/*
+ * File...........: linux/drivers/s390/block/dasd_9343_erp.h
+ * Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
+ * Bugreports.to..: <Linux390@de.ibm.com>
+ * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+ */
+
+#include <linux/dasd.h>
+#include "dasd_erp.h"
+
+#define PRINTK_HEADER "dasd_erp(9343)"
+
+dasd_era_t
+dasd_9343_erp_examine (cqr_t * cqr, devstat_t * stat)
+{
+ if (stat->cstat == 0x00 &&
+ stat->dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+ return dasd_era_none;
+ return dasd_era_recover;
+}
#include <linux/kernel.h>
#include <linux/malloc.h>
#include <asm/spinlock.h>
+#include <linux/dasd.h>
#include <asm/atomic.h>
-#include "dasd.h"
#include "dasd_types.h"
#define PRINTK_HEADER "dasd_ccw:"
}
#endif /* DASD_PARANOIA */
exit:
- FUNCTION_EXIT ("request_cpa");
return freeblk;
}
return;
}
+/* ---------------------------------------------------------- */
+
+static erp_t *erpp = NULL;
+#ifdef __SMP__
+static spinlock_t erp_lock = SPIN_LOCK_UNLOCKED;
+#endif /* __SMP__ */
+
+void
+erf_enq (erp_t * cqf)
+{
+ *(erp_t **) cqf = erpp;
+ erpp = cqf;
+}
+
+erp_t *
+erf_deq (void)
+{
+ erp_t *erp = erpp;
+ erpp = *(erp_t **) erpp;
+ return erp;
+}
+
+erp_t *
+request_er (void)
+{
+ erp_t *erp = NULL;
+ int i;
+ erp_t *area;
+
+ spin_lock (&erp_lock);
+ while (erpp == NULL) {
+ do {
+ area = (erp_t *) get_free_page (GFP_ATOMIC);
+ if (area == NULL) {
+ printk (KERN_WARNING PRINTK_HEADER
+				        "No memory for erp area\n");
+ }
+ } while (!area);
+ memset (area, 0, PAGE_SIZE);
+ if (dasd_page_count + 1 >= MAX_DASD_PAGES) {
+ PRINT_WARN ("Requesting too many pages...");
+ } else {
+ dasd_page[dasd_page_count++] =
+ (long) area;
+ }
+ for (i = 0; i < 4096 / sizeof (erp_t); i++) {
+ erf_enq (area + i);
+ }
+ }
+ erp = erf_deq ();
+ spin_unlock (&erp_lock);
+ return erp;
+}
+
+void
+release_er (erp_t * erp)
+{
+ spin_lock (&erp_lock);
+ erf_enq (erp);
+ spin_unlock (&erp_lock);
+ return;
+}
+
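The erp allocator above carves whole pages into `erp_t` slots and keeps them on an intrusive free list: a free object stores the next-free pointer in its own first bytes, so the list needs no extra memory. A userland sketch of the pattern (object size, `PAGE_SZ`, and names are illustrative, with `malloc` standing in for `get_free_page`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SZ 4096

typedef struct { char payload[64]; } obj_t;

static obj_t *free_head = NULL;

static void obj_enq(obj_t *o)		/* push onto free list, as erf_enq */
{
	*(obj_t **)o = free_head;	/* reuse the object's own storage */
	free_head = o;
}

static obj_t *obj_deq(void)		/* pop, refilling from a fresh page */
{
	if (!free_head) {
		char *area = malloc(PAGE_SZ);	/* get_free_page() stand-in */
		size_t i;
		memset(area, 0, PAGE_SZ);
		for (i = 0; i < PAGE_SZ / sizeof(obj_t); i++)
			obj_enq((obj_t *)(area + i * sizeof(obj_t)));
	}
	free_head = *(obj_t **)free_head;
	/* note: caller sees the popped object; sketch returns it below */
	return (obj_t *)((char *)free_head - 0);
}
```

A small correction to the sketch for clarity: pop must save the head before advancing it, exactly as `erf_deq` does:

```c
obj_t *pop(void)
{
	obj_t *o = obj_deq();	/* see note above */
	return o;
}
```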
/* ----------------------------------------------------------- */
cqr_t *
request_cqr (int cpsize, int datasize)
memset (cqr->data,0,datasize);
}
goto exit;
- nodata:
- release_cp (cqr->cplength, cqr->cpaddr);
nocp:
release_cq (cqr);
cqr = NULL;
return rc;
}
+/* ----------------------------------------------------------- */
+erp_t *
+allloc_erp (erp_t * erp, int cpsize, int datasize)
+{
+ if (cpsize) {
+ erp->cqr.cpaddr = request_cp (cpsize);
+ if (erp->cqr.cpaddr == NULL) {
+ printk (KERN_WARNING PRINTK_HEADER __FILE__
+ "No memory for channel program\n");
+ goto nocp;
+ }
+ erp->cqr.cplength = cpsize;
+ }
+ if (datasize) {
+ do {
+ erp->cqr.data = (char *) kmalloc (datasize, GFP_ATOMIC);
+ if (erp->cqr.data == NULL) {
+ printk (KERN_WARNING PRINTK_HEADER __FILE__
+ "No memory for ERP data area\n");
+ }
+ } while (!erp->cqr.data);
+ memset (erp->cqr.data, 0, datasize);
+ }
+ goto exit;
+ nocp:
+ release_er (erp);
+ erp = NULL;
+ exit:
+ return erp;
+}
+
+int
+release_erp (erp_t * erp)
+{
+ int rc = 0;
+ if (erp == NULL) {
+ rc = -ENOENT;
+ return rc;
+ }
+ if (erp->cqr.data) {
+ kfree (erp->cqr.data);
+ }
+ if (erp->cqr.dstat) {
+ kfree (erp->cqr.dstat);
+ }
+ if (erp->cqr.cpaddr) {
+ release_cp (erp->cqr.cplength, erp->cqr.cpaddr);
+ }
+ erp->cqr.magic = dasd_MAGIC;
+ release_er (erp);
+ return rc;
+}
+
/* -------------------------------------------------------------- */
void
dasd_chanq_enq (dasd_chanq_t * q, cqr_t * cqr)
cqr->next = NULL;
q->tail = cqr;
q->queued_requests ++;
- if (atomic_compare_and_swap(CQR_STATUS_FILLED,
- CQR_STATUS_QUEUED,
- &cqr->status)) {
- PRINT_WARN ("q_cqr: %p status changed %d\n",
- cqr,atomic_read(&cqr->status));
- atomic_set(&cqr->status,CQR_STATUS_QUEUED);
+ ACS (cqr->status, CQR_STATUS_FILLED, CQR_STATUS_QUEUED);
}
+
+void
+dasd_chanq_enq_head (dasd_chanq_t * q, cqr_t * cqr)
+{
+ cqr->next = q->head;
+ q->head = cqr;
+ if (q->tail == NULL)
+ q->tail = cqr;
+ q->queued_requests++;
+ ACS (cqr->status, CQR_STATUS_FILLED, CQR_STATUS_QUEUED);
}
int
}
cqr->next = NULL;
q->queued_requests --;
+ if (cqr->magic == ERP_MAGIC)
+ return release_erp ((erp_t *) cqr);
+ else
return release_cqr(cqr);
}
spin_lock(&cq_lock);
if (! (atomic_read(&q->flags) & DASD_CHANQ_ACTIVE)) {
PRINT_WARN("Queue not active\n");
- }
- else if (cq_head == q) {
+ } else if (cq_head == q) {
cq_head = q->next_q;
} else {
c = cq_head;
cqr_t *request_cqr (int, int);
int release_cqr (cqr_t *);
+erp_t *request_er (void);
+int release_er (erp_t *);
int dasd_chanq_enq (dasd_chanq_t *, cqr_t *);
int dasd_chanq_deq (dasd_chanq_t *, cqr_t *);
void cql_enq_head (dasd_chanq_t * q);
#include <linux/stddef.h>
#include <linux/kernel.h>
-#ifdef MODULE
-#include <linux/module.h>
-#endif /* MODULE */
-
#include <linux/malloc.h>
+#include <linux/dasd.h>
+#include <linux/hdreg.h> /* HDIO_GETGEO */
+
#include <asm/io.h>
#include <asm/irq.h>
#include "dasd_types.h"
#include "dasd_ccwstuff.h"
-#include "dasd.h"
-
#ifdef PRINTK_HEADER
#undef PRINTK_HEADER
#endif /* PRINTK_HEADER */
eckd_home_t;
-/* eckd count area */
-typedef struct {
- __u16 cyl;
- __u16 head;
- __u8 record;
- __u8 kl;
- __u16 dl;
-} __attribute__ ((packed))
-
-eckd_count_t;
+dasd_era_t dasd_eckd_erp_examine (cqr_t *, devstat_t *);
static unsigned int
round_up_multiple (unsigned int no, unsigned int mult)
memset (de_ccw, 0, sizeof (ccw1_t));
de_ccw->cmd_code = CCW_DEFINE_EXTENT;
de_ccw->count = 16;
- de_ccw->cda = (void *) virt_to_phys (data);
+ de_ccw->cda = (void *) __pa (data);
memset (data, 0, sizeof (DE_eckd_data_t));
switch (cmd) {
break;
case DASD_ECKD_CCW_WRITE:
case DASD_ECKD_CCW_WRITE_MT:
+ data->mask.perm = 0x02;
data->attributes.operation = 0x3; /* enable seq. caching */
break;
case DASD_ECKD_CCW_WRITE_CKD:
memset (lo_ccw, 0, sizeof (ccw1_t));
lo_ccw->cmd_code = DASD_ECKD_CCW_LOCATE_RECORD;
lo_ccw->count = 16;
- lo_ccw->cda = (void *) virt_to_phys (data);
+ lo_ccw->cda = (void *) __pa (data);
memset (data, 0, sizeof (LO_eckd_data_t));
switch (cmd) {
data->operation.operation = 0x16;
break;
case DASD_ECKD_CCW_WRITE_RECORD_ZERO:
- data->operation.orientation = 0x3;
+ data->operation.orientation = 0x1;
data->operation.operation = 0x03;
data->count++;
break;
r0_data->record = 0;
r0_data->kl = 0;
r0_data->dl = 8;
- last_ccw->cmd_code = 0x03;
+ last_ccw->cmd_code = DASD_ECKD_CCW_WRITE_RECORD_ZERO;
last_ccw->count = 8;
last_ccw->flags = CCW_FLAG_CC | CCW_FLAG_SLI;
- last_ccw->cda = (void *) virt_to_phys (r0_data);
+ last_ccw->cda = (void *) __pa (r0_data);
last_ccw++;
}
/* write remaining records */
last_ccw->cmd_code = DASD_ECKD_CCW_WRITE_CKD;
last_ccw->flags = CCW_FLAG_CC | CCW_FLAG_SLI;
last_ccw->count = 8;
- last_ccw->cda = (void *)
- virt_to_phys (ct_data + i);
+ last_ccw->cda = (void *) __pa (ct_data + i);
}
(last_ccw - 1)->flags &= ~(CCW_FLAG_CC | CCW_FLAG_DC);
fcp -> devindex = di;
fcp -> flags = DASD_DO_IO_SLEEP;
do {
- struct wait_queue wait = {current, NULL};
+ struct wait_queue wait =
+ {current, NULL};
unsigned long flags;
int irq;
int cs;
int blk_per_trk = recs_per_track (&(info->rdc_data->eckd),
0, info->sizes.bp_block);
int byt_per_blk = info->sizes.bp_block;
- int noblk = req-> nr_sectors >> info->sizes.s2b_shift;
int btrk = (req->sector >> info->sizes.s2b_shift) / blk_per_trk;
int etrk = ((req->sector + req->nr_sectors - 1) >>
info->sizes.s2b_shift) / blk_per_trk;
-
- if ( ! noblk ) {
- PRINT_ERR("No blocks to write...returning\n");
- return NULL;
- }
+ int bhct;
+ long size;
if (req->cmd == READ) {
rw_cmd = DASD_ECKD_CCW_READ_MT;
}
#endif /* DASD_PARANOIA */
/* Build the request */
- rw_cp = request_cqr (2 + noblk,
+#if 0
+ PRINT_INFO ("req %d %d %d %d\n", devindex, req->cmd, req->sector, req->nr_sectors);
+#endif
+ /* count bhs to prevent errors, when bh smaller than block */
+ bhct = 0;
+
+ for (bh = req->bh; bh; bh = bh->b_reqnext) {
+ if (bh->b_size > byt_per_blk)
+ for (size = 0; size < bh->b_size; size += byt_per_blk)
+ bhct++;
+ else
+ bhct++;
+ }
+
+ rw_cp = request_cqr (2 + bhct,
sizeof (DE_eckd_data_t) +
sizeof (LO_eckd_data_t));
if ( ! rw_cp ) {
req->nr_sectors >> info->sizes.s2b_shift,
rw_cmd, info);
ccw->flags = CCW_FLAG_CC;
- for (bh = req->bh; bh; bh = bh->b_reqnext) {
- long size;
+ for (bh = req->bh; bh != NULL;) {
+ if (bh->b_size > byt_per_blk) {
for (size = 0; size < bh->b_size; size += byt_per_blk) {
ccw++;
ccw->flags = CCW_FLAG_CC;
ccw->cmd_code = rw_cmd;
ccw->count = byt_per_blk;
- ccw->cda = (void *) virt_to_phys (bh->b_data + size);
+ ccw->cda = (void *) __pa (bh->b_data + size);
}
+ bh = bh->b_reqnext;
+ } else { /* group N bhs to fit into byt_per_blk */
+ for (size = 0; bh != NULL && size < byt_per_blk;) {
+ ccw++;
+ ccw->flags = CCW_FLAG_DC;
+ ccw->cmd_code = rw_cmd;
+ ccw->count = bh->b_size;
+ ccw->cda = (void *) __pa (bh->b_data);
+ size += bh->b_size;
+ bh = bh->b_reqnext;
}
- ccw->flags &= ~(CCW_FLAG_DC | CCW_FLAG_CC);
- return rw_cp;
-}
-
-cqr_t *
-dasd_eckd_rw_label (int devindex, int rw, char *buffer)
-{
- int cmd_code = 0x03;
- dasd_information_t *info = dasd_info[devindex];
- cqr_t *cqr;
- ccw1_t *ccw;
-
- switch (rw) {
- case READ:
- cmd_code = DASD_ECKD_CCW_READ;
- break;
- case WRITE:
- cmd_code = DASD_ECKD_CCW_WRITE;
- break;
-#if DASD_PARANOIA > 2
- default:
- INTERNAL_ERROR ("unknown cmd %d", rw);
+ if (size != byt_per_blk) {
+ PRINT_WARN ("Cannot fulfill small request %d vs. %d (%d sects)\n", size, byt_per_blk, req->nr_sectors);
+ release_cqr (rw_cp);
return NULL;
-#endif /* DASD_PARANOIA */
}
- cqr = request_cqr (3, sizeof (DE_eckd_data_t) +
- sizeof (LO_eckd_data_t));
- ccw = cqr->cpaddr;
- define_extent (ccw, cqr->data, 0, 0, cmd_code, info);
- ccw->flags |= CCW_FLAG_CC;
- ccw++;
- locate_record (ccw, cqr->data + 1, 0, 2, 1, cmd_code, info);
- ccw->flags |= CCW_FLAG_CC;
- ccw++;
- ccw->cmd_code = cmd_code;
- ccw->flags |= CCW_FLAG_SLI;
- ccw->count = sizeof (dasd_volume_label_t);
- ccw->cda = (void *) virt_to_phys ((void *) buffer);
- return cqr;
-
+ ccw->flags = CCW_FLAG_CC;
+ }
+ }
+ ccw->flags &= ~(CCW_FLAG_DC | CCW_FLAG_CC);
+ return rw_cp;
}
void
return rc;
}
-int
-dasd_eckd_read_count (int di)
+cqr_t *
+dasd_eckd_fill_sizes_first (int di)
{
- int rc;
cqr_t *rw_cp = NULL;
ccw1_t *ccw;
DE_eckd_data_t *DE_data;
LO_eckd_data_t *LO_data;
- eckd_count_t *count_data;
- int retries = 5;
- unsigned long flags;
- int irq;
- int cs;
dasd_information_t *info = dasd_info[di];
+ eckd_count_t *count_data = &(info->private.eckd.count_data);
+
+ dasd_info[di]->sizes.label_block = 2;
+
rw_cp = request_cqr (3,
sizeof (DE_eckd_data_t) +
- sizeof (LO_eckd_data_t) +
- sizeof (eckd_count_t));
+ sizeof (LO_eckd_data_t));
DE_data = rw_cp->data;
LO_data = rw_cp->data + sizeof (DE_eckd_data_t);
- count_data = (eckd_count_t*)((long)LO_data + sizeof (LO_eckd_data_t));
ccw = rw_cp->cpaddr;
define_extent (ccw, DE_data, 0, 0, DASD_ECKD_CCW_READ_COUNT, info);
ccw->flags = CCW_FLAG_CC;
ccw++;
ccw->cmd_code = DASD_ECKD_CCW_READ_COUNT;
ccw->count = 8;
- ccw->cda = (void *) virt_to_phys (count_data);
+ ccw->cda = (void *) __pa (count_data);
rw_cp->devindex = di;
- rw_cp -> options = DOIO_WAIT_FOR_INTERRUPT;
- do {
- irq = dasd_info[di]->info.irq;
- s390irq_spin_lock_irqsave (irq, flags);
- atomic_set(&rw_cp -> status, CQR_STATUS_QUEUED);
- rc = dasd_start_IO ( rw_cp );
- s390irq_spin_unlock_irqrestore (irq, flags);
- retries --;
- cs = atomic_read(&rw_cp->status);
- if ( cs != CQR_STATUS_DONE && retries == 5 ) {
- dasd_eckd_print_error(rw_cp->dstat);
- }
- } while ( ( ( cs != CQR_STATUS_DONE) || rc ) && retries );
- if ( ( rc || cs != CQR_STATUS_DONE) ) {
- if ( ( cs == CQR_STATUS_ERROR ) &&
- ( rw_cp -> dstat -> ii.sense.data[1] == 0x08 ) ) {
- rc = -EMEDIUMTYPE;
- } else {
- dasd_eckd_print_error (rw_cp->dstat);
- rc = -EIO;
- }
- } else {
- rc = count_data->dl;
- }
- release_cqr (rw_cp);
- return rc;
+ atomic_set (&rw_cp->status, CQR_STATUS_FILLED);
+ return rw_cp;
}
int
-dasd_eckd_fill_sizes (int devindex)
+dasd_eckd_fill_sizes_last (int devindex)
{
- int bs = 0;
- int sb;
+ int sb,rpt;
dasd_information_t *in = dasd_info[devindex];
- bs = dasd_eckd_read_count (devindex);
+ int bs = in->private.eckd.count_data.dl;
if (bs <= 0) {
PRINT_INFO("Cannot figure out blocksize. did you format the disk?\n");
memset (&(in -> sizes), 0, sizeof(dasd_sizes_t ));
}
in->sizes.bp_sector = in->sizes.bp_block;
- in->sizes.b2k_shift = 0; /* bits to shift a block to get 1k */
- for (sb = 1024; sb < bs; sb = sb << 1)
- in->sizes.b2k_shift++;
-
+ if (bs & 511) {
+		PRINT_INFO ("Probably not a Linux-formatted device!\n");
+ return -EMEDIUMTYPE;
+ }
in->sizes.s2b_shift = 0; /* bits to shift 512 to get a block */
for (sb = 512; sb < bs; sb = sb << 1)
in->sizes.s2b_shift++;
in->sizes.blocks = in->rdc_data->eckd.no_cyl *
in->rdc_data->eckd.trk_per_cyl *
recs_per_track (&(in->rdc_data->eckd), 0, bs);
- in->sizes.kbytes = in->sizes.blocks << in->sizes.b2k_shift;
+
+ in->sizes.kbytes = ( in->sizes.blocks << in->sizes.s2b_shift) >> 1;
+
+	rpt = recs_per_track (&(in->rdc_data->eckd), 0, in->sizes.bp_block);
PRINT_INFO ("Verified: %d B/trk %d B/Blk(%d B) %d Blks/trk %d kB/trk \n",
bytes_per_track (&(in->rdc_data->eckd)),
- bytes_per_record (&(in->rdc_data->eckd), 0, in->sizes.bp_block),
+ bytes_per_record (&(in->rdc_data->eckd), 0,
+ in->sizes.bp_block),
in->sizes.bp_block,
- recs_per_track (&(in->rdc_data->eckd), 0, in->sizes.bp_block),
- (recs_per_track (&(in->rdc_data->eckd), 0, in->sizes.bp_block) <<
- in->sizes.b2k_shift ));
+ rpt,
+ (rpt << in->sizes.s2b_shift) >> 1);
return 0;
}
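The size bookkeeping above works entirely in shifts: `s2b_shift` counts how often 512 doubles until it reaches the block size, and the capacity in KB is then `(blocks << s2b_shift) >> 1`. A sketch with illustrative values (function names are not driver API):

```c
#include <assert.h>

/* Shift loop from dasd_eckd_fill_sizes_last: number of doublings
 * from a 512-byte sector to the device block size. */
static int s2b_shift(int bs)
{
	int shift = 0, sb;
	for (sb = 512; sb < bs; sb <<= 1)
		shift++;
	return shift;
}

/* kbytes = blocks * (bs / 512) / 2, expressed with shifts as the
 * driver does: shift blocks up to sectors, then halve (2 sectors/KB). */
static unsigned int kbytes(unsigned int blocks, int bs)
{
	return (blocks << s2b_shift(bs)) >> 1;
}
```

This is also why the `bs & 511` check above rejects block sizes that are not a multiple of 512: the shift arithmetic only holds for power-of-two multiples of the sector size.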
+void
+dasd_eckd_fill_geometry (int di, struct hd_geometry *geo)
+{
+ dasd_information_t *info = dasd_info[di];
+ geo->cylinders = info->rdc_data->eckd.no_cyl;
+ geo->heads = info->rdc_data->eckd.trk_per_cyl;
+ geo->sectors = recs_per_track (&(info->rdc_data->eckd),
+ 0, info->sizes.bp_block);
+ geo->start = info->sizes.label_block + 1;
+}
+
dasd_operations_t dasd_eckd_operations =
{
- dasd_eckd_ck_devinfo,
- dasd_eckd_build_req,
- dasd_eckd_rw_label,
- dasd_eckd_ck_char,
- dasd_eckd_fill_sizes,
- dasd_eckd_format,
+ ck_devinfo:dasd_eckd_ck_devinfo,
+ get_req_ccw:dasd_eckd_build_req,
+ ck_characteristics:dasd_eckd_ck_char,
+ fill_sizes_first:dasd_eckd_fill_sizes_first,
+ fill_sizes_last:dasd_eckd_fill_sizes_last,
+ dasd_format:dasd_eckd_format,
+ fill_geometry:dasd_eckd_fill_geometry,
+ erp_examine:dasd_eckd_erp_examine
};
/*
--- /dev/null
+/*
+ * File...........: linux/drivers/s390/block/dasd_eckd_erp.h
+ * Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
+ * Bugreports.to..: <Linux390@de.ibm.com>
+ * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+ */
+
+#include <linux/dasd.h>
+#include "dasd_erp.h"
+
+#define PRINTK_HEADER "dasd_erp(eckd)"
+
+dasd_era_t
+dasd_eckd_erp_examine (cqr_t * cqr, devstat_t * stat)
+{
+
+ if (stat->cstat == 0x00 &&
+ stat->dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END))
+ return dasd_era_none;
+
+ switch (dasd_info[cqr->devindex]->info.sid_data.cu_model) {
+ case 0x3990:
+ return dasd_3990_erp_examine (cqr, stat);
+ case 0x9343:
+ return dasd_9343_erp_examine (cqr, stat);
+ default:
+ return dasd_era_recover;
+ }
+}
--- /dev/null
+/*
+ * File...........: linux/drivers/s390/block/dasd_erp.c
+ * Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
+ * Bugreports.to..: <Linux390@de.ibm.com>
+ * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+ */
+
+#include <asm/irq.h>
+#include <linux/dasd.h>
+#include "dasd_erp.h"
+
+#define PRINTK_HEADER "dasd_erp"
+
+dasd_era_t
+dasd_erp_examine (cqr_t * cqr, devstat_t * stat)
+{
+ int rc;
+ if (stat->cstat == 0x00 &&
+ stat->dstat == (DEV_STAT_CHN_END | DEV_STAT_DEV_END)) {
+ PRINT_WARN ("No error detected\n");
+ rc = dasd_era_none;
+ } else if (!(stat->flag & DEVSTAT_FLAG_SENSE_AVAIL)) {
+ PRINT_WARN ("No sense data available, try to recover anyway\n");
+ rc = dasd_era_recover;
+ } else
+#if DASD_PARANOIA > 1
+ if (!dasd_disciplines[dasd_info[cqr->devindex]->type]->
+ erp_examine) {
+ INTERNAL_CHECK ("No erp_examinator for dt=%d\n",
+ dasd_info[cqr->devindex]->type);
+ rc = dasd_era_fatal;
+ } else
+#endif
+ {
+ PRINT_WARN ("calling examinator\n");
+ rc = dasd_disciplines[dasd_info[cqr->devindex]->type]->
+ erp_examine (cqr, stat);
+ }
+ PRINT_WARN("ERP action code = %d\n",rc);
+ return rc;
+}
+
+void
+default_erp_action (erp_t * erp)
+{
+ cqr_t *cqr = erp->cqr.int4cqr;
+ ccw1_t *cpa = request_cp(1,0);
+
+ memset (cpa,0,sizeof(ccw1_t));
+
+ cpa -> cmd_code = CCW_CMD_NOOP;
+
+ ((cqr_t *) erp)->cpaddr = cpa;
+ if (cqr->retries++ <= 16) {
+ ACS (cqr->status,
+ CQR_STATUS_ERP_PEND,
+ CQR_STATUS_QUEUED);
+ } else {
+ PRINT_WARN ("ERP retry count exceeded\n");
+ ACS (cqr->status,
+ CQR_STATUS_ERP_PEND,
+ CQR_STATUS_FAILED);
+ }
+ atomic_set (&(((cqr_t *) erp)->status), CQR_STATUS_FILLED);
+}
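`default_erp_action` retries a failed request by requeueing it behind a NOP channel program, giving up after the retry counter passes 16. The state transition alone can be sketched as (enum values and names are illustrative, mirroring the `CQR_STATUS_*` constants):

```c
#include <assert.h>

enum st { ST_ERP_PEND, ST_QUEUED, ST_FAILED };

/* Retry policy from default_erp_action: requeue while the retry
 * counter (post-incremented) is still <= 16, otherwise fail. */
static enum st erp_next_state(int *retries)
{
	if ((*retries)++ <= 16)
		return ST_QUEUED;
	return ST_FAILED;
}
```

Because the test is `retries++ <= 16`, the request is requeued 17 times in total before being marked failed.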
+
+dasd_erp_action_t
+dasd_erp_action (struct cqr_t *cqr)
+{
+ return default_erp_action;
+}
+
+dasd_erp_action_t
+dasd_erp_postaction (struct erp_t * erp)
+{
+ return NULL;
+}
--- /dev/null
+/*
+ * File...........: linux/drivers/s390/block/dasd_erp.h
+ * Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
+ * Bugreports.to..: <Linux390@de.ibm.com>
+ * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+ */
+
+#ifndef DASD_ERP_H
+#define DASD_ERP_H
+
+typedef enum {
+ dasd_era_fatal = -1, /* no chance to recover */
+ dasd_era_none = 0, /* don't recover, everything alright */
+ dasd_era_msg = 1, /* don't recover, just report... */
+ dasd_era_recover = 2 /* recovery action recommended */
+} dasd_era_t;
+
+#include "dasd_types.h"
+
+typedef struct erp_t {
+ struct cqr_t cqr;
+} __attribute__ ((packed))
+
+erp_t;
+
+typedef void (*dasd_erp_action_t) (erp_t *);
+
+dasd_era_t dasd_erp_examine (struct cqr_t *, devstat_t *);
+dasd_erp_action_t dasd_erp_action (struct cqr_t *);
+dasd_erp_action_t dasd_erp_postaction (struct erp_t *);
+
+#endif /* DASD_ERP_H */
--- /dev/null
+
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+
+#ifdef MODULE
+#include <linux/module.h>
+#endif /* MODULE */
+
+#include <linux/malloc.h>
+#include <linux/dasd.h>
+#include <linux/hdreg.h> /* HDIO_GETGEO */
+
+#include <asm/io.h>
+
+#include <asm/irq.h>
+
+#include "dasd_types.h"
+#include "dasd_ccwstuff.h"
+
+#define DASD_FBA_CCW_LOCATE 0x43
+#define DASD_FBA_CCW_READ 0x42
+#define DASD_FBA_CCW_WRITE 0x41
+
+#ifdef PRINTK_HEADER
+#undef PRINTK_HEADER
+#endif /* PRINTK_HEADER */
+#define PRINTK_HEADER "dasd(fba):"
+
+typedef
+struct {
+ struct {
+ unsigned char perm:2; /* Permissions on this extent */
+ unsigned char zero:2; /* Must be zero */
+ unsigned char da:1; /* usually zero */
+ unsigned char diag:1; /* allow diagnose */
+ unsigned char zero2:2; /* zero */
+ } __attribute__ ((packed)) mask;
+ unsigned char zero; /* Must be zero */
+ unsigned short blk_size; /* Blocksize */
+ unsigned long ext_loc; /* Extent locator */
+ unsigned long ext_beg; /* logical number of block 0 in extent */
+		unsigned long ext_end;	 /* logical number of last block in extent */
+} __attribute__ ((packed, aligned (32)))
+
+DE_fba_data_t;
+
+typedef
+struct {
+ struct {
+ unsigned char zero:4;
+ unsigned char cmd:4;
+ } __attribute__ ((packed)) operation;
+ unsigned char auxiliary;
+ unsigned short blk_ct;
+ unsigned long blk_nr;
+} __attribute__ ((packed, aligned (32)))
+
+LO_fba_data_t;
+
+static void
+define_extent (ccw1_t * ccw, DE_fba_data_t * DE_data, int rw,
+ int blksize, int beg, int nr)
+{
+ memset (DE_data, 0, sizeof (DE_fba_data_t));
+ ccw->cmd_code = CCW_DEFINE_EXTENT;
+ ccw->count = 16;
+ ccw->cda = (void *) __pa (DE_data);
+ if (rw == WRITE)
+ (DE_data->mask).perm = 0x0;
+ else if (rw == READ)
+ (DE_data->mask).perm = 0x1;
+ else
+ DE_data->mask.perm = 0x2;
+ DE_data->blk_size = blksize;
+ DE_data->ext_loc = beg;
+ DE_data->ext_end = nr - 1;
+}
+
+static void
+locate_record (ccw1_t * ccw, LO_fba_data_t * LO_data, int rw, int block_nr,
+ int block_ct)
+{
+ memset (LO_data, 0, sizeof (LO_fba_data_t));
+ ccw->cmd_code = DASD_FBA_CCW_LOCATE;
+ ccw->count = 8;
+ ccw->cda = (void *) __pa (LO_data);
+ if (rw == WRITE)
+ LO_data->operation.cmd = 0x5;
+ else if (rw == READ)
+ LO_data->operation.cmd = 0x6;
+ else
+ LO_data->operation.cmd = 0x8;
+ LO_data->blk_nr = block_nr;
+ LO_data->blk_ct = block_ct;
+}
+
+int
+dasd_fba_ck_devinfo (dev_info_t * info)
+{
+ return 0;
+}
+
+cqr_t *
+dasd_fba_build_req (int devindex,
+ struct request * req)
+{
+ cqr_t *rw_cp = NULL;
+ ccw1_t *ccw;
+
+ DE_fba_data_t *DE_data;
+ LO_fba_data_t *LO_data;
+ struct buffer_head *bh;
+ int rw_cmd;
+ int byt_per_blk = dasd_info[devindex]->sizes.bp_block;
+ int bhct;
+ long size;
+
+ if (!req->nr_sectors) {
+ PRINT_ERR ("No blocks to write...returning\n");
+ return NULL;
+ }
+ if (req->cmd == READ) {
+ rw_cmd = DASD_FBA_CCW_READ;
+ } else
+#if DASD_PARANOIA > 2
+ if (req->cmd == WRITE)
+#endif /* DASD_PARANOIA */
+ {
+ rw_cmd = DASD_FBA_CCW_WRITE;
+ }
+#if DASD_PARANOIA > 2
+ else {
+ PRINT_ERR ("Unknown command %d\n", req->cmd);
+ return NULL;
+ }
+#endif /* DASD_PARANOIA */
+ bhct = 0;
+ for (bh = req->bh; bh; bh = bh->b_reqnext) {
+ if (bh->b_size > byt_per_blk)
+ for (size = 0; size < bh->b_size; size += byt_per_blk)
+ bhct++;
+ else
+ bhct++;
+ }
+
+ /* Build the request */
+ rw_cp = request_cqr (2 + bhct,
+ sizeof (DE_fba_data_t) +
+ sizeof (LO_fba_data_t));
+ if (!rw_cp) {
+ return NULL;
+ }
+ DE_data = rw_cp->data;
+ LO_data = rw_cp->data + sizeof (DE_fba_data_t);
+ ccw = rw_cp->cpaddr;
+
+ define_extent (ccw, DE_data, req->cmd, byt_per_blk,
+ req->sector, req->nr_sectors);
+ ccw->flags = CCW_FLAG_CC;
+ ccw++;
+ locate_record (ccw, LO_data, req->cmd, 0, req->nr_sectors);
+ ccw->flags = CCW_FLAG_CC;
+ for (bh = req->bh; bh;) {
+ if (bh->b_size > byt_per_blk) {
+ for (size = 0; size < bh->b_size; size += byt_per_blk) {
+ ccw++;
+ if (dasd_info[devindex]->rdc_data->fba.mode.bits.data_chain) {
+ ccw->flags = CCW_FLAG_DC;
+ } else {
+ ccw->flags = CCW_FLAG_CC;
+ }
+ ccw->cmd_code = rw_cmd;
+ ccw->count = byt_per_blk;
+ ccw->cda = (void *) __pa (bh->b_data + size);
+ }
+ bh = bh->b_reqnext;
+ } else { /* group N bhs to fit into byt_per_blk */
+ for (size = 0; bh != NULL && size < byt_per_blk;) {
+ ccw++;
+ if (dasd_info[devindex]->rdc_data->fba.mode.bits.data_chain) {
+ ccw->flags = CCW_FLAG_DC;
+ } else {
+ PRINT_WARN ("Cannot chain chunks smaller than one block\n");
+ release_cqr (rw_cp);
+ return NULL;
+ }
+ ccw->cmd_code = rw_cmd;
+ ccw->count = bh->b_size;
+ ccw->cda = (void *) __pa (bh->b_data);
+ size += bh->b_size;
+ bh = bh->b_reqnext;
+ }
+ ccw->flags = CCW_FLAG_CC;
+ if (size != byt_per_blk) {
+ PRINT_WARN ("Cannot fulfill request smaller than block\n");
+ release_cqr (rw_cp);
+ return NULL;
+ }
+ }
+ }
+ ccw->flags &= ~(CCW_FLAG_DC | CCW_FLAG_CC);
+ return rw_cp;
+}
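Both the ECKD and FBA request builders first walk the buffer-head chain to count how many CCWs it needs: a bh larger than the block size is split into one CCW per block, while smaller bhs each get one CCW (and are later data-chained into full blocks). The counting pass can be sketched standalone over an array of sizes (names are stand-ins for the `struct buffer_head` walk):

```c
#include <assert.h>
#include <stddef.h>

/* Mirror of the bhct loop in dasd_fba_build_req / dasd_eckd_build_req:
 * big buffers contribute one CCW per block, small ones count once. */
static int count_ccws(const long *bh_size, size_t n, long byt_per_blk)
{
	int bhct = 0;
	size_t i;
	long off;

	for (i = 0; i < n; i++) {
		if (bh_size[i] > byt_per_blk)
			for (off = 0; off < bh_size[i]; off += byt_per_blk)
				bhct++;
		else
			bhct++;
	}
	return bhct;
}
```

Sizing the channel program this way up front avoids the earlier failure mode where a bh smaller than the block size made the request run out of CCWs.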
+
+void
+dasd_fba_print_char (dasd_characteristics_t * ct)
+{
+ dasd_fba_characteristics_t *c =
+ (dasd_fba_characteristics_t *) ct;
+ PRINT_INFO ("%d blocks of %d bytes %d MB\n",
+ c->blk_bdsa, c->blk_size,
+ (c->blk_bdsa * (c->blk_size >> 9)) >> 11);
+ PRINT_INFO ("%soverrun, %s mode, data chains %sallowed\n"
+ "%sremovable, %sshared\n",
+ (c->mode.bits.overrunnable ? "" : "no "),
+ (c->mode.bits.burst_byte ? "burst" : "byte"),
+ (c->mode.bits.data_chain ? "" : "not "),
+ (c->features.bits.removable ? "" : "not "),
+ (c->features.bits.shared ? "" : "not "));
+}
+
+int
+dasd_fba_ck_char (dasd_characteristics_t * rdc)
+{
+ int rc = 0;
+ dasd_fba_print_char (rdc);
+ return rc;
+}
+
+cqr_t *
+dasd_fba_fill_sizes_first (int di)
+{
+ cqr_t *rw_cp = NULL;
+ ccw1_t *ccw;
+ DE_fba_data_t *DE_data;
+ LO_fba_data_t *LO_data;
+ dasd_information_t *info = dasd_info[di];
+ static char buffer[8];
+
+ dasd_info[di]->sizes.label_block = 1;
+
+ rw_cp = request_cqr (3,
+ sizeof (DE_fba_data_t) +
+ sizeof (LO_fba_data_t));
+ DE_data = rw_cp->data;
+ LO_data = rw_cp->data + sizeof (DE_fba_data_t);
+ ccw = rw_cp->cpaddr;
+ define_extent (ccw, DE_data, READ, info->sizes.bp_block, 1, 1);
+ ccw->flags = CCW_FLAG_CC;
+ ccw++;
+ locate_record (ccw, LO_data, READ, 0, 1);
+ ccw->flags = CCW_FLAG_CC;
+ ccw++;
+ ccw->cmd_code = DASD_FBA_CCW_READ;
+ ccw->flags = CCW_FLAG_SLI;
+ ccw->count = 8;
+ ccw->cda = (void *) __pa (buffer);
+ rw_cp->devindex = di;
+ atomic_set (&rw_cp->status, CQR_STATUS_FILLED);
+ return rw_cp;
+}
+
+int
+dasd_fba_fill_sizes_last (int devindex)
+{
+ int rc = 0;
+ int sb;
+ dasd_information_t *info = dasd_info[devindex];
+
+ info->sizes.bp_sector = info->rdc_data->fba.blk_size;
+ info->sizes.bp_block = info->sizes.bp_sector;
+
+ info->sizes.s2b_shift = 0; /* bits to shift 512 to get a block */
+ for (sb = 512; sb < info->sizes.bp_sector; sb = sb << 1)
+ info->sizes.s2b_shift++;
+
+ info->sizes.blocks = (info->rdc_data->fba.blk_bdsa);
+
+ if (info->sizes.s2b_shift >= 1)
+ info->sizes.kbytes = info->sizes.blocks <<
+ (info->sizes.s2b_shift - 1);
+ else
+ info->sizes.kbytes = info->sizes.blocks >>
+ (-(info->sizes.s2b_shift - 1));
+
+ return rc;
+}
+
+int
+dasd_fba_format (int devindex, format_data_t * fdata)
+{
+ int rc = 0;
+ return rc;
+}
+
+void
+dasd_fba_fill_geometry (int di, struct hd_geometry *geo)
+{
+ int bfactor, nr_sectors, sec_size;
+ int trk_cap, trk_low, trk_high, tfactor, nr_trks, trk_size;
+ int cfactor, nr_cyls, cyl_size;
+ int remainder;
+
+ dasd_information_t *info = dasd_info[di];
+ PRINT_INFO ("FBA has no geometry! Faking one...\n%s", "");
+
+ /* determine the blocking factor of sectors */
+ for (bfactor = 8; bfactor > 0; bfactor--) {
+ remainder = info->rdc_data->fba.blk_bdsa % bfactor;
+ PRINT_INFO ("bfactor %d remainder %d\n", bfactor, remainder);
+ if (!remainder)
+ break;
+ }
+ nr_sectors = info->rdc_data->fba.blk_bdsa / bfactor;
+ sec_size = info->rdc_data->fba.blk_size * bfactor;
+
+ geo -> sectors = bfactor;
+
+ /* determine the nr of sectors per track */
+ trk_cap = (64 * 1 << 10) / sec_size; /* 64k in sectors */
+ trk_low = trk_cap * 2 / 3;
+ trk_high = trk_cap * 4 / 3;
+ for (tfactor = trk_high; tfactor > trk_low; tfactor--) {
+		remainder = nr_sectors % tfactor;
+		PRINT_INFO ("remainder %d\n", remainder);
+ if (!remainder)
+ break;
+ }
+ nr_trks = nr_sectors / tfactor;
+ trk_size = sec_size * tfactor;
+
+ /* determine the nr of trks per cylinder */
+ for (cfactor = 31; cfactor > 0; cfactor--) {
+		remainder = nr_trks % cfactor;
+		PRINT_INFO ("remainder %d\n", remainder);
+ if (!remainder)
+ break;
+ }
+ nr_cyls = nr_trks / cfactor;
+ sec_size = info->rdc_data->fba.blk_size * bfactor;
+
+ geo -> heads = nr_trks;
+ geo -> cylinders = nr_cyls;
+ geo -> start = info->sizes.label_block + 1;
+}
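Since FBA devices expose only a flat block count, `dasd_fba_fill_geometry` fakes a CHS geometry by searching downward for the largest factor that evenly divides the block count, starting at 8 sectors per track group. The core of that search, in isolation (function name is illustrative):

```c
#include <assert.h>

/* Largest divisor of `total` in [1, max], searched downward as the
 * bfactor loop in dasd_fba_fill_geometry does from 8. */
static int largest_factor(int total, int max)
{
	int f;
	for (f = max; f > 1; f--)
		if (total % f == 0)
			break;
	return f;	/* falls through to 1 when total is prime vs. max */
}
```

The same downward search is then repeated for sectors per track (aiming near a 64 KB track) and tracks per cylinder.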
+
+dasd_operations_t dasd_fba_operations =
+{
+ ck_devinfo:dasd_fba_ck_devinfo,
+ get_req_ccw:dasd_fba_build_req,
+ ck_characteristics:dasd_fba_ck_char,
+ fill_sizes_first:dasd_fba_fill_sizes_first,
+ fill_sizes_last:dasd_fba_fill_sizes_last,
+ dasd_format:dasd_fba_format,
+ fill_geometry:dasd_fba_fill_geometry
+};
--- /dev/null
+#include <linux/dasd.h>
+#include <linux/malloc.h>
+#include <linux/ctype.h>
+#include "dasd_types.h"
+#include "dasd_ccwstuff.h"
+#include "dasd_mdsk.h"
+
+#ifdef PRINTK_HEADER
+#undef PRINTK_HEADER
+#endif /* PRINTK_HEADER */
+#define PRINTK_HEADER "dasd(mdsk):"
+
+int dasd_is_accessible (int devno);
+void dasd_insert_range (int start, int end);
+
+/*
+ * The device characteristics function
+ */
+static __inline__ int
+dia210 (void *devchar)
+{
+ int rc;
+
+ asm volatile (" lr 2,%1\n"
+ " .long 0x83200210\n"
+ " ipm %0\n"
+ " srl %0,28"
+ :"=d" (rc)
+ :"d" ((void *) __pa (devchar))
+ :"2");
+ return rc;
+}
+
+static __inline__ int
+dia250 (void *iob, int cmd)
+{
+ int rc;
+
+ asm volatile (" lr 2,%1\n"
+ " lr 3,%2\n"
+ " .long 0x83230250\n"
+ " lr %0,3"
+ :"=d" (rc)
+ :"d" ((void *) __pa (iob)), "d" (cmd)
+ :"2", "3");
+ return rc;
+}
+
+/*
+ * Init of minidisk device
+ */
+
+static __inline__ int
+mdsk_init_io (int di, int blocksize, int offset, int size)
+{
+ mdsk_init_io_t *iib = &(dasd_info[di]->private.mdsk.iib);
+ int rc;
+
+ memset (iib, 0, sizeof (mdsk_init_io_t));
+
+ iib->dev_nr = dasd_info[di]->info.devno;
+ iib->block_size = blocksize;
+ iib->offset = offset;
+ iib->start_block = 0;
+ iib->end_block = size;
+
+ rc = dia250 (iib, INIT_BIO);
+
+ return rc;
+}
+
+/*
+ * release of minidisk device
+ */
+
+static __inline__ int
+mdsk_term_io (int di)
+{
+ mdsk_init_io_t *iib = &(dasd_info[di]->private.mdsk.iib);
+ int rc;
+
+ memset (iib, 0, sizeof (mdsk_init_io_t));
+ iib->dev_nr = dasd_info[di]->info.devno;
+ rc = dia250 (iib, TERM_BIO);
+ return rc;
+}
+
+void dasd_do_chanq (void);
+void dasd_schedule_bh (void (*func) (void));
+int register_dasd_last (int di);
+
+int
+dasd_mdsk_start_IO (cqr_t * cqr)
+{
+ int rc;
+ mdsk_rw_io_t *iob = &(dasd_info[cqr->devindex]->private.mdsk.iob);
+
+ iob->dev_nr = dasd_info[cqr->devindex]->info.devno;
+ iob->key = 0;
+ iob->flags = 2;
+ iob->block_count = cqr->cplength >> 1;
+ iob->interrupt_params = (u32) cqr;
+ iob->bio_list = __pa (cqr->cpaddr);
+
+ asm volatile ("STCK %0":"=m" (cqr->startclk));
+ rc = dia250 (iob, RW_BIO);
+ if (rc > 8) {
+ PRINT_WARN ("dia250 returned CC %d\n", rc);
+ ACS (cqr->status, CQR_STATUS_QUEUED, CQR_STATUS_ERROR);
+ } else {
+ ACS (cqr->status, CQR_STATUS_QUEUED, CQR_STATUS_IN_IO);
+ atomic_dec (&chanq_tasks);
+ }
+ return rc;
+}
+
+void
+do_dasd_mdsk_interrupt (struct pt_regs *regs, __u16 code)
+{
+ int intparm = S390_lowcore.ext_params;
+ char status = *((char *) S390_lowcore.ext_params + 5);
+ cqr_t *cqr = (cqr_t *) intparm;
+ if (!intparm)
+ return;
+ if (cqr->magic != MDSK_MAGIC) {
+ panic ("unknown magic number\n");
+ }
+ asm volatile ("STCK %0":"=m" (cqr->stopclk));
+ if (atomic_read (&dasd_info[cqr->devindex]->status) ==
+ DASD_INFO_STATUS_DETECTED) {
+ register_dasd_last (cqr->devindex);
+ }
+ switch (status) {
+ case 0x00:
+ ACS (cqr->status, CQR_STATUS_IN_IO, CQR_STATUS_DONE);
+ break;
+ case 0x01:
+ case 0x02:
+ case 0x03:
+ default:
+ ACS (cqr->status, CQR_STATUS_IN_IO, CQR_STATUS_FAILED);
+ atomic_inc (&dasd_info[cqr->devindex]->
+ queue.dirty_requests);
+ break;
+ }
+ atomic_inc (&chanq_tasks);
+ dasd_schedule_bh (dasd_do_chanq);
+}
+
+cqr_t *
+dasd_mdsk_build_req (int devindex,
+ struct request *req)
+{
+ cqr_t *rw_cp = NULL;
+ struct buffer_head *bh;
+ int rw_cmd;
+ dasd_information_t *info = dasd_info[devindex];
+ int noblk = req->nr_sectors >> info->sizes.s2b_shift;
+ int byt_per_blk = info->sizes.bp_block;
+ int block;
+ mdsk_bio_t *bio;
+ int bhct;
+ long size;
+
+ if (!noblk) {
+		PRINT_ERR ("Empty request...returning\n");
+ return NULL;
+ }
+ if (req->cmd == READ) {
+ rw_cmd = MDSK_READ_REQ;
+ } else
+#if DASD_PARANOIA > 2
+ if (req->cmd == WRITE)
+#endif /* DASD_PARANOIA */
+ {
+ rw_cmd = MDSK_WRITE_REQ;
+ }
+#if DASD_PARANOIA > 2
+ else {
+ PRINT_ERR ("Unknown command %d\n", req->cmd);
+ return NULL;
+ }
+#endif /* DASD_PARANOIA */
+ bhct = 0;
+ for (bh = req->bh; bh; bh = bh->b_reqnext) {
+ if (bh->b_size > byt_per_blk)
+ for (size = 0; size < bh->b_size; size += byt_per_blk)
+ bhct++;
+ else
+ bhct++;
+ }
+ /* Build the request */
+ rw_cp = request_cqr (MDSK_BIOS (bhct), 0);
+ if (!rw_cp) {
+ return NULL;
+ }
+ rw_cp->magic = MDSK_MAGIC;
+ bio = (mdsk_bio_t *) (rw_cp->cpaddr);
+
+ block = req->sector >> info->sizes.s2b_shift;
+ for (bh = req->bh; bh; bh = bh->b_reqnext) {
+ if (bh->b_size >= byt_per_blk) {
+ memset (bio, 0, sizeof (mdsk_bio_t));
+ for (size = 0; size < bh->b_size; size += byt_per_blk) {
+ bio->type = rw_cmd;
+ bio->block_number = block + 1;
+ bio->buffer = __pa (bh->b_data + size);
+ bio++;
+ block++;
+ }
+ } else {
+ PRINT_WARN ("Cannot fulfill request smaller than block\n");
+ release_cqr (rw_cp);
+ return NULL;
+ }
+ }
+ return rw_cp;
+}
+
+int
+dasd_mdsk_ck_devinfo (dev_info_t * info)
+{
+ int rc = 0;
+
+ dasd_mdsk_characteristics_t devchar =
+ {0,};
+
+ devchar.dev_nr = info->devno;
+ devchar.rdc_len = sizeof (dasd_mdsk_characteristics_t);
+
+ if (dia210 (&devchar) != 0) {
+ return -ENODEV;
+ }
+ if (devchar.vdev_class == DEV_CLASS_FBA ||
+ devchar.vdev_class == DEV_CLASS_ECKD ||
+ devchar.vdev_class == DEV_CLASS_CKD) {
+ return 0;
+ } else {
+ return -ENODEV;
+ }
+ return rc;
+}
+
+int
+dasd_mdsk_ck_characteristics (dasd_characteristics_t * dchar)
+{
+ int rc = 0;
+ dasd_mdsk_characteristics_t *devchar =
+ (dasd_mdsk_characteristics_t *) dchar;
+
+ if (dia210 (devchar) != 0) {
+ return -ENODEV;
+ }
+ if (devchar->vdev_class != DEV_CLASS_FBA &&
+ devchar->vdev_class != DEV_CLASS_ECKD &&
+ devchar->vdev_class != DEV_CLASS_CKD) {
+ return -ENODEV;
+ }
+ return rc;
+}
+
+cqr_t *
+dasd_mdsk_fill_sizes_first (int di)
+{
+ cqr_t *cqr = NULL;
+ dasd_information_t *info = dasd_info[di];
+ mdsk_bio_t *bio;
+ int bsize;
+ int rc;
+ /* Figure out position of label block */
+ if (info->rdc_data->mdsk.vdev_class == DEV_CLASS_FBA) {
+ info->sizes.label_block = 1;
+ } else if (info->rdc_data->mdsk.vdev_class == DEV_CLASS_ECKD ||
+ info->rdc_data->mdsk.vdev_class == DEV_CLASS_CKD) {
+ dasd_info[di]->sizes.label_block = 2;
+ } else {
+ return NULL;
+ }
+
+ /* figure out blocksize of device */
+ mdsk_term_io (di);
+ for (bsize = 512; bsize <= PAGE_SIZE; bsize = bsize << 1) {
+ rc = mdsk_init_io (di, bsize, 0, 64);
+ if (rc <= 4) {
+ break;
+ }
+ }
+ if (bsize > PAGE_SIZE) {
+ PRINT_INFO ("Blocksize larger than 4096??\n");
+ rc = mdsk_term_io (di);
+ return NULL;
+ }
+ dasd_info[di]->sizes.bp_sector = bsize;
+
+	info->private.mdsk.label = (long *) get_free_page (GFP_KERNEL);
+	if (!info->private.mdsk.label)
+		return NULL;
+	cqr = request_cqr (MDSK_BIOS (1), 0);
+	if (!cqr) {
+		free_page ((unsigned long) info->private.mdsk.label);
+		return NULL;
+	}
+	cqr->magic = MDSK_MAGIC;
+ bio = (mdsk_bio_t *) (cqr->cpaddr);
+ memset (bio, 0, sizeof (mdsk_bio_t));
+ bio->type = MDSK_READ_REQ;
+ bio->block_number = info->sizes.label_block + 1;
+ bio->buffer = __pa (info->private.mdsk.label);
+ cqr->devindex = di;
+ atomic_set (&cqr->status, CQR_STATUS_FILLED);
+
+ return cqr;
+}
+
+int
+dasd_mdsk_fill_sizes_last (int di)
+{
+ int sb;
+ dasd_information_t *info = dasd_info[di];
+ long *label = info->private.mdsk.label;
+ int bs = info->private.mdsk.iib.block_size;
+
+ info->sizes.s2b_shift = 0; /* bits to shift 512 to get a block */
+ for (sb = 512; sb < bs; sb = sb << 1)
+ info->sizes.s2b_shift++;
+
+ if (label[3] != bs) {
+ PRINT_WARN ("%04X mismatching blocksizes\n", info->info.devno);
+ atomic_set (&dasd_info[di]->status,
+ DASD_INFO_STATUS_DETECTED);
+ return -EINVAL;
+ }
+ if (label[0] != 0xc3d4e2f1) { /* CMS1 */
+ PRINT_WARN ("%04X is not CMS formatted\n", info->info.devno);
+ }
+ if (label[13] == 0) {
+ PRINT_WARN ("%04X is not reserved\n", info->info.devno);
+ }
+ /* defaults for first partition */
+ info->private.mdsk.setup.size =
+ (label[7] - 1 - label[13]) * (label[3] >> 9) >> 1;
+ info->private.mdsk.setup.blksize = label[3];
+ info->private.mdsk.setup.offset = label[13] + 1;
+
+ /* real size of the volume */
+ info->sizes.bp_block = label[3];
+ info->sizes.kbytes = label[7] * (label[3] >> 9) >> 1;
+
+ if (info->sizes.s2b_shift >= 1)
+ info->sizes.blocks = info->sizes.kbytes >>
+ (info->sizes.s2b_shift - 1);
+ else
+ info->sizes.blocks = info->sizes.kbytes <<
+ (-(info->sizes.s2b_shift - 1));
+
+ PRINT_INFO ("%ld kBytes in %d blocks of %d Bytes\n",
+ info->sizes.kbytes,
+ info->sizes.blocks,
+ info->sizes.bp_sector);
+ return 0;
+
+}
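The magic number 0xc3d4e2f1 tested above is the string "CMS1" in EBCDIC, the encoding CMS uses on disk. A standalone sketch of the decoding (the four-entry translation table below is illustrative, not a full EBCDIC table):

```c
#include <stdint.h>

/* Translate the few EBCDIC code points appearing in the CMS1
 * label magic; anything else decodes as '?'. */
static char ebcdic_char (uint8_t b)
{
	switch (b) {
	case 0xc3: return 'C';
	case 0xd4: return 'M';
	case 0xe2: return 'S';
	case 0xf1: return '1';
	default:   return '?';
	}
}

/* Decode a 32-bit label magic, most significant byte first. */
static void decode_magic (uint32_t magic, char out[5])
{
	int i;
	for (i = 0; i < 4; i++)
		out[i] = ebcdic_char ((uint8_t) (magic >> (24 - 8 * i)));
	out[4] = '\0';
}
```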
+
+dasd_operations_t dasd_mdsk_operations =
+{
+ ck_devinfo:dasd_mdsk_ck_devinfo,
+ get_req_ccw:dasd_mdsk_build_req,
+ ck_characteristics:dasd_mdsk_ck_characteristics,
+ fill_sizes_first:dasd_mdsk_fill_sizes_first,
+ fill_sizes_last:dasd_mdsk_fill_sizes_last,
+ dasd_format:NULL
+};
--- /dev/null
+#ifndef DASD_MDSK_H
+#define DASD_MDSK_H
+
+#define MDSK_WRITE_REQ 0x01
+#define MDSK_READ_REQ 0x02
+
+#define INIT_BIO 0x00
+#define RW_BIO 0x01
+#define TERM_BIO 0x02
+
+#define DEV_CLASS_FBA 0x01
+#define DEV_CLASS_ECKD 0x02 /* sure ?? */
+#define DEV_CLASS_CKD 0x04 /* sure ?? */
+
+#define MDSK_BIOS(x) (2*(x))
+
+typedef struct {
+ u8 type;
+ u8 status;
+ u16 spare1;
+ u32 block_number;
+ u32 alet;
+ u32 buffer;
+} __attribute__ ((packed, aligned (8)))
+
+mdsk_bio_t;
+
+typedef struct {
+ u16 dev_nr;
+ u16 spare1[11];
+ u32 block_size;
+ u32 offset;
+ u32 start_block;
+ u32 end_block;
+ u32 spare2[6];
+} __attribute__ ((packed, aligned (8)))
+
+mdsk_init_io_t;
+
+typedef struct {
+ u16 dev_nr;
+ u16 spare1[11];
+ u8 key;
+ u8 flags;
+ u16 spare2;
+ u32 block_count;
+ u32 alet;
+ u32 bio_list;
+ u32 interrupt_params;
+ u32 spare3[5];
+} __attribute__ ((packed, aligned (8)))
+
+mdsk_rw_io_t;
+
+typedef struct {
+ long vdev;
+ long size;
+ long offset;
+ long blksize;
+ int force_mdsk;
+} mdsk_setup_data_t;
+
+#endif
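The hypervisor interprets these control blocks by their real addresses, so the field layout is fixed: 16 bytes per mdsk_bio_t and 64 bytes for each of the two parameter blocks. A host-side sanity check of that arithmetic, using <stdint.h> replicas of the structures (stand-ins for illustration, not the kernel types themselves):

```c
#include <stdint.h>

/* Replicas of the DIAG250 control blocks with fixed-width types
 * in place of the kernel's u8/u16/u32. */
typedef struct {
	uint8_t  type;
	uint8_t  status;
	uint16_t spare1;
	uint32_t block_number;
	uint32_t alet;
	uint32_t buffer;
} __attribute__ ((packed, aligned (8))) bio_t;

typedef struct {
	uint16_t dev_nr;
	uint16_t spare1[11];
	uint32_t block_size;
	uint32_t offset;
	uint32_t start_block;
	uint32_t end_block;
	uint32_t spare2[6];
} __attribute__ ((packed, aligned (8))) init_io_t;

typedef struct {
	uint16_t dev_nr;
	uint16_t spare1[11];
	uint8_t  key;
	uint8_t  flags;
	uint16_t spare2;
	uint32_t block_count;
	uint32_t alet;
	uint32_t bio_list;
	uint32_t interrupt_params;
	uint32_t spare3[5];
} __attribute__ ((packed, aligned (8))) rw_io_t;
```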
#include <linux/proc_fs.h>
-#include "dasd.h"
+#include <linux/dasd.h>
+
#include "dasd_types.h"
int dasd_proc_read_devices ( char *, char **, off_t, int, int);
extern int dasd_proc_read_debug ( char *, char **, off_t, int, int);
#endif /* DASD_PROFILE */
-struct proc_dir_entry dasd_proc_root_entry = {
- 0,
- 4,"dasd",
- S_IFDIR | S_IRUGO | S_IXUGO | S_IWUSR | S_IWGRP,
- 1,0,0,
- 0,
- NULL,
+struct proc_dir_entry dasd_proc_root_entry =
+{
+ low_ino:0,
+ namelen:4,
+ name:"dasd",
+ mode:S_IFDIR | S_IRUGO | S_IXUGO | S_IWUSR | S_IWGRP,
+ nlink:1,
+ uid:0,
+ gid:0,
+ size:0
};
-struct proc_dir_entry dasd_proc_devices_entry = {
- 0,
- 7,"devices",
- S_IFREG | S_IRUGO | S_IXUGO | S_IWUSR | S_IWGRP,
- 1,0,0,
- 0,
- NULL,
- &dasd_proc_read_devices,
+struct proc_dir_entry dasd_proc_devices_entry =
+{
+ low_ino:0,
+ namelen:7,
+ name:"devices",
+ mode:S_IFREG | S_IRUGO | S_IXUGO | S_IWUSR | S_IWGRP,
+ nlink:1,
+ uid:0,
+ gid:0,
+ size:0,
+ get_info:&dasd_proc_read_devices,
};
#ifdef DASD_PROFILE
-struct proc_dir_entry dasd_proc_stats_entry = {
- 0,
- 10,"statistics",
- S_IFREG | S_IRUGO | S_IXUGO | S_IWUSR | S_IWGRP,
- 1,0,0,
- 0,
- NULL,
- &dasd_proc_read_statistics,
+struct proc_dir_entry dasd_proc_stats_entry =
+{
+ low_ino:0,
+ namelen:10,
+ name:"statistics",
+ mode:S_IFREG | S_IRUGO | S_IXUGO | S_IWUSR | S_IWGRP,
+ nlink:1,
+ uid:0,
+ gid:0,
+ size:0,
+ get_info:&dasd_proc_read_statistics
};
-struct proc_dir_entry dasd_proc_debug_entry = {
- 0,
- 5,"debug",
- S_IFREG | S_IRUGO | S_IXUGO | S_IWUSR | S_IWGRP,
- 1,0,0,
- 0,
- NULL,
- &dasd_proc_read_debug,
+struct proc_dir_entry dasd_proc_debug_entry =
+{
+ low_ino:0,
+ namelen:5,
+ name:"debug",
+ mode:S_IFREG | S_IRUGO | S_IXUGO | S_IWUSR | S_IWGRP,
+ nlink:1,
+ uid:0,
+ gid:0,
+ size:0,
+ get_info:&dasd_proc_read_debug
};
#endif /* DASD_PROFILE */
-struct proc_dir_entry dasd_proc_device_template = {
+struct proc_dir_entry dasd_proc_device_template =
+{
0,
6,"dd????",
S_IFBLK | S_IRUGO | S_IWUSR | S_IWGRP,
#endif /* DASD_PROFILE */
}
-
int
dasd_proc_read_devices ( char * buf, char **start, off_t off, int len, int d)
{
DASD_MAJOR,
i << PARTN_BITS,
'a' + i );
- if (info->flags == DASD_NOT_FORMATTED) {
- len += sprintf ( buf + len, " n/a");
- } else {
- len += sprintf ( buf + len, " %6d",
+ switch (atomic_read (&info->status)) {
+ case DASD_INFO_STATUS_UNKNOWN:
+ len += sprintf (buf + len, " unknown");
+ break;
+ case DASD_INFO_STATUS_DETECTED:
+ len += sprintf (buf + len, " avail");
+ break;
+ case DASD_INFO_STATUS_ANALYSED:
+ len += sprintf (buf + len, " n/f");
+ break;
+ default:
+ len += sprintf (buf + len, " %7d",
info->sizes.bp_block);
}
len += sprintf ( buf + len, "\n");
return len;
}
-
void
dasd_proc_add_node (int di)
{
+
#include <linux/mm.h>
#include <asm/spinlock.h>
-#include "dasd.h"
+#include <linux/dasd.h>
+
#include "dasd_types.h"
#define PRINTK_HEADER "dasd_profile:"
} __attribute__ ((packed)) u;
unsigned long caller_address;
unsigned long tag;
-} __attribute__ ((packed)) dasd_debug_entry;
+} __attribute__ ((packed))
+
+dasd_debug_entry;
static dasd_debug_entry *dasd_debug_area = NULL;
static dasd_debug_entry *dasd_debug_actual;
/* initialize in first call ... */
if ( ! dasd_debug_area ) {
dasd_debug_actual = dasd_debug_area =
- get_free_page (GFP_ATOMIC);
+ (dasd_debug_entry *) get_free_page (GFP_ATOMIC);
if ( ! dasd_debug_area ) {
PRINT_WARN("No debug area allocated\n");
return;
__asm__ __volatile__ ( "STCK %0"
:"=m" (d->u.clock));
d->tag = tag;
- d -> caller_address = __builtin_return_address(0);
+ d->caller_address = (unsigned long) __builtin_return_address (0);
d->u.s.cpu = smp_processor_id();
}
off_t off, int len, int dd)
{
dasd_debug_entry *d;
- char tag[9] = { 0, };
+ char tag[9] =
+ {0,};
long flags;
spin_lock_irqsave(&debug_lock,flags);
len = 0;
if ( *(char*)(&d->tag) == 'D' ) {
memcpy(tag,&(d->tag),4);
tag[4]=0;
- }
- else {
- sprintf(tag,"%08x",d->tag);
+ } else {
+ sprintf (tag, "%08lx", d->tag);
tag[8]=0;
}
len += sprintf ( buf+len,
- "%lx %08x%05x %08lx (%8s)\n",
+ "%x %08x%05x %08lx (%8s)\n",
d->u.s.cpu, d->u.s.ts1, d->u.s.ts2,
d->caller_address,tag);
}
--- /dev/null
+/*
+ * File...........: linux/drivers/s390/block/dasd_setup.c
+ * Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
+ * : Utz Bacher <utz.bacher@de.ibm.com>
+ * Bugreports.to..: <Linux390@de.ibm.com>
+ * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 1999,2000
+ */
+
+#include <linux/ctype.h>
+#include <linux/malloc.h>
+
+#include <linux/dasd.h>
+#include "dasd_types.h"
+
+#define PRINTK_HEADER "dasd(setup):"
+
+#define MIN(a,b) (((a)<(b))?(a):(b))
+
+static int dasd_autodetect = 0;
+int dasd_probeonly = 1;
+static int dasd_count = 0;
+static int dasd_devno[DASD_MAX_DEVICES] =
+{0,};
+int dasd_force_mdsk_flag[DASD_MAX_DEVICES] =
+{0,};
+
+extern char *dasd[DASD_MAX_DEVICES];
+#ifdef CONFIG_DASD_MDSK
+extern char *dasd_force_mdsk[DASD_MAX_DEVICES];
+#endif
+
+typedef struct dasd_range {
+ int from, to;
+ struct dasd_range *next;
+} dasd_range;
+
+static dasd_range *first_range = NULL;
+
+void
+dasd_add_devno_to_ranges (int devno)
+{
+ dasd_range *p, *prev;
+ for (p = first_range; p; prev = p, p = prev->next) {
+ if (devno >= p->from && devno <= p->to) {
+ PRINT_WARN ("devno %04X already in range %04X-%04X\n",
+ devno, p->from, p->to);
+ return;
+ }
+ if (devno == (p->from - 1)) {
+ p->from--;
+ return;
+ }
+ if (devno == (p->to + 1)) {
+ p->to++;
+ return;
+ }
+ }
+	p = kmalloc (sizeof (dasd_range), GFP_ATOMIC);
+	if (!p)
+		return;
+	p->from = p->to = devno;
+	p->next = NULL;
+	if (!first_range) {
+		first_range = p;
+	} else {
+		prev->next = p;
+	}
+	return;
+}
+
+int
+dasd_proc_print_probed_ranges (char *buf, char **start, off_t off, int len, int d)
+{
+ dasd_range *p;
+ len = sprintf (buf, "Probed ranges of the DASD driver\n");
+ for (p = first_range; p; p = p->next) {
+		if (len >= PAGE_SIZE - 80) {
+			len += sprintf (buf + len, "terminated...\n");
+			break;
+		}
+ if (p != first_range) {
+ len += sprintf (buf + len, ",");
+ }
+ if (p->from == p->to) {
+ len += sprintf (buf + len, "%04x", p->from);
+ } else {
+ len += sprintf (buf + len, "%04x-%04x",
+ p->from, p->to);
+ }
+ }
+ len += sprintf (buf + len, "\n");
+ return len;
+}
+
+static int
+dasd_get_hexdigit (char c)
+{
+ if ((c >= '0') && (c <= '9'))
+ return c - '0';
+ if ((c >= 'a') && (c <= 'f'))
+ return c + 10 - 'a';
+ if ((c >= 'A') && (c <= 'F'))
+ return c + 10 - 'A';
+ return -1;
+}
+
+/* sets the string pointer after the next comma */
+static void
+dasd_scan_for_next_comma (char **strptr)
+{
+ while (((**strptr) != ',') && (**strptr))
+ (*strptr)++;
+
+ /* set the position AFTER the comma */
+ if (**strptr == ',')
+ (*strptr)++;
+}
+
+/* sets the string pointer after the next comma if a parse error occurred */
+static int
+dasd_get_next_int (char **strptr)
+{
+ int j, i = -1; /* for cosmetic reasons first -1, then 0 */
+ if (isxdigit (**strptr)) {
+ for (i = 0; isxdigit (**strptr);) {
+ i <<= 4;
+ j = dasd_get_hexdigit (**strptr);
+ if (j == -1) {
+ PRINT_ERR ("no integer: skipping range.\n");
+ dasd_scan_for_next_comma (strptr);
+ i = -1;
+ break;
+ }
+ i += j;
+ (*strptr)++;
+ if (i > 0xffff) {
+ PRINT_ERR (" value too big, skipping range.\n");
+ dasd_scan_for_next_comma (strptr);
+ i = -1;
+ break;
+ }
+ }
+ }
+ return i;
+}
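The helpers above implement a tolerant hex scanner for device numbers in the dasd= parameter. A simplified standalone sketch of the same idea (it returns -1 on error instead of resynchronizing at the next comma; get_devno is an illustrative name, not the kernel function):

```c
#include <ctype.h>

/* Parse one hex device number (at most 0xffff) and advance the
 * string pointer past it; returns -1 on error or no digits. */
static int get_devno (const char **s)
{
	int v = -1;
	while (isxdigit ((unsigned char) **s)) {
		int d = isdigit ((unsigned char) **s)
			? **s - '0'
			: tolower ((unsigned char) **s) - 'a' + 10;
		v = (v < 0 ? 0 : v << 4) + d;
		if (v > 0xffff)
			return -1;
		(*s)++;
	}
	return v;
}
```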
+
+int
+devindex_from_devno (int devno)
+{
+ int i;
+ if (dasd_probeonly) {
+ return 0;
+ }
+ for (i = 0; i < dasd_count; i++) {
+ if (dasd_devno[i] == devno)
+ return i;
+ }
+ if (dasd_autodetect) {
+ if (dasd_count < DASD_MAX_DEVICES) {
+ dasd_devno[dasd_count] = devno;
+ return dasd_count++;
+ }
+ return -EOVERFLOW;
+ }
+ return -ENODEV;
+}
+
+/* returns 1, if dasd_no is in the specified ranges, otherwise 0 */
+int
+dasd_is_accessible (int devno)
+{
+ return (devindex_from_devno (devno) >= 0);
+}
+
+/* dasd_insert_range skips ranges, if the start or the end is -1 */
+void
+dasd_insert_range (int start, int end)
+{
+ int curr;
+ if (dasd_count >= DASD_MAX_DEVICES) {
+ PRINT_ERR (" too many devices specified, ignoring some.\n");
+ return;
+ }
+ if ((start == -1) || (end == -1)) {
+ PRINT_ERR
+ ("invalid format of parameter, skipping range\n");
+ return;
+ }
+ if (end < start) {
+ PRINT_ERR (" ignoring range from %x to %x - start value " \
+ "must be less than end value.\n", start, end);
+ return;
+ }
+/* concurrent execution would be critical, but will not occur here */
+ for (curr = start; curr <= end; curr++) {
+ if (dasd_is_accessible (curr)) {
+ PRINT_WARN (" %x is already in list as device %d\n",
+ curr, devindex_from_devno (curr));
+ }
+ dasd_devno[dasd_count] = curr;
+ dasd_count++;
+ if (dasd_count >= DASD_MAX_DEVICES) {
+ PRINT_ERR (" too many devices specified, ignoring some.\n");
+ break;
+ }
+ }
+ PRINT_INFO (" added dasd range from %x to %x.\n",
+ start, dasd_devno[dasd_count - 1]);
+
+}
+
+void
+dasd_setup (char *str, int *ints)
+{
+ int devno, devno2;
+ static const char *adstring = "autodetect";
+ static const char *prstring = "probeonly";
+ if (!strncmp (str, prstring,
+ MIN (strlen (str), strlen (prstring)))) {
+ dasd_autodetect = 1;
+ return;
+ }
+ if (!strncmp (str, adstring,
+ MIN (strlen (str), strlen (adstring)))) {
+ dasd_autodetect = 1;
+ dasd_probeonly = 0;
+ return;
+ }
+ dasd_probeonly = 0;
+ while (*str && *str != 1) {
+ if (!isxdigit (*str)) {
+ str++; /* to avoid looping on two commas */
+ PRINT_ERR (" kernel parameter in invalid format.\n");
+ continue;
+ }
+ devno = dasd_get_next_int (&str);
+
+ /* range was skipped? -> scan for comma has been done */
+ if (devno == -1)
+ continue;
+
+ if (*str == ',') {
+ str++;
+ dasd_insert_range (devno, devno);
+ continue;
+ }
+ if (*str == '-') {
+ str++;
+ devno2 = dasd_get_next_int (&str);
+ if (devno2 == -1) {
+ PRINT_ERR (" invalid character in " \
+ "kernel parameters.");
+ } else {
+ dasd_insert_range (devno, devno2);
+ }
+ dasd_scan_for_next_comma (&str);
+ continue;
+ }
+ if (*str == 0) {
+ dasd_insert_range (devno, devno);
+ break;
+ }
+ PRINT_ERR (" unexpected character in kernel parameter, " \
+ "skipping range.\n");
+ }
+}
+#ifdef CONFIG_DASD_MDSK
+
+/*
+ * Parameter parsing function, called from init/main.c
+ * Format is: <devno>,<devno>-<devno>,...
+ * Forces the listed device numbers to be accessed as minidisks
+ * via the diagnose interface.
+ */
+void
+dasd_mdsk_setup (char *str, int *ints)
+{
+ int devno, devno2;
+ int di, i;
+
+ while (*str && *str != 1) {
+ if (!isxdigit (*str)) {
+ str++; /* to avoid looping on two commas */
+ PRINT_ERR (" kernel parameter in invalid format.\n");
+ continue;
+ }
+ devno = dasd_get_next_int (&str);
+
+ /* range was skipped? -> scan for comma has been done */
+ if (devno == -1)
+ continue;
+
+		if (*str == ',') {
+			str++;
+			di = devindex_from_devno (devno);
+			if (di >= DASD_MAX_DEVICES) {
+				return;
+			} else if (di < 0) {
+				dasd_insert_range (devno, devno);
+				di = devindex_from_devno (devno);
+			}
+			if (di >= 0)
+				dasd_force_mdsk_flag[di] = 1;
+			continue;
+		}
+ if (*str == '-') {
+ str++;
+ devno2 = dasd_get_next_int (&str);
+ if (devno2 == -1) {
+ PRINT_ERR (" invalid character in " \
+ "kernel parameters.");
+ } else {
+				for (i = devno; i <= devno2; i++) {
+					di = devindex_from_devno (i);
+					if (di >= DASD_MAX_DEVICES) {
+						return;
+					} else if (di < 0) {
+						dasd_insert_range (i, i);
+						di = devindex_from_devno (i);
+					}
+					if (di >= 0)
+						dasd_force_mdsk_flag[di] = 1;
+				}
+ }
+ dasd_scan_for_next_comma (&str);
+ continue;
+ }
+		if (*str == 0) {
+			di = devindex_from_devno (devno);
+			if (di >= DASD_MAX_DEVICES) {
+				return;
+			} else if (di < 0) {
+				dasd_insert_range (devno, devno);
+				di = devindex_from_devno (devno);
+			}
+			if (di >= 0)
+				dasd_force_mdsk_flag[di] = 1;
+			break;
+		}
+ PRINT_ERR (" unexpected character in kernel parameter, " \
+ "skipping range.\n");
+ }
+}
+#endif
+
+#ifdef MODULE
+int
+dasd_parse_module_params (void)
+{
+	int i;
+	for (i = 0; i < DASD_MAX_DEVICES && dasd[i]; i++)
+		dasd_setup (dasd[i], NULL);
+#ifdef CONFIG_DASD_MDSK
+	for (i = 0; i < DASD_MAX_DEVICES && dasd_force_mdsk[i]; i++)
+		dasd_mdsk_setup (dasd_force_mdsk[i], NULL);
+#endif
+	return 0;
+}
+#endif
+
/*
* File...........: linux/drivers/s390/block/dasd_types.h
* Author.........: Holger Smolinski <Holger.Smolinski@de.ibm.com>
#ifndef DASD_TYPES_H
#define DASD_TYPES_H
-#include "dasd.h"
+#include <linux/dasd.h>
#include <linux/blkdev.h>
+#include <linux/hdreg.h>
+
#include <asm/irq.h>
+#include "dasd_mdsk.h"
+
+#define ACS(where,from,to) if (atomic_compare_and_swap (from, to, &where)) {\
+ PRINT_WARN ("%s/%d atomic %s from %d(%s) to %d(%s) failed, was %d\n",\
+ __FILE__,__LINE__,#where,from,#from,to,#to,\
+ atomic_read (&where));\
+ atomic_set(&where,to);}
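The ACS() macro above attempts an atomic status transition and, when the observed state is not the expected one, logs a warning and forces the target state anyway. A host-side sketch of the same pattern using C11 atomics (acs() is an illustrative stand-in for the s390 atomic_compare_and_swap based macro, not the kernel interface):

```c
#include <stdatomic.h>
#include <stdio.h>

/* Try the transition from -> to; on a state mismatch, report the
 * observed value and force the target state anyway. */
static void acs (atomic_int *where, int from, int to)
{
	int expected = from;
	if (!atomic_compare_exchange_strong (where, &expected, to)) {
		fprintf (stderr, "atomic transition %d to %d failed, was %d\n",
			 from, to, expected);
		atomic_store (where, to);
	}
}
```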
+
#define CCW_DEFINE_EXTENT 0x63
#define CCW_LOCATE_RECORD 0x43
#define CCW_READ_DEVICE_CHARACTERISTICS 0x64
typedef
enum {
dasd_none = -1,
+#ifdef CONFIG_DASD_ECKD
+ dasd_eckd,
+#endif /* CONFIG_DASD_ECKD */
#ifdef CONFIG_DASD_FBA
dasd_fba,
#endif /* CONFIG_DASD_FBA */
+#ifdef CONFIG_DASD_MDSK
+ dasd_mdsk,
+#endif /* CONFIG_DASD_MDSK */
#ifdef CONFIG_DASD_CKD
dasd_ckd,
#endif /* CONFIG_DASD_CKD */
-#ifdef CONFIG_DASD_ECKD
- dasd_eckd
-#endif /* CONFIG_DASD_ECKD */
+ dasd_end
} dasd_type_t;
typedef
dasd_eckd_characteristics_t;
+typedef struct {
+ u16 dev_nr;
+ u16 rdc_len;
+ u8 vdev_class;
+ u8 vdev_type;
+ u8 vdev_status;
+ u8 vdev_flags;
+ u8 rdev_class;
+ u8 rdev_type;
+ u8 rdev_model;
+ u8 rdev_features;
+} __attribute__ ((packed, aligned (32)))
+
+dasd_mdsk_characteristics_t;
+
+/* eckd count area */
+typedef struct {
+ __u16 cyl;
+ __u16 head;
+ __u8 record;
+ __u8 kl;
+ __u16 dl;
+} __attribute__ ((packed))
+
+eckd_count_t;
+
#ifdef CONFIG_DASD_CKD
struct dasd_ckd_characteristics {
char info[64];
#endif /* CONFIG_DASD_CKD */
-#ifdef CONFIG_DASD_ECKD
-struct dasd_eckd_characteristics {
- char info[64];
-};
-
-#endif /* CONFIG_DASD_ECKD */
-
typedef
union {
char __attribute__ ((aligned (32))) bytes[64];
#ifdef CONFIG_DASD_ECKD
dasd_eckd_characteristics_t eckd;
#endif /* CONFIG_DASD_ECKD */
+#ifdef CONFIG_DASD_MDSK
+ dasd_mdsk_characteristics_t mdsk;
+#endif /* CONFIG_DASD_MDSK */
} __attribute__ ((aligned (32)))
dasd_characteristics_t;
#define CQR_STATUS_QUEUED 0x02
#define CQR_STATUS_IN_IO 0x04
#define CQR_STATUS_DONE 0x08
-#define CQR_STATUS_RETRY 0x10
-#define CQR_STATUS_ERROR 0x20
-#define CQR_STATUS_FAILED 0x40
-#define CQR_STATUS_SLEEP 0x80
+#define CQR_STATUS_ERROR 0x10
+#define CQR_STATUS_ERP_PEND 0x20
+#define CQR_STATUS_ERP_ACTIVE 0x40
+#define CQR_STATUS_FAILED 0x80
#define CQR_FLAGS_SLEEP 0x01
#define CQR_FLAGS_WAIT 0x02
spinlock_t lock;
int options;
} __attribute__ ((packed))
+
cqr_t;
typedef
unsigned int bp_block;
unsigned int blocks;
unsigned int s2b_shift;
- unsigned int b2k_shift;
- unsigned int first_sector;
+ unsigned int label_block;
} dasd_sizes_t;
#define DASD_CHANQ_ACTIVE 0x01
#define DASD_CHANQ_BUSY 0x02
+#define DASD_REQUEST_Q_BROKEN 0x04
+
typedef
struct dasd_chanq_t {
volatile cqr_t *head;
spinlock_t f_lock; /* lock for flag operations */
int queued_requests;
atomic_t flags;
+ atomic_t dirty_requests;
struct dasd_chanq_t *next_q; /* pointer to next queue */
} __attribute__ ((packed, aligned (16)))
+
dasd_chanq_t;
+#define DASD_INFO_STATUS_UNKNOWN 0x00 /* nothing known about DASD */
+#define DASD_INFO_STATUS_DETECTED 0x01 /* DASD identified as DASD, irq taken */
+#define DASD_INFO_STATUS_ANALYSED 0x02 /* first block read, filled sizes */
+#define DASD_INFO_STATUS_FORMATTED 0x04 /* identified valid format */
+#define DASD_INFO_STATUS_PARTITIONED 0x08 /* identified partitions */
+
typedef
struct dasd_information_t {
devstat_t dev_status;
int open_count;
spinlock_t lock;
struct semaphore sem;
- unsigned long flags;
+ atomic_t status;
int irq;
+ mdsk_setup_data_t *mdsk_setup;
struct proc_dir_entry *proc_device;
union {
+ struct {
+ eckd_count_t count_data;
+ } eckd;
struct {
char dummy;
} fba;
+ struct {
+ mdsk_init_io_t iib;
+ mdsk_rw_io_t iob;
+ mdsk_setup_data_t setup;
+ long *label;
+ } mdsk;
struct {
char dummy;
} ckd;
- struct {
- int blk_per_trk;
- } eckd;
} private;
+ struct wait_queue *wait_q;
} dasd_information_t;
-typedef struct {
- int start_unit;
- int stop_unit;
- int blksize;
-} format_data_t;
+extern dasd_information_t *dasd_info[];
+
+#include "dasd_erp.h"
typedef
struct {
int (*ck_devinfo) (dev_info_t *);
cqr_t *(*get_req_ccw) (int, struct request *);
- cqr_t *(*rw_label) (int, int, char *);
int (*ck_characteristics) (dasd_characteristics_t *);
- int (*fill_sizes) (int);
+ cqr_t *(*fill_sizes_first) (int);
+ int (*fill_sizes_last) (int);
int (*dasd_format) (int, format_data_t *);
+ void (*fill_geometry) (int, struct hd_geometry *);
+ dasd_era_t (*erp_examine) (cqr_t *, devstat_t *);
} dasd_operations_t;
-extern dasd_information_t *dasd_info[];
+extern dasd_operations_t *dasd_disciplines[];
+
+/* Prototypes */
+int dasd_start_IO (cqr_t * cqr);
#endif /* DASD_TYPES_H */
/* Added statement HSM 12/03/99 */
#include <asm/irq.h>
+#include <asm/s390_ext.h>
#define MAJOR_NR MDISK_MAJOR /* force definitions on in blk.h */
* queues and marks a bottom half.
*
*/
-void do_mdisk_interrupt(void)
+void do_mdisk_interrupt(struct pt_regs *regs, __u16 code)
{
u16 code;
mdisk_Dev *dev;
,MAJOR_NR);
return MAJOR_NR;
}
-
+ register_external_interrupt(0x2603,do_mdisk_interrupt);
/*
* setup global major dependend structures
*/
--- /dev/null
+
+/*
+ * xpram.c -- the S/390 expanded memory RAM-disk
+ *
+ * significant parts of this code are based on
+ * the sbull device driver presented in
+ * A. Rubini: Linux Device Drivers
+ *
+ * Author of XPRAM specific coding: Reinhard Buendgen
+ * buendgen@de.ibm.com
+ *
+ * External interfaces:
+ * Interfaces to linux kernel
+ * xpram_setup: read kernel parameters (see init/main.c)
+ * xpram_init: initialize device driver (see drivers/block/ll_rw_blk.c)
+ * Module interfaces
+ * init_module
+ * cleanup_module
+ * Device specific file operations
+ * xpram_iotcl
+ * xpram_open
+ * xpram_release
+ *
+ * "ad-hoc" partitioning:
+ * the expanded memory can be partitioned among several devices
+ * (with different minors). The partitioning set up can be
+ * set by kernel or module parameters (int devs & int sizes[])
+ *
+ * module parameters: devs= and sizes=
+ * kernel parameters: xpram_parts=
+ * note: I did not succeed in parsing numbers
+ * for module parameters of type string "s" ?!?
+ *
+ * Other kernel files/modules affected (grep for "xpram" or "XPRAM"):
+ * drivers/s390/block/Config.in
+ * drivers/s390/block/Makefile
+ * include/linux/blk.h
+ * include/linux/major.h
+ * init/main.c
+ * drivers/block/s390/block/ll_rw_blk.c
+ *
+ *
+ * Potential future improvements:
+ * request clustering: first coding started not yet tested or integrated
+ * I doubt that it really pays off
+ * generic hard disk support to replace ad-hoc partitioning
+ *
+ * Tested with 2.2.14 (under VM)
+ */
+
+
+#ifdef MODULE
+# ifndef __KERNEL__
+# define __KERNEL__
+# endif
+# define __NO_VERSION__ /* don't define kernel_version in module.h */
+#endif /* MODULE */
+
+#include <linux/module.h>
+#include <linux/version.h>
+
+#ifdef MODULE
+char kernel_version [] = UTS_RELEASE;
+#endif
+
+#include <linux/sched.h>
+#include <linux/kernel.h> /* printk() */
+#include <linux/malloc.h> /* kmalloc() */
+#include <linux/fs.h> /* everything... */
+#include <linux/errno.h> /* error codes */
+#include <linux/timer.h>
+#include <linux/types.h> /* size_t */
+#include <linux/ctype.h> /* isdigit, isxdigit */
+#include <linux/fcntl.h> /* O_ACCMODE */
+#include <linux/hdreg.h> /* HDIO_GETGEO */
+
+#include <asm/system.h> /* cli(), *_flags */
+#include <asm/uaccess.h> /* put_user */
+
+/*
+ define the debug levels:
+ - 0 No debugging output to console or syslog
+ - 1 Log internal errors to syslog, ignore check conditions
+ - 2 Log internal errors and check conditions to syslog
+ - 3 Log internal errors to console, log check conditions to syslog
+ - 4 Log internal errors and check conditions to console
+ - 5 panic on internal errors, log check conditions to console
+ - 6 panic on both, internal errors and check conditions
+ */
+#define XPRAM_DEBUG 4
+
+#define PRINTK_HEADER XPRAM_NAME
+
+#if XPRAM_DEBUG > 0
+#define PRINT_DEBUG(x...) printk ( KERN_DEBUG PRINTK_HEADER "debug:" x )
+#define PRINT_INFO(x...) printk ( KERN_INFO PRINTK_HEADER "info:" x )
+#define PRINT_WARN(x...) printk ( KERN_WARNING PRINTK_HEADER "warning:" x )
+#define PRINT_ERR(x...) printk ( KERN_ERR PRINTK_HEADER "error:" x )
+#define PRINT_FATAL(x...) panic ( PRINTK_HEADER "panic:"x )
+#else
+#define PRINT_DEBUG(x...) printk ( KERN_DEBUG PRINTK_HEADER "debug:" x )
+#define PRINT_INFO(x...) printk ( KERN_DEBUG PRINTK_HEADER "info:" x )
+#define PRINT_WARN(x...) printk ( KERN_DEBUG PRINTK_HEADER "warning:" x )
+#define PRINT_ERR(x...) printk ( KERN_DEBUG PRINTK_HEADER "error:" x )
+#define PRINT_FATAL(x...) printk ( KERN_DEBUG PRINTK_HEADER "panic:" x )
+#endif
+
+#define MAJOR_NR xpram_major /* force definitions on in blk.h */
+int xpram_major; /* must be declared before including blk.h */
+
+
+#define DEVICE_NR(device) MINOR(device) /* xpram has no partition bits */
+#define DEVICE_NAME "xpram" /* name for messaging */
+#define DEVICE_INTR xpram_intrptr /* pointer to the bottom half */
+#define DEVICE_NO_RANDOM /* no entropy to contribute */
+
+
+#define DEVICE_OFF(d) /* do-nothing */
+
+#define DEVICE_REQUEST *xpram_dummy_device_request /* dummy function variable
+ * to prevent warnings
+ */
+
+#include <linux/blk.h>
+
+#include "xpram.h" /* local definitions */
+
+/*
+ * Non-prefixed symbols are static. They are meant to be assigned at
+ * load time. Prefixed symbols are not static, so they can be used in
+ * debugging. They are hidden anyways by register_symtab() unless
+ * XPRAM_DEBUG is defined.
+ */
+
+static int major = XPRAM_MAJOR;
+static int devs = XPRAM_DEVS;
+static int rahead = XPRAM_RAHEAD;
+static int sizes[XPRAM_MAX_DEVS] = { 0, };
+static int blksize = XPRAM_BLKSIZE;
+static int hardsect = XPRAM_HARDSECT;
+
+int xpram_devs, xpram_rahead;
+int xpram_blksize, xpram_hardsect;
+int xpram_mem_avail = 0;
+int xpram_sizes[XPRAM_MAX_DEVS];
+
+
+MODULE_PARM(devs,"i");
+MODULE_PARM(sizes,"1-" __MODULE_STRING(XPRAM_MAX_DEVS) "i");
+
+MODULE_PARM_DESC(devs, "number of devices (\"partitions\"), " \
+ "the default is " __MODULE_STRING(XPRAM_DEVS) "\n");
+MODULE_PARM_DESC(sizes, "list of device (partition) sizes " \
+ "the defaults are 0s \n" \
+ "All devices with size 0 equally partition the "
+		 "remaining space on the expanded storage not "
+ "claimed by explicit sizes\n");
+
+
+
+/* The following items are obtained through kmalloc() in init_module() */
+
+Xpram_Dev *xpram_devices = NULL;
+int *xpram_blksizes = NULL;
+int *xpram_hardsects = NULL;
+int *xpram_offsets = NULL; /* partition offsets */
+
+#define MIN(x,y) ((x) < (y) ? (x) : (y))
+#define MAX(x,y) ((x) > (y) ? (x) : (y))
+
+/*
+ * compute nearest multiple of 4 , argument must be non-negative
+ * the macros used depends on XPRAM_KB_IN_PG = 4
+ */
+
+#define NEXT4(x) ((x & 0x3) ? (x+4-(x &0x3)) : (x)) /* increment if needed */
+#define LAST4(x) ((x & 0x3) ? (x-4+(x & 0x3)) : (x)) /* decrement if needed */
+
+#if 0 /* this is probably not faster than the previous code */
+#define NEXT4(x) ((((x-1)>>2)>>2)+4) /* increment if needed */
+#define LAST4(x) (((x+3)>>2)<<2) /* decrement if needed */
+#endif
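NEXT4()/LAST4() round a kilobyte count up or down to the 4 KB granularity of expanded-storage pages. A standalone restatement with a few spot checks (LAST4 is written here in the simpler x - (x & 3) form that the "nearest multiple of 4" comment describes):

```c
/* Round a non-negative KB count up/down to a multiple of 4,
 * matching the 4 KB expanded-storage page size. */
#define NEXT4(x) (((x) & 0x3) ? ((x) + 4 - ((x) & 0x3)) : (x))
#define LAST4(x) ((x) - ((x) & 0x3))
```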
+
+/* integer formats */
+#define XPRAM_INVALF -1 /* invalid */
+#define XPRAM_HEXF 0 /* hexadecimal */
+#define XPRAM_DECF 1 /* decimal */
+
+/*
+ * parsing operations (needed for kernel parameter parsing)
+ */
+
+/* -------------------------------------------------------------------------
+ * sets the string pointer after the next comma
+ *
+ * argument: strptr pointer to string
+ * side effect: strptr points to endof string or to position of the next
+ * comma
+ * ------------------------------------------------------------------------*/
+static void
+xpram_scan_to_next_comma (char **strptr)
+{
+ while ( ((**strptr) != ',') && (**strptr) )
+ (*strptr)++;
+}
+
+/* -------------------------------------------------------------------------
+ * interpret character as hex-digit
+ *
+ * argument: c character
+ * result: c interpreted as hex-digit
+ * note: can be used to read digits for any base <= 16
+ * ------------------------------------------------------------------------*/
+static int
+xpram_get_hexdigit (char c)
+{
+ if ((c >= '0') && (c <= '9'))
+ return c - '0';
+ if ((c >= 'a') && (c <= 'f'))
+ return c + 10 - 'a';
+ if ((c >= 'A') && (c <= 'F'))
+ return c + 10 - 'A';
+ return -1;
+}
+
+/*--------------------------------------------------------------------------
+ * Check format of unsigned integer
+ *
+ * Argument: strptr pointer to string
+ * result: -1 if strptr does not start with a digit
+ * (does not start an integer)
+ * 0 if strptr starts a positive hex-integer with "0x"
+ * 1 if strptr starts a positive decimal integer
+ *
+ * side effect: if strptr starts a positive hex-integer then strptr is
+ * set to the character after the "0x"
+ *-------------------------------------------------------------------------*/
+static int
+xpram_int_format(char **strptr)
+{
+ if ( !isdigit(**strptr) )
+ return XPRAM_INVALF;
+ if ( (**strptr == '0')
+ && ( (*((*strptr)+1) == 'x') || (*((*strptr) +1) == 'X') )
+ && isxdigit(*((*strptr)+2)) ) {
+ *strptr=(*strptr)+2;
+ return XPRAM_HEXF;
+ } else return XPRAM_DECF;
+}
+
+/*--------------------------------------------------------------------------
+ * Read non-negative decimal integer
+ *
+ * Argument: strptr pointer to string starting with a non-negative integer
+ * in decimal format
+ * result: the value of the initial integer pointed to by strptr
+ *
+ * side effect: strptr is set to the first character following the integer
+ *-------------------------------------------------------------------------*/
+
+static int
+xpram_read_decint (char ** strptr)
+{
+ int res=0;
+ while ( isdigit(**strptr) ) {
+ res = (res*10) + xpram_get_hexdigit(**strptr);
+ (*strptr)++;
+ }
+ return res;
+}
+
+/*--------------------------------------------------------------------------
+ * Read non-negative hex-integer
+ *
+ * Argument: strptr pointer to string starting with a non-negative integer
+ * in hex format (without "0x" prefix)
+ * result: the value of the initial integer pointed to by strptr
+ *
+ * side effect: strptr is set to the first character following the integer
+ *-------------------------------------------------------------------------*/
+
+static int
+xpram_read_hexint (char ** strptr)
+{
+ int res=0;
+ while ( isxdigit(**strptr) ) {
+ res = (res<<4) + xpram_get_hexdigit(**strptr);
+ (*strptr)++;
+ }
+ return res;
+}
+/*--------------------------------------------------------------------------
+ * Read non-negative integer
+ *
+ * Argument: strptr pointer to string starting with a non-negative integer
+ * (either in decimal or in hex format)
+ * result: the value of the initial integer pointed to by strptr
+ * in case of a parsing error the result is -EINVAL
+ *
+ * side effect: strptr is set to the first character following the integer
+ *-------------------------------------------------------------------------*/
+
+static int
+xpram_read_int (char ** strptr)
+{
+ switch ( xpram_int_format(strptr) ) {
+ case XPRAM_INVALF: return -EINVAL;
+ case XPRAM_HEXF: return xpram_read_hexint(strptr);
+ case XPRAM_DECF: return xpram_read_decint(strptr);
+ default: return -EINVAL;
+ }
+}
+
+/*--------------------------------------------------------------------------
+ * Read size
+ *
+ * Argument: strptr pointer to string starting with a non-negative integer
+ * followed optionally by a size modifier:
+ * k or K for kilo (default),
+ * m or M for mega
+ * g or G for giga
+ * result: the value of the initial integer pointed to by strptr
+ * multiplied by the modifier value divided by 1024
+ * in case of a parsing error the result is -EINVAL
+ *
+ * side effect: strptr is set to the first character following the size
+ *-------------------------------------------------------------------------*/
+
+static int
+xpram_read_size (char ** strptr)
+{
+ int res;
+
+ res=xpram_read_int(strptr);
+ if ( res < 0 ) return res;
+ switch ( **strptr ) {
+ case 'g':
+ case 'G': res=res*1024; /* fall through */
+ case 'm':
+ case 'M': res=res*1024; /* fall through */
+ case 'k':
+ case 'K': (*strptr)++;
+ }
+
+ return res;
+}
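The size syntax accepted here (an integer, optionally 0x-prefixed, followed by an optional k/K, m/M or g/G modifier, yielding a value in kB) can be sketched as a standalone user-space parser. This reimplementation uses strtol instead of the hand-rolled digit routines, purely for illustration; `read_size_kb` is an illustrative name:

```c
#include <stdlib.h>

/* standalone sketch of xpram_read_size: parse "<int>[kKmMgG]", return kB */
static long read_size_kb(const char **strptr)
{
	char *end;
	long res = strtol(*strptr, &end, 0);	/* base 0 accepts the 0x prefix */

	if (end == *strptr || res < 0)
		return -1;			/* parse error */
	switch (*end) {
	case 'g': case 'G': res *= 1024;	/* fall through */
	case 'm': case 'M': res *= 1024;	/* fall through */
	case 'k': case 'K': end++;
	}
	*strptr = end;
	return res;
}
```

So "2m" parses to 2048 and "0x10" to 16, both in kB, with the pointer left on the first character after the size, as the kernel routine does.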
+
+
+/*--------------------------------------------------------------------------
+ * Read tail of comma separated size list ",i1,i2,...,in"
+ *
+ * Arguments:strptr pointer to string. It is assumed that the string has
+ * the format (","<size>)*
+ * maxl integer describing the maximal number of elements in the
+ * list pointed to by strptr, maxl must be > 0.
+ * ilist array of dimension >= maxl of integers to be modified
+ *
+ * result: -EINVAL if the list is longer than maxl
+ * 0 otherwise
+ *
+ * side effects: for j=1,...,n ilist[j-1] is set to the value of ij if it
+ * is a valid non-negative integer and to -EINVAL otherwise;
+ * if no comma is found where one is expected, an entry in
+ * ilist is set to -EINVAL
+ *-------------------------------------------------------------------------*/
+static int
+xpram_read_size_list_tail (char ** strptr, int maxl, int * ilist)
+{
+ int i=0;
+ char *str = *strptr;
+ int res=0;
+
+ while ( (*str == ',') && (i < maxl) ) {
+ str++;
+ ilist[i] = xpram_read_size(&str);
+ if ( ilist[i] == -EINVAL ) {
+ xpram_scan_to_next_comma(&str);
+ res = -EINVAL;
+ }
+ i++;
+ }
+ return res;
+#if 0 /* be lenient about trailing stuff */
+ if ( *str != 0 && *str != ' ' ) {
+ ilist[MAX(i-1,0)] = -EINVAL;
+ return -EINVAL;
+ } else return 0;
+#endif
+}
+
+
+/*
+ * expanded memory operations
+ */
+
+
+/*--------------------------------------------------------------------*/
+/* Copy expanded memory page (4kB) into main memory */
+/* Arguments */
+/* page_addr: address of target page */
+/* xpage_index: index of expanded memory page */
+/* Return value */
+/* 0: if operation succeeds */
+/* non-0: otherwise */
+/*--------------------------------------------------------------------*/
+long xpram_page_in (unsigned long page_addr, unsigned long xpage_index)
+{
+ long cc=0;
+ unsigned long real_page_addr = __pa(page_addr);
+ __asm__ __volatile__ (
+ " lr 1,%1 \n" /* r1 = real_page_addr */
+ " lr 2,%2 \n" /* r2 = xpage_index */
+ " .long 0xb22e0012 \n" /* pgin r1,r2 */
+ /* copy page from expanded memory */
+ "0: ipm %0 \n" /* save status (cc & program mask) */
+ " srl %0,28(0) \n" /* cc into least significant bits */
+ "1: \n" /* we are done */
+ ".section .fixup,\"ax\"\n" /* start of fix up section */
+ "2: lhi %0,2 \n" /* return unused condition code 2 */
+ " bras 1,3f \n" /* save label 1: in r1 and goto 3 */
+ " .long 1b \n" /* literal containing label 1 */
+ "3: l 1,0(1) \n" /* load label 1 address into r1 */
+ " br 1 \n" /* goto label 1 (across sections) */
+ ".previous \n" /* back in text section */
+ ".section __ex_table,\"a\"\n" /* start __extable */
+ " .align 4 \n"
+ " .long 0b,2b \n" /* failure point 0, fixup code 2 */
+ ".previous \n"
+ : "=d" (cc) : "d" (real_page_addr), "d" (xpage_index) : "cc", "1", "2"
+ );
+ switch (cc) {
+ case 0: return 0;
+ case 1: return -EIO;
+ case 2: return -ENXIO;
+ case 3: return -ENXIO;
+ default: return -EIO; /* should not happen */
+ };
+}
+
+/*--------------------------------------------------------------------*/
+/* Copy a 4kB page of main memory to an expanded memory page */
+/* Arguments */
+/* page_addr: address of source page */
+/* xpage_index: index of expanded memory page */
+/* Return value */
+/* 0: if operation succeeds */
+/* non-0: otherwise */
+/*--------------------------------------------------------------------*/
+long xpram_page_out (unsigned long page_addr, unsigned long xpage_index)
+{
+ long cc=0;
+ unsigned long real_page_addr = __pa(page_addr);
+ __asm__ __volatile__ (
+ " lr 1,%1 \n" /* r1 = mem_page */
+ " lr 2,%2 \n" /* r2 = rpi */
+ " .long 0xb22f0012 \n" /* pgout r1,r2 */
+ /* copy page from expanded memory */
+ "0: ipm %0 \n" /* save status (cc & program mask) */
+ " srl %0,28(0) \n" /* cc into least significant bits */
+ "1: \n" /* we are done */
+ ".section .fixup,\"ax\"\n" /* start of fix up section */
+ "2: lhi %0,2 \n" /* return unused condition code 2 */
+ " bras 1,3f \n" /* save label 1: in r1 and goto 3 */
+ " .long 1b \n" /* literal containing label 1 */
+ "3: l 1,0(1) \n" /* load label 1 address into r1 */
+ " br 1 \n" /* goto label 1 (across sections) */
+ ".previous \n" /* back in text section */
+ ".section __ex_table,\"a\"\n" /* start __extable */
+ " .align 4 \n"
+ " .long 0b,2b \n" /* failure point 0, fixup code 2 */
+ ".previous \n"
+ : "=d" (cc) : "d" (real_page_addr), "d" (xpage_index) : "cc", "1", "2"
+ );
+ switch (cc) {
+ case 0: return 0;
+ case 1: return -EIO;
+ case 2: { PRINT_ERR("expanded storage lost!\n"); return -ENXIO; }
+ case 3: return -ENXIO;
+ default: return -EIO; /* should not happen */
+ }
+}
+
+/*--------------------------------------------------------------------*/
+/* Measure expanded memory */
+/* Return value */
+/* size of expanded memory in kB (must be a multiple of 4) */
+/*--------------------------------------------------------------------*/
+int xpram_size(void)
+{
+ long cc=0;
+ unsigned long base=0;
+ unsigned long po, pi, rpi; /* page order, page index, probe index */
+
+ unsigned long mem_page = __get_free_page(GFP_KERNEL);
+
+ /* for po=0,1,2,... try to move in page number base+(2^po)-1 */
+ pi=1;
+ for (po=0; po <= 32; po++) { /* pi = 2^po */
+ cc=xpram_page_in(mem_page,base+pi-1);
+ if ( cc ) break;
+ pi <<= 1;
+ }
+ if ( cc && (po < 31 ) ) {
+ pi >>=1;
+ base += pi;
+ pi >>=1;
+ for ( ; pi > 0; pi >>= 1) {
+ rpi = pi - 1;
+ cc=xpram_page_in(mem_page,base+rpi);
+ if ( !cc ) base += pi;
+ }
+ }
+
+ free_page (mem_page);
+
+ if ( cc && (po < 31) )
+ return (XPRAM_KB_IN_PG * base);
+ else /* return maximal value possible */
+ return INT_MAX;
+}
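The sizing logic above is a doubling probe followed by a bitwise binary search over the page index. The same control flow can be exercised in user space with a simulated page-in; `probe()`, `simulated_pages` and `find_pages()` below are illustrative stand-ins, not kernel interfaces:

```c
/* user-space model of the xpram_size() search: probe() plays the role of
 * xpram_page_in() and succeeds only below a simulated memory size */
static unsigned long simulated_pages;

static int probe(unsigned long xpage_index)	/* 0 on success */
{
	return xpage_index < simulated_pages ? 0 : -1;
}

static unsigned long find_pages(void)
{
	unsigned long base = 0, pi = 1, po, rpi;
	int cc = 0;

	for (po = 0; po <= 32; po++) {	/* probe pages 0, 1, 3, 7, 15, ... */
		cc = probe(base + pi - 1);
		if (cc)
			break;
		pi <<= 1;
	}
	if (cc && po < 31) {		/* refine the remaining bits */
		pi >>= 1;
		base += pi;
		pi >>= 1;
		for (; pi > 0; pi >>= 1) {
			rpi = pi - 1;
			if (!probe(base + rpi))
				base += pi;
		}
	}
	return base;			/* number of accessible pages */
}
```

With `simulated_pages` set to, say, 4711, `find_pages()` recovers exactly 4711; `xpram_size()` then multiplies the page count by XPRAM_KB_IN_PG to report kilobytes.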
+
+/*
+ * Open and close
+ */
+
+int xpram_open (struct inode *inode, struct file *filp)
+{
+ Xpram_Dev *dev; /* device information */
+ int num = MINOR(inode->i_rdev);
+
+
+ if (num >= xpram_devs) return -ENODEV;
+ dev = xpram_devices + num;
+
+ PRINT_DEBUG("calling xpram_open for device %d (size %dkB, usage: %d)\n", num,dev->size,atomic_read(&(dev->usage)));
+
+ atomic_inc(&(dev->usage));
+ MOD_INC_USE_COUNT;
+ return 0; /* success */
+}
+
+int xpram_release (struct inode *inode, struct file *filp)
+{
+ Xpram_Dev *dev = xpram_devices + MINOR(inode->i_rdev);
+
+ PRINT_DEBUG("calling xpram_release for device %d (size %dkB, usage: %d)\n",MINOR(inode->i_rdev) ,dev->size,atomic_read(&(dev->usage)));
+
+ /*
+ * If the device is closed for the last time, start a timer
+ * to release RAM in half a minute. The function and argument
+ * for the timer have been setup in init_module()
+ */
+ if (!atomic_dec_return(&(dev->usage))) {
+ /* but flush it right now */
+ fsync_dev(inode->i_rdev);
+ invalidate_buffers(inode->i_rdev);
+ }
+ MOD_DEC_USE_COUNT;
+ return(0);
+}
+
+
+/*
+ * The ioctl() implementation
+ */
+
+int xpram_ioctl (struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ int err, size;
+ struct hd_geometry *geo = (struct hd_geometry *)arg;
+
+ PRINT_DEBUG("ioctl 0x%x 0x%lx\n", cmd, arg);
+ switch(cmd) {
+
+ case BLKGETSIZE: /* 0x1260 */
+ /* Return the device size, expressed in sectors */
+ if (!arg) return -EINVAL; /* NULL pointer: not valid */
+ err= 0; /* verify_area_20(VERIFY_WRITE, (long *) arg, sizeof(long));
+ * if (err) return err;
+ */
+ put_user ( 1024* xpram_sizes[MINOR(inode->i_rdev)]
+ / XPRAM_SOFTSECT,
+ (long *) arg);
+ return 0;
+
+ case BLKFLSBUF: /* flush, 0x1261 */
+ fsync_dev(inode->i_rdev);
+ if ( suser() )invalidate_buffers(inode->i_rdev);
+ return 0;
+
+ case BLKRAGET: /* return the readahead value, 0x1263 */
+ if (!arg) return -EINVAL;
+ err = 0; /* verify_area_20(VERIFY_WRITE, (long *) arg, sizeof(long));
+ * if (err) return err;
+ */
+ put_user(read_ahead[MAJOR(inode->i_rdev)], (long *)arg);
+
+ return 0;
+
+ case BLKRASET: /* set the readahead value, 0x1262 */
+ if (!suser()) return -EACCES;
+ if (arg > 0xff) return -EINVAL; /* limit it */
+ read_ahead[MAJOR(inode->i_rdev)] = arg;
+ atomic_eieio();
+ return 0;
+
+ case BLKRRPART: /* re-read partition table: can't do it, 0x1259 */
+ return -EINVAL;
+
+ RO_IOCTLS(inode->i_rdev, arg); /* the default RO operations
+ * BLKROSET
+ * BLKROGET
+ */
+
+ case HDIO_GETGEO:
+ /*
+ * get geometry: we have to fake one... trim the size to a
+ * multiple of 64 (32k): tell we have 16 sectors, 4 heads,
+ * whatever cylinders. Tell also that data starts at sector 4.
+ */
+ size = xpram_mem_avail * 1024 / XPRAM_SOFTSECT;
+ /* size = xpram_mem_avail * 1024 / xpram_hardsect; */
+ size &= ~0x3f; /* multiple of 64 */
+ if (geo==NULL) return -EINVAL;
+ /*
+ * err=verify_area_20(VERIFY_WRITE, geo, sizeof(*geo));
+ * if (err) return err;
+ */
+
+ put_user(size >> 6, &geo->cylinders);
+ put_user( 4, &geo->heads);
+ put_user( 16, &geo->sectors);
+ put_user( 4, &geo->start);
+
+ return 0;
+ }
+
+ return -EINVAL; /* unknown command */
+}
+
+/*
+ * The file operations
+ */
+
+struct file_operations xpram_fops = {
+ NULL, /* lseek: default */
+ block_read,
+ block_write,
+ NULL, /* xpram_readdir */
+ NULL, /* xpram_select */
+ xpram_ioctl,
+ NULL, /* xpram_mmap */
+ xpram_open,
+ NULL, /* flush */
+ xpram_release,
+ block_fsync,
+ NULL, /* xpram_fasync */
+ NULL,
+ NULL
+};
+
+/*
+ * Block-driver specific functions
+ */
+
+void xpram_request(void)
+{
+ Xpram_Dev *device;
+ /* u8 *ptr; */
+ /* int size; */
+ unsigned long page_no; /* expanded memory page number */
+ unsigned long sects_to_copy; /* number of sectors to be copied */
+ char * buffer; /* local pointer into buffer cache */
+ int dev_no; /* device number of request */
+ int fault; /* faulty access to expanded memory */
+
+ while(1) {
+ INIT_REQUEST;
+
+ fault=0;
+ dev_no = DEVICE_NR(CURRENT_DEV);
+ /* Check if the minor number is in range */
+ if ( dev_no >= xpram_devs ) {
+ static int count = 0;
+ if (count++ < 5) /* print the message at most five times */
+ PRINT_WARN(" request for unknown device\n");
+ end_request(0);
+ continue;
+ }
+
+ /* pointer to device structure, from the global array */
+ device = xpram_devices + dev_no;
+ sects_to_copy = CURRENT->current_nr_sectors;
+ /* does request exceed size of device ? */
+ if ( XPRAM_SEC2KB(sects_to_copy) > xpram_sizes[dev_no] ) {
+ PRINT_WARN(" request past end of device\n");
+ end_request(0);
+ continue;
+ }
+
+ /* Does request start at page boundary? -- paranoia */
+#if 0
+ PRINT_DEBUG(" req %lx, sect %lx, to copy %lx, buf addr %lx\n", (unsigned long) CURRENT, CURRENT->sector, sects_to_copy, (unsigned long) CURRENT->buffer);
+#endif
+ buffer = CURRENT->buffer;
+#if XPRAM_SEC_IN_PG != 1
+ /* Does request start at an expanded storage page boundary? */
+ if ( CURRENT->sector & (XPRAM_SEC_IN_PG - 1) ) {
+ PRINT_WARN(" request does not start at an expanded storage page boundary\n");
+ PRINT_WARN(" referenced sector: %ld\n",CURRENT->sector);
+ end_request(0);
+ continue;
+ }
+ /* Does request refer to partial expanded storage pages? */
+ if ( sects_to_copy & (XPRAM_SEC_IN_PG - 1) ) {
+ PRINT_WARN(" request refers to a partial expanded storage page\n");
+ end_request(0);
+ continue;
+ }
+#endif /* XPRAM_SEC_IN_PG != 1 */
+ /* Is request buffer aligned with kernel pages? */
+ if ( ((unsigned long)buffer) & (XPRAM_PGSIZE-1) ) {
+ PRINT_WARN(" request buffer is not aligned with kernel pages\n");
+ end_request(0);
+ continue;
+ }
+
+ /* which page of expanded storage is affected first? */
+ page_no = (xpram_offsets[dev_no] >> XPRAM_KB_IN_PG_ORDER)
+ + (CURRENT->sector >> XPRAM_SEC_IN_PG_ORDER);
+
+#if 0
+ PRINT_DEBUG("request: %d ( dev %d, copy %d sectors, at page %d ) \n", CURRENT->cmd,dev_no,sects_to_copy,page_no);
+#endif
+
+ switch(CURRENT->cmd) {
+ case READ:
+ do {
+ if ( (fault=xpram_page_in((unsigned long)buffer,page_no)) ) {
+ PRINT_WARN("xpram(dev %d): page in failed for page %ld.\n",dev_no,page_no);
+ break;
+ }
+ sects_to_copy -= XPRAM_SEC_IN_PG;
+ buffer += XPRAM_PGSIZE;
+ page_no++;
+ } while ( sects_to_copy > 0 );
+ break;
+ case WRITE:
+ do {
+ if ( (fault=xpram_page_out((unsigned long)buffer,page_no))
+ ) {
+ PRINT_WARN("xpram(dev %d): page out failed for page %ld.\n",dev_no,page_no);
+ break;
+ }
+ sects_to_copy -= XPRAM_SEC_IN_PG;
+ buffer += XPRAM_PGSIZE;
+ page_no++;
+ } while ( sects_to_copy > 0 );
+ break;
+ default:
+ /* can't happen */
+ end_request(0);
+ continue;
+ }
+ if ( fault ) end_request(0);
+ else end_request(1); /* success */
+ }
+}
+
+/*
+ * Kernel interfaces
+ */
+
+/*
+ * Parses the kernel parameters given in the kernel parameter line.
+ * The expected format is
+ * <number_of_partitions>[","<partition_size>]*
+ * where
+ * <number_of_partitions> is a positive integer that initializes
+ * xpram_devs
+ * each size is a non-negative integer possibly followed by a
+ * magnitude (k,K,m,M,g,G); the list of sizes initializes
+ * xpram_sizes
+ *
+ * Arguments
+ * str: substring of kernel parameter line that contains xprams
+ * kernel parameters.
+ * ints: not used
+ *
+ * Side effects
+ * the global variable devs is set to the value of
+ * <number_of_partitions> and sizes[i] is set to the i-th
+ * partition size (if provided). A parsing error of a value
+ * results in this value being set to -EINVAL.
+ */
+void xpram_setup (char *str, int *ints)
+{
+ devs = xpram_read_int(&str);
+ if ( devs != -EINVAL )
+ if ( xpram_read_size_list_tail(&str,devs,sizes) < 0 )
+ PRINT_ERR("error while reading xpram parameters.\n");
+}
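The accepted parameter-line syntax, `<number_of_partitions>[,<size>]*`, can be illustrated with a small standalone parser. It again uses strtol rather than the kernel helpers, and `parse_xpram_params` is an illustrative name; sizes left unspecified stay at their caller-provided value, mirroring the driver's "size automatically" convention:

```c
#include <stdlib.h>

/* standalone sketch of the "devs,size,size,..." parameter syntax;
 * sizes are returned in kB, 0 means "size automatically" */
static int parse_xpram_params(const char *str, long sizes_kb[], int maxl)
{
	char *end;
	long devs = strtol(str, &end, 0);
	int i;

	if (end == str || devs <= 0 || devs > maxl)
		return -1;
	for (i = 0; i < devs && *end == ','; i++) {
		const char *p = end + 1;
		long v = strtol(p, &end, 0);
		if (end == p || v < 0)
			return -1;
		switch (*end) {
		case 'g': case 'G': v *= 1024;	/* fall through */
		case 'm': case 'M': v *= 1024;	/* fall through */
		case 'k': case 'K': end++;
		}
		sizes_kb[i] = v;
	}
	return (int) devs;
}
```

For example, `parse_xpram_params("3,64k,2m", sizes, 32)` returns 3 and sets sizes[0]=64 and sizes[1]=2048, leaving the third partition to be sized automatically.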
+
+/*
+ * initialize xpram device driver
+ *
+ * Result: 0 ok
+ * negative number: negative error code
+ */
+
+int xpram_init(void)
+{
+ int result, i;
+ int mem_usable; /* net size of expanded memory */
+ int mem_needed=0; /* size of expanded memory needed to fulfill
+ * requirements of non-zero parameters in sizes
+ */
+
+ int mem_auto_no=0; /* number of (implicit) zero parameters in sizes */
+ int mem_auto; /* automatically determined device size */
+
+ /*
+ * Copy the (static) cfg variables to public prefixed ones to allow
+ * snoozing with a debugger.
+ */
+
+ xpram_rahead = rahead;
+ xpram_blksize = blksize;
+ xpram_hardsect = hardsect;
+
+ PRINT_INFO("initializing: %s\n","");
+ /* check arguments */
+ xpram_major = major;
+ if ( (devs <= 0) || (devs > XPRAM_MAX_DEVS) ) {
+ PRINT_ERR("invalid number %d of devices\n",devs);
+ PRINT_ERR("Giving up xpram\n");
+ return -EINVAL;
+ }
+ xpram_devs = devs;
+ for (i=0; i < xpram_devs; i++) {
+ if ( sizes[i] < 0 ) {
+ PRINT_ERR("Invalid partition size %d kB\n",sizes[i]);
+ PRINT_ERR("Giving up xpram\n");
+ return -EINVAL;
+ } else {
+ xpram_sizes[i] = NEXT4(sizes[i]); /* page align */
+ if ( sizes[i] ) mem_needed += xpram_sizes[i];
+ else mem_auto_no++;
+ }
+ }
+
+ PRINT_DEBUG(" major %d \n", xpram_major);
+ PRINT_INFO(" number of devices (partitions): %d \n", xpram_devs);
+ for (i=0; i < xpram_devs; i++) {
+ if ( sizes[i] )
+ PRINT_INFO(" size of partition %d: %d kB\n", i, xpram_sizes[i]);
+ else
+ PRINT_INFO(" size of partition %d to be set automatically\n",i);
+ }
+ PRINT_DEBUG(" memory needed (for sized partitions): %d kB\n", mem_needed);
+ PRINT_DEBUG(" partitions to be sized automatically: %d\n", mem_auto_no);
+
+#if 0
+ /* Hardsect can't be changed :( */
+ /* I try it any way. Yet I must distinguish
+ * between hardsects (to be changed to 4096)
+ * and soft sectors, hard-coded for buffer
+ * sizes within the requests
+ */
+ if (hardsect != 512) {
+ PRINT_ERR("Can't change hardsect size\n");
+ hardsect = xpram_hardsect = 512;
+ }
+#endif
+ PRINT_INFO(" hardsector size: %dB \n",xpram_hardsect);
+
+ /*
+ * Register your major, and accept a dynamic number
+ */
+ result = register_blkdev(xpram_major, "xpram", &xpram_fops);
+ if (result < 0) {
+ PRINT_ERR("Can't get major %d\n",xpram_major);
+ PRINT_ERR("Giving up xpram\n");
+ return result;
+ }
+ if (xpram_major == 0) xpram_major = result; /* dynamic */
+ major = xpram_major; /* Use `major' later on to save typing */
+
+ result = -ENOMEM; /* for the possible errors */
+
+ /*
+ * measure expanded memory
+ */
+
+ xpram_mem_avail = xpram_size();
+ if (!xpram_mem_avail) {
+ PRINT_ERR("No or not enough expanded memory available\n");
+ PRINT_ERR("Giving up xpram\n");
+ result = -ENODEV;
+ goto fail_malloc;
+ }
+ PRINT_INFO(" %d kB expanded memory found.\n",xpram_mem_avail );
+
+ /*
+ * Assign the other needed values: request, rahead, size, blksize,
+ * hardsect. All the minor devices feature the same value.
+ * Note that `xpram' defines all of them to allow testing non-default
+ * values. A real device could well avoid setting values in global
+ * arrays if it uses the default values.
+ */
+
+ blk_dev[major].request_fn = xpram_request;
+ read_ahead[major] = xpram_rahead;
+
+ /* we want to have XPRAM_UNUSED blocks security buffer between devices */
+ mem_usable=xpram_mem_avail-(XPRAM_UNUSED*(xpram_devs-1));
+ if ( mem_needed > mem_usable ) {
+ PRINT_ERR("Not enough expanded memory available\n");
+ PRINT_ERR("Giving up xpram\n");
+ goto fail_malloc;
+ }
+
+ /*
+ * partitioning:
+ * xpram_sizes[i] != 0: partition i has size xpram_sizes[i] kB
+ * else: all partitions i with xpram_sizes[i] == 0
+ * share the remaining space equally
+ */
+
+ if ( mem_auto_no ) {
+ mem_auto=LAST4((mem_usable-mem_needed)/mem_auto_no);
+ PRINT_INFO(" automatically determined partition size: %d kB\n", mem_auto);
+ for (i=0; i < xpram_devs; i++)
+ if (xpram_sizes[i] == 0) xpram_sizes[i] = mem_auto;
+ }
+ blk_size[major]=xpram_sizes;
+
+ xpram_offsets = kmalloc(xpram_devs * sizeof(int), GFP_KERNEL);
+ if (!xpram_offsets)
+ goto fail_malloc;
+ xpram_offsets[0] = 0;
+ for (i=1; i < xpram_devs; i++)
+ xpram_offsets[i] = xpram_offsets[i-1] + xpram_sizes[i-1] + XPRAM_UNUSED;
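The offset layout computed here places each partition after its predecessor plus an XPRAM_UNUSED guard gap. A standalone sketch of the same arithmetic (GUARD_KB stands in for XPRAM_UNUSED):

```c
/* compute partition start offsets (in kB): each device follows the
 * previous one plus a guard gap, as in xpram_init() above */
#define GUARD_KB 40	/* stands in for XPRAM_UNUSED */

static void compute_offsets(const int sizes_kb[], int offsets_kb[], int ndevs)
{
	int i;

	offsets_kb[0] = 0;
	for (i = 1; i < ndevs; i++)
		offsets_kb[i] = offsets_kb[i-1] + sizes_kb[i-1] + GUARD_KB;
}
```

With sizes {1024, 512, 2048} this produces offsets {0, 1064, 1616}, which is why the driver budgets XPRAM_UNUSED*(xpram_devs-1) kB of expanded storage for the gaps.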
+
+#if 0
+ for (i=0; i < xpram_devs; i++)
+ PRINT_DEBUG(" device(%d) offset = %d kB, size = %d kB\n",i, xpram_offsets[i], xpram_sizes[i]);
+#endif
+
+ xpram_blksizes = kmalloc(xpram_devs * sizeof(int), GFP_KERNEL);
+ if (!xpram_blksizes)
+ goto fail_malloc_blksizes;
+ for (i=0; i < xpram_devs; i++) /* all the same blocksize */
+ xpram_blksizes[i] = xpram_blksize;
+ blksize_size[major]=xpram_blksizes;
+
+ xpram_hardsects = kmalloc(xpram_devs * sizeof(int), GFP_KERNEL);
+ if (!xpram_hardsects)
+ goto fail_malloc_hardsects;
+ for (i=0; i < xpram_devs; i++) /* all the same hardsect */
+ xpram_hardsects[i] = xpram_hardsect;
+ hardsect_size[major]=xpram_hardsects;
+
+ /*
+ * allocate the devices -- we can't have them static, as the number
+ * can be specified at load time
+ */
+
+ xpram_devices = kmalloc(xpram_devs * sizeof (Xpram_Dev), GFP_KERNEL);
+ if (!xpram_devices)
+ goto fail_malloc_devices;
+ memset(xpram_devices, 0, xpram_devs * sizeof (Xpram_Dev));
+ for (i=0; i < xpram_devs; i++) {
+ /* data and usage remain zeroed */
+ xpram_devices[i].size = xpram_sizes[i]; /* size in kB not in bytes */
+ atomic_set(&(xpram_devices[i].usage),0);
+ }
+
+ return 0; /* succeed */
+
+ fail_malloc_blksizes:
+ kfree (xpram_offsets);
+ fail_malloc_hardsects:
+ kfree (xpram_blksizes);
+ blksize_size[major] = NULL;
+ fail_malloc_devices:
+ kfree(xpram_hardsects);
+ hardsect_size[major] = NULL;
+ fail_malloc:
+ read_ahead[major] = 0;
+ blk_dev[major].request_fn = NULL;
+ unregister_blkdev(major, "xpram");
+ return result;
+}
+
+/*
+ * Finally, the module stuff
+ */
+
+int init_module(void)
+{
+ int rc = 0;
+
+ PRINT_INFO ("trying to load module\n");
+ rc = xpram_init ();
+ if (rc == 0) {
+ PRINT_INFO ("Module loaded successfully\n");
+ } else {
+ PRINT_WARN ("Module load returned rc=%d\n", rc);
+ }
+ return rc;
+}
+
+void cleanup_module(void)
+{
+ int i;
+
+ /* first of all, flush it all and reset all the data structures */
+
+
+ for (i=0; i<xpram_devs; i++)
+ fsync_dev(MKDEV(xpram_major, i)); /* flush the devices */
+
+ blk_dev[major].request_fn = NULL;
+ read_ahead[major] = 0;
+ blk_size[major] = NULL;
+ kfree(blksize_size[major]);
+ blksize_size[major] = NULL;
+ kfree(hardsect_size[major]);
+ hardsect_size[major] = NULL;
+ kfree(xpram_offsets);
+
+ /* finally, the usual cleanup */
+ unregister_blkdev(major, "xpram");
+
+ kfree(xpram_devices);
+}
--- /dev/null
+
+/*
+ * xpram.h -- definitions for the xpram block device module
+ *
+ *********/
+
+
+#include <linux/ioctl.h>
+#include <asm-s390/atomic.h>
+#include <linux/major.h>
+
+/* version dependencies have been confined to a separate file */
+
+/*
+ * Macros to help debugging
+ */
+
+#define XPRAM_NAME "xpram" /* name of device/module */
+#define XPRAM_DEVS 1 /* one partition */
+#define XPRAM_RAHEAD 8 /* no real read ahead */
+#define XPRAM_PGSIZE 4096 /* page size of (expanded) memory pages
+ * according to S/390 architecture
+ */
+#define XPRAM_BLKSIZE XPRAM_PGSIZE /* must be equal to page size ! */
+#define XPRAM_HARDSECT XPRAM_PGSIZE /* FIXME -- we have to deal with both
+ * this hard sect size and in some cases
+ * hard coded 512 bytes which I call
+ * soft sects:
+ */
+#define XPRAM_SOFTSECT 512
+#define XPRAM_MAX_DEVS 32 /* maximal number of devices (partitions) */
+#define XPRAM_MAX_DEVS1 33 /* maximal number of devices (partitions) +1 */
+
+/* The following macros depend on the sizes above */
+
+#define XPRAM_KB_IN_PG 4 /* 4 kBs per page */
+#define XPRAM_KB_IN_PG_ORDER 2 /* 2^? kBs per page */
+
+/* Even though XPRAM_HARDSECT is set to 4k, some data structures use a
+ * hard coded 512 byte sector size
+ */
+#define XPRAM_SEC2KB(x) ((x >> 1) + (x & 1)) /* modifier used to compute size
+ in kB from number of sectors */
+#define XPRAM_SEC_IN_PG 8 /* 8 sectors per page */
+#define XPRAM_SEC_IN_PG_ORDER 3 /* 2^? sectors per page */
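XPRAM_SEC2KB converts a count of 512-byte sectors to kilobytes, rounding up for an odd sector count; a quick standalone check of the same expression:

```c
#include <assert.h>

/* convert 512-byte sectors to kB, rounding up (as XPRAM_SEC2KB above) */
#define SEC2KB(x) (((x) >> 1) + ((x) & 1))
```

So 3 sectors count as 2 kB, while 8 sectors (one expanded-storage page) count as 4 kB.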
+
+#define XPRAM_UNUSED 40 /* unused space between devices,
+ * in kB, i.e.
+ * must be a multiple of 4
+ */
+/*
+ * The xpram device is removable: if it is left closed for more than
+ * half a minute, it is removed. Thus use a usage count and a
+ * kernel timer
+ */
+
+typedef struct Xpram_Dev {
+ int size; /* size in KB not in Byte - RB - */
+ atomic_t usage;
+ u8 *data;
+} Xpram_Dev;
+
+void xpram_setup (char *, int *);
+int xpram_init(void);
tty->ldisc.write_wakeup)
(tty->ldisc.write_wakeup)(tty);
wake_up_interruptible(&tty->write_wait);
+ wake_up_interruptible(&tty->poll_wait);
}
}
if (count >= TTY_FLIPBUF_SIZE - tty->flip.count)
count = TTY_FLIPBUF_SIZE - tty->flip.count - 1;
EBCASC(raw->inbuf, count);
- if (count == 2 &&
- strncmp(raw->inbuf, "^c", 2) == 0) {
+ if (count == 2 && (
+ /* hat is 0xb0 in codepage 037 (US etc.) and thus */
+ /* converted to 0x5e in ascii ('^') */
+ strncmp(raw->inbuf, "^c", 2) == 0 ||
+ /* hat is 0xb0 in several other codepages (German,*/
+ /* UK, ...) and thus converted to ascii octal 252 */
+ strncmp(raw->inbuf, "\252c", 2) == 0) ) {
/* emulate a control C = break */
tty->flip.count++;
*tty->flip.flag_buf_ptr++ = TTY_NORMAL;
*tty->flip.char_buf_ptr++ = INTR_CHAR(tty);
tty_flip_buffer_push(raw->tty);
- } else if (count == 2 &&
- strncmp(raw->inbuf, "^d", 2) == 0) {
+ } else if (count == 2 && (
+ strncmp(raw->inbuf, "^d", 2) == 0 ||
+ strncmp(raw->inbuf, "\252d", 2) == 0) ) {
/* emulate a control D = end of file */
tty->flip.count++;
*tty->flip.flag_buf_ptr++ = TTY_NORMAL;
*tty->flip.char_buf_ptr++ = EOF_CHAR(tty);
tty_flip_buffer_push(raw->tty);
- } else if (count == 2 &&
- strncmp(raw->inbuf, "^z", 2) == 0) {
+ } else if (count == 2 && (
+ strncmp(raw->inbuf, "^z", 2) == 0 ||
+ strncmp(raw->inbuf, "\252z", 2) == 0) ) {
/* emulate a control Z = suspend */
tty->flip.count++;
*tty->flip.flag_buf_ptr++ = TTY_NORMAL;
memcpy(tty->flip.char_buf_ptr,
raw->inbuf, count);
if (count < 2 ||
- strncmp(raw->inbuf+count-2, "^n", 2)) {
+ (strncmp(raw->inbuf+count-2, "^n", 2) &&
+ strncmp(raw->inbuf+count-2, "\252n", 2)) ) {
/* don't add the auto \n */
tty->flip.char_buf_ptr[count] = '\n';
memset(tty->flip.flag_buf_ptr,
return -1;
raw->flags |= RAW3215_ACTIVE;
s390irq_spin_lock_irqsave(raw->irq, flags);
+ set_cons_dev(raw->irq);
raw3215_try_io(raw);
s390irq_spin_unlock_irqrestore(raw->irq, flags);
raw = (raw3215_info *) tty->driver_data;
raw3215_flush_buffer(raw);
wake_up_interruptible(&tty->write_wait);
+ wake_up_interruptible(&tty->poll_wait);
if ((tty->flags & (1 << TTY_DO_WRITE_WAKEUP)) &&
tty->ldisc.write_wakeup)
(tty->ldisc.write_wakeup)(tty);
if (MACHINE_IS_VM) {
cpcmd("TERM CONMODE 3215", NULL, 0);
cpcmd("TERM AUTOCR OFF", NULL, 0);
- cpcmd("TERM HOLD OFF", NULL, 0);
- cpcmd("TERM MORE 5 5", NULL, 0);
}
kmem_start = (kmem_start + 7) & -8L;
if (raw->irq != -1) {
register_console(&con3215);
- s390irq_spin_lock(raw->irq);
- set_cons_dev(raw->irq);
- s390irq_spin_unlock(raw->irq);
} else {
kmem_start = (long) raw;
raw3215[0] = NULL;
void hwc_console_write(struct console *, const char *, unsigned int);
kdev_t hwc_console_device(struct console *);
+void hwc_console_unblank (void);
#define HWC_CON_PRINT_HEADER "hwc console driver: "
NULL,
hwc_console_device,
NULL,
- NULL,
+ hwc_console_unblank,
NULL,
CON_PRINTBUFFER,
0,
return MKDEV(hwc_console_major, hwc_console_minor);
}
+void
+hwc_console_unblank (void)
+{
+ hwc_unblank ();
+}
+
#endif
__initfunc(unsigned long hwc_console_init(unsigned long kmem_start))
if (hwc_init(&kmem_start) == 0) {
+ hwc_tty_init ();
+
#ifdef CONFIG_HWC_CONSOLE
register_console(&hwc_console);
#endif
-
- hwc_tty_init();
} else
panic (HWC_CON_PRINT_HEADER "hwc initialisation failed !");
#include <asm/bitops.h>
#include <asm/setup.h>
#include <asm/page.h>
+#include <asm/s390_ext.h>
#ifndef MIN
#define MIN(a,b) ((a<b) ? a : b)
#define MAX_KMEM_PAGES (sizeof(kmem_pages_t) << 3)
#define HWC_TIMER_RUNS 1
-#define FLUSH_HWCBS 2
+#define HWC_FLUSH 2
+#define HWC_INIT 4
+#define HWC_BROKEN 8
+#define HWC_INTERRUPT 16
static struct {
unsigned char flags;
+ hwc_high_level_calls_t *calls;
+
spinlock_t lock;
struct timer_list write_timer;
0,
0,
0,
- 0
+ 0,
+ NULL
};
+static unsigned long cr0 __attribute__ ((aligned (8)));
+static unsigned long cr0_save __attribute__ ((aligned (8)));
+static unsigned char psw_mask __attribute__ ((aligned (8)));
+
#define DELAYED_WRITE 0
#define IMMEDIATE_WRITE 1
-static signed int do_hwc_write(int from_user, const unsigned char *,
+static signed int do_hwc_write (int from_user, unsigned char *,
unsigned int,
unsigned char,
unsigned char);
static asmlinkage int
-internal_print (char write_time, const char *fmt,...)
+internal_print (char write_time, char *fmt,...)
{
va_list args;
int i;
if (page >= hwc_data.kmem_start &&
page < hwc_data.kmem_end) {
- memset((void *) page, 0, PAGE_SIZE);
+/* memset((void *) page, 0, PAGE_SIZE); */
page_nr = (int) ((page - hwc_data.kmem_start) >> 12);
clear_bit(page_nr, &hwc_data.kmem_pages);
release_write_hwcb();
- hwc_data.flags &= ~FLUSH_HWCBS;
+ hwc_data.flags &= ~HWC_FLUSH;
}
static int
{
write_hwcb_t *hwcb;
int retval;
+
+#ifdef DUMP_HWC_WRITE_ERROR
unsigned char *param;
param = ext_int_param();
- if (param != hwc_data.current_hwcb)
+ if (param != hwc_data.current_hwcb) {
+ internal_print (
+ DELAYED_WRITE,
+ HWC_RW_PRINT_HEADER
+ "write_event_mask_2 : "
+ "HWCB address does not fit "
+ "(expected: 0x%x, got: 0x%x).\n",
+ hwc_data.current_hwcb,
+ param);
return -EINVAL;
+ }
+#endif
hwcb = (write_hwcb_t *) OUT_HWCB;
-#ifdef DUMP_HWC_WRITE_ERROR
-#if 0
- if (((unsigned char *) hwcb) != param)
+#ifdef DUMP_HWC_WRITE_LIST_ERROR
+ if (((unsigned char *) hwcb) != hwc_data.current_hwcb) {
__asm__("LHI 1,0xe22\n\t"
"LRA 2,0(0,%0)\n\t"
"LRA 3,0(0,%1)\n\t"
:"a"(OUT_HWCB),
"a"(hwc_data.current_hwcb),
"a"(BUF_HWCB),
- "a"(param)
+ "a" (hwcb)
:"1", "2", "3", "4", "5");
+ }
#endif
- if (hwcb->response_code != 0x0020)
-#if 0
- internal_print(DELAYED_WRITE, HWC_RW_PRINT_HEADER
- "\n************************ error in write_event_data_2()\n"
- "OUT_HWCB: 0x%x\n"
- "BUF_HWCB: 0x%x\n"
- "response_code: 0x%x\n"
- "hwc_data.hwcb_count: %d\n"
- "hwc_data.kmem_pages: 0x%x\n"
- "hwc_data.ioctls.kmem_hwcb: %d\n"
- "hwc_data.ioctls.max_hwcb: %d\n"
- "hwc_data.kmem_start: 0x%x\n"
- "hwc_data.kmem_end: 0x%x\n"
- "*****************************************************\n",
- OUT_HWCB,
- BUF_HWCB,
- hwcb->response_code,
- hwc_data.hwcb_count,
- hwc_data.kmem_pages,
- hwc_data.ioctls.kmem_hwcb,
- hwc_data.ioctls.max_hwcb,
- hwc_data.kmem_start,
- hwc_data.kmem_end);
-#endif
+
+#ifdef DUMP_HWC_WRITE_ERROR
+ if (hwcb->response_code != 0x0020) {
__asm__("LHI 1,0xe21\n\t"
"LRA 2,0(0,%0)\n\t"
"LRA 3,0(0,%1)\n\t"
"a"(BUF_HWCB),
"a"(&(hwc_data.hwcb_count))
:"1", "2", "3", "4", "5");
+ }
#endif
if (hwcb->response_code == 0x0020) {
retval = OUT_HWCB_CHAR;
release_write_hwcb();
- } else
+ } else {
+ internal_print (
+ DELAYED_WRITE,
+ HWC_RW_PRINT_HEADER
+ "write_event_data_2 : "
+ "failed operation "
+ "(response code: 0x%x "
+ "HWCB address: 0x%x).\n",
+ hwcb->response_code,
+ hwcb);
retval = -EIO;
+ }
hwc_data.current_servc = 0;
hwc_data.current_hwcb = NULL;
- if (hwc_data.flags & FLUSH_HWCBS)
+ if (hwc_data.flags & HWC_FLUSH)
flush_hwcbs();
return retval;
static int
do_hwc_write (
int from_user,
- const unsigned char *msg,
+ unsigned char *msg,
unsigned int count,
unsigned char code,
unsigned char write_time)
else
orig_ch = msg[i_msg];
if (code == CODE_EBCDIC)
- ch = _ebcasc[orig_ch];
+ ch = (MACHINE_IS_VM ? _ebcasc[orig_ch] : _ebcasc_500[orig_ch]);
else
ch = orig_ch;
hwc_data.obuf[hwc_data.obuf_start +
obuf_cursor++]
= (code == CODE_ASCII) ?
- _ascebc[orig_ch]:orig_ch;
+ (MACHINE_IS_VM ?
+ _ascebc[orig_ch] :
+ _ascebc_500[orig_ch]) :
+ orig_ch;
}
if (obuf_cursor > obuf_count)
obuf_count = obuf_cursor;
spin_lock_irqsave(&hwc_data.lock, flags);
- retval = do_hwc_write(from_user, msg, count, hwc_data.ioctls.code,
+ retval = do_hwc_write (from_user, (unsigned char *) msg,
+ count, hwc_data.ioctls.code,
IMMEDIATE_WRITE);
spin_unlock_irqrestore(&hwc_data.lock, flags);
if (hwc_data.current_servc != HWC_CMDW_WRITEDATA)
flush_hwcbs();
else
- hwc_data.flags |= FLUSH_HWCBS;
+ hwc_data.flags |= HWC_FLUSH;
}
if (flag & IN_WRITE_BUF) {
hwc_data.obuf_cursor = 0;
if (hwc_data.ioctls.echo)
do_hwc_write(0, start, count, CODE_EBCDIC, IMMEDIATE_WRITE);
- if (hwc_data.ioctls.code == CODE_ASCII)
+ if (hwc_data.ioctls.code == CODE_ASCII) {
+ if (MACHINE_IS_VM)
EBCASC(start, count);
-
- store_hwc_input(start, count);
+ else
+ EBCASC_500 (start, count);
+ }
+ if (hwc_data.calls != NULL)
+ if (hwc_data.calls->move_input != NULL)
+ (hwc_data.calls->move_input) (start, count);
return count;
}
case 0x60F0 :
case 0x62F0 :
+ internal_print (
+ IMMEDIATE_WRITE,
+ HWC_RW_PRINT_HEADER
+ "unconditional read: "
+ "got interrupt and tried to read input, "
+ "but nothing found (response code=0x%x).\n",
+ hwcb->response_code);
return 0;
case 0x0100 :
unsigned int condition_code;
int retval;
- memcpy(hwc_data.page, &init_hwcb_template, sizeof(init_hwcb_t));
-
condition_code = service_call(HWC_CMDW_WRITEMASK, hwc_data.page);
#ifdef DUMP_HWC_INIT_ERROR
- if (condition_code != HWC_COMMAND_INITIATED)
+ if (condition_code == HWC_NOT_OPERATIONAL)
__asm__("LHI 1,0xe10\n\t"
"L 2,0(0,%0)\n\t"
"LRA 3,0(0,%1)\n\t"
if (hwcb->hwc_receive_mask & ET_PMsgCmd_Mask)
hwc_data.write_prio = 1;
- if (hwcb->hwc_send_mask & ET_OpCmd_Mask)
+ if (hwcb->hwc_send_mask & ET_OpCmd_Mask) {
+ internal_print (DELAYED_WRITE,
+ HWC_RW_PRINT_HEADER
+ "capable of receipt of commands\n");
hwc_data.read_nonprio = 1;
-
- if (hwcb->hwc_send_mask & ET_PMsgCmd_Mask)
+ }
+ if (hwcb->hwc_send_mask & ET_PMsgCmd_Mask) {
+ internal_print (DELAYED_WRITE,
+ HWC_RW_PRINT_HEADER
+ "capable of receipt of priority commands\n");
hwc_data.read_nonprio = 1;
-
+ }
if ((hwcb->response_code != 0x0020) ||
(!hwc_data.write_nonprio) ||
((!hwc_data.read_nonprio) && (!hwc_data.read_prio)))
:"a"(hwcb), "a"(&(hwcb->response_code))
:"1", "2", "3");
#else
- retval = -EIO
+ retval = -EIO;
#endif
hwc_data.current_servc = 0;
return retval;
}
+int
+do_hwc_init (void)
+{
+ int retval;
+
+ memcpy (hwc_data.page, &init_hwcb_template, sizeof (init_hwcb_t));
+
+ do {
+
+ retval = write_event_mask_1 ();
+
+ if (retval == -EBUSY) {
+
+ hwc_data.flags |= HWC_INIT;
+
+ asm volatile ("STCTL 0,0,%0":"=m" (cr0));
+ cr0_save = cr0;
+ cr0 |= 0x00000200;
+ cr0 &= 0xFFFFF3AC;
+ asm volatile ("LCTL 0,0,%0"::"m" (cr0):"memory");
+
+ asm volatile ("STOSM %0,0x01"
+ :"=m" (psw_mask)::"memory");
+
+ while (!(hwc_data.flags & HWC_INTERRUPT))
+ barrier ();
+
+ asm volatile ("STNSM %0,0xFE"
+ :"=m" (psw_mask)::"memory");
+
+ asm volatile ("LCTL 0,0,%0"
+ ::"m" (cr0_save):"memory");
+
+ hwc_data.flags &= ~HWC_INIT;
+ }
+ } while (retval == -EBUSY);
+
+ if (retval == -EIO) {
+ hwc_data.flags |= HWC_BROKEN;
+ printk (HWC_RW_PRINT_HEADER "HWC not operational\n");
+ }
+ return retval;
+}
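The new do_hwc_init() above retries write_event_mask_1() for as long as it returns -EBUSY, temporarily enabling external interrupts via control register 0 and spinning until the interrupt handler sets HWC_INTERRUPT. A minimal, hardware-free sketch of that retry-until-ready control flow (all names and error values here are stand-ins, not the driver's):

```c
#include <assert.h>

/* Hypothetical stand-ins for the driver's flag bits and error codes. */
enum { HWC_INIT = 1, HWC_INTERRUPT = 2, HWC_BROKEN = 4 };
#define SIM_EBUSY 16
#define SIM_EIO    5

static int flags;
static int busy_left;		/* how often the fake hardware reports busy */

/* Stand-in for write_event_mask_1(): busy a few times, then succeed.
 * The interrupt handler's side effect is folded in for the sketch. */
static int write_event_mask_sim(void)
{
	if (busy_left > 0) {
		busy_left--;
		flags |= HWC_INTERRUPT;	/* as if do_hwc_interrupt() had run */
		return -SIM_EBUSY;
	}
	return 0;
}

/* Mirrors the do { ... } while (retval == -EBUSY) shape of do_hwc_init(). */
int hwc_init_sim(void)
{
	int retval;

	do {
		retval = write_event_mask_sim();
		if (retval == -SIM_EBUSY) {
			flags |= HWC_INIT;
			while (!(flags & HWC_INTERRUPT))
				;	/* real driver: barrier() busy-wait */
			flags &= ~(HWC_INIT | HWC_INTERRUPT);
		}
	} while (retval == -SIM_EBUSY);

	if (retval == -SIM_EIO)
		flags |= HWC_BROKEN;
	return retval;
}
```

In the real function the spin happens with external interrupts enabled and the original control register 0 restored afterwards; only the loop structure is modeled here.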
+
+void do_hwc_interrupt (struct pt_regs *regs, __u16 code);
+
int
hwc_init (unsigned long *kmem_start)
{
int retval;
+
#ifdef BUFFER_STRESS_TEST
init_hwcb_t *hwcb;
#endif
-#ifdef CONFIG_3215
- if (MACHINE_IS_VM)
- return kmem_start;
-#endif
+ if (register_external_interrupt (0x2401, do_hwc_interrupt) != 0)
+ panic ("Couldn't request external interrupts 0x2401");
spin_lock_init(&hwc_data.lock);
- retval = write_event_mask_1();
- if (retval < 0)
- return retval;
-
#ifdef USE_VM_DETECTION
if (MACHINE_IS_VM) {
*kmem_start += hwc_data.ioctls.kmem_hwcb * PAGE_SIZE;
hwc_data.kmem_end = *kmem_start - 1;
+ retval = do_hwc_init ();
+
ctl_set_bit(0, 9);
#ifdef BUFFER_STRESS_TEST
#endif
- return retval;
+ return /*retval */ 0;
+}
+
+signed int
+hwc_register_calls (hwc_high_level_calls_t * calls)
+{
+ if (calls == NULL)
+ return -EINVAL;
+
+ if (hwc_data.calls != NULL)
+ return -EBUSY;
+
+ hwc_data.calls = calls;
+ return 0;
+}
+
+signed int
+hwc_unregister_calls (hwc_high_level_calls_t * calls)
+{
+ if (hwc_data.calls == NULL)
+ return -EINVAL;
+
+ if (calls != hwc_data.calls)
+ return -EINVAL;
+
+ hwc_data.calls = NULL;
+ return 0;
}
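hwc_register_calls()/hwc_unregister_calls() implement a single-slot callback registration: NULL pointers are rejected with -EINVAL, a second registration fails with -EBUSY, and only the current owner may unregister. A self-contained sketch of that contract (simulated errno values, names modeled on the driver's):

```c
#include <assert.h>
#include <stddef.h>

#define SIM_EINVAL 22
#define SIM_EBUSY  16

/* Same shape as hwc_high_level_calls_t in hwc_rw.h. */
typedef struct {
	void (*move_input) (unsigned char *, unsigned int);
	void (*wake_up) (void);
} calls_t;

static calls_t *registered;	/* the single slot (hwc_data.calls) */

int register_calls(calls_t * calls)
{
	if (calls == NULL)
		return -SIM_EINVAL;	/* refuse a NULL vector */
	if (registered != NULL)
		return -SIM_EBUSY;	/* slot already taken */
	registered = calls;
	return 0;
}

int unregister_calls(calls_t * calls)
{
	if (registered == NULL || calls != registered)
		return -SIM_EINVAL;	/* only the owner may unregister */
	registered = NULL;
	return 0;
}
```

The interrupt path then calls through the registered vector only after checking both the struct pointer and the individual function pointers for NULL, as do_hwc_interrupt() does above.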
void
-do_hwc_interrupt (void)
+do_hwc_interrupt (struct pt_regs *regs, __u16 code)
{
+ if (hwc_data.flags & HWC_INIT) {
+
+ hwc_data.flags |= HWC_INTERRUPT;
+ } else if (hwc_data.flags & HWC_BROKEN) {
+
+ if (!do_hwc_init ()) {
+ hwc_data.flags &= ~HWC_BROKEN;
+ internal_print (DELAYED_WRITE,
+ HWC_RW_PRINT_HEADER
+ "delayed HWC setup after"
+ " temporary breakdown\n");
+ }
+ } else {
spin_lock(&hwc_data.lock);
if (!hwc_data.current_servc) {
write_event_data_1();
}
+ if (hwc_data.calls != NULL)
+ if (hwc_data.calls->wake_up != NULL)
+ (hwc_data.calls->wake_up) ();
+ spin_unlock (&hwc_data.lock);
+ }
+}
- wake_up_hwc_tty();
+void
+hwc_unblank (void)
+{
+ spin_lock (&hwc_data.lock);
spin_unlock(&hwc_data.lock);
+
+ asm volatile ("STCTL 0,0,%0":"=m" (cr0));
+ cr0_save = cr0;
+ cr0 |= 0x00000200;
+ cr0 &= 0xFFFFF3AC;
+ asm volatile ("LCTL 0,0,%0"::"m" (cr0):"memory");
+
+ asm volatile ("STOSM %0,0x01":"=m" (psw_mask)::"memory");
+
+ while (ALL_HWCB_CHAR)
+ barrier ();
+
+ asm volatile ("STNSM %0,0xFE":"=m" (psw_mask)::"memory");
+
+ asm volatile ("LCTL 0,0,%0"::"m" (cr0_save):"memory");
}
int
#include <linux/ioctl.h>
-#ifndef __HWC_RW_C__
-
-extern int hwc_init(unsigned long *);
-
-extern int hwc_write(int from_user, const unsigned char *, unsigned int);
-
-extern unsigned int hwc_chars_in_buffer(unsigned char);
-
-extern unsigned int hwc_write_room(unsigned char);
-
-extern void hwc_flush_buffer(unsigned char);
-
-extern signed int hwc_ioctl(unsigned int, unsigned long);
-
-extern void do_hwc_interrupt(void);
-
-extern int hwc_printk(const char *, ...);
-
-#else
-
-extern void store_hwc_input(unsigned char*, unsigned int);
+typedef struct {
-extern void wake_up_hwc_tty(void);
+ void (*move_input) (unsigned char *, unsigned int);
-#endif
+ void (*wake_up) (void);
+} hwc_high_level_calls_t;
#define IN_HWCB 1
#define IN_WRITE_BUF 2
#define CODE_ASCII 0x0
#define CODE_EBCDIC 0x1
+#ifndef __HWC_RW_C__
+
+extern int hwc_init (unsigned long *);
+
+extern int hwc_write (int from_user, const unsigned char *, unsigned int);
+
+extern unsigned int hwc_chars_in_buffer (unsigned char);
+
+extern unsigned int hwc_write_room (unsigned char);
+
+extern void hwc_flush_buffer (unsigned char);
+
+extern void hwc_unblank (void);
+
+extern signed int hwc_ioctl (unsigned int, unsigned long);
+
+extern void do_hwc_interrupt (struct pt_regs *, __u16);
+
+extern int hwc_printk (const char *,...);
+
+extern signed int hwc_register_calls (hwc_high_level_calls_t *);
+
+extern signed int hwc_unregister_calls (hwc_high_level_calls_t *);
+
+#endif
+
#endif
#include <asm/uaccess.h>
#include "hwc_rw.h"
+#include "hwc_tty.h"
#define HWC_TTY_PRINT_HEADER "hwc tty driver: "
unsigned short int buf_count;
spinlock_t lock;
+
+ hwc_high_level_calls_t calls;
+
+ hwc_tty_ioctl_t ioctl;
} hwc_tty_data_struct;
-static hwc_tty_data_struct hwc_tty_data;
+static hwc_tty_data_struct hwc_tty_data =
+{ /* NULL/0 */ };
static struct tty_driver hwc_tty_driver;
static struct tty_struct * hwc_tty_table[1];
static struct termios * hwc_tty_termios[1];
extern struct termios tty_std_termios;
+void hwc_tty_wake_up (void);
+void hwc_tty_input (unsigned char *, unsigned int);
+
static int
hwc_tty_open (struct tty_struct *tty,
struct file *filp)
hwc_tty_data.tty = tty;
tty->low_latency = 0;
+ hwc_tty_data.calls.wake_up = hwc_tty_wake_up;
+ hwc_tty_data.calls.move_input = hwc_tty_input;
+ hwc_register_calls (&(hwc_tty_data.calls));
+
return 0;
}
-void
-wake_up_hwc_tty (void)
-{
- if ((hwc_tty_data.tty->flags & (1 << TTY_DO_WRITE_WAKEUP)) &&
- hwc_tty_data.tty->ldisc.write_wakeup)
- (hwc_tty_data.tty->ldisc.write_wakeup)(hwc_tty_data.tty);
- wake_up_interruptible(&hwc_tty_data.tty->write_wait);
-}
+#if 0
static void
hwc_tty_close (struct tty_struct *tty,
return;
}
hwc_tty_data.tty = NULL;
+
+ hwc_unregister_calls (&(hwc_tty_data.calls));
}
+#endif
static int
hwc_tty_write_room (struct tty_struct *tty)
static void
hwc_tty_flush_buffer (struct tty_struct *tty)
{
- wake_up_hwc_tty();
+ hwc_tty_wake_up ();
}
static int
unsigned int cmd,
unsigned long arg)
{
+ unsigned long count;
+
if (tty->flags & (1 << TTY_IO_ERROR))
return -EIO;
+ switch (cmd) {
+ case TIOCHWCTTYSINTRC:
+ count = strlen_user ((const char *) arg);
+ if (count > HWC_TTY_MAX_CNTL_SIZE)
+ return -EINVAL;
+ strncpy_from_user (hwc_tty_data.ioctl.intr_char,
+ (const char *) arg, count);
+
+ hwc_tty_data.ioctl.intr_char_size = count - 1;
+ return count;
+
+ case TIOCHWCTTYGINTRC:
+ return copy_to_user ((void *) arg,
+ (const void *) hwc_tty_data.ioctl.intr_char,
+ (long) hwc_tty_data.ioctl.intr_char_size);
+
+ default:
return hwc_ioctl(cmd, arg);
}
+}
void
-store_hwc_input (unsigned char *buf, unsigned int count)
+hwc_tty_wake_up (void)
+{
+ if (hwc_tty_data.tty == NULL)
+ return;
+ if ((hwc_tty_data.tty->flags & (1 << TTY_DO_WRITE_WAKEUP)) &&
+ hwc_tty_data.tty->ldisc.write_wakeup)
+ (hwc_tty_data.tty->ldisc.write_wakeup) (hwc_tty_data.tty);
+ wake_up_interruptible (&hwc_tty_data.tty->write_wait);
+ wake_up_interruptible (&hwc_tty_data.tty->poll_wait);
+}
+
+void
+hwc_tty_input (unsigned char *buf, unsigned int count)
{
struct tty_struct *tty = hwc_tty_data.tty;
+#if 0
+
+ if (tty != NULL) {
+
+ if (count == 2 && (
+
+ strncmp (buf, "^c", 2) == 0 ||
+
+ strncmp (buf, "\0252c", 2) == 0)) {
+ tty->flip.count++;
+ *tty->flip.flag_buf_ptr++ = TTY_NORMAL;
+ *tty->flip.char_buf_ptr++ = INTR_CHAR (tty);
+ } else if (count == 2 && (
+ strncmp (buf, "^d", 2) == 0 ||
+ strncmp (buf, "\0252d", 2) == 0)) {
+ tty->flip.count++;
+ *tty->flip.flag_buf_ptr++ = TTY_NORMAL;
+ *tty->flip.char_buf_ptr++ = EOF_CHAR (tty);
+ } else if (count == 2 && (
+ strncmp (buf, "^z", 2) == 0 ||
+ strncmp (buf, "\0252z", 2) == 0)) {
+ tty->flip.count++;
+ *tty->flip.flag_buf_ptr++ = TTY_NORMAL;
+ *tty->flip.char_buf_ptr++ = SUSP_CHAR (tty);
+ } else {
+
+ memcpy (tty->flip.char_buf_ptr, buf, count);
+ if (count < 2 || (
+ strncmp (buf + count - 2, "^n", 2) ||
+ strncmp (buf + count - 2, "\0252n", 2))) {
+ tty->flip.char_buf_ptr[count] = '\n';
+ count++;
+ } else
+ count -= 2;
+ memset (tty->flip.flag_buf_ptr, TTY_NORMAL, count);
+ tty->flip.char_buf_ptr += count;
+ tty->flip.flag_buf_ptr += count;
+ tty->flip.count += count;
+ }
+ tty_flip_buffer_push (tty);
+ hwc_tty_wake_up ();
+ }
+#endif
+
if (tty != NULL) {
- if (count == 2 && strncmp(buf, "^c", 2) == 0) {
+ if (count == hwc_tty_data.ioctl.intr_char_size &&
+ strncmp (buf, hwc_tty_data.ioctl.intr_char,
+ hwc_tty_data.ioctl.intr_char_size) == 0) {
tty->flip.count++;
*tty->flip.flag_buf_ptr++ = TTY_NORMAL;
*tty->flip.char_buf_ptr++ = INTR_CHAR(tty);
- } else if (count == 2 && strncmp(buf, "^d", 2) == 0) {
+ } else if (count == 2 && (
+ strncmp (buf, "^d", 2) == 0 ||
+ strncmp (buf, "\0252d", 2) == 0)) {
tty->flip.count++;
*tty->flip.flag_buf_ptr++ = TTY_NORMAL;
*tty->flip.char_buf_ptr++ = EOF_CHAR(tty);
- } else if (count == 2 && strncmp(buf, "^z", 2) == 0) {
+ } else if (count == 2 && (
+ strncmp (buf, "^z", 2) == 0 ||
+ strncmp (buf, "\0252z", 2) == 0)) {
tty->flip.count++;
*tty->flip.flag_buf_ptr++ = TTY_NORMAL;
*tty->flip.char_buf_ptr++ = SUSP_CHAR(tty);
} else {
memcpy(tty->flip.char_buf_ptr, buf, count);
- if (count < 2 ||
- strncmp(buf + count - 2, "^n", 2)) {
+ if (count < 2 || (
+					  strncmp (buf + count - 2, "^n", 2) &&
+ strncmp (buf + count - 2, "\0252n", 2))) {
tty->flip.char_buf_ptr[count] = '\n';
count++;
} else
tty->flip.count += count;
}
tty_flip_buffer_push(tty);
- wake_up_hwc_tty();
+ hwc_tty_wake_up ();
}
}
hwc_tty_init (void)
{
memset (&hwc_tty_driver, 0, sizeof(struct tty_driver));
+ memset (&hwc_tty_data, 0, sizeof (hwc_tty_data_struct));
hwc_tty_driver.magic = TTY_DRIVER_MAGIC;
hwc_tty_driver.driver_name = "tty_hwc";
hwc_tty_driver.name = "ttyS";
--- /dev/null
+/*
+ * drivers/s390/char/hwc_tty.h
+ * interface to the HWC-terminal driver
+ *
+ * S390 version
+ * Copyright (C) 2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Martin Peschke <peschke@fh-brandenburg.de>
+ */
+
+#ifndef __HWC_TTY_H__
+#define __HWC_TTY_H__
+
+#include <linux/ioctl.h>
+
+#define HWC_TTY_MAX_CNTL_SIZE 20
+
+typedef struct {
+ unsigned char intr_char[HWC_TTY_MAX_CNTL_SIZE];
+ unsigned char intr_char_size;
+} hwc_tty_ioctl_t;
+
+static hwc_tty_ioctl_t _ioctl;
+
+#define HWC_TTY_IOCTL_LETTER 'B'
+
+#define TIOCHWCTTYSINTRC _IOW(HWC_TTY_IOCTL_LETTER, 40, _ioctl.intr_char)
+
+#define TIOCHWCTTYGINTRC _IOR(HWC_TTY_IOCTL_LETTER, 41, _ioctl.intr_char)
+
+#endif
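The two ioctls above are built with _IOW/_IOR from letter 'B', numbers 40/41, and the 20-byte intr_char array, so the size field of the command word is sizeof(_ioctl.intr_char) == 20. A sketch of how the command word is packed, assuming the generic Linux _IOC layout (8 bits number, 8 bits type, 14 bits size, 2 bits direction; architectures may differ):

```c
#include <assert.h>

/* Sketch of the generic Linux _IOC bit layout (an assumption: the common
 * 8/8/14/2 split; s390 uses the generic layout, others may not). */
#define SIM_IOC_NRBITS    8
#define SIM_IOC_TYPEBITS  8
#define SIM_IOC_SIZEBITS 14
#define SIM_IOC_WRITE    1u
#define SIM_IOC_READ     2u

static unsigned int sim_ioc(unsigned int dir, unsigned int type,
			    unsigned int nr, unsigned int size)
{
	return (dir << (SIM_IOC_NRBITS + SIM_IOC_TYPEBITS + SIM_IOC_SIZEBITS)) |
	    (size << (SIM_IOC_NRBITS + SIM_IOC_TYPEBITS)) |
	    (type << SIM_IOC_NRBITS) | nr;
}
```

With this layout, TIOCHWCTTYSINTRC would correspond to sim_ioc(SIM_IOC_WRITE, 'B', 40, 20); real code should of course use the _IOW/_IOR macros rather than open-coding the shifts.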
+++ /dev/null
-/*
- * arch/s390/kernel/ebcdic.c
- * ECBDIC -> ASCII, ASCII -> ECBDIC conversion tables.
- *
- * S390 version
- * Copyright (C) 1998 IBM Corporation
- * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
- */
-
-#include <asm/types.h>
-
-/*
- * ASCII -> EBCDIC
- */
-__u8 _ascebc[256] =
-{
- /*00 NL SH SX EX ET NQ AK BL */
- 0x00, 0x01, 0x02, 0x03, 0x37, 0x2D, 0x2E, 0x2F,
- /*08 BS HT LF VT FF CR SO SI */
- 0x16, 0x05, 0x15, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
- /*10 DL D1 D2 D3 D4 NK SN EB */
- 0x10, 0x11, 0x12, 0x13, 0x3C, 0x15, 0x32, 0x26,
- /*18 CN EM SB EC FS GS RS US */
- 0x18, 0x19, 0x3F, 0x27, 0x1C, 0x1D, 0x1E, 0x1F,
- /*20 SP ! " # $ % & ' */
- 0x40, 0x5A, 0x7F, 0x7B, 0x5B, 0x6C, 0x50, 0x7D,
- /*28 ( ) * + , - . / */
- 0x4D, 0x5D, 0x5C, 0x4E, 0x6B, 0x60, 0x4B, 0x61,
- /*30 0 1 2 3 4 5 6 7 */
- 0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7,
- /*38 8 9 : ; < = > ? */
- 0xF8, 0xF9, 0x7A, 0x5E, 0x4C, 0x7E, 0x6E, 0x6F,
- /*40 @ A B C D E F G */
- 0x7C, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7,
- /*48 H I J K L M N O */
- 0xC8, 0xC9, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6,
- /*50 P Q R S T U V W */
- 0xD7, 0xD8, 0xD9, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6,
- /*58 X Y Z [ \ ] ^ _ */
- 0xE7, 0xE8, 0xE9, 0xAD, 0xE0, 0xBD, 0x5F, 0x6D,
- /*60 ` a b c d e f g */
- 0x79, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87,
- /*68 h i j k l m n o */
- 0x88, 0x89, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96,
- /*70 p q r s t u v w */
- 0x97, 0x98, 0x99, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6,
- /*78 x y z { | } ~ DL */
- 0xA7, 0xA8, 0xA9, 0xC0, 0x4F, 0xD0, 0xA1, 0x07,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F,
- 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0xFF
-};
-
-/*
- * EBCDIC -> ASCII
- */
-__u8 _ebcasc[256] =
-{
- /* 0x00 NUL SOH STX ETX *SEL HT *RNL DEL */
- 0x00, 0x01, 0x02, 0x03, 0x07, 0x09, 0x07, 0x7F,
- /* 0x08 -GE -SPS -RPT VT FF CR SO SI */
- 0x07, 0x07, 0x07, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
- /* 0x10 DLE DC1 DC2 DC3 -RES -NL BS -POC
- -ENP ->LF */
- 0x10, 0x11, 0x12, 0x13, 0x07, 0x0A, 0x08, 0x07,
- /* 0x18 CAN EM -UBS -CU1 -IFS -IGS -IRS -ITB
- -IUS */
- 0x18, 0x19, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
- /* 0x20 -DS -SOS FS -WUS -BYP LF ETB ESC
- -INP */
- 0x07, 0x07, 0x1C, 0x07, 0x07, 0x0A, 0x17, 0x1B,
- /* 0x28 -SA -SFE -SM -CSP -MFA ENQ ACK BEL
- -SW */
- 0x07, 0x07, 0x07, 0x07, 0x07, 0x05, 0x06, 0x07,
- /* 0x30 ---- ---- SYN -IR -PP -TRN -NBS EOT */
- 0x07, 0x07, 0x16, 0x07, 0x07, 0x07, 0x07, 0x04,
- /* 0x38 -SBS -IT -RFF -CU3 DC4 NAK ---- SUB */
- 0x07, 0x07, 0x07, 0x07, 0x14, 0x15, 0x07, 0x1A,
- /* 0x40 SP RSP ä ---- */
- 0x20, 0xFF, 0x83, 0x84, 0x85, 0xA0, 0x07, 0x86,
- /* 0x48 . < ( + | */
- 0x87, 0xA4, 0x9B, 0x2E, 0x3C, 0x28, 0x2B, 0x7C,
- /* 0x50 & ---- */
- 0x26, 0x82, 0x88, 0x89, 0x8A, 0xA1, 0x8C, 0x07,
- /* 0x58 ß ! $ * ) ; */
- 0x8D, 0xE1, 0x21, 0x24, 0x2A, 0x29, 0x3B, 0xAA,
- /* 0x60 - / ---- Ä ---- ---- ---- */
- 0x2D, 0x2F, 0x07, 0x8E, 0x07, 0x07, 0x07, 0x8F,
- /* 0x68 ---- , % _ > ? */
- 0x80, 0xA5, 0x07, 0x2C, 0x25, 0x5F, 0x3E, 0x3F,
- /* 0x70 ---- ---- ---- ---- ---- ---- ---- */
- 0x07, 0x90, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
- /* 0x78 * ` : # @ ' = " */
- 0x70, 0x60, 0x3A, 0x23, 0x40, 0x27, 0x3D, 0x22,
- /* 0x80 * a b c d e f g */
- 0x07, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
- /* 0x88 h i ---- ---- ---- */
- 0x68, 0x69, 0xAE, 0xAF, 0x07, 0x07, 0x07, 0xF1,
- /* 0x90 ° j k l m n o p */
- 0xF8, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F, 0x70,
- /* 0x98 q r ---- ---- */
- 0x71, 0x72, 0xA6, 0xA7, 0x91, 0x07, 0x92, 0x07,
- /* 0xA0 ~ s t u v w x */
- 0xE6, 0x7E, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78,
- /* 0xA8 y z ---- ---- ---- ---- */
- 0x79, 0x7A, 0xAD, 0xAB, 0x07, 0x07, 0x07, 0x07,
- /* 0xB0 ^ ---- § ---- */
- 0x5E, 0x9C, 0x9D, 0xFA, 0x07, 0x07, 0x07, 0xAC,
- /* 0xB8 ---- [ ] ---- ---- ---- ---- */
- 0xAB, 0x07, 0x5B, 0x5D, 0x07, 0x07, 0x07, 0x07,
- /* 0xC0 { A B C D E F G */
- 0x7B, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
- /* 0xC8 H I ---- ö ---- */
- 0x48, 0x49, 0x07, 0x93, 0x94, 0x95, 0xA2, 0x07,
- /* 0xD0 } J K L M N O P */
- 0x7D, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F, 0x50,
- /* 0xD8 Q R ---- ü */
- 0x51, 0x52, 0x07, 0x96, 0x81, 0x97, 0xA3, 0x98,
- /* 0xE0 \ S T U V W X */
- 0x5C, 0xF6, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58,
- /* 0xE8 Y Z ---- Ö ---- ---- ---- */
- 0x59, 0x5A, 0xFD, 0x07, 0x99, 0x07, 0x07, 0x07,
- /* 0xF0 0 1 2 3 4 5 6 7 */
- 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
- /* 0xF8 8 9 ---- ---- Ü ---- ---- ---- */
- 0x38, 0x39, 0x07, 0x07, 0x9A, 0x07, 0x07, 0x07
-};
-
-/*
- * EBCDIC (capitals) -> ASCII (small case)
- */
-__u8 _ebcasc_reduce_case[256] =
-{
- /* 0x00 NUL SOH STX ETX *SEL HT *RNL DEL */
- 0x00, 0x01, 0x02, 0x03, 0x07, 0x09, 0x07, 0x7F,
-
- /* 0x08 -GE -SPS -RPT VT FF CR SO SI */
- 0x07, 0x07, 0x07, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
-
- /* 0x10 DLE DC1 DC2 DC3 -RES -NL BS -POC
- -ENP ->LF */
- 0x10, 0x11, 0x12, 0x13, 0x07, 0x0A, 0x08, 0x07,
-
- /* 0x18 CAN EM -UBS -CU1 -IFS -IGS -IRS -ITB
- -IUS */
- 0x18, 0x19, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
-
- /* 0x20 -DS -SOS FS -WUS -BYP LF ETB ESC
- -INP */
- 0x07, 0x07, 0x1C, 0x07, 0x07, 0x0A, 0x17, 0x1B,
-
- /* 0x28 -SA -SFE -SM -CSP -MFA ENQ ACK BEL
- -SW */
- 0x07, 0x07, 0x07, 0x07, 0x07, 0x05, 0x06, 0x07,
-
- /* 0x30 ---- ---- SYN -IR -PP -TRN -NBS EOT */
- 0x07, 0x07, 0x16, 0x07, 0x07, 0x07, 0x07, 0x04,
-
- /* 0x38 -SBS -IT -RFF -CU3 DC4 NAK ---- SUB */
- 0x07, 0x07, 0x07, 0x07, 0x14, 0x15, 0x07, 0x1A,
-
- /* 0x40 SP RSP ä ---- */
- 0x20, 0xFF, 0x83, 0x84, 0x85, 0xA0, 0x07, 0x86,
-
- /* 0x48 . < ( + | */
- 0x87, 0xA4, 0x9B, 0x2E, 0x3C, 0x28, 0x2B, 0x7C,
-
- /* 0x50 & ---- */
- 0x26, 0x82, 0x88, 0x89, 0x8A, 0xA1, 0x8C, 0x07,
-
- /* 0x58 ß ! $ * ) ; */
- 0x8D, 0xE1, 0x21, 0x24, 0x2A, 0x29, 0x3B, 0xAA,
-
- /* 0x60 - / ---- Ä ---- ---- ---- */
- 0x2D, 0x2F, 0x07, 0x84, 0x07, 0x07, 0x07, 0x8F,
-
- /* 0x68 ---- , % _ > ? */
- 0x80, 0xA5, 0x07, 0x2C, 0x25, 0x5F, 0x3E, 0x3F,
-
- /* 0x70 ---- ---- ---- ---- ---- ---- ---- */
- 0x07, 0x90, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
-
- /* 0x78 * ` : # @ ' = " */
- 0x70, 0x60, 0x3A, 0x23, 0x40, 0x27, 0x3D, 0x22,
-
- /* 0x80 * a b c d e f g */
- 0x07, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
-
- /* 0x88 h i ---- ---- ---- */
- 0x68, 0x69, 0xAE, 0xAF, 0x07, 0x07, 0x07, 0xF1,
-
- /* 0x90 ° j k l m n o p */
- 0xF8, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F, 0x70,
-
- /* 0x98 q r ---- ---- */
- 0x71, 0x72, 0xA6, 0xA7, 0x91, 0x07, 0x92, 0x07,
-
- /* 0xA0 ~ s t u v w x */
- 0xE6, 0x7E, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78,
-
- /* 0xA8 y z ---- ---- ---- ---- */
- 0x79, 0x7A, 0xAD, 0xAB, 0x07, 0x07, 0x07, 0x07,
-
- /* 0xB0 ^ ---- § ---- */
- 0x5E, 0x9C, 0x9D, 0xFA, 0x07, 0x07, 0x07, 0xAC,
-
- /* 0xB8 ---- [ ] ---- ---- ---- ---- */
- 0xAB, 0x07, 0x5B, 0x5D, 0x07, 0x07, 0x07, 0x07,
-
- /* 0xC0 { A B C D E F G */
- 0x7B, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
-
- /* 0xC8 H I ---- ö ---- */
- 0x68, 0x69, 0x07, 0x93, 0x94, 0x95, 0xA2, 0x07,
-
- /* 0xD0 } J K L M N O P */
- 0x7D, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F, 0x70,
-
- /* 0xD8 Q R ---- ü */
- 0x71, 0x72, 0x07, 0x96, 0x81, 0x97, 0xA3, 0x98,
-
- /* 0xE0 \ S T U V W X */
- 0x5C, 0xF6, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78,
-
- /* 0xE8 Y Z ---- Ö ---- ---- ---- */
- 0x79, 0x7A, 0xFD, 0x07, 0x94, 0x07, 0x07, 0x07,
-
- /* 0xF0 0 1 2 3 4 5 6 7 */
- 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
-
- /* 0xF8 8 9 ---- ---- Ü ---- ---- ---- */
- 0x38, 0x39, 0x07, 0x07, 0x81, 0x07, 0x07, 0x07
-};
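The tables being removed here back the EBCASC()/ASCEBC() helpers: conversion is a plain byte-wise lookup, each input byte indexing a 256-entry table. A minimal sketch of that mechanism, seeding only a few entries taken from the _ascebc table above ('A' 0x41 -> 0xC1, '0' 0x30 -> 0xF0, space 0x20 -> 0x40):

```c
#include <assert.h>

/* Byte-wise codepage translation, the same idea as the kernel's
 * EBCASC()/ASCEBC() macros applied over the 256-entry tables. */
static void translate(unsigned char *buf, unsigned int count,
		      const unsigned char table[256])
{
	while (count--) {
		*buf = table[*buf];
		buf++;
	}
}
```

Because the operation is a pure table lookup, the VM/ESA versus non-VM distinction introduced in hwc_rw.c above reduces to selecting a different table (_ebcasc vs. _ebcasc_500) before calling the same loop.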
M_OBJS :=
include $(TOPDIR)/Rules.make
+
* CTC / ESCON network driver
*
* S390 version
- * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Copyright (C) 1999 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Dieter Wellerdiek (wel@de.ibm.com)
*
+ * 2.3 Updates Martin Schwidefsky (schwidefsky@de.ibm.com)
+ * Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com)
+ *
*
* Description of the Kernel Parameter
* Normally the CTC driver selects the channels in order (automatic channel
* - Possibility to switch the automatic selection off
* - Minor bug fixes
*/
-
+#include <linux/version.h>
+#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/malloc.h>
#include <linux/errno.h>
struct block *block;
};
+#if LINUX_VERSION_CODE>=0x020300
+typedef struct net_device net_device;
+#else
+typedef struct device net_device;
+typedef struct wait_queue* wait_queue_head_t;
+#define DECLARE_WAITQUEUE(waitqname,waitqtask) struct wait_queue waitqname = {waitqtask, NULL }
+#define init_waitqueue_head(nothing)
+#endif
+
struct channel {
unsigned int devno;
struct buffer *free_anchor;
struct buffer *proc_anchor;
devstat_t *devstat;
- struct device *dev; /* backward pointer to the network device */
- struct wait_queue *wait;
+ net_device *dev; /* backward pointer to the network device */
+ wait_queue_head_t wait;
struct tq_struct tq;
struct timer_list timer;
unsigned long flag_a; /* atomic flags */
struct ctc_priv {
- struct enet_statistics stats;
+ struct net_device_stats stats;
+#if LINUX_VERSION_CODE>=0x02032D
+ int tbusy;
+#endif
struct channel channel[2];
__u16 protocol;
};
struct packet data;
};
+#if LINUX_VERSION_CODE>=0x02032D
+#define ctc_protect_busy(dev) \
+s390irq_spin_lock(((struct ctc_priv *)dev->priv)->channel[WRITE].irq)
+#define ctc_unprotect_busy(dev) \
+s390irq_spin_unlock(((struct ctc_priv *)dev->priv)->channel[WRITE].irq)
+
+#define ctc_protect_busy_irqsave(dev,flags) \
+s390irq_spin_lock_irqsave(((struct ctc_priv *)dev->priv)->channel[WRITE].irq,flags)
+#define ctc_unprotect_busy_irqrestore(dev,flags) \
+s390irq_spin_unlock_irqrestore(((struct ctc_priv *)dev->priv)->channel[WRITE].irq,flags)
+
+static __inline__ void ctc_set_busy(net_device *dev)
+{
+ ((struct ctc_priv *)dev->priv)->tbusy=1;
+ netif_stop_queue(dev);
+}
+
+static __inline__ void ctc_clear_busy(net_device *dev)
+{
+ ((struct ctc_priv *)dev->priv)->tbusy=0;
+ netif_start_queue(dev);
+}
+
+static __inline__ int ctc_check_busy(net_device *dev)
+{
+ eieio();
+ return(((struct ctc_priv *)dev->priv)->tbusy);
+}
+
+
+static __inline__ void ctc_setbit_busy(int nr,net_device *dev)
+{
+ set_bit(nr,&(((struct ctc_priv *)dev->priv)->tbusy));
+ netif_stop_queue(dev);
+}
+
+static __inline__ void ctc_clearbit_busy(int nr,net_device *dev)
+{
+ clear_bit(nr,&(((struct ctc_priv *)dev->priv)->tbusy));
+ if(((struct ctc_priv *)dev->priv)->tbusy==0)
+ netif_start_queue(dev);
+}
+
+static __inline__ int ctc_test_and_setbit_busy(int nr,net_device *dev)
+{
+ netif_stop_queue(dev);
+ return(test_and_set_bit(nr,&((struct ctc_priv *)dev->priv)->tbusy));
+}
+#else
+
+#define ctc_protect_busy(dev)
+#define ctc_unprotect_busy(dev)
+#define ctc_protect_busy_irqsave(dev,flags)
+#define ctc_unprotect_busy_irqrestore(dev,flags)
+
+static __inline__ void ctc_set_busy(net_device *dev)
+{
+ dev->tbusy=1;
+ eieio();
+}
+
+static __inline__ void ctc_clear_busy(net_device *dev)
+{
+ dev->tbusy=0;
+ eieio();
+}
+
+static __inline__ int ctc_check_busy(net_device *dev)
+{
+ eieio();
+ return(dev->tbusy);
+}
+
+
+static __inline__ void ctc_setbit_busy(int nr,net_device *dev)
+{
+ set_bit(nr,(void *)&dev->tbusy);
+}
+
+static __inline__ void ctc_clearbit_busy(int nr,net_device *dev)
+{
+ clear_bit(nr,(void *)&dev->tbusy);
+}
+
+static __inline__ int ctc_test_and_setbit_busy(int nr,net_device *dev)
+{
+ return(test_and_set_bit(nr,(void *)&dev->tbusy));
+}
+#endif
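On 2.3.45+ kernels the driver keeps its own tbusy word and drives netif_stop_queue()/netif_start_queue() from it, so several independent busy reasons (TB_TX, TB_RETRY, TB_NOBUFFER, TB_STOP) can each park the queue and it restarts only when all of them clear. A hardware-free sketch of that multi-reason scheme (queue state reduced to an int for illustration):

```c
#include <assert.h>

/* Hypothetical busy reasons, mirroring TB_TX / TB_RETRY / TB_NOBUFFER. */
enum { TB_TX = 0, TB_RETRY = 1, TB_NOBUFFER = 2 };

static unsigned long tbusy;	/* one bit per reason */
static int queue_stopped;	/* stands in for netif_stop/start_queue() */

static void setbit_busy(int nr)
{
	tbusy |= 1UL << nr;
	queue_stopped = 1;
}

static void clearbit_busy(int nr)
{
	tbusy &= ~(1UL << nr);
	if (tbusy == 0)
		queue_stopped = 0;	/* restart only when every reason is gone */
}

static int test_and_setbit_busy(int nr)
{
	int was_set = (int) ((tbusy >> nr) & 1);
	setbit_busy(nr);
	return was_set;
}
```

The driver additionally wraps these in the write-subchannel irq lock (ctc_protect_busy) when touched from the read interrupt path, and the real bit operations are the atomic set_bit/clear_bit/test_and_set_bit; the sketch shows only the reason-counting logic.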
+
+
+
+
/* Interrupt handler */
static void ctc_irq_handler(int irq, void *initparm, struct pt_regs *regs);
/* Functions for the DEV methods */
-void ctc_setup(char *dev_name, int *ints);
-int ctc_probe(struct device *dev);
+int ctc_probe(net_device *dev);
-static int ctc_open(struct device *dev);
+static int ctc_open(net_device *dev);
static void ctc_timer (struct channel *ctc);
-static int ctc_release(struct device *dev);
-static int ctc_tx(struct sk_buff *skb, struct device *dev);
-static int ctc_change_mtu(struct device *dev, int new_mtu);
-struct net_device_stats* ctc_stats(struct device *dev);
+static int ctc_release(net_device *dev);
+static int ctc_tx(struct sk_buff *skb, net_device *dev);
+static int ctc_change_mtu(net_device *dev, int new_mtu);
+struct net_device_stats* ctc_stats(net_device *dev);
/*
* 0xnnnn is the cu number write
* ctcx can be ctc0 to ctc7 or escon0 to escon7
*/
-void ctc_setup(char *dev_name, int *ints)
+#if LINUX_VERSION_CODE>=0x020300
+static int __init ctc_setup(char *dev_name)
+#else
+__initfunc(void ctc_setup(char *dev_name,int *ints))
+#endif
{
struct adapterlist tmp;
-
+#if LINUX_VERSION_CODE>=0x020300
+ #define CTC_MAX_PARMS 4
+ int ints[CTC_MAX_PARMS+1];
+ get_options(dev_name,CTC_MAX_PARMS,ints);
+ #define ctc_setup_return return(1)
+#else
+ #define ctc_setup_return return
+#endif
ctc_tab_init();
ctc_no_auto = 1;
if (!strcmp(dev_name,"noauto")) {
printk(KERN_INFO "ctc: automatic channel selection deactivated\n");
- return;
+ ctc_setup_return;
}
tmp.devno[WRITE] = -ENODEV;
break;
} else {
printk(KERN_WARNING "%s: wrong Channel protocol type passed\n", dev_name);
- return;
+ ctc_setup_return;
}
break;
default:
 printk(KERN_WARNING "ctc: wrong number of parameters passed\n");
- return;
+ ctc_setup_return;
}
ctc_adapter[extract_channel_media(dev_name)][extract_channel_id(dev_name)] = tmp;
#ifdef DEBUG
printk(DEBUG "%s: protocol=%x read=%04x write=%04x\n",
dev_name, tmp.protocol, tmp.devno[READ], tmp.devno[WRITE]);
#endif
- return;
+ ctc_setup_return;
}
-
+#if LINUX_VERSION_CODE>=0x020300
+__setup("ctc=", ctc_setup);
+#endif
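On the 2.3 path ctc_setup() now parses the "ctc=" boot string itself via get_options(), which walks up to CTC_MAX_PARMS comma-separated integers after the device name and stores the count in ints[0]. A rough userspace stand-in for that parsing convention (signature and count-in-slot-0 behavior modeled on get_options; treat the details as an assumption):

```c
#include <assert.h>
#include <stdlib.h>

/* Rough stand-in for get_options(): parse up to max comma-separated
 * integers (decimal or 0x-prefixed) and put the count into ints[0]. */
static char *parse_options(char *str, int max, int *ints)
{
	int i = 0;

	while (i < max && *str == ',')
		ints[++i] = (int) strtol(str + 1, &str, 0);
	ints[0] = i;
	return str;		/* remainder of the string, if any */
}
```

For a boot parameter such as "ctc0,0x600,0x601,0" the name has already been split off, so the parser sees ",0x600,0x601,0" and yields the read devno, write devno, and protocol, matching the devno/protocol fields that ctc_setup() stores into the adapter list.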
/*
* ctc_probe
* this function is called for each channel network device,
* which is defined in the /init/main.c
*/
-int ctc_probe(struct device *dev)
+int ctc_probe(net_device *dev)
{
int rc;
int c;
*
*/
-static void inline ccw_check_return_code (struct device *dev, int return_code)
+static void inline ccw_check_return_code (net_device *dev, int return_code)
{
if (return_code != 0) {
switch (return_code) {
}
-static void inline ccw_check_unit_check (struct device *dev, char sense)
+static void inline ccw_check_unit_check (net_device *dev, char sense)
{
#ifdef DEBUG
printk(KERN_INFO "%s: Unit Check with sense code: %02x\n",
__u8 flags = 0x00;
struct channel *ctc = NULL;
struct ctc_priv *privptr = NULL;
- struct device *dev = NULL;
+ net_device *dev = NULL;
ccw1_t ccw_set_x_mode[2] = {{CCW_CMD_SET_EXTENDED, CCW_FLAG_SLI | CCW_FLAG_CC, 0, NULL},
{CCW_CMD_NOOP, CCW_FLAG_SLI, 0, NULL}};
}
ctc = (struct channel *) (devstat->intparm);
- dev = (struct device *) ctc->dev;
+ dev = (net_device *) ctc->dev;
privptr = dev->priv;
#ifdef DEBUG
(devstat->ii.sense.data[0] & 0x40) == 0x40 ||
devstat->ii.sense.data[0] == 0 ) {
privptr->stats.rx_errors++;
- set_bit(TB_RETRY, (void *)&dev->tbusy);
+ /* Need protection here cos we are in the read irq */
+ /* handler the tbusy is for the write subchannel */
+ ctc_protect_busy(dev);
+ ctc_setbit_busy(TB_RETRY,dev);
+ ctc_unprotect_busy(dev);
init_timer(&ctc->timer);
ctc->timer.function = (void *)ctc_read_retry;
ctc->timer.data = (__u32)ctc;
 if (!(devstat->flag & DEVSTAT_FINAL_STATUS))
return;
-
- clear_bit(TB_RETRY, (void *)&dev->tbusy);
-
+ ctc_protect_busy(dev);
+ ctc_clearbit_busy(TB_RETRY,dev);
+ ctc_unprotect_busy(dev);
ctc_buffer_swap(&ctc->free_anchor, &ctc->proc_anchor);
if (ctc->free_anchor != NULL) {
ctc->proc_anchor->block->length = 0;
ctc_buffer_swap(&ctc->proc_anchor, &ctc->free_anchor);
- clear_bit(TB_NOBUFFER, (void *)&dev->tbusy);
-
+ ctc_clearbit_busy(TB_NOBUFFER,dev);
if (ctc->proc_anchor != NULL) {
#ifdef DEBUG
printk(KERN_DEBUG "%s: IRQ early swap buffer\n",dev->name);
}
if (ctc->free_anchor->block->length != 0) {
- if (test_and_set_bit(TB_TX, (void *)&dev->tbusy) == 0) { /* set transmission to busy */
+ if (ctc_test_and_setbit_busy(TB_TX,dev) == 0) {
+ /* set transmission to busy */
ctc_buffer_swap(&ctc->free_anchor, &ctc->proc_anchor);
- clear_bit(TB_TX, (void *)&dev->tbusy);
+ ctc_clearbit_busy(TB_TX,dev);
#ifdef DEBUG
printk(KERN_DEBUG "%s: last buffer move in IRQ\n",dev->name);
#endif
__u8 flags = 0x00;
__u32 saveflags;
- struct device *dev;
+ net_device *dev;
struct ctc_priv *privptr;
struct packet *lp;
struct sk_buff *skb;
- dev = (struct device *) ctc->dev;
+ dev = (net_device *) ctc->dev;
privptr = (struct ctc_priv *) dev->priv;
#ifdef DEBUG
__u32 parm;
__u8 flags = 0x00;
__u32 saveflags;
- struct device *dev;
+ net_device *dev;
- dev = (struct device *) ctc->dev;
+ dev = (net_device *) ctc->dev;
#ifdef DEBUG
printk(KERN_DEBUG "%s: read retry - state-%02x\n" ,dev->name, ctc->state);
__u32 parm;
__u8 flags = 0x00;
__u32 saveflags;
- struct device *dev;
+ net_device *dev;
- dev = (struct device *) ctc->dev;
+ dev = (net_device *) ctc->dev;
#ifdef DEBUG
printk(KERN_DEBUG "%s: write retry - state-%02x\n" ,dev->name, ctc->state);
* ctc_open
*
*/
-static int ctc_open(struct device *dev)
+static int ctc_open(net_device *dev)
{
int rc;
int i;
__u32 saveflags;
__u32 parm;
struct ctc_priv *privptr;
- struct wait_queue wait = { current, NULL };
+ DECLARE_WAITQUEUE(wait, current);
struct timer_list timer;
- dev->tbusy = 1;
- dev->start = 0;
+ ctc_set_busy(dev);
privptr = (struct ctc_priv *) (dev->priv);
if (rc != 0)
return -ENOMEM;
}
+ init_waitqueue_head(&privptr->channel[i].wait);
privptr->channel[i].tq.next = NULL;
privptr->channel[i].tq.sync = 0;
privptr->channel[i].tq.routine = (void *)(void *)ctc_irq_bh;
}
printk(KERN_INFO "%s: connected with remote side\n",dev->name);
- dev->start = 1;
- dev->tbusy = 0;
+ ctc_clear_busy(dev);
return 0;
}
static void ctc_timer (struct channel *ctc)
{
#ifdef DEBUG
- struct device *dev;
+ net_device *dev;
- dev = (struct device *) ctc->dev;
+ dev = (net_device *) ctc->dev;
printk(KERN_DEBUG "%s: timer return\n" ,dev->name);
#endif
ctc->flag |= CTC_TIMER;
* ctc_release
*
*/
-static int ctc_release(struct device *dev)
+static int ctc_release(net_device *dev)
{
int rc;
int i;
__u32 saveflags;
__u32 parm;
struct ctc_priv *privptr;
- struct wait_queue wait = { current, NULL };
+ DECLARE_WAITQUEUE(wait, current);
privptr = (struct ctc_priv *) dev->priv;
- dev->start = 0;
- set_bit(TB_STOP, (void *)&dev->tbusy);
-
+ ctc_protect_busy_irqsave(dev,saveflags);
+ ctc_setbit_busy(TB_STOP,dev);
+ ctc_unprotect_busy_irqrestore(dev,saveflags);
for (i = 0; i < 2; i++) {
s390irq_spin_lock_irqsave(privptr->channel[i].irq, saveflags);
privptr->channel[i].state = CTC_STOP;
*
*
*/
-static int ctc_tx(struct sk_buff *skb, struct device *dev)
+static int ctc_tx(struct sk_buff *skb, net_device *dev)
{
- int rc;
+ int rc=0,rc2;
__u32 parm;
__u8 flags = 0x00;
__u32 saveflags;
struct ctc_priv *privptr;
struct packet *lp;
+
privptr = (struct ctc_priv *) (dev->priv);
if (skb == NULL) {
return -EIO;
}
- if (dev->tbusy != 0) {
- return -EBUSY;
+ s390irq_spin_lock_irqsave(privptr->channel[WRITE].irq, saveflags);
+ if (ctc_check_busy(dev)) {
+ rc=-EBUSY;
+ goto Done;
}
- if (test_and_set_bit(TB_TX, (void *)&dev->tbusy) != 0) { /* set transmission to busy */
- return -EBUSY;
+ if (ctc_test_and_setbit_busy(TB_TX,dev)) { /* set transmission to busy */
+ rc=-EBUSY;
+ goto Done;
}
if (65535 - privptr->channel[WRITE].free_anchor->block->length - PACKET_HEADER_LENGTH <= skb->len + PACKET_HEADER_LENGTH + 2) {
#ifdef DEBUG
printk(KERN_DEBUG "%s: early swap\n", dev->name);
#endif
- s390irq_spin_lock_irqsave(privptr->channel[WRITE].irq, saveflags);
+
ctc_buffer_swap(&privptr->channel[WRITE].free_anchor, &privptr->channel[WRITE].proc_anchor);
- s390irq_spin_unlock_irqrestore(privptr->channel[WRITE].irq, saveflags);
if (privptr->channel[WRITE].free_anchor == NULL){
- set_bit(TB_NOBUFFER, (void *)&dev->tbusy);
- clear_bit(TB_TX, (void *)&dev->tbusy);
- return -EBUSY;
+ ctc_setbit_busy(TB_NOBUFFER,dev);
+ rc=-EBUSY;
+ goto Done2;
}
}
privptr->channel[WRITE].free_anchor->packets++;
if (test_and_set_bit(0, (void *)&privptr->channel[WRITE].IO_active) == 0) {
- s390irq_spin_lock_irqsave(privptr->channel[WRITE].irq, saveflags);
ctc_buffer_swap(&privptr->channel[WRITE].free_anchor,&privptr->channel[WRITE].proc_anchor);
privptr->channel[WRITE].ccw[1].count = privptr->channel[WRITE].proc_anchor->block->length;
privptr->channel[WRITE].ccw[1].cda = (char *)virt_to_phys(privptr->channel[WRITE].proc_anchor->block);
parm = (__u32) &privptr->channel[WRITE];
- rc = do_IO (privptr->channel[WRITE].irq, &privptr->channel[WRITE].ccw[0], parm, 0xff, flags );
- if (rc != 0)
- ccw_check_return_code(dev, rc);
+ rc2 = do_IO (privptr->channel[WRITE].irq, &privptr->channel[WRITE].ccw[0], parm, 0xff, flags );
+ if (rc2 != 0)
+ ccw_check_return_code(dev, rc2);
dev->trans_start = jiffies;
- s390irq_spin_unlock_irqrestore(privptr->channel[WRITE].irq, saveflags);
}
-
if (privptr->channel[WRITE].free_anchor == NULL)
- set_bit(TB_NOBUFFER, (void *)&dev->tbusy);
-
- clear_bit(TB_TX, (void *)&dev->tbusy);
- return 0;
+ ctc_setbit_busy(TB_NOBUFFER,dev);
+Done2:
+ ctc_clearbit_busy(TB_TX,dev);
+Done:
+ s390irq_spin_unlock_irqrestore(privptr->channel[WRITE].irq, saveflags);
+ return(rc);
}
* 576 to 65527 for OS/390
*
*/
-static int ctc_change_mtu(struct device *dev, int new_mtu)
+static int ctc_change_mtu(net_device *dev, int new_mtu)
{
if ((new_mtu < 576) || (new_mtu > 65528))
return -EINVAL;
* ctc_stats
*
*/
-struct net_device_stats *ctc_stats(struct device *dev)
+struct net_device_stats *ctc_stats(net_device *dev)
{
struct ctc_priv *privptr;
* Network driver for VM using iucv
*
* S390 version
- * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Copyright (C) 1999 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Stefan Hegewald <hegewald@de.ibm.com>
* Hartmut Penner <hpenner@de.ibm.com>
+ *
+ * 2.3 Updates Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com)
+ * Martin Schwidefsky (schwidefsky@de.ibm.com)
+ *
+
*/
#ifndef __KERNEL__
#include <linux/errno.h> /* error codes */
#include <linux/types.h> /* size_t */
#include <linux/interrupt.h> /* mark_bh */
-#include <linux/netdevice.h> /* struct device, and other headers */
-#include <linux/inetdevice.h> /* struct device, and other headers */
+#include <linux/netdevice.h> /* struct net_device, and other headers */
+#include <linux/inetdevice.h> /* struct net_device, and other headers */
#include <linux/if_arp.h>
#include <linux/rtnetlink.h>
#include <linux/ip.h> /* struct iphdr */
#include <asm/checksum.h>
#include <asm/io.h>
#include <asm/string.h>
+#include <asm/s390_ext.h>
#include "iucv.h"
+
+
+
#define DEBUG123
#define MAX_DEVICES 10
static int iucv_pathid[MAX_DEVICES] = {0};
static unsigned char iucv_ext_int_buffer[40] __attribute__((aligned (8))) ={0};
static unsigned char glob_command_buffer[40] __attribute__((aligned (8)));
-struct device iucv_devs[];
+
+#if LINUX_VERSION_CODE>=0x20300
+typedef struct net_device net_device;
+#else
+typedef struct device net_device;
+#endif
+net_device iucv_devs[];
/* This structure is private to each device. It is used to pass */
short len;
};
+
+
+static __inline__ int netif_is_busy(net_device *dev)
+{
+#if LINUX_VERSION_CODE<0x02032D
+ return(dev->tbusy);
+#else
+ return(netif_queue_stopped(dev));
+#endif
+}
+
+
+
+#if LINUX_VERSION_CODE<0x02032D
+#define netif_enter_interrupt(dev) dev->interrupt=1
+#define netif_exit_interrupt(dev) dev->interrupt=0
+#define netif_start(dev) dev->start=1
+#define netif_stop(dev) dev->start=0
+
+static __inline__ void netif_stop_queue(net_device *dev)
+{
+ dev->tbusy=1;
+}
+
+static __inline__ void netif_start_queue(net_device *dev)
+{
+ dev->tbusy=0;
+}
+
+static __inline__ void netif_wake_queue(net_device *dev)
+{
+ dev->tbusy=0;
+ mark_bh(NET_BH);
+}
+
+#else
+#define netif_enter_interrupt(dev)
+#define netif_exit_interrupt(dev)
+#define netif_start(dev)
+#define netif_stop(dev)
+#endif
+
+
+
/*
* Following the iucv primitives
*/
/*--------------------------*/
/* Get device from pathid */
/*--------------------------*/
-struct device * get_device_from_pathid(int pathid)
+net_device * get_device_from_pathid(int pathid)
{
int i;
for (i=0;i<=MAX_DEVICES;i++)
/*--------------------------*/
/* Get device from userid */
/*--------------------------*/
-struct device * get_device_from_userid(char * userid)
+net_device * get_device_from_userid(char * userid)
{
int i;
- struct device * dev;
+ net_device * dev;
struct iucv_priv *privptr;
for (i=0;i<=MAX_DEVICES;i++)
{
/*--------------------------*/
/* Open iucv Device Driver */
/*--------------------------*/
-int iucv_open(struct device *dev)
+int iucv_open(net_device *dev)
{
int rc;
unsigned short iucv_used_pathid;
privptr = (struct iucv_priv *)(dev->priv);
if(privptr->pathid != -1) {
- dev->start = 1;
- dev->tbusy = 0;
+ netif_start(dev);
+ netif_start_queue(dev);
return 0;
}
if ((rc = iucv_connect(privptr->command_buffer,
printk( "iucv: iucv_connect ended with rc: %X\n",rc);
printk( "iucv[%d] pathid %X \n",(int)(dev-iucv_devs),privptr->pathid);
#endif
- dev->start = 1;
- dev->tbusy = 0;
+ netif_start(dev);
+ netif_start_queue(dev);
return 0;
}
/*-----------------------------------------------------------------------*/
/* Receive a packet: retrieve, encapsulate and pass over to upper levels */
/*-----------------------------------------------------------------------*/
-void iucv_rx(struct device *dev, int len, unsigned char *buf)
+void iucv_rx(net_device *dev, int len, unsigned char *buf)
{
struct sk_buff *skb;
/*----------------------------*/
/* handle interrupts */
/*----------------------------*/
-void do_iucv_interrupt(void)
+void do_iucv_interrupt(struct pt_regs *regs, __u16 code)
{
int rc;
struct in_device *indev;
struct in_ifaddr *inaddr;
unsigned long len=0;
- struct device *dev=0;
+ net_device *dev=0;
struct iucv_priv *privptr;
INTERRUPT_T * extern_int_buffer;
unsigned short iucv_data_len=0;
/* get own buffer: */
extern_int_buffer = (INTERRUPT_T*) iucv_ext_int_buffer;
- dev->interrupt = 1; /* lock ! */
+ netif_enter_interrupt(dev); /* lock ! */
#ifdef DEBUG
printk( "iucv: do_iucv_interrupt %x received; pathid: %02X\n",
dev = get_device_from_pathid(extern_int_buffer->ippathid);
privptr = (struct iucv_priv *)(dev->priv);
privptr->stats.tx_packets++;
- mark_bh(NET_BH);
- dev->tbusy = 0; /* transmission is no longer busy*/
+ netif_wake_queue(dev); /* transmission is no longer busy*/
break;
iucv_data_len= *((unsigned short*)rcvptr);
} while (iucv_data_len != 0);
- dev->tbusy = 0; /* transmission is no longer busy*/
+ netif_start_queue(dev); /* transmission is no longer busy*/
break;
default:
break;
} /* end switch */
- dev->interrupt = 0; /* release lock*/
+ netif_exit_interrupt(dev); /* release lock*/
#ifdef DEBUG
printk( "iucv: leaving do_iucv_interrupt.\n");
/*-------------------------------------------*/
/* Transmit a packet (low level interface) */
/*-------------------------------------------*/
-int iucv_hw_tx(char *send_buf, int len,struct device *dev)
+int iucv_hw_tx(char *send_buf, int len,net_device *dev)
{
/* This function deals with hw details. */
/* This interface strips off the ethernet header details. */
/*------------------------------------------*/
/* Transmit a packet (called by the kernel) */
/*------------------------------------------*/
-int iucv_tx(struct sk_buff *skb, struct device *dev)
+int iucv_tx(struct sk_buff *skb, net_device *dev)
{
int retval=0;
printk( "iucv: enter iucv_tx, using %s\n",dev->name);
#endif
- if (dev->tbusy) /* shouldn't happen*/
+ if (netif_is_busy(dev)) /* shouldn't happen */
{
privptr->stats.tx_errors++;
dev_kfree_skb(skb);
printk("iucv: %s: transmit access conflict ! leaving iucv_tx.\n", dev->name);
}
- dev->tbusy = 1; /* transmission is busy*/
+ netif_stop_queue(dev); /* transmission is busy*/
dev->trans_start = jiffies; /* save the timestamp*/
/* actual deliver of data is device-specific, and not shown here */
/*---------------*/
/* iucv_release */
/*---------------*/
-int iucv_release(struct device *dev)
+int iucv_release(net_device *dev)
{
int rc =0;
struct iucv_priv *privptr;
privptr = (struct iucv_priv *) (dev->priv);
- dev->start = 0;
- dev->tbusy = 1; /* can't transmit any more*/
+ netif_stop(dev);
+ netif_stop_queue(dev); /* can't transmit any more*/
rc = iucv_sever(privptr->command_buffer);
if (rc!=0)
{
/*-----------------------------------------------*/
/* Configuration changes (passed on by ifconfig) */
/*-----------------------------------------------*/
-int iucv_config(struct device *dev, struct ifmap *map)
+int iucv_config(net_device *dev, struct ifmap *map)
{
if (dev->flags & IFF_UP) /* can't act on a running interface*/
return -EBUSY;
/*----------------*/
/* Ioctl commands */
/*----------------*/
-int iucv_ioctl(struct device *dev, struct ifreq *rq, int cmd)
+int iucv_ioctl(net_device *dev, struct ifreq *rq, int cmd)
{
#ifdef DEBUG
printk( "iucv: device %s; iucv_ioctl\n",dev->name);
/*---------------------------------*/
/* Return statistics to the caller */
/*---------------------------------*/
-struct net_device_stats *iucv_stats(struct device *dev)
+struct net_device_stats *iucv_stats(net_device *dev)
{
struct iucv_priv *priv = (struct iucv_priv *)dev->priv;
#ifdef DEBUG
* IUCV can handle MTU sizes from 576 to approx. 32000
*/
-static int iucv_change_mtu(struct device *dev, int new_mtu)
+static int iucv_change_mtu(net_device *dev, int new_mtu)
{
#ifdef DEBUG
printk( "iucv: device %s; iucv_change_mtu\n",dev->name);
/* The init function (sometimes called probe).*/
/* It is invoked by register_netdev() */
/*--------------------------------------------*/
-int iucv_init(struct device *dev)
+int iucv_init(net_device *dev)
{
int rc;
struct iucv_priv *privptr;
printk( "iucv: iucv_init, device: %s\n",dev->name);
#endif
+ /* request the 0x4000 external interrupt */
+ if (register_external_interrupt(0x4000, do_iucv_interrupt) != 0)
+ panic("Couldn't request external interrupt 0x4000");
+
dev->open = iucv_open;
dev->stop = iucv_release;
dev->set_config = iucv_config;
*
* string passed: iucv=userid1,...,useridn
*/
-
-__initfunc(int iucv_setup(char* str, int *ints))
+#if LINUX_VERSION_CODE>=0x020300
+static int __init iucv_setup(char *str)
+#else
+__initfunc(void iucv_setup(char *str,int *ints))
+#endif
{
int result=0, i=0,j=0, k=0, device_present=0;
char *s = str;
- struct device * dev ={0};
+ net_device * dev ={0};
#ifdef DEBUG
printk( "iucv: start registering device(s)... \n");
#ifdef DEBUG
printk( "iucv: end register devices, %d devices present\n",device_present);
#endif
- return device_present ? 0 : -ENODEV;
+ /* return device_present ? 0 : -ENODEV; */
+#if LINUX_VERSION_CODE>=0x020300
+ return 1;
+#else
+ return;
+#endif
}
-
-
+#if LINUX_VERSION_CODE>=0x020300
+__setup("iucv=", iucv_setup);
+#endif
/*-------------*/
/* The devices */
/*-------------*/
char iucv_names[MAX_DEVICES*8]; /* MAX_DEVICES eight-byte buffers */
-struct device iucv_devs[MAX_DEVICES] = {
+net_device iucv_devs[MAX_DEVICES] = {
{
iucv_names, /* name -- set at load time */
0, 0, 0, 0, /* shmem addresses */
-Change Log
-~~~~~~~~~~
+IBM ServeRAID driver Change Log
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ 4.00.06 - Fix timeout with initial FFDC command
+
+ 4.00.05 - Remove wish_block from init routine
+ - Use linux/spinlock.h instead of asm/spinlock.h for kernels
+ 2.3.18 and later
+ - Sync with other changes from the 2.3 kernels
- 1.00.00 - Initial Public Release
- - Functionally equivalent to 0.99.05
+ 4.00.04 - Rename structures/constants to be prefixed with IPS_
+
+ 4.00.03 - Add alternative passthru interface
+ - Add ability to flash ServeRAID BIOS
+
+ 4.00.02 - Fix problem with PT DCDB with no buffer
+
+ 4.00.01 - Add support for First Failure Data Capture
+
+ 4.00.00 - Add support for ServeRAID 4
+
+ 3.60.02 - Make DCDB direction based on lookup table.
+ - Only allow one DCDB command to a SCSI ID at a time.
+
+ 3.60.01 - Remove bogus error check in passthru routine.
+
+ 3.60.00 - Bump max commands to 128 for use with ServeRAID
+ firmware 3.60.
+ - Change version to 3.60 to coincide with ServeRAID release
+ numbering.
+
+ 1.00.00 - Initial Public Release
+ - Functionally equivalent to 0.99.05
0.99.05 - Fix an oops on certain passthru commands
- 0.99.04 - Fix race condition in the passthru mechanism
+ 0.99.04 - Fix race condition in the passthru mechanism
-- this required the interface to the utilities to change
- - Fix error recovery code
+ - Fix error recovery code
- 0.99.03 - Make interrupt routine handle all completed request on the
- adapter not just the first one
- - Make sure passthru commands get woken up if we run out of
- SCBs
- - Send all of the commands on the queue at once rather than
- one at a time since the card will support it.
+ 0.99.03 - Make interrupt routine handle all completed request on the
+ adapter not just the first one
+ - Make sure passthru commands get woken up if we run out of
+ SCBs
+ - Send all of the commands on the queue at once rather than
+ one at a time since the card will support it.
- 0.99.02 - Added some additional debug statements to print out
+ 0.99.02 - Added some additional debug statements to print out
errors if an error occurs while trying to read/write
to a logical drive (IPS_DEBUG).
- Fixed read/write errors when the adapter is using an
+ - Fixed read/write errors when the adapter is using an
8K stripe size.
+
switch (i) {
#if LINUX_VERSION_CODE >= CVT_LINUX_VERSION(1,3,0)
case 0:
- ok = request_irq(pHCB->HCS_Intr, i91u_intr0, SA_INTERRUPT | SA_SHIRQ, "i91u", NULL);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr0, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
break;
case 1:
- ok = request_irq(pHCB->HCS_Intr, i91u_intr1, SA_INTERRUPT | SA_SHIRQ, "i91u", NULL);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr1, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
break;
case 2:
- ok = request_irq(pHCB->HCS_Intr, i91u_intr2, SA_INTERRUPT | SA_SHIRQ, "i91u", NULL);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr2, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
break;
case 3:
- ok = request_irq(pHCB->HCS_Intr, i91u_intr3, SA_INTERRUPT | SA_SHIRQ, "i91u", NULL);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr3, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
break;
case 4:
- ok = request_irq(pHCB->HCS_Intr, i91u_intr4, SA_INTERRUPT | SA_SHIRQ, "i91u", NULL);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr4, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
break;
case 5:
- ok = request_irq(pHCB->HCS_Intr, i91u_intr5, SA_INTERRUPT | SA_SHIRQ, "i91u", NULL);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr5, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
break;
case 6:
- ok = request_irq(pHCB->HCS_Intr, i91u_intr6, SA_INTERRUPT | SA_SHIRQ, "i91u", NULL);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr6, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
break;
case 7:
- ok = request_irq(pHCB->HCS_Intr, i91u_intr7, SA_INTERRUPT | SA_SHIRQ, "i91u", NULL);
+ ok = request_irq(pHCB->HCS_Intr, i91u_intr7, SA_INTERRUPT | SA_SHIRQ, "i91u", hreg);
break;
default:
i91u_panic("i91u: Too many host adapters\n");
/* 3.60.01 - Remove bogus error check in passthru routine */
/* 3.60.02 - Make DCDB direction based on lookup table */
/* - Only allow one DCDB command to a SCSI ID at a time */
+/* 4.00.00 - Add support for ServeRAID 4 */
+/* 4.00.01 - Add support for First Failure Data Capture */
+/* 4.00.02 - Fix problem with PT DCDB with no buffer */
+/* 4.00.03 - Add alternative passthru interface */
+/* - Add ability to flash ServeRAID BIOS */
+/* 4.00.04 - Rename structures/constants to be prefixed with IPS_ */
+/* 4.00.05 - Remove wish_block from init routine */
+/* - Use linux/spinlock.h instead of asm/spinlock.h for kernels */
+/* 2.3.18 and later */
+/* - Sync with other changes from the 2.3 kernels */
+/* 4.00.06 - Fix timeout with initial FFDC command */
/* */
/*****************************************************************************/
#include "ips.h"
#include <linux/stat.h>
-#include <linux/malloc.h>
#include <linux/config.h>
+
+#if LINUX_VERSION_CODE >= LinuxVersionCode(2,3,18)
+#include <linux/spinlock.h>
+#else
#include <asm/spinlock.h>
+#endif
+
#include <linux/smp.h>
/*
* DRIVER_VER
*/
-#define IPS_VERSION_HIGH "3.60" /* MUST be 4 chars */
-#define IPS_VERSION_LOW ".02 " /* MUST be 4 chars */
+#define IPS_VERSION_HIGH "4.00" /* MUST be 4 chars */
+#define IPS_VERSION_LOW ".06 " /* MUST be 4 chars */
+#if LINUX_VERSION_CODE < LinuxVersionCode(2,3,27)
struct proc_dir_entry proc_scsi_ips = {
#if !defined(PROC_SCSI_IPS)
0, /* Use dynamic inode allocation */
#endif
3, "ips",
S_IFDIR | S_IRUGO | S_IXUGO, 2
-};
-
-#if LINUX_VERSION_CODE < LinuxVersionCode(2,1,93)
- #include <linux/bios32.h>
+};
#endif
#if !defined(__i386__)
#endif
#if IPS_DEBUG >= 12
- #define DBG(s) printk(KERN_NOTICE s "\n"); MDELAY(2*ONE_SEC)
+ #define DBG(s) printk(KERN_NOTICE s "\n"); MDELAY(2*IPS_ONE_SEC)
#elif IPS_DEBUG >= 11
#define DBG(s) printk(KERN_NOTICE s "\n")
#else
static int ips_cmd_timeout = 60;
static int ips_reset_timeout = 60 * 5;
-#define MAX_ADAPTER_NAME 6
+#define MAX_ADAPTER_NAME 7
static char ips_adapter_name[][30] = {
"ServeRAID",
"ServeRAID on motherboard",
"ServeRAID on motherboard",
"ServeRAID 3H",
- "ServeRAID 3L"
+ "ServeRAID 3L",
+ "ServeRAID 4H"
};
/*
* Direction table
*/
static char ips_command_direction[] = {
-IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_OUT,
-IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_UNK,
-IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_IN, IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_OUT,
-IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_OUT,
-IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_OUT,
-IPS_DATA_NONE, IPS_DATA_UNK, IPS_DATA_IN, IPS_DATA_UNK, IPS_DATA_IN,
-IPS_DATA_UNK, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_NONE, IPS_DATA_UNK,
-IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT,
-IPS_DATA_OUT, IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_NONE, IPS_DATA_NONE,
-IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT,
-IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_OUT,
-IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_NONE,
-IPS_DATA_UNK, IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_UNK,
-IPS_DATA_NONE, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_OUT, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_IN, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_UNK, IPS_DATA_IN, IPS_DATA_NONE,
-IPS_DATA_OUT, IPS_DATA_UNK, IPS_DATA_NONE, IPS_DATA_UNK, IPS_DATA_OUT,
-IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_NONE,
-IPS_DATA_UNK, IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_IN,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_OUT,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
-IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK
+IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_OUT,
+IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_UNK,
+IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_IN, IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_OUT,
+IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_OUT,
+IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_OUT,
+IPS_DATA_NONE, IPS_DATA_UNK, IPS_DATA_IN, IPS_DATA_UNK, IPS_DATA_IN,
+IPS_DATA_UNK, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_NONE, IPS_DATA_UNK,
+IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT,
+IPS_DATA_OUT, IPS_DATA_NONE, IPS_DATA_IN, IPS_DATA_NONE, IPS_DATA_NONE,
+IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT,
+IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_OUT,
+IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_IN, IPS_DATA_NONE,
+IPS_DATA_UNK, IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_UNK,
+IPS_DATA_NONE, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_OUT, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_IN, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_NONE, IPS_DATA_NONE, IPS_DATA_UNK, IPS_DATA_IN, IPS_DATA_NONE,
+IPS_DATA_OUT, IPS_DATA_UNK, IPS_DATA_NONE, IPS_DATA_UNK, IPS_DATA_OUT,
+IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_OUT, IPS_DATA_NONE,
+IPS_DATA_UNK, IPS_DATA_IN, IPS_DATA_OUT, IPS_DATA_IN, IPS_DATA_IN,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_OUT,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK,
+IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK, IPS_DATA_UNK
};
/*
void do_ipsintr(int, void *, struct pt_regs *);
static int ips_hainit(ips_ha_t *);
static int ips_map_status(ips_scb_t *, ips_stat_t *);
-static int ips_send(ips_ha_t *, ips_scb_t *, scb_callback);
-static int ips_send_wait(ips_ha_t *, ips_scb_t *, int);
+static int ips_send(ips_ha_t *, ips_scb_t *, ips_scb_callback);
+static int ips_send_wait(ips_ha_t *, ips_scb_t *, int, int);
static int ips_send_cmd(ips_ha_t *, ips_scb_t *);
static int ips_chkstatus(ips_ha_t *);
static int ips_online(ips_ha_t *, ips_scb_t *);
static int ips_issue(ips_ha_t *, ips_scb_t *);
static int ips_isintr(ips_ha_t *);
static int ips_wait(ips_ha_t *, int, int);
-static int ips_write_driver_status(ips_ha_t *);
-static int ips_read_adapter_status(ips_ha_t *);
-static int ips_read_subsystem_parameters(ips_ha_t *);
-static int ips_read_config(ips_ha_t *);
-static int ips_clear_adapter(ips_ha_t *);
-static int ips_readwrite_page5(ips_ha_t *, int);
+static int ips_write_driver_status(ips_ha_t *, int);
+static int ips_read_adapter_status(ips_ha_t *, int);
+static int ips_read_subsystem_parameters(ips_ha_t *, int);
+static int ips_read_config(ips_ha_t *, int);
+static int ips_clear_adapter(ips_ha_t *, int);
+static int ips_readwrite_page5(ips_ha_t *, int, int);
static void ips_intr(ips_ha_t *);
-static void ips_next(ips_ha_t *);
+static void ips_next(ips_ha_t *, int);
static void ipsintr_blocking(ips_ha_t *, struct ips_scb *);
static void ipsintr_done(ips_ha_t *, struct ips_scb *);
static void ips_done(ips_ha_t *, ips_scb_t *);
static void ips_init_scb(ips_ha_t *, ips_scb_t *);
static void ips_freescb(ips_ha_t *, ips_scb_t *);
static void ips_statinit(ips_ha_t *);
+static void ips_fix_ffdc_time(ips_ha_t *, ips_scb_t *, time_t);
+static void ips_ffdc_reset(ips_ha_t *, int);
+static void ips_ffdc_time(ips_ha_t *, int);
static ips_scb_t * ips_getscb(ips_ha_t *);
static inline void ips_putq_scb_head(ips_scb_queue_t *, ips_scb_t *);
static inline void ips_putq_scb_tail(ips_scb_queue_t *, ips_scb_t *);
static inline void ips_putq_wait_tail(ips_wait_queue_t *, Scsi_Cmnd *);
static inline Scsi_Cmnd * ips_removeq_wait_head(ips_wait_queue_t *);
static inline Scsi_Cmnd * ips_removeq_wait(ips_wait_queue_t *, Scsi_Cmnd *);
+static inline void ips_putq_copp_head(ips_copp_queue_t *, ips_copp_wait_item_t *);
+static inline void ips_putq_copp_tail(ips_copp_queue_t *, ips_copp_wait_item_t *);
+static inline ips_copp_wait_item_t * ips_removeq_copp(ips_copp_queue_t *, ips_copp_wait_item_t *);
+static inline ips_copp_wait_item_t * ips_removeq_copp_head(ips_copp_queue_t *);
+static int ips_erase_bios(ips_ha_t *);
+static int ips_program_bios(ips_ha_t *, char *, int);
+static int ips_verify_bios(ips_ha_t *, char *, int);
#ifndef NO_IPS_CMDLINE
static int ips_is_passthru(Scsi_Cmnd *);
static int ips_make_passthru(ips_ha_t *, Scsi_Cmnd *, ips_scb_t *);
static int ips_usrcmd(ips_ha_t *, ips_passthru_t *, ips_scb_t *);
+static int ips_newusrcmd(ips_ha_t *, ips_passthru_t *, ips_scb_t *);
+static void ips_cleanup_passthru(ips_ha_t *, ips_scb_t *);
#endif
int ips_proc_info(char *, char **, off_t, int, int, int);
static int ips_host_info(ips_ha_t *, char *, off_t, int);
-static void copy_mem_info(INFOSTR *, char *, int);
-static int copy_info(INFOSTR *, char *, ...);
+static void copy_mem_info(IPS_INFOSTR *, char *, int);
+static int copy_info(IPS_INFOSTR *, char *, ...);
/*--------------------------------------------------------------------------*/
/* Exported Functions */
/* */
/* Detect and initialize the driver */
/* */
+/* NOTE: this routine is called under the io_request_lock spinlock */
+/* */
/****************************************************************************/
int
ips_detect(Scsi_Host_Template *SHT) {
ips_ha_t *ha;
u32 io_addr;
u16 planer;
+ u8 revision_id;
u8 bus;
u8 func;
u8 irq;
DBG("ips_detect");
SHT->proc_info = ips_proc_info;
+#if LINUX_VERSION_CODE < LinuxVersionCode(2,3,27)
SHT->proc_dir = &proc_scsi_ips;
+#else
+ SHT->proc_name = "ips";
+#endif
#if defined(CONFIG_PCI)
irq = dev->irq;
bus = dev->bus->number;
func = dev->devfn;
- io_addr = dev->base_address[0];
/* get planer status */
if (pci_read_config_word(dev, 0x04, &planer)) {
}
/* check I/O address */
+#if LINUX_VERSION_CODE < LinuxVersionCode(2,3,13)
+ io_addr = dev->base_address[0];
+
if ((io_addr & PCI_BASE_ADDRESS_SPACE) != PCI_BASE_ADDRESS_SPACE_IO)
continue;
+ /* get the BASE IO Address */
+ io_addr &= PCI_BASE_ADDRESS_IO_MASK;
+#else
+ io_addr = dev->resource[0].start;
+
+ if ((dev->resource[0].flags & PCI_BASE_ADDRESS_SPACE) != PCI_BASE_ADDRESS_SPACE_IO)
+ continue;
+#endif
+
/* check to see if an onboard planer controller is disabled */
if (!(planer & 0x000C)) {
continue;
}
- /* get the BASE IO Address */
- io_addr &= PCI_BASE_ADDRESS_IO_MASK;
-
#ifdef IPS_PCI_PROBE_DEBUG
printk(KERN_NOTICE "(%s%d) detect bus %d, func %x, irq %d, io %x\n",
ips_name, index, bus, func, irq, io_addr);
#endif
+ /* get the revision ID */
+ if (pci_read_config_byte(dev, 0x08, &revision_id)) {
+ printk(KERN_WARNING "(%s%d) can't get revision id.\n",
+ ips_name, index);
+
+ continue;
+ }
+
/* found a controller */
sh = scsi_register(SHT, sizeof(ips_ha_t));
continue;
}
- ha = HA(sh);
+ ha = IPS_HA(sh);
memset(ha, 0, sizeof(ips_ha_t));
/* Initialize spin lock */
spin_lock_init(&ha->scb_lock);
spin_lock_init(&ha->copp_lock);
+ spin_lock_init(&ha->ips_lock);
+ spin_lock_init(&ha->copp_waitlist.lock);
+ spin_lock_init(&ha->scb_waitlist.lock);
+ spin_lock_init(&ha->scb_activelist.lock);
ips_sh[ips_num_controllers] = sh;
ips_ha[ips_num_controllers] = ha;
ips_num_controllers++;
ha->active = 1;
- ha->enq = kmalloc(sizeof(ENQCMD), GFP_KERNEL|GFP_DMA);
+ ha->enq = kmalloc(sizeof(IPS_ENQ), GFP_KERNEL|GFP_DMA);
if (!ha->enq) {
printk(KERN_WARNING "(%s%d) Unable to allocate host inquiry structure - skipping controller\n",
continue;
}
- ha->adapt = kmalloc(sizeof(ADAPTER_AREA), GFP_KERNEL|GFP_DMA);
+ ha->adapt = kmalloc(sizeof(IPS_ADAPTER), GFP_KERNEL|GFP_DMA);
if (!ha->adapt) {
printk(KERN_WARNING "(%s%d) Unable to allocate host adapt structure - skipping controller\n",
continue;
}
- ha->conf = kmalloc(sizeof(CONFCMD), GFP_KERNEL|GFP_DMA);
+ ha->conf = kmalloc(sizeof(IPS_CONF), GFP_KERNEL|GFP_DMA);
if (!ha->conf) {
printk(KERN_WARNING "(%s%d) Unable to allocate host conf structure - skipping controller\n",
continue;
}
- ha->nvram = kmalloc(sizeof(NVRAM_PAGE5), GFP_KERNEL|GFP_DMA);
+ ha->nvram = kmalloc(sizeof(IPS_NVRAM_P5), GFP_KERNEL|GFP_DMA);
if (!ha->nvram) {
printk(KERN_WARNING "(%s%d) Unable to allocate host nvram structure - skipping controller\n",
continue;
}
- ha->subsys = kmalloc(sizeof(SUBSYS_PARAM), GFP_KERNEL|GFP_DMA);
+ ha->subsys = kmalloc(sizeof(IPS_SUBSYS), GFP_KERNEL|GFP_DMA);
if (!ha->subsys) {
printk(KERN_WARNING "(%s%d) Unable to allocate host subsystem structure - skipping controller\n",
continue;
}
- ha->dummy = kmalloc(sizeof(BASIC_IO_CMD), GFP_KERNEL|GFP_DMA);
+ ha->dummy = kmalloc(sizeof(IPS_IO_CMD), GFP_KERNEL|GFP_DMA);
if (!ha->dummy) {
printk(KERN_WARNING "(%s%d) Unable to allocate host dummy structure - skipping controller\n",
continue;
}
+ ha->ioctl_data = kmalloc(IPS_IOCTL_SIZE, GFP_KERNEL|GFP_DMA);
+ ha->ioctl_datasize = IPS_IOCTL_SIZE;
+ if (!ha->ioctl_data) {
+ printk(KERN_WARNING "(%s%d) Unable to allocate ioctl data - skipping controller\n",
+ ips_name, index);
+
+ ha->active = 0;
+
+ continue;
+ }
+
/* Store away needed values for later use */
sh->io_port = io_addr;
sh->n_io_port = 255;
sh->cmd_per_lun = sh->hostt->cmd_per_lun;
sh->unchecked_isa_dma = sh->hostt->unchecked_isa_dma;
sh->use_clustering = sh->hostt->use_clustering;
- sh->wish_block = FALSE;
/* Store info in HA structure */
ha->io_addr = io_addr;
ha->irq = irq;
ha->host_num = index;
+ ha->revision_id = revision_id;
/* install the interrupt handler */
if (request_irq(irq, do_ipsintr, SA_SHIRQ, ips_name, ha)) {
}
memset(ha->scbs, 0, sizeof(ips_scb_t));
- ha->scbs->sg_list = (SG_LIST *) kmalloc(sizeof(SG_LIST) * MAX_SG_ELEMENTS, GFP_KERNEL|GFP_DMA);
+ ha->scbs->sg_list = (IPS_SG_LIST *) kmalloc(sizeof(IPS_SG_LIST) * IPS_MAX_SG, GFP_KERNEL|GFP_DMA);
if (!ha->scbs->sg_list) {
/* couldn't allocate a temp SCB S/G list */
printk(KERN_WARNING "(%s%d) unable to allocate CCBs - skipping controller\n",
panic("(%s) release, invalid Scsi_Host pointer.\n",
ips_name);
- ha = HA(sh);
+ ha = IPS_HA(sh);
if (!ha)
return (FALSE);
ips_init_scb(ha, scb);
scb->timeout = ips_cmd_timeout;
- scb->cdb[0] = FLUSH_CACHE;
+ scb->cdb[0] = IPS_CMD_FLUSH;
- scb->cmd.flush_cache.op_code = FLUSH_CACHE;
+ scb->cmd.flush_cache.op_code = IPS_CMD_FLUSH;
scb->cmd.flush_cache.command_id = IPS_COMMAND_ID(ha, scb);
- scb->cmd.flush_cache.state = NORM_STATE;
+ scb->cmd.flush_cache.state = IPS_NORM_STATE;
scb->cmd.flush_cache.reserved = 0;
scb->cmd.flush_cache.reserved2 = 0;
scb->cmd.flush_cache.reserved3 = 0;
printk("(%s%d) Flushing Cache.\n", ips_name, ha->host_num);
/* send command */
- if (ips_send_wait(ha, scb, ips_cmd_timeout) == IPS_FAILURE)
+ if (ips_send_wait(ha, scb, ips_cmd_timeout, IPS_INTR_ON) == IPS_FAILURE)
printk("(%s%d) Incomplete Flush.\n", ips_name, ha->host_num);
printk("(%s%d) Flushing Complete.\n", ips_name, ha->host_num);
int
ips_eh_abort(Scsi_Cmnd *SC) {
ips_ha_t *ha;
+ ips_copp_wait_item_t *item;
DBG("ips_eh_abort");
if (test_and_set_bit(IPS_IN_ABORT, &ha->flags))
return (FAILED);
+ /* See if the command is on the copp queue */
+ IPS_QUEUE_LOCK(&ha->copp_waitlist);
+ item = ha->copp_waitlist.head;
+ while ((item) && (item->scsi_cmd != SC))
+ item = item->next;
+ IPS_QUEUE_UNLOCK(&ha->copp_waitlist);
+
+ if (item) {
+ /* Found it */
+ ips_removeq_copp(&ha->copp_waitlist, item);
+ clear_bit(IPS_IN_ABORT, &ha->flags);
+
+ return (SUCCESS);
+ }
+
/* See if the command is on the wait queue */
- if (ips_removeq_wait(&ha->scb_waitlist, SC) ||
- ips_removeq_wait(&ha->copp_waitlist, SC)) {
+ if (ips_removeq_wait(&ha->scb_waitlist, SC)) {
/* command not sent yet */
clear_bit(IPS_IN_ABORT, &ha->flags);
int
ips_abort(Scsi_Cmnd *SC) {
ips_ha_t *ha;
+ ips_copp_wait_item_t *item;
DBG("ips_abort");
if (test_and_set_bit(IPS_IN_ABORT, &ha->flags))
return (SCSI_ABORT_SNOOZE);
+ /* See if the command is on the copp queue */
+ IPS_QUEUE_LOCK(&ha->copp_waitlist);
+ item = ha->copp_waitlist.head;
+ while ((item) && (item->scsi_cmd != SC))
+ item = item->next;
+ IPS_QUEUE_UNLOCK(&ha->copp_waitlist);
+
+ if (item) {
+ /* Found it */
+ ips_removeq_copp(&ha->copp_waitlist, item);
+ clear_bit(IPS_IN_ABORT, &ha->flags);
+
+ return (SCSI_ABORT_PENDING);
+ }
+
/* See if the command is on the wait queue */
- if (ips_removeq_wait(&ha->scb_waitlist, SC) ||
- ips_removeq_wait(&ha->copp_waitlist, SC)) {
+ if (ips_removeq_wait(&ha->scb_waitlist, SC)) {
/* command not sent yet */
clear_bit(IPS_IN_ABORT, &ha->flags);
/* */
/* Reset the controller (with new eh error code) */
/* */
+/* NOTE: this routine is called under the io_request_lock spinlock */
+/* */
/****************************************************************************/
int
ips_eh_reset(Scsi_Cmnd *SC) {
- ips_ha_t *ha;
- ips_scb_t *scb;
+ u32 cpu_flags;
+ ips_ha_t *ha;
+ ips_scb_t *scb;
+ ips_copp_wait_item_t *item;
DBG("ips_eh_reset");
if (test_and_set_bit(IPS_IN_RESET, &ha->flags))
return (FAILED);
- /* See if the command is on the waiting queue */
- if (ips_removeq_wait(&ha->scb_waitlist, SC) ||
- ips_removeq_wait(&ha->copp_waitlist, SC)) {
+ /* See if the command is on the copp queue */
+ IPS_QUEUE_LOCK(&ha->copp_waitlist);
+ item = ha->copp_waitlist.head;
+ while ((item) && (item->scsi_cmd != SC))
+ item = item->next;
+ IPS_QUEUE_UNLOCK(&ha->copp_waitlist);
+
+ if (item) {
+ /* Found it */
+ ips_removeq_copp(&ha->copp_waitlist, item);
+ clear_bit(IPS_IN_RESET, &ha->flags);
+
+ return (SUCCESS);
+ }
+
+ /* See if the command is on the wait queue */
+ if (ips_removeq_wait(&ha->scb_waitlist, SC)) {
/* command not sent yet */
- clear_bit(IPS_IN_ABORT, &ha->flags);
+ clear_bit(IPS_IN_RESET, &ha->flags);
return (SUCCESS);
}
return (FAILED);
}
- if (!ips_clear_adapter(ha)) {
+ if (!ips_clear_adapter(ha, IPS_INTR_IORL)) {
clear_bit(IPS_IN_RESET, &ha->flags);
return (FAILED);
}
+ /* FFDC */
+ if (ha->subsys->param[3] & 0x300000) {
+ struct timeval tv;
+
+ do_gettimeofday(&tv);
+ IPS_HA_LOCK(cpu_flags);
+ ha->last_ffdc = tv.tv_sec;
+ ha->reset_count++;
+ IPS_HA_UNLOCK(cpu_flags);
+ ips_ffdc_reset(ha, IPS_INTR_IORL);
+ }
+
/* Now fail all of the active commands */
#if IPS_DEBUG >= 1
printk(KERN_WARNING "(%s%d) Failing active commands\n",
ips_name, ha->host_num);
#endif
while ((scb = ips_removeq_scb_head(&ha->scb_activelist))) {
- scb->scsi_cmd->result = DID_RESET << 16;
+ scb->scsi_cmd->result = (DID_RESET << 16) | (SUGGEST_RETRY << 24);
scb->scsi_cmd->scsi_done(scb->scsi_cmd);
ips_freescb(ha, scb);
}
/* Reset the number of active IOCTLs */
+ IPS_HA_LOCK(cpu_flags);
ha->num_ioctl = 0;
+ IPS_HA_UNLOCK(cpu_flags);
clear_bit(IPS_IN_RESET, &ha->flags);
* handler wants to do this and since
* interrupts are turned off here....
*/
- ips_next(ha);
+ ips_next(ha, IPS_INTR_IORL);
}
return (SUCCESS);
/* */
/* Reset the controller */
/* */
+/* NOTE: this routine is called under the io_request_lock spinlock */
+/* */
/****************************************************************************/
int
ips_reset(Scsi_Cmnd *SC, unsigned int flags) {
- ips_ha_t *ha;
- ips_scb_t *scb;
+ u32 cpu_flags;
+ ips_ha_t *ha;
+ ips_scb_t *scb;
+ ips_copp_wait_item_t *item;
DBG("ips_reset");
if (test_and_set_bit(IPS_IN_RESET, &ha->flags))
return (SCSI_RESET_SNOOZE);
- /* See if the command is on the waiting queue */
- if (ips_removeq_wait(&ha->scb_waitlist, SC) ||
- ips_removeq_wait(&ha->copp_waitlist, SC)) {
+ /* See if the command is on the copp queue */
+ IPS_QUEUE_LOCK(&ha->copp_waitlist);
+ item = ha->copp_waitlist.head;
+ while ((item) && (item->scsi_cmd != SC))
+ item = item->next;
+ IPS_QUEUE_UNLOCK(&ha->copp_waitlist);
+
+ if (item) {
+ /* Found it */
+ ips_removeq_copp(&ha->copp_waitlist, item);
+ clear_bit(IPS_IN_RESET, &ha->flags);
+
+ return (SCSI_RESET_SNOOZE);
+ }
+
+ /* See if the command is on the wait queue */
+ if (ips_removeq_wait(&ha->scb_waitlist, SC)) {
/* command not sent yet */
- clear_bit(IPS_IN_ABORT, &ha->flags);
+ clear_bit(IPS_IN_RESET, &ha->flags);
return (SCSI_RESET_SNOOZE);
}
return (SCSI_RESET_ERROR);
}
- if (!ips_clear_adapter(ha)) {
+ if (!ips_clear_adapter(ha, IPS_INTR_IORL)) {
clear_bit(IPS_IN_RESET, &ha->flags);
return (SCSI_RESET_ERROR);
}
+ /* FFDC */
+ if (ha->subsys->param[3] & 0x300000) {
+ struct timeval tv;
+
+ do_gettimeofday(&tv);
+ IPS_HA_LOCK(cpu_flags);
+ ha->last_ffdc = tv.tv_sec;
+ ha->reset_count++;
+ IPS_HA_UNLOCK(cpu_flags);
+ ips_ffdc_reset(ha, IPS_INTR_IORL);
+ }
+
/* Now fail all of the active commands */
#if IPS_DEBUG >= 1
printk(KERN_WARNING "(%s%d) Failing active commands\n",
ips_name, ha->host_num);
#endif
while ((scb = ips_removeq_scb_head(&ha->scb_activelist))) {
- scb->scsi_cmd->result = DID_RESET << 16;
+ scb->scsi_cmd->result = (DID_RESET << 16) | (SUGGEST_RETRY << 24);
scb->scsi_cmd->scsi_done(scb->scsi_cmd);
ips_freescb(ha, scb);
}
/* Reset the number of active IOCTLs */
+ IPS_HA_LOCK(cpu_flags);
ha->num_ioctl = 0;
+ IPS_HA_UNLOCK(cpu_flags);
clear_bit(IPS_IN_RESET, &ha->flags);
* handler wants to do this and since
* interrupts are turned off here....
*/
- ips_next(ha);
+ ips_next(ha, IPS_INTR_IORL);
}
return (SCSI_RESET_SUCCESS);
/* */
/* Send a command to the controller */
/* */
+/* NOTE: */
+/* Linux obtains io_request_lock before calling this function */
+/* */
/****************************************************************************/
int
ips_queue(Scsi_Cmnd *SC, void (*done) (Scsi_Cmnd *)) {
ips_ha_t *ha;
+ u32 cpu_flags;
+#if LINUX_VERSION_CODE < LinuxVersionCode(2,3,1)
+ struct semaphore sem = MUTEX_LOCKED;
+#else
+ DECLARE_MUTEX_LOCKED(sem);
+#endif
DBG("ips_queue");
#ifndef NO_IPS_CMDLINE
if (ips_is_passthru(SC)) {
+ IPS_QUEUE_LOCK(&ha->copp_waitlist);
if (ha->copp_waitlist.count == IPS_MAX_IOCTL_QUEUE) {
+ IPS_QUEUE_UNLOCK(&ha->copp_waitlist);
SC->result = DID_BUS_BUSY << 16;
done(SC);
return (0);
+ } else {
+ IPS_QUEUE_UNLOCK(&ha->copp_waitlist);
}
} else {
#endif
+ IPS_QUEUE_LOCK(&ha->scb_waitlist);
if (ha->scb_waitlist.count == IPS_MAX_QUEUE) {
+ IPS_QUEUE_UNLOCK(&ha->scb_waitlist);
SC->result = DID_BUS_BUSY << 16;
done(SC);
return (0);
+ } else {
+ IPS_QUEUE_UNLOCK(&ha->scb_waitlist);
}
#ifndef NO_IPS_CMDLINE
SC->target,
SC->lun);
#if IPS_DEBUG >= 11
- MDELAY(2*ONE_SEC);
+ MDELAY(2*IPS_ONE_SEC);
#endif
#endif
#ifndef NO_IPS_CMDLINE
- if (ips_is_passthru(SC))
- ips_putq_wait_tail(&ha->copp_waitlist, SC);
+ if (ips_is_passthru(SC)) {
+ ips_copp_wait_item_t *scratch;
+
+ /* allocate space for the scribble */
+ scratch = kmalloc(sizeof(ips_copp_wait_item_t), GFP_KERNEL);
+
+ if (!scratch) {
+ SC->result = DID_ERROR << 16;
+ done(SC);
+
+ return (0);
+ }
+
+ scratch->scsi_cmd = SC;
+ scratch->sem = &sem;
+ scratch->next = NULL;
+
+ ips_putq_copp_tail(&ha->copp_waitlist, scratch);
+ }
else
#endif
ips_putq_wait_tail(&ha->scb_waitlist, SC);
+ IPS_HA_LOCK(cpu_flags);
if ((!test_bit(IPS_IN_INTR, &ha->flags)) &&
(!test_bit(IPS_IN_ABORT, &ha->flags)) &&
- (!test_bit(IPS_IN_RESET, &ha->flags)))
- ips_next(ha);
+ (!test_bit(IPS_IN_RESET, &ha->flags))) {
+ IPS_HA_UNLOCK(cpu_flags);
+ ips_next(ha, IPS_INTR_IORL);
+ } else {
+ IPS_HA_UNLOCK(cpu_flags);
+ }
+
+ /*
+ * If this request was a new style IOCTL wait
+ * for it to finish.
+ *
+ * NOTE: we relinquished the lock above so this should
+ * not cause contention problems
+ */
+ if (ips_is_passthru(SC) && SC->cmnd[0] == IPS_IOCTL_NEW_COMMAND) {
+ char *user_area;
+ char *kern_area;
+ u32 datasize;
+
+ /* free io_request_lock */
+ spin_unlock_irq(&io_request_lock);
+
+ /* wait for the command to finish */
+ down(&sem);
+
+ /* reobtain the lock */
+ spin_lock_irq(&io_request_lock);
+
+ /* command finished -- copy back */
+ user_area = *((char **) &SC->cmnd[4]);
+ kern_area = ha->ioctl_data;
+ datasize = *((u32 *) &SC->cmnd[8]);
+
+ if (copy_to_user(user_area, kern_area, datasize) > 0) {
+#if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) passthru failed - unable to copy out user data\n",
+ ips_name, ha->host_num);
+#endif
+
+ SC->result = DID_ERROR << 16;
+ SC->scsi_done(SC);
+ } else {
+ SC->scsi_done(SC);
+ }
+ }
return (0);
}
if (!ha->active)
return (0);
- if (!ips_read_adapter_status(ha))
+ if (!ips_read_adapter_status(ha, IPS_INTR_ON))
/* ?!?! Enquiry command failed */
return (0);
if ((disk->capacity > 0x400000) &&
((ha->enq->ucMiscFlag & 0x8) == 0)) {
- heads = NORM_MODE_HEADS;
- sectors = NORM_MODE_SECTORS;
+ heads = IPS_NORM_HEADS;
+ sectors = IPS_NORM_SECTORS;
} else {
- heads = COMP_MODE_HEADS;
- sectors = COMP_MODE_SECTORS;
+ heads = IPS_COMP_HEADS;
+ sectors = IPS_COMP_SECTORS;
}
cylinders = disk->capacity / (heads * sectors);
void
do_ipsintr(int irq, void *dev_id, struct pt_regs *regs) {
ips_ha_t *ha;
- unsigned int cpu_flags;
+ u32 cpu_flags;
DBG("do_ipsintr");
clear_bit(IPS_IN_INTR, &ha->flags);
spin_unlock_irqrestore(&io_request_lock, cpu_flags);
+
+ /* start the next command */
+ ips_next(ha, IPS_INTR_ON);
}
/****************************************************************************/
ips_stat_t *sp;
ips_scb_t *scb;
int status;
+ u32 cpu_flags;
DBG("ips_intr");
if (!ha->active)
return;
+ IPS_HA_LOCK(cpu_flags);
while (ips_isintr(ha)) {
sp = &ha->sp;
* use the callback function to finish things up
* NOTE: interrupts are OFF for this
*/
+ IPS_HA_UNLOCK(cpu_flags);
(*scb->callback) (ha, scb);
+ IPS_HA_LOCK(cpu_flags);
}
- clear_bit(IPS_IN_INTR, &ha->flags);
+ IPS_HA_UNLOCK(cpu_flags);
}
/****************************************************************************/
DBG("ips_info");
- ha = HA(SH);
+ ha = IPS_HA(SH);
if (!ha)
return (NULL);
if (!SC)
return (0);
- if ((SC->channel == 0) &&
+ if (((SC->cmnd[0] == IPS_IOCTL_COMMAND) || (SC->cmnd[0] == IPS_IOCTL_NEW_COMMAND)) &&
+ (SC->channel == 0) &&
(SC->target == IPS_ADAPTER_ID) &&
(SC->lun == 0) &&
- (SC->cmnd[0] == 0x0d) &&
(SC->request_bufflen) &&
(!SC->use_sg) &&
(((char *) SC->request_buffer)[0] == 'C') &&
/****************************************************************************/
/* */
-/* Routine Name: ips_is_passthru */
+/* Routine Name: ips_make_passthru */
/* */
/* Routine Description: */
/* */
}
pt = (ips_passthru_t *) SC->request_buffer;
- scb->scsi_cmd = SC;
- if (SC->request_bufflen < (sizeof(ips_passthru_t) + pt->CmdBSize)) {
- /* wrong size */
-#if IPS_DEBUG_PT >= 1
- printk(KERN_NOTICE "(%s%d) Passthru structure wrong size\n",
- ips_name, ha->host_num);
-#endif
-
- return (IPS_FAILURE);
- }
+ /*
+ * Some notes about the passthru interface used
+ *
+ * IF the scsi op_code == 0x0d then we assume
+ * that the data came along with/goes with the
+ * packet we received from the sg driver. In this
+ * case the CmdBSize field of the pt structure is
+ * used for the size of the buffer.
+ *
+ * IF the scsi op_code == 0x81 then we assume that
+ * we will need our own buffer and we will copy the
+ * data to/from the user buffer passed in the scsi
+ * command. The data address resides at offset 4
+ * in the scsi command. The length of the data resides
+ * at offset 8 in the scsi command.
+ */
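+
+ /*
+ * As a hypothetical user-space sketch of the 0x81 (new style)
+ * CDB layout described above -- illustration only, using the
+ * offsets noted, not a verified ioctl interface:
+ *
+ * unsigned char cdb[10];
+ * void *ubuf; u32 ulen;
+ * cdb[0] = 0x81; (new style passthru op code)
+ * memcpy(&cdb[4], &ubuf, 4); (user buffer address at offset 4)
+ * memcpy(&cdb[8], &ulen, 4); (user buffer length at offset 8)
+ */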
switch (pt->CoppCmd) {
case IPS_NUMCTRLS:
SC->result = DID_OK << 16;
return (IPS_SUCCESS_IMM);
+
case IPS_CTRLINFO:
memcpy(SC->request_buffer + sizeof(ips_passthru_t),
ha, sizeof(ips_ha_t));
SC->result = DID_OK << 16;
return (IPS_SUCCESS_IMM);
- case COPPUSRCMD:
- if (ips_usrcmd(ha, pt, scb))
- return (IPS_SUCCESS);
- else
- return (IPS_FAILURE);
+
+ case IPS_COPPUSRCMD:
+ case IPS_COPPIOCCMD:
+ if (SC->cmnd[0] == IPS_IOCTL_COMMAND) {
+ if (SC->request_bufflen < (sizeof(ips_passthru_t) + pt->CmdBSize)) {
+ /* wrong size */
+ #if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) Passthru structure wrong size\n",
+ ips_name, ha->host_num);
+ #endif
+
+ return (IPS_FAILURE);
+ }
+
+ if (ips_usrcmd(ha, pt, scb))
+ return (IPS_SUCCESS);
+ else
+ return (IPS_FAILURE);
+ } else if (SC->cmnd[0] == IPS_IOCTL_NEW_COMMAND) {
+ if (SC->request_bufflen < (sizeof(ips_passthru_t))) {
+ /* wrong size */
+ #if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) Passthru structure wrong size\n",
+ ips_name, ha->host_num);
+ #endif
+
+ return (IPS_FAILURE);
+ }
+
+ if (ips_newusrcmd(ha, pt, scb))
+ return (IPS_SUCCESS);
+ else
+ return (IPS_FAILURE);
+ }
+
break;
- }
+
+ case IPS_FLASHBIOS:
+ /* we must use the new interface */
+ if (SC->cmnd[0] != IPS_IOCTL_NEW_COMMAND)
+ return (IPS_FAILURE);
+
+ /* don't flash the BIOS on future cards */
+ if (ha->revision_id > IPS_REVID_TROMBONE64) {
+#if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) flash bios failed - unsupported controller\n",
+ ips_name, ha->host_num);
+#endif
+ return (IPS_FAILURE);
+ }
+
+ /* copy in the size/buffer ptr from the scsi command */
+ memcpy(&pt->CmdBuffer, &SC->cmnd[4], 4);
+ memcpy(&pt->CmdBSize, &SC->cmnd[8], 4);
+
+ /* must have a buffer */
+ if ((!pt->CmdBSize) || (!pt->CmdBuffer))
+ return (IPS_FAILURE);
+
+ /* make sure buffer is big enough */
+ if (pt->CmdBSize > ha->ioctl_datasize) {
+ void *bigger_struct;
+
+ /* try to allocate a bigger struct */
+ bigger_struct = kmalloc(pt->CmdBSize, GFP_KERNEL|GFP_DMA);
+ if (bigger_struct) {
+ /* free the old memory */
+ kfree(ha->ioctl_data);
+
+ /* use the new memory */
+ ha->ioctl_data = bigger_struct;
+ ha->ioctl_datasize = pt->CmdBSize;
+ } else
+ return (IPS_FAILURE);
+ }
+
+ /* copy in the buffer */
+ if (copy_from_user(ha->ioctl_data, pt->CmdBuffer, pt->CmdBSize) > 0) {
+#if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) flash bios failed - unable to copy user buffer\n",
+ ips_name, ha->host_num);
+#endif
+
+ return (IPS_FAILURE);
+ }
+
+ if (ips_erase_bios(ha)) {
+#if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) flash bios failed - unable to erase flash\n",
+ ips_name, ha->host_num);
+#endif
+
+ return (IPS_FAILURE);
+ }
+
+ if (ips_program_bios(ha, ha->ioctl_data, pt->CmdBSize)) {
+#if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) flash bios failed - unable to program flash\n",
+ ips_name, ha->host_num);
+#endif
+
+ return (IPS_FAILURE);
+ }
+
+ if (ips_verify_bios(ha, ha->ioctl_data, pt->CmdBSize)) {
+#if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) flash bios failed - unable to verify flash\n",
+ ips_name, ha->host_num);
+#endif
+
+ return (IPS_FAILURE);
+ }
+
+ return (IPS_SUCCESS_IMM);
+ } /* end switch */
return (IPS_FAILURE);
}
/****************************************************************************/
static int
ips_usrcmd(ips_ha_t *ha, ips_passthru_t *pt, ips_scb_t *scb) {
- SG_LIST *sg_list;
+ IPS_SG_LIST *sg_list;
DBG("ips_usrcmd");
sg_list = scb->sg_list;
/* copy in the CP */
- memcpy(&scb->cmd, &pt->CoppCP.cmd, sizeof(IOCTL_INFO));
- memcpy(&scb->dcdb, &pt->CoppCP.dcdb, sizeof(DCDB_TABLE));
+ memcpy(&scb->cmd, &pt->CoppCP.cmd, sizeof(IPS_IOCTL_CMD));
+ memcpy(&scb->dcdb, &pt->CoppCP.dcdb, sizeof(IPS_DCDB_TABLE));
/* FIX stuff that might be wrong */
scb->sg_list = sg_list;
scb->scb_busaddr = VIRT_TO_BUS(scb);
- scb->bus = 0;
- scb->target_id = 0;
- scb->lun = 0;
+ scb->bus = scb->scsi_cmd->channel;
+ scb->target_id = scb->scsi_cmd->target;
+ scb->lun = scb->scsi_cmd->lun;
scb->sg_len = 0;
scb->data_len = 0;
scb->flags = 0;
scb->cmd.basic_io.command_id = IPS_COMMAND_ID(ha, scb);
/* we don't support DCDB/READ/WRITE Scatter Gather */
- if ((scb->cmd.basic_io.op_code == READ_SCATTER_GATHER) ||
- (scb->cmd.basic_io.op_code == WRITE_SCATTER_GATHER) ||
- (scb->cmd.basic_io.op_code == DIRECT_CDB_SCATTER_GATHER))
+ if ((scb->cmd.basic_io.op_code == IPS_CMD_READ_SG) ||
+ (scb->cmd.basic_io.op_code == IPS_CMD_WRITE_SG) ||
+ (scb->cmd.basic_io.op_code == IPS_CMD_DCDB_SG))
return (0);
if (pt->CmdBSize) {
scb->data_busaddr = 0L;
}
+ if (scb->cmd.dcdb.op_code == IPS_CMD_DCDB)
+ scb->cmd.dcdb.dcdb_address = VIRT_TO_BUS(&scb->dcdb);
+
if (pt->CmdBSize) {
- if (scb->cmd.dcdb.op_code == DIRECT_CDB) {
- scb->cmd.dcdb.dcdb_address = VIRT_TO_BUS(&scb->dcdb);
+ if (scb->cmd.dcdb.op_code == IPS_CMD_DCDB)
scb->dcdb.buffer_pointer = scb->data_busaddr;
- } else {
+ else
scb->cmd.basic_io.sg_addr = scb->data_busaddr;
+ }
+
+ /* set timeouts */
+ if (pt->TimeOut) {
+ scb->timeout = pt->TimeOut;
+
+ if (pt->TimeOut <= 10)
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT10;
+ else if (pt->TimeOut <= 60)
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT60;
+ else
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT20M;
+ }
+
+ /* assume success */
+ scb->scsi_cmd->result = DID_OK << 16;
+
+ /* success */
+ return (1);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_newusrcmd */
+/* */
+/* Routine Description: */
+/* */
+/* Process a user command and make it ready to send */
+/* */
+/****************************************************************************/
+static int
+ips_newusrcmd(ips_ha_t *ha, ips_passthru_t *pt, ips_scb_t *scb) {
+ IPS_SG_LIST *sg_list;
+ char *user_area;
+ char *kern_area;
+ u32 datasize;
+
+ DBG("ips_newusrcmd");
+
+ if ((!scb) || (!pt) || (!ha))
+ return (0);
+
+ /* Save the S/G list pointer so it doesn't get clobbered */
+ sg_list = scb->sg_list;
+
+ /* copy in the CP */
+ memcpy(&scb->cmd, &pt->CoppCP.cmd, sizeof(IPS_IOCTL_CMD));
+ memcpy(&scb->dcdb, &pt->CoppCP.dcdb, sizeof(IPS_DCDB_TABLE));
+
+ /* FIX stuff that might be wrong */
+ scb->sg_list = sg_list;
+ scb->scb_busaddr = VIRT_TO_BUS(scb);
+ scb->bus = scb->scsi_cmd->channel;
+ scb->target_id = scb->scsi_cmd->target;
+ scb->lun = scb->scsi_cmd->lun;
+ scb->sg_len = 0;
+ scb->data_len = 0;
+ scb->flags = 0;
+ scb->op_code = 0;
+ scb->callback = ipsintr_done;
+ scb->timeout = ips_cmd_timeout;
+ scb->cmd.basic_io.command_id = IPS_COMMAND_ID(ha, scb);
+
+ /* we don't support DCDB/READ/WRITE Scatter Gather */
+ if ((scb->cmd.basic_io.op_code == IPS_CMD_READ_SG) ||
+ (scb->cmd.basic_io.op_code == IPS_CMD_WRITE_SG) ||
+ (scb->cmd.basic_io.op_code == IPS_CMD_DCDB_SG))
+ return (0);
+
+ if (pt->CmdBSize) {
+ if (pt->CmdBSize > ha->ioctl_datasize) {
+ void *bigger_struct;
+
+ /* try to allocate a bigger struct */
+ bigger_struct = kmalloc(pt->CmdBSize, GFP_KERNEL|GFP_DMA);
+ if (bigger_struct) {
+ /* free the old memory */
+ kfree(ha->ioctl_data);
+
+ /* use the new memory */
+ ha->ioctl_data = bigger_struct;
+ ha->ioctl_datasize = pt->CmdBSize;
+ } else
+ return (0);
+
+ }
+
+ scb->data_busaddr = VIRT_TO_BUS(ha->ioctl_data);
+
+ /* Attempt to copy in the data */
+ user_area = *((char **) &scb->scsi_cmd->cmnd[4]);
+ kern_area = ha->ioctl_data;
+ datasize = *((u32 *) &scb->scsi_cmd->cmnd[8]);
+
+ if (copy_from_user(kern_area, user_area, datasize) > 0) {
+#if IPS_DEBUG_PT >= 1
+ printk(KERN_NOTICE "(%s%d) passthru failed - unable to copy in user data\n",
+ ips_name, ha->host_num);
+#endif
+
+ return (0);
}
+
+ } else {
+ scb->data_busaddr = 0L;
+ }
+
+ if (scb->cmd.dcdb.op_code == IPS_CMD_DCDB)
+ scb->cmd.dcdb.dcdb_address = VIRT_TO_BUS(&scb->dcdb);
+
+ if (pt->CmdBSize) {
+ if (scb->cmd.dcdb.op_code == IPS_CMD_DCDB)
+ scb->dcdb.buffer_pointer = scb->data_busaddr;
+ else
+ scb->cmd.basic_io.sg_addr = scb->data_busaddr;
}
/* set timeouts */
scb->timeout = pt->TimeOut;
if (pt->TimeOut <= 10)
- scb->dcdb.cmd_attribute |= TIMEOUT_10;
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT10;
else if (pt->TimeOut <= 60)
- scb->dcdb.cmd_attribute |= TIMEOUT_60;
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT60;
else
- scb->dcdb.cmd_attribute |= TIMEOUT_20M;
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT20M;
}
- /* assume error */
- scb->scsi_cmd->result = DID_ERROR << 16;
+ /* assume success */
+ scb->scsi_cmd->result = DID_OK << 16;
/* success */
return (1);
/* */
/****************************************************************************/
static void
-ips_cleanup_passthru(ips_scb_t *scb) {
+ips_cleanup_passthru(ips_ha_t *ha, ips_scb_t *scb) {
ips_passthru_t *pt;
DBG("ips_cleanup_passthru");
pt = (ips_passthru_t *) scb->scsi_cmd->request_buffer;
/* Copy data back to the user */
- pt->BasicStatus = scb->basic_status;
- pt->ExtendedStatus = scb->extended_status;
-
- scb->scsi_cmd->result = DID_OK << 16;
+ if (scb->scsi_cmd->cmnd[0] == IPS_IOCTL_COMMAND) {
+ /* Copy data back to the user */
+ pt->BasicStatus = scb->basic_status;
+ pt->ExtendedStatus = scb->extended_status;
+ } else {
+ pt->BasicStatus = scb->basic_status;
+ pt->ExtendedStatus = scb->extended_status;
+ up(scb->sem);
+ }
}
#endif
/****************************************************************************/
static int
ips_host_info(ips_ha_t *ha, char *ptr, off_t offset, int len) {
- INFOSTR info;
+ IPS_INFOSTR info;
DBG("ips_host_info");
copy_info(&info, "\nIBM ServeRAID General Information:\n\n");
- if ((ha->nvram->signature == NVRAM_PAGE5_SIGNATURE) &&
+ if ((ha->nvram->signature == IPS_NVRAM_P5_SIG) &&
(ha->nvram->adapter_type != 0))
copy_info(&info, "\tController Type : %s\n", ips_adapter_name[ha->ad_type-1]);
else
copy_info(&info, "\tIO port address : 0x%lx\n", ha->io_addr);
copy_info(&info, "\tIRQ number : %d\n", ha->irq);
- if (ha->nvram->signature == NVRAM_PAGE5_SIGNATURE)
+ if (ha->nvram->signature == IPS_NVRAM_P5_SIG)
copy_info(&info, "\tBIOS Version : %c%c%c%c%c%c%c%c\n",
ha->nvram->bios_high[0], ha->nvram->bios_high[1],
ha->nvram->bios_high[2], ha->nvram->bios_high[3],
/* */
/* Routine Description: */
/* */
-/* Copy data into an INFOSTR structure */
+/* Copy data into an IPS_INFOSTR structure */
/* */
/****************************************************************************/
static void
-copy_mem_info(INFOSTR *info, char *data, int len) {
+copy_mem_info(IPS_INFOSTR *info, char *data, int len) {
DBG("copy_mem_info");
if (info->pos + len > info->length)
/* */
/****************************************************************************/
static int
-copy_info(INFOSTR *info, char *fmt, ...) {
+copy_info(IPS_INFOSTR *info, char *fmt, ...) {
va_list args;
char buf[81];
int len;
/* initialize status queue */
ips_statinit(ha);
+ ha->reset_count = 1;
+
/* Setup HBA ID's */
- if (!ips_read_config(ha)) {
+ if (!ips_read_config(ha, IPS_INTR_IORL)) {
#ifndef NO_IPS_RESET
+ ha->reset_count++;
+
/* Try to reset the controller and try again */
if (!ips_reset_adapter(ha)) {
printk(KERN_WARNING "(%s%d) unable to reset controller.\n",
return (0);
}
- if (!ips_clear_adapter(ha)) {
+ if (!ips_clear_adapter(ha, IPS_INTR_IORL)) {
printk(KERN_WARNING "(%s%d) unable to initialize controller.\n",
ips_name, ha->host_num);
#endif
- if (!ips_read_config(ha)) {
+ if (!ips_read_config(ha, IPS_INTR_IORL)) {
printk(KERN_WARNING "(%s%d) unable to read config from controller.\n",
ips_name, ha->host_num);
} /* end if */
/* write driver version */
- if (!ips_write_driver_status(ha)) {
+ if (!ips_write_driver_status(ha, IPS_INTR_IORL)) {
printk(KERN_WARNING "(%s%d) unable to write driver info to controller.\n",
ips_name, ha->host_num);
return (0);
}
- if (!ips_read_adapter_status(ha)) {
+ if (!ips_read_adapter_status(ha, IPS_INTR_IORL)) {
printk(KERN_WARNING "(%s%d) unable to read controller status.\n",
ips_name, ha->host_num);
return (0);
}
- if (!ips_read_subsystem_parameters(ha)) {
+ if (!ips_read_subsystem_parameters(ha, IPS_INTR_IORL)) {
printk(KERN_WARNING "(%s%d) unable to read subsystem parameters.\n",
ips_name, ha->host_num);
return (0);
}
+ /* FFDC */
+ if (ha->subsys->param[3] & 0x300000) {
+ struct timeval tv;
+
+ do_gettimeofday(&tv);
+ ha->last_ffdc = tv.tv_sec;
+ ips_ffdc_reset(ha, IPS_INTR_IORL);
+ }
+
/* set limits on SID, LUN, BUS */
- ha->ntargets = MAX_TARGETS + 1;
+ ha->ntargets = IPS_MAX_TARGETS + 1;
ha->nlun = 1;
- ha->nbus = (ha->enq->ucMaxPhysicalDevices / MAX_TARGETS);
+ ha->nbus = (ha->enq->ucMaxPhysicalDevices / IPS_MAX_TARGETS);
switch (ha->conf->logical_drive[0].ucStripeSize) {
case 4:
/* */
/* Take the next command off the queue and send it to the controller */
/* */
-/* ASSUMED to be called from within a lock */
-/* */
/****************************************************************************/
static void
-ips_next(ips_ha_t *ha) {
- ips_scb_t *scb;
- Scsi_Cmnd *SC;
- Scsi_Cmnd *p;
- int ret;
+ips_next(ips_ha_t *ha, int intr) {
+ ips_scb_t *scb;
+ Scsi_Cmnd *SC;
+ Scsi_Cmnd *p;
+ ips_copp_wait_item_t *item;
+ int ret;
+ int intr_status;
+ u32 cpu_flags;
+ u32 cpu_flags2;
DBG("ips_next");
if (!ha)
return ;
+ /*
+ * Block access to the queue function so
+ * this command won't time out
+ */
+ if (intr == IPS_INTR_ON) {
+ spin_lock_irqsave(&io_request_lock, cpu_flags2);
+ intr_status = IPS_INTR_IORL;
+ } else {
+ intr_status = intr;
+
+ /* Quiet the compiler */
+ cpu_flags2 = 0;
+ }
+
+ if (ha->subsys->param[3] & 0x300000) {
+ struct timeval tv;
+
+ do_gettimeofday(&tv);
+
+ IPS_HA_LOCK(cpu_flags);
+ if (tv.tv_sec - ha->last_ffdc > IPS_SECS_8HOURS) {
+ ha->last_ffdc = tv.tv_sec;
+ IPS_HA_UNLOCK(cpu_flags);
+ ips_ffdc_time(ha, intr_status);
+ } else {
+ IPS_HA_UNLOCK(cpu_flags);
+ }
+ }
+
+ if (intr == IPS_INTR_ON)
+ spin_unlock_irqrestore(&io_request_lock, cpu_flags2);
+
#ifndef NO_IPS_CMDLINE
/*
* Send passthru commands
* since we limit the number that can be active
* on the card at any one time
*/
+ IPS_HA_LOCK(cpu_flags);
+ IPS_QUEUE_LOCK(&ha->copp_waitlist);
while ((ha->num_ioctl < IPS_MAX_IOCTL) &&
(ha->copp_waitlist.head) &&
(scb = ips_getscb(ha))) {
- SC = ips_removeq_wait_head(&ha->copp_waitlist);
- ret = ips_make_passthru(ha, SC, scb);
+ IPS_QUEUE_UNLOCK(&ha->copp_waitlist);
+ IPS_HA_UNLOCK(cpu_flags);
+ item = ips_removeq_copp_head(&ha->copp_waitlist);
+ scb->scsi_cmd = item->scsi_cmd;
+ scb->sem = item->sem;
+ kfree(item);
+
+ ret = ips_make_passthru(ha, scb->scsi_cmd, scb);
switch (ret) {
case IPS_FAILURE:
if (scb->scsi_cmd) {
+ /* raise the semaphore */
+ if (scb->scsi_cmd->cmnd[0] == IPS_IOCTL_NEW_COMMAND)
+ up(scb->sem);
+
scb->scsi_cmd->result = DID_ERROR << 16;
- scb->scsi_cmd->scsi_done(scb->scsi_cmd);
}
ips_freescb(ha, scb);
break;
case IPS_SUCCESS_IMM:
- if (scb->scsi_cmd)
- scb->scsi_cmd->scsi_done(scb->scsi_cmd);
+ if (scb->scsi_cmd) {
+ /* raise the semaphore */
+ if (scb->scsi_cmd->cmnd[0] == IPS_IOCTL_NEW_COMMAND)
+ up(scb->sem);
+ }
+
ips_freescb(ha, scb);
break;
default:
break;
} /* end case */
- if (ret != IPS_SUCCESS)
+ if (ret != IPS_SUCCESS) {
+ IPS_HA_LOCK(cpu_flags);
+ IPS_QUEUE_LOCK(&ha->copp_waitlist);
continue;
+ }
ret = ips_send_cmd(ha, scb);
switch(ret) {
case IPS_FAILURE:
if (scb->scsi_cmd) {
+ /* raise the semaphore */
+ if (scb->scsi_cmd->cmnd[0] == IPS_IOCTL_NEW_COMMAND)
+ up(scb->sem);
+
scb->scsi_cmd->result = DID_ERROR << 16;
- scb->scsi_cmd->scsi_done(scb->scsi_cmd);
}
ips_freescb(ha, scb);
break;
case IPS_SUCCESS_IMM:
- if (scb->scsi_cmd)
- scb->scsi_cmd->scsi_done(scb->scsi_cmd);
+ if (scb->scsi_cmd) {
+ /* raise the semaphore */
+ if (scb->scsi_cmd->cmnd[0] == IPS_IOCTL_NEW_COMMAND)
+ up(scb->sem);
+ }
+
ips_freescb(ha, scb);
break;
default:
break;
} /* end case */
+
+ IPS_HA_LOCK(cpu_flags);
+ IPS_QUEUE_LOCK(&ha->copp_waitlist);
}
+
+ IPS_QUEUE_UNLOCK(&ha->copp_waitlist);
+ IPS_HA_UNLOCK(cpu_flags);
#endif
/*
* Send "Normal" I/O commands
*/
+ IPS_HA_LOCK(cpu_flags);
+ IPS_QUEUE_LOCK(&ha->scb_waitlist);
p = ha->scb_waitlist.head;
+ IPS_QUEUE_UNLOCK(&ha->scb_waitlist);
while ((p) && (scb = ips_getscb(ha))) {
if ((p->channel > 0) && (ha->dcdb_active[p->channel-1] & (1 << p->target))) {
ips_freescb(ha, scb);
continue;
}
+ IPS_HA_UNLOCK(cpu_flags);
+
SC = ips_removeq_wait(&ha->scb_waitlist, p);
SC->result = DID_OK;
scb->dcdb.transfer_length = 0;
if (scb->data_len >= IPS_MAX_XFER) {
- scb->dcdb.cmd_attribute |= TRANSFER_64K;
+ scb->dcdb.cmd_attribute |= IPS_TRANSFER64K;
scb->dcdb.transfer_length = 0;
}
} /* end case */
p = (Scsi_Cmnd *) p->host_scribble;
+
+ IPS_HA_LOCK(cpu_flags);
} /* end while */
+
+ IPS_HA_UNLOCK(cpu_flags);
}
/****************************************************************************/
if (!item)
return ;
+ IPS_QUEUE_LOCK(queue);
+
item->q_next = queue->head;
queue->head = item;
queue->tail = item;
queue->count++;
+
+ IPS_QUEUE_UNLOCK(queue);
}
/****************************************************************************/
if (!item)
return ;
+ IPS_QUEUE_LOCK(queue);
+
item->q_next = NULL;
if (queue->tail)
queue->head = item;
queue->count++;
+
+ IPS_QUEUE_UNLOCK(queue);
}
/****************************************************************************/
DBG("ips_removeq_scb_head");
+ IPS_QUEUE_LOCK(queue);
+
item = queue->head;
- if (!item)
+ if (!item) {
+ IPS_QUEUE_UNLOCK(queue);
+
return (NULL);
+ }
queue->head = item->q_next;
item->q_next = NULL;
queue->count--;
+ IPS_QUEUE_UNLOCK(queue);
+
return (item);
}
ips_removeq_scb(ips_scb_queue_t *queue, ips_scb_t *item) {
ips_scb_t *p;
- DBG("ips_removeq_scb");
+ DBG("ips_removeq_scb");
+
+ if (!item)
+ return (NULL);
+
+ IPS_QUEUE_LOCK(queue);
+
+ if (item == queue->head) {
+ IPS_QUEUE_UNLOCK(queue);
+
+ return (ips_removeq_scb_head(queue));
+ }
+
+ p = queue->head;
+
+ while ((p) && (item != p->q_next))
+ p = p->q_next;
+
+ if (p) {
+ /* found a match */
+ p->q_next = item->q_next;
+
+ if (!item->q_next)
+ queue->tail = p;
+
+ item->q_next = NULL;
+ queue->count--;
+
+ IPS_QUEUE_UNLOCK(queue);
+
+ return (item);
+ }
+
+ IPS_QUEUE_UNLOCK(queue);
+
+ return (NULL);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_putq_wait_head */
+/* */
+/* Routine Description: */
+/* */
+/* Add an item to the head of the queue */
+/* */
+/* ASSUMED to be called from within a lock */
+/* */
+/****************************************************************************/
+static inline void
+ips_putq_wait_head(ips_wait_queue_t *queue, Scsi_Cmnd *item) {
+ DBG("ips_putq_wait_head");
+
+ if (!item)
+ return ;
+
+ IPS_QUEUE_LOCK(queue);
+
+ item->host_scribble = (char *) queue->head;
+ queue->head = item;
+
+ if (!queue->tail)
+ queue->tail = item;
+
+ queue->count++;
+
+ IPS_QUEUE_UNLOCK(queue);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_putq_wait_tail */
+/* */
+/* Routine Description: */
+/* */
+/* Add an item to the tail of the queue */
+/* */
+/* ASSUMED to be called from within a lock */
+/* */
+/****************************************************************************/
+static inline void
+ips_putq_wait_tail(ips_wait_queue_t *queue, Scsi_Cmnd *item) {
+ DBG("ips_putq_wait_tail");
+
+ if (!item)
+ return ;
+
+ IPS_QUEUE_LOCK(queue);
+
+ item->host_scribble = NULL;
+
+ if (queue->tail)
+ queue->tail->host_scribble = (char *)item;
+
+ queue->tail = item;
+
+ if (!queue->head)
+ queue->head = item;
+
+ queue->count++;
+
+ IPS_QUEUE_UNLOCK(queue);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_removeq_wait_head */
+/* */
+/* Routine Description: */
+/* */
+/* Remove the head of the queue */
+/* */
+/* ASSUMED to be called from within a lock */
+/* */
+/****************************************************************************/
+static inline Scsi_Cmnd *
+ips_removeq_wait_head(ips_wait_queue_t *queue) {
+ Scsi_Cmnd *item;
+
+ DBG("ips_removeq_wait_head");
+
+ IPS_QUEUE_LOCK(queue);
+
+ item = queue->head;
+
+ if (!item) {
+ IPS_QUEUE_UNLOCK(queue);
+
+ return (NULL);
+ }
+
+ queue->head = (Scsi_Cmnd *) item->host_scribble;
+ item->host_scribble = NULL;
+
+ if (queue->tail == item)
+ queue->tail = NULL;
+
+ queue->count--;
+
+ IPS_QUEUE_UNLOCK(queue);
+
+ return (item);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_removeq_wait */
+/* */
+/* Routine Description: */
+/* */
+/* Remove an item from a queue */
+/* */
+/* ASSUMED to be called from within a lock */
+/* */
+/****************************************************************************/
+static inline Scsi_Cmnd *
+ips_removeq_wait(ips_wait_queue_t *queue, Scsi_Cmnd *item) {
+ Scsi_Cmnd *p;
+
+ DBG("ips_removeq_wait");
if (!item)
return (NULL);
- if (item == queue->head)
- return (ips_removeq_scb_head(queue));
+ IPS_QUEUE_LOCK(queue);
+
+ if (item == queue->head) {
+ IPS_QUEUE_UNLOCK(queue);
+
+ return (ips_removeq_wait_head(queue));
+ }
p = queue->head;
- while ((p) && (item != p->q_next))
- p = p->q_next;
+ while ((p) && (item != (Scsi_Cmnd *) p->host_scribble))
+ p = (Scsi_Cmnd *) p->host_scribble;
if (p) {
/* found a match */
- p->q_next = item->q_next;
+ p->host_scribble = item->host_scribble;
- if (!item->q_next)
+ if (!item->host_scribble)
queue->tail = p;
- item->q_next = NULL;
+ item->host_scribble = NULL;
queue->count--;
+ IPS_QUEUE_UNLOCK(queue);
+
return (item);
}
+ IPS_QUEUE_UNLOCK(queue);
+
return (NULL);
}
/****************************************************************************/
/* */
-/* Routine Name: ips_putq_wait_head */
+/* Routine Name: ips_putq_copp_head */
/* */
/* Routine Description: */
/* */
/* */
/****************************************************************************/
static inline void
-ips_putq_wait_head(ips_wait_queue_t *queue, Scsi_Cmnd *item) {
- DBG("ips_putq_wait_head");
+ips_putq_copp_head(ips_copp_queue_t *queue, ips_copp_wait_item_t *item) {
+ DBG("ips_putq_copp_head");
if (!item)
return ;
- item->host_scribble = (char *) queue->head;
+ IPS_QUEUE_LOCK(queue);
+
+ item->next = queue->head;
queue->head = item;
if (!queue->tail)
queue->tail = item;
queue->count++;
+
+ IPS_QUEUE_UNLOCK(queue);
}
/****************************************************************************/
/* */
-/* Routine Name: ips_putq_wait_tail */
+/* Routine Name: ips_putq_copp_tail */
/* */
/* Routine Description: */
/* */
/* */
/****************************************************************************/
static inline void
-ips_putq_wait_tail(ips_wait_queue_t *queue, Scsi_Cmnd *item) {
- DBG("ips_putq_wait_tail");
+ips_putq_copp_tail(ips_copp_queue_t *queue, ips_copp_wait_item_t *item) {
+ DBG("ips_putq_copp_tail");
if (!item)
return ;
- item->host_scribble = NULL;
+ IPS_QUEUE_LOCK(queue);
+
+ item->next = NULL;
if (queue->tail)
- queue->tail->host_scribble = (char *)item;
+ queue->tail->next = item;
queue->tail = item;
queue->head = item;
queue->count++;
+
+ IPS_QUEUE_UNLOCK(queue);
}
/****************************************************************************/
/* */
-/* Routine Name: ips_removeq_wait_head */
+/* Routine Name: ips_removeq_copp_head */
/* */
/* Routine Description: */
/* */
/* ASSUMED to be called from within a lock */
/* */
/****************************************************************************/
-static inline Scsi_Cmnd *
-ips_removeq_wait_head(ips_wait_queue_t *queue) {
- Scsi_Cmnd *item;
+static inline ips_copp_wait_item_t *
+ips_removeq_copp_head(ips_copp_queue_t *queue) {
+ ips_copp_wait_item_t *item;
- DBG("ips_removeq_wait_head");
+ DBG("ips_removeq_copp_head");
+
+ IPS_QUEUE_LOCK(queue);
item = queue->head;
- if (!item)
+ if (!item) {
+ IPS_QUEUE_UNLOCK(queue);
+
return (NULL);
+ }
- queue->head = (Scsi_Cmnd *) item->host_scribble;
- item->host_scribble = NULL;
+ queue->head = item->next;
+ item->next = NULL;
if (queue->tail == item)
queue->tail = NULL;
queue->count--;
+ IPS_QUEUE_UNLOCK(queue);
+
return (item);
}
/****************************************************************************/
/* */
-/* Routine Name: ips_removeq_wait */
+/* Routine Name: ips_removeq_copp */
/* */
/* Routine Description: */
/* */
/* ASSUMED to be called from within a lock */
/* */
/****************************************************************************/
-static inline Scsi_Cmnd *
-ips_removeq_wait(ips_wait_queue_t *queue, Scsi_Cmnd *item) {
- Scsi_Cmnd *p;
+static inline ips_copp_wait_item_t *
+ips_removeq_copp(ips_copp_queue_t *queue, ips_copp_wait_item_t *item) {
+ ips_copp_wait_item_t *p;
- DBG("ips_removeq_wait");
+ DBG("ips_removeq_copp");
if (!item)
return (NULL);
- if (item == queue->head)
- return (ips_removeq_wait_head(queue));
+ IPS_QUEUE_LOCK(queue);
+
+ if (item == queue->head) {
+ IPS_QUEUE_UNLOCK(queue);
+
+ return (ips_removeq_copp_head(queue));
+ }
p = queue->head;
- while ((p) && (item != (Scsi_Cmnd *) p->host_scribble))
- p = (Scsi_Cmnd *) p->host_scribble;
+ while ((p) && (item != p->next))
+ p = p->next;
if (p) {
/* found a match */
- p->host_scribble = item->host_scribble;
+ p->next = item->next;
- if (!item->host_scribble)
+ if (!item->next)
queue->tail = p;
- item->host_scribble = NULL;
+ item->next = NULL;
queue->count--;
+ IPS_QUEUE_UNLOCK(queue);
+
return (item);
}
+ IPS_QUEUE_UNLOCK(queue);
+
return (NULL);
}
static void
ips_done(ips_ha_t *ha, ips_scb_t *scb) {
int ret;
+ u32 cpu_flags;
DBG("ips_done");
#ifndef NO_IPS_CMDLINE
if ((scb->scsi_cmd) && (ips_is_passthru(scb->scsi_cmd))) {
- ips_cleanup_passthru(scb);
+ ips_cleanup_passthru(ha, scb);
+ IPS_HA_LOCK(cpu_flags);
ha->num_ioctl--;
+ IPS_HA_UNLOCK(cpu_flags);
} else {
#endif
/*
scb->dcdb.transfer_length = 0;
if (scb->data_len >= IPS_MAX_XFER) {
- scb->dcdb.cmd_attribute |= TRANSFER_64K;
+ scb->dcdb.cmd_attribute |= IPS_TRANSFER64K;
scb->dcdb.transfer_length = 0;
}
} /* end if passthru */
#endif
- if (scb->bus)
+ if (scb->bus) {
+ IPS_HA_LOCK(cpu_flags);
ha->dcdb_active[scb->bus-1] &= ~(1 << scb->target_id);
+ IPS_HA_UNLOCK(cpu_flags);
+ }
/* call back to SCSI layer */
- scb->scsi_cmd->scsi_done(scb->scsi_cmd);
- ips_freescb(ha, scb);
+ if (scb->scsi_cmd && scb->scsi_cmd->cmnd[0] != IPS_IOCTL_NEW_COMMAND)
+ scb->scsi_cmd->scsi_done(scb->scsi_cmd);
- /* do the next command */
- ips_next(ha);
+ ips_freescb(ha, scb);
}
/****************************************************************************/
/* default driver error */
errcode = DID_ERROR;
- switch (scb->basic_status & GSC_STATUS_MASK) {
- case CMD_TIMEOUT:
+ switch (scb->basic_status & IPS_GSC_STATUS_MASK) {
+ case IPS_CMD_TIMEOUT:
errcode = DID_TIME_OUT;
break;
- case INVAL_OPCO:
- case INVAL_CMD_BLK:
- case INVAL_PARM_BLK:
- case LOG_DRV_ERROR:
- case CMD_CMPLT_WERROR:
+ case IPS_INVAL_OPCO:
+ case IPS_INVAL_CMD_BLK:
+ case IPS_INVAL_PARM_BLK:
+ case IPS_LD_ERROR:
+ case IPS_CMD_CMPLT_WERROR:
break;
- case PHYS_DRV_ERROR:
+ case IPS_PHYS_DRV_ERROR:
/*
* For physical drive errors that
* are not on a logical drive should
errcode = DID_OK;
switch (scb->extended_status) {
- case SELECTION_TIMEOUT:
+ case IPS_ERR_SEL_TO:
if (scb->bus) {
scb->scsi_cmd->result |= DID_TIME_OUT << 16;
return (0);
}
break;
- case DATA_OVER_UNDER_RUN:
+ case IPS_ERR_OU_RUN:
if ((scb->bus) && (scb->dcdb.transfer_length < scb->data_len)) {
if ((scb->scsi_cmd->cmnd[0] == INQUIRY) &&
((((char *) scb->scsi_cmd->buffer)[0] & 0x1f) == TYPE_DISK)) {
}
break;
- case EXT_RECOVERY:
+ case IPS_ERR_RECOVERY:
/* don't fail recovered errors */
if (scb->bus) {
scb->scsi_cmd->result |= DID_OK << 16;
}
break;
- case EXT_HOST_RESET:
- case EXT_DEVICE_RESET:
+ case IPS_ERR_HOST_RESET:
+ case IPS_ERR_DEV_RESET:
errcode = DID_RESET;
break;
- case EXT_CHECK_CONDITION:
+ case IPS_ERR_CKCOND:
break;
} /* end switch */
} /* end switch */
/* */
/****************************************************************************/
static int
-ips_send(ips_ha_t *ha, ips_scb_t *scb, scb_callback callback) {
+ips_send(ips_ha_t *ha, ips_scb_t *scb, ips_scb_callback callback) {
int ret;
DBG("ips_send");
/* */
/****************************************************************************/
static int
-ips_send_wait(ips_ha_t *ha, ips_scb_t *scb, int timeout) {
+ips_send_wait(ips_ha_t *ha, ips_scb_t *scb, int timeout, int intr) {
int ret;
DBG("ips_send_wait");
if ((ret == IPS_FAILURE) || (ret == IPS_SUCCESS_IMM))
return (ret);
- ret = ips_wait(ha, timeout, IPS_INTR_OFF);
+ ret = ips_wait(ha, timeout, intr);
return (ret);
}
scb->scsi_cmd->result = DID_OK << 16;
if (scb->scsi_cmd->cmnd[0] == INQUIRY) {
- INQUIRYDATA inq;
+ IPS_INQ_DATA inq;
- memset(&inq, 0, sizeof(INQUIRYDATA));
+ memset(&inq, 0, sizeof(IPS_INQ_DATA));
inq.DeviceType = TYPE_PROCESSOR;
inq.DeviceTypeQualifier = 0;
scb->scsi_cmd->result = DID_OK << 16;
}
} else {
- scb->cmd.logical_info.op_code = GET_LOGICAL_DRIVE_INFO;
+ scb->cmd.logical_info.op_code = IPS_CMD_GET_LD_INFO;
scb->cmd.logical_info.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.logical_info.buffer_addr = VIRT_TO_BUS(&ha->adapt->logical_drive_info);
scb->cmd.logical_info.reserved = 0;
case WRITE_6:
if (!scb->sg_len) {
scb->cmd.basic_io.op_code =
- (scb->scsi_cmd->cmnd[0] == READ_6) ? IPS_READ : IPS_WRITE;
+ (scb->scsi_cmd->cmnd[0] == READ_6) ? IPS_CMD_READ : IPS_CMD_WRITE;
} else {
scb->cmd.basic_io.op_code =
- (scb->scsi_cmd->cmnd[0] == READ_6) ? READ_SCATTER_GATHER : WRITE_SCATTER_GATHER;
+ (scb->scsi_cmd->cmnd[0] == READ_6) ? IPS_CMD_READ_SG : IPS_CMD_WRITE_SG;
}
scb->cmd.basic_io.command_id = IPS_COMMAND_ID(ha, scb);
case WRITE_10:
if (!scb->sg_len) {
scb->cmd.basic_io.op_code =
- (scb->scsi_cmd->cmnd[0] == READ_10) ? IPS_READ : IPS_WRITE;
+ (scb->scsi_cmd->cmnd[0] == READ_10) ? IPS_CMD_READ : IPS_CMD_WRITE;
} else {
scb->cmd.basic_io.op_code =
- (scb->scsi_cmd->cmnd[0] == READ_10) ? READ_SCATTER_GATHER : WRITE_SCATTER_GATHER;
+ (scb->scsi_cmd->cmnd[0] == READ_10) ? IPS_CMD_READ_SG : IPS_CMD_WRITE_SG;
}
scb->cmd.basic_io.command_id = IPS_COMMAND_ID(ha, scb);
break;
case MODE_SENSE:
- scb->cmd.basic_io.op_code = ENQUIRY;
+ scb->cmd.basic_io.op_code = IPS_CMD_ENQUIRY;
scb->cmd.basic_io.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.basic_io.sg_addr = VIRT_TO_BUS(ha->enq);
ret = IPS_SUCCESS;
break;
case READ_CAPACITY:
- scb->cmd.logical_info.op_code = GET_LOGICAL_DRIVE_INFO;
+ scb->cmd.logical_info.op_code = IPS_CMD_GET_LD_INFO;
scb->cmd.logical_info.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.logical_info.buffer_addr = VIRT_TO_BUS(&ha->adapt->logical_drive_info);
scb->cmd.logical_info.reserved = 0;
/* setup DCDB */
if (scb->bus > 0) {
if (!scb->sg_len)
- scb->cmd.dcdb.op_code = DIRECT_CDB;
+ scb->cmd.dcdb.op_code = IPS_CMD_DCDB;
else
- scb->cmd.dcdb.op_code = DIRECT_CDB_SCATTER_GATHER;
+ scb->cmd.dcdb.op_code = IPS_CMD_DCDB_SG;
ha->dcdb_active[scb->bus-1] |= (1 << scb->target_id);
scb->cmd.dcdb.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.dcdb.reserved3 = 0;
scb->dcdb.device_address = ((scb->bus - 1) << 4) | scb->target_id;
- scb->dcdb.cmd_attribute |= DISCONNECT_ALLOWED;
+ scb->dcdb.cmd_attribute |= IPS_DISCONNECT_ALLOWED;
if (scb->timeout) {
if (scb->timeout <= 10)
- scb->dcdb.cmd_attribute |= TIMEOUT_10;
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT10;
else if (scb->timeout <= 60)
- scb->dcdb.cmd_attribute |= TIMEOUT_60;
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT60;
else
- scb->dcdb.cmd_attribute |= TIMEOUT_20M;
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT20M;
}
- if (!(scb->dcdb.cmd_attribute & TIMEOUT_20M))
- scb->dcdb.cmd_attribute |= TIMEOUT_20M;
+ if (!(scb->dcdb.cmd_attribute & IPS_TIMEOUT20M))
+ scb->dcdb.cmd_attribute |= IPS_TIMEOUT20M;
scb->dcdb.sense_length = sizeof(scb->scsi_cmd->sense_buffer);
scb->dcdb.buffer_pointer = scb->data_busaddr;
command_id = ips_statupd(ha);
- if (command_id > (MAX_CMDS-1)) {
+ if (command_id > (IPS_MAX_CMDS-1)) {
printk(KERN_NOTICE "(%s%d) invalid command id received: %d\n",
ips_name, ha->host_num, command_id);
scb = &ha->scbs[command_id];
sp->scb_addr = (u32) scb;
sp->residue_len = 0;
- scb->basic_status = basic_status = ha->adapt->p_status_tail->basic_status & BASIC_STATUS_MASK;
+ scb->basic_status = basic_status = ha->adapt->p_status_tail->basic_status & IPS_BASIC_STATUS_MASK;
scb->extended_status = ext_status = ha->adapt->p_status_tail->extended_status;
/* Remove the item from the active queue */
errcode = DID_OK;
ret = 0;
- if (((basic_status & GSC_STATUS_MASK) == SSUCCESS) ||
- ((basic_status & GSC_STATUS_MASK) == RECOVERED_ERROR)) {
+ if (((basic_status & IPS_GSC_STATUS_MASK) == IPS_CMD_SUCCESS) ||
+ ((basic_status & IPS_GSC_STATUS_MASK) == IPS_CMD_RECOVERED_ERROR)) {
if (scb->bus == 0) {
#if IPS_DEBUG >= 1
- if ((basic_status & GSC_STATUS_MASK) == RECOVERED_ERROR) {
+ if ((basic_status & IPS_GSC_STATUS_MASK) == IPS_CMD_RECOVERED_ERROR) {
printk(KERN_NOTICE "(%s%d) Recovered Logical Drive Error OpCode: %x, BSB: %x, ESB: %x\n",
ips_name, ha->host_num,
scb->cmd.basic_io.op_code, basic_status, ext_status);
ips_online(ips_ha_t *ha, ips_scb_t *scb) {
DBG("ips_online");
- if (scb->target_id >= MAX_LOGICAL_DRIVES)
+ if (scb->target_id >= IPS_MAX_LD)
return (0);
- if ((scb->basic_status & GSC_STATUS_MASK) > 1) {
+ if ((scb->basic_status & IPS_GSC_STATUS_MASK) > 1) {
memset(&ha->adapt->logical_drive_info, 0, sizeof(ha->adapt->logical_drive_info));
return (0);
}
if (scb->target_id < ha->adapt->logical_drive_info.no_of_log_drive &&
- ha->adapt->logical_drive_info.drive_info[scb->target_id].state != OFF_LINE &&
- ha->adapt->logical_drive_info.drive_info[scb->target_id].state != FREE &&
- ha->adapt->logical_drive_info.drive_info[scb->target_id].state != CRS &&
- ha->adapt->logical_drive_info.drive_info[scb->target_id].state != SYS)
+ ha->adapt->logical_drive_info.drive_info[scb->target_id].state != IPS_LD_OFFLINE &&
+ ha->adapt->logical_drive_info.drive_info[scb->target_id].state != IPS_LD_FREE &&
+ ha->adapt->logical_drive_info.drive_info[scb->target_id].state != IPS_LD_CRS &&
+ ha->adapt->logical_drive_info.drive_info[scb->target_id].state != IPS_LD_SYS)
return (1);
else
return (0);
/****************************************************************************/
static int
ips_inquiry(ips_ha_t *ha, ips_scb_t *scb) {
- INQUIRYDATA inq;
+ IPS_INQ_DATA inq;
DBG("ips_inquiry");
- memset(&inq, 0, sizeof(INQUIRYDATA));
+ memset(&inq, 0, sizeof(IPS_INQ_DATA));
inq.DeviceType = TYPE_DISK;
inq.DeviceTypeQualifier = 0;
/****************************************************************************/
static int
ips_rdcap(ips_ha_t *ha, ips_scb_t *scb) {
- CAPACITY_T *cap;
+ IPS_CAPACITY *cap;
DBG("ips_rdcap");
if (scb->scsi_cmd->bufflen < 8)
return (0);
- cap = (CAPACITY_T *) scb->scsi_cmd->request_buffer;
+ cap = (IPS_CAPACITY *) scb->scsi_cmd->request_buffer;
cap->lba = htonl(ha->adapt->logical_drive_info.drive_info[scb->target_id].sector_count - 1);
cap->len = htonl((u32) IPS_BLKSIZE);
if (ha->enq->ulDriveSize[scb->target_id] > 0x400000 &&
(ha->enq->ucMiscFlag & 0x8) == 0) {
- heads = NORM_MODE_HEADS;
- sectors = NORM_MODE_SECTORS;
+ heads = IPS_NORM_HEADS;
+ sectors = IPS_NORM_SECTORS;
} else {
- heads = COMP_MODE_HEADS;
- sectors = COMP_MODE_SECTORS;
+ heads = IPS_COMP_HEADS;
+ sectors = IPS_COMP_SECTORS;
}
cylinders = ha->enq->ulDriveSize[scb->target_id] / (heads * sectors);
case 0x03: /* page 3 */
mdata.pdata.pg3.pg_pc = 0x3;
mdata.pdata.pg3.pg_res1 = 0;
- mdata.pdata.pg3.pg_len = sizeof(DADF_T);
+ mdata.pdata.pg3.pg_len = sizeof(IPS_DADF);
mdata.plh.plh_len = 3 + mdata.plh.plh_bdl + mdata.pdata.pg3.pg_len;
mdata.pdata.pg3.pg_trk_z = 0;
mdata.pdata.pg3.pg_asec_z = 0;
case 0x4:
mdata.pdata.pg4.pg_pc = 0x4;
mdata.pdata.pg4.pg_res1 = 0;
- mdata.pdata.pg4.pg_len = sizeof(RDDG_T);
+ mdata.pdata.pg4.pg_len = sizeof(IPS_RDDG);
mdata.plh.plh_len = 3 + mdata.plh.plh_bdl + mdata.pdata.pg4.pg_len;
mdata.pdata.pg4.pg_cylu = (cylinders >> 8) & 0xffff;
mdata.pdata.pg4.pg_cyll = cylinders & 0xff;
ha->dummy = NULL;
}
+ if (ha->ioctl_data) {
+ kfree(ha->ioctl_data);
+ ha->ioctl_data = NULL;
+ ha->ioctl_datasize = 0;
+ }
+
if (ha->scbs) {
for (i = 0; i < ha->max_cmds; i++) {
if (ha->scbs[i].sg_list)
scb_p = &ha->scbs[i];
/* allocate S/G list */
- scb_p->sg_list = (SG_LIST *) kmalloc(sizeof(SG_LIST) * MAX_SG_ELEMENTS, GFP_KERNEL|GFP_DMA);
+ scb_p->sg_list = (IPS_SG_LIST *) kmalloc(sizeof(IPS_SG_LIST) * IPS_MAX_SG, GFP_KERNEL|GFP_DMA);
if (! scb_p->sg_list)
return (0);
/****************************************************************************/
static void
ips_init_scb(ips_ha_t *ha, ips_scb_t *scb) {
- SG_LIST *sg_list;
+ IPS_SG_LIST *sg_list;
DBG("ips_init_scb");
/* zero fill */
memset(scb, 0, sizeof(ips_scb_t));
- memset(ha->dummy, 0, sizeof(BASIC_IO_CMD));
+ memset(ha->dummy, 0, sizeof(IPS_IO_CMD));
/* Initialize dummy command bucket */
ha->dummy->op_code = 0xFF;
ha->dummy->ccsar = VIRT_TO_BUS(ha->dummy);
- ha->dummy->command_id = MAX_CMDS;
+ ha->dummy->command_id = IPS_MAX_CMDS;
/* set bus address of scb */
scb->scb_busaddr = VIRT_TO_BUS(scb);
scb->sg_list = sg_list;
/* Neptune Fix */
- scb->cmd.basic_io.cccr = ILE;
+ scb->cmd.basic_io.cccr = IPS_BIT_ILE;
scb->cmd.basic_io.ccsar = VIRT_TO_BUS(ha->dummy);
}
static ips_scb_t *
ips_getscb(ips_ha_t *ha) {
ips_scb_t *scb;
- unsigned int cpu_flags;
+ u32 cpu_flags;
DBG("ips_getscb");
- spin_lock_irqsave(&ha->scb_lock, cpu_flags);
+ IPS_SCB_LOCK(cpu_flags);
if ((scb = ha->scb_freelist) == NULL) {
- spin_unlock_irqrestore(&ha->scb_lock, cpu_flags);
+ IPS_SCB_UNLOCK(cpu_flags);
return (NULL);
}
ha->scb_freelist = scb->q_next;
scb->q_next = NULL;
- spin_unlock_irqrestore(&ha->scb_lock, cpu_flags);
+ IPS_SCB_UNLOCK(cpu_flags);
ips_init_scb(ha, scb);
/****************************************************************************/
static void
ips_freescb(ips_ha_t *ha, ips_scb_t *scb) {
- unsigned int cpu_flags;
+ u32 cpu_flags;
DBG("ips_freescb");
/* check to make sure this is not our "special" scb */
if (IPS_COMMAND_ID(ha, scb) < (ha->max_cmds - 1)) {
- spin_lock_irqsave(&ha->scb_lock, cpu_flags);
+ IPS_SCB_LOCK(cpu_flags);
scb->q_next = ha->scb_freelist;
ha->scb_freelist = scb;
- spin_unlock_irqrestore(&ha->scb_lock, cpu_flags);
+ IPS_SCB_UNLOCK(cpu_flags);
}
}
ips_reset_adapter(ips_ha_t *ha) {
u8 Isr;
u8 Cbsp;
- u8 PostByte[MAX_POST_BYTES];
- u8 ConfigByte[MAX_CONFIG_BYTES];
+ u8 PostByte[IPS_MAX_POST_BYTES];
+ u8 ConfigByte[IPS_MAX_CONFIG_BYTES];
int i, j;
int reset_counter;
+ u32 cpu_flags;
DBG("ips_reset_adapter");
ha->io_addr, ha->irq);
#endif
+ IPS_HA_LOCK(cpu_flags);
+
reset_counter = 0;
while (reset_counter < 2) {
reset_counter++;
- outb(RST, ha->io_addr + SCPR);
- MDELAY(ONE_SEC);
- outb(0, ha->io_addr + SCPR);
- MDELAY(ONE_SEC);
+ outb(IPS_BIT_RST, ha->io_addr + IPS_REG_SCPR);
+ MDELAY(IPS_ONE_SEC);
+ outb(0, ha->io_addr + IPS_REG_SCPR);
+ MDELAY(IPS_ONE_SEC);
- for (i = 0; i < MAX_POST_BYTES; i++) {
+ for (i = 0; i < IPS_MAX_POST_BYTES; i++) {
for (j = 0; j < 45; j++) {
- Isr = inb(ha->io_addr + HISR);
- if (Isr & GHI)
+ Isr = inb(ha->io_addr + IPS_REG_HISR);
+ if (Isr & IPS_BIT_GHI)
break;
- MDELAY(ONE_SEC);
+ MDELAY(IPS_ONE_SEC);
}
if (j >= 45) {
/* error occured */
if (reset_counter < 2)
continue;
- else
+ else {
/* reset failed */
+ IPS_HA_UNLOCK(cpu_flags);
+
return (0);
+ }
}
- PostByte[i] = inb(ha->io_addr + ISPR);
- outb(Isr, ha->io_addr + HISR);
+ PostByte[i] = inb(ha->io_addr + IPS_REG_ISPR);
+ outb(Isr, ha->io_addr + IPS_REG_HISR);
}
- if (PostByte[0] < GOOD_POST_BASIC_STATUS) {
+ if (PostByte[0] < IPS_GOOD_POST_STATUS) {
printk("(%s%d) reset controller fails (post status %x %x).\n",
ips_name, ha->host_num, PostByte[0], PostByte[1]);
+ IPS_HA_UNLOCK(cpu_flags);
+
return (0);
}
- for (i = 0; i < MAX_CONFIG_BYTES; i++) {
+ for (i = 0; i < IPS_MAX_CONFIG_BYTES; i++) {
for (j = 0; j < 240; j++) {
- Isr = inb(ha->io_addr + HISR);
- if (Isr & GHI)
+ Isr = inb(ha->io_addr + IPS_REG_HISR);
+ if (Isr & IPS_BIT_GHI)
break;
- MDELAY(ONE_SEC); /* 100 msec */
+ MDELAY(IPS_ONE_SEC); /* 100 msec */
}
if (j >= 240) {
/* error occured */
if (reset_counter < 2)
continue;
- else
+ else {
/* reset failed */
+ IPS_HA_UNLOCK(cpu_flags);
+
return (0);
+ }
}
- ConfigByte[i] = inb(ha->io_addr + ISPR);
- outb(Isr, ha->io_addr + HISR);
+ ConfigByte[i] = inb(ha->io_addr + IPS_REG_ISPR);
+ outb(Isr, ha->io_addr + IPS_REG_HISR);
}
if (ConfigByte[0] == 0 && ConfigByte[1] == 2) {
printk("(%s%d) reset controller fails (status %x %x).\n",
ips_name, ha->host_num, ConfigByte[0], ConfigByte[1]);
+ IPS_HA_UNLOCK(cpu_flags);
+
return (0);
}
for (i = 0; i < 240; i++) {
- Cbsp = inb(ha->io_addr + CBSP);
+ Cbsp = inb(ha->io_addr + IPS_REG_CBSP);
- if ((Cbsp & OP) == 0)
+ if ((Cbsp & IPS_BIT_OP) == 0)
break;
- MDELAY(ONE_SEC);
+ MDELAY(IPS_ONE_SEC);
}
if (i >= 240) {
/* error occured */
if (reset_counter < 2)
continue;
- else
+ else {
/* reset failed */
+ IPS_HA_UNLOCK(cpu_flags);
+
return (0);
+ }
}
/* setup CCCR */
- outw(0x1010, ha->io_addr + CCCR);
+ outw(0x1010, ha->io_addr + IPS_REG_CCCR);
/* Enable busmastering */
- outb(EBM, ha->io_addr + SCPR);
+ outb(IPS_BIT_EBM, ha->io_addr + IPS_REG_SCPR);
/* setup status queues */
ips_statinit(ha);
/* Enable interrupts */
- outb(EI, ha->io_addr + HISR);
+ outb(IPS_BIT_EI, ha->io_addr + IPS_REG_HISR);
/* if we get here then everything went OK */
break;
}
+ IPS_HA_UNLOCK(cpu_flags);
+
return (1);
}
DBG("ips_statinit");
ha->adapt->p_status_start = ha->adapt->status;
- ha->adapt->p_status_end = ha->adapt->status + MAX_CMDS;
+ ha->adapt->p_status_end = ha->adapt->status + IPS_MAX_CMDS;
ha->adapt->p_status_tail = ha->adapt->status;
phys_status_start = VIRT_TO_BUS(ha->adapt->status);
- outl(phys_status_start, ha->io_addr + SQSR);
- outl(phys_status_start + STATUS_Q_SIZE, ha->io_addr + SQER);
- outl(phys_status_start + STATUS_SIZE, ha->io_addr + SQHR);
- outl(phys_status_start, ha->io_addr + SQTR);
+ outl(phys_status_start, ha->io_addr + IPS_REG_SQSR);
+ outl(phys_status_start + IPS_STATUS_Q_SIZE, ha->io_addr + IPS_REG_SQER);
+ outl(phys_status_start + IPS_STATUS_SIZE, ha->io_addr + IPS_REG_SQHR);
+ outl(phys_status_start, ha->io_addr + IPS_REG_SQTR);
ha->adapt->hw_status_start = phys_status_start;
ha->adapt->hw_status_tail = phys_status_start;
if (ha->adapt->p_status_tail != ha->adapt->p_status_end) {
ha->adapt->p_status_tail++;
- ha->adapt->hw_status_tail += sizeof(STATUS);
+ ha->adapt->hw_status_tail += sizeof(IPS_STATUS);
} else {
ha->adapt->p_status_tail = ha->adapt->p_status_start;
ha->adapt->hw_status_tail = ha->adapt->hw_status_start;
}
- outl(ha->adapt->hw_status_tail, ha->io_addr + SQTR);
+ outl(ha->adapt->hw_status_tail, ha->io_addr + IPS_REG_SQTR);
command_id = ha->adapt->p_status_tail->command_id;
ips_issue(ips_ha_t *ha, ips_scb_t *scb) {
u32 TimeOut;
u16 val;
+ u32 cpu_flags;
DBG("ips_issue");
ips_name,
scb->cmd.basic_io.command_id);
#if IPS_DEBUG >= 11
- MDELAY(ONE_SEC);
+ MDELAY(IPS_ONE_SEC);
#endif
#endif
+ IPS_HA_LOCK(cpu_flags);
+
TimeOut = 0;
- while ((val = inw(ha->io_addr + CCCR)) & SEMAPHORE) {
+ while ((val = inw(ha->io_addr + IPS_REG_CCCR)) & IPS_BIT_SEM) {
UDELAY(1000);
- if (++TimeOut >= SEMAPHORE_TIMEOUT) {
- if (!(val & START_STOP_BIT))
+ if (++TimeOut >= IPS_SEM_TIMEOUT) {
+ if (!(val & IPS_BIT_START_STOP))
break;
printk(KERN_WARNING "(%s%d) ips_issue val [0x%x].\n",
printk(KERN_WARNING "(%s%d) ips_issue semaphore chk timeout.\n",
ips_name, ha->host_num);
+ IPS_HA_UNLOCK(cpu_flags);
+
return (IPS_FAILURE);
} /* end if */
} /* end while */
- outl(scb->scb_busaddr, ha->io_addr + CCSAR);
- outw(START_COMMAND, ha->io_addr + CCCR);
+ outl(scb->scb_busaddr, ha->io_addr + IPS_REG_CCSAR);
+ outw(IPS_BIT_START_CMD, ha->io_addr + IPS_REG_CCCR);
+
+ IPS_HA_UNLOCK(cpu_flags);
return (IPS_SUCCESS);
}
DBG("ips_isintr");
- Isr = inb(ha->io_addr + HISR);
+ Isr = inb(ha->io_addr + IPS_REG_HISR);
if (Isr == 0xFF)
/* ?!?! Nothing really there */
return (0);
- if (Isr & SCE)
+ if (Isr & IPS_BIT_SCE)
return (1);
- else if (Isr & (SQO | GHI)) {
+ else if (Isr & (IPS_BIT_SQO | IPS_BIT_GHI)) {
/* status queue overflow or GHI */
/* just clear the interrupt */
- outb(Isr, ha->io_addr + HISR);
+ outb(Isr, ha->io_addr + IPS_REG_HISR);
}
return (0);
static int
ips_wait(ips_ha_t *ha, int time, int intr) {
int ret;
+ u8 done;
DBG("ips_wait");
ret = IPS_FAILURE;
+ done = FALSE;
- time *= ONE_SEC; /* convert seconds to milliseconds */
+ time *= IPS_ONE_SEC; /* convert seconds to milliseconds */
- while (time > 0) {
- if (intr == IPS_INTR_OFF) {
+ while ((time > 0) && (!done)) {
+ if (intr == IPS_INTR_ON) {
+ if (ha->waitflag == FALSE) {
+ ret = IPS_SUCCESS;
+ done = TRUE;
+ break;
+ }
+ } else if (intr == IPS_INTR_IORL) {
if (ha->waitflag == FALSE) {
/*
* controller generated an interupt to
* and ips_intr() has serviced the interrupt.
*/
ret = IPS_SUCCESS;
+ done = TRUE;
break;
}
/*
- * NOTE: Interrupts are disabled here
- * On an SMP system interrupts will only
- * be disabled on one processor.
- * So, ultimately we still need to set the
- * "I'm in the interrupt handler flag"
+ * NOTE: we already have the io_request_lock so
+ * even if we get an interrupt it won't get serviced
+ * until after we finish.
*/
+
while (test_and_set_bit(IPS_IN_INTR, &ha->flags))
UDELAY(1000);
ips_intr(ha);
clear_bit(IPS_IN_INTR, &ha->flags);
-
- } else {
+ } else if (intr == IPS_INTR_HAL) {
if (ha->waitflag == FALSE) {
+ /*
+ * controller generated an interrupt to
+ * acknowledge completion of the command
+ * and ips_intr() has serviced the interrupt.
+ */
ret = IPS_SUCCESS;
+ done = TRUE;
break;
}
+
+ /*
+ * NOTE: since we were not called with the io_request_lock
+ * we must obtain it before we can call the interrupt handler.
+ * We were called under the HA lock so we can assume that interrupts
+ * are masked.
+ */
+ spin_lock(&io_request_lock);
+
+ while (test_and_set_bit(IPS_IN_INTR, &ha->flags))
+ UDELAY(1000);
+
+ ips_intr(ha);
+
+ clear_bit(IPS_IN_INTR, &ha->flags);
+
+ spin_unlock(&io_request_lock);
}
UDELAY(1000); /* 1 milisecond */
/* */
/****************************************************************************/
static int
-ips_write_driver_status(ips_ha_t *ha) {
+ips_write_driver_status(ips_ha_t *ha, int intr) {
DBG("ips_write_driver_status");
- if (!ips_readwrite_page5(ha, FALSE)) {
+ if (!ips_readwrite_page5(ha, FALSE, intr)) {
printk(KERN_WARNING "(%s%d) unable to read NVRAM page 5.\n",
ips_name, ha->host_num);
/* check to make sure the page has a valid */
/* signature */
- if (ha->nvram->signature != NVRAM_PAGE5_SIGNATURE) {
+ if (ha->nvram->signature != IPS_NVRAM_P5_SIG) {
#if IPS_DEBUG >= 1
printk("(%s%d) NVRAM page 5 has an invalid signature: %X.\n",
ips_name, ha->host_num, ha->nvram->signature);
ha->ad_type = ha->nvram->adapter_type;
/* change values (as needed) */
- ha->nvram->operating_system = OS_LINUX;
+ ha->nvram->operating_system = IPS_OS_LINUX;
strncpy((char *) ha->nvram->driver_high, IPS_VERSION_HIGH, 4);
strncpy((char *) ha->nvram->driver_low, IPS_VERSION_LOW, 4);
/* now update the page */
- if (!ips_readwrite_page5(ha, TRUE)) {
+ if (!ips_readwrite_page5(ha, TRUE, intr)) {
printk(KERN_WARNING "(%s%d) unable to write NVRAM page 5.\n",
ips_name, ha->host_num);
/* */
/****************************************************************************/
static int
-ips_read_adapter_status(ips_ha_t *ha) {
+ips_read_adapter_status(ips_ha_t *ha, int intr) {
ips_scb_t *scb;
int ret;
ips_init_scb(ha, scb);
scb->timeout = ips_cmd_timeout;
- scb->cdb[0] = ENQUIRY;
+ scb->cdb[0] = IPS_CMD_ENQUIRY;
- scb->cmd.basic_io.op_code = ENQUIRY;
+ scb->cmd.basic_io.op_code = IPS_CMD_ENQUIRY;
scb->cmd.basic_io.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.basic_io.sg_count = 0;
scb->cmd.basic_io.sg_addr = VIRT_TO_BUS(ha->enq);
scb->cmd.basic_io.reserved = 0;
/* send command */
- ret = ips_send_wait(ha, scb, ips_cmd_timeout);
+ ret = ips_send_wait(ha, scb, ips_cmd_timeout, intr);
if ((ret == IPS_FAILURE) || (ret == IPS_SUCCESS_IMM))
return (0);
/* */
/****************************************************************************/
static int
-ips_read_subsystem_parameters(ips_ha_t *ha) {
+ips_read_subsystem_parameters(ips_ha_t *ha, int intr) {
ips_scb_t *scb;
int ret;
ips_init_scb(ha, scb);
scb->timeout = ips_cmd_timeout;
- scb->cdb[0] = GET_SUBSYS_PARAM;
+ scb->cdb[0] = IPS_CMD_GET_SUBSYS;
- scb->cmd.basic_io.op_code = GET_SUBSYS_PARAM;
+ scb->cmd.basic_io.op_code = IPS_CMD_GET_SUBSYS;
scb->cmd.basic_io.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.basic_io.sg_count = 0;
scb->cmd.basic_io.sg_addr = VIRT_TO_BUS(ha->subsys);
scb->cmd.basic_io.reserved = 0;
/* send command */
- ret = ips_send_wait(ha, scb, ips_cmd_timeout);
+ ret = ips_send_wait(ha, scb, ips_cmd_timeout, intr);
if ((ret == IPS_FAILURE) || (ret == IPS_SUCCESS_IMM))
return (0);
/* */
/****************************************************************************/
static int
-ips_read_config(ips_ha_t *ha) {
+ips_read_config(ips_ha_t *ha, int intr) {
ips_scb_t *scb;
int i;
int ret;
ips_init_scb(ha, scb);
scb->timeout = ips_cmd_timeout;
- scb->cdb[0] = READ_NVRAM_CONFIGURATION;
+ scb->cdb[0] = IPS_CMD_READ_CONF;
- scb->cmd.basic_io.op_code = READ_NVRAM_CONFIGURATION;
+ scb->cmd.basic_io.op_code = IPS_CMD_READ_CONF;
scb->cmd.basic_io.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.basic_io.sg_addr = VIRT_TO_BUS(ha->conf);
/* send command */
- if (((ret = ips_send_wait(ha, scb, ips_cmd_timeout)) == IPS_FAILURE) ||
+ if (((ret = ips_send_wait(ha, scb, ips_cmd_timeout, intr)) == IPS_FAILURE) ||
(ret == IPS_SUCCESS_IMM) ||
- ((scb->basic_status & GSC_STATUS_MASK) > 1)) {
+ ((scb->basic_status & IPS_GSC_STATUS_MASK) > 1)) {
- memset(ha->conf, 0, sizeof(CONFCMD));
+ memset(ha->conf, 0, sizeof(IPS_CONF));
/* reset initiator IDs */
ha->conf->init_id[0] = IPS_ADAPTER_ID;
/* */
/* Routine Description: */
/* */
-/* Read the configuration on the adapter */
+/* Read NVRAM page 5 from the adapter */
/* */
/****************************************************************************/
static int
-ips_readwrite_page5(ips_ha_t *ha, int write) {
+ips_readwrite_page5(ips_ha_t *ha, int write, int intr) {
ips_scb_t *scb;
int ret;
ips_init_scb(ha, scb);
scb->timeout = ips_cmd_timeout;
- scb->cdb[0] = RW_NVRAM_PAGE;
+ scb->cdb[0] = IPS_CMD_RW_NVRAM_PAGE;
- scb->cmd.nvram.op_code = RW_NVRAM_PAGE;
+ scb->cmd.nvram.op_code = IPS_CMD_RW_NVRAM_PAGE;
scb->cmd.nvram.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.nvram.page = 5;
scb->cmd.nvram.write = write;
scb->cmd.nvram.reserved2 = 0;
/* issue the command */
- if (((ret = ips_send_wait(ha, scb, ips_cmd_timeout)) == IPS_FAILURE) ||
+ if (((ret = ips_send_wait(ha, scb, ips_cmd_timeout, intr)) == IPS_FAILURE) ||
(ret == IPS_SUCCESS_IMM) ||
- ((scb->basic_status & GSC_STATUS_MASK) > 1)) {
+ ((scb->basic_status & IPS_GSC_STATUS_MASK) > 1)) {
- memset(ha->nvram, 0, sizeof(NVRAM_PAGE5));
+ memset(ha->nvram, 0, sizeof(IPS_NVRAM_P5));
return (0);
}
/* */
/****************************************************************************/
static int
-ips_clear_adapter(ips_ha_t *ha) {
+ips_clear_adapter(ips_ha_t *ha, int intr) {
ips_scb_t *scb;
int ret;
ips_init_scb(ha, scb);
scb->timeout = ips_reset_timeout;
- scb->cdb[0] = CONFIG_SYNC;
+ scb->cdb[0] = IPS_CMD_CONFIG_SYNC;
- scb->cmd.config_sync.op_code = CONFIG_SYNC;
+ scb->cmd.config_sync.op_code = IPS_CMD_CONFIG_SYNC;
scb->cmd.config_sync.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.config_sync.channel = 0;
- scb->cmd.config_sync.source_target = POCL;
+ scb->cmd.config_sync.source_target = IPS_POCL;
scb->cmd.config_sync.reserved = 0;
scb->cmd.config_sync.reserved2 = 0;
scb->cmd.config_sync.reserved3 = 0;
/* issue command */
- ret = ips_send_wait(ha, scb, ips_reset_timeout);
+ ret = ips_send_wait(ha, scb, ips_reset_timeout, intr);
if ((ret == IPS_FAILURE) || (ret == IPS_SUCCESS_IMM))
return (0);
/* send unlock stripe command */
ips_init_scb(ha, scb);
- scb->cdb[0] = GET_ERASE_ERROR_TABLE;
+ scb->cdb[0] = IPS_CMD_ERROR_TABLE;
scb->timeout = ips_reset_timeout;
- scb->cmd.unlock_stripe.op_code = GET_ERASE_ERROR_TABLE;
+ scb->cmd.unlock_stripe.op_code = IPS_CMD_ERROR_TABLE;
scb->cmd.unlock_stripe.command_id = IPS_COMMAND_ID(ha, scb);
scb->cmd.unlock_stripe.log_drv = 0;
- scb->cmd.unlock_stripe.control = CSL;
+ scb->cmd.unlock_stripe.control = IPS_CSL;
scb->cmd.unlock_stripe.reserved = 0;
scb->cmd.unlock_stripe.reserved2 = 0;
scb->cmd.unlock_stripe.reserved3 = 0;
/* issue command */
- ret = ips_send_wait(ha, scb, ips_reset_timeout);
+ ret = ips_send_wait(ha, scb, ips_reset_timeout, intr);
if ((ret == IPS_FAILURE) || (ret == IPS_SUCCESS_IMM))
return (0);
return (1);
}
+/****************************************************************************/
+/* */
+/* Routine Name: ips_ffdc_reset */
+/* */
+/* Routine Description: */
+/* */
+/* FFDC: write reset info */
+/* */
+/****************************************************************************/
+static void
+ips_ffdc_reset(ips_ha_t *ha, int intr) {
+ ips_scb_t *scb;
+
+ DBG("ips_ffdc_reset");
+
+ scb = &ha->scbs[ha->max_cmds-1];
+
+ ips_init_scb(ha, scb);
+
+ scb->timeout = ips_cmd_timeout;
+ scb->cdb[0] = IPS_CMD_FFDC;
+ scb->cmd.ffdc.op_code = IPS_CMD_FFDC;
+ scb->cmd.ffdc.command_id = IPS_COMMAND_ID(ha, scb);
+ scb->cmd.ffdc.reset_count = ha->reset_count;
+ scb->cmd.ffdc.reset_type = 0x80;
+
+ /* convert time to what the card wants */
+ ips_fix_ffdc_time(ha, scb, ha->last_ffdc);
+
+ /* issue command */
+ ips_send_wait(ha, scb, ips_cmd_timeout, intr);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_ffdc_time */
+/* */
+/* Routine Description: */
+/* */
+/* FFDC: write time info */
+/* */
+/****************************************************************************/
+static void
+ips_ffdc_time(ips_ha_t *ha, int intr) {
+ ips_scb_t *scb;
+
+ DBG("ips_ffdc_time");
+
+#if IPS_DEBUG >= 1
+ printk(KERN_NOTICE "(%s%d) Sending time update.\n",
+ ips_name, ha->host_num);
+#endif
+
+ scb = &ha->scbs[ha->max_cmds-1];
+
+ ips_init_scb(ha, scb);
+
+ scb->timeout = ips_cmd_timeout;
+ scb->cdb[0] = IPS_CMD_FFDC;
+ scb->cmd.ffdc.op_code = IPS_CMD_FFDC;
+ scb->cmd.ffdc.command_id = IPS_COMMAND_ID(ha, scb);
+ scb->cmd.ffdc.reset_count = 0;
+ scb->cmd.ffdc.reset_type = 0x80;
+
+ /* convert time to what the card wants */
+ ips_fix_ffdc_time(ha, scb, ha->last_ffdc);
+
+ /* issue command */
+ ips_send_wait(ha, scb, ips_cmd_timeout, intr);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_fix_ffdc_time */
+/* */
+/* Routine Description: */
+/* Adjust time_t to what the card wants */
+/* */
+/****************************************************************************/
+static void
+ips_fix_ffdc_time(ips_ha_t *ha, ips_scb_t *scb, time_t current_time) {
+ long days;
+ long rem;
+ int i;
+ int year;
+ int yleap;
+ int year_lengths[2] = { IPS_DAYS_NORMAL_YEAR, IPS_DAYS_LEAP_YEAR };
+ int month_lengths[12][2] = { {31, 31},
+ {28, 29},
+ {31, 31},
+ {30, 30},
+ {31, 31},
+ {30, 30},
+ {31, 31},
+ {31, 31},
+ {30, 30},
+ {31, 31},
+ {30, 30},
+ {31, 31} };
+
+ days = current_time / IPS_SECS_DAY;
+ rem = current_time % IPS_SECS_DAY;
+
+ scb->cmd.ffdc.hour = (rem / IPS_SECS_HOUR);
+ rem = rem % IPS_SECS_HOUR;
+ scb->cmd.ffdc.minute = (rem / IPS_SECS_MIN);
+ scb->cmd.ffdc.second = (rem % IPS_SECS_MIN);
+
+ year = IPS_EPOCH_YEAR;
+ while (days < 0 || days >= year_lengths[yleap = IPS_IS_LEAP_YEAR(year)]) {
+ int newy;
+
+ newy = year + (days / IPS_DAYS_NORMAL_YEAR);
+ if (days < 0)
+ --newy;
+ days -= (newy - year) * IPS_DAYS_NORMAL_YEAR +
+ IPS_NUM_LEAP_YEARS_THROUGH(newy - 1) -
+ IPS_NUM_LEAP_YEARS_THROUGH(year - 1);
+ year = newy;
+ }
+
+ scb->cmd.ffdc.yearH = year / 100;
+ scb->cmd.ffdc.yearL = year % 100;
+
+ for (i = 0; days >= month_lengths[i][yleap]; ++i)
+ days -= month_lengths[i][yleap];
+
+ scb->cmd.ffdc.month = i + 1;
+ scb->cmd.ffdc.day = days + 1;
+}
+
+/****************************************************************************
+ * BIOS Flash Routines *
+ ****************************************************************************/
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_erase_bios */
+/* */
+/* Routine Description: */
+/* Erase the BIOS on the adapter */
+/* */
+/****************************************************************************/
+static int
+ips_erase_bios(ips_ha_t *ha) {
+ int timeout;
+ u8 status;
+
+ /* Clear the status register */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ outb(0x50, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ /* Erase Setup */
+ outb(0x20, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ /* Erase Confirm */
+ outb(0xD0, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ /* Erase Status */
+ outb(0x70, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ timeout = 80000; /* 80 seconds */
+
+ while (timeout > 0) {
+ if (ha->revision_id == IPS_REVID_TROMBONE64) {
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ UDELAY(5); /* 5 us */
+ }
+
+ status = inb(ha->io_addr + IPS_REG_FLDP);
+
+ if (status & 0x80)
+ break;
+
+ MDELAY(1);
+ timeout--;
+ }
+
+ /* check for timeout */
+ if (timeout <= 0) {
+ /* timeout */
+
+ /* try to suspend the erase */
+ outb(0xB0, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ /* wait for 10 seconds */
+ timeout = 10000;
+ while (timeout > 0) {
+ if (ha->revision_id == IPS_REVID_TROMBONE64) {
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ UDELAY(5); /* 5 us */
+ }
+
+ status = inb(ha->io_addr + IPS_REG_FLDP);
+
+ if (status & 0xC0)
+ break;
+
+ MDELAY(1);
+ timeout--;
+ }
+
+ return (1);
+ }
+
+ /* check for valid VPP */
+ if (status & 0x08)
+ /* VPP failure */
+ return (1);
+
+ /* check for successful flash */
+ if (status & 0x30)
+ /* sequence error */
+ return (1);
+
+ /* Otherwise, we were successful */
+ /* clear status */
+ outb(0x50, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ /* enable reads */
+ outb(0xFF, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ return (0);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_program_bios */
+/* */
+/* Routine Description: */
+/* Program the BIOS on the adapter */
+/* */
+/****************************************************************************/
+static int
+ips_program_bios(ips_ha_t *ha, char *buffer, int buffersize) {
+ int i;
+ int timeout;
+ u8 status;
+
+ for (i = 0; i < buffersize; i++) {
+ /* write a byte */
+ outl(i, ha->io_addr + IPS_REG_FLAP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ outb(0x40, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ outb(buffer[i], ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ /* wait up to one second */
+ timeout = 1000;
+ while (timeout > 0) {
+ if (ha->revision_id == IPS_REVID_TROMBONE64) {
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ UDELAY(5); /* 5 us */
+ }
+
+ status = inb(ha->io_addr + IPS_REG_FLDP);
+
+ if (status & 0x80)
+ break;
+
+ MDELAY(1);
+ timeout--;
+ }
+
+ if (timeout == 0) {
+ /* timeout error */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ outb(0xFF, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ return (1);
+ }
+
+ /* check the status */
+ if (status & 0x18) {
+ /* programming error */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ outb(0xFF, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ return (1);
+ }
+ } /* end for */
+
+ /* Enable reading */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ outb(0xFF, ha->io_addr + IPS_REG_FLDP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ return (0);
+}
+
+/****************************************************************************/
+/* */
+/* Routine Name: ips_verify_bios */
+/* */
+/* Routine Description: */
+/* Verify the BIOS on the adapter */
+/* */
+/****************************************************************************/
+static int
+ips_verify_bios(ips_ha_t *ha, char *buffer, int buffersize) {
+ u8 checksum;
+ int i;
+
+ /* test 1st byte */
+ outl(0, ha->io_addr + IPS_REG_FLAP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ if (inb(ha->io_addr + IPS_REG_FLDP) != 0x55)
+ return (1);
+
+ outl(1, ha->io_addr + IPS_REG_FLAP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+ if (inb(ha->io_addr + IPS_REG_FLDP) != 0xAA)
+ return (1);
+
+ checksum = 0xff;
+ for (i = 2; i < buffersize; i++) {
+
+ outl(i, ha->io_addr + IPS_REG_FLAP);
+ if (ha->revision_id == IPS_REVID_TROMBONE64)
+ UDELAY(5); /* 5 us */
+
+ checksum = (u8) checksum + inb(ha->io_addr + IPS_REG_FLDP);
+ }
+
+ if (checksum != 0)
+ /* failure */
+ return (1);
+ else
+ /* success */
+ return (0);
+}
+
#if defined (MODULE)
Scsi_Host_Template driver_template = IPS;
* Some handy macros
*/
#ifndef LinuxVersionCode
- #define LinuxVersionCode(x,y,z) (((x)<<16)+((y)<<8)+(z))
+ #define LinuxVersionCode(x,y,z) (((x)<<16)+((y)<<8)+(z))
#endif
- #define HA(x) ((ips_ha_t *) x->hostdata)
+ #define IPS_HA(x) ((ips_ha_t *) x->hostdata)
#define IPS_COMMAND_ID(ha, scb) (int) (scb - ha->scbs)
- #define VIRT_TO_BUS(x) (unsigned int)virt_to_bus((void *) x)
+
+ #ifndef VIRT_TO_BUS
+ #define VIRT_TO_BUS(x) (unsigned int)virt_to_bus((void *) x)
+ #endif
- #define UDELAY udelay
- #define MDELAY mdelay
+ #ifndef UDELAY
+ #define UDELAY udelay
+ #endif
+
+ #ifndef MDELAY
+ #define MDELAY mdelay
+ #endif
- #define verify_area_20(t,a,sz) (0) /* success */
- #define PUT_USER put_user
- #define __PUT_USER __put_user
- #define PUT_USER_RET put_user_ret
- #define GET_USER get_user
- #define __GET_USER __get_user
- #define GET_USER_RET get_user_ret
+ #ifndef verify_area_20
+ #define verify_area_20(t,a,sz) (0) /* success */
+ #endif
+
+ #ifndef PUT_USER
+ #define PUT_USER put_user
+ #endif
+
+ #ifndef __PUT_USER
+ #define __PUT_USER __put_user
+ #endif
+
+ #ifndef PUT_USER_RET
+ #define PUT_USER_RET put_user_ret
+ #endif
+
+ #ifndef GET_USER
+ #define GET_USER get_user
+ #endif
+
+ #ifndef __GET_USER
+ #define __GET_USER __get_user
+ #endif
+
+ #ifndef GET_USER_RET
+ #define GET_USER_RET get_user_ret
+ #endif
-/*
- * Adapter address map equates
- */
- #define HISR 0x08 /* Host Interrupt Status Reg */
- #define CCSAR 0x10 /* Cmd Channel System Addr Reg */
- #define CCCR 0x14 /* Cmd Channel Control Reg */
- #define SQHR 0x20 /* Status Q Head Reg */
- #define SQTR 0x24 /* Status Q Tail Reg */
- #define SQER 0x28 /* Status Q End Reg */
- #define SQSR 0x2C /* Status Q Start Reg */
- #define SCPR 0x05 /* Subsystem control port reg */
- #define ISPR 0x06 /* interrupt status port reg */
- #define CBSP 0x07 /* CBSP register */
+ /*
+ * Lock macros
+ */
+ #define IPS_SCB_LOCK(cpu_flags) spin_lock_irqsave(&ha->scb_lock, cpu_flags)
+ #define IPS_SCB_UNLOCK(cpu_flags) spin_unlock_irqrestore(&ha->scb_lock, cpu_flags)
+ #define IPS_QUEUE_LOCK(queue) spin_lock_irqsave(&(queue)->lock, (queue)->cpu_flags)
+ #define IPS_QUEUE_UNLOCK(queue) spin_unlock_irqrestore(&(queue)->lock, (queue)->cpu_flags)
+ #define IPS_HA_LOCK(cpu_flags) spin_lock_irqsave(&ha->ips_lock, cpu_flags)
+ #define IPS_HA_UNLOCK(cpu_flags) spin_unlock_irqrestore(&ha->ips_lock, cpu_flags)
-/*
- * Adapter register bit equates
- */
- #define GHI 0x04 /* HISR General Host Interrupt */
- #define SQO 0x02 /* HISR Status Q Overflow */
- #define SCE 0x01 /* HISR Status Channel Enqueue */
- #define SEMAPHORE 0x08 /* CCCR Semaphore Bit */
- #define ILE 0x10 /* CCCR ILE Bit */
- #define START_COMMAND 0x101A /* CCCR Start Command Channel */
- #define START_STOP_BIT 0x0002 /* CCCR Start/Stop Bit */
- #define RST 0x80 /* SCPR Reset Bit */
- #define EBM 0x02 /* SCPR Enable Bus Master */
- #define EI 0x80 /* HISR Enable Interrupts */
- #define OP 0x01 /* OP bit in CBSP */
+ /*
+ * Adapter address map equates
+ */
+ #define IPS_REG_HISR 0x08 /* Host Interrupt Status Reg */
+ #define IPS_REG_CCSAR 0x10 /* Cmd Channel System Addr Reg */
+ #define IPS_REG_CCCR 0x14 /* Cmd Channel Control Reg */
+ #define IPS_REG_SQHR 0x20 /* Status Q Head Reg */
+ #define IPS_REG_SQTR 0x24 /* Status Q Tail Reg */
+ #define IPS_REG_SQER 0x28 /* Status Q End Reg */
+ #define IPS_REG_SQSR 0x2C /* Status Q Start Reg */
+ #define IPS_REG_SCPR 0x05 /* Subsystem control port reg */
+ #define IPS_REG_ISPR 0x06 /* interrupt status port reg */
+ #define IPS_REG_CBSP 0x07 /* CBSP register */
+ #define IPS_REG_FLAP 0x18 /* Flash address port */
+ #define IPS_REG_FLDP 0x1C /* Flash data port */
-/*
- * Adapter Command ID Equates
- */
- #define GET_LOGICAL_DRIVE_INFO 0x19
- #define GET_SUBSYS_PARAM 0x40
- #define READ_NVRAM_CONFIGURATION 0x38
- #define RW_NVRAM_PAGE 0xBC
- #define IPS_READ 0x02
- #define IPS_WRITE 0x03
- #define ENQUIRY 0x05
- #define FLUSH_CACHE 0x0A
- #define NORM_STATE 0x00
- #define READ_SCATTER_GATHER 0x82
- #define WRITE_SCATTER_GATHER 0x83
- #define DIRECT_CDB 0x04
- #define DIRECT_CDB_SCATTER_GATHER 0x84
- #define CONFIG_SYNC 0x58
- #define POCL 0x30
- #define GET_ERASE_ERROR_TABLE 0x17
- #define RESET_CHANNEL 0x1A
- #define CSL 0xFF
- #define ADAPT_RESET 0xFF
+ /*
+ * Adapter register bit equates
+ */
+ #define IPS_BIT_GHI 0x04 /* HISR General Host Interrupt */
+ #define IPS_BIT_SQO 0x02 /* HISR Status Q Overflow */
+ #define IPS_BIT_SCE 0x01 /* HISR Status Channel Enqueue */
+ #define IPS_BIT_SEM 0x08 /* CCCR Semaphore Bit */
+ #define IPS_BIT_ILE 0x10 /* CCCR ILE Bit */
+ #define IPS_BIT_START_CMD 0x101A /* CCCR Start Command Channel */
+ #define IPS_BIT_START_STOP 0x0002 /* CCCR Start/Stop Bit */
+ #define IPS_BIT_RST 0x80 /* SCPR Reset Bit */
+ #define IPS_BIT_EBM 0x02 /* SCPR Enable Bus Master */
+ #define IPS_BIT_EI 0x80 /* HISR Enable Interrupts */
+ #define IPS_BIT_OP 0x01 /* OP bit in CBSP */
-/*
- * Adapter Equates
- */
+ /*
+ * Adapter Command ID Equates
+ */
+ #define IPS_CMD_GET_LD_INFO 0x19
+ #define IPS_CMD_GET_SUBSYS 0x40
+ #define IPS_CMD_READ_CONF 0x38
+ #define IPS_CMD_RW_NVRAM_PAGE 0xBC
+ #define IPS_CMD_READ 0x02
+ #define IPS_CMD_WRITE 0x03
+ #define IPS_CMD_FFDC 0xD7
+ #define IPS_CMD_ENQUIRY 0x05
+ #define IPS_CMD_FLUSH 0x0A
+ #define IPS_CMD_READ_SG 0x82
+ #define IPS_CMD_WRITE_SG 0x83
+ #define IPS_CMD_DCDB 0x04
+ #define IPS_CMD_DCDB_SG 0x84
+ #define IPS_CMD_CONFIG_SYNC 0x58
+ #define IPS_CMD_ERROR_TABLE 0x17
+
+ /*
+ * Adapter Equates
+ */
+ #define IPS_CSL 0xFF
+ #define IPS_POCL 0x30
+ #define IPS_NORM_STATE 0x00
#define IPS_MAX_ADAPTERS 16
#define IPS_MAX_IOCTL 1
#define IPS_MAX_IOCTL_QUEUE 8
#define IPS_MAX_QUEUE 128
#define IPS_BLKSIZE 512
- #define MAX_SG_ELEMENTS 17
- #define MAX_LOGICAL_DRIVES 8
- #define MAX_CHANNELS 3
- #define MAX_TARGETS 15
- #define MAX_CHUNKS 16
- #define MAX_CMDS 128
+ #define IPS_MAX_SG 17
+ #define IPS_MAX_LD 8
+ #define IPS_MAX_CHANNELS 4
+ #define IPS_MAX_TARGETS 15
+ #define IPS_MAX_CHUNKS 16
+ #define IPS_MAX_CMDS 128
#define IPS_MAX_XFER 0x10000
- #define COMP_MODE_HEADS 128
- #define COMP_MODE_SECTORS 32
- #define NORM_MODE_HEADS 254
- #define NORM_MODE_SECTORS 63
- #define NVRAM_PAGE5_SIGNATURE 0xFFDDBB99
- #define MAX_POST_BYTES 0x02
- #define MAX_CONFIG_BYTES 0x02
- #define GOOD_POST_BASIC_STATUS 0x80
- #define SEMAPHORE_TIMEOUT 2000
- #define IPS_INTR_OFF 0
- #define IPS_INTR_ON 1
+ #define IPS_NVRAM_P5_SIG 0xFFDDBB99
+ #define IPS_MAX_POST_BYTES 0x02
+ #define IPS_MAX_CONFIG_BYTES 0x02
+ #define IPS_GOOD_POST_STATUS 0x80
+ #define IPS_SEM_TIMEOUT 2000
+ #define IPS_IOCTL_COMMAND 0x0D
+ #define IPS_IOCTL_NEW_COMMAND 0x81
+ #define IPS_INTR_ON 0
+ #define IPS_INTR_IORL 1
+ #define IPS_INTR_HAL 2
#define IPS_ADAPTER_ID 0xF
#define IPS_VENDORID 0x1014
#define IPS_DEVICEID 0x002E
- #define TIMEOUT_10 0x10
- #define TIMEOUT_60 0x20
- #define TIMEOUT_20M 0x30
- #define STATUS_SIZE 4
- #define STATUS_Q_SIZE (MAX_CMDS+1) * STATUS_SIZE
- #define ONE_MSEC 1
- #define ONE_SEC 1000
+ #define IPS_IOCTL_SIZE 8192
+ #define IPS_STATUS_SIZE 4
+ #define IPS_STATUS_Q_SIZE ((IPS_MAX_CMDS+1) * IPS_STATUS_SIZE)
+ #define IPS_ONE_MSEC 1
+ #define IPS_ONE_SEC 1000
+
+ /*
+ * Geometry Settings
+ */
+ #define IPS_COMP_HEADS 128
+ #define IPS_COMP_SECTORS 32
+ #define IPS_NORM_HEADS 254
+ #define IPS_NORM_SECTORS 63
-/*
- * Adapter Basic Status Codes
- */
- #define BASIC_STATUS_MASK 0xFF
- #define GSC_STATUS_MASK 0x0F
- #define SSUCCESS 0x00
- #define RECOVERED_ERROR 0x01
- #define IPS_CHECK_CONDITION 0x02
- #define INVAL_OPCO 0x03
- #define INVAL_CMD_BLK 0x04
- #define INVAL_PARM_BLK 0x05
+ /*
+ * Adapter Basic Status Codes
+ */
+ #define IPS_BASIC_STATUS_MASK 0xFF
+ #define IPS_GSC_STATUS_MASK 0x0F
+ #define IPS_CMD_SUCCESS 0x00
+ #define IPS_CMD_RECOVERED_ERROR 0x01
+ #define IPS_INVAL_OPCO 0x03
+ #define IPS_INVAL_CMD_BLK 0x04
+ #define IPS_INVAL_PARM_BLK 0x05
#define IPS_BUSY 0x08
- #define ADAPT_HARDWARE_ERROR 0x09
- #define ADAPT_FIRMWARE_ERROR 0x0A
- #define CMD_CMPLT_WERROR 0x0C
- #define LOG_DRV_ERROR 0x0D
- #define CMD_TIMEOUT 0x0E
- #define PHYS_DRV_ERROR 0x0F
+ #define IPS_CMD_CMPLT_WERROR 0x0C
+ #define IPS_LD_ERROR 0x0D
+ #define IPS_CMD_TIMEOUT 0x0E
+ #define IPS_PHYS_DRV_ERROR 0x0F
-/*
- * Adapter Extended Status Equates
- */
- #define SELECTION_TIMEOUT 0xF0
- #define DATA_OVER_UNDER_RUN 0xF2
- #define EXT_HOST_RESET 0xF7
- #define EXT_DEVICE_RESET 0xF8
- #define EXT_RECOVERY 0xFC
- #define EXT_CHECK_CONDITION 0xFF
+ /*
+ * Adapter Extended Status Equates
+ */
+ #define IPS_ERR_SEL_TO 0xF0
+ #define IPS_ERR_OU_RUN 0xF2
+ #define IPS_ERR_HOST_RESET 0xF7
+ #define IPS_ERR_DEV_RESET 0xF8
+ #define IPS_ERR_RECOVERY 0xFC
+ #define IPS_ERR_CKCOND 0xFF
-/*
- * Operating System Defines
- */
- #define OS_WINDOWS_NT 0x01
- #define OS_NETWARE 0x02
- #define OS_OPENSERVER 0x03
- #define OS_UNIXWARE 0x04
- #define OS_SOLARIS 0x05
- #define OS_OS2 0x06
- #define OS_LINUX 0x07
- #define OS_FREEBSD 0x08
+ /*
+ * Operating System Defines
+ */
+ #define IPS_OS_WINDOWS_NT 0x01
+ #define IPS_OS_NETWARE 0x02
+ #define IPS_OS_OPENSERVER 0x03
+ #define IPS_OS_UNIXWARE 0x04
+ #define IPS_OS_SOLARIS 0x05
+ #define IPS_OS_OS2 0x06
+ #define IPS_OS_LINUX 0x07
+ #define IPS_OS_FREEBSD 0x08
-/*
- * Adapter Command/Status Packet Definitions
- */
+ /*
+ * Adapter Revision ID's
+ */
+ #define IPS_REVID_SERVERAID 0x02
+ #define IPS_REVID_NAVAJO 0x03
+ #define IPS_REVID_SERVERAID2 0x04
+ #define IPS_REVID_CLARINETP1 0x05
+ #define IPS_REVID_CLARINETP2 0x07
+ #define IPS_REVID_CLARINETP3 0x0D
+ #define IPS_REVID_TROMBONE32 0x0F
+ #define IPS_REVID_TROMBONE64 0x10
+
+ /*
+ * Adapter Command/Status Packet Definitions
+ */
#define IPS_SUCCESS 0x01 /* Successfully completed */
#define IPS_SUCCESS_IMM 0x02 /* Success - Immediately */
#define IPS_FAILURE 0x04 /* Completed with Error */
-/*
- * Logical Drive Equates
- */
- #define OFF_LINE 0x02
- #define OKAY 0x03
- #define FREE 0x00
- #define SYS 0x06
- #define CRS 0x24
+ /*
+ * Logical Drive Equates
+ */
+ #define IPS_LD_OFFLINE 0x02
+ #define IPS_LD_OKAY 0x03
+ #define IPS_LD_FREE 0x00
+ #define IPS_LD_SYS 0x06
+ #define IPS_LD_CRS 0x24
-/*
- * DCDB Table Equates
- */
-#ifndef HOSTS_C
- #define NO_DISCONNECT 0x00
- #define DISCONNECT_ALLOWED 0x80
- #define NO_AUTO_REQUEST_SENSE 0x40
+ /*
+ * DCDB Table Equates
+ */
+ #define IPS_NO_DISCONNECT 0x00
+ #define IPS_DISCONNECT_ALLOWED 0x80
+ #define IPS_NO_AUTO_REQSEN 0x40
#define IPS_DATA_NONE 0x00
#define IPS_DATA_UNK 0x00
#define IPS_DATA_IN 0x01
#define IPS_DATA_OUT 0x02
- #define TRANSFER_64K 0x08
- #define NOTIMEOUT 0x00
- #define TIMEOUT10 0x10
- #define TIMEOUT60 0x20
- #define TIMEOUT20M 0x30
-/*
- * Host adapter Flags (bit numbers)
- */
+ #define IPS_TRANSFER64K 0x08
+ #define IPS_NOTIMEOUT 0x00
+ #define IPS_TIMEOUT10 0x10
+ #define IPS_TIMEOUT60 0x20
+ #define IPS_TIMEOUT20M 0x30
+
+ /*
+ * Host adapter Flags (bit numbers)
+ */
#define IPS_IN_INTR 0
#define IPS_IN_ABORT 1
#define IPS_IN_RESET 2
-/*
- * SCB Flags
- */
- #define SCB_ACTIVE 0x00001
- #define SCB_WAITING 0x00002
-#endif /* HOSTS_C */
-/*
- * Passthru stuff
- */
- #define COPPUSRCMD (('C'<<8) | 65)
+ /*
+ * SCB Flags
+ */
+ #define IPS_SCB_ACTIVE 0x00001
+ #define IPS_SCB_WAITING 0x00002
+
+ /*
+ * Passthru stuff
+ */
+ #define IPS_COPPUSRCMD (('C'<<8) | 65)
+ #define IPS_COPPIOCCMD (('C'<<8) | 66)
#define IPS_NUMCTRLS (('C'<<8) | 68)
#define IPS_CTRLINFO (('C'<<8) | 69)
+ #define IPS_FLASHBIOS (('C'<<8) | 70)
-/*
- * Scsi_Host Template
- */
+ /* time oriented stuff */
+ #define IPS_IS_LEAP_YEAR(y) ((((y) % 4 == 0) && (((y) % 100 != 0) || ((y) % 400 == 0))) ? 1 : 0)
+ #define IPS_NUM_LEAP_YEARS_THROUGH(y) ((y) / 4 - (y) / 100 + (y) / 400)
+
+ #define IPS_SECS_MIN 60
+ #define IPS_SECS_HOUR 3600
+ #define IPS_SECS_8HOURS 28800
+ #define IPS_SECS_DAY 86400
+ #define IPS_DAYS_NORMAL_YEAR 365
+ #define IPS_DAYS_LEAP_YEAR 366
+ #define IPS_EPOCH_YEAR 1970
+
+ /*
+ * Scsi_Host Template
+ */
#define IPS { \
next : NULL, \
module : NULL, \
- proc_dir : NULL, \
proc_info : NULL, \
name : NULL, \
detect : ips_detect, \
bios_param : ips_biosparam, \
can_queue : 0, \
this_id: -1, \
- sg_tablesize : MAX_SG_ELEMENTS, \
+ sg_tablesize : IPS_MAX_SG, \
cmd_per_lun: 16, \
present : 0, \
unchecked_isa_dma : 0, \
u16 reserved;
u32 ccsar;
u32 cccr;
-} BASIC_IO_CMD, *PBASIC_IO_CMD;
+} IPS_IO_CMD, *PIPS_IO_CMD;
typedef struct {
u8 op_code;
u32 reserved3;
u32 ccsar;
u32 cccr;
-} LOGICAL_INFO, *PLOGICAL_INFO;
+} IPS_LD_CMD, *PIPS_LD_CMD;
typedef struct {
u8 op_code;
u32 reserved3;
u32 buffer_addr;
u32 reserved4;
-} IOCTL_INFO, *PIOCTL_INFO;
+} IPS_IOCTL_CMD, *PIPS_IOCTL_CMD;
typedef struct {
u8 op_code;
u32 reserved3;
u32 ccsar;
u32 cccr;
-} DCDB_CMD, *PDCDB_CMD;
+} IPS_DCDB_CMD, *PIPS_DCDB_CMD;
typedef struct {
u8 op_code;
u32 reserved3;
u32 ccsar;
u32 cccr;
-} CONFIG_SYNC_CMD, *PCONFIG_SYNC_CMD;
+} IPS_CS_CMD, *PIPS_CS_CMD;
typedef struct {
u8 op_code;
u32 reserved3;
u32 ccsar;
u32 cccr;
-} UNLOCK_STRIPE_CMD, *PUNLOCK_STRIPE_CMD;
+} IPS_US_CMD, *PIPS_US_CMD;
typedef struct {
u8 op_code;
u32 reserved4;
u32 ccsar;
u32 cccr;
-} FLUSH_CACHE_CMD, *PFLUSH_CACHE_CMD;
+} IPS_FC_CMD, *PIPS_FC_CMD;
typedef struct {
u8 op_code;
u32 reserved3;
u32 ccsar;
u32 cccr;
-} STATUS_CMD, *PSTATUS_CMD;
+} IPS_STATUS_CMD, *PIPS_STATUS_CMD;
typedef struct {
u8 op_code;
u32 reserved2;
u32 ccsar;
u32 cccr;
-} NVRAM_CMD, *PNVRAM_CMD;
+} IPS_NVRAM_CMD, *PIPS_NVRAM_CMD;
+
+typedef struct {
+ u8 op_code;
+ u8 command_id;
+ u8 reset_count;
+ u8 reset_type;
+ u8 second;
+ u8 minute;
+ u8 hour;
+ u8 day;
+ u8 reserved1[4];
+ u8 month;
+ u8 yearH;
+ u8 yearL;
+ u8 reserved2;
+} IPS_FFDC_CMD, *PIPS_FFDC_CMD;
typedef union {
- BASIC_IO_CMD basic_io;
- LOGICAL_INFO logical_info;
- IOCTL_INFO ioctl_info;
- DCDB_CMD dcdb;
- CONFIG_SYNC_CMD config_sync;
- UNLOCK_STRIPE_CMD unlock_stripe;
- FLUSH_CACHE_CMD flush_cache;
- STATUS_CMD status;
- NVRAM_CMD nvram;
-} HOST_COMMAND, *PHOST_COMMAND;
+ IPS_IO_CMD basic_io;
+ IPS_LD_CMD logical_info;
+ IPS_IOCTL_CMD ioctl_info;
+ IPS_DCDB_CMD dcdb;
+ IPS_CS_CMD config_sync;
+ IPS_US_CMD unlock_stripe;
+ IPS_FC_CMD flush_cache;
+ IPS_STATUS_CMD status;
+ IPS_NVRAM_CMD nvram;
+ IPS_FFDC_CMD ffdc;
+} IPS_HOST_COMMAND, *PIPS_HOST_COMMAND;
typedef struct {
u8 logical_id;
u8 raid_level;
u8 state;
u32 sector_count;
-} DRIVE_INFO, *PDRIVE_INFO;
-
-typedef struct {
- u8 no_of_log_drive;
- u8 reserved[3];
- DRIVE_INFO drive_info[MAX_LOGICAL_DRIVES];
-} LOGICAL_DRIVE_INFO, *PLOGICAL_DRIVE_INFO;
+} IPS_DRIVE_INFO, *PIPS_DRIVE_INFO;
typedef struct {
- u8 ha_num;
- u8 bus_num;
- u8 id;
- u8 device_type;
- u32 data_len;
- u32 data_ptr;
- u8 scsi_cdb[12];
- u32 data_counter;
- u32 block_size;
-} NON_DISK_DEVICE_INFO, *PNON_DISK_DEVICE_INFO;
+ u8 no_of_log_drive;
+ u8 reserved[3];
+ IPS_DRIVE_INFO drive_info[IPS_MAX_LD];
+} IPS_LD_INFO, *PIPS_LD_INFO;
typedef struct {
u8 device_address;
u8 sense_info[64];
u8 scsi_status;
u8 reserved2[3];
-} DCDB_TABLE, *PDCDB_TABLE;
+} IPS_DCDB_TABLE, *PIPS_DCDB_TABLE;
typedef struct {
volatile u8 reserved;
volatile u8 command_id;
volatile u8 basic_status;
volatile u8 extended_status;
-} STATUS, *PSTATUS;
+} IPS_STATUS, *PIPS_STATUS;
typedef struct {
- STATUS status[MAX_CMDS + 1];
- volatile PSTATUS p_status_start;
- volatile PSTATUS p_status_end;
- volatile PSTATUS p_status_tail;
+ IPS_STATUS status[IPS_MAX_CMDS + 1];
+ volatile PIPS_STATUS p_status_start;
+ volatile PIPS_STATUS p_status_end;
+ volatile PIPS_STATUS p_status_tail;
volatile u32 hw_status_start;
volatile u32 hw_status_tail;
- LOGICAL_DRIVE_INFO logical_drive_info;
-} ADAPTER_AREA, *PADAPTER_AREA;
+ IPS_LD_INFO logical_drive_info;
+} IPS_ADAPTER, *PIPS_ADAPTER;
typedef struct {
u8 ucLogDriveCount;
u8 ucNVramDevChgCnt;
u8 CodeBlkVersion[8];
u8 BootBlkVersion[8];
- u32 ulDriveSize[MAX_LOGICAL_DRIVES];
+ u32 ulDriveSize[IPS_MAX_LD];
u8 ucConcurrentCmdCount;
u8 ucMaxPhysicalDevices;
u16 usFlashRepgmCount;
u16 usConfigUpdateCount;
u8 ucBlkFlag;
u8 reserved;
- u16 usAddrDeadDisk[MAX_CHANNELS * MAX_TARGETS];
-} ENQCMD, *PENQCMD;
+ u16 usAddrDeadDisk[IPS_MAX_CHANNELS * IPS_MAX_TARGETS];
+} IPS_ENQ, *PIPS_ENQ;
typedef struct {
u8 ucInitiator;
u8 ucState;
u32 ulBlockCount;
u8 ucDeviceId[28];
-} DEVSTATE, *PDEVSTATE;
+} IPS_DEVSTATE, *PIPS_DEVSTATE;
typedef struct {
u8 ucChn;
u16 ucReserved;
u32 ulStartSect;
u32 ulNoOfSects;
-} CHUNK, *PCHUNK;
+} IPS_CHUNK, *PIPS_CHUNK;
typedef struct {
u16 ucUserField;
u8 ucParams;
u8 ucReserved;
u32 ulLogDrvSize;
- CHUNK chunk[MAX_CHUNKS];
-} LOGICAL_DRIVE, *PLOGICAL_DRIVE;
+ IPS_CHUNK chunk[IPS_MAX_CHUNKS];
+} IPS_LD, *PIPS_LD;
typedef struct {
u8 board_disc[8];
u8 ucCompression;
u8 ucNvramType;
u32 ulNvramSize;
-} HARDWARE_DISC, *PHARDWARE_DISC;
+} IPS_HARDWARE, *PIPS_HARDWARE;
typedef struct {
u8 ucLogDriveCount;
u16 user_field;
u8 ucRebuildRate;
u8 ucReserve;
- HARDWARE_DISC hardware_disc;
- LOGICAL_DRIVE logical_drive[MAX_LOGICAL_DRIVES];
- DEVSTATE dev[MAX_CHANNELS][MAX_TARGETS+1];
+ IPS_HARDWARE hardware_disc;
+ IPS_LD logical_drive[IPS_MAX_LD];
+ IPS_DEVSTATE dev[IPS_MAX_CHANNELS][IPS_MAX_TARGETS+1];
u8 reserved[512];
-} CONFCMD, *PCONFCMD;
+} IPS_CONF, *PIPS_CONF;
typedef struct {
u32 signature;
u8 driver_high[4];
u8 driver_low[4];
u8 reserved4[100];
-} NVRAM_PAGE5, *PNVRAM_PAGE5;
+} IPS_NVRAM_P5, *PIPS_NVRAM_P5;
-typedef struct _SUBSYS_PARAM {
+typedef struct _IPS_SUBSYS {
u32 param[128];
-} SUBSYS_PARAM, *PSUBSYS_PARAM;
+} IPS_SUBSYS, *PIPS_SUBSYS;
/*
* Inquiry Data Format
*/
-#ifndef HOSTS_C
-
typedef struct {
u8 DeviceType:5;
u8 DeviceTypeQualifier:3;
u8 ProductRevisionLevel[4];
u8 VendorSpecific[20];
u8 Reserved3[40];
-} INQUIRYDATA, *PINQUIRYDATA;
+} IPS_INQ_DATA, *PIPS_INQ_DATA;
-#endif
/*
* Read Capacity Data Format
*/
typedef struct {
u32 lba;
u32 len;
-} CAPACITY_T;
+} IPS_CAPACITY;
/*
* Sense Data Format
u32 pg_rmb:1; /* Removeable */
u32 pg_hsec:1; /* Hard sector formatting */
u32 pg_ssec:1; /* Soft sector formatting */
-} DADF_T;
+} IPS_DADF;
typedef struct {
u8 pg_pc:6; /* Page Code */
u32 pg_landu:16; /* Landing zone cylinder (upper) */
u32 pg_landl:8; /* Landing zone cylinder (lower) */
u32 pg_res2:24; /* Reserved */
-} RDDG_T;
+} IPS_RDDG;
-struct blk_desc {
+struct ips_blk_desc {
u8 bd_dencode;
u8 bd_nblks1;
u8 bd_nblks2;
u8 plh_res:7; /* Reserved */
u8 plh_wp:1; /* Write protect */
u8 plh_bdl; /* Block descriptor length */
-} SENSE_PLH_T;
+} ips_sense_plh_t;
typedef struct {
- SENSE_PLH_T plh;
- struct blk_desc blk_desc;
+ ips_sense_plh_t plh;
+ struct ips_blk_desc blk_desc;
union {
- DADF_T pg3;
- RDDG_T pg4;
+ IPS_DADF pg3;
+ IPS_RDDG pg4;
} pdata;
} ips_mdata_t;
typedef struct ips_sglist {
u32 address;
u32 length;
-} SG_LIST, *PSG_LIST;
+} IPS_SG_LIST, *PIPS_SG_LIST;
-typedef struct _INFOSTR {
+typedef struct _IPS_INFOSTR {
char *buffer;
int length;
int offset;
int pos;
-} INFOSTR;
+} IPS_INFOSTR;
/*
* Status Info
typedef struct ips_scb_queue {
struct ips_scb *head;
struct ips_scb *tail;
- unsigned int count;
+ u32 count;
+ u32 cpu_flags;
+ spinlock_t lock;
} ips_scb_queue_t;
/*
typedef struct ips_wait_queue {
Scsi_Cmnd *head;
Scsi_Cmnd *tail;
- unsigned int count;
+ u32 count;
+ u32 cpu_flags;
+ spinlock_t lock;
} ips_wait_queue_t;
+typedef struct ips_copp_wait_item {
+ Scsi_Cmnd *scsi_cmd;
+ struct semaphore *sem;
+ struct ips_copp_wait_item *next;
+} ips_copp_wait_item_t;
+
+typedef struct ips_copp_queue {
+ struct ips_copp_wait_item *head;
+ struct ips_copp_wait_item *tail;
+ u32 count;
+ u32 cpu_flags;
+ spinlock_t lock;
+} ips_copp_queue_t;
+
typedef struct ips_ha {
- u8 ha_id[MAX_CHANNELS+1];
- u32 dcdb_active[MAX_CHANNELS];
+ u8 ha_id[IPS_MAX_CHANNELS+1];
+ u32 dcdb_active[IPS_MAX_CHANNELS];
u32 io_addr; /* Base I/O address */
u8 irq; /* IRQ for adapter */
u8 ntargets; /* Number of targets */
struct ips_scb *scbs; /* Array of all CCBS */
struct ips_scb *scb_freelist; /* SCB free list */
ips_wait_queue_t scb_waitlist; /* Pending SCB list */
- ips_wait_queue_t copp_waitlist; /* Pending PT list */
+ ips_copp_queue_t copp_waitlist; /* Pending PT list */
ips_scb_queue_t scb_activelist; /* Active SCB list */
- BASIC_IO_CMD *dummy; /* dummy command */
- ADAPTER_AREA *adapt; /* Adapter status area */
- ENQCMD *enq; /* Adapter Enquiry data */
- CONFCMD *conf; /* Adapter config data */
- NVRAM_PAGE5 *nvram; /* NVRAM page 5 data */
- SUBSYS_PARAM *subsys; /* Subsystem parameters */
+ IPS_IO_CMD *dummy; /* dummy command */
+ IPS_ADAPTER *adapt; /* Adapter status area */
+ IPS_ENQ *enq; /* Adapter Enquiry data */
+ IPS_CONF *conf; /* Adapter config data */
+ IPS_NVRAM_P5 *nvram; /* NVRAM page 5 data */
+ IPS_SUBSYS *subsys; /* Subsystem parameters */
+ char *ioctl_data; /* IOCTL data area */
+ u32 ioctl_datasize; /* IOCTL data size */
u32 cmd_in_progress; /* Current command in progress*/
u32 flags; /* HA flags */
u8 waitflag; /* are we waiting for cmd */
u8 active;
- u32 reserved:16; /* reserved space */
- struct wait_queue *copp_queue; /* passthru sync queue */
+ u16 reset_count; /* number of resets */
+ u32 last_ffdc; /* last time we sent ffdc info*/
+ u8 revision_id; /* Revision level */
#if LINUX_VERSION_CODE >= LinuxVersionCode(2,1,0)
spinlock_t scb_lock;
spinlock_t copp_lock;
+ spinlock_t ips_lock;
#endif
} ips_ha_t;
-typedef void (*scb_callback) (ips_ha_t *, struct ips_scb *);
+typedef void (*ips_scb_callback) (ips_ha_t *, struct ips_scb *);
/*
* SCB Format
*/
typedef struct ips_scb {
- HOST_COMMAND cmd;
- DCDB_TABLE dcdb;
+ IPS_HOST_COMMAND cmd;
+ IPS_DCDB_TABLE dcdb;
u8 target_id;
u8 bus;
u8 lun;
u32 sg_len;
u32 flags;
u32 op_code;
- SG_LIST *sg_list;
+ IPS_SG_LIST *sg_list;
Scsi_Cmnd *scsi_cmd;
struct ips_scb *q_next;
- scb_callback callback;
+ ips_scb_callback callback;
+ struct semaphore *sem;
} ips_scb_t;
+typedef struct ips_scb_pt {
+ IPS_HOST_COMMAND cmd;
+ IPS_DCDB_TABLE dcdb;
+ u8 target_id;
+ u8 bus;
+ u8 lun;
+ u8 cdb[12];
+ u32 scb_busaddr;
+ u32 data_busaddr;
+ u32 timeout;
+ u8 basic_status;
+ u8 extended_status;
+ u16 breakup;
+ u32 data_len;
+ u32 sg_len;
+ u32 flags;
+ u32 op_code;
+ IPS_SG_LIST *sg_list;
+ Scsi_Cmnd *scsi_cmd;
+ struct ips_scb *q_next;
+ ips_scb_callback callback;
+} ips_scb_pt_t;
+
/*
* Passthru Command Format
*/
typedef struct {
- u8 CoppID[4];
- u32 CoppCmd;
- u32 PtBuffer;
- u8 *CmdBuffer;
- u32 CmdBSize;
- ips_scb_t CoppCP;
- u32 TimeOut;
- u8 BasicStatus;
- u8 ExtendedStatus;
- u16 reserved;
+ u8 CoppID[4];
+ u32 CoppCmd;
+ u32 PtBuffer;
+ u8 *CmdBuffer;
+ u32 CmdBSize;
+ ips_scb_pt_t CoppCP;
+ u32 TimeOut;
+ u8 BasicStatus;
+ u8 ExtendedStatus;
+ u16 reserved;
} ips_passthru_t;
#endif
-
-
/*
* Overrides for Emacs so that we almost follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
"megaraid: to protect your data, please upgrade your firmware to version\n"
"megaraid: 3.10 or later, available from the Dell Technical Support web\n"
"megaraid: site at\n"
-"http://support.dell.com/us/en/filelib/download/index.asp?fileid=2489\n");
+"http://support.dell.com/us/en/filelib/download/index.asp?fileid=2940\n");
megaraid_release (host);
#ifdef MODULE
continue;
return put_user(blksize_size[MAJOR(dev)][MINOR(dev)&0x0F],
(int *)arg);
+ case BLKELVGET:
+ case BLKELVSET:
+ return blkelv_ioctl(inode->i_rdev, cmd, arg);
+
RO_IOCTLS(dev, arg);
default:
{"FUTURE DOMAIN CORP. (C) 1992 V8.00.004/02/92", 5, 44, FD},
{"IBM F1 BIOS V1.1004/30/92", 5, 25, FD},
{"FUTURE DOMAIN TMC-950", 5, 21, FD},
+ /* Added for 2.2.16 by Matthias_Heidbrink@b.maus.de */
+ {"IBM F1 V1.2009/22/93", 5, 25, FD},
};
#define NUM_SIGNATURES (sizeof(signatures) / sizeof(Signature))
int sr_dev_ioctl(struct cdrom_device_info *cdi,
unsigned int cmd, unsigned long arg)
{
- return scsi_ioctl(scsi_CDs[MINOR(cdi->dev)].device,cmd,(void *) arg);
+ switch (cmd) {
+ case BLKRAGET:
+ if (!arg)
+ return -EINVAL;
+ return put_user(read_ahead[MAJOR(cdi->dev)], (long *) arg);
+ case BLKRASET:
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+ if (!(cdi->dev))
+ return -EINVAL;
+ if (arg > 0xff)
+ return -EINVAL;
+ read_ahead[MAJOR(cdi->dev)] = arg;
+ return 0;
+ case BLKSSZGET:
+ return put_user(blksize_size[MAJOR(cdi->dev)][MINOR(cdi->dev)], (int *) arg);
+ case BLKFLSBUF:
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+ if (!(cdi->dev))
+ return -EINVAL;
+ fsync_dev(cdi->dev);
+ invalidate_buffers(cdi->dev);
+ return 0;
+ default:
+ return scsi_ioctl(scsi_CDs[MINOR(cdi->dev)].device,cmd,(void *) arg);
+ }
}
/*
struct sg_item sg[SG_LEN]; /* 32*8 */
u32 offset; /* 4 */
u32 port; /* 4 */
- u32 used;
- u32 num;
+ u32 used; /* 4 */
+ u32 num; /* 4 */
};
/*
* we have 3 seperate dma engines. pcm in, pcm out, and mic.
* each dma engine has controlling registers. These goofy
* names are from the datasheet, but make it easy to write
- * code while leafing through it.
+ * code while leafing through it. Right now we don't support
+ * the MIC input.
*/
#define ENUM_ENGINE(PRE,DIG) \
fragsize = bufsize / SG_LEN;
/*
- * Load up 32 sg entries and take an interrupt at half
- * way (we might want more interrupts later..)
+ * Load up 32 sg entries and take an interrupt at each
+ * step (we might want fewer interrupts later..)
*/
for(i=0;i<32;i++)
status = inl(card->iobase + GLOB_STA);
if(!(status & INT_MASK))
+ {
+ spin_unlock(&card->lock);
return; /* not for us */
+ }
// printk("Interrupt %X: ", status);
if(status & (INT_PO|INT_PI|INT_MC))
if(!(eid&0x0001))
printk(KERN_WARNING "i810_audio: only 48Khz playback available.\n");
+ else
+ /* Enable variable rate mode */
+ i810_ac97_set(codec, AC97_EXTENDED_STATUS,
+ i810_ac97_get(codec,AC97_EXTENDED_STATUS)|1);
if ((codec->dev_mixer = register_sound_mixer(&i810_mixer_fops, -1)) < 0) {
printk(KERN_ERR "i810_audio: couldn't register mixer!\n");
struct ess_state *s = (struct ess_state *)file->private_data;
unsigned long flags;
unsigned int mask = 0;
+ int ret;
VALIDATE_STATE(s);
+
+
+/* In 0.14 prog_dmabuf always returns success anyway ... */
+ if (file->f_mode & FMODE_WRITE) {
+ if (!s->dma_dac.ready && (ret = prog_dmabuf(s, 0)))
+ return POLLERR;
+ }
+ if (file->f_mode & FMODE_READ) {
+ if (!s->dma_adc.ready && (ret = prog_dmabuf(s, 1)))
+ return POLLERR;
+ }
+
if (file->f_mode & (FMODE_WRITE|FMODE_READ))
poll_wait(file, &s->poll_wait, wait);
+
spin_lock_irqsave(&s->lock, flags);
ess_update_ptr(s);
if (file->f_mode & FMODE_READ) {
newsize = 208;
if (newsize > 4096)
newsize = 4096;
- for (new2size = 128; new2size < newsize; new2size <<= 1)
- if (new2size - newsize > newsize - (new2size >> 1))
- new2size >>= 1;
+ for (new2size = 128; new2size < newsize; new2size <<= 1);
+ if (new2size - newsize > newsize - (new2size >> 1))
+ new2size >>= 1;
dma_bufsize = new2size;
}
return 250000 / vidc_audio_rate;
goto out;
}
+ if ((unsigned long)addr + text_data < text_data)
+ goto out;
+
do_mmap(NULL, 0, text_data,
PROT_READ|PROT_WRITE|PROT_EXEC, MAP_FIXED|MAP_PRIVATE, 0);
retval = read_exec(interpreter_dentry, offset, addr, text_data, 0);
bhnext = bh->b_next_free;
if (bh->b_dev != dev || bh->b_size == size)
continue;
- if (buffer_dirty(bh))
- printk(KERN_ERR "set_blocksize: dev %s buffer_dirty %lu size %lu\n", kdevname(dev), bh->b_blocknr, bh->b_size);
if (buffer_locked(bh))
{
slept = 1;
wait_on_buffer(bh);
}
+ if (buffer_dirty(bh))
+ printk(KERN_WARNING "set_blocksize: dev %s buffer_dirty %lu size %lu\n", kdevname(dev), bh->b_blocknr, bh->b_size);
if (!bh->b_count)
put_last_free(bh);
else
- printk(KERN_ERR
+ {
+ mark_buffer_clean(bh);
+ clear_bit(BH_Uptodate, &bh->b_state);
+ clear_bit(BH_Req, &bh->b_state);
+ printk(KERN_WARNING
"set_blocksize: "
- "b_count %d, dev %s, block %lu!\n",
+ "b_count %d, dev %s, block %lu, from %p\n",
bh->b_count, bdevname(bh->b_dev),
- bh->b_blocknr);
+ bh->b_blocknr, __builtin_return_address(0));
+ }
if (slept)
goto again;
}
if (!grow_buffers(size)) {
wakeup_bdflush(1);
current->policy |= SCHED_YIELD;
+ current->state = TASK_RUNNING;
schedule();
}
}
}
}
#endif
+ /*
+ * kernel module loader fixup
+ * We don't try to run modprobe in kernel space, but at the
+ * same time kernel/kmod.c calls us with fs set to KERNEL_DS. This
+ * would cause us to explode messily on a split address space machine
+ * and it's sort of lucky it ever worked before. Since the S/390 is
+ * such a split address space box we have to fix it..
+ */
+
+ set_fs(USER_DS);
+
for (try=0; try<2; try++) {
for (fmt = formats ; fmt ; fmt = fmt->next) {
int (*fn)(struct linux_binprm *, struct pt_regs *) = fmt->load_binary;
"disk_wblk %u %u %u %u\n"
"page %u %u\n"
#ifdef CONFIG_ARCH_S390
- "swap %u %u\n",
+ "swap %u %u\n"
+ "intr 1 0",
#else
"swap %u %u\n"
"intr %u",
"disk_wblk %u %u %u %u\n"
"page %u %u\n"
#ifdef CONFIG_ARCH_S390
- "swap %u %u\n",
+ "swap %u %u\n"
+ "intr 1 0",
#else
"swap %u %u\n"
"intr %u",
case PROC_PID_CPU:
return 0;
}
- if ((current->fsuid == euid && ok) || capable(CAP_DAC_OVERRIDE))
+ if(capable(CAP_DAC_OVERRIDE) || (current->fsuid == euid && ok))
return 0;
return 1;
}
struct task_struct *current;
__asm__("lhi %0,-8192\n\t"
"nr %0,15"
- : "=r" (current) );
+ : "=&r" (current) );
return current;
}
#include <types.h>
#endif
+extern __u8 _ascebc_500[]; /* ASCII -> EBCDIC 500 conversion table */
+extern __u8 _ebcasc_500[]; /* EBCDIC 500 -> ASCII conversion table */
extern __u8 _ascebc[]; /* ASCII -> EBCDIC conversion table */
extern __u8 _ebcasc[]; /* EBCDIC -> ASCII conversion table */
extern __u8 _ebc_tolower[]; /* EBCDIC -> lowercase */
#define ASCEBC(addr,nr) codepage_convert(_ascebc, addr, nr)
#define EBCASC(addr,nr) codepage_convert(_ebcasc, addr, nr)
+#define ASCEBC_500(addr,nr) codepage_convert(_ascebc_500, addr, nr)
+#define EBCASC_500(addr,nr) codepage_convert(_ebcasc_500, addr, nr)
#define EBC_TOLOWER(addr,nr) codepage_convert(_ebc_tolower, addr, nr)
#define EBC_TOUPPER(addr,nr) codepage_convert(_ebc_toupper, addr, nr)
/* ... per MSCH, however, if facility */
/* ... is not installed, this results */
/* ... in an operand exception. */
- } pmcw_t;
+ } __attribute__ ((packed)) pmcw_t;
/*
* subchannel status word
unsigned int dstat : 8; /* device status */
unsigned int cstat : 8; /* subchannel status */
unsigned int count : 16; /* residual count */
- } scsw_t;
+ } __attribute__ ((packed)) scsw_t;
#define SCSW_FCTL_CLEAR_FUNC 0x1
#define SCSW_FCTL_HALT_FUNC 0x2
pmcw_t pmcw; /* path management control word */
scsw_t scsw; /* subchannel status word */
char mda[12]; /* model dependent area */
- } schib_t;
+ } schib_t __attribute__ ((packed,aligned(4)));
typedef struct {
char cmd_code;/* command code */
#define CCW_FLAG_IDA 0x04
#define CCW_FLAG_SUSPEND 0x02
+#define CCW_CMD_READ_IPL 0x02
+#define CCW_CMD_NOOP 0x03
#define CCW_CMD_BASIC_SENSE 0x04
#define CCW_CMD_TIC 0x08
-#define CCW_CMD_SENSE_ID 0xE4
-#define CCW_CMD_NOOP 0x03
+#define CCW_CMD_SENSE_PGID 0x34
+#define CCW_CMD_SUSPEND_RECONN 0x5B
#define CCW_CMD_RDC 0x64
-#define CCW_CMD_READ_IPL 0x02
+#define CCW_CMD_SET_PGID 0xAF
+#define CCW_CMD_SENSE_ID 0xE4
#define SENSE_MAX_COUNT 0x20
unsigned int intparm; /* interruption parameter */
} tpi_info_t;
-
-/*
- * This is the "IRQ descriptor", which contains various information
- * about the irq, including what kind of hardware handling it has,
- * whether it is disabled etc etc.
- *
- * Pad this out to 32 bytes for cache and indexing reasons.
- */
-typedef struct {
- unsigned int status; /* IRQ status - IRQ_INPROGRESS, IRQ_DISABLED */
- struct hw_interrupt_type *handler; /* handle/enable/disable functions */
- struct irqaction *action; /* IRQ action list */
- unsigned int unused[3];
- spinlock_t irq_lock;
- } irq_desc_t;
-
//
// command information word (CIW) layout
//
unsigned char dev_model; /* device model */
unsigned char unused; /* padding byte */
/* extended part */
- ciw_t ciw[62]; /* variable # of CIWs */
+ ciw_t ciw[16]; /* variable # of CIWs */
} __attribute__ ((packed,aligned(4))) senseid_t;
/*
#define DEVSTAT_START_FUNCTION 0x00000004
#define DEVSTAT_HALT_FUNCTION 0x00000008
#define DEVSTAT_STATUS_PENDING 0x00000010
+#define DEVSTAT_REVALIDATE 0x00000020
+#define DEVSTAT_DEVICE_GONE 0x00000040
#define DEVSTAT_DEVICE_OWNED 0x00000080
+#define DEVSTAT_CLEAR_FUNCTION 0x00000100
#define DEVSTAT_FINAL_STATUS 0x80000000
+#define INTPARM_STATUS_PENDING 0xFFFFFFFF
+
+typedef void (* io_handler_func1_t) ( int irq,
+ devstat_t *devstat,
+ struct pt_regs *rgs);
+
+typedef void (* io_handler_func_t) ( int irq,
+ __u32 intparm );
+
+typedef void ( * not_oper_handler_func_t)( int irq,
+ int status );
+
+struct s390_irqaction {
+ io_handler_func_t handler;
+ unsigned long flags;
+ const char *name;
+ devstat_t *dev_id;
+};
+
+
+/*
+ * This is the "IRQ descriptor", which contains various information
+ * about the irq, including what kind of hardware handling it has,
+ * whether it is disabled etc etc.
+ *
+ * Pad this out to 32 bytes for cache and indexing reasons.
+ */
+typedef struct {
+ unsigned int status; /* IRQ status - IRQ_INPROGRESS, IRQ_DISABLED */
+ struct hw_interrupt_type *handler; /* handle/enable/disable functions */
+ struct s390_irqaction *action; /* IRQ action list */
+ } irq_desc_t;
+
+typedef struct {
+ __u8 state1 : 2; /* path state value 1 */
+ __u8 state2 : 2; /* path state value 2 */
+ __u8 state3 : 1; /* path state value 3 */
+ __u8 resvd : 3; /* reserved */
+ } __attribute__ ((packed)) path_state_t;
+
+typedef struct {
+ union {
+ __u8 fc; /* SPID function code */
+ path_state_t ps; /* SNID path state */
+ } inf;
+ __u32 cpu_addr : 16; /* CPU address */
+ __u32 cpu_id : 24; /* CPU identification */
+ __u32 cpu_model : 16; /* CPU model */
+ __u32 tod_high; /* high word TOD clock */
+ } __attribute__ ((packed)) pgid_t;
+
+#define SPID_FUNC_SINGLE_PATH 0x00
+#define SPID_FUNC_MULTI_PATH 0x80
+#define SPID_FUNC_ESTABLISH 0x00
+#define SPID_FUNC_RESIGN 0x40
+#define SPID_FUNC_DISBAND 0x20
+
+#define SNID_STATE1_RESET 0
+#define SNID_STATE1_UNGROUPED 2
+#define SNID_STATE1_GROUPED 3
+
+#define SNID_STATE2_NOT_RESVD 0
+#define SNID_STATE2_RESVD_ELSE 2
+#define SNID_STATE2_RESVD_SELF 3
+
+#define SNID_STATE3_MULTI_PATH 1
+#define SNID_STATE3_SINGLE_PATH 0
+
/*
* Flags used as input parameters for do_IO()
*/
-#define DOIO_EARLY_NOTIFICATION 0x01 /* allow for I/O completion ... */
+#define DOIO_EARLY_NOTIFICATION 0x0001 /* allow for I/O completion ... */
/* ... notification after ... */
/* ... primary interrupt status */
#define DOIO_RETURN_CHAN_END DOIO_EARLY_NOTIFICATION
-#define DOIO_VALID_LPM 0x02 /* LPM input parameter is valid */
-#define DOIO_WAIT_FOR_INTERRUPT 0x04 /* wait synchronously for interrupt */
-#define DOIO_REPORT_ALL 0x08 /* report all interrupt conditions */
-#define DOIO_ALLOW_SUSPEND 0x10 /* allow for channel prog. suspend */
-#define DOIO_DENY_PREFETCH 0x20 /* don't allow for CCW prefetch */
-#define DOIO_SUPPRESS_INTER 0x40 /* suppress intermediate inter. */
+#define DOIO_VALID_LPM 0x0002 /* LPM input parameter is valid */
+#define DOIO_WAIT_FOR_INTERRUPT 0x0004 /* wait synchronously for interrupt */
+#define DOIO_REPORT_ALL 0x0008 /* report all interrupt conditions */
+#define DOIO_ALLOW_SUSPEND 0x0010 /* allow for channel prog. suspend */
+#define DOIO_DENY_PREFETCH 0x0020 /* don't allow for CCW prefetch */
+#define DOIO_SUPPRESS_INTER 0x0040 /* suppress intermediate inter. */
/* ... for suspended CCWs */
+#define DOIO_TIMEOUT 0x0080 /* 3 secs. timeout for sync. I/O */
+#define DOIO_DONT_CALL_INTHDLR 0x0100 /* don't call interrupt handler */
/*
* do_IO()
unsigned char lpm, /* logical path mask */
unsigned long flag); /* flags : see above */
+void do_crw_pending( void ); /* CRW handler */
+
int resume_IO( int irq); /* IRQ aka. subchannel number */
int halt_IO( int irq, /* IRQ aka. subchannel number */
unsigned long intparm, /* dummy intparm */
- unsigned int flag); /* possible DOIO_WAIT_FOR_INTERRUPT */
+ unsigned long flag); /* possible DOIO_WAIT_FOR_INTERRUPT */
+int clear_IO( int irq, /* IRQ aka. subchannel number */
+ unsigned long intparm, /* dummy intparm */
+ unsigned long flag); /* possible DOIO_WAIT_FOR_INTERRUPT */
int process_IRQ( struct pt_regs regs,
unsigned int irq,
int get_irq_next ( int irq );
int read_dev_chars( int irq, void **buffer, int length );
-int read_conf_data( int irq, void **buffer, int *length );
+int read_conf_data( int irq, void **buffer, int *length, __u8 lpm );
+
+int s390_DevicePathVerification( int irq, __u8 domask );
+
+int s390_request_irq_special( int irq,
+ io_handler_func_t io_handler,
+ not_oper_handler_func_t not_oper_handler,
+ unsigned long irqflags,
+ const char *devname,
+ void *dev_id);
-extern int handle_IRQ_event(unsigned int, int cpu, struct pt_regs *);
+extern int handle_IRQ_event( unsigned int irq, int cpu, struct pt_regs *);
extern int set_cons_dev(int irq);
extern int reset_cons_dev(int irq);
#include <asm/s390io.h>
#define s390irq_spin_lock(irq) \
- spin_lock(&(ioinfo[irq]->irq_desc.irq_lock))
+ spin_lock(&(ioinfo[irq]->irq_lock))
#define s390irq_spin_unlock(irq) \
- spin_unlock(&(ioinfo[irq]->irq_desc.irq_lock))
+ spin_unlock(&(ioinfo[irq]->irq_lock))
#define s390irq_spin_lock_irqsave(irq,flags) \
- spin_lock_irqsave(&(ioinfo[irq]->irq_desc.irq_lock), flags)
+ spin_lock_irqsave(&(ioinfo[irq]->irq_lock), flags)
+
#define s390irq_spin_unlock_irqrestore(irq,flags) \
- spin_unlock_irqrestore(&(ioinfo[irq]->irq_desc.irq_lock), flags)
+ spin_unlock_irqrestore(&(ioinfo[irq]->irq_lock), flags)
+
#endif
#define __LC_SUBCHANNEL_NR 0x0BA
#define __LC_IO_INT_PARM 0x0BC
#define __LC_MCCK_CODE 0x0E8
-#define __LC_CREGS_SAVE_AREA 0x1C0
-#define __LC_AREGS_SAVE_AREA 0x120
+#define __LC_AREGS_SAVE_AREA 0x200
+#define __LC_CREGS_SAVE_AREA 0x240
+#define __LC_RETURN_PSW 0x280
#define __LC_SYNC_IO_WORD 0x400
#define _EXT_PSW_MASK 0x04080000
#define _PGM_PSW_MASK 0x04080000
#define _SVC_PSW_MASK 0x04080000
-#define _MCCK_PSW_MASK 0x040A0000
+#define _MCCK_PSW_MASK 0x04080000
#define _IO_PSW_MASK 0x04080000
-#define _USER_PSW_MASK 0x070DC000/* DAT, IO, EXT, Home-space */
-#define _PSW_IO_WAIT 0x020A0000/* IO, Wait */
+#define _USER_PSW_MASK 0x0709C000/* DAT, IO, EXT, Home-space */
#define _WAIT_PSW_MASK 0x070E0000/* DAT, IO, EXT, Wait, Home-space */
#define _DW_PSW_MASK 0x000A0000/* disabled wait PSW mask */
+
#define _PRIMARY_MASK 0x0000 /* MASK for SACF */
#define _SECONDARY_MASK 0x0100 /* MASK for SACF */
#define _ACCESS_MASK 0x0200 /* MASK for SACF */
#define _HOME_MASK 0x0300 /* MASK for SACF */
-#define _PSW_PRIM_SPACE_MODE 0x04000000
-#define _PSW_SEC_SPACE_MODE 0x04008000
-#define _PSW_ACC_REG_MODE 0x04004000
-#define _PSW_HOME_SPACE_MODE 0x0400C000
+#define _PSW_PRIM_SPACE_MODE 0x00000000
+#define _PSW_SEC_SPACE_MODE 0x00008000
+#define _PSW_ACC_REG_MODE 0x00004000
+#define _PSW_HOME_SPACE_MODE 0x0000C000
-#define _PSW_WAIT_MASK_BIT 0x00020000
-#define _PSW_IO_MASK_BIT 0x02000000
+#define _PSW_WAIT_MASK_BIT 0x00020000 /* Wait bit */
+#define _PSW_IO_MASK_BIT 0x02000000 /* IO bit */
+#define _PSW_IO_WAIT 0x02020000 /* IO & Wait bit */
/* we run in 31 Bit mode */
#define _ADDR_31 0x80000000
__u32 failing_storage_address; /* 0x0f8 */
__u8 pad5[0x100-0xfc]; /* 0x0fc */
__u32 st_status_fixed_logout[4];/* 0x100 */
- __u8 pad6[0x120-0x110]; /* 0x110 */
- __u32 access_regs_save_area[16];/* 0x120 */
+ __u8 pad6[0x160-0x110]; /* 0x110 */
__u32 floating_pt_save_area[8]; /* 0x160 */
__u32 gpregs_save_area[16]; /* 0x180 */
- __u32 cregs_save_area[16]; /* 0x1c0 */
+ __u8 pad7[0x200-0x1c0]; /* 0x1c0 */
- __u8 pad7[0x400-0x200]; /* 0x200 */
+ __u32 access_regs_save_area[16];/* 0x200 */
+ __u32 cregs_save_area[16]; /* 0x240 */
+ psw_t return_psw; /* 0x280 */
+ __u8 pad8[0x400-0x288]; /* 0x288 */
__u32 sync_io_word; /* 0x400 */
- __u8 pad8[0xc00-0x404]; /* 0x404 */
+ __u8 pad9[0xc00-0x404]; /* 0x404 */
/* System info area */
__u32 save_area[16]; /* 0xc00 */
#define __flush_tlb() \
do { __asm__ __volatile__("ptlb": : :"memory"); } while (0)
-
-static inline void __flush_global_tlb(void)
-{
- int cs1=0,dum=0;
- int *adr;
- long long dummy=0;
- adr = (int*) (((int)(((int*) &dummy)+1) & 0xfffffffc)|1);
- __asm__ __volatile__("lr 2,%0\n\t"
- "lr 3,%1\n\t"
- "lr 4,%2\n\t"
- ".long 0xb2500024" :
- : "d" (cs1), "d" (dum), "d" (adr)
- : "2", "3", "4");
-}
-
static inline void __flush_tlb_one(struct mm_struct *mm,
unsigned long addr);
__flush_tlb();
}
+#if 0 /* Arggh, ipte doesn't work correctly !! */
static inline void flush_tlb_page(struct vm_area_struct *vma,
- unsigned long addr)
+ unsigned long va)
{
- __flush_tlb_one(vma->vm_mm,addr);
+ __flush_tlb_one(vma->vm_mm,va);
}
+#else
+#define flush_tlb_page(vma, va) flush_tlb_all()
+#endif
static inline void flush_tlb_range(struct mm_struct *mm,
unsigned long start, unsigned long end)
#include <asm/smp.h>
+static inline void __flush_global_tlb_csp(void)
+{
+ int cs1=0,dum=0;
+ int *adr;
+ long long dummy=0;
+ adr = (int*) (((int)(((int*) &dummy)+1) & 0xfffffffc)|1);
+ __asm__ __volatile__("lr 2,%0\n\t"
+ "lr 3,%1\n\t"
+ "lr 4,%2\n\t"
+ "csp 2,4" :
+ : "d" (cs1), "d" (dum), "d" (adr)
+ : "2", "3", "4");
+}
+
+static inline void __flush_global_tlb(void)
+{
+ if (MACHINE_HAS_CSP)
+ __flush_global_tlb_csp();
+ else
+ smp_ext_call_sync_others(ec_ptlb, NULL);
+}
+
#define local_flush_tlb() \
__flush_tlb()
static inline void flush_tlb_current_task(void)
{
- if ((atomic_read(¤t->mm->count) != 1) ||
- (current->mm->cpu_vm_mask != (1UL << smp_processor_id()))) {
+ if ((smp_num_cpus > 1) &&
+ ((atomic_read(¤t->mm->count) != 1) ||
+ (current->mm->cpu_vm_mask != (1UL << smp_processor_id())))) {
current->mm->cpu_vm_mask = (1UL << smp_processor_id());
__flush_global_tlb();
} else {
static inline void flush_tlb_mm(struct mm_struct * mm)
{
- if ((atomic_read(&mm->count) != 1) ||
- (mm->cpu_vm_mask != (1UL << smp_processor_id()))) {
+ if ((smp_num_cpus > 1) &&
+ ((atomic_read(&mm->count) != 1) ||
+ (mm->cpu_vm_mask != (1UL << smp_processor_id())))) {
mm->cpu_vm_mask = (1UL << smp_processor_id());
__flush_global_tlb();
} else {
}
}
+#if 0 /* Arggh, ipte doesn't work correctly !! */
static inline void flush_tlb_page(struct vm_area_struct * vma,
unsigned long va)
{
__flush_tlb_one(vma->vm_mm,va);
}
+#else
+#define flush_tlb_page(vma, va) flush_tlb_all()
+#endif
static inline void flush_tlb_range(struct mm_struct * mm,
unsigned long start, unsigned long end)
{
- if ((atomic_read(&mm->count) != 1) ||
- (mm->cpu_vm_mask != (1UL << smp_processor_id()))) {
+ if ((smp_num_cpus > 1) &&
+ ((atomic_read(&mm->count) != 1) ||
+ (mm->cpu_vm_mask != (1UL << smp_processor_id())))) {
mm->cpu_vm_mask = (1UL << smp_processor_id());
__flush_global_tlb();
} else {
/*
* No mapping available
*/
-#define PAGE_NONE __pgprot(_PAGE_INVALID )
-
+#define PAGE_NONE __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_INVALID)
#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED)
#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_RO)
#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_RO)
} while (0)
-extern inline int pte_none(pte_t pte) { return ((pte_val(pte) & (_PAGE_INVALID | _PAGE_RO)) ==
+extern inline int pte_none(pte_t pte) { return ((pte_val(pte) &
+ (_PAGE_INVALID | _PAGE_RO | _PAGE_PRESENT)) ==
_PAGE_INVALID); }
extern inline int pte_present(pte_t pte) { return pte_val(pte) & _PAGE_PRESENT; }
extern inline void pte_clear(pte_t *ptep) { pte_val(*ptep) = _PAGE_INVALID; }
/* perform syscall argument validation (get/set_fs) */
mm_segment_t fs;
per_struct per_info;/* Must be aligned on an 4 byte boundary*/
+ addr_t ieee_instruction_pointer;
+ /* Used to give failing instruction back to user for ieee exceptions */
};
typedef struct thread_struct thread_struct;
#define PSW_PER_MASK 0x40000000UL
#define USER_STD_MASK 0x00000080UL
#define PSW_PROBLEM_STATE 0x00010000UL
+#define PSW_ENABLED_STATE 0x03000000UL
/*
* Function to drop a processor into disabled wait state
struct pt_regs
{
S390_REGS
- long trap;
+ __u32 trap;
};
#if CONFIG_REMOTE_DEBUG
typedef struct
{
S390_REGS
- long trap;
+ __u32 trap;
__u32 crs[16];
s390_fp_regs fp_regs;
} gdb_pt_regs;
* include/asm-s390/queue.h
*
* S390 version
- * Copyright (C) 1999 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com)
*
* A little set of queue utilies.
*/
#include <linux/stddef.h>
+#include <asm/types.h>
typedef struct queue
{
struct queue *next;
} queue;
+typedef queue list;
+
typedef struct
{
queue *head;
return(head);
}
+static __inline__ void init_list(list **lhead)
+{
+ *lhead=NULL;
+}
+
+static __inline__ void add_to_list(list **lhead,list *member)
+{
+ member->next=*lhead;
+ *lhead=member;
+}
+
+static __inline__ void add_to_list_tail(list **lhead,list *member)
+{
+ list *curr,*prev;
+ if(*lhead==NULL)
+ *lhead=member;
+ else
+ {
+ prev=*lhead;
+ for(curr=(*lhead)->next;curr!=NULL;curr=curr->next)
+ prev=curr;
+ prev->next=member;
+ }
+}
+static __inline__ void add_to_list_tail_null(list **lhead,list *member)
+{
+ member->next=NULL;
+ add_to_list_tail(lhead,member);
+}
+
+
+static __inline__ int is_in_list(list *lhead,list *member)
+{
+ list *curr;
+
+ for(curr=lhead;curr!=NULL;curr=curr->next)
+ if(curr==member)
+ return(TRUE);
+ return(FALSE);
+}
+
+static __inline__ int get_prev(list *lhead,list *member,list **prev)
+{
+ list *curr;
+
+ *prev=NULL;
+ for(curr=lhead;curr!=NULL;curr=curr->next)
+ {
+ if(curr==member)
+ return(TRUE);
+ *prev=curr;
+ }
+ *prev=NULL;
+ return(FALSE);
+}
+
+
+
+static __inline__ int remove_from_list(list **lhead,list *member)
+{
+ list *prev;
+
+ if(get_prev(*lhead,member,&prev))
+ {
+
+ if(prev)
+ prev->next=member->next;
+ else
+ *lhead=member->next;
+ return(TRUE);
+ }
+ return(FALSE);
+}
+static __inline__ int remove_from_queue(qheader *qhead,queue *member)
+{
+ queue *prev;
+ if(get_prev(qhead->head,(list *)member,(list **)&prev))
+ {
+ if(prev)
+ {
+ prev->next=member->next;
+ if(prev->next==NULL)
+ qhead->tail=prev;
+ }
+ else
+ {
+ if(qhead->head==qhead->tail)
+ qhead->tail=NULL;
+ qhead->head=member->next;
+ }
+ return(TRUE);
+ }
+ return(FALSE);
+}
--- /dev/null
+#ifndef _S390_EXTINT_H
+#define _S390_EXTINT_H
+
+/*
+ * include/asm-s390/s390_ext.h
+ *
+ * S390 version
+ * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Holger Smolinski (Holger.Smolinski@de.ibm.com),
+ * Martin Schwidefsky (schwidefsky@de.ibm.com)
+ */
+
+typedef void (*ext_int_handler_t)(struct pt_regs *regs, __u16 code);
+
+/*
+ * Warning: if you change ext_int_info_t you have to change the
+ * external interrupt handler in entry.S too.
+ */
+typedef struct ext_int_info_t {
+ struct ext_int_info_t *next;
+ ext_int_handler_t handler;
+ __u16 code;
+} __attribute__ ((packed)) ext_int_info_t;
+
+extern ext_int_info_t *ext_int_hash[];
+
+int register_external_interrupt(__u16 code, ext_int_handler_t handler);
+int unregister_external_interrupt(__u16 code, ext_int_handler_t handler);
+
+#endif
--- /dev/null
+/*
+ * arch/s390/kernel/s390dyn.h
+ * S/390 data definitions for dynamic device attachment
+ *
+ * S390 version
+ * Copyright (C) 1999 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Ingo Adlung (adlung@de.ibm.com)
+ */
+
+#ifndef __s390dyn_h
+#define __s390dyn_h
+
+struct _devreg;
+
+typedef int (* oper_handler_func_t)( int irq,
+ struct _devreg *dreg);
+
+typedef struct _devreg {
+ union {
+ int devno;
+
+ struct _hc {
+ __u16 ctype;
+ __u8 cmode;
+ __u16 dtype;
+ __u8 dmode;
+ } hc; /* has controller info */
+
+ struct _hnc {
+ __u16 dtype;
+ __u8 dmode;
+ __u16 res1;
+ __u8 res2;
+ } hnc; /* has no controller info */
+ } ci;
+
+ int flag;
+ oper_handler_func_t oper_func;
+ struct _devreg *prev;
+ struct _devreg *next;
+} devreg_t;
+
+#define DEVREG_EXACT_MATCH 0x00000001
+#define DEVREG_MATCH_DEV_TYPE 0x00000002
+#define DEVREG_MATCH_CU_TYPE 0x00000004
+#define DEVREG_NO_CU_INFO 0x00000008
+
+#define DEVREG_TYPE_DEVNO 0x80000000
+#define DEVREG_TYPE_DEVCHARS 0x40000000
+
+int s390_device_register ( devreg_t *drinfo );
+int s390_device_unregister( devreg_t *dreg );
+devreg_t * s390_search_devreg ( ioinfo_t *ioinfo );
+
+#endif /* __s390dyn */
typedef struct _ioinfo {
unsigned int irq; /* aka. subchannel number */
spinlock_t irq_lock; /* irq lock */
+
+ struct _ioinfo *prev;
+ struct _ioinfo *next;
+
union {
unsigned int info;
struct {
unsigned int consns : 1; /* concurrent sense is available */
unsigned int delsense : 1; /* delayed SENSE required */
unsigned int s_pend : 1; /* status pending condition */
- unsigned int unused : 16; /* unused */
+ unsigned int pgid : 1; /* "path group ID" is valid */
+ unsigned int pgid_supp : 1; /* "path group ID" command is supported */
+ unsigned int esid : 1; /* Ext. SenseID supported by HW */
+ unsigned int rcd : 1; /* RCD supported by HW */
+ unsigned int repnone : 1; /* don't call IRQ handler on interrupt */
+ unsigned int newreq : 1; /* new register interface */
+ unsigned int dval : 1; /* device number valid */
+ unsigned int unused : (sizeof(unsigned int)*8 - 23); /* unused */
} __attribute__ ((packed)) flags;
} ui;
+
unsigned long u_intparm; /* user interruption parameter */
senseid_t senseid; /* SenseID info */
irq_desc_t irq_desc; /* irq descriptor */
- unsigned int lpm; /* logical path mask to be used ... */
- /* ... from do_IO() parms. Only ... */
- /* ... valid if vlpm is set too. */
+ not_oper_handler_func_t nopfunc; /* not oper handler */
+ __u8 ulpm; /* logical path mask used for I/O */
+ __u8 opm; /* path mask of operational paths */
+ __u16 devno; /* device number */
+ pgid_t pgid; /* path group ID */
schib_t schib; /* subchannel information block */
orb_t orb; /* operation request block */
devstat_t devstat; /* device status */
ccw1_t *qcpa; /* queued channel program */
ccw1_t senseccw; /* ccw for sense command */
unsigned int stctl; /* accumulated status control from irb */
- unsigned int qintparm; /* queued interruption parameter */
+ unsigned long qintparm; /* queued interruption parameter */
unsigned long qflag; /* queued flags */
- unsigned char qlpm; /* queued logical path mask */
-
- struct _ioinfo *prev;
- struct _ioinfo *next;
+ __u8 qlpm; /* queued logical path mask */
} __attribute__ ((aligned(8))) ioinfo_t;
--- /dev/null
+/*
+ * arch/s390/kernel/s390mach.h
+ * S/390 data definitions for machine check processing
+ *
+ * S390 version
+ * Copyright (C) 1999, 2000 IBM Deutschland Entwicklung GmbH,
+ * IBM Corporation
+ * Author(s): Ingo Adlung (adlung@de.ibm.com)
+ */
+
+#ifndef __s390mach_h
+#define __s390mach_h
+
+#include <asm/types.h>
+
+typedef struct _mci {
+ __u32 to_be_defined_1 : 9;
+ __u32 cp : 1; /* channel-report pending */
+ __u32 to_be_defined_2 : 22;
+ __u32 to_be_defined_3;
+ } mci_t;
+
+//
+// machine-check-interruption code
+//
+typedef struct _mcic {
+ union _mcc {
+ __u64 mcl; /* machine check int. code - long info */
+ mci_t mcd; /* machine check int. code - details */
+ } mcc;
+} __attribute__ ((packed)) mcic_t;
+
+//
+// Channel Report Word
+//
+typedef struct _crw {
+ __u32 res1 : 1; /* reserved zero */
+ __u32 slct : 1; /* solicited */
+ __u32 oflw : 1; /* overflow */
+ __u32 chn : 1; /* chained */
+ __u32 rsc : 4; /* reporting source code */
+ __u32 anc : 1; /* ancillary report */
+ __u32 res2 : 1; /* reserved zero */
+ __u32 erc : 6; /* error-recovery code */
+ __u32 rsid : 16; /* reporting-source ID */
+} __attribute__ ((packed)) crw_t;
+
+#define CRW_RSC_MONITOR 0x2 /* monitoring facility */
+#define CRW_RSC_SCH 0x3 /* subchannel */
+#define CRW_RSC_CPATH 0x4 /* channel path */
+#define CRW_RSC_CONFIG 0x9 /* configuration-alert facility */
+#define CRW_RSC_CSS 0xB /* channel subsystem */
+
+#define CRW_ERC_EVENT 0x00 /* event information pending */
+#define CRW_ERC_AVAIL 0x01 /* available */
+#define CRW_ERC_INIT 0x02 /* initialized */
+#define CRW_ERC_TERROR 0x03 /* temporary error */
+#define CRW_ERC_IPARM 0x04 /* installed parm initialized */
+#define CRW_ERC_TERM 0x05 /* terminal */
+#define CRW_ERC_PERRN 0x06 /* perm. error, fac. not init */
+#define CRW_ERC_PERRI 0x07 /* perm. error, facility init */
+#define CRW_ERC_PMOD 0x08 /* installed parameters modified */
+
+#define MAX_CRW_PENDING 100
+#define MAX_MACH_PENDING 100
+
+//
+// CRW Entry
+//
+typedef struct _crwe {
+ crw_t crw;
+ struct _crwe *crwe_next;
+} __attribute__ ((packed)) crwe_t;
+
+typedef struct _mache {
+ spinlock_t lock;
+ unsigned int status;
+ mcic_t mcic;
+ union _mc {
+ crwe_t *crwe; /* CRW if applicable */
+ } mc;
+ struct _mache *next;
+ struct _mache *prev;
+} mache_t;
+
+#define MCHCHK_STATUS_TO_PROCESS 0x00000001
+#define MCHCHK_STATUS_IN_PROGRESS 0x00000002
+#define MCHCHK_STATUS_WAITING 0x00000004
+
+void s390_init_machine_check( void );
+void __init s390_do_machine_check ( void );
+void s390_do_crw_pending ( crwe_t *pcrwe );
+
+extern __inline__ int stcrw( __u32 *pcrw )
+{
+ int ccode;
+
+ __asm__ __volatile__(
+ "STCRW 0(%1)\n\t"
+ "IPM %0\n\t"
+ "SRL %0,28\n\t"
+ : "=d" (ccode) : "a" (pcrw)
+ : "cc", "1" );
+ return ccode;
+}
+
+#endif /* __s390mach */
#define MACHINE_IS_VM (MACHINE_FLAGS & 1)
#define MACHINE_HAS_IEEE (MACHINE_FLAGS & 2)
#define MACHINE_IS_P390 (MACHINE_FLAGS & 4)
+#define MACHINE_HAS_CSP (MACHINE_FLAGS & 8)
#define RAMDISK_ORIGIN 0x800000
#define RAMDISK_BLKSIZE 0x1000
ec_set_ctl,
ec_get_ctl,
ec_set_ctl_masked,
+ ec_ptlb,
ec_cmd_last
} ec_cmd_sig;
extern inline void spin_lock(spinlock_t *lp)
{
- __asm__ __volatile(" lhi 1,-1\n"
+ __asm__ __volatile(" basr 1,0\n"
"0: slr 0,0\n"
" cs 0,1,%1\n"
" jl 0b"
{
unsigned long result;
__asm__ __volatile(" slr %1,%1\n"
- " lhi 0,-1\n"
- "0: cs %1,0,%0"
+ " basr 1,0\n"
+ "0: cs %1,1,%0"
: "=m" (lp->lock), "=&d" (result)
- : "0" (lp->lock) : "0");
+ : "0" (lp->lock) : "1");
return !result;
}
#ifndef _S390_STAT_H
#define _S390_STAT_H
+#ifndef _LINUX_TYPES_H
+#include <linux/types.h>
+#endif
+
struct __old_kernel_stat {
unsigned short st_dev;
unsigned short st_ino;
};
struct stat {
- unsigned short st_dev;
- unsigned short __pad1;
- unsigned long st_ino;
- unsigned short st_mode;
- unsigned short st_nlink;
- unsigned short st_uid;
- unsigned short st_gid;
- unsigned short st_rdev;
- unsigned short __pad2;
- unsigned long st_size;
- unsigned long st_blksize;
- unsigned long st_blocks;
- unsigned long st_atime;
+ dev_t st_dev;
+ unsigned short int __pad1;
+ ino_t st_ino;
+ mode_t st_mode;
+ short st_nlink;
+ uid_t st_uid;
+ gid_t st_gid;
+ dev_t st_rdev;
+ unsigned short int __pad2;
+ off_t st_size;
+ off_t st_blksize;
+ off_t st_blocks;
+ time_t st_atime;
unsigned long __unused1;
- unsigned long st_mtime;
+ time_t st_mtime;
unsigned long __unused2;
- unsigned long st_ctime;
+ time_t st_ctime;
unsigned long __unused3;
unsigned long __unused4;
unsigned long __unused5;
/*
* Translate a "termio" structure into a "termios". Ugh.
*/
-#define SET_LOW_TERMIOS_BITS(termios, termio, x) { \
- unsigned short __tmp; \
- get_user(__tmp,&(termio)->x); \
- *(unsigned short *) &(termios)->x = __tmp; \
-}
#define user_termio_to_kernel_termios(termios, termio) \
({ \
- SET_LOW_TERMIOS_BITS(termios, termio, c_iflag); \
- SET_LOW_TERMIOS_BITS(termios, termio, c_oflag); \
- SET_LOW_TERMIOS_BITS(termios, termio, c_cflag); \
- SET_LOW_TERMIOS_BITS(termios, termio, c_lflag); \
+ unsigned short tmp; \
+ get_user(tmp, &(termio)->c_iflag); \
+ (termios)->c_iflag = (0xffff0000 & ((termios)->c_iflag)) | tmp; \
+ get_user(tmp, &(termio)->c_oflag); \
+ (termios)->c_oflag = (0xffff0000 & ((termios)->c_oflag)) | tmp; \
+ get_user(tmp, &(termio)->c_cflag); \
+ (termios)->c_cflag = (0xffff0000 & ((termios)->c_cflag)) | tmp; \
+ get_user(tmp, &(termio)->c_lflag); \
+ (termios)->c_lflag = (0xffff0000 & ((termios)->c_lflag)) | tmp; \
+ get_user((termios)->c_line, &(termio)->c_line); \
copy_from_user((termios)->c_cc, (termio)->c_cc, NCC); \
})
#define BITS_PER_LONG 32
-#endif /* __KERNEL__ */
#ifndef TRUE
#define TRUE 1
#endif
#ifndef FALSE
#define FALSE 0
#endif
+
+#endif /* __KERNEL__ */
#endif
static inline long
strncpy_from_user(char *dst, const char *src, long count)
{
- int len;
+ long len;
__asm__ __volatile__ ( " iac 1\n"
" slr %0,%0\n"
" lr 2,%1\n"
*
* Return 0 for error
*/
-static inline long strnlen_user(const char * src, long n)
+static inline unsigned long
+strnlen_user(const char * src, unsigned long n)
{
__asm__ __volatile__ (" iac 1\n"
" alr %0,%1\n"
: "cc", "0", "1", "4" );
return n;
}
-#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
+#define strlen_user(str) strnlen_user(str, ~0UL)
/*
* Zero Userspace
"0: mvcle 4,2,0\n"
" jo 0b\n"
"1: sacf 0(1)\n"
- " lr %0,3\n"
+ " lr %0,5\n"
".section __ex_table,\"a\"\n"
" .align 4\n"
" .long 0b,1b\n"
#define __NR_write 4
#define __NR_open 5
#define __NR_close 6
-#define __NR_waitpid 7
#define __NR_creat 8
#define __NR_link 9
#define __NR_unlink 10
#define __NR_mknod 14
#define __NR_chmod 15
#define __NR_lchown 16
-#define __NR_break 17
-#define __NR_oldstat 18
#define __NR_lseek 19
#define __NR_getpid 20
#define __NR_mount 21
#define __NR_stime 25
#define __NR_ptrace 26
#define __NR_alarm 27
-#define __NR_oldfstat 28
#define __NR_pause 29
#define __NR_utime 30
-#define __NR_stty 31
-#define __NR_gtty 32
#define __NR_access 33
#define __NR_nice 34
-#define __NR_ftime 35
#define __NR_sync 36
#define __NR_kill 37
#define __NR_rename 38
#define __NR_dup 41
#define __NR_pipe 42
#define __NR_times 43
-#define __NR_prof 44
#define __NR_brk 45
#define __NR_setgid 46
#define __NR_getgid 47
#define __NR_getegid 50
#define __NR_acct 51
#define __NR_umount2 52
-#define __NR_lock 53
#define __NR_ioctl 54
#define __NR_fcntl 55
-#define __NR_mpx 56
#define __NR_setpgid 57
-#define __NR_ulimit 58
-#define __NR_oldolduname 59
#define __NR_umask 60
#define __NR_chroot 61
#define __NR_ustat 62
#define __NR_getpgrp 65
#define __NR_setsid 66
#define __NR_sigaction 67
-#define __NR_sgetmask 68
-#define __NR_ssetmask 69
#define __NR_setreuid 70
#define __NR_setregid 71
#define __NR_sigsuspend 72
#define __NR_settimeofday 79
#define __NR_getgroups 80
#define __NR_setgroups 81
-#define __NR_select 82
#define __NR_symlink 83
-#define __NR_oldlstat 84
#define __NR_readlink 85
#define __NR_uselib 86
#define __NR_swapon 87
#define __NR_fchown 95
#define __NR_getpriority 96
#define __NR_setpriority 97
-#define __NR_profil 98
#define __NR_statfs 99
#define __NR_fstatfs 100
#define __NR_ioperm 101
#define __NR_stat 106
#define __NR_lstat 107
#define __NR_fstat 108
-#define __NR_olduname 109
-#define __NR_iopl 110
#define __NR_vhangup 111
#define __NR_idle 112
-#define __NR_vm86old 113
#define __NR_wait4 114
#define __NR_swapoff 115
#define __NR_sysinfo 116
#define __NR_clone 120
#define __NR_setdomainname 121
#define __NR_uname 122
-#define __NR_modify_ldt 123
#define __NR_adjtimex 124
#define __NR_mprotect 125
#define __NR_sigprocmask 126
#define __NR_mremap 163
#define __NR_setresuid 164
#define __NR_getresuid 165
-#define __NR_vm86 166
#define __NR_query_module 167
#define __NR_poll 168
#define __NR_nfsservctl 169
#define __NR_capset 185
#define __NR_sigaltstack 186
#define __NR_sendfile 187
-#define __NR_getpmsg 188 /* some people actually want streams */
-#define __NR_putpmsg 189 /* some people actually want streams */
#define __NR_vfork 190
/* user-visible error numbers are in the range -1 - -122: see <asm-s390/errno.h> */
static inline _syscall3(int,open,const char *,file,int,flag,int,mode)
static inline _syscall1(int,close,int,fd)
static inline _syscall1(int,_exit,int,exitcode)
-static inline _syscall3(pid_t,waitpid,pid_t,pid,int *,wait_stat,int,options)
static inline _syscall1(int,delete_module,const char *,name)
+static inline _syscall2(long,stat,char *,filename,struct stat *,statbuf)
+
+extern int sys_wait4(int, int *, int, struct rusage *);
+static inline pid_t waitpid(int pid, int * wait_stat, int flags)
+{
+ return sys_wait4(pid, wait_stat, flags, NULL);
+}
static inline pid_t wait(int * wait_stat)
{
#ifdef CONFIG_ARCH_S390
extern int mdisk_init(void);
extern int dasd_init(void);
+extern int xpram_init(void);
#endif /* CONFIG_ARCH_S390 */
extern void set_device_ro(kdev_t dev,int flag);
#elif (MAJOR_NR == DASD_MAJOR)
+#define LOCAL_END_REQUEST
#define DEVICE_NAME "dasd"
#define DEVICE_REQUEST do_dasd_request
#define DEVICE_NR(device) (MINOR(device) >> PARTN_BITS)
struct buffer_head * bh;
struct buffer_head * bhtail;
struct request * next;
+ int elevator_latency;
};
typedef void (request_fn_proc) (void);
typedef struct request ** (queue_proc) (kdev_t dev);
+typedef struct elevator_s
+{
+ int read_latency;
+ int write_latency;
+ int max_bomb_segments;
+ unsigned int queue_ID;
+} elevator_t;
+
+#define ELEVATOR_DEFAULTS \
+((elevator_t) { \
+ 128, /* read_latency */ \
+ 8192, /* write_latency */ \
+ 4, /* max_bomb_segments */ \
+ })
+
+extern int blkelv_ioctl(kdev_t, unsigned long, unsigned long);
+
+typedef struct blkelv_ioctl_arg_s {
+ int queue_ID;
+ int read_latency;
+ int write_latency;
+ int max_bomb_segments;
+} blkelv_ioctl_arg_t;
+
+#define BLKELVGET _IOR(0x12,106,sizeof(blkelv_ioctl_arg_t))
+#define BLKELVSET _IOW(0x12,107,sizeof(blkelv_ioctl_arg_t))
+
struct blk_dev_struct {
request_fn_proc *request_fn;
/*
struct request *current_request;
struct request plug;
struct tq_struct plug_tq;
+
+ elevator_t elevator;
};
struct sec_size {
*
* This file contains the general definitions for the cyclades.c driver
*$Log: cyclades.h,v $
+ *Revision 3.1 2000/04/19 18:52:52 ivan
+ *converted address fields to unsigned long and added fields for physical
+ *addresses on cyclades_card structure;
+ *
*Revision 3.0 1998/11/02 14:20:59 ivan
*added nports field on cyclades_card structure;
*
/* Per card data structure */
struct cyclades_card {
- long base_addr;
- long ctl_addr;
+ unsigned long base_phys;
+ unsigned long ctl_phys;
+ unsigned long base_addr;
+ unsigned long ctl_addr;
int irq;
int num_chips; /* 0 if card absent, -1 if Z/PCI, else Y */
int first_line; /* minor number of first channel on card */
--- /dev/null
+
+#ifndef DASD_H
+#define DASD_H
+
+/* First of all the external stuff */
+#include <linux/ioctl.h>
+#include <linux/major.h>
+#include <linux/wait.h>
+
+#define IOCTL_LETTER 'D'
+/* Format the volume or an extent */
+#define BIODASDFORMAT _IOW(IOCTL_LETTER,0,format_data_t)
+/* Disable the volume (for Linux) */
+#define BIODASDDISABLE _IO(IOCTL_LETTER,1)
+/* Enable the volume (for Linux) */
+#define BIODASDENABLE _IO(IOCTL_LETTER,2)
+/* Stuff for reading and writing the Label-Area to/from user space */
+#define BIODASDGTVLBL _IOR(IOCTL_LETTER,3,dasd_volume_label_t)
+#define BIODASDSTVLBL _IOW(IOCTL_LETTER,4,dasd_volume_label_t)
+#define BIODASDRWTB _IOWR(IOCTL_LETTER,5,int)
+#define BIODASDRSID _IOR(IOCTL_LETTER,6,senseid_t)
+#define BIODASDRLB _IOR(IOCTL_LETTER,7,int)
+#define BLKGETBSZ _IOR(IOCTL_LETTER,8,int)
+
+typedef struct {
+ int start_unit;
+ int stop_unit;
+ int blksize;
+} format_data_t;
+
+typedef
+union {
+ char bytes[512];
+ struct {
+ /* 80 Bytes of Label data */
+ char identifier[4]; /* e.g. "LNX1", "VOL1" or "CMS1" */
+ char label[6]; /* Given by user */
+ char security;
+ char vtoc[5]; /* Null in "LNX1"-labelled partitions */
+ char reserved0[5];
+ long ci_size;
+ long blk_per_ci;
+ long lab_per_ci;
+ char reserved1[4];
+ char owner[0xe];
+ char no_part;
+ char reserved2[0x1c];
+ /* 16 Byte of some information on the dasd */
+ short blocksize;
+ char nopart;
+ char unused;
+ long unused2[3];
+ /* 7*10 = 70 Bytes of partition data */
+ struct {
+ char type;
+ long start;
+ long size;
+ char unused;
+ } part[7];
+ } __attribute__ ((packed)) label;
+} dasd_volume_label_t;
+
+typedef union {
+ struct {
+ unsigned long no;
+ unsigned int ct;
+ } __attribute__ ((packed)) input;
+ struct {
+ unsigned long noct;
+ } __attribute__ ((packed)) output;
+} __attribute__ ((packed)) dasd_xlate_t;
+
+int dasd_init (void);
+#ifdef MODULE
+int init_module (void);
+void cleanup_module (void);
+#endif /* MODULE */
+
+/* Definitions for blk.h */
+/* #define DASD_MAGIC 0x44415344 is ascii-"DASD" */
+/* #define dasd_MAGIC 0x64617364; is ascii-"dasd" */
+#define DASD_MAGIC 0xC4C1E2C4 /* is ebcdic-"DASD" */
+#define dasd_MAGIC 0x8481A284 /* is ebcdic-"dasd" */
+#define MDSK_MAGIC 0xD4C4E2D2 /* is ebcdic-"MDSK" */
+#define mdsk_MAGIC 0x9484A292 /* is ebcdic-"mdsk" */
+#define ERP_MAGIC 0xC5D9D740 /* is ebcdic-"ERP" */
+#define erp_MAGIC 0x45999740 /* is ebcdic-"erp" */
+
+#define DASD_NAME "dasd"
+#define DASD_PARTN_BITS 2
+#define DASD_MAX_DEVICES (256>>DASD_PARTN_BITS)
+
+#define MAJOR_NR DASD_MAJOR
+#define PARTN_BITS DASD_PARTN_BITS
+
+#ifdef __KERNEL__
+/* Now let's turn to the internal stuff */
+/*
+ define the debug levels:
+ - 0 No debugging output to console or syslog
+ - 1 Log internal errors to syslog, ignore check conditions
+ - 2 Log internal errors and check conditions to syslog
+ - 3 Log internal errors to console, log check conditions to syslog
+ - 4 Log internal errors and check conditions to console
+ - 5 panic on internal errors, log check conditions to console
+ - 6 panic on both, internal errors and check conditions
+ */
+#define DASD_DEBUG 4
+
+#define DASD_PROFILE
+/*
+ define the level of paranoia
+ - 0 quite sure, that things are going right
+ - 1 sanity checking, only to avoid panics
+ - 2 normal sanity checking
+ - 3 extensive sanity checks
+ - 4 exhaustive debug messages
+ */
+#define DASD_PARANOIA 2
+
+/*
+ define the depth of flow control, which is logged as a check condition
+ - 0 No flow control messages
+ - 1 Entry of functions logged like check condition
+ - 2 Entry and exit of functions logged like check conditions
+ - 3 Internal structure broken down
+ - 4 unrolling of loops,...
+ */
+#define DASD_FLOW_CONTROL 0
+
+#if DASD_DEBUG > 0
+#define PRINT_DEBUG(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
+#define PRINT_INFO(x...) printk ( KERN_INFO PRINTK_HEADER x )
+#define PRINT_WARN(x...) printk ( KERN_WARNING PRINTK_HEADER x )
+#define PRINT_ERR(x...) printk ( KERN_ERR PRINTK_HEADER x )
+#define PRINT_FATAL(x...) panic ( PRINTK_HEADER x )
+#else
+#define PRINT_DEBUG(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
+#define PRINT_INFO(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
+#define PRINT_WARN(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
+#define PRINT_ERR(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
+#define PRINT_FATAL(x...) printk ( KERN_DEBUG PRINTK_HEADER x )
+#endif /* DASD_DEBUG */
+
+#define INTERNAL_ERRMSG(x,y...) \
+"Internal error: in file " __FILE__ " line: %d: " x, __LINE__, y
+#define INTERNAL_CHKMSG(x,y...) \
+"Inconsistency: in file " __FILE__ " line: %d: " x, __LINE__, y
+#define INTERNAL_FLWMSG(x,y...) \
+"Flow control: file " __FILE__ " line: %d: " x, __LINE__, y
+
+#if DASD_DEBUG > 4
+#define INTERNAL_ERROR(x...) PRINT_FATAL ( INTERNAL_ERRMSG ( x ) )
+#elif DASD_DEBUG > 2
+#define INTERNAL_ERROR(x...) PRINT_ERR ( INTERNAL_ERRMSG ( x ) )
+#elif DASD_DEBUG > 0
+#define INTERNAL_ERROR(x...) PRINT_WARN ( INTERNAL_ERRMSG ( x ) )
+#else
+#define INTERNAL_ERROR(x...)
+#endif /* DASD_DEBUG */
+
+#if DASD_DEBUG > 5
+#define INTERNAL_CHECK(x...) PRINT_FATAL ( INTERNAL_CHKMSG ( x ) )
+#elif DASD_DEBUG > 3
+#define INTERNAL_CHECK(x...) PRINT_ERR ( INTERNAL_CHKMSG ( x ) )
+#elif DASD_DEBUG > 1
+#define INTERNAL_CHECK(x...) PRINT_WARN ( INTERNAL_CHKMSG ( x ) )
+#else
+#define INTERNAL_CHECK(x...)
+#endif /* DASD_DEBUG */
+
+#if DASD_DEBUG > 3
+#define INTERNAL_FLOW(x...) PRINT_ERR ( INTERNAL_FLWMSG ( x ) )
+#elif DASD_DEBUG > 2
+#define INTERNAL_FLOW(x...) PRINT_WARN ( INTERNAL_FLWMSG ( x ) )
+#else
+#define INTERNAL_FLOW(x...)
+#endif /* DASD_DEBUG */
+
+#if DASD_FLOW_CONTROL > 0
+#define FUNCTION_ENTRY(x) INTERNAL_FLOW( x "entered %s\n","" );
+#else
+#define FUNCTION_ENTRY(x)
+#endif /* DASD_FLOW_CONTROL */
+
+#if DASD_FLOW_CONTROL > 1
+#define FUNCTION_EXIT(x) INTERNAL_FLOW( x "exited %s\n","" );
+#else
+#define FUNCTION_EXIT(x)
+#endif /* DASD_FLOW_CONTROL */
+
+#if DASD_FLOW_CONTROL > 2
+#define FUNCTION_CONTROL(x...) INTERNAL_FLOW( x );
+#else
+#define FUNCTION_CONTROL(x...)
+#endif /* DASD_FLOW_CONTROL */
+
+#if DASD_FLOW_CONTROL > 3
+#define LOOP_CONTROL(x...) INTERNAL_FLOW( x );
+#else
+#define LOOP_CONTROL(x...)
+#endif /* DASD_FLOW_CONTROL */
+
+#define DASD_DO_IO_SLEEP 0x01
+#define DASD_DO_IO_NOLOCK 0x02
+#define DASD_DO_IO_NODEC 0x04
+
+#define DASD_NOT_FORMATTED 0x01
+
+extern struct wait_queue *dasd_waitq;
+
+#undef DEBUG_DASD_MALLOC
+#ifdef DEBUG_DASD_MALLOC
+void *b;
+#define kmalloc(x...) (PRINT_INFO(" kmalloc %p\n",b=kmalloc(x)),b)
+#define kfree(x) PRINT_INFO(" kfree %p\n",x);kfree(x)
+#define get_free_page(x...) (PRINT_INFO(" gfp %p\n",b=get_free_page(x)),b)
+#define __get_free_pages(x...) (PRINT_INFO(" gfps %p\n",b=__get_free_pages(x)),b)
+#endif /* DEBUG_DASD_MALLOC */
+
+#endif /* __KERNEL__ */
+#endif /* DASD_H */
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-indent-level: 4
+ * c-brace-imaginary-offset: 0
+ * c-brace-offset: -4
+ * c-argdecl-indent: 4
+ * c-label-offset: -4
+ * c-continued-statement-offset: 4
+ * c-continued-brace-offset: 0
+ * indent-tabs-mode: nil
+ * tab-width: 8
+ * End:
+ */
#define BLKSECTSET _IO(0x12,102)/* set max sectors per request (ll_rw_blk.c) */
#define BLKSECTGET _IO(0x12,103)/* get max sectors per request (ll_rw_blk.c) */
#define BLKSSZGET _IO(0x12,104) /* get block device sector size */
+#if 0
+#define BLKELVGET _IOR(0x12,106,sizeof(blkelv_ioctl_arg_t))/* elevator get */
+#define BLKELVSET _IOW(0x12,107,sizeof(blkelv_ioctl_arg_t))/* elevator set */
+#endif
#define BMAP_IOCTL 1 /* obsolete - kept for compatibility */
#define FIBMAP _IO(0x00,1) /* bmap access */
#define CM206_CDROM_MAJOR 32
#define IDE2_MAJOR 33
#define IDE3_MAJOR 34
+#define XPRAM_MAJOR 35 /* expanded storage on S/390 = "slow ram" */
+ /* proposed by Peter */
#define NETLINK_MAJOR 36
#define PS2ESDI_MAJOR 36
#define IDETAPE_MAJOR 37
#define PCI_DEVICE_ID_LAVA_DUAL_PAR_A 0x8002 /* The Lava Dual Parallel is */
#define PCI_DEVICE_ID_LAVA_DUAL_PAR_B 0x8003 /* two PCI devices on a card */
+#define PCI_VENDOR_ID_TIMEDIA 0x1409
+#define PCI_DEVICE_ID_TIMEDIA_1889 0x7168
+
#define PCI_VENDOR_ID_AFAVLAB 0x14db
#define PCI_DEVICE_ID_AFAVLAB_TK9902 0x2120
#endif
#ifdef CONFIG_DASD
-#include "../drivers/s390/block/dasd.h"
+#include <linux/dasd.h>
+#endif
+
+#ifdef CONFIG_BLK_DEV_XPRAM
+#include "../drivers/s390/block/xpram.h"
#endif
#ifdef CONFIG_MAC
#ifdef CONFIG_3215
extern void con3215_setup(char *str, int *ints);
#endif
-#ifdef CONFIG_3215
-extern void con3215_setup(char *str, int *ints);
-#endif
#ifdef CONFIG_MDISK
extern void mdisk_setup(char *str, int *ints);
#endif
#ifdef CONFIG_DASD
extern void dasd_setup(char *str, int *ints);
+#ifdef CONFIG_DASD_MDSK
+extern void dasd_mdsk_setup(char *str, int *ints);
+#endif
+#endif
+#ifdef CONFIG_BLK_DEV_XPRAM
+extern void xpram_setup(char *str, int *ints);
+#endif
+#ifdef CONFIG_ARCH_S390
+extern void vmhalt_setup(char *str, int *ints);
+extern void vmpoff_setup(char *str, int *ints);
#endif
extern void floppy_setup(char *str, int *ints);
extern void st_setup(char *str, int *ints);
{ "dasdf", (DASD_MAJOR << MINORBITS) + (5 << 2) },
{ "dasdg", (DASD_MAJOR << MINORBITS) + (6 << 2) },
{ "dasdh", (DASD_MAJOR << MINORBITS) + (7 << 2) },
+ { "dasdi", (DASD_MAJOR << MINORBITS) + (8 << 2) },
+ { "dasdj", (DASD_MAJOR << MINORBITS) + (9 << 2) },
+ { "dasdk", (DASD_MAJOR << MINORBITS) + (11 << 2) },
+ { "dasdl", (DASD_MAJOR << MINORBITS) + (12 << 2) },
+ { "dasdm", (DASD_MAJOR << MINORBITS) + (13 << 2) },
+ { "dasdn", (DASD_MAJOR << MINORBITS) + (14 << 2) },
+ { "dasdo", (DASD_MAJOR << MINORBITS) + (15 << 2) },
+ { "dasdp", (DASD_MAJOR << MINORBITS) + (16 << 2) },
+ { "dasdq", (DASD_MAJOR << MINORBITS) + (17 << 2) },
+ { "dasdr", (DASD_MAJOR << MINORBITS) + (18 << 2) },
+ { "dasds", (DASD_MAJOR << MINORBITS) + (19 << 2) },
+ { "dasdt", (DASD_MAJOR << MINORBITS) + (20 << 2) },
+ { "dasdu", (DASD_MAJOR << MINORBITS) + (21 << 2) },
+ { "dasdv", (DASD_MAJOR << MINORBITS) + (22 << 2) },
+ { "dasdw", (DASD_MAJOR << MINORBITS) + (23 << 2) },
+ { "dasdx", (DASD_MAJOR << MINORBITS) + (24 << 2) },
+ { "dasdy", (DASD_MAJOR << MINORBITS) + (25 << 2) },
+ { "dasdz", (DASD_MAJOR << MINORBITS) + (26 << 2) },
+#endif
+#ifdef CONFIG_BLK_DEV_XPRAM
+ { "xpram0", (XPRAM_MAJOR << MINORBITS) },
+ { "xpram1", (XPRAM_MAJOR << MINORBITS) + 1 },
+ { "xpram2", (XPRAM_MAJOR << MINORBITS) + 2 },
+ { "xpram3", (XPRAM_MAJOR << MINORBITS) + 3 },
+ { "xpram4", (XPRAM_MAJOR << MINORBITS) + 4 },
+ { "xpram5", (XPRAM_MAJOR << MINORBITS) + 5 },
+ { "xpram6", (XPRAM_MAJOR << MINORBITS) + 6 },
+ { "xpram7", (XPRAM_MAJOR << MINORBITS) + 7 },
+ { "xpram8", (XPRAM_MAJOR << MINORBITS) + 8 },
+ { "xpram9", (XPRAM_MAJOR << MINORBITS) + 9 },
+ { "xpram10", (XPRAM_MAJOR << MINORBITS) + 10 },
+ { "xpram11", (XPRAM_MAJOR << MINORBITS) + 11 },
+ { "xpram12", (XPRAM_MAJOR << MINORBITS) + 12 },
+ { "xpram13", (XPRAM_MAJOR << MINORBITS) + 13 },
+ { "xpram14", (XPRAM_MAJOR << MINORBITS) + 14 },
+ { "xpram15", (XPRAM_MAJOR << MINORBITS) + 15 },
+ { "xpram16", (XPRAM_MAJOR << MINORBITS) + 16 },
+ { "xpram17", (XPRAM_MAJOR << MINORBITS) + 17 },
+ { "xpram18", (XPRAM_MAJOR << MINORBITS) + 18 },
+ { "xpram19", (XPRAM_MAJOR << MINORBITS) + 19 },
+ { "xpram20", (XPRAM_MAJOR << MINORBITS) + 20 },
+ { "xpram21", (XPRAM_MAJOR << MINORBITS) + 21 },
+ { "xpram22", (XPRAM_MAJOR << MINORBITS) + 22 },
+ { "xpram23", (XPRAM_MAJOR << MINORBITS) + 23 },
+ { "xpram24", (XPRAM_MAJOR << MINORBITS) + 24 },
+ { "xpram25", (XPRAM_MAJOR << MINORBITS) + 25 },
+ { "xpram26", (XPRAM_MAJOR << MINORBITS) + 26 },
+ { "xpram27", (XPRAM_MAJOR << MINORBITS) + 27 },
+ { "xpram28", (XPRAM_MAJOR << MINORBITS) + 28 },
+ { "xpram29", (XPRAM_MAJOR << MINORBITS) + 29 },
+ { "xpram30", (XPRAM_MAJOR << MINORBITS) + 30 },
+ { "xpram31", (XPRAM_MAJOR << MINORBITS) + 31 },
#endif
{ NULL, 0 }
};
{ "noapic", ioapic_setup },
{ "pirq=", ioapic_pirq_setup },
#endif
+
#endif
#ifdef CONFIG_BLK_DEV_RAM
{ "ramdisk_start=", ramdisk_start_setup },
#ifdef CONFIG_BLK_DEV_INITRD
{ "noinitrd", no_initrd },
#endif
+#endif
#ifdef CONFIG_CTC
{ "ctc=", ctc_setup } ,
{ "iucv=", iucv_setup } ,
#endif
-#endif
-
#ifdef CONFIG_FB
{ "video=", video_setup },
#endif
#ifdef CONFIG_3215
{ "condev=", con3215_setup },
#endif
-#ifdef CONFIG_3215
- { "condev=", con3215_setup },
-#endif
#ifdef CONFIG_MDISK
{ "mdisk=", mdisk_setup },
#endif
#ifdef CONFIG_DASD
{ "dasd=", dasd_setup },
+#ifdef CONFIG_DASD_MDSK
+ { "dasd_force_diag=", dasd_mdsk_setup },
+#endif
+#endif
+#ifdef CONFIG_BLK_DEV_XPRAM
+ { "xpram_parts=", xpram_setup },
+#endif
+#ifdef CONFIG_ARCH_S390
+ { "vmhalt=", vmhalt_setup },
+ { "vmpoff=", vmpoff_setup },
#endif
{ 0, 0 }
};
static void __init parse_options(char *line)
{
char *next;
+ char *quote;
int args, envs;
if (!*line)
envs = 1; /* TERM is set to 'linux' by default */
next = line;
while ((line = next) != NULL) {
- if ((next = strchr(line,' ')) != NULL)
- *next++ = 0;
+ /* On S/390 we want to be able to pass options that
+ * contain blanks, for example vmhalt="IPL CMS".
+ * To allow that, this code prevents blanks inside
+ * quotes from being recognized as delimiters. -- Martin
+ */
+ quote = strchr(line,'"');
+ next = strchr(line, ' ');
+ while (next != NULL && quote != NULL && quote < next) {
+ /* we found a left quote before the next blank
+ * now we have to find the matching right quote
+ */
+ next = strchr(quote+1, '"');
+ if (next != NULL) {
+ quote = strchr(next+1, '"');
+ next = strchr(next+1, ' ');
+ }
+ }
+ if (next != NULL)
+ *next++ = 0;
/*
* check for kernel options first..
*/
/* Set up devices .. */
device_setup();
-
+#if CONFIG_CHANDEV
+ chandev_init();
+#endif
/* .. executable formats .. */
binfmt_setup();
#include <linux/smp_lock.h>
#include <linux/init.h>
#include <linux/vmalloc.h>
+#include <linux/tasks.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
shmd->vm_ops = &shm_vm_ops;
shp->u.shm_nattch++; /* prevent destruction */
- if ((err = shm_map (shmd))) {
+ if (shp->u.shm_nattch > 0xffff - NR_TASKS || (err = shm_map (shmd))) {
if (--shp->u.shm_nattch <= 0 && shp->u.shm_perm.mode & SHM_DEST)
killseg(id);
kmem_cache_free(vm_area_cachep, shmd);
printk("shm_open: unused id=%d PANIC\n", id);
return;
}
+ if (!++shp->u.shm_nattch) {
+ shp->u.shm_nattch--;
+ return; /* XXX: should be able to report failure */
+ }
insert_attach(shp,shmd); /* insert shmd into shp->attaches */
- shp->u.shm_nattch++;
shp->u.shm_atime = CURRENT_TIME;
shp->u.shm_lpid = current->pid;
}
(struct timer_vec *)&tv1, &tv2, &tv3, &tv4, &tv5
};
+static struct timer_list ** run_timer_list_running;
+
#define NOOF_TVECS (sizeof(tvecs) / sizeof(tvecs[0]))
static unsigned long timer_jiffies = 0;
static inline void insert_timer(struct timer_list *timer,
- struct timer_list **vec, int idx)
+ struct timer_list **vec)
{
- if ((timer->next = vec[idx]))
- vec[idx]->prev = timer;
- vec[idx] = timer;
- timer->prev = (struct timer_list *)&vec[idx];
+ if ((timer->next = *vec))
+ (*vec)->prev = timer;
+ *vec = timer;
+ timer->prev = (struct timer_list *)vec;
}
static inline void internal_add_timer(struct timer_list *timer)
*/
unsigned long expires = timer->expires;
unsigned long idx = expires - timer_jiffies;
+ struct timer_list ** vec;
- if (idx < TVR_SIZE) {
+ if (run_timer_list_running)
+ vec = run_timer_list_running;
+ else if (idx < TVR_SIZE) {
int i = expires & TVR_MASK;
- insert_timer(timer, tv1.vec, i);
+ vec = tv1.vec + i;
} else if (idx < 1 << (TVR_BITS + TVN_BITS)) {
int i = (expires >> TVR_BITS) & TVN_MASK;
- insert_timer(timer, tv2.vec, i);
+ vec = tv2.vec + i;
} else if (idx < 1 << (TVR_BITS + 2 * TVN_BITS)) {
int i = (expires >> (TVR_BITS + TVN_BITS)) & TVN_MASK;
- insert_timer(timer, tv3.vec, i);
+ vec = tv3.vec + i;
} else if (idx < 1 << (TVR_BITS + 3 * TVN_BITS)) {
int i = (expires >> (TVR_BITS + 2 * TVN_BITS)) & TVN_MASK;
- insert_timer(timer, tv4.vec, i);
+ vec = tv4.vec + i;
} else if ((signed long) idx < 0) {
/* can happen if you add a timer with expires == jiffies,
* or you set a timer to go off in the past
*/
- insert_timer(timer, tv1.vec, tv1.index);
+ vec = tv1.vec + tv1.index;
} else if (idx <= 0xffffffffUL) {
int i = (expires >> (TVR_BITS + 3 * TVN_BITS)) & TVN_MASK;
- insert_timer(timer, tv5.vec, i);
+ vec = tv5.vec + i;
} else {
/* Can only get here on architectures with 64-bit jiffies */
timer->next = timer->prev = timer;
+ return;
}
+ insert_timer(timer, vec);
}
spinlock_t timerlist_lock = SPIN_LOCK_UNLOCKED;
{
spin_lock_irq(&timerlist_lock);
while ((long)(jiffies - timer_jiffies) >= 0) {
- struct timer_list *timer;
+ struct timer_list *timer, * queued = NULL;
if (!tv1.index) {
int n = 1;
do {
cascade_timers(tvecs[n]);
} while (tvecs[n]->index == 1 && ++n < NOOF_TVECS);
}
+ run_timer_list_running = &queued;
while ((timer = tv1.vec[tv1.index])) {
void (*fn)(unsigned long) = timer->function;
unsigned long data = timer->data;
fn(data);
spin_lock_irq(&timerlist_lock);
}
+ run_timer_list_running = NULL;
++timer_jiffies;
tv1.index = (tv1.index + 1) & TVR_MASK;
+ while (queued)
+ {
+ timer = queued;
+ queued = queued->next;
+ internal_add_timer(timer);
+ }
}
spin_unlock_irq(&timerlist_lock);
}
if (mm->def_flags & VM_LOCKED) {
unsigned long locked = mm->locked_vm << PAGE_SHIFT;
locked += len;
+ if (locked < len)
+ return -EAGAIN;
if ((current->rlim[RLIMIT_MEMLOCK].rlim_cur < RLIM_INFINITY) &&
(locked > current->rlim[RLIMIT_MEMLOCK].rlim_cur))
return -EAGAIN;
goto free_vma;
/* Check against address space limit. */
+ if ((mm->total_vm << PAGE_SHIFT) + len < len)
+ goto free_vma;
if ((current->rlim[RLIMIT_AS].rlim_cur < RLIM_INFINITY) &&
((mm->total_vm << PAGE_SHIFT) + len
> current->rlim[RLIMIT_AS].rlim_cur))
if (PageSwapCache(page)) {
/* Make sure we are the only process doing I/O with this swap page. */
- while (test_and_set_bit(offset,p->swap_lockmap)) {
- run_task_queue(&tq_disk);
- sleep_on(&lock_queue);
+ if (test_and_set_bit(offset, p->swap_lockmap))
+ {
+ struct wait_queue __wait;
+
+ __wait.task = current;
+ add_wait_queue(&lock_queue, &__wait);
+ for (;;) {
+ current->state = TASK_UNINTERRUPTIBLE;
+ mb();
+ if (!test_and_set_bit(offset, p->swap_lockmap))
+ break;
+ run_task_queue(&tq_disk);
+ schedule();
+ }
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&lock_queue, &__wait);
}
/*
/* Device callback registration */
EXPORT_SYMBOL(register_netdevice_notifier);
EXPORT_SYMBOL(unregister_netdevice_notifier);
+EXPORT_SYMBOL(register_inetaddr_notifier);
+EXPORT_SYMBOL(unregister_inetaddr_notifier);
/* support for loadable net drivers */
#ifdef CONFIG_NET
* number of socks to 2*max_files and
* the number of skb queueable in the
* dgram receiver.
+ * Malcolm Beattie : Set peercred for socketpair
*
* Known differences from reference BSD that was tested:
*
unix_lock(skb);
unix_peer(ska)=skb;
unix_peer(skb)=ska;
+ ska->peercred.pid = skb->peercred.pid = current->pid;
+ ska->peercred.uid = skb->peercred.uid = current->euid;
+ ska->peercred.gid = skb->peercred.gid = current->egid;
if (ska->type != SOCK_DGRAM)
{