D: Original author of the Linux networking code
N: Anton Blanchard
-E: anton@progsoc.uts.edu.au
-W: http://www.progsoc.uts.edu.au/~anton/
+E: anton@linuxcare.com
+W: http://linuxcare.com.au/anton/
P: 1024/8462A731 4C 55 86 34 44 59 A7 99 2B 97 88 4A 88 9A 0D 97
-D: sun4 port
+D: sun4 port, Sparc hacker
S: 47 Robert Street
S: Marrickville NSW 2204
S: Australia
fuser, which comes with psmisc, reads /proc/*/fd/* to do its job.
Upgrade psmisc if 2.2 changes to /proc broke the version you're using.
-Tunelp
-======
-
- A new version of tunelp is available which will allow you to enable
-"trustirq" mode, improving printing while using IRQ-driven lp ports.
-
PCI utils
=========
you need to install the LVM tools. More information can be found at the home page
of the LVM project at http://linux.msede.com/lvm/.
+Inline Documentation
+====================
+Many of the functions available for modules to use are now documented
+with specially-formatted comments near their definitions. These
+comments can be combined with the SGML templates in the
+Documentation/DocBook directory to make DocBook files, which can then
+be combined with DocBook stylesheets to make PostScript documents,
+HTML pages, PDF files, and so on. In order to convert from DocBook
+format to a format of your choice, you'll need to install jade, as
+well as some stylesheets.
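For the kernel's own templates, the Documentation/DocBook Makefile wraps the jade invocations; the target names below are assumed from that Makefile and may differ in your tree:

```
make sgmldocs    # merge the inline comments into the SGML templates
make psdocs      # DocBook -> PostScript (needs jade and jadetex)
make pdfdocs     # DocBook -> PDF
make htmldocs    # DocBook -> HTML
```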
+
Where to get the files
**********************
http://linux.powertweak.com/files/powertweak-0.1.2.tgz
ftp://atrey.karlin.mff.cuni.cz/pub/linux/pci/powertweak/powertweak-0.1.2.tgz
-Tunelp
-======
-
-The 0-2.1.131 release:
-ftp://e-mind.com/pub/linux/tunelp/tunelp-0-2.1.131.tar.gz
-
Xosview
=======
The 0.7 release:
ftp://linux.msede.com/lvm/v0.7/lvm_0.7.tar.gz
+Jade
+====
+
+The 1.2.1 release:
+ftp://ftp.jclark.com/pub/jade/jade-1.2.1.tar.gz
+
+DSSSL Stylesheets for the DocBook DTD
+=====================================
+
+http://nwalsh.com/docbook/dsssl/
+
Other Info
==========
Differentiated Services (diffserv) and Resource Reservation Protocol
(RSVP) on your Linux router if you also say Y to "QoS support",
"Packet classifier API" and to some classifiers below. Documentation
- and software is at http://icawwww1.ipfl.ch/linux/diffserv/ .
+ and software is at http://icawww1.epfl.ch/linux-diffserv/ .
If you say Y here and to "/proc file system" below, you will be able
to read status information about packet schedulers from the file
Differentiated Services (diffserv) and Resource Reservation Protocol
(RSVP) on your Linux router if you also say Y to "Packet classifier
API" and to some classifiers below. Documentation and software is at
- http://icawwww1.ipfl.ch/linux/diffserv/ .
+ http://icawww1.epfl.ch/linux-diffserv/ .
Note that the answer to this question won't directly affect the
kernel: saying N will just cause this configure script to skip all
This will enable you to use Differentiated Services (diffserv) and
Resource Reservation Protocol (RSVP) on your Linux router.
Documentation and software is at
- http://icawwww1.ipfl.ch/linux/diffserv/ .
+ http://icawww1.epfl.ch/linux-diffserv/ .
### Add
#tristate ' TC index classifier' CONFIG_NET_CLS_TCINDEX
Please read the file Documentation/sound/Soundblaster.
You should also say Y here for cards based on the Avance Logic
- ALS-007 chip (read Documentation/sound/ALS) and for cards based
- on ESS chips (read Documentation/sound/ESS1868 and
+ ALS-007 and ALS-1X0 chips (read Documentation/sound/ALS) and for cards
+ based on ESS chips (read Documentation/sound/ESS1868 and
Documentation/sound/ESS). If you have an SB AWE 32 or SB AWE 64, say
- Y here and also to "Additional lowlevel drivers" and to "SB32/AWE
- support" below and read Documentation/sound/INSTALL.awe. If you have
- an IBM Mwave card, say Y here and read Documentation/sound/mwave.
+ Y here and also to "AWE32 synth" below and read
+ Documentation/sound/INSTALL.awe. If you have an IBM Mwave card, say
+ Y here and read Documentation/sound/mwave.
If you compile the driver into the kernel and don't want to use
isapnp, you have to add "sb=<io>,<irq>,<dma>,<dma2>" to the kernel
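For example, a card at the common default resources could be declared as follows (illustrative values; use the settings your card was actually configured with):

```
sb=0x220,5,1,5
```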
-[Also cloned from vesafb.txt, thanks to Gerd]
+$Id: tgafb.txt,v 1.1.2.2 2000/04/04 06:50:18 mato Exp $
What is tgafb?
===============
This is a driver for DECChip 21030 based graphics framebuffers, a.k.a. TGA
-cards, specifically the following models
+cards, which are usually found in older Digital Alpha systems. The
+following models are supported:
-ZLxP-E1 (8bpp, 4 MB VRAM)
+ZLxP-E1 (8bpp, 2 MB VRAM)
ZLxP-E2 (32bpp, 8 MB VRAM)
ZLxP-E3 (32bpp, 16 MB VRAM, Zbuffer)
-This version, tgafb-1.12, is almost a complete rewrite of the code written
-by Geert Uytterhoeven, which was based on the original TGA console code
-written by Jay Estabrook.
+This version is an almost complete rewrite of the code written by Geert
+Uytterhoeven, which was based on the original TGA console code written by
+Jay Estabrook.
-Major new features:
+Major new features since Linux 2.0.x:
- * Support for multiple resolutions, including setting the resolution at
- boot time, allowing the use of a fixed-frequency monitor.
- * Complete code rewrite to follow Geert's skeletonfb spec which will allow
- future implementation of hardware acceleration and other features.
+ * Support for multiple resolutions
+ * Support for fixed-frequency and other oddball monitors
+ (by allowing the video mode to be set at boot time)
+
+User-visible changes since Linux 2.2.x:
+
+ * Sync-on-green is now handled properly
+ * More useful information is printed on bootup
+ (this helps if people run into problems)
+
+This driver does not (yet) support the TGA2 family of framebuffers, so the
+PowerStorm 3D30/4D20 (also known as PBXGB) cards are not supported. These
+can, however, be used with the standard VGA Text Console driver.
Configuration
font:X - default font to use. All fonts are supported, including the
SUN12x22 font which is very nice at high resolutions.
-mode:X - default video mode. See drivers/video/tgafb.c for a list.
-
-X11
-===
-
-XF68_FBDev should work just fine, but I haven't tested it. Running
-the XF86_TGA server (reasonably recent versions of which support all TGA
-cards) works fine for me.
-
-One minor problem with XF86_TGA is when running tgafb in resolutions higher
-than 640x480, on switching VCs from tgafb to X, the entire screen is not
-re-drawn and must be manually refreshed. This is an X server problem, not a
-tgafb problem.
+
+mode:X - default video mode. The following video modes are supported:
+ 640x480-60, 800x600-56, 640x480-72, 800x600-60, 800x600-72,
+ 1024x768-60, 1152x864-60, 1024x768-70, 1024x768-76,
+ 1152x864-70, 1280x1024-61, 1024x768-85, 1280x1024-70,
+ 1152x864-84, 1280x1024-76, 1280x1024-85
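The mode and font can be combined on the kernel command line. The video= syntax below follows the usual fbdev convention and is an assumption; check drivers/video/tgafb.c if the option is rejected:

```
video=tga:mode:1024x768-76,font:SUN12x22
```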
+
+
+Known Issues
+============
+
+The XFree86 FBDev server has been reported not to work, since tgafb doesn't do
+mmap(). Running the standard XF86_TGA server from XFree86 3.3.x works fine for
+me; however, this server does not do acceleration, which makes certain
+operations quite slow. Support for acceleration is being progressively
+integrated into XFree86 4.x.
+
+When running tgafb at resolutions higher than 640x480, switching VCs from
+tgafb to XF86_TGA 3.3.x leaves the screen only partially redrawn, and it
+must be refreshed manually. This is an X server problem, not a tgafb
+problem, and is fixed in XFree86 4.0.
Enjoy!
sound configuration section of the kernel config:
- 100% Sound Blaster compatibles (SB16/32/64, ESS, Jazz16) support
- FM synthesizer (YM3812/OPL-3) support
-Since the ALS-007/100/200 is a PnP card, the sound driver probably should be
-compiled as a module, with the isapnptools used to wake up the sound card.
-Set the "I/O base for SB", "Sound Blaster IRQ" and "Sound Blaster DMA" (8 bit -
+Since the ALS-007/100/200 are PnP cards, ISAPnP support should probably be
+compiled in.
+
+Alternatively, if you decide not to use kernel level ISAPnP, you can use the
+user mode isapnptools to wake up the sound card, as in 2.2.X. Set the "I/O
+base for SB", "Sound Blaster IRQ" and "Sound Blaster DMA" (8 bit -
either 0, 1 or 3) to the values used in your particular installation (they
should match the values used to configure the card using isapnp). The
ALS-007 does NOT implement 16 bit DMA, so the "Sound Blaster 16 bit DMA"
30 March 1998
Modified 2000-02-26 by Dave Forrest, drf5n@virginia.edu to add ALS100/ALS200
+Modified 2000-04-10 by Paul Laufer, pelaufer@csupomona.edu to add ISAPnP info.
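Assuming the card was woken up at the common defaults (io 0x220, irq 5, dma 1; these values are illustrative, and dma16 is omitted because the ALS-007 has no 16 bit DMA), the matching /etc/conf.modules entries could look like:

```
alias char-major-14 sb
options sb io=0x220 irq=5 dma=1 mpu_io=0x330
```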
2) If your card is NOT "Plug-n-Play", go to step 5 now. Otherwise,
proceed to step 3.
-3) You should obtain isapnptools. I looked through other PnP packages
-for Linux, but all they are either in deep unstable beta/alpha releases or
-they are much worse than isapnptools. In my case isapnptools were included in
+3) You should either compile in kernel ISAPnP support or obtain isapnptools.
+If you choose kernel-level ISAPnP, skip to step 5. I looked through other PnP
+packages for Linux, but they are all either in deeply unstable beta/alpha
+releases or much worse than isapnptools. In my case isapnptools were included in
a Linux distribution (Red Hat 5.x). If you also already have them then go to
step 4.
In "make (x,menu)config" select in "Sound":
select "OSS sound modules" as <M> (module)
-
-In "Additional low level sound drivers":
-"Additional low level sound drivers", "AWE32 synth" as <M> (module).
-Select "Additional low level sound drivers" as [y] (or [*] (yes)) (If it is not
-available as [y], select it as <M> (module))
+select "AWE32 Synth" as <M> (module)
Now recompile the kernel (make dep; make (b)zImage, b(z)lilo, etc...;
make modules; make modules_install), update your boot loader (if required) and
alias midi awe_wave
post-install awe_wave /usr/bin/sfxload /usr/synthfm.sbk
-options sb io=0x220 irq=5 dma=1 dma16=5 mpu_io=0x330
-(on io=0xaaa irq=b.... you should use your own settings)
That will enable the Sound Blaster and AWE wave synthesis.
To play midi files you should get one of these programs:
Yaroslav Rosomakho (alons55@dialup.ptt.ru)
http://www.yar.opennet.ru
-Last Updated: 3Jan99
+Last Updated: 10Apr2000
* NOTE TO LINUX USERS
-To enable this driver on linux-2.[01].x kernels, you need turn on both
-"lowlevel drivers support" and "AWE32 synth support" options in sound
-menu when configure your linux kernel and modules. The precise
-installation procedure is described in the AWE64-Mini-HOWTO and
-linux-kernel/Documetation/sound/AWE32.
+To enable this driver on linux-2.[01].x kernels, you need to turn on
+the "AWE32 synth" option in the sound menu when configuring your Linux
+kernel and modules. The precise installation procedure is described in
+the AWE64-Mini-HOWTO and linux-kernel/Documentation/sound/AWE32.
If you're using PnP cards, the card must be initialized before loading
the sound driver. There are several options to do this:
options sb io=0x220 irq=7 dma=1 dma16=5 mpu_io=0x330
options adlib_card io=0x388 # FM synthesizer
+ Alternatively, if you have compiled in kernel level ISAPnP support:
+
+alias char-major-14 sb
+post-install sb /sbin/modprobe "-k" "adlib_card"
+options adlib_card io=0x388
+
The effect of this is that the sound driver and all necessary bits and
pieces autoload on demand, assuming you use kerneld (a sound choice) and
autoclean when not in use. Also, options for the device drivers are
dma16 16-bit DMA channel for SB16 and equivalent cards (5,6,7)
mpu_io I/O for MPU chip if present (0x300,0x330)
-mad16=1 Set when loading this as part of the MAD16 setup only
-trix=1 Set when loading this as part of the Audiotrix setup only
-pas2=1 Set when loading this as part of the Pas2 setup only
sm_games=1 Set if you have a Logitech soundman games
acer=1 Set this to detect cards in some ACER notebooks
mwave_bug=1 Set if you are trying to use this driver with mwave (see on)
+type Use this to specify a specific card type
+
+The following arguments are taken if ISAPnP support is compiled in
+
+isapnp=0 Set this to disable ISAPnP detection (use io=0xXXX etc. above)
+multiple=1 Set to enable detection of multiple Soundblaster cards.
+reverse=1 Reverses the order of the search in the PnP table.
+uart401=1 Set to enable detection of mpu devices on some clones.
+isapnpjump Jumps to a specific slot in the driver's PnP table. Use the
+ source, Luke.
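For example, to bypass ISAPnP detection entirely and fall back to explicit resources (illustrative values):

```
options sb isapnp=0 io=0x220 irq=5 dma=1 dma16=5
```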
You may well want to load the opl3 driver for synth music on most SB and
clone SB devices
P: Jakub Jelinek
M: jj@sunsite.ms.mff.cuni.cz
P: Anton Blanchard
-M: anton@progsoc.uts.edu.au
+M: anton@linuxcare.com
L: sparclinux@vger.rutgers.edu
L: ultralinux@vger.rutgers.edu
W: http://ultra.linux.cz
VERSION = 2
PATCHLEVEL = 3
SUBLEVEL = 99
-EXTRAVERSION = -pre5
+EXTRAVERSION = -pre6
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
[7.1.] Software (add the output of the ver_linux script here)
[7.2.] Processor information (from /proc/cpuinfo):
[7.3.] Module information (from /proc/modules):
-[7.4.] SCSI information (from /proc/scsi/scsi)
-[7.5.] Other information that might be relevant to the problem
+[7.4.] Loaded driver and hardware information (/proc/ioports, /proc/iomem)
+[7.5.] PCI information ('lspci -vvv' as root)
+[7.6.] SCSI information (from /proc/scsi/scsi)
+[7.7.] Other information that might be relevant to the problem
(please look in /proc and include all information that you
think to be relevant):
[X.] Other notes, patches, fixes, workarounds:
if [ "$CONFIG_MK6" = "y" ]; then
define_bool CONFIG_X86_ALIGNMENT_16 y
define_bool CONFIG_X86_TSC y
- define_bool CONFIG_X86_USE_3DNOW y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
fi
if [ "$CONFIG_M686" = "y" ]; then
define_bool CONFIG_X86_USE_3DNOW y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
+ define_int CONFIG_X86_L1_CACHE_BYTES 64
fi
tristate '/dev/cpu/microcode - Intel P6 CPU microcode support' CONFIG_MICROCODE
* (c) 1999, 2000 Ingo Molnar <mingo@redhat.com>
*
* Fixes
- * Maciej W. Rozycki : Bits for genuine 82489DX timers
+ * Maciej W. Rozycki : Bits for genuine 82489DX APICs;
+ * thanks to Eric Gilmore for
+ * testing these extensively
*/
#include <linux/config.h>
return maxlvt;
}
-void disable_local_APIC (void)
+static void clear_local_APIC(void)
{
- unsigned long value;
- int maxlvt;
+ int maxlvt;
+ unsigned long v;
+
+ maxlvt = get_maxlvt();
/*
- * Disable APIC
+ * Careful: we have to set masks only first to deassert
+ * any level-triggered sources.
*/
- value = apic_read(APIC_SPIV);
- value &= ~(1<<8);
- apic_write(APIC_SPIV,value);
+ v = apic_read(APIC_LVTT);
+ apic_write_around(APIC_LVTT, v | APIC_LVT_MASKED);
+ v = apic_read(APIC_LVT0);
+ apic_write_around(APIC_LVT0, v | APIC_LVT_MASKED);
+ v = apic_read(APIC_LVT1);
+ apic_write_around(APIC_LVT1, v | APIC_LVT_MASKED);
+ if (maxlvt >= 3) {
+ v = apic_read(APIC_LVTERR);
+ apic_write_around(APIC_LVTERR, v | APIC_LVT_MASKED);
+ }
+ if (maxlvt >= 4) {
+ v = apic_read(APIC_LVTPC);
+ apic_write_around(APIC_LVTPC, v | APIC_LVT_MASKED);
+ }
/*
* Clean APIC state for other OSs:
*/
- value = apic_read(APIC_SPIV);
- value &= ~(1<<8);
- apic_write(APIC_SPIV,value);
- maxlvt = get_maxlvt();
- apic_write_around(APIC_LVTT, 0x00010000);
- apic_write_around(APIC_LVT0, 0x00010000);
- apic_write_around(APIC_LVT1, 0x00010000);
+ apic_write_around(APIC_LVTT, APIC_LVT_MASKED);
+ apic_write_around(APIC_LVT0, APIC_LVT_MASKED);
+ apic_write_around(APIC_LVT1, APIC_LVT_MASKED);
if (maxlvt >= 3)
- apic_write_around(APIC_LVTERR, 0x00010000);
+ apic_write_around(APIC_LVTERR, APIC_LVT_MASKED);
if (maxlvt >= 4)
- apic_write_around(APIC_LVTPC, 0x00010000);
+ apic_write_around(APIC_LVTPC, APIC_LVT_MASKED);
+}
+
+void __init connect_bsp_APIC(void)
+{
+ if (pic_mode) {
+ /*
+ * Do not trust the local APIC being empty at bootup.
+ */
+ clear_local_APIC();
+ /*
+ * PIC mode, enable symmetric IO mode in the IMCR,
+ * i.e. connect BSP's local APIC to INT and NMI lines.
+ */
+ printk("leaving PIC mode, enabling symmetric IO mode.\n");
+ outb(0x70, 0x22);
+ outb(0x01, 0x23);
+ }
+}
+
+void disconnect_bsp_APIC(void)
+{
+ if (pic_mode) {
+ /*
+ * Put the board back into PIC mode (has an effect
+ * only on certain older boards). Note that APIC
+ * interrupts, including IPIs, won't work beyond
+ * this point! The only exception are INIT IPIs.
+ */
+ printk("disabling symmetric IO mode, entering PIC mode.\n");
+ outb(0x70, 0x22);
+ outb(0x00, 0x23);
+ }
+}
+
+void disable_local_APIC(void)
+{
+ unsigned long value;
+
+ clear_local_APIC();
+
+ /*
+ * Disable APIC (implies clearing of registers
+ * for 82489DX!).
+ */
+ value = apic_read(APIC_SPIV);
+ value &= ~(1<<8);
+ apic_write_around(APIC_SPIV, value);
+}
+
+void __init sync_Arb_IDs(void)
+{
+ Dprintk("Synchronizing Arb IDs.\n");
+ apic_write_around(APIC_ICR, APIC_DEST_ALLINC | APIC_INT_LEVELTRIG
+ | APIC_DM_INIT);
}
extern void __error_in_apic_c (void);
{
unsigned long value, ver, maxlvt;
+ value = apic_read(APIC_LVR);
+ ver = GET_APIC_VERSION(value);
+
if ((SPURIOUS_APIC_VECTOR & 0x0f) != 0x0f)
__error_in_apic_c();
if (!test_bit(GET_APIC_ID(apic_read(APIC_ID)), &phys_cpu_present_map))
BUG();
- value = apic_read(APIC_SPIV);
+ value = apic_read(APIC_SPIV);
+ value &= ~APIC_VECTOR_MASK;
/*
* Enable APIC
*/
- value |= (1<<8);
+ value |= (1<<8);
/*
* Some unknown Intel IO/APIC (or APIC) errata is biting us with
*/
#if 0
/* Enable focus processor (bit==0) */
- value &= ~(1<<9);
+ value &= ~(1<<9);
#else
/* Disable focus processor (bit==1) */
value |= (1<<9);
* Set spurious IRQ vector
*/
value |= SPURIOUS_APIC_VECTOR;
- apic_write(APIC_SPIV,value);
+ apic_write_around(APIC_SPIV, value);
/*
* Set up LVT0, LVT1:
* strictly necessary in pure symmetric-IO mode, but sometimes
* we delegate interrupts to the 8259A.
*/
- if (!smp_processor_id()) {
- value = 0x00000700;
+ /*
+ * TODO: set up through-local-APIC from through-I/O-APIC? --macro
+ */
+ value = apic_read(APIC_LVT0) & APIC_LVT_MASKED;
+ if (!smp_processor_id() && (pic_mode || !value)) {
+ value = APIC_DM_EXTINT;
printk("enabled ExtINT on CPU#%d\n", smp_processor_id());
} else {
- value = 0x00010700;
+ value = APIC_DM_EXTINT | APIC_LVT_MASKED;
printk("masked ExtINT on CPU#%d\n", smp_processor_id());
}
- apic_write_around(APIC_LVT0,value);
+ apic_write_around(APIC_LVT0, value);
/*
* only the BP should see the LINT1 NMI signal, obviously.
*/
if (!smp_processor_id())
- value = 0x00000400; // unmask NMI
+ value = APIC_DM_NMI;
else
- value = 0x00010400; // mask NMI
- apic_write_around(APIC_LVT1,value);
+ value = APIC_DM_NMI | APIC_LVT_MASKED;
+ if (!APIC_INTEGRATED(ver)) /* 82489DX */
+ value |= APIC_LVT_LEVEL_TRIGGER;
+ apic_write_around(APIC_LVT1, value);
- value = apic_read(APIC_LVR);
- ver = GET_APIC_VERSION(value);
if (APIC_INTEGRATED(ver)) { /* !82489DX */
maxlvt = get_maxlvt();
- /*
- * Due to the Pentium erratum 3AP.
- */
- if (maxlvt > 3) {
- apic_readaround(APIC_SPIV); // not strictly necessery
+ if (maxlvt > 3) /* Due to the Pentium erratum 3AP. */
apic_write(APIC_ESR, 0);
- }
value = apic_read(APIC_ESR);
printk("ESR value before enabling vector: %08lx\n", value);
- value = apic_read(APIC_LVTERR);
value = ERROR_APIC_VECTOR; // enables sending errors
- apic_write(APIC_LVTERR,value);
+ apic_write_around(APIC_LVTERR, value);
/*
* spec says clear errors after enabling vector.
*/
- if (maxlvt != 3) {
- apic_readaround(APIC_SPIV);
+ if (maxlvt > 3)
apic_write(APIC_ESR, 0);
- }
value = apic_read(APIC_ESR);
printk("ESR value after enabling vector: %08lx\n", value);
} else
* Set Task Priority to 'accept all'. We never change this
* later on.
*/
- value = apic_read(APIC_TASKPRI);
- value &= ~APIC_TPRI_MASK;
- apic_write(APIC_TASKPRI,value);
+ value = apic_read(APIC_TASKPRI);
+ value &= ~APIC_TPRI_MASK;
+ apic_write_around(APIC_TASKPRI, value);
/*
* Set up the logical destination ID and put the
* APIC into flat delivery mode.
*/
- value = apic_read(APIC_LDR);
+ value = apic_read(APIC_LDR);
value &= ~APIC_LDR_MASK;
value |= (1<<(smp_processor_id()+24));
- apic_write(APIC_LDR,value);
+ apic_write_around(APIC_LDR, value);
- value = apic_read(APIC_DFR);
- value |= SET_APIC_DFR(0xf);
- apic_write(APIC_DFR, value);
+ /*
+ * Must be "all ones" explicitly for 82489DX.
+ */
+ apic_write_around(APIC_DFR, 0xffffffff);
}
void __init init_apic_mappings(void)
set_fixmap_nocache(FIX_APIC_BASE, apic_phys);
Dprintk("mapped APIC to %08lx (%08lx)\n", APIC_BASE, apic_phys);
+ /*
+ * Fetch the APIC ID of the BSP in case we have a
+ * default configuration (or the MP table is broken).
+ */
+ if (boot_cpu_id == -1U)
+ boot_cpu_id = GET_APIC_ID(apic_read(APIC_ID));
+
#ifdef CONFIG_X86_IO_APIC
{
unsigned long ioapic_phys, idx = FIX_IO_APIC_BASE_0;
* chipset timer can cause.
*/
- } while (delta<300);
+ } while (delta < 300);
}
/*
{
unsigned int lvtt1_value, tmp_value;
- tmp_value = apic_read(APIC_LVTT);
lvtt1_value = SET_APIC_TIMER_BASE(APIC_TIMER_BASE_DIV) |
APIC_LVT_TIMER_PERIODIC | LOCAL_TIMER_VECTOR;
- apic_write(APIC_LVTT, lvtt1_value);
+ apic_write_around(APIC_LVTT, lvtt1_value);
/*
* Divide PICLK by 16
*/
tmp_value = apic_read(APIC_TDCR);
- apic_write(APIC_TDCR, (tmp_value
+ apic_write_around(APIC_TDCR, (tmp_value
& ~(APIC_TDR_DIV_1 | APIC_TDR_DIV_TMBASE))
| APIC_TDR_DIV_16);
- tmp_value = apic_read(APIC_TMICT);
- apic_write(APIC_TMICT, clocks/APIC_DIVISOR);
+ apic_write_around(APIC_TMICT, clocks/APIC_DIVISOR);
}
void setup_APIC_timer(void * data)
t0 = apic_read(APIC_TMCCT)*APIC_DIVISOR;
do {
+ /*
+ * It looks like the 82489DX cannot handle
+ * consecutive reads of the TMCCT register well;
+ * this dummy read prevents it from a lockup.
+ */
+ apic_read(APIC_SPIV);
t1 = apic_read(APIC_TMCCT)*APIC_DIVISOR;
delta = (int)(t0 - t1 - slice*(smp_processor_id()+1));
} while (delta < 0);
#undef APIC_DIVISOR
+#ifdef CONFIG_SMP
+static inline void handle_smp_time (int user, int cpu)
+{
+ int system = !user;
+ struct task_struct * p = current;
+ /*
+ * After doing the above, we need to make like
+ * a normal interrupt - otherwise timer interrupts
+ * ignore the global interrupt lock, which is the
+ * WrongThing (tm) to do.
+ */
+
+ irq_enter(cpu, 0);
+ update_one_process(p, 1, user, system, cpu);
+ if (p->pid) {
+ p->counter -= 1;
+ if (p->counter <= 0) {
+ p->counter = 0;
+ p->need_resched = 1;
+ }
+ if (p->priority < DEF_PRIORITY) {
+ kstat.cpu_nice += user;
+ kstat.per_cpu_nice[cpu] += user;
+ } else {
+ kstat.cpu_user += user;
+ kstat.per_cpu_user[cpu] += user;
+ }
+ kstat.cpu_system += system;
+ kstat.per_cpu_system[cpu] += system;
+
+ }
+ irq_exit(cpu, 0);
+}
+#endif
+
/*
* Local timer interrupt handler. It does both profiling and
* process statistics/rescheduling.
inline void smp_local_timer_interrupt(struct pt_regs * regs)
{
- int user = (user_mode(regs) != 0);
int cpu = smp_processor_id();
/*
* updated with atomic operations). This is especially
* useful with a profiling multiplier != 1
*/
- if (!user)
- x86_do_profile(regs->eip);
if (--prof_counter[cpu] <= 0) {
- int system = 1 - user;
- struct task_struct * p = current;
-
/*
* The multiplier may have changed since the last time we got
* to this point as a result of the user writing to
prof_old_multiplier[cpu] = prof_counter[cpu];
}
- /*
- * After doing the above, we need to make like
- * a normal interrupt - otherwise timer interrupts
- * ignore the global interrupt lock, which is the
- * WrongThing (tm) to do.
- */
-
- irq_enter(cpu, 0);
- update_one_process(p, 1, user, system, cpu);
- if (p->pid) {
- p->counter -= 1;
- if (p->counter <= 0) {
- p->counter = 0;
- p->need_resched = 1;
- }
- if (p->priority < DEF_PRIORITY) {
- kstat.cpu_nice += user;
- kstat.per_cpu_nice[cpu] += user;
- } else {
- kstat.cpu_user += user;
- kstat.per_cpu_user[cpu] += user;
- }
- kstat.cpu_system += system;
- kstat.per_cpu_system[cpu] += system;
-
- }
- irq_exit(cpu, 0);
+#ifdef CONFIG_SMP
+ handle_smp_time(user_mode(regs), cpu);
+#endif
}
/*
*/
asmlinkage void smp_spurious_interrupt(void)
{
- ack_APIC_irq();
+ unsigned long v;
+
+ /*
+ * Check if this really is a spurious interrupt and ACK it
+ * if it is a vectored one. Just in case...
+ * Spurious interrupts should not be ACKed.
+ */
+ v = apic_read(APIC_ISR + ((SPURIOUS_APIC_VECTOR & ~0x1f) >> 1));
+ if (v & (1 << (SPURIOUS_APIC_VECTOR & 0x1f)))
+ ack_APIC_irq();
+
/* see sw-dev-man vol 3, chapter 7.4.13.5 */
printk("spurious APIC interrupt on CPU#%d, should never happen.\n",
smp_processor_id());
extern int dump_fpu(elf_fpregset_t *);
extern spinlock_t rtc_lock;
+#if defined(CONFIG_APM)
+extern void machine_real_restart(unsigned char *, int);
+EXPORT_SYMBOL(machine_real_restart);
+#endif
+
#ifdef CONFIG_SMP
extern void FASTCALL( __write_lock_failed(rwlock_t *rw));
extern void FASTCALL( __read_lock_failed(rwlock_t *rw));
EXPORT_SYMBOL_NOVERS(__down_read_failed);
EXPORT_SYMBOL_NOVERS(__rwsem_wake);
/* Networking helper routines. */
-EXPORT_SYMBOL(csum_partial_copy);
EXPORT_SYMBOL(csum_partial_copy_generic);
/* Delay loops */
EXPORT_SYMBOL(__udelay);
EXPORT_SYMBOL(strtok);
EXPORT_SYMBOL(strpbrk);
-EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strncpy_from_user);
EXPORT_SYMBOL(__strncpy_from_user);
for (i = 0; i < NR_IRQS; i++) {
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
- irq_desc[i].depth = 0;
+ irq_desc[i].depth = 1;
if (i < 16) {
/*
* and Ingo Molnar <mingo@redhat.com>
*
* Fixes
- * Maciej W. Rozycki : Bits for genuine 82489DX APICs
+ * Maciej W. Rozycki : Bits for genuine 82489DX APICs;
+ * thanks to Eric Gilmore for
+ * testing these extensively
*/
#include <linux/mm.h>
/* MP IRQ source entries */
int mp_irq_entries = 0;
-/* non-0 if default (table-less) MP configuration */
-int mpc_default_type = 0;
-
/*
* Rough estimation of how many shared IRQs there are, can
* be changed anytime.
#define MAX_PIRQS 8
int pirq_entries [MAX_PIRQS];
-int pirqs_enabled;
+int pirqs_enabled = 0;
int skip_ioapic_setup = 0;
static int __init ioapic_setup(char *str)
int lbus = mp_irqs[i].mpc_srcbus;
if ((mp_bus_id_to_type[lbus] == MP_BUS_ISA ||
- mp_bus_id_to_type[lbus] == MP_BUS_EISA) &&
+ mp_bus_id_to_type[lbus] == MP_BUS_EISA ||
+ mp_bus_id_to_type[lbus] == MP_BUS_MCA) &&
(mp_irqs[i].mpc_irqtype == type) &&
(mp_irqs[i].mpc_srcbusirq == 0x00))
if (mp_ioapics[apic].mpc_apicid == mp_irqs[i].mpc_dstapic)
break;
- if ((apic || IO_APIC_IRQ(mp_irqs[i].mpc_dstirq)) &&
- (mp_bus_id_to_type[lbus] == MP_BUS_PCI) &&
+ if ((mp_bus_id_to_type[lbus] == MP_BUS_PCI) &&
!mp_irqs[i].mpc_irqtype &&
(bus == mp_bus_id_to_pci_bus[mp_irqs[i].mpc_srcbus]) &&
(slot == ((mp_irqs[i].mpc_srcbusirq >> 2) & 0x1f))) {
int irq = pin_2_irq(i,apic,mp_irqs[i].mpc_dstirq);
+ if (!(apic || IO_APIC_IRQ(irq)))
+ continue;
+
if (pci_pin == (mp_irqs[i].mpc_srcbusirq & 3))
return irq;
/*
* EISA conforming in the MP table, that means its trigger type must
* be read in from the ELCR */
-#define default_EISA_trigger(idx) (EISA_ELCR(mp_irqs[idx].mpc_dstirq))
+#define default_EISA_trigger(idx) (EISA_ELCR(mp_irqs[idx].mpc_srcbusirq))
#define default_EISA_polarity(idx) (0)
-/* ISA interrupts are always polarity zero edge triggered, even when
- * listed as conforming in the MP table. */
+/* ISA interrupts are always polarity zero edge triggered,
+ * when listed as conforming in the MP table. */
#define default_ISA_trigger(idx) (0)
#define default_ISA_polarity(idx) (0)
+/* PCI interrupts are always polarity one level triggered,
+ * when listed as conforming in the MP table. */
+
+#define default_PCI_trigger(idx) (1)
+#define default_PCI_polarity(idx) (1)
+
+/* MCA interrupts are always polarity zero level triggered,
+ * when listed as conforming in the MP table. */
+
+#define default_MCA_trigger(idx) (1)
+#define default_MCA_polarity(idx) (0)
+
static int __init MPBIOS_polarity(int idx)
{
int bus = mp_irqs[idx].mpc_srcbus;
polarity = default_ISA_polarity(idx);
break;
}
- case MP_BUS_EISA:
+ case MP_BUS_EISA: /* EISA pin */
{
polarity = default_EISA_polarity(idx);
break;
}
case MP_BUS_PCI: /* PCI pin */
{
- polarity = 1;
+ polarity = default_PCI_polarity(idx);
+ break;
+ }
+ case MP_BUS_MCA: /* MCA pin */
+ {
+ polarity = default_MCA_polarity(idx);
break;
}
default:
{
switch (mp_bus_id_to_type[bus])
{
- case MP_BUS_ISA:
+ case MP_BUS_ISA: /* ISA pin */
{
trigger = default_ISA_trigger(idx);
break;
}
- case MP_BUS_EISA:
+ case MP_BUS_EISA: /* EISA pin */
{
trigger = default_EISA_trigger(idx);
break;
}
- case MP_BUS_PCI: /* PCI pin, level */
+ case MP_BUS_PCI: /* PCI pin */
{
- trigger = 1;
+ trigger = default_PCI_trigger(idx);
+ break;
+ }
+ case MP_BUS_MCA: /* MCA pin */
+ {
+ trigger = default_MCA_trigger(idx);
break;
}
default:
{
case MP_BUS_ISA: /* ISA pin */
case MP_BUS_EISA:
+ case MP_BUS_MCA:
{
irq = mp_irqs[idx].mpc_srcbusirq;
break;
disable_8259A_irq(0);
- apic_readaround(APIC_LVT0);
- apic_write(APIC_LVT0, 0x00010700); // mask LVT0
+ /* mask LVT0 */
+ apic_write_around(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_EXTINT);
init_8259A(1);
/*
* Add it to the IO-APIC irq-routing table:
*/
- io_apic_write(0, 0x10+2*pin, *(((int *)&entry)+0));
io_apic_write(0, 0x11+2*pin, *(((int *)&entry)+1));
+ io_apic_write(0, 0x10+2*pin, *(((int *)&entry)+0));
enable_8259A_irq(0);
}
printk(KERN_DEBUG ".... IRQ redirection table:\n");
- printk(KERN_DEBUG " NR Log Phy ");
- printk(KERN_DEBUG "Mask Trig IRR Pol Stat Dest Deli Vect: \n");
+ printk(KERN_DEBUG " NR Log Phy Mask Trig IRR Pol"
+ " Stat Dest Deli Vect: \n");
for (i = 0; i <= reg_01.entries; i++) {
struct IO_APIC_route_entry entry;
print_APIC_bitfield(APIC_IRR);
if (APIC_INTEGRATED(ver)) { /* !82489DX */
- /*
- * Due to the Pentium erratum 3AP.
- */
- if (maxlvt > 3) {
- apic_readaround(APIC_SPIV); // not strictly necessery
+ if (maxlvt > 3) /* Due to the Pentium erratum 3AP. */
apic_write(APIC_ESR, 0);
- }
v = apic_read(APIC_ESR);
printk(KERN_DEBUG "... APIC ESR: %08x\n", v);
}
print_local_APIC(NULL);
}
+void /*__init*/ print_PIC(void)
+{
+ unsigned int v, flags;
+
+ printk(KERN_DEBUG "\nprinting PIC contents\n");
+
+ v = inb(0xa1) << 8 | inb(0x21);
+ printk(KERN_DEBUG "... PIC IMR: %04x\n", v);
+
+ v = inb(0xa0) << 8 | inb(0x20);
+ printk(KERN_DEBUG "... PIC IRR: %04x\n", v);
+
+ __save_flags(flags);
+ __cli();
+ outb(0x0b,0xa0);
+ outb(0x0b,0x20);
+ v = inb(0xa0) << 8 | inb(0x20);
+ outb(0x0a,0xa0);
+ outb(0x0a,0x20);
+ __restore_flags(flags);
+ printk(KERN_DEBUG "... PIC ISR: %04x\n", v);
+
+ v = inb(0x4d1) << 8 | inb(0x4d0);
+ printk(KERN_DEBUG "... PIC ELCR: %04x\n", v);
+}
+
static void __init enable_IO_APIC(void)
{
struct IO_APIC_reg_01 reg_01;
}
if (!pirqs_enabled)
for (i = 0; i < MAX_PIRQS; i++)
- pirq_entries[i] =- 1;
-
- if (pic_mode) {
- /*
- * PIC mode, enable symmetric IO mode in the IMCR.
- */
- printk("leaving PIC mode, enabling symmetric IO mode.\n");
- outb(0x70, 0x22);
- outb(0x01, 0x23);
- }
+ pirq_entries[i] = -1;
/*
* The number of IO-APIC IRQ registers (== #pins):
*/
clear_IO_APIC();
- /*
- * Put it back into PIC mode (has an effect only on
- * certain older boards)
- */
- if (pic_mode) {
- printk("disabling symmetric IO mode, entering PIC mode.\n");
- outb_p(0x70, 0x22);
- outb_p(0x00, 0x23);
- }
+ disconnect_bsp_APIC();
}
/*
}
}
-static void __init construct_default_ISA_mptable(void)
-{
- int i, pos = 0;
- const int bus_type = (mpc_default_type == 2 || mpc_default_type == 3 ||
- mpc_default_type == 6) ? MP_BUS_EISA : MP_BUS_ISA;
-
- for (i = 0; i < 16; i++) {
- if (!IO_APIC_IRQ(i))
- continue;
-
- mp_irqs[pos].mpc_irqtype = mp_INT;
- mp_irqs[pos].mpc_irqflag = 0; /* default */
- mp_irqs[pos].mpc_srcbus = 0;
- mp_irqs[pos].mpc_srcbusirq = i;
- mp_irqs[pos].mpc_dstapic = 0;
- mp_irqs[pos].mpc_dstirq = i;
- pos++;
- }
- mp_irq_entries = pos;
- mp_bus_id_to_type[0] = bus_type;
-
- /*
- * MP specification 1.4 defines some extra rules for default
- * configurations, fix them up here:
- */
- switch (mpc_default_type)
- {
- case 2:
- /*
- * IRQ0 is not connected:
- */
- mp_irqs[0].mpc_irqtype = mp_ExtINT;
- break;
- default:
- /*
- * pin 2 is IRQ0:
- */
- mp_irqs[0].mpc_dstirq = 2;
- }
-
-}
-
/*
* There is a nasty bug in some older SMP boards, their mptable lies
* about the timer IRQ. We do the following to work around the situation:
unsigned int t1 = jiffies;
sti();
- mdelay(40);
+ /* Let ten ticks pass... */
+ mdelay((10 * 1000) / HZ);
- if (jiffies-t1>1)
+ /*
+ * Expect a few ticks at least, to be sure some possible
+ * glue logic does not lock up after one or two first
+ * ticks in a non-ExtINT mode. Also the local APIC
+ * might have cached one ExtINT interrupt. Finally, at
+ * least one tick may be lost due to delays.
+ */
+ if (jiffies - t1 > 4)
return 1;
return 0;
static void enable_NMI_through_LVT0 (void * dummy)
{
- apic_readaround(APIC_LVT0);
- apic_write(APIC_LVT0, 0x00000400); // unmask and set to NMI
+ unsigned int v, ver;
+
+ ver = apic_read(APIC_LVR);
+ ver = GET_APIC_VERSION(ver);
+ v = APIC_DM_NMI; /* unmask and set to NMI */
+ if (!APIC_INTEGRATED(ver)) /* 82489DX */
+ v |= APIC_LVT_LEVEL_TRIGGER;
+ apic_write_around(APIC_LVT0, v);
}
static void setup_nmi (void)
printk(KERN_INFO "..TIMER: vector=%d pin1=%d pin2=%d\n", vector, pin1, pin2);
- /*
- * Ok, does IRQ0 through the IOAPIC work?
- */
- if (timer_irq_works()) {
- if (nmi_watchdog) {
- disable_8259A_irq(0);
- init_8259A(1);
- setup_nmi();
- enable_8259A_irq(0);
- if (nmi_irq_works())
- return;
- } else
- return;
- }
-
if (pin1 != -1) {
- printk(KERN_ERR "..MP-BIOS bug: 8254 timer not connected to IO-APIC\n");
+ /*
+ * Ok, does IRQ0 through the IOAPIC work?
+ */
+ unmask_IO_APIC_irq(0);
+ if (timer_irq_works()) {
+ if (nmi_watchdog) {
+ disable_8259A_irq(0);
+ init_8259A(1);
+ setup_nmi();
+ enable_8259A_irq(0);
+ nmi_irq_works();
+ }
+ return;
+ }
clear_IO_APIC_pin(0, pin1);
+ printk(KERN_ERR "..MP-BIOS bug: 8254 timer not connected to IO-APIC\n");
}
printk(KERN_INFO "...trying to set up timer (IRQ0) through the 8259A ... ");
printk("works.\n");
if (nmi_watchdog) {
setup_nmi();
- if (nmi_irq_works())
- return;
- } else
- return;
+ nmi_irq_works();
+ }
+ return;
}
/*
* Cleanup, just in case ...
disable_8259A_irq(0);
irq_desc[0].handler = &lapic_irq_type;
- init_8259A(1); // AEOI mode
- apic_readaround(APIC_LVT0);
- apic_write(APIC_LVT0, 0x00000000 | vector); // Fixed mode
+ init_8259A(1); /* AEOI mode */
+ apic_write_around(APIC_LVT0, APIC_DM_FIXED | vector); /* Fixed mode */
enable_8259A_irq(0);
if (timer_irq_works()) {
io_apic_irqs = ~PIC_IRQS;
printk("ENABLING IO-APIC IRQs\n");
- /*
- * If there are no explicit MP IRQ entries, it's either one of the
- * default configuration types or we are broken. In both cases it's
- * fine to set up most of the low 16 IO-APIC pins to ISA defaults.
- */
- if (!mp_irq_entries) {
- printk("no explicit IRQ entries, using default mptable\n");
- construct_default_ISA_mptable();
- }
-
/*
* Set up the IO-APIC IRQ routing table by parsing the MP-BIOS
* mptable:
*/
setup_ioapic_ids_from_mpc();
+ sync_Arb_IDs();
setup_IO_APIC_irqs();
init_IO_APIC_traps();
check_timer();
{
if (!smp_found_config)
return;
+ connect_bsp_APIC();
setup_local_APIC();
setup_IO_APIC();
setup_APIC_clocks();
* @irq: Interrupt to disable
*
* Disable the selected interrupt line. Disables of an interrupt
- * stack. Unlike disable_irq, this function does not ensure existing
- * instances of the irq handler have completed before returning.
+ * stack. Unlike disable_irq(), this function does not ensure existing
+ * instances of the IRQ handler have completed before returning.
*
* This function may be called from IRQ context.
*/
irq_dir[irq] = proc_mkdir(name, root_irq_dir);
/* create /proc/irq/1234/smp_affinity */
- entry = create_proc_entry("smp_affinity", 0700, irq_dir[irq]);
+ entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
entry->nlink = 1;
entry->data = (void *)(long)irq;
root_irq_dir = proc_mkdir("irq", 0);
/* create /proc/irq/prof_cpu_mask */
- entry = create_proc_entry("prof_cpu_mask", 0700, root_irq_dir);
+ entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
entry->nlink = 1;
entry->data = (void *)&prof_cpu_mask;
/**
* mca_find_adapter - scan for adapters
* @id: MCA identification to search for
- * @start: Starting slot
+ * @start: starting slot
*
* Search the MCA configuration for adapters matching the 16bit
* ID given. The first time it should be called with start as zero
* and then further calls made passing the return value of the
- * previous call until MCA_NOTFOUND is returned.
+ * previous call until %MCA_NOTFOUND is returned.
*
* Disabled adapters are not reported.
*/
/**
* mca_find_unused_adapter - scan for unused adapters
* @id: MCA identification to search for
- * @start: Starting slot
+ * @start: starting slot
*
* Search the MCA configuration for adapters matching the 16bit
* ID given. The first time it should be called with start as zero
* and then further calls made passing the return value of the
- * previous call until MCA_NOTFOUND is returned.
+ * previous call until %MCA_NOTFOUND is returned.
*
* Adapters that have been claimed by drivers and those that
* are disabled are not reported. This function thus allows a driver
* function is called with the buffer, slot, and device pointer (or
* some equally informative context information, or nothing, if you
* prefer), and is expected to put useful information into the
- * buffer. The adapter name, id, and POS registers get printed
+ * buffer. The adapter name, ID, and POS registers get printed
* before this is called though, so don't do it again.
*
- * This should be called with a NULL procfn when a module
+ * This should be called with a %NULL @procfn when a module
* unregisters, thus preventing kernel crashes and other such
* nastiness.
*/
* Erich Boleyn : MP v1.4 and additional changes.
* Alan Cox : Added EBDA scanning
* Ingo Molnar : various cleanups and rewrites
- * Maciej W. Rozycki : Bits for genuine 82489DX APICs
+ * Maciej W. Rozycki : Bits for default MP configurations
*/
#include <linux/mm.h>
* Various Linux-internal data structures created from the
* MP-table.
*/
-int apic_version [NR_CPUS];
+int apic_version [MAX_APICS];
int mp_bus_id_to_type [MAX_MP_BUSSES] = { -1, };
int mp_bus_id_to_pci_bus [MAX_MP_BUSSES] = { -1, };
int mp_current_pci_id = 0;
unsigned long mp_lapic_addr = 0;
/* Processor that is doing the boot up */
-unsigned int boot_cpu_id = 0;
+unsigned int boot_cpu_id = -1U;
/* Internal processor count */
-static unsigned int num_processors = 1;
+static unsigned int num_processors = 0;
/* Bitmask of physically existing CPUs */
unsigned long phys_cpu_present_map = 0;
if (m->mpc_cpuflag & CPU_BOOTPROCESSOR) {
Dprintk(" Bootup CPU\n");
boot_cpu_id = m->mpc_apicid;
- } else
- /* Boot CPU already counted */
- num_processors++;
+ }
+ num_processors++;
- if (m->mpc_apicid > NR_CPUS) {
- printk("Processor #%d unused. (Max %d processors).\n",
- m->mpc_apicid, NR_CPUS);
+ if (m->mpc_apicid > MAX_APICS) {
+ printk("Processor #%d INVALID. (Max ID: %d).\n",
+ m->mpc_apicid, MAX_APICS);
return;
}
ver = m->mpc_apicver;
if (strncmp(str, "ISA", 3) == 0) {
mp_bus_id_to_type[m->mpc_busid] = MP_BUS_ISA;
- } else {
- if (strncmp(str, "EISA", 4) == 0) {
+ } else if (strncmp(str, "EISA", 4) == 0) {
mp_bus_id_to_type[m->mpc_busid] = MP_BUS_EISA;
- } else {
- if (strncmp(str, "PCI", 3) == 0) {
+ } else if (strncmp(str, "PCI", 3) == 0) {
mp_bus_id_to_type[m->mpc_busid] = MP_BUS_PCI;
mp_bus_id_to_pci_bus[m->mpc_busid] = mp_current_pci_id;
mp_current_pci_id++;
+ } else if (strncmp(str, "MCA", 3) == 0) {
+ mp_bus_id_to_type[m->mpc_busid] = MP_BUS_MCA;
} else {
printk("Unknown bustype %s\n", str);
panic("cannot handle bus - mail to linux-smp@vger.rutgers.edu");
- } } }
+ }
}
static void __init MP_ioapic_info (struct mpc_config_ioapic *m)
static void __init MP_intsrc_info (struct mpc_config_intsrc *m)
{
mp_irqs [mp_irq_entries] = *m;
+ Dprintk("Int: type %d, pol %d, trig %d, bus %d,"
+ " IRQ %02x, APIC ID %x, APIC INT %02x\n",
+ m->mpc_irqtype, m->mpc_irqflag & 3,
+ (m->mpc_irqflag >> 2) & 3, m->mpc_srcbus,
+ m->mpc_srcbusirq, m->mpc_dstapic, m->mpc_dstirq);
if (++mp_irq_entries == MAX_IRQ_SOURCES)
panic("Max # of irq sources exceeded!!\n");
}
static void __init MP_lintsrc_info (struct mpc_config_lintsrc *m)
{
+ Dprintk("Lint: type %d, pol %d, trig %d, bus %d,"
+ " IRQ %02x, APIC ID %x, APIC LINT %02x\n",
+ m->mpc_irqtype, m->mpc_irqflag & 3,
+	(m->mpc_irqflag >> 2) & 3, m->mpc_srcbusid,
+ m->mpc_srcbusirq, m->mpc_destapic, m->mpc_destapiclint);
/*
* Well it seems all SMP boards in existence
* use ExtINT/LVT1 == LINT0 and
return num_processors;
}
+static void __init construct_default_ioirq_mptable(int mpc_default_type)
+{
+ struct mpc_config_intsrc intsrc;
+ int i;
+
+ intsrc.mpc_type = MP_INTSRC;
+ intsrc.mpc_irqflag = 0; /* conforming */
+ intsrc.mpc_srcbus = 0;
+ intsrc.mpc_dstapic = mp_ioapics[0].mpc_apicid;
+
+ intsrc.mpc_irqtype = mp_INT;
+ for (i = 0; i < 16; i++) {
+ switch (mpc_default_type) {
+ case 2:
+ if (i == 0 || i == 13)
+ continue; /* IRQ0 & IRQ13 not connected */
+ /* fall through */
+ default:
+ if (i == 2)
+ continue; /* IRQ2 is never connected */
+ }
+
+ intsrc.mpc_srcbusirq = i;
+ intsrc.mpc_dstirq = i ? i : 2; /* IRQ0 to INTIN2 */
+ MP_intsrc_info(&intsrc);
+ }
+
+ intsrc.mpc_irqtype = mp_ExtINT;
+ intsrc.mpc_srcbusirq = 0;
+ intsrc.mpc_dstirq = 0; /* 8259A to INTIN0 */
+ MP_intsrc_info(&intsrc);
+}
+
+static inline void __init construct_default_ISA_mptable(int mpc_default_type)
+{
+ struct mpc_config_processor processor;
+ struct mpc_config_bus bus;
+ struct mpc_config_ioapic ioapic;
+ struct mpc_config_lintsrc lintsrc;
+ int linttypes[2] = { mp_ExtINT, mp_NMI };
+ int i;
+
+ /*
+ * local APIC has default address
+ */
+ mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
+
+ /*
+ * 2 CPUs, numbered 0 & 1.
+ */
+ processor.mpc_type = MP_PROCESSOR;
+ /* Either an integrated APIC or a discrete 82489DX. */
+ processor.mpc_apicver = mpc_default_type > 4 ? 0x10 : 0x01;
+ processor.mpc_cpuflag = CPU_ENABLED;
+ processor.mpc_cpufeature = (boot_cpu_data.x86 << 8) |
+ (boot_cpu_data.x86_model << 4) |
+ boot_cpu_data.x86_mask;
+ processor.mpc_featureflag = boot_cpu_data.x86_capability;
+ processor.mpc_reserved[0] = 0;
+ processor.mpc_reserved[1] = 0;
+ for (i = 0; i < 2; i++) {
+ processor.mpc_apicid = i;
+ MP_processor_info(&processor);
+ }
+
+ bus.mpc_type = MP_BUS;
+ bus.mpc_busid = 0;
+ switch (mpc_default_type) {
+ default:
+ printk("???\nUnknown standard configuration %d\n",
+ mpc_default_type);
+ /* fall through */
+ case 1:
+ case 5:
+ memcpy(bus.mpc_bustype, "ISA ", 6);
+ break;
+ case 2:
+ case 6:
+ case 3:
+ memcpy(bus.mpc_bustype, "EISA ", 6);
+ break;
+ case 4:
+ case 7:
+ memcpy(bus.mpc_bustype, "MCA ", 6);
+ }
+ MP_bus_info(&bus);
+ if (mpc_default_type > 4) {
+ bus.mpc_busid = 1;
+ memcpy(bus.mpc_bustype, "PCI ", 6);
+ MP_bus_info(&bus);
+ }
+
+ ioapic.mpc_type = MP_IOAPIC;
+ ioapic.mpc_apicid = 2;
+ ioapic.mpc_apicver = mpc_default_type > 4 ? 0x10 : 0x01;
+ ioapic.mpc_flags = MPC_APIC_USABLE;
+ ioapic.mpc_apicaddr = 0xFEC00000;
+ MP_ioapic_info(&ioapic);
+
+ /*
+ * We set up most of the low 16 IO-APIC pins according to MPS rules.
+ */
+ construct_default_ioirq_mptable(mpc_default_type);
+
+ lintsrc.mpc_type = MP_LINTSRC;
+ lintsrc.mpc_irqflag = 0; /* conforming */
+ lintsrc.mpc_srcbusid = 0;
+ lintsrc.mpc_srcbusirq = 0;
+ lintsrc.mpc_destapic = MP_APIC_ALL;
+ for (i = 0; i < 2; i++) {
+ lintsrc.mpc_irqtype = linttypes[i];
+ lintsrc.mpc_destapiclint = i;
+ MP_lintsrc_info(&lintsrc);
+ }
+}
+
static struct intel_mp_floating *mpf_found;
/*
printk(" Virtual Wire compatibility mode.\n");
pic_mode = 0;
}
- /*
- * default CPU id - if it's different in the mptable
- * then we change it before first using it.
- */
- boot_cpu_id = 0;
+
/*
* Now see if we need to read further.
*/
if (mpf->mpf_feature1 != 0) {
+
printk("Default MP configuration #%d\n", mpf->mpf_feature1);
+ construct_default_ISA_mptable(mpf->mpf_feature1);
- /*
- * local APIC has default address
- */
- mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
+ } else if (mpf->mpf_physptr) {
/*
- * 2 CPUs, numbered 0 & 1.
+ * Read the physical hardware table. Anything here will
+ * override the defaults.
*/
- phys_cpu_present_map = 3;
- num_processors = 2;
+ smp_read_mpc((void *)mpf->mpf_physptr);
- nr_ioapics = 1;
- mp_ioapics[0].mpc_apicaddr = 0xFEC00000;
- mp_ioapics[0].mpc_apicid = 2;
/*
- * Save the default type number, we
- * need it later to set the IO-APIC
- * up properly:
+ * If there are no explicit MP IRQ entries, then we are
+ * broken. We set up most of the low 16 IO-APIC pins to
+ * ISA defaults and hope it will work.
*/
- mpc_default_type = mpf->mpf_feature1;
+ if (!mp_irq_entries) {
+ struct mpc_config_bus bus;
- printk("Bus #0 is ");
- }
+ printk("BIOS bug, no explicit IRQ entries, using default mptable. (tell your hw vendor)\n");
- switch (mpf->mpf_feature1) {
- case 1:
- case 5:
- printk("ISA\n");
- break;
- case 2:
- printk("EISA with no IRQ0 and no IRQ13 DMA chaining\n");
- break;
- case 6:
- case 3:
- printk("EISA\n");
- break;
- case 4:
- case 7:
- printk("MCA\n");
- break;
- case 0:
- if (!mpf->mpf_physptr)
- BUG();
- break;
- default:
- printk("???\nUnknown standard configuration %d\n",
- mpf->mpf_feature1);
- return;
- }
- if (mpf->mpf_feature1 > 4) {
- printk("Bus #1 is PCI\n");
+ bus.mpc_type = MP_BUS;
+ bus.mpc_busid = 0;
+ memcpy(bus.mpc_bustype, "ISA ", 6);
+ MP_bus_info(&bus);
- /*
- * Set local APIC version to the integrated form.
- * It's initialized to zero otherwise, representing
- * a discrete 82489DX.
- */
- apic_version[0] = 0x10;
- apic_version[1] = 0x10;
- }
- /*
- * Read the physical hardware table. Anything here will override the
- * defaults.
- */
- if (mpf->mpf_physptr)
- smp_read_mpc((void *)mpf->mpf_physptr);
+ construct_default_ioirq_mptable(0);
+ }
+
+ } else
+ BUG();
printk("Processors: %d\n", num_processors);
/*
*
* Memory type region registers control the caching on newer Intel and
* non Intel processors. This function allows drivers to request an
- * MTRR is added. The details and hardware specifics of each processors
+ * MTRR is added. The details and hardware specifics of each processor's
* implementation are hidden from the caller, but nevertheless the
* caller should expect to need to provide a power of two size on an
* equivalent power of two boundary.
*
* The available types are
*
- * MTRR_TYPE_UNCACHEABLE - No caching
+ * %MTRR_TYPE_UNCACHEABLE - No caching
*
- * MTRR_TYPE_WRITEBACK - Write data back in bursts whenever
+ * %MTRR_TYPE_WRITEBACK - Write data back in bursts whenever
*
- * MTRR_TYPE_WRCOMB - Write data back soon but allow bursts
+ * %MTRR_TYPE_WRCOMB - Write data back soon but allow bursts
*
- * MTRR_TYPE_WRTHROUGH - Cache reads but not writes
+ * %MTRR_TYPE_WRTHROUGH - Cache reads but not writes
*
* BUGS: Needs a quiet flag for the cases where drivers do not mind
* failures and do not wish system log messages to be sent.
if (!memcmp(from+4, "nopentium", 9)) {
from += 9+4;
boot_cpu_data.x86_capability &= ~X86_FEATURE_PSE;
+ } else if (!memcmp(from+4, "exactmap", 8)) {
+ from += 8+4;
+ e820.nr_map = 0;
+ usermem = 1;
} else {
/* If the user specifies memory size, we
* blow away any automatically generated
* We use 'broadcast', CPU->CPU IPIs and self-IPIs too.
*/
-static unsigned int cached_APIC_ICR;
-static unsigned int cached_APIC_ICR2;
-
-/*
- * Caches reserved bits, APIC reads are (mildly) expensive
- * and force otherwise unnecessary CPU synchronization.
- *
- * (We could cache other APIC registers too, but these are the
- * main ones used in RL.)
- */
-#define slow_ICR (apic_read(APIC_ICR) & ~0xFDFFF)
-#define slow_ICR2 (apic_read(APIC_ICR2) & 0x00FFFFFF)
-
-void cache_APIC_registers (void)
-{
- cached_APIC_ICR = slow_ICR;
- cached_APIC_ICR2 = slow_ICR2;
- mb();
-}
-
-static inline unsigned int __get_ICR (void)
-{
-#if FORCE_READ_AROUND_WRITE
- /*
- * Wait for the APIC to become ready - this should never occur. It's
- * a debugging check really.
- */
- int count = 0;
- unsigned int cfg;
-
- while (count < 1000)
- {
- cfg = slow_ICR;
- if (!(cfg&(1<<12)))
- return cfg;
- printk("CPU #%d: ICR still busy [%08x]\n",
- smp_processor_id(), cfg);
- irq_err_count++;
- count++;
- udelay(10);
- }
- printk("CPU #%d: previous IPI still not cleared after 10mS\n",
- smp_processor_id());
- return cfg;
-#else
- return cached_APIC_ICR;
-#endif
-}
-
-static inline unsigned int __get_ICR2 (void)
-{
-#if FORCE_READ_AROUND_WRITE
- return slow_ICR2;
-#else
- return cached_APIC_ICR2;
-#endif
-}
-
-#define LOGICAL_DELIVERY 1
-
static inline int __prepare_ICR (unsigned int shortcut, int vector)
{
- unsigned int cfg;
-
- cfg = __get_ICR();
- cfg |= APIC_DEST_DM_FIXED|shortcut|vector
-#if LOGICAL_DELIVERY
- |APIC_DEST_LOGICAL
-#endif
- ;
-
- return cfg;
+ return APIC_DM_FIXED | shortcut | vector | APIC_DEST_LOGICAL;
}
static inline int __prepare_ICR2 (unsigned int mask)
{
- unsigned int cfg;
-
- cfg = __get_ICR2();
-#if LOGICAL_DELIVERY
- cfg |= SET_APIC_DEST_FIELD(mask);
-#else
- cfg |= SET_APIC_DEST_FIELD(mask);
-#endif
-
- return cfg;
+ return SET_APIC_DEST_FIELD(mask);
}
static inline void __send_IPI_shortcut(unsigned int shortcut, int vector)
{
+ /*
+ * Subtle. In the case of the 'never do double writes' workaround
+ * we have to lock out interrupts to be safe. As we don't care
+ * of the value read we use an atomic rmw access to avoid costly
+ * cli/sti. Otherwise we use an even cheaper single atomic write
+ * to the APIC.
+ */
unsigned int cfg;
-/*
- * Subtle. In the case of the 'never do double writes' workaround we
- * have to lock out interrupts to be safe. Otherwise it's just one
- * single atomic write to the APIC, no need for cli/sti.
- */
-#if FORCE_READ_AROUND_WRITE
- unsigned long flags;
-
- __save_flags(flags);
- __cli();
-#endif
/*
* No need to touch the target chip field
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
- apic_write(APIC_ICR, cfg);
-#if FORCE_READ_AROUND_WRITE
- __restore_flags(flags);
-#endif
+ apic_write_around(APIC_ICR, cfg);
}
static inline void send_IPI_allbutself(int vector)
static inline void send_IPI_mask(int mask, int vector)
{
unsigned long cfg;
-#if FORCE_READ_AROUND_WRITE
unsigned long flags;
__save_flags(flags);
__cli();
-#endif
/*
* prepare target chip field
*/
-
cfg = __prepare_ICR2(mask);
- apic_write(APIC_ICR2, cfg);
+ apic_write_around(APIC_ICR2, cfg);
/*
* program the ICR
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
- apic_write(APIC_ICR, cfg);
-#if FORCE_READ_AROUND_WRITE
+ apic_write_around(APIC_ICR, cfg);
__restore_flags(flags);
-#endif
}
/*
* from Jose Renau
* Ingo Molnar : various cleanups and rewrites
* Tigran Aivazian : fixed "0.00 in /proc/uptime on SMP" bug.
+ * Maciej W. Rozycki : Bits for genuine 82489DX APICs
*/
#include <linux/config.h>
	return do_fork(CLONE_VM|CLONE_PID, 0, &regs);
}
+#if APIC_DEBUG
+static inline void inquire_remote_apic(int apicid)
+{
+ int i, regs[] = { APIC_ID >> 4, APIC_LVR >> 4, APIC_SPIV >> 4 };
+ char *names[] = { "ID", "VERSION", "SPIV" };
+ int timeout, status;
+
+ printk("Inquiring remote APIC #%d...\n", apicid);
+
+ for (i = 0; i < sizeof(regs) / sizeof(*regs); i++) {
+ printk("... APIC #%d %s: ", apicid, names[i]);
+
+ apic_write_around(APIC_ICR2, SET_APIC_DEST_FIELD(apicid));
+ apic_write_around(APIC_ICR, APIC_DM_REMRD | regs[i]);
+
+ timeout = 0;
+ do {
+ udelay(100);
+ status = apic_read(APIC_ICR) & APIC_ICR_RR_MASK;
+ } while (status == APIC_ICR_RR_INPROG && timeout++ < 1000);
+
+ switch (status) {
+ case APIC_ICR_RR_VALID:
+ status = apic_read(APIC_RRR);
+ printk("%08x\n", status);
+ break;
+ default:
+ printk("failed\n");
+ }
+ }
+}
+#endif
+
static void __init do_boot_cpu (int apicid)
{
- unsigned long cfg;
struct task_struct *idle;
- unsigned long send_status, accept_status;
+ unsigned long send_status, accept_status, boot_status, maxlvt;
int timeout, num_starts, j, cpu;
unsigned long start_eip;
start_eip = setup_trampoline();
/* So we see what's up */
- printk("Booting processor %d eip %lx\n", cpu, start_eip);
+ printk("Booting processor %d/%d eip %lx\n", cpu, apicid, start_eip);
stack_start.esp = (void *) (1024 + PAGE_SIZE + (char *)idle);
/*
* Be paranoid about clearing APIC errors.
*/
if (APIC_INTEGRATED(apic_version[apicid])) {
- apic_readaround(APIC_SPIV);
+ apic_read_around(APIC_SPIV);
apic_write(APIC_ESR, 0);
- accept_status = (apic_read(APIC_ESR) & 0xEF);
+ apic_read(APIC_ESR);
}
/*
* Status is now clean
*/
- send_status = 0;
+ send_status = 0;
accept_status = 0;
+ boot_status = 0;
/*
* Starting actual IPI sequence...
Dprintk("Asserting INIT.\n");
/*
- * Turn INIT on
- */
- cfg = apic_read(APIC_ICR2);
- cfg &= 0x00FFFFFF;
-
- /*
- * Target chip
+ * Turn INIT on target chip
*/
- apic_write(APIC_ICR2, cfg | SET_APIC_DEST_FIELD(apicid));
+ apic_write_around(APIC_ICR2, SET_APIC_DEST_FIELD(apicid));
/*
* Send IPI
*/
- cfg = apic_read(APIC_ICR);
- cfg &= ~0xCDFFF;
- cfg |= (APIC_DEST_LEVELTRIG | APIC_DEST_ASSERT | APIC_DEST_DM_INIT);
- apic_write(APIC_ICR, cfg);
+ apic_write_around(APIC_ICR, APIC_INT_LEVELTRIG | APIC_INT_ASSERT
+ | APIC_DM_INIT);
+
+ Dprintk("Waiting for send to finish...\n");
+ timeout = 0;
+ do {
+ Dprintk("+");
+ udelay(100);
+ send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY;
+ } while (send_status && (timeout++ < 1000));
+
+ mdelay(10);
- udelay(200);
Dprintk("Deasserting INIT.\n");
/* Target chip */
- cfg = apic_read(APIC_ICR2);
- cfg &= 0x00FFFFFF;
- apic_write(APIC_ICR2, cfg|SET_APIC_DEST_FIELD(apicid));
+ apic_write_around(APIC_ICR2, SET_APIC_DEST_FIELD(apicid));
/* Send IPI */
- cfg = apic_read(APIC_ICR);
- cfg &= ~0xCDFFF;
- cfg |= (APIC_DEST_LEVELTRIG | APIC_DEST_DM_INIT);
- apic_write(APIC_ICR, cfg);
+ apic_write_around(APIC_ICR, APIC_INT_LEVELTRIG | APIC_DM_INIT);
+
+ Dprintk("Waiting for send to finish...\n");
+ timeout = 0;
+ do {
+ Dprintk("+");
+ udelay(100);
+ send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY;
+ } while (send_status && (timeout++ < 1000));
/*
* Should we send STARTUP IPIs ?
*/
Dprintk("#startup loops: %d.\n", num_starts);
+ maxlvt = get_maxlvt();
+
for (j = 1; j <= num_starts; j++) {
Dprintk("Sending STARTUP #%d.\n",j);
- apic_readaround(APIC_SPIV);
+ apic_read_around(APIC_SPIV);
apic_write(APIC_ESR, 0);
apic_read(APIC_ESR);
Dprintk("After apic_write.\n");
*/
/* Target chip */
- cfg = apic_read(APIC_ICR2);
- cfg &= 0x00FFFFFF;
- apic_write(APIC_ICR2, cfg | SET_APIC_DEST_FIELD(apicid));
+ apic_write_around(APIC_ICR2, SET_APIC_DEST_FIELD(apicid));
/* Boot on the stack */
- cfg = apic_read(APIC_ICR);
- cfg &= ~0xCDFFF;
- cfg |= (APIC_DEST_DM_STARTUP | (start_eip >> 12));
-
/* Kick the second */
- apic_write(APIC_ICR, cfg);
+ apic_write_around(APIC_ICR, APIC_DM_STARTUP
+ | (start_eip >> 12));
Dprintk("Startup point 1.\n");
do {
Dprintk("+");
udelay(100);
- send_status = apic_read(APIC_ICR) & 0x1000;
+ send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY;
} while (send_status && (timeout++ < 1000));
/*
* Give the other CPU some time to accept the IPI.
*/
udelay(200);
+ /*
+ * Due to the Pentium erratum 3AP.
+ */
+ if (maxlvt > 3) {
+ apic_read_around(APIC_SPIV);
+ apic_write(APIC_ESR, 0);
+ }
accept_status = (apic_read(APIC_ESR) & 0xEF);
if (send_status || accept_status)
break;
/*
* Wait 5s total for a response
*/
- for (timeout = 0; timeout < 1000000000; timeout++) {
+ for (timeout = 0; timeout < 50000; timeout++) {
if (test_bit(cpu, &cpu_callin_map))
break; /* It has booted */
udelay(100);
Dprintk("OK.\n");
printk("CPU%d: ", cpu);
print_cpu_info(&cpu_data[cpu]);
+ Dprintk("CPU has booted.\n");
} else {
+ boot_status = 1;
if (*((volatile unsigned char *)phys_to_virt(8192))
- == 0xA5) /* trampoline code not run */
+ == 0xA5)
+ /* trampoline started but...? */
printk("Stuck ??\n");
else
- printk("CPU booted but not responding.\n");
+ /* trampoline code not run */
+ printk("Not responding.\n");
+#if APIC_DEBUG
+ inquire_remote_apic(apicid);
+#endif
}
- Dprintk("CPU has booted.\n");
- } else {
+ }
+ if (send_status || accept_status || boot_status) {
x86_cpu_to_apicid[cpu] = -1;
x86_apicid_to_cpu[apicid] = -1;
cpucount--;
Dprintk("Getting LVT1: %x\n", reg);
}
+ connect_bsp_APIC();
setup_local_APIC();
if (GET_APIC_ID(apic_read(APIC_ID)) != boot_cpu_id)
if (!(phys_cpu_present_map & (1 << apicid)))
continue;
- if ((max_cpus >= 0) && (max_cpus < cpucount+1))
+ if ((max_cpus >= 0) && (max_cpus <= cpucount+1))
continue;
do_boot_cpu(apicid);
printk(KERN_WARNING "WARNING: SMP operation may be unreliable with B stepping processors.\n");
Dprintk("Boot done.\n");
- cache_APIC_registers();
#ifndef CONFIG_VISWS
/*
* Here we can be sure that there is an IO-APIC in the system. Let's
for (i = 0; i < 16; i++) {
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
- irq_desc[i].depth = 0;
+ irq_desc[i].depth = 1;
/*
* Cobalt IRQs are mapped to standard ISA
irq_dir[irq] = proc_mkdir(name, root_irq_dir);
/* create /proc/irq/1234/smp_affinity */
- entry = create_proc_entry("smp_affinity", 0700, irq_dir[irq]);
+ entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
entry->nlink = 1;
entry->data = (void *)(long)irq;
root_irq_dir = proc_mkdir("irq", 0);
/* create /proc/irq/prof_cpu_mask */
- entry = create_proc_entry("prof_cpu_mask", 0700, root_irq_dir);
+ entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
entry->nlink = 1;
entry->data = (void *)&prof_cpu_mask;
irq_dir[irq] = proc_mkdir(name, root_irq_dir);
/* create /proc/irq/1234/smp_affinity */
- entry = create_proc_entry("smp_affinity", 0700, irq_dir[irq]);
+ entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
entry->nlink = 1;
entry->data = (void *)irq;
root_irq_dir = proc_mkdir("irq", 0);
/* create /proc/irq/prof_cpu_mask */
- entry = create_proc_entry("prof_cpu_mask", 0700, root_irq_dir);
+ entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
entry->nlink = 1;
entry->data = (void *)&prof_cpu_mask;
-# $Id: Makefile,v 1.52 1999/12/21 04:02:17 davem Exp $
+# $Id: Makefile,v 1.53 2000/03/31 04:06:19 davem Exp $
# Makefile for the linux kernel.
#
# Note! Dependencies are done automagically by 'make dep', which also
/* Someone please write the code to support this beast! ;) */
{ 2, 0, "Bipolar Integrated Technology - B5010"},
{ 3, 0, "LSI Logic Corporation - unknown-type"},
- { 4, 0, "Texas Instruments, Inc. - SuperSparc 50"},
+ { 4, 0, "Texas Instruments, Inc. - SuperSparc-(II)"},
/* SparcClassic -- borned STP1010TAB-50*/
{ 4, 1, "Texas Instruments, Inc. - MicroSparc"},
{ 4, 2, "Texas Instruments, Inc. - MicroSparc II"},
static void
pt_os_succ_return (struct pt_regs *regs, unsigned long val, long *addr)
{
- if (current->personality & PER_BSD)
+ if (current->personality == PER_SUNOS)
pt_succ_return (regs, val);
else
pt_succ_return_linux (regs, val, addr);
pt_error_return(regs, EIO);
return;
}
- if (current->personality & PER_BSD)
+ if (current->personality == PER_SUNOS)
pt_succ_return (regs, v);
else
pt_succ_return_linux (regs, v, addr);
goto out;
}
- if (((current->personality & PER_BSD) && (request == PTRACE_SUNATTACH))
- || (!(current->personality & PER_BSD) && (request == PTRACE_ATTACH))) {
+ if ((current->personality == PER_SUNOS && request == PTRACE_SUNATTACH)
+ || (current->personality != PER_SUNOS && request == PTRACE_ATTACH)) {
unsigned long flags;
if(child == current) {
pt_succ_return(regs, 0);
goto out;
}
- if (!(child->flags & PF_PTRACED)
- && ((current->personality & PER_BSD) && (request != PTRACE_SUNATTACH))
- && (!(current->personality & PER_BSD) && (request != PTRACE_ATTACH))) {
+ if (!(child->flags & PF_PTRACED)) {
pt_error_return(regs, ESRCH);
goto out;
}
-/* $Id: signal.c,v 1.101 2000/01/21 11:38:38 jj Exp $
+/* $Id: signal.c,v 1.102 2000/04/08 02:11:36 davem Exp $
* linux/arch/sparc/kernel/signal.c
*
* Copyright (C) 1991, 1992 Linus Torvalds
-/* $Id: sys_sunos.c,v 1.118 2000/03/26 11:28:56 davem Exp $
+/* $Id: sys_sunos.c,v 1.120 2000/04/08 08:32:14 davem Exp $
* sys_sunos.c: SunOS specific syscall compatibility support.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
#include <linux/socket.h>
#include <linux/in.h>
#include <linux/nfs.h>
+#include <linux/nfs2.h>
#include <linux/nfs_mount.h>
/* for sunos_select */
	down(&current->mm->mmap_sem);
lock_kernel();
- current->personality |= PER_BSD;
if(flags & MAP_NORESERVE) {
static int cnt;
if (cnt++ < 10)
/* SunOS binaries expect that select won't change the tvp contents */
lock_kernel();
- current->personality |= STICKY_TIMEOUTS;
ret = sys_select (width, inp, outp, exp, tvp);
if (ret == -EINTR && tvp) {
time_t sec, usec;
* address to create a socket and bind it to a reserved
* port on this system
*/
- if (copy_from_user(&sunos_mount, data, sizeof(sunos_mount))
+ if (copy_from_user(&sunos_mount, data, sizeof(sunos_mount)))
return -EFAULT;
server_fd = sys_socket (AF_INET, SOCK_DGRAM, IPPROTO_UDP);
dev_fname = getname(data);
} else if(strcmp(type_page, "nfs") == 0) {
ret = sunos_nfs_mount (dir_page, flags, data);
- goto out2
+ goto out2;
} else if(strcmp(type_page, "ufs") == 0) {
printk("Warning: UFS filesystem mounts unsupported.\n");
ret = -ENODEV;
- goto out2
+ goto out2;
} else if(strcmp(type_page, "proc")) {
ret = -ENODEV;
- goto out2
+ goto out2;
}
ret = PTR_ERR(dev_fname);
if (IS_ERR(dev_fname))
return rval;
}
-asmlinkage int sunos_open(const char *filename, int flags, int mode)
-{
- int ret;
-
- lock_kernel();
- current->personality |= PER_BSD;
- ret = sys_open (filename, flags, mode);
- unlock_kernel();
- return ret;
-}
-
-
#define SUNOS_EWOULDBLOCK 35
/* see the sunos man page read(2v) for an explanation
struct k_sigaction new_ka, old_ka;
int ret;
- current->personality |= PER_BSD;
-
if(act) {
old_sigset_t mask;
.globl sunos_sys_table
sunos_sys_table:
/*0*/ .long sunos_indir, sys_exit, sys_fork
- .long sunos_read, sunos_write, sunos_open
+ .long sunos_read, sunos_write, sys_open
.long sys_close, sunos_wait4, sys_creat
.long sys_link, sys_unlink, sunos_execv
.long sys_chdir, sunos_nosys, sys_mknod
-# $Id: Makefile,v 1.33 2000/03/16 00:52:07 anton Exp $
+# $Id: Makefile,v 1.34 2000/03/31 04:06:20 davem Exp $
# Makefile for Sparc library files..
#
-# $Id: Makefile,v 1.36 2000/01/29 01:09:05 anton Exp $
+# $Id: Makefile,v 1.37 2000/03/31 04:06:22 davem Exp $
# Makefile for the linux Sparc-specific parts of the memory manager.
#
# Note! Dependencies are done automagically by 'make dep', which also
-/* $Id: sun4c.c,v 1.190 2000/02/14 04:52:34 jj Exp $
+/* $Id: sun4c.c,v 1.191 2000/04/08 02:11:41 davem Exp $
* sun4c.c: Doing in software what should be done in hardware.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
CONFIG_AUTOFS_FS=m
CONFIG_AUTOFS4_FS=m
# CONFIG_ADFS_FS is not set
+# CONFIG_ADFS_FS_RW is not set
CONFIG_AFFS_FS=m
# CONFIG_HFS_FS is not set
CONFIG_BFS_FS=m
CONFIG_VFAT_FS=m
CONFIG_EFS_FS=m
CONFIG_CRAMFS=m
+CONFIG_RAMFS=m
CONFIG_ISO9660_FS=m
-# CONFIG_JOLIET is not set
+CONFIG_JOLIET=y
CONFIG_MINIX_FS=m
# CONFIG_NTFS_FS is not set
+# CONFIG_NTFS_RW is not set
CONFIG_HPFS_FS=m
CONFIG_PROC_FS=y
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVFS_DEBUG is not set
CONFIG_DEVPTS_FS=y
# CONFIG_QNX4FS_FS is not set
+# CONFIG_QNX4FS_RW is not set
CONFIG_ROMFS_FS=m
CONFIG_EXT2_FS=y
CONFIG_SYSV_FS=m
-# CONFIG_SYSV_FS_WRITE is not set
+CONFIG_SYSV_FS_WRITE=y
CONFIG_UDF_FS=m
-# CONFIG_UDF_RW is not set
+CONFIG_UDF_RW=y
CONFIG_UFS_FS=m
-# CONFIG_UFS_FS_WRITE is not set
+CONFIG_UFS_FS_WRITE=y
#
# Network File Systems
#
CONFIG_CODA_FS=m
CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
# CONFIG_ROOT_NFS is not set
CONFIG_NFSD=m
-# CONFIG_NFSD_V3 is not set
+CONFIG_NFSD_V3=y
CONFIG_SUNRPC=y
CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
CONFIG_SMB_FS=m
CONFIG_NCP_FS=m
-# CONFIG_NCPFS_PACKET_SIGNING is not set
-# CONFIG_NCPFS_IOCTL_LOCKING is not set
-# CONFIG_NCPFS_STRONG is not set
-# CONFIG_NCPFS_NFS_NS is not set
-# CONFIG_NCPFS_OS2_NS is not set
-# CONFIG_NCPFS_SMALLDOS is not set
-# CONFIG_NCPFS_MOUNT_SUBDIR is not set
-# CONFIG_NCPFS_NDS_DOMAINS is not set
-# CONFIG_NCPFS_NLS is not set
-# CONFIG_NCPFS_EXTRAS is not set
+CONFIG_NCPFS_PACKET_SIGNING=y
+CONFIG_NCPFS_IOCTL_LOCKING=y
+CONFIG_NCPFS_STRONG=y
+CONFIG_NCPFS_NFS_NS=y
+CONFIG_NCPFS_OS2_NS=y
+CONFIG_NCPFS_SMALLDOS=y
+CONFIG_NCPFS_MOUNT_SUBDIR=y
+CONFIG_NCPFS_NDS_DOMAINS=y
+CONFIG_NCPFS_NLS=y
+CONFIG_NCPFS_EXTRAS=y
#
# Partition Types
-# $Id: Makefile,v 1.52 2000/03/19 07:00:29 ecd Exp $
+# $Id: Makefile,v 1.53 2000/03/31 04:06:22 davem Exp $
# Makefile for the linux kernel.
#
# Note! Dependencies are done automagically by 'make dep', which also
return retval;
/* OK, This is the point of no return */
- current->personality = PER_LINUX;
+ current->personality = PER_SUNOS;
current->mm->end_code = ex.a_text +
(current->mm->start_code = N_TXTADDR(ex));
-/* $Id: pci_sabre.c,v 1.16 2000/03/25 05:18:12 davem Exp $
+/* $Id: pci_sabre.c,v 1.17 2000/03/31 04:06:59 davem Exp $
* pci_sabre.c: Sabre specific PCI controller support.
*
* Copyright (C) 1997, 1998, 1999 David S. Miller (davem@caipfs.rutgers.edu)
sabre_bus = pci_scan_bus(p->pci_first_busno,
p->pci_ops,
&p->pbm_A);
-
+#if 0
{
unsigned int devfn;
u8 *addr;
devfn, PCI_LATENCY_TIMER);
pci_config_write8(addr, 32);
}
-
+#endif
apb_init(p, sabre_bus);
walk = &sabre_bus->children;
static void
pt_os_succ_return (struct pt_regs *regs, unsigned long val, long *addr)
{
- if (current->personality & PER_BSD)
+ if (current->personality == PER_SUNOS)
pt_succ_return (regs, val);
else
pt_succ_return_linux (regs, val, addr);
goto out;
}
- if (((current->personality & PER_BSD) && (request == PTRACE_SUNATTACH))
- || (!(current->personality & PER_BSD) && (request == PTRACE_ATTACH))) {
+ if ((current->personality == PER_SUNOS && request == PTRACE_SUNATTACH)
+ || (current->personality != PER_SUNOS && request == PTRACE_ATTACH)) {
unsigned long flags;
if(child == current) {
pt_succ_return(regs, 0);
goto out;
}
- if (!(child->flags & PF_PTRACED)
- && ((current->personality & PER_BSD) && (request != PTRACE_SUNATTACH))
- && (!(current->personality & PER_BSD) && (request != PTRACE_ATTACH))) {
+ if (!(child->flags & PF_PTRACED)) {
pt_error_return(regs, ESRCH);
goto out;
}
-/* $Id: signal.c,v 1.48 1999/12/15 22:24:52 davem Exp $
+/* $Id: signal.c,v 1.49 2000/04/08 02:11:46 davem Exp $
* arch/sparc64/kernel/signal.c
*
* Copyright (C) 1991, 1992 Linus Torvalds
-/* $Id: signal32.c,v 1.60 2000/02/25 06:02:37 jj Exp $
+/* $Id: signal32.c,v 1.61 2000/04/08 02:11:46 davem Exp $
* arch/sparc64/kernel/signal32.c
*
* Copyright (C) 1991, 1992 Linus Torvalds
-/* $Id: sys_sparc32.c,v 1.142 2000/03/24 04:17:38 davem Exp $
+/* $Id: sys_sparc32.c,v 1.144 2000/04/08 02:11:47 davem Exp $
* sys_sparc32.c: Conversion between 32bit and 64bit native syscalls.
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
s32 gf32_version;
};
+struct nfsctl_fdparm32 {
+ struct sockaddr gd32_addr;
+ s8 gd32_path[NFS_MAXPATHLEN+1];
+ s32 gd32_version;
+};
+
+struct nfsctl_fsparm32 {
+ struct sockaddr gd32_addr;
+ s8 gd32_path[NFS_MAXPATHLEN+1];
+ s32 gd32_maxlen;
+};
+
struct nfsctl_arg32 {
s32 ca32_version; /* safeguard */
union {
struct nfsctl_export32 u32_export;
struct nfsctl_uidmap32 u32_umap;
struct nfsctl_fhparm32 u32_getfh;
- u32 u32_debug;
+ struct nfsctl_fdparm32 u32_getfd;
+ struct nfsctl_fsparm32 u32_getfs;
} u;
#define ca32_svc u.u32_svc
#define ca32_client u.u32_client
#define ca32_export u.u32_export
#define ca32_umap u.u32_umap
#define ca32_getfh u.u32_getfh
+#define ca32_getfd u.u32_getfd
+#define ca32_getfs u.u32_getfs
#define ca32_authd u.u32_authd
-#define ca32_debug u.u32_debug
};
union nfsctl_res32 {
return err;
}
+static int nfs_getfd32_trans(struct nfsctl_arg *karg, struct nfsctl_arg32 *arg32)
+{
+ int err;
+
+ err = __get_user(karg->ca_version, &arg32->ca32_version);
+ err |= copy_from_user(&karg->ca_getfd.gd_addr,
+ &arg32->ca32_getfd.gd32_addr,
+ (sizeof(struct sockaddr)));
+ err |= copy_from_user(&karg->ca_getfd.gd_path,
+ &arg32->ca32_getfd.gd32_path,
+ (NFS_MAXPATHLEN+1));
+ err |= __get_user(karg->ca_getfd.gd_version,
+ &arg32->ca32_getfd.gd32_version);
+ return err;
+}
+
+static int nfs_getfs32_trans(struct nfsctl_arg *karg, struct nfsctl_arg32 *arg32)
+{
+ int err;
+
+ err = __get_user(karg->ca_version, &arg32->ca32_version);
+ err |= copy_from_user(&karg->ca_getfs.gd_addr,
+ &arg32->ca32_getfs.gd32_addr,
+ (sizeof(struct sockaddr)));
+ err |= copy_from_user(&karg->ca_getfs.gd_path,
+ &arg32->ca32_getfs.gd32_path,
+ (NFS_MAXPATHLEN+1));
+ err |= __get_user(karg->ca_getfs.gd_maxlen,
+ &arg32->ca32_getfs.gd32_maxlen);
+ return err;
+}
+
/* This really doesn't need translations, we are only passing
* back a union which contains opaque nfs file handle data.
*/
err = nfs_clnt32_trans(karg, arg32);
break;
case NFSCTL_EXPORT:
+ case NFSCTL_UNEXPORT:
err = nfs_exp32_trans(karg, arg32);
break;
/* This one is unimplemented, but we're ready for it. */
case NFSCTL_GETFH:
err = nfs_getfh32_trans(karg, arg32);
break;
+ case NFSCTL_GETFD:
+ err = nfs_getfd32_trans(karg, arg32);
+ break;
+ case NFSCTL_GETFS:
+ err = nfs_getfs32_trans(karg, arg32);
+ break;
default:
err = -EINVAL;
break;
err = sys_nfsservctl(cmd, karg, kres);
set_fs(oldfs);
- if(!err && cmd == NFSCTL_GETFH)
+ if (err)
+ goto done;
+
+ if((cmd == NFSCTL_GETFH) ||
+ (cmd == NFSCTL_GETFD) ||
+ (cmd == NFSCTL_GETFS))
err = nfs_getfh32_res_trans(kres, res32);
done:
-/* $Id: sys_sunos32.c,v 1.43 2000/03/26 11:28:53 davem Exp $
+/* $Id: sys_sunos32.c,v 1.44 2000/04/08 02:11:50 davem Exp $
* sys_sunos32.c: SunOS binary compatability layer on sparc64.
*
* Copyright (C) 1995, 1996, 1997 David S. Miller (davem@caip.rutgers.edu)
#include <linux/socket.h>
#include <linux/in.h>
#include <linux/nfs.h>
+#include <linux/nfs2.h>
#include <linux/nfs_mount.h>
/* for sunos_select */
down(&current->mm->mmap_sem);
lock_kernel();
- current->personality |= PER_BSD;
if(flags & MAP_NORESERVE) {
static int cnt;
if (cnt++ < 10)
/* SunOS binaries expect that select won't change the tvp contents */
lock_kernel();
- current->personality |= STICKY_TIMEOUTS;
ret = sys32_select (width, inp, outp, exp, tvp_x);
if (ret == -EINTR && tvp_x) {
struct timeval32 *tvp = (struct timeval32 *)A(tvp_x);
* address to create a socket and bind it to a reserved
* port on this system
*/
- if (copy_from_user(&sunos_mount, data, sizeof(sunos_mount))
+ if (copy_from_user(&sunos_mount, data, sizeof(sunos_mount)))
return -EFAULT;
server_fd = sys_socket (AF_INET, SOCK_DGRAM, IPPROTO_UDP);
dev_fname = getname(data);
} else if(strcmp(type_page, "nfs") == 0) {
ret = sunos_nfs_mount (dir_page, flags, data);
- goto out2
+ goto out2;
} else if(strcmp(type_page, "ufs") == 0) {
printk("Warning: UFS filesystem mounts unsupported.\n");
ret = -ENODEV;
- goto out2
+ goto out2;
} else if(strcmp(type_page, "proc")) {
ret = -ENODEV;
- goto out2
+ goto out2;
}
ret = PTR_ERR(dev_fname);
if (IS_ERR(dev_fname))
{
const char *filename = (const char *)(long)fname;
- current->personality |= PER_BSD;
return sparc32_open(filename, flags, mode);
}
struct k_sigaction new_ka, old_ka;
int ret;
- current->personality |= PER_BSD;
-
if (act) {
old_sigset_t32 mask;
-# $Id: Makefile,v 1.21 2000/03/27 10:38:41 davem Exp $
+# $Id: Makefile,v 1.22 2000/03/31 04:06:23 davem Exp $
# Makefile for Sparc library files..
#
-# $Id: Makefile,v 1.6 2000/01/31 01:30:49 davem Exp $
+# $Id: Makefile,v 1.7 2000/03/31 04:06:24 davem Exp $
# Makefile for the linux Sparc64-specific parts of the memory manager.
#
# Note! Dependencies are done automagically by 'make dep', which also
-# $Id: Makefile,v 1.5 1999/12/21 04:02:26 davem Exp $
+# $Id: Makefile,v 1.6 2000/03/31 04:06:25 davem Exp $
# Makefile for the Sun Boot PROM interface library under
# Linux.
#
-/* $Id: fs.c,v 1.17 2000/03/10 04:43:30 davem Exp $
+/* $Id: fs.c,v 1.18 2000/04/08 02:11:54 davem Exp $
* fs.c: fs related syscall emulation for Solaris
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
unlock_kernel();
fput(file);
}
-out:
+
return error;
}
-/* $Id: misc.c,v 1.23 2000/03/13 21:57:34 davem Exp $
+/* $Id: misc.c,v 1.24 2000/04/08 02:11:55 davem Exp $
* misc.c: Miscelaneous syscall emulation for Solaris
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
unsigned long retval, ret_type;
lock_kernel();
- current->personality |= PER_SVR4;
+ current->personality = PER_SVR4;
if (flags & MAP_NORESERVE) {
static int cnt = 0;
tristate 'Loopback device support' CONFIG_BLK_DEV_LOOP
dep_tristate 'Network block device support' CONFIG_BLK_DEV_NBD $CONFIG_NET
+tristate 'Logical volume manager (LVM) support' CONFIG_BLK_DEV_LVM N
+if [ "$CONFIG_BLK_DEV_LVM" != "n" ]; then
+ bool ' LVM information in proc filesystem' CONFIG_LVM_PROC_FS Y
+fi
+
bool 'Multiple devices driver support' CONFIG_BLK_DEV_MD
dep_tristate ' Linear (append) mode' CONFIG_MD_LINEAR $CONFIG_BLK_DEV_MD
dep_tristate ' RAID-0 (striping) mode' CONFIG_MD_STRIPED $CONFIG_BLK_DEV_MD
#dep_tristate ' RAID-1 (mirroring) mode' CONFIG_MD_MIRRORING $CONFIG_BLK_DEV_MD
#dep_tristate ' RAID-4/RAID-5 mode' CONFIG_MD_RAID5 $CONFIG_BLK_DEV_MD
-if [ "$CONFIG_MD_LINEAR" = "y" -o "$CONFIG_MD_STRIPED" = "y" ]; then
- bool ' Boot support (linear, striped)' CONFIG_MD_BOOT
-fi
tristate 'RAM disk support' CONFIG_BLK_DEV_RAM
dep_bool ' Initial RAM disk (initrd) support' CONFIG_BLK_DEV_INITRD $CONFIG_BLK_DEV_RAM
} else
DPRINT("botched floppy option\n");
DPRINT("Read linux/drivers/block/README.fd\n");
- return 1;
+ return 0;
}
static int have_no_fdc= -EIO;
{
#ifdef CONFIG_PARPORT
parport_init();
-#endif
- /*
- * I2O must come before block and char as the I2O layer may
- * in future claim devices that block/char most not touch.
- */
-#ifdef CONFIG_I2O
- i2o_init();
#endif
chr_dev_init();
blk_dev_init();
sti();
+#ifdef CONFIG_I2O
+ i2o_init();
+#endif
#ifdef CONFIG_BLK_DEV_DAC960
DAC960_Initialize();
#endif
#ifdef CONFIG_DASD
dasd_init();
#endif
+#ifdef CONFIG_BLK_DEV_LVM
+ lvm_init();
+#endif
return 0;
};
static char pv_name[NAME_LEN];
/* static char rootvg[NAME_LEN] = { 0, }; */
static uint lv_open = 0;
-static const char *const lvm_name = LVM_NAME;
+const char *const lvm_name = LVM_NAME;
static int lock = 0;
static int loadtime = 0;
static uint vg_count = 0;
#include <linux/blk.h>
-#ifdef CONFIG_MD_BOOT
-extern kdev_t name_to_kdev_t(char *line) md__init;
-#endif
-
#define DEBUG 0
#if DEBUG
# define dprintk(x...) printk(x)
#undef OUT
-/* support old ioctls/init - cold add only */
-int do_md_add(mddev_t *mddev, kdev_t dev)
-{
- int err;
- mdk_rdev_t *rdev;
-
- if (mddev->sb || mddev->pers)
- return -EBUSY;
- err = md_import_device(dev, 0);
- if (err) return err;
- rdev = find_rdev_all(dev);
- if (!rdev) {
- MD_BUG();
- return -EINVAL;
- }
- rdev->old_dev = dev;
- rdev->desc_nr = mddev->nb_dev;
- bind_rdev_to_array(rdev, mddev);
- return 0;
-}
-
-#define SET_SB(x,v) mddev->sb->x = v
-#define SET_RSB(x,y) mddev->sb->disks[nr].x = y
-static void autorun_array (mddev_t *mddev);
-int do_md_start(mddev_t *mddev, int info)
-{
- int pers = (info & 0xFF0000UL)>>16;
-// int fault= (info & 0x00FF00UL)>>8;
- int factor=(info & 0x0000FFUL);
-
- struct md_list_head *tmp;
- mdk_rdev_t *rdev, *rdev0=NULL;
- int err = 0;
-
- if (mddev->sb) {
- printk("array md%d already has superbloc!!\n",
- mdidx(mddev));
- return -EBUSY;
- }
- if (pers==1 || pers==2) {
- /* non-persistant super block */
- int devs = mddev->nb_dev;
- if (alloc_array_sb(mddev))
- return -ENOMEM;
- mddev->sb->major_version = MD_MAJOR_VERSION;
- mddev->sb->minor_version = MD_MINOR_VERSION;
- mddev->sb->patch_version = MD_PATCHLEVEL_VERSION;
- mddev->sb->ctime = CURRENT_TIME;
-
- SET_SB(level,pers_to_level(pers));
- SET_SB(size,0);
- SET_SB(nr_disks, devs);
- SET_SB(raid_disks, devs);
- SET_SB(md_minor,mdidx(mddev));
- SET_SB(not_persistent, 1);
-
-
- SET_SB(state, 1<<MD_SB_CLEAN);
- SET_SB(active_disks, devs);
- SET_SB(working_disks, devs);
- SET_SB(failed_disks, 0);
- SET_SB(spare_disks, 0);
-
- SET_SB(layout,0);
- SET_SB(chunk_size, 1<<(factor+PAGE_SHIFT));
-
- mddev->sb->md_magic = MD_SB_MAGIC;
-
- /*
- * Generate a 128 bit UUID
- */
- get_random_bytes(&mddev->sb->set_uuid0, 4);
- get_random_bytes(&mddev->sb->set_uuid1, 4);
- get_random_bytes(&mddev->sb->set_uuid2, 4);
- get_random_bytes(&mddev->sb->set_uuid3, 4);
-
- /* add each disc */
- ITERATE_RDEV(mddev,rdev,tmp) {
- int nr, size;
- nr = rdev->desc_nr;
- SET_RSB(number,nr);
- SET_RSB(major,MAJOR(rdev->dev));
- SET_RSB(minor,MINOR(rdev->dev));
- SET_RSB(raid_disk,nr);
- SET_RSB(state,6); /* ACTIVE|SYNC */
- size = calc_dev_size(rdev->dev, mddev, 0);
- rdev->sb_offset = calc_dev_sboffset(rdev->dev, mddev, 0);
-
- if (!mddev->sb->size || (mddev->sb->size > size))
- mddev->sb->size = size;
- }
- sync_sbs(mddev);
- err = do_md_run(mddev);
- if (err)
- do_md_stop(mddev, 0);
- } else {
- /* persistant super block - ignore the info and read the superblocks */
- ITERATE_RDEV(mddev,rdev,tmp) {
- if ((err = read_disk_sb(rdev))) {
- printk("md: could not read %s's sb, not importing!\n",
- partition_name(rdev->dev));
- break;
- }
- if ((err = check_disk_sb(rdev))) {
- printk("md: %s has invalid sb, not importing!\n",
- partition_name(rdev->dev));
- break;
- }
- rdev->desc_nr = rdev->sb->this_disk.number;
- if (!rdev0) rdev0=rdev;
- if (!uuid_equal(rdev0, rdev)) {
- printk("%s has different UUID to %s .. dropping\n",
- partition_name(rdev->dev),
- partition_name(rdev0->dev));
- err = -EINVAL;
- break;
- }
- if (!sb_equal(rdev0->sb, rdev->sb)) {
- printk("%s has same UUID as %s, but superblocks differ ...\n", partition_name(rdev->dev), partition_name(rdev0->dev));
- err = -EINVAL;
- break;
- }
- }
- if (!err)
- autorun_array(mddev);
- }
- return err;
-}
-#undef SET_SB
-#undef SET_RSB
/*
* We have to safely support old arrays too.
*/
}
default:
}
- /* handle "old style" ioctls */
- switch (cmd)
- {
- case START_MD:
- if (!mddev)
- return -ENODEV;
- err = lock_mddev(mddev);
- if (err) {
- printk("ioctl lock interrupted, reason %d, cmd %d\n",err, cmd);
- goto abort;
- }
- err = do_md_start(mddev, (int) arg);
- if (err) {
- printk("couldn't mdstart\n");
- goto abort_unlock;
- }
- goto done_unlock;
- case STOP_MD:
- if (!mddev)
- return -ENODEV;
- err = lock_mddev(mddev);
- if (err) {
- printk("ioctl lock interrupted, reason %d, cmd %d\n",err, cmd);
- goto abort_unlock;
- }
- err = do_md_stop(mddev, 0);
- if (err) {
- printk("couldn't mdstop\n");
- goto abort_unlock;
- }
- goto done_unlock;
- case REGISTER_DEV:
- /* add this device to an unstarted array,
- * create the array if needed */
- if (!mddev)
- mddev = alloc_mddev(dev);
- if (!mddev) {
- err = -ENOMEM;
- goto abort;
- }
- err = lock_mddev(mddev);
- if (err) {
- printk("ioctl, reason %d, cmd %d\n", err, cmd);
- goto abort;
- }
- err = do_md_add(mddev, to_kdev_t((dev_t) arg));
- if (err) {
- printk("do_md_add failed %d\n", err);
- goto abort_unlock;
- }
- goto done_unlock;
- }
-
switch (cmd)
{
case SET_ARRAY_INFO:
return;
}
-#ifdef CONFIG_MD_BOOT
-#define MAX_MD_BOOT_DEVS 16
-struct {
- unsigned long set;
- int pers[MAX_MD_BOOT_DEVS];
- kdev_t devices[MAX_MD_BOOT_DEVS][MAX_REAL];
-} md_setup_args md__initdata = {
- 0,{0},{{0}}
-};
-
-/*
- * Parse the command-line parameters given our kernel, but do not
- * actually try to invoke the MD device now; that is handled by
- * md_setup_drive after the low-level disk drivers have initialised.
- *
- * 27/11/1999: Fixed to work correctly with the 2.3 kernel (which
- * assigns the task of parsing integer arguments to the
- * invoked program now). Added ability to initialise all
- * the MD devices (by specifying multiple "md=" lines)
- * instead of just one. -- KTK
- */
-static int __init md_setup(char *str)
-{
- int minor, level, factor, fault, i;
- kdev_t device;
- char *devnames, *pername;
-
- if(get_option(&str, &minor) != 2 || /* MD Number */
- get_option(&str, &level) != 2 || /* RAID Personality */
- get_option(&str, &factor) != 2 || /* Chunk Size */
- get_option(&str, &fault) != 2) {
- printk("md: Too few arguments supplied to md=.\n");
- return 0;
- } else if (minor >= MAX_MD_BOOT_DEVS) {
- printk ("md: Minor device number too high.\n");
- return 0;
- } else if (md_setup_args.set & (1 << minor)) {
- printk ("md: Warning - md=%d,... has been specified twice;\n"
- " will discard the first definition.\n", minor);
- }
- switch(level) {
-#ifdef CONFIG_MD_LINEAR
- case -1:
- level = LINEAR<<16;
- pername = "linear";
- break;
-#endif
-#ifdef CONFIG_MD_STRIPED
- case 0:
- level = STRIPED<<16;
- pername = "striped";
- break;
-#endif
- default:
- printk ("md: The kernel has not been configured for raid%d"
- " support!\n", level);
- return 0;
- }
- devnames = str;
- for (i = 0; str; i++) {
- if ((device = name_to_kdev_t(str))) {
- md_setup_args.devices[minor][i] = device;
- } else {
- printk ("md: Unknown device name, %s.\n", str);
- return 0;
- }
- if ((str = strchr(str, ',')) != NULL)
- str++;
- }
- if (!i) {
- printk ("md: No devices specified for md%d?\n", minor);
- return 0;
- }
-
- printk ("md: Will configure md%d (%s) from %s, below.\n",
- minor, pername, devnames);
- md_setup_args.devices[minor][i] = (kdev_t) 0;
- md_setup_args.pers[minor] = level | factor | (fault << 8);
- md_setup_args.set |= (1 << minor);
- return 1;
-}
-#endif
-
static void md_geninit (void)
{
int i;
- for(i = 0; i < MAX_MD_BOOT_DEVS; i++) {
+ for(i = 0; i < MAX_MD_DEVS; i++) {
md_blocksizes[i] = 1024;
md_size[i] = 0;
md_maxreadahead[i] = MD_READAHEAD;
return (0);
}
-#ifdef CONFIG_MD_BOOT
-void __init md_setup_drive(void)
-{
- int minor, i;
- kdev_t dev;
- mddev_t*mddev;
-
- for (minor = 0; minor < MAX_MD_BOOT_DEVS; minor++) {
- if ((md_setup_args.set & (1 << minor)) == 0)
- continue;
- printk("md: Loading md%d.\n", minor);
- mddev = alloc_mddev(MKDEV(MD_MAJOR,minor));
- for (i = 0; (dev = md_setup_args.devices[minor][i]); i++)
- do_md_add (mddev, dev);
- do_md_start (mddev, md_setup_args.pers[minor]);
- }
-}
-
-__setup("md=", md_setup);
-#endif
-
MD_EXPORT_SYMBOL(md_size);
MD_EXPORT_SYMBOL(register_md_personality);
MD_EXPORT_SYMBOL(unregister_md_personality);
dep_tristate ' QuickCam Colour Video For Linux (EXPERIMENTAL)' CONFIG_VIDEO_CQCAM $CONFIG_VIDEO_DEV $CONFIG_PARPORT
fi
fi
- dep_tristate 'CPiA Video For Linux' CONFIG_VIDEO_CPIA $CONFIG_VIDEO_DEV
+ dep_tristate ' CPiA Video For Linux' CONFIG_VIDEO_CPIA $CONFIG_VIDEO_DEV
if [ "$CONFIG_VIDEO_CPIA" != "n" ]; then
if [ "CONFIG_PARPORT_1284" != "n" ]; then
- dep_tristate 'CPiA Parallel Port Lowlevel Support' CONFIG_VIDEO_CPIA_PP $CONFIG_VIDEO_CPIA $CONFIG_PARPORT
+ dep_tristate ' CPiA Parallel Port Lowlevel Support' CONFIG_VIDEO_CPIA_PP $CONFIG_VIDEO_CPIA $CONFIG_PARPORT
fi
if [ "$CONFIG_USB" != "n" ]; then
- dep_tristate 'CPiA USB Lowlevel Support' CONFIG_VIDEO_CPIA_USB $CONFIG_VIDEO_CPIA $CONFIG_USB
+ dep_tristate ' CPiA USB Lowlevel Support' CONFIG_VIDEO_CPIA_USB $CONFIG_VIDEO_CPIA $CONFIG_USB
fi
fi
dep_tristate ' SAA5249 Teletext processor' CONFIG_VIDEO_SAA5249 $CONFIG_VIDEO_DEV $CONFIG_I2C
* @misc: device structure
*
* Register a miscellaneous device with the kernel. If the minor
- * number is set to MISC_DYNAMIC_MINOR a minor number is assigned
+ * number is set to %MISC_DYNAMIC_MINOR a minor number is assigned
* and placed in the minor field of the structure. For other cases
* the minor number requested is used.
*
* The structure passed is linked into the kernel and may not be
- * destroyed until it has been unregistered
+ * destroyed until it has been unregistered.
*
* A zero is returned on success and a negative errno code for
* failure.
* @misc: device to unregister
*
* Unregister a miscellaneous device that was previously
- * successfully registered with misc_register. Success
+ * successfully registered with misc_register(). Success
* is indicated by a zero return, a negative errno code
* indicates an error.
*/
static void kbd_write_command_w(int data);
static void kbd_write_output_w(int data);
+#ifdef CONFIG_PSMOUSE
+static void aux_write_ack(int val);
+#endif
spinlock_t kbd_controller_lock = SPIN_LOCK_UNLOCKED;
static unsigned char handle_kbd_event(void);
static int __init psaux_init(void);
+#define AUX_RECONNECT 170 /* scancode when ps2 device is plugged (back) in */
+
static struct aux_queue *queue; /* Mouse data buffer. */
static int aux_count = 0;
/* used when we send commands to the mouse that expect an ACK. */
}
mouse_reply_expected = 0;
}
+ else if(scancode == AUX_RECONNECT){
+ queue->head = queue->tail = 0; /* Flush input queue */
+ aux_write_ack(AUX_ENABLE_DEV); /* ping the mouse :) */
+ return;
+ }
add_mouse_randomness(scancode);
if (aux_count) {
Based largely on the bttv driver by Ralph Metzler (rjkm@thp.uni-koeln.de)
- Additional debugging and coding by Takashi Oe (toe@unlinfo.unl.edu)
- (Some codes are stolen from proposed v4l2 videodev.c
- of Bill Dirks <dirks@rendition.com>)
+ Additional debugging and coding by Takashi Oe (toe@unlserve.unl.edu)
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
static unsigned char saa_status(int, struct planb *);
static void saa_set(unsigned char, unsigned char, struct planb *);
static void saa_init_regs(struct planb *);
-static void * rvmalloc(unsigned long);
-static void rvfree(void *, unsigned long);
-static unsigned long vmalloc_to_bus(void *);
-static unsigned long vmalloc_to_phys(void *);
-static int fbuffer_alloc(struct planb *);
+static int grabbuf_alloc(struct planb *);
static int vgrab(struct planb *, struct video_mmap *);
static void add_clip(struct planb *, struct video_clip *);
static void fill_cmd_buff(struct planb *);
/* Memory management functions */
/*******************************/
-static void * rvmalloc(unsigned long size)
+static int grabbuf_alloc(struct planb *pb)
{
- void *mem, *memptr;
- unsigned long page;
-
- mem=vmalloc(size);
- if (mem)
- {
- memset(mem, 0, size); /* Clear the ram out, leave no junk */
- memptr = mem;
- while (size > 0)
- {
- page = vmalloc_to_phys(memptr);
- mem_map_reserve(MAP_NR(phys_to_virt(page)));
- memptr+=PAGE_SIZE;
- size-=PAGE_SIZE;
- }
- }
- return mem;
-}
+ int i, npage;
-static void rvfree(void * mem, unsigned long size)
-{
- void *memptr;
- unsigned long page;
-
- if (mem)
- {
- memptr = mem;
- while (size > 0)
- {
- page = vmalloc_to_phys(memptr);
- mem_map_unreserve(MAP_NR(phys_to_virt(page)));
- memptr += PAGE_SIZE;
- size-=PAGE_SIZE;
- }
- vfree(mem);
+ npage = MAX_GBUFFERS * ((PLANB_MAX_FBUF / PAGE_SIZE + 1)
+#ifndef PLANB_GSCANLINE
+ + MAX_LNUM
+#endif /* PLANB_GSCANLINE */
+ );
+ if ((pb->rawbuf = (unsigned char**) kmalloc (npage
+ * sizeof(unsigned long), GFP_KERNEL)) == 0)
+ return -ENOMEM;
+ for (i = 0; i < npage; i++) {
+ pb->rawbuf[i] = (unsigned char *)__get_free_pages(GFP_KERNEL
+ |GFP_DMA, 0);
+ if (!pb->rawbuf[i])
+ break;
+ set_bit(PG_reserved, &mem_map[MAP_NR(pb->rawbuf[i])].flags);
}
-}
-
-/* Useful for using vmalloc()ed memory as DMA target */
-static unsigned long vmalloc_to_bus(void *virt)
-{
- pgd_t *pgd;
- pmd_t *pmd;
- pte_t *pte;
- unsigned long a = (unsigned long)virt;
-
- if (pgd_none(*(pgd = pgd_offset(current->mm, a))) ||
- pmd_none(*(pmd = pmd_offset(pgd, a))) ||
- pte_none(*(pte = pte_offset(pmd, a))))
- return 0;
- return virt_to_bus((void *)pte_page(*pte))
- + (a & (PAGE_SIZE - 1));
-}
-
-static unsigned long vmalloc_to_phys(void *virt) {
- return virt_to_phys(bus_to_virt(vmalloc_to_bus(virt)));
-}
-
-/*
- * Create the giant waste of buffer space we need for now
- * until we get DMA to user space sorted out (probably 2.3.x)
- *
- * We only create this as and when someone uses mmap
- */
-
-static int fbuffer_alloc(struct planb *pb)
-{
- if(!pb->fbuffer)
- pb->fbuffer=(unsigned char *) rvmalloc(MAX_GBUFFERS
- * PLANB_MAX_FBUF);
- else
- printk(KERN_ERR "PlanB: Double alloc of fbuffer!\n");
- if(!pb->fbuffer)
+ if (i-- < npage) {
+ printk(KERN_DEBUG "PlanB: init_grab: grab buffer not allocated\n");
+ for (; i > 0; i--) {
+ clear_bit(PG_reserved,
+ &mem_map[MAP_NR(pb->rawbuf[i])].flags);
+ free_pages((unsigned long)pb->rawbuf[i], 0);
+ }
+ kfree(pb->rawbuf);
return -ENOBUFS;
+ }
+ pb->rawbuf_size = npage;
return 0;
}
+ PLANB_DUMMY);
pb->mask = (unsigned char *)(pb->frame_stat+MAX_GBUFFERS);
- pb->fbuffer = (unsigned char *)rvmalloc(MAX_GBUFFERS * PLANB_MAX_FBUF);
- if (!pb->fbuffer) {
- kfree(pb->priv_space);
- return -ENOMEM;
- }
+ pb->rawbuf = NULL;
+ pb->rawbuf_size = 0;
pb->grabbing = 0;
for (i = 0; i < MAX_GBUFFERS; i++) {
pb->frame_stat[i] = GBUFFER_UNUSED;
#ifndef PLANB_GSCANLINE
pb->lsize[i] = 0;
pb->lnum[i] = 0;
- pb->l_fr_addr[i]=(unsigned char *)rvmalloc(PAGE_SIZE*MAX_LNUM);
- if (!pb->l_fr_addr[i]) {
- int j;
- kfree(pb->priv_space);
- rvfree((void *)pb->fbuffer, MAX_GBUFFERS
- * PLANB_MAX_FBUF);
- for(j = 0; j < i; j++)
- rvfree((void *)pb->l_fr_addr[j], PAGE_SIZE
- * MAX_LNUM);
- return -ENOMEM;
- }
#endif /* PLANB_GSCANLINE */
}
pb->gcount = 0;
pb->suspend = 0;
pb->last_fr = -999;
pb->prev_last_fr = -999;
- return 0;
+
+ /* Reset DMA controllers */
+ planb_dbdma_stop(&pb->planb_base->ch2);
+ planb_dbdma_stop(&pb->planb_base->ch1);
+
+ return 0;
}
static void planb_prepare_close(struct planb *pb)
{
-#ifndef PLANB_GSCANLINE
int i;
-#endif
/* make sure the dma's are idle */
planb_dbdma_stop(&pb->planb_base->ch2);
pb->priv_space = 0;
pb->cmd_buff_inited = 0;
}
- if(pb->fbuffer)
- rvfree((void *)pb->fbuffer, MAX_GBUFFERS*PLANB_MAX_FBUF);
- pb->fbuffer = NULL;
-#ifndef PLANB_GSCANLINE
- for(i = 0; i < MAX_GBUFFERS; i++) {
- if(pb->l_fr_addr[i])
- rvfree((void *)pb->l_fr_addr[i], PAGE_SIZE * MAX_LNUM);
- pb->l_fr_addr[i] = NULL;
+ if(pb->rawbuf) {
+ for (i = 0; i < pb->rawbuf_size; i++) {
+ clear_bit(PG_reserved,
+ &mem_map[MAP_NR(pb->rawbuf[i])].flags);
+ free_pages((unsigned long)pb->rawbuf[i], 0);
+ }
+ kfree(pb->rawbuf);
}
-#endif /* PLANB_GSCANLINE */
+ pb->rawbuf = NULL;
}
/*****************************/
0,
0,
};
-#define PLANB_PALETTE_MAX 15
-#define SWAP4(x) (((x>>24) & 0x000000ff) |\
- ((x>>8) & 0x0000ff00) |\
- ((x<<8) & 0x00ff0000) |\
- ((x<<24) & 0xff000000))
+#define PLANB_PALETTE_MAX 15
static inline int overlay_is_active(struct planb *pb)
{
unsigned int fr = mp->frame;
unsigned int format;
- if(pb->fbuffer==NULL) {
- if(fbuffer_alloc(pb))
- return -ENOBUFS;
+ if(pb->rawbuf==NULL) {
+ int err;
+ if((err=grabbuf_alloc(pb)))
+ return err;
}
IDEBUG("PlanB: grab %d: %dx%d(%u)\n", pb->grabbing,
return -EINVAL;
planb_lock(pb);
- pb->gbuffer[fr] = (unsigned char *)(pb->fbuffer + PLANB_MAX_FBUF * fr);
if(mp->width != pb->gwidth[fr] || mp->height != pb->gheight[fr] ||
format != pb->gfmt[fr] || (pb->gnorm_switch[fr])) {
-#ifdef PLANB_GSCANLINE
int i;
-#else
+#ifndef PLANB_GSCANLINE
unsigned int osize = pb->gwidth[fr] * pb->gheight[fr]
* pb->gfmt[fr];
unsigned int nsize = mp->width * mp->height * format;
#ifndef PLANB_GSCANLINE
if(pb->gnorm_switch[fr])
nsize = 0;
- if(nsize < osize)
- memset((void *)(pb->gbuffer[fr] + nsize), 0,
- osize - nsize);
- memset((void *)pb->l_fr_addr[fr], 0, PAGE_SIZE * pb->lnum[fr]);
+ if (nsize < osize) {
+ for(i = pb->gbuf_idx[fr]; osize > 0; i++) {
+ memset((void *)pb->rawbuf[i], 0, PAGE_SIZE);
+ osize -= PAGE_SIZE;
+ }
+ }
+ for(i = pb->l_fr_addr_idx[fr]; i < pb->l_fr_addr_idx[fr]
+ + pb->lnum[fr]; i++)
+ memset((void *)pb->rawbuf[i], 0, PAGE_SIZE);
#else
/* XXX TODO */
/*
unsigned long base;
#endif
unsigned long jump;
- unsigned char *vaddr;
+ int pagei;
volatile struct dbdma_cmd *c1;
volatile struct dbdma_cmd *jump_addr;
/* even field data: */
- vaddr = pb->gbuffer[fr];
+ pagei = pb->gbuf_idx[fr];
#ifdef PLANB_GSCANLINE
for (i = 0; i < nlines; i += stepsize) {
tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count,
- vmalloc_to_bus(vaddr + i * scanline), jump);
+ virt_to_bus(pb->rawbuf[pagei
+ + i * scanline / PAGE_SIZE]), jump);
}
#else
i = 0;
do {
int j;
- base = vmalloc_to_bus((void*)vaddr);
+ base = virt_to_bus(pb->rawbuf[pagei]);
nlpp = (PAGE_SIZE - leftover1) / count / stepsize;
for(j = 0; j < nlpp && i < nlines; j++, i += stepsize, c1++)
tab_cmd_gen(c1, INPUT_MORE | KEY_STREAM0 | BR_IFSET,
tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count, base
+ count * nlpp * stepsize + leftover1, jump);
} else {
- pb->l_to_addr[fr][pb->lnum[fr]] = vaddr + count * nlpp
- * stepsize + leftover1;
+ pb->l_to_addr[fr][pb->lnum[fr]] = pb->rawbuf[pagei]
+ + count * nlpp * stepsize + leftover1;
+ pb->l_to_next_idx[fr][pb->lnum[fr]] = pagei + 1;
+ pb->l_to_next_size[fr][pb->lnum[fr]] = count - lov0;
tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count,
- vmalloc_to_bus(pb->l_fr_addr[fr] + PAGE_SIZE
- * pb->lnum[fr]), jump);
+ virt_to_bus(pb->rawbuf[pb->l_fr_addr_idx[fr]
+ + pb->lnum[fr]]), jump);
if(++pb->lnum[fr] > MAX_LNUM)
pb->lnum[fr]--;
}
i += stepsize;
}
}
- vaddr += PAGE_SIZE;
+ pagei++;
} while(i < nlines);
tab_cmd_dbdma(c1, DBDMA_NOP | BR_ALWAYS, jump);
c1 = jump_addr;
#ifdef PLANB_GSCANLINE
for (i = 1; i < nlines; i += stepsize) {
tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count,
- vmalloc_to_bus(vaddr + i * scanline), jump);
+ virt_to_bus(pb->rawbuf[pagei
+ + i * scanline / PAGE_SIZE]), jump);
}
#else
i = 1;
leftover1 = 0;
- vaddr = pb->gbuffer[fr];
+ pagei = pb->gbuf_idx[fr];
if(nlines <= 1)
goto skip;
do {
int j;
- base = vmalloc_to_bus((void*)vaddr);
+ base = virt_to_bus(pb->rawbuf[pagei]);
nlpp = (PAGE_SIZE - leftover1) / count / stepsize;
if(leftover1 >= count) {
tab_cmd_gen(c1++, INPUT_MORE | KEY_STREAM0 | BR_IFSET, count,
leftover1 = 0;
else {
if(lov0 > count) {
- pb->l_to_addr[fr][pb->lnum[fr]] = vaddr + count
- * (nlpp * stepsize + 1) + leftover1;
+ pb->l_to_addr[fr][pb->lnum[fr]] = pb->rawbuf[pagei]
+ + count * (nlpp * stepsize + 1) + leftover1;
+ pb->l_to_next_idx[fr][pb->lnum[fr]] = pagei + 1;
+ pb->l_to_next_size[fr][pb->lnum[fr]] = count * stepsize
+ - lov0;
tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count,
- vmalloc_to_bus(pb->l_fr_addr[fr] + PAGE_SIZE
- * pb->lnum[fr]), jump);
+ virt_to_bus(pb->rawbuf[pb->l_fr_addr_idx[fr]
+ + pb->lnum[fr]]), jump);
if(++pb->lnum[fr] > MAX_LNUM)
pb->lnum[fr]--;
i += stepsize;
leftover1 = count * stepsize - lov0;
}
}
- vaddr += PAGE_SIZE;
+ pagei++;
} while(i < nlines);
skip:
tab_cmd_dbdma(c1, DBDMA_NOP | BR_ALWAYS, jump);
cmd_tab_data_end:
tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->intr_stat),
- (fr << 2) | PLANB_FRM_IRQ | PLANB_GEN_IRQ);
+ (fr << 9) | PLANB_FRM_IRQ | PLANB_GEN_IRQ);
/* stop it */
tab_cmd_dbdma(c1, DBDMA_STOP, 0);
IDEBUG("PlanB: planb_irq()\n");
/* get/clear interrupt status bits */
+ eieio();
stat = in_le32(&pb->planb_base->intr_stat);
astat = stat & pb->intr_mask;
- out_le32(&pb->planb_base->intr_stat, PLANB_IRQ_CMD_MASK
+ out_le32(&pb->planb_base->intr_stat, PLANB_FRM_IRQ
& ~astat & stat & ~PLANB_GEN_IRQ);
+ IDEBUG("PlanB: stat = %X, astat = %X\n", stat, astat);
if(astat & PLANB_FRM_IRQ) {
- unsigned int fr = stat >> 2;
+ unsigned int fr = stat >> 9;
#ifndef PLANB_GSCANLINE
int i;
#endif
#ifndef PLANB_GSCANLINE
IDEBUG("PlanB: %d * %d bytes are being copied over\n",
pb->lnum[fr], pb->lsize[fr]);
- for(i = 0; i < pb->lnum[fr]; i++)
- memcpy(pb->l_to_addr[fr][i], pb->l_fr_addr[fr]
- + PAGE_SIZE * i, pb->lsize[fr]);
+ for(i = 0; i < pb->lnum[fr]; i++) {
+ int first = pb->lsize[fr] - pb->l_to_next_size[fr][i];
+
+ memcpy(pb->l_to_addr[fr][i],
+ pb->rawbuf[pb->l_fr_addr_idx[fr] + i],
+ first);
+ memcpy(pb->rawbuf[pb->l_to_next_idx[fr][i]],
+ pb->rawbuf[pb->l_fr_addr_idx[fr] + i] + first,
+ pb->l_to_next_size[fr][i]);
+ }
#endif
pb->frame_stat[fr] = GBUFFER_DONE;
pb->grabbing--;
IDEBUG("PlanB: waiting for grab"
" done (%d)\n", i);
interruptible_sleep_on(&pb->capq);
+ if(signal_pending(current))
+ return -EINTR;
goto chk_grab;
case GBUFFER_DONE:
pb->frame_stat[i] = GBUFFER_UNUSED;
return 0;
}
-/*
- * This maps the vmalloced and reserved fbuffer to user space.
- *
- * FIXME:
- * - PAGE_READONLY should suffice!?
- * - remap_page_range is kind of inefficient for page by page remapping.
- * But e.g. pte_alloc() does not work in modules ... :-(
- */
-
static int planb_mmap(struct video_device *dev, const char *adr, unsigned long size)
{
- struct planb *pb=(struct planb *)dev;
- unsigned long start=(unsigned long) adr;
- unsigned long page;
- void *pos;
+ int i;
+ struct planb *pb = (struct planb *)dev;
+ unsigned long start = (unsigned long)adr;
- if (size>MAX_GBUFFERS*PLANB_MAX_FBUF)
+ if (size > MAX_GBUFFERS * PLANB_MAX_FBUF)
return -EINVAL;
- if (!pb->fbuffer)
- {
- if(fbuffer_alloc(pb))
- return -EINVAL;
+ if (!pb->rawbuf) {
+ int err;
+ if((err=grabbuf_alloc(pb)))
+ return err;
}
- pos = (void *)pb->fbuffer;
- while (size > 0)
- {
- page = vmalloc_to_phys(pos);
- if (remap_page_range(start, page, PAGE_SIZE, PAGE_SHARED))
- return -EAGAIN;
- start+=PAGE_SIZE;
- pos+=PAGE_SIZE;
- size-=PAGE_SIZE;
+ for (i = 0; i < pb->rawbuf_size; i++) {
+ if (remap_page_range(start, virt_to_phys((void *)pb->rawbuf[i]),
+ PAGE_SIZE, PAGE_SHARED))
+ return -EAGAIN;
+ start += PAGE_SIZE;
+ if (size <= PAGE_SIZE)
+ break;
+ size -= PAGE_SIZE;
}
return 0;
}
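The remap loop above maps one page per iteration and exits as soon as the remaining size fits in a single page. A minimal user-space sketch of just that page-count logic (pages_mapped is an illustrative name, not part of the driver):

```c
#include <assert.h>

/* Count how many PAGE_SIZE pages a planb_mmap-style loop would
 * remap for a request of `size` bytes (illustrative sketch only). */
static int pages_mapped(unsigned long size, unsigned long page_size)
{
	int n = 0;

	while (1) {
		n++;			/* one remap_page_range() call */
		if (size <= page_size)	/* last (possibly partial) page */
			break;
		size -= page_size;
	}
	return n;
}
```

Note that a final partial page is still mapped whole, which is one reason PLANB_MAX_FBUF must be divisible by PAGE_SIZE.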
{
unsigned char saa_rev;
int i, result;
+ unsigned long flags;
memset ((void *) &pb->win, 0, sizeof (struct planb_window));
/* Simple sanity check */
pb->tab_size = PLANB_MAXLINES + 40;
pb->suspend = 0;
pb->lock = 0;
- pb->lockq = NULL;
+ init_waitqueue_head(&pb->lockq);
pb->ch1_cmd = 0;
pb->ch2_cmd = 0;
pb->mask = 0;
pb->offset = 0;
pb->user = 0;
pb->overlay = 0;
- pb->suspendq = NULL;
+ init_waitqueue_head(&pb->suspendq);
pb->cmd_buff_inited = 0;
pb->frame_buffer_phys = 0;
/* clear interrupt mask */
pb->intr_mask = PLANB_CLR_IRQ;
+ save_flags(flags); cli();
result = request_irq(pb->irq, planb_irq, 0, "PlanB", (void *)pb);
- if (result==-EINVAL) {
- printk(KERN_ERR "PlanB: Bad irq number (%d) or handler\n",
- (int)pb->irq);
- return result;
- }
- if (result==-EBUSY) {
- printk(KERN_ERR "PlanB: I don't know why, but IRQ %d busy\n",
- (int)pb->irq);
- return result;
- }
- if (result < 0)
- return result;
+ if (result < 0) {
+ if (result==-EINVAL)
+ printk(KERN_ERR "PlanB: Bad irq number (%d) "
+ "or handler\n", (int)pb->irq);
+ else if (result==-EBUSY)
+ printk(KERN_ERR "PlanB: I don't know why, "
+ "but IRQ %d is busy\n", (int)pb->irq);
+ restore_flags(flags);
+ return result;
+ }
+ disable_irq(pb->irq);
+ restore_flags(flags);
/* Now add the template and register the device unit. */
memcpy(&pb->video_dev,&planb_template,sizeof(planb_template));
pb->picture.depth = pb->win.depth;
pb->frame_stat=NULL;
- pb->capq=NULL;
+ init_waitqueue_head(&pb->capq);
for(i=0; i<MAX_GBUFFERS; i++) {
- pb->gbuffer[i]=NULL;
+ pb->gbuf_idx[i] = PLANB_MAX_FBUF * i / PAGE_SIZE;
pb->gwidth[i]=0;
pb->gheight[i]=0;
pb->gfmt[i]=0;
pb->cap_cmd[i]=NULL;
#ifndef PLANB_GSCANLINE
- pb->l_fr_addr[i]=NULL;
+ pb->l_fr_addr_idx[i] = MAX_GBUFFERS * (PLANB_MAX_FBUF
+ / PAGE_SIZE + 1) + MAX_LNUM * i;
pb->lsize[i] = 0;
pb->lnum[i] = 0;
#endif
}
- pb->fbuffer=NULL;
+ pb->rawbuf=NULL;
pb->grabbing=0;
- /* clear interrupts */
+ /* enable interrupts */
out_le32(&pb->planb_base->intr_stat, PLANB_CLR_IRQ);
- /* set interrupt mask */
pb->intr_mask = PLANB_FRM_IRQ;
+ enable_irq(pb->irq);
if(video_register_device(&pb->video_dev, VFL_TYPE_GRABBER)<0)
return -1;
Based largely on the bttv driver by Ralph Metzler (rjkm@thp.uni-koeln.de)
- Additional debugging and coding by Takashi Oe (toe@unlinfo.unl.edu)
+ Additional debugging and coding by Takashi Oe (toe@unlserve.unl.edu)
This program is free software; you can redistribute it and/or modify
/* for capture operations */
#define MAX_GBUFFERS 2
+/* note PLANB_MAX_FBUF must be divisible by PAGE_SIZE */
#ifdef PLANB_GSCANLINE
#define PLANB_MAX_FBUF 0x240000 /* 576 * 1024 * 4 */
#define TAB_FACTOR (1)
volatile unsigned int intr_stat; /* 0x104: irq status */
#define PLANB_CLR_IRQ 0x00 /* clear Plan B interrupt */
#define PLANB_GEN_IRQ 0x01 /* assert Plan B interrupt */
-#define PLANB_FRM_IRQ 0x02 /* end of frame */
-#define PLANB_IRQ_CMD_MASK 0x00000003U /* reserve 2 lsbs for command */
+#define PLANB_FRM_IRQ 0x0100 /* end of frame */
unsigned int pad3[1]; /* empty? */
volatile unsigned int reg5; /* 0x10c: ??? */
unsigned int pad4[60]; /* empty? */
wait_queue_head_t capq;
int last_fr;
int prev_last_fr;
- unsigned char *fbuffer;
- unsigned char *gbuffer[MAX_GBUFFERS];
+ unsigned char **rawbuf;
+ int rawbuf_size;
+ int gbuf_idx[MAX_GBUFFERS];
volatile struct dbdma_cmd *cap_cmd[MAX_GBUFFERS];
volatile struct dbdma_cmd *last_cmd[MAX_GBUFFERS];
volatile struct dbdma_cmd *pre_cmd[MAX_GBUFFERS];
#else
#define MAX_LNUM 431 /* change this if PLANB_MAXLINES or */
/* PLANB_MAXPIXELS changes */
- unsigned char *l_fr_addr[MAX_GBUFFERS];
+ int l_fr_addr_idx[MAX_GBUFFERS];
unsigned char *l_to_addr[MAX_GBUFFERS][MAX_LNUM];
+ int l_to_next_idx[MAX_GBUFFERS][MAX_LNUM];
+ int l_to_next_size[MAX_GBUFFERS][MAX_LNUM];
int lsize[MAX_GBUFFERS], lnum[MAX_GBUFFERS];
#endif
};
void add_keyboard_randomness(unsigned char scancode)
{
- add_timer_randomness(&keyboard_timer_state, scancode);
+ static unsigned char last_scancode = 0;
+ /* ignore autorepeat (multiple key down w/o key up) */
+ if (scancode != last_scancode) {
+ last_scancode = scancode;
+ add_timer_randomness(&keyboard_timer_state, scancode);
+ }
}
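The hunk above only feeds a scancode into the entropy pool when it differs from the previous one, so held-down keys (autorepeat) stop contributing. The same dedup logic, extracted into a testable sketch (feed_scancode and add_entropy are stand-ins for the kernel functions):

```c
#include <assert.h>

static int entropy_events;	/* counts scancodes accepted for entropy */

static void add_entropy(unsigned char sc)
{
	(void)sc;
	entropy_events++;	/* stand-in for add_timer_randomness() */
}

/* Feed a scancode, skipping autorepeat (same code twice in a row). */
static void feed_scancode(unsigned char sc)
{
	static unsigned char last;

	if (sc != last) {
		last = sc;
		add_entropy(sc);
	}
}

/* demo: key 30 held (repeats), then 48 held, then 30 again */
static int demo(void)
{
	unsigned char seq[] = { 30, 30, 30, 48, 48, 30 };
	unsigned i;

	for (i = 0; i < sizeof seq; i++)
		feed_scancode(seq[i]);
	return entropy_events;
}
```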
void add_mouse_randomness(__u32 mouse_data)
*
* The port specified is deconfigured and its resources are freed. Any
* user of the port is disconnected as if carrier was dropped. Line is
- * the port number returned by register_serial.
+ * the port number returned by register_serial().
*/
void unregister_serial(int line)
/**
* video_register_device - register video4linux devices
- * @vfd: Video device structure we want to register
+ * @vfd: video device structure we want to register
* @type: type of device to register
* FIXME: needs a semaphore on 2.3.x
*
* The registration code assigns minor numbers based on the type
 * requested. -ENFILE is returned if all the device slots for this
 * category are full. If not then the minor field is set and the
- * driver initialize function is called (if non NULL).
+ * driver initialize function is called (if non %NULL).
*
* Zero is returned on success.
*
* Valid types are
*
- * VFL_TYPE_GRABBER - A frame grabber
+ * %VFL_TYPE_GRABBER - A frame grabber
*
- * VFL_TYPE_VTX - A teletext device
+ * %VFL_TYPE_VTX - A teletext device
*
- * VFL_TYPE_VBI - Vertical blank data (undecoded)
+ * %VFL_TYPE_VBI - Vertical blank data (undecoded)
*
- * VFL_TYPE_RADIO - A radio card
+ * %VFL_TYPE_RADIO - A radio card
*/
int video_register_device(struct video_device *vfd, int type)
tristate 'I2O support' CONFIG_I2O
-dep_tristate ' I2O PCI support' CONFIG_I2O_PCI $CONFIG_I2O
+if [ "$CONFIG_PCI" = "y" ]; then
+ dep_tristate ' I2O PCI support' CONFIG_I2O_PCI $CONFIG_I2O
+fi
dep_tristate ' I2O Block OSM' CONFIG_I2O_BLOCK $CONFIG_I2O
if [ "$CONFIG_NET" = "y" ]; then
dep_tristate ' I2O LAN OSM' CONFIG_I2O_LAN $CONFIG_I2O
Debugging SCSI and Block OSM
Deepak Saxena, Intel Corp.
+ Various core/block extensions
/proc interface, bug fixes
Ioctl interfaces for control
Debugging LAN OSM
STATUS:
o The core setup works within limits.
-o The scsi layer seems to almost work. I'm still chasing down the hang
- bug.
-o The block OSM is fairly minimal but does seem to work.
+o The scsi layer seems to almost work.
+ I'm still chasing down the hang bug.
+o The block OSM is mostly functional.
o LAN OSM works with FDDI and Ethernet cards.
TO DO:
General:
-o Support multiple IOP's and tell them about each other
o Provide hidden address space if asked
o Long term message flow control
o PCI IOP's without interrupts are not supported yet
o Push FAIL handling into the core
o DDM control interfaces for module load etc
-o Event handling
+o Add I2O 2.0 support (deferred to the 2.5 kernel)
Block:
-o Real error handler
o Multiple major numbers
o Read ahead and cache handling stuff. Talk to Ingo and people
o Power management
SCSI:
o Find the right way to associate drives/luns/busses
-Lan: Batch mode sends
- Performance tuning
- Event handling
+Lan:
+o Performance tuning
+o Test Fibre Channel code
+o Fix lan_set_mc_list()
Tape:
o Anyone seen anything implementing this?
+ (D.S: Will attempt to do so if spare cycles permit)
This document and the I2O user space interface are currently maintained
by Deepak Saxena. Please send all comments, errata, and bug fixes to
-deepak@plexity.net
+deepak@csociety.purdue.edu
II. IOP Access
/*
- * I2O block device driver.
+ * I2O Random Block Storage Class OSM
*
- * (C) Copyright 1999 Red Hat Software
+ * (C) Copyright 1999 Red Hat Software
*
- * Written by Alan Cox, Building Number Three Ltd
+ * Written by Alan Cox, Building Number Three Ltd
*
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
*
- * This is a beta test release. Most of the good code was taken
- * from the nbd driver by Pavel Machek, who in turn took some of it
- * from loop.c. Isn't free software great for reusability 8)
+ * This is a beta test release. Most of the good code was taken
+ * from the nbd driver by Pavel Machek, who in turn took some of it
+ * from loop.c. Isn't free software great for reusability 8)
+ *
+ * Fixes/additions:
+ * Steve Ralston:
+ * Multiple device handling error fixes,
+ * Added a queue depth.
+ * Alan Cox:
+ * FC920 has an rmw bug. Don't OR in the end marker.
+ * Removed queue walk, fixed for 64bitness.
+ * Deepak Saxena:
+ * Independent queues per IOP
+ * Support for dynamic device creation/deletion
+ * Code cleanup
+ * Support for larger I/Os through merge* functions
+ * (taken from DAC960 driver)
*
- * Fixes:
- * Steve Ralston: Multiple device handling error fixes,
- * Added a queue depth.
- * Alan Cox: FC920 has an rmw bug. Dont or in the
- * end marker.
- * Removed queue walk, fixed for 64bitness.
* To do:
- * Multiple majors
* Serial number scanning to find duplicates for FC multipathing
- * Set the new max_sectors according to max message size
- * Use scatter gather chains for bigger I/O sizes
*/
#include <linux/major.h>
#include <linux/reboot.h>
#include <asm/uaccess.h>
+#include <asm/semaphore.h>
#include <asm/io.h>
#include <asm/atomic.h>
+#include <linux/smp_lock.h>
+#include <linux/wait.h>
#define MAJOR_NR I2O_MAJOR
#define MAX_I2OB 16
-#define MAX_I2OB_DEPTH 32
+#define MAX_I2OB_DEPTH 128
#define MAX_I2OB_RETRIES 4
+//#define DRIVERDEBUG
+#ifdef DRIVERDEBUG
+#define DEBUG( s ) printk( s )
+#else
+#define DEBUG( s )
+#endif
+
+/*
+ * Events that this OSM is interested in
+ */
+#define I2OB_EVENT_MASK (I2O_EVT_IND_BSA_VOLUME_LOAD | \
+ I2O_EVT_IND_BSA_VOLUME_UNLOAD | \
+ I2O_EVT_IND_BSA_VOLUME_UNLOAD_REQ | \
+ I2O_EVT_IND_BSA_CAPACITY_CHANGE)
+
+
+/*
+ * I2O Block Error Codes - should be in a header file really...
+ */
+#define I2O_BSA_DSC_SUCCESS 0x0000
+#define I2O_BSA_DSC_MEDIA_ERROR 0x0001
+#define I2O_BSA_DSC_ACCESS_ERROR 0x0002
+#define I2O_BSA_DSC_DEVICE_FAILURE 0x0003
+#define I2O_BSA_DSC_DEVICE_NOT_READY 0x0004
+#define I2O_BSA_DSC_MEDIA_NOT_PRESENT 0x0005
+#define I2O_BSA_DSC_MEDIA_LOCKED 0x0006
+#define I2O_BSA_DSC_MEDIA_FAILURE 0x0007
+#define I2O_BSA_DSC_PROTOCOL_FAILURE 0x0008
+#define I2O_BSA_DSC_BUS_FAILURE 0x0009
+#define I2O_BSA_DSC_ACCESS_VIOLATION 0x000A
+#define I2O_BSA_DSC_WRITE_PROTECTED 0x000B
+#define I2O_BSA_DSC_DEVICE_RESET 0x000C
+#define I2O_BSA_DSC_VOLUME_CHANGED 0x000D
+#define I2O_BSA_DSC_TIMEOUT 0x000E
+
/*
* Some of these can be made smaller later
*/
static int i2ob_context;
+/*
+ * I2O Block device descriptor
+ */
struct i2ob_device
{
struct i2o_controller *controller;
struct i2o_device *i2odev;
+ int unit;
int tid;
int flags;
int refcnt;
struct request *head, *tail;
+ request_queue_t *req_queue;
+ int max_segments;
int done_flag;
};
* We should cache align these to avoid ping-ponging lines on SMP
* boxes under heavy I/O load...
*/
-
struct i2ob_request
{
struct i2ob_request *next;
struct request *req;
int num;
-};
+} __cacheline_aligned;
+/*
+ * Per-IOP request queue information
+ *
+ * We have a separate request_queue_t per IOP so that a heavily
+ * loaded I2O block device on one IOP does not starve block devices
+ * across all I2O controllers.
+ */
+struct i2ob_iop_queue
+{
+ atomic_t queue_depth;
+ struct i2ob_request request_queue[MAX_I2OB_DEPTH];
+ struct i2ob_request *i2ob_qhead;
+ request_queue_t req_queue;
+};
+static struct i2ob_iop_queue *i2ob_queues[MAX_I2O_CONTROLLERS] = {NULL};
/*
* Each I2O disk is one of these.
*/
static struct i2ob_device i2ob_dev[MAX_I2OB<<4];
-static int i2ob_devices = 0;
+static int i2ob_dev_count = 0;
static struct hd_struct i2ob[MAX_I2OB<<4];
static struct gendisk i2ob_gendisk; /* Declared later */
-static atomic_t queue_depth; /* For flow control later on */
-static struct i2ob_request i2ob_queue[MAX_I2OB_DEPTH+1];
-static struct i2ob_request *i2ob_qhead;
+/*
+ * Mutex and spin lock for event handling synchronization
+ * evt_msg contains the last event.
+ */
+DECLARE_MUTEX(i2ob_evt_sem);
+static spinlock_t i2ob_evt_lock = SPIN_LOCK_UNLOCKED;
+static unsigned int evt_msg[MSG_FRAME_SIZE>>2];
+DECLARE_WAIT_QUEUE_HEAD(i2ob_evt_wait);
static struct timer_list i2ob_timer;
static int i2ob_timer_started = 0;
-#define DEBUG( s )
-/* #define DEBUG( s ) printk( s )
- */
-
+static void i2o_block_reply(struct i2o_handler *, struct i2o_controller *,
+ struct i2o_message *);
+static void i2ob_new_device(struct i2o_controller *, struct i2o_device *);
+static void i2ob_del_device(struct i2o_controller *, struct i2o_device *);
+static void i2ob_reboot_event(void);
static int i2ob_install_device(struct i2o_controller *, struct i2o_device *, int);
static void i2ob_end_request(struct request *);
-static void i2ob_request(request_queue_t * q);
+static void i2ob_request(request_queue_t *);
+static int i2ob_init_iop(unsigned int);
+static request_queue_t* i2ob_get_queue(kdev_t);
+static int i2ob_query_device(struct i2ob_device *, int, int, void*, int);
+static int do_i2ob_revalidate(kdev_t, int);
+static int i2ob_evt(void *);
+
+static int evt_pid = 0;
+static int evt_running = 0;
/*
- * Dump messages.
+ * I2O OSM registration structure...keeps getting bigger and bigger :)
*/
-static void i2ob_dump_msg(struct i2ob_device *dev,u32 *msg,int size)
+static struct i2o_handler i2o_block_handler =
{
- int cnt;
-
- printk(KERN_INFO "\n\ni2o message:\n");
- for (cnt = 0; cnt<size; cnt++)
- {
- printk(KERN_INFO "m[%d]=%x\n",cnt,msg[cnt]);
- }
- printk(KERN_INFO "\n");
-}
+ i2o_block_reply,
+ i2ob_new_device,
+ i2ob_del_device,
+ i2ob_reboot_event,
+ "I2O Block OSM",
+ 0,
+ I2O_CLASS_RANDOM_BLOCK_STORAGE
+};
/*
* Get a message
struct request *req = ireq->req;
struct buffer_head *bh = req->bh;
int count = req->nr_sectors<<9;
+ char *last = NULL;
+ unsigned short size = 0;
+ // printk(KERN_INFO "i2ob_send called\n");
/* Map the message to a virtual address */
msg = c->mem_offset + m;
/*
- * Build the message based on the request.
+ * Build the message based on the request.
*/
__raw_writel(i2ob_context|(unit<<8), msg+8);
__raw_writel(ireq->num, msg+12);
__raw_writel(1<<16, msg+16);
while(bh!=NULL)
{
- /*
- * Its best to do this in one not or it in
- * later. mptr is in PCI space so fast to write
- * sucky to read.
- */
- if(bh->b_reqnext)
- __raw_writel(0x10000000|(bh->b_size), mptr);
+ if(bh->b_data == last) {
+ size += bh->b_size;
+ last += bh->b_size;
+ if(bh->b_reqnext)
+ __raw_writel(0x14000000|(size), mptr-8);
+ else
+ __raw_writel(0xD4000000|(size), mptr-8);
+ }
else
- __raw_writel(0xD0000000|(bh->b_size), mptr);
-
- __raw_writel(virt_to_bus(bh->b_data), mptr+4);
- mptr+=8;
+ {
+ if(bh->b_reqnext)
+ __raw_writel(0x10000000|(bh->b_size), mptr);
+ else
+ __raw_writel(0xD0000000|(bh->b_size), mptr);
+ __raw_writel(virt_to_bus(bh->b_data), mptr+4);
+ mptr += 8;
+ size = bh->b_size;
+ last = bh->b_data + size;
+ }
+
count -= bh->b_size;
bh = bh->b_reqnext;
}
else if(req->cmd == WRITE)
{
__raw_writel(I2O_CMD_BLOCK_WRITE<<24|HOST_TID<<12|tid, msg+4);
- __raw_writel(1<<16, msg+16);
+ /*
+ * Allow replies to come back once data is cached in the controller
+ * This allows us to handle writes quickly thus giving more of the
+ * queue to reads.
+ */
+ __raw_writel(0x00000010, msg+16);
while(bh!=NULL)
{
- if(bh->b_reqnext)
- __raw_writel(0x14000000|(bh->b_size), mptr);
+ if(bh->b_data == last) {
+ size += bh->b_size;
+ last += bh->b_size;
+ if(bh->b_reqnext)
+ __raw_writel(0x14000000|(size), mptr-8);
+ else
+ __raw_writel(0xD4000000|(size), mptr-8);
+ }
else
- __raw_writel(0xD4000000|(bh->b_size), mptr);
+ {
+ if(bh->b_reqnext)
+ __raw_writel(0x14000000|(bh->b_size), mptr);
+ else
+ __raw_writel(0xD4000000|(bh->b_size), mptr);
+ __raw_writel(virt_to_bus(bh->b_data), mptr+4);
+ mptr += 8;
+ size = bh->b_size;
+ last = bh->b_data + size;
+ }
+
count -= bh->b_size;
- __raw_writel(virt_to_bus(bh->b_data), mptr+4);
- mptr+=8;
bh = bh->b_reqnext;
}
}
__raw_writel(I2O_MESSAGE_SIZE(mptr-msg)>>2 | SGL_OFFSET_8, msg);
- if(req->current_nr_sectors > 8)
+ if(req->current_nr_sectors > i2ob_max_sectors[unit])
printk("Gathered sectors %ld.\n",
req->current_nr_sectors);
}
i2o_post_message(c,m);
- atomic_inc(&queue_depth);
+ atomic_inc(&i2ob_queues[c->unit]->queue_depth);
return 0;
}
* must hold the lock.
*/
-static inline void i2ob_unhook_request(struct i2ob_request *ireq)
+static inline void i2ob_unhook_request(struct i2ob_request *ireq,
+ unsigned int iop)
{
- ireq->next = i2ob_qhead;
- i2ob_qhead = ireq;
+ ireq->next = i2ob_queues[iop]->i2ob_qhead;
+ i2ob_queues[iop]->i2ob_qhead = ireq;
}
/*
* Request completion handler
*/
-static void i2ob_end_request(struct request *req)
+static inline void i2ob_end_request(struct request *req)
{
/*
* Loop until all of the buffers that are linked
* unlocked.
*/
-// printk("ending request %p: ", req);
- while (end_that_request_first( req, !req->errors, "i2o block" ))
- {
-// printk(" +\n");
- }
+ while (end_that_request_first( req, !req->errors, "i2o block" ));
/*
* It is now ok to complete the request.
*/
-
-// printk("finishing ");
end_that_request_last( req );
-// printk("done\n");
+}
+
+/*
+ * Request merging functions
+ */
+static inline int i2ob_new_segment(request_queue_t *q, struct request *req,
+ int __max_segments)
+{
+ int max_segments = i2ob_dev[MINOR(req->rq_dev)].max_segments;
+
+ if (__max_segments < max_segments)
+ max_segments = __max_segments;
+
+ if (req->nr_segments < max_segments) {
+ req->nr_segments++;
+ q->elevator.nr_segments++;
+ return 1;
+ }
+ return 0;
+}
+
+static int i2ob_back_merge(request_queue_t *q, struct request *req,
+ struct buffer_head *bh, int __max_segments)
+{
+ if (req->bhtail->b_data + req->bhtail->b_size == bh->b_data)
+ return 1;
+ return i2ob_new_segment(q, req, __max_segments);
+}
+
+static int i2ob_front_merge(request_queue_t *q, struct request *req,
+ struct buffer_head *bh, int __max_segments)
+{
+ if (bh->b_data + bh->b_size == req->bh->b_data)
+ return 1;
+ return i2ob_new_segment(q, req, __max_segments);
+}
+
+static int i2ob_merge_requests(request_queue_t *q,
+ struct request *req,
+ struct request *next,
+ int __max_segments)
+{
+ int max_segments = i2ob_dev[MINOR(req->rq_dev)].max_segments;
+ int total_segments = req->nr_segments + next->nr_segments;
+ int same_segment;
+
+ if (__max_segments < max_segments)
+ max_segments = __max_segments;
+
+ same_segment = 0;
+ if (req->bhtail->b_data + req->bhtail->b_size == next->bh->b_data)
+ {
+ total_segments--;
+ same_segment = 1;
+ }
+
+ if (total_segments > max_segments)
+ return 0;
+
+ q->elevator.nr_segments -= same_segment;
+ req->nr_segments = total_segments;
+ return 1;
}
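The merge helpers above, together with the `bh->b_data == last` test in i2ob_send, rely on one invariant: buffers whose data addresses are contiguous can share a single scatter-gather element. A user-space sketch of that coalescing rule (sg_elements and struct seg are illustrative names, not driver code):

```c
#include <assert.h>
#include <stddef.h>

struct seg { char *data; size_t size; };

/* Count the scatter-gather elements needed for a buffer chain,
 * coalescing address-contiguous buffers as i2ob_send() does. */
static int sg_elements(const struct seg *s, int n)
{
	int elems = 0;
	const char *last = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (s[i].data != last)	/* gap: start a new SG element */
			elems++;	/* contiguous: current one just grows */
		last = s[i].data + s[i].size;
	}
	return elems;
}

/* demo: first two buffers contiguous, third after a 1K gap */
static int demo(void)
{
	static char buf[4096];
	struct seg s[3] = {
		{ buf, 1024 }, { buf + 1024, 1024 }, { buf + 3072, 512 }
	};
	return sg_elements(s, 3);
}
```

Fewer SG elements per message is what lets the driver issue I/Os larger than the old 8-sector limit without chaining.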
static void i2o_block_reply(struct i2o_handler *h, struct i2o_controller *c, struct i2o_message *msg)
{
unsigned long flags;
- struct i2ob_request *ireq;
+ struct i2ob_request *ireq = NULL;
u8 st;
u32 *m = (u32 *)msg;
u8 unit = (m[2]>>8)&0xF0; /* low 4 bits are partition */
-
+ struct i2ob_device *dev = &i2ob_dev[(unit&0xF0)];
+
+ /*
+ * FAILed message
+ */
if(m[0] & (1<<13))
{
- printk("IOP fail.\n");
- printk("From %d To %d Cmd %d.\n",
- (m[1]>>12)&0xFFF,
- m[1]&0xFFF,
- m[1]>>24);
- printk("Failure Code %d.\n", m[4]>>24);
- if(m[4]&(1<<16))
- printk("Format error.\n");
- if(m[4]&(1<<17))
- printk("Path error.\n");
- if(m[4]&(1<<18))
- printk("Path State.\n");
- if(m[4]&(1<<18))
- printk("Congestion.\n");
-
- m=(u32 *)bus_to_virt(m[7]);
- printk("Failing message is %p.\n", m);
-
- /* We need to up the request failure count here and maybe
- abort it */
- ireq=&i2ob_queue[m[3]];
+ /*
+ * FAILed message from controller
+ * We increment the error count and abort it
+ *
+ * In theory this will never happen. The I2O block class
+ * specification states that block devices never return
+ * FAILs but instead use the REQ status field...but
+ * better be on the safe side since no one really follows
+ * the spec to the book :)
+ */
+ ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
+ ireq->req->errors++;
+
+ spin_lock_irqsave(&io_request_lock, flags);
+ i2ob_unhook_request(ireq, c->unit);
+ i2ob_end_request(ireq->req);
+ spin_unlock_irqrestore(&io_request_lock, flags);
+
/* Now flush the message by making it a NOP */
m[0]&=0x00FFFFFF;
m[0]|=(I2O_CMD_UTIL_NOP)<<24;
i2o_post_message(c,virt_to_bus(m));
-
+
+ return;
}
- else
+
+ if(msg->function == I2O_CMD_UTIL_EVT_REGISTER)
+ {
+ spin_lock(&i2ob_evt_lock);
+ memcpy(&evt_msg, m, msg->size);
+ spin_unlock(&i2ob_evt_lock);
+ wake_up_interruptible(&i2ob_evt_wait);
+ return;
+ }
+
+ if(!dev->i2odev)
{
- if(m[2]&0x40000000)
- {
- int * ptr = (int *)m[3];
- if(m[4]>>24)
- *ptr = -1;
- else
- *ptr = 1;
- return;
- }
/*
- * Lets see what is cooking. We stuffed the
- * request in the context.
+ * This is a HACK, but Intel Integrated RAID allows the user
+ * to delete a volume that is claimed, locked, and in use
+ * by the OS. We have to check for a reply from a
+ * non-existent device and flag it as an error or the system
+ * goes kaput...
*/
+ ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
+ ireq->req->errors++;
+ printk(KERN_WARNING "I2O Block: Data transfer to deleted device!\n");
+ spin_lock_irqsave(&io_request_lock, flags);
+ i2ob_unhook_request(ireq, c->unit);
+ i2ob_end_request(ireq->req);
+ spin_unlock_irqrestore(&io_request_lock, flags);
+ return;
+ }
+
+ /*
+ * Lets see what is cooking. We stuffed the
+ * request in the context.
+ */
- ireq=&i2ob_queue[m[3]];
- st=m[4]>>24;
-
- if(st!=0)
- {
- printk(KERN_ERR "i2ob: error %08X\n", m[4]);
- ireq->req->errors++;
- if (ireq->req->errors < MAX_I2OB_RETRIES)
- {
- u32 retry_msg;
- struct i2ob_device *dev;
+ ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
+ st=m[4]>>24;
- printk(KERN_ERR "i2ob: attempting retry %d for request %p\n",ireq->req->errors+1,ireq->req);
-
- /*
- * Get a message for this retry.
- */
- dev = &i2ob_dev[(unit&0xF0)];
- retry_msg = i2ob_get(dev);
-
- /*
- * If we cannot get a message then
- * forget the retry and fail the
- * request. Note that since this is
- * being called from the interrupt
- * handler, a request has just been
- * completed and there will most likely
- * be space on the inbound message
- * fifo so this won't happen often.
- */
- if(retry_msg!=0xFFFFFFFF)
- {
- /*
- * Decrement the queue depth since
- * this request has completed and
- * it will be incremented again when
- * i2ob_send is called below.
- */
- atomic_dec(&queue_depth);
-
- /*
- * Send the request again.
- */
- i2ob_send(retry_msg, dev,ireq,i2ob[unit].start_sect, (unit&0xF0));
- /*
- * Don't fall through.
- */
- return;
- }
- }
- }
- else
- ireq->req->errors = 0;
+ if(st!=0)
+ {
+ char *bsa_errors[] =
+ {
+ "Success",
+ "Media Error",
+ "Failure communicating to device",
+ "Device Failure",
+ "Device is not ready",
+ "Media not present",
+ "Media is locked by another user",
+ "Media has failed",
+ "Failure communicating to device",
+ "Device bus failure",
+ "Device is locked by another user",
+ "Device is write protected",
+ "Device has reset",
+ "Volume has changed, waiting for acknowledgement",
+ "Timeout"
+ };
+
+ printk(KERN_ERR "\n/dev/%s error: %s", dev->i2odev->dev_name,
 bsa_errors[m[4]&0xFFFF]);
+ if(m[4]&0x00FF0000)
+ printk(" - DDM attempted %d retries", (m[4]>>16)&0x00FF );
+ printk("\n");
+
+ ireq->req->errors++;
}
-
+ else
+ ireq->req->errors = 0;
+
/*
* Dequeue the request. We use irqsave locks as one day we
* may be running polled controllers from a BH...
*/
spin_lock_irqsave(&io_request_lock, flags);
- i2ob_unhook_request(ireq);
+ i2ob_unhook_request(ireq, c->unit);
i2ob_end_request(ireq->req);
-
+ atomic_dec(&i2ob_queues[c->unit]->queue_depth);
+
/*
* We may be able to do more I/O
*/
-
- atomic_dec(&queue_depth);
- i2ob_request(NULL);
+ i2ob_request(dev->req_queue);
+
spin_unlock_irqrestore(&io_request_lock, flags);
}
-static struct i2o_handler i2o_block_handler =
+/*
+ * Event handler. Needs to be a separate thread b/c we may have
+ * to do things like scan a partition table, or query parameters
+ * which cannot be done from an interrupt or from a bottom half.
+ */
+static int i2ob_evt(void *dummy)
{
- i2o_block_reply,
- "I2O Block OSM",
- 0,
- I2O_CLASS_RANDOM_BLOCK_STORAGE
-};
+ unsigned int evt;
+ unsigned long flags;
+ int unit;
+ int i;
+
+ lock_kernel();
+ exit_files(current);
+ daemonize();
+ unlock_kernel();
+
+ strcpy(current->comm, "i2oblock");
+ evt_running = 1;
+
+ while(1)
+ {
+ interruptible_sleep_on(&i2ob_evt_wait);
+ if(signal_pending(current)) {
+ evt_running = 0;
+ return 0;
+ }
+
+ printk(KERN_INFO "Doing something in i2o_block event thread\n");
+
+ /*
+ * Keep another CPU/interrupt from overwriting the
+ * message while we're reading it
+ *
+ * We stuffed the unit in the TxContext and grab the event mask
+ * None of the BSA we care about events have EventData
+ */
+ spin_lock_irqsave(&i2ob_evt_lock, flags);
+ unit = evt_msg[3];
+ evt = evt_msg[4];
+ spin_unlock_irqrestore(&i2ob_evt_lock, flags);
+
+ switch(evt)
+ {
+ /*
+ * New volume loaded on same TID, so we just re-install.
+ * The TID/controller don't change as it is the same
+ * I2O device. It's just new media that we have to
+ * rescan.
+ */
+ case I2O_EVT_IND_BSA_VOLUME_LOAD:
+ {
+ i2ob_install_device(i2ob_dev[unit].i2odev->controller,
+ i2ob_dev[unit].i2odev, unit);
+ break;
+ }
+
+ /*
+ * No media, so set all parameters to 0 and set the media
+ * change flag. The I2O device is still valid, just doesn't
+ * have media, so we don't want to clear the controller or
+ * device pointer.
+ */
+ case I2O_EVT_IND_BSA_VOLUME_UNLOAD:
+ {
+ for(i = unit; i <= unit+15; i++)
+ {
+ i2ob_sizes[i] = 0;
+ i2ob_hardsizes[i] = 0;
+ i2ob_max_sectors[i] = 0;
+ i2ob[i].nr_sects = 0;
+ i2ob_gendisk.part[i].nr_sects = 0;
+ }
+ i2ob_media_change_flag[unit] = 1;
+ break;
+ }
+
+ case I2O_EVT_IND_BSA_VOLUME_UNLOAD_REQ:
+ printk(KERN_WARNING "%s: Attempt to eject locked media\n",
+ i2ob_dev[unit].i2odev->dev_name);
+ break;
+
+ /*
+ * The capacity has changed and we are going to be
+ * updating the max_sectors and other information
+ * about this disk. We try a revalidate first. If
+ * the block device is in use, we don't want to
+ * do that as there may be I/Os bound for the disk
+ * at the moment. In that case we read the size
+ * from the device and update the information ourselves
+ * and the user can later force a partition table
+ * update through an ioctl.
+ */
+ case I2O_EVT_IND_BSA_CAPACITY_CHANGE:
+ {
+ u64 size;
+
+ if(do_i2ob_revalidate(MKDEV(MAJOR_NR, unit),0) != -EBUSY)
+ continue;
+
+ if(i2ob_query_device(&i2ob_dev[unit], 0x0004, 0, &size, 8) !=0 )
+ i2ob_query_device(&i2ob_dev[unit], 0x0000, 4, &size, 8);
+
+ spin_lock_irqsave(&io_request_lock, flags);
+ i2ob_sizes[unit] = (int)(size>>10);
+ i2ob_gendisk.part[unit].nr_sects = size>>9;
+ i2ob[unit].nr_sects = (int)(size>>9);
+ spin_unlock_irqrestore(&io_request_lock, flags);
+ break;
+ }
+
+ /*
+ * An event we didn't ask for. Call the card manufacturer
+ * and tell them to fix their firmware :)
+ */
+ default:
+ printk(KERN_INFO "%s: Received event we didn't register for\n"
+ KERN_INFO " Call I2O card manufacturer\n",
+ i2ob_dev[unit].i2odev->dev_name);
+ break;
+ }
+ }
+
+ return 0;
+}
/*
* The timer handler will attempt to restart requests
* had no more room in its inbound fifo.
*/
-static void i2ob_timer_handler(unsigned long dummy)
+static void i2ob_timer_handler(unsigned long q)
{
unsigned long flags;
/*
* Restart any requests.
*/
- i2ob_request(NULL);
+ i2ob_request((request_queue_t*)q);
/*
* Free the lock.
* on us. We must unlink CURRENT in this routine before we return, if
* we use it.
*/
-
-static void i2ob_request(request_queue_t * q)
+static void i2ob_request(request_queue_t *q)
{
struct request *req;
struct i2ob_request *ireq;
struct i2ob_device *dev;
u32 m;
- while (!QUEUE_EMPTY) {
+ // printk(KERN_INFO "i2ob_request() called with queue %p\n", q);
+
+ while (!list_empty(&q->queue_head)) {
/*
* On an IRQ completion if there is an inactive
* request on the queue head it means it isnt yet
* ready to dispatch.
*/
- if(CURRENT->rq_status == RQ_INACTIVE)
+ req = blkdev_entry_next_request(&q->queue_head);
+
+ if(req->rq_status == RQ_INACTIVE)
return;
- /*
- * Queue depths probably belong with some kind of
- * generic IOP commit control. Certainly its not right
- * its global!
+ unit = MINOR(req->rq_dev);
+ dev = &i2ob_dev[(unit&0xF0)];
+
+ /*
+ * Queue depths probably belong with some kind of
+ * generic IOP commit control. Certainly its not right
+ * its global!
*/
- if(atomic_read(&queue_depth)>=MAX_I2OB_DEPTH)
+ if(atomic_read(&i2ob_queues[dev->unit]->queue_depth)>=MAX_I2OB_DEPTH)
break;
- req = CURRENT;
- unit = MINOR(req->rq_dev);
- dev = &i2ob_dev[(unit&0xF0)];
/* Get a message */
m = i2ob_get(dev);
* 500ms.
*/
i2ob_timer.expires = jiffies + (HZ >> 1);
+ i2ob_timer.data = (unsigned long)q;
/*
* Start it.
add_timer(&i2ob_timer);
}
}
+
+ /*
+ * Everything ok, so pull from kernel queue onto our queue
+ */
req->errors = 0;
- blkdev_dequeue_request(req);
+ blkdev_dequeue_request(req);
req->sem = NULL;
-
- ireq = i2ob_qhead;
- i2ob_qhead = ireq->next;
+
+ ireq = i2ob_queues[dev->unit]->i2ob_qhead;
+ i2ob_queues[dev->unit]->i2ob_qhead = ireq->next;
ireq->req = req;
i2ob_send(m, dev, ireq, i2ob[unit].start_sect, (unit&0xF0));
}
}
+
/*
* SCSI-CAM for ioctl geometry mapping
* Duplicated with SCSI - this should be moved into somewhere common
* perhaps genhd ?
+ *
+ * LBA -> CHS mapping table taken from:
+ *
+ * "Incorporating the I2O Architecture into BIOS for Intel Architecture
+ * Platforms"
+ *
+ * This is an I2O document that is only available to I2O members,
+ * not developers.
+ *
+ * From my understanding, this is how all the I2O cards do this
+ *
+ * Disk Size | Sectors | Heads | Cylinders
+ * ---------------+---------+-------+-------------------
+ * 1 < X <= 528M | 63 | 16 | X/(63 * 16 * 512)
+ * 528M < X <= 1G | 63 | 32 | X/(63 * 32 * 512)
+ * 1G < X <= 21G | 63 | 64 | X/(63 * 64 * 512)
+ * 21G < X <= 42G | 63 | 128 | X/(63 * 128 * 512)
+ * 42G < X | 63 | 255 | X/(63 * 255 * 512)
+ *
*/
-
+#define BLOCK_SIZE_528M 1081344
+#define BLOCK_SIZE_1G 2097152
+#define BLOCK_SIZE_21G 4403200
+#define BLOCK_SIZE_42G 8806400
+#define BLOCK_SIZE_84G 17612800
+
static void i2o_block_biosparam(
unsigned long capacity,
unsigned short *cyls,
unsigned char *hds,
unsigned char *secs)
{
- unsigned long heads, sectors, cylinders, temp;
-
- cylinders = 1024L; /* Set number of cylinders to max */
- sectors = 62L; /* Maximize sectors per track */
-
- temp = cylinders * sectors; /* Compute divisor for heads */
- heads = capacity / temp; /* Compute value for number of heads */
- if (capacity % temp) { /* If no remainder, done! */
- heads++; /* Else, increment number of heads */
- temp = cylinders * heads; /* Compute divisor for sectors */
- sectors = capacity / temp; /* Compute value for sectors per
- track */
- if (capacity % temp) { /* If no remainder, done! */
- sectors++; /* Else, increment number of sectors */
- temp = heads * sectors; /* Compute divisor for cylinders */
- cylinders = capacity / temp;/* Compute number of cylinders */
- }
- }
- /* if something went wrong, then apparently we have to return
- a geometry with more than 1024 cylinders */
- if (cylinders == 0 || heads > 255 || sectors > 63 || cylinders >1023)
- {
- unsigned long temp_cyl;
-
+ unsigned long heads, sectors, cylinders;
+
+ sectors = 63L; /* Maximize sectors per track */
+ if(capacity <= BLOCK_SIZE_528M)
+ heads = 16;
+ else if(capacity <= BLOCK_SIZE_1G)
+ heads = 32;
+ else if(capacity <= BLOCK_SIZE_21G)
heads = 64;
- sectors = 32;
- temp_cyl = capacity / (heads * sectors);
- if (temp_cyl > 1024)
- {
- heads = 255;
- sectors = 63;
- }
- cylinders = capacity / (heads * sectors);
- }
- *cyls = (unsigned int) cylinders; /* Stuff return values */
- *secs = (unsigned int) sectors;
- *hds = (unsigned int) heads;
-}
+ else if(capacity <= BLOCK_SIZE_42G)
+ heads = 128;
+ else
+ heads = 255;
+
+ cylinders = capacity / (heads * sectors);
+
+ *cyls = (unsigned short) cylinders; /* Stuff return values */
+ *secs = (unsigned char) sectors;
+ *hds = (unsigned char) heads;
+}
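The boundary checks in the new i2o_block_biosparam() reduce to a small size-to-heads lookup with a fixed 63 sectors per track. A minimal standalone sketch of that mapping (names and thresholds mirror the BLOCK_SIZE_* defines above; this is an illustration, not the driver code):

```c
#include <assert.h>

/* Capacity is in 512-byte sectors; thresholds mirror BLOCK_SIZE_*. */
#define SZ_528M 1081344UL
#define SZ_1G   2097152UL
#define SZ_21G  4403200UL
#define SZ_42G  8806400UL

static void chs_from_capacity(unsigned long capacity,
                              unsigned long *cyls,
                              unsigned long *heads,
                              unsigned long *secs)
{
	*secs = 63;	/* sectors per track is always 63 */

	if (capacity <= SZ_528M)
		*heads = 16;
	else if (capacity <= SZ_1G)
		*heads = 32;
	else if (capacity <= SZ_21G)
		*heads = 64;
	else if (capacity <= SZ_42G)
		*heads = 128;
	else
		*heads = 255;

	/* Cylinders are whatever is left over */
	*cyls = capacity / (*heads * *secs);
}
```

For a 1000000-sector disk this yields 16 heads and 992 cylinders, matching the first row of the table.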
+
/*
* Rescan the partition tables
int i;
minor&=0xF0;
-
+
i2ob_dev[minor].refcnt++;
if(i2ob_dev[minor].refcnt>maxu+1)
{
if (minor >= (MAX_I2OB<<4))
return -ENODEV;
dev = &i2ob_dev[(minor&0xF0)];
+
+ /*
+	 * This is to deal with the case of an application
+	 * opening a device and then the device disappears while
+ * it's in use, and then the application tries to release
+ * it. ex: Unmounting a deleted RAID volume at reboot.
+ * If we send messages, it will just cause FAILs since
+ * the TID no longer exists.
+ */
+ if(!dev->i2odev)
+ return 0;
+
+ /* Sync the device so we don't get errors */
+ fsync_dev(inode->i_rdev);
+
if (dev->refcnt <= 0)
printk(KERN_ALERT "i2ob_release: refcount(%d) <= 0\n", dev->refcnt);
dev->refcnt--;
msg[3] = (u32)query_done;
msg[4] = 60<<16;
i2o_post_wait(dev->controller, msg, 20, 2);
+
/*
* Unlock the media
*/
/*
* Now unclaim the device.
*/
- if (i2o_release_device(dev->i2odev, &i2o_block_handler, I2O_CLAIM_PRIMARY)<0)
+ if (i2o_release_device(dev->i2odev, &i2o_block_handler))
printk(KERN_ERR "i2ob_release: controller rejected unclaim.\n");
}
if (minor >= MAX_I2OB<<4)
return -ENODEV;
dev=&i2ob_dev[(minor&0xF0)];
- if(dev->i2odev == NULL)
+
+ if(!dev->i2odev)
return -ENODEV;
-
+
if(dev->refcnt++==0)
{
u32 msg[6];
- int *query_done;
-
- if(i2o_claim_device(dev->i2odev, &i2o_block_handler, I2O_CLAIM_PRIMARY)<0)
+ if(i2o_claim_device(dev->i2odev, &i2o_block_handler))
{
dev->refcnt--;
+ printk(KERN_INFO "I2O Block: Could not open device\n");
return -EBUSY;
}
- query_done = &dev->done_flag;
/*
* Mount the media if needed. Note that we don't use
* the lock bit. Since we have to issue a lock if it
*/
msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1] = I2O_CMD_BLOCK_MMOUNT<<24|HOST_TID<<12|dev->tid;
- msg[2] = i2ob_context|0x40000000;
- msg[3] = (u32)query_done;
msg[4] = -1;
msg[5] = 0;
i2o_post_wait(dev->controller, msg, 24, 2);
+
/*
* Lock the media
*/
msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1] = I2O_CMD_BLOCK_MLOCK<<24|HOST_TID<<12|dev->tid;
- msg[2] = i2ob_context|0x40000000;
- msg[3] = (u32)query_done;
msg[4] = -1;
i2o_post_wait(dev->controller, msg, 20, 2);
}
struct i2ob_device *dev=&i2ob_dev[unit];
int i;
+ /*
+ * For logging purposes...
+ */
+ printk(KERN_INFO "i2ob: Installing tid %d device at unit %d\n",
+ d->lct_data.tid, unit);
+
/*
* Ask for the current media data. If that isn't supported
* then we ask for the device capacity data
*/
-
if(i2ob_query_device(dev, 0x0004, 1, &blocksize, 4) != 0
|| i2ob_query_device(dev, 0x0004, 0, &size, 8) !=0 )
{
i2ob_query_device(dev, 0x0000, 6, &status, 4);
i2ob_sizes[unit] = (int)(size>>10);
i2ob_hardsizes[unit] = blocksize;
+ i2ob_gendisk.part[unit].nr_sects = size>>9;
+ i2ob[unit].nr_sects = (int)(size>>9);
- limit=4096; /* 8 deep scatter gather */
+ /* Set limit based on inbound frame size */
+ limit = (d->controller->status_block->inbound_frame_size - 8)/2;
+ limit = limit<<9;
- printk("Byte limit is %d.\n", limit);
+ /*
+ * Max number of Scatter-Gather Elements
+ */
+ i2ob_dev[unit].max_segments =
+ (d->controller->status_block->inbound_frame_size - 8)/2;
+
+ printk(KERN_INFO "Max Segments set to %d\n",
+ i2ob_dev[unit].max_segments);
+ printk(KERN_INFO "Byte limit is %d.\n", limit);
for(i=unit;i<=unit+15;i++)
- i2ob_max_sectors[i]=(limit>>9);
-
- i2ob[unit].nr_sects = (int)(size>>9);
+ {
+ i2ob_max_sectors[i]=MAX_SECTORS;
+ i2ob_dev[i].max_segments =
+ (d->controller->status_block->inbound_frame_size - 8)/2;
+ }
i2ob_query_device(dev, 0x0000, 0, &type, 1);
sprintf(d->dev_name, "%s%c", i2ob_gendisk.major_name, 'a' + (unit>>4));
- printk("%s: ", d->dev_name);
- if(status&(1<<10))
- printk("RAID ");
+ printk(KERN_INFO "%s: ", d->dev_name);
switch(type)
{
case 0: printk("Disk Storage");break;
default:
printk("Type %d", type);
}
+ if(status&(1<<10))
+ printk("(RAID)");
if(((flags & (1<<3)) && !(status & (1<<3))) ||
((flags & (1<<4)) && !(status & (1<<4))))
{
- printk(" Not loaded.\n");
- return 0;
+ printk(KERN_INFO " Not loaded.\n");
+ return 1;
}
- printk(" %dMb, %d byte sectors",
+ printk("- %dMb, %d byte sectors",
(int)(size>>20), blocksize);
if(status&(1<<0))
{
printk(", %dKb cache", cachesize);
}
printk(".\n");
- printk("%s: Maximum sectors/read set to %d.\n",
+ printk(KERN_INFO "%s: Maximum sectors/read set to %d.\n",
d->dev_name, i2ob_max_sectors[unit]);
+
+ /*
+ * If this is the first I2O block device found on this IOP,
+ * we need to initialize all the queue data structures
+ * before any I/O can be performed. If it fails, this
+ * device is useless.
+ */
+ if(!i2ob_queues[c->unit]) {
+ if(i2ob_init_iop(c->unit))
+ return 1;
+ }
+
+ /*
+ * This will save one level of lookup/indirection in critical
+ * code so that we can directly get the queue ptr from the
+ * device instead of having to go the IOP data structure.
+ */
+ dev->req_queue = &i2ob_queues[c->unit]->req_queue;
+
grok_partitions(&i2ob_gendisk, unit>>4, 1<<4, (long)(size>>9));
+
+ /*
+ * Register for the events we're interested in and that the
+ * device actually supports.
+ */
+ i2o_event_register(c, d->lct_data.tid, i2ob_context, unit,
+ (I2OB_EVENT_MASK & d->lct_data.event_capabilities));
+
+ return 0;
+}
+
+/*
+ * Initialize IOP specific queue structures. This is called
+ * once for each IOP that has a block device sitting behind it.
+ */
+static int i2ob_init_iop(unsigned int unit)
+{
+ int i;
+
+ i2ob_queues[unit] = (struct i2ob_iop_queue*)
+ kmalloc(sizeof(struct i2ob_iop_queue), GFP_ATOMIC);
+ if(!i2ob_queues[unit])
+ {
+ printk(KERN_WARNING
+ "Could not allocate request queue for I2O block device!\n");
+ return -1;
+ }
+
+ for(i = 0; i< MAX_I2OB_DEPTH; i++)
+ {
+ i2ob_queues[unit]->request_queue[i].next =
+ &i2ob_queues[unit]->request_queue[i+1];
+ i2ob_queues[unit]->request_queue[i].num = i;
+ }
+
+ /* Queue is MAX_I2OB + 1... */
+ i2ob_queues[unit]->request_queue[i].next = NULL;
+ i2ob_queues[unit]->i2ob_qhead = &i2ob_queues[unit]->request_queue[0];
+ atomic_set(&i2ob_queues[unit]->queue_depth, 0);
+
+ blk_init_queue(&i2ob_queues[unit]->req_queue, i2ob_request);
+ blk_queue_headactive(&i2ob_queues[unit]->req_queue, 0);
+ i2ob_queues[unit]->req_queue.back_merge_fn = i2ob_back_merge;
+ i2ob_queues[unit]->req_queue.front_merge_fn = i2ob_front_merge;
+ i2ob_queues[unit]->req_queue.merge_requests_fn = i2ob_merge_requests;
+ i2ob_queues[unit]->req_queue.queuedata = &i2ob_queues[unit];
+
return 0;
}
+/*
+ * Get the request queue for the given device.
+ */
+static request_queue_t* i2ob_get_queue(kdev_t dev)
+{
+ int unit = MINOR(dev)&0xF0;
+
+ return i2ob_dev[unit].req_queue;
+}
+
+/*
+ * Probe the I2O subsystem for block class devices
+ */
static void i2ob_probe(void)
{
int i;
for(d=c->devices;d!=NULL;d=d->next)
{
- if(d->lct_data->class_id!=I2O_CLASS_RANDOM_BLOCK_STORAGE)
+ if(d->lct_data.class_id!=I2O_CLASS_RANDOM_BLOCK_STORAGE)
continue;
- if(d->lct_data->user_tid != 0xFFF)
+ if(d->lct_data.user_tid != 0xFFF)
continue;
+ if(i2o_claim_device(d, &i2o_block_handler))
+ {
+ printk(KERN_WARNING "i2o_block: Controller %d, TID %d\n", c->unit,
+ d->lct_data.tid);
+ printk(KERN_WARNING "\tDevice refused claim! Skipping installation\n");
+ continue;
+ }
+
if(unit<MAX_I2OB<<4)
{
/*
struct i2ob_device *dev=&i2ob_dev[unit];
dev->i2odev = d;
dev->controller = c;
- dev->tid = d->lct_data->tid;
-
- /*
- * Insure the device can be claimed
- * before installing it.
- */
- if(i2o_claim_device(dev->i2odev, &i2o_block_handler, I2O_CLAIM_PRIMARY )==0)
+ dev->unit = c->unit;
+ dev->tid = d->lct_data.tid;
+
+ if(i2ob_install_device(c,d,unit))
+ printk(KERN_WARNING "Could not install I2O block device\n");
+ else
{
- printk(KERN_INFO "Claimed Dev %p Tid %d Unit %d\n",dev,dev->tid,unit);
- i2ob_install_device(c,d,unit);
- unit+=16;
-
- /*
- * Now that the device has been
- * installed, unclaim it so that
- * it can be claimed by either
- * the block or scsi driver.
- */
- if(i2o_release_device(dev->i2odev, &i2o_block_handler, I2O_CLAIM_PRIMARY))
- printk(KERN_INFO "Could not unclaim Dev %p Tid %d\n",dev,dev->tid);
+ unit+=16;
+ i2ob_dev_count++;
+
+ /* We want to know when device goes away */
+ i2o_device_notify_on(d, &i2o_block_handler);
}
- else
- printk(KERN_INFO "TID %d not claimed\n",dev->tid);
}
else
{
if(!warned++)
- printk("i2o_block: too many device, registering only %d.\n", unit>>4);
+			printk(KERN_WARNING "i2o_block: too many devices, registering only %d.\n", unit>>4);
}
+ i2o_release_device(d, &i2o_block_handler);
}
i2o_unlock_controller(c);
}
- i2ob_devices = unit;
}
/*
- * Have we seen a media change ?
+ * New device notification handler. Called whenever a new
+ * I2O block storage device is added to the system.
+ *
+ * Should we spin lock around this to keep multiple devs from
+ * getting updated at the same time?
+ *
*/
+void i2ob_new_device(struct i2o_controller *c, struct i2o_device *d)
+{
+ struct i2ob_device *dev;
+ int unit = 0;
+
+ printk(KERN_INFO "i2o_block: New device detected\n");
+ printk(KERN_INFO " Controller %d Tid %d\n",c->unit, d->lct_data.tid);
+
+ /* Check for available space */
+ if(i2ob_dev_count>=MAX_I2OB<<4)
+ {
+ printk(KERN_ERR "i2o_block: No more devices allowed!\n");
+ return;
+ }
+ for(unit = 0; unit < (MAX_I2OB<<4); unit += 16)
+ {
+ if(!i2ob_dev[unit].i2odev)
+ break;
+ }
+
+ /*
+ * Creating a RAID 5 volume takes a little while and the UTIL_CLAIM
+ * will fail if we don't give the card enough time to do its magic,
+ * so we just sleep for a little while and let it do its thing.
+ */
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(10*HZ);
+
+ if(i2o_claim_device(d, &i2o_block_handler))
+ {
+ printk(KERN_INFO
+ "i2o_block: Unable to claim device. Installation aborted\n");
+ return;
+ }
+
+ dev = &i2ob_dev[unit];
+ dev->i2odev = d;
+ dev->controller = c;
+ dev->tid = d->lct_data.tid;
+
+ if(i2ob_install_device(c,d,unit))
+ printk(KERN_ERR "i2o_block: Could not install new device\n");
+ else
+ {
+ i2ob_dev_count++;
+ i2o_device_notify_on(d, &i2o_block_handler);
+ }
+
+ i2o_release_device(d, &i2o_block_handler);
+ return;
+}
+
+/*
+ * Deleted device notification handler. Called when a device we
+ * are talking to has been deleted by the user or some other
+ * mysterious force outside the kernel.
+ */
+void i2ob_del_device(struct i2o_controller *c, struct i2o_device *d)
+{
+ int unit = 0;
+ int i = 0;
+ int flags;
+
+ spin_lock_irqsave(&io_request_lock, flags);
+
+ /*
+	 * Need to do this...we sometimes get two events from the IRTOS
+ * in a row and that causes lots of problems.
+ */
+ i2o_device_notify_off(d, &i2o_block_handler);
+
+ printk(KERN_INFO "I2O Block Device Deleted\n");
+
+ for(unit = 0; unit < MAX_I2OB<<4; unit += 16)
+ {
+ if(i2ob_dev[unit].i2odev == d)
+ {
+ printk(KERN_INFO " /dev/%s: Controller %d Tid %d\n",
+ d->dev_name, c->unit, d->lct_data.tid);
+ break;
+ }
+ }
+ if(unit >= MAX_I2OB<<4)
+ {
+ printk(KERN_ERR "i2ob_del_device called, but not in dev table!\n");
+ return;
+ }
+
+ /*
+ * This will force errors when i2ob_get_queue() is called
+	 * by the kernel.
+ */
+ i2ob_dev[unit].req_queue = NULL;
+ for(i = unit; i <= unit+15; i++)
+ {
+ i2ob_dev[i].i2odev = NULL;
+ i2ob_sizes[i] = 0;
+ i2ob_hardsizes[i] = 0;
+ i2ob_max_sectors[i] = 0;
+ i2ob[i].nr_sects = 0;
+ i2ob_gendisk.part[i].nr_sects = 0;
+ }
+ spin_unlock_irqrestore(&io_request_lock, flags);
+
+ /*
+ * Sync the device...this will force all outstanding I/Os
+ * to attempt to complete, thus causing error messages.
+	 * We have to do this as the user could immediately create
+ * a new volume that gets assigned the same minor number.
+ * If there are still outstanding writes to the device,
+ * that could cause data corruption on the new volume!
+ *
+ * The truth is that deleting a volume that you are currently
+ * accessing will do _bad things_ to your system. This
+	 * handler will keep it from crashing, but most probably
+	 * you'll have to reboot to get the system running
+ * properly. Deleting disks you are using is dumb.
+ * Umount them first and all will be good!
+ *
+ * It's not this driver's job to protect the system from
+ * dumb user mistakes :)
+ */
+ if(i2ob_dev[unit].refcnt)
+ fsync_dev(MKDEV(MAJOR_NR,unit));
+
+ /*
+ * Decrease usage count for module
+ */
+ while(i2ob_dev[unit].refcnt--)
+ MOD_DEC_USE_COUNT;
+
+ i2ob_dev[unit].refcnt = 0;
+
+	i2ob_dev[unit].tid = 0;
+
+ /*
+ * Do we need this?
+ * The media didn't really change...the device is just gone
+ */
+ i2ob_media_change_flag[unit] = 1;
+
+ i2ob_dev_count--;
+
+ return;
+}
+
+/*
+ * Have we seen a media change ?
+ */
static int i2ob_media_change(kdev_t dev)
{
int i=MINOR(dev);
return do_i2ob_revalidate(dev, 0);
}
-static int i2ob_reboot_event(struct notifier_block *n, unsigned long code, void *p)
+/*
+ * Reboot notifier. This is called by i2o_core when the system
+ * shuts down.
+ */
+static void i2ob_reboot_event(void)
{
int i;
- if(code != SYS_RESTART && code != SYS_HALT && code != SYS_POWER_OFF)
- return NOTIFY_DONE;
for(i=0;i<MAX_I2OB;i++)
{
struct i2ob_device *dev=&i2ob_dev[(i<<4)];
if(dev->refcnt!=0)
{
/*
- * Flush the onboard cache on power down
- * also unlock the media
+ * Flush the onboard cache
*/
u32 msg[5];
int *query_done = &dev->done_flag;
msg[3] = (u32)query_done;
msg[4] = 60<<16;
i2o_post_wait(dev->controller, msg, 20, 2);
+
/*
* Unlock the media
*/
i2o_post_wait(dev->controller, msg, 20, 2);
}
}
- return NOTIFY_DONE;
}
-struct notifier_block i2ob_reboot_notifier =
-{
- i2ob_reboot_event,
- NULL,
- 0
-};
-
static struct block_device_operations i2ob_fops =
{
- open: i2ob_open,
- release: i2ob_release,
- ioctl: i2ob_ioctl,
- check_media_change: i2ob_media_change,
- revalidate: i2ob_revalidate,
+ open: i2ob_open,
+ release: i2ob_release,
+ ioctl: i2ob_ioctl,
+ check_media_change: i2ob_media_change,
+ revalidate: i2ob_revalidate,
};
-
+
+
static struct gendisk i2ob_gendisk =
{
MAJOR_NR,
NULL
};
+
/*
* And here should be modules and kernel interface
* (Just smiley confuses emacs :-)
{
int i;
- printk(KERN_INFO "I2O Block Storage OSM v0.07. (C) 1999 Red Hat Software.\n");
+ printk(KERN_INFO "I2O Block Storage OSM v0.9\n");
+ printk(KERN_INFO " (c) Copyright 1999, 2000 Red Hat Software.\n");
/*
* Register the block device interfaces
*/
if (register_blkdev(MAJOR_NR, "i2o_block", &i2ob_fops)) {
- printk("Unable to get major number %d for i2o_block\n",
+ printk(KERN_ERR "Unable to get major number %d for i2o_block\n",
MAJOR_NR);
return -EIO;
}
#ifdef MODULE
- printk("i2o_block: registered device at major %d\n", MAJOR_NR);
+ printk(KERN_INFO "i2o_block: registered device at major %d\n", MAJOR_NR);
#endif
/*
hardsect_size[MAJOR_NR] = i2ob_hardsizes;
blk_size[MAJOR_NR] = i2ob_sizes;
max_sectors[MAJOR_NR] = i2ob_max_sectors;
+ blk_dev[MAJOR_NR].queue = i2ob_get_queue;
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), i2ob_request);
blk_queue_headactive(BLK_DEFAULT_QUEUE(MAJOR_NR), 0);
/*
* Set up the queue
*/
-
- for(i = 0; i< MAX_I2OB_DEPTH; i++)
+ for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
{
- i2ob_queue[i].next = &i2ob_queue[i+1];
- i2ob_queue[i].num = i;
+ i2ob_queues[i] = NULL;
}
-
- /* Queue is MAX_I2OB + 1... */
- i2ob_queue[i].next = NULL;
- i2ob_qhead = &i2ob_queue[0];
-
+
/*
* Timers
*/
i2ob_context = i2o_block_handler.context;
/*
- * Finally see what is actually plugged in to our controllers
+ * Initialize event handling thread
*/
+ init_MUTEX_LOCKED(&i2ob_evt_sem);
+ evt_pid = kernel_thread(i2ob_evt, NULL, CLONE_SIGHAND);
+ if(evt_pid < 0)
+ {
+ printk(KERN_ERR
+ "i2o_block: Could not initialize event thread. Aborting\n");
+ i2o_remove_handler(&i2o_block_handler);
+ return 0;
+ }
+ /*
+ * Finally see what is actually plugged in to our controllers
+ */
for (i = 0; i < MAX_I2OB; i++)
register_disk(&i2ob_gendisk, MKDEV(MAJOR_NR,i<<4), 1<<4,
&i2ob_fops, 0);
i2ob_probe();
- register_reboot_notifier(&i2ob_reboot_notifier);
return 0;
}
void cleanup_module(void)
{
struct gendisk **gdp;
+ int i;
- unregister_reboot_notifier(&i2ob_reboot_notifier);
+ /*
+ * Unregister for updates from any devices..otherwise we still
+ * get them and the core jumps to random memory :O
+ */
+ if(i2ob_dev_count) {
+ struct i2o_device *d;
+ for(i = 0; i < MAX_I2OB; i++)
+ if((d=i2ob_dev[i<<4].i2odev)) {
+ i2o_device_notify_off(d, &i2o_block_handler);
+ i2o_event_register(d->controller, d->lct_data.tid,
+ i2ob_context, i<<4, 0);
+ }
+ }
/*
* Flush the OSM
if (unregister_blkdev(MAJOR_NR, "i2o_block") != 0)
printk("i2o_block: cleanup_module failed\n");
+ if(evt_running) {
+ i = kill_proc(evt_pid, SIGTERM, 1);
+ if(!i) {
+ int count = 5 * 100;
+ while(evt_running && --count) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(1);
+ }
+
+ if(!count)
+ printk(KERN_ERR "Giving up on i2oblock thread...\n");
+ }
+ }
+
+
/*
* Why isnt register/unregister gendisk in the kernel ???
*/
struct i2o_handler cfg_handler=
{
i2o_cfg_reply,
+ NULL,
+ NULL,
+ NULL,
"Configuration",
0,
0xffffffff // All classes
return -ENOMEM;
}
- len = i2o_issue_params(i2o_cmd, c, kcmd.tid,
- ops, kcmd.oplen, res, 65536);
- i2o_unlock_controller(c);
+ len = i2o_issue_params(i2o_cmd, c, kcmd.tid,
+ ops, kcmd.oplen, res, 65536);
+ i2o_unlock_controller(c);
kfree(ops);
if (len < 0) {
kfree(res);
- return len; /* -DetailedStatus */
+ return -EAGAIN;
}
put_user(len, kcmd.reslen);
/* Device exists? */
for(d = iop->devices; d; d = d->next)
- if(d->lct_data->tid == kdesc.tid)
+ if(d->lct_data.tid == kdesc.tid)
break;
if(!d)
#endif
{
printk(KERN_INFO "I2O configuration manager v 0.04.\n");
- printk(KERN_INFO " (C) Copyright 1999 Red Hat Software");
+ printk(KERN_INFO " (C) Copyright 1999 Red Hat Software\n");
if((page_buf = kmalloc(4096, GFP_KERNEL))==NULL)
{
 * Core I2O structure management
*
* (C) Copyright 1999 Red Hat Software
- *
+ *
* Written by Alan Cox, Building Number Three Ltd
*
* This program is free software; you can redistribute it and/or
*
* A lot of the I2O message side code from this is taken from the
* Red Creek RCPCI45 adapter driver by Red Creek Communications
- *
+ *
* Fixes by:
* Philipp Rumpf
* Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
* Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
* Deepak Saxena <deepak@plexity.net>
+ *
*/
#include <linux/config.h>
#include <linux/init.h>
#include <linux/malloc.h>
#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
#include <linux/bitops.h>
#include <linux/wait.h>
+#include <linux/delay.h>
#include <linux/timer.h>
+#include <linux/tqueue.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <asm/semaphore.h>
#include <asm/io.h>
+#include <linux/reboot.h>
#include "i2o_lan.h"
// #define DRIVERDEBUG
-// #define DEBUG_IRQ
+#ifdef DRIVERDEBUG
+#define dprintk(x) printk x
+#else
#define dprintk(x)
+#endif
-/*
- * Size of the I2O module table
- */
-
-static struct i2o_handler *i2o_handlers[MAX_I2O_MODULES];
-static struct i2o_controller *i2o_controllers[MAX_I2O_CONTROLLERS];
-struct i2o_controller *i2o_controller_chain;
+/* OSM table */
+static struct i2o_handler *i2o_handlers[MAX_I2O_MODULES] = {NULL};
+
+/* Controller list */
+static struct i2o_controller *i2o_controllers[MAX_I2O_CONTROLLERS] = {NULL};
+struct i2o_controller *i2o_controller_chain = NULL;
int i2o_num_controllers = 0;
+
+/* Initiator Context for Core message */
static int core_context = 0;
-static int i2o_activate_controller(struct i2o_controller *iop);
-static int i2o_online_controller(struct i2o_controller *c);
-static int i2o_init_outbound_q(struct i2o_controller *c);
+/* Initialization && shutdown functions */
+static void i2o_sys_init(void);
+static void i2o_sys_shutdown(void);
+static int i2o_clear_controller(struct i2o_controller *);
+static int i2o_reboot_event(struct notifier_block *, unsigned long , void *);
+static int i2o_online_controller(struct i2o_controller *);
+static int i2o_init_outbound_q(struct i2o_controller *);
+static int i2o_post_outbound_messages(struct i2o_controller *);
+static int i2o_issue_claim(struct i2o_controller *, int, int, int, u32);
+
+/* Reply handler */
static void i2o_core_reply(struct i2o_handler *, struct i2o_controller *,
struct i2o_message *);
-static int i2o_add_management_user(struct i2o_device *, struct i2o_handler *);
-static int i2o_remove_management_user(struct i2o_device *, struct i2o_handler *);
-void i2o_dump_message(u32 *msg);
-
-static int i2o_issue_claim(struct i2o_controller *, int, int, int, u32);
-static int i2o_reset_controller(struct i2o_controller *);
+/* Various helper functions */
static int i2o_lct_get(struct i2o_controller *);
+static int i2o_lct_notify(struct i2o_controller *);
static int i2o_hrt_get(struct i2o_controller *);
-static void i2o_sys_init(void);
-static void i2o_sys_shutdown(void);
-
static int i2o_build_sys_table(void);
static int i2o_systab_send(struct i2o_controller *c);
+/* I2O core event handler */
+static int i2o_core_evt(void *);
+static int evt_pid;
+static int evt_running;
+
+/* Dynamic LCT update handler */
+static int i2o_dyn_lct(void *);
+
+void i2o_report_controller_unit(struct i2o_controller *, struct i2o_device *);
+
/*
* I2O System Table. Contains information about
* all the IOPs in the system. Used to inform IOPs
static spinlock_t post_wait_lock = SPIN_LOCK_UNLOCKED;
static void i2o_post_wait_complete(u32, int);
-/* Message handler */
+/* OSM descriptor handler */
static struct i2o_handler i2o_core_handler =
{
(void *)i2o_core_reply,
+ NULL,
+ NULL,
+ NULL,
"I2O core layer",
+ 0,
0
};
/*
- * I2O configuration spinlock. This isnt a big deal for contention
- * so we have one only
+ * Used when queuing a reply to be handled later
+ */
+struct reply_info
+{
+ struct i2o_controller *iop;
+ u32 msg[MSG_FRAME_SIZE];
+};
+static struct reply_info evt_reply;
+static struct reply_info events[I2O_EVT_Q_LEN];
+static int evt_in = 0;
+static int evt_out = 0;
+static int evt_q_len = 0;
+#define MODINC(x,y) ((x) = ((x) + 1) % (y))
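The evt_in/evt_out/evt_q_len bookkeeping above implements a drop-oldest ring buffer: the producer index always advances, and once the queue is full the consumer index is pushed forward so the oldest entry is overwritten. A tiny self-contained model of that logic (names here are hypothetical, not the driver's):

```c
#include <assert.h>

#define QLEN 4	/* illustrative queue depth; the driver uses I2O_EVT_Q_LEN */

struct ring {
	int in;		/* producer index */
	int out;	/* consumer index */
	int len;	/* current queue depth */
};

static void ring_push(struct ring *r)
{
	r->in = (r->in + 1) % QLEN;		/* advance producer index */
	if (r->len == QLEN)
		r->out = (r->out + 1) % QLEN;	/* queue full: drop oldest */
	else
		r->len++;
}
```

Pushing five entries into a four-slot ring leaves the depth pinned at four with the consumer index bumped past the dropped entry, which is the behavior the reply handler relies on.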
+
+/*
+ * I2O configuration spinlock. This isn't a big deal for contention,
+ * so we have only one.
*/
-
static spinlock_t i2o_configuration_lock = SPIN_LOCK_UNLOCKED;
+/*
+ * Event spinlock. Used to keep event queue sane and from
+ * handling multiple events simultaneously.
+ */
+static spinlock_t i2o_evt_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Semaphore used to synchronize the event handling thread with
+ * interrupt handler.
+ */
+DECLARE_MUTEX(evt_sem);
+DECLARE_WAIT_QUEUE_HEAD(evt_wait);
+
+static struct notifier_block i2o_reboot_notifier =
+{
+ i2o_reboot_event,
+ NULL,
+ 0
+};
+
+
/*
* I2O Core reply handler
- *
- * Only messages this should see are i2o_post_wait() replies
*/
void i2o_core_reply(struct i2o_handler *h, struct i2o_controller *c,
struct i2o_message *m)
{
u32 *msg=(u32 *)m;
- int status;
+ u32 status;
u32 context = msg[2];
#if 0
i2o_report_status(KERN_INFO, "i2o_core", msg);
#endif
-
+
if (msg[0] & (1<<13)) // Fail bit is set
- {
- printk(KERN_ERR "%s: Failed to process the msg:\n",c->name);
- printk(KERN_ERR " Cmd = 0x%02X, InitiatorTid = %d, TargetTid =%d\n",
- (msg[1] >> 24) & 0xFF, (msg[1] >> 12) & 0xFFF, msg[1] &
- 0xFFF);
- printk(KERN_ERR " FailureCode = 0x%02X\n Severity = 0x%02X\n"
- "LowestVersion = 0x%02X\n HighestVersion = 0x%02X\n",
- msg[4] >> 24, (msg[4] >> 16) & 0xFF,
- (msg[4] >> 8) & 0xFF, msg[4] & 0xFF);
- printk(KERN_ERR " FailingHostUnit = 0x%04X\n FailingIOP = 0x%03X\n",
- msg[5] >> 16, msg[5] & 0xFFF);
- return;
- }
+ {
+ printk(KERN_ERR "%s: Failed to process the msg:\n",c->name);
+	printk(KERN_ERR "  Cmd = 0x%02X, InitiatorTid = %d, TargetTid = %d\n",
+ (msg[1] >> 24) & 0xFF, (msg[1] >> 12) & 0xFFF, msg[1] & 0xFFF);
+ printk(KERN_ERR " FailureCode = 0x%02X\n Severity = 0x%02X\n"
+ "LowestVersion = 0x%02X\n HighestVersion = 0x%02X\n",
+ msg[4] >> 24, (msg[4] >> 16) & 0xFF,
+ (msg[4] >> 8) & 0xFF, msg[4] & 0xFF);
+ printk(KERN_ERR " FailingHostUnit = 0x%04X\n FailingIOP = 0x%03X\n",
+ msg[5] >> 16, msg[5] & 0xFFF);
+ return;
+ }
if(msg[2]&0x80000000) // Post wait message
{
if (msg[4] >> 24)
{
- i2o_report_status(KERN_WARNING, "i2o_core: post_wait reply", msg);
+ i2o_report_status(KERN_INFO, "i2o_core: post_wait reply", msg);
status = -(msg[4] & 0xFFFF);
}
else
status = I2O_POST_WAIT_OK;
i2o_post_wait_complete(context, status);
+ return;
+ }
+
+ if(m->function == I2O_CMD_UTIL_EVT_REGISTER)
+ {
+ memcpy(events[evt_in].msg, msg, MSG_FRAME_SIZE);
+ events[evt_in].iop = c;
+
+ spin_lock(&i2o_evt_lock);
+ MODINC(evt_in, I2O_EVT_Q_LEN);
+ if(evt_q_len == I2O_EVT_Q_LEN)
+ MODINC(evt_out, I2O_EVT_Q_LEN);
+ else
+ evt_q_len++;
+ spin_unlock(&i2o_evt_lock);
+
+ up(&evt_sem);
+ wake_up_interruptible(&evt_wait);
+ return;
+ }
+
+ if(m->function == I2O_CMD_LCT_NOTIFY)
+ {
+ up(&c->lct_sem);
+ return;
}
+
+ /*
+ * If this happens, we want to dump the message to the syslog so
+ * it can be sent back to the card manufacturer by the end user
+ * to aid in debugging.
+ *
+ */
+	printk(KERN_WARNING "%s: Unsolicited message reply sent to core! "
+ "Message dumped to syslog\n",
+ c->name);
+ i2o_dump_message(msg);
+
+ return;
}
/*
/*
- * Each I2O controller has a chain of devices on it - these match
- * the useful parts of the LCT of the board.
+ * Each I2O controller has a chain of devices on it.
+ * Each device has a pointer to its LCT entry to be used
+ * for fun purposes.
*/
int i2o_install_device(struct i2o_controller *c, struct i2o_device *d)
int __i2o_delete_device(struct i2o_device *d)
{
struct i2o_device **p;
+ int i;
p=&(d->controller->devices);
/*
* Hey we have a driver!
+ * Check to see if the driver wants us to notify it of
+ * device deletion. If it doesn't we assume that it
+ * is unsafe to delete a device with an owner and
+ * fail.
*/
-
if(d->owner)
- return -EBUSY;
+ {
+ if(d->owner->dev_del_notify)
+ {
+ dprintk((KERN_INFO "Device has owner, notifying\n"));
+ d->owner->dev_del_notify(d->controller, d);
+ if(d->owner)
+ {
+ printk(KERN_WARNING
+ "Driver \"%s\" did not release device!\n", d->owner->name);
+ return -EBUSY;
+ }
+ }
+ else
+ return -EBUSY;
+ }
/*
- * Seek, locate
+ * Tell any other users who are talking to this device
+ * that it's going away. We assume that everything works.
*/
+ for(i=0; i < I2O_MAX_MANAGERS; i++)
+ {
+ if(d->managers[i] && d->managers[i]->dev_del_notify)
+ d->managers[i]->dev_del_notify(d->controller, d);
+ }
while(*p!=NULL)
{
spin_lock(&i2o_configuration_lock);
+ /*
+ * Seek, locate
+ */
+
ret = __i2o_delete_device(d);
spin_unlock(&i2o_configuration_lock);
if(i2o_controllers[i]==NULL)
{
i2o_controllers[i]=c;
+ c->devices = NULL;
c->next=i2o_controller_chain;
i2o_controller_chain=c;
c->unit = i;
-
+ c->page_frame = NULL;
+ c->hrt = NULL;
+ c->lct = NULL;
+ c->dlct = (i2o_lct*)kmalloc(8192, GFP_KERNEL);
+ c->status_block = NULL;
sprintf(c->name, "i2o/iop%d", i);
i2o_num_controllers++;
+ init_MUTEX_LOCKED(&c->lct_sem);
spin_unlock(&i2o_configuration_lock);
return 0;
}
struct i2o_controller **p;
int users;
char name[16];
+ int stat;
+
+ dprintk((KERN_INFO "Deleting controller iop%d\n", c->unit));
+
+ /*
+ * Clear event registration as this can cause weird behavior
+ */
+ if(c->status_block->iop_state == ADAPTER_STATE_OPERATIONAL)
+ i2o_event_register(c, core_context, 0, 0, 0);
spin_lock(&i2o_configuration_lock);
if((users=atomic_read(&c->users)))
{
- printk(KERN_INFO "%s busy: %d users for controller.\n", c->name, users);
- c->bus_disable(c);
+ dprintk((KERN_INFO "I2O: %d users for controller iop%d\n", users,
+ c->unit));
spin_unlock(&i2o_configuration_lock);
return -EBUSY;
}
if(__i2o_delete_device(c->devices)<0)
{
/* Shouldnt happen */
- c->bus_disable(c);
+ c->bus_disable(c);
spin_unlock(&i2o_configuration_lock);
return -EBUSY;
}
}
+ /*
+ * If this is shutdown time, the thread's already been killed
+ */
+ if(c->lct_running) {
+ stat = kill_proc(c->lct_pid, SIGTERM, 1);
+ if(!stat) {
+ int count = 10 * 100;
+ while(c->lct_running && --count) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(1);
+ }
+
+ if(!count)
+ printk(KERN_ERR
+ "%s: LCT thread still running!\n",
+ c->name);
+ }
+ }
+
p=&i2o_controller_chain;
while(*p)
{
if(*p==c)
{
- /* Ask the IOP to switch into RESET state */
- i2o_reset_controller(c);
+ /* Ask the IOP to switch to HOLD state */
+ if (i2o_clear_controller(c) < 0)
+ printk(KERN_ERR "Unable to clear iop%d\n", c->unit);
/* Release IRQ */
c->destructor(c);
kfree(c->lct);
if(c->status_block)
kfree(c->status_block);
+ if(c->dlct)
+ kfree(c->dlct);
i2o_controllers[c->unit]=NULL;
memcpy(name, c->name, strlen(c->name)+1);
kfree(c);
- i2o_num_controllers--;
-
dprintk((KERN_INFO "%s: Deleted from controller chain.\n", name));
-
+
+ i2o_num_controllers--;
return 0;
}
p=&((*p)->next);
/*
- * Claim a device for use as either the primary user or just
- * as a management/secondary user
+ * Claim a device for use by an OSM
*/
-int i2o_claim_device(struct i2o_device *d, struct i2o_handler *h, u32 type)
+int i2o_claim_device(struct i2o_device *d, struct i2o_handler *h)
{
- /* Device already has a primary user or too many managers */
- if((type == I2O_CLAIM_PRIMARY && d->owner) ||
- (d->num_managers == I2O_MAX_MANAGERS))
- {
- return -EBUSY;
- }
-
- if(i2o_issue_claim(d->controller,d->lct_data->tid, h->context, 1, type))
+ spin_lock(&i2o_configuration_lock);
+ if(d->owner)
{
+		printk(KERN_INFO "issue claim called, but device already has an owner!\n");
+ spin_unlock(&i2o_configuration_lock);
return -EBUSY;
}
- spin_lock(&i2o_configuration_lock);
- if(d->owner)
+ if(i2o_issue_claim(d->controller,d->lct_data.tid, h->context, 1,
+ I2O_CLAIM_PRIMARY))
{
spin_unlock(&i2o_configuration_lock);
return -EBUSY;
}
- atomic_inc(&d->controller->users);
-
- if(type == I2O_CLAIM_PRIMARY)
- d->owner=h;
- else
- if (i2o_add_management_user(d, h))
- printk(KERN_WARNING "i2o: Too many managers for TID %d\n",
- d->lct_data->tid);
-
+ d->owner=h;
spin_unlock(&i2o_configuration_lock);
return 0;
}
-int i2o_release_device(struct i2o_device *d, struct i2o_handler *h, u32 type)
+/*
+ * Release a device that the OS is using
+ */
+int i2o_release_device(struct i2o_device *d, struct i2o_handler *h)
{
int err = 0;
spin_lock(&i2o_configuration_lock);
-
- /* Primary user */
- if(type == I2O_CLAIM_PRIMARY)
+ if(d->owner != h)
{
- if(d->owner != h)
- err = -ENOENT;
- else
- {
- if(i2o_issue_claim(d->controller, d->lct_data->tid, h->context, 0,
- type))
- {
- err = -ENXIO;
- }
- else
- {
- d->owner = NULL;
- atomic_dec(&d->controller->users);
- }
- }
-
spin_unlock(&i2o_configuration_lock);
- return err;
- }
+ return -ENOENT;
+ }
- /* Management or other user */
- if(i2o_remove_management_user(d, h))
- err = -ENOENT;
- else
+ if(i2o_issue_claim(d->controller, d->lct_data.tid, h->context, 0,
+ I2O_CLAIM_PRIMARY))
{
- atomic_dec(&d->controller->users);
-
- if(i2o_issue_claim(d->controller,d->lct_data->tid, h->context, 0,
- type))
- err = -ENXIO;
+ err = -ENXIO;
}
+ d->owner = NULL;
+
spin_unlock(&i2o_configuration_lock);
return err;
}
-int i2o_add_management_user(struct i2o_device *d, struct i2o_handler *h)
+/*
+ * Called by OSMs to let the core know that they want to be
+ * notified if the given device is deleted from the system.
+ */
+int i2o_device_notify_on(struct i2o_device *d, struct i2o_handler *h)
{
int i;
if(d->num_managers == I2O_MAX_MANAGERS)
- return 1;
+ return -ENOSPC;
for(i = 0; i < I2O_MAX_MANAGERS; i++)
+ {
if(!d->managers[i])
+ {
d->managers[i] = h;
+ break;
+ }
+ }
d->num_managers++;
return 0;
}
-int i2o_remove_management_user(struct i2o_device *d, struct i2o_handler *h)
+/*
+ * Called by OSMs to let the core know that they no longer
+ * are interested in the fate of the given device.
+ */
+int i2o_device_notify_off(struct i2o_device *d, struct i2o_handler *h)
{
int i;
if(d->managers[i] == h)
{
d->managers[i] = NULL;
+ d->num_managers--;
return 0;
}
}
return -ENOENT;
}
+/*
+ * Event registration API
+ */
+int i2o_event_register(struct i2o_controller *c, u32 tid,
+ u32 init_context, u32 tr_context, u32 evt_mask)
+{
+ u32 msg[5]; // Not performance critical, so we just
+ // i2o_post_this it instead of building it
+ // in IOP memory
+
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_UTIL_EVT_REGISTER<<24 | HOST_TID<<12 | tid;
+ msg[2] = (u32)init_context;
+ msg[3] = (u32)tr_context;
+ msg[4] = evt_mask;
+
+ return i2o_post_this(c, msg, sizeof(msg));
+}
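+
+/*
+ * For example (a hypothetical sketch, not a caller in this patch), an
+ * OSM that wants every event from a device would do:
+ *
+ *	i2o_event_register(c, d->lct_data.tid, osm_context, 0, 0xFFFFFFFF);
+ *
+ * where osm_context is the initiator context the OSM was assigned when
+ * it registered its i2o_handler.
+ */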
+
+/*
+ * Event ack API
+ *
+ * We just take a pointer to the original UTIL_EVENT_REGISTER reply
+ * message and change the function code, since that is what the spec
+ * says an EventAck message looks like.
+ */
+int i2o_event_ack(struct i2o_controller *c, u32 *msg)
+{
+ struct i2o_message *m = (struct i2o_message *)msg;
+
+ m->function = I2O_CMD_UTIL_EVT_ACK;
+
+ return i2o_post_wait(c, msg, m->size * 4, 2);
+}
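+
+/*
+ * So a handler's reply routine that receives a UTIL_EVT_REGISTER reply
+ * it does not need to inspect could (hypothetically) bounce it straight
+ * back:
+ *
+ *	i2o_event_ack(c, msg);
+ */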
+
+/*
+ * Core event handler. Runs as a separate thread and is woken
+ * up whenever there is an Executive class event.
+ */
+static int i2o_core_evt(void *foo)
+{
+ struct reply_info reply_data;
+ struct reply_info *reply = &reply_data;
+ u32 *msg = reply->msg;
+ struct i2o_controller *c = NULL;
+	unsigned long flags;
+
+ lock_kernel();
+ exit_files(current);
+ daemonize();
+ unlock_kernel();
+
+ strcpy(current->comm, "i2oevtd");
+ evt_running = 1;
+
+ while(1)
+ {
+ down_interruptible(&evt_sem);
+ if(signal_pending(current))
+ {
+ dprintk((KERN_INFO "I2O event thread dead\n"));
+ evt_running = 0;
+ return 0;
+ }
+
+ /*
+ * Copy the data out of the queue so that we don't have to lock
+ * around the whole function and just around the qlen update
+ */
+ spin_lock_irqsave(&i2o_evt_lock, flags);
+ memcpy(reply, &events[evt_out], sizeof(struct reply_info));
+ MODINC(evt_out, I2O_EVT_Q_LEN);
+ evt_q_len--;
+ spin_unlock_irqrestore(&i2o_evt_lock, flags);
+
+ c = reply->iop;
+ dprintk((KERN_INFO "I2O IRTOS EVENT: iop%d, event %#10x\n", c->unit, msg[4]));
+
+ /*
+ * We do not attempt to delete/quiesce/etc. the controller if
+	 * some sort of error indication occurs. We may want to do
+	 * so in the future, but for now we just let the user deal with
+	 * it. One reason for this is that what to do with an error
+	 * or when to send what error is not really agreed on, so
+ * we get errors that may not be fatal but just look like they
+ * are...so let the user deal with it.
+ */
+ switch(msg[4])
+ {
+ case I2O_EVT_IND_EXEC_RESOURCE_LIMITS:
+ printk(KERN_ERR "iop%d: Out of resources\n", c->unit);
+ break;
+
+ case I2O_EVT_IND_EXEC_POWER_FAIL:
+ printk(KERN_ERR "iop%d: Power failure\n", c->unit);
+ break;
+
+ case I2O_EVT_IND_EXEC_HW_FAIL:
+ {
+ char *fail[] =
+ {
+ "Unknown Error",
+ "Power Lost",
+ "Code Violation",
+ "Parity Error",
+ "Code Execution Exception",
+ "Watchdog Timer Expired"
+ };
+
+				if(msg[5] < 6)
+ printk(KERN_ERR "%s: Hardware Failure: %s\n",
+ c->name, fail[msg[5]]);
+ else
+ printk(KERN_ERR "%s: Unknown Hardware Failure\n", c->name);
+
+ break;
+ }
+
+ /*
+ * New device created
+ * - Create a new i2o_device entry
+ * - Inform all interested drivers about this device's existence
+ */
+ case I2O_EVT_IND_EXEC_NEW_LCT_ENTRY:
+ {
+ struct i2o_device *d = (struct i2o_device *)
+ kmalloc(sizeof(struct i2o_device), GFP_KERNEL);
+ int i;
+
+			if(d == NULL)
+			{
+				printk(KERN_CRIT "i2o_core: Out of memory for I2O device data.\n");
+				break;
+			}
+
+ memcpy(&d->lct_data, &msg[5], sizeof(i2o_lct_entry));
+
+ d->next = NULL;
+ d->controller = c;
+ d->flags = 0;
+
+ i2o_report_controller_unit(c, d);
+ i2o_install_device(c,d);
+
+ for(i = 0; i < MAX_I2O_MODULES; i++)
+ {
+ if(i2o_handlers[i] &&
+ i2o_handlers[i]->new_dev_notify &&
+ (i2o_handlers[i]->class&d->lct_data.class_id))
+ i2o_handlers[i]->new_dev_notify(c,d);
+ }
+
+ break;
+ }
+
+ /*
+ * LCT entry for a device has been modified, so update it
+ * internally.
+ */
+ case I2O_EVT_IND_EXEC_MODIFIED_LCT:
+ {
+ struct i2o_device *d;
+ i2o_lct_entry *new_lct = (i2o_lct_entry *)&msg[5];
+
+ for(d = c->devices; d; d = d->next)
+ {
+ if(d->lct_data.tid == new_lct->tid)
+ {
+ memcpy(&d->lct_data, new_lct, sizeof(i2o_lct_entry));
+ break;
+ }
+ }
+ break;
+ }
+
+ case I2O_EVT_IND_CONFIGURATION_FLAG:
+ printk(KERN_WARNING "%s requires user configuration\n", c->name);
+ break;
+
+ case I2O_EVT_IND_GENERAL_WARNING:
+ printk(KERN_WARNING "%s: Warning notification received!"
+ "Check configuration for errors!\n", c->name);
+ break;
+
+ default:
+ printk(KERN_WARNING "%s: Unknown event...check config\n", c->name);
+ break;
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Dynamic LCT update. This compares the LCT with the currently
+ * installed devices to check for device deletions. This is needed
+ * because there is no DELETED_LCT_ENTRY EventIndicator for the
+ * Executive class, so we can't just have the event handler do
+ * this...annoying.
+ *
+ * This is a hole in the spec that will hopefully be fixed someday.
+ */
+static int i2o_dyn_lct(void *foo)
+{
+ struct i2o_controller *c = (struct i2o_controller *)foo;
+ struct i2o_device *d = NULL;
+ struct i2o_device *d1 = NULL;
+ int i = 0;
+ int found = 0;
+ int entries;
+ void *tmp;
+ char name[16];
+
+ lock_kernel();
+ exit_files(current);
+ daemonize();
+ unlock_kernel();
+
+ sprintf(name, "iop%d_lctd", c->unit);
+ strcpy(current->comm, name);
+
+ c->lct_running = 1;
+
+ while(1)
+ {
+ down_interruptible(&c->lct_sem);
+ if(signal_pending(current))
+ {
+ dprintk((KERN_ERR "%s: LCT thread dead\n", c->name));
+ c->lct_running = 0;
+ return 0;
+ }
+
+ entries = c->dlct->table_size;
+ entries -= 3;
+ entries /= 9;
+
+ dprintk((KERN_INFO "I2O: Dynamic LCT Update\n"));
+ dprintk((KERN_INFO "I2O: Dynamic LCT contains %d entries\n", entries));
+
+ if(!entries)
+ {
+ printk(KERN_INFO "iop%d: Empty LCT???\n", c->unit);
+ continue;
+ }
+
+ /*
+ * Loop through all the devices on the IOP looking for their
+		 * LCT data in the LCT. We assume that TIDs are not repeated,
+		 * as that is the only way to really tell. It's been confirmed
+		 * by the IRTOS vendor(s?) that TIDs are not reused until they
+		 * wrap around (4096), and I doubt a system will be up long enough
+ * to create/delete that many devices.
+ */
+ for(d = c->devices; d; )
+ {
+ found = 0;
+ d1 = d->next;
+
+ for(i = 0; i < entries; i++)
+ {
+ if(d->lct_data.tid == c->dlct->lct_entry[i].tid)
+ {
+ found = 1;
+ break;
+ }
+ }
+ if(!found)
+ {
+ dprintk((KERN_INFO "Deleted device!\n"));
+ i2o_delete_device(d);
+ }
+ d = d1;
+ }
+
+ /*
+ * Tell LCT to renotify us next time there is a change
+ */
+ i2o_lct_notify(c);
+
+ /*
+ * Copy new LCT into public LCT
+ *
+ * Possible race if someone is reading LCT while we are copying
+		 * over it. If this happens, we'll fix it then, but I doubt that
+ * the LCT will get updated often enough or will get read by
+ * a user often enough to worry.
+ */
+ if(c->lct->table_size < c->dlct->table_size)
+ {
+ tmp = c->lct;
+ c->lct = kmalloc(c->dlct->table_size<<2, GFP_KERNEL);
+ if(!c->lct)
+ {
+ printk(KERN_ERR "%s: No memory for LCT!\n", c->name);
+ c->lct = tmp;
+ continue;
+ }
+ kfree(tmp);
+ }
+ memcpy(c->lct, c->dlct, c->dlct->table_size<<2);
+ }
+
+ return 0;
+}
+
/*
* This is called by the bus specific driver layer when an interrupt
* or poll of this card interface is desired.
{
struct i2o_message *m;
u32 mv;
+ u32 *msg;
+ int count = 0;
-#ifdef DEBUG_IRQ
- printk(KERN_INFO "%s: interrupt\n", c->name);
-#endif
- /* Sometimes we get here, but a message can't be read. Why? */
+ /*
+ * Old 960 steppings had a bug in the I2O unit that caused
+ * the queue to appear empty when it wasn't.
+ */
if((mv=I2O_REPLY_READ32(c))==0xFFFFFFFF)
mv=I2O_REPLY_READ32(c);
- while (mv!=0xFFFFFFFF)
+ while(mv!=0xFFFFFFFF)
{
struct i2o_handler *i;
m=(struct i2o_message *)bus_to_virt(mv);
+ msg=(u32*)m;
+
+ count++;
+
/*
* Temporary Debugging
*/
if(m->function==0x15)
- printk("UTFR!\n");
-
-#ifdef DEBUG_IRQ
- i2o_dump_message((u32*)m);
-#endif
+ printk(KERN_ERR "%s: UTFR!\n", c->name);
i=i2o_handlers[m->initiator_context&(MAX_I2O_MODULES-1)];
- if(i)
+ if(i && i->reply)
i->reply(i,c,m);
else
{
- printk("i2o: Spurious reply to handler %d\n",
+ printk(KERN_WARNING "I2O: Spurious reply to handler %d\n",
m->initiator_context&(MAX_I2O_MODULES-1));
- i2o_dump_message((u32*)m);
}
i2o_flush_reply(c,mv);
mb();
- mv=I2O_REPLY_READ32(c);
- }
+
+ /* That 960 bug again... */
+ if((mv=I2O_REPLY_READ32(c))==0xFFFFFFFF)
+ mv=I2O_REPLY_READ32(c);
+
+ }
}
{
if((jiffies-time)>=5*HZ)
{
- dprintk((KERN_ERR "%s: Timeout waiting for message frame (%s).\n",
+ dprintk((KERN_ERR "%s: Timeout waiting for message frame to send %s.\n",
c->name, why));
return 0xFFFFFFFF;
}
return m;
}
-
/*
* Dump the information block associated with a given unit (TID)
*/
-void i2o_report_controller_unit(struct i2o_controller *c, int unit)
+void i2o_report_controller_unit(struct i2o_controller *c, struct i2o_device *d)
{
char buf[64];
+ char str[22];
+ int ret;
+ int unit = d->lct_data.tid;
- if(i2o_query_scalar(c, unit, 0xF100, 3, buf, 16)>=0)
+ printk(KERN_INFO "Target ID %d.\n", unit);
+
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 3, buf, 16))>=0)
{
buf[16]=0;
- printk(KERN_INFO " Vendor: %s", buf);
+ printk(KERN_INFO " Vendor: %s\n", buf);
}
- if(i2o_query_scalar(c, unit, 0xF100, 4, buf, 16)>=0)
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 4, buf, 16))>=0)
{
+
buf[16]=0;
- printk(" Device: %s", buf);
+ printk(KERN_INFO " Device: %s\n", buf);
}
#if 0
if(i2o_query_scalar(c, unit, 0xF100, 5, buf, 16)>=0)
{
buf[16]=0;
- printk("Description: %s", buf);
+ printk(KERN_INFO " Description: %s\n", buf);
}
#endif
- if(i2o_query_scalar(c, unit, 0xF100, 6, buf, 8)>=0)
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 6, buf, 8))>=0)
{
buf[8]=0;
- printk(" Rev: %s\n", buf);
+ printk(KERN_INFO " Rev: %s\n", buf);
}
+
+ printk(KERN_INFO " Class: ");
+ sprintf(str, "%-21s", i2o_get_class_name(d->lct_data.class_id));
+ printk("%s\n", str);
+
+ printk(KERN_INFO " Subclass: 0x%04X\n", d->lct_data.sub_class);
+ printk(KERN_INFO " Flags: ");
+
+ if(d->lct_data.device_flags&(1<<0))
+ printk("C"); // ConfigDialog requested
+ if(d->lct_data.device_flags&(1<<1))
+ printk("U"); // Multi-user capable
+ if(!(d->lct_data.device_flags&(1<<4)))
+ printk("P"); // Peer service enabled!
+ if(!(d->lct_data.device_flags&(1<<5)))
+ printk("M"); // Mgmt service enabled!
+ printk("\n");
+
}
static int i2o_parse_hrt(struct i2o_controller *c)
{
#ifdef DRIVERDEBUG
- u32 *rows=(u32 *)c->hrt;
+ u32 *rows=(u32*)c->hrt;
u8 *p=(u8 *)c->hrt;
u8 *d;
int count;
int length;
int i;
int state;
-
- if(p[3]!=0) {
+
+ if(p[3]!=0)
+ {
printk(KERN_ERR "%s: HRT table for controller is too new a version.\n",
c->name);
- return -1;
+ return -1;
}
-
+
count=p[0]|(p[1]<<8);
length = p[2];
printk("\n");
rows+=length;
}
-
#endif
return 0;
}
int i;
int max;
int tid;
- u32 *p;
struct i2o_device *d;
- char str[22];
i2o_lct *lct = c->lct;
if (lct == NULL) {
- printk(KERN_ERR "%s: LCT is empty???\n",c->name);
+ printk(KERN_ERR "%s: LCT is empty???\n", c->name);
return -1;
}
-
- max = lct->table_size;
+
+ max = lct->table_size;
max -= 3;
max /= 9;
-
- printk(KERN_INFO "%s: LCT has %d entries.\n", c->name,max);
+
+ printk(KERN_INFO "%s: LCT has %d entries.\n", c->name, max);
if(lct->iop_flags&(1<<0))
printk(KERN_WARNING "%s: Configuration dialog desired.\n", c->name);
d = (struct i2o_device *)kmalloc(sizeof(struct i2o_device), GFP_KERNEL);
if(d==NULL)
{
- printk(KERN_CRIT "i2o_core: Out of memory for I2O device data.\n");
+		printk(KERN_CRIT "i2o_core: Out of memory for I2O device data.\n");
return -ENOMEM;
}
d->controller = c;
d->next = NULL;
- d->lct_data = &lct->lct_entry[i];
+ memcpy(&d->lct_data, &lct->lct_entry[i], sizeof(i2o_lct_entry));
d->flags = 0;
- tid = d->lct_data->tid;
+ tid = d->lct_data.tid;
- printk(KERN_INFO "Target ID %d.\n", tid);
-
- i2o_report_controller_unit(c, tid);
+ i2o_report_controller_unit(c, d);
i2o_install_device(c, d);
-
- printk(KERN_INFO " Class: ");
-
- sprintf(str, "%-21s", i2o_get_class_name(d->lct_data->class_id));
- printk("%s", str);
-
- printk(" Subclass: 0x%04X Flags: ",
- d->lct_data->sub_class);
-
- if(d->lct_data->device_flags&(1<<0))
- printk("C"); // ConfigDialog requested
- if(d->lct_data->device_flags&(1<<1))
- printk("M"); // Multi-user capable
- if(!(d->lct_data->device_flags&(1<<4)))
- printk("P"); // Peer service enabled!
- if(!(d->lct_data->device_flags&(1<<5)))
- printk("m"); // Mgmt service enabled!
- printk("\n");
- p+=9;
}
return 0;
}
u32 msg[4];
int ret;
+ i2o_status_get(c);
+
/* SysQuiesce discarded if IOP not in READY or OPERATIONAL state */
if ((c->status_block->iop_state != ADAPTER_STATE_READY) &&
- (c->status_block->iop_state != ADAPTER_STATE_OPERATIONAL))
+ (c->status_block->iop_state != ADAPTER_STATE_OPERATIONAL))
{
return 0;
}
- msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1]=I2O_CMD_SYS_QUIESCE<<24|HOST_TID<<12|ADAPTER_TID;
- /* msg[2] filled in i2o_post_wait */
- msg[3]=0;
+ msg[0] = FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_SYS_QUIESCE<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[3] = 0;
/* Long timeout needed for quiesce if lots of devices */
- if ((ret = i2o_post_wait(c, msg, sizeof(msg), 120)))
- printk(KERN_INFO "%s: Unable to quiesce (status=%#10x).\n",
+ if ((ret = i2o_post_wait(c, msg, sizeof(msg), 240)))
+ printk(KERN_INFO "%s: Unable to quiesce (status=%#10x).\n",
c->name, ret);
else
dprintk((KERN_INFO "%s: Quiesced.\n", c->name));
i2o_status_get(c); // Reread the Status Block
- return ret;
+ return ret;
+
}
-/*
+/*
* Enable IOP. Allows the IOP to resume external operations.
*/
int i2o_enable_controller(struct i2o_controller *c)
{
u32 msg[4];
int ret;
+
+ i2o_status_get(c);
+ /* Enable only allowed on READY state */
+ if(c->status_block->iop_state != ADAPTER_STATE_READY)
+ return -EINVAL;
+
msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1]=I2O_CMD_SYS_ENABLE<<24|HOST_TID<<12|ADAPTER_TID;
- /* msg[2] filled in i2o_post_wait */
- /* How long of a timeout do we need? */
+ /* How long of a timeout do we need? */
if ((ret = i2o_post_wait(c, msg, sizeof(msg), 240)))
- printk(KERN_ERR "%s: Could not enable (status=%#10x).\n",
+ printk(KERN_ERR "%s: Could not enable (status=%#10x).\n",
c->name, ret);
else
dprintk((KERN_INFO "%s: Enabled.\n", c->name));
return ret;
}
-/*
- * Clear an IOP to HOLD state, ie. terminate external operations, clear all
+/*
+ * Clear an IOP to HOLD state, ie. terminate external operations, clear all
* input queues and prepare for a system restart. IOP's internal operation
* continues normally and the outbound queue is alive.
* IOP is not expected to rebuild its LCT.
msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1]=I2O_CMD_ADAPTER_CLEAR<<24|HOST_TID<<12|ADAPTER_TID;
- /* msg[2] filled in i2o_post_wait */
msg[3]=0;
if ((ret=i2o_post_wait(c, msg, sizeof(msg), 30)))
- printk(KERN_INFO "%s: Unable to clear (status=%#10x).\n",
+ printk(KERN_INFO "%s: Unable to clear (status=%#10x).\n",
c->name, ret);
else
dprintk((KERN_INFO "%s: Cleared.\n",c->name));
for (iop = i2o_controller_chain; iop; iop = iop->next)
if (iop != c)
- i2o_enable_controller(iop);
+ i2o_enable_controller(iop);
return ret;
}
-/*
- * Reset the IOP into INIT state and wait until IOP gets into RESET state.
- * Terminate all external operations, clear IOP's inbound and outbound
- * queues, terminate all DDMs, and reload the IOP's operating environment
+/*
+ * Reset the IOP into INIT state and wait until IOP gets into RESET state.
+ * Terminate all external operations, clear IOP's inbound and outbound
+ * queues, terminate all DDMs, and reload the IOP's operating environment
* and all local DDMs. IOP rebuilds its LCT.
*/
static int i2o_reset_controller(struct i2o_controller *c)
{
- struct i2o_controller *iop;
+ struct i2o_controller *iop;
u32 m;
u8 *status;
u32 *msg;
for (iop = i2o_controller_chain; iop; iop = iop->next)
i2o_quiesce_controller(iop);
+ /* Get a message */
m=i2o_wait_message(c, "AdapterReset");
if(m==0xFFFFFFFF)
return -ETIMEDOUT;
msg=(u32 *)(c->mem_offset+m);
-
- status = kmalloc(4,GFP_KERNEL);
- if (status==NULL) {
- printk(KERN_ERR "%s: IOP reset failed - no free memory.\n",
- c->name);
+
+ status=(void *)kmalloc(4, GFP_KERNEL);
+ if(status==NULL) {
+ printk(KERN_ERR "IOP reset failed - no free memory.\n");
return -ENOMEM;
}
- memset(status,0,4);
-
+ memset(status, 0, 4);
+
msg[0]=EIGHT_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1]=I2O_CMD_ADAPTER_RESET<<24|HOST_TID<<12|ADAPTER_TID;
msg[2]=core_context;
/* Wait for a reply */
time=jiffies;
- while (status[0]==0)
+ while(status[0]==0)
{
- if((jiffies-time)>=5*HZ)
+ if((jiffies-time)>=20*HZ)
{
- printk(KERN_ERR "%s: IOP reset timeout.\n", c->name);
+ printk(KERN_ERR "IOP reset timeout.\n");
kfree(status);
return -ETIMEDOUT;
}
barrier();
}
- if (status[0]==0x01)
- {
+ if (status[0]==0x01)
+ {
/*
* Once the reset is sent, the IOP goes into the INIT state
* which is indeterminate. We need to wait until the IOP
* has rebooted before we can let the system talk to
* it. We read the inbound Free_List until a message is
- * available. If we can't read one in the given amount of
+		 * available. If we can't read one in the given amount of
* time, we assume the IOP could not reboot properly.
*/
+ dprintk((KERN_INFO "Reset succeeded...waiting for reboot\n"));
+
time = jiffies;
m = I2O_POST_READ32(c);
while(m == 0XFFFFFFFF)
{
if((jiffies-time) >= 30*HZ)
{
- printk(KERN_ERR "%s: Timeout waiting for IOP reset.\n",
+ printk(KERN_ERR "%s: Timeout waiting for IOP reset.\n",
c->name);
				kfree(status);
return -ETIMEDOUT;
}
schedule();
i2o_status_get(c);
if (status[0] == 0x02 || c->status_block->iop_state != ADAPTER_STATE_RESET)
{
- printk(KERN_WARNING "%s: Reset rejected, trying to clear\n",c->name);
+ printk(KERN_WARNING "%s: Reset rejected, trying to clear\n",c->name);
i2o_clear_controller(c);
-
}
/* Enable other IOPs */
for (iop = i2o_controller_chain; iop; iop = iop->next)
if (iop != c)
- i2o_enable_controller(iop);
+ i2o_enable_controller(iop);
kfree(status);
return 0;
}
+/*
+ * Get the status block for the IOP
+ */
int i2o_status_get(struct i2o_controller *c)
{
long time;
u32 *msg;
u8 *status_block;
- if (c->status_block == NULL) {
+ if (c->status_block == NULL)
+ {
c->status_block = (i2o_status_block *)
- kmalloc(sizeof(i2o_status_block),GFP_KERNEL);
+ kmalloc(sizeof(i2o_status_block),GFP_KERNEL);
if (c->status_block == NULL)
{
- printk(KERN_CRIT "%s: Get Status Block failed; Out of memory.\n", c->name);
+ printk(KERN_CRIT "%s: Get Status Block failed; Out of memory.\n",
+ c->name);
return -ENOMEM;
}
}
status_block = (u8*)c->status_block;
memset(c->status_block,0,sizeof(i2o_status_block));
-
+
m=i2o_wait_message(c, "StatusGet");
if(m==0xFFFFFFFF)
return -ETIMEDOUT;
-
+
msg=(u32 *)(c->mem_offset+m);
msg[0]=NINE_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[4]=0;
msg[5]=0;
msg[6]=virt_to_phys(c->status_block);
- msg[7]=0; /* 64bit host FIXME */
+ msg[7]=0; /* 64bit host FIXME */
msg[8]=sizeof(i2o_status_block); /* always 88 bytes */
i2o_post_message(c,m);
/* Wait for a reply */
-
+
time=jiffies;
while(status_block[87]!=0xFF)
{
schedule();
barrier();
}
-
+
/* Ok the reply has arrived. Fill in the important stuff */
- c->inbound_size = c->status_block->inbound_frame_size *4;
+ c->inbound_size = (status_block[12]|(status_block[13]<<8))*4;
#ifdef DRIVERDEBUG
printk(KERN_INFO "%s: State = ", c->name);
switch (c->status_block->iop_state) {
- case 0x01:
- printk("INIT\n");
+ case 0x01:
+ printk("INIT\n");
break;
- case 0x02:
- printk("RESET\n");
+ case 0x02:
+ printk("RESET\n");
break;
- case 0x04:
- printk("HOLD\n");
+ case 0x04:
+ printk("HOLD\n");
break;
- case 0x05:
- printk("READY\n");
+ case 0x05:
+ printk("READY\n");
break;
- case 0x08:
- printk("OPERATIONAL\n");
+ case 0x08:
+ printk("OPERATIONAL\n");
break;
- case 0x10:
- printk("FAILED\n");
+ case 0x10:
+ printk("FAILED\n");
break;
- case 0x11:
- printk("FAULTED\n");
+ case 0x11:
+ printk("FAULTED\n");
break;
- default:
+ default:
printk("%x (unknown !!)\n",c->status_block->iop_state);
- }
-#endif
+}
+#endif
return 0;
}
-
+/*
+ * Get the Hardware Resource Table for the device.
+ * The HRT contains information about possible hidden devices
+ * but is mostly useless to us
+ */
int i2o_hrt_get(struct i2o_controller *c)
{
u32 msg[6];
msg[0]= SIX_WORD_MSG_SIZE| SGL_OFFSET_4;
msg[1]= I2O_CMD_HRT_GET<<24 | HOST_TID<<12 | ADAPTER_TID;
- /* msg[2] filled in i2o_post_wait */
msg[3]= 0;
msg[4]= (0xD0000000 | size); /* Simple transaction */
msg[5]= virt_to_phys(c->hrt); /* Dump it here */
return 0;
}
+/*
+ * Send the I2O System Table to the specified IOP
+ *
+ * The system table contains information about all the IOPs in the
+ * system. It is built and then sent to each IOP so that IOPs can
+ * establish connections with each other.
+ *
+ */
static int i2o_systab_send(struct i2o_controller *iop)
{
- u32 msg[12];
- u32 privmem[2];
- u32 privio[2];
- int ret;
-
- /* See i2o_status_block */
-#if 0
- iop->status->current_mem_base;
- iop->status->current_mem_size;
- iop->status->current_io_base;
- iop->status->current_io_size;
-#endif
+ u32 msg[12];
+ u32 privmem[2];
+ u32 privio[2];
+ int ret;
-/* FIXME */
- privmem[0]=iop->priv_mem; /* Private memory space base address */
- privmem[1]=iop->priv_mem_size;
- privio[0]=iop->priv_io; /* Private I/O address */
- privio[1]=iop->priv_io_size;
+ privmem[0] = iop->status_block->current_mem_base;
+ privmem[1] = iop->status_block->current_mem_size;
+ privio[0] = iop->status_block->current_io_base;
+ privio[1] = iop->status_block->current_io_size;
msg[0] = I2O_MESSAGE_SIZE(12) | SGL_OFFSET_6;
- msg[1] = I2O_CMD_SYS_TAB_SET<<24 | HOST_TID<<12 | ADAPTER_TID;
- /* msg[2] filled in i2o_post_wait */
+ msg[1] = I2O_CMD_SYS_TAB_SET<<24 | HOST_TID<<12 | ADAPTER_TID;
msg[3] = 0;
- msg[4] = (0<<16) | ((iop->unit+2) << 12); /* Host 0 IOP ID (unit + 2) */
- msg[5] = 0; /* Segment 0 */
+ msg[4] = (0<<16) | ((iop->unit+2) << 12); /* Host 0 IOP ID (unit + 2) */
+ msg[5] = 0; /* Segment 0 */
- /*
- * Provide three SGL-elements:
- * System table (SysTab), Private memory space declaration and
- * Private i/o space declaration
- */
- msg[6] = 0x54000000 | sys_tbl_len;
- msg[7] = virt_to_phys(sys_tbl);
- msg[8] = 0x54000000 | 0;
- msg[9] = virt_to_phys(privmem);
- msg[10] = 0xD4000000 | 0;
- msg[11] = virt_to_phys(privio);
+ /*
+ * Provide three SGL-elements:
+ * System table (SysTab), Private memory space declaration and
+ * Private i/o space declaration
+ */
+ msg[6] = 0x54000000 | sys_tbl_len;
+ msg[7] = virt_to_phys(sys_tbl);
+ msg[8] = 0x54000000 | 0;
+ msg[9] = virt_to_phys(privmem);
+ msg[10] = 0xD4000000 | 0;
+ msg[11] = virt_to_phys(privio);
if ((ret=i2o_post_wait(iop, msg, sizeof(msg), 120)))
printk(KERN_INFO "%s: Unable to set SysTab (status=%#10x).\n",
printk(KERN_INFO "Activating I2O controllers\n");
printk(KERN_INFO "This may take a few minutes if there are many devices\n");
-
+
/* In INIT state, Activate IOPs */
-
for (iop = i2o_controller_chain; iop; iop = niop) {
+ dprintk((KERN_INFO "Calling i2o_activate_controller for %s\n",
+ iop->name));
niop = iop->next;
i2o_activate_controller(iop);
}
* If build_sys_table fails, we kill everything and bail
* as we can't init the IOPs w/o a system table
*/
+ dprintk((KERN_INFO "calling i2o_build_sys_table\n"));
if (i2o_build_sys_table() < 0) {
i2o_sys_shutdown();
return;
/* If IOP don't get online, we need to rebuild the System table */
for (iop = i2o_controller_chain; iop; iop = niop) {
niop = iop->next;
+ dprintk((KERN_INFO "Calling i2o_online_controller for %s\n", iop->name));
if (i2o_online_controller(iop) < 0)
goto rebuild_sys_tab;
}
/* Active IOPs now in OPERATIONAL state */
+
+ /*
+ * Register for status updates from all IOPs
+ */
+ for(iop = i2o_controller_chain; iop; iop=iop->next) {
+
+ /* Create a kernel thread to deal with dynamic LCT updates */
+ iop->lct_pid = kernel_thread(i2o_dyn_lct, iop, CLONE_SIGHAND);
+
+ /* Update change ind on DLCT */
+ iop->dlct->change_ind = iop->lct->change_ind;
+
+ /* Start dynamic LCT updates */
+ i2o_lct_notify(iop);
+
+ /* Register for all events from IRTOS */
+ i2o_event_register(iop, core_context, 0, 0, 0xFFFFFFFF);
+ }
}
/*
/* In READY state, Get status */
if (i2o_status_get(iop) < 0) {
- printk("Unable to obtain status of IOP, attempting a reset.\n");
+ printk(KERN_INFO "Unable to obtain status of IOP, attempting a reset.\n");
i2o_reset_controller(iop);
if (i2o_status_get(iop) < 0) {
- printk("IOP not responding.\n");
+ printk(KERN_ERR "%s: IOP not responding.\n", iop->name);
i2o_delete_controller(iop);
return -1;
}
return -1;
}
-// if (iop->status_block->iop_state == ADAPTER_STATE_HOLD ||
if (iop->status_block->iop_state == ADAPTER_STATE_READY ||
iop->status_block->iop_state == ADAPTER_STATE_OPERATIONAL ||
+ iop->status_block->iop_state == ADAPTER_STATE_HOLD ||
iop->status_block->iop_state == ADAPTER_STATE_FAILED)
{
+ u32 m[MSG_FRAME_SIZE];
dprintk((KERN_INFO "%s: already running...trying to reset\n",
iop->name));
+
+ i2o_init_outbound_q(iop);
+ I2O_REPLY_WRITE32(iop,virt_to_phys(m));
+
i2o_reset_controller(iop);
if (i2o_status_get(iop) < 0 ||
return -1;
}
+ if (i2o_post_outbound_messages(iop))
+ return -1;
+
/* In HOLD state */
if (i2o_hrt_get(iop) < 0) {
return 0;
}
+
/*
* Clear and (re)initialize IOP's outbound queue
*/
u32 m;
u32 *msg;
u32 time;
- int i;
+ dprintk((KERN_INFO "%s: Initializing Outbound Queue\n", c->name));
m=i2o_wait_message(c, "OutboundInit");
if(m==0xFFFFFFFF)
return -ETIMEDOUT;
return -ENOMEM;
}
memset(status, 0, 4);
-
- msg[0]= EIGHT_WORD_MSG_SIZE| SGL_OFFSET_6;
+
+
+ msg[0]= EIGHT_WORD_MSG_SIZE| TRL_OFFSET_6;
msg[1]= I2O_CMD_OUTBOUND_INIT<<24 | HOST_TID<<12 | ADAPTER_TID;
msg[2]= core_context;
- msg[3]= 0x0106; /* Transaction context */
- msg[4]= 4096; /* Host page frame size */
+ msg[3]= 0x0106; /* Transaction context */
+ msg[4]= 4096; /* Host page frame size */
	/* Frame size is in words. Pick 128, it's what everyone else uses and
- other sizes break some adapters. */
- msg[5]= (MSG_FRAME_SIZE>>2)<<16|0x80; /* Outbound msg frame size and Initcode */
- msg[6]= 0xD0000004; /* Simple SG LE, EOB */
+ other sizes break some adapters. */
+ msg[5]= MSG_FRAME_SIZE<<16|0x80; /* Outbound msg frame size and Initcode */
+ msg[6]= 0xD0000004; /* Simple SG LE, EOB */
msg[7]= virt_to_bus(status);
i2o_post_message(c,m);
- barrier();
+ barrier();
time=jiffies;
while(status[0]<0x02)
{
- if((jiffies-time)>=5*HZ)
+ if((jiffies-time)>=30*HZ)
{
if(status[0]==0x00)
printk(KERN_ERR "%s: Ignored queue initialize request.\n",
c->name);
- else
+ else
printk(KERN_ERR "%s: Outbound queue initialize timeout.\n",
c->name);
kfree(status);
return -ETIMEDOUT;
- }
+ }
schedule();
barrier();
- }
+ }
if(status[0] != I2O_CMD_OUTBOUND_INIT_COMPLETE)
{
- printk(KERN_ERR "%s: Outbound queue initialize rejected (%d).\n",
- c->name, status[0]);
+ printk(KERN_ERR "%s: IOP outbound initialise failed.\n", c->name);
kfree(status);
- return -EINVAL;
+ return -ETIMEDOUT;
}
+
+ return 0;
+}
+
+int i2o_post_outbound_messages(struct i2o_controller *c)
+{
+ int i;
+ u32 m;
/* Alloc space for IOP's outbound queue message frames */
c->page_frame = kmalloc(MSG_POOL_SIZE, GFP_KERNEL);
if(c->page_frame==NULL) {
printk(KERN_CRIT "%s: Outbound Q initialize failed; out of memory.\n",
c->name);
- kfree(status);
return -ENOMEM;
- }
+ }
m=virt_to_phys(c->page_frame);
-
+
/* Post frames */
for(i=0; i< NMBR_MSG_FRAMES; i++) {
- I2O_REPLY_WRITE32(c,m);
- mb();
+ I2O_REPLY_WRITE32(c,m);
+ mb();
m += MSG_FRAME_SIZE;
}
- kfree(status);
return 0;
}
return 0;
}
+/*
+ * Like above, but used for async notification. The main
+ * difference is that we keep track of the CurrentChangeIndicator
+ * so that we only get updates when it actually changes.
+ *
+ */
+int i2o_lct_notify(struct i2o_controller *c)
+{
+ u32 msg[8];
+
+ msg[0] = EIGHT_WORD_MSG_SIZE|SGL_OFFSET_6;
+ msg[1] = I2O_CMD_LCT_NOTIFY<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[2] = core_context;
+ msg[3] = 0xDEADBEEF;
+ msg[4] = 0xFFFFFFFF; /* All devices */
+ msg[5] = c->dlct->change_ind+1; /* Next change */
+ msg[6] = 0xD0000000|8192;
+ msg[7] = virt_to_bus(c->dlct);
+ return i2o_post_this(c, msg, sizeof(msg));
+}
+
/*
* Bring a controller online into OPERATIONAL state.
*/
-
int i2o_online_controller(struct i2o_controller *iop)
{
if (i2o_systab_send(iop) < 0) {
/* In READY state */
+ dprintk((KERN_INFO "Attempting to enable iop%d\n", iop->unit));
if (i2o_enable_controller(iop) < 0) {
i2o_delete_controller(iop);
return -1;
/* In OPERATIONAL state */
+ dprintk((KERN_INFO "Attempting to get/parse lct iop%d\n", iop->unit));
if (i2o_lct_get(iop) < 0){
i2o_delete_controller(iop);
return -1;
return 0;
}
+/*
+ * Build system table
+ *
+ * The system table contains information about all the IOPs in the
+ * system (duh) and is used by the Executives on the IOPs to establish
+ * peer2peer connections. We're not supporting peer2peer at the moment,
+ * but this will be needed down the road for things like lan2lan forwarding.
+ */
static int i2o_build_sys_table(void)
{
struct i2o_controller *iop = NULL;
+ struct i2o_controller *niop = NULL;
int count = 0;
sys_tbl_len = sizeof(struct i2o_sys_tbl) + // Header + IOPs
sys_tbl = kmalloc(sys_tbl_len, GFP_KERNEL);
if(!sys_tbl) {
- printk(KERN_CRIT "SysTab Set failed. Out of memory.\n");
+ printk(KERN_CRIT "SysTab Set failed. Out of memory.\n");
return -ENOMEM;
}
memset((void*)sys_tbl, 0, sys_tbl_len);
sys_tbl->version = I2OVERSION; /* TODO: Version 2.0 */
sys_tbl->change_ind = sys_tbl_ind++;
- for(iop = i2o_controller_chain; iop; iop = iop->next)
+ for(iop = i2o_controller_chain; iop; iop = niop)
{
- // Get updated Status Block so we have the latest information
- if (i2o_status_get(iop)) {
+ niop = iop->next;
+
+ /*
+ * Get updated IOP state so we have the latest information
+ *
+ * We should delete the controller at this point if it
+		 * doesn't respond, since if it's not on the system table
+		 * it is technically not part of the I2O subsystem...
+ */
+ if(i2o_status_get(iop)) {
+			printk(KERN_ERR "%s: Deleting b/c could not get status while "
+				"attempting to build system table\n", iop->name);
+ i2o_delete_controller(iop);
sys_tbl->num_entries--;
- continue; // try next one
+ continue; // try the next one
}
sys_tbl->iops[count].org_id = iop->status_block->org_id;
#ifdef DRIVERDEBUG
{
- u32 *table = (u32*)sys_tbl;
+ u32 *table;
+ table = (u32*)sys_tbl;
for(count = 0; count < (sys_tbl_len >>2); count++)
- printk(KERN_INFO "sys_tbl[%d] = %0#10x\n",
- count, table[count]);
+ printk(KERN_INFO "sys_tbl[%d] = %0#10x\n", count, table[count]);
}
#endif
if(m==0xFFFFFFFF)
{
- printk(KERN_ERR "%s: Timeout waiting for message frame!\n",
- c->name);
+ printk(KERN_ERR "i2o/iop%d: Timeout waiting for message frame!\n",
+ c->unit);
return -ETIMEDOUT;
}
-
msg = (u32 *)(c->mem_offset + m);
- memcpy_toio(msg, data, len);
+ memcpy_toio(msg, data, len);
i2o_post_message(c,m);
return 0;
}
/*
- * Post a message and wait for a response flag to be set.
+ * This core API allows an OSM to post a message and then be told whether
+ * or not the system received a successful reply.
*/
int i2o_post_wait(struct i2o_controller *c, u32 *msg, int len, int timeout)
{
spin_lock_irqsave(&post_wait_lock, flags);
wait_data->next = post_wait_queue;
post_wait_queue = wait_data;
- wait_data->id = (++post_wait_id) & 0x7fff;
+ wait_data->id = (++post_wait_id) & 0x7fff;
spin_unlock_irqrestore(&post_wait_lock, flags);
wait_data->wq = &wq_i2o_post;
wait_data->status = -EAGAIN;
- msg[2]=0x80000000|(u32)core_context|((u32)wait_data->id<<16);
-
+ msg[2] = 0x80000000|(u32)core_context|((u32)wait_data->id<<16);
+
if ((status = i2o_post_this(c, msg, len))==0) {
interruptible_sleep_on_timeout(&wq_i2o_post, HZ * timeout);
status = wait_data->status;
- }
+ }
+
+#ifdef DRIVERDEBUG
+ if(status == -EAGAIN)
+ printk(KERN_INFO "POST WAIT TIMEOUT\n");
+#endif
+ /*
+ * Remove the entry from the queue.
+ * Since i2o_post_wait() may have been called again by
+ * a different thread while we were waiting for this
+ * instance to complete, we're not guaranteed that
+ * this entry is at the head of the queue anymore, so
+ * we need to search for it, find it, and delete it.
+ */
p2 = NULL;
spin_lock_irqsave(&post_wait_lock, flags);
for(p1 = post_wait_queue; p1; p2 = p1, p1 = p1->next) {
p2->next = p1->next;
else
post_wait_queue = p1->next;
+
break;
}
}
*/
static void i2o_post_wait_complete(u32 context, int status)
{
- struct i2o_post_wait_data *p1;
+ struct i2o_post_wait_data *p1 = NULL;
/*
* We need to search through the post_wait
for(p1 = post_wait_queue; p1; p1 = p1->next) {
if(p1->id == ((context >> 16) & 0x7fff)) {
p1->status = status;
- spin_unlock(&post_wait_lock);
wake_up_interruptible(p1->wq);
+ spin_unlock(&post_wait_lock);
return;
}
}
spin_unlock(&post_wait_lock);
- printk(KERN_DEBUG "i2o: i2o_post_wait reply after timeout!");
-}
-
-/*
- * Send UTIL_EVENT messages
- */
-
-int i2o_event_register(struct i2o_controller *c, int tid, int context,
- u32 evt_mask)
-{
- u32 msg[5];
-
- msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
- msg[1] = I2O_CMD_UTIL_EVT_REGISTER << 24 | HOST_TID << 12 | tid;
- msg[2] = context;
- msg[3] = 0;
- msg[4] = evt_mask;
-
- if (i2o_post_this(c, msg, sizeof(msg)) < 0)
- return -ETIMEDOUT;
-
- return 0;
-}
-
-int i2o_event_ack(struct i2o_controller *c, int tid, int context,
- u32 evt_indicator, void *evt_data, int evt_data_len)
-{
- u32 msg[c->inbound_size];
-
- msg[0] = I2O_MESSAGE_SIZE(5 + evt_data_len / 4) | SGL_OFFSET_5;
- msg[1] = I2O_CMD_UTIL_EVT_ACK << 24 | HOST_TID << 12 | tid;
- msg[2] = context;
- msg[3] = 0;
- msg[4] = evt_indicator;
- memcpy(msg+5, evt_data, evt_data_len);
-
- if (i2o_post_this(c, msg, sizeof(msg)) < 0)
- return -ETIMEDOUT;
-
- return 0;
+	printk(KERN_DEBUG "i2o_post_wait reply after timeout!\n");
}
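The post_wait path packs two things into a single message word (msg[2]): bit 31 flags a post_wait transaction, bits 16-30 carry the wait-data ID, and the low 16 bits carry the core handler context; the completion side recovers the ID with `(context >> 16) & 0x7fff`. A minimal user-space sketch of that packing (function names here are illustrative, not from the kernel):

```c
#include <stdint.h>

/* Pack a 15-bit post_wait id and a 16-bit handler context into one
 * message word, as i2o_post_wait() does when it fills in msg[2]. */
static uint32_t pack_post_wait(uint16_t context, uint16_t id)
{
	return 0x80000000u | context | ((uint32_t)(id & 0x7fff) << 16);
}

/* Recover the id on the reply path, as i2o_post_wait_complete() does. */
static uint16_t unpack_post_wait_id(uint32_t word)
{
	return (word >> 16) & 0x7fff;
}
```

Masking the ID to 15 bits mirrors the `(++post_wait_id) & 0x7fff` increment above, so a wrapped counter can never collide with bit 31.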
/*
* Issue UTIL_CLAIM or UTIL_RELEASE messages
*/
-
-static int i2o_issue_claim(struct i2o_controller *c, int tid, int context, int onoff, u32 type)
+static int i2o_issue_claim(struct i2o_controller *c, int tid, int context,
+ int onoff, u32 type)
{
u32 msg[5];
else
msg[1] = I2O_CMD_UTIL_RELEASE << 24 | HOST_TID << 12 | tid;
- /* msg[2] filled in i2o_post_wait */
msg[3] = 0;
msg[4] = type;
-
- return i2o_post_wait(c, msg, sizeof(msg), 2);
+
+ return i2o_post_wait(c, msg, sizeof(msg), 30);
}
/* Issue UTIL_PARAMS_GET or UTIL_PARAMS_SET
*
* This function can be used for all UtilParamsGet/Set operations.
- * The OperationBlock is given in opblk-buffer,
- * and results are returned in resblk-buffer.
- * Note that the minimum sized resblk is 8 bytes and contains
+ * The OperationList is given in oplist-buffer,
+ * and results are returned in reslist-buffer.
+ * Note that the minimum sized reslist is 8 bytes and contains
* ResultCount, ErrorInfoSize, BlockStatus and BlockSize.
*/
int i2o_issue_params(int cmd, struct i2o_controller *iop, int tid,
- void *opblk, int oplen, void *resblk, int reslen)
+ void *oplist, int oplen, void *reslist, int reslen)
{
u32 msg[9];
- u32 *res = (u32 *)resblk;
+ u8 *res = (u8 *)reslist;
+ u32 *res32 = (u32*)reslist;
+ u32 *restmp = (u32*)reslist;
+ int len = 0;
+ int i = 0;
int wait_status;
msg[0] = NINE_WORD_MSG_SIZE | SGL_OFFSET_5;
msg[1] = cmd << 24 | HOST_TID << 12 | tid;
- /* msg[2] filled in i2o_post_wait */
msg[3] = 0;
msg[4] = 0;
- msg[5] = 0x54000000 | oplen; /* OperationBlock */
- msg[6] = virt_to_bus(opblk);
- msg[7] = 0xD0000000 | reslen; /* ResultBlock */
- msg[8] = virt_to_bus(resblk);
+ msg[5] = 0x54000000 | oplen; /* OperationList */
+ msg[6] = virt_to_bus(oplist);
+ msg[7] = 0xD0000000 | reslen; /* ResultList */
+ msg[8] = virt_to_bus(reslist);
- if ((wait_status = i2o_post_wait(iop, msg, sizeof(msg), 20)))
- return wait_status; /* -DetailedStatus */
+ if((wait_status = i2o_post_wait(iop, msg, sizeof(msg), 10)))
+ return wait_status; /* -DetailedStatus */
- if (res[1]&0x00FF0000) /* BlockStatus != SUCCESS */
+ /*
+ * Calculate number of bytes of Result LIST
+ * We need to loop through each Result BLOCK and grab the length
+ */
+ restmp = res32 + 1;
+ len = 1;
+	for(i = 0; i < (res32[0]&0x0000FFFF); i++)
{
- printk(KERN_WARNING "%s: %s - Error:\n ErrorInfoSize = 0x%02x, "
- "BlockStatus = 0x%02x, BlockSize = 0x%04x\n",
- iop->name,
- (cmd == I2O_CMD_UTIL_PARAMS_SET) ? "PARAMS_SET"
- : "PARAMS_GET",
- res[1]>>24, (res[1]>>16)&0xFF, res[1]&0xFFFF);
- return -((res[1] >> 16) & 0xFF); /* -BlockStatus */
+ if(restmp[0]&0x00FF0000) /* BlockStatus != SUCCESS */
+ {
+ printk(KERN_WARNING "%s - Error:\n ErrorInfoSize = 0x%02x, "
+ "BlockStatus = 0x%02x, BlockSize = 0x%04x\n",
+ (cmd == I2O_CMD_UTIL_PARAMS_SET) ? "PARAMS_SET"
+ : "PARAMS_GET",
+ res32[1]>>24, (res32[1]>>16)&0xFF, res32[1]&0xFFFF);
+
+ /*
+		 * If this is the only request, then we return an error.
+ */
+ if((res32[0]&0x0000FFFF) == 1)
+			return -((res32[1] >> 16) & 0xFF); /* -BlockStatus */
+ }
+
+ len += restmp[0] & 0x0000FFFF; /* Length of res BLOCK */
+ restmp += restmp[0] & 0x0000FFFF; /* Skip to next BLOCK */
}
- return 4 + ((res[1] & 0x0000FFFF) << 2); /* bytes used in resblk */
+ return (len << 2);
+
+ // return 4 + ((res[1] & 0x0000FFFF) << 2); /* bytes used in resblk */
}
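The result-list length calculation above walks a variable number of result blocks: word 0's low 16 bits give the ResultCount, and the low 16 bits of each block's first word give that block's length in 32-bit words. A self-contained sketch of the same walk (the function name is illustrative):

```c
#include <stdint.h>

/* Compute the total byte length of a UtilParams result list, mirroring
 * the loop in i2o_issue_params(): start past the header word, add each
 * block's length, and return the count converted from words to bytes. */
static int reslist_bytes(const uint32_t *res)
{
	const uint32_t *blk = res + 1;      /* first result block */
	int words = 1;                      /* the header word itself */
	int count = res[0] & 0x0000ffff;    /* ResultCount */
	int i;

	for (i = 0; i < count; i++) {
		words += blk[0] & 0x0000ffff;   /* length of this block */
		blk   += blk[0] & 0x0000ffff;   /* skip to the next block */
	}
	return words << 2;                  /* words -> bytes */
}
```

For a list of two blocks of three and two words, this yields (1 + 3 + 2) * 4 = 24 bytes.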
/*
if (field == -1) /* whole group */
opblk[4] = -1;
-
+
size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_GET, iop, tid,
opblk, sizeof(opblk), resblk, sizeof(resblk));
-
+
if (size < 0)
return size;
/*
* Clear table group, i.e. delete all rows.
*/
-
int i2o_clear_table(struct i2o_controller *iop, int tid, int group)
{
u16 opblk[] = { 1, 0, I2O_PARAMS_TABLE_CLEAR, group };
* else just specific fields are given, rest use defaults
* buf contains fieldindexes, rowcount, keyvalues
*/
-
int i2o_row_add_table(struct i2o_controller *iop, int tid,
int group, int fieldcount, void *buf, int buflen)
{
/*
* Delete rows from a table group.
*/
-
int i2o_row_delete_table(struct i2o_controller *iop, int tid,
int group, int keycount, void *keys, int keyslen)
{
return size;
}
+/*
+ * Used for error reporting/debugging purposes
+ */
void i2o_report_common_status(u8 req_status)
{
/* the following reply status strings are common to all classes */
return;
}
+/*
+ * Used for error reporting/debugging purposes
+ */
static void i2o_report_common_dsc(u16 detailed_status)
{
/* The following detailed statuscodes are valid
return;
}
+/*
+ * Used for error reporting/debugging purposes
+ */
static void i2o_report_lan_dsc(u16 detailed_status)
{
static char *LAN_DSC[] = { // Lan detailed status code strings
return;
}
+/*
+ * Used for error reporting/debugging purposes
+ */
static void i2o_report_util_cmd(u8 cmd)
{
switch (cmd) {
case I2O_CMD_UTIL_NOP:
- printk("UTIL_NOP, ");
+ printk(KERN_INFO "UTIL_NOP, ");
break;
case I2O_CMD_UTIL_ABORT:
- printk("UTIL_ABORT, ");
+ printk(KERN_INFO "UTIL_ABORT, ");
break;
case I2O_CMD_UTIL_CLAIM:
- printk("UTIL_CLAIM, ");
+ printk(KERN_INFO "UTIL_CLAIM, ");
break;
case I2O_CMD_UTIL_RELEASE:
- printk("UTIL_CLAIM_RELEASE, ");
+ printk(KERN_INFO "UTIL_CLAIM_RELEASE, ");
break;
case I2O_CMD_UTIL_CONFIG_DIALOG:
- printk("UTIL_CONFIG_DIALOG, ");
+ printk(KERN_INFO "UTIL_CONFIG_DIALOG, ");
break;
case I2O_CMD_UTIL_DEVICE_RESERVE:
- printk("UTIL_DEVICE_RESERVE, ");
+ printk(KERN_INFO "UTIL_DEVICE_RESERVE, ");
break;
case I2O_CMD_UTIL_DEVICE_RELEASE:
- printk("UTIL_DEVICE_RELEASE, ");
+ printk(KERN_INFO "UTIL_DEVICE_RELEASE, ");
break;
case I2O_CMD_UTIL_EVT_ACK:
- printk("UTIL_EVENT_ACKNOWLEDGE, ");
+ printk(KERN_INFO "UTIL_EVENT_ACKNOWLEDGE, ");
break;
case I2O_CMD_UTIL_EVT_REGISTER:
- printk("UTIL_EVENT_REGISTER, ");
+ printk(KERN_INFO "UTIL_EVENT_REGISTER, ");
break;
case I2O_CMD_UTIL_LOCK:
- printk("UTIL_LOCK, ");
+ printk(KERN_INFO "UTIL_LOCK, ");
break;
case I2O_CMD_UTIL_LOCK_RELEASE:
- printk("UTIL_LOCK_RELEASE, ");
+ printk(KERN_INFO "UTIL_LOCK_RELEASE, ");
break;
case I2O_CMD_UTIL_PARAMS_GET:
- printk("UTIL_PARAMS_GET, ");
+ printk(KERN_INFO "UTIL_PARAMS_GET, ");
break;
case I2O_CMD_UTIL_PARAMS_SET:
- printk("UTIL_PARAMS_SET, ");
+ printk(KERN_INFO "UTIL_PARAMS_SET, ");
break;
case I2O_CMD_UTIL_REPLY_FAULT_NOTIFY:
- printk("UTIL_REPLY_FAULT_NOTIFY, ");
+ printk(KERN_INFO "UTIL_REPLY_FAULT_NOTIFY, ");
break;
default:
- printk("%0#2x, ",cmd);
+ printk(KERN_INFO "%0#2x, ",cmd);
}
return;
}
-
+/*
+ * Used for error reporting/debugging purposes
+ */
static void i2o_report_exec_cmd(u8 cmd)
{
switch (cmd) {
case I2O_CMD_ADAPTER_ASSIGN:
- printk("EXEC_ADAPTER_ASSIGN, ");
+ printk(KERN_INFO "EXEC_ADAPTER_ASSIGN, ");
break;
case I2O_CMD_ADAPTER_READ:
- printk("EXEC_ADAPTER_READ, ");
+ printk(KERN_INFO "EXEC_ADAPTER_READ, ");
break;
case I2O_CMD_ADAPTER_RELEASE:
- printk("EXEC_ADAPTER_RELEASE, ");
+ printk(KERN_INFO "EXEC_ADAPTER_RELEASE, ");
break;
case I2O_CMD_BIOS_INFO_SET:
- printk("EXEC_BIOS_INFO_SET, ");
+ printk(KERN_INFO "EXEC_BIOS_INFO_SET, ");
break;
case I2O_CMD_BOOT_DEVICE_SET:
- printk("EXEC_BOOT_DEVICE_SET, ");
+ printk(KERN_INFO "EXEC_BOOT_DEVICE_SET, ");
break;
case I2O_CMD_CONFIG_VALIDATE:
- printk("EXEC_CONFIG_VALIDATE, ");
+ printk(KERN_INFO "EXEC_CONFIG_VALIDATE, ");
break;
case I2O_CMD_CONN_SETUP:
- printk("EXEC_CONN_SETUP, ");
+ printk(KERN_INFO "EXEC_CONN_SETUP, ");
break;
case I2O_CMD_DDM_DESTROY:
- printk("EXEC_DDM_DESTROY, ");
+ printk(KERN_INFO "EXEC_DDM_DESTROY, ");
break;
case I2O_CMD_DDM_ENABLE:
- printk("EXEC_DDM_ENABLE, ");
+ printk(KERN_INFO "EXEC_DDM_ENABLE, ");
break;
case I2O_CMD_DDM_QUIESCE:
- printk("EXEC_DDM_QUIESCE, ");
+ printk(KERN_INFO "EXEC_DDM_QUIESCE, ");
break;
case I2O_CMD_DDM_RESET:
- printk("EXEC_DDM_RESET, ");
+ printk(KERN_INFO "EXEC_DDM_RESET, ");
break;
case I2O_CMD_DDM_SUSPEND:
- printk("EXEC_DDM_SUSPEND, ");
+ printk(KERN_INFO "EXEC_DDM_SUSPEND, ");
break;
case I2O_CMD_DEVICE_ASSIGN:
- printk("EXEC_DEVICE_ASSIGN, ");
+ printk(KERN_INFO "EXEC_DEVICE_ASSIGN, ");
break;
case I2O_CMD_DEVICE_RELEASE:
- printk("EXEC_DEVICE_RELEASE, ");
+ printk(KERN_INFO "EXEC_DEVICE_RELEASE, ");
break;
case I2O_CMD_HRT_GET:
- printk("EXEC_HRT_GET, ");
+ printk(KERN_INFO "EXEC_HRT_GET, ");
break;
case I2O_CMD_ADAPTER_CLEAR:
- printk("EXEC_IOP_CLEAR, ");
+ printk(KERN_INFO "EXEC_IOP_CLEAR, ");
break;
case I2O_CMD_ADAPTER_CONNECT:
- printk("EXEC_IOP_CONNECT, ");
+ printk(KERN_INFO "EXEC_IOP_CONNECT, ");
break;
case I2O_CMD_ADAPTER_RESET:
- printk("EXEC_IOP_RESET, ");
+ printk(KERN_INFO "EXEC_IOP_RESET, ");
break;
case I2O_CMD_LCT_NOTIFY:
- printk("EXEC_LCT_NOTIFY, ");
+ printk(KERN_INFO "EXEC_LCT_NOTIFY, ");
break;
case I2O_CMD_OUTBOUND_INIT:
- printk("EXEC_OUTBOUND_INIT, ");
+ printk(KERN_INFO "EXEC_OUTBOUND_INIT, ");
break;
case I2O_CMD_PATH_ENABLE:
- printk("EXEC_PATH_ENABLE, ");
+ printk(KERN_INFO "EXEC_PATH_ENABLE, ");
break;
case I2O_CMD_PATH_QUIESCE:
- printk("EXEC_PATH_QUIESCE, ");
+ printk(KERN_INFO "EXEC_PATH_QUIESCE, ");
break;
case I2O_CMD_PATH_RESET:
- printk("EXEC_PATH_RESET, ");
+ printk(KERN_INFO "EXEC_PATH_RESET, ");
break;
case I2O_CMD_STATIC_MF_CREATE:
- printk("EXEC_STATIC_MF_CREATE, ");
+ printk(KERN_INFO "EXEC_STATIC_MF_CREATE, ");
break;
case I2O_CMD_STATIC_MF_RELEASE:
- printk("EXEC_STATIC_MF_RELEASE, ");
+ printk(KERN_INFO "EXEC_STATIC_MF_RELEASE, ");
break;
case I2O_CMD_STATUS_GET:
- printk("EXEC_STATUS_GET, ");
+ printk(KERN_INFO "EXEC_STATUS_GET, ");
break;
case I2O_CMD_SW_DOWNLOAD:
- printk("EXEC_SW_DOWNLOAD, ");
+ printk(KERN_INFO "EXEC_SW_DOWNLOAD, ");
break;
case I2O_CMD_SW_UPLOAD:
- printk("EXEC_SW_UPLOAD, ");
+ printk(KERN_INFO "EXEC_SW_UPLOAD, ");
break;
case I2O_CMD_SW_REMOVE:
- printk("EXEC_SW_REMOVE, ");
+ printk(KERN_INFO "EXEC_SW_REMOVE, ");
break;
case I2O_CMD_SYS_ENABLE:
- printk("EXEC_SYS_ENABLE, ");
+ printk(KERN_INFO "EXEC_SYS_ENABLE, ");
break;
case I2O_CMD_SYS_MODIFY:
- printk("EXEC_SYS_MODIFY, ");
+ printk(KERN_INFO "EXEC_SYS_MODIFY, ");
break;
case I2O_CMD_SYS_QUIESCE:
- printk("EXEC_SYS_QUIESCE, ");
+ printk(KERN_INFO "EXEC_SYS_QUIESCE, ");
break;
case I2O_CMD_SYS_TAB_SET:
- printk("EXEC_SYS_TAB_SET, ");
+ printk(KERN_INFO "EXEC_SYS_TAB_SET, ");
break;
default:
- printk("%02x, ",cmd);
+ printk(KERN_INFO "%02x, ",cmd);
}
return;
}
+/*
+ * Used for error reporting/debugging purposes
+ */
static void i2o_report_lan_cmd(u8 cmd)
{
switch (cmd) {
case LAN_PACKET_SEND:
- printk("LAN_PACKET_SEND, ");
+ printk(KERN_INFO "LAN_PACKET_SEND, ");
break;
case LAN_SDU_SEND:
- printk("LAN_SDU_SEND, ");
+ printk(KERN_INFO "LAN_SDU_SEND, ");
break;
case LAN_RECEIVE_POST:
- printk("LAN_RECEIVE_POST, ");
+ printk(KERN_INFO "LAN_RECEIVE_POST, ");
break;
case LAN_RESET:
- printk("LAN_RESET, ");
+ printk(KERN_INFO "LAN_RESET, ");
break;
case LAN_SUSPEND:
- printk("LAN_SUSPEND, ");
+ printk(KERN_INFO "LAN_SUSPEND, ");
break;
default:
- printk("%02x, ",cmd);
+ printk(KERN_INFO "%02x, ",cmd);
}
return;
}
-/* TODO: Add support for other classes */
+/*
+ * Used for error reporting/debugging purposes
+ *
+ * This will have to be rewritten someday. The code currently
+ * assumes that a certain range of commands is reserved for a
+ * given class. This is not completely true. Exec and Util
+ * messages have their numbers reserved, but the rest are
+ * available _for each device class to use as it wishes_
+ *
+ * For example 0x37 is BsaCacheFlush for a block class device and
+ * LanSuspend for a LAN class device.
+ *
+ * The ideal way to do this would be to look at the TID and then
+ * find the LCT entry to determine what the class of the device is.
+ *
+ */
void i2o_report_status(const char *severity, const char *module, u32 *msg)
{
u8 cmd = (msg[1]>>24)&0xFF;
return;
}
- printk("%02x, %02x / %04x.\n", cmd, req_status, detailed_status);
+ printk(KERN_INFO "%02x, %02x / %04x.\n", cmd, req_status, detailed_status);
return;
}
{
#ifdef DRIVERDEBUG
int i;
-
printk(KERN_INFO "Dumping I2O message size %d @ %p\n",
msg[0]>>16&0xffff, msg);
for(i = 0; i < ((msg[0]>>16)&0xffff); i++)
#endif
}
-#ifdef MODULE
+/*
+ * I2O reboot/shutdown notification.
+ *
+ * - Call each OSM's reboot notifier (if one exists)
+ * - Quiesce each IOP in the system
+ *
+ * Each IOP has to be quiesced before we can ensure that the system
+ * can be properly shutdown as a transaction that has already been
+ * acknowledged still needs to be placed in permanent store on the IOP.
+ * The SysQuiesce causes the IOP to force all HDMs to complete their
+ * transactions before returning, so only at that point is it safe
+ *
+ */
+static int i2o_reboot_event(struct notifier_block *n, unsigned long code,
+			void *p)
+{
+ int i = 0;
+ struct i2o_controller *c = NULL;
-EXPORT_SYMBOL(i2o_install_handler);
-EXPORT_SYMBOL(i2o_remove_handler);
+ if(code != SYS_RESTART && code != SYS_HALT && code != SYS_POWER_OFF)
+ return NOTIFY_DONE;
-EXPORT_SYMBOL(i2o_install_controller);
-EXPORT_SYMBOL(i2o_delete_controller);
-EXPORT_SYMBOL(i2o_unlock_controller);
-EXPORT_SYMBOL(i2o_find_controller);
+ printk(KERN_INFO "Shutting down I2O system.\n");
+ printk(KERN_INFO
+ " This could take a few minutes if there are many devices attached\n");
+
+ for(i = 0; i < MAX_I2O_MODULES; i++)
+ {
+ if(i2o_handlers[i] && i2o_handlers[i]->reboot_notify)
+ i2o_handlers[i]->reboot_notify();
+ }
+
+ for(c = i2o_controller_chain; c; c = c->next)
+ {
+ if(i2o_quiesce_controller(c))
+ {
+			printk(KERN_WARNING "i2o: Could not quiesce %s. "
+				"Verify setup on next system power up.\n",
+				c->name);
+ }
+ }
+
+ return NOTIFY_DONE;
+}
+
+
+#ifdef MODULE
+
+EXPORT_SYMBOL(i2o_controller_chain);
EXPORT_SYMBOL(i2o_num_controllers);
+EXPORT_SYMBOL(i2o_find_controller);
+EXPORT_SYMBOL(i2o_unlock_controller);
+EXPORT_SYMBOL(i2o_status_get);
-EXPORT_SYMBOL(i2o_event_register);
-EXPORT_SYMBOL(i2o_event_ack);
+EXPORT_SYMBOL(i2o_install_handler);
+EXPORT_SYMBOL(i2o_remove_handler);
EXPORT_SYMBOL(i2o_claim_device);
EXPORT_SYMBOL(i2o_release_device);
-EXPORT_SYMBOL(i2o_run_queue);
-EXPORT_SYMBOL(i2o_activate_controller);
-EXPORT_SYMBOL(i2o_get_class_name);
-EXPORT_SYMBOL(i2o_status_get);
+EXPORT_SYMBOL(i2o_device_notify_on);
+EXPORT_SYMBOL(i2o_device_notify_off);
+
+EXPORT_SYMBOL(i2o_post_this);
+EXPORT_SYMBOL(i2o_post_wait);
EXPORT_SYMBOL(i2o_query_scalar);
EXPORT_SYMBOL(i2o_set_scalar);
EXPORT_SYMBOL(i2o_query_table);
EXPORT_SYMBOL(i2o_clear_table);
EXPORT_SYMBOL(i2o_row_add_table);
-
-EXPORT_SYMBOL(i2o_post_this);
-EXPORT_SYMBOL(i2o_post_wait);
+EXPORT_SYMBOL(i2o_row_delete_table);
EXPORT_SYMBOL(i2o_issue_params);
+EXPORT_SYMBOL(i2o_event_register);
+EXPORT_SYMBOL(i2o_event_ack);
+
EXPORT_SYMBOL(i2o_report_status);
+EXPORT_SYMBOL(i2o_dump_message);
+EXPORT_SYMBOL(i2o_get_class_name);
MODULE_AUTHOR("Red Hat Software");
MODULE_DESCRIPTION("I2O Core");
+
int init_module(void)
{
- printk(KERN_INFO "I2O Core - (c) Copyright 1999 Red Hat Software.\n");
+ printk(KERN_INFO "I2O Core - (C) Copyright 1999 Red Hat Software\n");
if (i2o_install_handler(&i2o_core_handler) < 0)
{
printk(KERN_ERR
- "i2o: Unable to install core handler.\nI2O stack not loaded!");
+		"i2o_core: Unable to install core handler.\nI2O stack not loaded!\n");
return 0;
}
core_context = i2o_core_handler.context;
+
/*
* Attach core to I2O PCI transport (and others as they are developed)
*/
printk(KERN_INFO "i2o: No PCI I2O controllers found\n");
#endif
+ /*
+ * Initialize event handling thread
+ */
+ init_MUTEX_LOCKED(&evt_sem);
+ evt_pid = kernel_thread(i2o_core_evt, &evt_reply, CLONE_SIGHAND);
+ if(evt_pid < 0)
+ {
+ printk(KERN_ERR "I2O: Could not create event handler kernel thread\n");
+ i2o_remove_handler(&i2o_core_handler);
+ return 0;
+ }
+	else
+		printk(KERN_INFO "event thread created as pid %d\n", evt_pid);
+
if(i2o_num_controllers)
i2o_sys_init();
+ register_reboot_notifier(&i2o_reboot_notifier);
+
return 0;
}
void cleanup_module(void)
{
+ int stat;
+
+ unregister_reboot_notifier(&i2o_reboot_notifier);
+
if(i2o_num_controllers)
i2o_sys_shutdown();
+ /*
+ * If this is shutdown time, the thread has already been killed
+ */
+ if(evt_running) {
+ stat = kill_proc(evt_pid, SIGTERM, 1);
+ if(!stat) {
+ int count = 10 * 100;
+			while(evt_running && count--) {
+				current->state = TASK_INTERRUPTIBLE;
+				schedule_timeout(1);
+			}
+
+			if(evt_running)
+				printk(KERN_ERR "i2o: Event thread still running!\n");
+ }
+ }
+
#ifdef CONFIG_I2O_PCI_MODULE
i2o_pci_core_detach();
#endif
i2o_remove_handler(&i2o_core_handler);
}
#else
core_context = i2o_core_handler.context;
+ /*
+ * Initialize event handling thread
+ * We may not find any controllers, but still want this as
+ * down the road we may have hot pluggable controllers that
+ * need to be dealt with.
+ */
+ init_MUTEX_LOCKED(&evt_sem);
+ if((evt_pid = kernel_thread(i2o_core_evt, &evt_reply, CLONE_SIGHAND)) < 0)
+ {
+ printk(KERN_ERR "I2O: Could not create event handler kernel thread\n");
+ i2o_remove_handler(&i2o_core_handler);
+ return 0;
+ }
+
+
#ifdef CONFIG_I2O_PCI
i2o_pci_init();
#endif
if(i2o_num_controllers)
i2o_sys_init();
+ register_reboot_notifier(&i2o_reboot_notifier);
+
i2o_config_init();
#ifdef CONFIG_I2O_BLOCK
i2o_block_init();
/*
- * linux/drivers/i2o/i2o_lan.c
+ * drivers/i2o/i2o_lan.c
*
- * I2O LAN CLASS OSM January 7th 1999
+ * I2O LAN CLASS OSM April 3rd 2000
*
- * (C) Copyright 1999 University of Helsinki,
- * Department of Computer Science
+ * (C) Copyright 1999, 2000 University of Helsinki,
+ * Department of Computer Science
*
* This code is still under development / test.
*
* 2 of the License, or (at your option) any later version.
*
* Authors: Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
- * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
+ * Fixes: Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
+ * Taneli Vähäkangas <Taneli.Vahakangas@cs.Helsinki.FI>
* Deepak Saxena <deepak@plexity.net>
*
* Tested: in FDDI environment (using SysKonnect's DDM)
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/fddidevice.h>
+#include <linux/trdevice.h>
+#include <linux/fcdevice.h>
+
#include <linux/skbuff.h>
#include <linux/if_arp.h>
#include <linux/malloc.h>
-#include <linux/trdevice.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/tqueue.h>
#define dprintk(s, args...)
#endif
-/* Module params */
-
-static u32 bucketpost = I2O_BUCKET_COUNT;
-static u32 bucketthresh = I2O_BUCKET_THRESH;
-static u32 rx_copybreak = 200;
+/* The following module parameters are used as default values
+ * for per interface values located in the net_device private area.
+ * Private values are changed via /proc filesystem.
+ */
+static u32 max_buckets_out = I2O_LAN_MAX_BUCKETS_OUT;
+static u32 bucket_thresh = I2O_LAN_BUCKET_THRESH;
+static u32 rx_copybreak = I2O_LAN_RX_COPYBREAK;
+static u32 tx_batch_mode = I2O_LAN_TX_BATCH_MODE;
+static u32 i2o_event_mask = I2O_LAN_EVENT_MASK;
#define MAX_LAN_CARDS 16
static struct net_device *i2o_landevs[MAX_LAN_CARDS+1];
-static int unit = -1; /* device unit number */
+static int unit = -1; /* device unit number */
-struct i2o_lan_local {
- u8 unit;
- struct i2o_device *i2o_dev;
- struct fddi_statistics stats; /* see also struct net_device_stats */
- unsigned short (*type_trans)(struct sk_buff *, struct net_device *);
- u32 bucket_count; /* nbr of buckets sent to DDM */
- u32 tx_count; /* packets in one TX message frame */
- u32 tx_max_out; /* DDM's Tx queue len */
- u32 tx_out; /* outstanding TXes */
- u32 sgl_max; /* max SGLs in one message frame */
- u32 m; /* IOP address of msg frame */
-
- struct tq_struct i2o_batch_send_task;
- struct sk_buff **i2o_fbl; /* Free bucket list (to reuse skbs) */
- int i2o_fbl_tail;
-
- spinlock_t lock;
-};
+extern rwlock_t dev_mc_lock;
-static void i2o_lan_reply(struct i2o_handler *h, struct i2o_controller *iop,
- struct i2o_message *m);
-static void i2o_lan_event_reply(struct net_device *dev, u32 *msg);
+static void i2o_lan_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
+static void i2o_lan_send_post_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
static int i2o_lan_receive_post(struct net_device *dev);
-static int i2o_lan_receive_post_reply(struct net_device *dev, u32 *msg);
+static void i2o_lan_receive_post_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
static void i2o_lan_release_buckets(struct net_device *dev, u32 *msg);
+static int i2o_lan_reset(struct net_device *dev);
+static void i2o_lan_handle_event(struct net_device *dev, u32 *msg);
+
+/* Structures to register handlers for the incoming replies. */
+
+static struct i2o_handler i2o_lan_send_handler = {
+ i2o_lan_send_post_reply, // For send replies
+ NULL,
+ NULL,
+ NULL,
+ "I2O Lan OSM send",
+ -1,
+ I2O_CLASS_LAN
+};
+static int lan_send_context;
+
+static struct i2o_handler i2o_lan_receive_handler = {
+ i2o_lan_receive_post_reply, // For receive replies
+ NULL,
+ NULL,
+ NULL,
+ "I2O Lan OSM receive",
+ -1,
+ I2O_CLASS_LAN
+};
+static int lan_receive_context;
+
static struct i2o_handler i2o_lan_handler = {
- i2o_lan_reply,
+ i2o_lan_reply, // For other replies
+ NULL,
+ NULL,
+ NULL,
"I2O Lan OSM",
- 0, // context
+ -1,
I2O_CLASS_LAN
};
static int lan_context;
0, 0, (void (*)(void *))i2o_lan_receive_post, (void *) 0
};
+/* Functions to handle message failures and transaction errors:
+==============================================================*/
+
/*
- * i2o_lan_reply(): The only callback function to handle incoming messages.
+ * i2o_lan_handle_failure(): Fail bit has been set since IOP's message
+ * layer cannot deliver the request to the target, or the target cannot
+ * process the request.
*/
-static void i2o_lan_reply(struct i2o_handler *h, struct i2o_controller *iop,
- struct i2o_message *m)
+static void i2o_lan_handle_failure(struct net_device *dev, u32 *msg)
{
- u32 *msg = (u32 *)m;
- u8 unit = (u8)(msg[2]>>16); // InitiatorContext
- struct net_device *dev = i2o_landevs[unit];
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
- if (msg[0] & (1<<13)) { // Fail bit is set
- printk(KERN_ERR "%s: IOP failed to process the msg:\n",dev->name);
- printk(KERN_ERR " Cmd = 0x%02X, InitiatorTid = %d, TargetTid = %d\n",
- (msg[1] >> 24) & 0xFF, (msg[1] >> 12) & 0xFFF, msg[1] & 0xFFF);
- printk(KERN_ERR " FailureCode = 0x%02X\n Severity = 0x%02X\n "
- "LowestVersion = 0x%02X\n HighestVersion = 0x%02X\n",
- msg[4] >> 24, (msg[4] >> 16) & 0xFF,
- (msg[4] >> 8) & 0xFF, msg[4] & 0xFF);
- printk(KERN_ERR " FailingHostUnit = 0x%04X\n FailingIOP = 0x%03X\n",
- msg[5] >> 16, msg[5] & 0xFFF);
- return;
- }
+ u32 *preserved_msg = (u32*)(iop->mem_offset + msg[7]);
+ // FIXME on 64-bit host
+ u32 *sgl_elem = &preserved_msg[4];
+ struct sk_buff *skb = NULL;
+ u8 le_flag;
-#ifndef DRIVERDEBUG
- if (msg[4] >> 24) /* ReqStatus != SUCCESS */
-#endif
- i2o_report_status(KERN_INFO, dev->name, msg);
-
- switch (msg[1] >> 24) {
- case LAN_RECEIVE_POST:
- {
- if (netif_running(dev)) {
- if (!(msg[4]>>24)) {
- i2o_lan_receive_post_reply(dev,msg);
- break;
- }
+// To be added to i2o_core.c
+// i2o_report_failure(KERN_INFO, iop, dev->name, msg);
- // Something VERY wrong if this is happening
- printk( KERN_WARNING "%s: rejected bucket post.\n", dev->name);
- }
+ /* If PacketSend failed, free sk_buffs reserved by upper layers */
- // Shutting down, we are getting unused buckets back
- i2o_lan_release_buckets(dev,msg);
-
- break;
- }
+ if (msg[1] >> 24 == LAN_PACKET_SEND) {
+ do {
+ skb = (struct sk_buff *)(sgl_elem[1]);
+ dev_kfree_skb_irq(skb);
- case LAN_PACKET_SEND:
- case LAN_SDU_SEND:
- {
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- u8 trl_count = msg[3] & 0x000000FF;
-
- while (trl_count) {
- // The HDM has handled the outgoing packet
- dev_kfree_skb((struct sk_buff *)msg[4 + trl_count]);
- dprintk(KERN_INFO "%s: Request skb freed (trl_count=%d).\n",
- dev->name,trl_count);
- priv->tx_out--;
- trl_count--;
- }
+ atomic_dec(&priv->tx_out);
+
+ le_flag = *sgl_elem >> 31;
+ sgl_elem +=3;
+ } while (le_flag == 0); /* Last element flag not set */
if (netif_queue_stopped(dev))
netif_wake_queue(dev);
-
- break;
}
- case LAN_RESET: /* default reply without payload */
- case LAN_SUSPEND:
- break;
+ /* If ReceivePost failed, free sk_buffs we have reserved */
- case I2O_CMD_UTIL_EVT_REGISTER:
- case I2O_CMD_UTIL_EVT_ACK:
- i2o_lan_event_reply(dev, msg);
- break;
+ if (msg[1] >> 24 == LAN_RECEIVE_POST) {
+ do {
+ skb = (struct sk_buff *)(sgl_elem[1]);
+ dev_kfree_skb_irq(skb);
- default:
- printk(KERN_ERR "%s: No handler for the reply.\n", dev->name);
- i2o_report_status(KERN_INFO, dev->name, msg);
+ atomic_dec(&priv->buckets_out);
+
+ le_flag = *sgl_elem >> 31;
+ sgl_elem +=3;
+ } while (le_flag == 0); /* Last element flag not set */
+ }
+
+ /* Release the preserved msg frame by resubmitting it as a NOP */
+
+ preserved_msg[0] = THREE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ preserved_msg[1] = I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | 0;
+ preserved_msg[2] = 0;
+ i2o_post_message(iop, msg[7]);
+}
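The failure handler above walks the preserved message's scatter-gather list in 3-word elements, stopping when bit 31 of an element's first word (the last-element flag) is set. A user-space sketch of that walk, counting elements instead of freeing sk_buffs (the element layout and function name are illustrative):

```c
#include <stdint.h>

/* Count the entries in a chain of 3-word SGL elements, as the failure
 * handler walks them: bit 31 of an element's first word is the
 * last-element flag, checked after processing each entry. */
static int sgl_count(const uint32_t *sgl_elem)
{
	int n = 0;
	uint8_t le_flag;

	do {
		n++;
		le_flag = *sgl_elem >> 31;  /* last-element flag */
		sgl_elem += 3;              /* advance to next 3-word element */
	} while (le_flag == 0);

	return n;
}
```

Because the flag is tested after the element is handled, the last element is always processed before the loop exits, matching the do/while shape in the handler.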
+/*
+ * i2o_lan_handle_transaction_error(): IOP or DDM has rejected the request
+ * for general cause (format error, bad function code, insufficient resources,
+ * etc.). We get one transaction_error for each failed transaction.
+ */
+static void i2o_lan_handle_transaction_error(struct net_device *dev, u32 *msg)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct sk_buff *skb;
+
+// To be added to i2o_core.c
+// i2o_report_transaction_error(KERN_INFO, dev->name, msg);
+
+ /* If PacketSend was rejected, free sk_buff reserved by upper layers */
+
+ if (msg[1] >> 24 == LAN_PACKET_SEND) {
+ skb = (struct sk_buff *)(msg[3]); // TransactionContext
+ dev_kfree_skb_irq(skb);
+ atomic_dec(&priv->tx_out);
+
+ if (netif_queue_stopped(dev))
+ netif_wake_queue(dev);
+ }
+
+ /* If ReceivePost was rejected, free sk_buff we have reserved */
+
+ if (msg[1] >> 24 == LAN_RECEIVE_POST) {
+ skb = (struct sk_buff *)(msg[3]);
+ dev_kfree_skb_irq(skb);
+ atomic_dec(&priv->buckets_out);
}
}
/*
- * i2o_lan_event_reply(): Handle events.
+ * i2o_lan_handle_status(): Common parts of handling a not succeeded request
+ * (status != SUCCESS).
*/
-static void i2o_lan_event_reply(struct net_device *dev, u32 *msg)
+static int i2o_lan_handle_status(struct net_device *dev, u32 *msg)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- struct i2o_reply {
- u8 version_offset;
- u8 msg_flags;
- u16 msg_size;
- u32 tid:12;
- u32 initiator:12;
- u32 function:8;
- u32 initiator_context;
- u32 transaction_context;
- u32 evt_indicator;
- u32 evt_data[(iop->inbound_size - 20) / 4]; /* max */
- } *evt = (struct i2o_reply *)msg;
-
- int evt_data_len = (evt->msg_size - 5) * 4; /* real */
-
- if (evt->function == I2O_CMD_UTIL_EVT_REGISTER) {
- printk(KERN_INFO "%s: I2O event - ", dev->name);
-
- switch (evt->evt_indicator) {
- case I2O_EVT_IND_STATE_CHANGE:
- printk("State chance 0x%08X.\n",
- evt->evt_data[0]);
- break;
- case I2O_EVT_IND_GENERAL_WARNING:
- printk("General warning 0x%02X.\n",
- evt->evt_data[0]);
- break;
- case I2O_EVT_IND_CONFIGURATION_FLAG:
- printk("Configuration requested.\n");
- break;
- case I2O_EVT_IND_LOCK_RELEASE:
- printk("Lock released.\n");
- break;
- case I2O_EVT_IND_CAPABILITY_CHANGE:
- printk("Capability change 0x%02X.\n",
- evt->evt_data[0]);
- break;
- case I2O_EVT_IND_DEVICE_RESET:
- printk("Device reset.\n");
- break;
- case I2O_EVT_IND_EVT_MASK_MODIFIED:
- printk("Event mask modified, 0x%08X.\n",
- evt->evt_data[0]);
- break;
- case I2O_EVT_IND_FIELD_MODIFIED: {
- u16 *work16 = (u16 *)evt->evt_data;
- printk("Group 0x%04X, field %d changed.\n",
- work16[0], work16[1]);
- break;
- }
- case I2O_EVT_IND_VENDOR_EVT: {
- int i;
- printk("Vendor event:\n");
- for (i = 0; i < evt_data_len / 4; i++)
- printk(" 0x%08X\n", evt->evt_data[i]);
- break;
- }
- case I2O_EVT_IND_DEVICE_STATE:
- printk("Device state changed 0x%08X.\n",
- evt->evt_data[0]);
- break;
- case I2O_LAN_EVT_LINK_DOWN:
- printk("Link to the physical device is lost.\n");
- break;
- case I2O_LAN_EVT_LINK_UP:
- printk("Link to the physical device is (re)established.\n");
- break;
- case I2O_LAN_EVT_MEDIA_CHANGE:
- printk("Media change.\n");
- break;
- default:
- printk("Event Indicator = 0x%08X.\n",
- evt->evt_indicator);
- }
-
- /*
- * EventAck necessary only for events that cause the device
- * to syncronize with the user
- *
- *if (i2o_event_ack(iop, i2o_dev->lct_data->tid,
- * priv->unit << 16 | lan_context,
- * evt->evt_indicator,
- * evt->evt_data, evt_data_len) < 0)
- * printk("%s: Event Acknowledge timeout.\n", dev->name);
- */
- }
-
- /* else evt->function == I2O_CMD_UTIL_EVT_ACK) */
- /* Do we need to do something here too? */
+ /* Fail bit set? */
+
+ if (msg[0] & MSG_FAIL) {
+ i2o_lan_handle_failure(dev, msg);
+ return -1;
+ }
+
+ /* Message rejected for general cause? */
+
+ if ((msg[4]>>24) == I2O_REPLY_STATUS_TRANSACTION_ERROR) {
+ i2o_lan_handle_transaction_error(dev, msg);
+ return -1;
+ }
+
+ /* Else have to handle it in the callback function */
+
+ return 0;
}
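The status dispatch above checks two distinct failure modes in order: the fail bit in the message-flags word, then a transaction-error reply status; anything else is left to the caller. A sketch of that dispatch; the fail-bit position matches the old inline check `msg[0] & (1<<13)` earlier in this file, but the transaction-error status value and all names are assumptions for illustration:

```c
#include <stdint.h>

#define MSG_FAIL_BIT (1u << 13)                 /* fail bit in msg[0] */
#define REPLY_STATUS_TRANSACTION_ERROR 0x0B     /* assumed value, illustrative */

enum lan_status_action { STATUS_OK, STATUS_FAILED, STATUS_TRANS_ERROR };

/* Mirror i2o_lan_handle_status(): a set fail bit wins, then a
 * transaction-error reply status; otherwise the callback handles it. */
static enum lan_status_action classify_status(const uint32_t *msg)
{
	if (msg[0] & MSG_FAIL_BIT)
		return STATUS_FAILED;
	if ((msg[4] >> 24) == REPLY_STATUS_TRANSACTION_ERROR)
		return STATUS_TRANS_ERROR;
	return STATUS_OK;
}
```

Returning a nonzero action for both error cases is what lets the send and receive callbacks bail out with a single `if (i2o_lan_handle_status(dev, msg)) return;`.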
+/* Callback functions called from the interrupt routine:
+=======================================================*/
+
/*
- * i2o_lan_release_buckets(): Handle unused buckets.
+ * i2o_lan_send_post_reply(): Callback function to handle PostSend replies.
*/
-static void i2o_lan_release_buckets(struct net_device *dev, u32 *msg)
+static void i2o_lan_send_post_reply(struct i2o_handler *h,
+ struct i2o_controller *iop, struct i2o_message *m)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- u8 trl_count = (u8)(msg[3] & 0x000000FF);
- u32 *pskb = &msg[6];
+ u32 *msg = (u32 *)m;
+ u8 unit = (u8)(msg[2]>>16); // InitiatorContext
+ struct net_device *dev = i2o_landevs[unit];
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ u8 trl_count = msg[3] & 0x000000FF;
- while (trl_count--) {
- dprintk("%s: Releasing unused sk_buff %p.\n",dev->name,
- (struct sk_buff*)(*pskb));
- dev_kfree_skb((struct sk_buff*)(*pskb));
- pskb++;
- priv->bucket_count--;
+#ifdef DRIVERDEBUG
+ i2o_report_status(KERN_INFO, dev->name, msg);
+#endif
+
+ if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
+ if (i2o_lan_handle_status(dev, msg))
+ return;
+
+ /* Else we get pending transmit request(s) back */
}
+
+ /* DDM has handled transmit request(s), free sk_buffs */
+
+ while (trl_count) {
+ dev_kfree_skb_irq((struct sk_buff *)msg[4 + trl_count]);
+ dprintk(KERN_INFO "%s: Request skb freed (trl_count=%d).\n",
+ dev->name, trl_count);
+ atomic_dec(&priv->tx_out);
+ trl_count--;
+ }
+
+ /* If priv->tx_out had reached tx_max_out, the queue was stopped */
+
+ if (netif_queue_stopped(dev))
+ netif_wake_queue(dev);
}
/*
- * i2o_lan_receive_post_reply(): Process incoming packets.
+ * i2o_lan_receive_post_reply(): Callback function to process incoming packets.
*/
-static int i2o_lan_receive_post_reply(struct net_device *dev, u32 *msg)
+static void i2o_lan_receive_post_reply(struct i2o_handler *h,
+ struct i2o_controller *iop, struct i2o_message *m)
{
+ u32 *msg = (u32 *)m;
+ u8 unit = (u8)(msg[2]>>16); // InitiatorContext
+ struct net_device *dev = i2o_landevs[unit];
+
struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
struct i2o_bucket_descriptor *bucket = (struct i2o_bucket_descriptor *)&msg[6];
struct i2o_packet_info *packet;
u8 trl_count = msg[3] & 0x000000FF;
struct sk_buff *skb, *old_skb;
+ unsigned long flags = 0;
+
+#ifdef DRIVERDEBUG
+ i2o_report_status(KERN_INFO, dev->name, msg);
+#endif
+
+ if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
+ if (i2o_lan_handle_status(dev, msg))
+ return;
+
+ /* Getting unused buckets back? */
+
+ if (msg[4] & I2O_LAN_DSC_CANCELED ||
+ msg[4] & I2O_LAN_DSC_RECEIVE_ABORTED) {
+ i2o_lan_release_buckets(dev, msg);
+ return;
+ }
+
+ /* Which DetailedStatusCodes need special treatment? */
+ }
+
+ /* Else we are receiving incoming post. */
while (trl_count--) {
- skb = (struct sk_buff *)bucket->context;
- packet = (struct i2o_packet_info *)bucket->packet_info;
- priv->bucket_count--;
+ skb = (struct sk_buff *)bucket->context;
+ packet = (struct i2o_packet_info *)bucket->packet_info;
+ atomic_dec(&priv->buckets_out);
+#if 0
+/* Is this enough? If we get an erroneous bucket, we can't assume that the skb
+ * could be reused, can we?
+ */
+
+ /* Should we optimise these ifs away from the fast path? -taneli */
+ if (packet->flags & 0x0f) {
+
+ if (packet->flags & 0x01)
+ printk(KERN_WARNING "%s: packet with errors.\n", dev->name);
+ if (packet->flags & 0x0c)
+			/* This actually means that the hw is b0rken, since we
+			   have asked it not to send fragmented packets. */
+ printk(KERN_DEBUG "%s: multi-bucket packets not supported!\n", dev->name);
+ bucket++;
+ if (skb)
+ dev_kfree_skb_irq(skb);
+ continue;
+ }
- if (packet->len < rx_copybreak) {
+ if (packet->status & 0xff) {
+ /* Silently discard, unless debugging. */
+ dprintk(KERN_DEBUG "%s: toasted packet received.\n", dev->name);
+ bucket++;
+ if (skb)
+ dev_kfree_skb_irq(skb);
+ continue;
+ }
+#endif
+ if (packet->len < priv->rx_copybreak) {
old_skb = skb;
- skb = (struct sk_buff *)dev_alloc_skb(packet->len+2);
+ skb = (struct sk_buff *)dev_alloc_skb(packet->len+2);
if (skb == NULL) {
- printk("%s: Can't allocate skb.\n", dev->name);
- return -ENOMEM;
- }
- skb_reserve(skb,2);
- memcpy(skb_put(skb,packet->len), old_skb->data, packet->len);
-
- if (priv->i2o_fbl_tail < I2O_BUCKET_COUNT)
- priv->i2o_fbl[++priv->i2o_fbl_tail] = old_skb;
+ printk(KERN_ERR "%s: Can't allocate skb.\n", dev->name);
+ return;
+ }
+ skb_reserve(skb, 2);
+ memcpy(skb_put(skb, packet->len), old_skb->data, packet->len);
+
+ spin_lock_irqsave(&priv->fbl_lock, flags);
+ if (priv->i2o_fbl_tail < I2O_LAN_MAX_BUCKETS_OUT)
+ priv->i2o_fbl[++priv->i2o_fbl_tail] = old_skb;
else
- dev_kfree_skb(old_skb);
+ dev_kfree_skb_irq(old_skb);
+ spin_unlock_irqrestore(&priv->fbl_lock, flags);
} else
- skb_put(skb,packet->len);
-
+ skb_put(skb, packet->len);
+
skb->dev = dev;
skb->protocol = priv->type_trans(skb, dev);
netif_rx(skb);
+ dev->last_rx = jiffies;
dprintk(KERN_INFO "%s: Incoming packet (%d bytes) delivered "
- "to upper level.\n",dev->name,packet->len);
+ "to upper level.\n", dev->name, packet->len);
bucket++; // to next Packet Descriptor Block
}
#ifdef DRIVERDEBUG
if (msg[5] == 0)
printk(KERN_INFO "%s: DDM out of buckets (priv->count = %d)!\n",
- dev->name, priv->bucket_count);
+ dev->name, atomic_read(&priv->buckets_out));
#endif
- if (priv->bucket_count <= bucketpost - bucketthresh) {
- i2o_post_buckets_task.data = (void *)dev;
- queue_task(&i2o_post_buckets_task, &tq_immediate);
- mark_bh(IMMEDIATE_BH);
- /* Note: the task is queued only once */
+	/* If DDM has already consumed bucket_thresh buckets, post new ones */
+
+ if (atomic_read(&priv->buckets_out) <= priv->max_buckets_out - priv->bucket_thresh) {
+ i2o_post_buckets_task.data = (void *)dev;
+ queue_task(&i2o_post_buckets_task, &tq_immediate);
+ mark_bh(IMMEDIATE_BH);
+ }
+
+ return;
+}
+
+/*
+ * i2o_lan_reply(): Callback function to handle other incoming messages
+ * except SendPost and ReceivePost.
+ */
+static void i2o_lan_reply(struct i2o_handler *h, struct i2o_controller *iop,
+ struct i2o_message *m)
+{
+ u32 *msg = (u32 *)m;
+ u8 unit = (u8)(msg[2]>>16); // InitiatorContext
+ struct net_device *dev = i2o_landevs[unit];
+
+#ifdef DRIVERDEBUG
+ i2o_report_status(KERN_INFO, dev->name, msg);
+#endif
+
+ if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
+ if (i2o_lan_handle_status(dev, msg))
+ return;
+
+ /* This should NOT be reached */
+ }
+
+ switch (msg[1] >> 24) {
+ case LAN_RESET:
+ case LAN_SUSPEND:
+ /* default reply without payload */
+ break;
+ case I2O_CMD_UTIL_EVT_REGISTER:
+ case I2O_CMD_UTIL_EVT_ACK:
+ i2o_lan_handle_event(dev, msg);
+ break;
+ default:
+ printk(KERN_ERR "%s: No handler for the reply.\n",
+ dev->name);
+ i2o_report_status(KERN_INFO, dev->name, msg);
}
-
- return 0;
}
+/* Functions used by the above callback functions:
+=================================================*/
+/*
+ * i2o_lan_release_buckets(): Free unused buckets (sk_buffs).
+ */
+static void i2o_lan_release_buckets(struct net_device *dev, u32 *msg)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ u8 trl_elem_size = (u8)(msg[3]>>8 & 0x000000FF);
+ u8 trl_count = (u8)(msg[3] & 0x000000FF);
+ u32 *pskb = &msg[6];
+
+ while (trl_count--) {
+		dprintk(KERN_DEBUG "%s: Releasing unused sk_buff %p (trl_count=%d).\n",
+			dev->name, (struct sk_buff *)(*pskb), trl_count+1);
+ dev_kfree_skb_irq((struct sk_buff *)(*pskb));
+ pskb += 1 + trl_elem_size;
+ atomic_dec(&priv->buckets_out);
+ }
+}
+
+/*
+ * i2o_lan_handle_event(): Handle events.
+ */
+static void i2o_lan_handle_event(struct net_device *dev, u32 *msg)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ struct i2o_reply {
+ u8 version_offset;
+ u8 msg_flags;
+ u16 msg_size;
+ u32 tid:12;
+ u32 initiator:12;
+ u32 function:8;
+ u32 initiator_context;
+ u32 transaction_context;
+ u32 evt_indicator;
+ u32 data[(iop->inbound_size - 20) / 4]; /* max */
+ } *evt = (struct i2o_reply *)msg;
+ int evt_data_len = (evt->msg_size - 5) * 4; /* real */
+
+ printk(KERN_INFO "%s: I2O event - ", dev->name);
+
+ if (evt->function == I2O_CMD_UTIL_EVT_ACK) {
+ printk("Event acknowledgement reply.\n");
+ return;
+ }
+
+ /* Else evt->function == I2O_CMD_UTIL_EVT_REGISTER) */
+
+ switch (evt->evt_indicator) {
+ case I2O_EVT_IND_STATE_CHANGE: {
+ struct state_data {
+ u16 status;
+ u8 state;
+ u8 data;
+ } *evt_data = (struct state_data *)(evt->data[0]);
+
+		printk("State change 0x%08x.\n", evt->data[0]);
+
+ /* If the DDM is in error state, recovery may be
+ * possible if status = Transmit or Receive Control
+ * Unit Inoperable.
+ */
+ if (evt_data->state==0x05 && evt_data->status==0x0003)
+ i2o_lan_reset(dev);
+ break;
+ }
+
+ case I2O_EVT_IND_GENERAL_WARNING:
+ printk("General warning 0x%04x.\n", evt->data[0]);
+ break;
+
+ case I2O_EVT_IND_CONFIGURATION_FLAG:
+ printk("Configuration requested.\n");
+ break;
+
+ case I2O_EVT_IND_CAPABILITY_CHANGE:
+ printk("Capability change 0x%04x.\n", evt->data[0]);
+ break;
+
+ case I2O_EVT_IND_DEVICE_RESET:
+ /* Spec 2.0 p. 6-121:
+		 * The _DEVICE_RESET event should also be acknowledged.
+ */
+ printk("Device reset.\n");
+ if (i2o_event_ack(iop, msg) < 0)
+ printk("%s: Event Acknowledge timeout.\n", dev->name);
+ break;
+
+ case I2O_EVT_IND_EVT_MASK_MODIFIED:
+ printk("Event mask modified, 0x%08x.\n", evt->data[0]);
+ break;
+
+ case I2O_EVT_IND_FIELD_MODIFIED: {
+ u16 *work16 = (u16 *)evt->data;
+ printk("Group 0x%04x, field %d changed.\n", work16[0],
+ work16[1]);
+ break;
+ }
+
+ case I2O_EVT_IND_VENDOR_EVT: {
+ int i;
+ printk("Vendor event:\n");
+ for (i = 0; i < evt_data_len / 4; i++)
+ printk(" 0x%08x\n", evt->data[i]);
+ break;
+ }
+
+ case I2O_EVT_IND_DEVICE_STATE:
+ printk("Device state changed 0x%08x.\n", evt->data[0]);
+ break;
+
+ case I2O_LAN_EVT_LINK_DOWN:
+ printk("Link to the physical device is lost.\n");
+ break;
+
+ case I2O_LAN_EVT_LINK_UP:
+ printk("Link to the physical device is (re)established.\n");
+ break;
+
+ case I2O_LAN_EVT_MEDIA_CHANGE:
+ printk("Media change.\n");
+ break;
+
+ default:
+ printk("Event Indicator = 0x%08x.\n", evt->evt_indicator);
+ }
+
+ /* Note: EventAck necessary only for events that cause the device to
+	 * synchronize with the user.
+ */
+}
/*
* i2o_lan_receive_post(): Post buckets to receive packets.
struct i2o_device *i2o_dev = priv->i2o_dev;
struct i2o_controller *iop = i2o_dev->controller;
struct sk_buff *skb;
- u32 m; u32 *msg;
- u32 bucket_len = (dev->mtu + dev->hard_header_len);
- u32 total = bucketpost - priv->bucket_count;
- u32 bucket_count;
- u32 *sgl_elem;
+ u32 m, *msg;
+ u32 bucket_len = (dev->mtu + dev->hard_header_len);
+ u32 total = priv->max_buckets_out - atomic_read(&priv->buckets_out);
+ u32 bucket_count;
+ u32 *sgl_elem;
+ unsigned long flags;
+
+ /* Send (total/bucket_count) separate I2O requests */
+
+ while (total) {
+ m = I2O_POST_READ32(iop);
+ if (m == 0xFFFFFFFF)
+ return -ETIMEDOUT;
+ msg = (u32 *)(iop->mem_offset + m);
- while (total) {
- m = I2O_POST_READ32(iop);
- if (m == 0xFFFFFFFF)
- return -ETIMEDOUT;
- msg = (u32 *)(iop->mem_offset + m);
+ bucket_count = (total >= priv->sgl_max) ? priv->sgl_max : total;
+ total -= bucket_count;
+ atomic_add(bucket_count, &priv->buckets_out);
- bucket_count = (total >= priv->sgl_max) ? priv->sgl_max : total;
- total -= bucket_count;
- priv->bucket_count += bucket_count;
+ dprintk(KERN_INFO "%s: Sending %d buckets (size %d) to LAN DDM.\n",
+ dev->name, bucket_count, bucket_len);
- dprintk(KERN_INFO "%s: Sending %d buckets (size %d) to LAN HDM.\n",
- dev->name, bucket_count, bucket_len);
+ /* Fill in the header */
__raw_writel(I2O_MESSAGE_SIZE(4 + 3 * bucket_count) | SGL_OFFSET_4, msg);
- __raw_writel(LAN_RECEIVE_POST<<24 | HOST_TID<<12 | i2o_dev->lct_data->tid, msg+1);
- __raw_writel(priv->unit << 16 | lan_context, msg+2);
+ __raw_writel(LAN_RECEIVE_POST<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
+ __raw_writel(priv->unit << 16 | lan_receive_context, msg+2);
__raw_writel(bucket_count, msg+3);
- sgl_elem = &msg[4];
-
- while (bucket_count--) {
- if (priv->i2o_fbl_tail >= 0)
- skb = priv->i2o_fbl[priv->i2o_fbl_tail--];
- else {
- skb = dev_alloc_skb(bucket_len + 2);
- if (skb == NULL)
- return -ENOMEM;
- skb_reserve(skb, 2);
- }
- __raw_writel(0x51000000 | bucket_len, sgl_elem);
- __raw_writel((u32)skb, sgl_elem+1);
- __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
- sgl_elem += 3;
- }
-
- /* set LE flag and post buckets */
+ sgl_elem = &msg[4];
+
+ /* Fill in the payload - contains bucket_count SGL elements */
+
+ while (bucket_count--) {
+ spin_lock_irqsave(&priv->fbl_lock, flags);
+ if (priv->i2o_fbl_tail >= 0)
+ skb = priv->i2o_fbl[priv->i2o_fbl_tail--];
+ else {
+ skb = dev_alloc_skb(bucket_len + 2);
+ if (skb == NULL) {
+ spin_unlock_irqrestore(&priv->fbl_lock, flags);
+ return -ENOMEM;
+ }
+ skb_reserve(skb, 2);
+ }
+ spin_unlock_irqrestore(&priv->fbl_lock, flags);
+
+ __raw_writel(0x51000000 | bucket_len, sgl_elem);
+ __raw_writel((u32)skb, sgl_elem+1);
+ __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
+ sgl_elem += 3;
+ }
+
+ /* set LE flag and post */
__raw_writel(__raw_readl(sgl_elem-3) | 0x80000000, (sgl_elem-3));
- i2o_post_message(iop,m);
- }
+ i2o_post_message(iop, m);
+ }
- return 0;
+ return 0;
}
+/* Functions called from the network stack, and functions called by them:
+========================================================================*/
+
/*
* i2o_lan_reset(): Reset the LAN adapter into the operational state and
* restore it to full operation.
*/
static int i2o_lan_reset(struct net_device *dev)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
+ struct i2o_controller *iop = i2o_dev->controller;
u32 msg[5];
dprintk(KERN_INFO "%s: LAN RESET MESSAGE.\n", dev->name);
msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
- msg[1] = LAN_RESET<<24 | HOST_TID<<12 | i2o_dev->lct_data->tid;
+ msg[1] = LAN_RESET<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid;
msg[2] = priv->unit << 16 | lan_context; // InitiatorContext
msg[3] = 0; // TransactionContext
- msg[4] = 1 << 16; // return posted buckets
+ msg[4] = 0; // keep posted buckets
if (i2o_post_this(iop, msg, sizeof(msg)) < 0)
- return -ETIMEDOUT;
+ return -ETIMEDOUT;
return 0;
}
/*
* i2o_lan_suspend(): Put LAN adapter into a safe, non-active state.
- * Reply to any LAN class message with status error_no_data_transfer
+ * IOP replies to any LAN class message with status error_no_data_transfer
* / suspended.
*/
static int i2o_lan_suspend(struct net_device *dev)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
+ struct i2o_controller *iop = i2o_dev->controller;
u32 msg[5];
dprintk(KERN_INFO "%s: LAN SUSPEND MESSAGE.\n", dev->name);
msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
- msg[1] = LAN_SUSPEND<<24 | HOST_TID<<12 | i2o_dev->lct_data->tid;
+ msg[1] = LAN_SUSPEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid;
msg[2] = priv->unit << 16 | lan_context; // InitiatorContext
msg[3] = 0; // TransactionContext
msg[4] = 1 << 16; // return posted buckets
*/
static void i2o_set_batch_mode(struct net_device *dev)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
u32 val;
- /* set LAN_BATCH_CONTROL attributes */
+	/* Set default LAN_BATCH_CONTROL attributes */
+ /* May be changed via /proc or Configuration Utility */
- // enable batch mode, toggle automatically
- val = 0x00000000;
- if (i2o_set_scalar(iop, i2o_dev->lct_data->tid, 0x0003, 0, &val, sizeof(val)) <0)
+ val = 0x00000000; // enable batch mode, toggle automatically
+ if (i2o_set_scalar(iop, i2o_dev->lct_data.tid, 0x0003, 0, &val, sizeof(val)) <0)
printk(KERN_WARNING "%s: Unable to enter I2O LAN batch mode.\n",
- dev->name);
+ dev->name);
+ else
+ dprintk(KERN_INFO "%s: I2O LAN batch mode enabled.\n", dev->name);
+
+ /* Set LAN_OPERATION attributes */
+
+#ifdef DRIVERDEBUG
+/* Added for testing: this will be removed */
+ val = 0x00000003; // 1 = UserFlags
+ if (i2o_set_scalar(iop, i2o_dev->lct_data.tid, 0x0004, 1, &val, sizeof(val)) < 0)
+ printk(KERN_WARNING "%s: Can't enable ErrorReporting & BadPacketHandling.\n",
+ dev->name);
else
- dprintk(KERN_INFO "%s: I2O LAN batch mode enabled.\n",dev->name);
+ dprintk(KERN_INFO "%s: ErrorReporting enabled, "
+ "BadPacketHandling enabled.\n", dev->name);
+#endif /* DRIVERDEBUG */
/*
* When PacketOrphanlimit is same as the maximum packet length,
* the packets will never be split into two separate buckets
*/
-
- /* set LAN_OPERATION attributes */
-
- val = dev->mtu + dev->hard_header_len; // PacketOrphanLimit
- if (i2o_set_scalar(iop, i2o_dev->lct_data->tid, 0x0004, 2, &val, sizeof(val)) < 0)
+ val = dev->mtu + dev->hard_header_len; // 2 = PacketOrphanLimit
+ if (i2o_set_scalar(iop, i2o_dev->lct_data.tid, 0x0004, 2, &val, sizeof(val)) < 0)
printk(KERN_WARNING "%s: Unable to set PacketOrphanLimit.\n",
- dev->name);
+ dev->name);
else
dprintk(KERN_INFO "%s: PacketOrphanLimit set to %d.\n",
- dev->name,val);
-
- return;
+ dev->name, val);
+
+ return;
}
+/* Functions called from the network stack:
+==========================================*/
+
/*
* i2o_lan_open(): Open the device to send/receive packets via
- * the network device.
+ * the network device.
*/
static int i2o_lan_open(struct net_device *dev)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
struct i2o_device *i2o_dev = priv->i2o_dev;
-#if 0
struct i2o_controller *iop = i2o_dev->controller;
- u32 evt_mask = 0xFFC00007; // All generic events, all lan events
-#endif
- if (i2o_claim_device(i2o_dev, &i2o_lan_handler, I2O_CLAIM_PRIMARY)) {
+
+ MOD_INC_USE_COUNT;
+
+ if (i2o_claim_device(i2o_dev, &i2o_lan_handler)) {
printk(KERN_WARNING "%s: Unable to claim the I2O LAN device.\n", dev->name);
+ MOD_DEC_USE_COUNT;
return -EAGAIN;
}
- dprintk(KERN_INFO "%s: I2O LAN device claimed (tid=%d).\n",
- dev->name, i2o_dev->lct_data->tid);
-#if 0
- if (i2o_event_register(iop, i2o_dev->lct_data->tid,
- priv->unit << 16 | lan_context, evt_mask) < 0)
+ dprintk(KERN_INFO "%s: I2O LAN device (tid=%d) claimed by LAN OSM.\n",
+ dev->name, i2o_dev->lct_data.tid);
+
+ if (i2o_event_register(iop, i2o_dev->lct_data.tid,
+ priv->unit << 16 | lan_context, 0, priv->i2o_event_mask) < 0)
printk(KERN_WARNING "%s: Unable to set the event mask.\n", dev->name);
-#endif
+
i2o_lan_reset(dev);
-
- priv->i2o_fbl = kmalloc(bucketpost * sizeof(struct sk_buff *),GFP_KERNEL);
- if (priv->i2o_fbl == NULL)
+
+ priv->i2o_fbl = kmalloc(priv->max_buckets_out * sizeof(struct sk_buff *),
+ GFP_KERNEL);
+ if (priv->i2o_fbl == NULL) {
+ MOD_DEC_USE_COUNT;
return -ENOMEM;
+ }
priv->i2o_fbl_tail = -1;
-
- netif_start_queue(dev);
+ priv->send_active = 0;
i2o_set_batch_mode(dev);
i2o_lan_receive_post(dev);
- MOD_INC_USE_COUNT;
+ netif_start_queue(dev);
return 0;
}
*/
static int i2o_lan_close(struct net_device *dev)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
-#if 0
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
struct i2o_controller *iop = i2o_dev->controller;
- if (i2o_event_register(iop, i2o_dev->lct_data->tid,
- priv->unit << 16 | lan_context, 0) < 0)
- printk(KERN_WARNING "%s: Unable to clear the event mask.\n",
-#endif dev->name);
-
netif_stop_queue(dev);
+
i2o_lan_suspend(dev);
- if (i2o_release_device(i2o_dev, &i2o_lan_handler, I2O_CLAIM_PRIMARY))
+ if (i2o_event_register(iop, i2o_dev->lct_data.tid,
+ priv->unit << 16 | lan_context, 0, 0) < 0)
+ printk(KERN_WARNING "%s: Unable to clear the event mask.\n",
+ dev->name);
+
+ if (i2o_release_device(i2o_dev, &i2o_lan_handler)) {
printk(KERN_WARNING "%s: Unable to unclaim I2O LAN device "
- "(tid=%d).\n", dev->name, i2o_dev->lct_data->tid);
+ "(tid=%d).\n", dev->name, i2o_dev->lct_data.tid);
+ return -EBUSY;
+ }
while (priv->i2o_fbl_tail >= 0)
dev_kfree_skb(priv->i2o_fbl[priv->i2o_fbl_tail--]);
return 0;
}
-#if 0
/*
- * i2o_lan_sdu_send(): Send a packet, MAC header added by the HDM.
- * Must be supported by Fibre Channel, optional for Ethernet/802.3,
- * Token Ring, FDDI
+ * i2o_lan_tx_timeout(): Tx timeout handler.
*/
-static int i2o_lan_sdu_send(struct sk_buff *skb, struct net_device *dev)
-{
- return -EINVAL;
+static void i2o_lan_tx_timeout(struct net_device *dev)
+{
+ if (!netif_queue_stopped(dev))
+ netif_start_queue(dev);
}
-#endif
+#define batching(x, cond) ( (x)->tx_batch_mode==1 || ((x)->tx_batch_mode==2 && (cond)) )
+
+/*
+ * Batch send packets. Both i2o_lan_sdu_send and i2o_lan_packet_send
+ * use this. I'm still not pleased. If you come up with
+ * something better, please tell me. -taneli
+ */
static void i2o_lan_batch_send(struct net_device *dev)
-{
+{
struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
struct i2o_controller *iop = priv->i2o_dev->controller;
+ spin_lock_irq(&priv->tx_lock);
if (priv->tx_count != 0) {
+ dev->trans_start = jiffies;
+ i2o_post_message(iop, priv->m);
+ dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
+ priv->tx_count = 0;
+ }
+ spin_unlock_irq(&priv->tx_lock);
+
+ priv->send_active = 0;
+}
+
+/*
+ * i2o_lan_sdu_send(): Send a packet, MAC header added by the DDM.
+ * Must be supported by Fibre Channel, optional for Ethernet/802.3,
+ * Token Ring, FDDI
+ */
+
+/*
+ * This is a coarse first approximation. Needs testing. Any takers? -taneli
+ */
+static int i2o_lan_sdu_send(struct sk_buff *skb, struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ int tickssofar = jiffies - dev->trans_start;
+ u32 m, *msg;
+ u32 *sgl_elem;
+
+ spin_lock_irq(&priv->tx_lock);
+
+ priv->tx_count++;
+ atomic_inc(&priv->tx_out);
+
+ if (priv->tx_count == 1) {
+ m = I2O_POST_READ32(iop);
+ if (m == 0xFFFFFFFF) {
+ spin_unlock_irq(&priv->tx_lock);
+ return 1;
+ }
+ msg = (u32 *)(iop->mem_offset + m);
+ priv->m = m;
+
+ __raw_writel(NINE_WORD_MSG_SIZE | 1<<12 | SGL_OFFSET_4, msg);
+ __raw_writel(LAN_PACKET_SEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
+ __raw_writel(priv->unit << 16 | lan_send_context, msg+2); // InitiatorContext
+ __raw_writel(1 << 3, msg+3); // TransmitControlWord
+
+ __raw_writel(0xD7000000 | skb->len, msg+4); // MAC hdr included
+ __raw_writel((u32)skb, msg+5); // TransactionContext
+ __raw_writel(virt_to_bus(skb->data), msg+6);
+ __raw_writel((u32)skb->mac.raw, msg+7);
+ __raw_writel((u32)skb->mac.raw+4, msg+8);
+ if (batching(priv, !tickssofar) && !priv->send_active) {
+ priv->send_active = 1;
+ queue_task(&priv->i2o_batch_send_task, &tq_scheduler);
+ }
+ } else { /* Add new SGL element to the previous message frame */
+
+ msg = (u32 *)(iop->mem_offset + priv->m);
+ sgl_elem = &msg[priv->tx_count * 5 + 1];
+
+ __raw_writel(I2O_MESSAGE_SIZE((__raw_readl(msg)>>16) + 5) | 1<<12 | SGL_OFFSET_4, msg);
+ __raw_writel(__raw_readl(sgl_elem-5) & 0x7FFFFFFF, sgl_elem-5); /* clear LE flag */
+ __raw_writel(0xD5000000 | skb->len, sgl_elem);
+ __raw_writel((u32)skb, sgl_elem+1);
+ __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
+ __raw_writel((u32)(skb->mac.raw), sgl_elem+3);
+ __raw_writel((u32)(skb->mac.raw)+1, sgl_elem+4);
+ }
+
+	/* If tx not in batch mode or frame is full, send immediately */
+
+ if (!batching(priv, !tickssofar) || priv->tx_count == priv->sgl_max) {
+ dev->trans_start = jiffies;
i2o_post_message(iop, priv->m);
- dprintk("%s: %d packets sent.\n", dev->name, priv->tx_count);
+ dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
priv->tx_count = 0;
}
+
+	/* If DDM's TxMaxPktOut is reached, stop the queueing layer from sending more */
+
+ if (atomic_read(&priv->tx_out) >= priv->tx_max_out)
+ netif_stop_queue(dev);
+
+ spin_unlock_irq(&priv->tx_lock);
+ return 0;
}
/*
struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
struct i2o_device *i2o_dev = priv->i2o_dev;
struct i2o_controller *iop = i2o_dev->controller;
+ int tickssofar = jiffies - dev->trans_start;
u32 m, *msg;
u32 *sgl_elem;
+ spin_lock_irq(&priv->tx_lock);
+
priv->tx_count++;
- priv->tx_out++;
+ atomic_inc(&priv->tx_out);
if (priv->tx_count == 1) {
- dprintk("%s: New message frame\n", dev->name);
-
m = I2O_POST_READ32(iop);
if (m == 0xFFFFFFFF) {
- dev_kfree_skb(skb);
- return -ETIMEDOUT;
+ spin_unlock_irq(&priv->tx_lock);
+ return 1;
}
msg = (u32 *)(iop->mem_offset + m);
priv->m = m;
__raw_writel(SEVEN_WORD_MSG_SIZE | 1<<12 | SGL_OFFSET_4, msg);
- __raw_writel(LAN_PACKET_SEND<<24 | HOST_TID<<12 | i2o_dev->lct_data->tid, msg+1);
- __raw_writel(priv->unit << 16 | lan_context, msg+2); // InitiatorContext
- __raw_writel(1 << 4, msg+3); // TransmitControlWord
+ __raw_writel(LAN_PACKET_SEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
+ __raw_writel(priv->unit << 16 | lan_send_context, msg+2); // InitiatorContext
+ __raw_writel(1 << 3, msg+3); // TransmitControlWord
+
__raw_writel(0xD5000000 | skb->len, msg+4); // MAC hdr included
__raw_writel((u32)skb, msg+5); // TransactionContext
__raw_writel(virt_to_bus(skb->data), msg+6);
-
- queue_task(&priv->i2o_batch_send_task, &tq_scheduler);
-
+ if (batching(priv, !tickssofar) && !priv->send_active) {
+ priv->send_active = 1;
+ queue_task(&priv->i2o_batch_send_task, &tq_scheduler);
+ }
} else { /* Add new SGL element to the previous message frame */
-
- dprintk("%s: Adding packet %d to msg frame\n",
- dev->name, priv->tx_count);
msg = (u32 *)(iop->mem_offset + priv->m);
sgl_elem = &msg[priv->tx_count * 3 + 1];
__raw_writel(0xD5000000 | skb->len, sgl_elem);
__raw_writel((u32)skb, sgl_elem+1);
__raw_writel(virt_to_bus(skb->data), sgl_elem+2);
+ }
- if (priv->tx_count == priv->sgl_max) { /* frame full, send now */
- i2o_post_message(iop, priv->m);
- dprintk("%s: %d packets sent.\n", dev->name, priv->tx_count);
- priv->tx_count = 0;
- }
+	/* If tx not in batch mode or frame is full, send immediately */
+
+ if (!batching(priv, !tickssofar) || priv->tx_count == priv->sgl_max) {
+ dev->trans_start = jiffies;
+ i2o_post_message(iop, priv->m);
+		dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
+ priv->tx_count = 0;
}
-
- /* If HDMs TxMaxPktOut reached, stay busy (don't clean tbusy) */
- if (priv->tx_out >= priv->tx_max_out)
+	/* If DDM's TxMaxPktOut is reached, stop the queueing layer from sending more */
+
+ if (atomic_read(&priv->tx_out) >= priv->tx_max_out)
netif_stop_queue(dev);
-
+
+ spin_unlock_irq(&priv->tx_lock);
return 0;
}
*/
static struct net_device_stats *i2o_lan_get_stats(struct net_device *dev)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
struct i2o_device *i2o_dev = priv->i2o_dev;
struct i2o_controller *iop = i2o_dev->controller;
u64 val64[16];
u64 supported_group[4] = { 0, 0, 0, 0 };
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0100, -1, val64,
- sizeof(val64)) < 0)
- printk("%s: Unable to query LAN_HISTORICAL_STATS.\n",dev->name);
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0100, -1, val64,
+ sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_HISTORICAL_STATS.\n", dev->name);
else {
- dprintk("%s: LAN_HISTORICAL_STATS queried.\n",dev->name);
- priv->stats.tx_packets = val64[0];
- priv->stats.tx_bytes = val64[1];
- priv->stats.rx_packets = val64[2];
- priv->stats.rx_bytes = val64[3];
- priv->stats.tx_errors = val64[4];
- priv->stats.rx_errors = val64[5];
+ dprintk(KERN_DEBUG "%s: LAN_HISTORICAL_STATS queried.\n", dev->name);
+ priv->stats.tx_packets = val64[0];
+ priv->stats.tx_bytes = val64[1];
+ priv->stats.rx_packets = val64[2];
+ priv->stats.rx_bytes = val64[3];
+ priv->stats.tx_errors = val64[4];
+ priv->stats.rx_errors = val64[5];
priv->stats.rx_dropped = val64[6];
}
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0180, -1,
- &supported_group, sizeof(supported_group)) < 0)
- printk("%s: Unable to query LAN_SUPPORTED_OPTIONAL_HISTORICAL_STATS.\n",dev->name);
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0180, -1,
+ &supported_group, sizeof(supported_group)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_SUPPORTED_OPTIONAL_HISTORICAL_STATS.\n", dev->name);
if (supported_group[2]) {
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0183, -1,
- val64, sizeof(val64)) < 0)
- printk("%s: Unable to query LAN_OPTIONAL_RX_HISTORICAL_STATS.\n",dev->name);
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0183, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_OPTIONAL_RX_HISTORICAL_STATS.\n", dev->name);
else {
- dprintk("%s: LAN_OPTIONAL_RX_HISTORICAL_STATS queried.\n",dev->name);
- priv->stats.multicast = val64[4];
+ dprintk(KERN_DEBUG "%s: LAN_OPTIONAL_RX_HISTORICAL_STATS queried.\n", dev->name);
+ priv->stats.multicast = val64[4];
priv->stats.rx_length_errors = val64[10];
priv->stats.rx_crc_errors = val64[0];
}
}
- if (i2o_dev->lct_data->sub_class == I2O_LAN_ETHERNET) {
- u64 supported_stats = 0;
-
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0200, -1,
- val64, sizeof(val64)) < 0)
- printk("%s: Unable to query LAN_802_3_HISTORICAL_STATS.\n",dev->name);
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_ETHERNET) {
+ u64 supported_stats = 0;
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0200, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_802_3_HISTORICAL_STATS.\n", dev->name);
else {
- dprintk("%s: LAN_802_3_HISTORICAL_STATS queried.\n",dev->name);
+ dprintk(KERN_DEBUG "%s: LAN_802_3_HISTORICAL_STATS queried.\n", dev->name);
priv->stats.transmit_collision = val64[1] + val64[2];
- priv->stats.rx_frame_errors = val64[0];
+ priv->stats.rx_frame_errors = val64[0];
priv->stats.tx_carrier_errors = val64[6];
}
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0280, -1,
- &supported_stats, sizeof(supported_stats)) < 0)
- printk("%s: Unable to query LAN_SUPPORTED_802_3_HISTORICAL_STATS.\n", dev->name);
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0280, -1,
+ &supported_stats, sizeof(supported_stats)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_SUPPORTED_802_3_HISTORICAL_STATS.\n", dev->name);
- if (supported_stats != 0) {
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0281, -1,
- val64, sizeof(val64)) < 0)
- printk("%s: Unable to query LAN_OPTIONAL_802_3_HISTORICAL_STATS.\n",dev->name);
+ if (supported_stats != 0) {
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0281, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_OPTIONAL_802_3_HISTORICAL_STATS.\n", dev->name);
else {
- dprintk("%s: LAN_OPTIONAL_802_3_HISTORICAL_STATS queried.\n",dev->name);
+ dprintk(KERN_DEBUG "%s: LAN_OPTIONAL_802_3_HISTORICAL_STATS queried.\n", dev->name);
if (supported_stats & 0x1)
priv->stats.rx_over_errors = val64[0];
if (supported_stats & 0x4)
}
#ifdef CONFIG_TR
- if (i2o_dev->lct_data->sub_class == I2O_LAN_TR) {
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0300, -1,
- val64, sizeof(val64)) < 0)
- printk("%s: Unable to query LAN_802_5_HISTORICAL_STATS.\n",dev->name);
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_TR) {
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0300, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_802_5_HISTORICAL_STATS.\n", dev->name);
else {
struct tr_statistics *stats =
- (struct tr_statistics *)&priv->stats;
- dprintk("%s: LAN_802_5_HISTORICAL_STATS queried.\n",dev->name);
+ (struct tr_statistics *)&priv->stats;
+ dprintk(KERN_DEBUG "%s: LAN_802_5_HISTORICAL_STATS queried.\n", dev->name);
stats->line_errors = val64[0];
stats->internal_errors = val64[7];
#endif
#ifdef CONFIG_FDDI
- if (i2o_dev->lct_data->sub_class == I2O_LAN_FDDI) {
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0400, -1,
- val64, sizeof(val64)) < 0)
- printk("%s: Unable to query LAN_FDDI_HISTORICAL_STATS.\n",dev->name);
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_FDDI) {
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0400, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_FDDI_HISTORICAL_STATS.\n", dev->name);
else {
- dprintk("%s: LAN_FDDI_HISTORICAL_STATS queried.\n",dev->name);
+ dprintk(KERN_DEBUG "%s: LAN_FDDI_HISTORICAL_STATS queried.\n", dev->name);
priv->stats.smt_cf_state = val64[0];
memcpy(priv->stats.mac_upstream_nbr, &val64[1], FDDI_K_ALEN);
- memcpy(priv->stats.mac_downstream_nbr, &val64[2], FDDI_K_ALEN);
+ memcpy(priv->stats.mac_downstream_nbr, &val64[2], FDDI_K_ALEN);
priv->stats.mac_error_cts = val64[3];
priv->stats.mac_lost_cts = val64[4];
priv->stats.mac_rmt_state = val64[5];
memcpy(priv->stats.port_lct_fail_cts, &val64[6], 8);
- memcpy(priv->stats.port_lem_reject_cts, &val64[7], 8);
+ memcpy(priv->stats.port_lem_reject_cts, &val64[7], 8);
memcpy(priv->stats.port_lem_cts, &val64[8], 8);
memcpy(priv->stats.port_pcm_state, &val64[9], 8);
}
/* FDDI optional stats not yet defined */
- }
+ }
+#endif
+
+#ifdef CONFIG_NET_FC
+	/* Fibre Channel Statistics not yet defined in 1.53 or 2.0 */
#endif
return (struct net_device_stats *)&priv->stats;
static void i2o_lan_set_mc_list(struct net_device *dev)
{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- u32 filter_mask;
- u32 max_size_mc_table;
- u32 mc_addr_group[64];
-
- if (i2o_query_scalar(iop, i2o_dev->lct_data->tid, 0x0001, -1,
- &mc_addr_group, sizeof(mc_addr_group)) < 0 ) {
- printk(KERN_WARNING "%s: Unable to query LAN_MAC_ADDRESS group.\n", dev->name);
- return;
- }
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ u32 filter_mask;
+ u32 max_size_mc_table;
+ u32 mc_addr_group[64];
- max_size_mc_table = mc_addr_group[8];
+	// This isn't safe yet. Needs to be async.
+	return;
- if (dev->flags & IFF_PROMISC) {
- filter_mask = 0x00000002;
- dprintk(KERN_INFO "%s: Enabling promiscuous mode...\n", dev->name);
- }
+// read_lock_bh(&dev_mc_lock);
+// spin_lock(&dev->xmit_lock);
+// dev->xmit_lock_owner = smp_processor_id();
- else if ((dev->flags & IFF_ALLMULTI) || dev->mc_count > max_size_mc_table) {
- filter_mask = 0x00000004;
- dprintk(KERN_INFO "%s: Enabling all multicast mode...\n", dev->name);
- }
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0001, -1,
+ &mc_addr_group, sizeof(mc_addr_group)) < 0 ) {
+ printk(KERN_WARNING "%s: Unable to query LAN_MAC_ADDRESS group.\n", dev->name);
+ return;
+ }
- else if (dev->mc_count) {
- struct dev_mc_list *mc;
+ max_size_mc_table = mc_addr_group[8];
+
+ if (dev->flags & IFF_PROMISC) {
+ filter_mask = 0x00000002;
+ printk(KERN_INFO "%s: Enabling promiscuous mode...\n", dev->name);
+ } else if ((dev->flags & IFF_ALLMULTI) || dev->mc_count > max_size_mc_table) {
+ filter_mask = 0x00000004;
+ printk(KERN_INFO "%s: Enabling all multicast mode...\n", dev->name);
+ } else if (dev->mc_count) {
+ struct dev_mc_list *mc;
u8 mc_table[2 + 8 * dev->mc_count]; // RowCount, Addresses
u64 *work64 = (u64 *)(mc_table + 2);
- filter_mask = 0x00000000;
- dprintk(KERN_INFO "%s: Enabling multicast mode...\n", dev->name);
+ filter_mask = 0x00000000;
+ printk(KERN_INFO "%s: Enabling multicast mode...\n", dev->name);
- /* Fill multicast addr table */
+ /* Fill multicast addr table */
memset(mc_table, 0, sizeof(mc_table));
- memcpy(mc_table, &dev->mc_count, 2);
- for (mc = dev->mc_list; mc ; mc = mc->next, work64++ )
- memcpy(work64, mc->dmi_addr, mc->dmi_addrlen);
-
- /* Clear old mc table, copy new table to <iop,tid> */
+ memcpy(mc_table, &dev->mc_count, 2);
+ for (mc = dev->mc_list; mc ; mc = mc->next, work64++ )
+ memcpy(work64, mc->dmi_addr, mc->dmi_addrlen);
- if (i2o_clear_table(iop, i2o_dev->lct_data->tid, 0x0002) < 0)
- printk("%s: Unable to clear LAN_MULTICAST_MAC_ADDRESS table.\n",dev->name);
+ /* Clear old mc table, copy new table to <iop,tid> */
- if ((i2o_row_add_table(iop, i2o_dev->lct_data->tid, 0x0002, -1,
- mc_table, sizeof(mc_table))) < 0)
- printk("%s: Unable to set LAN_MULTICAST_MAC_ADDRESS table.\n",dev->name);
- }
+ if (i2o_clear_table(iop, i2o_dev->lct_data.tid, 0x0002) < 0)
+ printk(KERN_INFO "%s: Unable to clear LAN_MULTICAST_MAC_ADDRESS table.\n", dev->name);
- else {
- filter_mask = 0x00000300; // Broadcast, Multicast disabled
- printk(KERN_INFO "%s: Enabling unicast mode...\n",dev->name);
- }
+ if ((i2o_row_add_table(iop, i2o_dev->lct_data.tid, 0x0002, -1,
+ mc_table, sizeof(mc_table))) < 0)
+ printk(KERN_INFO "%s: Unable to set LAN_MULTICAST_MAC_ADDRESS table.\n", dev->name);
+ } else {
+ filter_mask = 0x00000300; // Broadcast, Multicast disabled
+ printk(KERN_INFO "%s: Enabling unicast mode...\n", dev->name);
+ }
/* Finally copy new FilterMask to <iop,tid> */
- if (i2o_set_scalar(iop, i2o_dev->lct_data->tid, 0x0001, 3,
- &filter_mask, sizeof(filter_mask)) <0)
- printk(KERN_WARNING "%s: Unable to set MAC FilterMask.\n",dev->name);
+ if (i2o_set_scalar(iop, i2o_dev->lct_data.tid, 0x0001, 3,
+ &filter_mask, sizeof(filter_mask)) <0)
+ printk(KERN_WARNING "%s: Unable to set MAC FilterMask.\n", dev->name);
+
+//	dev->xmit_lock_owner = -1;
+//	spin_unlock(&dev->xmit_lock);
+// read_unlock_bh(&dev_mc_lock);
- return;
+ return;
}
+static struct tq_struct i2o_lan_set_mc_list_task = {
+ 0, 0, (void (*)(void *))i2o_lan_set_mc_list, (void *) 0
+};
+
/*
* i2o_lan_set_multicast_list():
* Queue routine i2o_lan_set_mc_list() to be called later.
static void i2o_lan_set_multicast_list(struct net_device *dev)
{
- struct tq_struct *task;
-
- task = (struct tq_struct *)kmalloc(sizeof(struct tq_struct), GFP_KERNEL);
- if (task == NULL)
- return;
-
- task->next = NULL;
- task->sync = 0;
- task->routine = (void *)i2o_lan_set_mc_list;
- task->data = (void *)dev;
- queue_task(task, &tq_scheduler);
+ if (!in_interrupt()) {
+ i2o_lan_set_mc_list_task.data = (void *)dev;
+ queue_task(&i2o_lan_set_mc_list_task, &tq_scheduler);
+ } else {
+ i2o_lan_set_mc_list(dev);
+ }
}
/*
*/
static int i2o_lan_change_mtu(struct net_device *dev, int new_mtu)
{
- if ((new_mtu < 68) || (new_mtu > 9000))
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ u32 max_pkt_size;
+
+ if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
+ 0x0000, 6, &max_pkt_size, 4) < 0)
+ return -EFAULT;
+
+ if (new_mtu < 68 || max_pkt_size < new_mtu)
return -EINVAL;
-
+
dev->mtu = new_mtu;
return 0;
}
+/* Functions to initialize I2O LAN OSM:
+======================================*/
+
/*
* i2o_lan_register_device(): Register LAN class device to kernel.
*/
unsigned short (*type_trans)(struct sk_buff *, struct net_device *);
void (*unregister_dev)(struct net_device *dev);
- switch (i2o_dev->lct_data->sub_class) {
+ switch (i2o_dev->lct_data.sub_class) {
case I2O_LAN_ETHERNET:
- dev = init_etherdev(NULL, sizeof(struct i2o_lan_local));
+ dev = init_etherdev(NULL, sizeof(struct i2o_lan_local));
if (dev == NULL)
return NULL;
type_trans = eth_type_trans;
#ifdef CONFIG_ANYLAN
case I2O_LAN_100VG:
printk(KERN_ERR "i2o_lan: 100base VG not yet supported.\n");
+ return NULL;
break;
#endif
if (dev==NULL)
return NULL;
type_trans = tr_type_trans;
- unregister_dev = unregister_trdev;
+ unregister_dev = unregister_trdev;
break;
#endif
case I2O_LAN_FDDI:
{
int size = sizeof(struct net_device) + sizeof(struct i2o_lan_local)
- + sizeof("fddi%d ");
+ + sizeof("fddi%d ");
- dev = (struct net_device *) kmalloc(size, GFP_KERNEL);
- memset((char *)dev, 0, size);
- dev->priv = (void *)(dev + 1);
- dev->name = (char *)(dev + 1) + sizeof(struct i2o_lan_local);
+ dev = (struct net_device *) kmalloc(size, GFP_KERNEL);
+ if (dev == NULL)
+ return NULL;
+ memset((char *)dev, 0, size);
+ dev->priv = (void *)(dev + 1);
+ dev->name = (char *)(dev + 1) + sizeof(struct i2o_lan_local);
- if (dev_alloc_name(dev,"fddi%d") < 0) {
+ if (dev_alloc_name(dev, "fddi%d") < 0) {
printk(KERN_WARNING "i2o_lan: Too many FDDI devices.\n");
kfree(dev);
return NULL;
}
type_trans = fddi_type_trans;
unregister_dev = (void *)unregister_netdevice;
-
+
fddi_setup(dev);
register_netdev(dev);
- }
+ }
break;
#endif
-#ifdef CONFIG_FIBRE_CHANNEL
+#ifdef CONFIG_NET_FC
case I2O_LAN_FIBRE_CHANNEL:
- printk(KERN_INFO "i2o_lan: Fibre Channel not yet supported.\n");
- break;
+ dev = init_fcdev(NULL, sizeof(struct i2o_lan_local));
+ if (dev == NULL)
+ return NULL;
+ type_trans = NULL;
+/* FIXME: Move fc_type_trans() from drivers/net/fc/iph5526.c to net/802/fc.c
+ * and export it in include/linux/fcdevice.h
+ * type_trans = fc_type_trans;
+ */
+ unregister_dev = (void *)unregister_fcdev;
+ break;
#endif
case I2O_LAN_UNKNOWN:
default:
- printk(KERN_ERR "i2o_lan: LAN type 0x%08X not supported.\n",
- i2o_dev->lct_data->sub_class);
+ printk(KERN_ERR "i2o_lan: LAN type 0x%04x not supported.\n",
+ i2o_dev->lct_data.sub_class);
return NULL;
}
priv = (struct i2o_lan_local *)dev->priv;
priv->i2o_dev = i2o_dev;
priv->type_trans = type_trans;
- priv->bucket_count = 0;
priv->sgl_max = (i2o_dev->controller->inbound_size - 16) / 12;
+ atomic_set(&priv->buckets_out, 0);
+
+ /* Set default values for user configurable parameters */
+ /* Private values are changed via /proc file system */
+
+ priv->max_buckets_out = max_buckets_out;
+ priv->bucket_thresh = bucket_thresh;
+ priv->rx_copybreak = rx_copybreak;
+ priv->tx_batch_mode = tx_batch_mode;
+ priv->i2o_event_mask = i2o_event_mask;
+
+ priv->tx_lock = SPIN_LOCK_UNLOCKED;
+ priv->fbl_lock = SPIN_LOCK_UNLOCKED;
unit++;
i2o_landevs[unit] = dev;
priv->unit = unit;
- if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data->tid,
+ if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
0x0001, 0, &hw_addr, sizeof(hw_addr)) < 0) {
- printk(KERN_ERR "%s: Unable to query hardware address.\n", dev->name);
+ printk(KERN_ERR "%s: Unable to query hardware address.\n", dev->name);
unit--;
unregister_dev(dev);
kfree(dev);
- return NULL;
+ return NULL;
}
- dprintk("%s: hwaddr = %02X:%02X:%02X:%02X:%02X:%02X\n",
- dev->name,hw_addr[0], hw_addr[1], hw_addr[2], hw_addr[3],
- hw_addr[4], hw_addr[5]);
+ dprintk(KERN_DEBUG "%s: hwaddr = %02X:%02X:%02X:%02X:%02X:%02X\n",
+ dev->name, hw_addr[0], hw_addr[1], hw_addr[2], hw_addr[3],
+ hw_addr[4], hw_addr[5]);
dev->addr_len = 6;
memcpy(dev->dev_addr, hw_addr, 6);
- if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data->tid,
- 0x0007, 2, &tx_max_out, sizeof(tx_max_out)) < 0)
- {
- printk(KERN_ERR "%s: Unable to query max TX queue.\n", dev->name);
- unit--;
- unregister_dev(dev);
- kfree(dev);
- return NULL;
+ if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
+ 0x0007, 2, &tx_max_out, sizeof(tx_max_out)) < 0) {
+ printk(KERN_ERR "%s: Unable to query max TX queue.\n", dev->name);
+ unit--;
+ unregister_dev(dev);
+ kfree(dev);
+ return NULL;
}
- dprintk(KERN_INFO "%s: Max TX Outstanding = %d.\n", dev->name, tx_max_out);
- priv->tx_max_out = tx_max_out;
- priv->tx_out = 0;
- priv->tx_count = 0;
- priv->lock = SPIN_LOCK_UNLOCKED;
+ dprintk(KERN_INFO "%s: Max TX Outstanding = %d.\n", dev->name, tx_max_out);
+ priv->tx_max_out = tx_max_out;
+ atomic_set(&priv->tx_out, 0);
+ priv->tx_count = 0;
priv->i2o_batch_send_task.next = NULL;
priv->i2o_batch_send_task.sync = 0;
priv->i2o_batch_send_task.routine = (void *)i2o_lan_batch_send;
priv->i2o_batch_send_task.data = (void *)dev;
- dev->open = i2o_lan_open;
- dev->stop = i2o_lan_close;
- dev->hard_start_xmit = i2o_lan_packet_send;
- dev->get_stats = i2o_lan_get_stats;
+ dev->open = i2o_lan_open;
+ dev->stop = i2o_lan_close;
+ dev->get_stats = i2o_lan_get_stats;
dev->set_multicast_list = i2o_lan_set_multicast_list;
- dev->change_mtu = i2o_lan_change_mtu;
+ dev->tx_timeout = i2o_lan_tx_timeout;
+ dev->watchdog_timeo = I2O_LAN_TX_TIMEOUT;
+
+#ifdef CONFIG_NET_FC
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_FIBRE_CHANNEL)
+ dev->hard_start_xmit = i2o_lan_sdu_send;
+ else
+#endif
+ dev->hard_start_xmit = i2o_lan_packet_send;
+
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_ETHERNET)
+ dev->change_mtu = i2o_lan_change_mtu;
return dev;
}
struct net_device *dev;
int i;
- printk(KERN_INFO "Linux I2O LAN OSM (c) 1999 University of Helsinki.\n");
+ printk(KERN_INFO "I2O LAN OSM (c) 1999 University of Helsinki.\n");
+
+ /* Module params used as global defaults for private values */
+
+ if (max_buckets_out > I2O_LAN_MAX_BUCKETS_OUT)
+ max_buckets_out = I2O_LAN_MAX_BUCKETS_OUT;
+ if (bucket_thresh > max_buckets_out)
+ bucket_thresh = max_buckets_out;
+
+ /* Install handlers for incoming replies */
+
+ if (i2o_install_handler(&i2o_lan_send_handler) < 0) {
+ printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
+ return -EINVAL;
+ }
+ lan_send_context = i2o_lan_send_handler.context;
- if (bucketpost > I2O_BUCKET_COUNT)
- bucketpost = I2O_BUCKET_COUNT;
- if (bucketthresh > bucketpost)
- bucketthresh = bucketpost;
+ if (i2o_install_handler(&i2o_lan_receive_handler) < 0) {
+ printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
+ return -EINVAL;
+ }
+ lan_receive_context = i2o_lan_receive_handler.context;
if (i2o_install_handler(&i2o_lan_handler) < 0) {
printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
return -EINVAL;
}
lan_context = i2o_lan_handler.context;
-
+
for(i=0; i <= MAX_LAN_CARDS; i++)
i2o_landevs[i] = NULL;
struct i2o_controller *iop = i2o_find_controller(i);
struct i2o_device *i2o_dev;
- if (iop==NULL) continue;
+ if (iop==NULL)
+ continue;
for (i2o_dev=iop->devices;i2o_dev != NULL;i2o_dev=i2o_dev->next) {
- if (i2o_dev->lct_data->class_id != I2O_CLASS_LAN)
+ if (i2o_dev->lct_data.class_id != I2O_CLASS_LAN)
continue;
/* Make sure device not already claimed by an ISM */
- if (i2o_dev->lct_data->user_tid != 0xFFF)
+ if (i2o_dev->lct_data.user_tid != 0xFFF)
continue;
if (unit == MAX_LAN_CARDS) {
dev = i2o_lan_register_device(i2o_dev);
if (dev == NULL) {
- printk(KERN_ERR "i2o_lan: Unable to register I2O LAN device.\n");
- continue; // try next one
+ printk(KERN_ERR "i2o_lan: Unable to register I2O LAN device 0x%04x.\n",
+ i2o_dev->lct_data.sub_class);
+ continue;
}
- printk(KERN_INFO "%s: I2O LAN device registered, tid = %d,"
- " subclass = 0x%08X, unit = %d.\n",
- dev->name, i2o_dev->lct_data->tid, i2o_dev->lct_data->sub_class,
- ((struct i2o_lan_local *)dev->priv)->unit);
+ printk(KERN_INFO "%s: I2O LAN device registered, "
+ "subclass = 0x%04x, unit = %d, tid = %d.\n",
+ dev->name, i2o_dev->lct_data.sub_class,
+ ((struct i2o_lan_local *)dev->priv)->unit,
+ i2o_dev->lct_data.tid);
}
i2o_unlock_controller(iop);
for (i = 0; i <= unit; i++) {
struct net_device *dev = i2o_landevs[i];
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
- switch (i2o_dev->lct_data->sub_class) {
+ switch (i2o_dev->lct_data.sub_class) {
case I2O_LAN_ETHERNET:
unregister_netdev(dev);
- kfree(dev);
break;
#ifdef CONFIG_FDDI
case I2O_LAN_FDDI:
unregister_netdevice(dev);
- kfree(dev);
break;
#endif
#ifdef CONFIG_TR
case I2O_LAN_TR:
unregister_trdev(dev);
- kfree(dev);
+ break;
+#endif
+#ifdef CONFIG_NET_FC
+ case I2O_LAN_FIBRE_CHANNEL:
+ unregister_fcdev(dev);
break;
#endif
default:
- printk(KERN_WARNING "i2o_lan: Spurious I2O LAN subclass 0x%08X.\n",
- i2o_dev->lct_data->sub_class);
+			printk(KERN_WARNING "%s: Spurious I2O LAN subclass 0x%04x.\n",
+			       dev->name, i2o_dev->lct_data.sub_class);
}
dprintk(KERN_INFO "%s: I2O LAN device unregistered.\n",
dev->name);
+ kfree(dev);
}
i2o_remove_handler(&i2o_lan_handler);
+ i2o_remove_handler(&i2o_lan_send_handler);
+ i2o_remove_handler(&i2o_lan_receive_handler);
}
EXPORT_NO_SYMBOLS;
-MODULE_AUTHOR("Univ of Helsinki, CS Department");
+MODULE_AUTHOR("University of Helsinki, Department of Computer Science");
MODULE_DESCRIPTION("I2O Lan OSM");
-MODULE_PARM(bucketpost, "i"); // Total number of buckets to post
-MODULE_PARM(bucketthresh, "i"); // Bucket post threshold
-MODULE_PARM(rx_copybreak, "i");
+MODULE_PARM(max_buckets_out, "1-" __MODULE_STRING(I2O_LAN_MAX_BUCKETS_OUT) "i");
+MODULE_PARM_DESC(max_buckets_out, "Total number of buckets to post (1-)");
+MODULE_PARM(bucket_thresh, "1-" __MODULE_STRING(I2O_LAN_MAX_BUCKETS_OUT) "i");
+MODULE_PARM_DESC(bucket_thresh, "Bucket post threshold (1-)");
+MODULE_PARM(rx_copybreak, "1-" "i");
+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint: copy only frames smaller than this (1-)");
+MODULE_PARM(tx_batch_mode, "0-1" "i");
+MODULE_PARM_DESC(tx_batch_mode, "0=Use immediate mode send, 1=Use batch mode send");
#endif
/*
- * i2o_lan.h LAN Class specific definitions
+ * i2o_lan.h I2O LAN Class definitions
*
- * I2O LAN CLASS OSM Prototyping, May 17th 1999
+ * I2O LAN CLASS OSM April 3rd 2000
*
- * (C) Copyright 1999 University of Helsinki,
- * Department of Computer Science
+ * (C) Copyright 1999, 2000 University of Helsinki,
+ * Department of Computer Science
*
* This code is still under development / test.
*
- * Author: Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
- * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
+ * Author: Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
+ * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
+ * Taneli Vähäkangas <Taneli.Vahakangas@cs.Helsinki.FI>
*/
#ifndef _I2O_LAN_H
#define _I2O_LAN_H
-/* Tunable parameters first */
+/* Default values for tunable parameters first */
-#define I2O_BUCKET_COUNT 256
-#define I2O_BUCKET_THRESH 18 /* 9 buckets in one message */
+#define I2O_LAN_MAX_BUCKETS_OUT 256
+#define I2O_LAN_BUCKET_THRESH 18 /* 9 buckets in one message */
+#define I2O_LAN_RX_COPYBREAK 200
+#define I2O_LAN_TX_TIMEOUT (1*HZ)
+#define I2O_LAN_TX_BATCH_MODE 1 /* 1=on, 0=off */
+#define I2O_LAN_EVENT_MASK 0 /* 0=None, 0xFFC00002=All */
/* LAN types */
#define I2O_LAN_ETHERNET 0x0030
#define I2O_LAN_EVT_LINK_DOWN 0x01
#define I2O_LAN_EVT_LINK_UP 0x02
#define I2O_LAN_EVT_MEDIA_CHANGE 0x04
-
+
+#include <linux/netdevice.h>
+#include <linux/fddidevice.h>
+
+struct i2o_lan_local {
+ u8 unit;
+ struct i2o_device *i2o_dev;
+ struct fddi_statistics stats; /* see also struct net_device_stats */
+ unsigned short (*type_trans)(struct sk_buff *, struct net_device *);
+ atomic_t buckets_out; /* nbr of unused buckets on DDM */
+ atomic_t tx_out; /* outstanding TXes */
+ u8 tx_count; /* packets in one TX message frame */
+ u16 tx_max_out; /* DDM's Tx queue len */
+ u8 sgl_max; /* max SGLs in one message frame */
+ u32 m; /* IOP address of msg frame */
+
+ struct tq_struct i2o_batch_send_task;
+ int send_active;
+ struct sk_buff **i2o_fbl; /* Free bucket list (to reuse skbs) */
+ int i2o_fbl_tail;
+ spinlock_t fbl_lock;
+
+ spinlock_t tx_lock;
+
+ /* LAN OSM configurable parameters are here: */
+
+ u16 max_buckets_out; /* max nbr of buckets to send to DDM */
+ u16 bucket_thresh; /* send more when this many used */
+ u16 rx_copybreak;
+
+ u8 tx_batch_mode; /* Set when using batch mode sends */
+ u32 i2o_event_mask; /* To turn on interesting event flags */
+};
+
#endif /* _I2O_LAN_H */
iounmap(((u8 *)c->post_port)-0x40);
#ifdef CONFIG_MTRR
- if(c->bus.pci.mtrr_reg > 0)
- mtrr_del(c->bus.pci.mtrr_reg, 0, 0);
+ if(c->bus.pci.mtrr_reg0 > 0)
+ mtrr_del(c->bus.pci.mtrr_reg0, 0, 0);
+ if(c->bus.pci.mtrr_reg1 > 0)
+ mtrr_del(c->bus.pci.mtrr_reg1, 0, 0);
#endif
}
* Enable Write Combining MTRR for IOP's memory region
*/
#ifdef CONFIG_MTRR
- c->bus.pci.mtrr_reg =
- mtrr_add(c->mem_phys, size, MTRR_TYPE_WRCOMB, 1);
+ c->bus.pci.mtrr_reg0 =
+ mtrr_add(c->mem_phys, size, MTRR_TYPE_WRCOMB, 1);
+/*
+ * If it is an Intel i960 I/O processor then set the first 64K to Uncacheable
+ * since the region contains the Messaging Unit, which shouldn't be cached.
+ */
+ c->bus.pci.mtrr_reg1 = -1;
+ if(dev->vendor == PCI_VENDOR_ID_INTEL)
+ {
+ printk(KERN_INFO "i2o_pci: MTRR workaround for Intel i960 processor\n");
+ c->bus.pci.mtrr_reg1 =
+ mtrr_add(c->mem_phys, 65536, MTRR_TYPE_UNCACHABLE, 1);
+		if(c->bus.pci.mtrr_reg1 < 0)
+ printk(KERN_INFO "i2o_pci: Error in setting MTRR_TYPE_UNCACHABLE\n");
+ }
+
#endif
I2O_IRQ_WRITE32(c,0xFFFFFFFF);
printk(KERN_INFO "i2o: Checking for PCI I2O controllers...\n");
- pci_for_each_dev(dev) {
+ pci_for_each_dev(dev)
+ {
if((dev->class>>8)!=PCI_CLASS_INTELLIGENT_I2O)
continue;
if((dev->class&0xFF)>1)
/*
* TODO List
*
- * - Add support for any version 2.0 spec changes once 2.0 IRTOS
+ * -	Add support for any version 2.0 spec changes once the 2.0 IRTOS
* is available to test with
* - Clean up code to use official structure definitions
*/
write_proc_t *write_proc; /* write func */
} i2o_proc_entry;
+// #define DRIVERDEBUG
+
static int i2o_proc_read_lct(char *, char **, off_t, int, int *, void *);
static int i2o_proc_read_hrt(char *, char **, off_t, int, int *, void *);
static int i2o_proc_read_status(char *, char **, off_t, int, int *, void *);
struct proc_dir_entry * );
static void i2o_proc_remove_controller(struct i2o_controller *,
struct proc_dir_entry * );
+static void i2o_proc_add_device(struct i2o_device *, struct proc_dir_entry *);
+static void i2o_proc_remove_device(struct i2o_device *);
static int create_i2o_procfs(void);
static int destroy_i2o_procfs(void);
+static void i2o_proc_new_dev(struct i2o_controller *, struct i2o_device *);
+static void i2o_proc_dev_del(struct i2o_controller *, struct i2o_device *);
static int i2o_proc_read_lan_dev_info(char *, char **, off_t, int, int *,
void *);
static struct proc_dir_entry *i2o_proc_dir_root;
+/*
+ * I2O OSM descriptor
+ */
+static struct i2o_handler i2o_proc_handler =
+{
+ NULL,
+ i2o_proc_new_dev,
+ i2o_proc_dev_del,
+ NULL,
+ "I2O procfs Layer",
+ 0,
+ 0xffffffff // All classes
+};
+
/*
* IOP specific entries...write field just in case someone
* ever wants one.
{NULL, 0, NULL, NULL}
};
+
static char *chtostr(u8 *chars, int n)
{
char tmp[256];
if(lct->boot_tid)
len += sprintf(buf+len, "Boot Device @ ID %d\n", lct->boot_tid);
+ len +=
+ sprintf(buf+len, "Current Change Indicator: %#10x\n", lct->change_ind);
+
for(i = 0; i < entries; i++)
{
len += sprintf(buf+len, "Entry %d\n", i);
len = 0;
token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data->tid, 0xF000, -1, NULL, 0,
+ d->controller, d->lct_data.tid, 0xF000, -1, NULL, 0,
&result, sizeof(result));
if (token < 0) {
len = 0;
token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data->tid,
+ d->controller, d->lct_data.tid,
0xF001, -1, NULL, 0,
&result, sizeof(result));
len = 0;
token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data->tid,
+ d->controller, d->lct_data.tid,
0xF002, -1, NULL, 0,
&result, sizeof(result));
len = 0;
token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data->tid,
+ d->controller, d->lct_data.tid,
0xF003, -1, NULL, 0,
&result, sizeof(result));
len = 0;
token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data->tid,
+ d->controller, d->lct_data.tid,
0xF000, -1,
NULL, 0,
&result, sizeof(result));
len = 0;
token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data->tid,
+ d->controller, d->lct_data.tid,
0xF006, -1,
NULL, 0,
&result, sizeof(result));
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0xF100, -1,
&work32, sizeof(work32));
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0xF101, -1,
&result, sizeof(result));
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0xF102, -1,
&result, sizeof(result));
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0xF103, -1,
&work32, sizeof(work32));
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0xF200, -1,
&result, sizeof(result));
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0000, -1, &work32, 56*4);
if (token < 0) {
len += i2o_report_query_status(buf+len, token, "0x0000 LAN Device Info");
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0001, -1, &work32, 48*4);
if (token < 0) {
len += i2o_report_query_status(buf+len, token,"0x0001 LAN MAC Address");
len = 0;
token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data->tid, 0x0002, -1,
+ d->controller, d->lct_data.tid, 0x0002, -1,
NULL, 0, &result, sizeof(result));
if (token < 0) {
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0003, -1, &work32, 9*4);
if (token < 0) {
len += i2o_report_query_status(buf+len, token,"0x0003 LAN Batch Control");
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0004, -1, &work32, 20);
if (token < 0) {
len += i2o_report_query_status(buf+len, token,"0x0004 LAN Operation");
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0005, -1, &result, sizeof(result));
if (token < 0) {
len += i2o_report_query_status(buf+len, token, "0x0005 LAN Media Operation");
len = 0;
token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data->tid,
+ d->controller, d->lct_data.tid,
0x0006, -1, NULL, 0, &result, sizeof(result));
if (token < 0) {
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0007, -1, &work32, 8*4);
if (token < 0) {
len += i2o_report_query_status(buf+len, token,"0x0007 LAN Transmit Info");
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0008, -1, &work32, 8*4);
if (token < 0) {
len += i2o_report_query_status(buf+len, token,"0x0008 LAN Receive Info");
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0100, -1, &stats, sizeof(stats));
if (token < 0) {
len += i2o_report_query_status(buf+len, token,"0x100 LAN Statistics");
/* Optional statistics follows */
/* Get 0x0180 to see which optional groups/fields are supported */
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0180, -1, &supp_groups, sizeof(supp_groups));
if (token < 0) {
if (supp_groups[1]) /* 0x0182 */
{
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0182, -1, &tx_stats, sizeof(tx_stats));
if (token < 0) {
if (supp_groups[2]) /* 0x0183 */
{
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0183, -1, &rx_stats, sizeof(rx_stats));
if (token < 0) {
len += i2o_report_query_status(buf+len, token,"0x183 LAN Optional Rx Historical Stats");
if (supp_groups[3]) /* 0x0184 */
{
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0184, -1, &chksum_stats, sizeof(chksum_stats));
if (token < 0) {
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0200, -1, &stats, sizeof(stats));
if (token < 0) {
/* Optional Ethernet statistics follows */
/* Get 0x0280 to see which optional fields are supported */
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0280, -1, &supp_fields, sizeof(supp_fields));
if (token < 0) {
if (supp_fields) /* 0x0281 */
{
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0281, -1, &stats, sizeof(stats));
if (token < 0) {
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0300, -1, &work64, sizeof(work64));
if (token < 0) {
spin_lock(&i2o_proc_lock);
len = 0;
- token = i2o_query_scalar(d->controller, d->lct_data->tid,
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
0x0400, -1, &work64, sizeof(work64));
if (token < 0) {
for(dev = pctrl->devices; dev; dev = dev->next)
{
- sprintf(buff, "%0#5x", dev->lct_data->tid);
+ sprintf(buff, "%0#5x", dev->lct_data.tid);
dir1 = proc_mkdir(buff, dir);
dev->proc_entry = dir1;
if(!dir1)
printk(KERN_INFO "i2o_proc: Could not allocate proc dir\n");
-
- i2o_proc_create_entries(dev, generic_dev_entries, dir1);
- switch(dev->lct_data->class_id)
- {
+ i2o_proc_add_device(dev, dir1);
+ }
+
+ return 0;
+}
+
+void i2o_proc_new_dev(struct i2o_controller *c, struct i2o_device *d)
+{
+ char buff[10];
+
+#ifdef DRIVERDEBUG
+ printk(KERN_INFO "Adding new device to /proc/i2o/iop%d\n", c->unit);
+#endif
+ sprintf(buff, "%0#5x", d->lct_data.tid);
+
+ d->proc_entry = proc_mkdir(buff, c->proc_entry);
+
+ if(!d->proc_entry)
+ {
+ printk(KERN_WARNING "i2o: Could not allocate procdir!\n");
+ return;
+ }
+
+ i2o_proc_add_device(d, d->proc_entry);
+}
+
+void i2o_proc_add_device(struct i2o_device *dev, struct proc_dir_entry *dir)
+{
+ i2o_proc_create_entries(dev, generic_dev_entries, dir);
+
+ /* Inform core that we want updates about this device's status */
+ i2o_device_notify_on(dev, &i2o_proc_handler);
+ switch(dev->lct_data.class_id)
+ {
case I2O_CLASS_SCSI_PERIPHERAL:
case I2O_CLASS_RANDOM_BLOCK_STORAGE:
- i2o_proc_create_entries(dev, rbs_dev_entries, dir1);
+ i2o_proc_create_entries(dev, rbs_dev_entries, dir);
break;
case I2O_CLASS_LAN:
- i2o_proc_create_entries(dev, lan_entries, dir1);
- switch(dev->lct_data->sub_class)
+ i2o_proc_create_entries(dev, lan_entries, dir);
+ switch(dev->lct_data.sub_class)
{
- case I2O_LAN_ETHERNET:
- i2o_proc_create_entries(dev, lan_eth_entries,
- dir1);
- break;
- case I2O_LAN_FDDI:
- i2o_proc_create_entries(dev, lan_fddi_entries,
- dir1);
- break;
- case I2O_LAN_TR:
- i2o_proc_create_entries(dev, lan_tr_entries,
- dir1);
- break;
- default:
- break;
+ case I2O_LAN_ETHERNET:
+ i2o_proc_create_entries(dev, lan_eth_entries, dir);
+ break;
+ case I2O_LAN_FDDI:
+ i2o_proc_create_entries(dev, lan_fddi_entries, dir);
+ break;
+ case I2O_LAN_TR:
+ i2o_proc_create_entries(dev, lan_tr_entries, dir);
+ break;
+ default:
+ break;
}
break;
default:
break;
- }
}
-
- return 0;
}
static void i2o_proc_remove_controller(struct i2o_controller *pctrl,
struct proc_dir_entry *parent)
{
char buff[10];
- char dev_id[10];
- struct proc_dir_entry *de;
struct i2o_device *dev;
/* Remove unused device entries */
for(dev=pctrl->devices; dev; dev=dev->next)
+ i2o_proc_remove_device(dev);
+
+ if(!pctrl->proc_entry->count)
{
- de=dev->proc_entry;
- sprintf(dev_id, "%0#5x", dev->lct_data->tid);
+ sprintf(buff, "iop%d", pctrl->unit);
- /* Would it be safe to remove _files_ even if they are in use? */
- if((de) && (!de->count))
- {
- i2o_proc_remove_entries(generic_dev_entries, de);
+ i2o_proc_remove_entries(generic_iop_entries, pctrl->proc_entry);
- switch(dev->lct_data->class_id)
- {
+ remove_proc_entry(buff, parent);
+ pctrl->proc_entry = NULL;
+ }
+}
+
+void i2o_proc_remove_device(struct i2o_device *dev)
+{
+ struct proc_dir_entry *de=dev->proc_entry;
+ char dev_id[10];
+
+ sprintf(dev_id, "%0#5x", dev->lct_data.tid);
+
+ i2o_device_notify_off(dev, &i2o_proc_handler);
+ /* Would it be safe to remove _files_ even if they are in use? */
+ if((de) && (!de->count))
+ {
+ i2o_proc_remove_entries(generic_dev_entries, de);
+ switch(dev->lct_data.class_id)
+ {
case I2O_CLASS_SCSI_PERIPHERAL:
case I2O_CLASS_RANDOM_BLOCK_STORAGE:
i2o_proc_remove_entries(rbs_dev_entries, de);
break;
case I2O_CLASS_LAN:
+ {
i2o_proc_remove_entries(lan_entries, de);
- switch(dev->lct_data->sub_class)
+ switch(dev->lct_data.sub_class)
{
case I2O_LAN_ETHERNET:
i2o_proc_remove_entries(lan_eth_entries, de);
break;
}
}
- remove_proc_entry(dev_id, parent);
+ remove_proc_entry(dev_id, dev->controller->proc_entry);
}
}
+}
+
+void i2o_proc_dev_del(struct i2o_controller *c, struct i2o_device *d)
+{
+#ifdef DRIVERDEBUG
+	printk(KERN_INFO "Deleting device %d from iop%d\n",
+ d->lct_data.tid, c->unit);
+#endif
- if(!pctrl->proc_entry->count)
- {
- sprintf(buff, "iop%d", pctrl->unit);
-
- i2o_proc_remove_entries(generic_iop_entries, pctrl->proc_entry);
-
- remove_proc_entry(buff, parent);
- pctrl->proc_entry = NULL;
- }
+ i2o_proc_remove_device(d);
}
static int create_i2o_procfs(void)
return 0;
}
-
+
#ifdef MODULE
#define i2o_proc_init init_module
#endif
int __init i2o_proc_init(void)
{
+ if (i2o_install_handler(&i2o_proc_handler) < 0)
+ {
+ printk(KERN_ERR "i2o_proc: Unable to install PROC handler.\n");
+ return 0;
+ }
+
if(create_i2o_procfs())
return -EBUSY;
#ifdef MODULE
-
MODULE_AUTHOR("Deepak Saxena");
MODULE_DESCRIPTION("I2O procfs Handler");
void cleanup_module(void)
{
destroy_i2o_procfs();
+ i2o_remove_handler(&i2o_proc_handler);
}
#endif
struct i2o_handler i2o_scsi_handler=
{
i2o_scsi_reply,
+ NULL,
+ NULL,
+ NULL,
"I2O SCSI OSM",
0,
I2O_CLASS_SCSI_PERIPHERAL
{
u8 reply[8];
- if(i2o_query_scalar(c, d->lct_data->tid, 0, 3, reply, 4)<0)
+ if(i2o_query_scalar(c, d->lct_data.tid, 0, 3, reply, 4)<0)
return -1;
*target=reply[0];
- if(i2o_query_scalar(c, d->lct_data->tid, 0, 4, reply, 8)<0)
+ if(i2o_query_scalar(c, d->lct_data.tid, 0, 4, reply, 8)<0)
return -1;
*lun=reply[1];
int target;
h->controller=c;
- h->bus_task=d->lct_data->tid;
+ h->bus_task=d->lct_data.tid;
for(target=0;target<16;target++)
for(lun=0;lun<8;lun++)
for(unit=c->devices;unit!=NULL;unit=unit->next)
{
dprintk(("Class %03X, parent %d, want %d.\n",
- unit->lct_data->class_id, unit->lct_data->parent_tid, d->lct_data->tid));
+ unit->lct_data.class_id, unit->lct_data.parent_tid, d->lct_data.tid));
/* Only look at scsi and fc devices */
- if ( (unit->lct_data->class_id != I2O_CLASS_SCSI_PERIPHERAL)
- && (unit->lct_data->class_id != I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL)
+ if ( (unit->lct_data.class_id != I2O_CLASS_SCSI_PERIPHERAL)
+ && (unit->lct_data.class_id != I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL)
)
continue;
/* On our bus ? */
- dprintk(("Found a disk (%d).\n", unit->lct_data->tid));
- if ((unit->lct_data->parent_tid == d->lct_data->tid)
- || (unit->lct_data->parent_tid == d->lct_data->parent_tid)
+ dprintk(("Found a disk (%d).\n", unit->lct_data.tid));
+ if ((unit->lct_data.parent_tid == d->lct_data.tid)
+ || (unit->lct_data.parent_tid == d->lct_data.parent_tid)
)
{
u16 limit;
dprintk(("It's ours.\n"));
if(i2o_find_lun(c, unit, &target, &lun)==-1)
{
- printk(KERN_ERR "i2o_scsi: Unable to get lun for tid %d.\n", unit->lct_data->tid);
+ printk(KERN_ERR "i2o_scsi: Unable to get lun for tid %d.\n", unit->lct_data.tid);
continue;
}
dprintk(("Found disk %d %d.\n", target, lun));
- h->task[target][lun]=unit->lct_data->tid;
+ h->task[target][lun]=unit->lct_data.tid;
h->tagclock[target][lun]=jiffies;
/* Get the max fragments/request */
- i2o_query_scalar(c, d->lct_data->tid, 0xF103, 3, &limit, 2);
+ i2o_query_scalar(c, d->lct_data.tid, 0xF103, 3, &limit, 2);
/* sanity */
if ( limit == 0 )
/*
* bus_adapter, SCSI (obsolete), or FibreChannel busses only
*/
- if( (d->lct_data->class_id!=I2O_CLASS_BUS_ADAPTER_PORT) // bus_adapter
-// && (d->lct_data->class_id!=I2O_CLASS_FIBRE_CHANNEL_PORT) // FC_PORT
+ if( (d->lct_data.class_id!=I2O_CLASS_BUS_ADAPTER_PORT) // bus_adapter
+// && (d->lct_data.class_id!=I2O_CLASS_FIBRE_CHANNEL_PORT) // FC_PORT
)
continue;
MODULE_AUTHOR("Carsten Paeth <calle@calle.in-berlin.de>");
-int suppress_pollack = 0;
+static int suppress_pollack = 0;
MODULE_PARM(suppress_pollack, "0-1i");
/* ------------------------------------------------------------- */
/* ------------------------------------------------------------- */
-int suppress_pollack = 0;
+static int suppress_pollack = 0;
MODULE_AUTHOR("Carsten Paeth <calle@calle.in-berlin.de>");
* ei_close - shut down network device
* @dev: network device to close
*
- * Opposite of ei_open. Only used when "ifconfig <devname> down" is done.
+ * Opposite of ei_open(). Only used when "ifconfig <devname> down" is done.
*/
int ei_close(struct net_device *dev)
{
* the 8390 via the card specific functions and fire them at the networking
* stack. We also handle transmit completions and wake the transmit path if
* necessary. We also update the counters and do other housekeeping as
- * needed
+ * needed.
*/
void ei_interrupt(int irq, void *dev_id, struct pt_regs * regs)
* is a much better solution as it avoids kernel based Tx timeouts, and
* an unnecessary card reset.
*
- * Called with lock held
+ * Called with lock held.
*/
static void ei_tx_err(struct net_device *dev)
* @dev: network device for which tx intr is handled
*
* We have finished a transmit: check for errors and then trigger the next
- * packet to be sent. Called with lock held
+ * packet to be sent. Called with lock held.
*/
static void ei_tx_intr(struct net_device *dev)
* @dev: network device with which receive will be run
*
* We have a good packet(s), get it/them out of the buffers.
- * Called with lock held
+ * Called with lock held.
*/
static void ei_receive(struct net_device *dev)
* This includes causing "the NIC to defer indefinitely when it is stopped
* on a busy network." Ugh.
* Called with lock held. Don't call this with the interrupts off or your
- * computer will hate you - it takes 10mS or so.
+ * computer will hate you - it takes 10ms or so.
*/
static void ei_rx_overrun(struct net_device *dev)
/* usage message */
printk (KERN_ERR
"ltpc: usage: ltpc=auto|iobase[,irq[,dma]]\n");
+ return 0;
}
- return 1;
} else {
io = ints[1];
if (ints[0] > 1) {
irq = ints[2];
- return 1;
}
if (ints[0] > 2) {
dma = ints[3];
- return 1;
}
/* ignore any other parameters */
}
void __init arcnet_init(void)
{
- static int arcnet_inited __initdata = 0;
+ static int arcnet_inited = 0;
int count;
if (arcnet_inited++)
s = get_options(s, 4, ints);
if (!ints[0])
- return 1;
+ return 0;
dev = alloc_bootmem(sizeof(struct net_device) + 10);
memset(dev, 0, sizeof(struct net_device) + 10);
dev->name = (char *) (dev + 1);
if (register_netdev(d) == 0)
n_eepro++;
+ else
+ break;
}
return n_eepro ? 0 : -ENODEV;
static int __init baycom_epp_setup(char *str)
{
- static unsigned __initdata nr_dev = 0;
+ static unsigned __initlocaldata nr_dev = 0;
int ints[2];
if (nr_dev >= NR_PORTS)
donecount);
#endif
#ifdef LINUX_2_1
- dev_kfree_skb( lp->txrhead->skb );
+ dev_kfree_skb_any( lp->txrhead->skb );
#else
- dev_kfree_skb( lp->txrhead->skb, FREE_WRITE );
+ dev_kfree_skb_any( lp->txrhead->skb, FREE_WRITE );
#endif
lp->txrhead->skb=(void *)NULL;
lp->txrhead=lp->txrhead->next;
hp100_ints_on();
#ifdef LINUX_2_1
- dev_kfree_skb( skb );
+ dev_kfree_skb_any( skb );
#else
- dev_kfree_skb( skb, FREE_WRITE );
+ dev_kfree_skb_any( skb, FREE_WRITE );
#endif
#ifdef HP100_DEBUG_TX
#endif
if(ptr->skb!=NULL)
#ifdef LINUX_2_1
- dev_kfree_skb( ptr->skb );
+ dev_kfree_skb_any( ptr->skb );
#else
- dev_kfree_skb( ptr->skb, FREE_READ );
+ dev_kfree_skb_any( ptr->skb, FREE_READ );
#endif
lp->stats.rx_errors++;
}
dev_link_t *link;
ray_dev_t *local = (ray_dev_t *)dev->priv;
+ MOD_INC_USE_COUNT;
+
DEBUG(1, "ray_open('%s')\n", dev->name);
for (link = dev_list; link; link = link->next)
if (link->priv == dev) break;
- if (!DEV_OK(link))
+ if (!DEV_OK(link)) {
+ MOD_DEC_USE_COUNT;
return -ENODEV;
+ }
if (link->open == 0) local->num_multi = 0;
link->open++;
- MOD_INC_USE_COUNT;
if (sniffer) netif_stop_queue(dev);
else netif_start_queue(dev);
return 0;
}
-static __exit void tulip_exit(void)
+static void __exit tulip_exit(void)
{
pci_unregister_driver(&tulip_ops);
}
return ERROR;
}
/* Malloc up new buffer. */
- rcv->skb = dev_alloc_skb(rcv->length.h);
+ rcv->skb = dev_alloc_skb(rcv->length.h + 2);
if (rcv->skb == NULL) {
printk(KERN_ERR "%s: Memory squeeze.\n", dev->name);
return ERROR;
}
+ skb_reserve(rcv->skb, 2); /* Align IP on 16 byte boundaries */
skb_put(rcv->skb,rcv->length.h);
rcv->skb->dev = dev;
rcv->state = PLIP_PK_DATA;
switch (nl->connection) {
case PLIP_CN_CLOSING:
- netif_start_queue (dev);
+ netif_wake_queue (dev);
case PLIP_CN_NONE:
case PLIP_CN_SEND:
dev->last_rx = jiffies;
if (skb->len > dev->mtu + dev->hard_header_len) {
printk(KERN_WARNING "%s: packet too big, %d.\n", dev->name, (int)skb->len);
netif_start_queue (dev);
- return 0;
+ return 1;
}
if (net_debug > 2)
mark_bh(IMMEDIATE_BH);
spin_unlock_irq(&nl->lock);
- netif_start_queue (dev);
return 0;
}
#include <linux/timer.h>
#include <asm/io.h>
+
+
+/* undefine, or define to various debugging levels (>4 == obscene levels) */
+#undef TULIP_DEBUG
+
+
+#ifdef TULIP_DEBUG
+/* note: prints function name for you */
+#define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __FUNCTION__ , ## args)
+#else
+#define DPRINTK(fmt, args...)
+#endif
+
+
+
+
struct tulip_chip_table {
char *chip_name;
int io_size;
csr13_mask_10bt = (csr13_eng | csr13_cac | csr13_srl),
};
+enum t21143_csr6_bits {
+ csr6_sc = (1<<31),
+ csr6_ra = (1<<30),
+ csr6_ign_dest_msb = (1<<26),
+ csr6_mbo = (1<<25),
+ csr6_scr = (1<<24),
+ csr6_pcs = (1<<23),
+ csr6_ttm = (1<<22),
+ csr6_sf = (1<<21),
+ csr6_hbd = (1<<19),
+ csr6_ps = (1<<18),
+ csr6_ca = (1<<17),
+ csr6_st = (1<<13),
+ csr6_fc = (1<<12),
+ csr6_om_int_loop = (1<<10),
+ csr6_om_ext_loop = (1<<11),
+ csr6_fd = (1<<9),
+ csr6_pm = (1<<7),
+ csr6_pr = (1<<6),
+ csr6_sb = (1<<5),
+ csr6_if = (1<<4),
+ csr6_pb = (1<<3),
+ csr6_ho = (1<<2),
+ csr6_sr = (1<<1),
+ csr6_hp = (1<<0),
+
+ csr6_mask_capture = (csr6_sc | csr6_ca),
+ csr6_mask_defstate = (csr6_mask_capture | csr6_mbo),
+ csr6_mask_fullcap = (csr6_mask_defstate | csr6_hbd |
+ csr6_ps | (3<<14) | csr6_fd),
+};
+
/* Keep the ring sizes a power of two for efficiency.
Making the Tx ring too large decreases the effectiveness of channel
dma_addr_t mapping;
};
+
struct tulip_private {
const char *product_name;
struct net_device *next_module;
- Urban Widmark: minor cleanups, merges from Becker 1.03a/1.04 versions
LK1.1.3:
- - Urban Widmark: use PCI DMA interface (with thanks to the eepro100.c code)
- update "Theory of Operation" with softnet/locking changes
+ - Urban Widmark: use PCI DMA interface (with thanks to the eepro100.c
+ code) update "Theory of Operation" with
+ softnet/locking changes
- Dave Miller: PCI DMA and endian fixups
- Jeff Garzik: MOD_xxx race fixes, updated PCI resource allocation
+
+ LK1.1.4:
+ - Urban Widmark: fix gcc 2.95.2 problem and
+ remove writel's to fixed address 0x7c
*/
/* A few user-configurable values. These may be modified when a driver
#include <asm/io.h>
static const char *versionA __devinitdata =
-"via-rhine.c:v1.03a-LK1.1.3 3/23/2000 Written by Donald Becker\n";
+"via-rhine.c:v1.03a-LK1.1.4 3/28/2000 Written by Donald Becker\n";
static const char *versionB __devinitdata =
" http://cesdis.gsfc.nasa.gov/linux/drivers/via-rhine.html\n";
{
struct netdev_private *np = (struct netdev_private *)dev->priv;
int i;
+ dma_addr_t next = np->rx_ring_dma;
np->cur_rx = np->cur_tx = 0;
np->dirty_rx = np->dirty_tx = 0;
for (i = 0; i < RX_RING_SIZE; i++) {
np->rx_ring[i].rx_status = 0;
np->rx_ring[i].desc_length = cpu_to_le32(np->rx_buf_sz);
- np->rx_ring[i].next_desc =
- cpu_to_le32(np->rx_ring_dma + sizeof(struct rx_desc)*(i+1));
+ next += sizeof(struct rx_desc);
+ np->rx_ring[i].next_desc = cpu_to_le32(next);
np->rx_skbuff[i] = 0;
}
/* Mark the last entry as wrapping the ring. */
np->rx_ring[i].addr = cpu_to_le32(np->rx_skbuff_dma[i]);
np->rx_ring[i].rx_status = cpu_to_le32(DescOwn);
}
+ np->dirty_rx = (unsigned int)(i - RX_RING_SIZE);
+ next = np->tx_ring_dma;
for (i = 0; i < TX_RING_SIZE; i++) {
np->tx_skbuff[i] = 0;
np->tx_ring[i].tx_status = 0;
np->tx_ring[i].desc_length = cpu_to_le32(0x00e08000);
- np->tx_ring[i].next_desc =
- cpu_to_le32(np->tx_ring_dma + sizeof(struct tx_desc)*(i+1));
+ next += sizeof(struct tx_desc);
+ np->tx_ring[i].next_desc = cpu_to_le32(next);
np->tx_buf[i] = kmalloc(PKT_BUF_SZ, GFP_KERNEL);
}
np->tx_ring[i-1].next_desc = cpu_to_le32(np->tx_ring_dma);
if (intr_status & IntrStatsMax) {
np->stats.rx_crc_errors += readw(ioaddr + RxCRCErrs);
np->stats.rx_missed_errors += readw(ioaddr + RxMissed);
- writel(0, RxMissed);
}
if (intr_status & IntrTxAbort) {
/* Stats counted in Tx-done handler, just restart Tx. */
non-critical. */
np->stats.rx_crc_errors += readw(ioaddr + RxCRCErrs);
np->stats.rx_missed_errors += readw(ioaddr + RxMissed);
- writel(0, RxMissed);
return &np->stats;
}
static struct proc_dir_entry *create_comx_proc_entry(char *name, int mode,
int size, struct proc_dir_entry *dir);
-static void comx_fill_inode(struct inode *inode, int fill);
-
static struct dentry_operations comx_dentry_operations = {
NULL, /* revalidate */
NULL, /* d_hash */
};
-struct proc_dir_entry comx_root_dir = {
- 0, 4, "comx",
- S_IFDIR | S_IWUSR | S_IRUGO | S_IXUGO, 2, 0, 0,
- 0, &comx_root_inode_ops,
- NULL, comx_fill_inode,
- NULL, &proc_root, NULL
-};
+static struct proc_dir_entry * comx_root_dir;
struct comx_debugflags_struct comx_debugflags[] = {
{ "comx_rx", DEBUG_COMX_RX },
{ NULL, 0 }
};
-static void comx_fill_inode(struct inode *inode, int fill)
-{
- if (fill)
- MOD_INC_USE_COUNT;
- else
- MOD_DEC_USE_COUNT;
-}
-
int comx_debug(struct net_device *dev, char *fmt, ...)
{
struct net_device *dev;
struct comx_channel *ch;
- if (dir->i_ino != comx_root_dir.low_ino) return -ENOTDIR;
+ if (dir->i_ino != comx_root_dir->low_ino) return -ENOTDIR;
if ((new_dir = create_proc_entry(dentry->d_name.name, mode | S_IFDIR,
- &comx_root_dir)) == NULL) {
+ comx_root_dir)) == NULL) {
return -EIO;
}
- new_dir->proc_iops = &proc_dir_inode_operations; // ez egy normalis /proc konyvtar
new_dir->nlink = 2;
	new_dir->data = NULL; // the struct dev goes here later
int ret;
	/* For now, why not? */
- if (dir->i_ino != comx_root_dir.low_ino) return -ENOTDIR;
+ if (dir->i_ino != comx_root_dir->low_ino) return -ENOTDIR;
if (dev->flags & IFF_UP) {
printk(KERN_ERR "%s: down interface before removing it\n", dev->name);
remove_proc_entry(FILENAME_STATUS, entry);
remove_proc_entry(FILENAME_HARDWARE, entry);
remove_proc_entry(FILENAME_PROTOCOL, entry);
- remove_proc_entry(dentry->d_name.name, &comx_root_dir);
-// proc_unregister(&comx_root_dir, dentry->d_inode->i_ino);
+ remove_proc_entry(dentry->d_name.name, comx_root_dir);
MOD_DEC_USE_COUNT;
return 0;
{
struct proc_dir_entry *new_file;
- memcpy(&comx_root_inode_ops, &proc_dir_inode_operations,
- sizeof(struct inode_operations));
comx_root_inode_ops.lookup = &comx_lookup;
comx_root_inode_ops.mkdir = &comx_mkdir;
comx_root_inode_ops.rmdir = &comx_rmdir;
- memcpy(&comx_normal_inode_ops, &proc_net_inode_operations,
- sizeof(struct inode_operations));
- comx_normal_inode_ops.default_file_ops = &comx_normal_file_ops;
comx_normal_inode_ops.lookup = &comx_lookup;
memcpy(&comx_debug_inode_ops, &comx_normal_inode_ops,
sizeof(struct inode_operations));
- comx_debug_inode_ops.default_file_ops = &comx_debug_file_ops;
- memcpy(&comx_normal_file_ops, proc_net_inode_operations.default_file_ops,
- sizeof(struct file_operations));
comx_normal_file_ops.open = &comx_file_open;
comx_normal_file_ops.release = &comx_file_release;
comx_debug_file_ops.llseek = &comx_debug_lseek;
comx_debug_file_ops.read = &comx_debug_read;
- if (proc_register(&proc_root, &comx_root_dir) < 0) return -ENOMEM;
-
+ comx_root_dir = create_proc_entry("comx",
+ S_IFDIR | S_IWUSR | S_IRUGO | S_IXUGO, &proc_root);
+ if (!comx_root_dir)
+ return -ENOMEM;
+ comx_root_dir->proc_iops = &comx_root_inode_ops;
if ((new_file = create_proc_entry(FILENAME_HARDWARELIST,
- S_IFREG | 0444, &comx_root_dir)) == NULL) {
+ S_IFREG | 0444, comx_root_dir)) == NULL) {
return -ENOMEM;
}
- new_file->ops = &comx_normal_inode_ops;
+ new_file->proc_iops = &comx_normal_inode_ops;
new_file->data = new_file;
new_file->read_proc = &comx_root_read_proc;
new_file->write_proc = NULL;
new_file->nlink = 1;
if ((new_file = create_proc_entry(FILENAME_PROTOCOLLIST,
- S_IFREG | 0444, &comx_root_dir)) == NULL) {
+ S_IFREG | 0444, comx_root_dir)) == NULL) {
return -ENOMEM;
}
#ifdef MODULE
void cleanup_module(void)
{
- remove_proc_entry(FILENAME_HARDWARELIST, &comx_root_dir);
- remove_proc_entry(FILENAME_PROTOCOLLIST, &comx_root_dir);
- proc_unregister(&proc_root, comx_root_dir.low_ino);
+ remove_proc_entry(FILENAME_HARDWARELIST, comx_root_dir);
+ remove_proc_entry(FILENAME_PROTOCOLLIST, comx_root_dir);
+ remove_proc_entry(comx_root_dir->name, &proc_root);
}
#endif
#define SEEK_END 2
#endif
-extern struct proc_dir_entry comx_root_dir;
+extern struct proc_dir_entry * comx_root_dir;
extern int comx_register_hardware(struct comx_hardware *comx_hw);
extern int comx_unregister_hardware(char *name);
#ifdef MODULE
int init_module (void)
#else
-__initfunc(int wanpipe_init(void))
+int __init wanpipe_init(void)
#endif
{
printk(KERN_INFO "%s v%u.%u %s\n",
*
* This can be called directly by cards that do not have
* timing constraints but is normally called from the network layer
- * after interrupt servicing to process frames queued via netif_rx.
+ * after interrupt servicing to process frames queued via netif_rx().
*
* We process the options in the card. If the frame is destined for
* the protocol stacks then it requeues the frame for the upper level
/**
* z8530_channel_load - Load channel data
* @c: Z8530 channel to configure
- * @rtable: Table of register, value pairs
+ * @rtable: table of register, value pairs
* FIXME: ioctl to allow user uploaded tables
*
* Load a Z8530 channel up from the system data. We use +16 to
- * indicate the 'prime' registers. The value 255 terminates the
- * table
+ * indicate the "prime" registers. The value 255 terminates the
+ * table.
*/
int z8530_channel_load(struct z8530_channel *c, u8 *rtable)
+2000-03-29 Tim Waugh <twaugh@redhat.com>
+
+ * parport_pc.c: Add support for another PCI card.
+
2000-03-27 Tim Waugh <twaugh@redhat.com>
* parport_pc.c (parport_pc_ecp_read_block_pio): Correct operation
return;
}
-/* Find a device by canonical device number. */
+/**
+ * parport_open - find a device by canonical device number
+ * @devnum: canonical device number
+ * @name: name to associate with the device
+ * @pf: preemption callback
+ * @kf: kick callback
+ * @irqf: interrupt handler
+ * @flags: registration flags
+ * @handle: driver data
+ *
+ * This function is similar to parport_register_device(), except
+ * that it locates a device by its number rather than by the port
+ * it is attached to. See parport_find_device() and
+ * parport_find_class().
+ *
+ * All parameters except for @devnum are the same as for
+ * parport_register_device(). The return value is the same as
+ * for parport_register_device().
+ **/
+
struct pardevice *parport_open (int devnum, const char *name,
int (*pf) (void *), void (*kf) (void *),
void (*irqf) (int, void *, struct pt_regs *),
return dev;
}
-/* The converse of parport_open. */
+/**
+ * parport_close - close a device opened with parport_open()
+ * @dev: device to close
+ *
+ * This is to parport_open() as parport_unregister_device() is to
+ * parport_register_device().
+ **/
+
void parport_close (struct pardevice *dev)
{
parport_unregister_device (dev);
}
-/* Convert device coordinates into a canonical device number. */
+/**
+ * parport_device_num - convert device coordinates into a canonical device number
+ * @parport: parallel port number
+ * @mux: multiplexor port number (-1 for no multiplexor)
+ * @daisy: daisy chain address (-1 for no daisy chain address)
+ *
+ * This tries to locate a device on the given parallel port,
+ * multiplexor port and daisy chain address, and returns its
+ * device number or -ENXIO if no device with those coordinates
+ * exists.
+ **/
+
int parport_device_num (int parport, int mux, int daisy)
{
struct daisydev *dev = topology;
return dev->devnum;
}
-/* Convert a canonical device number into device coordinates. */
+/**
+ * parport_device_coords - convert a canonical device number into device coordinates
+ * @devnum: device number
+ * @parport: pointer to storage for parallel port number
+ * @mux: pointer to storage for multiplexor port number
+ * @daisy: pointer to storage for daisy chain address
+ *
+ * This function converts a device number into its coordinates in
+ * terms of which parallel port in the system it is attached to,
+ * which multiplexor port it is attached to if there is a
+ * multiplexor on that port, and which daisy chain address it has
+ * if it is in a daisy chain.
+ *
+ * The caller must allocate storage for @parport, @mux, and
+ * @daisy.
+ *
+ * If there is no device with the specified device number, -ENXIO
+ * is returned. Otherwise, the values pointed to by @parport,
+ * @mux, and @daisy are set to the coordinates of the device,
+ * with -1 for coordinates with no value.
+ *
+ * This function is not actually very useful, but this interface
+ * was suggested by IEEE 1284.3.
+ **/
+
int parport_device_coords (int devnum, int *parport, int *mux, int *daisy)
{
struct daisydev *dev = topology;
/* Find a device with a particular manufacturer and model string,
starting from a given device number. Like the PCI equivalent,
'from' itself is skipped. */
+
+/**
+ * parport_find_device - find a device with a specified manufacturer and model string
+ * @mfg: required manufacturer string
+ * @mdl: required model string
+ * @from: previous device number found in search, or -1 for a new search
+ *
+ * This walks through the list of parallel port devices looking
+ * for a device whose 'MFG' string matches @mfg and whose 'MDL'
+ * string matches @mdl in their IEEE 1284 Device ID.
+ *
+ * When a device is found matching those requirements, its device
+ * number is returned; if there is no matching device, a negative
+ * value is returned.
+ *
+ * A new search is initiated by passing -1 as the @from
+ * argument. If @from is not -1, the search continues from
+ * that device.
+ **/
+
int parport_find_device (const char *mfg, const char *mdl, int from)
{
struct daisydev *d = topology; /* sorted by devnum */
return -1;
}
-/* Find a device in a particular class. Like the PCI equivalent,
- 'from' itself is skipped. */
+/**
+ * parport_find_class - find a device in a specified class
+ * @cls: required class
+ * @from: previous device number found in search, or -1 for a new search
+ *
+ * This walks through the list of parallel port devices looking
+ * for a device whose 'CLS' string matches @cls in their IEEE
+ * 1284 Device ID.
+ *
+ * When a device is found matching those requirements, its device
+ * number is returned; if there is no matching device, a negative
+ * value is returned.
+ *
+ * A new search is initiated by passing -1 as the @from
+ * argument. If @from is not -1, the search continues from
+ * that device.
+ **/
+
int parport_find_class (parport_device_class cls, int from)
{
struct daisydev *d = topology; /* sorted by devnum */
parport_ieee1284_wakeup (port_from_cookie[cookie % PARPORT_MAX]);
}
-/* Wait for a parport_ieee1284_wakeup.
- * 0: success
- * <0: error (exit as soon as possible)
- * >0: timed out
+/**
+ * parport_wait_event - wait for an event on a parallel port
+ * @port: port to wait on
+ * @timeout: time to wait (in jiffies)
+ *
+ * This function waits for up to @timeout jiffies for an
+ * interrupt to occur on a parallel port. If the port timeout is
+ * set to zero, it returns immediately.
+ *
+ * If an interrupt occurs before the timeout period elapses, this
+ * function returns zero immediately. If it times out, it returns
+ * a value greater than zero. An error code less than zero
+ * indicates an error (most likely a pending signal), and the
+ * calling code should finish what it's doing as soon as it can.
*/
+
int parport_wait_event (struct parport *port, signed long timeout)
{
int ret;
return ret;
}
-/* Wait for Status line(s) to change in 35 ms - see IEEE1284-1994 page 24 to
- * 25 for this. After this time we can create a timeout because the
- * peripheral doesn't conform to IEEE1284. We want to save CPU time: we are
- * waiting a maximum time of 500 us busy (this is for speed). If there is
- * not the right answer in this time, we call schedule and other processes
- * are able to eat the time up to 40ms.
- */
+/**
+ * parport_poll_peripheral - poll status lines
+ * @port: port to watch
+ * @mask: status lines to watch
+ * @result: desired values of chosen status lines
+ * @usec: timeout (in microseconds)
+ *
+ * This function busy-waits until the masked status lines have
+ * the desired values, or until the timeout period elapses. The
+ * @mask and @result parameters are bitmasks, with the bits
+ * defined by the constants in parport.h: %PARPORT_STATUS_BUSY,
+ * and so on.
+ *
+ * This function does not call schedule(); instead it busy-waits
+ * using udelay(). It currently has a resolution of 5usec.
+ *
+ * If the status lines take on the desired values before the
+ * timeout period elapses, parport_poll_peripheral() returns zero
+ * immediately. A return value greater than zero indicates
+ * a timeout. An error code (less than zero) indicates an error,
+ * most likely a signal that arrived, and the caller should
+ * finish what it is doing as soon as possible.
+ */
int parport_poll_peripheral(struct parport *port,
unsigned char mask,
return 1;
}
+/**
+ * parport_wait_peripheral - wait for status lines to change in 35ms
+ * @port: port to watch
+ * @mask: status lines to watch
+ * @result: desired values of chosen status lines
+ *
+ * This function waits until the masked status lines have the
+ * desired values, or until 35ms have elapsed (see IEEE 1284-1994
+ * page 24 to 25 for why this value in particular is hardcoded).
+ * The @mask and @result parameters are bitmasks, with the bits
+ * defined by the constants in parport.h: %PARPORT_STATUS_BUSY,
+ * and so on.
+ *
+ * The port is polled quickly to start off with, in anticipation
+ * of a fast response from the peripheral. This fast polling
+ * time is configurable (using /proc), and defaults to 500usec.
+ * If the timeout for this port (see parport_set_timeout()) is
+ * zero, the fast polling time is 35ms, and this function does
+ * not call schedule().
+ *
+ * If the timeout for this port is non-zero, after the fast
+ * polling fails it uses parport_wait_event() to wait for up to
+ * 10ms, waking up if an interrupt occurs.
+ */
+
int parport_wait_peripheral(struct parport *port,
unsigned char mask,
unsigned char result)
}
#endif /* IEEE1284 support */
-/* Negotiate an IEEE 1284 mode.
- * return values are:
- * 0 - handshake OK; IEEE1284 peripheral and mode available
- * -1 - handshake failed; peripheral is not compliant (or none present)
- * 1 - handshake OK; IEEE1284 peripheral present but mode not available
+/**
+ * parport_negotiate - negotiate an IEEE 1284 mode
+ * @port: port to use
+ * @mode: mode to negotiate to
+ *
+ * Use this to negotiate to a particular IEEE 1284 transfer mode.
+ * The @mode parameter should be one of the constants in
+ * parport.h starting %IEEE1284_MODE_xxx.
+ *
+ * The return value is 0 if the peripheral has accepted the
+ * negotiation to the mode specified, -1 if the peripheral is not
+ * IEEE 1284 compliant (or not present), or 1 if the peripheral
+ * has rejected the negotiation.
*/
+
int parport_negotiate (struct parport *port, int mode)
{
#ifndef CONFIG_PARPORT_1284
#endif /* IEEE1284 support */
}
-/* Write a block of data. */
+/**
+ * parport_write - write a block of data to a parallel port
+ * @port: port to write to
+ * @buffer: data buffer (in kernel space)
+ * @len: number of bytes of data to transfer
+ *
+ * This will write up to @len bytes of @buffer to the port
+ * specified, using the IEEE 1284 transfer mode most recently
+ * negotiated to (using parport_negotiate()), as long as that
+ * mode supports forward transfers (host to peripheral).
+ *
+ * It is the caller's responsibility to ensure that the first
+ * @len bytes of @buffer are valid.
+ *
+ * This function returns the number of bytes transferred (if zero
+ * or positive), or else an error code.
+ */
+
ssize_t parport_write (struct parport *port, const void *buffer, size_t len)
{
#ifndef CONFIG_PARPORT_1284
#endif /* IEEE1284 support */
}
-/* Read a block of data. */
+/**
+ * parport_read - read a block of data from a parallel port
+ * @port: port to read from
+ * @buffer: data buffer (in kernel space)
+ * @len: number of bytes of data to transfer
+ *
+ * This will read up to @len bytes into @buffer from the port
+ * specified, using the IEEE 1284 transfer mode most recently
+ * negotiated to (using parport_negotiate()), as long as that
+ * mode supports reverse transfers (peripheral to host).
+ *
+ * It is the caller's responsibility to ensure that the first
+ * @len bytes of @buffer are available to write to.
+ *
+ * This function returns the number of bytes transferred (if zero
+ * or positive), or else an error code.
+ */
+
ssize_t parport_read (struct parport *port, void *buffer, size_t len)
{
#ifndef CONFIG_PARPORT_1284
#endif /* IEEE1284 support */
}
-/* Set the amount of time we wait while nothing's happening. */
+/**
+ * parport_set_timeout - set the inactivity timeout for a device on a port
+ * @dev: device on a port
+ * @inactivity: inactivity timeout (in jiffies)
+ *
+ * This sets the inactivity timeout for a particular device on a
+ * port. This affects functions like parport_wait_peripheral().
+ * The special value 0 means not to call schedule() while dealing
+ * with this device.
+ *
+ * The return value is the previous inactivity timeout.
+ *
+ * Any callers of parport_wait_event() for this device are woken
+ * up.
+ */
+
long parport_set_timeout (struct pardevice *dev, long inactivity)
{
long int old = dev->timeout;
boca_ioppar,
plx_9050,
afavlab_tk9902,
+ timedia_1889,
};
/* boca_ioppar */ { 1, { { 0, -1 }, } },
/* plx_9050 */ { 2, { { 4, -1 }, { 5, -1 }, } },
/* afavlab_tk9902 */ { 1, { { 0, 1 }, } },
+ /* timedia_1889 */ { 1, { { 2, -1 }, } },
};
static struct pci_device_id parport_pc_pci_tbl[] __devinitdata = {
PCI_SUBVENDOR_ID_EXSYS, PCI_SUBDEVICE_ID_EXSYS_4014, 0,0, plx_9050 },
{ PCI_VENDOR_ID_AFAVLAB, PCI_DEVICE_ID_AFAVLAB_TK9902,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, afavlab_tk9902 },
+ { PCI_VENDOR_ID_TIMEDIA, PCI_DEVICE_ID_TIMEDIA_1889,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, timedia_1889 },
{ 0, }, /* terminate list */
};
MODULE_DEVICE_TABLE(pci,parport_pc_pci_tbl);
EXPORT_SYMBOL(pci_assign_resource);
EXPORT_SYMBOL(pci_register_driver);
EXPORT_SYMBOL(pci_unregister_driver);
+EXPORT_SYMBOL(pci_dev_driver);
EXPORT_SYMBOL(pci_match_device);
EXPORT_SYMBOL(pci_find_parent_resource);
save_flags(flags);
cli();
for (bp = (struct NCR53c7x0_break *) host->breakpoints;
- bp; bp = (struct NCR53c7x0_break *) bp->next); {
+ bp; bp = (struct NCR53c7x0_break *) bp->next) {
sprintf (buf, "scsi%d : bp : success : at %08x, replaces %08x %08x",
bp->addr, bp->old[0], bp->old[1]);
len = strlen(buf);
SCpnt->result = 0;
SCpnt->underflow = 0; /* Do not flag underflow conditions */
+ SCpnt->old_underflow = 0;
SCpnt->resid = 0;
SCpnt->state = SCSI_STATE_INITIALIZING;
SCpnt->owner = SCSI_OWNER_HIGHLEVEL;
SCpnt->cmd_len = COMMAND_SIZE(SCpnt->cmnd[0]);
SCpnt->old_cmd_len = SCpnt->cmd_len;
SCpnt->sc_old_data_direction = SCpnt->sc_data_direction;
+ SCpnt->old_underflow = SCpnt->underflow;
/* Start the timer ticking. */
SCpnt->cmd_len = COMMAND_SIZE(SCpnt->cmnd[0]);
SCpnt->old_cmd_len = SCpnt->cmd_len;
SCpnt->sc_old_data_direction = SCpnt->sc_data_direction;
+ SCpnt->old_underflow = SCpnt->underflow;
/* Start the timer ticking. */
SCpnt->use_sg = SCpnt->old_use_sg;
SCpnt->cmd_len = SCpnt->old_cmd_len;
SCpnt->sc_data_direction = SCpnt->sc_old_data_direction;
+ SCpnt->underflow = SCpnt->old_underflow;
/*
* Zero the sense information from the last time we tried
SCpnt->old_use_sg = 0;
SCpnt->old_cmd_len = 0;
SCpnt->underflow = 0;
+ SCpnt->old_underflow = 0;
SCpnt->transfersize = 0;
SCpnt->resid = 0;
SCpnt->serial_number = 0;
unsigned underflow; /* Return error if less than
this amount is transfered */
+ unsigned old_underflow; /* save underflow here when reusing the
+ * command for error handling */
unsigned transfersize; /* How much we are guaranteed to
transfer with each SCSI transfer
SCpnt->use_sg = SCpnt->old_use_sg;
SCpnt->cmd_len = SCpnt->old_cmd_len;
SCpnt->sc_data_direction = SCpnt->sc_old_data_direction;
+ SCpnt->underflow = SCpnt->old_underflow;
scsi_send_eh_cmnd(SCpnt, SCpnt->timeout_per_command);
SCpnt->use_sg = 0;
SCpnt->cmd_len = COMMAND_SIZE(SCpnt->cmnd[0]);
SCpnt->sc_data_direction = SCSI_DATA_READ;
+ SCpnt->underflow = 0;
scsi_send_eh_cmnd(SCpnt, SENSE_TIMEOUT);
SCpnt->use_sg = SCpnt->old_use_sg;
SCpnt->cmd_len = SCpnt->old_cmd_len;
SCpnt->sc_data_direction = SCpnt->sc_old_data_direction;
+ SCpnt->underflow = SCpnt->old_underflow;
/*
* Hey, we are done. Let's look to see what happened.
SCpnt->request_bufflen = 256;
SCpnt->use_sg = 0;
SCpnt->cmd_len = COMMAND_SIZE(SCpnt->cmnd[0]);
- scsi_send_eh_cmnd(SCpnt, SENSE_TIMEOUT);
+ SCpnt->underflow = 0;
SCpnt->sc_data_direction = SCSI_DATA_NONE;
+ scsi_send_eh_cmnd(SCpnt, SENSE_TIMEOUT);
+
/* Last chance to have valid sense data */
if (!scsi_sense_valid(SCpnt))
memcpy((void *) SCpnt->sense_buffer,
SCpnt->use_sg = SCpnt->old_use_sg;
SCpnt->cmd_len = SCpnt->old_cmd_len;
SCpnt->sc_data_direction = SCpnt->sc_old_data_direction;
+ SCpnt->underflow = SCpnt->old_underflow;
/*
* Hey, we are done. Let's look to see what happened.
*/
SCpnt->use_sg = SCpnt->old_use_sg;
SCpnt->sc_data_direction = SCpnt->sc_old_data_direction;
+ SCpnt->underflow = SCpnt->old_underflow;
*SClist = SCpnt;
}
SCpnt->old_use_sg = SCpnt->use_sg;
SCpnt->old_cmd_len = SCpnt->cmd_len;
SCpnt->sc_old_data_direction = SCpnt->sc_data_direction;
+ SCpnt->old_underflow = SCpnt->underflow;
memcpy((void *) SCpnt->data_cmnd,
(const void *) SCpnt->cmnd, sizeof(SCpnt->cmnd));
SCpnt->buffer = SCpnt->request_buffer;
SCpnt->use_sg = SCpnt->old_use_sg;
SCpnt->cmd_len = SCpnt->old_cmd_len;
SCpnt->sc_data_direction = SCpnt->sc_old_data_direction;
+ SCpnt->underflow = SCpnt->old_underflow;
}
switch (host_byte(result)) {
case DID_OK:
SCpnt->use_sg = SCpnt->old_use_sg;
SCpnt->cmd_len = SCpnt->old_cmd_len;
SCpnt->sc_data_direction = SCpnt->sc_old_data_direction;
+ SCpnt->underflow = SCpnt->old_underflow;
SCpnt->result = 0;
/*
* Ugly, ugly. The newer interfaces all
SCpnt->use_sg = SCpnt->old_use_sg;
SCpnt->cmd_len = SCpnt->old_cmd_len;
SCpnt->sc_data_direction = SCpnt->sc_old_data_direction;
+ SCpnt->underflow = SCpnt->old_underflow;
/*
* The upper layers assume the lock isn't held. We mustn't
* disappoint them. When the new error handling code is in
++cur;
}
#endif /* SCSI_NCR_BOOT_COMMAND_LINE_SUPPORT */
- return 0;
+ return 1;
}
/*===================================================================
bool ' 16 bit sampling option of GUS (_NOT_ GUS MAX)' CONFIG_SOUND_GUS16
bool ' GUS MAX support' CONFIG_SOUND_GUSMAX
fi
+ dep_tristate ' Intel ICH audio support' CONFIG_SOUND_ICH $CONFIG_SOUND_OSS
dep_tristate ' Loopback MIDI device support' CONFIG_SOUND_VMIDI $CONFIG_SOUND_OSS
dep_tristate ' MediaTrix AudioTrix Pro support' CONFIG_SOUND_TRIX $CONFIG_SOUND_OSS
if [ "$CONFIG_SOUND_TRIX" = "y" ]; then
obj-$(CONFIG_SOUND_MSNDPIN) += msnd.o msnd_pinnacle.o
obj-$(CONFIG_SOUND_VWSND) += vwsnd.o
obj-$(CONFIG_SOUND_NM256) += nm256_audio.o ac97.o
+obj-$(CONFIG_SOUND_ICH)	+= i810_audio.o ac97.o
obj-$(CONFIG_SOUND_SONICVIBES) += sonicvibes.o
obj-$(CONFIG_SOUND_CMPCI) += cmpci.o
obj-$(CONFIG_SOUND_ES1370) += es1370.o
* Alan Cox : reformatted. Fixed SMP bugs. Moved to kernel alloc/free
* of irqs. Use dev_id.
* Christoph Hellwig : adapted to module_init/module_exit
+ * Aki Laukkanen : added power management support
*
* Status:
* Tested. Believed fully functional.
#include <linux/init.h>
#include <linux/module.h>
#include <linux/stddef.h>
+#include <linux/pm.h>
#include "soundmodule.h"
int irq;
int dma1, dma2;
int dual_dma; /* 1, when two DMA channels allocated */
+ int subtype;
unsigned char MCE_bit;
unsigned char saved_regs[32];
int debug_flag;
int irq_ok;
mixer_ents *mix_devices;
int mixer_output_port;
+
+ /* Power management */
+ struct pm_dev *pmdev;
} ad1848_info;
typedef struct ad1848_port_info
}
ad1848_port_info;
+static struct address_info cfg;
static int nr_ad1848_devs = 0;
+
int deskpro_xl = 0;
int deskpro_m = 0;
int soundpro = 0;
static void ad1848_halt_input(int dev);
static void ad1848_halt_output(int dev);
static void ad1848_trigger(int dev, int bits);
+static int ad1848_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data);
#ifndef EXCLUDE_TIMERS
static int ad1848_tmr_install(int dev);
static void ad1848_tmr_reprogram(int dev);
-
#endif
static int ad_read(ad1848_info * devc, int reg)
if (devc->model == MD_IWAVE)
ad_write(devc, 12, 0x6c); /* Select codec mode 3 */
- if (devc-> model != MD_1845_SSCAPE)
+ if (devc->model != MD_1845_SSCAPE)
for (i = 16; i < 32; i++)
ad_write(devc, i, init_values[i]);
* The actually used IRQ is ABS(irq).
*/
-
int my_dev;
char dev_name[100];
int e;
devc->timer_ticks = 0;
devc->dma1 = dma_playback;
devc->dma2 = dma_capture;
+ devc->subtype = cfg.card_subtype;
devc->audio_flags = DMA_AUTOMODE;
devc->playback_dev = devc->record_dev = 0;
if (name != NULL)
nr_ad1848_devs++;
+ devc->pmdev = pm_register(PM_ISA_DEV, my_dev, ad1848_pm_callback);
+ if (devc->pmdev)
+ devc->pmdev->data = devc;
+
ad1848_init_hw(devc);
if (irq > 0)
devc->irq_ok = 1;
}
#else
- devc->irq_ok=1;
+ devc->irq_ok = 1;
#endif
}
else
if(mixer>=0)
sound_unload_mixerdev(mixer);
+ if (devc->pmdev)
+ pm_unregister(devc->pmdev);
+
nr_ad1848_devs--;
for ( ; i < nr_ad1848_devs ; i++)
adev_info[i] = adev_info[i+1];
hw_config->slots[0] = ad1848_init("MS Sound System", hw_config->io_base + 4,
hw_config->irq,
hw_config->dma,
- hw_config->dma2, 0, hw_config->osp);
+ hw_config->dma2, 0,
+ hw_config->osp);
request_region(hw_config->io_base, 4, "WSS config");
return;
}
outb((bits | dma_bits[dma] | dma2_bit), config_port); /* Write IRQ+DMA setup */
- hw_config->slots[0] = ad1848_init("MSS audio codec", hw_config->io_base + 4,
+ hw_config->slots[0] = ad1848_init("MS Sound System", hw_config->io_base + 4,
hw_config->irq,
- dma,
- dma2, 0,
+ dma, dma2, 0,
hw_config->osp);
request_region(hw_config->io_base, 4, "WSS config");
}
}
#endif /* EXCLUDE_TIMERS */
+static int ad1848_suspend(ad1848_info *devc)
+{
+ unsigned long flags;
+
+ save_flags(flags);
+ cli();
+
+ ad_mute(devc);
+
+ restore_flags(flags);
+ return 0;
+}
+
+static int ad1848_resume(ad1848_info *devc)
+{
+ unsigned long flags;
+ int mixer_levels[32], i;
+
+ save_flags(flags);
+ cli();
+
+ /* store old mixer levels */
+ memcpy(mixer_levels, devc->levels, sizeof (mixer_levels));
+ ad1848_init_hw(devc);
+
+ /* restore mixer levels */
+ for (i = 0; i < 32; i++)
+ ad1848_mixer_set(devc, devc->dev_no, mixer_levels[i]);
+
+ if (!devc->subtype) {
+ static signed char interrupt_bits[12] = { -1, -1, -1, -1, -1, 0x00, -1, 0x08, -1, 0x10, 0x18, 0x20 };
+ static char dma_bits[4] = { 1, 2, 0, 3 };
+
+ signed char bits;
+ char dma2_bit = 0;
+
+ int config_port = devc->base + 0;
+
+ bits = interrupt_bits[devc->irq];
+ if (bits == -1) {
+ printk(KERN_ERR "MSS: Bad IRQ %d\n", devc->irq);
+ return -1;
+ }
+
+ outb((bits | 0x40), config_port);
+
+ if (devc->dma2 != -1 && devc->dma2 != devc->dma1)
+ if ( (devc->dma1 == 0 && devc->dma2 == 1) ||
+ (devc->dma1 == 1 && devc->dma2 == 0) ||
+ (devc->dma1 == 3 && devc->dma2 == 0))
+ dma2_bit = 0x04;
+
+ outb((bits | dma_bits[devc->dma1] | dma2_bit), config_port);
+ }
+
+ restore_flags(flags);
+ return 0;
+}
+
+static int ad1848_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data)
+{
+ ad1848_info *devc = dev->data;
+ if (devc) {
+ DEB(printk("ad1848: pm event received: 0x%x\n", rqst));
+
+ switch (rqst) {
+ case PM_SUSPEND:
+ ad1848_suspend(devc);
+ break;
+ case PM_RESUME:
+ ad1848_resume(devc);
+ break;
+ }
+ }
+ return 0;
+}
+
EXPORT_SYMBOL(ad1848_detect);
EXPORT_SYMBOL(ad1848_init);
static int __initdata dma2 = -1;
static int __initdata type = 0;
-static struct address_info cfg;
-
MODULE_PARM(io, "i"); /* I/O for a raw AD1848 card */
MODULE_PARM(irq, "i"); /* IRQ to use */
MODULE_PARM(dma, "i"); /* First DMA channel */
--- /dev/null
+#include <linux/module.h>
+#include <linux/version.h>
+#include <linux/string.h>
+#include <linux/ctype.h>
+#include <linux/ioport.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/sound.h>
+#include <linux/malloc.h>
+#include <linux/soundcard.h>
+#include <linux/pci.h>
+#include <asm/io.h>
+#include <asm/dma.h>
+#include <linux/init.h>
+#include <linux/poll.h>
+#include <linux/spinlock.h>
+#include <linux/ac97_codec.h>
+#include <asm/uaccess.h>
+#include <asm/hardirq.h>
+
+#ifndef PCI_DEVICE_ID_INTEL_82801
+#define PCI_DEVICE_ID_INTEL_82801 0x2415
+#endif
+#ifndef PCI_DEVICE_ID_INTEL_82901
+#define PCI_DEVICE_ID_INTEL_82901 0x2425
+#endif
+#ifndef PCI_DEVICE_ID_INTEL_440MX
+#define PCI_DEVICE_ID_INTEL_440MX 0x7195
+#endif
+
+#define ADC_RUNNING 1
+#define DAC_RUNNING 2
+
+#define I810_FMT_16BIT 1
+#define I810_FMT_STEREO 2
+#define I810_FMT_MASK 3
+
+/* the 810's array of pointers to data buffers */
+
+struct sg_item {
+#define BUSADDR_MASK 0xFFFFFFFE
+ u32 busaddr;
+#define CON_IOC 0x80000000 /* interrupt on completion */
+#define CON_BUFPAD 0x40000000 /* pad underrun with last sample, else 0 */
+#define CON_BUFLEN_MASK 0x0000ffff /* buffer length in samples */
+ u32 control;
+};
+
+/* an instance of the i810 channel */
+#define SG_LEN 32
+struct i810_channel
+{
+	/* these sg guys should probably be allocated
+	   separately as nocache. Must be 8 byte aligned */
+ struct sg_item sg[SG_LEN]; /* 32*8 */
+ u32 offset; /* 4 */
+ u32 port; /* 4 */
+ u32 used;
+ u32 num;
+};
+
+/*
+ * we have 3 separate dma engines. pcm in, pcm out, and mic.
+ * each dma engine has controlling registers. These goofy
+ * names are from the datasheet, but make it easy to write
+ * code while leafing through it.
+ */
+
+#define ENUM_ENGINE(PRE,DIG) \
+enum { \
+	PRE##_BDBAR =	0x##DIG##0,		/* Buffer Descriptor list Base Address */ \
+	PRE##_CIV =	0x##DIG##4,		/* Current Index Value */ \
+	PRE##_LVI =	0x##DIG##5,		/* Last Valid Index */ \
+	PRE##_SR =	0x##DIG##6,		/* Status Register */ \
+	PRE##_PICB =	0x##DIG##8,		/* Position In Current Buffer */ \
+	PRE##_PIV =	0x##DIG##a,		/* Prefetched Index Value */ \
+	PRE##_CR =	0x##DIG##b		/* Control Register */ \
+}
+
+ENUM_ENGINE(OFF,0); /* Offsets */
+ENUM_ENGINE(PI,0); /* PCM In */
+ENUM_ENGINE(PO,1); /* PCM Out */
+ENUM_ENGINE(MC,2); /* Mic In */
+
+enum {
+ GLOB_CNT = 0x2c, /* Global Control */
+ GLOB_STA = 0x30, /* Global Status */
+ CAS = 0x34 /* Codec Write Semaphore Register */
+};
+
+/* interrupts for a dma engine */
+#define DMA_INT_FIFO (1<<4) /* fifo under/over flow */
+#define DMA_INT_COMPLETE (1<<3) /* buffer read/write complete and ioc set */
+#define DMA_INT_LVI (1<<2) /* last valid done */
+#define DMA_INT_CELV (1<<1) /* last valid is current */
+#define DMA_INT_MASK (DMA_INT_FIFO|DMA_INT_COMPLETE|DMA_INT_LVI)
+
+/* interrupts for the whole chip */
+#define INT_SEC (1<<11)
+#define INT_PRI (1<<10)
+#define INT_MC (1<<7)
+#define INT_PO (1<<6)
+#define INT_PI (1<<5)
+#define INT_MO (1<<2)
+#define INT_NI (1<<1)
+#define INT_GPI (1<<0)
+#define INT_MASK (INT_SEC|INT_PRI|INT_MC|INT_PO|INT_PI|INT_MO|INT_NI|INT_GPI)
+
+
+#define DRIVER_VERSION "0.01"
+
+/* magic numbers to protect our data structures */
+#define I810_CARD_MAGIC 0x5072696E /* "Prin" */
+#define I810_STATE_MAGIC 0x63657373 /* "cess" */
+#define I810_DMA_MASK 0xffffffff /* DMA buffer mask for pci_alloc_consist */
+#define NR_HW_CH 3
+
+/* maximum number of AC97 codecs connected; AC97 2.0 defines up to 4 */
+#define NR_AC97 2
+
+/* minor number of /dev/dsp */
+#define SND_DEV_DSP8 1
+
+/* minor number of /dev/dspW */
+#define SND_DEV_DSP16 1
+
+static const unsigned sample_size[] = { 1, 2, 2, 4 };
+static const unsigned sample_shift[] = { 0, 1, 1, 2 };
+
+enum {
+ ICH82801AA = 0,
+ ICH82901AB,
+ INTEL440MX
+};
+
+static char * card_names[] = {
+ "Intel ICH 82801AA",
+ "Intel ICH 82901AB",
+ "Intel 440MX"
+};
+
+static struct pci_device_id i810_pci_tbl [] __initdata = {
+ {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, ICH82801AA},
+ {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82901,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, ICH82901AB},
+ {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_440MX,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, INTEL440MX},
+ {0,}
+};
+
+MODULE_DEVICE_TABLE (pci, i810_pci_tbl);
+
+/* "software" or virtual channel, an instance of opened /dev/dsp */
+struct i810_state {
+ unsigned int magic;
+ struct i810_card *card; /* Card info */
+
+ /* single open lock mechanism, only used for recording */
+ struct semaphore open_sem;
+ wait_queue_head_t open_wait;
+
+ /* file mode */
+ mode_t open_mode;
+
+ /* virtual channel number */
+ int virt;
+
+ struct dmabuf {
+ /* wave sample stuff */
+ unsigned int rate;
+ unsigned char fmt, enable;
+
+ /* hardware channel */
+ struct i810_channel *channel;
+
+ /* OSS buffer management stuff */
+ void *rawbuf;
+ dma_addr_t dma_handle;
+ unsigned buforder;
+ unsigned numfrag;
+ unsigned fragshift;
+
+ /* our buffer acts like a circular ring */
+ unsigned hwptr; /* where dma last started, updated by update_ptr */
+ unsigned swptr; /* where driver last clear/filled, updated by read/write */
+		int count;		/* bytes to be consumed or generated by the dma machine */
+ unsigned total_bytes; /* total bytes dmaed by hardware */
+
+ unsigned error; /* number of over/underruns */
+ wait_queue_head_t wait; /* put process on wait queue when no more space in buffer */
+
+ /* redundant, but makes calculations easier */
+ unsigned fragsize;
+ unsigned dmasize;
+ unsigned fragsamples;
+
+ /* OSS stuff */
+ unsigned mapped:1;
+ unsigned ready:1;
+ unsigned endcleared:1;
+ unsigned update_flag;
+ unsigned ossfragshift;
+ int ossmaxfrags;
+ unsigned subdivision;
+ } dmabuf;
+};
+
+
+struct i810_card {
+ struct i810_channel channel[3];
+ unsigned int magic;
+
+	/* We keep i810 cards in a linked list */
+ struct i810_card *next;
+
+	/* The i810 has a certain amount of cross channel interaction
+	   so we use a single per card lock */
+ spinlock_t lock;
+
+ /* PCI device stuff */
+ struct pci_dev * pci_dev;
+ u16 pci_id;
+
+ /* soundcore stuff */
+ int dev_audio;
+
+ /* structures for abstraction of hardware facilities, codecs, banks and channels*/
+ struct ac97_codec *ac97_codec[NR_AC97];
+ struct i810_state *states[NR_HW_CH];
+
+ u16 ac97_features;
+
+ /* hardware resources */
+ unsigned long iobase;
+ unsigned long ac97base;
+ u32 irq;
+
+ /* Function support */
+ struct i810_channel *(*alloc_pcm_channel)(struct i810_card *);
+ struct i810_channel *(*alloc_rec_pcm_channel)(struct i810_card *);
+ void (*free_pcm_channel)(struct i810_card *, int chan);
+};
+
+static struct i810_card *devs = NULL;
+
+static int i810_open_mixdev(struct inode *inode, struct file *file);
+static int i810_release_mixdev(struct inode *inode, struct file *file);
+static int i810_ioctl_mixdev(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg);
+static loff_t i810_llseek(struct file *file, loff_t offset, int origin);
+
+extern __inline__ unsigned ld2(unsigned int x)
+{
+ unsigned r = 0;
+
+ if (x >= 0x10000) {
+ x >>= 16;
+ r += 16;
+ }
+ if (x >= 0x100) {
+ x >>= 8;
+ r += 8;
+ }
+ if (x >= 0x10) {
+ x >>= 4;
+ r += 4;
+ }
+ if (x >= 4) {
+ x >>= 2;
+ r += 2;
+ }
+ if (x >= 2)
+ r++;
+ return r;
+}
+
+static u16 i810_ac97_get(struct ac97_codec *dev, u8 reg);
+static void i810_ac97_set(struct ac97_codec *dev, u8 reg, u16 data);
+
+static struct i810_channel *i810_alloc_pcm_channel(struct i810_card *card)
+{
+ if(card->channel[1].used==1)
+ return NULL;
+ card->channel[1].used=1;
+ card->channel[1].offset = 0;
+ card->channel[1].port = 0x10;
+ card->channel[1].num=1;
+ return &card->channel[1];
+}
+
+static struct i810_channel *i810_alloc_rec_pcm_channel(struct i810_card *card)
+{
+ if(card->channel[0].used==1)
+ return NULL;
+ card->channel[0].used=1;
+ card->channel[0].offset = 0;
+ card->channel[0].port = 0x00;
+	card->channel[0].num=0;
+ return &card->channel[0];
+}
+
+static void i810_free_pcm_channel(struct i810_card *card, int channel)
+{
+ card->channel[channel].used=0;
+}
+
+/* set playback sample rate */
+static unsigned int i810_set_dac_rate(struct i810_state * state, unsigned int rate)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ u16 dacp, rp;
+ struct ac97_codec *codec=state->card->ac97_codec[0];
+
+ if(!(state->card->ac97_features&0x0001))
+ return 48000;
+
+ if (rate > 48000)
+ rate = 48000;
+ if (rate < 4000)
+ rate = 4000;
+
+ /* Power down the DAC */
+ dacp=i810_ac97_get(codec, AC97_POWER_CONTROL);
+ i810_ac97_set(codec, AC97_POWER_CONTROL, dacp|0x0200);
+
+ /* Load the rate and read the effective rate */
+ i810_ac97_set(codec, AC97_PCM_FRONT_DAC_RATE, rate);
+ rp=i810_ac97_get(codec, AC97_PCM_FRONT_DAC_RATE);
+
+ printk("DAC rate set to %d Returned %d\n",
+ rate, (int)rp);
+
+ rate=rp;
+
+ /* Power it back up */
+ i810_ac97_set(codec, AC97_POWER_CONTROL, dacp);
+
+ dmabuf->rate = rate;
+#ifdef DEBUG
+ printk("i810_audio: called i810_set_dac_rate : rate = %d\n", rate);
+#endif
+
+ return rate;
+}
+
+/* set recording sample rate */
+static unsigned int i810_set_adc_rate(struct i810_state * state, unsigned int rate)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+
+	/* the capture path runs at a fixed 48 kHz for now */
+	rate = 48000;
+
+ dmabuf->rate = rate;
+#ifdef DEBUG
+ printk("i810_audio: called i810_set_adc_rate : rate = %d\n", rate);
+#endif
+ return rate;
+}
+
+/* prepare channel attributes for playback */
+static void i810_play_setup(struct i810_state *state)
+{
+// struct dmabuf *dmabuf = &state->dmabuf;
+// struct i810_channel *channel = dmabuf->channel;
+ /* Fixed format. .. */
+ //if (dmabuf->fmt & I810_FMT_16BIT)
+ //if (dmabuf->fmt & I810_FMT_STEREO)
+}
+
+/* prepare channel attributes for recording */
+static void i810_rec_setup(struct i810_state *state)
+{
+// u16 w;
+// struct i810_card *card = state->card;
+// struct dmabuf *dmabuf = &state->dmabuf;
+// struct i810_channel *channel = dmabuf->channel;
+
+ /* Enable AC-97 ADC (capture) */
+// if (dmabuf->fmt & I810_FMT_16BIT) {
+// if (dmabuf->fmt & I810_FMT_STEREO)
+}
+
+
+/* get current playback/recording dma buffer pointer (byte offset from LBA),
+ called with spinlock held! */
+
+extern __inline__ unsigned i810_get_dma_addr(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ u32 offset;
+ struct i810_channel *c = dmabuf->channel;
+
+ if (!dmabuf->enable)
+ return 0;
+ offset = inb(state->card->iobase+c->port+OFF_CIV);
+ offset++;
+ offset&=31;
+ /* Offset has to compensate for the fact we finished the segment
+ on the IRQ so we are at next_segment,0 */
+// printk("BANK%d ", offset);
+ offset *= (dmabuf->dmasize/SG_LEN);
+// printk("DMASZ=%d", dmabuf->dmasize);
+// offset += 1024-(4*inw(state->card->iobase+c->port+OFF_PICB));
+// printk("OFF%d ", offset);
+ return offset;
+}
+
+static void resync_dma_ptrs(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ struct i810_channel *c = dmabuf->channel;
+ int offset;
+
+ offset = inb(state->card->iobase+c->port+OFF_CIV);
+ offset *= (dmabuf->dmasize/SG_LEN);
+
+ dmabuf->hwptr=dmabuf->swptr = offset;
+}
+
+/* Stop recording (lock held) */
+extern __inline__ void __stop_adc(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ struct i810_card *card = state->card;
+
+ dmabuf->enable &= ~ADC_RUNNING;
+ outb(0, card->iobase + PI_CR);
+}
+
+static void stop_adc(struct i810_state *state)
+{
+ struct i810_card *card = state->card;
+ unsigned long flags;
+
+ spin_lock_irqsave(&card->lock, flags);
+ __stop_adc(state);
+ spin_unlock_irqrestore(&card->lock, flags);
+}
+
+static void start_adc(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ struct i810_card *card = state->card;
+ unsigned long flags;
+
+ spin_lock_irqsave(&card->lock, flags);
+ if ((dmabuf->mapped || dmabuf->count < (signed)dmabuf->dmasize) && dmabuf->ready) {
+ dmabuf->enable |= ADC_RUNNING;
+ outb((1<<4) | 1<<2 | 1, card->iobase + PI_CR);
+ }
+ spin_unlock_irqrestore(&card->lock, flags);
+}
+
+/* stop playback (lock held) */
+extern __inline__ void __stop_dac(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ struct i810_card *card = state->card;
+
+ dmabuf->enable &= ~DAC_RUNNING;
+ outb(0, card->iobase + PO_CR);
+}
+
+static void stop_dac(struct i810_state *state)
+{
+ struct i810_card *card = state->card;
+ unsigned long flags;
+
+ spin_lock_irqsave(&card->lock, flags);
+ __stop_dac(state);
+ spin_unlock_irqrestore(&card->lock, flags);
+}
+
+static void start_dac(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ struct i810_card *card = state->card;
+ unsigned long flags;
+
+ spin_lock_irqsave(&card->lock, flags);
+ if ((dmabuf->mapped || dmabuf->count > 0) && dmabuf->ready) {
+ if(!(dmabuf->enable&DAC_RUNNING))
+ {
+ dmabuf->enable |= DAC_RUNNING;
+ outb((1<<4) | 1<<2 | 1, card->iobase + PO_CR);
+ }
+ }
+ spin_unlock_irqrestore(&card->lock, flags);
+}
+
+#define DMABUF_DEFAULTORDER (15-PAGE_SHIFT)
+#define DMABUF_MINORDER 1
+
+/* allocate DMA buffer; playback and recording buffers should be allocated separately */
+static int alloc_dmabuf(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ void *rawbuf;
+ int order;
+ unsigned long map, mapend;
+
+ /* alloc as big a chunk as we can, FIXME: is this necessary ?? */
+ for (order = DMABUF_DEFAULTORDER; order >= DMABUF_MINORDER; order--)
+ if ((rawbuf = pci_alloc_consistent(state->card->pci_dev,
+ PAGE_SIZE << order,
+ &dmabuf->dma_handle)))
+ break;
+ if (!rawbuf)
+ return -ENOMEM;
+
+#ifdef DEBUG
+ printk("i810_audio: allocated %ld (order = %d) bytes at %p\n",
+ PAGE_SIZE << order, order, rawbuf);
+#endif
+
+ dmabuf->ready = dmabuf->mapped = 0;
+ dmabuf->rawbuf = rawbuf;
+ dmabuf->buforder = order;
+
+ /* now mark the pages as reserved; otherwise remap_page_range doesn't do what we want */
+ mapend = MAP_NR(rawbuf + (PAGE_SIZE << order) - 1);
+ for (map = MAP_NR(rawbuf); map <= mapend; map++)
+ set_bit(PG_reserved, &mem_map[map].flags);
+
+ return 0;
+}
+
+/* free DMA buffer */
+static void dealloc_dmabuf(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ unsigned long map, mapend;
+
+ if (dmabuf->rawbuf) {
+ /* undo marking the pages as reserved */
+ mapend = MAP_NR(dmabuf->rawbuf + (PAGE_SIZE << dmabuf->buforder) - 1);
+ for (map = MAP_NR(dmabuf->rawbuf); map <= mapend; map++)
+ clear_bit(PG_reserved, &mem_map[map].flags);
+ pci_free_consistent(state->card->pci_dev, PAGE_SIZE << dmabuf->buforder,
+ dmabuf->rawbuf, dmabuf->dma_handle);
+ }
+ dmabuf->rawbuf = NULL;
+ dmabuf->mapped = dmabuf->ready = 0;
+}
+
+static int prog_dmabuf(struct i810_state *state, unsigned rec)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ struct sg_item *sg;
+ unsigned bytepersec;
+ unsigned bufsize;
+ unsigned long flags;
+ int ret;
+ unsigned fragsize;
+ int i;
+
+ spin_lock_irqsave(&state->card->lock, flags);
+ resync_dma_ptrs(state);
+ dmabuf->total_bytes = 0;
+ dmabuf->count = dmabuf->error = 0;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ /* allocate DMA buffer if not allocated yet */
+ if (!dmabuf->rawbuf)
+ if ((ret = alloc_dmabuf(state)))
+ return ret;
+
+ /* FIXME: figure out all this OSS fragment stuff */
+ bytepersec = dmabuf->rate << sample_shift[dmabuf->fmt];
+ bufsize = PAGE_SIZE << dmabuf->buforder;
+ if (dmabuf->ossfragshift) {
+ if ((1000 << dmabuf->ossfragshift) < bytepersec)
+ dmabuf->fragshift = ld2(bytepersec/1000);
+ else
+ dmabuf->fragshift = dmabuf->ossfragshift;
+ } else {
+		/* let's hand out reasonably big buffers by default */
+ dmabuf->fragshift = (dmabuf->buforder + PAGE_SHIFT -2);
+ }
+ dmabuf->numfrag = bufsize >> dmabuf->fragshift;
+ while (dmabuf->numfrag < 4 && dmabuf->fragshift > 3) {
+ dmabuf->fragshift--;
+ dmabuf->numfrag = bufsize >> dmabuf->fragshift;
+ }
+ dmabuf->fragsize = 1 << dmabuf->fragshift;
+ if (dmabuf->ossmaxfrags >= 4 && dmabuf->ossmaxfrags < dmabuf->numfrag)
+ dmabuf->numfrag = dmabuf->ossmaxfrags;
+ dmabuf->fragsamples = dmabuf->fragsize >> sample_shift[dmabuf->fmt];
+ dmabuf->dmasize = dmabuf->numfrag << dmabuf->fragshift;
+
+ memset(dmabuf->rawbuf, (dmabuf->fmt & I810_FMT_16BIT) ? 0 : 0x80,
+ dmabuf->dmasize);
+
+ /*
+ * Now set up the ring
+ */
+
+ sg=&dmabuf->channel->sg[0];
+ fragsize = bufsize / SG_LEN;
+
+ /*
+ * Load up 32 sg entries and take an interrupt at half
+ * way (we might want more interrupts later..)
+ */
+
+ for(i=0;i<32;i++)
+ {
+ sg->busaddr=virt_to_bus(dmabuf->rawbuf+fragsize*i);
+ sg->control=(fragsize>>1);
+ sg->control|=CON_IOC;
+ sg++;
+ }
+ spin_lock_irqsave(&state->card->lock, flags);
+ outl(virt_to_bus(&dmabuf->channel->sg[0]), state->card->iobase+dmabuf->channel->port+OFF_BDBAR);
+ outb(16, state->card->iobase+dmabuf->channel->port+OFF_LVI);
+ outb(0, state->card->iobase+dmabuf->channel->port+OFF_CIV);
+ if (rec) {
+ i810_rec_setup(state);
+ } else {
+ i810_play_setup(state);
+ }
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ /* set the ready flag for the dma buffer */
+ dmabuf->ready = 1;
+
+#ifdef DEBUG
+ printk("i810_audio: prog_dmabuf, sample rate = %d, format = %d, numfrag = %d, "
+ "fragsize = %d dmasize = %d\n",
+ dmabuf->rate, dmabuf->fmt, dmabuf->numfrag,
+ dmabuf->fragsize, dmabuf->dmasize);
+#endif
+
+ return 0;
+}
+
+/* we are doing quantum mechanics here: the buffer can only be empty, half filled or completely filled, i.e.
+ |------------|------------| or |xxxxxxxxxxxx|------------| or |xxxxxxxxxxxx|xxxxxxxxxxxx|
+ but we almost always get this
+ |xxxxxx------|------------| or |xxxxxxxxxxxx|xxxxx-------|
+ so we have to clear the tail space to "silence"
+ |xxxxxx000000|------------| or |xxxxxxxxxxxx|xxxxxx000000|
+*/
+static void i810_clear_tail(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ unsigned swptr;
+ unsigned char silence = (dmabuf->fmt & I810_FMT_16BIT) ? 0 : 0x80;
+ unsigned int len;
+ unsigned long flags;
+
+ spin_lock_irqsave(&state->card->lock, flags);
+ swptr = dmabuf->swptr;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ if (swptr == 0 || swptr == dmabuf->dmasize / 2 || swptr == dmabuf->dmasize)
+ return;
+
+ if (swptr < dmabuf->dmasize/2)
+ len = dmabuf->dmasize/2 - swptr;
+ else
+ len = dmabuf->dmasize - swptr;
+
+ memset(dmabuf->rawbuf + swptr, silence, len);
+
+ spin_lock_irqsave(&state->card->lock, flags);
+ dmabuf->swptr += len;
+ dmabuf->count += len;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ /* restart the dma machine in case it is halted */
+ start_dac(state);
+}
+
+static int drain_dac(struct i810_state *state, int nonblock)
+{
+ DECLARE_WAITQUEUE(wait, current);
+ struct dmabuf *dmabuf = &state->dmabuf;
+ unsigned long flags;
+ unsigned long tmo;
+ int count;
+
+ if (dmabuf->mapped || !dmabuf->ready)
+ return 0;
+
+ add_wait_queue(&dmabuf->wait, &wait);
+ for (;;) {
+ /* It seems that we have to set the current state to TASK_INTERRUPTIBLE
+ every time to make the process really go to sleep */
+ current->state = TASK_INTERRUPTIBLE;
+
+ spin_lock_irqsave(&state->card->lock, flags);
+ count = dmabuf->count;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ if (count <= 0)
+ break;
+
+ if (signal_pending(current))
+ break;
+
+ if (nonblock) {
+ remove_wait_queue(&dmabuf->wait, &wait);
+ current->state = TASK_RUNNING;
+ return -EBUSY;
+ }
+
+		/* No matter how much data is left in the buffer, we have to wait until
+		   CSO == ESO/2 or CSO == ESO when the address engine interrupts */
+ tmo = (dmabuf->dmasize * HZ) / dmabuf->rate;
+ tmo >>= sample_shift[dmabuf->fmt];
+ if (!schedule_timeout(tmo ? tmo : 1) && tmo){
+ printk(KERN_ERR "i810_audio: drain_dac, dma timeout?\n");
+ break;
+ }
+ }
+ remove_wait_queue(&dmabuf->wait, &wait);
+ current->state = TASK_RUNNING;
+ if (signal_pending(current))
+ return -ERESTARTSYS;
+
+ return 0;
+}
+
+/* update buffer management pointers, especially dmabuf->count and dmabuf->hwptr */
+static void i810_update_ptr(struct i810_state *state)
+{
+ struct dmabuf *dmabuf = &state->dmabuf;
+ unsigned hwptr, swptr;
+ int clear_cnt = 0;
+ int diff;
+ unsigned char silence;
+// unsigned half_dmasize;
+
+ /* update hardware pointer */
+ hwptr = i810_get_dma_addr(state);
+ diff = (dmabuf->dmasize + hwptr - dmabuf->hwptr) % dmabuf->dmasize;
+// printk("HWP %d,%d,%d\n", hwptr, dmabuf->hwptr, diff);
+ dmabuf->hwptr = hwptr;
+ dmabuf->total_bytes += diff;
+
+	/* error handling and process wake up for ADC */
+ if (dmabuf->enable == ADC_RUNNING) {
+ if (dmabuf->mapped) {
+ dmabuf->count -= diff;
+ if (dmabuf->count >= (signed)dmabuf->fragsize)
+ wake_up(&dmabuf->wait);
+ } else {
+ dmabuf->count += diff;
+
+ if (dmabuf->count < 0 || dmabuf->count > dmabuf->dmasize) {
+ /* buffer underrun or buffer overrun, we have no way to recover
+ it here, just stop the machine and let the process force hwptr
+ and swptr to sync */
+ __stop_adc(state);
+ dmabuf->error++;
+ }
+ else if (!dmabuf->endcleared) {
+ swptr = dmabuf->swptr;
+ silence = (dmabuf->fmt & I810_FMT_16BIT ? 0 : 0x80);
+ if (dmabuf->count < (signed) dmabuf->fragsize)
+ {
+ clear_cnt = dmabuf->fragsize;
+ if ((swptr + clear_cnt) > dmabuf->dmasize)
+ clear_cnt = dmabuf->dmasize - swptr;
+ memset (dmabuf->rawbuf + swptr, silence, clear_cnt);
+ dmabuf->endcleared = 1;
+ }
+ }
+			/* since the dma machine only interrupts at ESO and ESO/2, we are sure to have
+			   at least half of the dma buffer free, so wake up the process unconditionally */
+ wake_up(&dmabuf->wait);
+ }
+ }
+ /* error handling and process wake up for DAC */
+ if (dmabuf->enable == DAC_RUNNING) {
+ if (dmabuf->mapped) {
+ dmabuf->count += diff;
+ if (dmabuf->count >= (signed)dmabuf->fragsize)
+ wake_up(&dmabuf->wait);
+ } else {
+ dmabuf->count -= diff;
+
+ if (dmabuf->count < 0 || dmabuf->count > dmabuf->dmasize) {
+ /* buffer underrun or buffer overrun, we have no way to recover
+ it here, just stop the machine and let the process force hwptr
+ and swptr to sync */
+ __stop_dac(state);
+ printk("DMA overrun on send\n");
+ dmabuf->error++;
+ }
+		/* since the dma machine only interrupts at ESO and ESO/2, we are sure to have
+		   at least half of the dma buffer free, so wake up the process unconditionally */
+ wake_up(&dmabuf->wait);
+ }
+ }
+}
+
+static void i810_channel_interrupt(struct i810_card *card)
+{
+ int i;
+
+// printk("CHANNEL IRQ .. ");
+ for(i=0;i<NR_HW_CH;i++)
+ {
+ struct i810_state *state = card->states[i];
+ struct i810_channel *c;
+ unsigned long port = card->iobase;
+ u16 status;
+
+ if(!state)
+ continue;
+ if(!state->dmabuf.ready)
+ continue;
+ c=state->dmabuf.channel;
+
+ port+=c->port;
+
+// printk("PORT %lX (", port);
+
+ status = inw(port + OFF_SR);
+
+// printk("ST%d ", status);
+
+ if(status & DMA_INT_LVI)
+ {
+ /* Back to the start */
+// printk("LVI - STOP");
+ outb((inb(port+OFF_CIV)-1)&31, port+OFF_LVI);
+ i810_update_ptr(state);
+ outb(0, port + OFF_CR);
+ }
+ if(status & DMA_INT_COMPLETE)
+ {
+ int x;
+			/* Keep the card chasing its tail */
+ outb(x=((inb(port+OFF_CIV)-1)&31), port+OFF_LVI);
+ i810_update_ptr(state);
+// printk("COMP%d ",x);
+ }
+// printk(")");
+ outw(status & DMA_INT_MASK, port + OFF_SR);
+ }
+// printk("\n");
+}
+
+static void i810_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct i810_card *card = (struct i810_card *)dev_id;
+ u32 status;
+
+ spin_lock(&card->lock);
+
+ status = inl(card->iobase + GLOB_STA);
+	if(!(status & INT_MASK)) {
+		spin_unlock(&card->lock);
+		return;		/* not for us */
+	}
+
+// printk("Interrupt %X: ", status);
+ if(status & (INT_PO|INT_PI|INT_MC))
+ i810_channel_interrupt(card);
+
+ /* clear 'em */
+ outl(status & INT_MASK, card->iobase + GLOB_STA);
+ spin_unlock(&card->lock);
+}
+
+static loff_t i810_llseek(struct file *file, loff_t offset, int origin)
+{
+ return -ESPIPE;
+}
+
+/* in this loop, dmabuf.count signifies the amount of data that is waiting to be copied to
+ the user's buffer. it is filled by the dma machine and drained by this loop. */
+static ssize_t i810_read(struct file *file, char *buffer, size_t count, loff_t *ppos)
+{
+ struct i810_state *state = (struct i810_state *)file->private_data;
+ struct dmabuf *dmabuf = &state->dmabuf;
+ ssize_t ret;
+ unsigned long flags;
+ unsigned swptr;
+ int cnt;
+
+#ifdef DEBUG
+ printk("i810_audio: i810_read called, count = %d\n", count);
+#endif
+
+ if (ppos != &file->f_pos)
+ return -ESPIPE;
+ if (dmabuf->mapped)
+ return -ENXIO;
+ if (!dmabuf->ready && (ret = prog_dmabuf(state, 1)))
+ return ret;
+ if (!access_ok(VERIFY_WRITE, buffer, count))
+ return -EFAULT;
+ ret = 0;
+
+ while (count > 0) {
+ spin_lock_irqsave(&state->card->lock, flags);
+ if (dmabuf->count > (signed) dmabuf->dmasize) {
+			/* buffer overrun, we are recovering from sleep_on_timeout,
+			   resync hwptr and swptr, make the process flush the buffer */
+ dmabuf->count = dmabuf->dmasize;
+ dmabuf->swptr = dmabuf->hwptr;
+ }
+ swptr = dmabuf->swptr;
+ cnt = dmabuf->dmasize - swptr;
+ if (dmabuf->count < cnt)
+ cnt = dmabuf->count;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ if (cnt > count)
+ cnt = count;
+ if (cnt <= 0) {
+ unsigned long tmo;
+ /* buffer is empty, start the dma machine and wait for data to be
+ recorded */
+ start_adc(state);
+ if (file->f_flags & O_NONBLOCK) {
+ if (!ret) ret = -EAGAIN;
+ return ret;
+ }
+			/* No matter how much space is left in the buffer, we have to wait until
+			   CSO == ESO/2 or CSO == ESO when the address engine interrupts */
+ tmo = (dmabuf->dmasize * HZ) / (dmabuf->rate * 2);
+ tmo >>= sample_shift[dmabuf->fmt];
+			/* There are two situations in which sleep_on_timeout returns.  One is when
+			   the interrupt is serviced correctly and the process is woken up by
+			   the ISR ON TIME.  The other is when the timeout expires, which means that
+			   either the interrupt is NOT serviced correctly (pending interrupt) or it
+			   is TOO LATE for the process to be scheduled to run (scheduler latency),
+			   which results in a (potential) buffer overrun.  And worse, there is
+			   NOTHING we can do to prevent it. */
+ if (!interruptible_sleep_on_timeout(&dmabuf->wait, tmo)) {
+#ifdef DEBUG
+ printk(KERN_ERR "i810_audio: recording schedule timeout, "
+ "dmasz %u fragsz %u count %i hwptr %u swptr %u\n",
+ dmabuf->dmasize, dmabuf->fragsize, dmabuf->count,
+ dmabuf->hwptr, dmabuf->swptr);
+#endif
+				/* a buffer overrun; we delay the recovery until the next time the
+				   while loop begins and we REALLY have space to record */
+ }
+ if (signal_pending(current)) {
+ ret = ret ? ret : -ERESTARTSYS;
+ return ret;
+ }
+ continue;
+ }
+
+ if (copy_to_user(buffer, dmabuf->rawbuf + swptr, cnt)) {
+ if (!ret) ret = -EFAULT;
+ return ret;
+ }
+
+ swptr = (swptr + cnt) % dmabuf->dmasize;
+
+ spin_lock_irqsave(&state->card->lock, flags);
+ dmabuf->swptr = swptr;
+ dmabuf->count -= cnt;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ count -= cnt;
+ buffer += cnt;
+ ret += cnt;
+ start_adc(state);
+ }
+ return ret;
+}
+
+/* In this loop, dmabuf.count signifies the amount of data that is waiting to be sent to
+   the soundcard.  It is drained by the dma machine and filled by this loop. */
+static ssize_t i810_write(struct file *file, const char *buffer, size_t count, loff_t *ppos)
+{
+ struct i810_state *state = (struct i810_state *)file->private_data;
+ struct dmabuf *dmabuf = &state->dmabuf;
+ ssize_t ret;
+ unsigned long flags;
+ unsigned swptr;
+ int cnt;
+
+#ifdef DEBUG
+ printk("i810_audio: i810_write called, count = %d\n", count);
+#endif
+
+ if (ppos != &file->f_pos)
+ return -ESPIPE;
+ if (dmabuf->mapped)
+ return -ENXIO;
+ if (!dmabuf->ready && (ret = prog_dmabuf(state, 0)))
+ return ret;
+ if (!access_ok(VERIFY_READ, buffer, count))
+ return -EFAULT;
+ ret = 0;
+
+ while (count > 0) {
+ spin_lock_irqsave(&state->card->lock, flags);
+ if (dmabuf->count < 0) {
+ /* buffer underrun, we are recovering from sleep_on_timeout,
+ resync hwptr and swptr */
+ dmabuf->count = 0;
+ dmabuf->swptr = dmabuf->hwptr;
+ }
+ swptr = dmabuf->swptr;
+ cnt = dmabuf->dmasize - swptr;
+ if (dmabuf->count + cnt > dmabuf->dmasize)
+ cnt = dmabuf->dmasize - dmabuf->count;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ if (cnt > count)
+ cnt = count;
+ if (cnt <= 0) {
+ unsigned long tmo;
+ /* buffer is full, start the dma machine and wait for data to be
+ played */
+ start_dac(state);
+ if (file->f_flags & O_NONBLOCK) {
+ if (!ret) ret = -EAGAIN;
+ return ret;
+ }
+			/* No matter how much data is left in the buffer, we have to wait until
+			   CSO == ESO/2 or CSO == ESO when the address engine interrupts */
+ tmo = (dmabuf->dmasize * HZ) / (dmabuf->rate * 2);
+ tmo >>= sample_shift[dmabuf->fmt];
+			/* There are two situations in which sleep_on_timeout returns.  One is when
+			   the interrupt is serviced correctly and the process is woken up by
+			   the ISR ON TIME.  The other is when the timeout expires, which means that
+			   either the interrupt is NOT serviced correctly (pending interrupt) or it
+			   is TOO LATE for the process to be scheduled to run (scheduler latency),
+			   which results in a (potential) buffer underrun.  And worse, there is
+			   NOTHING we can do to prevent it. */
+ if (!interruptible_sleep_on_timeout(&dmabuf->wait, tmo)) {
+#ifdef DEBUG
+ printk(KERN_ERR "i810_audio: playback schedule timeout, "
+ "dmasz %u fragsz %u count %i hwptr %u swptr %u\n",
+ dmabuf->dmasize, dmabuf->fragsize, dmabuf->count,
+ dmabuf->hwptr, dmabuf->swptr);
+#endif
+				/* a buffer underrun; we delay the recovery until the next time the
+				   while loop begins and we REALLY have data to play */
+ }
+ if (signal_pending(current)) {
+ if (!ret) ret = -ERESTARTSYS;
+ return ret;
+ }
+ continue;
+ }
+ if (copy_from_user(dmabuf->rawbuf + swptr, buffer, cnt)) {
+ if (!ret) ret = -EFAULT;
+ return ret;
+ }
+
+ swptr = (swptr + cnt) % dmabuf->dmasize;
+
+ spin_lock_irqsave(&state->card->lock, flags);
+ dmabuf->swptr = swptr;
+ dmabuf->count += cnt;
+ dmabuf->endcleared = 0;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ count -= cnt;
+ buffer += cnt;
+ ret += cnt;
+ start_dac(state);
+ }
+ return ret;
+}
+
+static unsigned int i810_poll(struct file *file, struct poll_table_struct *wait)
+{
+ struct i810_state *state = (struct i810_state *)file->private_data;
+ struct dmabuf *dmabuf = &state->dmabuf;
+ unsigned long flags;
+ unsigned int mask = 0;
+
+ if (file->f_mode & FMODE_WRITE)
+ poll_wait(file, &dmabuf->wait, wait);
+ if (file->f_mode & FMODE_READ)
+ poll_wait(file, &dmabuf->wait, wait);
+
+ spin_lock_irqsave(&state->card->lock, flags);
+ i810_update_ptr(state);
+ if (file->f_mode & FMODE_READ) {
+ if (dmabuf->count >= (signed)dmabuf->fragsize)
+ mask |= POLLIN | POLLRDNORM;
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ if (dmabuf->mapped) {
+ if (dmabuf->count >= (signed)dmabuf->fragsize)
+ mask |= POLLOUT | POLLWRNORM;
+ } else {
+ if ((signed)dmabuf->dmasize >= dmabuf->count + (signed)dmabuf->fragsize)
+ mask |= POLLOUT | POLLWRNORM;
+ }
+ }
+ spin_unlock_irqrestore(&state->card->lock, flags);
+
+ return mask;
+}
+
+static int i810_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct i810_state *state = (struct i810_state *)file->private_data;
+ struct dmabuf *dmabuf = &state->dmabuf;
+ int ret;
+ unsigned long size;
+
+ if (vma->vm_flags & VM_WRITE) {
+ if ((ret = prog_dmabuf(state, 0)) != 0)
+ return ret;
+ } else if (vma->vm_flags & VM_READ) {
+ if ((ret = prog_dmabuf(state, 1)) != 0)
+ return ret;
+ } else
+ return -EINVAL;
+
+ if (vma->vm_pgoff != 0)
+ return -EINVAL;
+ size = vma->vm_end - vma->vm_start;
+ if (size > (PAGE_SIZE << dmabuf->buforder))
+ return -EINVAL;
+ if (remap_page_range(vma->vm_start, virt_to_phys(dmabuf->rawbuf),
+ size, vma->vm_page_prot))
+ return -EAGAIN;
+ dmabuf->mapped = 1;
+
+ return 0;
+}
+
+static int i810_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg)
+{
+ struct i810_state *state = (struct i810_state *)file->private_data;
+ struct dmabuf *dmabuf = &state->dmabuf;
+ unsigned long flags;
+ audio_buf_info abinfo;
+ count_info cinfo;
+ int val, mapped, ret;
+
+ mapped = ((file->f_mode & FMODE_WRITE) && dmabuf->mapped) ||
+ ((file->f_mode & FMODE_READ) && dmabuf->mapped);
+#ifdef DEBUG
+ printk("i810_audio: i810_ioctl, command = %2d, arg = 0x%08x\n",
+ _IOC_NR(cmd), arg ? *(int *)arg : 0);
+#endif
+
+ switch (cmd)
+ {
+ case OSS_GETVERSION:
+ return put_user(SOUND_VERSION, (int *)arg);
+
+ case SNDCTL_DSP_RESET:
+ /* FIXME: spin_lock ? */
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(state);
+ synchronize_irq();
+ dmabuf->ready = 0;
+ resync_dma_ptrs(state);
+ dmabuf->swptr = dmabuf->hwptr = 0;
+ dmabuf->count = dmabuf->total_bytes = 0;
+ }
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(state);
+ synchronize_irq();
+ resync_dma_ptrs(state);
+ dmabuf->ready = 0;
+ dmabuf->swptr = dmabuf->hwptr = 0;
+ dmabuf->count = dmabuf->total_bytes = 0;
+ }
+ return 0;
+
+ case SNDCTL_DSP_SYNC:
+ if (file->f_mode & FMODE_WRITE)
+ return drain_dac(state, file->f_flags & O_NONBLOCK);
+ return 0;
+
+	case SNDCTL_DSP_SPEED: /* set sample rate */
+ get_user_ret(val, (int *)arg, -EFAULT);
+ if (val >= 0) {
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(state);
+ dmabuf->ready = 0;
+ spin_lock_irqsave(&state->card->lock, flags);
+ i810_set_dac_rate(state, val);
+ spin_unlock_irqrestore(&state->card->lock, flags);
+ }
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(state);
+ dmabuf->ready = 0;
+ spin_lock_irqsave(&state->card->lock, flags);
+ i810_set_adc_rate(state, val);
+ spin_unlock_irqrestore(&state->card->lock, flags);
+ }
+ }
+ return put_user(dmabuf->rate, (int *)arg);
+
+ case SNDCTL_DSP_STEREO: /* set stereo or mono channel */
+ get_user_ret(val, (int *)arg, -EFAULT);
+ if(val==0)
+ return -EINVAL;
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(state);
+ dmabuf->ready = 0;
+ dmabuf->fmt = I810_FMT_STEREO;
+ }
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(state);
+ dmabuf->ready = 0;
+ dmabuf->fmt = I810_FMT_STEREO;
+ }
+ return 0;
+
+ case SNDCTL_DSP_GETBLKSIZE:
+ if (file->f_mode & FMODE_WRITE) {
+ if ((val = prog_dmabuf(state, 0)))
+ return val;
+ return put_user(dmabuf->fragsize, (int *)arg);
+ }
+ if (file->f_mode & FMODE_READ) {
+ if ((val = prog_dmabuf(state, 1)))
+ return val;
+ return put_user(dmabuf->fragsize, (int *)arg);
+		}
+		return -EINVAL;
+
+	case SNDCTL_DSP_GETFMTS: /* Returns a mask of supported sample formats */
+ return put_user(AFMT_S16_LE, (int *)arg);
+
+ case SNDCTL_DSP_SETFMT: /* Select sample format */
+ get_user_ret(val, (int *)arg, -EFAULT);
+ if (val != AFMT_QUERY) {
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(state);
+ dmabuf->ready = 0;
+ }
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(state);
+ dmabuf->ready = 0;
+ }
+ }
+ return put_user(AFMT_S16_LE, (int *)arg);
+
+ case SNDCTL_DSP_CHANNELS:
+ get_user_ret(val, (int *)arg, -EFAULT);
+ if (val != 0) {
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(state);
+ dmabuf->ready = 0;
+ }
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(state);
+ dmabuf->ready = 0;
+ }
+ }
+ return put_user(2, (int *)arg);
+
+ case SNDCTL_DSP_POST:
+ /* FIXME: the same as RESET ?? */
+ return 0;
+
+ case SNDCTL_DSP_SUBDIVIDE:
+ if (dmabuf->subdivision)
+ return -EINVAL;
+ get_user_ret(val, (int *)arg, -EFAULT);
+ if (val != 1 && val != 2 && val != 4)
+ return -EINVAL;
+ dmabuf->subdivision = val;
+ return 0;
+
+ case SNDCTL_DSP_SETFRAGMENT:
+ get_user_ret(val, (int *)arg, -EFAULT);
+
+ dmabuf->ossfragshift = val & 0xffff;
+ dmabuf->ossmaxfrags = (val >> 16) & 0xffff;
+ if (dmabuf->ossfragshift < 4)
+ dmabuf->ossfragshift = 4;
+ if (dmabuf->ossfragshift > 15)
+ dmabuf->ossfragshift = 15;
+ if (dmabuf->ossmaxfrags < 4)
+ dmabuf->ossmaxfrags = 4;
+
+ return 0;
+
+ case SNDCTL_DSP_GETOSPACE:
+ if (!(file->f_mode & FMODE_WRITE))
+ return -EINVAL;
+ if (!dmabuf->enable && (val = prog_dmabuf(state, 0)) != 0)
+ return val;
+ spin_lock_irqsave(&state->card->lock, flags);
+ i810_update_ptr(state);
+ abinfo.fragsize = dmabuf->fragsize;
+ abinfo.bytes = dmabuf->dmasize - dmabuf->count;
+ abinfo.fragstotal = dmabuf->numfrag;
+ abinfo.fragments = abinfo.bytes >> dmabuf->fragshift;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+ return copy_to_user((void *)arg, &abinfo, sizeof(abinfo)) ? -EFAULT : 0;
+
+ case SNDCTL_DSP_GETISPACE:
+ if (!(file->f_mode & FMODE_READ))
+ return -EINVAL;
+ if (!dmabuf->enable && (val = prog_dmabuf(state, 1)) != 0)
+ return val;
+ spin_lock_irqsave(&state->card->lock, flags);
+ i810_update_ptr(state);
+ abinfo.fragsize = dmabuf->fragsize;
+ abinfo.bytes = dmabuf->count;
+ abinfo.fragstotal = dmabuf->numfrag;
+ abinfo.fragments = abinfo.bytes >> dmabuf->fragshift;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+ return copy_to_user((void *)arg, &abinfo, sizeof(abinfo)) ? -EFAULT : 0;
+
+ case SNDCTL_DSP_NONBLOCK:
+ file->f_flags |= O_NONBLOCK;
+ return 0;
+
+ case SNDCTL_DSP_GETCAPS:
+ return put_user(DSP_CAP_REALTIME|DSP_CAP_TRIGGER|DSP_CAP_MMAP|DSP_CAP_BIND,
+ (int *)arg);
+
+ case SNDCTL_DSP_GETTRIGGER:
+ val = 0;
+ if (file->f_mode & FMODE_READ && dmabuf->enable)
+ val |= PCM_ENABLE_INPUT;
+ if (file->f_mode & FMODE_WRITE && dmabuf->enable)
+ val |= PCM_ENABLE_OUTPUT;
+ return put_user(val, (int *)arg);
+
+ case SNDCTL_DSP_SETTRIGGER:
+ get_user_ret(val, (int *)arg, -EFAULT);
+ if (file->f_mode & FMODE_READ) {
+ if (val & PCM_ENABLE_INPUT) {
+ if (!dmabuf->ready && (ret = prog_dmabuf(state, 1)))
+ return ret;
+ start_adc(state);
+ } else
+ stop_adc(state);
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ if (val & PCM_ENABLE_OUTPUT) {
+ if (!dmabuf->ready && (ret = prog_dmabuf(state, 0)))
+ return ret;
+ start_dac(state);
+ } else
+ stop_dac(state);
+ }
+ return 0;
+
+ case SNDCTL_DSP_GETIPTR:
+ if (!(file->f_mode & FMODE_READ))
+ return -EINVAL;
+ spin_lock_irqsave(&state->card->lock, flags);
+ i810_update_ptr(state);
+ cinfo.bytes = dmabuf->total_bytes;
+ cinfo.blocks = dmabuf->count >> dmabuf->fragshift;
+ cinfo.ptr = dmabuf->hwptr;
+ if (dmabuf->mapped)
+ dmabuf->count &= dmabuf->fragsize-1;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+		return copy_to_user((void *)arg, &cinfo, sizeof(cinfo)) ? -EFAULT : 0;
+
+ case SNDCTL_DSP_GETOPTR:
+ if (!(file->f_mode & FMODE_WRITE))
+ return -EINVAL;
+ spin_lock_irqsave(&state->card->lock, flags);
+ i810_update_ptr(state);
+ cinfo.bytes = dmabuf->total_bytes;
+ cinfo.blocks = dmabuf->count >> dmabuf->fragshift;
+ cinfo.ptr = dmabuf->hwptr;
+ if (dmabuf->mapped)
+ dmabuf->count &= dmabuf->fragsize-1;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+		return copy_to_user((void *)arg, &cinfo, sizeof(cinfo)) ? -EFAULT : 0;
+
+ case SNDCTL_DSP_SETDUPLEX:
+ return -EINVAL;
+
+ case SNDCTL_DSP_GETODELAY:
+ if (!(file->f_mode & FMODE_WRITE))
+ return -EINVAL;
+ spin_lock_irqsave(&state->card->lock, flags);
+ i810_update_ptr(state);
+ val = dmabuf->count;
+ spin_unlock_irqrestore(&state->card->lock, flags);
+ return put_user(val, (int *)arg);
+
+ case SOUND_PCM_READ_RATE:
+ return put_user(dmabuf->rate, (int *)arg);
+
+ case SOUND_PCM_READ_CHANNELS:
+ return put_user((dmabuf->fmt & I810_FMT_STEREO) ? 2 : 1,
+ (int *)arg);
+
+ case SOUND_PCM_READ_BITS:
+ return put_user(AFMT_S16_LE, (int *)arg);
+
+ case SNDCTL_DSP_MAPINBUF:
+ case SNDCTL_DSP_MAPOUTBUF:
+ case SNDCTL_DSP_SETSYNCRO:
+ case SOUND_PCM_WRITE_FILTER:
+ case SOUND_PCM_READ_FILTER:
+ return -EINVAL;
+ }
+ return -EINVAL;
+}
+
+static int i810_open(struct inode *inode, struct file *file)
+{
+ int i = 0;
+ int minor = MINOR(inode->i_rdev);
+ struct i810_card *card = devs;
+ struct i810_state *state = NULL;
+ struct dmabuf *dmabuf = NULL;
+
+	/* find an available virtual channel (instance of /dev/dsp) */
+ while (card != NULL) {
+ for (i = 0; i < NR_HW_CH; i++) {
+ if (card->states[i] == NULL) {
+ state = card->states[i] = (struct i810_state *)
+ kmalloc(sizeof(struct i810_state), GFP_KERNEL);
+ if (state == NULL)
+ return -ENOMEM;
+ memset(state, 0, sizeof(struct i810_state));
+ dmabuf = &state->dmabuf;
+ goto found_virt;
+ }
+ }
+ card = card->next;
+ }
+	/* no more virtual channels available */
+ if (!state)
+ return -ENODEV;
+
+ found_virt:
+ /* found a free virtual channel, allocate hardware channels */
+ if(file->f_mode & FMODE_READ)
+ dmabuf->channel = card->alloc_rec_pcm_channel(card);
+ else
+ dmabuf->channel = card->alloc_pcm_channel(card);
+
+ if (dmabuf->channel == NULL) {
+ kfree (card->states[i]);
+		card->states[i] = NULL;
+ return -ENODEV;
+ }
+
+ /* initialize the virtual channel */
+ state->virt = i;
+ state->card = card;
+ state->magic = I810_STATE_MAGIC;
+ init_waitqueue_head(&dmabuf->wait);
+ init_MUTEX(&state->open_sem);
+ file->private_data = state;
+
+ down(&state->open_sem);
+
+	/* Set the default sample format.  According to the OSS Programmer's Guide,
+	   /dev/dsp should default to unsigned 8-bit mono at an 8kHz sample rate;
+	   /dev/dspW will accept 16-bit samples. */
+ if (file->f_mode & FMODE_WRITE) {
+ dmabuf->fmt &= ~I810_FMT_MASK;
+ dmabuf->fmt |= I810_FMT_16BIT;
+ dmabuf->ossfragshift = 0;
+ dmabuf->ossmaxfrags = 0;
+ dmabuf->subdivision = 0;
+ i810_set_dac_rate(state, 48000);
+ }
+
+ if (file->f_mode & FMODE_READ) {
+ dmabuf->fmt &= ~I810_FMT_MASK;
+ dmabuf->fmt |= I810_FMT_16BIT;
+ dmabuf->ossfragshift = 0;
+ dmabuf->ossmaxfrags = 0;
+ dmabuf->subdivision = 0;
+ i810_set_adc_rate(state, 48000);
+ }
+
+ state->open_mode |= file->f_mode & (FMODE_READ | FMODE_WRITE);
+ up(&state->open_sem);
+
+ MOD_INC_USE_COUNT;
+ return 0;
+}
+
+static int i810_release(struct inode *inode, struct file *file)
+{
+ struct i810_state *state = (struct i810_state *)file->private_data;
+ struct dmabuf *dmabuf = &state->dmabuf;
+
+ if (file->f_mode & FMODE_WRITE) {
+ i810_clear_tail(state);
+ drain_dac(state, file->f_flags & O_NONBLOCK);
+ }
+
+ /* stop DMA state machine and free DMA buffers/channels */
+ down(&state->open_sem);
+
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(state);
+ dealloc_dmabuf(state);
+ state->card->free_pcm_channel(state->card, dmabuf->channel->num);
+ }
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(state);
+ dealloc_dmabuf(state);
+ state->card->free_pcm_channel(state->card, dmabuf->channel->num);
+ }
+
+ kfree(state->card->states[state->virt]);
+ state->card->states[state->virt] = NULL;
+ state->open_mode &= (~file->f_mode) & (FMODE_READ|FMODE_WRITE);
+
+ /* we're covered by the open_sem */
+ up(&state->open_sem);
+
+ MOD_DEC_USE_COUNT;
+ return 0;
+}
+
+static /*const*/ struct file_operations i810_audio_fops = {
+ llseek: i810_llseek,
+ read: i810_read,
+ write: i810_write,
+ poll: i810_poll,
+ ioctl: i810_ioctl,
+ mmap: i810_mmap,
+ open: i810_open,
+ release: i810_release,
+};
+
+/* Write AC97 codec registers */
+
+static u16 i810_ac97_get(struct ac97_codec *dev, u8 reg)
+{
+ struct i810_card *card = dev->private_data;
+ int count = 100;
+
+ while(count-- && (inb(card->iobase + CAS) & 1))
+ udelay(1);
+ return inw(card->ac97base + (reg&0x7f));
+}
+
+static void i810_ac97_set(struct ac97_codec *dev, u8 reg, u16 data)
+{
+ struct i810_card *card = dev->private_data;
+ int count = 100;
+
+ while(count-- && (inb(card->iobase + CAS) & 1))
+ udelay(1);
+ outw(data, card->ac97base + (reg&0x7f));
+}
+
+
+/* OSS /dev/mixer file operation methods */
+
+static int i810_open_mixdev(struct inode *inode, struct file *file)
+{
+ int i;
+ int minor = MINOR(inode->i_rdev);
+ struct i810_card *card = devs;
+
+ for (card = devs; card != NULL; card = card->next)
+ for (i = 0; i < NR_AC97; i++)
+ if (card->ac97_codec[i] != NULL &&
+ card->ac97_codec[i]->dev_mixer == minor)
+ goto match;
+
+ if (!card)
+ return -ENODEV;
+
+ match:
+ file->private_data = card->ac97_codec[i];
+
+ MOD_INC_USE_COUNT;
+ return 0;
+}
+
+static int i810_release_mixdev(struct inode *inode, struct file *file)
+{
+ MOD_DEC_USE_COUNT;
+ return 0;
+}
+
+static int i810_ioctl_mixdev(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ struct ac97_codec *codec = (struct ac97_codec *)file->private_data;
+
+ return codec->mixer_ioctl(codec, cmd, arg);
+}
+
+static /*const*/ struct file_operations i810_mixer_fops = {
+ llseek: i810_llseek,
+ ioctl: i810_ioctl_mixdev,
+ open: i810_open_mixdev,
+ release: i810_release_mixdev,
+};
+
+/* AC97 codec initialisation. */
+static int __init i810_ac97_init(struct i810_card *card)
+{
+ int num_ac97 = 0;
+ int ready_2nd = 0;
+ struct ac97_codec *codec;
+ u16 eid;
+
+ outl(0, card->iobase + GLOB_CNT);
+ udelay(500);
+ outl(1<<1, card->iobase + GLOB_CNT);
+
+ for (num_ac97 = 0; num_ac97 < NR_AC97; num_ac97++) {
+ if ((codec = kmalloc(sizeof(struct ac97_codec), GFP_KERNEL)) == NULL)
+ return -ENOMEM;
+ memset(codec, 0, sizeof(struct ac97_codec));
+
+		/* initialize some basic codec information; other fields will be
+		   filled in by ac97_probe_codec */
+ codec->private_data = card;
+ codec->id = num_ac97;
+
+ codec->codec_read = i810_ac97_get;
+ codec->codec_write = i810_ac97_set;
+
+ if (ac97_probe_codec(codec) == 0)
+ break;
+
+ eid = i810_ac97_get(codec, AC97_EXTENDED_ID);
+
+		if(eid==0xFFFF)	/* eid is u16, so 0xFFFF means no codec responded */
+ {
+ printk(KERN_WARNING "i810_audio: no codec attached ?\n");
+ kfree(codec);
+ break;
+ }
+
+ card->ac97_features = eid;
+
+ if(!(eid&0x0001))
+			printk(KERN_WARNING "i810_audio: only 48kHz playback available.\n");
+
+ if ((codec->dev_mixer = register_sound_mixer(&i810_mixer_fops, -1)) < 0) {
+ printk(KERN_ERR "i810_audio: couldn't register mixer!\n");
+ kfree(codec);
+ break;
+ }
+
+ /* Now check the codec for useful features to make up for
+ the dumbness of the 810 hardware engine */
+
+ card->ac97_codec[num_ac97] = codec;
+
+ /* if there is no secondary codec at all, don't probe any more */
+ if (!ready_2nd)
+ return num_ac97+1;
+ }
+ return num_ac97;
+}
+
+/* Install the driver.  We do not allocate a hardware channel or DMA buffer now; they are
+   deferred until "ACCESS" time (in prog_dmabuf, called by open/read/write/ioctl/mmap) */
+
+static int __init i810_probe(struct pci_dev *pci_dev, const struct pci_device_id *pci_id)
+{
+ struct i810_card *card;
+
+ if (!pci_dma_supported(pci_dev, I810_DMA_MASK)) {
+ printk(KERN_ERR "intel810: architecture does not support"
+ " 32bit PCI busmaster DMA\n");
+ return -ENODEV;
+ }
+
+ if ((card = kmalloc(sizeof(struct i810_card), GFP_KERNEL)) == NULL) {
+ printk(KERN_ERR "i810_audio: out of memory\n");
+ return -ENOMEM;
+ }
+ memset(card, 0, sizeof(*card));
+
+ card->iobase = pci_dev->resource[1].start;
+ card->ac97base = pci_dev->resource[0].start;
+ card->pci_dev = pci_dev;
+ card->pci_id = pci_id->device;
+ card->irq = pci_dev->irq;
+ card->next = devs;
+ card->magic = I810_CARD_MAGIC;
+ spin_lock_init(&card->lock);
+ devs = card;
+
+ pci_set_master(pci_dev);
+ pci_enable_device(pci_dev);
+
+ printk(KERN_INFO "i810: %s found at IO 0x%04lx and 0x%04lx, IRQ %d\n",
+ card_names[pci_id->driver_data], card->iobase, card->ac97base,
+ card->irq);
+
+ card->alloc_pcm_channel = i810_alloc_pcm_channel;
+ card->alloc_rec_pcm_channel = i810_alloc_rec_pcm_channel;
+ card->free_pcm_channel = i810_free_pcm_channel;
+
+ /* claim our iospace and irq */
+ request_region(card->iobase, 64, card_names[pci_id->driver_data]);
+ request_region(card->ac97base, 256, card_names[pci_id->driver_data]);
+
+ if (request_irq(card->irq, &i810_interrupt, SA_SHIRQ,
+ card_names[pci_id->driver_data], card)) {
+ printk(KERN_ERR "i810_audio: unable to allocate irq %d\n", card->irq);
+ release_region(card->iobase, 64);
+ release_region(card->ac97base, 256);
+ kfree(card);
+ return -ENODEV;
+ }
+ /* register /dev/dsp */
+ if ((card->dev_audio = register_sound_dsp(&i810_audio_fops, -1)) < 0) {
+ printk(KERN_ERR "i810_audio: couldn't register DSP device!\n");
+ release_region(card->iobase, 64);
+ release_region(card->ac97base, 256);
+ free_irq(card->irq, card);
+ kfree(card);
+ return -ENODEV;
+ }
+
+
+ /* initialize AC97 codec and register /dev/mixer */
+ if (i810_ac97_init(card) <= 0) {
+ unregister_sound_dsp(card->dev_audio);
+ release_region(card->iobase, 64);
+ release_region(card->ac97base, 256);
+ free_irq(card->irq, card);
+ kfree(card);
+ return -ENODEV;
+ }
+ pci_dev->driver_data = card;
+ pci_dev->dma_mask = I810_DMA_MASK;
+
+// printk("resetting codec?\n");
+ outl(0, card->iobase + GLOB_CNT);
+ udelay(500);
+// printk("bringing it back?\n");
+ outl(1<<1, card->iobase + GLOB_CNT);
+ return 0;
+}
+
+static void __exit i810_remove(struct pci_dev *pci_dev)
+{
+ int i;
+ struct i810_card *card = pci_dev->driver_data;
+ /* free hardware resources */
+ free_irq(card->irq, devs);
+ release_region(card->iobase, 64);
+ release_region(card->ac97base, 256);
+
+ /* unregister audio devices */
+ for (i = 0; i < NR_AC97; i++)
+ if (devs->ac97_codec[i] != NULL) {
+ unregister_sound_mixer(card->ac97_codec[i]->dev_mixer);
+ kfree (card->ac97_codec[i]);
+ }
+ unregister_sound_dsp(card->dev_audio);
+ kfree(card);
+}
+
+MODULE_AUTHOR("");
+MODULE_DESCRIPTION("Intel 810 audio support");
+
+#define I810_MODULE_NAME "intel810_audio"
+
+static struct pci_driver i810_pci_driver = {
+ name: I810_MODULE_NAME,
+ id_table: i810_pci_tbl,
+ probe: i810_probe,
+ remove: i810_remove,
+};
+
+static int __init i810_init_module (void)
+{
+ if (!pci_present()) /* No PCI bus in this machine! */
+ return -ENODEV;
+
+ printk(KERN_INFO "Intel 810 + AC97 Audio, version "
+ DRIVER_VERSION ", " __TIME__ " " __DATE__ "\n");
+
+ if (!pci_register_driver(&i810_pci_driver)) {
+ pci_unregister_driver(&i810_pci_driver);
+ return -ENODEV;
+ }
+ return 0;
+}
+
+static void __exit i810_cleanup_module (void)
+{
+ pci_unregister_driver(&i810_pci_driver);
+}
+
+module_init(i810_init_module);
+module_exit(i810_cleanup_module);
* Based heavily on SonicVibes.c:
* Copyright (C) 1998-1999 Thomas Sailer (sailer@ife.ee.ethz.ch)
*
- * Heavily modified by Zach Brown <zab@redhat.com> based on lunch
+ * Heavily modified by Zach Brown <zab@zabbo.net> based on lunch
* with ESS engineers. Many thanks to Howard Kim for providing
* contacts and hardware. Honorable mention goes to Eric
* Brombaugh for all sorts of things. Best regards to the
* being used now is quite dirty and assumes we're on a uni-processor
* machine. Much of it will need to be cleaned up for SMP ACPI or
* similar.
+ *
+ * We also pay attention to PCI power management now. The driver
+ * will power down units of the chip that it knows aren't needed.
+ * The WaveProcessor and company are only powered on when people
+ * have /dev/dsp*s open. On removal the driver will
+ * power down the maestro entirely. There could still be
+ * trouble with BIOSen that magically change power states
+ * themselves, but we'll see.
*
* History
+ * (still based on v0.14) Mar 29 2000 - Zach Brown <zab@redhat.com>
+ * move to 2.3 power management interface, which
+ * required hacking some suspend/resume/check paths
+ * make static compilation work
+ * v0.14 - Jan 28 2000 - Zach Brown <zab@redhat.com>
+ * add PCI power management through ACPI regs.
+ * we now shut down on machine reboot/halt
+ * leave scary PCI config items alone (isa stuff, mostly)
+ * enable 1921s, it seems only mine was broke.
+ * fix swapped left/right pcm dac. har har.
+ * up bob freq, increase buffers, fix pointers at underflow
+ * silly compilation problems
* v0.13 - Nov 18 1999 - Zach Brown <zab@redhat.com>
* fix nec Versas? man would that be cool.
* v0.12 - Nov 12 1999 - Zach Brown <zab@redhat.com>
* bob freq code, region sanity, jitter sync fix; all from Eric
*
* TODO
- * some people get indir reg timeouts?
* fix bob frequency
* endianness
* do smart things with ac97 2.0 bits.
* docking and dual codecs and 978?
* leave 54->61 open
- * resolve 2.3/2.2 stuff
*
* it also would be fun to have a mode that would not use pci dma at all
* but would copy into the wavecache on board memory and use that
#define SILLY_MAKE_INIT(FUNC) __initfunc(FUNC)
#define SILLY_OFFSET(VMA) ((VMA)->vm_offset)
+
#else
#define SILLY_PCI_BASE_ADDRESS(PCIDEV) (PCIDEV->resource[0].start)
#define SILLY_MAKE_INIT(FUNC) __init FUNC
#define SILLY_OFFSET(VMA) ((VMA)->vm_pgoff)
+
#endif
#include <linux/string.h>
#include <asm/dma.h>
#include <linux/init.h>
#include <linux/poll.h>
+#include <linux/reboot.h>
#include <asm/uaccess.h>
#include <asm/hardirq.h>
#include <linux/pm.h>
static int maestro_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *d);
-static int in_suspend=0;
-wait_queue_head_t suspend_queue;
-static void check_suspend(void);
-#define CHECK_SUSPEND check_suspend();
#include "maestro.h"
#ifdef M_DEBUG
static int debug=0;
-static int dsps_order=0;
#define M_printk(args...) {if (debug) printk(args);}
#else
#define M_printk(x)
#endif
+/* we try to set up 2^(dsps_order) /dev/dsp devices */
+static int dsps_order=0;
+/* whether or not we mess around with power management */
+static int use_pm=2; /* set to 1 to force */
+
/* --------------------------------------------------------------------- */
-#define DRIVER_VERSION "0.13"
+#define DRIVER_VERSION "0.14"
#ifndef PCI_VENDOR_ESS
#define PCI_VENDOR_ESS 0x125D
#define NR_APUS 64
#define NR_APU_REGS 16
+/* acpi states */
+enum {
+ ACPI_D0=0,
+ ACPI_D1,
+ ACPI_D2,
+ ACPI_D3
+};
+
+/* bits in the acpi masks */
+#define ACPI_12MHZ ( 1 << 15)
+#define ACPI_24MHZ ( 1 << 14)
+#define ACPI_978 ( 1 << 13)
+#define ACPI_SPDIF ( 1 << 12)
+#define ACPI_GLUE ( 1 << 11)
+#define ACPI__10 ( 1 << 10) /* reserved */
+#define ACPI_PCIINT ( 1 << 9)
+#define ACPI_HV ( 1 << 8) /* hardware volume */
+#define ACPI_GPIO ( 1 << 7)
+#define ACPI_ASSP ( 1 << 6)
+#define ACPI_SB ( 1 << 5) /* sb emul */
+#define ACPI_FM ( 1 << 4) /* fm emul */
+#define ACPI_RB ( 1 << 3) /* ringbus / aclink */
+#define ACPI_MIDI ( 1 << 2)
+#define ACPI_GP ( 1 << 1) /* game port */
+#define ACPI_WP ( 1 << 0) /* wave processor */
+
+#define ACPI_ALL (0xffff)
+#define ACPI_SLEEP (~(ACPI_SPDIF|ACPI_ASSP|ACPI_SB|ACPI_FM| \
+ ACPI_MIDI|ACPI_GP|ACPI_WP))
+#define ACPI_NONE (ACPI__10)
+
+/* these masks indicate which units we care about at
+ which states */
+u16 acpi_state_mask[] = {
+ [ACPI_D0] = ACPI_ALL,
+ [ACPI_D1] = ACPI_SLEEP,
+ [ACPI_D2] = ACPI_SLEEP,
+ [ACPI_D3] = ACPI_NONE
+};
+
static const unsigned sample_size[] = { 1, 2, 2, 4 };
static const unsigned sample_shift[] = { 0, 1, 1, 2 };
[TYPE_MAESTRO2E] = (50000000L / 1024L)
};
+static int maestro_notifier(struct notifier_block *nb, unsigned long event, void *buf);
+
+static struct notifier_block maestro_nb = {maestro_notifier, NULL, 0};
+
/* --------------------------------------------------------------------- */
struct ess_state {
/* pointer to each dsp?s piece of the apu->src buffer page */
void *mixbuf;
+
};
struct ess_card {
unsigned int mixer_state[SOUND_MIXER_NRDEVICES];
} mix;
+ int power_regs;
+
+ int in_suspend;
+ wait_queue_head_t suspend_queue;
+
struct ess_state channels[MAX_DSPS];
u16 maestro_map[NR_IDRS]; /* Register map */
/* we have to store this junk so that we can come back from a
int dmaorder;
/* hardware resources */
- struct pci_dev pcidev; /* uck.. */
+ struct pci_dev *pcidev;
u32 iobase;
u32 irq;
/* --------------------------------------------------------------------- */
+static void check_suspend(struct ess_card *card);
+
static struct ess_card *devs = NULL;
/* --------------------------------------------------------------------- */
* ESS Maestro AC97 codec programming interface.
*/
-static void maestro_ac97_set(int io, u8 cmd, u16 val)
+static void maestro_ac97_set(struct ess_card *card, u8 cmd, u16 val)
{
+ int io = card->iobase;
int i;
/*
* Wait for the codec bus to be free
*/
- CHECK_SUSPEND;
+ check_suspend(card);
for(i=0;i<10000;i++)
{
mdelay(1);
}
-static u16 maestro_ac97_get(int io, u8 cmd)
+static u16 maestro_ac97_get(struct ess_card *card, u8 cmd)
{
+ int io = card->iobase;
int sanity=10000;
u16 data;
int i;
- CHECK_SUSPEND;
+ check_suspend(card);
/*
* Wait for the codec bus to be free
*/
int ret=0;
struct ac97_mixer_hw *mh = &ac97_hw[mixer];
- val = maestro_ac97_get(card->iobase , mh->offset);
+ val = maestro_ac97_get(card, mh->offset);
if(AC97_STEREO_MASK & (1<<mixer)) {
/* nice stereo mixers .. */
call with spinlock held */
/* linear scale -> log */
-unsigned char lin2log[101] =
+static unsigned char lin2log[101] =
{
0, 0 , 15 , 23 , 30 , 34 , 38 , 42 , 45 , 47 ,
50 , 52 , 53 , 55 , 57 , 58 , 60 , 61 , 62 ,
} else if (mixer == SOUND_MIXER_SPEAKER) {
val = (((100 - left) * mh->scale) / 100) << 1;
} else if (mixer == SOUND_MIXER_MIC) {
- val = maestro_ac97_get(card->iobase , mh->offset) & ~0x801f;
+ val = maestro_ac97_get(card, mh->offset) & ~0x801f;
val |= (((100 - left) * mh->scale) / 100);
/* the low bit is optional in the tone sliders and masking
it lets is avoid the 0xf 'bypass'.. */
} else if (mixer == SOUND_MIXER_BASS) {
- val = maestro_ac97_get(card->iobase , mh->offset) & ~0x0f00;
+ val = maestro_ac97_get(card , mh->offset) & ~0x0f00;
val |= ((((100 - left) * mh->scale) / 100) << 8) & 0x0e00;
} else if (mixer == SOUND_MIXER_TREBLE) {
- val = maestro_ac97_get(card->iobase , mh->offset) & ~0x000f;
+ val = maestro_ac97_get(card , mh->offset) & ~0x000f;
val |= (((100 - left) * mh->scale) / 100) & 0x000e;
}
- maestro_ac97_set(card->iobase , mh->offset, val);
+ maestro_ac97_set(card , mh->offset, val);
M_printk(" -> %x\n",val);
}
static int
ac97_recmask_io(struct ess_card *card, int read, int mask)
{
- unsigned int val = ac97_oss_mask[ maestro_ac97_get(card->iobase, 0x1a) & 0x7 ];
+ unsigned int val = ac97_oss_mask[ maestro_ac97_get(card, 0x1a) & 0x7 ];
if (read) return val;
M_printk("maestro: setting ac97 recmask to 0x%x\n",val);
- maestro_ac97_set(card->iobase,0x1a,val);
+ maestro_ac97_set(card,0x1a,val);
return 0;
};
* The PT101 setup is untested.
*/
-static u16 maestro_ac97_init(struct ess_card *card, int iobase)
+static u16 maestro_ac97_init(struct ess_card *card)
{
u16 vend1, vend2, caps;
card->mix.write_mixer = ac97_write_mixer;
card->mix.recmask_io = ac97_recmask_io;
- vend1 = maestro_ac97_get(iobase, 0x7c);
- vend2 = maestro_ac97_get(iobase, 0x7e);
+ vend1 = maestro_ac97_get(card, 0x7c);
+ vend2 = maestro_ac97_get(card, 0x7e);
- caps = maestro_ac97_get(iobase, 0x00);
+ caps = maestro_ac97_get(card, 0x00);
printk(KERN_INFO "maestro: AC97 Codec detected: v: 0x%2x%2x caps: 0x%x pwr: 0x%x\n",
- vend1,vend2,caps,maestro_ac97_get(iobase,0x26) & 0xf);
+ vend1,vend2,caps,maestro_ac97_get(card,0x26) & 0xf);
if (! (caps & 0x4) ) {
/* no bass/treble nobs */
switch ((long)(vend1 << 16) | vend2) {
case 0x545200ff: /* TriTech */
/* no idea what this does */
- maestro_ac97_set(iobase,0x2a,0x0001);
- maestro_ac97_set(iobase,0x2c,0x0000);
- maestro_ac97_set(iobase,0x2c,0xffff);
+ maestro_ac97_set(card,0x2a,0x0001);
+ maestro_ac97_set(card,0x2c,0x0000);
+ maestro_ac97_set(card,0x2c,0xffff);
break;
+#if 0 /* I thought the problems I was seeing were with
+	the 1921, but apparently they were with the pci board
+	it was on, so this code is commented out.
+	let's see if this holds true. */
case 0x83847609: /* ESS 1921 */
/* writing to 0xe (mic) or 0x1a (recmask) seems
to hang this codec */
card->mix.record_sources = 0;
card->mix.recmask_io = NULL;
#if 0 /* don't ask. I have yet to see what these actually do. */
- maestro_ac97_set(iobase,0x76,0xABBA); /* o/~ Take a chance on me o/~ */
+ maestro_ac97_set(card,0x76,0xABBA); /* o/~ Take a chance on me o/~ */
udelay(20);
- maestro_ac97_set(iobase,0x78,0x3002);
+ maestro_ac97_set(card,0x78,0x3002);
udelay(20);
- maestro_ac97_set(iobase,0x78,0x3802);
+ maestro_ac97_set(card,0x78,0x3802);
udelay(20);
#endif
break;
+#endif
default: break;
}
- maestro_ac97_set(iobase, 0x1E, 0x0404);
+ maestro_ac97_set(card, 0x1E, 0x0404);
/* null misc stuff */
- maestro_ac97_set(iobase, 0x20, 0x0000);
+ maestro_ac97_set(card, 0x20, 0x0000);
return 0;
}
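The codec probe above keys off the two AC'97 vendor ID registers (0x7c and 0x7e) combined into one 32-bit value. A minimal userspace sketch of that matching logic; `codec_name()` is illustrative, not a function from the driver:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Combine the AC'97 vendor ID registers (0x7c/0x7e) into the same
 * 32-bit key that maestro_ac97_init() switches on, and name the
 * codecs it special-cases. */
static const char *codec_name(uint16_t vend1, uint16_t vend2)
{
	switch (((uint32_t)vend1 << 16) | vend2) {
	case 0x545200ff: return "TriTech";
	case 0x83847609: return "ESS 1921";
	default:         return "unknown";
	}
}
```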
{
unsigned long flags;
- CHECK_SUSPEND;
+ check_suspend(s->card);
spin_lock_irqsave(&s->card->lock,flags);
__maestro_write(s->card,reg,data);
if(READABLE_MAP & (1<<reg))
{
unsigned long flags;
- CHECK_SUSPEND;
+ check_suspend(s->card);
spin_lock_irqsave(&s->card->lock,flags);
__maestro_read(s->card,reg);
{
unsigned long flags;
- CHECK_SUSPEND;
+ check_suspend(s->card);
if(channel&ESS_CHAN_HARD)
channel&=~ESS_CHAN_HARD;
unsigned long flags;
u16 v;
- CHECK_SUSPEND;
+ check_suspend(s->card);
if(channel&ESS_CHAN_HARD)
channel&=~ESS_CHAN_HARD;
{
long ioaddr = s->card->iobase;
unsigned long flags;
- CHECK_SUSPEND;
+ check_suspend(s->card);
spin_lock_irqsave(&s->card->lock,flags);
long ioaddr = s->card->iobase;
unsigned long flags;
u16 value;
- CHECK_SUSPEND;
+ check_suspend(s->card);
spin_lock_irqsave(&s->card->lock,flags);
outw(reg, ioaddr+0x10);
}
/* stop output apus */
-extern inline void stop_dac(struct ess_state *s)
+static void stop_dac(struct ess_state *s)
{
/* XXX have to lock around this? */
if (! (s->enable & DAC_RUNNING)) return;
/* XXX think about endianess when writing these registers */
M_printk("maestro: ess_play_setup: APU[%d] pa = 0x%x\n", ess->apu[channel], pa);
- /* Load the buffer into the wave engine */
+ /* start of sample */
apu_set_register(ess, channel, 4, ((pa>>16)&0xFF)<<8);
apu_set_register(ess, channel, 5, pa&0xFFFF);
+ /* sample end */
apu_set_register(ess, channel, 6, (pa+size)&0xFFFF);
- /* setting loop == sample len */
+ /* setting loop len == sample len */
apu_set_register(ess, channel, 7, size);
/* clear effects/env.. */
if(mode&ESS_FMT_STEREO) {
/* set panning: left or right */
- apu_set_register(ess, channel, 10, 0x8F00 | (channel ? 0x10 : 0));
+ apu_set_register(ess, channel, 10, 0x8F00 | (channel ? 0 : 0x10));
ess->apu_mode[channel] += 0x10;
} else
apu_set_register(ess, channel, 10, 0x8F08);
int divide;
/* XXX make freq selector much smarter, see calc_bob_rate */
- int freq = 150; /* requested frequency - calculate what we want here. */
+ int freq = 200;
/* compute ideal interrupt frequency for buffer size & play rate */
/* first, find best prescaler value to match freq */
db->hwptr = db->swptr = db->total_bytes = db->count = db->error = db->endcleared = 0;
+ /* this algorithm is a little nuts.. where did /1000 come from? */
bytepersec = rate << sample_shift[fmt];
bufs = PAGE_SIZE << db->buforder;
if (db->ossfragshift) {
memset(db->rawbuf, (fmt & ESS_FMT_16BIT) ? 0 : 0x80, db->dmasize);
spin_lock_irqsave(&s->lock, flags);
- if (rec) {
- ess_rec_setup(s, fmt, s->rateadc,
- db->rawbuf, db->numfrag << db->fragshift);
- } else {
- ess_play_setup(s, fmt, s->ratedac,
- db->rawbuf, db->numfrag << db->fragshift);
- }
+ if (rec)
+ ess_rec_setup(s, fmt, s->rateadc, db->rawbuf, db->dmasize);
+ else
+ ess_play_setup(s, fmt, s->ratedac, db->rawbuf, db->dmasize);
+
spin_unlock_irqrestore(&s->lock, flags);
db->ready = 1;
/* oh boy should this all be re-written. everything in the current code paths think
that the various counters/pointers are expressed in bytes to the user but we have
two apus doing stereo stuff so we fix it up here.. it propogates to all the various
- counters from here. Notice that this means that mono recording is very very
- broken right now. */
+ counters from here. */
if ( s->fmt & (ESS_FMT_STEREO << ESS_ADC_SHIFT)) {
hwptr = (get_dmac(s)*2) % s->dma_adc.dmasize;
} else {
hwptr = get_dmaa(s) % s->dma_dac.dmasize;
/* the apu only reports the length it has seen, not the
length of the memory that has been used (the WP
- knows that */
+ knows that) */
if ( ((s->fmt >> ESS_DAC_SHIFT) & ESS_FMT_MASK) == (ESS_FMT_STEREO|ESS_FMT_16BIT))
hwptr<<=1;
s->dma_dac.count -= diff;
/* M_printk("maestro: ess_update_ptr: diff: %d, count: %d\n", diff, s->dma_dac.count); */
if (s->dma_dac.count <= 0) {
+ M_printk("underflow! diff: %d count: %d hw: %d sw: %d\n", diff, s->dma_dac.count,
+ hwptr, s->dma_dac.swptr);
/* FILL ME
wrindir(s, SV_CIENABLE, s->enable); */
/* XXX how on earth can calling this with the lock held work.. */
stop_dac(s);
/* brute force everyone back in sync, sigh */
s->dma_dac.count = 0;
- s->dma_dac.swptr = 0;
- s->dma_dac.hwptr = 0;
+ s->dma_dac.swptr = hwptr;
s->dma_dac.error++;
} else if (s->dma_dac.count <= (signed)s->dma_dac.fragsize && !s->dma_dac.endcleared) {
clear_advance(s);
}
if (s->dma_dac.count + (signed)s->dma_dac.fragsize <= (signed)s->dma_dac.dmasize) {
wake_up(&s->dma_dac.wait);
+/* printk("waking up DAC count: %d sw: %d hw: %d\n",s->dma_dac.count, s->dma_dac.swptr,
+ hwptr);*/
}
}
}
}
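The underflow path above now resynchronizes the software pointer onto the hardware pointer rather than zeroing both, so the writer and the wave engine stay aligned within the DMA ring. A simplified userspace sketch of that pointer bookkeeping, assuming made-up names (`struct ring`, `update_ptr`) rather than the driver's own:

```c
#include <assert.h>

/* Toy model of the DAC ring-buffer accounting: hwptr advances with
 * the hardware, count tracks bytes still queued, and on underflow
 * (count would go negative) we snap swptr back to hwptr instead of
 * rewinding everything to zero. */
struct ring { int hwptr, swptr, count, dmasize; };

static void update_ptr(struct ring *r, int new_hw)
{
	/* bytes consumed since the last update, modulo the ring size */
	int diff = (r->dmasize + new_hw - r->hwptr) % r->dmasize;
	r->hwptr = new_hw;
	r->count -= diff;
	if (r->count <= 0) {	/* underflow: resync, don't rewind */
		r->count = 0;
		r->swptr = r->hwptr;
	}
}
```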
static /*const*/ struct file_operations ess_mixer_fops = {
- llseek: ess_llseek,
- ioctl: ess_ioctl_mixdev,
- open: ess_open_mixdev,
- release: ess_release_mixdev,
+ llseek: ess_llseek,
+ ioctl: ess_ioctl_mixdev,
+ open: ess_open_mixdev,
+ release: ess_release_mixdev,
};
/* --------------------------------------------------------------------- */
goto rec_return_free;
}
if (!interruptible_sleep_on_timeout(&s->dma_adc.wait, HZ)) {
- if(! in_suspend)
- printk(KERN_DEBUG "maestro: read: chip lockup? dmasz %u fragsz %u count %i hwptr %u swptr %u\n",
+ if(! s->card->in_suspend) printk(KERN_DEBUG "maestro: read: chip lockup? dmasz %u fragsz %u count %i hwptr %u swptr %u\n",
s->dma_adc.dmasize, s->dma_adc.fragsize, s->dma_adc.count,
s->dma_adc.hwptr, s->dma_adc.swptr);
stop_adc(s);
goto return_free;
}
if (!interruptible_sleep_on_timeout(&s->dma_dac.wait, HZ)) {
- if(! in_suspend)
- printk(KERN_DEBUG "maestro: write: chip lockup? dmasz %u fragsz %u count %i hwptr %u swptr %u\n",
+ if(! s->card->in_suspend) printk(KERN_DEBUG "maestro: write: chip lockup? dmasz %u fragsz %u count %i hwptr %u swptr %u\n",
s->dma_dac.dmasize, s->dma_dac.fragsize, s->dma_dac.count,
s->dma_dac.hwptr, s->dma_dac.swptr);
stop_dac(s);
if (!ret) ret = -EFAULT;
goto return_free;
}
+/* printk("wrote %d bytes at sw: %d cnt: %d while hw: %d\n",cnt, swptr, s->dma_dac.count, s->dma_dac.hwptr);*/
swptr = (swptr + cnt) % s->dma_dac.dmasize;
case SNDCTL_DSP_SETFRAGMENT:
get_user_ret(val, (int *)arg, -EFAULT);
+ M_printk("maestro: SETFRAGMENT: %0x\n",val);
if (file->f_mode & FMODE_READ) {
s->dma_adc.ossfragshift = val & 0xffff;
s->dma_adc.ossmaxfrags = (val >> 16) & 0xffff;
wave_set_register(s, 0x01FF , packed_phys);
}
+/*
+ * this guy makes sure we're in the right power
+ * state for what we want to be doing
+ */
+static void maestro_power(struct ess_card *card, int tostate)
+{
+ u16 active_mask = acpi_state_mask[tostate];
+ u8 state;
+
+ if(!use_pm) return;
+
+ pci_read_config_byte(card->pcidev, card->power_regs+0x4, &state);
+ state&=3;
+
+ /* make sure we're in the right state */
+ if(state != tostate) {
+ M_printk(KERN_WARNING "maestro: dev %02x:%02x.%x switching from D%d to D%d\n",
+ card->pcidev->bus->number,
+ PCI_SLOT(card->pcidev->devfn),
+ PCI_FUNC(card->pcidev->devfn),
+ state,tostate);
+ pci_write_config_byte(card->pcidev, card->power_regs+0x4, tostate);
+ }
+
+ /* and make sure the units we care about are on
+ XXX we might want to do this before state flipping? */
+ pci_write_config_word(card->pcidev, 0x54, ~ active_mask);
+ pci_write_config_word(card->pcidev, 0x56, ~ active_mask);
+}
+
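maestro_power() above reads the current D-state from the low two bits of the PM control/status register and only writes a new state when it differs from the requested one. A tiny sketch of that decision, with `pmcsr` standing in for the config byte at `power_regs+0x4` (the helper name is an assumption, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* The current power state (D0..D3) lives in bits 1:0 of the PMCSR
 * register; a config-space write is only needed when the requested
 * state differs from what the chip already reports. */
static int power_write_needed(uint8_t pmcsr, int tostate)
{
	int state = pmcsr & 3;		/* mask down to the D-state field */
	return state != tostate;	/* 1 => must write the new state */
}
```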
/* we allocate a large power of two for all our memory.
this is cut up into (not to scale :):
|silly fifo word | 512byte mixbuf per adc | dac/adc * channels |
unsigned long mapend,map;
/* alloc as big a chunk as we can */
- for (order = (dsps_order + (15-PAGE_SHIFT) + 1); order >= (dsps_order + 2 + 1); order--)
+ for (order = (dsps_order + (16-PAGE_SHIFT) + 1); order >= (dsps_order + 2 + 1); order--)
if((rawbuf = (void *)__get_free_pages(GFP_KERNEL|GFP_DMA, order)))
break;
s->card->dmapages = rawbuf;
s->card->dmaorder = order;
- /* play bufs are in the same first region as record bufs */
- set_base_registers(s,rawbuf);
-
- M_printk("maestro: writing %lx (%lx) to the wp\n",virt_to_bus(rawbuf),
- ((virt_to_bus(rawbuf))&0xFFE00000)>>12);
-
for(i=0;i<NR_DSPS;i++) {
struct ess_state *ess = &s->card->channels[i];
happily scribble away.. */
ess->mixbuf = rawbuf + (512 * (i+1));
- M_printk("maestro: setup apu %d: %p %p %p\n",i,ess->dma_dac.rawbuf,
+ M_printk("maestro: setup apu %d: dac: %p adc: %p mix: %p\n",i,ess->dma_dac.rawbuf,
ess->dma_adc.rawbuf, ess->mixbuf);
}
return -ENOMEM;
}
+ /* we're covered by the open_sem */
+ if( ! s->card->dsps_open ) {
+ maestro_power(s->card,ACPI_D0);
+ start_bob(s);
+ }
+ s->card->dsps_open++;
+ M_printk("maestro: open, %d bobs now\n",s->card->dsps_open);
+
+ /* ok, lets write WC base regs now that we've
+ powered up the chip */
+ M_printk("maestro: writing 0x%lx (bus 0x%lx) to the wp\n",virt_to_bus(s->card->dmapages),
+ ((virt_to_bus(s->card->dmapages))&0xFFE00000)>>12);
+ set_base_registers(s,s->card->dmapages);
+
if (file->f_mode & FMODE_READ) {
/*
fmtm &= ~((ESS_FMT_STEREO | ESS_FMT_16BIT) << ESS_ADC_SHIFT);
set_fmt(s, fmtm, fmts);
s->open_mode |= file->f_mode & (FMODE_READ | FMODE_WRITE);
- /* we're covered by the open_sem */
- if( ! s->card->dsps_open ) {
- start_bob(s);
- }
- s->card->dsps_open++;
- M_printk("maestro: open, %d bobs now\n",s->card->dsps_open);
-
up(&s->open_sem);
MOD_INC_USE_COUNT;
return 0;
/* we're covered by the open_sem */
M_printk("maestro: %d dsps now alive\n",s->card->dsps_open-1);
if( --s->card->dsps_open <= 0) {
+ s->card->dsps_open = 0;
stop_bob(s);
free_buffers(s);
+ maestro_power(s->card,ACPI_D2);
}
up(&s->open_sem);
wake_up(&s->open_wait);
}
static struct file_operations ess_audio_fops = {
- llseek: ess_llseek,
- read: ess_read,
- write: ess_write,
- poll: ess_poll,
- ioctl: ess_ioctl,
- mmap: ess_mmap,
- open: ess_open,
- release: ess_release,
+ llseek: ess_llseek,
+ read: ess_read,
+ write: ess_write,
+ poll: ess_poll,
+ ioctl: ess_ioctl,
+ mmap: ess_mmap,
+ open: ess_open,
+ release: ess_release,
};
static int
maestro_config(struct ess_card *card)
{
- struct pci_dev *pcidev = &card->pcidev;
+ struct pci_dev *pcidev = card->pcidev;
struct ess_state *ess = &card->channels[0];
int apu,iobase = card->iobase;
u16 w;
u32 n;
- /*
- * Disable ACPI
+ /* We used to muck around with pci config space that
+ * we had no business messing with. We don't know enough
+ * about the machine to know which DMA mode is appropriate,
+ * etc. We were guessing wrong on some machines and making
+ * them unhappy. We now trust in the BIOS to do things right,
+ * which almost certainly means a new host of problems will
+ * arise with broken BIOS implementations. screw 'em.
+ * We're already intolerant of machines that don't assign
+ * IRQs.
*/
-
- pci_write_config_dword(pcidev, 0x54, 0x00000000);
- pci_write_config_dword(pcidev, 0x56, 0x00000000);
- /*
- * Use TDMA for now. TDMA works on all boards, so while its
- * not the most efficient its the simplest.
- */
+ /* do config work at full power */
+ maestro_power(card,ACPI_D0);
pci_read_config_word(pcidev, 0x50, &w);
- /* Clear DMA bits */
- w&=~(1<<10|1<<9|1<<8);
-
- /* TDMA on */
- w|= (1<<8);
-
- /*
- * Some of these are undocumented bits
- */
-
- w&=~(1<<13)|(1<<14); /* PIC Snoop mode bits */
- w&=~(1<<11); /* Safeguard off */
- w|= (1<<7); /* Posted write */
- w|= (1<<6); /* ISA timing on */
- /* XXX huh? claims to be reserved.. */
- w&=~(1<<5); /* Don't swap left/right */
- w&=~(1<<1); /* Subtractive decode off */
+ w&=~(1<<5); /* Don't swap left/right (undoc)*/
pci_write_config_word(pcidev, 0x50, w);
w&=~(1<<6); /* Debounce off */
w&=~(1<<5); /* GPIO 4:5 */
w|= (1<<4); /* Disconnect from the CHI. Enabling this made a dell 7500 work. */
- w&=~(1<<3); /* IDMA off (undocumented) */
w&=~(1<<2); /* MIDI fix off (undoc) */
w&=~(1<<1); /* reserved, always write 0 */
- w&=~(1<<0); /* IRQ to ISA off (undoc) */
pci_write_config_word(pcidev, 0x52, w);
-
- /*
- * DDMA off
- */
-
- pci_read_config_word(pcidev, 0x60, &w);
- w&=~(1<<0);
- pci_write_config_word(pcidev, 0x60, w);
/*
* Legacy mode
pci_write_config_word(pcidev, 0x40, w);
- /* stake our claim on the iospace */
- request_region(iobase, 256, card_names[card->card_type]);
sound_reset(iobase);
}
+/* this guy tries to find the pci power management
+ * register bank. this should really be in core
+ * code somewhere. returns 1 on success. */
+int
+parse_power(struct ess_card *card, struct pci_dev *pcidev)
+{
+ u32 n;
+ u16 w;
+ u8 next;
+ int max = 64; /* an 8bit guy pointing to 32bit guys
+ can only express so much. */
+
+ card->power_regs = 0;
+
+ /* check to see if we have a capabilities list in
+ the config register */
+ pci_read_config_word(pcidev, PCI_STATUS, &w);
+ if(!(w & PCI_STATUS_CAP_LIST)) return 0;
+
+ /* walk the list, starting at the head. */
+ pci_read_config_byte(pcidev,PCI_CAPABILITY_LIST,&next);
+
+ while(next && max--) {
+ pci_read_config_dword(pcidev, next & ~3, &n);
+ if((n & 0xff) == PCI_CAP_ID_PM) {
+ card->power_regs = next;
+ break;
+ }
+ next = ((n>>8) & 0xff);
+ }
+
+ return card->power_regs ? 1 : 0;
+}
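The capability walk in parse_power() can be exercised in userspace against a fake config space. A sketch under those assumptions: `cfg` stands in for the device's PCI config bytes, the offsets (PCI_STATUS at 0x06, PCI_CAPABILITY_LIST at 0x34) and the PM capability ID (0x01) come from the PCI spec, and `find_pm_cap()` is illustrative rather than a driver function:

```c
#include <assert.h>
#include <stdint.h>

/* Walk the PCI capability list looking for the power-management
 * capability, mirroring parse_power(): check the CAP_LIST status
 * bit, then follow the 8-bit 'next' pointers, bounded so a broken
 * chain can't loop forever. Returns the capability offset, or 0. */
static int find_pm_cap(const uint8_t *cfg)
{
	int max = 64;	/* an 8-bit next pointer can't chain forever */
	uint16_t status = cfg[0x06] | (cfg[0x07] << 8);
	uint8_t next;

	if (!(status & 0x10))		/* PCI_STATUS_CAP_LIST */
		return 0;

	next = cfg[0x34];		/* head of the list */
	while (next && max--) {
		if (cfg[next & ~3] == 0x01)	/* PCI_CAP_ID_PM */
			return next;
		next = cfg[(next & ~3) + 1];	/* follow the chain */
	}
	return 0;
}
```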
static int
maestro_install(struct pci_dev *pcidev, int card_type)
int i;
struct ess_card *card;
struct ess_state *ess;
- struct pm_dev *pmdev;
+ struct pm_dev *pmdev;
int num = 0;
/* don't pick up weird modem maestros */
iobase = SILLY_PCI_BASE_ADDRESS(pcidev);
- if(check_region(iobase, 256))
+ /* stake our claim on the iospace */
+ if( request_region(iobase, 256, card_names[card_type]) == NULL )
{
printk(KERN_WARNING "maestro: can't allocate 256 bytes I/O at 0x%4.4x\n", iobase);
return 0;
}
/* this was tripping up some machines */
- if(pcidev->irq == 0)
- {
+ if(pcidev->irq == 0) {
printk(KERN_WARNING "maestro: pci subsystem reports irq 0, this might not be correct.\n");
}
}
memset(card, 0, sizeof(*card));
- memcpy(&card->pcidev,pcidev,sizeof(card->pcidev));
+ card->pcidev = pcidev;
- pmdev = pm_register(PM_PCI_DEV,
- PM_PCI_ID(pcidev),
- maestro_pm_callback);
- if (pmdev)
- pmdev->data = card;
+ pmdev = pm_register(PM_PCI_DEV, PM_PCI_ID(pcidev),
+ maestro_pm_callback);
+ if (pmdev)
+ pmdev->data = card;
+
+ if (register_reboot_notifier(&maestro_nb)) {
+ printk(KERN_WARNING "maestro: reboot notifier registration failed; may not reboot properly.\n");
+ }
card->iobase = iobase;
card->card_type = card_type;
card->next = devs;
card->magic = ESS_CARD_MAGIC;
spin_lock_init(&card->lock);
+ init_waitqueue_head(&card->suspend_queue);
devs = card;
/* init our groups of 6 apus */
pci_read_config_dword(pcidev, PCI_SUBSYSTEM_VENDOR_ID, &n);
printk(KERN_INFO "maestro: subvendor id: 0x%08x\n",n);
+ /* turn off power management unless:
+ * - the user explicitly asks for it, or
+ * - we're a 2e (lesser chips seem to have problems)
+ * and we're on our _very_ small whitelist. some implementations
+ * really don't like the pm code, others require it.
+ * feel free to expand this as required.
+ */
+#define SUBSYSTEM_VENDOR(x) (x&0xffff)
+ if( (use_pm != 1) &&
+ ((card_type != TYPE_MAESTRO2E) || (SUBSYSTEM_VENDOR(n) != 0x1028)))
+ use_pm = 0;
+
+ if(!use_pm)
+ printk(KERN_INFO "maestro: not attempting power management.\n");
+ else {
+ if(!parse_power(card,pcidev))
+ printk(KERN_INFO "maestro: no PCI power management interface found.\n");
+ else {
+ pci_read_config_dword(pcidev, card->power_regs, &n);
+ printk(KERN_INFO "maestro: PCI power management capability: 0x%x\n",n>>16);
+ }
+ }
+
maestro_config(card);
- if(maestro_ac97_get(iobase, 0x00)==0x0080) {
+ if(maestro_ac97_get(card, 0x00)==0x0080) {
printk(KERN_ERR "maestro: my goodness! you seem to have a pt101 codec, which is quite rare.\n"
"\tyou should tell someone about this.\n");
} else {
- maestro_ac97_init(card,iobase);
+ maestro_ac97_init(card);
}
if ((card->dev_mixer = register_sound_mixer(&ess_mixer_fops, -1)) < 0) {
unregister_sound_dsp(s->dev_audio);
}
release_region(card->iobase, 256);
+ unregister_reboot_notifier(&maestro_nb);
kfree(card);
return 0;
}
+ /* now go to sleep 'till something interesting happens */
+ maestro_power(card,ACPI_D2);
printk(KERN_INFO "maestro: %d channels configured.\n", num);
return 1;
printk(KERN_WARNING "maestro: clipping dsps_order to %d\n",dsps_order);
}
- init_waitqueue_head(&suspend_queue);
-
/*
* Find the ESS Maestro 2.
*/
return 0;
}
-/* --------------------------------------------------------------------- */
-
-#ifdef MODULE
-MODULE_AUTHOR("Zach Brown <zab@redhat.com>, Alan Cox <alan@redhat.com>");
-MODULE_DESCRIPTION("ESS Maestro Driver");
-#ifdef M_DEBUG
-MODULE_PARM(debug,"i");
-#endif
-MODULE_PARM(dsps_order,"i");
-
-void cleanup_module(void)
+static void nuke_maestros(void)
{
- struct ess_card *s;
+ struct ess_card *card;
+ /* we do these unconditionally, which is probably wrong */
pm_unregister_all(maestro_pm_callback);
+ unregister_reboot_notifier(&maestro_nb);
- while ((s = devs)) {
+ while ((card = devs)) {
int i;
devs = devs->next;
/* XXX maybe should force stop bob, but should be all
stopped by _release by now */
- free_irq(s->irq, s);
- unregister_sound_mixer(s->dev_mixer);
+ free_irq(card->irq, card);
+ unregister_sound_mixer(card->dev_mixer);
for(i=0;i<NR_DSPS;i++)
{
- struct ess_state *ess = &s->channels[i];
+ struct ess_state *ess = &card->channels[i];
if(ess->dev_audio != -1)
unregister_sound_dsp(ess->dev_audio);
}
- release_region(s->iobase, 256);
- kfree(s);
+ /* Goodbye, Mr. Bond. */
+ maestro_power(card,ACPI_D3);
+ release_region(card->iobase, 256);
+ kfree(card);
}
+ devs = NULL;
+}
+
+static int maestro_notifier(struct notifier_block *nb, unsigned long event, void *buf)
+{
+ /* this notifier is called when the kernel is really shut down. */
+ M_printk("maestro: shutting down\n");
+ nuke_maestros();
+ return NOTIFY_OK;
+}
+
+/* --------------------------------------------------------------------- */
+
+#ifdef MODULE
+MODULE_AUTHOR("Zach Brown <zab@zabbo.net>, Alan Cox <alan@redhat.com>");
+MODULE_DESCRIPTION("ESS Maestro Driver");
+#ifdef M_DEBUG
+MODULE_PARM(debug,"i");
+#endif
+MODULE_PARM(dsps_order,"i");
+MODULE_PARM(use_pm,"i");
+
+void cleanup_module(void) {
M_printk("maestro: unloading\n");
+ nuke_maestros();
}
-#endif /* MODULE */
+#else /* MODULE */
+__initcall(init_maestro);
+#endif
+
+/* --------------------------------------------------------------------- */
void
-check_suspend(void)
+check_suspend(struct ess_card *card)
{
DECLARE_WAITQUEUE(wait, current);
- if(!in_suspend) return;
+ if(!card->in_suspend) return;
- in_suspend++;
- add_wait_queue(&suspend_queue, &wait);
+ card->in_suspend++;
+ add_wait_queue(&(card->suspend_queue), &wait);
current->state = TASK_UNINTERRUPTIBLE;
schedule();
- remove_wait_queue(&suspend_queue, &wait);
+ remove_wait_queue(&(card->suspend_queue), &wait);
current->state = TASK_RUNNING;
}
maestro_suspend(struct ess_card *card)
{
unsigned long flags;
- int i,j;
+ int i,j;
- save_flags(flags);
- cli();
-
- M_printk("maestro: pm in dev %p\n",card);
-
- for(i=0;i<NR_DSPS;i++) {
- struct ess_state *s = &card->channels[i];
-
- if(s->dev_audio == -1)
- continue;
-
- M_printk("maestro: stopping apus for device %d\n",i);
- stop_dac(s);
- stop_adc(s);
- for(j=0;j<6;j++)
- card->apu_map[s->apu[j]][5]=apu_get_register(s,j,5);
-
- }
+ save_flags(flags);
+ cli(); /* over-kill */
- /* get rid of interrupts? */
- if( card->dsps_open > 0)
- stop_bob(&card->channels[0]);
+ M_printk("maestro: apm in dev %p\n",card);
- in_suspend=1;
+ /* we have to read from the apu regs, need
+ to power it up */
+ maestro_power(card,ACPI_D0);
- restore_flags(flags);
+ for(i=0;i<NR_DSPS;i++) {
+ struct ess_state *s = &card->channels[i];
+
+ if(s->dev_audio == -1)
+ continue;
- /* we'll let the bios do the rest of the power down.. */
+ M_printk("maestro: stopping apus for device %d\n",i);
+ stop_dac(s);
+ stop_adc(s);
+ for(j=0;j<6;j++)
+ card->apu_map[s->apu[j]][5]=apu_get_register(s,j,5);
+
+ }
+ /* get rid of interrupts? */
+ if( card->dsps_open > 0)
+ stop_bob(&card->channels[0]);
+
+ card->in_suspend++;
+
+ restore_flags(flags);
+
+ /* we trust in the bios to power down the chip on suspend.
+ * XXX I'm also not sure that in_suspend will protect
+ * against all reg accesses from here on out.
+ */
return 0;
}
static int
maestro_resume(struct ess_card *card)
{
unsigned long flags;
- int i;
+ int i;
save_flags(flags);
- cli();
- in_suspend=0;
- M_printk("maestro: resuming\n");
-
- /* first lets just bring everything back. .*/
-
- M_printk("maestro: pm in dev %p\n",card);
-
- maestro_config(card);
- /* need to restore the base pointers.. */
- if(card->dmapages)
- set_base_registers(&card->channels[0],card->dmapages);
-
- mixer_push_state(card);
-
- for(i=0;i<NR_DSPS;i++) {
- struct ess_state *s = &card->channels[i];
- int chan,reg;
-
- if(s->dev_audio == -1)
- continue;
-
- for(chan = 0 ; chan < 6 ; chan++) {
- wave_set_register(s,s->apu[chan]<<3,s->apu_base[chan]);
- for(reg = 1 ; reg < NR_APU_REGS ; reg++)
- apu_set_register(s,chan,reg,s->card->apu_map[s->apu[chan]][reg]);
- }
- for(chan = 0 ; chan < 6 ; chan++)
- apu_set_register(s,chan,0,s->card->apu_map[s->apu[chan]][0] & 0xFF0F);
- }
+ cli(); /* over-kill */
+
+ card->in_suspend = 0;
+
+ M_printk("maestro: resuming card at %p\n",card);
+
+ /* restore all our config */
+ maestro_config(card);
+ /* need to restore the base pointers.. */
+ if(card->dmapages)
+ set_base_registers(&card->channels[0],card->dmapages);
+
+ mixer_push_state(card);
+
+ /* set each channels' apu control registers before
+ * restoring audio
+ */
+ for(i=0;i<NR_DSPS;i++) {
+ struct ess_state *s = &card->channels[i];
+ int chan,reg;
+
+ if(s->dev_audio == -1)
+ continue;
+
+ for(chan = 0 ; chan < 6 ; chan++) {
+ wave_set_register(s,s->apu[chan]<<3,s->apu_base[chan]);
+ for(reg = 1 ; reg < NR_APU_REGS ; reg++)
+ apu_set_register(s,chan,reg,s->card->apu_map[s->apu[chan]][reg]);
+ }
+ for(chan = 0 ; chan < 6 ; chan++)
+ apu_set_register(s,chan,0,s->card->apu_map[s->apu[chan]][0] & 0xFF0F);
+ }
/* now we flip on the music */
- M_printk("maestro: pm in dev %p\n",card);
-
- for(i=0;i<NR_DSPS;i++) {
- struct ess_state *s = &card->channels[i];
-
- /* these use the apu_mode, and can handle
- spurious calls */
- start_dac(s);
- start_adc(s);
- }
- if( card->dsps_open > 0)
- start_bob(&card->channels[0]);
+
+ if( card->dsps_open <= 0) {
+ /* this card's idle */
+ maestro_power(card,ACPI_D2);
+ } else {
+ /* ok, we're actually playing things on
+ this card */
+ maestro_power(card,ACPI_D0);
+ start_bob(&card->channels[0]);
+ for(i=0;i<NR_DSPS;i++) {
+ struct ess_state *s = &card->channels[i];
+
+ /* these use the apu_mode, and can handle
+ spurious calls */
+ start_dac(s);
+ start_adc(s);
+ }
+ }
restore_flags(flags);
- wake_up(&suspend_queue);
+ /* all right, we think things are ready,
+ wake up people who were using the device
+ when we suspended */
+ wake_up(&(card->suspend_queue));
return 0;
}
int
-maestro_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data) {
- struct ess_card *card = (struct ess_card*) dev->data;
- if (card) {
- M_printk("maestro: pm event received: 0x%x\n", rqst);
-
- switch (rqst) {
- case PM_SUSPEND:
- maestro_suspend(card);
- break;
- case PM_RESUME:
- maestro_resume(card);
- break;
- }
- }
+maestro_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data)
+{
+ struct ess_card *card = (struct ess_card*) dev->data;
+ if ( ! card ) goto out;
+
+ M_printk("maestro: pm event 0x%x received for card %p\n", rqst, card);
+
+ switch (rqst) {
+ case PM_SUSPEND:
+ maestro_suspend(card);
+ break;
+ case PM_RESUME:
+ maestro_resume(card);
+ break;
+ /*
+ * we'd also like to find out about
+ * power level changes because some biosen
+ * do mean things to the maestro when they
+ * change their power state.
+ */
+ }
+out:
return 0;
}
* Version 2 (June 1991). See the "COPYING" file distributed with this software
* for more info.
*
- *
* 26-11-1999 Patched to compile without ISA PnP support in the
* kernel - Daniel Stone (tamriel@ductape.net)
*
* 26-03-2000 Fixed acer, esstype and sm_games module options.
* Alessandro Zummo <azummo@ita.flashnet.it>
*
+ * 27-03-2000 ISAPnP multiple card detection, cleanup, and reorg.
+ * Thanks to Gaël Quéri and Alessandro Zummo for testing and fixes.
+ * Paul E. Laufer <pelaufer@csupomona.edu>
+ *
*/
#include <linux/config.h>
#include "sb_mixer.h"
#include "sb.h"
-static int sbmpu = 0;
+#if defined CONFIG_ISAPNP || defined CONFIG_ISAPNP_MODULE
+#define SB_CARDS_MAX 4
+#else
+#define SB_CARDS_MAX 1
+#endif
+
+static int sbmpu[SB_CARDS_MAX] = {0};
+static int sb_cards_num = 0;
extern void *smw_free;
{
if(!sb_dsp_init(hw_config))
hw_config->slots[0] = -1;
- SOUND_LOCK;
}
static int __init probe_sb(struct address_info *hw_config)
if (hw_config->io_base == -1 || hw_config->dma == -1 || hw_config->irq == -1)
{
- printk(KERN_ERR "sb_card: I/O, IRQ, and DMA are mandatory\n");
+ printk(KERN_ERR "sb: I/O, IRQ, and DMA are mandatory\n");
return -EINVAL;
}
}
#endif
- /* This is useless since it is done by sb_dsp_detect - azummo */
-
- if (check_region(hw_config->io_base, 16))
- {
- printk(KERN_ERR "sb_card: I/O port 0x%x is already in use\n\n", hw_config->io_base);
- return 0;
- }
-
/* Setup extra module options */
sbmo.acer = acer;
return sb_dsp_detect(hw_config, 0, 0, &sbmo);
}
-static void __exit unload_sb(struct address_info *hw_config)
+static void __exit unload_sb(struct address_info *hw_config, int card)
{
if(hw_config->slots[0]!=-1)
- sb_dsp_unload(hw_config, sbmpu);
+ sb_dsp_unload(hw_config, sbmpu[card]);
}
-static struct address_info cfg;
-static struct address_info cfg_mpu;
+static struct address_info cfg[SB_CARDS_MAX];
+static struct address_info cfg_mpu[SB_CARDS_MAX];
-struct pci_dev *sb_dev = NULL,
- *mpu_dev = NULL;
+struct pci_dev *sb_dev[SB_CARDS_MAX] = {NULL},
+ *mpu_dev[SB_CARDS_MAX] = {NULL};
#if defined CONFIG_ISAPNP || defined CONFIG_ISAPNP_MODULE
static int isapnp = 1;
static int isapnpjump = 0;
-static int activated = 1;
+static int multiple = 0;
+static int reverse = 0;
+static int uart401 = 0;
+
+static int audio_activated[SB_CARDS_MAX] = {0};
+static int mpu_activated[SB_CARDS_MAX] = {0};
#else
static int isapnp = 0;
+static int multiple = 1;
#endif
MODULE_DESCRIPTION("Soundblaster driver");
#if defined CONFIG_ISAPNP || defined CONFIG_ISAPNP_MODULE
MODULE_PARM(isapnp, "i");
MODULE_PARM(isapnpjump, "i");
+MODULE_PARM(multiple, "i");
+MODULE_PARM(reverse, "i");
+MODULE_PARM(uart401, "i");
MODULE_PARM_DESC(isapnp, "When set to 0, Plug & Play support will be disabled");
MODULE_PARM_DESC(isapnpjump, "Jumps to a specific slot in the driver's PnP table. Use the source, Luke.");
+MODULE_PARM_DESC(multiple, "When set to 0, will not search for multiple cards");
+MODULE_PARM_DESC(reverse, "When set to 1, will reverse ISAPnP search order");
+MODULE_PARM_DESC(uart401, "When set to 1, will attempt to detect and enable the mpu on some clones");
#endif
MODULE_PARM_DESC(io, "Soundblaster i/o base address (0x220,0x240,0x260,0x280)");
#if defined CONFIG_ISAPNP || defined CONFIG_ISAPNP_MODULE
+/* Please add new entries at the end of the table */
+static struct {
+ char *name;
+ unsigned short card_vendor, card_device, audio_vendor, audio_function, mpu_vendor, mpu_function;
+ short dma, dma2, mpu_io, mpu_irq; /* see sb_init() */
+} sb_isapnp_list[] __initdata = {
+ {"Sound Blaster 16",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0024),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster 16",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0026),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster 16",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0027),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster 16",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0029),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster 16",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x002b),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster Vibra16S",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0051),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0001),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster Vibra16C",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0070),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0001),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster Vibra16CL",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0080),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0041),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster Vibra16X",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x00F0),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0043),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 32",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0039),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 32",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0042),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 32",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0043),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 32",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0044),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 32",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0048),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 32",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x0054),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 32",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x009C),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0041),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 64",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x009D),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0042),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 64 Gold",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x009E),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0044),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 64",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x00C1),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0042),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 64",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x00C3),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0045),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 64",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x00C5),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0045),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 64",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x00C7),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0045),
+ 0,0,
+ 0,1,1,-1},
+ {"Sound Blaster AWE 64",
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_DEVICE(0x00E4),
+ ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0045),
+ 0,0,
+ 0,1,1,-1},
+ {"ESS 1868",
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_DEVICE(0x1868),
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1868),
+ 0,0,
+ 0,1,2,-1},
+ {"ESS 1868",
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_DEVICE(0x1868),
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x8611),
+ 0,0,
+ 0,1,2,-1},
+ {"ESS 1869 PnP AudioDrive",
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_DEVICE(0x0003),
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1869),
+ 0,0,
+ 0,1,2,-1},
+ {"ESS 1869",
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_DEVICE(0x1869),
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1869),
+ 0,0,
+ 0,1,2,-1},
+ {"ESS 1878",
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_DEVICE(0x1878),
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1878),
+ 0,0,
+ 0,1,2,-1},
+ {"ESS 1879",
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_DEVICE(0x1879),
+ ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1879),
+ 0,0,
+ 0,1,2,-1},
+ {"CMI 8330 SoundPRO",
+ ISAPNP_VENDOR('C','M','I'), ISAPNP_DEVICE(0x0001),
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001),
+ ISAPNP_VENDOR('@','H','@'), ISAPNP_FUNCTION(0x0001),
+ 0,1,0,-1},
+ {"Diamond DT0197H",
+ ISAPNP_VENDOR('R','W','B'), ISAPNP_DEVICE(0x1688),
+ ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x0001),
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001),
+ 0,-1,0,0},
+ {"ALS007",
+ ISAPNP_VENDOR('A','L','S'), ISAPNP_DEVICE(0x0007),
+ ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x0001),
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001),
+ 0,-1,0,0},
+ {"ALS100",
+ ISAPNP_VENDOR('A','L','S'), ISAPNP_DEVICE(0x0001),
+ ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x0001),
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001),
+ 1,0,0,0},
+ {"ALS110",
+ ISAPNP_VENDOR('A','L','S'), ISAPNP_DEVICE(0x0110),
+ ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x1001),
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x1001),
+ 1,0,0,0},
+ {"ALS120",
+ ISAPNP_VENDOR('A','L','S'), ISAPNP_DEVICE(0x0120),
+ ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x2001),
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x2001),
+ 1,0,0,0},
+ {"ALS200",
+ ISAPNP_VENDOR('A','L','S'), ISAPNP_DEVICE(0x0200),
+ ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x0020),
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0020),
+ 1,0,0,0},
+ {"RTL3000",
+ ISAPNP_VENDOR('R','T','L'), ISAPNP_DEVICE(0x3000),
+ ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x2001),
+ ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x2001),
+ 1,0,0,0},
+ {0}
+};
+
/* That's useful. */
#define show_base(devname, resname, resptr) printk(KERN_INFO "sb: %s %s base located at %#lx\n", devname, resname, (resptr)->start)
int err;
/* Device already active? Let's use it */
-
if(dev->active)
- {
- activated = 0;
return(dev);
- }
- if((err = dev->activate(dev)) < 0)
- {
+
+ if((err = dev->activate(dev)) < 0) {
printk(KERN_ERR "sb: %s %s config failed (out of resources?)[%d]\n", devname, resname, err);
dev->deactivate(dev);
return(dev);
}
-/* Card's specific initialization functions
- */
-
-static struct pci_dev *sb_init_generic(struct pci_bus *bus, struct pci_dev *card, struct address_info *hw_config, struct address_info *mpu_config)
+static struct pci_dev *sb_init(struct pci_bus *bus, struct address_info *hw_config, struct address_info *mpu_config, int slot, int card)
{
- if((sb_dev = isapnp_find_dev(bus, card->vendor, card->device, NULL)))
- {
- sb_dev->prepare(sb_dev);
- if((sb_dev = activate_dev("Soundblaster", "sb", sb_dev)))
- {
- hw_config->io_base = sb_dev->resource[0].start;
- hw_config->irq = sb_dev->irq_resource[0].start;
- hw_config->dma = sb_dev->dma_resource[0].start;
- hw_config->dma2 = sb_dev->dma_resource[1].start;
- mpu_config->io_base = sb_dev->resource[1].start;
- }
- }
- return(sb_dev);
-}
-
-static struct pci_dev *sb_init_ess(struct pci_bus *bus, struct pci_dev *card, struct address_info *hw_config, struct address_info *mpu_config)
-{
- if((sb_dev = isapnp_find_dev(bus, card->vendor, card->device, NULL)))
+ /* Configure Audio device */
+ if((sb_dev[card] = isapnp_find_dev(bus, sb_isapnp_list[slot].audio_vendor, sb_isapnp_list[slot].audio_function, NULL)))
{
- sb_dev->prepare(sb_dev);
-
- if((sb_dev = activate_dev("ESS", "sb", sb_dev)))
- {
- hw_config->io_base = sb_dev->resource[0].start;
- hw_config->irq = sb_dev->irq_resource[0].start;
- hw_config->dma = sb_dev->dma_resource[0].start;
- hw_config->dma2 = sb_dev->dma_resource[1].start;
- mpu_config->io_base = sb_dev->resource[2].start;
+ int ret;
+ ret = sb_dev[card]->prepare(sb_dev[card]);
+ /* If device is active, assume configured with /proc/isapnp
+ * and use anyway. Some other way to check this? */
+ if(ret && ret != -EBUSY) {
+ printk(KERN_ERR "sb: ISAPnP found device that could not be autoconfigured.\n");
+ return(NULL);
}
- }
- return(sb_dev);
-}
-
-static struct pci_dev *sb_init_cmi(struct pci_bus *bus, struct pci_dev *card, struct address_info *hw_config, struct address_info *mpu_config)
-{
- /*
- * The CMI8330/C3D is a very 'stupid' chip... where did they get al those @@@ ?
- * It's ISAPnP section is badly designed and has many flaws, i'll do my best
- * to workaround them. I strongly suggest you to buy a real soundcard.
- * The CMI8330 on my motherboard has also the bad habit to activate
- * the rear channel of my amplifier instead of the front one.
- */
-
- /* @X@0001:Soundblaster.
- */
-
- if((sb_dev = isapnp_find_dev(bus,
- ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001), NULL)))
- {
- sb_dev->prepare(sb_dev);
+ if(ret == -EBUSY)
+ audio_activated[card] = 1;
- if((sb_dev = activate_dev("CMI8330", "sb", sb_dev)))
+ if((sb_dev[card] = activate_dev(sb_isapnp_list[slot].name, "sb", sb_dev[card])))
{
- hw_config->io_base = sb_dev->resource[0].start;
- hw_config->irq = sb_dev->irq_resource[0].start;
- hw_config->dma = sb_dev->dma_resource[0].start;
- hw_config->dma2 = sb_dev->dma_resource[1].start;
-
- show_base("CMI8330", "sb", &sb_dev->resource[0]);
- }
+ hw_config->io_base = sb_dev[card]->resource[0].start;
+ hw_config->irq = sb_dev[card]->irq_resource[0].start;
+ hw_config->dma = sb_dev[card]->dma_resource[sb_isapnp_list[slot].dma].start;
+ if(sb_isapnp_list[slot].dma2 != -1)
+ hw_config->dma2 = sb_dev[card]->dma_resource[sb_isapnp_list[slot].dma2].start;
+ else
+ hw_config->dma2 = -1;
+ } else
+ return(NULL);
+ } else
+ return(NULL);
- if(!sb_dev) return(NULL);
+ /* Cards with MPU as part of Audio device (CTL and ESS) */
+ if(!sb_isapnp_list[slot].mpu_vendor) {
+ mpu_config->io_base = sb_dev[card]->resource[sb_isapnp_list[slot].mpu_io].start;
+ return(sb_dev[card]);
}
- else
- printk(KERN_ERR "sb: CMI8330 panic: sb base not found\n");
- /* @H@0001:mpu
- */
-
- if((mpu_dev = isapnp_find_dev(bus,
- ISAPNP_VENDOR('@','H','@'), ISAPNP_FUNCTION(0x0001), NULL)))
+	/* Cards with separate MPU device (ALS, CMI, etc.) */
+ if(!uart401)
+ return(sb_dev[card]);
+ if((mpu_dev[card] = isapnp_find_dev(bus, sb_isapnp_list[slot].mpu_vendor, sb_isapnp_list[slot].mpu_function, NULL)))
{
- mpu_dev->prepare(mpu_dev);
-
- /* This disables the interrupt on this resource. Do we need it ?
- */
-
- mpu_dev->irq_resource[0].flags = 0;
-
- if((mpu_dev = activate_dev("CMI8330", "mpu", mpu_dev)))
- {
- show_base("CMI8330", "mpu", &mpu_dev->resource[0]);
- mpu_config->io_base = mpu_dev->resource[0].start;
+ int ret = mpu_dev[card]->prepare(mpu_dev[card]);
+ /* If device is active, assume configured with /proc/isapnp
+ * and use anyway */
+ if(ret && ret != -EBUSY) {
+ printk(KERN_ERR "sb: MPU device could not be autoconfigured.\n");
+ return(sb_dev[card]);
}
- }
- else
- printk(KERN_ERR "sb: CMI8330 panic: mpu not found\n");
-
- printk(KERN_INFO "sb: CMI8330 mail reports to Alessandro Zummo <azummo@ita.flashnet.it>\n");
-
- return(sb_dev);
-}
-
-static struct pci_dev *sb_init_diamond(struct pci_bus *bus, struct pci_dev *card, struct address_info *hw_config, struct address_info *mpu_config)
-{
- /*
- * Diamonds DT0197H
- * very similar to the CMI8330 above
- */
-
- /* @@@0001:Soundblaster.
- */
-
- if((sb_dev = isapnp_find_dev(bus,
- ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x0001), NULL)))
- {
- sb_dev->prepare(sb_dev);
+ if(ret == -EBUSY)
+ mpu_activated[card] = 1;
- if((sb_dev = activate_dev("DT0197H", "sb", sb_dev)))
- {
- hw_config->io_base = sb_dev->resource[0].start;
- hw_config->irq = sb_dev->irq_resource[0].start;
- hw_config->dma = sb_dev->dma_resource[0].start;
- hw_config->dma2 = -1;
-
- show_base("DT0197H", "sb", &sb_dev->resource[0]);
- }
-
- if(!sb_dev) return(NULL);
- }
- else
- printk(KERN_ERR "sb: DT0197H panic: sb base not found\n");
-
- /* @X@0001:mpu
- */
-
- if((mpu_dev = isapnp_find_dev(bus,
- ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001), NULL)))
- {
- mpu_dev->prepare(mpu_dev);
-
- if((mpu_dev = activate_dev("DT0197H", "mpu", mpu_dev)))
- {
- show_base("DT0197H", "mpu", &mpu_dev->resource[0]);
- mpu_config->io_base = mpu_dev->resource[0].start;
- }
- }
- else
- printk(KERN_ERR "sb: DT0197H panic: mpu not found\n");
-
- printk(KERN_INFO "sb: DT0197H mail reports to Torsten Werner <twerner@intercomm.de>\n");
-
- return(sb_dev);
-}
-
-static struct pci_dev *sb_init_als(struct pci_bus *bus, struct pci_dev *card, struct address_info *hw_config, struct address_info *mpu_config)
-{
- /*
- * ALS100
- * very similar to both ones above above
- */
-
- /* @@@0001:Soundblaster.
- */
-
- if((sb_dev = isapnp_find_dev(bus,
- ISAPNP_VENDOR('@','@','@'), ISAPNP_FUNCTION(0x0001), NULL)))
- {
- sb_dev->prepare(sb_dev);
+ /* Some mpus use audio device irq? Need to test... -PEL */
+ if(sb_isapnp_list[slot].mpu_irq == -1)
+ mpu_dev[card]->irq_resource[0].flags = 0;
- if((sb_dev = activate_dev("ALS100", "sb", sb_dev)))
- {
- hw_config->io_base = sb_dev->resource[0].start;
- hw_config->irq = sb_dev->irq_resource[0].start;
- hw_config->dma = sb_dev->dma_resource[1].start;
- hw_config->dma2 = sb_dev->dma_resource[0].start;
-
- show_base("ALS100", "sb", &sb_dev->resource[0]);
- }
-
- if(!sb_dev) return(NULL);
- }
- else
- printk(KERN_ERR "sb: ALS100 panic: sb base not found\n");
-
- /* @X@0001:mpu
- */
-
- if((mpu_dev = isapnp_find_dev(bus,
- ISAPNP_VENDOR('@','X','@'), ISAPNP_FUNCTION(0x0001), NULL)))
- {
- mpu_dev->prepare(mpu_dev);
-
- if((mpu_dev = activate_dev("ALS100", "mpu", mpu_dev)))
- {
- show_base("ALS100", "mpu", &mpu_dev->resource[0]);
- mpu_config->io_base = mpu_dev->resource[0].start;
+ if((mpu_dev[card] = activate_dev(sb_isapnp_list[slot].name, "mpu", mpu_dev[card]))) {
+ mpu_config->io_base = mpu_dev[card]->resource[sb_isapnp_list[slot].mpu_io].start;
+ if(sb_isapnp_list[slot].mpu_irq != -1)
+ mpu_config->irq = mpu_dev[card]->irq_resource[sb_isapnp_list[slot].mpu_irq].start;
}
}
else
- printk(KERN_ERR "sb: ALS100 panic: mpu not found\n");
-
- printk(KERN_INFO "sb: ALS100 mail reports to Torsten Werner <twerner@intercomm.de>\n");
-
- return(sb_dev);
+ printk(KERN_ERR "sb: %s panic: mpu not found\n", sb_isapnp_list[slot].name);
+
+ return(sb_dev[card]);
}
-#define SBF_DEV 0x01 /* Please notice that cards without this flag are on the top in the list */
-
-
-static struct { unsigned short vendor, function, flags; struct pci_dev * (*initfunc)(struct pci_bus *, struct pci_dev *, struct address_info *, struct address_info *); char *name; }
-sb_isapnp_list[] __initdata = {
- {ISAPNP_VENDOR('C','M','I'), ISAPNP_FUNCTION(0x0001), 0, &sb_init_cmi, "CMI 8330 SoundPRO" },
- {ISAPNP_VENDOR('R','W','B'), ISAPNP_FUNCTION(0x1688), 0, &sb_init_diamond, "Diamond DT0197H" },
- {ISAPNP_VENDOR('A','L','S'), ISAPNP_FUNCTION(0x0001), 0, &sb_init_als, "ALS 100" },
- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0001), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0031), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0041), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0042), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0043), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
- {ISAPNP_VENDOR('C','T','L'), ISAPNP_FUNCTION(0x0045), SBF_DEV, &sb_init_generic, "Sound Blaster 16" },
- {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x0968), SBF_DEV, &sb_init_ess, "ESS 1688" },
- {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1868), SBF_DEV, &sb_init_ess, "ESS 1868" },
- {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x8611), SBF_DEV, &sb_init_ess, "ESS 1868" },
- {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1869), SBF_DEV, &sb_init_ess, "ESS 1869" },
- {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1878), SBF_DEV, &sb_init_ess, "ESS 1878" },
- {ISAPNP_VENDOR('E','S','S'), ISAPNP_FUNCTION(0x1879), SBF_DEV, &sb_init_ess, "ESS 1879" },
- {0}
-};
-
-static int __init sb_isapnp_init(struct address_info *hw_config, struct address_info *mpu_config, struct pci_bus *bus, struct pci_dev *card, int slot)
+static int __init sb_isapnp_init(struct address_info *hw_config, struct address_info *mpu_config, struct pci_bus *bus, int slot, int card)
{
- struct pci_dev *idev = NULL;
-
- /* You missed the init func? That's bad. */
- if(sb_isapnp_list[slot].initfunc)
- {
- char *busname = bus->name[0] ? bus->name : sb_isapnp_list[slot].name;
-
- printk(KERN_INFO "sb: %s detected\n", busname);
+ char *busname = bus->name[0] ? bus->name : sb_isapnp_list[slot].name;
- /* Initialize this baby. */
+ printk(KERN_INFO "sb: %s detected\n", busname);
- if((idev = sb_isapnp_list[slot].initfunc(bus, card, hw_config, mpu_config)))
- {
- /* We got it. */
+ /* Initialize this baby. */
- printk(KERN_NOTICE "sb: ISAPnP reports '%s' at i/o %#x, irq %d, dma %d, %d\n",
- busname,
- hw_config->io_base, hw_config->irq, hw_config->dma,
- hw_config->dma2);
- return 1;
- }
- else
- printk(KERN_INFO "sb: Failed to initialize %s\n", busname);
+ if(sb_init(bus, hw_config, mpu_config, slot, card)) {
+ /* We got it. */
+
+ printk(KERN_NOTICE "sb: ISAPnP reports '%s' at i/o %#x, irq %d, dma %d, %d\n",
+ busname,
+ hw_config->io_base, hw_config->irq, hw_config->dma,
+ hw_config->dma2);
+ return 1;
}
else
- printk(KERN_ERR "sb: Bad entry in sb_card.c PnP table\n");
+ printk(KERN_INFO "sb: Failed to initialize %s\n", busname);
return 0;
}
-/* Actually this routine will detect and configure only the first card with successful
- initialization. isapnpjump could be used to jump to a specific entry.
- Please always add entries at the end of the array.
- Should this be fixed? - azummo
-*/
-
-int __init sb_isapnp_probe(struct address_info *hw_config, struct address_info *mpu_config)
+int __init sb_isapnp_probe(struct address_info *hw_config, struct address_info *mpu_config, int card)
{
+ static int first = 1;
int i;
/* Count entries in sb_isapnp_list */
- for (i = 0; sb_isapnp_list[i].vendor != 0; i++);
+ for (i = 0; sb_isapnp_list[i].card_vendor != 0; i++);
+ i--;
/* Check and adjust isapnpjump */
- if( isapnpjump < 0 || isapnpjump > ( i - 1 ) )
- {
- printk(KERN_ERR "sb: Valid range for isapnpjump is 0-%d. Adjusted to 0.\n", i-1);
- isapnpjump = 0;
- }
-
- for (i = isapnpjump; sb_isapnp_list[i].vendor != 0; i++) {
-
- if(!(sb_isapnp_list[i].flags & SBF_DEV))
- {
- struct pci_bus *bus = NULL;
-
- while ((bus = isapnp_find_card(
- sb_isapnp_list[i].vendor,
- sb_isapnp_list[i].function,
- bus))) {
-
- if(sb_isapnp_init(hw_config, mpu_config, bus, NULL, i))
- return 0;
- }
- }
+ if( isapnpjump < 0 || isapnpjump > i) {
+ isapnpjump = reverse ? i : 0;
+ printk(KERN_ERR "sb: Valid range for isapnpjump is 0-%d. Adjusted to %d.\n", i, isapnpjump);
}
- /* No cards found. I'll try now to search inside every card for a logical device
- * that matches any entry marked with SBF_DEV in the table.
- */
+ if(!first || !reverse)
+ i = isapnpjump;
+ first = 0;
+ while(sb_isapnp_list[i].card_vendor != 0) {
+ static struct pci_bus *bus = NULL;
- for (i = isapnpjump; sb_isapnp_list[i].vendor != 0; i++) {
-
- if(sb_isapnp_list[i].flags & SBF_DEV)
- {
- struct pci_dev *card = NULL;
-
- while ((card = isapnp_find_dev(NULL,
- sb_isapnp_list[i].vendor,
- sb_isapnp_list[i].function,
- card))) {
-
- if(sb_isapnp_init(hw_config, mpu_config, card->bus, card, i))
- return 0;
+ while ((bus = isapnp_find_card(
+ sb_isapnp_list[i].card_vendor,
+ sb_isapnp_list[i].card_device,
+ bus))) {
+
+ if(sb_isapnp_init(hw_config, mpu_config, bus, i, card)) {
+ isapnpjump = i; /* start next search from here */
+ return 0;
}
}
+ i += reverse ? -1 : 1;
}
return -ENODEV;
static int __init init_sb(void)
{
+ int card, max = multiple ? SB_CARDS_MAX : 1;
+
printk(KERN_INFO "Soundblaster audio driver Copyright (C) by Hannu Savolainen 1993-1996\n");
+
+ for(card = 0; card < max; card++, sb_cards_num++) {
+#if defined CONFIG_ISAPNP || defined CONFIG_ISAPNP_MODULE
+ /* Please remember that even with CONFIG_ISAPNP defined one should still be
+ able to disable PNP support for this single driver! */
+ if(isapnp && (sb_isapnp_probe(&cfg[card], &cfg_mpu[card], card) < 0) ) {
+ if(!sb_cards_num) {
+ printk(KERN_NOTICE "sb: No ISAPnP cards found, trying standard ones...\n");
+ isapnp = 0;
+ } else
+ break;
+ }
+#endif
- /* Please remember that even with CONFIG_ISAPNP defined one should still be
- able to disable PNP support for this single driver!
- */
+ if(!isapnp) {
+ cfg[card].io_base = io;
+ cfg[card].irq = irq;
+ cfg[card].dma = dma;
+ cfg[card].dma2 = dma16;
+ }
-#if defined CONFIG_ISAPNP || defined CONFIG_ISAPNP_MODULE
- if(isapnp && (sb_isapnp_probe(&cfg, &cfg_mpu) < 0) ) {
- printk(KERN_NOTICE "sb_card: No ISAPnP cards found, trying standard ones...\n");
- isapnp = 0;
- }
-#endif
+ cfg[card].card_subtype = type;
+
+ if (!probe_sb(&cfg[card]))
+ return -ENODEV;
+ attach_sb_card(&cfg[card]);
- if( isapnp == 0 ) {
- cfg.io_base = io;
- cfg.irq = irq;
- cfg.dma = dma;
- cfg.dma2 = dma16;
+ if(cfg[card].slots[0]==-1)
+ return -ENODEV;
+
+ if (!isapnp)
+ cfg_mpu[card].io_base = mpu_io;
+ if (probe_sbmpu(&cfg_mpu[card]))
+ sbmpu[card] = 1;
+ if (sbmpu[card])
+ attach_sbmpu(&cfg_mpu[card]);
}
- cfg.card_subtype = type;
+ SOUND_LOCK;
- if (!probe_sb(&cfg))
- return -ENODEV;
- attach_sb_card(&cfg);
+ if(isapnp)
+ printk(KERN_NOTICE "sb: %d Soundblaster PnP card(s) found.\n", sb_cards_num);
- if(cfg.slots[0]==-1)
- return -ENODEV;
-
- if (isapnp == 0)
- cfg_mpu.io_base = mpu_io;
- if (probe_sbmpu(&cfg_mpu))
- sbmpu = 1;
- if (sbmpu)
- attach_sbmpu(&cfg_mpu);
return 0;
}
static void __exit cleanup_sb(void)
{
+ int i;
+
if (smw_free) {
vfree(smw_free);
smw_free = NULL;
}
- unload_sb(&cfg);
- if (sbmpu)
- unload_sbmpu(&cfg_mpu);
- SOUND_LOCK_END;
-#if defined CONFIG_ISAPNP || defined CONFIG_ISAPNP_MODULE
- if(activated)
- {
- if(sb_dev) sb_dev->deactivate(sb_dev);
- if(mpu_dev) mpu_dev->deactivate(mpu_dev);
- }
+ for(i = 0; i < sb_cards_num; i++) {
+ unload_sb(&cfg[i], i);
+ if (sbmpu[i])
+ unload_sbmpu(&cfg_mpu[i]);
+
+#if defined CONFIG_ISAPNP || defined CONFIG_ISAPNP_MODULE
+ if(!audio_activated[i] && sb_dev[i])
+ sb_dev[i]->deactivate(sb_dev[i]);
+ if(!mpu_activated[i] && mpu_dev[i])
+ mpu_dev[i]->deactivate(mpu_dev[i]);
#endif
+ }
+ SOUND_LOCK_END;
}
module_init(init_sb);
return 0;
}
hw_config->name = "Sound Blaster 16";
- hw_config->irq = -devc->irq;
+ if (hw_config->irq < 3 || hw_config->irq == devc->irq)
+ hw_config->irq = -devc->irq;
if (devc->minor > 12) /* What is Vibra's version??? */
sb16_set_mpu_port(devc, hw_config);
break;
/**
* unregister_sound_special - unregister a special sound device
- * @unit: Unit number to allocate
+ * @unit: unit number to allocate
*
- * Release a sound device that was allocated with register_sound_special.
- * The unit passed is the return value from the register function.
+ * Release a sound device that was allocated with
+ * register_sound_special(). The unit passed is the return value from
+ * the register function.
*/
/**
* unregister_sound_mixer - unregister a mixer
- * @unit: Unit number to allocate
+ * @unit: unit number to allocate
*
- * Release a sound device that was allocated with register_sound_mixer.
+ * Release a sound device that was allocated with register_sound_mixer().
* The unit passed is the return value from the register function.
*/
/**
* unregister_sound_midi - unregister a midi device
- * @unit: Unit number to allocate
+ * @unit: unit number to allocate
*
- * Release a sound device that was allocated with register_sound_midi.
+ * Release a sound device that was allocated with register_sound_midi().
* The unit passed is the return value from the register function.
*/
/**
* unregister_sound_dsp - unregister a DSP device
- * @unit: Unit number to allocate
+ * @unit: unit number to allocate
*
- * Release a sound device that was allocated with register_sound_dsp.
+ * Release a sound device that was allocated with register_sound_dsp().
* The unit passed is the return value from the register function.
*
* Both of the allocated units are released together automatically.
/**
* unregister_sound_synth - unregister a synth device
- * @unit: Unit number to allocate
+ * @unit: unit number to allocate
*
- * Release a sound device that was allocated with register_sound_synth.
+ * Release a sound device that was allocated with register_sound_synth().
* The unit passed is the return value from the register function.
*/
static void destroy_special_devices(void)
{
- unregister_sound_special(6);
+ unregister_sound_special(1);
unregister_sound_special(8);
}
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
* History
+ * v0.14.2 Mar 29 2000 Ching Ling Lee
+ * Add clear to silence advance in trident_update_ptr
+ * fix invalid data of the end of the sound
+ * v0.14.1 Mar 24 2000 Ching Ling Lee
+ * ALi 5451 support added, playback and recording O.K.
+ * ALi 5451 originally developed and structured based on sonicvibes, and
+ * suggested to merge into this file by Alan Cox.
* v0.14 Mar 15 2000 Ollie Lho
* 5.1 channel output support with channel binding. What's the Matrix ?
* v0.13.1 Mar 10 2000 Ollie Lho
* new pci device driver interface for 2.4 kernel (done)
*/
+#include <linux/config.h>
#include <linux/module.h>
#include <linux/version.h>
#include <linux/string.h>
enum {
TRIDENT_4D_DX = 0,
TRIDENT_4D_NX,
- SIS_7018
+ SIS_7018,
+ ALI_5451
};
static char * card_names[] = {
"Trident 4DWave DX",
"Trident 4DWave NX",
- "SiS 7018 PCI Audio"
+ "SiS 7018 PCI Audio",
+ "ALi Audio Accelerator"
};
static struct pci_device_id trident_pci_tbl [] __initdata = {
PCI_ANY_ID, PCI_ANY_ID, 0, 0, TRIDENT_4D_NX},
{PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_7018,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, SIS_7018},
+ {PCI_VENDOR_ID_ALI, PCI_DEVICE_ID_ALI_5451,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, ALI_5451},
+ {0,}
};
MODULE_DEVICE_TABLE (pci, trident_pci_tbl);
/* hardware channel */
struct trident_channel *channel;
- /* OSS buffer manangemeent stuff */
+ /* OSS buffer management stuff */
void *rawbuf;
dma_addr_t dma_handle;
unsigned buforder;
/* OSS stuff */
unsigned mapped:1;
unsigned ready:1;
+ unsigned endcleared:1;
+ unsigned update_flag;
unsigned ossfragshift;
int ossmaxfrags;
unsigned subdivision;
/* PCI device stuff */
struct pci_dev * pci_dev;
u16 pci_id;
+ u8 revision;
/* soundcore stuff */
int dev_audio;
/* hardware resources */
unsigned long iobase;
u32 irq;
+
+ /* Function support */
+ struct trident_channel *(*alloc_pcm_channel)(struct trident_card *);
+ struct trident_channel *(*alloc_rec_pcm_channel)(struct trident_card *);
+ void (*free_pcm_channel)(struct trident_card *, int chan);
+ void (*address_interrupt)(struct trident_card *);
};
/* table to map from CHANNELMASK to channel attribute for SiS 7018 */
static struct trident_card *devs = NULL;
+static void ali_ac97_set(struct ac97_codec *codec, u8 reg, u16 val);
+static u16 ali_ac97_get(struct ac97_codec *codec, u8 reg);
+
static void trident_ac97_set(struct ac97_codec *codec, u8 reg, u16 val);
static u16 trident_ac97_get(struct ac97_codec *codec, u8 reg);
case PCI_DEVICE_ID_SI_7018:
global_control |= (ENDLP_IE | MIDLP_IE| BANK_B_EN);
break;
+ case PCI_DEVICE_ID_ALI_5451:
case PCI_DEVICE_ID_TRIDENT_4DWAVE_DX:
case PCI_DEVICE_ID_TRIDENT_4DWAVE_NX:
global_control |= (ENDLP_IE | MIDLP_IE);
return NULL;
}
+static struct trident_channel *ali_alloc_pcm_channel(struct trident_card *card)
+{
+ struct trident_pcm_bank *bank;
+ int idx;
+
+ bank = &card->banks[BANK_A];
+
+ if (bank->bitmap == ~0UL) {
+		/* no more free channels available */
+		printk(KERN_ERR "trident: no more channels available on Bank A.\n");
+ return NULL;
+ }
+ for (idx = 0; idx <= 31; idx++) {
+ if (!(bank->bitmap & (1 << idx))) {
+ struct trident_channel *channel = &bank->channels[idx];
+ bank->bitmap |= 1 << idx;
+ channel->num = idx;
+ return channel;
+ }
+ }
+ return NULL;
+}
+
+static struct trident_channel *ali_alloc_rec_pcm_channel(struct trident_card *card)
+{
+ struct trident_pcm_bank *bank;
+ int idx = ALI_PCM_IN_CHANNEL;
+
+ bank = &card->banks[BANK_A];
+
+ if (!(bank->bitmap & (1 << idx))) {
+ struct trident_channel *channel = &bank->channels[idx];
+ bank->bitmap |= 1 << idx;
+ channel->num = idx;
+ return channel;
+ }
+ return NULL;
+}
+
+
static void trident_free_pcm_channel(struct trident_card *card, int channel)
{
int bank;
}
}
+static void ali_free_pcm_channel(struct trident_card *card, int channel)
+{
+ int bank;
+
+ if (channel > 31)
+ return;
+
+ bank = channel >> 5;
+ channel = channel & 0x1f;
+
+ if (card->banks[bank].bitmap & (1 << (channel))) {
+ card->banks[bank].bitmap &= ~(1 << (channel));
+ }
+}
+
+
/* called with spin lock held */
+
static int trident_load_channel_registers(struct trident_card *card, u32 *data, unsigned int channel)
{
int i;
/* output the channel registers */
for (i = 0; i < CHANNEL_REGS; i++) {
outl(data[i], TRID_REG(card, CHANNEL_START + 4*i));
+ if (i == 2)
+ if (card->pci_id == PCI_DEVICE_ID_ALI_5451)
+ i++; //skip i=3
}
return TRUE;
switch (state->card->pci_id)
{
+ case PCI_DEVICE_ID_ALI_5451:
+ data[0] = 0; /* Current Sample Offset */
+ data[2] = (channel->eso << 16) | (channel->delta & 0xffff);
+ data[3] = 0;
+ break;
case PCI_DEVICE_ID_SI_7018:
data[0] = 0; /* Current Sample Offset */
data[2] = (channel->eso << 16) | (channel->delta & 0xffff);
/* Enable AC-97 ADC (capture) */
switch (card->pci_id)
{
+ case PCI_DEVICE_ID_ALI_5451:
case PCI_DEVICE_ID_SI_7018:
/* for 7018, the ac97 is always in playback/record (duplex) mode */
break;
switch (state->card->pci_id)
{
+ case PCI_DEVICE_ID_ALI_5451:
case PCI_DEVICE_ID_SI_7018:
case PCI_DEVICE_ID_TRIDENT_4DWAVE_DX:
/* 16 bits ESO, CSO for 7018 and DX */
static void trident_update_ptr(struct trident_state *state)
{
struct dmabuf *dmabuf = &state->dmabuf;
- unsigned hwptr;
+ unsigned hwptr, swptr;
+ int clear_cnt = 0;
int diff;
+ unsigned char silence;
+ unsigned half_dmasize;
/* update hardware pointer */
hwptr = trident_get_dma_addr(state);
__stop_adc(state);
dmabuf->error++;
}
+ else if (!dmabuf->endcleared) {
+ swptr = dmabuf->swptr;
+ silence = (dmabuf->fmt & TRIDENT_FMT_16BIT ? 0 : 0x80);
+ if (dmabuf->update_flag & ALI_ADDRESS_INT_UPDATE) {
+			/* We must clear the trailing data of the half dmabuf if needed.
+			   Following the half-buffer scheme of the Address Engine
+			   Interrupt, check whether the data in that half of the
+			   DMA buffer is valid. */
+ half_dmasize = dmabuf->dmasize / 2;
+ if ((diff = hwptr - half_dmasize) < 0 )
+ diff = hwptr;
+ if ((dmabuf->count + diff) < half_dmasize) {
+				//there is invalid data at the end of the half buffer
+ if ((clear_cnt = half_dmasize - swptr) < 0)
+ clear_cnt += half_dmasize;
+ memset (dmabuf->rawbuf + swptr, silence, clear_cnt); //clear the invalid data
+ dmabuf->endcleared = 1;
+ }
+ } else if (dmabuf->count < (signed) dmabuf->fragsize) {
+ clear_cnt = dmabuf->fragsize;
+ if ((swptr + clear_cnt) > dmabuf->dmasize)
+ clear_cnt = dmabuf->dmasize - swptr;
+ memset (dmabuf->rawbuf + swptr, silence, clear_cnt);
+ dmabuf->endcleared = 1;
+ }
+ }
/* since dma machine only interrupts at ESO and ESO/2, we sure have at
least half of dma buffer free, so wake up the process unconditionally */
wake_up(&dmabuf->wait);
wake_up(&dmabuf->wait);
}
}
+ dmabuf->update_flag &= ~ALI_ADDRESS_INT_UPDATE;
}
-static void trident_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+static void trident_address_interrupt(struct trident_card *card)
{
+ int i;
struct trident_state *state;
- struct trident_card *card = (struct trident_card *)dev_id;
+
+ /* Update the pointers for all channels we are running. */
+ /* FIXME: should read interrupt status only once */
+ for (i = 0; i < NR_HW_CH; i++) {
+ if (trident_check_channel_interrupt(card, 63 - i)) {
+ trident_ack_channel_interrupt(card, 63 - i);
+ if ((state = card->states[i]) != NULL) {
+ trident_update_ptr(state);
+ } else {
+ printk("trident: spurious channel irq %d.\n",
+ 63 - i);
+ trident_stop_voice(card, 63 - i);
+ trident_disable_voice_irq(card, 63 - i);
+ }
+ }
+ }
+}
+
+static void ali_address_interrupt(struct trident_card *card)
+{
int i;
+ struct trident_state *state;
+
+ for (i = 0; i < NR_HW_CH; i++) {
+ if (trident_check_channel_interrupt(card, i)) {
+ trident_ack_channel_interrupt(card, i);
+ if ((state = card->states[i]) != NULL) {
+ state->dmabuf.update_flag |= ALI_ADDRESS_INT_UPDATE;
+ trident_update_ptr(state);
+ } else {
+				printk(KERN_ERR "ali: spurious channel irq %d.\n", i);
+ trident_stop_voice(card, i);
+ trident_disable_voice_irq(card, i);
+ }
+ }
+ }
+}
+
+static void trident_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct trident_card *card = (struct trident_card *)dev_id;
u32 event;
spin_lock(&card->lock);
#endif
if (event & ADDRESS_IRQ) {
- /* Update the pointers for all channels we are running. */
- /* FIXME: should read interrupt status only once */
- for (i = 0; i < NR_HW_CH; i++) {
- if (trident_check_channel_interrupt(card, 63 - i)) {
- trident_ack_channel_interrupt(card, 63 - i);
- if ((state = card->states[i]) != NULL) {
- trident_update_ptr(state);
- } else {
- printk("trident: spurious channel irq %d.\n",
- 63 - i);
- trident_stop_voice(card, 63 - i);
- trident_disable_voice_irq(card, 63 - i);
- }
- }
- }
+ card->address_interrupt(card);
}
/* manually clear interrupt status, bad hardware design, blame T^2 */
return -EFAULT;
ret = 0;
+ if (state->card->pci_id == PCI_DEVICE_ID_ALI_5451)
+ outl ( inl (TRID_REG (state->card, ALI_GLOBAL_CONTROL)) | ALI_PCM_IN_ENABLE, TRID_REG (state->card, ALI_GLOBAL_CONTROL));
+
while (count > 0) {
spin_lock_irqsave(&state->card->lock, flags);
if (dmabuf->count > (signed) dmabuf->dmasize) {
return -EFAULT;
ret = 0;
+ if (state->card->pci_id == PCI_DEVICE_ID_ALI_5451)
+ if (dmabuf->channel->num == ALI_PCM_IN_CHANNEL)
+ outl ( inl (TRID_REG (state->card, ALI_GLOBAL_CONTROL)) & ALI_PCM_IN_DISABLE, TRID_REG (state->card, ALI_GLOBAL_CONTROL));
+
while (count > 0) {
spin_lock_irqsave(&state->card->lock, flags);
if (dmabuf->count < 0) {
spin_lock_irqsave(&state->card->lock, flags);
dmabuf->swptr = swptr;
dmabuf->count += cnt;
+ dmabuf->endcleared = 0;
spin_unlock_irqrestore(&state->card->lock, flags);
count -= cnt;
found_virt:
/* found a free virtual channel, allocate hardware channels */
- if ((dmabuf->channel = trident_alloc_pcm_channel(card)) == NULL) {
+ if(file->f_mode & FMODE_READ)
+ dmabuf->channel = card->alloc_rec_pcm_channel(card);
+ else
+ dmabuf->channel = card->alloc_pcm_channel(card);
+
+ if (dmabuf->channel == NULL) {
kfree (card->states[i]);
card->states[i] = NULL;;
return -ENODEV;
}
if (file->f_mode & FMODE_READ) {
- /* FIXME: Trident 4d can only record in singed 16-bits stereo, 48kHz sample,
+ if (card->pci_id == PCI_DEVICE_ID_ALI_5451) {
+ card->states[ALI_PCM_IN_CHANNEL] = state;
+ card->states[i] = NULL;
+ state->virt = ALI_PCM_IN_CHANNEL;
+ }
+ /* FIXME: Trident 4d can only record in signed 16-bits stereo, 48kHz sample,
to be dealed with in trident_set_adc_rate() ?? */
dmabuf->fmt &= ~TRIDENT_FMT_MASK;
if ((minor & 0x0f) == SND_DEV_DSP16)
if (file->f_mode & FMODE_WRITE) {
stop_dac(state);
dealloc_dmabuf(state);
- trident_free_pcm_channel(state->card, dmabuf->channel->num);
+ state->card->free_pcm_channel(state->card, dmabuf->channel->num);
}
if (file->f_mode & FMODE_READ) {
stop_adc(state);
dealloc_dmabuf(state);
- trident_free_pcm_channel(state->card, dmabuf->channel->num);
+ state->card->free_pcm_channel(state->card, dmabuf->channel->num);
}
kfree(state->card->states[state->virt]);
return ((u16) (data >> 16));
}
+/* Write AC97 codec registers for ALi */
+static void ali_ac97_set(struct ac97_codec *codec, u8 reg, u16 val)
+{
+ struct trident_card *card = (struct trident_card *)codec->private_data;
+ unsigned int address, mask;
+ unsigned int wCount1 = 0xffff;
+ unsigned int wCount2 = 0xffff;
+ unsigned long chk1, chk2;
+ unsigned long flags;
+ u32 data;
+
+ data = ((u32) val) << 16;
+
+ address = ALI_AC97_WRITE;
+ mask = ALI_AC97_WRITE_ACTION | ALI_AC97_AUDIO_BUSY;
+ if (codec->id)
+ mask |= ALI_AC97_SECONDARY;
+ if (card->revision == 0x02)
+ mask |= ALI_AC97_WRITE_MIXER_REGISTER;
+
+ spin_lock_irqsave(&card->lock, flags);
+ while (wCount1--) {
+ if ((inw(TRID_REG(card, address)) & ALI_AC97_BUSY_WRITE) == 0) {
+ data |= (mask | (reg & AC97_REG_ADDR));
+
+ chk1 = inl(TRID_REG(card, ALI_STIMER));
+ chk2 = inl(TRID_REG(card, ALI_STIMER));
+ while (--wCount2 && (chk1 == chk2))
+ chk2 = inl(TRID_REG(card, ALI_STIMER));
+ if (wCount2 == 0) {
+ spin_unlock_irqrestore(&card->lock, flags);
+ return;
+ }
+ outl(data, TRID_REG(card, address)); /* write */
+ spin_unlock_irqrestore(&card->lock, flags);
+ return; /* success */
+ }
+ inw(TRID_REG(card, address)); /* wait a read cycle */
+ }
+
+ printk(KERN_ERR "ali: AC97 CODEC write timed out.\n");
+ spin_unlock_irqrestore(&card->lock, flags);
+ return;
+}
+
+/* Read AC97 codec registers for ALi */
+static u16 ali_ac97_get(struct ac97_codec *codec, u8 reg)
+{
+ struct trident_card *card = (struct trident_card *)codec->private_data;
+ unsigned int address, mask;
+ unsigned int wCount1 = 0xffff;
+ unsigned int wCount2 = 0xffff;
+ unsigned long chk1, chk2;
+ unsigned long flags;
+ u32 data;
+
+ address = ALI_AC97_READ;
+ mask = ALI_AC97_READ_ACTION | ALI_AC97_AUDIO_BUSY;
+ if (card->revision == 0x02) {
+ address = ALI_AC97_WRITE;
+ mask &= ALI_AC97_READ_MIXER_REGISTER;
+ }
+ if (codec->id)
+ mask |= ALI_AC97_SECONDARY;
+
+ spin_lock_irqsave(&card->lock, flags);
+ data = (mask | (reg & AC97_REG_ADDR));
+ while (wCount1--) {
+ if ((inw(TRID_REG(card, address)) & ALI_AC97_BUSY_READ) == 0) {
+ chk1 = inl(TRID_REG(card, ALI_STIMER));
+ chk2 = inl(TRID_REG(card, ALI_STIMER));
+ while (--wCount2 && (chk1 == chk2))
+ chk2 = inl(TRID_REG(card, ALI_STIMER));
+ if (wCount2 == 0) {
+ printk(KERN_ERR "ali: AC97 CODEC read timed out.\n");
+ spin_unlock_irqrestore(&card->lock, flags);
+ return 0;
+ }
+ outl(data, TRID_REG(card, address)); /* read */
+ wCount2 = 0xffff;
+ while (wCount2--) {
+ if ((inw(TRID_REG(card, address)) & ALI_AC97_BUSY_READ) == 0) {
+ data = inl(TRID_REG(card, address));
+ spin_unlock_irqrestore(&card->lock, flags);
+ return ((u16) (data >> 16));
+ }
+ }
+ }
+ inw(TRID_REG(card, address)); /* wait a read cycle */
+ }
+ spin_unlock_irqrestore(&card->lock, flags);
+ printk(KERN_ERR "ali: AC97 CODEC read timed out.\n");
+ return 0;
+}
+
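The AC97 routines above bound their busy-waits with a decrementing unsigned counter and then test it against zero. A minimal userspace sketch (the helper names are hypothetical, not driver code) of why the choice of pre- versus post-decrement matters for that timeout check: with `count--`, the counter wraps to `UINT_MAX` when the loop expires, so a `count == 0` test never fires; with `--count` it expires at exactly 0.

```c
#include <assert.h>

/* Sketch of the bounded busy-wait pattern; `busy` stands in for the
 * hardware condition polled in the real loop. */
static unsigned int spin_post(unsigned int count, int busy)
{
	while (count-- && busy)
		;		/* expires with count-- evaluating 0 ... */
	return count;		/* ... then wrapping to UINT_MAX */
}

static unsigned int spin_pre(unsigned int count, int busy)
{
	while (--count && busy)
		;
	return count;		/* on expiry: exactly 0 */
}
```

So only the pre-decrement form lets a plain `if (count == 0)` detect the timeout.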
/* OSS /dev/mixer file operation methods */
static int trident_open_mixdev(struct inode *inode, struct file *file)
{
really exist */
switch (card->pci_id)
{
+ case PCI_DEVICE_ID_ALI_5451:
+ outl(PCMOUT|SECONDARY_ID, TRID_REG(card, SI_SERIAL_INTF_CTRL));
+ ready_2nd = inl(TRID_REG(card, SI_SERIAL_INTF_CTRL));
+ ready_2nd &= SI_AC97_SECONDARY_READY;
+ break;
case PCI_DEVICE_ID_SI_7018:
/* disable AC97 GPIO interrupt */
outl(0x00, TRID_REG(card, SI_AC97_GPIO));
in ac97_probe_codec */
codec->private_data = card;
codec->id = num_ac97;
- /* controller specific low level AC97 access function */
- codec->codec_read = trident_ac97_get;
- codec->codec_write = trident_ac97_set;
+ if (card->pci_id == PCI_DEVICE_ID_ALI_5451) {
+ codec->codec_read = ali_ac97_get;
+ codec->codec_write = ali_ac97_set;
+ }
+ else {
+ codec->codec_read = trident_ac97_get;
+ codec->codec_write = trident_ac97_set;
+ }
+
if (ac97_probe_codec(codec) == 0)
break;
{
unsigned long iobase;
struct trident_card *card;
+ u8 revision;
if (!pci_dma_supported(pci_dev, TRIDENT_DMA_MASK)) {
printk(KERN_ERR "trident: architecture does not support"
" 30bit PCI busmaster DMA\n");
return -ENODEV;
}
+ pci_read_config_byte(pci_dev, PCI_CLASS_REVISION, &revision);
iobase = pci_dev->resource[0].start;
if (check_region(iobase, 256)) {
card->iobase = iobase;
card->pci_dev = pci_dev;
card->pci_id = pci_id->device;
+ card->revision = revision;
card->irq = pci_dev->irq;
card->next = devs;
card->magic = TRIDENT_CARD_MAGIC;
printk(KERN_INFO "trident: %s found at IO 0x%04lx, IRQ %d\n",
card_names[pci_id->driver_data], card->iobase, card->irq);
+ if (card->pci_id == PCI_DEVICE_ID_ALI_5451) {
+ card->alloc_pcm_channel = ali_alloc_pcm_channel;
+ card->alloc_rec_pcm_channel = ali_alloc_rec_pcm_channel;
+ card->free_pcm_channel = ali_free_pcm_channel;
+ card->address_interrupt = ali_address_interrupt;
+ } else {
+ card->alloc_pcm_channel = trident_alloc_pcm_channel;
+ card->alloc_rec_pcm_channel = trident_alloc_pcm_channel;
+ card->free_pcm_channel = trident_free_pcm_channel;
+ card->address_interrupt = trident_address_interrupt;
+ }
/* claim our iospace and irq */
request_region(card->iobase, 256, card_names[pci_id->driver_data]);
if (request_irq(card->irq, &trident_interrupt, SA_SHIRQ,
}
/* register /dev/dsp */
if ((card->dev_audio = register_sound_dsp(&trident_audio_fops, -1)) < 0) {
- printk(KERN_ERR "trident: coundn't register DSP device!\n");
+ printk(KERN_ERR "trident: couldn't register DSP device!\n");
release_region(iobase, 256);
free_irq(card->irq, card);
kfree(card);
}
outl(0x00, TRID_REG(card, T4D_MUSICVOL_WAVEVOL));
+ if (card->pci_id == PCI_DEVICE_ID_ALI_5451) {
+ /* edited by HMSEO for GT sound */
+#ifdef CONFIG_ALPHA_NAUTILUS
+ ac97_data = trident_ac97_get (card->ac97_codec[0], AC97_POWER_CONTROL);
+ trident_ac97_set (card->ac97_codec[0], AC97_POWER_CONTROL, ac97_data | ALI_EAPD_POWER_DOWN);
+#endif
+ }
+
pci_dev->driver_data = card;
pci_dev->dma_mask = TRIDENT_DMA_MASK;
kfree(card);
}
-MODULE_AUTHOR("Alan Cox, Aaron Holtzman, Ollie Lho");
-MODULE_DESCRIPTION("Trident 4DWave/SiS 7018 PCI Audio Driver");
+MODULE_AUTHOR("Alan Cox, Aaron Holtzman, Ollie Lho, Ching Ling Lee");
+MODULE_DESCRIPTION("Trident 4DWave/SiS 7018/ALi 5451 PCI Audio Driver");
#define TRIDENT_MODULE_NAME "trident"
if (!pci_present()) /* No PCI bus in this machine! */
return -ENODEV;
- printk(KERN_INFO "Trident 4DWave/SiS 7018 PCI Audio, version "
+ printk(KERN_INFO "Trident 4DWave/SiS 7018/ALi 5451 PCI Audio, version "
DRIVER_VERSION ", " __TIME__ " " __DATE__ "\n");
if (!pci_register_driver(&trident_pci_driver)) {
#define PCI_VENDOR_ID_SI 0x0139
#endif
+#ifndef PCI_VENDOR_ID_ALI
+#define PCI_VENDOR_ID_ALI 0x10b9
+#endif
+
#ifndef PCI_DEVICE_ID_TRIDENT_4DWAVE_DX
#define PCI_DEVICE_ID_TRIDENT_4DWAVE_DX 0x2000
#endif
#define PCI_DEVICE_ID_SI_7018 0x7018
#endif
+#ifndef PCI_DEVICE_ID_ALI_5451
+#define PCI_DEVICE_ID_ALI_5451 0x5451
+#endif
+
#ifndef FALSE
#define FALSE 0
#define TRUE 1
T4D_AINT_B = 0xd8, T4D_AINTEN_B = 0xdc
};
+enum ali_op_registers {
+ ALI_GLOBAL_CONTROL = 0xd4,
+ ALI_STIMER = 0xc8
+};
+
+enum ali_global_control_bit {
+ ALI_PCM_IN_ENABLE = 0x80000000,
+ ALI_PCM_IN_DISABLE = 0x7fffffff
+};
+
+enum ali_pcm_in_channel_num {
+ ALI_PCM_IN_CHANNEL = 31
+};
+
+enum ali_ac97_power_control_bit {
+ ALI_EAPD_POWER_DOWN = 0x8000
+};
+
+enum ali_update_ptr_flags {
+ ALI_ADDRESS_INT_UPDATE = 0x01
+};
+
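The paired constants `ALI_PCM_IN_ENABLE` (0x80000000) and `ALI_PCM_IN_DISABLE` (0x7fffffff) above are bitwise complements of each other: one is OR'd in to set bit 31 of the global control register, the other AND'd in to clear it, matching the read-modify-write `outl(inl(...) | ...)` calls earlier in the driver. A small userspace model (helper names hypothetical; the driver does this via `inl()`/`outl()` on the real register):

```c
#include <assert.h>
#include <stdint.h>

/* Values taken from enum ali_global_control_bit above. */
#define ALI_PCM_IN_ENABLE	0x80000000u
#define ALI_PCM_IN_DISABLE	0x7fffffffu

static uint32_t ali_pcm_in_on(uint32_t reg)
{
	return reg | ALI_PCM_IN_ENABLE;		/* set bit 31 */
}

static uint32_t ali_pcm_in_off(uint32_t reg)
{
	return reg & ALI_PCM_IN_DISABLE;	/* clear bit 31 */
}
```

Enabling then disabling leaves every other bit of the register untouched.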
/* S/PDIF Operational Registers for 4D-NX */
enum nx_spdif_registers {
NX_SPCTRL_SPCSO = 0x24, NX_SPLBA = 0x28,
SI_SERIAL_INTF_CTRL = 0x48, SI_AC97_GPIO = 0x4c
};
+enum ali_ac97_registers {
+ ALI_AC97_WRITE = 0x40, ALI_AC97_READ = 0x44
+};
+
/* Bit mask for operational registers */
#define AC97_REG_ADDR 0x000000ff
+enum ali_ac97_bits {
+ ALI_AC97_BUSY_WRITE = 0x8000, ALI_AC97_BUSY_READ = 0x8000,
+ ALI_AC97_WRITE_ACTION = 0x8000, ALI_AC97_READ_ACTION = 0x8000,
+ ALI_AC97_AUDIO_BUSY = 0x4000, ALI_AC97_SECONDARY = 0x0080,
+ ALI_AC97_READ_MIXER_REGISTER = 0xfeff,
+ ALI_AC97_WRITE_MIXER_REGISTER = 0x0100
+};
+
enum sis7018_ac97_bits {
SI_AC97_BUSY_WRITE = 0x8000, SI_AC97_BUSY_READ = 0x8000,
SI_AC97_AUDIO_BUSY = 0x4000, SI_AC97_MODEM_BUSY = 0x2000,
*
* including portions (c) 1995-1998 Patrick Caulfield.
*
+ * slight improvements (c) 2000 Edward Betts <edward@debian.org>
+ *
* This file is based on the VGA console driver (vgacon.c):
*
* Created 28 Sep 1997 by Geert Uytterhoeven
/* Ok, there is definitely a card registering at the correct
* memory location, so now we do an I/O port test.
*/
-
- if (! test_mda_b(0x66, 0x0f)) { /* cursor low register */
+
+ /* Edward: These two `tests' mess up my cursor on bootup */
+
+ /* cursor low register */
+ /* if (! test_mda_b(0x66, 0x0f)) {
return 0;
- }
- if (! test_mda_b(0x99, 0x0f)) { /* cursor low register */
+ } */
+
+ /* cursor low register */
+ /* if (! test_mda_b(0x99, 0x0f)) {
return 0;
- }
+ } */
/* See if the card is a Hercules, by checking whether the vsync
* bit of the status register is changing. This test lasts for
mda_initialize();
}
+ /* cursor looks ugly during boot-up, so turn it off */
+ mda_set_cursor(mda_vram_len - 1);
+
printk("mdacon: %s with %ldK of memory detected.\n",
mda_type_name, mda_vram_len/1024);
static int mdacon_blank(struct vc_data *c, int blank)
{
- if (blank) {
- outb_p(0x00, mda_mode_port); /* disable video */
+ if (mda_type == TYPE_MDA) {
+ if (blank)
+ scr_memsetw((void *)mda_vram_base,
+ mda_convert_attr(c->vc_video_erase_char),
+ c->vc_screenbuf_size);
+ /* Tell console.c that it has to restore the screen itself */
+ return 1;
} else {
- outb_p(MDA_MODE_VIDEO_EN | MDA_MODE_BLINK_EN, mda_mode_port);
+ if (blank)
+ outb_p(0x00, mda_mode_port); /* disable video */
+ else
+ outb_p(MDA_MODE_VIDEO_EN | MDA_MODE_BLINK_EN,
+ mda_mode_port);
+ return 0;
}
-
- return 0;
}
static int mdacon_font_op(struct vc_data *c, struct console_font_op *op)
*/
/* version number of this driver */
-#define RIVAFB_VERSION "0.7.0"
+#define RIVAFB_VERSION "0.7.1"
#include <linux/config.h>
#include <linux/module.h>
CH_RIVA_TNT2,
CH_RIVA_UTNT2, /* UTNT2 */
CH_RIVA_VTNT2, /* VTNT2 */
+ CH_RIVA_UVTNT2, /* UVTNT2 */
CH_RIVA_ITNT2, /* ITNT2 */
};
{ "RIVA-TNT2", 5 },
{ "RIVA-UTNT2", 5 },
{ "RIVA-VTNT2", 5 },
+ { "RIVA-UVTNT2", 5 },
{ "RIVA-ITNT2", 5 },
};
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_TNT2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_RIVA_TNT2 },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_UTNT2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_RIVA_UTNT2 },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_VTNT2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_RIVA_VTNT2 },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_UVTNT2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_RIVA_VTNT2 },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_ITNT2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_RIVA_ITNT2 },
{ 0, }, /* terminate list */
};
/*
* linux/drivers/video/tgafb.c -- DEC 21030 TGA frame buffer device
*
- * Copyright (C) 1999 Martin Lucina, Tom Zerucha
+ * Copyright (C) 1999,2000 Martin Lucina, Tom Zerucha
*
- * $Id: tgafb.c,v 1.12 1999/07/01 13:39:23 mato Exp $
+ * $Id: tgafb.c,v 1.12.2.3 2000/04/04 06:44:56 mato Exp $
*
* This driver is partly based on the original TGA framebuffer device, which
* was partly based on the original TGA console driver, which are
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/selection.h>
+#include <linux/console.h>
#include <asm/io.h>
#include <video/fbcon.h>
static int current_par_valid = 0;
static struct display disp;
-static char __initdata default_fontname[40] = { 0 };
+static char default_fontname[40] = { 0 };
static struct fb_var_screeninfo default_var;
static int default_var_valid = 0;
0x00000001
};
-const unsigned int bt463_cursor_source[4] = {
- 0xffff0000, 0x00000000, 0x00000000, 0x00000000
-};
-
/*
* Predefined video modes
fix->type = FB_TYPE_PACKED_PIXELS;
fix->type_aux = 0;
- if (fb_info.tga_type == 0) /* 8-plane */
+ if (fb_info.tga_type == TGA_TYPE_8PLANE) {
fix->visual = FB_VISUAL_PSEUDOCOLOR;
- else /* 24-plane or 24plusZ */
+ } else {
fix->visual = FB_VISUAL_TRUECOLOR;
+ }
fix->line_length = par->xres * (par->bits_per_pixel >> 3);
fix->smem_start = fb_info.tga_fb_base;
struct tgafb_par *par = (struct tgafb_par *)fb_par;
/* round up some */
- if (fb_info.tga_type == 0) {
+ if (fb_info.tga_type == TGA_TYPE_8PLANE) {
if (var->bits_per_pixel > 8) {
return -EINVAL;
}
par->htimings |= TGA_HORIZ_POLARITY;
if (var->sync & FB_SYNC_VERT_HIGH_ACT)
par->vtimings |= TGA_VERT_POLARITY;
- /* what about sync on green? */
+ if (var->sync & FB_SYNC_ON_GREEN) {
+ par->sync_on_green = 1;
+ } else {
+ par->sync_on_green = 0;
+ }
/* store other useful values in par */
par->xres = var->xres;
var->sync |= FB_SYNC_HOR_HIGH_ACT;
if (par->vtimings & TGA_VERT_POLARITY)
var->sync |= FB_SYNC_VERT_HIGH_ACT;
+ if (par->sync_on_green == 1)
+ var->sync |= FB_SYNC_ON_GREEN;
var->xres_virtual = var->xres;
var->yres_virtual = var->yres;
var->xoffset = var->yoffset = 0;
/* depth-related */
- if (fb_info.tga_type == 0) {
+ if (fb_info.tga_type == TGA_TYPE_8PLANE) {
var->red.offset = 0;
var->green.offset = 0;
var->blue.offset = 0;
} else {
- /* XXX: is this correct? */
var->red.offset = 16;
var->green.offset = 8;
var->blue.offset = 0;
if (current_par_valid)
*par = current_par;
else {
- if (fb_info.tga_type == 0)
+ if (fb_info.tga_type == TGA_TYPE_8PLANE)
default_var.bits_per_pixel = 8;
else
default_var.bits_per_pixel = 32;
static void tgafb_set_par(const void *fb_par, struct fb_info_gen *info)
{
- int i, j, temp;
+ int i, j;
struct tgafb_par *par = (struct tgafb_par *)fb_par;
#if 0
current_par = *par;
current_par_valid = 1;
- /* first, disable video timing */
- TGA_WRITE_REG(0x03, TGA_VALID_REG); /* SCANNING and BLANK */
+ /* first, disable video */
+ TGA_WRITE_REG(TGA_VALID_VIDEO | TGA_VALID_BLANK, TGA_VALID_REG);
/* write the DEEP register */
while (TGA_READ_REG(TGA_CMD_STAT_REG) & 1) /* wait for not busy */
TGA_WRITE_REG(par->vtimings, TGA_VERT_REG);
/* initalise RAMDAC */
- if (fb_info.tga_type == 0) { /* 8-plane */
+ if (fb_info.tga_type == TGA_TYPE_8PLANE) {
/* init BT485 RAMDAC registers */
- BT485_WRITE(0xa2, BT485_CMD_0);
+ BT485_WRITE(0xa2 | (par->sync_on_green ? 0x8 : 0x0), BT485_CMD_0);
BT485_WRITE(0x01, BT485_ADDR_PAL_WRITE);
BT485_WRITE(0x14, BT485_CMD_3); /* cursor 64x64 */
BT485_WRITE(0x40, BT485_CMD_1);
TGA_WRITE_REG(0x00|(BT485_DATA_PAL<<8), TGA_RAMDAC_REG);
}
-#if 0
- /* initialize RAMDAC cursor colors */
- BT485_WRITE(0, BT485_ADDR_CUR_WRITE);
-
- BT485_WRITE(0x00, BT485_DATA_CUR); /* overscan WHITE */
- BT485_WRITE(0x00, BT485_DATA_CUR); /* overscan WHITE */
- BT485_WRITE(0x00, BT485_DATA_CUR); /* overscan WHITE */
-
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 1 BLACK */
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 1 BLACK */
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 1 BLACK */
-
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 2 BLACK */
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 2 BLACK */
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 2 BLACK */
-
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 3 BLACK */
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 3 BLACK */
- BT485_WRITE(0x00, BT485_DATA_CUR); /* color 3 BLACK */
-
- /* initialize RAMDAC cursor RAM */
- BT485_WRITE(0x00, BT485_ADDR_PAL_WRITE);
-
- for (i = 0; i < tga_font_height_padded; i++)
- for (j = 7; j >= 0; j--) {
-#if 0
- /* note that this is for a top-right alignment
- * - top left is commented out */
- if( j > /*<*/ ((tga_font_width - 1) >> 3) ) {
- BT485_WRITE(0, BT485_CUR_RAM);
- }
- else if( j == ((tga_font_width - 1) >> 3) ) {
- BT485_WRITE((0xff >> /*<<*/
- (7 - ((tga_font_width - 1)&7))) , BT485_CUR_RAM);
- }
- else {
- BT485_WRITE(0xff, BT485_CUR_RAM);
- }
-#else
- BT485_WRITE(0, BT485_CUR_RAM);
-#endif
- }
- for (i = tga_font_height_padded; i < 64; i++)
- for (j = 0; j < 8; j++) {
- BT485_WRITE(0, BT485_CUR_RAM);
- }
- /* mask? */
-
- for (i = 0; i < 512; i++) {
- BT485_WRITE(0xff, BT485_CUR_RAM);
- }
-#endif
-
} else { /* 24-plane or 24plusZ */
- TGA_WRITE_REG(0x01, TGA_VALID_REG); /* SCANNING */
-
- /*
- * init some registers
- */
+ /* init BT463 registers */
BT463_WRITE(BT463_REG_ACC, BT463_CMD_REG_0, 0x40);
BT463_WRITE(BT463_REG_ACC, BT463_CMD_REG_1, 0x08);
- BT463_WRITE(BT463_REG_ACC, BT463_CMD_REG_2, 0x40);
+ BT463_WRITE(BT463_REG_ACC, BT463_CMD_REG_2,
+ (par->sync_on_green ? 0x80 : 0x40));
BT463_WRITE(BT463_REG_ACC, BT463_READ_MASK_0, 0xff);
BT463_WRITE(BT463_REG_ACC, BT463_READ_MASK_1, 0xff);
BT463_WRITE(BT463_REG_ACC, BT463_BLINK_MASK_2, 0x00);
BT463_WRITE(BT463_REG_ACC, BT463_BLINK_MASK_3, 0x00);
- /*
- * fill the palette
- */
+ /* fill the palette */
BT463_LOAD_ADDR(0x0000);
TGA_WRITE_REG((BT463_PALETTE<<2), TGA_RAMDAC_REG);
TGA_WRITE_REG(0x00|(BT463_PALETTE<<10), TGA_RAMDAC_REG);
}
- /*
- * fill window type table after start of vertical retrace
- */
+ /* fill window type table after start of vertical retrace */
while (!(TGA_READ_REG(TGA_INTR_STAT_REG) & 0x01))
continue;
TGA_WRITE_REG(0x01, TGA_INTR_STAT_REG);
TGA_WRITE_REG(0x01|(BT463_REG_ACC<<10), TGA_RAMDAC_REG);
TGA_WRITE_REG(0x80|(BT463_REG_ACC<<10), TGA_RAMDAC_REG);
}
-
-#if 0
- /*
- * init cursor colors
- */
- BT463_LOAD_ADDR(BT463_CUR_CLR_0);
-
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG); /* background */
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG); /* background */
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG); /* background */
-
- TGA_WRITE_REG(0xff|(BT463_REG_ACC<<10), TGA_RAMDAC_REG); /* foreground */
- TGA_WRITE_REG(0xff|(BT463_REG_ACC<<10), TGA_RAMDAC_REG); /* foreground */
- TGA_WRITE_REG(0xff|(BT463_REG_ACC<<10), TGA_RAMDAC_REG); /* foreground */
-
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG);
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG);
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG);
-
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG);
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG);
- TGA_WRITE_REG(0x00|(BT463_REG_ACC<<10), TGA_RAMDAC_REG);
-
- /*
- * finally, init the cursor shape
- */
- temp = tga_fb_base - 1024; /* this assumes video starts at base
- and base is beyond memory start*/
-
- for (i = 0; i < tga_font_height_padded*4; i++)
- writel(bt463_cursor_source[i&3], temp + i*4);
- for (i = tga_font_height_padded*4; i < 256; i++)
- writel(0, temp + i*4);
- TGA_WRITE_REG(temp & 0x000fffff, TGA_CURSOR_BASE_REG);
-#endif
+
}
/* finally, enable video scan
(and pray for the monitor... :-) */
- TGA_WRITE_REG(0x01, TGA_VALID_REG); /* SCANNING */
+ TGA_WRITE_REG(TGA_VALID_VIDEO, TGA_VALID_REG);
}
palette[regno].blue = blue;
#ifdef FBCON_HAS_CFB32
- if (regno < 16 && fb_info.tga_type != 0)
+ if (regno < 16 && fb_info.tga_type != TGA_TYPE_8PLANE)
fbcon_cfb32_cmap[regno] = (red << 16) | (green << 8) | blue;
#endif
- if (fb_info.tga_type == 0) { /* 8-plane */
+ if (fb_info.tga_type == TGA_TYPE_8PLANE) {
BT485_WRITE(regno, BT485_ADDR_PAL_WRITE);
TGA_WRITE_REG(BT485_DATA_PAL, TGA_RAMDAC_SETUP_REG);
TGA_WRITE_REG(red|(BT485_DATA_PAL<<8),TGA_RAMDAC_REG);
if (con == currcon) { /* current console? */
err = fb_set_cmap(cmap, kspc, tgafb_setcolreg, info);
#if 1
- if (fb_info.tga_type != 0)
+ if (fb_info.tga_type != TGA_TYPE_8PLANE)
tgafb_update_palette();
#endif
return err;
static int tgafb_blank(int blank, struct fb_info_gen *info)
{
static int tga_vesa_blanked = 0;
- u32 vhcr, vvcr;
+ u32 vhcr, vvcr, vvvr;
unsigned long flags;
save_flags(flags);
vhcr = TGA_READ_REG(TGA_HORIZ_REG);
vvcr = TGA_READ_REG(TGA_VERT_REG);
+ vvvr = TGA_READ_REG(TGA_VALID_REG) & ~(TGA_VALID_VIDEO | TGA_VALID_BLANK);
switch (blank) {
case 0: /* Unblanking */
TGA_WRITE_REG(vvcr & 0xbfffffff, TGA_VERT_REG);
tga_vesa_blanked = 0;
}
- TGA_WRITE_REG(0x01, TGA_VALID_REG); /* SCANNING */
+ TGA_WRITE_REG(vvvr | TGA_VALID_VIDEO, TGA_VALID_REG);
break;
case 1: /* Normal blanking */
- TGA_WRITE_REG(0x03, TGA_VALID_REG); /* SCANNING and BLANK */
+ TGA_WRITE_REG(vvvr | TGA_VALID_VIDEO | TGA_VALID_BLANK, TGA_VALID_REG);
break;
case 2: /* VESA blank (vsync off) */
TGA_WRITE_REG(vvcr | 0x40000000, TGA_VERT_REG);
- TGA_WRITE_REG(0x02, TGA_VALID_REG); /* BLANK */
+ TGA_WRITE_REG(vvvr | TGA_VALID_BLANK, TGA_VALID_REG);
tga_vesa_blanked = 1;
break;
case 3: /* VESA blank (hsync off) */
TGA_WRITE_REG(vhcr | 0x40000000, TGA_HORIZ_REG);
- TGA_WRITE_REG(0x02, TGA_VALID_REG); /* BLANK */
+ TGA_WRITE_REG(vvvr | TGA_VALID_BLANK, TGA_VALID_REG);
tga_vesa_blanked = 1;
break;
case 4: /* Poweroff */
TGA_WRITE_REG(vhcr | 0x40000000, TGA_HORIZ_REG);
TGA_WRITE_REG(vvcr | 0x40000000, TGA_VERT_REG);
- TGA_WRITE_REG(0x02, TGA_VALID_REG); /* BLANK */
+ TGA_WRITE_REG(vvvr | TGA_VALID_BLANK, TGA_VALID_REG);
tga_vesa_blanked = 1;
break;
}
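The reworked blanking code above reads `TGA_VALID_REG` once, masks off the VIDEO and BLANK bits, and ORs back only the bits each blank mode needs, instead of overwriting the whole register with magic constants. A userspace sketch of that bit logic (the helper name is hypothetical; the bit values come from tgafb.h):

```c
#include <assert.h>

#define TGA_VALID_VIDEO	0x01
#define TGA_VALID_BLANK	0x02

static unsigned int tga_valid_bits(unsigned int vvvr, int blank)
{
	/* preserve everything except the video/blank control bits */
	vvvr &= ~(TGA_VALID_VIDEO | TGA_VALID_BLANK);
	switch (blank) {
	case 0:	/* unblanking: video on */
		return vvvr | TGA_VALID_VIDEO;
	case 1:	/* normal blanking: keep scanning, blank output */
		return vvvr | TGA_VALID_VIDEO | TGA_VALID_BLANK;
	default: /* VESA blank / poweroff: blank only */
		return vvvr | TGA_VALID_BLANK;
	}
}
```

Because the other register bits are preserved, unblanking no longer clobbers state (such as cursor enable) that the old hard-coded writes discarded.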
static void tgafb_set_disp(const void *fb_par, struct display *disp,
struct fb_info_gen *info)
{
- disp->screen_base = fb_info.tga_fb_base;
+ disp->screen_base = (char *)fb_info.tga_fb_base;
switch (fb_info.tga_type) {
#ifdef FBCON_HAS_CFB8
- case 0: /* 8-plane */
+ case TGA_TYPE_8PLANE:
disp->dispsw = &fbcon_cfb8;
break;
#endif
#ifdef FBCON_HAS_CFB32
- case 1: /* 24-plane */
- case 3: /* 24plusZ */
- disp->dispsw = &fbcon_cfb32;
+ case TGA_TYPE_24PLANE:
+ case TGA_TYPE_24PLUSZ:
+ disp->dispsw = &fbcon_cfb32;
disp->dispsw_data = &fbcon_cfb32_cmap;
break;
#endif
char *this_opt;
int i;
- if (options && *options)
+ if (options && *options) {
for(this_opt=strtok(options,","); this_opt; this_opt=strtok(NULL,",")) {
- if (!*this_opt) continue;
+ if (!*this_opt) { continue; }
- if (!strncmp(this_opt, "font:", 5))
+ if (!strncmp(this_opt, "font:", 5)) {
strncpy(default_fontname, this_opt+5, sizeof default_fontname);
+ }
+
else if (!strncmp(this_opt, "mode:", 5)) {
for (i = 0; i < NUM_TOTAL_MODES; i++) {
if (!strcmp(this_opt+5, tgafb_predefined[i].name))
default_var = tgafb_predefined[i].var;
default_var_valid = 1;
}
- } else {
+ }
+
+ else {
printk(KERN_ERR "tgafb: unknown parameter %s\n", this_opt);
}
}
+ }
return 0;
}
pdev = pci_find_device(PCI_VENDOR_ID_DEC, PCI_DEVICE_ID_DEC_TGA, NULL);
if (!pdev)
return -ENXIO;
- fb_info.tga_mem_base = ioremap(pdev->resource[0].start, 0);
-#ifdef DEBUG
- printk(KERN_DEBUG "tgafb_init: mem_base 0x%x\n", fb_info.tga_mem_base);
-#endif /* DEBUG */
+ /* divine board type */
+ fb_info.tga_mem_base = (unsigned long)ioremap(pdev->resource[0].start, 0);
fb_info.tga_type = (readl(fb_info.tga_mem_base) >> 12) & 0x0f;
fb_info.tga_regs_base = fb_info.tga_mem_base + TGA_REGS_OFFSET;
fb_info.tga_fb_base = (fb_info.tga_mem_base
+ fb_offset_presets[fb_info.tga_type]);
+ pci_read_config_byte(pdev, PCI_REVISION_ID, &fb_info.tga_chip_rev);
- /* XXX Why the fuck is it called modename if it identifies the board? */
- strcpy (fb_info.gen.info.modename,"DEC 21030 TGA ");
- switch (fb_info.tga_type)
- {
- case 0: /* 8-plane */
- strcat (fb_info.gen.info.modename, "8-plane");
- break;
-
- case 1:
- strcat (fb_info.gen.info.modename, "24-plane");
- break;
-
- case 3:
- strcat (fb_info.gen.info.modename, "24plusZ");
- break;
- }
+ /* setup framebuffer */
fb_info.gen.info.node = -1;
fb_info.gen.info.flags = FBINFO_FLAG_DEFAULT;
fb_info.gen.fbhw = &tgafb_hwswitch;
fb_info.gen.fbhw->detect();
+ printk (KERN_INFO "tgafb: DC21030 [TGA] detected, rev=0x%02x\n", fb_info.tga_chip_rev);
+ printk (KERN_INFO "tgafb: at PCI bus %d, device %d, function %d\n",
+ pdev->bus->number, PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+
+ switch (fb_info.tga_type)
+ {
+ case TGA_TYPE_8PLANE:
+ strcpy (fb_info.gen.info.modename,"Digital ZLXp-E1");
+ break;
+
+ case TGA_TYPE_24PLANE:
+ strcpy (fb_info.gen.info.modename,"Digital ZLXp-E2");
+ break;
+
+ case TGA_TYPE_24PLUSZ:
+ strcpy (fb_info.gen.info.modename,"Digital ZLXp-E3");
+ break;
+ }
+
/* This should give a reasonable default video mode */
- if (!default_var_valid)
+
+ if (!default_var_valid) {
default_var = tgafb_predefined[0].var;
+ }
fbgen_get_var(&disp.var, -1, &fb_info.gen.info);
disp.var.activate = FB_ACTIVATE_NOW;
fbgen_do_set_var(&disp.var, 1, &fb_info.gen);
fbgen_install_cmap(0, &fb_info.gen);
if (register_framebuffer(&fb_info.gen.info) < 0)
return -EINVAL;
- printk(KERN_INFO "fb%d: %s frame buffer device\n", GET_FB_IDX(fb_info.gen.info.node),
- fb_info.gen.info.modename);
+ printk(KERN_INFO "fb%d: %s frame buffer device at 0x%lx\n",
+ GET_FB_IDX(fb_info.gen.info.node), fb_info.gen.info.modename,
+ pdev->resource[0].start);
return 0;
}
/*
* linux/drivers/video/tgafb.h -- DEC 21030 TGA frame buffer device
*
- * Copyright (C) 1999 Martin Lucina, Tom Zerucha
+ * Copyright (C) 1999,2000 Martin Lucina, Tom Zerucha
*
- * $Id: tgafb.h,v 1.4 1999/05/15 08:44:31 mato Exp $
+ * $Id: tgafb.h,v 1.4.2.3 2000/04/04 06:44:56 mato Exp $
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file COPYING in the main directory of this archive for
* TGA hardware description (minimal)
*/
+#define TGA_TYPE_8PLANE 0
+#define TGA_TYPE_24PLANE 1
+#define TGA_TYPE_24PLUSZ 3
/*
* Offsets within Memory Space
/*
- * useful defines for managing the video timing registers
+ * useful defines for managing the registers
*/
#define TGA_HORIZ_ODD 0x80000000
#define TGA_VERT_FP 0x0000f800
#define TGA_VERT_ACTIVE 0x000007ff
+#define TGA_VALID_VIDEO 0x01
+#define TGA_VALID_BLANK 0x02
+#define TGA_VALID_CURSOR 0x04
+
/*
* useful defines for managing the ICS1562 PLL clock
struct fb_info_gen gen;
/* Device dependent information */
- int tga_type; /* TGA type: {8plane, 24plane, 24plusZ} */
- unsigned long tga_mem_base;
- unsigned long tga_fb_base;
- unsigned long tga_regs_base;
+ u8 tga_type; /* TGA_TYPE_XXX */
+ u8 tga_chip_rev; /* dc21030 revision */
+ u64 tga_mem_base;
+ u64 tga_fb_base;
+ u64 tga_regs_base;
struct fb_var_screeninfo default_var; /* default video mode */
};
*/
struct tgafb_par {
- int xres, yres; /* resolution in pixels */
- unsigned int htimings; /* horizontal timing register */
- unsigned int vtimings; /* vertical timing register */
- unsigned int pll_freq; /* pixclock in mhz */
- unsigned int bits_per_pixel; /* bits per pixel */
+ u32 xres, yres; /* resolution in pixels */
+ u32 htimings; /* horizontal timing register */
+ u32 vtimings; /* vertical timing register */
+ u32 pll_freq; /* pixclock in mhz */
+ u32 bits_per_pixel; /* bits per pixel */
+ u32 sync_on_green; /* set if sync is on green */
};
#endif /* TGAFB_H */
affs_new_header(struct inode *inode)
{
s32 block;
- struct buffer_head *bh;
pr_debug("AFFS: new_header(ino=%lu)\n",inode->i_ino);
if (!(block = affs_balloc(inode,0))) {
while (affs_find_new_zone(inode->i_sb,0)) {
if ((block = affs_balloc(inode,0)))
- goto init_block;
+ return block;
schedule();
}
return 0;
}
-init_block:
- if (!(bh = getblk(inode->i_dev,block,AFFS_I2BSIZE(inode)))) {
- affs_error(inode->i_sb,"new_header","Cannot get block %d",block);
- return 0;
- }
- memset(bh->b_data,0,AFFS_I2BSIZE(inode));
- mark_buffer_uptodate(bh,1);
- mark_buffer_dirty(bh,1);
- affs_brelse(bh);
-
return block;
}
unsigned long oldest;
struct affs_zone *zone;
struct super_block *sb;
- struct buffer_head *bh;
int i = 0;
s32 block;
unlock_super(sb);
block = inode->u.affs_i.i_data[inode->u.affs_i.i_pa_next++];
inode->u.affs_i.i_pa_next &= AFFS_MAX_PREALLOC - 1;
- goto init_block;
+ return block;
}
unlock_super(sb);
oldest = jiffies;
if (!(block = affs_balloc(inode,i))) { /* No data zones left */
while (affs_find_new_zone(sb,i)) {
if ((block = affs_balloc(inode,i)))
- goto init_block;
+ return block;
schedule();
}
inode->u.affs_i.i_zone = 0;
zone->z_ino = -1;
return 0;
}
-
-init_block:
- if (!(bh = getblk(inode->i_dev,block,sb->s_blocksize))) {
- affs_error(inode->i_sb,"new_data","Cannot get block %d",block);
- return 0;
- }
- memset(bh->b_data,0,sb->s_blocksize);
- mark_buffer_uptodate(bh,1);
- mark_buffer_dirty(bh,1);
- affs_brelse(bh);
-
return block;
}
*/
#define DEBUG 0
+#include <asm/div64.h>
#include <asm/uaccess.h>
#include <asm/system.h>
#include <linux/sched.h>
#include <linux/malloc.h>
#include <linux/stat.h>
#include <linux/locks.h>
+#include <linux/smp_lock.h>
#include <linux/dirent.h>
#include <linux/fs.h>
#include <linux/amigaffs.h>
#error PAGE_SIZE must be at least 4096
#endif
-static int affs_bmap(struct inode *inode, int block);
static struct buffer_head *affs_getblock(struct inode *inode, s32 block);
static ssize_t affs_file_read_ofs(struct file *filp, char *buf, size_t count, loff_t *ppos);
static ssize_t affs_file_write(struct file *filp, const char *buf, size_t count, loff_t *ppos);
return inode->u.affs_i.i_ec->ec[index];
}
-static int
+int
affs_bmap(struct inode *inode, int block)
{
struct buffer_head *bh;
pr_debug("AFFS: bmap(%lu,%d)\n",inode->i_ino,block);
+ lock_kernel();
if (block < 0) {
affs_error(inode->i_sb,"bmap","Block < 0");
- return 0;
+ goto out_fail;
}
if (!inode->u.affs_i.i_ec) {
if (alloc_ext_cache(inode)) {
- return 0;
+ goto out_fail;
}
}
tkc = &inode->u.affs_i.i_ec->kc[i];
/* Look in any cache if the key is there */
if (block <= tkc->kc_last && block >= tkc->kc_first) {
+ unlock_kernel();
return tkc->kc_keys[block - tkc->kc_first];
}
}
for (;;) {
bh = affs_bread(inode->i_dev,key,AFFS_I2BSIZE(inode));
- if (!bh)
- return 0;
+ if (!bh)
+ goto out_fail;
+
index = seqnum_to_index(ext);
if (index > inode->u.affs_i.i_ec->max_ext &&
(affs_checksum_block(AFFS_I2BSIZE(inode),bh->b_data,&ptype,&stype) ||
(ptype != T_SHORT && ptype != T_LIST) || stype != ST_FILE)) {
affs_brelse(bh);
- return 0;
+ goto out_fail;
}
nkey = be32_to_cpu(FILE_END(bh->b_data,inode)->extension);
if (block < AFFS_I2HSIZE(inode)) {
kc->kc_next_key = nkey;
key = be32_to_cpu(AFFS_BLOCK(bh->b_data,inode,block));
affs_brelse(bh);
+out:
+ unlock_kernel();
return key;
+
+out_fail:
+ key = 0;
+ goto out;
}
-/* AFFS is currently broken */
-static int affs_get_block(struct inode *inode, long block, struct buffer_head *bh, int create)
+
+static int affs_get_block(struct inode *inode, long block, struct buffer_head *bh_result, int create)
{
- BUG();
- return -1;
+ int err, phys = 0, new = 0;
+
+ if (!create) {
+ phys = affs_bmap(inode, block);
+ if (phys) {
+ bh_result->b_dev = inode->i_dev;
+ bh_result->b_blocknr = phys;
+ bh_result->b_state |= (1UL << BH_Mapped);
+ }
+ return 0;
+ }
+
+ err = -EIO;
+ lock_kernel();
+ if (block < 0)
+ goto abort_negative;
+
+ if (affs_getblock(inode, block) == NULL)
+ goto abort;
+
+ /* look up the physical block we just made sure exists */
+ phys = affs_bmap(inode, block);
+ if (!phys)
+ goto abort;
+
+ err = 0;
+ bh_result->b_dev = inode->i_dev;
+ bh_result->b_blocknr = phys;
+ bh_result->b_state |= (1UL << BH_Mapped);
+ if (new)
+ bh_result->b_state |= (1UL << BH_New);
+
+abort:
+ unlock_kernel();
+ return err;
+
+abort_negative:
+ affs_error(inode->i_sb,"affs_get_block","Block < 0");
+ goto abort;
+
}
+
static int affs_writepage(struct dentry *dentry, struct page *page)
{
return block_write_full_page(page,affs_get_block);
* What a mess.
*/
-static struct buffer_head *
-affs_getblock(struct inode *inode, s32 block)
+static struct buffer_head *affs_getblock(struct inode *inode, s32 block)
{
struct super_block *sb = inode->i_sb;
int ofs = sb->u.affs_sb.s_flags & SF_OFS;
pr_debug("AFFS: getblock(%lu,%d)\n",inode->i_ino,block);
- if (block < 0)
- goto out_fail;
-
key = calc_key(inode,&ext);
block -= ext * AFFS_I2HSIZE(inode);
pt = ext ? T_LIST : T_SHORT;
for (cf = 0; j < AFFS_I2HSIZE(inode) && j <= block; j++) {
if (ofs && !pbh && inode->u.affs_i.i_lastblock >= 0) {
if (j > 0) {
- s32 k = AFFS_BLOCK(bh->b_data, inode,
- j - 1);
+ s32 k = AFFS_BLOCK(bh->b_data, inode, j - 1);
pbh = affs_bread(inode->i_dev,
be32_to_cpu(k),
AFFS_I2BSIZE(inode));
} else
pbh = affs_getblock(inode,inode->u.affs_i.i_lastblock);
if (!pbh) {
- affs_error(sb,"getblock",
- "Cannot get last block in file");
+ affs_error(sb,"getblock", "Cannot get last block in file");
break;
}
}
if (ofs) {
ebh = affs_bread(inode->i_dev,nkey,AFFS_I2BSIZE(inode));
if (!ebh) {
- affs_error(sb,"getblock",
- "Cannot get block %d",nkey);
+ affs_error(sb,"getblock", "Cannot get block %d",nkey);
affs_free_block(sb,nkey);
AFFS_BLOCK(bh->b_data,inode,j) = 0;
break;
DATA_FRONT(ebh)->primary_type = cpu_to_be32(T_DATA);
DATA_FRONT(ebh)->header_key = cpu_to_be32(inode->i_ino);
DATA_FRONT(ebh)->sequence_number = cpu_to_be32(inode->u.affs_i.i_lastblock + 1);
- affs_fix_checksum(AFFS_I2BSIZE(inode),
- ebh->b_data, 5);
+ affs_fix_checksum(AFFS_I2BSIZE(inode), ebh->b_data, 5);
mark_buffer_dirty(ebh, 0);
if (pbh) {
DATA_FRONT(pbh)->data_size = cpu_to_be32(AFFS_I2BSIZE(inode) - 24);
ssize_t blocksize;
struct buffer_head *bh;
void *data;
+ loff_t tmp;
pr_debug("AFFS: file_read_ofs(ino=%lu,pos=%lu,%d)\n",inode->i_ino,
(unsigned long)*ppos,count);
left = MIN (inode->i_size - *ppos,count - (buf - start));
if (!left)
break;
- sector = affs_bmap(inode,(u32)*ppos / blocksize);
+ tmp = *ppos;
+ do_div(tmp, blocksize);
+ sector = affs_bmap(inode, tmp);
if (!sector)
break;
- offset = (u32)*ppos % blocksize;
+ tmp = *ppos;
+ offset = do_div(tmp, blocksize);
bh = affs_bread(inode->i_dev,sector,AFFS_I2BSIZE(inode));
if (!bh)
break;
}
static ssize_t
-affs_file_write(struct file *filp, const char *buf, size_t count, loff_t *ppos)
+affs_file_write(struct file *file, const char *buf, size_t count, loff_t *ppos)
{
- struct inode *inode = filp->f_dentry->d_inode;
- off_t pos;
- ssize_t written;
- ssize_t c;
- ssize_t blocksize;
- struct buffer_head *bh;
- char *p;
+ ssize_t retval;
- if (!count)
- return 0;
- pr_debug("AFFS: file_write(ino=%lu,pos=%lu,count=%d)\n",inode->i_ino,
- (unsigned long)*ppos,count);
-
- if (!inode) {
- affs_error(inode->i_sb,"file_write","Inode = NULL");
- return -EINVAL;
+ retval = generic_file_write(file, buf, count, ppos);
+ if (retval > 0) {
+ struct inode *inode = file->f_dentry->d_inode;
+ inode->i_ctime = inode->i_mtime = CURRENT_TIME;
+ mark_inode_dirty(inode);
}
- if (!S_ISREG(inode->i_mode)) {
- affs_error(inode->i_sb,"file_write",
- "Trying to write to non-regular file (mode=%07o)",
- inode->i_mode);
- return -EINVAL;
- }
- if (!inode->u.affs_i.i_ec && alloc_ext_cache(inode))
- return -ENOMEM;
- if (filp->f_flags & O_APPEND)
- pos = inode->i_size;
- else
- pos = *ppos;
- written = 0;
- blocksize = AFFS_I2BSIZE(inode);
-
- while (written < count) {
- bh = affs_getblock(inode,pos / blocksize);
- if (!bh) {
- if (!written)
- written = -ENOSPC;
- break;
- }
- c = blocksize - (pos % blocksize);
- if (c > count - written)
- c = count - written;
- if (c != blocksize && !buffer_uptodate(bh)) {
- ll_rw_block(READ,1,&bh);
- wait_on_buffer(bh);
- if (!buffer_uptodate(bh)) {
- affs_brelse(bh);
- if (!written)
- written = -EIO;
- break;
- }
- }
- p = (pos % blocksize) + bh->b_data;
- c -= copy_from_user(p,buf,c);
- if (!c) {
- affs_brelse(bh);
- if (!written)
- written = -EFAULT;
- break;
- }
- update_vm_cache(inode,pos,p,c);
- mark_buffer_uptodate(bh,1);
- mark_buffer_dirty(bh,0);
- affs_brelse(bh);
- pos += c;
- written += c;
- buf += c;
- }
- if (pos > inode->i_size)
- inode->i_size = pos;
- inode->i_mtime = inode->i_ctime = CURRENT_TIME;
- *ppos = pos;
- mark_inode_dirty(inode);
- return written;
+ return retval;
}
static ssize_t
-affs_file_write_ofs(struct file *filp, const char *buf, size_t count, loff_t *ppos)
+affs_file_write_ofs(struct file *file, const char *buf, size_t count, loff_t *ppos)
{
- struct inode *inode = filp->f_dentry->d_inode;
- off_t pos;
- ssize_t written;
- ssize_t c;
- ssize_t blocksize;
- struct buffer_head *bh;
- char *p;
+ ssize_t retval;
- pr_debug("AFFS: file_write_ofs(ino=%lu,pos=%lu,count=%d)\n",inode->i_ino,
- (unsigned long)*ppos,count);
-
- if (!count)
- return 0;
- if (!inode) {
- affs_error(inode->i_sb,"file_write_ofs","Inode = NULL");
- return -EINVAL;
+ retval = generic_file_write(file, buf, count, ppos);
+ if (retval > 0) {
+ struct inode *inode = file->f_dentry->d_inode;
+ inode->i_ctime = inode->i_mtime = CURRENT_TIME;
+ mark_inode_dirty(inode);
}
- if (!S_ISREG(inode->i_mode)) {
- affs_error(inode->i_sb,"file_write_ofs",
- "Trying to write to non-regular file (mode=%07o)",
- inode->i_mode);
- return -EINVAL;
- }
- if (!inode->u.affs_i.i_ec && alloc_ext_cache(inode))
- return -ENOMEM;
- if (filp->f_flags & O_APPEND)
- pos = inode->i_size;
- else
- pos = *ppos;
-
- bh = NULL;
- blocksize = AFFS_I2BSIZE(inode) - 24;
- written = 0;
- while (written < count) {
- bh = affs_getblock(inode,pos / blocksize);
- if (!bh) {
- if (!written)
- written = -ENOSPC;
- break;
- }
- c = blocksize - (pos % blocksize);
- if (c > count - written)
- c = count - written;
- if (c != blocksize && !buffer_uptodate(bh)) {
- ll_rw_block(READ,1,&bh);
- wait_on_buffer(bh);
- if (!buffer_uptodate(bh)) {
- affs_brelse(bh);
- if (!written)
- written = -EIO;
- break;
- }
- }
- p = (pos % blocksize) + bh->b_data + 24;
- c -= copy_from_user(p,buf,c);
- if (!c) {
- affs_brelse(bh);
- if (!written)
- written = -EFAULT;
- break;
- }
- update_vm_cache(inode,pos,p,c);
-
- pos += c;
- buf += c;
- written += c;
- DATA_FRONT(bh)->data_size = cpu_to_be32(be32_to_cpu(DATA_FRONT(bh)->data_size) + c);
- affs_fix_checksum(AFFS_I2BSIZE(inode),bh->b_data,5);
- mark_buffer_uptodate(bh,1);
- mark_buffer_dirty(bh,0);
- affs_brelse(bh);
- }
- if (pos > inode->i_size)
- inode->i_size = pos;
- *ppos = pos;
- inode->i_mtime = inode->i_ctime = CURRENT_TIME;
- mark_inode_dirty(inode);
- return written;
+ return retval;
}
/* Free any preallocated blocks. */
int blocksize = AFFS_I2BSIZE(inode);
int rem;
int ext;
+ loff_t tmp;
pr_debug("AFFS: truncate(inode=%ld,size=%lu)\n",inode->i_ino,inode->i_size);
net_blocksize = blocksize - ((inode->i_sb->u.affs_sb.s_flags & SF_OFS) ? 24 : 0);
- first = (inode->i_size + net_blocksize - 1) / net_blocksize;
+ first = inode->i_size + net_blocksize - 1;
+ do_div(first, net_blocksize);
if (inode->u.affs_i.i_lastblock < first - 1) {
/* There has to be at least one new block to be allocated */
if (!inode->u.affs_i.i_ec && alloc_ext_cache(inode)) {
affs_warning(inode->i_sb,"truncate","Cannot extend file");
inode->i_size = net_blocksize * (inode->u.affs_i.i_lastblock + 1);
} else if (inode->i_sb->u.affs_sb.s_flags & SF_OFS) {
- rem = inode->i_size % net_blocksize;
+ tmp = inode->i_size;
+ rem = do_div(tmp, net_blocksize);
DATA_FRONT(bh)->data_size = cpu_to_be32(rem ? rem : net_blocksize);
affs_fix_checksum(blocksize,bh->b_data,5);
mark_buffer_dirty(bh,0);
affs_free_block(inode->i_sb,ekey);
ekey = key;
}
- block = ((inode->i_size + net_blocksize - 1) / net_blocksize) - 1;
+ block = inode->i_size + net_blocksize - 1;
+ do_div(block, net_blocksize);
+ block--;
inode->u.affs_i.i_lastblock = block;
/* If the file is not truncated to a block boundary,
* so it cannot become accessible again.
*/
- rem = inode->i_size % net_blocksize;
+ tmp = inode->i_size;
+ rem = do_div(tmp, net_blocksize);
if (rem) {
if ((inode->i_sb->u.affs_sb.s_flags & SF_OFS))
rem += 24;
*/
#define DEBUG 0
+#include <asm/div64.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/malloc.h>
unsigned long prot;
s32 ptype, stype;
unsigned short id;
+ loff_t tmp;
pr_debug("AFFS: read_inode(%lu)\n",inode->i_ino);
block = AFFS_I2BSIZE(inode) - 24;
else
block = AFFS_I2BSIZE(inode);
- inode->u.affs_i.i_lastblock = ((inode->i_size + block - 1) / block) - 1;
+ tmp = inode->i_size + block - 1;
+ do_div(tmp, block);
+ tmp--;
+ inode->u.affs_i.i_lastblock = tmp;
break;
case ST_SOFTLINK:
inode->i_mode |= S_IFLNK;
* @inode: Inode to mark bad
*
* When an inode cannot be read due to a media or remote network
- * failure this function makes the inode 'bad' and causes I/O operations
- * on it to fail from this point on
+ * failure this function makes the inode "bad" and causes I/O operations
+ * on it to fail from this point on.
*/
void make_bad_inode(struct inode * inode)
* is_bad_inode - is an inode errored
* @inode: inode to test
*
- * Returns true if the inode in question has been marked as bad
+ * Returns true if the inode in question has been marked as bad.
*/
int is_bad_inode(struct inode * inode)
/* OK, This is the point of no return */
current->personality = PER_LINUX;
-#if defined(__sparc__) && !defined(__sparc_v9__)
+#if defined(__sparc__)
+ current->personality = PER_SUNOS;
+#if !defined(__sparc_v9__)
memcpy(¤t->thread.core_exec, &ex, sizeof(struct exec));
+#endif
#endif
current->mm->end_code = ex.a_text +
* since the vma has no handle.
*/
-static int block_fsync(struct file *filp, struct dentry *dentry)
+int block_fsync(struct file *filp, struct dentry *dentry)
{
return fsync_dev(dentry->d_inode->i_rdev);
}
return ret;
}
-static int blkdev_close(struct inode * inode, struct file * filp)
+int blkdev_close(struct inode * inode, struct file * filp)
{
return blkdev_put(inode->i_bdev, BDEV_FILE);
}
#define DO16(buf) DO8(buf,0); DO8(buf,8);
/* ========================================================================= */
-uLong ZEXPORT adler32(adler, buf, len)
+uLong ZEXPORT cramfs_adler32(adler, buf, len)
uLong adler;
const Bytef *buf;
uInt len;
*/
-void inflate_blocks_reset(s, z, c)
+void cramfs_inflate_blocks_reset(s, z, c)
inflate_blocks_statef *s;
z_streamp z;
uLongf *c;
if (c != Z_NULL)
*c = s->check;
if (s->mode == CODES)
- inflate_codes_free(s->sub.decode.codes, z);
+ cramfs_inflate_codes_free(s->sub.decode.codes, z);
s->mode = TYPE;
s->bitk = 0;
s->bitb = 0;
}
-inflate_blocks_statef *inflate_blocks_new(z, c, w)
+inflate_blocks_statef *cramfs_inflate_blocks_new(z, c, w)
z_streamp z;
check_func c;
uInt w;
s->end = s->window + w;
s->checkfn = c;
s->mode = TYPE;
- inflate_blocks_reset(s, z, Z_NULL);
+ cramfs_inflate_blocks_reset(s, z, Z_NULL);
return s;
}
-int inflate_blocks(s, z, r)
+int cramfs_inflate_blocks(s, z, r)
inflate_blocks_statef *s;
z_streamp z;
int r;
uInt bl, bd;
inflate_huft *tl, *td;
- inflate_trees_fixed(&bl, &bd, &tl, &td, z);
- s->sub.decode.codes = inflate_codes_new(bl, bd, tl, td, z);
+ cramfs_inflate_trees_fixed(&bl, &bd, &tl, &td, z);
+ s->sub.decode.codes = cramfs_inflate_codes_new(bl, bd, tl, td, z);
if (s->sub.decode.codes == Z_NULL)
{
r = Z_MEM_ERROR;
while (s->sub.trees.index < 19)
s->sub.trees.blens[border[s->sub.trees.index++]] = 0;
s->sub.trees.bb = 7;
- t = inflate_trees_bits(s->sub.trees.blens, &s->sub.trees.bb,
+ t = cramfs_inflate_trees_bits(s->sub.trees.blens, &s->sub.trees.bb,
&s->sub.trees.tb, s->hufts, z);
if (t != Z_OK)
{
t = s->sub.trees.bb;
NEEDBITS(t)
- h = s->sub.trees.tb + ((uInt)b & inflate_mask[t]);
+ h = s->sub.trees.tb + ((uInt)b & cramfs_inflate_mask[t]);
t = h->bits;
c = h->base;
if (c < 16)
j = c == 18 ? 11 : 3;
NEEDBITS(t + i)
DUMPBITS(t)
- j += (uInt)b & inflate_mask[i];
+ j += (uInt)b & cramfs_inflate_mask[i];
DUMPBITS(i)
i = s->sub.trees.index;
t = s->sub.trees.table;
bl = 9; /* must be <= 9 for lookahead assumptions */
bd = 6; /* must be <= 9 for lookahead assumptions */
t = s->sub.trees.table;
- t = inflate_trees_dynamic(257 + (t & 0x1f), 1 + ((t >> 5) & 0x1f),
+ t = cramfs_inflate_trees_dynamic(257 + (t & 0x1f), 1 + ((t >> 5) & 0x1f),
s->sub.trees.blens, &bl, &bd, &tl, &td,
s->hufts, z);
if (t != Z_OK)
r = t;
LEAVE
}
- if ((c = inflate_codes_new(bl, bd, tl, td, z)) == Z_NULL)
+ if ((c = cramfs_inflate_codes_new(bl, bd, tl, td, z)) == Z_NULL)
{
r = Z_MEM_ERROR;
LEAVE
s->mode = CODES;
case CODES:
UPDATE
- if ((r = inflate_codes(s, z, r)) != Z_STREAM_END)
- return inflate_flush(s, z, r);
+ if ((r = cramfs_inflate_codes(s, z, r)) != Z_STREAM_END)
+ return cramfs_inflate_flush(s, z, r);
r = Z_OK;
- inflate_codes_free(s->sub.decode.codes, z);
+ cramfs_inflate_codes_free(s->sub.decode.codes, z);
LOAD
if (!s->last)
{
}
-int inflate_blocks_free(s, z)
+int cramfs_inflate_blocks_free(s, z)
inflate_blocks_statef *s;
z_streamp z;
{
- inflate_blocks_reset(s, z, Z_NULL);
+ cramfs_inflate_blocks_reset(s, z, Z_NULL);
return Z_OK;
}
-void inflate_set_dictionary(s, d, n)
+void cramfs_inflate_set_dictionary(s, d, n)
inflate_blocks_statef *s;
const Bytef *d;
uInt n;
struct inflate_blocks_state;
typedef struct inflate_blocks_state FAR inflate_blocks_statef;
-extern inflate_blocks_statef * inflate_blocks_new OF((
+extern inflate_blocks_statef * cramfs_inflate_blocks_new OF((
z_streamp z,
check_func c, /* check function */
uInt w)); /* window size */
-extern int inflate_blocks OF((
+extern int cramfs_inflate_blocks OF((
inflate_blocks_statef *,
z_streamp ,
int)); /* initial return code */
-extern void inflate_blocks_reset OF((
+extern void cramfs_inflate_blocks_reset OF((
inflate_blocks_statef *,
z_streamp ,
uLongf *)); /* check value on output */
-extern int inflate_blocks_free OF((
+extern int cramfs_inflate_blocks_free OF((
inflate_blocks_statef *,
z_streamp));
-extern void inflate_set_dictionary OF((
+extern void cramfs_inflate_set_dictionary OF((
inflate_blocks_statef *s,
const Bytef *d, /* dictionary */
uInt n)); /* dictionary length */
};
-inflate_codes_statef *inflate_codes_new(bl, bd, tl, td, z)
+inflate_codes_statef *cramfs_inflate_codes_new(bl, bd, tl, td, z)
uInt bl, bd;
inflate_huft *tl;
inflate_huft *td; /* need separate declaration for Borland C++ */
}
-int inflate_codes(s, z, r)
+int cramfs_inflate_codes(s, z, r)
inflate_blocks_statef *s;
z_streamp z;
int r;
if (m >= 258 && n >= 10)
{
UPDATE
- r = inflate_fast(c->lbits, c->dbits, c->ltree, c->dtree, s, z);
+ r = cramfs_inflate_fast(c->lbits, c->dbits, c->ltree, c->dtree, s, z);
LOAD
if (r != Z_OK)
{
case LEN: /* i: get length/literal/eob next */
j = c->sub.code.need;
NEEDBITS(j)
- t = c->sub.code.tree + ((uInt)b & inflate_mask[j]);
+ t = c->sub.code.tree + ((uInt)b & cramfs_inflate_mask[j]);
DUMPBITS(t->bits)
e = (uInt)(t->exop);
if (e == 0) /* literal */
case LENEXT: /* i: getting length extra (have base) */
j = c->sub.copy.get;
NEEDBITS(j)
- c->len += (uInt)b & inflate_mask[j];
+ c->len += (uInt)b & cramfs_inflate_mask[j];
DUMPBITS(j)
c->sub.code.need = c->dbits;
c->sub.code.tree = c->dtree;
case DIST: /* i: get distance next */
j = c->sub.code.need;
NEEDBITS(j)
- t = c->sub.code.tree + ((uInt)b & inflate_mask[j]);
+ t = c->sub.code.tree + ((uInt)b & cramfs_inflate_mask[j]);
DUMPBITS(t->bits)
e = (uInt)(t->exop);
if (e & 16) /* distance */
case DISTEXT: /* i: getting distance extra */
j = c->sub.copy.get;
NEEDBITS(j)
- c->sub.copy.dist += (uInt)b & inflate_mask[j];
+ c->sub.copy.dist += (uInt)b & cramfs_inflate_mask[j];
DUMPBITS(j)
c->mode = COPY;
case COPY: /* o: copying bytes in window, waiting for space */
}
-void inflate_codes_free(c, z)
+void cramfs_inflate_codes_free(c, z)
inflate_codes_statef *c;
z_streamp z;
{
struct inflate_codes_state;
typedef struct inflate_codes_state FAR inflate_codes_statef;
-extern inflate_codes_statef *inflate_codes_new OF((
+extern inflate_codes_statef *cramfs_inflate_codes_new OF((
uInt, uInt,
inflate_huft *, inflate_huft *,
z_streamp ));
-extern int inflate_codes OF((
+extern int cramfs_inflate_codes OF((
inflate_blocks_statef *,
z_streamp ,
int));
-extern void inflate_codes_free OF((
+extern void cramfs_inflate_codes_free OF((
inflate_codes_statef *,
z_streamp ));
at least ten. The ten bytes are six bytes for the longest length/
distance pair plus four bytes for overloading the bit buffer. */
-int inflate_fast(bl, bd, tl, td, s, z)
+int cramfs_inflate_fast(bl, bd, tl, td, s, z)
uInt bl, bd;
inflate_huft *tl;
inflate_huft *td; /* need separate declaration for Borland C++ */
LOAD
/* initialize masks */
- ml = inflate_mask[bl];
- md = inflate_mask[bd];
+ ml = cramfs_inflate_mask[bl];
+ md = cramfs_inflate_mask[bd];
/* do until not enough input or output space for fast loop */
do { /* assume called with m >= 258 && n >= 10 */
{
/* get extra bits for length */
e &= 15;
- c = t->base + ((uInt)b & inflate_mask[e]);
+ c = t->base + ((uInt)b & cramfs_inflate_mask[e]);
DUMPBITS(e)
/* decode distance base of block to copy */
/* get extra bits to add to distance base */
e &= 15;
GRABBITS(e) /* get extra bits (up to 13) */
- d = t->base + ((uInt)b & inflate_mask[e]);
+ d = t->base + ((uInt)b & cramfs_inflate_mask[e]);
DUMPBITS(e)
/* do the copy */
else if ((e & 64) == 0)
{
t += t->base;
- e = (t += ((uInt)b & inflate_mask[e]))->exop;
+ e = (t += ((uInt)b & cramfs_inflate_mask[e]))->exop;
}
else
{
if ((e & 64) == 0)
{
t += t->base;
- if ((e = (t += ((uInt)b & inflate_mask[e]))->exop) == 0)
+ if ((e = (t += ((uInt)b & cramfs_inflate_mask[e]))->exop) == 0)
{
DUMPBITS(t->bits)
*q++ = (Byte)t->base;
subject to change. Applications should only use zlib.h.
*/
-extern int inflate_fast OF((
+extern int cramfs_inflate_fast OF((
uInt,
uInt,
inflate_huft *,
};
-int ZEXPORT inflateReset(z)
+int ZEXPORT cramfs_inflateReset(z)
z_streamp z;
{
if (z == Z_NULL || z->state == Z_NULL)
z->total_in = z->total_out = 0;
z->msg = Z_NULL;
z->state->mode = z->state->nowrap ? BLOCKS : METHOD;
- inflate_blocks_reset(z->state->blocks, z, Z_NULL);
+ cramfs_inflate_blocks_reset(z->state->blocks, z, Z_NULL);
return Z_OK;
}
-int ZEXPORT inflateEnd(z)
+int ZEXPORT cramfs_inflateEnd(z)
z_streamp z;
{
if (z == Z_NULL || z->state == Z_NULL)
return Z_STREAM_ERROR;
if (z->state->blocks != Z_NULL)
- inflate_blocks_free(z->state->blocks, z);
+ cramfs_inflate_blocks_free(z->state->blocks, z);
z->state = Z_NULL;
return Z_OK;
}
-int ZEXPORT inflateInit2_(z, w, version, stream_size)
+int ZEXPORT cramfs_inflateInit2_(z, w, version, stream_size)
z_streamp z;
int w;
const char *version;
/* set window size */
if (w < 8 || w > 15)
{
- inflateEnd(z);
+ cramfs_inflateEnd(z);
return Z_STREAM_ERROR;
}
z->state->wbits = (uInt)w;
/* create inflate_blocks state */
if ((z->state->blocks =
- inflate_blocks_new(z, z->state->nowrap ? Z_NULL : adler32, (uInt)1 << w))
+ cramfs_inflate_blocks_new(z, z->state->nowrap ? Z_NULL : cramfs_adler32, (uInt)1 << w))
== Z_NULL)
{
- inflateEnd(z);
+ cramfs_inflateEnd(z);
return Z_MEM_ERROR;
}
/* reset state */
- inflateReset(z);
+ cramfs_inflateReset(z);
return Z_OK;
}
-int ZEXPORT inflateInit_(z, version, stream_size)
+int ZEXPORT cramfs_inflateInit_(z, version, stream_size)
z_streamp z;
const char *version;
int stream_size;
{
- return inflateInit2_(z, DEF_WBITS, version, stream_size);
+ return cramfs_inflateInit2_(z, DEF_WBITS, version, stream_size);
}
#define NEEDBYTE {if(z->avail_in==0)return r;r=f;}
#define NEXTBYTE (z->avail_in--,z->total_in++,*z->next_in++)
-int ZEXPORT inflate(z, f)
+int ZEXPORT cramfs_inflate(z, f)
z_streamp z;
int f;
{
z->state->sub.marker = 0; /* can try inflateSync */
return Z_STREAM_ERROR;
case BLOCKS:
- r = inflate_blocks(z->state->blocks, z, r);
+ r = cramfs_inflate_blocks(z->state->blocks, z, r);
if (r == Z_DATA_ERROR)
{
z->state->mode = BAD;
if (r != Z_STREAM_END)
return r;
r = f;
- inflate_blocks_reset(z->state->blocks, z, &z->state->sub.check.was);
+ cramfs_inflate_blocks_reset(z->state->blocks, z, &z->state->sub.check.was);
if (z->state->nowrap)
{
z->state->mode = DONE;
}
-int ZEXPORT inflateSync(z)
+int ZEXPORT cramfs_inflateSync(z)
z_streamp z;
{
uInt n; /* number of bytes to look at */
if (m != 4)
return Z_DATA_ERROR;
r = z->total_in; w = z->total_out;
- inflateReset(z);
+ cramfs_inflateReset(z);
z->total_in = r; z->total_out = w;
z->state->mode = BLOCKS;
return Z_OK;
* decompressing, PPP checks that at the end of input packet, inflate is
* waiting for these length bytes.
*/
-int ZEXPORT inflateSyncPoint(z)
+int ZEXPORT cramfs_inflateSyncPoint(z)
z_streamp z;
{
if (z == Z_NULL || z->state == Z_NULL || z->state->blocks == Z_NULL)
#include "zutil.h"
#include "inftrees.h"
-const char inflate_copyright[] =
+static const char inflate_copyright[] =
" inflate 1.1.3 Copyright 1995-1998 Mark Adler ";
/*
If you use the zlib library in a product, an acknowledgment is welcome
}
-int inflate_trees_bits(c, bb, tb, hp, z)
+int cramfs_inflate_trees_bits(c, bb, tb, hp, z)
uIntf *c; /* 19 code lengths */
uIntf *bb; /* bits tree desired/actual depth */
inflate_huft * FAR *tb; /* bits tree result */
return r;
}
-int inflate_trees_dynamic(nl, nd, c, bl, bd, tl, td, hp, z)
+int cramfs_inflate_trees_dynamic(nl, nd, c, bl, bd, tl, td, hp, z)
uInt nl; /* number of literal/length codes */
uInt nd; /* number of distance codes */
uIntf *c; /* that many (total) code lengths */
#include "inffixed.h"
-int inflate_trees_fixed(bl, bd, tl, td, z)
+int cramfs_inflate_trees_fixed(bl, bd, tl, td, z)
uIntf *bl; /* literal desired/actual bit depth */
uIntf *bd; /* distance desired/actual bit depth */
inflate_huft * FAR *tl; /* literal/length tree result */
value below is more than safe. */
#define MANY 1440
-extern int inflate_trees_bits OF((
+extern int cramfs_inflate_trees_bits OF((
uIntf *, /* 19 code lengths */
uIntf *, /* bits tree desired/actual depth */
inflate_huft * FAR *, /* bits tree result */
inflate_huft *, /* space for trees */
z_streamp)); /* for messages */
-extern int inflate_trees_dynamic OF((
+extern int cramfs_inflate_trees_dynamic OF((
uInt, /* number of literal/length codes */
uInt, /* number of distance codes */
uIntf *, /* that many (total) code lengths */
inflate_huft *, /* space for trees */
z_streamp)); /* for messages */
-extern int inflate_trees_fixed OF((
+extern int cramfs_inflate_trees_fixed OF((
uIntf *, /* literal desired/actual bit depth */
uIntf *, /* distance desired/actual bit depth */
inflate_huft * FAR *, /* literal/length tree result */
struct inflate_codes_state {int dummy;}; /* for buggy compilers */
/* And'ing with mask[n] masks the lower n bits */
-uInt inflate_mask[17] = {
+uInt cramfs_inflate_mask[17] = {
0x0000,
0x0001, 0x0003, 0x0007, 0x000f, 0x001f, 0x003f, 0x007f, 0x00ff,
0x01ff, 0x03ff, 0x07ff, 0x0fff, 0x1fff, 0x3fff, 0x7fff, 0xffff
/* copy as much as possible from the sliding window to the output area */
-int inflate_flush(s, z, r)
+int cramfs_inflate_flush(s, z, r)
inflate_blocks_statef *s;
z_streamp z;
int r;
#define UPDIN {z->avail_in=n;z->total_in+=p-z->next_in;z->next_in=p;}
#define UPDOUT {s->write=q;}
#define UPDATE {UPDBITS UPDIN UPDOUT}
-#define LEAVE {UPDATE return inflate_flush(s,z,r);}
+#define LEAVE {UPDATE return cramfs_inflate_flush(s,z,r);}
/* get bytes and bits */
#define LOADIN {p=z->next_in;n=z->avail_in;b=s->bitb;k=s->bitk;}
#define NEEDBYTE {if(n)r=Z_OK;else LEAVE}
#define WAVAIL (uInt)(q<s->read?s->read-q-1:s->end-q)
#define LOADOUT {q=s->write;m=(uInt)WAVAIL;}
#define WRAP {if(q==s->end&&s->read!=s->window){q=s->window;m=(uInt)WAVAIL;}}
-#define FLUSH {UPDOUT r=inflate_flush(s,z,r); LOADOUT}
+#define FLUSH {UPDOUT r=cramfs_inflate_flush(s,z,r); LOADOUT}
#define NEEDOUT {if(m==0){WRAP if(m==0){FLUSH WRAP if(m==0) LEAVE}}r=Z_OK;}
#define OUTBYTE(a) {*q++=(Byte)(a);m--;}
/* load local pointers */
#define LOAD {LOADIN LOADOUT}
/* masks for lower bits (size given to avoid silly warnings with Visual C++) */
-extern uInt inflate_mask[17];
+extern uInt cramfs_inflate_mask[17];
/* copy as much as possible from the sliding window to the output area */
-extern int inflate_flush OF((
+extern int cramfs_inflate_flush OF((
inflate_blocks_statef *,
z_streamp ,
int));
stream.avail_out = (uInt)*destLen;
if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR;
- err = inflateInit(&stream);
+ err = cramfs_inflateInit(&stream);
if (err != Z_OK) return err;
- err = inflate(&stream, Z_FINISH);
+ err = cramfs_inflate(&stream, Z_FINISH);
if (err != Z_STREAM_END) {
- inflateEnd(&stream);
+ cramfs_inflateEnd(&stream);
return err == Z_OK ? Z_BUF_ERROR : err;
}
*destLen = stream.total_out;
- err = inflateEnd(&stream);
+ err = cramfs_inflateEnd(&stream);
return err;
}
/*
-ZEXTERN int ZEXPORT inflateInit OF((z_streamp strm));
+ZEXTERN int ZEXPORT cramfs_inflateInit OF((z_streamp strm));
Initializes the internal stream state for decompression. The fields
next_in, avail_in, zalloc, zfree and opaque must be initialized before by
*/
-ZEXTERN int ZEXPORT inflate OF((z_streamp strm, int flush));
+ZEXTERN int ZEXPORT cramfs_inflate OF((z_streamp strm, int flush));
/*
inflate decompresses as much data as possible, and stops when the input
buffer becomes empty or the output buffer becomes full. It may some
*/
-ZEXTERN int ZEXPORT inflateEnd OF((z_streamp strm));
+ZEXTERN int ZEXPORT cramfs_inflateEnd OF((z_streamp strm));
/*
All dynamically allocated data structures for this stream are freed.
This function discards any unprocessed input and does not flush any
inflate().
*/
-ZEXTERN int ZEXPORT inflateSync OF((z_streamp strm));
+ZEXTERN int ZEXPORT cramfs_inflateSync OF((z_streamp strm));
/*
Skips invalid compressed data until a full flush point (see above the
description of deflate with Z_FULL_FLUSH) can be found, or until all
until success or end of the input data.
*/
-ZEXTERN int ZEXPORT inflateReset OF((z_streamp strm));
+ZEXTERN int ZEXPORT cramfs_inflateReset OF((z_streamp strm));
/*
This function is equivalent to inflateEnd followed by inflateInit,
but does not free and reallocate all the internal decompression state.
compression library.
*/
-ZEXTERN uLong ZEXPORT adler32 OF((uLong adler, const Bytef *buf, uInt len));
+ZEXTERN uLong ZEXPORT cramfs_adler32 OF((uLong adler, const Bytef *buf, uInt len));
/*
Update a running Adler-32 checksum with the bytes buf[0..len-1] and
*/
ZEXTERN int ZEXPORT deflateInit_ OF((z_streamp strm, int level,
const char *version, int stream_size));
-ZEXTERN int ZEXPORT inflateInit_ OF((z_streamp strm,
+ZEXTERN int ZEXPORT cramfs_inflateInit_ OF((z_streamp strm,
const char *version, int stream_size));
ZEXTERN int ZEXPORT deflateInit2_ OF((z_streamp strm, int level, int method,
int windowBits, int memLevel,
int strategy, const char *version,
int stream_size));
-ZEXTERN int ZEXPORT inflateInit2_ OF((z_streamp strm, int windowBits,
+ZEXTERN int ZEXPORT cramfs_inflateInit2_ OF((z_streamp strm, int windowBits,
const char *version, int stream_size));
#define deflateInit(strm, level) \
deflateInit_((strm), (level), ZLIB_VERSION, sizeof(z_stream))
-#define inflateInit(strm) \
- inflateInit_((strm), ZLIB_VERSION, sizeof(z_stream))
+#define cramfs_inflateInit(strm) \
+ cramfs_inflateInit_((strm), ZLIB_VERSION, sizeof(z_stream))
#define deflateInit2(strm, level, method, windowBits, memLevel, strategy) \
deflateInit2_((strm),(level),(method),(windowBits),(memLevel),\
(strategy), ZLIB_VERSION, sizeof(z_stream))
#define inflateInit2(strm, windowBits) \
- inflateInit2_((strm), (windowBits), ZLIB_VERSION, sizeof(z_stream))
+ cramfs_inflateInit2_((strm), (windowBits), ZLIB_VERSION, sizeof(z_stream))
#if !defined(_Z_UTIL_H) && !defined(NO_DUMMY_DECL)
#endif
ZEXTERN const char * ZEXPORT zError OF((int err));
-ZEXTERN int ZEXPORT inflateSyncPoint OF((z_streamp z));
+ZEXTERN int ZEXPORT cramfs_inflateSyncPoint OF((z_streamp z));
ZEXTERN const uLongf * ZEXPORT get_crc_table OF((void));
#ifdef __cplusplus
stream.next_out = dst;
stream.avail_out = dstlen;
- err = inflateReset(&stream);
+ err = cramfs_inflateReset(&stream);
if (err != Z_OK) {
- printk("inflateReset error %d\n", err);
- inflateEnd(&stream);
- inflateInit(&stream);
+ printk("cramfs_inflateReset error %d\n", err);
+ cramfs_inflateEnd(&stream);
+ cramfs_inflateInit(&stream);
}
- err = inflate(&stream, Z_FINISH);
+ err = cramfs_inflate(&stream, Z_FINISH);
if (err != Z_STREAM_END)
goto err;
return stream.total_out;
if (!initialized++) {
stream.next_in = NULL;
stream.avail_in = 0;
- inflateInit(&stream);
+ cramfs_inflateInit(&stream);
}
return 0;
}
int cramfs_uncompress_exit(void)
{
if (!--initialized)
- inflateEnd(&stream);
+ cramfs_inflateEnd(&stream);
return 0;
}
}
/*
- * dput
+ * This is dput
*
* This is complicated by the fact that we do not want to put
* dentries that are no longer on any hash chain on the unused
* @parent: parent of entry to allocate
* @name: qstr of the name
*
- * Allocates a dentry. It returns NULL if there is insufficient memory
+ * Allocates a dentry. It returns %NULL if there is insufficient memory
* available. On a success the dentry is returned. The name passed in is
* copied and the copy passed in may be reused after this call.
*/
/**
* d_instantiate - fill in inode information for a dentry
* @entry: dentry to complete
- * @inode: inode to attacheto this dentry
+ * @inode: inode to attach to this dentry
*
* Fill in inode information in the entry.
*
*
* NOTE! This assumes that the inode count has been incremented
* (or otherwise set) by the caller to indicate that it is now
- * in use by the dcache..
+ * in use by the dcache.
*/
void d_instantiate(struct dentry *entry, struct inode * inode)
* d_alloc_root - allocate root dentry
* @root_inode: inode to allocate the root for
*
- * Allocate a root ('/') dentry for the inode given. The inode is
- * instantiated and returned. NULL is returned if there is insufficient
- * memory or the inode passed is NULL.
+ * Allocate a root ("/") dentry for the inode given. The inode is
+ * instantiated and returned. %NULL is returned if there is insufficient
+ * memory or the inode passed is %NULL.
*/
struct dentry * d_alloc_root(struct inode * root_inode)
* Searches the children of the parent dentry for the name in question. If
* the dentry is found its reference count is incremented and the dentry
* is returned. The caller must use d_put to free the entry when it has
- * finished using it. NULL is returned on failure.
+ * finished using it. %NULL is returned on failure.
*/
struct dentry * d_lookup(struct dentry * parent, struct qstr * name)
* d_rehash - add an entry back to the hash
* @entry: dentry to add to the hash
*
- * Adds a dentry to the hash according to its name
+ * Adds a dentry to the hash according to its name.
*/
void d_rehash(struct dentry * entry)
* @buffer: buffer to return value in
* @buflen: buffer length
*
- * Convert a dentry into an ascii path name. If the entry has been deleted
- * the string ' (deleted)' is appended. Note that this is ambiguous. Returns
+ * Convert a dentry into an ASCII path name. If the entry has been deleted
+ * the string " (deleted)" is appended. Note that this is ambiguous. Returns
* the buffer.
*
- * "buflen" should be PAGE_SIZE or more.
+ * "buflen" should be %PAGE_SIZE or more.
*/
char * __d_path(struct dentry *dentry, struct vfsmount *vfsmnt,
struct dentry *root, struct vfsmount *rootmnt,
if (IS_IMMUTABLE(inode))
return /* -EPERM */;
cluster = SECTOR_SIZE*sbi->cluster_size;
- MSDOS_I(inode)->mmu_private = inode->i_size;
+ /*
+ * This protects against truncating a file bigger than it was then
+ * trying to write into the hole.
+ */
+ if (MSDOS_I(inode)->mmu_private > inode->i_size)
+ MSDOS_I(inode)->mmu_private = inode->i_size;
+
fat_free(inode,(inode->i_size+(cluster-1))>>sbi->cluster_bits);
MSDOS_I(inode)->i_attrs |= ATTR_ARCH;
inode->i_ctime = inode->i_mtime = CURRENT_TIME;
* __mark_inode_dirty - internal function
* @inode: inode to mark
*
- * Mark an inode as dirty. Callers should use mark_inode_dirty
+ * Mark an inode as dirty. Callers should use mark_inode_dirty.
*/
void __mark_inode_dirty(struct inode *inode)
* no pre-existing information.
*
* On a successful return the inode pointer is returned. On a failure
- * a NULL pointer is returned. The returned inode is not on any superblock
+ * a %NULL pointer is returned. The returned inode is not on any superblock
* lists.
*/
* @inode: unhashed inode
*
* Add an inode to the inode hash for this superblock. If the inode
- * has no superblock it is added to a seperate anonymous chain
+ * has no superblock it is added to a separate anonymous chain.
*/
void insert_inode_hash(struct inode *inode)
* remove_inode_hash - remove an inode from the hash
* @inode: inode to unhash
*
- * Remove an inode from the superblock or anonymous hash
+ * Remove an inode from the superblock or anonymous hash.
*/
void remove_inode_hash(struct inode *inode)
*
* Returns the block number on the device holding the inode that
* is the disk block number for the block of the file requested.
- * That is asked for block 4 of inode 1 the function will return the
+ * That is, asked for block 4 of inode 1 the function will return the
* disk block relative to the disk start that holds that block of the
- * file
+ * file.
*/
int bmap(struct inode * inode, int block)
* update_atime - update the access time
* @inode: inode accessed
*
- * Update the accessed time on an inode and mark it for writeback.
+ * Update the accessed time on an inode and mark it for writeback.
* This function automatically handles read only file systems and media,
- * as well as the noatime flag and inode specific noatime markers
+ * as well as the "noatime" flag and inode specific "noatime" markers.
*/
void update_atime (struct inode *inode)
bool ' Alpha OSF partition support' CONFIG_OSF_PARTITION
bool ' Amiga partition table support' CONFIG_AMIGA_PARTITION
bool ' Atari partition table support' CONFIG_ATARI_PARTITION
+ if [ "$CONFIG_ARCH_S390" = "y" ]; then
+ bool ' IBM disk label and partition support' CONFIG_IBM_PARTITION
+ fi
bool ' Macintosh partition map support' CONFIG_MAC_PARTITION
bool ' PC BIOS (MSDOS partition tables) support' CONFIG_MSDOS_PARTITION
if [ "$CONFIG_MSDOS_PARTITION" = "y" ]; then
O_OBJS += atari.o
endif
+ifeq ($(CONFIG_IBM_PARTITION),y)
+O_OBJS += ibm.o
+endif
+
ifeq ($(CONFIG_MAC_PARTITION),y)
O_OBJS += mac.o
endif
O_OBJS += ultrix.o
endif
+
include $(TOPDIR)/Rules.make
#include "osf.h"
#include "sgi.h"
#include "sun.h"
+#include "ibm.h"
extern void device_init(void);
extern void md_setup_drive(void);
#endif
#ifdef CONFIG_ULTRIX_PARTITION
ultrix_partition,
+#endif
+#ifdef CONFIG_IBM_PARTITION
+ ibm_partition,
#endif
NULL
};
+#if defined CONFIG_BLK_DEV_LVM || defined CONFIG_BLK_DEV_LVM_MODULE
+#include <linux/lvm.h>
+void (*lvm_hd_name_ptr) (char *, int) = NULL;
+#endif
+
/*
* disk_name() is used by genhd.c and blkpg.c.
* It formats the devicename of the indicated disk into
* This requires special handling here.
*/
switch (hd->major) {
+#if defined CONFIG_BLK_DEV_LVM || defined CONFIG_BLK_DEV_LVM_MODULE
+ case LVM_BLK_MAJOR:
+ *buf = 0;
+ if ( lvm_hd_name_ptr != NULL)
+ (lvm_hd_name_ptr) ( buf, minor);
+ return buf;
+#endif
case IDE9_MAJOR:
unit += 2;
case IDE8_MAJOR:
{
if (!gdev)
return;
- grok_partitions(gdev, MINOR(dev)>>gdev->minor_shift, minors, size);
+ grok_partitions(gdev, MINOR(dev)>>gdev->minor_shift, minors, size);
}
void grok_partitions(struct gendisk *dev, int drive, unsigned minors, long size)
#ifdef CONFIG_BLK_DEV_MD
autodetect_raid();
#endif
-#ifdef CONFIG_MD_BOOT
- md_setup_drive();
-#endif
-
return 0;
}
--- /dev/null
+/*
+ * File...........: linux/fs/partitions/ibm.c
+ * Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
+ * Bugreports.to..: <Linux390@de.ibm.com>
+ * (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 2000
+ */
+
+#include <linux/fs.h>
+#include <linux/genhd.h>
+#include <linux/kernel.h>
+#include <linux/major.h>
+#include <linux/string.h>
+#include <linux/blk.h>
+
+#include <asm/ebcdic.h>
+#include "../../drivers/s390/block/dasd_types.h"
+
+typedef enum {
+ ibm_partition_none = 0,
+ ibm_partition_lnx1 = 1,
+ ibm_partition_vol1 = 3,
+ ibm_partition_cms1 = 4
+} ibm_partition_t;
+
+static ibm_partition_t
+get_partition_type ( char * type )
+{
+ static char lnx[5]="LNX1";
+ static char vol[5]="VOL1";
+ static char cms[5]="CMS1";
+	/* One-shot: convert the reference strings to EBCDIC on first call. */
+	if ( ! strncmp ( lnx, "LNX1",4 ) ) {
+ ASCEBC(lnx,4);
+ ASCEBC(vol,4);
+ ASCEBC(cms,4);
+ }
+ if ( ! strncmp (type,lnx,4) ||
+ ! strncmp (type,"LNX1",4) )
+ return ibm_partition_lnx1;
+ if ( ! strncmp (type,vol,4) )
+ return ibm_partition_vol1;
+ if ( ! strncmp (type,cms,4) )
+ return ibm_partition_cms1;
+ return ibm_partition_none;
+}
+
+void
+ibm_partition (struct gendisk *hd, kdev_t dev)
+{
+ struct buffer_head *bh;
+ ibm_partition_t partition_type;
+ int di = MINOR(dev) >> PARTN_BITS;
+ char type[5] = {0,};
+ char name[7] = {0,};
+	if ( (bh = bread( dev,
+	                  dasd_info[di]->sizes.label_block,
+	                  get_ptable_blocksize(dev) )) != NULL ) {
+ strncpy ( type,bh -> b_data, 4);
+ strncpy ( name,bh -> b_data + 4, 6);
+ } else {
+ return;
+ }
+ if ( (*(char *)bh -> b_data) & 0x80 ) {
+ EBCASC(name,6);
+ }
+ switch ( partition_type = get_partition_type(type) ) {
+ case ibm_partition_lnx1:
+ printk ( "(LNX1)/%6s:",name);
+ add_gd_partition( hd, MINOR(dev) + 1,
+ (dasd_info [di]->sizes.label_block + 1) <<
+ dasd_info [di]->sizes.s2b_shift,
+ (dasd_info [di]->sizes.blocks -
+ dasd_info [di]->sizes.label_block - 1) <<
+ dasd_info [di]->sizes.s2b_shift );
+ break;
+ case ibm_partition_vol1:
+ printk ( "(VOL1)/%6s:",name);
+ break;
+ case ibm_partition_cms1:
+ printk ( "(CMS1)/%6s:",name);
+ if (* (((long *)bh->b_data) + 13) == 0) {
+ /* disk holds a CMS filesystem */
+ add_gd_partition( hd, MINOR(dev) + 1,
+ (dasd_info [di]->sizes.label_block + 1) <<
+ dasd_info [di]->sizes.s2b_shift,
+ (dasd_info [di]->sizes.blocks -
+ dasd_info [di]->sizes.label_block) <<
+ dasd_info [di]->sizes.s2b_shift );
+ printk ("(CMS)");
+ } else {
+ /* disk is reserved minidisk */
+ int offset = (*(((long *)bh->b_data) + 13));
+ int size = (*(((long *)bh->b_data) + 7)) - 1 -
+ (*(((long *)bh->b_data) + 13)) *
+ ((*(((long *)bh->b_data) + 3)) >> 9);
+ add_gd_partition( hd, MINOR(dev) + 1,
+ offset << dasd_info [di]->sizes.s2b_shift,
+ size << dasd_info [di]->sizes.s2b_shift );
+ printk ("(MDSK)");
+ }
+ break;
+ case ibm_partition_none:
+ printk ( "(nonl)/ :");
+/*
+ printk ( "%d %d %d ", MINOR(dev) + 1,
+ (dasd_info [di]->sizes.label_block + 1) <<
+ dasd_info [di]->sizes.s2b_shift,
+ (dasd_info [di]->sizes.blocks -
+ dasd_info [di]->sizes.label_block - 1) <<
+ dasd_info [di]->sizes.s2b_shift );
+*/
+ add_gd_partition( hd, MINOR(dev) + 1,
+ (dasd_info [di]->sizes.label_block + 1) <<
+ dasd_info [di]->sizes.s2b_shift,
+ (dasd_info [di]->sizes.blocks -
+ dasd_info [di]->sizes.label_block - 1) <<
+ dasd_info [di]->sizes.s2b_shift );
+ break;
+ }
+ printk ( "\n" );
+ bforget(bh);
+}
+
--- /dev/null
+void ibm_partition (struct gendisk *hd, kdev_t dev);
* @fs: the file system structure
*
* Adds the file system passed to the list of file systems the kernel
- * is aware of for by mount and other syscalls. Returns 0 on success,
+ * is aware of for mount and other syscalls. Returns 0 on success,
* or a negative errno code on an error.
*
- * The file_system_type that is passed is linked into the kernel
+ * The &struct file_system_type that is passed is linked into the kernel
* structures and must not be freed until the file system has been
* unregistered.
*/
* with the kernel. An error is returned if the file system is not found.
* Zero is returned on a success.
*
- * Once this function has returned the file_system_type structure may be
- * freed or reused.
+ * Once this function has returned the &struct file_system_type structure
+ * may be freed or reused.
*/
int unregister_filesystem(struct file_system_type * fs)
* @sb: superblock to wait on
*
* Waits for a superblock to become unlocked and then returns. It does
- * not take the lock. This is an internal function. See wait_on_super.
+ * not take the lock. This is an internal function. See wait_on_super().
*/
void __wait_on_super(struct super_block * sb)
/**
* get_super - get the superblock of a device
- * @dev: device to get the super block for
+ * @dev: device to get the superblock for
*
* Scans the superblock list and finds the superblock of the file system
- * mounted on the device given. NULL is returned if no match is found.
+ * mounted on the device given. %NULL is returned if no match is found.
*/
struct super_block * get_super(kdev_t dev)
/**
* get_empty_super - find empty superblocks
*
- * Find a super_block with no device assigned. A free superblock is
+ * Find a superblock with no device assigned. A free superblock is
 * found and returned. If necessary new superblocks are allocated.
- * NULL is returned if there are insufficient resources to complete
- * the request
+ * %NULL is returned if there are insufficient resources to complete
+ * the request.
*/
struct super_block *get_empty_super(void)
static struct dentry *check_pseudo_root(struct super_block *sb)
{
struct dentry *root, *init;
+ struct vfsmount *vfsmnt = NULL;
/*
* Check whether we're mounted as the root device.
if (sb->s_dev != ROOT_DEV)
goto out_noroot;
-
-printk("check_pseudo_root: mounted as root\n");
- root = lookup_dentry(UMSDOS_PSDROOT_NAME, dget(sb->s_root), 0);
+ /*
+ * lookup_dentry needs a (so far non-existent) root.
+ */
+ current->fs->root = dget(sb->s_root);
+ current->fs->rootmnt = mntget(vfsmnt);
+ current->fs->pwd = dget(sb->s_root);
+ current->fs->pwdmnt = mntget(vfsmnt);
+ printk(KERN_INFO "check_pseudo_root: mounted as root\n");
+ root = lookup_dentry(UMSDOS_PSDROOT_NAME, 0);
if (IS_ERR(root))
goto out_noroot;
if (!root->d_inode)
goto out_dput;
-printk("check_pseudo_root: found %s/%s\n",
-root->d_parent->d_name.name, root->d_name.name);
+
+ printk(KERN_INFO "check_pseudo_root: found %s/%s\n", root->d_parent->d_name.name, root->d_name.name);
/* look for /sbin/init */
- init = lookup_dentry("sbin/init", dget(root), 0);
+ init = lookup_dentry("sbin/init", 0);
if (!IS_ERR(init)) {
if (init->d_inode)
goto root_ok;
goto out_dput;
root_ok:
-printk("check_pseudo_root: found %s/%s, enabling pseudo-root\n",
-init->d_parent->d_name.name, init->d_name.name);
+ printk(KERN_INFO "check_pseudo_root: found %s/%s, enabling pseudo-root\n", init->d_parent->d_name.name, init->d_name.name);
dput(init);
return root;
#include <linux/config.h>
#include <asm/apicdef.h>
+#include <asm/system.h>
#define APIC_DEBUG 1
extern __inline void apic_write(unsigned long reg, unsigned long v)
{
- *((volatile unsigned long *)(APIC_BASE+reg))=v;
+ *((volatile unsigned long *)(APIC_BASE+reg)) = v;
+}
+
+extern __inline void apic_write_atomic(unsigned long reg, unsigned long v)
+{
+ xchg((volatile unsigned long *)(APIC_BASE+reg), v);
}
extern __inline unsigned long apic_read(unsigned long reg)
#ifdef CONFIG_X86_GOOD_APIC
# define FORCE_READ_AROUND_WRITE 0
-# define apic_readaround(x)
+# define apic_read_around(x)
+# define apic_write_around(x,y) apic_write((x),(y))
#else
# define FORCE_READ_AROUND_WRITE 1
-# define apic_readaround(x) apic_read(x)
+# define apic_read_around(x) apic_read(x)
+# define apic_write_around(x,y) apic_write_atomic((x),(y))
#endif
-#define apic_write_around(x,y) \
- do { apic_readaround(x); apic_write(x,y); } while (0)
-
extern inline void ack_APIC_irq(void)
{
- /* Clear the IPI */
-
- apic_readaround(APIC_EOI);
/*
- * on P6+ cores (CONFIG_X86_GOOD_APIC) ack_APIC_irq() actually
- * gets compiled as a single instruction ... yummie.
+ * ack_APIC_irq() actually gets compiled as a single instruction:
+ * - a single rmw on Pentium/82489DX
+ * - a single write on P6+ cores (CONFIG_X86_GOOD_APIC)
+ * ... yummie.
*/
- apic_write(APIC_EOI, 0); /* Docs say use 0 for future compatibility */
+
+ /* Docs say use 0 for future compatibility */
+ apic_write_around(APIC_EOI, 0);
}
extern int get_maxlvt(void);
+extern void connect_bsp_APIC (void);
+extern void disconnect_bsp_APIC (void);
extern void disable_local_APIC (void);
extern void cache_APIC_registers (void);
+extern void sync_Arb_IDs(void);
extern void setup_local_APIC (void);
extern void init_apic_mappings(void);
extern void smp_local_timer_interrupt(struct pt_regs * regs);
#define SET_APIC_LOGICAL_ID(x) (((x)<<24))
#define APIC_ALL_CPUS 0xFF
#define APIC_DFR 0xE0
-#define GET_APIC_DFR(x) (((x)>>28)&0x0F)
-#define SET_APIC_DFR(x) ((x)<<28)
#define APIC_SPIV 0xF0
#define APIC_ISR 0x100
#define APIC_TMR 0x180
#define APIC_DEST_SELF 0x40000
#define APIC_DEST_ALLINC 0x80000
#define APIC_DEST_ALLBUT 0xC0000
-#define APIC_DEST_RR_MASK 0x30000
-#define APIC_DEST_RR_INVALID 0x00000
-#define APIC_DEST_RR_INPROG 0x10000
-#define APIC_DEST_RR_VALID 0x20000
-#define APIC_DEST_LEVELTRIG 0x08000
-#define APIC_DEST_ASSERT 0x04000
-#define APIC_DEST_BUSY 0x01000
+#define APIC_ICR_RR_MASK 0x30000
+#define APIC_ICR_RR_INVALID 0x00000
+#define APIC_ICR_RR_INPROG 0x10000
+#define APIC_ICR_RR_VALID 0x20000
+#define APIC_INT_LEVELTRIG 0x08000
+#define APIC_INT_ASSERT 0x04000
+#define APIC_ICR_BUSY 0x01000
#define APIC_DEST_LOGICAL 0x00800
-#define APIC_DEST_DM_FIXED 0x00000
-#define APIC_DEST_DM_LOWEST 0x00100
-#define APIC_DEST_DM_SMI 0x00200
-#define APIC_DEST_DM_REMRD 0x00300
-#define APIC_DEST_DM_NMI 0x00400
-#define APIC_DEST_DM_INIT 0x00500
-#define APIC_DEST_DM_STARTUP 0x00600
-#define APIC_DEST_VECTOR_MASK 0x000FF
+#define APIC_DM_FIXED 0x00000
+#define APIC_DM_LOWEST 0x00100
+#define APIC_DM_SMI 0x00200
+#define APIC_DM_REMRD 0x00300
+#define APIC_DM_NMI 0x00400
+#define APIC_DM_INIT 0x00500
+#define APIC_DM_STARTUP 0x00600
+#define APIC_DM_EXTINT 0x00700
+#define APIC_VECTOR_MASK 0x000FF
#define APIC_ICR2 0x310
#define GET_APIC_DEST_FIELD(x) (((x)>>24)&0xFF)
#define SET_APIC_DEST_FIELD(x) ((x)<<24)
#include <asm/processor.h>
#include <asm/msr.h>
-#define CONFIG_BUGi386
-
static int __init no_halt(char *s)
{
boot_cpu_data.hlt_works_ok = 0;
}
/*
- * Check wether we are able to run this kernel safely on SMP.
+ * Check whether we are able to run this kernel safely on SMP.
*
 * - In order to run on an i386, we need to be compiled for i386
 * (due to lack of "invlpg" and working WP on an i386)
 * - In order to run on anything without a TSC, we need to be
 * compiled for an i486.
- * - In order to work on a Pentium/SMP machine, we need to be
- * compiled for a Pentium or lower, as a PPro config implies
- * a properly working local APIC without the need to do extra
- * reads from the APIC.
+ * - In order to support the local APIC on a buggy Pentium machine,
+ * we need to be compiled with CONFIG_X86_GOOD_APIC disabled,
+ * which happens implicitly if compiled for a Pentium or lower
+ * (unless an advanced selection of CPU features is used), since any
+ * other configuration implies a properly working local APIC without
+ * the need to do extra reads from the APIC.
*/
static void __init check_config(void)
#endif
/*
- * If we were told we had a good APIC for SMP, we'd better be a PPro
+ * If we were told we had a good local APIC, check for buggy Pentia,
+ * i.e. all B steppings and the C2 stepping of P54C when using their
+ * integrated APIC (see 11AP erratum in "Pentium Processor
+ * Specification Update").
*/
-#if defined(CONFIG_X86_GOOD_APIC) && defined(CONFIG_SMP)
- if (smp_found_config && boot_cpu_data.x86 <= 5)
- panic("Kernel compiled for PPro+, assumes local APIC without read-before-write bug");
+#if defined(CONFIG_X86_LOCAL_APIC) && defined(CONFIG_X86_GOOD_APIC)
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL
+ && boot_cpu_data.x86_capability & X86_FEATURE_APIC
+ && boot_cpu_data.x86 == 5
+ && boot_cpu_data.x86_model == 2
+ && (boot_cpu_data.x86_mask < 6 || boot_cpu_data.x86_mask == 11))
+ panic("Kernel compiled for PPro+, assumes a local APIC without the read-before-write bug!");
#endif
}
/**
* mca_set_dma_mode - set the DMA mode
* @dmanr: DMA channel
- * @mode: The mode to set
+ * @mode: mode to set
*
* The DMA controller supports several modes. The mode values you can
- * set are
+ *	set are:
*
- * MCA_DMA_MODE_READ when reading from the DMA device.
+ * %MCA_DMA_MODE_READ when reading from the DMA device.
*
- * MCA_DMA_MODE_WRITE to writing to the DMA device.
+ * %MCA_DMA_MODE_WRITE when writing to the DMA device.
*
- * MCA_DMA_MODE_IO to do DMA to or from an I/O port.
+ * %MCA_DMA_MODE_IO to do DMA to or from an I/O port.
*
- * MCA_DMA_MODE_16 to do 16bit transfers.
+ * %MCA_DMA_MODE_16 to do 16bit transfers.
*
*/
#define SMP_MAGIC_IDENT (('_'<<24)|('P'<<16)|('M'<<8)|'_')
+/*
+ * a maximum of 16 APICs with the current APIC ID architecture.
+ */
+#define MAX_APICS 16
+
struct intel_mp_floating
{
char mpf_signature[4]; /* "_MP_" */
enum mp_bustype {
MP_BUS_ISA,
MP_BUS_EISA,
- MP_BUS_PCI
+ MP_BUS_PCI,
+ MP_BUS_MCA
};
extern int mp_bus_id_to_type [MAX_MP_BUSSES];
extern int mp_bus_id_to_pci_bus [MAX_MP_BUSSES];
extern void find_smp_config (void);
extern void get_smp_config (void);
extern int nr_ioapics;
-extern int apic_version [NR_CPUS];
+extern int apic_version [MAX_APICS];
extern int mp_bus_id_to_type [MAX_MP_BUSSES];
extern int mp_irq_entries;
extern struct mpc_config_intsrc mp_irqs [MAX_IRQ_SOURCES];
}
/* end of additional stuff */
+#define __HAVE_ARCH_STRSTR
+extern inline char * strstr(const char * cs,const char * ct)
+{
+int d0, d1;
+register char * __res;
+__asm__ __volatile__(
+ "movl %6,%%edi\n\t"
+ "repne\n\t"
+ "scasb\n\t"
+ "notl %%ecx\n\t"
+ "decl %%ecx\n\t" /* NOTE! This also sets Z if searchstring='' */
+ "movl %%ecx,%%edx\n"
+ "1:\tmovl %6,%%edi\n\t"
+ "movl %%esi,%%eax\n\t"
+ "movl %%edx,%%ecx\n\t"
+ "repe\n\t"
+ "cmpsb\n\t"
+ "je 2f\n\t" /* also works for empty string, see above */
+ "xchgl %%eax,%%esi\n\t"
+ "incl %%esi\n\t"
+ "cmpb $0,-1(%%eax)\n\t"
+ "jne 1b\n\t"
+ "xorl %%eax,%%eax\n\t"
+ "2:"
+ :"=a" (__res), "=&c" (d0), "=&S" (d1)
+ :"0" (0), "1" (0xffffffff), "2" (cs), "g" (ct)
+ :"dx", "di");
+return __res;
+}
+
/*
* This looks horribly ugly, but the compiler can optimize it totally,
* as we by now know that both pattern and count is constant..
-/* $Id: namei.h,v 1.14 1999/06/10 05:23:12 davem Exp $
+/* $Id: namei.h,v 1.15 2000/04/08 02:15:14 davem Exp $
* linux/include/asm-sparc/namei.h
*
* Routines to handle famous /usr/gnemul/s*.
static inline char * __emul_prefix(void)
{
switch (current->personality) {
- case PER_BSD:
+ case PER_SUNOS:
return SPARC_BSD_EMUL;
case PER_SVR4:
return SPARC_SOL_EMUL;
-/* $Id: namei.h,v 1.15 1999/06/10 05:23:17 davem Exp $
+/* $Id: namei.h,v 1.16 2000/04/08 02:15:17 davem Exp $
* linux/include/asm-sparc64/namei.h
*
* Routines to handle famous /usr/gnemul/s*.
static inline char * __emul_prefix(void)
{
switch (current->personality) {
- case PER_BSD:
+ case PER_SUNOS:
return SPARC_BSD_EMUL;
case PER_SVR4:
return SPARC_SOL_EMUL;
-/* $Id: ttable.h,v 1.14 1999/10/13 11:48:58 jj Exp $ */
+/* $Id: ttable.h,v 1.15 2000/04/03 10:36:42 davem Exp $ */
#ifndef _SPARC64_TTABLE_H
#define _SPARC64_TTABLE_H
sethi %hi(109f), %g7; \
ba,pt %xcc, scetrap; \
109: or %g7, %lo(109b), %g7; \
- call routine; \
+ ba,pt %xcc, routine; \
sethi %hi(systbl), %l7; \
nop; nop; nop;
* @entry: dentry to add
* @inode: The inode to attach to this dentry
*
- * This adds the entry to the hash queues and initializes "d_inode".
- * The entry was actually filled in earlier during "d_alloc()"
+ * This adds the entry to the hash queues and initializes @inode.
+ * The entry was actually filled in earlier during d_alloc().
*/
static __inline__ void d_add(struct dentry * entry, struct inode * inode)
/**
* dget - get a reference to a dentry
- * @dentry: dentry to get a reference too
+ * @dentry: dentry to get a reference to
*
- * Given a dentry or NULL pointer increment the reference count
+ * Given a dentry or %NULL pointer increment the reference count
* if appropriate and return the dentry. A dentry will not be
* destroyed when it has references.
*/
* d_unhashed - is dentry hashed
* @dentry: entry to check
*
- * Returns true if the dentry passed is not currently hashed
+ * Returns true if the dentry passed is not currently hashed.
*/
static __inline__ int d_unhashed(struct dentry *dentry)
#include <linux/stat.h>
#include <linux/cache.h>
#include <linux/stddef.h>
-#include <linux/string.h>
#include <asm/atomic.h>
#include <asm/bitops.h>
#ifdef __KERNEL__
+#include <linux/string.h>
#include <asm/semaphore.h>
#include <asm/byteorder.h>
extern struct block_device *bdget(dev_t);
extern void bdput(struct block_device *);
extern int blkdev_open(struct inode *, struct file *);
+extern int blkdev_close(struct inode * inode, struct file * filp);
extern struct file_operations def_blk_fops;
extern struct file_operations def_fifo_fops;
extern int ioctl_by_bdev(struct block_device *, unsigned, unsigned long);
typedef int (*read_actor_t)(read_descriptor_t *, struct page *, unsigned long, unsigned long);
+/* needed for stackable file system support */
+extern loff_t default_llseek(struct file *file, loff_t offset, int origin);
+
extern struct dentry * lookup_dentry(const char *, unsigned int);
extern int walk_init(const char *, unsigned, struct nameidata *);
extern int walk_name(const char *, unsigned, struct nameidata *);
/* Generic buffer handling for block filesystems.. */
extern int block_flushpage(struct page *, unsigned long);
+extern int block_fsync(struct file *filp, struct dentry *dentry);
extern int block_symlink(struct inode *, const char *, int);
extern int block_write_full_page(struct page*, get_block_t*);
extern int block_read_full_page(struct page*, get_block_t*);
--- /dev/null
+/*
+ * I2O user space accessible structures/APIs
+ *
+ * (c) Copyright 1999, 2000 Red Hat Software
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ *************************************************************************
+ *
+ * This header file defines the I2O APIs that are available to both
+ * the kernel and user level applications. Kernel specific structures
+ * are defined in i2o_osm.h. OSMs should include _only_ i2o_osm.h, which
+ * automatically includes this file.
+ *
+ */
+
+#ifndef _I2O_DEV_H
+#define _I2O_DEV_H
+
+/* How many controllers are we allowing */
+#define MAX_I2O_CONTROLLERS 32
+
+#include <linux/ioctl.h>
+
+/*
+ * I2O Control IOCTLs and structures
+ */
+#define I2O_MAGIC_NUMBER 'i'
+#define I2OGETIOPS _IOR(I2O_MAGIC_NUMBER,0,u8[MAX_I2O_CONTROLLERS])
+#define I2OHRTGET _IOWR(I2O_MAGIC_NUMBER,1,struct i2o_cmd_hrtlct)
+#define I2OLCTGET _IOWR(I2O_MAGIC_NUMBER,2,struct i2o_cmd_hrtlct)
+#define I2OPARMSET _IOWR(I2O_MAGIC_NUMBER,3,struct i2o_cmd_psetget)
+#define I2OPARMGET _IOWR(I2O_MAGIC_NUMBER,4,struct i2o_cmd_psetget)
+#define I2OSWDL _IOWR(I2O_MAGIC_NUMBER,5,struct i2o_sw_xfer)
+#define I2OSWUL _IOWR(I2O_MAGIC_NUMBER,6,struct i2o_sw_xfer)
+#define I2OSWDEL _IOWR(I2O_MAGIC_NUMBER,7,struct i2o_sw_xfer)
+#define I2OVALIDATE _IOR(I2O_MAGIC_NUMBER,8,u32)
+#define I2OHTML _IOWR(I2O_MAGIC_NUMBER,9,struct i2o_html)
+#define I2OEVTREG _IOW(I2O_MAGIC_NUMBER,10,struct i2o_evt_id)
+#define I2OEVTGET _IOR(I2O_MAGIC_NUMBER,11,struct i2o_evt_info)
+
+struct i2o_cmd_hrtlct
+{
+ unsigned int iop; /* IOP unit number */
+ void *resbuf; /* Buffer for result */
+ unsigned int *reslen; /* Buffer length in bytes */
+};
+
+struct i2o_cmd_psetget
+{
+ unsigned int iop; /* IOP unit number */
+ unsigned int tid; /* Target device TID */
+ void *opbuf; /* Operation List buffer */
+ unsigned int oplen; /* Operation List buffer length in bytes */
+ void *resbuf; /* Result List buffer */
+ unsigned int *reslen; /* Result List buffer length in bytes */
+};
+
+struct i2o_sw_xfer
+{
+ unsigned int iop; /* IOP unit number */
+ unsigned char flags; /* Flags field */
+ unsigned char sw_type; /* Software type */
+ unsigned int sw_id; /* Software ID */
+ void *buf; /* Pointer to software buffer */
+ unsigned int *swlen; /* Length of software data */
+ unsigned int *maxfrag; /* Maximum fragment count */
+ unsigned int *curfrag; /* Current fragment count */
+};
+
+struct i2o_html
+{
+ unsigned int iop; /* IOP unit number */
+ unsigned int tid; /* Target device ID */
+ unsigned int page; /* HTML page */
+ void *resbuf; /* Buffer for reply HTML page */
+ unsigned int *reslen; /* Length in bytes of reply buffer */
+ void *qbuf; /* Pointer to HTTP query string */
+ unsigned int qlen; /* Length in bytes of query string buffer */
+};
+
+#define I2O_EVT_Q_LEN 32
+
+struct i2o_evt_id
+{
+ unsigned int iop;
+ unsigned int tid;
+ unsigned int evt_mask;
+};
+
+/* Event data size = frame size - message header + evt indicator */
+#define I2O_EVT_DATA_SIZE 88
+
+struct i2o_evt_info
+{
+ struct i2o_evt_id id;
+ unsigned char evt_data[I2O_EVT_DATA_SIZE];
+ unsigned int data_size;
+};
+
+struct i2o_evt_get
+{
+ struct i2o_evt_info info;
+ int pending;
+ int lost;
+};
+
+
+/**************************************************************************
+ * HRT related constants and structures
+ **************************************************************************/
+#define I2O_BUS_LOCAL 0
+#define I2O_BUS_ISA 1
+#define I2O_BUS_EISA 2
+#define I2O_BUS_MCA 3
+#define I2O_BUS_PCI 4
+#define I2O_BUS_PCMCIA 5
+#define I2O_BUS_NUBUS 6
+#define I2O_BUS_CARDBUS 7
+#define I2O_BUS_UNKNOWN 0x80
+
+#ifndef __KERNEL__
+
+typedef unsigned char u8;
+typedef unsigned short u16;
+typedef unsigned int u32;
+
+#endif /* !__KERNEL__ */
+
+typedef struct _i2o_pci_bus {
+ u8 PciFunctionNumber;
+ u8 PciDeviceNumber;
+ u8 PciBusNumber;
+ u8 reserved;
+ u16 PciVendorID;
+ u16 PciDeviceID;
+} i2o_pci_bus;
+
+typedef struct _i2o_local_bus {
+ u16 LbBaseIOPort;
+ u16 reserved;
+ u32 LbBaseMemoryAddress;
+} i2o_local_bus;
+
+typedef struct _i2o_isa_bus {
+ u16 IsaBaseIOPort;
+ u8 CSN;
+ u8 reserved;
+ u32 IsaBaseMemoryAddress;
+} i2o_isa_bus;
+
+typedef struct _i2o_eisa_bus_info {
+ u16 EisaBaseIOPort;
+ u8 reserved;
+ u8 EisaSlotNumber;
+ u32 EisaBaseMemoryAddress;
+} i2o_eisa_bus;
+
+typedef struct _i2o_mca_bus {
+ u16 McaBaseIOPort;
+ u8 reserved;
+ u8 McaSlotNumber;
+ u32 McaBaseMemoryAddress;
+} i2o_mca_bus;
+
+typedef struct _i2o_other_bus {
+ u16 BaseIOPort;
+ u16 reserved;
+ u32 BaseMemoryAddress;
+} i2o_other_bus;
+
+typedef struct _i2o_hrt_entry {
+ u32 adapter_id;
+ u32 parent_tid:12;
+ u32 state:4;
+ u32 bus_num:8;
+ u32 bus_type:8;
+ union {
+ i2o_pci_bus pci_bus;
+ i2o_local_bus local_bus;
+ i2o_isa_bus isa_bus;
+ i2o_eisa_bus eisa_bus;
+ i2o_mca_bus mca_bus;
+ i2o_other_bus other_bus;
+ } bus;
+} i2o_hrt_entry;
+
+typedef struct _i2o_hrt {
+ u16 num_entries;
+ u8 entry_len;
+ u8 hrt_version;
+ u32 change_ind;
+ i2o_hrt_entry hrt_entry[1];
+} i2o_hrt;
+
+typedef struct _i2o_lct_entry {
+ u32 entry_size:16;
+ u32 tid:12;
+ u32 reserved:4;
+ u32 change_ind;
+ u32 device_flags;
+ u32 class_id:12;
+ u32 version:4;
+ u32 vendor_id:16;
+ u32 sub_class;
+ u32 user_tid:12;
+ u32 parent_tid:12;
+ u32 bios_info:8;
+ u8 identity_tag[8];
+ u32 event_capabilities;
+} i2o_lct_entry;
+
+typedef struct _i2o_lct {
+ u32 table_size:16;
+ u32 boot_tid:12;
+ u32 lct_ver:4;
+ u32 iop_flags;
+ u32 change_ind;
+ i2o_lct_entry lct_entry[1];
+} i2o_lct;
+
+typedef struct _i2o_status_block {
+ u16 org_id;
+ u16 reserved;
+ u16 iop_id:12;
+ u16 reserved1:4;
+ u16 host_unit_id;
+ u16 segment_number:12;
+ u16 i2o_version:4;
+ u8 iop_state;
+ u8 msg_type;
+ u16 inbound_frame_size;
+ u8 init_code;
+ u8 reserved2;
+ u32 max_inbound_frames;
+ u32 cur_inbound_frames;
+ u32 max_outbound_frames;
+ char product_id[24];
+ u32 expected_lct_size;
+ u32 iop_capabilities;
+ u32 desired_mem_size;
+ u32 current_mem_size;
+ u32 current_mem_base;
+ u32 desired_io_size;
+ u32 current_io_size;
+ u32 current_io_base;
+ u32 reserved3:24;
+ u32 cmd_status:8;
+} i2o_status_block;
+
+/* Event indicator mask flags */
+#define I2O_EVT_IND_STATE_CHANGE 0x80000000
+#define I2O_EVT_IND_GENERAL_WARNING 0x40000000
+#define I2O_EVT_IND_CONFIGURATION_FLAG 0x20000000
+#define I2O_EVT_IND_LOCK_RELEASE 0x10000000
+#define I2O_EVT_IND_CAPABILITY_CHANGE 0x08000000
+#define I2O_EVT_IND_DEVICE_RESET 0x04000000
+#define I2O_EVT_IND_EVT_MASK_MODIFIED 0x02000000
+#define I2O_EVT_IND_FIELD_MODIFIED 0x01000000
+#define I2O_EVT_IND_VENDOR_EVT 0x00800000
+#define I2O_EVT_IND_DEVICE_STATE 0x00400000
+
+/* Executive event indicators */
+#define I2O_EVT_IND_EXEC_RESOURCE_LIMITS 0x00000001
+#define I2O_EVT_IND_EXEC_CONNECTION_FAIL 0x00000002
+#define I2O_EVT_IND_EXEC_ADAPTER_FAULT 0x00000004
+#define I2O_EVT_IND_EXEC_POWER_FAIL 0x00000008
+#define I2O_EVT_IND_EXEC_RESET_PENDING 0x00000010
+#define I2O_EVT_IND_EXEC_RESET_IMMINENT 0x00000020
+#define I2O_EVT_IND_EXEC_HW_FAIL 0x00000040
+#define I2O_EVT_IND_EXEC_XCT_CHANGE 0x00000080
+#define I2O_EVT_IND_EXEC_NEW_LCT_ENTRY 0x00000100
+#define I2O_EVT_IND_EXEC_MODIFIED_LCT 0x00000200
+#define I2O_EVT_IND_EXEC_DDM_AVAILABILITY 0x00000400
+
+/* Random Block Storage Event Indicators */
+#define I2O_EVT_IND_BSA_VOLUME_LOAD 0x00000001
+#define I2O_EVT_IND_BSA_VOLUME_UNLOAD 0x00000002
+#define I2O_EVT_IND_BSA_VOLUME_UNLOAD_REQ 0x00000004
+#define I2O_EVT_IND_BSA_CAPACITY_CHANGE 0x00000008
+#define I2O_EVT_IND_BSA_SCSI_SMART 0x00000010
+
+/* Event data for generic events */
+#define I2O_EVT_STATE_CHANGE_NORMAL 0x00
+#define I2O_EVT_STATE_CHANGE_SUSPENDED 0x01
+#define I2O_EVT_STATE_CHANGE_RESTART 0x02
+#define I2O_EVT_STATE_CHANGE_NA_RECOVER 0x03
+#define I2O_EVT_STATE_CHANGE_NA_NO_RECOVER 0x04
+#define I2O_EVT_STATE_CHANGE_QUIESCE_REQUEST 0x05
+#define I2O_EVT_STATE_CHANGE_FAILED 0x10
+#define I2O_EVT_STATE_CHANGE_FAULTED 0x11
+
+#define I2O_EVT_GEN_WARNING_NORMAL 0x00
+#define I2O_EVT_GEN_WARNING_ERROR_THRESHOLD 0x01
+#define I2O_EVT_GEN_WARNING_MEDIA_FAULT 0x02
+
+#define I2O_EVT_CAPABILITY_OTHER 0x01
+#define I2O_EVT_CAPABILITY_CHANGED 0x02
+
+#define I2O_EVT_SENSOR_STATE_CHANGED 0x01
+
+/*
+ * I2O classes / subclasses
+ */
+
+/* Class ID and Code Assignments
+ * (LCT.ClassID.Version field)
+ */
+#define I2O_CLASS_VERSION_10 0x00
+#define I2O_CLASS_VERSION_11 0x01
+
+/* Class code names
+ * (from v1.5 Table 6-1 Class Code Assignments.)
+ */
+
+#define I2O_CLASS_EXECUTIVE 0x000
+#define I2O_CLASS_DDM 0x001
+#define I2O_CLASS_RANDOM_BLOCK_STORAGE 0x010
+#define I2O_CLASS_SEQUENTIAL_STORAGE 0x011
+#define I2O_CLASS_LAN 0x020
+#define I2O_CLASS_WAN 0x030
+#define I2O_CLASS_FIBRE_CHANNEL_PORT 0x040
+#define I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL 0x041
+#define I2O_CLASS_SCSI_PERIPHERAL 0x051
+#define I2O_CLASS_ATE_PORT 0x060
+#define I2O_CLASS_ATE_PERIPHERAL 0x061
+#define I2O_CLASS_FLOPPY_CONTROLLER 0x070
+#define I2O_CLASS_FLOPPY_DEVICE 0x071
+#define I2O_CLASS_BUS_ADAPTER_PORT 0x080
+#define I2O_CLASS_PEER_TRANSPORT_AGENT 0x090
+#define I2O_CLASS_PEER_TRANSPORT 0x091
+
+/*
+ * Rest of 0x092 - 0x09f reserved for peer-to-peer classes
+ */
+
+#define I2O_CLASS_MATCH_ANYCLASS 0xffffffff
+
+/*
+ * Subclasses
+ */
+
+#define I2O_SUBCLASS_i960 0x001
+#define I2O_SUBCLASS_HDM 0x020
+#define I2O_SUBCLASS_ISM 0x021
+
+/* Operation functions */
+
+#define I2O_PARAMS_FIELD_GET 0x0001
+#define I2O_PARAMS_LIST_GET 0x0002
+#define I2O_PARAMS_MORE_GET 0x0003
+#define I2O_PARAMS_SIZE_GET 0x0004
+#define I2O_PARAMS_TABLE_GET 0x0005
+#define I2O_PARAMS_FIELD_SET 0x0006
+#define I2O_PARAMS_LIST_SET 0x0007
+#define I2O_PARAMS_ROW_ADD 0x0008
+#define I2O_PARAMS_ROW_DELETE 0x0009
+#define I2O_PARAMS_TABLE_CLEAR 0x000A
+
+/*
+ * I2O serial number conventions / formats
+ * (circa v1.5)
+ */
+
+#define I2O_SNFORMAT_UNKNOWN 0
+#define I2O_SNFORMAT_BINARY 1
+#define I2O_SNFORMAT_ASCII 2
+#define I2O_SNFORMAT_UNICODE 3
+#define I2O_SNFORMAT_LAN48_MAC 4
+#define I2O_SNFORMAT_WAN 5
+
+/*
+ * Plus new in v2.0 (Yellowstone pdf doc)
+ */
+
+#define I2O_SNFORMAT_LAN64_MAC 6
+#define I2O_SNFORMAT_DDM 7
+#define I2O_SNFORMAT_IEEE_REG64 8
+#define I2O_SNFORMAT_IEEE_REG128 9
+#define I2O_SNFORMAT_UNKNOWN2 0xff
+
+/*
+ * I2O Get Status State values
+ */
+
+#define ADAPTER_STATE_INITIALIZING 0x01
+#define ADAPTER_STATE_RESET 0x02
+#define ADAPTER_STATE_HOLD 0x04
+#define ADAPTER_STATE_READY 0x05
+#define ADAPTER_STATE_OPERATIONAL 0x08
+#define ADAPTER_STATE_FAILED 0x10
+#define ADAPTER_STATE_FAULTED 0x11
+
+#endif /* _I2O_DEV_H */
+/*
+ * I2O kernel space accessible structures/APIs
+ *
+ * (c) Copyright 1999, 2000 Red Hat Software
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ *************************************************************************
+ *
+ * This header file defines the I2O APIs/structures for use by
+ * the I2O kernel modules.
+ *
+ */
+
#ifndef _I2O_H
#define _I2O_H
+#ifdef __KERNEL__ /* This file to be included by kernel only */
-/*
- * Tunable parameters first
- */
+#include <linux/i2o-dev.h>
/* How many different OSM's are we allowing */
#define MAX_I2O_MODULES 64
-/* How many controllers are we allowing */
-#define MAX_I2O_CONTROLLERS 32
-
-#include <linux/ioctl.h>
-
-/*
- * I2O Control IOCTLs and structures
- */
-#define I2O_MAGIC_NUMBER 'i'
-#define I2OGETIOPS _IOR(I2O_MAGIC_NUMBER,0,u8[MAX_I2O_CONTROLLERS])
-#define I2OHRTGET _IOWR(I2O_MAGIC_NUMBER,1,struct i2o_cmd_hrtlct)
-#define I2OLCTGET _IOWR(I2O_MAGIC_NUMBER,2,struct i2o_cmd_hrtlct)
-#define I2OPARMSET _IOWR(I2O_MAGIC_NUMBER,3,struct i2o_cmd_psetget)
-#define I2OPARMGET _IOWR(I2O_MAGIC_NUMBER,4,struct i2o_cmd_psetget)
-#define I2OSWDL _IOWR(I2O_MAGIC_NUMBER,5,struct i2o_sw_xfer)
-#define I2OSWUL _IOWR(I2O_MAGIC_NUMBER,6,struct i2o_sw_xfer)
-#define I2OSWDEL _IOWR(I2O_MAGIC_NUMBER,7,struct i2o_sw_xfer)
-#define I2OVALIDATE _IOR(I2O_MAGIC_NUMBER,8,u32)
-#define I2OHTML _IOWR(I2O_MAGIC_NUMBER,9,struct i2o_html)
-#define I2OEVTREG _IOW(I2O_MAGIC_NUMBER,10,struct i2o_evt_id)
-#define I2OEVTGET _IOR(I2O_MAGIC_NUMBER,11,struct i2o_evt_info)
-
-struct i2o_cmd_hrtlct
-{
- unsigned int iop; /* IOP unit number */
- void *resbuf; /* Buffer for result */
- unsigned int *reslen; /* Buffer length in bytes */
-};
-
-struct i2o_cmd_psetget
-{
- unsigned int iop; /* IOP unit number */
- unsigned int tid; /* Target device TID */
- void *opbuf; /* Operation List buffer */
- unsigned int oplen; /* Operation List buffer length in bytes */
- void *resbuf; /* Result List buffer */
- unsigned int *reslen; /* Result List buffer length in bytes */
-};
-
-struct i2o_sw_xfer
-{
- unsigned int iop; /* IOP unit number */
- unsigned char flags; /* Flags field */
- unsigned char sw_type; /* Software type */
- unsigned int sw_id; /* Software ID */
- void *buf; /* Pointer to software buffer */
- unsigned int *swlen; /* Length of software data */
- unsigned int *maxfrag; /* Maximum fragment count */
- unsigned int *curfrag; /* Current fragment count */
-};
-
-struct i2o_html
-{
- unsigned int iop; /* IOP unit number */
- unsigned int tid; /* Target device ID */
- unsigned int page; /* HTML page */
- void *resbuf; /* Buffer for reply HTML page */
- unsigned int *reslen; /* Length in bytes of reply buffer */
- void *qbuf; /* Pointer to HTTP query string */
- unsigned int qlen; /* Length in bytes of query string buffer */
-};
-
-#define I2O_EVT_Q_LEN 32
-
-struct i2o_evt_id
-{
- unsigned int iop;
- unsigned int tid;
- unsigned int evt_mask;
-};
-
-//
-// Event data size = frame size - message header + evt indicator
-#define I2O_EVT_DATA_SIZE 88
-
-struct i2o_evt_info
-{
- struct i2o_evt_id id;
- unsigned char evt_data[I2O_EVT_DATA_SIZE];
- unsigned int data_size;
-};
-
-struct i2o_evt_get
-{
- struct i2o_evt_info info;
- int pending;
- int lost;
-};
-
-
-/**************************************************************************
- * HRT related constants and structures
- **************************************************************************/
-#define I2O_BUS_LOCAL 0
-#define I2O_BUS_ISA 1
-#define I2O_BUS_EISA 2
-#define I2O_BUS_MCA 3
-#define I2O_BUS_PCI 4
-#define I2O_BUS_PCMCIA 5
-#define I2O_BUS_NUBUS 6
-#define I2O_BUS_CARDBUS 7
-#define I2O_BUS_UNKNOWN 0x80
-
-#ifndef __KERNEL__
-typedef unsigned char u8;
-typedef unsigned short u16;
-typedef unsigned int u32;
-#endif /* __KERNEL__ */
-
-typedef struct _i2o_pci_bus {
- u8 PciFunctionNumber;
- u8 PciDeviceNumber;
- u8 PciBusNumber;
- u8 reserved;
- u16 PciVendorID;
- u16 PciDeviceID;
-} i2o_pci_bus;
-
-typedef struct _i2o_local_bus {
- u16 LbBaseIOPort;
- u16 reserved;
- u32 LbBaseMemoryAddress;
-} i2o_local_bus;
-
-typedef struct _i2o_isa_bus {
- u16 IsaBaseIOPort;
- u8 CSN;
- u8 reserved;
- u32 IsaBaseMemoryAddress;
-} i2o_isa_bus;
-
-typedef struct _i2o_eisa_bus_info {
- u16 EisaBaseIOPort;
- u8 reserved;
- u8 EisaSlotNumber;
- u32 EisaBaseMemoryAddress;
-} i2o_eisa_bus;
-
-typedef struct _i2o_mca_bus {
- u16 McaBaseIOPort;
- u8 reserved;
- u8 McaSlotNumber;
- u32 McaBaseMemoryAddress;
-} i2o_mca_bus;
-
-typedef struct _i2o_other_bus {
- u16 BaseIOPort;
- u16 reserved;
- u32 BaseMemoryAddress;
-} i2o_other_bus;
-
-typedef struct _i2o_hrt_entry {
- u32 adapter_id;
- u32 parent_tid:12;
- u32 state:4;
- u32 bus_num:8;
- u32 bus_type:8;
- union {
- i2o_pci_bus pci_bus;
- i2o_local_bus local_bus;
- i2o_isa_bus isa_bus;
- i2o_eisa_bus eisa_bus;
- i2o_mca_bus mca_bus;
- i2o_other_bus other_bus;
- } bus;
-} i2o_hrt_entry;
-
-typedef struct _i2o_hrt {
- u16 num_entries;
- u8 entry_len;
- u8 hrt_version;
- u32 change_ind;
- i2o_hrt_entry hrt_entry[1];
-} i2o_hrt;
-
-typedef struct _i2o_lct_entry {
- u32 entry_size:16;
- u32 tid:12;
- u32 reserved:4;
- u32 change_ind;
- u32 device_flags;
- u32 class_id:12;
- u32 version:4;
- u32 vendor_id:16;
- u32 sub_class;
- u32 user_tid:12;
- u32 parent_tid:12;
- u32 bios_info:8;
- u8 identity_tag[8];
- u32 event_capabilities;
-} i2o_lct_entry;
-
-typedef struct _i2o_lct {
- u32 table_size:16;
- u32 boot_tid:12;
- u32 lct_ver:4;
- u32 iop_flags;
- u32 current_change_ind;
- i2o_lct_entry lct_entry[1];
-} i2o_lct;
-
-typedef struct _i2o_status_block {
- u16 org_id;
- u16 reserved;
- u16 iop_id:12;
- u16 reserved1:4;
- u16 host_unit_id;
- u16 segment_number:12;
- u16 i2o_version:4;
- u8 iop_state;
- u8 msg_type;
- u16 inbound_frame_size;
- u8 init_code;
- u8 reserved2;
- u32 max_inbound_frames;
- u32 cur_inbound_frames;
- u32 max_outbound_frames;
- char product_id[24];
- u32 expected_lct_size;
- u32 iop_capabilities;
- u32 desired_mem_size;
- u32 current_mem_size;
- u32 current_mem_base;
- u32 desired_io_size;
- u32 current_io_size;
- u32 current_io_base;
- u32 reserved3:24;
- u32 cmd_status:8;
-} i2o_status_block;
-
-/* Event indicator mask flags */
-#define I2O_EVT_IND_STATE_CHANGE 0x80000000
-#define I2O_EVT_IND_GENERAL_WARNING 0x40000000
-#define I2O_EVT_IND_CONFIGURATION_FLAG 0x20000000
-#define I2O_EVT_IND_LOCK_RELEASE 0x10000000
-#define I2O_EVT_IND_CAPABILITY_CHANGE 0x08000000
-#define I2O_EVT_IND_DEVICE_RESET 0x04000000
-#define I2O_EVT_IND_EVT_MASK_MODIFIED 0x02000000
-#define I2O_EVT_IND_FIELD_MODIFIED 0x01000000
-#define I2O_EVT_IND_VENDOR_EVT 0x00800000
-#define I2O_EVT_IND_DEVICE_STATE 0x00400000
-
-/* Event data for generic events */
-#define I2O_EVT_STATE_CHANGE_NORMAL 0x00
-#define I2O_EVT_STATE_CHANGE_SUSPENDED 0x01
-#define I2O_EVT_STATE_CHANGE_RESTART 0x02
-#define I2O_EVT_STATE_CHANGE_NA_RECOVER 0x03
-#define I2O_EVT_STATE_CHANGE_NA_NO_RECOVER 0x04
-#define I2O_EVT_STATE_CHANGE_QUIESCE_REQUEST 0x05
-#define I2O_EVT_STATE_CHANGE_FAILED 0x10
-#define I2O_EVT_STATE_CHANGE_FAULTED 0x11
-
-#define I2O_EVT_GEN_WARNING_NORMAL 0x00
-#define I2O_EVT_GEN_WARNING_ERROR_THRESHOLD 0x01
-#define I2O_EVT_GEN_WARNING_MEDIA_FAULT 0x02
-
-#define I2O_EVT_CAPABILITY_OTHER 0x01
-#define I2O_EVT_CAPABILITY_CHANGED 0x02
-
-#define I2O_EVT_SENSOR_STATE_CHANGED 0x01
-
-#ifdef __KERNEL__ /* ioctl stuff only thing exported to users */
+/* How many OSMs can register themselves for device status updates? */
#define I2O_MAX_MANAGERS 4
-/*
- * I2O Interface Objects
- */
-
+#include <asm/semaphore.h> /* Needed for MUTEX init macros */
#include <linux/config.h>
#include <linux/notifier.h>
#include <asm/atomic.h>
/*
* message structures
*/
-
struct i2o_message
{
u8 version_offset;
/*
* Each I2O device entity has one or more of these. There is one
- * per device. *FIXME* how to handle multiple types on one unit.
+ * per device.
*/
-
struct i2o_device
{
- i2o_lct_entry *lct_data;/* Device LCT information */
+ i2o_lct_entry lct_data; /* Device LCT information */
u32 flags;
int i2oversion; /* I2O version supported. Actually there
* should be high and low version */
{
int irq;
#ifdef CONFIG_MTRR
- int mtrr_reg;
+ int mtrr_reg0;
+ int mtrr_reg1;
#endif
};
+/*
+ * Transport types supported by I2O stack
+ */
+#define I2O_TYPE_PCI 0x01 /* PCI I2O controller */
+
/*
- * Each I2O controller has one of these objects
+ * Each I2O controller has one of these objects
*/
-
struct i2o_controller
{
char name[16];
int type;
int enabled;
-#define I2O_TYPE_PCI 0x01 /* PCI I2O controller */
-
struct notifier_block *event_notifer; /* Events */
atomic_t users;
struct i2o_device *devices; /* I2O device chain */
struct i2o_controller *next; /* Controller chain */
- volatile u32 *post_port; /* Messaging ports */
- volatile u32 *reply_port;
- volatile u32 *irq_mask; /* Interrupt port */
+ volatile u32 *post_port; /* Inbound port */
+ volatile u32 *reply_port; /* Outbound port */
+ volatile u32 *irq_mask; /* Interrupt register */
+
+ /* Dynamic LCT related data */
+ struct semaphore lct_sem;
+ int lct_pid;
+ int lct_running;
i2o_status_block *status_block; /* IOP status block */
- i2o_lct *lct;
- i2o_hrt *hrt;
+ i2o_lct *lct; /* Logical Config Table */
+ i2o_lct *dlct; /* Temp LCT */
+ i2o_hrt *hrt; /* HW Resource Table */
u32 mem_offset; /* MFA offset */
u32 mem_phys; /* MFA physical */
- u32 priv_mem;
- u32 priv_mem_size;
- u32 priv_io;
- u32 priv_io_size;
-
struct proc_dir_entry* proc_entry; /* /proc dir */
union
{ /* Bus information */
struct i2o_pci pci;
} bus;
+
/* Bus specific destructor */
void (*destructor)(struct i2o_controller *);
+
/* Bus specific attach/detach */
int (*bind)(struct i2o_controller *, struct i2o_device *);
+
+ /* Bus specific detach */
int (*unbind)(struct i2o_controller *, struct i2o_device *);
+
/* Bus specific enable/disable */
void (*bus_enable)(struct i2o_controller *c);
void (*bus_disable)(struct i2o_controller *c);
int inbound_size; /* Inbound queue size */
};
+/*
+ * OSM registration block
+ *
+ * Each OSM creates at least one of these and registers it with the
+ * I2O core through i2o_install_handler(). An OSM may want to
+ * register more than one if it wants a fast path to a reply
+ * handler by having a separate initiator context for each
+ * class function.
+ */
struct i2o_handler
{
+ /* Message reply handler */
void (*reply)(struct i2o_handler *, struct i2o_controller *, struct i2o_message *);
- char *name;
- int context; /* Low 8 bits of the transaction info */
- u32 class; /* I2O classes that this driver handles */
+
+ /* New device notification handler */
+ void (*new_dev_notify)(struct i2o_controller *, struct i2o_device *);
+
+ /* Device deletion handler */
+ void (*dev_del_notify)(struct i2o_controller *, struct i2o_device *);
+
+ /* Reboot notification handler */
+ void (*reboot_notify)(void);
+
+ char *name; /* OSM name */
+ int context; /* Low 8 bits of the transaction info */
+ u32 class; /* I2O classes that this driver handles */
/* User data follows */
};
void (*run_queue)(struct i2o_controller *c);
int (*delete)(struct i2o_controller *);
};
-#endif
+#endif /* MODULE */
/*
* I2O System table entry
+ *
+ * The system table contains information about all the IOPs in the
+ * system. It is sent to all IOPs so that they can create peer-to-peer
+ * connections between them.
*/
struct i2o_sys_tbl_entry
{
I2O_REPLY_WRITE32(c,m);
}
-extern int i2o_install_controller(struct i2o_controller *);
-extern int i2o_delete_controller(struct i2o_controller *);
-extern void i2o_unlock_controller(struct i2o_controller *);
extern struct i2o_controller *i2o_find_controller(int);
-extern int i2o_status_get(struct i2o_controller *);
+extern void i2o_unlock_controller(struct i2o_controller *);
+extern struct i2o_controller *i2o_controller_chain;
extern int i2o_num_controllers;
+extern int i2o_status_get(struct i2o_controller *);
extern int i2o_install_handler(struct i2o_handler *);
extern int i2o_remove_handler(struct i2o_handler *);
-extern int i2o_claim_device(struct i2o_device *, struct i2o_handler *, u32);
-extern int i2o_release_device(struct i2o_device *, struct i2o_handler *, u32);
+extern int i2o_claim_device(struct i2o_device *, struct i2o_handler *);
+extern int i2o_release_device(struct i2o_device *, struct i2o_handler *);
+extern int i2o_device_notify_on(struct i2o_device *, struct i2o_handler *);
+extern int i2o_device_notify_off(struct i2o_device *, struct i2o_handler *);
extern int i2o_post_this(struct i2o_controller *, u32 *, int);
extern int i2o_post_wait(struct i2o_controller *, u32 *, int, int);
-extern int i2o_issue_params(int, struct i2o_controller *, int, void *,
- int, void *, int);
extern int i2o_query_scalar(struct i2o_controller *, int, int, int, void *, int);
extern int i2o_set_scalar(struct i2o_controller *, int, int, int, void *, int);
-
extern int i2o_query_table(int, struct i2o_controller *, int, int, int, void *,
int, void *, int);
extern int i2o_clear_table(struct i2o_controller *, int, int);
extern int i2o_row_add_table(struct i2o_controller *, int, int, int, void *,
int);
+extern int i2o_row_delete_table(struct i2o_controller *, int, int, int, void *,
+ int);
+extern int i2o_issue_params(int, struct i2o_controller *, int, void *,
+ int, void *, int);
-extern int i2o_event_register(struct i2o_controller *, int, int, u32);
-extern int i2o_event_ack(struct i2o_controller *, int, int, u32, void *, int);
+extern int i2o_event_register(struct i2o_controller *, u32, u32, u32, u32);
+extern int i2o_event_ack(struct i2o_controller *, u32 *);
-extern void i2o_run_queue(struct i2o_controller *);
extern void i2o_report_status(const char *, const char *, u32 *);
extern void i2o_dump_message(u32 *);
-
extern const char *i2o_get_class_name(int);
+extern int i2o_install_controller(struct i2o_controller *);
+extern int i2o_activate_controller(struct i2o_controller *);
+extern void i2o_run_queue(struct i2o_controller *);
+extern int i2o_delete_controller(struct i2o_controller *);
-/*
- * I2O classes / subclasses
- */
-
-/* Class ID and Code Assignments
- * (LCT.ClassID.Version field)
- */
-#define I2O_CLASS_VERSION_10 0x00
-#define I2O_CLASS_VERSION_11 0x01
-
-/* Class code names
- * (from v1.5 Table 6-1 Class Code Assignments.)
- */
-
-#define I2O_CLASS_EXECUTIVE 0x000
-#define I2O_CLASS_DDM 0x001
-#define I2O_CLASS_RANDOM_BLOCK_STORAGE 0x010
-#define I2O_CLASS_SEQUENTIAL_STORAGE 0x011
-#define I2O_CLASS_LAN 0x020
-#define I2O_CLASS_WAN 0x030
-#define I2O_CLASS_FIBRE_CHANNEL_PORT 0x040
-#define I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL 0x041
-#define I2O_CLASS_SCSI_PERIPHERAL 0x051
-#define I2O_CLASS_ATE_PORT 0x060
-#define I2O_CLASS_ATE_PERIPHERAL 0x061
-#define I2O_CLASS_FLOPPY_CONTROLLER 0x070
-#define I2O_CLASS_FLOPPY_DEVICE 0x071
-#define I2O_CLASS_BUS_ADAPTER_PORT 0x080
-#define I2O_CLASS_PEER_TRANSPORT_AGENT 0x090
-#define I2O_CLASS_PEER_TRANSPORT 0x091
-
-/* Rest of 0x092 - 0x09f reserved for peer-to-peer classes
- */
-
-#define I2O_CLASS_MATCH_ANYCLASS 0xffffffff
-
-/* Subclasses
- */
-
-#define I2O_SUBCLASS_i960 0x001
-#define I2O_SUBCLASS_HDM 0x020
-#define I2O_SUBCLASS_ISM 0x021
-
-/* Operation functions */
-
-#define I2O_PARAMS_FIELD_GET 0x0001
-#define I2O_PARAMS_LIST_GET 0x0002
-#define I2O_PARAMS_MORE_GET 0x0003
-#define I2O_PARAMS_SIZE_GET 0x0004
-#define I2O_PARAMS_TABLE_GET 0x0005
-#define I2O_PARAMS_FIELD_SET 0x0006
-#define I2O_PARAMS_LIST_SET 0x0007
-#define I2O_PARAMS_ROW_ADD 0x0008
-#define I2O_PARAMS_ROW_DELETE 0x0009
-#define I2O_PARAMS_TABLE_CLEAR 0x000A
/*
- * I2O serial number conventions / formats
- * (circa v1.5)
+ * I2O Function codes
*/
-#define I2O_SNFORMAT_UNKNOWN 0
-#define I2O_SNFORMAT_BINARY 1
-#define I2O_SNFORMAT_ASCII 2
-#define I2O_SNFORMAT_UNICODE 3
-#define I2O_SNFORMAT_LAN48_MAC 4
-#define I2O_SNFORMAT_WAN 5
-
-/* Plus new in v2.0 (Yellowstone pdf doc)
- */
-
-#define I2O_SNFORMAT_LAN64_MAC 6
-#define I2O_SNFORMAT_DDM 7
-#define I2O_SNFORMAT_IEEE_REG64 8
-#define I2O_SNFORMAT_IEEE_REG128 9
-#define I2O_SNFORMAT_UNKNOWN2 0xff
-
-/* Transaction Reply Lists (TRL) Control Word structure */
-
-#define TRL_SINGLE_FIXED_LENGTH 0x00
-#define TRL_SINGLE_VARIABLE_LENGTH 0x40
-#define TRL_MULTIPLE_FIXED_LENGTH 0x80
-
/*
- * Messaging API values
- */
-
+ * Executive Class
+ */
#define I2O_CMD_ADAPTER_ASSIGN 0xB3
#define I2O_CMD_ADAPTER_READ 0xB2
#define I2O_CMD_ADAPTER_RELEASE 0xB5
#define I2O_CMD_SYS_QUIESCE 0xC3
#define I2O_CMD_SYS_TAB_SET 0xA3
+/*
+ * Utility Class
+ */
#define I2O_CMD_UTIL_NOP 0x00
#define I2O_CMD_UTIL_ABORT 0x01
#define I2O_CMD_UTIL_CLAIM 0x09
#define I2O_CMD_UTIL_LOCK_RELEASE 0x19
#define I2O_CMD_UTIL_REPLY_FAULT_NOTIFY 0x15
+/*
+ * SCSI Host Bus Adapter Class
+ */
#define I2O_CMD_SCSI_EXEC 0x81
#define I2O_CMD_SCSI_ABORT 0x83
#define I2O_CMD_SCSI_BUSRESET 0x27
+/*
+ * Random Block Storage Class
+ */
#define I2O_CMD_BLOCK_READ 0x30
#define I2O_CMD_BLOCK_WRITE 0x31
#define I2O_CMD_BLOCK_CFLUSH 0x37
/*
* Init Outbound Q status
*/
-
#define I2O_CMD_OUTBOUND_INIT_IN_PROGRESS 0x01
#define I2O_CMD_OUTBOUND_INIT_REJECTED 0x02
#define I2O_CMD_OUTBOUND_INIT_FAILED 0x03
#define I2O_CMD_OUTBOUND_INIT_COMPLETE 0x04
-/*
- * I2O Get Status State values
- */
-
-#define ADAPTER_STATE_INITIALIZING 0x01
-#define ADAPTER_STATE_RESET 0x02
-#define ADAPTER_STATE_HOLD 0x04
-#define ADAPTER_STATE_READY 0x05
-#define ADAPTER_STATE_OPERATIONAL 0x08
-#define ADAPTER_STATE_FAILED 0x10
-#define ADAPTER_STATE_FAULTED 0x11
-
/* I2O API function return values */
#define I2O_RTN_NO_ERROR 0
/* Message header defines for VersionOffset */
#define I2OVER15 0x0001
#define I2OVER20 0x0002
+
/* Default is 1.5, FIXME: Need support for both 1.5 and 2.0 */
#define I2OVERSION I2OVER15
+
#define SGL_OFFSET_0 I2OVERSION
#define SGL_OFFSET_4 (0x0040 | I2OVERSION)
#define SGL_OFFSET_5 (0x0050 | I2OVERSION)
#define TRL_OFFSET_5 (0x0050 | I2OVERSION)
#define TRL_OFFSET_6 (0x0060 | I2OVERSION)
+/* Transaction Reply Lists (TRL) Control Word structure */
+#define TRL_SINGLE_FIXED_LENGTH 0x00
+#define TRL_SINGLE_VARIABLE_LENGTH 0x40
+#define TRL_MULTIPLE_FIXED_LENGTH 0x80
+
+
/* msg header defines for MsgFlags */
#define MSG_STATIC 0x0100
#define MSG_64BIT_CNTXT 0x0200
#define I2O_POST_WAIT_TIMEOUT -ETIMEDOUT
#endif /* __KERNEL__ */
-
#endif /* _I2O_H */
extern unsigned char fat_esc2uni[];
/* fatfs_syms.c */
-extern int init_fat_fs(void);
extern void cleanup_fat_fs(void);
/* nls.c */
extern void parport_release(struct pardevice *dev);
-/* parport_yield relinquishes the port if it would be helpful to other
- drivers. The return value is the same as for parport_claim. */
+/**
+ * parport_yield - relinquish a parallel port temporarily
+ * @dev: a device on the parallel port
+ *
+ * This function relinquishes the port if it would be helpful to other
+ * drivers to do so. Afterwards it tries to reclaim the port using
+ * parport_claim(), and the return value is the same as for
+ * parport_claim(). If it fails, the port is left unclaimed and it is
+ * the driver's responsibility to reclaim the port.
+ *
+ * The parport_yield() and parport_yield_blocking() functions are for
+ * marking points in the driver at which other drivers may claim the
+ * port and use their devices. Yielding the port is similar to
+ * releasing it and reclaiming it, but is more efficient because no
+ * action is taken if there are no other devices needing the port. In
+ * fact, nothing is done even if there are other devices waiting but
+ * the current device is still within its "timeslice". The default
+ * timeslice is half a second, but it can be adjusted via the /proc
+ * interface.
+ **/
extern __inline__ int parport_yield(struct pardevice *dev)
{
unsigned long int timeslip = (jiffies - dev->time);
return parport_claim(dev);
}
-/* parport_yield_blocking is the same but uses parport_claim_or_block
- instead of parport_claim. */
+/**
+ * parport_yield_blocking - relinquish a parallel port temporarily
+ * @dev: a device on the parallel port
+ *
+ * This function relinquishes the port if it would be helpful to other
+ * drivers to do so. Afterwards it tries to reclaim the port using
+ * parport_claim_or_block(), and the return value is the same as for
+ * parport_claim_or_block().
+ **/
extern __inline__ int parport_yield_blocking(struct pardevice *dev)
{
unsigned long int timeslip = (jiffies - dev->time);
#define PER_WYSEV386 (0x0004 | STICKY_TIMEOUTS)
#define PER_ISCR4 (0x0005 | STICKY_TIMEOUTS)
#define PER_BSD (0x0006)
+#define PER_SUNOS (PER_BSD | STICKY_TIMEOUTS)
#define PER_XENIX (0x0007 | STICKY_TIMEOUTS)
#define PER_LINUX32 (0x0008)
#define PER_IRIX32 (0x0009 | STICKY_TIMEOUTS) /* IRIX5 32-bit */
/* ioctls */
-/* compat */
-#define REGISTER_DEV _IO (MD_MAJOR, 1)
-#define START_MD _IO (MD_MAJOR, 2)
-#define STOP_MD _IO (MD_MAJOR, 3)
-
-
/* status */
#define RAID_VERSION _IOR (MD_MAJOR, 0x10, mdu_version_t)
#define GET_ARRAY_INFO _IOR (MD_MAJOR, 0x11, mdu_array_info_t)
* skb_queue_empty - check if a queue is empty
* @list: queue head
*
- * Returns true if the queue is empty, false otherwise
+ * Returns true if the queue is empty, false otherwise.
*/
extern __inline__ int skb_queue_empty(struct sk_buff_head *list)
/**
* kfree_skb - free an sk_buff
- * @skb: The buffer to free
+ * @skb: buffer to free
*
* Drop a reference to the buffer and free it if the usage count has
* hit zero.
/**
* skb_cloned - is the buffer a clone
- * @skb: Buffer to check
+ * @skb: buffer to check
*
- * Returns true if the buffer was generated with skb_clone and is
+ * Returns true if the buffer was generated with skb_clone() and is
* one of multiple shared copies of the buffer. Cloned buffers are
* shared data so must not be written to under normal circumstances.
*/
* copy of the data, drops a reference count on the old copy and returns
* the new copy with the reference count at 1. If the buffer is not a clone
* the original buffer is returned. When called with a spinlock held or
- * from interrupt state pri must be GFP_ATOMIC
+ * from interrupt state @pri must be %GFP_ATOMIC
*
- * NULL is returned on a memory allocation failure.
+ * %NULL is returned on a memory allocation failure.
*/
extern __inline__ struct sk_buff *skb_unshare(struct sk_buff *skb, int pri)
* skb_peek
* @list_: list to peek at
*
- * Peek an sk_buff. Unlike most other operations you _MUST_
+ * Peek an &sk_buff. Unlike most other operations you _MUST_
* be careful with this one. A peek leaves the buffer on the
* list and someone else may run off with it. You must hold
* the appropriate locks or have a private queue to do this.
*
- * Returns NULL for an empty list or a pointer to the head element.
+ * Returns %NULL for an empty list or a pointer to the head element.
* The reference count is not incremented and the reference is therefore
* volatile. Use with caution.
*/
* skb_peek_tail
* @list_: list to peek at
*
- * Peek an sk_buff. Unlike most other operations you _MUST_
+ * Peek an &sk_buff. Unlike most other operations you _MUST_
* be careful with this one. A peek leaves the buffer on the
* list and someone else may run off with it. You must hold
* the appropriate locks or have a private queue to do this.
*
- * Returns NULL for an empty list or a pointer to the tail element.
+ * Returns %NULL for an empty list or a pointer to the tail element.
* The reference count is not incremented and the reference is therefore
* volatile. Use with caution.
*/
* skb_queue_len - get queue length
* @list_: list to measure
*
- * Return the length of an sk_buff queue.
+ * Return the length of an &sk_buff queue.
*/
extern __inline__ __u32 skb_queue_len(struct sk_buff_head *list_)
* @newsk: buffer to queue
*
* Queue a buffer at the start of the list. This function takes the
- * list lock and can be used safely with other locking sk_buff functions
+ * list lock and can be used with other locking &sk_buff functions
* safely.
*
* A buffer cannot be placed on two lists at the same time.
* @newsk: buffer to queue
*
* Queue a buffer at the tail of the list. This function takes the
- * list lock and can be used safely with other locking sk_buff functions
+ * list lock and can be used with other locking &sk_buff functions
* safely.
*
* A buffer cannot be placed on two lists at the same time.
*
* Remove the head of the list. This function does not take any locks
* so must be used with appropriate locks held only. The head item is
- * returned or NULL if the list is empty.
+ * returned or %NULL if the list is empty.
*/
extern __inline__ struct sk_buff *__skb_dequeue(struct sk_buff_head *list)
*
* Remove the head of the list. The list lock is taken so the function
* may be used safely with other locking list functions. The head item is
- * returned or NULL if the list is empty.
+ * returned or %NULL if the list is empty.
*/
extern __inline__ struct sk_buff *skb_dequeue(struct sk_buff_head *list)
*
* Remove the tail of the list. This function does not take any locks
* so must be used with appropriate locks held only. The tail item is
- * returned or NULL if the list is empty.
+ * returned or %NULL if the list is empty.
*/
extern __inline__ struct sk_buff *__skb_dequeue_tail(struct sk_buff_head *list)
*
* Remove the head of the list. The list lock is taken so the function
* may be used safely with other locking list functions. The tail item is
- * returned or NULL if the list is empty.
+ * returned or %NULL if the list is empty.
*/
extern __inline__ struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list)
*
* This function extends the used data area of the buffer. If this would
* exceed the total buffer size the kernel will panic. A pointer to the
- * first byte of the extra data is returned
+ * first byte of the extra data is returned.
*/
extern __inline__ unsigned char *skb_put(struct sk_buff *skb, unsigned int len)
* @len: amount of data to add
*
* This function extends the used data area of the buffer at the buffer
- * start. If this would exceed the total buffer headroom the kernel will
- * panic. A pointer to the first byte of the extra data is returned
+ * start. If this would exceed the total buffer headroom the kernel will
+ * panic. A pointer to the first byte of the extra data is returned.
*/
extern __inline__ unsigned char *skb_push(struct sk_buff *skb, unsigned int len)
* @skb: buffer to use
* @len: amount of data to remove
*
- * This function removes data from the start of a buffer, returning
+ * This function removes data from the start of a buffer, returning
* the memory to the headroom. A pointer to the next data in the buffer
* is returned. Once the data has been pulled future pushes will overwrite
- * the old data
+ * the old data.
*/
extern __inline__ unsigned char * skb_pull(struct sk_buff *skb, unsigned int len)
* skb_headroom - bytes at buffer head
* @skb: buffer to check
*
- * Return the number of bytes of free space at the head of an sk_buff
+ * Return the number of bytes of free space at the head of an &sk_buff.
*/
extern __inline__ int skb_headroom(const struct sk_buff *skb)
* @skb: buffer to alter
* @len: bytes to move
*
- * Increase the headroom of an empty sk_buff by reducing the tail
+ * Increase the headroom of an empty &sk_buff by reducing the tail
* room. This is only allowed for an empty buffer.
*/
* skb_orphan - orphan a buffer
* @skb: buffer to orphan
*
- * If a buffer currently has an owner then we call the owners
- * destructor function and make the skb unowned. The buffer continues
+ * If a buffer currently has an owner then we call the owner's
+ * destructor function and make the @skb unowned. The buffer continues
* to exist but is no longer charged to its former owner.
*/
* skb_purge - empty a list
* @list: list to empty
*
- * Delete all buffers on an sk_buff list. Each buffer is removed from
+ * Delete all buffers on an &sk_buff list. Each buffer is removed from
* the list and one reference dropped. This function takes the list
* lock and is atomic with respect to other list locking functions.
*/
* __skb_purge - empty a list
* @list: list to empty
*
- * Delete all buffers on an sk_buff list. Each buffer is removed from
+ * Delete all buffers on an &sk_buff list. Each buffer is removed from
* the list and one reference dropped. This function does not take the
* list lock and the caller must hold the relevant locks to use it.
*/
* dev_alloc_skb - allocate an skbuff for sending
* @length: length to allocate
*
- * Allocate a new sk_buff and assign it a usage count of one. The
+ * Allocate a new &sk_buff and assign it a usage count of one. The
* buffer has unspecified headroom built in. Users should allocate
* the headroom they think they need without accounting for the
* built in space. The built in space is used for optimisations.
*
- * NULL is returned in there is no free memory. Although this function
+ * %NULL is returned if there is no free memory. Although this function
* allocates memory it can be called from an interrupt.
*/
*
* If the buffer passed lacks sufficient headroom or is a clone then
* it is copied and the additional headroom made available. If there
- * is no free memory NULL is returned. The new buffer is returned if
+ * is no free memory %NULL is returned. The new buffer is returned if
* a copy was made (and the old one dropped a reference). The existing
* buffer is returned otherwise.
*
extern void * memmove(void *,const void *,__kernel_size_t);
extern void * memscan(void *,int,__kernel_size_t);
extern int memcmp(const void *,const void *,__kernel_size_t);
+extern void * memchr(const void *,int,__kernel_size_t);
/*
* Include machine specific inline routines
#define PIPE_CONTROL 2
#define PIPE_BULK 3
-#define USB_ISOCHRONOUS 0
-#define USB_INTERRUPT 1
-#define USB_CONTROL 2
-#define USB_BULK 3
-
#define usb_maxpacket(dev, pipe, out) (out \
? (dev)->epmaxpacketout[usb_pipeendpoint(pipe)] \
: (dev)->epmaxpacketin [usb_pipeendpoint(pipe)] )
#define VID_HARDWARE_ZR36120 25 /* Zoran ZR36120/ZR36125 */
#define VID_HARDWARE_ZR36067 26 /* Zoran ZR36067/36060 */
#define VID_HARDWARE_OV511 27
+#define VID_HARDWARE_ZR356700 28 /* Zoran 36700 series */
/*
* Initialiser list
/* This part is used for the timeout functions. */
- spinlock_t timer_lock; /* Required until timer in core is repaired */
struct timer_list timer; /* This is the sock cleanup timer. */
struct timeval stamp;
return;
};
- spin_lock_bh(&sk->timer_lock);
- if (timer->prev != NULL && del_timer(timer))
+ if (timer_pending(timer) && del_timer(timer))
__sock_put(sk);
- spin_unlock_bh(&sk->timer_lock);
}
/* This function does not return reliable answer. Use it only as advice.
#define SCROLL_YNOPARTIAL __SCROLL_YNOPARTIAL
+#if defined(__sparc__)
+
+/* We map all of our framebuffers such that big-endian accesses
+ * are what we want, so the following is sufficient.
+ */
+
+#define fb_readb sbus_readb
+#define fb_readw sbus_readw
+#define fb_readl sbus_readl
+#define fb_writeb sbus_writeb
+#define fb_writew sbus_writew
+#define fb_writel sbus_writel
+#define fb_memset sbus_memset_io
+
+#elif defined(__i386__) || defined(__alpha__)
+
+#define fb_readb __raw_readb
+#define fb_readw __raw_readw
+#define fb_readl __raw_readl
+#define fb_writeb __raw_writeb
+#define fb_writew __raw_writew
+#define fb_writel __raw_writel
+#define fb_memset memset_io
+
+#else
+
+#define fb_readb(addr) (*(volatile u8 *) (addr))
+#define fb_readw(addr) (*(volatile u16 *) (addr))
+#define fb_readl(addr) (*(volatile u32 *) (addr))
+#define fb_writeb(b,addr) (*(volatile u8 *) (addr) = (b))
+#define fb_writew(b,addr) (*(volatile u16 *) (addr) = (b))
+#define fb_writel(b,addr) (*(volatile u32 *) (addr) = (b))
+#define fb_memset memset
+
+#endif
+
+
extern void fbcon_redraw_bmove(struct display *, int, int, int, int, int, int);
static __inline__ void *fb_memclear_small(void *s, size_t count)
{
- return(memset(s, 0, count));
+ char *xs = (char *) s;
+
+ while (count--)
+ fb_writeb(0, xs++);
+
+ return s;
}
static __inline__ void *fb_memclear(void *s, size_t count)
{
- return(memset(s, 0, count));
+ unsigned long xs = (unsigned long) s;
+
+ if (count < 8)
+ goto rest;
+
+ if (xs & 1) {
+ fb_writeb(0, xs++);
+ count--;
+ }
+ if (xs & 2) {
+ fb_writew(0, xs);
+ xs += 2;
+ count -= 2;
+ }
+ while (count > 3) {
+ fb_writel(0, xs);
+ xs += 4;
+ count -= 4;
+ }
+rest:
+ while (count--)
+ fb_writeb(0, xs++);
+
+ return s;
}
static __inline__ void *fb_memset255(void *s, size_t count)
{
- return(memset(s, 255, count));
+ unsigned long xs = (unsigned long) s;
+
+ if (count < 8)
+ goto rest;
+
+ if (xs & 1) {
+ fb_writeb(0xff, xs++);
+ count--;
+ }
+ if (xs & 2) {
+ fb_writew(0xffff, xs);
+ xs += 2;
+ count -= 2;
+ }
+ while (count > 3) {
+ fb_writel(0xffffffff, xs);
+ xs += 4;
+ count -= 4;
+ }
+rest:
+ while (count--)
+ fb_writeb(0xff, xs++);
+
+ return s;
}
#if defined(__i386__)
return dst;
}
-#else /* !i386 */
+#else /* !__i386__ */
/*
* Anyone who'd like to write asm functions for other CPUs?
static __inline__ void *fb_memmove(void *d, const void *s, size_t count)
{
- return(memmove(d, s, count));
-}
-
-static __inline__ void fast_memmove(char *dst, const char *src, size_t size)
-{
- memmove(dst, src, size);
-}
-
-#endif /* !i386 */
-
-#endif
-
-
-#if defined(__sparc__)
+ unsigned long dst, src;
-/* We map all of our framebuffers such that big-endian accesses
- * are what we want, so the following is sufficient.
- */
-
-#define fb_readb sbus_readb
-#define fb_readw sbus_readw
-#define fb_readl sbus_readl
-#define fb_writeb sbus_writeb
-#define fb_writew sbus_writew
-#define fb_writel sbus_writel
-#define fb_memset sbus_memset_io
+ if (d < s) {
+ dst = (unsigned long) d;
+ src = (unsigned long) s;
+
+ if ((count < 8) || ((dst ^ src) & 3))
+ goto restup;
+
+ if (dst & 1) {
+ fb_writeb(fb_readb(src++), dst++);
+ count--;
+ }
+ if (dst & 2) {
+ fb_writew(fb_readw(src), dst);
+ src += 2;
+ dst += 2;
+ count -= 2;
+ }
+ while (count > 3) {
+ fb_writel(fb_readl(src), dst);
+ src += 4;
+ dst += 4;
+ count -= 4;
+ }
+
+ restup:
+ while (count--)
+ fb_writeb(fb_readb(src++), dst++);
+ } else {
+ dst = (unsigned long) d + count - 1;
+ src = (unsigned long) s + count - 1;
+
+ if ((count < 8) || ((dst ^ src) & 3))
+ goto restdown;
+
+ if (dst & 1) {
+ fb_writeb(fb_readb(src--), dst--);
+ count--;
+ }
+ if (dst & 2) {
+ fb_writew(fb_readw(src), dst);
+ src -= 2;
+ dst -= 2;
+ count -= 2;
+ }
+ while (count > 3) {
+ fb_writel(fb_readl(src), dst);
+ src -= 4;
+ dst -= 4;
+ count -= 4;
+ }
+
+ restdown:
+ while (count--)
+ fb_writeb(fb_readb(src--), dst--);
+ }
-#elif defined(__i386__) || defined(__alpha__)
+ return d;
+}
-#define fb_readb __raw_readb
-#define fb_readw __raw_readw
-#define fb_readl __raw_readl
-#define fb_writeb __raw_writeb
-#define fb_writew __raw_writew
-#define fb_writel __raw_writel
-#define fb_memset memset_io
+static __inline__ void fast_memmove(char *d, const char *s, size_t count)
+{
+ unsigned long dst, src;
-#else
+ if (d < s) {
+ dst = (unsigned long) d;
+ src = (unsigned long) s;
+
+ if ((count < 8) || ((dst ^ src) & 3))
+ goto restup;
+
+ if (dst & 1) {
+ fb_writeb(fb_readb(src++), dst++);
+ count--;
+ }
+ if (dst & 2) {
+ fb_writew(fb_readw(src), dst);
+ src += 2;
+ dst += 2;
+ count -= 2;
+ }
+ while (count > 3) {
+ fb_writel(fb_readl(src), dst);
+ src += 4;
+ dst += 4;
+ count -= 4;
+ }
+
+ restup:
+ while (count--)
+ fb_writeb(fb_readb(src++), dst++);
+ } else {
+ dst = (unsigned long) d + count - 1;
+ src = (unsigned long) s + count - 1;
+
+ if ((count < 8) || ((dst ^ src) & 3))
+ goto restdown;
+
+ if (dst & 1) {
+ fb_writeb(fb_readb(src--), dst--);
+ count--;
+ }
+ if (dst & 2) {
+ fb_writew(fb_readw(src), dst);
+ src -= 2;
+ dst -= 2;
+ count -= 2;
+ }
+ while (count > 3) {
+ fb_writel(fb_readl(src), dst);
+ src -= 4;
+ dst -= 4;
+ count -= 4;
+ }
+
+ restdown:
+ while (count--)
+ fb_writeb(fb_readb(src--), dst--);
+ }
+}
-#define fb_readb(addr) (*(volatile u8 *) (addr))
-#define fb_readw(addr) (*(volatile u16 *) (addr))
-#define fb_readl(addr) (*(volatile u32 *) (addr))
-#define fb_writeb(b,addr) (*(volatile u8 *) (addr) = (b))
-#define fb_writew(b,addr) (*(volatile u16 *) (addr) = (b))
-#define fb_writel(b,addr) (*(volatile u32 *) (addr) = (b))
-#define fb_memset memset
+#endif /* !__i386__ */
-#endif
+#endif /* !__mc68000__ */
#endif /* _VIDEO_FBCON_H */
#include <linux/kmod.h>
#endif
+#ifdef CONFIG_BLK_DEV_LVM_MODULE
+extern void (*lvm_hd_name_ptr) ( char*, int);
+EXPORT_SYMBOL(lvm_hd_name_ptr);
+#endif
+
extern int console_loglevel;
extern void set_device_ro(kdev_t dev,int flag);
#if !defined(CONFIG_NFSD) && defined(CONFIG_NFSD_MODULE)
EXPORT_SYMBOL(page_readlink);
EXPORT_SYMBOL(page_follow_link);
EXPORT_SYMBOL(page_symlink_inode_operations);
+EXPORT_SYMBOL(block_fsync);
EXPORT_SYMBOL(block_symlink);
EXPORT_SYMBOL(vfs_readdir);
-/* for stackable file systems (lofs, wrapfs, etc.) */
-EXPORT_SYMBOL(add_to_page_cache);
+/* for stackable file systems (lofs, wrapfs, cryptfs, etc.) */
+EXPORT_SYMBOL(default_llseek);
+EXPORT_SYMBOL(dentry_open);
EXPORT_SYMBOL(filemap_nopage);
EXPORT_SYMBOL(filemap_swapout);
EXPORT_SYMBOL(filemap_sync);
-EXPORT_SYMBOL(remove_inode_page);
+EXPORT_SYMBOL(lock_page);
#if !defined(CONFIG_NFSD) && defined(CONFIG_NFSD_MODULE)
EXPORT_SYMBOL(do_nfsservctl);
EXPORT_SYMBOL(sync_dev);
EXPORT_SYMBOL(devfs_register_partitions);
EXPORT_SYMBOL(blkdev_open);
+EXPORT_SYMBOL(blkdev_close);
EXPORT_SYMBOL(blkdev_get);
EXPORT_SYMBOL(blkdev_put);
EXPORT_SYMBOL(ioctl_by_bdev);
/* process management */
EXPORT_SYMBOL(__wake_up);
+EXPORT_SYMBOL(wake_up_process);
EXPORT_SYMBOL(sleep_on);
EXPORT_SYMBOL(sleep_on_timeout);
EXPORT_SYMBOL(interruptible_sleep_on);
/**
* pm_register - register a device with power management
- * @type: The device type
- * @id: Device ID
- * @callback: Callback function
+ * @type: device type
+ * @id: device ID
+ * @callback: callback function
*
* Add a device to the list of devices that wish to be notified about
- * power management events. A pm_dev structure is returnd on success,
- * on failure the return is NULL
+ * power management events. A &pm_dev structure is returned on success,
+ * on failure the return is %NULL.
*/
struct pm_dev *pm_register(pm_dev_t type,
* @data: data for the callback
*
* Issue a power management request to a given device. The
- * PM_SUSPEND and PM_RESUME events are handled specially. The
- * data field must hold the intented next state. No call is made
+ * %PM_SUSPEND and %PM_RESUME events are handled specially. The
+ * data field must hold the intended next state. No call is made
* if the state matches.
*
 * BUGS: what stops two power management requests occurring in parallel
}
/**
- * pm_send - send request to all managed device
+ * pm_send_all - send request to all managed devices
* @rqst: power management request
* @data: data for the callback
*
 * Issue a power management request to all devices. The
- * PM_SUSPEND events are handled specially. Any device is
+ * %PM_SUSPEND events are handled specially. Any device is
* permitted to fail a suspend by returning a non zero (error)
* value from its callback function. If any device vetoes a
* suspend request then all other devices that have suspended
/**
* pm_find - find a device
* @type: type of device
- * @from: Where to start looking
+ * @from: where to start looking
*
* Scan the power management list for devices of a specific type. The
* return value for a matching device may be passed to further calls
- * to this function to find further matches. A NULL indicates the end
+ * to this function to find further matches. A %NULL indicates the end
* of the list.
*
- * To search from the beginning pass NULL as the from value.
+ * To search from the beginning pass %NULL as the @from value.
*/
struct pm_dev *pm_find(pm_dev_t type, struct pm_dev *from)
* it should be in this state _before_ it is released.
*/
static inline void
-__kmem_cache_free(kmem_cache_t *cachep, const void *objp)
+__kmem_cache_free(kmem_cache_t *cachep, void *objp)
{
kmem_slab_t *slabp;
kmem_bufctl_t *bufp;
*/
cachep = SLAB_GET_PAGE_CACHE(page);
if (cachep && (cachep->c_flags & SLAB_CFLGS_GENERAL)) {
- __kmem_cache_free(cachep, objp);
+ __kmem_cache_free(cachep, (void *)objp);
return;
}
}
cachep = SLAB_GET_PAGE_CACHE(page);
if (cachep && cachep->c_flags & SLAB_CFLGS_GENERAL) {
if (size <= cachep->c_org_size) { /* XXX better check */
- __kmem_cache_free(cachep, objp);
+ __kmem_cache_free(cachep, (void *)objp);
return;
}
}
if (pos > offset + length) /* We have dumped enough */
break;
}
- spin_lock_bh(&atalk_sockets_lock);
+ spin_unlock_bh(&atalk_sockets_lock);
/* The data in question runs from begin to begin+len */
*start = buffer + (offset - begin); /* Start of wanted data */
* dev_add_pack - add packet handler
* @pt: packet type declaration
*
- * Add a protocol handler to the networking stack. The passed packet_type
+ * Add a protocol handler to the networking stack. The passed &packet_type
* is linked into kernel lists and may not be freed until it has been
* removed from the kernel lists.
*/
* @pt: packet type declaration
*
* Remove a protocol handler that was previously added to the kernel
- * protocol handlers by dev_add_pack. The passed packet_type is removed
+ * protocol handlers by dev_add_pack(). The passed &packet_type is removed
* from the kernel lists and can be freed or reused once this function
* returns.
*/
* __dev_get_by_name - find a device by its name
* @name: name to find
*
- * Find an interface by name. Must be called under rtnl semaphore
- * or dev_base_lock. If the name is found a pointer to the device
- * is returned. If the name is not found then NULL is returned. The
+ * Find an interface by name. Must be called under RTNL semaphore
+ * or @dev_base_lock. If the name is found a pointer to the device
+ * is returned. If the name is not found then %NULL is returned. The
* reference counters are not incremented so the caller must be
* careful with locks.
*/
* Find an interface by name. This can be called from any
* context and does its own locking. The returned handle has
* the usage count incremented and the caller must use dev_put() to
- * release it when it is no longer needed. NULL is returned if no
+ * release it when it is no longer needed. %NULL is returned if no
* matching device is found.
*/
* __dev_get_by_index - find a device by its ifindex
* @ifindex: index of device
*
- * Search for an interface by index. Returns NULL if the device
+ * Search for an interface by index. Returns %NULL if the device
* is not found or a pointer to the device. The device has not
* had its reference counter increased so the caller must be careful
- * about locking. The caller must hold either the rtnl semaphore
- * or dev_base_lock.
+ * about locking. The caller must hold either the RTNL semaphore
+ * or @dev_base_lock.
*/
struct net_device * __dev_get_by_index(int ifindex)
* @name: name format string
* @err: error return pointer
*
- * Passed a format string - eg "lt%d" it will allocate a network device
- * and space for the name. NULL is returned if no memory is available.
+ * Passed a format string, e.g. "lt%d", it will allocate a network device
+ * and space for the name. %NULL is returned if no memory is available.
* If the allocation succeeds then the name is assigned and the
- * device pointer returned. NULL is returned if the name allocation failed.
- * The cause of an error is returned as a negative errno code in the
- * variable err points to.
+ * device pointer returned. %NULL is returned if the name allocation
+ * failed. The cause of an error is returned as a negative errno code
+ * in the variable @err points to.
*
- * The claler must hold the dev_base or rtnl locks when doing this in order
- * to avoid duplicate name allocations.
+ * The caller must hold the @dev_base or RTNL locks when doing this in
+ * order to avoid duplicate name allocations.
*/
struct net_device *dev_alloc(const char *name, int *err)
* dev_open - prepare an interface for use.
* @dev: device to open
*
- * Takes a device from down to up state. The devices private open
+ * Takes a device from down to up state. The device's private open
* function is invoked and then the multicast lists are loaded. Finally
- * the device is moved into the up state and a NETDEV_UP message is
+ * the device is moved into the up state and a %NETDEV_UP message is
* sent to the netdev notifier chain.
*
* Calling this function on an active interface is a nop. On a failure
* @dev: device to shutdown
*
* This function moves an active device into down state. A
- * NETDEV_GOING_DOWN is sent to the netev notifier chain. The device
- * is then deactivated and finally a NETDEV_DOWN is sent to the notifier
+ * %NETDEV_GOING_DOWN is sent to the netdev notifier chain. The device
+ * is then deactivated and finally a %NETDEV_DOWN is sent to the notifier
* chain.
*/
* unregister_netdevice_notifier - unregister a network notifier block
* @nb: notifier
*
- * Unregister a notifier previously registered by register_netdevice_notifier
- * The notifier is unlinked into the kernel structures and may
- * then be reused. A negative errno code is returned on a failure.
+ * Unregister a notifier previously registered by
+ * register_netdevice_notifier(). The notifier is unlinked from the
+ * kernel structures and may then be reused. A negative errno code
+ * is returned on a failure.
*/
int unregister_netdevice_notifier(struct notifier_block *nb)
* @fn: function to call
*
* Make a function call that is atomic with respect to the protocol
- * layers
+ * layers.
*/
void net_call_rx_atomic(void (*fn)(void))
* @slave: slave device
* @master: new master device
*
- * Changes the master device of the slave. Pass NULL to break the
+ * Changes the master device of the slave. Pass %NULL to break the
* bonding. The caller must hold the RTNL semaphore. On a failure
* a negative errno code is returned. On success the reference counts
- * are adjusted, RTM_NEWLINK is sent to the routing socket and the
+ * are adjusted, %RTM_NEWLINK is sent to the routing socket and the
* function returns zero.
*/
* Add or remove reception of all multicast frames to a device. While the
* count in the device remains above zero the interface remains listening
* to all interfaces. Once it hits zero the device reverts back to normal
- * filtering operation. A negative inc value is used to drop the counter
+ * filtering operation. A negative @inc value is used to drop the counter
* when releasing a resource needing all multicasts.
*/
return ret;
}
#ifdef WIRELESS_EXT
+ /* Take care of Wireless Extensions */
if (cmd >= SIOCIWFIRST && cmd <= SIOCIWLAST) {
- dev_load(ifr.ifr_name);
- if (IW_IS_SET(cmd)) {
- if (!suser())
+ /* If command is `set a parameter', or
+ * `get the encoding parameters', check if
+ * the user has the right to do it */
+ if (IW_IS_SET(cmd) || (cmd == SIOCGIWENCODE)) {
+ if(!capable(CAP_NET_ADMIN))
return -EPERM;
}
+ dev_load(ifr.ifr_name);
rtnl_lock();
ret = dev_ifsioc(&ifr, cmd);
rtnl_unlock();
* @dev: device to register
*
* Take a completed network device structure and add it to the kernel
- * interfaces. A NETDEV_REGISTER message is sent to the netdev notifier
+ * interfaces. A %NETDEV_REGISTER message is sent to the netdev notifier
* chain. 0 is returned on success. A negative errno code is returned
* on a failure to set up the device, or if the name is a duplicate.
*
* @sz: size
* @here: address
*
- * Out of line support code for skb_put. Not user callable
+ * Out of line support code for skb_put(). Not user callable.
*/
void skb_over_panic(struct sk_buff *skb, int sz, void *here)
* @sz: size
* @here: address
*
- * Out of line support code for skb_push. Not user callable
+ * Out of line support code for skb_push(). Not user callable.
*/
* @size: size to allocate
* @gfp_mask: allocation mask
*
- * Allocate a new sk_buff. The returned buffer has no headroom and a
+ * Allocate a new &sk_buff. The returned buffer has no headroom and a
* tail room of size bytes. The object has a reference count of one.
- * The return is the buffer. On a failure the return is NULL.
+ * The return is the buffer. On a failure the return is %NULL.
*
- * Buffers may only be allocated from interrupts using a gfp_mask of
- * GFP_ATOMIC.
+ * Buffers may only be allocated from interrupts using a @gfp_mask of
+ * %GFP_ATOMIC.
*/
struct sk_buff *alloc_skb(unsigned int size,int gfp_mask)
* @skb: buffer to clone
* @gfp_mask: allocation priority
*
- * Duplicate an sk_buff. The new one is not owned by a socket. Both
+ * Duplicate an &sk_buff. The new one is not owned by a socket. Both
* copies share the same packet data but not structure. The new
* buffer has a reference count of 1. If the allocation fails the
- * function returns NULL otherwise the new buffer is returned.
+ * function returns %NULL otherwise the new buffer is returned.
*
- * If this function is called from an interrupt gfp_mask must be
- * GFP_ATOMIC.
+ * If this function is called from an interrupt @gfp_mask must be
+ * %GFP_ATOMIC.
*/
struct sk_buff *skb_clone(struct sk_buff *skb, int gfp_mask)
* @skb: buffer to copy
* @gfp_mask: allocation priority
*
- * Make a copy of both an sk_buff and its data. This is used when the
+ * Make a copy of both an &sk_buff and its data. This is used when the
* caller wishes to modify the data and needs a private copy of the
- * data to alter. Returns NULL on failure or the pointer to the buffer
+ * data to alter. Returns %NULL on failure or the pointer to the buffer
* on success. The returned buffer has a reference count of 1.
*
- * You must pass GFP_ATOMIC as the allocation priority if this function
+ * You must pass %GFP_ATOMIC as the allocation priority if this function
* is called from an interrupt.
*/
* @newtailroom: new free bytes at tail
* @gfp_mask: allocation priority
*
- * Make a copy of both an sk_buff and its data and while doing so
+ * Make a copy of both an &sk_buff and its data and while doing so
* allocate additional space.
*
* This is used when the caller wishes to modify the data and needs a
* private copy of the data to alter as well as more space for new fields.
- * Returns NULL on failure or the pointer to the buffer
+ * Returns %NULL on failure or the pointer to the buffer
* on success. The returned buffer has a reference count of 1.
*
- * You must pass GFP_ATOMIC as the allocation priority if this function
+ * You must pass %GFP_ATOMIC as the allocation priority if this function
* is called from an interrupt.
*/
* handler for protocols to use and generic option handler.
*
*
- * Version: $Id: sock.c,v 1.91 2000/03/25 01:55:03 davem Exp $
+ * Version: $Id: sock.c,v 1.92 2000/04/08 07:21:15 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
if (sk->shutdown&SEND_SHUTDOWN)
goto failure;
- if (fallback) {
- /* The buffer get won't block, or use the atomic queue.
- * It does produce annoying no free page messages still.
- */
- skb = sock_wmalloc(sk, size, 0, GFP_BUFFER);
+ if (atomic_read(&sk->wmem_alloc) < sk->sndbuf) {
+ if (fallback) {
+ /* The buffer get won't block, or use the atomic queue.
+ * It does produce annoying no free page messages still.
+ */
+ skb = alloc_skb(size, GFP_BUFFER);
+ if (skb)
+ break;
+ try_size = fallback;
+ }
+ skb = alloc_skb(try_size, sk->allocation);
if (skb)
break;
- try_size = fallback;
+ err = -ENOBUFS;
+ goto failure;
}
- skb = sock_wmalloc(sk, try_size, 0, sk->allocation);
- if (skb)
- break;
/*
* This means we have too many buffers for this socket already.
timeo = sock_wait_for_wmem(sk, timeo);
}
+ skb_set_owner_w(skb, sk);
return skb;
interrupted:
skb_queue_head_init(&sk->write_queue);
skb_queue_head_init(&sk->error_queue);
- spin_lock_init(&sk->timer_lock);
init_timer(&sk->timer);
sk->allocation = GFP_KERNEL;
decnet_address = dn_htons(area << 10 | node);
dn_dn2eth(decnet_ether_address, dn_ntohs(decnet_address));
- return 0;
+ return 1;
}
__setup("decnet=", decnet_setup);
*
* Alan Cox, <alan@redhat.com>
*
- * Version: $Id: icmp.c,v 1.67 2000/03/25 01:55:11 davem Exp $
+ * Version: $Id: icmp.c,v 1.68 2000/04/08 02:44:18 davem Exp $
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
}
if (ip_route_output(&rt, daddr, rt->rt_spec_dst, RT_TOS(skb->nh.iph->tos), 0))
goto out;
- ip_build_xmit(sk, icmp_glue_bits, icmp_param,
- icmp_param->data_len+sizeof(struct icmphdr),
- &ipc, rt, MSG_DONTWAIT);
+ if (icmpv4_xrlim_allow(rt, icmp_param->icmph.type,
+ icmp_param->icmph.code)) {
+ ip_build_xmit(sk, icmp_glue_bits, icmp_param,
+ icmp_param->data_len+sizeof(struct icmphdr),
+ &ipc, rt, MSG_DONTWAIT);
+ }
ip_rt_put(rt);
out:
icmp_xmit_unlock_bh();
num++;
}
- return 0;
+ return 1;
}
static int __init nfsaddrs_config_setup(char *addrs)
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp.c,v 1.166 2000/03/25 01:55:11 davem Exp $
+ * Version: $Id: tcp.c,v 1.167 2000/04/08 07:21:18 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
tmp += copy;
queue_it = 0;
}
- skb = sock_wmalloc(sk, tmp, 0, GFP_KERNEL);
- /* If we didn't get any memory, we need to sleep. */
- if (skb == NULL) {
+ if (tcp_memory_free(sk)) {
+ skb = alloc_skb(tmp, GFP_KERNEL);
+ if (skb == NULL)
+ goto do_oom;
+ skb_set_owner_w(skb, sk);
+ } else {
+ /* If we didn't get any memory, we need to sleep. */
set_bit(SOCK_ASYNC_NOSPACE, &sk->socket->flags);
set_bit(SOCK_NOSPACE, &sk->socket->flags);
err = -EPIPE;
}
goto out;
+do_oom:
+ err = copied ? : -ENOBUFS;
+ goto out;
do_interrupted:
if(copied)
err = copied;
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_input.c,v 1.191 2000/03/25 01:55:13 davem Exp $
+ * Version: $Id: tcp_input.c,v 1.192 2000/04/08 07:21:20 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
tp->fin_seq = TCP_SKB_CB(skb)->end_seq;
tp->ack.pending = 1;
+ tp->ack.quick = 0;
sk->shutdown |= RCV_SHUTDOWN;
/* Do not send POLL_HUP for half duplex close. */
if (sk->shutdown == SHUTDOWN_MASK || sk->state == TCP_CLOSE)
- sock_wake_async(sk->socket, 1, POLL_HUP);
+ sk_wake_async(sk, 1, POLL_HUP);
else
- sock_wake_async(sk->socket, 1, POLL_IN);
+ sk_wake_async(sk, 1, POLL_IN);
}
}
kill_proc(sk->proc, SIGURG, 1);
else
kill_pg(-sk->proc, SIGURG, 1);
- sock_wake_async(sk->socket, 3, POLL_PRI);
+ sk_wake_async(sk, 3, POLL_PRI);
}
/* We may be adding urgent data when the last byte read was
if(!sk->dead) {
sk->state_change(sk);
- sock_wake_async(sk->socket, 0, POLL_OUT);
+ sk_wake_async(sk, 0, POLL_OUT);
}
if (tp->write_pending) {
*/
if (!sk->dead) {
sk->state_change(sk);
- sock_wake_async(sk->socket,0,POLL_OUT);
+ sk_wake_async(sk,0,POLL_OUT);
}
tp->snd_una = TCP_SKB_CB(skb)->ack_seq;
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_output.c,v 1.123 2000/03/25 01:52:05 davem Exp $
+ * Version: $Id: tcp_output.c,v 1.124 2000/04/08 07:21:24 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
timeout = jiffies + ato;
/* Use new timeout only if there wasn't a older one earlier. */
- spin_lock_bh(&sk->timer_lock);
- if (!tp->delack_timer.prev || !del_timer(&tp->delack_timer)) {
- sock_hold(sk);
- tp->delack_timer.expires = timeout;
- } else {
+ if (timer_pending(&tp->delack_timer)) {
+ unsigned long old_timeout = tp->delack_timer.expires;
+
/* If delack timer was blocked or is about to expire,
* send ACK now.
*/
- if (tp->ack.blocked || time_before_eq(tp->delack_timer.expires, jiffies+(ato>>2))) {
- spin_unlock_bh(&sk->timer_lock);
-
+ if (tp->ack.blocked || time_before_eq(old_timeout, jiffies+(ato>>2))) {
tcp_send_ack(sk);
- __sock_put(sk);
return;
}
- if (time_before(timeout, tp->delack_timer.expires))
- tp->delack_timer.expires = timeout;
+ if (!time_before(timeout, old_timeout))
+ timeout = old_timeout;
}
- add_timer(&tp->delack_timer);
- spin_unlock_bh(&sk->timer_lock);
+ if (!mod_timer(&tp->delack_timer, timeout))
+ sock_hold(sk);
#ifdef TCP_FORMAL_WINDOW
/* Explanation. Header prediction path does not handle
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_timer.c,v 1.74 2000/02/14 20:56:30 davem Exp $
+ * Version: $Id: tcp_timer.c,v 1.75 2000/04/08 07:21:25 davem Exp $
*
* Authors: Ross Biro, <bir7@leland.Stanford.Edu>
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
{
struct tcp_opt *tp = &sk->tp_pinfo.af_tcp;
- spin_lock_init(&sk->timer_lock);
-
init_timer(&tp->retransmit_timer);
tp->retransmit_timer.function=&tcp_retransmit_timer;
tp->retransmit_timer.data = (unsigned long) sk;
{
struct tcp_opt *tp = &sk->tp_pinfo.af_tcp;
- spin_lock_bh(&sk->timer_lock);
switch (what) {
case TCP_TIME_RETRANS:
/* When setting the transmit timer the probe timer
* The delayed ack timer can be set if we are changing the
* retransmit timer when removing acked frames.
*/
- if(tp->probe_timer.prev && del_timer(&tp->probe_timer))
+ if (timer_pending(&tp->probe_timer) && del_timer(&tp->probe_timer))
__sock_put(sk);
- if (!tp->retransmit_timer.prev || !del_timer(&tp->retransmit_timer))
- sock_hold(sk);
if (when > TCP_RTO_MAX) {
printk(KERN_DEBUG "reset_xmit_timer sk=%p when=0x%lx, caller=%p\n", sk, when, NET_CALLER(sk));
when = TCP_RTO_MAX;
}
- mod_timer(&tp->retransmit_timer, jiffies+when);
+ if (!mod_timer(&tp->retransmit_timer, jiffies+when))
+ sock_hold(sk);
break;
case TCP_TIME_DACK:
- if (!tp->delack_timer.prev || !del_timer(&tp->delack_timer))
+ if (!mod_timer(&tp->delack_timer, jiffies+when))
sock_hold(sk);
- mod_timer(&tp->delack_timer, jiffies+when);
break;
case TCP_TIME_PROBE0:
- if (!tp->probe_timer.prev || !del_timer(&tp->probe_timer))
+ if (!mod_timer(&tp->probe_timer, jiffies+when))
sock_hold(sk);
- mod_timer(&tp->probe_timer, jiffies+when);
- break;
+ break;
default:
printk(KERN_DEBUG "bug: unknown timer value\n");
};
- spin_unlock_bh(&sk->timer_lock);
}
void tcp_clear_xmit_timers(struct sock *sk)
{
struct tcp_opt *tp = &sk->tp_pinfo.af_tcp;
- spin_lock_bh(&sk->timer_lock);
- if(tp->retransmit_timer.prev && del_timer(&tp->retransmit_timer))
+ if(timer_pending(&tp->retransmit_timer) && del_timer(&tp->retransmit_timer))
__sock_put(sk);
- if(tp->delack_timer.prev && del_timer(&tp->delack_timer))
+ if(timer_pending(&tp->delack_timer) && del_timer(&tp->delack_timer))
__sock_put(sk);
tp->ack.blocked = 0;
- if(tp->probe_timer.prev && del_timer(&tp->probe_timer))
+ if(timer_pending(&tp->probe_timer) && del_timer(&tp->probe_timer))
__sock_put(sk);
- if(sk->timer.prev && del_timer(&sk->timer))
+ if(timer_pending(&sk->timer) && del_timer(&sk->timer))
__sock_put(sk);
- spin_unlock_bh(&sk->timer_lock);
}
static void tcp_write_err(struct sock *sk)
tcp_done(sk);
}
+/* Do not allow orphaned sockets to eat all our resources.
+ * This is a direct violation of the TCP specs, but it is required
+ * to prevent DoS attacks. It is called when a retransmission timeout
+ * or zero probe timeout occurs on an orphaned socket.
+ *
+ * The criterion is still not confirmed experimentally and may change.
+ * We kill the socket if:
+ * 1. The number of orphaned sockets exceeds an administratively configured
+ *    limit.
+ * 2. Under the pessimistic assumption that every orphan eats at least as
+ *    much memory as this one, the total consumed memory exceeds all
+ *    the available memory.
+ */
+static int tcp_out_of_resources(struct sock *sk, int do_reset)
+{
+ int orphans = atomic_read(&tcp_orphan_count);
+
+ if (orphans >= sysctl_tcp_max_orphans ||
+ ((orphans*atomic_read(&sk->wmem_alloc))>>PAGE_SHIFT) >= num_physpages) {
+ if (net_ratelimit())
+ printk(KERN_INFO "Out of socket memory\n");
+ if (do_reset)
+ tcp_send_active_reset(sk, GFP_ATOMIC);
+ tcp_done(sk);
+ return 1;
+ }
+ return 0;
+}
+
/* A write timeout has occurred. Process the after effects. */
static int tcp_write_timeout(struct sock *sk)
{
dst_negative_advice(&sk->dst_cache);
}
+
retry_until = sysctl_tcp_retries2;
- if (sk->dead)
+ if (sk->dead) {
+ if (tcp_out_of_resources(sk, tp->retransmits < retry_until))
+ return 1;
+
retry_until = sysctl_tcp_orphan_retries;
+ }
}
if (tp->retransmits >= retry_until) {
* with RFCs, only probe timer combines both retransmission timeout
* and probe timeout in one bottle. --ANK
*/
- max_probes = sk->dead ? sysctl_tcp_orphan_retries : sysctl_tcp_retries2;
+ max_probes = sysctl_tcp_retries2;
+
+ if (sk->dead) {
+ if (tcp_out_of_resources(sk, tp->probes_out <= max_probes))
+ goto out_unlock;
+
+ max_probes = sysctl_tcp_orphan_retries;
+ }
if (tp->probes_out > max_probes) {
tcp_write_err(sk);
tp->retransmits++;
tp->rto = min(tp->rto << 1, TCP_RTO_MAX);
tcp_reset_xmit_timer(sk, TCP_TIME_RETRANS, tp->rto);
+ if (tp->retransmits > sysctl_tcp_retries1)
+ __sk_dst_reset(sk);
TCP_CHECK_TIMER(sk);
out_unlock:
void tcp_delete_keepalive_timer (struct sock *sk)
{
- spin_lock_bh(&sk->timer_lock);
- if (sk->timer.prev && del_timer (&sk->timer))
+ if (timer_pending(&sk->timer) && del_timer (&sk->timer))
__sock_put(sk);
- spin_unlock_bh(&sk->timer_lock);
}
void tcp_reset_keepalive_timer (struct sock *sk, unsigned long len)
{
- spin_lock_bh(&sk->timer_lock);
- if(!sk->timer.prev || !del_timer(&sk->timer))
+ if (!mod_timer(&sk->timer, jiffies+len))
sock_hold(sk);
- mod_timer(&sk->timer, jiffies+len);
- spin_unlock_bh(&sk->timer_lock);
}
void tcp_set_keepalive(struct sock *sk, int val)
EXPORT_SYMBOL(xdr_encode_netobj);
EXPORT_SYMBOL(xdr_zero);
EXPORT_SYMBOL(xdr_one);
+EXPORT_SYMBOL(xdr_two);
EXPORT_SYMBOL(xdr_shift_iovec);
EXPORT_SYMBOL(xdr_zero_iovec);
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
- * Version: $Id: af_unix.c,v 1.91 2000/03/25 01:55:34 davem Exp $
+ * Version: $Id: af_unix.c,v 1.93 2000/04/08 07:21:29 davem Exp $
*
* Fixes:
* Linus Torvalds : Assorted bug cures.
(err = unix_autobind(sock)) != 0)
goto out;
+ err = -EMSGSIZE;
+ if ((unsigned)len > sk->sndbuf - 32)
+ goto out;
skb = sock_alloc_send_skb(sk, len, 0, msg->msg_flags&MSG_DONTWAIT, &err);
if (skb==NULL)
{
char buf[1024];
char *vec[8192];
+ char *fvec[200];
+ char **svec;
char type[64];
int i;
int vp=2;
while(fgets(buf, 1024, stdin))
{
- if(*buf!='!')
+ if(*buf!='!') {
printf("%s", buf);
- else
- {
- fflush(stdout);
- if(buf[1]=='E')
- strcpy(type, "-function");
- else if(buf[1]=='I')
- strcpy(type, "-nofunction");
- else
- {
- fprintf(stderr, "Unknown ! escape.\n");
- exit(1);
- }
- switch(pid=fork())
- {
- case -1:
- perror("fork");
- exit(1);
- case 0:
- execvp("scripts/kernel-doc", vec);
- perror("exec scripts/kernel-doc");
- exit(1);
- default:
- waitpid(pid, NULL,0);
+ continue;
+ }
+
+ fflush(stdout);
+ svec = vec;
+ if(buf[1]=='E')
+ strcpy(type, "-function");
+ else if(buf[1]=='I')
+ strcpy(type, "-nofunction");
+ else if(buf[1]=='F') {
+ int snarf = 0;
+ fvec[0] = "kernel-doc";
+ fvec[1] = "-docbook";
+ strcpy (type, "-function");
+ vp = 2;
+ for (i = 2; buf[i]; i++) {
+ if (buf[i] == ' ' || buf[i] == '\n') {
+ buf[i] = '\0';
+ snarf = 1;
+ continue;
+ }
+
+ if (snarf) {
+ snarf = 0;
+ fvec[vp++] = type;
+ fvec[vp++] = &buf[i];
+ }
}
+ fvec[vp++] = &buf[2];
+ fvec[vp] = NULL;
+ svec = fvec;
+ } else
+ {
+ fprintf(stderr, "Unknown ! escape.\n");
+ exit(1);
+ }
+ switch(pid=fork())
+ {
+ case -1:
+ perror("fork");
+ exit(1);
+ case 0:
+ execvp("scripts/kernel-doc", svec);
+ perror("exec scripts/kernel-doc");
+ exit(1);
+ default:
+ waitpid(pid, NULL,0);
}
}
exit(0);
#!/usr/bin/perl
## Copyright (c) 1998 Michael Zucchi, All Rights Reserved ##
+## Copyright (C) 2000 Tim Waugh <twaugh@redhat.com> ##
## ##
-## This software falls under the GNU Public License. Please read ##
-## the COPYING file for more information ##
+## This software falls under the GNU General Public License. ##
+## Please read the COPYING file for more information ##
#
# This will read a 'c' file and scan for embedded comments in the
#
# 'funcname()' - function
# '$ENVVAR' - environmental variable
-# '&struct_name' - name of a structure
+# '&struct_name' - name of a structure (up to two words including 'struct')
# '@parameter' - name of a parameter
# '%CONST' - name of a constant.
# match expressions used to find embedded type information
-$type_constant = "\\\%(\\w+)";
-$type_func = "(\\w+\\(\\))";
+$type_constant = "\\\%([-_\\w]+)";
+$type_func = "(\\w+)\\(\\)";
$type_param = "\\\@(\\w+)";
-$type_struct = "\\\&(\\w+)";
+$type_struct = "\\\&((struct\\s*)?\\w+)";
$type_env = "(\\\$\\w+)";
$blankline_html = "<p>";
# sgml, docbook format
-%highlights_sgml = ( $type_constant, "<replaceable class=\"option\">\$1</replaceable>",
+%highlights_sgml = ( "([^=])\\\"([^\\\"<]+)\\\"", "\$1<quote>\$2</quote>",
+ $type_constant, "<constant>\$1</constant>",
$type_func, "<function>\$1</function>",
$type_struct, "<structname>\$1</structname>",
$type_env, "<envar>\$1</envar>",
$blankline_gnome = "</para><para>\n";
# these are pretty rough
-%highlights_man = ( $type_constant, "\\n.I \\\"\$1\\\"\\n",
- $type_func, "\\n.B \\\"\$1\\\"\\n",
- $type_struct, "\\n.I \\\"\$1\\\"\\n",
- $type_param."([\.\, ]*)\n?", "\\n.I \\\"\$1\$2\\\"\\n" );
+%highlights_man = ( $type_constant, "\$1",
+ $type_func, "\\\\fB\$1\\\\fP",
+ $type_struct, "\\\\fI\$1\\\\fP",
+ $type_param, "\\\\fI\$1\\\\fP" );
$blankline_man = "";
# text-mode
print "(";
$count = 0;
foreach $parameter (@{$args{'parameterlist'}}) {
- print "<i>".$args{'parametertypes'}{$parameter}."</i> <b>".$parameter."</b>\n";
+ $type = $args{'parametertypes'}{$parameter};
+ if ($type =~ m/([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)/) {
+ # pointer-to-function
+ print "<i>$1</i><b>$parameter</b>) <i>($2)</i>";
+ } else {
+ print "<i>".$type."</i> <b>".$parameter."</b>";
+ }
if ($count != $#{$args{'parameterlist'}}) {
$count++;
- print ", ";
+ print ",\n";
}
}
print ")\n";
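The `([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)` match used here (and again in the SGML and man writers) splits a function-pointer type around the slot where the parameter name belongs, so the name can be re-inserted between the two halves. A sketch with illustrative values:

```python
import re

# Illustrative inputs: a parameter type as kernel-doc stores it (the
# parameter name already stripped out) and the parameter's name.
ptype = "void (*)(int, void *)"
param = "done"

m = re.match(r'([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)', ptype)
if m:
    # pointer-to-function: group 1 is 'void (*', group 2 the argument list
    html = "<i>%s</i><b>%s</b>) <i>(%s)</i>" % (m.group(1), param, m.group(2))
else:
    html = "<i>%s</i> <b>%s</b>" % (ptype, param)

print(html)
```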
print "<h3>Arguments</h3>\n";
print "<dl>\n";
foreach $parameter (@{$args{'parameterlist'}}) {
- print "<dt><i>".$args{'parametertypes'}{$parameter}."</i> <b>".$parameter."</b>\n";
+ print "<dt><b>".$parameter."</b>\n";
print "<dd>";
output_highlight($args{'parameters'}{$parameter});
}
print "</dl>\n";
foreach $section (@{$args{'sectionlist'}}) {
print "<h3>$section</h3>\n";
- print "<ul>\n";
+ print "<blockquote>\n";
output_highlight($args{'sections'}{$section});
- print "</ul>\n";
+ print "</blockquote>\n";
}
print "<hr>\n";
}
$count = 0;
if ($#{$args{'parameterlist'}} >= 0) {
foreach $parameter (@{$args{'parameterlist'}}) {
- print " <paramdef>".$args{'parametertypes'}{$parameter};
- print " <parameter>$parameter</parameter></paramdef>\n";
+ $type = $args{'parametertypes'}{$parameter};
+ if ($type =~ m/([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)/) {
+ # pointer-to-function
+ print " <paramdef>$1<parameter>$parameter</parameter>)\n";
+ print " <funcparams>$2</funcparams></paramdef>\n";
+ } else {
+ print " <paramdef>".$type;
+ print " <parameter>$parameter</parameter></paramdef>\n";
+ }
}
} else {
print " <void>\n";
$count = 0;
if ($#{$args{'parameterlist'}} >= 0) {
foreach $parameter (@{$args{'parameterlist'}}) {
- print " <paramdef>".$args{'parametertypes'}{$parameter};
- print " <parameter>$parameter</parameter></paramdef>\n";
+ $type = $args{'parametertypes'}{$parameter};
+ if ($type =~ m/([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)/) {
+ # pointer-to-function
+ print " <paramdef>$1 <parameter>$parameter</parameter>)\n";
+ print " <funcparams>$2</funcparams></paramdef>\n";
+ } else {
+ print " <paramdef>".$type;
+ print " <parameter>$parameter</parameter></paramdef>\n";
+ }
}
} else {
print " <void>\n";
my ($parameter, $section);
my $count;
- print ".TH \"$args{'module'}\" \"$args{'function'}\" \"25 May 1998\" \"API Manual\" LINUX\n";
+ print ".TH \"$args{'module'}\" 4 \"$args{'function'}\" \"25 May 1998\" \"API Manual\" LINUX\n";
- print ".SH Function\n";
+ print ".SH NAME\n";
+ print $args{'function'}." \\- ".$args{'purpose'}."\n";
- print ".I \"".$args{'functiontype'}."\"\n";
- print ".B \"".$args{'function'}."\"\n";
- print "(\n";
+ print ".SH SYNOPSIS\n";
+ print ".B \"".$args{'functiontype'}."\" ".$args{'function'}."\n";
$count = 0;
+ $parenth = "(";
+ $post = ",";
foreach $parameter (@{$args{'parameterlist'}}) {
- print ".I \"".$args{'parametertypes'}{$parameter}."\"\n.B \"".$parameter."\"\n";
- if ($count != $#{$args{'parameterlist'}}) {
- $count++;
- print ",\n";
+ if ($count == $#{$args{'parameterlist'}}) {
+ $post = ");";
}
+ $type = $args{'parametertypes'}{$parameter};
+ if ($type =~ m/([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)/) {
+ # pointer-to-function
+ print ".BI \"".$parenth.$1."\" ".$parameter." \") (".$2.")".$post."\"\n";
+ } else {
+ $type =~ s/([^\*])$/\1 /;
+ print ".BI \"".$parenth.$type."\" ".$parameter." \"".$post."\"\n";
+ }
+ $count++;
+ $parenth = "";
}
- print ")\n";
print ".SH Arguments\n";
foreach $parameter (@{$args{'parameterlist'}}) {
- print ".IP \"".$args{'parametertypes'}{$parameter}." ".$parameter."\" 12\n";
+ print ".IP \"".$parameter."\" 12\n";
output_highlight($args{'parameters'}{$parameter});
}
foreach $section (@{$args{'sectionlist'}}) {
my ($parameter, $section);
my $count;
- print ".TH \"$args{'module'}\" \"$args{'module'}\" \"25 May 1998\" \"API Manual\" LINUX\n";
+ print ".TH \"$args{'module'}\" 4 \"$args{'module'}\" \"25 May 1998\" \"API Manual\" LINUX\n";
foreach $section (@{$args{'sectionlist'}}) {
print ".SH \"$section\"\n";
my %args = %{$_[0]};
my ($parameter, $section);
- print "Function = ".$args{'function'}."\n";
- print " return type: ".$args{'functiontype'}."\n\n";
+ print "Function:\n\n";
+ $start=$args{'functiontype'}." ".$args{'function'}." (";
+ print $start;
+ $count = 0;
+ foreach $parameter (@{$args{'parameterlist'}}) {
+	$type = $args{'parametertypes'}{$parameter};
+	if ($type =~ m/([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)/) {
+ # pointer-to-function
+ print $1.$parameter.") (".$2;
+ } else {
+ print $type." ".$parameter;
+ }
+ if ($count != $#{$args{'parameterlist'}}) {
+ $count++;
+ print ",\n";
+ print " " x length($start);
+ } else {
+ print ");\n\n";
+ }
+ }
+
+ print "Arguments:\n\n";
foreach $parameter (@{$args{'parameterlist'}}) {
- print " ".$args{'parametertypes'}{$parameter}." ".$parameter."\n";
- print " -> ".$args{'parameters'}{$parameter}."\n";
+ print $parameter."\n\t".$args{'parameters'}{$parameter}."\n";
}
foreach $section (@{$args{'sectionlist'}}) {
- print " $section:\n";
- print " -> ";
+ print "$section:\n\n";
output_highlight($args{'sections'}{$section});
}
+ print "\n\n";
}
sub output_intro_text {
$prototype =~ s/^inline+ //;
$prototype =~ s/^__inline__+ //;
- if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\)]*)\)/ ||
- $prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\)]*)\)/ ||
- $prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\)]*)\)/ ||
- $prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\)]*)\)/ ||
- $prototype =~ m/^(\w+\s+\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\)]*)\)/) {
+ if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)/ ||
+ $prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)/ ||
+ $prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)/ ||
+ $prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)/ ||
+ $prototype =~ m/^(\w+\s+\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)/) {
$return_type = $1;
$function_name = $2;
$args = $3;
+	# allow for up to four args to function pointers
+ $args =~ s/(\([^\),]+),/\1#/;
+ $args =~ s/(\([^\),]+),/\1#/;
+ $args =~ s/(\([^\),]+),/\1#/;
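The three substitutions above mask commas inside function-pointer parameter lists with `#` so that the subsequent split on `,` keeps each such argument in one piece; the commas are restored once the argument has been isolated. A Python sketch of the whole scan, with an illustrative prototype (this is a transcription for clarity, not the script itself):

```python
import re

def split_args(args):
    """Sketch of kernel-doc's argument scan: mask commas inside
    function-pointer parameter lists, split on the remaining commas,
    then recover each parameter's type and name."""
    for _ in range(3):  # masks up to three commas, i.e. four fn-pointer args
        args = re.sub(r'(\([^\),]+),', r'\1#', args, count=1)
    result = []
    for arg in (a.strip() for a in args.split(',')):
        if '(' in arg:
            # pointer-to-function: restore commas, pull the name from (*name)
            arg = arg.replace('#', ',')
            name = re.search(r'[^\(]+\(\*([^\)]+)\)', arg).group(1)
            ptype = arg.replace('(*%s)' % name, '(*)', 1)
        else:
            words = arg.split()
            name = words.pop()
            leading = re.match(r'(\*+)(.*)', name)
            if leading:  # move a leading '*' from the name into the type
                name = leading.group(2)
                words.append(leading.group(1))
            ptype = ' '.join(words)
        result.append((ptype, name))
    return result

print(split_args("struct inode *ip, void (*done)(int, void *), int flags"))
```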
# print STDERR "ARGS = '$args'\n";
foreach $arg (split ',', $args) {
# strip leading/trailing spaces
$arg =~ s/^\s*//;
$arg =~ s/\s*$//;
-# print STDERR "SCAN ARG: '$arg'\n";
- @args = split('\s', $arg);
-
-# print STDERR " -> @args\n";
- $param = pop @args;
-# print STDERR " -> @args\n";
- if ($param =~ m/^(\*+)(.*)/) {
- $param = $2;
- push @args, $1;
+
+ if ($arg =~ m/\(/) {
+ # pointer-to-function
+ $arg =~ tr/#/,/;
+ $arg =~ m/[^\(]+\(\*([^\)]+)\)/;
+ $param = $1;
+ $type = $arg;
+ $type =~ s/([^\(]+\(\*)$param/\1/;
+ } else {
+# print STDERR "SCAN ARG: '$arg'\n";
+ @args = split('\s', $arg);
+
+# print STDERR " -> @args\n";
+ $param = pop @args;
+# print STDERR " -> @args\n";
+ if ($param =~ m/^(\*+)(.*)/) {
+ $param = $2;
+ push @args, $1;
+ }
+ $type = join " ", @args;
}
- $type = join " ", @args;
if ($type eq "" && $param eq "...")
{